Comments (29)
@huningxin, there are plans to remove GPU extensions from the distribution as well (there were PriorBoxClustered and something else) - they will be a part of the plugin.
But support for the extensions mechanism is a critical feature for some of the customers, thank you!
from inference-engine-node.
For GPU you may try the .cl files from OpenVINO (but they can be merged in a future release). Or just create your own CPU extensions library and test it (check the extgen.py script that generates a CPU extensions template).
@ysolovyov, you don't need to load extensions manually after the 2019.3 release if you have not compiled them. So now AddExtension is used only for user extensions.
@dkurt , thanks for your sharing! It is very helpful.
@huningxin Hi, Ningxin, the APIs have been implemented. But I have not tested the core.setConfig() API with a neural network that has a special layer in cldnn.
@dkurt, do you know any test cases that @lionkunonly can use to test CPU and GPU extensions? Thanks!
For GPU you may try the .cl files from OpenVINO (but they can be merged in a future release). Or just create your own CPU extensions library and test it (check the extgen.py script that generates a CPU extensions template).
Thanks for your suggestions, I will try.
Hi, @dkurt. I used the method in this link https://docs.openvinotoolkit.org/latest/extension_build.html to create a simple template extension named libtemplate_extension.so. And I used this file and the file grn.cl to test the APIs AddExtension and SetConfig.
AddExtension can load libtemplate_extension.so without reporting an error, and SetConfig can load grn.cl without errors.
But I also tried to use AddExtension to load the GPU extension grn.cl, and SetConfig to load libtemplate_extension.so. In my expectation, both of them should report an error, because they load extensions they should not accept. But in fact they can load these files without errors, too.
I am confused. Do they report errors in native OpenVINO in this situation? If the API runs without errors, does it mean that the extension loaded successfully? Could you give me some suggestions?
@lionkunonly, Hi! As far as I remember, AddExtension just appends the libraries to some internal list. There could be multiple libraries as well, and I'm not sure that there is an error path for failed extension loading. They are probably just skipped, and the plugin tries to find the layer implementation in the next added extensions. If there are no extensions, or all the libraries failed to load, the plugin should throw an unknown layer exception.
@ilya-lavrenov, @ilyachur, can you add something about AddExtension behavior?
@lionkunonly Hi,
@dkurt is right. AddExtension appends the library to some list of extensions and also tries to load extensions into registered plugins which support such extensions.
Did you try to use SetConfig in order to load the extension to the CPU plugin? The CPU plugin doesn't throw an error because it ignores this parameter and doesn't parse it.
Did you try to use SetConfig in order to load the extension to the CPU plugin? The CPU plugin doesn't throw an error because it ignores this parameter and doesn't parse it.
Hi @ilyachur.
Yes, I tried to use SetConfig to load the extension to the CPU plugin. But in my experiment, I fixed the device parameter as GPU, because I wanted to see if SetConfig would report any error when it tried to load the CPU extension on GPU. And I found there was no error.
@lionkunonly Ok, thank you for the clarification.
@vladimir-paramuzov Can you take a look at this issue?
Yes, I tried to use SetConfig to load the extension to the CPU plugin. But in my experiment, I fixed the device parameter as GPU, because I wanted to see if SetConfig would report any error when it tried to load the CPU extension on GPU. And I found there was no error.
@lionkunonly Do I get it correctly that you tried something like:
std::string e = "libcpu_extensions.so";
ie.SetConfig({{ CONFIG_KEY(CONFIG_FILE), e }}, "GPU");
and didn't get any errors? I checked this code and it throws an exception like:
Error loading custom layer configuration file: libcpu_extensions.so, No document element found at offset 18360
from here: https://github.com/openvinotoolkit/openvino/blob/master/inference-engine/src/cldnn_engine/cldnn_custom_layer.cpp#L226
Applying such a config to the CPU plugin leads to a [NOT_FOUND] Unsupported property CONFIG_FILE by CPU plugin error.
So if you use SetConfig in some other manner, could you please share code snippet, so we can check it?
@vladimir-paramuzov The code snippet looks like:
InferenceEngine::Core actual_;
std::string extension_absolute_path = "./test/cldnn_global_custom_kernels/libcpu_extension.so";
try {
  actual_.SetConfig({{ie::PluginConfigParams::KEY_CONFIG_FILE, extension_absolute_path}}, "GPU");
} catch (const std::exception& error) {
  Napi::TypeError::New(env, error.what()).ThrowAsJavaScriptException();
  return;
} catch (...) {
  Napi::Error::New(env, "Unknown/internal exception happened.")
      .ThrowAsJavaScriptException();
  return;
}
The implementation is written in C++ with the N-API of Node.js. ie is an alias for the InferenceEngine namespace. I tested the extension files libcpu_extension.so and libtemplate_extension.so again, and neither of them throws an exception.
@lionkunonly OK, I see. As I understand, this happens because the ie.SetConfig() call doesn't trigger plugin creation; it just saves the config for plugins that are not loaded yet and passes it to the corresponding plugin later (on creation).
You can check it by adding, for example, an ie.GetAvailableDevices(); call before SetConfig. In this case an exception will be thrown, because GetAvailableDevices creates all the plugins, so the config will actually be validated immediately on the SetConfig call.
To my mind this is expected behavior, but cc @ilya-lavrenov to confirm it.
Right, the behavior of SetConfig, which does not create a plugin if it's not actually needed, is OK.
@lionkunonly OK, I see. As I understand, this happens because the ie.SetConfig() call doesn't trigger plugin creation; it just saves the config for plugins that are not loaded yet and passes it to the corresponding plugin later (on creation).
You can check it by adding, for example, an ie.GetAvailableDevices(); call before SetConfig. In this case an exception will be thrown, because GetAvailableDevices creates all the plugins, so the config will actually be validated immediately on the SetConfig call.
Thanks for your help, I got your idea.
@vladimir-paramuzov Hi. Are there any existing native CPU or GPU extensions to test the APIs SetConfig and AddExtension? If there are, could you please introduce them to me? Thanks
@huningxin after you build OpenVINO tests, there is a test extension library bin/intel64/Release/lib/libextension_tests.so
@ilya-lavrenov I left a comment about the error of building OpenVINO on Linux: openvinotoolkit/openvino#844. Could you give me some suggestions about it?
@ilyachur I wonder why the template extension cannot find the libraries...
Hi @lionkunonly @ilya-lavrenov
I faced this issue when I had another version of OpenVINO in my environment. For example, if your environment (LD_LIBRARY_PATH, PATH or something like that) contains a path to an old Inference Engine release which doesn't contain some libraries, it can be the root cause of this issue.
@lionkunonly Could you please check that you don't have another version of OpenVINO in your environment? Otherwise find_package can find an incorrect version of OpenVINO:
find_package(InferenceEngine REQUIRED)
doesn't contain some libraries it can be a root cause of this issue.
But all libraries are taken as-is from this config file. So, if the old find_package result contains InferenceEngine_LIBRARIES which does not hold IE::inference_engine_c_api, it should not be an issue.
@ilya-lavrenov It is just a proposal... We need to check it. If I remember right, someone had such a problem with cmake, and he resolved it when he removed the other version of OpenVINO from the environment. Maybe I am wrong and forgot the root cause of the original problem, but it is really easy to check.
I guess the problem can occur because we build the documentation together with OpenVINO, and in this case we get such target conflicts because the IE CMake registers some targets, but the found Inference Engine package doesn't contain such dependencies.
@huningxin after you build OpenVINO tests, there is a test extension library
bin/intel64/Release/lib/libextension_tests.so
Hi @ilya-lavrenov, has libextension_tests.so been renamed to libtemplate_extension.so in OpenVINO 2020.4? If the answer is no, could you please explain the role played by libtemplate_extension.so? Meanwhile, I also want to know if libextension_tests.so still exists in OpenVINO 2020.4. I need your suggestions, thanks.
@lionkunonly Could you please check that you don't have another version of OpenVINO in your environment? Otherwise find_package can find an incorrect version of OpenVINO.
Hi @ilyachur. I downloaded and installed OpenVINO 2020.1 in the path /opt/intel/<openvino_version> when I met the issue. Now I have just uninstalled the old version of OpenVINO, and I can build OpenVINO without this error successfully. Thanks for your help.
But libextension_tests.so does not exist in the path bin/intel64/Release/lib/libextension_tests.so. Was this file removed in OpenVINO 2020.4?
@lionkunonly, there has been no extensions library since 2019R4 if I'm not mistaken. But there is an option in all the demos and apps to specify any custom CPU library (flag -l).
@lionkunonly libextension_tests.so is used for tests only and is not a part of the OpenVINO package. libtemplate_extension.so is used for documentation purposes, e.g. it's marked with doxygen anchors, and https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_Extensibility_DG_Intro.html is written based on this template extension. The template extension can be used as a starting point to create your own extensions.
How do I load default CPU extensions now? Previously I added them like
Core ie;
ie.AddExtension(std::make_shared<Extensions::Cpu::CpuExtensions>(), "CPU");
auto exeNetwork = ie.LoadNetwork(network_, "CPU");
inferRequest_ = exeNetwork.CreateInferRequest();
in 2019.2.242.
Would this be enough in 2020.4.287? Because I've read they were integrated into libMKLDNNPlugin:
InferenceEngine::Core ie;
auto exeNetwork = ie.LoadNetwork(network_, "CPU");
inferRequest_ = exeNetwork.CreateInferRequest();
I don't get this from the documentation. Also, in all examples I see something like
IExtensionPtr extension_ptr = make_so_pointer<IExtension>(FLAGS_l);
ie.AddExtension(extension_ptr, "CPU");
in "pretty understandable" samples where FLAGS_l is not even defined in any of the files. What the hell is that FLAGS_l? It seems that this is some path to a .so or .dylib file.
If I do something like
Core ie;
IExtensionPtr extension_ptr =
    make_so_pointer<IExtension>("./lib/libMKLDNNPlugin.dylib");
ie.AddExtension(extension_ptr, "CPU");
....
auto exeNetwork = ie.LoadNetwork(network_, "CPU");
inferRequest_ = exeNetwork.CreateInferRequest();
I'm getting the error
due to unexpected exception with message:
dlSym cannot locate method 'CreateExtension': dlsym(0x118f9a968,
CreateExtension): symbol not found