openvinotoolkit / openvino

OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference

Home Page: https://docs.openvino.ai

License: Apache License 2.0

CMake 0.91% Python 16.31% C++ 79.71% C 2.85% Shell 0.05% Batchfile 0.01% HTML 0.06% JavaScript 0.08% CSS 0.02% PureBasic 0.01% PowerShell 0.01% TypeScript 0.01%
inference deep-learning openvino ai computer-vision diffusion-models generative-ai llm-inference natural-language-processing nlp

openvino's Introduction



What is OpenVINO toolkit?

OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference.

  • Boost deep learning performance in computer vision, automatic speech recognition, natural language processing and other common tasks
  • Use models trained with popular frameworks like TensorFlow, PyTorch and more
  • Reduce resource demands and efficiently deploy on a range of Intel® platforms from edge to cloud

This open-source version includes several components, namely the OpenVINO Model Converter (OVC) and the OpenVINO™ Runtime, as well as CPU, GPU, NPU, multi-device and heterogeneous plugins to accelerate deep learning inference on Intel® CPUs and Intel® Processor Graphics. It supports pre-trained models from Open Model Zoo, along with 100+ open-source and public models in popular formats such as TensorFlow, ONNX, PaddlePaddle, MXNet, Caffe, and Kaldi.
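For orientation, here is a minimal, hedged sketch of running inference with the openvino Python package; the model path, device name, and input shape below are placeholders, and the exact API surface depends on the release you install:

import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")             # IR produced by OVC, or e.g. an ONNX file
compiled = core.compile_model(model, "CPU")      # "GPU", "NPU", "AUTO", ... also work if available

dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)   # placeholder input shape
results = compiled(dummy_input)                  # dict-like result, keyed by model outputs
print(next(iter(results.values())).shape)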

Components

  • OpenVINO™ Runtime - a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions on the platform of your choice.
    • core - provides the base API for model representation and modification.
    • inference - provides an API to infer models on the device.
    • transformations - contains the set of common transformations which are used in OpenVINO plugins.
    • low precision transformations - contains the set of transformations that are used in low precision models.
    • bindings - contains all available OpenVINO bindings which are maintained by the OpenVINO team.
      • c - C API for OpenVINO™ Runtime
      • python - Python API for OpenVINO™ Runtime
  • Plugins - contains OpenVINO plugins which are maintained in open-source by the OpenVINO team. For more information, take a look at the list of supported devices.
  • Frontends - contains available OpenVINO frontends that allow reading models from the native framework format.
  • OpenVINO Model Converter (OVC) - a cross-platform command-line tool that facilitates the transition between training and deployment environments and adjusts deep learning models for optimal execution on end-point target devices (see the usage sketch after this list).
  • Samples - applications in C, C++ and Python languages that show basic OpenVINO use cases.
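As an illustration of how OVC and the Runtime fit together, here is a hedged conversion example; the file names are placeholders, and ovc / ov.convert_model are assumed to come from the openvino PyPI package:

ovc model.onnx --output_model model_ir/model.xml

or, equivalently, from Python:

import openvino as ov

ov_model = ov.convert_model("model.onnx")        # in-memory ov.Model
ov.save_model(ov_model, "model_ir/model.xml")    # writes model.xml and model.bin

The resulting IR can then be loaded with core.read_model, as in the sketch above.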

Supported Hardware matrix

The OpenVINO™ Runtime can infer models on different hardware devices. This section provides the list of supported devices.

Device | Plugin | Library | Short Description
CPU | Intel CPU | openvino_intel_cpu_plugin | Intel Xeon with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector Extensions 512 (Intel® AVX-512), and AVX512_BF16, Intel Core Processors with Intel AVX2, Intel Atom Processors with Intel® Streaming SIMD Extensions (Intel® SSE), Intel® Advanced Matrix Extensions (Intel® AMX)
CPU | ARM CPU | openvino_arm_cpu_plugin | ARM CPUs with armv7a and higher, ARM64 CPUs with arm64-v8a and higher, Apple® Mac with Apple silicon
GPU | Intel GPU | openvino_intel_gpu_plugin | Intel Processor Graphics, including Intel HD Graphics and Intel Iris Graphics
NPU | Intel NPU | openvino_intel_npu_plugin | Intel® Core™ Ultra Processors

OpenVINO™ Toolkit also contains several plugins which simplify loading models onto multiple hardware devices:

Plugin | Library | Short Description
Auto | openvino_auto_plugin | Auto plugin enables selecting an Intel device for inference automatically
Auto Batch | openvino_auto_batch_plugin | Auto Batch plugin performs on-the-fly automatic batching (i.e., grouping inference requests together) to improve device utilization, with no programming effort from the user
Hetero | openvino_hetero_plugin | Heterogeneous execution enables automatic inference splitting between several devices
Multi | openvino_auto_plugin | Multi plugin enables simultaneous inference of the same model on several devices in parallel
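For illustration, a hedged sketch of how these plugins are selected through the device string passed to compile_model; the model path is a placeholder, and the device names must match what ov.Core().available_devices reports on your machine:

import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")

compiled_auto   = core.compile_model(model, "AUTO")            # Auto plugin picks a device
compiled_batch  = core.compile_model(model, "BATCH:GPU")       # Auto Batch on top of the GPU plugin
compiled_hetero = core.compile_model(model, "HETERO:GPU,CPU")  # split the graph between GPU and CPU
compiled_multi  = core.compile_model(model, "MULTI:CPU,GPU")   # run the same model on both in parallel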

License

OpenVINO™ Toolkit is licensed under Apache License Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.

Telemetry

OpenVINO™ collects software performance and usage data for the purpose of improving OpenVINO™ tools. This data is collected directly by OpenVINO™ or through the use of Google Analytics 4. You can opt out at any time by running the command:

opt_in_out --opt_out

More Information is available at https://docs.openvino.ai/latest/openvino_docs_telemetry_information.html.

Documentation

User documentation

The latest documentation for OpenVINO™ Toolkit is available here. It contains detailed information about all OpenVINO components and provides all the important information you may need to create an application based on the binary OpenVINO distribution or your own OpenVINO version built without source code modification.

Developer documentation

Developer documentation contains information about the architectural decisions applied inside the OpenVINO components. It has all the information you may need in order to contribute to OpenVINO.

Tutorials

The list of OpenVINO tutorials:

Products which use OpenVINO

You can also check out Awesome OpenVINO to see all the community-made projects using OpenVINO!

System requirements

The system requirements vary depending on platform and are available on dedicated pages:

How to build

See How to build OpenVINO to get more information about the OpenVINO build process.

How to contribute

See Contributions Welcome for good first issues.

See CONTRIBUTING for contribution details. Thank you!

Visit the Intel DevHub Discord server if you need help or wish to talk to OpenVINO developers. You can go to the channel dedicated to Good First Issue support if you are working on a task.

Take the issue

If you wish to be assigned to an issue, please add a comment with the .take command.

Get support

Report questions, issues and suggestions, using:

Additional Resources


* Other names and brands may be claimed as the property of others.

openvino's People

Contributors

a-sidorova, akuporos, egorduplensky, eshoguli, iefode, ilya-lavrenov, ilyachur, itikhono, jiwaszki, kblaszczak-intel, lyamin-roman, mateusztabaka, mitruska, mryzhov, msmykx-intel, mvafin, nosovmik, olpipi, pavel-esir, popovaan, praasz, riverlijunjie, rkazants, sgolebiewski-intel, sshlyapn, t-jankowski, v-golubev, vladimir-paramuzov, vurusovs, yeonbok


openvino's Issues

How can I use batching to accelerate inference?

Hello!
I infer one image with a resolution of 1280*720 using OpenVINO, and the inference time is 129 ms.
I tried cropping the image into four patches with a resolution of 640*360 and inferring those four as a batch, and the inference time is still about 129 ms.
Why has inference not been accelerated by batching? How can I use batching, or some other approach, to accelerate it?

Thanks,
jly

Why does it take more time when I use two models at the same time?

I created an InferRequest using model A, and the inference time is 60 ms; I also created another InferRequest using model B, and the inference time is 43 ms.
However, when I create two InferRequests using model A (called infer_a) and model B (called infer_b) at the same time and infer the same image sequentially with infer_a and infer_b, it takes 105 ms and 264 ms respectively.
Why does it take more time when I use two models?

Install issues

When I run "cmake -DCMAKE_BUILD_TYPE=Release ..", I get:

-- Found OpenMP_C: -fopenmp
-- Found OpenMP_CXX: -fopenmp
-- Found OpenMP: TRUE
-- [clDNN] Selected capabilities: public
-- Looking for C++ include unistd.h
-- Looking for C++ include unistd.h - found
-- Looking for C++ include inttypes.h
-- Looking for C++ include inttypes.h - found
-- Looking for C++ include sys/stat.h
-- Looking for C++ include sys/stat.h - found
-- Looking for C++ include fnmatch.h
-- Looking for C++ include fnmatch.h - found
-- Looking for strtoll
-- Looking for strtoll - found
-- Found OpenCV: /home/why/openvino/dldt-2018/inference-engine/temp/opencv_4.0.0_ubuntu (found version "4.0.0")
-- Validation app build is switched off
-- Configuring incomplete, errors occurred!
See also "/home/why/openvino/dldt-2018/inference-engine/build/CMakeFiles/CMakeOutput.log".
See also "/home/why/openvino/dldt-2018/inference-engine/build/CMakeFiles/CMakeError.log".

How to solve this?

Inference Engine crash when using Activation layer with type="sigmoid"

Using the latest 2018R4 release of Intel's OpenVINO, a very simple network with a Convolution layer and an Activation layer crashes at run time with the following error:

Illegal instruction (core dumped)

But the network is ok if the activation type is changed from sigmoid to tanh.
The net is defined as follows in the IR xml:

<layers>
	<layer id="0" name="data" precision="FP32" type="Input">
		<output>
			<port id="0">
				<dim>16</dim>
				<dim>32</dim>
				<dim>8</dim>
				<dim>8</dim>
			</port>
		</output>
	</layer>
	<layer id="1" name="conv_fwd" precision="FP32" type="Convolution">
		<data dilations="1,1" group="1" kernel="3,3" output="32" pads_begin="1,1" pads_end="1,1" strides="1,1"/>
		<input>
			<port id="0">
				<dim>16</dim>
				<dim>32</dim>
				<dim>8</dim>
				<dim>8</dim>
			</port>
		</input>
		<output>
			<port id="2">
				<dim>16</dim>
				<dim>32</dim>
				<dim>8</dim>
				<dim>8</dim>
			</port>
		</output>
		<blobs>
			<weights offset="0" size="36864"/>
		</blobs>
	</layer>
	<layer id="2" name="act_fwd" precision="FP32" type="Activation">
		<data type="sigmoid"/>
		<input>
			<port id="0">
				<dim>16</dim>
				<dim>32</dim>
				<dim>8</dim>
				<dim>8</dim>
			</port>
		</input>
		<output>
			<port id="1">
				<dim>16</dim>
				<dim>32</dim>
				<dim>8</dim>
				<dim>8</dim>
			</port>
		</output>
	</layer>
</layers>
<edges>
	<edge from-layer="0" from-port="0" to-layer="1" to-port="0"/>
	<edge from-layer="1" from-port="2" to-layer="2" to-port="0"/>
</edges>

split op questions for model optimizer

Hi, I ran across a problem when I used MO to convert a caffemodel: it shows 'LayerParameter' object has no attribute 'num_split' for a split layer. However, as far as I am concerned, the split layer in Caffe is just a copy operation. So I think this line, out_shape[node.axis] = np.int64(input_node.shape[node.axis] / node.pb.num_split), in mo/ops/split.py should instead be:
out_shape[node.axis] = np.int64(input_node.shape[node.axis])
You may have mistaken the split layer for the slice layer, I guess.

Supported primitive descriptors list is empty for node when using sample model

Hello,
I met a problem when I ran the code below using the Python API:

plugin = IEPlugin(device=DEVICE, plugin_dirs=PLUGIN_DIR)
if CPU_EXTENSION and 'CPU' in DEVICE:
    plugin.add_cpu_extension(CPU_EXTENSION)
net = IENetwork.from_ir(model=model_xml, weights=model_bin)
inputs = net.inputs
outputs = net.outputs
exec_net = plugin.load(network=net)

The error is:
Supported primitive descriptors list is empty for node: bottleneck1_1/dim_red/conv

The model is person-detection-retail-0013 which is one sample SSD model from installation package.

The problem comes from the last line exec_net = plugin.load(network=net) , but I have no idea how to solve it.

Thank you for your help.

Best,
dcgao

Movidius support?

Hi, in the repo I don't see code for Movidius. Is the Movidius device supported?

DLIA source code

Thanks for publishing the source code of OpenVINO! While the repository contains the sources for the CPU inference plugin libraries, I can't find anything related to FPGA. Does Intel have a timeline on publishing the RTL for the FPGA accelerators or the sources of the DLIA inference plugin?

Model Optimizer does not support array dimension value of -1

Received this error, when running model_optimizer on a frozen Tensorflow graph.
[ ERROR ] Shape [ -1 224 224 3] is not fully defined for output 0 of "input". Use --input_shape with positive integers to override model input shapes.

From the numpy.reshape() documentation (https://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html):
"One shape dimension can be -1. In this case, the value is inferred from the length of the array and remaining dimensions.

The error appears because the Model Optimizer requires all dimensions to be positive. And yes, if you use --input_shape [1,3,224,224] as a command-line parameter, it works perfectly fine.

The issue here is that it is common practice in the entire "ML in Python" field to have a batch size of -1, as you don't want to constrain the model to a fixed batch size but rather have it inferred at run time.
And almost every TensorFlow model uses -1 as the batch size. So this is an error that will pop up every time someone downloads a model from the internet and tries to optimize it.

Are there any plans to add support for this type of "run-time inferred dimension"?

Converting IR to Caffe

Hi there,
Thanks for your job!
I wonder if there is a way to convert IR files, i.e., the .bin and .xml files, back to the original model, like .caffemodel and .prototxt?
Or could you please tell me where to find person-detection-retail-0013.caffemodel and person-detection-retail-0013.prototxt and the other original files from before conversion to IR?
Thanks again!

Problem with converting the Reshape operator of MXNet

I currently have a problem with converting the Reshape operator of MXNet.
The Reshape operator is defined as follows in my model-symbol.json file:

{
  "op": "Reshape", 
  "name": "backbone_stage2_blk1_reshape0", 
  "attrs": {"shape": "(0, -4, 2, -1, -2)"}, 
  "inputs": [[53, 0, 0]]
}, 

The following error message is generated by the Model Optimizer:

[ ERROR ]  Number of elements in input [1, 2, -4, 2352, -2] and output [1, -3, -2] of reshape node backbone_stage2_blk1_reshape1 mismatch
[ ERROR ]  Shape is not defined for output 0 of "backbone_stage2_blk1_reshape1".
[ ERROR ]  Cannot infer shapes or values for node "backbone_stage2_blk1_reshape1".

Apparently the MO fails to calculate the output size for the Reshape operator. MO relies on mo.front.common.partial_infer.reshape.tf_reshape_shape_infer to calculate the output size for the Reshape operator, but this function is only able to process shape vectors with values in {-1, 0, >0}. However, MXNet applies more complex rules for Reshape, using values in {-4, -3, -2, -1, 0, >0}:

  • 0 copy this dimension from the input to the output shape.
  • -1 infers the dimension of the output shape by using the remainder of the input dimensions keeping the size of the new array same as that of the input array. At most one dimension of shape can be -1.
  • -2 copy all/remainder of the input dimensions to the output shape.
  • -3 use the product of two consecutive dimensions of the input shape as the output dimension.
  • -4 split one dimension of the input into two dimensions passed subsequent to -4 in shape (can contain -1).

For the case above, {"shape": "(0, -4, 2, -1, -2)"}, the original dims[1] is split into 2 dimensions [2, -1], where the -1 is to be calculated as dims[1]/2. This case is used in ShuffleNet, where the channel dimension is reshaped to 2-D and then transposed to implement a channel shuffle operation.
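To make this concrete, here is a small worked example (plain Python, not Model Optimizer code) that applies this particular spec, (0, -4, 2, -1, -2), to an NCHW shape according to the rules listed above; the input shape is made up for illustration:

def mxnet_reshape_0_m4_2_m1_m2(in_shape):
    # 0 -> copy N; -4 followed by (2, -1) -> split C into (2, C // 2); -2 -> copy H and W
    n, c, h, w = in_shape
    return (n, 2, c // 2, h, w)

print(mxnet_reshape_0_m4_2_m1_m2((1, 24, 16, 16)))  # (1, 2, 12, 16, 16)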

Hopefully the Reshape operator of MXNet would be fully supported in the future.

install issues

The make install target does not appear to install the .cmake files; it would also be ideal to have pkg-config files.

Problem converting tensorflow .pb with mo_tf.py

When trying to run model optimizer on a frozen .pb exported from keras/tensorflow I get the following error from mo:

[ ERROR ]  -------------------------------------------------
[ ERROR ]  ----------------- INTERNAL ERROR ----------------
[ ERROR ]  Unexpected exception happened.
[ ERROR ]  Please contact Model Optimizer developers and forward the following information:
[ ERROR ]  __new__() got an unexpected keyword argument 'serialized_options'
[ ERROR ]  Traceback (most recent call last):
  File "/opt/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/mo/main.py", line 325, in main
    return driver(argv)
  File "/opt/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/mo/main.py", line 173, in driver
    ret_code = check_requirements(framework=argv.framework)
  File "/opt/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/mo/utils/versions_checker.py", line 136, in check_requirements
    exec("import {}".format(modules[name] if name in modules else name))
  File "<string>", line 1, in <module>
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/__init__.py", line 24, in <module>
    from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/__init__.py", line 59, in <module>
    from tensorflow.core.framework.graph_pb2 import *
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/core/framework/graph_pb2.py", line 15, in <module>
    from tensorflow.core.framework import node_def_pb2 as tensorflow_dot_core_dot_framework_dot_node__def__pb2
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/core/framework/node_def_pb2.py", line 15, in <module>
    from tensorflow.core.framework import attr_value_pb2 as tensorflow_dot_core_dot_framework_dot_attr__value__pb2
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/core/framework/attr_value_pb2.py", line 15, in <module>
    from tensorflow.core.framework import tensor_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__pb2
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/core/framework/tensor_pb2.py", line 15, in <module>
    from tensorflow.core.framework import resource_handle_pb2 as tensorflow_dot_core_dot_framework_dot_resource__handle__pb2
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/core/framework/resource_handle_pb2.py", line 22, in <module>
    serialized_pb=_b('\n/tensorflow/core/framework/resource_handle.proto\x12\ntensorflow\"r\n\x13ResourceHandleProto\x12\x0e\n\x06\x64\x65vice\x18\x01 \x01(\t\x12\x11\n\tcontainer\x18\x02 \x01(\t\x12\x0c\n\x04name\x18\x03 \x01(\t\x12\x11\n\thash_code\x18\x04 \x01(\x04\x12\x17\n\x0fmaybe_type_name\x18\x05 \x01(\tBn\n\x18org.tensorflow.frameworkB\x0eResourceHandleP\x01Z=github.com/tensorflow/tensorflow/tensorflow/go/core/framework\xf8\x01\x01\x62\x06proto3')
TypeError: __new__() got an unexpected keyword argument 'serialized_options'

[ ERROR ]  ---------------- END OF BUG REPORT --------------
[ ERROR ]  -------------------------------------------------

Thanks in advance!
  // K-A

An error occurred while using GPU

environment: ubuntu 16.04
command:
ros2 run dynamic_vino_sample dynamic_vino_sample -m /home/intel/code/open_model_zoo/model_downloader/Transportation/object_detection/face/pruned_mobilenet_reduced_ssd_shared_weights/dldt/face-detection-adas-0001.xml -m_hp /home/intel/code/open_model_zoo/model_downloader/Transportation/object_attributes/headpose/vanilla_cnn/dldt/head-pose-estimation-adas-0001.xml -m_em /home/intel/code/open_model_zoo/model_downloader/Retail/object_attributes/emotions_recognition/0003/dldt/emotions-recognition-retail-0003.xml -m_ag /home/intel/code/open_model_zoo/model_downloader/Retail/object_attributes/age_gender/dldt/age-gender-recognition-retail-0013.xml -i StandardCamera -d GPU -d_hp CPU -d_em CPU -d_ag CPU

error:
[ ERROR ] failed to create engine: clGetPlatformIDs error -1001

The same situation occurred when using the upstream OpenVINO tarball; upstream OpenVINO provides an OpenCL dependency file to solve this problem, but I did not find a solution here.

BTW, I have downloaded intel-opencl_18.28.11080_amd64.deb and installed it.

About setvars.sh setup

If the user doesn't install OpenVINO, how do they set up the environment variables?

(In the <INSTALL_DIR>/deployment_tools/inference_engine/bin/intel64/Release folder):
source ../../setvars.sh

Regards

Model Optimizer errors when trying to optimize official Tensorflow implementation of Resnet model

TF Model repo used: https://github.com/tensorflow/models/tree/master/official/resnet
Observation: The implementation uses the Estimator framework. A saved_model of the inference graph is exported.

Trying to optimize a saved model:
Model Optimizer command used:
python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py --saved_model_dir <model_dir>

Error:
Model Optimizer version: 1.2.185.5335e231
[ ERROR ] Cannot infer shapes or values for node "images".
[ ERROR ] 'bytes' object has no attribute 'shape'
[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <function tf_native_tf_node_infer at 0x7fddbbbba488>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ] Stopped shape/value propagation at "images" node.
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.

Cause (in TF Python model implementation):
...
#Generate a summary node for the images
tf.summary.image('images', features, max_outputs=6)
...

FIX: After removing the above tf.summary.image statement, the same command was used and the following asserts were triggered:

1st one: /opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/ops/op.py:173

assert all(old_shape is None for old_shape in old_data_shape) or all([np.array_equal(old_data_shape[id], data_node.shape) for id, data_node in enumerate(data_nodes)])
After commenting it out, the 2nd assert followed:

2nd one: /opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/middle/passes/fusing/fuse_linear_seq.py:81

assert (np.array_equal(fnodes[0].in_node(get_tensor_id(fnodes[0])).shape, fnodes[-1].out_node().shape))

After commenting out that assert as well, the process finished with
[SUCCESS] Generated IR model
and the generated .bin is NOT empty, as has happened with some other errors.
But the inference engine fails while loading the network (Error: Segmentation Fault).

Attribute "factor" missing in Resample layer of IR converted from a MXNet model with UpSampling op

Following is the UpSampling op in MXNet's json file:

{
  "op": "UpSampling", 
  "name": "fpn3_upsample_", 
  "attrs": {
    "num_args": "1", 
    "sample_type": "nearest", 
    "scale": "2"
  }, 
  "inputs": [[214, 0, 0]]
}, 

Following is the Resample layer in the xml file of the IR:

	<layer id="53" name="fpn3_upsample_" precision="FP32" type="Resample">
		<data antialias="0" scale="2" type="caffe.ResampleParameter.NEAREST"/>
		<input>
			<port id="0">
				<dim>1</dim>
				<dim>96</dim>
				<dim>16</dim>
				<dim>16</dim>
			</port>
		</input>
		<output>
			<port id="1">
				<dim>1</dim>
				<dim>96</dim>
				<dim>32</dim>
				<dim>32</dim>
			</port>
		</output>
	</layer>

It turns out that the inference engine relies on the "factor" attribute to infer the output shape when CNNNetwork::reshape is called. So an exception is raised when trying to reshape the input on the inference engine side.

In order to fix this problem, function mo.front.mxnet.extractors.up_sampling.up_sampling_ext is modified as follows:

def up_sampling_ext(attrs):
    node_attrs = {
        'type': 'Resample',
        'scale': attrs.int("scale", 1),
        'factor': attrs.int("scale", 1),
        'sample_type': 'caffe.ResampleParameter.NEAREST',
        'antialias': 0,
        'infer': up_sampling_infer
    }
    return node_attrs

Unexpected exception happened. [ ERROR ] Please contact Model Optimizer developers and forward the following information: [ ERROR ] 'preprocessed_image_height'

python3 mo.py --input_model=/fine_tuned_model1/frozen_inference_graph.pb --output=detection_boxes,detection_scores,num_detections --tensorflow_use_custom_operations_config extensions/front/tf/legacy_faster_rcnn_support.json --tensorflow_object_detection_api_pipeline_config=/fine_tuned_model1/pipeline.config --input_shape=[1,256,256,3]

I am getting the error below when running the above command.

WARNING: the "SecondStagePostprocessorReplacement" is a legacy replacer that will be removed in the future release. Please, consider using replacers defined in the "extensions/front/tf/ObjectDetectionAPI.py"
[ ERROR ] -------------------------------------------------
[ ERROR ] ----------------- INTERNAL ERROR ----------------
[ ERROR ] Unexpected exception happened.
[ ERROR ] Please contact Model Optimizer developers and forward the following information:
[ ERROR ] 'preprocessed_image_height'
[ ERROR ] Traceback (most recent call last):
File "/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/main.py", line 321, in main
return driver(argv)
File "/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/main.py", line 263, in driver
mean_scale_values=mean_scale)
File "/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/pipeline/tf.py", line 171, in tf2nx
class_registration.apply_replacements(graph, class_registration.ClassType.FRONT_REPLACER)
File "/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 102, in apply_replacements
replacer.find_and_replace_pattern(graph)
File "/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/front/tf/replacement.py", line 91, in find_and_replace_pattern
self.replace_sub_graph(graph, match)
File "/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/front/common/replacement.py", line 115, in replace_sub_graph
new_sub_graph = self.generate_sub_graph(graph, match)
File "/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/extensions/front/tf/FasterRCNNs.py", line 243, in generate_sub_graph
config_attrs['input_height'] = graph.graph['preprocessed_image_height']
KeyError: 'preprocessed_image_height'

[ ERROR ] ---------------- END OF BUG REPORT --------------
[ ERROR ] -------------------------------------------------

FusedBatchNorm doesn't support is_training=True

Hi, I ran into a problem when converting a meta file to IR using MO; it shows "mo.utils.error.Error: FusedBatchNorm doesn't support is_training=True". How can I solve this problem besides offloading this operation?

Wrong rounding-type for Pooling layers after converting a MXNet model to IR

In the original MXNet model, the pooling layer is defined with ceil_mode=False (resulting in pooling_convention=valid in the json file). But in the xml file of the converted IR, the rounding-type is "ceil". This causes the network to produce incorrect results after CNNNetwork::reshape is called (which probably re-calculates the shape of the blobs), even if the input width and height are not changed.

op definition in MXNet json file:

{
  "op": "Pooling", 
  "name": "pool_fwd", 
  "attrs": {
    "global_pool": "False", 
    "kernel": "(3, 3)", 
    "pad": "(1, 1)", 
    "pool_type": "max", 
    "pooling_convention": "valid", 
    "stride": "(2, 2)"
  }, 
  "inputs": [[4, 0, 0]]
}

Layer in xml file of the IR:

	<layer id="3" name="pool_fwd" precision="FP32" type="Pooling">
		<data exclude-pad="false" kernel-x="3" kernel-y="3" pad-b="1" pad-r="1" pad-x="1" pad-y="1" pool-method="max" rounding-type="ceil" stride="1,1,2,2" stride-x="2" stride-y="2"/>
		<input>
			<port id="0">
				<dim>1</dim>
				<dim>32</dim>
				<dim>112</dim>
				<dim>112</dim>
			</port>
		</input>
		<output>
			<port id="1">
				<dim>1</dim>
				<dim>32</dim>
				<dim>56</dim>
				<dim>56</dim>
			</port>
		</output>
	</layer>

This problem seems to be caused by the following code in mo.front.mxnet.extractors.pooling.py:

pooling_conv = attrs.str("pooling_convention", 'valid')
if pooling_conv:
    data["pooling_convention"] = pooling_conv
    data["rounding_type"] = 'ceil'

I have modified the code as follows and then the network is correct:

pooling_conv = attrs.str("pooling_convention", 'valid')
if pooling_conv:
    data["pooling_convention"] = pooling_conv
    if pooling_conv == 'valid':
        data["rounding_type"] = 'floor'
    else:
        data["rounding_type"] = 'ceil'

dldt Inference Engine samples give the error "Error on or near line 198; exiting with status 1"

./demo_squeezenet_download_convert_run.sh

Build Inference Engine samples

-- /etc/*-release distrib: Ubuntu 16.04
CMake Error at /opt/intel/computer_vision_sdk_2018.4.420/deployment_tools/inference_engine/src/extension/cmake/CPUID.cmake:324 (file):
file STRINGS file
"/opt/intel/computer_vision_sdk_2018.4.420/deployment_tools/inference_engine/samples/cpuid.txt"
cannot be read.
Call Stack (most recent call first):
/opt/intel/computer_vision_sdk_2018.4.420/deployment_tools/inference_engine/src/extension/cmake/feature_defs.cmake:30 (include)
/opt/intel/computer_vision_sdk_2018.4.420/deployment_tools/inference_engine/src/extension/CMakeLists.txt:21 (include)

-- Host CPU features:
-- Validation app build is switched off
-- Configuring incomplete, errors occurred!
See also "/opt/intel/computer_vision_sdk_2018.4.420/deployment_tools/inference_engine/samples/CMakeFiles/CMakeOutput.log".
See also "/opt/intel/computer_vision_sdk_2018.4.420/deployment_tools/inference_engine/samples/CMakeFiles/CMakeError.log".
Error on or near line 198; exiting with status 1

Plugins libs

Hello everyone,

I have downloaded and compiled this repo.
Could you please confirm that the plugins (libMKLDNNPlugin, libclDNNPlugin, ...) are not contained in this repo and that I need to download the toolkit from the Intel website to get them?

Thanks for your help :)

Power Layer uses unsupported power value

Hi, when I use inference engine to run a CPN model, it returns "Power Layer tower_0/resnet_v1_50/conv1/BatchNorm/moments/SquaredDifference/squared_uses unsupported power value". After checking the code, I find this:

if (powerLayer->power != 1.0f && powerLayer->power != 0.5f) {
        THROW_CLDNN_EXCEPTION("Power Layer " << layer->name << "uses unsupported power value");
}

in the file ./src/cldnn_engine/cldnn_graph.cpp:1954.
So I do not understand why the power value in the Power layer must be only 1 or 0.5.

Problem with converting a mxnet model to IR, using outputs from multiple layers

Hi there,

I am trying to output both the results of intermediate layers and the result of the final layer of a network. My network is trained using MXNet and then converted to IR using the following command line on Ubuntu 16.04:
python3 mo_mxnet.py --input_model simple-multi-0000.params --input_shape [1,3,128,128] --output pool1_fwd,pool2_fwd,fc_fwd

The net structure is as follows:
data -> conv1 -> relu1 -> pool1 -> conv2 -> relu2 -> pool2 -> fc

The following error happens when converting the model to IR:

[ ERROR ] Cannot infer shapes or values for node "conv2_fwd".
[ ERROR ] 0
[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <function mxnet_conv2d_infer at 0x7f60cfe771e0>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.

The model can be converted successfully when only one output is specified, whether pool1_fwd, pool2_fwd, or fc_fwd. It seems that mo_mxnet.py stops parsing the graph when it reaches the first specified output node, pool1_fwd in this case.

Besides, mo/pipeline/mx.py has a bug so that it cannot process the output names correctly. The function "driver" in mx.py calls the function "add_output_ops" from mo.front.extractor, passing the variable "outputs" as the second argument. "add_output_ops" expects a dict as the second argument, but "outputs" is a list. To fix this problem, the code is modified as follows:

_outputs = output_user_data_repack(graph, outputs)
graph, output_op_nodes = add_output_ops(graph, _outputs)

i.e. variable "outputs" is converted to a dict using function "output_user_data_repack" and then passed to "add_output_ops".
The model optimizer can work with a single output name after the modifications above, but still, output from both a intermediate layer and any of the later layers depending on it is not possible, due to the problem reported above. Hope the community can fix this problem soon.

Different performance for the same AlexNet from Caffe and ONNX

The same network (AlexNet) shows different performance numbers when executed on FPGA for Caffe and ONNX input networks: the ONNX input is almost 35% slower. Executing on CPU, the performance is the same.
It is caused by the Reshape operator, which exists in the ONNX network but not in the Caffe one. As far as I could see, this operator is not needed by the Inference Engine, as the AlexNet IR from Caffe does not have it and runs well.
My suggestion is to suppress the Reshape operator in MO, as it is not needed.

string processing bug in mo/front/tf/loader.py

Hi, when I use MO to convert a meta file, it returns "Cannot load input model: The passed save_path is not a valid checkpoint". After a deep dive, I found that when I pass "snapshot_350.ckpt.meta", this line in mo/front/tf/loader.py (restorer.restore(sess, meta_graph_file.strip(".meta"))) produces snapshot_350.ckp.
So maybe you should adjust the code to restorer.restore(sess, meta_graph_file[:-5])?
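For reference, a quick illustration of the underlying Python behaviour: str.strip() removes any of the given characters from both ends, not a suffix.

path = "snapshot_350.ckpt.meta"
print(path.strip(".meta"))        # 'snapshot_350.ckp'  -- the trailing 't' of 'ckpt' is stripped too
print(path[:-len(".meta")])       # 'snapshot_350.ckpt' -- removes exactly the suffix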

Failed to run inference engine on NCS2

Hi,
I have successfully created an .xml/.bin pair using the tensorflow model optimizer (mo_tf.py, version 1.4.292.6ef7232d) but when I try to run an inference on the Neural Compute Stick 2 I get the following error:

	API version ............ 1.4
	Build .................. 17328
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     testvga.png
[ INFO ] Loading plugin

	API version ............ 1.4
	Build .................. 17328
	Description ....... myriadPlugin
[ INFO ] Loading network files:
	SqueezeBoxDetector.xml
	SqueezeBoxDetector.bin
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] outputDims[0] = 1
[ INFO ] outputDims[1] = 5
[ INFO ] outputDims[2] = 29
[ INFO ] outputDims[3] = 39
[ INFO ] Loading model to the plugin
[ INFO ] input->getTensorDesc().getDims()[0] = 1
[ INFO ] input->getTensorDesc().getDims()[1] = 3
[ INFO ] input->getTensorDesc().getDims()[2] = 480
[ INFO ] input->getTensorDesc().getDims()[3] = 640
[ INFO ] Starting inference (1 iterations)
E: [xLink] [         0] dispatcherEventReceive:308	dispatcherEventReceive() Read failed -4 | event 0x7f4f651cceb0 USB_READ_REL_RESP

E: [xLink] [         0] eventReader:254	eventReader stopped
E: [xLink] [         0] dispatcherWaitEventComplete:694	waiting is timeout, sending reset remote event
E: [ncAPI] [         0] ncFifoReadElem:2853	Packet reading is failed.
E: [ncAPI] [         0] ncFifoDestroy:2672	Failed to write to fifo before deleting it!
[ ERROR ] Failed to read output from FIFO: NC_ERROR

Input and output dimensions are all good. The code running the inference is based on the "inference_engine/samples/classification_sample" with the only difference being that nothing at all is done with the output, for simplicity's sake.

Other apps are running on the Myriad device without a hitch, but this one just hangs for a while before exiting.

Thankful for any help!
Cheers,
  Karl-Anders

"Crop" layer failed when calling CNNNetwork::reshape

The model is converted from MXNet. The slice_axis operator in MXNet is converted to the Crop layer in the IR.

slice_axis operator in the json file of MXNet:

{
  "op": "slice_axis", 
  "name": "stage2_blk2_slice_axis0", 
  "attrs": {
    "axis": "1", 
    "begin": "0", 
    "end": "12"
  }, 
  "inputs": [[60, 0, 0]]
}, 

Crop layer in the IR:

	<layer id="16" name="stage2_blk2_slice_axis0" precision="FP32" type="Crop">
		<data axis="1" dim="12" offset="0"/>
		<input>
			<port id="0">
				<dim>1</dim>
				<dim>24</dim>
				<dim>16</dim>
				<dim>16</dim>
			</port>
		</input>
		<output>
			<port id="1">
				<dim>1</dim>
				<dim>12</dim>
				<dim>16</dim>
				<dim>16</dim>
			</port>
		</output>
	</layer>

The IR runs OK if CNNNetwork::reshape is not called. But an exception is raised when CNNNetwork::reshape is called, even if the input shape is not changed at all. The error message from the Inference Engine says that the Crop layer requires a second input:

terminate called after throwing an instance of 'InferenceEngine::details::InferenceEngineException'
  what():  Failed to infer shapes for Crop layer with error: second input is required to infer shapes, re-generate IR with latest MO
/teamcity/work/scoring_engine_build/releases_openvino-2018-r3/src/inference_engine/shape_infer/built-in/ie_crop_shape_infer.hpp:62
/t
/opt/intel/computer_vision_sdk/inference_engine/include/details/ie_exception_conversion.hpp:80

When I use the Movidius on Ubuntu 16.04: "[ ERROR ] Cannot find plugin to use :"

This is my command: "./security_barrier_camera_demo -i /media/bozhon/ESD-USB/test5.avi -m /home/bozhon/openvino/open_model_zoo/model_downloader/Security/object_detection/barrier/0106/dldt/vehicle-license-plate-detection-barrier-0106.xml -d MYRIAD"

I cannot find the myriad_plugin in "/home/bozhon/openvino/dldt/inference-engine/src".

Inference-engine build fails on ARM/Raspberry Pi

[dev] [mav@maverick-raspberry ~/var/build/opencv_dldt/inference-engine/build]$ cmake ..
-- BUILD_CONFIGURATION: Release
-- INTEL_VTUNE_DIR is not defined
-- Could NOT find INTEL_ITT (missing:  Located_ITT_INCLUDE_DIRS Located_ITT_LIBS)
-- INTEL_ITT is disabled
-- Detected 32 bit architecture
CMake Error at cmake/linux_name.cmake:22 (string):
  string sub-command REGEX, mode MATCH needs at least 5 arguments total to
  command.
Call Stack (most recent call first):
  cmake/check_features.cmake:50 (get_linux_name)
  cmake/dependencies.cmake:9 (include)
  CMakeLists.txt:90 (include)


CMake Warning at cmake/check_features.cmake:58 (message):
  Cannot detect Linux OS via reading /etc/*-release:


Call Stack (most recent call first):
  cmake/dependencies.cmake:9 (include)
  CMakeLists.txt:90 (include)


-- CI_BUILD_NUMBER: custom_HEAD_eae43f84291492e5e6094eb7efa6077f68d7aca8
-- ENABLE_MKL_DNN = OFF
-- ENABLE_CLDNN = OFF
-- ENABLE_CLDNN_BUILD = OFF
-- ENABLE_PROFILING_ITT = ON
-- ENABLE_PROFILING_RAW = OFF
-- ENABLE_OMP = ON
-- ENABLE_INTEL_OMP = ON
-- ENABLE_TESTS = OFF
-- ENABLE_SAMPLES_CORE = ON
-- ENABLE_SANITIZER = OFF
-- COVERAGE = OFF
-- ENABLE_STRESS_UNIT_TESTS = OFF
-- VERBOSE_BUILD = OFF
-- ENABLE_UNSAFE_LOCATIONS = OFF
-- ENABLE_ALTERNATIVE_TEMP = ON
-- ENABLE_SEGMENTATION_TESTS = ON
-- ENABLE_OBJECT_DETECTION_TESTS = ON
-- ENABLE_OPENCV = ON
-- OS_FOLDER = OFF
-- ENABLE_PLUGIN_RPATH = ON
-- GEMM = OPENBLAS
-- DL_SDK_TEMP envionment not set
-- A library with BLAS API found.
CMake Error at cmake/dependencies.cmake:97 (if):
  if given arguments:

    "STREQUAL" "Ubuntu 16.04"

  Unknown arguments specified
Call Stack (most recent call first):
  CMakeLists.txt:90 (include)


-- Configuring incomplete, errors occurred!
See also "/srv/maverick/var/build/opencv_dldt/inference-engine/build/CMakeFiles/CMakeOutput.log".
See also "/srv/maverick/var/build/opencv_dldt/inference-engine/build/CMakeFiles/CMakeError.log".

Is this project intended for Intel-only architecture, or should/will it work on ARM?

Option 'disable-inlined-alloca-merging' registered more than once

The Inference Engine fails when used together with the OpenCV DNN backend:

net.setPreferableBackend(cv::dnn::DNN_BACKEND_INFERENCE_ENGINE);
net.setPreferableTarget(cv::dnn::DNN_TARGET_OPENCL);

Runtime error:

: CommandLine Error: Option 'disable-inlined-alloca-merging' registered more than once!
LLVM ERROR: inconsistency in registered CommandLine options

Any hints as to what causes this?

Inference Engine and Model Optimizer with Python 2.7

Hi, I'd like to check whether there is an explicit requirement to build and use IE and MO with Python 3.
What about building and using them with pure Python 2.7? Will building and using them with only Python 2.7 fail?

For example:
I have another application/demo which will use OpenVINO, but it only requires Python 2.7 and has no Python 3 support in its environment. That is to say, I have to make sure that all the Python parts of OpenVINO can be built and used with Python 2.7, or that demo will fail.

In this case, does it work to build and use OpenVINO with only Python 2.7, and is there any doc/guideline to help with this?

Thanks.

Warnings with Visual Studio 2017

I just downloaded R5 of the OpenVINO toolkit and built the hello_classification example with Visual Studio 2017. Unfortunately there are a lot of warnings coming from the inference_engine headers which clutter the build output:

1>------ Build started: Project: hello_classification, Configuration: Release x64 ------
1>main.cpp
1>c:\intel\computer_vision_sdk_2018.5.445\deployment_tools\inference_engine\include\ie_layouts.h(127): warning C4251: 'InferenceEngine::BlockingDesc::blockedDims': class 'std::vector<size_t,std::allocator<_Ty>>' needs to have dll-interface to be used by clients of class 'InferenceEngine::BlockingDesc'
1>        with
1>        [
1>            _Ty=size_t
1>        ]
1>c:\intel\computer_vision_sdk_2018.5.445\deployment_tools\inference_engine\include\ie_layouts.h(129): warning C4251: 'InferenceEngine::BlockingDesc::strides': class 'std::vector<size_t,std::allocator<_Ty>>' needs to have dll-interface to be used by clients of class 'InferenceEngine::BlockingDesc'
1>        with
1>        [
1>            _Ty=size_t
1>        ]
1>C:\Intel\computer_vision_sdk_2018.5.445\opencv\include\opencv2/core/mat.hpp(2682): note: see declaration of 'std::vector<size_t,std::allocator<_Ty>>'
1>        with
1>        [
1>            _Ty=size_t
1>        ]
1>c:\intel\computer_vision_sdk_2018.5.445\deployment_tools\inference_engine\include\ie_layouts.h(131): warning C4251: 'InferenceEngine::BlockingDesc::order': class 'std::vector<size_t,std::allocator<_Ty>>' needs to have dll-interface to be used by clients of class 'InferenceEngine::BlockingDesc'
1>        with
1>        [
1>            _Ty=size_t
1>        ]
1>C:\Intel\computer_vision_sdk_2018.5.445\opencv\include\opencv2/core/mat.hpp(2682): note: see declaration of 'std::vector<size_t,std::allocator<_Ty>>'
1>        with
1>        [
1>            _Ty=size_t
1>        ]
1>c:\intel\computer_vision_sdk_2018.5.445\deployment_tools\inference_engine\include\ie_layouts.h(134): warning C4251: 'InferenceEngine::BlockingDesc::offsetPaddingToData': class 'std::vector<size_t,std::allocator<_Ty>>' needs to have dll-interface to be used by clients of class 'InferenceEngine::BlockingDesc'
1>        with
1>        [
1>            _Ty=size_t
1>        ]
1>C:\Intel\computer_vision_sdk_2018.5.445\opencv\include\opencv2/core/mat.hpp(2682): note: see declaration of 'std::vector<size_t,std::allocator<_Ty>>'
1>        with
1>        [
1>            _Ty=size_t
1>        ]
1>c:\intel\computer_vision_sdk_2018.5.445\deployment_tools\inference_engine\include\ie_layouts.h(317): warning C4251: 'InferenceEngine::TensorDesc::dims': class 'std::vector<size_t,std::allocator<_Ty>>' needs to have dll-interface to be used by clients of class 'InferenceEngine::TensorDesc'
1>        with
1>        [
1>            _Ty=size_t
1>        ]
1>C:\Intel\computer_vision_sdk_2018.5.445\opencv\include\opencv2/core/mat.hpp(2682): note: see declaration of 'std::vector<size_t,std::allocator<_Ty>>'
1>        with
1>        [
1>            _Ty=size_t
1>        ]
1>c:\intel\computer_vision_sdk_2018.5.445\deployment_tools\inference_engine\include\ie_layouts.h(321): warning C4251: 'InferenceEngine::TensorDesc::precision': class 'InferenceEngine::Precision' needs to have dll-interface to be used by clients of class 'InferenceEngine::TensorDesc'
1>c:\intel\computer_vision_sdk_2018.5.445\deployment_tools\inference_engine\include\ie_precision.hpp(19): note: see declaration of 'InferenceEngine::Precision'
1>c:\intel\computer_vision_sdk_2018.5.445\deployment_tools\inference_engine\include\ie_layouts.h(343): warning C4251: 'InferenceEngine::LayoutOffsetCounter::_dims': class 'std::vector<size_t,std::allocator<_Ty>>' needs to have dll-interface to be used by clients of class 'InferenceEngine::LayoutOffsetCounter'
1>        with
1>        [
1>            _Ty=size_t
1>        ]
1>C:\Intel\computer_vision_sdk_2018.5.445\opencv\include\opencv2/core/mat.hpp(2682): note: see declaration of 'std::vector<size_t,std::allocator<_Ty>>'
1>        with
1>        [
1>            _Ty=size_t
1>        ]
1>c:\intel\computer_vision_sdk_2018.5.445\deployment_tools\inference_engine\include\ie_layouts.h(351): warning C4251: 'InferenceEngine::LayoutOffsetCounter::_muls': class 'std::vector<size_t,std::allocator<_Ty>>' needs to have dll-interface to be used by clients of class 'InferenceEngine::LayoutOffsetCounter'
1>        with
1>        [
1>            _Ty=size_t
1>        ]
1>C:\Intel\computer_vision_sdk_2018.5.445\opencv\include\opencv2/core/mat.hpp(2682): note: see declaration of 'std::vector<size_t,std::allocator<_Ty>>'
1>        with
1>        [
1>            _Ty=size_t
1>        ]
1>c:\intel\computer_vision_sdk_2018.5.445\deployment_tools\inference_engine\include\ie_data.h(33): warning C4251: 'InferenceEngine::Data::precision': class 'InferenceEngine::Precision' needs to have dll-interface to be used by clients of class 'InferenceEngine::Data'
1>c:\intel\computer_vision_sdk_2018.5.445\deployment_tools\inference_engine\include\ie_precision.hpp(19): note: see declaration of 'InferenceEngine::Precision'
1>c:\intel\computer_vision_sdk_2018.5.445\deployment_tools\inference_engine\include\ie_data.h(43): warning C4251: 'InferenceEngine::Data::dims': class 'std::vector<size_t,std::allocator<_Ty>>' needs to have dll-interface to be used by clients of class 'InferenceEngine::Data'
1>        with
1>        [
1>            _Ty=size_t
1>        ]
1>C:\Intel\computer_vision_sdk_2018.5.445\opencv\include\opencv2/core/mat.hpp(2682): note: see declaration of 'std::vector<size_t,std::allocator<_Ty>>'
1>        with
1>        [
1>            _Ty=size_t
1>        ]
1>c:\intel\computer_vision_sdk_2018.5.445\deployment_tools\inference_engine\include\ie_data.h(48): warning C4251: 'InferenceEngine::Data::creatorLayer': class 'std::weak_ptr<InferenceEngine::CNNLayer>' needs to have dll-interface to be used by clients of class 'InferenceEngine::Data'
1>c:\intel\computer_vision_sdk_2018.5.445\deployment_tools\inference_engine\include\ie_common.h(40): note: see declaration of 'std::weak_ptr<InferenceEngine::CNNLayer>'
1>c:\intel\computer_vision_sdk_2018.5.445\deployment_tools\inference_engine\include\ie_data.h(53): warning C4251: 'InferenceEngine::Data::name': class 'std::basic_string<char,std::char_traits<char>,std::allocator<char>>' needs to have dll-interface to be used by clients of class 'InferenceEngine::Data'
1>C:\Program Files (x86)\Microsoft Visual Studio\VS2017\Professional\VC\Tools\MSVC\14.16.27023\include\xstring(4373): note: see declaration of 'std::basic_string<char,std::char_traits<char>,std::allocator<char>>'
1>c:\intel\computer_vision_sdk_2018.5.445\deployment_tools\inference_engine\include\ie_data.h(59): warning C4251: 'InferenceEngine::Data::inputTo': class 'std::map<std::string,InferenceEngine::CNNLayerPtr,std::less<_Kty>,std::allocator<std::pair<const _Kty,_Ty>>>' needs to have dll-interface to be used by clients of class 'InferenceEngine::Data'
1>        with
1>        [
1>            _Kty=cv::String,
1>            _Ty=InferenceEngine::CNNLayerPtr
1>        ]
1>c:\intel\computer_vision_sdk_2018.5.445\deployment_tools\inference_engine\include\ie_data.h(59): note: see declaration of 'std::map<std::string,InferenceEngine::CNNLayerPtr,std::less<_Kty>,std::allocator<std::pair<const _Kty,_Ty>>>'
1>        with
1>        [
1>            _Kty=cv::String,
1>            _Ty=InferenceEngine::CNNLayerPtr
1>        ]

clang support

Trying to compile the inference engine with clang 3.9 and 6.0; not many issues, in general it looks good, and I was able to run the samples.

Can we add support for clang?

JFTR one minor issue was:

/src/OpenVino/dldt/inference-engine/src/hetero_plugin/fallback_policy.cpp:52:11: error: comparison of unsigned expression >= 0 is always true [-Werror,-Wtautological-compare] if (i >= 0) {

Thanks,

Nikos

ReadNetwork / ReadWeights

Is ie_cnn_net_reader.cpp open-source code?

I want to read the network and weights from memory, so the function needs to be modified.
Is there another solution for doing this?
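For what it's worth, a hedged sketch with the current openvino Python package, under the assumption that Core.read_model accepts an in-memory model buffer plus a weights tensor (check the API of the release you actually use); the file names are placeholders and the buffers could come from any in-memory source:

import numpy as np
import openvino as ov

core = ov.Core()
xml_data = open("model.xml", "rb").read()                 # model definition held in memory
bin_data = np.fromfile("model.bin", dtype=np.uint8)       # weights held in memory
model = core.read_model(model=xml_data, weights=ov.Tensor(bin_data))
compiled = core.compile_model(model, "CPU")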

Building libtensorflow_ie_call_layer.dll on Windows 10

I was trying to build libtensorflow_ie_call_layer.dll on Windows.
I manually performed the equivalent steps mentioned in deployment_tools\model_optimizer\tf_call_ie_layer\build.sh.

The following are the contents of
\tensorflow\tensorflow\cc\inference_engine_layer\BUILD:

load("//tensorflow:tensorflow.bzl", "tf_cc_shared_object")
tf_cc_shared_object(
name = "libtensorflow_call_layer.dll",
srcs = [
"extension.cpp",
"extension.h",
"tensorflow_layer.cpp",
"tensorflow_layer.h"],
copts = ["-Ithird_party/inference_engine/include"],
deps = [
"//tensorflow/cc:cc_ops",
"//tensorflow/core:tensorflow",
"//third_party/inference_engine:inference_engine",
],
visibility = ["//visibility:public"],
)

\tensorflow\third_party\inference_engine\BUILD

cc_library(
    name = "inference_engine",
    hdrs = glob(["include//*.h", "include//*.hpp"]),
    visibility = ["//visibility:public"],
    licenses = ["permissive"],
)

The following command was used to trigger the build (Bazel version = 15.0):
bazel build --config=monolithic //tensorflow/cc/inference_engine_layer:libtensorflow_ie_call_layer.dll

The following error was encountered:
c:\users\abnsharm_bazel_abnsharm\wvgvakwo\execroot\org_tensorflow\third_party\inference_engine\include\details\os/win_shared_object_loader.h(49): error C3861: 'LoadLibrary': identifier not found

windows.h is already included by os/win_shared_object_loader.h, so the symbol LoadLibrary should be found by the compiler. What am I missing here?


ERROR: C:/perforce/thirdpartyexports/openvino_mo/trunk/18.4/source/tf_dir/tensorflow/tensorflow/cc/inference_engine_layer/BUILD:2:1: C++ compilation of rule '//tensorflow/cc/inference_engine_layer:libtensorflow_call_layer.dll' failed (Exit 2): cl.exe failed: error executing command
cd C:/users/abnsharm/bazel_abnsharm/wvgvakwo/execroot/org_tensorflow
SET INCLUDE=C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE;C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\ATLMFC\INCLUDE;C:\Program Files (x86)\Windows Kits\10\include\10.0.16299.0\ucrt;C:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\include\um;C:\Program Files (x86)\Windows Kits\10\include\10.0.16299.0\shared;C:\Program Files (x86)\Windows Kits\10\include\10.0.16299.0\um;C:\Program Files (x86)\Windows Kits\10\include\10.0.16299.0\winrt;
SET LIB=C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64;C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\ATLMFC\LIB\amd64;C:\Program Files (x86)\Windows Kits\10\lib\10.0.16299.0\ucrt\x64;C:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\lib\um\x64;C:\Program Files (x86)\Windows Kits\10\lib\10.0.16299.0\um\x64;
SET PATH=C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\CommonExtensions\Microsoft\TestWindow;C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\amd64;C:\WINDOWS\Microsoft.NET\Framework64\v4.0.30319;C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\VCPackages;C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE;C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\Tools;C:\Program Files (x86)\Microsoft Visual Studio 14.0\Team Tools\Performance Tools\x64;C:\Program Files (x86)\Microsoft Visual Studio 14.0\Team Tools\Performance Tools;C:\Program Files (x86)\Windows Kits\10\bin\x64;C:\Program Files (x86)\Windows Kits\10\bin\x86;C:\Program Files (x86)\Microsoft SDKs\Windows\v10.0A\bin\NETFX 4.6.1 Tools\x64;;C:\WINDOWS\system32
SET PWD=/proc/self/cwd
SET PYTHON_BIN_PATH=C:/Program Files/Python36/python.exe
SET PYTHON_LIB_PATH=C:/Program Files/Python36/lib/site-packages
SET TEMP=C:\Users\abnsharm\AppData\Local\Temp
SET TF_DOWNLOAD_CLANG=0
SET TF_NEED_CUDA=0
SET TF_NEED_OPENCL_SYCL=0
SET TF_NEED_ROCM=0
SET TMP=C:\Users\abnsharm\AppData\Local\Temp
C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/amd64/cl.exe /nologo /DCOMPILER_MSVC /DNOMINMAX /D_WIN32_WINNT=0x0600 /D_CRT_SECURE_NO_DEPRECATE /D_CRT_SECURE_NO_WARNINGS /D_SILENCE_STDEXT_HASH_DEPRECATION_WARNINGS /bigobj /Zm500 /J /Gy /GF /EHsc /wd4351 /wd4291 /wd4250 /wd4996 /I. /Ibazel-out/x64_windows-opt/genfiles /Iexternal/com_google_absl /Ibazel-out/x64_windows-opt/genfiles/external/com_google_absl /Iexternal/bazel_tools /Ibazel-out/x64_windows-opt/genfiles/external/bazel_tools /Iexternal/eigen_archive /Ibazel-out/x64_windows-opt/genfiles/external/eigen_archive /Iexternal/local_config_sycl /Ibazel-out/x64_windows-opt/genfiles/external/local_config_sycl /Iexternal/nsync /Ibazel-out/x64_windows-opt/genfiles/external/nsync /Iexternal/gif_archive /Ibazel-out/x64_windows-opt/genfiles/external/gif_archive /Iexternal/jpeg /Ibazel-out/x64_windows-opt/genfiles/external/jpeg /Iexternal/protobuf_archive /Ibazel-out/x64_windows-opt/genfiles/external/protobuf_archive /Iexternal/com_googlesource_code_re2 /Ibazel-out/x64_windows-opt/genfiles/external/com_googlesource_code_re2 /Iexternal/farmhash_archive /Ibazel-out/x64_windows-opt/genfiles/external/farmhash_archive /Iexternal/fft2d /Ibazel-out/x64_windows-opt/genfiles/external/fft2d /Iexternal/highwayhash /Ibazel-out/x64_windows-opt/genfiles/external/highwayhash /Iexternal/zlib_archive /Ibazel-out/x64_windows-opt/genfiles/external/zlib_archive /Iexternal/double_conversion /Ibazel-out/x64_windows-opt/genfiles/external/double_conversion /Iexternal/snappy /Ibazel-out/x64_windows-opt/genfiles/external/snappy /Iexternal/local_config_cuda /Ibazel-out/x64_windows-opt/genfiles/external/local_config_cuda /Iexternal/lmdb /Ibazel-out/x64_windows-opt/genfiles/external/lmdb /Iexternal/org_sqlite /Ibazel-out/x64_windows-opt/genfiles/external/org_sqlite /Iexternal/png_archive /Ibazel-out/x64_windows-opt/genfiles/external/png_archive /Iexternal/icu /Ibazel-out/x64_windows-opt/genfiles/external/icu /Iexternal/eigen_archive /Ibazel-out/x64_windows-opt/genfiles/external/eigen_archive /Ibazel-out/x64_windows-opt/bin/external/eigen_archive /Iexternal/nsync/public /Ibazel-out/x64_windows-opt/genfiles/external/nsync/public /Ibazel-out/x64_windows-opt/bin/external/nsync/public /Iexternal/gif_archive/lib /Ibazel-out/x64_windows-opt/genfiles/external/gif_archive/lib /Ibazel-out/x64_windows-opt/bin/external/gif_archive/lib /Iexternal/gif_archive/windows /Ibazel-out/x64_windows-opt/genfiles/external/gif_archive/windows /Ibazel-out/x64_windows-opt/bin/external/gif_archive/windows /Iexternal/protobuf_archive/src /Ibazel-out/x64_windows-opt/genfiles/external/protobuf_archive/src /Ibazel-out/x64_windows-opt/bin/external/protobuf_archive/src /Iexternal/farmhash_archive/src /Ibazel-out/x64_windows-opt/genfiles/external/farmhash_archive/src /Ibazel-out/x64_windows-opt/bin/external/farmhash_archive/src /Iexternal/zlib_archive /Ibazel-out/x64_windows-opt/genfiles/external/zlib_archive /Ibazel-out/x64_windows-opt/bin/external/zlib_archive /Iexternal/double_conversion /Ibazel-out/x64_windows-opt/genfiles/external/double_conversion /Ibazel-out/x64_windows-opt/bin/external/double_conversion /Iexternal/local_config_cuda/cuda /Ibazel-out/x64_windows-opt/genfiles/external/local_config_cuda/cuda /Ibazel-out/x64_windows-opt/bin/external/local_config_cuda/cuda /Iexternal/local_config_cuda/cuda/cuda/include /Ibazel-out/x64_windows-opt/genfiles/external/local_config_cuda/cuda/cuda/include /Ibazel-out/x64_windows-opt/bin/external/local_config_cuda/cuda/cuda/include 
/Iexternal/local_config_cuda/cuda/cuda/include/crt /Ibazel-out/x64_windows-opt/genfiles/external/local_config_cuda/cuda/cuda/include/crt /Ibazel-out/x64_windows-opt/bin/external/local_config_cuda/cuda/cuda/include/crt /Iexternal/png_archive /Ibazel-out/x64_windows-opt/genfiles/external/png_archive /Ibazel-out/x64_windows-opt/bin/external/png_archive /Iexternal/icu/icu4c/source/common /Ibazel-out/x64_windows-opt/genfiles/external/icu/icu4c/source/common /Ibazel-out/x64_windows-opt/bin/external/icu/icu4c/source/common /D__CLANG_SUPPORT_DYN_ANNOTATION
_ /DEIGEN_MPL2_ONLY /DEIGEN_MAX_ALIGN_BYTES=64 /DEIGEN_HAS_TYPE_TRAITS=0 /DTF_USE_SNAPPY /DSQLITE_OMIT_DEPRECATED /showIncludes /MD /O2 /DNDEBUG -w -Ithird_party/inference_engine/include -DIMPLEMENT_INFERENCE_ENGINE_API -showIncludes /Fobazel-out/x64_windows-opt/bin/tensorflow/cc/inference_engine_layer/_objs/libtensorflow_call_layer.dll/tensorflow_layer.obj /c tensorflow/cc/inference_engine_layer/tensorflow_layer.cpp
c:\users\abnsharm_bazel_abnsharm\wvgvakwo\execroot\org_tensorflow\third_party\inference_engine\include\details\os/win_shared_object_loader.h(49): error C3861: 'LoadLibrary': identifier not found
Target //tensorflow/cc/inference_engine_layer:libtensorflow_call_layer.dll failed to build
INFO: Elapsed time: 3768.222s, Critical Path: 346.91s
INFO: 2311 processes: 2311 local.
FAILED: Build did NOT complete successfully
