Introduction

Open Neural Network Exchange (ONNX) is an open ecosystem that empowers AI developers to choose the right tools as their project evolves. ONNX provides an open source format for AI models, both deep learning and traditional ML. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types. Currently we focus on the capabilities needed for inferencing (scoring).
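
For a concrete feel of the computation graph model, here is a minimal sketch (the operator, tensor names, and shapes are arbitrary) that builds and validates a one-node model with the onnx.helper API:

import onnx
from onnx import TensorProto, helper

# A single Relu node: Y = Relu(X), with X and Y as 1x4 float tensors.
node = helper.make_node("Relu", inputs=["X"], outputs=["Y"])
graph = helper.make_graph(
    [node],
    "tiny_graph",
    inputs=[helper.make_tensor_value_info("X", TensorProto.FLOAT, [1, 4])],
    outputs=[helper.make_tensor_value_info("Y", TensorProto.FLOAT, [1, 4])],
)
model = helper.make_model(graph)
onnx.checker.check_model(model)  # raises if the model is malformed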

ONNX is widely supported and can be found in many frameworks, tools, and hardware. Enabling interoperability between different frameworks and streamlining the path from research to production helps increase the speed of innovation in the AI community. We invite the community to join us and further evolve ONNX.

Use ONNX

Learn about the ONNX spec

Programming utilities for working with ONNX Graphs

Contribute

ONNX is a community project and the open governance model is described here. We encourage you to join the effort and contribute feedback, ideas, and code. You can participate in the Special Interest Groups and Working Groups to shape the future of ONNX.

Check out our contribution guide to get started.

If you think some operator should be added to the ONNX specification, please read this document.

Community meetings

The schedules of the regular meetings of the Steering Committee, the Working Groups, and the SIGs can be found here.

Community Meetups are held at least once a year; content from previous community meetups is available online.

Discuss

We encourage you to open Issues, or use Slack for more real-time discussion (if you have not joined yet, use the invitation link to join the group).

Follow Us

Stay up to date with the latest ONNX news. [Facebook] [Twitter]

Roadmap

A roadmap process takes place every year. More details can be found here.

Installation

Official Python packages

ONNX released packages are published on PyPI:

pip install onnx  # or pip install onnx[reference] for optional reference implementation dependencies

ONNX weekly packages (published on PyPI as onnx-weekly) enable experimentation and early testing.

vcpkg packages

onnx is on the maintenance list of vcpkg, so you can use vcpkg to build and install it:

git clone https://github.com/microsoft/vcpkg.git
cd vcpkg
./bootstrap-vcpkg.bat # For powershell
./bootstrap-vcpkg.sh # For bash
./vcpkg install onnx

Conda packages

A binary build of ONNX is available from Conda, in conda-forge:

conda install -c conda-forge onnx

Build ONNX from Source

Before building from source, uninstall any existing version of ONNX: pip uninstall onnx.

A compiler supporting C++17 or higher is required to build ONNX from source. Users can still specify their own CMAKE_CXX_STANDARD version for building ONNX.

If you don't have Protobuf installed, ONNX will download and build Protobuf internally as part of the ONNX build.

Alternatively, you can manually install the Protobuf C/C++ libraries and tools at a supported version before proceeding. Then, depending on how you installed Protobuf, set the environment variable CMAKE_ARGS to "-DONNX_USE_PROTOBUF_SHARED_LIBS=ON" or "-DONNX_USE_PROTOBUF_SHARED_LIBS=OFF". For example, you may need to run the following command:

Linux:

export CMAKE_ARGS="-DONNX_USE_PROTOBUF_SHARED_LIBS=ON"

Windows:

set CMAKE_ARGS="-DONNX_USE_PROTOBUF_SHARED_LIBS=ON"

Whether to use ON or OFF depends on the kind of Protobuf library you have: shared libraries are files ending in .dll/.so/.dylib, while static libraries are files ending in .a/.lib. This option therefore depends on how you obtained your Protobuf library and how it was built. The default is OFF. You don't need to run the commands above if you prefer to use a static Protobuf library.

Windows

If you are building ONNX from source, it is recommended that you also build Protobuf locally as a static library. The version distributed with conda-forge is a DLL, but ONNX expects it to be a static library. Building protobuf locally also lets you control the version of protobuf. The tested and recommended version is 3.21.12.

The instructions in this README assume you are using Visual Studio. It is recommended that you run all the commands from a shell started from "x64 Native Tools Command Prompt for VS 2019" and keep the build system generator for cmake (e.g., cmake -G "Visual Studio 16 2019") consistent while building protobuf as well as ONNX.

You can get protobuf by running the following commands:

git clone https://github.com/protocolbuffers/protobuf.git
cd protobuf
git checkout v21.12
cd cmake
cmake -G "Visual Studio 16 2019" -A x64 -DCMAKE_INSTALL_PREFIX=<protobuf_install_dir> -Dprotobuf_MSVC_STATIC_RUNTIME=OFF -Dprotobuf_BUILD_SHARED_LIBS=OFF -Dprotobuf_BUILD_TESTS=OFF -Dprotobuf_BUILD_EXAMPLES=OFF .
msbuild protobuf.sln /m /p:Configuration=Release
msbuild INSTALL.vcxproj /p:Configuration=Release

Protobuf will then be built as a static library and installed to <protobuf_install_dir>. Add its bin directory (which contains protoc.exe) to your PATH:

set CMAKE_PREFIX_PATH=<protobuf_install_dir>;%CMAKE_PREFIX_PATH%

Please note: if your protobuf_install_dir contains spaces, do not add quotation marks around it.

Alternative: if you don't want to change your PATH, you can set ONNX_PROTOC_EXECUTABLE instead.

set CMAKE_ARGS=-DONNX_PROTOC_EXECUTABLE=<full_path_to_protoc.exe>

Then you can build ONNX as:

git clone https://github.com/onnx/onnx.git
cd onnx
git submodule update --init --recursive
# prefer lite proto
set CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON
pip install -e .

Linux

First, you need to install protobuf. The minimum Protobuf compiler (protoc) version required by ONNX is 3.6.1. Please note that old protoc versions might not work with CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON.

Ubuntu 20.04 (and newer) users may choose to install protobuf via

apt-get install python3-pip python3-dev libprotobuf-dev protobuf-compiler

In this case, it is required to add -DONNX_USE_PROTOBUF_SHARED_LIBS=ON to CMAKE_ARGS in the ONNX build step.

A more general way is to build and install it from source. See the instructions below for more details.

Installing Protobuf from source

Debian/Ubuntu:

  git clone https://github.com/protocolbuffers/protobuf.git
  cd protobuf
  git checkout v21.12
  git submodule update --init --recursive
  mkdir build_source && cd build_source
  cmake ../cmake -Dprotobuf_BUILD_SHARED_LIBS=OFF -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_POSITION_INDEPENDENT_CODE=ON -Dprotobuf_BUILD_TESTS=OFF -DCMAKE_BUILD_TYPE=Release
  make -j$(nproc)
  make install

CentOS/RHEL/Fedora:

  git clone https://github.com/protocolbuffers/protobuf.git
  cd protobuf
  git checkout v21.12
  git submodule update --init --recursive
  mkdir build_source && cd build_source
  cmake ../cmake  -DCMAKE_INSTALL_LIBDIR=lib64 -Dprotobuf_BUILD_SHARED_LIBS=OFF -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_POSITION_INDEPENDENT_CODE=ON -Dprotobuf_BUILD_TESTS=OFF -DCMAKE_BUILD_TYPE=Release
  make -j$(nproc)
  make install

Here "-DCMAKE_POSITION_INDEPENDENT_CODE=ON" is crucial. By default static libraries are built without "-fPIC" flag, they are not position independent code. But shared libraries must be position independent code. Python C/C++ extensions(like ONNX) are shared libraries. So if a static library was not built with "-fPIC", it can't be linked to such a shared library.

Once the build succeeds, update PATH to include the Protobuf paths.

Then you can build ONNX as:

git clone https://github.com/onnx/onnx.git
cd onnx
git submodule update --init --recursive
# Optional: prefer lite proto
export CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON
pip install -e .

Mac

export NUM_CORES=`sysctl -n hw.ncpu`
brew update
brew install autoconf && brew install automake
wget https://github.com/protocolbuffers/protobuf/releases/download/v21.12/protobuf-cpp-3.21.12.tar.gz
tar -xvf protobuf-cpp-3.21.12.tar.gz
cd protobuf-3.21.12
mkdir build_source && cd build_source
cmake ../cmake -Dprotobuf_BUILD_SHARED_LIBS=OFF -DCMAKE_POSITION_INDEPENDENT_CODE=ON -Dprotobuf_BUILD_TESTS=OFF -DCMAKE_BUILD_TYPE=Release
make -j${NUM_CORES}
make install

Once the build succeeds, update PATH to include the Protobuf paths.

Then you can build ONNX as:

git clone --recursive https://github.com/onnx/onnx.git
cd onnx
# Optional: prefer lite proto
export CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON
pip install -e .

Verify Installation

After installation, run

python -c "import onnx"

to verify it works.
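
For a slightly more thorough check, you can also print the installed package version and the IR/opset versions this build knows about (the values shown will depend on your installation):

import onnx

print(onnx.__version__)                # installed package version
print(onnx.IR_VERSION)                 # IR version this build targets
print(onnx.defs.onnx_opset_version())  # highest default-domain opset known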

Common Build Options

For the full list, refer to CMakeLists.txt.

Environment variables

  • USE_MSVC_STATIC_RUNTIME should be 1 or 0, not ON or OFF. When set to 1, ONNX links statically to the runtime library. Default: USE_MSVC_STATIC_RUNTIME=0

  • DEBUG should be 0 or 1. When set to 1, ONNX is built in debug mode. For debug versions of the dependencies, you need to open the CMakeLists file and append a letter d at the end of the package name lines. For example, NAMES protobuf-lite would become NAMES protobuf-lited. Default: DEBUG=0

CMake variables

  • ONNX_USE_PROTOBUF_SHARED_LIBS should be ON or OFF. It determines how ONNX links to the Protobuf libraries. Default: ONNX_USE_PROTOBUF_SHARED_LIBS=OFF (with USE_MSVC_STATIC_RUNTIME=0).

    • When set to ON, ONNX will dynamically link to the Protobuf shared libraries: PROTOBUF_USE_DLLS will be defined as described in the Protobuf documentation, Protobuf_USE_STATIC_LIBS will be set to OFF, and USE_MSVC_STATIC_RUNTIME must be 0.
    • When set to OFF, ONNX will link statically to Protobuf: Protobuf_USE_STATIC_LIBS will be set to ON (to force the use of the static libraries), and USE_MSVC_STATIC_RUNTIME can be 0 or 1.
  • ONNX_USE_LITE_PROTO should be ON or OFF. When set to ON onnx uses lite protobuf instead of full protobuf. Default: ONNX_USE_LITE_PROTO=OFF

  • ONNX_WERROR should be ON or OFF. When set to ON warnings are treated as errors. Default: ONNX_WERROR=OFF in local builds, ON in CI and release pipelines.

Common Errors

  • Note: the import onnx command does not work from the source checkout directory; in this case you'll see ModuleNotFoundError: No module named 'onnx.onnx_cpp2py_export'. Change into another directory to fix this error.

  • If you run into any issues while building Protobuf as a static library, please ensure that shared Protobuf libraries, like libprotobuf, are not installed on your device or in the conda environment. If these shared libraries exist, either remove them to build Protobuf from source as a static library, or skip the Protobuf build from source to use the shared version directly.

  • If you run into any issues while building ONNX from source, and your error message reads, Could not find pythonXX.lib, ensure that you have consistent Python versions for common commands, such as python and pip. Clean all existing build files and rebuild ONNX again.

Testing

ONNX uses pytest as its test driver. To run the tests, you will first need to install pytest and nbval:

pip install pytest nbval

After installing pytest, use the following command to run tests.

pytest

Development

Check out the contributor guide for instructions.

License

Apache License v2.0

Code of Conduct

ONNX Open Source Code of Conduct

Issues

Can't get accurate prediction on some pre-trained models

Hi,

I get good results on the traditional 1000 ImageNet synsets using the VGG16, bvlc_googlenet, and VGG19 models, with the data scaled to [0, 1] and the ImageNet std and mean normalization applied.

However, I am not able to get accurate predictions from the squeezenet and densenet121 pre-trained models using the same pre-processing. Have they been trained on a different synset? Do they need specific pre-processing?

I am using MXNet as the backend.

Does the tiny-yolo-voc.onnx have pads?

When I convert the provided tiny-yolo-voc.onnx to another framework, it goes well. But when I convert my own tiny-yolo-voc.onnx, it warns that pads are not supported. Does that mean the provided onnx has no pads? And would you provide tiny-yolo on COCO? Thanks a lot.

MNIST model weights

I was loading the MNIST model and wanted to know where the model weights were stored. In other ONNX models, the Graph.initializer structure holds this information, but in this particular MNIST model, Graph.initializer is empty.
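
For reference, this is how initializers are normally enumerated (a sketch; the path is hypothetical). For this MNIST model the loop prints nothing because graph.initializer is empty; the weights are presumably carried by Constant nodes instead (see the next issue):

import onnx
from onnx import numpy_helper

model = onnx.load("mnist/model.onnx")  # hypothetical path
for init in model.graph.initializer:
    arr = numpy_helper.to_array(init)  # initializer as a numpy array
    print(init.name, arr.shape, arr.dtype)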

Constant op_type definition

The mnist model contains a layer named Constant, along with Conv, Relu, etc. I wanted to know what this Constant layer does in the model (what operation does it perform?).
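
For context: Constant is an operator that takes no inputs and outputs the tensor stored in its value attribute, which lets a model carry weights as Constant nodes rather than initializers. A minimal sketch of building such a node (names and values are placeholders):

import numpy as np
from onnx import helper, numpy_helper

weights = np.zeros((8, 1, 5, 5), dtype=np.float32)  # placeholder weight values
const_node = helper.make_node(
    "Constant",
    inputs=[],            # Constant takes no inputs
    outputs=["conv1_W"],  # the tensor it produces
    value=numpy_helper.from_array(weights, name="conv1_W"),
)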

Shufflenet has an old version of BatchNormalization op (that uses consumed_inputs)

The Shufflenet model in this repo has an older version of the BatchNormalization op, which uses the attribute consumed_inputs. There are two issues with this:

  1. This version is quite old and several frameworks that support ONNX do not have support for it.
  2. Furthermore, the older version's spec does not have a good description for this attribute.

Is it possible to remove this attribute and regenerate the model with the latest version of the BatchNormalization op?

Pre-Processing stage

Hi guys,

I was wondering if anyone knew what pre-processing steps were taken for each respective model, more specifically VGG19. Additionally are these pre-processing steps the same or similar throughout all the models? Any information would be helpful.

Thank you

Error in the tiny yolo v2 model

I was trying out the tiny yolo v2 model, when I came across a strange error. Apparently, there exist a convolution.W key, which is called without being initialized. Can someone tell me the reason behind this?

Format of input and output Tensors

Certain model files contain npz inputs and outputs but no serialized protobuf TensorProtos.

The latter format is useful for Python-less workflows. It would be good if either (i) all files contained protobuf files or (ii) a script for converting npz -> protobuf files were provided.
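
In the absence of an official script, a minimal conversion sketch (assuming the npz layout used by these archives, with 'inputs' and 'outputs' entries; file names are hypothetical) might look like:

import numpy as np
from onnx import numpy_helper

sample = np.load("test_data_0.npz", encoding="bytes")
for kind in ("inputs", "outputs"):
    for i, arr in enumerate(sample[kind]):
        # Serialize each array as an ONNX TensorProto file.
        tensor = numpy_helper.from_array(np.asarray(arr), name="%s_%d" % (kind, i))
        with open("%s_%d.pb" % (kind, i), "wb") as f:
            f.write(tensor.SerializeToString())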

Proposed template for onnx models readme

Hi ONNX team, I'm working with @prasanthpul to standardize on a README template for the models in the zoo. Please let me know your thoughts. See here for the MNIST model for reference.

Model name

Download: link to model download
Model size: x MB

Description

Description of what the model does

Paper

Name and link to the paper that the model implements, if applicable.

Dataset

Dataset that was used to train the model

Source

Reference to the tutorial used for training and/or the source/Github repo of the code that the ONNX model was generated or converted from (e.g. TinyYOLO CoreML).
If this is different than the original model code/source (e.g. TinyYOLO DarkNet), provide a link to that, too.

Model input and output

Input

Expected model input format - type and shape

Output

Expected model output format - type and shape

Pre-processing steps

Description of what pre-processing steps need to be done on the data before feeding them to the model

Post-processing steps

If applicable, description or link to reference of any post-processing steps to perform on model output

Sample test data

Short description of the included sample test data (.pb or .npz).

Results/accuracy on test set

If possible, to motivate model reproducibility and integrity.

License

ValidationError “Field 'type' of attr is required but missing.” happened when invoking check_node function.

I'm new to deep learning, and I wanted to see the output of each node of AlexNet via the run_node function, but I hit the following error.

import onnx
model = onnx.load('/path/bvlc_alexnet/model.pb')
node = model.graph.node[0]  # the first node, whose op_type is "Conv"
onnx.checker.check_node(node)

Traceback (most recent call last):
File "", line 1, in
File "/usr/local/lib/python2.7/dist-packages/onnx/checker.py", line 32, in checker
proto.SerializeToString(), ir_version)
onnx.onnx_cpp2py_export.checker.ValidationError: Field 'type' of attr is required but missing.

node

input: "data_0"
input: "conv1_w_0"
input: "conv1_b_0"
output: "conv1_1"
name: ""
op_type: "Conv"
attribute {
name: "strides"
ints: 4
ints: 4
}
attribute {
name: "pads"
ints: 0
ints: 0
ints: 0
ints: 0
}
attribute {
name: "kernel_shape"
ints: 11
ints: 11
}

I tried to add the missing 'type' field of attr with the following code:

from onnx import AttributeProto
attr = AttributeProto()
# I didn't know which type was right; just tried this workaround
# with the 'AttributeProto.FLOAT' value to see the result.
attr.type = AttributeProto.FLOAT
node.attribute.extend([attr])
node
node

input: "data_0"
input: "conv1_w_0"
input: "conv1_b_0"
output: "conv1_1"
name: ""
op_type: "Conv"
attribute {
name: "strides"
ints: 4
ints: 4
}
attribute {
name: "pads"
ints: 0
ints: 0
ints: 0
ints: 0
}
attribute {
name: "kernel_shape"
ints: 11
ints: 11
}
attribute {
type: FLOAT
}

onnx.checker.check_node(node)

and still got the same error.
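
The appended attribute doesn't help because the error is about the existing attributes, each of which is missing its type field (older exporters did not set it). One workaround sketch is to rebuild each attribute with onnx.helper.make_attribute, which fills in type automatically (this assumes, as in this Conv node, that every attribute is a list of ints):

import onnx
from onnx import helper

model = onnx.load('/path/bvlc_alexnet/model.pb')
node = model.graph.node[0]
# Rebuild each attribute; make_attribute also populates the 'type' field.
fixed = [helper.make_attribute(a.name, list(a.ints)) for a in node.attribute]
del node.attribute[:]
node.attribute.extend(fixed)
onnx.checker.check_node(node)  # should now pass the missing-'type' check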

tiny_yolov2 model output with floats

Hi! Can the tiny_yolov2 model be updated to output floats rather than doubles? Windows ML currently doesn't support doubles, so the model can't be used on Windows.


Caffe2 backend can't run resnet/mnist

The Caffe2 backend fails to run the resnet and mnist models.

resnet:
RuntimeError: [enforce fail at conv_op_impl.h:37] X.ndim() == filter.ndim(). 5 vs 4 Error from operator:
input: "gpu_0/data_0" input: "gpu_0/conv1_w_0" output: "gpu_0/conv1_1" name: "" type: "Conv"

mnist:
RuntimeError: [enforce fail at reshape_op.h:120] total_size == size. 64 vs 256. Argument shape does not agree with the input data. (64 != 256) Error from operator:
input: "Pooling160_Output_0" output: "Pooling160_Output_0_reshape0" output: "OC2_DUMMY_1" name: "Times212_reshape0" type: "Reshape"

I think it could be a problem with the CNTK export?

Update to .onnx file extension and flat folder structure

Should the models get updated to use the .onnx file extension?

Also wondering: should the .tar.gz files be linked directly from the top-level README.md, and the files named bvlc_alexnet.onnx, inception_v2.onnx, etc., instead of model.pb?

Sample Code on README.md not working

I tried running the sample code on README.md and got the following error.

Traceback (most recent call last):
  File "test.py", line 3, in <module>
    import onnx_backend as backend
ImportError: No module named onnx_backend

I believe the sample is based on old code.
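
For anyone hitting this today: the backend module moved, so the import would look something like the sketch below (assuming the Caffe2 ONNX backend is installed; the model path is hypothetical):

import onnx
import caffe2.python.onnx.backend as backend  # replaces the old onnx_backend module

model = onnx.load("model.onnx")  # hypothetical path
rep = backend.prepare(model, device="CPU")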

Update models to the newest ir_version

  1. And automate the process in onnx-caffe2.
  2. Update the file extension from .pb to .onnx (note this requires adding a versioning mechanism to keep backward compatibility).

About AlexNet MaxPooling

Hi, is it a mistake that the last max-pooling layer of AlexNet has padding [0, 0, 1, 1]? This node's output is named "pool5_1". Changing it to [1, 1, 1, 1] gives the correct shape as well as correct results for us.

input-outputs pairs mismatched for mnist model

The 3 input / output pairs appear to be mismatched.

In particular,
Input 0 ---> Output 1
Input 1 ---> Output 2
Input 2 ---> Output 0.

So the 3 test_data_[012].npz files need to be recreated I guess.

The outputs of the R-CNN ILSVRC13 model is incorrect

For the R-CNN ILSVRC13 ONNX model, the outputs seem to be incorrect for the task of object detection. The model output has dimension (1, 200) and each value in the tensor is a negative number around -2.5. But for an object detection task we ideally need information about the

  • bounding box co-ordinates
  • classes of objects detected in each region/bounding box.

The (1, 200) tensor we get from the model does not provide this information.

The model README says that it is an implementation of this paper, but the output does not seem to accomplish the task of object detection.

Can someone verify this?

Axis attribute of concat is missing for densenet121 and inception_v2

Hi,

I think the axis attribute is missing from the 16th node of the densenet121 ONNX model.

The same goes for inception_v2: for the 53rd node, axis is not provided for the Concat node.

According to the doc, the axis attribute is required for the Concat node:

Concat
Concatenate a list of tensors into a single tensor
Versioning
This operator is used if you are using version 1 of the default ONNX operator set until the next BC-breaking change to this operator; e.g., it will be used if your protobuf has:

opset_import {
version = 1
}
Attributes

axis : int (required)
Which axis to concat on

How is it possible that the model is used in the ONNX test suite with this kind of bug? Any ideas?

SSD mobilenet?

Any chance someone has an SSD Mobilenet onnx model yet?
Are all the operations supported for an SSD?

Error with emotion_ferplus loaded in Caffe2

Hi,

I used

convert-onnx-to-caffe2 model.onnx -o predict_net.pb --init-net-output init_net.pb

But when trying to run it on an input, I get:

  File "/Users/louisabraham/miniconda3/lib/python3.6/site-packages/caffe2/python/workspace.py", line 212, in RunNetOnce
    StringifyProto(net),
  File "/Users/louisabraham/miniconda3/lib/python3.6/site-packages/caffe2/python/workspace.py", line 192, in CallWithExceptionIntercept
    return func(*args, **kwargs)
RuntimeError: [enforce fail at conv_pool_op_base.h:617] in_size + *pad_head + *pad_tail >= dkernel. 1 vs 3 Error from operator: 
input: "ReLU2700_Output_0" input: "Parameter675" output: "Convolution2714_Output_0" name: "Convolution2714" type: "Conv" arg { name: "kernels" ints: 3 ints: 3 } arg { name: "strides" ints: 1 ints: 1 } arg { name: "auto_pad" s: "SAME_UPPER" } arg { name: "group" i: 1 } arg { name: "dilations" ints: 1 ints: 1 } device_option { device_type: 0 cuda_gpu_id: 0 }

I think my input shape is correct (otherwise I would get an error on the first layer, input: "Input2505").

Input test data should have a valid name

How to reproduce:

Download the squeezenet model from https://s3.amazonaws.com/download.onnx/models/opset_7/squeezenet.tar.gz
Unzip it and run the script below with squeezenet\test_data_set_0\input_0.pb:

import sys

from onnx import TensorProto

proto = TensorProto()

# Read the serialized TensorProto from the file given on the command line.
with open(sys.argv[1], "rb") as f:
    proto.ParseFromString(f.read())
print(proto.name)

The output is empty.

We need this information to know which input this tensor is for.
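
Agreed. When the test tensors are generated, the name could be set at creation time, e.g. (a sketch with hypothetical names and shapes):

import numpy as np
from onnx import numpy_helper

data = np.random.rand(1, 3, 224, 224).astype(np.float32)
tensor = numpy_helper.from_array(data, name="data_0")  # name travels with the proto
with open("input_0.pb", "wb") as f:
    f.write(tensor.SerializeToString())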

AttributeError: 'NoneType' object has no attribute 'run'

I got this sample code from https://github.com/onnx/models and wanted to see it running with an ONNX model. I took resnet50 as an example and got the following error.

import numpy as np
import onnx
import caffe2
from onnx.backend.base import Backend

model_path = '/home/onnx/Downloads/resnet50/model.pb'
npz_path = '/home/onnx/Downloads/resnet50/test_data_0.npz'
model = onnx.load(model_path)
sample = np.load(npz_path, encoding='bytes')

inputs = list(sample['inputs'])
outputs = list(sample['outputs'])
np.testing.assert_almost_equal(outputs, Backend.run_model(model, inputs))


AttributeError Traceback (most recent call last)
in ()
12 outputs = list(sample['outputs'])
13
---> 14 np.testing.assert_almost_equal(outputs,Backend.run_model(model, inputs))

/anaconda/lib/python2.7/site-packages/onnx/backend/base.pyc in run_model(cls, model, inputs, device, **kwargs)
55 @classmethod
56 def run_model(cls, model, inputs, device='CPU', **kwargs):
---> 57 cls.prepare(model, device, **kwargs).run(inputs)
58
59 @classmethod

AttributeError: 'NoneType' object has no attribute 'run'

Let me know how to overcome this error. Why is it returning 'NoneType'?
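
The NoneType comes from calling run_model on the abstract base class: onnx.backend.base.Backend.prepare is a stub that returns None, so .run is invoked on None. A concrete backend must be used instead; a sketch with the Caffe2 backend (assuming it is installed):

import numpy as np
import onnx
import caffe2.python.onnx.backend as backend  # a concrete backend, not the abstract base

model = onnx.load('/home/onnx/Downloads/resnet50/model.pb')
sample = np.load('/home/onnx/Downloads/resnet50/test_data_0.npz', encoding='bytes')
inputs = list(sample['inputs'])
outputs = list(sample['outputs'])
rep = backend.prepare(model, device='CPU')
np.testing.assert_almost_equal(outputs, rep.run(inputs))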

Can't download model file opset_4/resnet50.tar.gz

I am using onnx 1.1.1, where onnx.defs.onnx_opset_version() returns 4, and thus Caffe2 tries to download https://s3.amazonaws.com/download.onnx/models/opset_4/resnet50.tar.gz

Downloading this file gives the following error:

$ wget https://s3.amazonaws.com/download.onnx/models/opset_4/resnet50.tar.gz
--2018-05-01 15:11:56--  https://s3.amazonaws.com/download.onnx/models/opset_4/resnet50.tar.gz
Resolving s3.amazonaws.com (s3.amazonaws.com)... 52.216.99.21
Connecting to s3.amazonaws.com (s3.amazonaws.com)|52.216.99.21|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2018-05-01 15:11:56 ERROR 403: Forbidden.

MNIST example fails ONNX checker

import onnx
model = onnx.load_model('mnist.onnx')
model = onnx.checker.check_model(model)

resulting error

---------------------------------------------------------------------------
ValidationError                           Traceback (most recent call last)
<ipython-input-2-4ef34880ef37> in <module>()
      1 model = onnx.load_model('mnist.onnx')
----> 2 model = onnx.checker.check_model(model)

~/py36/lib/python3.6/site-packages/onnx/checker.py in check_model(model)
     80 
     81 def check_model(model):  # type: (ModelProto) -> None
---> 82     C.check_model(model.SerializeToString())
     83 
     84 

ValidationError: model with IR version >= 3 must specify opset_import for ONNX
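
A workaround sketch is to add the missing opset_import entry before re-checking (the version below is a guess; use the opset the model was actually exported with):

import onnx

model = onnx.load_model('mnist.onnx')
if not model.opset_import:
    opset = model.opset_import.add()
    opset.domain = ''   # the default ONNX operator set
    opset.version = 8   # hypothetical; pick the opset the model targets
onnx.checker.check_model(model)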

Same node output name in densenet121

The net has two nodes with the same output name (the name is not unique; see below). This is not a valid model, right?
node {
input: "conv1"
input: "conv1/bn_scale"
input: "conv1/bn_bias"
input: "conv1/bn_mean"
input: "conv1/bn_var"
output: "conv1/bn"
name: ""
op_type: "SpatialBN"
attribute {
name: "is_test"
i: 1
}
attribute {
name: "epsilon"
f: 1e-05
}
attribute {
name: "consumed_inputs"
ints: 0
ints: 0
ints: 0
ints: 1
ints: 1
}
}

node {
input: "conv1/bn_internal"
input: "conv1/bn_b"
output: "conv1/bn"
name: ""
op_type: "Add"
attribute {
name: "broadcast"
i: 1
}
attribute {
name: "axis"
i: 1
}
}

Tiny_Yolov2 failed ONNX model checker

Looks like tiny_yolov2 only has one input (image). For it to be a valid model, all the initializers should be in the input too. @houseroad

Error log:

E       ValidationError: convolution.W in initializer but not in graph input

../../../../venv/lib/python2.7/site-packages/onnx/checker.py:76: ValidationError
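
Under the old IR rules every initializer also had to appear in graph.input, so a repair sketch (hypothetical path; shapes are taken from each initializer's dims) would be:

import onnx
from onnx import helper

model = onnx.load("tiny_yolov2/model.onnx")  # hypothetical path
input_names = {vi.name for vi in model.graph.input}
for init in model.graph.initializer:
    if init.name not in input_names:
        # Declare the initializer as a graph input with matching type/shape.
        model.graph.input.append(
            helper.make_tensor_value_info(init.name, init.data_type, list(init.dims))
        )
onnx.checker.check_model(model)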

shufflenet test data seems to be mistaken

hello

Recently I've been playing with ONNX models using the ncnn converter.
I found that all the models in this repo can be processed with the bundled test data without errors, except the shufflenet model, which always produces results different from the expected test output.

I also found that the three test data files in the shufflenet model archive are identical to each other:

test_data_0.npy
test_data_1.npy
test_data_2.npy

Is there anything wrong with the test data?
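
A quick way to confirm the duplication is to hash the files (a sketch; file names as in the archive):

import hashlib

for i in range(3):
    with open("test_data_%d.npy" % i, "rb") as f:
        # Identical digests mean byte-identical files.
        print(i, hashlib.md5(f.read()).hexdigest())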

Error in Emotion_ferplus output labels

The labels mentioned in the model's README are:
emotion_table = {'neutral':0, 'happiness':1, 'surprise':2, 'sadness':3, 'anger':4, 'disgust':5, 'fear':6, 'contempt':7}

They do not match the labels mentioned in the dataset source:
(0=Angry, 1=Disgust, 2=Fear, 3=Happy, 4=Sad, 5=Surprise, 6=Neutral)

Can this be clarified, @ebarsoum?

Source of the models

Hi,

Where can I find the model source (in the original framework) used to create the ONNX model?
For the ResNet50 ONNX model:
what pre-processing needs to be done on the images before I can feed them to the network as NPZ files? (Pre-processing on images from LMDB or BMP with values 0-255.)

Thx,
Barak

Bug in inception_v1 model ?

Hi,

I have a question about the onnx model inception_v1.

The last AveragePool node has the following proto:

graph.node[-5]
input: "inception_5b/output_1"
output: "pool5/7x7_s1_1"
name: ""
op_type: "AveragePool"
attribute {
name: "strides"
ints: 1
ints: 1
type: INTS
}
attribute {
name: "pads"
ints: 0
ints: 0
ints: 1
ints: 1
type: INTS
}
attribute {
name: "kernel_shape"
ints: 7
ints: 7
type: INTS
}

The input tensor shape is [1, 1024, 7, 7]. So an average pooling layer with kernel=7, stride=1, and padding of 0 before and 1 after will give an output tensor of shape [1, 1024, 2, 2].

This doesn't seem to make sense at the end of a network, where the output should be of shape [1, 1024, 1, 1].

So, in my opinion, the padding should be zero:
attribute {
name: "pads"
ints: 0
ints: 0
ints: 0
ints: 0
type: INTS
}
As you can see here for example : http://ethereon.github.io/netscope/#/preset/googlenet

Can someone tell me where this model was generated from?

ResNet50 Labels Giving Wrong Results

Hi everyone, I have successfully imported the ONNX model for ResNet50 (I tried both the master and release versions), with labels from here: http://image-net.org/challenges/LSVRC/2015/browse-synsets (copied, pasted, and parsed in Python), and I also tried the ImageNet 2012 list: http://www.image-net.org/archive/words.txt.

If I feed in a golden retriever, it gives me .999 confidence... for a rugby ball. Does anyone know where the correct label set is? The evaluated output does not correspond to the correct label. Thanks.

Graph inputs vs value_info

I'm looking at some of the models gathered in this repository (SqueezeNet, DenseNet-121, ResNet-50) and I noticed a pattern.

All parameters of the graph are specified as input fields and none as value_info fields. This mixes model inputs and trainable parameters into a single set and I'm not sure how to distinguish between them.

Separating model inputs and trainable variables may be useful for frameworks that distinguish the concepts of placeholder and Variable.

Would you agree or am I missing something?
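
Until the spec separates them, a common heuristic is to treat a graph input as a trainable parameter exactly when an initializer with the same name exists; a sketch (hypothetical path):

import onnx

model = onnx.load("squeezenet/model.onnx")  # hypothetical path
param_names = {init.name for init in model.graph.initializer}
real_inputs = [vi for vi in model.graph.input if vi.name not in param_names]
print([vi.name for vi in real_inputs])  # the true model inputs, e.g. the image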
