
onnx-coreml's Introduction


Open Neural Network Exchange (ONNX) is an open ecosystem that empowers AI developers to choose the right tools as their project evolves. ONNX provides an open source format for AI models, both deep learning and traditional ML. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types. Currently we focus on the capabilities needed for inferencing (scoring).

ONNX is widely supported and can be found in many frameworks, tools, and hardware. Enabling interoperability between different frameworks and streamlining the path from research to production helps increase the speed of innovation in the AI community. We invite the community to join us and further evolve ONNX.

Use ONNX

Learn about the ONNX spec

Programming utilities for working with ONNX Graphs

Contribute

ONNX is a community project and the open governance model is described here. We encourage you to join the effort and contribute feedback, ideas, and code. You can participate in the Special Interest Groups and Working Groups to shape the future of ONNX.

Check out our contribution guide to get started.

If you think some operator should be added to the ONNX specification, please read this document.

Community meetings

The schedules of the regular meetings of the Steering Committee, the working groups and the SIGs can be found here

Community Meetups are held at least once a year. Content from previous community meetups is available online.

Discuss

We encourage you to open Issues, or use Slack (if you have not joined yet, please use this link to join the group) for more real-time discussion.

Follow Us

Stay up to date with the latest ONNX news. [Facebook] [Twitter]

Roadmap

A roadmap process takes place every year. More details can be found here

Installation

Official Python packages

ONNX release packages are published on PyPI.

pip install onnx  # or pip install onnx[reference] for optional reference implementation dependencies

ONNX weekly packages are published in PyPI to enable experimentation and early testing.
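At the time of writing, the weekly build ships under a separate package name (onnx-weekly); if that is still the case, installing it looks like:

pip install onnx-weekly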

vcpkg packages

onnx is on the maintenance list of vcpkg, so you can easily use vcpkg to build and install it.

git clone https://github.com/microsoft/vcpkg.git
cd vcpkg
./bootstrap-vcpkg.bat # For powershell
./bootstrap-vcpkg.sh # For bash
./vcpkg install onnx

Conda packages

A binary build of ONNX is available from Conda, in conda-forge:

conda install -c conda-forge onnx

Build ONNX from Source

Before building from source, uninstall any existing version of ONNX: pip uninstall onnx.

A compiler supporting C++17 or higher is required to build ONNX from source. Users can still specify their own CMAKE_CXX_STANDARD version when building ONNX.

If you don't have protobuf installed, ONNX will download and build protobuf internally as part of the ONNX build.

Alternatively, you can manually install the protobuf C/C++ libraries and tools at a specific version before proceeding. Then, depending on how you installed protobuf, set the environment variable CMAKE_ARGS to "-DONNX_USE_PROTOBUF_SHARED_LIBS=ON" or "-DONNX_USE_PROTOBUF_SHARED_LIBS=OFF". For example, you may need to run the following command:

Linux:

export CMAKE_ARGS="-DONNX_USE_PROTOBUF_SHARED_LIBS=ON"

Windows:

set CMAKE_ARGS="-DONNX_USE_PROTOBUF_SHARED_LIBS=ON"

Whether to use ON or OFF depends on the kind of protobuf library you have: shared libraries are files ending in *.dll/*.so/*.dylib, while static libraries end in *.a/*.lib. This is determined by how you obtained your protobuf library and how it was built. The default is OFF, so you don't need to run the commands above if you prefer a static protobuf library.

Windows

If you are building ONNX from source, it is recommended that you also build Protobuf locally as a static library. The version distributed with conda-forge is a DLL, but ONNX expects it to be a static library. Building protobuf locally also lets you control the version of protobuf. The tested and recommended version is 3.21.12.

The instructions in this README assume you are using Visual Studio. It is recommended that you run all the commands from a shell started from "x64 Native Tools Command Prompt for VS 2019" and keep the build system generator for cmake (e.g., cmake -G "Visual Studio 16 2019") consistent while building protobuf as well as ONNX.

You can get protobuf by running the following commands:

git clone https://github.com/protocolbuffers/protobuf.git
cd protobuf
git checkout v21.12
cd cmake
cmake -G "Visual Studio 16 2019" -A x64 -DCMAKE_INSTALL_PREFIX=<protobuf_install_dir> -Dprotobuf_MSVC_STATIC_RUNTIME=OFF -Dprotobuf_BUILD_SHARED_LIBS=OFF -Dprotobuf_BUILD_TESTS=OFF -Dprotobuf_BUILD_EXAMPLES=OFF .
msbuild protobuf.sln /m /p:Configuration=Release
msbuild INSTALL.vcxproj /p:Configuration=Release

It will then be built as a static library and installed to <protobuf_install_dir>. Please add the bin directory (which contains protoc.exe) to your PATH.

set CMAKE_PREFIX_PATH=<protobuf_install_dir>;%CMAKE_PREFIX_PATH%

Please note: if your protobuf_install_dir contains spaces, do not add quotation marks around it.

Alternative: if you don't want to change your PATH, you can set ONNX_PROTOC_EXECUTABLE instead.

set CMAKE_ARGS=-DONNX_PROTOC_EXECUTABLE=<full_path_to_protoc.exe>

Then you can build ONNX as:

git clone https://github.com/onnx/onnx.git
cd onnx
git submodule update --init --recursive
# prefer lite proto
set CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON
pip install -e . -v

Linux

First, you need to install protobuf. The minimum Protobuf compiler (protoc) version required by ONNX is 3.6.1. Please note that old protoc versions might not work with CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON.

Ubuntu 20.04 (and newer) users may choose to install protobuf via

apt-get install python3-pip python3-dev libprotobuf-dev protobuf-compiler

In this case, it is required to add -DONNX_USE_PROTOBUF_SHARED_LIBS=ON to CMAKE_ARGS in the ONNX build step.

A more general way is to build and install it from source. See the instructions below for more details.

Installing Protobuf from source

Debian/Ubuntu:

  git clone https://github.com/protocolbuffers/protobuf.git
  cd protobuf
  git checkout v21.12
  git submodule update --init --recursive
  mkdir build_source && cd build_source
  cmake ../cmake -Dprotobuf_BUILD_SHARED_LIBS=OFF -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_POSITION_INDEPENDENT_CODE=ON -Dprotobuf_BUILD_TESTS=OFF -DCMAKE_BUILD_TYPE=Release
  make -j$(nproc)
  make install

CentOS/RHEL/Fedora:

  git clone https://github.com/protocolbuffers/protobuf.git
  cd protobuf
  git checkout v21.12
  git submodule update --init --recursive
  mkdir build_source && cd build_source
  cmake ../cmake  -DCMAKE_INSTALL_LIBDIR=lib64 -Dprotobuf_BUILD_SHARED_LIBS=OFF -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_POSITION_INDEPENDENT_CODE=ON -Dprotobuf_BUILD_TESTS=OFF -DCMAKE_BUILD_TYPE=Release
  make -j$(nproc)
  make install

Here "-DCMAKE_POSITION_INDEPENDENT_CODE=ON" is crucial. By default static libraries are built without "-fPIC" flag, they are not position independent code. But shared libraries must be position independent code. Python C/C++ extensions(like ONNX) are shared libraries. So if a static library was not built with "-fPIC", it can't be linked to such a shared library.

Once the build is successful, update your PATH to include the protobuf paths.

Then you can build ONNX as:

git clone https://github.com/onnx/onnx.git
cd onnx
git submodule update --init --recursive
# Optional: prefer lite proto
export CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON
pip install -e . -v

Mac

export NUM_CORES=`sysctl -n hw.ncpu`
brew update
brew install autoconf && brew install automake
wget https://github.com/protocolbuffers/protobuf/releases/download/v21.12/protobuf-cpp-3.21.12.tar.gz
tar -xvf protobuf-cpp-3.21.12.tar.gz
cd protobuf-3.21.12
mkdir build_source && cd build_source
cmake ../cmake -Dprotobuf_BUILD_SHARED_LIBS=OFF -DCMAKE_POSITION_INDEPENDENT_CODE=ON -Dprotobuf_BUILD_TESTS=OFF -DCMAKE_BUILD_TYPE=Release
make -j${NUM_CORES}
make install

Once the build is successful, update your PATH to include the protobuf paths.

Then you can build ONNX as:

git clone --recursive https://github.com/onnx/onnx.git
cd onnx
# Optional: prefer lite proto
export CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON
pip install -e . -v

Verify Installation

After installation, run

python -c "import onnx"

to verify it works.
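For a slightly stronger check, you can load a model and run the checker on it. A minimal sketch, assuming some model.onnx file on disk (the path is a placeholder):

import onnx

model = onnx.load("model.onnx")                  # placeholder path to any ONNX model
onnx.checker.check_model(model)                  # raises if the model is structurally invalid
print(onnx.helper.printable_graph(model.graph))  # prints a readable graph summary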

Common Build Options

For the full list, refer to CMakeLists.txt.

Environment variables

  • USE_MSVC_STATIC_RUNTIME should be 1 or 0, not ON or OFF. When set to 1, onnx links statically to the runtime library. Default: USE_MSVC_STATIC_RUNTIME=0

  • DEBUG should be 0 or 1. When set to 1, onnx is built in debug mode. For debug versions of the dependencies, you need to open the CMakeLists file and append a letter d at the end of the package name lines; for example, NAMES protobuf-lite would become NAMES protobuf-lited. Default: DEBUG=0

CMake variables

  • ONNX_USE_PROTOBUF_SHARED_LIBS should be ON or OFF. It determines how onnx links to the protobuf libraries. Default: ONNX_USE_PROTOBUF_SHARED_LIBS=OFF (with USE_MSVC_STATIC_RUNTIME=0).

    • When set to ON - onnx will dynamically link to protobuf shared libs, PROTOBUF_USE_DLLS will be defined as described here, Protobuf_USE_STATIC_LIBS will be set to OFF and USE_MSVC_STATIC_RUNTIME must be 0.
    • When set to OFF - onnx will link statically to protobuf, and Protobuf_USE_STATIC_LIBS will be set to ON (to force the use of the static libraries) and USE_MSVC_STATIC_RUNTIME can be 0 or 1.
  • ONNX_USE_LITE_PROTO should be ON or OFF. When set to ON onnx uses lite protobuf instead of full protobuf. Default: ONNX_USE_LITE_PROTO=OFF

  • ONNX_WERROR should be ON or OFF. When set to ON warnings are treated as errors. Default: ONNX_WERROR=OFF in local builds, ON in CI and release pipelines.

Common Errors

  • Note: the import onnx command does not work from the source checkout directory; in this case you'll see ModuleNotFoundError: No module named 'onnx.onnx_cpp2py_export'. Change into another directory to fix this error.

  • If you run into any issues while building Protobuf as a static library, please ensure that shared Protobuf libraries, like libprotobuf, are not installed on your device or in the conda environment. If these shared libraries exist, either remove them to build Protobuf from source as a static library, or skip the Protobuf build from source to use the shared version directly.

  • If you run into any issues while building ONNX from source, and your error message reads, Could not find pythonXX.lib, ensure that you have consistent Python versions for common commands, such as python and pip. Clean all existing build files and rebuild ONNX again.

Testing

ONNX uses pytest as its test driver. To run the tests, you first need to install pytest:

pip install pytest nbval

After installing pytest, use the following command to run tests.

pytest

Development

Check out the contributor guide for instructions.

License

Apache License v2.0

Code of Conduct

ONNX Open Source Code of Conduct


onnx-coreml's Issues

Error converting network layer with mean operation

I'm trying to convert a layer which uses a simple mean function as the last step in the forward function. I'm getting this error during conversion. Without the mean I have no problems. Any thoughts?

TypeError: Error while converting op of type: ReduceMean. Error message: Unable to translate axes attribute to CoreML axis parameter for range(0, 4)
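For reference, a minimal sketch of the kind of module described, assuming a forward pass that ends in an all-axes mean (the module and filename are hypothetical; an all-axes mean exports as ReduceMean over axes 0-3, which triggers the error above):

import torch
import torch.nn as nn

class MeanTail(nn.Module):
    # Hypothetical repro: a conv followed by a mean over all four axes.
    def __init__(self):
        super(MeanTail, self).__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x):
        # Reducing over all axes exports as ReduceMean with axes=(0, 1, 2, 3).
        return self.conv(x).mean()

torch.onnx.export(MeanTail(), torch.rand(1, 3, 32, 32), "mean_tail.onnx")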

Unsupported ONNX ops of type: Gather,Tile

I was able to convert the model from this source code (https://github.com/yunjey/StarGAN) to onnx.

However, an error occurred during the conversion from onnx to coreml.
This is the error output:

Traceback (most recent call last):
  File "onnx2coreml.py", line 13, in <module>
    mlmodel = convert(model_in)
  File "hoge/lib/python3.6/site-packages/onnx_coreml/converter.py", line 449, in convert
    _check_unsupported_ops(graph.nodes)
  File "hoge/lib/python3.6/site-packages/onnx_coreml/converter.py", line 140, in _check_unsupported_ops
    ','.join(unsupported_op_types)))
NotImplementedError: Unsupported ONNX ops of type: Gather,Tile

This is the source code.

import onnx
from onnx_coreml import convert

model_in = "Generetaor.onnx"
model_out = "Generator.mlmodel"

model = onnx.load(model_in)
print(onnx.helper.printable_graph(model.graph))
mlmodel = convert(model_in)

This is the model.graph

print(onnx.helper.printable_graph(model.graph))

graph torch-jit-export (
  %0[FLOAT, 16x3x256x256]
  %1[FLOAT, 16x10]
) initializers (
  %2[FLOAT, 64x13x7x7]
  %3[FLOAT, 64]
  %4[FLOAT, 64]
  %5[FLOAT, 64]
  %6[FLOAT, 64]
  %7[INT64, scalar]
  %8[FLOAT, 128x64x4x4]
  %9[FLOAT, 128]
  %10[FLOAT, 128]
  %11[FLOAT, 128]
  %12[FLOAT, 128]
  %13[INT64, scalar]
  %14[FLOAT, 256x128x4x4]
  %15[FLOAT, 256]
  %16[FLOAT, 256]
  %17[FLOAT, 256]
  %18[FLOAT, 256]
  %19[INT64, scalar]
  %20[FLOAT, 256x256x3x3]
  %21[FLOAT, 256]
  %22[FLOAT, 256]
  %23[FLOAT, 256]
  %24[FLOAT, 256]
  %25[INT64, scalar]
  %26[FLOAT, 256x256x3x3]
  %27[FLOAT, 256]
  %28[FLOAT, 256]
  %29[FLOAT, 256]
  %30[FLOAT, 256]
  %31[INT64, scalar]
  %32[FLOAT, 256x256x3x3]
  %33[FLOAT, 256]
  %34[FLOAT, 256]
  %35[FLOAT, 256]
  %36[FLOAT, 256]
  %37[INT64, scalar]
  %38[FLOAT, 256x256x3x3]
  %39[FLOAT, 256]
  %40[FLOAT, 256]
  %41[FLOAT, 256]
  %42[FLOAT, 256]
  %43[INT64, scalar]
  %44[FLOAT, 256x256x3x3]
  %45[FLOAT, 256]
  %46[FLOAT, 256]
  %47[FLOAT, 256]
  %48[FLOAT, 256]
  %49[INT64, scalar]
  %50[FLOAT, 256x256x3x3]
  %51[FLOAT, 256]
  %52[FLOAT, 256]
  %53[FLOAT, 256]
  %54[FLOAT, 256]
  %55[INT64, scalar]
  %56[FLOAT, 256x256x3x3]
  %57[FLOAT, 256]
  %58[FLOAT, 256]
  %59[FLOAT, 256]
  %60[FLOAT, 256]
  %61[INT64, scalar]
  %62[FLOAT, 256x256x3x3]
  %63[FLOAT, 256]
  %64[FLOAT, 256]
  %65[FLOAT, 256]
  %66[FLOAT, 256]
  %67[INT64, scalar]
  %68[FLOAT, 256x256x3x3]
  %69[FLOAT, 256]
  %70[FLOAT, 256]
  %71[FLOAT, 256]
  %72[FLOAT, 256]
  %73[INT64, scalar]
  %74[FLOAT, 256x256x3x3]
  %75[FLOAT, 256]
  %76[FLOAT, 256]
  %77[FLOAT, 256]
  %78[FLOAT, 256]
  %79[INT64, scalar]
  %80[FLOAT, 256x256x3x3]
  %81[FLOAT, 256]
  %82[FLOAT, 256]
  %83[FLOAT, 256]
  %84[FLOAT, 256]
  %85[INT64, scalar]
  %86[FLOAT, 256x256x3x3]
  %87[FLOAT, 256]
  %88[FLOAT, 256]
  %89[FLOAT, 256]
  %90[FLOAT, 256]
  %91[INT64, scalar]
  %92[FLOAT, 256x128x4x4]
  %93[FLOAT, 128]
  %94[FLOAT, 128]
  %95[FLOAT, 128]
  %96[FLOAT, 128]
  %97[INT64, scalar]
  %98[FLOAT, 128x64x4x4]
  %99[FLOAT, 64]
  %100[FLOAT, 64]
  %101[FLOAT, 64]
  %102[FLOAT, 64]
  %103[INT64, scalar]
  %104[FLOAT, 3x64x7x7]
) {
  %105 = Constant[value = <Scalar Tensor []>]()
  %106 = Shape(%1)
  %107 = Gather[axis = 0](%106, %105)
  %108 = Constant[value = <Scalar Tensor []>]()
  %109 = Shape(%1)
  %110 = Gather[axis = 0](%109, %108)
  %111 = Constant[value = <Scalar Tensor []>]()
  %112 = Constant[value = <Scalar Tensor []>]()
  %113 = Unsqueeze[axes = [0]](%107)
  %114 = Unsqueeze[axes = [0]](%110)
  %115 = Unsqueeze[axes = [0]](%111)
  %116 = Unsqueeze[axes = [0]](%112)
  %117 = Concat[axis = 0](%113, %114, %115, %116)
  %118 = Reshape(%1, %117)
  %119 = Constant[value = <Scalar Tensor []>]()
  %120 = Shape(%0)
  %121 = Gather[axis = 0](%120, %119)
  %122 = Constant[value = <Scalar Tensor []>]()
  %123 = Shape(%0)
  %124 = Gather[axis = 0](%123, %122)
  %125 = Constant[value = <Scalar Tensor []>]()
  %126 = Constant[value = <Scalar Tensor []>]()
  %127 = Unsqueeze[axes = [0]](%125)
  %128 = Unsqueeze[axes = [0]](%126)
  %129 = Unsqueeze[axes = [0]](%121)
  %130 = Unsqueeze[axes = [0]](%124)
  %131 = Concat[axis = 0](%127, %128, %129, %130)
  %132 = Tile(%118, %131)
  %133 = Concat[axis = 1](%0, %132)
  %134 = Conv[dilations = [1, 1], group = 1, kernel_shape = [7, 7], pads = [3, 3, 3, 3], strides = [1, 1]](%133, %2)
  %135 = InstanceNormalization[epsilon = 9.99999974737875e-06](%134, %3, %4)
  %136 = Relu(%135)
  %137 = Conv[dilations = [1, 1], group = 1, kernel_shape = [4, 4], pads = [1, 1, 1, 1], strides = [2, 2]](%136, %8)
  %138 = InstanceNormalization[epsilon = 9.99999974737875e-06](%137, %9, %10)
  %139 = Relu(%138)
  %140 = Conv[dilations = [1, 1], group = 1, kernel_shape = [4, 4], pads = [1, 1, 1, 1], strides = [2, 2]](%139, %14)
  %141 = InstanceNormalization[epsilon = 9.99999974737875e-06](%140, %15, %16)
  %142 = Relu(%141)
  %143 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%142, %20)
  %144 = InstanceNormalization[epsilon = 9.99999974737875e-06](%143, %21, %22)
  %145 = Relu(%144)
  %146 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%145, %26)
  %147 = InstanceNormalization[epsilon = 9.99999974737875e-06](%146, %27, %28)
  %148 = Add(%142, %147)
  %149 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%148, %32)
  %150 = InstanceNormalization[epsilon = 9.99999974737875e-06](%149, %33, %34)
  %151 = Relu(%150)
  %152 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%151, %38)
  %153 = InstanceNormalization[epsilon = 9.99999974737875e-06](%152, %39, %40)
  %154 = Add(%148, %153)
  %155 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%154, %44)
  %156 = InstanceNormalization[epsilon = 9.99999974737875e-06](%155, %45, %46)
  %157 = Relu(%156)
  %158 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%157, %50)
  %159 = InstanceNormalization[epsilon = 9.99999974737875e-06](%158, %51, %52)
  %160 = Add(%154, %159)
  %161 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%160, %56)
  %162 = InstanceNormalization[epsilon = 9.99999974737875e-06](%161, %57, %58)
  %163 = Relu(%162)
  %164 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%163, %62)
  %165 = InstanceNormalization[epsilon = 9.99999974737875e-06](%164, %63, %64)
  %166 = Add(%160, %165)
  %167 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%166, %68)
  %168 = InstanceNormalization[epsilon = 9.99999974737875e-06](%167, %69, %70)
  %169 = Relu(%168)
  %170 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%169, %74)
  %171 = InstanceNormalization[epsilon = 9.99999974737875e-06](%170, %75, %76)
  %172 = Add(%166, %171)
  %173 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%172, %80)
  %174 = InstanceNormalization[epsilon = 9.99999974737875e-06](%173, %81, %82)
  %175 = Relu(%174)
  %176 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%175, %86)
  %177 = InstanceNormalization[epsilon = 9.99999974737875e-06](%176, %87, %88)
  %178 = Add(%172, %177)
  %179 = ConvTranspose[dilations = [1, 1], group = 1, kernel_shape = [4, 4], pads = [1, 1, 1, 1], strides = [2, 2]](%178, %92)
  %180 = InstanceNormalization[epsilon = 9.99999974737875e-06](%179, %93, %94)
  %181 = Relu(%180)
  %182 = ConvTranspose[dilations = [1, 1], group = 1, kernel_shape = [4, 4], pads = [1, 1, 1, 1], strides = [2, 2]](%181, %98)
  %183 = InstanceNormalization[epsilon = 9.99999974737875e-06](%182, %99, %100)
  %184 = Relu(%183)
  %185 = Conv[dilations = [1, 1], group = 1, kernel_shape = [7, 7], pads = [3, 3, 3, 3], strides = [1, 1]](%184, %104)
  %186 = Tanh(%185)
  return %186
}

Converted model has issues when importing into xcode

Hello,

I successfully exported a custom onnx model into a coreml model. However, when I import it into xcode I have the following message:

validator error: Layer '133' consumes an input named '132' which is not present in this network.

I checked the spec of the model and it looks fine:


from onnx_coreml import convert  # model below is the ONNX model loaded earlier

input_names = ['ip1', 'ip2', 'ip3', 'ip4']
output_names = ['preds']
coreml_model = convert(
    model,
    None,
    image_input_names=input_names,
    image_output_names=output_names,
)
coreml_model.save('mvdepth.mlmodel')

spec = coreml_model.get_spec()

The spec:

input {
  name: "ip1"
  type {
    imageType {
      width: 320
      height: 256
      colorSpace: RGB
    }
  }
}
input {
  name: "ip2"
  type {
    imageType {
      width: 320
      height: 256
      colorSpace: RGB
    }
  }
}
input {
  name: "ip3"
  type {
    imageType {
      width: 81920
      height: 3
      colorSpace: GRAYSCALE
    }
  }
}
input {
  name: "ip4"
  type {
    imageType {
      width: 1
      height: 3
      colorSpace: GRAYSCALE
    }
  }
}
output {
  name: "preds"
  type {
    imageType {
      width: 320
      height: 256
      colorSpace: GRAYSCALE
    }
  }
}

Any pointers on where to look to debug this ?

TypeError: ONNX node of type Constant is not supported.

Hi! I am trying to convert a simple feedforward model to CoreML.

Here is my code for reproducing:

from torch import nn, zeros, LongTensor, FloatTensor, onnx
from torch.autograd import Variable
import torch.nn.functional as F
from onnx_coreml import convert

class SimpleNet(nn.Module):
    def __init__(self, input_size):
        super(SimpleNet, self).__init__()
        
        self.embedd = nn.Embedding(input_size, 100)
        self.fc1 = nn.Linear(100, 200)
        self.fc2 = nn.Linear(200, 200)
        self.fc3 = nn.Linear(200, 300)
                
    def forward(self, input):
        r_0 = input.mm(self.embedd.weight)
        r_1 = F.tanh(self.fc1(r_0).div(10.))
        r_2 = F.tanh(self.fc2(r_1).div(10.)) + r_1
        r_3 = F.tanh(self.fc3(r_2).div(10.))
        return r_3

# Create fake one-hot encoded input
indices = [1,2,3,4,5]
input = zeros(1000).type(FloatTensor)

# create one-hot encoded input
for i in indices:
    input[i] = 1
    
input = Variable(input)
input = input.view(1, -1)

# forward pass with fake input
simple_net = SimpleNet(1000)
out = simple_net(input)

torch_out = onnx._export(simple_net,
                         input,
                         "test_simple_net.onnx",
                         export_params=True,
                         verbose=True)

Here is onnx graph:

graph(%0 : Float(1, 1000)
      %1 : Float(1000, 100)
      %2 : Float(200, 100)
      %3 : Float(200)
      %4 : Float(200, 200)
      %5 : Float(200)
      %6 : Float(300, 200)
      %7 : Float(300)) {
  %8 : UNKNOWN_TYPE = Constant[value={0}](), scope: SimpleNet
  %9 : Float(1, 100) = Gemm[alpha=1, beta=0, broadcast=1](%0, %1, %8), scope: SimpleNet
  %10 : Float(1, 200) = Gemm[alpha=1, beta=1, broadcast=1, transB=1](%9, %2, %3), scope: SimpleNet/Linear[fc1]
  %11 : UNKNOWN_TYPE = Constant[value={10}](), scope: SimpleNet
  %12 : Float(1, 200) = Div[broadcast=1](%10, %11), scope: SimpleNet
  %13 : Float(1, 200) = Tanh(%12), scope: SimpleNet
  %14 : Float(1, 200) = Gemm[alpha=1, beta=1, broadcast=1, transB=1](%13, %4, %5), scope: SimpleNet/Linear[fc2]
  %15 : UNKNOWN_TYPE = Constant[value={10}](), scope: SimpleNet
  %16 : Float(1, 200) = Div[broadcast=1](%14, %15), scope: SimpleNet
  %17 : Float(1, 200) = Tanh(%16), scope: SimpleNet
  %18 : Float(1, 200) = Add(%17, %13), scope: SimpleNet
  %19 : Float(1, 300) = Gemm[alpha=1, beta=1, broadcast=1, transB=1](%18, %6, %7), scope: SimpleNet/Linear[fc3]
  %20 : UNKNOWN_TYPE = Constant[value={10}](), scope: SimpleNet
  %21 : Float(1, 300) = Div[broadcast=1](%19, %20), scope: SimpleNet
  %22 : Float(1, 300) = Tanh(%21), scope: SimpleNet
  return (%22);
}

Then I try to convert my model from onnx to coreml:

model_filename = "test_simple_net.onnx"
converted = convert(model_filename)

and got this error:

TypeError                                 Traceback (most recent call last)
<ipython-input-7-15e4a0f3784e> in <module>()
----> 1 converted = convert(model_filename)

/home/d.parpulov/env/venv2/local/lib/python2.7/site-packages/onnx_coreml/_onnx_converter.pyc in convert(model, mode, image_input_names, preprocessing_args, image_output_names, deprocessing_args, class_labels, predicted_feature_name)
    217 
    218     for node in graph.nodes:
--> 219         _convert_node(builder, node)
    220 
    221     if add_deprocess:

/home/d.parpulov/env/venv2/local/lib/python2.7/site-packages/onnx_coreml/_layers.pyc in _convert_node(builder, node)
    388 
    389 def _convert_node(builder, node):
--> 390     converter_fn = _get_node_converter_fn(node)
    391     return converter_fn(builder, node)

/home/d.parpulov/env/venv2/local/lib/python2.7/site-packages/onnx_coreml/_layers.pyc in _get_node_converter_fn(node)
    383     else:
    384         raise TypeError(
--> 385             "ONNX node of type {} is not supported.".format(op_type,)
    386         )
    387 

TypeError: ONNX node of type Constant is not supported.

Could you add support for the ONNX Constant node type?
Thank you!

linear convert error

I converted the .onnx model to a coreml model, and I found the pooling padding mismatch bug, #7.
Then I found a linear layer conversion error: the weight w is missing, with just the bias b present for a linear layer. So I manually added w, and then the return MLModel(builder.spec) raises

RuntimeError: Error compiling model: "Error reading protobuf spec. validator error: Layer '250' consumes a layer named '136' which is not present in this network.".

I tried to solve it by myself, and I found that pytorch.onnx converts Linear into Transpose and Gemm. I dived into the PyTorch export file, and I got stuck in the trace part. So I dived into onnx_coreml.convert, and I got stuck at 'from ..libcoremlpython import _MLModelProxy'; libcoremlpython is a .so file.

It's really a pain to use onnx_coreml. Please spend some time fixing this, or tell me how to solve it.

Key Error: u'Constant339'

I'm trying to convert the emotion_ferplus.onnx model into CoreML.

I installed onnx-coreml using pip. I then downloaded the model and put it into a folder with this python script:

from onnx_coreml import convert
import onnx
modelFile = onnx.load('emotion_ferplus.onnx')
mlmodel = convert(modelFile)
mlmodel.save('emotion_ferplus.mlmodel')

I run the python script but get the error logs:

Traceback (most recent call last):
  File "script_onnx.py", line 4, in <module>
    mlmodel = convert(modelFile)
  File "/Users/Nomad/Library/Python/2.7/lib/python/site-packages/onnx_coreml/converter.py", line 458, in convert
    _convert_node(builder, node, graph, err)
  File "/Users/Nomad/Library/Python/2.7/lib/python/site-packages/onnx_coreml/_operators.py", line 1755, in _convert_node
    return converter_fn(builder, node, graph, err)
  File "/Users/Nomad/Library/Python/2.7/lib/python/site-packages/onnx_coreml/_operators.py", line 246, in _convert_sub
    _convert_broadcast_op(builder, node, graph, err, "ADD")
  File "/Users/Nomad/Library/Python/2.7/lib/python/site-packages/onnx_coreml/_operators.py", line 68, in _convert_broadcast_op
    ranks = [len(graph.onnx_coreml_shape_mapping[input_]) for input_ in node.inputs]
KeyError: u'Constant339'

I'm an iOS developer who is new to the world of machine learning, so I'm probably missing something obvious. Am I supposed to know what the inputs for the model are and pass those in as arguments?

Thanks for the help.

Error when converting pytorch->onnx->coreml

I have a modified resnet-18 model in PyTorch (without BatchNorm).
The flow of conversion: PyTorch -> onnx -> coreml.
PyTorch version: branch 0.3.0, commit 9622eaa

When I try to convert the model from onnx to coreml, I see the following error:

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-2-3393594cc401> in <module>()
      6             deprocessing_args={},
      7             class_labels=None,
----> 8             predicted_feature_name='classLabel')

/Users/mikhail/anaconda/envs/p2/lib/python2.7/site-packages/onnx_coreml/_onnx_converter.pyc in convert(model, mode, image_input_names, preprocessing_args, image_output_names, deprocessing_args, class_labels, predicted_feature_name)
    217 
    218     for node in graph.nodes:
--> 219         _convert_node(builder, node)
    220 
    221     if add_deprocess:

/Users/mikhail/anaconda/envs/p2/lib/python2.7/site-packages/onnx_coreml/_layers.pyc in _convert_node(builder, node)
    394     print('------------------')
    395     converter_fn = _get_node_converter_fn(node)
--> 396     return converter_fn(builder, node)

/Users/mikhail/anaconda/envs/p2/lib/python2.7/site-packages/onnx_coreml/_layers.pyc in _convert_gemm(builder, node)
    304 def _convert_gemm(builder, node):
    305     print(node.attrs)
--> 306     if node.attrs["broadcast"].i != 1 or node.attrs["transB"].i != 1:
    307         raise ValueError(
    308             "Gemm is supported only for inner_product layer"

KeyError: 'transB'

ONNX model attached:
resnet_18.onnx.zip

Developments on slicing

Is there any timeline on extending slicing support? I'm getting this error (converting a PyTorch UNet):

TypeError: Error while converting op of type: Slice. Error message: Only single axis Slice is supported now

and I really don't know how I could get around it.
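One workaround that sometimes helps, sketched under the assumption that the failing node is a single Slice covering several axes: index one axis at a time, so each exported Slice node touches only one axis.

import torch

def crop(x):
    # x[:, 1:3, 2:5] may export as one Slice over axes 1 and 2;
    # chaining the indexing keeps each exported Slice to a single axis.
    x = x[:, 1:3]
    x = x[:, :, 2:5]
    return x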

FC layer without Bias conversion issue

Introduction

Conversion from onnx to coreML fails if we use a Linear (fully connected) layer with no bias. PyTorch generates a graph with the Gemm op if we use a bias, but uses Transpose and MatMul ops if we set no bias. It seems that there is an issue with the MatMul data initialization.


Code to reproduce

# Build a mock model in PyTorch with a bias-free fully connected layer
import numpy as np
import torch
import torch.nn as nn
import torch.onnx as torch_onnx
from onnx_coreml import convert

class Model_issue(nn.Module):
    def __init__(self):
        super(Model_issue, self).__init__()
        self.simple_nn = nn.Sequential(
            nn.Linear(256, 128, bias=False),
            #nn.Conv2d(in_channels=256, out_channels=128, kernel_size=(1,1), stride=1, padding=0, bias=False),
            nn.ReLU(),
        )

        # Initialize weights
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
            elif isinstance(m, nn.Linear):
                size = m.weight.size()
                fan_out = size[0]  # number of rows
                fan_in = size[1]  # number of columns
                variance = np.sqrt(2.0 / (fan_in + fan_out))
                m.weight.data.normal_(0.0, variance)

    def forward(self, x):
        feat1 = self.simple_nn(x)
        return feat1

# Use this an input trace to serialize the model
x_dummy_conv = torch.rand(1, 256, 1, 1)
x_dummy = torch.rand(1, 256)
model_onnx_path = "torch_model_fc_no_bias_issue.onnx"
model = Model_issue()
model.train(False)

# Export the model to an ONNX file
output = torch_onnx.export(model, 
                          x_dummy, 
                          model_onnx_path, 
                          export_params=True,
                          verbose=True)

# Export to coreml
coreml_model = convert(model_onnx_path)
coreml_model.save('./test.mlmodel')

Error Stack

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-38-8ceb5c24bbd5> in <module>()
     44 
     45 # Export to coreml
---> 46 coreml_model = convert(model_onnx_path)
     47 coreml_model.save('./test.mlmodel')

/usr/local/lib/python3.5/dist-packages/onnx_coreml/converter.py in convert(model, mode, image_input_names, preprocessing_args, image_output_names, deprocessing_args, class_labels, predicted_feature_name, add_custom_layers, custom_conversion_functions)
    394     for i, node in enumerate(graph.nodes):
    395         print("%d/%d: Converting Node Type %s" %(i+1, len(graph.nodes), node.op_type))
--> 396         _convert_node(builder, node, graph, err)
    397 
    398     if add_deprocess:

/usr/local/lib/python3.5/dist-packages/onnx_coreml/_operators.py in _convert_node(builder, node, graph, err)
    992 def _convert_node(builder, node, graph, err):  # type: (NeuralNetworkBuilder, Node, Graph, ErrorHandling) -> None
    993     converter_fn = _get_node_converter_fn(builder, node, err)
--> 994     return converter_fn(builder, node, graph, err)

/usr/local/lib/python3.5/dist-packages/onnx_coreml/_operators.py in _convert_matmul(builder, node, graph, err)
    517     else:
    518         err.missing_initializer(node,
--> 519                                 "Weight tensor: {} not found in the graph initializer".format(weight_name, ))
    520 
    521     if len(W.shape) != 2:

/usr/local/lib/python3.5/dist-packages/onnx_coreml/_error_utils.py in missing_initializer(self, node, err_message)
     69         "Missing initializer error in op of type {}, with input name = {}, "
     70         "output name = {}. Error message: {}\n".
---> 71         format(node.op_type, node.inputs[0], node.outputs[0], err_message)
     72       )
     73 

ValueError: Missing initializer error in op of type MatMul, with input name = 0, output name = 3. Error message: Weight tensor: 2 not found in the graph initializer

ONNX file

Including the ONNX with and without the bias (Issue with the no bias version)
onnx_bias_no_bias.zip

Environment

I was able to reproduce the issue on the following case:

  • Pytorch 0.4.1 and 0.4.0
  • Python 3.5
  • onnx-coreml version from repository
  • onnx 1.2.1 and 1.3.0

Installation Error: Cloning into 'third_party/eigen' failed.

While installing from source, an error occurred.

$ git clone --recursive https://github.com/onnx/onnx-coreml.git

Cloning into 'onnx-coreml'...
remote: Counting objects: 1717, done.
remote: Compressing objects: 100% (47/47), done.
remote: Total 1717 (delta 15), reused 34 (delta 1), pack-reused 1669
Receiving objects: 100% (1717/1717), 326.54 KiB | 411.00 KiB/s, done.
Resolving deltas: 100% (877/877), done.
Checking connectivity... done.
......
Cloning into 'third_party/eigen'...
Username for 'https://github.com': naruya
Password for 'https://[email protected]': 
remote: Repository not found.
fatal: repository 'https://github.com/RLovelett/eigen.git/' not found
fatal: clone of 'https://github.com/RLovelett/eigen.git' into submodule path 'third_party/eigen' failed
Failed to recurse into submodule path 'third_party/pytorch'

Indeed, the eigen repository no longer exists.
How do I avoid this error?

iOS ONNXLive Demo Black Screen

I am trying to run the iOS demo featured here:
https://pytorch.org/tutorials/advanced/ONNXLive.html

I build and run the app on my physical iPhone XS (iOS 12.1) and authorize the app to use the camera, and then Xcode reports a debug error (out of memory). I notice the default CoreML models are set up for 1080 x 1080 images, which is large, so I downsize to new 250 x 540 CoreML models per the pytorch tutorial. Then I build and run again; no errors this time, but only a black screen shows on my iPhone, with Xcode showing 95% GPU utilization for 15 minutes and no response from the app. Restarting and replugging the iPhone and Xcode does not solve the problem.


iPhone XS 12.1
Xcode 10.1

Remove `mode` parameter?

It doesn't seem like the mode parameter actually does anything. If I'm correct, it should be removed, as it is misleading.

NIMA (MobileNetV2 architecture) - ONNX 2 CoreML fails

Hi

I'm in the process of converting NIMA, based on the MobileNetV2 architecture (pytorch), to CoreML (the model can be found here: https://modelzoo.co/model/nima).
I did manage to export the model from PyTorch to ONNX. Now the step from ONNX to CoreML is giving the error listed below.

I installed from source (1.1.1).

tkx

p.s.: If it helps, I can also send the onnx file.

Source
import onnx
from onnx_coreml import convert
onnx_model = onnx.load('NIMA_Aesthetics.proto')

mlmodel = convert(onnx_model)

mlmodel.save("NIMA_Aesthetics.mlmodel")

Traceback (most recent call last):

File "", line 1, in
runfile('/xxx/.spyder/NIMA_ONNX2CoreML.py', wdir='/xxx/.spyder')

File "/anaconda2/envs/Pytorch2CoreML/lib/python2.7/site-packages/spyder_kernels/customize/spydercustomize.py", line 668, in runfile
execfile(filename, namespace)

File "/anaconda2/envs/Pytorch2CoreML/lib/python2.7/site-packages/spyder_kernels/customize/spydercustomize.py", line 100, in execfile
builtins.execfile(filename, *where)

File "NIMA_ONNX2CoreML.py", line 10, in
mlmodel = convert(onnx_model)

File "/anaconda2/envs/Pytorch2CoreML/lib/python2.7/site-packages/onnx_coreml/converter.py", line 325, in convert
graph = _prepare_onnx_graph(onnx_model.graph, transformers)

File "/anaconda2/envs/Pytorch2CoreML/lib/python2.7/site-packages/onnx_coreml/converter.py", line 245, in prepare_onnx_graph
graph
= Graph.from_onnx(graph)

File "/anaconda2/envs/Pytorch2CoreML/lib/python2.7/site-packages/onnx_coreml/_graph.py", line 185, in from_onnx
t.name: numpy_helper.to_array(t) for t in graph.initializer

AttributeError: 'NoneType' object has no attribute 'initializer'

PyTorch to CoreML: incorrect input type

Hello,

I am trying to run my PyTorch network on an iOS device. However, I face the issue of an incorrect input format when I put my network in Apple's demo (https://developer.apple.com/documentation/vision/classifying_images_with_vision_and_core_ml), my input is a MultiArray (Double 3 x 224 x 224) and the one used by Apple is Image (Color 224 x 224).

I didn't find a way to get an input Image with all the research I have done. To convert my network to .mlmodel, I save it as a .onnx with torch.onnx._export and then I use this library.

I found some solutions on the internet saying to pass this parameter: image_input_names='input', but it didn't work and I can't find a way to get the correct input type.

Do you have some ideas about how to do this?

Thank you!
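For what it's worth, image_input_names has to contain the exact input name from the exported ONNX graph (often '0' for PyTorch exports), not a friendly label. A hedged sketch, where the filename and input name are assumptions to adapt:

import onnx
from onnx_coreml import convert

model = onnx.load("network.onnx")                  # hypothetical filename
print(onnx.helper.printable_graph(model.graph))    # shows the graph's real input name
mlmodel = convert(model, image_input_names=['0'])  # use that exact name here
mlmodel.save("network.mlmodel")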

TypeError: Error while converting op of type: Concat. Error message: unable to translate constant array shape to CoreML shape

I am receiving the following error message when converting Pytorch 1.0 -> ONNX model to CoreML. It seems to be due to torch.cat() operations. I am using Python 2.7. The ONNX model is available here:

https://storage.googleapis.com/ultralytics/model_concat_error.onnx (35 MB)

34/40: Converting Node Type LeakyRelu
35/40: Converting Node Type Conv
36/40: Converting Node Type Concat
Traceback (most recent call last):
  File "/Users/glennjocher/PycharmProjects/onnx-coreml/onnx_to_coreml.py", line 24, in <module>
    main()
  File "/Users/glennjocher/PycharmProjects/onnx-coreml/onnx_to_coreml.py", line 19, in main
    coreml_model = convert(model_proto, image_input_names=['0'], image_output_names=['156'])
  File "/Users/glennjocher/PycharmProjects/onnx-coreml/onnx_coreml/converter.py", line 463, in convert
    _add_const_inputs_if_required(builder, node, graph, err)
  File "/Users/glennjocher/PycharmProjects/onnx-coreml/onnx_coreml/_operators.py", line 1823, in _add_const_inputs_if_required
    _convert_const(builder, node, graph, err)
  File "/Users/glennjocher/PycharmProjects/onnx-coreml/onnx_coreml/_operators.py", line 1718, in _convert_const
    return err.unsupported_op_configuration(builder, node, graph, "unable to translate constant array shape to CoreML shape")
  File "/Users/glennjocher/PycharmProjects/onnx-coreml/onnx_coreml/_error_utils.py", line 56, in unsupported_op_configuration
    "Error while converting op of type: {}. Error message: {}\n".format(node.op_type, err_message, )
TypeError: Error while converting op of type: Concat. 
Error message: unable to translate constant array shape to CoreML shape

Force output shape

if len( image_output_names) > 0:

Is there any reason not to always try to force an output shape?
If I have an output which is not an image, so I don't want image_output_names to contain its name, but I still want to force the output shape, what should I do? (Right now I add a non-existing output, which is an ugly hack I wish to remove.)

Deprocessing args cause error in convert

I am converting a simple autoencoder from an ONNX model file to coreml using this command:

        mlmodel = convert(args.onnxsavename,
                          image_input_names=['0'],
                          preprocessing_args={'is_bgr':args.bgr,
                                              'blue_bias':args.mean[0],
                                              'green_bias':args.mean[1],
                                              'red_bias':args.mean[2],
                                              'image_scale': args.scale},
                          image_output_names=args.outputnames,
                          deprocessing_args={'is_bgr':args.bgr,
                                             'blue_bias':-1.0*args.mean[0],
                                             'green_bias':-1.0*args.mean[1],
                                             'red_bias':-1.0*args.mean[2],
                                             'image_scale': 1.0/args.scale}
                          )

Where the cmdline args are:

bgr=True, 
channels=3, 
height=28, 
width=28,
inputmodels=[],
mean=[0.0, 0.0, 0.0],
onnxsavename='/tmp/tmp.onnx', 
outputnames=['decodedimages'], 
scale=0.00392156862745098

The ONNX model is defined like this:

graph(%0 : Float(1, 3, 28, 28)
      %1 : Float(16, 3, 3, 3)
      %2 : Float(16)
      %3 : Float(8, 16, 3, 3)
      %4 : Float(8)
      %5 : Float(8, 16, 3, 3)
      %6 : Float(16)
      %7 : Float(16, 8, 5, 5)
      %8 : Float(8)
      %9 : Float(8, 3, 2, 2)
      %10 : Float(3)) {
  %11 : Float(1, 16, 10, 10) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[3, 3]](%0, %1, %2), scope: autoencoder/Sequential[encoder]/Conv2d[0]
  %12 : Float(1, 16, 10, 10) = onnx::Relu(%11), scope: autoencoder/Sequential[encoder]/ReLU[1]
  %13 : Float(1, 16, 5, 5) = onnx::MaxPool[kernel_shape=[2, 2], pads=[0, 0, 0, 0], strides=[2, 2]](%12), scope: autoencoder/Sequential[encoder]/MaxPool2d[2]
  %14 : Float(1, 8, 3, 3) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[2, 2]](%13, %3, %4), scope: autoencoder/Sequential[encoder]/Conv2d[3]
  %15 : Float(1, 8, 3, 3) = onnx::Relu(%14), scope: autoencoder/Sequential[encoder]/ReLU[4]
  %16 : Float(1, 8, 2, 2) = onnx::MaxPool[kernel_shape=[2, 2], pads=[0, 0, 0, 0], strides=[1, 1]](%15), scope: autoencoder/Sequential[encoder]/MaxPool2d[5]
  %17 : Float(1, 16, 5, 5) = onnx::ConvTranspose[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[0, 0, 0, 0], strides=[2, 2]](%16, %5, %6), scope: autoencoder/Sequential[decoder]/ConvTranspose2d[0]
  %18 : Float(1, 16, 5, 5) = onnx::Relu(%17), scope: autoencoder/Sequential[decoder]/ReLU[1]
  %19 : Float(1, 8, 15, 15) = onnx::ConvTranspose[dilations=[1, 1], group=1, kernel_shape=[5, 5], pads=[1, 1, 1, 1], strides=[3, 3]](%18, %7, %8), scope: autoencoder/Sequential[decoder]/ConvTranspose2d[2]
  %20 : Float(1, 8, 15, 15) = onnx::Relu(%19), scope: autoencoder/Sequential[decoder]/ReLU[3]
  %21 : Float(1, 3, 28, 28) = onnx::ConvTranspose[dilations=[1, 1], group=1, kernel_shape=[2, 2], pads=[1, 1, 1, 1], strides=[2, 2]](%20, %9, %10), scope: autoencoder/Sequential[decoder]/ConvTranspose2d[4]
  %22 : Float(1, 3, 28, 28) = onnx::Tanh(%21), scope: autoencoder/Sequential[decoder]/Tanh[5]
  return (%22);
}

And the output of the convert command is:

1/12: Converting Node Type Conv
2/12: Converting Node Type Relu
3/12: Converting Node Type MaxPool
4/12: Converting Node Type Conv
5/12: Converting Node Type Relu
6/12: Converting Node Type MaxPool
7/12: Converting Node Type ConvTranspose
8/12: Converting Node Type Relu
9/12: Converting Node Type ConvTranspose
10/12: Converting Node Type Relu
11/12: Converting Node Type ConvTranspose
12/12: Converting Node Type Tanh
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-8-933176c8e46b> in <module>()
     12 args.savename = 'autoencoder_img_out.mlmodel'
     13 print(args)
---> 14 coreml_convert.convert_model(model, args)

~/TRASH-INC/Otto/pytrain/coreml_convert.py in convert_model(torch_model, args)
     94                                              'green_bias':-1.0*args.mean[1],
     95                                              'red_bias':-1.0*args.mean[2],
---> 96                                              'image_scale': 1.0/args.scale}
     97                           )
     98     else:

/usr/local/lib/python3.6/site-packages/onnx_coreml/converter.py in convert(model, mode, image_input_names, preprocessing_args, image_output_names, deprocessing_args, class_labels, predicted_feature_name, add_custom_layers, custom_conversion_functions)
    444                     builder.spec.description.output[i].shortDescription = 'This output is a sequence'
    445 
--> 446     mlmodel = MLModel(builder.spec)
    447 
    448     # print information about all ops for which custom layers have been added

/usr/local/lib/python3.6/site-packages/coremltools/models/model.py in __init__(self, model)
    213             filename = _tempfile.mktemp(suffix = '.mlmodel')
    214             _save_spec(model, filename)
--> 215             self.__proxy__ = _get_proxy_from_spec(filename)
    216         else:
    217             raise TypeError("Expected model to be a .mlmodel file or a Model_pb2 object")

/usr/local/lib/python3.6/site-packages/coremltools/models/model.py in _get_proxy_from_spec(filename)
    101             return None
    102         try:
--> 103             return _MLModelProxy(filename)
    104         except RuntimeError as e:
    105             warnings.warn(

IndexError: map::at:  key not found

The convert function does not appear to be handling the deprocessing args or it isn't recognizing that the image_output_names are in the processed feature output.

Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)

I get this error message when trying to export the following yolov3-tiny.onnx model to CoreML, which I originally exported from PyTorch 1.0 correctly. I am on a Python 2.7 venv and installed the converter with pip install -U onnx-coreml. The ONNX model is here; other ONNX models export without issue, but this one causes a segmentation fault:
https://storage.googleapis.com/ultralytics/yolov3-tiny.onnx (35 MB)

  • The pytorch model originates from https://github.com/ultralytics/yolov3, and operates correctly with no issues.
  • The ONNX model opens with the Netron viewer correctly, no issues there either.

UPDATE: I get the same error installing from source and using CLI:

Glenns-MBP:weights glennjocher$ convert-onnx-to-coreml -o yolov3-tiny.mlmodel yolov3-tiny.onnx
Segmentation fault: 11

After converting, the coreML model is missing some factors

I converted my onnx model to coreML using onnx-coreml, but the converted coreML model is missing the dropout factor that was initially present in my onnx model.
Furthermore, this tool also removed momentum and affine from my batch normalisation.

Is there any way I can convert my onnx model to coreML while retaining these factors?
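As background, dropout is an identity operation at inference time, which is why exporters and converters are free to remove it. A tiny sketch illustrating this (not the converter's code):

import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
drop.eval()                     # inference mode, as used during ONNX export
x = torch.ones(4)
assert torch.equal(drop(x), x)  # dropout passes inputs through unchanged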

Issue when using matmul

Hello,

I am trying to convert a pytorch model to coreml. It converts to onnx properly but when exporting to coreml I have the following error

Missing initializer error in op of type MatMul, with input name = <layer no>, output name = <layer no>. Error message: Second input to Matmul layer must be a constant

my forward function looks like so

def forward(self, x, y):
    ...
    y_hat = torch.squeeze(torch.bmm(y_hat, y), 2)
    ...

The network works properly in pytorch. I checked the code of _operators.py, and it looks like y should be initialized as a Variable to be treated as a constant. However, that would hardcode the tensor input. Is there any way to treat y as an input in coreml? Or am I doing something wrong?

ConvTranspose2d with groups doesn't work

The following code will fail

import torch
from onnx_coreml import convert

model = torch.nn.ConvTranspose2d(4, 4, kernel_size=3, stride=2, output_padding=1, padding=1, groups=2)
input = torch.randn(1, 4, 8, 8)
torch.onnx.export(model, input, 'model.onnx', input_names=["input"], output_names=["output"])

mlmodel = convert('model.onnx')
mlmodel.save('model.mlmodel')

Looks like the problem is at

output_channels=params_dict['W'].shape[3],

as the weight shape will differ from the output shape when you have groups, but I don't know the internals of the library well enough to fix it correctly.
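A quick check of the shape mismatch described above (this only demonstrates the PyTorch weight layout, not a fix):

import torch

m = torch.nn.ConvTranspose2d(4, 4, kernel_size=3, groups=2)
# ConvTranspose2d weights are laid out as (in_channels, out_channels // groups, kH, kW),
# so with groups > 1 no single weight axis equals out_channels.
print(m.weight.shape)  # torch.Size([4, 2, 3, 3])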

ReshapeInitTensorFuser in transformers removed a reshape layer in onnx model

When I transform the onnx model to coreml, I get an error like this:

179/182: Converting Node Type Sub
Traceback (most recent call last):
  File "/home/dianxin/inpainting/pytorch-inpainting-with-partial-conv-master/onnx_to_coreml.py", line 11, in <module>
    coreml_model = convert(model_proto, image_input_names=['0'], image_output_names=['186'])
  File "/usr/local/lib/python2.7/dist-packages/onnx_coreml/converter.py", line 464, in convert
    _convert_node(builder, node, graph, err)
  File "/usr/local/lib/python2.7/dist-packages/onnx_coreml/_operators.py", line 1826, in _convert_node
    return converter_fn(builder, node, graph, err)
  File "/usr/local/lib/python2.7/dist-packages/onnx_coreml/_operators.py", line 247, in _convert_sub
    _convert_broadcast_op(builder, node, graph, err, "ADD")
  File "/usr/local/lib/python2.7/dist-packages/onnx_coreml/_operators.py", line 69, in _convert_broadcast_op
    ranks = [len(graph.onnx_coreml_shape_mapping[input_]) for input_ in node.inputs]
KeyError: u'326'

When I check graph.onnx_coreml_shape_mapping, I do not find u'326'. However, the onnx model graph originally has the reshape layer, like this:

%319 : Tensor = onnx::Constant[value={171}]()
%320 : Tensor = onnx::Mul(%318, %319)
%321 : Float(1, 3, 256, 256) = onnx::Clip[max=1, min=0](%314), scope: PConvUNet/PCBActiv[dec_1]/PartialConv2d[conv]
%322 : Float(1, 3, 256, 256) = onnx::Mul(%320, %321), scope: PConvUNet/PCBActiv[dec_1]/PartialConv2d[conv]
%323 : Float(1, 19, 256, 256) = onnx::Mul(%311, %312), scope: PConvUNet/PCBActiv[dec_1]/PartialConv2d[conv]
%324 : Float(1, 3, 256, 256) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%323, %75, %76), scope: PConvUNet/PCBActiv[dec_1]/PartialConv2d[conv]
%325 : Tensor = onnx::Constant[value= 1 3 1 1 [ CPULongType{4} ]](), scope: PConvUNet/PCBActiv[dec_1]/PartialConv2d[conv]
%326 : Float(1, 3, 1, 1) = onnx::Reshape(%76, %325), scope: PConvUNet/PCBActiv[dec_1]/PartialConv2d[conv]
%327 : Float(1, 3, 256, 256) = onnx::Sub(%324, %326), scope: PConvUNet/PCBActiv[dec_1]/PartialConv2d[conv]
%328 : Float(1, 3, 256, 256) = onnx::Mul(%327, %322), scope: PConvUNet/PCBActiv[dec_1]/PartialConv2d[conv]
%329 : Float(1, 3, 256, 256) = onnx::Add(%328, %326), scope: PConvUNet/PCBActiv[dec_1]/PartialConv2d[conv]
%330 : Float(1, 3, 256, 256) = onnx::Mul(%329, %321), scope: PConvUNet/PCBActiv[dec_1]/PartialConv2d[conv]
return (%330, %321);
}

%325 and %326 have been removed by the ReshapeInitTensorFuser transformer. What should I do? Thanks very much!

NotImplementedError: Unsupported ONNX ops of type: Constant

I'm trying to export the following yolov3-tiny.onnx model to CoreML which I originally exported from PyTorch 1.0 correctly. The pytorch model originates from https://github.com/ultralytics/yolov3, and operates correctly with no issues. There are no PyTorch 'Constant' modules, so I don't understand the origin of the error. Please help.
https://storage.googleapis.com/ultralytics/yolov3-tiny.onnx (35 MB)

Traceback (most recent call last):
  File "/Users/glennjocher/onnx-coreml/onnx_to_coreml.py", line 24, in <module>
    main()
  File "/Users/glennjocher/onnx-coreml/onnx_to_coreml.py", line 19, in main
    coreml_model = convert(model_proto, image_input_names=['0'], image_output_names=['156'])
  File "/Users/glennjocher/onnx-coreml/onnx_coreml/converter.py", line 455, in convert
    _check_unsupported_ops(graph.nodes)
  File "/Users/glennjocher/onnx-coreml/onnx_coreml/converter.py", line 143, in _check_unsupported_ops
    ','.join(unsupported_op_types)))
NotImplementedError: Unsupported ONNX ops of type: Constant

Updates necessary for newly added test cases

We're shooting for ONNX v1 announcement at NIPS next week and thus it'd be awesome to make sure that all tests are passing with the latest ONNX master.

We added a few backend test cases upstream in ONNX (by exporting some of the PyTorch's tests) and some of them are not passing the tests in CoreML.

Notable issues:

  1. #5 addresses padding and tries to fix some reshapes
  2. Some of the ops are not properly mapped (e.g. PRelu)
  3. Some of the ops variants are not supported in CoreML (e.g. arbitrary dimensional SoftMax)

For 3) we need to figure out some way (throwing special exception?) to mark the tests as "expected unsupported behavior" instead of just failing them blindly.

Help on looking into these is appreciated.
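For 3), one conventional mechanism is pytest's xfail marker; a sketch, not necessarily what the test driver should end up using:

import pytest

@pytest.mark.xfail(reason="arbitrary-dimensional Softmax is unsupported in CoreML")
def test_softmax_arbitrary_axis():
    assert False  # stands in for the real backend test body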

Installation error with coremltools 2.0

I'm trying to set up a fastai v1 environment (python 3.7), ultimately for deployment on iOS. However, I'm not able to install onnx-coreml for some reason. Running pip install -U onnx-coreml I get the error: No matching distribution found for coremltools>=2.0 (from onnx-coreml).

Support different pre-processing for different images

My team and I have a use-case of CoreML in which it's necessary to use multiple image inputs that all need different pre-processing. CoreML supports this, but the converter does not.

Can support for this be added?
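Until converter support lands, a possible post-conversion workaround is editing the spec directly, since each CoreML preprocessing entry carries a featureName. A hedged sketch; the file path and input name 'ip2' are assumptions:

import coremltools

mlmodel = coremltools.models.MLModel('model.mlmodel')  # hypothetical converted model
spec = mlmodel.get_spec()
for pre in spec.neuralNetwork.preprocessing:
    if pre.featureName == 'ip2':                       # hypothetical input name
        pre.scaler.channelScale = 1.0 / 255.0          # per-input scale
coremltools.models.utils.save_spec(spec, 'model_edited.mlmodel')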

Failing Backend Node tests

Current State of OnnxBackend tests (if tests/onnx_backend_test.py in master is executed and all tests are included except the 9 model zoo tests):

72 passed, 279 skipped, 189 failed:

Breakup of 189 failed tests:
142: OnnxBackendNodeTest
39: OnnxBackendPyTorchConvertedModelTest
8: OnnxBackendPyTorchOperatorModelTest

Of the 142 failing OnnxBackendNodeTest tests, a number are failing due to the fact that CoreML only supports TensorProto.FLOAT data types. Ignoring those failures along with the "test_cast_..." tests, we are left with 87 failing cases. These need to be either fixed or marked as expected failures where CoreML does not support a parameter/op configuration. These are:

test_add_bcast_cpu
test_basic_conv_with_padding_cpu
test_basic_conv_without_padding_cpu
test_ceil_cpu
test_ceil_example_cpu
test_clip_cpu
test_clip_default_max_cpu
test_clip_default_min_cpu
test_clip_example_cpu
test_constant_cpu
test_constant_pad_cpu
test_conv_with_strides_and_asymmetric_padding_cpu
test_conv_with_strides_no_padding_cpu
test_conv_with_strides_padding_cpu
test_default_axes_cpu
test_div_bcast_cpu
test_div_cpu
test_div_example_cpu
test_edge_pad_cpu
test_elu_default_cpu
test_flatten_axis0_cpu
test_flatten_axis1_cpu
test_flatten_axis2_cpu
test_flatten_axis3_cpu
test_flatten_default_axis_cpu
test_floor_cpu
test_floor_example_cpu
test_gather_0_cpu
test_gather_1_cpu
test_hardmax_axis_0_cpu
test_hardmax_axis_1_cpu
test_hardmax_axis_2_cpu
test_hardmax_default_axis_cpu
test_hardmax_example_cpu
test_hardmax_one_hot_cpu
test_hardsigmoid_cpu
test_hardsigmoid_default_cpu
test_hardsigmoid_example_cpu
test_leakyrelu_default_cpu
test_log_example_cpu
test_logsoftmax_axis_0_cpu
test_logsoftmax_axis_1_cpu
test_logsoftmax_axis_2_cpu
test_logsoftmax_default_axis_cpu
test_logsoftmax_example_1_cpu
test_logsoftmax_large_number_cpu
test_matmul_2d_cpu
test_matmul_3d_cpu
test_matmul_4d_cpu
test_max_one_input_cpu
test_mean_example_cpu
test_mean_one_input_cpu
test_mean_two_inputs_cpu
test_min_example_cpu
test_min_one_input_cpu
test_min_two_inputs_cpu
test_mul_bcast_cpu
test_pow_bcast_axis0_cpu
test_pow_bcast_cpu
test_pow_cpu
test_pow_example_cpu
test_reciprocal_cpu
test_reciprocal_example_cpu
test_reflect_pad_cpu
test_reshape_extended_dims_cpu
test_reshape_reduced_dims_cpu
test_selu_cpu
test_selu_default_cpu
test_selu_example_cpu
test_slice_cpu
test_slice_start_out_of_bounds_cpu
test_softmax_axis_0_cpu
test_softmax_axis_1_cpu
test_softmax_axis_2_cpu
test_softmax_default_axis_cpu
test_softmax_example_cpu
test_softmax_large_number_cpu
test_sqrt_cpu
test_sqrt_example_cpu
test_squeeze_cpu
test_sub_bcast_cpu
test_sub_cpu
test_sub_example_cpu
test_thresholdedrelu_cpu
test_thresholdedrelu_default_cpu
test_thresholdedrelu_example_cpu
test_unsqueeze_cpu

Normalize input image per color channel

I read the coremltools documentation and searched Google, but the best way to normalize per color channel is still not clear to me. I see that there are bias factors per channel, but the scale seems to be applied to all channels.
My model is the same as the one described in #337, and they do the following:

Python source

from torchvision import transforms


IMAGE_NET_MEAN = [0.485, 0.456, 0.406]
IMAGE_NET_STD = [0.229, 0.224, 0.225]

class Transform:
    def __init__(self):
        normalize = transforms.Normalize(
            mean=IMAGE_NET_MEAN,
            std=IMAGE_NET_STD)

        self._val_transform = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
            normalize])

    @property
    def val_transform(self):
        return self._val_transform

...

and before feeding the model

from nima.common import Transform

...

image = self.transform(image)
image = image.unsqueeze_(0)
image = image.to(device)

So, I see two ways of doing this (plus an approximate third, sketched after the list).

1 - create a custom layer that will do the conversion after the root node

2 - do the normalization in swift or objective-c by transforming CVPixelBuffer
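
A third, approximate option folds the normalization into the converter's preprocessing arguments. This is only a sketch: CoreML's image_scale is shared across all channels, so a per-channel std can only be approximated (here by its average), and model_proto stands in for the already-loaded ONNX model:

from onnx_coreml import convert

IMAGE_NET_MEAN = [0.485, 0.456, 0.406]
IMAGE_NET_STD = [0.229, 0.224, 0.225]
avg_std = sum(IMAGE_NET_STD) / 3.0   # approximation: one scale for all channels

# (x / 255 - mean) / std  ~=  x / (255 * avg_std) - mean / avg_std
coreml_model = convert(
    model_proto,
    image_input_names=['0'],
    preprocessing_args={
        'image_scale': 1.0 / (255.0 * avg_std),
        'red_bias': -IMAGE_NET_MEAN[0] / avg_std,
        'green_bias': -IMAGE_NET_MEAN[1] / avg_std,
        'blue_bias': -IMAGE_NET_MEAN[2] / avg_std,
    })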

Before getting my hands dirty on this, I just wanted to know whether I'm missing something or this is the reality.

cheers

Segmentation Fault

When I use convert in onnx-coreml, a segmentation fault arises.
My code is:

import sys
from onnx import onnx_pb
from onnx_coreml import convert

model_in = sys.argv[1]   # path to the input .onnx file
model_out = sys.argv[2]  # path for the converted .mlmodel

# Parse the serialized ONNX model
model_file = open(model_in, 'rb')
model_proto = onnx_pb.ModelProto()
model_proto.ParseFromString(model_file.read())

# Convert, scaling pixels to [0, 1] on input and back to [0, 255] on output
coreml_model = convert(model_proto, preprocessing_args={'image_scale': 1 / 255.}, image_input_names=['0'], image_output_names=['258'], deprocessing_args={'image_scale': 255.})
coreml_model.save(model_out)

Are there any ideas on how to avoid the segmentation fault?

My environment is like this:

python (3.5.2)
Click (7.0)
numpy (1.15.2)
onnx (1.3.0)
onnx-coreml (0.3.0)

Segmentation fault with pytorch 1.0

I'm facing a segfault when using torch==1.0 and onnx-coreml from the current master (commit https://github.com/onnx/onnx-coreml/tree/da33f4f0efde2245925b3a2444973786a3a91e0d) on Ubuntu 16.04 servers. Conversion is successful in the same environment with torch==0.4.1.

Script to reproduce:

import onnx
import onnx_coreml

import torch
from torch import nn


class Dummy(nn.Module):

    def __init__(self):
        super().__init__()

    def forward(self, inp):
        a, b = inp

        result = b + nn.functional.interpolate(a, scale_factor=2, mode="nearest")
        return result


def main(device='cuda:0'):
    model = Dummy()
    model = model.to(device)
    dummy_input = [torch.randn(1, 128, 32, 32).to(device), torch.randn(1, 128, 64, 64).to(device)]

    onnx_path = '/tmp/dummy.onnx'
    coreml_path = '/tmp/dummy.mlmodel'
    torch.onnx.export(model,
                      dummy_input,
                      f=onnx_path,
                      input_names=['input'],
                      output_names=['output'],
                      verbose=False,
                      export_params=True,
                      )
    onnx_model = onnx.load(onnx_path)
    coreml_model = onnx_coreml.convert(onnx_model)
    coreml_model.save(coreml_path)


if __name__ == '__main__':
    main()

Code Cleanup && Initial Improvements

  • Development Setup
    • onnx/caffe2 as submodule in git : #13
    • install.sh / install-develop.sh : #21
  • Setup CI
  • Revamp PyPI deployment
  • Make test cases pass
    • Make current test cases pass #36 #42 #69 #112
    • Move test cases that are general enough to onnx repo
    • Add layer tests
  • Test models
    • Run/Debug current model zoo (mostly vision models: ResNet, VGG, etc.; see the model zoo and #42)
    • Run/Debug newer models – ex: RNN, Sentiment Classification
  • Static type checking
  • Remove Caffe2 dependency
  • Run/test custom layer support
  • Clean up image preprocessing options
  • Address pending GitHub issues
  • [optional] Fusion of ops in the final model for better perf

Issue with converting LSTM pyTorch Model to ONNX to coreML

If I convert an LSTM model with num_layers=1, it will give me:

RuntimeError: Inferred shape and existing shape differ in rank: (4) vs (3)

If I convert an LSTM model with num_layers=2, it will give me:

NotImplementedError: Unsupported ONNX ops of type: Gather,ConstantFill

I then implemented the custom parameters. It compiles to a CoreML model, but now Xcode gives me:

validator error: Layer '31' of type 500 has 0 inputs but expects at least 1.

Here is the minimal test code; please send help!
https://gist.github.com/marcoleewow/2afb5762ed74f5244c9bd85eae35147d
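
For reference, "type 500" in the validator error is CoreML's custom-layer type, which suggests the generated custom layer's inputs were not wired up. Registering custom layers usually looks roughly like the sketch below, based on onnx-coreml's add_custom_layers and custom_conversion_functions options (exact signatures and dictionary keys vary by version; the Gather parameters are illustrative, and Xcode then needs a Swift/Obj-C class matching each className):

from coremltools.proto import NeuralNetwork_pb2
from onnx_coreml import convert

def _convert_gather(node):
    # Describe the custom layer CoreML should emit for every Gather node.
    params = NeuralNetwork_pb2.CustomLayerParams()
    params.className = "Gather"        # must match the iOS-side class name
    params.description = "Custom Gather layer"
    params.parameters["axis"].intValue = node.attrs.get('axis', 0)
    return params

coreml_model = convert(onnx_model,   # assumed: the loaded ONNX model
                       add_custom_layers=True,
                       custom_conversion_functions={'Gather': _convert_gather})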

onnx-coreml doesn't support Python 3.x

I installed onnx-coreml using “pip install -U onnx-coreml” with Python 3.5 successfully. But when I import the package in Python, I get the error below.

[screenshot of the import error]

So could you build a wheel package supporting Python 3.x and upload it to PyPI?

Pad op is not supported

Traceback (most recent call last):
  File "", line 1, in
  File "/home/cheng/anaconda3/envs/py27/lib/python2.7/site-packages/onnx_coreml/_onnx_converter.py", line 219, in convert
    _convert_node(builder, node)
  File "/home/cheng/anaconda3/envs/py27/lib/python2.7/site-packages/onnx_coreml/_layers.py", line 390, in _convert_node
    converter_fn = _get_node_converter_fn(node)
  File "/home/cheng/anaconda3/envs/py27/lib/python2.7/site-packages/onnx_coreml/_layers.py", line 385, in _get_node_converter_fn
    "ONNX node of type {} is not supported.".format(op_type,)
TypeError: ONNX node of type Pad is not supported.

Could we add Pad support? Pad is a very common op. (A rough sketch of what a converter function could look like follows.)
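
For anyone picking this up: ONNX Pad maps fairly directly onto coremltools' NeuralNetworkBuilder.add_padding, which supports constant, reflection, and replication padding. A rough sketch for 4-D NCHW inputs (the node wrapper and attribute access are illustrative, not the converter's actual internals):

def _convert_pad(builder, node):
    # ONNX 'pads' for NCHW is [N, C, H, W begins, then N, C, H, W ends].
    pads = node.attrs['pads']
    mode = node.attrs.get('mode', 'constant')   # 'constant' | 'reflect' | 'edge'
    coreml_mode = {'constant': 'constant',
                   'reflect': 'reflection',
                   'edge': 'replication'}[mode]
    builder.add_padding(
        name=node.name,
        left=pads[3], right=pads[7],     # width begin/end
        top=pads[2], bottom=pads[6],     # height begin/end
        value=node.attrs.get('value', 0.0),
        input_name=node.inputs[0],
        output_name=node.outputs[0],
        padding_type=coreml_mode)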

torch.nn.Parameter marked as not present

The following code will fail

import torch
from onnx_coreml import convert


class SampleModule(torch.nn.Module):
    def __init__(self):
        super(SampleModule, self).__init__()
        self.ones = torch.nn.Parameter(torch.ones(2, 3))

    def forward(self, x):
        return self.ones + x


def convert_sample():
    inpt = torch.rand(2, 3)
    net = SampleModule()

    torch.onnx.export(net, inpt, '/tmp/sample.onnx', verbose=True)
    convert('/tmp/sample.onnx')

will produce

graph(%0 : Float(2, 3)
      %1 : Float(2, 3)) {
  %2 : Float(2, 3) = onnx::Add(%1, %0), scope: SampleModule
  return (%2);
}

1/1: Converting Node Type Add
Traceback (most recent call last):
  File "prepare_coreml.py", line 117, in <module>
    convert_sample()
  File "prepare_coreml.py", line 110, in convert_sample
    mlmodel = convert('/tmp/sample.onnx')
  File "site-packages/onnx_coreml/converter.py", line 446, in convert
    mlmodel = MLModel(builder.spec)
  File "/site-packages/coremltools/models/model.py", line 153, in __init__
    self.__proxy__ = _get_proxy_from_spec(filename)
  File "site-packages/coremltools/models/model.py", line 77, in _get_proxy_from_spec
    return _MLModelProxy(filename)
RuntimeError: Error compiling model: "Error reading protobuf spec. validator error: Layer '2' consumes a layer named '1' which is not present in this network.".

I hit this issue when trying to add CoordConv (from Uber's recent paper) to some networks, and the above is the minimal example I could generate.

Moving self.ones into forward changes operation %1 to onnx::Constant but leaves the error the same.

Add is just an example. The same problem is present with torch.cat, multiplication, and others.

"stringClassLabels" unavailable when using class_labels arg in convert

The following error happens when I try to convert an ONNX model and include the string class names. Without class labels, the conversion succeeds.

File "/home/gen/.envs/otto/lib/python3.6/site-packages/onnx_coreml/converter.py", line 499, in convert
    predicted_feature_name=predicted_feature_name
  File "/home/gen/.envs/otto/lib/python3.6/site-packages/coremltools/models/neural_network/builder.py", line 337, in set_class_labels
    nn_spec.ClearField('stringClassLabels')
ValueError: Protocol message has no "stringClassLabels" field.

I'm having a hard time even debugging this error. Any guidance?

The original call looks like this:

        mlmodel = convert(args.onnxsavename,
                          image_input_names=['0'],
                          preprocessing_args={'is_bgr':args.bgr,
                                              'blue_bias':args.mean[0],
                                              'green_bias':args.mean[1],
                                              'red_bias':args.mean[2],
                                              'image_scale': args.scale},
                          predicted_feature_name='classLabel', #default variable name for layer containing predicted class labels
                          class_labels=args.labels_fname
        )
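
One hedged guess at the cause: set_class_labels writes to the stringClassLabels field, which exists on the neuralNetworkClassifier spec but not on a plain neuralNetwork spec, so the converter may need to be told to produce a classifier in the first place. If that's right, adding mode='classifier' to the call should help:

mlmodel = convert(args.onnxsavename,
                  mode='classifier',   # request a neuralNetworkClassifier spec
                  image_input_names=['0'],
                  preprocessing_args={'is_bgr':args.bgr,
                                      'blue_bias':args.mean[0],
                                      'green_bias':args.mean[1],
                                      'red_bias':args.mean[2],
                                      'image_scale': args.scale},
                  predicted_feature_name='classLabel',
                  class_labels=args.labels_fname)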

CoreML only supports Reshape layer when the target shape is static and known apriori

When I convert an ONNX model to an mlmodel, I get the error below.

error message

Traceback (most recent call last):
  File "onnx_to_coreml.py", line 15, in <module>
    deprocessing_args={'image_scale': 255})
  File "/foo/lib/python3.6/site-packages/onnx_coreml/converter.py", line 458, in convert
    _convert_node(builder, node, graph, err)
  File "/foo/lib/python3.6/site-packages/onnx_coreml/_operators.py", line 1755, in _convert_node
    return converter_fn(builder, node, graph, err)
  File "/foo/lib/python3.6/site-packages/onnx_coreml/_operators.py", line 443, in _convert_reshape
    "CoreML only supports Reshape layer when the target shape is static and known apriori")
  File "/foo/lib/python3.6/site-packages/onnx_coreml/_error_utils.py", line 71, in missing_initializer
    format(node.op_type, node.inputs[0], node.outputs[0], err_message)
ValueError: Missing initializer error in op of type Reshape, with input name = 40, output name = 68. Error message: CoreML only supports Reshape layer when the target shape is static and known apriori

environment

onnx==1.3.0
onnx-coreml==0.3.0

I couldn't understand what the error message says.
Doesn't onnx-coreml support the Reshape op? What should I do to fix this error? (A possible workaround is sketched after the graph below.)

This is the model graph.

graph torch-jit-export (
  %input_image[FLOAT, 1x3x100x100]
) initializers (
  %learned_0[FLOAT, 56x3x5x5]
  %learned_1[FLOAT, 56]
  %learned_2[FLOAT, 1]
  %learned_3[FLOAT, 12x56x1x1]
  %learned_4[FLOAT, 12]
  %learned_5[FLOAT, 1]
  %learned_6[FLOAT, 12x12x3x3]
  %learned_7[FLOAT, 12]
  %learned_8[FLOAT, 1]
  %learned_9[FLOAT, 12x12x3x3]
  %learned_10[FLOAT, 12]
  %learned_11[FLOAT, 1]
  %learned_12[FLOAT, 12x12x3x3]
  %learned_13[FLOAT, 12]
  %learned_14[FLOAT, 1]
  %learned_15[FLOAT, 12x12x3x3]
  %learned_16[FLOAT, 12]
  %learned_17[FLOAT, 1]
  %learned_18[FLOAT, 12x12x1x1]
  %learned_19[FLOAT, 12]
  %learned_20[FLOAT, 1]
) {
  %22 = Pad[mode = 'reflect', pads = [0, 0, 2, 2, 0, 0, 2, 2]](%input_image)
  %23 = Conv[dilations = [1, 1], group = 1, kernel_shape = [5, 5], pads = [0, 0, 0, 0], strides = [1, 1]](%22, %learned_0, %learned_1)
  %24 = PRelu(%23, %learned_2)
  %25 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%24, %learned_3, %learned_4)
  %26 = PRelu(%25, %learned_5)
  %27 = Pad[mode = 'reflect', pads = [0, 0, 1, 1, 0, 0, 1, 1]](%26)
  %28 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [0, 0, 0, 0], strides = [1, 1]](%27, %learned_6, %learned_7)
  %29 = PRelu(%28, %learned_8)
  %30 = Pad[mode = 'reflect', pads = [0, 0, 1, 1, 0, 0, 1, 1]](%29)
  %31 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [0, 0, 0, 0], strides = [1, 1]](%30, %learned_9, %learned_10)
  %32 = PRelu(%31, %learned_11)
  %33 = Pad[mode = 'reflect', pads = [0, 0, 1, 1, 0, 0, 1, 1]](%32)
  %34 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [0, 0, 0, 0], strides = [1, 1]](%33, %learned_12, %learned_13)
  %35 = PRelu(%34, %learned_14)
  %36 = Pad[mode = 'reflect', pads = [0, 0, 1, 1, 0, 0, 1, 1]](%35)
  %37 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [0, 0, 0, 0], strides = [1, 1]](%36, %learned_15, %learned_16)
  %38 = PRelu(%37, %learned_17)
  %39 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%38, %learned_18, %learned_19)
  %40 = PRelu(%39, %learned_20)
  %41 = Shape(%40)
  %42 = Slice[axes = [0], ends = [1], starts = [0]](%41)
  %43 = Squeeze[axes = [0]](%42)
  %44 = Shape(%40)
  %45 = Slice[axes = [0], ends = [2], starts = [1]](%44)
  %46 = Squeeze[axes = [0]](%45)
  %47 = Shape(%40)
  %48 = Slice[axes = [0], ends = [3], starts = [2]](%47)
  %49 = Squeeze[axes = [0]](%48)
  %50 = Shape(%40)
  %51 = Slice[axes = [0], ends = [4], starts = [3]](%50)
  %52 = Squeeze[axes = [0]](%51)
  %53 = Constant[value = <Scalar Tensor []>]()
  %54 = Div[broadcast = 1](%46, %53)
  %55 = Constant[value = <Scalar Tensor []>]()
  %56 = Mul[broadcast = 1](%49, %55)
  %57 = Constant[value = <Scalar Tensor []>]()
  %58 = Mul[broadcast = 1](%52, %57)
  %59 = Constant[value = <Scalar Tensor []>]()
  %60 = Constant[value = <Scalar Tensor []>]()
  %61 = Unsqueeze[axes = [0]](%43)
  %62 = Unsqueeze[axes = [0]](%54)
  %63 = Unsqueeze[axes = [0]](%59)
  %64 = Unsqueeze[axes = [0]](%60)
  %65 = Unsqueeze[axes = [0]](%49)
  %66 = Unsqueeze[axes = [0]](%52)
  %67 = Concat[axis = 0](%61, %62, %63, %64, %65, %66)
  %68 = Reshape(%40, %67)
  %69 = Transpose[perm = [0, 1, 4, 2, 5, 3]](%68)
  %70 = Unsqueeze[axes = [0]](%43)
  %71 = Unsqueeze[axes = [0]](%54)
  %72 = Unsqueeze[axes = [0]](%56)
  %73 = Unsqueeze[axes = [0]](%58)
  %74 = Concat[axis = 0](%70, %71, %72, %73)
  %output_image = Reshape(%69, %74)
  return %output_image
}
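
For what it's worth, the dynamic Reshape comes from the Shape/Slice/Squeeze/Concat chain (%41 through %67) that computes the target shape at run time; this is the pattern PyTorch emits for a pixel shuffle written with view() calls on x.size(...). A common workaround, sketched below with illustrative names, is to hard-code the sizes in the module before export so ONNX records a constant target shape:

import torch

class PixelShuffleStatic(torch.nn.Module):
    """Pixel shuffle with hard-coded sizes so the exported Reshape is static."""
    def __init__(self, upscale, in_channels, height, width):
        super(PixelShuffleStatic, self).__init__()
        self.r = upscale
        self.c = in_channels   # channels before the shuffle (C_out * r * r)
        self.h = height
        self.w = width

    def forward(self, x):
        r, c, h, w = self.r, self.c, self.h, self.w
        # Plain Python ints instead of x.size(...): the Reshape targets become
        # initializers that CoreML can consume.
        x = x.view(1, c // (r * r), r, r, h, w)
        x = x.permute(0, 1, 4, 2, 5, 3).contiguous()
        return x.view(1, c // (r * r), h * r, w * r)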
