onnx-caffe2's Introduction

Open Neural Network Exchange (ONNX) is an open ecosystem that empowers AI developers to choose the right tools as their project evolves. ONNX provides an open source format for AI models, both deep learning and traditional ML. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types. Currently we focus on the capabilities needed for inferencing (scoring).

ONNX is widely supported and can be found in many frameworks, tools, and hardware. Enabling interoperability between different frameworks and streamlining the path from research to production helps increase the speed of innovation in the AI community. We invite the community to join us and further evolve ONNX.

Use ONNX

Learn about the ONNX spec

Programming utilities for working with ONNX Graphs
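
As a quick illustration of these programming utilities, the snippet below builds and validates a one-node graph with onnx.helper and onnx.checker; the tensor names and shapes are made up for the example.

import onnx
from onnx import TensorProto, helper

# A minimal sketch: a single Relu node, Y = Relu(X), on float tensors of shape [1, 4].
node = helper.make_node("Relu", inputs=["X"], outputs=["Y"])
graph = helper.make_graph(
    [node],
    "tiny-relu",
    inputs=[helper.make_tensor_value_info("X", TensorProto.FLOAT, [1, 4])],
    outputs=[helper.make_tensor_value_info("Y", TensorProto.FLOAT, [1, 4])],
)
model = helper.make_model(graph)
onnx.checker.check_model(model)  # raises if the graph violates the ONNX spec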

Contribute

ONNX is a community project and the open governance model is described here. We encourage you to join the effort and contribute feedback, ideas, and code. You can participate in the Special Interest Groups and Working Groups to shape the future of ONNX.

Check out our contribution guide to get started.

If you think some operator should be added to ONNX specification, please read this document.

Community meetings

The schedules of the regular meetings of the Steering Committee, the Working Groups, and the SIGs can be found here.

Community Meetups are held at least once a year. Content from previous community meetups is available online.

Discuss

We encourage you to open Issues or use Slack (if you have not joined yet, please use this link to join the group) for more real-time discussion.

Follow Us

Stay up to date with the latest ONNX news. [Facebook] [Twitter]

Roadmap

A roadmap process takes place every year. More details can be found here.

Installation

Official Python packages

ONNX released packages are published in PyPI.

pip install onnx  # or pip install onnx[reference] for optional reference implementation dependencies

ONNX weekly packages are published in PyPI to enable experimentation and early testing.

vcpkg packages

onnx is on the maintenance list of vcpkg, so you can use vcpkg to build and install it easily:

git clone https://github.com/microsoft/vcpkg.git
cd vcpkg
./bootstrap-vcpkg.bat # For powershell
./bootstrap-vcpkg.sh # For bash
./vcpkg install onnx

Conda packages

A binary build of ONNX is available from Conda, in conda-forge:

conda install -c conda-forge onnx

Build ONNX from Source

Before building from source, uninstall any existing versions of ONNX with pip uninstall onnx.

A C++ compiler supporting C++17 or higher is required to build ONNX from source. Users can still specify their own CMAKE_CXX_STANDARD version for building ONNX.

If you don't have protobuf installed, ONNX will download and build protobuf internally as part of its own build.

Alternatively, you can manually install the protobuf C/C++ libraries and tools at a specific version before proceeding. Then, depending on how you installed protobuf, set the environment variable CMAKE_ARGS to "-DONNX_USE_PROTOBUF_SHARED_LIBS=ON" or "-DONNX_USE_PROTOBUF_SHARED_LIBS=OFF". For example, you may need to run the following command:

Linux:

export CMAKE_ARGS="-DONNX_USE_PROTOBUF_SHARED_LIBS=ON"

Windows:

set CMAKE_ARGS="-DONNX_USE_PROTOBUF_SHARED_LIBS=ON"

Whether to use ON or OFF depends on the kind of protobuf library you have: shared libraries are files ending in *.dll/*.so/*.dylib, while static libraries are files ending in *.a/*.lib. In other words, the option depends on how you obtained your protobuf library and how it was built. The default is OFF. You don't need to run the commands above if you prefer to use a static protobuf library.

Windows

If you are building ONNX from source, it is recommended that you also build Protobuf locally as a static library. The version distributed with conda-forge is a DLL, but ONNX expects it to be a static library. Building protobuf locally also lets you control the version of protobuf. The tested and recommended version is 3.21.12.

The instructions in this README assume you are using Visual Studio. It is recommended that you run all the commands from a shell started from "x64 Native Tools Command Prompt for VS 2019" and keep the build system generator for cmake (e.g., cmake -G "Visual Studio 16 2019") consistent while building protobuf as well as ONNX.

You can get protobuf by running the following commands:

git clone https://github.com/protocolbuffers/protobuf.git
cd protobuf
git checkout v21.12
cd cmake
cmake -G "Visual Studio 16 2019" -A x64 -DCMAKE_INSTALL_PREFIX=<protobuf_install_dir> -Dprotobuf_MSVC_STATIC_RUNTIME=OFF -Dprotobuf_BUILD_SHARED_LIBS=OFF -Dprotobuf_BUILD_TESTS=OFF -Dprotobuf_BUILD_EXAMPLES=OFF .
msbuild protobuf.sln /m /p:Configuration=Release
msbuild INSTALL.vcxproj /p:Configuration=Release

Protobuf will then be built as a static library and installed to <protobuf_install_dir>. Please add the bin directory (which contains protoc.exe) to your PATH.

set CMAKE_PREFIX_PATH=<protobuf_install_dir>;%CMAKE_PREFIX_PATH%

Please note: if your protobuf_install_dir contains spaces, do not add quotation marks around it.

Alternative: if you don't want to change your PATH, you can set ONNX_PROTOC_EXECUTABLE instead.

set CMAKE_ARGS=-DONNX_PROTOC_EXECUTABLE=<full_path_to_protoc.exe>

Then you can build ONNX as:

git clone https://github.com/onnx/onnx.git
cd onnx
git submodule update --init --recursive
# Optional: prefer lite proto
set CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON
pip install -e . -v

Linux

First, you need to install protobuf. The minimum Protobuf compiler (protoc) version required by ONNX is 3.6.1. Please note that old protoc versions might not work with CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON.

Ubuntu 20.04 (and newer) users may choose to install protobuf via

apt-get install python3-pip python3-dev libprotobuf-dev protobuf-compiler

In this case, it is required to add -DONNX_USE_PROTOBUF_SHARED_LIBS=ON to CMAKE_ARGS in the ONNX build step.

A more general way is to build and install it from source. See the instructions below for more details.

Installing Protobuf from source

Debian/Ubuntu:

  git clone https://github.com/protocolbuffers/protobuf.git
  cd protobuf
  git checkout v21.12
  git submodule update --init --recursive
  mkdir build_source && cd build_source
  cmake ../cmake -Dprotobuf_BUILD_SHARED_LIBS=OFF -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_POSITION_INDEPENDENT_CODE=ON -Dprotobuf_BUILD_TESTS=OFF -DCMAKE_BUILD_TYPE=Release
  make -j$(nproc)
  make install

CentOS/RHEL/Fedora:

  git clone https://github.com/protocolbuffers/protobuf.git
  cd protobuf
  git checkout v21.12
  git submodule update --init --recursive
  mkdir build_source && cd build_source
  cmake ../cmake  -DCMAKE_INSTALL_LIBDIR=lib64 -Dprotobuf_BUILD_SHARED_LIBS=OFF -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_POSITION_INDEPENDENT_CODE=ON -Dprotobuf_BUILD_TESTS=OFF -DCMAKE_BUILD_TYPE=Release
  make -j$(nproc)
  make install

Here "-DCMAKE_POSITION_INDEPENDENT_CODE=ON" is crucial. By default static libraries are built without "-fPIC" flag, they are not position independent code. But shared libraries must be position independent code. Python C/C++ extensions(like ONNX) are shared libraries. So if a static library was not built with "-fPIC", it can't be linked to such a shared library.

Once the build is successful, update PATH to include the protobuf paths.

Then you can build ONNX as:

git clone https://github.com/onnx/onnx.git
cd onnx
git submodule update --init --recursive
# Optional: prefer lite proto
export CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON
pip install -e . -v

Mac

export NUM_CORES=`sysctl -n hw.ncpu`
brew update
brew install autoconf && brew install automake
wget https://github.com/protocolbuffers/protobuf/releases/download/v21.12/protobuf-cpp-3.21.12.tar.gz
tar -xvf protobuf-cpp-3.21.12.tar.gz
cd protobuf-3.21.12
mkdir build_source && cd build_source
cmake ../cmake -Dprotobuf_BUILD_SHARED_LIBS=OFF -DCMAKE_POSITION_INDEPENDENT_CODE=ON -Dprotobuf_BUILD_TESTS=OFF -DCMAKE_BUILD_TYPE=Release
make -j${NUM_CORES}
make install

Once the build is successful, update PATH to include the protobuf paths.

Then you can build ONNX as:

git clone --recursive https://github.com/onnx/onnx.git
cd onnx
# Optional: prefer lite proto
export CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON
pip install -e . -v

Verify Installation

After installation, run

python -c "import onnx"

to verify it works.
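
For a slightly more informative check, you can also print the installed package version and the highest ONNX opset it knows about:

import onnx

print(onnx.__version__)                # installed onnx package version
print(onnx.defs.onnx_opset_version())  # latest opset version supported by this build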

Common Build Options

For the full list, refer to CMakeLists.txt.

Environment variables

  • USE_MSVC_STATIC_RUNTIME should be 1 or 0, not ON or OFF. When set to 1, ONNX links statically to the runtime library. Default: USE_MSVC_STATIC_RUNTIME=0

  • DEBUG should be 0 or 1. When set to 1, ONNX is built in debug mode. For debug versions of the dependencies, you need to open the CMakeLists file and append the letter d at the end of the package name lines. For example, NAMES protobuf-lite would become NAMES protobuf-lited. Default: DEBUG=0

CMake variables

  • ONNX_USE_PROTOBUF_SHARED_LIBS should be ON or OFF. It determines how onnx links to the protobuf libraries. Default: ONNX_USE_PROTOBUF_SHARED_LIBS=OFF (with USE_MSVC_STATIC_RUNTIME=0).

    • When set to ON - onnx will dynamically link to protobuf shared libs, PROTOBUF_USE_DLLS will be defined as described here, Protobuf_USE_STATIC_LIBS will be set to OFF and USE_MSVC_STATIC_RUNTIME must be 0.
    • When set to OFF - onnx will link statically to protobuf, and Protobuf_USE_STATIC_LIBS will be set to ON (to force the use of the static libraries) and USE_MSVC_STATIC_RUNTIME can be 0 or 1.
  • ONNX_USE_LITE_PROTO should be ON or OFF. When set to ON onnx uses lite protobuf instead of full protobuf. Default: ONNX_USE_LITE_PROTO=OFF

  • ONNX_WERROR should be ON or OFF. When set to ON warnings are treated as errors. Default: ONNX_WERROR=OFF in local builds, ON in CI and release pipelines.

Common Errors

  • Note: the import onnx command does not work from the source checkout directory; in this case you'll see ModuleNotFoundError: No module named 'onnx.onnx_cpp2py_export'. Change into another directory to fix this error.

  • If you run into any issues while building Protobuf as a static library, please ensure that shared Protobuf libraries, like libprotobuf, are not installed on your device or in the conda environment. If these shared libraries exist, either remove them to build Protobuf from source as a static library, or skip the Protobuf build from source to use the shared version directly.

  • If you run into any issues while building ONNX from source, and your error message reads, Could not find pythonXX.lib, ensure that you have consistent Python versions for common commands, such as python and pip. Clean all existing build files and rebuild ONNX again.

Testing

ONNX uses pytest as its test driver. To run the tests, you will first need to install pytest:

pip install pytest nbval

After installing pytest, use the following command to run tests.

pytest

Development

Check out the contributor guide for instructions.

License

Apache License v2.0

Code of Conduct

ONNX Open Source Code of Conduct

onnx-caffe2's People

Contributors

anderspapitto, bddppq, djsutherland, dzhulgakov, ezyang, houseroad, huitseeker, jerryzh168, maitek, onnxbot, sunaaron, yangqing


onnx-caffe2's Issues

Interchangeable Device Ops from PyTorch

In my use case, I trained a model in a GPU environment and later exported it with CPU for a reason. It ran perfectly when importing the model from onnx_caffe2.backend. However, when I actually ran the net, it threw an error stating that the network expects CPUContext while the caller expects CUDAContext. I suspect it's because I exported the model in a CPU env? Now I'm trying to export the net using GPU and see if this works. If that's really the case, is there any way we can interchangeably export and load it on any device we want?

Don't accept ONNX operators that aren't actually ONNX operators

Right now, if you feed onnx-caffe2 something that happens to be a valid Caffe2 operator, but not an ONNX operator, onnx-caffe2 will silently accept it. That's bad! We do not want to reward bad behavior. And with the versioning update onnx/onnx#186 we can backdoor Caffe2 ops using a vendor extension, if you absolutely need to. So... tighten this up!

Uselessly spends time materializing tensor proto that will never be used

In backend.py:

        # Caffe2 predictor requires all input blobs (including the
        # real model inputs) are initialized in init_net
        for value_info in graph_def.input:
            if value_info.name in initialized:
                continue
            op_def = caffe2_pb2.OperatorDef()
            op_def.output.extend([value_info.name])
            op_def.type = 'GivenTensorFill'

            shape = list(d.dim_value for d in value_info.type.tensor_type.shape.dim)
            # TODO: Putting this in the init net will make it run faster, but it
            # causes some tests to fail...
            # shape = (1,)

            shape_arg = op_def.arg.add()
            shape_arg.name = 'shape'
            shape_arg.ints.extend(shape)

            values_arg = op_def.arg.add()
            values_arg.name = 'values'
            values_arg.floats.extend(np.ones(shape).flatten().tolist())

            init_net.op.extend([op_def])

This is pretty pointless (we never actually use the values from the init net) AND it's really expensive (because we materialize a tensor proto for the inputs). The shape hack used to work, but Caffe2 rejects it now.

Hello guys I keep bumping into this error

File "/anaconda2/envs/pytorch/lib/python2.7/site-packages/onnx_caffe2/backend.py", line 439, in prepare
super(Caffe2Backend, cls).prepare(model, device, **kwargs)
File "/anaconda2/envs/pytorch/lib/python2.7/site-packages/onnx/backend/base.py", line 53, in prepare
onnx.checker.check_model(model)
File "/anaconda2/envs/pytorch/lib/python2.7/site-packages/onnx/checker.py", line 154, in check_model
check_graph(model.graph)
File "/anaconda2/envs/pytorch/lib/python2.7/site-packages/onnx/checker.py", line 127, in check_graph
check_node(node)
File "/anaconda2/envs/pytorch/lib/python2.7/site-packages/onnx/checker.py", line 40, in check_node
'NodeProto of type {} did not pass defs schema check.'.format(str(node.op_type)))
ValueError: NodeProto of type BatchNormalization did not pass defs schema check.

undefined symbol: _ZNK6google8protobuf7Message13SpaceUsedLongEv

I am trying to use this tool for converting a caffe2 model to an onnx model using the example given in #3.

I am trying to convert resnet-101 model.

Below is my error log:

Traceback (most recent call last):
  File "conversion.py", line 1, in <module>
    import onnx_caffe2.frontend as c2_onnx
  File "/home/chaitanya/.local/lib/python2.7/site-packages/onnx_caffe2/frontend.py", line 8, in <module>
    from onnx import onnx_pb2, checker
  File "/home/chaitanya/.local/lib/python2.7/site-packages/onnx/__init__.py", line 7, in <module>
    from . import checker, helper
  File "/home/chaitanya/.local/lib/python2.7/site-packages/onnx/checker.py", line 14, in <module>
    from onnx import defs
  File "/home/chaitanya/.local/lib/python2.7/site-packages/onnx/defs/__init__.py", line 6, in <module>
    import onnx.onnx_cpp2py_export as C
ImportError: /home/chaitanya/.local/lib/python2.7/site-packages/onnx/onnx_cpp2py_export.so: undefined symbol: _ZNK6google8protobuf7Message13SpaceUsedLongEv

Can someone help me out with the above issue?

Support for CPU-only caffe2

My goal is to deploy with caffe2 to IoT / mobile phones.

Surprisingly, onnx-caffe2 requires a GPU installation from the very beginning.

import onnx_caffe2.backend as backend 

Results in

---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
~/projects/onnx/caffe2/build/caffe2/python/_import_c_extension.py in <module>()
     27     try:
---> 28         from caffe2.python.caffe2_pybind11_state_gpu import *  # noqa
     29         if num_cuda_devices():  # noqa

ModuleNotFoundError: No module named 'caffe2.python.caffe2_pybind11_state_gpu'

During handling of the above exception, another exception occurred:

ImportError                               Traceback (most recent call last)
~/projects/onnx/caffe2/build/caffe2/python/_import_c_extension.py in <module>()
     39         try:
---> 40             from caffe2.python.caffe2_pybind11_state import *  # noqa
     41         except ImportError as e:

ImportError: /home/arogozhn/projects/onnx/caffe2/build/caffe2/python/caffe2_pybind11_state.so: undefined symbol: _Py_ZeroStruct

During handling of the above exception, another exception occurred:

SystemExit                                Traceback (most recent call last)
<ipython-input-1-401d8c4e960a> in <module>()
----> 1 import onnx_caffe2.backend as backend

~/py36/lib/python3.6/site-packages/onnx_caffe2/backend.py in <module>()
     11 
     12 import caffe2
---> 13 from caffe2.python import core, workspace
     14 from caffe2.proto import caffe2_pb2
     15 import caffe2.python.utils

~/projects/onnx/caffe2/build/caffe2/python/core.py in <module>()
     28 
     29 from caffe2.proto import caffe2_pb2
---> 30 from caffe2.python import scope, utils, workspace
     31 from caffe2.python.control_ops_grad import \
     32     gen_do_gradient, gen_if_gradient, gen_while_gradient

~/projects/onnx/caffe2/build/caffe2/python/workspace.py in <module>()
     35 from caffe2.python import scope, utils
     36 
---> 37 import caffe2.python._import_c_extension as C
     38 
     39 logger = logging.getLogger(__name__)

~/projects/onnx/caffe2/build/caffe2/python/_import_c_extension.py in <module>()
     42             logging.critical(
     43                 'Cannot load caffe2.python. Error: {0}'.format(str(e)))
---> 44             sys.exit(1)
     45 
     46 # libcaffe2_python contains a global Workspace that we need to properly delete

(Pay attention to the from caffe2.python.caffe2_pybind11_state_gpu import line.)

tutorial/example

Hi,

would it be possible to get an example of how to translate a valid caffe2 model into onnx?
Maybe this should go in the documentation README.md?

Thanks
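
A rough sketch of what such a translation could look like, pieced together from the onnx_caffe2.frontend.caffe2_net_to_onnx_model calls visible in the tracebacks further down this page; the file names, the input name "data", and the exact value_info format are assumptions rather than documented API:

import onnx
from onnx import TensorProto
from caffe2.proto import caffe2_pb2
import onnx_caffe2.frontend as c2_onnx

# Load the serialized Caffe2 nets (hypothetical file names).
predict_net = caffe2_pb2.NetDef()
with open("predict_net.pb", "rb") as f:
    predict_net.ParseFromString(f.read())
init_net = caffe2_pb2.NetDef()
with open("init_net.pb", "rb") as f:
    init_net.ParseFromString(f.read())

# Assumed format: element type and shape for each external input not produced by init_net.
value_info = {"data": (TensorProto.FLOAT, (1, 3, 224, 224))}

onnx_model = c2_onnx.caffe2_net_to_onnx_model(predict_net, init_net, value_info)
onnx.checker.check_model(onnx_model)
onnx.save(onnx_model, "model.onnx")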

onnx converts Conv to Conv+broadcasting Add, which is not supported by SNPE

I used onnx to convert pytorch models to caffe2 models, and later on to SNPE models for running on mobile. However, because onnx converts Conv op to Conv op + broadcasting Add, and broadcasting Add is not supported by SNPE yet, the converted models can't be used on mobile. Can we merge the Conv and Add in the optimization pass?

Example code using alexnet:

import io
import os
import torch
from torch.autograd import Variable
import onnx
from onnx_caffe2.backend import Caffe2Backend
from onnx_caffe2.helper import save_caffe2_net
from torchvision.models import alexnet

model = alexnet()
test_input = torch.randn(1, 3, 224, 224)
f = io.BytesIO()
torch.onnx.export(model, Variable(test_input), f, verbose=True)
onnx_model = onnx.ModelProto.FromString(f.getvalue())

# now convert proto to caffe2
caffe2_dir = os.path.expanduser("~/local/caffe2")  # expand ~ so the existence check works
if not os.path.exists(caffe2_dir):
    os.makedirs(caffe2_dir)
init_net, pred_net = Caffe2Backend.onnx_graph_to_caffe2_net(
    onnx_model, device="CPU")
init_net_file = os.path.join(caffe2_dir, "mobile_init_net.pb")
pred_net_file = os.path.join(caffe2_dir, "mobile_pred_net.pb")
save_caffe2_net(init_net, init_net_file, output_txt=False)
save_caffe2_net(pred_net, pred_net_file, output_txt=True)

Converted Conv op:

op {
  input: "1"
  input: "2"
  output: "121"
  name: ""
  type: "Conv"
  arg {
    name: "dilations"
    ints: 1
    ints: 1
  }
  arg {
    name: "strides"
    ints: 1
    ints: 1
  }
  arg {
    name: "pads"
    ints: 1
    ints: 1
    ints: 1
    ints: 1
  }
  arg {
    name: "group"
    i: 1
  }
  arg {
    name: "kernels"
    ints: 3
    ints: 3
  }
}
op {
  input: "121"
  input: "3"
  output: "122"
  name: ""
  type: "Add"
  arg {
    name: "broadcast"
    i: 1
  }
  arg {
    name: "axis"
    i: 1
  }
}

SNPE error:
ERROR - ERROR_CAFFE2_SNPE_OP_SUPPORT_ERR: Cannot resolve op type Add which is not yet supported by this conversion script.

convert-caffe2-to-onnx gets error: "Segmentation fault (core dumped)"

Hi,
I'm trying to convert a caffe2 model to an onnx model, but simply typing convert-caffe2-to-onnx gives this error: "Segmentation fault (core dumped)".

I have tried the following environments:

Ubuntu 16.04.4 LTS, Python 3.6 Caffe2 CPU;
Ubuntu 16.04.4 LTS, Python 3.6 Caffe2 GPU CUDA 9.0 CuDNN7;
Caffe2 docker Ubuntu 16.04.2 LTS, Python 2.7 Caffe2 CPU;

They all get the same result.

ONNX Tutorial: filter.dim32(i + 2) == kernel_[i]

Hello!
We're trying to replicate the PyTorch ONNX Super-Resolution Tutorial. Conversion seems to work OK. But when deploying the model to iOS an error occurs (on predictor->run):

[MC] Reading from public effective user settings. 
libc++abi.dylib: terminating with uncaught exception of type caffe2::EnforceNotMet: [enforce fail at conv_op_impl.h:37] filter.dim32(i + 2) == kernel_[i].  Error from operator:  
input: "9" input: "1" output: "11" name: "" type: "Conv" arg { name: "kernels" ints: 5 ints: 5 } arg { name: "strides" ints: 1 ints: 1 } arg { name: "pads" ints: 2 ints: 2 ints: 2 ints: 2 } arg { name: "dilations" ints: 1 ints: 1 } arg { name: "group" i: 1 }

We can run original caffe2 models on the device. When we compared manually written caffe2 models and the model made by the conversion tool, we noticed the conversion tool adds the following (maybe it could help to fix this issue?):

device_option {
  device_type: 0
  cuda_gpu_id: 0
}

Also, this problem replicates on simpler examples.

Attribute error when preparing a model on MacOS

I have caffe2, onnx, onnx-caffe2 installed on a macbook (following the anaconda installation instructions).

I'm attempting to run the tutorial on converting a super-resolution model from pytorch to caffe2 via ONNX. The torch.onnx commands to save the model to onnx seem to work fine. However, when I try to use the saved model via the onnx-caffe2 backend's prepare method, I receive an attribute error, as shown below.

In [51]: model = onnx.load("/Users/ohara/data/models/super_resolution.onnx")

In [52]: prepared_backend = onnx_caffe2.backend.prepare(model)
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-52-2fbcb732bd1d> in <module>()
----> 1 prepared_backend = onnx_caffe2.backend.prepare(model)

/Users/ohara/anaconda3/envs/python2/lib/python2.7/site-packages/onnx_caffe2/backend.pyc in prepare(cls, predict_graph, device, init_graph, **kwargs)
    309                 init_net = caffe2_pb2.NetDef()
    310 
--> 311             predict_net, _ = cls._onnx_graph_to_caffe2_net(predict_graph)
    312             predict_net.device_option.CopyFrom(get_device_option(device))
    313 

/Users/ohara/anaconda3/envs/python2/lib/python2.7/site-packages/onnx_caffe2/backend.pyc in _onnx_graph_to_caffe2_net(cls, graph_def)
    423 
    424         net_def = caffe2_pb2.NetDef()
--> 425         net_def.name = graph_def.name
    426 
    427         # environment from ONNX name to Caffe2 name

AttributeError: 'ModelProto' object has no attribute 'name'

RuntimeError: [enforce fail at operator.cc:52] blob != nullptr. op Scale: Encountered a non-existing input blob: 1

I have a simple machine learning model for MNIST consisting of a single 784x10 fully connected layer. I can export the model from PyTorch commit a0ac72e, and I can convert the ONNX data to Caffe2 with

convert-onnx-to-caffe2 --output MNIST.simple.caffe2 --init-net-output MNIST.simple.init.caffe2 -- MNIST.simple.proto

using onnx-caffe2 8a40069a, and I can run inference on the exported model using onnx-caffe2, but I cannot run the Caffe2 model in Caffe2 91f63a236:

christoph:~/caffe2-inference-test$ python run-caffe2.py MNIST.simple.caffe2 
WARNING:root:This caffe2 python run does not have GPU support. Will run in CPU only mode.
WARNING:root:Debug message: No module named caffe2_pybind11_state_gpu
Traceback (most recent call last):
  File "run-caffe2.py", line 51, in <module>
    sys.exit( main() )
  File "run-caffe2.py", line 43, in main
    predictor = caffe2.python.workspace.Predictor(init_net, predict_net)
RuntimeError: [enforce fail at operator.cc:52] blob != nullptr. op Scale: Encountered a non-existing input blob: 1 

Please provide guidance because I cannot tell if this is an onnx-caffe2 or a Caffe2 problem.

'ModelProto' object has no attribute 'opset_import'

Hi,

I'm seeing the following error when loading official onnx models:

Traceback (most recent call last):
  File "test_model_large_stepping.py", line 37, in test
    cf_rep = c2.prepare(_model)
  File "/Users/xxx/onnx-caffe2/onnx_caffe2/backend.py", line 559, in prepare
    for imp in model.opset_import:
AttributeError: 'ModelProto' object has no attribute 'opset_import'

I'm using the latest onnx built from source and the latest onnx-caffe2 from pip install.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

import unittest
import numpy as np
import onnx
import onnx_caffe2.backend as c2
from onnx import helper
from onnx.onnx_pb2 import TensorProto

class TestLargeModel(unittest.TestCase):
  MODEL_PATH = "../../../onnx_models/"

  def test(self):
    _model = onnx.load(self.MODEL_PATH + "shufflenet/model.pb")
    cf_rep = c2.prepare(_model)

if __name__ == '__main__':
  unittest.main()

maybe time to update model zoo?

Best wishes,
Tian Jin.

Does onnx-caffe2 support cudnn7 only?

When I run the following code:

prepared_backend = onnx_caffe2.backend.prepare(model, device="CUDA:0")

it happens:

RuntimeError: [enforce fail at common_cudnn.h:118] version_match. cuDNN compiled (6021) and runtime (7003) versions mismatch

My cuDNN version is 6.0.21, and I compiled caffe2 with cuDNN 6.0.21, so why is the runtime cuDNN version 7.0.03? Does onnx-caffe2 support cuDNN 7 only?

Fail to convert caffe2 module to onnx module.

I can't convert a caffe2 model to an onnx model.
I first tried the example in the README and then wrote an MNIST model myself; both of them generated the following error messages.
I am using x86_64 with Ubuntu 14.04 and no GPUs currently.
Thanks in advance for any suggestions!

Traceback (most recent call last):
File "caffe2toonnx.py", line 25, in
value_info,
File "/users/lucasyc/anaconda2/lib/python2.7/site-packages/onnx_caffe2/frontend.py", line 517, in caffe2_net_to_onnx_model
model = make_model(cls.caffe2_net_to_onnx_graph(*args, **kwargs))
File "/users/lucasyc/anaconda2/lib/python2.7/site-packages/onnx_caffe2/frontend.py", line 357, in caffe2_net_to_onnx_graph
cls._ssa_rewrite(predict_net, init_net, value_info)
File "/users/lucasyc/anaconda2/lib/python2.7/site-packages/onnx_caffe2/frontend.py", line 492, in _ssa_rewrite
assert re.match('GivenTensor.*Fill', op.type)
AssertionError

onnx-caffe2 is slower?

I tried to run this tutorial: Example: End-to-end AlexNet from PyTorch to Caffe2.
However, I found the inference speed of onnx-caffe2 is 10x slower than the original PyTorch model.
Can anyone help? Thanks. If the inference time were comparable, it would be great to deploy PyTorch models using Caffe2.

My Machine:

Ubuntu 14.04
CUDA 8.0
cudnn 7.0.3
Caffe2 latest
Pytorch 0.3.0

OnnxAttributes type conversion is error prone

The OnnxAttributes helper class that I wrote suffers from the problem that it uses exactly the type of the attribute value to decide whether to write out f or i attribute protos. I don't know if Caffe2 will accept an i anywhere an f is expected, but if it does not, then this leads to the very real danger that someone writes a literal like 1 or 2, expecting it to "just work", when the underlying attribute actually required floats.
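
For comparison, onnx.helper.make_attribute from the onnx package shows the same type-driven behavior: the Python type of the value decides whether the i or f field of the AttributeProto is filled, so the literals 1 and 1.0 produce different protos. The attribute name below is made up.

import onnx
from onnx import helper

a_int = helper.make_attribute("alpha", 1)      # stored in the i field (INT)
a_float = helper.make_attribute("alpha", 1.0)  # stored in the f field (FLOAT)

print(onnx.AttributeProto.AttributeType.Name(a_int.type))    # INT
print(onnx.AttributeProto.AttributeType.Name(a_float.type))  # FLOAT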

Intent to Fix: Convolution padding

Caffe2 requires the padding to be 2x the size of the kernel size, which is a little weird and contradicts the documentation. I'm fixing onnx-caffe2 to expand the padding to the correct size when necessary.
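
A minimal sketch of the kind of expansion described above (not the actual onnx-caffe2 fix): if only one pad value is given per spatial axis, the list is repeated so that Caffe2 receives both begin and end padding for every axis.

def expand_pads(pads, num_spatial_dims):
    # e.g. [2, 2] for a 2-D convolution becomes [2, 2, 2, 2]
    if len(pads) == num_spatial_dims:
        return list(pads) * 2
    assert len(pads) == 2 * num_spatial_dims, "unexpected pads length"
    return list(pads)

print(expand_pads([2, 2], 2))  # [2, 2, 2, 2]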

Allow to filter out inputs in caffe2_net_to_onnx_model

Caffe2 nets usually have stubs for the inputs in the form of ConstantFill ops in the init_net. This causes caffe2_net_to_onnx_model to complain, as it doesn't know how to handle this op.

Instead, we should just allow the user to specify the list of model inputs and filter them out of the init_net. Also, we can extract ValueInfo from the shape to make things nicer.
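
As a sketch of the ValueInfo extraction suggested above, onnx.helper.make_tensor_value_info builds a ValueInfoProto from a name, an element type, and a shape; the input name and shape below are made up.

from onnx import TensorProto, helper

# Hypothetical model input: a float tensor named "data" with shape [1, 3, 224, 224].
vi = helper.make_tensor_value_info("data", TensorProto.FLOAT, [1, 3, 224, 224])
print(vi)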

Onnx 'helper' and 'checker' attributes are not defined

I have onnx 0.2.1 installed in my conda virtual environment

conda list | grep onnx

packages in environment at /Users/aanirud/anaconda2/envs/onnx:

onnx 0.2.1 py27_1 ezyang
onnx-caffe2 0.2.1 py27hbe716ef_1 ezyang
onnx-mxnet 0.4.1

But I am not able to use the onnx.checker or onnx.helper attributes -

import onnx
onnx.checker.check_model("toy_model.onnx")
Traceback (most recent call last):
File "", line 1, in
AttributeError: 'module' object has no attribute 'checker'

Get the same error when I try to use onnx.helper.

Move onnx-optimize to ONNX repository

While refactoring onnx-optimize into a Python extension, I have to change the namespace/package from onnx to onnx_caffe2; otherwise, we will have package conflicts.

So I think it's better to remove the ATen dependency and move onnx-optimize to the onnx repo.

import error

When I run

import onnx
import onnx_caffe2.backend

I got

[libprotobuf FATAL google/protobuf/stubs/common.cc:61] This program requires version 3.4.0 of the Protocol Buffer runtime library, but the installed version is 2.6.1.  Please update your library.  If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library.  (Version verification failed in "google/protobuf/any.pb.cc".)
terminate called after throwing an instance of 'google::protobuf::FatalException'
  what():  This program requires version 3.4.0 of the Protocol Buffer runtime library, but the installed version is 2.6.1.  Please update your library.  If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library.  (Version verification failed in "google/protobuf/any.pb.cc".)
Aborted (core dumped)

However, when I check,

$ protoc --version
libprotoc 3.4.0

Besides, I can run import torch.onnx and import onnx_caffe2.backend independently, but cannot run them together. Note that I have successfully installed caffe2 as instructed.

Can anyone help with it?
