Comments (1)
After exporting PYTHONPATH like this:

    export PYTHONPATH=onnxruntime/build/Linux/Release:${PYTHONPATH}
    export PYTHONPATH=onnxruntime/build/Linux/Release/_deps/tvm-src/python:${PYTHONPATH}
I ran into another error:
    >>> import onnxruntime
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/onnxruntime/build/Linux/Release/onnxruntime/__init__.py", line 57, in <module>
        raise import_capi_exception
      File "/onnxruntime/build/Linux/Release/onnxruntime/__init__.py", line 23, in <module>
        from onnxruntime.capi._pybind_state import ExecutionMode # noqa: F401
      File "/onnxruntime/build/Linux/Release/onnxruntime/capi/_pybind_state.py", line 12, in <module>
        from . import _ld_preload # noqa: F401
      File "/onnxruntime/build/Linux/Release/onnxruntime/capi/_ld_preload.py", line 13, in <module>
        import tvm
      File "/onnxruntime/build/Linux/Release/_deps/tvm-src/python/tvm/__init__.py", line 55, in <module>
        from . import tir
      File "/onnxruntime/build/Linux/Release/_deps/tvm-src/python/tvm/tir/__init__.py", line 24, in <module>
        from .expr import Var, SizeVar, Reduce, FloatImm, IntImm, StringImm, Cast
      File "/onnxruntime/build/Linux/Release/_deps/tvm-src/python/tvm/tir/expr.py", line 1014, in <module>
        class Load(PrimExprWithOp):
      File "/onnxruntime/build/Linux/Release/_deps/tvm-src/python/tvm/_ffi/registry.py", line 69, in register
        check_call(_LIB.TVMObjectTypeKey2Index(c_str(object_name), ctypes.byref(tidx)))
      File "/onnxruntime/build/Linux/Release/_deps/tvm-src/python/tvm/_ffi/base.py", line 348, in check_call
        raise get_last_ffi_error()
    tvm.error.InternalError: Traceback (most recent call last):
      4: 0xffffffffffffffff
      3: 0x000000000058852d
      2: ffi_call
      1: TVMObjectTypeKey2Index
      0: tvm::runtime::Object::TypeKey2Index(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
      File "/tvm/src/runtime/object.cc", line 165
    InternalError: Check failed: (it != type_key2index_.end()) is false: Cannot find type tir.Load. Did you forget to register the node by TVM_REGISTER_NODE_TYPE ?
    TVM hint: You hit an internal error. Please open a thread on https://discuss.tvm.apache.org/ to report it.
from onnxruntime.
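A "Cannot find type tir.Load" check failure is typically a sign of version skew: the tvm Python frontend found on PYTHONPATH tries to register a node type that the compiled libtvm the process actually loaded does not know about. The snippet below is a minimal sketch of the underlying mechanism only, not ONNX Runtime or TVM code: the first matching entry on sys.path wins, so the order of the prepended PYTHONPATH entries above decides which copy of a package Python imports. `demo_pkg` and the two temporary directories are hypothetical names used for illustration.

    import pathlib
    import sys
    import tempfile

    # Sketch of import shadowing: the FIRST matching sys.path entry wins,
    # so a stale copy prepended to PYTHONPATH silently shadows a fresh one
    # that appears later on the path.
    stale_dir = tempfile.mkdtemp()
    fresh_dir = tempfile.mkdtemp()
    pathlib.Path(stale_dir, "demo_pkg.py").write_text("VERSION = 'stale'\n")
    pathlib.Path(fresh_dir, "demo_pkg.py").write_text("VERSION = 'fresh'\n")

    sys.path.insert(0, fresh_dir)
    sys.path.insert(0, stale_dir)  # prepended last, so it is searched first

    import demo_pkg

    print(demo_pkg.VERSION)  # prints "stale": the shadowing copy was imported

Printing `tvm.__file__` after the failed import (in a fresh interpreter) would likewise show which tvm package the exports above actually selected, which can help confirm or rule out a mismatch with the built library.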