Comments (2)
It's a technical error in the description, then; it should be a 0-based index for axes.
The example in the spec (https://github.com/onnx/onnx/blob/master/docs/Operators.md#examples-97) even uses a 0 axis:
node = onnx.helper.make_node(
    'Squeeze',
    inputs=['x'],
    outputs=['y'],
    axes=[0],
)

x = np.random.randn(1, 3, 4, 5).astype(np.float32)
y = np.squeeze(x, axis=0)

expect(node, inputs=[x], outputs=[y],
       name='test_squeeze')
Given that ONNX is an interchange format between frameworks, and Caffe2, TensorFlow, PyTorch, etc. all use 0-based indexing, it wouldn't make sense for ONNX to use 1-based indexing.
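A minimal NumPy sketch (independent of any ONNX runtime) of why axes=[0] in the spec example only makes sense under 0-based indexing — the shape and axis values come from the spec example above:

```python
import numpy as np

# Shape (1, 3, 4, 5): only the leading dimension has extent 1,
# so it is the only dimension that can be squeezed out.
x = np.random.randn(1, 3, 4, 5).astype(np.float32)

# With 0-based axes, axis=0 targets that leading dimension.
y = np.squeeze(x, axis=0)
print(y.shape)  # (3, 4, 5)

# Under a 1-based convention, the same spec example would have to be
# written with axes=[1] to squeeze the leading dimension, which would
# contradict the reference test that uses axes=[0].
```

This mirrors the semantics that the spec's reference test checks against.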
from onnxruntime.
Forwarded to the ONNX repo: onnx/onnx#1755
from onnxruntime.
Related Issues (20)
- Wrong OrtApi size for v1.17.0/v1.17.1 ? HOT 1
- [Build] Broken build with Xcode 15.3 HOT 8
- [Performance] createSession() slow on release 1.15 and 1.17.1 as compare to 1.14 HOT 6
- [Build]Neither "onnx/onnx-ml.pb.h" nor "onnx/onnx.pb.h" is generated HOT 2
- [Web] When performing inference with ONNX Runtime in C++, using the libonnxruntime_webassembly.a static library, but encountering an error during the session.run() call in an HTML5 environment, the error message is: "Uncaught (in promise) 50699072". HOT 12
- @parcel/resolver-default: Could not load './dist/ort-web.min.js' from module 'onnxruntime-web' found in package.json#browser
- Is there any way to retrieve Quantization type and Quantization parameters using onnxruntime ? HOT 3
- [Build] Trying to build on a embedded device that doesn't support BFLOAT16 HOT 22
- [Performance] MultiHeadAttention CPU kernel slower than unfused HOT 4
- Improve Inference Performance on GPU [Python] HOT 2
- [Documentation] HTTP 404 for "Get Started" link in "ONNX Runtime JavaScript API" page
- a perfermance issue when use onnx runtime-tensorrt HOT 18
- Are there problems with the OpenVINO EP? HOT 4
- [Feature Request] Support AMX BF16
- Invalid rank for input,Got: 1 Expected: 4 HOT 4
- Inference Layer by Layer or feature extraction on Onnx Runtime HOT 2
- [Performance] version 1.17.1 causes performance regression over 1.16.3 both with TRT EP and Cuda EP on Faster-RCNN model inference HOT 3
- [Documentation Request]
- Issue while modifying onnx file HOT 1
- Add CUDA12 support for Java's onnxruntime_gpu dependency HOT 12