Comments (4)
Sweet! I assumed that PR was already part of the stable release. It's looking much better with ort-nightly, thank you!
Using PyTorch 2.3.1, Transformers 4.41.2, and ORT 1.19.0
Optimizing with ViT config:
{'EmbedLayerNormalization': 0, 'Attention': 0, 'MultiHeadAttention': 0, 'Gelu': 0, 'FastGelu': 0, 'BiasGelu': 0, 'GemmFastGelu': 0, 'LayerNormalization': 3, 'SimplifiedLayerNormalization': 0, 'SkipLayerNormalization': 23, 'SkipSimplifiedLayerNormalization': 0, 'RotaryEmbedding': 0, 'QOrderedAttention': 0, 'QOrderedGelu': 0, 'QOrderedLayerNormalization': 0, 'QOrderedMatMul': 0}
Optimizing with CLIP config:
{'Attention': 12, 'LayerNormalization': 3, 'QuickGelu': 12, 'SkipLayerNormalization': 23}
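The counts above are the fused-operator statistics that the optimizer reports after fusion. A minimal sketch of checking them programmatically; the dicts are copied from the two CLIP runs in this thread, and the helper name is mine, not part of the onnxruntime API:

```python
# Fused-operator statistics as printed above: stable ORT 1.18.0 vs ort-nightly.
stable_clip = {"Attention": 0, "LayerNormalization": 3, "SkipLayerNormalization": 23}
nightly_clip = {"Attention": 12, "LayerNormalization": 3, "QuickGelu": 12,
                "SkipLayerNormalization": 23}

def attention_fused(stats: dict) -> bool:
    """True if any attention pattern was fused into a single fused-attention op."""
    return stats.get("Attention", 0) > 0 or stats.get("MultiHeadAttention", 0) > 0

print(attention_fused(stable_clip))   # stable 1.18.0: False
print(attention_fused(nightly_clip))  # ort-nightly: True
```

An `Attention` count of 0 alongside nonzero `SkipLayerNormalization` is the signature of the partial fusion seen with stable 1.18.0 below.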
Can you specify the transformers version as well? They made a lot of changes recently.
Hi,
In [6]: import transformers
In [7]: transformers.__version__
Out[7]: '4.41.2'
I also tried with 4.28.1, which (I think) was released around the time CLIP fusion support was added to onnxruntime, but got the same results:
Using PyTorch 2.3.1, Transformers 4.28.1, and ORT 1.18.0
Optimizing with ViT config:
{'EmbedLayerNormalization': 0, 'Attention': 0, 'MultiHeadAttention': 0, 'Gelu': 0, 'FastGelu': 0, 'BiasGelu': 0, 'GemmFastGelu': 0, 'LayerNormalization': 3, 'SimplifiedLayerNormalization': 0, 'SkipLayerNormalization': 23, 'SkipSimplifiedLayerNormalization': 0, 'RotaryEmbedding': 0, 'QOrderedAttention': 0, 'QOrderedGelu': 0, 'QOrderedLayerNormalization': 0, 'QOrderedMatMul': 0}
Optimizing with CLIP config:
{'Attention': 0, 'LayerNormalization': 3, 'SkipLayerNormalization': 23}
I'll run the test again with PyTorch 1.13.1 later.
Edit: my mistake, 4.28.1 is much older than either of the two PRs that added CLIP attention fusion from onnxruntime.
Can you try your code with the nightly ORT package instead of the stable ORT 1.18.0 package? New fusions for CLIP were recently added in this PR and aren't in the stable ORT package currently. Alternatively, you can try the "from source" instructions in the PR's description.
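Until those fusions ship in a stable release, it may help to guard on the installed ORT version before relying on CLIP attention fusion. A small stdlib-only sketch; the 1.19 threshold is an assumption based on the nightly version (1.19.0) reported in this thread, not a documented cutoff:

```python
def version_tuple(v: str) -> tuple:
    """Parse a dotted version like '1.18.0' into a comparable tuple,
    dropping any non-numeric suffix (e.g. '1.19.0.dev20240601')."""
    parts = []
    for p in v.split("."):
        if not p.isdigit():
            break
        parts.append(int(p))
    return tuple(parts)

# Assumed threshold: the 1.19.0 nightly in this thread fuses CLIP attention,
# while stable 1.18.0 does not.
CLIP_FUSION_MIN = (1, 19)

def has_clip_attention_fusion(ort_version: str) -> bool:
    return version_tuple(ort_version)[:2] >= CLIP_FUSION_MIN

print(has_clip_attention_fusion("1.18.0"))              # False
print(has_clip_attention_fusion("1.19.0.dev20240601"))  # True
```

In practice you would pass `onnxruntime.__version__` to the check and fall back to the unfused path (or warn) on older builds.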