Comments (6)
It looks like you didn't warm up the GPU before measuring the CUDA EP inference time.
Put simply, the GPU test code should be something like:

# Warm-up run (replace "input" with the actual name of your model's input)
output = session.run(None, {"input": input_data})

# Time 10 inference runs and take the average
start_time = time.time()
for i in range(10):
    output = session.run(None, {"input": input_data})
inference_time = (time.time() - start_time) / 10
https://forums.developer.nvidia.com/t/why-warm-up/48565
Thanks! Now when I run this code:
import onnxruntime
import numpy as np
import time

onnx_model_path = "realesrgan-x4.onnx"

# GPU (CUDA EP) session
providers = ['CUDAExecutionProvider']
sess_options = onnxruntime.SessionOptions()
sess_options.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_BASIC
session = onnxruntime.InferenceSession(onnx_model_path, sess_options=sess_options, providers=providers)
print(session.get_providers())

input_name = session.get_inputs()[0].name
output_name = session.get_outputs()[0].name

# Warm up the model
input_shape = (1, 3, 64, 64)  # Example input shape
input_data = np.random.randn(*input_shape).astype(np.float32)
_ = session.run([output_name], {input_name: input_data})

# Timed GPU run
start_time = time.time()
output = session.run([output_name], {input_name: input_data})
inference_time = time.time() - start_time

# CPU session for comparison
providers = ['CPUExecutionProvider']
session2 = onnxruntime.InferenceSession(onnx_model_path, sess_options=sess_options, providers=providers)
start_time2 = time.time()
output = session2.run(None, {input_name: input_data})
inference_time2 = time.time() - start_time2

print("Inference time: GPU", inference_time, "seconds")
print("Inference time: CPU", inference_time2, "seconds")
This is the output:
Inference time: GPU 0.04687690734863281 seconds
Inference time: CPU 0.37531065940856934 seconds
But when I warm up with input shape (1, 3, 64, 64) and then run the session on input shape (1, 3, 224, 224), the output is:
Inference time: GPU 55.74775457382202 seconds
Inference time: CPU 0.2968311309814453 seconds
How can I warm it up so that it works with dynamic input shapes? My model can run on different input shapes.
You could try running warm-up runs with the full range of possible shapes. Could you please check out this post: #13198 (comment)
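As a minimal sketch of that suggestion (assuming a single model input; the candidate_shapes list here is hypothetical and should be replaced with the shapes your application actually uses):

import numpy as np
import onnxruntime

session = onnxruntime.InferenceSession("realesrgan-x4.onnx", providers=['CUDAExecutionProvider'])
input_name = session.get_inputs()[0].name

# Hypothetical list of shapes the application expects at runtime; each new
# shape triggers its own algorithm search the first time it is seen.
candidate_shapes = [(1, 3, 64, 64), (1, 3, 128, 128), (1, 3, 224, 224)]

for shape in candidate_shapes:
    dummy = np.random.randn(*shape).astype(np.float32)
    _ = session.run(None, {input_name: dummy})  # warm-up only; result discarded

After this loop, timed runs on any of the pre-warmed shapes should avoid the first-inference penalty shown above.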
I can warm up with all possible shapes, but I wanted to know whether there is any way to save the warmed-up model, or any kind of warm-up data that I can reuse later. Warming up every possible shape will take around 12 hours, so I can't warm up every time I run the application; it's not practical.
You may explore other cuDNN algo-picking options for your dynamic-shape use case: https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#cudnn_conv_algo_search
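For example, a minimal sketch of setting cudnn_conv_algo_search through the CUDA EP provider options (the model path is carried over from the code above):

import onnxruntime

# Valid values are EXHAUSTIVE (the default, which benchmarks candidate
# algorithms per shape), HEURISTIC (lightweight cuDNN heuristics), and
# DEFAULT (a fixed default algorithm).
providers = [
    ('CUDAExecutionProvider', {'cudnn_conv_algo_search': 'HEURISTIC'}),
    'CPUExecutionProvider',
]
session = onnxruntime.InferenceSession("realesrgan-x4.onnx", providers=providers)

HEURISTIC or DEFAULT avoids the exhaustive per-shape benchmarking that makes the first inference on each new shape so expensive, at the cost of possibly picking a slightly slower convolution algorithm.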
Being warmed up is a state of the runtime, not a state of the model itself, so there is no way to dump a "warmed-up" model. What you are asking is whether ORT supports dumping the optimal kernel configurations it finds as part of the warm-up process, and currently ORT does not support dumping or reading such "config" files. If you really want to go that route for your image-model use case, you can try adding logic to serialize this to a config file on disk for your first warmed-up session and re-read it in subsequent sessions.
This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.