
Comments (6)

mszhanyi commented on September 26, 2024

It looks like you didn't warm up the GPU before measuring the CUDA EP inference time.
Simply put, the GPU test code should be something like:

output = session.run(None, {"input": input_data})  # Warm-up run; replace "input" with the actual name of your model's input
start_time = time.time()
for i in range(10):
    output = session.run(None, {"input": input_data})
inference_time = (time.time() - start_time) / 10  # Average over 10 timed runs

https://forums.developer.nvidia.com/t/why-warm-up/48565


Juek396 commented on September 26, 2024


Thanks! Now when I run this code:
import onnxruntime
import numpy as np
import time

onnx_model_path = "realesrgan-x4.onnx"
providers = ['CUDAExecutionProvider']
sess_options = onnxruntime.SessionOptions()
sess_options.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_BASIC
session = onnxruntime.InferenceSession(onnx_model_path, sess_options=sess_options, providers=providers)
x = session.get_providers()
print(x)

input_name = session.get_inputs()[0].name
output_name = session.get_outputs()[0].name

# Warm-up the model
input_shape = (1, 3, 64, 64)  # Example input shape
input_data = np.random.randn(*input_shape).astype(np.float32)
_ = session.run([output_name], {input_name: input_data})

# Timed run on the CUDA EP
start_time = time.time()
output = session.run([output_name], {input_name: input_data})
inference_time = time.time() - start_time

# Same model on the CPU EP for comparison
providers = ['CPUExecutionProvider']
session2 = onnxruntime.InferenceSession(onnx_model_path, sess_options=sess_options, providers=providers)

start_time2 = time.time()
output = session2.run([output_name], {input_name: input_data})
inference_time2 = time.time() - start_time2

print("Inference time: GPU ", inference_time, "seconds")
print("Inference time: CPU", inference_time2, "seconds")

This is the output:
Inference time: GPU 0.04687690734863281 seconds
Inference time: CPU 0.37531065940856934 seconds

But when I warm up with input shape (1, 3, 64, 64)
and then run the session on input shape (1, 3, 224, 224),

the output is:
Inference time: GPU 55.74775457382202 seconds
Inference time: CPU 0.2968311309814453 seconds

How can I warm it up so that it works with dynamic input shapes? My model can work on different input shapes.


mszhanyi commented on September 26, 2024

You could try running warm-up runs with the full range of possible shapes.
Could you please check out this post:
#13198 (comment)
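
A minimal sketch of that approach, assuming the application only ever sees a handful of square input sizes (the shape list below is illustrative, not from the original code):

# Run one inference per expected input shape so the CUDA EP selects its
# cuDNN algorithms for each shape up front. The shape list is an
# assumption; replace it with the shapes your application actually uses.
warmup_shapes = [(1, 3, s, s) for s in (64, 128, 224, 256, 512)]
for shape in warmup_shapes:
    dummy = np.random.randn(*shape).astype(np.float32)
    _ = session.run([output_name], {input_name: dummy})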


Juek396 commented on September 26, 2024


I can warm up with all possible shapes, but I wanted to know whether there is any way to save the warmed-up model, or any kind of warm-up data that I can reuse in the future. Warming up every possible shape will take around 12 hours, so I can't warm up every time I run the application. It's not practical.


hariharans29 commented on September 26, 2024

You may explore the other cuDNN algorithm search options (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#cudnn_conv_algo_search) for your "dynamic shape" use case.
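
For reference, a minimal sketch of setting that option via the CUDA EP's provider options, as documented at the link above. HEURISTIC skips the exhaustive per-shape benchmarking that makes the first run on a new shape so slow, at the possible cost of slightly less optimal steady-state kernels:

# Pick cuDNN conv algorithms heuristically instead of exhaustively
# benchmarking candidates on every previously unseen input shape.
providers = [('CUDAExecutionProvider', {'cudnn_conv_algo_search': 'HEURISTIC'})]
session = onnxruntime.InferenceSession(onnx_model_path, sess_options=sess_options, providers=providers)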

Being warmed up is a state of the runtime, not a state of the model itself, so there is no way to dump a "warmed-up" model. What you are asking is whether ORT supports dumping the optimal kernel configurations it finds during warm-up, and currently ORT does not support dumping or reading such "config" files. If you really want to go that route for your image-model use case, you could try adding logic to serialize this information to a config file on disk after your first warmed-up session and re-read it in subsequent sessions.


github-actions commented on September 26, 2024

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

