Comments (2)
There may be several potential issues with a CUDA run: if CUDA is not installed properly or fails on your system, ONNX Runtime may silently fall back to the CPU. I haven't seen such cases before, but I assume that is what happened here. Also, we usually do not mix CPU and CUDA installations, so you need to clean the CM cache between such runs:
cm rm cache -f
Could you clean the cache, rerun the above command with --device=cuda, and submit the full log?
We may need to handle such cases better ... Thanks a lot again for your feedback; it helps us improve CM for everyone!
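One quick way to confirm whether ONNX Runtime can actually see the GPU, rather than silently falling back to the CPU as described above, is to inspect its available execution providers. This is a minimal sketch, assuming the onnxruntime (or onnxruntime-gpu) package is installed; the helper function and its name are illustrative, not part of CM:

```python
def cuda_provider_available(providers):
    """Return True if the CUDA execution provider appears in the given list."""
    return "CUDAExecutionProvider" in providers

if __name__ == "__main__":
    try:
        # onnxruntime is an assumption here: install onnxruntime-gpu for CUDA support
        import onnxruntime as ort
        providers = ort.get_available_providers()
        print("Available providers:", providers)
        if cuda_provider_available(providers):
            print("CUDA is visible to ONNX Runtime")
        else:
            print("CUDA provider missing - inference will fall back to CPU")
    except ImportError:
        print("onnxruntime is not installed in this environment")
```

If only CPUExecutionProvider is listed even though the GPU and driver look fine, that usually points to a mismatched onnxruntime build rather than a CM problem.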
from ck.
After running the command (cm rm cache -f), I ran:

cmr "run mlperf inference generate-run-cmds _submission" --quiet --submitter="MLCommons" --hw_name=default --model=bert-99 --implementation=reference --backend=onnxruntime --device=cuda --scenario=Offline --adr.compiler.tags=gcc --target_qps=1 --category=edge --division=open

and got the following errors:
GPU Device ID: 0
GPU Name: NVIDIA GeForce RTX 4070 Laptop GPU
GPU compute capability: 8.9
CUDA driver version: 12.2
CUDA runtime version: 12.4
Global memory: 8585216000
Max clock rate: 1980.000000 MHz
Total amount of shared memory per block: 49152
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 1536
Maximum number of threads per block: 1024
Max dimension size of a thread block X: 1024
Max dimension size of a thread block Y: 1024
Max dimension size of a thread block Z: 64
Max dimension size of a grid size X: 2147483647
Max dimension size of a grid size Y: 65535
Max dimension size of a grid size Z: 65535
Detected version: 24.0
! cd /home/zhaohc/CM/repos/local/cache/07081a5ef7a04a4a
! call /home/zhaohc/CM/repos/mlcommons@ck/cm-mlops/script/get-generic-python-lib/run.sh from tmp-run.sh
! call "postprocess" from /home/zhaohc/CM/repos/mlcommons@ck/cm-mlops/script/get-generic-python-lib/customize.py
! cd /home/zhaohc/CM/repos/local/cache/c7e571cb13e549e1
! call /home/zhaohc/CM/repos/mlcommons@ck/cm-mlops/script/get-generic-python-lib/run.sh from tmp-run.sh
! call "detect_version" from /home/zhaohc/CM/repos/mlcommons@ck/cm-mlops/script/get-generic-python-lib/customize.py
Detected version: 5.1
! cd /home/zhaohc/CM/repos/local/cache/c7e571cb13e549e1
! call /home/zhaohc/CM/repos/mlcommons@ck/cm-mlops/script/get-generic-python-lib/run.sh from tmp-run.sh
! call "postprocess" from /home/zhaohc/CM/repos/mlcommons@ck/cm-mlops/script/get-generic-python-lib/customize.py
Generating SUT description file for default-onnxruntime
HW description file for default not found. Copying from default!!!
! call "postprocess" from /home/zhaohc/CM/repos/mlcommons@ck/cm-mlops/script/get-mlperf-inference-sut-description/customize.py
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/logging.cc: In member function ‘void mlperf::logging::AsyncLog::RecordTokenCompletion(uint64_t, std::chrono::_V2::system_clock::time_point, mlperf::QuerySampleLatency)’:
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/logging.cc:483:61: warning: unused parameter ‘completion_time’ [-Wunused-parameter]
483 | PerfClock::time_point completion_time,
| ~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/logging.cc: In member function ‘std::vector mlperf::logging::AsyncLog::GetTokenLatencies(size_t)’:
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/logging.cc:601:68: warning: unused parameter ‘expected_count’ [-Wunused-parameter]
601 | std::vector AsyncLog::GetTokenLatencies(size_t expected_count) {
| ~~~~~~~^~~~~~~~~~~~~~
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/logging.cc: In member function ‘std::vector mlperf::logging::AsyncLog::GetTimePerOutputToken(size_t)’:
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/logging.cc:607:72: warning: unused parameter ‘expected_count’ [-Wunused-parameter]
607 | std::vector AsyncLog::GetTimePerOutputToken(size_t expected_count){
| ~~~~~~~^~~~~~~~~~~~~~
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/logging.cc: In member function ‘std::vector mlperf::logging::AsyncLog::GetTokensPerSample(size_t)’:
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/logging.cc:613:58: warning: unused parameter ‘expected_count’ [-Wunused-parameter]
613 | std::vector<int64_t> AsyncLog::GetTokensPerSample(size_t expected_count) {
| ~~~~~~~^~~~~~~~~~~~~~
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc: In instantiation of ‘void mlperf::loadgen::RunPerformanceMode(mlperf::SystemUnderTest*, mlperf::QuerySampleLibrary*, const mlperf::loadgen::TestSettingsInternal&, mlperf::loadgen::SequenceGen*) [with mlperf::TestScenario scenario = mlperf::TestScenario::SingleStream]’:
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:1132:61: required from ‘static mlperf::loadgen::RunFunctions mlperf::loadgen::RunFunctions::GetCompileTime() [with mlperf::TestScenario compile_time_scenario = mlperf::TestScenario::SingleStream]’
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:1138:58: required from here
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::first_token_latency_min’ [-Wmissing-field-initializers]
918 | PerformanceSummary perf_summary{sut->Name(), settings, std::move(pr)};
| ^~~~~~~~~~~~
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::first_token_latency_max’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::first_token_latency_mean’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::time_per_output_token_min’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::time_per_output_token_max’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::time_per_output_token_mean’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::first_token_latency_min’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::first_token_latency_max’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::first_token_latency_mean’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::time_per_output_token_min’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::time_per_output_token_max’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::time_per_output_token_mean’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc: In instantiation of ‘void mlperf::loadgen::FindPeakPerformanceMode(mlperf::SystemUnderTest*, mlperf::QuerySampleLibrary*, const mlperf::loadgen::TestSettingsInternal&, mlperf::loadgen::SequenceGen*) [with mlperf::TestScenario scenario = mlperf::TestScenario::SingleStream]’:
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:1132:61: required from ‘static mlperf::loadgen::RunFunctions mlperf::loadgen::RunFunctions::GetCompileTime() [with mlperf::TestScenario compile_time_scenario = mlperf::TestScenario::SingleStream]’
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:1138:58: required from here
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:988:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::first_token_latency_min’ [-Wmissing-field-initializers]
988 | PerformanceSummary base_perf_summary{sut->Name(), base_settings,
| ^~~~~~~~~~~~~~~~~
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:988:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::first_token_latency_max’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:988:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::first_token_latency_mean’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:988:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::time_per_output_token_min’ [-Wmissing-field-initializers]
SUT: default-reference-gpu-onnxruntime-v1.17.1-default_config, model: bert-99, scenario: Offline, target_qps updated as 44.1568
New config stored in /home/zhaohc/CM/repos/local/cache/9039508f728b4d64/configs/default/reference-implementation/gpu-device/onnxruntime-framework/framework-version-v1.17.1/default_config-config.yaml
[2024-03-20 20:08:05,501 log_parser.py:50 INFO] Sucessfully loaded MLPerf log from /home/zhaohc/test_results/default-reference-gpu-onnxruntime-v1.17.1-default_config/bert-99/offline/performance/run_1/mlperf_log_detail.txt.
[2024-03-20 20:08:05,506 log_parser.py:50 INFO] Sucessfully loaded MLPerf log from /home/zhaohc/test_results/default-reference-gpu-onnxruntime-v1.17.1-default_config/bert-99/offline/performance/run_1/mlperf_log_detail.txt.