
Comments (2)

gfursin commented on September 16, 2024

There may be several potential issues with the CUDA run - if CUDA is not installed properly or fails on your system, ONNX Runtime may silently fall back to the CPU. I haven't seen such cases before, but I assume that is what's happening here. Also, we usually do not mix CPU and CUDA installations, so you need to clean the CM cache between such runs:

cm rm cache -f

Maybe you can clean the cache, rerun the above command with --device=cuda, and submit the full log?
We may need to handle such cases better ... Thanks a lot again for your feedback - it helps us improve CM for everyone!
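
As a quick sanity check before rerunning, you could also verify that ONNX Runtime actually sees the GPU. A minimal sketch, assuming the onnxruntime-gpu package is installed; the model path is a placeholder, not a file from this run:

import onnxruntime as ort

# Providers compiled into this onnxruntime build; "CUDAExecutionProvider"
# must be listed here for --device=cuda to work at all.
print(ort.get_available_providers())

# Even when CUDA is requested, onnxruntime falls back to CPU if the CUDA
# provider fails to initialize, so check what a session actually uses.
sess = ort.InferenceSession(
    "model.onnx",  # placeholder path for illustration only
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(sess.get_providers())  # should start with "CUDAExecutionProvider"

If "CUDAExecutionProvider" is missing from either list, the run above was almost certainly executing on the CPU.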


KingICCrab commented on September 16, 2024

After running the command (cm rm cache -f), I ran:

cmr "run mlperf inference generate-run-cmds _submission" --quiet --submitter="MLCommons" --hw_name=default --model=bert-99 --implementation=reference --backend=onnxruntime --device=cuda --scenario=Offline --adr.compiler.tags=gcc --target_qps=1 --category=edge --division=open

The errors are as follows.

GPU Device ID: 0
GPU Name: NVIDIA GeForce RTX 4070 Laptop GPU
GPU compute capability: 8.9
CUDA driver version: 12.2
CUDA runtime version: 12.4
Global memory: 8585216000
Max clock rate: 1980.000000 MHz
Total amount of shared memory per block: 49152
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 1536
Maximum number of threads per block: 1024
Max dimension size of a thread block X: 1024
Max dimension size of a thread block Y: 1024
Max dimension size of a thread block Z: 64
Max dimension size of a grid size X: 2147483647
Max dimension size of a grid size Y: 65535
Max dimension size of a grid size Z: 65535

        Detected version: 24.0
         ! cd /home/zhaohc/CM/repos/local/cache/07081a5ef7a04a4a
         ! call /home/zhaohc/CM/repos/mlcommons@ck/cm-mlops/script/get-generic-python-lib/run.sh from tmp-run.sh
         ! call "postprocess" from /home/zhaohc/CM/repos/mlcommons@ck/cm-mlops/script/get-generic-python-lib/customize.py
           ! cd /home/zhaohc/CM/repos/local/cache/c7e571cb13e549e1
           ! call /home/zhaohc/CM/repos/mlcommons@ck/cm-mlops/script/get-generic-python-lib/run.sh from tmp-run.sh
           ! call "detect_version" from /home/zhaohc/CM/repos/mlcommons@ck/cm-mlops/script/get-generic-python-lib/customize.py
      Detected version: 5.1
       ! cd /home/zhaohc/CM/repos/local/cache/c7e571cb13e549e1
       ! call /home/zhaohc/CM/repos/mlcommons@ck/cm-mlops/script/get-generic-python-lib/run.sh from tmp-run.sh
       ! call "postprocess" from /home/zhaohc/CM/repos/mlcommons@ck/cm-mlops/script/get-generic-python-lib/customize.py

Generating SUT description file for default-onnxruntime
HW description file for default not found. Copying from default!!!
! call "postprocess" from /home/zhaohc/CM/repos/mlcommons@ck/cm-mlops/script/get-mlperf-inference-sut-description/customize.py

/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/logging.cc: In member function ‘void mlperf::logging::AsyncLog::RecordTokenCompletion(uint64_t, std::chrono::_V2::system_clock::time_point, mlperf::QuerySampleLatency)’:
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/logging.cc:483:61: warning: unused parameter ‘completion_time’ [-Wunused-parameter]
483 | PerfClock::time_point completion_time,
| ~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/logging.cc: In member function ‘std::vector mlperf::logging::AsyncLog::GetTokenLatencies(size_t)’:
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/logging.cc:601:68: warning: unused parameter ‘expected_count’ [-Wunused-parameter]
601 | std::vector AsyncLog::GetTokenLatencies(size_t expected_count) {
| ~~~~~~~^~~~~~~~~~~~~~
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/logging.cc: In member function ‘std::vector mlperf::logging::AsyncLog::GetTimePerOutputToken(size_t)’:
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/logging.cc:607:72: warning: unused parameter ‘expected_count’ [-Wunused-parameter]
607 | std::vector AsyncLog::GetTimePerOutputToken(size_t expected_count){
| ~~~~~~~^~~~~~~~~~~~~~
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/logging.cc: In member function ‘std::vector mlperf::logging::AsyncLog::GetTokensPerSample(size_t)’:
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/logging.cc:613:58: warning: unused parameter ‘expected_count’ [-Wunused-parameter]
613 | std::vector<int64_t> AsyncLog::GetTokensPerSample(size_t expected_count) {
| ~~~~~~~^~~~~~~~~~~~~~
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc: In instantiation of ‘void mlperf::loadgen::RunPerformanceMode(mlperf::SystemUnderTest*, mlperf::QuerySampleLibrary*, const mlperf::loadgen::TestSettingsInternal&, mlperf::loadgen::SequenceGen*) [with mlperf::TestScenario scenario = mlperf::TestScenario::SingleStream]’:
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:1132:61: required from ‘static mlperf::loadgen::RunFunctions mlperf::loadgen::RunFunctions::GetCompileTime() [with mlperf::TestScenario compile_time_scenario = mlperf::TestScenario::SingleStream]’
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:1138:58: required from here
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::first_token_latency_min’ [-Wmissing-field-initializers]
918 | PerformanceSummary perf_summary{sut->Name(), settings, std::move(pr)};
| ^~~~~~~~~~~~
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::first_token_latency_max’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::first_token_latency_mean’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::time_per_output_token_min’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::time_per_output_token_max’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::time_per_output_token_mean’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::first_token_latency_min’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::first_token_latency_max’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::first_token_latency_mean’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::time_per_output_token_min’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::time_per_output_token_max’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:918:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::time_per_output_token_mean’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc: In instantiation of ‘void mlperf::loadgen::FindPeakPerformanceMode(mlperf::SystemUnderTest*, mlperf::QuerySampleLibrary*, const mlperf::loadgen::TestSettingsInternal&, mlperf::loadgen::SequenceGen*) [with mlperf::TestScenario scenario = mlperf::TestScenario::SingleStream]’:
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:1132:61: required from ‘static mlperf::loadgen::RunFunctions mlperf::loadgen::RunFunctions::GetCompileTime() [with mlperf::TestScenario compile_time_scenario = mlperf::TestScenario::SingleStream]’
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:1138:58: required from here
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:988:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::first_token_latency_min’ [-Wmissing-field-initializers]
988 | PerformanceSummary base_perf_summary{sut->Name(), base_settings,
| ^~~~~~~~~~~~~~~~~
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:988:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::first_token_latency_max’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:988:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::first_token_latency_mean’ [-Wmissing-field-initializers]
/home/zhaohc/CM/repos/local/cache/d071d1318a114521/inference/loadgen/loadgen.cc:988:22: warning: missing initializer for member ‘mlperf::loadgen::PerformanceSummary::time_per_output_token_min’ [-Wmissing-field-initializers]

SUT: default-reference-gpu-onnxruntime-v1.17.1-default_config, model: bert-99, scenario: Offline, target_qps updated as 44.1568
New config stored in /home/zhaohc/CM/repos/local/cache/9039508f728b4d64/configs/default/reference-implementation/gpu-device/onnxruntime-framework/framework-version-v1.17.1/default_config-config.yaml
[2024-03-20 20:08:05,501 log_parser.py:50 INFO] Sucessfully loaded MLPerf log from /home/zhaohc/test_results/default-reference-gpu-onnxruntime-v1.17.1-default_config/bert-99/offline/performance/run_1/mlperf_log_detail.txt.
[2024-03-20 20:08:05,506 log_parser.py:50 INFO] Sucessfully loaded MLPerf log from /home/zhaohc/test_results/default-reference-gpu-onnxruntime-v1.17.1-default_config/bert-99/offline/performance/run_1/mlperf_log_detail.txt.
