
play_with_tensorrt's Introduction

Play with TensorRT

  • Sample projects that use TensorRT in C++ on multiple platforms
  • The typical project structure is shown in the following diagram
    • 00_doc/design.jpg

Target

  • Platform
    • Linux (x64)
    • Linux (aarch64)
    • Windows (x64), Visual Studio 2019

Usage

./main [input]

  • input = (blank)
    • use the default image file set in the source code (main.cpp)
    • e.g. ./main
  • input = *.mp4, *.avi, *.webm
    • use a video file
    • e.g. ./main test.mp4
  • input = *.jpg, *.png, *.bmp
    • use an image file
    • e.g. ./main test.jpg
  • input = number (e.g. 0, 1, 2, ...)
    • use a camera
    • e.g. ./main 0
  • input = jetson
    • use a camera via GStreamer on Jetson
    • e.g. ./main jetson

How to build a project

0. Requirements

  • OpenCV 4.x
  • CUDA + cuDNN
  • TensorRT 8.x
    • If you get build errors related to the TensorRT location, adjust the cmake settings for it in InferenceHelper/inference_helper/CMakeLists.txt

1. Download

  • Download source code and pre-built libraries
    git clone https://github.com/iwatake2222/play_with_tensorrt.git
    cd play_with_tensorrt
    git submodule update --init
    sh InferenceHelper/third_party/download_prebuilt_libraries.sh
  • Download models
    sh ./download_resource.sh

2-a. Build in Linux

  • Build and run
    cd pj_tensorrt_cls_mobilenet_v2   # for example
    mkdir -p build && cd build
    cmake ..
    make
    ./main

2-b. Build in Windows (Visual Studio)

  • Configure and Generate a new project using cmake-gui for Visual Studio 2019 64-bit
    • Where is the source code : path-to-play_with_tensorrt/pj_tensorrt_cls_mobilenet_v2 (for example)
    • Where to build the binaries : path-to-build (any)
  • Open main.sln
  • Set the main project as the startup project, then build and run!

Configuration for TensorRT

You don't need to change any TensorRT configuration, but you can if you want.

Model format

  • The model file name is specified in xxx_engine.cpp; look for the MODEL_NAME definition
  • inference_helper_tensorrt.cpp automatically converts the model according to the model format (extension), as sketched after this list
    • .onnx : convert the model from ONNX to trt, and save the converted trt model
    • .uff : convert the model from UFF to trt, and save the converted trt model (WIP)
    • .trt : use the pre-converted trt model
  • If a *.trt file exists, InferenceHelper uses it, avoiding re-conversion and saving time
    • If you want to re-convert (for example, to try other conversion settings), delete resource/model/*.trt
    • Also, if you want to re-convert with INT8 calibration, delete CalibrationTable_cal.txt
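
A minimal sketch of that extension-based dispatch (the function name here is made up for illustration; the real logic lives in inference_helper_tensorrt.cpp and differs in detail):

    #include <string>

    // Choose a conversion path from the model file extension.
    void SelectConversionPath(const std::string& model_filename) {
        const std::string ext = model_filename.substr(model_filename.find_last_of('.'));
        if (ext == ".onnx") {
            // parse ONNX, build a TensorRT engine, then serialize and save it as .trt
        } else if (ext == ".uff") {
            // convert from UFF to trt (WIP)
        } else if (ext == ".trt") {
            // deserialize the pre-converted .trt engine directly
        }
    }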

DLA Cores (NVDLA)

  • GPU is used by default
  • Call SetDlaCore(0) or SetDlaCore(1) to use a DLA core, as in the sketch below
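
A minimal sketch, assuming an InferenceHelper instance created through the project's factory (the creation call below is an assumption for illustration; only SetDlaCore itself comes from this README):

    #include <memory>

    // Create the TensorRT inference helper (hypothetical call), then request DLA core 0.
    std::unique_ptr<InferenceHelper> inference_helper(InferenceHelper::Create(InferenceHelper::kTensorrt));
    inference_helper->SetDlaCore(0);   // SetDlaCore(1) selects the second DLA core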

Model conversion settings

  • The parameters for model conversion are defined in inference_helper_tensorrt.cpp
  • USE_FP16
    • define this for FP16 inference
  • USE_INT8_WITHOUT_CALIBRATION
    • define this for INT8 inference without calibration (I couldn't get good results with this)
  • USE_INT8_WITH_CALIBRATION
    • define this for INT8 inference (you also need INT8 calibration)
  • OPT_MAX_WORK_SPACE_SIZE
    • 1 << 30 (1 GiB)
  • OPT_AVG_TIMING_ITERATIONS
    • not in use
  • OPT_MIN_TIMING_ITERATIONS
    • not in use
  • Parameters for quantization calibration (see the sketch after this list)
    • CAL_DIR
      • directory containing the calibration images (ppm files of the same size as the model input)
    • CAL_LIST_FILE
      • text file listing the calibration images (file names only, without extension)
    • CAL_BATCH_SIZE
      • batch size for calibration
    • CAL_NB_BATCHES
      • the number of batches
    • CAL_IMAGE_C
      • the number of channels of the calibration images; must match the model input
    • CAL_IMAGE_H
      • the height of the calibration images; must match the model input
    • CAL_IMAGE_W
      • the width of the calibration images; must match the model input
    • CAL_SCALE
      • normalization parameter for calibration (should probably be the same value as used in training)
    • CAL_BIAS
      • normalization parameter for calibration (should probably be the same value as used in training)
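
For reference, a sketch of how these parameters might look in inference_helper_tensorrt.cpp (the values are illustrative, for a 256x256 RGB model normalized with ImageNet-style mean/std of 0.45/0.225; they are not recommended defaults):

    #define CAL_DIR        "ppmSamples"   /* directory holding the calibration ppm files */
    #define CAL_LIST_FILE  "list.txt"     /* one file name per line, without the .ppm extension */
    #define CAL_BATCH_SIZE 10             /* images per calibration batch */
    #define CAL_NB_BATCHES 2              /* number of batches fed to the calibrator */
    #define CAL_IMAGE_C    3              /* channels; must match the model input */
    #define CAL_IMAGE_H    256            /* height; must match the model input */
    #define CAL_IMAGE_W    256            /* width; must match the model input */
    /* normalize pixels to roughly -2.25 ~ 2.25 */
    #define CAL_SCALE      (1.0 / (255.0 * 0.225))
    #define CAL_BIAS       (0.45 / 0.225)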

Quantization Calibration

  • If you want to use INT8 mode, you need a calibration step
  1. Create ppm images of the same size as the model input from training images
    • you can use inference_helper/tensorrt/calibration/batchPrepare.py
    • python .\batchPrepare.py --inDir sample_org --outDir sample_ppm
  2. Copy the generated ppm files and list.txt to the target environment such as Jetson
  3. Use the .onnx model
  4. Modify the calibration parameters such as CAL_DIR, and define USE_INT8_WITH_CALIBRATION
  5. Compile the project and run it
  6. If it succeeds, a trt model file is generated; you can use it from then on

Note

  • Installing TensorRT on Windows
    • cuDNN installation
      • Copy all files into the CUDA directory
    • TensorRT installation
      • Copy all files into the CUDA directory
      • Or set an environment variable (TensorRT_ROOT = C:\Program Files\NVIDIA GPU Computing Toolkit\TensorRT\TensorRT-8.2.0.6) and add %TensorRT_ROOT%\lib to PATH

License

  • Copyright 2020 iwatake2222
  • Licensed under the Apache License, Version 2.0

Acknowledgements


play_with_tensorrt's Issues

Issue about running pj_tensorrt_seg_robust_video_matting

Environment (Hardware)

  • Hardware: 2060 GPU
  • Software: Windows 10, TensorRT-8.4.2.4.Windows10.x86_64.cuda-11.6.cudnn8.4, CUDA 11.1

Project Name

pj_tensorrt_seg_robust_video_matting

Issue Details

<1> about INT64 (screenshot)

<2> about issues when converted to FP16 (screenshot)

<3> the output window doesn't display anything (screenshot)

How to convert the ufld_v2 model file from .pth to .onnx

Environment (Hardware)

  • Ubuntu 20.04
  • TensorRT-8.4.1.5
  • onnx-1.13.0

Project Name

pj_tensorrt_lane_ultra-fast-lane-detection_v2

Issue Details

The provided onnx files convert to TensorRT successfully and run, but the results are not ideal, especially for ufldv2_tusimple_res18_320x800.onnx and ufldv2_tusimple_res34_320x800.onnx. I want to try converting my own trained .pth to TensorRT. How is the conversion from .pth to .onnx implemented in this method? Is there any related code?

paddleseg_cityscapessota

Hi,
I'm using paddleseg_cityscapessota_180x320, and after building with TensorRT I get this error:

engine_ = std::unique_ptr<nvinfer1::ICudaEngine>(runtime_->deserializeCudaEngine(plan->data(), plan->size()));
		if (!engine_) {
			PRINT_E("Failed to create engine (%s)\n", model_filename.c_str());
			return kRetErr;
		}

[InferenceHelperTensorRt][341] num_of_in_out = 2
[InferenceHelperTensorRt][344] tensor[0]->name: x
[InferenceHelperTensorRt][345] is input = 1
[InferenceHelperTensorRt][349] dims.d[0] = 1
[InferenceHelperTensorRt][349] dims.d[1] = 3
[InferenceHelperTensorRt][349] dims.d[2] = 180
[InferenceHelperTensorRt][349] dims.d[3] = 320
[InferenceHelperTensorRt][353] data_type = 0
[InferenceHelperTensorRt][344] tensor[1]->name: tmp_520
[InferenceHelperTensorRt][345] is input = 0
[InferenceHelperTensorRt][349] dims.d[0] = 1
[InferenceHelperTensorRt][349] dims.d[1] = 19
[InferenceHelperTensorRt][349] dims.d[2] = 180
[InferenceHelperTensorRt][349] dims.d[3] = 320
[InferenceHelperTensorRt][353] data_type = 0
[08/19/2022-14:41:54] [E] [TRT] 3: Cannot find binding of given name: tf.identity

usage of dynamic/static batch for yolov7

Question:

I am trying to figure out how to modify your code to create an instance for batch usage, both static and dynamic.
How can I implement it?

Environment (Hardware)

  • Jetson Orin
  • ubuntu 20.04

Tensorrt conversion

Nice that you got YOLOX working on the Jetson NX.
I was trying the same, but I couldn't convert the model on the Jetson because it runs out of memory. Did you experience the same? Did you convert the model on a different machine? What is the solution here?

I used the trt.py script of the YOLOX repository.

I appreciate your support on that!

Calibration failed with virtualMemoryBuffer.cpp::nvinfer1::StdVirtualMemoryBufferImpl

Environment (Hardware)

  • Hardware: RTX 3070ti, AMD Ryzen 5800X
  • Software: VS22

Cuda 11.3
Cudnn 8.4
TensorRT 8.4

Project Name

Int8 calibration

Issue Details

When I set all the important parameters for calibration as follows:
#define CAL_DIR "ppmSamples"
#define CAL_LIST_FILE "list.txt"
#define CAL_BATCH_SIZE 10
#define CAL_NB_BATCHES 2430
#define CAL_IMAGE_C 3
#define CAL_IMAGE_H 256
#define CAL_IMAGE_W 256
/* 0 ~ 1.0 */
// #define CAL_SCALE (1.0 / 255.0)
// #define CAL_BIAS (0.0)
/* -2.25 ~ 2.25 */
#define CAL_SCALE (1.0 / (255.0 * 0.225))
#define CAL_BIAS (0.45 / 0.225)

and follow all the preparation steps for the calibration, it doesn't work.
First of all, I used 2430 images for calibration, which I put into the directory ppmSamples, and the list is also filled with the image names without the file extension. When running the program I get the following console output, and I can't figure out what I'm doing wrong or whether there is a bug.

Error Log

[InferenceHelper][119] Use TensorRT
[08/27/2022-19:04:43] [I] [TRT] [MemUsageChange] Init CUDA: CPU +444, GPU +0, now: CPU 4908, GPU 1226 (MiB)
[08/27/2022-19:04:43] [I] [TRT] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 4952, GPU 1226 (MiB)
[08/27/2022-19:04:45] [I] [TRT] [MemUsageChange] Init builder kernel library: CPU +357, GPU +104, now: CPU 5441, GPU 1330 (MiB)
[08/27/2022-19:04:45] [I] [TRT] ----------------------------------------------------------------
[08/27/2022-19:04:45] [I] [TRT] Input filename:   resource/model/yolov7_256x256.onnx
[08/27/2022-19:04:45] [I] [TRT] ONNX IR version:  0.0.6
[08/27/2022-19:04:45] [I] [TRT] Opset version:    11
[08/27/2022-19:04:45] [I] [TRT] Producer name:    pytorch
[08/27/2022-19:04:45] [I] [TRT] Producer version: 1.11.0
[08/27/2022-19:04:45] [I] [TRT] Domain:
[08/27/2022-19:04:45] [I] [TRT] Model version:    0
[08/27/2022-19:04:45] [I] [TRT] Doc string:
[08/27/2022-19:04:45] [I] [TRT] ----------------------------------------------------------------
[08/27/2022-19:04:45] [W] [TRT] onnx2trt_utils.cpp:369: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/27/2022-19:04:45] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +796, GPU +314, now: CPU 6294, GPU 1652 (MiB)
[08/27/2022-19:04:45] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +139, GPU +58, now: CPU 6433, GPU 1710 (MiB)
[08/27/2022-19:04:45] [I] [TRT] Timing cache disabled. Turning it on will improve builder speed.
[08/27/2022-19:04:47] [I] [TRT] Detected 1 inputs and 4 output network tensors.
[08/27/2022-19:04:47] [I] [TRT] Total Host Persistent Memory: 26592
[08/27/2022-19:04:47] [I] [TRT] Total Device Persistent Memory: 0
[08/27/2022-19:04:47] [I] [TRT] Total Scratch Memory: 0
[08/27/2022-19:04:47] [I] [TRT] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 0 MiB, GPU 140 MiB
[08/27/2022-19:04:47] [I] [TRT] [BlockAssignment] Algorithm ShiftNTopDown took 127.918ms to assign 10 blocks to 336 nodes requiring 29229056 bytes.
[08/27/2022-19:04:47] [I] [TRT] Total Activation Memory: 29229056
[08/27/2022-19:04:47] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 7814, GPU 2392 (MiB)
[08/27/2022-19:04:47] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +10, now: CPU 7812, GPU 2376 (MiB)
[08/27/2022-19:04:47] [I] [TRT] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +27, now: CPU 0, GPU 167 (MiB)
[08/27/2022-19:04:47] [I] [TRT] Starting Calibration.
[08/27/2022-19:04:47] [I] Batch #0
[08/27/2022-19:04:47] [I] Calibrating with file 0.ppm
[08/27/2022-19:04:47] [I] Calibrating with file 1.ppm
[08/27/2022-19:04:47] [I] Calibrating with file 2.ppm
[08/27/2022-19:04:47] [I] Calibrating with file 3.ppm
[08/27/2022-19:04:47] [I] Calibrating with file 4.ppm
[08/27/2022-19:04:47] [I] Calibrating with file 5.ppm
[08/27/2022-19:04:47] [I] Calibrating with file 6.ppm
[08/27/2022-19:04:47] [I] Calibrating with file 7.ppm
[08/27/2022-19:04:47] [I] Calibrating with file 8.ppm
[08/27/2022-19:04:47] [I] Calibrating with file 9.ppm
Could not find ….ppm in data directories:
        ppmSamples
&&&& FAILED
[08/27/2022-19:04:47] [E] [TRT] 1: [virtualMemoryBuffer.cpp::nvinfer1::StdVirtualMemoryBufferImpl::~StdVirtualMemoryBufferImpl::104] Error Code 1: Cuda Runtime (driver shutting down)

Yolox-nano with deepstream on nvidia jetson

  • NVIDIA Jetson AGX Xavier [16GB]
    • Jetpack 4.6 [L4T 32.6.1]
    • NV Power Mode: MODE_30W_ALL - Type: 3
    • jetson_stats.service: active
  • Libraries:
    • CUDA: 10.2.300
    • cuDNN: 8.2.1.32
    • TensorRT: 8.0.1.6
    • Visionworks: 1.6.0.501
    • OpenCV: 4.5.1 compiled CUDA: YES
    • VPI: ii libnvvpi1 1.1.15 arm64 NVIDIA Vision Programming Interface library
    • Vulkan: 1.2.70

Project Name

pj_xxx

Issue Details

I cannot run yolox-nano or yolox-tiny with DeepStream on my Jetson. Every time I try to run them I get a segmentation fault or my terminal freezes. But I can run yolox-s at 1 FPS; slowly, but it does run.

I convert them from torch to TRT the same way and when doing inference with just TRT, all 3 models work.

How to Reproduce

Convert the original yolox-nano with torch2trt, and then try to use it with DeepStream 6.0.

Dynamic shapes

Hi,

Is it the case that InferenceHelper currently does not support dynamic shapes, e.g. in an ONNX model used by TensorRT? Do you have plans to add support for this?

-Mikko

How can I make a model with input size 640x384 from the original YOLOv7 training at 640x640?

Environment (Hardware)

  • GPU: CUDA 11
  • OS: Windows 11

Project Name

pj_tensorrt_det_yolov7

Issue Details

I have a question about your model...
After training YOLOv7 with the default input size of 640x640, how can I change the input size from 640x640 to 640x384?

Do you have your own way to do this?

Thanks ~


About TensorRT: how do I use my own model?

Environment (Hardware)

  • Software: Visual Studio 2019

Project Name

pj_tensorrt_seg_robust_video_matting

Issue Details

(screenshots)
What should be done?


Downloading issue with 'libtensorflow_framework.so.2'

Environment (Hardware)

  • CPU AMD Ryzen 7 5800x with Nvidia RTX 3070ti

Issue Details

The problem occurred after following the download steps.

How to Reproduce

The last step of downloading failed as shown in the image further down in the post. The preview of the image somehow doesn't work, so you need to open it manually.

Error Log



![dfd](https://user-images.githubusercontent.com/52799640/184359585-e896e3e9-d396-4c1f-90d9-2cc152f28aff.PNG)

Additional Information

My thought was that maybe there are some prerequisites you assumed everybody would already have installed, but I don't have them. I don't know what it is, but I literally followed the download steps and ran cmd with admin rights.

Int8 calibration trying to read a non-existent file

Environment (Hardware)

  • Hardware: RTX 3070ti, AMD Ryzen 5800X
  • Software: VS22

Cuda 11.3
Cudnn 8.4
TensorRT 8.4

Project Name

Int8 calibration

Issue Details

When I set all the important parameters for calibration as follows:

#define CAL_DIR        "ppmSamples"
#define CAL_LIST_FILE  "list.txt"
#define CAL_BATCH_SIZE 10
#define CAL_NB_BATCHES 2430
#define CAL_IMAGE_C    3
#define CAL_IMAGE_H    256
#define CAL_IMAGE_W    256
/* 0 ~ 1.0 */
// #define CAL_SCALE      (1.0 / 255.0)
// #define CAL_BIAS       (0.0)
/* -2.25 ~ 2.25 */
#define CAL_SCALE      (1.0 / (255.0 * 0.225))
#define CAL_BIAS       (0.45 / 0.225)

and follow all the preparation steps for the calibration, it doesn't work.
First of all, I used 2430 images for calibration, which I put into the directory ppmSamples, and the list is also filled with the image names without the file extension. When running the program I get the console output below, and I can't figure out what I'm doing wrong or whether there is a bug. All images have a width and height of 256, as in the defines; I checked a few times whether list.txt has anything weird in it and verified with a script whether any file or list item is missing, but found nothing. I also tried a lot of different batch sizes, but nothing works.

The output is "Could not find 0000005.ppm in data directories: ppmSamples", and the program is right: there is no 0000005.ppm file, because I named them from 0 to 2430.ppm, and inside list.txt there is also no 0000005, which gives me the idea that my naming approach is wrong in general. I thought your images were just randomly named inside the samples folder. Is there any naming routine I can follow in order to get it to work? My file naming approach is already snake case; there are no capital letters or spaces, just numbers.

Another update: I just tried naming my files exactly like you do by adding the 0's, so I made a script to pad every filename to 12 characters. I still count to 2430 (my total image count), but prepend 0's to always reach a filename length of 12 characters, so my first image name is 000000000000.ppm and my last one is 000000002430.ppm.
The program still tries to find files that aren't even specified inside the list.txt or in the ppmSamples folder I defined.

Error Log

[InferenceHelper][119] Use TensorRT
[08/27/2022-20:56:36] [I] [TRT] [MemUsageChange] Init CUDA: CPU +188, GPU +0, now: CPU 12047, GPU 1226 (MiB)
[08/27/2022-20:56:36] [I] [TRT] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 12091, GPU 1226 (MiB)
[08/27/2022-20:56:37] [I] [TRT] [MemUsageChange] Init builder kernel library: CPU +317, GPU +104, now: CPU 12557, GPU 1330 (MiB)
[08/27/2022-20:56:37] [I] [TRT] ----------------------------------------------------------------
[08/27/2022-20:56:37] [I] [TRT] Input filename:   resource/model/yolov7_256x256.onnx
[08/27/2022-20:56:37] [I] [TRT] ONNX IR version:  0.0.6
[08/27/2022-20:56:37] [I] [TRT] Opset version:    11
[08/27/2022-20:56:37] [I] [TRT] Producer name:    pytorch
[08/27/2022-20:56:37] [I] [TRT] Producer version: 1.11.0
[08/27/2022-20:56:37] [I] [TRT] Domain:
[08/27/2022-20:56:37] [I] [TRT] Model version:    0
[08/27/2022-20:56:37] [I] [TRT] Doc string:
[08/27/2022-20:56:37] [I] [TRT] ----------------------------------------------------------------
[08/27/2022-20:56:37] [W] [TRT] onnx2trt_utils.cpp:369: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/27/2022-20:56:38] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +808, GPU +314, now: CPU 13421, GPU 1652 (MiB)
[08/27/2022-20:56:38] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +143, GPU +58, now: CPU 13564, GPU 1710 (MiB)
[08/27/2022-20:56:38] [I] [TRT] Timing cache disabled. Turning it on will improve builder speed.
[08/27/2022-20:56:40] [I] [TRT] Detected 1 inputs and 4 output network tensors.
[08/27/2022-20:56:40] [I] [TRT] Total Host Persistent Memory: 26592
[08/27/2022-20:56:40] [I] [TRT] Total Device Persistent Memory: 0
[08/27/2022-20:56:40] [I] [TRT] Total Scratch Memory: 0
[08/27/2022-20:56:40] [I] [TRT] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 0 MiB, GPU 140 MiB
[08/27/2022-20:56:40] [I] [TRT] [BlockAssignment] Algorithm ShiftNTopDown took 132.835ms to assign 10 blocks to 336 nodes requiring 29229056 bytes.
[08/27/2022-20:56:40] [I] [TRT] Total Activation Memory: 29229056
[08/27/2022-20:56:40] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 14851, GPU 2392 (MiB)
[08/27/2022-20:56:40] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +10, now: CPU 14851, GPU 2376 (MiB)
[08/27/2022-20:56:40] [I] [TRT] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +27, now: CPU 0, GPU 167 (MiB)
[08/27/2022-20:56:40] [I] [TRT] Starting Calibration.
[08/27/2022-20:56:40] [I] Batch #0
[08/27/2022-20:56:40] [I] Calibrating with file 000000000000.ppm
[08/27/2022-20:56:40] [I] Calibrating with file 000000000001.ppm
[08/27/2022-20:56:40] [I] Calibrating with file 000000000002.ppm
[08/27/2022-20:56:40] [I] Calibrating with file 000000000003.ppm
[08/27/2022-20:56:40] [I] Calibrating with file 000000000004.ppm
[08/27/2022-20:56:40] [I] Calibrating with file 000000000005.ppm
[08/27/2022-20:56:40] [I] Calibrating with file 000000000006.ppm
[08/27/2022-20:56:40] [I] Calibrating with file 000000000007.ppm
[08/27/2022-20:56:40] [I] Calibrating with file 000000000008.ppm
[08/27/2022-20:56:40] [I] Calibrating with file 000000000009.ppm
[08/27/2022-20:56:40] [I] [TRT]   Calibrated batch 0 in 0.115671 seconds.
[08/27/2022-20:56:40] [I] Batch #1
[08/27/2022-20:56:40] [I] Calibrating with file 0000005.ppm
[08/27/2022-20:56:40] [I] Calibrating with file 000000000006.ppm
[08/27/2022-20:56:40] [I] Calibrating with file 000000000007.ppm
[08/27/2022-20:56:40] [I] Calibrating with file 000000000008.ppm
[08/27/2022-20:56:40] [I] Calibrating with file 000000000009.ppm
[08/27/2022-20:56:40] [I] Calibrating with file 000000000010.ppm
[08/27/2022-20:56:40] [I] Calibrating with file 000000000011.ppm
[08/27/2022-20:56:40] [I] Calibrating with file 000000000012.ppm
[08/27/2022-20:56:40] [I] Calibrating with file 000000000013.ppm
[08/27/2022-20:56:40] [I] Calibrating with file 000000000014.ppm
Could not find 0000005.ppm in data directories:
        ppmSamples
&&&& FAILED
[08/27/2022-20:56:40] [E] [TRT] 1: [virtualMemoryBuffer.cpp::nvinfer1::StdVirtualMemoryBufferImpl::~StdVirtualMemoryBufferImpl::104] Error Code 1: Cuda Runtime (driver shutting down)

What GCC and G++ versions are needed to compile?

Environment (Hardware)

  • Hardware:
    • i5 10th gen
    • RTX 3070
    • CUDA 11.1
  • Software:
    • Ubuntu 20.04
    • gcc 9.4.0
    • g++ 9.4.0

Project Name

pj_tensorrt_det_yolov7

Issue Details

Building pj_tensorrt_det_yolov7 fails with an error; I just did what the instructions say.

cmake ..

-- The C compiler identification is GNU 9.4.0
-- The CXX compiler identification is GNU 9.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
[main] CMAKE_SYSTEM_PROCESSOR = x86_64, BUILD_SYSTEM = x64_linux
-- No build type selected, default to Release
-- Found OpenMP_C: -fopenmp (found version "4.5")
-- Found OpenMP_CXX: -fopenmp (found version "4.5")
-- Found OpenMP: TRUE (found version "4.5")
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Found CUDA: /usr/local/cuda-11.1 (found suitable exact version "11.1")
-- Found OpenCV: /usr/local (found version "4.5.2")
-- Found CUDA: /usr/local/cuda-11.1 (found version "11.1")
CUDA_INCLUDE_DIRS: /usr/local/cuda-11.1/include
-- Found CUDA: /usr/local/cuda-11.1 (found suitable exact version "11.1")
-- Configuring done
-- Generating done

make

Scanning dependencies of target InferenceHelper
Scanning dependencies of target CommonHelper
[ 14%] Building CXX object image_processor/inference_helper/CMakeFiles/InferenceHelper.dir/inference_helper_tensorrt.cpp.o
[ 14%] Building CXX object image_processor/inference_helper/CMakeFiles/InferenceHelper.dir/inference_helper.cpp.o
[ 21%] Building CXX object image_processor/inference_helper/CMakeFiles/InferenceHelper.dir/tensorrt/logger.cpp.o
[ 28%] Building CXX object image_processor/common_helper/CMakeFiles/CommonHelper.dir/common_helper.cpp.o
[ 35%] Building CXX object image_processor/common_helper/CMakeFiles/CommonHelper.dir/bounding_box.cpp.o
[ 42%] Building CXX object image_processor/common_helper/CMakeFiles/CommonHelper.dir/tracker.cpp.o
/SSD2/Python/play_with_tensorrt/InferenceHelper/inference_helper/inference_helper_tensorrt.cpp: In member function ‘virtual int32_t InferenceHelperTensorRt::Initialize(const string&, std::vector&, std::vector&)’:
/SSD2/Python/play_with_tensorrt/InferenceHelper/inference_helper/inference_helper_tensorrt.cpp:178:69: error: ‘class nvinfer1::IBuilder’ has no member named ‘buildSerializedNetwork’
178 | auto plan = std::unique_ptr<nvinfer1::IHostMemory>(builder->buildSerializedNetwork(network, config));
| ^~~~~~~~~~~~~~~~~~~~~~
In file included from /SSD2/Python/play_with_tensorrt/InferenceHelper/inference_helper/inference_helper_tensorrt.cpp:32:
/SSD2/Python/play_with_tensorrt/InferenceHelper/inference_helper/tensorrt/common.h: In instantiation of ‘void samplesCommon::InferDeleter::operator()(T*) const [with T = nvinfer1::IBuilder]’:
/usr/include/c++/9/bits/unique_ptr.h:292:17: required from ‘std::unique_ptr<_Tp, _Dp>::~unique_ptr() [with _Tp = nvinfer1::IBuilder; _Dp = samplesCommon::InferDeleter]’
/SSD2/Python/play_with_tensorrt/InferenceHelper/inference_helper/tensorrt/common.h:962:116: required from here
/SSD2/Python/play_with_tensorrt/InferenceHelper/inference_helper/tensorrt/common.h:387:9: error: ‘virtual nvinfer1::IBuilder::~IBuilder()’ is protected within this context
387 | delete obj;
| ^~~~~~
In file included from /SSD2/Python/play_with_tensorrt/InferenceHelper/inference_helper/inference_helper_tensorrt.cpp:30:
/usr/include/x86_64-linux-gnu/NvInfer.h:7261:13: note: declared protected here
7261 | virtual ~IBuilder()
| ^
In file included from /usr/include/c++/9/memory:80,
from /SSD2/Python/play_with_tensorrt/InferenceHelper/inference_helper/inference_helper_tensorrt.cpp:26:
/usr/include/c++/9/bits/unique_ptr.h: In instantiation of ‘void std::default_delete<_Tp>::operator()(_Tp*) const [with _Tp = nvinfer1::IRuntime]’:
/usr/include/c++/9/bits/unique_ptr.h:292:17: required from ‘std::unique_ptr<_Tp, _Dp>::~unique_ptr() [with _Tp = nvinfer1::IRuntime; _Dp = std::default_delete<nvinfer1::IRuntime>]’
/SSD2/Python/play_with_tensorrt/InferenceHelper/inference_helper/inference_helper_tensorrt.h:35:7: required from here
/usr/include/c++/9/bits/unique_ptr.h:81:2: error: ‘virtual nvinfer1::IRuntime::~IRuntime()’ is protected within this context
81 | delete __ptr;
| ^~~~~~
In file included from /usr/include/x86_64-linux-gnu/NvInfer.h:53,
from /SSD2/Python/play_with_tensorrt/InferenceHelper/inference_helper/inference_helper_tensorrt.cpp:30:
/usr/include/x86_64-linux-gnu/NvInferRuntime.h:801:13: note: declared protected here
801 | virtual ~IRuntime() {}
| ^
In file included from /usr/include/c++/9/memory:80,
from /SSD2/Python/play_with_tensorrt/InferenceHelper/inference_helper/inference_helper_tensorrt.cpp:26:
/usr/include/c++/9/bits/unique_ptr.h: In instantiation of ‘void std::default_delete<_Tp>::operator()(_Tp*) const [with _Tp = nvinfer1::ICudaEngine]’:
/usr/include/c++/9/bits/unique_ptr.h:292:17: required from ‘std::unique_ptr<_Tp, _Dp>::~unique_ptr() [with _Tp = nvinfer1::ICudaEngine; _Dp = std::default_delete<nvinfer1::ICudaEngine>]’
/SSD2/Python/play_with_tensorrt/InferenceHelper/inference_helper/inference_helper_tensorrt.h:35:7: required from here
/usr/include/c++/9/bits/unique_ptr.h:81:2: error: ‘virtual nvinfer1::ICudaEngine::~ICudaEngine()’ is protected within this context
81 | delete __ptr;
| ^~~~~~
In file included from /usr/include/x86_64-linux-gnu/NvInfer.h:53,
from /SSD2/Python/play_with_tensorrt/InferenceHelper/inference_helper/inference_helper_tensorrt.cpp:30:
/usr/include/x86_64-linux-gnu/NvInferRuntime.h:1358:13: note: declared protected here
1358 | virtual ~ICudaEngine() {}
| ^
In file included from /usr/include/c++/9/memory:80,
from /SSD2/Python/play_with_tensorrt/InferenceHelper/inference_helper/inference_helper_tensorrt.cpp:26:
/usr/include/c++/9/bits/unique_ptr.h: In instantiation of ‘void std::default_delete<_Tp>::operator()(_Tp*) const [with _Tp = nvinfer1::IExecutionContext]’:
/usr/include/c++/9/bits/unique_ptr.h:292:17: required from ‘std::unique_ptr<_Tp, _Dp>::~unique_ptr() [with _Tp = nvinfer1::IExecutionContext; _Dp = std::default_delete<nvinfer1::IExecutionContext>]’
/SSD2/Python/play_with_tensorrt/InferenceHelper/inference_helper/inference_helper_tensorrt.h:35:7: required from here
/usr/include/c++/9/bits/unique_ptr.h:81:2: error: ‘virtual nvinfer1::IExecutionContext::~IExecutionContext()’ is protected within this context
81 | delete __ptr;
| ^~~~~~
In file included from /usr/include/x86_64-linux-gnu/NvInfer.h:53,
from /SSD2/Python/play_with_tensorrt/InferenceHelper/inference_helper/inference_helper_tensorrt.cpp:30:
/usr/include/x86_64-linux-gnu/NvInferRuntime.h:1693:13: note: declared protected here
1693 | virtual ~IExecutionContext() noexcept {}
| ^
[ 50%] Building CXX object image_processor/common_helper/CMakeFiles/CommonHelper.dir/common_helper_cv.cpp.o
In file included from /usr/include/c++/9/memory:80,
from /SSD2/Python/play_with_tensorrt/InferenceHelper/inference_helper/inference_helper_tensorrt.cpp:26:
/usr/include/c++/9/bits/unique_ptr.h: In instantiation of ‘void std::default_delete<_Tp>::operator()(_Tp*) const [with _Tp = nvinfer1::IBuilder]’:
/usr/include/c++/9/bits/unique_ptr.h:292:17: required from ‘std::unique_ptr<_Tp, _Dp>::~unique_ptr() [with _Tp = nvinfer1::IBuilder; _Dp = std::default_delete<nvinfer1::IBuilder>]’
/SSD2/Python/play_with_tensorrt/InferenceHelper/inference_helper/inference_helper_tensorrt.cpp:142:120: required from here
/usr/include/c++/9/bits/unique_ptr.h:81:2: error: ‘virtual nvinfer1::IBuilder::~IBuilder()’ is protected within this context
81 | delete __ptr;
| ^~~~~~
In file included from /SSD2/Python/play_with_tensorrt/InferenceHelper/inference_helper/inference_helper_tensorrt.cpp:30:
/usr/include/x86_64-linux-gnu/NvInfer.h:7261:13: note: declared protected here
7261 | virtual ~IBuilder()
| ^
In file included from /usr/include/c++/9/memory:80,
from /SSD2/Python/play_with_tensorrt/InferenceHelper/inference_helper/inference_helper_tensorrt.cpp:26:
/usr/include/c++/9/bits/unique_ptr.h: In instantiation of ‘void std::default_delete<_Tp>::operator()(_Tp*) const [with _Tp = nvinfer1::INetworkDefinition]’:
/usr/include/c++/9/bits/unique_ptr.h:292:17: required from ‘std::unique_ptr<_Tp, _Dp>::~unique_ptr() [with _Tp = nvinfer1::INetworkDefinition; _Dp = std::default_delete<nvinfer1::INetworkDefinition>]’
/SSD2/Python/play_with_tensorrt/InferenceHelper/inference_helper/inference_helper_tensorrt.cpp:144:109: required from here
/usr/include/c++/9/bits/unique_ptr.h:81:2: error: ‘virtual nvinfer1::INetworkDefinition::~INetworkDefinition()’ is protected within this context
81 | delete __ptr;
| ^~~~~~
In file included from /SSD2/Python/play_with_tensorrt/InferenceHelper/inference_helper/inference_helper_tensorrt.cpp:30:
/usr/include/x86_64-linux-gnu/NvInfer.h:5434:13: note: declared protected here
5434 | virtual ~INetworkDefinition() {}
| ^
In file included from /usr/include/c++/9/memory:80,
from /SSD2/Python/play_with_tensorrt/InferenceHelper/inference_helper/inference_helper_tensorrt.cpp:26:
/usr/include/c++/9/bits/unique_ptr.h: In instantiation of ‘void std::default_delete<_Tp>::operator()(_Tp*) const [with _Tp = nvinfer1::IBuilderConfig]’:
/usr/include/c++/9/bits/unique_ptr.h:292:17: required from ‘std::unique_ptr<_Tp, _Dp>::~unique_ptr() [with _Tp = nvinfer1::IBuilderConfig; _Dp = std::default_delete<nvinfer1::IBuilderConfig>]’
/SSD2/Python/play_with_tensorrt/InferenceHelper/inference_helper/inference_helper_tensorrt.cpp:145:95: required from here
/usr/include/c++/9/bits/unique_ptr.h:81:2: error: ‘virtual nvinfer1::IBuilderConfig::~IBuilderConfig()’ is protected within this context
81 | delete __ptr;
| ^~~~~~
In file included from /SSD2/Python/play_with_tensorrt/InferenceHelper/inference_helper/inference_helper_tensorrt.cpp:30:
/usr/include/x86_64-linux-gnu/NvInfer.h:6756:13: note: declared protected here
6756 | virtual ~IBuilderConfig() {}
| ^
In file included from /usr/include/c++/9/memory:80,
from /SSD2/Python/play_with_tensorrt/InferenceHelper/inference_helper/inference_helper_tensorrt.cpp:26:
/usr/include/c++/9/bits/unique_ptr.h: In instantiation of ‘void std::default_delete<_Tp>::operator()(_Tp*) const [with _Tp = nvonnxparser::IParser]’:
/usr/include/c++/9/bits/unique_ptr.h:292:17: required from ‘std::unique_ptr<_Tp, _Dp>::~unique_ptr() [with _Tp = nvonnxparser::IParser; _Dp = std::default_delete<nvonnxparser::IParser>]’
/SSD2/Python/play_with_tensorrt/InferenceHelper/inference_helper/inference_helper_tensorrt.cpp:147:130: required from here
/usr/include/c++/9/bits/unique_ptr.h:81:2: error: ‘virtual nvonnxparser::IParser::~IParser()’ is protected within this context
81 | delete __ptr;
| ^~~~~~
In file included from /SSD2/Python/play_with_tensorrt/InferenceHelper/inference_helper/inference_helper_tensorrt.cpp:31:
/usr/include/x86_64-linux-gnu/NvOnnxParser.h:232:13: note: declared protected here
232 | virtual ~IParser() {}
| ^
make[2]: *** [image_processor/inference_helper/CMakeFiles/InferenceHelper.dir/build.make:76: image_processor/inference_helper/CMakeFiles/InferenceHelper.dir/inference_helper_tensorrt.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:221: image_processor/inference_helper/CMakeFiles/InferenceHelper.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 57%] Linking CXX static library libCommonHelper.a
[ 57%] Built target CommonHelper
make: *** [Makefile:84: all] Error 2

Cannot find binding of given name: tf.identity (pj_tensorrt_seg_paddleseg_cityscapessota)

Environment (Hardware)

  • Hardware: Jetson Xavier NX
  • Software: Jetpack 5.0.2 GA [L4T 35.1.0]
    • NV Power Mode: MODE_20W_6CORE - Type: 8
  • jtop:
    • Version: 4.1.5
    • Service: Active
  • Libraries:
    • CUDA: 11.4.239
    • cuDNN: 8.4.1.50
    • TensorRT: 5.0.2
    • VPI: 2.1.6
    • Vulkan: 1.3.203
    • OpenCV: 4.5.4 - with CUDA: NO

Project Name

pj_tensorrt_seg_paddleseg_cityscapessota

Issue Details

I'm trying to run the inference of pj_tensorrt_seg_paddleseg_cityscapessota after a successful build; unfortunately, I encountered some problems after running ./main. Plus, I have built and successfully run pj_tensorrt_cls_mobilenet_v2 and pj_tensorrt_det_yolov7 on the same device and environment.

How to Reproduce

In this step, I did exactly what the instructions say.

cd pj_tensorrt_seg_paddleseg_cityscapessota/
mkdir -p build && cd build
cmake ..
make
./main

Error Log

[InferenceHelper][117] Use TensorRT 
[03/03/2023-10:31:58] [I] [TRT] [MemUsageChange] Init CUDA: CPU +186, GPU +0, now: CPU 209, GPU 7830 (MiB)
[03/03/2023-10:32:03] [I] [TRT] Loaded engine size: 142 MiB
[03/03/2023-10:32:05] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +261, GPU +230, now: CPU 742, GPU 8232 (MiB)
[03/03/2023-10:32:05] [I] [TRT] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +2, GPU +141, now: CPU 2, GPU 141 (MiB)
[03/03/2023-10:32:05] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 506, GPU 8108 (MiB)
[03/03/2023-10:32:05] [I] [TRT] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +2, GPU +113, now: CPU 4, GPU 254 (MiB)
[InferenceHelperTensorRt][345] num_of_in_out = 2
[InferenceHelperTensorRt][348] tensor[0]->name: x
[InferenceHelperTensorRt][349]   is input = 1
[InferenceHelperTensorRt][353]   dims.d[0] = 1
[InferenceHelperTensorRt][353]   dims.d[1] = 3
[InferenceHelperTensorRt][353]   dims.d[2] = 180
[InferenceHelperTensorRt][353]   dims.d[3] = 320
[InferenceHelperTensorRt][357]   data_type = 0
[InferenceHelperTensorRt][348] tensor[1]->name: tmp_520
[InferenceHelperTensorRt][349]   is input = 0
[InferenceHelperTensorRt][353]   dims.d[0] = 1
[InferenceHelperTensorRt][353]   dims.d[1] = 19
[InferenceHelperTensorRt][353]   dims.d[2] = 180
[InferenceHelperTensorRt][353]   dims.d[3] = 320
[InferenceHelperTensorRt][357]   data_type = 0
[03/03/2023-10:32:05] [E] [TRT] 3: Cannot find binding of given name: tf.identity
[ERR: InferenceHelperTensorRt][222] Output tensor doesn't exist in the model (tf.identity)
[ERR: SegmentationEngine][99] Inference helper is not created
Initialization Error


Using different YOLOX models

Project Name

pj_tensorrt_det_yolox

Issue Details

I wanted to use larger YOLOX models (such as YOLOX-m and YOLOX-l) in the project, but as a result they were barely able to detect anything. I used the export_onnx.py script included in the YOLOX repo (changed the input size to match the image size) and had no issues with conversion to the .trt format, yet the most I achieved was one detection every few frames.

Are there any parameters in the project I should consider when using different models?

For comparison, here is YOLOX-nano: (screenshot)

And here is YOLOX-m: (screenshot)

Yolov5-Seg

Hi, is it possible to support yolov5-seg?

Thanks in advance

/usr/bin/ld: cannot find -lnvinfer: No such file or directory

Environment (Hardware)

Linux Mint 21 with CUDA 11.8, CuDNN 8.8 and TensorRT 8.6

Project Name

pj_tensorrt_cls_mobilenet_v2

Issue Details

Maybe a stupid question to ask. I am starting to learn about TensorRT and came across this repo. I followed the installation procedure as described. I have TensorRT installed and exported the PATH. After cmake, when running make I get the following error.

Error Log

[100%] Linking CXX executable main
/usr/bin/ld: cannot find -lnvinfer: No such file or directory
/usr/bin/ld: cannot find -lnvonnxparser: No such file or directory
/usr/bin/ld: cannot find -lnvinfer_plugin: No such file or directory
collect2: error: ld returned 1 exit status
make[2]: *** [CMakeFiles/main.dir/build.make:210: main] Error 1
make[1]: *** [CMakeFiles/Makefile2:137: CMakeFiles/main.dir/all] Error 2
make: *** [Makefile:91: all] Error 2

Question about int8 Calibration

Environment (Hardware)

  • Hardware: AMD Ryzen 5800X, RTX 3070ti
  • Software: Windows 10, Visual Studio 22

(Nvidia stuff)
Cuda 11.3.1
Cudnn 8.4.1
TensorRT 8.4.2.4

Question

So I followed the steps for setting up custom calibration, but I wonder whether I have to input all my training images, because I have over 10k images and maybe it's not necessary. If it's better for later accuracy I'll do it; I just didn't find good documentation about optimal calibration settings. I also wondered whether CAL_BATCH_SIZE is, as in training, a value for making calibration faster.
I don't really know what CAL_NB_BATCHES is; I just got confused by the value 2 inside inference_helper_tensorrt.cpp.
From what I understood from this article https://on-demand.gputechconf.com/gtc/2017/presentation/s7310-8-bit-inference-with-tensorrt.pdf
I thought CAL_SCALE would be calculated while calibrating?
CAL_SCALE and CAL_BIAS should be the same values as in training, but I'm not sure what exactly is meant by that.
I trained my model on PyTorch YOLOv7 and converted it to ONNX, but didn't see or modify either of those two parameters; that's why I'm asking here. Since it's a newbie question thread, I apologize in advance.

Question about the Kalman filter

Project Name

pj_tensorrt_det_yolox

Hi, I was wondering if your Kalman filter implementation could be used to predict the further trajectory of detected objects somewhat accurately (say 20-30 frames ahead). Is it worth experimenting with, or should I search for something else?

Measurements of objects

Hi @iwatake2222,

I appreciate your work. I am using the HITNET depth method to extract the depth from the image. Is it possible to estimate the size of an object in meters?

(screenshot)

Thanks in advance

YOLOv7 error while converting yolov7_736x1280.onnx to a TensorRT engine in ./main

Environment (Hardware)

----------Docker tensorrt:21.10-py3-----------

--------Docker Ubuntu OS-----------------------
Distributor ID: Ubuntu
Description: Ubuntu 20.04.3 LTS
Release: 20.04
Codename: focal

--------Docker CUDA----------------
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Aug_15_21:14:11_PDT_2021
Cuda compilation tools, release 11.4, V11.4.120
Build cuda_11.4.r11.4/compiler.30300941_0

-------Local Running PC to run docker------
CUDA 11.4
Ubuntu 18.04
GPU RTX 3070Ti

Project Name

pj_tensorrt_det_yolov7

Issue Details

I'm trying to run ./main from pj_tensorrt_det_yolov7 after a successful build.
But while converting the ONNX model to a TensorRT engine, the following error occurs.

How to Reproduce

(detector) root@001ff98c0213:/workspace/play_with_tensorrt/pj_tensorrt_det_yolov7/build# cmake ..
-- The C compiler identification is GNU 9.4.0
-- The CXX compiler identification is GNU 9.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
[main] CMAKE_SYSTEM_PROCESSOR = x86_64, BUILD_SYSTEM = x64_linux
-- No build type selected, default to Release
-- Found OpenMP_C: -fopenmp (found version "4.5")
-- Found OpenMP_CXX: -fopenmp (found version "4.5")
-- Found OpenMP: TRUE (found version "4.5")
-- Found OpenCV: /usr (found version "4.2.0")
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Found CUDA: /usr/local/cuda (found version "11.4")
CUDA_INCLUDE_DIRS: /usr/local/cuda/include
-- Configuring done
-- Generating done
-- Build files have been written to: /workspace/play_with_tensorrt/pj_tensorrt_det_yolov7/build

|||| Make ||||

(detector) root@001ff98c0213:/workspace/play_with_tensorrt/pj_tensorrt_det_yolov7/build# make
Scanning dependencies of target InferenceHelper
[ 7%] Building CXX object image_processor/inference_helper/CMakeFiles/InferenceHelper.dir/inference_helper.cpp.o
[ 14%] Building CXX object image_processor/inference_helper/CMakeFiles/InferenceHelper.dir/inference_helper_tensorrt.cpp.o
[ 21%] Building CXX object image_processor/inference_helper/CMakeFiles/InferenceHelper.dir/tensorrt/logger.cpp.o
[ 28%] Linking CXX static library libInferenceHelper.a
[ 28%] Built target InferenceHelper
Scanning dependencies of target CommonHelper
[ 35%] Building CXX object image_processor/common_helper/CMakeFiles/CommonHelper.dir/common_helper.cpp.o
[ 42%] Building CXX object image_processor/common_helper/CMakeFiles/CommonHelper.dir/bounding_box.cpp.o
[ 50%] Building CXX object image_processor/common_helper/CMakeFiles/CommonHelper.dir/tracker.cpp.o
[ 57%] Building CXX object image_processor/common_helper/CMakeFiles/CommonHelper.dir/common_helper_cv.cpp.o
[ 64%] Linking CXX static library libCommonHelper.a
[ 64%] Built target CommonHelper
Scanning dependencies of target ImageProcessor
[ 71%] Building CXX object image_processor/CMakeFiles/ImageProcessor.dir/image_processor.cpp.o
[ 78%] Building CXX object image_processor/CMakeFiles/ImageProcessor.dir/detection_engine.cpp.o
[ 85%] Linking CXX static library libImageProcessor.a
[ 85%] Built target ImageProcessor
Scanning dependencies of target main
[ 92%] Building CXX object CMakeFiles/main.dir/main.cpp.o
[100%] Linking CXX executable main
[100%] Built target main

Error Log

The error log while trying to run ./main:
(detector) root@001ff98c0213:/workspace/play_with_tensorrt/pj_tensorrt_det_yolov7/build# ./main

[InferenceHelper][117] Use TensorRT 
[02/08/2023-09:48:03] [I] [TRT] [MemUsageChange] Init CUDA: CPU +537, GPU +0, now: CPU 548, GPU 1145 (MiB)
[02/08/2023-09:48:04] [I] [TRT] Loaded engine size: 95 MB
[02/08/2023-09:48:04] [I] [TRT] [MemUsageSnapshot] deserializeCudaEngine begin: CPU 668 MiB, GPU 1145 MiB
[02/08/2023-09:48:05] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +793, GPU +340, now: CPU 1478, GPU 1567 (MiB)
[02/08/2023-09:48:06] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +198, GPU +342, now: CPU 1676, GPU 1909 (MiB)
[02/08/2023-09:48:06] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 1676, GPU 1891 (MiB)
[02/08/2023-09:48:06] [I] [TRT] [MemUsageSnapshot] deserializeCudaEngine end: CPU 1676 MiB, GPU 1891 MiB
[02/08/2023-09:48:06] [I] [TRT] [MemUsageSnapshot] ExecutionContext creation begin: CPU 1556 MiB, GPU 1891 MiB
[02/08/2023-09:48:06] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +10, now: CPU 1556, GPU 1901 (MiB)
[02/08/2023-09:48:06] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 1556, GPU 1909 (MiB)
[02/08/2023-09:48:06] [I] [TRT] [MemUsageSnapshot] ExecutionContext creation end: CPU 1556 MiB, GPU 2119 MiB
[InferenceHelperTensorRt][345] num_of_in_out = 2
[InferenceHelperTensorRt][348] tensor[0]->name: images
[InferenceHelperTensorRt][349]   is input = 1
[InferenceHelperTensorRt][353]   dims.d[0] = 1
[InferenceHelperTensorRt][353]   dims.d[1] = 3
[InferenceHelperTensorRt][353]   dims.d[2] = 736
[InferenceHelperTensorRt][353]   dims.d[3] = 1280
[InferenceHelperTensorRt][357]   data_type = 0
[InferenceHelperTensorRt][348] tensor[1]->name: output
[InferenceHelperTensorRt][349]   is input = 0
[InferenceHelperTensorRt][353]   dims.d[0] = 1
[InferenceHelperTensorRt][353]   dims.d[1] = 57960
[InferenceHelperTensorRt][353]   dims.d[2] = 85
[InferenceHelperTensorRt][357]   data_type = 0
Unable to init server: Could not connect: Connection refused

(test:7748): Gtk-WARNING **: 09:48:06.055: cannot open display: 
[02/08/2023-09:48:06] [E] [TRT] 1: [hardwareContext.cpp::terminateCommonContext::141] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [defaultAllocator.cpp::free::85] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [W] [TRT] Unable to determine GPU memory usage
[02/08/2023-09:48:06] [W] [TRT] Unable to determine GPU memory usage
[02/08/2023-09:48:06] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 1392, GPU 0 (MiB)
[02/08/2023-09:48:06] [F] [TRT] [defaultAllocator.cpp::free::85] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaStream::455] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
(the line above repeats about 100 more times)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [defaultAllocator.cpp::free::85] Error Code 1: Cuda Runtime (driver shutting down)
[02/08/2023-09:48:06] [F] [TRT] [resources.h::operator()::445] Error Code 1: Cuda Driver (TensorRT internal error)
Segmentation fault (core dumped)
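These shutdown-time fatals usually mean that TensorRT/CUDA objects were destroyed after the CUDA driver had already been torn down, for example because an engine, execution context, or wrapper object was held in a global or static variable that outlives main(). A minimal sketch of the safe ordering (the Logger class and the elided steps are assumptions, not this repository's code):

    #include <cstdio>
    #include <memory>
    #include <NvInfer.h>

    // Assumed minimal logger; TensorRT requires one to create a runtime.
    class Logger : public nvinfer1::ILogger {
        void log(Severity severity, const char* msg) noexcept override {
            if (severity <= Severity::kWARNING) std::printf("%s\n", msg);
        }
    };

    int main() {
        Logger logger;
        // Keep all TensorRT objects scoped inside main() so their destructors
        // (and the CUDA events they own) run while the driver is still alive.
        std::unique_ptr<nvinfer1::IRuntime> runtime{nvinfer1::createInferRuntime(logger)};
        // ... deserialize the engine, create the execution context, infer ...
        return 0;  // everything is released here, before CUDA teardown
    }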


Inference helper is not created

Environment (Hardware)

Hardware: Jetson Orin
CUDA: 11.4
TensorRT: 8.2

Project Name

pj_tensorrt_depth_stereo_hitnet

Issue Details

When I run ./main, I get the following error. What should I do?

[InferenceHelper][117] Use TensorRT
[08/02/2022-17:25:46] [I] [TRT] [MemUsageChange] Init CUDA: CPU +303, GPU +0, now: CPU 327, GPU 12344 (MiB)
[08/02/2022-17:25:46] [I] [TRT] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 346, GPU 12362 (MiB)
[08/02/2022-17:25:48] [I] [TRT] [MemUsageChange] Init builder kernel library: CPU +403, GPU +380, now: CPU 749, GPU 12741 (MiB)
Could not open file /home/nvidia/Downloads/play_with_tensorrt-master/pj_tensorrt_depth_stereo_hitnet/build/resource//model/hitnet_eth3d_480x640.onnx
Could not open file /home/nvidia/Downloads/play_with_tensorrt-master/pj_tensorrt_depth_stereo_hitnet/build/resource//model/hitnet_eth3d_480x640.onnx
[08/02/2022-17:25:48] [E] [TRT] ModelImporter.cpp:705: Failed to parse ONNX model from file: /home/nvidia/Downloads/play_with_tensorrt-master/pj_tensorrt_depth_stereo_hitnet/build/resource//model/hitnet_eth3d_480x640.onnx
[ERR: InferenceHelperTensorRt][149] Failed to parse onnx file (/home/nvidia/Downloads/play_with_tensorrt-master/pj_tensorrt_depth_stereo_hitnet/build/resource//model/hitnet_eth3d_480x640.onnx)
[ERR: DepthStereoEngine][89] Inference helper is not created
Initialization Error
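The root cause in this log is that the ONNX file does not exist at the path the program resolved (resource/model/hitnet_eth3d_480x640.onnx under the build directory); the parser error is only a downstream symptom. A hypothetical pre-flight check (not the repository's actual code) would surface this directly:

    #include <cstdio>
    #include <filesystem>
    #include <string>

    // Hypothetical helper: confirm the model file is present before handing
    // the path to the TensorRT ONNX parser, so a missing download is
    // reported as such rather than as a parse failure.
    bool ModelFileExists(const std::string& model_path) {
        if (!std::filesystem::exists(model_path)) {
            std::fprintf(stderr, "Model file not found: %s\n", model_path.c_str());
            return false;
        }
        return true;
    }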

Float32 input and Int32 output - segmentation fault

Environment (Hardware)

  • Hardware: GPU.
  • Software: Ubuntu 20.04.

Project Name

pj_tensorrt_seg_paddleseg_cityscapessota

Issue Details

I have a model with int32 output, so I added "#define TENSORTYPE_OUT TensorInfo::kTensorTypeInt32" and used it as a parameter in Initialize: "output_tensor_info_list_.push_back(OutputTensorInfo(OUTPUT_NAME_1, TENSORTYPE_OUT, IS_NCHW));"

When I run the code, I get a segmentation fault inside "InferenceHelperTensorRt::Process".

Then I found that "AllocateBuffers" handles int32 data the same way as float and half, so I changed the code to:

    case nvinfer1::DataType::kHALF:
        buffer_cpu = new float[data_size];
        buffer_list_cpu_.push_back(std::pair<void*, int32_t>(buffer_cpu, data_size * sizeof(float)));
        cudaMalloc(&buffer_gpu, data_size * sizeof(float));
        buffer_list_gpu_.push_back(buffer_gpu);
        break;
    case nvinfer1::DataType::kINT32:
        buffer_cpu = new int32_t[data_size];
        buffer_list_cpu_.push_back(std::pair<void*, int32_t>(buffer_cpu, data_size * sizeof(int32_t)));
        cudaMalloc(&buffer_gpu, data_size * sizeof(int32_t));
        buffer_list_gpu_.push_back(buffer_gpu);
        break;
The error is still there.

Any tips on this issue?

Thank you.
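One possible reason the crash survives the allocation fix (a guess, not confirmed in this thread): every later step that touches the output buffer must also use the int32 element size. If the device-to-host copy or the post-processing still assumes sizeof(float), or casts the CPU buffer to float*, the access can run out of bounds. A minimal sketch with assumed names, reusing the (pointer, byte size) pairs from the snippet above:

    #include <cstdint>
    #include <utility>
    #include <cuda_runtime.h>

    // Sketch: copy back using the byte size recorded at allocation time,
    // then read the buffer with the matching element type.
    void ReadInt32Output(void* buffer_gpu, std::pair<void*, int32_t> buffer_cpu,
                         cudaStream_t stream) {
        cudaMemcpyAsync(buffer_cpu.first, buffer_gpu, buffer_cpu.second,
                        cudaMemcpyDeviceToHost, stream);
        cudaStreamSynchronize(stream);
        const int32_t* data = static_cast<const int32_t*>(buffer_cpu.first);
        (void)data;  // ... consume as int32_t, never as float ...
    }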


Error Log

[03/30/2023-10:58:04] [W] [TRT] The enqueue() method has been deprecated when used with engines built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. Please use enqueueV2() instead.
[03/30/2023-10:58:04] [W] [TRT] Also, the batchSize argument passed into this function has no effect on changing the input shapes. Please use setBindingDimensions() function to change input shapes instead.
Segmentation fault (core dumped)
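The two warnings are TensorRT's own hint and are worth ruling out first: an engine built with kEXPLICIT_BATCH should be launched with enqueueV2(), passing the binding pointers, rather than the deprecated batchSize-based enqueue(). A minimal sketch (names assumed):

    #include <vector>
    #include <NvInfer.h>
    #include <cuda_runtime.h>

    // Sketch: explicit-batch engines take one device pointer per binding
    // index; enqueueV2 replaces the deprecated enqueue.
    bool RunInference(nvinfer1::IExecutionContext* context,
                      std::vector<void*>& bindings, cudaStream_t stream) {
        if (!context->enqueueV2(bindings.data(), stream, nullptr)) return false;
        return cudaStreamSynchronize(stream) == cudaSuccess;
    }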

