Comments (2)
You don't have GPU support in your container, then. Check the available documentation on how to enable the NVIDIA runtime for Docker.
from yolov4-triton-tensorrt.
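A minimal sketch of how GPU access is typically granted to a Docker container, assuming the NVIDIA Container Toolkit is installed on the host (the CUDA image tag here is only illustrative):

```shell
# Modern Docker (19.03+) with the NVIDIA Container Toolkit:
# pass the GPU(s) through at run time and sanity-check with nvidia-smi.
docker run --rm --gpus all nvidia/cuda:11.7.1-base-ubuntu20.04 nvidia-smi

# Older nvidia-docker2 setups select the runtime explicitly instead:
docker run --rm --runtime=nvidia nvidia/cuda:11.7.1-base-ubuntu20.04 nvidia-smi
```

If nvidia-smi fails inside the container but works on the host, the runtime hook is usually what is missing.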
I don't think that is the case... I'm able to run other tasks with the GPU and CUDA.
Attaching the nvidia-smi output from inside the container:
root@550dd14d9c34:/yolov4-triton-tensorrt/build# nvidia-smi
Mon Jul 18 13:36:17 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.48.07    Driver Version: 515.48.07    CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA T4G          Off  | 00000000:00:1F.0 Off |                    0 |
| N/A   53C    P0    28W /  70W |      2MiB / 15360MiB |      5%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
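Note that nvidia-smi only exercises the driver; a TensorRT build can still fail if the CUDA runtime is not usable from inside the container. A hedged sketch of a runtime-level check, assuming the image ships nvcc (e.g. a -devel CUDA image); the file paths are arbitrary:

```shell
# Compile and run a tiny CUDA program that queries the runtime,
# not just the driver, for visible devices.
cat > /tmp/devcount.cu <<'EOF'
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    cudaError_t err = cudaGetDeviceCount(&n);  // CUDA Runtime API call
    if (err != cudaSuccess) {
        printf("CUDA runtime error: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("CUDA devices visible: %d\n", n);
    return 0;
}
EOF
nvcc /tmp/devcount.cu -o /tmp/devcount && /tmp/devcount
```

If this reports an error while nvidia-smi succeeds, the problem is the container's CUDA runtime/library setup rather than GPU passthrough.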
Related Issues (20)
- tritonclient.utils.InferenceServerException: [StatusCode.UNIMPLEMENTED] HOT 1
- yolov4-tiny model accuracy not right HOT 2
- Facing problem to create "engine" HOT 8
- C++ client produces no detections HOT 8
- multiple model instances issue HOT 15
- Feature: Import darknet weights instead of pytorch .wts HOT 1
- unexpected inference input 'data' HOT 2
- TensorRT 8 Support HOT 11
- Unexpected inference output 'detections' for model 'yolov4' HOT 1
- Dynamic batching inference time HOT 1
- Support for TensorRT8 HOT 1
- [QUESTIONS] .wts file and plugin file HOT 3
- where is test time? HOT 1
- Can I use this repo to use custom trained yolo-v4 with single class HOT 1
- mismatch in postprocess func HOT 9
- Triton Inference Server taking adding 3 seconds to get YOLOv4 Inference HOT 7
- How can I generate batch=5 engine? HOT 1
- error: creating server: Internal - failed to load all models - NVIDIA Triton Server for YOLOv4 HOT 1
- tritonclient.utils.InferenceServerException: [StatusCode.UNAVAILABLE] failed to connect to all addresses HOT 2