Comments (9)
After adding further debug output to the code, it seems that the following lines in `nvml.go` cause these errors:
`assert(C.nvmlDeviceGetPowerManagementLimit(dev, &power))` and
`assert(C.nvmlDeviceGetCpuAffinity(dev, C.uint(len(mask)), (*C.ulong)(&mask[0])))`.
Most likely my laptop GPU does not support these properties, and the calls are returning `NVML_ERROR_NOT_SUPPORTED` instead of `NVML_SUCCESS`. Is there some more graceful way to handle this?
The `assert(C.nvmlDeviceGetPowerUsage(d.handle, &power))` in the function `Status` also fails when running `curl http://localhost:3476/v1.0/gpu/status` (assuming the calls to the first two functions have been removed). Looking at the output of `nvidia-smi`, I would assume that any field shown as `N/A` will also fail.
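For what it's worth, one more graceful alternative to `assert` would be to inspect the NVML return code and surface "not supported" to the caller instead of crashing. A minimal pure-Go sketch of the idea (the `nvmlReturn` type and `getPowerLimit` function below are hypothetical stand-ins for the real cgo bindings, not the project's actual code):

```go
package main

import "fmt"

// nvmlReturn models NVML's status codes; only the two values
// relevant here are included (hypothetical stand-ins for the cgo enums).
type nvmlReturn int

const (
	nvmlSuccess           nvmlReturn = 0
	nvmlErrorNotSupported nvmlReturn = 3
)

// getPowerLimit stands in for C.nvmlDeviceGetPowerManagementLimit.
// On a mobility card it would return nvmlErrorNotSupported.
func getPowerLimit() (uint, nvmlReturn) {
	return 0, nvmlErrorNotSupported
}

// powerLimit wraps the call: instead of asserting on the return code,
// it reports whether the property is supported on this device.
func powerLimit() (limit uint, ok bool) {
	p, ret := getPowerLimit()
	switch ret {
	case nvmlSuccess:
		return p, true
	case nvmlErrorNotSupported:
		return 0, false // leave the field unset instead of crashing
	default:
		panic(fmt.Sprintf("nvml: unexpected return code %d", ret))
	}
}

func main() {
	if limit, ok := powerLimit(); ok {
		fmt.Printf("power limit: %d mW\n", limit)
	} else {
		fmt.Println("power limit: N/A")
	}
}
```

The REST handler could then omit (or mark) any field whose wrapper reports `ok == false`.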
from nvidia-docker.
That's a known issue. Mobility cards are poorly supported by NVML and most of the calls will fail.
The implication is that the REST API will be pretty much useless. Maybe we should add an option to disable the REST API completely.
Note that while `nvidia-docker-plugin` will fail, using `nvidia-docker` in standalone mode should work.
Yes, I understand that my card is most likely to blame.
I don't think that this means the REST API is useless, though. Would it not make sense to handle some of the properties differently, providing some indication to the client that they are not available (`nvidia-smi` does this by printing `N/A`)?
For power, simply returning zero may make sense, and although CPUs with higher core counts also show NUMA effects (hyperthreading further complicates this), one should be able to find a sensible default for this too.
If it helps, I've found that NVML can find memory info for all cards, but not necessarily utilization or temperature.
https://github.com/NVIDIA/DIGITS/blob/v3.1.0/digits/device_query.py#L215-L235
I agree we should handle this case more gracefully, that's why I labeled it as a "bug".
Handling it gracefully would mean having default values for everything (not just a few calls).
Given the current implementation, it's going to be difficult.
I contacted the NVML team about this matter a while ago; maybe things will change on their end.
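As a rough illustration of the "default values for everything" idea, a single generic helper could collapse every unsupported result to a caller-chosen default (a sketch in modern Go with generics; all names below are hypothetical and this is not the project's actual code):

```go
package main

import "fmt"

// nvmlReturn models NVML status codes (hypothetical stand-in for the cgo enums).
type nvmlReturn int

const (
	nvmlSuccess           nvmlReturn = 0
	nvmlErrorNotSupported nvmlReturn = 3
)

// orDefault collapses any non-success result to a caller-chosen default,
// so each call site needs only one extra argument instead of its own
// error handling.
func orDefault[T any](v T, ret nvmlReturn, def T) T {
	if ret == nvmlSuccess {
		return v
	}
	return def
}

func main() {
	// A supported query passes its value through...
	fmt.Println(orDefault(uint(250000), nvmlSuccess, uint(0)))
	// ...while an unsupported one falls back to the default.
	fmt.Println(orDefault(uint(0), nvmlErrorNotSupported, uint(0)))
}
```

The hard part, as noted above, is that the current implementation asserts deep inside the bindings, so every call site would need to be reworked to flow the return code up to a helper like this.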
@lukeyeager Did you check on GeForce Kepler products?
@elezar So regarding your card specifically, this seems to be an issue with NVML. Can you provide us with the output of `nvidia-smi -q`?
In any case, we should start working on handling unsupported features gracefully.
@3XX0 I have run `nvidia-smi -q` on my machine, both on the host and using `nvidia-docker` (after creating the driver volume manually). Please find the output attached in the files below.
nvidia-docker-smi-q.stdout.txt
nvidia-smi-q.stdout.txt
There seem to be many properties that `nvidia-smi` cannot determine on my card. For comparison, I have also included the output for the Titan Z and K80s that I have available.
nvidia-smi-q-k80.stdout.txt
nvidia-smi-q-titan.stdout.txt