nvml-wrapper's People

Contributors

arpankapoor, bdhu, bsteinb, cldfire, jjyyxx, kisaragieffective, nemosupremo, rdruon, theelectronwill, thejltres

nvml-wrapper's Issues

Support NVML 10

Ran tests locally and noticed this:

test device::test::supported_throttle_reasons ... thread 'device::test::supported_throttle_reasons' panicked at 'successful single test: Error(IncorrectBits(U64(63)), State { next_error: None, backtrace: None })', /checkout/src/libcore/result.rs:860:4

Sure enough, NVML version changed:

test test::sys_nvml_version ... "9.384.69" ... ok

Rename `NVML` struct to `Nvml`

According to the Rust API guidelines:

In UpperCamelCase, acronyms and contractions of compound words count as one word: use Uuid rather than UUID

The NVML struct should be renamed to Nvml in line with this guidance. Documentation should be updated as well (although this will take some care, since the NVML lib itself should still be referred to as NVML).

I'd be happy to accept a PR for this!

Device Brand function fails for new RTX cards

Calling device.brand() for RTX cards (e.g. RTX A4000) causes the function to fail with:

unexpected enum variant value: 13

The device enums in the NVML docs show that 13 corresponds to NVML_BRAND_NVIDIA_RTX = 13; however, the wrapper's Brand enum only supports values up to NVML_BRAND_TITAN (6).

These bindings are already generated in nvml-wrapper-sys, so the Brand enum should be updated to cover all current brand values.
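For illustration, here is a minimal sketch (not the crate's actual code) of how a Brand-style enum could cover the newer values while reporting unknown ones as an error instead of panicking. The numbering for values 7 through 16 follows recent NVML headers as best I can tell and should be verified against nvml.h:

```rust
use std::convert::TryFrom;

// Sketch only — not the crate's actual code. Variant numbering for
// values 7..=16 should be double-checked against nvml.h.
#[derive(Debug, PartialEq)]
enum Brand {
    Unknown, // NVML_BRAND_UNKNOWN = 0
    Quadro,
    Tesla,
    Nvs,
    Grid,
    GeForce,
    Titan, // NVML_BRAND_TITAN = 6 (the previous upper bound)
    VApps,
    VPc,
    VCs,
    VWs,
    CloudGaming,
    QuadroRtx,
    NvidiaRtx, // NVML_BRAND_NVIDIA_RTX = 13 (the failing value)
    Nvidia,
    GeForceRtx,
    TitanRtx,
}

impl TryFrom<u32> for Brand {
    // Hand back the raw value on an unknown variant rather than panicking.
    type Error = u32;

    fn try_from(v: u32) -> Result<Self, u32> {
        use Brand::*;
        Ok(match v {
            0 => Unknown, 1 => Quadro, 2 => Tesla, 3 => Nvs, 4 => Grid,
            5 => GeForce, 6 => Titan, 7 => VApps, 8 => VPc, 9 => VCs,
            10 => VWs, 11 => CloudGaming, 12 => QuadroRtx, 13 => NvidiaRtx,
            14 => Nvidia, 15 => GeForceRtx, 16 => TitanRtx,
            other => return Err(other),
        })
    }
}
```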

Explore reference counting to maintain lifetime relationships among NVML data structures

Over in #20, it was noted that the use of compile-time lifetimes to represent the lifetime relationships between the various NVML data structures greatly complicates storing those structures alongside each other within the same object.

The use of Arcs to represent these relationships would likely be an acceptable performance tradeoff to make for this library and would greatly simplify its usage in these situations.
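A minimal sketch with mock types of what an Arc-based design could look like; NvmlInner and the field names here are hypothetical, not the wrapper's real internals:

```rust
use std::sync::Arc;

// Mock types for illustration only.
struct NvmlInner; // stands in for the loaded NVML library

#[derive(Clone)]
struct Nvml {
    inner: Arc<NvmlInner>,
}

struct Device {
    _lib: Arc<NvmlInner>, // shared ownership replaces the 'a lifetime
    index: u32,
}

impl Nvml {
    fn init() -> Self {
        Nvml { inner: Arc::new(NvmlInner) }
    }

    fn device_by_index(&self, index: u32) -> Device {
        // Cloning the Arc bumps a refcount; the library stays alive as
        // long as any Nvml or Device value exists.
        Device { _lib: Arc::clone(&self.inner), index }
    }
}

// With no lifetime parameter on Device, both can live in one struct:
#[allow(dead_code)]
struct GpuInfo {
    instance: Nvml,
    devices: Vec<Device>,
}
```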

Support for querying scoped fields

The current implementation of Device::field_values_for is too limiting. The caller can only pass in a slice of field IDs to populate nvmlFieldValue_t::fieldId while all other struct members are set to zero, whereas nvmlDeviceGetFieldValues actually also reads nvmlFieldValue_t::scopeId as input when querying several fields, such as NVLink remote IDs, NVLink ECC counters, or power draw and power limits.

I would like to contribute a function that allows passing in both fieldId and scopeId and was wondering whether that should replace Device::field_values_for or be a separate function, e.g. Device::scoped_field_values_for.
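To make the proposal concrete, here is a hypothetical sketch of the preparation step such a function might perform; FieldId, ScopeId, FieldValueInput, and prepare_scoped_queries are illustrative names, not existing wrapper API:

```rust
// Hypothetical sketch of the proposed API surface.
struct FieldId(u32);
struct ScopeId(u32);

// Mirrors the two input members of nvmlFieldValue_t that matter here.
#[derive(Debug, PartialEq)]
struct FieldValueInput {
    field_id: u32,
    scope_id: u32,
}

// The wrapper-side preparation step: populate one input struct per
// (fieldId, scopeId) pair before handing the buffer to
// nvmlDeviceGetFieldValues over FFI.
fn prepare_scoped_queries(pairs: &[(FieldId, ScopeId)]) -> Vec<FieldValueInput> {
    pairs
        .iter()
        .map(|(f, s)| FieldValueInput { field_id: f.0, scope_id: s.0 })
        .collect()
}
```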

Load NVML lib at runtime

Right now this crate can only be used as an optional compile-time dependency on Linux and / or Windows. This makes it impossible both to build a binary that can be distributed cross-platform and to build a binary that can decide at runtime whether or not it is possible to get GPU info.

This crate needs to be moved to a dlopen-style approach where symbols are obtained at runtime, something that is unfortunately going to be a lot of work. It'll hopefully become somewhat tractable if and when this bindgen issue is resolved.

If I have the time and motivation, and bindgen or some other tool makes the process more streamlined, I will invest the time into reworking the crate.

(Note that I may also end up dropping a lot of the more niche API surface upon making this migration in order to focus my time on supporting the more commonly-used functionality.)

Update memory info to v2

nvmlDeviceGetMemoryInfo_v2 has been available for a few years now. I propose to update the implementation to v2, so that the results correspond with what nvidia-smi and gpustat output.

Currently, device.memory_info() returns an nvmlMemory_t struct with total, free, and used. nvmlMemory_v2_t is defined in the existing code, but never used. Calling the NVML function with a struct set up for v2 gives a slightly different result, where the used figure doesn't include cache memory and other non-allocated memory. Additionally, the v2 struct has fields for version and reserved memory.

I implemented this change and did some testing, and it works! Notably, device.memory_info()?.used reported 912MB on our A100s before the change and now reports 7.8MB with the v2 version, which matches nvidia-smi and gpustat.
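For reference, NVML's versioned structs expect the version member to be set before the FFI call; the NVML_STRUCT_VERSION macro encodes the struct size in the low bits and the version number in the top byte. A sketch of that computation, with an illustrative Rust mirror of the struct layout (check both against the actual nvml-wrapper-sys bindgen output):

```rust
// Illustrative mirror of the C nvmlMemory_v2_t layout — verify against
// the real bindgen output before relying on it.
#[allow(dead_code)]
#[repr(C)]
struct NvmlMemoryV2 {
    version: u32,
    total: u64,
    reserved: u64,
    free: u64,
    used: u64,
}

// NVML_STRUCT_VERSION(Memory, 2): struct size in the low 24 bits,
// version number in the top byte. The call is rejected if this member
// isn't populated before invoking nvmlDeviceGetMemoryInfo_v2.
fn nvml_memory_v2_version() -> u32 {
    (std::mem::size_of::<NvmlMemoryV2>() as u32) | (2 << 24)
}
```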

process_utilization_stats failed with NOT_FOUND error, Ubuntu 22.04

use nvml_wrapper::Nvml;

fn main() {
    let nvml = Nvml::init().unwrap();
    let device = nvml.device_by_index(0).unwrap();

    let st = device.process_utilization_stats(None).unwrap();
}

cargo run with error:

thread 'main' panicked at src/main.rs:7:53:
called `Result::unwrap()` on an `Err` value: NotFound
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

My device:

Fri Mar 15 07:01:16 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.14              Driver Version: 550.54.14      CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3080 Ti     Off |   00000000:01:00.0 Off |                  N/A |
|  0%   43C    P8             24W /  350W |       1MiB /  12288MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+

It's quite strange: the first call to nvmlDeviceGetProcessUtilization (made to retrieve the process count) returned 79 in my situation, when it should have been 0.

feature request: add support for `nvmlDeviceGetGraphicsRunningProcesses_v2`

Hello! I'm one of this crate's users and found that it does not support nvmlDeviceGetGraphicsRunningProcesses_v2. My personal computer runs Debian 11 (with the distribution-managed nvidia-driver package), which ships driver v470 and lacks support for nvmlDeviceGetGraphicsRunningProcesses_v3:

$ nm -D /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1 | grep nvmlDeviceGetGraphicsRunningProcesses
0000000000051370 T nvmlDeviceGetGraphicsRunningProcesses
0000000000051570 T nvmlDeviceGetGraphicsRunningProcesses_v2

So I propose adding two methods: running_graphics_processes_v2 and running_graphics_processes_count_v2. They would have the same return types as running_graphics_processes and running_graphics_processes_count.

Here's my current code (feel free to use it: I wrote it based on your code, so I'll license it as MIT OR Apache-2.0).

Would you mind if I submit a PR for this? Thank you.

Code
trait GetRunningGraphicsProcessesV2 {
    fn running_graphics_processes_v2(&self) -> Result<Vec<ProcessInfo>, NvmlError>;

    fn running_graphics_processes_count_v2(&self) -> Result<u32, NvmlError>;
}

type GetProcessV2Sig = unsafe extern "C" fn(
    device: nvmlDevice_t,
    #[allow(non_snake_case)]
    infoCount: *mut c_uint,
    infos: *mut nvmlProcessInfo_t,
) -> nvmlReturn_t;

static GET_PS_V2: Lazy<GetProcessV2Sig> = Lazy::new(|| {
    unsafe {
        let lib = Library::new("libnvidia-ml.so").unwrap();
        let f: GetProcessV2Sig = lib.get(b"nvmlDeviceGetGraphicsRunningProcesses_v2\0").map(|a| *a).unwrap();
        // Leak the library handle: dropping `lib` would unload the shared
        // object and leave the cached function pointer dangling.
        std::mem::forget(lib);
        f
    }
});

impl GetRunningGraphicsProcessesV2 for Device<'_> {
    fn running_graphics_processes_v2(&self) -> Result<Vec<ProcessInfo>, NvmlError> {
        let sym = *GET_PS_V2;

        let mut count: c_uint = match self.running_graphics_processes_count_v2()? {
            0 => return Ok(vec![]),
            value => value,
        };
        // Add a bit of headroom in case more processes are launched in
        // between the above call to get the expected count and the time we
        // actually make the call to get data below.
        count += 5;
        let mem = unsafe { mem::zeroed() };
        let mut processes: Vec<nvmlProcessInfo_t> = vec![mem; count as usize];

        let device = unsafe { self.handle() };
        nvml_try(unsafe { sym(device, &mut count, processes.as_mut_ptr()) })?;
        processes.truncate(count as usize);

        Ok(processes.into_iter().map(ProcessInfo::from).collect())
    }

    fn running_graphics_processes_count_v2(&self) -> Result<u32, NvmlError> {
        let sym = *GET_PS_V2;

        // Indicates that we want the count
        let mut count: c_uint = 0;

        let device = unsafe { self.handle() };
        // Passing null doesn't indicate that we want the count. It's just allowed.
        match unsafe { sym(device, &mut count, null_mut()) } {
            nvmlReturn_enum_NVML_ERROR_INSUFFICIENT_SIZE => Ok(count),
            // If success, return 0; otherwise, return error
            other => nvml_try(other).map(|_| 0),
        }
    }
}

Segmentation fault when fetching graphics processes

I've been using this library in my GUI program centred around displaying GPU statistics, but I've started receiving segmentation faults when fetching graphics processes. Fetching compute processes still works fine (including when there are more than 0 compute processes).

This isn't caused by a change in the library itself, as I was using version 0.8 the whole time and the issue started appearing on its own after a system update. Updating to 0.9 didn't fix the issue.

I've narrowed down the segfault to this line:

Ok(processes.into_iter().map(ProcessInfo::from).collect())

My guess is this is a use-after-free error of some sort. Is the API being misused in some way?

I am running Arch Linux with proprietary drivers, here is my nvidia-smi:

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.54.03              Driver Version: 535.54.03    CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3090        Off | 00000000:09:00.0  On |                  N/A |
| 82%   74C    P2             316W / 370W |  19504MiB / 24576MiB |     96%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   1  NVIDIA GeForce GTX 1080 Ti     Off | 00000000:0A:00.0 Off |                  N/A |
|  0%   42C    P8              11W / 250W |      9MiB / 11264MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

GPU Temp sensors

Is there a way to get more GPU temp sensors, i.e. core, hotspot, memory junction?
Currently I only see nvml::enum_wrappers::device::TemperatureSensor::Gpu, which I assume is the core sensor?

Cannot call legacy functions

device_count() panics with value: FailedToLoadSymbol("/lib64/libnvidia-ml.so: undefined symbol: nvmlDeviceGetComputeRunningProcesses_v3")

nvml_wrapper version: 0.9.0 (with legacy-functions feature)
nvidia driver version: 470.161.03

Is something else required to call the v2 version of this function?

Use the `#[doc(alias = "...")]` attribute on wrapper methods

The #[doc(alias = "...")] attribute became stable in Rust 1.48, and an interesting use case for the feature was presented in this blog post. Libraries that wrap over an FFI interface (like nvml-wrapper does) can use the attribute to make it easier for developers used to working with the C library to transition to the Rust wrapper, by enabling them to search for the C function names directly.

As an example, the following wrapper function in nvml-wrapper:

pub fn are_devices_on_same_board(
    &self,
    device1: &Device,
    device2: &Device,
) -> Result<bool, NvmlError> {
    let sym = nvml_sym(self.lib.nvmlDeviceOnSameBoard.as_ref())?;

    unsafe {
        let mut bool_int: c_int = mem::zeroed();

        nvml_try(sym(device1.handle(), device2.handle(), &mut bool_int))?;

        match bool_int {
            0 => Ok(false),
            _ => Ok(true),
        }
    }
}

should be annotated with the following attribute:

#[doc(alias = "nvmlDeviceOnSameBoard")]

That way it's possible to search for the C function name and find the equivalent method in the Rust wrapper.

I'd be happy to accept a PR adding usages of this attribute around the wrapper!

Accept more variants of lib name or make internal init public

So I attempted to run a tool made using this crate in an NVIDIA-based Docker image and found it didn't work. Further investigation showed that the image had libnvidia-ml.so.1 symlinked to libnvidia-ml.so.450.80.02 but hadn't created a libnvidia-ml.so.

I can create the symlink myself as a workaround; it's just that the tool may be copied into an image built off the same NVIDIA base image without the symlink, so this adds a mild inconvenience for every user. Also, when using a non-root user for security reasons, I won't be able to create the symlink from my tool either. 😞
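One possible fix is for the loader to try several candidate names in order. A sketch of that selection logic with the actual dlopen call abstracted away (first_loadable is a hypothetical helper, not existing crate API):

```rust
// Illustrative selection logic only. In a real loader, `try_load` would
// attempt to open the shared object (e.g. via libloading::Library::new)
// and report whether it succeeded.
fn first_loadable<'a, F>(candidates: &[&'a str], try_load: F) -> Option<&'a str>
where
    F: Fn(&str) -> bool,
{
    candidates.iter().copied().find(|name| try_load(name))
}
```

Called with candidates like `["libnvidia-ml.so", "libnvidia-ml.so.1"]`, this would pick up the versioned symlink when the unversioned one is missing.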

Lifecycle problems with `device_by_index`

This is what I am trying:

pub struct GpuInfo<'a> {
    _instance: NVML,
    _devices: Option<Vec<Device<'a>>>,
    count: u32,
}

impl<'a> GpuInfo<'a> {
    pub fn new() -> Result<GpuInfo<'static>, NvmlError> {
        let _instance: NVML = NVML::init()?;
        let count = _instance.device_count()?;
        Ok(GpuInfo {
            _instance,
            _devices: None,
            count,
        })
    }

    pub fn get_devices(&mut self) {
        self._devices = Some(
            (0..self.count)
                .filter_map(|i| self._instance.device_by_index(i).ok())
                .collect(),
        );
    }
}

But it does not work. It gives me lifetime errors. Specifically:

error[E0495]: cannot infer an appropriate lifetime for autoref due to conflicting requirements
  --> src/gpu.rs:49:48
   |
49 |                 .filter_map(|i| self._instance.device_by_index(i).ok())
   |                                                ^^^^^^^^^^^^^^^
   |
note: first, the lifetime cannot outlive the anonymous lifetime #1 defined on the method body at 46:5...
  --> src/gpu.rs:46:5
   |
46 |     pub fn get_devices(&mut self) {
   |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
note: ...so that reference does not outlive borrowed content
  --> src/gpu.rs:49:33
   |
49 |                 .filter_map(|i| self._instance.device_by_index(i).ok())
   |                                 ^^^^^^^^^^^^^^
note: but, the lifetime must be valid for the lifetime `'a` as defined on the impl at 28:6...
  --> src/gpu.rs:28:6
   |
28 | impl<'a> GpuInfo<'a> {
   |      ^^
note: ...so that the expression is assignable
  --> src/gpu.rs:47:25
   |
47 |           self._devices = Some(
   |  _________________________^
48 | |             (0..self.count)
49 | |                 .filter_map(|i| self._instance.device_by_index(i).ok())
50 | |                 .collect(),
51 | |         );
   | |_________^
   = note: expected `Option<Vec<nvml_wrapper::Device<'a>>>`
              found `Option<Vec<nvml_wrapper::Device<'_>>>`

I am not sure if this is related to #2 or not. Any suggestions on how to fix this?
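One workaround with the current lifetime-based API is to avoid storing the borrowed Device values next to the NVML handle at all, and instead construct them on demand. A sketch with mock types (this is the shape of the workaround, not the wrapper's real API):

```rust
// Mock types for illustration.
struct Nvml;

struct Device<'a> {
    _nvml: &'a Nvml, // the borrow keeps the handle alive while in use
    index: u32,
}

impl Nvml {
    fn device_by_index(&self, index: u32) -> Device<'_> {
        Device { _nvml: self, index }
    }
}

// Store only the owned handle and the count; build Device values on
// demand instead of trying to cache them next to their owner, which
// would make the struct self-referential.
struct GpuInfo {
    instance: Nvml,
    count: u32,
}

impl GpuInfo {
    fn devices(&self) -> Vec<Device<'_>> {
        (0..self.count)
            .map(|i| self.instance.device_by_index(i))
            .collect()
    }
}
```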

failing to load driver nvml from wsl2

I am just getting started in Rust development, so I may just need some guidance. I have developed a simple program which just dumps the details of the driver that it loads. On my Ubuntu 22.04 LTS notebook I run my app and the output is (as expected):

Device 0: "NVIDIA GeForce GTX 1060 with Max-Q Design"
Memory Info MemoryInfo { free: 6220742656, total: 6442450944, used: 221708288 }
Clock Info 405
Num Cores 1280

and nvidia-smi reports

nvidia-smi
Fri Nov 10 09:29:24 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.23.06              Driver Version: 545.23.06    CUDA Version: 12.3     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce GTX 1060 ...    On  | 00000000:01:00.0 Off |                  N/A |
| N/A   45C    P8               5W /  60W |    139MiB /  6144MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A      1513      G   /usr/lib/xorg/Xorg                           81MiB |
|    0   N/A  N/A      2378      G   /usr/bin/gnome-shell                         55MiB |
+---------------------------------------------------------------------------------------+

Now when I go and compile and run my app on my Windows 11 WSL2 Ubuntu 22.04 LTS instance, I get:

Error: DriverNotLoaded

However, when I run nvidia-smi I am presented with a driver. I was guessing that since nvidia-smi worked, the driver was accessible.

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.23.06              Driver Version: 546.01       CUDA Version: 12.3     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3060        On  | 00000000:07:00.0 Off |                  N/A |
| 43%   25C    P8               7W / 170W |    500MiB / 12288MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A       333      G   /Xwayland                                 N/A      |
+---------------------------------------------------------------------------------------+

My question is: are there any additional paths or environment variables that I need to set up in order to get this to work, or is it simply an issue with WSL2?

Just for fun, I'm including the code because it's very basic:

extern crate nvml_wrapper as nvml;

use nvml::error::NvmlError;
use nvml::Nvml;
use nvml_wrapper::enum_wrappers::device::Clock;

fn main() -> Result<(), NvmlError> {
    let nvml = Nvml::init()?;
    let device_count = nvml.device_count()?;
    for di in 0..device_count {
        let device = nvml.device_by_index(di)?;
        println!("Device {}: {:?}", di, device.name()?);
        println!("Memory Info {:?}", device.memory_info()?);
        println!("Clock Info {:?}", device.clock_info(Clock::Memory)?);
        println!("Num Cores {:?}", device.num_cores()?);
    }
    Ok(())
}

FailedToLoadSymbol("GetProcAddress failed") with Windows studio driver v536.99

I just upgraded nvidia windows drivers to "NVIDIA Studio Driver version 536.99" and I now get FailedToLoadSymbol("GetProcAddress failed") error when trying to initialize the library. I suspect this is due to the bindings being out of sync or something?

Is it possible to regenerate/rebuild the project myself to get compatibility with the latest drivers? I've tried to find the source of nvml.h but haven't managed to track it down.

Comply with the Rust API guidelines as much as possible

Checklist and links copied from https://github.com/brson/rust-api-guidelines:

Rust API guidelines

Crate conformance checklist

  • Organization (crate is structured in an intelligible way)
    • Crate root re-exports common functionality
      • This is a guideline that will have to be kept in mind with the addition of new content.
    • Modules provide a sensible API hierarchy
      • I believe that this crate meets this guideline (although it is very subjective).
  • Naming (crate aligns with Rust naming conventions)
    • Casing conforms to RFC 430
    • Ad-hoc conversions follow as_, to_, into_ conventions
    • Methods on collections that produce iterators follow iter, iter_mut, into_iter
      • N/A
    • Iterator type names match the methods that produce them
      • N/A
    • Ownership suffixes use _mut and _ref
    • Single-element containers implement appropriate getters
      • N/A (?)
  • Interoperability (crate interacts nicely with other library functionality)
    • Types eagerly implement common traits
      • Copy, Clone, Eq, PartialEq, Ord, PartialOrd, Hash, Debug,
        Display, Default
      • I do my best here, but I'm not perfect.
    • Conversions use the standard traits From, AsRef, AsMut
      • Should my various as_inner and as_mut_inner methods be replaced (or augmented) with implementations of AsRef and AsMut?
    • Collections implement FromIterator and Extend
      • N/A
    • Data structures implement Serde's Serialize, Deserialize
    • Crate has a "serde" cfg option that enables Serde
    • Types are Send and Sync where possible
      • Another guideline that needs constant evaluation.
    • Error types are Send and Sync
    • Error types are meaningful, not ()
    • Binary number types provide Hex, Octal, Binary formatting
      • Only applicable to bitflags, and the bitflags crate handles this for us.
  • Macros (crate presents well-behaved macros) (N/A)
    • Input syntax is evocative of the output
    • Macros compose well with attributes
    • Item macros work anywhere that items are allowed
    • Item macros support visibility specifiers
    • Type fragments are flexible
  • Documentation (crate is abundantly documented)
    • Crate level docs are thorough and include examples
    • All items have a rustdoc example
      • I would say N/A. The vast majority of content is either a simple data struct or a simple function call that takes no parameters and returns a Result with a simple value. The more complicated items / usecases have examples (or should).
    • Examples use ?, not try!, not unwrap
    • Function docs include error conditions in "Errors" section
      • As much as NVIDIA's documentation allows.
    • Function docs include panic conditions in "Panics" section <-- pick up evaluating here
    • Prose contains hyperlinks to relevant things
    • Cargo.toml publishes CI badges for tier 1 platforms
    • Cargo.toml includes all common metadata
      • authors, description, license, homepage, documentation, repository,
        readme, keywords, categories
    • Crate sets html_root_url attribute "https://docs.rs/$crate/$version"
    • Cargo.toml documentation key points to "https://docs.rs/$crate"
    • Release notes document all significant changes
  • Predictability (crate enables legible code that acts how it looks)
    • Smart pointers do not add inherent methods
    • Conversions live on the most specific type involved
    • Functions with a clear receiver are methods
    • Functions do not take out-parameters
    • Operator overloads are unsurprising
    • Only smart pointers implement Deref and DerefMut
    • Deref and DerefMut never fail
    • Constructors are static, inherent methods
  • Flexibility (crate supports diverse real-world use cases)
    • Functions expose intermediate results to avoid duplicate work
    • Caller decides where to copy and place data
    • Functions minimize assumptions about parameters by using generics
    • Traits are object-safe if they may be useful as a trait object
  • Type safety (crate leverages the type system effectively)
    • Newtypes provide static distinctions
    • Arguments convey meaning through types, not bool or Option
    • Types for a set of flags are bitflags, not enums
    • Builders enable construction of complex values
  • Dependability (crate is unlikely to do the wrong thing)
    • Functions validate their arguments
    • Destructors never fail
    • Destructors that may block have alternatives
  • Debuggability (crate is conducive to easy debugging)
    • All public types implement Debug
    • Debug representation is never empty
  • Future proofing (crate is free to improve without breaking users' code)
    • Structs have private fields
    • Newtypes encapsulate implementation details
  • Necessities (to whom they matter, they really matter)
    • Public dependencies of a stable crate are stable
    • Crate and its dependencies have a permissive license

Name of the crate

Hello,
I am a bit confused about the name of the crate: is it nvml_wrapper or nvml-wrapper? The repo name has a hyphen, the doc URL https://docs.rs/nvml-wrapper/0.4.1/nvml_wrapper/ contains both, and the doc heading is "Crate nvml_wrapper". I would maybe just add extern crate nvml_wrapper as nvml; and the other imports to the README example to clarify this.
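Both names refer to the same crate: Cargo package names may contain hyphens, but Rust identifiers cannot, so Cargo maps the package name nvml-wrapper to the crate identifier nvml_wrapper. For example:

```toml
# Cargo.toml — the package name uses a hyphen
[dependencies]
nvml-wrapper = "0.4"
```

while in Rust source it is referenced with an underscore, e.g. `extern crate nvml_wrapper as nvml;` or simply `use nvml_wrapper::Nvml;`.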

Linking error while building on Windows

I keep getting this error when I try to compile a crate with this crate as a dependency:

error: linking with `link.exe` failed: exit code: 1181
  |
  = note: "C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Community\\VC\\Tools\\MSVC\\14.16.27023\\bin\\HostX64\\x64\\link.exe" "/NOLOGO" "/NXCOMPAT" "/LIBPATH:C:\\Users\\Markus\\.rustup\\toolchains\\stable-x86_64-pc-windows-msvc\\lib\\rustlib\\x86_64-pc-windows-msvc\\lib" "D:\\Programming\\Rust\\active\\nvdup\\target\\release\\deps\\nvdup-766bc952328f798a.nvdup.6dghzu5s-cgu.0.rcgu.o" "D:\\Programming\\Rust\\active\\nvdup\\target\\release\\deps\\nvdup-766bc952328f798a.nvdup.6dghzu5s-cgu.1.rcgu.o" "D:\\Programming\\Rust\\active\\nvdup\\target\\release\\deps\\nvdup-766bc952328f798a.nvdup.6dghzu5s-cgu.10.rcgu.o" "D:\\Programming\\Rust\\active\\nvdup\\target\\release\\deps\\nvdup-766bc952328f798a.nvdup.6dghzu5s-cgu.11.rcgu.o" "D:\\Programming\\Rust\\active\\nvdup\\target\\release\\deps\\nvdup-766bc952328f798a.nvdup.6dghzu5s-cgu.12.rcgu.o" "D:\\Programming\\Rust\\active\\nvdup\\target\\release\\deps\\nvdup-766bc952328f798a.nvdup.6dghzu5s-cgu.13.rcgu.o" "D:\\Programming\\Rust\\active\\nvdup\\target\\release\\deps\\nvdup-766bc952328f798a.nvdup.6dghzu5s-cgu.14.rcgu.o" "D:\\Programming\\Rust\\active\\nvdup\\target\\release\\deps\\nvdup-766bc952328f798a.nvdup.6dghzu5s-cgu.15.rcgu.o" "D:\\Programming\\Rust\\active\\nvdup\\target\\release\\deps\\nvdup-766bc952328f798a.nvdup.6dghzu5s-cgu.2.rcgu.o" "D:\\Programming\\Rust\\active\\nvdup\\target\\release\\deps\\nvdup-766bc952328f798a.nvdup.6dghzu5s-cgu.3.rcgu.o" "D:\\Programming\\Rust\\active\\nvdup\\target\\release\\deps\\nvdup-766bc952328f798a.nvdup.6dghzu5s-cgu.4.rcgu.o" "D:\\Programming\\Rust\\active\\nvdup\\target\\release\\deps\\nvdup-766bc952328f798a.nvdup.6dghzu5s-cgu.5.rcgu.o" "D:\\Programming\\Rust\\active\\nvdup\\target\\release\\deps\\nvdup-766bc952328f798a.nvdup.6dghzu5s-cgu.6.rcgu.o" "D:\\Programming\\Rust\\active\\nvdup\\target\\release\\deps\\nvdup-766bc952328f798a.nvdup.6dghzu5s-cgu.7.rcgu.o" "D:\\Programming\\Rust\\active\\nvdup\\target\\release\\deps\\nvdup-766bc952328f798a.nvdup.6dghzu5s-cgu.8.rcgu.o" 
"D:\\Programming\\Rust\\active\\nvdup\\target\\release\\deps\\nvdup-766bc952328f798a.nvdup.6dghzu5s-cgu.9.rcgu.o" "/OUT:D:\\Programming\\Rust\\active\\nvdup\\target\\release\\deps\\nvdup-766bc952328f798a.exe" "D:\\Programming\\Rust\\active\\nvdup\\target\\release\\deps\\nvdup-766bc952328f798a.2l1vb010stq23k08.rcgu.o" "/OPT:REF,ICF" "/DEBUG" "/NATVIS:C:\\Users\\Markus\\.rustup\\toolchains\\stable-x86_64-pc-windows-msvc\\lib\\rustlib\\etc\\intrinsic.natvis" "/NATVIS:C:\\Users\\Markus\\.rustup\\toolchains\\stable-x86_64-pc-windows-msvc\\lib\\rustlib\\etc\\liballoc.natvis" "/NATVIS:C:\\Users\\Markus\\.rustup\\toolchains\\stable-x86_64-pc-windows-msvc\\lib\\rustlib\\etc\\libcore.natvis" "/LIBPATH:D:\\Programming\\Rust\\active\\nvdup\\target\\release\\deps" "/LIBPATH:C:\\Program Files\\NVIDIA Corporation\\NVSMI" "/LIBPATH:C:\\Users\\Markus\\.rustup\\toolchains\\stable-x86_64-pc-windows-msvc\\lib\\rustlib\\x86_64-pc-windows-msvc\\lib" "D:\\Programming\\Rust\\active\\nvdup\\target\\release\\deps\\libnvml_wrapper-04a79d4e90b7683a.rlib" "D:\\Programming\\Rust\\active\\nvdup\\target\\release\\deps\\libnvml_wrapper_sys-b4e13e775667842f.rlib" "D:\\Programming\\Rust\\active\\nvdup\\target\\release\\deps\\libbitflags-ad952368103d65d7.rlib" "D:\\Programming\\Rust\\active\\nvdup\\target\\release\\deps\\liberror_chain-ba0720c77fe09a14.rlib" "D:\\Programming\\Rust\\active\\nvdup\\target\\release\\deps\\libbacktrace-9b8e9d1a1fee94a6.rlib" "D:\\Programming\\Rust\\active\\nvdup\\target\\release\\deps\\librustc_demangle-44d12e35ab5684f9.rlib" "D:\\Programming\\Rust\\active\\nvdup\\target\\release\\deps\\libcfg_if-14e084dd51270587.rlib" "D:\\Programming\\Rust\\active\\nvdup\\target\\release\\deps\\libwinapi-adf1e76229356873.rlib" "C:\\Users\\Markus\\.rustup\\toolchains\\stable-x86_64-pc-windows-msvc\\lib\\rustlib\\x86_64-pc-windows-msvc\\lib\\libstd-c1e537280a7eb2d9.rlib" 
"C:\\Users\\Markus\\.rustup\\toolchains\\stable-x86_64-pc-windows-msvc\\lib\\rustlib\\x86_64-pc-windows-msvc\\lib\\libpanic_unwind-fea7faaed3d25759.rlib" "C:\\Users\\Markus\\.rustup\\toolchains\\stable-x86_64-pc-windows-msvc\\lib\\rustlib\\x86_64-pc-windows-msvc\\lib\\libbacktrace_sys-b0e97dc981603010.rlib" "C:\\Users\\Markus\\.rustup\\toolchains\\stable-x86_64-pc-windows-msvc\\lib\\rustlib\\x86_64-pc-windows-msvc\\lib\\libunwind-d93e4acd3f7f9acb.rlib" "C:\\Users\\Markus\\.rustup\\toolchains\\stable-x86_64-pc-windows-msvc\\lib\\rustlib\\x86_64-pc-windows-msvc\\lib\\librustc_demangle-42249eebbc8b7bf7.rlib" "C:\\Users\\Markus\\.rustup\\toolchains\\stable-x86_64-pc-windows-msvc\\lib\\rustlib\\x86_64-pc-windows-msvc\\lib\\liblibc-9bb7faeb8ad341e3.rlib" "C:\\Users\\Markus\\.rustup\\toolchains\\stable-x86_64-pc-windows-msvc\\lib\\rustlib\\x86_64-pc-windows-msvc\\lib\\liballoc-ae07d6dbc61a1548.rlib" "C:\\Users\\Markus\\.rustup\\toolchains\\stable-x86_64-pc-windows-msvc\\lib\\rustlib\\x86_64-pc-windows-msvc\\lib\\librustc_std_workspace_core-53d2dfe88d5ede66.rlib" "C:\\Users\\Markus\\.rustup\\toolchains\\stable-x86_64-pc-windows-msvc\\lib\\rustlib\\x86_64-pc-windows-msvc\\lib\\libcore-797cfa1fd40eb75c.rlib" "C:\\Users\\Markus\\.rustup\\toolchains\\stable-x86_64-pc-windows-msvc\\lib\\rustlib\\x86_64-pc-windows-msvc\\lib\\libcompiler_builtins-8424507037470daf.rlib" "nvml.lib" "advapi32.lib" "dbghelp.lib" "kernel32.lib" "advapi32.lib" "ws2_32.lib" "userenv.lib" "msvcrt.lib"
  = note: LINK : fatal error LNK1181: cannot open input file 'nvml.lib'

I'm not sure what I'm supposed to do since the README doesn't say anything about nvml.lib. There seems to be no such file inside C:\Program Files\NVIDIA Corporation.
