
Comments (4)

fxmarty commented on June 25, 2024

Hi @ingo-m, thank you for the report.

Locally, how did you install onnxruntime-gpu? The wheel hosted on the PyPI index is built for CUDA 11.8. https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html gives instructions on how to install the ORT CUDA EP for CUDA 12.1.

Not sure it will work, but you can also try `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/miniconda3/envs/py-onnx/lib/python3.10/site-packages/nvidia/cublas/lib` (note there must be no spaces around the `=` in shell).

Regarding the

RuntimeError: Error when binding input: There's no data transfer registered for copying tensors from Device:[DeviceType:1 MemoryType:0 DeviceId:0] to Device:[DeviceType:0 MemoryType:0 DeviceId:0]

I'm not sure yet, will investigate.

from optimum.

fxmarty commented on June 25, 2024

@ingo-m I cannot reproduce the issue with:

import torch
from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_name = "bigscience/bloomz-560m"
device_name = "cuda"

tokenizer = AutoTokenizer.from_pretrained(base_model_name)

ort_model = ORTModelForCausalLM.from_pretrained(
    base_model_name,
    use_io_binding=True,
    export=True,
    provider="CUDAExecutionProvider",
)

prompt = "i like pancakes"
inference_ids = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(
    device_name
)

# Try to generate a prediction.
output_ids = ort_model.generate(
    input_ids=inference_ids["input_ids"],
    attention_mask=inference_ids["attention_mask"],
    max_new_tokens=512,
    temperature=1e-8,
    do_sample=True,
)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

with CUDA 11.8, torch==2.1.2+cu118, optimum==1.16.2, onnxruntime-gpu==1.17.0, onnx==1.15.0.
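For reference, one possible way to pin a matching environment (a sketch — the cu118 index URL is PyTorch's standard wheel index; adjust for your setup):

```shell
pip install "optimum==1.16.2" "onnxruntime-gpu==1.17.0" "onnx==1.15.0"
pip install "torch==2.1.2" --index-url https://download.pytorch.org/whl/cu118
```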

ingo-m commented on June 25, 2024

@fxmarty thanks for looking into it.

Locally, I installed directly from PyPI (with pipenv). In other words, I did not follow the specific instructions for CUDA 12, so that explains the problem. (However, it's strange that I had no problems with CUDA 12 when I was still using the older version optimum[onnxruntime-gpu]==1.9.1 🤔).

On Google Colab, !nvidia-smi reveals that it's using CUDA 12 as well (this is a free-tier Colab instance):

Mon Feb  5 12:59:37 2024       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.05             Driver Version: 535.104.05   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Tesla T4                       Off | 00000000:00:04.0 Off |                    0 |
| N/A   61C    P8              10W /  70W |      0MiB / 15360MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+

As you said, it looks like CUDA 12 is the culprit.

ingo-m commented on June 25, 2024

Regarding this error:

RuntimeError: Error when binding input: There's no data transfer registered for copying tensors from Device:[DeviceType:1 MemoryType:0 DeviceId:0] to Device:[DeviceType:0 MemoryType:0 DeviceId:0]

Perhaps the ORTModelForCausalLM model fell back to the CPU (because the CUDAExecutionProvider failed to load due to the CUDA 12 issue), while the input tokens were placed on the GPU, and the error occurs because model and tokens are not on the same device?
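If it helps with debugging, that hypothesis could be checked from the repro script above (a sketch — `providers` and `device` are the attributes optimum's ORTModel exposes for this, if I'm not mistaken):

```python
# Sketch, continuing from the repro script above: check which execution
# providers the session actually loaded and which device the model is on.
# If the CUDA EP failed to load, only CPUExecutionProvider will be listed.
print(ort_model.providers)
print(ort_model.device)

# Move the tokenized inputs to the model's device instead of hard-coding
# "cuda", so model and tokens always end up on the same device:
inference_ids = tokenizer(prompt, return_tensors="pt").to(ort_model.device)
```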
