
Comments (14)

GuillaumeTong commented on August 10, 2024

Looking closer at the onnxruntime compatibility, I noticed that onnx 1.10 actually pairs with onnxruntime 1.9 (which raises the question: what does onnxruntime 1.10 pair with?).
So I installed the packages as suggested: pip install "onnx>=1.10,<1.11" "onnxruntime-gpu>=1.9,<1.10"
After fixing this issue, the catboost example runs correctly:

$ CUDA_VISIBLE_DEVICES=0 python
Python 3.9.7 (default, Sep 16 2021, 13:09:58)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> from sklearn import datasets
>>> import onnxruntime as rt
>>> breast_cancer = datasets.load_breast_cancer()
>>> sess = rt.InferenceSession('breast_cancer.onnx')
/home/***/miniconda3/envs/***/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:350: UserWarning: Deprecation warning. This ORT build has ['CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. The next release (ORT 1.10) will require explicitly setting the providers parameter (as opposed to the current behavior of providers getting set/registered by default based on the build flags) when instantiating InferenceSession.For example, onnxruntime.InferenceSession(..., providers=["CUDAExecutionProvider"], ...)
  warnings.warn("Deprecation warning. This ORT build has {} enabled. ".format(available_providers) +
>>> probabilities = sess.run(['probabilities'],
...                          {'features': breast_cancer.data.astype(np.float32)})
>>>
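The deprecation warning in the transcript above can be avoided by passing the providers parameter explicitly. A minimal sketch of one way to build that list (`preferred_providers` is a hypothetical helper, not part of the onnxruntime API):

```python
def preferred_providers(available):
    """Order execution providers GPU-first, keeping only those that
    this onnxruntime build actually offers (hypothetical helper)."""
    order = ["TensorrtExecutionProvider", "CUDAExecutionProvider",
             "CPUExecutionProvider"]
    return [p for p in order if p in available]


# With onnxruntime installed, this silences the ORT 1.9+ warning:
#   import onnxruntime as rt
#   sess = rt.InferenceSession(
#       "breast_cancer.onnx",
#       providers=preferred_providers(rt.get_available_providers()))
```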

This appears to be a simple version mismatch. Still, it seems unexpected that such problems should arise at all, given that I originally installed my packages with pip install onnx>=1.9.0 onnxruntime-gpu.

from clip-onnx.

Lednik7 commented on August 10, 2024

Hi @YoadTew! Thank you for using my library. Have you looked at the examples folder? In order to use ONNX together with the GPU, you must run the following code block.

!pip install onnxruntime-gpu

Then check that the module works:

import onnxruntime
print(onnxruntime.get_device()) # should print "GPU"

After these steps, please restart your runtime. I think it can help you.


YoadTew commented on August 10, 2024

Hey @Lednik7, thank you for responding. I have looked at the examples folder and ran all those steps.

Running

import onnxruntime
print(onnxruntime.get_device()) # should print "GPU"

does return "GPU" for me, but I still have the same problem I described earlier. I also restarted my machine to make sure.
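Worth noting here: get_device() only reports what the build was compiled for; it does not guarantee that the CUDA provider can actually be created at runtime. A stricter check (a sketch; `missing_providers` is a hypothetical helper, not an onnxruntime API) compares what a session was asked for against what it actually got:

```python
def missing_providers(requested, created):
    """Return the providers that were requested but not actually
    created for a session (hypothetical diagnostic helper)."""
    return [p for p in requested if p not in created]


# With onnxruntime installed and a model file available:
#   import onnxruntime as rt
#   sess = rt.InferenceSession("model.onnx",
#                              providers=["CUDAExecutionProvider"])
#   missing_providers(["CUDAExecutionProvider"], sess.get_providers())
# A non-empty result means CUDA silently failed to load
# (check the CUDA/cuDNN versions against the ORT requirements).
```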


Lednik7 commented on August 10, 2024

@YoadTew Could you tell me what configuration you are working on? In what environment?


YoadTew commented on August 10, 2024

@Lednik7 I'm working with Ubuntu 20.04 in a new conda environment with Python 3.8. The only packages I installed are the ones required by this repo.

Here is the output of !nvidia-smi :

| NVIDIA-SMI 495.29.05    Driver Version: 495.29.05    CUDA Version: 11.5     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA TITAN Xp     On   | 00000000:05:00.0 Off |                  N/A |
| 23%   35C    P8     9W / 250W |    240MiB / 12188MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA TITAN Xp     On   | 00000000:06:00.0 Off |                  N/A |
| 23%   34C    P8     9W / 250W |      8MiB / 12196MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   2  NVIDIA TITAN Xp     On   | 00000000:09:00.0 Off |                  N/A |
| 23%   29C    P8     9W / 250W |      8MiB / 12196MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   3  NVIDIA TITAN Xp     On   | 00000000:0A:00.0 Off |                  N/A |
| 23%   30C    P8     9W / 250W |      8MiB / 12196MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

Here is the output of pip freeze:

argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
asttokens==2.0.5
attrs==21.4.0
backcall==0.2.0
black==22.1.0
bleach==4.1.0
certifi==2021.10.8
cffi==1.15.0
click==8.0.3
clip @ git+https://github.com/openai/CLIP.git@40f5484c1c74edd83cb9cf687c6ab92b28d8b656
clip-onnx @ git+https://github.com/Lednik7/CLIP-ONNX.git@75849c29c781554d01f87391dd5e6a7cca3e4ac1
debugpy==1.5.1
decorator==5.1.1
defusedxml==0.7.1
entrypoints==0.3
executing==0.8.2
flatbuffers==2.0
ftfy==6.0.3
importlib-resources==5.4.0
ipykernel==6.7.0
ipython==8.0.1
ipython-genutils==0.2.0
jedi==0.18.1
Jinja2==3.0.3
jsonschema==4.4.0
jupyter-client==7.1.2
jupyter-core==4.9.1
jupyterlab-pygments==0.1.2
MarkupSafe==2.0.1
matplotlib-inline==0.1.3
mistune==0.8.4
mypy-extensions==0.4.3
nbclient==0.5.10
nbconvert==6.4.1
nbformat==5.1.3
nest-asyncio==1.5.4
notebook==6.4.8
numpy==1.22.1
onnx==1.10.2
onnxruntime==1.10.0
onnxruntime-gpu==1.10.0
packaging==21.3
pandocfilters==1.5.0
parso==0.8.3
pathspec==0.9.0
pexpect==4.8.0
pickleshare==0.7.5
Pillow==9.0.0
platformdirs==2.4.1
prometheus-client==0.13.1
prompt-toolkit==3.0.26
protobuf==3.19.4
ptyprocess==0.7.0
pure-eval==0.2.2
pycparser==2.21
Pygments==2.11.2
pyparsing==3.0.7
pyrsistent==0.18.1
python-dateutil==2.8.2
pyzmq==22.3.0
regex==2022.1.18
Send2Trash==1.8.0
six==1.16.0
stack-data==0.1.4
terminado==0.13.1
testpath==0.5.0
tomli==2.0.0
torch==1.10.2
torchvision==0.11.3
tornado==6.1
tqdm==4.62.3
traitlets==5.1.1
typing_extensions==4.0.1
wcwidth==0.2.5
webencodings==0.5.1
zipp==3.7.0

Do you need anything else?


Lednik7 commented on August 10, 2024

@YoadTew Try installing and running the example again with

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # must be set before CUDA is first initialized

I want to find out whether or not this is a problem with the cluster.


YoadTew commented on August 10, 2024

@Lednik7 It doesn't seem to help. The same problem also happens on my own PC with Ubuntu 20.04 and a single RTX 3070:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.91.03    Driver Version: 460.91.03    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce RTX 3070    Off  | 00000000:07:00.0  On |                  N/A |
|  0%   38C    P8    20W / 240W |    342MiB /  7973MiB |      9%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+


Lednik7 commented on August 10, 2024

@YoadTew Try running the conversion-and-inference example from https://catboost.ai/en/docs/concepts/apply-onnx-ml together with CUDAExecutionProvider.


Lednik7 commented on August 10, 2024

@YoadTew Did you manage to run it, or did you have problems installing catboost? I asked you to run it to check whether onnxruntime-gpu works.


GuillaumeTong commented on August 10, 2024

I am having the same problem

Error message:

$ CUDA_VISIBLE_DEVICES=0 python script.py
2022-02-08 07:41:18.681109642 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:509 CreateExecutionProviderInstance] Failed to create TensorrtExecutionProvider. Please referen
ce https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#requirements to ensure all dependencies are met.
2022-02-08 07:41:18.681124990 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:535 CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Please reference h
ttps://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.

CUDA versions:

$ nvidia-smi
Tue Feb  8 07:46:15 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 495.29.05    Driver Version: 495.29.05    CUDA Version: 11.5     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:01:00.0 Off |                  N/A |
|  0%   39C    P8    43W / 390W |      1MiB / 24268MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA GeForce ...  On   | 00000000:02:00.0 Off |                  N/A |
|  0%   37C    P8    38W / 390W |      1MiB / 24268MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

onnx versions:

$ conda list onnx
# packages in environment at /home/***/miniconda3/envs/***:
#
# Name                    Version                   Build  Channel
onnx                      1.10.2                   pypi_0    pypi
onnxruntime-gpu           1.10.0                   pypi_0    pypi

Verifying that onnxruntime can see the GPU:

$ python
Python 3.9.7 (default, Sep 16 2021, 13:09:58)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import onnxruntime
>>> print(onnxruntime.get_device())
GPU


GuillaumeTong commented on August 10, 2024

Here is what I get when going through the first catboost example:

$ python
Python 3.9.7 (default, Sep 16 2021, 13:09:58)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import catboost
>>> from sklearn import datasets
>>> breast_cancer = datasets.load_breast_cancer()
>>> model = catboost.CatBoostClassifier(loss_function='Logloss')
>>> model.fit(breast_cancer.data, breast_cancer.target)
Learning rate set to 0.008098
0:      learn: 0.6787961        total: 48.4ms   remaining: 48.3s
[...]
999:    learn: 0.0100949        total: 750ms    remaining: 0us
<catboost.core.CatBoostClassifier object at 0x7f26d1dc19d0>
>>> model.save_model(
...     "breast_cancer.onnx",
...     format="onnx",
...     export_parameters={
...         'onnx_domain': 'ai.catboost',
...         'onnx_model_version': 1,
...         'onnx_doc_string': 'test model for BinaryClassification',
...         'onnx_graph_name': 'CatBoostModel_for_BinaryClassification'
...     }
... )
>>> import numpy as np
>>> from sklearn import datasets
>>> import onnxruntime as rt
>>> breast_cancer = datasets.load_breast_cancer()
>>>
>>> sess = rt.InferenceSession('breast_cancer.onnx')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/**/miniconda3/envs/***/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 335, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/home/***/miniconda3/envs/***/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 361, in _create_inference_session
    raise ValueError("This ORT build has {} enabled. ".format(available_providers) +
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
>>>
>>> sess = rt.InferenceSession('breast_cancer.onnx', providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider'])
2022-02-08 08:06:46.093924754 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:509 CreateExecutionProviderInstance] Failed to create TensorrtExecutionProvider. Please reference https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#requirements to ensure all dependencies are met.
2022-02-08 08:06:46.093945737 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:535 CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Please reference https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.
>>>
>>> sess = rt.InferenceSession('breast_cancer.onnx', providers=['CUDAExecutionProvider'])
2022-02-08 08:07:48.538600177 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:535 CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Please reference https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.
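One way to isolate the failure in the transcript above: run the same session on CPUExecutionProvider. If that succeeds, the exported model itself is fine and the suspect is the CUDA/cuDNN installation rather than the conversion. A sketch of the fallback logic (`pick_provider` is a hypothetical helper, not an onnxruntime API):

```python
def pick_provider(available):
    """Prefer CUDA when this onnxruntime build offers it, otherwise
    fall back to CPU (hypothetical helper for isolating failures)."""
    if "CUDAExecutionProvider" in available:
        return "CUDAExecutionProvider"
    return "CPUExecutionProvider"


# With onnxruntime installed and the breast_cancer.onnx model from
# the transcript above:
#   import onnxruntime as rt
#   provider = pick_provider(rt.get_available_providers())
#   sess = rt.InferenceSession("breast_cancer.onnx", providers=[provider])
```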


Lednik7 commented on August 10, 2024

Thank you @GuillaumeTong for the tests. So everything works for you now?


GuillaumeTong commented on August 10, 2024

@Lednik7 Yes, correct


hidara2000 commented on August 10, 2024

For anyone else having a similar issue and using Torch: ensuring Torch is imported before onnxruntime solved my issue, i.e. replace

import onnxruntime as rt
import torch

with:

import torch
import onnxruntime as rt

