Comments (3)
@idObTri Thank you. I am not sure; use_auth_token not working anymore does indeed seem to be a regression. While we fix it, you can use huggingface-cli / huggingface_hub instead.
from optimum.
@idObTri Thank you for the report; I cannot reproduce the issue. Can you try: pip install -U optimum huggingface_hub transformers
and
huggingface-cli login
with a token from https://huggingface.co/settings/tokens?
from optimum.onnxruntime import ORTModelForCustomTasks
model = ORTModelForCustomTasks.from_pretrained("fxmarty/tiny-onnx-private-2")
works for me with optimum==1.17.1, huggingface-hub==0.21.3, and transformers==4.38.1
Thank you for the reply.
I updated each module to optimum==1.17.1, huggingface-hub==0.21.3, and transformers==4.39.0.dev0, but I still had this problem.
Next, following your advice, I used huggingface_hub.login(hf_token) instead of passing use_auth_token=hf_token to ORTModelForCustomTasks.from_pretrained(). This works well.
(In my notebook, huggingface-cli didn't work well, so I used the huggingface_hub module instead.)
Should I migrate from use_auth_token to huggingface-cli / huggingface_hub?
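The workaround the thread converges on can be sketched as follows. This is a minimal sketch, not the library's documented migration path: it assumes huggingface_hub >= 0.21, and the HF_TOKEN environment variable name is an illustrative choice (any variable holding a token from https://huggingface.co/settings/tokens would do).

```python
import os

from huggingface_hub import login

# login() caches the token locally, so later from_pretrained(...) calls
# in optimum / transformers pick it up automatically, replacing the
# deprecated use_auth_token argument.
token = os.environ.get("HF_TOKEN")  # illustrative env var, not a library default
if token:
    login(token=token)
else:
    print("Set HF_TOKEN to authenticate before loading private models.")
```

After logging in once this way, ORTModelForCustomTasks.from_pretrained("fxmarty/tiny-onnx-private-2") should resolve the private repo without any token argument.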
Related Issues (20)
- Optimum for Jetson Orin Nano HOT 1
- It seems CPU is not supported HOT 2
- Llama-2-7b is failing with bfloat16 export with onnx HOT 1
- Implement ORTModelForZeroShotObjectDetection HOT 3
- changes to _maybe_log_save_evaluate() not reflected in optimum repo HOT 1
- 'gemma is not supported yet with the onnx backend' - Exporting on-the-fly to onnx HOT 8
- Gemma ONNX support HOT 5
- tflite support for gemma HOT 1
- [BUG] Mistral feature extraction export to ONNX is broken HOT 5
- [BUG] Cannot export Gemma/Mistral to ONNX/TensorRT using INT8 HOT 9
- Cannot download ONNX external data file from Hugging Face Hub HOT 4
- onnx model issue for zeroshot facebook/bart-large-mnli HOT 1
- Mixtral-8x7B-Instruct-v0.1-GPTQ AssertionError HOT 6
- OnnxSlim support
- NoSuchFile: [ONNXRuntimeError] : 3 : NO_SUCHFILE : Load model from data/data_onnx/model.onnx failed:Load model data/data_onnx/model.onnx failed. File doesn't exist HOT 1
- The exported ONNX model of Qwen/Qwen1.5-0.5B-Chat does not produce a cache-enabled model. HOT 1
- Training fails because of accelerate configure settings. HOT 1
- Llava ONNX export HOT 1