Covering my bases...
import torch
from optimum.onnxruntime import ORTModelForCausalLM

model = ORTModelForCausalLM.from_pretrained('/path/to/gemma')
# The exported graph expects int64 inputs, so the attention mask is a LongTensor too.
logits = model(input_ids=torch.LongTensor([[1, 2, 3, 4, 5]]),
               attention_mask=torch.LongTensor([[1, 1, 1, 1, 1]]),
               position_ids=torch.LongTensor([[0, 1, 2, 3, 4]]))['logits']
The logits here look like the pure ORT version in the original issue description.
Hi @jacob-vincent-mink, I see you haven't enabled eval mode in your comparison.
Running the following script:
import torch
from transformers import AutoModelForCausalLM
from optimum.onnxruntime import ORTModelForCausalLM

torch.manual_seed(0)
torch.cuda.manual_seed(0)

input_ids = torch.randint(10, 110, (1, 100), dtype=torch.long)
position_ids = torch.arange(100, dtype=torch.long).unsqueeze(0)
attention_mask = torch.ones(1, 100, dtype=torch.long)

ort_model = ORTModelForCausalLM.from_pretrained("./gemma")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it")
model.eval()

with torch.inference_mode():
    output = model(input_ids, attention_mask=attention_mask, position_ids=position_ids)
    ort_output = ort_model(input_ids, attention_mask=attention_mask, position_ids=position_ids)

print(output.logits)
print(ort_output.logits)
torch.testing.assert_close(ort_output.logits, output.logits, rtol=1e-3, atol=1e-4)
I get almost identical logits:
Mismatched elements: 1615 / 25600000 (0.0%)
Greatest absolute difference: 0.0008251667022705078 at index (0, 8, 169499) (up to 0.0001 allowed)
Greatest relative difference: 18.47104263305664 at index (0, 8, 52871) (up to 0.001 allowed)
The elements that aren't matching are mostly very near zero (hence the large relative difference):
>>> ort_output.logits[0, 96, 169953]
tensor(4.9114e-05)
>>> output.logits[0, 96, 169953]
tensor(1.6689e-06)
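For a systematic check, here is a sketch reusing the tensors from the script above; the mismatch mask recomputes the same atol/rtol criterion that assert_close applied:
import torch

# torch.testing.assert_close flags |actual - expected| > atol + rtol * |expected|.
atol, rtol = 1e-4, 1e-3
diff = (ort_output.logits - output.logits).abs()
mismatch = diff > atol + rtol * output.logits.abs()

# If the disagreements are all near-zero logits, this max should be tiny.
print(output.logits[mismatch].abs().max())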
In general ONNX Runtime can't output 100% identical logits, but they're close enough (for example, the probabilities of the top 99%+ of tokens in a text generation model are the same):
torch.testing.assert_close(ort_output.logits.softmax(-1), output.logits.softmax(-1))
Mismatched elements: 99 / 25600000 (0.0%)
Greatest absolute difference: 8.20457935333252e-05 at index (0, 12, 1) (up to 1e-05 allowed)
Greatest relative difference: 0.0003821254940703511 at index (0, 8, 1) (up to 1.3e-06 allowed)
@IlyasMoutawwakil thanks for the information! I will try this out and report back.
It’s worth noting that the ONNX model I’m having trouble with was converted with optimum-cli - does the CLI also call eval() before/during conversion?
I’m trying to use the model in C# with ONNX Runtime, which has no equivalent of eval() as far as I’m aware, so I would expect that behavior to be baked into the ONNX file itself.
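(For what it's worth: assuming the CLI ultimately goes through torch.onnx.export, the exporter traces the model in eval mode by default, so eval() semantics should indeed be baked into the file. A minimal sketch of that default, using the model and input_ids from the script above:)
import torch

# Sketch, assuming the standard TorchScript-based exporter: `training`
# defaults to eval-mode tracing, so dropout and similar train-time
# behaviors are frozen out of the exported graph.
torch.onnx.export(
    model,                                   # the PyTorch module
    (input_ids,),                            # example inputs for tracing
    "model.onnx",
    training=torch.onnx.TrainingMode.EVAL,   # the default
)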
The way this problem actually manifests is that the logits in Python are “small”, while the logits from my converted model are “large”, which leads to floating-point overflow in a more strongly typed language like C# when I run things like Softmax on the output. Moving everything to double is an obvious workaround, but not preferred given the overhead of touching every logit.
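(A numerically stable Softmax avoids the overflow without moving to double: subtract the row maximum before exponentiating, so every exponent is at most 0. A Python sketch of what the C# side could mirror:)
import torch

def stable_softmax(logits: torch.Tensor, dim: int = -1) -> torch.Tensor:
    # Shifting by the max leaves the result mathematically unchanged,
    # but keeps exp() from overflowing in float32 for large logits.
    shifted = logits - logits.max(dim=dim, keepdim=True).values
    exp = shifted.exp()
    return exp / exp.sum(dim=dim, keepdim=True)

probs = stable_softmax(ort_output.logits)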