Comments (5)
https://github.com/qwopqwop200/AutoAWQ-exllama
I succeeded in running exllama in AutoAWQ. Additionally, some minor changes to the exllama kernel were required.
Performance on opt-125m:
awq kernel
| Task | Version | Metric | Value | Stderr |
|---|---|---|---|---|
| wikitext | 1 | word_perplexity | 33.9570 | |
| | | byte_perplexity | 1.9333 | |
| | | bits_per_byte | 0.9510 | |
[======] Model summary: opt-125m-awq [======]
Load time: 2.66 seconds
Context speed: 10473.90 tokens/second (0.10 ms/token)
Generation speed: 118.32 tokens/second (8.45 ms/token)
VRAM: 255.58 MB
exllama kernel
| Task | Version | Metric | Value | Stderr |
|---|---|---|---|---|
| wikitext | 1 | word_perplexity | 33.9579 | |
| | | byte_perplexity | 1.9333 | |
| | | bits_per_byte | 0.9510 | |
[======] Model summary: opt-125m-awq [======]
Load time: 2.70 seconds
Context speed: 8750.52 tokens/second (0.11 ms/token)
Generation speed: 131.00 tokens/second (7.63 ms/token)
VRAM: 255.58 MB
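As a sanity check on the two summaries above, the ms/token figure is just the reciprocal of the tokens/second throughput:

```python
# The per-token latency reported in the model summaries is 1000 / throughput.
def ms_per_token(tokens_per_second: float) -> float:
    """Convert a tokens/second throughput into milliseconds per token."""
    return 1000.0 / tokens_per_second

# Generation numbers from the two runs above:
assert round(ms_per_token(118.32), 2) == 8.45  # awq kernel
assert round(ms_per_token(131.00), 2) == 7.63  # exllama kernel
```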
It was tested with the following setup:
WSL (Windows 11)
CUDA 11.3
PyTorch 2.0.1 + CUDA 11.7
RTX 3090 + R7 5800X
from autoawq.
This is good work @qwopqwop200. I was working on the same thing on the exllama branch. It seems there could be a modest boost in speed of around 10% from your initial testing.
Do you want to open a PR or can I copy your work into the exllama branch?
Copy it to the exllama branch. I'm not sure yet, but it seems that exllama and the AWQ kernel have different weight storage methods. This may be why exllama is not working.
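A minimal sketch of what a storage-format mismatch looks like, assuming AWQ packs eight 4-bit weights per int32 in the interleaved nibble order `[0, 2, 4, 6, 1, 3, 5, 7]` while exllama expects sequential GPTQ-style packing (both the order and the need to repack are assumptions here, not confirmed from the kernels):

```python
# Hypothetical illustration: the same eight 4-bit weights produce different
# int32 words depending on the nibble order used when packing.
AWQ_ORDER = [0, 2, 4, 6, 1, 3, 5, 7]  # assumed AWQ interleave pattern
SEQ_ORDER = list(range(8))            # sequential (GPTQ/exllama-style) order

def pack(nibbles, order):
    """Pack eight 4-bit values into one 32-bit word using the given lane order."""
    word = 0
    for dst, src in enumerate(order):
        word |= (nibbles[src] & 0xF) << (4 * dst)
    return word

def unpack(word, order):
    """Invert pack(): recover the eight 4-bit values from a packed word."""
    out = [0] * 8
    for dst, src in enumerate(order):
        out[src] = (word >> (4 * dst)) & 0xF
    return out

def awq_to_seq(word):
    """Repack one AWQ-ordered word into sequential order."""
    return pack(unpack(word, AWQ_ORDER), SEQ_ORDER)

vals = [1, 2, 3, 4, 5, 6, 7, 8]
assert pack(vals, AWQ_ORDER) != pack(vals, SEQ_ORDER)  # same weights, different bits
assert awq_to_seq(pack(vals, AWQ_ORDER)) == pack(vals, SEQ_ORDER)
```

If the layouts really do differ like this, feeding AWQ-packed qweights straight into an exllama kernel would dequantize garbage, which would explain random generation output.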
I have gone through your implementation now and, unfortunately, it seems to run into the same issues around the shapes of in_features and out_features. I have fixed these for now in the exllama branch, but I still need to make the fused modules work.
If you have time to spare @qwopqwop200 and want to help with the exllama integration, I would appreciate it if you could work from this branch.
https://github.com/casper-hansen/AutoAWQ/tree/exllama
A few issues:
- I tested with a LLaMA 7B model and the generation is just random output; however, there seems to be a ~10% boost in tokens/s.
- The fused modules are not working yet.
- The exllama module only works with linear modules where in_features == out_features.
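Given the last limitation above, a simple workaround while debugging would be to dispatch to exllama only for square linear layers and fall back to the stock AWQ GEMM kernel otherwise. This is a hypothetical sketch, not AutoAWQ's actual API:

```python
# Hypothetical kernel dispatch reflecting the limitation in the thread:
# the exllama path only behaves for "square" linear modules.
def choose_kernel(in_features: int, out_features: int) -> str:
    if in_features == out_features:
        return "exllama"   # square layers, e.g. attention projections
    return "awq_gemm"      # fallback for everything else

assert choose_kernel(4096, 4096) == "exllama"
assert choose_kernel(4096, 11008) == "awq_gemm"  # e.g. a LLaMA 7B MLP projection
```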
Draft PR #30 is now open.