Comments (4)
As of right now, this is a research-focused repository aimed at accurately sparsifying GPT-style models. As @Godofnothing says, sparse models are currently stored as dense tensors in which many weights are exactly zero. This simulates a sparse model and is standard practice in sparsity research. Various other projects focus on actual size reduction and speedups for existing sparse models, e.g. DeepSparse, XNNPACK, or CUTLASS (for 2:4 sparsity).
The memory consumption and runtime of the final model should be exactly the same. Perhaps the memory increases and slowdowns you observe occur during the sparsification process itself, and/or during our layer-by-layer evaluation procedure, which is designed to evaluate large models on a single GPU?
from sparsegpt.
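To make the storage point concrete, here is a minimal sketch in plain Python (no real model involved; the matrix size is made up) showing why simulated sparsity leaves the memory footprint unchanged — setting weights to zero does not shrink a dense container:

```python
import sys

# A toy "weight matrix" stored densely as a flat list of floats.
rows, cols = 64, 64
weights = [0.5] * (rows * cols)
dense_bytes = sys.getsizeof(weights)

# "Prune" 50% of the weights by setting them to exactly zero,
# which is how sparsity is simulated in research code.
for i in range(0, len(weights), 2):
    weights[i] = 0.0

sparsity = weights.count(0.0) / len(weights)
print(f"sparsity: {sparsity:.0%}")            # 50%
print(dense_bytes == sys.getsizeof(weights))  # True: same dense storage
```

The same holds for a dense PyTorch tensor or a Hugging Face checkpoint: zeros are stored like any other value, so file size and runtime memory do not change until the model is re-encoded in an actual sparse format or run on a sparsity-aware engine.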
@chenrui17 The parameters were set to zero, but the model has the same memory footprint, since the weights are stored as dense tensors.
I found the model actually runs slower. Is that expected? If the size doesn't change and inference is slower, what is the pruning for? Did I miss anything? cc @Godofnothing
Is there any how-to for reducing the size of the sparsified model? I tried DeepSparse, but failed miserably. There seems to be no way to convert a DeepSparse-compiled model back to the Hugging Face format.
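For actual size reduction, the zero weights have to be re-encoded in a format that stores only the non-zeros. As a hypothetical illustration (this is not the DeepSparse pipeline), a minimal CSR encoding in plain Python shows where the savings come from:

```python
# Dense 4x4 matrix at 75% sparsity (12 of 16 entries are zero).
dense = [
    [0.0, 2.0, 0.0, 0.0],
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 3.0],
    [0.0, 0.0, 4.0, 0.0],
]

def to_csr(matrix):
    """Encode a dense matrix as CSR: (values, col_indices, row_ptr)."""
    values, col_indices, row_ptr = [], [], [0]
    for row in matrix:
        for j, v in enumerate(row):
            if v != 0.0:
                values.append(v)
                col_indices.append(j)
        row_ptr.append(len(values))
    return values, col_indices, row_ptr

values, col_indices, row_ptr = to_csr(dense)
dense_count = sum(len(row) for row in dense)               # 16 stored numbers
csr_count = len(values) + len(col_indices) + len(row_ptr)  # 4 + 4 + 5 = 13
print(dense_count, csr_count)
```

Note that unstructured sparsity only pays off at high sparsity ratios, because the explicit indices add overhead. Structured 2:4 sparsity uses fixed-size metadata instead of explicit indices, which is one reason hardware libraries like CUTLASS can accelerate it.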
Related Issues (20)
- why there is no inference related code in the project?
- Lack of comments in the code
- Different error between OBS and SparseGPT
- OOM:cannot download opt-30b, opt-66b
- How should I verify the speedup effect of the algorithm?
- Purpose of this update
- finetuning sparsified LLaMa
- Inference Speedup
- Dependencies are wrong
- Would sparsegpt be available for Llama2?
- When would the code for GPT-J-6B be released?
- Adaptation for Pruning Conv2d or Conv3d Layers?
- Can SparseGPT be used on BERT ?
- Using llama.py silently fails and occasionally causes system instability
- transformers version is not correct
- Mistral Support
- how to use for Baichuan?
- 2:4 sparsity with to_sparse_semi_structured method from pytorch results in memory issue
- Why Hessian can get by activation ($H = XX^T$) ?
- Why transpose the input when in case of nn.Linear or nn.Conv1d?