Comments (9)
Yeah, sounds like it'll fit :D
The current codebase doesn't support running the model without quantization, but you could try rewriting the expert wrapper class.
This class moves the expert's parameters into a single storage so that they can later be moved efficiently between GPU and CPU memory. Here's a snippet that does this for the original expert class:
import torch
# nested_flatten / nested_pack are the nesting helpers shipped with the repo;
# import them from wherever they live in your copy of the codebase.
from .utils import nested_flatten, nested_pack

def replace_layer_storage(layer, device):
    state_dict = layer.state_dict()

    # First pass: compute the byte offset of every tensor inside one flat storage.
    storage_size = 0
    offsets = [0]
    for x in nested_flatten(state_dict):
        if not isinstance(x, torch.Tensor):
            continue
        storage_size += x.nbytes
        offsets.append(storage_size)

    storage = torch.UntypedStorage(storage_size, device=device)

    # Second pass: copy every tensor into its slice of the shared storage
    # and replace it with a view into that slice.
    i = 0
    new_flattened_states = list()
    for x in nested_flatten(state_dict):
        if not isinstance(x, torch.Tensor):
            new_flattened_states.append(x)
            continue
        start = offsets[i]
        end = offsets[i + 1]
        a_view = torch.as_tensor(storage[start:end], dtype=x.dtype, device=device).view(x.shape)
        a_view[...] = x
        assert a_view.data_ptr() == storage.data_ptr() + start
        i += 1
        new_flattened_states.append(a_view)

    state_dict = nested_pack(new_flattened_states, state_dict)

    # Re-point the layer's parameters at the views into the shared storage.
    for name, param in layer.named_parameters():
        param.data = state_dict[name]

    return layer, storage
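The point of packing everything into one storage is that the whole expert can then be moved between devices with a single bulk copy instead of one transfer per parameter tensor. A minimal usage sketch (the variable names here are mine, not the repo's API):

layer, cpu_storage = replace_layer_storage(layer, device=torch.device("cpu"))

# Pre-allocate a same-sized buffer on the GPU and move everything at once.
# (non_blocking only helps if the CPU storage is in pinned memory.)
gpu_storage = torch.UntypedStorage(cpu_storage.nbytes(), device=torch.device("cuda"))
gpu_storage.copy_(cpu_storage, non_blocking=True)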
The rest of the codebase is still quite HQQ-specific and offloading the unquantized model will require rewriting some code in the build_model.py file. Most of it boils down to replacing HQQ layers with default pytorch ones, though.
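For illustration, the replacement could look something like this (a rough sketch; make_fp16_expert is a hypothetical helper, not code from the repo, and the field names assume the HF Mixtral config):

import torch
from torch import nn

# Hypothetical helper: build one expert's projections as plain fp16
# nn.Linear modules instead of HQQ layers, then load the original
# (unquantized) weights for that expert.
def make_fp16_expert(config, expert_state_dict):
    expert = nn.ModuleDict({
        "w1": nn.Linear(config.hidden_size, config.intermediate_size, bias=False, dtype=torch.float16),
        "w2": nn.Linear(config.intermediate_size, config.hidden_size, bias=False, dtype=torch.float16),
        "w3": nn.Linear(config.hidden_size, config.intermediate_size, bias=False, dtype=torch.float16),
    })
    expert.load_state_dict(expert_state_dict)  # expects keys like "w1.weight"
    return expert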
If you decide to go down that path, I can help you out a bit in this issue :)
@freQuensy23-coder, yes, you are right - @lavawolfiee must have misunderstood you.
I've tried to rewrite your code to add fp16 support using your tips, but I faced some difficulties: I don't understand where exactly in replace_layer_storage quantization is used. I think it should work with 16-bit layers too? Can you help me with it?
What hardware do you plan to run the model on? It would take quite a lot of combined RAM + VRAM to run the model without quantization.
I'll use a Tesla A100 with 80 GB of VRAM + 512 GB of RAM.
Seems like you'll be a little bit short on VRAM. The full fp16 model requires ~87 GB. The table is taken from our tech report.
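(Back-of-envelope, my numbers rather than the report's: Mixtral-8x7B has roughly 46.7B parameters, and 46.7e9 × 2 bytes ≈ 93.4 GB ≈ 87 GiB.)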
> Seems like you'll be a little bit short on VRAM. The full fp16 model requires ~87 GB. The table is taken from our tech report.
I'll offload some of the experts to RAM during inference, so it will use less GPU VRAM. That's the main idea of this lib. @dvmazur, am I right?
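Roughly, I imagine it working like this (a toy sketch of the idea, not the library's actual implementation):

from collections import OrderedDict

# Toy LRU cache of experts (illustrative): keep at most `capacity` experts
# on the GPU and swap the least-recently-used one back to CPU whenever the
# router picks an expert that isn't resident yet.
class SimpleExpertLRU:
    def __init__(self, capacity, load_fn, unload_fn):
        self.capacity = capacity
        self.load_fn = load_fn       # copies an expert's storage CPU -> GPU
        self.unload_fn = unload_fn   # copies it back GPU -> CPU
        self.on_gpu = OrderedDict()  # expert_id -> expert module

    def get(self, expert_id):
        if expert_id in self.on_gpu:
            self.on_gpu.move_to_end(expert_id)  # mark as recently used
        else:
            if len(self.on_gpu) >= self.capacity:
                evicted_id, evicted = self.on_gpu.popitem(last=False)
                self.unload_fn(evicted_id, evicted)
            self.on_gpu[expert_id] = self.load_fn(expert_id)
        return self.on_gpu[expert_id]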
> If you decide to go down that path, I can help you out a bit in this issue :)
Thanks, I'd appreciate your help with this. Also, I'll try to do it myself this evening.
> I've tried to rewrite your code to add fp16 support using your tips, but I faced some difficulties: I don't understand where exactly in replace_layer_storage quantization is used. I think it should work with 16-bit layers too? Can you help me with it?
The snippet I sent you doesn't use quantization. It simply puts a given layer into one single storage.
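If you want to convince yourself it's dtype-agnostic, you can run it on a toy fp16 layer (a plain nn.Linear here, not the repo's expert class):

layer = torch.nn.Linear(16, 16, bias=False, dtype=torch.float16)
layer, storage = replace_layer_storage(layer, device=torch.device("cpu"))

# Every parameter now lives inside `storage` and keeps its fp16 dtype.
assert storage.nbytes() == sum(p.nbytes for p in layer.parameters())
assert all(p.dtype == torch.float16 for p in layer.parameters())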