
llama-int8's People

Contributors

glample, guangyusong, pamparamm, timlacroix, tloen

llama-int8's Issues

LLaMA 13B works on a single RTX 4080 16GB

meta-llama#79 (comment)

System:

  • RTX 4080 16GB
  • Intel i7 13700
  • 32GB RAM
  • Ubuntu 22.04.2 LTS

LLaMA 13B

  • It uses more than 32 GB of host memory when loading and quantizing, so be sure you have enough RAM or swap
  • VRAM usage: about 15GB
  • loading time: 5 min (using swap)
  • inference time: 30s


LLaMA 7B

  • VRAM usage: about 8.6 GB
  • loading time: 34s
  • inference time: 20s
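
For reference, a single-GPU run like this is invoked roughly as follows (the argument names come from the load() call visible in the tracebacks elsewhere on this page; the paths are placeholders and defaults may differ between revisions, so treat this as a sketch):

python example.py --ckpt_dir ../../LLaMA/13B --tokenizer_path ../../LLaMA/tokenizer.model --max_batch_size 1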

65B on multiple GPUs: CUDA out of memory with 4 x RTX A5000 (24 GB each) / 96 GB in total

For the moment, I can't run the 65B model with 4 GPUs and a total of 96GB.

While investigating, the warning "bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable" is a first lead ...
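
A quick way to confirm whether the installed bitsandbytes build actually sees the GPUs (a generic check, not specific to this repo; CPU-only builds emit the no-GPU-support warning at import time):

import torch
import bitsandbytes as bnb  # CPU-only builds print the no-GPU-support warning here

print(torch.version.cuda)          # CUDA version PyTorch was built against
print(torch.cuda.is_available())   # should be True
print(torch.cuda.device_count())   # should be 4 on this machine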

[1] % torchrun --nproc_per_node 4 example.py --ckpt_dir ../../LLaMA/30B --tokenizer_path ../../LLaMA/tokenizer.model
WARNING:torch.distributed.run:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
/home/scampion/Code/llama/venv/lib/python3.10/site-packages/bitsandbytes/cextension.py:31: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. "
[the warning above is printed once by each of the 4 ranks]
Allocating transformer on host
Allocating transformer on host
Allocating transformer on host
Allocating transformer on host
Traceback (most recent call last):
  File "/home/scampion/Code/llama-int8/example.py", line 129, in <module>
    fire.Fire(main)
  File "/home/scampion/Code/llama/venv/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/home/scampion/Code/llama/venv/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/home/scampion/Code/llama/venv/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/home/scampion/Code/llama-int8/example.py", line 101, in main
    generator = load(ckpt_dir, tokenizer_path, max_seq_len, max_batch_size, use_int8)
  File "/home/scampion/Code/llama-int8/example.py", line 38, in load
    model = Transformer(model_args)
  File "/home/scampion/Code/llama-int8/llama/model.py", line 255, in __init__
    self.layers.append(TransformerBlock(layer_id, params))
  File "/home/scampion/Code/llama-int8/llama/model.py", line 206, in __init__
    self.attention = Attention(args)
  File "/home/scampion/Code/llama-int8/llama/model.py", line 132, in __init__
    ).cuda()
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 208.00 MiB (GPU 0; 23.68 GiB total capacity; 5.08 GiB already allocated; 6.94 MiB free; 5.08 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
[the same traceback and torch.cuda.OutOfMemoryError are then raised by each of the remaining three ranks, all reporting GPU 0 with roughly 5.3 GiB already allocated and only 6.94 MiB free]
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 887816) of binary: /home/scampion/Code/llama/venv/bin/python
Traceback (most recent call last):
  File "/home/scampion/Code/llama/venv/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/home/scampion/Code/llama/venv/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
  File "/home/scampion/Code/llama/venv/lib/python3.10/site-packages/torch/distributed/run.py", line 762, in main
    run(args)
  File "/home/scampion/Code/llama/venv/lib/python3.10/site-packages/torch/distributed/run.py", line 753, in run
    elastic_launch(
  File "/home/scampion/Code/llama/venv/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/scampion/Code/llama/venv/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 246, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
example.py FAILED
------------------------------------------------------------
Failures:
[1]:
  time      : 2023-03-14_09:55:43
  host      : vector
  rank      : 1 (local_rank: 1)
  exitcode  : 1 (pid: 887817)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[2]:
  time      : 2023-03-14_09:55:43
  host      : vector
  rank      : 2 (local_rank: 2)
  exitcode  : 1 (pid: 887818)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[3]:
  time      : 2023-03-14_09:55:43
  host      : vector
  rank      : 3 (local_rank: 3)
  exitcode  : 1 (pid: 887819)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2023-03-14_09:55:43
  host      : vector
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 887816)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
(venv)
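
One thing that stands out is that every rank reports the OOM on GPU 0, which suggests all four processes are allocating on the same device. A generic pattern with torchrun (sketched here as a guess, not necessarily what this repo intends) is to pin each worker to its own GPU before building the model:

import os
import torch

local_rank = int(os.environ.get("LOCAL_RANK", 0))  # set by torchrun for each worker
torch.cuda.set_device(local_rank)                  # subsequent .cuda() calls then target this GPU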

Producing nan Tensors

generate.py sometimes produces tensors with nan and sometimes does not, and I cannot see what determines when this happens. I am using the given example.
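
A minimal way to catch the bad step as soon as it happens (my own sketch, not part of this repo; the forward call it refers to is the one quoted in the tracebacks further down this page):

import torch

def check_finite(logits: torch.Tensor, step: int) -> None:
    """Raise as soon as the model produces nan/inf logits (sketch)."""
    if torch.isnan(logits).any() or torch.isinf(logits).any():
        raise RuntimeError(f"non-finite logits at generation step {step}")

# inside generation.py, right after: logits = self.model.forward(tokens[:, prev_pos:cur_pos], prev_pos)
# check_finite(logits, cur_pos)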

CUDA out of memory

Hi, I'm trying to add int8 inference for LLaMA to my own code, but I don't want to edit my original model structure, so I tried something similar to your quantize:

def quantize(self):

First of all, it works: loading the 7B model only uses 6-7 GB of GPU memory. But during the forward pass, GPU memory increases rapidly and then CUDA runs out of memory.
Have you ever run into this situation?
GPU: Tesla T4, 15 GB

error trace:
Load model with 6.87GB.
Traceback (most recent call last):
File "scripts/generate_lm_int8.py", line 112, in
output = model(src_tensor, seg_tensor)
File "/home/ubuntu/miniconda3/envs/fyh-3.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ubuntu/miniconda3/envs/fyh-3.8/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "scripts/generate_lm_int8.py", line 39, in forward
output = self.encoder(emb, seg)
File "/home/ubuntu/miniconda3/envs/fyh-3.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ubuntu/workspace_fyh/TencentPretrainQuan/tencentpretrain/encoders/transformer_encoder.py", line 142, in forward
hidden, prev_attn = self.transformer[i](hidden, mask, position_bias=position_bias,
File "/home/ubuntu/miniconda3/envs/fyh-3.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ubuntu/workspace_fyh/TencentPretrainQuan/tencentpretrain/layers/transformer.py", line 80, in forward
output = self.dropout_2(self.feed_forward(output)) + hidden
File "/home/ubuntu/miniconda3/envs/fyh-3.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ubuntu/workspace_fyh/TencentPretrainQuan/tencentpretrain/layers/position_ffn.py", line 30, in forward
gate = self.act(self.linear_gate(x))
File "/home/ubuntu/miniconda3/envs/fyh-3.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ubuntu/miniconda3/envs/fyh-3.8/lib/python3.8/site-packages/bitsandbytes/nn/modules.py", line 242, in forward
out = bnb.matmul(x, self.weight, bias=self.bias, state=self.state)
File "/home/ubuntu/miniconda3/envs/fyh-3.8/lib/python3.8/site-packages/bitsandbytes/autograd/_functions.py", line 488, in matmul
return MatMul8bitLt.apply(A, B, out, bias, state)
File "/home/ubuntu/miniconda3/envs/fyh-3.8/lib/python3.8/site-packages/bitsandbytes/autograd/_functions.py", line 338, in forward
) = F.double_quant(B.to(torch.float16))
File "/home/ubuntu/miniconda3/envs/fyh-3.8/lib/python3.8/site-packages/bitsandbytes/nn/modules.py", line 199, in to
super().to(
RuntimeError: CUDA out of memory. Tried to allocate 86.00 MiB (GPU 0; 14.62 GiB total capacity; 12.67 GiB already allocated; 11.38 MiB free; 13.34 GiB reserved in total by PyTorch)
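
For comparison, the usual bitsandbytes conversion pattern looks roughly like the sketch below (my own summary, not this repo's code). The detail that matters for the memory growth is has_fp16_weights=False, so weights are quantized once when the layer is moved to the GPU rather than being re-quantized inside forward; note the F.double_quant call inside forward in the trace above.

import torch
import bitsandbytes as bnb

def quantize_linear(linear: torch.nn.Linear, threshold: float = 6.0) -> bnb.nn.Linear8bitLt:
    """Replace an fp16/fp32 nn.Linear with an int8 layer (sketch only)."""
    int8_linear = bnb.nn.Linear8bitLt(
        linear.in_features,
        linear.out_features,
        bias=linear.bias is not None,
        has_fp16_weights=False,   # keep int8 weights; avoids re-quantizing on every forward
        threshold=threshold,      # outlier threshold used by LLM.int8()
    )
    int8_linear.weight = bnb.nn.Int8Params(
        linear.weight.data, requires_grad=False, has_fp16_weights=False
    )
    if linear.bias is not None:
        int8_linear.bias = linear.bias
    return int8_linear.cuda()     # quantization happens on this .cuda() call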

RTX4090 CUDA out of memory.

I am using the latest NVIDIA PyTorch Docker image, with support for CUDA 12.
I compiled the CUDA 11.8 version of the bitsandbytes lib, since the code requires bitxxx_cuda118.so.
Tested on the 7B version: OK.
13B: CUDA out of memory, about 1-2 GB short.

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 68.00 MiB (GPU 0; 23.65 GiB total capacity; 22.68 GiB already allocated; 41.31 MiB free; 23.14 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

No host OOM error; 64 GB of RAM installed.

I doubt whether the RTX 4090 can actually run the 13B model.
Please share more detailed information about your device.
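
Two things that might be worth trying before concluding the card is too small (both are only sketches: the first is the allocator hint from the error text itself, the second uses the example.py arguments visible in the tracebacks on this page; paths are placeholders):

export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
python example.py --ckpt_dir ../../LLaMA/13B --tokenizer_path ../../LLaMA/tokenizer.model --max_seq_len 512 --max_batch_size 1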

Getting error on generation in Windows

I installed bitsandbytes following the guide for Windows,
including the DLL from here.

Everything works fine; it loads 7B into about 8 GB of VRAM. Great.

But during generation I get:

  File "example.py", line 103, in main
    results = generator.generate(
  File "C:\Users\Shadow\Documents\LLama\llama-int8-main\llama\generation.py", line 60, in generate
    next_token = torch.multinomial(
RuntimeError: probability tensor contains either `inf`, `nan` or element < 0

Any ideas what went wrong?
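
If you just need generation to keep going while debugging the root cause, one stopgap (my own sketch, not part of this repo; probs stands for the tensor passed to the torch.multinomial call shown above) is to fall back to greedy decoding whenever the probabilities are non-finite:

import torch

def safe_next_token(probs: torch.Tensor) -> torch.Tensor:
    """Sample as usual, but fall back to greedy argmax if probs contain nan/inf (sketch)."""
    if torch.isfinite(probs).all():
        return torch.multinomial(probs, num_samples=1)
    cleaned = torch.nan_to_num(probs, nan=0.0, posinf=0.0, neginf=0.0)
    return torch.argmax(cleaned, dim=-1, keepdim=True)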

Issue for bitsandbytes /// NameError: name 'cuda_setup' is not defined. Did you mean: 'CUDASetup'?

Hi, thanks for sharing the wonderful code.
But I got the following error, so could you clarify how to solve it?
I think it would be better if you could specify how to install bitsandbytes with a pinned version (e.g., https://pypi.org/project/bitsandbytes-cuda113/) in requirements.txt

Thank you!!

===========================================================

$MYPATH/python3.10/site-packages/bitsandbytes/cuda_setup/main.py", line 153, in is_cublasLt_compatible
cuda_setup.add_log_entry("WARNING: Compute capability < 7.5 detected! Only slow 8-bit matmul is supported for your GPU!", is_warning=True)

NameError: name 'cuda_setup' is not defined. Did you mean: 'CUDASetup'?
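
Agreed that pinning the dependency would help. A minimal sketch of what requirements.txt could contain, using the CUDA-specific package from the link above (the cuda113 suffix is only an example; pick the build that matches the local CUDA toolkit):

# requirements.txt (sketch)
bitsandbytes-cuda113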

13B - load is successful on T4, but forward pass fails

any clues?

I had 30 GB of RAM, and I created a ~26 GB swap file (2 MB x 13000) with the following command:
sudo dd if=/dev/zero of=/swapfile bs=2M count=13000 status=progress
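
For anyone copying this: dd only creates the file; it still has to be formatted and enabled as swap with the usual steps (standard Linux commands, nothing specific to this repo):

sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile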

Allocating transformer on host
Loading checkpoint 0
Loading checkpoint 1

Loaded in 2590.17 seconds with 13.19 GiB
cuBLAS API failed with status 15
A: torch.Size([72, 5120]), B: torch.Size([5120, 5120]), C: (72, 5120); (lda, ldb, ldc): (c_int(2304), c_int(163840), c_int(2304)); (m, n, k): (c_int(72), c_int(5120), c_int(5120))
error detected
Traceback (most recent call last):
  File "/home/jupyter/llama-int8/example.py", line 117, in <module>
    fire.Fire(main)
  File "/opt/conda/envs/pt/lib/python3.9/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/opt/conda/envs/pt/lib/python3.9/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/opt/conda/envs/pt/lib/python3.9/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/home/jupyter/llama-int8/example.py", line 107, in main
    results = generator.generate(
  File "/home/jupyter/llama-int8/llama/generation.py", line 42, in generate
    logits = self.model.forward(tokens[:, prev_pos:cur_pos], prev_pos)
  File "/opt/conda/envs/pt/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/jupyter/llama-int8/llama/model.py", line 281, in forward
    h = layer(h, start_pos, freqs_cis, mask)
  File "/opt/conda/envs/pt/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jupyter/llama-int8/llama/model.py", line 221, in forward
    h = x + self.attention.forward(
  File "/home/jupyter/llama-int8/llama/model.py", line 142, in forward
    xq, xk, xv = self.wq(x), self.wk(x), self.wv(x)
  File "/opt/conda/envs/pt/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/conda/envs/pt/lib/python3.9/site-packages/bitsandbytes/nn/modules.py", line 242, in forward
    out = bnb.matmul(x, self.weight, bias=self.bias, state=self.state)
  File "/opt/conda/envs/pt/lib/python3.9/site-packages/bitsandbytes/autograd/_functions.py", line 488, in matmul
    return MatMul8bitLt.apply(A, B, out, bias, state)
  File "/opt/conda/envs/pt/lib/python3.9/site-packages/bitsandbytes/autograd/_functions.py", line 377, in forward
    out32, Sout32 = F.igemmlt(C32A, state.CxB, SA, state.SB)
  File "/opt/conda/envs/pt/lib/python3.9/site-packages/bitsandbytes/functional.py", line 1410, in igemmlt
    raise Exception('cublasLt ran into an error!')
Exception: cublasLt ran into an error!
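
No fix to offer, but a quick way to collect the details that usually matter for cublasLt failures (a generic diagnostic sketch; the int8 matmul path in bitsandbytes depends on both the GPU's compute capability and the library build):

import torch
from importlib.metadata import version

print(torch.__version__, torch.version.cuda)     # PyTorch and the CUDA build it was compiled against
print(torch.cuda.get_device_name(0))             # e.g. Tesla T4
print(torch.cuda.get_device_capability(0))       # compute capability used by the int8 matmul path
print(version("bitsandbytes"))                   # installed bitsandbytes version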

Further detail needed - installing bitsandbytes from source

I'm not usually familiar with installing Python modules outside of pip install -r requirements.txt. Just wondering how I would go about installing this dependency within a venv rather than conda.

Building the tool shouldn't be an issue; I'm just wondering how to go about integration - where does it belong?

Cheers!
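
With the venv activated, a source build installs into that venv's site-packages like any other package, so nothing needs to be copied around by hand. Roughly (following the library's own compile-from-source notes at the time; adjust CUDA_VERSION and the make target to your toolkit, so treat this as a sketch):

git clone https://github.com/TimDettmers/bitsandbytes.git
cd bitsandbytes
CUDA_VERSION=118 make cuda11x
python setup.py install   # or: pip install .

Since python here is the venv's interpreter, the module ends up inside the venv, which answers the "where does it belong" part.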
