Comments (2)
Maybe it's the fact that precision is set to bf16-mixed?
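For context, the precision setting discussed here is the one passed when the Fabric object is constructed. A minimal sketch of the two modes using the public Lightning Fabric API (the exact wiring inside finetune/full.py may differ):

```python
import lightning as L

# "bf16-mixed" keeps the weights in float32 and autocasts selected ops to
# bfloat16; "bf16-true" casts the module weights themselves to bfloat16.
fabric = L.Fabric(accelerator="cuda", devices=4, precision="bf16-mixed")
fabric.launch()
```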
Ran it again with bf16-true and got this error instead:
[2023-11-03 19:13:17.491399] iter 7998: loss 0.8175, time: 1408.26ms
Validating ...
Traceback (most recent call last):
  File "/root/lit-llama/finetune/full.py", line 225, in <module>
    CLI(main)
  File "/root/.venv/lib/python3.10/site-packages/jsonargparse/_cli.py", line 96, in CLI
    return _run_component(components, cfg_init)
  File "/root/.venv/lib/python3.10/site-packages/jsonargparse/_cli.py", line 181, in _run_component
    return component(**cfg)
  File "/root/lit-llama/finetune/full.py", line 86, in main
    train(fabric, model, optimizer, train_data, val_data, out_dir)
  File "/root/lit-llama/finetune/full.py", line 131, in train
    val_loss = validate(fabric, model, val_data)
  File "/root/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/root/lit-llama/finetune/full.py", line 177, in validate
    output = generate_response(model, instruction)
  File "/root/lit-llama/finetune/full.py", line 152, in generate_response
    output = generate(
  File "/root/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/root/lit-llama/generate.py", line 83, in generate
    idx = idx.index_copy(0, input_pos, idx_next)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument source in method wrapper_CUDA_index_copy)

The same RuntimeError is raised at the same line on all four ranks, each naming its own device (cpu vs cuda:0, cuda:1, cuda:2, and cuda:3).
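The failing call is idx.index_copy(0, input_pos, idx_next) in generate.py: on CUDA, index_copy requires the index argument to live on the same device as the destination tensor, and here input_pos is still a CPU tensor while idx is on the GPU. A standalone sketch that reproduces the error and shows the obvious workaround (the tensor names mirror generate.py, but the setup is illustrative, not the upstream fix):

```python
import torch

device = torch.device("cuda:0")  # assumes a CUDA device is available

idx = torch.zeros(8, dtype=torch.long, device=device)           # token buffer on GPU
idx_next = torch.tensor([42], dtype=torch.long, device=device)  # next sampled token
input_pos = torch.tensor([3])                                   # index left on the CPU

try:
    idx = idx.index_copy(0, input_pos, idx_next)  # raises the RuntimeError above
except RuntimeError as err:
    print(err)

# Workaround: move the index tensor to the destination's device first.
input_pos = input_pos.to(idx.device)
idx = idx.index_copy(0, input_pos, idx_next)      # succeeds
```

In the finetune script, the equivalent would be to create input_pos on fabric.device (or move it with .to(...)) before the decode loop.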
Related Issues (20)
- How to convert hf weight of 70b to lit-lamma weights?
- How to quantize LLama in fine-tuning?
- RuntimeError: cutlassF: no kernel found to launch!
- RuntimeError: Expected x1.dtype() == cos.dtype() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
- Can I use Lightning fabirc to pre train llama2 on v100?
- Ban some tokens
- Error: git submodule update --init --recursive -q did not run successfully
- Beam search generation
- Issue with Rotary Embedding Initialization when the number of devices is > 1
- TPU Training
- OSError: Not found: "checkpoints/lit-llama/tokenizer.model": No such file or directory Error #2
- `PackedDatasetBuilder` does not separate with `sep_token`
- it seems that hash of traindata is lost, so it's impossible to continue finetune after stop
- Converting from lit-llama to HF checkpoint?
- why cannot the generate function be used twice
- How to convert lit-llama pretrained model to HF format?
- Using llama3 through lit lama
- Where is tokenizer.model? tokenizer path
- Questions about the dataset and training method