Comments (29)
Hi,
Yes, I believe based on the README you need torch 1.12 to run it. In fact, some of these legacy APIs are under migration and aren't guaranteed to be runnable, but I'll try some fixes tomorrow.
I know. But I want to deploy Colossal-AI on an NVIDIA H800 GPU, which only supports CUDA 12, and with CUDA 12 I can only install PyTorch 2.0+, not 1.12. Could you give me some further suggestions?
Sorry, I think the current auto parallel is less performant and less popular, so we didn't adapt it to the newest version. Do you have a compelling reason to use it?
Otherwise, it's advised to use the HybridParallelPlugin or Gemini (ZeRO 3 with chunk-based memory management), as sketched below.
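For context, a minimal sketch of the Gemini route through the booster API (the model, optimizer, and launch details below are placeholders, not any example's actual code):

```python
# Hedged sketch: boost a model with the Gemini plugin (ZeRO 3 with
# chunk-based memory management). Model and optimizer are placeholders.
import torch
import colossalai
from colossalai.booster import Booster
from colossalai.booster.plugin import GeminiPlugin  # or HybridParallelPlugin
from colossalai.nn.optimizer import HybridAdam

colossalai.launch_from_torch()  # assumes launch via `colossalai run`/torchrun; older versions take config={}

model = torch.nn.Linear(1024, 1024).cuda()           # placeholder model
optimizer = HybridAdam(model.parameters(), lr=1e-3)  # CPU/GPU hybrid Adam
criterion = torch.nn.MSELoss()

booster = Booster(plugin=GeminiPlugin())
model, optimizer, criterion, _, _ = booster.boost(model, optimizer, criterion)
```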
I don't necessarily have to use the auto parallel strategy. What I mean is that the official demos provided now are all based on the torch 1.12 API, but on the H800 only torch 2.0+ can be used, which means I can't deploy training plans on the H800.
Other demos should work on torch 2.0.
Could you give me some examples? I have tried many training demo codes, but they all failed on torch 2.0 and succeeded on torch 1.12.
Could you try examples/language/gpt/gemini and examples/language/gpt/hybridparallelism?
It seems that the transformers API is not compatible with the current Colossal-AI.
I have fixed this, so pulling from the newest main branch should work.
Could you either install apex from source or set enable_all_optimization=False? Thanks.
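For anyone hitting the same error, a hedged sketch of where that flag lives (the tp_size/pp_size values are placeholders, not the finetune example's actual settings):

```python
# Hedged sketch: disable fused-kernel optimizations so the plugin does not
# depend on apex or flash-attn. Parallel degrees below are placeholders.
from colossalai.booster.plugin import HybridParallelPlugin

plugin = HybridParallelPlugin(
    tp_size=1,                      # tensor-parallel degree (placeholder)
    pp_size=2,                      # pipeline-parallel degree (placeholder)
    enable_all_optimization=False,  # skip fused kernels (apex, flash-attn)
)
```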
I have re-compiled and re-installed apex from source and re-run the program, and got the following:
/usr/local/lib/python3.10/dist-packages/colossalai/nn/optimizer/hybrid_adam.py:90: UserWarning: The torch.cuda.*DtypeTensor constructors are no longer recommended. It's best to use methods such as torch.tensor(data, dtype=*, device='cuda') to create tensors. (Triggered internally at ../torch/csrc/tensor/python_tensor.cpp:83.)
self._dummy_overflow_buf = torch.cuda.IntTensor([0])
Some weights of GPT2ForSequenceClassification were not initialized from the model checkpoint at gpt2 and are newly initialized: ['score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Epoch [1/3]: 0%| | 0/57 [00:00<?, ?it/s]Some weights of GPT2ForSequenceClassification were not initialized from the model checkpoint at gpt2 and are newly initialized: ['score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Epoch [1/3]: 0%| | 0/57 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/root/ColossalAI/examples/language/gpt/hybridparallelism/finetune.py", line 313, in <module>
main()
File "/root/ColossalAI/examples/language/gpt/hybridparallelism/finetune.py", line 293, in main
train_epoch(epoch, model, optimizer, _criterion, lr_scheduler, train_dataloader, booster, coordinator)
File "/root/ColossalAI/examples/language/gpt/hybridparallelism/finetune.py", line 147, in train_epoch
outputs = booster.execute_pipeline(
File "/usr/local/lib/python3.10/dist-packages/colossalai/booster/booster.py", line 205, in execute_pipeline
return self.plugin.execute_pipeline(data_iter, model, criterion, optimizer, return_loss, return_outputs)
File "/usr/local/lib/python3.10/dist-packages/colossalai/booster/plugin/hybrid_parallel_plugin.py", line 1259, in execute_pipeline
outputs = self.schedule.forward_backward_step(
File "/usr/local/lib/python3.10/dist-packages/colossalai/pipeline/schedule/one_f_one_b.py", line 445, in forward_backward_step
result = self.run_forward_backward(model, data_iter, criterion, optimizer, return_loss, return_outputs)
File "/usr/local/lib/python3.10/dist-packages/colossalai/pipeline/schedule/one_f_one_b.py", line 365, in run_forward_backward
output_obj = self.forward_step(model, input_obj, criterion, accum_loss, outputs)
File "/usr/local/lib/python3.10/dist-packages/colossalai/pipeline/schedule/one_f_one_b.py", line 249, in forward_step
output_obj = model_forward(model, micro_batch, input_obj)
File "/usr/local/lib/python3.10/dist-packages/colossalai/pipeline/schedule/_utils.py", line 120, in model_forward
return model(**data, **internal_inputs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/colossalai/booster/plugin/hybrid_parallel_plugin.py", line 197, in forward
return super().forward(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/colossalai/interface/model.py", line 25, in forward
return self.module(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/colossalai/shardformer/modeling/gpt2.py", line 718, in gpt2_for_sequence_classification_forward
outputs = GPT2PipelineForwards.gpt2_model_forward(
File "/usr/local/lib/python3.10/dist-packages/colossalai/shardformer/modeling/gpt2.py", line 260, in gpt2_model_forward
outputs = block(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/transformers/models/gpt2/modeling_gpt2.py", line 390, in forward
attn_outputs = self.attn(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/colossalai/shardformer/modeling/gpt2.py", line 840, in forward
attn_output = ColoAttention.attention(query, key, value, **attention_mask, dropout_p=dropout_p, scale=scale)
File "/usr/local/lib/python3.10/dist-packages/colossalai/shardformer/layer/attn.py", line 250, in attention
attn_func = ColoAttention._dispatch_kernel(q.dtype, mask_type)
File "/usr/local/lib/python3.10/dist-packages/colossalai/shardformer/layer/attn.py", line 98, in _dispatch_kernel
].load()
File "/usr/local/lib/python3.10/dist-packages/colossalai/kernel/kernel_loader.py", line 73, in load
assert len(usable_exts) != 0, f"No usable kernel found for {self.__class__.__name__} on the current machine."
AssertionError: No usable kernel found for FlashAttentionWithPaddingMaskLoader on the current machine.
Some weights of GPT2ForSequenceClassification were not initialized from the model checkpoint at gpt2 and are newly initialized: ['score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
You'll need to either set enable_all_optimization=False or pip install flash-attn.
pip install flash-attn
Should I set enable_all_optimization for Colossal-AI or for apex?
Fixed, thanks.
I have solved all the issues related to the system environment, but when I re-ran the program in ~/ColossalAI/examples/language/gpt/hybridparallelism I got a ProcessError:
When I run the examples in ColossalAI/examples/language/gpt/hybridparallelism/ using the command colossalai run --nproc_per_node=2 finetune.py, I always get the following error:
On the other hand, could you show how I can run this example on multiple nodes (machines)? Thanks! @Edenzzzz
Thanks for your issue. This is probably due to a recent transformers upgrade, so I've fixed it.
For multi-node, please refer to the commands in examples/language/llama/README.md.
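For concreteness, a minimal multi-node launch sketch (hostnames are placeholders; the flags are the same ones used elsewhere in this thread):

```bash
# hostfile: one reachable hostname/IP per line, e.g.
#   node1
#   node2
colossalai run --nproc_per_node 4 --hostfile ./hostfile \
    --master_addr node1 finetune.py
```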
Thanks for your reply. Actually, I have launched two Docker containers on two separate machines. How can I configure the Docker addresses in the hostfile?
Please refer to similar examples on the PyTorch forum. You can either run Docker in host network mode or map a port from the container to the host.
https://discuss.pytorch.org/t/how-to-multi-node-parallel-in-dockers-container/188736
https://discuss.pytorch.org/t/run-multi-node-training-inside-docker/167537
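A hedged illustration of the two options from those threads (the image name and port are placeholders):

```bash
# Option 1: host networking; the container shares the host's network stack
docker run --gpus all --network host my_image

# Option 2: publish the rendezvous port used by torch.distributed
docker run --gpus all -p 29500:29500 my_image  # 29500 is a common default
```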
When I run the examples in ColossalAI/examples/language/gpt/hybridparallelism/ using the command bash run.sh, I always get the following error:
Failed to run torch 2.1 on a Tesla V100 GPU...
@Edenzzzz did you test this demo on a V100 GPU with CUDA 12.1 and torch 2.1?
This is not a bug on our end: flash attention doesn't support the V100, which is why it reports that no usable kernel was found. You should uninstall flash_attn.
When I uninstall flash-attn and re-run this example, I get a similar error.
How can I run this example successfully?
@Edenzzzz
Now I am running this example in a distributed environment (a server with 4 H800 GPUs and a server with 4 V100 GPUs) using the command colossalai run --nproc_per_node 4 --hostfile=hostfile --master_addr=xxxx fine_tune.py, and I met the following error:
torch.distributed.DistBackendError: NCCL error.
torch version: 2.1.0, CUDA 12.1, driver 535.171.04
I am not sure whether this is caused by an internal NCCL error or by other factors. How can I fix it? @Edenzzzz
Related Issues (20)
- Use the gemini plugin and LowLevelZero to run llama2_7b. In the gemini plugin, set the policy to static and set shard_param_frac, offload_optim_frac, and offload_param_frac to 0.0, making gemini equivalent to zero2; in LowLevelZero, set stage to 2. Training with bf16 and comparing the two plugins, we found that the GPU memory usage of gemini is higher than that of LowLevelZero. Why is this? In principle, gemini should save more GPU memory HOT 2
- [FEATURE]: Support Command-R model
- [BUG]: Command-R 8 GPU Pytest failure
- [FEATURE]: Support T5ForTokenClassification
- [FEATURE]: Add Ulysses Sequence Parallelism support for Command-R, Qwen2 and ChatGLM
- [BUG]: loading OPT 66B model - CPU runs out of memory HOT 13
- [BUG]: Colossal AI failed to load ChatGLM2 HOT 2
- [BUG]: ColossalChat train sft is skipped with opt-1.3b model HOT 7
- [FEATURE]: Support SP+PP in Llama etc. HOT 1
- [compatibility] support torch 2.2
- [FEATURE]: [PyTorch] per-channel FP8 quantization
- [DOC]: Can it run on macOS? HOT 2
- [Feature]: [PyTorch] FP8 all-reduce using all-to-all and all-gather
- training issue HOT 1
- Whether to support the training acceleration of the StableDiffusion3 algorithm model? HOT 1
- [BUG]: run opt inference but failed with No module named 'energonai'
- [BUG]: pip install colossalai, pip install . produces an exit code: 1
- [PROPOSAL]: Does the LowLevelZero Plugin Support Lora, This Code Is Confusing HOT 1
- [BUG]: Low_Level_Zero plugin crashes with LoRA HOT 10