Comments (10)
I'm running into the same problem. Have you found a solution?
from colossalai.
hey @ArnaudFickinger @B-Soul, could you please share the settings of your scripts?
My code is part of my own ongoing research, so it is not convenient to share. But when I switched the distributed framework to Hugging Face Accelerate, the gradients were no longer None. So I think there is a bug in the ColossalAI framework.
hi @B-Soul, a snippet of your optimizer/plugin settings would help. Also, the gradient-accessing API might differ due to internal optimizations; if you are using LowLevelZeroOptimizer or GeminiOptimizer, you could check these tests for gradient accessing: gemini and low-level
@botbw thank you, the low-level snippet is working! By the way, which of gemini or low-level should I use for best performance with 1 to 8 A100 GPUs and 500M to 2B trainable parameters?
@ArnaudFickinger Glad to hear that! We may rework the API to make it more intuitive.
Regarding performance: LowLevelZeroOptimizer implements zero-1 and zero-2, while GeminiOptimizer implements zero-3 together with contiguous-memory optimization (i.e. memory locality; you may check this doc for more information) to reduce communication cost.
Generally speaking, you should choose the plugin by the intended zero-n parallel strategy; real-world performance varies case by case and depends on the trade-off between computation and communication.
Do let us know if you have further doubts :p
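For reference, the two choices above can be sketched as a minimal plugin setup. This assumes a recent ColossalAI release; parameter names such as `stage` and `precision` may differ across versions, and the boilerplate (launch, model, optimizer) is elided, so treat it as a configuration sketch rather than a runnable script:

```python
from colossalai.booster import Booster
from colossalai.booster.plugin import GeminiPlugin, LowLevelZeroPlugin

# zero-1/zero-2: shard optimizer states (stage=1) or optimizer
# states + gradients (stage=2) across ranks.
plugin = LowLevelZeroPlugin(stage=2, precision="bf16")

# Alternatively, zero-3 via Gemini's chunk-based memory management
# (parameters are sharded as well):
# plugin = GeminiPlugin(precision="bf16")

booster = Booster(plugin=plugin)
# After colossalai.launch and building model/optimizer/criterion:
# model, optimizer, criterion, _, _ = booster.boost(
#     model, optimizer, criterion)
```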
@botbw when I define 2 param_groups, the id()s of the parameters in the second group do not match any keys of optimizer._grad_store._grads_of_params[1]
@ArnaudFickinger I guess that's unexpected, since each group is handled separately in the same way (like a for loop). Would you mind sharing the version (or commit) you are using, and a minimal repro if possible?
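One thing worth checking: the grad store described above is keyed by object identity, so any step that copies or re-wraps a parameter breaks the lookup. Here is a self-contained sketch of that pitfall in plain Python; the `Param` class and the `grads_of_params` dict are toy stand-ins mimicking the shape of `optimizer._grad_store._grads_of_params`, not the real ColossalAI internals:

```python
class Param:
    """Toy stand-in for a trainable parameter tensor."""
    def __init__(self, data):
        self.data = list(data)

params_g0 = [Param([0.1, 0.2]), Param([0.3, 0.4])]
params_g1 = [Param([0.5, 0.6]), Param([0.7, 0.8])]

# One dict per param group, keyed by id(param), mirroring the layout
# of _grads_of_params[group_id][id(param)] -> gradient.
grads_of_params = {
    0: {id(p): [0.0] * len(p.data) for p in params_g0},
    1: {id(p): [0.0] * len(p.data) for p in params_g1},
}

# Lookup works only with the exact objects the optimizer registered:
# id() is identity, not equality.
assert all(id(p) in grads_of_params[1] for p in params_g1)

# Pitfall: copying/re-wrapping a parameter creates a new object, so
# its id() no longer matches any key in the store.
copy = Param(params_g1[0].data)
assert id(copy) not in grads_of_params[1]
```

If the training code rebuilds the second param group (e.g. filtering or cloning parameters) after the optimizer is constructed, the ids it holds will no longer match the store's keys.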
@botbw I have written a minimal repro with a simple network, and in that case the keys actually match! I will take a closer look at my code and get back to you if I still believe the issue is ColossalAI-related.
@ArnaudFickinger Sure, feel free to ask here or raise a new issue