Comments (6)
Hi, @feifeibear. Thank you for sharing the idea! In our opinion, this is basically a trade-off between memory cost and communication cost.
The current design of 3D Linear layer applies an all-gather on the input matrix A and a reduce-scatter on the output matrix C in the forward pass (all-gather on the gradients of C and reduce-scatter on the gradients of A in the backward pass), so that each activation can be 1/N of the original size.
An alternative design is to use an all-reduce on C in the forward pass as well as on the gradients of A in the backward pass, but the activations are 1/N^(2/3) of the original size.
Considering activation checkpointing, since the forward pass is recomputed, the first design applies 2 * all-gather + reduce-scatter on A and 2 * reduce-scatter + all-gather on C in total, while the second design applies 3 * all-reduce. Since a ring all-reduce has roughly the same cost as an all-gather plus a reduce-scatter, the total communication costs of the two designs should be similar.
However, we are indeed concerned that small tensors decrease bandwidth utilization, and they are hard to fuse. To find the optimal configuration, we are testing as many models and networking environments as possible and will let the results tell.
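The cost comparison above can be checked with a back-of-the-envelope sketch. Assuming the standard ring-algorithm volumes (all-gather and reduce-scatter each move (N-1)/N of the tensor per GPU; a ring all-reduce is a reduce-scatter followed by an all-gather, so 2(N-1)/N), and assuming for simplicity that A and C have the same size, the two designs move exactly the same total volume:

```python
def ring_volume(op, size, n):
    """Per-GPU communication volume (in elements) of ring collectives.

    all-gather / reduce-scatter each move (n-1)/n of the tensor;
    a ring all-reduce is a reduce-scatter followed by an all-gather,
    so it moves 2*(n-1)/n of the tensor.
    """
    if op in ("all_gather", "reduce_scatter"):
        return (n - 1) / n * size
    if op == "all_reduce":
        return 2 * (n - 1) / n * size
    raise ValueError(op)

def design1(size_a, size_c, n):
    # With activation checkpointing the forward pass runs twice:
    # 2 * all-gather + 1 * reduce-scatter on A,
    # 2 * reduce-scatter + 1 * all-gather on C.
    return (2 * ring_volume("all_gather", size_a, n)
            + ring_volume("reduce_scatter", size_a, n)
            + 2 * ring_volume("reduce_scatter", size_c, n)
            + ring_volume("all_gather", size_c, n))

def design2(size_a, size_c, n):
    # 3 all-reduces in total: on C in each of the two forward passes,
    # and on the gradients of A in the backward pass.
    return (2 * ring_volume("all_reduce", size_c, n)
            + ring_volume("all_reduce", size_a, n))
```

With `size_a == size_c`, both designs come out to 6(N-1)/N times the tensor size, which is why the thread concludes the costs are similar; when the two tensors differ in size, the totals diverge.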
from colossalai.
I agree 3D parallelism can shrink the peak activation footprint on one GPU at the cost of more communication. The method definitely works in some special cases. Maybe a simple search method can be derived to figure out which parts of the DNN are suitable for 3D parallelism under the constraint of a limited memory budget.
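Such a search could start as simple as a greedy heuristic. The sketch below is hypothetical (the function name, the per-layer tuples, and the ranking rule are all illustrative, not ColossalAI APIs): it picks layers for 3D parallelism in order of memory saved per byte of extra communication, stopping once the budget is met.

```python
def select_3d_layers(layers, budget):
    """Greedy sketch: choose which layers run with 3D parallelism so that
    total activation memory fits the budget, preferring layers that save
    the most memory per unit of extra communication.

    `layers` maps name -> (full_activation_bytes, sharded_activation_bytes,
    extra_comm_bytes). Returns (chosen_names, resulting_memory).
    """
    memory = sum(full for full, _, _ in layers.values())
    chosen = set()
    # Rank by saving-to-communication ratio, best first.
    ranked = sorted(layers.items(),
                    key=lambda kv: (kv[1][0] - kv[1][1]) / kv[1][2],
                    reverse=True)
    for name, (full, sharded, _comm) in ranked:
        if memory <= budget:
            break
        chosen.add(name)
        memory -= full - sharded
    return chosen, memory
```

A real version would need a measured cost model per layer (as the profiling mentioned later in the thread would provide), and could fall back to an exact knapsack solve when the layer count is small.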
This could be a good idea. For example, self-attention blocks usually consume more activation memory than MLP (FFN) blocks.
This issue is stale because it has been open for 14 days with no activity.
@1SAA Communication profiling results may support some of my assumptions in this discussion.
We have updated a lot. This issue was closed due to inactivity. Thanks.
Related Issues (20)
- [BUG]: UnboundLocalError: cannot access local variable 'default_conversation' where it is not associated with a value HOT 3
- [BUG]: _local_rank in DistCoordinator should be int
- [BUG]: RuntimeError: The param bucket max size 12582912 is exceeded by tensor (size 131334144) HOT 7
- [BUG]: a directory will be maked in each epoch HOT 1
- [BUG]: Pipeline Parallelism fails when input shape varies HOT 1
- [Feature]: support FP8 communication in Gemini
- [FEATURE]: Request updates for pretraining roberta
- [BUG]: Pytest with a specific config failed after PR #5868
- support moe
- llama fp8 forward/backward
- [fp8] support low level zero HOT 2
- [DOC]: Is there an example of Lora training for Llama3? HOT 1
- [DOC]: Is there documentation on how to create hostfiles HOT 3
- [BUG]: Hang on startup HOT 4
- qwen2 fp8 forward/backward
- [fp8] support hybrid parallel plugin
- [fp8] support amp HOT 2
- [FEATURE]: How to skip a custom node from generating strategies in colossal-auto?
- [BUG]: Cannot use CollosalChat HOT 1
- [BUG]: Torch compile causes multi-process to hang with python 3.9