Comments (3)
For our experiment, local_batch_size is set to 32 for fine-tuning and 64 for pre-training.
P.S.: local_batch_size=32 consumes about 10 GB of GPU memory on a V100.
from disco.
-
Regarding the "Fine-tuning with Disentangled Control" paper, in the case of 8 GPUs, is the global_batch_size equivalent to 8 * 32?
-
I have run the two experiments below, but the FID I obtain differs from the result reported in the paper (FID = 30.75). Moreover, with a larger batch size the FID is even worse. Could you help me with this issue?
CUDA_VISIBLE_DEVICES=5,6 AZFUSE_USE_FUSE=0 NCCL_ASYNC_ERROR_HANDLING=0 python -m torch.distributed.launch --nproc_per_node=2 --master_port=22234 --use_env finetune_sdm_yaml.py \
    --cf config/ref_attn_clip_combine_controlnet/tiktok_S256L16_xformers_tsv.py \
    --do_train --root_dir /project/DisCo/runtest \
    --local_train_batch_size 64 \
    --local_eval_batch_size 64 \
    --log_dir exp/s2_tiktok_cfg_64_2_1_70k \
    --epochs 100 \
    --deepspeed \
    --eval_step 2000 \
    --save_step 2000 \
    --gradient_accumulate_steps 1 \
    --learning_rate 2e-4 \
    --fix_dist_seed \
    --loss_target "noise" \
    --train_yaml /datasets/disco/TSV_dataset/composite_offset/train_TiktokDance-poses-masks.yaml \
    --val_yaml /datasets/disco/TSV_dataset/composite_offset/new10val_TiktokDance-poses-masks.yaml \
    --unet_unfreeze_type "all" \
    --refer_sdvae \
    --ref_null_caption False \
    --combine_clip_local --combine_use_mask \
    --conds "poses" "masks" \
    --stage1_pretrain_path /datasets/disco/checkpoint/pretrain/mp_rank_00_model_states.pt \
    --drop_ref 0.05 \
    --guidance_scale 1.5 \
    --eval_visu
(1) global_batch_size = 2 * 64, gradient_accumulate_steps = 2: FID = 38.912
(2) global_batch_size = 1 * 32, gradient_accumulate_steps = 1: FID = 33.737
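For reference, the effective global batch size under data-parallel training is the product of the process count, the per-GPU batch size, and the gradient-accumulation steps. A minimal sketch (the function name is illustrative, not from the DisCo code):

```python
# Effective global batch size under data-parallel training:
# each of the nproc_per_node processes sees local_batch samples per step,
# and gradients are accumulated over `accum` steps before each optimizer update.
def global_batch_size(nproc_per_node: int, local_batch: int, accum: int) -> int:
    return nproc_per_node * local_batch * accum

# The two experiments above:
print(global_batch_size(2, 64, 2))  # exp (1): 256
print(global_batch_size(1, 32, 1))  # exp (2): 32
# The 8-GPU setting asked about (8 GPUs, local batch 32, no accumulation):
print(global_batch_size(8, 32, 1))  # 256
```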
I sincerely appreciate your insights. Thank you very much for your time and consideration.
Hi @Fanghaipeng, sorry for the delay; I no longer have access to the computing resources for this project since my internship ended in July. Here is our log screenshot (without additional TikTok-Style data, but with cfg):
[training log screenshot]
The global batch size is 8 * 32, and our final FID is 31.3. During training, the FID is evaluated with clean-fid. For the results in the paper we use pytorch-fid for all the models, which is usually a little lower (~0.5%). So it seems the results can be reproduced. However, this experiment was still run on 8 GPUs; honestly, I never tried running on 2 GPUs, so I am not sure how the parameters should change. That said, from your screenshot it is strange that the FID gets higher toward the end of training. What are the results in the middle of training (e.g., at the 25k step for your 2nd experiment)?
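One common heuristic for "the parameter changing" when the global batch size differs from the paper's setting (this is not from the DisCo authors, just the widely used linear learning-rate scaling rule) is to scale the learning rate in proportion to the global batch size:

```python
# Heuristic (not from the DisCo paper): the linear LR scaling rule
# adjusts the learning rate proportionally to the global batch size.
def scaled_lr(base_lr: float, base_global_bs: int, new_global_bs: int) -> float:
    return base_lr * new_global_bs / base_global_bs

# Paper setting: lr = 2e-4 at global batch size 8 * 32 = 256.
# A 2-GPU run at global batch size 2 * 64 = 128 would then use:
print(scaled_lr(2e-4, 256, 128))  # 0.0001
```

Whether this rule helps here is untested; it is only a starting point when reproducing results on fewer GPUs.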
Related Issues (20)
- Model checkpoint for temporal module
- Question about the multi-gpu running: 'mpirun -np ...' HOT 7
- Hope for more instruction about the multiple GPU running
- Error when mpirun -np 3? HOT 2
- how to calculate fvd metrics?
- Where can I get the 10K tiktok style test split?
- Questions about image size
- [BUG] a bug in the dataset/tiktok_video_dataset.py
- How can I get "More TikTok-Style Training Data" please? HOT 1
- the code for computing PSNR is wrong HOT 4
- huggingface demo broken HOT 1
- How were these files generated? For validation and training.
- jax and jaxlib latest version problem
- Jimmy HOT 1
- How to use DeepSpeed for multi-GPU training instead of using mpirun?
- Is the TikTok dataset you provide the MoreTikTok dataset?
- Can you provide a run script for multi-node, multi-GPU training?
- drop_pose_ratio is not defined (temporal module finetuning)