
Comments (8)

muqeeth commented on September 2, 2024

Hi, sorry for getting back late. Add allow_skip_exp=false to the command, similar to https://github.com/r-three/t-few/blob/master/configs/t011b.json, in order to run multi-GPU training.
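As a minimal sketch of what that config change looks like (the surrounding keys in t011b.json are not reproduced here; only the allow_skip_exp flag is the relevant part):

```json
{
    "allow_skip_exp": false
}
```

With this flag set to false, the run is not skipped when an experiment directory with the same name already exists, which is what otherwise happens under multi-GPU launches where each rank checks for the experiment.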

from t-few.

muqeeth commented on September 2, 2024

I am sorry for getting back late. I don't think I can fully resolve the problem. One thing I noticed: even though you have 4 GPUs, I think only GPUs 0 and 1 are being used. Maybe try export CUDA_VISIBLE_DEVICES=0,1,2,3.
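To make all four devices visible before launching training (a minimal sketch; the training command itself is unchanged):

```shell
# Expose all four GPUs to the process. CUDA only sees the devices
# listed in this variable, so a narrower value (e.g. 0,1) hides the
# remaining cards from the framework.
export CUDA_VISIBLE_DEVICES=0,1,2,3

# Verify the setting before launching training.
echo "$CUDA_VISIBLE_DEVICES"   # prints 0,1,2,3
```

Note that the LOCAL_RANK lines in the log below show CUDA_VISIBLE_DEVICES: [0,1], which is consistent with only two devices being visible to the job.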

The deepspeed, torch, and cuda versions in requirements.txt worked for us on A100, A5000, and A6000 GPUs. I am not sure about other GPUs. Maybe @HaokunLiu can help?


danielkorat commented on September 2, 2024

@craffel @dptam @jmohta @muqeeth
Thanks


xszheng2020 commented on September 2, 2024

Hi @danielkorat, you may try setting "compute_strategy" to "ddp" or "deepspeed_stage_3".
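For example, as a config fragment (a sketch; any keys beyond compute_strategy are omitted, and "ddp" can be swapped in for "deepspeed_stage_3"):

```json
{
    "compute_strategy": "deepspeed_stage_3"
}
```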


danielkorat commented on September 2, 2024

I tried it; the code hangs after starting the experiment and then skipping it. It looks like a parallelization issue:

Start experiment t03b_rte_seed42_ia3_pretrained
{
    "exp_dir": "/store/code/t-few/exp_out/t03b_rte_seed42_ia3_pretrained",
    "exp_name": "t03b_rte_seed42_ia3_pretrained",
    ....
    ....
}
Skip finished experiment t03b_rte_seed42_ia3_pretrained


danielkorat commented on September 2, 2024

Hi @muqeeth,

When I try compute_strategy: deepspeed_stage_3 (with allow_skip_exp=false), I get the error below.
My goal is to fit your model on my GPUs (model parallelism, not data parallelism).
I'm using 4 x Nvidia RTX GPUs with 24GB each, and the package versions as they appear in requirements.txt.
My machine has 40 CPUs and 128 GB RAM.
I have tried many deepspeed configurations. I suspect it's an issue related to the integration of pytorch-lightning with deepspeed.

Thank you

Mark experiment t03b_rte_seed42_ia3_pretrained as claimed
initializing deepspeed distributed: GLOBAL_RANK: 0, MEMBER: 1/2
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
initializing deepspeed distributed: GLOBAL_RANK: 1, MEMBER: 2/2
[2022-06-15 13:21:21,093] [WARNING] [deepspeed.py:630:_auto_select_batch_size] Tried to infer the batch size for internal deepspeed logging from the `train_dataloader()`. To ensure DeepSpeed logging remains correct, please manually pass the plugin with the batch size, `Trainer(strategy=DeepSpeedPlugin(logging_batch_size_per_gpu=batch_size))`.
Reusing dataset super_glue (/home/dkorat/.cache/huggingface/datasets/super_glue/rte/1.0.2/d040c658e2ddef6934fdd97deb45c777b6ff50c524781ea434e7219b56a428a7)
Reusing dataset super_glue (/home/dkorat/.cache/huggingface/datasets/super_glue/rte/1.0.2/d040c658e2ddef6934fdd97deb45c777b6ff50c524781ea434e7219b56a428a7)
Train size 32
Eval size 277
Train size 32
Eval size 277
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]
You have not specified an optimizer or scheduler within the DeepSpeed config. Using `configure_optimizers` to define optimizer and scheduler.
LOCAL_RANK: 1 - CUDA_VISIBLE_DEVICES: [0,1]
/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/ddp.py:510: UserWarning: Error handling mechanism for deadlock detection is uninitialized. Skipping check.
  rank_zero_warn("Error handling mechanism for deadlock detection is uninitialized. Skipping check.")
Traceback (most recent call last):
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/store/code/t-few/src/pl_train.py", line 98, in <module>
    main(config)
  File "/store/code/t-few/src/pl_train.py", line 69, in main
    trainer.fit(model, datamodule)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 741, in fit
    self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 685, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 777, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1188, in _run
    self._pre_dispatch()
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1223, in _pre_dispatch
    self.accelerator.pre_dispatch(self)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 136, in pre_dispatch
    self.training_type_plugin.pre_dispatch()
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/deepspeed.py", line 389, in pre_dispatch
    self.init_deepspeed()
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/deepspeed.py", line 459, in init_deepspeed
    self._initialize_deepspeed_train(model)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/deepspeed.py", line 492, in _initialize_deepspeed_train
    model, deepspeed_optimizer = self._setup_model_and_optimizer(model, optimizer, scheduler)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/deepspeed.py", line 430, in _setup_model_and_optimizer
    dist_init_required=False,
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/deepspeed/__init__.py", line 129, in initialize
    config_params=config_params)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/deepspeed/runtime/engine.py", line 258, in __init__
    self._configure_distributed_model(model)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/deepspeed/runtime/engine.py", line 1066, in _configure_distributed_model
    self._broadcast_model()
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/deepspeed/runtime/engine.py", line 969, in _broadcast_model
    group=self.data_parallel_group)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1163, in broadcast
    work = group.broadcast([tensor], opts)
RuntimeError: NCCL error in: ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:957, unhandled system error, NCCL version 21.0.3
ncclSystemError: System call (socket, malloc, munmap, etc) failed.
Traceback (most recent call last):
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/store/code/t-few/src/pl_train.py", line 98, in <module>
    main(config)
  File "/store/code/t-few/src/pl_train.py", line 69, in main
    trainer.fit(model, datamodule)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 741, in fit
    self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 685, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 777, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1188, in _run
    self._pre_dispatch()
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1223, in _pre_dispatch
    self.accelerator.pre_dispatch(self)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 136, in pre_dispatch
    self.training_type_plugin.pre_dispatch()
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/deepspeed.py", line 389, in pre_dispatch
    self.init_deepspeed()
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/deepspeed.py", line 459, in init_deepspeed
    self._initialize_deepspeed_train(model)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/deepspeed.py", line 492, in _initialize_deepspeed_train
    model, deepspeed_optimizer = self._setup_model_and_optimizer(model, optimizer, scheduler)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/deepspeed.py", line 430, in _setup_model_and_optimizer
    dist_init_required=False,
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/deepspeed/__init__.py", line 129, in initialize
    config_params=config_params)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/deepspeed/runtime/engine.py", line 258, in __init__
    self._configure_distributed_model(model)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/deepspeed/runtime/engine.py", line 1066, in _configure_distributed_model
    self._broadcast_model()
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/deepspeed/runtime/engine.py", line 969, in _broadcast_model
    group=self.data_parallel_group)
  File "/home/dkorat/anaconda3/envs/tfew/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1163, in broadcast
    work = group.broadcast([tensor], opts)
RuntimeError: NCCL error in: ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:957, unhandled system error, NCCL version 21.0.3
ncclSystemError: System call (socket, malloc, munmap, etc) failed.
(tfew) dkorat@gpu-m7mm8:/store/code/t-few$ 


HaokunLiu commented on September 2, 2024

I didn't encounter the problem you posted. I worked with deepspeed for a while, and after digging through a lot of other problems, I was able to run the model with it. But it was usually very slow, so we didn't use it in our final experiments. Instead, we rented some 80GB A100s online. The experiments finished quickly, so it wasn't as expensive as it sounds.

Overall, I recommend using big GPUs + ddp rather than deepspeed, if possible.


danielkorat commented on September 2, 2024

I see, thanks for the info!

