
Comments (3)

Shuyu-XJTU commented on July 26, 2024

This issue is caused by a ruamel.yaml version change. The following two lines of code:

```python
import ruamel.yaml as yaml
config = yaml.load(open(args.config, 'r'), Loader=yaml.Loader)
```

can be replaced with:

```python
from ruamel.yaml import YAML

yaml = YAML(typ='safe')
config = yaml.load(open(args.config, 'r'))
```


erictan23 commented on July 26, 2024

Hello @Shuyu-XJTU! Thank you for your advice on updating those lines of code for the version change.

I am still running into a problem similar to the previous one, where Retrieval.py fails. I checked whether my CUDA devices were unavailable, and they are in fact available. I suspect this is because torch.distributed.launch has been deprecated, but I am not sure how I would need to modify the launch to fix it. Could you advise me further on what I should do?

The following shows the failure output from the Python script:

```
NNODES, 1
NPROC_PER_NODE, 4
MASTER_ADDR, 127.0.0.1
MASTER_PORT, 3000
NODE_RANK, 0
/home/default/miniconda3/envs/aptm/lib/python3.8/site-packages/torch/distributed/launch.py:183: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects --local-rank argument to be set, please
change it to read from os.environ['LOCAL_RANK'] instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

  warnings.warn(
[2024-04-01 10:10:58,498] torch.distributed.run: [WARNING]
[2024-04-01 10:10:58,498] torch.distributed.run: [WARNING] *****************************************
[2024-04-01 10:10:58,498] torch.distributed.run: [WARNING] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
[2024-04-01 10:10:58,498] torch.distributed.run: [WARNING] *****************************************
Traceback (most recent call last):
  File "Retrieval.py", line 303, in <module>
    main(args, config)
  File "Retrieval.py", line 36, in main
    utils.init_distributed_mode(args)
  File "/home/default/Desktop/eric/APTM/utils/__init__.py", line 264, in init_distributed_mode
    torch.cuda.set_device(args.gpu)
  File "/home/default/miniconda3/envs/aptm/lib/python3.8/site-packages/torch/cuda/__init__.py", line 408, in set_device
    torch._C._cuda_setDevice(device)
RuntimeError: CUDA error: invalid device ordinal
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

| distributed init (rank 0): env://
Traceback (most recent call last):
  File "Retrieval.py", line 303, in <module>
    main(args, config)
  File "Retrieval.py", line 36, in main
    utils.init_distributed_mode(args)
  File "/home/default/Desktop/eric/APTM/utils/__init__.py", line 264, in init_distributed_mode
    torch.cuda.set_device(args.gpu)
  File "/home/default/miniconda3/envs/aptm/lib/python3.8/site-packages/torch/cuda/__init__.py", line 408, in set_device
    torch._C._cuda_setDevice(device)
RuntimeError: CUDA error: invalid device ordinal
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

Traceback (most recent call last):
  File "Retrieval.py", line 303, in <module>
    main(args, config)
  File "Retrieval.py", line 36, in main
    utils.init_distributed_mode(args)
  File "/home/default/Desktop/eric/APTM/utils/__init__.py", line 264, in init_distributed_mode
    torch.cuda.set_device(args.gpu)
  File "/home/default/miniconda3/envs/aptm/lib/python3.8/site-packages/torch/cuda/__init__.py", line 408, in set_device
    torch._C._cuda_setDevice(device)
RuntimeError: CUDA error: invalid device ordinal
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

[2024-04-01 10:11:08,564] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 17993 closing signal SIGTERM
[2024-04-01 10:11:08,781] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 1 (pid: 17994) of binary: /home/default/miniconda3/envs/aptm/bin/python3
Traceback (most recent call last):
  File "/home/default/miniconda3/envs/aptm/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/default/miniconda3/envs/aptm/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/default/miniconda3/envs/aptm/lib/python3.8/site-packages/torch/distributed/launch.py", line 198, in <module>
    main()
  File "/home/default/miniconda3/envs/aptm/lib/python3.8/site-packages/torch/distributed/launch.py", line 194, in main
    launch(args)
  File "/home/default/miniconda3/envs/aptm/lib/python3.8/site-packages/torch/distributed/launch.py", line 179, in launch
    run(args)
  File "/home/default/miniconda3/envs/aptm/lib/python3.8/site-packages/torch/distributed/run.py", line 803, in run
    elastic_launch(
  File "/home/default/miniconda3/envs/aptm/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 135, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/default/miniconda3/envs/aptm/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 268, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

Retrieval.py FAILED

Failures:
[1]:
  time       : 2024-04-01_10:11:08
  host       : default-Pulse-15-B13VFK
  rank       : 2 (local_rank: 2)
  exitcode   : 1 (pid: 17995)
  error_file : <N/A>
  traceback  : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[2]:
  time       : 2024-04-01_10:11:08
  host       : default-Pulse-15-B13VFK
  rank       : 3 (local_rank: 3)
  exitcode   : 1 (pid: 17996)
  error_file : <N/A>
  traceback  : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

Root Cause (first observed failure):
[0]:
  time       : 2024-04-01_10:11:08
  host       : default-Pulse-15-B13VFK
  rank       : 1 (local_rank: 1)
  exitcode   : 1 (pid: 17994)
  error_file : <N/A>
  traceback  : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
```


Shuyu-XJTU commented on July 26, 2024

Sorry, maybe you can try replacing `torch.distributed.launch` with `torch.distributed.run` in the launch command.
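As a sketch of the other change the FutureWarning asks for (the `get_local_rank` helper name is mine, not from the APTM code): `torch.distributed.run`/`torchrun` sets `--use-env` by default, so the script should read each worker's rank from the `LOCAL_RANK` environment variable rather than a `--local-rank` argument, and pass that value to `torch.cuda.set_device`:

```python
import os

def get_local_rank():
    # torchrun / torch.distributed.run export LOCAL_RANK for each worker
    # process; fall back to 0 for a plain single-process run.
    return int(os.environ.get('LOCAL_RANK', 0))

# Inside init_distributed_mode, the device would then be selected with
# something like:
#   args.gpu = get_local_rank()
#   torch.cuda.set_device(args.gpu)
```

Separately, `invalid device ordinal` on ranks 1-3 suggests the machine exposes fewer than the 4 GPUs that `NPROC_PER_NODE, 4` asks for; if so, lowering NPROC_PER_NODE to the actual GPU count may also be needed.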

