Comments (3)
This issue is caused by a ruamel.yaml version change. The following two lines of code:
import ruamel.yaml as yaml
config = yaml.load(open(args.config, 'r'), Loader=yaml.Loader)
can be replaced with:
from ruamel.yaml import YAML
yaml = YAML(typ='safe')
config = yaml.load(open(args.config, 'r'))
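If you also want the file handle closed after loading, an equivalent sketch (assuming args.config still holds the path to your YAML config) is:
from ruamel.yaml import YAML
yaml = YAML(typ='safe')
# the context manager closes the file once loading is done
with open(args.config, 'r') as f:
    config = yaml.load(f)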
Hello @Shuyu-XJTU! Thank you for your advice on changing those lines of code due to the version change.
I am still experiencing a problem similar to the previous one, where Retrieval.py fails. I have checked whether my CUDA devices are unavailable, but they are in fact available. I suspect this is because torch.distributed.launch is deprecated, but I am not entirely sure how I need to modify the code to rectify the problem. Could you advise me further on what I should do?
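For reference, I confirmed device availability with a quick check along these lines:
import torch
# standard torch APIs; on my machine CUDA reports as available
print(torch.cuda.is_available())
print(torch.cuda.device_count())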
The following shows the failure output from the Python script:
NNODES, 1
NPROC_PER_NODE, 4
MASTER_ADDR, 127.0.0.1
MASTER_PORT, 3000
NODE_RANK, 0
/home/default/miniconda3/envs/aptm/lib/python3.8/site-packages/torch/distributed/launch.py:183: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects --local-rank
argument to be set, please
change it to read from os.environ['LOCAL_RANK']
instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions
warnings.warn(
[2024-04-01 10:10:58,498] torch.distributed.run: [WARNING]
[2024-04-01 10:10:58,498] torch.distributed.run: [WARNING] *****************************************
[2024-04-01 10:10:58,498] torch.distributed.run: [WARNING] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
[2024-04-01 10:10:58,498] torch.distributed.run: [WARNING] *****************************************
Traceback (most recent call last):
  File "Retrieval.py", line 303, in <module>
    main(args, config)
  File "Retrieval.py", line 36, in main
    utils.init_distributed_mode(args)
  File "/home/default/Desktop/eric/APTM/utils/__init__.py", line 264, in init_distributed_mode
    torch.cuda.set_device(args.gpu)
  File "/home/default/miniconda3/envs/aptm/lib/python3.8/site-packages/torch/cuda/__init__.py", line 408, in set_device
    torch._C._cuda_setDevice(device)
RuntimeError: CUDA error: invalid device ordinal
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
| distributed init (rank 0): env://
Traceback (most recent call last):
  File "Retrieval.py", line 303, in <module>
    main(args, config)
  File "Retrieval.py", line 36, in main
    utils.init_distributed_mode(args)
  File "/home/default/Desktop/eric/APTM/utils/__init__.py", line 264, in init_distributed_mode
    torch.cuda.set_device(args.gpu)
  File "/home/default/miniconda3/envs/aptm/lib/python3.8/site-packages/torch/cuda/__init__.py", line 408, in set_device
    torch._C._cuda_setDevice(device)
RuntimeError: CUDA error: invalid device ordinal
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
Traceback (most recent call last):
  File "Retrieval.py", line 303, in <module>
    main(args, config)
  File "Retrieval.py", line 36, in main
    utils.init_distributed_mode(args)
  File "/home/default/Desktop/eric/APTM/utils/__init__.py", line 264, in init_distributed_mode
    torch.cuda.set_device(args.gpu)
  File "/home/default/miniconda3/envs/aptm/lib/python3.8/site-packages/torch/cuda/__init__.py", line 408, in set_device
    torch._C._cuda_setDevice(device)
RuntimeError: CUDA error: invalid device ordinal
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
[2024-04-01 10:11:08,564] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 17993 closing signal SIGTERM
[2024-04-01 10:11:08,781] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 1 (pid: 17994) of binary: /home/default/miniconda3/envs/aptm/bin/python3
Traceback (most recent call last):
  File "/home/default/miniconda3/envs/aptm/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/default/miniconda3/envs/aptm/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/default/miniconda3/envs/aptm/lib/python3.8/site-packages/torch/distributed/launch.py", line 198, in <module>
    main()
  File "/home/default/miniconda3/envs/aptm/lib/python3.8/site-packages/torch/distributed/launch.py", line 194, in main
    launch(args)
  File "/home/default/miniconda3/envs/aptm/lib/python3.8/site-packages/torch/distributed/launch.py", line 179, in launch
    run(args)
  File "/home/default/miniconda3/envs/aptm/lib/python3.8/site-packages/torch/distributed/run.py", line 803, in run
    elastic_launch(
  File "/home/default/miniconda3/envs/aptm/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 135, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/default/miniconda3/envs/aptm/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 268, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
Retrieval.py FAILED
Failures:
[1]:
  time : 2024-04-01_10:11:08
  host : default-Pulse-15-B13VFK
  rank : 2 (local_rank: 2)
  exitcode : 1 (pid: 17995)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[2]:
  time : 2024-04-01_10:11:08
  host : default-Pulse-15-B13VFK
  rank : 3 (local_rank: 3)
  exitcode : 1 (pid: 17996)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
Root Cause (first observed failure):
[0]:
  time : 2024-04-01_10:11:08
  host : default-Pulse-15-B13VFK
  rank : 1 (local_rank: 1)
  exitcode : 1 (pid: 17994)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
Sorry, maybe you can try to replace 'torch.distributed.launch' with 'torch.distributed.run'.
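For example, the launch command changes from python -m torch.distributed.launch to python -m torch.distributed.run (or the torchrun entry point). Because torchrun sets --use-env by default, the script should read the local rank from the environment instead of an --local-rank argument. A minimal sketch of that change (just the idea, not the exact APTM code) inside init_distributed_mode:
import os
import torch
# torchrun injects LOCAL_RANK (and RANK/WORLD_SIZE) into the environment
local_rank = int(os.environ.get('LOCAL_RANK', 0))
# 'invalid device ordinal' means the requested index exceeds the visible GPUs,
# so fail early with a clear message if the machine has fewer devices
assert local_rank < torch.cuda.device_count(), 'not enough visible GPUs'
torch.cuda.set_device(local_rank)
Also, the error appears only on ranks 1-3 while rank 0 initializes fine, so if your machine has a single GPU, set NPROC_PER_NODE to 1.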