Comments (6)
At what point do you run out of CPU memory? This is most likely happening while ROUGE is being computed. If that is the case for you, you can turn off ROUGE evaluation and only run it at the end of training. If you can provide more detail about when/where it runs out of memory, I might be able to help more.
from decanlp.
Hi @t-vi, just checking in. Were you able to resolve this by turning off ROUGE? If not, at what point were you running out of RAM: while loading the datasets, or during training/validation?
GPU out of memory. How can I fix it?
The error message is:
process_0 - Begin Training
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1518244421288/work/torch/lib/THC/generic/THCStorage.cu line=58 error=2 : out of memory
Traceback (most recent call last):
File "/decaNLP/train.py", line 365, in <module>
main()
File "/decaNLP/train.py", line 361, in main
run(args, run_args, world_size=args.world_size)
File "/decaNLP/train.py", line 309, in run
writer=writer if rank==0 else None, save_every=args.save_every, start_iteration=start_iteration)
File "/decaNLP/train.py", line 220, in train
loss, train_metric_dict = step(model, batch, opt, iteration, field, task, lr=lr, grad_clip=args.grad_clip, writer=writer, it=train_iter)
File "/decaNLP/train.py", line 131, in step
loss.backward()
File "/opt/conda/lib/python3.6/site-packages/torch/autograd/variable.py", line 167, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
File "/opt/conda/lib/python3.6/site-packages/torch/autograd/__init__.py", line 99, in backward
variables, grad_variables, retain_graph)
File "/opt/conda/lib/python3.6/site-packages/torch/autograd/function.py", line 91, in apply
return self._forward_cls.backward(self, *args)
File "/opt/conda/lib/python3.6/site-packages/torch/autograd/_functions/tensor.py", line 481, in backward
grad_tensor = grad_tensor.masked_scatter(mask, grad_output)
File "/opt/conda/lib/python3.6/site-packages/torch/autograd/variable.py", line 427, in masked_scatter
return self.clone().masked_scatter(mask, variable)
RuntimeError: cuda runtime error (2) : out of memory at /opt/conda/conda-bld/pytorch_1518244421288/work/torch/lib/THC/generic/THCStorage.cu:58
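The crash happens during `loss.backward()`, when the saved activations for the whole batch are still resident on the GPU, so the practical fix is a smaller batch. As a minimal sketch (the names `run_with_oom_backoff` and `step_fn` are my own, not part of decaNLP), one can automate what lowering `--train_batch_tokens` does by hand: catch the out-of-memory error and retry with a halved token budget.

```python
def run_with_oom_backoff(step_fn, batch_tokens, min_tokens=1000):
    """Call step_fn(batch_tokens), halving the token budget on OOM errors.

    step_fn is any callable that runs one training step with the given
    budget and raises RuntimeError("... out of memory ...") when the GPU
    cannot fit the batch (the error PyTorch raises in the traceback above).
    """
    while batch_tokens >= min_tokens:
        try:
            return step_fn(batch_tokens), batch_tokens
        except RuntimeError as e:
            if "out of memory" not in str(e):
                raise  # unrelated error: let it propagate
            batch_tokens //= 2  # shrink the batch and retry
    raise RuntimeError("out of memory even at the minimum token budget")
```

In a real training loop you would also need to free the partial graph (drop references to the loss and outputs) before retrying; this sketch only shows the backoff logic.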
Some information:
1) Task:
python /decaNLP/train.py --train_tasks squad --gpu 0
2) GPU:
root@e7ebc34933bd:/# nvidia-smi
Fri Aug 10 12:33:36 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.130 Driver Version: 384.130 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 1060 Off | 00000000:01:00.0 On | N/A |
| N/A 45C P8 9W / N/A | 364MiB / 6070MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
3) Train parameters:
{'backend': 'gloo',
'beta0': 0.9,
'commit': '704ce1360d1cd49bc838a1da03decf06620b563b',
'data': '/decaNLP/.data/',
'dimension': 200,
'dist_sync_file': '/decaNLP/results/18/08/10/03/58/49.865648/squad,MultitaskQuestionAnsweringNetwork,1g/704ce13/distributed_sync_file',
'dropout_ratio': 0.2,
'embeddings': '/decaNLP/.embeddings',
'exist_ok': False,
'gpus': [0],
'grad_clip': 1.0,
'jump_start': 0,
'load': None,
'log_dir': '/decaNLP/results/18/08/10/03/58/49.865648/squad,MultitaskQuestionAnsweringNetwork,1g/704ce13',
'log_every': 100,
'lower': True,
'max_answer_length': 50,
'max_effective_vocab': 1000000,
'max_generative_vocab': 50000,
'max_output_length': 100,
'max_train_context_length': 400,
'max_val_context_length': 400,
'model': 'MultitaskQuestionAnsweringNetwork',
'n_jump_start': 0,
'num_print': 15,
'resume': False,
'reverse': False,
'rnn_layers': 1,
'save': '/decaNLP/results',
'save_every': 1000,
'seed': 123,
'subsample': 20000000,
'timestamp': '18/08/10/03/58/49.865648',
'token_testing': False,
'train_batch_tokens': [10000],
'train_iterations': None,
'train_tasks': ['squad'],
'transformer_heads': 3,
'transformer_hidden': 150,
'transformer_layers': 2,
'transformer_lr': True,
'val_batch_size': [32],
'val_every': 1000,
'val_filter': True,
'val_tasks': ['squad'],
'vocab_tasks': None,
'warmup': 800,
'world_size': 1}
@t-vi This was a problem with the memory consumption of the tokenizer we were using (revtok): it was creating too many short strings during tokenization. For now there's a quick fix (1f83b7a), and we'll get it fixed in revtok itself (update: jekbradbury/revtok@f1998b7).
Let me know if this fixes your issue!
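Many distinct short string objects add up quickly in CPython, because each string pays a fixed per-object overhead on top of its characters. A minimal illustration (my own example, not revtok's actual fix) of why deduplicating repeated tokens helps, using `sys.intern`:

```python
import sys

# Two equal strings built at runtime are normally distinct objects,
# each paying the full per-object overhead.
a = "".join(["to", "ken"])
b = "".join(["to", "ken"])
print(a is b)  # typically False in CPython: two separate objects

# sys.intern maps equal strings to one shared object, so a tokenizer
# that interns its output stores each distinct token only once.
ai = sys.intern(a)
bi = sys.intern(b)
print(ai is bi)  # True: one shared object

# The fixed overhead that gets duplicated without interning:
print(sys.getsizeof("token"), "bytes for a 5-character string")
```

With millions of tokens, most of which repeat, interning (or any equivalent deduplication inside the tokenizer) turns per-occurrence cost into per-distinct-token cost.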
@delldu You'll need to run with a smaller --train_batch_tokens value than the default of 10k, or reduce the size of the model via --dimension or one of the other arguments listed in lines 58-62 (see line 58 in 1f83b7a).
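As a rough rule of thumb (my own back-of-envelope sketch, not decaNLP's actual memory model), activation memory for a batch scales with batch_tokens × dimension, so halving either roughly halves the activation footprint on the GPU:

```python
def approx_activation_mib(batch_tokens, dimension, layers=2,
                          bytes_per_float=4, fudge=8):
    """Very rough activation-memory estimate in MiB.

    `fudge` lumps together the several intermediate tensors kept per
    layer for backprop; the real constant depends on the architecture,
    so treat the result as a scaling guide, not a prediction.
    """
    floats = batch_tokens * dimension * layers * fudge
    return floats * bytes_per_float / 2**20

# Halving --train_batch_tokens halves the estimate; the same holds
# for --dimension. With the defaults from the config above:
full = approx_activation_mib(10000, 200)
half = approx_activation_mib(5000, 200)
print(f"10k tokens: ~{full:.0f} MiB, 5k tokens: ~{half:.0f} MiB")
```

On a 6 GB card like the GTX 1060 above, some combination of smaller --train_batch_tokens and smaller --dimension is usually needed.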