While running the code from the sample notebook, I get the error `ValueError: Using AMP requires a cuda device`, even though I am running it in a GPU-enabled Colab notebook. The full log is below.
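A quick sanity check I ran first (this snippet is my own, not from the notebook): confirm that PyTorch actually sees the Colab GPU, since AMP can only be enabled when a CUDA device is visible. If `is_available()` prints `False` even on a GPU runtime, the installed torch build and the runtime's CUDA driver may be mismatched.

```python
import torch

# Confirm PyTorch can see the Colab GPU before training.
print(torch.__version__)
print(torch.cuda.is_available())   # expected True on a GPU runtime
print(torch.cuda.device_count())
```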
from cache at /root/.cache/torch/transformers/5aab0d7dfa1db7d97ead13a37479db888b133a51a05ae4ab62ff5c8d1fcabb65.52b6ec356fb91985b3940e086d1b2ebf8cd40f8df0ba1cabf4cac27769dee241
2021-01-21 07:00:21,397 - INFO - transformers.modeling_utils - All model checkpoint weights were used when initializing RobertaForMaskedLM.
2021-01-21 07:00:21,398 - WARNING - transformers.modeling_utils - Some weights of RobertaForMaskedLM were not initialized from the model checkpoint at distilroberta-base and are newly initialized: ['lm_head.decoder.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
2021-01-21 07:00:21,398 - INFO - allennlp.common.params - model.seq2vec_encoder = None
2021-01-21 07:00:21,399 - INFO - allennlp.common.params - model.feedforward = None
2021-01-21 07:00:21,399 - INFO - allennlp.common.params - model.miner = None
2021-01-21 07:00:21,399 - INFO - allennlp.common.params - model.loss.type = nt_xent
2021-01-21 07:00:21,399 - INFO - allennlp.common.params - model.loss.temperature = 0.05
2021-01-21 07:00:21,400 - INFO - allennlp.common.params - model.initializer = <allennlp.nn.initializers.InitializerApplicator object at 0x7f545585b2e8>
2021-01-21 07:00:21,400 - INFO - allennlp.nn.initializers - Initializing parameters
2021-01-21 07:00:21,400 - INFO - allennlp.nn.initializers - Done initializing parameters; the following parameters are using their default initialization from their code
2021-01-21 07:00:21,400 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.lm_head.bias
2021-01-21 07:00:21,400 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.lm_head.dense.bias
2021-01-21 07:00:21,401 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.lm_head.dense.weight
2021-01-21 07:00:21,401 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.lm_head.layer_norm.bias
2021-01-21 07:00:21,401 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.lm_head.layer_norm.weight
2021-01-21 07:00:21,401 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.embeddings.LayerNorm.bias
2021-01-21 07:00:21,401 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.embeddings.LayerNorm.weight
2021-01-21 07:00:21,401 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.embeddings.position_embeddings.weight
2021-01-21 07:00:21,401 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.embeddings.token_type_embeddings.weight
2021-01-21 07:00:21,401 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.embeddings.word_embeddings.weight
2021-01-21 07:00:21,401 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.attention.output.LayerNorm.bias
2021-01-21 07:00:21,401 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.attention.output.LayerNorm.weight
2021-01-21 07:00:21,401 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.attention.output.dense.bias
2021-01-21 07:00:21,401 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.attention.output.dense.weight
2021-01-21 07:00:21,401 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.attention.self.key.bias
2021-01-21 07:00:21,401 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.attention.self.key.weight
2021-01-21 07:00:21,401 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.attention.self.query.bias
2021-01-21 07:00:21,401 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.attention.self.query.weight
2021-01-21 07:00:21,401 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.attention.self.value.bias
2021-01-21 07:00:21,402 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.attention.self.value.weight
2021-01-21 07:00:21,402 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.intermediate.dense.bias
2021-01-21 07:00:21,402 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.intermediate.dense.weight
2021-01-21 07:00:21,402 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.output.LayerNorm.bias
2021-01-21 07:00:21,402 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.output.LayerNorm.weight
2021-01-21 07:00:21,402 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.output.dense.bias
2021-01-21 07:00:21,402 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.output.dense.weight
2021-01-21 07:00:21,402 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.attention.output.LayerNorm.bias
2021-01-21 07:00:21,402 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.attention.output.LayerNorm.weight
2021-01-21 07:00:21,402 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.attention.output.dense.bias
2021-01-21 07:00:21,402 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.attention.output.dense.weight
2021-01-21 07:00:21,402 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.attention.self.key.bias
2021-01-21 07:00:21,402 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.attention.self.key.weight
2021-01-21 07:00:21,402 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.attention.self.query.bias
2021-01-21 07:00:21,402 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.attention.self.query.weight
2021-01-21 07:00:21,402 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.attention.self.value.bias
2021-01-21 07:00:21,402 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.attention.self.value.weight
2021-01-21 07:00:21,403 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.intermediate.dense.bias
2021-01-21 07:00:21,403 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.intermediate.dense.weight
2021-01-21 07:00:21,403 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.output.LayerNorm.bias
2021-01-21 07:00:21,403 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.output.LayerNorm.weight
2021-01-21 07:00:21,403 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.output.dense.bias
2021-01-21 07:00:21,403 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.output.dense.weight
2021-01-21 07:00:21,403 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.attention.output.LayerNorm.bias
2021-01-21 07:00:21,403 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.attention.output.LayerNorm.weight
2021-01-21 07:00:21,403 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.attention.output.dense.bias
2021-01-21 07:00:21,403 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.attention.output.dense.weight
2021-01-21 07:00:21,403 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.attention.self.key.bias
2021-01-21 07:00:21,403 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.attention.self.key.weight
2021-01-21 07:00:21,403 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.attention.self.query.bias
2021-01-21 07:00:21,403 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.attention.self.query.weight
2021-01-21 07:00:21,403 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.attention.self.value.bias
2021-01-21 07:00:21,403 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.attention.self.value.weight
2021-01-21 07:00:21,404 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.intermediate.dense.bias
2021-01-21 07:00:21,404 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.intermediate.dense.weight
2021-01-21 07:00:21,404 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.output.LayerNorm.bias
2021-01-21 07:00:21,404 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.output.LayerNorm.weight
2021-01-21 07:00:21,404 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.output.dense.bias
2021-01-21 07:00:21,404 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.output.dense.weight
2021-01-21 07:00:21,404 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.attention.output.LayerNorm.bias
2021-01-21 07:00:21,404 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.attention.output.LayerNorm.weight
2021-01-21 07:00:21,404 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.attention.output.dense.bias
2021-01-21 07:00:21,404 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.attention.output.dense.weight
2021-01-21 07:00:21,404 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.attention.self.key.bias
2021-01-21 07:00:21,404 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.attention.self.key.weight
2021-01-21 07:00:21,404 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.attention.self.query.bias
2021-01-21 07:00:21,404 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.attention.self.query.weight
2021-01-21 07:00:21,404 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.attention.self.value.bias
2021-01-21 07:00:21,405 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.attention.self.value.weight
2021-01-21 07:00:21,405 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.intermediate.dense.bias
2021-01-21 07:00:21,405 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.intermediate.dense.weight
2021-01-21 07:00:21,405 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.output.LayerNorm.bias
2021-01-21 07:00:21,405 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.output.LayerNorm.weight
2021-01-21 07:00:21,405 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.output.dense.bias
2021-01-21 07:00:21,405 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.output.dense.weight
2021-01-21 07:00:21,405 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.attention.output.LayerNorm.bias
2021-01-21 07:00:21,405 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.attention.output.LayerNorm.weight
2021-01-21 07:00:21,405 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.attention.output.dense.bias
2021-01-21 07:00:21,405 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.attention.output.dense.weight
2021-01-21 07:00:21,405 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.attention.self.key.bias
2021-01-21 07:00:21,405 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.attention.self.key.weight
2021-01-21 07:00:21,405 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.attention.self.query.bias
2021-01-21 07:00:21,405 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.attention.self.query.weight
2021-01-21 07:00:21,405 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.attention.self.value.bias
2021-01-21 07:00:21,405 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.attention.self.value.weight
2021-01-21 07:00:21,483 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.intermediate.dense.bias
2021-01-21 07:00:21,484 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.intermediate.dense.weight
2021-01-21 07:00:21,484 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.output.LayerNorm.bias
2021-01-21 07:00:21,484 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.output.LayerNorm.weight
2021-01-21 07:00:21,484 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.output.dense.bias
2021-01-21 07:00:21,484 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.output.dense.weight
2021-01-21 07:00:21,484 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.attention.output.LayerNorm.bias
2021-01-21 07:00:21,484 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.attention.output.LayerNorm.weight
2021-01-21 07:00:21,484 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.attention.output.dense.bias
2021-01-21 07:00:21,484 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.attention.output.dense.weight
2021-01-21 07:00:21,484 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.attention.self.key.bias
2021-01-21 07:00:21,484 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.attention.self.key.weight
2021-01-21 07:00:21,484 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.attention.self.query.bias
2021-01-21 07:00:21,484 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.attention.self.query.weight
2021-01-21 07:00:21,485 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.attention.self.value.bias
2021-01-21 07:00:21,485 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.attention.self.value.weight
2021-01-21 07:00:21,485 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.intermediate.dense.bias
2021-01-21 07:00:21,485 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.intermediate.dense.weight
2021-01-21 07:00:21,485 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.output.LayerNorm.bias
2021-01-21 07:00:21,485 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.output.LayerNorm.weight
2021-01-21 07:00:21,485 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.output.dense.bias
2021-01-21 07:00:21,485 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.output.dense.weight
2021-01-21 07:00:21,485 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.pooler.dense.bias
2021-01-21 07:00:21,485 - INFO - allennlp.nn.initializers - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.pooler.dense.weight
2021-01-21 07:00:21,486 - INFO - allennlp.common.params - data_loader.type = pytorch_dataloader
2021-01-21 07:00:21,486 - INFO - allennlp.common.params - data_loader.batch_size = 2
2021-01-21 07:00:21,486 - INFO - allennlp.common.params - data_loader.shuffle = False
2021-01-21 07:00:21,487 - INFO - allennlp.common.params - data_loader.sampler = None
2021-01-21 07:00:21,487 - INFO - allennlp.common.params - data_loader.batch_sampler = None
2021-01-21 07:00:21,487 - INFO - allennlp.common.params - data_loader.num_workers = 1
2021-01-21 07:00:21,487 - INFO - allennlp.common.params - data_loader.pin_memory = False
2021-01-21 07:00:21,487 - INFO - allennlp.common.params - data_loader.drop_last = True
2021-01-21 07:00:21,487 - INFO - allennlp.common.params - data_loader.timeout = 0
2021-01-21 07:00:21,487 - INFO - allennlp.common.params - data_loader.worker_init_fn = None
2021-01-21 07:00:21,487 - INFO - allennlp.common.params - data_loader.multiprocessing_context = None
2021-01-21 07:00:21,487 - INFO - allennlp.common.params - data_loader.batches_per_epoch = 8912
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
2021-01-21 07:00:21,505 - INFO - allennlp.common.params - trainer.type = gradient_descent
2021-01-21 07:00:21,507 - INFO - allennlp.common.params - trainer.patience = None
2021-01-21 07:00:21,507 - INFO - allennlp.common.params - trainer.validation_metric = -loss
2021-01-21 07:00:21,508 - INFO - allennlp.common.params - trainer.num_epochs = 1
2021-01-21 07:00:21,508 - INFO - allennlp.common.params - trainer.cuda_device = None
2021-01-21 07:00:21,508 - INFO - allennlp.common.params - trainer.grad_norm = 1
2021-01-21 07:00:21,508 - INFO - allennlp.common.params - trainer.grad_clipping = None
2021-01-21 07:00:21,508 - INFO - allennlp.common.params - trainer.distributed = None
2021-01-21 07:00:21,508 - INFO - allennlp.common.params - trainer.world_size = 1
2021-01-21 07:00:21,508 - INFO - allennlp.common.params - trainer.num_gradient_accumulation_steps = 1
2021-01-21 07:00:21,508 - INFO - allennlp.common.params - trainer.use_amp = True
2021-01-21 07:00:21,509 - INFO - allennlp.common.params - trainer.no_grad = None
/usr/local/lib/python3.6/dist-packages/allennlp/data/dataset_readers/dataset_reader.py:371: UserWarning: Using multi-process data loading without setting DatasetReader.manual_multi_process_sharding to True.
Did you forget to set this?
If you're not handling the multi-process sharding logic within your _read() method, there is probably no benefit to using more than one worker.
UserWarning,
2021-01-21 07:00:21,511 - INFO - allennlp.common.params - trainer.momentum_scheduler = None
2021-01-21 07:00:21,512 - INFO - allennlp.common.params - trainer.tensorboard_writer = None
2021-01-21 07:00:21,512 - INFO - allennlp.common.params - trainer.moving_average = None
reading instances: 0it [00:00, ?it/s]2021-01-21 07:00:21,512 - INFO - allennlp.common.params - trainer.batch_callbacks = None
2021-01-21 07:00:21,512 - INFO - allennlp.common.params - trainer.epoch_callbacks = None
2021-01-21 07:00:21,513 - INFO - declutr.dataset_reader - Reading instances from lines in file at: wikitext_103/train.txt
2021-01-21 07:00:21,514 - INFO - allennlp.common.params - trainer.optimizer.type = huggingface_adamw
2021-01-21 07:00:21,515 - INFO - allennlp.common.params - trainer.optimizer.lr = 5e-05
2021-01-21 07:00:21,515 - INFO - allennlp.common.params - trainer.optimizer.betas = (0.9, 0.999)
2021-01-21 07:00:21,515 - INFO - allennlp.common.params - trainer.optimizer.eps = 1e-06
2021-01-21 07:00:21,515 - INFO - allennlp.common.params - trainer.optimizer.weight_decay = 0
2021-01-21 07:00:21,515 - INFO - allennlp.common.params - trainer.optimizer.correct_bias = False
2021-01-21 07:00:21,520 - INFO - allennlp.training.optimizers - Done constructing parameter groups.
2021-01-21 07:00:21,520 - INFO - allennlp.training.optimizers - Group 0: ['_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.intermediate.dense.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.attention.self.query.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.attention.self.value.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.attention.self.value.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.attention.output.dense.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.attention.self.key.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.output.dense.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.attention.self.key.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.output.dense.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.attention.self.value.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.intermediate.dense.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.attention.self.value.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.attention.output.dense.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.embeddings.position_embeddings.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.embeddings.token_type_embeddings.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.attention.self.value.weight', 
'_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.attention.self.query.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.attention.self.key.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.attention.output.dense.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.attention.self.query.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.intermediate.dense.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.attention.self.value.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.intermediate.dense.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.attention.self.query.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.attention.self.key.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.embeddings.word_embeddings.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.attention.output.dense.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.pooler.dense.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.output.dense.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.output.dense.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.intermediate.dense.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.output.dense.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.attention.self.query.weight', 
'_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.attention.output.dense.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.intermediate.dense.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.attention.self.key.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.lm_head.layer_norm.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.lm_head.dense.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.output.dense.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.attention.self.query.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.attention.output.dense.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.attention.self.key.weight'], {'weight_decay': 0.1}
2021-01-21 07:00:21,521 - INFO - allennlp.training.optimizers - Group 1: ['_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.output.LayerNorm.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.attention.self.key.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.output.dense.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.attention.output.LayerNorm.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.attention.self.query.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.intermediate.dense.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.attention.self.key.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.attention.self.key.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.output.dense.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.attention.output.LayerNorm.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.output.LayerNorm.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.attention.self.key.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.lm_head.layer_norm.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.output.LayerNorm.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.embeddings.LayerNorm.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.attention.self.value.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.attention.self.query.bias', 
'_text_field_embedder.token_embedder_tokens.transformer_model.roberta.embeddings.LayerNorm.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.output.LayerNorm.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.attention.output.LayerNorm.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.attention.self.value.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.intermediate.dense.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.intermediate.dense.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.attention.self.key.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.output.dense.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.attention.output.LayerNorm.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.attention.output.dense.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.output.dense.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.attention.self.query.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.lm_head.dense.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.attention.self.value.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.attention.self.query.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.attention.output.LayerNorm.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.output.LayerNorm.bias', 
'_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.attention.output.LayerNorm.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.attention.self.query.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.pooler.dense.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.attention.self.key.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.output.LayerNorm.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.intermediate.dense.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.attention.self.value.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.output.LayerNorm.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.attention.self.query.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.output.LayerNorm.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.attention.output.dense.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.intermediate.dense.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.output.LayerNorm.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.attention.output.LayerNorm.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.attention.output.dense.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.attention.self.value.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.attention.output.dense.bias', 
'_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.attention.output.LayerNorm.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.lm_head.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.intermediate.dense.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.attention.output.LayerNorm.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.attention.output.LayerNorm.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.attention.output.LayerNorm.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.attention.output.dense.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.attention.output.LayerNorm.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.output.dense.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.output.LayerNorm.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.output.LayerNorm.weight', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.attention.self.value.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.output.LayerNorm.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.attention.output.dense.bias', '_text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.output.dense.bias'], {}
2021-01-21 07:00:21,597 - INFO - allennlp.training.optimizers - Number of trainable parameters: 82760793
2021-01-21 07:00:21,602 - INFO - allennlp.common.util - The following parameters are Frozen (without gradient):
2021-01-21 07:00:21,603 - INFO - allennlp.common.util - The following parameters are Tunable (with gradient):
2021-01-21 07:00:21,603 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.embeddings.word_embeddings.weight
2021-01-21 07:00:21,603 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.embeddings.position_embeddings.weight
2021-01-21 07:00:21,603 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.embeddings.token_type_embeddings.weight
2021-01-21 07:00:21,603 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.embeddings.LayerNorm.weight
2021-01-21 07:00:21,603 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.embeddings.LayerNorm.bias
2021-01-21 07:00:21,603 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.attention.self.query.weight
2021-01-21 07:00:21,604 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.attention.self.query.bias
2021-01-21 07:00:21,604 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.attention.self.key.weight
2021-01-21 07:00:21,604 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.attention.self.key.bias
2021-01-21 07:00:21,604 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.attention.self.value.weight
2021-01-21 07:00:21,604 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.attention.self.value.bias
2021-01-21 07:00:21,604 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.attention.output.dense.weight
2021-01-21 07:00:21,604 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.attention.output.dense.bias
2021-01-21 07:00:21,604 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.attention.output.LayerNorm.weight
2021-01-21 07:00:21,604 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.attention.output.LayerNorm.bias
2021-01-21 07:00:21,604 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.intermediate.dense.weight
2021-01-21 07:00:21,604 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.intermediate.dense.bias
2021-01-21 07:00:21,604 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.output.dense.weight
2021-01-21 07:00:21,604 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.output.dense.bias
2021-01-21 07:00:21,604 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.output.LayerNorm.weight
2021-01-21 07:00:21,605 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.0.output.LayerNorm.bias
2021-01-21 07:00:21,605 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.attention.self.query.weight
2021-01-21 07:00:21,605 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.attention.self.query.bias
2021-01-21 07:00:21,605 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.attention.self.key.weight
2021-01-21 07:00:21,605 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.attention.self.key.bias
2021-01-21 07:00:21,605 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.attention.self.value.weight
2021-01-21 07:00:21,605 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.attention.self.value.bias
2021-01-21 07:00:21,605 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.attention.output.dense.weight
2021-01-21 07:00:21,605 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.attention.output.dense.bias
2021-01-21 07:00:21,605 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.attention.output.LayerNorm.weight
2021-01-21 07:00:21,605 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.attention.output.LayerNorm.bias
2021-01-21 07:00:21,605 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.intermediate.dense.weight
2021-01-21 07:00:21,605 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.intermediate.dense.bias
2021-01-21 07:00:21,605 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.output.dense.weight
2021-01-21 07:00:21,606 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.output.dense.bias
2021-01-21 07:00:21,606 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.output.LayerNorm.weight
2021-01-21 07:00:21,606 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.1.output.LayerNorm.bias
2021-01-21 07:00:21,606 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.attention.self.query.weight
2021-01-21 07:00:21,606 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.attention.self.query.bias
2021-01-21 07:00:21,606 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.attention.self.key.weight
2021-01-21 07:00:21,606 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.attention.self.key.bias
2021-01-21 07:00:21,606 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.attention.self.value.weight
2021-01-21 07:00:21,606 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.attention.self.value.bias
2021-01-21 07:00:21,606 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.attention.output.dense.weight
2021-01-21 07:00:21,606 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.attention.output.dense.bias
2021-01-21 07:00:21,606 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.attention.output.LayerNorm.weight
2021-01-21 07:00:21,606 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.attention.output.LayerNorm.bias
2021-01-21 07:00:21,607 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.intermediate.dense.weight
2021-01-21 07:00:21,607 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.intermediate.dense.bias
2021-01-21 07:00:21,607 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.output.dense.weight
2021-01-21 07:00:21,607 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.output.dense.bias
2021-01-21 07:00:21,607 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.output.LayerNorm.weight
2021-01-21 07:00:21,607 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.2.output.LayerNorm.bias
2021-01-21 07:00:21,607 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.attention.self.query.weight
2021-01-21 07:00:21,607 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.attention.self.query.bias
2021-01-21 07:00:21,607 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.attention.self.key.weight
2021-01-21 07:00:21,607 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.attention.self.key.bias
2021-01-21 07:00:21,607 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.attention.self.value.weight
2021-01-21 07:00:21,607 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.attention.self.value.bias
2021-01-21 07:00:21,607 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.attention.output.dense.weight
2021-01-21 07:00:21,607 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.attention.output.dense.bias
2021-01-21 07:00:21,608 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.attention.output.LayerNorm.weight
2021-01-21 07:00:21,608 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.attention.output.LayerNorm.bias
2021-01-21 07:00:21,608 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.intermediate.dense.weight
2021-01-21 07:00:21,608 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.intermediate.dense.bias
2021-01-21 07:00:21,608 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.output.dense.weight
2021-01-21 07:00:21,608 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.output.dense.bias
2021-01-21 07:00:21,608 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.output.LayerNorm.weight
2021-01-21 07:00:21,608 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.3.output.LayerNorm.bias
2021-01-21 07:00:21,608 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.attention.self.query.weight
2021-01-21 07:00:21,608 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.attention.self.query.bias
2021-01-21 07:00:21,608 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.attention.self.key.weight
2021-01-21 07:00:21,608 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.attention.self.key.bias
2021-01-21 07:00:21,609 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.attention.self.value.weight
2021-01-21 07:00:21,609 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.attention.self.value.bias
2021-01-21 07:00:21,609 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.attention.output.dense.weight
2021-01-21 07:00:21,609 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.attention.output.dense.bias
2021-01-21 07:00:21,609 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.attention.output.LayerNorm.weight
2021-01-21 07:00:21,609 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.attention.output.LayerNorm.bias
2021-01-21 07:00:21,609 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.intermediate.dense.weight
2021-01-21 07:00:21,609 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.intermediate.dense.bias
2021-01-21 07:00:21,609 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.output.dense.weight
2021-01-21 07:00:21,609 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.output.dense.bias
2021-01-21 07:00:21,609 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.output.LayerNorm.weight
2021-01-21 07:00:21,609 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.4.output.LayerNorm.bias
2021-01-21 07:00:21,609 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.attention.self.query.weight
2021-01-21 07:00:21,609 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.attention.self.query.bias
2021-01-21 07:00:21,610 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.attention.self.key.weight
2021-01-21 07:00:21,610 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.attention.self.key.bias
2021-01-21 07:00:21,610 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.attention.self.value.weight
2021-01-21 07:00:21,610 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.attention.self.value.bias
2021-01-21 07:00:21,610 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.attention.output.dense.weight
2021-01-21 07:00:21,610 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.attention.output.dense.bias
2021-01-21 07:00:21,610 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.attention.output.LayerNorm.weight
2021-01-21 07:00:21,610 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.attention.output.LayerNorm.bias
2021-01-21 07:00:21,610 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.intermediate.dense.weight
2021-01-21 07:00:21,610 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.intermediate.dense.bias
2021-01-21 07:00:21,704 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.output.dense.weight
2021-01-21 07:00:21,705 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.output.dense.bias
2021-01-21 07:00:21,705 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.output.LayerNorm.weight
2021-01-21 07:00:21,705 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.encoder.layer.5.output.LayerNorm.bias
2021-01-21 07:00:21,705 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.pooler.dense.weight
2021-01-21 07:00:21,705 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.roberta.pooler.dense.bias
2021-01-21 07:00:21,705 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.lm_head.bias
2021-01-21 07:00:21,706 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.lm_head.dense.weight
2021-01-21 07:00:21,706 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.lm_head.dense.bias
2021-01-21 07:00:21,706 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.lm_head.layer_norm.weight
2021-01-21 07:00:21,706 - INFO - allennlp.common.util - _text_field_embedder.token_embedder_tokens.transformer_model.lm_head.layer_norm.bias
2021-01-21 07:00:21,706 - INFO - allennlp.common.params - trainer.learning_rate_scheduler.type = slanted_triangular
2021-01-21 07:00:21,707 - INFO - allennlp.common.params - trainer.learning_rate_scheduler.cut_frac = 0.1
2021-01-21 07:00:21,708 - INFO - allennlp.common.params - trainer.learning_rate_scheduler.ratio = 32
2021-01-21 07:00:21,709 - INFO - allennlp.common.params - trainer.learning_rate_scheduler.last_epoch = -1
2021-01-21 07:00:21,709 - INFO - allennlp.common.params - trainer.learning_rate_scheduler.gradual_unfreezing = False
2021-01-21 07:00:21,709 - INFO - allennlp.common.params - trainer.learning_rate_scheduler.discriminative_fine_tuning = False
2021-01-21 07:00:21,710 - INFO - allennlp.common.params - trainer.learning_rate_scheduler.decay_factor = 0.38
2021-01-21 07:00:21,710 - INFO - allennlp.common.params - trainer.checkpointer.type = default
2021-01-21 07:00:21,712 - INFO - allennlp.common.params - trainer.checkpointer.keep_serialized_model_every_num_seconds = None
2021-01-21 07:00:21,713 - INFO - allennlp.common.params - trainer.checkpointer.num_serialized_models_to_keep = -1
2021-01-21 07:00:21,718 - INFO - allennlp.common.params - trainer.checkpointer.model_save_interval = None
2021-01-21 07:00:21,723 - CRITICAL - root - Uncaught exception
Traceback (most recent call last):
  File "/usr/local/bin/allennlp", line 8, in <module>
    sys.exit(run())
  File "/usr/local/lib/python3.6/dist-packages/allennlp/__main__.py", line 34, in run
    main(prog="allennlp")
  File "/usr/local/lib/python3.6/dist-packages/allennlp/commands/__init__.py", line 92, in main
    args.func(args)
  File "/usr/local/lib/python3.6/dist-packages/allennlp/commands/train.py", line 118, in train_model_from_args
    file_friendly_logging=args.file_friendly_logging,
  File "/usr/local/lib/python3.6/dist-packages/allennlp/commands/train.py", line 177, in train_model_from_file
    file_friendly_logging=file_friendly_logging,
  File "/usr/local/lib/python3.6/dist-packages/allennlp/commands/train.py", line 238, in train_model
    file_friendly_logging=file_friendly_logging,
  File "/usr/local/lib/python3.6/dist-packages/allennlp/commands/train.py", line 433, in _train_worker
    local_rank=process_rank,
  File "/usr/local/lib/python3.6/dist-packages/allennlp/common/from_params.py", line 595, in from_params
    **extras,
  File "/usr/local/lib/python3.6/dist-packages/allennlp/common/from_params.py", line 624, in from_params
    return constructor_to_call(**kwargs)  # type: ignore
  File "/usr/local/lib/python3.6/dist-packages/allennlp/commands/train.py", line 689, in from_partial_objects
    validation_data_loader=validation_data_loader_,
  File "/usr/local/lib/python3.6/dist-packages/allennlp/common/lazy.py", line 46, in construct
    return self._constructor(**kwargs)
  File "/usr/local/lib/python3.6/dist-packages/allennlp/common/from_params.py", line 461, in constructor
    return value_cls.from_params(params=deepcopy(popped_params), **constructor_extras)
  File "/usr/local/lib/python3.6/dist-packages/allennlp/common/from_params.py", line 595, in from_params
    **extras,
  File "/usr/local/lib/python3.6/dist-packages/allennlp/common/from_params.py", line 624, in from_params
    return constructor_to_call(**kwargs)  # type: ignore
  File "/usr/local/lib/python3.6/dist-packages/allennlp/training/trainer.py", line 1174, in from_partial_objects
    use_amp=use_amp,
  File "/usr/local/lib/python3.6/dist-packages/allennlp/training/trainer.py", line 437, in __init__
    raise ValueError("Using AMP requires a cuda device")
ValueError: Using AMP requires a cuda device
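For context on why this can fire even on a GPU-enabled Colab runtime: judging from the traceback, the guard in `allennlp/training/trainer.py` (line 437) rejects mixed-precision training whenever the trainer itself was not constructed with a CUDA device index, regardless of whether a GPU is physically present. The sketch below is an illustrative paraphrase of that check, not the library's actual signature; the function and parameter names are hypothetical.

```python
# Minimal sketch of the AMP guard seen in the traceback above.
# In AllenNLP configs, cuda_device = -1 means "run on CPU", so the
# error fires when use_amp is true but no GPU index was configured,
# even if the machine has a GPU that was never selected.
def check_amp_device(use_amp: bool, cuda_device: int) -> None:
    if use_amp and cuda_device < 0:
        raise ValueError("Using AMP requires a cuda device")

check_amp_device(use_amp=True, cuda_device=0)  # first GPU selected: passes
try:
    check_amp_device(use_amp=True, cuda_device=-1)  # CPU: reproduces the error
except ValueError as err:
    print(err)  # -> Using AMP requires a cuda device
```

If this reading is right, the likely cause is that the training config used by the notebook leaves `trainer.cuda_device` at its CPU default while keeping `use_amp: true`. Explicitly selecting the GPU, e.g. by passing `--overrides '{"trainer": {"cuda_device": 0}}'` to `allennlp train` (or disabling `use_amp`), should avoid the check; which of those the sample notebook intends is an assumption on my part.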