When I sync down the project and run `python -m pytest`, one test fails.
The output is included below.
```
======================================================================================================= test session starts =======================================================================================================
platform darwin -- Python 3.6.3, pytest-3.5.1, py-1.5.3, pluggy-0.6.0
rootdir: /Users/paul.murphy/PycharmProjects/second-attempt/allennlp-as-a-library-example, inifile: pytest.ini
plugins: pythonpath-0.7.2, cov-2.5.1, flaky-3.4.0
collected 3 items
tests/dataset_readers/semantic_scholar_dataset_reader_test.py . [ 33%]
tests/models/academic_paper_classifier_test.py F [ 66%]
tests/predictors/predictor_test.py . [100%]
============================================================================================================ FAILURES =============================================================================================================
_________________________________________________________________________________ AcademicPaperClassifierTest.test_model_can_train_save_and_load __________________________________________________________________________________
self = <models.academic_paper_classifier_test.AcademicPaperClassifierTest testMethod=test_model_can_train_save_and_load>
    def test_model_can_train_save_and_load(self):
        self.ensure_model_can_train_save_and_load(self.param_file)
tests/models/academic_paper_classifier_test.py:12:
../../untitled/venv/lib/python3.6/site-packages/allennlp/common/testing/model_test_case.py:81: in ensure_model_can_train_save_and_load
self.check_model_computes_gradients_correctly(model, model_batch)
model = AcademicPaperClassifier(
(text_field_embedder): BasicTextFieldEmbedder(
(token_embedder_tokens): Embedding(
...(_dropout): ModuleList(
(0): Dropout(p=0.2)
(1): Dropout(p=0.0)
)
)
(loss): CrossEntropyLoss(
)
)
model_batch = {'abstract': {'tokens': Variable containing:
18 80 6 ... 0 0 0
18 80 6 ... 0 ... 0 0
237 612 238 4 613 614 14 239 615 616 0 0
[torch.LongTensor of size 10x12]
}}
    @staticmethod
    def check_model_computes_gradients_correctly(model, model_batch):
        model.zero_grad()
        result = model(**model_batch)
        result["loss"].backward()
        has_zero_or_none_grads = {}
        for name, parameter in model.named_parameters():
            zeros = torch.zeros(parameter.size())
            if parameter.requires_grad:
                if parameter.grad is None:
                    has_zero_or_none_grads[name] = "No gradient computed (i.e parameter.grad is None)"
                # Some parameters will only be partially updated,
                # like embeddings, so we just check that any gradient is non-zero.
                if (parameter.grad.data.cpu() == zeros).all():
                    has_zero_or_none_grads[name] = f"zeros with shape ({tuple(parameter.grad.size())})"
            else:
                assert parameter.grad is None
        if has_zero_or_none_grads:
            for name, grad in has_zero_or_none_grads.items():
                print(f"Parameter: {name} had incorrect gradient: {grad}")
            raise Exception("Incorrect gradients found. See stdout for more info.")
E Exception: Incorrect gradients found. See stdout for more info.
../../untitled/venv/lib/python3.6/site-packages/allennlp/common/testing/model_test_case.py:161: Exception
------------------------------------------------------------------------------------------------------ Captured stdout call -------------------------------------------------------------------------------------------------------
Parameter: classifier_feedforward._linear_layers.0.weight had incorrect gradient: zeros with shape ((2, 4))
Parameter: classifier_feedforward._linear_layers.0.bias had incorrect gradient: zeros with shape ((2,))
Parameter: classifier_feedforward._linear_layers.1.weight had incorrect gradient: zeros with shape ((3, 2))
------------------------------------------------------------------------------------------------------ Captured stderr call -------------------------------------------------------------------------------------------------------
10it [00:00, 388.59it/s]
100%|██████████| 10/10 [00:00<00:00, 2317.55it/s]
10it [00:00, 557.01it/s]
10it [00:00, 551.77it/s]
20it [00:00, 2460.00it/s]
accuracy: 0.4000, accuracy3: 1.0000, loss: 1.0902 ||: 100%|##########| 1/1 [00:00<00:00, 50.55it/s]
accuracy: 0.4000, accuracy3: 1.0000, loss: 1.0898 ||: 100%|##########| 1/1 [00:00<00:00, 88.64it/s]
10it [00:00, 530.95it/s]
10it [00:00, 561.45it/s]
-------------------------------------------------------------------------------------------------------- Captured log call --------------------------------------------------------------------------------------------------------
bucket_iterator.py 92 WARNING shuffle parameter is set to False, while bucket iterators by definition change the order of your data.
bucket_iterator.py 92 WARNING shuffle parameter is set to False, while bucket iterators by definition change the order of your data.
===Flaky Test Report===
===End Flaky Test Report===
=============================================================================================== 1 failed, 2 passed in 3.06 seconds ================================================================================================
```
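For context, the check that fails here boils down to: after one backward pass, every trainable parameter must have a non-None, not-all-zero gradient. Here is a minimal, framework-free sketch of that logic (`FakeParam` and `find_bad_grads` are illustrative names of my own, not AllenNLP or PyTorch APIs):

```python
# Framework-free sketch of the gradient check performed by
# check_model_computes_gradients_correctly in the traceback above.
# FakeParam is a stand-in for a torch.nn.Parameter: it just carries
# a gradient (a list of floats, or None) and a requires_grad flag.

class FakeParam:
    def __init__(self, grad, requires_grad=True):
        self.grad = grad
        self.requires_grad = requires_grad

def find_bad_grads(named_params):
    """Return {name: reason} for every trainable parameter whose
    gradient is missing or all-zero -- the same condition that
    makes the AllenNLP test raise."""
    bad = {}
    for name, param in named_params.items():
        if not param.requires_grad:
            continue  # frozen parameters are allowed to have no gradient
        if param.grad is None:
            bad[name] = "no gradient computed"
        elif all(g == 0.0 for g in param.grad):
            bad[name] = "all-zero gradient"
    return bad

params = {
    "encoder.weight": FakeParam([0.1, -0.2]),                  # fine
    "classifier_feedforward.0.weight": FakeParam([0.0, 0.0]),  # not reached by the loss
    "frozen.bias": FakeParam(None, requires_grad=False),       # ignored
}
print(find_bad_grads(params))
# {'classifier_feedforward.0.weight': 'all-zero gradient'}
```

All-zero gradients on `classifier_feedforward` layers, as in the captured stdout, usually mean those layers' outputs never reach the loss in the forward pass.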