/home/adrian/repositories/lightning-llama/lit_llama/model.py:43: UserWarning: ComplexHalf support is experimental and many operators don't support it yet. (Triggered internally at ../aten/src/ATen/EmptyTensor.cpp:31.)
).to(complex_dtype)
Traceback (most recent call last):
  File "/home/adrian/repositories/lightning-llama/finetune_adapter.py", line 201, in <module>
    main()
  File "/home/adrian/repositories/lightning-llama/finetune_adapter.py", line 67, in main
    train(fabric, model, optimizer, train_data, val_data)
  File "/home/adrian/repositories/lightning-llama/finetune_adapter.py", line 97, in train
    fabric.backward(loss / gradient_accumulation_steps)
  File "/home/adrian/anaconda3/envs/lit-llama/lib/python3.10/site-packages/lightning/fabric/fabric.py", line 365, in backward
    self._precision.backward(tensor, module, *args, **kwargs)
  File "/home/adrian/anaconda3/envs/lit-llama/lib/python3.10/site-packages/lightning/fabric/plugins/precision/amp.py", line 70, in backward
    super().backward(tensor, model, *args, **kwargs)
  File "/home/adrian/anaconda3/envs/lit-llama/lib/python3.10/site-packages/lightning/fabric/plugins/precision/precision.py", line 81, in backward
    tensor.backward(*args, **kwargs)
  File "/home/adrian/anaconda3/envs/lit-llama/lib/python3.10/site-packages/torch/_tensor.py", line 487, in backward
    torch.autograd.backward(
  File "/home/adrian/anaconda3/envs/lit-llama/lib/python3.10/site-packages/torch/autograd/__init__.py", line 200, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Expected is_sm80 to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
The fine-tuning run fails with this error.
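For context, the "Expected is_sm80 to be true" message typically comes from PyTorch's fused flash-attention kernel, which for some dtype/configuration combinations is only implemented for GPUs with compute capability 8.0 (sm_80, e.g. A100) or newer. A possible workaround, sketched below under that assumption, is to check the device's compute capability and disable the flash kernel on older GPUs so `scaled_dot_product_attention` falls back to another implementation. The `is_sm80_or_newer` helper is hypothetical; `torch.backends.cuda.enable_flash_sdp` is the PyTorch 2.0 toggle for the flash backend.

```python
def is_sm80_or_newer(major: int, minor: int) -> bool:
    """True if a GPU with this compute capability can run the fused flash kernel."""
    return (major, minor) >= (8, 0)


try:
    import torch

    if torch.cuda.is_available():
        major, minor = torch.cuda.get_device_capability()
        if not is_sm80_or_newer(major, minor):
            # Pre-Ampere GPU: turn off the flash backend so attention falls
            # back to the math/memory-efficient kernels instead of erroring.
            torch.backends.cuda.enable_flash_sdp(False)
except ImportError:
    # torch not installed; the capability check above still works standalone.
    pass
```

Whether this avoids the crash depends on which kernel the model actually hits; an alternative is simply to run the fine-tuning script on an sm_80-class GPU.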