model_baseline.load_state_dict(checkpoint['model_baseline']) throws an error complaining about missing/unexpected keys. After renaming the keys in the OrderedDict from "module.start_placeholder", "module.input_emb.weight", etc. to "start_placeholder", "input_emb.weight", etc. (stripping the "module." prefix), I now get the following size mismatches instead:
RuntimeError: Error(s) in loading state_dict for TSP_net:
size mismatch for encoder.norm1_layers.1.weight: copying a param with shape torch.Size([]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for encoder.norm1_layers.2.bias: copying a param with shape torch.Size([]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for encoder.norm1_layers.3.running_mean: copying a param with shape torch.Size([]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for encoder.norm1_layers.4.running_var: copying a param with shape torch.Size([]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for encoder.norm2_layers.0.weight: copying a param with shape torch.Size([]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for encoder.norm2_layers.1.bias: copying a param with shape torch.Size([]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for encoder.norm2_layers.2.running_mean: copying a param with shape torch.Size([]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for encoder.norm2_layers.3.running_var: copying a param with shape torch.Size([]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for encoder.norm2_layers.5.weight: copying a param with shape torch.Size([]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for decoder.decoder_layers.0.Wq_selfatt.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([128, 128]).
size mismatch for decoder.decoder_layers.0.Wq_selfatt.bias: copying a param with shape torch.Size([]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for decoder.decoder_layers.0.Wk_selfatt.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([128, 128]).
size mismatch for decoder.decoder_layers.0.Wv_selfatt.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([128, 128]).
size mismatch for decoder.decoder_layers.0.W0_selfatt.weight: copying a param with shape torch.Size([]) from checkpoint, the shape in current model is torch.Size([128, 128]).
size mismatch for decoder.decoder_layers.0.W0_att.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([128, 128]).
size mismatch for decoder.decoder_layers.0.Wq_att.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([128, 128]).
size mismatch for decoder.decoder_layers.0.Wq_att.bias: copying a param with shape torch.Size([]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for decoder.decoder_layers.0.BN_selfatt.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for decoder.decoder_layers.0.BN_att.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for decoder.decoder_layers.0.BN_MLP.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for WK_att_decoder.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([256, 128]).
size mismatch for WK_att_decoder.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for WV_att_decoder.weight: copying a param with shape torch.Size([128, 128]) from checkpoint, the shape in current model is torch.Size([256, 128]).
size mismatch for WV_att_decoder.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
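For reference, this is roughly how the renaming step can be done while preserving both key order and the key-to-tensor pairing (a sketch; the toy state dict below stands in for the real checkpoint['model_baseline'] loaded with torch.load, and the tensor shapes are illustrative only):

```python
from collections import OrderedDict
import torch

# Toy stand-in for checkpoint['model_baseline'];
# the real dict comes from torch.load(...)
state = OrderedDict([
    ('module.start_placeholder', torch.zeros(128)),
    ('module.input_emb.weight', torch.zeros(128, 2)),
])

# Strip the "module." prefix that nn.DataParallel adds,
# rebuilding the OrderedDict pair by pair so each tensor
# stays attached to its own (renamed) key
renamed = OrderedDict(
    (k[len('module.'):] if k.startswith('module.') else k, v)
    for k, v in state.items()
)

print(list(renamed.keys()))
```

Renaming pairwise like this avoids any accidental shift between the list of new keys and the list of values, which would otherwise show up exactly as scalar-vs-vector shape mismatches like the ones above.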