This is the code for the MLRC2020 challenge, based on the ACL 2020 paper Improving Multi-hop Question Answering over Knowledge Graphs using Knowledge Base Embeddings.
Originally posted by Joyrocky October 14, 2021
I just added the SBert question embedding on top of the original paper's model, but I get much worse results. Could someone tell me what the problem is? Is something not right?
When I choose RoBERTa as the question embedding model, in the getQuestionEmbedding() function in the model.py file, all values in last_hidden_states are NaN. Could you please explain?
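When debugging a NaN issue like this, the first step is usually to locate exactly where the NaNs appear in the model output. Here is a minimal pure-Python sketch of such a check; `find_nans` is a hypothetical helper (not part of the EmbedKGQA repo), shown with nested lists standing in for the tensor returned by the encoder.

```python
import math

def find_nans(values, path="last_hidden_states"):
    """Recursively report the positions of NaN values inside a nested
    list (a stand-in for a tensor). Hypothetical debugging helper,
    not part of the original model.py."""
    hits = []
    if isinstance(values, float):
        if math.isnan(values):
            hits.append(path)
    elif isinstance(values, (list, tuple)):
        for i, item in enumerate(values):
            hits.extend(find_nans(item, f"{path}[{i}]"))
    return hits

# Example: one NaN hidden in a 2x2 "tensor"
print(find_nans([[0.1, float("nan")], [0.3, 0.4]]))
# -> ['last_hidden_states[0][1]']
```

With PyTorch tensors the same check is a one-liner (`torch.isnan(t).any()`); if every entry is NaN, the cause is typically upstream of this layer (e.g. the inputs or the loss already diverged), rather than in `getQuestionEmbedding()` itself.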
According to the main paper, we have two options for question embedding: LSTM and RoBERTa, right?
And here you show that using SBert works better than the previous ones.
Am I missing some information?
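For context on why SBert can behave differently from raw RoBERTa states: SBert-style sentence embeddings are typically produced by mean-pooling the token states, weighted by the attention mask so padding tokens are ignored. Below is a pure-Python sketch of that pooling idea (the actual repo and sentence-transformers work on PyTorch tensors; the numbers here are made up for illustration).

```python
def mean_pool(token_states, attention_mask):
    """SBert-style mean pooling: average the per-token vectors,
    counting only non-padding positions (attention_mask == 1).
    Pure-Python sketch of the idea, not the repo's implementation."""
    dim = len(token_states[0])
    sums = [0.0] * dim
    count = 0
    for vec, mask in zip(token_states, attention_mask):
        if mask:
            count += 1
            for j, v in enumerate(vec):
                sums[j] += v
    # Guard against an all-padding input
    return [s / max(count, 1) for s in sums]

# Two real tokens plus one padding token; the padded vector is ignored
print(mean_pool([[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]], [1, 1, 0]))
# -> [2.0, 3.0]
```

This masked averaging is one reason a sentence-level embedding can be more stable as a question representation than taking hidden states from the encoder directly.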
Nice work @jishnujayakumar. I was going to use SBert on EmbedKGQA, but you already did it, and you also tried more models, like Longformer. Could you also publish the results for each combination you tested? I was going to use SBert + ComplEx; if your experiments found a better combination, maybe I will go with that instead.