I very much enjoyed reading your paper and I am trying similar things for Icelandic PoS tagging and lemmatization.
The performance of my model for lemmatization is not what I would expect. I tried implementing a similar RNN decoder for lemmatization (in PyTorch) as the one you describe in the paper, but putting the pieces together from the referenced papers (Bahdanau et al., 2014 and Luong et al., 2015) proved to be difficult. Therefore, I turned to the code. Thank you for releasing it!
I am not that used to TensorFlow, but from what I understand, the inputs to the RNN decoder are `word_rnn_outputs` (O^w_i in the paper), `tag_feats` (T_i), `word_cle_states`, and attention over `word_cle_outputs` (e^c_{i...}), along with the previously predicted character, which I assume TF handles for you, as it is not clear from the code.
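To make sure I am reading the inputs correctly, here is how I currently assemble the decoder input in my own PyTorch code (all dimensions and variable names here are my placeholders, not taken from your repo):

```python
import torch

batch, hidden_dim, char_emb_dim = 2, 256, 64  # placeholder sizes

prev_char_emb   = torch.zeros(batch, char_emb_dim)  # embedding of the previously predicted character
word_rnn_output = torch.zeros(batch, hidden_dim)    # O^w_i  (word_rnn_outputs)
tag_feats       = torch.zeros(batch, hidden_dim)    # T_i    (tag_feats)
word_cle_state  = torch.zeros(batch, hidden_dim)    # word_cle_states -- e^w_i, or just the
                                                    # last char-RNN state? (first question below)

# If word_cle_states is really e^w_i, I would have expected the sum from the paper:
#   e_w = word_embedding + char_rnn_last_state
# rather than the last char-RNN state alone.

decoder_input = torch.cat(
    [prev_char_emb, word_rnn_output, tag_feats, word_cle_state], dim=-1
)
print(decoder_input.shape)  # torch.Size([2, 832])
```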
There are a few things that are not clear to me, even after reading the code.
- What does `word_cle_states` stand for? According to the paper it should be e^w_i (the last state of the character RNN summed with a word embedding), as it is part of the input to the RNN decoder, but from the code it seems to be simply the last state of the character RNN. Is this correct?
- The initial hidden state of the RNN decoder is said to be O^w_i in the paper, but in the code it also seems to be this mysterious `word_cle_states`. Am I correct in understanding that the initial hidden state of the RNN decoder is the last state of the character RNN? That could also make sense.
- Is the previous hidden state of the RNN decoder used to calculate the multiplicative attention (called "dot" in Luong et al.), or is the attention computed after calculating the next hidden state and then fed to the `decoder_layer`? (I sketch both readings after this list.)
- Is the output of the RNN decoder simply mapped linearly to a dimension of the correct size to predict characters (`decoder_layer`)?
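To make the last two questions concrete, here is the Luong-style reading as I would implement it in PyTorch (class and variable names are mine, not from your repo, and the `2 * hidden_dim` input to `decoder_layer` is a guess):

```python
import torch
import torch.nn as nn

class LemmaDecoderStep(nn.Module):
    """One decoding step as I currently understand it (my own sketch)."""

    def __init__(self, input_dim, hidden_dim, vocab_size):
        super().__init__()
        self.cell = nn.GRUCell(input_dim, hidden_dim)
        # "dot" attention from Luong et al. (2015) requires the char-RNN outputs
        # to have the same dimensionality as the decoder state. I am unsure
        # whether decoder_layer sees [h; context] or only h, hence the
        # 2 * hidden_dim guess here.
        self.decoder_layer = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, x, h_prev, char_outputs):
        # Reading 1 (Luong-style): step the RNN first, then attend with the NEW state.
        h = self.cell(x, h_prev)                          # (batch, hidden_dim)
        scores = torch.bmm(char_outputs, h.unsqueeze(2))  # (batch, n_chars, 1), dot scores
        weights = torch.softmax(scores, dim=1)            # attention over e^c_{i...}
        context = (weights * char_outputs).sum(dim=1)     # (batch, hidden_dim)
        # Reading 2 would instead attend with h_prev and feed the context into
        # self.cell as part of x, before computing h.
        logits = self.decoder_layer(torch.cat([h, context], dim=-1))
        return logits, h
```

If reading 1 is what your TensorFlow code does, then the last line would also answer my question about `decoder_layer` being a plain linear projection onto the character vocabulary.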
I would be very grateful for any answer you have!