An RNN is a kind of sequence-to-sequence model that can solve many NLP problems, such as machine translation, summarization,
and even article generation.
This time I am going to use an RNN to generate music. Similar to article generation, we can convert the music
into notes and durations and then treat them as the input of the RNN.
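As a minimal sketch of this encoding step (the event format and helper names below are hypothetical, not taken from the repo; in practice the pitch/duration tokens would be extracted from the MIDI files, e.g. with music21):

```python
# Each musical event is a "pitch_duration" token, so pitch and length
# are learned jointly by the RNN. (Hypothetical format for illustration.)

def build_vocab(tokens):
    """Map each distinct token to an integer id."""
    return {tok: i for i, tok in enumerate(sorted(set(tokens)))}

def encode(tokens, vocab):
    """Turn a token sequence into the integer sequence an RNN consumes."""
    return [vocab[t] for t in tokens]

events = ["C4_0.5", "E4_0.5", "G4_1.0", "C4_0.5", "E4_0.5"]
vocab = build_vocab(events)   # {"C4_0.5": 0, "E4_0.5": 1, "G4_1.0": 2}
ids = encode(events, vocab)   # [0, 1, 2, 0, 1]
```

The integer sequences can then be fed to an embedding layer before the LSTM.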
But the problem with a traditional RNN is that it only outputs a single fixed-length latent vector; as the sequence grows longer,
that vector cannot express the meaning of the whole sequence well.
So I introduced an attention mechanism to solve this problem.
- use Bidirectional LSTM with 256 units
- use multi-head attention
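To show the idea behind multi-head attention, here is a minimal NumPy sketch of scaled dot-product attention split across heads. It is a simplification, not the repo's implementation: a full Keras `MultiHeadAttention` layer also applies learned query/key/value/output projections, which are omitted here.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: rows of the result sum to 1.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(q, k, v, num_heads):
    """Scaled dot-product attention computed independently per head.

    q, k, v: arrays of shape (seq_len, d_model); num_heads must divide d_model.
    Learned projection matrices are omitted for clarity.
    """
    seq_len, d_model = q.shape
    d_head = d_model // num_heads
    out = np.zeros_like(q)
    for h in range(num_heads):
        s = slice(h * d_head, (h + 1) * d_head)
        scores = q[:, s] @ k[:, s].T / np.sqrt(d_head)  # (seq_len, seq_len)
        weights = softmax(scores, axis=-1)              # attention weights
        out[:, s] = weights @ v[:, s]                   # weighted sum of values
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 8))               # 6 timesteps, d_model = 8
y = multi_head_attention(x, x, x, num_heads=2)
```

Each timestep thus attends over the whole sequence instead of relying on one fixed-length vector, which is exactly the bottleneck described above.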
The data contains 18 anime music pieces
and was collected from [https://www.midiclouds.com/forum-4-1.html]
Generated samples:
- [https://github.com/Yukino1010/Music-Generate-Part1/blob/master/output/first.mid]
- [https://github.com/Yukino1010/Music-Generate-Part1/blob/master/output/second.mid]
- [https://github.com/Yukino1010/Music-Generate-Part1/blob/master/output/third.mid]
Although the result sounds rough and the model overfit,
it did capture some of the information in the original data, which makes it sound a bit like real music.
Reference: davidADSP, GDL_code [https://github.com/davidADSP/GDL_code]