Made by Dhruv Bhatia, Noah Medina, Herbert Traub, and Aryan Srivastava.
Clone the repo and run create_env. Alternatively, set up a virtual environment and install the requirements from requirements.txt.
To make music, just run

python assignment.py

The program will prompt you for a few inputs that determine the kind of learning and generation you want.
If you generated music, it is saved in the Generated Pieces folder under a file name based on the current date and time.
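As a rough sketch of that naming convention (the exact format string used by assignment.py may differ, and the names here are illustrative only):

```python
from datetime import datetime
from pathlib import Path

# Hypothetical reconstruction of the date-and-time file naming:
# build a timestamped .mid path inside the Generated Pieces folder.
out_dir = Path("Generated Pieces")
timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
out_path = out_dir / (timestamp + ".mid")
```

Because each run's timestamp is unique down to the second, new pieces never overwrite earlier ones.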
Have fun!
Here is a map of which directory holds what:

- processing - the files we use for preprocessing (preprocess.py) and postprocessing (generate_midi.py). convert_midi has some helper functions we used during programming and testing.
- Models - the models we train, for both note generation and duration generation: duration_gen.py, note_gen_functional.py, and transformer_funcs.py.
- Generated Pieces - all generated pieces are stored here.
- data - all the MIDI files, sorted by composer.
- AllOneTrack - all the MIDI files that have only one track.
- OneTrackData - one-track MIDI files, sorted by composer.
- Trained Weights - stored trained weights, so you don't have to train every time you want to make music.
- Dict Data - stored data from preprocessing, to make generation based on already-trained models more efficient.
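The idea behind Dict Data is simple caching: preprocessing output is saved once and reloaded on later runs instead of being recomputed. A minimal sketch of that pattern, assuming a pickle file (the file name and contents here are hypothetical, not the project's actual format):

```python
import pickle
from pathlib import Path

cache = Path("note_vocab.pkl")      # hypothetical cache file name
vocab = {"C4": 0, "D4": 1, "E4": 2} # toy stand-in for preprocessing output

# After preprocessing, save the result once...
with cache.open("wb") as f:
    pickle.dump(vocab, f)

# ...and on later runs, reload it instead of re-running preprocessing.
with cache.open("rb") as f:
    loaded = pickle.load(f)
```

Generating from already-trained models then only needs the cached dictionaries and the saved weights, skipping the expensive preprocessing pass.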