2/1/19
Goal: List the pros and cons of the melody generation model from last semester and find ways to improve it.
Result:
Pros - handles different sequence lengths and types of layers; even a shallow network can produce pretty good results; the default program is fairly easy to run
Cons/improvements - choosing the right sequence length; odd notes and rests; doesn't support note durations or different offsets between notes/rests; no real beginnings or endings to pieces; single instrument only
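A concrete way to see the duration/offset limitation: the old model emits pitches only, so every note implicitly has the same length and spacing. A richer event encoding would carry duration and offset per event. A minimal sketch (the encoding below is illustrative, not the model's actual format):

```python
# Pitch-only output: no rhythm information at all.
pitch_only = [60, 62, 64, 65]  # MIDI pitches, nothing else

# A richer encoding: (pitch or None for a rest,
#                     duration in quarter notes,
#                     offset from the previous event in quarter notes)
with_rhythm = [
    (60, 1.0, 0.0),    # quarter note C4
    (62, 0.5, 1.0),    # eighth note D4 starting one beat later
    (None, 0.5, 0.5),  # eighth rest
    (64, 2.0, 0.5),    # half note E4
]

# Total length in quarter notes of the rhythmic version
total = sum(duration for _, duration, _ in with_rhythm)
```

With the second encoding the model could learn rests and varied note lengths instead of a uniform stream of pitches.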
Next Steps: Try other models for melody generation. Tweak model.
2/8/19
Goal: Try altering the model from last semester. Try using other models for melody generation.
Result: Tried retraining the model after widening some layers, changing the Dense and LSTM layers from 256 to 512 units. Using the weights from this training, I had trouble getting the prediction program to work because the saved weight dimensions no longer match the shapes the prediction network expects. I also tried changing the dropout value from 0.3 to 0.5 for training and to 0.1 for prediction. The result was more repetitive, but otherwise didn't show much difference.
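The shape error makes sense: weights saved from the widened 512-unit network can't be loaded into a prediction network still built with 256-unit layers. A minimal numpy sketch of the failure mode (sizes illustrative, `load_kernel` is a stand-in for the framework's weight loading):

```python
import numpy as np

# Kernel saved after training with a widened Dense layer (512 units).
trained_kernel = np.zeros((100, 512))

# The prediction script still builds the layer at its old size (256 units).
expected_shape = (100, 256)

def load_kernel(saved, expected):
    """Stand-in for a framework weight load, which checks shapes first."""
    if saved.shape != expected:
        raise ValueError(f"shape mismatch: saved {saved.shape}, expected {expected}")
    return saved

try:
    load_kernel(trained_kernel, expected_shape)
    mismatch_detected = False
except ValueError:
    mismatch_detected = True  # the same failure the prediction program hit
```

The fix is to rebuild the prediction network with the same 512-unit layers used in training so the saved shapes line up.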
Next Steps: Try running magenta program and understanding it. https://github.com/tensorflow/magenta/tree/master/magenta/models/piano_genie
2/15/19
Goal: Understand and get magenta piano genie program running.
Results: Installed magenta, but had trouble getting the training running. It seems to be working now.
Next Steps: Figure out what I can do with the trained model.
2/22/19
Goal: Figure out what can be done with training the piano genie model.
Results: Had trouble figuring out what the output of the training was and what happens to it after the run finishes. I was also unsure how to properly modify the commands to train the model on my own dataset, which is probably the first step toward using it. In that process I found and briefly looked at a different magenta model, the melody rnn, which seems to relate more to what I am trying to do with melody generation.
https://www.twilio.com/blog/generate-music-python-neural-networks-magenta-tensorflow
https://github.com/tensorflow/magenta/tree/master/magenta/models/melody_rnn#train-your-own
Next Steps: Get the melody rnn model running.
3/1/19
Goal: Run the magenta melody rnn model.
Results: Was able to generate some melodies using the pretrained melody rnn models - basic_rnn, lookback_rnn, and attention_rnn. Basic generates notes one by one, keeping track of the most recent; lookback keeps track of the most recent 2 bars, so it can be more repetitive; attention should give more long-term structure. I tried each using middle C as the priming note, then the suggested first 4 notes of Twinkle Twinkle Little Star, and also a MIDI file with at most a 16-bar melody as a priming MIDI. Compared to the model I used before, the generated melodies contain articulation (mostly just staccatos), rests, and notes of different lengths. The output looks much cleaner, apart from some odd clef changes. Each run generates 10 8-bar melodies by default, though this can be changed, and they are all in 4/4. I tried priming one with a 4-bar MIDI and generating 16 bars, and the generated melody matched the input fairly well.
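For reference, generation with a pretrained bundle follows the pattern from the melody_rnn README; the paths here are examples, and 8 bars of 4/4 at 4 steps per quarter note works out to 128 steps:

```shell
# Generate 10 melodies of 8 bars each, primed on middle C (MIDI pitch 60).
melody_rnn_generate \
  --config=attention_rnn \
  --bundle_file=/tmp/attention_rnn.mag \
  --output_dir=/tmp/melody_rnn/generated \
  --num_outputs=10 \
  --num_steps=128 \
  --primer_melody="[60]"

# To prime with a MIDI file instead of a note list,
# swap --primer_melody for --primer_midi=/path/to/primer.mid
```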
Next Steps: Find a dataset and try to train using the melody rnn model.
3/13/19
Goal: Find dataset to train the model.
Results: Downloaded some movie-theme MIDI files from midiworld.com, built the dataset, and created the sequences for training. I was able to train the melody rnn model with the default commands. It didn't complete all the update steps, but I was still able to generate some melodies using the latest checkpoints from the training.
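The dataset-building and training steps follow the commands in the melody_rnn README linked above; the directories and step count here are examples:

```shell
# 1. Convert the downloaded MIDI files into NoteSequences.
convert_dir_to_note_sequences \
  --input_dir=/tmp/midi/movie_themes \
  --output_file=/tmp/notesequences.tfrecord \
  --recursive

# 2. Turn the NoteSequences into SequenceExamples for a given config.
melody_rnn_create_dataset \
  --config=attention_rnn \
  --input=/tmp/notesequences.tfrecord \
  --output_dir=/tmp/melody_rnn/sequence_examples \
  --eval_ratio=0.10

# 3. Train; checkpoints are written under --run_dir.
melody_rnn_train \
  --config=attention_rnn \
  --run_dir=/tmp/melody_rnn/logdir/run1 \
  --sequence_example_file=/tmp/melody_rnn/sequence_examples/training_melodies.tfrecord \
  --num_training_steps=20000
```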
Next Steps: Fully train the melody rnn model.
4/4/19
Goal: Try to train a larger dataset completely.
Results: I tried to train on a subset of the Lakh dataset - the LMD-matched set of 45,129 files from https://colinraffel.com/projects/lmd/#get, which is mentioned in the magenta instructions. Generating the notesequences and sequence examples for the dataset was taking too long with so many files, so I cut it short and am unsure how many it got through. I also tried to let the training run overnight, but the VPN got disconnected, so it stopped. I was still able to use the latest checkpoint to generate some melodies, but I am unsure how much training it actually reflects.
Next Steps: Try to apply it to a tool, maybe just adjusting the code to take command-line inputs/prompts for generation. I might try to train again or just use the latest checkpoint from one of my previous trainings.
4/12/19
Goal: Apply the generation to a tool.
Results: I modified the generation program by replacing the flags with user prompts, which set various conditions for the generation such as the primer note, the number of melodies to generate, and the number of bars. I used the checkpoint from what I had previously trained for the generation.
Next Steps: Try to train the larger dataset again with about 3000 files, using linux screen so that a disconnect won't kill the job. Potentially get the generation working on my local computer for easier, immediate access to the melodies after they are generated.
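A typical screen workflow for this, so the training survives a VPN drop (the session name is arbitrary):

```shell
screen -S train    # start a named session on the server
# ... launch melody_rnn_train inside the session ...
# detach with Ctrl-a d; the job keeps running after disconnect
screen -r train    # reattach later to check progress
screen -ls         # list sessions if the name is forgotten
```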
4/19/19
Goal: Fully train larger dataset.
Results: I was able to fully train a subset of the Lakh dataset with 2736 MIDI files. I also revised the input prompts for generating with various aspects changed, so they now include: "How fast? (in quarters per min)", "Please give a primer note or sequence with values from 0-127 separated by commas", "How many bars per melody?", and "How many melodies do you want to generate?"
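The prompt-driven wrapper boils down to turning the user's answers into the generator's flags. A simplified sketch of that mapping (the function name is illustrative; the flags come from the melody_rnn README, and bars-to-steps assumes 4/4 at 4 steps per quarter note, i.e. 16 steps per bar):

```python
def build_generate_flags(qpm, primer, bars, num_melodies):
    """Turn prompt answers into melody_rnn_generate-style flags.

    primer is a comma-separated string of MIDI pitches (0-127),
    matching the wording of the prompt.
    """
    primer_list = [int(p) for p in primer.split(",")]
    if not all(0 <= p <= 127 for p in primer_list):
        raise ValueError("primer pitches must be MIDI values 0-127")
    return [
        f"--qpm={qpm}",
        f"--primer_melody={primer_list}",
        f"--num_steps={bars * 16}",   # 16 steps per bar in 4/4
        f"--num_outputs={num_melodies}",
    ]

# Example: 120 qpm, a 3-note primer, 8 bars, 10 melodies.
flags = build_generate_flags(qpm=120, primer="60,62,64",
                             bars=8, num_melodies=10)
```

In the real script these values would come from `input()` prompts and then be handed to the generation code in place of the original command-line flags.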