September 14-20:
- Research GANs for melody generation
- How would a discriminator determine what melodies are "good"?
- How can we incorporate music theory principles to improve the network's output?
- Present findings at next group meeting
- Explain what a GAN is and how we can use it (a minimal sketch follows this list)
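A minimal sketch of the GAN idea for the meeting explanation, written in Keras-style TensorFlow; the layer sizes and the noise_dim/melody_dim names are illustrative assumptions, not our actual model:

```python
# Minimal GAN sketch (illustrative sizes and names; not the real melody model).
import tensorflow as tf

noise_dim = 100   # assumed size of the random latent vector
melody_dim = 64   # assumed size of a flattened melody representation

# Generator: random noise -> candidate melody
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(noise_dim,)),
    tf.keras.layers.Dense(melody_dim, activation="tanh"),
])

# Discriminator: melody -> probability that it came from the training data.
# Its learned notion of "real" is what stands in for a melody being "good".
discriminator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(melody_dim,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Adversarial stack: with the discriminator frozen, the generator is trained to
# make its output be classified as real.
discriminator.trainable = False
gan = tf.keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")
```

Music theory principles could enter either as extra inputs to the discriminator or as an additional penalty term on the generator loss.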
October 4-12:
- Implement a deep learning network
- Recurrent Neural Network with LSTM cells
- Implemented "mnist-gan"
- Currently working to apply it to music (a sketch of the LSTM model follows this list)
- Had some trouble setting up a virtual Python environment to run TensorFlow alongside music21
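A minimal sketch of the kind of LSTM network being applied to music, assuming melodies are encoded as sequences of one-hot pitch vectors; seq_len and num_pitches are illustrative assumptions:

```python
# Sketch of an LSTM next-pitch model (assumes a one-hot pitch encoding).
import tensorflow as tf

seq_len = 32       # assumed number of timesteps per training excerpt
num_pitches = 88   # assumed pitch vocabulary size

model = tf.keras.Sequential([
    # Two stacked LSTM layers read the melody so far...
    tf.keras.layers.LSTM(256, return_sequences=True,
                         input_shape=(seq_len, num_pitches)),
    tf.keras.layers.LSTM(256),
    # ...and output a distribution over the next pitch.
    tf.keras.layers.Dense(num_pitches, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
```

The same recurrent structure could later serve as the generator once the mnist-gan experiment is carried over to note sequences.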
October 12-18:
- Explore drum/rhythm generation using deep learning
- Having difficulty parsing MIDI files that contain drum tracks with music21
- All channels default to piano after being read by music21
- Possibly due to a poor data source
- Need to find a way to process the data so that drum tracks can be isolated (see the diagnostic sketch below)
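One way to narrow this down is to look at the raw MIDI channels with music21's low-level midi module before any instrument assignment happens; this is only a diagnostic sketch, and example.mid is a placeholder path:

```python
# Print which MIDI channels each track's note-on events use.
# Percussion is conventionally MIDI channel 10, so a missing channel 10 would
# point at the data source rather than at music21's parsing.
from music21 import midi

mf = midi.MidiFile()
mf.open("example.mid")   # placeholder path to one of the drum MIDI files
mf.read()
mf.close()

for i, track in enumerate(mf.tracks):
    channels = {e.channel for e in track.events
                if e.isNoteOn() and e.channel is not None}
    print("track %d: note-on channels %s" % (i, sorted(channels)))
```

If channel 10 events are present but the parsed score still shows piano, the defaulting is happening on music21's instrument-assignment side rather than in the source files themselves.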
October 25:
- Implement C-RNN-GAN with Nottingham data
- The implementation in the C-RNN-GAN GitHub repo fetches 3,414 classical MIDI files (approx. 3.4 GB)
- Had to fix many of the outdated links
- The implementation uses an old version of TensorFlow, so the code needs to be updated
- Used the documentation for the older TF version (0.12) to port the code to a more current version (1.2); typical changes are sketched below
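A sketch of the kinds of renames involved in the 0.12 to 1.x port, based on the TensorFlow 1.0 migration notes; the tensors here are placeholders for illustration:

```python
# Representative TensorFlow 0.12 -> 1.x API changes (runs under TF 1.x).
import tensorflow as tf

# 0.12: init_op = tf.initialize_all_variables()
init_op = tf.global_variables_initializer()

# 0.12: tf.scalar_summary("g_loss", g_loss); tf.merge_all_summaries()
g_loss = tf.constant(0.0)          # placeholder loss tensor for illustration
tf.summary.scalar("g_loss", g_loss)
summary_op = tf.summary.merge_all()

# 0.12: tf.concat(1, [a, b]) -- the axis argument moved in 1.x
a = tf.zeros([2, 3])
b = tf.ones([2, 3])
c = tf.concat([a, b], axis=1)

# 0.12: tf.nn.rnn_cell.LSTMCell -- RNN cells moved under tf.contrib.rnn in 1.x
cell = tf.contrib.rnn.LSTMCell(num_units=128)
```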
Final Presentation Prep:
- Experiment with different hyperparameters to try to coax good generated output from the GAN (an illustrative sweep setup follows this list)
- Trained a larger model with more than one tone per cell
- This allows for polyphonic output
- Began training a large model with a large discriminator
- Requires considerably more time to train
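An illustrative way to organize the hyperparameter sweeps; the parameter names and values below are hypothetical placeholders, not the actual flags of the C-RNN-GAN code:

```python
# Hypothetical hyperparameter grid for the GAN experiments (names and values
# are illustrative only).
import itertools

base_config = {
    "learning_rate": 0.001,
    "hidden_size": 350,       # generator RNN cell size
    "num_layers": 2,
    "tones_per_cell": 1,      # >1 allows polyphonic output
    "disc_hidden_size": 350,  # larger values mean a larger, slower discriminator
}

sweep = {
    "learning_rate": [0.001, 0.0005],
    "tones_per_cell": [1, 2, 3],
    "disc_hidden_size": [350, 700],
}

# Enumerate every combination in the sweep on top of the base configuration.
for values in itertools.product(*sweep.values()):
    config = dict(base_config, **dict(zip(sweep.keys(), values)))
    print(config)   # in practice: launch a training run with this config
```

Larger tones_per_cell and disc_hidden_size values correspond to the bigger polyphonic model and large discriminator noted above, which is why those runs need considerably more training time.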