9/14
Goal: Research composer tools and existing ideas
Meeting -
Next steps: Research and read about chord progressions from the provided resources; research LSTMs.
9/20
Goals: Learn about RNN/LSTM and become familiar with Music21
How I did: Read "The Unreasonable Effectiveness of Recurrent Neural Networks," installed Music21, and briefly looked over the user guide sections on notes, pitches, durations, streams, etc.
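A tiny sketch of the Music21 basics from those user guide sections (notes, pitches, durations, streams), just to record the API as I understood it:

```python
from music21 import note, stream

n = note.Note('C4')                     # a Note has a Pitch and a Duration
print(n.pitch.name, n.pitch.octave, n.duration.quarterLength)

n.duration.quarterLength = 2.0          # turn the quarter note into a half note

s = stream.Stream()                     # a Stream holds notes (and other elements) in order
s.append(n)
s.append(note.Note('E4'))
s.show('text')                          # print a text representation of the stream
```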
Results:
Notes from reading:
Next steps: Continue researching RNNs and familiarizing with Music21
9/28
Goals: Do more research on RNNs and learn more about Music21.
How I did: Did some more research on RNNs and started looking at code examples. Played around with Music21 and read more of the user guide about streams, lists of lists, scores, chords, etc.
Results: Getting a better understanding of both, still have a lot more to learn.
Next Steps: Work on importing a MIDI file into Music21 and identifying whether notes are in the chord progression. Look into MIDI files.
10/5
Goals: Write a program to identify whether notes are in the chord progression in a MIDI file.
How I did: I wrote a program that imports a MIDI file of chords and loops through the pitches in each chord to identify whether an input note is in the chord progression. I used Python and Music21 and created a sample MIDI file to test it.
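A minimal sketch of the kind of check the program does, assuming Music21 parses the chord MIDI file; the file name 'chords.mid' and the note 'E4' are placeholders:

```python
from music21 import converter, chord, pitch

def note_in_progression(midi_path, note_name):
    """Return True if the note's pitch class appears in any chord of the MIDI file."""
    score = converter.parse(midi_path)
    target = pitch.Pitch(note_name)
    for c in score.recurse().getElementsByClass(chord.Chord):
        # compare pitch classes so the octave does not matter
        if target.pitchClass in [p.pitchClass for p in c.pitches]:
            return True
    return False

print(note_in_progression('chords.mid', 'E4'))
```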
Results: I gained an understanding of what MIDI files are and how to use them, learned more about Music21 (importing and writing files, streams, chords, pitch tuples), and learned some Python.
Next Steps: Work on a function that generates a list of the notes not in each specific chord, from the chords generated by Omega.
10/12
Goals: Write a function to generate a list of notes that are not in specific chords, from a stream of chords from a MIDI file.
How I did: I wrote the method, playing around with different inputs (a stream and the output from Omega's method). I also tried producing different outputs: a list of notes for each specific chord, or the notes not in the entire chord progression. We also created a GitHub repository to collaborate.
Results: The method takes in a chord progression stream and outputs, for each chord in the stream, a list of the notes not in that chord.
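A minimal sketch of the idea, assuming the input is a Music21 stream of chord.Chord objects; the function name and the use of pitch classes (0-11) are my choices for illustration, not necessarily the exact implementation:

```python
from music21 import converter, chord

def notes_not_in_chords(chord_stream):
    """For each chord in the stream, list the pitch classes (0-11) it does not contain."""
    result = []
    for c in chord_stream.recurse().getElementsByClass(chord.Chord):
        in_chord = set(c.pitchClasses)
        result.append([pc for pc in range(12) if pc not in in_chord])
    return result

chords = converter.parse('chord_progression.mid')   # placeholder file name
print(notes_not_in_chords(chords))
```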
Next Steps: Combine all the methods/parts with Adam and Omega into a working program that takes in the CSV file of chords and a melody MIDI file, combines the two, and adjusts/fixes the notes using Adam's algorithm.
10/19
Goals: Combine everything together to have a working program.
How I did: I looked into an alternative to the mingus library that would work with Python 3 and found a port of mingus to Python 3, but we ended up using a different library.
Results: The program works: it takes in a melody MIDI file and a chord stream and returns a stream of corrected notes.
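A rough sketch of how the combined pipeline fits together. Adam's correction algorithm is not reproduced here; snapping an out-of-chord note to the nearest chord tone stands in as a placeholder, and the file names and the naive note-to-chord alignment are illustrative only:

```python
from music21 import converter, note, chord, stream

def correct_melody(melody_path, chords_path):
    melody = converter.parse(melody_path)
    chords = list(converter.parse(chords_path).recurse().getElementsByClass(chord.Chord))
    corrected = stream.Stream()
    for i, n in enumerate(melody.recurse().getElementsByClass(note.Note)):
        c = chords[min(i, len(chords) - 1)]              # naive note-to-chord alignment
        if n.pitch.pitchClass not in c.pitchClasses:
            # placeholder for Adam's algorithm: move to the nearest chord tone
            n.pitch = min(c.pitches, key=lambda p: abs(p.midi - n.pitch.midi))
        corrected.append(n)
    return corrected

correct_melody('melody.mid', 'chords.mid').write('midi', fp='corrected.mid')
```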
Next Steps: Start looking at deep learning for chord and melody generation. Find a 2-minute movie scene with faces and emotion.
10/26
Goals: Go through the tutorial "How to Generate Music using a LSTM Neural Network in Keras."
How I did: I read through the tutorial and learned how the RNN can be used to predict the next note given a music dataset. I also downloaded the code and installed TensorFlow and Keras to try to run the tutorial.
Notes from tutorial: Classical Piano Composer
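The network in the tutorial is roughly a stacked LSTM over a window of note indices that ends in a softmax over the note vocabulary; a sketch along those lines (the layer sizes and vocabulary size here are illustrative, not the tutorial's exact numbers):

```python
from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense, Activation

sequence_length = 100    # notes of context per training example
n_vocab = 300            # placeholder: number of distinct notes/chords in the corpus

model = Sequential()
model.add(LSTM(256, input_shape=(sequence_length, 1), return_sequences=True))
model.add(Dropout(0.3))
model.add(LSTM(256))
model.add(Dense(n_vocab))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
```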
Results: I was able to run the tutorial's LSTM and prediction programs. I mainly tried the prediction and was able to get an output MIDI of predicted notes. I also tried using it on some Beethoven MIDI files I found.
Next Steps: Keep playing around with it and try to better understand how the LSTM works and how the training and prediction work. I will also try to actually train on a dataset, specifically the Nottingham dataset.
11/2
Goals: Try training on a dataset, specifically the Nottingham dataset, with the LSTM.
How I did: I tried training on the dataset, but each iteration took a very long time (approx. 5 hours), so I was only able to train for one epoch.
Results: The results were not very good because the model was not well trained, so the prediction ended up being the same note repeated over and over.
Next Steps: I will continue trying to train on the Nottingham dataset and try to get some results. I will try to figure out whether I can continue training from where I left off, which would let me train for more epochs.
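One common way to do this in Keras is to save weights with a ModelCheckpoint callback and reload them before the next training run; a self-contained sketch with dummy data and a placeholder weights file name:

```python
import os
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense
from keras.callbacks import ModelCheckpoint

x = np.random.rand(32, 50, 1)     # dummy data: 32 sequences of 50 steps
y = np.random.rand(32, 10)        # dummy targets over a 10-note vocabulary

model = Sequential([LSTM(16, input_shape=(50, 1)), Dense(10, activation='softmax')])
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

if os.path.exists('weights.hdf5'):
    model.load_weights('weights.hdf5')    # pick up where the last run stopped

checkpoint = ModelCheckpoint('weights.hdf5', monitor='loss', save_best_only=True)
model.fit(x, y, epochs=1, batch_size=8, callbacks=[checkpoint])
```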
11/9/18
Goals: Train the LSTM model on the Nottingham dataset and prepare the presentation.
How I did: I tried to train on the Nottingham dataset using remote access, which made it run slightly faster, but it was still taking too long, so I trained on just a small sample of 11 MIDI files from the dataset. At first I used an input sequence of 100 notes, which yielded no results, so I then tried an input sequence of 50 notes.
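For reference, a rough sketch of how the 50-note input sequences are built from the parsed notes; the variable names and the toy note list are placeholders for the data from the 11 MIDI files:

```python
import numpy as np
from keras.utils import to_categorical

sequence_length = 50
notes = ['C4', 'E4', 'G4', 'C5'] * 40          # stand-in for notes parsed from the MIDI files
pitch_names = sorted(set(notes))
note_to_int = {name: i for i, name in enumerate(pitch_names)}
n_vocab = len(pitch_names)

network_input, network_output = [], []
for i in range(len(notes) - sequence_length):
    seq_in = notes[i:i + sequence_length]       # 50 notes of context
    seq_out = notes[i + sequence_length]        # the note to predict
    network_input.append([note_to_int[n] for n in seq_in])
    network_output.append(note_to_int[seq_out])

network_input = np.reshape(network_input, (len(network_input), sequence_length, 1)) / float(n_vocab)
network_output = to_categorical(network_output)
```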
Results: I was able to fully train on the 11 MIDI files through 200 epochs using the 50-note input, and the model was able to predict and generate music.
Next Steps: Figure out an end-of-semester goal that brings our work back to a central tool.
11/30/18
Goals: Decide on the end-of-semester central tool.
How I did: I brainstormed ideas with Adam and Omega about how to bring what we've done back to a central tool, as Richard suggested.
Results: We decided to try using our note correction program with the melody MIDI files I have generated through training and prediction.
Next Steps: Train on just the melody MIDI files of the Nottingham dataset to reduce music/note errors in the output MIDI files. Potentially try to train on the entire dataset over SSH on the Lenovo. Try using the note correction on the resulting prediction and put it all together in the repository.
12/5/18
Goals: Fix the prediction output to work with the note correction and bring it back to the central tool.
How I did: I trained on some of the Nottingham dataset using just the melody, which removed chords, but the output still had problems. Then I changed the offset step in the MIDI creation part of the prediction from 0.5 to 1 so there would be no overlapping notes, and this resolved the problem of miscellaneous rests everywhere.
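A minimal sketch of that MIDI creation step with the offset change, assuming the prediction comes back as a list of note names (the list here is a placeholder for the model's output):

```python
from music21 import note, stream

predicted = ['E4', 'G4', 'A4', 'B4', 'G4']      # placeholder for the predicted notes
out = stream.Stream()
offset = 0
for name in predicted:
    out.insert(offset, note.Note(name))
    offset += 1       # stepping by 1 instead of 0.5 keeps the quarter notes from overlapping
out.write('midi', fp='prediction_output.mid')
```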
Results: I was able to use this output for the note correction. I added more chords to the chord input.txt to correct more measures. I was also able to try it with a different chord input to compare. I also modified the note correction to generate a MIDI file of corrected notes.