Fri Sept 7.
- Built basic NLP to extract the genre from a request for Shimi to play audio (rough sketch below)
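  A minimal sketch of that extraction, assuming plain keyword matching against a hard-coded genre list (the actual list and matching logic may differ):

      GENRES = {"rock", "jazz", "classical", "hip hop", "country", "pop"}

      def extract_genre(request):
          """Return the first known genre mentioned in the request, if any."""
          text = request.lower()
          for genre in GENRES:
              if genre in text:
                  return genre
          return None

      print(extract_genre("Hey Shimi, play some jazz"))  # -> jazz
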
Fri Sept 14.
- Toyed w/ idea of Shimi tracking facial position & bobbing to it
- Did basic Python copy/paste implementation
- Looked into STT w/ Python
  - Found a wrapper for CMU's Sphinx; it expects audio files
  - Could build a system that records/saves audio between pauses of a certain length (sketch after this entry)
- Video of interactivity: https://youtu.be/FnMkelr2hZE?t=220
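  One way to do the record-between-pauses idea: the SpeechRecognition package wraps pocketsphinx (a Python wrapper for CMU Sphinx) and already segments microphone input on silence. A sketch, assuming that package is the wrapper in question:

      import speech_recognition as sr  # pip install SpeechRecognition pocketsphinx

      r = sr.Recognizer()
      with sr.Microphone() as source:
          r.adjust_for_ambient_noise(source)  # calibrate the silence threshold
          print("Listening...")
          audio = r.listen(source)            # records until a pause is detected

      try:
          print("Heard:", r.recognize_sphinx(audio))  # offline CMU Sphinx decoding
      except sr.UnknownValueError:
          print("Sphinx could not understand the audio")
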
Fri Sept 21.
- Built STT system for Shimi
  - Extracts genre from text & queries iTunes for songs that match the query (query sketch below)
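  The iTunes side is just the public iTunes Search API; roughly (the endpoint and response fields are Apple's, the wrapper function is mine):

      import requests

      def itunes_search(genre, limit=5):
          """Query the iTunes Search API for songs matching the extracted genre."""
          resp = requests.get(
              "https://itunes.apple.com/search",
              params={"term": genre, "media": "music", "limit": limit},
          )
          resp.raise_for_status()
          return [(r["artistName"], r["trackName"]) for r in resp.json()["results"]]

      for artist, track in itunes_search("jazz"):
          print(artist, "-", track)
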
Fri Sept 28.
- Experimented with madmom
  - Tried to set up pitch/chord/beat tracking
  - Only beat tracking worked
- Made ASCII Shimi bob its head (sketch below)
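  The beat-tracking part that worked, roughly (RNNBeatProcessor -> DBNBeatTrackingProcessor is madmom's documented pipeline; the filename and ASCII frames are made up):

      import time
      from madmom.features.beats import RNNBeatProcessor, DBNBeatTrackingProcessor

      # RNN beat activations -> DBN decoding, giving beat times in seconds
      act = RNNBeatProcessor()("song.wav")
      beats = DBNBeatTrackingProcessor(fps=100)(act)

      frames = ["  (._.)  ", "  (.-.)  "]  # stand-in ASCII "head" frames
      start = time.time()
      for i, beat in enumerate(beats):
          time.sleep(max(0.0, beat - (time.time() - start)))  # wait for the beat
          print(frames[i % 2], end="\r", flush=True)          # swap frame on-beat
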
Fri Oct 5.
- Went hard with Spotify & Porcupine
- (Super cool demo inbound)
- Can now activate Shimi by saying "Alexa" (lmao) and tell it to play a song, genre, or artist
- It will play those songs in Spotify (rough sketch after this entry)
- TODO
  - Build custom Shimi wake word
  - Use the Spotify API to get "danceability", BPM, etc. to drive some interactivity in Shimi
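  A rough sketch of the wake-word -> playback loop (pvporcupine, pvrecorder, and spotipy calls are their documented APIs, but the wiring is simplified and the STT step is skipped; newer Porcupine versions also require a Picovoice access_key):

      import pvporcupine  # pip install pvporcupine pvrecorder spotipy
      import spotipy
      from pvrecorder import PvRecorder
      from spotipy.oauth2 import SpotifyOAuth

      porcupine = pvporcupine.create(keywords=["alexa"])  # "alexa" is a built-in keyword
      recorder = PvRecorder(frame_length=porcupine.frame_length)
      sp = spotipy.Spotify(auth_manager=SpotifyOAuth(scope="user-modify-playback-state"))

      def play(query):
          """Search Spotify and start playback on the active device (needs Premium)."""
          track = sp.search(q=query, type="track", limit=1)["tracks"]["items"][0]
          sp.start_playback(uris=[track["uri"]])

      recorder.start()
      while True:
          if porcupine.process(recorder.read()) >= 0:  # wake word detected
              # the real system runs STT + genre/artist extraction here
              play("jazz")  # placeholder query
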
Fri Oct 12.
- Added custom Shimi wake word
- Dance function based on the current song (mapping sketch below)
  - Spotify danceability metrics, BPM, etc.
- Fourier decomposition of the audio
- Display head position output
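  The feature-to-motion mapping, very roughly (current_playback and audio_features are spotipy's documented calls; the mapping itself is a guess at the idea, not the actual dance function):

      import spotipy
      from spotipy.oauth2 import SpotifyOAuth

      sp = spotipy.Spotify(auth_manager=SpotifyOAuth(scope="user-read-playback-state"))

      def dance_params():
          """Map the current track's Spotify audio features to head-bob motion."""
          track = sp.current_playback()["item"]
          feats = sp.audio_features([track["id"]])[0]
          return {
              "bob_hz": feats["tempo"] / 60.0,     # one bob per beat
              "amplitude": feats["danceability"],  # 0..1 -> how deep the bob is
              "energy": feats["energy"],           # could scale other joints
          }

      print(dance_params())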