Generation of lyrics lines conditioned on music audio clips
Paper accepted at NLP4MusA 2020 Workshop
Olga Vechtomova, Gaurav Sahu, Dhruv Kumar
University of Waterloo
We present a system for generating novel lyrics lines conditioned on music audio. A bimodal neural network model learns to generate lines conditioned on any given short audio clip. The model consists of a spectrogram variational autoencoder (VAE) and a text VAE. Both automatic and human evaluations demonstrate the effectiveness of our model in generating lines whose emotional impact matches a given audio clip. The system is intended to serve as a creativity tool for songwriters.
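To make the bimodal setup concrete, below is a minimal sketch of how a spectrogram VAE can condition a text VAE's decoder. It assumes a PyTorch implementation; the module names, dimensions, and the concatenation-based conditioning are illustrative assumptions, not the paper's exact architecture.

```python
# Hedged sketch (assumption: PyTorch; names, sizes, and the conditioning
# scheme are illustrative, not the authors' exact design).
import torch
import torch.nn as nn

class SpectrogramVAE(nn.Module):
    """Encodes a mel-spectrogram clip into a latent vector z_audio."""
    def __init__(self, n_mels=80, latent_dim=128):
        super().__init__()
        self.encoder = nn.GRU(n_mels, 256, batch_first=True)
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)

    def forward(self, spec):                       # spec: (batch, frames, n_mels)
        _, h = self.encoder(spec)                  # h: (1, batch, 256)
        h = h.squeeze(0)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return z, mu, logvar

class ConditionedTextVAE(nn.Module):
    """Text VAE whose decoder is conditioned on the audio latent."""
    def __init__(self, vocab_size, emb_dim=256, latent_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, 256, batch_first=True)
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        # Decoder sees the token embedding plus both latents at every step.
        self.decoder = nn.GRU(emb_dim + 2 * latent_dim, 256, batch_first=True)
        self.out = nn.Linear(256, vocab_size)

    def forward(self, tokens, z_audio):            # tokens: (batch, seq_len)
        emb = self.embed(tokens)
        _, h = self.encoder(emb)
        h = h.squeeze(0)
        mu, logvar = self.mu(h), self.logvar(h)
        z_text = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        cond = torch.cat([z_text, z_audio], dim=-1)            # (batch, 2*latent_dim)
        cond = cond.unsqueeze(1).expand(-1, tokens.size(1), -1)
        logits = self.out(self.decoder(torch.cat([emb, cond], dim=-1))[0])
        return logits, mu, logvar
```

At generation time, one would encode a short audio clip with the spectrogram VAE and decode lyrics lines from the text VAE conditioned on the resulting audio latent; training typically combines reconstruction losses with KL terms for both latents.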