References
[1] K. James, E. Morales, Y. Takeda, R. Horst, and E. Yung, “Listen and Learn: Future methods, present possibilities and past dilemmas for sounding texts in digital arts-based educational research,” in CSSE Conference, 2021.
[2] K. James, R. Horst, E. Morales, Y. Takeda, and E. Yung, “Singling and the Earful Yearning: A remote, digital, hyper-interactive text-to-MIDI literacoustic jam,” in Electronic Literature Organization conference, 2020.
[3] K. James, E. Morales, R. Horst, Y. Takeda, and E. Yung, “Story Music and the Future of Sound Thinking,” in 3rd
[4] M. Auvray, S. Hanneton, and J. K. O'Regan, "Learning to perceive with a visuo-auditory substitution system: Localisation and object recognition with 'The vOICe'," Perception, vol. 36, pp. 416–430, 2007, doi: 10.1068/p5631.
[5] A. Haigh et al., "How well do you see what you hear? The acuity of visual-to-auditory sensory substitution," Frontiers in Psychology, vol. 4, 2013, doi: 10.3389/fpsyg.2013.00330.
[6] S. Mukherji, "Generative Musical Grammar: A Minimalist Approach," Ph.D. thesis, 2014.
[7] Horse-and-person image taken from EECS 442 Problem Set 7.
[8] Glasses image taken from "5 reasons black glasses aren't boring," Banton Frameworks.
[9] Linguistic syntax tree taken from Wikipedia.
[10] Musical syntax tree generated by the authors.