Algorithmic composition is a growing field that offers a promising route for creating novel and engaging musical experiences tailored specifically to the brain (He, 2022). Affective algorithmic composition (AAC) uses computational assistance to generate new music with particular emotional qualities and affective intentions (Williams et al., 2017). Instead of relying on past listening history, which does not fully capture a person's evolving tastes or immediate emotional responses, real-time metrics offer a more adaptable approach to gauging musical taste. Where musical taste informs medical interventions, patient preference carries considerable weight – research suggests that "it is the act of making a choice that determines the greatest effectiveness of the procedure [musical intervention for the patient]" (Guerrier et al., 2021). Music affects listeners more strongly the more closely it is tailored to them, and algorithmic precision can amplify this neurological effect (Huang & Lin, 2013; Woods et al., 2019).
Figure: The GUI for Karma-lab, an early algorithmic composition engine under development since 1994.
The first instances of computer use for music composition emerged in the mid-1950s, when computers were still very expensive and slow. Hiller and Isaacson's Illiac Suite, a string quartet generated through rule systems and Markov chains on the ILLIAC I computer at the University of Illinois at Urbana–Champaign in 1957, is commonly cited as the first electronically generated score (Hiller & Isaacson, 1958; Fernández & Vico, 2013).
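To make the Markov-chain technique concrete, the sketch below generates a short melody by sampling each next pitch from a first-order transition table. The pitch set and probabilities are illustrative assumptions invented for this example; Hiller and Isaacson derived their actual tables from counterpoint rules rather than from data like this.

```python
import random

# Hypothetical first-order Markov transition table over pitches.
# Each entry maps a current pitch to (next pitch, probability) pairs.
TRANSITIONS = {
    "C": [("D", 0.4), ("E", 0.3), ("G", 0.3)],
    "D": [("C", 0.3), ("E", 0.5), ("F", 0.2)],
    "E": [("D", 0.4), ("F", 0.4), ("G", 0.2)],
    "F": [("E", 0.5), ("G", 0.5)],
    "G": [("C", 0.5), ("E", 0.3), ("F", 0.2)],
}

def generate_melody(start: str, length: int) -> list[str]:
    """Walk the chain, sampling each next pitch from the
    transition distribution of the current pitch."""
    melody = [start]
    for _ in range(length - 1):
        choices, weights = zip(*TRANSITIONS[melody[-1]])
        melody.append(random.choices(choices, weights=weights)[0])
    return melody

if __name__ == "__main__":
    print(" ".join(generate_melody("C", 16)))
```

Because each pitch depends only on its predecessor, the output is locally coherent but globally unstructured, which is why rule systems were layered on top of such chains to enforce larger-scale musical constraints.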
It is well established that particular musical features elicit predictable responses in listeners. Music's effect on the brain is substantial, and algorithmic precision can strengthen it further; programs designed with this goal in mind fall under AAC. Neuroscientific studies demonstrate this impact, revealing heightened emotional responses to algorithmically precise classical compositions (Agres et al., 2023).
Beyond classical music, services like Brain.fm use algorithms that tailor a track's acoustic features, across a variety of genres, to sustain attention in people with varying attentional difficulties (Woods et al., 2021). Electronic composition systems already demonstrate some capacity to influence human affect. One such system, which controls emotional expression in computer-aided music generation, improves the compositional and performance abilities of musicians and non-musicians alike (Grimaud & Eerola, 2020). Similar systems, such as "MoMusic" (Bian et al., 2023) and the "affective remixer" (Chung & Vercoe, 2006), serve as working models.
An artificial system capable of influencing people's affective states through AI-generated music was developed using "biofeedback loops" that gauge affect and update the music's acoustics in real time (Williams et al., 2020). Such "intelligent musical interfaces" implicitly measure users' cognitive states in real time to create appropriately toned music without conscious effort on the user's part (Yuksel et al., 2019; Hou, 2022). A systematic review of AAC systems highlights their ability to trigger emotions in humans, but designing systems that reliably stimulate users' emotions remains a persistent challenge, in part because the existing literature has not been aggregated (Wiafe & Fränti, 2023). Algorithmic music composition also frees up valuable human time and enables people without musical expertise to create desirable music (Meier, 2014).
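A minimal sketch of that biofeedback-loop pattern appears below. It assumes a hypothetical read_affect() sensor stub returning arousal and valence, and a simple proportional rule nudging tempo and brightness toward a target state; these names, parameters, and the control rule are illustrative assumptions, not the design of Williams et al. (2020).

```python
import time
from dataclasses import dataclass

@dataclass
class MusicParams:
    tempo_bpm: float   # faster tempo tends to raise perceived arousal
    brightness: float  # 0..1 timbral proxy (e.g., register or filter cutoff)

def read_affect() -> tuple[float, float]:
    """Hypothetical biosensor stub returning (arousal, valence) in [0, 1].
    A real system would derive these from EEG, heart rate, or skin
    conductance; here we simply return neutral values."""
    return 0.5, 0.5

def update_params(params: MusicParams, arousal: float, valence: float,
                  target_arousal: float = 0.4, gain: float = 10.0) -> MusicParams:
    """Nudge acoustic parameters toward a target affective state using a
    proportional-control rule (an assumption made for this sketch)."""
    error = target_arousal - arousal
    return MusicParams(
        tempo_bpm=max(60.0, min(140.0, params.tempo_bpm + gain * error)),
        brightness=max(0.0, min(1.0, valence)),
    )

if __name__ == "__main__":
    params = MusicParams(tempo_bpm=100.0, brightness=0.5)
    for _ in range(3):                    # three iterations of the loop
        arousal, valence = read_affect()  # implicit, real-time measurement
        params = update_params(params, arousal, valence)
        print(params)
        time.sleep(0.1)                   # real systems update continuously
```

The core design problem these systems face is exactly the mapping step in update_params: translating a measured affective state into acoustic parameter changes that move the listener toward the intended emotion.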