For a complete list of every project I have made and wish to be public, please consult my Portfolio (Curriculum Vitae) which is also linked on the Home page. This list is primarily for those projects that I believe are helpful to document at length.
The projects on this list each combine multiple distinct skills in order to function, and the processes behind them cannot be meaningfully conveyed in a program note. I hope these long-form writings convey my interest in the way these topics work together, as well as my creative process when it comes to larger works.
If you have questions about a particular work, please contact me at sgo@oberlin.edu or any of the other social links provided at the bottom of this website.
Spring 2024
This piece revolves around a web application that allows users to record audio and immediately play back their recordings on loop through a single, screen-sized button. Distributed with a QR code, the app aims to democratize interaction with one’s surroundings through the medium of digital audio. Simultaneously, the app turns any group of users into a network of nodes that process data in a manner hopefully analogous to the social interaction they already perform. The resulting work aims to grant compositional authority to a group of people while facilitating constructive interactions through technology.
This piece uses the gyro sensors on a smartphone to generate MIDI data, which then controls a variety of set parameters in the software synthesis and effect plugins Xfer Serum, Image Line Fruity Convolver, Image Line Multiband Delay, and Lese Codec. In conjunction with a timeline that changes parameters globally, the piece aims to establish a concrete link between the physical gestures of the performer and gestures within the stereo field.
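The mapping from sensor data to MIDI can be illustrated with a short sketch. This is a minimal illustration, not the patch used in the piece; the axis range, CC number, and channel below are assumptions for demonstration.

```python
def gyro_to_cc(reading, in_min=-180.0, in_max=180.0):
    """Clamp a gyroscope reading (in degrees) and scale it to a 0-127 MIDI CC value."""
    clamped = max(in_min, min(in_max, reading))
    return round((clamped - in_min) / (in_max - in_min) * 127)

def cc_message(cc_number, value, channel=0):
    """Build a 3-byte MIDI Control Change message: status byte, controller number, value."""
    return bytes([0xB0 | channel, cc_number, value])

# A level phone (0 degrees of tilt) lands at the middle of the CC range.
msg = cc_message(1, gyro_to_cc(0.0))
```

In practice a message like this would be sent to the synth or effect plugin on every sensor update, so each physical tilt of the phone produces a continuous sweep of a mapped parameter.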
Winter 2024
The title 'Mirror Image' encapsulates the essence of this piece, as its compositional themes operate on multiple layers of mirroring. Conceptually, the composition is structured as a dynamic interplay of crescendos and diminuendos, mirrored sonic textures played in contrast to the performer, and gradual sound-source abstraction, akin to the reflective nature of a mirror's curvature. Furthermore, the gestural motifs meticulously woven throughout the work serve to perpetuate this mirroring effect, whereby each gesture finds its counterpart, thus engendering a symmetrical dialogue within the musical fabric. This intentional symmetry not only imbues the piece with a sense of cohesion but also facilitates a deeper exploration of its thematic underpinnings, inviting listeners to discern parallelisms and resonances across its intricate musical landscape. We encourage the audience to ask themselves: what becomes abstracted in a mirror’s reflection? What perspective do you hope to gain from looking in a mirror? How do you feel about what you see?
Program note authored by Will Judd.
As part of a collaborative composition process designed to educate participants on improvisation practice in the fields of music, technology, and dance, this piece was created over the course of several weeks based on the prompt 'sublime stillness, frantic activity.' The ten ensemble members were divided into four groups, with each group composing a general movement score and an accompanying piece of music. The four movements were then workshopped by the entire ensemble and by instructors Seven Kemper, Aurie Hsu, Edwin Huizinga, and Lilian Barbeito.
Fall 2023
This piece is designed to demonstrate a system comprising three main parts: a Max patch that translates Xbox controller data into Max messages with the Human Interface object, a simple monophonic granulator patch built around the groove~ object, and a five-tap random-interval delay patch, also built in Max. Creatively, the piece aims to take the diverse sonic possibilities of these simple granular sound generation techniques and make them performable and expressive. Technically, the patch lets the user granulate two separate samples at the same time and swap between a collection of pre-determined samples. The player can modulate the rate at which grains are produced, introduce randomness into the starting point of each grain, and control the pitch of grains, with the option to randomly vary the pitch of each grain. The output of the two granulators can be mixed with reverb live, and all generated sounds are sent through the multi-tap delay.
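The core of the granulation technique described above can be sketched in a few lines. This is a minimal offline illustration in Python (Hann-windowed grains with a randomized start point, overlap-added into an output buffer), not the Max patch itself; buffer sizes and parameter values are assumptions for demonstration.

```python
import math
import random

def granulate(source, grain_len, hop, start_jitter, rng):
    """Overlap-add Hann-windowed grains read from `source`.

    Each grain's read position is its write position plus a random offset of
    up to +/- start_jitter samples, mimicking the randomized grain start
    point described above.
    """
    out = [0.0] * len(source)
    write_pos = 0
    while write_pos + grain_len <= len(out):
        offset = rng.randint(-start_jitter, start_jitter)
        read_pos = max(0, min(len(source) - grain_len, write_pos + offset))
        for i in range(grain_len):
            window = 0.5 - 0.5 * math.cos(2 * math.pi * i / grain_len)  # Hann window
            out[write_pos + i] += source[read_pos + i] * window
        write_pos += hop
    return out

# Granulate one second of a 220 Hz sine tone at an 8000 Hz sample rate.
sr = 8000
tone = [math.sin(2 * math.pi * 220 * n / sr) for n in range(sr)]
grains = granulate(tone, grain_len=400, hop=200, start_jitter=800,
                   rng=random.Random(0))
```

Shrinking `hop` raises the grain rate, and widening `start_jitter` smears the source material further, which correspond to the grain-rate and start-point controls exposed to the player.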
Inspired by performance works in the vein of Pauline Oliveros, this piece is designed to be a lightweight, portable performance piece that is ensemble agnostic. To perform this piece, each player in an ensemble is given a number from a set of numbers whose size equals the number of ensemble members. This set of numbers is then sorted into an AVL tree, a step-by-step visual representation of which is displayed with a Google Slides presentation. The players are given a basic set of instructions which they perform according to the AVL tree’s various steps. The result is an emergent soundscape in accordance with this improvisation framework.
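For readers unfamiliar with the data structure, an AVL tree is a self-balancing binary search tree: inserting the players' numbers one at a time triggers the rebalancing rotations that the slide deck steps through. A minimal sketch of that insertion process (not the code behind the slides, which are a hand-made visual aid):

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right, self.height = key, None, None, 1

def height(node):
    return node.height if node else 0

def update(node):
    node.height = 1 + max(height(node.left), height(node.right))
    return node

def rotate_right(y):
    x = y.left
    y.left, x.right = x.right, y
    update(y)
    return update(x)

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    update(x)
    return update(y)

def insert(node, key):
    if node is None:
        return Node(key)
    if key < node.key:
        node.left = insert(node.left, key)
    else:
        node.right = insert(node.right, key)
    update(node)
    balance = height(node.left) - height(node.right)
    if balance > 1 and key < node.left.key:    # left-left case
        return rotate_right(node)
    if balance < -1 and key > node.right.key:  # right-right case
        return rotate_left(node)
    if balance > 1:                            # left-right case
        node.left = rotate_left(node.left)
        return rotate_right(node)
    if balance < -1:                           # right-left case
        node.right = rotate_right(node.right)
        return rotate_left(node)
    return node

# Insert player numbers 1..7 in order; rotations keep the tree balanced,
# so the tree never degenerates into a single long chain.
root = None
for n in range(1, 8):
    root = insert(root, n)
```

Each insertion and rotation is a discrete, legible step, which is what makes the structure suitable as a score that an ensemble can follow slide by slide.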
Summer 2023
The Blue Poodle Studio is a company created by former Disney employee and avid collector Mark Rexroat, with the aim of building a platform around Rexroat’s collection of rare toys, products, and designs. As the Blue Poodle Studio needed a sonic brand identity, Rexroat commissioned three pieces to accompany the other media produced by the studio. Over the course of several meetings, three pieces were produced: a Main Theme designed to stand by itself and introduce longer-form videos, a Background Theme that could be looped to arbitrary lengths and play beneath someone talking, and a Stinger lasting a few seconds to accompany a short motion graphic. Based on demographic research and general field expertise, Rexroat wanted the pieces to draw inspiration from the works of Mark Mothersbaugh and 90s-era merchandising. Rexroat received the three commissioned pieces, as well as the stem tracks that make up each piece and other sound resources that would be useful to him in the future.
Spring 2023
This piece was created in the wake of my exposure to the ideas of Schaeffer, Smalley, Barrett, Blackburn, and others about gesture and the qualification of sound. In her writing about her piece Little Animals, Natasha Barrett establishes a spectrum between representative and abstract sound. I felt that this spectrum in particular codified many of my experiences with electronic music, both in the academic vein and otherwise, and I wanted to explore it in a piece of my own. Etude aux Bol Umami establishes a source bond – a connection between a sound and the notion of the object which manifests the sound – around some chopsticks and a ceramic bowl that I frequently ate with while isolated over the winter. The recordings of these objects all feature considerable room noise, which I feel firmly establishes their representative nature. As an analogue for my experience daydreaming while eating, I used the processing techniques I had studied while reading these foundational composers to manipulate the source bond and create abstract gestures with the recorded material.
2022
A collaborative work between myself, visual artist and geoscientist Katherine Chambers, and computer scientist Gawain Liu, this virtual reality experience was created over a four-week period in the winter between semesters at Oberlin. The first few weeks of this workshop provided us with information about mechanics within the game engine Unity 3D, basic instruction in the programming language Csound, and the ways in which the two pieces of software can interact. We used a landscape builder to create an environment that minimized render distance to ease strain on the GPU, and a particle system of butterflies rendered as manipulated billboards. The butterflies were illustrated by Chambers, who also provided a number of facts about butterflies in a voice-over I helped her record. The entire VR experience was meant to replicate a documentary about the mating season of monarch butterflies in an immersive environment. Sonically, I created a generative sequencer and a basic FM synth within Csound which, combined with nature recordings, created a soundscape akin to the nature documentaries that inspired the creation of this work.
When my roommate, Sam Kennedy, needed to create a brief animation for his first-year seminar course about art as a form of experimentation, he felt that the subject matter of his work would benefit from the sonic possibilities of digital audio that we had talked about casually before he created this work. In Visage, Kennedy aims to examine and criticize manifestations of self-image fed by superficial online communities like Instagram. The viewer is invited, both literally and figuratively, to inhabit the self-reflective eye of Kennedy’s self-portrait, rendered with coarse neon curves (a stark juxtaposition to the nuanced gradients of his prior works in this series) that make up a rotoscope. This manifestation of self-image stares into the imaginary camera as it is gradually obliterated by digital artifacts. To supplement Kennedy’s visuals, I datamoshed frames from his animation and processed them with a combination of intensely digital distortions – Lese’s Codec and AirWindows Ulaw Encode – to create an unnatural and painful soundscape that I felt supplemented the discomfort Sam’s animation achieves.
This piece was created as part of a greater capstone project I did during my time at the Pembroke Hill School in Kansas City, Mo. In this project, I explored and compiled language about the interactive music found in video games and placed this music within the greater contexts of academic electronic music and the internet. To put the ideas I had written about into practice, I created an adaptive score for a brief demonstration in Unity 3D created by my friend and colleague, John Shorter. The piece is composed of several layers of fixed media, ranging from granulated recordings of a Vintage Stock to square waves reminiscent of the Commodore 64 SID chip to instrumental samples that I felt evoked more modern interactive experiences like Minecraft and Portal 2, among others. These layers change in volume in relation to the user’s position within Shorter’s interactive environment, creating a unique musical experience that relies on my ceding compositional agency to the user. As a separate exercise, I composed a ten-minute fixed media arrangement of the audio layers akin to a video game OST.
When prompted by Oberlin’s TIMARA department faculty to create an audio work to accompany a video of The Four Seasons’ Canon: Spring choreographed by Crystal Pite, I was directed to use techniques that would force me out of my comfort zone. This resulting piece consists of my voice recorded through a granular delay patch I made in SuperCollider while studying at the SPLICE institute with Professor Joo Won Park. I felt that the texture of my voice through this patch complemented the organic but alien qualities of the dance; furthermore, my voice and SuperCollider had never appeared in my work before this piece.
This piece was composed under the tutelage of Dr. Michael Miller as an exploration of the string quartet and my first piece of strictly acoustic music. Guided by the principles of minimalist works such as Philip Glass’ Facades and the second movement of John Adams’ String Quartet, among others, this piece establishes an interpretation of sonata form from a perspective that is largely ignorant of the extensive classical traditions it exists within. Though it may stray from formal classical and contemporary traditions, it gave me an introduction to the language and practice of those traditions, which I feel informs my work in electronics and will allow me to fuse my experiences of music across traditions in a meaningful and artful manner.
This piece was composed under the tutelage of Dr. Michael Miller as an exploration of the interaction between voice and electronics. Though I had worked with voices and electronics in isolation, this piece allowed me to lend the narrative properties of the spoken word – in this instance provided by a dramatic vocal interpretation of Lewis Carroll’s Jabberwocky by my friend and colleague Nicholas May – to my technical knowledge of synthesis and sound design. I aimed to create timbral motifs to accompany narrative elements of the poem. Metallic gestures from the vorpal blade and harsh, vocal square waves from the Jabberwock sit atop a cheesily rendered, science fiction influenced soundscape to create a sonic environment that lends a different interpretation to a poem I had grown up reading.
2021
My first investigation of a synthesized soundscape, this piece aims to create an immersive environment with narrative form using fixed media. Its creation was guided by Dr. Michael Miller, who provided me with language about a listener’s relationship to sounds as they move both between reality and abstraction and spatially within the stereo field. I used a combination of field recordings and synthesized elements to create sonic material, which I then manipulated spatially with volume automation, reverb, and HRTF panning via Blender’s sound source objects and sound listener object. Formally, the piece aims to put the listener in the perspective of a person who moves from an urban environment, represented with realistic sounds and spatialization, to a green space, which is rendered with a more nebulous and melodic soundscape.
This piece was created right after I received a Roland Ju-06a as a birthday present from my father. I was inspired by the spontaneous nature of sound creation I could achieve with this synth, especially compared to my processes with the software synths I had been used to. Though the Juno was much less powerful than Xfer Serum or ImageLine Sytrus, for instance, its hardware controls and chorus created a space where I could make decisions that resulted in sounds that continuously inspired further decisions. To explore the strengths of this hardware, I arranged some MIDI notes in FL Studio and recorded eight takes in which I fed the MIDI to the Juno while manipulating its arpeggiator, shaping its volume and filter with envelopes and LFOs that I also manipulated, and adjusting the volume, filter cutoff, and resonance directly. I then mixed all of these takes in FL Studio, arranging them in the stereo field to draw focus to what I thought were the most interesting parts of each take. This approach made form an emergent property of the process rather than something that determined the piece’s direction, and made for a rewarding process.
2020
Creative Stories from Page to Stage is the name of a theater pedagogy project that Kansas City, MO actress Nicole Marie Green received grant funding to produce. For this project, Green worked with young children to draft a set of stories which she would coach the children to realize on stage. Unfortunately, Green was required to pivot as a result of the COVID-19 pandemic.
Green's solution for finishing the project while abiding by the health guidelines of the time was to commission fellow actors from the Kansas City area to voice act the stories. I was commissioned at this point to assist in the recording process remotely, as well as to edit all of the dialogue together with sound effects into four radio-drama-style works, which can be found at the provided SoundCloud link. I was given much creative autonomy for this project, and as my first professional project, it taught me much about professional time management for creative work.
2019
Ianua is the music production alias of Savino Go. Read more about its inspiration, creation, and aims in this article.