Origins

[composition, film music, sonification, live electronics]

Concept and Overview

An exploration of the metamorphosis of space through a live fire and metalworking performance, accompanied by a sonic interpretation of its gestures and images that forms a generative, improvised sound universe. This fixed-media piece sketches possible artistic directions for an upcoming live performance in which the soundscape will be generated in real time with live electronic tools. A collaboration with visual artist and PhD student Yuki Shiraishi.

The project was unfortunately put on hold due to the complexities of the Covid-19 situation, but the live version will ultimately use audio descriptor analysis and computer vision to generate control inputs for live electronic processing and sonification.
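Since the live version is still at the planning stage, the exact toolchain remains undecided; as a rough illustration of the idea, the sketch below uses OpenCV to extract a simple visual descriptor (mean frame brightness) and forwards it over OSC to a synthesis engine. The OSC address, port, and scaling are assumptions, not project specifications.

    # Illustrative sketch only, not the project's actual toolchain:
    # extract a simple visual descriptor (mean frame brightness) from a live
    # camera feed with OpenCV and forward it over OSC to a live-electronics patch.
    import cv2
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 9000)  # assumed address/port of the synthesis engine
    cap = cv2.VideoCapture(0)                    # live camera input

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        brightness = gray.mean() / 255.0         # 0..1 descriptor of overall luminance
        client.send_message("/origins/brightness", float(brightness))
        if cv2.waitKey(1) & 0xFF == ord('q'):    # press 'q' to stop
            break

    cap.release()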

Compositional Process

The process of creating this version of the piece was akin to film scoring, or a kind of abstract sonification. I defined sonic processes and relationships for the various actions in the film based on the associations I formed while watching them, in a sense creating a narrative. Given the theme of the piece, I tried to compose sonic gestures that develop over time, sharing an evolutionary character with the visuals. Ultimately, the goal was to present a vision, or interpretation, of the creation of the universe.

For instance, I imagined the section beginning at 0'54" as a scattering of energy and 'sonified' it with a series of falling pitches that descend further with each successive weld. The section at 1'25" conjured an image of popping bubbles, while the passage at 4'08" evoked solar wind and flares; those metaphors guided the sound design and composition process.
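The actual gestures were designed in Cecilia 5, but the falling-pitch mapping can be shown schematically: in the hypothetical sketch below, each successive 'weld' index triggers a glissando that starts lower and falls further than the last. Frequencies, durations, and the output file name are illustrative only.

    # Schematic illustration of the falling-pitch idea (not how the gestures
    # were actually produced): each weld index yields a deeper glissando.
    import numpy as np
    import soundfile as sf  # assumed available for writing the result

    sr = 44100

    def falling_gesture(weld_index, dur=1.5):
        t = np.linspace(0, dur, int(sr * dur), endpoint=False)
        f_start = 880.0 / (1 + 0.5 * weld_index)     # each weld starts lower
        f_end = f_start / (2 + weld_index)           # and falls further
        freq = np.geomspace(f_start, f_end, t.size)  # exponential glissando
        phase = 2 * np.pi * np.cumsum(freq) / sr
        env = np.hanning(t.size)                     # smooth fade in/out
        return 0.3 * env * np.sin(phase)

    piece = np.concatenate([falling_gesture(i) for i in range(5)])
    sf.write("falling_welds.wav", piece, sr)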

Technical Implementation

We recorded several trials during March and April of 2019, capturing the audio with two overhead condenser mics in an X-Y configuration and two dynamic mics, one on either side. After Yuki cut together the video during the summer, I scored the film iteratively, processing the original trial recordings with Ajax Sound Studio’s Cecilia 5 software, in particular the Ultimate Grainer module, along with various effects included with the software. The individual gestures were then layered and mixed in Adobe Audition.
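Cecilia 5 is built on top of the pyo DSP library by the same developer; to give a feel for the kind of granular processing involved, here is a minimal pyo sketch. The parameters are illustrative rather than the actual Cecilia settings used in the piece, and the source filename is hypothetical.

    # Minimal granular-processing sketch in pyo, the DSP library underlying Cecilia 5.
    # Parameters are illustrative, not the settings used in the piece.
    from pyo import Server, SndTable, HannTable, Phasor, Granulator

    s = Server().boot()
    snd = SndTable("welding_trial.wav")          # hypothetical source recording
    env = HannTable()                            # grain envelope
    # Slowly scan the playback position through the whole file.
    pos = Phasor(freq=snd.getRate() * 0.05, mul=snd.getSize())
    grains = Granulator(snd, env,
                        pitch=0.8,               # slight downward transposition
                        pos=pos,
                        dur=0.2,                 # grain duration in seconds
                        grains=24,
                        mul=0.3).out()
    s.gui(locals())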

In December 2019, the piece premiered at Lange Nacht at ZHdK. I performed a live diffusion using software developed by Adrien Zanni that created a ‘traveling reverb’, inspired by Luigi Nono’s production of Prometeo at the San Lorenzo Church in Venice, Italy, in 1984. The stereo mix fed three reverb-processing busses: one that filled the room uniformly, one in which the delayed sound traveled from the front of the space toward the back along the left side of the room, and one in which it followed a similar path along the right side. By independently controlling the send and return levels of each bus, I was able to create a far more dynamic, real-time ambisonic diffusion from the original stereo mix.
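Each traveling bus can be understood as a chain of delay taps, each reverberated and routed to successive speakers along one side of the room. The sketch below, written in pyo purely for illustration (it is not Adrien Zanni's implementation), models one such left-side bus; the speaker channel numbers, delay spacing, and reverb settings are assumptions.

    # Conceptual sketch of one "traveling reverb" bus: successive delay taps of
    # the stereo mix are reverberated and sent to speakers placed front-to-back
    # along the left side of the room. All values are illustrative assumptions.
    from pyo import Server, SfPlayer, SigTo, Delay, Freeverb

    s = Server(nchnls=8).boot()                          # assumed 8-speaker setup
    mix = SfPlayer("origins_stereo_mix.wav", loop=True)  # hypothetical stereo mix

    send_left = SigTo(value=0.5, time=0.1)               # performer-controlled send level
    left_channels = [0, 2, 4, 6]                         # assumed left-side speakers, front to back

    taps = []
    for i, chnl in enumerate(left_channels):
        tap = Delay(mix.mix(1) * send_left, delay=0.12 * i, maxdelay=1.0)
        verb = Freeverb(tap, size=0.8, damp=0.4, bal=1.0, mul=0.5)
        verb.out(chnl=chnl)                              # later speakers receive later taps
        taps.append(verb)

    s.gui(locals())

Raising or lowering send_left.value in performance would control how strongly the sound appears to travel down that side of the room, analogous to riding the bus send on a mixer.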

Photos

Media