This piece began as a project for Peter Färber’s Composing with and for moving loudspeaker course, with the original inspiration coming from this video, which I came across while navigating a late-night YouTube rabbit hole.
My goal with this piece was to explore the link between sound and process, specifically considering sound as a process through the sonification of natural phenomena/physics via a Blackburn pendulum (a Y-shaped pendulum that traces Lissajous curves in 3D space). To achieve this, I constructed a room-sized dual-axis pendulum with a speaker at the end and a sensor capable of measuring its position (via the angle of the bottom link of the pendulum). The resulting data was sent wirelessly (via Bluetooth in v1.0 and OSC over Wi-Fi in v2.0) to my computer, where it was processed and passed as an input to various sonic processes.
In version 1.0 (January 2019) I used an early prototype from Mictic to measure the angle of the pendulum. The resulting data was output as a stream of MIDI CC values over Bluetooth and read into Max for processing and sonification. Using the angular displacement around the X- and Y-axes, I computed a projection of the pendulum’s position onto the X-Y plane.
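The projection itself amounts to a couple of lines of trigonometry. Here’s a rough Python sketch of the idea; the pendulum length, the CC-to-angle calibration, and the axis conventions are assumptions for illustration, not the exact values from my patch:

```python
import math

def cc_to_angle(cc, max_angle=math.radians(30)):
    # Map a 7-bit MIDI CC value (0-127) to a signed angle.
    # The +/-30 degree range is an assumed calibration, not the real one.
    return (cc / 127.0 * 2.0 - 1.0) * max_angle

def project_to_plane(theta_x, theta_y, length=2.5):
    # Project the speaker's position onto the horizontal X-Y plane.
    # 'length' is a hypothetical pendulum length in metres. A rotation
    # about the X axis displaces the bob along Y, and vice versa.
    x = length * math.sin(theta_y)
    y = length * math.sin(theta_x)
    return x, y
```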
Using this result as an input, I used the nodes object in Max to compute the distance to eight points arranged in a circle around the space. For each node, this object outputs 1 minus the normalized distance to the node’s center when the pointer is within its radius, and 0 otherwise. These outputs controlled the amplitudes of eight oscillators tuned to harmonically related pitches drawn from a scale. A ninth, central node applied a distortion effect to the output signal; the maximum amount of distortion could be adjusted manually over time (via the size of the node’s center). To account for the decreasing magnitude of the pendulum’s deviation over time, I manually adjusted the position and size of the nodes in real time as the process progressed. Finally, the output of each of the eight individual nodes was played from a corresponding speaker arranged in a circle around the audience, while the summed output was played from the swinging speaker.
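As a rough illustration of that mapping (not the actual Max patch), here’s how the node gains could be computed in Python; the node positions and radii are placeholder values:

```python
import math

# Eight nodes arranged in a circle; positions and radii are placeholders.
NODES = [(1.5 * math.cos(2 * math.pi * i / 8),
          1.5 * math.sin(2 * math.pi * i / 8),
          0.6) for i in range(8)]

def node_gains(px, py, nodes=NODES):
    # For each node (cx, cy, r): output 1 minus the distance to its
    # center (normalized by the radius) while the point is inside,
    # and 0 otherwise -- mirroring the behaviour of Max's nodes object.
    gains = []
    for cx, cy, r in nodes:
        d = math.hypot(px - cx, py - cy)
        gains.append(max(0.0, 1.0 - d / r))
    return gains
```

Each gain then scales the amplitude of its oscillator; the ninth, central node works the same way but drives the distortion amount instead.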
In version 2.0 (June 2019) I replaced the sensing system with an Arduino Nano coupled with a 9-DoF IMU that sends out a wealth of orientation data in floating-point format via OSC over Wi-Fi. This implementation offered a substantial improvement over the 7-bit resolution of the Mictic (particularly at the end of the process, when the total deviation was relatively small).
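On the receiving end, reading such a stream takes only a few lines. This sketch uses the python-osc library; the port and OSC address pattern are assumptions for illustration, not the actual message format from my firmware:

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def handle_orientation(address, *args):
    # args arrives as floating-point orientation values from the IMU.
    print(address, args)

dispatcher = Dispatcher()
# The address pattern and port below are assumed for this sketch.
dispatcher.map("/pendulum/orientation", handle_orientation)

server = BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher)
server.serve_forever()
```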
In this version I updated the synthesis method to include two elements:
The swinging speaker played the output of a bank of detuned oscillators (programmed in SuperCollider, using LFNoise to ___). Specific parameters of the oscillators (e.g. __, __, __) were controlled by data derived from the pendulum.
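Since the exact parameter mappings are left blank above, here is only a hypothetical sketch of the general idea: a bank of detuned sine oscillators whose detune drifts under a piecewise-linear noise signal, loosely analogous to SuperCollider’s LFNoise1. All constants are illustrative:

```python
import numpy as np

SR = 48000  # sample rate

def drifting_noise(n, rate_hz, rng=None):
    # Piecewise-linear random signal: a new random target every
    # 1/rate_hz seconds, linearly interpolated between targets.
    rng = rng or np.random.default_rng()
    seg = max(1, int(SR / rate_hz))
    targets = rng.uniform(-1, 1, n // seg + 2)
    t = np.arange(n) / seg
    i = t.astype(int)
    frac = t - i
    return targets[i] * (1 - frac) + targets[i + 1] * frac

def detuned_bank(freq, n_osc, seconds, depth=0.01):
    # Sum of detuned sine oscillators; the detune of each drifts slowly.
    # The 1% depth and 0.3 Hz drift rate are purely illustrative.
    n = int(seconds * SR)
    out = np.zeros(n)
    for _ in range(n_osc):
        detune = 1.0 + depth * drifting_noise(n, rate_hz=0.3)
        phase = np.cumsum(2 * np.pi * freq * detune / SR)
        out += np.sin(phase)
    return out / n_osc
```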
The 24 speakers in the room were again mapped to independent audio channels, this time carrying the result of a composed granular synthesis process. As in the previous version, each channel was ‘activated’ when the pendulum entered its node. The size of each node was inversely proportional to the maximum deviation from the Z-axis over the previous 15 seconds: as the magnitude of the pendulum’s motion decreased over time, the nodes grew, and the granulation ceased to be localized to the area near the moving speaker, tending instead to fill the entire room.
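The adaptive sizing boils down to tracking a running peak over a trailing 15-second window. A minimal Python sketch, with assumed scaling constants:

```python
import time
from collections import deque

class AdaptiveRadius:
    # Node radius inversely proportional to the peak deviation from the
    # Z-axis over the trailing window; all constants are illustrative.
    def __init__(self, window=15.0, k=0.3, r_min=0.3, r_max=3.0):
        self.window, self.k = window, k
        self.r_min, self.r_max = r_min, r_max
        self.samples = deque()  # (timestamp, deviation) pairs

    def update(self, deviation, now=None):
        now = time.monotonic() if now is None else now
        self.samples.append((now, deviation))
        # Drop readings older than the window.
        while now - self.samples[0][0] > self.window:
            self.samples.popleft()
        peak = max(d for _, d in self.samples)
        radius = self.k / max(peak, 1e-6)
        return min(self.r_max, max(self.r_min, radius))
```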
Original sketches
Sensor from v2.0
v1.0 - If it ain't got that swing - January, 2019
v2.0 - It don't mean a thing - May, 2019