The World is Evolving Around Me
(2022)
In Collaboration with Qingyu Zhang
KEYWORDS:
Max/MSP; Spat5; Real-time Head Tracking; Binaural and Multichannel Playback
The World is Evolving Around Me points to a distinctly modern trouble: the information explosion. People face an endless influx of intrusive auditory stimuli: notification sounds from mobile apps, office chatter, the news stream... As this sonic disorientation is re-created through adaptive, immersive playback, the installation becomes a parody of the mass of information people cannot escape today: at times it brings the illusion of standing at the center of the universe, and more often than not, overwhelming confusion.
The NVSonic head tracker is composed of an MPU orientation sensor, which outputs gyroscope and accelerometer data, and a Pro Micro board with an ATmega32U4 microcontroller, which processes the sensor data according to its precompiled firmware.
A customized OSC bridge was built for the NVSonic head tracker with JUCE; it interprets and translates the gyroscope and accelerometer data into quaternion or Euler coordinates and relays the orientation messages to Max/MSP via the OSC protocol.
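The quaternion-to-Euler step of the bridge can be illustrated outside of JUCE. The following is a minimal Python sketch of the standard ZYX (yaw-pitch-roll) conversion, assuming w-first quaternion order and degrees as output; it is not the installation's actual bridge code.

```python
import math

def quaternion_to_euler(w, x, y, z):
    """Convert a unit quaternion (w, x, y, z) to yaw/pitch/roll in degrees (ZYX convention)."""
    # roll: rotation about the x-axis
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    # pitch: rotation about the y-axis, clamped to avoid domain errors near gimbal lock
    s = max(-1.0, min(1.0, 2 * (w * y - z * x)))
    pitch = math.asin(s)
    # yaw: rotation about the z-axis
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return tuple(math.degrees(a) for a in (yaw, pitch, roll))
```

For example, the identity quaternion (1, 0, 0, 0) maps to (0, 0, 0), and a 90° rotation about the vertical axis yields a yaw of 90°.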
The virtual listener, the sound sources, and the virtual speakers were created within the Spat5 virtual sound space, which can be visualized and monitored in real time with the spat5.viewer patch window.
In addition to a stereo pad track, short, mono audio clips ("sweeteners") in user-uploaded playlists are randomly triggered and follow trajectories implemented via the spat5.ellipse, spat5.random.poly, and spat5.scaling modules.
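An elliptical trajectory of the kind spat5.ellipse drives can be sketched as a simple parametric generator. This is an illustrative Python stand-in, not the Spat5 object's implementation; the center and radii values are hypothetical.

```python
import math

def ellipse_trajectory(cx, cy, rx, ry, steps):
    """Yield (x, y) points evenly spaced in angle along an ellipse
    centered at (cx, cy) with radii rx and ry."""
    for i in range(steps):
        t = 2 * math.pi * i / steps
        yield (cx + rx * math.cos(t), cy + ry * math.sin(t))

# Example: 8 positions on a 2 m x 1 m ellipse around the listener
path = list(ellipse_trajectory(0.0, 0.0, 2.0, 1.0, 8))
```

In the installation these positions would be streamed to a source's position parameter at the trajectory's update rate.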
Three Modes of Presentation
To change the sound stage orientation:
Send yaw/pitch/roll parameters to spat5.binaural (for stereo) or spat5.hoa.rotate (for HOA) renderers
To sync spat5.viewer visualization with audio orientation:
Route the same yaw/pitch/roll parameters to the listener's perspective by prefixing them with /listener via spat5.osc.prepend
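The prepend step amounts to rewriting the OSC address before relaying the message. A minimal Python sketch of that behavior, with a hypothetical address and example angle values:

```python
def prepend_address(prefix, message):
    """Mimic spat5.osc.prepend: prefix an OSC address, keep the arguments."""
    address, *args = message
    return [prefix + address] + args

# Hypothetical orientation message: yaw, pitch, roll in degrees
msg = ["/orientation/ypr", 30.0, -5.0, 0.0]
listener_msg = prepend_address("/listener", msg)
# -> ["/listener/orientation/ypr", 30.0, -5.0, 0.0]
```

The rewritten message can then drive the listener avatar in spat5.viewer so the visualization stays in sync with the audio rendering.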
6 virtual speakers with coordinates matching the physical speakers at the performance location
6 object-based audio sources in the virtual sound scene:
Sources #1-4 follow spat5.random.poly trajectories; sources #5-6 follow the spat5.ellipse trajectory
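Since the virtual speaker coordinates mirror the physical layout, a symmetric layout can be generated rather than typed by hand. This Python sketch assumes a hypothetical six-speaker ring of 2 m radius; the actual coordinates at the performance venue may differ.

```python
import math

def ring_layout(n, radius):
    """Return (x, y) coordinates in meters for n speakers evenly
    spaced on a circle, with speaker 1 directly in front (0, radius)."""
    coords = []
    for i in range(n):
        az = math.radians(360.0 * i / n)  # azimuth, 0 deg = front, clockwise
        coords.append((round(radius * math.sin(az), 3),
                       round(radius * math.cos(az), 3)))
    return coords

speakers = ring_layout(6, 2.0)
```

Each (x, y) pair would then be sent to the corresponding virtual speaker's position in the Spat5 scene.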
⚠️ Unstable build - proof of concept only
To change the sound stage orientation:
Send yaw/pitch/roll parameters to the spat5.transform object, updated every 100 ms on a timed bang
Freeze the speaker and listener arguments via the spat5.osc.ignore object
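The 100 ms update gate can be modeled as a simple throttle: orientation messages arrive continuously from the tracker, but only one per interval is forwarded. A minimal Python sketch of that gating logic (not a Max patch):

```python
import time

class Throttle:
    """Forward at most one update per interval, like a metro-gated send.
    The interval is in seconds (0.1 = 100 ms)."""
    def __init__(self, interval, clock=time.monotonic):
        self.interval = interval
        self.clock = clock
        self._last = None

    def accept(self, now=None):
        """Return True if an update arriving now should be forwarded."""
        now = self.clock() if now is None else now
        if self._last is None or now - self._last >= self.interval:
            self._last = now
            return True
        return False
```

With a 0.1 s interval, a burst of tracker messages collapses to one forwarded orientation update per 100 ms, which keeps the spat5.transform updates at the stated rate regardless of the sensor's output rate.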