[Image: audio rack]
The CAVE2 audio system is composed of 22 separate audio channels providing localized audio for the visual content.
[Images: Genelec 7050B; Superclamp speaker support; Genelec 6010A]
All 22 channels are connected to a digital-to-analog converter controlled by a standalone audio computer (Mac Pro) through a MADI interface (RME). MADI (Multichannel Audio Digital Interface) allows the transmission of 64 channels of 24-bit audio at sample rates of up to 48 kHz. This protocol is particularly desirable because it supports the relatively long cable runs our installation requires.
The audio computer runs CAVE2 Sound Server, a custom application written in SuperCollider, an open source programming language and environment for real-time audio synthesis and algorithmic composition. CAVE2 Sound Server is controlled through Open Sound Control (OSC) messages.
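OSC messages have a simple binary wire format: a null-padded address pattern, a type-tag string, then the arguments in big-endian form. As a sketch, a hypothetical "/play" message could be encoded by hand as follows; the address, type tags, and arguments here are illustrative, not the actual CAVE2 Sound Server protocol.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Append a string terminated and padded with NULs to a 4-byte boundary
// (the OSC 1.0 alignment rule).
static void oscString(std::vector<uint8_t>& buf, const std::string& s) {
    buf.insert(buf.end(), s.begin(), s.end());
    buf.push_back(0);                          // terminating NUL
    while (buf.size() % 4 != 0) buf.push_back(0);
}

// Append a 32-bit float in big-endian byte order.
static void oscFloat(std::vector<uint8_t>& buf, float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);
    for (int shift = 24; shift >= 0; shift -= 8)
        buf.push_back(static_cast<uint8_t>(bits >> shift));
}

// Build a hypothetical "/play" message: sample name plus an x/z position.
std::vector<uint8_t> buildPlayMessage(const std::string& sample,
                                      float x, float z) {
    std::vector<uint8_t> buf;
    oscString(buf, "/play");   // address pattern
    oscString(buf, ",sff");    // type tags: one string, two floats
    oscString(buf, sample);
    oscFloat(buf, x);
    oscFloat(buf, z);
    return buf;
}
```

In practice a library such as the one used by the OmegaSound API handles this packing; the point is only that each command is a compact, self-describing datagram.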
One of the primary goals at the outset of software development was a simple set of commands for interacting with the sound server, facilitating playback and positioning of audio objects in virtual space. Individuals developing for CAVE2 access these commands via a C++ sound API.
The OmegaSound API abstracts many of the features necessary for interacting with the CAVE2 Sound Server (e.g. position of sound object relative to listener, location in virtual space), as well as formatting and sending OSC network messages. The result is a very simple set of commands to play, stop, or update sounds in virtual space.
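A typical interaction through such an API might look like the following sketch. The class, method names, and OSC addresses here are illustrative stand-ins, not the actual OmegaSound signatures; the stand-in simply records the messages it would send to the sound server.

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Illustrative stand-in for an OmegaSound-style sound object: each call
// is translated into an OSC-like text message destined for the sound
// server (collected in a log here instead of being sent over the network).
class SoundInstance {
public:
    SoundInstance(std::vector<std::string>& log, std::string sample)
        : log_(log), sample_(std::move(sample)) {}

    void play() { send("/play"); }
    void stop() { send("/stop"); }

    // Update the object's position in virtual space.
    // "/setObjectLoc" is a placeholder address, not the real protocol.
    void setPosition(float x, float y, float z) {
        std::ostringstream os;
        os << "/setObjectLoc " << sample_ << ' ' << x << ' ' << y << ' ' << z;
        log_.push_back(os.str());
    }

private:
    void send(const std::string& addr) { log_.push_back(addr + ' ' + sample_); }
    std::vector<std::string>& log_;
    std::string sample_;
};
```

The developer-facing surface stays this small precisely because the sound server, not the application, owns the panning and mixing logic.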
Because SuperCollider (and consequently the CAVE2 Sound Server) consists of a separate audio server and language interpreter, which intercommunicate via OSC, the incoming OSC messages sent by the OmegaSound API are received by the language interpreter, which then evaluates functions that send corresponding messages to the audio server.
This pipeline allows CAVE2 application developers to send basic high-level commands (such as “/play”), along with a few details about the file and its location in virtual space, to the CAVE2 Sound Server. The language interpreter converts these commands to the lower-level messages understood by SuperCollider's audio server. Mono sound objects are placed in space using first-order 2D Ambisonics, which takes the X and Z location of the sound object relative to the listener, along with a W component corresponding to the object's sound pressure, and in turn routes it to the appropriate speaker(s) in CAVE2 at the appropriate amplitude.
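The panning math itself is compact. The sketch below encodes a mono source at a horizontal offset from the listener into first-order 2D ambisonic components, then decodes to per-speaker gains with a basic sampling decoder over an evenly spaced ring; the speaker count is a placeholder, and this is a generic textbook formulation, not necessarily the exact decoder the CAVE2 Sound Server uses.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// First-order 2D ambisonic pan: derive the source azimuth from its X/Z
// offset relative to the listener, encode into W (omnidirectional
// pressure) and two horizontal velocity components, then decode to gains
// for numSpeakers evenly spaced on a ring. The gains of all speakers sum
// to 1 for a unit-amplitude source.
std::vector<float> panGains(float srcX, float srcZ, int numSpeakers) {
    const float kPi   = 3.14159265358979f;
    const float theta = std::atan2(srcZ, srcX);       // source azimuth

    const float w = 1.0f / std::sqrt(2.0f);           // pressure component
    const float x = std::cos(theta);                  // velocity components
    const float y = std::sin(theta);

    std::vector<float> gains(numSpeakers);
    for (int i = 0; i < numSpeakers; ++i) {
        const float phi = 2.0f * kPi * i / numSpeakers;  // speaker azimuth
        gains[i] = (2.0f / numSpeakers) *
                   (w / std::sqrt(2.0f) + x * std::cos(phi) + y * std::sin(phi));
    }
    return gains;
}
```

A source directly in line with a speaker yields the largest gain at that speaker, with the contribution falling off (and going slightly negative behind the listener, which is characteristic of basic first-order decoders).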
The end result is an audio system that provides:
playback of mono audio samples
positioning of samples in virtual space
updating of sample parameters (e.g. location)
matching the size of a sound object to that of its corresponding visual object
playback of ambient stereo or mono sources
application of reverb to sound objects, with controls for wet/dry mix and overall effect level
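The wet/dry control in the last item typically reduces to a linear crossfade between the unprocessed (dry) and reverberated (wet) signals; a minimal per-sample sketch, with the reverb itself elided:

```cpp
#include <cassert>
#include <cmath>

// Linear wet/dry crossfade: mix = 0 passes only the dry signal,
// mix = 1 only the reverberated (wet) signal.
float wetDryMix(float dry, float wet, float mix) {
    return dry * (1.0f - mix) + wet * mix;
}
```

The overall effect level is then just a gain applied to the mixed result.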
The physical realities of the CAVE2 environment presented some challenges, particularly the detrimental acoustic effects of its large polygonal glass surfaces and hard flooring. Installing specially selected acoustic ceiling and carpet tiles helped mitigate these acoustic reflections.
See the Environment page for details on acoustic treatments.
See CAVE2assy.xls 'Audio' Tab for component details and BOM.
A loudspeaker is mounted at the top of each column of displays and oriented toward the viewer. The result is a full ring of audio approximating a cylinder centered within CAVE2, with a diameter and height matching the 3D viewing range of an average viewer. The subwoofers are located just outside the CAVE2 cylinder along the floor, offset 90 degrees from the center of the visual environment.
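Given the column count and ring radius, the mounting position and azimuth of each speaker on the ring follow directly; the count and radius below are placeholders rather than the actual CAVE2 dimensions (see the BOM for those).

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct SpeakerPos { float x, z, azimuthDeg; };

// Positions of speakers evenly spaced on a horizontal ring of the given
// radius, each implicitly oriented toward the ring's center. The count
// and radius arguments are placeholders, not actual CAVE2 values.
std::vector<SpeakerPos> speakerRing(int count, float radiusM) {
    const float kPi = 3.14159265358979f;
    std::vector<SpeakerPos> ring(count);
    for (int i = 0; i < count; ++i) {
        const float a = 2.0f * kPi * i / count;     // azimuth in radians
        ring[i] = { radiusM * std::cos(a), radiusM * std::sin(a),
                    a * 180.0f / kPi };
    }
    return ring;
}
```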