SPACE: Silence in Speaking

If spaces in texts are used to separate words, then silence is the SPACE in our spoken communication.

1. From Simple Beginnings:

With a theme such as SPACE it is easy to go with the most straightforward notion or interpretation attached to the concept. I won't deny that my initial thoughts immediately drifted along two major branches:

Anyone who knows me will know I immediately felt a strong pull towards the second, more down-to-earth approach. While I like discussions and philosophy, I also really like concrete, solid examples and clear logical states: that is the programmer in me. But this time I wanted to make sure that I would fully explore the possibilities of the theme, and after some reasoning I landed on the following thought: if spaces in texts are used to separate words, then silence is the SPACE in our spoken communication.

Video showing the final concept in action

2. Threefold Layered Complexity

With this initial concept - using the silences/spaces in spoken communication as an input - I started thinking about how I would embed it in an actual project. I decided that I wanted the theme to be relevant on multiple layers, or with multiple meanings, and since I strongly believe in the rule of three (as any storyteller does) I gave the embedding three levels:

After this reasoning I was happy with the core concept and decided it was time to put theory into practice. It is with these three layers as described above that I decided to embark on my journey into the technical side of the project.

Figure 1. An overview of what will be the final product.


3. Technological Exploration & Frustration

One of the few requirements for this assignment was, besides having to do something with the theme SPACE, the use of openFrameworks as the development platform. This meant that I would finally have to learn C++, a task I was partly looking forward to and partly dreading. I will document my development process below:

Installation: The installation should have been straightforward - I mean, how hard can it be? It turns out that Visual Studio Community Edition takes up a whopping 23GB for the full install, including openFrameworks. This was a bit much for my poor 128GB SSD, so I had to remove a lot of programs just to run this one IDE. After messing with the settings I managed to get the OF examples running, so naturally everything would be easy from here on.

Figure 2. Yamaha Steinberg ASIO USB driver was NOT good friends with OF.

Drivers & Sound: Since I needed to do some work with audio, I decided to test the microphone first. Luckily I started this process very early, because the first half-day was spent figuring out why my system kept trying to install, configure, and re-install audio drivers every time an OF application launched. No easy fix was found, so after much frustration I removed the proprietary Yamaha ASIO driver for my sound card and just used the standard Windows driver (with much higher latency). A fine start to a project.

Figure 3. Constant assertion errors. After correcting sound card settings this went away.

Perlin Noise: I knew from previous tests in Processing that I wanted to generate a circular object with multiple octaves of Perlin noise, where each octave adds a certain amount of detail to the overall landscape. The process was straightforward: I generated a looping heightmap from the layered noise functions and spread it out as the vertices of a circular object. At this point, as well as at later points, it might be useful to consult my actual code on GitHub. I had some experience with 2D terrain generation before, so this process took only about a day to refine. However, it made me realize that I would need better code organization.

Figure 4. One octave (and a slow one) of Perlin noise that is looped around.
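The octave-summing and looping-around-a-circle idea can be sketched in plain C++. This is a minimal, self-contained sketch: it substitutes a simple hash-based value noise for openFrameworks' ofNoise, and all names and constants are illustrative rather than taken from the actual project.

```cpp
#include <cmath>
#include <cstdint>
#include <utility>
#include <vector>

constexpr float kPi = 3.14159265f;

// Hash-based 1D value noise: a cheap stand-in for ofNoise (assumption:
// the real project uses openFrameworks' built-in noise instead).
static float hashNoise(int x) {
    uint32_t h = static_cast<uint32_t>(x) * 2654435761u;
    h ^= h >> 13; h *= 2246822519u; h ^= h >> 16;
    return (h & 0xFFFFFF) / static_cast<float>(0xFFFFFF); // in [0, 1]
}

// Smoothly interpolated noise at a continuous coordinate.
static float valueNoise(float x) {
    int i = static_cast<int>(std::floor(x));
    float f = x - i;
    float t = f * f * (3.0f - 2.0f * f); // smoothstep interpolation
    return hashNoise(i) * (1.0f - t) + hashNoise(i + 1) * t;
}

// Sum several octaves: each octave doubles the frequency and halves the
// amplitude, adding finer detail to the overall landscape.
float octaveNoise(float x, int octaves) {
    float sum = 0.0f, amp = 1.0f, freq = 1.0f, norm = 0.0f;
    for (int o = 0; o < octaves; ++o) {
        sum  += amp * valueNoise(x * freq);
        norm += amp;
        amp  *= 0.5f;
        freq *= 2.0f;
    }
    return sum / norm; // normalized back to [0, 1]
}

// Spread a looping heightmap around a circle as planet vertices.
// Sampling the noise at (cos, sin)-derived coordinates makes the
// heightmap periodic, so the outline closes on itself seamlessly.
std::vector<std::pair<float, float>> planetOutline(
        int segments, float baseRadius, float amplitude, int octaves) {
    std::vector<std::pair<float, float>> verts;
    for (int s = 0; s < segments; ++s) {
        float angle = 2.0f * kPi * s / segments;
        // Two phase-shifted periodic samples keep the loop continuous.
        float n = 0.5f * (octaveNoise(4.0f * std::cos(angle) + 10.0f, octaves)
                        + octaveNoise(4.0f * std::sin(angle) + 20.0f, octaves));
        float r = baseRadius + amplitude * n;
        verts.push_back({r * std::cos(angle), r * std::sin(angle)});
    }
    return verts;
}
```

Because cos and sin are periodic, the first and last vertices meet without a seam, which is exactly the "looping heightmap" property needed for a circular planet.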

C++ / VS / Classes: I would like to preface this section by saying that I am a major lover of OOP; classes are a must for me. So naturally, after the initial Perlin noise example was running, I wanted to refactor it into proper OOP. This was not to be. Visual Studio kept being very annoying, even after I understood the structure of C++ includes. For some reason the virtual filesystem that VS was using made the whole ordeal extraordinarily painful, so much so that after an entire wasted afternoon I decided to just stick with one .cpp and one .h file. At the time of writing that's >800 lines across two files, of which >600 are in the .cpp file.

Audio Mangling: The audio input capture (inspired by an example here) sends audio buffers of the required buffer size to a function. There we calculate all kinds of variables, such as the loudness level. Over the course of several frames (during which we receive tons of buffers) we constantly store the frame-by-frame average microphone level, which is later also used to calculate an average across the recorded clip. This microphone level is used to detect silences (beneath a certain threshold), which count as the SPACE in our speech. These and other variables are then plugged into the octave settings of the land generator. The variables are:

Figure 5. Multiple (4/5, IIRC) octaves of perlin noise land generation and 3 octaves for water.
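The per-buffer loudness tracking and silence detection described above might look roughly like the following sketch. It assumes RMS as the loudness measure, and the struct and function names are my own placeholders, not the project's actual identifiers.

```cpp
#include <cmath>
#include <vector>

// Per-buffer RMS loudness, as an openFrameworks audioIn() callback
// might compute it for each incoming buffer of samples.
float bufferLoudness(const std::vector<float>& buffer) {
    if (buffer.empty()) return 0.0f;
    double sum = 0.0;
    for (float s : buffer) sum += static_cast<double>(s) * s;
    return static_cast<float>(std::sqrt(sum / buffer.size()));
}

// Tracks a running average across buffers and flags silences,
// i.e. the SPACEs between spoken words.
struct SilenceDetector {
    float threshold;     // RMS levels below this count as silence
    double total = 0.0;  // sum of per-buffer levels seen so far
    int buffers = 0;     // number of buffers seen so far

    explicit SilenceDetector(float t) : threshold(t) {}

    // Feed one buffer; returns true if it registers as silence.
    bool feed(const std::vector<float>& buffer) {
        float level = bufferLoudness(buffer);
        total += level;
        ++buffers;
        return level < threshold;
    }

    // Average microphone level across the whole recorded clip so far.
    float clipAverage() const {
        return buffers ? static_cast<float>(total / buffers) : 0.0f;
    }
};
```

Silent buffers (the SPACEs) and the clip-wide average would then feed into the octave settings of the land generator.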

Hardware: Initially I intended to use a separate microphone, preferably a vocal microphone such as an SM58; however, during development it turned out that the laptop's internal microphone was more than sufficient. Using the internal microphone decreases the footprint of the work, which can be seen as both a good and a bad thing, but most importantly it makes the project easier for other people to reproduce. For the sound output it is probably best to use either good headphones or speakers via an aux cord, but this is not a must. Overall the whole work has been designed to allow for novel forms of input (sound) through standardized means.

Figure 6. Planet name generation and clouds are now in place.

Sound Design: It soon became clear that the work was missing something, and that something turned out to be good sound design. I fired up my DAW and started designing various pads, drones and hits to convey the message of SPACE. When a planet is showing, the music is intentionally very space-y and drone-like; this is then set against the busier sound of the recording section, which contrasts with the relaxing nature of the drone. Both transitions are reinforced by a percussive sub-hit to add some weight.

Figure 7. Every planet now has a chance of getting 0-3 moons/satellites.

UI and Story Design: At this point the experience felt very satisfying, which was what I was originally aiming for; however, the screen felt a little lackluster. In the earliest stages I had already added a random name generator, and in the final stages I decided to expand on this idea by generating various pieces of information, such as size, resources and a description, and displaying them in a rudimentary UI. This UI also shows some minor debug info to help me in the inevitable last-minute debugging that was sure to come. The description is randomly generated from several sentences, but a lot of care has been put into the 'story' aspect of these to convey the appropriate 'explorer' message. Of course, the SPACEing of these sentences has been meticulously checked.
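The random description generator can be sketched as picking one fragment from each of several lists and joining them. The fragment texts below are invented placeholders to show the mechanism, not the project's actual sentences.

```cpp
#include <random>
#include <string>
#include <vector>

// Assemble a planet description from sentence fragments. Each call
// picks one opener, one feature and one closer at random, so a small
// set of fragments yields many distinct 'explorer'-flavored sentences.
std::string describePlanet(std::mt19937& rng) {
    static const std::vector<std::string> openers = {
        "Scans reveal", "Early probes report", "The survey team notes"};
    static const std::vector<std::string> features = {
        "vast silicate plains", "shallow methane seas", "jagged iron ridges"};
    static const std::vector<std::string> closers = {
        "awaiting further exploration.", "untouched by any known species.",
        "rich in unclassified resources."};

    auto pick = [&](const std::vector<std::string>& v) {
        std::uniform_int_distribution<size_t> d(0, v.size() - 1);
        return v[d(rng)];
    };
    return pick(openers) + " " + pick(features) + ", " + pick(closers);
}
```

The same pick-from-lists approach covers the planet name, size and resources fields; only the fragment pools differ.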

Figure 8. Another example of a finished planet.

Useful Links

- Video on YouTube showing the work in action (soon to follow).

- The code on GitHub

- All audio files designed for this project on my own website

- The font used, called Nasalization, from daFont. It's a tribute to the NASA font style.