Sonic Monkey

See also: Sound

Sonic Who Wha?

Continuing a long history of forcing bizarre inventions on an unsuspecting world, I've taken the Monkey Robotics flag first into Android territory - with the chaotic sound art toy "Sonic Monkey".

I will be releasing many variations over the coming months - right now the "Drone" and "Spooky" versions are available.

Sonic Monkey in the Google Play Store

Components

Client - Java / Android SDK

The Inspiration: Drone Sound

'Monkey' was, like all my inventions, originally for my own use.

I like sound, a lot. Music too, of course, but also just plain sound. My hearing is very acute, and I deeply appreciate everything from Bach to the sound of a table saw winding down. When you really listen, and are this much of an audiophile, you come to appreciate both equally.

There is a great legacy of artistic use of sound for purposes other than music. Obviously we could reference Lou Reed's Metal Machine Music [link here], or Eno's fathering of "Ambient", but these are musical at least by declaration - even if not to some ears.

Experimental musicians like John Cage and Christian Wolff are relevant too, but their works either were, or evolved into, highly formalized mediums - too structured to be non-musical.

There remains an entirely separate genre devoted to sound for its own sake, sound that is not musical, and structured only loosely if at all.

Somewhere in there lies "Drone", an obscure but not invisible genre serviced by, among others, SomaFM's excellent "Drone Zone" radio show [link!!]. Mostly formless, wandering from "soundscape" to "white noise" and obviously droning ambiance, this is also the territory inhabited by Sonic Monkey.

Yup, that's right. Sonic Monkey is a configurable "drone machine".

PS: If we continued our ventures into pure sound, we would eventually find "Noise" (literally, noise). Yes, such a genre exists. But we're not going that far today...

Design Goals

Aside from knowing what I wanted it to sound like, I started out with certain priorities:

    1. Stay lightweight

    2. Be easily "re-tuned" for different flavors

    3. Draw and keep attention on the sound

    4. Support the visually impaired

    5. Be visually unique

Architecture

Unfortunately, Google makes many of these choices for developers. Sure, technically you could write your app in C++ (or C#? Why?) - but you'd be jumping through many extra hoops, unable to use all the APIs, and basically a second-class citizen in Android Land. They want Java, and their architecture makes that clear, so you suck it up and go with the flow.

This simple single-user app doesn't require any communications, making it an ideal launch pad for a fledgling Monkey Robotics, so my only task was to design a flexible, re-usable method of playing and managing multiple audio samples efficiently.

The conceptual model I came up with is that of an audio mixing board, with "channels" for different audio sources, as if microphones were placed at different places in a "scene". Various "sources" would be generating their usual sounds in those scenes.
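The mixing-board model can be sketched in a few lines. This is a minimal, illustrative version - the names (`Mixer`, `Channel`, `Source`) are my own and not Sonic Monkey's actual classes:

```java
// Minimal sketch of the "mixing board" model: each Channel owns a
// volume and a Source (a "microphone" in the scene), and the Mixer
// sums them into one output. Names are illustrative, not the app's.
import java.util.ArrayList;
import java.util.List;

interface Source {
    double sample(); // next raw sample from this "scene microphone"
}

class Channel {
    final Source source;
    double volume; // 0.0 .. 1.0, morphed over time by the caller

    Channel(Source source, double volume) {
        this.source = source;
        this.volume = volume;
    }
}

class Mixer {
    private final List<Channel> channels = new ArrayList<>();

    void addChannel(Channel c) { channels.add(c); }

    // One output sample: the volume-weighted sum of all channels.
    double mix() {
        double out = 0.0;
        for (Channel c : channels) {
            out += c.volume * c.source.sample();
        }
        return out;
    }
}
```

Morphing a scene then becomes a matter of sliding each channel's `volume` over time, rather than touching the sources themselves.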

If I write it again (and I probably will), I'll use a model much more like musicians in an orchestra. But anyway, on to...

Implementation

I first wrote an "AudioLayer" class, encapsulating the following:

    • Lifecycle management of the "SoundPool" Android object, by which samples are played

    • A "deck" of samples and their "Resource IDs"

    • A set of rules for selecting and preparing samples for use with SoundPool

    • Methods for controlling and "morphing" sounds during playback

The class manages all of the audio preparation, which in Android is significant. For example, samples cannot play until loaded (and decompressed!) by Android, and the time this takes can vary greatly. But I wanted samples to coincide with each other rhythmically, so the AudioLayer class takes care of this by always keeping a new sample loaded and "at the ready", which can be started immediately and on-beat (if desired).
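The "always one sample at the ready" idea looks roughly like this. In the real app this wraps Android's SoundPool, whose loads complete asynchronously; here loading is simulated synchronously so the pattern is self-contained and testable, and all names are illustrative:

```java
// Plain-Java sketch of the preload-ahead pattern: one sample is always
// held "at the ready", so a play request on the beat never waits for a
// load. The real app drives this from SoundPool's async load callback.
import java.util.ArrayDeque;
import java.util.Deque;

class PreloadingLayer {
    private final Deque<Integer> deck = new ArrayDeque<>(); // resource IDs
    private Integer loaded = null; // the sample held "at the ready"

    PreloadingLayer(int... resourceIds) {
        for (int id : resourceIds) deck.add(id);
        preloadNext(); // eagerly load the first sample
    }

    // Stand-in for SoundPool.load() + its load-complete callback.
    private void preloadNext() {
        loaded = deck.isEmpty() ? null : deck.poll();
    }

    boolean readyToPlay() { return loaded != null; }

    // Called on the beat: start the pre-loaded sample immediately,
    // then kick off the next load so we're never caught waiting.
    int playOnBeat() {
        if (loaded == null) throw new IllegalStateException("nothing loaded");
        int playing = loaded;
        preloadNext();
        return playing;
    }
}
```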

As you'd guess from the screenshots, there are several flavors of AudioLayer instances running at any given time. By varying their configuration, they each generate anything from a slowly-evolving subtle background hum or "white noise" to what amounts to a Morning Radio Show DJ's "Sound Board".

Combine these together and morph them slowly over time, and you get a surprisingly natural "ambient audio" effect. Careful selection of samples and 'tuning' can then re-use the same engine to recreate Forest, Ocean, or Outer Space environments. Well, obviously not that last one - that would be silent.

Note: Hilariously, the "MonkeyVision (tm)" screen was never planned at all but rather grew out of my own diagnostic interface, which displayed colored tiles to represent the internal state.

(To be continued...)

Training the Monkey

(How it got configured, which is really the Secret Sauce)

Releasing the Monkey

There must be something to say about releasing it.

It's in The Play Store, yay!

AudioLayer States

// We use a 'state machine' model, with simple switch
// or "State ==" tests. The state is also exposed directly
// via property 'State'. PS: Ask me why these aren't Enums,
// I'll be happy to explain where the C Family got that handy
// type right, while Java got it so very, very wrong. :\
// gad

private static final int STATE_INIT    = 100; // Waiting for setup
private static final int STATE_IDLE    = 200; // Ready for loading
private static final int STATE_LOADING = 220; // Don't interrupt me!
private static final int STATE_LOADED  = 240; // Ready to play
private static final int STATE_LOOPING = 300; // Playing in loop mode
private static final int STATE_PAUSED  = 400; // Paused in any mode
private static final int STATE_STOPPED = 900; // Stopped, ie 'dead'

(...)

public int State = STATE_INIT; // We start thusly
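Here's a sketch of how those int states might gate a transition with a plain switch, as the comments describe. The constant values mirror the listing above, but the transition rules here are illustrative, not the app's exact ones:

```java
// Guarding a load request with the int-based state machine. A switch
// on State accepts the request only from states where loading is safe.
class LayerStates {
    static final int STATE_INIT    = 100;
    static final int STATE_IDLE    = 200;
    static final int STATE_LOADING = 220;
    static final int STATE_LOADED  = 240;
    static final int STATE_LOOPING = 300;

    int State = STATE_INIT;

    // Returns true if the load was started; refuses mid-load requests.
    boolean requestLoad() {
        switch (State) {
            case STATE_IDLE:
            case STATE_LOADED:   // re-loading a fresh sample is fine
                State = STATE_LOADING;
                return true;
            default:             // INIT, LOADING, LOOPING: not now
                return false;
        }
    }

    void onLoadComplete() {
        if (State == STATE_LOADING) State = STATE_LOADED;
    }
}
```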

AudioLayer 'Texas Hold Em'

// After this, 'hand' contains unique random selections
// from 'samples'. This is cheaper and scales better than
// buffer-compare methods.
// gad

for (int i = 0; i < iHandSize; i++) {
    iSample = oRandGen.nextInt(oArraySamples.size());
    oArrayHand.append(i, oArraySamples.keyAt(iSample));
    oArraySamples.removeAt(iSample);
}
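The snippet above relies on Android's SparseIntArray; here is the same draw-without-replacement trick in plain Java collections, so it can run anywhere. The `HandDealer` name is my own, for illustration:

```java
// Draw-without-replacement: each pick removes the chosen element from
// the pool, so duplicates are impossible by construction - no
// buffer-compare needed.
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

class HandDealer {
    static List<Integer> deal(List<Integer> samples, int handSize, Random rand) {
        List<Integer> pool = new ArrayList<>(samples); // don't mutate caller's list
        List<Integer> hand = new ArrayList<>();
        for (int i = 0; i < handSize && !pool.isEmpty(); i++) {
            hand.add(pool.remove(rand.nextInt(pool.size())));
        }
        return hand;
    }
}
```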

AudioLayer and MonkeyViz

// If an "oVizImageElement" handle has been provided
// by the caller, a small (72x72) image is selected at
// random and placed on the tiny TV screen. The alpha
// is set to max, as the mainline is constantly fading it
// out. Each layer does so, resulting in "MonkeyVision (tm)" !
// gad

// Determine if device is "Ice Cream Sandwich" OS or later
boolean bVersionGTEIceCream = (Build.VERSION.SDK_INT >= Build.VERSION_CODES.ICE_CREAM_SANDWICH);

// Later, as AudioLayer begins a new sample...
if (oVizImageElement != null) {
    oVizImageElement.setImageResource(
        oArrayVizImages.get(
            (int) (Math.random() * oArrayVizImages.size())
        )
    );
    // (Re-)set max alpha, if supported device
    if (bVersionGTEIceCream) {
        oVizImageElement.setImageAlpha(255);
    }
}
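The fade half of that loop can be sketched as a tiny state holder: the mainline repeatedly decays the tile's alpha toward zero, and each AudioLayer snaps it back to 255 when a new sample starts. The decay rate and linear curve here are assumptions - the post doesn't show the real app's timing:

```java
// Sketch of the MonkeyVision fade cycle. fadeTick() is what the
// mainline would call on every UI tick; flash() is what a layer
// calls as it begins a new sample. Decay step is illustrative.
class VizTile {
    int alpha = 0; // 0 (invisible) .. 255 (opaque)

    // Called by the mainline on every UI tick.
    void fadeTick() {
        alpha = Math.max(0, alpha - 16); // linear decay, assumed rate
    }

    // Called by a layer as it begins a new sample.
    void flash() {
        alpha = 255;
    }
}
```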

A Brief Visual History of Sonic Monkey