I've been experimenting for over 20 years with computer music composition, in a rather strict form. I don't simply use the computer to assist in manual composition, as someone using Ableton to compose dance music does, but rather I'm seeking to generate music directly from computer programs, the only manual intervention being writing the program code; without employing any physical musical instruments; and with no, or at least minimal, manual editing of the resulting music. I call this algorithmic composition: the practice of a human musician playing an instrument and choosing a tune is replaced by the practice of me – a human – choosing algorithms and input data that I think may generate an interesting tune. It would of course be quite straightforward to merely transpose existing conventional tunes into my own computer notation and just have the program play them, but that’s the exact opposite of what I’m trying to achieve.
It's extremely challenging to generate anything that sounds even vaguely musical in such a fashion, because a computer has no sense of melody, harmony or rhythm, and cannot distinguish music from white noise. My creative process becomes the selection of just those algorithms that produce something that sounds to me like music, and then experimenting with different input data sets to find those that make the music sound better. My results vary widely in success, and they will be more or less acceptable to you depending upon your musical tastes and background – in particular how you feel about modern classical music and free jazz. That's because I enjoy some (by no means all) of both these genres, and so my tolerance for dissonance is certainly far greater than average. Actually, tolerance is the wrong word: I positively enjoy dissonance when cleverly employed by a Bartok, a Coltrane, a Bill Frisell or a Pere Ubu. It's exciting the way chillies in food are exciting: without them, things eventually seem bland.
Back in 1993 I wrote my own MIDI library in Turbo Pascal which enabled me to output playable MIDI files directly from Pascal code. I wrote Turbo Pascal programs consisting of various loops that set up MIDI tracks and assigned instruments to them, then controlled the pitch, duration and spacing of successive notes in those tracks. When run, such a program applies parameters – some random, some selected by me – that compose a new, original piece of music, which gets output as a standard MIDI file. The composition process was thus in two parts: first writing the basic program structure, and second repeatedly tweaking the parameters and re-running it until a desirable result was attained. A crucial design decision was not to represent whole notes in my internal notation, but rather to segregate pitch, duration and loudness into separate streams, so that programs could more easily manipulate these values independently of one another. The library contained many mathematical transform functions to reverse, invert, transpose and mutate such streams in the manner of formalist composers like Bach and John Adams.
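To give a concrete flavour of what these stream transforms do, here is a minimal Python sketch of the three most basic ones – retrograde, inversion and transposition. The names echo the rev, inv and up calls that appear in the listing later in this article, but the bodies are illustrative only: the real library works on note-strings and richer stream types, and its actual signatures differ.

```python
# Illustrative sketches of basic stream transforms; the real library's
# functions operate on note-strings, not plain lists of numbers.

def rev(stream):
    """Retrograde: play the stream backwards."""
    return stream[::-1]

def inv(stream, axis=0):
    """Melodic inversion: mirror each pitch around a pivot."""
    return [2 * axis - p for p in stream]

def up(stream, n):
    """Transposition: shift every pitch up by n steps."""
    return [p + n for p in stream]

motif = [0, 4, 7, 4]
print(rev(motif))    # [4, 7, 4, 0]
print(inv(motif))    # [0, -4, -7, -4]
print(up(motif, 2))  # [2, 6, 9, 6]
```

Because each transform returns a new list, they compose freely – inv(rev(motif)) and so on – which is exactly what makes keeping pitch, duration and loudness in separate streams so convenient.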
This indirect approach, using an intermediate MIDI representation, meant that I couldn't create music on the fly as a live performance, the way that dance DJs do. I'm strictly an armchair composer sitting in my room churning out tunes. One advantage that MIDI does offer though is the large selection of General MIDI instruments that I can change at will, so that having found a promising structure I could try it out with any combination of instruments, including orchestral and brass sections, string sections and choral effects.
When Windows took off I was at first tempted to write an interactive version with a graphical user interface, but eventually I decided in favour of sticking with a code-based system, for the infinite flexibility it confers (and because I actually enjoy writing code). I stopped work on the project and took it up again repeatedly over the next decade or so, but when Windows 8 came along I was no longer able to run Turbo Pascal even in a DOS-box, and so had to consider rewriting the system in another language. After flirting with Ruby, I eventually decided to do it in Python (thanks to a good existing MIDI interface library). The rewrite was hugely successful: Python's powerful and expressive sequence types and automatic memory management make manipulating the streams far easier than it ever was in Pascal.
Here’s a typical example of the Python music code, for a short tune I generated recently called Tango1:
from midiseq import *
from music import *
from instruments import *
from math import trunc
from random import randint, seed
S = midiSeq('tango1.mid', 16)
t = '001100'
d = '123123'
v = 'zazaaz'
m = '104100'
m2 = '010201'
A = Acoustic_Grand_Piano
B = Acoustic_Bass
D = Melodic_Tom
K1 = 64
K2 = 32
sc = (major, minor, bartok, bartok3)
for mx in range(0,4):
    for n in range(0,4):
        S.Exp = 1
        S.addTempo(0,0,300+100*mx*n)
        p = 'DAEFCA'
        p = rr(p,mx)
        t = up(t,mx-1)
        S.phrase(1,1,sc[mx],K1,A,S.nSeq(p,t,up(d,mx),v,m))
        S.phrase(4,4,sc[mx],K2,B,S.nSeq(inv(p),t,d,v,m2))
    for n in range(0,4):
        S.addTempo(0,0,300+150*mx*n)
        p = 'ADEGEA'
        p = rr(p,mx)
        S.phrase(1,1,sc[n],K1,A,S.nSeq(p,p,d,v,m))
        S.phrase(4,4,sc[n],K2,D,S.nSeq(p,t,d,v,m2))
    for n in range(0,4):
        S.addTempo(0,0,600+60*mx*n)
        p = 'GGDEEA'
        p = rr(p,mx)
        S.phrase(1,1,sc[mx],K1,A,S.nSeq(p,t,d,v,m))
        S.phrase(4,4,sc[mx],K2,B,S.nSeq(inv(p),p,d,v,m2))
    for n in range(0,4):
        S.Exp = 0.5
        p = 'GGDEFA'
        p = rr(rev(p),n)
        S.phrase(1,1,sc[n],K1,A,S.nSeq(p,p,d,v,m))
        S.phrase(4,4,sc[n],K2,D,S.nSeq(p,t,d,v,m2))
When run, this program creates the file tango1.mid, which can be played on any General MIDI sequencer, or loaded into a program like Ableton Live for post-processing: applying better instrumental voices, samples and special effects. I can then output it as a WAV or MP3 audio file, which is more suitable for publishing to Facebook, SoundCloud or this site.
I originally started out playing with techniques copied from those used by "minimalist" composers like Philip Glass and John Adams – reflection and inversion of melody elements, making sections that slip relative to each other in phase, fugues too complex to be played by human hands, and fractal structures in which large-scale movements are constructed from smaller elements of similar ‘shape’. Most of the tunes I produced at first were fairly short, because a major problem with algorithmic composition is introducing variation in a way that doesn't sound merely arbitrary. The computer is happy to play the same or very similar sounds forever without getting bored, but you or I get bored pretty quickly. My Python version has encouraged me to create ever longer pieces because it's so much more flexible in parameterising the sections: for example, functions and lists of functions can themselves be passed as parameters.
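The phase-slipping trick mentioned above is easy to sketch: run two loops of co-prime lengths against each other, and their relative alignment drifts through every possible combination before the pattern repeats. This toy example uses bare pitch numbers rather than my system's actual streams:

```python
# Phase-slipping sketch: loops of lengths 3 and 5 drift against each
# other and only realign after 15 (= 3 x 5) steps.
from itertools import cycle, islice

loop_a = [0, 4, 7]           # 3-note loop
loop_b = [0, 3, 5, 8, 10]    # 5-note loop

pairs = list(islice(zip(cycle(loop_a), cycle(loop_b)), 15))
print(pairs[:3])        # [(0, 0), (4, 3), (7, 5)]
print(len(set(pairs)))  # 15 distinct alignments before the cycle closes
```

Because 3 and 5 share no common factor, all fifteen pairings occur exactly once – the same arithmetic that drives Steve Reich-style phasing.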
I also started to investigate music theory in order to improve the algorithms that I employ. Philip Ball's magnificent book "The Music Instinct" hugely improved my understanding of harmony by explaining it at both physical and physiological levels. More recently I found Ernö Lendvai's book "Bela Bartok: An Analysis of his Music" enormously useful for developing rules of harmony. I love Bartok's music and so was both surprised and pleased to discover that he employed some quite mathematically-oriented methods based on the Golden Section, and developed a rigorous system of chromatic tonality that translates very well into algorithms.
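As an illustration of how Lendvai's analysis translates into code: he argues that Bartok placed structural turning points near the golden-section division of a span, and favoured intervals whose semitone sizes are Fibonacci numbers. This is my own simplification of that idea, not a function from my system:

```python
# Golden-section sketch after Lendvai's analysis of Bartok; my own
# simplification, not code from the Algorhythmics system itself.
PHI = (5 ** 0.5 - 1) / 2    # ~0.618, the golden ratio's fractional part

def golden_point(length):
    """Position of the golden-section division of a span of `length` units."""
    return round(length * PHI)

# Intervals of Fibonacci semitone sizes, which Lendvai highlights:
# major 2nd, minor 3rd, perfect 4th, minor 6th, minor 9th
fib_intervals = [2, 3, 5, 8, 13]

print(golden_point(89))   # 55 – and 89 and 55 are themselves Fibonacci numbers
```

A phrase generator can use golden_point to decide where to place a climax or a key change within a section of a given length in bars.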
My own musical education progressed from black American music in my teens – '50s and '60s R&B, country blues, soul, jazz, progressing to bebop and later to free jazz. From there I made the leap into modernist music via Bartok and Debussy, before returning to a deeper appreciation of the classics via Bach, Mozart, Beethoven and Schubert, through Wagner and R Strauss up to contemporary figures like Leo Brouwer and Dusan Bogdanovic. This highly unorthodox educational trajectory has imbued me with a craving for a particular kind of rhythmic freedom – highly variable, "springy", human-sounding rhythms that remind you of walking, running and staggering, rather than metronomically regular beats. Charlie Parker's lightning scales; Bartok's orchestral handbrake turns; Robert Johnson's manic strums; Danny Richmond and Charles Mingus swapping bass and drum flurries; Sly and Robby dubbed by King Tubby. I find some contemporary electronic dance, house and dubstep very exciting – for example Brandt, Brauer, Frick or Rusko – but that's not a direction I'm seeking for my own works, which rarely have a beat you can state in BPM.
Meeting a young American muso (thanks Jim) a few years back turned me onto SoundCloud.com, which does for music what Flickr does for photos, and here's a link to a SoundCloud album of some of my early efforts using the new Python system: it's called RubbleDub. Below you will find a number of music files created more recently with my new Python-based system, which I've now christened "Algorhythmics": some of them are MP3 files you can play by merely clicking, others are MIDI files you can load into a sequencer and play or edit as you wish. As for the system itself, I have yet to decide how to distribute it. Giving it away as freeware is one option, but I worry about it being appropriated for something dire like generating jingles and e-musak. Anyone who is seriously interested in algorithmic music could email me at email@example.com and we can talk about it.
Primal is the very earliest piece I created, back in 1993. It's based on the first 2000 prime numbers – 2, 3, 5, 7, 11, 13 and so on – and in fact the desire to hear the primes was what first sparked my interest in computer music. I couldn't use the absolute values of these primes as pitch or duration information because they increase continuously, so the tune would just zoom up into the hypersonic region; instead I based both the pitch and the duration of the notes on the gaps between successive pairs of primes rather than their absolute values. The result sounded rather too bland and uniform, so I then also made the emphases (i.e. the rhythm) depend on these gaps, and ended up with a polyrhythmic, highly atonal piece that reminds me a little of some African music. You'd be hard put to whistle it, and I wouldn't recommend dancing to it either unless you are very supple indeed. I've deliberately allocated African string and percussion instruments to Primal, since these voices bring out this African feel, but here's the file primal.mid which you can download, and if you know how to operate a MIDI sequencer you can replace the instruments with harpsichord, orchestral instruments or whatever takes your fancy. Here also is a poem called The Proof about the mathematics of prime numbers, written by my late friend and business partner Felix Dennis, who very kindly dedicated it to me. I used a vintage voice synthesiser program called MonologueW to read the poem with my Primal tune playing as background music. I chose that synth because its voice has a robotic, early Stephen Hawking quality that's been completely lost in superior modern voice synths, which merely sound like perky American ad salespersons.
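The prime-gap mapping at the heart of Primal is simple to sketch. This is a reconstruction of the idea rather than the original Pascal code: pitch offsets and durations both come from the gap between successive primes, which stays small even as the primes themselves grow without bound.

```python
# Reconstruction of the Primal idea: derive pitch and duration from
# prime gaps rather than from the ever-growing primes themselves.

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

ps = primes_up_to(100)                      # 2, 3, 5, 7, 11, 13, ...
gaps = [b - a for a, b in zip(ps, ps[1:])]
print(gaps[:8])                             # [1, 2, 2, 4, 2, 4, 2, 4]

# Hypothetical mapping, for illustration only:
pitches = [60 + g for g in gaps]    # gap as an offset from middle C
durations = gaps                    # the same gap also drives note length
```

The irregular but bounded run of gaps is what produces the piece's polyrhythmic, atonal character: never periodic, yet never running off the top of the keyboard.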