I've been experimenting for over 20 years with computer music composition, in a rather strict form. I don't simply use the computer to assist in manual composition, as someone using Ableton to compose dance music does, but rather I'm seeking to generate music directly from computer programs, the only manual intervention being writing the program code; without employing any physical musical instruments; and with no, or at least minimal, manual editing of the resulting music. I call this algorithmic composition: the practice of a human musician playing an instrument and choosing a tune is replaced by the practice of me – a human – choosing algorithms and input data that I think may generate an interesting tune. It would of course be quite straightforward to merely transpose existing conventional tunes into my own computer notation and just have the program play them, but that’s the exact opposite of what I’m trying to achieve.

It's extremely challenging to generate anything that sounds even vaguely musical in such a fashion, because a computer has no sense of melody, harmony or rhythm, and cannot distinguish music from white noise. My creative process becomes the selection of just those algorithms that produce something that sounds to me like music, and then experimenting with different input data sets to find those that make the music sound better. My results vary widely in success, and they will be more or less acceptable to you depending upon your musical tastes and background – in particular how you feel about modern classical music and free jazz. That’s because I enjoy some (by no means all) of both these genres, and so my tolerance for dissonance is certainly far greater than average: actually tolerance is the wrong word, since I positively enjoy dissonance when cleverly employed by, say, Bartok, Coltrane, Bill Frisell or Pere Ubu. It’s exciting the way chillies in food are exciting: without them, things eventually seem bland.

Back in 1993 I wrote my own MIDI library in Turbo Pascal which enabled me to output playable MIDI files directly from Pascal code. I wrote Turbo Pascal programs consisting of various loops that set up MIDI tracks and assigned instruments to them, then controlled the pitch, duration and spacing of successive notes in those tracks. When run, such a program applies parameters, some random, some selected by me, that compose a new, original piece of music which is output as a standard MIDI file. The composition process was thus in two parts: first writing the basic program structure, and second repeatedly tweaking the parameters and re-running it until a desirable result was attained. A crucial design decision was not to represent whole notes in my internal notation, but rather to segregate pitch, duration and loudness into separate streams, so that programs could more easily manipulate these values independently of one another. This library contained a lot of mathematical transform functions to reverse, invert, transpose and mutate such streams in the manner of formalist composers like Bach and John Adams.
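To give a flavour of the separate-streams idea, here's a minimal Python sketch. The function and variable names are my own illustration, not the original Pascal library's: the point is simply that pitch, duration and loudness live in separate lists, so a transform can act on one stream without disturbing the others.

```python
# Pitch, duration and loudness kept as separate, parallel streams.

def transpose(pitches, interval):
    """Shift every pitch up or down by a fixed number of semitones."""
    return [p + interval for p in pitches]

def invert(pitches, axis):
    """Reflect every pitch about a fixed axis pitch."""
    return [2 * axis - p for p in pitches]

def retrograde(stream):
    """Play any stream backwards (works on durations too)."""
    return list(reversed(stream))

pitches   = [60, 62, 64, 67]   # MIDI note numbers: C4, D4, E4, G4
durations = [1, 1, 2, 4]       # lengths in beats
loudness  = [80, 80, 100, 60]  # MIDI velocities

# Transform the pitch stream while durations and loudness stay untouched:
answer = invert(transpose(pitches, 7), axis=67)
```

Because each transform is just a list operation, they compose freely: inverting a transposed stream, or reversing only the durations, needs no special-case note handling.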

This indirect approach, using an intermediate MIDI representation, meant that I couldn't create music on the fly as a live performance, the way that dance DJs do. I'm strictly an armchair composer sitting in my room churning out tunes. One advantage that MIDI does offer though is the large selection of General MIDI instruments that I can change at will, so that having found a promising structure I could try it out with any combination of instruments, including orchestral and brass sections, string sections and choral effects.

When Windows took off I was at first tempted to write an interactive version with a graphical user interface, but eventually I decided to stick with a coding-based system, for the infinite flexibility it confers (and because I actually enjoy writing code). I stopped work on the project and took it up again repeatedly over the next decade or so, but when Windows 8 came along I was no longer able to run Turbo Pascal even in a DOS-box, and so had to consider rewriting the system in another language. After flirting with using Ruby, I eventually decided to do it in Python (thanks to a good existing MIDI interface library). The rewrite was hugely successful: Python’s powerful and expressive sequence types and automatic memory management made manipulating the streams so much easier than it had been in Pascal.

I originally started out playing with techniques copied from those used by "minimalist" composers like Philip Glass and John Adams – reflection and inversion of melody elements, making sections that slip relative to each other in phase, fugues too complex to be played by human hands, and fractal structures in which large-scale movements are constructed from smaller elements of similar ‘shape’. Most of the tunes I produced at first were fairly short because a major problem with algorithmic composition is instilling variation in a way that doesn't sound merely arbitrary. The computer is happy to play the same or very similar sounds forever without getting bored, but you or I do get bored pretty quickly. My Python version has encouraged me to create ever longer pieces because it’s so much more flexible in parameterising the sections: for example, through its support for function parameters and lists of functions.
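The phase-slipping technique can be sketched in a few lines of Python. This is my own toy reconstruction, not the Algorhythmics API: two voices repeat the same cell, but one is rotated a step further on each repeat, so the voices drift out of and back into alignment, in the manner of Steve Reich's "Piano Phase".

```python
# Two voices share one pattern; voice2 slips one step per repeat.

def rotate(pattern, n):
    """Rotate a pattern left by n steps, wrapping around."""
    n %= len(pattern)
    return pattern[n:] + pattern[:n]

pattern = [64, 66, 71, 73, 74, 66, 64, 73]  # an 8-note melodic cell

voice1, voice2 = [], []
for repeat in range(4):
    voice1 += pattern                   # the fixed voice
    voice2 += rotate(pattern, repeat)   # slips one step further each time
```

After len(pattern) repeats the second voice comes back into unison with the first, which gives the piece a natural large-scale arch without any hand-written variation.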

I also started to investigate music theory in order to improve the algorithms that I employ. Philip Ball's magnificent book "The Music Instinct" hugely improved my understanding of harmony, by explaining it at both physical and physiological levels. More recently I found Ernö Lendvai’s book “Bela Bartok: An Analysis of his Music” enormously useful for developing rules of harmony. I love Bartok’s music and so was both surprised and pleased to discover that he employed some quite mathematically-oriented methods based on the Golden Section, and he developed a rigorous system of chromatic tonality that translates very well into algorithms.
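As a hedged illustration of how Lendvai's Golden Section analysis translates into code (this is my own toy example, not a method taken from his book or from my system): one can place structural divisions of a piece at successive golden-section points, which for Fibonacci-length pieces lands exactly on smaller Fibonacci numbers.

```python
# Golden-section proportioning of a piece's length, per Lendvai's
# analysis of Bartok's forms. The function is purely illustrative.

PHI = (5 ** 0.5 - 1) / 2  # ~0.618, the golden ratio conjugate

def golden_divisions(total_bars):
    """Split total_bars at the golden section, then split the larger
    part again, giving nested Fibonacci-like proportions."""
    first = round(total_bars * PHI)
    second = round(first * PHI)
    return second, first, total_bars

# An 89-bar piece divides near bars 55 and 34 - all Fibonacci numbers:
divs = golden_divisions(89)
```

Those division points are natural places to put a climax, a key change or a new instrument entry, which is very easy to wire into a parameterised section-building loop.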

My own musical education progressed from black American music in my teens – '50s and '60s R&B, country blues, soul, jazz, progressing to bebop and later to free jazz. From there I made the leap into modernist music via Bartok and Debussy, before returning to a deeper appreciation of the classics via Bach, Mozart, Beethoven and Schubert, through Wagner and R Strauss up to contemporary figures like Leo Brouwer and Dusan Bogdanovic. This highly unorthodox educational trajectory has imbued me with a craving for a particular kind of rhythmic freedom – highly variable, "springy", human-sounding rhythms that remind you of walking, running and staggering, rather than metronomically regular beats. Charlie Parker's lightning scales; Bartok's orchestral handbrake turns; Robert Johnson's manic strums; Danny Richmond and Charles Mingus swapping bass and drum flurries; Sly and Robby dubbed by King Tubby. I find some contemporary electronic dance, house and dubstep very exciting – for example Brandt, Brauer, Frick or Rusko – but that's not a direction I'm seeking for my own works, which rarely have a beat you can state in BPM.

Below you'll find a list of music files created more recently with my new system, which I've now christened "Algorhythmics": some of these are MP3 files you can play by merely clicking, while others are MIDI files that you must load into a sequencer and play or edit as you wish. Meeting a young American muso (thanks Jim) a few years ago turned me onto SoundCloud, which does for music what Flickr does for photos: my SoundCloud album called RubbleDub contains some of my earlier efforts. I've also put out an online album on Bandcamp, called Uneasy Listening, which contains a mix of algorhythmic compositions and classics that I've mangled in Ableton from MIDI files (often adding algorhythmic effects too).

As for the Algorhythmics system itself, I've yet to decide how to distribute it. Giving it away as freeware is one option, but I'd worry about it being appropriated for something dire like generating ad jingles and e-musak. Anyone who's seriously interested in algorithmic music could email me at and we can talk about it. If you're interested and are familiar with Python programming then the very brief introduction file Algorhythmics Intro.pdf below will give you some idea of what's involved.

Here’s a typical example of my Python music code, for a short tune called 'Tango1':

from midiseq import *
from music import *
from instruments import *
from math import trunc
from random import randint, seed

S = midiSeq('tango1.mid', 16)

t = '001100'
d = '123123'
v = 'zazaaz'
m = '104100'
m2 = '010201'
A = Acoustic_Grand_Piano
B = Acoustic_Bass
D = Melodic_Tom
K1 = 64
K2 = 32
sc = (major, minor, bartok, bartok3)

for mx in range(0,4):
    for n in range(0,4):
        S.Exp = 1
        S.addTempo(0, 0, 300 + 100*mx*n)
        p = 'DAEFCA'
        p = rr(p, mx)
        t = up(t, mx-1)
        S.phrase(1, 1, sc[mx], K1, A, S.nSeq(p, t, up(d, mx), v, m))
        S.phrase(4, 4, sc[mx], K2, B, S.nSeq(inv(p), t, d, v, m2))
    for n in range(0,4):
        S.addTempo(0, 0, 300 + 150*mx*n)
        p = 'ADEGEA'
        p = rr(p, mx)
        S.phrase(1, 1, sc[n], K1, A, S.nSeq(p, p, d, v, m))
        S.phrase(4, 4, sc[n], K2, D, S.nSeq(p, t, d, v, m2))
    for n in range(0,4):
        S.addTempo(0, 0, 600 + 60*mx*n)
        p = 'GGDEEA'
        p = rr(p, mx)
        S.phrase(1, 1, sc[mx], K1, A, S.nSeq(p, t, d, v, m))
        S.phrase(4, 4, sc[mx], K2, B, S.nSeq(inv(p), p, d, v, m2))
    for n in range(0,4):
        S.Exp = 0.5
        p = 'GGDEFA'
        p = rr(rev(p), n)
        S.phrase(1, 1, sc[n], K1, A, S.nSeq(p, p, d, v, m))
        S.phrase(4, 4, sc[n], K2, D, S.nSeq(p, t, d, v, m2))

When run, this program outputs the file TANGO1.MID, which can be played on any General MIDI sequencer, or else loaded into a program like Ableton Live for post-processing, to apply better instrumental voices, samples or special effects. Then I output it again as a WAV or MP3 audio file that's more acceptable for publishing to Facebook, SoundCloud or this site. Here's what it sounds like:


Here's a longer tune called 'Insect Dance' which I generated from an Algorhythmics program, then post-processed in Ableton, output as an MP3, and finally combined with an animation into an MP4 for viewing on YouTube:


Primal is the very earliest piece I created, back in 1993. It's based on the first 2000 prime numbers 2,3,5,7,11,13... In fact it was this desire to hear the prime numbers that first sparked my interest in computer music. I couldn't use the absolute values of the primes as pitch or duration information because they increase without limit - the tune would just zoom up into the hypersonic - so instead I based both pitch and duration of the notes on the gaps between successive pairs of primes rather than their absolute values. The result sounded rather too bland and uniform, so I then also made the emphases (i.e. the rhythm) depend on the gaps too, and ended up with a polyrhythmic, highly atonal tune that reminds me a little of some African music. You'd be hard put to whistle it, and I wouldn't recommend dancing to it either unless you are very supple indeed. I deliberately allocated African string and percussion instruments to Primal since those voices bring out this African feel. The file primal.mid is included below: you can download this, and if you know how to operate a MIDI sequencer, can replace these instruments with others of your choice, say harpsichord, orchestral instruments or whatever takes your fancy.
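The prime-gap mapping behind Primal can be sketched in Python (my reconstruction of the idea, not the 1993 Pascal code; the base pitch and the gap-to-duration scaling are illustrative choices):

```python
# Map gaps between successive primes onto pitch and duration,
# rather than using the ever-growing primes themselves.

def primes(n):
    """Return the first n primes by simple trial division."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

ps = primes(2000)
gaps = [b - a for a, b in zip(ps, ps[1:])]  # 1, 2, 2, 4, 2, 4, ...

base = 60  # middle C, an arbitrary anchor pitch
pitches   = [base + g for g in gaps]        # gap -> interval above the anchor
durations = [max(1, g // 2) for g in gaps]  # gap -> note length in beats
```

Because the gaps stay small (mostly 2, 4 or 6) while the primes themselves grow without limit, the resulting melody stays within a playable register instead of zooming up into the hypersonic.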

My late friend and business partner Felix Dennis kindly dedicated to me his poem called The Proof, about the mathematics of prime numbers, and so I employed a vintage voice synthesizer called MonologueW to read his poem over my Primal tune as background. I chose that primitive synth because its voice has a robotic, early Stephen Hawking quality that's been completely lost with superior modern voice synths, which merely sound like perky American ad salespersons:

Here also is a version of Primal that I produced some years later, which renders the same data in a rather more accessible style: