With this project, my aim is to build a live electronic system that can be used for solo and collaborative improvisation, functioning as a means of developing a live approach to composition and freedom within live music.
The intention is to build a system that I can continue to evolve and practice with as I grow as an improviser, and that will form the foundation of a longer-term exploration into live electronic music.
Previously, when looking to perform electronics live as part of collaborative improvisations, I have often felt restricted by my chosen systems and setups, which did not feel responsive enough for reacting to or anticipating players with more traditional instrumentation.
For the most part, granular synthesis, played through TouchOSC, MIDI controllers and Max MSP, has been my main approach to these interactions, as it feels the most responsive and dynamic in its handling of time stretching, pitch and layering.
The problem with this lies in a lack of versatility; even with a percussive angle, there has to be a pre-established relationship with collaborators to allow room for the sounds being produced. It also doesn't always feel necessary, given my tendency to collaborate with melodic instrumentation. This results in an inability to evolve a live composition of my own accord, without the facilitation of other players, leaving me restricted to building on atmospheres and ambiences once they have already been determined.
It's for this reason that I would like to begin angling my improvised music towards versatile textural and percussive elements that can build on collaborations with melodic instruments and players in a responsive, non-invasive manner.
To facilitate this, I am keen to take a pre-prepared approach to my system, carefully curating an aesthetically coherent instrument that lets collaborators more or less know where they stand, whilst simultaneously giving me the freedom to grow as a solo musician when exploring sound design and composition as part of my own personal aesthetic.
Example of group improvised work prior to this project (playing granular synthesis and prepared synthesis in Max MSP and FL Studio, using the Akai MIDI Mix):
When thinking about how to approach this project, I immediately thought of the work of 'Elvin Brandhi', an electronic improviser and composer who has built a discography of solo and collaborative improvisation-based music through a signature take on live electronics and sound design.
Investigating her live setup, it appears to be built around an SP-404: cycling through pre-prepared loops and sound design live, whilst also interlacing vocals where she sees fit.
However, when trying to find out more about the origins of the sound design and loops that she is using, I discovered an interview with 'Shape+' where she outlines that she builds "drum racks in Ableton that contain certain ingredients that shift depending on where I’ve been and who I come into contact with. Khanja screams, Blumbergian brass, Omutaba hits, sea gulls, doors slamming, tapes, metal, tools, jackals, violin, YouTube – the personal associations of all these fragments combined determine the track’s character. The drum rack is like the cast list of a psycho-acoustic drama." (www.shapeplatform.eu/2020/screams-brass-seagulls-an-interview-with-elvin-brandhi/)
After doing some digging into the possibilities of the Ableton drum rack, this approach struck me as one that could facilitate my intentions for this project. Not only would it allow me to implement different MIDI controllers and setups, but also to grow and expand my palette of sound depending on the improvisation I'm going into. For example, preparing for a collaboration with a saxophonist would be vastly different from preparing for a collaboration with a bassist and pianist. With this approach, the tangibility and performance would remain the same, whilst the outcome, through some preparation, could be vastly different.
Although this gave me some inspiration as to the type of system I could build, I was still keen to investigate the live processing I could apply once I was building it.
One approach to live processing that I found particularly interesting is the work of 'Ikue Mori', whose innovative and foundational approach to live electronics and percussion treated the drum machine as a sound source that could be distorted into an array of textural and musical ideas in a very responsive way, using "different reverbs or delays and multi-processors that totally twists the sound and makes it come out really different." She explains that she uses "three drum machines together with a mixer, then adding a pedal with pitch control and a Sony effects processor." (furious.com/perfect/ikuemori.html)
Another reference point, expanding on 'Ikue Mori's early explorations into drum machines and pitch- and effect-based processing, is 'Mariam Rezaei', an improvising turntablist who pushes pitch to its breaking point through live scratching and effects such as delay and reverb. Rezaei's highly tactile approach is definitely one that inspires me on a sonic level, with the texture of scratching being an interesting aesthetic and compositional component.
Finally, in terms of the sound design I am looking to implement, I am particularly inspired by the textured, dark and crunching sound worlds of contemporary electronic producers such as 'Iglooghost' (*1), 'Arca' (*2) and 'Parker Corey' of 'By Storm' & 'Injury Reserve' (*3). The common thread in these artists' sound design is the excavation of a particular sample or series of samples: working with library sounds and warping them in a DAW to create as much variation as possible, before sifting through these sound design sessions for the best results. How they manipulate their chosen samples varies, with 'Parker Corey' and 'Arca' focusing more on granulation than 'Iglooghost', who commonly uses re-synthesis and gating. The manner in which they generate sounds is the consistent factor: intricate workflows that allow for extreme variation and processing that can be curated later on.
*1)
*2)
*3)
Initially, I set out by exploring how to assemble a drum rack in Ableton, creating a simple four-sound system of kick drums whilst also looking into choke groups. Choke groups enable sounds in the same group to cut each other off to avoid clashing; e.g. if a kick and a snare were both in choke group 1 and the kick drum played first, the snare (when triggered) would cut off the tail of the kick:
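To make the choke logic concrete, here's a minimal Python sketch (purely illustrative, not Ableton code) of pads in a shared choke group cutting one another off:

```python
# Minimal illustration of drum-rack choke-group logic (not Ableton code).
# Pads in the same choke group silence each other when triggered.

class Pad:
    def __init__(self, name, choke_group=None):
        self.name = name
        self.choke_group = choke_group
        self.playing = False

class DrumRack:
    def __init__(self, pads):
        self.pads = pads

    def trigger(self, name):
        pad = next(p for p in self.pads if p.name == name)
        # Cut off any sounding pad that shares this pad's choke group.
        if pad.choke_group is not None:
            for other in self.pads:
                if other is not pad and other.choke_group == pad.choke_group and other.playing:
                    other.playing = False
                    print(f"{other.name} choked by {pad.name}")
        pad.playing = True
        print(f"{pad.name} playing")

rack = DrumRack([Pad("kick", choke_group=1), Pad("snare", choke_group=1)])
rack.trigger("kick")   # kick playing
rack.trigger("snare")  # kick choked by snare; snare playing
```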
After doing some more research, I discovered that as well as having 128 drum pads available to trigger, you can also stack up to 128 sounds onto a single pad, either triggering them all at the same time or making them interchangeable using macro features, which allow you to "distribute ranges equally"; essentially spreading the line-up of samples so that one can be selected rather than all triggered at once.
Furthermore, to create some variation, I linked the macro knobs to a random envelope, meaning that every time the pad is triggered a different sound from that pad will play:
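As a rough illustration of how "distribute ranges equally" plus a random envelope behaves, here's a small Python sketch; the chain count and sample names are hypothetical:

```python
import random

# Sketch of "distribute ranges equally": a 0-127 macro value selects one of
# the chains stacked on a pad, and randomizing the macro per trigger picks
# a different sound each hit (mimicking the random-envelope mapping).

def select_chain(macro_value, num_chains):
    # Each chain owns an equal slice of the 0-127 macro range.
    width = 128 / num_chains
    return min(int(macro_value / width), num_chains - 1)

chains = [f"sample_{i:03d}" for i in range(16)]  # 16 sounds stacked on one pad

for hit in range(4):
    macro = random.randint(0, 127)  # the random envelope resamples per trigger
    idx = select_chain(macro, len(chains))
    print(f"hit {hit}: macro={macro} -> {chains[idx]}")
```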
With this as a foundation, I continued to flesh out the drum rack in this way, creating banks of sound design in FL Studio with sounds sourced from sample libraries I've been collecting and then implementing them into the drum rack:
From here, I began looking into how I could create more movement when cycling between sounds. To do this, I applied pitch- and reverb-based processing and mapped different parameters to envelopes, making for constant live automation.
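The sort of constant motion I mean can be sketched as control-rate envelopes nudging parameters each tick; the parameter names and ranges below are illustrative, not Ableton's actual internals:

```python
import math
import random

# Sketch of constant live automation: control-rate envelopes/LFOs nudging
# effect parameters (parameter names here are hypothetical).

def lfo(t, rate_hz, lo, hi):
    # Sine LFO scaled into [lo, hi].
    return lo + (hi - lo) * (0.5 + 0.5 * math.sin(2 * math.pi * rate_hz * t))

for step in range(5):
    t = step * 0.1  # 100 ms control ticks
    reverb_size = lfo(t, 0.5, 0.2, 0.9)          # slow cyclic movement
    pitch_shift = random.uniform(-3.0, 3.0)      # random envelope, in semitones
    print(f"t={t:.1f}s reverb_size={reverb_size:.2f} pitch={pitch_shift:+.2f}st")
```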
Furthermore, as I wanted more control over the system's randomization, I began to create some variation snapshots, meaning that some sounds were left to be randomized through an envelope, while others could be curated through a variety of snapshots triggered as and when I pleased:
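A snapshot, in essence, is just a stored set of parameter values that can be recalled over the randomized state. Here's a minimal Python sketch of that store/recall logic (parameter names are hypothetical):

```python
import random

# Sketch of variation snapshots: capture a set of parameter values, then
# recall them on demand instead of leaving everything to the random envelope.

params = {"sample_select": 0, "reverb_size": 0.3, "pitch": 0.0}
snapshots = {}

def capture(name):
    snapshots[name] = dict(params)  # store a copy of the current state

def recall(name):
    params.update(snapshots[name])  # overwrite live state with the snapshot

params.update(sample_select=42, reverb_size=0.8, pitch=-5.0)
capture("variation_A")
params.update(sample_select=random.randint(0, 127))  # randomized state
recall("variation_A")                                # curated state back
print(params)
```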
As I continued in this direction, I was constantly building on and revising the sounds to make a concise instrument, while also feeling out the type of rhythmic ideas the sounds would suit.
I became more detailed in creating movement, applying automated envelopes to the volume of sounds to create dynamics, and arpeggiators to create rhythmic variation:
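Conceptually, the arpeggiator retriggers a held pad on a clock while a decaying envelope shapes each repeat's volume. A small sketch of that behaviour, with made-up note numbers and rates:

```python
# Sketch of the envelope + arpeggiator idea: a held pad retriggers at a set
# rate while a decaying envelope shapes each hit's volume.

def arpeggiate(note, rate_beats, length_beats, decay=0.8):
    events, t, vel = [], 0.0, 1.0
    while t < length_beats:
        events.append((round(t, 3), note, round(vel, 2)))
        vel *= decay       # volume envelope: each repeat gets quieter
        t += rate_beats    # arpeggiator clock
    return events

for time, note, vel in arpeggiate(note=36, rate_beats=0.25, length_beats=1.5):
    print(f"beat {time}: note {note} at velocity {vel}")
```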
At this point, I was keen to bring some more character into the sound selection. Inspired by 'Elvin Brandhi', I recorded some long-form vocal improvisations and then applied the same processing to them that I had been using for the rest of the sound design so far:
From here, I was keen to look into how I could make the system performative. I began by assigning different effects to be toggled on and off with the 'Akai APC Mini'. I also assigned some of the different variations I'd been making to a few of the MIDI controller's pads.
To control the effects, I decided to use the 'Ableton Push 2', as not only did this give me an easy-to-navigate view of the effects, but it also meant that I could sequence patterns live and, with the help of a live-quantize 'Max for Live' device, perform on top of these patterns:
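The core job of a live-quantize device is snapping played note timings to the nearest grid step so they sit on the pattern. A minimal sketch of that idea (a sixteenth-note grid is assumed here):

```python
# Sketch of what a live-quantize device does: incoming note timestamps are
# snapped to the nearest grid division so played notes land on the pattern.

def quantize(time_in_beats, grid=0.25):
    # Round to the nearest grid step (0.25 = sixteenth notes in 4/4).
    return round(time_in_beats / grid) * grid

played = [0.07, 0.49, 1.12, 1.77]  # slightly loose live input
print([quantize(t) for t in played])  # [0.0, 0.5, 1.0, 1.75]
```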
At this stage, I organised a jam with two guitarists to test how playable the machine was in its current form. After the session, it was clear to me that the current performance setup was not responsive enough. Before the session, I had also prepared a synth and live looper in Ableton, which I performed with using the 'scale' functionality of the 'Push 2'. I ended up largely relying on this during the session, as the drum rack was completely unfit for the context.
I think the main flaw was doing the sequencing live: it was far too slow, and by the time I had built up an appropriate pattern, the jam had moved on. On top of this, performing the sounds in response to what was being played felt weightless, and would take a lot of practice to be done well.
Although this didn't go as planned, it showed me that I needed to refine the focus of the system so that I could perform with it in a more intuitive way:
In the meantime, whilst figuring out how to reapproach the system, I continued to build up the drum rack:
With the drum rack almost at the stage I wanted, I began to reinvestigate the playability of the system. Thinking back to 'Ikue Mori' and how she applied processing to the master sound source, I removed the effects from individual pads and began applying them to the master chain, MIDI-mapping their parameters to the 'Akai MIDI Mix', which I found to be a much more intuitive controller than the 'Push 2'. The mapped parameters included BPM, pitch, reverb, granular processing and more. I also mapped the overall macro sound randomizer to the controller so that I could randomize the non-cycling sounds from it:
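The routing here is simple in principle: each knob or fader sends a CC number, which is looked up and scaled into one master-chain parameter. The CC numbers, parameter names and ranges in this sketch are illustrative, not my actual assignments:

```python
# Sketch of the MIDI-mapping idea: each knob/fader on the controller sends a
# CC number, which is routed to one master-chain parameter. The CC numbers
# and parameter names here are hypothetical.

master_chain = {"bpm": 120.0, "pitch": 0.0, "reverb_mix": 0.2, "grain_size": 0.5}

CC_MAP = {
    16: ("bpm",        lambda v: 60 + (v / 127) * 120),   # 60-180 BPM
    17: ("pitch",      lambda v: (v / 127) * 24 - 12),    # +/-12 semitones
    18: ("reverb_mix", lambda v: v / 127),
    19: ("grain_size", lambda v: v / 127),
}

def on_cc(cc_number, value):
    if cc_number in CC_MAP:
        param, scale = CC_MAP[cc_number]
        master_chain[param] = scale(value)

on_cc(17, 96)  # turning the "pitch" knob
print(master_chain)
```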
I then got the drum rack to a point I was happy with, with 32 of the pads having different banks assigned to them. At this stage, I had also decided that instead of playing the sounds live, it was better to prepare a variety of MIDI patterns which I could launch live, allowing me to focus on the processing. Although this sounds limiting, since I now had control of the global BPM, making the MIDI patterns play as intended was quite intuitive. This was also because I had created the MIDI patterns for different contexts, e.g. on-grid, stripped-back and free (I go over this further in 'The Final System' video walkthrough at 6:36).
Following this, I arranged all of the sounds into 4 choke groups and created 4 empty pads assigned to these groups, meaning that when working on a MIDI pattern I could be precise about when I wanted a sound to cut out, without having to trigger another sound from its group.
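The trick is that an empty pad still chokes its group, so triggering it silences whatever is sounding without playing anything new. A tiny sketch of that logic:

```python
# Sketch of the "empty choke pad" trick: triggering a silent pad that shares
# a choke group cuts whatever is sounding in that group, giving precise
# cut-offs in a MIDI pattern without sounding another sample.

playing = {1: "kick_loop", 2: None, 3: "texture", 4: None}  # group -> sound

def trigger_choke_pad(group):
    if playing.get(group):
        print(f"cut {playing[group]} (choke group {group})")
    playing[group] = None  # the pad itself is empty, so nothing new sounds

trigger_choke_pad(1)  # cut kick_loop (choke group 1)
```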
Finally, I added a noise gate to the master effects chain, with the threshold controllable via the 'Akai MIDI Mix':
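A noise gate simply mutes the signal while it sits below a threshold; here's a heavily simplified per-sample sketch, with the threshold standing in for the fader value on the controller:

```python
# Sketch of a noise gate on the master chain: samples below the threshold
# are muted; the threshold is the value the controller fader would set.

def gate(samples, threshold):
    return [s if abs(s) >= threshold else 0.0 for s in samples]

audio = [0.02, 0.4, -0.01, -0.6, 0.05]
threshold = 0.1  # would be mapped to a fader on the controller
print(gate(audio, threshold))  # [0.0, 0.4, 0.0, -0.6, 0.0]
```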
To complete the live system, I was keen to add another element that wasn't percussion-driven, so I decided to add vocals. I did this by applying the same approach I had used for the sound design (randomly enveloping a series of effects), this time on the microphone input signal. I controlled this through a small MIDI keyboard: whenever I hit one of the keys, the parameters were randomized:
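The vocal chain logic can be sketched as a note-on handler that re-rolls each effect parameter within a range; the parameters and ranges here are hypothetical:

```python
import random

# Sketch of the vocal chain: a note-on from the small MIDI keyboard
# re-randomizes the effect parameters on the mic input. Parameter names
# and ranges are illustrative.

vocal_fx = {"delay_time": 0.25, "reverb_mix": 0.3, "pitch": 0.0}

RANGES = {"delay_time": (0.05, 1.0), "reverb_mix": (0.0, 1.0), "pitch": (-12.0, 12.0)}

def on_note_on(note):
    for param, (lo, hi) in RANGES.items():
        vocal_fx[param] = round(random.uniform(lo, hi), 2)
    print(f"note {note}: {vocal_fx}")

on_note_on(60)
on_note_on(62)
```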
Final System Walkthrough
Timestamps:
0:00 - 0:25 : Intro
0:25 - 1:19 : Explanation of the system & master FX reasoning
1:19 - 5:50 : Drum Rack Bank 1 playthrough
5:51 - 6:36 : How the system becomes more intuitive through practice
6:36 - 10:41 : MIDI patterns breakdown + chance and velocity probability
10:45 - 13:45 : Drum Rack Bank 1 playthrough
13:45 - 14:05 : Choke group MIDI data
14:05 - 14:22 : Why quantization and variations were removed
14:23 - 14:30 : Even split of sound design and sample library sounds
14:30 - 18:36 : Master effects chain breakdown
18:36 - 18:54 : Outro
As this is intended to be a prototype of an ongoing system, the main rehearsing happened simultaneously with the prototyping, and the final performance consists of three different improvisations that I recorded (there isn't a single definitive final performance). I've therefore included two practice jams from when I completed the system. Following these two attempts, I decided to tweak the levels of all of the sounds and apply limiters where necessary:
Take 1:
drive.google.com/file/d/1Hskkn-YHv8KqqWo5Zvuj8U7B8Knz1GE1/view?usp=sharing
Take 2:
drive.google.com/file/d/1EYxkF0Chl0JHN-V-7uHaEdV2YbV8cLGl/view?usp=sharing
To conclude:
I feel the final performances went as well as they could have done at this stage in the system's life. I also feel that by the third improvisation I was starting to get to grips with the performability of the drum rack a little more, and perhaps in a month's time the initial awkwardness will have worn off.
I do think that at this stage it feels more like a sound design system than a fully functional improvisational instrument. This is part of the growing pains that come with building live electronic systems, and to make the system more functional I feel that an even more focused curation of sonic material is needed, as well as more practice with the effects and deeper knowledge of my pre-prepared MIDI patterns.
At this stage, it still feels far too random, and I think it would be good to exchange some of the random envelopes for LFOs. Currently, it doesn't feel entirely responsive, especially when manipulating a lot of the effects. Going forward, much more focus on taming the random nature of the system would be beneficial in the long run.
The moments that felt best were the passages of ambience, where the sounds became much more stripped back and there was room for my vocals to come through. I think that allowing room for the sounds to breathe is key; currently, they become too cluttered too quickly.
I am also keen to test this system out in its current format with melodic instrumentation to see how it performs and where it needs improvement.
All in all, I'm happy with the instrument as a proof of concept and as a sound design tool whose output, in its current state, can be chopped up and used in compositions.