Please read our paper presented at the 2025 New Interfaces for Musical Expression (NIME) Conference, 24th–27th June 2025 at The Australian National University, Ngunnawal and Ngambri Country, Canberra, Australia. This companion website is designed to complement the findings detailed within the paper, offering further insights into the instrument, beyond those discussed at the conference.
The paper may be accessed at: https://nime2025.org/proceedings/200.html
O’Flaherty, T. F., Marino, L., Saitis, C., & Xambó Sedó, A. (2025). Sonicolour: Exploring Colour Control of Sound Synthesis with Interactive Machine Learning. In Proceedings of the International Conference on New Interfaces for Musical Expression.
The Sonicolour SC-1 is an innovative Digital Musical Instrument (DMI) in the field of colour-to-sound mapping. This page explores the instrument's context, concept and aims, design processes and choices, technical and creative aspects, and reflects on the journey from concept and aims to realisation. An account and video of a performance with the SC-1 is detailed, alongside behind-the-scenes photos and videos of its creation.
Top view of the Sonicolour SC-1
Sonicolour SC-1 during production
The original idea for the Sonicolour SC-1 involved an investigation into timbre space, timbre colour theory, and cross-modal correspondences. It was discovered that, particularly in the context of music, "colour" is often used when describing the timbre of sounds, a concept echoed by Wessel (1979), who describes timbre as the "colour" of a sound, separate from dynamics and pitch. As demonstrated by a multitude of existing instruments, including commercial synthesisers, timbre is a crucial element of an instrument's musical output. Fasciani (2016) notes that timbre may be analysed independently and mapped directly to the parameters of a synthesiser. Cross-modal correspondences are reasonably common cognitive phenomena, whereby listeners associate sounds with colours (Liu et al., 2021).
A core motivation for the creation of this instrument was to explore such colour-to-sound mappings through the medium of timbre changes. The design intended to build on the work of Soraghan et al. (2018), who suggested that the spectral centroid of a sound is commonly used to differentiate timbre "brightness". Moreover, the significant work of Saitis and Wallmark (2024), identifying an interplay between the spectral centroid and the fundamental frequency (F0), commonly referred to as the pitch, in determining timbre "brightness", offered further inspiration for the Sonicolour SC-1's design. Although the idea is well recognised, few instruments have been created with explicit reference to colour-to-timbre mapping, which influenced the decision to create a new instrument in this area.
The instrument was inspired by instruments producing sustained pitches with a gradual ADSR envelope. Its synthesised sounds were not intended to replicate the sounds of traditional acoustic instruments, but rather to align more closely with digital synthesisers. Core timbre mapping and sonic inspiration was found in the work of Shier et al. (2024) and Gregorio and Kim (2021).
The main aim was to create an instrument capable of mapping colour to timbre or, more generally, "sound". In turn, it became important to capture the role of brightness within colour, and therefore sound, so brightness was mapped to the spectral centroid. Furthermore, to investigate the aforementioned interplay between the pitch and the spectral centroid in determining brightness (Saitis and Wallmark, 2024), the instrument had to be able to control both parameters simultaneously.
Instruments typically have high entry points, meaning learners must spend significant time honing their playing skills before being able to play to a reasonable standard. The Sonicolour SC-1 was intended to have a low entry point, being simple and user-friendly, such that even non-musicians can quickly learn how to produce interesting sounds, particularly so that cross-modal correspondences could be examined in non-musicians.
It was initially intended that the instrument investigate auditory-visual (audiovisual) synaesthesia, whereby individuals associate specific music, most notably timbres, with different colours, in a process by which the stimulation of one sense organ directly causes the stimulation of another (Liu et al., 2021). The condition is very rare, and it was hoped to identify whether the instrument's colour-to-sound mappings would be shared by listeners of the instrument. This colour association would have provided evidence of mappings either unique to individuals with the condition or universal across listeners, although the project was later transformed into a more usable instrument.
The instrument's initial design proposed a light shining through coloured material, changeable via a slide potentiometer to control the brightness, mapped to timbre "brightness". An LDR sensor would have detected the brightness of the light, mapped to the spectral centroid, and the Bela Trill Square would have been used only to change the pitch. The potentiometer would have controlled the number of oscillators used, and a series of different coloured LEDs would have shown the device's status. This design was refined and improved as more was learnt about the sensors' operation and sonic qualities.
Initial concept design of the Sonicolour SC-1
Side view of the Sonicolour SC-1
Prototype front panel of the Sonicolour SC-1
Bottom panel of the Sonicolour SC-1 during production
Inside of the Sonicolour SC-1 - support block for colour wheel
Front panel of the Sonicolour SC-1 with some sensors installed
Prior to manufacturing the instrument, an initial instrument design was created, alongside a "proof of concept" Max 8 patch (Cycling '74, 2024). The initial design contained the same sensors as the final instrument, although with minor differences in sensor-to-sound mapping and a significantly larger case. It was therefore decided to refine the design into a more portable and playable device, which is examined below. The design followed an iterative approach, refined as the project's aims and constraints became better understood, and an Agile project management approach was employed.
Prior to designing the instrument's case, consideration was given to selecting appropriate sensors that would provide users with the desired HCI affordances, both to interact easily with the instrument and to control its required sonic parameters.
It was decided that a Bela Trill Square (Bela.io, 2024) sensor would facilitate the necessary control over the pitch and spectral centroid (brightness), allowing the interplay between these parameters to be identified. By mapping the spectral centroid and pitch controls to the X and Y parameters respectively, this relationship may be observed simultaneously through a one-finger input. To make the instrument more controllable and produce more usable sounds, the Trill Square's touch velocity (Z) parameter was mapped to the LFO carrier wave type.
Furthermore, to allow the colour-to-sound mapping to occur, a TCS3200 colour sensor (Ams Osram, 2024) was employed, to read the selected colour from the colour wheel, ultimately being converted to sound. It was decided that an LFO depth control, varying the "pulsing" sound of the synthesiser, would offer much needed sonic control, transforming the design from a research project, into a desirable instrument.
This was complemented by the installation of an MPU-6050 accelerometer/gyroscope sensor (TDK, no date), mounted inside the case, to control dynamics, panning, and feedback. As a single performer has only two hands, it was decided that further controls should support single-handed operation, enabling their use alongside control of the pitch and spectral brightness. Controlling these values by tilting the case resolves this issue, whilst offering logical, easy-to-use mappings: tilting left causes a left panning, for example.
It was also noted that user feedback would be important within the design of the instrument, particularly to benefit performance, and especially to provide a user-friendly design with a low entry point. Consequently, an LCD screen was attached to the front of the case, offering device status information alongside performance information, including the selected pitch, spectral centroid, and LFO wave type. This allows users to easily connect sound to values, facilitating easier composition and instrument learning.
This idea is continued with the use of an LED whose brightness corresponds to the timbre "brightness", offering further user feedback. All electronics are contained within the device's case, and a USB-C cable is attached for user simplicity.
The instrument is intended to be portable and easily holdable, as one of its mappings relies on an accelerometer, requiring the movement of the instrument itself to produce the desired sonic effects. The instrument required a case to contain the sensors and, considering its playable affordances, the case was made as small as the internal components would allow. To make the instrument easier to pick up, feet were crafted and attached to the bottom of the case.
The instrument's case uses a wooden construction with a beeswax finish, offering a comfortable and professional aesthetic, with appropriate cutouts for the sensors. Where necessary, cutouts are inlaid with clear acrylic, protecting the sensors from damage in general use. All sensor controls are engraved and/or raster-etched with textual and symbolic labels, offering insight into their sonic mappings for ease of use.
The instrument's layout was considered extensively, namely to ensure its layout did not hinder left or right-handed operation, whilst facilitating fast and easy access to all controls, to benefit performance. To further allow for manipulation of the sensed colours, a colour wheel material holder was designed, alongside associated semi-transparent coloured acrylic pieces.
Further to the Inkscape (Inkscape, 2024) design of a modified box, based on Makercase's Simple Box Template (Makercase, no date), a 3D-printed colour wheel was designed in SolidWorks (Dassault Systèmes, 2021). Given the unique requirements of the instrument and the efficiency of 3D printing for such applications, this was the most suitable manufacturing method for this core component of the instrument.
Following the physical instrument design, sound design was considered - arguably one of the most important aspects of the instrument. It was decided that the instrument should offer suitable control of its timbre, including across the "brightness" and "colour" spaces.
Whilst brightness offered a relatively simple and clear mapping, through the interplay between the pitch and spectral centroid on the Bela Trill Square (Bela.io, 2024), colour proved more difficult to map directly using traditional techniques. Consequently, it was decided that Wekinator (Fiebrink, 2016) would be used, allowing machine learning to facilitate simple, yet effective and accurate, colour-to-sound mappings, continuously variable across the colour spectrum.
To achieve effective control over the sound of the instrument, an additive synthesis method was employed, allowing for each voice to be manipulated independently, to create a complex but refined timbre. Further inputs from the potentiometer allow the LFO depth of the oscillator voices to be varied, thus creating an interesting "pulsing" sound effect. To add more performative value to the instrument, the accelerometer inputs offer left and right panning and dynamics control to be employed, adding further sonic variance.
The introduction of a variable delay/feedback loop allows unpredictable but exciting sounds to be produced, greatly enhancing performance. The colour sensor's readings are relayed to a Jitter video output, offering players and audience members alike an immersive experience with visual effects. To ensure the Max patch (Cycling '74, 2024) is easy to use, thus offering a user-friendly design, a simple Presentation View was created, offering only the basic controls, clearly labelled.
It was important that DIY music hardware feminist HCI principles (Jawad and Xambó Sedó, 2024) were considered within the instrument's design, as such qualities can significantly improve the inclusivity and usability of the finished product.
The instrument attempts to meet the ecology principle by using as much recycled, scrap material as possible, alongside reducing the amount of plastic used in its production. The majority of the instrument was built from wood, with only the strictly necessary parts produced from acrylic, which was taken from discarded pieces from past projects. Consequently, the instrument is as environmentally friendly as possible.
Pluralism is considered through the use of colour, the chosen sensors, and the instrument's timbre. The theme of colour is significant across all cultures and backgrounds, and the use of all colours across the spectrum attempts to encourage consideration of different identities, with particular significance to LGBTQ+ communities in certain cultures. Furthermore, by selecting sensors with universal operation and a simple design, users of all backgrounds are not hindered by complexity in their use of the instrument. Additionally, the instrument's timbre is culturally undefined, and it does not make use of musical scales, in an attempt to make the instrument as culturally inclusive as possible.
Through the instrument's layout and design, it is hoped that the principle of embodiment is met, as the instrument may be played by left- and right-handed users without hindrance, and its simplicity means players with diverse abilities may learn its operation. It is hoped that its sensors are clear and intuitive even without the control labels, so that differences in cultural practices or language are mitigated.
After measuring the size of the sensors to be installed in the case, it was decided that a box size of 315 mm × 265 mm × 90 mm (external dimensions) would be compact yet spacious enough to house the sensors. The box template size was specified in Makercase (Makercase, no date), and the resulting .svg image was downloaded and further modified in Inkscape (Inkscape, 2024), to add the accurately measured and placed sensor cutouts and text, and to meet the specifications of the laser cutter.
The Sonicolour SC-1's case was subsequently crafted from 6.5 mm plywood, for strength and rigidity, with clear acrylic inlays used to protect delicate sensor components without affecting their functionality. Coloured acrylic was used to create additional filters, which could further manipulate the colour sensor readings. The box panels and inlays were glued together using wood glue, to provide additional strength.
A block of wood was installed inside the case, to hold the colour wheel, with silicone grease applied, for smooth operation, and the wooden feet were attached to the case. After assembly, the wooden case was sanded using varying grits of sandpaper, to provide a smooth finish and prevent splinters or injury from use, before being polished with beeswax, for a high-end finish.
The 3D-printed colour wheel was designed in SolidWorks (Dassault Systèmes, 2021), and printing was kindly organised by Ms Geetha Bommireddy. A colour wheel design was printed on high-quality photo paper, before being cut out and glued to the 3D-printed colour wheel model. Finally, a power cord grommet was cut from wood and attached to the case, adding to the high-quality aesthetic of the DIY instrument.
To facilitate control of the brightness and spectral centroid simultaneously, a Bela Trill Square (Bela.io, 2024) sensor was affixed to the middle of the case, allowing both parameters to be mapped to the same single touch input, via the sensor's X and Y readings. The Trill Square sensor's included Qwiic Connector was attached to the port on the rear of the sensor. Due to the short length of the cables, the wires were stripped and soldered to longer cables, in turn connected to a stripboard.
On the stripboard, the Trill Square's SDA and SCL lines are each connected via a 4.7 kΩ pull-up resistor to 5V, before being attached to the A4 and A5 pins (the alternative SDA and SCL pins respectively) of an Arduino UNO R4 WiFi (Arduino, no date). This allows the sensor's I2C data to be read by the Arduino, so that the X, Y and velocity (Z) readings may be obtained. The sensor's 5V and GND pins are connected to the 5V and Ground connections on the power breakout stripboard, also connected to the Arduino.
The 10 kΩ linear potentiometer was connected to the 5V and Ground on the power breakout board, with its data leg connected to one of the Arduino's analogue inputs, offering an intuitive and user-friendly input control. The LED's negative leg was connected to a 220 Ω resistor, for current limiting, with its positive leg connected to a PWM Arduino pin digital output, allowing its brightness to be varied. The LED allows for a simple, visual feedback mapping to the spectral brightness.
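The analogue-to-PWM rescaling involved in driving the LED follows the familiar linear map pattern. A minimal C++ sketch is shown below, assuming the Arduino's default 10-bit analogue and 8-bit PWM ranges; the actual firmware lives in the GitHub repository and may differ:

```cpp
// Linear rescale, equivalent in spirit to Arduino's map(): converts a
// value from [inMin, inMax] to [outMin, outMax] using integer arithmetic.
long rescale(long x, long inMin, long inMax, long outMin, long outMax) {
    return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

// A 10-bit analogue reading (0-1023), such as a potentiometer value,
// rescaled to the 8-bit PWM range (0-255) used to set an LED's brightness.
int potToPwm(int raw) {
    return static_cast<int>(rescale(raw, 0, 1023, 0, 255));
}
```

The same pattern applies to mapping the spectral brightness value back onto the LED's PWM duty cycle.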
The MPU-6050 accelerometer (TDK, no date) was connected directly to the 5V, Ground, SCL and SDA pins, allowing its I2C readings to be obtained on the Arduino. As previously mentioned, the accelerometer allows multiple parameters to be controlled simultaneously through tilting the instrument.
The TCS3200 colour sensor's (Ams Osram, 2024) colour trigger pins were connected to digital pin outputs, with its returned value output connected to a digital pin input on the Arduino. Its VCC and GND pins were connected to the Arduino's 5V and Ground respectively. The colour sensor provides a mechanism for reading colours, to be mapped to associated timbre "colours".
The LCD's character display data pins were connected to the digital outputs on the Arduino, with necessary 5V and Ground pins connected accordingly. The LCD's LED backlight was connected to 5V via a 220 Ω current-limiting resistor on the LCD breakout stripboard, and the V0 pin was connected to the data pin of a 10 kΩ potentiometer, allowing the display contrast to be varied, as desired. The LCD provides status and performance information to the player, assisting with the instrument's user-friendly operation.
All sensors were glued to the case using hot-melt adhesive, as this allowed them to be removed, modified or replaced if they failed or proved unsuitable. The Arduino UNO R4 WiFi (Arduino, no date) was chosen for its relatively low latency, significant number of connection pins, and support for analogue, digital and I2C connections. Furthermore, it supports Wi-Fi, required to send sensor readings wirelessly over OSC to the sound-generating software running on the connected computer.
The Arduino required custom firmware to interact with the connected sensors and to transmit the sensor readings via OSC messages over Wi-Fi. As per the OSC standard, messages were sent individually in separate UDP packets, to appropriately named routes, to be handled in the Max patch (Cycling '74, 2024).
The OSC message receiver code was based on the work of CNMAT (CNMAT, 2017), and allows OSC messages to be received from the connected computer, to the instrument. This was used to receive performance status messages from Max, and display them on the LCD, and control the brightness of the LED, based on the played values. To transmit the sensor readings from the instrument to the connected computer, OSC messages were sent, based on Wi-Fi and OSC transmission code created by Dr Anna Xambó Sedó (Xambó Sedó, 2024).
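OSC messages of the kind described above have a simple binary layout: a null-terminated address padded to four bytes, a type tag string, then big-endian arguments. The encoding can be sketched in self-contained C++ as follows; the "/x" route is purely illustrative, as the SC-1's actual route names are defined in the repository code:

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Append a string to the buffer, null-terminated and zero-padded to a
// multiple of four bytes, as the OSC 1.0 specification requires.
static void appendPadded(std::vector<uint8_t>& buf, const std::string& s) {
    for (char c : s) buf.push_back(static_cast<uint8_t>(c));
    buf.push_back(0);                        // terminating null
    while (buf.size() % 4 != 0) buf.push_back(0);
}

// Encode a single-float OSC message: address, type tag ",f", then the
// float32 argument in big-endian byte order. The result would be sent
// as the payload of one UDP packet.
std::vector<uint8_t> encodeOscFloat(const std::string& address, float value) {
    std::vector<uint8_t> buf;
    appendPadded(buf, address);
    appendPadded(buf, ",f");
    uint32_t bits;
    std::memcpy(&bits, &value, sizeof bits);
    buf.push_back((bits >> 24) & 0xFF);      // most significant byte first
    buf.push_back((bits >> 16) & 0xFF);
    buf.push_back((bits >> 8) & 0xFF);
    buf.push_back(bits & 0xFF);
    return buf;
}
```

Max's udpreceive object (and Wekinator) can decode messages in this format directly.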
The Bela Trill Square code was adapted from the Bela Square Print example (BelaPlatform, 2024), although it was decided that only the first touch should control the instrument, and the velocity reading should be an average of the horizontal and vertical touch velocities, output from the Trill Square. This ensures the instrument is still heavily controllable, but operates more predictably and reliably, than the otherwise often anomalous values that may be read.
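The first-touch and velocity-averaging behaviour described above can be sketched as follows. The Touch struct and its field names are illustrative assumptions for this sketch, not the Trill Arduino library's actual API:

```cpp
// Illustrative representation of one touch reported by the sensor.
struct Touch {
    bool present;
    float x, y;            // normalised touch position
    float vHorizontal;     // horizontal touch velocity
    float vVertical;       // vertical touch velocity
};

// Only the first touch controls the instrument; its velocity (Z) is taken
// as the average of the horizontal and vertical velocities, which tames
// otherwise anomalous single-axis readings.
float firstTouchVelocity(const Touch touches[], int count) {
    if (count == 0 || !touches[0].present) return 0.0f;
    return (touches[0].vHorizontal + touches[0].vVertical) / 2.0f;
}
```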
The LCD code is based on the Arduino Hello World example (Arduino, 2023), which uses the LiquidCrystal controller library (Arduino, 2017), ensuring simple control of the LCD, abstracting its otherwise complex workings. For this same reason, the MPU-6050's code was based on the controller library of Korneliusz Jarzebski (Jarzebski, 2014), whilst the TCS3200 colour sensor's code built on the tested work of Dejan Nedelkovski (How To Mechatronics, no date), including sensor calibration code. Finally, the potentiometer and LED code used elements of the Arduino Colour Mixer example code (Arduino, 2022), for a reliable method of controlling both components.
All firmware code was written within the Arduino IDE (Arduino, 2024), and used appropriate comments, sections and functions, in an attempt to ensure the code was more readable and reusable. Where existing code examples were used, these were always tailored specifically for the SC-1, with readings being output via OSC, and on the LCD, to provide appropriate functionality and user feedback.
All sound was synthesised using Max 8 (Cycling '74, 2024), to provide the SC-1's desired sonic qualities. Within the Max Patch, all OSC messages are received from the Arduino (Arduino, no date), connected via Wi-Fi. Based on the message route, these are sent, after being rescaled if necessary, to the appropriate control within the Patch. The colour sensor readings are also sent via OSC messages over UDP, to Wekinator (Fiebrink, 2016).
The sound generation uses an additive synthesiser, based on the work of Dr Luigi Marino (Marino, 2024a), whereby 25 voices are used to create the sonic output. The ratios, low frequency oscillator (LFO) frequencies, and ADSR envelope of each voice are mapped to the values received from Wekinator. Initially, all voice ratios were controlled via Wekinator, but it was later decided that the fundamental frequency (Voice 1) should remain ever-present and unaffected by the Wekinator output.
The LFO depth value is rescaled and mapped to the LFO depth of each voice, whilst the pitch is received from the Y value of the Trill Square, and rescaled appropriately. If no touch is present, the pitch is set to 0 Hz, and the reading is sent back to the instrument using OSC, to be displayed on the LCD. Depending on the press intensity on the Trill Square (Bela.io, 2024), the LFO wave type varies.
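The no-touch gating and pitch rescaling described above might be sketched as below. The 100-1000 Hz range is an assumed placeholder for illustration; the actual range is set inside the Max patch:

```cpp
// Map the Trill Square's normalised Y reading (0..1) to a pitch in Hz,
// returning 0 Hz when no touch is present, mirroring the patch's
// behaviour. fMin/fMax are illustrative defaults, not the patch's values.
float yToPitch(bool touchPresent, float y,
               float fMin = 100.0f, float fMax = 1000.0f) {
    if (!touchPresent) return 0.0f;
    if (y < 0.0f) y = 0.0f;        // clamp to the sensor's valid range
    if (y > 1.0f) y = 1.0f;
    return fMin + y * (fMax - fMin);
}
```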
Each voice has an amplitude relative to the spectral centroid, calculated from rescaling the X value of the Trill Square, with this brightness also being sent back to the instrument via OSC, being displayed on the LCD, and mapped to the brightness of the LED and Jitter visuals.
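One simple way to realise such a brightness-to-amplitude relationship is a geometric roll-off across the partials: higher brightness keeps more energy in the upper voices, raising the spectral centroid. This weighting is an illustrative assumption, not the patch's exact amplitude law:

```cpp
#include <cmath>

// Amplitude of voice k (1-based) for a brightness value in [0, 1].
// brightness = 0 leaves only the fundamental; brightness = 1 gives all
// voices equal weight, maximising the spectral centroid.
float voiceAmplitude(int k, float brightness) {
    if (brightness < 0.0f) brightness = 0.0f;
    if (brightness > 1.0f) brightness = 1.0f;
    return std::pow(brightness, static_cast<float>(k - 1));
}
```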
A delay and feedback loop is implemented, based on the work of Dr Luigi Marino (Marino, 2024b), adding an amount of feedback into the output sound, when the instrument's MPU-6050 accelerometer (TDK, no date) Z axis experiences movement.
All voices are summed together, with sound output only being present upon the Trill Square being pressed. Audio clicks are prevented using a line~ object, and the ADSR envelope is controlled using values received from Wekinator. The instrument's master volume is controlled using the value of the MPU-6050 accelerometer's X axis, when instrument movement is recorded, whilst the Y axis value is mapped to the left and right panning of the sound, based on code from Reddit user thedeany (Thedeany, 2013).
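The accelerometer-driven panning could be realised with a standard equal-power law, sketched below; whether the patch uses this exact curve is not documented, but it is a common choice because it keeps the total power roughly constant across the stereo field:

```cpp
#include <cmath>

// Equal-power panning from a tilt value in [-1, 1] (full left to full
// right), as might be derived from the accelerometer's Y axis.
void panGains(float pan, float& left, float& right) {
    if (pan < -1.0f) pan = -1.0f;
    if (pan > 1.0f) pan = 1.0f;
    const float kPi = 3.14159265f;
    const float angle = (pan + 1.0f) * kPi / 4.0f;  // 0..pi/2
    left = std::cos(angle);                          // centre: ~0.707 each
    right = std::sin(angle);
}
```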
Based on the received colour values from the TCS3200 colour sensor (Ams Osram, 2024), and brightness readings from the Trill Square, a Jitter matrix is created, with text being added to the Jitter window (Cycling '74, 2024), alternating via a Metro object.
After creating the Max patch, in line with the low entry point idea behind the instrument, a simple Presentation Mode was created and set as the default view, to ensure players are only presented with the two necessary instrument controls - display visuals, and output audio.
Wekinator was used to map the TCS3200 colour sensor readings to the voice ratios, LFO frequencies and ADSR envelope. After outputting the colour sensor values from the Max patch to Wekinator using OSC, and creating the corresponding Wekinator inputs on the synthesis parameters in Max, Neural Network outputs were created for each Wekinator-controlled Max patch input. Each output's value range and type was modified to generate suitable values for the corresponding input.
The 'randomize' button, with additional value modifications, was used to generate Max sounds (Wekinator values) believed to correspond to colours. Upon selecting a desired sound, the associated colour was placed in front of the colour sensor, and around 50 values were recorded in Wekinator. This process was repeated for all major colours of the colour wheel, before the model was trained, and started running.
IDMT Concert performance close-up of the Sonicolour SC-1
Laser cutting Sonicolour SC-1 box
Installing sensors inside the Sonicolour SC-1 front panel
Connecting sensors to Arduino inside the Sonicolour SC-1 front panel
Sonicolour SC-1 Max Patch - Patch View
Sonicolour SC-1 Max Patch - Presentation View
Sonicolour SC-1 Wekinator Project
Sonicolour SC-1 Arduino Code
Sonicolour SC-1 Breadboard Schematic
The instrument’s firmware, sound generation, designs and code are all available on GitHub: https://github.com/sonicolour/sonicolour-instrument.
The Quick Start and Troubleshooting Guides may be found below:
A five-minute piece, "Navigating the colour space", was composed for the Sonicolour SC-1 and performed live to an audience at the IDMT Concert, Queen Mary University of London, on Thursday 12th December 2024. This composition aimed to demonstrate all available mappings on the instrument, in addition to the different sounds obtainable from the device.
Listeners were treated to sounds from across all colours of the colour wheel, demonstrating the timbre colours available, alongside the effects of adding an additional coloured filter to the instrument. The depth knob was used to demonstrate the subtleties of the "pulsing" effect, whilst the interplay between pitch and spectral brightness, and the LFO wave type effects, were explored through the Bela Trill Square (Bela.io, 2024). This was complemented by left and right panning, dynamics changes, and the addition of feedback, through tilting the instrument to trigger the accelerometer.
To allow the audience to view the currently-selected colour on the colour wheel, the Jitter window was displayed on-screen, providing listeners with an immersive experience and a connection to the instrument, allowing them to investigate their own cross-modal correspondences.
The performance was met with an overwhelmingly positive reception, with many audience members impressed by the instrument's design, playability, affordances, and sonic qualities.
IDMT Concert performance of the Sonicolour SC-1 with visuals
IDMT Concert performance of the Sonicolour SC-1
Sonicolour SC-1 in the workshop
Sonicolour SC-1 front view, with additional acrylic filters
IDMT Concert, with Sonicolour SC-1 on left of performance table
IDMT Concert Poster
It can be concluded that this project was, overall, a relative success. The instrument was fully functional and produced varied timbres, pitches, and brightnesses, alongside some interesting effects, including feedback and panning. If success is measured by audience feedback, the comments were overwhelmingly positive, prompting discussions with listeners about the instrument's construction and sonic output.
However, one must also assess success against the original specification, by which measure success was significantly more limited. Although initially proposed, the instrument failed to investigate auditory-visual synaesthesia, and arguably only partly met the brief of investigating cross-modal correspondences. It was decided that the project required rescoping, due to the complexity of researching auditory-visual synaesthesia and the rarity of listeners with the condition.
Furthermore, the instrument's brightness and LFO depth controls are subtle, and its tones are subjectively less pleasing than those of other demonstrated instruments, which use sampling rather than completely synthesised sounds. Nonetheless, as an instrument exploring colour-to-sound mapping, even where those mappings are not scientifically grounded, it reliably delivers intriguing sounds, is well built and functional, and attracts audience attention.
Throughout this project, a number of challenges arose. One of the first involved recruiting audience members with the rare auditory-visual synaesthesia condition, so the original research idea was abandoned in favour of creating a device to test cross-modal correspondences. However, this still left the challenge of creating a suitable colour-to-sound mapping, as individuals can perceive sound and colour mappings differently. Consequently, it was decided that the colour-to-sound mappings would follow the broad consensus of darker colours (e.g. blue) being mapped to duller sounds, and lighter colours (e.g. yellow) to brighter sounds.
During the creation of the electronics for the instrument, a major issue arose whereby the Bela Trill Square (Bela.io, 2024) sensor was not detected by the Arduino (Arduino, no date). After attending a support session, Dr Xambó Sedó kindly assisted me in attempting to fix the problem. It was discovered that, by adding pull-up resistors, the Trill Square sensor could be detected by the Arduino.
Later, when creating the Max patch (Cycling '74, 2024) for sound generation, challenges arose from the inaccuracies of the TCS3200 colour sensor (Ams Osram, 2024), which often read incorrect or inconsistent values. This was resolved by recalibrating the sensor within the Arduino code, rescaling the raw values to the sensor's output range.
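A typical calibration of this kind maps raw pulse-width readings, bounded by measured minimum and maximum references, into the 0-255 range, inverted because shorter pulses indicate more light on a TCS3200-type sensor. A sketch in C++ is given below; the calibration bounds shown in the test are placeholders, not the SC-1's measured values:

```cpp
// Linear rescale helper: maps x from [inMin, inMax] to [outMin, outMax].
long rescale(long x, long inMin, long inMax, long outMin, long outMax) {
    return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

// Map a raw pulse-width reading to a calibrated 0-255 colour channel,
// inverted so that more light yields a higher value, and clamped so that
// readings outside the calibrated range cannot produce invalid channels.
int calibrateChannel(long raw, long rawMin, long rawMax) {
    long v = rescale(raw, rawMin, rawMax, 255, 0);
    if (v < 0) v = 0;
    if (v > 255) v = 255;
    return static_cast<int>(v);
}
```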
An additional challenge came from the instrument's delayed response, with significant latency between interacting with a sensor and the sound being generated or modified. This was resolved by reducing the delay between sensor readings in the Arduino code, making the instrument much more responsive.
Significant challenges were experienced with Jitter (Cycling '74, 2024), as the jit.lcd object shown in many online Jitter tutorials is deprecated and its use is no longer recommended. Sourcing a tutorial using jit.gl was difficult, and no tutorials showed how to produce the desired visual output with a colour-changing background. Through significant research and trial and error, a method of creating the necessary Jitter visuals was eventually discovered.
Moreover, when the audio was initially generated, there were significant clicks whenever the Trill Square was touched. After exploring different solutions, it was found that the line~ object would resolve the issue, so the patch was modified to ensure an envelope was used.
Throughout this project, I have gained many new technical and creative skills. I have learnt how to use a laser cutter to create products, and SolidWorks to design 3D CAD models for 3D printing. Furthermore, I gained insight into connecting to and programming the Arduino, how sensor values can be read, and soldering with stripboards. Beyond electronics, I gained insight into how audio can be generated using Max (Cycling '74, 2024), and how additive synthesis, the spectral centroid and OSC work.
I learnt how to route messages between Max (Cycling '74, 2024) and Wekinator (Fiebrink, 2016), how Wekinator can be used to quickly and simply create machine learning mappings, and how to use Jitter (Cycling '74, 2024) to create visual accompaniments. Furthermore, I grew as a person, developing my presentation and performance skills, as this was the first time I had performed with an instrument to an audience.
During this project, I went above and beyond, pushing myself out of my comfort zone as much as possible. Coming from a Computer Science background, I was excited to craft the physical instrument and intrigued to learn new ideas in Max, as I had only been used to textual coding rather than visual patching. This project enabled me to explore entirely new areas, and I am grateful for the opportunity to develop across a broad range of aspects of digital musical instrument creation.
One significant future direction would be to explore the originally intended idea of auditory-visual synaesthesia, by mapping colours to an individual's own perception of timbre colour and investigating whether such mappings are consistently supported by other listeners of the instrument.
A future iteration could replace the colour wheel with a colour-changing LED, allowing a greater degree of control over the colour input and facilitating the pre-programming of timbre colours. Additionally, as the instrument currently supports only a single touch on the Trill Square (Bela.io, 2024), mapping additional touches to other spectral effect controls could offer more variability in the produced sound and enable investigation of different interplays between spectral features.
The instrument's fundamental frequency is currently not constrained to any musical scale, so the effect of musical scales on timbre colour and brightness could be explored, for example by remapping the Trill Square's Y values to a Western musical scale.
As previously discussed in the project's pitch, the instrument could be converted into an installation piece, with DMX lighting controlled through Max (Cycling '74, 2024) in response to the colour sensor readings, much like the Jitter window (Cycling '74, 2024) currently used. This would make the performance more immersive for the audience and strengthen their connection between the audible and visual stimuli.
Photos showing the manufacture, finished product, and IDMT Concert performance of the Sonicolour SC-1.
An introduction to the Sonicolour SC-1 (2 minutes), showing its controls via a performance (left video), and behind-the-scenes footage of the making, playability and performance of the SC-1 (5 minutes 45 seconds) (right video).
Ams Osram (2024) Ams TCS3200 Color Sensor. Available at: https://ams-osram.com/products/sensor-solutions/ambient-light-color-spectral-proximity-sensors/ams-tcs3200-color-sensor (Accessed: 17 December 2024).
Arduino (2017) LiquidCrystal. Available at: https://docs.arduino.cc/libraries/liquidcrystal/ (Accessed: 17 December 2024).
Arduino (2022) Basics of Potentiometers with Arduino. Available at: https://docs.arduino.cc/learn/electronics/potentiometer-basics/ (Accessed: 17 December 2024).
Arduino (2023) Liquid Crystal Displays (LCD) with Arduino. Available at: https://docs.arduino.cc/learn/electronics/lcd-displays/ (Accessed: 17 December 2024).
Arduino (2024) Arduino IDE (Version 2.3.4) [Computer program]. Available at: https://www.arduino.cc/en/software (Accessed: 17 December 2024).
Arduino (no date) UNO R4 WiFi. Available at: https://docs.arduino.cc/hardware/uno-r4-wifi/ (Accessed: 17 December 2024).
Bela.io (2024) About Trill. Available at: https://learn.bela.io/products/trill/about-trill/ (Accessed: 17 December 2024).
BelaPlatform (2024) Trill-Arduino. Available at: https://github.com/BelaPlatform/Trill-Arduino/blob/master/examples/square-print/square-print.ino (Accessed: 17 December 2024).
CNMAT (2017) ESP8266ReceiveMessage.ino. Available at: https://github.com/CNMAT/OSC/blob/master/examples/ESP8266ReceiveMessage/ESP8266ReceiveMessage.ino (Accessed: 17 December 2024).
Cycling '74 (2024) Max 8 (Version 8.6.5) [Computer program]. Available at: https://cycling74.com/downloads/older (Accessed: 17 December 2024).
Dassault Systèmes (2021) SolidWorks 2021 (Version SP3) [Computer program]. Available at: https://www.solidworks.com/sw/support/downloads.htm (Accessed: 17 December 2024).
Fasciani, S. (2016) ‘TSAM: a Tool for Analyzing, Modeling, and Mapping the Timbre of Sound Synthesizers’, Proceedings of the 13th Sound and Music Computing Conference, Germany, 20 August. Available at: https://www.researchgate.net/publication/305984399_TSAM_a_Tool_for_Analyzing_Modeling_and_Mapping_the_Timbre_of_Sound_Synthesizers (Accessed: 17 December 2024).
Fiebrink, R. (2016) Wekinator (Version 2.1.0.4) [Computer program]. Available at: http://www.wekinator.org/downloads/ (Accessed: 17 December 2024).
Gregorio, J. and Kim, Y.E. (2021) ‘Evaluation of Timbre-Based Control of a Parametric Synthesizer’, International Conference on New Interfaces for Musical Expression. NIME 2021. Available at: https://doi.org/10.21428/92fbeb44.31419bf9 (Accessed: 17 December 2024).
How To Mechatronics (no date) Arduino Color Sensing Tutorial – TCS230 TCS3200 Color Sensor. Available at: https://howtomechatronics.com/tutorials/arduino/arduino-color-sensing-tutorial-tcs230-tcs3200-color-sensor/ (Accessed: 17 December 2024).
Inkscape (2024) Inkscape (Version 1.4) [Computer program]. Available at: https://inkscape.org/release/inkscape-1.4/ (Accessed: 17 December 2024).
Jarzebski (2014) MPU6050_gyro_simple.ino. Available at: https://github.com/jarzebski/Arduino-MPU6050/blob/dev/MPU6050_gyro_simple/MPU6050_gyro_simple.ino (Accessed: 17 December 2024).
Jawad, K. and Xambó Sedó, A. (2024) ‘Feminist HCI and narratives of design semantics in DIY music hardware’, Frontiers in Communication. Available at: https://doi.org/10.3389/fcomm.2023.1345124 (Accessed: 17 December 2024).
Liu, J. et al. (2021) ‘Research on the Correlation Between the Timbre Attributes of Musical Sound and Visual Color’, IEEE Access, 9, pp. 97855–97877. Available at: https://doi.org/10.1109/ACCESS.2021.3095197 (Accessed: 17 December 2024).
Makercase (no date) Easy Laser Cut Case Design. Available at: https://en.makercase.com/#/basicbox (Accessed: 17 December 2024).
Marino, L. (2024a) 'Max Tutorial 2'. ECS742P: Interactive Digital Multimedia Techniques. Available at: https://qmplus.qmul.ac.uk/ (Accessed: 17 December 2024).
Marino, L. (2024b) 'Max Tutorial 3'. ECS742P: Interactive Digital Multimedia Techniques. Available at: https://qmplus.qmul.ac.uk/ (Accessed: 17 December 2024).
Saitis, C. and Wallmark, Z. (2024) ‘Timbral brightness perception investigated through multimodal interference’, Attention, Perception, & Psychophysics, 86(6), pp. 1835–1845. Available at: https://doi.org/10.3758/s13414-024-02934-2 (Accessed: 17 December 2024).
Shier, J. et al. (2024) ‘Real-time Timbre Remapping with Differentiable DSP’, Proceedings of the International Conference on New Interfaces for Musical Expression, 55, pp. 377–385. Available at: https://doi.org/10.5281/zenodo.13904884 (Accessed: 17 December 2024).
Soraghan, S. et al. (2018) ‘A New Timbre Visualization Technique Based on Semantic Descriptors’, Computer Music Journal, 42(1), pp. 23–36. Available at: https://doi.org/10.1162/comj_a_00449 (Accessed: 17 December 2024).
TDK (no date) MPU-6050. Available at: https://invensense.tdk.com/products/motion-tracking/6-axis/mpu-6050 (Accessed: 17 December 2024).
Thedeany (2013) The "right" way to do Panning?. Available at: https://www.reddit.com/r/MaxMSP/comments/181lw4/comment/c8eukoa/ (Accessed: 17 December 2024).
Wessel, D.L. (1979) ‘Timbre Space as a Musical Control Structure’, Computer Music Journal, 3(2), p. 45. Available at: https://doi.org/10.2307/3680283 (Accessed: 17 December 2024).
Xambó Sedó, A. (2024) 'multiplevalue_print_osc.ino'. ECS742P: Interactive Digital Multimedia Techniques. Available at: https://qmplus.qmul.ac.uk/ (Accessed: 17 December 2024).
Thanks are expressed to Dr Anna Xambó Sedó, Dr Charalampos (Charis) Saitis, Ms Geetha Bommireddy, and Dr Luigi Marino at the Centre for Digital Music (C4DM) at Queen Mary University of London (QMUL) for the excellent lectures, advice, and support. It has been a thoroughly enjoyable journey, and I am very grateful to have had this opportunity.