21.04.2025
22.05.2025
11.11.2025
https://www.ioanavrememoser.com/sizzling-semiconductors
My idea is to build a small set of unique, handmade noise instruments and use them to create a collaborative sound artwork. I want to explore noise as a collective sound practice rather than a controlled musical performance.
I plan to construct around six or seven noise instruments, each responsible for a different type of sound behavior. Some may produce high-frequency noise, others low-frequency pressure, continuous drones, or glitchy sounds. Each instrument will be unstable and unpredictable, with no fixed or "correct" way to play it.
The instruments will be built using everyday objects and culturally specific materials connected to China and East Asia. These materials are not meant to represent culture symbolically, but to function as physical sound sources shaped by daily use and material history.
The instruments will be played collectively without coordination, allowing collision, misalignment, and error to shape the sound. The final work will exist as a single, unrepeatable recording.
https://www.nime.org/proceedings/2020/nime2020_paper1.pdf
John Sullivan, Julian Vanasse, Catherine Guastavino, and Marcelo Wanderley
3/13:
For the technical side of the project, I plan to experiment with simple electronics and embedded systems. One inspiration for this idea comes from the Sizzling Semiconductor workshops, where people build experimental noise instruments using discarded electronics and everyday objects. These workshops focus more on experimentation and discovering unexpected sounds than on technical precision. I might use small microcontrollers such as Arduino or Raspberry Pi, along with contact microphones, speakers, and sensors. I also want to experiment with feedback and simple sound circuits. Another idea is to incorporate everyday objects or materials related to Chinese or East Asian daily life into the design of the instrument, letting those materials influence the sound and physical interaction.
The final outcome of the project will be a working prototype of this noise instrument. Ideally, the instrument will be able to produce different types of sounds depending on how it is played or manipulated. Instead of presenting a traditional composition, I plan to demonstrate the instrument through a short performance or recording that explores the different sounds it can produce.
3/11 CODE POEMS:
The CODE POEMS concert was split into two parts, and the feeling changed a lot between them. In the first part, there were only guitar and marimba, but the sound was already really rich. The most interesting thing to me was hearing the marimba played with a bow, because I had never really seen that before and it created this super strange but beautiful sustained sound. The guitar was also using E-bow and delay, so the whole texture felt really soft, stretched out, and kind of floating. With the spatial sound setup, it all felt really immersive.
In the second part, Professor Goodheart’s spatial music was added, and the marimba changed to percussion. That made the whole thing feel bigger right away. The gong sound really stayed with me, because it reminded me of the kind of music I would hear in China during holiday celebrations. The guitar also seemed to switch to a headless guitar, and from that point it felt like a lot of the sound was about noise, texture, and atmosphere. The surround speaker layout plus the church space made everything sound huge, especially with all the natural reverb from the room. I think that was the part I liked most about the concert. The technology never felt forced or separate. It just felt built into the music.
This is a synth drum with a built-in self-oscillating VCF.
It is equipped with a wideband oscillator and has a strong SWEEP effect.
You can create various tones by tapping on the body or with an external trigger (sequencer or piezo).
The oscillator has a thick sound across its whole range from high to low, and the waveform can be switched between TRIANGLE and SAWTOOTH.
SWEEP is set to be applied very strongly, so adjust it in conjunction with CUT OFF COARSE.
SUSTAIN sets the decay time from the onset of the sound.
SENSITIVITY adjusts the input sensitivity of the external TRIGGER input (when a plug is inserted into the trigger jack, the unit's internal trigger is switched off).
LFO.F controls the amount of LFO applied to the filter.
LFO.O controls the amount of LFO applied to the oscillator.
FINE is a finer pitch control for the oscillator than COARSE.
At maximum RESONANCE the filter self-oscillates, and its pitch can be changed with the CUT OFF knob.
It can be powered by either a center-negative 9V DC adapter or a 9V battery.
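The way I read this panel: at maximum RESONANCE the filter self-oscillates and becomes the actual sound source, CUT OFF sets its pitch, SWEEP pulls that pitch down after every trigger, and SUSTAIN sets how quickly it dies away. Here is a very rough numpy sketch of that behavior, just to hear the shape; all of the frequencies and times are made-up values, not taken from the unit:

```python
import numpy as np
from scipy.io import wavfile

SR = 44100
DUR = 0.6              # overall length, roughly what SUSTAIN would set

t = np.arange(int(SR * DUR)) / SR

# At full RESONANCE the filter rings on its own, so the voice is modeled
# here as a sine whose pitch follows CUT OFF and gets pulled down by SWEEP.
cutoff_start = 400.0   # Hz, starting "CUT OFF" pitch (assumed value)
cutoff_end = 60.0      # Hz, where the sweep settles (assumed value)
sweep_time = 0.15      # s, how quickly the sweep falls (assumed value)

freq = cutoff_end + (cutoff_start - cutoff_end) * np.exp(-t / sweep_time)
phase = 2 * np.pi * np.cumsum(freq) / SR

env = np.exp(-t / 0.2)  # SUSTAIN: exponential decay from the onset of the sound

drum = np.sin(phase) * env
wavfile.write("synth_drum_sketch.wav", SR, (drum * 32767).astype(np.int16))
```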
The revised project uses TouchDesigner, VCV Rack, webcam input, and motion tracking. Instead of playing a physical instrument, participants can use hand gestures, facial movement, or their position in front of a camera to change different states of noise. These movements can affect the shape, density, distortion, and behavior of the sound in real time.
Motion tracking in TouchDesigner:
mediapipe-touchdesigner: https://github.com/torinmb/mediapipe-touchdesigner
For this part, I have been using MediaPipe in TouchDesigner to experiment with motion tracking. Right now I have not really added visual effects yet, and the system is still mostly just pure tracking. I am still trying to improve and refine it before moving on to the visual side. At this stage, I think it is more important to first make the tracking stable and figure out how to connect it to the sound part of the project.
At first, I was just sending the x, y, z, and active data from both hands out of TouchDesigner through OSC and into VCV Rack, but it felt pretty bad and not very usable. So I started refining it more and breaking it down into more specific variables, like each hand’s position, pinch distance, and the distance between both hands. That made the mapping feel much more interesting and more controllable.
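To make that concrete, this is roughly the math I mean, written as a standalone sketch rather than my actual TouchDesigner network. It assumes each hand arrives as a dict of MediaPipe landmark index → (x, y, z) in normalized coordinates, with 0 = wrist, 4 = thumb tip, 8 = index fingertip; the feature names are just placeholders:

```python
import math

# MediaPipe hand landmark indices: 0 = wrist, 4 = thumb tip, 8 = index fingertip.
WRIST, THUMB_TIP, INDEX_TIP = 0, 4, 8

def dist(a, b):
    """Euclidean distance between two (x, y, z) points."""
    return math.sqrt(sum((pa - pb) ** 2 for pa, pb in zip(a, b)))

def hand_features(h1, h2):
    """h1 / h2: dicts mapping landmark index -> (x, y, z), normalized coords."""
    feats = {}
    for name, hand in (("h1", h1), ("h2", h2)):
        mid = tuple((a + b) / 2 for a, b in zip(hand[THUMB_TIP], hand[INDEX_TIP]))
        feats[f"{name}_pinch_midpoint_x"] = mid[0]
        feats[f"{name}_pinch_midpoint_y"] = mid[1]
        feats[f"{name}_pinch_distance"] = dist(hand[THUMB_TIP], hand[INDEX_TIP])
    # distance between the two hands, measured wrist to wrist
    feats["hand_distance"] = dist(h1[WRIST], h2[WRIST])
    return feats
```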
MediaPipe → TouchDesigner (OSC Out) → (OSC In) VCV Rack
Through OSC Out, I send the tracking data from TouchDesigner to VCV Rack, where it is mapped to CV signals that control different parameters in the patch.
At the moment, I am sending and mapping seven variables from TouchDesigner into VCV Rack (a quick sending sketch follows the list):
h1 pinch_midpoint:x
h2 pinch_midpoint:x
h1 pinch_midpoint:y
h2 pinch_midpoint:y
h1 pinch_midpoint:distance
h2 pinch_midpoint:distance
hand_distance
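Inside TouchDesigner this all goes out through OSC Out, but the idea can be sketched outside of TD with python-osc. The port and address paths below are placeholders; they just have to match whatever cvOSCcv in VCV Rack is configured to listen for:

```python
from pythonosc.udp_client import SimpleUDPClient

# Port and address paths are placeholders; they only need to match
# what cvOSCcv in VCV Rack is set up to receive.
client = SimpleUDPClient("127.0.0.1", 8000)

CHANNELS = [
    "h1/pinch_midpoint/x", "h2/pinch_midpoint/x",
    "h1/pinch_midpoint/y", "h2/pinch_midpoint/y",
    "h1/pinch_midpoint/distance", "h2/pinch_midpoint/distance",
    "hand_distance",
]

def send_tracking(values):
    """values: dict mapping each channel name above to a float."""
    for ch in CHANNELS:
        client.send_message("/" + ch, float(values[ch]))
```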
One important module I am using is trowaSoft’s cvOSCcv in VCV Rack. This module allows me to receive OSC data from TouchDesigner and convert it into CV control inside the Rack patch. With the expander module, it can take in nearly thirty inputs, which makes it especially useful for this project since I want to work with multiple tracking variables at the same time.
The mapping is still not perfect, and I am still changing it, but it is already very fun to play with. A lot of the results feel unexpected in a good way, which fits the project well. I am also thinking about adding face tracking or facial expression tracking as another control layer later on.
I currently send each incoming signal into a Scope first so I can see the changes in real time. It is helpful for testing and understanding the tracking data, but it also looks a little cluttered, so I am still thinking about a cleaner way to do it.
1st Demo:
What am I doing rn:
improving the mapping and trying to define more clearly what kinds of sonic changes each hand gesture or motion can create. I also want to add face tracking, build an audio reactive visual in TouchDesigner, and further optimize the modules in my VCV Rack patch. The system is already working in a fun way, but I am still refining it so the interaction feels more meaningful and connected.