GitHub page:
Created a repository for the scripts I've been working on. 4.8.25 is a basic hummer, but 4.11.25 allows for control via the cables. Works well. Shouldn't be too hard to add the OSC and develop more sounds. Hell yeah!
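For reference, hooking OSC into one of these scripts should look roughly like this in ChucK. The /hummer/freq address and port 6449 are made up; nothing in the repo actually listens on them yet.

SinOsc s => dac;
0.2 => s.gain;

// listen for OSC messages carrying a single float
OscIn oin;
OscMsg msg;
6449 => oin.port;
oin.addAddress( "/hummer/freq, f" );

while( true )
{
    oin => now;                      // wait for an incoming message
    while( oin.recv( msg ) )
    {
        msg.getFloat(0) => s.freq;   // first float sets the pitch
    }
}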
It is 3:30 in the morning and I am battling a massive headache (please pity me). Anyways, I have delved into the sonification of electricity from Designing Sound by Andy Farnell. This approach aims to recreate the sparking, arcing, and humming of an AC transformer. The method is to take the beating/phasing patterns of phasors at close frequencies and run them through a chirp impulse. Resonance is then added with comb filters, and loud sparks come from "short impulses, noise, and a sharply tuned resonant filter bank."
In order to recreate this in ChucK, I must create a few different modules:
Comb filter unit
Hum Source
Chirp Pulse unit
Random gate
Band EQ bank for sparking
Excitation source for filter bank
Putting these all together should result in a faithful recreation of a broken transformer. I listened to a Max/MSP patch that someone made of it and I liked the sound. Since all the examples are done in DSP, I figure the biggest challenge is just translating that into ChucK.
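Before I crash, here's a tiny first stab at just the hum part in ChucK: two slightly detuned oscillators beating into a resonant filter for some body. This is a crude stand-in for the book's phasor/chirp chain, not a port of it, and every number is a guess to be tuned by ear.

// two close frequencies -> slow beating, like a transformer hum
SinOsc a => Gain mix => ResonZ body => dac;
SinOsc b => mix;

100.0 => a.freq;      // roughly 2x mains
100.3 => b.freq;      // slight detune -> beating/phasing
0.3 => mix.gain;
120.0 => body.freq;   // resonance region, tune by ear
4.0 => body.Q;

while( true ) { 1::second => now; }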
Next up is to research Wekinator, though I figure that may be a tad easier since there are some implementations in ChucK already.
Here is an instruction manual on my project so far. Next goal is to just try and make more cool sounds with it. I'm also considering working with buffers and granular synthesis, but I have NO idea how that works. Time to learn I guess...
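For future me, the rough shape of granular synthesis in ChucK seems to be: load a buffer, then fire off lots of short, enveloped grains from random positions at random rates. Totally untested sketch, and clip.wav is just a stand-in for whatever sample ends up next to the script.

// one-voice granular sketch over a sound file (clip.wav is hypothetical)
SndBuf buf => ADSR env => dac;
me.dir() + "clip.wav" => buf.read;
0.5 => buf.gain;
env.set( 2::ms, 5::ms, 0.8, 10::ms );

while( true )
{
    // pick a random start point and playback rate for this grain
    Math.random2( 0, buf.samples() - 2048 ) => buf.pos;
    Math.random2f( 0.5, 2.0 ) => buf.rate;
    env.keyOn();
    Math.random2f( 20, 80 )::ms => now;   // grain length
    env.keyOff();
    10::ms => now;                        // let the release finish + tiny gap
}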
I decided to make a little ChucK script with two ideas in mind:
1) Make something with a slightly more complex sound
2) Make something with two input controls
Behold, Obscurify. It's pretty much just layered triangle waves in octaves. The first channel controls the pitch & volume, and the second channel adds distortion to the frequency values. Is it the craziest thing in the world? No. But it is a step in the right direction and it works as intended. Going forward, I would like to add more inputs. Possibly by moving my hands with a gyroscope attached and hooking that up to an Arduino. Just a thought...
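For reference, the core of the idea boils down to something like this stripped-down sketch (not the actual script; the channel assignments, ranges, and scaling constants here are placeholders):

// three triangle waves stacked in octaves
TriOsc osc[3];
Gain mix => dac;
for( 0 => int i; i < 3; i++ ) { osc[i] => mix; }
0.2 => mix.gain;

// envelope follower on input channel 0 (pitch & volume)
adc.chan(0) => Gain g0 => OnePole p0 => blackhole;
adc.chan(0) => g0;
3 => g0.op;        // multiply the two connections -> squared signal
0.999 => p0.pole;  // smooth it

// envelope follower on input channel 1 ("distortion" amount)
adc.chan(1) => Gain g1 => OnePole p1 => blackhole;
adc.chan(1) => g1;
3 => g1.op;
0.999 => p1.pole;

while( true )
{
    // channel 0 level -> base pitch and loudness
    Std.mtof( 40.0 + p0.last() * 60.0 ) => float base;
    Math.min( p0.last() * 8.0, 0.3 ) => mix.gain;

    // channel 1 level -> random jitter added to each octave's frequency
    p1.last() * 2000.0 => float jitter;
    for( 0 => int i; i < 3; i++ )
    {
        base * Math.pow( 2, i ) + Math.random2f( -jitter, jitter ) => osc[i].freq;
    }
    20::ms => now;
}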
Some brief ideation for the skin conduction idea:
Spring break is over. I have been doing quite a bit of brainstorming regarding the human conductor thing, but I haven't thought of anything interesting. I did some research, and the TS cables I'm working with don't have anything going on that I didn't already know about—the tip carries the signal and the sleeve connects to ground. I think where I'm getting stuck is at the data collection part. Sure, I can take one or two live data signals and manipulate them, but, ultimately, it seems a lot more practical to have multiple data streams at once and have them work off of each other. It allows for far more possibilities. I keep thinking back to that flight simulator controller...it has so many possibilities for inputting data...three dimensional movements, countless triggers for binary inputs...it seems far easier to create something with a wide range of possibilities using that than with whatever I come up with (which typically only has one or two data streams). I'm sure I'll figure something out eventually, and I do still like the idea of using my ADC and TS cable as an input, but I am not sure how to create something complex or interesting out of it...
HUGE BREAKTHROUGH AND REALLY COOL THING I PROBABLY WANT TO PURSUE!!!
I was messing around with my Scarlett 2i2 with a guitar cable plugged into one of the inputs. I was tapping its tip and sometimes got a strong response and sometimes got next to nothing. After some experimenting, I found out that the proximity of my skin to my phone charger plugged into the wall was what caused the stronger outputs. In other words, I am a life-sized battery controlling an instrument. It is not an instrument yet, but there is a vision to do some really cool stuff with this. With two distinct inputs into this ADC, I could control various parameters such as the volume and pitch of various sounds. Whatever I end up creating, I feel like it would be extremely cool to demonstrate the power of the human skin as a conductor in a musical fashion.
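A bare-bones proof of concept in ChucK could look something like this, assuming the Scarlett is the default adc and the two cables land on channels 0 and 1 (all the mappings are arbitrary): smooth each input's level with an envelope follower, then send one to pitch and one to volume.

SinOsc s => dac;

// follower for input 0 (pitch)
adc.chan(0) => Gain g0 => OnePole p0 => blackhole;
adc.chan(0) => g0;
3 => g0.op;         // multiply -> squared signal
0.995 => p0.pole;   // smoothing

// follower for input 1 (volume)
adc.chan(1) => Gain g1 => OnePole p1 => blackhole;
adc.chan(1) => g1;
3 => g1.op;
0.995 => p1.pole;

while( true )
{
    100.0 + p0.last() * 2000.0 => s.freq;
    Math.min( p1.last() * 10.0, 0.5 ) => s.gain;
    10::ms => now;
}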
These are the handful of albums that I would consider to be 10/10, perfect, flawless, life-changing experiences. I could endlessly enjoy these albums front to back, and a lot of them serve as influences for any sort of musical projects or undertakings I may have.
Here's my attempt at interpolating between data points for the jingle bells set. My goal was to successfully create a smooth interpolation between values, and it does just that. I added a basic sine UGen and controlled the frequency for an auditory interpretation. In the future I'd like to look into using the Shakers STK UGen for fully realizing the vision I originally had...
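The gist of the interpolation, boiled down (the numbers in the array are placeholders; the real values come from the Trends CSV):

// placeholder data points (monthly interest values, 0-100)
[ 5.0, 12.0, 64.0, 100.0, 30.0, 8.0 ] @=> float data[];

SinOsc s => dac;
0.3 => s.gain;

1::second => dur step;               // time to travel between adjacent points
50::ms => dur tick;                  // update rate
(step / tick) $ int => int ticksPerStep;

for( 0 => int i; i < data.size() - 1; i++ )
{
    for( 0 => int k; k < ticksPerStep; k++ )
    {
        // linear interpolation between point i and point i+1
        (k $ float) / ticksPerStep => float t;
        data[i] + (data[i+1] - data[i]) * t => float v;
        200.0 + v * 5.0 => s.freq;   // map 0-100 onto 200-700 Hz
        tick => now;
    }
}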
For class today, I spent a little bit of time searching the internet for different data sets to gather. I learned you can download Google Trends data as a .csv file, so I did the first search term I thought of: "All I Want for Christmas is You." Every year, the Christmas season seems to sneak up on us and then fade away as soon as it arrives. I want to capture that feeling with this data, since the two align perfectly.
My first idea for representing this data would be to take a buffer of some sort, like a jingle bell shake, and have the value at a given time be represented by the quantity of jingle bell shakes, with their intensity represented by the change in value over time (the value being the term's relative popularity). Google Trends only offers discrete values every month over the last 20 years, so that's not a whole lotta points to work with, but it's possible to artificially create more by drawing a line from one data point to the next and traveling along that path over a continuous amount of time.
Below is the data for this search.
After class update: the data set for jingle bells was similar but had slightly more variation, so I'm gonna stick with that instead.
It has been a bit. I have been trying to get familiar with ChucK as well as trying not to die from my other courses. But I digress. Here is my "Hello World!" of ChucK, proving my competency for understanding the syntax and ability to create with it. It is a simple FM synthesizer controller. Enter an argument for the carrier frequency. Then, using the up and down arrows, you can bring the modulation index up and down, and the same can be done for the modulator frequency with the left and right arrows. (Very much used the ChucK examples as a guide)
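For posterity, the skeleton of it looks roughly like this, pieced together from the ChucK examples (the arrow-key codes are USB HID usages and may differ between systems):

// modulator feeds the carrier; sync mode 2 = FM
SinOsc mod => SinOsc car => dac;
2 => car.sync;
0.3 => car.gain;

// carrier frequency from the command line, e.g.  chuck fm.ck:220
220.0 => float cfreq;
if( me.args() > 0 ) Std.atof( me.arg(0) ) => cfreq;
cfreq => car.freq;

110.0 => float mfreq;      // modulator frequency
2.0 => float index;        // modulation index
mfreq => mod.freq;
index * mfreq => mod.gain; // depth in Hz = index * modulator frequency

Hid hi;
HidMsg msg;
if( !hi.openKeyboard( 0 ) ) me.exit();

while( true )
{
    hi => now;
    while( hi.recv( msg ) )
    {
        if( !msg.isButtonDown() ) continue;
        if( msg.which == 82 ) 0.2 +=> index;                              // up
        else if( msg.which == 81 ) Math.max( 0.0, index - 0.2 ) => index; // down
        else if( msg.which == 79 ) 5.0 +=> mfreq;                         // right
        else if( msg.which == 80 ) Math.max( 5.0, mfreq - 5.0 ) => mfreq; // left
        mfreq => mod.freq;
        index * mfreq => mod.gain;
    }
}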
After doing some more looking into it, the OpenRCT2 community does not seem as accessible as I originally had hoped, meaning getting help for the roller coaster idea may not be as ideal, but I bought the game for $10 so I won't be throwing it out quite yet (and maybe it's a fun game I can play too, who knows).
However, the topic of live music coding came up in class, and I was immediately interested. Dr. Hamilton suggested I research Gibber, and I was astounded. I came across this paper by Charles Roberts and JoAnn Kuchera-Morin (I recall Dr. Hamilton mentioned Charles but not the other person who worked on it), and it gave me a good idea of what Gibber is about. The premise is that it is built to be extremely accessible, in many ways. First off, it runs in a browser—crazy, I know. Secondly, the syntax is very simple and elegant. In the paper, they made sure to note how "Gibber is layered on top of the audioLib.js library," but that it is significantly simpler to implement something like a square wave, as seen in section 2.2.1 of the paper. Lastly, it is built for musicians. I came across this video of Charles Roberts giving a talk about and performing with Gibber at the JavaScript Conference in 2014. I could see that the IDE and language were set up with musicians in mind, allowing for a visualization of the music within the code, built on musical notation, with a metronome in the upper left corner. The performance was also the sickest thing I've ever seen. Attached below is the video.
Though Gibber is very cool, I figure using ChucK to create live music code would be most practical since it's what we're learning in class. Fun fact, ChucK was mentioned in the paper linked above!
Doing some more research for the roller coaster idea, I came across OpenRCT2, an open source remake of Roller Coaster Tycoon 2. Having an open source game for gathering data would be significantly easier than a closed source game like Planet Coaster. In addition, there are many community-made plugins for OpenRCT2, and one called "openrct2-live-ride-measurements" by Phelicks shows that it's possible to extract live data from a ride as it happens. Downloading the JavaScript file and reading through the code may give me some insight into how one would be able to carry this out.
OpenRCT2: https://openrct2.io/
openrct2-live-ride-measurements: https://openrct2plugins.org/plugin/MDEwOlJlcG9zaXRvcnkzNDM5MDQ4NzQ=/openrct2-live-ride-measurements
Here are 3 research papers I found interesting:
"Natural Materials on Stage: Custom Controllers for Aesthetic Effect" from John Toenjes University of Illinois Urbana-Champaign describes the creation, purpose, and performance of two instruments call the "Control/Recorder (C/R)" and the "VideoLyre." These instruments demand a level of physicality from the performer, which in turn helps the audience engage in its musicality and engage in the performance. What interests me is how Toenjes talks about how he deliberately chose wooden materials to build the instruments to create a "dissonance" between its organic nature and technologically infused performance. As a materials engineer, considering this as a materials property is very interesting to me. Source: https://www.nime.org/proceedings/2009/nime2009_052.pdf
"Application of new Fiber and Malleable Materials for Agile Development of Augmented Instruments and Controllers" from Adrian Freed at the CNMAT (Center for New Music and Audio Technology), Dept of Music UC Berkeley, delves into various configurations of different conductive materials (piezoelectric materials, capacitive materials, conductive fibers, etc.) to show they can be used to improve upon existing musical devices or create new pieces of musical technology entirely. I've always found the concept of materials science/engineering as an interdisciplinary study to be very intriguing, but it's very cool to see it applied to music. Additionally, I appreciate how you can tell that the author of the paper put in a lot of effort to their builds, and it almost has a hobbyist/DIY feel to it that you get from engineers that have fiery passions. Source: https://www.nime.org/proceedings/2008/nime2008_107.pdf
"Exploring the (un)ambiguous guitar: A Qualitative Study on the use of Gesture Disambiguation in Augmented Instrument Design" from Adan L. Benito Temprano, Teodoro Dannemann, and Andrew McPherson put experienced guitarists in an interesting scenario. The researchers modified a guitar to act differently than expected when a string is bent, including changing how the pitch is affected or adding additional effects on top relative to how much it is bent. This was in an effort to see if creating a disambiguation of the guitarists' auditory image would result in the creation of new, rich sounds. To an amateur guitarist such as myself, it was cool to see how various guitarists much better than myself adapt to playing under different conditions on the spot. Source: https://nime.org/proceedings/2023/nime2023_61.pdf
Did a little ideation for a potential project which would allow someone to use the video game Planet Coaster as a means of controlling the parameters of a generative sound process.
PDF for planning below: