In a future where humanity has become more cyborg than human, losing touch with its primal somatic awareness, this website aims to highlight the deep connections between movement, music, and visual expression—rooted in human instincts. It serves as a resource for future cyborgs, who may struggle with emotional expression and facial control, offering tools to help them reconnect with their innate human instincts and refine their ability to express emotion through body language and facial expressions.
Click to try the website yourself :)
I wasn't making much progress before the Interaction Day. Things were technically working but not satisfying at all and I had no idea where to improve. Luckily, I got so many useful suggestions around the testing stage and made many iterations afterward, which led me to the final version that I myself could enjoy. Very grateful for all the help from professors, fellows, and peers.
Here's the first version of my work:
(I didn't keep enough of a record of the effects, so I'll include the code to demonstrate, sorry)
User Interaction
I detected certain movements of the eyebrows, mouth, and nose to trigger sounds and define emotions.
if (positions.length > 0) {
  // landmark indices used: 62 = nose, 24 = eye, 22 = eyebrow, 60 and 57 = two mouth points
  noseX = positions[62][0];
  noseY = positions[62][1];
  eye1X = positions[24][0];
  eye1Y = positions[24][1];
  eyebrow1X = positions[22][0];
  eyebrow1Y = positions[22][1];
  mouth1X = positions[60][0];
  mouth1Y = positions[60][1];
  mouth2X = positions[57][0];
  mouth2Y = positions[57][1];

  // d = how far the nose has moved since the last frame
  let d = dist(noseX, noseY, p_noseX, p_noseY);
  // EYEd = eye-to-eyebrow distance, MOUTHd = distance between the two mouth points
  let EYEd = dist(eye1X, eye1Y, eyebrow1X, eyebrow1Y);
  let MOUTHd = dist(mouth1X, mouth1Y, mouth2X, mouth2Y);
  //rect(20, 20, d * 3, 20);

  // nose movement plays the low-pitch range
  if (d > 10) {
    osc.start();
    let f = map(noseX, 0, width, 100, 300);
    osc.freq(f);
    envelope.play(osc);
  }
  // raised eyebrows (large eye-to-eyebrow distance) play the high-pitch range
  if (EYEd > 30) {
    osc.start();
    let f = map(EYEd, 0, width, 600, 800);
    osc.freq(f);
    envelope.play(osc);
  }
  // an open mouth plays the highest range
  if (MOUTHd >= 15) {
    osc.start();
    let f = map(mouth1X, 0, width, 800, 1000);
    osc.freq(f);
    envelope.play(osc);
  }
}
// remember the nose position for the next frame
p_noseX = noseX;
p_noseY = noseY;
}
I used an oscillator (osc) with different frequency ranges as the sound effect: the eyebrows control the high pitch, the nose controls the low pitch, and so on.
// oscillator and envelope for the triggered tones
let osc, envelope;
// envelope settings
let attackTime = 0.001;
let sustainTime = 0.5;
let sustainLevel = 0.8;
let releaseTime = 0.01;
// current and previous nose positions, for measuring movement between frames
let noseX, noseY;
let p_noseX, p_noseY;
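These are only the declarations; in setup(), the oscillator and envelope get created with p5.sound roughly like this (a minimal sketch; the waveform and level values are placeholders rather than my exact settings):
// inside setup(), after the canvas and capture are created
osc = new p5.Oscillator('sine');   // waveform choice is a placeholder
osc.amp(0);                        // keep it silent until the envelope plays it
envelope = new p5.Envelope();
// my variables mapped onto setADSR's attack / decay / sustain-ratio / release slots
envelope.setADSR(attackTime, sustainTime, sustainLevel, releaseTime);
envelope.setRange(0.8, 0);         // attack level / release level (placeholder values)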
Visual Design
I used the face tracker we learned in class and kept the dots on the face to imply that the interaction was about making facial expressions, and to provide a futuristic aesthetic.
function draw() {
  image(capture, 0, 0, w, h);
  var positions = tracker.getCurrentPosition();
  noFill();
  stroke(255);
  //creates line shape around the face
  beginShape();
  for (var i = 0; i < positions.length; i++) {
    // draw a small gray dot on every tracked landmark
    fill(100);
    noStroke();
    ellipse(positions[i][0], positions[i][1], 2, 2);
    //vertex(positions[i][0], positions[i][1]);
    //text(i, positions[i][0], positions[i][1]);
  }
  //ends line shape around the face
  endShape();
  noStroke();
  // second loop kept from testing; its body is entirely commented out
  for (var i = 0; i < positions.length; i++) {
    //fill(map(i, 0, positions.length, 0, 360), 0, 100);
    //ellipse(positions[62][0], positions[62][1], 4, 4);
    //text(62, positions[62][0], positions[62][1]);
  }
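This draw() assumes the capture and the tracker already exist; here's a minimal sketch of that setup(), following the usual p5 + clmtrackr pattern from class (the sizes are placeholders):
let capture;  // webcam video
let tracker;  // clmtrackr face tracker
let w = 640;
let h = 480;

function setup() {
  createCanvas(w, h);
  capture = createCapture(VIDEO);
  capture.size(w, h);
  capture.hide();               // hidden because draw() paints the frames onto the canvas
  tracker = new clm.tracker();  // clmtrackr, loaded via its own script tag
  tracker.init();
  tracker.start(capture.elt);   // track the capture's underlying video element
}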
In my CSS design, I put the emotion boxes and the webcam side by side:
.flexbox-content {
  display: flex;
  flex-direction: column;
  width: 50%;
  align-items: center;
  margin-top: 0;
}

.box {
  width: 300px;
  height: 200px;
  padding: 20px;
  margin: 5px;
  display: inline-block;
  background-image: url('assets/body1.jpg');
  background-position: top;
}

.fixed-sketch {
  position: fixed;
  right: 100px;
  top: 200px;
  /* width: 800px;
  height: 550px; */
}
I made a few changes before Presentation Day:
Detecting dots on the face.
Locating dots on the side.
First user Imaan.
Final Result Page1
Final Result Page2
(Code will be introduced in the technical section later in the post)
User Interaction - Music Stem
I didn't like the sounds I made with osc because they were too repetitive and sounded very much like noise. No matter how actively I moved my face, it was unlikely to play any rhythm, and the interaction became boring. Thanks to Professor Leon, we came up with the idea of using stems, so that there would be a rhythm already embedded and ready to be triggered.
This change to the design led to a shift in the concept. Playing music was fun but couldn't go together with the emotion detection idea I had initially. However, after considering how far-fetched it was to link emotions to such limited facial movements, I chose the music stem idea over the emotion detection idea. The core of the concept, somatic awareness, could still explain the music element, since it helps rewire the connections between movement, audio, and visuals for future humans. I was satisfied with this finalized idea; it didn't feel like trading the concept away just for a more interesting interaction.
Visual Design
Professor Marcela pointed out that the page arrangement with the boxes and the webcam side by side was distracting, and that having both the camera view and the dots on the screen at the same time might be overwhelming.
Since I wasn't going to use the emotion boxes anymore, the first problem was solved. As for the second problem, I loved how cool it looked after hiding the camera, but I also felt that showing the face itself provides a connection to somatic awareness. Thus, I made two pages connected with a clickable button: the first shows the webcam, and clicking the button enters the face dancing mode.
Since there's now a page hiding the camera view, the dots from the face tracker we used in class seemed insufficient. Also, because it wasn't able to detect eye movement accurately, I eventually switched to another tool Fellow Jiapei recommended: ml5.js. I set the color to bright green to look futuristic.
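The switch mostly meant changing the setup; here's a minimal sketch of the ml5.js FaceMesh pattern from the docs Jiapei recommended (the canvas size and options are placeholders):
let faceMesh;
let video;
let faces = [];

function preload() {
  // load the FaceMesh model once before setup()
  faceMesh = ml5.faceMesh({ maxFaces: 1 });
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  // keep detecting in the background; gotFaces receives the latest results
  faceMesh.detectStart(video, gotFaces);
}

function gotFaces(results) {
  faces = results;  // each face has a keypoints array with x/y positions
}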
To bring in better visual awareness, I added background particles mapped to the volume.
I made final adjustments before IMA Show:
(Code will be introduced in the technical section later in the post)
After my in-class presentation I received feedback that was very feasible to implement, so I added it to my project:
User Interaction
Position Trigger - In my presentation I brought up how restricted I felt by the limited elements of a human face that can be designed as triggers. In response, Professor Leon suggested that I could use the position of the face as another trigger, so I mapped the playback speed of the sound to the x coordinate: for the drums, that changes the pace; for the instrumental parts, it also changes the pitch.
As a Christmas gift for people checking out this project at the IMA Show, I inserted a Christmas song into the project to add variety as well. Once the user clicks the "Switch Mode" button in the corner, it starts snowing and triggers a different song. (The snow particles didn't work at the IMA Show, unfortunately, but I fixed them later.)
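Under the hood, the button only needs to flip a mode flag that the rest of the sketch checks; here's a minimal sketch of that toggle (the handler name and how it's wired to the button are placeholders, while the mode strings are the ones the code below actually checks):
// 'NowWe' = post-punk song, 'christmas' = Christmas song + snow
let currentMode = 'NowWe';

// hypothetical click handler for the "Switch Mode" button
function switchMode() {
  currentMode = (currentMode === 'NowWe') ? 'christmas' : 'NowWe';
  // the full sketch also switches which set of stems is playing when the mode changes
}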
Visual Design
Visual Feedback - Another suggestion from both Jiapei and Leon was that I should incorporate some visual responses to the users' actions. Instead of having instrument stickers show up on the screen, I chose waves, which feel more consistent with the aesthetic in my opinion. It might be less straightforward, but I liked it.
Black Curtain - After more user testing I found that people were still interested in seeing their own faces on the screen, so I wanted to provide that option. Also, a "bug" in my project at the time of the presentation was that once I entered the music page, the only way to restart was to refresh, which is certainly a huge issue if the website is to feel legit. To tackle both problems, I had the idea of dragging between the two pages so the user has full control over how much of their own face they want to see, and once the view is fully back to the camera it restarts. However, I didn't know how to navigate between two different website pages like that, so I asked Jiapei. She had the brilliant idea that I could just drag a black canvas to hide and reveal. So I mapped the "black curtain" to the x coordinate of an arrow I designed.
Main technical challenges:
My project somehow leaned heavily towards the p5 sketch. Everything happened in the sketch, so the code was quite long and complicated (to me). I got lost in my own code a lot; I had to sort it out and comment things to remind myself from time to time. I also kept many archived copies in case I messed some parts up.
Though the techniques behind the key execution were all learned in class, there were some detailed problems I bumped into that were beyond my knowledge, so I asked the fellows for a lot of help. I was very grateful for it and learned many experience-based skills.
The technical execution and the design decisions are inseparable, so I might have already yapped too much about technical problems in the last section. Here is the code in more detail:
1. Sound triggering
For the post-punk song, the full song is triggered by frowning, nodding, opening the mouth, and moving to different positions on the screen;
For the Christmas song, the full song is triggered by lifting the eyebrows, shaking your head, smiling, and moving to different positions on the screen.
//facemesh tool
for (let i = 0; i < faces.length; i++) {
  let face = faces[i];
  push();
  translate(width, -((width / 640) * 480 - height) / 2);
  scale(-1, 1);

  //Trigger1-space and speed
  let noseX = face.keypoints[4].x;
  let playbackRate = map(noseX, 0, width, 0.8, 1.2);
  mouthOpenSound.rate(playbackRate);
  mouthOpenSound2.rate(playbackRate);
  shakingHeadSound.rate(playbackRate);
  eyebrowSound.rate(playbackRate);

  //face dots
  for (let j = 0; j < face.keypoints.length; j++) {
    let keypoint = face.keypoints[j];
    noStroke();
    if (currentMode === 'christmas') {
      fill(255);
    } else {
      fill(100, 100, 100);
    }
    ellipse(keypoint.x, keypoint.y, 5, 5);
  }

  //Trigger2a-mouth open detection
  let mouthHeight = dist(face.keypoints[13].x, face.keypoints[13].y, face.keypoints[14].x, face.keypoints[14].y);
  let faceHeight = dist(face.keypoints[152].x, face.keypoints[152].y, face.keypoints[10].x, face.keypoints[10].y);
  let mouthProportion = mouthHeight / faceHeight;
  let mouthThreshold = 0.03;
  if (mouthProportion > mouthThreshold) {
    mouthOpenSound.setVolume(1, 0.1);
    mouthOpenSound2.setVolume(1, 0.1);
  } else {
    mouthOpenSound.setVolume(0, 0.1);
    mouthOpenSound2.setVolume(0, 0.1);
  }

  //Trigger3-nose movement detection (shaking head)
  let noseY = face.keypoints[4].y;
  let noseMovement = abs(noseY - previousNoseY);
  let shakeThreshold = 1;
  if (noseMovement > shakeThreshold) {
    isShaking = true;
    shakingHeadSound.setVolume(2, 0.1);
  } else {
    isShaking = false;
    shakingHeadSound.setVolume(0, 0.1);
  }
  previousNoseY = noseY;

  //Trigger4-eyebrow movement detection (frowning/lifting)
  let leftEyebrowNoseDistance = dist(face.keypoints[285].x, face.keypoints[285].y,
    face.keypoints[168].x, face.keypoints[168].y);
  let rightEyebrowNoseDistance = dist(face.keypoints[55].x, face.keypoints[55].y,
    face.keypoints[168].x, face.keypoints[168].y);
  let avgEyebrowNoseDistance = (leftEyebrowNoseDistance + rightEyebrowNoseDistance) / 2;
  let frownProportion = avgEyebrowNoseDistance / faceHeight;
  let frownThreshold = 0.10;
  if (frownProportion < frownThreshold) {
    eyebrowSound.setVolume(2, 0.1);
  } else {
    eyebrowSound.setVolume(0, 0.1);
  }

  //Trigger2b-mouth lifting detection (smile)
  let leftMouthCorner = face.keypoints[61];
  let rightMouthCorner = face.keypoints[291];
  let upperLipCenter = face.keypoints[13];
  let avgCornerHeight = (leftMouthCorner.y + rightMouthCorner.y) / 2;
  let lipLift = upperLipCenter.y - avgCornerHeight;
  let lipLiftProportion = lipLift / faceHeight;
  let smileThreshold = 0.01;
2. OOP Particles
For the post-punk song, the particles are the waves: I wanted the waves to look sharper to match the character of the song, so I raised the sine values to a power to sharpen the peaks and used lerp() to smooth the volume points added to each wave.
For the Christmas song, the particles are the snow: very basic particles like the ones we learned in class.
//OOP-Wave
let waveLines = [];
const WAVE_COLORS = [
  [90, 70, 100],
  [120, 100, 100],
  [160, 100, 80],
  [140, 100, 60]
];

class WaveTrack {
  constructor(y, color) {
    this.points = new Array(100).fill(0);
    this.y = y;
    this.color = color;
    this.maxPoints = 100;
    this.currentVolume = 0;
  }

  addPoint(volume) {
    this.currentVolume = lerp(this.currentVolume, volume, 0.1);
    this.points.unshift(this.currentVolume);
    if (this.points.length > this.maxPoints) {
      this.points.pop();
    }
  }

  draw() {
    push();
    stroke(this.color);
    strokeWeight(2);
    noFill();
    beginShape();
    for (let i = 0; i < this.points.length; i++) {
      let x = map(i, 0, this.maxPoints, width, 0);
      let amplitude = map(this.points[i], 0, 1, 0, 50);
      let pointyWave = pow(sin(i * 0.3), 3) * amplitude;
      vertex(x, this.y + pointyWave);
    }
    endShape();
    pop();
  }
}
//OOP-Snow
let snowflakes = [];

class Snow {
  constructor() {
    this.x = random(width);
    this.y = random(-100, 0);
    this.size = random(3, 8);
    this.speed = random(1, 3);
    this.wobble = random(0, 2 * PI);
    this.brightness = random(200, 255);
    this.isOffscreen = false;
  }

  fall() {
    this.y += this.speed;
    this.x += sin(this.wobble) * 0.5;
    this.wobble += 0.01;
    if (this.y > height) {
      this.isOffscreen = true;
    }
  }

  show() {
    push();
    noStroke();
    fill(this.brightness);
    circle(this.x, this.y, this.size);
    pop();
  }
}
...
//execute oop-wave
if (isStarted && currentMode === 'NowWe') {
  waveLines[0].addPoint(mouthOpenSound.getVolume());
  waveLines[1].addPoint(mouthOpenSound2.getVolume());
  waveLines[2].addPoint(shakingHeadSound.getVolume());
  waveLines[3].addPoint(eyebrowSound.getVolume());
  for (let wave of waveLines) {
    wave.draw();
  }
}

//execute oop-snow
if (currentMode === 'christmas') {
  while (snowflakes.length < 100) {
    snowflakes.push(new Snow());
  }
  for (let i = snowflakes.length - 1; i >= 0; i--) {
    snowflakes[i].fall();
    snowflakes[i].show();
    if (snowflakes[i].isOffscreen) {
      snowflakes.splice(i, 1);
    }
  }
}
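One thing these snippets don't show is where the four WaveTrack objects come from; here's a minimal sketch of creating them in setup(), one wave line per stem (the vertical spacing is a placeholder):
// stack one wave line per stem down the screen, using the colors defined above
for (let i = 0; i < WAVE_COLORS.length; i++) {
  waveLines.push(new WaveTrack(150 + i * 100, WAVE_COLORS[i]));
}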
3. Interface Design
use the webcam as the interface, mirrored with scale(-1, 1);
a huge black canvas, rect(slideX, 0, width, height), controlled by the arrow;
once the view returns to the beginning, all the sounds stop.
function draw() {
  background(0);
  //initiating
  if (interfaceVisible) {
    push();
    translate(width, -((width / 640) * 480 - height) / 2);
    scale(-1, 1);
    image(video, 0, 0, width, (width / 640) * 480);
    pop();
  }

  //Transition between pages
  let slideX = map(arrow.x, width - 50, 50, width, 0);
  fill(0);
  rect(slideX, 0, width, height);
  if (isDragging) {
    arrow.x = constrain(mouseX, 50, width - 50);
  }
  if (isStarted && arrow.x >= width - 50) {
    isStarted = false;
    interfaceVisible = true;
    //Restarting
    mouthOpenSound.stop();
    mouthOpenSound2.stop();
    shakingHeadSound.stop();
    eyebrowSound.stop();
    christmasMouthSound.stop();
    christmasMouthSound2.stop();
    christmasShakingSound.stop();
    christmasEyebrowSound.stop();
  }
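The dragging itself relies on plain mouse handlers that aren't shown above; here's a minimal sketch of what they can look like (the 30-pixel hit range is a placeholder):
function mousePressed() {
  // only start dragging when the press lands near the arrow
  if (abs(mouseX - arrow.x) < 30) {
    isDragging = true;
  }
}

function mouseReleased() {
  isDragging = false;
}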
4. CSS Button Design
I felt bad that I hadn't used CSS and HTML layout at all, so I tried to make the button design legit. I checked out some examples as references and made my own adjustments to fit the aesthetic.
https://getcssscan.com/css-buttons-examples
button {
  position: fixed;
  top: 20px;
  left: 20px;
  z-index: 1000;
  padding: 10px 20px;
  font-family: Arial, sans-serif;
  font-size: 16px;
  text-transform: uppercase;
  letter-spacing: 2px;
  background-color: #000000;
  color: #00ff00;
  border: 2px solid #00ff00;
  cursor: pointer;
  box-shadow: 0 0 10px #00ff00,
              0 0 20px #00ff00,
              0 0 30px #00ff00;
  text-shadow: 0 0 5px #00ff00;
  transition: all 0.3s ease;
}

button:hover {
  background-color: #00ff00;
  color: #000000;
  box-shadow: 0 0 15px #00ff00,
              0 0 25px #00ff00,
              0 0 35px #00ff00;
}
The major deficiency of my project, which bothered me a lot, was that I didn't use enough of the HTML and CSS we learned and practiced in the second half of the semester. If I were to make it a more established and mature website, I could maybe make the song options part of the website itself. But I honestly don't have a good idea of how to incorporate more web design for a better purpose, which is why it is what it is now. I hope I'll develop more ideas on that in the future.
I think that with the sound, visuals, and body movement, this project successfully delivered the somatic awareness idea in my concept. I was really happy to see people interacting with it during the IMA Show, especially the process of them gradually getting a sense of how to control the sounds; some people really mastered the game better than me. I think that responds to my concept well, as a process of building connections and awareness.
Looking back on the journey of the project, I have to say all of its development relied on people's feedback. I feel very lucky to have been able to collect great ideas in this class to help me with my project. It's really not just my own work but a collective work.
Mentor for the whole project: Marcela Godoy
Professor Leon: https://www.cambridge-mt.com/ms/mtk/#PaulSmith8DAWS
Fellow Jiapei: https://docs.ml5js.org/#/reference/facemesh
Other Online Resources:
https://youtu.be/47nuXIPPKjA?si=YzIAoOxNDR8nGFlO
https://youtu.be/V4iWO73zPL4?si=siOL_eOJYug0n0Q5
https://getcssscan.com/css-buttons-examples
Inspiration: https://www.youtubeeducation.com/watch?v=pLAma-lrJRM