"On the internet, nobody knows you're a dog."
- Peter Steiner
Internet identity has always been a big topic. Now that we use the internet almost every day, most of us have grown used to our identities shifting from real life to the cyber world. Looking into the future, technologies like Virtual Reality support diverse platforms, and different platforms sometimes mean we need to adjust our identities to fit the environment. How do we deal with these multiple identities? What does it feel like when you can visualize and experience the shifting?
The idea of this project is to let people experience the process of their own identities shifting: both their facial identity and their vocal identity are altered. The effect varies from person to person, as we ask them to sit in front of a webcam and then lie while watching their own faces. Some thought it was really interesting, but some just found it strange because they were not used to this kind of experience, and a few couldn't finish the process at all because they didn't want to lie in front of the camera. Most of the users were really willing to try it out, even though they struggled to come up with a lie and to watch their face and voice being altered. Through that struggle and experience, they came away with more thoughts and insights about the topic.
Why did I choose lying instead of anything else?
First, lying usually reveals more about the person themselves: it requires people to really think about what they would be willing to lie about. Second, internet dishonesty is a topic we can all somehow relate to, through ourselves or people we know. Last, watching yourself lie and hearing your own lying voice has a much stronger psychological effect than being asked to just say random things, especially in an open environment like the expo.
The whole process
Stage 1: the user takes a seat in front of the computer and types in their name
Camera on
Stage 2: the user introduces themselves and then tells a lie while watching themselves on the screen (meanwhile we take a picture of the user's face and record the audio with Pure Data)
Stage 3: the recorded audio is played back and the user is asked whether they are willing to enter the cyber world
Once the user agrees, the recording is played again with the voice pitch altered in different ways
Stage 4: the previous user's face appears on the screen, and by moving the mouse the user can shift between entirely themselves and a hybrid person
Stage 5: all the recorded faces shuffle randomly across the screen and all the recorded audio clips are played back in random order
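Under the hood, the stages can be driven by a simple state variable. The sketch below is only an illustration of that flow; the real project's stage handling, key bindings, and on-screen texts are not shown here.

    int stage = 1;          // current stage of the experience
    String userName = "";   // typed in during stage 1

    void setup() {
      size(640, 360);
      fill(255);
      textSize(18);
    }

    void draw() {
      background(0);
      switch (stage) {
        case 1: text("Type your name: " + userName, 20, 40); break;
        case 2: text("Introduce yourself, then tell a lie (recording...)", 20, 40); break;
        case 3: text("Are you willing to enter the cyber world? Press ENTER.", 20, 40); break;
        case 4: text("Move the mouse to shift between your face and the previous one.", 20, 40); break;
        case 5: text("All faces and voices shuffle.", 20, 40); break;
      }
    }

    void keyPressed() {
      if (stage == 1 && key != ENTER) userName += key;   // stage 1: collect the name
      else if (key == ENTER && stage < 5) stage++;       // ENTER advances the stage
    }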
The build
I used Processing for the image processing and the interface, and Pure Data for the audio.
For the face shifting controlled by the mouse, I used a green-screen-style technique: the sketch compares the pixel difference between the previous user's face image and the current frame, and replaces the pixels that differ. The replacement threshold depends on the position of the mouse.
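Roughly, the comparison looks like the sketch below. This is a minimal illustration of the idea, not the project's actual code: the file name "lastface.png", the brightness-based difference, and the direction of the mouse mapping are all assumptions.

    import processing.video.*;

    Capture cam;
    PImage lastFace;   // the previous user's captured face

    void setup() {
      fullScreen();
      cam = new Capture(this, 1280, 720);    // webcam resolution used in the project
      cam.start();
      lastFace = loadImage("lastface.png");  // assumed file name for the stored face
    }

    void draw() {
      if (cam.available()) cam.read();
      PImage current = cam.get();            // copy of the live frame
      current.loadPixels();
      lastFace.loadPixels();

      // The mouse position sets the replacement threshold: the further right,
      // the lower the threshold and the more of the previous face bleeds in.
      float threshold = map(mouseX, 0, width, 255, 0);

      int n = min(current.pixels.length, lastFace.pixels.length);
      for (int i = 0; i < n; i++) {
        float diff = abs(brightness(current.pixels[i]) - brightness(lastFace.pixels[i]));
        if (diff > threshold) {
          // Pixels that differ more than the threshold are replaced
          // by the previous user's face.
          current.pixels[i] = lastFace.pixels[i];
        }
      }
      current.updatePixels();
      image(current, 0, 0, width, height);
    }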
For altering the speech audio, I use Pure Data to change the playback speed, which shifts the pitch.
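Changing the playback speed scales the pitch along with it: playing a recording s times as fast shifts it by 12 * log2(s) semitones. As a small worked example (this is just the arithmetic, not the Pd patch itself), the conversion between the two could look like this:

    // Convert between playback speed and pitch shift in semitones.
    float speedToSemitones(float speed) {
      return 12 * log(speed) / log(2);    // e.g. speed 2.0 -> +12 semitones (one octave up)
    }

    float semitonesToSpeed(float semitones) {
      return pow(2, semitones / 12.0f);   // e.g. +7 semitones -> speed of about 1.5
    }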
The code is available for download. There are three things to note:
1. I use port 8000 for the communication between Processing and Pure Data (see the sketch after this list).
2. Update the initial faceNum value each time you restart the program.
3. I use full-screen mode for this project. Note that the webcam resolution is 1280*720 while my laptop's full-screen resolution is 1366*768, so if your setup has a different resolution you need to change the code file "lastface" to match it.
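For the Processing side of that connection, a minimal sketch could look like the one below. It assumes the messages are sent as OSC with the oscP5 library and that the Pure Data patch listens on port 8000 with a matching receiver; the message address "/playback/speed" is made up for illustration.

    import oscP5.*;
    import netP5.*;

    OscP5 oscP5;
    NetAddress pd;

    void setup() {
      oscP5 = new OscP5(this, 12000);          // local listening port (arbitrary choice)
      pd = new NetAddress("127.0.0.1", 8000);  // Pure Data listens on port 8000
    }

    // Hypothetical message telling the Pd patch to replay the recording at a
    // given speed, which shifts the pitch (see the speed/pitch helpers above).
    void sendPlaybackSpeed(float speed) {
      OscMessage m = new OscMessage("/playback/speed");
      m.add(speed);
      oscP5.send(m, pd);
    }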
In the end, 112 different users tried this project; here is a short video of some of them:
Because it includes personal information, I have set the video to private; the password is "mediatech". Please do not distribute it.
The final lies: thanks to
Peter for this amazing course
Jeroen for the spiritual counselling
All the weirdly talented classmates
And all the beautiful people who were willing to experience this project