Weekly Writing
Week 8 EWRL Response
Explore
The first artist/initiative that I really liked is The Machine to be Another by BeAnotherLab, which employs virtual reality to simulate body and gender swaps. VR provides an immersive, interactive experience that lets someone see the world through another person's perspective. The core idea is the Body Transfer Illusion, in which one experiences an external object (someone else's body, in this case) as one's own. The paper First Person Experience of Body Transfer in Virtual Reality extends the Body Transfer Illusion using VR technology, and The Machine to be Another is a real-world application of that research, shown in numerous exhibitions featuring gender swaps and body swaps. The two subjects move and gesture according to mutually agreed actions, and each subject's view is streamed to the other's VR headset. I find this experiment intriguing because we usually think of VR as a way to create fictional, fantastical worlds, but we rarely think of using it to imitate the real world from someone else's perspective. Although we all live in the same world, someone else's life, be it your closest friend's or a family member's, may still be a drastically different experience.
The second artist I want to talk about is Will Pappenheimer, whose work mixes reality with elements from VR and AR. For example, his project Ascension of Cod was created to raise awareness of the deteriorating condition of cod caused by excessive fishing and warming waters. The AR piece places a swarm of cod swimming in front of the Customs House in Salem, Massachusetts, and a companion VR piece shows the cod ascending to heaven over a prairie scene. I also like his project Watch the Sky, where the audience was invited to create Sky Writings, an AR form of graffiti without the actual damage, visible to other audience members as well. The collaborative element really elevates the concept of Sky Writings to another level.
Read
The city that stood out to me is Esmeralda (pp. 88-89). It reminds me of Venice, Italy -- how the city is threaded by canals, bridges, and sinuous trails. However, it also has its own flavor: a ground level and a higher level consisting of "landings, cambered bridges, hanging streets," which I could easily imagine together with the canals and bridges beneath. This part reminds me of the High Line near Hudson Yards in New York, where an elevated railway has been transformed into a park that passes through the surrounding skyscrapers and buildings. The mention of cats and mice, thieves, illicit lovers, and smugglers adds a sense of reality to the city: Esmeralda is not a romanticized town of perplexing routes and canals; it also has elements of evil that make it as real and palpable as any actual city.
To represent a city in the virtual world, not every aspect needs to be modeled precisely. As long as its uniqueness is carefully crafted and presented, we can skip the abstract parts or use the general perception of cities as a placeholder. Cities are bound to be similar -- but each stands out in its own way, which is why I think we need to make the most original part of the city interactable, be it the architecture, the gardens, or the inhabitants. A few well-chosen aspects should be enough to show what makes this city drastically different.
Lastly, the city changes as time elapses. For example, in Ersilia (p. 76), the city is abandoned once the strings fill it completely, so the inhabitants abandon and rebuild it again and again. In such a case, we could show two versions of Ersilia as time passes: a fully functioning one with few strings and a deserted one with decaying buildings and dense strings. Here, time changes the appearance of the city. In Melania, time changes the inhabitants instead: every time you enter the square, the old people are gone, replaced by their descendants who continue exactly the same conversations. These different approaches reflect the uniqueness of each city -- how it stands out among the many cities described by the author.
Week 3 EWRL Response
It is natural to conclude that the VR headset itself brings maximal immersion to the user -- by enveloping our most important senses, auditory, visual, and sometimes even olfactory, in a completely simulated reality, we are transported to whatever world the creator wants us to inhabit. But sadly, that is not the whole story. Immersive storytelling, as a number of this week's materials revealed, depends largely on the story itself and how well it uses the aforementioned senses to create an immersive experience for the user.
I see immersion as a spectrum. On the left-hand side, we have the technological apparatus that delivers the plot of the story, interacts with the player, and realizes the effects devised by the creator. On the right-hand side, we have the concepts and the story itself, whose gist is manifested throughout the project. Only projects that execute both ends of the spectrum well can create a truly immersive experience for the audience.
In his blog on audiovisual spatiality & immersion, Michael Naimark elaborates on the different audiovisual technologies that make up the left side of the spectrum. To recreate truthful immersion in virtual space, it is essential to understand how human eyes and ears work and which technologies best deliver a faithful scene that convinces the senses. One thing I noticed when using the Photosphere camera is that the stitching of pictures into a full 360° image is often flawed -- that's why better-quality images require stereo-panoramic video cameras capable of shooting multiple angles from the same no-parallax point. An interesting point raised in the article is that deep learning could be a solution for tackling immersive spatiality in digital space -- the Deep Learning Super Sampling (DLSS) technology developed by NVIDIA uses deep learning models to upscale rendered frames and, in its frame-generation mode, to infer new frames between two key frames. If similar technology were applied to 3D capture, inferring a viewpoint from images of the space taken at other viewpoints, we could compensate for the unavailability of high-end cameras with algorithms.
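To make the frame-inference idea concrete, here is a minimal toy sketch in Python using NumPy. The `naive_midframe` helper is something I made up purely for illustration: it simply averages two key frames pixel by pixel, which is nothing like the trained networks and motion vectors that DLSS actually relies on, but it shows what "inferring an in-between frame" means at the pixel level.

```python
import numpy as np

def naive_midframe(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Blend two RGB frames pixel-wise as a toy stand-in for learned interpolation.

    Real frame generation uses trained neural networks and motion vectors;
    this plain average only illustrates the idea of synthesizing an
    in-between frame from two key frames.
    """
    blended = (frame_a.astype(np.float32) + frame_b.astype(np.float32)) / 2.0
    return blended.astype(np.uint8)

# Two synthetic 4x4 RGB "key frames": one black, one white.
key_a = np.zeros((4, 4, 3), dtype=np.uint8)
key_b = np.full((4, 4, 3), 255, dtype=np.uint8)

mid = naive_midframe(key_a, key_b)
print(mid[0, 0])  # [127 127 127]: a mid-gray pixel halfway between the key frames
```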
Lastly, I want to briefly discuss the Immersive Design Summit talk given by Laura Hall, which lies on the right side of the spectrum. Her experience designing immersive storytelling projects has yielded many guidelines for architecting such spaces. The discrepancy between how players actually behave and what the designer imagined should be taken into consideration; this is often amplified in interactive media such as VR/AR projects, since what the creator wants the player to do is often different from what the player will actually do.
Week 2 EWRL Response
In the paper Place illusion and plausibility can lead to realistic behavior in immersive virtual environments, Mel Slater builds on previous literature to develop his theory of realistic behavior in virtual environments by defining two key requirements: Place Illusion (PI) and Plausibility Illusion (Psi). While previous literature focuses only on "presence," referred to as PI in this article, the author believes it is the combination of several factors -- immersion, PI, Psi, and the virtual body -- that functions as a framework prompting the person to produce "response-as-if-real" (RAIR).
Personally, I think the orthogonal juxtaposition of the two principles is more accurate in theorizing our responses in VR systems. First of all, Place Illusion, as stated in the paper, is the strong illusion of being in a place despite our irrefutable knowledge to the contrary. This "qualia" is not unique to VR systems; we also feel it when having a lucid dream or watching a movie in a theater. Even though we acknowledge that we are in a "simulation," we often experience a sense of detachment when we exit it. However, for me to respond realistically in an immersive scenario, PI alone is far from enough. In most cases, I would not cry or laugh as the plot shifts in a movie; I am not producing realistic behaviors because I am convinced that the simulation, however real it may look, is not really occurring.
One thing I do want to note here is that PI and Psi do not necessarily come from a faithful and accurate emulation of reality. In the virtual reality game Superhot VR, the world, including all the characters and objects, is low-poly -- compared to the real world, the game is absurd and strange. However, I would still shield my face when objects were thrown at me, or try to fend them off with my arms. These kinds of behaviors are manifestations of Psi: a well-made VR game prompted my realistic responses even though its world looks nothing like the real one. In sum, the VR world needs to be logical and consequential as it interacts with the player, so that the player responds realistically to an environment that is constantly changing in response to their behavior.
Week 1 EWRL Response
I felt utterly repulsed when I heard about the idea of wearing VR goggles for an entire week. I could not imagine how someone could discern what is real from what is virtual during such an overwhelmingly long immersion in an artificial world. While I am skeptical of Jak Wilmot's vision of a future society that lives entirely in virtual reality, I could see how this bizarre, and perhaps a bit dumb (in his own words), experiment would transform one's perception of the very environment we live in every day.
On his first day, Jak realized something fundamental to the virtual world: we each command and create our own version of reality in its entirety, in contrast to the real world, where each of us controls only a tiny part. In actuality, as he said in the talk, the difference between the two worlds lies in our authority over information. In the virtual world, we construct our surroundings from information we select; what we do, what we see, even what others see when they enter our worlds, is effectively monopolized by us. Indeed, I believe isolated environments, away from others' distractions, are conducive to creativity. As Jak jumped from a simulation of his room to the African savannah, and then to a Netflix simulation for sleeping, he was free from the physical limitations bound to reality. I could easily envision VR as an escape from the real world, one that allows freedom for creation and imagination. However, I don't think it can scale beyond being just an escape, as he mentioned in the talk. All of the virtual happenings are bound to reality somehow: the virtual SpaceX watch party depended on the physical launch of the rocket; his time on the savannah and under the galaxy would not exist if it did not reference the breathtaking scenery of the real world. While VR is an extension of reality, enabling us to experience sensations in unprecedented ways, it cannot replace any part of reality, let alone become a daily platform for interaction.
A more serious problem is the confusion between the real and the virtual as Jak spent more time in VR. If we are no longer capable of differentiating them, we treat them as equal, even though they differ fundamentally in where their information comes from. Once technology can simulate reality convincingly in VR, will we still be able to distinguish what happened in VR from what happened in reality? Like the scenario in the movie *Inception*, we risk being trapped inside VR or losing all sense of reality in real life. In sum, I don't see VR becoming a universal platform for all of society's interactions in the future.