Kseniia and I wanted to tackle making an accessible virtual museum gallery. Virtual museum spaces have proliferated during the pandemic that shut their physical counterparts, but their necessity extends beyond the current situation. Here are some examples of initial users we thought our virtual museum could better serve:
Blind and low-vision users: the visual nature of most art makes it difficult to experience in a traditional gallery.
Wheelchair users: many museums are crowded, and many buildings are old, so the physical experience can be difficult to navigate.
Those who cannot travel to the original physical building and see the object/work in person.
It would be interesting to incorporate https://github.com/pmndrs/react-three-a11y, which offers support for three.js too!
What: How do participants connect or communicate?
Virtual museum space where participants connect via voice and audio chat
When: How is time experienced in this interaction?
Synchronously
Who: Who does this interaction serve?
Users who cannot visit the physical space
Blind or low-vision users: include areas that explain/describe the work via audio, accompanied by a magnification feature for those with low vision
Wheelchair users
Where: What is the nature of the interaction space?
Users are represented as avatars and can navigate through a virtual museum space
Why: What is the purpose of this interaction?
To create a more accessible environment to view art with others
What are the social dynamics of this space or community?
Visitors can view art with others in a more private setting or opt-in for a public space
How can you capture this in virtual space?
Create private rooms?
Which physical elements of the space do you choose to keep?
I think we'd like to keep a basic delineation of space by simulating a gallery - just to help orient visitors mentally and create a basic sense of immersion.
Which elements do you let go of? Why?
To remove: crowds (unless desired, by creating a room and inviting others to it). Ideally also the sense of elitism associated with museums, to make a more welcoming space for people to view art.
To add: more elements that enhance the museum experience virtually via more information (video, audio, images) displayed in a better manner than just a virtual placard.
Ideally it would be nice to create a space like Google Arts and Culture's Gal Gadot HeARt Gallery.
The first thing to implement was the building itself that would house the works. Admittedly, we thought this would be easier but we ran into a few problems that took us a while to figure out.
We originally used THREE.BoxGeometry to create the walls for the space.
However, we soon became stuck when creating openings for windows and doors. Eventually, we arrived at an alternative to BoxGeometry: extrusion. Thus our implementation became (see the sketch below):
Create a 2D shape with THREE.Shape via its moveTo() and lineTo() methods. This shape would become the wall.
Create another 2D shape that represented the door/window and then push it to the original shape's holes array.
e.g. shape.holes.push(createDoor())
Extrude the original wall shape.
Set the wall shape's x and z position.
When our extruded door and window openings seemingly had no depth, we solved the problem by setting the material to be double-sided (material.side = THREE.DoubleSide).
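Here's a minimal sketch of that wall-building approach, assuming an existing three.js scene; the dimensions and the createDoor() helper are illustrative rather than our exact code:

```js
import * as THREE from 'three';

// 2D outline of the doorway, drawn in the wall's local XY plane
// (all dimensions here are illustrative)
function createDoor() {
  const door = new THREE.Path();
  door.moveTo(4, 0);
  door.lineTo(4, 2.2);
  door.lineTo(5.2, 2.2);
  door.lineTo(5.2, 0);
  return door;
}

// Draw the wall outline, punch the doorway hole, extrude, and position
function createWall(material) {
  const shape = new THREE.Shape();
  shape.moveTo(0, 0);
  shape.lineTo(10, 0);  // wall length
  shape.lineTo(10, 3);  // wall height
  shape.lineTo(0, 3);
  shape.holes.push(createDoor());

  const geometry = new THREE.ExtrudeGeometry(shape, {
    depth: 0.1,         // wall thickness
    bevelEnabled: false,
  });
  const wall = new THREE.Mesh(geometry, material);
  wall.position.set(-5, 0, -8); // x and z placement in the room
  return wall;
}

// DoubleSide so the faces inside the opening actually render
const wallMaterial = new THREE.MeshStandardMaterial({
  color: 0xf5f0e8,
  side: THREE.DoubleSide,
});
scene.add(createWall(wallMaterial)); // assumes an existing THREE.Scene
```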
However, we soon realized that we wanted the exterior of the building to have a different material from the inside. Since the material had to be double-sided in order to render the wall openings correctly, our workaround became to create two walls, an interior and an exterior one, each with its own material.
We had to calculate offsets and adjust the sizes of both walls so that the interior and exterior shells fit together snugly while preserving the correct positions of the windows and door.
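Continuing the sketch above, the double-wall workaround looked roughly like this (the depth and colors are placeholders):

```js
// Two shells built from the same outline, each with its own material
const WALL_DEPTH = 0.1; // must match the extrusion depth above

const exteriorWall = createWall(new THREE.MeshStandardMaterial({
  color: 0x8b7b6b,      // exterior finish
  side: THREE.DoubleSide,
}));
const interiorWall = createWall(new THREE.MeshStandardMaterial({
  color: 0xffffff,      // gallery-white interior
  side: THREE.DoubleSide,
}));

// Inset the interior shell by the wall depth so the pair sits flush
// and the door/window openings stay aligned
interiorWall.position.z = exteriorWall.position.z + WALL_DEPTH;
scene.add(exteriorWall, interiorWall);
```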
With the building complete, we needed to add a ceiling as well as collision detection to prevent users from clipping/traveling into the walls.
One of the features we wanted to include was a "More Information" panel that would allow users to read content extending beyond the in-person graphics that usual art galleries offer. To implement this, we looked into the CSS3DRenderer, which would allow us to project HTML into the 3D space. Despite successfully following a tutorial, we had trouble fixing the resolution of the web page and capturing keyboard events from the CSS3DRenderer and passing them down to the scene.
After banging our heads against the wall for a while, a very helpful office hours session with Aidan inspired us to abandon the CSS3DRenderer and implement the same feature with a regular modal in the HTML DOM instead. This was the best solution because it was easier to implement and, most importantly, didn't interfere with or cause problems for the keyboard controls. Unfortunately I forgot to capture any documentation of this last feature, but I was able to toggle the modal on/off with a keypress by creating a rudimentary state variable and passing it down into the keyboard controls.
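In essence, the toggle looked something like the sketch below; the element id, key binding, and helper names are assumptions, not our exact code:

```js
let modalOpen = false; // rudimentary state variable
const modal = document.getElementById('info-modal'); // hypothetical modal element

window.addEventListener('keydown', (event) => {
  if (event.key === 'i') { // assumed "info" key
    modalOpen = !modalOpen;
    modal.style.display = modalOpen ? 'block' : 'none';
  }
});

// The keyboard controls check the same flag so that movement keys
// are ignored while the modal is open
function onMovementKey(event) {
  if (modalOpen) return; // don't move the avatar while reading
  // ...normal WASD handling...
}
```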
For our final week, we knew we had to prioritize our remaining features:
Fix the exterior walls' material since it stretched weirdly in the first iteration
Add the artworks
Add wall collision
Implement an audio navigator
Implement the "More Information" modal
Unfortunately, we don't have as much documentation for this week as we did for the previous weeks because we had to prepare for Thesis Alumni Feedback Day.
We spoke to
After adding the artworks, we added a feature that automatically detects the closest painting and prompts the user to interact with the work (and open the modal) when they are within range. We did this by ray-casting frequently in the update loop and setting a new state variable, closestPainting.
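A rough sketch of that proximity check, run each frame in the update loop (the interaction range and the prompt helpers are illustrative):

```js
const raycaster = new THREE.Raycaster();
const INTERACT_RANGE = 3; // assumed interaction distance in world units
let closestPainting = null;

function updateClosestPainting(camera, paintings) {
  // Cast a ray from the center of the screen out of the camera
  raycaster.setFromCamera(new THREE.Vector2(0, 0), camera);
  const hits = raycaster.intersectObjects(paintings);

  if (hits.length > 0 && hits[0].distance < INTERACT_RANGE) {
    closestPainting = hits[0].object;
    showInteractPrompt(closestPainting); // hypothetical UI helper
  } else {
    closestPainting = null;
    hideInteractPrompt(); // hypothetical UI helper
  }
}
```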
The last big feature we had time to implement was the audio navigator, triggered on keypress. When the audio navigator is triggered, these are the steps that occur (sketched in code after this list):
Snap the player to face one of the walls at a 90-degree angle.
Determine their distance from the painting using ray casting.
Use the Speech Synthesis API to give verbal feedback about the distance if they are facing a painting.
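Put together, the audio navigator looked something like this sketch (the snapping math and the spoken phrasing are approximations of our approach):

```js
// Snap the player's heading to the nearest 90° so they face a wall squarely
function snapToWall(camera) {
  const quarterTurn = Math.PI / 2;
  camera.rotation.y = Math.round(camera.rotation.y / quarterTurn) * quarterTurn;
}

// Speak a message aloud via the Speech Synthesis API
function speak(text) {
  window.speechSynthesis.speak(new SpeechSynthesisUtterance(text));
}

// Ray-cast forward and report the distance to the painting, if any
function audioNavigate(camera, paintings) {
  snapToWall(camera);
  camera.updateMatrixWorld(); // make sure the raycast sees the new heading

  const raycaster = new THREE.Raycaster();
  raycaster.setFromCamera(new THREE.Vector2(0, 0), camera);
  const hits = raycaster.intersectObjects(paintings);

  if (hits.length > 0) {
    speak(`You are facing a painting, ${hits[0].distance.toFixed(1)} meters away.`);
  } else {
    speak('There is no painting directly ahead of you.');
  }
}
```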
Separately, we settled on the content and coded what the corresponding information modal would look like for each work.
Our initial target was anyone interested in art who was limited by physical obstacles from visiting galleries/museums, but we ended up focusing on people with disabilities (particularly those with visual impairments), whom we found were often overlooked in virtual spaces.
We chose works from the book Mouth & Toes: The World of 19th Century Silhouette Artists with Disabilities by Laurel Dean and Marianne R. Petit because we thought it would be apropos to feature artists in our accessible virtual museum space. The artists we chose, Martha Ann Honeywell, Sara Rogers, and Saunders Ken Grems Nellis worked at the intersection of visual art, performance, and disability in the early to mid-19th century.
In another prototype, we would like to include these features that we didn't have time to implement:
Add wall collision so that users can more realistically "bump" into walls (and include this in the audio navigator)
Magnification of works
Adding a ceiling and extra lighting to the space
The ability to toggle the user's video so that their face would be shown/hidden
Transition the controls so that head-swiveling/rotation would be controlled by the keyboard instead of the mouse