DESIGN 6400:



WEEK 4 OF PHASE 2 PROJECT: Completing the Booklet and 360 Visualizer. Starting on the Year 2 Committee Document

  • This week was about finishing my 6400 project, in which I am creating the 360 visualizer and a booklet to accompany the tool. As previously mentioned, the ultimate goal is to have a clear presentation of this material not only for my thesis committee, classmates and advisers, but also for anyone in the future who might be interested in learning more about what I am setting out to do in my last year here at Ohio State. Two of the biggest hurdles this week were finding a graph or chart layout that could encompass the entirety of the project while also providing useful organization between the AR and VR properties and the points of interest, and deciding what kind of ending VR experience would accompany the AR walk about the grounds. Finally, I am actively looking to insert three works that I have been using to inform my decisions (something that I am trying to do a better job of with this project). These three books/papers have been very important:
  • "Digital Media in Archaeological Areas, Virtual Reality, Authenticity and Hyper-Tourist Gaze," Costa & Melotti
  • Tuan, Yi-fu. Space and Place: The Perspective of Experience. Minneapolis, MN: University of Minnesota Press, 2014.
  • Schell, Jesse. The Art of Game Design. CRC Press, 2018.
  • Lens #61 The Lens of Virtual Interface


I am sharing a photo of my graph comparing the different strengths and weaknesses of AR and VR as they pertain to the specific areas I am looking to visualize. I also have some pictures of various elements of the visualizer. You can see before-and-after pictures of when you elect to use the phone for AR versus when you are just walking through the path. I also included a photo of a graph from the book Space and Place by Yi-Fu Tuan. This book looks at different solutions to visualizing elements. It helped with the layout of the booklet and also with creating the visualizer. I realized that, for this example, the path could be linear and then give the user some agency through the mobile phone.

Beginning of Visualizer

Chart comparing AR and VR elements

Building the village

Aerial view

Before AR

After AR

Multiple Scenes

Yi-Fu's picture of the sky and a path


Many of the choices made this week were about distilling elements down to focus on what I wanted to convey to the committee and my classmates. I was very happy with the progress I made on the visualizer. I finally decided to have one button allow you to move through the space and another button activate or deactivate the mobile phone with the AR application attached to it. I also decided to make the VR component a linear visual: the village will appear around you as you click a trigger. I find that even though there is not much "play," the act of trying to visualize the village can still be fun; in a way, the process of the visualization is the play. I will talk a bit below about how Schell's Lens of Virtual Interface helped guide me through that. I also decided to use a color scheme and a table to show the comparisons between the AR and VR elements. The chart has helped me pick the elements to use, and hopefully I can get some helpful feedback from the committee that will drive me through the summer. It is my goal to have some of this work done by the end of the summer.
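The two-button scheme above can be reduced to a tiny state model. This is only a logic sketch in Python, not actual Unity code, and all of the names are my own placeholders: one input advances the user along the fixed linear path, while the other toggles the virtual AR phone on or off.

```python
class VisualizerState:
    """Logic sketch of the 360 visualizer controls (hypothetical names).

    One button advances the user along a fixed, linear path of waypoints;
    a second button toggles the virtual mobile phone (the AR layer).
    """

    def __init__(self, num_waypoints):
        self.num_waypoints = num_waypoints
        self.position = 0          # index of the current waypoint on the path
        self.phone_active = False  # whether the AR phone overlay is shown

    def press_move(self):
        # Advance along the linear path; stop at the final waypoint.
        if self.position < self.num_waypoints - 1:
            self.position += 1

    def press_phone(self):
        # Toggle the AR phone overlay on or off.
        self.phone_active = not self.phone_active
```

Keeping the path linear while letting the phone toggle at any waypoint is what gives the user agency without adding branching to the walk itself.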


My approach to the visualizer was ease of use. I wanted to make sure that whatever I show is going to be clear and will also allow some room for discussion and the manifestation of new ideas through ideation. When making this project, I allowed Schell's Lens to influence me. The Lens asks the following questions:

  • What information does a player need to receive that isn't obvious just by looking at the game world?
    • The historic elements in relation to the field the user is walking around in and exploring.
  • When does the player need information?
    • On demand. When they feel they have approached an area that is important, they can trigger the element to appear.
  • How can this information be delivered to the player in a way that won't interfere with the player's interaction?
    • They can turn the vessel of information (in this case, a mobile phone) on or off.

Answering these questions helped shape my design. I also looked at graphs and tables from fields ranging from business to education and found that a color-coded table provided the best visualization of the data comparing the AR and VR elements.


  • Where should I test the physical elements? A field near me? At OSU? I need to find a test spot, as going to the site two hours away won't be necessary until the end.
  • Should I have a control device (i.e., build this for one type of phone: iOS? Android? Tablet?)
  • What other elements are missing? Now is the time to think about them.
  • Is there a way to keep things simple but still increase the "play" a little?


  • Print booklet
  • Test experience in Sim4
  • Make sure that Unity/Steam and Vive plugins are all done
  • Finish the Committee paper

Side note: The Morton Arboretum outside Chicago is using gnomes to get visitors to explore even more of its vast acres. This "searching for gnomes" element is very similar to "searching for the artifacts." It is also a showcase for installation art, but I believe the approach can be applied to digital art as well. Here are some pictures of the gnomes.


WEEK 3 OF PHASE 2 PROJECT: Making Choices on Booklet and 360 Visualizer

  • I made some choices this week in regards to bringing in what I think are going to be the most important elements of this project related to the Ft. Ancient people. I started out by picking a template that I thought could be the basis of the booklet. While I waffled back and forth wondering how I should lay it out, I decided to just jump in, and I am pleased with the process so far. It is my hope that the combination of the booklet and the 360 visualizer will be useful for my committee and also be something I can share with those wondering what I am working on. Through this process I have also been able to use InDesign, a program I always wanted to use but never had the need to. This provided the perfect opportunity.


Below are some pictures of the development of the booklet. There are multiple sections, each delivering the important information that would give someone who does not know much about the Ft. Ancient culture some context and meaning. I also have some pictures of my design of the 360 visualizer that will be used to show how the AR/VR functionality will work. And finally, there are some schematics of how I am approaching the areas of importance, along with some of the extra information I collected while at SunWatch.

Title Page of Booklet

Page one, overview

My background

Brief History

Cell phone visualizer. I will use this to convey the steps

Another angle showing how I can make the illusion of AR

Literature for teachers

First diagram of using the "rings" as the basis of the experience


I made many choices this week, as I needed to start putting things together. The first decision was picking the final layout of the booklet. I started to think about the design theories I was using: co-design with Dr. Cook, and the ethnography I was doing by going to the museums. I also started to think about how I was going to share these designs with other individuals who might be on my committee or in a class, and who are not going to be well versed in AR, VR, or in Ft. Ancient culture. This led me to create a booklet that will highlight and present the most important points but will leave some areas open to discussion. Also, I need to move forward with providing content; being afraid of this material won't be good for the project. Dr. Cook has already pointed out areas that I might look toward more readily, so I think he will continue to point out areas that are incorrect or not that important as the project progresses. That said, as I started creating subsections, I had to start making decisions. By breaking these pieces apart (i.e., the areas of importance, the field itself, the technology, etc.), it was readily apparent that I was going to have to find the most important pieces, combine them, and then present them clearly, not only in VR, but to others. This led me to the idea of actually mapping these pieces out for a user to pick up and share. I also decided to create a 360 visualizer in Vive. While I know that limits my sharing of the visualizer to those with a Vive, like ACCAD, it will also provide me with the tools to simulate an AR experience and perhaps give a more realistic interpretation of the experience as a whole. Thus, because it is closer to a real experience, this tool will provide me with even more data and user input that I can incorporate into the final design.


As I went into this week, I wanted to make sure that I had some pages completed for the booklet, as I will have to print them and make sure they are accurate to what I wanted to present. I decided to use the following topics: Biography, Overview, Ft. Ancient History, Areas of Importance, List of items in areas filtered through AR and VR usefulness, The Rings of the village and icons that represent important elements, Tech Flowchart, and Next Steps. Along with the booklet covering these areas with text and photos, I also started work on a 360 visualizer. When I realized that I can mimic the AR experience in VR much better than through pictures alone, I decided to take this visualizer a bit further. I am imagining that we go from 360 photo to 360 photo. When we get to an area that we think is important, we take out our phone and look for artifacts. When we grab an artifact, it is stored in the phone. When we enter the final area in the center, we share this with the person in VR, who will then recreate the world for us so we can see what it looked like. Each "ring" of the village will be a building block.
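The flow I am imagining (move photo to photo, store found artifacts in the phone, then rebuild the village ring by ring at the center) can be written out as plain logic. This is a hypothetical Python sketch rather than the real implementation; the ring and artifact names are placeholders of mine.

```python
class ArtifactTour:
    """Logic sketch of the planned flow (hypothetical names, not Unity code):
    the user moves from 360 photo to 360 photo, stores found artifacts in
    the phone, and at the center each stored artifact unlocks one 'ring'
    of the village as a building block."""

    def __init__(self, rings):
        self.rings = rings           # ordered rings, outermost first
        self.phone_inventory = []    # artifacts collected via the phone
        self.built_rings = []        # rings rebuilt so far

    def grab_artifact(self, name):
        # Artifacts found with the phone are stored in it.
        self.phone_inventory.append(name)

    def share_at_center(self):
        # Sharing at the center spends each artifact to rebuild the next ring.
        for _ in self.phone_inventory:
            if len(self.built_rings) < len(self.rings):
                self.built_rings.append(self.rings[len(self.built_rings)])
        self.phone_inventory = []
        return self.built_rings
```

The one-artifact-per-ring pairing is just one possible mapping; the key idea is that the act of collecting drives the act of reconstruction.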


Over the weekend, Maria sent me an amazing paper about the use of VR and AR in sharing archaeology: "Digital Media in Archaeological Areas, Virtual Reality, Authenticity and Hyper-Tourist Gaze" by Costa & Melotti. Their thoughts on an experience like this becoming a serious type of play are most inspiring. I have always thought that we can have play that is intelligent, engaging, fun and also important, so seeing someone write about this in the realm of immersive media is very welcome. I did not see any reference to AR, which might be a nice complement to the type of work they were looking at.

Here is a quote from the paper I thought was important:

Today, in the relationships among visual culture, cultural tourism and hospitality economy (Crouch & Lubben, 2003), the virtual image is used not only as a tool of information, communication and tourist promotion of the museum or archaeological area but also as a productive resource to invent new forms of business in museums and archaeological areas. They become containers of a new post-modern mix that transforms the cultural place into a tremendously serious type of “play” that hybridizes knowledge and scientific and humanistic skills in a new synthesis


  • How do I represent the relative importance of areas in a chart/booklet?
  • Should I use text in the 360 visualizer? I suppose so, if I am showing movement through space; having some title cards or interstitials would be helpful.
  • How do I show the building process of the world in VR? Maybe I just have the user click a button that builds the world around them, and explain that this is where a person coming in from outside would enter the AR artifacts they gathered.


  • Continue building the booklet. Add in elements of the works that are influencing my work.
  • Run some classmates through the visualizer and get their thoughts before summer starts
  • Finish the 360 visualizer
  • Make choices on the VR component
  • Start committee paper


WEEK 2 OF PHASE 2 PROJECT: SunWatch Research

  • This was a week of research. Dr. Cook recommended I visit a site called SunWatch, as it is a sister site of the sites being studied by the Corn, Climate and Culture group. SunWatch is located just outside Dayton, OH. I spent a full day there looking at the site and talking with the workers. I also documented what I saw and heard while there. In addition, I watched a video summarizing the site, presented in conjunction with a small museum they had set up to inform guests of what the culture may have looked like. I also started to work on my presentation packets. I am thinking I can also include a 360 photo walking tour and some composite scenarios of my ideas with the packets. I also started on a thesis website that will hold all of my research, thoughts and findings, much like this journal, so I have something to write about when I am done.


I have three series of photos. In one set, I have the museum and the gift shop. In another, I have photos of the village. In the third, I have 360 photos of the site that might become the basis of a 360 walking tour to help with the visualization process. I also found a teacher's packet that has some information for activities. While many of them seem a bit outdated, it may have some ideas I wasn't thinking of. Finally, I have started on my packet and present a part of it here.

Panorama of Site from center at sun poles

Old Video from 80s about site


Dig Process

Storage/ Trash Pit

Clay pot with cross-ribbon decoration, which is of Ft. Ancient origin

Canoe for trade/travel: Underrepresented

Inside a house

Facial Reconstruction using Forensic Modeling

thatch roof


base of house

Recreation of Ft. Ancient


While this Phase II project is about the beginnings of my research and a bit of visualization of that, I have also started to think about what I can group together and what might work as activities in an environment that has a physical component (the site itself) and a presentation area (like the museum or gift shop at SunWatch). It also raises the question of what the takeaways are from a guest's time at the site, and what the guests are doing before they attend. For the packet, I decided to model the presentation after a design document. Using a template, I am able to build and change key elements while keeping an aesthetic look that will help plan the material. Also, after talking with Maria, I decided that the order of this should be: 1) Break out areas of interest [which I am doing]; 2) Chart the properties of the VR/AR potential; 3) Hone in on 3 areas of interest and do some visualization of activities for them; 4) A 360 tour of the site [which I just added, as it will help with discussions for those who haven't been there].


For this week, I wanted to get photos, videos and 360 captures of the environment, and I was able to get all three. I also wanted to stay open not only to what kind of information is being presented at this museum, but also to what the guests, young and old, were doing. I followed a few families around (at an appropriate distance so as not to disturb them), and I gleaned a few gems that I think can bring life to this culture. The one big idea was how the families were identifying with the mannequins placed within the museum. One child said, "Look, this is the chieftain! Did he live here?" I also observed another family describing how the people lived while looking at the facial reconstruction case. This is something that I will include in my project and analysis. Having the villager AT the site and within it is a strength of AR and VR. I could see a system that allows for AR beacons at the site to help the guests gather up their elements. When they have their element, they can bring it into the VR system and have the room be reconstructed into a living village. I drew up this basic idea here.
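The beacon idea could start as nothing more than a proximity check: compare the guest's position against each beacon and activate the nearest one in range. Here is a minimal 2D sketch in Python, standing in for real AR anchoring or geolocation; the beacon names, coordinates and radius are all assumptions of mine, not part of any real system.

```python
import math

def nearest_beacon(user_pos, beacons, radius):
    """Return the id of the closest beacon within `radius` of the user,
    or None if no beacon is in range. Positions are simple (x, y) pairs,
    a stand-in for whatever positioning a real AR system would provide."""
    best_id, best_dist = None, radius
    for beacon_id, (bx, by) in beacons.items():
        d = math.hypot(user_pos[0] - bx, user_pos[1] - by)
        if d <= best_dist:
            best_id, best_dist = beacon_id, d
    return best_id
```

When a beacon fires, the guest would "gather" its element, which is what later gets carried into the VR reconstruction.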


  • Who else should I be talking to?
  • Is there a way to chat with someone who has worked with AR within a museum before?
  • What would a minimum viable project look like?
  • How in-depth should the mock-up be? This will be different than a proof of concept, so I need to think about what would be most helpful to make.


  • Pull out all the relevant photos and lay them in the booklets
  • Finish writing about the areas of interest
  • Place areas of interest in booklet
  • Create chart of pluses and minuses of the areas of interest
  • Build 360 tour of site that includes a visualization of beacons and vr endpoint
  • Pick three elements to break out
  • Do some visualization for those sites


WEEK 1 OF PHASE 2 PROJECT: PHASE 1 Recap and Beginnings of Phase 2

  • For this post, I will share a bit of feedback about Phase I and then share my thoughts and points of reference for Phase 2. I will also share a bit of the research that I plan on doing and some of the next elements I will have to tackle in order to move forward. For the most part, this post is a step back from my usual planning of a short project to begin the first steps of identifying, clarifying and simplifying the beginning of my thesis. This will be about the analysis of what I have done and what still needs to be done.


I will be looking at what the outcomes of Phase I were. I will also share a few screenshots of a multi-network component that I will be exploring in the future. Additionally, I am sharing what I have begun doing to separate out the elements that are going to be important to the Climate, Corn and Culture visualization project. It will be interesting to come to a conclusion on what will be important when it comes to the emerging thread of the project. Right now, I just have information, so this is the beginning of separating out the elements that will matter.

Panels that explain the areas the artifacts come from - Field Museum

Very basic button push to reveal a picture of a plant - Interaction - Field Museum

Explanation of trade to central Ohio - Field Museum

Tools and Pottery

Dolls and Kachinas

Drawing of Ft. Ancient Village from Dr. Cook's Book

Dr. Cook's Book

Map of Tribes and Areas of Influence

Central Ohio Map relating Mississippian to Ft. Ancient

Phase 1 Project: God of Element


Exchange of Corn between Villager and Farmer


First, let me talk a bit about my presentation of Phase I. I decided to have multiple users test the use of many masks together. This starts to allude to the fact that I am trying to have multiple users in AR or on a mobile device interacting with the person in VR. There is a technical element to this that I have decided to explore at a later date. The program I am going to use is called Photon, and from what I gather, it will allow networking between multiple platforms and devices. However, for the remainder of the semester, I will be focusing on creating elements that will start the pre-production process of my thesis. I have also decided to call the system the "Artifacts in Virtual Reality System," or ARVS. I think that having research and exploration done before my May committee meeting will be of the utmost importance.


My goals for this Phase are to create seven Scenario Sheets that outline and present possible areas of exploration of the environment. The seven scenarios at this time are: Environment & Sustainability, House Architecture, Social Organization, Tools and Tool Making, Timeline of Events (Overview), Daily Activities, and Burial Practices. I will then make a chart that takes the 3 most promising areas and interpolates what a VR, AR and VR/AR experience would look like. I will also include some preliminary elements like an app or character building. I hope to select 2-3 books to use as a framework for this. Right now, I am thinking of using the works below to extract solid principles as I move forward with the research I will be using for this project. Lastly, I will be creating mock-ups of what some of the VR and AR elements might look like using existing models and artifacts.

Table for Phase II Work


  • Start on User Profiles
  • Calendar of Events
    • 7 profiles of scenarios
    • Chart of AR, VR and VR/AR strengths and weaknesses for 3 scenarios
    • Document trip to Sunwatch (Recreation site in Dayton)
    • Mock-ups of 3 scenarios and what they would look like [Try to be as different as possible in approach in the hopes of consolidating the 3 into one that has the best elements]
  • Go to Sunwatch and take pictures and document (this was recommended by Dr. Cook)


  • Where will I get the research?
  • How much research should I gather?
  • Should I only use Dr. Cook and the CCC group as sources?
  • Should I include 360 photos for mock-ups?


WEEK 5 OF PHASE 1 PROJECT: Presentation

  • The work over Spring Break and into week five focused on honing in on the physical player's motivation and on the construction of the environment itself. In my thesis, I will be using avatars, so I decided to learn a new program, FUSE, and I inserted the avatar into the project as well. There was also the testing of the system and the walk-through and practice presentation of the FINAL. We did not have enough time to show my project on Thursday, but I will be showing it on Tuesday, so in this post I will still be talking about Phase 1 and not Phase 2.


In this Visual Documentation, there are many things to post. My trip to the Field Museum in Chicago provided many helpful insights into creating an experience curated and cultivated by a professional exhibit designer. On the trip, I met with Alvaro Amat, who designs exhibits for the Field Museum. He suggested I find the one true thread in the experience and build on that first: whatever the combination of these various elements does best, build to that first. He suggested I take the strengths of VR and AR and find the common thread between them. I think it is the visualization of objects and their relation to environments that can link these two platforms together. I have also included some pictures of the final portion of Phase 1 development. I learned a new program, FUSE, and have included some documentation of that as well. There is a model of an indigenous man who serves as a guide for the experience. The other pictures are of some of the physical elements, along with a few pictures of the Native American Exhibit at the Field Museum.

Alvaro Amat - Director of Exhibition Design who encouraged me to reach out during my thesis development process

I visited the Field Museum as part of a Career trek sponsored by OSU

Final Unity Scene with NPC, trackable corn and environment

NPC of Villager using FUSE

NPC (no texture) of Villager

360 Photo of Museum Reconstruction of Pueblo @ Field Museum in Chicago.

Entrance to American Indian Exhibit

Finished Kachina Masks and Mask Tracker

Artifacts from Fort Ancient Culture in Ohio

Corn Girl Mask used as slice of experience and Physical Structure

Unity environment

Inside of Pueblo

Inside of pueblo with a staple of corn



For this presentation, I decided to take just one vertical slice: the corn mask. In a full experience, I would have multiple masks, each being tracked, but for this experience time was limited, so I wanted to have one thread that connected from the physical space all the way through VR and back to the physical space. You can see this was accomplished. I decided to include the NPC character to give information to the user in the virtual space. I also took the time to identify the roles. As you can see in the flowchart of visual documentation, I am exploring how these physical objects can be passed from a physical state and role to a virtual state and role, and then, with context, trigger a learning outcome for the user. While the flow of this is still not there, I wanted to connect the dots at the cost of the experience feeling full. My feeling is that once I have the happy path, I can then alter the actions and information to create interesting scenarios. Creating the single through-line was something I decided upon.


For this final week, I wanted to make sure I could use multiple players to complete a task. I used Jesse Schell's Lens of the Team as a basis for this experience. While the end result is very basic, it does try to connect on a theoretical level. For this vertical slice I also wanted to recognize elements common to VR and physical space, such as the actual location of users and their characters relative to one another, objects, verbal/audio cues and, finally, teamwork. I also wanted to explore what makes an object important; in this case, it is the mask. The objects are wearable, functional (as a mask), trackable, and hold implicit meaning. And finally, I wanted to have an interactive experience that instructs. That was mostly met. I also wanted a state of flow, which this does not have yet. The plan and intention of this experience was to create a multi-user virtual reality project that isolates physical and virtual elements and lets me identify them for use in my thesis. The importance and learning outcome were that the masks were important to the Pueblo culture, and that they represented objects in the real world. For this phase I picked one object and one mask.


  • Answer and build some context around subjects like:
    • Timeline of these events
    • Environment and Sustainability
      • Climate Change, Flooding
      • Animal & Plant Use
    • Tools and Tool making
      • Pottery and arrowheads
    • House Architecture
    • Burial Practices
    • Social Organizations
      • Migration
      • Families
      • Political System
  • Which areas from above should be prioritized?
  • What framework and research theory should I use to develop the next element (Jesse Schell? Eric Zimmerman?)


  • Start on Research for the next Phase
  • Visit Sunwatch and take photos
  • Create a basic layout of the various scenarios presented by Dr. Cook
  • Answer questions and definitions given by Maria


WEEK 4 OF PHASE 1 PROJECT: First Presentation of Elements

  • The majority of the work I did this week was creating some of the elements of the mask project. I was able to create a physical mask, a cipher that could be decoded, and an environment for the two users to occupy together, both in the digital world and the physical world. I also tested the use of the trackers that are extensions of the Vive, and I was able to do a bit of research regarding the meanings of the masks. Finally, I was able to test the system with the class and ask them what worked and what didn't.


In this Visual Documentation, I am sharing the physical mask, the cipher, the tracker holder and some screenshots of the actual experience inside a Unity environment. It was within this environment that I was able to create some choices for the user to make. I also included a representation of a physical object that would reflect the objects in their environment; in this case, it was a piece of corn the user could pick up in the VR space and then walk with to the space occupied by the user in the physical space.

Mask in environment

State changes between red and blue

Front of mask

Back of mask


I decided to make the masks using safety goggles as the base. I then photo-printed the other masks so that the user could select which mask to don. When they put on the mask, it is up to the person in VR to find out what element the person in the mask is representing. In this case, the mask represented corn, so I created an ear of corn that the user could pick up and take to the person in the physical space. When both individuals were in the space together, a light would turn on to signify a state change in the environment. I also decided that I wanted to use the existing drawings created by Josie Hazen and A.G. Smith.
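The mask-to-element mapping and the light trigger can be captured in a few lines of logic. This is a hypothetical Python sketch, not the Unity implementation; the mask-to-element pairs come from the digitized masks in the Week 3 post (Kachin' Mana for corn, Konin taha-Um for rain), and the function names are my own.

```python
# Element each mask represents (from the digitized Kachina masks).
MASK_ELEMENTS = {"Kachin' Mana": "corn", "Konin taha-Um": "rain"}

def environment_state(mask_worn, vr_user_in_space, physical_user_in_space):
    """Sketch of the state-change trigger: the element in play comes from
    whichever mask the physical user wears, and the light turns on only
    when both users occupy the shared space together."""
    element = MASK_ELEMENTS.get(mask_worn)  # None for an unknown mask
    light_on = vr_user_in_space and physical_user_in_space
    return element, light_on
```

Separating "which element" from "is the light on" keeps the co-presence rule independent of the mask mechanic, so either can change without touching the other.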


As asked, I wanted to summarize and attach some goals to this project. My conceptual goals include creating a multiplayer experience between two users, the VR Player and the Physical Player. I want the VR player to interact with the physical world around them; I do this through the use of the space the VR user has to move through. I also wanted the physical player and the VR player to act together. In Jesse Schell's book, The Art of Game Design, he talks about different lenses used in game design. One of the lenses he explores is the "Lens of the Team." The elements that define good team play start with whether the team is appropriate; in this case, my classmates are appropriate. Will they communicate? Yes, they exchange information about the mask. Are they communicating clearly? This is very simple, so yes. And can they unify around a decision? This is an area that is lacking; I need to find a way for them to reach a conclusion together. I did read about two masks that represent the jokester and the protector. I wonder if the players can collectively decide who is saying what the masks mean in a way that doesn't interfere with each mask's meaning.


  • Build the other three masks
  • Create clues that the person in VR and in the physical space can use to make a decision together
  • Finish more decoded messages. Find another way to present them. Possibly a projector?
  • Finish the environments that the user in VR gets transported to
  • Build the dynamic scene changes


  • Will I use another format to project the cipher?
  • Should I have them make a decision together?
  • How many masks do I use? Right now I am thinking 4 masks for elements and 2 masks to make a decision.
  • Will I have sound and music, given the short time frame? Should I just drop in some atmosphere?
  • Now that I have some ideas for the thesis, can I start working those ideas into this?


WEEK 3 OF PHASE 1 PROJECT: Creating Physical Mask and Placing the "Room" in VR

For this week, I worked on fleshing out a bit of the physical gameplay in the room. Making the physical masks involved finding and gathering materials and then digitizing them. When I tested the game last week, one of my biggest notes was to include the physical player more. It is a good note and something I should definitely remember moving forward, as I could easily spend much of my time in the VR design stage and forget that there might be even more users in the physical game space.

Another thing I worked on this week was talking about my project with various companies, museums and professors to try and find some context for it. I am meeting Dr. Cook next week, but he gave me some things to look at in the meantime. Here is a link to his group's work:


For this documentation, I want to share some of the physical elements that I created, the digitizing of the masks, and a look at some of these elements in Unity.

I am also sharing some of the elements that might make their way into the thesis after chatting with Dr. Cook. The area is in southwestern Ohio, and the indigenous people are known as the Fort Ancient culture. They are responsible for such historic landmarks as the Serpent Mound. The new site that Dr. Cook found is said to be in very good condition and will have an impact on discovering and learning more about this indigenous culture.

Tests of Mask and backdrops

Test of Mask in Cave

The six masks digitized - Honan Kachina - Badger Kachina - Healer or Doctor

Kachin' Mana - Yellow Corn Girl - Corn

Konin Kachiin' Mana - Cohonino Kachina Girl

Konin taha-Um - Havasupai Side Dancer -Rain

Sikya Heheya - Yellow Heheya - Enemy

Example of Decoder

Wuyak-Kuita - Broad Faced Kachina - Beans

Area of Ancient Fort

Rendering of what site might have looked like (they could use some graphic help!)


When building the physical models this week, I wanted to make sure that the solution for tracking the physical mask could be used by one individual. This did pose a bit of a problem: how will I get four masks to represent the four elements if I only have one tracker? What I decided on was to have the user change the mask they were wearing and then stand in a particular part of the room that could be seen by the person in VR. The person in the physical world would then move rocks that they see in VR to create the word communicated to them by the user. In this case, we would have the user see the indigenous word for Corn Girl, Kachin' Mana. The word would only be seen if the physical user puts on the mask. The other element to decide on was how to have the user in VR use the word that they were told. I thought a good way to represent this might be some sort of word jumble. In this case, the user would place stones with letters, or possibly pictographs, on them to represent the element.
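The word-jumble check itself is simple: the stones the user arranges must spell the communicated word. Here is a minimal Python sketch of that check, with hypothetical names, ignoring case and spacing; it is a logic illustration, not actual game code.

```python
def check_jumble(placed_stones, target_word):
    """Sketch of the word-jumble check: concatenate the stones the user has
    arranged, in order, and compare against the target word (for example,
    "Kachin' Mana" for Corn Girl), ignoring case and whitespace."""
    def normalize(s):
        return "".join(s.split()).lower()
    return normalize("".join(placed_stones)) == normalize(target_word)
```

Since the comparison is order-sensitive, the puzzle only unlocks when the stones are arranged correctly, not merely collected.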


The sim lab has multiple VIVE trackers, but for my first experience with trackers I wanted to use only one, so I could learn the ins and outs of how a tracker could be used productively in an experience. I also thought it would be much easier to troubleshoot one tracker rather than four of them. The user will wear the tracker on the top of their head (which will be more comfortable) and then just place glasses over their eyes in order to see the coded message. The user could then tell the person in VR what the symbol, or word jumble, is. For the creation of the masks, I decided to order a lot of 10 pieces of colored film that can be attached to the insides of the masks. I have also decided to use photo paper instead of the original punch-out masks, because I don't have another copy of the book, and if I make a mistake I might find myself without a backup. I am also considering having the person in VR look out of a wigwam or a stone cave. This might be more aesthetically pleasing and give some artistic value to the project. Finally, taking a lesson from the last project, I will make sure that lighting plays a big role here.


  • Build final masks
  • Decide on word jumble or pictorial
  • Once I have the specifics of the mechanic, print out the decoded message
  • Create unlocking script so that the word can open the portal to the next area
  • Import the environments and objects (free models and existing elements)


  • What will the inside of the VR "hut" or starting point look like?
  • How will the word decipher be communicated? Will it be pictorial? A word jumble?
  • Still need to explore the title sequence; how will the user learn how to play?
  • After the person in VR "unlocks" an area, should this be the time that they can walk into a new world and connect with their other player?
  • Is there something inherently fun in the game if we try and find each other physically?


WEEK 2 OF PHASE 1 PROJECT: Mock-up & Layout

This week, I tried to flesh out what this experience might look like for the two players: the player in VR, Player VR, and the one in the physical world, Player R'. Overall, I think I have found a good configuration for the play area and also for what the person in VR is going to be doing, but I have some reservations about what the person in the physical space is doing. I hope to explore what I have thought about and then reflect on possible solutions for what Player R' in the real world is doing. My goal is still to focus on the idea of this game keeping flow. Even if this idea can sometimes seem like a nice footnote, I want to stress that there is a continuum that I am after. I don't want one user waiting in the experience for something to happen in the other user's experience. I want to explore and solve this.


I have collected photos of the Maquette mock-up I made in order to show the order of events I see in creating the happy path for the user. I have also included an overview, or top-down, diagram that shows the physical proximity in the room. It will be interesting to explore how the players can move through the rooms and the windows in order to interact with one another. Hopefully, I can rely a bit on playtest iterations to come up with better solutions.

View into Area #1 Corn Field

Another Area

VR Player sees user in Area

VR Player walks through portal

VR Player and R' Player cooperate to find out information on the area

One of my favorite Twilight Zone episodes

Video MOCK-UP of Mask Experience Using Maquette

Inside VR Layout

In Physical Room


In order to connect the player who is in VR and the player who is wearing the masks on the outside, I need to find a novel way to tie them together. I was looking at a zoetrope at ACCAD, and I saw the slots that are usually used to window a frame of animation; I thought that might be a good way to "separate" the environments as seen from the VR player's point of view. We could place the VR player inside a type of cylinder with portal cutouts. The VR player could look out these portals and, if so inclined, walk through a portal into another realm. Since I want to section out four areas of the room, with each representing a new element for Player R' to interact with, I also want the same thing to happen for the VR player. I would like them to be able to look around the cylinder and then "see" the correct mask appear in the correct area; i.e., if I look at Area #1, I will see the corn mask, and when I walk through that portal, I will be able to walk among the corn stalks with the person who is in the physical room. I admit, I need some more gameplay for the person in the physical room. I am hoping to chat with Scott about it this week and get some of his input on the physical gameplay/experience.


For this week, I decided to use Maquette to draw out what a person in VR might see if they were in the space. I also asked Leah to look at what I came up with and see if she had any insight into what might be best in order to keep the flow and the gameplay interesting. She correctly said that there is not enough for the person in the physical room with masks to do. There is not enough activity, or design, for them; I will need to expand on this. When I was in Maquette, I experimented with the idea of portals and found that this will work well. I also think that because there is a component of physical/VR crossover here, I should probably keep the locomotion (the ability to move across space) to a 1:1 walking relationship while in Area #1, #2, #3, etc. What I mean is that if I walk across the room, I am also walking across Area #1 (the corn field). However, if I am in the main room and decide to go through a portal, I could just use teleport locomotion (trackpad down to point to a place in the room). This should also be a way to get the user familiar with the program. I am also thinking the start screen could be the person in VR telling the person in the real world that they have to walk to a spot; the sequence doesn't start until they stand in that particular space.
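The 1:1-walking-inside-areas versus teleport-in-the-hub split could be sketched as a small policy script in Unity C#. The tags and names here are placeholders I am assuming for illustration, not final implementation details:

```csharp
using UnityEngine;

// Sketch (assumed names): teleport locomotion is only enabled while the
// player is in the central hub cylinder; inside Area #1-#4 they walk 1:1.
public class LocomotionPolicy : MonoBehaviour
{
    public bool inHub = true;            // flipped by portal trigger volumes

    // The controller script would call this on trackpad-down before
    // showing the teleport arc.
    public bool TeleportAllowed() => inHub;

    private void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Portal")) inHub = false;    // entered an area
        if (other.CompareTag("HubReturn")) inHub = true;  // back in the hub
    }
}
```

Gating the teleport arc this way keeps the physical/VR crossover honest: while a player shares an area with Player R', their virtual movement always matches their real movement.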


  • I have four masks and one tracker. How do I swap the masks? Do I just have the users put on new ones?
  • What, and how much, does the person in the physical space see from the person in VR? Is there more I can do to send visual signals from VR to the physical space?
  • What does the title sequence look like? Do I show the user how to play before they enter the world?
  • Once the VR and physical players are together, should I have them do something that will trigger an "area complete" type of dynamic? If they interact with all four areas, do they "win?" Or, what do I call this instead of winning?


  • Still looking for interchangeable mask
  • Flow Document after I settle on mechanics
  • Find out how I can make occluded areas that will only show the tracked object when seen through the portals
  • Start building the basic scene interaction in Unity
  • Make sure the controllers work
  • Do one iteration of the scene and then go back and solidify the gameplay before laying in the textures and environments and sounds


WEEK 1 OF PHASE 1 PROJECT: Tracking Masks

For this week, I worked further on my proposal, found some templates for Kachina masks, created a crude mockup of ideas in Maquette, found some reference photos of what I am looking to make, and also tracked a model in Unity. I will try to highlight both the new technologies I am working with and the importance of some of the Kachina masks. I will also address a few concepts and ideas I found this week that might shape my thesis project. Most of this discussion revolves around classifying my experience and also narrowing my focus a bit.


Below you will see some examples of masks from the Pueblo Indians. This type of mask is considered a Kachina mask. I have also included a few pictures from some of the areas I am exploring in regard to VR/physical/AV and AR components and where they might overlap. There is also a diagram of how much we use our senses to perceive the world around us.

This is from the book Kachina Punch Out Masks by A.G. Smith and Josie Hazen

More masks in the book. I plan on photocopying them to use for the final experiment

Mockup of mask imported into Unity environment using the punch out masks as the image projected onto the model.

Traditional Masks proposed by Smith and Hazen, others in museums

17.5% of our perception comes from four senses; the rest comes from visual stimulation.

I am envisioning a person in Augmented Virtuality and another in Reality

Crude mockup of me wearing different "mask" holders. Could ultimately be a paper mask and something like the white headset that is light and can easily attach to the user.


There are two parts to the creative choices that I have made so far. On the one hand, I have decided to move forward with the Kachina masks for their power to call upon worldly resources and to use those resources for good. I think there is a lot to learn from Native individuals when it comes to conservation and protection of the environment. I see this connection starting to form in this project, and I have decided to have the masks invoke certain environmental changes. If I have enough time, I can start to play with the interaction between the two players, both starting to change the environment together. Maybe the VR player can seed an area, and then the physical player can don the mask and see the bounty of their respect. I have decided to use a physical person = VR player dynamic for this experience. I have also decided to use just one game mechanic each: physical location of the mask for the outside player, and area preparation for the VR player. In one variation, I may have the VR player be the sun; they can then direct their energy to the player wearing the mask and thus influence the outcome of the experience. Ultimately, I would love to have a VR-AV-AR1-AR2-AR3 experience.


It is said that "...the central theme of the kachina is the presence of life in all objects that fill the universe. Everything has an essence or a life force, and humans must interact with these or fail to survive." An overview from Wikipedia states, "A kachina can represent anything in the natural world or cosmos, from a revered ancestor to an element, a location, a quality, a natural phenomenon, or a concept. The local pantheon of kachinas varies in each pueblo community; there may be kachinas for the sun, stars, thunderstorms, wind, corn, insects, and many other concepts. Kachinas are understood as having human like relationships; they may have uncles, sisters, and grandmothers, and may marry and have children. Although not worshipped, each is viewed as a powerful being who, if given veneration and respect, can use [their] particular power for human good, bringing rainfall, healing, fertility, or protection, for example." This idea of having the mask represent a location or an object is what I am looking for here. I want the entity or element being invoked by the mask to be visually present to the person in VR. If one decides to use their element for good, we have rainfall, healing and protection, but if one does not respect the power, bad luck will be cast upon them.


  • What will the layout of the area be?
  • Will there be spatial triggers? Other objects?
  • How big should the virtual mask be? Should it impose power on the observer?
  • Should I show the person in the physical space what the person in VR is seeing? Is there a way to play with that dynamic?
  • How do I attach the tracker to create a new mask entity? Can I show something on the monitor to encourage the physical player?


  • Practice Trackers
  • Connect Players in VR and Space
  • Test Controller
  • Test Mask



For this week, I finished up my lighting project, presented it to the class, and then started working on my Phase 1 project. In this journal post, I will talk a bit about the beginnings of the Phase 1 project. I will also start discussing which element I will be creating to give the user the ability to interact with someone else within a room. I am hoping that I can discover some social VR scenarios that help create an atmosphere that doesn't isolate the user in VR and instead empowers them with additional skills or enhancements.


You can see some screenshots of me testing an HTC VIVE tracker. Getting this to work on the VIVE was a challenge, but eventually I got it working. I am also including a video I made while researching Maori and Iroquois cultures. I am still working out which culture I will create the experience around, so I am assessing which one will lend itself better to the experience I am trying to create. Both place an importance on masks in ritual ceremony, but I would like to find a mask that would make a good conduit for teaching a bit about the culture of one of these indigenous peoples. I am also hoping to use the other mask as a statue. I have also inserted some models that I am hoping will influence this project as I decide on a thesis topic. I have included both a chart of the ecosystem of my idea and also one of the Flow game design fundamentals.

A diagram of a proposed project that ultimately combines more than one immersive technology.

Flow Philosophy used in game design

Tracker for HTC VIVE working

Photogrammetry Mask/Statue imported into Unity

Vive Tracker detection from Vive

NW American Indian and New Zealand masks

Cherokee, Iroquois False Face Mask and also a tribal ceremony costume from 1940.


I have decided to focus on one overlap element between a physical space and a VR experience. In this context, I hope to use the mask to allow two players to communicate through the wearing and positioning of the mask. I am hopeful that I will discover new areas of interaction that are sometimes limited when using objects in VR. I believe a mask is a powerful object that can serve multiple purposes, and in the case of American Indian and other indigenous peoples, they can be a source of culture and spirituality.

I am almost certainly going to use an American Indian mask at this point. Originally I was going to use a New Zealand Maori Mask, but many of those masks were used to decorate homes and temples. In the case of American Indian masks, they were worn by humans and many served important roles in their society.


I am going to write a few places and seek out someone here at OSU who knows a bit more about masks. Maria gave me the emails of a few people at OSU. I have also reached out to Candace Stout, who might be able to put me in touch with someone at a museum who can help me find artifacts that would make interesting objects to explore. Concurrently, I will focus on creating a scenario with a mask. I am leaning toward a local tribe like the Cherokee and their Booger Dance masks. Here is a picture of the mask.


  • There will be the need for a design document that maps both the physical and the virtual space
  • Will there be spatial triggers? Other objects?
  • Will the mask invoke some other spiritual, dreamlike entities?
  • Should I show the person in the physical space what the person in VR is seeing? Is there a way to play with that dynamic?
  • I would like this experience to have visual and aural elements, but should I keep the mode of communication between the two subjects visual? I only have one tracker, but maybe the different spaces change the function of the tracker?


  • Decide on object to use
  • Sketch out elements and environments
  • Use Maquette for basic iteration
  • Create game flow document
  • Attach a mask OBJ to the tracker
  • Test accuracy of mask placement in Unity
  • Continue to ask around about museums and individuals who might see my vision too



After some discussion, I have decided to try and add a new beginning to the experience. Maria and I spoke about how the traditional title screen could be circumvented in this project to accommodate my VO element. I am pleased with the way the project is going, as I now have a traditional title screen so that the user can enter when they want, kind of like a main menu. Then you are taken to a new scene that will serve as the "memory trigger." I will discuss my choices below, but overall, I think there is something exciting in how one approaches the beginnings of VR experiences. Now is an excellent time to discover and explore these elements, and I designed my project with this in mind.


Below, I will show some of the progress on my project, and I will also discuss some of the elements that I tackled this week. I will also post some of the prototyping processes I have done up until this point.

After exploring the beginning of this experience, I decided to add in another element that would serve as the main menu. The menu serves as a placeholder before the experience, but I also added in some 80s inspired music to get the user prepared for the scene and let it play into this motif.

For the next picture, you can see the land where the player's house once stood. Nothing is left of the house, but you find a stuffed bear that was once in the house. This ability of an object to take you back in time is of great interest to me, and I think it does a good job here of connecting two points in time in VR.

Here is the living room. You can see some of the interactable elements: the bear, the TV remote, the window, and, behind the camera, the door to the basement. I also created the walkable space, so that the user will not be walking through doors and walls.

Finally, below is an example of the basement and its areas. You can also see the flashlight, which you get when you go to the worktable. This allows you to access the electric panel and turn the lights back on.


One of the hardest choices for this project was how to begin and how to end the experience. I elected to play into the memory motif even further by adding in an element you find on the spot where your old house stood.

I also chose to have a few of the elements not be so linear, like the TV and the window. I think I will add a highlight effect to some of them, though. I like the idea of the user just finding them, but it does add some confusion, and the user needs to stay in the state of flow between anxiety and skill. The highlighted objects may release some of that anxiety.


I will continue to work on connecting the pieces. I almost have all the scenes loading correctly, and many of the elements are done, but I need to tighten a few things up and then turn my attention to guidance elements like text or highlighted game objects. I also think that when you turn the lights back on, it should take you back to the beginning.


  • Will I highlight objects
  • I need to synchronize some of the sound effects
  • Can I change the outside scene fast enough so that I can get in different weather states without interrupting the user's flow?
  • Can I make one of the lights flicker in the basement?
  • Can I switch skyboxes by switching scenes?


  • Improve the living room lighting
  • Correct scene loading triggers
  • Add in moving clouds or rain (sound)
  • Add in small sound effects like TV clicker, door opening and walking downstairs
  • Hide the electric box
  • Switch out the controller for the flashlight
  • Connect the last scene to the first


I would like to add a flickering light like this to the basement.



This week, I built some mock-ups in Maquette, imported those concepts into a non-linear editor, and laid down some of the sound effects and the order of these elements. I found that placing sound in the experience really added to my ability to create a mock-up to use in the design of this immersive lighting project. Finally, I started to work in Unity and created the living room, the dining room, and the lighting setup that will be used.


For this part of Visual Documentation, I will share what the experience will be like with the sound effects added and also provide some detailed pictures of the model in Unity with different lighting setups.

This is a mock-up created in Maquette. You can see the following elements:

  • Window
  • Flapping Door
  • The Siren
  • The rain of the approaching storm
  • The TV
  • The Stairs
  • The Basement light
  • The Electrical box
  • The Flashlight
  • The Radio

Below are examples of various lighting setups as the storm approaches

  • Yellow lighting
  • Clouding of outside
  • Flashlight on Floor
  • Light coming in from basement window
  • Clear path to outside window
  • (Window outside will be view of field)


As I started to mock up the living room and the basement, I noticed that not only do I have action-dependent items, I also have sound-dependent items. Therefore, when creating the "animatic," I decided to create the flow that I am calling the "happy path." Basically, this is the path that I envision a user going down in a perfect world. That said, I know that the user will choose different paths as they look around the world and try to understand what is going on around them, but this happy path is a good place to start the design process from. I also chose to place the VO at the beginning of the experience instead of in the middle; it would break the flow of the experience if it were placed in the middle or over the top of the sound effects and the storm sounds.


With a basic mock-up in place and the model picked out, I will now look to add in the final lighting setups and the user itself, so that I can test this out on the HTC VIVE. In Unity, you have the option of various light types. I am planning on splitting all the lighting setups across different Unity scenes and then loading them dynamically, so I can add and subtract them independently. This should allow me to "turn on" and "turn off" lights as the user interacts with objects around the scene. I might try to avoid time-based actions (like "wait 3 seconds and then start this light or that"). I feel that, as a minimum viable project, just the different setups of the lights will be sufficient.
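The dynamic scene approach above can be sketched with Unity's `SceneManager` additive loading. The scene names here are placeholders I am assuming for illustration:

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch: each lighting setup lives in its own Unity scene and is
// loaded/unloaded additively on top of the base scene, so setups can
// be added and subtracted independently.
public class LightingSceneSwitcher : MonoBehaviour
{
    private string current;

    public void SwitchTo(string lightingScene)   // e.g. "Lighting_Yellow"
    {
        if (!string.IsNullOrEmpty(current))
            SceneManager.UnloadSceneAsync(current);   // "turn off" old lights
        SceneManager.LoadSceneAsync(lightingScene, LoadSceneMode.Additive);
        current = lightingScene;                      // "turn on" new lights
    }
}
```

An interactable object (the electric box, the flashlight pickup) would just call `SwitchTo` with the lighting scene it should trigger, which keeps the setups event-driven rather than timer-driven.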


  • How many scenes will I need to create?
  • How can I spin a skybox to give the illusion that there is wind blowing outside?
  • How can I use a timer to trigger events like the screen door slam?
  • Is there a better way to simulate wind?
  • Still a question: create flashlights that a user can use as a controller? If controller replacement is too much, just have the controller project a light forward?
  • Still a question: will I use tunneling to move the user?
  • Still a question: should I use one controller or two?
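For the skybox question above, one possible answer is to slowly rotate the skybox material each frame. This is a sketch that assumes a skybox shader exposing a "_Rotation" property (Unity's built-in Skybox/Cubemap shader does):

```csharp
using UnityEngine;

// Sketch: slowly spins the skybox to suggest wind/moving sky outside.
// Assumes the active skybox material has a "_Rotation" float property.
public class SkyboxSpin : MonoBehaviour
{
    public float degreesPerSecond = 1.5f;   // subtle drift, tune by eye

    private void Update()
    {
        float angle = Time.time * degreesPerSecond % 360f;
        RenderSettings.skybox.SetFloat("_Rotation", angle);
    }
}
```

Pairing this with wind and rain audio would probably sell the effect better than the rotation alone.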


  • Create a player in Unity
  • Make sure you can walk around inside the house, but not through the walls
  • Create the title screen
  • Load scene one and unload the title screen
  • Create 8 boxes that will serve as the points that load and unload the scenes
  • After the "happy path" is established, I can then attach the triggers to the actual objects
  • Set up the lighting schema for each scene



For this week, I looked a bit into the sounds that will be present in this project and the VO of what will be said, created a map for completing the project and learning new tech, and also built some mockups of the experience using a new program from Microsoft called Maquette. All in all, it was a productive week, and I feel it is now time to go into production on the project. I notice from the development assignments in 6400 that we will be expected to complete an animatic. I'm not sure this is going to be important for this experience; I hope to see if a video of a mockup will suffice.


For this part of the visual documentation, I will share what the experience will sound like. I pulled some samples of storms and also found an original song inspired by the 80s that I might be able to use as title screen background music and maybe at the end when the radio turns on.

I decided on using Maquette over Tilt Brush; below are the mockups.

This is an example of the templates used in Maquette

This is a first mockup of the living room

This is a first mockup of the basement

These are image captures of key elements of the project

The window to look out

The door slamming shut by the wind


A see through window

The user area is small

Stairs to basement

The electrical box used to turn on power

Flashlight on desk

Light pouring in through the window

A view of the depth and a tool in the foreground

In living room a door to the basement

A better mockup of the window

Depth of view

Text that I am considering either before or after the event.


After I was able to select a Haiku that would work for this project, it was almost as if I was given a green light for the flow of creativity. The first task I gave myself was to use Trello. I set up a project management template, as you can see below:

The area that then became most important was the sound. In order for me to have a series of events take place, I felt that I could use sound as a milestone marker for the different activities and events a user would be engaging with. The sounds are listed above, but they consist of the storm getting worse and some activity-based sounds like a door slamming and a light flickering. Once I have the sounds in place, I will be able to layer in the other elements. This is a different approach, but it seems to make the most sense as a backbone for the experience, maybe because these sound events are time-based, while the other elements are user-triggered and event-based.
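The time-based sound backbone could be sketched as a simple Unity coroutine timeline; the clip names and timings below are placeholders, and the user-triggered sounds would stay event-driven in their own scripts:

```csharp
using System.Collections;
using UnityEngine;

// Sketch: the storm's ambient sounds play on a fixed timeline, serving
// as milestone markers, while door slams etc. are triggered elsewhere
// by user actions.
public class StormSoundTimeline : MonoBehaviour
{
    public AudioSource source;
    public AudioClip distantThunder;
    public AudioClip siren;
    public AudioClip heavyRain;

    // Unity runs Start() as a coroutine when it returns IEnumerator.
    private IEnumerator Start()
    {
        source.PlayOneShot(distantThunder);   // milestone 1: storm builds
        yield return new WaitForSeconds(20f);
        source.PlayOneShot(siren);            // milestone 2: warning siren
        yield return new WaitForSeconds(15f);
        source.PlayOneShot(heavyRain);        // milestone 3: storm peaks
    }
}
```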

Another element was my blueprint, a technical skill plan for using programs that I haven't used before. I tried Maquette and really liked how easy it is to use; I was able to start mocking things up right away. Tilt Brush is a bit more limited, so I decided to use Maquette instead and got some promising results for future project planning. Some of these are as follows:






Finally, I am looking at how the user will move about the space. If I contain them to an indoor carpet in the living room, or a cramped basement downstairs, the user will not need transportation "tricks." I have used transportation elements in the past, but I find them a bit strange. There is a second way of using these tricks, called tunneling, but this also has its own limitations. Between the red and blue pill, I think tunneling will be the better choice. I suppose when I have a mockup, I can ask some classmates what they would prefer.


Now that I have a mockup and am looking to start building the project, I will start with the model of the house itself. Once I have the model in Unity and can walk within it, I will be in a better place to decide between a 1:1 walking experience and one that "transports" the user to different locations within the experience. Based on the time allotted, having them transport may be better. If I can get the tunneling to work, that might even lend itself to a better type of transport. Either way, I will need to get into the model to know for sure. I have a model picked out; I just need to import it and then set the scene. I will also start working on the lighting of the scene next, as it is one of the most important elements of this experience. After the lighting and the walking are set, I can turn my attention to laying in the sound and the actions that will be triggered.


  • Try to get the timing down of both the interactions and sounds. What makes sense? Do we believe what is happening?
  • Can I create numerous scenes that dynamically add and subtract elements as actions are accomplished?
  • Work on finding out how to use a timer to trigger events like the screen door slam.
  • Create flashlights that a user can use as a controller? If controller replacement is too much, just have the controller project a light forward?
  • Will I use tunneling to move the user?
  • Should I use one controller or two?


  • Regardless of program, create two rooms
  • One room will be the living room
  • The other room will be the basement
  • Find models that will serve the project
  • Create models as needed using Maya
  • After assembling all the parts, import them into Unity
  • Create the basic movement of the experience, build, and export to the Vive



In this post, I will talk a bit about my first week of 6400. I was given the task of creating three poems, or poetic thoughts, about a childhood memory. The caveat to this project is that these pieces are light-inspired: the experience represents a moment we vividly remember in which light played a part, such as how light fell on a subject, or just how the action of light instilled a memory inside of us. From these thoughts, I started the development of a VR experience that would give the user an inspired feeling of what my emotional state was like. I also think this is an excellent opportunity to explore the "how to approach the design of experience" problem and to even look a bit at the UX design of the experience. Overall, most of my challenges this week have been conceptual. I have been trying to think about what each of the experiences affords the user. I want to create a small but meaningful experience.


These are photos that can help influence my design a bit.

Below are some initial sketch ups of what the user experience would be like

Some elements include:

  • Thunder
  • Siren Sounds
  • Door Highlight
  • Walk to Trigger
  • Downstairs
  • Storm is raging outside
  • Overhead light goes out
  • Electrical Box
  • Rain
  • Flashlights

This is a new prototyping tool for use in VR; I would like to try it out on this project.

I will be using Trello for managing the project with deadlines


When initially starting this process, I went back and looked at old photos of myself as a child. This step helped me refresh my old memories and remember a few specific moments that would fit the light criteria needed to complete this project. Ultimately, I decided on writing three Haikus that discuss the following experiences. I have included all three, but picked the first one, highlighted below:

Tornado Time

After school, stormy clouds.

The siren wail leads us below.

Huddled random flashlights.

This is an experience I had with my sister after school one day. We were latchkey kids, and one day a tornado approached. The power went out, so we only had flashlights. After it was over, we ran outside in the rain.

Minnesota Dance

Sticky summer trip.

The family melts in sun sweat

At night, a color dance.

As a young adult, my family took a trip to upper Minnesota. The vacation wasn't going the way anyone had planned, and it was ungodly hot. On the night of the first day, a cold front came in. That evening we saw millions of stars and the aurora borealis in the distance.

Los Angeles City Lights

Reptilian road.

Climbing to the pinnacle.

A diamond necklace.

Soon after moving to Los Angeles, I found myself going to a spot shared with me by friends and co-workers. The drive was unlike other parts of the city. After a bit, I pulled over to the side of the road and gazed down on what looked like a quiet, sparkling diamond necklace.

After sharing my Haikus with the rest of the class, I decided on using the first one, Tornado Time. This moment has the right combination of sound and the visual change of light. I remember the light outside going from an eerie orange to almost pitch black. I also remember light streaming in from the basement windows at the end of the warning, almost as an invitation to come back upstairs.

The other area I would like to look at is the use of a VR design tool. Microsoft just released such a tool called Maquette, and I want to use it to prototype what the experience will be before I build everything. If the tool seems to be too much to work with, I will go back to an older process that I used for past VR experiences.

Finally, I have decided to create this light memory with the HTC VIVE. I believe this will give me the range of motion to look around the space and will also allow me to use some of the new input kits. I also think I will be able to create a beam of light from one of the controllers to act as a flashlight. For models, I will use existing assets, allowing me to focus on the UX and UI design. This will also let me play with the lighting a bit more, given the time constraints.
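The controller-as-flashlight idea could be as simple as a spotlight parented to the controller. This is a sketch with assumed names; the actual VIVE trigger binding would call `SetOn` from whichever input system I end up using:

```csharp
using UnityEngine;

// Sketch: a Spot light childed to the controller transform projects a
// beam wherever the user points, standing in for a real flashlight model.
public class ControllerFlashlight : MonoBehaviour
{
    public Light beam;   // assign the child Spot light in the Inspector

    // Hook this to the trigger press/release event of the controller.
    public void SetOn(bool on)
    {
        beam.enabled = on;
    }

    // Editor convenience: auto-configure the light when added (Unity
    // calls Reset() in the editor when the component is attached).
    private void Reset()
    {
        beam = GetComponentInChildren<Light>();
        if (beam == null) return;
        beam.type = LightType.Spot;
        beam.spotAngle = 35f;   // flashlight-like cone, tune by eye
        beam.range = 10f;
    }
}
```

Because the light follows the controller transform 1:1, the beam should feel natural in the basement scene without any extra scripting.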

Although I would love to explore the use of Unreal Engine for this project, I believe it would be out of scope for this particular assignment. I am also not particularly familiar with C++, so troubleshooting any issues would be difficult. Instead, I will be using the Unity engine. I have experience scripting with C#, and this will also let me focus on the design elements as stated above.


With a rough sketch of what the experience will be, I now turn my attention to the design process. Recently, I have tried a few different ways to approach assignments. To break up my projects and set goals, I was using notepads, email reminders, etc. However, I recently had some luck with a program called Trello that helps with deadlines and keeping a project on track. Therefore, for this project, I will use Trello for project management. It is super easy to use, and you can share it with others.

I also uploaded some pictures to give myself a reference for the experience of seeing a tornado coming into your neighborhood. I will use these to inform the experience.

Using Maquette, I will create a quick layout and then derive from that an action workflow or diagram tree of the interaction. Hopefully, I can share this basic experience with others before I build it, so that I can alter or change the actions based on the feedback I get. In the past, I didn't get enough input before finishing a project, so I would like to correct that moving forward.


  • Will Maquette be enough to previz, or will I need to use drawings as well?
  • Is there any way to include my "sister" in the experience? I don't think so, as it would be too much to capture her and then have another human follow the user around, but maybe I could do something with sound?
  • What would this storm sound like?
  • Should I test some basic lighting scenarios before putting models into unity?
  • Will the upstairs scenes use action-triggered events, or will the events be triggered on a timer? Or both?
  • How should I end the event?


  • Create a Trello board
  • Download and start a Maquette project
  • Quickly place objects into Maquette and get users to try it this week
  • Find models
  • Start collecting sounds
  • Find some skyboxes to use for the stormy weather


After I was able to select a Haiku that would work for this project, I felt as though my creativity had been given a green light. The first task I gave myself was to set up Trello. I created a project management template, as you can see below:

The most important area then became the sound. In order for me to have a series of events take place, I felt that I could use the sound as a milestone marker for the different activities and events a user would be engaging with. The sounds are listed above, but they consist of the storm getting worse plus some activity-based sounds like a door slamming and a light flickering. Once I have the sounds in place, I will be able to layer in the other elements. This is a different approach, but it seems to make the most sense as a backbone for the experience, perhaps because the sound events are time-based, while the other elements are user-triggered and event-based.
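The "sound as backbone" idea above can be sketched as a simple timeline: timed sound cues fire on a schedule, and user-triggered events only unlock once their gating cue has played. The following Python sketch is a minimal illustration of that logic (the cue names and timings are my own placeholders, not the project's actual sound list or timings):

```python
# Timed sound milestones act as the backbone of the experience.
SOUND_MILESTONES = [           # (time in seconds, sound cue) -- illustrative
    (0.0,  "distant_thunder"),
    (20.0, "wind_rising"),
    (45.0, "door_slam"),
    (60.0, "light_flicker"),
]

def cues_played(elapsed):
    """Sound cues whose scheduled time has already passed."""
    return [cue for t, cue in SOUND_MILESTONES if t <= elapsed]

def can_trigger(gate_cue, elapsed):
    """A user-triggered event unlocks only after its gating sound cue."""
    return gate_cue in cues_played(elapsed)

print(cues_played(30.0))               # ['distant_thunder', 'wind_rising']
print(can_trigger("door_slam", 30.0))  # False
print(can_trigger("door_slam", 50.0))  # True
```

This keeps the time-based sounds and the user-triggered events separate while still letting the sound schedule pace the whole scene, which matches the "backbone" role described above.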

Another element was my blueprint, or technical skill plan, for using programs that I haven't used before. I tried Maquette and really liked how easy it is to use. I was able to start mocking things up right away. Tilt Brush is a bit more limited, so I decided to use Maquette instead and got some promising results for future project planning. Some of these are as follows:






Finally, I am looking at how the user will move about the space. On one hand, if I contain them to an indoor carpet in the living room, or a cramped basement downstairs, the user will not need teleportation "tricks." I have used teleportation elements in the past, but I find them a bit strange. A second approach is tunneling, which narrows the field of view while the user moves, but this also has its own limitations. Between the red and blue pill, I think tunneling will be the better choice. I suppose when I have a mockup, I can ask some classmates which they would prefer.
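The core of a tunneling comfort vignette is just a mapping from movement speed to how much the field of view is narrowed. Here is a minimal Python sketch of that mapping (the function name, the speed threshold, and the linear ramp are my own illustrative assumptions; real implementations tune these values and often smooth the transition):

```python
def vignette_strength(speed, threshold=0.5, max_speed=3.0):
    """Tunneling comfort vignette: no narrowing below a speed
    threshold, ramping linearly to full strength at max_speed.
    Returns a value in [0.0, 1.0]; speeds are in arbitrary units."""
    if speed <= threshold:
        return 0.0  # standing still or walking slowly: full field of view
    return min(1.0, (speed - threshold) / (max_speed - threshold))

print(vignette_strength(0.2))  # 0.0  (no tunnel while nearly still)
print(vignette_strength(3.5))  # 1.0  (fast motion: strongest tunnel)
```

The returned value would then drive something like a post-process vignette or an overlay mask in the engine, so the "tunnel" only appears during motion, which is what makes the technique more comfortable than free locomotion alone.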

After exploring the beginning of this experience, I decided to add another element that would serve as the main menu. The menu acts as a placeholder before the experience, but I also added some 80s-inspired music to prepare the user for the scene and play into this motif.