Project: Use Unity to build a VR environment where users can test the threshold of minute visual differences, known as "just noticeable differences" (JNDs), and compare against traditional 2D visuals to see the effect of virtual reality on perceiving minute visual changes
Final project poster: https://drive.google.com/file/d/1wpak-806EiAZ9GOQ_8w_YYEeQVZNgoaa/view?usp=sharing
Final project flash talk: https://drive.google.com/file/d/17uWRfX10MzEycKG05dmgRXJZka4fej6F/view?usp=sharing
Project: Use Unity to build a VR environment where users can test the threshold of minute visual differences, known as "just noticeable differences" (JNDs), and compare against traditional 2D visuals to see the effect of virtual reality on perceiving minute visual changes
Project 2 proposal: https://docs.google.com/presentation/d/1pRx6jBrqTVUz43qek1k1f73T379_fG-X6aSI0wJ-yDk/edit?usp=sharing
Project 2 update: https://docs.google.com/presentation/d/11PB6QgxC48G1megSyW2RFk4CM000gJb691o6vNPt3mY/edit?usp=sharing
Project 2 end presentation: https://docs.google.com/presentation/d/1Il3Sb7p84QAtdLLsuZxkSmj8qj_VLNfNOYBzX1bzKKs/edit?usp=sharing
Project 2 2D visual: https://docs.google.com/presentation/d/1XbVZTJjY2pBpiC-u6VA4UWZFa67E4-1Dv1uE39JejM8/edit?usp=sharing
Project 2 VR visual: https://drive.google.com/file/d/1SNUKszkFX3guX8VY2RxkVDzZ__UR8hZK/view?usp=drive_link
Project 2 wiki page: https://www.vrwiki.cs.brown.edu/applications-of-vr/vr-in-small-visual-differences
Project: Use Unity to build a VR environment where users can seamlessly compare extremely small and extremely large objects (e.g., molecules, cells, humans, buildings, planets) to study how VR affects intuition about scale compared to 2D representations.
Project 1 proposal: https://docs.google.com/presentation/d/1DgOrN0jShF_YFZv5l4f_nVnYlfzodsSv/edit?usp=sharing&ouid=107338774808050941721&rtpof=true&sd=true
Project 1 update: https://drive.google.com/file/d/1L65vUfd16pCXwvK_4mJFshZiKDL-c_0n/view?usp=sharing
Project 1 end presentation: https://drive.google.com/file/d/1MJ6cdaTSk-6DS0wyc7cTbwQ3ortPkaok/view?usp=sharing
Project 1 2D visual: https://htwins.net/scale2/#google_vignette
Project 1 VR visual: https://drive.google.com/file/d/1ywr5sF_NZ6IXK442oC4pj5OYi42QfqsT/view?usp=drive_link
Project 1 wiki page: https://www.vrwiki.cs.brown.edu/applications-of-vr/vr-in-visualizing-scale
Project: VR in aiding visual comparison tasks- "Spot the difference"
Students will try to spot the difference between VR spaces & 2D images
Aim to test if humans can perceive visual change better in 3D vs 2D
Project detailed summary: This project explores whether virtual reality (VR) improves performance on visual comparison tasks compared to traditional 2D displays. Using a Unity-based “spot the difference” setup, participants will be asked to identify differences between paired scenes presented in both immersive VR environments and standard 2D formats. The differences will be categorized into visual changes (e.g., color or texture) and spatial changes (e.g., object position, depth, or occlusion) to evaluate whether VR provides advantages specifically for spatial understanding. User performance will be measured through task completion time, accuracy, and error rates, along with subjective feedback on usability and perceived difficulty. The goal of this project is to determine when VR is most effective as a visualization tool and whether its immersive properties meaningfully enhance human perception in comparison tasks.
Project Timeline:
3/31: Finalize project scope, scene types, and difference categories (visual vs spatial)
4/02: Set up Unity VR environment and test basic interaction on headset
4/07: Start building first VR scene pair with implemented differences
4/09: Create corresponding 2D version using same scenes for comparison
4/14: Finish 2D scenes
4/16: Finish 3D environments
4/21: Start survey logging user feedback (i.e., accuracy, time, thoughts)
4/23: Start wiki contribution, finish APK for deployment
4/28: In-class activity
4/30: Finalize Wiki contribution
Project Evaluation:
clearly identifies deliverable additions to our VR Software Wiki - 5: can add a "Visualization in visual change perception" page
involves passthrough or “augmented” in VR - 3: can add a passthrough portion in the simulation, but unsure if it particularly aids in checking the difference between 2D vs 3D
involves large scientific data visualization along the lines of the "Scientific Data" wiki page and identifies the specific data type and software that it will use - 5: will utilize Unity and maybe ShapesXR to deploy the environment
has a realistic schedule with explicit and measurable milestones at least each week and mostly every class - 5: set up realistic milestones
explicitly evaluates VR software, preferably in comparison to related software - 5: aims to test the effects of VR on perceiving visual change
includes an in-class activity, which can be formative (early in the project) or evaluative (later in the project) - 5: has a clear "find the difference" activity
has resources available with sufficient documentation - 5: already have knowledge of making Unity/ShapesXR environments
Journal activities are explicitly and clearly related to course deliverables - 5
Most of my journal entries in the first few weeks of class were tailored toward familiarizing myself with the class, the VR setup, and the Meta Quest. I have stated, albeit sometimes more briefly than other times, the work I have performed, which was mainly completing homework and starting on my first project this week.
deliverables are described and attributed in wiki - 3
I have identified which deliverables to do: I will have a wiki page under "Applications of VR," in a section dedicated to "Perception of sense of scale and distance," which is the main premise of my first project. I will include my user survey results, as well as my own findings on how utilizing a 3D visualization space helps users better visualize hard-to-grasp object scales. Otherwise, I aim to do a separate wiki page on either "VR visualization software" or "VR modeling software," under the Unity section, focusing more on the multi-layered approach of using multiple environments and seamlessly traversing through them. I am more uncertain about the final design/contents of this second wiki page, so I gave myself a 3 for this criterion.
report states total amount of time -5
total time is appropriate -5
I am on pace, and the number of hours will only start increasing from now as I start progressing in my projects.
Project description: Use Unity to build a VR environment where users can seamlessly compare extremely small and extremely large objects (e.g., molecules, cells, humans, buildings, planets) to study how VR affects intuition about scale compared to 2D representations.
The proposed project clearly identifies deliverable additions to our VR Software Wiki - 5: aiming to expand on Unity capabilities and add to techniques that enable seamless transitions between multiple layers/environments
Involves passthrough or “augmented” in VR - 5: aims to start users within the classroom, showing scales of objects
The proposed project involves large scientific data visualization along the lines of the "Scientific Data" wiki page and identifies the specific data type and software that it will use - 5: using Unity, and Blender if necessary, to build objects; the technique used is multi-scale layered environment rendering
The proposed project has a realistic schedule with explicit and measurable milestones at least each week and mostly every class - 5: attainable goals, so far meeting the timeline
The proposed project explicitly evaluates VR software, preferably in comparison to related software - 5: the project aims to compare VR and 2D visualization on their ability to convey a sense of scale
The proposed project includes an in-class activity, which can be formative (early in the project) or evaluative (later in the project) - 5: includes an in-class survey evaluating visualization techniques between VR and 2D visualization
The proposed project has resources available with sufficient documentation - 5: lots of documentation on Unity and layered environments
Project: Use Unity to build a VR environment where users can seamlessly compare extremely small and extremely large objects (e.g., molecules, cells, humans, buildings, planets) to study how VR affects intuition about scale compared to 2D representations.
Plan:
2/10: Find software to build 3D models and allow users to magnify/shrink environment by multiple factors of ten
2/12: Find equivalent website that showcases this in a 2D level
2/19: Start developing the 3D visualization, and start thinking of survey questions for activity participants
2/24: Finalize survey; work on finding ways to overlay classroom objects onto the visual
2/26: Continue finishing up the visualization; maybe add a ruler app to showcase the size of objects in quantifiable measures
3/03: I am aiming to run an in-class activity around this time, once the visual is done, and will survey how the 2D and 3D visuals help users grasp the scale of objects
3/05: Work on wiki tab on software usage and visualization of large scale objects using VR
Shown on top of journal
Project 1: Find and download real-scale objects and build software that lets users render objects of varying magnitudes. Insert a ruler/sizer that allows users to calculate the length/size of objects present within the classroom, then compare those sizes with other objects in the visualization. Conduct surveys asking whether users can mentally grasp large/small sizes better after the visualization, which also ties in with the class activity. A few sample deliverables could include a tutorial on the rendering software used, or a study with user accounts of how much better humans can perceive size using VR.
Project 2: Find papers/2D interactive maps that show glacier loss data. At the same time, find software that can show glacier loss on a globe, as well as a separate scaled cube of water that represents the volume of melted ice. Also, outline the parts of the map that have been submerged due to sea level rise. Users can interact with this map to infer the progression of glacier loss and its effects on sea level rise. The main deliverable for this project could be a guide to using ArcGIS to make interactive maps, as well as overlaying simulations on the 3D map.
Project 3: Find simulations that best show a 3D magnetic field projected on a 2D monitor. Develop user-interactive virtual magnets that users can move around to see how the magnetic field shifts. This project can allow users to manipulate the magnetic objects within the simulation and see real-time shifts of the magnetic field. There could also be a collaborative aspect, as long as the simulation can be connected in real time with other users who are simultaneously experiencing the project. The main deliverable for this project could be the visualization of physics and the simulation of physical movements in a VR space.
Before | After | Goal
------ | ----- | ----
1 | 3 | Goal 1: articulate AR/VR visualization software tool goals, requirements, and capabilities
1 | 3 | Goal 2: construct meaningful evaluation strategies for software libraries, frameworks, and applications; strategies include surveys, interviews, comparative use, case studies, and web research
3 | 4 | Goal 3: execute tool evaluation strategies
2 | 4 | Goal 4: build visualization software packages
3 | 4 | Goal 5: comparatively analyze software tools based on evaluation
1 | 3 | Goal 6: be familiar with a number of AR/VR software tools and hardware
3 | 4 | Goal 7: think critically about software
4 | 5 | Goal 8: communicate ideas more clearly
1 | 3 | Goal 9: grow a habit of routinely journaling my work
Proposal 1- Use Unity to build a VR environment where users can seamlessly compare extremely small and extremely large objects (e.g., molecules, cells, humans, buildings, planets) to study how VR affects intuition about scale compared to 2D representations. This directly addresses a human limitation in perceiving objects outside the scope of everyday life. Can also use Blender to render these objects.
Proposal 2- Create a VR visualization that maps glacier loss within the arctic circle so users can intuitively grasp magnitude and rate of change. Similarly to proposal 1, can use Unity as the main VR rendering software.
Proposal 3- Visualize invisible physical fields, such as magnetic fields, which users can interact with using virtual magnets. This allows us to visualize fields in 3D space, giving a comparable information advantage over traditional 2D simulations. I can use ParaView for vector field processing.
Homework 1 Assignment:
10 minute changes:
Link SideQuest download page/guide to Immersion Analytics Installation and Setup page
Add link to ShapesXR within the VR Modeling Software page
Remove 2021 VR@Brown page link, as it currently shows a 404 not found error
1 hour changes:
Review and add more papers on the related VR research page
Add additional applications of VR in economics, particularly the visualization of hard-to-comprehend macroeconomic trends
Add/populate new 2025 student research page within the VR Research page
10 hour changes:
Write up reflection based on project 1- under applications of VR
Write up reflection based on project 2- under applications of VR
Write up page on utilizing Meta Quest Developer Hub to deploy APKs
CONTRIBUTION 1 [Added link to SideQuest download in Immersion Analytics Installation and Setup page]
CONTRIBUTION 2 [Added wiki page on VR in perception: using VR to grasp a sense of scale]
https://www.vrwiki.cs.brown.edu/applications-of-vr/vr-in-visualizing-scale
CONTRIBUTION 3[Added wiki page on VR in JND- comparing VR and 2D tools in perceiving small visual differences]
https://www.vrwiki.cs.brown.edu/applications-of-vr/vr-in-small-visual-differences
CONTRIBUTION 4 [Added wiki page on utilizing Meta Quest Developer Hub for deploying/testing APKs]
https://www.vrwiki.cs.brown.edu/vr-development-software/meta-quest-developer-hub
Total: 139 hours
1/25/26 - 2 Hours
Finishing up homework 1 content
1/25/26 - 3 Hours
Reading papers on collaborative AR
1/28/26 - 1 Hour
Set up Meta quest and explored basic functionalities
1/28/26 - 2 Hours
Finished journal entries for homework 2
2/2/26 - 2 Hours
Finished previous lab (ShapesXR) and all necessary setups for tomorrow's class
2/2/26 - 2 Hours
Finished journal entries for homework 3
2/4/26 - 2 Hours
Finished journal entries for 2/5 homework; started finding 2D websites that most closely resemble the work I am aiming to compare against
2/9/26 - 6 Hours
Finished journal entries for 2/10 homework; solidified software and techniques used for project, started reading related documents to get used to multi-layered environments. Finished visualizing the 3D environment and started compiling a list of objects and their relative sizes. Finished working on short presentation of my project deliverable.
2/10/26 - 4 Hours
Finished journal entries for 2/12 homework; as I did the journal self-reflection I realized I should include more in my journal entries, so I intend to explain the contents of my work in more detail here. This journal will also be treated as an intermediary notes page for me to log the progress of my projects, as well as to keep any relevant pieces of information to come back to later.
I was able to find a 2D website that does exactly what I'm looking for: https://htwins.net/scale2/#google_vignette. This seems to be the most updated version of this website, and it scales the entire universe, from the smallest measurable length, the Planck length, to the size of the observable universe. Because of the limited time, I intend to cover a smaller range of objects, from an angstrom (10^-10 m) to around the size of a galaxy. The individual environments will likely show objects within 100x of each other in relative magnitude. For example, the first environment will be overlaid onto the classroom using AR and will have objects ranging from 0.1-10 m.
For the zoom technique, I will make it so that the users themselves change in size, so the relative sizes of objects stay fixed. This might make user interaction limited.
2/16/26 - 6 Hours
I downloaded all necessary Unity components and spent a good few hours getting familiar with the tools necessary for the project. I will now detail the processes, in case I wish to write more about these steps in my wiki pages later.
The setup process was as follows:
First, I installed Unity 6 LTS with Android build support and configured the project specifically for connecting to the Meta Quest headsets. This required:
Switching the build target to Android (it previously defaulted to macOS)
Enabling OpenXR under XR Plug-in Management
Activating Meta Quest support within OpenXR
Enabling the Oculus Touch controller interaction profile
This ensures the project can compile into an APK that runs natively on the headset. I am hoping that the APK can run on any headset, as long as users have access to it.
Next, I set up the components needed so that the scene itself can be simulated and run on the headset.
I installed and configured the XR Interaction Toolkit and added:
XR Origin (VR)
XR Interaction Manager
XR Interaction Simulator (for in-editor testing)
Since the headset developer mode was unavailable, I set up the XR Interaction Simulator to emulate headset and controller behavior directly inside Unity. This allows me to develop and test movement and interaction without deploying to hardware.
Lastly, I implemented a rudimentary zoom feature that changes the scene's size itself. It uses a logarithmic scale, which is crucial because human perception of scale is closer to logarithmic than linear.
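To keep for reference, here is a minimal sketch of that idea (class and field names are placeholders, not my actual script): holding the +/- keys changes an exponent linearly, and the scene root is scaled by 10 to that exponent, so equal inputs produce equal ratios of scale change.

```csharp
using UnityEngine;

// Minimal sketch of the logarithmic zoom idea (placeholder names, not the project's actual script):
// holding +/- moves an exponent linearly, and the scene root is scaled by 10^exponent,
// so equal key-press durations produce equal *ratios* of scale change.
public class LogZoom : MonoBehaviour
{
    public Transform sceneRoot;    // parent object containing everything that should scale (assumed)
    public float zoomRate = 1.0f;  // orders of magnitude per second while a key is held

    private float exponent = 0f;   // current log10 scale factor (0 => scale of 1)

    void Update()
    {
        if (Input.GetKey(KeyCode.Equals)) exponent += zoomRate * Time.deltaTime;
        if (Input.GetKey(KeyCode.Minus))  exponent -= zoomRate * Time.deltaTime;

        sceneRoot.localScale = Vector3.one * Mathf.Pow(10f, exponent);
    }
}
```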
Thus, at this point:
The XR environment runs in simulation
The world can dynamically scale up and down
Core architecture for multi-scale visualization is in place
2/18/26 - 2 hours
Finished setup for class tomorrow, and tried a bit of the ParaView activity beforehand just to be familiar with the controls. I also finalized the full scope of the actual project I will make; while starting my visualization I realized that, because I am so unfamiliar with Unity, getting the environments set up and connected to the headsets will be the biggest issue. I have now decided to greatly shrink the range of my visualization in exchange for a cleaner project. I will also spend more time on the wiki contributions about sense of scale/perception using VR and on the survey itself, as I will put more emphasis on these parts than on the visualization now.
2/22/26- 5 hours
I started working again on the VR simulation, and started gathering content for the wiki page contribution. I have decided that the best wiki contribution will cover how VR can help our understanding of a sense of scale. I think the best part is that I have already done research on human depth/size perception, and already know that humans are good at judging sizes we can perceive in the real world, but this breaks down quickly once we get to scales we are not used to. I plan on adding adequate background information to support my reasoning for choosing this visualization as my first project. Still, the main issue is that I am unsure how I will connect this Unity simulation to the Meta Quest headsets and distribute it. If the project seems to be taking too much time, I wish to ask for a later presentation time.
2/25/26- 5 hours
I have finished making a preliminary, short survey to distribute to classmates after the simulation presentation. I wish to add the results of this survey to the wiki page. I will also start making the wiki page itself right after this journal entry. I plan on finishing up the journal after the presentation, but I will have added all adequate background material by the end of today.
Due to time constraints, I have further changed the details of my project: now, I aim to show 3-5 different panels of objects whose sizes are within 1000x of each other. For example, in one panel, I will show the size differences of the planets of the solar system. This way, I still use the VR visualization for scale perception, but I can finish the project in time. I will now create transitions between environments that don't seamlessly connect with each other, but I hope to polish up the current visualizations so relative sizes can still be perceived easily.
2/27- 7 hours
I have spent today working on the setup of the environment and getting a general sense of the visualization mechanics. As mentioned earlier, I aim to fix the "human" (the viewer) at a set location (0,0,0 in the virtual world) and zoom the other objects in and out dynamically. I initially tried a movement option using the arrow keys (which would later be translated to the left controller's joystick), but the problem was that the player moves a fixed distance per second no matter the zoom; if one zooms in too much the movement becomes too fast, and if one zooms out too much the movement becomes nonexistent. I tried creating a dynamic movement script, but this broke the perspective script, so I had to scrap it. Another problem is that I wish to add dynamic text boxes that show the size of each object, but it is hard to keep a text box near its object as the camera dynamically shifts focus based on the zoom level. To fix this, I attached a child canvas object, with a text box child on that canvas, to lock a floating text box next to each object.
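For my notes, an illustrative sketch of that floating-label setup (not the exact script): since the label is a child of its object, parenting handles position and scale, and the script only keeps the text facing the camera.

```csharp
using UnityEngine;

// Illustrative sketch of the floating label (not the project's exact script): the label object
// is a child of the thing it describes, so parenting keeps it positioned and scaled with the
// object; this component only rotates the label so the text always faces the headset camera.
public class ObjectLabel : MonoBehaviour
{
    void LateUpdate()
    {
        Camera cam = Camera.main;
        if (cam != null)
        {
            // Point the label's forward axis away from the camera so world-space UI text reads correctly.
            transform.rotation = Quaternion.LookRotation(transform.position - cam.transform.position);
        }
    }
}
```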
3/01- 6 hours
So far, the environment is done, and the current scale of the objects ranges from Mount Everest (just under 1e4 m) to an ant (5e-3 m). This is roughly a span of 10^6-10^7 times; this sounds large, but it contains a lot of objects that aren't really difficult to perceive with the human eye/mind. The main problem is the overlap of objects when the relative sizes are too similar (within 1 order of magnitude, i.e., 10x in size): since the current zoom feature dynamically moves the focus of the camera to the object that best fits the zoom amount, if two objects both satisfy this requirement they show up overlapped. For instance, the human and the elephant are recorded as 1.7 and 3.3 meters respectively and thus are overlapped whenever the zoom is between 1 and -1 (where the number corresponds to log base 10 meters). To fix this, I will have to change the zoom and camera perspective scripts. One positive is that rendering large to small objects can be done relatively quickly and cheaply; right now, my environment is "clamped" to 20 orders of magnitude, from 10^10 to 10^-10 m. Thus, theoretically I can showcase everything from an atom to a star (1 angstrom to roughly 1*10^9 m), but so far I have not been able to render all objects smoothly. I have also yet to change the keyboard controls to the Meta Quest joysticks.
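For reference, a sketch of the "focus follows zoom" logic described above (placeholder names and values, not my actual script); two entries whose log sizes are within about one unit of each other end up fighting for focus, which is exactly the overlap problem.

```csharp
using UnityEngine;

// Sketch of the "focus follows zoom" behavior: pick the object whose size in log10 meters is
// closest to the current zoom exponent and aim the camera at it. Names and values are
// assumptions for illustration.
public class ZoomFocus : MonoBehaviour
{
    [System.Serializable]
    public class SizedObject
    {
        public Transform target;
        public float logSizeMeters;  // e.g., a 1.7 m human is about 0.23 (10^0.23 ≈ 1.7)
    }

    public SizedObject[] objects;
    public Transform cameraRig;
    public float currentZoomLog;     // driven by the zoom script

    void LateUpdate()
    {
        SizedObject best = null;
        float bestDiff = float.MaxValue;

        foreach (var o in objects)
        {
            float diff = Mathf.Abs(o.logSizeMeters - currentZoomLog);
            if (diff < bestDiff) { bestDiff = diff; best = o; }
        }

        if (best != null)
            cameraRig.LookAt(best.target);  // similarly sized objects can "fight" here
    }
}
```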
3/02 - 8 hours
Largely, I worked on switching the simulation from running on the computer with keyboard controls to using the Meta Quest and its controllers. To switch away from keyboard controls, I made a separate action script based on the y-axis movement of my right-hand controller; basically, I moved the +/- keyboard inputs onto the right-hand joystick of the Meta Quest controller. I needed to turn on developer mode, but since Aarav was still the device owner, I couldn't get this to work. Thus, I performed a factory reset and was able to connect to Unity and build and run my project on my headset. After several hours of debugging, I switched the perspective to allow multiple objects to be shown side by side, finally being able to showcase the relative sizes of objects. Due to the way the canvas objects and text boxes were generated for larger objects, I had to cap the large objects at the main island of Hawaii. For the smaller objects, however, I was able to go down to the diameter of an atom, which is roughly 1 angstrom or 1*10^-10 m. Thus, my final project has a scale spanning roughly 15 orders of magnitude, from 1e5 m to 1e-10 m. I have finished my Google Form questionnaire and have attached to the course timeline my steps for downloading and accessing my simulation and all other necessary components. Below I attach the required materials and steps to download and access my project:
Henry's in-class activity:
Please download/have access to the following:
Steps to utilize the VR simulation:
1. Please make sure developer mode is turned on: in your phone's Meta Horizon app, go to Devices -> your device -> Headset settings -> Developer mode -> turn on
2. While connected via USB-C to your computer, your headset will ask to allow USB debugging; check "Always allow," then tap OK
3. Download the Meta Quest Developer Hub on your computer and log in to the Meta account that is linked to your Meta Quest 3
4. Drag and drop the APK file onto the Meta Quest Developer Hub app, under "Connected Device: Meta Quest 3"
5. ScaleVR should now be accessible on the headset under applications
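Returning to the controller mapping from the 3/02 entry above, a minimal sketch of reading the right thumbstick's vertical axis through UnityEngine.XR and feeding it into the logarithmic zoom (names and clamp values are assumptions, not my actual action script):

```csharp
using UnityEngine;
using UnityEngine.XR;

// Sketch of mapping the right thumbstick's vertical axis onto the logarithmic zoom,
// replacing the +/- keyboard controls (names and clamp values are assumptions).
public class ThumbstickZoom : MonoBehaviour
{
    public Transform sceneRoot;       // parent of all scaled objects (assumed)
    public float zoomRate = 1.0f;     // orders of magnitude per second at full stick deflection

    private float exponent = 0f;      // current log10 scale factor

    void Update()
    {
        InputDevice rightHand = InputDevices.GetDeviceAtXRNode(XRNode.RightHand);

        if (rightHand.isValid &&
            rightHand.TryGetFeatureValue(CommonUsages.primary2DAxis, out Vector2 stick))
        {
            exponent += stick.y * zoomRate * Time.deltaTime;
            exponent = Mathf.Clamp(exponent, -10f, 10f);   // the 20-orders-of-magnitude clamp noted on 3/01
            sceneRoot.localScale = Vector3.one * Mathf.Pow(10f, exponent);
        }
    }
}
```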
3/9/26- 2 hours
I was sick with a fever for most of last week from the 3rd onwards, so I missed class on the 5th. As such, I missed several students' presentations. I therefore installed their APKs separately and tried their VR simulations as instructed, so I have a sense of their projects before the end-of-project presentations. I then finished setting up for the in-class activities on the 10th. Since I have schedule conflicts this upcoming Friday and Saturday, I requested to work on the AVP activity earlier this week.
3/10/26- 7 hours
Today I compiled the analysis of the survey for my in-class activity. Here is a google sheet that has the responses I received for my in class activity: https://docs.google.com/spreadsheets/d/1NQHgYQTUKXd4UbmF6okiOeyD4zuKJe87J7KIcCmqozA/edit?usp=sharing
I have finished making a wiki section on my findings and on more general uses of VR in scale perception, and linked it in my wiki contributions section. The wiki page I made can be seen at https://www.vrwiki.cs.brown.edu/applications-of-vr/vr-in-visualizing-scale.
There, I added several graphs that highlight the effectiveness of my VR visualization in comparison to the 2D website. I also populated the wiki page with other projects that have aimed to do similar visualizations of objects across many orders of magnitude of scale; however, these projects are paid-only, which was a large motivation behind my project.
I have also created an end-of-project presentation to present in class, where I have attached pictures of the simulation itself, the survey results, and the wiki page. Overall, I will reiterate the findings I attached onto my wiki page.
3/11/26- 5 hours
Today I worked on Meng's and Ellie's projects, as I was not present in class the day of their presentation.
I also finished up the end-of-project presentation; I expanded on the findings from my wiki page, and there seems to be a noticeable difference in both immersion and clarity in conveying a sense of scale. Also, for my project 2, I have thought of a few ideas:
JND (Just Noticeable Difference) in 2D vs VR
Test whether users can detect small differences in visual properties better in VR or 2D
Focus on 3 categories: size, shade, and quantity (number of dots)
Users are shown two options and must choose which one is larger/darker/more dense
Differences between options gradually decrease → approaching perceptual threshold
Measure when users start getting answers wrong / become unsure (approximate JND)
Project Idea 2: “Spot the Difference” (2D vs VR scenes)
Create two similar scenes with small differences:
Object position
Size
Color/texture
Occlusion/depth
Users identify what is different between scenes
Compare:
2D (fixed viewpoint images)
VR (can move around and explore)
Measure:
Accuracy
Time taken
User confidence
3/14/26 – 4 hours
Today I worked on setting up my computer for the Apple Vision Pro assignment and completing the activity itself.
I also finalized my direction for Project 2.
Decided to pursue the JND (Just Noticeable Difference) project
Will focus on comparing perception between 2D and VR environments
Plan to test differences in:
Size
Shade
Quantity
Chose this over the “spot the difference” idea due to:
Simpler implementation
More controlled experiment design
Easier analysis of results
Next steps will be to start designing the 2D tasks and thinking about how to structure the VR version efficiently.
3/18/26- 4 hours
Today I worked on and submitted my Project 2 proposal.
Formalized project idea around VR vs 2D visual comparison tasks
Framed project as a “spot the difference” experiment
Designed comparison between:
VR environments (Unity-based)
2D static images
Categorized differences into:
Visual (color, texture)
Spatial (position, depth, occlusion)
Planned evaluation metrics:
Accuracy
Task completion time
Error rates
User feedback
I also outlined a structured timeline for development:
Early setup of Unity VR environment
Building paired VR and 2D scenes
Running user survey and collecting results
Finalizing wiki contribution and presentation
Overall, this proposal builds on my earlier interest in perception, but shifts focus from JND to a more scene-based comparison task, which may better highlight VR’s advantages in spatial understanding.
Spring Break Week- 3/20 to 3/28- 3 hours
Over the past week and a bit, not much progress was made due to spring break. Instead of starting the Unity implementation, I read some articles on JND to better ground the project concept.
Reviewed basic ideas behind Weber’s Law (relationship between stimulus change and perception)
Looked into Fechner’s work on psychophysics and how perceptual thresholds are measured
Goal is to better understand how to design tasks where differences approach perceptual limits
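For my own reference, the standard statement of Weber's Law is ΔI / I = k, where I is the baseline stimulus intensity (e.g., circle radius, grayscale value, or dot count), ΔI is the smallest detectable change (the JND), and k is the Weber fraction for that stimulus dimension. The practical takeaway for my task design is that the JND grows with the baseline, so the differences I test should probably be defined as percentages of the reference stimulus rather than fixed amounts.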
Next step is to start designing actual stimuli (2D first) and thinking through difficulty scaling.
3/30/26 - 4 hours
Today I started refining the actual structure of my JND project, focusing more on the specific design of the tasks and how I will evaluate them.
Began thinking through specific 2D image designs for JND tasks
Size comparisons (circles of slightly different radii)
Shade comparisons (grayscale differences)
Quantity comparisons (dot density / number of dots)
Decided to structure tasks as:
Pairwise comparisons (left vs right)
Increasing difficulty (smaller differences over time)
Goal is to approximate perceptual threshold (JND) through accuracy drop-off
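As a sketch of how this difficulty scaling could be generated (the numbers below are placeholders, not the values I will actually use), each category can start at an easy relative difference and shrink it geometrically across trials so later comparisons approach the threshold:

```csharp
using System;

// Sketch: generate per-trial relative differences that decrease geometrically,
// so later trials approach the just noticeable difference. Values are illustrative only.
public static class DeltaSchedule
{
    public static float[] Generate(float startDelta, float ratio, int trials)
    {
        var deltas = new float[trials];
        for (int i = 0; i < trials; i++)
        {
            deltas[i] = startDelta * (float)Math.Pow(ratio, i);  // geometric decrease per trial
        }
        return deltas;
    }
}

// Example: 10 size trials starting at a 20% difference, shrinking by 0.7x each trial,
// ending near a 0.8% difference.
// float[] sizeDeltas = DeltaSchedule.Generate(0.20f, 0.7f, 10);
```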
I also read through the Seven Scenarios paper and used it to better frame my evaluation approach.
Focused on:
VDAR (Visual Data Analysis and Reasoning)
UP (User Performance)
Used these to formalize my research question:
Does VR help users detect subtle visual differences more accurately, efficiently, and confidently than 2D?
4/2/26- 4 hours
Today I worked from home due to a sore throat and missing class, so I focused on completing the assigned readings and continuing to think through my JND project design. I finished my Bloom's taxonomy work, and saw most of my contributions would be under the "apply" category.
I also started outlining different types of JND tasks I could include in my project.
Size differences
Circles or objects with slightly different radii
Color / shade differences
Grayscale or color gradient comparisons
Quantity / density
Number of dots or objects in a region
Shape differences
Slight variations in geometry (circle vs ellipse, distortion)
Angle / orientation
Rotated lines or objects with small angular differences
Spacing / position
Distance between objects or alignment differences
At this stage, I am thinking about which of these are:
Easy to implement in both 2D and VR
Easy for users to understand quickly
Suitable for gradually increasing difficulty (approaching JND)
I think size, shade, and quantity could be the easiest to standardize across both formats, so I'm leaning towards these for now.
4/3/26 – 3 hours
Today I started actually building out the 2D stimuli for the JND portion of the project. Up until now, most of the work had been conceptual, so this was the first time I tried translating the idea into something concrete. I began with simple size comparisons using circles, trying to keep everything visually clean and consistent so that the only difference between the two options would be the size itself. The easiest format is making the images in Google Drawings and keeping the circles labeled. I am currently also checking how many tests should be done per category, and how large or small the deltas (the differences) should be. I will probably finish this project (at least the 2D side) by putting all the drawings into a Google Slides presentation for the 2D part.
At first, I made the differences relatively large just to test if the format worked. This part was straightforward, but I quickly realized that the real challenge would not be generating the images, but figuring out how to control the difficulty in a meaningful way.
4/5/26 – 4 hours
I expanded the stimuli to include all three categories I had been considering: size, shade, and quantity. Structurally, everything followed a similar pattern—pairwise comparisons with two options side by side. I wanted to keep the design as standardized as possible so that any differences in performance would come from perception rather than layout or formatting.
However, after going through a few of the questions myself, it became clear that the tasks were far too easy. Even the later questions were immediately obvious, which meant I was not actually getting close to the "just noticeable difference" range. I plan on testing my project (at least the current images) to see if the tests are too easy.
4/7/26 – 3 hours
I had a friend go through the 2D questions to get an external perspective, and this confirmed my suspicion. Most of the questions were answered almost instantly, with high confidence, and there was little hesitation even in the later trials. This meant that my current setup was not actually testing perceptual limits at all.
Based on this, I went back and started reducing the differences significantly: making the circle sizes much closer, tightening the grayscale differences, and bringing the dot counts closer together. Initially, the first half of the images were really easy and gradually got harder, but I decided to make even the first half slightly harder to more accurately gauge where the 50% threshold stands. Below, I added some images of the size and shade JNDs I have made so far.
4/13/26 – 7 hours
I began transitioning into the VR portion of the project. I set up a basic Unity environment and started placing objects for the same types of comparisons I had built in 2D. The initial idea was to create something like a simple exhibit space, where users could look at two objects side by side.
A major issue I ran into today was getting movement and rotation to work properly in the VR environment.
Initially tried setting up movement using:
XR Origin (VR)
Continuous Move / Turn Providers
Followed standard setup guides, but:
No response from joystick input
Could look around, but could not move or rotate
I spent a significant amount of time trying different fixes:
Re-adding XR Origin and locomotion components
Adjusting input bindings manually
Checking action maps and input systems
Trying different prefabs and configurations
At first, I thought the issue was with Unity setup or incorrect component wiring, since everything looked structurally correct.
However, the problem ended up being much simpler: Unity was set up correctly, but the system was not detecting the Meta Quest controllers at all. This was due to not properly specifying/enabling the controller input in the system settings. Once this was fixed, the joystick input started registering correctly, and movement/rotation began working as expected.
4/15/26 and 4/17/26 – 9 hours
I continued working on the VR portion of the project, focusing more on how users would actually interact with the environment rather than just placing objects. Initially, I tried to implement a more complete interaction system so that users could actively navigate and make selections within the scene, including:
Better movement system using XR Origin with locomotion system
Continuous Move Provider (joystick-based translation)
Continuous Turn Provider (rotation using joystick)
Looked into adding ray interactor / direct interactor
Goal was to allow users to “select” which object they think is larger/darker
Considered adding UI elements:
Buttons or panels in world space for user input
Event system to log responses directly in Unity
However, this quickly became more complex than expected. I chose to implement only the improved movement system and, instead of building a full interaction system, shifted toward a simpler approach:
Focus on passive visual comparison
Users look at two objects side-by-side
Minimize required input:
No complex selection mechanics inside VR
Responses collected externally (e.g., survey)
Keep movement basic:
Only needed for repositioning/viewing angle
Around this time, I also considered adding AR/passthrough elements to the project. The idea was to place JND tasks within the real-world environment, and this could make the experience feel more natural and grounded. However, after thinking through the implications, I decided against it due to several reasons:
Real-world lighting conditions are highly variable
Shadows and brightness would directly affect shade perception tasks
This would introduce uncontrolled variables into the experiment
Would make results harder to interpret, especially for grayscale comparisons
Given that shade is one of the core variables being tested, this would undermine the validity of that portion of the experiment. Thus, I decided to keep everything within a fully controlled VR environment, even if that meant sacrificing some realism.
4/19/26 – 4 hours
I worked on refining the VR visuals, particularly lighting and object placement. One thing I noticed was that even with controlled lighting, VR introduces some level of visual noise: slight blur, depth perception differences, and general headset limitations. For the shade JND, I already had to adjust the shades to make the differences more apparent, as the shading became muted when the visualization moved to VR.
This made me start thinking more seriously about one of my core questions: whether VR actually helps or hurts performance for very small visual differences. My intuition at this point was that VR might actually make these fine-grained comparisons harder, even if it helps with more spatial tasks.
4/21/26 – 5 hours
Today I focused on setting up the survey component. I created a Google Form to collect responses from participants for both the 2D and VR portions of the project. The form was structured to mirror the experiment itself, where participants would go through a series of comparison tasks and select which option they believed was larger/darker/more numerous.
In addition to the objective questions, I also included short response questions asking:
Which format felt easier to use
What aspects of the visualization helped or hindered their decisions
I wanted to capture not just accuracy, but also how users felt about their decisions, since confidence is an important part of perception, especially near the JND threshold. One challenge here was making sure the survey aligned closely with the VR tasks so that comparisons between 2D and VR would be meaningful. I tried to keep the structure and ordering as consistent as possible across both formats.
4/23/26 – 4 hours
Today I worked on refining the VR environments and finishing up the remaining rooms for each type of comparison. One part that was initially more tedious than expected was generating the dot arrays for the quantity (density) JND tasks. Manually placing dots was not practical, especially when trying to create very small differences between counts.
To address this, I implemented a more automated approach:
Generated dots programmatically within defined regions
Controlled:
Total number of dots
Distribution within the space
Ensured randomness while still keeping counts precise
This made it much easier to create multiple trials with slightly different quantities, and also ensured consistency across different rooms. I also spent time finishing the layout of the VR “rooms,” making sure each comparison setup was clean and easy to interpret visually. The goal was to minimize distractions and keep the focus entirely on the differences being tested.
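For my notes, a minimal sketch of that dot-generation approach (field names and the prefab are assumptions for illustration, not my exact script): spawn an exact number of dot prefabs at random positions inside a defined region, so paired panels can differ by a precise count while the layout stays random.

```csharp
using UnityEngine;

// Sketch: programmatically spawn an exact number of "dots" at random positions within a
// rectangular panel. The prefab and field names are placeholders, not the project's script.
public class DotField : MonoBehaviour
{
    public GameObject dotPrefab;                        // small sphere or quad used as a dot
    public int dotCount = 50;                           // exact count for this panel
    public Vector2 regionSize = new Vector2(1f, 1f);    // width/height of the panel in local units

    void Start()
    {
        for (int i = 0; i < dotCount; i++)
        {
            Vector3 localPos = new Vector3(
                Random.Range(-regionSize.x, regionSize.x) * 0.5f,
                Random.Range(-regionSize.y, regionSize.y) * 0.5f,
                0f);

            // Parent each dot under this panel so the whole field moves/scales together.
            Instantiate(dotPrefab, transform.TransformPoint(localPos), Quaternion.identity, transform);
        }
    }
}
```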
4/26/26 – 3 hours
Today I finalized the survey and overall experiment setup in preparation for the in-class activity, finished publishing the project, and prepared the Unity visualization for deployment through the Meta Quest Developer Hub. Next steps will be to analyze the survey results and make more wiki contributions.
4/28/26 - 5 hours
Today I compiled the results from my JND project. Mainly, I looked at the students' accuracy, as well as their comments, to see if there are differences in visualization quality and accuracy between formats. Below I added the three graphs showing the accuracy for the three different JND experiments I ran. We can see an almost consistently higher accuracy across all three experiments, with participants doing better in 2D than in VR. There were several issues regarding the lighting and resolution of the Unity APK, as well as resolution issues with the headsets themselves. More information will be included in the wiki document here:
5/03/26 - 5 hours
Today I finished making the poster for the final presentation. I have also made a quick slide for the flash talk. I decided to focus on the purpose of my experiment, the methodology, some examples, and the results for my final poster.