1 | 4 | Goal 1: Articulate AR/VR visualization software tool goals, requirements, and capabilities.
2 | 5 | Goal 2: Construct meaningful evaluation strategies for software libraries, frameworks, and applications; strategies include surveys, interviews, comparative use, case studies, and web research.
2 | 5 | Goal 3: Execute tool evaluation strategies.
1 | 4 | Goal 4: Build visualization software packages.
2 | 5 | Goal 5: Comparatively analyze software tools based on evaluation.
2 | 5 | Goal 6: Be familiar with a number of AR/VR software tools and hardware.
3 | 5 | Goal 7: Think critically about software.
3 | 5 | Goal 8: Communicate ideas more clearly.
Project 1 Proposal: Project 1
Presentation for Project 1 Proposal: Project 1 Description Slides
End Presentation for Project 1: Project 1 Presentation Slides
Project 2 Proposal <ADD LINK>
Presentation for Project 2 Proposal <ADD LINK>
Poster <ADD LINK>
In-class Activity <ADD LINK>
Public Demo <ADD LINK>
Homework 1 Assignment
10 minute changes
Add links to the papers on the VR in Psychology page
The "Remote stereotactic visualization for image-guided surgery: technical innovation" link does not work in the VR in medicine page
Add "The Body VR" and a description to the "Example VR/AR Educational Software" page and link the existing page. Also add how to download and run, hardware requirements, and metrics for "The Body VR"
1 hour changes
Add links to the papers on the "History of VR" page.
Create a more detailed WebXR Development Guide, replacing the existing page content
Expand on the "Arduino + Unity" page, including how to download any software, connect the two, any hardware requirements, etc. Right now it is a collection of links and resources and is not flushed out. Could also include a section with existing research papers regarding arduino and unity being used together.
10 hour changes
Add prerequisite tags to pages that may need prior knowledge or software to understand, and link these pages
Update the "Large Displays, Labs, and Papers" section with links, more papers and groups, and more large displays along with information/reviews of all of them.
Expand on the "VR Visualization Software" comparison charts (Features and Time Taken to Complete Tasks) by adding all of the softwares as columns and filling in their columns with information based on features. Moreover, some of the softwares don't have individual pages, such as "Amira" and "ParaView", so these would need to be added.
CONTRIBUTION 1 [short description] <VR in Psychology> Added links to the papers on the VR in Psychology page
CONTRIBUTION 2 [short description] <ML Embeddings Visualizations in VR/AR> Added 6 pages regarding ML Embeddings Visualizations in VR/AR under the VR Visualization Software section.
.....
CONTRIBUTION N [short description] <ADD LINK>
Project Overview:
For my final project, I will focus on visualizing word embeddings in passthrough augmented reality. Word embeddings are high-dimensional vector representations learned by machine learning models that encode semantic relationships between words. These embeddings are typically visualized using 2D dimensionality-reduction plots (e.g., PCA, UMAP), which often obscure spatial structure and neighborhood relationships.
This project investigates whether passthrough AR, which embeds data directly into physical space, improves users’ understanding of semantic structure compared to traditional 2D plots. By allowing users to walk through and physically reference embedding space, the project explores how metric grounding in the real world affects interpretation, comparison, and sensemaking of high-dimensional language data.
The project explicitly compares 2D visualization vs passthrough AR, focusing on tasks such as cluster identification and similarity judgment.
Data and Software:
Data Type
High-dimensional word embeddings (e.g., 300–768 dimensions)
Scientific data derived from trained NLP models
Hundreds to thousands of labeled word vectors
Datasets
Public word or sentence datasets
Pretrained embeddings (e.g., GloVe or transformer-based embeddings)
Software
PyTorch – embedding generation or loading pretrained embeddings
NumPy / PCA / UMAP – dimensionality reduction
Unity – visualization environment
Meta XR SDK – passthrough AR
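A minimal sketch of this pipeline, assuming GloVe's glove.6B.300d.txt file, scikit-learn's PCA, and an illustrative word list and JSON schema (the Unity loader's exact format is not fixed here):

    # Load GloVe vectors for a word list, project to 3D with PCA, export JSON for Unity.
    # File names, the word list, and the JSON schema are illustrative assumptions.
    import json
    import numpy as np
    from sklearn.decomposition import PCA

    WORDS = {"joy": "emotion", "anger": "emotion", "nurse": "profession", "engineer": "profession"}

    def load_glove(path, vocab):
        """Parse the GloVe text format: one 'word v1 v2 ... v300' line per word."""
        vectors = {}
        with open(path, encoding="utf-8") as f:
            for line in f:
                parts = line.rstrip().split(" ")
                if parts[0] in vocab:
                    vectors[parts[0]] = np.array(parts[1:], dtype=np.float32)
        return vectors

    vecs = load_glove("glove.6B.300d.txt", set(WORDS))     # 300-D vectors
    words = sorted(vecs)
    X = np.stack([vecs[w] for w in words])                 # (n_words, 300)
    coords = PCA(n_components=3).fit_transform(X)          # 300-D -> 3-D

    points = [{"word": w, "category": WORDS[w],
               "x": float(x), "y": float(y), "z": float(z)}
              for w, (x, y, z) in zip(words, coords)]
    with open("embeddings.json", "w") as f:
        json.dump({"points": points}, f, indent=2)

Running this produces embeddings.json, which a Unity-side loader could parse into labeled, category-tagged point positions.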
Project Milestones:
02/10
Finalize word dataset (e.g., emotions, professions, moral concepts): Complete
glove.6B.300d.
400K vocabulary, pre-trained, publicly available, moderate download size; 300 dimensions is a standard embedding size
Using GloVe for word embeddings.
Decide evaluation tasks (cluster identification, similarity judgment): Complete
Cluster Identification, Similarity judgment, Spatial Reasoning, Category Membership, Word Identification
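For the similarity-judgment task, cosine similarity over the raw 300-D vectors gives a ground-truth ordering to score answers against. A small sketch, assuming the vecs dictionary from the loader sketch above:

    import numpy as np

    def cosine(a, b):
        """Cosine similarity between two word vectors."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # e.g., ground truth for "which pair is more similar?" questions:
    # cosine(vecs["nurse"], vecs["doctor"]) vs. cosine(vecs["nurse"], vecs["envy"])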
02/12
Generate or load word embeddings using PyTorch
Create baseline 2D PCA / UMAP plots
Set up Unity project with passthrough AR support
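The baseline 2D condition can be generated from the same vectors. A sketch assuming the full 64-word X matrix and word list from the pipeline sketch above, with matplotlib and the umap-learn package installed:

    # Baseline 2D plots for the desktop condition: PCA and UMAP side by side.
    # Assumes X (64 words x 300 dims) and `words` from the embedding pipeline sketch.
    import matplotlib.pyplot as plt
    import umap
    from sklearn.decomposition import PCA

    layouts = {
        "PCA": PCA(n_components=2).fit_transform(X),
        "UMAP": umap.UMAP(n_components=2, random_state=42).fit_transform(X),
    }

    fig, axes = plt.subplots(1, 2, figsize=(12, 5))
    for ax, (name, xy) in zip(axes, layouts.items()):
        ax.scatter(xy[:, 0], xy[:, 1], s=20)
        for (x, y), w in zip(xy, words):
            ax.annotate(w, (x, y), fontsize=7)   # label each point with its word
        ax.set_title(name)
    fig.savefig("baseline_2d.png", dpi=200)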
02/19
Import projected 3D embedding coordinates into Unity
Render embeddings as labeled point clouds
Implement basic interaction (selection, highlighting, proximity)
Add color-coding by semantic category
Begin documentation of the Unity pipeline
02/24
Integrate passthrough AR
Anchor embeddings in physical space
Adjust scale, occlusion, and navigation for real-world interaction
02/26
Enable collaborative viewing of the same physical embedding
Allow shared discussion and annotation of clusters
Compare usability and interpretation between 2D and AR
03/03
Run pilot evaluation:
2D plots vs passthrough AR
Measure task accuracy, confidence, and qualitative feedback
03/05
Analyze evaluation results
Finalize wiki pages and comparison tables
Record short demo videos (2D and AR)
Polish documentation and best-practices writeup
In-Class Activity:
Students will explore the same word embedding dataset in two formats:
Traditional 2D scatter plot
Passthrough AR
Students will:
Identify semantic clusters
Judge similarity between selected words
Discuss perceived structure, scale, and relationships
A short Google Form will collect:
Task accuracy
Confidence
Perceived usefulness of each medium
This activity serves as an evaluative comparison of visualization methods.
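A sketch of how the Google Form responses could be summarized per condition, assuming the form is exported to CSV with illustrative column names (condition, task_correct, confidence):

    # Compare task accuracy and confidence between the 2D and passthrough-AR conditions.
    import pandas as pd

    df = pd.read_csv("form_responses.csv")
    summary = df.groupby("condition").agg(
        accuracy=("task_correct", "mean"),       # fraction of correct answers
        mean_confidence=("confidence", "mean"),  # 1-5 self-rated confidence
        n=("task_correct", "size"),
    )
    print(summary)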
Deliverables:
Placement: VR Visualization Software → Scientific Visualization → ML Embeddings in AR
Tutorial: Visualizing Word Embeddings in Passthrough AR Using Unity
Comparison Table: 2D vs Passthrough AR for understanding semantic embedding geometry
Conceptual Page: When Physical Grounding Helps (and When It Doesn’t) for Language Embeddings
Project Overview:
For my final project, I will focus on visualizing computational fluid dynamics (CFD) simulation data in passthrough augmented reality. CFD simulations generate complex 3D vector and scalar fields representing fluid flow, pressure distributions, and velocity patterns. These flow fields are typically visualized using desktop software like ParaView through 2D slice views or static 3D renderings, which often obscure the true spatial structure of flow phenomena like vortices, separation zones, and pressure gradients.
This project investigates whether passthrough AR, which embeds flow visualizations directly into physical space, improves users' understanding of 3D flow patterns compared to traditional ParaView desktop visualization. By allowing users to physically walk around flow structures and interact with probe-based streamline seeding, the project explores how embodied interaction in real space affects interpretation and spatial reasoning of complex fluid dynamics.
The project explicitly compares ParaView desktop visualization vs passthrough AR, focusing on tasks such as identifying flow separation, locating stagnation points, and predicting vortex structures.
Data and Software:
Data Type
3D vector fields (velocity: u, v, w components)
3D scalar fields (pressure, vorticity magnitude, temperature)
Scientific simulation data from computational fluid dynamics
Mesh-based or structured grid data with millions of cells
Datasets
OpenFOAM tutorial cases (cylinder flow, airfoil aerodynamics)
Public CFD benchmark datasets (lid-driven cavity, backward-facing step)
Moderate mesh resolution (100K–1M cells for AR performance)
Software
OpenFOAM – CFD simulation engine (or use pre-computed results)
ParaView – CFD post-processing and export pipeline
VTK – data format conversion
Unity – visualization environment
Meta XR SDK – passthrough AR
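One way the ParaView/VTK export step could look in Python, assuming a cylinder-flow case saved as cylinder_flow.vtu with velocity stored in a point array named "U" (the usual OpenFOAM convention); seed placement and file names are illustrative:

    # Trace streamlines around the cylinder and write them to a .vtp file that can
    # later be converted to meshes for Unity.
    import vtk

    reader = vtk.vtkXMLUnstructuredGridReader()
    reader.SetFileName("cylinder_flow.vtu")
    reader.Update()
    data = reader.GetOutput()
    data.GetPointData().SetActiveVectors("U")    # use velocity as the vector field

    seeds = vtk.vtkPointSource()                 # seed points upstream of the cylinder
    seeds.SetCenter(-0.05, 0.0, 0.005)
    seeds.SetRadius(0.02)
    seeds.SetNumberOfPoints(50)

    tracer = vtk.vtkStreamTracer()
    tracer.SetInputData(data)
    tracer.SetSourceConnection(seeds.GetOutputPort())
    tracer.SetIntegratorTypeToRungeKutta45()
    tracer.SetMaximumPropagation(1.0)
    tracer.SetIntegrationDirectionToBoth()
    tracer.Update()

    writer = vtk.vtkXMLPolyDataWriter()
    writer.SetFileName("streamlines.vtp")
    writer.SetInputConnection(tracer.GetOutputPort())
    writer.Write()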
Project Milestones:
3/31
Select and download OpenFOAM tutorial dataset (cylinder flow at Re=100)
Complete ParaView visualization pipeline (streamlines, pressure contours, velocity glyphs)
Export test geometry and vector field data from ParaView
Deliverable: Working ParaView session with 2 flow visualization techniques
4/02
Set up Unity project with Meta XR SDK and passthrough AR support
Import CFD mesh geometry into Unity (basic boundary visualization)
Implement camera passthrough and spatial anchoring
Deliverable: Demo video showing CFD geometry overlaid in real space
4/07
Implement streamline rendering system in Unity (static streamlines from ParaView export)
Add color-coding by velocity magnitude
Optimize mesh LOD for AR performance
Deliverable: Demo showing colored streamlines visible in passthrough AR
4/09
Build interactive probe system (hand tracking or controller-based)
Implement real-time streamline seeding from probe position
Add streamline integration using exported velocity field data
Deliverable: Demo video showing probe movement → streamline updates
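The probe-driven seeding itself would live in Unity, but the underlying streamline integration is easy to sketch: sample the exported velocity field and step forward with RK4 from the probe position. A NumPy/SciPy illustration, assuming u, v, w are (nx, ny, nz) arrays on a regular grid with axes x, y, z:

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    def make_sampler(x, y, z, u, v, w):
        """Return a function that samples the velocity vector at a 3D point."""
        interp = [RegularGridInterpolator((x, y, z), c, bounds_error=False,
                                          fill_value=0.0) for c in (u, v, w)]
        def sample(p):
            pt = np.atleast_2d(p)                      # shape (1, 3) for the interpolators
            return np.array([f(pt)[0] for f in interp])
        return sample

    def trace_streamline(sample, seed, dt=1e-3, steps=500):
        """Classic RK4 integration of dp/dt = velocity(p), starting at the probe seed."""
        pts = [np.asarray(seed, dtype=float)]
        for _ in range(steps):
            p = pts[-1]
            k1 = sample(p)
            k2 = sample(p + 0.5 * dt * k1)
            k3 = sample(p + 0.5 * dt * k2)
            k4 = sample(p + dt * k3)
            p_next = p + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
            if np.allclose(p_next, p):                 # stop if the flow has stagnated
                break
            pts.append(p_next)
        return np.array(pts)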
4/14
Implement scalar field overlay system (pressure/velocity magnitude)
Add toggle UI for switching between flow variables
Implement transparency and color map controls
Deliverable: Demo showing pressure field overlay + streamlines simultaneously
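For the scalar overlay, one option is to bake normalized pressure values through a colormap offline and ship per-vertex colors to Unity. A sketch, assuming a pressure_at_vertices.npy array aligned with the exported mesh (file names and the offline-baking approach are assumptions):

    # Normalize pressure and map it through viridis so Unity can apply vertex colors.
    import json
    import numpy as np
    from matplotlib import colormaps

    pressure = np.load("pressure_at_vertices.npy")     # one scalar per mesh vertex
    norm = (pressure - pressure.min()) / (np.ptp(pressure) + 1e-12)
    rgba = colormaps["viridis"](norm)                  # (n_vertices, 4) in [0, 1]

    with open("pressure_colors.json", "w") as f:
        json.dump({"colors": rgba[:, :3].round(4).tolist()}, f)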
4/16
Polish all core AR interactions (probe placement, streamline seeding, scalar field toggling)
Optimize performance for smooth demo experience
Create comparison baseline: export equivalent ParaView visualizations
Deliverable: Feature-complete AR application ready for evaluation
4/21
Run pilot evaluation with 2-3 participants
Test both ParaView desktop and AR conditions
Identify usability issues and interaction bottlenecks
Deliverable: Pilot study feedback and identified improvements
4/23
Implement refinements based on pilot feedback (interaction improvements, clearer visual encoding)
Prepare in-class activity materials (2D ParaView screenshots, AR setup instructions)
Create evaluation task worksheet (stagnation point identification, flow separation prediction)
Deliverable: Polished AR application and complete activity protocol ready
4/28
In-class activity execution (see detailed description below)
Students complete flow pattern identification tasks in both conditions
Collect accuracy, confidence, and qualitative feedback data
Deliverable: Raw evaluation data from all participants
4/30
Analyze in-class activity results and complete comparison analysis
Finalize all wiki documentation (tutorial, comparison table, best practices)
Create tutorial: "OpenFOAM → ParaView → Unity AR Pipeline"
Record final demo videos and polish screenshot galleries
Deliverable: Complete wiki package with evaluation results ready for publication
In-Class Activity:
Students will explore the same CFD flow field (flow over cylinder) in two formats:
ParaView desktop – traditional 2D slice views and 3D static renderings
Passthrough AR – embodied 3D flow field with interactive probe
Tasks:
Identify stagnation points (where flow velocity = 0)
Mark the location where flow separation begins
Predict the path of a particle released from a specific point
Estimate the relative pressure difference between two locations
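For the stagnation-point task, a simple ground-truth check is to flag grid points where the velocity magnitude is near zero. A sketch, assuming the same u, v, w arrays as the streamline-integration sketch above:

    import numpy as np

    speed = np.sqrt(u**2 + v**2 + w**2)
    threshold = 0.02 * speed.max()                 # "near zero" relative to peak speed
    candidates = np.argwhere(speed < threshold)    # grid indices of stagnation candidates
    print(f"{len(candidates)} candidate stagnation points")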
Evaluation Format:
Students work individually, completing the same tasks in both conditions (counterbalanced order)
A Google Form collects:
Task accuracy (correct location identification)
Confidence ratings (1-5 scale)
Time to task completion
Perceived difficulty
Qualitative feedback on spatial understanding
Goal: Evaluate whether AR improves spatial reasoning for 3D flow structures compared to 2D desktop visualization. This activity serves as the primary evaluative assessment, conducted after pilot testing and refinements.
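A small sketch of how the counterbalanced order could be assigned, with an illustrative participant list:

    # Alternate which condition each participant sees first so order effects
    # do not favor either ParaView or AR.
    orders = [("ParaView desktop", "Passthrough AR"), ("Passthrough AR", "ParaView desktop")]
    participants = [f"P{i:02d}" for i in range(1, 13)]   # placeholder participant IDs
    assignment = {p: orders[i % 2] for i, p in enumerate(participants)}
    for p, order in assignment.items():
        print(p, "->", " then ".join(order))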
Deliverables:
Placement: VR Visualization Software → ParaView → CFD in AR
Tutorial: "OpenFOAM to Passthrough AR Pipeline" – Step-by-step guide from CFD simulation to AR visualization
Comparison Table: ParaView vs Unity AR for CFD visualization techniques (streamlines, scalar fields, interaction paradigms)
Evaluation Report: Spatial reasoning task results (flow separation, vortex identification) comparing desktop vs AR
Best Practices Guide: Interactive streamline design in AR (performance optimization, visual encoding, probe interaction)
Demo Videos: Probe-based exploration, pressure field overlays, multi-variable toggling
Conceptual Page: When Embodied Flow Visualization Adds Value (and When Desktop ParaView Suffices)
Project Self-Evaluation
The proposed project clearly identifies deliverable additions to our VR Software Wiki → Strongly agree
Six specific deliverables listed with explicit wiki location
The proposed project involves passthrough or "augmented" in VR → Strongly agree
Core focus on passthrough AR with physical space anchoring and embodied interaction
The proposed project involves large scientific data visualization and identifies the specific data type and software → Strongly agree
OpenFOAM CFD datasets (vector/scalar fields), ParaView processing, VTK export explicitly stated
The proposed project has a realistic schedule with explicit and measurable milestones → Strongly agree
10 milestones across all required dates (3/31–4/30), each with a concrete deliverable
The proposed project explicitly evaluates VR software, preferably in comparison to related software → Strongly agree
Direct ParaView desktop vs Unity AR comparison with task-based evaluation
The proposed project includes an in-class activity → Strongly agree
In-class evaluation on 4/28 with clear tasks, measures, and data collection protocol (piloted on 4/21)
The proposed project has resources available with sufficient documentation → Strongly agree
OpenFOAM tutorials, ParaView docs, Unity XR SDK, VTK pipeline all well-documented
Total: 112 hours
1/26/26 - 3 Hours
Set up individual journal page and linked it into top-level journal page
Joined the course slack and introduced myself
Reviewed the course homepage, timeline, and homeworks. Looked through the Project Ideas and Scientific Data pages. Read through the Google Sites Woes page.
Brainstormed Project Ideas and interests.
1/27/26 - 4 Hours
Explored wiki.
Added 10 minute, 1 hour, and 10 hour changes.
Completed Contribution 1.
Read Kenny Gruchalla's bio and website and added a question (and an extra one) for him to the "board gdoc"
Pondered project ideas
1/29/26 - 4 Hours
Came up with potential project ideas:
A neuroadaptive VR scientific visualization interface that uses real-time EEG signals to tailor visual presentation based on user cognitive engagement.-->Unity, ParaView or VTK, OpenBCI, WebSockets
A VR system that integrates GAN-based data augmentation to help users interpret incomplete scientific volumes by comparing real vs synthetically completed data in immersive space.-->Unity or Unreal Engine 5, ParaView/VTK
An immersive VR visualization with spatial ML annotations that highlights key patterns and clusters in complex 3D scientific data to aid interpretation-->Unity, VTK
A passthrough AR visualization that overlays scientific datasets onto real laboratory spaces to enhance spatial reasoning and interaction-->Unity, WebXR, Paraview, Spatial Anchors API
Read previous projects.
Copied the course learning goals from the syllabus onto the top of my journal and added scores.
Joined Paperspace.
2/2/26 - 8 Hours
Expanded on potential project ideas:
Visualize invisible scientific fields (magnetic, electric, fluid flow) overlaid onto real space using passthrough AR, enabling embodied interaction with abstract vector fields.-->ParaView, VTK, Unity + Meta XR SDK, Physics-based vector field datasets
3 things I will do: Use VTK / ParaView to generate 3D vector field data (streamlines, glyphs). Build a passthrough AR interface in Unity where users manipulate virtual probes or magnets to see field changes. Compare passthrough AR interaction against traditional 2D vector field diagrams.
1 class activity: In-class experiment where students try to predict field behavior before and after interacting with the AR visualization.
Potential deliverables: A tutorial on how to go from Vector Fields in ParaView to Passthrough AR in Unity. A table showing 2D vs VR vs passthrough AR for field comprehension. Some video demos. A page on when embodied visualization adds value and when it does not. This would go in VR Visualization Software → VTK → Field Visualization in AR in the Wiki.
Use passthrough AR to physically embed high-dimensional ML embeddings (e.g., from vision or language models) into real space, allowing users to walk through and interrogate structure that is normally flattened into 2D plots (PCA, t-SNE, UMAP). AR provides metric grounding to the physical world, allowing users to understand the data better than in VR.-->PyTorch, NumPy, Unity, Meta XR SDK, public datasets
3 things I will do: Generate embeddings using a real ML model (e.g., image embeddings from a CNN, or sentence embeddings). Project embeddings into 3D (PCA / UMAP / custom projection) and render them in passthrough AR using Unity. Run a small study comparing 2D plots vs immersive VR vs passthrough AR on tasks like cluster identification or similarity judgment.
1 class activity: In-class experiment where students attempt to identify clusters in a 2D scatter plot, VR, and passthrough AR.
Potential deliverables: A tutorial on Visualizing ML Embeddings in Passthrough AR Using Unity. A table showing 2D vs VR vs passthrough AR for understanding embedding geometry. A page on when AR helps and when it collapses under dimensionality. This would go in VR Visualization Software → Scientific Visualization → ML Embeddings in AR in the Wiki.
Visualize 3D CT/MRI volumes in passthrough AR to enhance spatial understanding and collaborative analysis beyond traditional 2D slice-based views. Users can slice, threshold, clip, and annotate volumetric data while comparing different visualization techniques.-->ParaView, VTK / exported meshes or volume textures, Unity + Meta XR SDK, open medical datasets
3 things I will do: Preprocess CT/MRI datasets in ParaView and export for AR volume rendering. Build a passthrough AR interface in Unity where users can slice, clip, and annotate volumes. Conduct a small evaluation comparing AR visualization against desktop 2D viewers.
1 class activity: Small groups explore the same volume in AR and desktop, documenting where spatial understanding improves or worsens.
Potential deliverables: Tutorial: “Bringing CT/MRI Volumes into Passthrough AR”. Comparison table: ParaView vs Unity vs 3D Slicer. Evaluation writeup: task completion time, spatial understanding, user preference. Screenshots and short demo videos.
Visualize time-varying climate simulation data (temperature, ice thickness, wind) in passthrough AR to allow users to physically walk around and collaboratively explore trends, anomalies, and spatiotemporal patterns.-->NetCDF climate datasets, ParaView, Unity + Meta XR passthrough
3 things I will do: Preprocess NetCDF climate datasets in ParaView to create visualizations of scalar and vector fields. Build a passthrough AR interface in Unity with time controls and collaborative annotations. Compare AR comprehension and collaboration with desktop-based visualization.
1 class activity: Students complete a climate interpretation task in both desktop ParaView and passthrough AR, comparing accuracy and confidence.
Potential deliverables: Step-by-step pipeline: NetCDF → ParaView → AR. Visualization technique comparison matrix (color maps, height fields, particle flows). Evaluation results (qualitative + quantitative). Best practices guide for large spatiotemporal data in AR
Finished Shapes XR Lab
Finished Quest 3 Setup with Paperspace and SteamVR.
Finished Quest 3 Practice Tutorial and posted Google Earth screenshots in my journal.
Installed DinoVR
Read DinoVR paper
Thought of project ideas and expanded on some
Brainstormed software to use
Brainstormed software evaluation metrics
2/4/26 - 5 Hours
Selected project idea and made a plan for the ML embeddings project above.
02/10
Finalize dataset selection (image embeddings primary)
Decide evaluation tasks (cluster identification, similarity judgment)
02/12
Generate embeddings using PyTorch
Create baseline 2D PCA/UMAP plots
Set up Unity project (VR + AR templates)
02/19
Import projected 3D embedding coordinates into Unity
Render embeddings as point clouds in standard 3D space
Implement basic interaction (selection, highlighting, proximity)
Add labeling or color-coding by class
Begin documentation of Unity embedding pipeline
02/24
Integrate passthrough AR
Anchor embeddings in physical space
Adjust scale, occlusion, and navigation for real-world interaction
02/26
Allow multiple users to view the same physical embedding, allowing for shared discussion and annotation of clusters
Compare interaction differences between VR and AR
03/03
Run pilot evaluation:
2D plot vs VR vs passthrough AR
Measure accuracy, confidence, and qualitative feedback
03/05
Analyze evaluation results
Finalize wiki pages and comparison tables
Record short demo videos (2D, VR, AR)
Polish documentation and best-practices writeup
In-Class Activity--3/10
Students will be shown the same embedding dataset in three formats:
Traditional 2D scatter plot
Immersive VR
Passthrough AR
They will then:
Identify clusters
Judge similarity between points
Discuss perceived structure and scale
A short Google Form will collect:
Task accuracy
Confidence
Perceived usefulness of each medium
Read through VR Visualization Software pages (D3.js Basic Tutorial, Omegalib, VTK/VTK.js, FieldView)
2/9/26 - 6 Hours
Completed 3-minute Proposal Presentation
Completed Project Proposal:
Project Overview:
For my final project, I will focus on visualizing word embeddings in passthrough augmented reality. Word embeddings are high-dimensional vector representations learned by machine learning models that encode semantic relationships between words. These embeddings are typically visualized using 2D dimensionality-reduction plots (e.g., PCA, UMAP), which often obscure spatial structure and neighborhood relationships.
This project investigates whether passthrough AR, which embeds data directly into physical space, improves users’ understanding of semantic structure compared to traditional 2D plots. By allowing users to walk through and physically reference embedding space, the project explores how metric grounding in the real world affects interpretation, comparison, and sensemaking of high-dimensional language data.
The project explicitly compares 2D visualization vs passthrough AR, focusing on tasks such as cluster identification and similarity judgment.
Data and Software:
Data Type
High-dimensional word embeddings (e.g., 300–768 dimensions)
Scientific data derived from trained NLP models
Hundreds to thousands of labeled word vectors
Datasets
Public word or sentence datasets
Pretrained embeddings (e.g., GloVe or transformer-based embeddings)
Software
PyTorch – embedding generation or loading pretrained embeddings
NumPy / PCA / UMAP – dimensionality reduction
Unity – visualization environment
Meta XR SDK – passthrough AR
Project Milestones:
02/10
Finalize word dataset (e.g., emotions, professions, moral concepts)
Complete: https://nlp.stanford.edu/projects/glove/
glove.6B.300d.
Has a 400K vocabulary size, Pre-trained, Publicly available, Moderate download size, 300 dimensions is a standard embedding size
Using GloVe for word embeddings.
Decide evaluation tasks (cluster identification, similarity judgment)
Complete: Cluster Identification, Similarity judgment, Spatial Reasoning, Category Membership, Word Identification
02/12
Generate or load word embeddings using PyTorch
Create baseline 2D PCA / UMAP plots
Set up Unity project with passthrough AR support
02/19
Import projected 3D embedding coordinates into Unity
Render embeddings as labeled point clouds
Implement basic interaction (selection, highlighting, proximity)
Add color-coding by semantic category
Begin documentation of the Unity pipeline
02/24
Integrate passthrough AR
Anchor embeddings in physical space
Adjust scale, occlusion, and navigation for real-world interaction
02/26
Enable collaborative viewing of the same physical embedding
Allow shared discussion and annotation of clusters
Compare usability and interpretation between 2D and AR
03/03
Run pilot evaluation:
2D plots vs passthrough AR
Measure task accuracy, confidence, and qualitative feedback
03/05
Analyze evaluation results
Finalize wiki pages and comparison tables
Record short demo videos (2D and AR)
Polish documentation and best-practices writeup
In-Class Activity:
Students will explore the same word embedding dataset in two formats:
Traditional 2D scatter plot
Passthrough AR
Students will:
Identify semantic clusters
Judge similarity between selected words
Discuss perceived structure, scale, and relationships
A short Google Form will collect:
Task accuracy
Confidence
Perceived usefulness of each medium
This activity serves as an evaluative comparison of visualization methods.
Deliverables:
Placement: VR Visualization Software → Scientific Visualization → ML Embeddings in AR
Tutorial: Visualizing Word Embeddings in Passthrough AR Using Unity
Comparison Table: 2D vs Passthrough AR for understanding semantic embedding geometry
Conceptual Page: When Physical Grounding Helps (and When It Doesn’t) for Language Embeddings
Evaluated Project on Rubric
Project Self-Evaluation:
The proposed project clearly identifies deliverable additions to our VR Software Wiki → Strongly agree
The proposed project involves passthrough or “augmented” in VR → Strongly agree
The proposed project involves large scientific data visualization and identifies the specific data type and software → Strongly agree
The proposed project has a realistic schedule with explicit and measurable milestones → Strongly agree
The proposed project explicitly evaluates VR software, preferably in comparison to related software → Agree
The proposed project includes an in-class activity → Strongly agree
The proposed project has resources available with sufficient documentation → Strongly agree
Linked this section/date to top. Link changed on 2/11.
Met 2/10 Milestones of Project
Finalize word dataset (e.g., emotions, professions, moral concepts): Complete
https://nlp.stanford.edu/projects/glove/
glove.6B.300d --> Has a 400K vocabulary size, Pre-trained, Publicly available, Moderate download size, 300 dimensions is a standard embedding size
Using GloVe for word embeddings.
Decide evaluation tasks (cluster identification, similarity judgment): Complete
Cluster Identification, Similarity judgment, Spatial Reasoning, Category Membership, Word Identification
2/11/26 - 2 Hours
Edited Journal layout and added Project 1 section to top. Linked the Project 1 section in the Proposals section.
Prepared Journal for in-class review.
Self-evaluated Journal:
Journal activities are explicitly and clearly related to course deliverables: 4
Deliverables are described and attributed in wiki: 3
Report states total amount of time: 5
Total time is appropriate: 4
Filled in my project's class activity in the wiki timeline for the class time I would like (3/10).
2/12/26 - 4 Hours
Project work
Finalized 64-word dataset across 4 categories.
Loaded GloVe 6B 300D embeddings.
Decided on evaluation tasks: cluster identification, similarity judgment, spatial reasoning, category membership, word identification.
Began Python pipeline (see the sketch after this entry).
AFrame Lab (Completed at home since I was sick)
WebXR Measure Lab (Completed at home since I was sick)
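As referenced above, the Python pipeline starts from the finalized 64-word dataset. A sketch of how such a dataset can be organized and checked against the GloVe vocabulary (the category names and words shown are placeholders, not the exact study dataset):

    # Illustrative structure for a 64-word dataset across 4 categories (16 words each)
    # and a check that every word exists in the pretrained GloVe vocabulary.
    CATEGORIES = {
        "emotions":    ["joy", "anger", "fear", "hope"],        # ... 16 per category
        "professions": ["nurse", "doctor", "engineer", "artist"],
        "animals":     ["dog", "eagle", "salmon", "spider"],
        "foods":       ["bread", "apple", "cheese", "rice"],
    }

    def check_vocabulary(categories, glove_vocab):
        """Return any dataset words missing from the pretrained vocabulary."""
        return [w for words in categories.values() for w in words if w not in glove_vocab]

    # missing = check_vocabulary(CATEGORIES, set(vecs))   # vecs from the loader sketch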
2/15/26 - 7 Hours
Completed both lab feedback forms (AFrame and WebXR)
Project work:
Continued the Python pipeline: finished generate_embeddings.py, reduce_dimensions.py, and export_json.py.
Generated first baseline 2D PCA/UMAP scatter plots.
Set up Unity project with Android Build Support.
2/17/26 - 3 Hours
Under "Project 1 Class Activity Dates", filled in class activity in the wiki timeline for the class time I would like.
Completed installation steps for both ParaView and Ben's Volume Rendering applications in the ParaView + Ben's Volume Rendering tutorial
Signed up for AVP Unity Lab Slots
Project work:
Worked on generating spheres in Unity
2/19/26 - 5 Hours
Completed ParaView feedback form.
Built Unity scene outline: WordPoint prefab, EmbeddingLoader, InteractionManager, CategoryLegend, and CloudNavigator scripts.
Imported embeddings.json into Assets/Data.
2/22/26 - 6 Hours
Continued Unity development.
Refined EmbeddingLoader to support UMAP and PCA layout switching.
Tested interaction system in Unity editor.
Read through project update presentation requirements.
Began working on 3-minute update slides (milestone review for 2/24).
Explored Meta XR SDK documentation for passthrough AR integration options
2/23/26 - 5 Hours
Completed 3-minute update slides and emailed to Aarav and David.
Accepted Apple Developer Invitation
Signed up for a Unity Lab slot
Worked on sphere coloration in the Unity project to represent the different groups, but broke the project; no spheres showed.
2/25/26 - 8 Hours
Repaired the Unity project by reverting to a previous state and undoing changes. The project now rendered as a bunch of gray spheres.
Fixed the sphere implementation and the Python-to-Unity pipeline so that the spheres appeared with color.
Made all relevant objects and finished these files: WordPoint prefab, EmbeddingLoader, InteractionManager, CategoryLegend, and CloudNavigator scripts.
Began passthrough AR integration: reviewed Meta XR passthrough API, added OVRCameraRig to scene, explored spatial anchor options. Ran into API level and graphics API build failures.
3/1/26 - 5 Hours
Deferred passthrough AR — shifted to standard VR mode to meet March 10 deadline.
Fixed build configuration: set Vulkan + OpenGLES3, API level 32, IL2CPP, ARM64.
First successful APK build.
Installed on Quest 3 via ADB.
First full on-headset test: verified 64 spheres spawned, but navigation did not work; the scene appeared as 64 spheres on a flat image that moved with the user's head.
3/2/26 - 3 Hours
Downloaded the APK file for Henry and Lee's in-class activity.
Finalized 3-condition study design (noisy 2D, best 2D, Unity 3D).
Updated plot_2d.py to generate all 4 plot variants (labeled and unlabeled, best and worst).
3/4/26 - 5 Hours
Selected and added gray mystery dots (guilt, soldier, power) to plots as Dot A/B/C
Selected and added gray mystery dots (guilt, soldier, power) to VR as Dot A/B/C (done by editing the Python and C# scripts).
Fixed user movement so that the right joystick controlled forward, backward, left, and right movement and the left joystick controlled rotation; B/A controlled up/down. Bugs: rotation rotated you about some axis in the world rather than rotating in place; B (the top button) moved you down and A moved you up, which is counterintuitive; movement was slow and tedious; and when you moved your head, the entire scene moved with you rather than you moving through the scene.
Did setups for Meng, Sanil, and Ellie's activities.
3/6/26 - 4 Hours
Fixed VR scene issues: head-locked world (moved to OVRCameraRig), joystick rotation behavior, B/A button inversion.
Updated EmbeddingLoader and WordPoint to support mystery dot relabeling (Dot A/B/C, gray color, hidden category).
Rebuilt and reinstalled APK.
3/8/26 - 7 Hours
Implemented a point-at-sphere feature to see the word and category.
Implemented clicking a sphere to select its group and make the sphere slightly larger. However, this feature was later removed because clicking the gray mystery spheres would reveal which group they belonged to.
Sped up fly and rotation speeds to make movement more fluid.
Experimented with and changed the noisy 2D plot, as another was found with more overlap of points and clusters.
Restructured how the in-person demo would be run and what tasks I wanted participants to do.
Drafted Google Form for in-person activity
3/9/26 - 5 Hours
Finalized movement and features for VR demo.
Adjusted the pointing feature of the Unity VR build, as the ray was off (it originally pointed from the face of the controller rather than from the front).
Adjusted fonts and sizing of text.
Selected which words I wanted users to compare in the Google Form + demo (nurse, engineer, doctor and hope, artist, envy).
Finalized the Google Form for the project
Finalized the project for the in-class demo; re-downloaded the APK and uploaded it.
Added demo setup instructions, activity forms, and instructions for VR movement to timeline.
Selected a time slot for the AVP Unity Lab.
Downloaded the setup for Korey and Justin's activities.
3/10/26 - 4 Hours
Analyzed responses from the demo Google Form and studied all generated graphs.
Reviewed feedback from Demo form.
Decided on key takeaways, next project steps, and which graphs were meaningful
Drew relevant conclusions
Created final project presentation and began filling it out.
3/11/26 - 6 Hours
Completed final project presentation
Included doctor/nurse/engineer accuracy breakdown, cluster rating averages, mystery dot accuracy by condition, usefulness ratings, qualitative themes.
Added 6 pages to the wiki under "VR Visualization Software" as the deliverables for Project 1. These include a tutorial, a comparison table, a page about when to use 2D vs 3D, video demos, and a full project walkthrough with code/technical output, the in-person demo, and Google Form analysis.
Added a base passthrough AR implementation (did not have time to add images of this to the slides but will add them to my wiki pages/deliverables).
Added new scripts and objects to Unity to accomplish this.
Result: word spheres float in the physical room at table height. Users walk between the emotion cluster and the profession cluster. Thus, there are now 2 APKs for the project: one VR-only and one with AR. I wanted to implement a switching feature using buttons or triggers but did not have time.
Did not have time to add the annotation feature from original plan.
Worked on Journal entries and reorganized.
3/12/26 - 3 Hours
Worked on Journal entries and reorganized.
Added screenshots to Activity Board.
Went through and completed all google forms from past activities.
Wrote a short evaluation of my in-class progress along with the grade I deserve and emailed it to Aarav and David.
Completed setup steps for the Apple Vision Pro Unity Lab.