1 | 4 | Goal 1: Articulate AR/VR visualization software tool goals, requirements, and capabilities.
2 | 5 | Goal 2: Construct meaningful evaluation strategies for software libraries, frameworks, and applications; strategies include surveys, interviews, comparative use, case studies, and web research.
2 | 5 | Goal 3: Execute tool evaluation strategies.
1 | 4 | Goal 4: Build visualization software packages.
2 | 5 | Goal 5: Comparatively analyze software tools based on evaluation.
2 | 5 | Goal 6: Be familiar with a number of AR/VR software tools and hardware.
3 | 5 | Goal 7: Think critically about software.
3 | 5 | Goal 8: Communicate ideas more clearly.
Project 1 Proposal: Project 1
Presentation for Project 1 Proposal: Project 1 Description Slides
End Presentation for Project 1 <ADD LINK>
Project 2 Proposal <ADD LINK>
Presentation for Project 2 Proposal <ADD LINK>
Poster <ADD LINK>
In-class Activity <ADD LINK>
Public Demo <ADD LINK>
Homework 1 Assignment
10 minute changes
Add links to the papers on the VR in Psychology page
The "Remote stereotactic visualization for image-guided surgery: technical innovation" link does not work in the VR in medicine page
Add "The Body VR" and a description to the "Example VR/AR Educational Software" page and link the existing page. Also add how to download and run, hardware requirements, and metrics for "The Body VR"
1 hour changes
Add links to the papers on the "History of VR" page.
Create a more detailed WebXR Development Guide, replacing the existing page content
Expand on the "Arduino + Unity" page, including how to download any software, connect the two, any hardware requirements, etc. Right now it is a collection of links and resources and is not flushed out. Could also include a section with existing research papers regarding arduino and unity being used together.
10 hour changes
Add prerequisite tags to pages that may need prior knowledge or software to understand, and link these pages
Update the "Large Displays, Labs, and Papers" section with links, more papers and groups, and more large displays along with information/reviews of all of them.
Expand on the "VR Visualization Software" comparison charts (Features and Time Taken to Complete Tasks) by adding all of the softwares as columns and filling in their columns with information based on features. Moreover, some of the softwares don't have individual pages, such as "Amira" and "ParaView", so these would need to be added.
CONTRIBUTION 1 [short description] <VR in Psychology> Added links to the papers on the VR in Psychology page
CONTRIBUTION 2 [short description] <ADD LINK>
.....
CONTRIBUTION N [short description] <ADD LINK>
Project Overview:
For my final project, I will focus on visualizing word embeddings in passthrough augmented reality. Word embeddings are high-dimensional vector representations learned by machine learning models that encode semantic relationships between words. These embeddings are typically visualized using 2D dimensionality-reduction plots (e.g., PCA, UMAP), which often obscure spatial structure and neighborhood relationships.
This project investigates whether passthrough AR, which embeds data directly into physical space, improves users’ understanding of semantic structure compared to traditional 2D plots. By allowing users to walk through and physically reference embedding space, the project explores how metric grounding in the real world affects interpretation, comparison, and sensemaking of high-dimensional language data.
The project explicitly compares 2D visualization vs passthrough AR, focusing on tasks such as cluster identification and similarity judgment.
Data and Software:
Data Type
High-dimensional word embeddings (e.g., 300–768 dimensions)
Scientific data derived from trained NLP models
Hundreds to thousands of labeled word vectors
Datasets
Public word or sentence datasets
Pretrained embeddings (e.g., GloVe or transformer-based embeddings)
Software
PyTorch – embedding generation or loading pretrained embeddings
NumPy / PCA / UMAP – dimensionality reduction
Unity – visualization environment
Meta XR SDK – passthrough AR
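To make the data side of this pipeline concrete, below is a minimal Python sketch of loading pretrained GloVe vectors, reducing them to 3D with PCA, and exporting labeled coordinates that Unity could read as a point cloud. The file name, example word list, JSON schema, and the use of scikit-learn for PCA are illustrative assumptions, not final project decisions.

# Minimal sketch: GloVe -> 3D coordinates -> JSON for Unity.
# Assumes glove.6B.300d.txt has been downloaded and unzipped locally;
# the word list and output schema here are placeholders.
import json
import numpy as np
from sklearn.decomposition import PCA

WORDS = ["joy", "anger", "fear", "doctor", "teacher", "engineer"]  # example words

def load_glove(path, vocab):
    """Read only the requested words from a GloVe text file (word v1 ... v300 per line)."""
    wanted, vectors = set(vocab), {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            token, *values = line.rstrip().split(" ")
            if token in wanted:
                vectors[token] = np.asarray(values, dtype=np.float32)
    return vectors

vectors = load_glove("glove.6B.300d.txt", WORDS)
words = [w for w in WORDS if w in vectors]
X = np.stack([vectors[w] for w in words])        # shape (n_words, 300)

coords3d = PCA(n_components=3).fit_transform(X)  # 300-D -> 3-D layout for AR

# Export labeled coordinates so a Unity loader can place one point per word.
points = [{"word": w, "x": float(x), "y": float(y), "z": float(z)}
          for w, (x, y, z) in zip(words, coords3d)]
with open("embedding_points.json", "w") as f:
    json.dump(points, f, indent=2)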
Project Milestones:
02/10
Finalize word dataset (e.g., emotions, professions, moral concepts): Complete
glove.6B.300d.
400K vocabulary, pre-trained, publicly available, moderate download size; 300 dimensions is a standard embedding size.
Using GloVe for word embeddings.
Decide evaluation tasks (cluster identification, similarity judgment): Complete
Cluster Identification, Similarity judgment, Spatial Reasoning, Category Membership, Word Identification
02/12
Generate or load word embeddings using PyTorch
Create baseline 2D PCA / UMAP plots (see the plotting sketch after this milestone list)
Set up Unity project with passthrough AR support
02/19
Import projected 3D embedding coordinates into Unity
Render embeddings as labeled point clouds
Implement basic interaction (selection, highlighting, proximity)
Add color-coding by semantic category
Begin documentation of the Unity pipeline
02/24
Integrate passthrough AR
Anchor embeddings in physical space
Adjust scale, occlusion, and navigation for real-world interaction
02/26
Enable collaborative viewing of the same physical embedding
Allow shared discussion and annotation of clusters
Compare usability and interpretation between 2D and AR
03/03
Run pilot evaluation:
2D plots vs passthrough AR
Measure task accuracy, confidence, and qualitative feedback
03/05
Analyze evaluation results
Finalize wiki pages and comparison tables
Record short demo videos (2D and AR)
Polish documentation and best-practices writeup
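For the 02/12 baseline milestone above, here is a minimal sketch of a labeled 2D PCA plot plus a cosine-similarity helper that could serve as ground truth for the similarity-judgment task. It reuses the words and X matrix from the sketch in the Data and Software section; matplotlib is an assumption rather than a listed tool, and UMAP could be swapped in via the umap-learn package.

# Minimal sketch: baseline 2D PCA scatter plot plus a cosine-similarity helper.
# Reuses `words` and `X` from the earlier GloVe-loading sketch; matplotlib is assumed.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

coords2d = PCA(n_components=2).fit_transform(X)

fig, ax = plt.subplots(figsize=(6, 6))
ax.scatter(coords2d[:, 0], coords2d[:, 1])
for word, (x, y) in zip(words, coords2d):
    ax.annotate(word, (x, y), textcoords="offset points", xytext=(4, 4))
ax.set_xlabel("PC 1")
ax.set_ylabel("PC 2")
ax.set_title("Baseline 2D PCA projection of word embeddings")
fig.savefig("baseline_2d_pca.png", dpi=150)

def cosine_similarity(a, b):
    """Cosine similarity in the full 300-D space; usable as ground truth for similarity judgments."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

idx = {w: i for i, w in enumerate(words)}
print(cosine_similarity(X[idx["doctor"]], X[idx["teacher"]]))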
In-Class Activity:
Students will explore the same word embedding dataset in two formats:
Traditional 2D scatter plot
Passthrough AR
Students will:
Identify semantic clusters
Judge similarity between selected words
Discuss perceived structure, scale, and relationships
A short Google Form will collect:
Task accuracy
Confidence
Perceived usefulness of each medium
This activity serves as an evaluative comparison of visualization methods.
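As a sketch of how the form results could be summarized for the pilot comparison, the snippet below assumes the Google Form is exported as a CSV with hypothetical columns medium ("2D" or "AR"), correct (0/1 per task), and confidence (1-5); the column names and file name are placeholders, not a committed study design.

# Minimal sketch: summarize hypothetical form responses per visualization medium.
# Column names ("medium", "correct", "confidence") and the file name are placeholders.
import csv
from collections import defaultdict

totals = defaultdict(lambda: {"n": 0, "correct": 0, "confidence": 0.0})

with open("form_responses.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        t = totals[row["medium"]]
        t["n"] += 1
        t["correct"] += int(row["correct"])
        t["confidence"] += float(row["confidence"])

for medium, t in sorted(totals.items()):
    print(f"{medium}: accuracy={t['correct'] / t['n']:.2f}, "
          f"mean confidence={t['confidence'] / t['n']:.2f}")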
Deliverables:
Placement: VR Visualization Software → Scientific Visualization → ML Embeddings in AR
Tutorial: Visualizing Word Embeddings in Passthrough AR Using Unity
Comparison Table: 2D vs Passthrough AR for understanding semantic embedding geometry
Conceptual Page: When Physical Grounding Helps (and When It Doesn’t) for Language Embeddings
Total: 32 hours
1/26/26 - 3 Hours
Set up individual journal page and linked it into top-level journal page
Joined the course Slack and introduced myself
Reviewed the course homepage, timeline, and homework assignments. Looked through the Project Ideas and Scientific Data pages. Read through the Google Sites Woes page.
Brainstormed Project Ideas and interests.
1/27/26 - 4 Hours
Explored wiki.
Added 10 minute, 1 hour, and 10 hour changes.
Completed Contribution 1.
Read Kenny Gruchalla's bio and website and added a question (and an extra one) for him to the "board gdoc"
Pondered project ideas
1/29/26 - 4 Hours
Came up with potential project ideas:
A neuroadaptive VR scientific visualization interface that uses real-time EEG signals to tailor visual presentation based on user cognitive engagement.-->Unity, ParaView or VTK, OpenBCI, WebSockets
A VR system that integrates GAN-based data augmentation to help users interpret incomplete scientific volumes by comparing real vs synthetically completed data in immersive space.-->Unity or Unreal Engine 5, ParaView/VTK
An immersive VR visualization with spatial ML annotations that highlights key patterns and clusters in complex 3D scientific data to aid interpretation-->Unity, VTK
A passthrough AR visualization that overlays scientific datasets onto real laboratory spaces to enhance spatial reasoning and interaction-->Unity, WebXR, ParaView, Spatial Anchors API
Read previous projects:
Copied the course learning goals from the syllabus onto the top of my journal and added scores.
Joined Paperspace.
2/2/26 - 8 Hours
Expanded on potential project ideas:
Visualize invisible scientific fields (magnetic, electric, fluid flow) overlaid onto real space using passthrough AR, enabling embodied interaction with abstract vector fields.-->ParaView, VTK, Unity + Meta XR SDK, Physics-based vector field datasets
3 things I will do: Use VTK / ParaView to generate 3D vector field data (streamlines, glyphs). Build a passthrough AR interface in Unity where users manipulate virtual probes or magnets to see field changes. Compare passthrough AR interaction against traditional 2D vector field diagrams.
1 class activity: In-class experiment where students try to predict field behavior before and after interacting with the AR visualization.
Potential deliverables: A tutorial on how to go from Vector Fields in ParaView to Passthrough AR in Unity. A table showing 2D vs VR vs passthrough AR for field comprehension. Some video demos. A page on when embodied visualization adds value and when it does not. This would go in VR Visualization Software → VTK → Field Visualization in AR in the Wiki.
Use passthrough AR to physically embed high-dimensional ML embeddings (e.g., from vision or language models) into real space, allowing users to walk through and interrogate structure that is normally flattened into 2D plots (PCA, t-SNE, UMAP). AR provides metric grounding to the physical world, allowing users to understand the data better than in VR.-->PyTorch, NumPy, Unity, Meta XR SDK, public datasets
3 things I will do: Generate embeddings using a real ML model (e.g., image embeddings from a CNN, or sentence embeddings). Project embeddings into 3D (PCA / UMAP / custom projection) and render them in passthrough AR using Unity. Run a small study comparing 2D plots vs immersive VR vs passthrough AR on tasks like cluster identification or similarity judgment.
1 class activity: In-class experiment where students attempt to identify clusters in a 2D scatter plot, VR, and passthrough AR.
Potential deliverables: A tutorial on Visualizing ML Embeddings in Passthrough AR Using Unity. A table showing 2D vs VR vs passthrough AR for understanding embedding geometry. A page on when AR helps and when it collapses under dimensionality. This would go in VR Visualization Software → Scientific Visualization → ML Embeddings in AR in the Wiki.
Visualize 3D CT/MRI volumes in passthrough AR to enhance spatial understanding and collaborative analysis beyond traditional 2D slice-based views. Users can slice, threshold, clip, and annotate volumetric data while comparing different visualization techniques.-->ParaView, VTK / exported meshes or volume textures, Unity + Meta XR SDK, WebXR, Spatial Anchors API, open medical datasets.
3 things I will do: Preprocess CT/MRI datasets in ParaView and export for AR volume rendering. Build a passthrough AR interface in Unity where users can slice, clip, and annotate volumes. Conduct a small evaluation comparing AR visualization against desktop 2D viewers.
1 class activity: Small groups explore the same volume in AR and desktop, documenting where spatial understanding improves or worsens.
Potential deliverables: Tutorial: “Bringing CT/MRI Volumes into Passthrough AR”. Comparison table: ParaView vs Unity vs 3D Slicer. Evaluation writeup: task completion time, spatial understanding, user preference. Screenshots and short demo videos
Visualize time-varying climate simulation data (temperature, ice thickness, wind) in passthrough AR to allow users to physically walk around and collaboratively explore trends, anomalies, and spatiotemporal patterns.-->NetCDF climate datasets, ParaView, Unity + Meta XR passthrough
3 things I will do: Preprocess NetCDF climate datasets in ParaView to create visualizations of scalar and vector fields. Build a passthrough AR interface in Unity with time controls and collaborative annotations. Compare AR comprehension and collaboration with desktop-based visualization.
1 class activity: Students complete a climate interpretation task in both desktop ParaView and passthrough AR, comparing accuracy and confidence.
Potential deliverables: Step-by-step pipeline: NetCDF → ParaView → AR. Visualization technique comparison matrix (color maps, height fields, particle flows). Evaluation results (qualitative + quantitative). Best practices guide for large spatiotemporal data in AR
Finished Shapes XR Lab
Finished Quest 3 Setup with Paperspace and SteamVR.
Finished Quest 3 Practice Tutorial and posted Google Earth screenshots in my journal.
Installed DinoVR
Read DinoVR paper
Thought of project ideas and expanded on some
Brainstormed software to use
Brainstormed software evaluation metrics
2/4/26 - 5 Hours
Selected project idea and made a plan for the ML embeddings project above.
02/10
Finalize dataset selection (image embeddings primary)
Decide evaluation tasks (cluster identification, similarity judgment)
02/12
Generate embeddings using PyTorch
Create baseline 2D PCA/UMAP plots
Set up Unity project (VR + AR templates)
02/19
Import projected 3D embedding coordinates into Unity
Render embeddings as point clouds in standard 3D space
Implement basic interaction (selection, highlighting, proximity)
Add labeling or color-coding by class
Begin documentation of Unity embedding pipeline
02/24
Integrate passthrough AR
Anchor embeddings in physical space
Adjust scale, occlusion, and navigation for real-world interaction
02/26
Allow multiple users to view the same physical embedding, allowing for shared discussion and annotation of clusters
Compare interaction differences between VR and AR
03/03
Run pilot evaluation:
2D plot vs VR vs passthrough AR
Measure accuracy, confidence, and qualitative feedback
03/05
Analyze evaluation results
Finalize wiki pages and comparison tables
Record short demo videos (2D, VR, AR)
Polish documentation and best-practices writeup
In-Class Activity--3/10
Students will be shown the same embedding dataset in three formats:
Traditional 2D scatter plot
Immersive VR
Passthrough AR
They will then:
Identify clusters
Judge similarity between points
Discuss perceived structure and scale
A short Google Form will collect:
Task accuracy
Confidence
Perceived usefulness of each medium
Read through VR Visualization Software pages (D3.js Basic Tutorial, Omegalib, VTK/VTK.js, FieldView)
2/9/26 - 6 Hours
Completed 3-minute Proposal Presentation
Completed Project Proposal:
Project Overview:
For my final project, I will focus on visualizing word embeddings in passthrough augmented reality. Word embeddings are high-dimensional vector representations learned by machine learning models that encode semantic relationships between words. These embeddings are typically visualized using 2D dimensionality-reduction plots (e.g., PCA, UMAP), which often obscure spatial structure and neighborhood relationships.
This project investigates whether passthrough AR, which embeds data directly into physical space, improves users’ understanding of semantic structure compared to traditional 2D plots. By allowing users to walk through and physically reference embedding space, the project explores how metric grounding in the real world affects interpretation, comparison, and sensemaking of high-dimensional language data.
The project explicitly compares 2D visualization vs passthrough AR, focusing on tasks such as cluster identification and similarity judgment.
Data and Software:
Data Type
High-dimensional word embeddings (e.g., 300–768 dimensions)
Scientific data derived from trained NLP models
Hundreds to thousands of labeled word vectors
Datasets
Public word or sentence datasets
Pretrained embeddings (e.g., GloVe or transformer-based embeddings)
Software
PyTorch – embedding generation or loading pretrained embeddings
NumPy / PCA / UMAP – dimensionality reduction
Unity – visualization environment
Meta XR SDK – passthrough AR
Project Milestones:
02/10
Finalize word dataset (e.g., emotions, professions, moral concepts)
Complete: https://nlp.stanford.edu/projects/glove/
glove.6B.300d.
400K vocabulary, pre-trained, publicly available, moderate download size; 300 dimensions is a standard embedding size.
Using GloVe for word embeddings.
Decide evaluation tasks (cluster identification, similarity judgment)
Complete: Cluster Identification, Similarity judgment, Spatial Reasoning, Category Membership, Word Identification
02/12
Generate or load word embeddings using PyTorch
Create baseline 2D PCA / UMAP plots
Set up Unity project with passthrough AR support
02/19
Import projected 3D embedding coordinates into Unity
Render embeddings as labeled point clouds
Implement basic interaction (selection, highlighting, proximity)
Add color-coding by semantic category
Begin documentation of the Unity pipeline
02/24
Integrate passthrough AR
Anchor embeddings in physical space
Adjust scale, occlusion, and navigation for real-world interaction
02/26
Enable collaborative viewing of the same physical embedding
Allow shared discussion and annotation of clusters
Compare usability and interpretation between 2D and AR
03/03
Run pilot evaluation:
2D plots vs passthrough AR
Measure task accuracy, confidence, and qualitative feedback
03/05
Analyze evaluation results
Finalize wiki pages and comparison tables
Record short demo videos (2D and AR)
Polish documentation and best-practices writeup
In-Class Activity:
Students will explore the same word embedding dataset in two formats:
Traditional 2D scatter plot
Passthrough AR
Students will:
Identify semantic clusters
Judge similarity between selected words
Discuss perceived structure, scale, and relationships
A short Google Form will collect:
Task accuracy
Confidence
Perceived usefulness of each medium
This activity serves as an evaluative comparison of visualization methods.
Deliverables:
Placement: VR Visualization Software → Scientific Visualization → ML Embeddings in AR
Tutorial: Visualizing Word Embeddings in Passthrough AR Using Unity
Comparison Table: 2D vs Passthrough AR for understanding semantic embedding geometry
Conceptual Page: When Physical Grounding Helps (and When It Doesn’t) for Language Embeddings
Evaluated Project on Rubric
Project Self-Evaluation:
The proposed project clearly identifies deliverable additions to our VR Software Wiki → Strongly agree
The proposed project involves passthrough or “augmented” in VR → Strongly agree
The proposed project involves large scientific data visualization and identifies the specific data type and software → Strongly agree
The proposed project has a realistic schedule with explicit and measurable milestones → Strongly agree
The proposed project explicitly evaluates VR software, preferably in comparison to related software → Agree
The proposed project includes an in-class activity → Strongly agree
The proposed project has resources available with sufficient documentation → Strongly agree
Linked this section/date to top. Link changed on 2/11.
Met 2/10 Milestones of Project
Finalize word dataset (e.g., emotions, professions, moral concepts): Complete
https://nlp.stanford.edu/projects/glove/
glove.6B.300d --> 400K vocabulary, pre-trained, publicly available, moderate download size; 300 dimensions is a standard embedding size
Using GloVe for word embeddings.
Decide evaluation tasks (cluster identification, similarity judgment): Complete
Cluster Identification, Similarity judgment, Spatial Reasoning, Category Membership, Word Identification
2/9/26 - 2 Hours
Edited Journal layout and added Project 1 section to top. Linked the Project 1 section in the Proposals section.
Prepared Journal for in-class review.
Self-evaluated Journal:
Journal activities are explicitly and clearly related to course deliverables: 4
Deliverables are described and attributed in wiki: 3
Report states total amount of time: 5
Total time is appropriate: 4
Filled in my project's class activity in the wiki timeline for the class time I would like (3/10).