2 | 5 | Goal 1: articulate AR/VR visualization software tool goals, requirements, and capabilities;
1 | 4 | Goal 2: construct meaningful evaluation strategies for software libraries, frameworks, and applications; strategies include surveys, interviews, comparative use, case studies, and web research;
1 | 4 | Goal 3: execute tool evaluation strategies;
2 | 4 | Goal 4: build visualization software packages;
1 | 4 | Goal 5: comparatively analyze software tools based on evaluation;
2 | 5 | Goal 6: be familiar with a number of AR/VR software tools and hardware;
3 | 5 | Goal 7: think critically about software;
3 | 5 | Goal 8: communicate ideas more clearly;
1 | 5 | Goal 9: contribute high-quality, lasting content to the VR Software Wiki to aid future researchers and students.
Homework 1:
10 min changes:
Add ProteinVR to the VR Software or general tools sections on the wiki. ProteinVR is a lightweight tool that can serve as an alternative to Nanome.
Add a Surgical VR tab in the VR in Medicine section. The VR in Medicine section covers many different types of applications and seems overly broad. It would be helpful for medical students or surgeons searching for tools if there were a distinction between visualization tools, surgical planning tools, etc.
Complete the WebVR Tutorials landing page with the relevant summary/links. It is currently empty.
1 hour changes:
Compile a single page covering all 3D-model and VR-relevant file types. There is already a page that describes some file types for 3D models, but it would be helpful to have one page that covers and describes all of them.
Add an entry that covers "how to cite VR tools/software." This would be helpful for researchers who aren't very familiar with VR tools but would like to use them and need to cite them in a manuscript.
10 hour changes:
Add a new page for AI in VR. Conduct a literature and general technology review and add the new and relevant developments in AI and how they are changing, or could change, the way VR/AR is used.
Add a new page for Gaussian Splatting. This is a new rendering technique for VR that represents scenes as millions of translucent, ellipsoidal "splats." It is not covered in the wiki. It has largely replaced NeRFs due to its faster rendering speeds.
Add more tools to the VR in Sports section. It currently only includes NBA visualization tools; however, many other tools, such as Rezzil (soccer rehab and drills AR), WIN Reality (baseball training simulations), and STRIVR (QB training), exist and could be helpful for people to reference.
Homework 2 (Class 1/29):
Dino VR Screen Shot:
Google Earth Screen Shots and Link:
Project Ideas:
I want to get a high-dimensional dataset like Word2Vec, preprocess it for 3D projection, and implement a VR point-cloud renderer that supports selection. I plan to test some kind of sensory navigation interactions to see if physical movement aids understanding. For a class activity, people can perform an A/B cluster search to evaluate different visualization metaphors. My deliverables will include a comparative wiki entry on density vs. legibility in 3D plots and a reusable Unity/WebXR component for loading CSV embedding data.
I want to get skeletal movement data for correct and incorrect squat forms and develop an overlay system in VR to superimpose the ideal form over the user's movement. Visual deviation indicators can highlight unnatural movements. In an activity, people will use the tool to guide a peer, analyzing the efficacy of the visual cues. Deliverables will be a case study on immersive spatial feedback versus video playback and a wiki tutorial on importing BVH motion files.
My goal is to capture network traffic data to build a VR visualization where devices are nodes and data packets are animated edges representing bandwidth. I will create filtering tools to isolate connections by grabbing nodes. The class will participate in a heuristic evaluation to identify a simulated security breach and rate the tool's usability. Potential deliverables include a table evaluating 3D graph layout libraries for VR performance and a wiki page on design patterns for selecting nodes in dense graphs.
Project Plan:
Goal: Build a VR point-cloud visualization system for exploring high-dimensional AI embeddings (Word2Vec), with interactive selection and navigation, culminating in a comparative study of density vs. legibility in 3D plots.
Platforms: Unity (Quest) and/or WebXR (A-Frame/Three.js)
Deliverables: Reusable Unity/WebXR component for loading CSV embedding data. Comparative wiki entry on density vs. legibility in 3D point-cloud plots. In-class A/B cluster search activity with evaluation data.
Week 1: Data Acquisition & Preprocessing Pipeline
Monday 2/10 — Dataset Selection & Initial Preprocessing
Download pre-trained Word2Vec embeddings
Write Python script to extract a meaningful subset (e.g., 1,000–5,000 words from a semantic category like emotions, animals, or places)
Apply dimensionality reduction (PCA or t-SNE)
Export results to CSV format
Deliverable: CSV file with 3D-projected embeddings ready for visualization
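This Monday pipeline can be sketched in Python. Everything below is illustrative: random vectors stand in for the real pre-trained Word2Vec model (which would be loaded with gensim's KeyedVectors), the word lists and output filename are made up, and PCA is implemented with plain numpy (t-SNE being the other option mentioned above):

```python
# Sketch of the Week 1 preprocessing pipeline. Hypothetical word lists and
# filename; random vectors stand in for the ~3.6GB GoogleNews Word2Vec model.
import csv
import numpy as np

rng = np.random.default_rng(42)
categories = {"emotions": ["happy", "sad", "angry"],
              "animals": ["dog", "cat", "horse"]}
words = [w for ws in categories.values() for w in ws]
X = rng.normal(size=(len(words), 300))   # stand-in for 300-dim embeddings

# PCA to 3D: center, then project onto the top-3 right singular vectors
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
X3 = Xc @ Vt[:3].T                       # shape (n_words, 3)

# normalize to a VR-friendly extent (roughly -5..5 meters per axis)
X3 = 5.0 * X3 / np.abs(X3).max()

with open("embeddings_3d.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["word", "x", "y", "z", "category"])
    for cat, ws in categories.items():
        for w in ws:
            x, y, z = X3[words.index(w)]
            writer.writerow([w, f"{x:.4f}", f"{y:.4f}", f"{z:.4f}", cat])
```

The same CSV schema (word, x, y, z, category) is what the Unity parser later in the plan would consume.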
Wednesday 2/12 — Visualization Framework Setup
Set up Unity project with XR Interaction Toolkit (experiment w/ WebXR development environment)
Review wiki resources: IATK Basics, DinoVR for Point Cloud Data
Write a CSV parser to load data
Render initial static point cloud (no interaction yet)
Deliverable: Basic point cloud rendering from CSV data
Week 2: Core Interaction Development
Wednesday 2/19 — Selection & Labeling System
Implement point selection via ray-casting (controller or gaze-based)
Display word labels on hover/selection (floating text or tooltip UI)
Add color coding by cluster or semantic category
Implement basic camera/locomotion controls (teleportation or smooth movement)
Deliverable: Interactive point cloud with selection and labels working
Week 3: Navigation & Density Experiments
Monday 2/24 — Sensory Navigation Prototype
Implement physical movement mapping (room-scale walking corresponds to data navigation)
Add scaling controls (zoom in/out of point cloud)
Create two visualization modes for A/B testing:
Mode A (Dense): All points visible, smaller point size, full dataset
Mode B (Legible): Filtered points, larger labels, decluttered view with LOD (level of detail)
Deliverable: Two toggleable visualization modes ready for comparison
Wednesday 2/26 — Refinement & Activity Design
Polish interactions and fix bugs from testing
Design in-class activity protocol:
Task: "Find all words related to [category X] as quickly as possible"
Metrics: Time to completion, accuracy, subjective preference
Create simple data collection form (Google Form or in-app logging)
Prepare activity instructions and evaluation questionnaire
Deliverable: Activity protocol document and data collection system
Week 4: In-Class Activity & Analysis
Monday 3/03 — Final Pre-Activity Testing
Deliverable: Fully tested build ready for class activity
Wednesday 3/05 — Begin Wiki Documentation
Deliverable: Wiki entry draft (technical sections complete)
Week 5: In-Class Activity
Activity: A/B Cluster Search Evaluation
Participants split into two groups, each starting with a different visualization mode. Task: find and select all words related to 'weather' (or a similar category) within the point cloud. Group A starts with the Dense view, Group B with the Legible view; the groups then swap. Metrics: completion time, accuracy (% correct selections), NASA-TLX workload rating, and preference ranking.
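A sketch of how the collected metrics might be summarized afterward; the trial rows below are hypothetical placeholders for whatever the Google Form / in-app logs actually produce:

```python
# Hypothetical A/B activity log: (participant, mode, completion_time_s,
# correct_selections, total_selections). Real data comes from the class.
from statistics import mean

trials = [
    ("p1", "dense",   95.0, 8, 10),
    ("p1", "legible", 70.0, 9, 10),
    ("p2", "dense",  110.0, 7, 10),
    ("p2", "legible", 65.0, 9,  9),
]

def summarize(mode):
    """Mean completion time and mean accuracy (% correct selections) per mode."""
    rows = [t for t in trials if t[1] == mode]
    return {
        "mean_time_s": mean(r[2] for r in rows),
        "mean_accuracy": mean(r[3] / r[4] for r in rows),
    }

for mode in ("dense", "legible"):
    s = summarize(mode)
    print(mode, round(s["mean_time_s"], 1), round(s["mean_accuracy"], 2))
```

With a counterbalanced design each participant contributes one trial per mode, so per-mode means are directly comparable.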
Week 5–6: Analysis & Final Deliverables
By 3/12–3/13:
Analyze activity data (quantitative + qualitative)
Complete wiki entry with findings and recommendations
1. Word2Vec to Unity VR Pipeline Tutorial (link)
- Location: VR Visualization Software > Visualization Tutorials
- Description: Step-by-step tutorial on converting high-dimensional AI embeddings (Word2Vec) to 3D point cloud visualization in Unity for Quest 3, including Python preprocessing with gensim/scikit-learn and Unity CSV parsing.
2. Troubleshooting Unity + Quest 3 Setup
- Location: TBD
- Description: Not yet published; will publish after bugs are actually fixed.
Total: 61 hours
1/26/26 - 4 Hours
Joined Slack and introduced myself
Read through the wiki and researched new VR tech (ProteinVR, Gaussian Splatting)
Added my proposed changes to my Journal
Explored Kenny Gruchalla's bio and research and added my questions to Gdoc.
1/29/26 - 10 Hours
Worked on project ideas
Completed Google Earth setup and activity.
Completed Dino VR Setup
Completed Virtual Desktop Setup
Completed ShapesVR Lab
2/4/26 - 4 Hours
Completed In class DinoVR assignment
Drafted and added project proposal and activities to journal
Researched mapping points using Unity/WebXR
2/8/26 - 5 Hours
Completed project proposal
Completed project proposal powerpoint presentation
Researched tools needed for project (Unity, Python, etc.)
2/10/26 - 4 Hours
- Began work on data preprocessing pipeline for Word2Vec embeddings
- Downloaded Google's pre-trained Word2Vec model (GoogleNews-vectors-negative300.bin, ~3.6GB)
- Installed gensim library and troubleshot dependency conflicts with numpy versions
- Wrote Python script to extract words from semantic categories
- Researched t-SNE vs PCA for dimensionality reduction — chose t-SNE for better cluster preservation
2/12/26 - 5 Hours
- Completed dimensionality reduction using scikit-learn's t-SNE implementation (perplexity=30, n_iter=1000)
- Normalized 3D coordinates to VR-appropriate scale (-5 to 5 meters)
- Exported final CSV with columns: word, x, y, z, category
- Began Unity setup, ran into issues with Unity Hub and installation
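The t-SNE and normalization steps from this entry can be sketched as follows, assuming scikit-learn; synthetic vectors stand in for the real 300-dim embeddings, and the iteration count is left at scikit-learn's default (1000), which matches the setting above:

```python
import numpy as np
from sklearn.manifold import TSNE

# synthetic stand-in for the real (n_words, 300) embedding matrix
X = np.random.default_rng(1).normal(size=(200, 50))

# 3D t-SNE with perplexity=30 as in the journal entry
X3 = TSNE(n_components=3, perplexity=30, random_state=1).fit_transform(X)

# center-and-scale to the VR-appropriate range of roughly -5..5 meters
X3 = X3 - X3.mean(axis=0)
X3 = 5.0 * X3 / np.abs(X3).max()
print(X3.shape)
```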
2/17/26 - 6 Hours
- Unity 2022.3 LTS installed after clearing cache and reinstalling
- Attempted to install XR Interaction Toolkit via Package Manager
- Researched compatibility between Unity version, XR Interaction Toolkit, and Meta XR All-in-One SDK
- Set up basic Unity project with XR Plugin Management configured for Oculus/Meta
- Reviewed wiki resources: IATK Basics page and DinoVR for Point Cloud Data tutorial for implementation ideas
2/19/26 - 5 Hours
- Wrote CSV parser script in C# to load word embedding data into Unity
- Created point cloud using Unity's Particle System with custom shader for category-based coloring
- Implemented color mapping by category
- First attempt to build to Quest 3 failed
- Installed Android SDK and NDK through Unity Hub, correctly configured paths
- Build succeeded but app crashed on launch
2/24/26 - 4 Hours
- Researched A/B testing methodology for VR visualization evaluation
- Studied existing wiki pages on VR evaluation: NASA Task Load Index, System Usability Scale
- Planned two visualization modes for comparison:
- Mode A (Dense): All ~2,500 points visible, small point size (0.02m), hover-only labels
- Mode B (Legible): Filtered ~500 representative points, larger size (0.05m), persistent labels
- Drafted activity protocol: timed cluster search task, metrics to collect, counterbalanced design
- Created outline for Google Form to collect completion time, accuracy.
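The Mode B "keep ~500 representative points" idea could work roughly like this minimal numpy sketch; the function name, k, and the toy data are all illustrative, not the project's actual code:

```python
# Hypothetical sketch of the Mode B "legible" filter: keep only the k points
# nearest each category centroid.
import numpy as np

def filter_representative(points, labels, k=2):
    """points: (n, 3) array; labels: category name per point.
    Returns the sorted indices of the k points nearest each category centroid."""
    keep = []
    for cat in sorted(set(labels)):
        idx = np.array([i for i, l in enumerate(labels) if l == cat])
        centroid = points[idx].mean(axis=0)
        dists = np.linalg.norm(points[idx] - centroid, axis=1)
        keep.extend(idx[np.argsort(dists)[:k]].tolist())
    return sorted(keep)

pts = np.array([[0, 0, 0], [0.1, 0, 0], [3, 0, 0],
                [5, 5, 5], [5.1, 5, 5], [9, 9, 9]], dtype=float)
labs = ["a", "a", "a", "b", "b", "b"]
print(filter_representative(pts, labs, k=2))  # → [0, 1, 3, 4]
```

Outliers like index 2 and 5 (far from their centroids) are what get dropped, which is the decluttering Mode B is after.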
2/26/26 - 4 Hours
- Debugged Quest 3 connection issues
- Disabled Meta Quest Link and used direct USB build instead of streaming
- Fixed rendering issue where points appeared at origin
- Static point cloud with category colors now rendering correctly in VR on Quest 3
- Tried basic navigation by physically walking around
3/03/26 - 10 Hours
UMAP Projection + Global Anchors
- Modified Python pipeline to support UMAP via umap-learn library for better cluster separation
- Added 18 global anchor words as persistent landmarks across sentences
- Added GlobalAnchorPoint data class and rendering logic in Unity
- UMAP technically worked but clusters still too close together for VR readability
LDA Projection Switch
- Implemented LDA (Linear Discriminant Analysis) projection for mathematically guaranteed inter-cluster separation
- Handled LDA's component limit (max n_classes−1) by padding with PCA residuals
- LDA succeeded mathematically but raw coordinates too small for VR (~0.1 unit spread)
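The component-limit workaround can be sketched like this (synthetic data; in the project the inputs were the word vectors and category labels, and the padding here simply uses PCA axes of the data rather than the exact residual construction):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))     # stand-in for the high-dim word vectors
y = np.repeat([0, 1, 2], 20)      # 3 classes -> LDA yields at most 2 axes

n_classes = len(np.unique(y))
n_lda = min(3, n_classes - 1)     # LDA component limit: n_classes - 1
Z = LinearDiscriminantAnalysis(n_components=n_lda).fit_transform(X, y)

if Z.shape[1] < 3:
    # pad the missing axes with PCA components so the output is always 3D
    pad = PCA(n_components=3 - Z.shape[1]).fit_transform(X)
    Z = np.hstack([Z, pad])

print(Z.shape)
```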
Post-Projection Normalization
- Added center-and-scale normalization after projection
- Debugged a NameError; extents improved to 4-14 units, but clusters remained in a tiny 0.11-unit ball because the trajectory dominated the point-cloud extent
Iterative Repulsion Cluster Spreading
- Implemented iterative repulsion algorithm to force cluster centroids apart with minimum 1.41 unit gap
- Applied affine transform to trajectory/context tokens
- Placed cluster centroids on a Fibonacci sphere (so they are approximately evenly spaced)
- Computed trajectory position via softmax-weighted interpolation using min-max normalized cosine similarities from 768-dim space
- Fixed outer_pull normalization math that was a no-op
- Successfully produces separated clusters with meaningful trajectory movement. This approach guarantees visual separation and good similarity intuition
- Expanded GetClusterColor() to 12 cluster colors
- Colors and positions now consistent across all 7 sentences
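The centroid-placement and trajectory-interpolation steps above can be sketched as follows; the radius, softmax temperature, and random stand-in vectors are illustrative choices, not the project's exact values:

```python
import numpy as np

def fibonacci_sphere(k, radius=3.0):
    """Place k points approximately evenly on a sphere of the given radius."""
    i = np.arange(k)
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))
    z = 1.0 - 2.0 * (i + 0.5) / k              # evenly spaced heights in (-1, 1)
    r = np.sqrt(1.0 - z * z)
    theta = golden_angle * i
    return radius * np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=1)

def trajectory_position(token_vec, cluster_vecs, centroids, temp=0.1):
    """Softmax-weighted interpolation between cluster centroids, using
    min-max normalized cosine similarities from the original space."""
    sims = cluster_vecs @ token_vec / (
        np.linalg.norm(cluster_vecs, axis=1) * np.linalg.norm(token_vec))
    s = (sims - sims.min()) / (sims.max() - sims.min() + 1e-9)
    w = np.exp(s / temp)
    w /= w.sum()
    return w @ centroids                       # (3,) position in VR space

centroids = fibonacci_sphere(12)               # 12 clusters, as in the journal
rng = np.random.default_rng(0)
cluster_vecs = rng.normal(size=(12, 768))      # stand-ins for 768-dim vectors
pos = trajectory_position(rng.normal(size=768), cluster_vecs, centroids)
print(centroids.shape, pos.shape)
```

Because the interpolated position is a convex combination of points on the sphere, the trajectory always stays inside the cluster layout.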