Final Poster and Flash Talk
2 | 5 | Goal 1: articulate AR/VR visualization software tool goals, requirements, and capabilities;
1 | 4 | Goal 2: construct meaningful evaluation strategies for software libraries, frameworks, and applications; strategies include surveys, interviews, comparative use, case studies, and web research;
1 | 4 | Goal 3: execute tool evaluation strategies;
2 | 4 | Goal 4: build visualization software packages;
1 | 4 | Goal 5: comparatively analyze software tools based on evaluation;
2 | 5 | Goal 6: be familiar with a number of AR/VR software tools and hardware;
3 | 5 | Goal 7: think critically about software;
3 | 5 | Goal 8: communicate ideas more clearly;
1 | 5 | Goal 9: Contribute high-quality, lasting content to the VR Software Wiki to aid future researchers and students.
Homework 1:
10 min changes:
Add ProteinVR to the VR Software or general tools sections on the wiki. ProteinVR is a lightweight tool that can serve as an alternative to Nanome.
Add a surgical VR tab in the VR in Medicine section. The VR in Medicine section covers many different types of applications and seems overly broad. It would be helpful for medical students or surgeons searching for tools if there were a distinction between visualization tools, surgical planning tools, etc.
Complete the WebVR Tutorials landing page to have the relevant summary/links. It is currently empty.
1 hour changes:
Compile a single page covering all 3D-model and VR-relevant file types. There is already a page that describes some file types for 3D models, but it would be helpful to have one page that covers and describes all of them.
Add an entry that covers "how to cite VR tools/software." This would be helpful for researchers who aren't very familiar with VR tools but would like to use them and need to cite them in a manuscript.
10 hour changes:
Add a new page for AI in VR. Conduct a literature and general technology review and add the new and relevant developments in AI and how they are changing (or could change) the way VR/AR is used.
Add a new page for Gaussian Splatting. This is a new rendering technique for VR that represents scenes as millions of translucent, ellipsoidal "splats." It is not covered in the Wiki. It has largely replaced NeRFs for real-time use because it renders much faster.
Add more tools to the VR in Sports section. It currently only includes NBA visualization tools; however, many other tools exist, such as Rezzil (soccer rehab and drills AR), WIN Reality (baseball training simulations), and STRIVR (QB training), that could be helpful for people to reference.
Homework 2 (Class 1/29):
Dino VR Screen Shot:
Google Earth Screen Shots and Link:
Project Ideas:
I want to get a high-dimensional dataset like Word2Vec, preprocess it for 3D projection, and implement a VR point-cloud renderer that supports selection. I plan to test sensory navigation interactions to see whether physical movement aids understanding. For a class activity, people can perform an A/B cluster search to evaluate different visualization metaphors. My deliverables will include a comparative wiki entry on density vs. legibility in 3D plots and a reusable Unity/WebXR component for loading CSV embedding data.
I want to get skeletal movement data for correct and incorrect squat forms and develop an overlay system in VR that superimposes the ideal form over the user's movement path. Visual deviation indicators would highlight unnatural movements. In an activity, people will use the tool to guide a peer, analyzing the efficacy of the visual cues. Deliverables will be a case study on immersive spatial feedback versus video playback and a wiki tutorial on importing BVH motion files.
My goal is to capture network traffic data to build a VR visualization where devices are nodes and data packets are animated edges representing bandwidth. I will create filtering tools to isolate connections by grabbing nodes. The class will participate in a heuristic evaluation to identify a simulated security breach and rate the tool's usability. Potential deliverables include a table evaluating 3D graph layout libraries for VR performance and a wiki page on design patterns for selecting nodes in dense graphs.
Project Plan:
Goal: Build a VR point-cloud visualization system for exploring high-dimensional AI embeddings (Word2Vec), with interactive selection and navigation, culminating in a comparative study of density vs. legibility in 3D plots.
Platforms: Unity (Quest) and/or WebXR (A-Frame/Three.js)
Deliverables: Reusable Unity/WebXR component for loading CSV embedding data. Comparative wiki entry on density vs. legibility in 3D point-cloud plots. In-class A/B cluster search activity with evaluation data
Week 1: Data Acquisition & Preprocessing Pipeline
Monday 2/10 — Dataset Selection & Initial Preprocessing
Download pre-trained Word2Vec embeddings
Write Python script to extract a meaningful subset (e.g., 1,000–5,000 words from a semantic category like emotions, animals, or places)
Apply dimensionality reduction (PCA or t-SNE)
Export results to CSV format
Deliverable: CSV file with 3D-projected embeddings ready for visualization
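A minimal sketch of this preprocessing step, assuming gensim, scikit-learn, and pandas; the category word lists and output file name below are illustrative placeholders, not the final 1,000–5,000-word subset:
```python
# Minimal Word2Vec -> 3D CSV preprocessing sketch.
# Category lists and file names are placeholders; the real subset is larger.
import numpy as np
import pandas as pd
from gensim.models import KeyedVectors
from sklearn.manifold import TSNE

CATEGORIES = {
    "emotions": ["happy", "sad", "angry", "calm", "afraid"],
    "animals":  ["dog", "cat", "horse", "eagle", "shark"],
    "places":   ["city", "forest", "beach", "mountain", "desert"],
}

kv = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

# Keep only words the model actually contains.
pairs = [(w, c) for c, ws in CATEGORIES.items() for w in ws if w in kv]
words, cats = zip(*pairs)
vectors = np.vstack([kv[w] for w in words])          # 300-dim embeddings

# Project to 3D; perplexity must stay below the number of points.
xyz = TSNE(n_components=3,
           perplexity=min(30, len(words) - 1),
           random_state=42).fit_transform(vectors)

pd.DataFrame({"word": words,
              "x": xyz[:, 0], "y": xyz[:, 1], "z": xyz[:, 2],
              "category": cats}).to_csv("embeddings_3d.csv", index=False)
```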
Wednesday 2/12 — Visualization Framework Setup
Set up Unity project with XR Interaction Toolkit (and experiment with a WebXR development environment)
Review wiki resources: IATK Basics, DinoVR for Point Cloud Data
Write a CSV parser to load the data
Render initial static point cloud (no interaction yet)
Deliverable: Basic point cloud rendering from CSV data
Week 2: Core Interaction Development
Wednesday 2/19 — Selection & Labeling System
Implement point selection via ray-casting (controller or gaze-based)
Display word labels on hover/selection (floating text or tooltip UI)
Add color coding by cluster or semantic category
Implement basic camera/locomotion controls (teleportation or smooth movement)
Deliverable: Interactive point cloud with selection and labels working
Week 3: Navigation & Density Experiments
Monday 2/24 — Sensory Navigation Prototype
Implement physical movement mapping (room-scale walking corresponds to data navigation)
Add scaling controls (zoom in/out of point cloud)
Create two visualization modes for A/B testing:
Mode A (Dense): All points visible, smaller point size, full dataset
Mode B (Legible): Filtered points, larger labels, decluttered view with LOD (level of detail)
Deliverable: Two toggleable visualization modes ready for comparison
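One way the Legible mode's filtered subset could be produced offline is to keep only the most central points of each category; this nearest-to-centroid rule is my assumption, not something the plan specifies:
```python
# Sketch: keep the k points nearest each category centroid for the "Legible" mode.
# The nearest-to-centroid rule is an assumption, not the plan's stated method.
import numpy as np
import pandas as pd

def legible_subset(df, k=50):
    kept = []
    for _, group in df.groupby("category"):
        pts = group[["x", "y", "z"]].to_numpy()
        dist = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
        kept.append(group.iloc[np.argsort(dist)[:k]])   # k most central words
    return pd.concat(kept)

legible_subset(pd.read_csv("embeddings_3d.csv")).to_csv(
    "embeddings_3d_legible.csv", index=False)
```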
Wednesday 2/26 — Refinement & Activity Design
Polish interactions and fix bugs from testing
Design in-class activity protocol:
Task: "Find all words related to [category X] as quickly as possible"
Metrics: Time to completion, accuracy, subjective preference
Create simple data collection form (Google Form or in-app logging)
Prepare activity instructions and evaluation questionnaire
Deliverable: Activity protocol document and data collection system
Week 4: In-Class Activity & Analysis
Monday 3/03 — Final Pre-Activity Testing
Deliverable: Fully tested build ready for class activity
Wednesday 3/05 — Begin Wiki Documentation
Deliverable: Wiki entry draft (technical sections complete)
Week 5: In-Class Activity
Activity: A/B Cluster Search Evaluation
Participants split into two groups, each starting with a different visualization mode. Task: find and select all words related to 'weather' (or a similar category) within the point cloud. Group A starts with the Dense view, Group B with the Legible view; then the groups swap. Metrics: completion time, accuracy (% correct selections), NASA-TLX workload rating, and preference ranking.
Week 5–6: Analysis & Final Deliverables
By 3/12–3/13:
Analyze activity data (quantitative + qualitative)
Complete wiki entry with findings and recommendations
1. Word2Vec to Unity VR Pipeline Tutorial (link)
- Location: VR Visualization Software > Visualization Tutorials
- Description: Step-by-step tutorial on converting high-dimensional AI embeddings (Word2Vec) to 3D point cloud visualization in Unity for Quest 3, including Python preprocessing with gensim/scikit-learn and Unity CSV parsing.
2. SemanticGPT In-Class Activity: VR vs. 2D Comparison Study
- Location: VR Visualization Software > ML Embedding Visualizations in VR/AR > SemanticGPT In-Class User Study: VR vs. 2D Comparison
- Description: Methodology and results from a counterbalanced crossover study comparing VR vs. 2D visualization of LLM semantic embeddings. Includes NASA-TLX workload analysis, preference data, and qualitative feedback from 8 participants. VR showed lower effort/frustration and 75% preference for cluster identification tasks.
3. Meta Movement SDK Tracking Workflow
- Location: Unity > Meta Movement SDK Body Tracking
- Description: A walkthrough of my integration of the Meta Movement SDK into my project. Covers the initial setup steps as well as bugs I ran into and how I troubleshot them.
4. VR for Physical Therapy & Movement Rehabilitation
- Location: Applications of VR > VR in Medicine > VR for Physical Therapy & Movement Rehabilitation
- Description: An overview of the applications VR has for movement rehabilitation and physical therapy, focused specifically on the context of my VR movement PT Detective project and the relevant tools and literature.
5. PT Detective Project Walkthrough
- Location: Applications of VR > VR in Medicine > VR for Physical Therapy & Movement Rehabilitation > PT Detective
- Description: A walkthrough of my PT Detective project, including some of the key results as well as things I learned and tips for people doing similar projects.
Total: 140
1/26/26 - 4 Hours
Joined the class Slack and introduced myself
Read through the wiki and researched new VR tech (ProteinVR, Gaussian Splatting)
Added my proposed changes to my Journal
Explored Kenny Gruchalla's bio and research and added my questions to Gdoc.
1/29/26 - 10 Hours
Worked on project ideas
Completed Google Earth setup and activity.
Completed Dino VR Setup
Completed Virtual Desktop Setup
Completed ShapesVR Lab
2/4/26 - 4 Hours
Completed in-class DinoVR assignment
Drafted and added project proposal and activities to journal
Researched mapping points using Unity/WebXR
2/8/26 - 5 Hours
Completed project proposal
Completed project proposal powerpoint presentation
Researched tools needed for project (Unity, Python, etc.)
2/10/26 - 4 Hours
- Began work on data preprocessing pipeline for Word2Vec embeddings
- Downloaded Google's pre-trained Word2Vec model (GoogleNews-vectors-negative300.bin, ~3.6GB)
- Installed gensim library and troubleshot dependency conflicts with numpy versions
- Wrote Python script to extract words from semantic categories
- Researched t-SNE vs PCA for dimensionality reduction — chose t-SNE for better cluster preservation
2/12/26 - 5 Hours
- Completed dimensionality reduction using scikit-learn's t-SNE implementation (perplexity=30, n_iter=1000)
- Normalized 3D coordinates to VR-appropriate scale (-5 to 5 meters)
- Exported final CSV with columns: word, x, y, z, category
- Began Unity setup, ran into issues with Unity Hub and installation
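A sketch of the center-and-scale step that maps the t-SNE output into the ±5 m range noted above; the uniform scaling (one factor for all axes) is my assumption so relative distances stay undistorted:
```python
# Sketch: center the projected coordinates and scale them uniformly so the
# largest extent spans +/-5 m, keeping relative distances undistorted.
import numpy as np
import pandas as pd

df = pd.read_csv("embeddings_3d.csv")
xyz = df[["x", "y", "z"]].to_numpy()
xyz -= xyz.mean(axis=0)                 # center on the origin
xyz *= 5.0 / np.abs(xyz).max()          # largest coordinate maps to 5 m
df[["x", "y", "z"]] = xyz
df.to_csv("embeddings_3d_vr.csv", index=False)
```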
2/17/26 - 6 Hours
- Unity 2022.3 LTS installed after clearing cache and reinstalling
- Attempted to install XR Interaction Toolkit via Package Manager
- Researched compatibility between Unity version, XR Interaction Toolkit, and Meta XR All-in-One SDK
- Set up basic Unity project with XR Plugin Management configured for Oculus/Meta
- Reviewed wiki resources: IATK Basics page and DinoVR for Point Cloud Data tutorial for implementation ideas
2/19/26 - 5 Hours
- Wrote CSV parser script in C# to load word embedding data into Unity
- Created point cloud using Unity's Particle System with custom shader for category-based coloring
- Implemented color mapping by category
- First attempt to build to Quest 3 failed
- Installed Android SDK and NDK through Unity Hub, correctly configured paths
- Build succeeded but app crashed on launch
2/24/26 - 4 Hours
- Researched A/B testing methodology for VR visualization evaluation
- Studied existing wiki pages on VR evaluation: NASA Task Load Index, System Usability Scale
- Planned two visualization modes for comparison:
- Mode A (Dense): All ~2,500 points visible, small point size (0.02m), hover-only labels
- Mode B (Legible): Filtered ~500 representative points, larger size (0.05m), persistent labels
- Drafted activity protocol: timed cluster search task, metrics to collect, counterbalanced design
- Created outline for Google Form to collect completion time and accuracy
2/26/26 - 4 Hours
- Debugged Quest 3 connection issues
- Disabled Meta Quest Link and used direct USB build instead of streaming
- Fixed rendering issue where points appeared at origin
- Static point cloud with category colors now rendering correctly in VR on Quest 3
- Began trying basic navigation by physically walking around the point cloud
3/03/26 - 10 Hours
UMAP Projection + Global Anchors
- Modified Python pipeline to support UMAP via umap-learn library for better cluster separation
- Added 18 global anchor words as persistent landmarks across sentences
- Added GlobalAnchorPoint data class and rendering logic in Unity
- UMAP technically worked but clusters still too close together for VR readability
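A sketch of how anchors and sentence tokens can be embedded jointly so the anchor words land in one shared layout; the joint-fit strategy is my assumption about how the anchor logic works, not the project's actual code:
```python
# Sketch: embed the 18 anchor words and every sentence's tokens in a single
# joint UMAP fit so anchors (and cluster geometry) are shared across sentences.
# This joint-fit strategy is an assumption, not the project's actual code.
import numpy as np
import umap

def project_with_anchors(anchor_vecs, sentence_vecs_list, seed=42):
    counts = [len(v) for v in sentence_vecs_list]
    all_vecs = np.vstack([anchor_vecs] + list(sentence_vecs_list))
    coords = umap.UMAP(n_components=3, random_state=seed).fit_transform(all_vecs)
    anchors = coords[:len(anchor_vecs)]
    rest, per_sentence = coords[len(anchor_vecs):], []
    for n in counts:
        per_sentence.append(rest[:n])
        rest = rest[n:]
    return anchors, per_sentence
```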
LDA Projection Switch
- Implemented LDA (Linear Discriminant Analysis) projection for mathematically guaranteed inter-cluster separation
- Handled LDA's component limit (max n_classes−1) by padding with PCA residuals
- LDA succeeded mathematically but raw coordinates too small for VR (~0.1 unit spread)
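A sketch of the LDA projection with PCA-residual padding described above, assuming scikit-learn; the residual construction shown is one reasonable reading of "padding with PCA residuals":
```python
# Sketch: LDA yields at most (n_classes - 1) axes; when that is fewer than 3,
# pad the remaining axes with PCA on what is left after removing the LDA
# directions. One reasonable reading of "padding with PCA residuals".
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def lda_project_3d(X, labels):
    Xc = X - X.mean(axis=0)
    k = min(3, len(set(labels)) - 1)
    lda = LinearDiscriminantAnalysis(n_components=k)
    lda_coords = lda.fit_transform(Xc, labels)
    if k == 3:
        return lda_coords
    basis, _ = np.linalg.qr(lda.scalings_[:, :k])    # orthonormal LDA basis
    residual = Xc - (Xc @ basis) @ basis.T            # variance LDA missed
    pca_coords = PCA(n_components=3 - k).fit_transform(residual)
    return np.hstack([lda_coords, pca_coords])
```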
Post-Projection Normalization
- Added center-and-scale normalization after projection
- Debugged a NameError; extents improved to 4-14 units, but clusters remained in a tiny 0.11-unit ball because the trajectory dominated the point-cloud extent
Iterative Repulsion Cluster Spreading
- Implemented iterative repulsion algorithm to force cluster centroids apart with minimum 1.41 unit gap
- Applied affine transform to trajectory/context tokens
- Placed cluster centroids on a Fibonacci sphere (approximately evenly spaced)
- Computed trajectory position via softmax-weighted interpolation using min-max normalized cosine similarities from 768-dim space
- Fixed outer_pull normalization math that was a no-op
- Successfully produces separated clusters with meaningful trajectory movement; the approach guarantees visual separation while preserving similarity intuition
- Expanded GetClusterColor() to 12 cluster colors
- Colors and positions now consistent across all 7 sentences
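A sketch of the geometric pieces described above: Fibonacci-sphere centroid placement, minimum-gap repulsion, and the softmax-weighted trajectory blend. Constants mirror the log, but the code is illustrative rather than the project's source:
```python
# Illustrative sketch of the layout steps above; constants mirror the log.
import numpy as np

def fibonacci_sphere(n, radius=3.0):
    """n approximately evenly spaced centroid positions on a sphere."""
    i = np.arange(n)
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))
    y = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - y * y)
    return radius * np.stack(
        [np.cos(golden_angle * i) * r, y, np.sin(golden_angle * i) * r], axis=1)

def repel(centroids, min_gap=1.41, iters=200, step=0.5):
    """Iteratively push apart any centroid pair closer than min_gap."""
    c = centroids.copy()
    for _ in range(iters):
        moved = False
        for a in range(len(c)):
            for b in range(a + 1, len(c)):
                d = c[a] - c[b]
                dist = np.linalg.norm(d)
                if dist < min_gap:
                    push = step * (min_gap - dist) * d / (dist + 1e-9)
                    c[a] += push / 2.0
                    c[b] -= push / 2.0
                    moved = True
        if not moved:
            break
    return c

def trajectory_position(cosine_sims, centroids, temperature=1.0):
    """Softmax-weighted blend of centroids from min-max-normalized cosine
    similarities computed in the original 768-dim space."""
    s = (cosine_sims - cosine_sims.min()) / (np.ptp(cosine_sims) + 1e-9)
    w = np.exp(s / temperature)
    return (w / w.sum()) @ centroids
```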
3/05/26 - 3 Hours
- Conducted in-class VR vs. 2D evaluation activity with 8 participants
- Set up 2 Quest 3 headsets with SemanticGPT APK
- Deployed 2D Unity WebGL version on laptops for comparison condition
- Ran counterbalanced crossover study: Group A (VR→2D), Group B (2D→VR)
- Participants completed cluster identification tasks across 4 sentences (bat_cave, bat_swing, bank_deposit, bank_river)
- Collected NASA-TLX workload ratings and preference data via Google Form
- Key finding: VR preferred for cluster identification (75%), lower effort/frustration than 2D
3/10/26 - 2 Hours
- Analyzed activity results: computed accuracy rates, NASA-TLX means, preference percentages
- VR showed 75% preference for cluster identification, 63% for path understanding
- Documented methodology and findings for wiki contribution page
- Created project presentation slides
3/19/26 - 5 Hours
- Pivoted to Project 2: a Quest 3 application for VR-assisted movement review.
- Designed the project around a "PT-Patient" workflow where a patient records a squat in VR and a physical therapist reviews the recording as a color-coded 3D skeleton.
- Researched body tracking approaches: compared Quest 3 hand/head tracking, Meta Movement SDK (inside-out body tracking), and external systems like MediaPipe. Settled on Meta Movement SDK because it's built into Quest 3, requires no extra hardware, and exposes per-frame skeletal transforms via OVRBody.
- Drafted the in-class activity ("PT Detective"): a within-subjects counterbalanced crossover where one participant reviews a recorded squat in VR and the other reviews a 2D side-view video of the same squat, then we swap.
- Wrote the Project 2 plan in my journal and built the two-slide proposal deck for Aarav and David (due 10:30am Wed).
- Sketched out 10 milestone dates: 3/31, 4/02, 4/07, 4/09, 4/14, 4/16, 4/21, 4/23, 4/28, 4/30.
3/24/26 - 4 Hours
- Researched biomechanical anomaly thresholds in clinical literature.
- Read Bishop et al. 2017 on Frontal Plane Projection Angle (FPPA) as an injury-risk predictor — landed on 6° mild / 10° severe thresholds for dynamic knee valgus.
- Read Nae et al. 2017 on left-right asymmetry — adopted 6° / 12° thresholds for knee and hip asymmetry.
- Read Wellsandt et al. 2018 on Limb Symmetry Index — adopted 10% / 20% LSI thresholds.
- Read Sahabuddin on trunk lean during squats — adopted 5° / 12° lateral and 30° / 45° forward thresholds.
- Read Leys et al. 2013 on MAD-based statistical thresholds for outlier detection — influenced the three-frame persistence filter idea to suppress sensor-noise spikes.
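Collecting the thresholds above into one place (values are the ones quoted from the cited papers; gathering them into code is my own framing), plus the standard MAD rule from Leys et al.:
```python
# Anomaly thresholds (mild, severe) collected from the readings above, plus the
# standard MAD-based outlier rule from Leys et al. (2013).
import numpy as np

THRESHOLDS = {
    "knee_fppa":      (6.0, 10.0),   # dynamic knee valgus, degrees
    "knee_asymmetry": (6.0, 12.0),   # left-right difference, degrees
    "hip_asymmetry":  (6.0, 12.0),
    "limb_symmetry":  (10.0, 20.0),  # LSI deviation, percent
    "trunk_lateral":  (5.0, 12.0),   # degrees
    "trunk_forward":  (30.0, 45.0),  # degrees
}

def mad_outliers(values, k=3.0):
    """Flag values more than k robust deviations from the median
    (MAD scaled by 1.4826), following Leys et al. 2013."""
    x = np.asarray(values, dtype=float)
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med))
    return np.abs(x - med) > k * mad
```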
3/31/26 - 5 Hours
- Got the Meta XR All-in-One SDK installed in Unity 2022.3 LTS and ran the Movement SDK sample scenes on the Quest 3.
- Spent the first hour fighting Unity Hub: Meta XR SDK requires a specific OpenXR loader configuration, and I had a leftover XR Plugin Management package from Project 1 that conflicted.
- Verified inside-out body tracking is working by running the official Movement SDK body sample on-device and watching my skeleton render.
4/02/26 - 4 Hours
- Got a custom skeleton rendering in Unity from OVRBody data.
- Wrote MovementDataTypes.cs (joint pose serialization structs) and SkeletonResolver.cs (resolves Meta's joint indices to a stable name → transform map).
- Spawned a sphere on each tracked joint and lines between parent-child joint pairs. Ugly but it moves with my body in real time on the headset.
- Hit my first "the SDK changed" issue: the OVRBody.GetJointPoseFromRoot signature in v83 takes different arguments than the docs I had bookmarked from an older version. Resolved by reading the source.
4/07/26 - 6 Hours
- Built the squat capture pipeline: BodyTrackingRecorder.cs samples joint transforms each FixedUpdate, writes them to a JSON file in the app's persistent data path.
- Recorded my first squat. Pulled the JSON off the device with adb pull and inspected it — 11 tracked joints × 60 frames/sec × ~5 seconds of squat = ~3,300 sample rows. Looked clean.
- Added a Patient scene controller with a 3-second countdown UI before recording starts and a "5 reps complete" auto-stop.
- The cable is the enemy. Squatting with a USB-C tether changes how you load each side. Wireless adb is non-negotiable for this kind of project.
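A quick sanity check I would run on a pulled recording; the field names ("frames", "timestamp", "joints") are hypothetical, since the real schema is whatever MovementDataTypes.cs serializes:
```python
# Sanity-check a pulled recording: frame rate, joint count, total samples.
# Field names ("frames", "timestamp", "joints") are hypothetical; the real
# schema is whatever MovementDataTypes.cs serializes.
import json

with open("squat_recording.json") as f:
    rec = json.load(f)

frames = rec["frames"]
duration = frames[-1]["timestamp"] - frames[0]["timestamp"]
joints = len(frames[0]["joints"])
print(f"{len(frames)} frames over {duration:.1f}s (~{len(frames)/duration:.0f} Hz), "
      f"{joints} joints, {len(frames) * joints} joint samples")  # expect ~3,300
```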
4/09/26 - 5 Hours
- Built the PT Review scene. Loaded a recorded JSON file and replayed it as an animated skeleton in a virtual room.
- Wrote SkeletonPlayback.cs with play/pause, scrub, frame-step, and yaw rotation controls bound to the right thumbstick.
- The reviewer (me, wearing the headset, no longer the patient) can walk around the replaying skeleton freely. This is the moment I knew the project was going to work.
- Built SceneSwitcher.cs with a button-driven Patient → PT Review handoff. Wrote two Unity Editor automation scripts (SceneSetupEditor.cs and AutoFixPatientScene.cs) to automate scene setup.
4/14/26 - 6 Hours
- Wrote the analysis pipeline: JointAngleCalculator.cs and SquatAnalyzer.cs.
- For each frame, compute knee/hip asymmetry, knee FPPA (dynamic valgus), trunk lateral and forward lean, and Limb Symmetry Index. Compare each against the mild and severe thresholds from the clinical literature I read on 3/24.
- Three-frame persistence filter: a joint is only flagged severe if the metric crosses the severe threshold for at least three consecutive frames. This kills almost all sensor-noise false positives in a quick test recording.
- Joints render gray (normal) / amber (mild) / red (severe). Severe joints get scaled 1.5× and outlined in white for accessibility — color alone wouldn't be enough for a colorblind reviewer.
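A Python restatement of the per-frame metric math and the three-frame persistence rule (the shipped logic lives in JointAngleCalculator.cs and SquatAnalyzer.cs; the FPPA construction below is one common definition, and the LSI form is a simplification):
```python
# Illustrative restatement of the per-frame metrics and persistence rule.
import numpy as np

def fppa_deg(hip, knee, ankle):
    """Frontal Plane Projection Angle at the knee: project hip/knee/ankle onto
    the frontal plane (drop the depth axis) and measure deviation from a
    straight hip-knee-ankle line."""
    h, k, a = (np.asarray(p, float)[:2] for p in (hip, knee, ankle))
    v1, v2 = h - k, a - k
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return 180.0 - np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def lsi_percent(left, right):
    """Limb Symmetry Index expressed as percent deviation between limbs."""
    return 100.0 * abs(left - right) / (abs(right) + 1e-9)

def persistent_severe(values, severe_threshold, min_frames=3):
    """Severe flag only when the metric exceeds the threshold for at least
    min_frames consecutive frames, suppressing single-frame noise spikes."""
    flags = np.zeros(len(values), dtype=bool)
    run = 0
    for i, v in enumerate(values):
        run = run + 1 if v > severe_threshold else 0
        if run >= min_frames:
            flags[i - min_frames + 1: i + 1] = True
    return flags
```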
4/16/26 - 8 Hours
- Body tracking data started returning broken segment lengths — torso was 6 meters tall, knees were inside the floor. Spent the first 4 hours convinced my analysis math was wrong, then convinced my joint resolution was off. It was neither.
- Ran a diagnostic logging block that dumped all 11 joint world positions + computed segment lengths at frame 0 and mid-frame. The numbers were nonsense.
- Root cause was that the Meta Core SDK and Meta Movement SDK had drifted onto incompatible versions when I updated one without the other. Movement SDK was expecting a Core SDK API that didn't exist in the version I had installed.
- Fix: pinned both SDK versions explicitly in Packages/manifest.json. Switched to OpenXR-only configuration with Meta Quest Support enabled and removed the legacy Oculus XR Plugin entirely.
4/21/26 - 5 Hours
- Polished the PT Review experience and prepared the in-class study materials.
- Tuned color thresholds and tested with three pre-recorded squats: one normal, one with simulated knee valgus, one with simulated weight shift. The system correctly flagged the anomalies in the latter two.
- Wrote the Google Form for the in-class activity: per-anomaly detection (Yes/Maybe/No), 1–5 confidence per anomaly, six NASA-TLX subscales (mental, physical, temporal demand; performance; effort; frustration), and a forced binary VR vs 2D preference at the end. Plus open-text fields.
- Wrote the participant instructions and the operator protocol.
4/23/26 - 4 Hours
- Ran the PT Detective in-class activity. n=8 participants, within-subjects counterbalanced crossover, 4 VR-first and 4 2D-first.
- Each participant reviewed the same recorded squat once in VR (PT Detective on the Quest 3) and once on 2D side-view video on a laptop, in randomized order.
- After each review they filled out the Google Form: per-anomaly detection, confidence, NASA-TLX, and the binary preference at the end of the second review.
- Activity ran on time. Lowest-friction part was actually the VR — no one asked how to walk around the skeleton; they just did it.
- Anecdotal observation during the activity: every single participant noticed the left-right weight shift in VR, where most missed it in 2D.
4/26/26 - 6 Hours
- Cleaned and analyzed the study data in Python.
- Ran Wilcoxon signed-rank tests for each NASA-TLX subscale and each per-anomaly confidence rating. Computed matched-pairs rank-biserial effect sizes for each.
- Ran a two-sided binomial test on the forced preference count.
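A sketch of this paired analysis with SciPy; the matched-pairs rank-biserial effect size is computed directly from the signed ranks rather than from a library helper:
```python
# Sketch of the paired analysis: Wilcoxon signed-rank p-values, matched-pairs
# rank-biserial effect sizes, and the two-sided binomial preference test.
import numpy as np
from scipy.stats import wilcoxon, rankdata, binomtest

def paired_wilcoxon(vr_scores, flat_scores):
    """Return (two-sided p, rank-biserial r) for paired VR vs. 2D ratings."""
    vr = np.asarray(vr_scores, float)
    flat = np.asarray(flat_scores, float)
    p = wilcoxon(vr, flat).pvalue
    diff = vr - flat
    diff = diff[diff != 0]                      # zero differences are dropped
    ranks = rankdata(np.abs(diff))
    w_pos, w_neg = ranks[diff > 0].sum(), ranks[diff < 0].sum()
    return p, (w_pos - w_neg) / (w_pos + w_neg)

# Forced binary preference: 8 of 8 participants chose VR.
print(binomtest(8, n=8, p=0.5).pvalue)          # two-sided p ~= 0.008
```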
4/28/26 - 5 Hours
- Wrote up the statistical analysis and started the deliverables.
- 8/8 participants preferred VR (binomial p = 0.008).
- Three very large effects (|r| ≈ 0.91): TLX-Performance VR 2.50 vs 2D 4.62, TLX-Frustration VR 1.75 vs 2D 3.12, asymmetry confidence VR 3.88 vs 2D 2.88. All three approached but did not cross α=0.05 because the Wilcoxon test at n=8 has a discrete-distribution floor near p=0.039.
- Detection counts: asymmetry VR 5 / 2D 2 (the cleanest VR win — 4 participants saw it only in VR), knee valgus VR 3 / 2D 1, lateral lean VR 1 / 2D 0, forward lean VR 4 / 2D 7 (the lone 2D advantage, consistent with side-view video's natural strength at sagittal-plane motion).
4/30/26 - 5 Hours
- Built the Project 2 final presentation deck and started the wiki write-up.
- Final deck covers project motivation, system design (Patient + PT Review scenes, six metrics with thresholds and citations), study design diagram, headline results, discussion of the n=8 statistical floor, and open research questions.
- Drafted the project results page for the wiki ("PT Detective — VR Movement Review") under Applications of VR > VR in Sports Science / VR in Medicine.
- Started the Meta SDK debugging tutorial as a separate wiki contribution.
5/03/26 - 5 Hours
- Built the final poster and the 40-second flash talk for the public demo.
- Flash talk is 3 slides with auto-advance timings (10s + 20s + 10s) sent to Aarav for the class presentation deck.
5/05/26 - 4 Hours
- Last class before finals. Reviewed poster drafts in class and rehearsed full-semester flash talks. Got feedback on the poster, refined a few section labels and tightened spacing for the print version.
5/08/26 - 2 Hours
- Public demo / final presentations. Presented the poster and ran live PT Detective demos on the Quest 3.