This page compares three conditions for visualizing the same 64-word GloVe embedding dataset: a noisy 2D PCA projection (using low-variance principal components 3 and 5), a clean 2D PCA projection (using the two highest-variance components, 1 and 2), and an interactive Unity 3D environment using a UMAP layout on the Meta Quest 3 headset. Data is drawn from a user study conducted March 10, 2026 with 9 participants.
Cluster Clarity Rating (1–5):
2D Flow Field: avg 4.3
VR Semantic Field: avg 3.7
Overall Usefulness Rating (1–5):
2D Flow Field: avg 4.3
VR Semantic Field: avg 3.7
Which format made flow direction easier to understand:
2D arrows: 3/7
VR colored lines: 2/7
Both equally: 1/7
Neither: 1/7
Which format made category boundaries easier to understand:
2D contours: 2/7
VR clouds + walking: 3/7
Both equally: 1/7
Neither: 1/7
Did the probe panel add information contours couldn't show:
Yes meaningfully: 4/7
Somewhat: 2/7
Not really: 1/7
Projection quality determines accuracy more than dimensionality. The noisy 2D projection's 22% correct rate on the similarity task demonstrates that a poorly chosen projection actively misleads users — not just fails to inform them. Adding a third dimension alone (clean 2D → VR point cloud) improved accuracy modestly, but the larger gain was from fixing the projection choice (noisy 2D → clean 2D).
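The effect of projection choice can be sketched directly. The snippet below builds PCA by SVD and compares the variance captured by components 1–2 (the clean projection) against components 3 and 5 (the noisy one). The embedding matrix here is a random stand-in with decaying per-dimension variance, not the study's actual GloVe vectors; shapes and seeds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for 64 embedding vectors in 50 dims (NOT the study's data):
# random values scaled so variance decays across dimensions, as in PCA output.
X = rng.normal(size=(64, 50)) * np.linspace(3.0, 0.1, 50)

# PCA via SVD on mean-centered data.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                  # coordinates on all principal components

clean_2d = scores[:, [0, 1]]        # components 1 and 2 (highest variance)
noisy_2d = scores[:, [2, 4]]        # components 3 and 5 (low variance)

# Fraction of total variance each 2D view preserves.
total = Xc.var(axis=0).sum()
var_clean = clean_2d.var(axis=0).sum() / total
var_noisy = noisy_2d.var(axis=0).sum() / total
print(f"clean 2D captures {var_clean:.0%} of variance, noisy {var_noisy:.0%}")
```

Because principal components are ordered by variance, the clean view always preserves more structure than any lower-component pair; distances in the noisy view reflect mostly residual variation, which is consistent with the misleading similarity judgments observed above.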
Continuous fields show what discrete points cannot. The flow field conditions introduced a type of insight neither point cloud nor scatter plot can convey: the direction and magnitude of semantic change between locations. Participants in Project 2 could describe boundary locations and gradient directions — tasks that were not meaningful to ask in Project 1 because the visualization contained no field information.
The 2D field plot scored comparably to the VR field on most metrics. Unlike Project 1 where the VR point cloud dominated on most tasks, Project 2 showed the 2D and VR conditions performing similarly overall. The likely reason: the 2D flow field plot is informationally denser — matplotlib's streamplot draws arrows across the entire grid, while the VR scene only rendered 20 streamlines. The 2D plot's contour rings also gave a cleaner topographic readout than the VR fog volumes for users who were experienced with map reading.
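The density difference described above is easy to see in code. This is a minimal sketch of the 2D view's ingredients, streamplot arrows over the full grid plus labeled contour rings, using a toy Gaussian "density" field rather than the study's semantic field; the grid size, field, and filename are assumptions.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for scripted rendering
import matplotlib.pyplot as plt

# Toy scalar field standing in for local category density (not study data).
x = np.linspace(-2, 2, 40)
y = np.linspace(-2, 2, 40)
X, Y = np.meshgrid(x, y)
Z = np.exp(-(X**2 + Y**2))

# Flow field as the negative gradient of the scalar field.
U = -np.gradient(Z, axis=1)  # d/dx
V = -np.gradient(Z, axis=0)  # d/dy

fig, ax = plt.subplots(figsize=(5, 5))
ax.streamplot(x, y, U, V, density=1.2)   # arrows across the entire grid
cs = ax.contour(X, Y, Z, levels=6)       # topographic contour rings
ax.clabel(cs, inline=True, fontsize=7)
fig.savefig("flow_field.png", dpi=100)
```

Note that `streamplot`'s `density` parameter fills the whole axes with streamlines, whereas a VR scene that instantiates a fixed number of line renderers (20 in the study) shows only a sparse sample of the same field.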
The probe panel was the VR condition's unique contribution. 4/7 participants said the real-time percentage composition readout added meaningful information that the 2D contours couldn't show. This is the field equivalent of Project 1's hover-to-inspect interaction — a mechanism for getting precise local information rather than just visual impression.
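A probe readout of this kind reduces to a local composition query. The sketch below reports the percentage of each category among the k nearest embedded points to the probe position; the positions, category labels, and k value are illustrative stand-ins, not the study's implementation.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
# Stand-ins for 64 embedded words: 3D layout positions and category labels.
positions = rng.uniform(-1, 1, size=(64, 3))
categories = rng.choice(["animal", "tool", "food"], size=64)

def probe_composition(point, k=8):
    """Percentage composition by category of the k points nearest the probe."""
    dists = np.linalg.norm(positions - point, axis=1)
    nearest = np.argsort(dists)[:k]
    counts = Counter(categories[i] for i in nearest)
    return {cat: 100.0 * n / k for cat, n in counts.items()}

comp = probe_composition(np.zeros(3))
print(comp)  # e.g. {'tool': 37.5, 'animal': 37.5, 'food': 25.0}-style output
```

Unlike contour rings, which show one scalar level set at a time, this query returns the full mixture at a point, which is what the participants credited the panel for.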
Walking through the field adds experiential understanding. Several participants described the VR condition as making category bleed and boundary ambiguity more apparent than the 2D plot, even when they rated it lower for clarity. The VR field may be more honest about inter-category mixing precisely because users experience it spatially rather than reading it off contour rings.
Use the VR semantic field when:
Understanding the topology of the space — where boundaries are, how gradual transitions are — is the primary goal
Users benefit from the probe tool to interrogate specific spatial locations quantitatively
Time and hardware are available and the dataset warrants deep exploration
Use the 2D flow field plot when:
A quick, printable, hardware-free overview of field structure is needed
The audience is experienced with topographic or weather map reading conventions
The visualization will be shared in a paper, poster, or printed format
Use the VR point cloud (Project 1) when:
The primary task is judging similarity between specific words or locating individual words in space
Label readability and word identification are important
Cluster separation rather than boundary structure is the main insight
Avoid noisy 2D projections for any task involving similarity judgment or spatial reasoning: they actively mislead users rather than simply failing to inform them.