By Arib Syed
These pages document how word embeddings from machine learning models can be visualized as fields and gradients in physical space using Unity and the Meta Quest 3 headset.
Project 2 — Continuous Semantic Field Visualization
The second project extends the same dataset and pipeline to represent embedding space as a continuous scalar and vector field rather than a set of discrete points. Instead of spheres marking individual words, the space is filled with colored fog derived from kernel density estimation (KDE) over the UMAP coordinates, and gradient flow lines show the direction in which semantic meaning changes most strongly at any location. A semantic probe lets users point anywhere in the 3D space and read the live composition of categories at that point. A user study compared the 2D flow-field plot against the VR field environment on tasks involving boundary detection, gradient interpretation, and field structure — tasks that discrete point clouds cannot address at all.
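The fog-field idea can be sketched in a few lines. This is a minimal illustration, not the project's actual code: the category names, point counts, and grid resolution are placeholders, and `scipy.stats.gaussian_kde` stands in for whatever KDE implementation the pipeline uses. The idea is one density estimate per category, sampled on a voxel grid whose values would drive per-voxel fog color and opacity in the renderer.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical 3D UMAP coordinates for words in two example categories.
rng = np.random.default_rng(0)
emotion_pts = rng.normal(loc=[0.0, 0.0, 0.0], scale=0.5, size=(40, 3))
tool_pts = rng.normal(loc=[2.0, 0.0, 0.0], scale=0.5, size=(40, 3))

# One KDE per category; gaussian_kde expects shape (n_dims, n_points).
kdes = {
    "emotion": gaussian_kde(emotion_pts.T),
    "tool": gaussian_kde(tool_pts.T),
}

# Sample each density on a coarse voxel grid covering the embedding region.
axis = np.linspace(-2.0, 4.0, 16)
gx, gy, gz = np.meshgrid(axis, axis, axis, indexing="ij")
grid = np.vstack([gx.ravel(), gy.ravel(), gz.ravel()])

# Each entry is a (16, 16, 16) scalar field for one category's fog cloud.
density = {name: kde(grid).reshape(gx.shape) for name, kde in kdes.items()}
```

In practice the grid resolution trades visual smoothness against per-frame cost on the headset, which is why a coarse voxel grid (rather than per-pixel evaluation) is the natural fit here.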
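The gradient flow lines can be understood as streamlines of steepest ascent through a density field. Below is a hedged sketch under assumed details: a finite-difference gradient of a KDE (the project may differentiate analytically or on the voxel grid instead), forward-Euler integration with a fixed step, and made-up point data.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical 3D UMAP coordinates; the real field comes from the word dataset.
rng = np.random.default_rng(1)
pts = rng.normal(size=(60, 3))
kde = gaussian_kde(pts.T)

def gradient(p, kde, eps=1e-3):
    """Central-difference gradient of the density at a 3D point p."""
    g = np.zeros(3)
    for i in range(3):
        dp = np.zeros(3)
        dp[i] = eps
        g[i] = (kde(p + dp)[0] - kde(p - dp)[0]) / (2.0 * eps)
    return g

def flow_line(start, kde, step=0.05, n_steps=50):
    """Follow the normalized gradient (steepest ascent) with Euler steps."""
    path = [np.asarray(start, dtype=float)]
    for _ in range(n_steps):
        g = gradient(path[-1], kde)
        norm = np.linalg.norm(g)
        if norm < 1e-9:  # flat region: stop integrating
            break
        path.append(path[-1] + step * g / norm)
    return np.array(path)

line = flow_line([1.5, 0.0, 0.0], kde)
```

Each returned polyline climbs toward a local density peak, so rendering many of them from seed points makes the "direction of strongest semantic change" visible as oriented strokes through the fog.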
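The semantic probe reduces to evaluating every category's density at the pointed-at location and normalizing. A minimal sketch, again with invented categories and synthetic coordinates, and assuming the composition is a simple share-of-total-density readout:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical per-category 3D UMAP point clouds.
rng = np.random.default_rng(2)
categories = {
    "emotion": rng.normal([0.0, 0.0, 0.0], 0.5, size=(40, 3)),
    "tool": rng.normal([2.0, 0.0, 0.0], 0.5, size=(40, 3)),
    "animal": rng.normal([0.0, 2.0, 0.0], 0.5, size=(40, 3)),
}
kdes = {name: gaussian_kde(pts.T) for name, pts in categories.items()}

def probe(point, kdes):
    """Return the normalized category composition at a 3D point (sums to 1)."""
    raw = {name: float(kde(np.asarray(point, dtype=float))) for name, kde in kdes.items()}
    total = sum(raw.values()) or 1.0  # guard against an all-zero readout
    return {name: value / total for name, value in raw.items()}

# A point near the "emotion" cluster should read as mostly emotion.
mix = probe([0.2, 0.1, 0.0], kdes)
```

Because the output is a normalized mixture rather than a nearest-neighbor label, the probe reports gradual blends at category boundaries, which is exactly the information a discrete point cloud hides.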