A Glimpse into My Quest 3S & Unity 6 Development Experience
The Meta Quest 3S's full-color Passthrough is a huge leap forward in mixed reality development. In this article, I'll demonstrate how I used Unity 6 and the Meta SDK with the Quest 3S to build a realistic and immersive MR experience.
Spatial tracking is the foundation of AR development. This demo uses the Passthrough and MRUK prefabs from the Meta SDK to implement spatial tracking techniques.
The video shows how this feature can be used for room arrangement: users can explore what furniture fits a room before buying it, or freely change a room's theme in a virtual space.
While MRUK is a powerful tool, it comes with specific limitations. For the most precise AR tracking, the room must be scanned before each use of the app, and the positions of furniture and windows must be reset. This ensures the most accurate spatial anchors.
To precisely place a painting on a wall, I needed to know the names of all the wall objects once they were imported into Unity from the scan. However, Meta Quest doesn't share room information stored on the headset with Unity. To solve this, I wrote a custom script to fetch the room's hierarchy and objects during debugging. This allowed me to confirm how to programmatically locate the wall objects after the room was scanned into the application.
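The debug script can be sketched roughly as follows. This is a minimal version assuming the MR Utility Kit API surface (`MRUK.Instance`, `RegisterSceneLoadedCallback`, `GetCurrentRoom`, `MRUKAnchor` and its `SceneLabels`); exact names and label values can vary between SDK versions, so treat it as an outline rather than drop-in code.

```csharp
using Meta.XR.MRUtilityKit;
using UnityEngine;

// Debug helper: dumps every scene anchor in the current MRUK room so the
// wall objects' runtime names can be discovered. API names here follow
// the MR Utility Kit and may differ slightly between SDK versions.
public class RoomHierarchyLogger : MonoBehaviour
{
    void Start()
    {
        // Wait until MRUK has finished loading the scanned scene.
        MRUK.Instance.RegisterSceneLoadedCallback(LogRoom);
    }

    void LogRoom()
    {
        MRUKRoom room = MRUK.Instance.GetCurrentRoom();
        foreach (MRUKAnchor anchor in room.Anchors)
        {
            Debug.Log($"{anchor.name} | label: {anchor.Label} | position: {anchor.transform.position}");

            // Wall faces are the candidates for painting placement.
            if (anchor.HasAnyLabel(MRUKAnchor.SceneLabels.WALL_FACE))
            {
                Debug.Log($"Wall candidate for painting placement: {anchor.name}");
            }
        }
    }
}
```

Running this once in a debug build prints the full anchor list to the console, which is enough to confirm how the wall objects can be located programmatically after a scan.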
Lighting Synchronization | Depth-Sensing Occlusion | Product Label Display
To synchronize virtual and environmental light, the solution is simple yet effective: add virtual light sources into the digital space.
The video demonstrates this straightforward approach to lighting coherence, along with the implementation of depth detection. This enables virtual objects to appear as if they are truly part of your physical environment, casting shadows and reacting to light in a way that feels natural and immersive.
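In Unity terms, "adding virtual light sources" can be as simple as spawning a point light where a real lamp sits. The sketch below is illustrative only: `realLampAnchor` is a hypothetical empty object placed during room setup, and the range and intensity values were the kind tuned by eye against the passthrough feed.

```csharp
using UnityEngine;

// Minimal sketch of the "add virtual lights where the real lights are"
// approach: spawn a point light at a position anchored to a physical
// lamp so virtual furniture is lit from the same direction as the room.
public class PassthroughLightMatcher : MonoBehaviour
{
    // Assign in the Inspector to an empty object marking the real lamp.
    public Transform realLampAnchor;

    void Start()
    {
        var lightObj = new GameObject("MatchedPointLight");
        lightObj.transform.position = realLampAnchor.position;

        Light pointLight = lightObj.AddComponent<Light>();
        pointLight.type = LightType.Point;
        pointLight.range = 4f;          // rough room-scale falloff (illustrative)
        pointLight.intensity = 1.2f;    // tuned by eye against passthrough
        pointLight.shadows = LightShadows.Soft;
    }
}
```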
Lighting is computationally intensive. To maintain a smooth refresh rate above 72 FPS and avoid significant delays, I had to limit the total number of point lights in the scene to three.
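In the Built-in Render Pipeline, this budget can be enforced with `QualitySettings.pixelLightCount`, which caps how many lights are rendered per-pixel on each object; anything beyond the cap falls back to cheaper per-vertex lighting. A minimal sketch:

```csharp
using UnityEngine;

// Enforce the three-light budget at runtime (Built-in Render Pipeline).
// Lights beyond this count are rendered per-vertex instead of per-pixel,
// which keeps the frame cost predictable on mobile-class hardware.
public class LightBudget : MonoBehaviour
{
    void Awake()
    {
        QualitySettings.pixelLightCount = 3;
    }
}
```

The same cap can also be set per quality level in Project Settings instead of from code.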
Incorrect shader parameters on the furniture materials could also shift the light blending from semi-transparent to opaque, obscuring the passthrough view of the real furniture behind the virtual objects.
The Universal Render Pipeline (URP) supports depth-based occlusion through shaders with little effort. However, I chose the Built-in Render Pipeline for its efficiency and lower resource consumption, which limited occlusion coherence to objects like hands. This trade-off was crucial for ensuring a smooth, low-latency experience on the hardware.
Labeling is another critical operation in MR applications.
To implement this technique, I created a phantom object of the label target using 3D scanning. This phantom was imported into Unity for labeling, and then made invisible during the demo. This process forms the basis of the labeling operation, allowing virtual labels to be accurately placed on real-world objects.
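Making the phantom invisible can be done by disabling its renderers while leaving the GameObject itself active, so its transform (and any colliders) remain available as the reference frame for the labels. A small sketch, assuming the script is attached to the imported FBX root:

```csharp
using UnityEngine;

// Hides the 3D-scanned phantom mesh while keeping its transform and
// colliders active, so labels parented to it stay registered to the
// real-world object. "Phantom" is this article's term, not an SDK concept.
public class PhantomHider : MonoBehaviour
{
    void Start()
    {
        foreach (Renderer r in GetComponentsInChildren<Renderer>())
        {
            r.enabled = false; // invisible, but the object still exists
        }
    }
}
```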
Accurately overlaying the phantom object onto the physical item required significant effort. After scanning the object with a separate app and importing the resulting FBX into Unity, the primary challenge was verifying the scaling: if it was off, the phantom would not align with the real object, defeating the purpose of the overlay.
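One way to verify the scaling is to compare the phantom's rendered bounds against hand-measured dimensions of the physical item. The sketch below is a hypothetical helper (the `measuredSizeMeters` value is illustrative); a mismatch usually means the FBX import scale factor needs adjusting.

```csharp
using UnityEngine;

// Quick scale check for the imported phantom: compares its combined
// renderer bounds against the real object's tape-measured size (meters).
public class PhantomScaleCheck : MonoBehaviour
{
    // Measured size of the physical item, in meters (illustrative value).
    public Vector3 measuredSizeMeters = new Vector3(0.45f, 0.9f, 0.45f);

    void Start()
    {
        Bounds combined = new Bounds(transform.position, Vector3.zero);
        foreach (Renderer r in GetComponentsInChildren<Renderer>())
        {
            combined.Encapsulate(r.bounds);
        }

        Vector3 diff = combined.size - measuredSizeMeters;
        Debug.Log($"Phantom size: {combined.size}, measured: {measuredSizeMeters}, diff: {diff}");
    }
}
```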
Furthermore, because MRUK provides a limited number of furniture labels, a workaround was necessary for the target object. When scanning the space, I had to assign the target object a unique label—one not used by any other existing furniture in the room—and anchor it to a fixed location. This ensured that when the MR environment launched, the correct labeling (here meaning the white "tags" attached to the object) would appear precisely at its designated spot.
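At runtime, the target can then be located by that unique label. The sketch below assumes the same MRUK API surface as before, and uses `LAMP` as a stand-in for whichever scene label was left unused by the rest of the room; `tagPrefab` is a hypothetical prefab for the white tag.

```csharp
using Meta.XR.MRUtilityKit;
using UnityEngine;

// Finds the anchor carrying the unique label assigned during scanning
// and spawns the white tag at its position. Label value and API names
// are assumptions; adapt to your SDK version and chosen label.
public class TargetLabelPlacer : MonoBehaviour
{
    public GameObject tagPrefab; // the white "tag" shown in the demo

    void Start()
    {
        MRUK.Instance.RegisterSceneLoadedCallback(Place);
    }

    void Place()
    {
        MRUKRoom room = MRUK.Instance.GetCurrentRoom();
        foreach (MRUKAnchor anchor in room.Anchors)
        {
            if (anchor.HasAnyLabel(MRUKAnchor.SceneLabels.LAMP))
            {
                Instantiate(tagPrefab, anchor.transform.position, anchor.transform.rotation);
                break; // the label is unique, so the first match is the target
            }
        }
    }
}
```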