Challenge: Positioning with millimeter accuracy is needed for effective AR and VR.
Goal: To integrate with several of the most common third-party offerings.
Method: HTC Vive is the main positioning system.
Extensions: Added the ability to send position data over the network using UDP for multiple viewers.
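A minimal sketch of sending pose updates over UDP to multiple viewers, assuming a simple fixed binary layout (position plus orientation quaternion). The project's actual wire format is not specified here, so `encode_pose`, the port number, and the loopback viewer address are illustrative.

```python
import socket
import struct

def encode_pose(x, y, z, qx, qy, qz, qw):
    """Pack position + orientation quaternion as 7 little-endian floats."""
    return struct.pack("<7f", x, y, z, qx, qy, qz, qw)

def decode_pose(data):
    """Unpack a pose message back into its 7 float fields."""
    return struct.unpack("<7f", data)

# Sender: one UDP socket can serve any number of viewers.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
viewers = [("127.0.0.1", 5005)]  # illustrative viewer addresses
msg = encode_pose(1.0, 1.5, 0.25, 0.0, 0.0, 0.0, 1.0)
for addr in viewers:
    # Fire-and-forget: UDP's lack of retransmission suits high-rate pose
    # streams, where a stale packet is worthless anyway.
    sock.sendto(msg, addr)
sock.close()
```

Each viewer binds the same port and calls `decode_pose` on incoming datagrams.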
In-house positioning: RGB Positioning is a proof of concept (POC) for a low-resolution positioning system, built to learn the fundamentals of positioning.
Lessons Learned: Millimeter- or centimeter-precision positioning is still a challenge. ZED, HoloLens, and Leap Motion are usable for limited applications.
Findings late 2017: Software-based positioning (using video streams) is not yet good enough. Hardware-based systems like the HTC Vive work very well.
Next challenge: Develop or support multiple positioning platforms to stay current.
Wish list: We need more than positioning; we also need object recognition and distance data. Lidar-based positioning with object recognition tops the list.
Occlusion: Point cloud data (which includes distances) can be written into the depth buffer so that real objects occlude virtual ones. Object recognition would be a plus.
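The depth-buffer idea can be sketched as follows, assuming camera-space points and a pinhole camera model: the standard z-buffer rule (keep the nearest depth per pixel) is what lets real geometry occlude virtual objects. The intrinsics and the `splat_depth` helper are illustrative, not the project's implementation.

```python
import numpy as np

def splat_depth(points_cam, fx, fy, cx, cy, width, height):
    """Project camera-space 3D points into a depth buffer, keeping the
    nearest depth per pixel (the z-buffer rule used for occlusion)."""
    depth = np.full((height, width), np.inf, dtype=np.float32)
    p = points_cam[points_cam[:, 2] > 0]          # keep points in front of camera
    u = np.round(p[:, 0] * fx / p[:, 2] + cx).astype(int)
    v = np.round(p[:, 1] * fy / p[:, 2] + cy).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    # Unbuffered minimum handles several points landing on the same pixel.
    np.minimum.at(depth, (v[inside], u[inside]),
                  p[:, 2][inside].astype(np.float32))
    return depth

# Two points on the optical axis: the nearer one should win the pixel.
pts = np.array([[0.0, 0.0, 2.0],
                [0.0, 0.0, 1.0]])
buf = splat_depth(pts, fx=100, fy=100, cx=32, cy=32, width=64, height=64)
# buf[32, 32] holds 1.0: the nearer point occludes the farther one.
```

A renderer would then discard virtual fragments whose depth exceeds the buffered real-world depth at that pixel.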
Object recognition: Object recognition would allow individual real-world objects to be identified and treated separately, for example for occlusion or interaction.
Point cloud positioning: Using the point cloud itself (e.g., from lidar), combined with object recognition, as the basis for positioning is the longer-term goal.
Idea for improving the use of DLP projectors as a positioning system: moving the DLP slightly in a pattern improves resolution by exposing more data points.
Sub-pixel movement pattern to increase sampling: two sine waves drive the DLP projector's offset.
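A sketch of what the two-sine-wave drive might look like, assuming the two axes are driven at different frequencies so the offsets trace a Lissajous figure and the projected pixel grid sweeps many distinct sub-pixel positions. The amplitude, frequencies, and phase here are illustrative, not the actual drive signal.

```python
import numpy as np

def dither_offsets(n_steps, amp_px=0.5, freq_x=1.0, freq_y=2.0,
                   phase=np.pi / 2):
    """Sub-pixel (x, y) offsets for the DLP, one row per time step.

    Two sine waves with unequal frequencies trace a Lissajous pattern,
    so successive frames sample different sub-pixel positions instead
    of repeating the same grid.
    """
    t = np.linspace(0.0, 2.0 * np.pi, n_steps, endpoint=False)
    dx = amp_px * np.sin(freq_x * t)
    dy = amp_px * np.sin(freq_y * t + phase)
    return np.column_stack([dx, dy])

offsets = dither_offsets(16)
# Every offset stays within +/- half a pixel of the rest position.
```

Capturing one frame per offset and merging the detections effectively multiplies the number of distinct sample points.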
Visualizing coverage across a group of DLP pixels.
Three cameras locating three objects to see coverage.
Here is the point cloud data without the scan lines.
Simple concept of checking pixel colors to locate and triangulate target positions.
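A sketch of that concept, assuming bright LEDs that saturate one color channel and a rectified stereo pair so depth follows from horizontal disparity. The threshold, camera parameters, and helper names are illustrative, not the system's actual pipeline.

```python
import numpy as np

def led_centroid(img_rgb, channel=0, threshold=200):
    """Centroid (x, y) of pixels whose chosen color channel exceeds the
    threshold -- a crude detector for a bright colored LED."""
    ys, xs = np.nonzero(img_rgb[:, :, channel] > threshold)
    if xs.size == 0:
        return None
    return xs.mean(), ys.mean()

def triangulate_depth(x_left, x_right, focal_px, baseline_m):
    """Depth from horizontal disparity in a rectified stereo pair."""
    disparity = x_left - x_right
    return focal_px * baseline_m / disparity

# Synthetic frames: one red LED seen by each of two cameras.
left = np.zeros((120, 160, 3), dtype=np.uint8)
right = np.zeros((120, 160, 3), dtype=np.uint8)
left[60, 90, 0] = 255   # LED at x=90 in the left image
right[60, 80, 0] = 255  # same LED at x=80 in the right image

xl, _ = led_centroid(left)
xr, _ = led_centroid(right)
depth = triangulate_depth(xl, xr, focal_px=200, baseline_m=0.1)
# 10 px disparity -> depth of about 2 m with these parameters
```

With a third camera the extra ray adds redundancy, which helps reject false color matches.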
RGB positioning has one-centimeter resolution, and requires LEDs on the tracked targets and substantial CPU.