The mechanism for viewing the rendered objects will have 4 major components.
The Housing
A unit to house the components. Includes a backplate to hold the screen (beginning with a smartphone, later a Raspberry Pi LCD)
An angled plate to hold the mirror
An attachment for the lens
A strap to secure the unit to the head
Screen
This is what will display the rendered objects. The program will adjust for positioning and correct any fish-eye distortion introduced by the optics (a pre-warping sketch follows the component list below)
Mirror
An angled mirror to reflect the light from the screen towards the user
Tinted Lens
Positioned at an angle to capture the reflected light from the mirror
Tinted to make the image stand out against potentially bright backgrounds
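As a reference for the fish-eye correction mentioned under the Screen component, here is a minimal sketch of pre-warping a rendered frame before display, assuming OpenCV and NumPy are used; the coefficient k is a hypothetical placeholder whose sign and magnitude would be calibrated for the actual lens.

```python
import cv2
import numpy as np

def build_prewarp_maps(width, height, k=0.25):
    """Build remap tables that radially pre-distort the rendered frame so it
    appears undistorted after passing through the headset optics.
    k is a placeholder coefficient to be calibrated for the real lens."""
    xs = (np.arange(width) - width / 2) / (width / 2)
    ys = (np.arange(height) - height / 2) / (height / 2)
    xv, yv = np.meshgrid(xs, ys)              # normalized pixel grid, centered
    factor = 1 + k * (xv ** 2 + yv ** 2)      # one-coefficient radial model
    map_x = (xv * factor * (width / 2) + width / 2).astype(np.float32)
    map_y = (yv * factor * (height / 2) + height / 2).astype(np.float32)
    return map_x, map_y

def prewarp(frame, map_x, map_y):
    """Apply the precomputed remap to a rendered frame before it is shown."""
    return cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)
```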
Use a camera to send data to the program
Low latency
High accuracy
Use a combination of Python libraries, open-source code from GitHub, and custom code to track hand location
After tracking location, also track hand shape and gestures (sketched below)
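A minimal sketch of the camera-to-tracking pipeline, assuming OpenCV for capture and MediaPipe Hands as one of the Python libraries; the final library mix is still an open design choice.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

cap = cv2.VideoCapture(0)  # default camera index; change for the headset camera
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.6) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            # 21 normalized (x, y, z) landmarks per detected hand
            landmarks = results.multi_hand_landmarks[0].landmark
            tip = landmarks[8]  # index fingertip
            print(f"index tip at ({tip.x:.2f}, {tip.y:.2f})")
cap.release()
```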
The models can be pulled from two sources.
Preloaded from an SD card or USB drive
Exported from SolidWorks or other 3D software (see the loading sketch below)
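A minimal sketch of the model-loading step, assuming the SolidWorks part is exported as an STL file onto the SD/USB mount and loaded with the trimesh library; both the path and the library choice are assumptions, not fixed decisions.

```python
import trimesh

def load_model(path="/media/usb/part.stl"):  # hypothetical mount point
    """Load a mesh exported as STL and report its basic statistics."""
    mesh = trimesh.load(path)
    print(f"vertices: {len(mesh.vertices)}, faces: {len(mesh.faces)}")
    return mesh

model = load_model()
```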
Two potential user interface options (or a combination of both).
Software-based, using hand tracking for selection (see the pinch-selection sketch after this list)
Hardware interface such as a keypad or remote
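For the software-based option, a minimal sketch of a pinch-to-select gesture built on the hand landmarks from the tracking sketch above; the 0.05 distance threshold is a hypothetical value that would be tuned during testing.

```python
import math

def is_pinch(landmarks, threshold=0.05):
    """Return True when the thumb tip (landmark 4) and index tip (landmark 8)
    are close enough together to count as a selection gesture."""
    thumb, index = landmarks[4], landmarks[8]
    return math.dist((thumb.x, thumb.y), (index.x, index.y)) < threshold
```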
Combine all of the components into the full assembly.
Features to focus on
Wide viewing angle
Non-warped imaging
Accurate hand tracking
Features to improve
Speed of hand tracking, so the user can work quickly (a benchmarking sketch follows this list)
Comfort
Design
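To make the speed of hand tracking measurable before and after optimization, a minimal benchmarking sketch; process_frame stands in for the tracking step shown earlier and is an assumed interface.

```python
import time

def measure_fps(process_frame, frames, warmup=10):
    """Time the tracking step over a batch of captured frames and return the
    average frames per second after a short warm-up."""
    for f in frames[:warmup]:
        process_frame(f)                      # warm up model initialization
    start = time.perf_counter()
    for f in frames[warmup:]:
        process_frame(f)
    elapsed = time.perf_counter() - start
    return (len(frames) - warmup) / elapsed
```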
Gather user feedback after prototyping and optimization, and add necessary or desired improvements