Milestone 4

4.1 Optimization

A few changes had to be made for the alpha prototype.

Hand Tracking

Gesture recognition can happen in one of two ways: either the headset can carry an onboard camera and use computer vision to recognize hand gestures, or the system can use motion-capture gloves. As neither of these is an option at this time, we are now using a five-button remote. Each button will have a particular function depending on which stage of operation the user is in. The remote also carries an accelerometer and a Bluetooth module to allow for position tracking.
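
As a rough illustration of how the button dispatch could work, the Python sketch below maps each of the five buttons to a different action for each stage of operation. The stage names and actions here are hypothetical placeholders, not the final design.

from enum import Enum, auto

class Stage(Enum):
    # Placeholder stages of operation; the real device may define others.
    MENU = auto()
    PLACEMENT = auto()
    VIEWING = auto()

# Button id -> action name, keyed by the current stage.
# All action names below are illustrative only.
BUTTON_MAP = {
    Stage.MENU:      {1: "select", 2: "next_item",  3: "prev_item",  4: "back",     5: "confirm"},
    Stage.PLACEMENT: {1: "place",  2: "rotate_cw",  3: "rotate_ccw", 4: "scale_up", 5: "scale_down"},
    Stage.VIEWING:   {1: "menu",   2: "swipe_mode", 3: "reset_view", 4: "zoom_in",  5: "zoom_out"},
}

def handle_button(stage: Stage, button: int) -> str:
    """Return the action bound to `button` in the current stage."""
    return BUTTON_MAP[stage].get(button, "noop")

print(handle_button(Stage.PLACEMENT, 2))  # -> "rotate_cw"

Keeping the mapping in a single table like this makes it easy to rebind buttons as the stages of operation evolve during testing.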

Computation

Instead of using a Raspberry Pi, the device will run on a laptop. The reasoning for this was twofold. First, connecting the Raspberry Pi to the Stevens Wi-Fi was an arduous process that would have made testing and improving the device more difficult. Second, the program requires a graphical display: an SSH session into the Raspberry Pi does not provide graphics, so we were reliant on VNC for the connection, which was impractical for non-local testing.

Function

Without accelerometers costing $1,000 or more, the sensors are subject to large amounts of noise, so obtaining accurate and consistent readings is nearly impossible. Instead of continuous object motion, the rendered object will appear in the center of the view, and the user will make swiping motions for incremental translation.
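
As a minimal sketch of this swipe interaction, assuming the remote streams raw accelerometer samples, a simple threshold on the reading could trigger one translation step per swipe. The threshold, step size, and single-axis handling below are illustrative assumptions, not measured values.

SWIPE_THRESHOLD = 1.5   # in g's; a spike above this counts as a deliberate swipe
STEP = 0.05             # how far the object shifts per swipe, in scene units

object_x = 0.0

def on_accel_sample(ax: float) -> None:
    """Shift the rendered object one step left or right on a strong swipe.

    A real implementation would also debounce, since one physical swipe
    produces several consecutive samples above the threshold.
    """
    global object_x
    if ax > SWIPE_THRESHOLD:
        object_x += STEP      # swipe right -> move object right
    elif ax < -SWIPE_THRESHOLD:
        object_x -= STEP      # swipe left -> move object left

# Example: a noisy stream with one clear right-swipe spike
for sample in [0.02, -0.10, 0.05, 2.1, 0.08]:
    on_accel_sample(sample)
print(object_x)  # -> 0.05

Because only the spike crossing the threshold matters, the constant low-level noise in the cheap sensor never moves the object, which is the point of switching from continuous tracking to incremental steps.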

4.2 Delivery

4.3 Management

This project has gone through many redesigns, primarily due to budget constraints. While researching tracking options, headset designs, and computing methods, we had to resolve discrepancies between accuracy and cost, as well as complete each stage within a realistic timeline. The most recent development, the remote, came after trying to give function to the hand tracking; it proved nearly impossible to get accurate readings of the hand's position. We have balanced these considerations well so far, and expect a working prototype within the coming weeks.