Milestones and Timeline
Research Phase (Week 1 and 2)
Gather more information on ASL, Python webcam libraries, and methods of processing images/video with code. Do additional research on deep machine learning if time permits.
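As a starting point for the image/video-processing research, the sketch below shows one common idea, frame differencing, on plain lists of grayscale pixel values standing in for webcam frames. In the real program a library such as OpenCV (`cv2.VideoCapture`) would supply the frames; the frames and threshold here are made up for illustration.

```python
# Minimal frame-differencing sketch: compare two grayscale "frames"
# (nested lists of brightness values) and report how much changed.
# Real webcam frames would come from a library such as OpenCV.

def frame_diff(prev, curr, threshold=30):
    """Return the fraction of pixels whose brightness changed by more
    than `threshold` between two equally sized grayscale frames."""
    changed = 0
    total = 0
    for row_prev, row_curr in zip(prev, curr):
        for a, b in zip(row_prev, row_curr):
            total += 1
            if abs(a - b) > threshold:
                changed += 1
    return changed / total if total else 0.0

frame_a = [[10, 10, 10], [10, 10, 10]]
frame_b = [[10, 200, 10], [10, 10, 200]]
print(frame_diff(frame_a, frame_b))  # 2 of 6 pixels changed -> ~0.33
```

A change fraction above some tuned cutoff could flag "motion in view," which is a first step toward deciding when a gesture is being made.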
Design Program (Week 3, 4, and 5)
Create use cases, sequence diagrams, and state machine diagrams for the program. Start recording progress on the website (www.agnorokdomain.com) to document the implementation phases. Create a basic UI design for a simple application.
Make the Webcam Process Static and Dynamic Hand Gestures [Implement] (Week 6, 7, and 8)
Set up the basis of the program: make it process hands while ignoring any unrelated stimuli, and make sure the program clearly understands palm and finger positions. Then extend the base program to process hand motion (machine learning may be applied here if time permits). Make sure the program can differentiate a variety of hand motions.
Make dictionary system for ASL Signs and String Pairs [Implement] (Week 9 and 10)
Create a dictionary system that identifies a processed ASL sign and retrieves the equivalent string. Make the system for adding new signs simple and orderly.
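The sign-to-string pairing described above can be sketched as a small wrapper around a Python dictionary. The sign identifiers ("HELLO", "THANK_YOU") are placeholders; real keys would come from the gesture-recognition stage.

```python
# Sketch of the sign-to-string dictionary system. Sign IDs are
# hypothetical placeholders for whatever the recognizer emits.

class SignDictionary:
    def __init__(self):
        self._signs = {}

    def add_sign(self, sign_id, text):
        """Register a recognized sign and its English equivalent."""
        self._signs[sign_id] = text

    def lookup(self, sign_id):
        """Return the string for a sign, or None when unrecognized."""
        return self._signs.get(sign_id)

signs = SignDictionary()
signs.add_sign("HELLO", "hello")
signs.add_sign("THANK_YOU", "thank you")
print(signs.lookup("HELLO"))    # hello
print(signs.lookup("UNKNOWN"))  # None
```

Keeping additions behind a single `add_sign` method is one way to keep the process of registering new signs "simple and orderly," since validation or persistence could later be added in one place.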
Create simple application utilizing system [Implement] (Week 11 and 12)
Based on the design, create a basic application for running the program. The output should either be displayed in the app or typed by the program at the text cursor's location. There will be a toggle for enabling and disabling the application's input capture.
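The toggle behavior could be sketched as a small controller that drops recognized signs while disabled. The output destination (app display vs. typing at the cursor) is abstracted as a callback here; the class and names are illustrative, not part of the actual design.

```python
# Sketch of the input toggle: when disabled, recognized signs are
# ignored instead of emitted. `emit` stands in for either the app
# display or a function that types at the text cursor.

class SignInputController:
    def __init__(self, emit):
        self.enabled = True
        self._emit = emit

    def toggle(self):
        """Flip between taking input and ignoring it."""
        self.enabled = not self.enabled

    def on_sign(self, text):
        """Forward a recognized sign's text only while enabled."""
        if self.enabled:
            self._emit(text)

output = []
ctrl = SignInputController(output.append)
ctrl.on_sign("hello")
ctrl.toggle()            # user disables input capture
ctrl.on_sign("ignored")  # dropped while disabled
print(output)  # ['hello']
```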
Implement Reach Goals, if time permits [Implement] (Week 13 and 14)
Add the reach goal of translating American English into ASL, displaying the ASL as images output to the monitor. Add the reach goal of text-to-speech for American English output and speech recognition for American English input.
Final Testing and Debugging Phase (Week 15)
Test all successfully implemented functions together, cycling through debugging to reach the final product.
*Each implementation phase will have its own testing and debugging.