Our project, Learning ASL with AI, is an interactive game-based platform that teaches users about American Sign Language and artificial intelligence. Using Google's Teachable Machine, we will train models to recognize the ASL alphabet, which we will then incorporate into different game modes. In free play mode, users sign letters to a camera and use the model's on-screen letter predictions to write words. Our second mode will be skill-based, tasking users with quickly signing letters in response to on-screen prompts to earn points. The final mode will allow users to curate their own datasets with the goal of implementing a custom language. After training a model on their unique gestures, users can test their new model's accuracy in a mode like free play. Our platform teaches users both about sign language and about several important AI topics, such as the training process and the issue of bias. By training their own models, users will gain a better understanding of how factors such as dataset size and sample quality affect the resulting model's performance. Comparing the prediction accuracy of different models will let users explore how different datasets can introduce bias and how that bias might be handled.
For this project, we will use Google's Teachable Machine to train models to recognize the ASL alphabet. We will then write our own program that uses the exported model to help teach users about ASL. The end goal is for the program to use a camera to see which signs the user is making and, based on the prediction confidence, save each letter quickly enough that the user can keep signing.
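At the core of every mode is one step: load the exported model and classify an image. Below is a minimal sketch of that step, assuming the model was exported in Teachable Machine's Keras format (a keras_model.h5 file plus a labels.txt listing one class per line, e.g. "0 A"); the file paths and the predict_letter helper name are illustrative, not final code.

    import numpy as np
    from PIL import Image, ImageOps
    from tensorflow.keras.models import load_model

    # Hypothetical paths for the Teachable Machine Keras export.
    model = load_model("keras_model.h5", compile=False)
    with open("labels.txt") as f:              # lines look like "0 A"
        labels = [line.split(maxsplit=1)[1].strip() for line in f]

    def predict_letter(image):
        """Return (letter, confidence) for a single PIL image."""
        # Teachable Machine image models expect 224x224 RGB input
        # normalized to the range [-1, 1].
        image = ImageOps.fit(image.convert("RGB"), (224, 224))
        data = (np.asarray(image, dtype=np.float32) / 127.5) - 1.0
        probs = model.predict(data[np.newaxis, ...], verbose=0)[0]
        best = int(np.argmax(probs))
        return labels[best], float(probs[best])

    letter, confidence = predict_letter(Image.open("examples/1.png"))
    print(f"Predicted {letter} with {confidence:.0%} confidence")

The later game modes can all be built on top of a helper like this one.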
To keep users engaged, we will implement several games that they can interact with.
Free Signing
Users can hold up ASL letters to the camera, and the program will detect which letter is being signed and display it on screen
Users will be able to spell out words and make sentences
The ability to clear the word and delete characters will also be implemented (a rough sketch of this loop appears below)
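Here is one way this mode could look, reusing the hypothetical predict_letter() helper from the earlier sketch and OpenCV for camera capture; the key bindings, sampling rate, and confidence threshold are placeholders.

    import cv2
    from PIL import Image

    THRESHOLD = 0.90                           # only accept confident predictions
    word, last = "", None
    cap = cv2.VideoCapture(0)                  # default webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # OpenCV delivers BGR frames; PIL expects RGB.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        letter, confidence = predict_letter(Image.fromarray(rgb))
        if confidence >= THRESHOLD and letter != last:
            word += letter                     # append the newly recognized letter
            last = letter
        cv2.putText(frame, word, (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.imshow("Free Signing", frame)
        key = cv2.waitKey(500) & 0xFF          # sample about twice per second
        if key == ord("d"):                    # delete the last character
            word, last = word[:-1], None
        elif key == ord("c"):                  # clear the whole word
            word, last = "", None
        elif key == ord("q"):                  # quit
            break
    cap.release()
    cv2.destroyAllWindows()

Only appending a letter when it differs from the previous prediction is a simple way to avoid the same held sign being recorded many times in a row.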
Sign Knowledge
The program will display letters, and the user will have to show the appropriate ASL gesture to the camera before time expires
For every correct gesture, the user earns a point
The current score and high score will be tracked (one possible shape for this loop is sketched below)
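A sketch of this game loop, again assuming the predict_letter() helper from the first sketch; the prompt count, time limit, and scoring values are all illustrative.

    import random
    import time

    import cv2
    from PIL import Image

    TIME_LIMIT = 5.0                           # seconds to match each prompt
    THRESHOLD = 0.90
    LETTERS = "ABCDEF"                         # the current prototype model only knows A-F
    score = 0
    cap = cv2.VideoCapture(0)
    for _ in range(10):                        # ten prompts per round
        target = random.choice(LETTERS)
        print(f"Sign the letter: {target}")
        deadline = time.monotonic() + TIME_LIMIT
        while time.monotonic() < deadline:
            ok, frame = cap.read()
            if not ok:
                break
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            letter, confidence = predict_letter(Image.fromarray(rgb))
            if letter == target and confidence >= THRESHOLD:
                score += 1                     # one point per correct gesture
                print("Correct!")
                break
    cap.release()
    print(f"Final score: {score}")
    # A persistent high score could be kept in a small file or database.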
Custom Signs
Users will have the ability to create their own datasets, with the goal of building a custom sign language
They can hold up gestures of their choosing and train a model on them
They will then be able to use the model and see how accurate it is (one way to measure this is sketched below)
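Once a custom model is trained and exported, its accuracy could be estimated by running it over a set of labeled test images. A sketch, assuming the test images live in test/<label>/ folders and the predict_letter() helper from the first sketch:

    from pathlib import Path
    from PIL import Image

    correct = total = 0
    for path in Path("test").glob("*/*.png"):
        expected = path.parent.name            # the folder name doubles as the label
        predicted, _ = predict_letter(Image.open(path))
        correct += predicted == expected
        total += 1
    print(f"Accuracy: {correct}/{total} = {correct / total:.0%}")

Running the same script against models trained on different datasets would give users a concrete starting point for the bias discussion described above.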
This week I worked on developing a prototype to show the basics of how the system will operate. The model is currently trained to recognize only the characters A-F. The following steps describe how to operate the program:
Upload sign images into the examples folder and name them sequentially: 1.png, 2.png, and so on.
1.png is the first sign gesture that will be interpreted, 2.png is next, and so on
Specify in the program which examples it should analyze
Run the program
If the prediction confidence is greater than 90%, the letter will be saved into the final word string
The program will output the word it thinks you signed
Here is a sample run with three example images:
File: 1.png
Character: B
File: 2.png
Character: A
File: 3.png
Character: D
Output: BAD
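For reference, the prototype's flow could be expressed roughly as follows, assuming the Keras export and the predict_letter() helper from the first sketch; the file naming scheme and the 90% cutoff come directly from the steps above.

    from pathlib import Path
    from PIL import Image

    word = ""
    n = 1
    # Process examples/1.png, examples/2.png, ... until a file is missing.
    while (path := Path("examples") / f"{n}.png").exists():
        letter, confidence = predict_letter(Image.open(path))
        print(f"File: {path.name}")
        print(f"Character: {letter}")
        if confidence > 0.90:                  # only keep confident predictions
            word += letter
        n += 1
    print(f"Output: {word}")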