A sign language interpreter that seamlessly translates sign language into English and several regional Indian languages (audio and visual output) by collecting live data and training deep learning models on it.
This project was done as part of the AI COE under the guidance of Dr. Gnaneswari G.
You can view the project demo here.
Effective communication is crucial for social interaction, education, and work in today's connected world. However, conventional communication methods can pose significant challenges for people who are deaf or hard of hearing, leaving them feeling excluded and isolated.
Recognizing the importance of inclusive communication, we present the "Real-Time Sign Language Interpretation System." This system uses deep learning to translate sign language gestures into spoken English and several regional Indian languages in real time, bridging the communication gap between the hearing and deaf communities.
Our approach seeks to enable people with hearing impairments to participate fully in society by breaking down communication barriers and promoting inclusivity, creating a more equitable and connected environment for everyone.
The problem addressed is the communication barrier faced by people who are deaf or hard of hearing, which makes it difficult for them to engage with the hearing community. Conventional techniques, such as lip reading or written communication, are frequently ineffective or insufficient. This restricts their ability to participate in social activities, work, and education.
Python
TensorFlow
Keras
OpenCV
tkinter
gtts
googletrans
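The libraries above can be composed into a simple end-to-end loop: OpenCV captures webcam frames, a trained Keras model classifies each frame as a sign, googletrans translates the recognized word into a regional language, and gTTS renders it as speech. The sketch below illustrates this pipeline under assumed names: `sign_model.h5`, the `LABELS` list, and the 64x64 input size are placeholders for the project's actual artifacts, not its real configuration.

```python
from collections import Counter, deque

# Placeholder vocabulary; the real project maps model outputs to its own labels.
LABELS = ["hello", "thanks", "yes", "no"]

def majority_vote(recent, min_count=5):
    """Smooth noisy per-frame predictions: return a label only when it
    appears at least min_count times in the recent window, else None."""
    if not recent:
        return None
    label, count = Counter(recent).most_common(1)[0]
    return label if count >= min_count else None

def main():
    # Heavy dependencies imported here so the helper above stays lightweight.
    import cv2
    import numpy as np
    from tensorflow.keras.models import load_model
    from googletrans import Translator
    from gtts import gTTS

    model = load_model("sign_model.h5")   # assumed filename
    translator = Translator()
    window = deque(maxlen=10)             # rolling window of recent predictions
    cap = cv2.VideoCapture(0)

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Assumed preprocessing: resize and normalize to the model's input shape.
        roi = cv2.resize(frame, (64, 64)) / 255.0
        probs = model.predict(np.expand_dims(roi, 0), verbose=0)[0]
        window.append(LABELS[int(probs.argmax())])
        word = majority_vote(window)
        if word:
            # Translate to Hindi as an example regional language, then speak it.
            hindi = translator.translate(word, dest="hi").text
            gTTS(text=hindi, lang="hi").save("speech.mp3")
            window.clear()
        cv2.imshow("Sign Interpreter", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    main()
```

The majority-vote step keeps a single frame's misclassification from being spoken aloud; only a gesture held steadily across the window produces output.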
Deploy the project on the cloud and create an API for it
Increase the vocabulary of our model
Incorporate a feedback mechanism to make the model more robust
Add more sign languages
Extend regional language support beyond India to other countries
Improve the clarity of the audio output
Created by Celina Thingbaijam, Sundaram Dutta Modak, Muhammad Yahya, and Anamika Singh.