This research intends to help the Filipino community, specifically the deaf and the mute, to communicate more effectively and to be better understood.
Today, several American Sign Language (ASL) translator applications have been published and are in use for the same purpose. By providing real-time translations of Filipino Sign Language (FSL) instead of the more common ASL, this application aims to bridge the gap between those who are unable to communicate verbally and the rest of the Filipino community. This can promote effective and efficient communication, accommodate diversity, and strengthen unity and inclusivity within society, thereby opening more opportunities for people with hearing and speaking impairments and increasing their independence as a community. To realize this, the group coded a prototype application using the MIT App Inventor. To test its success, the group held trials while observing the accuracy and speed of the prototype.
An experimental research design was used in the research. A prototype of the Filipino Sign Language App was built using the MIT App Inventor. The FSL App Translator works with processed images of 14 Filipino Sign Language gestures, each captured against a plain background in 30 images taken from different angles and orientations, which are used to recognize gestures and translate them into words. The interface of the app shows the percentage of confidence with which it matches a hand gesture to the sign it most closely resembles. These percentages are analyzed using Jamovi, a statistical program used to run inferential statistics to test the app's accuracy and reliability.
Sampling was done by the researchers performing the different Filipino Sign Language gestures in front of the application built with the MIT App Inventor. Pictures were taken against both a plain and a natural background to determine the app's accuracy under each condition. The hand gesture with the highest percentage is used to decide which letter or word is translated and shown in the interface of the application. After the experiment, the researchers assessed the efficacy of the app by measuring its speed and the number of correct guesses it was able to make.
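The selection rule described above amounts to taking the label with the highest confidence percentage. A minimal sketch in Python is given below; the function name, labels, and percentage values are illustrative placeholders, not the app's actual output.

```python
# Minimal sketch of the selection rule: the label with the highest
# confidence percentage is taken as the translation.
def pick_translation(confidences: dict[str, float]) -> str:
    """Return the sign label with the highest confidence percentage."""
    return max(confidences, key=confidences.get)

# Example classifier reading (placeholder values only)
reading = {"A": 12.4, "B": 3.1, "Salamat": 78.9, "Kumusta": 5.6}
print(pick_translation(reading))  # -> Salamat
```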
For data analysis, two statistical methods were employed: descriptive statistics using Microsoft Excel and inferential statistics using Jamovi. Microsoft Excel was chosen for its ease of data entry and calculation, providing mean and median summaries. Jamovi, on the other hand, facilitated a Student's t-test to draw conclusions about translation accuracy based on the background or environment in which translation takes place. The background (plain or natural) is treated as the independent variable and accuracy as the dependent variable. With the statistical software Jamovi, data input produced statistical tables for analysis.
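For reference, a minimal sketch of the same inferential comparison is shown below in Python with SciPy rather than Jamovi; the accuracy values are placeholders, not data from the study.

```python
# Independent-samples Student's t-test comparing accuracy across backgrounds.
from scipy import stats

plain_bg   = [85.0, 90.2, 78.5, 88.1, 92.3]   # accuracy (%) on a plain background
natural_bg = [83.4, 87.9, 80.1, 86.0, 90.5]   # accuracy (%) on a natural background

t_stat, p_value = stats.ttest_ind(plain_bg, natural_bg)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

# A p-value above 0.05 is read as no statistically significant difference in
# accuracy between the two backgrounds.
```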
The research study, which involved no participants, primarily addressed ethical considerations through results communication and research authenticity. Integrity and honesty were observed throughout the study, and the researchers ensured that any form of plagiarism was avoided through plagiarism checker tools and proper citation. Furthermore, originality in coding the application was ensured under the guidance of the MIT App Inventor mentor. During data collection, two to three researchers were present to verify the accuracy of the translator application's experimental results and to prevent any alterations.
Descriptive Results: The mean was used to summarize the descriptive results of the research, showing the average percentage readings of the application in the different backgrounds and allowing the values gathered from images with a plain background to be compared with those from a natural background. The data show a varying range of values across different parts of the collection, and the overall result is presented in the second figure. A reading of 0 means that the letter or word was not recognized by the application in that trial, which may be attributed to the positioning of the sign. There is only a very small difference between the averages of the two variables, from which it can be inferred that the application's translation shows little to no difference in capability in either setup.
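A small sketch of this descriptive summary is shown below, with unrecognized readings recorded as 0; the values are placeholders only, not the study's measurements.

```python
# Mean and median accuracy per background, with unrecognized readings as 0.
from statistics import mean, median

plain_bg   = [85.0, 0.0, 78.5, 88.1, 92.3]
natural_bg = [83.4, 87.9, 0.0, 86.0, 90.5]

print(f"plain:   mean={mean(plain_bg):.1f}%  median={median(plain_bg):.1f}%")
print(f"natural: mean={mean(natural_bg):.1f}%  median={median(natural_bg):.1f}%")
```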
Inferential Results: The p-value obtained by the research group is higher than 0.05. Since a p-value above 0.05 indicates no statistically significant difference between the variables, the results suggest that the application's ability to translate gestures remains the same in both natural and plain backgrounds. This indicates that the app can recognize and translate sign language against either a plain or a natural background, and that its translating capability is comparable in both.
Speed and Accuracy of Translation
According to the results, the speed and accuracy of translation for images with a natural background and with a plain background show no significant difference. This implies that the application can perform real-time translations regardless of the environment in which it is used. However, other factors can affect translation accuracy and speed, such as the strength of the Wi-Fi connection and the execution of the sign language gestures.
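The speed measurement can be illustrated with the short sketch below; `classify_image` is a hypothetical stand-in for the app's recognition step, not an actual MIT App Inventor call.

```python
# Hedged sketch of how translation latency could be measured per test image.
import time

def time_translation(classify_image, image):
    """Return the predicted label and the time taken to produce it."""
    start = time.perf_counter()
    label = classify_image(image)          # hypothetical recognition step
    elapsed = time.perf_counter() - start  # seconds
    return label, elapsed
```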
Accurate and Comprehensible Interpretation of Sign Language
The machine learning model of the application is considered to achieve and ensure a precise interpretation, which is verified by recording the percentage of correct interpretations. In creating the application model, a teachable machine-learning tool was used to which sign language images were uploaded: thirty (30) pictures with a plain background and thirty (30) pictures with a more complex background for each letter and word.
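As an illustration of this image set, the sketch below counts images per class and background before upload; the folder layout is an assumption for illustration, not the layout required by the teachable tool.

```python
# Assumed layout:
#   dataset/plain/<letter_or_word>/*.jpg     (30 images per class)
#   dataset/natural/<letter_or_word>/*.jpg   (30 images per class)
from pathlib import Path

def count_images(root: str = "dataset") -> dict[tuple[str, str], int]:
    """Count images per (background, class) pair."""
    counts = {}
    for bg_dir in Path(root).iterdir():
        if not bg_dir.is_dir():
            continue
        for label_dir in bg_dir.iterdir():
            if label_dir.is_dir():
                counts[(bg_dir.name, label_dir.name)] = len(list(label_dir.glob("*.jpg")))
    return counts
```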
Difference between a Human Translator and an Application Translator
Unlike AI applications, human translators can bring cultural and social competence to translation, including an understanding of the context of a situation informed by human emotions and feelings, which helps bridge the gap between the hearing community and the deaf and mute community. However, human translators are not always readily available, and in the case of Filipino Sign Language (FSL), only a few are able to interpret this mode of communication. With this application, users can communicate independently in such situations without relying on a human interpreter.
Creation of Model Using MIT Teachable Application Tool