Assistive technology is necessary to establish two-way communication among deaf, mute, and normal people when none of them knows a sign language. In this paper, we propose a novel communication aid system that helps deaf and mute people communicate independently without using sign language. When no sign language is shared, deaf and mute people and normal people usually communicate through visual references and simple sentences. This small talk is mainly based on contexts or keywords that can be delineated visually. These keywords can further be classified by syllables for vibrotactile output, so that the speech is more easily comprehended. Our proposed system therefore emphasizes both visual and vibrotactile feedback during communication. We have developed an Android app that follows a multimodal approach: it converts speech to visual contexts and vibrations and, conversely, converts contexts and vibrations back to speech. To validate the system, it was evaluated with six deaf and mute participants, two normal persons, and two examiners. Finally, the results of our experiments reflect the effectiveness and usability of the system.
Published in: 2019 International Seminar on Application for Technology of Information and Communication (iSemantic)
Date of Conference: 21-22 Sept. 2019
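As a rough illustration of the syllable-based vibrotactile idea described in the abstract, the Kotlin sketch below emits one vibration pulse per syllable of a keyword using Android's Vibrator service. The vowel-group syllable counter and the pulse timings are assumptions made for illustration, not the classification actually used in the paper.

```kotlin
import android.content.Context
import android.os.Build
import android.os.VibrationEffect
import android.os.Vibrator

// Illustrative sketch only: emits one short pulse per syllable of a keyword,
// which is one plausible reading of the syllable-based vibrotactile output
// described in the abstract. The syllable counter is a naive vowel-group
// heuristic, not the paper's actual classifier.
class VibrotactileKeyword(private val context: Context) {

    // Rough English syllable estimate: count groups of consecutive vowels.
    private fun estimateSyllables(word: String): Int {
        val vowels = "aeiouy"
        var count = 0
        var prevWasVowel = false
        for (ch in word.lowercase()) {
            val isVowel = ch in vowels
            if (isVowel && !prevWasVowel) count++
            prevWasVowel = isVowel
        }
        return maxOf(count, 1)
    }

    // Build a pattern of [gap, pulse, gap, pulse, ...] in milliseconds:
    // one 150 ms pulse per estimated syllable, each preceded by a 100 ms gap.
    fun vibrateForKeyword(keyword: String) {
        val syllables = estimateSyllables(keyword)
        val pattern = LongArray(syllables * 2)
        for (i in 0 until syllables) {
            pattern[i * 2] = 100L      // gap before pulse
            pattern[i * 2 + 1] = 150L  // pulse duration
        }
        val vibrator = context.getSystemService(Context.VIBRATOR_SERVICE) as Vibrator
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
            vibrator.vibrate(VibrationEffect.createWaveform(pattern, -1))
        } else {
            @Suppress("DEPRECATION")
            vibrator.vibrate(pattern, -1)
        }
    }
}
```

Under this heuristic a keyword such as "hospital" would be estimated at three syllables and produce three short pulses, conveying the rhythm of the word rather than its content.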
With the rapid advancement of mobile technologies, we now use mobile applications in every aspect of our lives. Nowadays, when we find useful information on business cards, newspapers, books, flyers, and so on, we usually do not write it down; rather, we capture an image with a mobile device. However, the text obtained from these images needs to be processed further before it can be used directly. In this paper, we present an Android application framework that can process the text obtained from an image to get contact information from business cards as well as event information from magazines, posters, or flyers. The Google Cloud Vision API is used to retrieve the text from the captured image, and OpenNLP is used to extract useful information from the obtained text. The experimental results show that our application is effective and efficient in terms of accuracy as well as processing time.
Published in: International Journal of Computer Theory and Engineering vol. 10, no. 3, pp. 77-83, 2018.
The authors are with the Computer Science and Engineering Department, Islamic University of Technology (IUT), Bangladesh (e-mail: kushol@iut-dhaka.edu, imamulahsan@iut-dhaka.edu, nishatraihan@iut-dhaka.edu).
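As a rough sketch of the information-extraction step described above, the Kotlin snippet below runs OpenNLP's pre-trained tokenizer and person-name finder over the raw OCR text returned by the Cloud Vision API. The model file names (en-token.bin, en-ner-person.bin) are the standard OpenNLP 1.5 models and are assumptions for illustration; the paper does not specify which models were used.

```kotlin
import java.io.FileInputStream
import opennlp.tools.namefind.NameFinderME
import opennlp.tools.namefind.TokenNameFinderModel
import opennlp.tools.tokenize.TokenizerME
import opennlp.tools.tokenize.TokenizerModel
import opennlp.tools.util.Span

// Illustrative sketch only: once the Cloud Vision API has returned the raw OCR
// text, OpenNLP's pre-trained models can pull out candidate person names.
// The model file paths below are assumptions, not necessarily what the paper used.
fun extractPersonNames(ocrText: String): List<String> {
    val tokenizer = FileInputStream("en-token.bin").use { stream ->
        TokenizerME(TokenizerModel(stream))
    }
    val nameFinder = FileInputStream("en-ner-person.bin").use { stream ->
        NameFinderME(TokenNameFinderModel(stream))
    }

    val tokens: Array<String> = tokenizer.tokenize(ocrText)
    val spans: Array<Span> = nameFinder.find(tokens)

    // Convert token spans back into surface strings, e.g. "John Smith".
    return Span.spansToStrings(spans, tokens).toList()
}
```

Similar pre-trained OpenNLP models exist for locations, dates, and times, which is the kind of output that would feed the event-extraction use case.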
It is a car racing game where the user has to get away from a cop car. Inspired by the Need for Speed racing games, we tried to develop a simpler version. There is just one level, and the game ends when the user reaches the safe house. Some of the 3D models, such as the cars and trees, were created in Blender, but we also used several free Unity 3D assets to compensate for our limited project time. Finally, coins were added that the user can collect to score points.
Rural people in Bangladesh have very little knowledge of the internet and smartphones. So, we tried to develop a system that lets users get medical services right from home, designed so that they can use it easily.
An Android application for the students of Prime University where they can easily order their meals. The menus support real-time updates, and students can place orders by providing the relevant information.
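The project description does not say how the real-time menu updates are implemented; as a purely hypothetical sketch, the Kotlin snippet below assumes a Firebase Realtime Database node named "menu" that maps item names to prices and notifies the app whenever the menu changes.

```kotlin
import com.google.firebase.database.DataSnapshot
import com.google.firebase.database.DatabaseError
import com.google.firebase.database.FirebaseDatabase
import com.google.firebase.database.ValueEventListener

// Hypothetical sketch: assumes a Firebase Realtime Database backend with a
// "menu" node holding item-name -> price entries, purely for illustration.
fun listenForMenuUpdates(onMenuChanged: (Map<String, Double>) -> Unit) {
    val menuRef = FirebaseDatabase.getInstance().getReference("menu")
    menuRef.addValueEventListener(object : ValueEventListener {
        override fun onDataChange(snapshot: DataSnapshot) {
            // Rebuild the menu whenever any item is added, removed, or repriced.
            val menu = snapshot.children.associate { item ->
                (item.key ?: "") to (item.getValue(Double::class.java) ?: 0.0)
            }
            onMenuChanged(menu)
        }

        override fun onCancelled(error: DatabaseError) {
            // In a real app this would surface the error to the UI.
        }
    })
}
```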
Developed a real-time object detection Android application. The detected objects are spoken aloud by the device using the Google Text-to-Speech API.
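A minimal Kotlin sketch of the speak-out step is shown below. It only covers feeding detected labels into Android's TextToSpeech engine; the object detector itself is not shown, since the description does not name the detection library used.

```kotlin
import android.content.Context
import android.speech.tts.TextToSpeech
import java.util.Locale

// Minimal sketch of the "speak out detected objects" step. The detector that
// produces the labels is assumed to exist elsewhere and is not shown here.
class DetectionAnnouncer(context: Context) : TextToSpeech.OnInitListener {

    private var ready = false
    private val tts: TextToSpeech = TextToSpeech(context, this)

    override fun onInit(status: Int) {
        if (status == TextToSpeech.SUCCESS) {
            tts.setLanguage(Locale.US)
            ready = true
        }
    }

    // Call this with the labels produced by the detector for each frame.
    fun announce(labels: List<String>) {
        if (!ready || labels.isEmpty()) return
        val sentence = "I can see " + labels.joinToString(", ")
        // QUEUE_FLUSH drops any pending utterance so announcements stay current.
        tts.speak(sentence, TextToSpeech.QUEUE_FLUSH, null, "detection")
    }

    fun shutdown() = tts.shutdown()
}
```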