Speech and gestures are the expressions most commonly used in communication between human beings, and learning to use them begins in the first years of life. In human communication, the use of speech and gestures is closely coordinated, and research is in progress to integrate gesture as an expressive input in Human-Computer Interaction (HCI). Machine gesture and sign language recognition is the recognition of gestures and sign language by computers. A number of hardware techniques are used for gathering information about body positioning; these are typically either image-based (using cameras, moving lights, etc.) or device-based (using instrumented gloves, position trackers, etc.), although hybrid approaches are beginning to appear. Acquiring the data, however, is only the first step. The second step, recognizing the sign or gesture once it has been captured, is much more challenging, especially in a continuous stream, and is currently the focus of research. This paper analyses data from an instrumented data glove for use in recognizing selected signs and gestures. A system is developed for recognizing these signs and converting them into speech. The results show that, despite the noise and accuracy constraints of the equipment, reasonable accuracy rates have been achieved.

Many researchers are working in the field of gesture recognition. Published work includes a recent survey of the field; studies of gesture recognition for human-robot interaction and human-robot symbiosis; and a novel "signal-level" perspective that explores prosodic phenomena of spontaneous gesture and speech co-production, together with a computational framework for improving continuous gesture recognition based on two phenomena capturing voluntary (co-articulation) and involuntary (physiological) contributions of prosodic synchronization. Other work discusses different categories of gesture recognition, and Markov models have been applied to gesture recognition.
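The glove-to-speech pipeline described above can be illustrated with a minimal sketch. The sensor count, templates, and distance threshold below are illustrative assumptions rather than the actual system: a glove reading (here, five normalized flex-sensor values) is matched to the nearest stored sign template, and the matched label is the text that would be passed to a text-to-speech engine.

```python
import math

# Hypothetical sign templates: normalized flex-sensor readings (5 sensors,
# 0.0 = finger straight, 1.0 = fully bent). A real system would calibrate
# these per user and handle noise in the sensor stream.
TEMPLATES = {
    "hello":     [0.1, 0.1, 0.1, 0.1, 0.1],   # open hand
    "yes":       [0.9, 0.9, 0.9, 0.9, 0.9],   # closed fist
    "thank you": [0.1, 0.9, 0.9, 0.9, 0.1],   # thumb and pinky extended
}

def recognize(reading, threshold=0.5):
    """Return the closest sign label, or None if no template is near enough."""
    best_label, best_dist = None, float("inf")
    for label, template in TEMPLATES.items():
        dist = math.dist(reading, template)  # Euclidean distance
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist < threshold else None

def vocalize(reading):
    """Map a glove reading to the text that would be sent to a TTS engine."""
    label = recognize(reading)
    return label if label is not None else "<unrecognized>"
```

For example, `vocalize([0.12, 0.88, 0.91, 0.87, 0.09])` matches the "thank you" template, while a reading far from all templates is rejected rather than vocalized, which is one simple way to cope with noisy sensor data.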
A comprehensive framework has been presented that addresses two important problems in gesture recognition systems, and an augmented-reality tool for vision-based hand gesture recognition in a camera-projector system has been described. Further work addresses a methodology using a neighborhood-search algorithm for tuning the parameters of a gesture recognition system, and introduces a novel method for recognizing and estimating the scale of time-varying human gestures.
This Gesture Vocalizer hardware project received the 1st Runner-Up crest and certificate at the Premier University IT Fest held on 1st May 2019.
Azizul Haque
MD. Shahnewaz Ahsan