Video self-modeling (VSM) is a behavioral intervention technique in which a learner models a target behavior by watching a video of himself or herself performing it. In speech-language pathology, VSM has been used successfully for language treatment in children with autism and for treating individuals who stutter. Technical challenges remain, however, in creating VSM content that depicts behaviors the learner has not yet exhibited. In this work, we propose a novel system that synthesizes new video sequences for VSM treatment of patients with voice disorders. Starting from a video recording of a voice-disorder patient, the proposed system replaces the hoarse speech with clean, healthier speech that resembles the patient's original voice. The replacement speech is synthesized either with a text-to-speech engine or by selecting from a database of clean speech samples using a voice similarity metric. To realign the replacement speech with the original video, a novel audiovisual algorithm that combines audio segmentation with lip-state detection identifies corresponding time markers in the audio and video tracks. Lip synchronization is then achieved with an adaptive video re-sampling scheme that minimizes motion jitter while preserving spatial sharpness. Experimental evaluations on a dataset of 31 subjects demonstrate the effectiveness of the proposed techniques.
- Shen, J., C. Ti, A. Raghunathan, S.-C. Cheung, and R. Patel. 2015. Automatic Video Self Modeling for Voice Disorder. Multimedia Tools and Applications, vol. 74, no. 14, pp. 5329–5351.
- Shen, J., A. Raghunathan, S.-C. Cheung, and R. Patel. 2011. Automatic Content Generation for Video Self Modeling. Proceedings of the IEEE International Conference on Multimedia and Expo (ICME 2011), July 11–15, 2011, pp. 1–6.
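The realignment step described in the abstract can be illustrated with a minimal sketch: given corresponding time markers detected on the original video track and on the replacement audio track, a piecewise-linear time warp maps each output frame time back to a source time on the original video, from which frames can then be re-sampled. This is only an illustrative assumption of how such marker-based warping might look; the function name, interface, and linear interpolation between markers are hypothetical and not taken from the papers, which use a more elaborate adaptive scheme.

```python
from bisect import bisect_right

def remap_frame_times(video_markers, audio_markers, n_frames, fps):
    """Hypothetical piecewise-linear time warp for lip synchronization.

    video_markers: times (seconds) of detected lip-state events in the
                   original video track.
    audio_markers: times (seconds) of the corresponding segment boundaries
                   in the replacement audio track.
    Returns, for each of n_frames output frames on the replacement-audio
    timeline, the source time on the original video timeline to sample.
    """
    assert len(video_markers) == len(audio_markers) >= 2
    src_times = []
    for i in range(n_frames):
        t = i / fps  # output frame time on the replacement-audio timeline
        # Locate the marker segment [a0, a1] containing t.
        k = min(max(bisect_right(audio_markers, t), 1),
                len(audio_markers) - 1)
        a0, a1 = audio_markers[k - 1], audio_markers[k]
        v0, v1 = video_markers[k - 1], video_markers[k]
        # Linearly interpolate the corresponding video-time segment.
        alpha = (t - a0) / (a1 - a0) if a1 > a0 else 0.0
        src_times.append(v0 + alpha * (v1 - v0))
    return src_times
```

Each returned source time would then select (or blend) nearby original frames; an adaptive scheme, as in the papers, additionally chooses the re-sampling so as to limit motion jitter and blur rather than interpolating naively.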