DeepArt
About the project
Deep learning for articulatory-based disordered speech recognition.
People with motor control problems who are unable to use keyboard- and touch-driven interfaces to access the digital world could benefit hugely from automatic speech recognition. Unfortunately, these same people often have co-occurring speech disorders (dysarthria) that make their speech hard to recognise. The University of Sheffield has extensive experience in developing speech technology for disordered speech; however, the task is made challenging by a shortage of training data and high inter-speaker variability.
This Google award will fund a researcher, working with Dr Heidi Christensen and Dr Jon Barker in CATCH/Computer Science, to improve speech recognition for this group of people by exploring the use of articulatory information and deep learning techniques.