Research Project

Background

Aphasia is a condition resulting from brain damage that can cause difficulty in the production or comprehension of speech. There are several subtypes of aphasia, including Broca’s aphasia (difficulty producing speech but relatively intact comprehension) and Wernicke’s aphasia (relatively fluent speech but difficulty with comprehension). We propose to design a computer model that can classify aphasic speech in the auditory and text domains.
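
As a rough illustration of what classification in the text domain could look like, the sketch below computes a few interpretable lexical features from transcripts and fits a baseline classifier. The transcripts, labels, feature choices, and use of scikit-learn are illustrative assumptions, not part of the proposal itself.

```python
# Minimal sketch (not the proposed system): simple lexical features from
# transcripts feeding a baseline classifier. All data here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

def lexical_features(transcript: str) -> np.ndarray:
    """Return a few interpretable text features that are often altered in aphasic speech."""
    tokens = transcript.lower().split()
    n = max(len(tokens), 1)
    type_token_ratio = len(set(tokens)) / n              # lexical diversity
    mean_word_length = sum(len(t) for t in tokens) / n
    function_words = {"the", "a", "an", "and", "of", "to", "in", "is"}
    function_word_ratio = sum(t in function_words for t in tokens) / n  # often reduced in agrammatic speech
    return np.array([type_token_ratio, mean_word_length, function_word_ratio])

# Hypothetical example transcripts with binary labels (1 = aphasia, 0 = control).
X = np.stack([lexical_features("the boy is kicking the ball to the girl"),
              lexical_features("boy ... ball ... kick")])
y = np.array([0, 1])

clf = LogisticRegression().fit(X, y)
print(clf.predict(X))
```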

Research question

What are the fundamental patterns in aphasic speech that allow humans to subjectively detect the presence of aphasia? How important is each characteristic of aphasic speech to the diagnosis of aphasia? Can we uncover novel characteristics of aphasic speech that could inform future diagnoses? If we build a model that can (to some degree) “diagnose” aphasia, we can examine the model’s learned weights to assess the relative importance of different speech characteristics in diagnosis.
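
The sketch below illustrates the last point under the assumption of a linear model trained on named, interpretable speech features: the learned weights can be ranked by magnitude to estimate each feature’s relative contribution to the “diagnosis”. The feature names and data are hypothetical placeholders.

```python
# Minimal sketch, assuming a linear model over interpretable speech features;
# the feature names and data below are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["speech_rate", "pause_ratio", "type_token_ratio", "mean_utterance_length"]

rng = np.random.default_rng(0)
X = rng.normal(size=(40, len(feature_names)))  # placeholder feature matrix
y = np.array([0, 1] * 20)                      # placeholder labels (1 = aphasia)

clf = LogisticRegression().fit(X, y)

# Rank features by the magnitude of their learned weight.
for name, w in sorted(zip(feature_names, clf.coef_[0]), key=lambda p: -abs(p[1])):
    print(f"{name:24s} weight = {w:+.3f}")
```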

Goal

Build a model that takes video/audio data from patients with aphasia and from controls without aphasia as input, and that can determine 1) whether a given video/audio input likely comes from someone with aphasia and, possibly, 2) if so, which subtype of aphasia the patient is likely to have.
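
As one possible shape for such a model, the sketch below summarizes each recording with time-averaged MFCCs and trains a binary aphasia/control classifier. The file paths, labels, and choice of librosa and scikit-learn are assumptions for illustration, not a commitment of the proposal.

```python
# Minimal sketch of the auditory branch, not the final model: mean-MFCC
# summaries of recordings feeding a binary classifier. The paths and labels
# are hypothetical placeholders.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def mfcc_summary(path: str, sr: int = 16000, n_mfcc: int = 13) -> np.ndarray:
    """Load an audio file and return its time-averaged MFCC vector."""
    signal, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Hypothetical recordings and labels (1 = aphasia, 0 = control).
paths = ["patient_01.wav", "patient_02.wav", "control_01.wav", "control_02.wav"]
labels = np.array([1, 1, 0, 0])

X = np.stack([mfcc_summary(p) for p in paths])
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)

# Predict whether a new recording likely comes from someone with aphasia.
print(clf.predict(mfcc_summary("new_recording.wav").reshape(1, -1)))
```

Replacing the binary labels with subtype categories (e.g., Broca’s vs. Wernicke’s) would turn the same structure into the multi-class problem described in 2).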

A schematic representation of the research project