Decoding Speech from EEG Recordings
Team Members:
Jeffrey C. Liu
Anthony Vasquez
Sangjoon An
Mentors:
Dr. Gert Cauwenberghs
Abhinav Uppal
Min Suk Lee
Abstract
For patients who are unable to speak, a brain-computer interface that translates inner speech would dramatically improve quality of life. As a starting point, this project focused on decoding heard speech from dry-electrode EEG recordings, since inner and heard speech share functional pathways. A tone-based approach used the auditory steady-state response (ASSR) elicited by white noise amplitude-modulated at specific frequencies; an ASSR was successfully detected in response to a single-tone auditory stimulus, but only in long, continuous recordings. A word-based approach classifying heard "Yes" and "No" sounds with logistic regression and support vector machines yielded accuracy only around chance level. Future directions include applying algorithms that model temporal dependence between features, classifying vowel-consonant-vowel combinations, and designing a dry-electrode EEG system that minimizes noise.
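To make the tone-based approach concrete, the following is a minimal sketch, not the project's actual pipeline, of how an ASSR can be checked by comparing spectral power at the stimulus modulation frequency against neighboring frequency bins. The sampling rate, 40 Hz modulation frequency, segment length, and synthetic signal are all hypothetical placeholders.

```python
import numpy as np

fs = 250.0        # assumed EEG sampling rate (Hz)
f_mod = 40.0      # assumed modulation frequency of the white-noise stimulus (Hz)
duration = 60.0   # long, continuous segment (short epochs gave no clear ASSR)

# Placeholder signal: broadband noise plus a small 40 Hz component standing in for an ASSR.
t = np.arange(0, duration, 1 / fs)
eeg = np.random.randn(t.size) + 0.2 * np.sin(2 * np.pi * f_mod * t)

# Power spectrum of the windowed segment.
spectrum = np.abs(np.fft.rfft(eeg * np.hanning(t.size))) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Compare power in the bin at f_mod to the mean of nearby bins (a simple SNR measure).
target = np.argmin(np.abs(freqs - f_mod))
neighbors = np.r_[target - 10:target - 2, target + 3:target + 11]
snr = spectrum[target] / spectrum[neighbors].mean()
print(f"Power ratio at {f_mod:.0f} Hz vs. neighboring bins: {snr:.2f}")
```

A ratio well above 1 for long segments, but not for short ones, would match the behavior described above.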
ABET Addendum
Jeffrey C. Liu
Sangjoon An
Anthony Vasquez
The Team