Ongoing participant recruitment - Adults who stutter needed!

Case study - articulatory movement in stuttering

This study examines the articulatory characteristics of the dysfluent speech of an adult male who stutters using real-time MRI. Results show that consonant gestures during dysfluency were made with extreme constriction degree, and that the release of the consonant constriction was delayed. The coproduction of vowel gestures was not disrupted during dysfluency. Perceptual classifications of dysfluencies as blocks, prolongations, or repetitions did not necessarily correspond to distinct patterns of articulatory movement.

(Check out our publication in Journal of Communication Disorders)

(Click here to check out this research summary written by my colleague Charlotte Wiltshire and a video we did together)

Collaborators: Charlotte Wiltshire (LMU), Kate Watkins (Oxford), Mark Chiew (Oxford), Louis Goldstein (USC)


Compensation mechanism in post-glossectomy speech

Individuals who have undergone treatment for oral cancer often exhibit compensatory behavior in consonant production. This study reveals that the compensatory strategies used to produce target alveolar segments vary systematically as a function of the target's manner of articulation. When target constriction degree at a particular constriction location cannot be preserved, individuals may leverage their ability to finely modulate constriction degree at multiple constriction locations along the vocal tract.

(Check out our publication in JASA Express Letters)

Collaborators: Christina Hagedorn (CUNY), Asterios Toutios, Uttam Sinha (USC Keck School of Medicine), Louis Goldstein (USC), Shri Narayanan (USC)


Vocal tract anatomy and articulatory variation

This study presents a novel method to quantify tongue shapes for American English /ɹ/ based on the location and length of the palatal constriction. Measurements of the speakers' vocal tract anatomy were then used to predict these tongue shape measurements. A weak relationship was identified between /ɹ/ tongue shapes and two anatomical factors: oral cavity length and mandibular inclination.
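To illustrate the kind of analysis described above, here is a minimal sketch of predicting a tongue-shape measure from anatomical predictors with ordinary least squares. The function name, variable names, and the synthetic numbers are all hypothetical, not taken from the study; the actual analysis may use different measures and statistical tools.

```python
import numpy as np

def fit_ols(X, y):
    """Fit y = b0 + X @ b by ordinary least squares.

    X : (n, p) array of predictors, e.g. columns for oral cavity
        length and mandibular inclination (illustrative choice).
    y : (n,) array of the tongue-shape measure to predict.
    Returns the coefficient vector (intercept first) and R^2.
    """
    Xd = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    r2 = 1.0 - resid.var() / y.var()             # variance explained
    return beta, r2

# Hypothetical usage with made-up numbers (not real data):
# each row is one speaker, columns = [oral cavity length, mandibular inclination]
anatomy = np.array([[10.0, 1.0], [12.0, 2.0], [11.0, 3.0],
                    [13.0, 1.5], [9.0, 2.5]])
constriction_length = np.array([8.1, 9.0, 8.4, 9.5, 7.8])
beta, r2 = fit_ols(anatomy, constriction_length)
```

A low R² in such a fit would correspond to the "weak relationship" reported above; the sketch only shows the general shape of the regression step.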

(Manuscript in preparation; check out our poster at ASA Seattle 2021)

Collaborators: Haley Hsu (USC), Louis Goldstein (USC), Shri Narayanan (USC)


Self-organization in speech perception

This study uses a biologically inspired, unsupervised machine learning technique, the Self-Organizing Map, to model the acoustic similarities between phoneme pairs that differ in only one feature dimension. The learning results were then compared to the perceptual similarities of the same phoneme pairs in order to examine the relative salience of the differing features in determining acoustic similarity and perceptual similarity.
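For readers unfamiliar with the technique, the following is a minimal, self-contained sketch of a Self-Organizing Map in numpy. It is purely illustrative: the function names, grid size, and synthetic two-cluster inputs are my own assumptions, not the features or training setup used in the study.

```python
import numpy as np

def train_som(data, grid_shape=(8, 8), n_iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Train a Self-Organizing Map.

    Each node on a 2-D grid holds a weight vector in input space. On
    every step, the node closest to a random input (the best-matching
    unit, BMU) and its grid neighbors are pulled toward that input, so
    similar inputs end up mapped to nearby grid positions.
    """
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    weights = rng.random((rows, cols, data.shape[1]))
    # (row, col) coordinate of every node, for the neighborhood kernel
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1)
    for t in range(n_iters):
        x = data[rng.integers(len(data))]
        # find the BMU: the node whose weight vector is closest to x
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), grid_shape)
        # linearly decay learning rate and neighborhood radius
        frac = t / n_iters
        lr = lr0 * (1.0 - frac)
        sigma = sigma0 * (1.0 - frac) + 0.5
        # Gaussian neighborhood on the grid, centered on the BMU
        grid_dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
        h = np.exp(-grid_dist2 / (2.0 * sigma ** 2))[..., None]
        weights += lr * h * (x - weights)
    return weights

def bmu_of(weights, x):
    """Return the grid coordinate of the best-matching unit for x."""
    dists = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(dists), weights.shape[:2])

# Hypothetical usage: two synthetic "acoustic feature" clusters standing
# in for two phoneme categories; after training, each cluster's mean
# should map to its own region of the grid.
rng = np.random.default_rng(1)
cat_a = rng.normal(0.2, 0.05, size=(50, 4))
cat_b = rng.normal(0.8, 0.05, size=(50, 4))
som = train_som(np.vstack([cat_a, cat_b]), grid_shape=(6, 6), n_iters=1500)
```

In a similarity analysis like the one described above, the distance between two phonemes' map positions (or between their activation patterns over the grid) can serve as the model's acoustic similarity measure.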

(Check out my poster at LSA 2020)