INTERNSHIP OFFER #1 (2024) Neural and perceptual coding of vowels in normal and impaired hearing
Robust coding of the spectral shape of sound by the auditory system is crucial for accurate speech perception, in particular for vowels. Although recent computational models of the peripheral auditory system suggest what neural information available at the level of the auditory nerve could code for spectral shape (such as formant positions in a vowel), the contribution of these different neural indices to perception has never been studied systematically. This internship has two aims. The first is to design psychophysical experiments that quantitatively assess the impact on timbre perception of specific manipulations of vowel signals, informed by model simulations, and thus to identify the neural cues that are effectively exploited in perception. The second is to evaluate the extent to which cochlear synaptopathy, a “hidden” form of hearing loss at the level of the auditory nerve that cannot be detected with an audiogram, affects this coding and could therefore explain differences in perceptual sensitivity to spectral-shape modifications across normal-hearing individuals. This internship will take place in the STMS Lab (Sciences et Technologies de la Musique et du Son) at Ircam (Institut de Recherche et Coordination Acoustique/Musique) in Paris.
INTERNSHIP OFFER #2 (2024) Psychophysical study of across-frequency interactions in the central auditory system
Current functional models of the auditory system process the spectral and temporal dimensions of sounds independently: cochlear filters first decompose the signal into different frequency bands, whose temporal information is then processed independently through modulation filtering. However, animal and human vocalizations contain spectro-temporally oriented patterns of energy (e.g., formant transitions), and electrophysiological studies have identified central auditory neurons that are sensitive to specific spectro-temporal directions (i.e., non-separable). This suggests that the auditory system has dedicated machinery to integrate and combine the temporal information present in different frequency channels, so as to form auditory objects and allow robust speech perception. Yet current knowledge of how this mechanism operates at a perceptual level remains scarce. The main objective of this internship will be to design and conduct psychophysical experiments based on spectro-temporally modulated signals (https://doi.org/10.1177/233121652097802) to better understand the characteristics of this central integration process, and thereby guide its computational implementation in current models. Furthermore, we will examine whether interindividual variability in across-frequency integration capacities could help explain why normal-hearing individuals vary greatly in their ability to understand speech in noise. This internship will take place in the STMS Lab (Sciences et Technologies de la Musique et du Son) at Ircam (Institut de Recherche et Coordination Acoustique/Musique) in Paris.
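For illustration, the "separable" two-stage architecture described above (frequency decomposition followed by independent modulation filtering of each channel) can be sketched as follows. This is a minimal toy sketch, not the lab's actual model: the filter types (Butterworth bandpass rather than gammatone), center frequencies, bandwidths, and modulation passband are all illustrative choices, not calibrated parameters.

```python
# Minimal sketch of a separable spectro-temporal decomposition:
# Stage 1 splits the signal into frequency bands ("cochlear" filters);
# Stage 2 filters each band's temporal envelope independently.
# Because Stage 2 never mixes channels, the model cannot represent
# spectro-temporally oriented (non-separable) patterns by construction.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 16000                           # sampling rate (Hz)
t = np.arange(0, 0.5, 1 / fs)
x = np.random.randn(t.size)          # stand-in for a speech signal

# Stage 1: decomposition into a few illustrative frequency bands.
center_freqs = [500, 1000, 2000, 4000]  # Hz
bands = []
for fc in center_freqs:
    lo, hi = fc / 2**0.25, fc * 2**0.25   # roughly half-octave band
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    bands.append(sosfiltfilt(sos, x))

# Stage 2: per-channel envelope extraction and modulation filtering,
# applied to each channel in isolation (the separability assumption).
mod_sos = butter(2, [2, 16], btype="bandpass", fs=fs, output="sos")
filtered_envs = []
for band in bands:
    env = np.abs(hilbert(band))           # temporal envelope
    filtered_envs.append(sosfiltfilt(mod_sos, env))

print(len(filtered_envs), filtered_envs[0].shape)
```

Modeling the across-frequency integration studied in this internship would require an additional stage that combines envelopes across channels (e.g., filters tuned to joint spectro-temporal direction), which this separable sketch deliberately lacks.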
Even if we do not have any offers posted at the moment, please do not hesitate to contact me if you are interested in working on a research project in our lab. Please have a look at some of our current projects to get an idea of the questions we are interested in.