Online discussion and reading group
Virtual pre-workshop discussions & reading group
We normally meet at 9 AM Pacific, 12 noon Eastern, and 6 PM Central European time on Thursdays. (The time may change depending on the speaker's availability.) Send email to coghear@gmail.com if you wish to be added to the mailing list.
We are back for a few more topics. Please join us!
Future Talks
Talks are listed from the present going forward.
Past Talks
2 November 2023 at 1PM Eastern - Panel discussion: How do we measure synaptopathy in humans?
Andy Oxenham (Moderator) - Predicting the Perceptual Consequences of Hidden Hearing Loss
Chris Plack - Reliability and interrelations of seven proxy measures of cochlear synaptopathy
Magdalena Wojtczak - The search for correlates of age-related cochlear synaptopathy: Measures of temporal envelope processing and spatial release from speech-on-speech masking
Sarah Verhulst - Enhancing the sensitivity of the envelope-following response for cochlear synaptopathy screening in humans: The role of stimulus envelope
Enrique Alejandro López Poveda - Perception of stochastically undersampled sound waveforms: a model of auditory deafferentation
Panel video and transcript
21 September 2023 - Florencia Assaneo (UNAM, Mexico) - What is an oscillator (and what is not)?
5 May 2022 - Daphne Bavelier (Geneva) - How do different forms of attention interact in complex environments?
21 April 2022 - Virginie van Wassenhove (Cognitive Neuroimaging Unit, France) - What is integration?
7 April 2022 - Gopala Anumanchipalli (Berkeley) leading a panel on decoding imagined audition
Stephanie Martin (NextSense) - Decoding spectrotemporal features of overt and covert speech from the human cortex
Jun Wang (UT Austin) - Decoding imagined and spoken phrases from non-invasive neural (MEG) signals
Timothée Proix (Geneva) - Imagined speech can be decoded from low- and cross-frequency intracranial EEG features
Giovanni Di Liberto (Dublin) - The music of silence, Part 1 and Part 2
David Moses (UCSF) - Neuroprosthesis for Decoding Speech in a Paralyzed Person with Anarthria
Christian Herff (Maastricht) - Real-time synthesis of imagined speech processes from minimally invasive recordings of neural activity
10 March 2022 - Lucas Parra (CCNY) - What engages the brain?
24 February 2022 - Behtash Babadi (Maryland) - How do we make causality great again?
10 February 2022 - Nai Ding (Zhejiang Univ., China) - How do we parse the structure of speech?
9 December 2021 - Liberty Hamilton (UT Austin) - Is Speech Special or Different?
Parallel and distributed encoding of speech across human auditory cortex (or this overview)
On Finding that Speech is Special (by A. Liberman in 1982)
11 November 2021 - Alain de Cheveigné (ENS) - Benchmarking Brain Decoding Algorithms (a discussion)
See this summary document describing many issues and listing publicly available databases.
28 October 2021 - Maryam Hosseini (Université de Sherbrooke) - End-to-end speech enhancement with auditory attention via EEG
21 October 2021 - Lucia Melloni (Max-Planck-Institut für empirische Ästhetik) - How do we segment and create a hierarchical structure of the auditory world?
17 June 2021 - Jonas Obleser (Lübeck) - Listening in cluttered auditory scenes: How do our neural measures relate to behavior?
20 May 2021 - Nima Mesgarani (Columbia) - How to interpret real (neurophysiological) and artificial (DNN) models?
13 May 2021 (one week delayed) - Andrea Chiba (UCSD) - How do we learn the dynamics of the outside world?
22 April 2021 - Andrea Halpern (Bucknell) - Can we capture the internal experience of musical imagery?
8 April 2021 - Michael Casey - Do we all perceive (and decode) music the same way?
25 March 2021 - What can fNIRS do for auditory neuroscience? - Panel moderated by Ruth Litovsky
Colette McKay (Bionics Institute) - Cortical speech processing in post-lingually deaf adult cochlear implant users as revealed by fNIRS
Douglas Hartley (Nottingham) - Adaptive benefit of cross-modal plasticity following cochlear implantation in deaf adults
Heather Bortfeld (Merced) - Functional near-infrared spectroscopy as a tool for assessing speech and spoken language processing in pediatric and adult cochlear implant users.
Antje Ihlefeld (New Jersey Institute of Technology) - A quantitative comparison of NIRS and fMRI across multiple cognitive tasks
11 March 2021 - David Poeppel - Do neural oscillations (excitability cycles) causally influence auditory perception?
Best overview (send email for a preprint): Cortical oscillations and speech processing: emerging computational principles and operations
Speech and Motor: The coupling between auditory and motor cortices is rate-restricted: Evidence for an intrinsic speech-motor rhythm
Music: An oscillator model better predicts cortical entrainment to music
25 February 2021 - Preben Kidmose - What can we decode in or near the ear?
17 December 2020 - Jeremy Skipper (UCL) - A contrary opinion: How and why do auditory and motor systems interact?
3 December 2020 - Greg Hickok (UC Irvine) - How and why do auditory and motor systems interact?
=> Follow-up questions/answers to the discussion.
12 November 2020 - Jeremy Wolfe (Harvard) - What do visual models of attention tell us about audio?
29 October 2020 (1800 European standard time; watch the summer/winter timing adjustments!) - Elia Formisano (Maastricht): How do we recognize sounds? How do we investigate the underlying neural-computational mechanisms?
8 October 2020 - Ingrid Johnsrude (Western Ontario): What is listening effort? Can we measure it?
A model of listening engagement (MoLE) (Published version or Reprint)
Factors That Increase Processing Demands When Listening to Speech (Published version or Reprint)
24 September 2020 - Jack Gallant (Berkeley): What can we decode and what statistics are useful?
10 September 2020 - Chris Stecker (Boys Town) and Erick Gallun (OHSU): How can we collect online data for cognitive hearing experiments?
See this wiki for the current thinking: http://spatialhearing.org/remotetesting
30 July 2020 - Jonathan Simon (UMd): What can we decode (with MEG)?
16 July 2020 - Prof. Nilli Lavie (UCL): How does attention affect auditory processing?
2 July 2020 - Prof. Lori Holt (CMU): Is speech understanding more about perception or hallucination?
11 June 2020 - Shihab Shamma (UMd and CNRS, with Giovanni Di Liberto, Claire Pelofi, Guilhem Marion): Do we decode speech and music differently?
28 May 2020 - Jean-Rémi King (CNRS): How are linguistic features represented in the brain?
21 May 2020 - Ed Chang (UCSF): Do parallel pathways decode speech?
The Encoding of Speech Sounds in the Superior Temporal Gyrus, by Han Gyol Yi, Matthew K. Leonard, and Edward F. Chang.
7 May 2020 - Ed Lalor (Rochester and Trinity): Do our eyes help decode audio?
"Look at me when I'm talking to you: Selective attention at a multisensory cocktail party can be decoded using stimulus reconstruction and alpha power modulations," by Aisling E. O'Sullivan, Chantelle Y. Lim, Edmund C. Lalor.
"Congruent Visual Speech Enhances Cortical Entrainment to Continuous Auditory Speech in Noise-Free Conditions," by Michael J. Crosse, John S. Butler and Edmund C. Lalor.
23 Apr 2020 - Tom Francart (KU Leuven): How do we measure the performance of auditory attention decoding systems?
S. Geirnaert, T. Francart and A. Bertrand, "An Interpretable Performance Metric for Auditory Attention Decoding Algorithms in a Context of Neuro-Steered Gain Control," in IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 28, no. 1, pp. 307-317, Jan. 2020.
17 Apr 2020 - Jens Hjortkjær (DTU): Do older brains decode speech differently?
Søren A. Fuglsang, Jonatan Märcher-Rørsted, Torsten Dau and Jens Hjortkjær, "Effects of Sensorineural Hearing Loss on Cortical Synchronization to Competing Speech during Selective Attention," Journal of Neuroscience, vol. 40, no. 12, pp. 2562-2572, 18 March 2020.