Mirjam Fried is Professor of Linguistics and Chair of the Department of Linguistics at Charles University, where she has worked since 2011. Her expertise is in cognitive linguistics, especially the development of conceptual and analytic tools in Construction Grammar and Frame Semantics. Her research examines the grammatical and interactional organization of spontaneously produced spoken language, with an empirical foundation in corpus data. She received her PhD from UC Berkeley, held faculty appointments in the United States (University of Oregon, UC Berkeley, and Princeton University), and has served in major leadership roles at Charles University, including as Dean of the Faculty of Arts and as Principal Investigator of a five-year European structural research project.
Topic: Multilayered cues for listener comprehension in spontaneously produced interactions
Abstract: Cognitively oriented linguistic research rests on the long-standing understanding that face-to-face oral interactions constitute an important resource for detecting the complex patterns of language production and perception. This usage-based commitment calls for a multilayered approach that can incorporate the simultaneous contributions of multiple semiotic channels – linguistic, auditory, visual (e.g. Feyaerts et al. 2017; Verhaagen 2025). I will demonstrate how the conceptual and analytic tools of Construction Grammar can help articulate integrative representations of speakers’ conventional linguistic knowledge. To illustrate, I will consider the relationship between lexico-grammatical structure and specific auditory patterns in signaling subtle but conversationally crucial discourse-pragmatic meanings. My corpus material (private, everyday Czech conversations) is instructive in at least two ways. First, it highlights the role of sound (prosodic patterns and/or varying degrees of phonetic reduction) as a strong indicator of speakers’ intentions as well as a reliable cue for functional disambiguation and, hence, successful comprehension. Second, it supports the hypothesis that descriptively adequate and cognitively plausible generalizations about grammatical structure must take into account the inherent properties of dialogic interaction: the temporal sequencing of turns, the distributed co-construction of complete linguistic units, and the need to keep updating the listener’s mental model of the unfolding discourse.
Janet van Hell (PhD, University of Amsterdam) is Distinguished Professor of Psychology and Linguistics, and Director of the Center for Language Science, at the Pennsylvania State University. Funded mainly by the National Science Foundation, research in her Bilingualism and Linguistic Diversity (BiLD) Lab focuses on the neural and cognitive basis of human language processing in linguistically diverse contexts, in L2 learners and in monolingual, bilingual, and bidialectal speakers. She combines neuropsychological and behavioral techniques to study patterns of cross-language interaction at the lexical and sentence levels, codeswitching, and accented-speech processing. Dr. Van Hell and her students are active in outreach initiatives that bring language and brain research to broader communities. She serves as PI of the NSF NRT program “Linguistic diversity across the lifespan: Transforming training to advance human-technology interaction”. She is also Associate Editor of Brain and Language and Co-Editor of Language Learning’s Cognitive Neuroscience Series.
Topic: Processing accented speech: impact of listener experience and speaker identity
Abstract: Our globalized world is a linguistic melting pot, home to many nonnative speakers of any given language. In fact, there are more nonnative than native speakers of English, many of whom speak with a noticeable accent. Nonnative-accented speech can pose processing challenges, as listeners must reconcile incoming acoustic signals that deviate from their existing phonological representations. These challenges can be exacerbated when nonnative-accented speech is embedded in background noise, as may happen when you listen to a colleague during the conference coffee break. How do listeners process speech produced by nonnative-accented speakers? In this talk, I will discuss recent behavioral and electrophysiological (EEG/ERP) research on how listeners process semantic and syntactic information in sentences spoken by nonnative- and native-accented speakers. I will specifically focus on studies that examined how listeners’ processing of nonnative-accented speech is impacted by their familiarity with nonnative-accented speech, their knowledge of the speaker’s identity, and the noisiness of the environment. Collectively, these findings highlight the importance of integrating socio-indexical cues, listener experience, and environmental features into theoretical models of nonnative speech processing.
James H-Y. Tai (戴浩一) is Chair Professor of Linguistics at National Chung Cheng University, where he founded the Graduate Institute of Linguistics and the Research Center for Humanities and Social Sciences. He received his BA from National Taiwan University (1964) and his MA (1967) and PhD (1970) in Linguistics from Indiana University. His academic career includes positions at Southern Illinois University and The Ohio State University, with visiting appointments at MIT, the University of Massachusetts, and Cornell University, and he has also served as Adjunct Professor at Ohio State since 1995. His research spans Chinese linguistics (syntax, semantics, and pragmatics), cognitive linguistics, sign linguistics, and language and aging, and he has played a central role in advancing research on Taiwan Sign Language, including establishing the Taiwan Center for Sign Linguistics.
Topic: TBA
Abstract: TBA