Using language seems easy, but (like many other things) the cognitive "machinery" and operations that make it possible are remarkably complex.

A speaker in a conversation needs to figure out what she wants to say, select the words that best fit her message, assemble them into grammatical structures, and convert them into sounds. A listener needs to group these sounds into words, recover the grammatical structure, and access meaning from long-term memory. Moreover, many if not most people nowadays speak more than one language: they hold almost twice as much linguistic information in long-term memory, must control which language they speak, and their two language systems interact with each other.

Both speakers and listeners also need to keep units of language in working memory until they are integrated with the context or produced. They need to pay attention not only to the conversation but also to their surroundings, especially when doing two things at once, such as talking and driving. On top of that, since the speaker and listener roles alternate and turn-taking time is very short (~250 ms), listeners need to start planning their own speech while they are still comprehending the speech of their conversation partner (essentially, dual-tasking much of the time).

In the Language and Communication Lab, we study different aspects of these processes in order to understand how it all happens.

Main research lines