Invited Speakers

Giuseppe Carenini is a Professor in Computer Science and Director of the Master of Data Science program at UBC (Vancouver, Canada). His work on natural language processing and information visualization to support decision making has been published in over 150 peer-reviewed papers (including best papers at UMAP-14 and ACM-TiiS-14). Dr. Carenini has served as area chair for many conferences, most recently for ACL'21 in "Natural Language Generation", and as Senior Area Chair for NAACL'21 in "Discourse and Pragmatics". He was also Program Co-Chair for IUI 2015 and for SIGDIAL 2016. In 2011, he published a co-authored book, "Methods for Mining and Summarizing Text Conversations". Dr. Carenini has also collaborated extensively with industrial partners, including Microsoft and IBM. He was awarded a Google Research Award in 2007 and a Yahoo Faculty Research Award in 2016.

Title: Discourse Processing in the Era of Large Language Models

Abstract: Despite the great success of Large Language Models (LLMs) in many NLP tasks, they still suffer from several serious flaws. For instance, they struggle with tasks involving multiple documents, they lack interpretability, and they are unable to plan, whether in problem solving or in text generation. Simply making them more domain-specific or scaling them up also appears problematic. In this talk, I will argue that more powerful discourse processing could come to the rescue, but only if two key challenges are addressed. First, we need to be able to train modern discourse parsers (NLU) and generators (NLG) across domains and genres, for both monologues and dialogues, without requiring substantial human annotation. Second, we need to better understand what discourse information is missing in LLMs and how to inject that missing information into current models. To conclude, I will discuss longer-term open issues involving discourse processing for LLMs designed to deal with long documents and with decoder-only architectures, as well as how to integrate different theories of discourse to better enhance the understanding and generation capabilities of LLMs.

Yufang Hou is a research scientist at IBM Research Ireland. She is also an adjunct senior lecturer and co-supervisor at the UKP Lab, TU Darmstadt. Her research interests include referential discourse modelling, argument mining, and scholarly document processing. Yufang received the WoC "Technical Innovation in Industry" Award in 2020 and the IBM Research Outstanding Technical Achievement Award in 2022.

Title: Bridging Resolution: A Journey Towards Modelling Referential Discourse Entities

Abstract: Information status and anaphora play an important role in text cohesion and language understanding. In this talk, I will first give an overview of my decade-long work on bridging resolution and information status classification. Next, I will discuss our research findings on probing LLMs for bridging inference. Finally, I will outline several challenging research questions on modelling referential discourse entities.