Workshop Program

Keynote Speaker: Vered Shwartz

Did Language Models "solve" figurative language?

Figurative expressions, such as idioms, similes, and metaphors, are ubiquitous in English. For many years, they have been considered a 'pain in the neck' for NLP applications due to their non-compositional nature. With LLMs excelling at understanding and generating English text, it's time to ask: did LLMs 'solve' figurative language? Is it possible that the sheer amount of exposure to figurative language in their training data equipped them with the ability to understand and use it? I will discuss the state of LLMs in recognizing figurative usage, interpreting figurative expressions in context, and using figurative language in generated text.

Speaker Bio: Vered Shwartz is an Assistant Professor of Computer Science at the University of British Columbia. Her research interests include commonsense reasoning, computational semantics and pragmatics, and multiword expressions. Previously, Vered was a postdoctoral researcher at the Allen Institute for AI (AI2) and the University of Washington, and she received her PhD in Computer Science from Bar-Ilan University. Vered's work has been recognized with several awards, including the Eric and Wendy Schmidt Postdoctoral Award for Women in Mathematical and Computing Sciences, the Clore Foundation Scholarship, and an outstanding paper award at ACL 2016.


Talk Recording

Figlang_keynote.mp4