Dr. Nancy F. Chen is an ISCA Fellow (2025), AAIA Fellow (2025), and A*STAR Fellow (2023), and was recognized with the Asian Women Tech Leaders Award (2025). She is also a 2025 inductee of IEEE Eta Kappa Nu (HKN), the honor society of IEEE recognizing outstanding engineers.
At A*STAR, Dr. Chen leads the Multimodal Generative AI group and the AI for Education Programme. She is a serial best paper award winner, with awards at major conferences including ICASSP, ACL, EMNLP, MICCAI, COLING, APSIPA, SIGDIAL, and EACL. Her research spans applications in education, healthcare, neuroscience, social media, security, and forensics. Dr. Chen's multimodal, multilingual technologies have led to commercial spin-offs and adoption by Singapore's Ministry of Education.
Keynote Talk:
From Speech to Sense: The Art of Listening in Artificial Intelligence
Abstract: Unlike sight, which we can shut off with a blink, sound is inescapable. We are always listening, even when we wish not to. Hearing comes naturally, but understanding what we hear requires learning, knowledge, focus, and interpretation. Yet it is sound — be it the quiet drone of an air conditioner, a lover’s tender whisper, or the distant rush of a waterfall — that anchors us to our physical surroundings, social connections, and the present moment.
In this talk, I will share our experience in modelling the audio signal in multimodal generative AI to drive translational impact across domain applications. In particular, we exploit the audio modality to strengthen contextualization, reasoning, and grounding. Cultural nuances and multilingual peculiarities add another layer of complexity to understanding verbal interactions. For example, our generative AI efforts in Singapore's National Multimodal Large Language Model Programme have led to MERaLiON (Multimodal Empathetic Reasoning and Learning In One Network), the first multimodal large language model developed for the Southeast Asian context. Such endeavors complement North American-centric models, making generative AI more widely deployable for localized needs. Another case in point is the SingaKids AI Tutor, which enables young children to learn ethnic languages such as Malay, Mandarin, and Tamil. We are currently expanding applications to embodied agentic AI, aviation, and healthcare.
Tanya Goyal is an assistant professor in the Computer Science department at Cornell University.
Her research interests include building reliable and sustainable evaluation frameworks for large language models (LLMs), as well as understanding LLM behaviors as a function of training data and/or alignment strategies. Previously, she was a postdoctoral scholar at the Princeton Language and Intelligence Center (2023-2024). Tanya completed her Ph.D. in Computer Science at UT Austin in 2023, where her thesis received the UTCS Bert Kay Dissertation Award. Her research is supported by the NSF and a gift from Google.
Keynote Talk:
Climbing the Right Hill: On Benchmarking Progress in Long-Form Text Processing
Abstract: Large Language Models (LLMs) are now functionally capable of ingesting very long documents as input, but can they truly process and reason over these massive contexts? In this talk, I will discuss our efforts to answer this question through the lens of long narrative summarization, a setting that naturally requires information synthesis and reasoning over long-range dependencies. In the first part, I will describe our work highlighting shortcomings of current models along two key axes of summary quality, coherence and factuality, and discuss the challenges of automating their evaluation. Next, I will present NoCha, our methodology for constructing realistic and uncontaminated benchmarks for long-context narrative reasoning. I will discuss results showing that NoCha is challenging for frontier LLMs; GPT-5 performs <30% worse than humans. Finally, I will provide a recipe for building the next generation of robust long-context benchmarks.