IJCAI 2024
Trustworthy Interactive Decision-Making with Foundation Models Workshop
(TIDMwFM)
Cover image generated using DALL·E 3.
Overview
Interactive scenarios arise broadly, from robots collaborating with humans in dynamic environments, to recommender systems interacting with consumers, to intelligent control systems for power grids, transportation, and smart cities, to discovery agents interacting with literature databases and the internet. As foundation models grow more capable, they are being integrated into these interactive interfaces and autonomous systems that keep physical or cyber components in the loop. This integration enables AI agents to collaborate with humans on high-stakes decision-making across complex real-world domains. However, it also raises significant questions about how to ensure these models remain transparent, unbiased, robust, and aligned with human values.
This forward-looking workshop provides a forum to shape discussion on human-centric techniques that promote ethical and trustworthy integration of foundation models into interactive decision support systems. Sessions will review progress in areas such as safe reinforcement learning and large language model (LLM)-based agents, while also delineating open challenges around uncertainty quantification, safety evaluation, preference alignment, and responsible deployment in interactive settings.
By bringing together researchers across machine learning, human-computer interaction, and AI safety, this workshop will facilitate cutting-edge thinking around trustworthy integration of foundation models into an emerging class of decision-making systems that closely link humans and AI. We look forward to your contributions on the techniques and responsible development practices that will shape the future of AI-assisted interactive decision-making across a breadth of socially impactful domains.
Important Dates
Workshop date: August 5, 2024
Submission opens: April 1, 2024
Submission deadline: May 28, 2024 (AOE) (extended from April 26, 2024)
Acceptance notification: June 28, 2024 (tentative; extended from June 4, 2024)
Camera-ready and poster upload deadline: TBD
Invited Speakers and Panelists
Prof. Se Young Yun
Associate Professor at the Graduate School of AI, KAIST
Title: Dynamics, Reasoning, and Instructional Robustness of Large Language Models
Prof. Sang-goo Lee
Professor, Seoul National University, Korea
Title: Reliable Integration of External Knowledge in LLM: Handling Conflicting and Irrelevant Information
Co-Presenter: Youna Kim
PhD student at Seoul National University, Korea
Prof. Jaesik Choi
Director, Explainable Artificial Intelligence Center, KAIST
Title: Explainable Artificial Intelligence to Discover the Internal Decision Mechanisms of Deep Neural Networks
Donhyeong Kim
Ph.D. in Electrical and Computer Engineering, Seoul National University
Title: Risk-Aware Safe Reinforcement Learning for Trustworthy Decision-Making
Opening remarks link: TIDMwFM opening remarks
Organizers
Assistant Professor, Virginia Tech, USA
Associate Professor, University of Chicago, USA
Assistant Professor, Virginia Tech, USA
Thomas L. Phillips Professor of Engineering, Virginia Tech, USA
Postdoc, UC Berkeley, USA
PhD Candidate, Virginia Tech, USA
PhD Student, Virginia Tech, USA