Workshop on Open-World Agents (OWA-2024)
Synergizing Reasoning and Decision-Making in Open-World Environments
A NeurIPS 2024 Workshop
12/15/2024 (Full day)
Ballroom A&B, Vancouver Convention Center
News
08/08/2024 - Our call for papers is out! We will offer 1 best paper award and 1-2 honorable mention awards.
07/26/2024 - Our workshop will be hosted with NeurIPS 2024 during the stunning winter in Vancouver, BC, Canada 🇨🇦!
Important Dates
Abstract Submission Due: 09/15/2024 (GMT) (extended from 09/10/2024)
Paper Submission Due: 09/20/2024 (GMT) (extended from 09/15/2024)
Notification of Acceptance: 09/30/2024
Camera-ready Paper Due: 10/05/2024
Workshop Date: 12/15/2024
Summary
In recent years, AI has made significant strides across various domains, demonstrating capabilities that often surpass human performance on specific tasks. However, the real world presents challenges that go beyond single tasks, objectives, or predefined, static environments. We propose to consider open-world environments as the new habitat for AI agents: highly diverse and dynamic, fully interactive, teeming with endless and creative tasks, and demanding continual learning and growth. Open-world agents are therefore expected to possess remarkable problem-solving capabilities across all cognitive functions, notably reasoning and decision-making, beyond what specialized AI agents offer.
This workshop aims to bring together researchers from various fields to discuss emerging topics on reasoning and decision-making in open-world environments. The topic can be broad, but we are particularly interested in synergizing reasoning and decision-making, i.e., open-world agents that can simultaneously perform reasoning (e.g., QA, dialogue) and decision-making (e.g., planning and control), and in how such unification helps tackle the challenges that the open world poses to both. Related fields include, but are not limited to, interleaving reasoning with decision-making, reasoning in embodied learning agents, LLM tool usage, reinforcement learning in open-world environments, open-vocabulary learning, continual learning, multi-agent learning, and emerging ethical considerations in open-world environments. Our objective is to foster collaboration and insights into the scientific questions involved in developing open-world reasoning and decision-making agents. Some examples are:
How do humans interleave reasoning and decision-making, what are the benefits, and what can machines learn from this?
How can we build a model that unifies reasoning and decision-making, ideally one suited to open-world environments?
How can we develop principled reasoning systems for open-world environments so that AI agents can plan in unseen scenarios?
How does (prior) knowledge play a role in reasoning and decision-making in such environments? How is new knowledge acquired?
How can we achieve open-world reasoning and decision-making with as little supervision / human feedback as possible?
How can we quantitatively measure the generalization of reasoning and decision-making systems?
Is there a general theory or scheme behind reasoning and decision-making, for humans, or machines?
Best practices for building open-world agents in various domains, including game AI, robotics, LLM agents for workflow automation, etc.
Invited Speakers & Distinguished Panelists
Sherry Yang
NYU
Tao Yu
HKU
Ted Xiao
Google DeepMind
Natasha Jaques
University of Washington
John Langford
Microsoft Research
Jiajun Wu
Stanford University
Contact
Please contact the organizing committee at owa-workshop@googlegroups.com if you have any questions!