8:00 - 9:00
Registration
Keynote Speakers
9:00 - 9:25
Title: Explaining Decisions in Multi-Agent Environments
Abstract: Understanding and accepting decisions made by artificial intelligence (AI) systems is crucial for human collaboration and trust. This importance grows even further in multi-agent environments, where AI systems make decisions based on unknown goals, potentially influenced by the preferences of other agents. In such complex scenarios, explanations become important for increasing user satisfaction and acceptance, as they must account for factors such as the system's decision, user preferences, agent preferences, environmental settings, and key attributes such as fairness, envy, and privacy. In this talk, we will explore the concept of Explainable Decisions in Multi-Agent Environments (xMASE) through two cases: constraint-driven optimization problems, and explanations for multi-agent Reinforcement Learning (RL) and for elucidating preferences. For each case, we propose an algorithm that generates comprehensive explanations. Furthermore, we will report on human experiments that demonstrate the informativeness and acceptability of these explanations to users.
9:25 - 9:50
Title: Encouraging Automated Agents to Behave Nicely
Abstract: Autonomous AI agents are deployed in increasingly complex environments where they must account for the presence of other agents while trying to achieve their objectives. Moreover, such agents may require assistance from other agents to accomplish their assigned task efficiently, or even to complete it at all. The overarching objective of this work is to provide theoretical and computational foundations that will allow agents to autonomously learn and adopt cooperative and collaborative behaviors. As a first step toward this objective, we need to equip agents with the ability to produce ad-hoc assessments of their ability to be helpful to other agents, and of the potential benefit of assistive actions that may be performed by other agents. For this purpose, the Value of Assistance (VOA) will be presented, which captures the expected improvement in an agent's performance due to assistive actions. Computing VOA in different multi-agent and multi-robot settings will also be discussed. As an example, VOA estimation will be demonstrated for selecting which agent from a team of navigating robotic agents should receive localization information from a drone. Estimation of VOA will also be demonstrated for grasping and for integrated task and motion planning settings.
9:50 - 10:15
Title: Between Multi-Agent Planning and Multi-Agent Reinforcement Learning
Abstract: Multi-agent planning (MAP) and multi-agent reinforcement learning (MARL) are two well-studied sub-areas of Artificial Intelligence research that deal with sequential decision making for multiple agents. In this talk, I will give an overview of several types of MAP problems and algorithms and their relation to MARL research. This includes Multi-Agent Path Finding (MAPF), which deals with planning paths for multiple agents, and Multi-Agent STRIPS, which is a multi-agent variant of classical planning. Then, I will present research on learning domain models for single- and multi-agent planning and discuss its relation to model-free and model-based MARL.
10:15 - 10:40
Title: Contrastive Explanations for Reinforcement Learning
Abstract: In this talk, I will present two of our recent works on explainable reinforcement learning. The first introduces "DISAGREEMENTS," a method for generating dependent and contrastive summaries for reinforcement learning agents, enhancing user understanding of differences between agent strategies. The second presents "COViz," a technique that compares an agent's chosen action with a counterfactual, shedding light on the agent's decision-making processes.
Joint work with Yotam Amitai and Yael Septon.
10:40 - 11:10
Contributed work:
Ram Rachum, Dima Ivanov
11:10 - 11:35
Coffee Break
11:40 - 12:05
Title: Goal Recognition as RL - Fantastic Goals and Where to Find Them
Abstract: When considering more than one agent in the environment, goal-directed RL can be used not only to optimize the behavior of the ego agent but also to improve its understanding of other agents. This talk will first overview current goal-conditioned RL and then show how this problem formulation can be leveraged to improve reasoning and inference. It will conclude with a general discussion about goal-conditioned RL and its potential uses as part of MARL research.
12:05 - 12:30
Title: Social robots: from individuals to groups
Abstract: Social robots interact with people to achieve social goals. Using reinforcement learning enables the robot to learn about its human counterparts. In this talk, I will discuss reinforcement learning for social robots in the context of one-on-one, many-to-one, and one-to-many interactions.
12:30 - 12:55
Title: Generative Models for Multi-Agent Systems
Abstract: Models are often an imperative component for multi-agent systems to complete desired tasks. However, the complexity of the system and the environment, including various uncertainties, makes the acquisition of a good model a challenge. Data-based models offer a feasible solution to the lack of a good analytical model, but they require a significant amount of data, which is expensive, time-consuming, and even dangerous to collect. In this talk, I will discuss the generation of synthetic data for training the models for a group of robots and for ultra-range gesture recognition for directing robots.
12:55 - 13:20
Contributed work:
Nitay Alon, Assaf Caftory
13:20 - 14:15
Lunch
14:20 - 15:20
15:30 - 16:00