Autonomy In Teams
Joint Workshop on Sharing Autonomy in Human-Robot Interaction, 13 July 2018
As an increasing number of intelligent agents enter our lives and support us in a wider variety of tasks, the question of how we can interact with these agents in a simple and natural way becomes increasingly important. In the future, agents will not be recruited solely for single, clearly delimited tasks; rather, they will offer broad support across many different tasks. On the one hand, this will require that agents learn when to support and how to mediate joint actions in a collaboration. On the other hand, agents must become able to break goals down into actionable subtasks and determine how to contribute to overarching goals in a joint effort. These capabilities should scale to mixed teams of humans and artificial agents in collaborative settings.
Research on Shared Autonomy focuses on how autonomous intelligent systems can successfully interact with and shape each other's autonomy spaces: how two or more autonomous agents negotiate how they can individually and jointly contribute to an overarching goal while at the same time fulfilling their individual goals. The workshop aims, first, at agent models (algorithms, representations, evaluation metrics, datasets) that allow for goal-oriented interactions. Second, its main focus is on how such models can be learned and adapted in an open-ended manner, over time and on various time scales. In short: how can we build robots and virtual agents that successfully realize collaboration between humans and machines?
Impedance mismatches affect such aspects of teamwork as trust mechanisms, cooperative learning, understanding the division of cognitive labor, alignment of goals, adaptability of policies and plans, the granularity of policies and plans, and team roles.
We invite contributions targeting any area affecting teams that include humans and robots. Topics of interest include, but are not limited to, the following:
- Joint adaptation (and co-adaptation) of coordination patterns in agents
- Representations for collaboration in HRI: intentions, goals, domain knowledge, beliefs about current situation
- Communication and planning at differing levels of abstraction
- Learning models of teammates
- Transfer and incremental learning of task models, e.g., in cooperative tasks and in multi-agent systems
- Recognizing and predicting actions and/or motions of other agents
- Shared control in collaborative human-robot tasks: Roles, strategies, and the division of labor
- Learning and modeling human-agent interaction, human instructions, and collaborative behavior
- Trust and transparency in decision making
The workshop will be part of IJCAI-ECAI 2018 (co-located with ICML and AAMAS) in Stockholm, Sweden. It will be held as a full-day workshop on 13 July 2018.
The workshop aims at a multidisciplinary perspective on key aspects and challenges of shared autonomy. Therefore, the presentations will reflect the diversity of approaches and topics. The full-day agenda will be organized with talks, a poster session, and a concluding panel session.
Schedule as of 9 July 2018
Introductory session – Sharing Autonomy
09:00-09:10 Welcome and Introduction
09:10-09:40 Stefan Kopp: Coordinating Cooperation Through Human-Robot Interaction
09:40-10:00 Poster pitches of 2 minutes each
10:00-10:30 Break and Poster Session
10:30-11:00 Bernhard Nebel: Implicitly Coordinated Multi-Agent Path Finding under Destination Uncertainty
11:00-11:30 Jakob Foerster
11:30-12:00 Luke Marsh: Learning Human-Acceptable Machine Behavior
12:00-12:30 Intermediate Discussion and Posters
12:30-14:00 Lunch Break
Characterizing and Measuring
14:00-14:30 Rogier Woltjer, Human Autonomy Teaming metrics – a functional systems perspective: There are a number of reasons for measuring aspects of a Human-Autonomy Team (HAT) and its performance. Assessment of HAT fitness-for-purpose may, for example, be used in the design and development of HATs, in evaluating HAT training progress, or in reconfiguring a HAT during missions to adapt to circumstances. This short presentation will discuss key metrics considerations for these purposes and outline a functional systems perspective on HAT metrics along a number of metrics dimensions. Example metrics will be provided from this perspective, concluding with arguments for using a balanced metrics portfolio.
Rogier Woltjer is Deputy Research Director at the Swedish Defence Research Agency (FOI). He obtained an M.Sc. in Artificial Intelligence from Vrije Universiteit Amsterdam in 2001, and a Ph.D. in Cognitive Systems from Linköping University, Sweden, in 2009. His research focuses on cognitive systems engineering, human factors, safety and security management, resilience engineering, training, decision support, and command and control. Application domains that he has worked on include air traffic management, aviation, unmanned systems, and emergency and crisis management.
14:30-15:00 Helen Lashley, Evaluation of Autonomy
15:30-16:30 Poster Session and Coffee Break
Stefan Kopp, CITEC, Bielefeld University, Germany
Doug Lange, Chief Scientist, Command and Control Department, Space and Naval Warfare Systems Center Pacific, USA
Luke Marsh, Research Scientist, Defence Science and Technology Group, Australia
Bernhard Nebel, University of Freiburg, Germany
Adrian Pearce, Associate Professor, The University of Melbourne, Australia
Malte Schilling, CITEC, Bielefeld University, Germany
Michael Spranger, Sony CSL, Japan