Workshop on Human Theory of Machines and Machine Theory of Mind for Human-Agent Teams
IROS 2022, Kyoto, Japan

The workshop has now concluded. We thank all the speakers and contributors for their outstanding presentations and discussions!

We hope to see you again at future iterations of this workshop!

Summary

A cornerstone of human-human interaction is our ability to infer our partner’s beliefs, desires, and intentions, allowing us to effectively collaborate with each other towards a common goal. The capacity to formulate such a mental model is known as theory of mind. As we seek to impart similar capabilities to embodied agents interacting with humans, there is a need to develop methods that tackle both sides of the interaction: 1) theory of mind, in which a mental model of the human is held by the agent, and 2) theory of machine, in which a mental model of the agent is held by the human.

While this mental modeling is important in all types of human-agent teaming, it is especially so in the context of heterogeneous teams in which embodied agents with fundamentally different capabilities collaborate with humans. To this end, we believe there is an inherent need to address the challenges faced by human-robot interaction systems from the perspective of mental modeling.

This workshop brings together renowned researchers in the fields of robotics, artificial intelligence, and cognitive science to discuss the current state of the art with respect to such mental models and the challenges inherent to future research and development. We aim to foster discussion on the relationship between various mental modeling techniques, and more specifically, on the following:

  • Theory of mind - Inferring the mental states of human partners, including but not limited to beliefs, desires, intentions, goals, and capabilities

  • Theory of machine - Inducing mental states related to the agent’s beliefs, desires, intentions, goals, and capabilities in human partners

  • Leveraging mental models in interactions between humans and agents

Call for Contributions

We invite contributions to the hybrid workshop from a wide variety of disciplines, including but not limited to human-robot interaction, human-agent teams, cognitive science, theory of mind, explainable AI, and robot social intelligence. Contributed extended abstracts should use the same template as full IROS papers and must not exceed four pages, excluding references. Selected contributions will be presented during the workshop with a poster and a spotlight talk.

All submissions should be made through CMT and do not need to be anonymized.

Topics of Interest:

  • Human-robot interaction/collaboration

  • Heterogeneous human-agent teams

  • Cognitive and mental modeling including theory of mind and theory of machine

  • Shared mental models

  • Explainable, interpretable, legible, predictable, transparent interaction and planning

  • Social intelligence in human-robot interaction

Important Dates:

  • Abstract Submission: September 9th, 2022 (Time: 11:59pm AoE)

  • Author Notification: September 20th, 2022

  • Final Submission Deadline: September 23rd, 2022 (Extended to October 17th, 2022)

  • Hybrid Workshop: October 27th, 2022

At the Workshop (ID: ThWF-13):

  • The in-person component of the workshop will be in Room K

  • The online component will be accessible through Zoom.

  • Questions for the panel may be submitted online.


Keynote Speakers

Anca Dragan UC Berkeley

TBD

Katia Sycara Carnegie Mellon University

Abstract: Theory of Mind Modeling in Search and Rescue Teams

Theory of Mind (ToM) refers to the ability to make inferences about others’ mental states. This ability is fundamental to human social activities such as empathy, teamwork, and communication. As intelligent agents come to be involved in diverse human-agent teams, they will also be expected to be socially intelligent in order to become effective teammates. In this talk, we describe the motivations, challenges, and advantages of ToM for single humans and for human teams. We present a computational ToM model that observes team behaviors and infers the team members’ mental states in an urban search and rescue (US&R) task. Our modular ToM model approximates human inference by explicitly representing beliefs, belief updates, and action prediction/generation. We present experimental results comparing the model’s performance with that of human observers asked to make the same inferences. We also discuss the complementary problem of human theory of machines, as well as open research problems.

Subbarao Kambhampati Arizona State University

Abstract: Symbolic Mental Models as a Lingua Franca for Supporting Explainable & Advisable Human-AI Teaming

Despite the surprising power of many modern AI systems that often learn their own representations, there is significant discontent about their inscrutability and the attendant problems in their ability to interact with humans. While alternatives such as neuro-symbolic approaches have been proposed, there is a lack of consensus on what they are about. There are often two independent motivations: (i) symbols as a lingua franca for human-AI interaction, and (ii) symbols as system-produced abstractions used by the AI system in its internal reasoning. The jury is still out on whether AI systems will need to use symbols in their internal reasoning to achieve general intelligence capabilities. Whatever the answer, the need for (human-understandable) symbols in human-AI interaction and teaming seems quite compelling. In particular, humans would be interested in providing explicit (symbolic) knowledge and advice -- and expect machine explanations in kind. This alone requires AI systems to maintain symbolic mental models that support explainable and advisable teaming. In this talk, I will outline my group's research efforts toward realizing this vision.

Short Bio:

Subbarao Kambhampati is a professor of computer science at Arizona State University. Kambhampati studies fundamental problems in planning and decision making, motivated in particular by the challenges of human-aware AI systems. He is a fellow of the Association for the Advancement of Artificial Intelligence, the American Association for the Advancement of Science, and the Association for Computing Machinery, and was an NSF Young Investigator. He has served as president of the Association for the Advancement of Artificial Intelligence, a trustee of the International Joint Conference on Artificial Intelligence, chair of AAAS Section T (Information, Communication and Computation), and a founding board member of the Partnership on AI. Kambhampati’s research, as well as his views on the progress and societal impacts of AI, has been featured in multiple national and international media outlets. He can be followed on Twitter @rao2z.

Harold Soh National University of Singapore

Abstract: How can we learn human models more efficiently?

In this talk, I will discuss two of our recent works on enabling robots to more effectively learn latent properties of humans. The first work is on human-robot assistive communication by planning over a learnt human model. Here, the key problem is that human models are difficult to obtain. We take inspiration from social projection theory and show that learning differences from the robot’s self model is more data-efficient than learning an entire human model from scratch. This leads to models that are effective for planning communication, as we will demonstrate in experiments. The second work is inverse reinforcement learning (IRL), which is important for value alignment and learning from demonstration. A fundamental issue with IRL is unidentifiability: there are multiple reward functions that can give rise to a given policy or observed behavior. Here, we propose a Bayesian Optimization (BO) approach to IRL that efficiently explores the reward function space to identify multiple solutions consistent with expert demonstrations. However, we find that a naive direct use of BO leads to unsatisfactory results. Our key insight is to leverage policy invariance to derive a new projection kernel, which leads to significantly better results in experiments.

Short Bio:

Harold Soh is an Assistant Professor in the Department of Computer Science at the National University of Singapore (NUS), where he directs the Collaborative Learning and Adaptive Robots (CLeAR) lab. Harold completed his Ph.D. at Imperial College London on online learning for assistive robots. Harold’s current research focuses on machine learning and decision-making for trustworthy collaborative robots. His work, which spans cognitive modeling (human trust) to physical systems (tactile perception with novel e-skins), has been recognized with a best paper award at IROS’21 and nominations at RSS’18, HRI’18, RecSys’18, and IROS’12.

Angelo Cangelosi University of Manchester

Short Bio:

Dr. Cangelosi has a background in both cognitive science and artificial intelligence and has made significant contributions to human-robot interaction by introducing hybrid cognitive models that capture how empathetic trust is built towards an agent. His research focuses on language grounding and embodiment in humanoid robots, developmental robotics, human-robot interaction, and the application of neuromorphic systems to robot learning. His latest book, ‘Cognitive Robotics’ (MIT Press), was co-edited with Minoru Asada and published open access in 2022.

Schedule (Last Updated: October 7th)

Time (JST, GMT+9) | Topic

  • 9:00 - 9:15 | Welcome and Online Login

  • 9:15 - 10:00 | Keynote 1: Subbarao Kambhampati

  • 10:00 - 10:15 | Break

  • 10:15 - 10:45 | Keynote 2: Katia Sycara

  • 10:45 - 11:00 | Coffee Break

  • 11:00 - 11:45 | Keynote 3: Anca Dragan

  • 11:45 - 13:15 | Lunch Break

  • 13:15 - 14:15 | Contributed Abstract Talks (detailed schedule TBD)

  • 14:15 - 15:00 | Keynote 4: Harold Soh

  • 15:00 - 15:15 | Coffee Break

  • 15:15 - 16:00 | Keynote 5: Angelo Cangelosi

  • 16:00 - 16:45 | Panel

  • 16:45 - 17:00 | Closing Notes

Contributed Abstracts

We are pleased to announce the following abstracts that will be presented during the workshop:

  1. Theory of Mind-based Assistive Communication in Complex Human-Robot Cooperation, Moritz C. Buehler, Jurgen Adamy, and Thomas H. Weisswange

  2. Story-Based Machine Theory of Teams (MToT), Paul Robertson, Robert Laddaga, Howard E. Shrobe, Gary C. Borchardt, and Sue Felshin

  3. Learning Human-Robot Interactions to improve Human-Human Collaboration, Radu Stoican, Angelo Cangelosi, Christian Goerick, and Thomas H. Weisswange

  4. Interpretable Learned Emergent Communication for Human-Agent Teams, Seth Karten, Mycal Tucker, Huao Li, Siva Kailas, Michael Lewis, and Katia Sycara

  5. Toward Capability-Aware Cooperation for Human-Robot Interaction, Charles Jin, Zhang-Wei Hong, and Martin Rinard


Organizers

Joseph Campbell

Postdoctoral Fellow

Carnegie Mellon University

Simon Stepputtis

Postdoctoral Fellow

Carnegie Mellon University

Dana Hughes

Project Scientist

Carnegie Mellon University

Katia Sycara

Professor

Carnegie Mellon University

Michael Lewis

Professor

University of Pittsburgh