Imitation, Intent, and Interaction (I3)

Overview

A key challenge for deploying interactive machine learning systems in the real world is enabling machines to understand human intent. Techniques such as imitation learning and inverse reinforcement learning are popular data-driven paradigms for modeling agent intentions and controlling agent behaviors, and have been applied to domains ranging from robotics and autonomous driving to dialogue systems. Such techniques provide a practical way to specify objectives for machine learning systems when those objectives are difficult to program by hand. While significant progress has been made in these areas, most research effort has concentrated on modeling and controlling single agents from dense demonstrations or feedback. However, the real world contains multiple agents, and dense expert data collection can be prohibitively expensive. Surmounting these obstacles requires progress on frontiers such as 1) inferring intent from multiple modes of data, such as language or observation, in addition to traditional demonstrations; 2) modeling multiple agents and their intentions, in both cooperative and adversarial settings; and 3) handling partial or incomplete information from the expert, such as demonstrations without action annotations or raw video observations.

The workshop on Imitation, Intent, and Interaction (I3) will bring together researchers from multiple disciplines, including robotics, imitation and reinforcement learning, cognitive science, AI safety, and natural language understanding. Our aim is to reexamine the assumptions in standard imitation learning problem statements (e.g., inverse reinforcement learning) and to connect distinct application disciplines, such as robotics and NLP, with researchers developing core imitation learning algorithms. In this way, we hope to arrive at new problem formulations, new research directions, and new connections across the disciplines that interact with imitation learning methods.

News

  • March 25 - Website live
  • June 1 - Paper decisions available

Carnegie Mellon University

Stanford University

Michigan State University

Massachusetts Institute of Technology

University of Maryland, Microsoft Research NYC

Google Brain

Important Info

  • Paper submissions due: May 19, 2019, Anywhere On Earth (UTC-12)
  • Author notifications: June 1, 2019
  • Camera-ready paper submissions due: June 10, 2019
  • Workshop: June 15, 2019
  • Room: 201
  • Poster dimensions (per https://www.icml.cc/FAQ/PosterBoardSize): posters should be roughly 24" x 36" in portrait orientation. There will be no poster boards; you will tape your poster directly to the wall, so use lightweight paper. Tape will be provided.

Location: Room 201 @ ICML 2019, Long Beach Convention Center, Long Beach, California, USA

Date: June 15, 2019

Session 1

08:45-09:00 – Welcoming remarks

09:00-09:20 – Invited talk: Hal Daumé III: "Beyond demonstrations: Learning behavior from higher-level supervision"

09:20-09:40 – Invited talk: Joyce Chai: "Collaboration in Situated Language Communication"

09:40-10:00 – Invited talk: Stefano Ermon: "Multi-agent imitation and inverse reinforcement learning"

10:00-10:20 – Contributed talk: Iris R Seaman: "Nested Reasoning About Autonomous Agents Using Probabilistic Programs"

10:20-11:30 – Poster session and coffee

Session 2

11:30-11:50 – Contributed talk: Changyou Chen: "Self-Enhanced Inverse Reinforcement Learning for Text Generation"

11:50-12:10 – Contributed talk: Faraz Torabi: "Generative Adversarial Imitation from Observation"

12:10-12:30 – Contributed talk: Seyed Kamyar Seyed Ghasemipour: "Scalable Meta Inverse Reinforcement Learning through Context-Conditional Policies"

12:30-14:00 – Lunch Break

Session 3

14:05-14:25 – Invited talk: Natasha Jaques: "Social Influence as Intrinsic Motivation for Multi-Agent Deep Reinforcement Learning"

14:25-14:45 – Invited talk: Pierre Sermanet: "Self-Supervision and Play"

14:45-15:05 – Contributed talk: Nicholas R Waytowich: "A Narration-based Reward Shaping Approach using Grounded Natural Language Commands"

15:05-16:30 – Poster session and Coffee

Session 4

16:30-16:50 – Invited talk: Kris Kitani: "Multi-modal trajectory forecasting"

16:50-17:10 – Contributed talk: Abhishek Das: "TarMAC: Targeted Multi-Agent Communication"

17:10 – Closing remarks

We invite the submission of full papers of 4-8 pages, with unlimited space for references and supplementary material. Submissions should follow the ICML 2019 style guidelines.

Papers can be submitted at the following address: https://cmt3.research.microsoft.com/IIIW2019

Relevant topics include, but are not limited to:

  • Imitation Learning
  • Inverse Reinforcement Learning
  • Theory of Mind
  • Multi-Agent Reinforcement Learning

We are particularly interested in research within the following areas:

  • inferring intent from multiple modes of data
  • multi-agent modeling in cooperative and adversarial settings
  • handling partial or incomplete information from an expert demonstrator

The workshop on Imitation, Intent, and Interaction (I3) seeks contributions at the interface of these frontiers, bringing together researchers from multiple disciplines such as robotics, imitation and reinforcement learning, cognitive science, AI safety, and natural language understanding.

Carnegie Mellon University

New York University

University of California, Berkeley

University of California, Berkeley

University of California, Berkeley

Amazon Web Services, New York University

University of California, Berkeley, Google Brain, Stanford University