Objectives
The areas of Human-Robot Interaction (HRI) and robot learning are tightly coupled. Interaction has been used to enhance robot learning from people, providing methods for quickly learning new actions and tasks, understanding constraints, bootstrapping computational approaches, and providing context for what the robot learns. Similarly, learning has been used to improve HRI, providing a means for the robot to learn better models of social interaction, to improve collaboration, and to enable better overall interaction.
Contributions have been made in each subfield: (i) interaction to aid learning and (ii) learning to interact. However, there is little work that lies at their intersection. Moreover, work in one subfield may benefit the other; synergy between these two research directions could produce a robotic system that learns to interact better with humans and is thereby more likely to achieve its learning goals.
Our aim is to bring together experts from these two topics (both learning for interaction and interaction to aid learning). In doing so, we expect to:
- Identify interesting problems on each side that would relate to the other and explore how each contributes to goal-oriented social interaction, improved learning performance, and/or better collaboration
- Identify problems which are best addressed using a dual-focus on learning and interaction
- Develop a synergy between learning for interaction and interactive learning, identifying practical ways in which research in each direction can benefit from the other
- Brainstorm potential future directions for addressing the duality of the identified research problems
Program
Room: 122
Welcome and Introduction: 09:00-09:10
Session 1: 09:10 - 10:00
09:10-09:45 Invited Speaker: Sidd Srinivasa
09:45-10:00 Paper Presentation:
- Learning Complex Manipulation Tasks from Heterogeneous and Unstructured Demonstrations (Nadia Figueroa and Aude Billard)
Coffee Break: 10:00-10:30
Coffee/Setup posters
Session 2: 10:30 - 12:10
10:30-11:05 Invited Speaker: Andrea L. Thomaz
11:05-11:35 Paper Presentations:
- Predicting Preschool Mathematics Performance of Children with a Socially Assistive Robot Tutor (Caitlyn Clabaugh, Konstantinos Tsiakas and Maja Mataric)
- Learning for Intent Communication: Human-Human Interaction Studies as a Source (Dogancan Kebude and Baris Akgun)
11:35-12:10 Invited Speaker: Ross Knepper
Lunch: 12:10 - 14:00
Session 3: 14:00 - 16:00
14:00-14:35 Invited Speaker: Sylvain Calinon
14:35-15:20 Paper Presentations:
- Speech Enhanced Imitation Learning and Task Abstraction for Human-Robot Interaction (Simon Stepputtis, Chitta Baral and Heni Ben Amor)
- Classifying Task Segments in ADL Tasks using Variable-length Mode Sequences (Reem Al-Halimi and Medhat Moussa)
- An online scenario for mixed-initiative planning considering human operator state estimation based on physiological sensors (Nicolas Drougard, Caroline P. Carvalho Chanel, Raphaëlle N. Roy and Frédéric Dehais)
15:20-15:55 Invited Speaker: Heni Ben Amor
15:55-16:00 Additional Poster Setup time
Coffee Break: 16:00-16:30
Session 4: 16:30 - 17:30
16:30-17:15 Poster Session
17:15-17:30 Closing Remarks
Organizers
Asst. Prof. Barış Akgün, Koç University
Kalesha Bullard, Georgia Institute of Technology
Vivian Chu, Georgia Institute of Technology
Tesca Fitzgerald, Georgia Institute of Technology
Dr. Matthew Gombolay, Massachusetts Institute of Technology
Dr. Chien-Ming Huang, Yale University
Prof. Brian Scassellati, Yale University
Topics of Interest
- Interactive Learning
- Learning from Demonstration
- Imitation learning
- Active learning
- Human-guided model refinement
- Interpreting human feedback for learning
- Learning to Interact
- Learning and acting according to user models/preferences
- Learning social interaction models
- Learning turn-taking or communication strategies
- Measuring and optimizing for user satisfaction
- Intersection of Interactive Learning and Learning-guided Interaction
- Learning for and from human-robot collaboration
- Learning and expressing transparency during goal-directed interaction
- Learning to act and interact from natural language
Invited Speakers
Ross Knepper, Cornell University
Siddhartha Srinivasa, University of Washington
Andrea Thomaz, University of Texas at Austin
Heni Ben Amor, Arizona State University
Sylvain Calinon, Idiap Research Institute
Timeline
- Submission deadline for papers: 8 August 2017 (Extended!)
- Notification of acceptance: 21 August 2017
- Camera-ready version: 18 September 2017
- Workshop: 28 September 2017
Submissions
We invite several types of contributions:
- Full length paper: 6 pages max
- Position paper: 6 pages max
- Extended abstract: 2 pages max
References do not count towards the page limit.
Author Package: You can use the IEEE RAS templates (LaTeX/Word)
Submission website: https://easychair.org/conferences/?conf=iros17sbli