SCHooL: Scalable Collaborative Human–Robot Learning

NSF National Robotics Initiative Award 1734633

September 1, 2017 – August 31, 2020

To be useful in warehouses, homes, and other environments from schools to retail stores, robots will need to learn how to robustly manipulate a wide variety of objects. For instance, to enhance the productivity of human workers, service and factory robots could keep specified surfaces clear by identifying, grasping, and relocating objects to appropriate locations. Pre-programming robots to perform such complex manipulation tasks is not feasible; instead, this project will investigate scalable robot manipulation, where multiple robots collaboratively learn from multiple humans. The project will contribute new models, algorithms, software, and experimental data to advance the state of the art in deep learning, human–robot interaction, and cloud robotics. To broadly convey the results of this research to students and the public, the project will create a book and video with the Lawrence Hall of Science and the African Robotics Network.

Two primary gaps in current understanding of co-robotic Learning from Demonstration (LfD) are: 1) the absence of a theoretical framework that encompasses humans and robots to produce cooperative learning behaviors as optimal solutions; and 2) the lack of research linking LfD with deep learning, hierarchical planning, and human–robot interaction. The project addresses these gaps with a unified theoretical framework based on Inverse Reinforcement Learning and game-theoretic models of communication between humans and robots. It treats LfD as a scalable co-robotic process in which multiple humans and multiple networked robots work in a distributed set of environments to maximize a collective set of reward functions, while humans learn to become more effective demonstrators for robots. The research can be applied to almost any context where robots can learn from human demonstrations and will be evaluated in "surface decluttering" benchmarks of increasing complexity over the course of the project.
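The core Inverse Reinforcement Learning idea behind this framework can be illustrated with a minimal toy sketch (not the project's SCIRL algorithm): a robot observes an expert demonstrator in a small chain MDP and recovers a reward, assumed linear in hypothetical one-hot state features, by matching the expert's discounted feature expectations.

```python
import numpy as np

# Toy IRL sketch: recover a linear reward r(s) = w . phi(s) from expert
# demonstrations by matching discounted feature expectations in a
# 4-state chain MDP. All names and parameters here are illustrative.
n_states, n_actions, gamma, horizon = 4, 2, 0.9, 20
phi = np.eye(n_states)            # hypothetical one-hot state features

def step(s, a):
    """Deterministic chain dynamics: action 0 moves left, 1 moves right."""
    return max(s - 1, 0) if a == 0 else min(s + 1, n_states - 1)

def feature_expectations(policy, s0=0):
    """Discounted feature counts from rolling out a deterministic policy."""
    mu, s = np.zeros(n_states), s0
    for t in range(horizon):
        mu += gamma ** t * phi[s]
        s = step(s, policy[s])
    return mu

def greedy_policy(w):
    """Value iteration under reward weights w, then the greedy policy."""
    V = np.zeros(n_states)
    for _ in range(100):
        Q = np.array([[w @ phi[s] + gamma * V[step(s, a)]
                       for a in range(n_actions)] for s in range(n_states)])
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

expert = np.array([1, 1, 1, 1])   # expert always moves right (prefers state 3)
mu_expert = feature_expectations(expert)

w = np.zeros(n_states)
for _ in range(10):
    pi = greedy_policy(w)
    w += 0.1 * (mu_expert - feature_expectations(pi))  # close the feature gap

# The learned reward peaks at the state the expert drives toward.
```

In the project's multi-agent setting this loop would be distributed across many robots and demonstrators sharing a collective set of reward functions; the sketch shows only the single-robot, single-demonstrator core.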

Research Objectives

Objective 1: Formal Framework for Scalable Collaborative IRL (SCIRL), in order to extend the initial Cooperative Inverse Reinforcement Learning (CIRL) framework to multi-agent games and collaborative learning in multiple distributed domains.

Objective 2: Deep Learning Representations of Visual Features and Reward Functions, in order to robustly extract, apply, and share deep learning parameters and visual features needed for efficient robot learning.

Objective 3: Learning Hierarchical Task and Reward Structure, in order to reduce planning horizon by partitioning complex tasks into sub-tasks.

Objective 4: Human–Robot Communication, in order to scale bi-directional communication between robots and humans to facilitate distributed learning.

Application: All of the objectives are being pursued in the surface-decluttering integrative application, using robot hardware in our labs, including a new Fetch robot purchased this year under the project budget.

Please visit here for recent results and updates.

Kickoff meeting, August 29, 2017