Hierarchical Learning for Robotic Assembly Tasks Leveraging LfD 

Siddharth Singh, Qing Chang and Tian Yu

 

Abstract

Robotic assembly in manufacturing settings is a special type of long-horizon Task and Motion Planning (TAMP) problem. While devising a motion plan for the robot is itself challenging, identifying tasks and learning them adds to the problem's complexity. This paper proposes a Hierarchical Learning (HL) based approach that leverages a multi-level structure to seamlessly integrate task identification and sequencing with robot motion planning. Given the final assembly goal, the higher-level agent focuses on comprehending tasks and learning task plans: it generates sequences of sub-tasks, while the lower-level agent concentrates on executing the current sub-task. The higher-level agent employs a goal-driven reinforcement learning (RL) approach to master the sequencing task, allowing it to adapt to unseen assemblies. Meanwhile, the lower level adopts a Learning from Demonstration (LfD) approach for motion planning, which can learn primitive skills from a one-time demonstration and intelligently combine these primitive skills for complicated tasks. The critical contribution of this work lies in the development of a novel method capable of comprehending and executing long-horizon, goal-driven assembly tasks without relying on expert demonstrations or an explicit description of the whole assembly. The proposed approach is validated in both simulation and on a physical setup.
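The two-level structure described above can be sketched as a simple control loop: the higher-level agent proposes the next sub-task toward the assembly goal, and the lower-level agent executes it as a primitive skill. The following is a minimal illustrative sketch, not the paper's implementation; all names (`high_level_policy`, `low_level_execute`, the `pick_place`/`stack` primitives, and the block-to-pose state representation) are assumptions for illustration, with the learned RL and LfD components replaced by stubs.

```python
# Illustrative sketch of the hierarchical loop: high-level sub-task
# sequencing over low-level primitive-skill execution.
# All names here are hypothetical, not from the paper's code.

PRIMITIVES = {"pick_place", "stack"}

def high_level_policy(state, goal):
    """Stand-in for the goal-driven RL agent: return the next sub-task
    (primitive name, block, target pose), or None when the goal is met."""
    for block, target in goal.items():
        if state.get(block) != target:
            # Crude rule in place of a learned policy: elevated targets
            # require the stacking primitive, others a pick-and-place.
            skill = "stack" if target[2] > 0 else "pick_place"
            return (skill, block, target)
    return None

def low_level_execute(state, subtask):
    """Stand-in for the LfD motion planner: 'executes' the primitive by
    updating the world state (a real system would drive the arm)."""
    skill, block, target = subtask
    assert skill in PRIMITIVES
    state[block] = target
    return state

def assemble(state, goal, max_steps=20):
    """Alternate between sub-task selection and execution until the
    assembly goal is reached or the step budget is exhausted."""
    for _ in range(max_steps):
        subtask = high_level_policy(state, goal)
        if subtask is None:
            return state  # assembly goal reached
        state = low_level_execute(state, subtask)
    raise RuntimeError("goal not reached within step budget")

# Toy example: move block A, then stack block B on top of it.
state = {"A": (0, 0, 0), "B": (1, 0, 0)}
goal = {"A": (2, 0, 0), "B": (2, 0, 1)}
print(assemble(state, goal))  # → {'A': (2, 0, 0), 'B': (2, 0, 1)}
```

The point of the separation is that the high-level stub can be swapped for a goal-conditioned RL policy and the low-level stub for demonstration-learned primitives without changing the loop itself.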

Video Abstract

 Methodology

Training

Pick & Place

 Stacking

Stacking around Wall

Comparison against Greedy Reward Shaping

Execution

Execution in Simulation

Execution task in the simulation setup.

Execution on Physical Setup

Execution task on the physical setup.

Example of execution using a Kinova Gen3 (7-DoF) arm.