Learning on the Job: 

Self-Rewarding Offline-to-Online Finetuning for 

Industrial Insertion of Novel Connectors from Vision

Ashvin Nair*, Brian Zhu*, Gokul Narayanan, Eugen Solowjow, Sergey Levine

University of California, Berkeley; Siemens Corporation

*First two authors contributed equally.

Abstract

Learning-based methods in robotics hold the promise of generalization, but what can be done if a learned policy does not generalize to a new situation? In principle, if an agent can at least evaluate its own success (i.e., with a reward classifier that generalizes well even when the policy does not), it could actively practice the task and finetune the policy in this situation. We study this problem in the setting of industrial insertion tasks, such as inserting connectors into sockets and setting screws. Existing algorithms rely on precise localization of the connector or socket and carefully managed physical setups, such as assembly lines, to succeed at the task. But in unstructured environments such as homes, or even some industrial settings, robots cannot rely on precise localization and may be tasked with previously unseen connectors. Offline reinforcement learning on a variety of connector insertion tasks is a potential solution, but what if the robot is tasked with inserting a previously unseen connector?

In such a scenario, we will still need methods that can robustly solve such tasks with online practice. One of the main observations we make in this work is that, with a suitable representation learning and domain generalization approach, it can be significantly easier for the reward function to generalize to a new but structurally similar task (e.g., inserting a new type of connector) than for the policy. This means that a learned reward function can be used to facilitate the finetuning of the robot's policy in situations where the policy fails to generalize zero-shot but the reward function generalizes successfully. We show that such an approach can be instantiated in the real world: pretrained on 50 different connectors, and successfully finetuned to new connectors via the learned reward function.
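The online practice loop described above can be sketched in a few lines: the policy acts, and the learned reward classifier, rather than instrumented success detection, labels each transition for finetuning. The sketch below is a toy illustration under stated assumptions, not the paper's implementation: the "image" is a scalar distance to the socket, and `reward_classifier` is a hypothetical stand-in for the learned success classifier.

```python
import random

class ReplayBuffer:
    """Minimal buffer of (obs, action, reward, next_obs) tuples."""
    def __init__(self):
        self.transitions = []

    def add(self, obs, action, reward, next_obs):
        self.transitions.append((obs, action, reward, next_obs))

def reward_classifier(image):
    # Stand-in for the learned success classifier: in this toy setup,
    # "success" means the connector offset has reached zero.
    return 1.0 if image == 0 else 0.0

def self_rewarding_finetune(policy, buffer, episodes=5, horizon=10, seed=0):
    """Sketch of the online loop: rewards are self-generated by the
    classifier, so no human labeling or instrumented setup is needed."""
    rng = random.Random(seed)
    for _ in range(episodes):
        obs = rng.randint(1, 5)          # toy "image": offset from socket
        for _ in range(horizon):
            action = policy(obs)
            next_obs = max(0, obs - action)
            r = reward_classifier(next_obs)   # self-generated reward label
            buffer.add(obs, action, r, next_obs)
            obs = next_obs
            if r == 1.0:                  # episode ends on success
                break
        # (a real implementation would update the policy on `buffer` here)
    return buffer
```

Usage: with a trivial policy such as `lambda obs: 1`, each episode eventually reaches the socket and the buffer accumulates success-labeled transitions that a finetuning step could train on.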

Summary Video

Inspecting Vision Networks (Section VI.C)

Grad-CAM visualization of the policy before and after finetuning: The first row shows input images from a single trajectory. The next row shows a Grad-CAM heatmap of the Y-axis policy output after offline training, before the policy has been trained on any examples of the test connector; the policy is attending to a spurious corner of the image. The final row shows the Grad-CAM heatmap after finetuning; the policy now attends to the connector and socket positions.

Grad-CAM visualization of reward models: The first row shows test input images, all of which have ground-truth reward 1. The next two rows show the Grad-CAM heatmaps of reward classifiers trained with standard ERM and with DAIB. The classifier trained with DAIB focuses on semantically meaningful regions of the connector and socket, while the classifier trained without DAIB often attends to spurious regions of the image.
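The Grad-CAM heatmaps above are produced by a simple computation: average the gradients of the chosen scalar output over each feature map to get per-channel weights, then take a ReLU of the weighted sum of the activations. A minimal NumPy sketch follows; the activations and gradients here are placeholders, whereas in the paper they would come from a convolutional layer of the policy or reward network.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Compute a Grad-CAM heatmap for one convolutional layer.

    activations: (C, H, W) feature maps at the chosen layer.
    gradients:   (C, H, W) gradients of the scalar output of interest
                 (e.g., the Y-axis action or the success logit)
                 with respect to those feature maps.
    Returns an (H, W) heatmap normalized to [0, 1].
    """
    # Channel weights: global-average-pool the gradients.
    weights = gradients.mean(axis=(1, 2))                # shape (C,)
    # Weighted sum of feature maps across channels, then ReLU.
    cam = np.einsum('c,chw->hw', weights, activations)
    cam = np.maximum(cam, 0.0)
    # Normalize for visualization (skip if the map is all zeros).
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

Overlaying the resulting heatmap on the input image shows which spatial regions drive the network's output, which is how the spurious-attention failure of the offline policy and the ERM-trained classifier can be diagnosed.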