OPAL: Offline Primitive Discovery for Accelerating Offline Reinforcement Learning

[Paper] [Slides] [Poster]

Abstract

Reinforcement learning (RL) has achieved impressive performance in a variety of online settings in which an agent's ability to query the environment for transitions and rewards is effectively unlimited. However, in many practical applications the situation is reversed: an agent may have access to large amounts of undirected offline experience data, while access to the online environment is severely limited. In this work, we focus on this offline setting. Our main insight is that, when presented with offline data composed of a variety of behaviors, an effective way to leverage this data is to extract a continuous space of recurring and temporally extended primitive behaviors before using these primitives for downstream task learning. Primitives extracted in this way serve two purposes: they delineate the behaviors that are supported by the data from those that are not, making them useful for avoiding distributional shift in offline RL; and they provide a degree of temporal abstraction, which reduces the effective horizon, yielding better learning in theory and improved offline RL in practice. In addition to benefiting offline policy optimization, we show that offline primitive learning in this way can also be leveraged to improve few-shot imitation learning as well as exploration and transfer in online RL on a variety of benchmark domains.
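The extraction step can be pictured as a small sequential autoencoder over fixed-length sub-trajectories. The sketch below is a minimal PyTorch illustration of that idea under our own assumptions, not the paper's exact implementation: the module names (`PrimitiveEncoder`, `PrimitivePolicy`), network sizes, and the primitive horizon `c` are illustrative. An encoder maps a length-`c` window of states and actions to a latent primitive `z`, a low-level policy decodes actions from (state, `z`), and a KL term keeps the latent space compact.

```python
# Minimal sketch (not the authors' code): learn a continuous space of
# temporally extended primitives from offline sub-trajectories with a
# VAE-style autoencoder. The latent z later serves as the action space
# for a downstream high-level policy.
import torch
import torch.nn as nn


class PrimitiveEncoder(nn.Module):
    """q(z | s_0:c, a_0:c): encode a length-c sub-trajectory into a latent primitive."""

    def __init__(self, state_dim, action_dim, latent_dim, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(state_dim + action_dim, hidden,
                          batch_first=True, bidirectional=True)
        self.mean = nn.Linear(2 * hidden, latent_dim)
        self.log_std = nn.Linear(2 * hidden, latent_dim)

    def forward(self, states, actions):           # (B, c, s_dim), (B, c, a_dim)
        h, _ = self.rnn(torch.cat([states, actions], dim=-1))
        h = h[:, -1]                               # summary of the whole window
        return self.mean(h), self.log_std(h).clamp(-5, 2)


class PrimitivePolicy(nn.Module):
    """pi(a | s, z): low-level decoder that reproduces the window's actions."""

    def __init__(self, state_dim, latent_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state, z):
        return self.net(torch.cat([state, z], dim=-1))


def primitive_loss(encoder, decoder, states, actions, beta=0.1):
    """Autoencoding objective: reconstruct actions from (state, z), KL-regularize z."""
    mean, log_std = encoder(states, actions)
    std = log_std.exp()
    z = mean + std * torch.randn_like(std)         # reparameterized sample
    z_rep = z.unsqueeze(1).expand(-1, states.shape[1], -1)
    recon = ((decoder(states, z_rep) - actions) ** 2).mean()
    kl = 0.5 * (mean ** 2 + std ** 2 - 2 * log_std - 1).sum(-1).mean()
    return recon + beta * kl
```

Once the encoder and decoder are trained on the offline data, a high-level policy that outputs a latent `z` (executed for `c` steps through the decoder) can be trained with an off-the-shelf offline RL algorithm such as CQL. Acting in the latent space both restricts the agent to behaviors supported by the data and shortens the effective decision horizon by a factor of `c`, which is the temporal abstraction the abstract refers to.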

Environments

Antmaze medium (diverse dataset)

Antmaze large (diverse dataset)

Kitchen (mixed and partial datasets)

Antmaze diverse dataset visualization

Fig: Different ant trajectories are indicated by different colors

Results: Offline RL

State visitation heatmaps for Antmaze policies

Antmaze medium (diverse): CQL (53.7%) vs. CQL+OPAL (81.1%)

Antmaze large (diverse): CQL (14.9%) vs. CQL+OPAL (70.3%)

Results: Few-shot IL

Results: Online RL

Results: Online multi-task transfer



Fig: Visualization of the source task and transfer tasks in MT10