Outcome-Driven Reinforcement Learning via Variational Inference









Tim G. J. Rudner*, Vitchyr H. Pong*, Rowan McAllister, Yarin Gal, Sergey Levine

In Advances in Neural Information Processing Systems 34 (NeurIPS 2021).

Abstract

While reinforcement learning algorithms provide automated acquisition of optimal policies, practical application of such methods requires a number of design decisions, such as manually designing reward functions that not only define the task, but also provide sufficient shaping to accomplish it. In this paper, we view reinforcement learning as inferring policies that achieve desired outcomes, rather than as a problem of maximizing rewards. To solve this inference problem, we establish a novel variational inference formulation that allows us to derive a well-shaped reward function which can be learned directly from environment interactions. From the corresponding variational objective, we also derive a new probabilistic Bellman backup operator and use it to develop an off-policy algorithm to solve goal-directed tasks. We empirically demonstrate that this method eliminates the need to hand-craft reward functions for a suite of diverse manipulation and locomotion tasks and leads to effective goal-directed behaviors.
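
To make the idea in the abstract concrete, the sketch below illustrates one way the pieces can fit together: a goal-conditioned TD target in which a hand-crafted reward is replaced by a shaped reward derived from a learned dynamics model (here, the log-likelihood of the desired outcome under that model). This is a minimal illustrative sketch, not the authors' implementation; all names (DynamicsModel, shaped_reward, td_target, q_target) are hypothetical, and a fixed discount is used in place of the probabilistic Bellman backup derived in the paper.

import torch
import torch.nn as nn


class DynamicsModel(nn.Module):
    # Hypothetical learned dynamics model: predicts a diagonal Gaussian over
    # the next state given the current state and action.
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean = nn.Linear(hidden, state_dim)
        self.log_std = nn.Linear(hidden, state_dim)

    def log_prob(self, state, action, target_state):
        # Log-likelihood that taking `action` in `state` leads to `target_state`.
        h = self.trunk(torch.cat([state, action], dim=-1))
        dist = torch.distributions.Normal(self.mean(h), self.log_std(h).exp())
        return dist.log_prob(target_state).sum(dim=-1)


def shaped_reward(dynamics, state, action, goal):
    # One hedged reading of "a well-shaped reward learned from environment
    # interactions": how likely the desired outcome `goal` is under the
    # learned dynamics model, instead of a hand-crafted reward function.
    return dynamics.log_prob(state, action, goal)


def td_target(q_target, dynamics, state, action, next_state, next_action,
              goal, discount=0.99):
    # Goal-conditioned TD target using the shaped reward above. The paper
    # derives a probabilistic Bellman backup with a variational (learned)
    # discount; a fixed discount is used here only to keep the sketch short.
    reward = shaped_reward(dynamics, state, action, goal)
    with torch.no_grad():
        next_q = q_target(torch.cat([next_state, next_action, goal], dim=-1))
    return reward + discount * next_q.squeeze(-1)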

Link to Paper

Behaviors Learned with ODAC


Reward Shaping


Sawyer Pushing

A robot must push an object to a goal location. At the end of training, we see that the policies trained with ODAC (left) learn to push the object to the goal location, whereas the policies trained with sparse rewards (right) do not.

ODAC

SAC + Sparse Rewards

Ant Locomotion

In this task, a quadruped "ant" robot must move to a target location and match a given pose. We see that the policy trained with ODAC (left) learns to solve the task faster than a Soft Actor-Critic (SAC) policy trained with a sparse reward (right).

ODAC

Policy snapshots at epochs 0, 40, 80, 120, 160, and 200.

SAC + Sparse Rewards

Policy snapshots at epochs 0, 40, 80, 120, 160, and 200.

BibTeX


@InProceedings{rudner2021odrl,
  title     = {{O}utcome-{D}riven {R}einforcement {L}earning via {V}ariational {I}nference},
  author    = {Tim G. J. Rudner and Vitchyr H. Pong and Rowan McAllister and Yarin Gal and Sergey Levine},
  booktitle = {Advances in Neural Information Processing Systems 34},
  year      = {2021},
}