Episodic Curiosity through Reachability

Rewards are sparse in the real world, and most of today's reinforcement learning algorithms struggle with such sparsity. One solution to this problem is to allow the agent to create rewards for itself, thus making rewards dense and more suitable for learning. In particular, inspired by curious behaviour in animals, observing something novel could be rewarded with a bonus. This bonus is added to the real task reward, making it possible for RL algorithms to learn from the combined reward. We propose a new curiosity method which uses episodic memory to form the novelty bonus. To determine the bonus, the current observation is compared with the observations in memory. Crucially, the comparison is based on how many environment steps it takes to reach the current observation from those in memory, which incorporates rich information about environment dynamics. This allows us to overcome the known "couch-potato" issue of prior work, where the agent finds a way to instantly gratify itself by exploiting actions that lead to hardly predictable consequences. We test our approach in visually rich 3D environments in ViZDoom, DMLab and MuJoCo. In navigational tasks from ViZDoom and DMLab, our agent outperforms the state-of-the-art curiosity method ICM. In MuJoCo, an ant equipped with our curiosity module learns locomotion purely from first-person-view curiosity.
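For concreteness, below is a minimal Python sketch of the bonus computation described above. The embedding network, the reachability comparator, the max-aggregation over memory and all constants (alpha, beta, the novelty threshold, the memory size) are illustrative assumptions, not the exact configuration used in the paper.

    import numpy as np

    # Minimal sketch of an episodic curiosity (EC) style bonus. The embedding
    # network, the reachability comparator, the max-aggregation over memory and
    # all constants below are illustrative assumptions.
    class EpisodicCuriosity:
        def __init__(self, embed, reachability, alpha=1.0, beta=0.5,
                     novelty_threshold=0.5, memory_size=200):
            self.embed = embed                # observation -> embedding vector
            self.reachability = reachability  # (embedding, embedding) -> prob. of being within k steps
            self.alpha = alpha
            self.beta = beta
            self.novelty_threshold = novelty_threshold
            self.memory_size = memory_size
            self.memory = []                  # episodic memory of embeddings

        def reset(self):
            # Episodic memory is cleared at the start of every episode.
            self.memory = []

        def bonus(self, observation):
            # Curiosity bonus for the current observation.
            e = self.embed(observation)
            if not self.memory:
                self.memory.append(e)
                return 0.0
            # How "reachable" the current observation is from anything already
            # stored in memory (higher means less novel).
            similarity = max(self.reachability(m, e) for m in self.memory)
            b = self.alpha * (self.beta - similarity)
            # Store the observation only if it is sufficiently far from memory,
            # evicting a random old entry when the buffer is full.
            if similarity < self.novelty_threshold:
                if len(self.memory) >= self.memory_size:
                    self.memory.pop(int(np.random.randint(len(self.memory))))
                self.memory.append(e)
            return b

    # Per step, the reward fed to PPO would then be:
    #   total_reward = task_reward + ec.bonus(observation)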

Overview of the episodic curiosity (EC) module

Disclaimer: The videos below are selected at random (no cherry-picking). However, the variance in sparse-reward tasks can be high, so a single video is sometimes not representative.

Section 4.1: Static Maze Goal Reaching

Dense

PPO (baseline)

PPO + ICM (baseline)

PPO + EC (our method)

PPO + Grid Oracle (uses privileged information)

Sparse

PPO (baseline)

PPO + ICM (baseline)

PPO + EC (our method)

PPO + Grid Oracle (uses privileged information)

Very Sparse

PPO (baseline)

PPO + ICM (baseline)

PPO + EC (our method)

PPO + Grid Oracle (uses privileged information)

Section 4.2: Procedurally Generated Random Maze Goal Reaching

Sparse

PPO (baseline)

PPO + ICM (baseline)

PPO + EC (our method)

PPO + Grid Oracle (uses privileged information)

Very Sparse

PPO (baseline)

PPO + ICM (baseline)

PPO + EC (our method)

PPO + Grid Oracle (uses privileged information)

Sparse + Doors

PPO (baseline)

PPO + ICM (baseline)

PPO + EC (our method)

PPO + Grid Oracle (uses privileged information)

Section 4.3: No Reward / Area Coverage

No Reward

PPO (baseline)

PPO + ICM (baseline)

PPO + EC (our method)

PPO + Grid Oracle (uses privileged information)

No Reward - Fire

PPO (baseline)

PPO + ICM (baseline)

PPO + EC (our method)

PPO + Grid Oracle (uses privileged information)

Section 4.4: Dense Reward Tasks

Dense 1

PPO (baseline)

PPO + ICM (baseline)

PPO + EC (our method)

PPO + Grid Oracle (uses privileged information)

Dense 2

PPO (baseline)

PPO + ICM (baseline)

PPO + EC (our method)

PPO + Grid Oracle (uses privileged information)

Pre-training of the R-network vs. online training

Pre-trained

Bumps into the walls quite often

Trained online

Rarely bumps into the walls -> online training is beneficial (a sketch of how R-network training data can be formed is given below)
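The R-network referred to here is the comparator that estimates whether two observations are within a few environment steps of each other. As a rough illustration, the sketch below forms its self-supervised training pairs from a trajectory, whether collected in advance (pre-training) or from the policy's own rollouts (online training); the step threshold k, the negative-sampling gap and the sampling scheme are illustrative assumptions.

    import random

    def make_reachability_pairs(trajectory, k=5, negative_gap=25, num_pairs=256):
        # Returns (obs_i, obs_j, label) triples for training the R-network as a
        # binary classifier: label 1 if the two observations are at most k steps
        # apart (reachable), 0 if they are at least `negative_gap` steps apart.
        # Assumes the trajectory is long compared to `negative_gap`.
        pairs = []
        n = len(trajectory)
        while len(pairs) < num_pairs:
            i = random.randrange(n - 1)
            if random.random() < 0.5:
                # Positive pair: temporally close observations.
                j = min(n - 1, i + random.randint(1, k))
                label = 1
            else:
                # Negative pair: observations far apart in time.
                j = i + random.randint(negative_gap, max(negative_gap, n - 1 - i))
                if j >= n:
                    continue  # resample when the trajectory tail is too short
                label = 0
            pairs.append((trajectory[i], trajectory[j], label))
        return pairs

    # In the online setting, fresh pairs are drawn from new rollouts as the
    # policy improves, so the R-network keeps up with the changing observation
    # distribution.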

Reward and memory state visualization

Section S1: MuJoCo ant locomotion from first-person-view curiosity

PPO (random, baseline)

PPO+1 (constant reward of 1 per step, i.e. survival; baseline)

PPO + EC (our method, third-person view, for visualization only)

PPO + EC (our method, first-person view, used by the curiosity module)