Adversarial Imitation Via Variational Inverse Reinforcement Learning

Ahmed H. Qureshi, Byron Boots and Michael C. Yip

University of California San Diego, USA


Abstract

We consider the problem of learning the reward and policy from expert examples under unknown dynamics in high-dimensional scenarios. Our proposed method builds on the framework of generative adversarial networks and introduces empowerment-regularized maximum-entropy inverse reinforcement learning to learn near-optimal rewards and policies. Empowerment-based regularization prevents the policy from overfitting to expert demonstrations, which leads to more generalized behavior and, in turn, to learning near-optimal rewards. Our method simultaneously learns the empowerment through variational information maximization along with the reward and policy under the adversarial learning formulation. We evaluate our approach on various high-dimensional complex control tasks. We also test our learned rewards on challenging transfer learning problems, where the training and testing environments differ from each other in dynamics or structure. The results show that our proposed method not only learns near-optimal rewards and policies that match expert behavior but also performs significantly better than state-of-the-art inverse reinforcement learning algorithms.

Links: GitHub | arXiv
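A compact view of the key quantity (a sketch in generic notation; the paper's symbols may differ): the empowerment \Phi(s) of a state is the maximal mutual information between an action a and the resulting next state s', approximated with the variational lower bound of Mohamed & Rezende (2015),

\Phi(s) \;=\; \max_{w}\, I(a;\, s' \mid s)
       \;\ge\; \mathbb{E}_{w(a\mid s)\,p(s'\mid s,a)}
       \big[\, \log q_\varphi(a \mid s, s') \;-\; \log w(a \mid s) \,\big],

where q_\varphi is a learned variational inverse model and w an exploration distribution. On the adversarial side, the AIRL-style discriminator has the form

D(s,a,s') \;=\; \frac{\exp f(s,a,s')}{\exp f(s,a,s') \;+\; \pi(a \mid s)},

and, loosely, EAIRL uses the learned empowerment \Phi as the potential-style shaping term inside f (a paraphrase of the construction, not the paper's exact equations).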

Transfer Learning Problems

(1) Agent dynamics are modified

- Training: IRL on a quadruped ant.

- Testing: RL on a crippled ant.

- Objective: Make the ant run forward.

[Videos: EAIRL, AIRL(s), AIRL(s,a)]

(2) Environment structure is changed

- Training: IRL on a maze with a left passage.

- Testing: RL on a maze with a right passage.

- Objective: Move the agent (yellow) to reach the target (green).

[Videos: EAIRL, AIRL(s), AIRL(s,a)]
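Both tasks follow the same reward-transfer protocol: recover a reward on the training environment, then optimize only that frozen reward on the modified test environment. A minimal sketch of that protocol is below; make_env, eairl.train, and ppo.train are hypothetical names standing in for an environment constructor, an IRL trainer, and an RL optimizer, not this repository's actual API.

def transfer_eval(source_env_id, target_env_id, expert_trajs):
    # Phase 1: run IRL (e.g., EAIRL) on the source environment to recover
    # a reward function from expert demonstrations.
    source_env = make_env(source_env_id)    # e.g., quadruped ant / left-passage maze
    reward_fn, _ = eairl.train(source_env, expert_trajs)

    # Phase 2: freeze the learned reward and run standard RL on the
    # modified target environment; no expert data is used in this phase.
    target_env = make_env(target_env_id)    # e.g., crippled ant / right-passage maze
    return ppo.train(target_env, reward_fn=reward_fn)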

Algorithms

EAIRL (our method): Empowerment-based Adversarial Inverse Reinforcement Learning

AIRL: Learning Robust Rewards with Adversarial Inverse Reinforcement Learning (Fu et al., 2018)

AIRL(s): AIRL with state-only (disentangled) rewards

AIRL(s,a): AIRL with state-action rewards
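The difference between AIRL(s) and AIRL(s,a) is only the reward's input. A minimal PyTorch-style sketch of the two reward networks (illustrative, not the authors' code):

import torch
import torch.nn as nn

class StateOnlyReward(nn.Module):
    """AIRL(s): reward depends on the state alone, r(s).
    State-only rewards are 'disentangled' from the dynamics, which is
    what makes them candidates for transfer to modified environments."""
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, state, action=None):  # action is ignored by design
        return self.net(state)

class StateActionReward(nn.Module):
    """AIRL(s,a): reward depends on the state and action, r(s,a)."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

Both variants plug into the same discriminator; only the reward's forward signature changes.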

Bibliography

@inproceedings{qureshi2018adversarial,
  title={Adversarial Imitation via Variational Inverse Reinforcement Learning},
  author={Ahmed H. Qureshi and Byron Boots and Michael C. Yip},
  booktitle={International Conference on Learning Representations},
  year={2019},
  url={https://openreview.net/forum?id=HJlmHoR5tQ},
}