Jens Tuyls (1), Shunyu Yao (1), Sham Kakade (2), Karthik Narasimhan (1)
(1) Department of Computer Science, Princeton University
(2) John A. Paulson School of Engineering and Applied Sciences, Harvard University
Abstract
Text adventure games present unique challenges to reinforcement learning methods due to their combinatorially large action spaces and sparse rewards. The interplay of these two factors is particularly demanding because large action spaces require extensive exploration, while sparse rewards provide limited feedback. This work proposes to tackle the explore-vs-exploit dilemma using a multi-stage approach that explicitly disentangles these two strategies within each episode. Our algorithm, called eXploit-Then-eXplore (XTX), begins each episode using an exploitation policy that imitates a set of promising trajectories from the past, and then switches over to an exploration policy aimed at discovering novel actions that lead to unseen state spaces. This policy decomposition allows us to combine global decisions about which parts of the game space to return to with curiosity-based local exploration in that space, motivated by how a human may approach these games. Our method significantly outperforms prior approaches by 24% and 10% average normalized score over 12 games from the Jericho benchmark (Hausknecht et al., 2020) in both deterministic and stochastic settings, respectively. On the game of Zork1, in particular, XTX obtains a score of 103, more than a 2x improvement over prior methods, and pushes past several known bottlenecks in the game that have plagued previous state-of-the-art methods.
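To make the two-stage structure concrete, here is a minimal, hypothetical sketch of one episode under this kind of decomposition. The `exploit_policy`, `explore_policy`, `trajectory_buffer`, and `sample_switch_step` names are placeholders for illustration, not the interfaces of the released XTX code.

```python
# Illustrative sketch of an exploit-then-explore episode (placeholder objects,
# not the paper's actual implementation).
def run_episode(env, exploit_policy, explore_policy, trajectory_buffer, max_steps=100):
    """Play one episode: imitate promising past behaviour, then explore for novelty."""
    obs, done, total_reward, episode = env.reset(), False, 0, []

    # Phase 1 (exploit): a policy trained by imitation on high-return past
    # trajectories replays behaviour that reliably reaches promising game states.
    switch_step = trajectory_buffer.sample_switch_step()  # placeholder switching heuristic
    step = 0
    while not done and step < min(switch_step, max_steps):
        action = exploit_policy.act(obs)
        obs, reward, done, _ = env.step(action)
        episode.append((obs, action, reward))
        total_reward += reward
        step += 1

    # Phase 2 (explore): a curiosity-driven policy searches for novel actions and
    # unseen states, starting from wherever the exploitation phase left off.
    while not done and step < max_steps:
        action = explore_policy.act(obs)
        obs, reward, done, _ = env.step(action)
        episode.append((obs, action, reward))
        total_reward += reward
        step += 1

    # High-scoring episodes become candidate imitation targets for later episodes.
    trajectory_buffer.add(episode, total_reward)
    return total_reward
```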
Results
We outperform previous baselines on 11 out of 12 games in Jericho! You can check out sample runs of the XTX and DRRN models at the top of this page.
Following Agarwal et al. (2021), we aggregate normalized game scores with several robust metrics to show clear improvements over prior methods.
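As a rough illustration of this kind of aggregation (not our evaluation code), the sketch below normalizes per-game scores by each game's maximum Jericho score and reports the mean and interquartile mean (IQM), two of the robust metrics recommended by Agarwal et al. (2021). The run scores are placeholder values.

```python
# Hedged example of aggregating normalized game scores (placeholder run data).
import numpy as np
from scipy.stats import trim_mean

# raw_scores[game] -> final scores over independent runs (placeholder values)
raw_scores = {"zork1": [98, 103, 95], "detective": [290, 290, 288]}
max_scores = {"zork1": 350, "detective": 360}  # per-game maxima from Jericho

# Normalize each run by the game's maximum score, then stack runs x games.
normalized = np.array([
    [s / max_scores[g] for s in runs] for g, runs in raw_scores.items()
]).T  # shape: (num_runs, num_games)

mean_score = normalized.mean()
iqm_score = trim_mean(normalized.flatten(), proportiontocut=0.25)  # IQM trims top/bottom 25%
print(f"mean normalized score: {mean_score:.3f}, IQM: {iqm_score:.3f}")
```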
For any questions, please contact jtuyls@princeton.edu.