Simulator-Free Visual Domain Randomization via Video Games

Abstract

Domain randomization is an effective computer vision technique for improving transferability of vision models across visually distinct domains exhibiting similar content. Existing approaches, however, rely extensively on tweaking complex and specialized simulation engines that are difficult to construct, subsequently affecting their feasibility and scalability. This paper introduces BehAVE, a video understanding framework that uniquely leverages the plethora of existing commercial video games for domain randomization, without requiring access to their simulation engines. Under BehAVE (1) the inherent rich visual diversity of video games acts as the source of randomization and (2) player behavior -- represented semantically via textual descriptions of actions -- guides the alignment of videos with similar content. We test BehAVE on 25 games of the first-person shooter (FPS) genre across various video and text foundation models and we report its robustness for domain randomization. BehAVE successfully aligns player behavioral patterns and is able to zero-shot transfer them to multiple unseen FPS games when trained on just one FPS game. In a more challenging setting, BehAVE manages to improve the zero-shot transferability of foundation models to unseen FPS games (up to 22%) even when trained on a game of a different genre (Minecraft).
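Behavior alignment as described above pairs video encodings with text encodings of action descriptions. A common way to implement such cross-modal alignment is a symmetric InfoNCE contrastive loss over pre-computed embeddings; the sketch below illustrates that recipe only. The function name, temperature, and loss form are our assumptions, not necessarily BehAVE's exact objective.

```python
import numpy as np

def infonce_alignment_loss(video_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss aligning video-clip embeddings with
    embeddings of textual action descriptions. Matching pairs sit on the
    diagonal of the similarity matrix. (Illustrative sketch only.)"""
    # L2-normalise both embedding sets so dot products are cosine similarities
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = (v @ t.T) / temperature  # (n, n) pairwise similarities

    def diag_cross_entropy(l):
        # -log softmax of the diagonal entries, averaged over the batch
        m = l.max(axis=1, keepdims=True)
        lse = m + np.log(np.exp(l - m).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(l - lse))

    # average the video-to-text and text-to-video directions
    return 0.5 * (diag_cross_entropy(logits) + diag_cross_entropy(logits.T))
```

With perfectly aligned pairs (identical embeddings) the loss approaches zero; mismatched pairings drive it up, which is what pulls videos of similar player behavior together across visually distinct games.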

SMG-25 Dataset

SMG-25 (Synchronized Multi-Game FPS) is our newly introduced dataset, comprising synchronized gameplay visuals and player action data from 25 commercial first-person shooter (FPS) games.

List of Games:

PUBG
Bioshock Infinite
Payday 3
Atomic Heart

Meta-Data:

Download Link:

Interactive Demo

The hover-interactive plots below demonstrate the impact of behavior-alignment training within the BehAVE framework. The t-SNE plots display encodings of short video sequences from 10 distinct FPS games. 
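Plots of this kind can be reproduced from clip encodings with scikit-learn's t-SNE. The helper below is a minimal sketch under our own assumptions (function name, perplexity, and seed are not from the paper); it accepts any (n_clips, dim) array of video embeddings and returns 2-D points for plotting.

```python
import numpy as np
from sklearn.manifold import TSNE

def project_clip_embeddings(embeddings, perplexity=10.0, seed=0):
    """Project high-dimensional video-clip encodings to 2-D with t-SNE
    for visualization. Hypothetical helper; hyper-parameters are ours."""
    tsne = TSNE(n_components=2, perplexity=perplexity,
                init="pca", random_state=seed)
    return tsne.fit_transform(np.asarray(embeddings, dtype=np.float64))
```

Colouring the returned points by source game makes the effect of behavior-alignment training visible: encodings of similar player behavior cluster together across games rather than by visual style.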


Cite

@misc{trivedi2024simulatorfree,
  title={Simulator-Free Visual Domain Randomization via Video Games},
  author={Chintan Trivedi and Nemanja Rašajski and Konstantinos Makantasis and Antonios Liapis and Georgios N. Yannakakis},
  year={2024},
  eprint={2402.01335},
  archivePrefix={arXiv}
}

Funding