Agents that Listen: High-Throughput Reinforcement Learning with Multiple Sensory Systems

Shashank Hegde¹, Anssi Kanervisto², Aleksei Petrenko¹

¹University of Southern California, ²University of Eastern Finland

IEEE Conference on Games 2021

Abstract

Humans and other intelligent animals evolved highly sophisticated perception systems that combine multiple sensory modalities. In contrast, state-of-the-art artificial agents rely mostly on visual inputs or structured low-dimensional observations provided by instrumented environments. Learning to act from combined visual and auditory inputs remains a young area of research that has not been explored beyond simple scenarios.

To facilitate progress in this area, we introduce a new version of the VizDoom simulator that provides a highly efficient learning environment with raw audio observations. We study the performance of different model architectures in a series of tasks that require the agent to recognize sounds and execute instructions given in natural language. Finally, we train our agent to play the full game of Doom and find that it can consistently defeat a traditional vision-based adversary.
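As a concrete illustration, the snippet below shows how raw audio observations can be requested alongside visual frames through ViZDoom's Python API. This is a minimal sketch assuming ViZDoom 1.1.9 or later (the release that shipped the audio buffer introduced in this work); the scenario config path and the action vector are placeholders.

# Minimal sketch: reading raw audio observations from ViZDoom.
# Assumes ViZDoom >= 1.1.9; the scenario path and action are placeholders.
import vizdoom as vzd

game = vzd.DoomGame()
game.load_config("my_scenario.cfg")       # placeholder scenario config
game.set_audio_buffer_enabled(True)       # request raw audio observations
game.set_audio_sampling_rate(vzd.SamplingRate.SR_22050)
game.set_audio_buffer_size(4)             # buffer length in game tics
game.init()

game.new_episode()
while not game.is_episode_finished():
    state = game.get_state()
    frame = state.screen_buffer           # visual observation
    audio = state.audio_buffer            # raw PCM samples (numpy array)
    game.make_action([0, 0, 1])           # placeholder action vector
game.close()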

Music recognition scenario

The agent is looking for an object that plays the target music track.

Voice instruction scenarios

Here the agent follows instructions given in natural language, which it receives as raw audio samples. The video on the right demonstrates a harder version of the task in which the instruction is given only once, at the beginning of the episode.
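To recognize sounds or spoken instructions, the raw audio buffer must first be encoded into features the policy network can consume. The sketch below shows one simple option, a log-magnitude FFT encoding followed by a small MLP whose output can be concatenated with the visual encoder's embedding. It illustrates the general approach rather than the exact architecture evaluated in the paper, and all names in it are hypothetical.

# Illustrative sketch: encoding a raw audio buffer for a policy network.
# Generic example, not the paper's exact architecture.
import numpy as np
import torch
import torch.nn as nn

def audio_features(audio_buffer: np.ndarray) -> torch.Tensor:
    """Turn a stereo int16 buffer of shape (samples, 2) into a flat
    vector of log-magnitude FFT features, one spectrum per channel."""
    x = audio_buffer.astype(np.float32) / 32768.0        # normalize int16 PCM
    spec = np.abs(np.fft.rfft(x, axis=0))                # magnitude spectrum
    return torch.from_numpy(np.log1p(spec).T.flatten())  # shape: (2 * n_bins,)

class AudioEncoder(nn.Module):
    """Small MLP mapping FFT features to an embedding that can be
    fused with the visual encoder's output."""
    def __init__(self, in_dim: int, out_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.ReLU(),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.net(feats)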

Duel scenario (full game)

Human player POV while playing against an agent that can see and hear.

Citation

@inproceedings{hegde2021agents,
  title={Agents that Listen: High-Throughput Reinforcement Learning with Multiple Sensory Systems},
  author={Hegde, Shashank and Kanervisto, Anssi and Petrenko, Aleksei},
  booktitle={IEEE Conference on Games},
  year={2021}
}