Dynamic Multi-Team Racing: Competitive Driving on 1/10th Scale Vehicles via Learning in Simulation

Peter Werner*, Tim Seyde*, Paul Drews, Thomas Balch, Wilko Schwarting, Igor Gilitschenski, Guy Rosman, Sertac Karaman, and Daniela Rus

Abstract

Autonomous racing is a challenging task that requires vehicle handling at the dynamic limits of friction. While single-agent scenarios like Time Trials are solved competitively with classical model-based or model-free feedback control, multi-agent wheel-to-wheel racing poses several challenges, including planning over unknown opponent intentions as well as negotiating interactions under dynamic constraints. We propose to address these challenges via a learning-based approach that effectively combines model-based techniques, massively parallel simulation, and self-play reinforcement learning to enable zero-shot sim-to-real transfer of highly dynamic policies. We deploy our algorithm in wheel-to-wheel multi-agent races on 1/10th scale hardware to demonstrate the efficacy of our approach. Several instances of these hardware races are provided in a supplementary video.
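To make the training setup described above concrete, the following is a minimal sketch of self-play reinforcement learning with many parallel simulation instances. It is illustrative only: the toy dynamics, the linear-tanh Gaussian policy, the single-step REINFORCE update, and all constants (NUM_ENVS, SIGMA, SNAPSHOT_EVERY) are assumptions made for this sketch, not the paper's simulator, network architecture, or learning rule. The key elements it demonstrates are batched rollouts across parallel environments and an opponent pool of past policy snapshots.

```python
import numpy as np

# Illustrative sketch only: placeholder dynamics, policy, and update rule,
# not the implementation described in the paper.

NUM_ENVS = 1024        # parallel simulation instances stepped in lockstep
OBS_DIM, ACT_DIM = 8, 2
SIGMA = 0.1            # exploration noise of the Gaussian policy
SNAPSHOT_EVERY = 50    # how often the current policy joins the opponent pool

rng = np.random.default_rng(0)


def step_envs(ego_act, opp_act, obs):
    """Toy stand-in for one simulator step across all parallel environments.

    Reward is positive whenever the ego action tracks a reference signal more
    closely than the opponent's action does (a placeholder for race progress).
    """
    reference = np.tanh(obs[:, :ACT_DIM])
    ego_err = np.linalg.norm(ego_act - reference, axis=1)
    opp_err = np.linalg.norm(opp_act - reference, axis=1)
    return opp_err - ego_err


def reinforce_update(policy_w, obs, mean, act, reward, lr=1e-2):
    """One-step REINFORCE update for a Gaussian policy with a tanh-linear mean."""
    advantage = reward - reward.mean()
    dlogp_dmean = (act - mean) / SIGMA**2          # grad of log-prob w.r.t. mean
    dmean_dlin = 1.0 - mean**2                     # tanh derivative
    grad = obs.T @ (advantage[:, None] * dlogp_dmean * dmean_dlin) / NUM_ENVS
    return policy_w + lr * grad


policy_w = rng.normal(scale=0.1, size=(OBS_DIM, ACT_DIM))
opponent_pool = [policy_w.copy()]               # self-play: past selves as opponents

for step in range(500):
    opponent_w = opponent_pool[rng.integers(len(opponent_pool))]
    obs = rng.normal(size=(NUM_ENVS, OBS_DIM))  # batched observations
    mean = np.tanh(obs @ policy_w)
    act = mean + SIGMA * rng.normal(size=mean.shape)
    opp_act = np.tanh(obs @ opponent_w)
    reward = step_envs(act, opp_act, obs)
    policy_w = reinforce_update(policy_w, obs, mean, act, reward)
    if (step + 1) % SNAPSHOT_EVERY == 0:
        opponent_pool.append(policy_w.copy())
```

Sampling opponents from a pool of earlier snapshots, rather than always playing against the latest policy, is a common way to stabilize self-play and is used here purely as an illustrative design choice.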

Figure: Emergent behavior of the RL team (red), with panels showing overtaking, reclaiming position, a pit maneuver, altruistic blocking, and nudging an opponent off track.

This work was supported in part by Toyota Research Institute (TRI). This article solely reflects the opinions and conclusions of its authors and not those of TRI, Toyota, or any other entity. We thank them for their support. We further thank Velin Dimitrov for assistance with hardware deployment and Markus Wulfmeier for fruitful discussions on DecSARSA.