Autonomous Navigation with Mobile Robots using Deep Learning and the Robot Operating System

Anh Nguyen, Quang Tran

Abstract:

Autonomous navigation is a long-standing field of robotics research; it provides an essential capability for mobile robots to execute a series of tasks in the same environments where humans perform their everyday activities. In this chapter, we present a set of algorithms to train and deploy deep networks for autonomous navigation of mobile robots using the Robot Operating System (ROS). Practically, we describe three main steps to tackle this problem: (i) collecting data in simulation environments using ROS, Gazebo, and classical controllers; (ii) designing a deep network for autonomous navigation; and (iii) evaluating the results and deploying the learned policy on mobile robots in both simulation and the real world. Theoretically, we present deep learning architectures for robust navigation in normal environments (e.g., man-made houses, roads) and complex environments (e.g., collapsed houses, cities, or natural caves). We further show that the use of visual modalities such as RGB images, lidar scans, and point clouds is essential to improve the autonomy of mobile robots. Our project website and demonstration video can be found at https://sites.google.com/site/autonomousnavigationros
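To make step (i) concrete, the sketch below shows a minimal ROS data-recording node in Python, assuming the simulated robot in Gazebo publishes RGB frames on /camera/rgb/image_raw and a classical controller publishes velocity commands on /cmd_vel; both topic names and the output layout are illustrative assumptions, not the exact interface of our released code. Each incoming frame is paired with the most recent command and logged as one training example:

import csv
import os

import cv2
import rospy
from cv_bridge import CvBridge
from geometry_msgs.msg import Twist
from sensor_msgs.msg import Image

class DataRecorder(object):
    def __init__(self, out_dir='nav_data'):
        os.makedirs(out_dir, exist_ok=True)  # assumes Python 3 (e.g., ROS Noetic)
        self.out_dir = out_dir
        self.bridge = CvBridge()
        self.last_cmd = Twist()  # most recent command from the classical controller
        self.count = 0
        self.log = csv.writer(open(os.path.join(out_dir, 'labels.csv'), 'w'))
        # Topic names below are assumptions; adapt them to your robot model.
        rospy.Subscriber('/cmd_vel', Twist, self.cmd_callback)
        rospy.Subscriber('/camera/rgb/image_raw', Image, self.image_callback)

    def cmd_callback(self, msg):
        self.last_cmd = msg

    def image_callback(self, msg):
        # Pair each incoming frame with the latest velocity command.
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        name = 'frame_%06d.png' % self.count
        cv2.imwrite(os.path.join(self.out_dir, name), frame)
        self.log.writerow([name, self.last_cmd.linear.x, self.last_cmd.angular.z])
        self.count += 1

if __name__ == '__main__':
    rospy.init_node('data_recorder')
    DataRecorder()
    rospy.spin()

For step (ii), the logged commands can be regressed from images in a behavioral-cloning setup. The PilotNet-style convolutional baseline below is only a sketch of that idea; it is not the deep multimodal fusion network of the IROS 2020 paper, which additionally fuses lidar and point cloud inputs:

import torch
import torch.nn as nn

class NavNet(nn.Module):
    """Minimal CNN policy: RGB frame -> (linear.x, angular.z) command."""
    def __init__(self):
        super(NavNet, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # accepts any reasonable input resolution
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(64, 50), nn.ReLU(), nn.Linear(50, 2)
        )

    def forward(self, x):
        return self.head(self.features(x))

# One supervised training step on a dummy batch standing in for recorded pairs.
model = NavNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
images = torch.randn(8, 3, 120, 160)   # stand-in for recorded RGB frames
commands = torch.randn(8, 2)           # stand-in for [linear.x, angular.z] labels
optimizer.zero_grad()
loss = nn.functional.mse_loss(model(images), commands)
loss.backward()
optimizer.step()

For step (iii), the trained policy can be deployed as another ROS node that subscribes to the same camera topic, runs the network on each frame, and publishes the predicted Twist back to /cmd_vel, in both the Gazebo simulation and on the real robot.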

Dataset: Google Drive (58GB)

Gazebo Models: Google Drive (3GB)

Code: GitHub

Related Paper:

Anh Nguyen, Ngoc Nguyen, Kim Tran, Erman Tjiputra, Quang D. Tran. Autonomous Navigation in Complex Environments with Deep Multimodal Fusion Network. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020.

Video: