Fast LiDAR Informed Visual Search in Unseen Indoor Environments
Ryan Gupta*, Kyle Morgenstein*, Steven Ortega^ and Luis Sentis*
University of Texas at Austin
Human Centered Robotics Laboratory
* Department of Aerospace Engineering, ^ Department of Mechanical Engineering
Method Overview
Abstract: This paper details a system for fast visual exploration and search without prior map information. We leverage frontier-based planning with both LiDAR and visual sensing and augment it with a perception module that contextually labels points in the surroundings from wide field-of-view 2D LiDAR scans. The goal of the perception module is to recognize surrounding points that are more likely to be the search target, providing an informed prior from which to plan next-best viewpoints. The robust, map-free scan classifier used to label pixels in the robot's surroundings is trained on expert data collected with a simple cart platform equipped with a map-based classifier. We propose a novel utility function that incorporates the contextual data produced by the classifier. The resulting viewpoints encourage the robot to explore points unlikely to be permanent fixtures of the environment, allowing it to locate objects of interest faster than several existing baseline algorithms. Our proposed system is further validated in real-world search experiments for single and multiple search objects with a Spot robot in two unseen environments.
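To make the idea concrete, below is a minimal sketch of how a classifier-informed frontier utility might look. This is an illustrative assumption, not the paper's exact formulation: function names, the `"non_permanent"` label, and the weights `w_info`/`w_dist` are all hypothetical, standing in for the perception module's contextual labels and the proposed utility function.

```python
import math

def viewpoint_utility(robot_xy, frontier_xy, labeled_points,
                      sensor_range=5.0, w_info=1.0, w_dist=0.2):
    """Score a candidate frontier viewpoint (hypothetical sketch).

    Rewards viewpoints near classifier-labeled non-permanent points
    (more likely to contain the search target) and penalizes travel
    distance from the robot's current pose.
    """
    fx, fy = frontier_xy
    # Count labeled non-permanent points within sensor range of the viewpoint.
    info = sum(1 for (x, y, label) in labeled_points
               if label == "non_permanent"
               and math.hypot(x - fx, y - fy) <= sensor_range)
    # Travel cost: straight-line distance from the robot to the viewpoint.
    dist = math.hypot(fx - robot_xy[0], fy - robot_xy[1])
    return w_info * info - w_dist * dist

def next_best_viewpoint(robot_xy, frontiers, labeled_points):
    """Greedily pick the highest-utility frontier viewpoint."""
    return max(frontiers,
               key=lambda f: viewpoint_utility(robot_xy, f, labeled_points))
```

For example, given two frontiers where only one is surrounded by non-permanent points, the greedy selector steers the robot toward that cluster rather than the nearer empty space, which is the qualitative behavior the utility function is designed to produce.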
Please see the 'System Details' tab for code, training data, and implementation details.
Please see the 'Maps' tab for details about the maps used for data acquisition and testing.
Project Video
Data Acquisition Cart Platform
A figure depicting the cart used for labeled data acquisition and the Spot robot used for deployment of the proposed method. The RealSense provides odometry estimates to the cart platform, required for localization and ground-truth estimation. The Spot is equipped with an RGB-D Azure Kinect for detection during the search task. Using the cart enables easy data acquisition and reduces strain on the robot hardware.
Simulation Environments
Robot start pose is denoted by a cyan circle. (a) The Apartment (20x30m) simulation environment in the Hard configuration; the Easy configuration contains fewer objects.
(b) The Office (25x45m) environment. In the Easy setup the search target is located to the left of the red line; in the Hard setup it is located to the right.