Ryan Gupta, Minkyu Kim, Juliana T. Rodriguez, Kyle Morgenstein and Luis Sentis
University of Texas at Austin
Human Centered Robotics Laboratory
Presented at the 2023 Workshop on Integrated Perception, Planning, and Control for Physically and Contextually-Aware Robot Autonomy at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
A Google Drive folder contains videos and trajectory plots for each trial, project slides showing the full tabular results, and the complete data model.
Initial Condition 1 with hard-difficulty object locations. The A1 is given the red global waypoints and the HSR the green global waypoints; the global paths are shown as lines connecting each colored set of waypoints. Neither object is found because the global path plans never bring the suitcases into view: they are hidden behind a desk and chairs.
The figure title represents {Planner Setting} - {Initial Condition} - {Obj1/Obj2 Difficulty}. It displays the robot trajectories overlaid on the static map during execution of the trial, with target object locations marked by stars. Trajectory plots exist for all experimental trials and can be found in the Google Drive folder.
The A1 successfully detects the suitcase along its path; however, the HSR fails in this trial due to a navigation failure (it bumps into an object). Because Heuristic Visual CPP requires the robots to traverse longer distances over longer periods of time, such autonomy failures are more likely to occur.
While the CPP methods fail to find either object, our proposed method robustly detects the hard-difficulty object locations. Each object is found with the help of curiosity. The black arrows represent the next goal point for each agent, as sent from the Waypoint Manager. While following its global path plan, each robot takes a priority waypoint that deviates from that plan.
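The interleaving described above, where a curiosity-triggered priority waypoint temporarily preempts the global plan, can be sketched as a simple queueing policy. This is a hypothetical illustration, not the actual Waypoint Manager implementation; the class and method names are assumptions.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class WaypointManager:
    """Hypothetical sketch: serve global-plan waypoints in order, but let a
    curiosity-triggered priority waypoint preempt the global plan."""
    global_plan: deque                                 # queue of (x, y) global waypoints
    priority: list = field(default_factory=list)       # stack of priority waypoints

    def add_priority(self, wp):
        # A curiosity detection inserts a deviation target.
        self.priority.append(wp)

    def next_goal(self):
        # Priority waypoints are served first; otherwise the robot
        # resumes the remaining global waypoints.
        if self.priority:
            return self.priority.pop()
        if self.global_plan:
            return self.global_plan.popleft()
        return None

# Example: a robot deviates to inspect an occluded region, then resumes.
mgr = WaypointManager(global_plan=deque([(0, 0), (5, 0), (5, 5)]))
assert mgr.next_goal() == (0, 0)
mgr.add_priority((2, -1))          # curiosity: inspect behind the desk
assert mgr.next_goal() == (2, -1)  # the deviation is served first
assert mgr.next_goal() == (5, 0)   # then the global plan resumes
```

The key design choice in this sketch is that the global plan is never discarded on a deviation; the robot returns to its remaining waypoints once the priority goal is consumed.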
All videos and trajectory plots can be found at this Google Drive link:
Table describing the success rate, failure modes, and their relative frequencies for each of the three planner settings.
Table showing average path lengths, in meters, for each robot and combined, as a function of initial condition.
Table describing the success rate for each planner setting as a function of object location difficulty.
A portion of the data model showing the class Experiment as a high-level entity. Environment setting, robots, and central search server are three distinct subclasses of the experiment whose properties and metadata describe the experiment at hand. Three different datasets derive from the experiment, one for each of the three planner settings; their titles appear as metadata, and their component files are noted in the full data model. The complete data model provides a clear depiction of the origin and processing of the data presented in this work and, importantly, adds transparency and confidence to the research process. For the full data model, please see the Data-Model.pdf file in the Google Drive folder for this project.
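The entity structure described above can be sketched with Python dataclasses. The field names below are illustrative assumptions; Data-Model.pdf in the Google Drive folder is the authoritative schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EnvironmentSetting:
    """Hypothetical properties of the environment subclass."""
    initial_condition: int       # e.g. 1
    object_difficulty: str       # e.g. "hard"

@dataclass
class Robot:
    """Hypothetical properties of the robots subclass."""
    name: str                    # e.g. "A1" or "HSR"
    waypoint_color: str          # e.g. "red" or "green"

@dataclass
class CentralSearchServer:
    """Hypothetical properties of the central search server subclass."""
    planner_setting: str         # one of the three planner settings

@dataclass
class Experiment:
    """High-level entity tying the three subclasses together."""
    environment: EnvironmentSetting
    robots: List[Robot]
    server: CentralSearchServer
    dataset_titles: List[str] = field(default_factory=list)  # one per planner setting

# Example instance for one trial configuration.
exp = Experiment(
    environment=EnvironmentSetting(initial_condition=1, object_difficulty="hard"),
    robots=[Robot("A1", "red"), Robot("HSR", "green")],
    server=CentralSearchServer(planner_setting="Heuristic Visual CPP"),
)
```

This sketch only mirrors the hierarchy named in the caption (one experiment entity with three subordinate entities and derived datasets), not the metadata fields of the actual model.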