Human-Robot Interaction in a Shared Augmented Reality Workspace

Shuwen Qiu*, Hangxin Liu*, Zeyu Zhang, Yixin Zhu, Song-Chun Zhu (* equal contribution)

[PDF] [CODE]

Abstract

We design and develop a new shared Augmented Reality (AR) workspace for Human-Robot Interaction (HRI), which establishes bi-directional communication between human agents and robots. In a prototype system, the shared AR workspace enables a shared perception: a physical robot not only perceives the virtual elements in its own view but also infers the utility of the human agent, i.e., the cost for the human to perceive and interact with virtual elements in AR, by sensing the human agent's gaze and pose. This new HRI design also affords a shared manipulation, wherein the physical robot can control and alter virtual objects in AR as an active agent; crucially, the robot can proactively interact with human agents instead of merely executing received commands. In experiments, we design a resource collection game that qualitatively demonstrates how a robot perceives, processes, and manipulates in AR, and quantitatively evaluates the efficacy of HRI using the shared AR workspace. We further discuss how the system can enable future HRI studies that would otherwise be challenging.

The proposed system of a shared AR workspace. (a) A mobile robot platform with an RGB-D sensor and a Lidar for perception. (b) A human agent with an AR headset (Microsoft HoloLens). By composing (c) the robot-to-human transformation, estimated by a 3D human pose detector, with (d) the human-to-hologram transformation provided by the AR headset, (e) the poses of holograms can be expressed in the robot's coordinate frame. Via visual perspective taking, the robot estimates the utility/cost of a human agent to interact with a particular hologram: the yellow, light blue, and dark blue regions indicate where AR holograms are directly seen by the human agent, seen after changing view angles, and occluded, respectively. (f) The system also endows the robot with the ability to manipulate the augmented holograms and update the shared perception, enabling more seamless HRI in AR.
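To make the transform composition in (c)-(e) concrete, here is a minimal NumPy sketch. It assumes all transforms are 4x4 homogeneous matrices, as is standard in robotics; the function name `compose_hologram_pose` and the example poses are hypothetical illustrations, not the system's actual API.

```python
import numpy as np

def compose_hologram_pose(T_robot_human: np.ndarray,
                          T_human_hologram: np.ndarray) -> np.ndarray:
    """Express a hologram's pose in the robot's coordinate frame.

    Both inputs are 4x4 homogeneous transforms:
      T_robot_human    -- robot -> human, e.g., from a 3D human pose detector
      T_human_hologram -- human -> hologram, reported by the AR headset
    """
    return T_robot_human @ T_human_hologram

# Example: a hologram 1 m in front of a human standing 2 m ahead of the robot.
T_robot_human = np.eye(4)
T_robot_human[:3, 3] = [2.0, 0.0, 0.0]

T_human_hologram = np.eye(4)
T_human_hologram[:3, 3] = [1.0, 0.0, 0.0]

T_robot_hologram = compose_hologram_pose(T_robot_human, T_human_hologram)
print(T_robot_hologram[:3, 3])  # -> [3. 0. 0.], i.e., 3 m ahead of the robot
```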

Experiment

(a) Experimental environment

(b) Costs of seeing various holograms

We design a resource collection game in the shared AR workspace to demonstrate the efficacy of the system. Fig. (a) depicts the environment. Six holograms, rendered as point clouds and highlighted in circles with zoomed-in views, are placed around the human agent (marked by a red skeleton at the center of the room), whose facing direction is indicated in yellow. Some holograms are easy to see, whereas others are harder to find due to their placement in 3D or occlusion (e.g., object 6). The human agent's task is to collect all the holograms and move them to the table as quickly as possible. The robot, stationed at the green dot, helps the human collect the resources.

In this game, the robot first estimates the cost for the human agent to see each hologram and whether it is occluded; the result is shown in Fig. (b). In our prototype system, the robot first helps with the occluded holograms and then switches to the hologram with the highest viewing cost.
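This prioritization can be sketched as a simple sort over the estimated costs. The `Hologram` structure and the numeric costs below are illustrative assumptions, not the paper's actual data structures or measurements.

```python
from dataclasses import dataclass

@dataclass
class Hologram:
    name: str
    cost: float      # estimated cost for the human to see this hologram
    occluded: bool   # True if the hologram is hidden from the human's view

def help_order(holograms: list[Hologram]) -> list[Hologram]:
    """Order holograms for the robot to assist with: occluded ones first,
    then the remaining ones by descending viewing cost."""
    return sorted(holograms, key=lambda h: (not h.occluded, -h.cost))

# Example with made-up costs for three of the six holograms in Fig. (a):
holograms = [
    Hologram("object 1", cost=0.2, occluded=False),
    Hologram("object 4", cost=0.7, occluded=False),
    Hologram("object 6", cost=0.5, occluded=True),
]
print([h.name for h in help_order(holograms)])
# -> ['object 6', 'object 4', 'object 1']
```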

Qualitative Results

(a) Human agent’s egocentric view captured by the HoloLens

(b) Third-person view of robot motions

(c) Robot’s knowledge and plans

Quantitative Results

Box plot of all participants' collection times in the two conditions. Human: subjects finish the task by themselves; Human + Robot: the robot assists the human through the shared AR workspace system.

Subjects helped by the robot are significantly more efficient in collecting all resources.