Fu-Jen Chu, Ruinian Xu, Zhenxuan Zhang, Patricio A. Vela and Maysam Ghovanloo
Georgia Institute of Technology, GA, U.S.A.
Abstract: A human-in-the-loop system is proposed to enable hands-free collaborative manipulation for persons with physical disabilities. Studies show that the cognitive burden of interfacing with a robotic assistant decreases with increased robot autonomy. Incorporating modern advances in perception with augmented reality, this paper describes a framework for obtaining high-level intent from the user to specify manipulation tasks. Augmented reality glasses provide an ego-centric perspective to the robot and a means for it to give visual feedback through a summary of robot affordances on a menu. The system processes the sensory input to interpret the user's environment. A tongue drive system serves as the input modality for triggering task execution by the robotic arm. Several manipulation experiments are performed and compared against Cartesian control, as well as against reported state-of-the-art approaches. The results demonstrate competitive performance with minimal user input requirements.
IROS 2018: paper link
Block diagram of data flow for proposed system modules:
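The block diagram corresponds to a perceive, present, select, and execute loop: the AR glasses supply an ego-centric view, the perception module summarizes available affordances on a menu, the tongue drive captures the user's selection, and the arm carries out the chosen task. The minimal Python sketch below illustrates that loop; the class and method names (ARGlasses, AffordanceDetector, TongueDrive, RobotArm) are illustrative placeholders, not the authors' implementation.

```python
# Hypothetical sketch of the data flow between system modules.
# All classes below are stand-ins with stubbed behavior, not the real API.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Affordance:
    """A manipulation option the robot can offer (e.g. grasp, push)."""
    label: str
    object_id: int


class ARGlasses:
    """Stand-in for the AR headset: ego-centric camera in, visual menu out."""
    def capture_frame(self):
        return None  # ego-centric RGB image in a real system

    def show_menu(self, options: List[Affordance]) -> None:
        print("Menu:", [o.label for o in options])


class AffordanceDetector:
    """Stand-in for the perception module that interprets the user's environment."""
    def detect(self, frame) -> List[Affordance]:
        # A real system would run object/affordance recognition on the frame.
        return [Affordance("grasp", 0), Affordance("push", 1)]


class TongueDrive:
    """Stand-in for the tongue drive input device providing high-level intent."""
    def read_selection(self, n_options: int) -> Optional[int]:
        return 0 if n_options > 0 else None  # e.g. user selects the first item


class RobotArm:
    """Stand-in for the manipulator executing the selected task."""
    def execute(self, affordance: Affordance) -> None:
        print(f"Executing '{affordance.label}' on object {affordance.object_id}")


def control_loop(glasses: ARGlasses, detector: AffordanceDetector,
                 tds: TongueDrive, arm: RobotArm, steps: int = 1) -> None:
    for _ in range(steps):
        frame = glasses.capture_frame()            # ego-centric view
        options = detector.detect(frame)           # perceive affordances
        glasses.show_menu(options)                 # visual feedback to user
        choice = tds.read_selection(len(options))  # user's high-level intent
        if choice is not None:
            arm.execute(options[choice])           # autonomous task execution


if __name__ == "__main__":
    control_loop(ARGlasses(), AffordanceDetector(), TongueDrive(), RobotArm())
```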
Experimental setup:
[BibTex]