Peng Xu , Zhikai Wang , Liang Ding , Zhengyang Li , Junyi Shi , Haibo Gao , Guangjun Liu, Yanlong Huang
Shared control, which combines human and robot intelligence, has been deemed a promising direction for complementing the perception and learning capabilities of legged robots. However, previous work on human-robot control for legged robots is often limited to simple tasks, such as controlling movement direction, posture, or single-leg motion, and requires extensive operator training. To facilitate the transfer of human intelligence to legged robots in unstructured environments, this paper presents a user-friendly closed-loop shared control framework. Specifically, a rough navigation path sketched by the operator is smoothed and optimized to produce a path with reduced traversing cost. The traversability of the generated path is assessed using fast Monte Carlo tree search (FastMCTS) and fed back to the operator through an intuitive image interface and force feedback, helping the operator make decisions quickly and closing the shared-control loop. Simulation and hardware experiments on a hexapod robot show that the proposed framework fully exploits the advantages of human-machine collaboration, achieving a nearly 33% higher task-completion rate in navigation experiments than a traditional control framework.
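As a rough illustration of the smoothing step described above, the sketch below uses a classic gradient-descent path smoother: each interior waypoint is pulled toward its neighbors (smoothness) while staying close to the operator's original sketch. This is a minimal stand-in, not the paper's actual optimizer, and the weights `alpha`, `beta`, and `iterations` are illustrative assumptions.

```python
def smooth_path(path, alpha=0.5, beta=0.3, iterations=200):
    """Toy stand-in for the paper's path optimizer (assumption, not the
    published method): iteratively relax each interior waypoint toward a
    compromise between the operator's sketch and its smoothed neighbors."""
    smoothed = [list(p) for p in path]
    for _ in range(iterations):
        for i in range(1, len(path) - 1):      # endpoints stay fixed
            for d in range(len(path[0])):      # each coordinate
                smoothed[i][d] += (
                    alpha * (path[i][d] - smoothed[i][d])          # fidelity
                    + beta * (smoothed[i - 1][d] + smoothed[i + 1][d]
                              - 2.0 * smoothed[i][d])              # smoothness
                )
    return smoothed

# A zigzag sketch is relaxed into a gentler curve with the same endpoints.
rough = [[0, 0], [1, 3], [2, 0], [3, 3], [4, 0]]
print(smooth_path(rough))
```

In the actual framework, the smoothed path would then be scored for traversability (e.g., by FastMCTS over terrain costs) before being presented back to the operator.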
Simulation
In each simulation environment, the operator drew rough paths to guide the robot.
Hardware Experiment 1
This hardware experiment verifies the advantage of the human-machine cooperation system in the face of sensor failures.
Hardware Experiment 2
This hardware experiment shows that the shared-control method helps the operator plan a path that avoids areas with sparse footholds.
Comparison Experiments
The comparison experiment shows that the proposed shared-control method reduces operation difficulty and improves the completion rate of the navigation task.
Hardware Experiment 4
This hardware experiment shows that, with the proposed method, the operator can easily steer the robot through a challenging environment.
Contact Information: Peng Xu (Email: pengxu_hit@163.com)