Yutong Hu, Kehan Wen, Fisher Yu
Visual Intelligence and System Group, Computer Vision Lab, ETH Zurich
IROS 2024
Joint manipulation of moving objects and legged locomotion, such as playing soccer, has received scant attention in the learning community, although it comes naturally to humans and smart animals. A key challenge in this multitask problem is inferring the objectives of locomotion from the states and targets of the manipulated objects. The implicit relation between the object states and robot locomotion can be hard to capture directly from the training experience. We propose adding a 'virtual' feedback control block that accurately computes the necessary body-level movement and uses its outputs as explicit, dynamic joint-level locomotion supervision. We further utilize an improved ball dynamics model, an extended context-aided estimator, and a comprehensive ball observer to facilitate transferring the policy learned in simulation to the real world. We observe that our learning scheme not only makes the policy network converge faster but also enables soccer robots to perform sophisticated maneuvers like sharp cuts and turns on flat surfaces, a capability that previous methods lacked.
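The paper does not spell out the feedback block here, but the idea of turning a body-level feedback law into an auxiliary supervision signal can be sketched as follows. All gains, limits, and function names below are illustrative assumptions, not the authors' implementation: a PD law maps the ball's state and target to a reference body velocity, and the policy's body-level command is penalized for deviating from that reference.

```python
import numpy as np

def feedback_body_velocity(ball_pos, ball_vel, target_pos,
                           kp=1.5, kd=0.4, v_max=2.0):
    """Hypothetical 'virtual' feedback block: a PD law mapping the ball's
    position/velocity and its target to the body-level velocity the robot
    should track (gains kp, kd and clip limit v_max are illustrative)."""
    cmd = kp * (target_pos - ball_pos) - kd * ball_vel
    speed = np.linalg.norm(cmd)
    if speed > v_max:  # saturate to a feasible body speed
        cmd = cmd * (v_max / speed)
    return cmd

def locomotion_supervision_loss(policy_body_vel, ball_pos, ball_vel,
                                target_pos):
    """Auxiliary loss: squared deviation of the policy's body-level command
    from the feedback block's output, giving the policy explicit locomotion
    supervision alongside the task reward."""
    ref = feedback_body_velocity(ball_pos, ball_vel, target_pos)
    return float(np.sum((np.asarray(policy_body_vel) - ref) ** 2))
```

Such a term supplies a dense, well-shaped gradient toward sensible body motion, which is one plausible reason the policy converges faster than with the sparse task reward alone.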
After 24 hours of training, our method achieves a final task-related return 90% higher than the baseline
Our policy outperforms others in run-time command-following accuracy
Our policy keeps the ball under control in extreme cases, while the baseline method loses control
Our method is robust on smooth terrain and inclined floors, where the baseline fails to keep the ball under control