A Model Predictive Approach for Online Mobile Manipulation of Nonholonomic Objects using Learned Dynamics


Roya Sabbagh Novin, Amir Yazdani, Andrew Merryweather, Tucker Hermans

University of Utah, Utah Robotics Center

Abstract

Assistive robots designed for physical interaction with objects could play an important role in mobility assistance and fall prevention in healthcare facilities. Autonomous mobile manipulation, however, remains a hurdle to safely deploying robots in real-life applications such as healthcare and warehouse settings. In this article, we introduce a mobile manipulation framework based on model predictive control with learned object dynamics models. We focus on the specific problem of manipulating legged objects such as those commonly found in healthcare environments and personal dwellings (e.g. walkers, tables, chairs, equipment stands). We describe a probabilistic method for autonomously learning an approximate dynamics model for these objects. In this method, we learn pre-categorized object models using a small dataset of force and motion interactions between the robot and the object. In addition, we account for multiple manipulation strategies by formulating manipulation planning as a mixed-integer convex optimization problem. The proposed framework treats manipulation as a hybrid control system comprising i) the discrete choice of which leg to grasp, and ii) the continuous forces applied to move the object. We propose a manipulation planning algorithm based on model predictive control to compensate for modeling errors and find an optimal path to move the object from one configuration to another. We show results for several objects with different wheel configurations. Simulation and physical experiments show that the learned dynamics models are sufficiently accurate for safe, collision-free manipulation. Combined with the proposed planning algorithm, the robot successfully moves the object to a desired pose while avoiding collisions.
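The hybrid structure described above (a discrete grasp choice plus continuous applied forces, replanned in a receding-horizon loop) can be illustrated with a minimal sketch. This is not the paper's implementation: the point-mass dynamics, the candidate grids, and all names here are assumptions for illustration, and the brute-force search over candidates is a crude stand-in for the mixed-integer convex program.

```python
import numpy as np

def learned_dynamics(x, u, leg_gain):
    """Toy surrogate for a learned object model: a damped point mass.
    x = [position, velocity]; u = applied force; leg_gain crudely models
    how effective the force is for the chosen grasp leg (assumption)."""
    dt, mass, damping = 0.1, 2.0, 0.5
    pos, vel = x
    acc = (u * leg_gain - damping * vel) / mass
    return np.array([pos + dt * vel, vel + dt * acc])

def mpc_step(x, goal, legs=(1.0, 0.8), forces=np.linspace(-5, 5, 21), horizon=5):
    """Enumerate (leg, constant force) candidates over a short horizon and
    return the pair whose predicted terminal state is closest to the goal."""
    best, best_cost = None, np.inf
    for leg in legs:
        for u in forces:
            xp = x.copy()
            for _ in range(horizon):
                xp = learned_dynamics(xp, u, leg)
            cost = abs(xp[0] - goal) + 0.1 * abs(xp[1])  # reach goal, low speed
            if cost < best_cost:
                best, best_cost = (leg, u), cost
    return best

# Receding-horizon execution: replan every step, apply only the first action.
x, goal = np.array([0.0, 0.0]), 1.0
for _ in range(40):
    leg, u = mpc_step(x, goal)
    x = learned_dynamics(x, u, leg)

print(round(float(x[0]), 2))  # position approaches the goal
```

The key design point carried over from the framework is that only the first action of each plan is executed before replanning, which is what lets the loop absorb model error.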

[PDF]

Source Code & Dataset

Github repo for object dynamics learning: dynamics_model_learning

Github repo for planning framework: manipulation_planning

Dataset: motion & force data

Videos

Simulation Experiments:

Physical Experiments:

Dynamics models comparison plots

To better evaluate the model types, we provide, for each model, a plot of final displacement errors with and without feedback. The plots show that including inertia parameters significantly improves prediction. In addition, although the second model performs slightly better for the walker, the difference is not significant. Since a more complex model increases optimization time, we therefore choose the first, simpler model.
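The with/without-feedback distinction above can be made concrete with a toy sketch (the dynamics and numbers here are invented for illustration, not the paper's models). Without feedback, the learned model is rolled out open-loop from the initial state, so one-step errors compound; with feedback, each prediction starts from the measured state, so only the last step's error remains in the final displacement.

```python
def true_step(x):
    """Ground-truth dynamics, hidden from the model (assumed toy system)."""
    return 0.95 * x + 0.3

def model_step(x):
    """Slightly biased learned model of the same system (assumption)."""
    return 0.93 * x + 0.3

x_true, x_open = 0.0, 0.0
one_step_err = 0.0
for _ in range(30):
    x_pred_fb = model_step(x_true)   # with feedback: predict from measured state
    x_true = true_step(x_true)
    x_open = model_step(x_open)      # without feedback: open-loop rollout
    one_step_err = abs(x_pred_fb - x_true)

open_loop_err = abs(x_open - x_true)
print(open_loop_err > one_step_err)  # open-loop error accumulates
```

This is why the no-feedback curves in the plots are the harsher test of a model: they expose accumulated bias rather than single-step accuracy.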

Manipulation planning in simulation

For each simulation trial, the actual dynamics parameters are sampled from the learned distribution, while the planner always uses the mean values. We also add noise to the system when simulating the resulting trajectory. For each setup, we compare 50 trials of our approach against LQR tracking an initial trajectory obtained from optimization, and report the final position and orientation errors for all objects across all tasks. The LQR method is almost never successful because it cannot compensate for large errors in the system model. With MPC, the framework lets the system recover from both modeling errors and noise in the system.
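The gap between the two controllers can be seen in a minimal sketch (a toy stand-in, not the paper's controllers): both planners use the same wrong model, but the LQR-style controller tracks a fixed nominal trajectory while the MPC-style controller replans from the measured state. Here the model assumes a force effectiveness of 1.0 while the "true" system responds at 0.5, a crude stand-in for error in the learned dynamics.

```python
DT, STEPS, GOAL, TRUE_GAIN = 0.1, 40, 1.0, 0.5

def true_step(x, u):
    return x + DT * TRUE_GAIN * u       # real system: half-effective input

# --- LQR-style: track a nominal trajectory planned with the wrong model ---
u_nom = GOAL / (DT * STEPS)             # constant input the model thinks suffices
x_lqr, k_fb = 0.0, 1.0
for t in range(STEPS):
    x_nom = GOAL * (t / STEPS)          # nominal state under the wrong model
    u = u_nom + k_fb * (x_nom - x_lqr)  # feedback around the fixed nominal plan
    x_lqr = true_step(x_lqr, u)

# --- MPC-style: replan from the measured state with the same wrong model ---
x_mpc, horizon = 0.0, 5
for _ in range(STEPS):
    u = (GOAL - x_mpc) / (DT * horizon) # fresh plan from the current state
    x_mpc = true_step(x_mpc, u)

print(abs(GOAL - x_mpc) < abs(GOAL - x_lqr))  # MPC ends much closer to the goal
```

The LQR-style run inherits the nominal plan's bias and settles with a persistent tracking offset, while replanning keeps re-targeting the goal from wherever the system actually is.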

Manipulation planning in real world

In the physical experiments, due to robot limitations, we only performed the experiments with the walker. We report results from two tasks, each with 5 trials. The second task (40% success rate) proved more difficult than the first (80% success rate). We believe this is mainly because the longer distance in the second task requires more repositioning actions, which introduces more error and more opportunities to fail. A better re-grasp planning approach would improve performance.