Motion and Manipulation Planning with User Guidance

The goal of this research is to develop reliable motion planning algorithms for manipulation that can satisfy the different constraints arising in object manipulation and seamlessly integrate user demonstrations when available. There has been much effort in autonomous robot motion and manipulation planning. Despite impressive progress and reliability in some scenarios, these algorithms remain brittle, especially in the presence of end-effector (task space) constraints, and when they fail on a task instance, it is hard for people who are not robotics experts to fix the issue. An alternative line of work has focused on teaching robots manipulation tasks from user demonstrations. However, this class of work is usually data-hungry (or makes other assumptions to reduce data requirements) and essentially ignores the robot's capability for reliable autonomous planning. We seek to develop algorithms that reliably satisfy simple task constraints and exploit user demonstrations when available, even a single one, with demonstrations usable in an incremental fashion. This is an ambitious program towards which we have achieved initial results that show the feasibility of the approach. Please see below for more details, videos, and related publications.

Motion and Manipulation Planning

Robot motion is controlled in the joint space, whereas robots have to perform tasks in their task space. Many tasks, such as carrying a glass of liquid, pouring liquid, opening a drawer, or manipulating a heavy object by pivoting, require constraints on the end-effector during the motion. The forward and inverse position kinematic mappings between joint space and task space are highly nonlinear, and the inverse mapping is multi-valued. Consequently, modeling task space constraints, such as keeping the orientation of the end-effector fixed while changing its position (required for carrying a cup of liquid without spilling), is quite complex in the joint space. In this work, we show that planning motions in the task space with Screw Linear Interpolation (ScLERP), combined with resolved motion rate control to compute the corresponding joint space path, allows one to satisfy many common task space motion constraints without explicitly modeling them. In particular, any motion constraint that forms a one-parameter subgroup of the group of rigid body motions can be incorporated in our planning scheme without explicit modeling.

We further extend this motion planning scheme to a local motion planner that satisfies collision avoidance constraints. Collision avoidance is handled with a novel kinematic state evolution model of the robot in which avoidance is encoded as a complementarity constraint, and we show that the kinematic state evolution with collision avoidance can be represented as a Linear Complementarity Problem (LCP). Using the LCP model along with screw linear interpolation, we show that it may be possible to compute a path between two given task space poses by moving directly from the current pose to the goal pose, even in the presence of potential collisions with obstacles. The local planner can be incorporated into any sampling-based global motion planning scheme.
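The screw interpolation itself fits in a few lines. The publications below formulate ScLERP with dual quaternions; the matrix-exponential sketch here is a mathematically equivalent form (both trace the one-parameter subgroup, i.e., the constant-screw motion, connecting two poses), and the function name and example poses are illustrative rather than taken from the papers:

```python
import numpy as np
from scipy.linalg import expm, logm

def sclerp(T1, T2, t):
    """Screw linear interpolation between 4x4 homogeneous transforms.

    The matrix log of the relative transform inv(T1) @ T2 is the twist
    (screw) connecting the two poses; scaling it by t in [0, 1] and
    exponentiating traces the one-parameter subgroup of SE(3) -- the
    constant-screw motion -- between them.
    """
    xi = logm(np.linalg.inv(T1) @ T2).real  # real for rotation angles < pi
    return T1 @ expm(t * xi)

# Example: rotate 90 degrees about z while translating.
T1 = np.eye(4)
T2 = np.eye(4)
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
T2[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
T2[:3, 3] = [1.0, 0.0, 0.5]
T_mid = sclerp(T1, T2, 0.5)  # halfway pose: 45-degree rotation about the screw axis

# If start and goal share the same orientation, the connecting screw is a
# pure translation, so every intermediate pose keeps that orientation --
# the "carry a cup without spilling" constraint is satisfied for free.
T3 = np.eye(4)
T3[:3, 3] = [1.0, 2.0, 3.0]
T_keep = sclerp(T1, T3, 0.3)
```

Collision avoidance (the LCP model) and joint limits are outside this sketch; those are handled in the local planner described above.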

User Guidance in Manipulation Planning

We develop a real-time point-to-point kinematic task-space planner based on screw interpolation that implicitly follows the geometric constraints underlying a user demonstration. We demonstrate through example scenarios that the implicit task constraints in a single user demonstration can be captured by our approach. A key feature of the proposed planner is that it does not learn a trajectory or attempt to imitate the human trajectory; rather, it extracts the geometric features of a one-time demonstration and enforces them as constraints in a generalized path generator. In this sense, the framework generalizes over initial and final configurations, accommodates path disturbances, and is agnostic to the robot being used. We evaluate our approach on the 7-DOF Baxter robot on a multitude of common tasks and also show the generalization ability of our method under different conditions.
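Whatever the task-space plan, it still has to be realized in the joint space; as noted above, resolved motion rate control does this by mapping the desired task-space velocity through the Jacobian pseudoinverse. Below is a minimal sketch for a planar 2R arm; the link lengths, gain, step sizes, and straight-line waypoint schedule are all hypothetical choices for illustration and do not reflect the Baxter implementation in the publications:

```python
import numpy as np

# Resolved motion rate control sketch for a planar 2R arm
# (hypothetical link lengths).
L1, L2 = 1.0, 0.8

def fk(q):
    """End-effector position of the 2-link arm."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    """Analytic Jacobian d(fk)/dq."""
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def track(q0, waypoints, gain=1.0, dt=0.05, steps_per_wp=50):
    """Follow a task-space path: qdot = pinv(J) @ (gain * position error)."""
    q = np.array(q0, dtype=float)
    for x_des in waypoints:
        for _ in range(steps_per_wp):
            qdot = np.linalg.pinv(jacobian(q)) @ (gain * (x_des - fk(q)))
            q = q + dt * qdot
    return q

# Track a straight task-space line (the planar analogue of an
# interpolated task-space path) from the current pose to a goal.
q0 = np.array([0.5, 0.5])
goal = np.array([1.0, 0.8])
waypoints = np.linspace(fk(q0), goal, 10)
qf = track(q0, waypoints)
```

For a spatial arm the same loop applies with 6-dimensional twists in place of planar position errors; redundancy (as on a 7-DOF arm) leaves a null space that the pseudoinverse resolves with the minimum-norm joint velocity.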

Related Publications

  1. A. Fakhari, A. Patankar, and N. Chakraborty, "Motion and Force Planning for Manipulating Heavy Objects by Pivoting", IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, September 2021.

  2. R. Laha, A. Rao, L. F. C. Figueredo, Q. Chang, S. Haddadin, and N. Chakraborty, "Point-to-Point Path Planning Based on User Guidance and Screw Linear Interpolation", Proceedings of the ASME IDETC and 45th Mechanisms and Robotics (MR) Conference, August 2021.

  3. A. Sinha, A. Sarker, and N. Chakraborty, "Task Space Planning with Complementarity Constraint-based Obstacle Avoidance", Proceedings of the ASME IDETC and 45th Mechanisms and Robotics (MR) Conference, August 2021.

  4. A. Sarker, A. Sinha, and N. Chakraborty, "On Screw Linear Interpolation for Point-to-Point Path Planning", IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, October 2020.

  5. N. Chakraborty, S. Akella, and J. C. Trinkle, "Complementarity-based Dynamic Simulation for Kinodynamic Motion Planning", IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), St. Louis, MO, USA, October 2009.