Research

Task-Oriented Grasp Synthesis

One of the key challenges in task-oriented grasp synthesis is to mathematically represent a task. In our work, we represent a task as a sequence of constant screw motions. Given a grasp (a pair of antipodal contact locations), we can evaluate its feasibility for imparting the desired constant screw motion using our proposed task-dependent grasp metric. More recently, we have developed a neural network-based approach that solves the inverse problem: given an object represented as a partial point cloud obtained from an RGB-D sensor, and a task represented as a screw axis, it computes a good grasping region for the robot to grasp the object and impart the desired constant screw motion.
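As a minimal sketch of what "task as a constant screw motion" means (this is illustrative, not the implementation from our papers), a screw axis with a pitch and a rotation angle determines a rigid-body displacement via the matrix exponential of a twist. The function name, its parameterization, and the use of SciPy's `expm` are assumptions made for this example:

```python
import numpy as np
from scipy.linalg import expm

def skew(w):
    """3x3 skew-symmetric matrix of a 3-vector w."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def screw_displacement(axis_dir, axis_point, pitch, theta):
    """SE(3) displacement produced by a constant screw motion.

    axis_dir   : unit vector along the screw axis
    axis_point : any point on the axis
    pitch      : translation along the axis per radian of rotation
    theta      : rotation angle in radians
    Returns a 4x4 homogeneous transform.
    """
    w = np.asarray(axis_dir, dtype=float)
    q = np.asarray(axis_point, dtype=float)
    v = -np.cross(w, q) + pitch * w      # linear part of the twist
    twist = np.zeros((4, 4))
    twist[:3, :3] = skew(w)
    twist[:3, 3] = v
    return expm(twist * theta)           # exponential map se(3) -> SE(3)
```

A pure rotation is the zero-pitch special case; a nonzero pitch couples rotation with translation along the axis, which is the structure exploited by the grasp metric above.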

Related Publications

Motion Planning 

Representing complex manipulation tasks, such as scooping and pouring, as sequences of constant screw motions in SE(3) allows us to extract the task-related constraints on the end-effector's motion from kinesthetic demonstrations and transfer them to new instances of the same tasks. We have evaluated this approach on complex manipulation tasks such as scooping and pouring, as well as in the context of vertical containerised farming for transplanting and harvesting leafy crops. We have also developed an approach for transferring the task-related constraints between objects that are functionally similar but geometrically different; the notion of functional similarity is captured by a knowledge base.
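To illustrate the underlying geometry (a simplified sketch, not our planners), the constant screw motion connecting two demonstrated end-effector poses can be recovered with a matrix logarithm and then resampled into waypoints for a new instance of the task. The function name and the use of SciPy's `logm`/`expm` are assumptions for this example:

```python
import numpy as np
from scipy.linalg import expm, logm

def screw_interpolate(T0, T1, n):
    """Waypoints along the constant screw motion taking pose T0 to T1.

    T0, T1 : 4x4 homogeneous transforms (start and goal end-effector poses)
    n      : number of waypoints, including both endpoints
    """
    # Matrix log of the relative transform gives (twist * total angle)
    # expressed in the body frame; .real discards numerical imaginary parts.
    rel_twist = logm(np.linalg.inv(T0) @ T1).real
    # Scaling the twist sweeps the pose along a single screw axis.
    return [T0 @ expm(s * rel_twist) for s in np.linspace(0.0, 1.0, n)]
```

Every intermediate pose produced this way lies on one screw axis, which is what makes the extracted constraints compact and transferable.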

Using the screw-geometric structure of motion also allows us to generate motion plans for tasks that require object-environment contact, such as pivoting. We have also developed a self-evaluation-based approach that allows the robot to compute the minimal set of kinesthetic demonstrations required to reliably perform tasks such as pouring and scooping over a specified region of its workspace.
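Pivoting is a convenient illustration of why the screw view handles contact naturally: rotating an object about its contact edge with a support surface is a zero-pitch screw whose axis is that edge. The sketch below (illustrative; the function name and edge parameterization are assumptions) builds such a transform and leaves every point on the contact edge fixed:

```python
import numpy as np
from scipy.linalg import expm

def skew(w):
    """3x3 skew-symmetric matrix of a 3-vector w."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def pivot_transform(edge_dir, edge_point, theta):
    """Pure rotation (zero-pitch screw) about an object-environment
    contact edge, e.g. the edge where a box meets the table.

    edge_dir   : unit vector along the contact edge
    edge_point : any point on the edge
    theta      : pivot angle in radians
    """
    w = np.asarray(edge_dir, dtype=float)
    q = np.asarray(edge_point, dtype=float)
    v = -np.cross(w, q)                  # zero pitch: no motion along the axis
    twist = np.zeros((4, 4))
    twist[:3, :3] = skew(w)
    twist[:3, 3] = v
    return expm(twist * theta)
```

Because the screw axis coincides with the contact edge, the contact constraint is satisfied by construction rather than imposed as a separate condition on the plan.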

Related Publications