RGB-D and Servoing Framework

Introduction:

Exploiting the color and depth information obtained from an RGB-D sensor (e.g. Kinect), we propose an exploration and grasping strategy for natural, open-ended environments. This work can be considered an extension of our visuo-tactile servoing framework. We no longer rely on an explicit BCH fiducial marker on top of the object; consequently, the assumption that the top of the object must be planar is no longer required.

The vision features extracted from the natural environment provide a rough 3D position and orientation of the object. Based on this estimate, the contact-force-maintaining primitive of the tactile servoing framework is activated to drive the robot into contact with the object. A potential-field deviation in Cartesian/vision space then guides the finger toward unknown points on the object surface. In real applications such points have their own physical meaning, e.g. an optimal grasp point, or a dirty spot the robot wants to clean. Because of kinematic errors in the uncalibrated scenario, the robot cannot reach these points exactly; the tactile-based surface exploration strategy compensates for the remaining error.
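The potential-driven approaching stage can be sketched as a simple attractive-field velocity controller. The following is a minimal illustration only, not the paper's implementation; the function name, gain, and saturation limit are assumptions chosen for the example.

```python
import numpy as np

def potential_velocity(x_tip, x_goal, gain=1.0, v_max=0.05):
    """Attractive-potential velocity command (illustrative sketch).

    x_tip  : current fingertip position in the vision frame (3-vector)
    x_goal : vision-derived target point on the object surface (3-vector)
    Returns a Cartesian velocity command, saturated to v_max [m/s].
    """
    e = np.asarray(x_goal, float) - np.asarray(x_tip, float)
    v = gain * e                      # negative gradient of quadratic potential
    n = np.linalg.norm(v)
    if n > v_max:                     # saturate to a safe approach speed
        v *= v_max / n
    return v
```

The command vanishes at the goal and always points toward it, so the fingertip converges to the vision estimate; the residual offset caused by calibration error is then handled by the tactile exploration stage.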

All data sampled during the vision-potential-based approaching stage and the tactile-based exploration stage can be used to calibrate the vision frame against the robot frame, yielding a natural, self-contained calibration method for humanoid robots.
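Given paired samples of the same contact points expressed in the vision frame and the robot frame, the rigid transform between the two frames can be estimated in closed form. A minimal sketch using the standard Kabsch/SVD method is shown below (the function name is an assumption; the paper's actual calibration procedure may differ).

```python
import numpy as np

def calibrate_frames(p_vision, p_robot):
    """Estimate rigid transform (R, t) with p_robot ≈ R @ p_vision + t.

    p_vision, p_robot : (N, 3) arrays of corresponding points sampled
    during the approaching and exploration stages (Kabsch/SVD method).
    """
    cv, cr = p_vision.mean(axis=0), p_robot.mean(axis=0)
    H = (p_vision - cv).T @ (p_robot - cr)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cr - R @ cv
    return R, t
```

With at least three non-collinear point pairs this recovers the vision-to-robot transform; additional samples from the exploration stage improve robustness to sensor noise in a least-squares sense.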

Experiment:

Video 1. Vision potential driven approaching and tactile-based unknown object surface exploration

Video 2. Visuo-tactile point clouds fusion

Reference paper:

Qiang Li, Robert Haschke, Helge Ritter, "A Visuo-Tactile Control Framework for Manipulation and Exploration of Unknown Objects", IEEE-RAS International Conference on Humanoid Robots (Humanoids), 2015