This video is a slide presentation of my talk at RSS 2015 on heat-transfer-based recognition of materials under short-duration contact and varying initial conditions.
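As a rough illustration of the idea (this is my own toy sketch, not the algorithm from the talk), a material can be recognized by matching a short temperature-time trace against reference traces, after normalizing away the initial temperature:

```python
# Hypothetical sketch: nearest-neighbor matching of a measured
# temperature-time trace against reference traces, with each trace
# normalized by its initial temperature to reduce the effect of
# varying initial conditions. All curves below are synthetic.
import numpy as np

def normalize(trace):
    """Subtract the initial temperature so traces with different
    starting temperatures become comparable."""
    return trace - trace[0]

def classify(measured, references):
    """Return the label of the reference trace closest in L2 distance."""
    m = normalize(measured)
    best_label, best_dist = None, np.inf
    for label, ref in references.items():
        d = np.linalg.norm(m - normalize(ref))
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

# Illustrative exponential cooling curves for two materials.
t = np.linspace(0.0, 1.0, 50)  # 1 s of contact, 50 samples
refs = {"metal": 30 - 8 * (1 - np.exp(-5 * t)),
        "wood":  30 - 2 * (1 - np.exp(-1 * t))}
probe = 33 - 7.5 * (1 - np.exp(-4.8 * t))  # warmer start, metal-like decay
print(classify(probe, refs))  # -> "metal"
```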




The robot 'DARCI' uses a tactile sleeve and a Kinect to create a dense haptic map while reaching into a cluttered environment. As the robot comes into incidental contact with objects in the environment, it acquires local haptic information from the sleeve and propagates that information to update its estimate of the haptic properties of the surfaces visible to the Kinect.
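A minimal sketch of the local-to-dense propagation step, assuming a Gaussian distance kernel over the visible point cloud (the actual pipeline may differ):

```python
# Sketch (my assumption, not DARCI's exact method): propagate sparse
# haptic measurements to nearby points of a Kinect point cloud with a
# Gaussian distance kernel, producing a dense estimate of a haptic
# property (e.g., stiffness) over the visible surface.
import numpy as np

def propagate(cloud_xyz, contact_xyz, contact_vals, sigma=0.05):
    """Gaussian-kernel regression from contact samples onto the cloud.

    cloud_xyz:    (N, 3) visible surface points from the depth sensor
    contact_xyz:  (M, 3) locations of incidental contacts
    contact_vals: (M,)   haptic property measured at each contact
    sigma:        kernel width in meters
    """
    d2 = ((cloud_xyz[:, None, :] - contact_xyz[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))          # (N, M) weights
    w_sum = w.sum(axis=1)
    est = (w @ contact_vals) / np.maximum(w_sum, 1e-9)
    conf = w_sum / w_sum.max()                    # crude confidence proxy
    return est, conf
```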


The video shows real-time sparse haptic map generation using whole-arm tactile skin while the robot reaches into a cluttered environment to grab a bunch of keys. Initially, the robot has no knowledge of the environment and uses model-predictive control to reach its goal while keeping contact forces below a threshold. During this initial attempt, it comes into contact with rigid objects in the environment, such as trunks, which it identifies using hidden Markov models (HMMs) and marks with brown dots (right). After the initial attempt fails, the robot updates its haptic map with the locations of these rigid objects. Using this updated knowledge on its next attempt, the robot avoids them with a motion planner and successfully reaches the goal and grabs the keys.
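A toy sketch of the force-limited reaching behavior (a greedy stand-in for the model-predictive controller actually used; F_MAX and STEP are made-up values):

```python
# Toy sketch, my simplification: step greedily toward the goal, but
# shrink the step as the skin reports contact forces near a threshold,
# emulating the "reach while keeping forces low" behavior above.
import numpy as np

F_MAX = 5.0       # N, allowable contact force (made-up value)
STEP = 0.01       # m, nominal step size (made-up value)

def step_toward(goal, pos, skin_force):
    """Return the next end-effector position.

    skin_force: current maximum force (N) reported by the tactile skin.
    The step shrinks linearly as the force approaches F_MAX and becomes
    a retreat once the threshold is exceeded.
    """
    direction = goal - pos
    dist = np.linalg.norm(direction)
    if dist < 1e-6:
        return pos
    direction /= dist
    scale = 1.0 - skin_force / F_MAX   # negative means "back off"
    return pos + STEP * max(scale, -1.0) * direction
```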




The video shows an overhead view of our robot Cody with a stretchable, flexible tactile sensor array on its forearm and end-effector. The robot reaches to a pre-specified goal location through clutter instrumented with force-torque sensors, which provide ground truth for the contact forces. With tactile sensing, including sensing that covers the articulated joints, the robot succeeds; without tactile sensing, it fails to reach the goal.




This video shows rapid haptic categorization using HMMs while the robot reaches into clutter composed of trunks and leaves. The robot uses data from the forearm tactile skin for online categorization. A taxel (tactile pixel) is marked with a green dot if it is categorized as a leaf and with a brown dot if it is categorized as a trunk.
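A minimal sketch of this kind of HMM-based categorization, with made-up two-state models over quantized force levels (the real models are trained on skin data): each class has its own HMM, and a taxel's force sequence is assigned to the class whose model gives the higher log-likelihood under the standard forward algorithm.

```python
# Forward-algorithm log-likelihood for a discrete-observation HMM,
# with scaling at each step to avoid numerical underflow.
import numpy as np

def forward_loglik(obs, pi, A, B):
    """obs: observation indices; pi: (S,) initial distribution;
    A: (S, S) transitions; B: (S, O) emissions."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum(); loglik = np.log(c); alpha /= c
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum(); loglik += np.log(c); alpha /= c
    return loglik

# Made-up 2-state models over 3 force levels (low/medium/high):
# "leaf" contacts stay at low force; "trunk" contacts ramp up and stay high.
pi = np.array([1.0, 0.0])
A  = np.array([[0.7, 0.3], [0.0, 1.0]])
B_leaf  = np.array([[0.8, 0.2, 0.0], [0.6, 0.3, 0.1]])
B_trunk = np.array([[0.3, 0.5, 0.2], [0.0, 0.3, 0.7]])

seq = [0, 1, 2, 2, 2]                  # quantized taxel forces over time
label = ("trunk" if forward_loglik(seq, pi, A, B_trunk) >
                    forward_loglik(seq, pi, A, B_leaf) else "leaf")
print(label)  # -> "trunk"
```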



The above video shows the performance of online haptic classification using information from incidental contact during goal-directed motion. The classifier uses features such as maximum force, contact area, and contact motion, extracted from the artificial skin attached to the forearm of the robot Cody. The right side of the video shows an image representation of the unrolled taxel array, in which darker pixels indicate higher forces. My detailed results can be found here.
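A sketch of the feature extraction (the exact feature definitions here are my assumptions): from a sequence of taxel-array frames for one contact event, compute the maximum force, the contact area at the peak frame, and the displacement of the contact centroid.

```python
# Assumed feature definitions for illustration; the resulting vectors
# can train any standard classifier on labeled contact events.
import numpy as np

def contact_features(frames, f_thresh=0.5):
    """frames: (T, H, W) array of taxel forces for one contact event."""
    max_force = frames.max()
    peak = frames[frames.max(axis=(1, 2)).argmax()]   # frame at peak force
    area = (peak > f_thresh).sum()                    # taxels in contact

    def centroid(f):
        ys, xs = np.nonzero(f > f_thresh)
        return np.array([ys.mean(), xs.mean()]) if len(xs) else np.zeros(2)

    motion = np.linalg.norm(centroid(frames[-1]) - centroid(frames[0]))
    return np.array([max_force, area, motion])
```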





The above video shows a PR2 robot equipped with a fabric-based tactile skin performing goal-directed motion in unknown cluttered environments using online haptic classification and a bidirectional RRT planner. The planner initially has no knowledge of the environment. During the motion, if the robot comes into contact with an obstacle that the classification algorithm determines to be fixed, the robot returns to its initial position and the RRT re-plans with the updated knowledge of the environment; obstacles classified as movable do not trigger re-planning. The robot thus builds a haptic map of the environment while reaching for the goal. The video includes two trials with the same goal but different starting positions to show the performance of the PR2.
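A minimal 2-D sketch of the contact-driven re-planning loop (using a plain single-tree RRT for brevity; the actual system uses a bidirectional RRT on the real robot). Obstacles are discovered only on contact; a fixed one is added to the map and the path is re-planned from the start.

```python
# Toy replanning demo: obstacles are (x, y, r) discs in a 5x5 workspace.
import math
import random

def rrt(start, goal, obstacles, step=0.2, iters=3000, goal_tol=0.2):
    """Plain RRT; returns a list of waypoints or None on failure."""
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        q = goal if random.random() < 0.1 else (random.uniform(0, 5),
                                                random.uniform(0, 5))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], q))
        x, y = nodes[i]
        d = math.dist((x, y), q)
        new = (x + step * (q[0] - x) / d, y + step * (q[1] - y) / d)
        if any(math.dist(new, (ox, oy)) < r for ox, oy, r in obstacles):
            continue                       # collides with a *known* obstacle
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k]); k = parent[k]
            return path[::-1]
    return None

known, hidden = [], [(2.5, 2.5, 0.6)]      # fixed obstacle, unknown at first
start, goal = (0.5, 0.5), (4.5, 4.5)
while True:
    path = rrt(start, goal, known)
    if path is None:
        break
    hit = next((o for o in hidden for p in path
                if math.dist(p, o[:2]) <= o[2]), None)
    if hit is None:
        break                              # executed the path contact-free
    known.append((hit[0], hit[1], hit[2] + 0.1))  # map it, with margin
print(f"reached goal avoiding {len(known)} mapped obstacle(s)")
```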




The above video shows the system characterization of a 7-DOF robot arm at KIST, South Korea. The arm's performance is compared without compensation and with friction and gravity compensation using my method. My detailed results can be found here.
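An illustrative sketch of this kind of feed-forward compensation, shown for a 2-link planar arm with made-up parameters (the video's arm has 7 DOF, and the actual identification procedure is in the linked results): the compensation torque is the modeled gravity torque plus viscous and Coulomb friction terms.

```python
# Gravity + friction compensation sketch for a 2-link planar arm with
# point masses at the link tips. All parameters are made up.
import numpy as np

m1, m2, l1, l2, g = 2.0, 1.5, 0.3, 0.25, 9.81
Fv = np.array([0.8, 0.5])    # viscous friction coefficients (N*m*s/rad)
Fc = np.array([0.3, 0.2])    # Coulomb friction magnitudes (N*m)

def gravity_torque(q):
    """Joint torques needed to hold the arm against gravity."""
    t2 = m2 * g * l2 * np.cos(q[0] + q[1])
    t1 = (m1 + m2) * g * l1 * np.cos(q[0]) + t2
    return np.array([t1, t2])

def compensation(q, qdot):
    """Feed-forward torque cancelling modeled gravity and friction."""
    return gravity_torque(q) + Fv * qdot + Fc * np.sign(qdot)

tau_cmd = compensation(np.array([0.5, -0.3]), np.array([0.1, 0.0]))
print(tau_cmd)
```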




The above video shows the same 7-DOF robot arm executing compliant, human-like reaching motions. The end-effector follows a quasi-straight-line trajectory with a symmetric, bell-shaped velocity profile. Detailed results can be found here.
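The symmetric bell-shaped velocity profile is characteristic of minimum-jerk motion, a standard model of human reaching. This sketch generates such a trajectory (not necessarily the exact profile used on the arm):

```python
# Minimum-jerk trajectory: position follows the classic quintic
# 10s^3 - 15s^4 + 6s^5 blend, whose velocity is a symmetric bell curve.
import numpy as np

def min_jerk(x0, xf, T, t):
    """Minimum-jerk position at time t for a move from x0 to xf in T s."""
    s = np.clip(t / T, 0.0, 1.0)
    return x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

t = np.linspace(0.0, 1.0, 201)
x = min_jerk(np.zeros(3), np.array([0.4, 0.2, 0.1]), 1.0, t[:, None])
v = np.gradient(x, t, axis=0)             # velocity: symmetric, bell-shaped
print(np.linalg.norm(v, axis=1).argmax(), len(t) // 2)  # peak at midpoint
```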




The above video shows a hand-arm system coordinating to reach and grasp objects of different shapes. The arm is a redundant 7-DOF system, and the hand has four fingers with 12 DOF. The reach-to-grasp task is carried out using my newly developed control law, as described here. The simulation is carried out using the RoboticsLab software.
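For context, this is a generic sketch of how a redundant 7-DOF arm can track an end-effector goal while using its extra degrees of freedom for a secondary objective (this is standard damped-least-squares redundancy resolution, NOT the control law from the linked paper):

```python
# Damped-least-squares task-space velocity control with a null-space
# term; a textbook redundancy-resolution scheme for illustration only.
import numpy as np

def dls_step(J, x_err, q, q_rest, lam=0.05, k_task=1.0, k_null=0.1):
    """One joint-velocity command for a redundant arm.

    J:      (m, n) end-effector Jacobian at configuration q
    x_err:  (m,) task-space error (goal pose minus current pose)
    q_rest: (n,) preferred posture pursued in the null space
    lam:    damping factor near singularities
    """
    m, n = J.shape
    # Damped pseudoinverse: J^T (J J^T + lam^2 I)^-1
    J_pinv = J.T @ np.linalg.inv(J @ J.T + lam**2 * np.eye(m))
    N = np.eye(n) - J_pinv @ J            # null-space projector
    return k_task * J_pinv @ x_err + k_null * N @ (q_rest - q)
```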





This video shows a 7-DOF redundant robot arm maintaining its end-effector position while external disturbances perturb its motion (the red lines in the video indicate forces applied with the mouse). I implemented a task-space disturbance observer and analyzed its effect on the null-space motion of the arm.
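A minimal first-order sketch of a task-space disturbance observer (a generic textbook form, not necessarily my exact implementation): with task-space dynamics M*xdd = f_cmd + f_dist, the disturbance estimate is a low-pass filtered residual between the measured momentum change and the commanded force.

```python
# First-order task-space disturbance observer sketch. The estimate
# f_hat converges to the external wrench at a rate set by `gain`.
import numpy as np

class TaskSpaceDOB:
    def __init__(self, dim, gain=20.0):
        self.f_hat = np.zeros(dim)   # estimated disturbance wrench
        self.gain = gain             # observer bandwidth (1/s)

    def update(self, M, xdd_meas, f_cmd, dt):
        """M: (dim, dim) task-space inertia; xdd_meas: measured task
        acceleration; f_cmd: commanded task-space force; dt: timestep."""
        residual = M @ xdd_meas - f_cmd - self.f_hat
        self.f_hat += self.gain * dt * residual
        return self.f_hat
```

The estimate can then be fed back to cancel the disturbance at the end-effector, which is what makes its interaction with the arm's null-space motion interesting to analyze.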