We show below how the robot's posture for poking is determined in our work. As shown in the left figure, we bring the GelSight sensor into contact with the cylindrical object while keeping the sensor surface parallel to the table. In this way, the contact force between the GelSight sensor and the cylindrical object is perpendicular to the table, which minimises the horizontal force component and avoids changing the object's state. The right figure shows that if the GelSight sensor contacts the object at a tilted angle, a large horizontal force arises that may cause the cylindrical object to roll on the table. Hence, we set the robot posture so that the GelSight sensor is parallel to the table.
Side views of tactile poking, where the GelSight sensor contacts the cylindrical object at different postures: parallel to the table (left) vs. at a tilted angle (right).
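To make the force argument concrete, below is a minimal sketch (not from the original work; the poking force magnitude and tilt angles are illustrative assumptions) of how the contact force decomposes when the sensor meets the cylinder at a tilt. The force is applied along the sensor's surface normal, so a tilt of angle theta away from the table normal produces a horizontal component proportional to sin(theta), which vanishes when the sensor is parallel to the table.

```python
import numpy as np

def contact_force_components(f_poke: float, tilt_deg: float):
    """Decompose the poking force into vertical and horizontal parts.

    The force acts along the sensor's surface normal; tilting that
    normal by `tilt_deg` away from the table normal introduces a
    horizontal component f_poke * sin(tilt) at the contact.
    """
    tilt = np.deg2rad(tilt_deg)
    f_vertical = f_poke * np.cos(tilt)    # presses the cylinder into the table
    f_horizontal = f_poke * np.sin(tilt)  # may roll the cylinder away
    return f_vertical, f_horizontal

# Illustrative numbers only: a 1 N poke at 0 deg vs. 20 deg tilt.
for tilt_deg in (0.0, 20.0):
    fv, fh = contact_force_components(1.0, tilt_deg)
    print(f"tilt={tilt_deg:4.1f} deg -> vertical={fv:.3f} N, horizontal={fh:.3f} N")
```

At zero tilt the horizontal term is exactly zero, which is why the parallel posture preserves the object's state.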
Tactile exploration can also be used to estimate object shape. However, these works make strong assumptions: either the object is two-dimensional and the pushing process is quasi-static [1,2], or the object is fixed on the table [3,4]. These assumptions do not hold in our case, as our investigated scenarios involve 3D, movable objects. For example, a cylindrical cup can roll a long way on the table under a small horizontal force; the cup is highly movable and the interaction is not quasi-static, so the assumptions made in tactile exploration no longer hold. In contrast, the tactile poking proposed in our work obtains a good tactile reading while causing minimal disturbance to the object's state, so that modelling the object dynamics is not needed.
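As a rough back-of-the-envelope illustration of why the quasi-static assumption breaks down (this sketch is not from the cited works; the cup mass, rolling-resistance coefficient, poking force, and tilt angle are all assumed values), one can compare the horizontal contact force against the rolling-resistance threshold of a light cylindrical cup:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def starts_rolling(f_horizontal: float, f_vertical: float,
                   mass: float, c_rr: float = 0.01) -> bool:
    """Crude rolling check: the cylinder starts rolling once the
    horizontal force exceeds the rolling-resistance threshold
    c_rr * N, where N is the total normal load (weight plus the
    vertical component of the poking force)."""
    normal_load = mass * G + f_vertical
    return f_horizontal > c_rr * normal_load

# Assumed values: a 50 g empty cup poked with 1 N at a 20 deg tilt.
f_poke, tilt = 1.0, np.deg2rad(20.0)
fh, fv = f_poke * np.sin(tilt), f_poke * np.cos(tilt)
print(starts_rolling(fh, fv, mass=0.05))  # True: ~0.34 N >> ~0.015 N threshold
```

Under these assumed values the horizontal force exceeds the rolling threshold by more than an order of magnitude, so even a modest tilt would set the cup in motion, whereas the parallel posture keeps the horizontal force at zero.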
[1] Yu K T, Leonard J, Rodriguez A. Shape and pose recovery from planar pushing[C]//2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2015: 1208-1215.
[2] Suresh S, Bauza M, Yu K T, et al. Tactile SLAM: Real-time inference of shape and pose from planar pushing[C]//2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021: 11322-11328.
[3] Watkins-Valls D, Varley J, Allen P. Multi-modal geometric learning for grasping and manipulation[C]//2019 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2019: 7339-7345.
[4] Suresh S, Si Z, Mangelson J G, et al. Efficient shape mapping through dense touch and vision[C]//2022 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2022.