SuPer: A Surgical Perception Framework
Traditional control and task automation have been successfully demonstrated in a variety of structured, controlled environments using highly specialized, modeled robotic systems in conjunction with multiple sensors. However, applying autonomy to endoscopic surgery is very challenging, particularly in soft-tissue work, due to the lack of high-quality images and the unpredictable, constantly deforming environment.
In this work, we propose SuPer, a novel surgical perception framework for surgical robotic control. The framework continuously collects 3D geometric information, enabling it to map the deformable surgical field while tracking rigid instruments within it. To achieve this, a model-based tracker exploiting a kinematic prior localizes the surgical tool, while a model-free tracker reconstructs the deformable environment and provides an estimated point cloud as a map of the environment.
The proposed framework was implemented on the da Vinci Surgical System in real time with an end-effector controller, whose target configurations are set and regulated through the framework. The framework successfully completed autonomous soft-tissue manipulation tasks with high accuracy. This demonstration of the novel framework is promising for the future of surgical autonomy.
Framework and Setup
Experiments
Dataset / Links
The dataset is saved in ROSBag format and is a recording of the repeated tissue manipulation experiment. The data streams are the raw stereoscopic left and right images, the encoder readings from the surgical robot, and a mask generated by our tool tracking. For details on the surgical robot, including the locations of the blue markers, refer to the LND.json file. An initial hand-eye calibration is also provided for surgical tool tracking; however, it is not necessary for testing deformable tissue tracking, since we already provide a mask. Note that the mask is generated in the left rectified, undistorted camera frame and is not perfect, so dilation is recommended.
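Since the provided tool mask is imperfect, the recommended dilation step can be applied before masking out the instrument from the depth or point-cloud data. The sketch below is one minimal way to do this (the function name `dilate_tool_mask` and the choice of a 3x3 structuring element and iteration count are our own illustrative assumptions, not part of the released dataset):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def dilate_tool_mask(mask, iterations=3):
    """Grow a binary tool mask to compensate for imperfect segmentation.

    mask: 2D array (uint8 or bool) in the left rectified, undistorted
    camera frame, where nonzero pixels mark the surgical tool.
    """
    # 3x3 all-ones structure = 8-connected dilation; each iteration
    # grows the mask by one pixel in every direction.
    structure = np.ones((3, 3), dtype=bool)
    return binary_dilation(mask.astype(bool), structure=structure,
                           iterations=iterations)

# Toy example: a single tool pixel grows into a (2*iterations+1) square.
mask = np.zeros((9, 9), dtype=np.uint8)
mask[4, 4] = 1
dilated = dilate_tool_mask(mask, iterations=2)
```

An equivalent result can be obtained with `cv2.dilate` if OpenCV is preferred; the iteration count should be tuned so the dilated mask safely covers the tool boundary without removing too much tissue.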
SuPer: A Surgical Perception Framework for Endoscopic Tissue Manipulation with Surgical Robotics
Yang Li*, Florian Richter*, Jingpei Lu, Emily K. Funk, Ryan K. Orosco, Jianke Zhu, and Michael C. Yip (* Equal contributions)