Applying Depth-Sensing to Automated Surgical Manipulation with a da Vinci Robot

Minho Hwang*, Daniel Seita*, Brijen Thananjeyan, Jeff Ichnowski,
Samuel Paradis, Danyal Fer, Thomas Low, Ken Goldberg.

International Symposium on Medical Robotics (ISMR), 2020

Abstract

Recent advances in depth-sensing have significantly increased accuracy, resolution, and frame rate, as exemplified by the Zivid RGBD camera with 1920x1200 resolution at 13 frames per second. In this study, we explore the potential of depth sensing for efficient and reliable automation of surgical subtasks. We consider a monochrome (all red) version of the peg transfer task from the Fundamentals of Laparoscopic Surgery training suite, implemented with the da Vinci Research Kit (dVRK). We use calibration techniques that reduce the error of the imprecise, cable-driven da Vinci from 4-5 mm to 1-2 mm in the task space. We report experimental results for a handover-free version of the peg transfer task, performing 20 physical episodes with a single-arm setup and 5 with a bilateral-arm setup. Results over 236 and 49 total block transfer attempts for the single- and bilateral-arm cases, respectively, suggest per-block success rates of 86.9% and 78.0%, with average block transfer times of 10.02 and 5.72 seconds.
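
As a rough illustration of the kind of camera-to-robot calibration the abstract refers to, here is a minimal sketch that estimates a rigid transform between the camera frame and the robot base frame from paired 3D points, using the standard SVD-based (Kabsch) method. This is an assumption for illustration only: the function name, variables, and point-pairing procedure are hypothetical, and the paper's actual calibration may differ.

    # Minimal sketch of rigid camera-to-robot calibration via the
    # SVD-based (Kabsch) method. Illustrative assumption only; this
    # is not necessarily the calibration procedure used in the paper.
    import numpy as np

    def estimate_rigid_transform(P_cam, P_rob):
        """Find R, t minimizing ||R @ p_cam + t - p_rob|| over point pairs.

        P_cam, P_rob: (N, 3) arrays of corresponding 3D points observed
        in the camera frame and measured in the robot base frame.
        """
        c_cam = P_cam.mean(axis=0)
        c_rob = P_rob.mean(axis=0)
        H = (P_cam - c_cam).T @ (P_rob - c_rob)  # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        # Reflection correction keeps R a proper rotation (det = +1).
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = c_rob - R @ c_cam
        return R, t

    # Hypothetical usage: points detected in the RGBD point cloud, paired
    # with the same physical points touched by the robot end-effector.
    # R, t = estimate_rigid_transform(points_camera, points_robot)
    # p_robot_frame = R @ p_camera_frame + t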

Code and Data

Here are relevant resources for the project.

  • [GitHub code link] The code is not currently documented for outside users. If you have questions about how to use it, email Minho and Daniel with what you are hoping to do, and we will do our best to help you out.

  • Here is a (view-only) spreadsheet showing statistics and relevant information from Danyal Fer's attempts at the peg transfer task. It also contains statistics from his handover trials, though these are not in the current paper.

  • Here is a (view-only) spreadsheet for the dVRK's automated runs.

Videos (Robot)

Here are some videos. For reference, here is the video corresponding to work from Rosen and Ma. All of the videos below are in real time with no speed-ups.

NOTE: be warned, the videos contain flashing lights due to the overhead camera we used; we apologize for that. We were able to fix this in our follow-up papers here and here, which also study the peg transfer task.

dvrk_onearm_09_12_outof_12.MOV

Single-arm example. No failures.

dvrk_onearm_25_10_outof_12.MOV

Single-arm example. This has a 10/12 success rate, due to two failures on the return set. Note that one of the blocks was inserted reasonably well but could not touch the bottom of the workspace because an existing block was underneath it. We count this as a placing failure since we are strict in our criteria.

dvrk_twoarm_01_11_outof_12.MOV

Bilateral-arm example. 11/12, one placing failure on the return set.

dvrk_twoarm_04_10_outof_13.MOV

Bilateral-arm example. 10/13; we attempted one extra transfer at the end, which is why there are 13 attempts here.

Videos (Surgeon)

In the videos below, our co-author Danyal Fer (a practicing surgical resident at the time) teleoperated the robot.

danyal_onearm_no_handover_01.MOV

Single-arm example. No failures, and he somehow managed to get 2 blocks in one attempt. A nice job defending the human race. :-) There were a few subjective, ambiguous, *potential* pick failures, but we did not count them here; that would be overly harsh even by our standards.

danyal_twoarm_noHover_01.MOV

Bilateral-arm example: two placing failures and one grasp failure. (The grasp failure was on the first set and a bit harsh, since he quickly got the block on his second try, but we are being uniformly strict in our evaluation criteria.)

danyal_handover_10.MOV

For reference, here is Dr. Fer performing the "usual" peg transfer task with handovers, where the blocks are not painted, etc.

BibTeX

@inproceedings{minho_pegs_2020,
    author    = {Minho Hwang and Daniel Seita and Brijen Thananjeyan and Jeffrey Ichnowski and Samuel Paradis and Danyal Fer and Thomas Low and Ken Goldberg},
    title     = {{Applying Depth-Sensing to Automated Surgical Manipulation with a da Vinci Robot}},
    booktitle = {International Symposium on Medical Robotics (ISMR)},
    year      = {2020}
}

Acknowledgments

This research was performed at the AUTOLAB at UC Berkeley in affiliation with the Berkeley AI Research (BAIR) Lab, Berkeley DeepDrive (BDD), the Real-Time Intelligent Secure Execution (RISE) Lab, the CITRIS “People and Robots” (CPAR) Initiative, and with UC Berkeley’s Center for Automation and Learning for Medical Robotics (Cal-MR). The authors were supported in part by donations from SRI International, Siemens, Google, Toyota Research Institute, Honda, Intel, and Intuitive Surgical. The da Vinci Research Kit was supported by the National Science Foundation, via the National Robotics Initiative (NRI), as part of the collaborative research project “Software Framework for Research in Semi-Autonomous Teleoperation” between The Johns Hopkins University (IIS 1637789), Worcester Polytechnic Institute (IIS 1637759), and the University of Washington (IIS 1637444). Daniel Seita is supported by a National Physical Science Consortium Fellowship.