AI in Robotic Telesurgery
Using Conditional Generative Adversarial Networks to Reduce the Effects of Latency in Robotic Telesurgery

Abstract
The introduction of surgical robots brought significant advances to surgical procedures. Applications of remote telesurgery range from staffing medical clinics in underserved areas to deploying robots in military hot spots, where access to medical expertise is limited. Poor wireless connectivity can introduce a prolonged delay, referred to as latency, between a surgeon's input and the action the robot takes. In surgery, even a micro-delay can injure a patient severely and, in some cases, prove fatal. One way to increase safety is to mitigate the effects of latency with deep-learning-aided computer vision. While current surgical robots use calibrated sensors to measure the positions of their arms and tools, in this work we present a purely optical approach that provides a backup measurement of tool position relative to the patient's tissue, allowing the robot to visually detect its own mechanical manipulator arms. A conditional generative adversarial network (CGAN) was trained on 1107 frames of a mock gastrointestinal robotic surgery from the 2015 EndoVis Instrument Challenge, paired with the corresponding hand-drawn label for each frame. When run on new test data, the network generated labels for the input images that were visually consistent with the hand-drawn labels, and it did so in 299 milliseconds. These generated labels can serve as simplified identifiers that the robot uses to track its own tools. This system allows accurate monitoring of surgical instrument positions relative to the patient's tissue, improving the safety of telesurgery systems.
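To make the approach concrete, the sketch below illustrates one plausible realization of such a CGAN label generator. The abstract does not specify the architecture or training details, so everything here is an assumption: a pix2pix-style setup in PyTorch with a small encoder-decoder generator that maps an endoscopic frame to a tool-label image, a patch-based discriminator over (frame, label) pairs, and an adversarial loss combined with an L1 term. All layer sizes and hyperparameters are illustrative, not the authors' configuration.

# Minimal pix2pix-style conditional GAN sketch for generating tool-segmentation
# labels from endoscopic frames. Architecture and hyperparameters are assumed,
# since the abstract does not specify them.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Small encoder-decoder mapping an RGB frame to a 1-channel label map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """PatchGAN-style discriminator scoring (frame, label) pairs per patch."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),  # per-patch real/fake logits
        )
    def forward(self, frame, label):
        # Conditioning: the discriminator sees the frame and label together.
        return self.net(torch.cat([frame, label], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

def train_step(frame, label, lambda_l1=100.0):
    """One conditional-GAN update on a (frame, hand-drawn label) pair."""
    # Discriminator update: real pairs -> 1, generated pairs -> 0.
    fake = G(frame)
    d_real = D(frame, label)
    d_fake = D(frame, fake.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator update: fool D while staying close to the hand-drawn label (L1 term).
    d_fake = D(frame, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(fake, label)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Smoke test on a random 256x256 frame/label pair standing in for EndoVis data.
frame = torch.randn(1, 3, 256, 256)
label = torch.randn(1, 1, 256, 256).clamp(-1, 1)
print(train_step(frame, label))

At inference time only the generator runs, so each incoming frame is converted to a label in a single forward pass, consistent with the per-frame generation time reported in the abstract.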