Summary: I am a Ph.D. candidate in the Autonomous Systems, Control, and Optimization (ASCO) Lab at Johns Hopkins University, advised by Prof. Marin Kobilarov. My research focuses on task and motion planning strategies for robotic systems, spanning perception (object detection and segmentation), prediction (world model learning), and planning (model predictive control and deep imitation learning).
During my Ph.D., I developed autonomous surgical workflows for eye surgery using a robotic arm, RGB-D images, and force feedback. Notably, we pioneered the automation of retinal vein cannulation and subretinal injection, both demonstrated successfully on cadaveric pig eyes. Our recent work was featured in New Scientist.
I am currently seeking full-time positions. Going forward, I am interested in tackling problems beyond surgical applications, such as contact-rich manipulation and locomotion, and in building scalable decision-making capabilities for intelligent systems.
Education
Ph.D. MechE (Robotics), Johns Hopkins University (2018 – Present)
M.S.E. Robotics, Johns Hopkins University (2018 – 2020)
B.S. MechE, Johns Hopkins University (2013 – 2018)
Experience
Zoox, Inc. Foster City CA, USA
Software Engineering Intern, Motion Planning and Control (Summer 2022)
Autonomous Systems, Control, and Optimization Laboratory, Johns Hopkins University
Ph.D. candidate (2018 – present)
Mechanical Engineering Dept, Johns Hopkins University
Research assistant (2015 – 2017)
Automated Processes Inc. (API), Jessup MD, USA
Intern (Summer 2014)
Research Projects
Tackling Contact-Rich Surgical Manipulation Tasks with Deep Imitation Learning
Current work, details
Summary: the goal is to learn challenging surgical manipulation tasks (e.g., suturing) on the da Vinci system with deep imitation learning, using only stereo images and robot kinematics as input, ideally from a reasonable number of demonstrations.
Left column: ground truth; right column: diffusion model predictions
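The core objective above is behavior cloning: regress expert actions from observations. The sketch below is a deliberately minimal numpy illustration of that idea, using a flattened feature vector as a stand-in for encoded stereo images plus kinematics and a linear policy fit by least squares; the actual system uses deep networks on raw stereo video, and all dimensions and names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative dimensions: a visual feature vector (stand-in for encoded
# stereo images) concatenated with robot joint kinematics.
FEAT_DIM, KIN_DIM, ACT_DIM, N_DEMOS = 16, 7, 7, 1000

# Synthetic demonstrations: observations paired with (noisy) expert actions.
obs = rng.normal(size=(N_DEMOS, FEAT_DIM + KIN_DIM))
W_expert = rng.normal(size=(FEAT_DIM + KIN_DIM, ACT_DIM))
expert_actions = obs @ W_expert + 0.01 * rng.normal(size=(N_DEMOS, ACT_DIM))

# Behavior cloning: fit a policy minimizing ||obs @ W - expert_actions||^2.
W_policy, *_ = np.linalg.lstsq(obs, expert_actions, rcond=None)

# The cloned policy imitates the expert on a new observation.
new_obs = rng.normal(size=(1, FEAT_DIM + KIN_DIM))
imitation_error = np.linalg.norm(new_obs @ W_policy - new_obs @ W_expert)
```

With enough demonstrations relative to the observation dimension, the fitted policy matches the expert mapping closely; the practical challenge the project targets is getting this to work from high-dimensional images with few demonstrations.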
Visual-Task Planning
Current work, details
Summary: the goal is to learn a multi-modal world model conditioned on robot actions. We are exploring several architectures, including transformers and diffusion models.
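To make "world model conditioned on robot actions" concrete, here is a minimal numpy sketch that fits an action-conditioned latent dynamics model z_{t+1} ≈ A z_t + B a_t by least squares and uses it to predict the effect of an action. This linear stand-in is purely illustrative; the actual work explores transformer and diffusion architectures, and all dimensions here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: latent state z_t, robot action a_t, rollout length T.
Z_DIM, A_DIM, T = 4, 2, 500

# Ground-truth (unknown) dynamics used only to generate training data.
A_true = 0.9 * np.eye(Z_DIM)
B_true = rng.normal(size=(Z_DIM, A_DIM))

z = np.zeros((T + 1, Z_DIM))
a = rng.normal(size=(T, A_DIM))
for t in range(T):
    z[t + 1] = A_true @ z[t] + B_true @ a[t] + 0.01 * rng.normal(size=Z_DIM)

# Fit the world model z_{t+1} ≈ A z_t + B a_t from (state, action, next state).
X = np.hstack([z[:-1], a])                      # (T, Z_DIM + A_DIM)
W, *_ = np.linalg.lstsq(X, z[1:], rcond=None)
A_hat, B_hat = W[:Z_DIM].T, W[Z_DIM:].T

# The learned model can now "imagine" the outcome of a candidate action.
z_pred = A_hat @ z[0] + B_hat @ a[0]
pred_error = np.linalg.norm(z_pred - z[1])
```

The same structure scales up by replacing the linear map with a learned network and the latent state with an encoding of images and force measurements, which is where the multi-modal aspect enters.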
Goal: insert a needle into a ~0.1 mm diameter blood vessel to dissolve a blockage downstream (source)
Demonstration of blood vessel puncture on pig eyes (10x speed)
Autonomous Needle Insertion Inside the Eye for Targeted Drug Delivery
JW Kim, P Zhang, P Gehlbach, I Iordachita, M Kobilarov (paper, news)
Submitted to T-RO
Summary: we demonstrate, for the first time, autonomous retinal vein cannulation on pig eyes to deliver drugs into the bloodstream. This difficult procedure remains experimental but is necessary to treat an eye disease known to affect 16 million people worldwide.
Goal: access a specific layer of the retina with a needle for targeted drug delivery (10x speed)
Task Autonomy in Robotic Retinal Surgery using RGB-D Images
JW Kim, S Wei, P Zhang, P Gehlbach, JU Kang, I Iordachita, M Kobilarov (paper, website)
Submitted to RA-L
Summary: we demonstrate autonomous subretinal injection on pig eyes to deliver drugs below the retinal tissue. This difficult procedure also remains experimental but is necessary to treat rare eye diseases that affect 2 million people.
Autonomous Needle Navigation in Retinal Microsurgery: Evaluation in ex vivo Porcine Eyes
P Zhang, JW Kim, P Gehlbach, I Iordachita, M Kobilarov (paper)
ICRA 2023
Summary: we demonstrate a needle navigation task in eye surgery by combining deep imitation learning and optimal control. This is a follow-up validation of the CoRL 2020 paper below using animal tissue.
Towards Autonomous Eye Surgery by Combining Deep Imitation Learning and Optimal Control
JW Kim, P Zhang, P Gehlbach, I Iordachita, M Kobilarov (paper)
CoRL 2020
Summary: we solve an autonomous navigation problem in eye surgery using a goal-conditioned imitation learning network trained to imitate expert surgeon trajectories. The network output is combined with MPC to generate safe trajectories while satisfying kinematic constraints.
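The combination described above, a learned policy proposing motions and an optimal-control layer enforcing constraints, can be sketched in a few lines. The stub policy, the one-step constraint filter, and all constants below are illustrative stand-ins (the paper's network is trained on expert surgeon trajectories, and its MPC solves a full constrained optimization rather than a clip-and-project step).

```python
import numpy as np

V_MAX = 0.5      # illustrative tool velocity limit (mm/s)
Z_MIN = 1.0      # illustrative safety constraint: stay above tissue plane (mm)
DT = 0.1         # control period (s)

def policy(state, goal):
    """Stand-in for the goal-conditioned imitation network: proposes a
    velocity toward the goal. The real network is learned from expert
    surgeon demonstrations."""
    return goal - state

def safe_step(state, goal):
    """One-step MPC-style filter: clip the proposed velocity to the
    kinematic limit, then project the next state onto the safe set."""
    v = policy(state, goal)
    speed = np.linalg.norm(v)
    if speed > V_MAX:
        v = v * (V_MAX / speed)          # enforce velocity constraint
    nxt = state + DT * v
    nxt[2] = max(nxt[2], Z_MIN)          # enforce tissue-safety constraint
    return nxt

# Drive the tool tip from a start pose toward a goal on the retina.
state = np.array([5.0, 5.0, 4.0])
goal = np.array([0.0, 0.0, 0.0])
for _ in range(300):
    state = safe_step(state, goal)
# The tool converges above the goal while never violating the safety plane.
```

The design point this illustrates: the learned component supplies task intent, while the control layer guarantees the kinematic and safety constraints regardless of what the network outputs.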
Autonomously Navigating a Surgical Tool Inside the Eye by Learning from Demonstration
JW Kim, P Zhang, P Gehlbach, I Iordachita, M Kobilarov (paper)
ICRA 2020
Summary: this is a precursor to the CoRL 2020 paper, with a slightly different learning formulation and without the MPC integration. It marked one of the first efforts toward autonomous eye surgery.
Publications
JW Kim, PY Zhang, P Gehlbach, I Iordachita, M Kobilarov, "Micromanipulation in Surgery: Autonomous Needle Insertion Inside the Eye for Targeted Drug Delivery." Workshop on Experiment-Oriented Locomotion and Manipulation Research (RSS) 2023 (paper)
JW Kim, PY Zhang, P Gehlbach, I Iordachita, M Kobilarov, "Deep Learning Guided Autonomous Retinal Surgery Using a Robotic Arm, Microscopy and iOCT Imaging." Pending submission to IEEE Robotics and Automation Letters (RA-L) 2023 (paper, details)
JW Kim, PY Zhang, P Gehlbach, I Iordachita, M Kobilarov, "Deep Learning Guided Autonomous Surgery: Guiding Small Needles into Sub-Millimeter Scale Blood Vessels." Submitted to IEEE Transactions on Robotics (T-RO) 2023 (paper, details)
PY Zhang, JW Kim, P Gehlbach, I Iordachita, M Kobilarov, "Autonomous Needle Navigation in Retinal Microsurgery: Evaluation in ex vivo Porcine Eyes." International Conference on Robotics and Automation (ICRA) 2023
K Mach, S Wei, JW Kim, A Gomez, PY Zhang, JU Kang, M Nasseri, P Gehlbach, N Navab, I Iordachita, "OCT-guided Robotic Subretinal Needle Injections: A Deep Learning-Based Registration Approach." IEEE International Conference on Bioinformatics and Biomedicine (BIBM) 2022
S Wei, JW Kim, A Martin-Gomez, PY Zhang, I Iordachita, JU Kang, "Region targeted robotic needle guidance using a camera-integrated optical coherence tomography." Optical Coherence Tomography, CM2E (2022)
PY Zhang, JW Kim, M Kobilarov, "Towards Safer Retinal Surgery through Chance Constraint Optimization and Real-Time Geometry Estimation." Conference on Decision and Control (CDC) 2021
JW Kim, PY Zhang, P Gehlbach, I Iordachita, M Kobilarov, "Towards Autonomous Eye Surgery by Combining Deep Imitation Learning with Optimal Control." Conference on Robot Learning (CoRL) 2020 (paper, video)
JW Kim, C He, M Urias, P Gehlbach, I Iordachita, M Kobilarov, "Autonomously Navigating a Surgical Tool Inside the Eye By Learning from Demonstration." International Conference on Robotics and Automation (ICRA) 2020 (paper)
MG Urias, N Patel, C He, A Ebrahimi, JW Kim, I Iordachita, PL Gehlbach, "Artificial intelligence, robotics and eye surgery: are we overfitted?" International Journal of Retina and Vitreous, 2019