Surgical robotics stands at the forefront of medical innovation, promising significant improvements in precision, repeatability, and dexterity in surgical procedures. Achieving surgical autonomy, wherein robots independently execute surgical subtasks, has emerged as a critical research goal driven by recent advancements in artificial intelligence (AI) and robotic systems. Central to realizing this autonomy is the development of highly realistic, computationally efficient surgical simulators that mimic complex surgical scenarios, including soft tissue manipulation, suturing, cutting, and handling bodily fluids.
Real-time surgical simulators offer safe, efficient, and scalable environments for training robotic agents through reinforcement learning (RL) and imitation learning (IL). These simulators facilitate the generation of extensive synthetic datasets crucial for iterative AI policy training and refinement, significantly reducing reliance on limited and costly real-world surgical data. However, bridging the gap between simulated training and real-world deployment—sim-to-real transfer—remains an open challenge requiring robust and validated methodologies. Additionally, recent trends leverage simulators for training advanced multi-modal foundation models, particularly Vision-Language-Action (VLA) models, enabling surgical robots to reason, plan, and adapt dynamically to evolving surgical contexts through integrated visual and language cues.
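For instance, a standard RL data-collection loop over a simulated surgical subtask looks as follows. This is a minimal, runnable sketch assuming Python with the gymnasium and numpy packages; the toy environment, its reward, and all thresholds are illustrative stand-ins for a real surgical simulator, not any existing platform.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class ToyRetractionEnv(gym.Env):
    """Toy stand-in for a surgical simulator: the agent moves a gripper
    tip in 3-D toward a target grasp point on (highly simplified) tissue."""

    def __init__(self):
        self.observation_space = spaces.Box(-1.0, 1.0, shape=(6,), dtype=np.float32)
        self.action_space = spaces.Box(-0.05, 0.05, shape=(3,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.tip = self.np_random.uniform(-0.5, 0.5, size=3).astype(np.float32)
        self.goal = self.np_random.uniform(-0.5, 0.5, size=3).astype(np.float32)
        return np.concatenate([self.tip, self.goal]), {}

    def step(self, action):
        self.tip = np.clip(self.tip + action, -1.0, 1.0)
        dist = float(np.linalg.norm(self.tip - self.goal))
        terminated = dist < 0.03           # reached the grasp point
        reward = -dist                     # dense shaping reward (illustrative)
        obs = np.concatenate([self.tip, self.goal])
        return obs, reward, terminated, False, {}

# Collect a synthetic transition dataset with a random policy, as one
# would before fitting an RL or IL policy.
env = ToyRetractionEnv()
obs, _ = env.reset(seed=0)
dataset = []
for _ in range(200):
    action = env.action_space.sample()
    next_obs, reward, terminated, truncated, _ = env.step(action)
    dataset.append((obs, action, reward, next_obs))
    obs = next_obs
    if terminated or truncated:
        obs, _ = env.reset()
print(f"collected {len(dataset)} transitions")
```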
This workshop gathers prominent global experts to discuss recent advances, open challenges, and emerging opportunities in (1) the development of realistic surgical simulators specifically optimized for AI training, (2) advanced AI-powered robotic motion planners trained exclusively within these simulators, (3) robust sim-to-real methods for deploying these trained policies onto clinical robotic platforms, and (4) innovative uses of simulation-mediated data for training multi-modal VLA models for high-level task planning in surgery.
Objectives
The growing demand for surgical procedures, compounded by workforce shortages and surgical backlogs, has underscored the need for greater automation in surgery. Current surgical robotic systems, such as the da Vinci platform, remain entirely teleoperated and offer no autonomous capabilities, requiring continuous surgeon involvement. However, recent advancements in AI and machine learning (ML) are enabling higher levels of autonomy, with the potential to improve surgical efficiency, precision, and safety. Surgical autonomy can reduce surgeon workload, minimize human-induced errors, and improve consistency in complex procedures. For example, autonomous systems can assist with high-precision tasks such as tissue manipulation, tumor localization, suturing, and resection, while dynamically adapting to intraoperative conditions. Autonomy is particularly critical in remote and resource-limited settings, where AI-driven robotic assistance could enhance access to specialized care.
Central to this pursuit is the need for highly realistic and computationally efficient surgical simulators that enable artificial agents to be trained via robot-learning approaches in environments closely mimicking real-world surgical conditions. These platforms enable the generation of extensive synthetic data across a diverse array of surgical scenarios involving complex interactions such as tissue deformation, cutting, suturing, blood suction, and irrigation. By facilitating safe, controlled, and repeatable experimentation, simulators provide an indispensable foundation for developing AI agents capable of autonomously executing precise surgical subtasks.
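To make the deformable-body requirement concrete, below is a minimal, illustrative sketch (Python with NumPy; all parameters are hypothetical) of a mass-spring chain, one of the simplest models used for real-time soft-tissue deformation. Production simulators typically rely on finite-element or position-based dynamics instead, but the computational pattern, i.e., integrating internal elastic and damping forces together with tool-tissue interaction forces at every time step, is the same.

```python
import numpy as np

# Minimal mass-spring chain as a (simplified) soft-tissue deformation model.
# Node 0 is pinned; the last node is pulled by a constant "tool" force, and
# the chain relaxes under spring + damping forces via explicit Euler steps.
N, k, c, m, dt = 10, 50.0, 0.8, 0.01, 1e-3   # nodes, stiffness, damping, mass, step
rest = 0.01                                  # rest length between nodes (m)
x = np.linspace(0.0, rest * (N - 1), N)      # 1-D node positions
v = np.zeros(N)
tool_force = 0.2                             # constant pull on the free end (N)

for _ in range(5000):
    f = np.zeros(N)
    stretch = (x[1:] - x[:-1]) - rest        # spring elongations
    f[:-1] += k * stretch                    # springs pull neighbors together
    f[1:]  -= k * stretch
    f -= c * v                               # viscous damping
    f[-1] += tool_force                      # tool-tissue interaction force
    v[1:] += dt * f[1:] / m                  # node 0 stays pinned
    x[1:] += dt * v[1:]

print("tip displacement under tool load:", x[-1] - rest * (N - 1))
```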
Recent advancements have underscored the importance of leveraging simulators for training AI models using IL and RL. Such approaches enable safe and systematic data-driven surgical robot learning, reducing reliance on limited and expensive real-world data and allowing iterative policy development and refinement. In addition, surgical simulators can facilitate the training of advanced multi-modal and foundation models, such as large language models (LLMs) and VLA models. These models allow surgical robots to engage in human-like reasoning and collaborative decision-making, acting on real-time multisensory feedback.
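As a concrete (and deliberately simplified) illustration of how such a high-level model can be constrained for safety, the sketch below maps an endoscopic frame and a language instruction to one subtask from a vetted library of low-level controllers. The `vla_plan` keyword matcher is a hypothetical stand-in for a trained VLA model, written in plain Python so the example stays self-contained and runnable.

```python
from dataclasses import dataclass
import numpy as np

# Vetted library of low-level subtask policies the high-level model may
# invoke; restricting the planner to a whitelist is one common safety pattern.
SUBTASKS = ("grasp_needle", "insert_needle", "pull_suture", "retract_tissue")

@dataclass
class Observation:
    image: np.ndarray        # endoscopic RGB frame, H x W x 3
    instruction: str         # surgeon's natural-language command

def vla_plan(obs: Observation) -> str:
    """Placeholder for a trained vision-language-action model. A real model
    would embed the image and instruction jointly; here we keyword-match
    only so the example runs without a trained network."""
    for name in SUBTASKS:
        if name.split("_")[-1] in obs.instruction.lower():
            return name
    return "retract_tissue"   # conservative default

def execute(subtask: str) -> None:
    # Guard: never dispatch a subtask outside the vetted library.
    assert subtask in SUBTASKS, f"refusing unvetted subtask: {subtask}"
    print(f"dispatching low-level controller for: {subtask}")

obs = Observation(image=np.zeros((480, 640, 3), dtype=np.uint8),
                  instruction="Grasp the needle and prepare the suture.")
execute(vla_plan(obs))
```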
However, achieving effective real-world deployment of simulator-trained policies necessitates realistic and computationally efficient simulators capable of modeling the diverse range of objects encountered in surgery, including rigid bodies, soft tissue, ropes/threads, and fluids, as well as complex surgery-specific manipulations such as cutting and suturing. Robust sim-to-real transfer strategies are further needed to carry the acquired policies into the real world. These strategies bridge the gap between the simulated environment and real surgical platforms, ensuring that skills learned in virtual settings translate into clinical practice.
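One widely used sim-to-real strategy is domain randomization: physical and visual simulator parameters are re-sampled every training episode so that the learned policy becomes robust to the (unknown) parameters of the real system. A minimal sketch, assuming Python with NumPy; the parameter names, their ranges, and the `env.reconfigure` hook are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sim_params():
    """Draw one randomized simulator configuration per training episode.
    Ranges are illustrative; in practice they are tuned so the real
    system's (unknown) parameters plausibly lie inside the randomized set."""
    return {
        "tissue_stiffness": rng.uniform(0.5e3, 5e3),   # N/m
        "tool_tissue_friction": rng.uniform(0.1, 0.8),
        "damping": rng.uniform(0.1, 1.0),
        "camera_hue_shift": rng.uniform(-0.05, 0.05),  # visual randomization
        "light_intensity": rng.uniform(0.7, 1.3),
    }

for episode in range(3):
    params = sample_sim_params()
    # env.reconfigure(params)   # hypothetical hook into the simulator
    print(f"episode {episode}: {params}")
```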
Topics of interest include but are not limited to:
Surgical Simulation for AI Training
Development of high-fidelity, physics-based surgical simulators
Contact-rich simulation environments for tissue manipulation, suturing, cutting, and other tasks
Data generation and augmentation strategies based on simulators
AI-Driven Robotic Motion Planning for Surgery
Reinforcement learning (RL) and imitation learning (IL) for autonomous surgical subtasks
Learning-based trajectory optimization for robotic tool manipulation
Real-time AI-based control and adaptation in dynamic surgical environments
Sim-to-Real Transfer for Surgical Robotics
Domain adaptation techniques to bridge the gap between simulation and real-world execution
Validation of AI-trained surgical policies on physical robotic platforms (e.g., dVRK)
Reducing the reality gap in soft-tissue deformation modeling and tool-tissue interaction
Multi-Modal and Foundation Models for Surgery
Use of large models, such as LLMs and VLAs, for surgical task planning
Vision-language integration for high-level surgical decision-making
Simulator-mediated training of foundation models for robot-assisted surgery
TBA
We welcome poster submissions in the form of brief papers related to the scope of the workshop.
Papers may represent under-review, unpublished, or recently published research.
Papers should be no longer than one page.
All submissions will be reviewed; accepted 1-page papers will not be published in any proceedings. You will be asked to bring a physical poster corresponding to your accepted 1-page paper to the workshop.
Submission deadline: September 15, 2025
Notification of acceptance: rolling
Mahdi Tavakoli, Professor,
Department of Electrical & Computer Engineering, University of Alberta, Canada.
Email: mahdi.tavakoli@ualberta.ca
Yafei Ou, PhD Student,
Department of Electrical & Computer Engineering, University of Alberta, Canada.
Email: yafei.ou@ualberta.ca
Fanny Ficuciello, Associate Professor,
University of Naples Federico II, Naples, Italy.
Email: fanny.ficuciello@unina.it
Philippe Poignet, Professor,
University of Montpellier, Montpellier, France.
Email: philippe.poignet@lirmm.fr