A Workshop co-located with the
32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2023)
Date: August 28, 2023
1:30 pm - 4:30 pm
Format: Half-day workshop
List of Topics
Human in the loop, medical robotics, real-time simulation, haptics, deep learning models for 4D human modeling, 3D human reconstruction from video, and action recognition.
Paradise Hotel, Busan, Korea
Statement of Objectives
With the progress of digital healthcare over the past years, the need for numerical tools in the clinical workflow is growing, whether for pre-, intra-, or post-operative steps. This need is amplified by the constantly growing use of new imaging tools and robots in Minimally Invasive Surgery (MIS) and radiology. The rapid evolution of surgical tools and techniques puts particular stress on image processing, enabling, for instance, augmented views, automatic segmentation, or robotic registration. But when image-based algorithms fail to provide consistent results, numerical models, such as mechanical models, can help regularize the results. This workshop will present new advances in two key domains for digital healthcare: human models for image recognition and numerical models for training and medical robotics.
Robot vision for human cognition often fails to work well in real-world situations, despite the disruptive results achieved in Computer Vision and Artificial Intelligence. While most training data have been collected against well-conditioned, easy-to-isolate backgrounds, wild videos from the real world may contain varying environmental conditions such as lighting, background patterns, and, most notoriously, occlusions. The latter is a source of recurrent problems for human cognition by care robots in home settings. Large variations in body shapes, motions, and clothes, and frequent interactions with objects, also contribute to the difficulty. Unfortunately, it is almost impossible to collect a large, annotated dataset that spans all possible configurations of a real-world scene. In this workshop, we will discuss a number of promising approaches to remedy such problems, a model-based learning framework being one of them. The main idea is to equip the robot with a realistic human model, so that it can reconstruct the human and the world it is observing efficiently and robustly, over which the cognition task is then performed. We will present SOTA work as well as our recent and ongoing work on deep learning models for human model generation, reconstruction from video, and action recognition in the context of, but not limited to, health and medicine.
MIS is progressively recognized as a safe and effective approach to meet surgical needs while decreasing the rates of complications. However, the paradigm shift from open surgery towards MIS has significantly raised the technical level required to perform the surgery. As a result, computer-based simulators have received considerable interest in the past years. Medical simulators have multiple benefits over traditional training methods, such as the possibility to train physicians in various scenarios or to change the properties of the tissues in a repeatable manner. Recent advances in the field open the possibility of using simulations not only for training but also for pre- and intraoperative support. Numerical models can also be used to display internal structures (vessels, tumors, ...) on top of intraoperative images, using the extrapolation capacity of biomechanical models. The combination of numerical models and robotics can be used to assist practitioners and reduce their technical burden. For all these applications, one obvious requirement is to simulate the mechanical response of organs with high accuracy. We will present our recent work on real-time medical simulation, simulation-based robotic control for needle insertion, and shared-control strategies.
Last but not least, we would like to take this opportunity to promote international collaborations with French robotics teams, supported by the Institute for Information Sciences (INS2I) of CNRS in the framework of the RoSaCo (Network of associated teams of Robotics and AI for Health in France and South Korea) project.
Intended Audience
The intended audience includes junior and senior researchers in the fields of robotic control, medical simulation, and/or robot vision. Surgeons and radiologists may also find this workshop interesting.
Hyewon Seo: CNRS research director, ICube laboratory – Univ. Strasbourg, France (1:30 pm ~ 2:00 pm)
3D human pose estimation & dementia detection from gait videos
Abstract: Dementia with Lewy Bodies (DLB) and Alzheimer's Disease (AD) are two common neurodegenerative diseases among elderly people. Gait analysis is frequently used in clinical assessments to discriminate these neurological disorders from healthy controls, to grade disease severity, and to further differentiate dementia subtypes. In this talk, we will present our recent deep-learning-based model for assessing disease severity from monocular gait videos. It first estimates the sequence of 3D body skeletons, corrects them using extracted gait features, and classifies the corrected 3D pose sequence according to the MDS-UPDRS gait scores. The model, named MAX-GR, is based on a multi-head attention Transformer with gait-parameter estimation and a geometry-aware shape space over which a classifier is trained. We compare the results of our model with existing SOTA methods in terms of 3D pose reconstruction, classification accuracy, and the combination of both.
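For context, the sketch below illustrates the three-stage pipeline named in this abstract (per-frame pose estimation, gait-feature-based correction, Transformer classification) as it might look in PyTorch. All shapes and module names are illustrative assumptions; this is not the MAX-GR implementation.

```python
# A minimal sketch (not the authors' MAX-GR code) of the pipeline described
# above: (1) per-frame 3D pose estimation, (2) correction of the skeleton
# sequence using gait features, (3) classification of the corrected sequence
# with a multi-head attention Transformer. All names/sizes are hypothetical.
import torch
import torch.nn as nn

class GaitScoreClassifier(nn.Module):
    def __init__(self, n_joints=17, d_model=128, n_scores=4):
        super().__init__()
        # Stage 1 stand-in: lift per-frame 2D detections to 3D skeletons.
        self.lift3d = nn.Linear(n_joints * 2, n_joints * 3)
        # Stage 2 stand-in: refine the 3D poses conditioned on gait features
        # (e.g., step length, cadence), here an 8-dim feature vector.
        self.correct = nn.Linear(n_joints * 3 + 8, n_joints * 3)
        # Stage 3: Transformer encoder over the corrected pose sequence,
        # followed by a classifier over MDS-UPDRS-style gait scores.
        self.embed = nn.Linear(n_joints * 3, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, n_scores)

    def forward(self, poses2d, gait_feats):
        # poses2d: (batch, frames, n_joints*2); gait_feats: (batch, frames, 8)
        p3d = self.lift3d(poses2d)
        p3d = self.correct(torch.cat([p3d, gait_feats], dim=-1))
        h = self.encoder(self.embed(p3d))
        return self.head(h.mean(dim=1))  # one score distribution per video

model = GaitScoreClassifier()
scores = model(torch.randn(2, 60, 17 * 2), torch.randn(2, 60, 8))
print(scores.shape)  # torch.Size([2, 4])
```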
Oriane Dermy: Post-doctoral researcher, BIRD team, LORIA Nancy (2:00 pm ~ 2:30 pm)
Understanding movement and underlying intention, from simple gestures to full-body motion, for collaborative robotics
Abstract: In this workshop, I will present my research on modeling human behavior for collaborative robotics, also known as cobotics. My work focuses on non-verbal human-robot interaction, specifically the prediction of intention and the understanding and reproduction of gestures. These questions are addressed through learning by demonstration and the exploration of different perceptual modalities, including proprioceptive sensors, visual sensors, and external sensors such as X-Sens. These learning approaches and sensor modalities enable the robot to recognize and predict the full-body movements of the user. I will present the approaches used, which are based on the statistical modeling of movement primitives, namely Probabilistic Movement Primitives, as well as the use of autoencoders to model complex gestures while enabling real-time motion prediction.
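As background for readers unfamiliar with Probabilistic Movement Primitives, the snippet below sketches the core idea mentioned above: demonstrations are encoded as a distribution over basis-function weights, and conditioning that distribution on the first observed samples of a new motion predicts the rest of the trajectory (the "intention"). The basis functions, dimensions, and data are illustrative assumptions, not the talk's setup.

```python
# A minimal 1-D ProMP sketch with radial-basis-function features.
import numpy as np

def rbf_features(t, n_basis=10, width=0.02):
    # t: (T,) phase values in [0, 1] -> normalized feature matrix (T, n_basis)
    centers = np.linspace(0, 1, n_basis)
    phi = np.exp(-(t[:, None] - centers[None, :]) ** 2 / (2 * width))
    return phi / phi.sum(axis=1, keepdims=True)

# Learning by demonstration: fit basis weights to each demo trajectory,
# then model the weight distribution w ~ N(mu_w, Sigma_w).
T = 100
t = np.linspace(0, 1, T)
Phi = rbf_features(t)
demos = [np.sin(np.pi * t) + 0.05 * np.random.randn(T) for _ in range(20)]
W = np.stack([np.linalg.lstsq(Phi, y, rcond=None)[0] for y in demos])
mu_w, Sigma_w = W.mean(axis=0), np.cov(W.T)

# Intention prediction: condition on the first few observed samples of a
# new motion and predict the remainder of the trajectory.
obs_idx = np.arange(15)
Phi_o, y_o, noise = Phi[obs_idx], demos[0][obs_idx], 1e-4
K = Sigma_w @ Phi_o.T @ np.linalg.inv(
    Phi_o @ Sigma_w @ Phi_o.T + noise * np.eye(len(obs_idx)))
mu_post = mu_w + K @ (y_o - Phi_o @ mu_w)
prediction = Phi @ mu_post  # predicted full trajectory, shape (T,)
print(prediction.shape)
```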
Minsu Jang: Electronics and Telecommunications Research Institute, South Korea (2:30 pm ~ 3:00 pm)
Daily Activity Recognition for Elderly-Care Robots
Abstract: Daily activity recognition is an essential tool for delivering a variety of health-care services with service robots. A service robot can detect anomalous clues in daily activity routines or in motion patterns, initiate conversations with the elderly to check their health status, and report health-related clues to the family or responsible caregivers. In this talk, we introduce our recent research on daily activity recognition for human-care robots, with experimental results on an elderly activity dataset called ETRI-Activity3D.
Hadrien Courtecuisse: CNRS researcher, ICube laboratory – Univ. Strasbourg, France (3:00 pm ~ 3:30 pm)
Real-time Finite Element simulations for surgical assistance - Applications to training, augmented reality and robotic control
Abstract: Needle insertion techniques are among the most common surgical interventions, and their efficacy depends heavily on the precision of needle positioning within the patient's body. The main objective of this thesis was to develop an autonomous robotic system capable of inserting a flexible needle into a deformable structure along a predefined trajectory. The uniqueness of this work lies in the use of inverse finite element (FE) simulations within the robot's control loop to predict structural deformations. Throughout the insertion process, the FE models are continuously updated (corrective step) based on information extracted from an intraoperative imaging system. This step keeps model errors relative to the actual structures under control and prevents them from diverging. A second step (predictive step) anticipates the behavior of the deformable structures, relying solely on the biomechanical model's predictions. This allows the robot's command to compensate for tissue displacements even before the needle moves.
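To make the corrective/predictive interplay concrete, the runnable toy below reduces the idea to one dimension, with a hypothetical scalar tissue parameter standing in for the FE model: the predictive/inverse step picks the command the model believes will reach the target, and the corrective step re-registers the model to each (simulated) intraoperative measurement. It illustrates the control structure only, not the paper's FE formulation.

```python
# One-dimensional toy of the corrective/predictive loop described above.
import numpy as np

rng = np.random.default_rng(0)
alpha_true = 0.7    # fraction of the command reaching the tip (tissue yields)
alpha_model = 1.0   # initial model guess (rigid tissue)
targets = np.linspace(0.0, 0.04, 40)  # desired tip positions [m]

for target in targets:
    # Predictive/inverse step: the model predicts tip = alpha_model * cmd,
    # so invert it to choose the command that should reach the target.
    cmd = target / alpha_model

    # Intraoperative "imaging": noisy measurement of the real tip position.
    tip = alpha_true * cmd + rng.normal(0.0, 1e-4)

    # Corrective step: re-register the model to the measurement (low-pass
    # filtered) so its error cannot diverge during insertion.
    if cmd > 1e-6:
        alpha_model += 0.5 * (tip / cmd - alpha_model)

print(f"model parameter: {alpha_model:.2f} (true value {alpha_true})")
```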
Experimentally, we used our approach to control an actual robot to insert a flexible needle into a deformable foam along a predefined (virtual) trajectory. We proposed a constraint-based formulation allowing the calculation of predictive steps in the constraint space, thereby providing a total insertion time compatible with clinical applications.
We also proposed an augmented reality system for open liver surgery, relying on an initial semi-automatic registration and an intraoperative tracking algorithm using optical (3D) markers. We demonstrated the applicability of this approach in an operating room during a liver resection surgery.
The results obtained during this doctoral work have led to three publications in international conferences (two IROS and one ICRA) and a journal article (Transactions on Robotics) currently under review.
Paul Baksic: CNRS Research Engineer, ICube laboratory – Univ. Strasbourg, France (3:30 pm ~ 4:00 pm)
Shared-control strategy for percutaneous procedures
Abstract: Interventional radiology is indicated for liver tumors of less than 3 cm in size. It uses needles to reach the cancerous tissue inside the organ. The technical difficulty of this type of procedure is high, as its effectiveness depends on targeting accuracy, while the needle-tissue interaction induces non-trivial deformations and the radiologist cannot directly see what they are doing. This thesis proposes a practitioner-centered robotic assistance tool for percutaneous procedures, aimed at reducing the required technical level. An automatic insertion algorithm that compensates for external disturbances and needle-tissue interactions using a real-time inverse finite element simulation is first proposed. The sharing of the control, combining the practitioner's decisions with the automatic control, is then discussed. These two contributions are evaluated first in simulated experiments and then in a liver phantom; for this, an experimental setup is built and evaluated.
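As a generic illustration of what "sharing of the control" can mean (an assumption for this sketch; the talk's actual strategy may differ), the snippet below blends the automatic controller's command with the practitioner's input through an authority factor.

```python
# A minimal, hypothetical shared-control blend; not the thesis' controller.
import numpy as np

def shared_command(u_auto, u_human, authority=0.5):
    """Blend automatic and human commands; authority in [0, 1] gives the
    practitioner's share of control (1.0 = fully manual)."""
    u_auto, u_human = np.asarray(u_auto), np.asarray(u_human)
    return (1.0 - authority) * u_auto + authority * u_human

# Example: the automatic controller compensates tissue motion laterally,
# while the practitioner commands the insertion depth along the last axis.
u = shared_command([0.002, 0.001, 0.0], [0.0, 0.0, 0.005], authority=0.4)
print(u)  # blended velocity command [m/s], one value per axis
```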
Jaesoon Choi: Professor, Dept. of Biomedical Engineering - University of Ulsan College of Medicine (4:00 pm ~ 4:30 pm)
Intervention Assist Robots with Augmented Clinical Utility through Medical Simulation Technologies
Abstract: Surgical robot technology is continuously advancing, and recent progress in artificial intelligence has spurred new attempts, particularly around automation, which was previously considered a challenging problem. Robot systems are being commercialized in various clinical areas, and robots for interventional procedures are also being steadily developed. Interventional procedures require relatively limited movements and information processing compared to surgery, allowing the benefits of robots to be pursued more broadly. Simulation has long been used in training for surgery and interventional procedures, but realistic content has long been the biggest hurdle. However, just as neural networks evolved into deep neural networks, simulation also shows potential for evolution. With the recent establishment of the digital twin concept, the technological environment is changing, enabling a wider range of applications to be reconsidered. Fusing advanced simulation technologies with surgical and interventional robots can provide advanced image-analysis information, or enable automation and autonomous task execution of greater practical significance than before. Furthermore, digital twins may find a new application in replacing animal experiments or clinical trials for evaluating the safety of medical devices. In this presentation, we examine the trends and prospects of surgical robots and simulation technology in this regard.
Main Organiser
Name: Hyewon Seo
Affiliation: CNRS-Univ Strasbourg, France
Address: Laboratoire ICube
Bâtiment Clovis Vincent, 1 place de l'Hôpital
67000 Strasbourg Cedex FRANCE
Phone: +33(0)3.90.41.35.04
Website: http://igg.unistra.fr/People/seo/
Bio: Hyewon Seo is a permanent CNRS (French National Centre for Scientific Research) research director working at ICube (Laboratoire des sciences de l'ingénieur, de l'informatique et de l'imagerie), Université de Strasbourg. Since 2016, she has also been an affiliated professor at POSTECH, South Korea. She holds B.Sc. and M.Sc. degrees in Computer Science from the Korea Advanced Institute of Science and Technology (KAIST). After obtaining her Ph.D. at the University of Geneva (MIRALab), she became an assistant professor in the Computer Science and Engineering Department at Chungnam National University, South Korea. At CNU she led the Computer Graphics Laboratory until she moved to France as a CNRS researcher in 2010.
Her research interests center primarily on 3D/4D shape analysis and modeling, with a focus on human data. She is currently focusing on learning-based methods for 2D-to-3D shape reconstruction, motion generation, and inverse simulation, among others. So far, she has authored about 60 articles in international journals and conferences, 4 book chapters, and 3 patents.
She has served on several editorial boards of international journals, among them The Visual Computer (Springer Nature), where she was an associate editor-in-chief (2016-2020). She also serves as a PC member for several international conferences and has participated in the organization of several of them, including the Symposium on Solid and Physical Modeling 2020 (conference co-chair). During 2012-2016, she was an elected member of the national committee of CNRS, 7th section: signal, image, automatic control, robotics, and human-computer interaction. Since 2021, she has been co-leading a new research team, Machine Learning, Modeling & Simulation (MLMS), where she coordinates several research projects such as RoSaCo (Network of associated teams of Robotics and AI for Health in South Korea) and HuMoCar (Human Model for Real-World Human Cognition by Care Robots).
Co-Organiser
Name: Hadrien Courtecuisse
Affiliation: CNRS-Univ Strasbourg, France
Address: Laboratoire ICube
Bâtiment Clovis Vincent, 1 place de l'Hôpital
67000 Strasbourg Cedex FRANCE
Bio: Hadrien Courtecuisse (Ph.D., HDR) is a research scientist at CNRS in section 07/06 (computer science applied to medical systems). He obtained his Ph.D. in the SHAMAN team at Inria in 2011 (New parallel architectures for interactive medical simulations). In 2012 he was a postdoctoral research associate at the Institute of Mechanics and Advanced Materials (IMAM) at Cardiff University with Stéphane Bordas. In 2013 he moved to Strasbourg to work as a research engineer for the Institut Hospitalo-Universitaire (IHU). Between 2013 and 2022, the multidisciplinary nature of his activity stabilized around objectives spanning several scientific areas: multiphysics modeling, numerical simulation, robotics, and biomechanical registration with experimental data. His major contributions relate to sparse linear algebra, collision detection, and the simulation of contact response, with a particular interest in real-time simulation, parallel architectures such as GPUs and co-processors, and simulation-based robotic needle insertion. In 2013, he joined the Inria Mimesis project led by Stéphane Cotin, where he focused his research on the development of original methods to control medical robots interacting with deformable structures using inverse real-time FE models while maintaining user interactivity.
Co-Organiser
Name: Minsu Jang
Affiliation: Electronics and Telecommunications Research Institute (South Korea)
Website: https://zebehn.github.io
Bio: Minsu Jang is a Principal Researcher at the Electronics and Telecommunications Research Institute and an Associate Professor at the University of Science and Technology (South Korea). He is currently a principal investigator for projects on developing 1) cloud-robot intelligence that lets robots effectively adapt to diverse environments and personalize services for different customers through domain adaptation and collaborative learning via cloud platforms, and 2) LBA (Learning-By-Asking) agents that can self-improve their intelligence by detecting uncertainties and expanding their knowledge via active question-answering sessions in the real world. The main goal of his research is to build service robots that not only work well but also learn well in the real world. He is a member of the board of directors of the Korea Robotics Society and has served as an organizer of workshops, special sessions, and conferences at RO-MAN, HRI, and ICSR, and as a guest editor for special issues in journals including the International Journal of Social Robotics, Frontiers in Robotics and AI, and Intelligent Service Robotics.
Co-Organiser
Name: Paul Baksic
Affiliation: CNRS-Univ Strasbourg, France
Address: Laboratoire ICube
Bâtiment Clovis Vincent, 1 place de l'Hôpital
67000 Strasbourg Cedex FRANCE
Bio: Paul Baksic obtained his M.Sc. degree in robotic engineering applied to surgery in 2018 and then pursued a Ph.D. in the same field at the University of Strasbourg. He proposed innovative solutions to help practitioners during percutaneous procedures on soft organs. His background in robotics, biomechanics, and computer science helped him develop a method that copes with needle and tissue deformations during robotic needle insertion while leaving intraoperative control to the practitioner. The shared control strategy developed during his Ph.D. opens a door to clinical use, since decision-making remains in the hands of the practitioner.
Speaker
Name: Jaesoon Choi, PhD
Affiliation:
Professor, Dept. of Biomedical Engineering, University of Ulsan College of Medicine
Director, Biomedical Engineering Research Center, Asan Medical Center
CEO, LN Robotics Inc
Bio: Dr. Jaesoon Choi received the B.S. degree in control and instrumentation engineering and the M.S. and Ph.D. degrees in biomedical engineering from Seoul National University in 1995, 1997, and 2003, respectively. He received research training at the Department of Biomedical Engineering, Cleveland Clinic, U.S.A., from 1999 to 2000. Between 2003 and 2006, he worked as a Staff Researcher at the Research Institute, National Cancer Center, Korea. From 2007 to 2012, he was with the College of Medicine, Korea University, as a Research Professor. He is currently a Full Professor at the Dept. of Biomedical Engineering, University of Ulsan College of Medicine and Asan Medical Center, Seoul, Korea.
He has published 55 peer-reviewed papers in international journals and holds more than 90 patents in surgery/intervention-assist robots, medical simulation/visualization, bioprinting technologies, and mechatronics applications in biomedicine.
Speaker
Name: Oriane Dermy
Affiliation: Postdoctoral researcher, BIRD team, LORIA Nancy
Bio: Oriane Dermy obtained her Ph.D. in Computer Science at Inria, University of Lorraine, in 2018. She worked within the LARSEN team, focusing on modeling and predicting human motion, from simple gestures to full-body movements, for collaborative robotics. Since then, she has been conducting postdoctoral research at LORIA, University of Lorraine, in the BIRD team. Her current work involves data mining applied to education, specifically the dynamic modeling of student behavior. Her research centers on modeling human behavior and predicting human intent, in the fields of both robotics and e-learning.
Acknowledgements
The majority of the travel expenses will be funded by the Institute for Information Sciences (INS2I) of CNRS, in the framework of the RoSaCo (Network of associated teams of Robotics and AI for Health in South Korea) project.