Our lab focuses on design, modeling, and control principles for robots in the medical and service fields, particularly in the following four areas.
Flexible Surgical Robots
We have extensively studied flexible surgical robots, focusing on overcoming challenges caused by the complex, flexible transmission mechanisms that connect actuators to end-effectors. Our research has identified key issues such as tendon elongation and transmission delays, and developed mathematical models to explain these phenomena. Building on this, we have designed advanced control algorithms that compensate for hysteresis and dynamic changes in the robot’s transmission path.
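To give a flavor of what such compensation looks like, below is a minimal sketch of feedforward hysteresis compensation for a tendon-driven joint. It uses a generic Bouc-Wen hysteresis model as a stand-in for the transmission behavior; the shape parameters and gain are illustrative assumptions, not identified values from our robots or our published models.

```python
import numpy as np

# Bouc-Wen shape parameters and output gain (assumed for illustration;
# in practice these are identified from motor/end-effector data).
A, BETA, GAMMA, N = 1.0, 0.5, 0.3, 1
K = 0.15

def bouc_wen_step(z, dq, dt):
    """Advance the hysteresis state z for one step of joint velocity dq."""
    dz = A * dq - BETA * abs(dq) * abs(z) ** (N - 1) * z - GAMMA * dq * abs(z) ** N
    return z + dz * dt

def compensate(q_des_traj, dt):
    """Offset each motor command by the predicted hysteretic tendon loss."""
    z, q_cmd = 0.0, []
    q_prev = q_des_traj[0]
    for q_des in q_des_traj:
        dq = (q_des - q_prev) / dt
        z = bouc_wen_step(z, dq, dt)
        q_cmd.append(q_des + K * z)  # add back what the transmission absorbs
        q_prev = q_des
    return np.array(q_cmd)

# Example: compensate a sinusoidal joint trajectory.
t = np.linspace(0, 2 * np.pi, 500)
q_cmd = compensate(np.sin(t), dt=t[1] - t[0])
```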
Intraoperative Perception
In our lab, we aim to advance intelligent intraoperative perception, including haptic and bionic tactile sensing, passive magnetic tracking, and medical image reconstruction and understanding. These technologies have been applied in medical settings such as retinal surgery, nasogastric intubation, bronchoscopic intervention, and automatic ultrasound scanning.
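As an illustration of the principle behind passive magnetic tracking, the sketch below localizes a permanent magnet from an array of field readings by least-squares fitting of the standard point-dipole model. The sensor layout, magnet moment, and noise level are invented for the example.

```python
import numpy as np
from scipy.optimize import least_squares

MU0_4PI = 1e-7  # mu_0 / (4*pi), SI units

def dipole_field(p, m, r_sensor):
    """Field of a point dipole with moment m at position p, seen at r_sensor."""
    r = r_sensor - p
    d = np.linalg.norm(r)
    r_hat = r / d
    return MU0_4PI * (3 * np.dot(m, r_hat) * r_hat - m) / d**3

def residuals(x, sensors, readings):
    p, m = x[:3], x[3:]
    pred = np.array([dipole_field(p, m, s) for s in sensors])
    return (pred - readings).ravel()

# Hypothetical 8-sensor planar array (meters), magnet somewhere above it.
sensors = np.array([[i * 0.05, j * 0.05, 0.0] for i in range(4) for j in range(2)])
true_p, true_m = np.array([0.07, 0.03, 0.08]), np.array([0.0, 0.0, 0.5])
readings = np.array([dipole_field(true_p, true_m, s) for s in sensors])
readings += np.random.normal(0, 1e-9, readings.shape)  # assumed sensor noise

x0 = np.concatenate([[0.05, 0.05, 0.05], [0.0, 0.0, 0.1]])
sol = least_squares(residuals, x0, args=(sensors, readings))
print("estimated magnet position:", sol.x[:3])
```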
Human-Robot Interaction
As robotic technology becomes increasingly integrated into medical treatment and surgical procedures, human-robot interaction largely determines how intuitive these robots are to use. Our lab has developed algorithms that enhance scene understanding through contrastive learning and image captioning, and that fuse multi-modal signals (e.g., gesture, EEG, and pulse) for human-robot coordination and interaction.
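One common fusion scheme in this setting is late fusion: each modality is classified independently, and the per-modality class probabilities are combined. The sketch below shows confidence-weighted averaging over three modalities; the weights and the four-command example are placeholders, not our deployed models.

```python
import numpy as np

def fuse_predictions(probs_by_modality, weights):
    """Weighted average of per-modality class-probability vectors."""
    fused = sum(w * p for w, p in zip(weights, probs_by_modality))
    return fused / sum(weights)

# Example: three modalities voting over 4 interaction commands.
gesture = np.array([0.70, 0.10, 0.10, 0.10])
eeg     = np.array([0.40, 0.35, 0.15, 0.10])
pulse   = np.array([0.25, 0.25, 0.25, 0.25])  # uninformative this frame

fused = fuse_predictions([gesture, eeg, pulse], weights=[0.5, 0.3, 0.2])
print("fused command:", int(np.argmax(fused)), fused)
```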
Embodied AI (Autonomous Systems)
Our lab also has a strong interest in autonomous systems, often referred to today as embodied AI. We developed an automated pick-and-place mobile robot that earned a Merit Award at the JDX Challenge 2018. Our work also includes a Real2Sim2Real framework for adaptive control of continuum robots. In addition, we have pioneered autonomous ultrasound scanning techniques and tested them successfully on human volunteers.
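The sketch below reduces the Real2Sim2Real idea to a one-dimensional toy, purely for illustration: (1) Real2Sim, fit a simulation parameter so the simulator matches logged robot responses; (2) Sim2Real, derive a controller in the calibrated simulator and send its commands back to hardware. The saturating bending model and the logged data are invented stand-ins for a full continuum-robot pipeline.

```python
import numpy as np

def sim_response(k, u):
    """Toy simulator: tip angle as a saturating function of tendon input u."""
    return k * np.tanh(u)

# (1) Real2Sim: closed-form least-squares fit of the bending gain k
# from logged (input, tip angle) pairs gathered on the real robot.
u_log = np.linspace(0, 2, 20)
theta_log = 0.8 * np.tanh(u_log) + np.random.normal(0, 0.01, 20)  # "real" data
k_fit = np.sum(theta_log * np.tanh(u_log)) / np.sum(np.tanh(u_log) ** 2)

# (2) Sim2Real: invert the calibrated simulator to get feedforward commands,
# which would then be executed (and refined) on the physical robot.
def controller(theta_des, k=k_fit):
    ratio = np.clip(theta_des / k, -0.999, 0.999)  # stay in arctanh's domain
    return np.arctanh(ratio)

print("fitted k:", k_fit, " command for 0.5 rad:", controller(0.5))
```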
Most recently, we collaborated with Prof. Xiaoqiang Ji to integrate large language models (LLMs) into multi-robot task coordination and planning.
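At a schematic level, using an LLM for multi-robot coordination can look like the sketch below: describe the team and the goal, ask for a machine-readable allocation of subtasks, and dispatch the result. Here `query_llm` and `dispatch` are hypothetical placeholders for an LLM endpoint and a robot middleware, and the robots and task are invented for the example; this is not our specific planning pipeline.

```python
import json

def query_llm(prompt: str) -> str:
    """Placeholder: wire this to the LLM provider of your choice."""
    raise NotImplementedError

def plan_tasks(robots, goal):
    """Ask the LLM for a JSON mapping from robot name to subtask list."""
    prompt = (
        "You coordinate a robot team. Robots and their skills:\n"
        + "\n".join(f"- {name}: {skills}" for name, skills in robots.items())
        + f"\nGoal: {goal}\n"
        'Reply with JSON only: {"robot_name": ["subtask", ...], ...}'
    )
    return json.loads(query_llm(prompt))

robots = {"ugv_1": "navigate, carry payloads", "arm_1": "pick, place, screw"}
# plan = plan_tasks(robots, "fetch the sample tray and load it into the scanner")
# for name, subtasks in plan.items():
#     dispatch(name, subtasks)  # dispatch() is a hypothetical executor
```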