AIM 2025 Workshop
Data-Enabled Learning Control for Intelligent Mechatronics and Robotics
Assistant Professor, Eastern Institute of Technology, Ningbo
Professor, Southeast University
Professor, Zhejiang University
Assistant Professor, The Hong Kong University of Science and Technology, China
Associate Professor, New York University
Professor, Tsinghua University
Associate Professor, Westlake University
Professor, The University of Texas at Austin
9:00am – 9:10am: Xiaocong Li — Opening Remarks
9:10am – 9:40am: Hao Su — Bridging the Sim-to-Real Gap for Autonomous Control of Exoskeletons via Learning-in-Simulation and High-Torque Motors
Can we design wearable robots for everyone and everywhere? This talk introduces a new design paradigm that leverages custom high-torque-density motors to electrify robotic actuation. This allows our wearable robots to achieve exceptional performance, including ultra-compact and lightweight exoskeletons/exosuits, along with high compliance and bandwidth in human-robot interactions. The presentation will also cover a data-driven, physics-informed reinforcement learning framework that accelerates control policy development in simulation, significantly reducing wearable robot development time. Our learning-in-simulation controllers bridge the sim-to-real gap and reduce energy consumption during activities like walking, running, and stair climbing, leading to significant energy savings for users. Additionally, our advances in bionic limbs enhance mobility and manipulation for individuals with musculoskeletal and neurological injuries. We envision these innovations sparking a paradigm shift in wearable robotics, transforming them from lab-bound rehabilitation devices to ubiquitous personal robots for everyone, everywhere, in applications such as workplace injury prevention, pediatric and elderly rehabilitation, home care, and sports.
9:40am – 10:10am: Chuxiong Hu — Intelligent Learning-Based Ultraprecision Mechatronic Motion Systems: An Innovative GRU-RIC Feedforward Control Approach
To meet the progressively stringent demands for motion control performance in precision and ultraprecision manufacturing industries, intelligent learning-based feedforward control has gained significant attention. This presentation introduces a novel GRU-RIC control approach that synergizes gated recurrent unit (GRU) and real-time iterative compensation (RIC) techniques. For nanometer-level accuracy requirements over large motion strokes, a GRU neural network is developed to achieve precise error prediction and compensation through offline learning, while an RIC strategy dynamically suppresses residual errors caused by imperfect offline prediction and external disturbances via online learning. Experimental results on an ultraprecision motion stage demonstrate nanometer-level accuracy over a 100-mm motion stroke, matching the performance of well-established iterative learning control while offering superior robustness to trajectory variations and external disturbances. These advantages highlight the potential of the proposed method for advanced ultraprecision industries such as IC fabrication.
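To illustrate the two-layer structure the abstract describes (an offline-learned feedforward term plus an online iterative correction of the residual), here is a minimal toy sketch. It is not the speakers' GRU-RIC implementation: the "offline prediction" is a stand-in linear model rather than a trained GRU, the plant is a one-line toy with a constant disturbance, and the gain value is assumed for illustration.

```python
# Toy sketch of the offline-feedforward + online-iterative-compensation idea.
# All models and constants here are illustrative assumptions.

def offline_prediction(reference):
    """Stand-in for the offline-trained GRU predictor (assumed linear)."""
    return 0.8 * reference

def plant(command, disturbance):
    """Toy plant: output scales the command and sees an additive disturbance."""
    return 0.9 * command + disturbance

reference = 1.0
disturbance = 0.05
learning_rate = 0.5                           # iterative-compensation gain (assumed)
compensation = offline_prediction(reference)  # offline feedforward term

for cycle in range(50):
    output = plant(reference + compensation, disturbance)
    residual = reference - output              # remaining tracking error
    compensation += learning_rate * residual   # online iterative refinement

final_error = abs(reference - plant(reference + compensation, disturbance))
print(final_error)  # shrinks toward zero as the compensation converges
```

The point of the sketch is the division of labor: the offline model supplies most of the compensation immediately, and the online update cleans up whatever the offline prediction and the disturbance leave behind, cycle by cycle.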
10:10am – 10:40am: Coffee break and networking
10:40am – 11:10am: Shiyu Zhao — Advancing Multi-Robot Systems: Generative Approaches with Large Language Models and Multi-Agent Reinforcement Learning
Although multi-robot systems have been studied for decades, it is still challenging for them to handle new situations or multiple tasks in open environments. This talk will introduce our latest research results on generative multi-robot systems based on large language models and multi-agent reinforcement learning. The proposed methods can greatly reduce development cost and significantly improve the intelligence of the system.
11:10am – 11:40am: Peter Stone — Human-in-the-Loop Machine Learning for Robot Navigation and Manipulation
While there have been huge advances in Machine Learning in recent years, many of the successes have relied on immense amounts of training data. Especially for sequential decision-making tasks (the realm of reinforcement learning), obtaining such data from online experience can take a very long time. On the other hand, learning can often be dramatically accelerated by leveraging human input, for example as demonstrations of successful task executions, as interventions to correct mistakes, or simply as evaluative feedback separating "correct" actions from "incorrect" ones. This talk focuses on such Human-in-the-Loop Machine Learning for robotics tasks, covering both navigation, especially in tightly constrained spaces, and manipulation in open-world settings.
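One of the human-input channels the abstract mentions, evaluative feedback, can be illustrated with a minimal sketch: an agent updates action-value estimates from scalar human judgments instead of environment reward. This is my illustration of the general idea, not the speaker's method; the simulated trainer, the action space, and all constants are assumptions.

```python
import numpy as np

# Minimal sketch of learning from evaluative human feedback (assumptions
# throughout): the agent never sees an environment reward, only a scalar
# "correct/incorrect" signal from a (here simulated) human trainer.

rng = np.random.default_rng(0)
n_actions = 4
values = np.zeros(n_actions)  # estimated human approval per action
alpha = 0.2                   # learning rate (assumed)
epsilon = 0.1                 # exploration rate (assumed)

def human_feedback(action):
    """Simulated trainer: +1 for the intended action (index 2), -1 otherwise."""
    return 1.0 if action == 2 else -1.0

for step in range(200):
    # Epsilon-greedy choice over the current approval estimates.
    if rng.random() < epsilon:
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(values))
    # Move the estimate toward the trainer's evaluative signal.
    values[action] += alpha * (human_feedback(action) - values[action])

print(int(np.argmax(values)))  # the trainer-preferred action
```

Because the update target is the human signal rather than a delayed task reward, the agent converges to the trainer's preference in a handful of interactions, which is the acceleration the abstract points to.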