Data-Driven Control via Deep Koopman MPC
We develop data-driven control strategies for complex nonlinear systems based on Deep Koopman operator theory. Deep neural networks lift the nonlinear dynamics into a latent space in which they evolve approximately linearly, so efficient linear Model Predictive Control (MPC) can be applied to systems that would otherwise be intractable. This enables real-time optimal control and trajectory planning of complex robotic manipulators from purely data-driven models, bridging machine learning with rigorous optimal control.
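The core idea can be sketched in a few lines. The example below is a minimal illustration, not our actual pipeline: the lifting function stands in for a trained encoder network, and the latent matrices A and B are hypothetical placeholders for a model identified from data. Given those, the MPC step reduces to an unconstrained linear-quadratic problem solved by batch least squares.

```python
import numpy as np

def lift(x):
    """Hypothetical lifting map (stand-in for a trained deep encoder):
    a fixed nonlinear feature vector of the 2-D state."""
    return np.array([x[0], x[1], np.sin(x[0]), x[0] * x[1]])

# Placeholder latent-space linear model z_{k+1} = A z_k + B u_k
# (in practice, A and B are fit jointly with the encoder from data).
A = np.array([[1.0, 0.1, 0.0,  0.0],
              [0.0, 0.9, 0.05, 0.0],
              [0.0, 0.0, 0.8,  0.0],
              [0.0, 0.0, 0.0,  0.7]])
B = np.array([[0.0], [0.1], [0.0], [0.05]])

def koopman_mpc(z0, horizon=10, q=1.0, r=0.01):
    """Unconstrained linear MPC in the lifted space.

    Stacks the prediction z = Sx @ z0 + Su @ u over the horizon and
    minimizes sum_k q*||z_k||^2 + r*||u_k||^2, which is a regularized
    least-squares problem in the input sequence u.
    """
    n, m = B.shape
    # Free response Sx and forced response Su of the latent linear model.
    Sx = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(horizon)])
    Su = np.zeros((n * horizon, m * horizon))
    for k in range(horizon):
        for j in range(k + 1):
            Su[k*n:(k+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, k - j) @ B
    # Normal equations of the quadratic cost: (q Su'Su + r I) u = -q Su' Sx z0
    H = q * Su.T @ Su + r * np.eye(m * horizon)
    g = q * Su.T @ Sx @ z0
    u = np.linalg.solve(H, -g)
    return u[:m]  # receding horizon: apply only the first input

z0 = lift(np.array([0.5, -0.2]))
u0 = koopman_mpc(z0)
```

Because the dynamics are linear in the lifted coordinates, adding input or state constraints turns this into a standard quadratic program that off-the-shelf QP solvers handle at real-time rates.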
Reinforcement Learning-based Robust Control
Our research bridges modern machine learning and classical robust control. We develop Reinforcement Learning (RL) based control algorithms for uncertain nonlinear systems. By combining RL agents with state estimators such as Extended State Observers (ESO), our hybrid controllers learn optimal policies autonomously while guaranteeing robustness against unmodeled dynamics and external disturbances. This makes the resulting AI-driven control policies safe, reliable, and deployable on real-world physical systems.
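The structure of such a hybrid controller can be illustrated with a toy double integrator. The sketch below is illustrative only: the `policy` function is a simple PD law standing in for a trained RL policy, and the observer gains are a hypothetical bandwidth-parameterized choice. The ESO treats the unknown disturbance as an extended state, and the controller subtracts the disturbance estimate so the learned feedback acts on a (nearly) disturbance-free plant.

```python
import numpy as np

def eso_step(z, y, u, dt, betas=(30.0, 300.0, 1000.0)):
    """One Euler step of a third-order Extended State Observer for the
    double integrator x1' = x2, x2' = u + d. The lumped disturbance d is
    modeled as an extended state z[2] and estimated from the output y.
    The gains place all observer poles at -10 (bandwidth parameterization,
    an assumed tuning for this example)."""
    b1, b2, b3 = betas
    e = y - z[0]  # output estimation error drives all three states
    dz = np.array([z[1] + b1 * e,
                   z[2] + u + b2 * e,
                   b3 * e])
    return z + dt * dz

def policy(z_hat, target):
    """Stand-in for a trained RL policy (here a fixed PD law on the
    observer state). The key structural element is the -z_hat[2] term,
    which cancels the estimated disturbance before the learned feedback."""
    kp, kd = 4.0, 3.0
    return kp * (target - z_hat[0]) - kd * z_hat[1] - z_hat[2]

# Closed-loop simulation against an unknown constant disturbance d = 2.0.
dt, d, target = 0.001, 2.0, 1.0
x = np.array([0.0, 0.0])   # true plant state [position, velocity]
z = np.zeros(3)            # observer state [x1_hat, x2_hat, d_hat]
for _ in range(10000):     # 10 seconds of simulated time
    u = policy(z, target)
    z = eso_step(z, x[0], u, dt)
    x = x + dt * np.array([x[1], u + d])  # true plant, disturbance included
```

After the transient, the disturbance estimate converges to the true value and the position settles on the target despite the controller never being told that d exists; in the full framework the PD stand-in is replaced by an RL policy trained on the observer state.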