We are happy to announce the following confirmed keynote speakers. More information on the talks will be posted shortly.
Jitendra Malik (University of California at Berkeley)
Giancarlo Ferrari-Trecate (École Polytechnique Fédérale de Lausanne)
Yuanyuan Shi (University of California at San Diego)
Alessandro Abate (Oxford University)
University of California at Berkeley
Robot Learning, with Inspiration from Child Development
For intelligent robots to become ubiquitous, we need to “solve” locomotion, navigation and manipulation at sufficient reliability in widely varying environments. In locomotion, we now have demonstrations of humanoid walking in a variety of challenging environments. In navigation, we pursued the task of “Go to Any Thing” – a robot, on entering a newly rented Airbnb, should be able to find objects such as TV sets or potted plants. The biggest challenges in robotics today lie in manipulation, particularly in dexterous manipulation with multi-fingered hands. Learning approaches have been responsible for recent advances, but they are held up by the lack of “big data” at the scale available in language and vision. I argue that this shortage can be circumvented by taking inspiration from how humans acquire motor skills in childhood. For dexterous manipulation, multimodal perception is key – vision, touch and proprioception. In my view, visual imitation should be based on 3D/4D reconstruction – then a physics simulator provides a pre-trained world model. The core technology for reconstruction of human bodies, hands, and objects now exists with systems like HMR, HaMeR and SAM 3D. Visual imitation, while essential, is not sufficient, as policies need to consider contact forces as well. RL in simulation and sim-to-real have been workhorse technologies for us, assisted by a few technical innovations. I will sketch promising directions for future work.
Bio: Jitendra Malik is the Arthur J. Chick Professor in the Department of Electrical Engineering and Computer Sciences at the University of California at Berkeley, where he also holds appointments in vision science, cognitive science and bioengineering. He received his PhD in Computer Science from Stanford University in 1985, following which he joined UC Berkeley as a faculty member. He served as Chair of the Computer Science Division during 2002-2006, and of the Department of EECS during 2004-2006.
Jitendra's group has worked on computer vision, computational modeling of biological vision, computer graphics and machine learning. Several well-known concepts and algorithms arose in this work, such as anisotropic diffusion, normalized cuts, high dynamic range imaging, shape contexts and R-CNN. His publications have received numerous best paper awards, including five test-of-time awards: the Longuet-Higgins Prize for papers published at CVPR (twice) and the Helmholtz Prize for papers published at ICCV (three times). He received the 2013 IEEE PAMI-TC Distinguished Researcher in Computer Vision Award, the 2014 K.S. Fu Prize from the International Association of Pattern Recognition, the 2016 ACM-AAAI Allen Newell Award, the 2018 IJCAI Award for Research Excellence in AI, and the 2019 IEEE Computer Society Computer Pioneer Award.
Jitendra Malik is a Fellow of the IEEE, ACM, and the American Academy of Arts and Sciences, and a member of the National Academy of Sciences and the National Academy of Engineering.
École Polytechnique Fédérale de Lausanne
Performance Boosting for Nonlinear Systems via Stability-Preserving Neural Architectures
Neural network (NN)-based control architectures have driven impressive advances in complex, nonlinear systems, ranging from dexterous robotics to decentralized energy and multi-agent systems. Yet, a core challenge remains: how to fully exploit the expressive power of NN controllers while maintaining rigorous guarantees on closed-loop stability, robustness, and safety.
This talk addresses this challenge by introducing a control framework that enhances performance without compromising stability. We present a complete characterization of stability-preserving policies where the design flexibility is entirely encapsulated within a stable operator. This operator can be represented by broad classes of freely parametrized deep NNs, enabling control design through unconstrained gradient descent or policy distribution sampling.
The proposed framework allows for a rigorous separation of concerns in design: a simple suboptimal regulator ensures stability, while an outer loop minimizes a general cost function to optimize performance and safety. This approach naturally extends to a range of settings, including tracking and distributed coordination, while seamlessly incorporating context awareness.
Applications in cooperative robotics and electrical engineering will be discussed to showcase the achievement of complex, high-performance dynamics in realistic scenarios.
Bio: Giancarlo Ferrari Trecate received a Ph.D. in Electronic and Computer Engineering from the Università degli Studi di Pavia in 1999. In the fall of 1998, he joined the Automatic Control Laboratory, ETH Zurich, Switzerland, as a Postdoctoral Fellow, and was appointed Oberassistent at ETH in 2000. In 2002, he joined INRIA, Rocquencourt, France, as a Research Fellow. From March to October 2005, he was a researcher at the Politecnico di Milano, Italy. From 2005 to August 2016, he was Associate Professor at the Dipartimento di Ingegneria Industriale e dell'Informazione of the Università degli Studi di Pavia. Since September 2016, he has been a Professor at EPFL, Lausanne, Switzerland.
His research interests encompass neural network control, machine learning, microgrids, networked control systems, and hybrid systems.
Giancarlo Ferrari Trecate is the founder and current chair of the Swiss chapter of the IEEE Control Systems Society. He is Senior Editor of the IEEE Transactions on Control Systems Technology and has served on the editorial boards of Automatica and Nonlinear Analysis: Hybrid Systems.
University of California at San Diego
Neural Operator Learning for Control
In this talk, we present a novel set of tools and methodologies for physics-informed Neural Operator Control (NOC) of ODE- and PDE-governed systems. Specifically, we will present NOC for predictor feedback in nonlinear delay systems, NOC for PDE backstepping control, and NOC for optimal motion planning.
The first part of the talk is about NOC for predictor feedback in nonlinear delay systems. Predictor feedback is effective for delay compensation, yet a critical challenge lies in the efficient computation of the predictor operator. We introduce NOC for approximating the nonlinear predictor mapping and prove semiglobal practical stability (dependent on the learning error) of the proposed NOC predictor feedback via a backstepping transformation.
The second part of the talk is about NOC for PDE control. Model-based methods such as PDE backstepping offer provable guarantees for control but are often computationally prohibitive for real-time implementation. We propose a NOC framework that approximates the mapping from functional coefficients to control gains with desired accuracy. Using neural operator-approximated backstepping gains, we show that our method can accelerate PDE control by up to three orders of magnitude while retaining stability guarantees.
If time permits, we will share our recent advances in designing physics-informed NOC for generalizable motion planning in highly dynamic environments. We propose to encode the obstacle geometries as cost functions and to produce fast approximations of the value function for motion planning, which is governed by the Eikonal PDE. We will conclude the talk with challenges and opportunities in operator learning for control and planning.
Bio: Yuanyuan Shi is an Assistant Professor in the Department of Electrical and Computer Engineering at the University of California San Diego. Her research focuses on machine learning, dynamical systems, and control, with applications to sustainable power and energy systems. She received her B.S. in Automation Engineering from Nanjing University, China, and her Ph.D. in Electrical and Computer Engineering from the University of Washington, Seattle, WA in 2020. From 2020 to 2021, she was a Postdoctoral Scholar at the California Institute of Technology. Her research has been recognized with several awards, including the inaugural Michael R. Anastasio LANL-UC Early Career Faculty Scholar award, the NSF CAREER Award, the Schmidt Sciences AI2050 Early Career Fellowship, the Hellman Fellowship, and Best Paper Finalist recognitions at L4DC 2025 and ACM e-Energy 2022.
Oxford University
Model-Based and Sample-Driven Synthesis with Logic and Probability
We are witnessing an interdisciplinary convergence between scientific areas underpinned by model-based reasoning and by sample-driven learning. In this talk, I will discuss general synthesis problems around complex temporal objectives: the synthesis of proofs for verification, of controllers/policies for sequential decision making (e.g., in RL), and of models and their abstractions.
I shall describe how techniques from formal verification (logic, SAT, automata theory) and from learning (neural architectures, sample-driven approaches and probabilistic guarantees) can be leveraged together and integrated to attain synthesis outcomes that are both sound and effective.
I will showcase how this general framework is applicable for the formal verification and control of complex dynamical models and of reactive software programs.
Bio: Alessandro Abate is Professor of Verification and Control in the Department of Computer Science at the University of Oxford. Earlier, he did research at Stanford University and at SRI International, and was an Assistant Professor at TU Delft. He received PhD and MSc degrees from UC Berkeley and the University of Padua, respectively. His research spans logic, probability, control and decision theory.