Upcoming Talks:
June 4th, 2025, 10:00 AM - 11:00 AM PT
Dr. Yongchao Huang (University of Aberdeen, UK)
Title: Designing Particle-Based Samplers Using Physical Principles
Abstract: We introduce three new particle-based sampling methods inspired by physical principles: Electrostatics-based particle variational inference (EParVI), Smoothed Particle Hydrodynamics-based particle variational inference (SPH-ParVI), and Material Point Method-based particle variational inference (MPM-ParVI). Leveraging electrostatics, fluid dynamics, and continuum mechanics, these methods offer deterministic, flexible solutions for sampling complex, high-dimensional distributions in Bayesian inference and generative modeling. As exemplars of science for AI, these physics-based samplers pave the way for a new category of sampling techniques. The talk covers their mathematical formulations and algorithmic efficiency, and addresses some unresolved challenges.
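For readers unfamiliar with deterministic particle-based sampling, the general idea can be illustrated with Stein variational gradient descent (SVGD), a well-known related method (not one of the three samplers above): particles are moved by a deterministic update that combines an attractive log-density gradient term with a kernel-based repulsive term. A minimal NumPy sketch for a standard normal target, with step size and bandwidth chosen arbitrarily for the example:

```python
import numpy as np

def svgd_step(x, grad_logp, eps=0.1, h=1.0):
    # Pairwise differences and RBF kernel k(x_j, x_i) = exp(-(x_j - x_i)^2 / (2h))
    diff = x[:, None] - x[None, :]
    k = np.exp(-diff**2 / (2 * h))
    grad_k = -diff / h * k  # d/dx_j k(x_j, x_i)
    # phi(x_i) = (1/n) sum_j [ k(x_j, x_i) * grad log p(x_j) + d/dx_j k(x_j, x_i) ]
    phi = (k @ grad_logp(x) + grad_k.sum(axis=0)) / len(x)
    return x + eps * phi

rng = np.random.default_rng(0)
x = rng.normal(3.0, 0.5, size=50)      # particles start far from the target
for _ in range(1000):
    x = svgd_step(x, lambda z: -z)     # target N(0, 1): grad log p(z) = -z
print(round(x.mean(), 2), round(x.std(), 2))  # particles approximate N(0, 1)
```

The gradient term pulls particles toward high-density regions while the kernel term pushes them apart, so the ensemble spreads to cover the target rather than collapsing to the mode.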
April 23rd, 2025, 10:00 AM - 11:00 AM PT
Dr. Mingyu Cai (UC Riverside)
Title: Safe and Logic AI-enabled Autonomy
Abstract: Recent advances in machine learning enable the deployment of autonomous systems in everyday life, such as self-driving vehicles, smart home technologies, and human-assistive robotics. However, machine learning models may generate unexpected outputs that limit their success in practical applications. Real-world robotic deployments increasingly require algorithms to handle wider classes of complex tasks while ensuring that these systems act safely. This raises fundamental new questions about building trustworthy AI-enabled autonomy. Researchers are working toward the next generation of robotics that can connect machine logic with natural language, verify AI-based models, and make autonomous behaviors interpretable.
In this talk, I will introduce techniques from the field of formal methods, namely temporal logics, that complement a learning-enabled robot autonomy stack and lead to safer, more interpretable robot behaviors. First, I will discuss how formal logics can be integrated into motion planning, machine learning, and control strategies to satisfy complex, human-readable specifications. Second, I will demonstrate how formal logics can bring interpretable elements and tools into AI-enabled systems to provide safety-critical guarantees and to verify neural network models. I will close by outlining open challenges and the potential of combining formal logics with machine learning for human-in-the-loop operation, perception, and generalization across robotic platforms, and the exciting directions these entail.
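As a toy illustration of the temporal-logic specifications mentioned above (a generic sketch, not the speaker's framework): a finite robot trajectory can be checked against a reach-avoid specification such as "eventually reach the goal and always avoid the obstacle". The grid cells `goal` and `obstacle` below are made up for the example:

```python
def always(pred, trace):
    # Temporal "globally" (G): pred must hold at every step of the trace
    return all(pred(s) for s in trace)

def eventually(pred, trace):
    # Temporal "finally" (F): pred must hold at some step of the trace
    return any(pred(s) for s in trace)

# Hypothetical trajectory of grid-cell positions
trace = [(0, 0), (1, 0), (2, 1), (3, 3)]
goal, obstacle = (3, 3), (2, 2)

# Reach-avoid specification: F(at goal) AND G(not at obstacle)
spec_holds = eventually(lambda s: s == goal, trace) and \
             always(lambda s: s != obstacle, trace)
print(spec_holds)  # True
```

Full temporal-logic frameworks evaluate such operators over infinite or symbolic executions via automata, but the finite-trace reading above captures the flavor of a human-readable specification.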
Bio: Mingyu Cai is an assistant professor in the Department of Mechanical Engineering at the University of California, Riverside, where he leads the Robotics and Explainable AI Lab (REAL). He is also an advisor to the UCR Robotics Program and a cooperating faculty member in the Department of Electrical and Computer Engineering. His research spans robotics, machine learning, control, and formal methods, aiming to advance applications in autonomous systems, human-robot interaction, and AI security.
Before joining UCR, he was a research scientist at the Honda Research Institute (San Jose office) from 2023 to 2024. From 2021 to 2023, he was a postdoctoral associate in the Autonomous and Intelligent Robotics Laboratory at Lehigh University and a postdoctoral researcher at MIT Lincoln Laboratory. He received his doctorate in mechanical engineering from the University of Iowa in 2020, his M.Sc. in mechanical engineering from the University of Florida, Gainesville, USA, in 2017, and his B.Eng. from the Beijing Institute of Technology, Beijing, China, in 2015.