KEYNOTE SPEAKERS
Andrea Bajcsy
Andrea Bajcsy is an Assistant Professor in the Robotics Institute at Carnegie Mellon University. She received her doctoral degree in Electrical Engineering & Computer Science from UC Berkeley. She works at the intersection of robotics, machine learning, and human-AI interaction. Her research develops theoretical frameworks and practical algorithms for autonomous robots to safely interact with people, in applications such as personal robotic manipulators, quadrotors, and autonomous vehicles. Her work is funded by the NSF and has been featured in NBC News, WIRED magazine, and the Robohub podcast. She is the recipient of the NSF Graduate Research Fellowship, the UC Berkeley Chancellor's Fellowship, and an Honorable Mention for the T-RO Best Paper Award, and has worked at NVIDIA Research and the Max Planck Institute for Intelligent Systems.
Keynote title: Towards Human-AI Safety: Unifying Generative AI and Control Systems Safety
Abstract: As generative artificial intelligence (AI) is embedded into more autonomy pipelines, from behavior predictors to language models, it is enabling robots to interact with people at an unprecedented scale. On one hand, these models offer a surprisingly general understanding of the world; on the other hand, integrating them safely into human-robot interactions remains a challenge. In this talk, I argue there is a high-value window of opportunity to combine the growing capabilities of generative AI with the robust, interaction-aware dynamical safety frameworks from control theory. This synergy can unlock a new generation of human-AI safety mechanisms that can perform systematic risk mitigation at scale.
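To make the proposed synergy concrete, the Python sketch below illustrates one common way the two ingredients can compose: a least-restrictive safety filter that trusts a generative policy unless a control-theoretic safety certificate (e.g., a reachability-style value function) flags the proposed action. All names here (generative_policy, safety_value, dynamics, fallback_controller) are hypothetical placeholders for illustration, not the speaker's actual system.

```python
# Minimal sketch of a control-theoretic safety filter wrapped around a
# generative policy. Assumed (hypothetical) components:
#   generative_policy(state)   -> action proposed by a generative model
#   safety_value(state)        -> approximates a reachability-style safety
#                                 certificate; >= 0 means "still safe"
#   dynamics(state, action)    -> one-step predictive model of the robot
#   fallback_controller(state) -> verified safe backup action

def safety_filtered_action(state, generative_policy, safety_value,
                           dynamics, fallback_controller, margin=0.0):
    """Least-restrictive filtering: defer to the generative model unless
    its proposal would drive the predicted next state below the safety
    threshold, in which case apply the safe fallback instead."""
    proposed = generative_policy(state)
    next_state = dynamics(state, proposed)
    if safety_value(next_state) >= margin:
        return proposed                # generative action certified safe
    return fallback_controller(state)  # control-theoretic safe fallback
```

The design choice worth noting is that the generative model and the safety monitor remain decoupled: the filter only intervenes at the boundary of the certified safe set, which is what lets the two components scale independently.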
Stefanos Nikolaidis
Stefanos Nikolaidis is an Assistant Professor in Computer Science and the Fluor Early Career Chair at the University of Southern California, where he leads the Interactive and Collaborative Autonomous Robotics Systems (ICAROS) lab. His research draws on expertise in artificial intelligence, procedural content generation, and quality diversity optimization to develop end-to-end solutions that enable deployed robotic systems to act robustly when interacting with people in practical, real-world applications.
Stefanos completed his PhD at Carnegie Mellon's Robotics Institute and received an MS from MIT, an MEng from the University of Tokyo, and a BS from the National Technical University of Athens. In 2022, Stefanos was the sole recipient of the Agilent Early Career Professor Award for his work on human-robot collaboration, as well as the recipient of an NSF CAREER award for his work on "Enhancing the Robustness of Human-Robot Interactions via Automatic Scenario Generation." His research has also been recognized with best paper awards and nominations from the IEEE/ACM International Conference on Human-Robot Interaction, the Genetic and Evolutionary Computation Conference, the International Conference on Intelligent Robots and Systems, and the International Symposium on Robotics.
Keynote title: Algorithmic Scenario Generation for Robust Human-Robot Interaction
Abstract: The advent of state-of-the-art machine learning models and complex human-robot interaction systems has been accompanied by an increasing need for the efficient generation of diverse and challenging scenarios that test these systems to improve safety and robustness.
In this talk, I will formalize the problem of algorithmic scenario generation and propose a general framework for searching, generating, and evaluating simulated scenarios that result in human-robot interaction failures with significant safety implications. I will first discuss our fundamental advances in quality diversity optimization algorithms that search the continuous, multi-dimensional scenario space. I will then show how integrating quality diversity algorithms with generative models allows the generation of realistic scenarios. Instead of performing expensive evaluations for every single generated scenario in a robotic simulator, I will discuss combining the scenario search with the self-supervised learning of surrogate models that predict human-robot interaction outcomes, facilitating the efficient identification of unsafe conditions. Finally, I will introduce the notion of 'soft archives' for registering the generated scenarios, which significantly improves performance in hard-to-optimize domains. I will show how the proposed framework leads to the discovery of different types of unsafe behaviors and failure modes in collaborative manipulation tasks.
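As a rough illustration of the quality diversity search at the core of this framework, the sketch below implements a minimal MAP-Elites-style loop: it mutates scenarios drawn from an archive organized by behavior descriptors, keeping the most severe failure found in each niche. The 2-D scenario encoding, the evaluate function, and the grid resolution are illustrative assumptions, not the actual system described in the talk.

```python
import random

# Minimal MAP-Elites-style sketch of algorithmic scenario generation.
# Assumptions (hypothetical): scenarios are vectors in [0, 1]^2, and
# evaluate(scenario) returns (failure_severity, behavior_descriptor),
# where the descriptor also lies in [0, 1]^2.

GRID = 10  # archive cells per descriptor dimension

def map_elites_scenarios(evaluate, iterations=1000):
    archive = {}  # cell index -> (severity, scenario)
    for _ in range(iterations):
        if archive:
            # Mutate an elite scenario from a random occupied niche.
            _, parent = random.choice(list(archive.values()))
            scenario = [min(1.0, max(0.0, x + random.gauss(0, 0.1)))
                        for x in parent]
        else:
            scenario = [random.random(), random.random()]
        severity, descriptor = evaluate(scenario)
        cell = tuple(min(GRID - 1, int(d * GRID)) for d in descriptor)
        # Keep the most severe failure discovered in each behavior niche.
        if cell not in archive or severity > archive[cell][0]:
            archive[cell] = (severity, scenario)
    return archive
```

In the spirit of the talk, evaluate could be a full robotic-simulator rollout, or, to avoid that expense, a self-supervised surrogate model that predicts human-robot interaction outcomes, with only promising candidates verified in simulation.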
Missy Cummings
Professor Mary (Missy) Cummings received her B.S. in Mathematics from the US Naval Academy in 1988, her M.S. in Space Systems Engineering from the Naval Postgraduate School in 1994, and her Ph.D. in Systems Engineering from the University of Virginia in 2004. A naval officer and military pilot from 1988-1999, she was one of the U.S. Navy's first female fighter pilots. She is a Professor in the George Mason University College of Engineering and Computing, and directs the Mason Responsible AI program as well as the Mason Autonomy and Robotics Center (MARC). She is a Fellow of the American Institute of Aeronautics and Astronautics (AIAA) and of the Royal Aeronautical Society, and recently served as the senior safety advisor to the National Highway Traffic Safety Administration.
Keynote title: "Deploying AI: Lessons learned from self-driving cars"
Abstract: With the rise of artificial intelligence (AI), the dream of self-driving cars has seemingly become reality, with driverless commercial operations in a handful of cities around the world. However, multiple high-profile self-driving crashes have highlighted problems, both for self-driving cars and for AI in safety-critical systems in general. This talk will address the AI-related issues that have emerged with self-driving cars and what lessons can be learned for all safety-critical systems with embedded AI.
Harold Soh
Harold Soh is an Assistant Professor of Computer Science at the National University of Singapore, where he leads the Collaborative Learning and Adaptive Robots (CLeAR) group. He completed his Ph.D. at Imperial College London, focusing on online learning for assistive robots. Harold's research centers on machine learning, particularly generative AI, and decision-making for trustworthy collaborative robots.
His contributions have been recognized with an R:SS Early Career Spotlight in 2023, best paper awards at IROS'21 and T-AFFC'21, and several nominations (R:SS'18, HRI'18, RecSys'18, IROS'12). Harold has played significant roles in the HRI community, most recently as co-Program Chair of ACM/IEEE HRI'24. He is an Associate Editor for the ACM Transactions on Human-Robot Interaction, IEEE Robotics and Automation Letters (RA-L), and the International Journal of Robotics Research (IJRR).
Keynote title: Guiding Robot Behavior: Constraining Diffusion Models for Safety and Norm Adherence
Abstract: Generative models for robot trajectories risk violating safety constraints and behavioral norms. In this talk, we will discuss two approaches to biasing generated trajectories towards satisfying safety specifications and norms specified at test time. First, we introduce LTLDoG, a diffusion-based framework that generates long-horizon trajectories adhering to constraints expressed in linear temporal logic over finite traces (LTLf). We guide the sampling process with a satisfaction value function to ensure compliance with safety norms. Second, we present a zero-shot, open-vocabulary diffusion policy for robot manipulation. Using Vision-Language Models (VLMs), we transform linguistic task descriptions into actionable 3D keyframes. Our inpainting optimization strategy balances keyframe adherence against the training data distribution, addressing issues of incorrect and out-of-distribution keyframes. These methods are a step towards enhancing the safety and reliability of generative models for robot behavior. We will conclude with a discussion of open topics surrounding these works and potential steps forward.
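To give a flavor of the first idea, value-guided sampling, the sketch below shows a generic classifier-guidance-style reverse diffusion loop in which the gradient of a satisfaction value function biases each denoising step toward spec-compliant trajectories. The denoiser, the differentiable satisfaction_value, and the toy noise schedule are assumptions for illustration and do not reproduce the LTLDoG implementation.

```python
import torch

# Hedged sketch of value-guided diffusion sampling. Assumed (hypothetical)
# components:
#   denoiser(traj, t)        -> predicted noise for trajectory `traj` at step t
#   satisfaction_value(traj) -> differentiable score of LTLf satisfaction
# Shapes and schedules are illustrative only.

def guided_sample(denoiser, satisfaction_value, shape,
                  steps=50, guidance_scale=1.0):
    traj = torch.randn(shape)  # start from pure noise
    for t in reversed(range(steps)):
        traj = traj.detach().requires_grad_(True)
        eps = denoiser(traj, t)
        # The satisfaction gradient nudges the update toward trajectories
        # that comply with the temporal-logic specification.
        grad = torch.autograd.grad(satisfaction_value(traj).sum(), traj)[0]
        alpha = 1.0 - t / steps  # toy noise schedule
        traj = traj - alpha * eps + guidance_scale * grad
        if t > 0:
            traj = traj + (1.0 - alpha) * torch.randn_like(traj)
    return traj.detach()
```

The appeal of this family of methods is that the specification enters only at sampling time, so new safety constraints or norms can be imposed at test time without retraining the underlying generative model.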