SCHEDULE
All times are CEST (UTC+2)
Monday, July 15th, 2024, Delft, The Netherlands 🌷
This is a past workshop.
Time Activity
08:45 - 09:00 Welcome by organizers
09:00 - 09:30 Invited talk
Title: Algorithmic Scenario Generation for Robust Human-Robot Interaction
Abstract: The advent of state-of-the-art machine learning models and complex human-robot interaction systems has been accompanied by an increasing need for the efficient generation of diverse and challenging scenarios that test these systems to improve safety and robustness.
In this talk, I will formalize the problem of algorithmic scenario generation and propose a general framework for searching, generating, and evaluating simulated scenarios that result in human-robot interaction failures with significant safety implications. I will first discuss our fundamental advances in quality diversity optimization algorithms that search the continuous, multi-dimensional scenario space. I will then show how integrating quality diversity algorithms with generative models allows the generation of realistic scenarios. Instead of performing expensive evaluations for every single generated scenario in a robotic simulator, I will discuss combining the scenario search with the self-supervised learning of surrogate models that predict human-robot interaction outcomes, facilitating the efficient identification of unsafe conditions. Finally, I will introduce the notion of 'soft archives' for registering the generated scenarios, which significantly improves performance in hard-to-optimize domains. I will show how the proposed framework leads to the discovery of different types of unsafe behaviors and failure modes in collaborative manipulation tasks.
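The archive-based search described above can be illustrated with a toy MAP-Elites-style loop, a standard quality diversity algorithm. Everything below is a hypothetical stand-in: the scenario parameters, failure score, and behavior descriptor are invented for illustration and are not the speaker's system.

```python
import random

# Toy quality-diversity (MAP-Elites-style) loop. The "scenario" is a 2-D
# parameter vector; the failure score and behavior descriptor are stand-ins
# for outcomes that a robotic simulator would normally provide.

def failure_score(s):
    # Hypothetical "how unsafe is this scenario" signal (higher = worse).
    return -(s[0] - 0.7) ** 2 - (s[1] - 0.3) ** 2

def behavior_descriptor(s, bins=10):
    # Discretize one scenario dimension into an archive cell, so the archive
    # retains diverse scenarios rather than a single optimum.
    return min(int(s[0] * bins), bins - 1)

def map_elites(iterations=2000, seed=0):
    rng = random.Random(seed)
    archive = {}  # cell -> (score, scenario)
    for _ in range(iterations):
        if archive and rng.random() < 0.9:
            # Mutate an elite drawn uniformly from the archive.
            _, parent = archive[rng.choice(list(archive))]
            s = [min(1.0, max(0.0, x + rng.gauss(0, 0.1))) for x in parent]
        else:
            # Occasionally sample a fresh random scenario.
            s = [rng.random(), rng.random()]
        cell = behavior_descriptor(s)
        score = failure_score(s)
        # Keep the candidate only if it beats the current elite in its cell.
        if cell not in archive or score > archive[cell][0]:
            archive[cell] = (score, s)
    return archive

archive = map_elites()
print(len(archive))  # number of distinct behavior cells filled
```

The key difference from plain optimization is visible in the archive: the loop returns one elite per behavior cell, i.e. a diverse set of high-scoring (here, maximally unsafe) scenarios rather than a single worst case.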
09:30 - 10:00 Networking Activity
10:00 - 10:30 Coffee break
10:30 - 11:00 Invited talk
Title: Towards Human–AI Safety: Unifying Generative AI and Control Systems Safety
Abstract: As generative artificial intelligence (AI) is embedded into more autonomy pipelines—from behavior predictors to language models—it is enabling robots to interact with people at an unprecedented scale. On one hand, these models offer a surprisingly general understanding of the world; on the other hand, integrating them safely into human-robot interactions remains a challenge. In this talk, I argue there is a high-value window of opportunity to combine the growing capabilities of generative AI with the robust, interaction-aware dynamical safety frameworks from control theory. This synergy can unlock a new generation of human–AI safety mechanisms that can perform systematic risk mitigation at scale.
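As a heavily simplified illustration of the control-theoretic side, here is a toy safety filter in the style of a control barrier function on a 1-D single integrator. The dynamics, constraint, and gains are all assumptions chosen for illustration; this is not the speaker's framework.

```python
# Toy safety-filter sketch: a CBF-style least-restrictive filter on a 1-D
# single integrator. Dynamics: x' = u. Safe set: h(x) = x - x_min >= 0.
# CBF condition: u >= -alpha * h(x), i.e. h may decay no faster than alpha*h.

def safe_control(x, u_nominal, alpha=1.0, x_min=1.0):
    h = x - x_min
    u_min = -alpha * h            # most aggressive control still certified safe
    return max(u_nominal, u_min)  # closed-form "QP": minimally modify the command

# Simulate: the nominal policy drives toward 0 (unsafe); the filter intervenes.
x, dt = 3.0, 0.01
for _ in range(1000):
    u = safe_control(x, u_nominal=-2.0)
    x += dt * u
print(round(x, 3))  # x settles just above the safe boundary x_min = 1.0
```

The filter leaves the nominal command untouched whenever it is already safe, which is the least-restrictive behavior that makes such filters attractive as wrappers around learned or generative policies.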
11:00 - 12:30 Lightning talks + poster session
12:30 - 14:00 Lunch break
14:00 - 15:00 Debate panel discussion
Discussion Topic: TBD
Moderator: David Abbink
Our expert panelists are:
Leila Takayama (Hoku Labs)
Maria Luce Lupetti (Politecnico di Torino)
Morteza Lahijanian (University of Colorado Boulder)
Andrea Bajcsy (Carnegie Mellon University)
Malte Jung (Cornell University)
Tariq Iqbal (University of Virginia)
15:00 - 15:30 Invited talk
Title: Deploying AI: Lessons learned from self-driving cars
Abstract: With the rise of artificial intelligence (AI), the dream of self-driving cars has seemingly become reality, with driverless commercial operations in a handful of cities around the world. However, multiple high-profile self-driving crashes have highlighted problems, both for self-driving cars and for AI in safety-critical systems in general. This talk will address the AI-related issues that have emerged with self-driving cars and what lessons can be learned for all safety-critical systems with embedded AI.
15:30 - 16:00 Coffee break
16:00 - 16:30 Invited talk
Title: Guiding Robot Behavior: Constraining Diffusion Models for Safety and Norm Adherence
Abstract: Generative models for robot trajectories often risk violating safety and normative behaviors. In this talk, we will discuss two approaches to bias generated trajectories towards satisfying safety specifications and norms specified at test time. First, we introduce LTLDoG, a diffusion-based framework that generates long-horizon trajectories adhering to constraints defined by finite linear temporal logic (LTLf). We guide the sampling process with a satisfaction value function to ensure compliance with safety norms. Second, we present a zero-shot, open-vocabulary diffusion policy for robot manipulation. Using Vision-Language Models (VLMs), we transform linguistic task descriptions into actionable 3D keyframes. Our inpainting optimization strategy balances keyframe adherence with the training data distribution, addressing issues of incorrect and out-of-distribution keyframes. These methods are a step towards enhancing the safety and reliability of generative models for robot behavior. We will conclude with a discussion on open topics surrounding these works and potential steps forward.
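The test-time guidance idea (steering the sampling process with a value function) can be sketched in miniature. The snippet below is a toy scalar analogue of value-guided iterative sampling, with an invented constraint and penalty gradient; it is not the LTLDoG implementation.

```python
import random

# Toy illustration of guidance during iterative denoising, in the spirit of
# classifier/value guidance in diffusion models. A scalar "trajectory endpoint"
# is denoised toward the prior mean while a hypothetical satisfaction function
# pushes samples into an assumed safe set x >= 0.5.

def satisfaction_grad(x, margin=0.5):
    # Gradient of a soft penalty for violating the constraint x >= margin.
    return 2.0 * (margin - x) if x < margin else 0.0

def guided_sample(steps=50, guidance=0.5, seed=1):
    rng = random.Random(seed)
    x = rng.gauss(0.0, 1.0)                    # start from pure noise
    for t in range(steps, 0, -1):
        x = x + (0.0 - x) / t                  # crude denoising drift to mean 0
        x += guidance * satisfaction_grad(x)   # guidance toward the safe set
        x += rng.gauss(0.0, 0.05)              # residual noise at this step
    return x

samples = [guided_sample(seed=s) for s in range(20)]
print(sum(x >= 0.4 for x in samples))  # most samples land near the safe set
```

The point of the sketch is that the constraint never appears in the (implicit) training data: it only biases the sampling trajectory at test time, which is what allows specifications to be supplied after training.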
16:30 - 16:50 Group discussions
16:50 - 17:00 Closing remarks by organizers