The groundbreaking capabilities of generative AI (GenAI) open a trove of opportunities for robotics, enabling general-purpose skill acquisition and contextual reasoning for operations more intelligent and reliable than traditional approaches allow. However, GenAI also introduces additional safety and security challenges for real-world robot autonomy (e.g., handling uncertainties and deliberate adversaries). These GenAI-exacerbated safety and security issues permeate every aspect of robot autonomy, from perception to understanding, reasoning, and action.
This half-day workshop brings together researchers from academia and industry, drawing on expertise in cybersecurity, robot perception, learning, control, explainable AI, and evaluation for real-world robotics, to foster an interdisciplinary exchange on reliable robot autonomy around two connected themes: (i) reliable robot autonomy enhanced by large-scale models and (ii) safety and security challenges that GenAI introduces in robotics. By uniting diverse perspectives, the workshop aims to drive discussion on pressing challenges at the intersection of GenAI, safety, and security in robotics while promoting collaboration across disciplines.
Safe & secure deployment of language models for robots
Can LLMs be trusted to enforce the safety of robots?
How can we defend against automated attacks driven by LLMs?
What tools are needed to reliably test LLM-based robotic solutions?
Use of generative AI to address safety concerns
How can language models be used to verify safe robot behavior or enable safe behavior specification?
How can generative models be used effectively for safer sim-to-real transfers?
How can we best leverage foundation models to build environment representations that would facilitate downstream reasoning and safe operation?
Leveraging generative AI to recover from off-nominal conditions
Can the multimodal capabilities of GenAI improve the resilience of robots to cyber faults and real-world failures?
How can foundation models compensate for uncertainty and disturbances?
Can GenAI be leveraged to enhance system redundancy and resilience?
Combating advanced adversarial attacks on robotic systems
What is the ideal integration of safety and security measures and methods?
What measures and benchmarks are required to evaluate attacker-robust algorithms?
What security vulnerabilities to physical and cyber-attacks are introduced by incorporating generative AI in robots?
We invite researchers to share their findings with the community by submitting short, non-archival papers (4 pages + references). Accepted submissions will be presented in person through a short spotlight talk and a poster session at the workshop.
Submission Deadline: May 30, 2025, 23:59 (AOE)
Reviews and Decisions: June 6, 2025, 23:59 (AOE)
Workshop date: June 21, 2025, at USC, Los Angeles
Both recently published papers and novel research are welcome
Single-blind review, RSS paper format
Submission link: OpenReview
Contributed papers will be made available on this website. However, this does not constitute an archival publication, and no formal workshop proceedings will be published, so contributors remain free to submit their work to archival journals or conferences.
The Best Paper Award and Most Popular Poster Award will be presented during the closing remarks at the workshop.
Haruki Nishimura
Toyota Research Institute
Gentiane Venture
University of Tokyo
Igor Gilitschenski
University of Toronto
Saadia Gabriel
UCLA
08:45 - 09:00 - Welcome and Opening Remarks
09:00 - 09:25 - "Towards Understanding Performance Fluctuation of Generative Imitation Policies: A Deployment-Centric Approach" (H. Nishimura)
09:25 - 09:50 - "Philosophical reflection on living with robots" (G. Venture)
09:50 - 10:10 - Lightning Talks
10:10 - 10:55 - Coffee Break and Poster Session
10:55 - 11:20 - "Simulating Emergent LLM Social Behaviors in Multi-agent Systems" (S. Gabriel)
11:20 - 11:45 - "Do Androids Dream of Electric Sheep? A Generative Paradigm for Dataset Design" (I. Gilitschenski)
11:45 - 12:15 - Panel Discussion with Speakers
12:15 - 12:30 - Awards and Closing Remarks
Jean-Baptiste Bouvier
UC Berkeley
Fanta Camara
University of York
Glen Chou
Georgia Tech
Siqi Zhou
TU Munich