Rethinking What It Means to be "Safe" for Generalist Robots
Monday, July 13 2026 (Morning)
Workshop Overview
The robotics community has long studied how to ensure the safety of robotic systems in specialized contexts. However, as modern robot “generalists” begin to operate across diverse tasks and environments, a more fundamental question arises: what does it actually mean for a robot to be “safe”? Consider a robot helping you prepare a meal. This robot should understand that breaking an egg is unacceptable while grocery shopping, but becomes necessary when cooking an omelette. Now imagine the robot moving a knife across the counter near your hand while cooking. Even without contact, some users may perceive this behavior as too risky. User preferences and risk tolerance thus become part of the context defining acceptable behavior. As generalist robots increasingly rely on larger amounts of data, both during pre-training and adaptation, additional concerns arise regarding data privacy and security. Although traditional safety mechanisms (e.g., collision avoidance, joint limits) remain important, these examples reveal a deeper limitation: safety cannot be reduced to constraint enforcement without reasoning about context, intent, and human expectations.
Through this workshop, we aim to foster discussion on rethinking how safety for generalist robots should be defined, measured, and incorporated into learning and control systems. To do so, this workshop will bring together (a) industry leaders to share shortcomings of existing safety definitions that become apparent only through large-scale training and deployment, (b) academic experts from disciplines ranging from decision theory and privacy to human-robot interaction to provide perspectives on how safety is modeled in different domains, and (c) government leaders to discuss how they are thinking about open-world robot safety from a regulatory perspective. Together, we aim to synthesize these perspectives into a community position paper that reframes how safety for generalist robots should be defined, measured, and realized.
Discussion Themes:
What is the taxonomy of safety concerns in robotics?
Can we unify differing “specialist” notions of safety for “generalist” robots?
What can we learn about safety from non-embodied AI disciplines (e.g., LLMs) and what challenges are unique to robotics?
What assumptions should we make when formalizing the definition of safety for generalist robots?
Given this new notion of safety, to what extent do existing methods still hold? If they fall short, how can this new definition inform modern algorithms for robot safety?
Tentative Schedule
Opening Remarks: 5 min
Speaker 1: 25+5 min
Speaker 2: 25+5 min
Student Spotlights (25 min)
— Coffee Break: 30 min —
Speaker 3: 25+5 min
Speaker 4: 25+5 min
Panel: 30 min
Closing Remarks: 5 min
Organizers
Arpit Bahety
UT Austin
Kensuke Nakamura
Carnegie Mellon University
Haruki Nishimura
Toyota Research Institute
Lihan Zha
Princeton University
Ian Abraham
University of Sydney / Yale University
Andrea Bajcsy
Carnegie Mellon University
Roberto Martin-Martin
UT Austin