For robot policies to be useful, they need to operate in our everyday environments. Over the last few years, the robotics community has made progress on training increasingly general policies across a broad range of tasks, but these policies have largely been confined to lab-like settings. The next frontier for generalist policy research is the deployment of policies in real-world human environments such as homes, offices, and industrial sites. Building on a long history of in-the-wild robot research, a number of works have recently demonstrated that modern robot learning approaches can generalize, out of the box, to unseen everyday environments. The time is therefore right for a focused workshop on the approaches, opportunities, and challenges of bringing generalist policies into the wild.
Deployment in open-world, everyday environments brings many new challenges. We invite workshop submissions on a number of topics related to generalist policies and their deployment outside of lab environments. Relevant topics include, but are not limited to:
Data: How can we effectively collect data for in-the-wild deployment?
Models: How can we develop models that act as proficient generalists in increasingly unstructured, in-the-wild environments?
Evaluation: How can we compare the performance of policies and algorithms designed to perform a large number of tasks across many environments?
Safety: How can we guarantee safety as end-to-end models are deployed in real-world environments and around humans?
As part of this workshop, we will also introduce the RoboArena generalist robot policy development challenge. In this challenge, we aim to provide resources that make training and evaluating generalist policies as accessible as possible. We hope that RoboArena can serve as a useful testbed for novel research on generalist policies. By readily providing resources for generalist policy development, including data, model training code, and robots for evaluation, we hope that even non-roboticists interested in generalist policies can participate.
Key Dates
July 15th, 2025: Public announcement of the RoboArena challenge; all materials, instructions, and simulated environments released
Aug 1st – Sept 12th: Eval office hours – Zoom debugging sessions, by request, to test policies on a real robot. Policies can also be submitted directly to RoboArena (https://robo-arena.github.io/submit) and will be evaluated
Sept 8th: Soft deadline to submit policies to RoboArena. We recommend meeting the soft deadline so that you can get early signal about how well your policies are performing and debug any unforeseen issues.
Sept 13th: Hard deadline for final submission of policies (11:59 PM Anywhere on Earth). RoboArena evaluations will be run in the following week, from Sept 13th until Sept 20th, to determine the final ranking of all competing policies and the winner.
Resources
Open-source DROID Dataset: https://droid-dataset.github.io/
Starter VLA model training code: https://github.com/Physical-Intelligence/openpi/blob/main/examples/droid/README_train.md
Simulator for quick debugging: https://github.com/arhanjain/sim-evals
The RoboArena benchmark itself! Submit policies to get them evaluated: https://robo-arena.github.io/submit
(we provide extra evaluation credit for challenge participants!)
Submission Deadline: September 17, 2025, 23:59 AOE
Acceptance Notification: September 20, 2025
Camera-ready Deadline: September 23, 2025, 23:59 AOE
Submission Portal: OpenReview
Workshop Date: Sept 27, 2025
We invite submissions of up to 8 pages for the main paper, with unlimited references and appendices. Authors are encouraged to use the CoRL template. We welcome relevant submissions that were recently published at other venues (e.g., NeurIPS / ICML / ICLR), but ask authors to indicate this upon submission. All accepted papers will be presented as posters, and select papers will additionally be presented as spotlight talks.
Yang Gao
Assistant Professor, Tsinghua University
Laura Smith
Research Scientist, Physical Intelligence
Elahe Arani
Head of AI, Wayve
Karl Pertsch
UC Berkeley & Stanford
Pranav Atreya
UC Berkeley
Tony Lee
Stanford
Arhan Jain
University of Washington
Artur Kuramshin
Université de Montréal
Cyrus Nealy
Université de Montréal
Edward Hu
University of Pennsylvania
Jie Wang
University of Pennsylvania
Kirsty Ellis
Université de Montréal
Luca Macesanu
UT Austin
Matthew Leonard
University of Pennsylvania
Meedeum Cho
Yonsei University
Ozgur Aslan
Université de Montréal
Shivin Dass
UT Austin
William Painter Reger
UT Austin
Xingfang Yang
University of Pennsylvania
Abhishek Gupta
University of Washington
Dinesh Jayaraman
University of Pennsylvania
Glen Berseth
Université de Montréal
Roberto Martin-Martin
UT Austin
Youngwoon Lee
Yonsei University
Percy Liang
Stanford
Chelsea Finn
Stanford
Sergey Levine
UC Berkeley