This workshop explores moving beyond human preference signals by training and aligning reinforcement learning systems with measurable, real-world feedback, such as efficiency, safety, health, performance, and economic outcomes, in pursuit of scalable, robust, and truly grounded intelligence. The workshop brings together researchers across reinforcement learning, foundation models, robotics, systems, and AI alignment to study how heterogeneous, noisy, and delayed world feedback can be modeled and integrated into modern learning pipelines. Through invited talks, contributed papers, and interactive panels, the workshop aims to advance scalable, world-grounded training paradigms and to redefine feedback as a central interface between intelligent agents and real-world objectives.
The goal of this workshop is to advance a new reinforcement learning paradigm in which world feedback—objective signals arising from real-world interactions such as efficiency, safety, health, performance, and economic outcomes—is treated as a first-class learning signal alongside or beyond human feedback. By uniting researchers across reinforcement learning, foundation models, robotics, systems, and AI alignment, the workshop aims to clarify core challenges, share emerging methods, and establish common frameworks for learning from heterogeneous, noisy, and delayed feedback. Ultimately, the workshop seeks to catalyze scalable, robust, and deployable learning approaches that are grounded in real-world consequences rather than solely human preference.
8:00–8:10 Opening remarks
8:10–8:40 Invited talk 1
8:40–9:10 Invited talk 2
9:10–10:00 Oral presentations
10:00–10:30 Coffee break
10:30–11:00 Invited talk 3
11:00–12:00 Poster session 1
12:00–13:00 Lunch
13:00–13:30 Invited talk 4
13:30–14:00 Invited talk 5
14:00–15:00 Poster session 2
15:00–15:30 Coffee break
15:30–16:00 Invited talk 6
16:00–17:00 Panel discussion
To support underrepresented researchers who may lack access to large-scale compute, we propose two submission types: (1) full papers (up to 8 pages, ICML format) with potentially large-scale experiments, and (2) short papers (2-4 pages, ICML format) with proof-of-concept demonstrations of the proposed idea. Proof-of-concept submissions may include demos, code, and a blog post. We will use OpenReview to manage submissions and the double-blind review process.
Key dates (tentative).
Submission deadline: May 13, 2026, AoE
Author notification: May 31, 2026, AoE
Camera-ready deadline: June 30, 2026, AoE
Workshop date: July 10 or 11, 2026