Human to Robot (H2R):
Workshop on Sensorizing, Modeling, and Learning from Humans @ CoRL 2025
Seoul, South Korea (Sep 27, 2025)
This workshop focuses on the "Human-to-Robot" (H2R) challenge, bringing together researchers to tackle the core obstacles hindering effective learning from human data. We structure these challenges around four interconnected themes:
Sensorizing Humans: To learn from humans, we must first capture the richness of their embodied experience. This requires advancements in unobtrusive, multimodal sensing technologies (beyond standard vision/audio to include touch, force, haptics, motion, gaze, and physiological signals) and methods for large-scale, long-term data acquisition in naturalistic settings. This pillar focuses on creating the comprehensive data foundation necessary for H2R.
Modeling Human Behavior: To effectively leverage rich, in-the-wild human data for robot learning, we must first develop a deeper understanding of the humans generating it. Raw sensor data alone has thus far proven insufficient because existing learning algorithms often struggle to capture the nuances, underlying intent, long-term goals, and complex decision-making processes inherent in natural human behavior observed outside constrained lab settings.
Robot Learning from Human Data: With rich data and human models, the next step is enabling robots to effectively learn skills. This requires robust algorithms for imitation learning, learning from observation, affordance grounding, and policy transfer that can handle the variability and scale of real-world human data. The focus is on translating the understanding gleaned from human data into tangible robot skills, particularly for complex, long-horizon tasks.
Leveraging Human Understanding for Better Human-Robot Interaction: Beyond offline skill learning, a deeper understanding of human state, intent, attention, and non-verbal cues (derived from advanced sensing and modeling) is critical for robots designed to work alongside people. This theme explores how insights from human data can enable robots to interact more safely, intuitively, predictably, and collaboratively in shared environments.
The workshop best paper awards are generously sponsored by Meta (Project Aria).
09:30 am - 09:45 am Introductory Remarks
09:45 am - 10:15 am Invited Speaker 1: Tess Hellebrekers (Meta AI Research)
10:15 am - 10:30 am Oral Paper Talks (5 min each)
10:30 am - 11:00 am Coffee Break (Poster Session + Demo)
11:00 am - 11:30 am Invited Speaker 2: Matei Ciocarlie (Columbia University)
11:30 am - 12:00 pm Invited Speaker 3: Mingfei Yan (Meta Reality Labs)
12:00 pm - 12:30 pm Invited Speaker 4: Suraj Nair (Physical Intelligence)
12:30 pm - 01:30 pm Lunch
01:30 pm - 02:00 pm Invited Speaker 5: Edward Johns (Imperial College London)
02:00 pm - 02:30 pm Invited Speaker 6: Karen C. Liu (Stanford University)
02:30 pm - 03:00 pm Coffee Break
03:00 pm - 03:30 pm Invited Speaker 7: Hanbyul Joo (Seoul National University)
03:30 pm - 04:15 pm Panel Discussion
04:15 pm - 04:30 pm Closing Remarks & Award Announcement
Microsoft Research
Topic: Robot Learning from Human Video with Tactile Sensing
Columbia University
Topic: Human-in-the-Loop Robot Learning
Physical Intelligence
Topic: Datasets for Open-World Robotic Foundation Models
Stanford University
Topic: From Human Characters to Humanoids
Seoul National University
Topic: Towards Capturing Everyday Movements to Scale Up and Enrich Human Motion Data for Robotics
Faculty at the Georgia Tech School of Interactive Computing and Research Scientist at NVIDIA
Email: danfei@gatech.edu
We welcome original research, work-in-progress, and position papers, whether unpublished, currently under review, or recently published. Accepted papers will be posted on the workshop website; they are considered non-archival and will not be part of the official conference proceedings.
The workshop focuses on the challenges of Human-to-Robot (H2R) learning. We welcome papers addressing either hardware or software. Potential topics include, but are not limited to:
Sensorizing Humans
Modeling Human Behavior
Robot Learning from Human Data
Leveraging Human Understanding for Better Human-Robot Interaction
...
Submission Platform: We will be using OpenReview for all submissions. The submission page is here.
Paper Format: Please use the CoRL template and style files, which can be found here. Submissions should be anonymized for review. There is no strict page limit, but we recommend a length of 4-9 pages, excluding references and appendices.
Presentation: Accepted papers will be presented as posters. A selection of papers will be chosen for oral spotlight presentations.
Submission Deadline: August 15, 2025, 23:59 (AoE)
Notification of Acceptance: September 5, 2025
Late Submission Deadline: August 30, 2025, 23:59 (AoE)
Late Notification of Acceptance: September 15, 2025
Camera-Ready Papers Due: September 20, 2025, 23:59 (AoE)
All deadlines are Anywhere on Earth (AoE) time. A link showing the current AoE time can be found here.