Human to Robot (H2R):
Workshop on Sensorizing, Modeling, and Learning from Humans @ CoRL 2025
Seoul, South Korea (Sep 27, 2025)
Room E3
This workshop focuses on the "Human-to-Robot" (H2R) challenge, bringing together researchers to tackle the core obstacles hindering effective learning from human data. We structure these challenges around four interconnected themes:
Sensorizing Humans: To learn from humans, we must first capture the richness of their embodied experience. This requires advancements in unobtrusive, multimodal sensing technologies (beyond standard vision/audio to include touch, force, haptics, motion, gaze, and physiological signals) and methods for large-scale, long-term data acquisition in naturalistic settings. This pillar focuses on creating the comprehensive data foundation necessary for H2R.
Modeling Human Behavior: To effectively leverage rich, in-the-wild human data for robot learning, we must first develop a deeper understanding of the humans generating it. Raw sensor data alone has thus far proven insufficient because existing learning algorithms often struggle to capture the nuances, underlying intent, long-term goals, and complex decision-making processes inherent in natural human behavior observed outside constrained lab settings.
Robot Learning from Human Data: With rich data and human models, the next step is enabling robots to effectively learn skills. This requires robust algorithms for imitation learning, learning from observation, affordance grounding, and policy transfer that can handle the variability and scale of real-world human data. The focus is on translating the understanding gleaned from human data into tangible robot skills, particularly for complex, long-horizon tasks.
Leveraging Human Understanding for Better Human-Robot Interaction: Beyond offline skill learning, a deeper understanding of human state, intent, attention, and non-verbal cues (derived from advanced sensing and modeling) is critical for robots designed to work alongside people. This theme explores how insights from human data can enable robots to interact more safely, intuitively, predictably, and collaboratively in shared environments.
The workshop's best paper awards are generously sponsored by Meta (Project Aria).
Schedule

09:30 am - 09:40 am Introductory Remarks
09:40 am - 10:05 am Invited Speaker 1: Tess Hellebrekers (Microsoft Research)
10:05 am - 10:30 am Invited Speaker 2: Matei Ciocarlie (Columbia University)
10:30 am - 11:00 am Coffee Break (Poster Session 1)
11:00 am - 11:10 am Oral Presentations 1 & 2 (5 min each)
  - EgoBridge: Domain Adaptation for Generalizable Imitation from Egocentric Human Data
  - Compliant Residual DAgger: Improving Real-World Contact-Rich Manipulation with Human Corrections
11:15 am - 11:40 am Invited Speaker 3: Karen C. Liu (Stanford University)
11:40 am - 12:05 pm Invited Speaker 4: Suraj Nair (Physical Intelligence)
12:05 pm - 12:30 pm Invited Speaker 5: Hanbyul Joo (Seoul National University)
12:30 pm - 01:30 pm Lunch
01:30 pm - 01:55 pm Invited Speaker 6: Edward Johns (Imperial College London)
01:55 pm - 02:20 pm Invited Speaker 7: Prince Gupta (Meta Reality Labs Research)
02:20 pm - 02:30 pm Oral Presentations 3 & 4 (5 min each)
  - ROSE: Reconstructing Objects, Scenes, and Trajectories from Casual Videos for Robotic Manipulation
  - 2HandedAfforder: Learning Precise Actionable Bimanual Affordances from Human Videos
02:30 pm - 03:00 pm Coffee Break (Poster Session 2 + Aria Gen2 Demo)
03:00 pm - 03:25 pm Invited Speaker 8: Marc Pollefeys (ETH Zurich)
03:30 pm - 04:15 pm Panel Discussion
04:15 pm - 04:30 pm Closing Remarks & Award Announcement
Invited Speakers

Tess Hellebrekers (Microsoft Research)
Topic: Robot Learning from Human Video with Tactile

Matei Ciocarlie (Columbia University)
Topic: Human-in-the-Loop Robot Learning

Prince Gupta (Meta Reality Labs Research)
Topic: Meta’s Project Aria: Introducing Aria Gen2 Glasses for Robotics Research

Suraj Nair (Physical Intelligence)
Topic: Datasets for Open-World Robotic Foundation Models

Karen C. Liu (Stanford University)
Topic: From Human Characters to Humanoids

Marc Pollefeys (ETH Zurich)
Topic: Learning from Egocentric Data

Hanbyul Joo (Seoul National University)
Topic: Towards Capturing Everyday Movements to Scale Up and Enrich Human Motion Data for Robotics
Contact

Danfei Xu, Faculty at the Georgia Tech School of Interactive Computing and Research Scientist at NVIDIA
Email: danfei@gatech.edu