Learning to Adapt and Improve in the Real World

A one-day workshop bringing together ideas on adapting and improving robots in the dynamic real world

The world is ever-changing and thus difficult to capture exactly at training time. Whether we train policies in realistic simulation, from offline data, or even in the real world, they are unlikely to generalize well to the varied conditions of the physical world. Hence, our robots must learn to adapt in the real world through efficient exploration and improvement.


To this end, we are organizing a CoRL 2022 workshop on Learning to Adapt and Improve in the Real World. The focus will be on research directions that enable robots to continuously adapt to changes in tasks and environments, generalize to unseen settings, and hone existing skills.


Logistics


Invited Talks

Representation learning for robot manipulation, collecting and pre-training from generalizable offline datasets

Chelsea Finn is an Assistant Professor in Computer Science and Electrical Engineering at Stanford University. Her lab, IRIS, studies intelligence through robotic interaction at scale and is affiliated with SAIL and the Statistical ML Group. She also spends time at Google as part of the Google Brain team.

She is interested in the capability of robots and other agents to develop broadly intelligent behavior through learning and interaction.

Previously, she completed her Ph.D. in computer science at UC Berkeley and her B.S. in electrical engineering and computer science at MIT.

Adaptation for legged robots, exploration in the real world, active exploration initialized from passive data

Deepak Pathak is a faculty member in the School of Computer Science at Carnegie Mellon University. He is a member of the Robotics Institute and affiliated with the Machine Learning Department. He works in Artificial Intelligence at the intersection of Computer Vision, Machine Learning, and Robotics.

His ultimate goal is to build agents with a human-like ability to generalize in real and diverse environments. His work draws inspiration from psychology to build practical systems at the interface of vision, learning, and robotics that can learn using data as their own supervision.

Previously, he was a researcher at Facebook AI Research and a visiting researcher at UC Berkeley. He received his Ph.D. from UC Berkeley.

Mobile robotics, aerial vehicle and quadcopter control, safe learning in robotics

Angela Schoellig is an Associate Professor at the University of Toronto Institute for Aerospace Studies and a Faculty Member of the Vector Institute. She holds a Canada Research Chair (Tier 2) in Machine Learning for Robotics and Control and a Canada CIFAR Chair in Artificial Intelligence. She is a principal investigator of the NSERC Canadian Robotics Network and the University’s Robotics Institute.

She conducts research at the intersection of robotics, controls, and machine learning. Her goal is to enhance the performance, safety, and autonomy of robots by enabling them to learn from past experiments and from each other.

Agile, perception-driven flight, event-based cameras for robotics, autonomous aerial navigation

Davide Scaramuzza is a Professor of Robotics and Perception at the University of Zurich, where he does research at the intersection of robotics, computer vision, and machine learning, using standard cameras and event cameras, with the aim of enabling autonomous, agile navigation of micro drones in search-and-rescue applications.

For his research contributions to autonomous, vision-based drone navigation and event cameras, he has received prestigious awards, including a European Research Council (ERC) Consolidator Grant, the IEEE Robotics and Automation Society Early Career Award, a Google Research Award, and two Qualcomm Innovation Fellowships.

Social robotics, personalized agents, human-robot interaction

Hae Won Park, PhD, is a Research Scientist at MIT Media Lab and a Principal Investigator of the Social Robot Companions for Aging and Contextualized Intelligence Grants. Her research focuses on socio-emotive AI and personalization of socially embodied agents that support long-term interaction and relationship building with users.

Her work is applied to a range of real-world domains including early childhood education and healthier aging in place. Her research has been published at top robotics, HCI, and AI venues and has received many awards for best paper and innovative robot applications. Hae Won received her PhD from Georgia Tech in 2014, at which time she also co-founded Zyrobotics, an assistive education AI startup, to support play and education for neurodiverse learners.

Program Committee

Alexander Khazatsky (Stanford)

Ali Ghadirzadeh (Stanford)

Antonio Loquercio (UC Berkeley)

Gaoyue Zhou (CMU)

Haozhi Qi (UC Berkeley)

Homer Walke (UC Berkeley)

Huang Huang (UC Berkeley)

Jerry Zhi-Yang He (UC Berkeley)

Justin Wasserman (UIUC)

Sudeep Dasari (CMU)

Kenneth Shaw (CMU)

Lili Chen (CMU)

Mohan Kumar Srirama (CMU)

Murtaza Dalal (CMU)

Patrick Lancaster (Meta)

Russell Mendonca (CMU)

Sam Powers (CMU)

Tony Z. Zhao (Stanford)

Xuxin Cheng (CMU)

Zhongyu Li (UC Berkeley)

Organizers

Annie Xie (Stanford)

Ashish Kumar (Berkeley)

Laura Smith (Berkeley)

Shikhar Bahl (CMU, FAIR)

Byron Boots (UW, NVIDIA)

Jitendra Malik (Berkeley, FAIR)