LangRob @ CoRL 2023

Workshop on Language and Robot Learning
Language as Grounding

The rapid advancements in natural language processing (NLP) over the past few years have sparked growing interest in integrating language into robot learning. This has produced a rich body of research at the intersection of NLP, computer vision, and robotics, covering a wide range of topics such as human-robot communication, language-driven representation learning, specification of rewards, tasks, and constraints, as well as the use of large pretrained language models for control. The first edition of this workshop featured over 100 participants engaging with 9 diverse invited speakers, 2 panel discussions, 14 accepted posters, and 1 live interactive robotics demonstration.

In last year’s exploratory first edition of the workshop, speakers and participants agreed that the most interesting future challenge is how language in robot learning can progress towards grounding more modalities. This excitement has been reflected in the broader community as well, with the proliferation of LLM- and VLM-driven robotics research in recent months. Building on these ideas, the second edition of this workshop focuses on how language can act as a common grounding for multimodal data, across robot embodiments, and beyond offline training methods.

Speakers

Edward Johns

Imperial College London

Jeannette Bohg

Stanford University


Fei Xia

Google DeepMind

Anca Dragan

UC Berkeley


Zsolt Kira

Georgia Tech


Nathan Lambert

Allen Institute for AI


Panelists

Animesh Garg

Georgia Tech

Dorsa Sadigh

Stanford University


Ken Goldberg

UC Berkeley

Dhruv Batra

Meta / Georgia Tech

Sergey Levine

UC Berkeley

Thanks for Participating!

Workshop Details

Organizers

Dhruv Shah

UC Berkeley

Oier Mees

UC Berkeley

Ted Xiao

Google DeepMind

Sponsors

We are very thankful to our corporate sponsors for enabling us to provide best paper awards and cover student registration fees.