UR-RAD

2023 AAAI Fall Symposium on Unifying Representations for Robot Application Development


Westin Arlington Gateway, Arlington, VA, USA

October 25-27, 2023 

About UR-RAD

Behind any robot task or interaction is a representation that should (a) enable sufficient contextualization; (b) support any existing predefined, learned, and/or reusable skills onboard the robot; (c) be verifiable at design time and behave consistently at run time; and (d) be testable, executable, and modifiable for reuse across a variety of robot morphologies. Enabling end users to express their intent within different representations has long played a pivotal role in robot application development, i.e., the construction of robot services, social interactions, and/or collaborative tasks.

The problem is that robotics researchers lack consistency and uniformity in how these representations are selected and used. As a result, end users (i.e., robot application developers) face a myriad of challenges owing to this lack of cohesion.

The goals of the symposium are therefore to (a) categorize current representational trends in robot application development; (b) discuss best practices for the future adoption of languages, logical representations, development frameworks, and related tools that integrate advances from the wider AI community; and (c) identify opportunities for collaboration between academia and industry.

The call for papers can be found here.

Get in touch with us! urrad.symposium@gmail.com


Symposium Goals

We aim to generate discussion surrounding the following categories:


Symposium Format

The symposium will be made up of paper presentations, keynote talks, panels, and breakout discussions.

Confirmed Speakers

Dana Nau

Matthias Scheutz

Stefanie Tellex

Chien-Ming Huang

Post-Symposium Update

Thanks to everyone who attended for a fantastic symposium! The best paper winner and nominees are listed below.

🏆 WINNER: A Categorical Representation Language and Computational System for Knowledge-Based Robotic Task Planning

Angeline Aguinaldo, Evan Patterson, James Fairbanks, William Regli, and Jaime Ruiz

NOMINEE: Grounding Complex Natural Language Commands for Temporal Tasks in Unseen Environments

Jason Liu, Ziyi Yang, Ifrah Idrees, Sam Liang, Benjamin Schornstein, Stefanie Tellex, and Ankit Shah

NOMINEE: Interface Design for Learning from Demonstration with Older Adults

Lakshmi Seelam, Erin Hedlund-Botti, Chuxuan Yang, and Matthew Gombolay