Abstract
This workshop will discuss and propose new benchmarks, competitions, and performance metrics that address the specific challenges arising when deploying (deep) learning in robotics. Researchers in robotics currently lack widely accepted, meaningful benchmarks and competitions that inspire the community to work on the critical research challenges for robotic learning and that allow repeatable experiments and quantitative evaluation. This is in stark contrast to computer vision, where datasets like ImageNet and COCO, and the associated competitions, have fueled many of the advances of recent years.
This workshop will therefore bring together experts from the robotics, machine learning, and computer vision communities to identify the shortcomings of existing benchmarks, datasets, and evaluation metrics. We will discuss the critical challenges for learning in robotic perception, planning, and control that are not well covered by existing benchmarks, and combine the results of these discussions to outline new benchmarks in these areas.
The newly proposed benchmarks should complement existing benchmark competitions and be run annually in conjunction with conferences such as RSS, CoRL, ICRA, NIPS, or CVPR. They will help to close the gap between the robotics, computer vision, and machine learning communities, and will foster crucial advances in machine learning for robotics.
Motivation
Researchers in robotics often lack standardized, realistic benchmarks for conducting repeatable large-scale experiments to evaluate and quantitatively compare the performance of their algorithmic approaches and overall systems. This is in stark contrast to the computer vision community, where datasets such as Pascal VOC, ImageNet, or COCO, and the associated evaluation protocols, have fueled many of the advances in object recognition, object detection, semantic segmentation, image captioning, and visual question answering in recent years.
The lack of standardized benchmarks is a significant roadblock for meaningful progress in robotics, especially in robotic learning for perception and action. It currently causes researchers to conduct non-comparable and non-repeatable experiments and ultimately compromises the overall validity of evaluations in our field of research. The goal of the workshop is to provide a forum where the community can discuss and propose new benchmarks, competitions, and performance metrics that address the specific challenges of robotic learning.
Schedule
The workshop will run for 1.5 days. The first day (29 June) is dedicated to invited talks, panel discussions, and contributed paper poster presentations with spotlight talks. The organisers, invited speakers, and interested participants will get together on the morning of the second day (30 June) to consolidate the discussions and work on a document that summarizes the outcomes of the workshop. This document may later be extended into a survey paper for submission to a journal.
Schedule Day 1 (Friday, 29 June) Room DH 2315
- 9.00 Welcome and Introduction
- 9.20 Ken Goldberg (UC Berkeley)
- 9.40 Oliver Brock (TU Berlin)
- 10.00 Coffee Break
- 10.30 Angel Chang (Princeton University)
- 10.50 Vladlen Koltun (Intel Intelligent Systems Lab)
- 11.10 Dieter Fox (University of Washington)
- 11.30 Discussion
- 12.00 Lunch Break
- 13.30 Contributed Paper 1: Towards An Empirically Reproducible Benchmark for Deep Learning Grasping Algorithms. Andrey Kurenkov, Roberto Martin-Martin, Animesh Garg, Ken Goldberg, Silvio Savarese.
- 13.40 Contributed Paper 2: The AI Driving Olympics at NIPS 2018.
- 13.50 Contributed Paper 3: Dataset for Near Contact Grasping Trajectories. Ammar Kothari, Yi Ong, John Morrow, Cindy Grimm.
- 14.00 Niko Sünderhauf (Queensland University of Technology)
- 14.20 Juxi Leitner (Queensland University of Technology)
- 14.30 Coffee Break with Poster Session
- 15.00 Peter Henderson (McGill University)
- 15.20 Angela Schöllig (University of Toronto)
- 15.40 Wolfram Burgard (University of Freiburg)
- 16.00 Stefanie Tellex (Brown University)
- 16.20 Panel Discussion
- 17.00 Conclusions and Closing Remarks
Schedule Day 2 (Saturday, 30 June) Room GHC 4303
- 9.00 Welcome and Introduction
- 9.10 Break into working groups. Each group discusses and starts drafting concrete proposals for benchmarks, metrics, and competitions.
- 10.00 Coffee Break until 10.30, then continue working group discussions
- 11.00 Finalise proposals.
- 11.30 Get together in a big group, discuss and consolidate proposal drafts. Plan next steps for after RSS.
- 12.00 Workshop Conclusions
Call for Participation
Our workshop puts a strong emphasis on developing new benchmarks that address the challenges arising when deploying deep learning for robotics in complex real-world scenarios, on identifying current gaps in our collective knowledge in this area, and on the new research directions necessary to close these gaps.
We therefore invite authors to contribute extended abstracts or full papers that:
- identify the shortcomings of existing benchmarks, datasets, and evaluation metrics for robotics
- propose improved datasets, evaluation metrics, benchmarks, and protocols for robotics that foster repeatable evaluation and motivate research in important areas not well covered by existing benchmarks (see the illustrative sketch after this list)
- address specific research challenges in robotic learning, such as coping with open-set conditions, uncertainty estimation, incremental/continuous learning, active learning, active vision, or transfer learning
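To make the second item above more concrete, the following Python snippet is a purely illustrative sketch of what a repeatable evaluation protocol with a quantitative metric could look like. It is not tied to any existing benchmark or dataset; the names (`run_trial`, `TRIAL_SEEDS`, `evaluate`) and the task abstraction are hypothetical. The idea it illustrates is simply that a policy is evaluated over a fixed, published set of trial configurations and that the benchmark reports a success rate together with an uncertainty estimate, so that independently run evaluations are directly comparable.

```python
import random
import statistics

# Hypothetical illustration only: this is not an existing benchmark API.
# It sketches a repeatable protocol: fixed trial configurations, a simple
# quantitative metric (success rate), and a bootstrap confidence interval.

TRIAL_SEEDS = list(range(100))  # fixed, published trial configurations


def run_trial(policy, seed: int) -> bool:
    """Placeholder for one benchmark episode (e.g. a single grasp attempt).

    A real benchmark would reset a simulator or a physical setup to the
    configuration identified by `seed` and return task success or failure.
    """
    rng = random.Random(seed)
    return policy(rng) > 0.5  # stand-in for an actual task outcome


def evaluate(policy, seeds=TRIAL_SEEDS, n_bootstrap: int = 1000):
    """Report success rate with a bootstrap 95% confidence interval."""
    outcomes = [run_trial(policy, s) for s in seeds]
    success_rate = statistics.mean(outcomes)

    rng = random.Random(0)  # fixed seed so the report itself is repeatable
    resampled = sorted(
        statistics.mean(rng.choices(outcomes, k=len(outcomes)))
        for _ in range(n_bootstrap)
    )
    lower = resampled[int(0.025 * n_bootstrap)]
    upper = resampled[int(0.975 * n_bootstrap)]
    return success_rate, (lower, upper)


if __name__ == "__main__":
    rate, (lo, hi) = evaluate(lambda rng: rng.random())
    print(f"success rate: {rate:.2f}, 95% CI: [{lo:.2f}, {hi:.2f}]")
```

Fixing and publishing the trial configurations alongside the metric definition is one simple way a benchmark can make results comparable across labs and over time; contributed papers may of course propose very different protocols.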
Papers on benchmarks and datasets should be guided by the following questions:
- Where do you see the shortcomings in existing benchmarks and evaluation metrics?
- What are important research challenges for robotic learning that are not well covered by existing benchmarks?
- What characteristics should new benchmarks have to allow meaningful, repeatable evaluation of approaches in robotic vision, while steering the community towards addressing the open research challenges?
Instructions for Authors
Papers can be submitted as extended abstracts (2-3 pages plus references) or full papers (6-8 pages plus references) via the submission form.
Important Dates
- June 8, 2018 : Deadline for submission. (Extended)
- June 15, 2018 : Acceptance notification.
- June 29 (full day) and June 30 (morning) : Workshop at RSS in Pittsburgh
Organisers
- Niko Sünderhauf (Chief Investigator, Australian Centre for Robotic Vision)
- Markus Wulfmeier (Postdoctoral Research Scientist, University of Oxford)
- Anelia Angelova (Research Scientist, Google Research)
- Feras Dayoub (Postdoctoral Fellow, QUT)
- Juxi Leitner (Postdoctoral Fellow, QUT)
With support from
- Trung T. Pham (Postdoctoral Fellow, University of Adelaide)
- Vijay Kumar (Postdoctoral Fellow, University of Adelaide)
- Gustavo Carneiro (Associate Professor, University of Adelaide)
- Peter Anderson (PhD Researcher, ANU)
- Ingmar Posner (Associate Professor, University of Oxford)
- Michael Milford (Professor, QUT)
- Anton van den Hengel (Professor, University of Adelaide)
- Ken Goldberg (Professor, UC Berkeley)
- Peter Corke (Professor, QUT, and Director, Australian Centre for Robotic Vision)