New Benchmarks, Metrics, and Competitions for Robotic Learning
An RSS Workshop – Pittsburgh, 29-30 June 2018
Ask your questions for the panel discussion here
Rooms: DH 2315 (June 29) – GHC 4303 (June 30)
This workshop will discuss and propose new benchmarks, competitions, and performance metrics that address the specific challenges arising when deploying (deep) learning in robotics. Researchers in robotics currently lack widely accepted, meaningful benchmarks and competitions that inspire the community to work on the critical research challenges of robotic learning and allow repeatable experiments and quantitative evaluation. This is in stark contrast to computer vision, where datasets like ImageNet and COCO, and the associated competitions, fueled many of the advances of recent years.
This workshop will therefore bring together experts from the robotics, machine learning, and computer vision communities to identify the shortcomings of existing benchmarks, datasets, and evaluation metrics. We will discuss the critical challenges for learning in robotic perception, planning, and control that are not well covered by existing benchmarks, and combine the results of these discussions to outline new benchmarks for these areas.
The proposed new benchmarks shall complement existing benchmark competitions and be run annually in conjunction with conferences such as RSS, CoRL, ICRA, NIPS, or CVPR. They will help to close the gap between the robotics, computer vision, and machine learning communities and foster crucial advances in machine learning for robotics.
Researchers in robotics often lack standardized, realistic benchmarks for conducting repeatable, large-scale experiments that evaluate and quantitatively compare the performance of their algorithmic approaches and overall systems. This is in stark contrast to the computer vision community, where datasets such as Pascal VOC, ImageNet, or COCO, and the associated evaluation protocols, fueled many of the advances in object recognition, object detection, semantic segmentation, image captioning, and visual question answering in recent years.
The lack of standardized benchmarks is a significant roadblock to meaningful progress in robotics, especially in robotic learning for perception and action. It currently causes researchers to conduct non-comparable and non-repeatable experiments, and ultimately compromises the overall validity of evaluations in our field of research. The goal of the workshop is to provide a forum where the community can discuss and propose new benchmarks, competitions, and performance metrics addressing the specific challenges of robotic learning.
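To make the idea of a standardized, repeatable evaluation protocol concrete, the Python sketch below shows what a minimal scoring harness for a hypothetical robotic manipulation benchmark might look like: a frozen task list, a fixed number of trials per task, and a fixed random seed, so that results from different labs are directly comparable. All names and numbers here (run_trial, evaluate, TASKS, N_TRIALS, the stubbed simulator) are illustrative assumptions, not part of any existing benchmark.

```python
import random

# Illustrative sketch only: a standardized, repeatable evaluation protocol
# for a hypothetical robotic manipulation benchmark. Task names, trial
# counts, and the stubbed simulator are assumptions for demonstration.

N_TRIALS = 50                                      # fixed by the protocol
TASKS = ["pick_cube", "pick_mug", "stack_blocks"]  # frozen task list
SEED = 0                                           # fixed seed for repeatability

def run_trial(policy, task, rng):
    """Stub for one simulated episode; returns True on task success."""
    return rng.random() < policy(task)             # stand-in for a real rollout

def evaluate(policy):
    """Per-task success rate with a binomial standard error."""
    rng = random.Random(SEED)
    report = {}
    for task in TASKS:
        successes = sum(run_trial(policy, task, rng) for _ in range(N_TRIALS))
        rate = successes / N_TRIALS
        stderr = (rate * (1 - rate) / N_TRIALS) ** 0.5
        report[task] = (rate, stderr)
    return report

if __name__ == "__main__":
    # A trivial "policy" that only exposes a nominal success probability.
    print(evaluate(lambda task: 0.6))
```

Reporting a standard error alongside each success rate makes explicit how much of a reported difference between two methods could be due to trial noise, which is one reason fixed trial counts and seeds matter for comparability.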
The workshop will run for 1.5 days. The first day (29 June) is dedicated to invited talks, panel discussions, and contributed paper poster presentations with spotlight talks. The organisers, invited speakers, and interested participants will get together on the morning of the second day (30 June) to consolidate the discussions and work on a document that summarizes the outcomes of the workshop. This document may later be extended into a survey paper for submission to a journal.
Our workshop puts a strong emphasis on developing new benchmarks that address the challenges arising when deploying deep learning on robots in complex real-world scenarios, the current gaps in our collective knowledge in this area, and the new research directions needed to close these gaps.
We therefore invite authors to contribute extended abstracts or full papers that:
Papers on benchmarks and datasets should be guided by the following questions:
Papers can be submitted as extended abstracts (2-3 pages plus references) or full papers (6-8 pages plus references) using this form.