Dataset Generation and Benchmarking of SLAM Algorithms for Robotics and VR/AR


A full-day workshop

Objectives

Synthetic datasets have gained enormous popularity in the computer vision community, from training and evaluating deep-learning-based methods to benchmarking Simultaneous Localization and Mapping (SLAM). Having the right tools to create customized datasets enables faster development, with a focus on robotics applications. A large number of datasets already exist, but emerging applications and new research directions call for versatile dataset generation tools that cover all aspects of daily life. At the same time, SLAM is becoming a key component of robotics and augmented reality (AR) systems. While a large number of SLAM algorithms have been presented, there has been little effort to unify their interfaces or to perform a holistic comparison of their capabilities. This is a problem, since different SLAM applications have different functional and non-functional requirements: for example, a mobile phone-based AR application has a tight energy budget, while a UAV navigation system usually requires high accuracy. This workshop brings together experts in these two fields, dataset generation tools and benchmarking, to address the challenges researchers are facing.


This event will introduce novel benchmarking and dataset generation methods. As organizers, we will introduce InteriorNet (BMVC 2018), SLAMBench2.0 (ICRA 2018), and MLPerf.

  • InteriorNet, developed at Imperial College London, is a versatile dataset generation application capable of simulating a wide range of sensors and environment variations, such as moving objects and daily lighting changes.
  • SLAMBench2.0, developed at the University of Edinburgh, Imperial College London, and the University of Manchester, is an open-source benchmarking framework for evaluating existing and future SLAM systems, both open and closed source, over an extensible list of datasets, using a comparable and clearly specified list of performance metrics. A wide variety of existing datasets (such as InteriorNet, TUM, and ICL-NUIM) and many SLAM algorithms (such as ElasticFusion, InfiniTAM, ORB-SLAM2, and OKVIS) are supported. Integrating new algorithms and datasets into SLAMBench2.0 is straightforward and clearly specified by the framework. Attendees will gain experience in generating datasets and evaluating SLAM systems with SLAMBench.
  • The MLPerf effort aims to build a common set of benchmarks that enables the machine learning (ML) field to measure system performance for both training and inference, from mobile devices to cloud services. Researchers from several universities, including Harvard University, Stanford University, the University of Arkansas at Little Rock, the University of California Berkeley, the University of Illinois Urbana-Champaign, the University of Minnesota, the University of Texas at Austin, and the University of Toronto, have contributed to MLPerf.
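A central metric in the SLAM benchmarking discussed above is trajectory accuracy, commonly reported as absolute trajectory error (ATE): the estimated trajectory is rigidly aligned to ground truth and the residual RMSE is reported. The sketch below is a generic, self-contained illustration of that computation, not SLAMBench's actual implementation; the function names `align_umeyama` and `ate_rmse` are our own.

```python
import numpy as np

def align_umeyama(gt, est):
    """Least-squares rigid alignment (rotation R, translation t) of est onto gt.
    Both inputs are (N, 3) arrays of corresponding positions."""
    mu_gt, mu_est = gt.mean(axis=0), est.mean(axis=0)
    # Cross-covariance of the centered point sets.
    cov = (gt - mu_gt).T @ (est - mu_est) / gt.shape[0]
    U, _, Vt = np.linalg.svd(cov)
    # Guard against a reflection solution.
    S = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:
        S[2, 2] = -1
    R = U @ S @ Vt
    t = mu_gt - R @ mu_est
    return R, t

def ate_rmse(gt, est):
    """Absolute trajectory error (RMSE of position residuals) after alignment."""
    R, t = align_umeyama(gt, est)
    aligned = est @ R.T + t
    err = np.linalg.norm(gt - aligned, axis=1)
    return np.sqrt(np.mean(err ** 2))

# Toy check: an estimate that is a rigid transform of ground truth has ATE ≈ 0.
gt = np.cumsum(np.random.default_rng(0).normal(size=(100, 3)), axis=0)
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
est = gt @ Rz.T + np.array([1.0, -2.0, 0.5])
print(f"ATE RMSE: {ate_rmse(gt, est):.6f} m")  # expect ≈ 0 for a rigid offset
```

Because the rigid alignment removes any global rotation and translation, ATE measures only the drift and distortion of the estimated trajectory, which is why it is the headline accuracy number in most SLAM evaluations.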

For further information about these works, please refer to the following links:

InteriorNet: https://interiornet.org/

SLAMBench2.0: https://github.com/pamela-project/slambench2

MLPerf: https://mlperf.org/

Figures: lighting variation; moving objects


Topics of Interest

Topics of interest include, but are not limited to:

  • SLAM Evaluation
  • Reproducible Results
  • Performance Analysis
  • Application-oriented Mapping
  • Metrics for Loop Closure Evaluation
  • Active Vision Benchmarking and Datasets
  • Metrics for Evaluations: from Perception to Motion Control
  • Dataset and Benchmarking of SLAM in Dynamic Environments
  • Task-based SLAM Evaluation: Navigation, Grasping, Planning, etc.
  • Datasets and Benchmarking of AI for Robotics and Scene Understanding
  • Customized Dataset Generation for SLAM and robotics learning: tools and datasets
  • Deep Learning and AI: Datasets, Evaluation, and Benchmarking for Semantic and 3D Scene Understanding

Figure: head-to-head benchmarking of SLAM algorithms

Invited Speakers

"Benchmarking SLAM: Current Status and Road Ahead"

Prof. Davide Scaramuzza, Director of Robotics and Perception Group, University of Zurich, Switzerland

"Spatial Perception for Mobile Robotics"

Prof. Stefan Leutenegger, Senior Lecturer, Director of Smart Robotics Lab, Co-director of Dyson Robotics Lab, Imperial College London, UK

Claire Delaunay, VP of Engineering at Nvidia, Co-founder at Otto, San Francisco Bay Area, USA

"SLAM for AR: What Is Really Needed, and How Can We Measure It?"

Dr. Jakob Engel, Facebook Reality Labs, Oculus Research, Redmond WA, USA

"MLPerf: A Benchmark Suite for Machine Learning"

Prof. Vijay Janapa Reddi, Associate Professor, John A. Paulson School of Engineering and Applied Sciences, Harvard University, USA

"Habitat: A Platform for Embodied AI Research"

Prof. Manolis Savva, Assistant Professor, Simon Fraser University, Vancouver BC, Canada; Visiting Researcher, Facebook AI Research

"InteriorNet: Mega-scale Multi-sensor Photo-realistic Indoor Scenes Dataset"

Prof. Wenbin Li, Lecturer (Assistant Professor), University of Bath, UK

Program

08:50 - 09:00 Welcome and Overview

09:00 - 10:00 First Session (60 minutes)

09:00 - 09:30 Invited Talk 1/8: Prof. Davide Scaramuzza, Director of Robotics and Perception Group, University of Zurich, Switzerland

"Benchmarking SLAM: Current Status and Road Ahead"

09:30 - 10:00 Invited Talk 2/8: Dr. Jakob Engel, Facebook Reality Labs, Oculus Research, Redmond WA, USA

"SLAM for AR: What Is Really Needed, and How Can We Measure It?"

10:00 - 10:30 Coffee Break / Poster Session 1 (30 minutes)

10:30 - 12:30 Second Session (120 minutes)

10:30 - 11:00 Invited Talk 3/8: Prof. Stefan Leutenegger, Senior Lecturer, Director of Smart Robotics Lab, Co-director of Dyson Robotics Lab, Imperial College London, UK

"Spatial Perception for Mobile Robotics"

11:00 - 11:30 Invited Talk 4/8: Prof. Bruno Bodin, Assistant Professor, Yale-NUS College, Singapore

"SLAMBench2: Multi-Objective Head-to-Head Benchmarking for Visual SLAM", and

"SLAMBench3.0: Systematic Automated Reproducible Evaluation of SLAM Systems for Robot Vision Challenges and Scene Understanding" (PDF), M. Bujanca, P. Grafton, S. Saeedi, A. Nisbet, B. Bodin, M. F. P. O'Boyle, A. J. Davison, P. H. J. Kelly, G. Riley, B. Lennox, M. Luján, and S. Furber

11:30 - 11:50 Contributed Paper 1/6: M. Zaffar, A. Khaliq, M. Milford, and K. McDonald-Maier,

"Levelling the Playing Field: A Comprehensive Comparison of Visual Place Recognition Approaches under Changing Conditions"

11:50 - 12:10 Contributed Paper 2/6: Z. Zhang, and D. Scaramuzza,

"Rethinking Trajectory Evaluation for SLAM: a Probabilistic, Continuous-Time Approach"

12:10 - 12:30 Contributed Paper 3/6: J. Skinner, D. Hall, H. Zhang, F. Dayoub, and N. Sünderhauf,

"The Probabilistic Object Detection Challenge"

12:30 - 13:30 Lunch Break / Poster Session 2 (60 minutes)

13:30 - 15:00 Third Session (90 minutes)

13:30 - 14:00 Invited Talk 5/8: Prof. Manolis Savva, Assistant Professor, Simon Fraser University, Vancouver BC, Canada; Visiting Researcher, Facebook AI Research

"Habitat: A Platform for Embodied AI Research"

14:00 - 14:30 Invited Talk 6/8: Prof. Vijay Janapa Reddi, Associate Professor, John A. Paulson School of Engineering and Applied Sciences, Harvard University, USA

"MLPerf: A Benchmark Suite for Machine Learning"

14:30 - 14:45 Contributed Paper 4/6: L. Rodriguez, V. Chandragiri, D. Pena, and D. Moloney,

"End-to-End Relative Pose Estimation of Point Clouds and Voxel Grids"

14:45 - 15:00 Contributed Paper 5/6: W. Ye, Y. Zhao, and P. A. Vela,

"Characterizing SLAM Benchmarks and Methods for the Robust Perception Age"

15:00 - 15:30 Coffee Break / Poster Session 3 (30 minutes)

15:30 - 17:30 Fourth Session (120 minutes)

15:30 - 16:00 Invited Talk 7/8: Prof. Wenbin Li, Lecturer (Assistant Professor), University of Bath, UK

"InteriorNet: Mega-scale Multi-sensor Photo-realistic Indoor Scenes Dataset"

16:00 - 16:30 Invited Talk 8/8: Claire Delaunay - VP of Engineering - NVIDIA

16:30 - 16:45 Contributed Paper 6/6: A. J. Lee, Y. Cho, S. Yoon, Y. Shin, and A. Kim

"ViViD: Vision for Visibility Dataset"

16:45 - 17:30 Discussion: challenges and future topics, moderated by the organizers

Papers and Posters

  • PDF "End-to-End Relative Pose Estimation of Point Clouds and Voxel Grids", L. Rodriguez, V. Chandragiri, D. Pena, and D. Moloney
  • PDF "Lifelong SLAM Dataset and Benchmark", X. Shi, F. Qiao, and Q. She
  • PDF "Robustness of VO Systems to Subsampled Motions", G. Younes, D. Asmar, and J. Zelek
  • PDF "Marine Perception Datasets: a Work in Progress", P. Robinette, M. Sacarny, M. DeFilippo, M. Novitzky, and M. R. Benjamin
  • PDF "The Probabilistic Object Detection Challenge", J. Skinner, D. Hall, H. Zhang, F. Dayoub, and N. Sünderhauf
  • PDF "SLAMBench3.0: Systematic Automated Reproducible Evaluation of SLAM Systems for Robot Vision Challenges and Scene Understanding", M. Bujanca, P. Grafton, S. Saeedi, A. Nisbet, B. Bodin, M. F. P. O'Boyle, A. J. Davison, P. H. J. Kelly, G. Riley, B. Lennox, M. Luján, and S. Furber
  • PDF "Characterizing SLAM Benchmarks and Methods for the Robust Perception Age", W. Ye, Y. Zhao, and P. A. Vela
  • PDF "Challenges of Benchmarking SLAM Performance for Construction Specific Applications", S. A. Kay, S. Julier, and V. M. Pawar
  • PDF "Wiception: Augmenting Visual Sensing with Wireless Sensing for Fun and Profit", A. Balakrishnan, C. Adhivarahan, Z. Hashemifar, and K. Dantu
  • PDF SLIDES (* BEST PAPER AWARD SPONSORED BY NVIDIA *) "Rethinking Trajectory Evaluation for SLAM: a Probabilistic, Continuous-Time Approach", Z. Zhang and D. Scaramuzza
  • PDF "Towards Generation and Evaluation of Comprehensive Mapping Robot Datasets", H. Chen, X. Zhao, J. Luo, Z. Yang, Z. Zhao, H. Wan, X. Ye, G. Weng, Z. He, T. Dong, and S. Schwertfeger
  • PDF SLIDES (* BEST POSTER PRESENTATION AWARD SPONSORED BY KUJIALE.COM *) "ViViD: Vision for Visibility Dataset", A. J. Lee, Y. Cho, S. Yoon, Y. Shin, and A. Kim
  • PDF "Radar Dataset for Robust Localization and Mapping in Urban Environment", Y. S. Park, J. Jeong, Y. Shin, and A. Kim
  • PDF "Levelling the Playing Field: A Comprehensive Comparison of Visual Place Recognition Approaches under Changing Conditions", M. Zaffar, A. Khaliq, M. Milford, and K. McDonald-Maier
  • PDF "Multisensor Dataset Repository for Benchmarking SLAM in Challenging Scenarios", L. Arora, A. Chundi, N. Mohan Krishna, K. Rajawat, and R. M. Hegde

Call for Contributions

  • The workshop accepts contributions of research papers describing early research on emerging topics.
  • The workshop is intended for quick publication of work-in-progress, early results, etc. The workshop is not intended to prevent later publication of extended papers.
  • Prizes will be given to the best paper and also to the best poster presentation.
  • Submission Format: please use standard IEEE format (2-8 pages).
  • Submission Link: https://easychair.org/conferences/?conf=dgbicra2019

Important Dates

Abstract submission due date: (February 10 2019) March 15 2019

Abstract acceptance notification: (February 25 2019) March 30 2019

Camera-ready version due date: (March 5 2019) April 30 2019, Final Submission Link: https://easychair.org/conferences/?conf=dgbicra2019

Workshop day: Friday May 24 2019

Room: 517c

Technical Program Committee

Sajad Saeedi (Imperial College London)

Bruno Bodin (Yale-NUS College)

Wenbin Li (University of Bath)

Rui Tang (Kujiale.com)

Luigi Nardi (Stanford University)

Paul H.J. Kelly (Imperial College London)

Ankur Handa (Nvidia)


The Workshop is supported by the following IEEE RAS Technical Committees:

IEEE Robotics and Automation Society's Technical Committee on Computer & Robot Vision (TCVision)

IEEE Robotics and Automation Society's Technical Committee on Performance Evaluation & Benchmarking of Robotics and Automation Systems (TCPEBRAS)

Organizers

Sponsors

Contact us

If you have any questions, please contact the organizers at icra.workshop@gmail.com.