Dataset Generation and Benchmarking of SLAM Algorithms for Robotics and VR/AR


A full day workshop


Synthetic datasets have gained an enormous amount of popularity in the computer vision community, from training and evaluating Deep Learning-based methods to benchmarking Simultaneous Localization and Mapping (SLAM). Having the right tools to create customized datasets will enable faster development, with a focus on robotics applications. A large number of datasets exist, but with emerging applications and new research directions, there is a need for versatile dataset generation tools covering all aspects of our daily life. On the other hand, SLAM is becoming a key component of robotics and augmented reality (AR) systems. While a large number of SLAM algorithms have been presented, there has been little effort to unify the interface of such algorithms, or to perform a holistic comparison of their capabilities. This is a problem, since different SLAM applications can have different functional and non-functional requirements. For example, a mobile phone-based AR application has a tight energy budget, while a UAV navigation system usually requires high accuracy. This workshop aims to bring together experts in these two fields, dataset generation tools and benchmarking, to address the challenges researchers are facing.

This event will introduce novel benchmarking and dataset generation methods. As organizers, we will introduce InteriorNet (BMVC 2018), SLAMBench2.0 (ICRA 2018), and DAWNBench (NIPS Workshop 2017). InteriorNet, developed at Imperial College London, is a versatile dataset generation application, capable of simulating a wide range of sensors and environmental variations, such as moving objects and daytime lighting changes. SLAMBench2.0, developed at the University of Edinburgh, Imperial College London and the University of Manchester, is an open-source benchmarking framework for evaluating existing and future SLAM systems, both open and closed source, over an extensible list of datasets, while using a comparable and clearly specified list of performance metrics. A wide variety of existing datasets, such as InteriorNet, TUM, and ICL-NUIM, and many SLAM algorithms, such as ElasticFusion, InfiniTAM, ORB-SLAM2, and OKVIS, are supported. Integrating new algorithms and datasets into SLAMBench2.0 is straightforward and clearly specified by the framework. Attendees will gain experience generating datasets and evaluating SLAM systems with SLAMBench. DAWNBench, developed at Stanford University, has functionality similar to SLAMBench2.0, but targets other computer vision tasks such as image classification.
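One of the accuracy metrics commonly reported by SLAM benchmarks of this kind is absolute trajectory error (ATE). As a rough illustration only, and not SLAMBench's actual implementation, a minimal RMSE computation over time-synchronized 3D positions might look like:

```python
import math

def ate_rmse(estimated, ground_truth):
    """Absolute Trajectory Error (RMSE) over aligned 3D positions.

    Assumes both trajectories are equal-length, time-synchronized lists
    of (x, y, z) positions already expressed in the same frame; real
    benchmarking frameworks also perform timestamp association and
    rigid alignment before computing this.
    """
    assert len(estimated) == len(ground_truth)
    sq_errors = [
        sum((e - g) ** 2 for e, g in zip(est, gt))  # squared distance per pose
        for est, gt in zip(estimated, ground_truth)
    ]
    return math.sqrt(sum(sq_errors) / len(sq_errors))

# Toy example: estimated trajectory offset by 0.1 m along x at every pose.
gt  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
est = [(0.1, 0.0, 0.0), (1.1, 0.0, 0.0), (2.1, 0.0, 0.0)]
print(round(ate_rmse(est, gt), 6))  # constant 0.1 m error -> RMSE 0.1
```

A full benchmark would combine such accuracy numbers with the non-functional metrics mentioned above, e.g. runtime and energy consumption.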




Lighting Variation

Moving objects

Topics of Interest

Topics of interest include, but are not limited to:

  • SLAM Evaluation
  • Reproducible Results
  • Performance Analysis
  • Application-oriented Mapping
  • Metrics for Loop Closure Evaluation
  • Active Vision Benchmarking and Datasets
  • Metrics for Evaluations: from Perception to Motion Control
  • Dataset and Benchmarking of SLAM in Dynamic Environments
  • Task-based SLAM Evaluation: Navigation, Grasping, Planning, etc.
  • Datasets and Benchmarking of AI for Robotics and Scene Understanding
  • Customized Dataset Generation for SLAM and robotics learning: tools and datasets
  • Deep Learning and AI: Datasets, Evaluation, and Benchmarking for Semantic and 3D Scene Understanding

Head-to-head benchmarking for SLAM algorithms

Keynote Speaker

Prof. Andrew J Davison, Imperial College London, Fellow of the Royal Academy of Engineering

Andrew Davison is a Professor of Robot Vision at the Department of Computing, Imperial College London. He is a Fellow of the Royal Academy of Engineering, and leads the Robot Vision Research Group and the Dyson Robotics Laboratory at Imperial College. His MonoSLAM algorithm opened the door for devices with low-cost cameras to localise and understand their surroundings. This work is having huge industrial impact in robotics, augmented reality and mobile devices.

Invited Speakers

Prof. Davide Scaramuzza, Director of the Robotics and Perception Group, University of Zurich, Switzerland

Dr. David Moloney, Founder of Intel Movidius, Dublin, Ireland

Dr. Jakob Engel, Facebook Reality Labs, Oculus Research, Redmond WA, USA



Call for Contributions

  • The workshop accepts contributions of research papers describing early research on emerging topics.
  • The workshop is intended for quick publication of work-in-progress, early results, etc. The workshop is not intended to prevent later publication of extended papers.
  • Prizes will be given to the best paper and also to the best presentation.
  • Submission Format: for extended abstracts or full papers, please use the standard IEEE format (2-8 pages).
  • Submission Link:

Important Dates

Abstract submission due date: February 10 2019

Abstract acceptance notification: February 25 2019

Camera-ready version due date: March 5 2019

Workshop day: May 23 2019 or May 24 2019

Technical Program Committee

Sajad Saeedi (Imperial College London)

Bruno Bodin (Yale-NUS College)

Wenbin Li (University of Bath)

Rui Tang (

Luigi Nardi (Stanford University)

Paul H.J. Kelly (Imperial College London)

Ankur Handa (Nvidia)



Contact us

If you have any questions, please contact the organizers at