7th Workshop on Computer Vision for Road Scene Understanding and Autonomous Driving
ICCV 2019, October 27th, Seoul, Korea
NightOwls Pedestrian Detection Challenge 2019
Competition submission deadline: *13th October 2019*
Pedestrian detection at night from RGB cameras is an under-represented yet very important problem, where current state-of-the-art vision algorithms fail. Computer vision methods for detection at night have received little attention, despite being a critical building block of many systems, such as safe and robust autonomous cars. To further assess and advance the state of the art, we organize the NightOwls Pedestrian Detection Challenge 2019.
The competition uses the recently published NightOwls dataset, consisting of 279,000 fully-annotated images in 40 video sequences recorded at night across 3 different countries by an industry-standard camera. Participants are encouraged to train their models on the provided training subset (128k images), tune the hyper-parameters on the validation subset (48k images) and then submit their detection results on the testing subset (128k images) by the competition submission deadline, *13th October 2019*.
For more information, visit the competition website:
List of Accepted Papers
- Estimation of Absolute Scale in Monocular SLAM Using Synthetic Data, D. Rukhovich, D. Mouritzen, R. Kaestner, M. Rufli, A. Velizhev
- Short-Term Prediction and Multi-Camera Fusion on Semantic Grids, L. Hoyer, P. Kesper, A. Khoreva, V. Fischer
- Probabilistic Vehicle Reconstruction Using a Multi-Task CNN, M. Coenen, F. Rottensteiner
- Unsupervised Labeled Lane Markers Using Maps, K. Behrendt, R. Soussan
- Boxy Vehicle Detection in Large Images, K. Behrendt
- ShelfNet for Fast Semantic Segmentation, J. Zhuang, J. Yang, L. Gu, N. Dvornek
- Monocular 3D Object Detection with Pseudo-LiDAR Point Cloud, X. Weng, K. Kitani
- Road Scene Understanding by Occupancy Grid Learning from Sparse Radar Clusters using Semantic Segmentation, S. Oron, L. Sless, G. Cohen, S. Gilad, E. Bat
- Reverse and Boundary Attention Network for Road Segmentation, J. Sun, S. Kim, S. Lee, Y. Kim, K. Ye-Won, S. Ko
- Small Obstacle Avoidance Based on RGB-D Semantic Segmentation, M. Hua, Y. Nan, S. Lian
- Robust Absolute and Relative Pose Estimation of a Central Camera System from 2D-3D Line Correspondences, H. Abdellali, R. Frohlich, Z. Kato
- End-to-end Lane Detection through Differentiable Least-Squares Fitting, W. Van Gansbeke, B. De Brabandere, D. Neven, M. Proesmans, L. Van Gool
- Temporal Coherence for Active Learning in Videos, B. Zolfaghari, A. Gonzalez-Garcia, G. Villalonga, B. Raducanu, H. Habibi Aghdam, M. Mozerov, A.M. Lopez, J. van de Weijer
- Vehicle Detection with Automotive Radar using Deep Learning on Range-Azimuth-Doppler Tensors, B. Major, D. Fontijne, A. Ansari, R. Teja Sukhavasi, R. Gowaikar, M. J Hamilton, S. Lee, S. Grzechnik, S. Subramanian
- Exploiting Temporality for Semi-Supervised Video Segmentation, R. Sibechi, O. Booij, N. Baka, P. Bloem
- LU-Net: A Simple Approach to 3D LiDAR Point Cloud Semantic Segmentation, P. Biasutti, V. Lepetit, M. Bredif, J.-F. Aujol, A. Bugeau
- I Bet You Are Wrong: Gambling Adversarial Networks for Structured Semantic Segmentation, L. Samson, N. van Noord, O. Booij, E. Gavves, M. Hofmann, M. Ghafoorian
Topics of Interest
Analyzing road scenes using cameras could have a crucial impact in many domains, such as autonomous driving, advanced driver assistance systems (ADAS), personal navigation, mapping of large-scale environments and road maintenance. For instance, vehicle infrastructure, signage, and rules of the road have been designed to be fully interpretable by visual inspection. As the field of computer vision becomes increasingly mature, practical solutions to many of these tasks are now within reach. Nonetheless, there still seems to be a wide gap between what is needed by the automotive industry and what is currently possible using computer vision techniques.
The goal of this workshop is to allow researchers in the fields of road scene understanding and autonomous driving to present their progress and discuss novel ideas that will shape the future of this area. In particular, we would like this workshop to bridge the gap between the community that develops novel theoretical approaches for road scene understanding and the community that builds working real-life systems performing in real-world conditions. To this end, we will aim to have invited speakers covering different continents and coming from both academia and industry.
We encourage submissions of original and unpublished work in the area of vision-based road scene understanding. The topics of interest include (but are not limited to):
- Road scene understanding in mature and emerging markets
- Deep learning for road scene understanding
- Prediction and modeling of road scenes and scenarios
- Semantic labeling, object detection and recognition in road scenes
- Dynamic 3D reconstruction, SLAM and ego-motion estimation
- Visual feature extraction, classification and tracking
- Design and development of robust and real-time architectures
- Use of emerging sensors (e.g., multispectral, RGB-D, LIDAR and LADAR)
- Fusion of RGB imagery with other sensing modalities
- Interdisciplinary contributions across computer vision, optics, robotics and other related fields
We encourage researchers to submit not only theoretical contributions, but also work more focused on applications. Each paper will receive double-blind reviews, which will be moderated by the workshop chairs.
- Submission Deadline: July 26th, 2019.
- Notification of Acceptance: August 23rd, 2019.
- Camera-ready Deadline: August 30th, 2019.
- Workshop: October 27th, 2019.
Papers will be limited to 8 pages according to the ICCV format (see the main conference author guidelines). All papers will be reviewed by at least two reviewers under a double-blind policy. Papers will be selected based on relevance, significance and novelty of results, technical merit, and clarity of presentation. Papers will be published in the ICCV 2019 proceedings.
All papers should be submitted through the CMT website.
- Dr. Mathieu Salzmann, EPFL, Switzerland
- Dr. Jose Alvarez, NVIDIA, USA
- Dr. Lars Petersson, Data61 CSIRO, Australia
- Prof. Fredrik Kahl, Chalmers University of Technology, Sweden
- Dr. Bart Nabbe, Aurora, USA
- Dr. Lukas Neumann, Univ. Oxford, UK
- Dr. Andrea Vedaldi, Univ. Oxford, UK
- Prof. Andrew Zisserman, Univ. Oxford, UK
- Prof. Dr. Bernt Schiele, Max Planck Institute, Germany