CVPR 2019

Workshop on Autonomous Driving

June 17th | Long Beach, USA

Note: this is the website of the Workshop on Autonomous Driving as listed in the CVPR program. It is different from the Workshop on Autonomous Driving Beyond Single-Frame Perception, which is hosted at wad.ai. We apologize for the confusion.

Introduction

The CVPR 2019 Workshop on Autonomous Driving (WAD) aims to gather researchers and engineers from academia and industry to discuss the latest advances in perception for autonomous driving. In this one-day workshop, we will have regular paper presentations, invited speakers, and technical benchmark challenges to present the current state of the art, as well as the limitations and future directions of computer vision for autonomous driving, arguably the most promising application of computer vision and AI in general. Previous editions of the workshop at CVPR attracted hundreds of attendees. This year, multiple industry sponsors have also joined our organizing efforts to push the workshop's success to a new level.

News and Updates

Participation

Paper Submission

We solicit paper submissions on novel methods and application scenarios of computer vision for autonomous vehicles. We accept papers on a variety of topics, including autonomous navigation and exploration, ADAS, UAVs, deep learning, calibration, and SLAM. Papers will be peer reviewed under a double-blind policy, and the submission deadline is 20th March 2019. Accepted papers will be presented at the poster session, some will be presented as orals, and one paper will receive the best paper award.

Challenge Track

We host a challenge to understand the current status of computer vision algorithms in solving the environmental perception problems of autonomous driving. We have prepared a number of large-scale, finely annotated datasets, collected and annotated by the Berkeley Deep Driving Consortium, nuTonomy, and DiDi. Based on these datasets, we have defined a set of four realistic problems and encourage new algorithms and pipelines to be developed for autonomous driving. Specifically, they are the nuScenes challenge (nuScenes 3D detection challenge), the Berkeley Deep Drive challenges (object tracking and instance segmentation challenges), and the DiDi challenges (D²-City Detection Transfer Learning Challenge and D²-City Intelligent Annotation Teaser Challenge).

Invited Speakers

Kilian Q. Weinberger

Cornell University

Trevor Darrell

UC Berkeley

Alex Kendall

Cambridge & Wayve

Raquel Urtasun

UofT & Uber

Bo Li

UIUC

Zhaoyin Jia

Didi Chuxing

Manmohan Chandraker

UCSD & NEC Labs

Diamond Sponsors