Woodscape Challenge

In previous iterations of the WoodScape challenge, Valeo has hosted a semantic segmentation task (OmniCV 2021) and an object detection task (OmniCV 2022). We received an overwhelming response from researchers globally, with 200+ participants overall. The winners each year were given the opportunity to present their novel solutions in our workshop special session.

We continue to host these challenges to encourage the community to adapt computer vision models for fisheye cameras instead of relying on naive rectification, and to encourage further research into building a unified perception model for autonomous driving. This year the focus is a moving object segmentation task with domain adaptation. We are developing the challenge in conjunction with Parallel Domain, which has created a synthetic data platform for rapidly generating the high-quality data required to train and test perception models.

This year’s challenge involves training a single model on both synthetic and real data for a motion segmentation task evaluated on real data (WoodScape). Annotations are at the pixel level, and submissions consist of binary masks labelling each pixel as static or moving. The winning model will have the highest mean IoU (mIoU) over these two classes.
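As a rough sketch of the evaluation metric (not the official scoring script), mIoU for this binary task is the average of the per-class intersection-over-union for the static and moving classes:

```python
import numpy as np

def binary_miou(pred, gt):
    """Mean IoU over the two classes (0 = static, 1 = moving) of binary masks."""
    ious = []
    for cls in (0, 1):
        p = (pred == cls)
        g = (gt == cls)
        inter = np.logical_and(p, g).sum()
        union = np.logical_or(p, g).sum()
        # Convention: if a class is absent from both masks, count it as a perfect match.
        ious.append(inter / union if union else 1.0)
    return float(np.mean(ious))

# Tiny example: one mispredicted pixel out of six.
gt = np.array([[0, 0, 1],
               [0, 1, 1]])
pred = np.array([[0, 1, 1],
                 [0, 1, 1]])
print(binary_miou(pred, gt))
```

Note that plain pixel accuracy would reward predicting the dominant (usually static) class everywhere, whereas averaging IoU over both classes forces the model to segment the moving pixels well too.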

At Valeo, we develop wide field-of-view fisheye cameras and computer vision software to enable automated driving and parking. A common vehicle sensor configuration consists of multiple fisheye cameras placed around the vehicle, providing the 360° perception that is key to achieving autonomous driving. Computer vision comprises multiple perception tasks that process the images from these cameras in real time to provide pedestrian and vehicle detection, lane marking detection, curb detection, and so on. The development of many of these perception tasks relies on training neural networks (deep learning) on many tens of thousands of human-annotated examples so that the networks learn to identify such objects in the image. Generating massive amounts of labelled data is complex, time-consuming, and expensive, but it is a necessity in order to meet the perception detection rates and accuracy levels required for vehicle autonomy. While many publicly available automotive datasets exist for single narrow field-of-view cameras, until now there have been no public surround-view fisheye datasets that would enable state-of-the-art research, both industrial and academic.

Datasets

WoodScape comprises four surround-view fisheye cameras and nine tasks, including segmentation, depth estimation, 3D bounding box detection, and a novel soiling detection task. Semantic annotation of 40+ classes at the instance level is provided for over 10,000 images. With WoodScape, we would like to encourage the community to adapt computer vision models for the fisheye camera instead of using naive rectification.

The Parallel Domain dataset, consisting of 105 20-second urban daytime scenes, is twice the size of the WoodScape dataset. The four fisheye cameras are simulated under variable cloud cover across 4,200 keyframes, which are annotated with 2D and 3D bounding boxes, semantic segmentation, and instance segmentation.

The Challenge

Labeling data for perception tasks can be a time-consuming and error-prone process, particularly for moving object segmentation. To address this challenge, we are seeking innovative methods that help streamline the development of moving object detection while reducing the reliance on real-world data.

The objective of this challenge is to advance the state of the art in moving object segmentation by benchmarking techniques that utilize the least amount of real data possible. Contestants will have access to both real-world WoodScape data and synthetic Parallel Domain data. The goal is to develop a solution that leverages the synthetic Parallel Domain data to achieve the highest accuracy with the least amount of real-world WoodScape data.

Leaderboard rankings will be determined by a formula that takes into account two factors: accuracy on the test set and the amount of real-world data used (less is better). The aim of the challenge is to find new ways to leverage synthetic data and reduce the dependence on real-world data in moving object segmentation tasks. 
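The exact ranking formula is defined on the challenge's CodaLab page; the toy function below is purely a hypothetical illustration (the weighting `alpha` is our assumption, not part of the challenge) of how accuracy can be traded off against real-data usage:

```python
def challenge_score(miou, real_fraction, alpha=0.5):
    """Toy ranking score: reward high mIoU, penalise real-data usage.

    miou          -- test-set mean IoU in [0, 1]
    real_fraction -- fraction of the available real WoodScape data used, in [0, 1]
    alpha         -- hypothetical penalty weight (NOT from the official rules)
    """
    return miou * (1.0 - alpha * real_fraction)

# Under such a scheme, a model with slightly lower mIoU but far less real data
# can outrank a model trained on all of the real data.
print(challenge_score(0.80, 0.10))
print(challenge_score(0.85, 1.00))
```

The point of any formula of this shape is to make heavy use of synthetic data the winning strategy rather than an afterthought.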

The challenge is hosted on CodaLab.

Prizes

Challenge Rules

Timeline


Leaderboard

The challenge ended on June 4, 2023. The Test Set leaderboard with the top 5 entries is shown below:

Challenge Toppers



Sponsors

Organising Committee

Saravanabalagi Ramachandran

Maynooth University 

John McDonald

Maynooth University 

Omar Maher

Parallel Domain 

Nate Cibik

Parallel Domain 

Phillip Thomas

Parallel Domain 

Jonathan Horgan

Valeo Vision Systems 

Ganesh Sistu

Valeo Vision Systems 

Forum & Contact

There is a forum on CodaLab for public queries and discussion; for private queries, please send an email to saravanabalagi [at] gmail [dot] com

Previous Challenges

Click here for the 2021 edition of the WoodScape Challenge

Click here for the 2022 edition of the WoodScape Challenge