In previous iterations of the WoodScape challenge, Valeo has hosted semantic segmentation (OmniCV 2021) and object detection (OmniCV 2022) tasks. We received an overwhelming response from researchers worldwide, with 200+ participants overall. Each year, the challenge winners were given the opportunity to present their novel solutions in our workshop special session.
We continue to host these challenges to encourage the community to adapt computer vision models to fisheye cameras rather than relying on naive rectification, and to encourage further research towards a unified perception model for autonomous driving. This year the challenge focuses on moving object segmentation with domain adaptation. We are developing the challenge in conjunction with Parallel Domain, whose synthetic data platform provides a fast way to generate the high-quality data required to train and test perception models.
This year’s challenge involves training a single model on both synthetic and real data that achieves the best score on a motion segmentation task evaluated against real data (WoodScape). Annotations are at the pixel level, and submissions consist of binary masks labeling each pixel as static or moving. The best model will have the highest mIoU over these two classes.
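For intuition, the two-class metric can be sketched as below. This is a minimal illustration only: the official CodaLab evaluation script is authoritative, and the `binary_miou` function name and the boolean-mask representation are assumptions made here for clarity.

```python
import numpy as np

def binary_miou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean IoU over the static and motion classes of binary masks.

    pred, gt: boolean arrays of the same shape, True = motion pixel.
    """
    ious = []
    for cls in (False, True):  # static class, then motion class
        p = (pred == cls)
        g = (gt == cls)
        inter = np.logical_and(p, g).sum()
        union = np.logical_or(p, g).sum()
        if union > 0:  # skip a class absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))
```

A perfect prediction scores 1.0; a mask with every pixel flipped relative to the ground truth scores 0.0 on both classes.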
At Valeo, we develop wide field of view fisheye cameras and computer vision software to enable automated driving and parking. A common vehicle sensor configuration consists of multiple fisheye cameras placed around the vehicle, allowing 360° perception, which is key for achieving autonomous driving. Computer vision comprises multiple perception tasks that process the images coming from these cameras in real time to provide pedestrian and vehicle detection, lane marking detection, curb detection, and so on. The development of many of these perception tasks relies on training neural networks (Deep Learning) on many tens of thousands of human-annotated examples in order to learn how to identify such objects in the image. Generating massive amounts of labelled data is complex, time consuming and expensive, but is a necessity in order to meet the perception detection rates and accuracy levels required for vehicle autonomy. While there exist many publicly available automotive datasets from single narrow field of view cameras, until now there have been no surround-view fisheye datasets available to the public that would enable state-of-the-art research, both industrial and academic.
Datasets
WoodScape comprises four surround-view fisheye cameras and nine tasks, including segmentation, depth estimation, 3D bounding box detection, and a novel soiling detection task. Semantic annotation of 40+ classes at the instance level is provided for over 10,000 images. With WoodScape, we would like to encourage the community to adapt computer vision models to the fisheye camera instead of using naive rectification.
The Parallel Domain dataset, consisting of 105 20-second urban daytime scenes, is twice the size of the WoodScape dataset. The four fisheye cameras are simulated under variable cloud cover across 4,200 keyframes, which are annotated with 2D and 3D bounding boxes, semantic segmentation, and instance segmentation.
The Challenge
Labeling data for perception tasks can be a time-consuming and error-prone process, particularly for moving object segmentation. To address this challenge, we are seeking innovative methods that can help to streamline the development of moving object detection tasks while reducing the reliance on real-world data.
The objective of this challenge is to advance the state of the art in moving object segmentation by benchmarking techniques that utilize the least amount of real data possible. Contestants will have access to both real-world WoodScape data and synthetic Parallel Domain data. The goal is to develop a solution that leverages synthetic data from Parallel Domain to achieve the highest accuracy with the least amount of real-world data from WoodScape.
Leaderboard rankings will be determined by a formula that takes into account two factors: accuracy on the test set and the amount of real-world data used (less is better). The aim of the challenge is to find new ways to leverage synthetic data and reduce the dependence on real-world data in moving object segmentation tasks.
The challenge is hosted on CodaLab.
Prizes
The following prizes, sponsored by both Valeo and Parallel Domain, are awarded to the top 3 teams/individuals on the final leaderboard once results are verified. See below for further terms & conditions.
1st - US$2,500
2nd - US$1,500
3rd - US$1,000
The winning individual/team is expected to present their technical solution in a speaking slot at the OmniCV workshop event. There is no associated paper or poster required at the conference.
Teams placing in the top 3 are also expected to detail their technical solution in a blog post and/or arXiv paper published after the competition, similar to here. The write-up should specifically describe how the balance between real and synthetic data was achieved.
The prize will be awarded through a single payment to each team lead. Distribution of the prize amongst team members is the responsibility of the team lead.
In the case of a tie, the prize will be split.
The final award remains at the discretion of the organizing committee.
The final leaderboard following the test phase and verification will be published on the competition website. We encourage participants to share details of their solutions by sharing links to associated publications and code.
Prize Tax Info: Please note that for the second and third place awards, winners must provide tax documentation before receiving their awards (just one designated team member per winning team must complete this requirement to collect the award). This process is simple and ensures compliance with tax laws. U.S. winners and residents should submit a W-9 form, while international winners usually need a W-8BEN form (unless they're U.S. citizens or have a valid SSN, in which case they'll need a W-9 form). We'll guide you through this process to make it as seamless as possible. Your success in the competition is our top priority, and we can't wait to see your innovative solutions!
Challenge Rules
Individuals and teams (unlimited size) can enter the competition.
Limit of 5 submissions per day and 100 submissions in total per person/team.
Usage of pre-trained models is not permitted.
There are no limits on training time or network capacity.
Teams placing in the top 3 may be requested to share their code so that the competition organisers can verify the reported data usage.
No Valeo or Parallel Domain employees may take part in the challenge.
Employees of third party companies, universities or institutions that contributed to the creation or have access to the full WoodScape dataset may not take part in the challenge.
The associated WoodScape dataset terms of use continue to apply for the data usage within this challenge.
The terms of use for the Parallel Domain synthetic data provided for this challenge are detailed here.
Timeline
Competition Release - April 5, 2023
Dev Phase Deadline - June 2, 2023 (extended from May 24, 2023)
Test Phase Deadline - June 4, 2023 (extended from May 26, 2023)
Top 3 submissions informed and results verification - June 7, 2023 (previously May 27, 2023)
Prize winners confirmed - June 9-12, 2023 (previously June 5, 2023)
OmniCV 2023 winner presentation - June 19, 2023
All deadlines are at 11:59 PM UTC on the corresponding day unless otherwise noted. The competition organizers reserve the right to update the contest timeline if they deem it necessary.
Leaderboard
The challenge ended on June 4, 2023. The Test Set leaderboard with the top 5 entries is shown below:
Challenge Toppers
Winning Team: STAR (Username: www)
Runner Up: USTC-IAT-United (Username: USTCxNetEaseFuxi)
Jun Yu
Renjie Lu
Leilei Wang
Shuoping Yang
Gongpeng Zhao
Renda Li
Bingyuan Zhang
Affiliation: University of Science and Technology of China
Third Place: XMU-UAV (Username: heboyong)
Boyong He
Weijie Guo
Xi Lin
Yuxiang Ji
Affiliation: Xiamen University
Sponsors
Organising Committee
Forum & Contact
There is a forum on CodaLab for public queries and discussion; for private queries, please send an email to saravanabalagi [at] gmail [dot] com
Previous Challenges
Click here for the 2021 edition of the WoodScape Challenge
Click here for the 2022 edition of the WoodScape Challenge