As part of the Robotic Vision Challenge group, with the Australian Centre for Robotic Vision and the Queensland Centre for Robotics, I have been involved in creating robotic vision challenges. These challenges are designed to promote different fields of robotic vision research and to provide tools for quantitatively comparing and ranking different algorithms.
This page provides summaries and important links to challenges I have been involved in organizing.
To talk to me and the team about our challenges, contact us using the details below:
Website: http://roboticvisionchallenge.org
E-mail: contact@roboticvisionchallenge.org
Slack: http://tinyurl.com/rvcslack
Twitter: @robVisChallenge
Title: The Robotic Vision Scene Understanding Challenge
Status: Active challenge with prizes!
Description:
The Robotic Vision Scene Understanding Challenge evaluates how well a robotic vision system can understand the semantic and geometric aspects of its environment. The challenge is performed using an active robot agent that navigates and explores high-fidelity simulated environments. It consists of two distinct tasks: Object-based Semantic SLAM and Scene Change Detection. We provide three difficulty levels, with the lower levels removing some of the complexities of robotic systems such as self-localization, obstacle avoidance, and exploration. All of this is made possible by our new BenchBot evaluation framework.
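As a rough illustration of how an entry plugs into the framework, the sketch below shows a minimal BenchBot agent in Python. The benchbot_api names used here (the Agent interface with is_done, pick_action and save_result, the ActionResult values, and the 'move_next' action available at the easier difficulty levels) are from my recollection of the BenchBot documentation, so treat this as a sketch and check the BenchBot GitHub for the current API.

    # Minimal BenchBot agent sketch (API names assumed from the benchbot_api
    # docs; verify against benchbot.org before use).
    import json

    from benchbot_api import Agent, BenchBot, ActionResult


    class LazyAgent(Agent):
        def is_done(self, action_result):
            # Stop once the environment reports the run has finished
            # (or failed, e.g. through a collision).
            return action_result != ActionResult.SUCCESS

        def pick_action(self, observations, action_list):
            # At the easier difficulty levels the robot follows a preset
            # trajectory; 'move_next' simply advances to the next pose.
            return 'move_next', {}

        def save_result(self, filename, empty_results, results_format_fns):
            # A real entry would fill empty_results with its estimated
            # object map (Semantic SLAM) or change list (Scene Change
            # Detection); here we just write the empty template.
            with open(filename, 'w') as f:
                json.dump(empty_results, f)


    if __name__ == '__main__':
        # Runs against whichever task and environment were started via
        # the BenchBot launch tooling.
        BenchBot(agent=LazyAgent()).run()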
Current Prizes:
One RTX A6000 and up to five Jetson Nanos (one per team member) for each of the top two winning teams.
Note that the current challenge only considers the top two difficulty levels when awarding prizes.
Important Links:
Submission Website: https://eval.ai/web/challenges/challenge-page/1614
Challenge Overview Website: http://tinyurl.com/acrv-rvc-sceneu
Embodied AI Workshop Website: https://embodied-ai.org/
BenchBot Github: http://benchbot.org
BenchBot Paper: https://arxiv.org/abs/2008.00635
BEAR Data Paper: https://doi.org/10.1177/02783649211069404
Challenge Paper: https://arxiv.org/abs/2009.05246
Title: The Probabilistic Object Detection (PrOD) Challenge
Status: Continuous evaluation server available
Description:
This challenge requires participants to detect objects in video data produced from high-fidelity simulations. Its novelty is that every detection must be reported as a probabilistic bounding box, with estimates of both spatial and semantic uncertainty.
Accurate uncertainty estimates are rewarded by our newly developed probability-based detection quality (PDQ) measure.
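To make this concrete, the snippet below sketches what a single probabilistic detection might look like and how PDQ combines its two quality terms. The field names (bbox, covars, label_probs) follow my recollection of the challenge submission format, and the class list is shortened for illustration; the PDQ paper linked below is the authoritative definition.

    # Sketch of one probabilistic detection (field names assumed from the
    # challenge documentation; see the PDQ paper for the formal definition).
    import math

    detection = {
        # Box corners as [x1, y1, x2, y2] in pixels.
        'bbox': [112.0, 64.0, 180.0, 150.0],
        # One 2x2 covariance matrix per corner, describing the spatial
        # uncertainty of the top-left and bottom-right corner positions.
        'covars': [[[4.0, 0.0], [0.0, 4.0]],
                   [[9.0, 0.0], [0.0, 9.0]]],
        # Probability assigned to each class in the challenge's class list
        # (shortened to three classes here for illustration).
        'label_probs': [0.05, 0.85, 0.10],
    }

    def pairwise_pdq(spatial_quality, label_quality):
        # PDQ scores each matched ground-truth/detection pair by the
        # geometric mean of its spatial quality and its label quality (the
        # probability the detector assigned to the true class).
        return math.sqrt(spatial_quality * label_quality)

    # e.g. a well-localised detection that put 0.85 on the correct class:
    print(pairwise_pdq(0.9, detection['label_probs'][1]))  # ~0.87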
Current Prizes:
None
Important Links:
Continuous Evaluation Server: https://competitions.codalab.org/competitions/2059
Previous Challenge Server: https://competitions.codalab.org/competitions/20597
Challenge Overview Website: https://nikosuenderhauf.github.io/roboticvisionchallenges/object-detection
PrOD and PDQ definition paper (arxiv version): https://arxiv.org/abs/1811.10800