FathomNet2023

Overview

Ocean-going camera systems have given scientists access to a remarkable data product that allows them to monitor populations and discover new organisms. Many groups have successfully trained and deployed machine learning models to help sort incoming visual data. But these models typically do not generalize well to new situations (different cameras or illumination, new organisms appearing, changes in the appearance of the seafloor), an especially vexing problem in the dynamic ocean. Improving the robustness of these tools will allow ecologists to better leverage existing data and enable engineers to deploy instruments in ever more remote parts of the sea.

For this competition, we have selected data from the broader FathomNet annotated image set that represents a challenging use case: the training set was collected in the upper ocean (< 800 m), while the target data come from deeper waters. This is a common scenario in ocean research: deeper waters are more difficult to access, so more annotated data is typically available close to the surface. The species distributions overlap but are not identical, and they diverge as the vertical distance between samples increases. The challenge is both to identify the animals in a target image and to assess whether the image comes from a different distribution than the training data. Such out-of-sample detection could help scientists discover new animals and improve ecosystem management practices.
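One common starting point for the out-of-sample part of the task, though not prescribed by the competition, is a confidence-based score: an image whose classifier is uncertain about every known category is more likely to come from outside the training distribution. The sketch below illustrates this idea with a maximum-probability baseline; the function names, the toy probabilities, and the threshold are all illustrative assumptions, not part of the competition materials.

```python
import numpy as np

def osd_score(probs):
    """Out-of-sample score: 1 minus the highest predicted class
    probability. Higher scores mean the model is less confident that
    the image belongs to any known training category."""
    return 1.0 - np.max(probs, axis=-1)

def flag_out_of_sample(probs, threshold=0.5):
    """Flag images whose out-of-sample score exceeds a chosen
    threshold (the value 0.5 here is arbitrary for illustration)."""
    return osd_score(probs) > threshold

# Toy per-image probabilities over four hypothetical categories.
probs = np.array([
    [0.90, 0.05, 0.03, 0.02],  # confident -> likely in-distribution
    [0.30, 0.25, 0.25, 0.20],  # diffuse -> possibly out-of-sample
    [0.70, 0.10, 0.10, 0.10],  # moderately confident
])
print(flag_out_of_sample(probs))  # only the second image is flagged
```

In practice the threshold would be tuned on held-out data, and stronger alternatives (energy scores, distance-based detectors, ensembles) exist, but a simple confidence score is a reasonable baseline for separating shallow-water-like images from deeper, unfamiliar ones.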

Competition

Organizers