The OceanDark dataset consists of 183 underwater images of 1280 × 720 pixels, captured using artificial lighting by video cameras located at great depths. The dataset is intended to help researchers understand scenes with sub-optimal lighting by providing a variety of real-world examples, enabling the creation of enhancement methods. The metadata provided for each sample includes: capture date, time, location, latitude, longitude, depth, and camera system used. Please cite the works related to the dataset [1,2] if it proves useful to your research.
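The per-sample metadata fields above can be mirrored in a small record type. This is a minimal sketch of one possible representation; the class and field names (and the example date, time, and camera values) are illustrative assumptions, not the dataset's actual schema, while the coordinates and depth come from the Barkley Canyon/Axis site listed below.

```python
from dataclasses import dataclass

# Hypothetical record mirroring the metadata fields provided for each
# OceanDark sample (capture date, time, location, latitude, longitude,
# depth, camera system). Field names are illustrative, not official.
@dataclass
class OceanDarkSample:
    filename: str
    capture_date: str    # e.g. an ISO date string (assumed format)
    capture_time: str    # e.g. "HH:MM:SS" UTC (assumed format)
    location: str
    latitude: float
    longitude: float
    depth_m: float
    camera_system: str

# Example record; date, time, and camera system are placeholder values.
sample = OceanDarkSample(
    filename="sample_001.png",
    capture_date="2017-06-14",
    capture_time="03:25:10",
    location="Barkley Canyon/Axis",
    latitude=48.31672,
    longitude=-126.05065,
    depth_m=980.0,
    camera_system="(placeholder)",
)
print(sample.location, sample.depth_m)
```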
Ocean Networks Canada (ONC) offers digital platforms for viewing and downloading historic and live data from its large catalogue of sensors. For the OceanDark dataset, 183 low-lighting underwater images captured using artificial lighting were selected from video footage of ONC's cameras located in the Northeast Pacific Ocean: the original videos were downloaded, and specific frames were then selected. Images in the dataset portray the following scenarios:
- Low-lighting underwater images with artificial lighting sources: deep-ocean sites usually contain dark regions that hide valuable information in the images.
- Images with meaningful structures: all samples in the dataset contain large objects, either biological or artificial, that suffer from sub-optimal illumination: skates, crabs, fish, urchins, scientific apparatus, among others.
Location and depth
OceanDark's samples are excerpts of footage recorded between 2014 and 2019, and cover four locations with varying depths in the Northeast Pacific Ocean:
- Northeast Pacific Ocean / Barkley Canyon / Axis: 48.31672, -126.05065 (depth: 980 m).
- Northeast Pacific Ocean / Barkley Canyon / Upper Slope South: 48.42704, -126.17445 (depth: 394 m).
- Northeast Pacific Ocean / Barkley Canyon / Upper Slope: 48.42703, -126.17472 (depth: 389 m).
- Northeast Pacific Ocean / Barkley Canyon / MidEast: 48.31499, -126.0585 (depth: 890 m).
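The four sites above can be kept in a small lookup table for filtering or grouping samples by capture location. The dictionary below is a sketch using the coordinates and depths from the list; the structure itself is an assumption, not part of the dataset's distribution.

```python
# OceanDark capture sites: name -> (latitude, longitude, depth in metres).
# Values are taken from the dataset description; the dict layout is
# illustrative.
SITES = {
    "Barkley Canyon/Axis": (48.31672, -126.05065, 980),
    "Barkley Canyon/Upper Slope South": (48.42704, -126.17445, 394),
    "Barkley Canyon/Upper Slope": (48.42703, -126.17472, 389),
    "Barkley Canyon/MidEast": (48.31499, -126.0585, 890),
}

# Example query: the deepest of the four sites.
deepest = max(SITES, key=lambda name: SITES[name][2])
print(deepest)  # Barkley Canyon/Axis
```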
Different species boast a variety of colors, shapes, and other distinct features that can help evaluate the enhancement of the underwater images (e.g., measuring the increase in the number of edges, or detecting specific shapes before and after processing). Some examples of species present in the dataset are: crabs and hermit crabs, anemones, stingrays, starfish, shrimp, and a variety of fish.
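The edge-count evaluation idea mentioned above can be sketched with a simple gradient-magnitude threshold. This is a crude illustrative proxy, not the evaluation procedure from [1,2]; the threshold value is an assumed tuning parameter.

```python
import numpy as np

def edge_pixel_count(img: np.ndarray, threshold: float = 0.1) -> int:
    """Count pixels whose gradient magnitude exceeds `threshold`.

    A rough proxy for the 'number of edges' in a grayscale image,
    usable to compare a frame before and after enhancement.
    """
    gy, gx = np.gradient(img.astype(float))
    magnitude = np.hypot(gx, gy)
    return int(np.count_nonzero(magnitude > threshold))

# Synthetic check: a dark frame with a faint square, and a naively
# brightened version of the same frame (for illustration only).
dark = np.zeros((64, 64))
dark[20:40, 20:40] = 0.05          # faint object, weak edges
enhanced = dark * 10.0             # crude contrast stretch

print(edge_pixel_count(dark), edge_pixel_count(enhanced))
```

On the synthetic frame, the brightened version crosses the gradient threshold at the object's border while the dark original does not, mimicking how an enhancement method can reveal edges hidden in dark regions.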
Since this dataset was created to support the development and evaluation of a contrast-guided low-lighting underwater enhancement system [1,2], all images are captured under sub-optimal lighting conditions. More specifically, the OceanDark samples are obtained using one or more artificial lighting sources. This setting leads to dark regions in the images, which are then enhanced using the method proposed in [1,2].
[1] Marques, T. P., Albu, A. B., "L^2UWE: A Framework for the Efficient Enhancement of Low-Light Underwater Images Using Local Contrast and Multi-Scale Fusion," IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshop - NTIRE: New Trends in Image Restoration and Enhancement, 2020.
[2] Marques, T. P., Albu, A. B., Hoeberechts, M., "A Contrast-Guided Approach for the Enhancement of Low-Lighting Underwater Images," MDPI Journal of Imaging, vol. 5, no. 10: 79, 2019.