Residual Graph Convolutional Networks for Passive Visual Monitoring in Aquatic Environments
An autonomous ocean vehicle (AOV) is employed to explore the ocean floor, performing tasks such as object detection and navigation through complex marine environments. However, underwater imaging is inherently challenging due to non-uniform illumination, blurring, color distortion, and haziness arising from the physical properties of the water medium. These degradations complicate reliable object detection and navigation, and conventional deep neural network (DNN) models have demonstrated limited effectiveness under such conditions. To address these limitations, this study proposes an end-to-end architecture for underwater moving object detection. Specifically, a novel graph residual convolutional network (Graph-RCN) is introduced to detect moving objects in visually complex underwater scenarios. The proposed Graph-RCN architecture preserves spatial and contextual relationships, thereby improving feature extraction and the representation of object details. Additionally, skip connections are incorporated within the graph convolutional network (GCN) to maintain spatio-contextual coherence across layers. The performance of the proposed model is evaluated on the Fish4Knowledge and Underwater Change Detection datasets, demonstrating its effectiveness for object detection in challenging underwater environments.
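To illustrate the idea of combining graph convolution with skip connections, the sketch below shows one possible residual graph convolution block in PyTorch. This is only a minimal illustration, not the authors' Graph-RCN implementation: the class name `ResidualGraphConvBlock`, the normalized adjacency input `adj_norm`, the feature dimensions, and the way CNN features would be mapped to graph nodes are all assumptions introduced here for clarity.

```python
# Minimal sketch (assumed, not the paper's code): one graph convolution
# layer with a skip (residual) connection intended to carry earlier
# spatio-contextual information forward through the network.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualGraphConvBlock(nn.Module):
    """Single graph convolution followed by an additive skip connection."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)
        # Project the input when dimensions differ so the skip path can be added.
        self.proj = nn.Identity() if in_dim == out_dim else nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, in_dim) node features, e.g. pooled CNN patch descriptors (assumed)
        # adj_norm: (num_nodes, num_nodes) normalized adjacency matrix
        h = adj_norm @ self.weight(x)    # aggregate features from neighboring nodes
        return F.relu(h + self.proj(x))  # skip connection preserves earlier context


if __name__ == "__main__":
    num_nodes, in_dim, out_dim = 16, 64, 64
    x = torch.randn(num_nodes, in_dim)
    # Toy adjacency: fully connected graph with self-loops, row-normalized.
    adj = torch.ones(num_nodes, num_nodes)
    adj_norm = adj / adj.sum(dim=1, keepdim=True)
    block = ResidualGraphConvBlock(in_dim, out_dim)
    print(block(x, adj_norm).shape)  # torch.Size([16, 64])
```

In this reading, the additive skip path plays the role described in the abstract: it lets node features from earlier layers pass through unchanged, so spatial and contextual detail is not lost as graph convolutions are stacked.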