Screen-space vortex-aware sound synthesis for ocean waves via particle-based and learning-based mapping
Jong-Hyun Kim*
(* : Inha University)
IEEE Access 2026
Abstract: This paper presents an audio–visual mapping framework for synthesizing ocean wave sounds that are temporally aligned with scene dynamics, derived from the motion of foam particles in particle-based fluid simulations. To efficiently handle large-scale particle data, the proposed method employs dynamic K-means clustering that adapts to variations in particle count, and performs example-based sound matching and volume control using cluster-level velocity- and density-based features. To overcome the limitations of scale-oriented mappings that fail to capture viewpoint changes and structural differences in flow patterns, we introduce a screen-space vortex representation. Specifically, the physical quantities of visible foam particles are projected onto a screen-space grid from the camera viewpoint, and a vortex (2D vorticity) field is computed from the projected 2D velocity field. This vortex feature is combined with velocity–density descriptors to improve the discriminability and temporal stability of data-driven sound matching. Furthermore, to preserve the interpretability of rule-based mappings while compensating for nonlinear and viewpoint-dependent relationships, we introduce a learning-based embedding as a solver extension that maps cluster- and vortex-level features into a latent space. Experimental results across various ocean wave scenarios (wave initiation, reflection, moving objects, spinning emitters, and propellers) demonstrate that the proposed approach produces differentiated auditory responses depending on the viewpoint, even under identical simulations, and significantly improves the stability of sound synthesis and audio–visual coherence compared to fixed clustering and scale-based mappings.
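To make the screen-space vortex representation in the abstract concrete, the sketch below illustrates one plausible realization: foam particles are projected to the camera's screen space, a 2D velocity field is splatted onto a grid, and a 2D vorticity field is taken from that projected velocity. This is a minimal illustration, not the paper's implementation; the function names, grid resolution (grid_w, grid_h), time step dt, view-projection convention, box splat, and frustum test are all assumptions for the sake of a runnable example.

```python
import numpy as np

def project_to_ndc(points, view_proj):
    """Project world-space points to normalized device coordinates (NDC).

    Assumes a row-vector-style (4, 4) view-projection matrix; other
    conventions would transpose or reorder this multiply.
    """
    homo = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    clip = homo @ view_proj.T
    return clip[:, :3] / clip[:, 3:4]

def screen_space_vorticity(positions, velocities, view_proj,
                           grid_w=128, grid_h=72, dt=1.0 / 60.0):
    """Splat visible foam-particle velocities onto a screen-space grid and
    compute a 2D vorticity field by finite differences (illustrative only)."""
    ndc_now = project_to_ndc(positions, view_proj)
    ndc_next = project_to_ndc(positions + dt * velocities, view_proj)

    # Keep only particles inside the view frustum (the "visible foam").
    visible = np.all(np.abs(ndc_now) <= 1.0, axis=1)
    ndc_now, ndc_next = ndc_now[visible], ndc_next[visible]

    # Screen-space 2D velocity from the NDC displacement over one time step.
    screen_vel = (ndc_next[:, :2] - ndc_now[:, :2]) / dt

    # Map NDC x/y in [-1, 1] to grid cell indices.
    gx = np.clip(((ndc_now[:, 0] * 0.5 + 0.5) * grid_w).astype(int), 0, grid_w - 1)
    gy = np.clip(((ndc_now[:, 1] * 0.5 + 0.5) * grid_h).astype(int), 0, grid_h - 1)

    # Accumulate and average per-cell velocity components (simple box splat).
    u = np.zeros((grid_h, grid_w))
    v = np.zeros((grid_h, grid_w))
    count = np.zeros((grid_h, grid_w))
    np.add.at(u, (gy, gx), screen_vel[:, 0])
    np.add.at(v, (gy, gx), screen_vel[:, 1])
    np.add.at(count, (gy, gx), 1.0)
    filled = count > 0
    u[filled] /= count[filled]
    v[filled] /= count[filled]

    # 2D vorticity of the projected velocity field: omega = dv/dx - du/dy.
    dv_dx = np.gradient(v, axis=1)
    du_dy = np.gradient(u, axis=0)
    return dv_dx - du_dy
```

The resulting per-frame vorticity grid could then be summarized (e.g., per-cluster or per-cell statistics) and combined with velocity–density descriptors as the abstract describes; how those features feed the sound-matching and learning-based embedding stages is detailed in the paper itself.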
[paper]