Mr. Marc Steinberg
Science of Autonomy Program Officer
Office of Naval Research (ONR), USA
"Maritime Simulators for Fundamental Research in the Science of Autonomy"
Assistant Professor
University of Michigan, USA
"Simulating Visual and Acoustic Data for Marine Robot Perception"
Underwater simulators support the development of robust underwater perception solutions, and significant recent work has gone into building new simulators and advancing the fidelity of existing ones. In this talk, I will present recent work on simulating visual and acoustic data for marine robotics applications. First, I will focus on our work leveraging simulated side-scan sonar data for learning-based shipwreck detection in the Great Lakes. Second, I will present OceanSim, a high-fidelity, GPU-accelerated underwater simulator that achieves real-time imaging-sonar rendering and photorealistic underwater image simulation. Results from real field expeditions and water-tank testing will be presented.
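To make the sonar-simulation idea concrete, here is a minimal numpy sketch of a common first-order echo model: Lambertian backscatter weighted by two-way absorption loss along the slant range. This is an illustrative approximation only, not OceanSim's actual renderer; the function name, the absorption coefficient, and the example geometry are all assumptions made for this sketch.

```python
import numpy as np

def sonar_return(normals, ray_dirs, ranges, alpha=0.02):
    """First-order sonar echo intensity for a set of ensonified points:
    Lambertian backscatter (cosine of the incidence angle) attenuated by
    two-way exponential absorption along the slant range.

    normals:  unit surface normals, shape (N, 3)
    ray_dirs: unit directions from the sensor to each surface point, shape (N, 3)
    ranges:   slant ranges in metres, shape (N,)
    alpha:    absorption coefficient in 1/m (illustrative value)
    """
    # Cosine of incidence angle; negate because the ray points into the surface.
    cos_inc = np.clip(-np.sum(normals * ray_dirs, axis=1), 0.0, 1.0)
    return cos_inc * np.exp(-2.0 * alpha * ranges)

# A facet facing the sensor echoes strongly; a grazing facet barely returns.
normals = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
ray_dirs = np.array([[0.0, 0.0, -1.0], [0.0, 0.0, -1.0]])
ranges = np.array([10.0, 10.0])
echo = sonar_return(normals, ray_dirs, ranges)
```

Real side-scan simulation layers much more on top of this (beam pattern, speckle, shadowing, seabed reflectivity), but the cosine-times-attenuation core is the usual starting point.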
Assistant Professor
Brigham Young University, USA
"HoloOcean 2.0 - Lowering the Barrier-of-Entry to Marine Robotics Development via Marine Autonomy Simulation"
The field of marine robotics has vast potential to address problems of utmost importance in the defense, scientific, cultural, and economic sectors. However, the barrier to entry for research in this field is high due to the significant cost and risk involved in building, testing, and deploying marine robotic systems. Simulation can address these challenges by enabling marine robotic testing and development without (or in preparation for) full-scale robotic system development and field testing. In the Field Robotics Systems Lab at BYU, we have developed the HoloOcean simulator as a tool to lower this barrier and enable a wider portion of the robotics research community to target problems in the marine domain. HoloOcean is built on Unreal Engine and supports simulation of multiple underwater and surface agents as well as the navigational and perceptual sensors common to the underwater domain. Features include simulation of various sonar modalities (forward-looking, side-scan, bathymetric/profiling, and single-beam), cameras, acoustic/optical communications, USBL, DVL, IMU, and GPS, as well as ground-truth pose information. We are happy to announce the release of HoloOcean 2.0, which upgrades HoloOcean to Unreal Engine 5.3 and adds multiple new features, including improved high-fidelity dynamics, a ROS 2 bridge, additional agents (among them several torpedo-like UUVs and the BlueROV2), and improved lighting and environment rendering. Further features are in development for release in the near future, including ray-tracing-based sonar simulation, LiDAR, an upgraded camera, class/instance label extraction, and automatic environment generation.
Post-Doc
Christian-Albrechts-University of Kiel, Germany
"Inverse Computational Ocean Optics"
Underwater imagery is governed by a unique image formation model whose complexity often makes automatic processing a challenging task. Knowing the basics is therefore greatly beneficial for practitioners defining downstream tasks on underwater data. This talk will detail the geometric and radiometric distortions that arise when cameras operate directly in water. The former emerge when a light ray passes interfaces between media with different optical densities, specifically air–glass–water, while the latter are caused by attenuation and scattering effects inside the medium itself. Furthermore, homogeneous illumination by the Sun, inhomogeneous artificial illumination, or a mixture of both contribute another dimension of complexity. Physically based rendering approaches are particularly well suited to tackling problems in underwater vision, due to the dualism between models originally devised in physical oceanography and the medium models typically employed in physically based ray tracing today. This enables us to capture underwater imagery while simultaneously measuring the optical properties of the water with an established sensor suite; we can then directly synthesize images with the same medium properties and verify our rendering systems. In this way we can provide reliable synthetic image data, focused on specific problems, with which to train, develop, and test algorithms. With the advent of massively parallel computation on GPUs, in conjunction with inverse physically based rendering methods, it has conversely become possible to infer the inherent optical properties of a water body directly from images using an analysis-by-synthesis approach. The same approach can be applied to flat ports, dome ports, and light sources. Being able to calibrate and simulate refraction, light, and the optical properties of the water directly enables a wide range of applications such as image restoration, shadow removal, and light removal on submerged 3D models.
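The radiometric side of the image formation model described above is often summarized, to first order, as attenuated scene radiance plus distance-dependent backscatter. Below is a minimal numpy sketch of that widely used approximation (I = J·e^(−βd) + B∞·(1 − e^(−βd))); it is an illustration of the general model, not the speaker's exact formulation, and the coefficient values are invented for the example.

```python
import numpy as np

def underwater_image(J, depth, beta, B_inf):
    """Toy underwater radiometric image formation:
        I = J * exp(-beta * d) + B_inf * (1 - exp(-beta * d))

    J:     clear-scene radiance, shape (H, W, 3)
    depth: per-pixel camera-to-scene distance in metres, shape (H, W)
    beta:  per-channel attenuation coefficient in 1/m, shape (3,)
    B_inf: veiling-light (background water) colour, shape (3,)
    """
    t = np.exp(-depth[..., None] * beta)   # per-channel transmission
    return J * t + B_inf * (1.0 - t)

# Red attenuates fastest in water, so distant pixels shift toward blue-green.
J = np.full((2, 2, 3), 0.8)                       # uniform grey scene
depth = np.array([[1.0, 1.0], [10.0, 10.0]])      # near row, far row
beta = np.array([0.6, 0.2, 0.1])                  # R, G, B attenuation (1/m)
B_inf = np.array([0.05, 0.35, 0.45])              # greenish-blue veiling light
I = underwater_image(J, depth, beta, B_inf)
```

Inverse rendering, as discussed in the talk, runs this direction in reverse: given observed images I and depth d, it optimizes for the medium parameters (here beta and B_inf) by analysis-by-synthesis.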
Associate Professor
Louisiana State University, USA
"Intelligent Underwater Robotics: The Need for Dynamic Models"
Underwater robots have gained research attention in recent years as they can expand our knowledge of the oceans and perform dangerous tasks in extreme environments. However, these robotic systems are expensive, require large infrastructures for deployment, and the commercially available systems are either teleoperated or run pre-programmed missions that cannot adapt to changes in the environment or the system. Achieving full autonomy and long-term deployments for marine robots requires addressing limitations in system modeling and predictive behaviors, scene understanding, control and planning, and energy management. This talk will focus on the steps taken to create intelligent marine robots capable of adapting to environmental changes and hardware limitations, by looking at predictive models and model-based control architectures for underwater vehicles and manipulators. The talk will discuss stochastic, physics-based and data-informed modeling techniques as well as purely data-driven techniques in the framework of digital twins. It will also demonstrate how these models can be used to ensure that marine vehicles complete autonomous survey missions and that underwater manipulators interact optimally with the environment.
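To illustrate what a predictive dynamic model buys you, here is a minimal sketch of the simplest case: a 1-DOF surge model with linear and quadratic drag, forward-simulated with explicit Euler integration. The model structure follows standard marine-vehicle dynamics (mass, drag, thrust), but all parameter values are illustrative assumptions, not identified from any real vehicle, and the function name is invented for this sketch.

```python
import numpy as np

def predict_surge(v0, thrust, dt, steps, m=20.0, d_lin=5.0, d_quad=10.0):
    """Forward-simulate a 1-DOF surge (forward-speed) model
        m * dv/dt = tau - d_lin * v - d_quad * v * |v|
    with explicit Euler integration.

    v0:     initial surge speed (m/s)
    thrust: constant thrust tau (N)
    dt:     integration step (s); steps: number of steps
    m, d_lin, d_quad: illustrative mass and drag parameters
    """
    v = v0
    traj = [v]
    for _ in range(steps):
        dv = (thrust - d_lin * v - d_quad * v * abs(v)) / m
        v = v + dt * dv
        traj.append(v)
    return np.array(traj)

# Under constant thrust the vehicle approaches the steady-state speed where
# drag balances thrust: tau = d_lin*v + d_quad*v^2. With tau = 15 N and the
# parameters above, that speed is exactly 1.0 m/s.
traj = predict_surge(v0=0.0, thrust=15.0, dt=0.05, steps=400)
```

A model-based controller or digital twin uses exactly this kind of rollout, with far richer coupled dynamics and uncertainty, to predict how the vehicle will respond before committing to an action.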
Prof. Eleni Kelasidi
Full Professor
NTNU and SINTEF, Norway
"Robust Field Autonomy"
Underwater robotic systems have been used and demonstrated in diverse industrial settings, and recent research has targeted the level of autonomy required for robotic operations in fish farms. Manually operated ROVs have been used for several monitoring operations, such as inspection of nets and mooring lines, as well as monitoring of water quality, the cage environment, and the fish population. This talk on Robust Field Autonomy will provide an overview of current research and innovative solutions for robust, safe, and efficient autonomous inspection, maintenance, and repair (IMR) operations in fish farms, aiming to reduce costs and risks, increase objectivity and production, and contribute to better fish welfare. Relevant R&D project results will be presented, demonstrating how the adoption of robotic solutions with an increased level of autonomy can extend the weather window and support the industry toward increased production, objectivity, and efficiency. The talk will cover areas such as the design of aquaculture-dedicated robotic systems, realistic simulation case studies, vehicle interaction with the environment and fish, collision-free motion planning methods and intelligent control approaches able to deal with dynamically changing environments, sensors and tools, underwater localization and communication, and remote operations.