Recent advances in 3D shape and motion measurement and understanding in computer vision play an important role in emerging real-world applications such as person identification from surveillance videos and autonomous driving. In these successful application scenarios, however, the capture targets are assumed to be opaque objects in the air. Measurement techniques for transparent objects involving refraction, transmission, scattering, etc. remain an open problem in computer vision, even though they have a wide variety of applications in bioinformatics, biology, fishery, and aquaculture, since objects in water or in microscopic environments are in general semitransparent.
   
Towards realizing 3D shape and motion capture of such semitransparent objects in water, the Image Processing Lab in the Graduate School of Informatics, Kyoto University, studies the "aqua vision" technique and organizes this workshop to exchange ideas with researchers not only from computer vision but also from other areas such as biology.

Dates

  • Sep 26th and 27th, 2016

Venue

Registration

  • Link (free, on-site registration is also available)

Program

Monday 26th: Measurement, Shape and Motion Capture

13:30 Shohei Nobuhara (Kyoto U) Aqua Vision: Image-based 3D Shape and Motion Capture in Water
14:00 Jules Jaffe (UCSD) Underwater Microscopy: From the Lab to the Sea
14:40 Zachary Murez (UCSD) Photometric Stereo with Fluorescence and in Scattering Media
15:20 (break)
15:35 Hiroshi Kawasaki (Kagoshima Univ.) Challenges on Underwater Active One-shot Scan with Static Grid Pattern
16:15 Kei Terayama (U of Tokyo) Dual-scale Fish Tracking in a Large School for Collective Behavior Analysis
16:55 Rodrigo Verschae (Kyoto U) Towards Real-Time 3D Fish Detection and Tracking

Tuesday 27th: Recognition, Analysis, and Real World Applications

10:00 Toshihiro Maki (U of Tokyo) Autonomous Underwater Platform Systems
10:40 David Kriegman (UCSD) Automated Annotation of Benthic Images of Coral Reefs: From Fluorescence to Deep Networks
11:20 Hitoshi Habe (Kindai U) Computer Vision for Fish Farm Monitoring: A Case Study at Kindai Tuna Farm
12:00 (break)
13:15 Yuzuru Ikeda (Ryukyu U) Brainy behavior of cephalopods: toward future study with magic wand
13:55 Hiroshi Hosokawa (Kyoto U) Visual world of aquatic animals
14:35 Hiroaki Kawashima (Kyoto U) Interaction Analysis of Fish Group Using Visual Stimuli: Toward Fish-group Control

Invited Talks

Prof. David Kriegman (UCSD) 『Automated Annotation of Benthic Images of Coral Reefs: From Fluorescence to Deep Networks』

Globally, coral reefs are in rapid decline, and the state of the reef (coral cover, bleaching, growth, etc.) is most commonly measured through photographic surveys. While it has become cheaper and easier to acquire digital images using towed sleds, remotely operated vehicles, and autonomous underwater vehicles, image analysis has become a time-consuming bottleneck. Manual analysis using tools like Coral Point Count does not scale, and the UCSD Computer Vision Coral Ecology effort has aimed to address this.

This talk will summarize our methods for automatically annotating images of coral reefs. Our initial fully-automated methods, based on conventional computer vision techniques (color and texture features, bag of words, multiscale pooling, and support vector machines), yielded rapid, unbiased cover estimates but with increased variance. Semi-automated annotation, where 50% of the annotation decisions were performed automatically, yielded cover estimate errors comparable to those of the human experts. This approach has been deployed on CoralNet (coralnet.ucsd.edu), and nearly 250,000 images have been uploaded. To increase accuracy further, we have taken two approaches. First, we looked to find stronger signals for classifying corals. We built a camera for wide field fluorescence imaging (Fluoris) and found that fluorescence images, when coupled with RGB reflectance images, decreased classification error rates by 22%. Second, we have developed methods using deep neural networks that leverage the wealth of available expert annotated data to significantly increase accuracy and reduce human effort.
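As a rough, hedged illustration of the conventional pipeline mentioned above (color and texture features fed to a support vector machine), the following Python sketch classifies image patches with simple color histograms and an SVM. The data, feature choice, and labels are synthetic placeholders; this is not the CoralNet implementation.

    # Minimal sketch (not the CoralNet implementation): classify image patches into
    # two synthetic "benthic categories" using simple color-histogram features and
    # an SVM, in the spirit of the conventional pipeline described above.
    # Assumes NumPy and scikit-learn; data, features, and labels are illustrative.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    def color_histogram(patch, bins=8):
        """Concatenate per-channel intensity histograms of an RGB patch (H, W, 3)."""
        feats = [np.histogram(patch[..., c], bins=bins, range=(0.0, 1.0))[0]
                 for c in range(3)]
        return np.concatenate(feats).astype(float)

    rng = np.random.default_rng(0)
    class0 = rng.uniform(0.0, 0.6, size=(100, 32, 32, 3))  # darker synthetic patches
    class1 = rng.uniform(0.4, 1.0, size=(100, 32, 32, 3))  # brighter synthetic patches
    patches = np.concatenate([class0, class1])
    labels = np.array([0] * 100 + [1] * 100)

    X = np.stack([color_histogram(p) for p in patches])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

    clf = SVC(kernel="rbf").fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))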

Prof. Jules Jaffe (UCSD) 『Underwater Microscopy: From the Lab to the Sea』

In understanding the ecology of the planet, and the oceanic processes that drive it, it is clear that some of the smallest processes play a major role. As one example, half of the oxygen that we breathe is created by oceanic cyanobacteria with dimensions smaller than a micrometer. Such processes are also difficult, if not impossible, to study in the laboratory, as it has proven nearly impossible to keep the vast majority of microbes alive. Our own research has therefore focused on the development of microscopes that can be used in marine environments to observe a multitude of environmental and organismal processes that occur at these scales and are, clearly, best observed in their native environment. Over the last decade we have developed a number of microscopes that have (1) been anchored to a pier, (2) been placed on an underwater profiling mooring, (3) been attached to the conical tip of a towed plankton net, and (4) are portable enough for a diver to use. The resulting image libraries consist of both video and still-frame images of a vast number of organisms that often break upon collection and hence would otherwise be observed in an altered state. Presently, our unique image library is being processed to obtain classification and, eventually, ecology.

Mr. Zachary Murez (UCSD) 『Photometric Stereo with Fluorescence and in Scattering Media』

Photometric stereo is widely used for 3D reconstruction. However, it relies on many simplifying assumptions, which limit its applicability. Here we present our work on extending photometric stereo to cases where the surface is not Lambertian but fluoresces, and where it is immersed in a scattering medium. First, we leverage the fact that when a fluorescent material is illuminated with short-wavelength light, it re-emits light at a longer wavelength isotropically, in a manner similar to how a Lambertian surface reflects light. This hitherto neglected property opens the door to using fluorescence to reconstruct 3D shape with the same techniques as for Lambertian surfaces, even when the surface’s reflectance is highly non-Lambertian. Second, when imaging in a scattering medium, such as fog or turbid water, light is scattered, significantly affecting image formation and degrading the quality of photometric reconstructions. Here we make three contributions to address the key modes of light propagation, under the common single-scattering assumption for dilute media. First, we show through simulations that single-scattered light from a source can be approximated by a point light source with a single direction. This alleviates the need to handle light source blur explicitly. Next, we model the blur due to scattering of light from the object. We measure the object point-spread function and introduce a simple deconvolution method. Finally, we show how imaging fluorescence emission, where available, eliminates the backscatter component and increases the signal-to-noise ratio. Experimental results in a water tank, with different concentrations of scattering media added, show that deconvolution produces higher-quality 3D reconstructions than previous techniques, and that, when combined with fluorescence, it can produce results similar to those in clear water even for highly turbid media.
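For concreteness, the Lambertian model that fluorescence makes applicable works as follows: a pixel observed under k distant lights with known unit directions stacked in a matrix L (k x 3) and measured intensities I satisfies I = L (rho n) for albedo rho and unit normal n, so rho n can be recovered per pixel by least squares. The sketch below demonstrates only this textbook baseline on synthetic data (light directions and ground truth are made up); the scattering and deconvolution extensions described above are not included.

    # Minimal sketch of classical Lambertian photometric stereo for one pixel.
    # Synthetic example only; not the authors' scattering-media pipeline.
    import numpy as np

    # Known light directions (rows), normalized to unit length.
    L = np.array([[0.0, 0.0, 1.0],
                  [1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0]], dtype=float)
    L /= np.linalg.norm(L, axis=1, keepdims=True)

    # Synthetic ground truth for a single pixel.
    n_true = np.array([0.2, 0.3, 0.93])
    n_true /= np.linalg.norm(n_true)
    rho_true = 0.8

    # Lambertian image formation (no shadows, no scattering): I = rho * L @ n.
    I = rho_true * (L @ n_true)

    # Least-squares recovery of g = rho * n, then split into albedo and normal.
    g, *_ = np.linalg.lstsq(L, I, rcond=None)
    rho = np.linalg.norm(g)
    n = g / rho
    print("recovered albedo:", rho)
    print("recovered normal:", n)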

Prof. Hiroshi Kawasaki (Kagoshima Univ.) 『Challenges on Underwater Active One-shot Scan with Static Grid Pattern』

Underwater 3D shape scanning systems are becoming increasingly important for various reasons, such as mapping submarine topography, motion capture of swimming humans or fish, and 3D sensing for navigating autonomous underwater vehicles (AUVs). Structured light systems (SLS) for active 3D scanning are widely used in the air, and applying them to underwater environments is also promising. When an SLS is used in air, the stereo correspondence problem can be efficiently solved by epipolar geometry, thanks to the co-planarity of a 3D point and its corresponding 2D points on the camera/projector planes. However, in underwater environments, the camera and projector are usually set in special housings, and refraction occurs at the water/glass and glass/air interfaces, invalidating the epipolar constraint and strongly aggravating the correspondence search. In this presentation, we show an efficient technique to calibrate underwater SLS systems as well as a robust 3D shape acquisition technique to solve this problem. To avoid computational complexity, we approximate the system with a central projection model. Although such an approximation produces inevitable errors, they are corrected by the grid-based SLS technique followed by a bundle adjustment that retrieves the fine 3D shape. We also show actual scanning results of fish at the Kagoshima aquarium using an underwater SLS prototype consisting of a custom-made diffractive optical element (DOE) laser and underwater housings.
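The refraction effect that invalidates the standard epipolar constraint can be seen by tracing a single camera ray through a flat housing port with Snell's law: the ray bends at the air/glass and glass/water interfaces, so back-projected rays no longer share a single center of projection. The sketch below only illustrates this effect under nominal refractive indices; it is not the calibration or bundle-adjustment method of the talk.

    # Illustrative sketch: Snell's law refraction of one camera ray through a flat
    # air/glass/water housing port. Refractive indices are nominal assumptions.
    import numpy as np

    def refract(d, n, eta1, eta2):
        """Refract unit direction d at a surface with unit normal n
        (pointing toward the incoming ray), going from index eta1 to eta2."""
        d = d / np.linalg.norm(d)
        cos_i = -np.dot(n, d)
        r = eta1 / eta2
        k = 1.0 - r**2 * (1.0 - cos_i**2)
        if k < 0:
            return None                      # total internal reflection
        return r * d + (r * cos_i - np.sqrt(k)) * n

    ray = np.array([0.3, 0.0, 1.0])          # camera ray in air (z = optical axis)
    normal = np.array([0.0, 0.0, -1.0])      # flat port normal, facing the camera
    in_glass = refract(ray, normal, 1.0, 1.5)
    in_water = refract(in_glass, normal, 1.5, 1.33)
    print("direction in air  :", ray / np.linalg.norm(ray))
    print("direction in water:", in_water)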

Prof. Yuzuru Ikeda (Ryukyu Univ.) 『Brainy behavior of cephalopods: toward future study with magic wand』

Unlike other mollusks, the coleoid cephalopods (squids, cuttlefish, and octopuses) possess sophisticated lens eyes similar to our own and highly developed nervous systems with huge brains, equivalent in size to some vertebrate brains. They exhibit highly intelligent and complicated behavior, including impressive memory, learning, and body patterning. Why cephalopods have developed such huge brains and cognitive abilities is still an open question. To answer this question, we have investigated the recognition abilities and social interactions of cephalopods inhabiting the tropical waters of the Ryukyu Archipelago, Japan. These behavioral experiments cover, for example, aspects of social recognition in oval squid, including specific reactions to a mirror reflection, school-caste-like behavior, and aspects of group dynamics. Through these studies, we have encountered many challenges in recording and analyzing cephalopod behavior in captivity and in nature. Because cephalopods exhibit extraordinarily unique behavior, such as sudden color changes of their whole appearance, tracking systems designed for common experimental animals are inefficient at following cephalopod behavior. In this talk, I will introduce this series of behavioral experiments on cephalopods and also discuss the visual recording systems we will need in the future to understand these attractive aquatic animals.

Prof. Toshihiro Maki (Univ. of Tokyo) 『Autonomous Underwater Platform Systems』

Autonomous underwater platform systems will realize wide-area, high-accuracy, and long-term observation through the collaboration of multiple autonomous agents such as autonomous underwater vehicles (AUVs) and seafloor stations. We have developed a series of autonomous underwater platforms for detailed seafloor observation. For example, the AUV Tri-TON 2 can perform highly accurate seafloor 3D imaging based on a seafloor station. The navigation algorithm and experimental results will be shown in the talk. The autonomous docking scheme for the AUV, based on the fusion of acoustic and visual sensing, will also be explained.

Dr. Hiroshi Hosokawa (Kyoto Univ.) 『Visual world of aquatic animals』

Dr. Hitoshi Habe (Kindai Univ.) 『Computer Vision for Fish Farm Monitoring: A Case Study at Kindai Tuna Farm』

The depletion of fishery resources has become a serious problem. Aquaculture would be a solution for protecting natural fishery resources and providing sufficient fishery products. Monitoring fish is important for optimizing fish-farming practices, such as feeding and environmental factors, in order to raise fish efficiently. Visual monitoring based on computer vision techniques has the potential to enhance the efficiency of fish farming, reduce costs, and save labor. This talk introduces our case study at a bluefin tuna farm.

Dr. Kei Terayama (Univ. of Tokyo) 『Dual-scale Fish Tracking in a Large School for Collective Behavior Analysis』

Measuring the collective behavior of fish is essential to understanding the mechanism of schooling behavior. However, the behaviors of fish schools, in particular large and dense ones, have not been sufficiently analyzed and understood empirically, due to the difficulty of measuring schools in underwater environments. To automatically observe and measure fish behaviors, we have developed two vision-based measurement methods: an occlusion-robust method for fish tracking, and a velocity-distribution measurement method that does not require individual tracking. In this talk, I will introduce the measurement methods and related topics in fish schooling.
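As a hedged illustration of the "velocity distribution without individual tracking" idea, the sketch below computes dense optical flow between two consecutive frames (OpenCV's Farneback method) and histograms the per-pixel speeds. The frame file names are hypothetical placeholders, and this is not necessarily the measurement method presented in the talk.

    # Hedged sketch: a velocity distribution of a scene (e.g. a fish school)
    # estimated without tracking individuals, via dense optical flow between two
    # frames. Frame file names are placeholders; not the authors' actual method.
    import cv2
    import numpy as np

    prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
    curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    speed = np.linalg.norm(flow, axis=2)      # pixels/frame at every pixel

    hist, edges = np.histogram(speed, bins=20)
    for count, lo, hi in zip(hist, edges[:-1], edges[1:]):
        print(f"{lo:5.1f}-{hi:5.1f} px/frame: {count} pixels")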


Contact