Multispectral Near-Infrared Imaging for Wetness Estimation
Y. Maeda, G. Tsukimura, D. Sugimura and T. Hamamoto, JOSA A, Optica, 2022. [paper]
Abstract
Estimation of the wetness of objects is an important technique for recognizing states in the real world. In this paper, we propose a non-contact method for estimating the wetness of objects using multispectral near-infrared (NIR) imaging. In contrast with a previous method that requires hyperspectral (110-band) images taken with fine spectral resolution (5 nm intervals) to estimate the degree of wetness, our method enables accurate wetness estimation using few-band NIR images with coarse spectral resolution (40 nm intervals). In general, water absorbs a substantial amount of incident light at wavelengths around 1000 nm and a smaller amount at wavelengths around 900 nm; that is, the light absorption coefficient of water varies markedly across the NIR spectral band. These differences in the light absorption coefficient of water across the NIR bands are exploited in the model we derive for the appearance of a wet object surface, facilitating accurate wetness estimation. The effectiveness of the proposed method is demonstrated experimentally.
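A minimal sketch of the band-difference idea, assuming two NIR bands near 900 nm and 1000 nm; the log-ratio index below is an illustrative stand-in for the wet-surface appearance model derived in the paper, not the authors' estimator.

```python
# Illustrative only: water absorbs more light near 1000 nm than near 900 nm,
# so a wet surface darkens more in the 1000 nm band. A log-ratio of the two
# bands turns that difference into a per-pixel wetness cue. The band choices
# and the log-ratio form are assumptions, not the paper's model.
import numpy as np

def wetness_index(nir_900: np.ndarray, nir_1000: np.ndarray,
                  eps: float = 1e-6) -> np.ndarray:
    """Per-pixel wetness cue from two NIR bands (higher = likely wetter)."""
    nir_900 = nir_900.astype(np.float64)
    nir_1000 = nir_1000.astype(np.float64)
    index = np.log((nir_900 + eps) / (nir_1000 + eps))
    index -= index.min()                       # normalize to [0, 1] for display
    return index / (index.max() + eps)

# Example with random stand-in data (replace with real multispectral frames).
rng = np.random.default_rng(0)
cue = wetness_index(rng.uniform(0.4, 0.6, (64, 64)),
                    rng.uniform(0.2, 0.6, (64, 64)))
```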
Multi-Frame RGB-NIR Imaging for Color Image Super-Resolution
T. Honda, D. Sugimura and T. Hamamoto, IEEE TCI, IEEE, 2019. [paper]
T. Honda, D. Sugimura and T. Hamamoto, IEEE ICIP, IEEE, 2018. [paper]
Abstract
We propose a method for super-resolution (SR) of low-resolution (LR) color images taken in low-light scenes. Our method is based on the multi-frame SR technique, which reconstructs a high-resolution color image by fusing multiple LR images taken at different camera positions so as to alter the alignment between scene details and pixels. Previous multi-frame SR methods have implicitly assumed that the LR images can be captured with little noise and blur. However, images taken in low-light scenes contain high levels of noise and motion blur, making it difficult to achieve high-quality SR. To overcome these problems, we utilize a single sensor that captures red, green, blue (RGB) and near-infrared (NIR) information. We capture multiple raw images of a low-light scene using our RGB/NIR single sensor together with an NIR flash unit. Because an NIR flash can provide sufficient illumination for low-light scenes, the structural scene information that contributes to an effective SR process can be captured with high quality. With the help of the NIR information, we jointly perform deblurring, denoising, and SR of the LR RGB/NIR raw images. Our experiments demonstrate the effectiveness of the proposed method using synthetic and real RGB/NIR raw images.
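A minimal sketch of the classic shift-and-add step that multi-frame SR builds on, assuming known integer offsets on a 2x-finer grid; the paper's joint RGB/NIR deblurring and denoising is not reproduced here.

```python
# Multiple LR frames taken at slightly different camera positions are placed
# onto a finer grid and averaged. Known offsets in {0, ..., scale-1} and a
# mean-value fallback for unobserved HR pixels are simplifying assumptions.
import numpy as np

def shift_and_add_sr(lr_frames, shifts, scale=2):
    """Fuse HxW LR frames onto a (scale*H)x(scale*W) grid.

    shifts: per-frame (dy, dx) offsets on the HR grid, each in {0, ..., scale-1}.
    """
    h, w = lr_frames[0].shape
    hr_sum = np.zeros((h * scale, w * scale))
    hr_cnt = np.zeros_like(hr_sum)
    for frame, (dy, dx) in zip(lr_frames, shifts):
        hr_sum[dy::scale, dx::scale] += frame
        hr_cnt[dy::scale, dx::scale] += 1.0
    observed = hr_cnt > 0
    hr = np.full_like(hr_sum, hr_sum[observed].sum() / hr_cnt[observed].sum())
    hr[observed] = hr_sum[observed] / hr_cnt[observed]
    return hr

# Example: four frames covering all 2x2 sub-pixel offsets of a random scene.
rng = np.random.default_rng(0)
scene = rng.random((128, 128))
frames = [scene[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)]
sr = shift_and_add_sr(frames, [(0, 0), (0, 1), (1, 0), (1, 1)])
```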
Exposure Bracketing Imaging for Underwater Image Enhancement
K. Nomura, D. Sugimura and T. Hamamoto, IEEE SPL, IEEE, 2018. [paper]
K. Nomura, D. Sugimura and T. Hamamoto, IEEE ICIP, IEEE, 2017. [paper]
project page [link]
Abstract
Absorption and scattering of light in an underwater scene strongly attenuate the red spectral components, causing heavy color distortions in the captured underwater images. In this letter, we propose a method for color-correcting underwater images that utilizes a framework of gray information estimation for color constancy. The key novelty of our method is the use of exposure-bracketing imaging, a technique that captures multiple images with different exposure times, for color correction. The long-exposure image is useful for acquiring sufficient red spectral information of the underwater scene. In contrast, the green and blue channels of the short-exposure image are suitable because they are attenuated far less than the red channel. By selecting the appropriate image (i.e., the least over- and under-exposed one) for each color channel from those taken with exposure-bracketing imaging, we fuse an image that contains sufficient spectral information of the underwater scene. The fused image allows us to extract reliable gray information of the scene; thus, effective color correction can be achieved. We perform color correction by linear regression of the gray information estimated from the fused image. Experiments using real underwater images demonstrate the effectiveness of our method.
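A minimal sketch of the two ideas in the abstract, assuming just two bracketed RGB exposures in [0, 1]: the red channel is taken from the long exposure, green and blue from the short exposure, and per-channel gains are fit on pixels that the fused image suggests are gray. The gray-pixel test and the gain-only regression are simplifications, not the letter's estimator.

```python
# Illustrative per-channel fusion and gray-information color correction.
# fuse_bracketed() and correct_colors() are hypothetical helpers for this sketch.
import numpy as np

def fuse_bracketed(long_exp: np.ndarray, short_exp: np.ndarray) -> np.ndarray:
    """Both inputs are HxWx3 float RGB in [0, 1]."""
    fused = short_exp.copy()          # green and blue from the short exposure
    fused[..., 0] = long_exp[..., 0]  # red from the long exposure
    return fused

def correct_colors(image: np.ndarray, fused: np.ndarray,
                   gray_tol: float = 0.05) -> np.ndarray:
    """Scale each channel so that gray-candidate pixels become achromatic."""
    spread = fused.max(axis=-1) - fused.min(axis=-1)
    gray = spread < gray_tol                       # candidate gray pixels
    if not gray.any():
        return image
    samples = fused[gray]                          # N x 3
    target = samples.mean(axis=-1, keepdims=True)  # desired gray level per pixel
    # Least-squares per-channel gain mapping each channel onto the gray level.
    gains = (samples * target).sum(0) / ((samples ** 2).sum(0) + 1e-6)
    return np.clip(image * gains, 0.0, 1.0)
```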
Dual-resolution Light Field Imaging: A Concept
D. Sugimura, S. Kobayashi and T. Hamamoto, AO, Optica, 2017. [paper]
S. Kobayashi, D. Sugimura and T. Hamamoto, IEEE ICIP, IEEE, 2016. [paper]
Abstract
Light field imaging is an emerging technique that enables various applications such as multi-viewpoint imaging, refocusing, and depth estimation. In this paper, we propose the concept of a dual-resolution light field imaging system for synthesizing super-resolved multi-viewpoint images. The key novelty of this study is the use of an organic photoelectric conversion film (OPCF), a device that converts the spectral information of incoming light within a certain wavelength range into an electrical signal (pixel value), for light field imaging. In our imaging system, we place an OPCF with green spectral sensitivity onto the micro-lens array of a conventional light field camera. The OPCF allows us to acquire the green spectral information only at the center viewpoint, but at the full resolution of the image sensor. In contrast, the optical system of the light field camera captures the remaining spectral information (red and blue) at multiple viewpoints (sub-aperture images), but at low resolution. Thus, our dual-resolution light field imaging system simultaneously captures information about the target scene at high spatial resolution as well as the directional information of the incoming light. By exploiting these advantages of our imaging system, the proposed method enables the synthesis of full-resolution multi-viewpoint images. We perform experiments using synthetic images, and the results demonstrate that our method outperforms previous methods.
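A minimal sketch of how the dual-resolution data could be combined for the center view, assuming a full-resolution green channel from the OPCF and low-resolution red/blue from the center sub-aperture image; the nearest-neighbor upsampling and green-detail transfer below are illustrative simplifications, not the paper's synthesis algorithm.

```python
# Illustrative only: upsample the LR red/blue channels and transfer the
# full-resolution green channel's high frequencies onto them.
import numpy as np

def box_blur(img: np.ndarray, k: int = 5) -> np.ndarray:
    """Separable box filter with edge padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, rows)

def fuse_center_view(green_full: np.ndarray, red_lr: np.ndarray,
                     blue_lr: np.ndarray, scale: int) -> np.ndarray:
    """green_full: HxW in [0,1]; red_lr/blue_lr: (H/scale)x(W/scale); returns HxWx3."""
    up = lambda x: np.kron(x, np.ones((scale, scale)))   # nearest-neighbor upsample
    detail = green_full - box_blur(green_full)           # high frequencies of green
    red = np.clip(up(red_lr) + detail, 0.0, 1.0)
    blue = np.clip(up(blue_lr) + detail, 0.0, 1.0)
    return np.stack([red, green_full, blue], axis=-1)
```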
Compressive Multi-spectral Imaging with Hierarchical Joint Sparsity Models
D. Sugimura, M. Tomabechi, T. Hosaka and T. Hamamoto, MVAP, Springer, 2016. [paper]
Abstract
We propose a novel multi-spectral imaging method based on compressive sensing (CS). In CS theory, enhancing signal sparsity is important for accurate signal reconstruction. The main novelty of the proposed method is the use of the self-correlations of an image, namely local intensity similarity and multi-spectral correlation, to enhance the sparsity of the multi-spectral image to be recovered. Local intensity similarity, which is based on the observation that spatial changes in intensity tend to be similar within local regions, contributes to sparsity enhancement. Furthermore, we exploit multi-spectral correlation to improve the sparsity of the multi-spectral components to be recovered. In order to simultaneously exploit these different types of characteristics (i.e., local intensity similarity and multi-spectral correlation) so that a signal can be represented sufficiently sparsely, we introduce a hierarchical joint sparsity model into the CS image recovery process. Our experiments show that the use of these self-correlations significantly improves the performance of multi-spectral image reconstruction.
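A minimal sketch of the measurement/recovery loop that CS-based imaging rests on, with the hierarchical joint sparsity prior simplified to a plain l1 penalty solved by ISTA; the random measurement matrix and toy sparse signal are stand-ins, not the paper's setup.

```python
# Solve min_x 0.5*||A x - y||^2 + lam*||x||_1 by iterative soft thresholding.
import numpy as np

def ista(A: np.ndarray, y: np.ndarray, lam: float = 0.05,
         n_iter: int = 200) -> np.ndarray:
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant of grad
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                  # gradient of the data term
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft threshold
    return x

# Toy example: recover a sparse signal from a few random measurements.
rng = np.random.default_rng(1)
x_true = np.zeros(128)
x_true[rng.choice(128, 8, replace=False)] = rng.normal(size=8)
A = rng.normal(size=(48, 128)) / np.sqrt(48)
x_hat = ista(A, A @ x_true)
```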
RGB/NIR Imaging with Different Exposure Times for Color Image Enhancement
D. Sugimura, T. Mikami, H. Yamashita and T. Hamamoto, IEEE TIP, IEEE, 2015. [paper]
T. Mikami, D. Sugimura and T. Hamamoto, IEEE ICIP, IEEE, 2014. [paper]
Abstract
We propose a novel method for synthesizing a noise- and blur-free color image sequence using near-infrared (NIR) images captured under extremely low light conditions. In extremely low light scenes, heavy noise and motion blur are simultaneously produced in the captured images. Our goal is to enhance the color image sequence of an extremely low light scene. In this paper, we improve the imaging system as well as the image synthesis scheme. We propose a novel imaging system that can simultaneously capture red, green, blue (RGB) and NIR images with different exposure times. The RGB image is taken with a long exposure time to acquire sufficient color information and to mitigate the effects of heavy noise. By contrast, the NIR images are captured with a short exposure time to measure the structure of the scene. Using different exposure times allows our imaging system to gather sufficient information for reconstructing a clear color image sequence. From the captured image pairs, we reconstruct a latent color image sequence using an adaptive smoothness condition based on gradient and color correlations. Our experiments using both synthetic images and real image sequences show that our method outperforms other state-of-the-art methods.
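A minimal sketch of the underlying fusion idea, assuming a blurry but colorful long-exposure RGB frame and a sharp short-exposure NIR frame in [0, 1]; the detail transfer below is an illustrative stand-in for the paper's reconstruction with an adaptive smoothness condition based on gradient and color correlations.

```python
# The long-exposure RGB frame supplies low-frequency color; the short-exposure
# NIR frame supplies sharp scene structure, injected into the RGB luminance.
import numpy as np

def box_blur(img: np.ndarray, k: int = 7) -> np.ndarray:
    """Separable box filter with edge padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, rows)

def fuse_rgb_nir(rgb_long: np.ndarray, nir_short: np.ndarray) -> np.ndarray:
    """rgb_long: HxWx3 in [0,1] (blurry/noisy color); nir_short: HxW (sharp)."""
    luma = rgb_long.mean(axis=-1)
    chroma = rgb_long - luma[..., None]           # keep the color offsets
    detail = nir_short - box_blur(nir_short)      # sharp NIR high frequencies
    new_luma = box_blur(luma) + detail            # smoothed base plus NIR detail
    return np.clip(new_luma[..., None] + chroma, 0.0, 1.0)
```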