Investigates image data fusion techniques and video analytics that combine image and track data from multiple sensors to achieve better accuracy and more specific inferences than a single sensor could provide alone. Our aim is to explore state-of-the-art image processing and video analytics algorithms for effective enhancement, detection, tracking, and video summarization, such as:
- enhancing video captured in low-light environments
Lowlight
Contents
1. Introduction
2. Main Algorithm and Principle
3. Demo
1. Introduction
- Over the last several decades, modern digital cameras have improved substantially in both resolution and sensitivity. Despite these improvements, the quality of videos captured in low-light conditions is still limited. First, low-light videos have poor dynamic range. To capture high-dynamic-range images, most consumer cameras rely on automatic exposure control, but longer exposure times result in motion blur. Second, image sequences captured in low-light conditions often have a very low signal-to-noise ratio (SNR). The level of the input signal can be raised by increasing the camera's sensitivity (ISO level); however, this also amplifies noise unless effective noise reduction steps are taken.
- Most approaches introduced so far consider only videos taken under moderately dark conditions, in which most objects and the background remain visually recognizable.
- The proposed method aims to provide a novel framework for enhancing video captured in extremely low-light environments.
2. Main Algorithm and Principle
- Overall Framework
- Temporal Noise Reduction
  - Motion-adaptive temporal filtering based on a Kalman-structured update (see the sketches after this list)
- Tone Mapping
  - Histogram adjustment with gamma correction
- Spatial Noise Reduction
  - NLM (Non-Local Means) based noise reduction
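The project's exact temporal filter is not spelled out here, so the following is only a minimal sketch of a motion-adaptive temporal filter with a per-pixel Kalman-style update: the blend weight (gain) is suppressed where the new frame differs strongly from the running estimate, so moving regions follow the observation instead of being averaged into ghosting trails. The function name and the `process_var`, `noise_var`, and `motion_thresh` parameters are illustrative assumptions.

```python
import numpy as np

def kalman_temporal_filter(frames, process_var=1e-4, noise_var=1e-2, motion_thresh=0.05):
    """Motion-adaptive temporal filtering with a per-pixel Kalman-style update.

    frames: iterable of 2-D float arrays in [0, 1] (grayscale frames).
    Returns the list of filtered frames. Parameter values are illustrative.
    """
    filtered = []
    estimate = None   # running state estimate (denoised frame)
    variance = None   # running per-pixel estimate variance
    for frame in frames:
        frame = frame.astype(np.float64)
        if estimate is None:
            estimate = frame.copy()
            variance = np.full_like(frame, noise_var)
        else:
            # Predict: carry the previous estimate forward, inflate its variance.
            variance = variance + process_var
            # Kalman gain: how much to trust the new observation per pixel.
            gain = variance / (variance + noise_var)
            # Motion adaptation: where the observation differs strongly from the
            # estimate, assume motion and follow the observation (gain = 1).
            motion = np.abs(frame - estimate) > motion_thresh
            gain = np.where(motion, 1.0, gain)
            # Update step.
            estimate = estimate + gain * (frame - estimate)
            variance = (1.0 - gain) * variance
        filtered.append(estimate.copy())
    return filtered
```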
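For the tone-mapping stage, a minimal sketch of histogram adjustment combined with gamma correction is shown below; the percentile-based stretch and the default `gamma` value are assumptions, not the project's exact settings.

```python
import numpy as np

def tone_map(frame, gamma=0.5, low_pct=1.0, high_pct=99.0):
    """Histogram adjustment followed by gamma correction.

    frame: 2-D float array in [0, 1].
    Percentile-based stretching expands the usable dynamic range, then a
    gamma < 1 brightens dark regions. Defaults are illustrative.
    """
    lo, hi = np.percentile(frame, [low_pct, high_pct])
    stretched = np.clip((frame - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    return np.power(stretched, gamma)
```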
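For spatial noise reduction, OpenCV provides a Non-Local Means implementation; the sketch below shows one way to apply it to a single 8-bit grayscale frame. The filter strength `h` and window sizes are illustrative defaults, and converting back to float keeps the stage compatible with the filters sketched above (the framework applies temporal filtering, tone mapping, and spatial denoising in sequence).

```python
import cv2
import numpy as np

def spatial_denoise(frame_float, h=10):
    """Non-Local Means spatial denoising on a single frame.

    frame_float: 2-D float array in [0, 1]; converted to 8-bit for OpenCV.
    h controls filter strength (larger removes more noise but may blur detail).
    """
    frame_u8 = np.clip(frame_float * 255.0, 0, 255).astype(np.uint8)
    denoised = cv2.fastNlMeansDenoising(frame_u8, None, h, 7, 21)
    return denoised.astype(np.float64) / 255.0
```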
3. Demo