Image Tone Mapping for HDR Images with Edge Preserving Filtering
A starlit night has an average luminance of around 10^-3 cd/m^2, while daylight scenes are close to 10^5 cd/m^2;
Humans can see detail in regions that vary by about 1:10^4 at any given adaptation level; beyond that range the eye is swamped by stray light (i.e., disability glare) and details are lost;
A well-designed CRT monitor's maximum display luminance is only around 100 cd/m^2;
A high-quality xenon film projector is still two orders of magnitude away from the optimal light level for human acuity and colour perception;
HDR images store each pixel in 4 bytes (32 bits);
Two main methods for generating HDR images:
Use physically based renderers, which can generate essentially all visible colours;
Take photographs of a scene at different exposure times, apply radiometric calibration (recover the camera response function), and combine the calibrated exposures.
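The second method can be sketched in a few lines of NumPy. This toy `merge_exposures` (a hypothetical helper, not a library call) assumes a linear camera response; a real pipeline would first recover the response curve during radiometric calibration (e.g., Debevec & Malik):

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge differently exposed LDR images (float arrays in [0, 1])
    into one HDR radiance map, assuming a linear camera response."""
    acc = np.zeros_like(images[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        # Hat weight: trust mid-range pixels, distrust under/over-exposed ones.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        acc += w * (img / t)   # this exposure's estimate of scene radiance
        wsum += w
    return acc / np.maximum(wsum, 1e-8)
```

Each exposure votes for the radiance estimate `pixel_value / exposure_time`, weighted by how well-exposed that pixel is.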
Fig. 1, Range of luminances in the real world compared to RGB
Fig. 2, Taking pictures at multiple exposure times and camera response function
Match one observer model applied to the scene to another observer model applied to the desired display image;
HDR images need tone mapping before their colours can be displayed on a monitor;
Four categories of tone mapping operators:
global (spatially uniform): Compress images by applying an identical (non-linear) curve to every pixel;
local (spatially varying): Achieve dynamic range reduction by modulating the non-linear curve for each pixel based on its local neighborhood;
frequency domain: Reduce the dynamic range of image components selectively, based on their spatial frequency;
gradient domain: Modify the derivative of an image to achieve dynamic range reduction. (Gradient domain HDR compression);
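As an illustration of the global category, one widely used spatially uniform curve is Reinhard's L/(1+L) operator; a minimal sketch (function name and default parameters are my own):

```python
import numpy as np

def reinhard_global(lum, key=0.18, eps=1e-6):
    """Global (spatially uniform) operator: every pixel goes through the
    same curve L/(1+L). `key` scales the image so that its log-average
    luminance maps to middle grey (Reinhard et al., 2002)."""
    log_avg = np.exp(np.mean(np.log(lum + eps)))  # geometric mean luminance
    scaled = key * lum / log_avg
    return scaled / (1.0 + scaled)                # compresses into [0, 1)
```

Because the curve is identical everywhere, the operator is fast and artifact-free, but it cannot preserve local contrast the way spatially varying operators can.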
Video tone mapping: temporal coherence;
Inverse tone mapping: from LDR to HDR;
TV (Total Variation) for edge preserving filtering;
Anisotropic diffusion (robust);
Bilateral filter;
WLS (Weighted Least Squares) filter;
EAW (Edge Avoiding Wavelet) filter;
Non-local means;
L0 Gradient minimization;
Domain Transform for edge preserving filtering;
Local Laplacian Filters for edge preserving processing;
BM3D (Block-Matching and 3-D collaborative filtering).
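Several of the filters above share one idea: average each pixel over its neighbors, but only over those with similar intensity, so averaging stops at strong edges. A brute-force bilateral filter makes this concrete (a toy sketch for small windows, not an optimized implementation):

```python
import numpy as np

def bilateral(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter on a 2-D float image: a Gaussian blur
    whose weights are also attenuated by intensity difference, so the
    filter smooths flat regions while preserving strong edges."""
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy : radius + dy + h,
                          radius + dx : radius + dx + w]
            w_s = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))      # spatial
            w_r = np.exp(-((shifted - img) ** 2) / (2 * sigma_r ** 2))   # range
            out += w_s * w_r * shifted
            norm += w_s * w_r
    return out / norm
```

On a sharp step edge the range weight of pixels across the edge is essentially zero, so the edge survives the filtering untouched.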
Layer decomposition: Detail = Input / Base; intrinsic (retinex) decomposition: I(x,y) = R(x,y) * L(x,y), with reflectance R and illumination L;
Smooth base layer (large scale variations);
Residual detail layer (small scale details);
Compress the illumination (base) layer;
Leave the reflectance (detail) layer unchanged.
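Putting the last points together, a minimal base/detail tone mapper in the spirit of Durand & Dorsey: smooth log-luminance into a base layer, compress only the base, and add the untouched detail back. A box filter stands in for the edge-preserving filter here to keep the sketch short; a real pipeline would smooth with the bilateral filter to avoid halos at strong edges. Function names and parameters are my own:

```python
import numpy as np

def box_smooth(x, radius=3):
    """Box filter via edge padding; a stand-in for an edge-preserving
    smoother such as the bilateral filter."""
    h, w = x.shape
    pad = np.pad(x, radius, mode='edge')
    out = np.zeros_like(x)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += pad[radius + dy : radius + dy + h,
                       radius + dx : radius + dx + w]
    return out / (2 * radius + 1) ** 2

def tone_map(lum, target_contrast=5.0, radius=3):
    """Base/detail tone mapping: compress the smooth base layer in log
    space, keep the detail layer intact, and recombine."""
    log_lum = np.log10(np.maximum(lum, 1e-8))
    base = box_smooth(log_lum, radius)               # large-scale variations
    detail = log_lum - base                          # small-scale details
    rng = max(base.max() - base.min(), 1e-8)
    scale = np.log10(target_contrast) / rng          # squeeze base range
    out_log = (base - base.max()) * scale + detail   # max base maps to 0
    return 10.0 ** out_log                           # relative display luminance
```

Only the base layer's range is reduced (here to about `target_contrast`), so local detail survives compression of a much larger overall dynamic range.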