Objects exhibit different colors under different light sources. The goal of color constancy algorithms is to remove this effect. This is typically done by first estimating the color of the light source and then using this illuminant estimate to transform the image as if it had been taken under a neutral white light source. This transformation does not scale the brightness of the image, as color constancy methods correct only for the chromaticity of the light source.
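As a minimal sketch of this two-step pipeline (not taken from any of the papers below), the following Python snippet pairs a simple Gray-World illuminant estimate with a von Kries-style diagonal correction; the function names, the [0, 1] linear-RGB assumption, and the gain normalization are illustrative choices.

    import numpy as np

    def gray_world_estimate(img):
        """Estimate the illuminant RGB as the mean color of the image (Gray-World assumption)."""
        e = img.reshape(-1, 3).mean(axis=0)
        return e / np.linalg.norm(e)            # only the chromaticity (direction) matters

    def correct_image(img, illuminant):
        """Apply a von Kries-style diagonal correction so the scene appears lit by neutral white."""
        gains = illuminant.mean() / illuminant  # per-channel gains; keeps overall brightness roughly unchanged
        corrected = img * gains                 # broadcast over an H x W x 3 array
        return np.clip(corrected, 0.0, 1.0)

    # Usage on a linear (demosaiced, black-level-subtracted) RGB image with values in [0, 1]:
    # img = load_linear_rgb("scene.png")        # hypothetical loader
    # out = correct_image(img, gray_world_estimate(img))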
We make publicly available a new large dataset for illumination estimation. This dataset, called INTEL-TAU, contains 7022 images in total, which makes it the largest available high-resolution dataset for illumination estimation research. The variety of scenes, captured using three different camera models (Canon 5DSR, Nikon D810, and Sony IMX135), makes the dataset suitable for evaluating the camera and scene invariance of different illumination estimation techniques. Privacy masking is applied to sensitive information, e.g., faces, so the dataset complies with the General Data Protection Regulation (GDPR). Furthermore, the effect of color shading on mobile images can be evaluated with INTEL-TAU, as we provide both corrected and uncorrected versions of the raw data. In the paper, we also provide an evaluation of several color constancy approaches.
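Illumination estimates on such datasets are usually scored with the recovery angular error between the estimated and ground-truth illuminant vectors. The sketch below shows this standard metric as commonly defined in the color constancy literature; it is not code from the INTEL-TAU paper.

    import numpy as np

    def angular_error(est, gt):
        """Recovery angular error (degrees) between estimated and ground-truth illuminant RGB vectors."""
        est, gt = np.asarray(est, dtype=float), np.asarray(gt, dtype=float)
        cos = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

    # Typical reporting aggregates the per-image errors into mean, median, trimean, and best/worst 25%.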
BoCF is a novel color constancy approach building upon Bag-of-Features pooling. It substantially reduces the number of parameters needed for illumination estimation. At the same time, it is consistent with the color constancy assumption that global spatial information is not relevant for illumination estimation and that local information (edges, etc.) is sufficient. Furthermore, BoCF is consistent with statistical color constancy approaches and can be interpreted as a learning-based generalization of many of them. To further improve the illumination estimation accuracy, we propose a novel attention mechanism for the BoCF model with two variants based on self-attention. The BoCF approach and its variants achieve results competitive with the state of the art while requiring far fewer parameters on three benchmark datasets: ColorChecker RECommended, INTEL-TUT version 2, and NUS8.
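To illustrate the core idea, here is a minimal NumPy sketch of Bag-of-Features pooling over local feature vectors: each local descriptor is softly assigned to a small codebook, and the assignments are averaged into a fixed-length histogram that discards global spatial layout. The codebook size, Gaussian soft assignment, and the suggestion of a downstream regressor are illustrative assumptions, not the exact BoCF architecture.

    import numpy as np

    def bof_pooling(features, codebook, sigma=1.0):
        """Soft-assign local features (N x D) to codewords (K x D) and average into a K-bin histogram."""
        d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # N x K squared distances
        logits = -d2 / (2.0 * sigma ** 2)
        a = np.exp(logits - logits.max(axis=1, keepdims=True))
        a /= a.sum(axis=1, keepdims=True)   # soft membership of each local feature to each codeword
        return a.mean(axis=0)               # order-invariant histogram: spatial layout is discarded

    # Illustrative use: pool local color/edge descriptors extracted from an image, then map the
    # histogram to an RGB illuminant estimate with any regressor; in BoCF the codebook and the
    # regressor are learnable and trained end-to-end.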
We propose a novel unsupervised color constancy method, called Probabilistic Color Constancy (PCC). We define a framework for estimating the illumination of a scene by weighting the contributions of different image regions using a graph-based representation of the image. To estimate the weight of each (super-)pixel, we rely on two assumptions: (super-)pixels with similar colors contribute similarly, and darker (super-)pixels contribute less. The resulting system has a single global optimum. The proposed method achieves performance competitive with the state of the art on the INTEL-TAU dataset.
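A toy sketch of this kind of region weighting (illustrative only, not the actual PCC formulation): weight each super-pixel by its brightness and by how similar its color is to the other super-pixels, as a crude stand-in for the graph-based propagation, then take the weighted mean color as the illuminant estimate.

    import numpy as np

    def weighted_illuminant(superpixel_means):
        """Toy weighting of super-pixel mean colors (K x 3, values in [0, 1])."""
        means = np.asarray(superpixel_means, dtype=float)
        brightness = means.sum(axis=1)                               # darker regions contribute less
        d2 = ((means[:, None, :] - means[None, :, :]) ** 2).sum(-1)
        similarity = np.exp(-d2).sum(axis=1)                         # regions with similar colors reinforce each other
        w = brightness * similarity
        w /= w.sum()
        e = (w[:, None] * means).sum(axis=0)                         # weighted mean color as illuminant estimate
        return e / np.linalg.norm(e)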
We study the importance of pre-training for the generalization capability in the color constancy problem. We propose two novel approaches based on convolutional autoencoders: an unsupervised pre-training algorithm using a fine-tuned encoder and a semi-supervised pre-training algorithm using a novel composite-loss function. This enables us to address the data scarcity problem and achieve results competitive with the state of the art while requiring far fewer parameters on the ColorChecker RECommended dataset. We further study the over-fitting phenomenon on the recently introduced version of the INTEL-TUT Dataset for Camera Invariant Color Constancy Research, which contains both field and non-field scenes acquired with three different camera models.
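The two-stage idea can be sketched as follows (a minimal PyTorch illustration with assumed layer sizes and a plain MSE loss, not the architecture or composite loss from the paper): first train a convolutional autoencoder to reconstruct images without labels, then keep the encoder, attach a small regression head, and fine-tune it to predict the illuminant RGB.

    import torch
    import torch.nn as nn

    # Stage 1: unsupervised pre-training of a small convolutional autoencoder (illustrative sizes).
    encoder = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
    decoder = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid())
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    images = torch.rand(8, 3, 64, 64)                    # stand-in for unlabeled training images
    for _ in range(10):
        recon = decoder(encoder(images))
        loss = nn.functional.mse_loss(recon, images)     # reconstruction loss, no labels needed
        opt.zero_grad(); loss.backward(); opt.step()

    # Stage 2: supervised fine-tuning of the pre-trained encoder for illuminant regression.
    head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 3))
    opt2 = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-4)
    illuminants = torch.rand(8, 3)                       # stand-in for ground-truth illuminant RGBs
    for _ in range(10):
        pred = head(encoder(images))
        loss = nn.functional.mse_loss(pred, illuminants) # the paper's composite loss is more involved
        opt2.zero_grad(); loss.backward(); opt2.step()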
The following list may be incomplete. The complete list of papers related to this topic can be found in the lists of journal papers and conference papers.
F. Laakom, N. Passalis, J. Raitoharju, J. Nikkanen, A. Tefas, A. Iosifidis and M. Gabbouj, “Bag of Color Features for Color Constancy”, IEEE Transactions on Image Processing, vol. 29, pp. 7722–7734, 2020
F. Laakom, J. Raitoharju, A. Iosifidis, U. Tuna, J. Nikkanen and M. Gabbouj, “Probabilistic Color Constancy”, IEEE International Conference on Image Processing, Abu Dhabi, United Arab Emirates, 2020
F. Laakom, J. Raitoharju, A. Iosifidis, J. Nikkanen and M. Gabbouj, "Color Constancy Convolutional Autoencoder", IEEE Symposium Series on Computational Intelligence, Xiamen, China, 2019 (arXiv)
F. Laakom, J. Raitoharju, A. Iosifidis, J. Nikkanen and M. Gabbouj, “INTEL-TAU: A Color Constancy Dataset”, arXiv:1910.10404, 2019