My research interests are:
Pattern recognition and machine learning for remote sensing and (ongoing) medical applications
Data mining of big data (especially large-scale documentary heritage)
Image processing and analysis
Multispectral/hyperspectral high-resolution image processing
Document image analysis
Graph theory (mainly for unlabeled data classification)
Image quality assessment
Some selected works:
R. Hedjam, M. Kalacska, M. Mignotte, H. Ziaei Nafchi and M. Cheriet
IEEE Transactions on Geoscience and Remote Sensing, Nov. 2016
Abstract: In this paper, we propose a new unsupervised change detection method designed to analyze multispectral remotely sensed image pairs. It is formulated as a segmentation problem that discriminates the changed class from the unchanged class in the difference image. The proposed method belongs to the committee-machine family of learning models, which utilizes an ensemble of classifiers (i.e., the set of segmentation results obtained by several thresholding methods) with a dynamic structure. More specifically, in order to obtain the final “change/no-change” output, the responses of the classifiers are combined by a mechanism that involves the input data (the difference image) within an iterative Bayesian-Markovian framework. The proposed method is evaluated on satellite imagery and compared to previously published results.
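To give a rough, high-level illustration of the committee-of-classifiers idea, a minimal Python sketch follows (assuming numpy and scikit-image are available). Several standard thresholding methods segment the spectral difference image, and their binary outputs are fused; a simple majority vote stands in for the iterative Bayesian-Markovian combination used in the paper, and all function and variable names are illustrative only.

    # Hedged sketch: an ensemble of thresholding "classifiers" applied to the
    # difference image, fused by majority vote (a placeholder for the paper's
    # iterative Bayesian-Markovian combination).
    import numpy as np
    from skimage.filters import threshold_otsu, threshold_yen, threshold_li

    def change_map(image_t1, image_t2):
        """Binary change map from two co-registered multispectral images (H, W, B)."""
        # Magnitude of the spectral difference vector at each pixel.
        diff = np.linalg.norm(image_t1.astype(float) - image_t2.astype(float), axis=-1)
        # Each thresholding method acts as one classifier of the committee.
        thresholds = [threshold_otsu(diff), threshold_yen(diff), threshold_li(diff)]
        votes = np.stack([diff > t for t in thresholds], axis=0)
        # Majority vote: simplified stand-in for the paper's combination mechanism.
        return votes.mean(axis=0) >= 0.5

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        t1 = rng.random((64, 64, 4))
        t2 = t1.copy()
        t2[20:40, 20:40] += 0.8           # synthetic "changed" region
        print(change_map(t1, t2).sum())   # approximate changed area in pixels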
R. Hedjam, H. Ziaei Nafchi, M. Kalacska, M. Cheriet
IEEE Transactions on Image Processing, May 2015
Abstract: This paper presents a novel pre-processing method for color-to-gray document image conversion. In contrast to the conventional methods designed for natural images, which aim to preserve the contrast between different classes in the converted gray image, the proposed conversion method reduces the contrast (i.e., intensity variance) within the text class as much as possible. It is based on learning a linear filter from a predefined dataset of text and background pixels that: i) when applied to background pixels, minimizes the output response; and ii) when applied to text pixels, maximizes the output response while minimizing the intensity variance within the text class. Our proposed method (called LC2G, for Learning-based Color-to-Gray) is designed to be used as a pre-processing step for document image binarization. A dataset of forty-six (46) historical document images is created and used to evaluate the proposed method subjectively and objectively. The results demonstrate its effectiveness and its strong impact on the performance of state-of-the-art binarization methods. Four (4) other web-based image datasets are created to evaluate the scalability of the proposed method.
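As a rough illustration of the LC2G idea, the sketch below learns a linear filter over the RGB channels from labeled text and background pixels. A regularized least-squares fit to {text -> 1, background -> 0} targets is used here as a stand-in for the paper's exact objective; the targets, the regularization, and the helper names are assumptions, not the published formulation.

    # Hedged sketch of a learned linear color-to-gray filter (not the paper's solver).
    import numpy as np

    def learn_color_to_gray(text_pixels, bg_pixels, reg=1e-3):
        """Learn a 3-vector of channel weights from labeled RGB pixels (N, 3)."""
        X = np.vstack([text_pixels, bg_pixels]).astype(float)
        y = np.concatenate([np.ones(len(text_pixels)),    # high response on text
                            np.zeros(len(bg_pixels))])    # low response on background
        # Ridge / regularized least squares: (X^T X + reg * I) w = X^T y
        return np.linalg.solve(X.T @ X + reg * np.eye(3), X.T @ y)

    def apply_filter(rgb_image, w):
        """Project an (H, W, 3) color image onto a single gray channel."""
        return rgb_image.astype(float) @ w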
M. Cheriet, R. F. Moghaddam, R. Hedjam
Elsevier, Computer Vision and Image Understanding 117 (3), 269-280, 2013
Abstract: Almost all binarization methods have a few parameters that require setting. However, they do not usually achieve their upper-bound performance unless the parameters are individually set and optimized for each input document image. In this work, a learning framework for the optimization of binarization methods is introduced, designed to determine the optimal parameter values for a document image. The framework, which works with any binarization method, has a standard structure and performs three main steps: (i) feature extraction, (ii) optimal parameter estimation, and (iii) learning the relationship between features and optimal parameters. First, an approach is proposed to generate numerical feature vectors from 2D data. The statistics of various maps are extracted and then combined nonlinearly into a final feature vector. The optimal behavior is learned using support vector regression (SVR). Although the framework works with any binarization method, two methods are considered as typical examples in this work: the grid-based Sauvola method and Lu's method, which placed first in the DIBCO'09 contest. The experiments are performed on the DIBCO'09 and H-DIBCO'10 datasets, and on combinations of these datasets, with promising results.
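The sketch below illustrates the framework's learning step with scikit-learn: per-image feature vectors are mapped to an optimal binarization parameter with support vector regression. The feature extractor shown is a toy stand-in based on simple intensity statistics, not the nonlinear combination of map statistics described in the paper, and the parameter being predicted (e.g., Sauvola's k) is only an example.

    # Hedged sketch: SVR mapping image features to an optimal binarization parameter.
    import numpy as np
    from sklearn.svm import SVR

    def toy_features(gray_image):
        """Illustrative features only: global intensity statistics of the page."""
        g = gray_image.astype(float).ravel()
        return np.array([g.mean(), g.std(), np.median(g), g.min(), g.max()])

    def fit_parameter_regressor(images, optimal_params):
        """Learn features -> optimal parameter from training pages and their tuned values."""
        X = np.vstack([toy_features(img) for img in images])
        model = SVR(kernel="rbf", C=1.0, epsilon=0.01)
        model.fit(X, np.asarray(optimal_params, dtype=float))
        return model

    def predict_parameter(model, gray_image):
        """Estimate the binarization parameter for an unseen document image."""
        return float(model.predict(toy_features(gray_image)[None, :])[0])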
Image quality assessment
H. Ziaei Nafchi, A. Shahkolaei, R. Hedjam, M. Cheriet
IEEE Access, Aug 2016
Abstract: Applications of perceptual image quality assessment (IQA) in image and video processing, such as image acquisition, image compression, image restoration, and multimedia communication, have led to the development of many IQA metrics. In this paper, a reliable full-reference IQA model is proposed that utilizes gradient similarity (GS), chromaticity similarity (CS), and deviation pooling (DP). Considering the shortcomings of the commonly used GS for modeling the human visual system (HVS), a new GS is proposed through a fusion technique that is more likely to follow the HVS. We propose an efficient and effective formulation to calculate the joint similarity map of two chromatic channels for the purpose of measuring color changes. In comparison with a commonly used formulation in the literature, the proposed CS map is shown to be more efficient and to provide comparable or better quality predictions. Motivated by a recent work that utilizes standard deviation pooling, a general formulation of DP is presented in this paper and used to compute a final score from the proposed GS and CS maps. The proposed formulation of DP also benefits from Minkowski pooling and a proposed power pooling. The experimental results on six datasets of natural images, a synthetic dataset, and a digitally retouched dataset show that the proposed index provides comparable or better quality predictions than the most recent and competing state-of-the-art IQA metrics in the literature, while being reliable and having low complexity.
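The sketch below illustrates the general gradient-similarity-plus-deviation-pooling recipe in Python (numpy and scipy assumed). Only a basic Sobel gradient-magnitude similarity and plain standard-deviation pooling are shown; the paper's fused GS, chromaticity similarity, and generalized Minkowski/power pooling are not reproduced, and the constant c is an illustrative choice.

    # Hedged sketch: gradient-similarity map pooled by its standard deviation.
    import numpy as np
    from scipy.ndimage import sobel

    def gradient_magnitude(img):
        gx = sobel(img.astype(float), axis=0)
        gy = sobel(img.astype(float), axis=1)
        return np.hypot(gx, gy)

    def deviation_pooled_score(reference, distorted, c=170.0):
        """Lower score = smaller perceived difference (std of the similarity map)."""
        g_ref, g_dst = gradient_magnitude(reference), gradient_magnitude(distorted)
        similarity = (2.0 * g_ref * g_dst + c) / (g_ref ** 2 + g_dst ** 2 + c)
        # Deviation pooling: the spread of the similarity map, rather than its
        # mean, summarizes the quality degradation.
        return float(similarity.std())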
Document image restoration
R. Hedjam, M. Cheriet
Elsevier, Pattern Recognition, Aug 2013
Abstract: Thousands of valuable historical documents stored on the shelves of national libraries throughout the world are waiting to be scanned in order to facilitate access to the information they contain. The first major problem faced is degradation, which renders the visual quality of the documents very poor and, in most cases, difficult to decipher. This work is part of our collaboration with the BAnQ (Bibliothèque et Archives nationales du Québec), which aims to propose a new approach to provide end users (historians, scholars, researchers, etc.) with an acceptable visualization of these images. To that end, we have adopted a multispectral imaging system capable of producing images under non-visible illumination, such as infrared light. In addition to the visible (color) images, the information provided by the infrared spectrum, as well as the physical properties of the ink used on these historical documents, is incorporated into a mathematical model that transforms the degraded image into a clean version suitable for visualization. Depending on the degree of degradation, cleaning can be achieved by image enhancement and restoration, whereby the degradation is isolated in the infrared spectrum and then eliminated in the visible spectrum. The final color image is then reconstructed from the enhanced visible spectra (red, green, and blue). The first experimental results are promising, and our aim, in collaboration with the BAnQ, is to make this documentary heritage available to the public and to build an intelligent engine for accessing the documents.
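As a very rough illustration of the visible/infrared cleaning idea, the sketch below flags pixels that are dark in the visible bands yet clean (bright) in the infrared band as degradation, and replaces them with an estimated paper background. This simple masking is only a stand-in for the mathematical restoration model developed in the paper; the thresholds and function names are assumptions.

    # Hedged sketch: degradation visible in color but absent in infrared is masked out.
    import numpy as np

    def clean_visible(rgb, infrared, dark_thr=0.5, ir_clean_thr=0.7):
        """rgb: (H, W, 3) in [0, 1]; infrared: (H, W) in [0, 1]."""
        luminance = rgb.mean(axis=-1)
        # Degradation assumption: dark under visible light, yet bright (clean) in IR.
        degradation = (luminance < dark_thr) & (infrared > ir_clean_thr)
        # Estimate the paper background per channel from the non-degraded pixels.
        background = np.array([np.median(rgb[..., ch][~degradation]) for ch in range(3)])
        restored = rgb.copy()
        restored[degradation] = background
        return restored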