Whole-brain estimation of the haemodynamic response function (HRF) in functional magnetic resonance imaging (fMRI) is critical to gain insight into the global status of the neurovascular coupling of an individual in healthy or pathological conditions. Most existing approaches in the literature work on task-fMRI data and rely on the experimental paradigm as a surrogate of neural activity, hence remaining inoperative on resting-state fMRI (rs-fMRI) data. To cope with this issue, recent works have performed either a two-step analysis to detect large neural events and then characterize the HRF shape, or a joint estimation of both the neural and haemodynamic components in a univariate fashion. In this work, we express the neural activity signals as a combination of piecewise-constant temporal atoms associated with sparse spatial maps and introduce a haemodynamic parcellation of the brain featuring a temporally dilated version of a given HRF model in each parcel, with unknown dilation parameters. We formulate the joint estimation of the HRF shapes and spatio-temporal neural representations as a multivariate semi-blind deconvolution problem in a paradigm-free setting and introduce constraints inspired by the dictionary-learning literature to ease its identifiability. A fast alternating minimization algorithm, along with its efficient implementation, is proposed and validated on both synthetic and real rs-fMRI data at the subject level. To demonstrate its significance at the population level, we apply this new framework to the UK Biobank data set, first for the discrimination of haemodynamic territories between balanced groups (n = 24 individuals in each) of patients with a history of stroke and healthy controls, and second, for the analysis of the effect of normal aging on the neurovascular coupling. Overall, we statistically demonstrate that a pathology like stroke or a condition like normal brain aging induces longer haemodynamic delays in certain brain areas (e.g., Willis polygon, occipital, temporal and frontal cortices) and that this haemodynamic feature may predict the individual's age with an accuracy of 74% in a supervised classification task performed on n = 459 subjects.
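The deconvolution step at the heart of this model can be illustrated with a toy example. The sketch below assumes a simplified gamma-difference HRF with a fixed, known dilation parameter (the paper estimates the dilation jointly, which is skipped here) and recovers a piecewise-constant neural signal by penalizing the l1 norm of its increments, solved with FISTA. All function names and parameter values are ours for illustration, not the paper's.

```python
import numpy as np
from math import gamma as Gamma

def dilated_hrf(n_pts, delta=1.0):
    """Gamma-difference HRF sampled on an integer grid and temporally
    dilated by `delta` (a simplified stand-in for the paper's HRF model)."""
    t = np.arange(n_pts) / delta
    h = t**5 * np.exp(-t) / Gamma(6) - 0.35 * t**15 * np.exp(-t) / Gamma(16)
    return h / np.abs(h).max()

def deconv_piecewise_constant(y, h, lam=0.05, n_iter=400):
    """Recover a piecewise-constant neural signal z from y ~ h * z + noise.
    z is parameterized by its increments u (z = cumsum(u)), so an l1
    penalty on u promotes a block-shaped z; solved with FISTA."""
    T = len(y)
    H = np.zeros((T, T))
    for i, hi in enumerate(h):                 # Toeplitz convolution matrix
        H += hi * np.eye(T, k=-i)
    C = np.tril(np.ones((T, T)))               # cumulative-sum operator
    A = H @ C                                  # maps increments u to BOLD
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant
    u, w, t_k = np.zeros(T), np.zeros(T), 1.0
    for _ in range(n_iter):
        g = A.T @ (A @ w - y)
        u_new = w - g / L
        u_new = np.sign(u_new) * np.maximum(np.abs(u_new) - lam / L, 0.0)
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t_k**2))
        w = u_new + (t_k - 1.0) / t_next * (u_new - u)
        u, t_k = u_new, t_next
    return C @ u                               # estimated block signal z

# toy example: two activity blocks seen through a dilated HRF
rng = np.random.default_rng(0)
T = 120
z_true = np.zeros(T)
z_true[20:50], z_true[70:95] = 1.0, 0.6
h = dilated_hrf(25, delta=1.3)
y = np.convolve(h, z_true)[:T] + 0.01 * rng.standard_normal(T)
z_hat = deconv_piecewise_constant(y, h)
```

The increment parameterization is equivalent to a total-variation penalty on z, the standard choice for block signals.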
H. Cherkaoui, T. Moreau, A. Halimi, C. Leroy, and P. Ciuciu, "Multivariate semi-blind deconvolution of fMRI time series," NeuroImage, in press, 2021.
Standard methodologies for functional Magnetic Resonance Imaging (fMRI) data analysis decompose the observed Blood Oxygenation Level Dependent (BOLD) signals using a voxel-wise linear model and perform maximum likelihood estimation to get the parameters associated with the regressors. In task fMRI, the latter are usually defined from the experimental paradigm and some confounds, whereas in resting-state acquisitions, a seed-voxel time course may be used as predictor. Nowadays, most fMRI datasets offer resting-state acquisitions, requiring multivariate approaches (e.g., PCA, ICA) to extract meaningful information in a data-driven manner. Here, we propose a novel low-rank model of fMRI BOLD data but, instead of considering a dimension reduction in space as in ICA, our model relies on convolutional sparse coding between the hemodynamic system and a few temporal atoms which code for the neural activity-inducing signals. A rank-1 constraint is also associated with each temporal atom to spatially map its influence in the brain. Within a variational framework, the joint estimation of the neural signals and the associated spatial maps is formulated as a nonconvex optimization problem. A local minimizer is computed using an efficient alternating minimization algorithm. The proposed approach is first validated on simulations and then applied to task fMRI data for illustration purposes. Its comparison to a state-of-the-art approach suggests that our method is competitive regarding the uncovered neural fingerprints while offering a richer decomposition in time and space.
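A minimal single-atom sketch of this rank-1 convolutional model: the data matrix X (voxels × time) is approximated by u (h * z)^T, alternating a closed-form update of the spatial map u with ISTA steps on the sparse temporal signal z. The HRF, the regularization weight, and all names below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from math import gamma as Gamma

def hrf(n_pts):
    """Simple gamma-difference HRF (illustrative, not the paper's model)."""
    t = np.arange(n_pts, dtype=float)
    h = t**5 * np.exp(-t) / Gamma(6) - 0.35 * t**15 * np.exp(-t) / Gamma(16)
    return h / np.abs(h).max()

def conv_matrix(h, T):
    """Toeplitz matrix of the causal convolution by h on length-T signals."""
    H = np.zeros((T, T))
    for i, hi in enumerate(h):
        H += hi * np.eye(T, k=-i)
    return H

def rank1_csc(X, h, lam=0.01, n_outer=30, n_inner=20):
    """One-atom sketch of the low-rank model X ~ u (h * z)^T, where u is a
    spatial map (one weight per voxel) and z a sparse temporal signal.
    Alternates a closed-form update of u with ISTA steps on z."""
    P, T = X.shape
    H = conv_matrix(h, T)
    rng = np.random.default_rng(1)
    z = np.abs(rng.standard_normal(T))
    L = np.linalg.norm(H, 2) ** 2              # Lipschitz const. (||u|| = 1)
    for _ in range(n_outer):
        v = H @ z                              # temporal atom = HRF * z
        u = X @ v / max(v @ v, 1e-12)          # least-squares spatial map
        u /= max(np.linalg.norm(u), 1e-12)     # remove scale ambiguity
        for _ in range(n_inner):               # sparse coding of z, u fixed
            g = H.T @ (H @ z - X.T @ u)
            z = z - g / L
            z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return u, z

# toy data: one spatial map active over voxels 5..14, two neural events
rng = np.random.default_rng(2)
P, T = 30, 80
u_true = np.zeros(P)
u_true[5:15] = 1.0
u_true /= np.linalg.norm(u_true)
z_true = np.zeros(T)
z_true[10], z_true[40] = 1.0, 0.8
h = hrf(25)
X = np.outer(u_true, conv_matrix(h, T) @ z_true)
X += 0.01 * rng.standard_normal((P, T))
u_hat, z_hat = rank1_csc(X, h)
```

Normalizing u at every outer step resolves the scale ambiguity inherent to any bilinear factorization of this kind.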
H. Cherkaoui, T. Moreau, A. Halimi, and P. Ciuciu, "fMRI BOLD signal decomposition via a multivariate low-rank model," in Proc. European Signal Processing Conf. (EUSIPCO), A Coruña, Spain, 2019.
Mammography imaging for tumor detection
Multifractal analysis (MFA) provides a framework for the global characterization of image textures by describing the spatial fluctuations of their local regularity through the multifractal spectrum. Several works have demonstrated the benefits of MFA for the description of homogeneous textures in images. Nevertheless, natural images can be composed of several textures, each with its own multifractal properties. This paper introduces an unsupervised Bayesian multifractal segmentation method to model and segment multifractal textures by jointly estimating the multifractal parameters and labels of an image at the pixel level. To this end, a computationally and statistically efficient multifractal parameter estimation model for wavelet leaders is first developed, defining different multifractality parameters for different regions of an image. Then, a multiscale Potts Markov random field is introduced as a prior to model the inherent spatial and scale correlations (referred to as cross-scale correlations) between the labels of the wavelet leaders. A Gibbs sampling methodology is finally used to draw samples from the posterior distribution of the unknown model parameters. Numerical experiments are conducted on synthetic multifractal images to evaluate the performance of the proposed segmentation approach. The proposed method achieves superior performance compared to traditional unsupervised segmentation techniques as well as modern deep learning-based approaches, showing its effectiveness for multifractal image segmentation.
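To make the wavelet-leader machinery concrete, here is a hedged 1D sketch: Haar detail coefficients per dyadic scale, leaders as local suprema over finer scales, and the first two log-cumulants (c1, c2) obtained by regressing the mean and variance of log-leaders against log-scale. The paper's 2D Bayesian model, Potts prior, and Gibbs sampler are not reproduced; this only illustrates the leader-based estimation idea, with our own simplified normalizations.

```python
import numpy as np

def haar_details(x):
    """Absolute Haar detail coefficients, one array per dyadic scale."""
    coeffs, a = [], np.asarray(x, dtype=float)
    while len(a) >= 4:
        d = (a[0::2] - a[1::2]) / 2.0      # detail at the current scale
        a = (a[0::2] + a[1::2]) / 2.0      # approximation passed upward
        coeffs.append(np.abs(d))
    return coeffs

def wavelet_leaders(coeffs):
    """Leader at scale j, position k: sup of |coefficients| over the
    3-neighbourhood of k at scale j and over all finer scales beneath it."""
    leaders = []
    for j, cj in enumerate(coeffs):
        lead = np.zeros(len(cj))
        for k in range(len(cj)):
            sup = 0.0
            for jp in range(j + 1):         # finer-or-equal scales
                r = 2 ** (j - jp)
                lo = max((k - 1) * r, 0)
                hi = min((k + 2) * r, len(coeffs[jp]))
                sup = max(sup, coeffs[jp][lo:hi].max())
            lead[k] = sup
        leaders.append(lead)
    return leaders

def log_cumulants(x, j1=2, j2=6):
    """First two log-cumulants: regress mean (-> c1) and variance (-> c2)
    of log-leaders against log-scale j*ln(2)."""
    leaders = wavelet_leaders(haar_details(x))
    js = np.arange(j1, min(j2, len(leaders)) + 1)
    m1 = np.array([np.mean(np.log(leaders[j - 1] + 1e-12)) for j in js])
    m2 = np.array([np.var(np.log(leaders[j - 1] + 1e-12)) for j in js])
    c1 = np.polyfit(js * np.log(2.0), m1, 1)[0]
    c2 = np.polyfit(js * np.log(2.0), m2, 1)[0]
    return c1, c2

# a rough texture (white noise) vs a smoother one (random walk):
rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)
walk = np.cumsum(noise)
c1_noise, c2_noise = log_cumulants(noise)
c1_walk, c2_walk = log_cumulants(walk)
```

The smoother random walk yields a larger c1 (mean regularity) than white noise, which is the ordering any leader-based estimator should reproduce.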
K. M. Leon-Lopez, A. Halimi, J.-Y. Tourneret, and H. Wendt, "Bayesian Multifractal Image Segmentation," IEEE Trans. Image Processing, vol. 34, pp. 8500-8510, 2025.
Figure: (a) MR image, (b) US image, (c) fused image.
This paper studies a new fusion method designed for magnetic resonance (MR) and ultrasound (US) images, with a specific focus on endometriosis diagnosis. The proposed method is based on guided filtering, leveraging the advantages of this technique to enhance the quality of fused images. The fused image is a weighted average of base and detail images extracted from the MR and US images. The weights assigned to the US image account for the presence of speckle noise, a common challenge in US imaging, whereas the weights assigned to the MR image allow the contrast of the fused image to be enhanced. The effectiveness of the method is evaluated on synthetic and phantom data, showing promising results. The image provided by the proposed fusion method holds potential for enhancing visualization and aiding decision-making in endometriosis surgery, offering a valuable contribution to the field of medical image fusion.
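A schematic version of such a two-scale fusion can be sketched as follows, using He et al.'s guided filter built on an integral-image box filter. The base/detail split and the variance-based detail weights below (including the 0.5 factor down-weighting the US detail) are our own simplified stand-ins for the paper's speckle-aware weighting, not its actual formulas.

```python
import numpy as np

def box_filter(img, r):
    """Mean over a (2r+1)x(2r+1) window via an integral image, edge-padded."""
    p = np.pad(img, r, mode='edge')
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))         # zero row/col for inclusive sums
    k = 2 * r + 1
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def guided_filter(I, p, r=4, eps=1e-3):
    """Guided filter of input p with guide I (local linear model q = aI + b)."""
    mI, mp = box_filter(I, r), box_filter(p, r)
    a = (box_filter(I * p, r) - mI * mp) / (box_filter(I * I, r) - mI**2 + eps)
    b = mp - a * mI
    return box_filter(a, r) * I + box_filter(b, r)

def fuse(mr, us, r=4, eps=1e-3):
    """Two-scale fusion sketch: edge-preserving base layers are averaged,
    detail layers are blended with local-energy weights; the 0.5 factor
    crudely down-weights noisy US detail (hypothetical choice)."""
    base_mr = guided_filter(mr, mr, r, eps)
    base_us = guided_filter(us, us, r, eps)
    det_mr, det_us = mr - base_mr, us - base_us
    w_mr = box_filter(det_mr**2, r)
    w_us = 0.5 * box_filter(det_us**2, r)
    w = w_mr / (w_mr + w_us + 1e-12)
    return 0.5 * (base_mr + base_us) + w * det_mr + (1.0 - w) * det_us
```

Note that `fuse(x, x)` returns `x` (the detail weights sum to one), a quick sanity check for any two-scale decomposition-and-recombination scheme.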
Y. El Bennioui, A. Halimi, A. Basarab, and J.-Y. Tourneret, "Fusion of Magnetic Resonance and Ultrasound Images Using Guided Filtering: Application to Endometriosis Surgery," in Proc. European Signal Processing Conf. (EUSIPCO), Lyon, France, 2024.