My research at CNRS (LaBRI UMR 5800, University of Bordeaux) aims to redefine the methodological foundations of medical image analysis by developing next-generation deep learning (DL) frameworks and scalable platforms tailored to the specific constraints of clinical neuroimaging. While DL has transformed computer vision, its direct translation to medical imaging faces distinct challenges, including heterogeneous data, scarce annotations, and the need for anatomically consistent and clinically reliable outputs.
To address these limitations, I develop principled, domain-adapted DL methods that integrate physical, statistical, and anatomical priors into learning-based models. Going beyond black-box approaches, these algorithms explicitly account for the structure and variability of biomedical images, enabling robust, interpretable, and generalizable performance across large and diverse clinical datasets.
My contributions are structured around three main methodological axes:
Learning-based image enhancement and reconstruction: I develop DL methods for MRI denoising and resolution enhancement that recover high-fidelity anatomical details from degraded acquisitions. These approaches are designed to preserve subtle structural patterns while improving signal quality, thereby enabling more accurate downstream analysis in low-quality or accelerated imaging settings.
Segmentation and atlas-informed representation learning: A central contribution of my research is the development of DL-based segmentation methods for brain structures and lesions, combining convolutional architectures with atlas-based modeling. I explore hybrid strategies that fuse data-driven learning with explicit anatomical representations, leading to improved robustness, interpretability, and cross-dataset generalization.
Computer-aided diagnosis (CAD) from MRI: Building on segmentation and quantitative analysis, I have developed DL-based frameworks for computer-aided diagnosis that extract imaging biomarkers for neurodegenerative diseases, stroke, and other neurological conditions. These methods combine automated segmentation, atlas-informed representations, and advanced feature extraction to produce interpretable, clinically relevant outputs. By integrating DL with structured anatomical knowledge, these CAD tools support early detection, risk stratification, and quantitative disease monitoring, bridging methodological innovation with translational impact.
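The atlas-informed fusion strategy underlying the second axis can be illustrated with a minimal sketch. This is a hypothetical toy example, not the actual volBrain pipeline: per-voxel class probabilities from a network are combined with a registered probabilistic atlas via simple Bayesian fusion (posterior proportional to likelihood times prior).

```python
import numpy as np

def fuse_with_atlas(cnn_probs: np.ndarray, atlas_prior: np.ndarray,
                    eps: float = 1e-8) -> np.ndarray:
    """Bayesian fusion of network class probabilities with an atlas prior.

    cnn_probs   : (n_classes, ...) per-voxel likelihoods from a network.
    atlas_prior : (n_classes, ...) per-voxel prior probabilities from a
                  registered probabilistic atlas.
    Returns the normalized posterior, same shape as the inputs.
    """
    posterior = cnn_probs * atlas_prior
    posterior /= posterior.sum(axis=0, keepdims=True) + eps
    return posterior

# Toy 2-class, 2x2 example: the atlas prior reinforces spatially
# plausible labels and down-weights implausible ones.
cnn = np.array([[[0.6, 0.9], [0.5, 0.2]],
                [[0.4, 0.1], [0.5, 0.8]]])
atlas = np.array([[[0.9, 0.5], [0.1, 0.5]],
                  [[0.1, 0.5], [0.9, 0.5]]])
post = fuse_with_atlas(cnn, atlas)
labels = post.argmax(axis=0)
```

In this sketch the anatomical prior breaks the tie at the ambiguous voxels (where the network outputs 0.5/0.5), which is one way explicit atlas representations improve robustness over purely data-driven predictions.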
These methodological advances are systematically integrated into scalable and reproducible processing frameworks, enabling large-scale analysis of heterogeneous neuroimaging datasets and supporting translational applications through platforms such as volBrain.
Overall, my work establishes a new paradigm for medical image computing, where deep learning is tightly coupled with anatomical knowledge and statistical modeling, paving the way toward reliable, generalizable, and clinically actionable image analysis tools.
My research leverages the emergence of large-scale neuroimaging datasets to address one of the central unresolved questions in neuroscience: the characterization of brain structure trajectories across the entire lifespan. Despite extensive efforts, there is still no consensus on brain maturation and aging patterns, largely due to limitations in previous studies, which were restricted to narrow age ranges, small sample sizes, and heterogeneous processing methodologies. These limitations have led to inconsistent findings and hindered reproducibility across studies.
To overcome these challenges, I exploit the recent paradigm shift toward Big Data sharing in neuroimaging, combined with advances in image processing that enable the unified analysis of data from neonates to the elderly. My work focuses on developing methodological frameworks that ensure scalability, robustness, and consistency across highly heterogeneous datasets.
My main contributions in this field are threefold:
Scalable and robust processing of large-scale neuroimaging data: I develop automated pipelines capable of processing massive datasets with high robustness to variability in acquisition protocols, image quality, and subject populations. These tools are designed to ensure reproducibility and consistency of quantitative measurements across thousands of scans.
Lifespan modeling of brain structure trajectories: I have contributed to the first comprehensive analyses of brain structural variations across the entire lifespan, from early development to advanced aging. By applying unified processing frameworks, my work provides consistent and reproducible estimates of volumetric changes, enabling the characterization of normative brain trajectories at an unprecedented scale.
Translation to neurodegenerative disease modeling: I extend these lifespan models to the study of neurological disorders, particularly neurodegenerative diseases, by identifying deviations from normative trajectories. This enables the development of quantitative biomarkers for disease characterization and progression, bridging large-scale population modeling with clinical applications.
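The idea of detecting deviations from normative trajectories can be sketched in a few lines, using simulated data (the ages, volumes, and quadratic trajectory below are illustrative assumptions, not results from my studies): fit a normative age curve on a reference cohort, estimate the residual spread, and express a new subject's measurement as a z-score relative to the norm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical normative cohort: structure volumes (arbitrary units)
# following a quadratic lifespan trajectory plus measurement noise.
ages = rng.uniform(5, 90, 500)
true_traj = lambda a: 4.0 + 0.04 * a - 0.0006 * a**2
volumes = true_traj(ages) + rng.normal(0.0, 0.15, ages.size)

# Fit the normative trajectory (2nd-degree polynomial in age).
coefs = np.polyfit(ages, volumes, deg=2)
predict = lambda a: np.polyval(coefs, a)

# Residual spread defines the width of the normative band.
sigma = np.std(volumes - predict(ages))

def deviation_z(age: float, volume: float) -> float:
    """Z-score of an observed volume relative to the normative trajectory."""
    return (volume - predict(age)) / sigma

# A subject whose volume lies 3 sigma below the age-expected value
# deviates markedly from the norm (candidate atrophy biomarker).
z = deviation_z(70.0, predict(70.0) - 3 * sigma)
```

Real normative models use far richer covariates and uncertainty estimates, but the principle is the same: the population curve provides the reference, and individual deviations become quantitative biomarkers.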
These contributions are integrated into scalable platforms such as volBrain, enabling the systematic analysis of large and diverse cohorts.
Overall, my work establishes a data-driven and unified framework for lifespan neuroimaging, providing robust references for brain development and aging, and opening new avenues for the quantitative study of neurological diseases.
My research has also pioneered a paradigm shift in the deployment of medical image analysis tools, moving from locally installed software toward fully accessible, cloud-based infrastructures. Traditionally, MRI processing pipelines require complex installation, configuration, and maintenance, as well as dedicated computational resources and technical expertise. These constraints significantly limit their adoption, particularly in clinical environments and non-specialized research settings.
To overcome these barriers, I developed volBrain, a cloud-based platform that provides fully automated MRI analysis through a web interface, following a Software-as-a-Service (SaaS) model. This approach removes the need for local installation, user training, or dedicated hardware, enabling seamless access to advanced image processing tools for a broad community of users.
My main contributions in this field are threefold:
Development of a large-scale cloud infrastructure for neuroimaging: I co-designed and co-implemented a distributed platform capable of handling high-throughput MRI processing, ensuring scalability, robustness, and reproducibility. The system leverages shared computational resources to efficiently process large volumes of data while maintaining consistent performance.
Integration of advanced image processing pipelines: I led the integration of a comprehensive suite of automated pipelines covering a wide range of neuroimaging tasks, including segmentation, volumetry, and quality-controlled reporting. These pipelines are continuously updated to incorporate state-of-the-art methodological advances.
Global dissemination and large-scale usage: The volBrain platform (www.volbrain.net) has been widely adopted by the international community, with more than 800,000 MRI scans processed worldwide, demonstrating its robustness, usability, and clinical relevance. This large-scale usage has enabled the creation of unprecedented datasets and has facilitated reproducible research across institutions.
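The high-throughput processing pattern described above can be sketched with a minimal worker-pool example. This is an illustrative simplification, not volBrain's actual architecture: scans are dispatched to a pool of workers, and each job is isolated so that one failing scan does not abort the batch.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def process_scan(scan_id: str) -> dict:
    """Placeholder for one pipeline run (segmentation, volumetry, report).
    A real deployment would launch the full processing pipeline here."""
    if not scan_id:
        raise ValueError("empty scan identifier")
    return {"scan": scan_id, "status": "done"}

def process_batch(scan_ids, max_workers=4):
    """Run scans in parallel, collecting successes and isolating failures."""
    results, failures = [], []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(process_scan, s): s for s in scan_ids}
        for fut in as_completed(futures):
            try:
                results.append(fut.result())
            except Exception:
                failures.append(futures[fut])
    return results, failures

# Two valid scans complete; the malformed identifier is quarantined.
results, failures = process_batch(["sub-01", "sub-02", ""])
```

Per-job fault isolation is what lets a shared platform keep consistent throughput when processing hundreds of thousands of heterogeneous scans: bad inputs are reported rather than stalling the queue.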
Overall, this work establishes a new framework for accessible, scalable, and reproducible medical image analysis, bridging methodological innovation with real-world deployment, and significantly accelerating the translation of advanced imaging tools into clinical and research practice.