Accepted papers:
X-Ray Near-Field Holotomography Reconstruction using Implicit Neural Representations
Johannes Gruen, Sebastian Eberle, Imke Greving, Silja Flenner, Martin Burger, Christian G. Schroer, Johannes Hagemann
X-ray near-field holotomography provides non-destructive, in situ 3D visualization of specimen interiors at nanometer-scale resolution. Reconstruction traditionally involves two separate steps: first retrieving the projected phase for different rotation angles, then applying tomographic reconstruction to obtain a 3D volume from the 2D projections. Both steps are ill-posed inverse problems, and separating them leads to information loss due to reconstruction errors. Recent advances in implicit neural representations (INRs) have demonstrated remarkable capabilities in scene rendering and tomographic reconstruction. In this work, we propose a unified INR-based framework that jointly solves the phase retrieval and tomographic reconstruction problems. This joint formulation enforces 3D consistency, resulting in significantly improved phase, absorption, and volumetric reconstructions. Moreover, INRs provide substantial data compression, reducing storage requirements by 95%, which is particularly important with the advent of fourth-generation synchrotron sources and the corresponding growth in data volume.
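To make the two coupled inverse problems concrete, the following is a minimal NumPy sketch of the holotomography forward model that such a joint framework would differentiate through: line-project the refractive-index decrement δ and absorption index β along the beam, form the exit wave under the projection approximation, and Fresnel-propagate it to the detector. This is our own illustrative toy (function names, the angular-spectrum propagator, and the axis-aligned projection are assumptions); in the paper the volume would be parameterized by an INR and rotated for each projection angle.

```python
import numpy as np

def fresnel_propagate(wave, dx, wavelength, z):
    # Angular-spectrum Fresnel propagator, applied as a Fourier-space
    # multiplication with the free-space transfer function.
    fx = np.fft.fftfreq(wave.shape[0], d=dx)
    fy = np.fft.fftfreq(wave.shape[1], d=dx)
    FX, FY = np.meshgrid(fx, fy, indexing="ij")
    kernel = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(wave) * kernel)

def hologram(delta, beta, dx, wavelength, z):
    # Projection approximation: line-integrate delta and beta along the
    # beam axis (axis 0; a full pipeline would rotate the volume per
    # angle), build the exit wave, and propagate it to the detector.
    k = 2.0 * np.pi / wavelength
    phase = -k * dx * delta.sum(axis=0)        # projected phase shift
    attenuation = -k * dx * beta.sum(axis=0)   # projected absorption
    exit_wave = np.exp(1j * phase + attenuation)
    return np.abs(fresnel_propagate(exit_wave, dx, wavelength, z)) ** 2
```

In a joint INR formulation, a single loss on the measured holograms is backpropagated through both the propagation and the projection, which is what enforces the 3D consistency described above.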
Optimizing the Iterative Reprojection Phase Retrieval Algorithm via Angle Sampling
Daniel Hernández Durán, Johannes Dora, Johannes Hagemann, Christian G. Schroer, Tobias Knopp
Coherence-based imaging with synchrotron radiation offers spatial resolutions beyond the limits of conventional X-ray imaging and enables quantitative mapping of the refractive index. In nanotomography experiments with phase contrast, reconstruction is commonly performed in a two-step procedure, namely phase retrieval and computed tomography. One may enforce a 3D consistency constraint by repeatedly switching between both steps through a reprojection of the tomogram, which comes at a high computational cost. In this study, we accelerate the reprojection approach by updating the phase retrieval step on a subset of the projection angles. We show that the improvement is also reflected in the remaining angles through the tomographic reconstruction. We present an exemplary phantom to show that the angle sampling both accelerates convergence and aids the reconstruction of challenging angles.
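The scheduling idea can be sketched as follows: interleaved angle subsets (in the spirit of ordered-subsets methods) feed an outer loop that alternates a per-angle phase-retrieval update on one subset with a tomographic consistency step over all angles. This is our own schematic (the subset construction and the stubbed update/reconstruct/reproject callables are assumptions, not the authors' implementation).

```python
import numpy as np

def ordered_subsets(n_angles, n_subsets):
    # Partition angle indices into interleaved subsets so that each
    # subset still covers the full angular range coarsely.
    idx = np.arange(n_angles)
    return [idx[s::n_subsets] for s in range(n_subsets)]

def reprojection_loop(phases, phase_retrieval_step, tomo_reconstruct,
                      reproject, n_subsets=4, n_outer=8):
    # Alternate: (1) refine the phase maps on ONE subset of angles,
    # (2) reconstruct the tomogram from all angles, (3) reproject it,
    # which spreads the improvement to angles outside the subset.
    subsets = ordered_subsets(len(phases), n_subsets)
    for it in range(n_outer):
        for a in subsets[it % n_subsets]:
            phases[a] = phase_retrieval_step(a, phases[a])
        volume = tomo_reconstruct(phases)
        phases = reproject(volume)
    return phases
```

Because each outer iteration runs phase retrieval on only 1/n_subsets of the angles, the per-iteration cost drops accordingly while the reprojection step keeps all angles consistent.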
Sparsity-based learning enables absorption contrast of mixed-contrast objects in tomographic X-ray near-field holography
Johannes Dora, Christian G. Schroer, Johannes Hagemann
In the hard X-ray regime, the imaginary part β of the complex refractive index of matter is usually a hundred times smaller than the real part δ and has a low signal-to-noise ratio in near-field holograms. Commonly used algorithms that implicitly or directly reconstruct the complex refractive indices of mixed-contrast objects typically fail to recover the imaginary part unless strong priors are imposed on β. In this article, we present a learning-based approach to extract wavelet coefficients of absorption images from single holograms. We demonstrate the robustness of the approach by reconstructing an absorption tomogram from experimental data.
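As background on the representation the network predicts, here is a minimal one-level orthonormal 2D Haar decomposition of the kind such sparsity-based wavelet approaches build on. This is illustrative only: it is not the authors' code, and the wavelet family actually used in the paper may differ.

```python
import numpy as np

def haar2d(img):
    # One level of an orthonormal 2D Haar transform: approximation (LL)
    # plus horizontal/vertical/diagonal detail coefficients.
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0
    lh = (a - b + c - d) / 2.0   # horizontal detail
    hl = (a + b - c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, (lh, hl, hh)

def inverse_haar2d(ll, details):
    # Invert one level; zeroing small detail coefficients first yields
    # a sparse, denoised estimate of the absorption image.
    lh, hl, hh = details
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2.0
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2.0
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return out
```

For smooth absorption images, most detail coefficients are near zero, which is the sparsity the learning-based extraction exploits to stabilize the otherwise weak β signal.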
Self-supervised Deep Convolutional Reconstruction for Low-light X-ray Fluorescence Ghost Imaging
Nicola Viganò, Dmitry Karpov, Kees Joost Batenburg, Sharon Shwartz
We recently developed a new self-supervised deep-learning-based Ghost Imaging (GI) reconstruction method, which provides unparalleled reconstruction performance for noisy acquisitions among unsupervised methods. Self-supervision removes the need for clean reference data while offering strong noise reduction. This provides the necessary tools for addressing signal-to-noise ratio concerns for GI acquisitions in emerging and cutting-edge low-light GI scenarios. Notable examples include micro- and nano-scale x-ray emission imaging, e.g., x-ray fluorescence (XRF) imaging of dose-sensitive samples. Here, we will analyze the performance of our method against state-of-the-art unsupervised methods on the reconstruction of XRF-GI data.
Regularizing INR with Diffusion Prior for Self-Supervised 3D Reconstruction of Neutron Computed Tomography Data
Maliha Hossain, Haley Duba-Sullivan, Amirkoushyar Ziabari
Recently, generative diffusion priors have made huge strides as inverse problem solvers, including the ability to be adapted for inference on out-of-distribution data. Concurrently, implicit neural representations (INRs) have emerged as fast and lightweight inverse imaging solvers that are amenable to hybrid approaches combining learned priors with traditional inverse problem formulations. In this paper, we present a diffusive computed tomography (CT) inversion framework for regularizing INRs, called Diffusive INR (DINR), designed to enable high-quality reconstruction from sparse-view neutron CT. Pretrained purely on synthetic data, DINR is evaluated on simulated and experimentally obtained observations of concrete microstructures, where traditional reconstruction methods suffer substantial degradation when the number of views is reduced. Compared to state-of-the-art sparse-view reconstruction techniques, our approach delivers superior performance, reduces reconstruction artifacts, and achieves gains in PSNR and SSIM, enabling accurate microstructural characterization even under extreme data limitations.
Invited talks:
Fast Eikonal Phase Retrieval for High-Throughput Beamlines
Alessandro Mirone (ESRF)
The ESRF-EBS BM18 beamline is dedicated to high-sensitivity phase-contrast tomography of large and complex samples, combining hierarchical (multi-resolution) imaging with propagation-based phase contrast enabled by its 220 m beamline length and propagation distances of up to 36 m. These capabilities open access to new imaging regimes and experimental opportunities, but also redefine the validity domain of standard propagation-based phase-contrast CT (PB-PCCT / PBI-CT) approaches. In particular, the approximations underlying widely used phase-retrieval methods—such as single-distance acquisition and single-material assumptions—become increasingly inadequate in these extended propagation and contrast conditions.
In this talk, we introduce a fast eikonal phase-retrieval formalism specifically designed to operate within these newly accessible regimes. The framework extends the validity of phase retrieval beyond conventional approximations while remaining computationally efficient and robust under realistic beamline conditions. Its implementation is tailored for integration into high-throughput reconstruction pipelines, enabling sustained processing of BM18-scale experiments where data volumes can reach the petabyte-per-week range. The presentation is based on our recent arXiv work.
3D and 4D coherent X-ray imaging reconstruction algorithms using end-to-end AI+Physics
Pablo Villanueva-Perez (Lund University/MAXIV)
The advent of diffraction-limited storage rings, such as MAX IV, SIRIUS, ESRF-EBS, and APS-U, has created an opportunity to achieve a new spatiotemporal frontier with coherent X-ray imaging. To fully exploit such capabilities, however, we depend on advanced reconstruction methods that can handle sparse, noisy, and highly dynamic datasets.
This talk will focus on our recent progress in reconstruction algorithms that enable high-fidelity 3D and 4D (3D+time) coherent X-ray imaging under challenging experimental conditions. We will discuss end-to-end AI+physics reconstruction frameworks that integrate unsupervised machine learning with physical priors describing X-ray interaction, propagation, image formation (forward model), and sample dynamics. These hybrid approaches provide a powerful mechanism for embedding strong physical priors directly into the reconstruction process for coherent X-ray imaging techniques. Specifically, in this talk, we will present AI+physics reconstruction algorithms for:
· Robust phase retrieval for coherent X-ray imaging in the near- and far-field domains.
· End-to-end 3D and 4D reconstructions to enable high-quality reconstructions from extremely sparse or incomplete data.
Together, these methods show how tightly coupling AI with physical priors can significantly enhance reconstruction speed, stability, and fidelity, opening new spatiotemporal regimes for addressing previously inaccessible scientific and industrial questions.