18 March, 9:30 - 11:00
Frontier of Black Hole Imaging: Computational Algorithms Driving the Scientific Breakthroughs
In April 2019 and May 2021, hundreds of front pages worldwide featured the first images of the supermassive black holes M87* and Sgr A*, revealing the shadows cast by their event horizons—the visible edge of spacetime. These breakthroughs yielded the second most-cited ground-based astronomy result of the past decade and were delivered by the Event Horizon Telescope (EHT), a global network of (sub)millimeter radio telescopes. The EHT achieves the sharpest angular resolution of any existing astronomical instrument by computationally synthesizing an Earth-sized aperture using very long baseline interferometry, a form of radio interferometry (RI).
The EHT is thus a computational telescope, and its success relies centrally on innovative RI imaging algorithms tailored to its data. RI imaging is an underdetermined inverse problem: one must reconstruct an image from incomplete and noisy Fourier-space measurements. The EHT introduces two additional acute challenges—extremely sparse Fourier-plane coverage and severe data corruption arising from the scarcity of suitable calibrators at EHT resolution. The limitations of traditional methods motivated the development of dedicated techniques now known as regularized maximum likelihood (RML) methods, which couple the power of regularization with efficient data models robust to calibration errors. RML approaches have since been widely adopted across radio astronomy, well beyond the EHT.
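To make the RML idea concrete, the following minimal sketch (illustrative only, not the EHT pipeline; the L1-plus-nonnegativity regularizer and all parameters are assumptions) reconstructs an image from sparse, noisy Fourier samples by proximal gradient descent on a regularized likelihood:

    import numpy as np

    # Toy RML reconstruction: minimize ||M(Fx) - v||^2 + lam*||x||_1 with x >= 0,
    # where F is the Fourier transform and M a sparse (u,v) sampling mask.
    rng = np.random.default_rng(0)
    n = 64
    x_true = np.zeros((n, n)); x_true[24:40, 24:40] = 1.0   # toy source
    F = lambda img: np.fft.fft2(img) / n                    # scaled so ||F|| = 1
    Ft = lambda vis: np.real(np.fft.ifft2(vis)) * n         # adjoint of F
    mask = rng.random((n, n)) < 0.05                        # 5% (u,v) coverage
    noise = 0.01 * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    vis = mask * (F(x_true) + noise)                        # observed visibilities

    lam, step, x = 1e-3, 1.0, np.zeros((n, n))
    for _ in range(200):
        x = x - step * Ft(mask * (F(x) - vis))     # gradient of the data term
        x = np.maximum(x - step * lam, 0.0)        # prox of L1 + nonnegativity
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))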
RI algorithm development remains an active area of research, driven by the rapidly growing demand for larger-scale and more complex modeling of black holes enabled by planned extensions. Upcoming upgrades to the EHT promise major advances in temporal resolution, sensitivity, and frequency coverage, producing five-dimensional Fourier-space data across multiple spatial scales. Looking further ahead, community roadmaps envision extending the EHT into space with the proposed Black Hole Explorer (BHEX) mission, ushering in a new era of angular resolution. Rapid advances in artificial intelligence and its modern software infrastructure—particularly for accelerating Bayesian parameter inference and enabling more expressive and efficient regularization for high-dimensional images than traditional hand-crafted regularizers—have driven the development of AI-powered algorithms for EHT data processing and physical interpretation.
This talk will introduce radio interferometry through the lens of computational imaging, present the algorithms underpinning black hole imaging, and outline the emerging AI-enabled frontier designed to meet the demands of next-generation facilities. I will also highlight cross-disciplinary relevance to other inverse problems in the physical sciences, including medical imaging.
17 March, 9:30 - 11:00
Gravitational lensing effect in a fabric membrane
This presentation is devoted to elastic waves in a fabric membrane. Elastic membranes are often used as didactic demonstrations of gravitation from the general-relativity perspective: the trajectories of rolling spheres, such as billiard balls, influence one another through the deformation their mass imprints on the membrane, much as bodies interact through the curvature of space-time. Here the analogy is pushed further using membrane waves, allowing us to revisit the famous 1919 Eddington experiment, which demonstrated the deflection of starlight passing close to the Sun.
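As a rough numerical companion to the analogy (a toy model, not the presenter's experimental setup; geometry and parameters are invented), one can propagate waves on a membrane whose local wave speed is lowered near a central "mass" and watch the wavefronts bend:

    import numpy as np

    # Toy membrane: 2-D wave equation u_tt = c(x,y)^2 * laplacian(u), with the
    # wave speed reduced near a central "mass" so passing wavefronts deflect.
    n, dt, dx = 200, 0.2, 1.0
    yy, xx = np.mgrid[0:n, 0:n]
    r2 = (xx - n / 2) ** 2 + (yy - n / 2) ** 2
    c = 1.0 - 0.6 * np.exp(-r2 / (2 * 15.0 ** 2))   # the "mass" slows waves
    u_prev = np.zeros((n, n)); u = np.zeros((n, n))
    for t in range(400):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
               + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / dx ** 2
        u_next = 2 * u - u_prev + (c * dt) ** 2 * lap   # leapfrog time step
        u_next[:, 1] = np.sin(0.3 * t)                  # plane-wave source
        u_prev, u = u, u_next
    # Plotting u shows the wavefronts refracting around the central depression.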
19 March, 14:00 - 15:30
Lessons from a decade of gravitational-wave astronomy and advances in polarimetric sensing
On September 14, 2015, the LIGO detectors in the US recorded the first gravitational-wave signal—GW150914—from the merger of two black holes, marking the dawn of gravitational-wave astronomy. The field has advanced rapidly since, culminating in the fourth observing run, performed by LIGO together with Virgo in Europe and KAGRA in Japan, which concluded in November 2025. The number of candidate detections now exceeds 300. This presentation will review the historical and scientific milestones of the past decade and explore the prospects for polarimetric measurements with next-generation detectors.
19 March, 9:30 - 11:00
Posterior inference in large-scale inverse problems by MCMC, and not MCMC
Sample-based inference using MCMC (Markov chain Monte Carlo) has become the default for computing posterior expectations in Bayesian hierarchical formulations of inverse problems, since the first examples of such methods appeared in 1997. Advances in computing, but more importantly in algorithms, now enable full posterior inference in large-scale inverse problems via open-source UQ (uncertainty quantification) packages that run provably convergent MCMC, seamlessly utilize bespoke forward codes and hardware, and handle 10,000,000 variables in the latent field — all in an afternoon. (At least for low-level prior representations.) I'll show some details of that.
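For readers new to the idea, here is a deliberately small sketch of the underlying computation (a toy linear-Gaussian inverse problem with random-walk Metropolis; nothing like the scale or the samplers the talk describes):

    import numpy as np

    # Toy Bayesian inverse problem: y = A x + noise, standard-normal prior on x,
    # posterior explored with random-walk Metropolis.
    rng = np.random.default_rng(1)
    d, m, sigma = 10, 25, 0.1
    A = rng.standard_normal((m, d))
    y = A @ rng.standard_normal(d) + sigma * rng.standard_normal(m)

    def log_post(x):
        return -0.5 * np.sum((A @ x - y) ** 2) / sigma ** 2 - 0.5 * np.sum(x ** 2)

    x, lp, samples = np.zeros(d), log_post(np.zeros(d)), []
    for _ in range(20000):
        prop = x + 0.05 * rng.standard_normal(d)     # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:      # Metropolis accept/reject
            x, lp = prop, lp_prop
        samples.append(x.copy())
    post_mean = np.mean(samples[5000:], axis=0)      # expectation after burn-in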
Recent advances in high-dimensional scientific computing, particularly numerical linear algebra, now allow us to compute full posterior inference for moderately sized problems (thousands of variables) using only deterministic algorithms — no MCMC, no sampling, no Monte Carlo integration. Just function approximation, numerical linear algebra, and numerical quadrature. I'll present that work, with some recent examples computed by my students here at Otago.
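The linear-Gaussian case shows the flavor of the deterministic route (a minimal sketch under assumed priors and noise; the talk's methods go well beyond this conjugate special case):

    import numpy as np

    # For y = A x + noise with a Gaussian prior, the posterior is Gaussian and
    # follows from linear algebra alone: precision H, mean, and covariance.
    rng = np.random.default_rng(2)
    d, m, sigma = 1000, 2000, 0.1
    A = rng.standard_normal((m, d)) / np.sqrt(m)
    y = A @ rng.standard_normal(d) + sigma * rng.standard_normal(m)

    H = A.T @ A / sigma ** 2 + np.eye(d)             # posterior precision
    mean = np.linalg.solve(H, A.T @ y / sigma ** 2)  # posterior mean
    cov = np.linalg.inv(H)                           # posterior covariance
    marg_std = np.sqrt(np.diag(cov))                 # pointwise uncertainties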
17 March, 14:00 - 15:30
Computational imaging with randomness
Computational imaging is a powerful framework for designing imaging systems by jointly exploiting optics and information science. In particular, state-of-the-art techniques from information science, such as compressive sensing and deep learning, have invigorated the field. In this talk, I will present our recent research activities related to computational imaging with random optical modulations, such as scattering media.
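A minimal sketch of the compressive-sensing side (illustrative; the masks, sizes, and ISTA solver are assumptions, not the speaker's systems): random on/off modulations compress a sparse scene into a few bucket measurements, and a sparsity prior recovers it:

    import numpy as np

    # Single-pixel-style toy: m random mask measurements of an n-pixel sparse
    # scene, recovered by ISTA (proximal gradient with soft thresholding).
    rng = np.random.default_rng(0)
    n, m = 32 * 32, 300
    x_true = np.zeros(n); x_true[rng.choice(n, 20, replace=False)] = 1.0
    Phi = (rng.integers(0, 2, (m, n)) - 0.5) / np.sqrt(m)   # random masks
    y = Phi @ x_true + 0.01 * rng.standard_normal(m)

    step = 1.0 / np.linalg.norm(Phi, 2) ** 2    # 1 / Lipschitz constant
    lam, x = 1e-3, np.zeros(n)
    for _ in range(500):
        x = x - step * Phi.T @ (Phi @ x - y)                     # gradient
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0)   # shrinkage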
16 March, 13:30 - 15:00
Phase retrieval algorithms for X-ray nanoimaging
Hard X-ray ptychography is a powerful technique for non-destructive nanostructure visualization via phase retrieval; however, it remains constrained by radiation damage and reconstruction instability. This presentation outlines our approach to addressing these challenges through optimization, machine learning, and sampling. First, we introduce iterative algorithms designed to enhance image quality and accelerate convergence under low-dose conditions. Second, we present a hybrid framework combining model-based phase retrieval with deep neural networks utilizing formula-driven training. Finally, we demonstrate optimized 3D sampling strategies employing efficient lattice structures.
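As a point of reference for the iterative algorithms mentioned above, here is the classical error-reduction (Gerchberg-Saxton-type) template that modern ptychographic solvers refine (a toy far-field model with a known support, not the presenters' method):

    import numpy as np

    # Error-reduction phase retrieval: alternate between the measured Fourier
    # magnitudes and an object-space support/positivity constraint.
    rng = np.random.default_rng(0)
    n = 64
    obj = np.zeros((n, n)); obj[20:44, 20:44] = rng.random((24, 24))
    support = obj > 0
    mag = np.abs(np.fft.fft2(obj))              # measured magnitudes only

    x = rng.random((n, n)) * support            # random initial guess
    for _ in range(500):
        X = np.fft.fft2(x)
        X = mag * np.exp(1j * np.angle(X))      # impose Fourier magnitudes
        x = np.real(np.fft.ifft2(X))
        x = np.where(support & (x > 0), x, 0.0) # impose support & positivity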
17 March, 11:30 - 12:30
Geometric phases to infer polarization fluctuations of gravitational waves
Gravitational waves (GWs) are polarized and exhibit two independent modes according to General Relativity. Merging black hole binaries emit bursts of GWs whose polarization varies over time due to spin-orbit precession when the spins are misaligned with the orbital angular momentum. Measuring the GW polarizations remains challenging with the current LIGO/Virgo/KAGRA detector network, but could become feasible with future detectors. In this presentation, we investigate the use of the geometric phase to measure fluctuations of the polarization pattern. Geometric phases, a concept originating in quantum mechanics, have found applications from asteroseismology to quantum information. We show how these phases can characterize polarization variations in multivariate time series, and how this can be applied to the analysis of GW signals.
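One ingredient can be sketched numerically (a toy; the signal, the Stokes conventions, and the identification of geometric phase with the swept solid angle are assumptions, with signs and prefactors varying across references): the instantaneous polarization state of a bivariate signal traces a path on the Poincaré sphere, and the solid angle it sweeps encodes a geometric phase:

    import numpy as np
    from scipy.signal import hilbert

    # Instantaneous Stokes parameters of (h_plus, h_cross) trace a path on
    # the Poincare sphere; accumulate the solid angle swept relative to a
    # reference point (Oosterom-Strackee spherical-triangle formula).
    t = np.linspace(0, 20, 8000)
    psi = 0.4 * np.sin(0.3 * t)                          # slow polarization wander
    hp = np.cos(psi) * np.cos(15 * t)                    # toy h_plus
    hc = np.sin(psi) * np.cos(15 * t) + 0.3 * np.sin(15 * t)   # toy h_cross

    zp, zc = hilbert(hp), hilbert(hc)                    # analytic signals
    S0 = np.abs(zp) ** 2 + np.abs(zc) ** 2
    s = np.stack([np.abs(zp) ** 2 - np.abs(zc) ** 2,
                  2 * np.real(zp * np.conj(zc)),
                  2 * np.imag(zp * np.conj(zc))]) / S0   # normalized Stokes

    ref, omega = s[:, 0], 0.0
    for k in range(s.shape[1] - 1):
        a, b = s[:, k], s[:, k + 1]
        num = np.dot(ref, np.cross(a, b))
        den = 1 + np.dot(ref, a) + np.dot(a, b) + np.dot(b, ref)
        omega += 2 * np.arctan2(num, den)                # signed solid angle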
17 March, 15:30 - 16:30
Structured low-rank methods for gravitational wave ringdown data analysis
During a binary black hole merger, the remnant black hole rings down to equilibrium during the so-called ringdown phase. Thanks to the LVK collaboration, these ringdown signals can be recorded and analyzed, for example to test general relativity. General relativity predicts that the ringdown can be modeled as a sum of polarized modes, which can be viewed as a sum of damped ellipses.
Assuming this model, we propose to use structured low-rank approximation (SLRA) methods to decompose the signal. Particular attention is given to its polarized nature, which carries potential information about the underlying physical phenomenon. In a first approach, an SLRA method using quaternions is proposed to decompose the source signal, a bivariate signal composed of two polarization states. In a second, more realistic approach, the ringdown is decomposed by directly analyzing the strain signals recorded by the interferometers. By stacking the Hankel matrices of the individual strains into an order-3 tensor, decomposing the ringdown amounts to estimating a constrained block-term tensor decomposition.
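The core SLRA ingredient can be sketched on a single real channel (a toy illustrating the Hankel low-rank structure; the quaternion and block-term tensor constructions in the talk go beyond this): a sum of r damped sinusoids yields a Hankel matrix of rank about 2r, so truncating its SVD denoises the signal (one Cadzow-style step shown):

    import numpy as np

    # Two damped sinusoids: the Hankel matrix has rank ~4, so a rank-4 SVD
    # truncation followed by anti-diagonal averaging denoises the signal.
    rng = np.random.default_rng(0)
    n, r = 400, 2
    t = np.arange(n)
    sig = (np.exp(-0.005 * t) * np.cos(0.2 * t)
           + 0.5 * np.exp(-0.01 * t) * np.cos(0.45 * t + 1.0))
    y = sig + 0.2 * rng.standard_normal(n)

    L = n // 2
    H = np.array([y[i:i + L] for i in range(n - L + 1)]).T   # Hankel matrix
    U, S, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :2 * r] * S[:2 * r]) @ Vt[:2 * r]             # rank-2r truncation

    den, cnt = np.zeros(n), np.zeros(n)
    for i in range(Hr.shape[1]):             # average along anti-diagonals
        den[i:i + L] += Hr[:, i]; cnt[i:i + L] += 1
    den /= cnt                               # denoised estimate of sig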
20 March, 10:00 - 11:00
Robust sparse convolutional coding for radar applications
In this talk, we present a unified sparse convolutional coding framework for radar imaging inverse problems, with applications to Ground Penetrating Radar (GPR) and Through-Wall Radar Imaging (TWRI). Both modalities share a common structure: target responses (hyperbolas or point scatterers) must be separated from clutter exhibiting low-rank structure.
We formulate this as a convex optimization problem combining sparse coding with a physics-based dictionary and low-rank modeling, solved via ADMM. To handle heavy-tailed noise and acquisition artifacts, we introduce a robust variant replacing the L2 data-fidelity term with the Huber norm. Experiments on real GPR data demonstrate significant detection improvements over classical approaches.
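The low-rank-plus-sparse split at the heart of the model can be sketched as follows (a simplified alternating-threshold toy, not the paper's ADMM with a physics-based dictionary and Huber fidelity):

    import numpy as np

    # Decompose Y into low-rank clutter L plus sparse targets S by
    # alternating singular-value and entrywise soft thresholding.
    def svt(M, tau):                         # singular-value thresholding
        U, S, Vt = np.linalg.svd(M, full_matrices=False)
        return (U * np.maximum(S - tau, 0)) @ Vt

    def soft(M, tau):                        # entrywise soft thresholding
        return np.sign(M) * np.maximum(np.abs(M) - tau, 0)

    rng = np.random.default_rng(0)
    m, n = 100, 120
    clutter = np.outer(rng.standard_normal(m), rng.standard_normal(n))
    targets = np.zeros((m, n)); targets[rng.random((m, n)) < 0.02] = 5.0
    Y = clutter + targets + 0.01 * rng.standard_normal((m, n))

    Lhat, Shat = np.zeros_like(Y), np.zeros_like(Y)
    lam = 1.0 / np.sqrt(max(m, n))           # standard RPCA-style weight
    for _ in range(100):
        Lhat = svt(Y - Shat, 1.0)            # low-rank (clutter) update
        Shat = soft(Y - Lhat, lam)           # sparse (target) update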
We then extend this framework through deep unrolling: the iterative proximal gradient descent algorithm is unfolded into a trainable neural network (LCRPCA) that learns convolutional filters while preserving interpretable low-rank plus sparse structure. This hybrid approach achieves competitive performance on TWRI localization while requiring far fewer training samples than generic deep networks—crucial when radar measurements are scarce.
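Deep unrolling in miniature (a sketch of the general idea, not LCRPCA itself, which additionally learns convolutional filters and keeps the low-rank branch): a fixed number of proximal-gradient iterations become network layers whose step sizes and thresholds would be trained on example pairs:

    import numpy as np

    # Unrolled ISTA forward pass: each loop turn is one "layer" with its own
    # (here hand-set, in practice learned) step size and threshold.
    def unrolled_ista(y, Phi, steps, thetas):
        x = np.zeros(Phi.shape[1])
        for step, theta in zip(steps, thetas):
            x = x - step * Phi.T @ (Phi @ x - y)
            x = np.sign(x) * np.maximum(np.abs(x) - theta, 0)
        return x

    rng = np.random.default_rng(0)
    Phi = rng.standard_normal((40, 100)) / np.sqrt(40)
    x_true = np.zeros(100); x_true[[5, 30, 77]] = 1.0
    y = Phi @ x_true

    K = 10                                   # network depth = iteration count
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2
    x_hat = unrolled_ista(y, Phi, [step] * K, [0.02] * K)
    # Training would fit the per-layer parameters (and the filters) by
    # backpropagating a reconstruction loss through these K layers.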
This work illustrates how combining robust estimation, structural priors, and modern learning techniques addresses challenging inverse problems in radar sensing.
17 March, 16:00 - 17:00
Statistical Validation of Structures in Astronomical and Medical Image Reconstruction via Hypothesis Testing
While image reconstruction processes such as PSF deconvolution can recover fine structures, quantifying the uncertainty of those structures remains difficult, yet it is crucial for quantitative analysis. We present a method to evaluate the statistical significance of a given structure. The method performs a hypothesis test between two reconstructed images: one under a null model in which the structure is suppressed, and the MAP estimate as the alternative model. If the null model is rejected at a chosen significance level, the structure is regarded as statistically significant. Simulations using astronomical galaxy images and low-dose X-ray CT data indicate that genuine structures tend to be significant, while artefacts are not.
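The logic of the test can be sketched schematically (a toy with a Gaussian likelihood and hand-made images standing in for the constrained and MAP reconstructions; the calibration shown is naive):

    import numpy as np
    from scipy.stats import chi2

    # Compare the fit of an alternative (MAP-like) image with a null image
    # in which the candidate structure is suppressed, via a likelihood ratio.
    def log_lik(data, model, sigma):
        return -0.5 * np.sum((data - model) ** 2) / sigma ** 2

    rng = np.random.default_rng(0)
    sigma = 1.0
    truth = np.zeros(100); truth[40:45] = 2.0        # genuine small structure
    data = truth + sigma * rng.standard_normal(100)

    alt_img = truth                                  # stand-in for the MAP fit
    null_img = truth.copy(); null_img[40:45] = 0.0   # structure suppressed

    lr = 2 * (log_lik(data, alt_img, sigma) - log_lik(data, null_img, sigma))
    p = chi2.sf(lr, df=5)        # naive reference distribution (5 pixels)
    print(f"likelihood ratio = {lr:.1f}, p = {p:.3g}")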
19 March, 11:30 - 12:30
Majorization-Minimization Bregman Proximal Gradient Algorithms for NMF with the Kullback–Leibler Divergence
Nonnegative matrix factorization (NMF) has been studied in machine learning and signal processing as well as in mathematical optimization. NMF is a method to decompose an observed nonnegative matrix into two nonnegative matrices. In this talk, we propose new algorithms for solving NMF, called the majorization-minimization Bregman proximal gradient algorithm (MMBPG) and MMBPG with extrapolation (MMBPGe). These iterative algorithms monotonically decrease the objective function and an associated potential function, and we establish that any sequence generated by MMBPG(e) converges to a stationary point from any initial point. We apply MMBPG and MMBPGe to Kullback–Leibler (KL) divergence-based NMF. While most existing KL-based NMF methods update the two blocks, or each variable, alternately, our algorithms update all variables simultaneously. MMBPG and MMBPGe for KL-based NMF are equipped with a separable Bregman distance that satisfies the smooth adaptable property and makes each subproblem solvable in closed form. Using this fact, we guarantee that any sequence generated by MMBPG(e) converges to a Karush–Kuhn–Tucker point of KL-based NMF from any initial point. In numerical experiments, we compare the proposed algorithms with existing methods on synthetic and real-world data.
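For orientation, the classical alternating baseline for the KL objective looks as follows (the standard multiplicative updates, shown only for contrast; MMBPG/MMBPGe instead update W and H simultaneously via a Bregman proximal gradient step):

    import numpy as np

    # Classical multiplicative updates for KL-divergence NMF: V ~ W @ H with
    # all factors nonnegative, W and H updated alternately.
    rng = np.random.default_rng(0)
    m, n, r, eps = 60, 80, 5, 1e-12
    V = rng.random((m, r)) @ rng.random((r, n))      # observed matrix
    W, H = rng.random((m, r)), rng.random((r, n))
    for _ in range(300):
        H *= (W.T @ (V / (W @ H + eps))) / (W.T @ np.ones_like(V) + eps)
        W *= ((V / (W @ H + eps)) @ H.T) / (np.ones_like(V) @ H.T + eps)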
19 March, 16:00 - 17:00
Decomposition of Signal and Noise toward Deep Imaging Spectroscopy with Submillimeter Single-dish Telescopes
We present a matrix decomposition approach for detecting faint astronomical signals from noisy time-series spectra obtained with submillimeter wide-band and multi-pixel superconducting instruments on ground-based single-dish telescopes. Such observations inevitably suffer from atmospheric and instrumental noise contamination that is typically orders of magnitude stronger than the astronomical signals. Since signal and noise are physically independent and have different temporal frequencies, the proposed approach based on independence or low-rankness is expected to effectively separate components, thereby enabling the subsequent detection of faint signals. We demonstrate improvements in signal detection using existing small-scale instruments on the world’s largest millimeter single-dish telescopes (Nobeyama 45-m and LMT 50-m), and report on the ongoing application to DESHIMA 2.0 on ASTE 10-m, a predecessor of upcoming deep imaging spectroscopic instruments such as TIFUUN.
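The low-rankness half of the idea can be sketched in a few lines (a toy with invented scales; the real pipeline also exploits statistical independence and instrument-specific structure): dominant atmospheric/instrumental drifts live in a few principal components of the time-by-channel matrix, and projecting them out uncovers the much fainter signal:

    import numpy as np

    # Remove the top-k principal components (drift-dominated) from a
    # (time x channel) matrix of spectra to expose a faint signal.
    rng = np.random.default_rng(0)
    T, C, k = 2000, 64, 3
    drifts = rng.standard_normal((T, k)) @ rng.standard_normal((k, C)) * 100.0
    signal = np.zeros((T, C)); signal[:, 30] = 0.5 * np.sin(np.arange(T) / 50)
    data = drifts + signal + rng.standard_normal((T, C))

    X = data - data.mean(axis=0)                     # center the time streams
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    clean = X - (U[:, :k] * S[:k]) @ Vt[:k]          # subtract top-k drifts
    # `clean` retains the faint channel-30 signal for subsequent stacking.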
17 March, 9:30 - 11:00
Image Reconstruction Framework for MeV Gamma-ray Astronomy: From INTEGRAL/SPI to COSI
MeV gamma-ray observations provide unique insights into cosmic nucleosynthesis and positron annihilation in our Galaxy. However, image reconstruction in this energy band presents significant challenges. Instruments such as Compton cameras and coded-mask telescopes provide only indirect measurements of photon directions, leading to an ill-posed inverse problem that requires sophisticated statistical data analysis to recover images from background-dominated data.
For such statistical data analysis in MeV gamma-ray astronomy, we are developing a comprehensive analysis framework, COSIpy. While primarily designed for COSI, NASA's MeV Compton telescope scheduled for launch in 2027, it adopts a versatile architecture applicable to various gamma-ray telescopes. We have also extended the widely used Richardson-Lucy algorithm to a Bayesian framework and proposed a generalized approach that flexibly incorporates different prior distributions.
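For reference, the classical Richardson-Lucy iteration that the framework generalizes looks as follows (a 1-D toy with an invented Gaussian response and background; the Bayesian extension adds prior terms to this update):

    import numpy as np

    # Richardson-Lucy / EM for a Poisson model y ~ Poisson(R @ x + bkg):
    # multiplicative updates that preserve nonnegativity of the image x.
    rng = np.random.default_rng(0)
    n = 100
    i, j = np.arange(n)[:, None], np.arange(n)[None, :]
    R = np.exp(-0.5 * ((i - j) / 3.0) ** 2)
    R /= R.sum(axis=0)                       # column-normalized response
    x_true = np.zeros(n); x_true[[30, 60]] = 200.0
    bkg = 5.0 * np.ones(n)                   # background-dominated data
    y = rng.poisson(R @ x_true + bkg).astype(float)

    x = np.ones(n)                           # flat initial image
    for _ in range(200):
        x *= R.T @ (y / (R @ x + bkg)) / R.sum(axis=0)   # RL update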
We demonstrated the framework's performance through analysis of 20 years of observations with INTEGRAL/SPI, a coded-aperture spectrometer on an ESA satellite, producing the most detailed 511 keV emission map to date (Yoneda et al. 2025, A&A, 702, A220). We will also present COSI simulations for key nuclear lines, demonstrating improved image quality (Yoneda et al. 2025, A&A, 697, A117). Finally, we will discuss current challenges and our ongoing efforts for COSI's future observations.