May 19, 2020
Speaker: Raja Giryes - ( Tel Aviv University )
Title: Joint Design of Optics and Post-Processing Algorithms Based on Deep Learning for Generating Advanced Imaging Features
Abstract: After the tremendous success of deep learning (DL) for image processing and computer vision applications, these days almost every signal processing task is analyzed using such tools. In the presented work, the DL design revolution is brought one step deeper, into the optical image formation process. By considering the lens as an analog signal processor of the incoming optical wavefront (originating from the scene), the optics is modeled as an additional 'layer' in a DL model, and its parameters are optimized jointly with the 'conventional' DL layers, end-to-end. This design scheme allows the introduction of unique feature encoding in the intermediate optical image, since the lens 'has access' to information that is lost in conventional 2D imaging. Therefore, such an approach enables a holistic design of the entire IP/CV system. The proposed design approach will be presented with several applications: an extended Depth-Of-Field (DOF) camera; a passive depth estimation solution based on a single image from a single camera; non-uniform motion deblurring; and an enhanced stereo camera with extended dynamic range and self-calibration abilities. Experimental results will be presented and discussed. This is joint work with Shay Elmalem, Harel Haim, Yotam Gil, Alex Bronstein, and Emanuel Marom.
Slides [ Download ]
June 2, 2020
Speaker: Laura Waller - ( UC Berkeley )
Title: End-To-End Learning for Computational Microscopy
Abstract: Computational imaging involves the joint design of imaging system hardware and software, optimizing across the entire pipeline from acquisition to reconstruction. Computers can replace bulky and expensive optics by solving computational inverse problems. This talk will describe end-to-end learning for development of new microscopes that use computational imaging to enable 3D fluorescence and phase measurement. Traditional model-based image reconstruction algorithms are based on large-scale nonlinear non-convex optimization; we combine these with unrolled neural networks to learn both the image reconstruction algorithm and the optimized data capture strategy.
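The "unrolled neural network" idea mentioned above can be sketched, under toy assumptions, as an iterative solver (here ISTA for sparse recovery) truncated to a fixed number of layers whose per-layer parameters would be learned end-to-end; the sketch below only runs the forward pass with hand-set values, not the actual microscopy pipeline.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def unrolled_ista(y, A, steps, thresholds):
    """ISTA unrolled into a fixed number of 'layers'. In end-to-end training,
    `steps` and `thresholds` would be learned per layer, together with the
    data-capture parameters."""
    x = np.zeros(A.shape[1])
    for mu, lam in zip(steps, thresholds):
        x = soft(x - mu * A.T @ (A @ x - y), lam)   # gradient step + prox
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100)) / np.sqrt(40)    # toy measurement model
x_true = np.zeros(100); x_true[[5, 37, 80]] = [2.0, -1.5, 1.0]
y = A @ x_true

K = 30                                              # number of unrolled layers
L = np.linalg.norm(A, 2) ** 2                       # Lipschitz constant of the gradient
x_hat = unrolled_ista(y, A, steps=[1.0 / L] * K, thresholds=[0.05 / L] * K)
```

Because the layer count is fixed, the whole reconstruction is a differentiable map from measurements to image, which is what allows the capture strategy to be optimized through it.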
Video Recording [ Youtube ]
June 16, 2020
Speaker: Michael Unser - ( Ecole Polytechnique Fédérale de Lausanne )
Title: CryoGAN: A novel paradigm for single-particle analysis and 3D reconstruction in cryo-EM microscopy
Abstract: Single-particle cryo-EM has revolutionized structural biology over the last decade and remains an active topic of research. In particular, the reconstruction task is an enduring technical challenge, since the imaged 3D particles have unknown orientations. Scientists have spent the better part of the last 30 years designing a solid computational pipeline that can reliably deliver 3D structures with atomic resolution. The result is an intricate multi-step procedure that permits the regular discovery of new structures, but that remains prone to overfitting and irreproducibility. The most notable difficulties with the current paradigms are the need for pose-estimation methods, the reliance on user expertise for appropriate parameter tuning, and the non-straightforward extension to the handling of structural heterogeneity. To overcome these limitations, we recently proposed a completely new paradigm for single-particle cryo-EM reconstruction that leverages the remarkable capability of deep neural networks to capture data distributions. Based on an adversarial learning scheme, the CryoGAN algorithm can resolve a 3D structure in a single algorithmic run using only the dataset of picked particles and CTF estimations as inputs. Hence, CryoGAN bypasses many cumbersome processing steps, including the delicate pose-estimation procedure. The algorithm is completely unsupervised, does not rely on an initial volume estimate, and requires minimal user interaction. To the best of our knowledge, CryoGAN is the first demonstration of a deep-learning architecture able to singlehandedly perform the full single-particle cryo-EM reconstruction procedure without prior training. This is joint work with Harshit Gupta, Michael T. McCann, and Laurène Donati.
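The adversarial scheme compares real picked particles against particles synthesized from the current volume estimate through the cryo-EM forward model (random pose, projection, CTF, noise). The sketch below shows only that simulator half, in a heavily simplified form: poses are restricted to 90° rotations to avoid interpolation, and the CTF is a made-up radial function; none of this is the actual CryoGAN implementation.

```python
import numpy as np

def toy_ctf(n, defocus):
    """Very simplified radially symmetric CTF (illustrative only)."""
    f = np.fft.fftfreq(n)
    fx, fy = np.meshgrid(f, f, indexing="ij")
    return np.cos(defocus * (fx**2 + fy**2))

def simulate_particle(volume, rng):
    """Forward-model sketch: random pose -> projection -> CTF -> noise."""
    v = volume
    for axes in [(0, 1), (0, 2), (1, 2)]:       # crude random pose via 90° rotations
        v = np.rot90(v, k=rng.integers(4), axes=axes)
    proj = v.sum(axis=0)                        # line-integral projection
    ctf = toy_ctf(proj.shape[0], defocus=rng.uniform(50, 150))
    modulated = np.fft.ifft2(np.fft.fft2(proj) * ctf).real
    return modulated + 0.1 * rng.standard_normal(proj.shape)

rng = np.random.default_rng(2)
volume = np.zeros((16, 16, 16)); volume[4:12, 4:12, 4:12] = 1.0   # toy 'structure'
particles = np.stack([simulate_particle(volume, rng) for _ in range(8)])
```

In CryoGAN, a discriminator is trained to distinguish such simulated projections from the experimental ones, and the volume is updated so that the two distributions match, which is why no explicit pose estimation is needed.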
Video Recording [ Youtube ]
June 30, 2020
Speaker: Katie Bouman - ( Caltech )
Title: Capturing the First Image of a Black Hole & Designing the Future of Black Hole Imaging
Abstract: This talk will present the methods and procedures used to produce the first image of a black hole from the Event Horizon Telescope, as well as discuss future developments. It had been theorized for decades that a black hole would leave a "shadow" on a background of hot gas. Taking a picture of this black hole shadow would help to address a number of important scientific questions, both on the nature of black holes and the validity of general relativity. Unfortunately, the shadow is so small that imaging it with traditional approaches would require an Earth-sized radio telescope. In this talk, I discuss techniques the Event Horizon Telescope Collaboration has developed to photograph a black hole using the Event Horizon Telescope, a network of telescopes scattered across the globe. Imaging a black hole’s structure with this computational telescope required us to reconstruct images from sparse measurements, heavily corrupted by atmospheric error. This talk will summarize how the data from the 2017 observations were calibrated and imaged, and explain some of the challenges that arise with a heterogeneous telescope array like the EHT. The talk will also discuss future developments, including how we are developing machine learning methods to help design future telescope arrays.
Video Recording [ Youtube ]
July 14, 2020
Speaker: Jong Chul Ye - ( KAIST )
Title: Optimal transport driven CycleGAN for unsupervised learning in inverse problems
Abstract: Penalized least squares (PLS) is a classic method for solving inverse problems, in which a regularization term is added to stabilize the solution. Optimal transport (OT) is another mathematical framework that has recently received significant attention from the computer vision community, since it provides a means to transport one distribution to another in an unsupervised manner. The cycle-consistent generative adversarial network (cycleGAN) is a recent extension of the GAN that learns target distributions with less mode-collapsing behavior. Although these approaches are similar in that no supervised training is required, the algorithms look quite different, so the mathematical relationship between them has not been clear. In this talk, I explain an important advance that unveils the missing link. Specifically, we propose a novel PLS cost that measures the sum of distances in the measurement space and the latent space. When used as a transportation cost for optimal transport, we show that this new PLS cost leads to a novel cycleGAN architecture as a Kantorovich dual OT formulation. One of the most important advantages of this formulation is that, depending on the knowledge of the forward problem, distinct variations of the cycleGAN architecture can be derived. The new cycleGAN formulation has been applied to various imaging problems, such as accelerated magnetic resonance imaging (MRI), super-resolution/deconvolution microscopy, low-dose x-ray computed tomography (CT), satellite imagery, etc. Experimental results confirm the efficacy and flexibility of the theory.
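One plausible reading of the proposed transport cost, "the sum of distances in the measurement space and the latent space", is sketched below for a toy undersampling forward operator; the exact functional form used in the actual work may differ, and the 'generator' here is just a zero-filled adjoint stand-in.

```python
import numpy as np

def pls_transport_cost(x, y, H, G):
    """Sum of a measurement-space distance ||Hx - y|| and an image-space
    distance ||G(y) - x||, used as a transportation cost (sketch only)."""
    return np.linalg.norm(H @ x - y) + np.linalg.norm(G(y) - x)

rng = np.random.default_rng(3)
H = np.eye(32)[::2]                 # toy forward model: 2x undersampling
G = lambda y: H.T @ y               # naive 'generator': zero-filled adjoint
x = rng.standard_normal(32)
y = H @ x                           # consistent measurement of x
cost = pls_transport_cost(x, y, H, G)
```

In the OT-cycleGAN framework, minimizing the transport of the image distribution to the measurement distribution under such a cost yields, via Kantorovich duality, an adversarial training problem with cycle-consistency terms; knowing H exactly lets parts of the architecture be replaced by the known physics.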
Video Recording [ Youtube ]
July 28, 2020
Speaker: Orazio Gallo - ( NVIDIA Research )
Title: Depth Estimation from RGB Images with Applications to Novel View Synthesis and Autonomous Navigation
Abstract: Depth information is a central requirement for many computer vision and computational imaging applications. A number of sensors exist that can capture depth directly. Standard RGB cameras offer a particularly attractive alternative thanks to their lower price point and widespread availability, but they also introduce new challenges. In this talk I will address two of these challenges. The first is dynamic content. When a scene is captured with a monocular camera, moving objects break the epipolar constraints, making it impossible to estimate their depth directly. I will describe a method that addresses this issue while also improving the quality of the depth estimation in the static regions of the scene. I will then use the resulting depth to synthesize novel views of the scene, or to create effects like the bullet-time effect without the need for synchronized cameras. The issue of dynamic content can also be addressed by using multiple cameras simultaneously, as is the case with stereo. However, while state-of-the-art deep-learning stereo algorithms produce high-quality depth, they are far from real time, a central requirement for applications such as autonomous navigation. I will present Bi3D, a stereo algorithm that tackles this second challenge. Bi3D allows one to trade depth quantization for latency. Given a strict time budget, Bi3D can detect objects closer than a given distance D in as little as 5ms. It can also estimate depth with arbitrarily coarse quantization, at a complexity linear in the number of quantization levels. For instance, it takes 9.8ms to estimate a 2-bit depth map, or 18.5ms for a 3-bit depth map. Bi3D can also use the allotted quantization levels to produce regular, continuous depth, restricted to a specific depth range.
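The binary-plane idea behind Bi3D can be sketched as follows: each plane k yields a binary "is this pixel closer than plane k?" classification, and combining K such maps quantizes depth into K+1 bins. In Bi3D a network predicts the maps from a stereo pair; the toy below builds them from ground-truth depth purely to show how they combine, and all depth values are made up.

```python
import numpy as np

def binary_maps(depth, planes):
    """Per-plane binary classification: is each pixel closer than plane k?
    (Bi3D predicts these with a network; here we use true depth to illustrate.)"""
    return np.stack([(depth < d).astype(int) for d in planes])

def quantized_depth(maps, planes):
    """Combine K binary maps into depth quantized over K+1 bins."""
    bin_index = len(planes) - maps.sum(axis=0)   # planes lying in front of the pixel
    edges = np.concatenate([[0.0], planes, [planes[-1] + (planes[-1] - planes[-2])]])
    return 0.5 * (edges[bin_index] + edges[bin_index + 1])  # bin midpoints

depth = np.array([[1.0, 4.0], [7.5, 12.0]])      # toy ground-truth depths (positive)
planes = np.array([2.0, 5.0, 10.0])              # 3 planes -> 4 bins (2-bit-style)
est = quantized_depth(binary_maps(depth, planes), planes)
```

A single plane already gives the "is anything closer than D?" detection mode mentioned in the abstract; latency grows with the number of planes, which is the quantization-versus-time trade-off.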
Video Recording [ Youtube ]
Aug 11, 2020
Speaker: Xiao Xiang Zhu - ( TUM, Germany )
Title: Data Science in Earth Observation
Abstract: Geoinformation derived from Earth observation satellite data is indispensable for many scientific, governmental and planning tasks. Geoscience, atmospheric sciences, cartography, resource management, civil security, disaster relief, as well as planning and decision support are just a few examples. Furthermore, Earth observation has irreversibly arrived in the Big Data era, e.g. with ESA’s Sentinel satellites and with the blooming of NewSpace companies. This requires not only new technological approaches to manage and process large amounts of data, but also new analysis methods. Here, methods of data science and artificial intelligence (AI), such as machine learning, become indispensable.
In this talk, explorative signal processing and machine learning algorithms, such as compressive sensing and deep learning, will be shown to significantly improve information retrieval from remote sensing data, and consequently lead to breakthroughs in geoscientific and environmental research. In particular, by fusing petabytes of EO data, from satellites to social media, with tailored and sophisticated data science algorithms, it is now possible to tackle unprecedented, large-scale, influential challenges, such as the mapping of global urbanization, one of the most important megatrends of global change.
Aug 25, 2020
Speaker: Saiprasad Ravishankar - ( MSU, USA )
Title: From Transform Learning to Deep Learning and Beyond for Imaging
Abstract: The next generation of computational imaging systems is expected to be increasingly data-driven. There is growing interest in learning effective models for image reconstruction from limited training data, which would benefit many applications. In this talk, we will first review efficient methods for learning sparsifying transform models for signals, along with convergence analysis. Various properties can be enforced during transform learning, such as sparsity in a union of transforms with clustering, non-local sparsity, or multi-layer models, which help improve image reconstruction quality over conventional schemes in applications such as X-ray computed tomography (CT) or magnetic resonance imaging (MRI). Sparsifying transforms are often learned in an unsupervised manner from (unpaired) images, or on-the-fly from measurements, and used to aid model-based image reconstruction that combines physics-based forward models, noise models, and image priors. We show that these sparsity-based models can also be trained in a supervised manner for optimal reconstruction quality, by learning the model parameters from (deep) unrolled block coordinate descent algorithms for transform-regularized problems, or by directly learning the sparsity regularizer itself within model-based reconstruction formulations. The latter approach leads to a challenging bilevel training optimization problem, for which we analyze the optimization landscape with l1 sparsity and propose new algorithms without approximations or relaxations. Our initial results show promise for such supervised operator learning in denoising compared to popular methods. We then propose unified bilevel machine learning formulations that effectively combine the benefits of imaging forward models, noise models, supervised deep learning, unsupervised learning-based models, and analytical priors. Our unified approach substantially improves image reconstruction quality in low-dose X-ray CT compared to popular recent methods.
We conclude with a brief discussion of ongoing and future research pathways.
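The classical (unsupervised) transform-learning step has a particularly clean structure: alternate a closed-form sparse-coding step (hard thresholding) with a closed-form transform update (orthogonal Procrustes). The sketch below shows this alternation for a unitary transform on toy data; it is a textbook-style simplification, not the speaker's full algorithm.

```python
import numpy as np

def hard_threshold(z, tau):
    """Keep only entries with magnitude at least tau (l0 prox, closed form)."""
    return z * (np.abs(z) >= tau)

def learn_unitary_transform(X, tau, iters=20):
    """Alternate sparse coding with a Procrustes transform update, both in
    closed form, to learn a unitary sparsifying transform W for the data X."""
    W = np.eye(X.shape[0])
    for _ in range(iters):
        Z = hard_threshold(W @ X, tau)           # sparse codes given W
        U, _, Vt = np.linalg.svd(Z @ X.T)        # Procrustes: argmin_W ||WX - Z||_F
        W = U @ Vt                               # over orthogonal W
    return W, Z

# toy data: signals that are sparse under some unknown rotation Q
rng = np.random.default_rng(4)
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))
codes = rng.standard_normal((8, 200)) * (rng.random((8, 200)) < 0.2)
X = Q.T @ codes
W, Z = learn_unitary_transform(X, tau=0.3)
```

Each step monotonically decreases the thresholded sparsification objective, which is the basis of the convergence analysis mentioned in the abstract; the supervised/bilevel variants replace this objective with reconstruction quality at the output of an unrolled solver.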
Video Recording [ Youtube ]
Sep 8, 2020
Speaker: Anat Levin - ( Technion, Israel )
Title: Rendering speckle statistics in scattering media and its applications in computational imaging
Abstract: We present a Monte Carlo rendering framework for the physically-accurate simulation of speckle patterns arising from volumetric scattering of coherent waves. These noise-like patterns are characterized by strong statistical properties, such as the so-called memory effect. These properties are at the core of imaging techniques for applications as diverse as tissue imaging, motion tracking, and non-line-of-sight imaging. Our rendering framework can replicate these properties computationally, in a way that is orders of magnitude more efficient than alternatives based on directly solving the wave equations. At the core of our framework is a path-space formulation for the covariance of speckle patterns arising from a scattering volume, which we derive from first principles. We use this formulation to develop two Monte Carlo rendering algorithms, one for computing speckle covariance and one for rendering speckle fields directly. While approaches based on wave equation solvers require knowing the microscopic positions of wavelength-sized scatterers, our approach takes as input only bulk parameters describing the statistical distribution of these scatterers inside a volume. We validate the accuracy of our framework by comparing against speckle patterns simulated using wave equation solvers, use it to simulate memory-effect observations that were previously only possible through lab measurements, and demonstrate its applicability for computational imaging tasks. In particular, we show an order-of-magnitude extension of the angular range over which one can use speckle correlations to see through a scattering volume.
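The "strong statistical properties" of speckle can be seen in the most basic Monte Carlo model: each pixel sums many scattered contributions with random phases, and for fully developed speckle the resulting intensity is exponentially distributed with contrast (std/mean) near 1. The sketch below checks that single well-known property; it is not the path-space covariance framework of the talk.

```python
import numpy as np

rng = np.random.default_rng(5)
n_paths, n_pixels = 500, 2000
# random-phasor-sum model: each pixel accumulates many path contributions
# with independent, uniformly random phases
phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_pixels, n_paths))
field = np.exp(1j * phases).sum(axis=1) / np.sqrt(n_paths)
intensity = np.abs(field)**2

contrast = intensity.std() / intensity.mean()   # ~1 for fully developed speckle
```

Properties like the memory effect are, in the same spirit, statements about the covariance of such fields under small changes in illumination angle, which is exactly the quantity the path-space Monte Carlo algorithms of the talk estimate from bulk scattering parameters.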
Video Recording [ Youtube ]
Oct 6, 2020
Speaker: John Wright - ( Columbia University, USA )
Title: Geometry and Symmetry in (some!) Nonconvex Optimization Problems
Abstract: Nonconvex optimization plays an important role in a wide range of areas of science and engineering — from learning feature representations for visual classification, to reconstructing images in biology, medicine and astronomy, to disentangling spikes from multiple neurons. The worst-case theory for nonconvex optimization is dismal: in general, even guaranteeing a local minimum is NP-hard. However, in these and other applications, very simple iterative methods often perform surprisingly well.
In this talk, I will discuss a family of nonconvex optimization problems that can be solved to global optimality using simple iterative methods, which succeed independent of initialization. This family includes certain model problems in feature learning, imaging and scientific data analysis. These problems possess a characteristic structure, in which (i) all local minima are global, and (ii) the optimization landscape does not have any flat saddle points. I will describe how these features arise naturally as a consequence of problem symmetries, and how they lead to new types of performance guarantees for efficient methods. I will motivate these problems from microscopy, astronomy and computer vision, and show applications of our results in these domains. This includes joint work with Yuqian Zhang, Qing Qu, Ju Sun, Han-Wen Kuo, Yenson Lau, Dar Gilboa, and Abhay Pasupathy.
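A minimal toy example of such a benign, symmetric landscape (not one of the talk's actual model problems): f(x) = (||x||² - 1)² on R² is invariant under rotations, its global minima form the unit circle, and its only other critical point is a strict saddle at the origin, so plain gradient descent from random initialization reliably finds a global minimizer.

```python
import numpy as np

def f(x):
    """Rotation-symmetric toy objective; global minima on the unit circle."""
    return (x @ x - 1.0) ** 2

def grad(x):
    """Gradient of f; vanishes on the circle and at the strict saddle x = 0."""
    return 4.0 * (x @ x - 1.0) * x

rng = np.random.default_rng(6)
finals = []
for _ in range(20):
    x = rng.standard_normal(2)          # random initialization
    for _ in range(1000):
        x = x - 0.01 * grad(x)          # plain gradient descent
    finals.append(f(x))
```

Every run ends essentially at a global minimum: near the origin the negative curvature pushes iterates away from the saddle, and symmetry guarantees that whichever minimum is reached is as good as any other. The talk's results establish analogous (much harder) statements for structured problems like sparse blind deconvolution and dictionary learning.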
Oct 20, 2020
Speaker: Bihan Wen - ( Nanyang Technological University, Singapore )
Title: From Signal Processing to Machine Learning: How "Old" Ways Can Join The New
Abstract: Machine learning, and especially deep learning technologies, have made incredible progress in the past few years, enabling us to rethink how we integrate information, analyze data, and make decisions. While classic signal processing theory laid the foundation for applications ranging from computational imaging to computer vision, the advances of machine learning nowadays provide a more effective approach to data-driven solutions. Although deep learning methods have achieved state-of-the-art performance on many benchmark datasets, limitations remain in practice, such as robustness and data efficiency compared to classic approaches. In this talk, I will discuss some of our recent works on machine learning techniques for imaging and image processing problems, and show how they evolve from signal processing to building deep neural networks, while the "old" ways can also join the new. I will compare classic optimization with deep learning approaches, and discuss how one can reconcile their pros and cons towards a more effective and practical solution in image processing.
Video Recording [ Youtube ]
Nov 3, 2020
Speaker: Nicole Seiberlich - ( University of Michigan, USA )
Title: Bringing New Imaging Technologies to the Clinic
Abstract: Computational Imaging is changing the landscape of medical imaging. However, the process of moving a new imaging technology to routine clinical use can be challenging, requiring collaboration between engineers, physicians, and radiology staff. The lecture will describe some of the hurdles that must be overcome to move advanced imaging methods from ideas to practice, using Magnetic Resonance Fingerprinting as an example.
Nov 17, 2020
Speaker: Yoram Bresler - ( University of Illinois at Urbana-Champaign, USA )
Title: Two Topics in Deep Learning for Image Reconstruction: (i) Physics-based x-ray scatter correction for CT; and (ii) Adversarial training for improved robustness.
Abstract: In the first part of this talk, we consider an instance of a highly nonlinear, nonlocal inverse problem, for which even the forward problem is expensive to solve using conventional methods. Photon scattering in X-ray CT creates streak, cupping, and shading artifacts and decreased contrast in the reconstructions. We describe a physics-motivated deep-learning-based method to estimate and correct the scatter. In the second part of the talk, we address a recently demonstrated vulnerability of deep-learning-based image reconstruction to adversarial examples, which casts some doubt on the use of this methodology for mission-critical applications. We describe an adversarial training strategy for end-to-end deep-learning-based inverse problem solvers that provides significantly improved robustness.
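Adversarial training for an inverse-problem solver follows a min-max pattern: an inner loop finds the measurement perturbation that most degrades the reconstruction, and an outer loop updates the solver on those worst-case inputs. The sketch below instantiates this for a linear reconstructor on a toy forward model, with analytic gradients; it is an illustration of the pattern, not the speakers' method.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20
A = np.eye(n) + 0.05 * rng.standard_normal((n, n))   # toy forward operator
X = rng.standard_normal((n, 200))                    # training signals (columns)
Y = A @ X + 0.05 * rng.standard_normal((n, 200))     # noisy measurements

def worst_case_delta(W, y, x, eps, steps=10, lr=0.5):
    """Inner maximization: projected gradient ascent finds the measurement
    perturbation (l2 ball of radius eps) that most degrades W's reconstruction."""
    d = np.zeros_like(y)
    for _ in range(steps):
        r = W @ (y + d) - x
        d = d + lr * (W.T @ r)              # ascend the reconstruction error
        m = np.linalg.norm(d)
        if m > eps:
            d = d * (eps / m)               # project back onto the ball
    return d

W = np.linalg.pinv(A)                        # start from the naive inverse
eps, lr_outer = 0.5, 1e-3
for _ in range(50):                          # outer minimization over the solver W
    g = np.zeros_like(W)
    for j in range(20):                      # a small training batch
        y, x = Y[:, j], X[:, j]
        d = worst_case_delta(W, y, x, eps)
        r = W @ (y + d) - x
        g = g + np.outer(r, y + d)           # gradient of ||W(y+d) - x||^2 wrt W
    W = W - lr_outer * g / 20
```

With a deep network in place of W, the inner maximization is run by PGD on the input and the outer step by backpropagation, trading some clean-data accuracy for robustness to worst-case perturbations.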
Video Recording [ Youtube ]
Dec 1, 2020
Speaker: Singanallur V Venkatakrishnan - ( Oak Ridge National Laboratory, USA )
Title: Pushing the Limits of Scientific CT Instruments using Algorithms : Model-based and Data-Driven Approaches
Abstract: Computed Tomography (CT) systems play a vital role in making scientific discoveries in diverse fields including biology, materials science, and additive manufacturing. The first generation of CT systems typically relied on fast analytic inversion algorithms to reconstruct from the measurements. However, the performance of these algorithms can be poor when dealing with non-linearities in the measurements, high levels of noise, and the limited number of measurements that commonly occur when we seek to accelerate the acquisition.
In this talk, I will present algorithms for improving the performance of CT systems, enabling faster, more accurate, and novel imaging capabilities. The first part of the talk will focus on model-based image reconstruction (MBIR) algorithms that formulate the inversion as a high-dimensional optimization problem involving a data-fidelity term and a regularization term. By accurately modeling the physics and noise statistics of the measurement and combining them with useful regularizers, I will demonstrate how we can significantly improve the performance of neutron CT, X-ray micro-CT, and single-particle cryo-EM systems. In the last part of the talk, I will present recent results on using deep-learning-based algorithms for accelerated neutron/X-ray CT, and highlight challenges that exist in extending such techniques to scientific applications.
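The MBIR formulation described above, a weighted data-fidelity term plus a regularizer, can be sketched in a few lines for a toy 1D problem; here the regularizer is a simple quadratic smoothness penalty and the solver is plain gradient descent, both stand-ins for the more sophisticated choices used in practice.

```python
import numpy as np

def mbir(y, A, w, lam, iters=200):
    """Minimize 0.5*||Ax - y||_W^2 + 0.5*lam*||Dx||^2 by gradient descent,
    where W = diag(w) encodes the noise statistics and D is a finite-
    difference operator acting as a smoothness regularizer."""
    n = A.shape[1]
    D = (np.eye(n) - np.eye(n, k=1))[:-1]               # first differences
    L = np.linalg.norm(A, 2)**2 * w.max() + 4.0 * lam   # gradient Lipschitz bound
    step = 1.0 / L
    x = np.zeros(n)
    for _ in range(iters):
        grad = A.T @ (w * (A @ x - y)) + lam * (D.T @ (D @ x))
        x = x - step * grad
    return x

rng = np.random.default_rng(8)
n = 50
x_true = np.zeros(n); x_true[15:35] = 1.0             # piecewise-constant object
A = rng.standard_normal((40, n)) / np.sqrt(40)        # toy measurement operator
y = A @ x_true + 0.05 * rng.standard_normal(40)       # noisy, undersampled data
w = np.full(40, 1.0)                                  # noise-statistics weights
x_hat = mbir(y, A, w, lam=0.5)
```

Real CT MBIR replaces A with the scanner's projection geometry, w with Poisson-derived weights, and the quadratic penalty with edge-preserving or learned regularizers, but the data-fidelity-plus-regularizer structure is the same.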
Dec 15, 2020
Speaker: J. Webster Stayman - ( Johns Hopkins University, USA )
Title: Novel data acquisition and task-based optimization in computed tomography
Abstract: In this talk, new advances in data acquisition for computed tomography will be presented, including strategies for fluence-field modulation of the x-ray beam, non-circular/non-spiral source-detector trajectories, and methods for obtaining spectral information. The projection data obtained via these approaches often require advanced processing methods, due to complexities in sampling and the need for more sophisticated physical models. Moreover, the increased flexibility in acquisition creates an opportunity to optimize the data acquisition itself. Finding such optimal strategies requires quantification of imaging performance. In this work, we move beyond simple image quality metrics and present techniques for optimization based on the modeled and predicted performance of specific imaging tasks.
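One standard way to quantify task performance (hedged here as an illustration, not necessarily the metric used in this work) is the prewhitening ideal-observer detectability index d' = sqrt(ds^T Sigma^{-1} ds), where ds is the expected signal difference for the task (e.g. lesion present minus absent) and Sigma is the noise covariance induced by a candidate acquisition. Ranking acquisitions by d' is the essence of task-based optimization; the covariances below are made up.

```python
import numpy as np

def detectability(signal_diff, noise_cov):
    """Prewhitening ideal-observer detectability index:
    d' = sqrt(ds^T Sigma^{-1} ds)."""
    return float(np.sqrt(signal_diff @ np.linalg.solve(noise_cov, signal_diff)))

rng = np.random.default_rng(9)
n = 16
ds = np.zeros(n); ds[6:10] = 1.0                 # task: 'lesion present minus absent'
# two candidate acquisitions, modeled only through their noise covariance
cov_a = 0.5 * np.eye(n)                          # white noise
B = rng.standard_normal((n, n))
cov_b = 0.5 * np.eye(n) + 0.05 * (B @ B.T)       # extra correlated noise

d_a = detectability(ds, cov_a)
d_b = detectability(ds, cov_b)                   # d_b < d_a: acquisition a is better
```

In practice, ds and Sigma are predicted from the system model (fluence pattern, trajectory, spectrum) and the reconstruction method, so the acquisition parameters can be optimized directly against the predicted task performance.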