Scheduled Talks

Time and program details have changed: the SPACE webinar is now divided into WEST and EAST sessions to cater to audiences in different time zones.

  • WEST sessions will be held at 1:00pm New York Time (UTC-4), once a month on a Tuesday.

  • EAST sessions will be held at 7:00pm Beijing/Singapore Time (UTC+8), which is 8:00pm Korea Time (UTC+9) and 12:00pm London Time (UTC+1), once a month on a Tuesday.

For attendees in other time zones, please use the [ time zone converter ]. Talks will run approximately one hour, followed by Q&A and discussion.

New Season 3, 2021


Past Invited Talks in Season 3, 2021

  1. Sep 7, 2021

    • Speaker: Qing Qu ( University of Michigan )

    • Title: From Shallow to Deep Representation Learning in Imaging and Beyond: Global Nonconvex Theory and Algorithms.

    • Abstract: In this talk, we consider two fundamental problems in signal processing and machine learning: (convolutional) dictionary learning and deep network training. For both problems, we provide the first global nonconvex landscape analysis of the learned representations, which in turn provides new guiding principles for better model/architecture design, optimization, and robustness, in both supervised and unsupervised scenarios. More specifically, the first part of the talk focuses on (convolutional) dictionary learning (aka shallow representation learning) in the unsupervised setting, where we show that a nonconvex $\ell^4$ loss over the sphere has no spurious local minimizers. This further inspires us to design efficient optimization methods for convolutional dictionary learning, with applications in imaging sciences. Second, we study the last-layer representation in deep learning, where recent seminal work by Donoho et al. revealed a prevalent empirical phenomenon during the terminal phase of network training: neural collapse. By studying the optimization landscape of the training loss under an unconstrained feature model, we provide a theoretical justification for this phenomenon, which could have broad implications for network training, design, and beyond.

The talk is based upon two ICLR papers, including https://arxiv.org/abs/1908.10959, and one arXiv preprint: https://arxiv.org/abs/2105.02375.
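The "no spurious local minimizers" result for the $\ell^4$ loss can be illustrated numerically. The sketch below is not the authors' code: the model sizes, sparsity level, and the power-method-style fixed-point iteration are illustrative assumptions. It maximizes $\|q^\top Y\|_4^4$ over the unit sphere for data drawn from an orthonormal dictionary with sparse coefficients; the iterate should converge to a signed dictionary column.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sparse model: Y = D X with an orthonormal dictionary D and
# Bernoulli-Gaussian sparse codes X (all sizes are illustrative choices).
n, p, theta = 20, 5000, 0.1
D, _ = np.linalg.qr(rng.standard_normal((n, n)))   # ground-truth dictionary
X = rng.standard_normal((n, p)) * (rng.random((n, p)) < theta)
Y = D @ X

# Maximize f(q) = ||q^T Y||_4^4 over the unit sphere with a power-method-like
# fixed-point iteration q <- normalize(grad f(q)); the landscape result says
# every local maximizer is (approximately) a signed column of D.
q = rng.standard_normal(n)
q /= np.linalg.norm(q)
for _ in range(100):
    q = Y @ (q @ Y) ** 3        # gradient direction of the ell^4 objective
    q /= np.linalg.norm(q)      # project back onto the sphere

# q should align with one dictionary column, up to sign
corr = np.max(np.abs(D.T @ q))
print(f"max_j |<d_j, q>| = {corr:.3f}")   # close to 1 when recovery succeeds
```

Repeating the loop from fresh random initializations recovers the other columns, which is the single-vector version of learning the whole dictionary.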

  2. Sep 21, 2021

    • Speaker: Bin Dong ( Peking University )

    • Title: Data- and Task-Driven CT Imaging by Deep Learning

    • Abstract: In this talk, I will start with a brief review of the dynamics and optimal control perspective on deep learning (including supervised learning, reinforcement learning, and meta-learning). Then, I will present some of our recent studies on how this perspective may help us to advance CT imaging and image-based diagnosis further. Specifically, I will focus on our thoughts on how to combine the wisdom from mathematical modeling with ideas from deep learning. Such a combination leads to new data-driven/task-driven image reconstruction models and new data-driven scanning strategies for CT imaging, with the potential to generalize to other imaging modalities.

    • Video Recording [ Youtube ]

  3. Oct 5, 2021

    • Speaker: Wolfgang Heidrich ( KAUST )

    • Title: Deep Optics — Joint Design of Imaging Hardware and Reconstruction Methods

    • Abstract: Classical imaging systems are characterized by the independent design of optics, sensors, and image processing algorithms. In contrast, computational imaging systems are based on a joint design of two or more of these components, which allows for greater flexibility of the type of captured information beyond classical 2D photos, as well as for new form factors and domain-specific imaging systems. In this talk, I will describe how numerical optimization and learning-based methods can be used to achieve truly end-to-end optimized imaging systems that outperform classical solutions.

    • Video Recording [ Youtube ]

  4. Nov 2, 2021

    • Speaker: Salman Asif ( University of California, Riverside )

    • Title: Lensless Imaging with Programmable Masks and Illumination

    • Abstract: In this talk, I will present some of our recent work on lensless imaging using programmable optical elements. Existing methods for lensless imaging can recover the depth and intensity of the scene, but they require solving computationally expensive inverse problems. I will start with a discussion on how we can recover RGBD images using lensless cameras. I will then discuss how we can simplify the 3D recovery problem using a small number of measurements with a shifting or programmable mask. In a nutshell, the recovery process involves diagonalization of convolution operators in the Fourier domain and solving multiple small least-squares problems. I will also present a design template for optimizing the mask patterns with the goal of improving depth estimation and experimental results from a camera prototype. Finally, I will present some of our recent results on using coded illumination with lensless imaging that significantly improves the quality of reconstructed images.

    • Video Recording [ Youtube ]
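The Fourier-domain diagonalization step mentioned in the abstract can be sketched in a toy 1D setting. This is illustrative only: the binary mask, sparse scene, noise level, and regularization weight are made-up, and a real lensless camera involves a cropped rather than circular convolution.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1D stand-in for mask-based lensless imaging: the sensor measures a
# circular convolution of the scene with the mask's point spread function.
n = 256
x = np.zeros(n)
x[[40, 100, 180]] = [1.0, 0.6, 0.8]                      # sparse toy scene
h = (rng.random(n) < 0.5).astype(float)                  # hypothetical binary mask PSF
y = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)))  # forward model y = h (*) x
y += 1e-3 * rng.standard_normal(n)                       # sensor noise

# The DFT diagonalizes convolution, so the regularized least-squares problem
#   min_x ||h (*) x - y||^2 + lam * ||x||^2
# decouples into one scalar problem per frequency:
H, Y = np.fft.fft(h), np.fft.fft(y)
lam = 1e-1
x_hat = np.real(np.fft.ifft(np.conj(H) * Y / (np.abs(H) ** 2 + lam)))

err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
print(f"relative reconstruction error: {err:.3f}")
```

The per-frequency division is the "small least-squares problems" idea in miniature; with a shifting or programmable mask one gets several such diagonal systems that are solved jointly.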

  5. Nov 30, 2021

    • Speaker: Mathews Jacob ( University of Iowa )

    • Title: Model-based Deep Learning for Large-scale Inverse Problems

    • Abstract: Deep learning algorithms, which can learn from examples, are emerging as powerful alternatives to traditional convex optimization algorithms in a variety of inverse problems in imaging. While most of the solutions are readily applicable to small-scale problems, the application of these frameworks to large scale imaging problems (e.g., higher dimensional, dynamic, and high-resolution settings) is challenging. In particular, the available memory on current GPU systems and the lack of fully sampled and noise-free training data pose significant challenges. I will present some of the solutions from our group to overcome these challenges, with applications to high-resolution imaging and dynamic MRI.

    • Video Recording [ Youtube ]

  6. Dec 14, 2021

    • Speaker: Se Young Chun ( Seoul National University )

    • Title: Towards Deep Learning-Based Image Reconstruction With Model-Based Self-Supervision

    • Abstract: Model-based image reconstruction (MBIR) solves inverse problems in medical imaging by exploiting accurate forward models for the imaging system, prior models for the image, and optimization algorithms. MBIR has contributed to fast MR imaging, low-dose CT imaging, PET/SPECT imaging, and more. Recently, the success of deep learning has affected the field of medical imaging, and deep learning-based image reconstruction (DLBIR) has been actively investigated, outperforming MBIR with much faster computation. Unlike MBIR, DLBIR does not use forward and prior models, but learns a mapping from sub-optimal images (e.g., from compressive sampling without a prior) to corresponding optimal images (e.g., from Nyquist sampling). However, what if exquisite ground truth is not available? What if DLBIR is "the" best method to reconstruct images that should be used as ground truth? There have been recent efforts on incorporating MBIR into DLBIR (or vice versa) for model-based self-supervision (MBSS). My lab has also been investigating the use of the noise models of imaging systems through Stein's unbiased risk estimator (SURE) and its extensions for self-supervised training of deep neural networks (DNNs). In this talk, I will elaborate on these works to train DNNs from noisy and/or undersampled measurements with MBSS for denoising, compressive image recovery, fast MR reconstruction, low-dose CT reconstruction, and recently PET reconstruction.

    • Video Recording [ Youtube ]
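The SURE principle mentioned in the abstract can be sketched in its Monte-Carlo form, where the divergence of the denoiser is estimated with a random probe. The linear shrinkage "denoiser" and all numbers below are toy assumptions for checking the estimator against a closed-form answer, not the speaker's models.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_sure(f, y, sigma, eps=1e-3):
    """Monte-Carlo SURE: unbiased estimate of the per-pixel MSE of a
    denoiser f applied to y = x + N(0, sigma^2), without ground truth x."""
    n = y.size
    fy = f(y)
    b = rng.standard_normal(y.shape)
    # Divergence of f at y, estimated by a finite-difference probe
    div = b.ravel() @ (f(y + eps * b) - fy).ravel() / eps
    return np.sum((fy - y) ** 2) / n - sigma**2 + 2 * sigma**2 * div / n

# Toy check with a linear shrinkage denoiser f(y) = t*y, whose true MSE
# has the closed form (1-t)^2 * ||x||^2/n + t^2 * sigma^2.
sigma, t = 0.5, 0.7
x = np.ones(100_000)
y = x + sigma * rng.standard_normal(x.shape)
sure = mc_sure(lambda z: t * z, y, sigma)
true_mse = (1 - t) ** 2 + t**2 * sigma**2
print(sure, true_mse)   # the two values should agree closely
```

The same estimator works when `f` is a neural network, which is what makes SURE usable as a self-supervised training loss when clean images are unavailable.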


Past Invited Talks in Season 2, 2021

  1. Feb 9, 2021

    • Speaker: YongKeun Park ( KAIST )

    • Title: Quantitative phase imaging and artificial intelligence: label-free 3D imaging, classification, and inference

    • Abstract: Quantitative phase imaging (QPI) exploits the refractive index (RI) distribution as an intrinsic imaging contrast and enables label-free quantitative imaging [1,2]. Optical diffraction tomography (ODT), also called holotomography (HT), is a 3D QPI technique that uses laser interferometry to measure the 3-D RI distribution [3]; it is an optical analogue of X-ray computed tomography (CT). ODT measures multiple 2-D holograms of a sample at various illumination angles, from which the 3-D RI distribution of the sample is reconstructed by inversely solving the wave equation, making it a powerful tool for imaging small transparent objects such as biological cells and tissues. We present a rapid and label-free method for single-cell analysis that combines QPI and machine learning: by exploiting the label-free, quantitative 3D imaging capability of ODT, the measured 3D RI tomograms of individual cells are analyzed with various machine learning algorithms. We will discuss the potentials and challenges of combining QPI and artificial intelligence, in particular for label-free imaging of individual bacteria and species classification. The combination of QPI and AI will open new avenues for label-free bioimaging. Although this talk focuses on label-free classification of cell types, QPI and AI will also be useful for various aspects of imaging and analysis, including phase retrieval, tomographic reconstruction, segmentation, imaging inference, and noise reduction.

    • Video Recording: [ Download ]

  2. Feb 23, 2021

    • Speaker: Pier Luigi Dragotti ( Imperial College London )

    • Title: Computational Imaging for Art Investigation: Revealing Hidden Drawings in Leonardo’s Paintings

    • Abstract: The heritage sector is experiencing a digital revolution driven in part by the increasing use of non-invasive, non-destructive imaging techniques. These techniques range from visible images, images taken using different forms of radiation, e.g., infrared and X-ray, as well as images derived using new spectroscopic imaging techniques such as, for example, Macro X-Ray Fluorescence (MA-XRF). These new imaging methods provide a non-destructive way to capture information about an entire painting and can give us information about features at or below the surface of the painting. This is important to support art historical research to interpret and contextualize a collection and to help conserve and care for the collection. At the same time, these new imaging methods also provide new exciting ways of engaging with objects from our cultural heritage. However, the wealth of digital data generated by these instruments calls for new computational approaches to extract as much information as possible from these often very large datasets.

In this talk, we focus on macro X-Ray Fluorescence (XRF) scanning, a technique for mapping the chemical elements in paintings. After describing in broad terms the working of this device, a method that can process huge amounts of XRF scanning data from paintings fully automatically is introduced. The method is based on connecting the problem of extracting elemental maps from XRF data to Prony's method, a technique broadly used in engineering to estimate the frequencies of a sum of sinusoids. The results presented show the ability of our method to detect and separate weak signals related to hidden chemical elements in the paintings. We then discuss results on Leonardo’s “The Virgin of the Rocks” and show that our algorithm is able to reveal, more clearly than ever before, the hidden drawings of a previous composition that Leonardo then abandoned for the painting that we can now see. Finally, we discuss an image separation problem related to the visualization of concealed features in a painting and present a separation method based on the use of a connected auto-encoder that exploits information from the visible image.

This work is done in collaboration with the National Gallery and University College London and is supported by the Engineering and Physical Sciences Research Council (EPSRC).
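Prony's method, which the abstract connects to elemental-map extraction, can be sketched for its textbook use case: estimating the frequencies of a sum of complex exponentials from uniform samples. The frequencies and sample count below are arbitrary illustrations, not painting data.

```python
import numpy as np

def prony_freqs(y, K):
    """Estimate K normalized frequencies from samples of a sum of K
    complex exponentials, via the linear-prediction (Prony) polynomial."""
    N = len(y)
    # Linear prediction: y[n] = -(a1*y[n-1] + ... + aK*y[n-K]) for n >= K
    A = np.column_stack([y[K - 1 - k : N - 1 - k] for k in range(K)])
    b = -y[K:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Frequencies are the angles of the roots of z^K + a1*z^(K-1) + ... + aK
    roots = np.roots(np.concatenate(([1.0 + 0j], a)))
    return np.sort(np.angle(roots) / (2 * np.pi))

true_f = np.array([-0.31, 0.12, 0.27])          # illustrative frequencies
n = np.arange(64)
y = sum(np.exp(2j * np.pi * f * n) for f in true_f)

est_f = prony_freqs(y, K=3)
print(est_f)   # should closely match sorted true_f in this noiseless setting
```

With noisy data the least-squares step is typically replaced by more robust variants, but the root-finding structure, which is what the XRF work exploits, stays the same.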

  3. Mar 9, 2021

    • Speaker: Gordon Wetzstein ( Stanford University )

    • Title: Towards Neural Signal Processing and Imaging

    • Abstract: Computational imaging leverages the co-design of hardware and software to re-define next-generation camera and display systems. In this talk, we discuss recent advances in artificial intelligence-driven computational imaging techniques, including single-photon imaging for non-line-of-sight vision and 3D imaging through highly scattering media, as well as end-to-end optimization of optics and image processing algorithms for unlocking unprecedented capabilities in depth imaging and hybrid optical-electronic computing. We will also discuss recent progress on implicitly defined neural-network-parameterized signal representation, processing, and rendering techniques.

    • Video Recording [ Youtube ]

  4. Mar 24, 2021

    • Speaker: Yonina Eldar ( Weizmann Institute of Science, Israel )

    • Title: Model Based Deep Learning: Applications to Imaging and Communications

    • Abstract: Deep neural networks provide unprecedented performance gains in many real-world problems in signal and image processing. Despite these gains, the future development and practical deployment of deep networks are hindered by their black-box nature, i.e., a lack of interpretability and the need for very large training sets. On the other hand, signal processing and communications have traditionally relied on classical statistical modeling techniques that utilize mathematical formulations representing the underlying physics, prior information and additional domain knowledge. Simple classical models are useful but sensitive to inaccuracies and may lead to poor performance when real systems display complex or dynamic behavior. Here we introduce various approaches to model-based learning which merge parametric models with optimization tools, leading to efficient, interpretable networks from reasonably sized training sets. We will consider examples of such model-based deep networks applied to image deblurring, image separation, super resolution in ultrasound and microscopy, and efficient communications systems, and finally we will see how model-based methods can also be used for efficient diagnosis of COVID-19 using X-ray and ultrasound.

    • Video Recording [ Youtube ]

  5. Apr 6, 2021

    • Speaker: Ivan Dokmanić ( University of Basel, Switzerland )

    • Title: Learning the Geometry of Wave-Based Imaging

    • Abstract: A key difficulty in wave imaging with a varying background wave speed is that the medium “bends” the waves differently depending on their position and direction. This space-bending geometry makes the inductive biases of the usual neural networks (say, convolutional) unsuitable for many wave-based inverse problems. To address this issue, we propose the FIONet, a neural architecture derived from wave physics. Instead of directly using the wave equation, the FIONet models the geometry of wave propagation as captured by Fourier integral operators (FIOs). FIOs appear in the description of a wide range of wave-based imaging modalities, from seismology and radar to Doppler and ultrasound. Their geometry is characterized by how they propagate singularities; we show how to learn it using optimal transport in the wave packet representation. The FIONet performs significantly better than strong baselines on a number of problems, especially when applied to out-of-distribution data.

    • Video Recording [ Youtube ]

  6. Apr 20, 2021

    • Speaker: Ori Katz ( Hebrew University of Jerusalem )

    • Title: Imaging with scattered light: Exploiting speckle to see deeper and sharper

    • Abstract: Scattering of light in complex samples such as biological tissue renders most samples opaque to conventional optical imaging techniques, a problem of great practical importance. However, although random, scattering of coherent light generates speckle patterns with universal statistics and angular and spatial correlations, which allow computational retrieval of diffraction-limited images. Furthermore, the random temporal fluctuations of speckle patterns, formed by light propagation in dynamic samples, can be exploited for super-resolution photo-acoustic and acousto-optic imaging, deep inside scattering samples. I will present the fundamental principles and limitations of these novel approaches, as well as some of our recent efforts in utilizing these new insights for the development of novel endoscopes.

    • Video Recording [ Youtube ]

  7. May 4, 2021

    • Speaker: Lei Tian ( Boston University )

    • Title: Model and learning strategies for computational 3D phase microscopy

    • Abstract: Intensity Diffraction Tomography (IDT) is a new computational microscopy technique providing quantitative 3D phase imaging of biological samples. IDT can be easily implemented in a standard microscope equipped with an LED array source and requires no exogenous contrast agents, making the technology easily accessible to the biological research community. In this seminar, I will present both model and learning strategies for improving the imaging capabilities of IDT for handling complex 3D objects. I will discuss our recent effort in building a physical-model-simulator-trained neural network for imaging multiple-scattering dynamic biological samples. Our work highlights that large-scale multiple-scattering models can be leveraged in place of acquiring experimental datasets for achieving highly generalizable deep learning models.

    • Video Recording [ Youtube ]

  8. May 18, 2021

    • Speaker: Rebecca Willett ( University of Chicago )

    • Title: Machine Learning and Inverse Problems in Imaging

    • Abstract: Many challenging image processing tasks can be described by an ill-posed linear inverse problem: deblurring, deconvolution, inpainting, compressed sensing, and superresolution all lie in this framework. Recent advances in machine learning and image processing have illustrated that it is often possible to learn inverse problem solvers from training data that can outperform more traditional approaches by large margins. These promising initial results lead to a myriad of mathematical and computational challenges and opportunities at the intersection of optimization theory, signal processing, and inverse problem theory. In this talk, we will explore several of these challenges and the foundational tradeoffs that underlie them. First, we will examine how knowledge of the forward model can be incorporated into learned solvers and its impact on the amount of training data necessary for accurate solutions. Second, we will see how the convergence properties of many common approaches can be improved, leading to substantial empirical improvements in reconstruction accuracy. Finally, we will consider mechanisms that leverage learned solvers for one inverse problem to develop improved solvers for related inverse problems. This is joint work with Davis Gilton and Greg Ongie.

    • Video Recording [ Youtube ]

  9. June 1, 2021

    • Speaker: Marvin M. Doyley ( University of Rochester )

    • Title: Elastography from theory to practice

    • Abstract: Elastography is emerging as an imaging technique for visualizing the mechanical properties within biological tissues. For the last 15 years, my group has been actively developing inverse reconstruction techniques for computing mechanical parameters (shear modulus) from tissue displacements measured with conventional imaging modalities (ultrasound, magnetic resonance imaging, and optical coherence tomography). In addition to developing a computational framework for solving the inverse reconstruction problem, my group is also evaluating the role of elastography in:

  1. Understanding how the tumor microenvironment impacts the stiffness of the extracellular matrix.

  2. Visualizing the structural properties of life-threatening atherosclerotic plaques.

  3. Robotic-assisted breast scanning.

In this talk, I will discuss the general principles of elastography. I will demonstrate that elastography improves the differential diagnosis of breast cancer and cardiovascular disease, and provides valuable insight into how the pancreatic cancer tumor microenvironment changes during therapy. I will also discuss the role that advanced machine learning algorithms may play in elastography.

  10. June 23, 2021

    • Speaker: Sabine Süsstrunk ( EPFL )

    • Title: Opponency Revisited

    • Abstract: According to the efficient coding hypothesis, the goal of the visual system should be to encode the information presented to the retina with as little redundancy as possible. From a signal processing point of view, the first step in removing redundancy is de-correlation, which removes the second order dependencies in the signal. This principle was explored in the context of trichromatic vision by Buchsbaum and Gottschalk (1) and later Ruderman et al. (2), who found that linear de-correlation of the LMS cone responses matches the opponent color coding in the human visual system. In this talk, I will illustrate with several examples from our research that considering opponent colors can significantly improve image processing and computer vision tasks. We have in addition extended the concept of “color opponency” to include near-infrared, and we found that the de-correlation concept also applies to deep learning models in rather interesting ways.

    • Video Recording [ Youtube ]
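The de-correlation argument can be sketched numerically with simulated, not measured, cone responses: when the three channels share a common intensity signal, the leading eigenvector of the channel covariance is an achromatic all-positive axis, and the remaining de-correlated axes necessarily take opponent (mixed-sign) form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for LMS cone responses: a shared intensity term makes the
# three channels strongly correlated, as in natural scenes (simulated data,
# in the spirit of Buchsbaum & Gottschalk / Ruderman et al.).
n = 100_000
light = rng.random(n)
lms = np.stack([light + 0.1 * rng.standard_normal(n) for _ in range(3)])

# Linear de-correlation = eigendecomposition of the channel covariance.
cov = np.cov(lms)
eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
principal = eigvecs[:, -1]                  # direction of largest variance

# The dominant component is achromatic: all channel weights share one sign.
# The other components, being orthogonal to it, must mix signs (opponency).
print(np.sign(principal), eigvecs[:, 0], eigvecs[:, 1])
```

The sign structure is forced by the geometry: any axis orthogonal to an all-positive vector must contain both positive and negative weights, which is exactly the color-opponent form.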

Past Invited Talks in Season 1, 2020

  1. May 19, 2020

    • Speaker: Raja Giryes ( Tel Aviv University )

    • Title: Joint Design of Optics and Post-Processing Algorithms Based on Deep Learning for Generating Advanced Imaging Features

    • Abstract: After the tremendous success of deep learning (DL) for image processing and computer vision applications, these days almost every signal processing task is analyzed using such tools. In the presented work, the DL design revolution is brought one step deeper, into the optical image formation process. By considering the lens as an analog signal processor of the incoming optical wavefront (originating from the scene), the optics is modeled as an additional 'layer' in a DL model, and its parameters are optimized jointly with the 'conventional' DL layers, end-to-end. This design scheme allows the introduction of unique feature encoding in the intermediate optical image, since the lens 'has access' to information that is lost in conventional 2D imaging. Therefore, such design allows a holistic design of the entire IP/CV system. The proposed design approach will be presented with several applications: an extended Depth-Of-Field (DOF) camera; a passive depth estimation solution based on a single image from a single camera; non-uniform motion deblurring; and an enhanced stereo camera with extended dynamic range and self-calibration abilities. Experimental results will be presented and discussed. This is a joint work with Shay Elmalem, Harel Haim, Yotam Gil, Alex Bronstein and Emanuel Marom.

    • Slides [ Download ]

  2. June 2, 2020

    • Speaker: Laura Waller ( UC Berkeley )

    • Title: End-To-End Learning for Computational Microscopy

    • Abstract: Computational imaging involves the joint design of imaging system hardware and software, optimizing across the entire pipeline from acquisition to reconstruction. Computers can replace bulky and expensive optics by solving computational inverse problems. This talk will describe end-to-end learning for development of new microscopes that use computational imaging to enable 3D fluorescence and phase measurement. Traditional model-based image reconstruction algorithms are based on large-scale nonlinear non-convex optimization; we combine these with unrolled neural networks to learn both the image reconstruction algorithm and the optimized data capture strategy.

    • Video Recording [ Youtube ]

  3. June 16, 2020

    • Speaker: Michael Unser ( Ecole Polytechnique Fédérale de Lausanne )

    • Title: CryoGAN: A novel paradigm for single-particle analysis and 3D reconstruction in cryo-EM microscopy

    • Abstract: Single-particle cryo-EM has revolutionized structural biology over the last decade and remains an active topic of research. In particular, the reconstruction task is an enduring technical challenge due to the imaged 3D particles having unknown orientations. Scientists have spent the better part of the last 30 years designing a solid computational pipeline that can reliably deliver 3D structures with atomic resolution. The result is an intricate multi-step procedure that permits the regular discovery of new structures, but that is still prone to overfitting and irreproducibility. The most notable difficulties with the current paradigm are the need for pose-estimation methods, the reliance on user expertise for appropriate parameter tuning, and the non-straightforward extension to the handling of structural heterogeneity. To overcome these limitations, we recently proposed a completely new paradigm for single-particle cryo-EM reconstruction that leverages the remarkable capability of deep neural networks to capture data distributions. Based on an adversarial learning scheme, the CryoGAN algorithm can resolve a 3D structure in a single algorithmic run using only the dataset of picked particles and CTF estimations as inputs. Hence, CryoGAN bypasses many cumbersome processing steps, including the delicate pose-estimation procedure. The algorithm is completely unsupervised, does not rely on an initial volume estimate, and requires minimal user interaction. To the best of our knowledge, CryoGAN is the first demonstration of a deep-learning architecture able to singlehandedly perform the full single-particle cryo-EM reconstruction procedure without prior training. This is joint work with Harshit Gupta, Michael T. McCann, and Laurène Donati.

    • Video Recording [ Youtube ]

  4. June 30, 2020

    • Speaker: Katie Bouman ( Caltech )

    • Title: Capturing the First Image of a Black Hole & Designing the Future of Black Hole Imaging

    • Abstract: This talk will present the methods and procedures used to produce the first image of a black hole from the Event Horizon Telescope, as well as discuss future developments. It had been theorized for decades that a black hole would leave a "shadow" on a background of hot gas. Taking a picture of this black hole shadow would help to address a number of important scientific questions, both on the nature of black holes and the validity of general relativity. Unfortunately, due to its small size, traditional imaging approaches require an Earth-sized radio telescope. In this talk, I discuss techniques the Event Horizon Telescope Collaboration has developed to photograph a black hole using the Event Horizon Telescope, a network of telescopes scattered across the globe. Imaging a black hole’s structure with this computational telescope required us to reconstruct images from sparse measurements, heavily corrupted by atmospheric error. This talk will summarize how the data from the 2017 observations were calibrated and imaged, and explain some of the challenges that arise with a heterogeneous telescope array like the EHT. The talk will also discuss future developments, including how we are developing machine learning methods to help design future telescope arrays.

    • Video Recording [ Youtube ]

  5. July 14, 2020

    • Speaker: Jong Chul Ye ( KAIST )

    • Title: Optimal transport driven CycleGAN for unsupervised learning in inverse problems

    • Abstract: The penalized least squares (PLS) is a classic method to solve inverse problems, where a regularization term is added to stabilize the solution. Optimal transport (OT) is another mathematical framework that has recently received significant attention from the computer vision community, for it provides a means to transport one distribution to another in an unsupervised manner. The cycle-consistent generative adversarial network (cycleGAN) is a recent extension of GAN to learn target distributions with less mode-collapsing behavior. Although similar in that no supervised training is required, the algorithms look different, so the mathematical relationship between these approaches is not clear. In this talk, I explain an important advance to unveil the missing link. Specifically, we propose a novel PLS cost to measure the sum of distances in the measurement space and the latent space. When used as a transportation cost for optimal transport, we show that this new PLS cost leads to a novel cycleGAN architecture as a Kantorovich dual OT formulation. One of the most important advantages of this formulation is that, depending on the knowledge of the forward problem, distinct variations of the cycleGAN architecture can be derived. The new cycleGAN formulation has been applied to various imaging problems, such as accelerated magnetic resonance imaging (MRI), super-resolution/deconvolution microscopy, low-dose x-ray computed tomography (CT), satellite imagery, etc. Experimental results confirm the efficacy and flexibility of the theory.

    • Video Recording [ Youtube ]

  6. July 28, 2020

    • Speaker: Orazio Gallo ( NVIDIA Research )

    • Title: Depth Estimation from RGB Images with Applications to Novel View Synthesis and Autonomous Navigation

    • Abstract: Depth information is a central requirement for many computer vision and computational imaging applications. A number of sensors exist that can capture depth directly. Standard RGB cameras offer a particularly attractive alternative thanks to their lower price point and widespread availability, but they also introduce new challenges. In this talk I will address two of their main challenges. The first challenge is dynamic content. When a scene is captured with a monocular camera, moving objects break the epipolar constraints, thus making it impossible to directly estimate depth. I will describe a method to address this issue while also improving the quality of the depth estimation in the static regions of the scene. I will then use the resulting depth to synthesize novel views of the scene or to create effects like the bullet-time effect, but without the need for synchronized cameras. The issue of dynamic content can also be addressed by using multiple cameras simultaneously, as is the case of stereo. However, while state-of-the-art, deep-learning stereo algorithms produce high-quality depth, they are far from real time, a central requirement for applications such as autonomous navigation. I will present Bi3D, a stereo algorithm that tackles this second challenge. Bi3D allows one to trade depth quantization for latency. Given a strict time budget, Bi3D can detect objects closer than a given distance D in as little as 5ms. It can also estimate depth with arbitrarily coarse quantization and complexity linear with the number of quantization levels. For instance, it takes 9.8ms to estimate a 2-bit depth map or 18.5ms for a 3-bit depth map. Bi3D can also use the allotted quantization levels to get regular, continuous depth, but in a specific depth range.

    • Video Recording [ Youtube ]

  7. Aug 11, 2020

    • Speaker: Xiao Xiang Zhu ( TUM, Germany )

    • Title: Data Science in Earth Observation

    • Abstract: Geoinformation derived from Earth observation satellite data is indispensable for many scientific, governmental and planning tasks. Geoscience, atmospheric sciences, cartography, resource management, civil security, disaster relief, as well as planning and decision support are just a few examples. Furthermore, Earth observation has irreversibly arrived in the Big Data era, e.g. with ESA’s Sentinel satellites and with the blooming of NewSpace companies. This requires not only new technological approaches to manage and process large amounts of data, but also new analysis methods. Here, methods of data science and artificial intelligence (AI), such as machine learning, become indispensable.

In this talk, explorative signal processing and machine learning algorithms, such as compressive sensing and deep learning, will be shown to significantly improve information retrieval from remote sensing data and consequently lead to breakthroughs in geoscientific and environmental research. In particular, by fusing petabytes of EO data, from satellites to social media, with tailored and sophisticated data science algorithms, it is now possible to tackle unprecedented, large-scale, influential challenges such as the mapping of global urbanization, one of the most important megatrends of global change.

  8. Aug 25, 2020

    • Speaker: Saiprasad Ravishankar ( MSU, USA )

    • Title: From Transform Learning to Deep Learning and Beyond for Imaging

    • Abstract: The next generation of computational imaging systems is expected to be increasingly data-driven. There is growing interest in learning effective models for image reconstruction from limited training data, which would benefit many applications. In this talk, we will first review efficient methods for learning sparsifying transform models for signals, along with convergence analysis. Various properties can be enforced during transform learning, such as sparsity in a union of transforms with clustering, non-local sparsity, or multi-layer models, which help improve image reconstruction quality over conventional schemes in applications such as X-ray computed tomography (CT) and magnetic resonance imaging. Sparsifying transforms are often learned in an unsupervised manner from (unpaired) images, or on-the-fly from measurements, and used to aid model-based image reconstruction that combines physics-based forward models, noise models, and image priors. We show that these sparsity-based models can also be trained in a supervised manner for optimal reconstruction quality, either by learning the model parameters from (deep) unrolled block coordinate descent algorithms for transform-regularized problems, or by directly learning the sparsity regularizer itself within model-based reconstruction formulations. The latter approach leads to a challenging bilevel training optimization problem, for which we analyze the optimization landscape with $\ell^1$ sparsity and propose new algorithms without approximations or relaxations. Our initial results show promise for such supervised operator learning in denoising compared to popular methods. We then propose unified bilevel machine learning formulations that effectively combine the benefits of imaging forward models, noise models, supervised deep learning, unsupervised learning-based models, and analytical priors. Our unified approach substantially improves image reconstruction quality in low-dose X-ray CT compared to popular recent methods. We conclude with a brief discussion of ongoing and future research pathways.

    • Video Recording [ Youtube ]
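
As a minimal illustration of the sparsifying-transform learning reviewed in this talk (our sketch, not the speaker's code; the threshold and iteration counts are toy choices), one can alternate exact transform-domain sparse coding by hard thresholding with a closed-form orthogonal Procrustes update of a unitary transform W:

```python
import numpy as np

def hard_threshold(A, thresh):
    """Transform-domain sparse coding: zero entries below thresh in magnitude."""
    return np.where(np.abs(A) >= thresh, A, 0.0)

def learn_transform(X, n_iters=20, thresh=0.5, seed=0):
    """X: (n, N) data matrix, columns are signals. Returns orthonormal W (n, n)."""
    rng = np.random.default_rng(seed)
    W, _ = np.linalg.qr(rng.standard_normal((X.shape[0], X.shape[0])))
    for _ in range(n_iters):
        Z = hard_threshold(W @ X, thresh)   # sparse codes in the transform domain
        # min_W ||W X - Z||_F over orthonormal W is a Procrustes problem:
        # with X Z^T = U S V^T, the minimizer is W = V U^T.
        U, _, Vt = np.linalg.svd(X @ Z.T)
        W = Vt.T @ U.T
    return W
```

Both sub-steps are exact and cheap, which is the efficiency advantage of transform models over synthesis dictionary learning.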

  9. Sep 8, 2020

    • Speaker: Anat Levin ( Technion, Israel )

    • Title: Rendering speckle statistics in scattering media and its applications in computational imaging

    • Abstract: We present a Monte Carlo rendering framework for the physically-accurate simulation of speckle patterns arising from volumetric scattering of coherent waves. These noise-like patterns are characterized by strong statistical properties, such as the so-called memory effect. These properties are at the core of imaging techniques for applications as diverse as tissue imaging, motion tracking, and non-line-of-sight imaging. Our rendering framework can replicate these properties computationally, in a way that is orders of magnitude more efficient than alternatives based on directly solving the wave equations. At the core of our framework is a path-space formulation for the covariance of speckle patterns arising from a scattering volume, which we derive from first principles. We use this formulation to develop two Monte Carlo rendering algorithms, for computing speckle covariance as well as speckle fields directly. While approaches based on wave equation solvers require knowing the microscopic position of wavelength-sized scatterers, our approach takes as input only bulk parameters describing the statistical distribution of these scatterers inside a volume. We validate the accuracy of our framework by comparing against speckle patterns simulated using wave equation solvers, use it to simulate memory effect observations that were previously only possible through lab measurements, and demonstrate its applicability for computational imaging tasks. In particular, we show an order of magnitude extension of the angular range at which one can use speckle correlations to see through a scattering volume.

    • Video Recording [ Youtube ]
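
For background only: the path-space Monte Carlo machinery this framework builds on is the textbook random walk for volumetric scattering. The sketch below (our illustration with arbitrary parameters; the speaker's renderer tracks coherent fields and covariances, not just intensities) estimates transmission through a scattering slab by sampling exponential free paths and isotropic scattering directions:

```python
import numpy as np

def slab_transmission(L=1.0, sigma_t=2.0, albedo=0.9, n_photons=20000, seed=0):
    """Fraction of photons transmitted through a slab of thickness L with
    extinction coefficient sigma_t and single-scattering albedo."""
    rng = np.random.default_rng(seed)
    transmitted = 0
    for _ in range(n_photons):
        z, mu = 0.0, 1.0                   # depth and direction cosine
        while True:
            # Exponential free-path sampling between interactions.
            z += mu * -np.log(1.0 - rng.random()) / sigma_t
            if z >= L:
                transmitted += 1           # photon exits the far side
                break
            if z < 0 or rng.random() > albedo:
                break                      # escaped backward, or absorbed
            mu = rng.uniform(-1.0, 1.0)    # isotropic scattering: uniform cosine
    return transmitted / n_photons
```

With albedo = 0 every scattering event absorbs the photon, so the estimate should recover the ballistic Beer-Lambert value exp(-sigma_t * L).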

  10. Oct 6, 2020

    • Speaker: John Wright ( Columbia University, USA )

    • Title: Geometry and Symmetry in (some!) Nonconvex Optimization Problems

    • Abstract: Nonconvex optimization plays an important role in a wide range of areas of science and engineering, from learning feature representations for visual classification, to reconstructing images in biology, medicine and astronomy, to disentangling spikes from multiple neurons. The worst-case theory for nonconvex optimization is dismal: in general, even guaranteeing a local minimum is NP-hard. However, in these and other applications, very simple iterative methods often perform surprisingly well.

In this talk, I will discuss a family of nonconvex optimization problems that can be solved to global optimality using simple iterative methods, which succeed independent of initialization. This family includes certain model problems in feature learning, imaging and scientific data analysis. These problems possess a characteristic structure, in which (i) all local minima are global, and (ii) the optimization landscape does not have any flat saddle points. I will describe how these features arise naturally as a consequence of problem symmetries, and how they lead to new types of performance guarantees for efficient methods. I will motivate these problems from microscopy, astronomy and computer vision, and show applications of our results in these domains. This talk includes joint work with Yuqian Zhang, Qing Qu, Ju Sun, Henry Kuo, Yenson Lau, Dar Gilboa, and Abhay Pasupathy.
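
A toy illustration of a benign symmetric landscape of the kind described above (our example, not one of the speaker's model problems): f(x) = (||x||^2 - 1)^2 is invariant under rotations of x, its minimizers form the whole unit sphere and are all global, and plain gradient descent reaches one from any nonzero random initialization.

```python
import numpy as np

def minimize_f(x0, step=0.01, n_iters=1000):
    """Gradient descent on f(x) = (||x||^2 - 1)^2."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        grad = 4.0 * (x @ x - 1.0) * x   # gradient of (||x||^2 - 1)^2
        x = x - step * grad
    return x

rng = np.random.default_rng(0)
solutions = [minimize_f(rng.standard_normal(3)) for _ in range(5)]
# Every run converges to some point on the unit sphere, i.e. a global minimum;
# the only other critical point, x = 0, is a strict local maximum, not a flat saddle.
```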

  11. Oct 20, 2020

    • Speaker: Bihan Wen ( Nanyang Technological University, Singapore )

    • Title: From Signal Processing to Machine Learning: How "Old" Ways Can Join The New

    • Abstract: Machine learning, and especially deep learning technologies, have made incredible progress in the past few years, enabling us to rethink how we integrate information, analyze data, and make decisions. While classic signal processing theory laid the foundation for applications ranging from computational imaging to computer vision, the advances of machine learning nowadays provide a more effective approach to data-driven solutions. Yet while deep learning methods have achieved state-of-the-art performance over many benchmark datasets, they still have limitations in practice, such as robustness and data efficiency compared to the classic approaches. In this talk, I will discuss some of our recent work on machine learning techniques for imaging and image processing problems, and show how they evolve from signal processing to building deep neural networks, while the "old" ways can also join the new. I will compare classic optimization with deep learning approaches, and discuss how one can reconcile their pros and cons toward a more effective and practical solution in image processing.

    • Video Recording [ Youtube ]

  12. Nov 3, 2020

    • Speaker: Nicole Seiberlich ( University of Michigan, USA )

    • Title: Bringing New Imaging Technologies to the Clinic

    • Abstract: Computational Imaging is changing the landscape of medical imaging. However, the process of moving a new imaging technology to routine clinical use can be challenging, requiring collaboration between engineers, physicians, and radiology staff. The lecture will describe some of the hurdles that must be overcome to move advanced imaging methods from ideas to practice, using Magnetic Resonance Fingerprinting as an example.

  13. Nov 17, 2020

    • Speaker: Yoram Bresler ( University of Illinois at Urbana-Champaign, USA )

    • Title: Two Topics in Deep Learning for Image Reconstruction: (i) Physics-based X-ray scatter correction for CT; and (ii) Adversarial training for improved robustness

    • Abstract: In the first part of this talk, we consider an instance of a highly nonlinear, nonlocal inverse problem for which even the forward problem is expensive to solve using conventional methods. Photon scattering in X-ray CT creates streak, cupping, and shading artifacts and decreased contrast in the reconstructions. We describe a physics-motivated, deep-learning-based method to estimate and correct the scatter. In the second part of the talk, we address a recently demonstrated vulnerability of image reconstruction by deep learning to adversarial examples, which casts some doubt on the use of this methodology for mission-critical applications. We describe an adversarial training strategy for end-to-end deep-learning-based inverse problem solvers that provides significantly improved robustness.

    • Video Recording [ Youtube ]
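
A hedged sketch of adversarial training for an end-to-end inverse-problem solver, shrunk to a *linear* toy reconstructor x_hat = W y so all gradients have closed forms (the model, step sizes, and eps-ball threat model are illustrative assumptions, not the speaker's method). The inner loop runs projected gradient ascent on a norm-bounded measurement perturbation; the outer loop descends on the reconstructor weights against that worst case:

```python
import numpy as np

def adversarial_train(A, X, eps=0.1, lr=1e-3, inner_steps=5, outer_steps=100):
    """A: (m, n) forward operator; X: (n, N) training signals as columns.
    Returns a linear reconstructor W trained on perturbed measurements."""
    Y = A @ X                              # clean measurements y = A x
    W = np.linalg.pinv(A)                  # warm start at the pseudo-inverse
    for _ in range(outer_steps):
        delta = np.zeros_like(Y)
        for _ in range(inner_steps):       # inner max: find a bad perturbation
            r = W @ (Y + delta) - X        # reconstruction residual
            delta = delta + 0.5 * eps * np.sign(W.T @ r)   # ascent step
            norms = np.maximum(np.linalg.norm(delta, axis=0), 1e-12)
            delta *= np.minimum(1.0, eps / norms)          # project to eps-ball
        r = W @ (Y + delta) - X
        W = W - lr * r @ (Y + delta).T     # outer min: descend on 0.5*||r||_F^2
    return W
```

Replacing W with a deep network and the closed-form gradients with autodiff gives the usual min-max training loop.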

  14. Dec 1, 2020

    • Speaker: Singanallur V Venkatakrishnan ( Oak Ridge National Laboratory, USA )

    • Title: Pushing the Limits of Scientific CT Instruments using Algorithms: Model-based and Data-Driven Approaches

    • Abstract: Computed Tomography (CT) systems play a vital role in making scientific discoveries in diverse fields including biology, materials science, and additive manufacturing. The first generation of CT systems typically relied on fast analytic inversion techniques to invert the measurements. However, the performance of these algorithms can be poor when dealing with non-linearities in the measurement, high levels of noise, and the limited number of measurements that commonly occur when we seek to accelerate the acquisition.

In this talk, I will present algorithms for improving the performance of CT systems, enabling faster, more accurate, and novel imaging capabilities. The first part of the talk will focus on model-based image reconstruction (MBIR) algorithms that formulate the inversion as a high-dimensional optimization problem involving a data-fidelity term and a regularization term. By accurately modeling the physics and noise statistics of the measurement and combining them with useful regularizers, I will demonstrate how we can significantly improve performance for neutron CT, X-ray micro-CT, and single-particle cryo-EM systems. In the last part of the talk, I will present recent results on using deep-learning-based algorithms for accelerated neutron/X-ray CT and highlight challenges in extending such techniques to scientific applications.
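
The MBIR recipe described above, for a linear measurement model y = A x + noise, can be sketched as gradient descent on a data-fidelity term plus a smoothness regularizer (A, the finite-difference regularizer D, and the step-size rule below are toy assumptions; real CT systems use far more accurate physics and noise models):

```python
import numpy as np

def mbir(A, y, lam=0.1, step=None, n_iters=500):
    """Minimize ||A x - y||^2 + lam * ||D x||^2, D = first differences."""
    n = A.shape[1]
    D = (np.eye(n) - np.eye(n, k=1))[:-1]     # (n-1, n) finite-difference matrix
    H = A.T @ A + lam * D.T @ D               # Hessian of the quadratic cost
    if step is None:
        step = 1.0 / np.linalg.norm(H, 2)     # step below 2/sigma_max: descent
    x = np.zeros(n)
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y) + lam * (D.T @ (D @ x))
        x = x - step * grad
    return x
```

For this quadratic cost the minimizer also solves the normal equations H x = A^T y, which gives an easy correctness check on small problems.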

  15. Dec 15, 2020

    • Speaker: J. Webster Stayman ( Johns Hopkins University, USA )

    • Title: Novel data acquisition and task-based optimization in computed tomography

    • Abstract: In this talk, new advances in data acquisition for computed tomography will be presented, including strategies for fluence-field modulation of the x-ray beam, non-circular/non-spiral source-detector trajectories, and methods for obtaining spectral information. The projection data obtained via these approaches often require advanced processing methods due to sampling complexities and the need for more sophisticated physical models. Moreover, increased flexibility in acquisition opens the opportunity to optimize the data acquisition itself. Finding such optimal strategies requires quantifying imaging performance. In this work, we move beyond simple image quality metrics and present techniques for optimization based on modeled and predicted performance of specific imaging tasks.
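
One common textbook way to quantify "task performance" of the kind invoked above is a prewhitening model-observer detectability index, d'^2 = s^T K^{-1} s, where s is the expected signal difference between the two hypotheses and K the noise covariance. The sketch below is offered as a generic hedged illustration, not the speaker's specific criterion:

```python
import numpy as np

def detectability(signal, noise_cov):
    """d' for a signal-known-exactly detection task in Gaussian noise."""
    s = np.asarray(signal, dtype=float).ravel()
    return float(np.sqrt(s @ np.linalg.solve(noise_cov, s)))

# White noise with variance 4 and a flat 3-pixel signal: d' = ||s|| / sigma.
d_prime = detectability([1.0, 1.0, 1.0], 4.0 * np.eye(3))   # sqrt(3) / 2
```

Maximizing such a task-based index over acquisition parameters (fluence patterns, trajectories) is one way to make "optimized data acquisition" concrete.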