Organizer for Winter 2020: Naoki Saito (Email: saito@math.ucdavis.edu)
Note that there will be no MADDD seminar talks for the first three weeks of January due to talks by job candidates.
01/28: Gal Mishne (UCSD)
Title: Multiway Tensor Analysis with Neuroscience Applications
Abstract: Experimental advances in neuroscience enable the acquisition of increasingly large-scale, high-dimensional, and high-resolution neuronal and behavioral datasets; however, addressing the full spatiotemporal complexity of these datasets poses significant challenges for data analysis and modeling. We propose to model such datasets as multiway tensors with an underlying graph structure along each mode, learned from the data. In this talk I will present three frameworks we have developed to model, analyze, and organize tensor data; these frameworks infer the coupled multi-scale structure of the data, reveal latent variables, and visualize short- and long-term temporal dynamics, with applications to calcium imaging analysis, fMRI, and artificial neural networks.
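As a rough illustration of the "multiway tensor with a graph along each mode" viewpoint described above (my own sketch; the neurons × time × trials layout, the choice of k = 10 neighbors, and the helper mode_unfold are illustrative assumptions, not the speaker's code), one can matricize a 3-way tensor along each mode and build a k-nearest-neighbor graph on the rows of each unfolding:

```python
# Minimal sketch: a 3-way neural data tensor and one k-NN graph per mode,
# each graph built from the corresponding mode unfolding of the tensor.
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
# Hypothetical dataset: neurons x time bins x trials
X = rng.standard_normal((100, 500, 20))

def mode_unfold(T, mode):
    """Matricize tensor T along the given mode (rows index that mode)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# One k-NN affinity graph per mode; rows of each unfolding are the graph nodes.
graphs = [kneighbors_graph(mode_unfold(X, m), n_neighbors=10, mode="connectivity")
          for m in range(X.ndim)]
```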
02/04: Randall O'Reilly (UC Davis, Psychology & Computer Science)
Title: Predictive Error-driven Learning in the Brain
Abstract: I will present some recent computational models of brain circuits that can support predictive error-driven learning, along with a discussion of prior work on how the brain might support something like error backpropagation more generally. Error backpropagation is the engine of modern deep neural network models, and there has been a bit of a resurgence of interest in its possible biological basis recently. Top-down connections in the cortex can potentially provide a mechanism of error propagation, and there are various proposals that make distinct biological predictions, which will be reviewed. Predictive learning provides an attractive solution to a remaining challenge: where do all the error signals come from in the first place? Specific circuits between the thalamus and cortex appear ideally configured to support a form of predictive learning, which differs significantly from other machine-learning / Bayesian approaches. Our models show that this mechanism can learn abstract categorical representations from movies of rotating and translating 3D objects.
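For intuition only, here is a toy sketch of the core idea of predictive error-driven learning, namely that the mismatch between a top-down prediction and the actual next input supplies the error signal. This is my simplified illustration with a linear predictor and a delta-rule update, not the biological thalamocortical model discussed in the talk:

```python
# Toy predictive error-driven learning: predict the next "frame" of a sequence and
# use the prediction error as the teaching signal for a delta-rule weight update.
import numpy as np

rng = np.random.default_rng(0)
frames = rng.standard_normal((200, 16))   # hypothetical sequence of 16-dim frames
W = np.zeros((16, 16))                    # prediction weights
lr = 0.01                                 # learning rate

for t in range(len(frames) - 1):
    pred = frames[t] @ W                  # top-down prediction of the next frame
    err = frames[t + 1] - pred            # prediction error = the learning signal
    W += lr * np.outer(frames[t], err)    # gradient step on squared prediction error
```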
02/11: Richard Levenson (UCD Med)
Title: Path ↔ Math: Opportunities for cross-disciplinary adventure
Abstract: Radiology moved on from film to direct-to-digital years ago, creating an open arena for the development of mathematical image processing tools, now being enhanced by artificial intelligence applications. Pathology, also an image-centric discipline, is only now dipping its toes into the digital world, largely because the glass microscope slide, unlike film in radiology, has not been supplanted, meaning that the digitization step is an added cost and inconvenience. Consequently, histopathology is only now getting serious about the opportunities for digital image enhancement and automated diagnostics. Fortunately, this step coincides with the recent blooming of AI, and thus there are a number of exciting research, clinical and commercial areas to pursue. Topics to be discussed include novel slide-free imaging technologies, AI-based image modulation, (big) data fusion, and extraction of additional, previously latent, content.
02/18: No Seminar
02/25: Guy Wolf (U. Montreal)
Title: Geometry-based Data Exploration
Abstract: High-throughput data collection technologies are becoming increasingly common in many fields, especially in biomedical applications involving single cell data (e.g., scRNA-seq and CyTOF). These introduce a rising need for exploratory analysis to reveal and understand hidden structure in the collected (high-dimensional) Big Data. A crucial aspect in such analysis is the separation of intrinsic data geometry from data distribution, as (a) the latter is typically biased by collection artifacts and data availability, and (b) rare subpopulations and sparse transitions between meta-stable states are often of great interest in biomedical data analysis. In this talk, I will show several tools that leverage manifold learning, graph signal processing, and harmonic analysis for biomedical (in particular, genomic/proteomic) data exploration, with emphasis on visualization, data generation/augmentation, and nonlinear feature extraction. A common thread in the presented tools is the construction of a data-driven diffusion geometry that both captures intrinsic structure in data and provides a generalization of Fourier harmonics on it. These, in turn, are used to process data features along the data geometry for denoising and generative purposes. Finally, I will relate this approach to the recently-proposed geometric scattering transform that generalizes Mallat's scattering to non-Euclidean domains, and provides a mathematical framework for theoretical understanding of the emerging field of geometric deep learning.
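As a minimal illustration of the data-driven diffusion geometry mentioned above (a generic diffusion-map construction under assumed parameter names; not the specific tools presented in the talk), one can build a Gaussian affinity kernel, row-normalize it into a Markov matrix, and embed the data using its leading non-trivial eigenvectors:

```python
# Minimal diffusion-map sketch: Gaussian kernel -> row-stochastic diffusion operator
# -> embedding by the top non-trivial eigenvectors scaled by their eigenvalues.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def diffusion_map(X, eps, n_components=2, t=1):
    D = squareform(pdist(X))                 # pairwise Euclidean distances
    K = np.exp(-D**2 / eps)                  # Gaussian affinity kernel
    P = K / K.sum(axis=1, keepdims=True)     # row-stochastic diffusion operator
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # skip the trivial constant eigenvector (eigenvalue 1)
    return vecs[:, 1:n_components + 1] * vals[1:n_components + 1]**t

# Example usage on a hypothetical point cloud:
X = np.random.default_rng(0).standard_normal((300, 10))
embedding = diffusion_map(X, eps=5.0, n_components=2)
```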
03/03: David Gamarnik (MIT Sloan School of Management) [joint with Statistics Seminar]
Title: Algorithmic Challenges in High-Dimensional Inference Models: Insights from Statistical Physics
Abstract: Inference problems arising in modern day statistics, machine learning and artificial intelligence fields often involve models with exploding dimensions, giving rise to a multitude of computational challenges. Many such problems "infamously" resist the construction of tractable inference algorithms, and thus are possibly fundamentally non-solvable by fast computational methods. A particularly intriguing form of such intractability is the so-called computational vs information theoretic gap, where effective inference is achievable by some form of exhaustive-search-type computational procedure, but fast computational methods are not known and conjectured not to exist. A great deal of insight into the mysterious nature of this gap has emerged from the field of statistical physics, where the computational difficulty is linked to a phase transition phenomenon in the solution space topology. We will discuss one such phase transition obstruction, which takes the form of the Overlap Gap Property: the property referring to the topological disconnectivity (gaps) of the set of valid solutions.
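For readers unfamiliar with the term, the following is an informal sketch of the Overlap Gap Property (my paraphrase under generic assumptions; the symbols $H$, $S_\epsilon$, $d$, $\nu_1$, $\nu_2$ are placeholders, and this is not the speaker's precise statement):

```latex
% Informal paraphrase of the Overlap Gap Property (OGP).
% Let $H$ be a random objective over solutions $\sigma$, let $S_\epsilon$ denote the set of
% near-optimal solutions, and let $d(\cdot,\cdot)$ be a distance (or overlap) between solutions.
% OGP with parameters $0 \le \nu_1 < \nu_2$ asserts that, with high probability,
\text{for all } \sigma_1, \sigma_2 \in S_\epsilon:\quad
d(\sigma_1, \sigma_2) \le \nu_1 \quad \text{or} \quad d(\sigma_1, \sigma_2) \ge \nu_2 ,
% i.e., the near-optimal set splits into well-separated clusters with no pairs at
% intermediate distance -- the topological disconnectivity ("gaps") referenced above.
```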
03/10: Will Leeb (Univ. Minnesota)
Title: Matrix Denoising with Weighted Loss
Abstract: This talk will describe a new class of methods for estimating a low-rank matrix from a noisy observed matrix, where the error is measured by a type of weighted loss function. Such loss functions arise naturally in a variety of problems, such as heteroscedastic noise, missing data, and submatrix estimation. We introduce a family of spectral denoisers, which preserve the left and right singular subspaces of the observed matrix. Using new asymptotic results on the spiked covariance model in high dimensions, we derive the optimal spectral denoiser for weighted loss. We demonstrate the behavior of our method through numerical simulations.
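To make the notion of a spectral denoiser concrete (a generic sketch under assumed names; the placeholder hard-threshold rule below is not the weighted-loss-optimal shrinker derived in the talk), such a denoiser keeps the observed left and right singular subspaces and only modifies the singular values:

```python
# Minimal spectral denoiser sketch: SVD of the observation, then a scalar shrinkage
# rule applied to the singular values while the singular vectors are preserved.
import numpy as np

def spectral_denoise(Y, shrink):
    """Apply the scalar shrinkage rule `shrink` to each singular value of Y."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(shrink(s)) @ Vt

# Placeholder rule: hard thresholding of singular values at a level tau.
def hard_threshold(s, tau=2.0):
    return np.where(s > tau, s, 0.0)

Y = np.random.default_rng(1).standard_normal((50, 40))   # hypothetical noisy observation
Y_hat = spectral_denoise(Y, hard_threshold)
```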