Title: Fokas Diagonalization
Abstract: We describe a new form of diagonalization for linear two-point constant-coefficient differential operators with arbitrary linear boundary conditions. Although the diagonalization is in a weaker sense than that usually employed to solve initial boundary value problems (IBVPs), we show that it is sufficient to solve IBVPs whose spatial parts are described by such operators. We argue that the method described may be viewed as a reimplementation of the Fokas transform method for linear evolution equations on the finite interval. The results are extended to multipoint and interface operators, including operators defined on networks of finite intervals, in which the coefficients of the differential operator may vary between subintervals, and arbitrary interface and boundary conditions may be imposed; differential operators with piecewise constant coefficients are thus included.
Location: Smith Hall 205 at 4pm
Title: An Alternative View on AI: Collaborative Learning, Incentives, and Social Welfare
Abstract: Research in artificial intelligence (AI) has focused on a paradigm in which intelligence inheres in a single, autonomous agent. Social issues are entirely secondary in this paradigm. When AI systems are deployed in social contexts, however, the overall design of such systems is often naive---a centralized entity provides services to passive agents and reaps the rewards. Such a paradigm need not be the dominant paradigm for information technology. In a broader framing, agents are active, they are cooperative, and they wish to obtain value from their participation in learning-based systems. Agents may supply data and other resources to the system, but only if it is in their interest to do so. Critically, intelligence inheres as much in the overall system as it does in individual agents, be they humans or computers. This is a perspective familiar in the social sciences, and a key theme in my work is that of bringing economics into contact with foundational issues in computing and data sciences. I'll emphasize some of the mathematical challenges that arise at this tripartite interface.
Location: Smith Hall 205 at 4pm
Title: Local growth behavior of Gaussian elimination with partial and complete pivoting
Abstract: Gaussian elimination (GE) remains one of the most widely used dense linear solvers. Error analysis of GE with selected pivoting strategies on well-conditioned systems can focus on studying the behavior of growth factors. Although exponential growth is possible with GE with partial pivoting (GEPP), growth tends to stay much smaller in practice. Support for this behavior was provided last year by Huang and Tikhomirov’s average-case analysis of GEPP, which showed that GEPP growth stays at most polynomial with very high probability when using small Gaussian perturbations. Research on GE with complete pivoting (GECP) has also seen a lot of recent interest, with improvements to lower bounds on worst-case GECP growth provided by Edelman and Urschel earlier this year. I am interested in studying how GEPP and GECP behave on the same linear systems, with a focus on large growth systems and orthogonal matrices. One direction will explore when GECP is less stable than GEPP, which will lead to new empirical lower bounds on how much worse GECP can behave compared to GEPP in terms of growth. Another direction will include an empirical study on a family of exponential GEPP growth matrices whose polynomial behavior in small neighborhoods limits to the initial GECP growth factor.
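To make the central quantity concrete, the following is a minimal sketch (assuming NumPy; the routine name gepp_growth_factor is illustrative, not code from the talk) that runs GE with partial pivoting while tracking the growth factor g(A) = max_{i,j,k} |a_ij^(k)| / max_{i,j} |a_ij|, evaluated on Wilkinson's classical worst-case matrix, which attains growth 2^(n-1) under GEPP.

import numpy as np

def gepp_growth_factor(A):
    """Run GE with partial pivoting and return the growth factor of A."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    max_orig = np.abs(A).max()
    max_seen = max_orig
    for k in range(n - 1):
        # Partial pivoting: bring the largest entry of column k (rows k..n-1) to the pivot.
        p = k + np.argmax(np.abs(A[k:, k]))
        A[[k, p]] = A[[p, k]]
        # Eliminate below the pivot and track the largest intermediate entry.
        A[k+1:, k] /= A[k, k]
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
        max_seen = max(max_seen, np.abs(A[k+1:, k+1:]).max())
    return max_seen / max_orig

# Wilkinson's worst-case example: ones on the diagonal and last column, -1 below the diagonal.
n = 10
W = np.eye(n) + np.tril(-np.ones((n, n)), -1)
W[:, -1] = 1.0
print(gepp_growth_factor(W))   # 2**(n-1) = 512 for n = 10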
Location: Smith Hall 205 at 4pm
Title: Measuring Water Waves: Using Pressure to Reconstruct Wave Profiles
Abstract: Euler's equations describe water waves on the surface of an ideal fluid. In this talk, I will discuss an inverse problem related to measuring water waves using pressure sensors placed inside the fluid. Using a non-local formulation of the water-wave problem, we can directly determine the pressure below both traveling-wave and time-dependent solutions of Euler's equations. This method requires the numerical solution of a nonlinear, nonlocal equation relating the pressure and the surface elevation, which is obtained without approximation. From this formulation, a variety of different asymptotic formulas are derived and compared with both numerical data and physical experiments.
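For orientation, here is a minimal sketch (assuming NumPy) of the classical linear transfer function commonly used for pressure-based wave measurement, in which a bottom pressure record is lifted to a surface elevation via eta_hat(k) = cosh(kh) p_hat(k) / (rho g); this is only the small-amplitude approximation, not the talk's fully nonlinear, nonlocal formulation.

import numpy as np

g, rho, h = 9.81, 1000.0, 2.0            # gravity, water density, depth (SI units)
L, N = 50.0, 512                         # periodic domain length and grid size
x = np.linspace(0.0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

# Synthetic surface elevation and the corresponding linear-theory bottom pressure.
eta = 0.1 * np.cos(2 * np.pi * x / 10.0)
p_dyn = rho * g * np.real(np.fft.ifft(np.fft.fft(eta) / np.cosh(k * h)))

# Reconstruction: apply the inverse transfer function to the pressure record.
eta_rec = np.real(np.fft.ifft(np.cosh(k * h) * np.fft.fft(p_dyn) / (rho * g)))
print(np.max(np.abs(eta_rec - eta)))     # machine precision on this synthetic test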
Location: Smith Hall 205 at 4pm
Title: Particle Growth Models in the Plane (DLA, DBM, ...)
Abstract: We'll discuss growth patterns in the plane (describing, for example, the spread of certain plants, certain chemical reactions, Lichtenberg figures, ...). The most famous such model is DLA where new particles arrive via Brownian motion and get stuck once they hit an existing particle. DLA forms the most beautiful fractal patterns (pictures will be provided). Despite this, DLA is actually fairly poorly understood and we will quickly survey the existing ideas (many of which are due to Harry Kesten and from the 1980s). I will then present a new type of growth model that behaves similarly and which can be very precisely analyzed (in certain cases). No prior knowledge is necessary and there will be lots of fun pictures!
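For readers who want to experiment, here is a minimal sketch (assuming NumPy) of lattice DLA: walkers are released on a circle around the current cluster, take simple random-walk steps as a stand-in for Brownian motion, and stick the first time they touch an occupied site. The launch and kill radii are illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
STEPS = ((1, 0), (-1, 0), (0, 1), (0, -1))

def dla(n_particles=200):
    cluster = {(0, 0)}            # seed particle at the origin
    r_max = 0.0                   # current radius of the cluster
    for _ in range(n_particles):
        r_launch = r_max + 5.0
        theta = rng.uniform(0, 2 * np.pi)
        x, y = int(r_launch * np.cos(theta)), int(r_launch * np.sin(theta))
        while True:
            dx, dy = STEPS[rng.integers(4)]
            x, y = x + dx, y + dy
            if x * x + y * y > (3 * r_launch) ** 2:   # wandered too far: relaunch
                theta = rng.uniform(0, 2 * np.pi)
                x, y = int(r_launch * np.cos(theta)), int(r_launch * np.sin(theta))
            if any((x + sx, y + sy) in cluster for sx, sy in STEPS):
                cluster.add((x, y))                    # stick next to the cluster
                r_max = max(r_max, np.hypot(x, y))
                break
    return cluster

print(len(dla()))   # 201 occupied sites: the seed plus 200 attached particles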
Location: Smith Hall 205 at 4pm
Title: Computational Hypergraph Discovery, a framework for connecting the dots
Abstract: Function approximation can be categorized into three levels of complexity. Type 1: Approximate an unknown function given (possibly noisy) input/output data. Type 2: Consider a collection of variables and functions indexed by the nodes and hyperedges of a hypergraph (a generalization of a graph in which edges can connect more than two vertices). Assume some of the functions to be unknown. Given multiple samples from subsets of the variables of the hypergraph (satisfying the functional dependencies imposed by its structure), approximate all the unobserved variables and unknown functions of the hypergraph. Type 3: Expanding on Type 2, if the hypergraph structure itself is unknown, use partial observations (of subsets of the variables of the hypergraph) to uncover its structure (the hyperedges and potentially missing vertices) and then approximate its unknown functions and unobserved variables. Numerical approximation, Supervised Learning, and Operator Learning can all be formulated as Type 1 problems (with functional input/output spaces). Type 2 problems include solving and learning (possibly stochastic) ordinary or partial differential equations, Deep Learning, dimension reduction, reduced-order modeling, system identification, closure modeling, etc. The scope of Type 3 problems extends well beyond Type 2 problems and includes applications involving model/equation/network discovery and reasoning with raw data. While most problems in Computational Sciences and Engineering (CSE) and Scientific Machine Learning (SciML) can be framed as Type 1 and Type 2 challenges, many problems in science can only be categorized as Type 3 problems. Despite their prevalence, these Type 3 challenges have been largely overlooked due to their inherent complexity. Although Gaussian Process (GP) methods are sometimes perceived as a well-founded but old technology limited to curve fitting (Type 1 problems), a generalization of these methods holds the key to developing an interpretable framework for solving Type 2 and Type 3 problems while inheriting the simple and transparent theoretical and computational guarantees of kernel/optimal recovery methods. This will be the topic of our presentation.
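As a concrete instance of a Type 1 problem, the following is a minimal sketch (assuming NumPy) of GP/kernel regression, i.e. curve fitting from noisy input/output data; the squared-exponential kernel, lengthscale, and regularization are illustrative choices, not those used in the talk.

import numpy as np

rng = np.random.default_rng(0)

def kern(x, y, ell=0.2):
    # Squared-exponential kernel k(x, y) = exp(-|x - y|^2 / (2 ell^2)).
    return np.exp(-((x[:, None] - y[None, :]) ** 2) / (2 * ell ** 2))

# Noisy samples of an unknown target function (here sin(2 pi x)).
X = np.linspace(0.0, 1.0, 30)
y = np.sin(2 * np.pi * X) + 0.05 * rng.standard_normal(X.size)

# Posterior mean on a fine grid: f_hat = K(Xs, X) (K(X, X) + sigma^2 I)^{-1} y.
Xs = np.linspace(0.0, 1.0, 200)
alpha = np.linalg.solve(kern(X, X) + 1e-3 * np.eye(X.size), y)
f_hat = kern(Xs, X) @ alpha

print(np.max(np.abs(f_hat - np.sin(2 * np.pi * Xs))))   # small recovery error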
Location: Smith Hall 205 at 4pm
Title: Adventures in structured matrix approximation methods
Abstract: Many of the structured matrices we know and love (Toeplitz, Hankel, Cauchy, Vandermonde) have special properties that make it efficient to represent and store them computationally using low-rank approximation methods. This may be surprising at first, since not all of these matrices are directly compressible. In this talk, we explore why and how we can compress structured matrices such as these, the ways in which their structures can be exploited for fast computations, and how related structures arise more broadly in numerical linear algebra.
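As a small illustration of the low-rank mechanism such methods exploit, the sketch below (assuming NumPy) builds a Cauchy matrix from two well-separated point sets; the matrix is mathematically full rank, yet its singular values decay so rapidly that it is numerically low rank.

import numpy as np

n = 500
x = np.linspace(0.0, 1.0, n)        # source points in [0, 1]
y = np.linspace(2.0, 3.0, n)        # target points in [2, 3], separated from x
C = 1.0 / (x[:, None] - y[None, :])  # Cauchy matrix C[i, j] = 1 / (x_i - y_j)

s = np.linalg.svd(C, compute_uv=False)
numerical_rank = int(np.sum(s > 1e-12 * s[0]))
print(numerical_rank)               # far smaller than n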
Location: SMI 211 at 4pm
Title: Mean field theory in Inverse Problems: from Bayesian inference to overparameterization of networks
Abstract: Bayesian sampling and neural networks are seemingly two different machine learning areas, but they both deal with many-particle systems. In sampling, one evolves a large number of samples (particles) to match a target distribution function, and in optimizing over-parameterized neural networks, one can view neurons as particles that feed each other information in the DNN flow. These perspectives allow us to employ mean-field theory, a powerful tool that translates the dynamics of a many-particle system into a partial differential equation (PDE), so that rich PDE analysis techniques can be used to understand both the convergence of sampling methods and the zero-loss property of over-parameterized ResNets. We showcase the use of mean-field theory in these two machine learning areas.
Location: SMI 205 at 4pm
Title: Floquet Hamiltonians - Spectrum and dynamics
Abstract: The last decade has witnessed tremendous experimental progress in the study of "Floquet media," crystalline materials whose properties are altered by time-periodic parametric forcing. Theoretical advancements, however, have so far been achieved through discrete and approximate models; understanding these materials from their underlying, first-principles PDE models remains an open problem. Specifically, semi-metals such as graphene are known to transform into insulators under periodic driving. While traditionally this phenomenon is modeled by a spectral gap, in PDE models no such gaps are conjectured to form. How do we reconcile these seemingly contradictory statements? We show that the driven Schrödinger equation possesses an “effective gap” – a novel and physically relevant relaxation of a spectral gap. Adopting a broader perspective, we study the influence of time-periodic forcing on a general band structure. A spectrally local notion of stability is formulated and proven using methods from periodic homogenization theory.
Location: Smith Hall 205 at 4pm
Title: Making Black Girls Count in Math Education: A Black Feminist Vision of Transformative Teaching
Abstract: Much is lost when we do not politicize Black girls’ math education. Centering Black girls as knowledge producers through a critical analysis of their experiences is necessary. In this talk, I aim to bridge-build and inspire all to listen, reflect, ask questions, and then act in solidarity with Black girls to realize their limitless possibilities in mathematics.
Location: Smith Hall 205 at 4pm
Title: Water Waves: Instabilities of Stokes Waves
Abstract: The study of ocean waves, especially surface gravity waves, is essential for understanding the formation of rogue waves and whitecaps within ocean swell. Waves that propagate from the epicenter of a storm can be treated as unidirectional. In this presentation, we will examine periodic traveling waves on the free surface of an ideal two-dimensional fluid of infinite depth. Specifically, we will introduce surface waves of permanent shape, also known as Stokes waves, and discuss their stability.
Location: Smith Hall 205 at 4pm
Title: Rigorous Guarantees for Randomized Diagonalization Algorithms
Abstract: The task of designing efficient and reliable algorithms for computing the eigenvalues and eigenvectors of a matrix (i.e. solving the eigenvalue problem) is of unquestionable relevance in science and engineering. However, despite substantial progress on several of its practical facets, there are fundamental theoretical questions about the eigenvalue problem that remain poorly understood.
In this talk, I will present results that provide rigorous guarantees for two practical diagonalization algorithms. Specifically, we show that the eigenvalue problem can be solved (via the spectral bisection algorithm) in nearly matrix-multiplication time and that a certain randomized shifting strategy for the QR algorithm is globally convergent (implying an O(n^3) worst-case running time for this algorithm).
A key step in our analysis requires understanding the spectral stability properties of arbitrary deterministic matrices that have been randomly perturbed by a very small "noise matrix". With this, in the spirit of smoothed analysis, we can show rigorous guarantees for the aforementioned algorithms when run on tiny perturbations of arbitrary (possibly non-normal) inputs.
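A minimal sketch (assuming NumPy) of the smoothing step in isolation: adding a tiny complex Gaussian perturbation to a defective, highly non-normal matrix (a Jordan block) yields an input that a standard eigensolver diagonalizes with a finite eigenvector condition number; the noise scaling here is an illustrative choice, not the one from the paper.

import numpy as np

rng = np.random.default_rng(0)
n, gamma = 50, 1e-8

J = np.diag(np.ones(n - 1), k=1)          # nilpotent Jordan block: not diagonalizable
G = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2 * n)
A = J + gamma * G                          # tiny complex Gaussian "noise matrix"

evals, V = np.linalg.eig(A)
print(np.linalg.cond(V))   # finite (though large), unlike for the unperturbed J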
This is joint work with Jess Banks, Archit Kulkarni, and Nikhil Srivastava.
Abstract: TBA