Time: Tuesdays, 3:30 - 4:30
Unless otherwise noted, all talks will take place in Math Sciences Building 110 at the University of Missouri.
Organized by Tim Duff and Dan Edidin. Contact Tim if you want to be on the mailing list.
Title: Linear Fundamental Matrix Estimation from 7 or 5 Points
Abstract: We revisit the problem of estimating the fundamental matrix of a pair of perspective cameras, a cornerstone of geometric computer vision. As is well known, linear solvers require at least 8 point correspondences, whereas nonlinear minimal solvers require just 7 in the uncalibrated case or 5 in the calibrated case. In this paper, we consider a special case of the 7-point problem where 5 of the points are configured to lie on two lines, which has previously been shown to have a unique solution. As a theoretical contribution, we offer an analysis of how this uniqueness manifests in the standard 7-point algorithm. On the practical side, we provide the first practical linear solver for the minimal problem associated with this special configuration. Additionally, we evaluate a heuristic 5-point fundamental matrix solver based on the construction of virtual midpoints. When combined with early non-minimal fitting, the runtime and accuracy of our solver are competitive with the state-of-the-art (SoTA) on multiple benchmarks.
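For readers unfamiliar with the setup, the standard 7-point algorithm referenced in the abstract can be sketched in a few lines of NumPy. This is the textbook baseline, not the speaker's new linear solver, and the function name seven_point is purely illustrative.

import numpy as np

def seven_point(x1, x2):
    """x1, x2: (7, 2) arrays of corresponding image points.
    Returns candidate 3x3 fundamental matrices F with x2^T F x1 = 0."""
    # Each correspondence (p, q) contributes one linear constraint q^T F p = 0.
    A = np.array([[q[0]*p[0], q[0]*p[1], q[0],
                   q[1]*p[0], q[1]*p[1], q[1],
                   p[0], p[1], 1.0]
                  for p, q in zip(x1, x2)])
    # The 7x9 system generically has a 2-dimensional nullspace spanned by F1, F2.
    _, _, Vt = np.linalg.svd(A)
    F1, F2 = Vt[-1].reshape(3, 3), Vt[-2].reshape(3, 3)
    # Enforce the rank-2 constraint: det(t*F1 + (1-t)*F2) = 0 is cubic in t.
    # Recover its coefficients by exact interpolation at 4 sample points.
    ts = np.array([0.0, 1.0, 2.0, 3.0])
    dets = [np.linalg.det(t*F1 + (1.0 - t)*F2) for t in ts]
    roots = np.roots(np.polyfit(ts, dets, 3))
    return [t.real*F1 + (1.0 - t.real)*F2 for t in roots if abs(t.imag) < 1e-9]

The cubic has up to 3 real roots, which is why the generic 7-point problem has up to 3 solutions; the two-line configuration studied in the talk is precisely a special case where this count drops to 1.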
Title: Interpretable, Explainable, and Adversarial AI: Data Science Buzzwords and You (Mathematicians)
Abstract: Many state-of-the-art methods in machine learning are black boxes that do not allow humans to understand how decisions are made. In a number of applications, such as medicine and atmospheric science, researchers do not trust such black boxes. Explainable AI can be thought of as an attempt to open the black box of neural networks, while interpretable AI focuses on creating clear boxes. Adversarial attacks are small perturbations of data that cause a neural network to misclassify the data or act in other undesirable ways. Such attacks are potentially very dangerous when applied to technology like self-driving cars. The goal of this talk is to introduce mathematicians to problems they can attack using their favorite mathematical tools. The mathematical structure of transformers, the powerhouse behind large language models like ChatGPT, will also be explained.
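To make "adversarial attack" concrete, here is a minimal sketch of one classic example, the fast gradient sign method (FGSM); the talk surveys this landscape much more broadly, and the PyTorch classifier model below is an assumed stand-in.

import torch
import torch.nn.functional as F

def fgsm(model, x, labels, eps=0.01):
    """Return a copy of the batch x perturbed within an L-infinity ball of
    radius eps so as to increase the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), labels)
    loss.backward()
    # Step each input coordinate in the sign of the loss gradient: a tiny,
    # often imperceptible change that can nonetheless flip the predicted class.
    return (x + eps * x.grad.sign()).detach()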
Abstract: This talk discusses joint work with Venkat Chandrasekaran, Jose Israel Rodriguez, and Kevin Shu, in which we initiate the study of Lagrangian dual sections. This theory gives rise to sufficient conditions for the "hidden convexity" of certain nonconvex optimization problems. Notable examples include spectral inverse problems and certain unbalanced Procrustes problems. As an added bonus, when the constraint set is a compact Riemannian manifold, the Lagrangian formulation allows us to solve these problems using a numerical continuation algorithm based on Riemannian gradient descent.
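As a small illustration of the last ingredient, here is a minimal sketch of Riemannian gradient descent on the unit sphere, applied to minimizing the quadratic form x^T A x (whose minimizer is an eigenvector for the smallest eigenvalue of A). This is only the basic primitive, not the authors' continuation algorithm, and the step size and iteration count are arbitrary choices for illustration.

import numpy as np

def riemannian_gd_sphere(A, x0, step=0.01, iters=2000):
    """Minimize x^T A x over the unit sphere by projected gradient steps."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        g = 2.0 * A @ x              # Euclidean gradient of x^T A x
        g_tan = g - (x @ g) * x      # project onto the tangent space at x
        x = x - step * g_tan         # take a gradient step
        x = x / np.linalg.norm(x)    # retract back onto the sphere
    return x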