Junior Applied Math Seminar

Welcome to the webpage for the Junior Applied Math Seminar at UT Austin! 

What counts as applied math? We interpret the term broadly: any graduate students or postdocs interested in applied math are welcome to attend.

All talks for Spring 2024 will be at 1pm in PMA 9.166 unless otherwise stated.

2023 - 2024 Organizers: Addie Duncan and Paulina Hoyos.

Spring 2024 Talk Schedule

January 22 - Organizational Meeting

January 29

Speaker: Erisa Hasani
Title: Neural operators and synthetic data
Abstract: Neural operators are a class of data-driven deep learning architectures that learn maps between function spaces. Typically, such an operator learns to solve a family of instances of a particular PDE problem by learning the operator that maps the known function(s) of the equation to the unknown solution. One of the challenges I would like to address in this talk is how to obtain data in order to train such models. Classical numerical methods (such as finite elements/differences) have been used to solve instances of PDEs for training and testing models. However, if we wish to outperform classical solvers, how can we still depend on them to generate training data?

This talk will be based on a recent preprint with Prof. Rachel Ward (see https://arxiv.org/pdf/2401.02398.pdf).
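For a concrete picture, here is a minimal Python sketch (my own toy, not taken from the preprint) of one solver-free way to generate training pairs, assuming a 1D periodic Poisson problem -u'' = f: instead of solving for u given f, sample a random smooth u and differentiate it analytically to obtain f. Each (f, u) pair can then train an operator f -> u without ever invoking a finite element or finite difference solver.

```python
import numpy as np

def synthetic_pair(n_modes=8, n_grid=256, rng=None):
    """Sample a random periodic u and compute f = -u'' analytically,
    giving an (f, u) training pair without running a PDE solver."""
    rng = np.random.default_rng(rng)
    x = np.linspace(0.0, 2 * np.pi, n_grid, endpoint=False)
    u = np.zeros(n_grid)
    f = np.zeros(n_grid)
    for k in range(1, n_modes + 1):
        a, b = rng.normal(size=2) / k**2          # decaying coefficients => smooth u
        u += a * np.sin(k * x) + b * np.cos(k * x)
        # -(d^2/dx^2) of sin(kx), cos(kx) is k^2 sin(kx), k^2 cos(kx)
        f += k**2 * (a * np.sin(k * x) + b * np.cos(k * x))
    return f, u

# Build a small synthetic training set.
data = [synthetic_pair(rng=seed) for seed in range(1000)]
```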

February 5 - No Talk

February 12 - No Talk

February 19

Speaker: Will Porteous
Title: Introduction to Optimal Control Problems and Applications
Abstract: Suppose your self-driving car arrives at a 4-way stop sign and confronts 3 human drivers, all arriving at approximately the same time. How do we (the self-driving car) know it is safe for us to cross the intersection? We could rely on our cameras to 'wait till the stop is clear', but at rush hour we'd never get to move. We could enforce that the car 'always go', and risk hitting someone who believed it was their turn. The foundational field for problems like this is called Optimal Control Theory, and it is intimately related to Game Theory. Though we can't do it all in one talk, we will start by giving an introduction to deterministic Optimal Control Theory; it is surprisingly simple, though most students have never seen it. We will answer the questions: What is an optimal control problem (and why is it suspiciously like a calculus-of-variations problem)? How do such problems yield Hamilton-Jacobi equations? Where do ML and modern PDE theory come in? Come find out!
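As a warm-up for the talk's themes, here is a toy Python sketch (my own example, not the speaker's) of a deterministic optimal control problem solved by backward dynamic programming on a grid; the recursion below is a discrete analogue of the Hamilton-Jacobi-Bellman equation. The dynamics, cost, and grid sizes are all made up for illustration.

```python
import numpy as np

# Toy problem: x_{t+1} = x_t + u_t * dt, running cost (x^2 + u^2) * dt.
dt, T = 0.1, 20
xs = np.linspace(-2, 2, 81)          # state grid
us = np.linspace(-1, 1, 21)          # admissible controls
V = xs**2                            # terminal cost V_T(x) = x^2

for t in range(T):                   # Bellman recursion, backward in time
    x_next = xs[:, None] + us[None, :] * dt      # every (state, control) pair
    V_next = np.interp(x_next, xs, V)            # interpolate V at next states
    Q = (xs[:, None]**2 + us[None, :]**2) * dt + V_next
    V = Q.min(axis=1)                # minimize cost-to-go over controls

# V now approximates the value function at time 0; in continuous time its
# evolution is exactly what a Hamilton-Jacobi equation describes.
print(V[len(xs) // 2])               # value at x = 0 (should be 0)
```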

February 26

Speaker: Ziheng Chen
Title: Through the Looking-Glass, Alice Found the Brownian Motion
Abstract: After a brief introduction to Brownian motion and the Wiener process, we study the simulation of stochastic systems with a reflecting or absorbing boundary condition, which is critical in the numerical implementation of the so-called Milestoning algorithm. We'll start our journey from the Cameron-Martin-Girsanov theorem and navigate our way through the forest of reflection principles and inverse Gaussian distributions. There will also be plenty of figures and animations to help understand this theory on an intuitive level!
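For intuition, here is a minimal Python sketch, assuming the simplest setting the abstract touches on: one-dimensional Brownian motion with a reflecting boundary at 0. By the reflection principle, folding a free path with an absolute value produces a path with the reflected law.

```python
import numpy as np

def reflected_bm(n_steps=1000, dt=1e-3, x0=0.5, rng=None):
    """Simulate Brownian motion on [0, inf) with a reflecting boundary
    at 0: fold the unconstrained path with abs(), which has the
    reflected law by the reflection principle."""
    rng = np.random.default_rng(rng)
    increments = rng.normal(scale=np.sqrt(dt), size=n_steps)
    free_path = x0 + np.cumsum(increments)   # unconstrained Brownian path
    return np.abs(free_path)                 # fold at the boundary
    # (an absorbing boundary would instead stop the path
    #  at its first hitting time of 0)

paths = np.array([reflected_bm(rng=s) for s in range(5000)])
# Sanity check: by Jensen, E|X_t| >= |E X_t| = x0 for the folded path.
print(paths[:, -1].mean())
```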

March 4

Speaker: Patrícia Muñoz Ewald
Title: Critical points of ReLU neural networks
Abstract: Considering how popular they are, surprisingly little is known about neural networks. Quite a bit is known about critical points of deep linear networks, but as soon as we introduce non-linear activation functions, things get complicated. In this talk, I’ll describe an explicit construction of global minimizers for a classification problem using a ReLU neural network, under some assumptions on the data. This construction has the advantage of providing a clear geometric picture for the action of the layers on the data, and so there will be pictures! Based on joint work with Thomas Chen.
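As a toy illustration of the flavor of such constructions (a hypothetical example of mine, not the construction from the talk), here is a one-hidden-layer ReLU network whose weights are written down by hand, with no training, and which achieves zero classification error on two well-separated clusters.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Toy data: two separated clusters in R^2, labeled 0 and 1.
rng = np.random.default_rng(0)
X0 = rng.normal(loc=[-2, 0], scale=0.3, size=(50, 2))
X1 = rng.normal(loc=[+2, 0], scale=0.3, size=(50, 2))

# Hand-built network: one ReLU feature projecting onto the separating
# direction, followed by a threshold readout. By construction these
# weights achieve zero classification error on this data, so they are
# a global minimizer of the 0-1 loss; no gradient descent involved.
W1 = np.array([[1.0, 0.0]])

def predict(X):
    h = relu(X @ W1.T)               # zero on the left cluster
    return (h[:, 0] > 0).astype(int)

print(predict(X0).mean(), predict(X1).mean())   # -> 0.0  1.0
```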

March 11 - Spring Break

March 18 - No Talk

March 25

Speaker: Lewis Liu
Title: Effects of multiscale datasets and gradient descent with multiple learning rates
Abstract: We all kind of know (or assume) that data embedded in a high-dimensional space follows some intrinsic low-dimensional structure (distribution/manifold). However, it seems too idealized to assume that the data distribution/manifold exhibits a consistent scale across all directions; empirical observations suggest it usually does not. This prompts some important questions: what does this characteristic of the data mean for machine learning algorithms? How can we effectively leverage such information?

In this talk, we will explore how this variability in scales influences machine learning algorithms. I will then introduce a gradient descent algorithm that uses different learning rates based on the multiscale information, prove convergence, and, time permitting, demonstrate improved efficiency. [This can be thought of as a continuation of my talk from the fall, if anyone still recalls it, but fear not: we will start from scratch!]
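To see the phenomenon numerically, here is a minimal Python sketch (a stand-in, not the speaker's algorithm): gradient descent on an ill-conditioned quadratic, comparing a single learning rate against one learning rate per scale.

```python
import numpy as np

# Ill-conditioned quadratic f(x) = 0.5 * sum(scales * x**2), a stand-in
# for data whose intrinsic scale differs across directions.
scales = np.array([1.0, 100.0])

def grad(x):
    return scales * x

x_single = np.array([1.0, 1.0])
x_multi = np.array([1.0, 1.0])
lr_single = 1.0 / scales.max()       # one rate, limited by the stiffest direction
lr_multi = 1.0 / scales              # one rate per scale

for _ in range(100):
    x_single = x_single - lr_single * grad(x_single)
    x_multi = x_multi - lr_multi * grad(x_multi)

print(np.linalg.norm(x_single))   # slow: progress limited by the scale ratio
print(np.linalg.norm(x_multi))    # exact: each direction converges in one step
```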

April 1 - No Talk

April 8 - No Talk (Solar Eclipse)

April 15 - No Talk

April 22 - No Talk

April 29

Speaker: Jen Rozenblit
Title: Topology in Graph Neural Networks

Abstract: In the evolving landscape of machine learning, graph neural networks (GNNs) stand out for their proficiency in processing and understanding structures within data. However, conventional GNN architectures often overlook intricate graph substructures like cycles, which are critical for understanding complex network topologies. In this talk we will introduce an approach called "ToGL: Topological Graph Layer", which integrates topological data analysis (TDA) with graph neural networks to capture these subtle (but significant!) patterns.

ToGL leverages persistent homology, a central method in TDA, to encode multi-scale topological information into GNNs, enhancing their ability to recognize and utilize these structures for improved predictive performance. This integration not only bridges the gap between theoretical topology and practical machine learning but also shows how TDA can enrich contemporary ML techniques. Our aim in this talk is to show the potential of TDA for augmenting neural networks, drawing on an already well-established TDA technique, and to present how ToGL could pave the way for more robust and insightful data analysis tools in various applications.
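For a hands-on taste of the TDA ingredient, here is a self-contained Python sketch of 0-dimensional persistent homology for a graph filtered by node weights, computed with a union-find. This is only the simplest piece of what a ToGL-style layer uses, and the function name and toy graph below are illustrative.

```python
def zero_dim_persistence(weights, edges):
    """0-dimensional persistent homology of a graph under a sublevel-set
    filtration by node weights: a connected component is born at its
    minimal node weight and dies when an edge merges it into an older
    component. In a ToGL-style layer, the weights would be learned node
    values and the (birth, death) pairs would feed back into the GNN."""
    parent = list(range(len(weights)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    pairs = []
    # An edge enters the filtration at the larger of its endpoint weights.
    for u, v in sorted(edges, key=lambda e: max(weights[e[0]], weights[e[1]])):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue                        # edge closes a cycle, no merge
        # Elder rule: the component with the larger birth weight dies.
        young, old = (ru, rv) if weights[ru] > weights[rv] else (rv, ru)
        pairs.append((weights[young], max(weights[u], weights[v])))
        parent[young] = old                 # surviving root keeps the min weight
    return pairs                            # components that never die are omitted

# Path graph 0-1-2: the component born at weight 1.0 (node 2) dies when
# it merges at 2.0; the (2.0, 2.0) pair is trivial noise on the diagonal.
print(zero_dim_persistence([0.0, 2.0, 1.0], [(0, 1), (1, 2)]))
```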

You will be contacted the week before your talk for a title and an abstract. Title and abstract information is due the Thursday before your talk.