Geometric Deep Learning: How Differential Geometry Improves AI
When data lives on curved surfaces—like the surface of the Earth—traditional neural networks often struggle. This challenge arises in applications like global weather prediction, where the planet’s spherical shape must be taken into account. Standard AI tools assume flat, grid-like data, but the Earth is anything but flat, and that’s where differential geometry comes in. By providing the mathematical language to work with curved spaces, it helps us design neural networks that respect the underlying geometry of the data—leading to more accurate and physically meaningful predictions. In this talk, we’ll explore how geometry and AI come together, and why thinking "curved" can make our models smarter.
The Action of SO(3) on the Two-Sphere Through the Lens of Lie Groupoids
Lie groupoids generalize Lie groups by allowing only partial multiplication—not every pair of elements can be multiplied—capturing symmetries that vary across a space. Basic examples arise from Lie group actions on manifolds. When the action is transitive, this gives an alternative to the classical construction of a homogeneous manifold as the base of a principal H-bundle, where H is the isotropy subgroup at a chosen point. The groupoid approach offers a more intrinsic framework, avoiding the need to select a reference point. In this talk, I'll present an instructive example of a Lie groupoid: the action of SO(3) —the group of rotations in R^3— on the 2-sphere.
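For concreteness, the action groupoid in this example can be written out as follows (a standard description in my own notation, not necessarily the presentation used in the talk):

```latex
% Action groupoid SO(3) \ltimes S^2 \rightrightarrows S^2:
% an arrow (R, x) goes from x to Rx.
\[
  \mathcal{G} = SO(3) \times S^2, \qquad s(R, x) = x, \qquad t(R, x) = R\,x,
\]
\[
  (R', R\,x)\cdot(R, x) = (R'R,\; x), \qquad 1_x = (I, x), \qquad (R, x)^{-1} = (R^{-1},\; R\,x).
\]
% Two arrows compose only when the source of the first matches the target of the second,
% which is exactly the "partial multiplication" mentioned above.
```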
Estimating neural connection strengths from firing intervals
In this presentation, I will show how we can use a standard neuronal network model to formulate an inverse problem, in which we compute the connection strengths between neurons from firing data. Despite the nonlinear nature of the model, the inverse problem can be reduced to solving a linear system of equations. I will also present numerical results illustrating the performance of the inverse solver.
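As a rough illustration of such a linear-system reduction (the toy model, variable names, and synthetic data below are mine, not the specific model used in the talk), the connection strengths can be recovered one neuron at a time with least squares:

```python
# Hypothetical sketch: recover connection strengths from firing-interval data
# by solving one linear least-squares problem per neuron. The "model" here is
# a stand-in, not the neuronal network model from the talk.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_intervals = 5, 200

# Synthetic "ground truth" connectivity and per-interval activity features.
W_true = rng.normal(scale=0.5, size=(n_neurons, n_neurons))
X = rng.uniform(size=(n_intervals, n_neurons))   # activity of each neuron per firing interval

# Assume the model reduces to a linear relation  X @ W.T = Y  between the
# activities and an observable derived from the firing intervals.
Y = X @ W_true.T + 0.01 * rng.normal(size=(n_intervals, n_neurons))

# Solve the least-squares problems (rows of W) in one call.
W_est = np.linalg.lstsq(X, Y, rcond=None)[0].T

print("max abs error:", np.max(np.abs(W_est - W_true)))
```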
Spinning doughnuts and other symmetries
This talk is about using algebra to study shapes (like the doughnut), and what happens when we incorporate symmetry.
Applications of PINNs
This is ongoing work exploring the use of Physics-Informed Neural Networks to combat jamming attacks. By using Maxwell's equations, a neural network is able to detect jamming activity.
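A minimal sketch of the physics-informed residual idea is given below, using a simplified 1D wave equation in normalized units as a stand-in for the full Maxwell system; the network architecture, training loop, and detection logic are illustrative assumptions, not the setup used in this work.

```python
# PINN-style residual sketch (PyTorch). Illustrative only: a 1D wave equation
# for the field E(x, t) replaces the full Maxwell system from the talk.
import torch

c = 1.0  # wave speed in normalized units

net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def wave_residual(x, t):
    """Residual of E_tt - c^2 E_xx = 0 for the network's predicted field E(x, t)."""
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    E = net(torch.cat([x, t], dim=1))
    E_x = torch.autograd.grad(E, x, torch.ones_like(E), create_graph=True)[0]
    E_t = torch.autograd.grad(E, t, torch.ones_like(E), create_graph=True)[0]
    E_xx = torch.autograd.grad(E_x, x, torch.ones_like(E_x), create_graph=True)[0]
    E_tt = torch.autograd.grad(E_t, t, torch.ones_like(E_t), create_graph=True)[0]
    return E_tt - c**2 * E_xx

# Training would minimise this residual plus a data-misfit term on measured
# field samples; a large residual at measurement points could then flag
# anomalous (e.g. jammed) signals.
x = torch.rand(64, 1)
t = torch.rand(64, 1)
loss = wave_residual(x, t).pow(2).mean()
print(loss.item())
```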
Additive Effects in Titanium-Iron (TiFe) Metal Alloy Systems: Fourier Analysis of Spectroscopic Data (X-ray) for Hydrogen Storage Applications
Hydrogen storage remains a pivotal challenge in advancing the hydrogen economy, primarily due to hydrogen's inherently low volumetric density. Titanium-iron (TiFe) alloys have garnered attention as viable candidates for solid-state reversible hydrogen storage materials under ambient conditions; however, their practical utility is constrained by limited gravimetric capacity and susceptibility to surface oxide formation.
We explored the strategic doping of TiFe alloys with transition elements—namely vanadium (V), niobium (Nb), and tantalum (Ta)—to overcome these limitations. Employing advanced characterization techniques, such as powder X-ray diffraction (PXRD) and Extended X-ray Absorption Fine Structure (EXAFS) analysis, we reveal a robust correlation between dopant-induced crystallographic refinements and enhancements in hydrogen absorption capacity and kinetics. The results underscore the critical role of these dopants in optimizing the structural and electronic properties of TiFe alloys, offering a pathway toward the design of next-generation hydrogen storage materials with superior performance.
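To illustrate the Fourier-analysis step generically (a textbook-style sketch with synthetic data, not the exact processing pipeline used in this study), the k-weighted EXAFS signal χ(k)·k² is windowed and transformed to R-space, where peaks indicate scattering-path distances:

```python
# Generic sketch of the Fourier step in EXAFS analysis with synthetic data.
import numpy as np

k = np.linspace(2.0, 14.0, 600)                               # photoelectron wavenumber (1/Angstrom)
chi = 0.4 * np.sin(2 * 2.5 * k) * np.exp(-2 * 0.005 * k**2)   # synthetic chi(k), path length ~2.5 A

weighted = chi * k**2 * np.hanning(k.size)                    # k^2 weighting + Hann window
dk = k[1] - k[0]
n_fft = 2048
chi_R = np.abs(np.fft.rfft(weighted, n=n_fft)) * dk
R = np.fft.rfftfreq(n_fft, d=dk) * np.pi                      # chi ~ sin(2kR), so R = pi * frequency

print("peak near R =", R[np.argmax(chi_R)], "Angstrom")
```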
Computational approaches for simulation of pressure development in multi-site CO2 operations
We want to simulate potential pressure communication and interference between multi-site CO2 injection at storage hubs, such as the Horda Platform, and other subsurface activities that may not involve CO2 storage. This requires a simulation workflow in which information is exchanged between regional and site-scale models. Through this presentation, I aim to provide an understanding of domain decomposition theory and its critical role in advancing our capabilities for simulating CO2 storage.
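As a toy illustration of the domain decomposition idea (an alternating Schwarz iteration on a 1D pressure equation, purely for intuition and not the actual regional/site-scale workflow), two overlapping subdomain models can be coupled by exchanging interface pressures:

```python
# Alternating Schwarz iteration for -p'' = f on [0, 1] with p(0) = p(1) = 0,
# split into two overlapping subdomains that exchange boundary pressures.
import numpy as np

n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.ones(n)                       # constant source term

def solve_subdomain(lo, hi, p_left, p_right):
    """Solve -p'' = f on x[lo:hi+1] with Dirichlet data taken from the current global iterate."""
    m = hi - lo - 1                  # number of interior unknowns
    A = (np.diag(2 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    b = f[lo + 1:hi].copy()
    b[0] += p_left / h**2
    b[-1] += p_right / h**2
    return np.linalg.solve(A, b)

p = np.zeros(n)
left, right = (0, 60), (40, n - 1)   # two overlapping subdomains
for _ in range(20):                  # exchange interface data until the overlap agrees
    p[left[0] + 1:left[1]] = solve_subdomain(*left, p[left[0]], p[left[1]])
    p[right[0] + 1:right[1]] = solve_subdomain(*right, p[right[0]], p[right[1]])

exact = 0.5 * x * (1 - x)            # analytic solution for f = 1
print("max error:", np.max(np.abs(p - exact)))
```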
Optimization on Riemannian Manifolds
Measured data usually comes with underlying structure that can be exploited to improve the performance of optimization algorithms. Such data often naturally exhibit a Riemannian manifold structure, allowing optimization tasks to be defined in a way that respects the geometry of the data. For example, unit-norm vectors can be seen as points on the unit sphere, which is a Riemannian manifold. More abstract examples include the space of rotations or the space of positive definite matrices. This talk explores optimization problems that may be nonsmooth, nonconvex, and posed on Riemannian manifolds, and strategies to solve them.
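As a minimal sketch of the sphere example (a smooth toy problem of my choosing; the talk also treats nonsmooth and nonconvex settings), Riemannian gradient descent on the unit sphere projects the Euclidean gradient onto the tangent space and retracts back onto the manifold:

```python
# Riemannian gradient descent on the unit sphere for f(x) = x^T A x,
# whose minimiser is the eigenvector of the smallest eigenvalue of A.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))
A = (A + A.T) / 2
A /= np.linalg.norm(A, 2)            # normalise so eigenvalues lie in [-1, 1]

x = rng.normal(size=6)
x /= np.linalg.norm(x)               # start on the unit sphere
step = 0.1

for _ in range(2000):
    egrad = 2 * A @ x                          # Euclidean gradient of f
    rgrad = egrad - (x @ egrad) * x            # project onto the tangent space at x
    x -= step * rgrad
    x /= np.linalg.norm(x)                     # retraction: normalise back onto the sphere

print("f(x) =", x @ A @ x, " smallest eigenvalue =", np.linalg.eigvalsh(A).min())
```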
Examining the robustness of Physics-Informed Neural Networks to noise for Inverse Problems
PINNs are a recent approach to solving PDEs, for both forward and inverse problems, with and without noise. We show that even a simple, traditional general-purpose model outperforms PINNs, despite contrary claims in the literature. We suggest that new "early stopping" approaches will be key to improving PINN performance in the presence of noise.
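As a generic illustration of the early-stopping idea on noisy data (a toy regression example, not the specific criteria proposed in this work), training is halted once a held-out validation error stops improving, before the model starts fitting the noise:

```python
# Toy early stopping: fit a high-degree polynomial to noisy samples by
# gradient descent and stop when the validation error plateaus.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 40)
y = np.sin(np.pi * x) + 0.2 * rng.normal(size=x.size)   # noisy observations
X = np.vander(x, 12, increasing=True)                   # degree-11 polynomial features

idx = rng.permutation(x.size)
train, val = idx[:30], idx[30:]                          # held-out validation points
w = np.zeros(X.shape[1])
lr, patience = 0.05, 200
best_val, best_w, bad = np.inf, w.copy(), 0

for step in range(20000):
    r = X[train] @ w - y[train]
    w -= lr * X[train].T @ r / r.size                    # descent step on the training MSE (constants absorbed into lr)
    val_err = np.mean((X[val] @ w - y[val]) ** 2)
    if val_err < best_val - 1e-8:
        best_val, best_w, bad = val_err, w.copy(), 0
    else:
        bad += 1
        if bad >= patience:
            break                                        # early stop

print("stopped at step", step, "with validation MSE", best_val)
```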
Stable numerical methods on Riemannian manifolds
Applying Neural Networks to Offshore Operations