## Applied Differential Geometry

In general, I study principled approaches for applying differential geometry to real-world problems. Differential geometry establishes a foundation for a calculus over smooth manifolds (like the depicted sphere) and has explicit real-world utility when studying spaces of shapes, curves, and surfaces---spaces where we intuitively understand that linear combinations of elements have no physical meaning (e.g., try adding two shapes... can't? Therein lies the challenge of defining a calculus over a manifold).
The abstractions of differential geometry make it a great candidate for studying applications involving artificial intelligence (AI) and machine learning (ML)---in particular the popular sub-field of manifold learning.
My motivations stem from a desire to make the utility of the abstractions tangible for real-world applications in addition to developing new theory in the study of manifold learning.
This research is driven by the following hypothesis:
We assume that elements described in a high-dimensional ambient Euclidean space (the feature space) are actually restricted to a manifold of reduced intrinsic dimension (the latent space). This assumption enables approximation, optimization, and uncertainty quantification with algorithms that are anticipated (or proven) to converge as a function of the reduced intrinsic dimension.
Thus making the seemingly intractable, tractable.
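As a minimal sketch of this hypothesis, consider synthetic data that appears 50-dimensional but actually lives on a 2-dimensional latent subspace. (The dimensions, sample counts, and linear embedding here are illustrative assumptions, not taken from any particular application.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: 1000 points that live on a 2-D linear subspace
# (the "latent space") but are observed in a 50-D ambient feature space.
latent = rng.normal(size=(1000, 2))   # intrinsic coordinates
embed = rng.normal(size=(2, 50))      # linear embedding into ambient space
X = latent @ embed                    # ambient representation

# The singular values of the centered data reveal the reduced intrinsic
# dimension: only 2 are numerically nonzero despite 50 ambient dimensions.
s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
intrinsic_dim = int(np.sum(s > 1e-8 * s[0]))
print(intrinsic_dim)  # 2
```

Of course, a linear subspace is the easiest case; the interesting (and hard) problems arise when the latent structure is a curved manifold, which is where the machinery of differential geometry earns its keep.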

## Inferential Statistics over Manifolds

Once we have established a calculus over smooth manifolds, we can begin to inform data-driven approaches for solving real-world problems. Data-driven approaches to manifold learning---and other methods of parameter dimension reduction---are the focal point of my research. I am not solely interested in the abstractions of differential geometry but in how these abstractions can be made useful across a multitude of disciplines. These data-driven approaches are typically broken down into two categories:

- Unsupervised "learning": looking for reduced-dimensional structure in data absent a response (coloring) of the data, e.g., PCA, ISOMAP, autoencoders.
- Supervised "learning": looking for reduced-dimensional structure in data given a scalar-valued response (coloring) of the data, e.g., active subspaces, interpolation, Gaussian processes (RBF kernel approximations).
Although I share a general distaste for the description "learning," this description has become ubiquitous in the AI and ML literature (no sentient machines here). In either case, I'm driven by applications involving data and models.
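To make the supervised category concrete, here is a minimal sketch of the active-subspace idea: eigenvectors of the averaged gradient outer product identify the directions along which a scalar response actually varies. The ridge function, sample sizes, and dimensions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ridge function f(x) = (a^T x)^2: the response varies
# only along the single direction a, regardless of ambient dimension.
m = 10
a = np.zeros(m)
a[0] = 1.0
X = rng.normal(size=(500, m))                # isotropic input samples
grads = 2.0 * (X @ a)[:, None] * a[None, :]  # grad f(x) = 2 (a^T x) a

# Active-subspace matrix: average outer product of gradients.
# Its dominant eigenvector recovers the ridge direction a.
C = grads.T @ grads / len(X)
eigvals, eigvecs = np.linalg.eigh(C)         # ascending eigenvalues
active_dir = eigvecs[:, -1]                  # dominant eigenvector
print(np.abs(active_dir @ a))                # ~1: aligned with a
```

Unsupervised methods applied to the same inputs would find nothing special about the direction `a`, since the samples are isotropic; it is the response (the "coloring") that singles it out.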

## Active Manifold-Geodesics

Thus far, my contribution to the field is a generalization of active subspaces for scalar-valued functions with manifold-valued domains. Click on any of the images to access my Ph.D. dissertation and learn about the details!
Below is an illustration of my research story (left): evolving from studying the functional response of parameterized shape representations to a parameterization-independent, global, adjoint-based sensitivity analysis. These ideas have led to novel approaches for understanding artificial intelligence and machine learning technologies, which occupy the majority of my research as an NRC postdoc at the National Institute of Standards and Technology.
I also had the opportunity to look for ridge structure in a variety of applications at Rolls-Royce. Below is a snapshot (right) of the various applications where I identified this structure and exploited it for uncertainty quantification (UQ), optimization, and approximation:
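The payoff of identifying ridge structure can be sketched in a few lines: once a response of many variables is known to vary only along a single direction, a cheap one-dimensional surrogate replaces a high-dimensional approximation problem. The cubic ridge function and dimensions below are illustrative assumptions, not any Rolls-Royce application.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 20-D function with hidden 1-D ridge structure:
# f(x) = (w^T x)^3 varies only along the direction w.
m = 20
w = rng.normal(size=m)
w /= np.linalg.norm(w)
f = lambda X: (X @ w) ** 3

X = rng.normal(size=(200, m))
y = f(X)

# Exploit the ridge: fit a 1-D polynomial surrogate in the active
# coordinate t = w^T x instead of approximating in 20 dimensions.
t = X @ w
surrogate = np.poly1d(np.polyfit(t, y, deg=3))

# Check accuracy on fresh samples: the 20-D function reduces
# to an (essentially exact) 1-D fit.
X_test = rng.normal(size=(100, m))
err = np.max(np.abs(surrogate(X_test @ w) - f(X_test)))
print(err)
```

The same reduction is what makes UQ and optimization tractable in these settings: sampling, quadrature, and search all happen over the low-dimensional active coordinate rather than the full input space.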