Once we have established a calculus of smooth manifolds, we can begin to inform data-driven approaches for solving real-world problems. Data-driven approaches to manifold learning---and other methods of parameter dimension reduction---are the focal point of my research. I am not solely interested in the abstractions of differential geometry but in how these abstractions can be made useful across a multitude of disciplines. These data-driven approaches are typically broken down into two categories:

- Unsupervised "learning" - looking for reduced-dimensional structure in data absent a response (coloring) of the data. E.g., PCA, ISOMAP, autoencoders
- Supervised "learning" - looking for reduced-dimensional structure in data given a scalar-valued response (coloring) of the data. E.g., active subspaces, interpolation, Gaussian processes (RBF kernel approximations)
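To make the two categories concrete, here is a minimal numpy sketch contrasting one method from each: PCA (unsupervised, dominant variance direction of the data alone) and an active subspace (supervised, dominant eigenvector of the average outer product of gradients of a scalar response). The toy data, the quadratic test function, and the direction `a` are illustrative assumptions, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in 5 dimensions, most variance along the first axis.
X = rng.normal(size=(200, 5))
X[:, 0] *= 10.0

# Unsupervised: PCA via the SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pca_direction = Vt[0]  # dominant principal direction (~ first coordinate axis)

# Supervised: active subspace of f(x) = (a @ x)^2 from gradient samples.
a = np.array([1.0, 0.0, 2.0, 0.0, 0.0])
grads = 2.0 * (X @ a)[:, None] * a[None, :]  # rows: grad f at each sample
C = grads.T @ grads / len(X)                 # average outer product of gradients
eigvals, eigvecs = np.linalg.eigh(C)
active_direction = eigvecs[:, -1]            # dominant eigenvector

# The active subspace recovers the direction a (up to sign), regardless of
# how the unsupervised variance in X is distributed.
alignment = abs(active_direction @ a) / np.linalg.norm(a)
```

Note the contrast: PCA sees only the spread of `X` and picks the high-variance axis, while the active subspace uses the response's gradients and recovers the direction the function actually varies in.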
Although I share a general distaste for the term "learning," it has become ubiquitous in the AI and ML literature (no sentient machines here). In either case, I'm driven by applications involving data and models.