News - I am on the Job Market this year.
New Paper at AISTATS - Near-Interpolators: Rapid Norm Growth and the Trade-Off between Interpolation and Generalization with Yutong Wang and Wei Hu
Will be mentoring a project at LogML in July https://www.logml.ai
Will be presenting four posters at NeurIPS. Three at the M3L workshop and one at NeurReps.
Will be presenting a poster at DeepMath Conference
Will be presenting at the Workshop on Geometry and Machine Learning at the Max Planck Institute
Will serve on the Program Committee for the Workshop on Symmetry and Neural Representations at NeurIPS 2023
New paper at Nature Machine Intelligence - Predicting the Future of AI with AI: High Quality Link Prediction in an Exponentially Growing Knowledge Network
Will be at the Aspen Center for Physics for the workshop on Theoretical Physics and Deep Learning Theory.
Organizing a session on Geometry and Optimization at SampTA, held at Yale.
New Paper - Training Data Size Induced Double Descent For Denoising Feedforward Neural Networks and the Role of Training Noise accepted at TMLR.
Award - Top Reviewer at NeurIPS 2022
New Paper - Paper from mentoring REU students: Knowledge Graphs for QAnon Twitter Network at IEEE BigData Workshop.
New Paper - Paper from mentoring student: Hyperbolic and Mixed Geometry Neural Networks at NeurIPS NeurReps workshop.
New Paper - Project and Forget was accepted at JMLR
Award - Peter Smereka Award for Best Applied Math Thesis
Award - AMS Simons Travel Grant
I will be at the Max Planck Institute (MPI) in Leipzig over the summer of 2022.
I will be at the AMS MRC on Data Science in June 2022.
New Paper - ICLR 2022 Challenges for Computational Geometry and Topology at PMLR 2022
New Paper - CubeRep: Learning Relations Between Different Views of Data at PMLR 2022
New Paper - Paper from mentoring REU students: An Analysis of COVID-19 Knowledge Graphs Construction and Application at IEEE BigData Conference 2021.
New Paper - What can go wrong with multidimensional scaling? Accepted at NeurIPS 2021! Work with Anna Gilbert, Ben Raichel and Greg Van Buskirk
About Me
Hi! I am a Hedrick Assistant Adjunct Professor at UCLA, working with Andrea Bertozzi, Jacob Foster, and Guido Montufar. I obtained my Ph.D. in Applied and Interdisciplinary Mathematics from the University of Michigan, where I won the Peter Smereka Award for the best applied math thesis. My advisors were Anna C. Gilbert and Raj Rao Nadakuditi. I did my undergrad at Carnegie Mellon University, where I obtained a B.S. with a double major in Discrete Math and Computer Science.
I believe that understanding the mathematical foundations of machine learning algorithms is crucial. My work has contributed to a better understanding of the intrinsic geometric and probabilistic structure of data, and this understanding has been applied to design better machine learning algorithms. My current focus is the mathematical underpinnings of geometric deep learning, optimization, and generalization.
Current Projects
Denoising Autoencoders - Denoising autoencoders work by learning a map from noisy data to denoised data. Hence, training the neural network requires pairs of noisy and clean training data. Currently, this noise is either added in an ad hoc manner or added so that the training-data SNR matches the test-data SNR. I am currently working on theoretical aspects of this setup, such as optimization and generalization.
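To make the SNR-matching convention concrete, here is a minimal sketch (my illustration, not code from any paper) of constructing noisy/clean training pairs at a prescribed SNR; the helper name `add_noise_at_snr` is hypothetical:

```python
import numpy as np

def add_noise_at_snr(x, snr_db, rng):
    """Add Gaussian noise to x so the noisy signal has the given SNR in dB."""
    signal_power = np.mean(x ** 2)
    # SNR (dB) = 10 * log10(signal_power / noise_power)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=x.shape)
    return x + noise

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 2 * np.pi, 1000))   # stand-in for clean data
noisy = add_noise_at_snr(clean, snr_db=10.0, rng=rng)
# (noisy, clean) pairs then serve as network inputs and regression targets
```

The same helper applied with different `snr_db` values to the training and test sets reproduces the mismatched-noise setting described above.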
Using Hyperbolic Geometry for Machine Learning - A large part of machine learning is about learning parameterized functions, traditionally done by fitting the function to some data. Classically, we cared about minimizing a loss function on this data. However, in the modern regime, due to overparameterization, there exist multiple global minima. The bias of a method towards picking a specific global minimum is known as the implicit bias of the method. I am interested in the interplay between implicit bias and geometry.
Here geometry can play a role in a variety of ways. First, we could be looking at certain subspaces of functions, in which case the geometry of the subspace is important. Second, we could care about maps that factor through different manifolds, in which case the geometry of the manifold is important. Third, we could restrict our parameters to lie in certain spaces, in which case the geometry of that space is important. I am interested in understanding how geometry affects the inductive bias of machine learning methods.
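The classic toy example of implicit bias (a standard illustration, not tied to any specific paper of mine): in an overparameterized linear least-squares problem with many interpolating global minima, gradient descent initialized at zero converges to one particular minimum, the minimum-norm interpolant given by the pseudoinverse:

```python
import numpy as np

# Overparameterized linear least squares: 20 parameters, only 5 data
# points, so infinitely many w satisfy Xw = y exactly. Gradient descent
# from the origin selects the minimum-norm interpolant -- this selection
# effect is the implicit bias of the method.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 20))       # 5 data points, 20 parameters
y = rng.normal(size=5)

w = np.zeros(20)                   # initialize at the origin
lr = 0.01
for _ in range(20000):
    w -= lr * X.T @ (X @ w - y)    # gradient step on 0.5 * ||Xw - y||^2

w_min_norm = np.linalg.pinv(X) @ y # minimum-norm solution among interpolants
```

With a nonzero initialization, gradient descent instead converges to a different global minimum, offset by the component of the initialization orthogonal to the row space of `X`, which is one simple way geometry shapes the inductive bias.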
Generative Modelling - Generative models are methods that allow us to sample from probability distributions. I am interested in using Random Matrix Theory to improve these methods.
Algebraic Structure of Graphical Models - Many probabilistic models, such as graphical models, have algebraic structure; for example, they model a semi-algebraic subset of the probability simplex. I am interested in understanding this structure.
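One textbook instance of such algebraic structure (a standard example from algebraic statistics, not specific to my work): for two binary random variables, the independence model is exactly the set of 2x2 joint distributions whose determinant vanishes, i.e. a determinantal variety intersected with the probability simplex:

```python
import numpy as np

# Independence of two binary variables means the joint distribution
# factors as an outer product of the marginals, so the 2x2 joint matrix
# P has rank 1 -- equivalently det(P) = 0.
px = np.array([0.3, 0.7])          # marginal distribution of the first variable
py = np.array([0.6, 0.4])          # marginal distribution of the second variable
P = np.outer(px, py)               # joint distribution under independence

det = np.linalg.det(P)             # vanishes (up to floating point) on the model
rank = np.linalg.matrix_rank(P)    # independent joints are exactly the rank-1 ones
```

Conversely, any nonnegative rank-1 matrix summing to 1 factors into its marginals, so det(P) = 0 cuts out the independence model inside the simplex.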
See Publications for prior projects.
If you have any questions, ideas you want to discuss, or just want to talk about math and computer science, you can find ways to contact me under the Contact Me tab.