I am a final-year PhD student in ECE at UCLA, advised by Prof. Suhas Diggavi. I am broadly interested in large-scale machine learning and the challenges surrounding it. My current focus spans several areas, including privacy-preserving machine learning, efficient training and optimization of large language models, and personalization.
At UCLA, I have been working on personalization in the context of Federated Learning. Along the way, I have also worked on model compression, communication efficiency, privacy, distributed optimization, and generative models.
During my internships at AWS AI, I worked mainly on efficient LLM training. I designed meta-learned optimizers for training LLMs and studied the theoretical and practical implications of quantized LLM training.