The goal of my research is to build the foundations of ethical and trustworthy machine learning and carry them all the way to practice, so that AI can truly bring about social good. I am particularly interested in addressing challenges around explainability, efficiency, fairness, and robustness by bringing novel mathematical perspectives rooted in information theory, statistics, causality, and optimization.
Our work has appeared in several machine learning conferences, including NeurIPS, ICML, ICLR, AAAI, AISTATS, and AAMAS, as well as in mathematically rigorous information-theory venues such as ISIT and the IEEE Transactions on Information Theory. It has been featured in New Scientist, the Baltimore Sun, and the Montreal AI Ethics Brief, and has also been adopted at JPMorgan. Our research group is supported by an NSF CAREER Award, an NSF MPS SPEED Award, a JPMorgan Faculty Award, Google gift funding, and a Northrop Grumman Seed Grant.
Before joining UMD, I was a senior research associate at JPMorgan Chase AI Research in the Explainable AI Centre of Excellence (XAI CoE). I also received the Simons Institute Fellowship for the Causality Program in 2022.
Recently, I have been intrigued by leveraging interpretability for efficient and trustworthy machine learning: how can model understanding enable better model and data compression? I am also interested in AI for accelerated materials discovery.
Some of our recent research projects include: robust counterfactual explanations for algorithmic recourse, quantifying prediction consistency in tabular LLMs, data-efficient instruction tuning of LLMs, efficient model reconstruction and distillation using counterfactual explanations, and information-theoretic measures for fairness and explainability.
I am looking for motivated students to join my research group!
Prospective Students: Please apply to the UMD Graduate Program and mention my name in your application.
Current Students: If you are already admitted to UMD, please send me an email with your resume and transcript.
I received my Ph.D. from Carnegie Mellon University, where my thesis received the A.G. Milnes Outstanding Thesis Award and I held the K&L Gates Presidential Fellowship. My research on quantifying accuracy-fairness tradeoffs using information theory (with IBM Research) was featured in New Scientist.
I have also examined problems in reliable computing, proposing solutions for large-scale distributed machine learning using tools from coding theory (an emerging area called "coded computing"). My results on coded computing have received substantial attention across disciplines.
Trustworthy Machine Learning
Instruction Tuning, Data Selection
Efficiency, Distillation, Model Compression
Explainability, Interpretability, Robustness
Fairness, Privacy
Optimization, Information Theory, Probability, Causality
Distributed Machine Learning, Coded Computing
Compressive Sensing and Sparse Linear Algebra