About Me
I am a Research Scientist in the Department of Electrical and Computer Engineering at Carnegie Mellon University, working with Prof. Gauri Joshi. In August 2021, I finished my PhD in Electrical Engineering and Computer Science at Syracuse University with Prof. Pramod K. Varshney. Before coming to the US (which feels like long ago, in a galaxy far away), I completed my B.Tech-M.Tech dual degree in Electrical Engineering at IIT Kanpur.
Career Update: I will join IIT Bombay as an Assistant Professor in the Centre for Machine Intelligence and Data Science, starting in Spring 2025. I'll be looking for PhD and master’s students, and a few full-time research assistants to work on optimization and machine learning theory.
Research Interests
Federated and Collaborative Learning, Stochastic Optimization, Deep Learning Theory, Reinforcement Learning, Differential Privacy
I'm always looking for collaborations. If you are a researcher with similar research interests, feel free to email me and we can set up a time to chat. Also, if you're a beginner in the fascinating, though often overwhelming, world of research and would like some friendly advice, do reach out.
Recent News
Oct-Nov 2024: I gave three guest lectures in the course 18-667: Algorithms for Large-scale Distributed Machine Learning and Optimization, offered this semester by my advisor Prof. Gauri Joshi. Check out my slides below:
Oct 2024: Check out three new papers.
B. Askin, P. Sharma, G. Joshi, and C. Joe-Wong, "Federated Communication-Efficient Multi-Objective Optimization."
A. Armacki, S. Yu, P. Sharma, G. Joshi, D. Bajovic, D. Jakovetic, and S. Kar, "Nonlinear Stochastic Gradient Descent and Heavy-tailed Noise: A Unified Framework and High-probability Guarantees."
Z. Sun, Z. Zhang, Z. Xu, G. Joshi, P. Sharma, and E. Wei, "Debiasing Federated Learning with Correlated Client Participation."
July 2024: Along with Zheng Chen and Erik G. Larsson (from Linköping University), I am organizing a special session on Distributed optimization and learning with resource-constrained communication at ICASSP'25.
Jun 2024: Attended the AIMACCS workshop organized by the NSF AI-EDGE Institute at Ohio State University. Thanks to all the organizers!
Apr 2024: One paper accepted in UAI 2024. Congrats to all the co-authors.
B. Askin, P. Sharma, G. Joshi, and C. Joe-Wong, "FedAST: Federated Asynchronous Simultaneous Training." (acceptance rate 27%)
Dec 2023-April 2024: Invited talks on Computation- and Communication-Efficient Distributed Learning at the following places:
School of Artificial Intelligence (ScAI), Indian Institute of Technology Delhi
EE, Indian Institute of Technology Madras
Centre for Machine Intelligence and Data Science (CMInDS), Indian Institute of Technology Bombay (see slides)
ECE, Indian Institute of Science, Bangalore
School of Technology and Computer Science (STCS), Tata Institute of Fundamental Research (TIFR), Mumbai
IEOR, Indian Institute of Technology Bombay
Dec 2023 II: Our paper on Random Reshuffling over Networks got accepted in ICASSP'24.
Dec 2023 I: Attended NeurIPS'23 in New Orleans from Dec 11-16. Do check our papers and posters.
Correlation Aware Sparsified Mean Estimation Using Random Projection. Check out the paper and poster.
Model Sparsity Can Simplify Machine Unlearning. Check out the Spotlight paper and poster.
Nov 2023: Invited talk at the Google FL seminar at Google Research, Mountain View. Thanks, Zheng Xu for the invite.
Oct 2023 III: Presented our recent work on minimax optimization and cyclic federated learning at the Missouri S&T CS Department seminar. Thanks, Sid Nadendla for the invite. Check out the slides here.
Oct 2023 II: Our paper on min-max optimization just got accepted (with minor revisions) in TMLR. Congrats to all the co-authors.
P. Sharma, R. Panda, and G. Joshi, "Federated Minimax Optimization with Client Heterogeneity."
Oct 2023 I: Presented our recent work on minimax optimization at the AI-EDGE SPARKS seminar. Thanks for the invite. Check out the slides here.
Sept 2023 II: Two papers accepted in NeurIPS 2023. Congratulations to all the co-authors.
S. Jiang, P. Sharma, and G. Joshi, "Correlation Aware Sparsified Mean Estimation Using Random Projection." Poster presentation.
J. Jia, J. Liu, P. Ram, Y. Yao, G. Liu, Y. Liu, P. Sharma, and S. Liu, "Model Sparsity Can Simplify Machine Unlearning." Spotlight presentation.
Sept 2023 I: Attended the New Frontiers in Federated Learning Workshop at the Toyota Technological Institute in Chicago. Thanks to all the organizers!
July 2023: Attended ICML'23 in Honolulu, Hawaii from July 24-27. Do check our paper and poster.
June 2023: Presented our recent work on minimax optimization at the SIAM Conference on Optimization (OP23) in Seattle, in the session on Recent Advancements in Optimization Methods for Machine Learning. Thanks, Nicolas Loizou and Siqi Zhang for the invite!
May 2023: One paper accepted in ICML 2023. Congrats to all the co-authors.
Y-J. Cho, P. Sharma, G. Joshi, Z. Xu, S. Kale, and T. Zhang, "On the Convergence of Federated Averaging with Cyclic Client Participation." Short presentation (acceptance rate: 27.9%).
March-April 2023: Presented our recent work on minimax optimization and cyclic federated learning at the following places: Prof. Mingyi Hong's (ECE, UMN) research group, EE-IIT Bombay, CNI-IISc Bengaluru, EE IIT Madras, and EE IIT Kanpur. The talk at CNI was live-streamed on YouTube and can be found here.
Feb 2023: Check out two new papers.
P. Sharma, R. Panda, and G. Joshi, "Federated Minimax Optimization with Client Heterogeneity."
Y-J. Cho, P. Sharma, G. Joshi, Z. Xu, S. Kale, and T. Zhang, "On the Convergence of Federated Averaging with Cyclic Client Participation."
Jan 2023: One paper accepted in ICLR 2023.
Y. Zhang, P. Sharma, P. Ram, M. Hong, K. R. Varshney, and S. Liu, "What Is Missing in IRM Training and Evaluation? Challenges and Solutions." Poster presentation (acceptance rate: 31.8%).
Nov 2022: Presented our work on Minimax Optimization at the 56th Annual Asilomar Conference on Signals, Systems, and Computers (held in Pacific Grove, CA from October 29th to November 1st). Check out the 15-min version of the talk here.
Oct 2022: Presented our work on Minimax Optimization at the 2022 Informs Annual Meeting (held in Indianapolis from October 16th to 19th) in the session "Learning and Inference for Designing Policies in Stochastic Systems," chaired by Prof. Weina Wang (CMU). Thanks for the invitation!
May 2022 II: One paper accepted in UAI 2022.
D. Jhunjhunwala, P. Sharma, A. Nagarkatti, and G. Joshi, "FedVARP: Tackling the Variance Due to Partial Client Participation in Federated Learning." Poster presentation (acceptance rate: 32.3%).
May 2022 I: Two papers accepted in ICML 2022.
P. Sharma, R. Panda, G. Joshi, and P. K. Varshney, "Federated Minimax Optimization: Improved Convergence Analyses and Algorithms." Short presentation (acceptance rate: 19.8%). Check out the 5-min video uploaded for ICML here.
S. Khodadadian, P. Sharma, G. Joshi, and S. Maguluri, "Federated Reinforcement Learning: Communication-Efficient Algorithms and Convergence Analysis." Long presentation (acceptance rate: 2.1%).
Sept 2021: One paper accepted in NeurIPS 2021.
P. Khanduri, P. Sharma, H. Yang, M. Hong, J. Liu, K. Rajawat, and P. K. Varshney, "STEM: A Stochastic Two-Sided Momentum Algorithm Achieving Near-Optimal Sample and Communication Complexities for Federated Learning." Poster presentation.
Aug 2021: I joined the ECE Dept. at CMU as a postdoctoral researcher.
July 2021: I successfully defended my PhD thesis on Distributed Tracking and Optimization. Thanks to my advisor Prof. Pramod K. Varshney and all the committee members.