Federated and Collaborative Learning
Stochastic Optimization
Deep Learning Theory
Reinforcement Learning
Differential Privacy
If you're new to research (a senior undergraduate, master's student, or early-stage PhD student) and would like some friendly advice, do reach out.
S. Jiang, P. Sharma, Z. S. Wu, G. Joshi, "The Cost of Shuffling in Private Gradient Based Optimization," arXiv.
N. Jali, E. Pathak, P. Sharma, G. Qu, and G. Joshi, "Natural Policy Gradient for Average Reward Non-Stationary RL." (preprint coming soon)
D. Jhunjhunwala, P. Sharma, Z. Xu, and G. Joshi, "Initialization Matters: Unraveling the Impact of Pre-Training on Federated Learning." (preprint coming soon)
P. Sharma, P. Khanduri, S. Bulusu, K. Rajawat, and P. K. Varshney, "Parallel Restarted SPIDER - Communication Efficient Distributed Nonconvex Optimization with Optimal Computation Complexity," arXiv.
Z. Sun, Z. Zhang, Z. Xu, G. Joshi, P. Sharma, and E. Wei, "Debiasing Federated Learning with Correlated Client Participation," accepted in ICLR 2025. (acceptance rate: 32%)
B. Askin, P. Sharma, G. Joshi, and C. Joe-Wong, "Federated Communication-Efficient Multi-Objective Optimization," accepted in AISTATS'25. (acceptance rate 31.3%)
A. Armacki, S. Yu, P. Sharma, G. Joshi, D. Bajovic, D. Jakovetic, and S. Kar, "High-probability Convergence Bounds for Online Nonlinear Stochastic Gradient Descent Under Heavy-tailed Noise," accepted in AISTATS'25. (acceptance rate 31.3%)
B. Askin, P. Sharma, G. Joshi, and C. Joe-Wong, "FedAST: Federated Asynchronous Simultaneous Training," accepted in UAI 2024. (acceptance rate 27%)
S. Jiang, P. Sharma, and G. Joshi, "Correlation Aware Sparsified Mean Estimation Using Random Projection," accepted in NeurIPS 2023 as a poster presentation. (acceptance rate: 26.1%)
J. Jia, J. Liu, P. Ram, Y. Yao, G. Liu, Y. Liu, P. Sharma, and S. Liu, "Model Sparsity Can Simplify Machine Unlearning," accepted in NeurIPS 2023 as a spotlight presentation. (acceptance rate: 26.1%)
P. Sharma, R. Panda, and G. Joshi, "Federated Minimax Optimization with Client Heterogeneity," Transactions on Machine Learning Research, 2023.
Y.-J. Cho, P. Sharma, G. Joshi, Z. Xu, S. Kale, and T. Zhang, "On the Convergence of Federated Averaging with Cyclic Client Participation," accepted in ICML 2023 for a short presentation. (acceptance rate: 27.9%)
Y. Zhang, P. Sharma, P. Ram, M. Hong, K. R. Varshney, and S. Liu, "What Is Missing in IRM Training and Evaluation? Challenges and Solutions," accepted in ICLR 2023 for a poster presentation. (acceptance rate: 31.8%)
D. Jhunjhunwala, P. Sharma, A. Nagarkatti, and G. Joshi, "FedVARP: Tackling the Variance Due to Partial Client Participation in Federated Learning," accepted in UAI 2022 for a poster presentation. (acceptance rate: 32.3%)
S. Khodadadian, P. Sharma, G. Joshi, and S. Maguluri, "Federated Reinforcement Learning: Communication-Efficient Algorithms and Convergence Analysis," accepted in ICML 2022 for a long presentation. (acceptance rate: 2.1%)
P. Sharma, R. Panda, G. Joshi, and P. K. Varshney, "Federated Minimax Optimization: Improved Convergence Analyses and Algorithms," accepted in ICML 2022 for a short presentation. (acceptance rate: 19.8%)
P. Khanduri, P. Sharma, H. Yang, M. Hong, J. Liu, K. Rajawat, and P. K. Varshney, "STEM: A Stochastic Two-Sided Momentum Algorithm Achieving Near-Optimal Sample and Communication Complexities for Federated Learning," NeurIPS 2021.
S. Bulusu, P. Khanduri, S. Kafle, P. Sharma, and P. K. Varshney, "Byzantine Resilient Non-Convex SCSG With Distributed Batch Gradient Computations," IEEE Transactions on Signal and Information Processing over Networks, Vol. 7, pages 754-766, 2021.
P. Sharma, A. A. Saucan, D. J. Bucci, and P. K. Varshney, "Decentralized Gaussian Filters for Cooperative Self-localization and Multi-target Tracking," IEEE Transactions on Signal Processing, Vol. 67, no. 22, pages 5896-5911, 2019.
P. Sharma, D. J. Bucci, S. K. Brahma, and P. K. Varshney, "Communication Network Topology Inference via Transfer Entropy," IEEE Transactions on Network Science and Engineering, Vol. 7, no. 1, pages 562-575, 2020.
S. D. Sharma, P. Sharma, and K. Rajawat, "On Decentralized Learning with Stochastic Subspace Descent," accepted in ICASSP 2025.
P. Sharma, J. Li, and G. Joshi, "On Improved Distributed Random Reshuffling over Networks," accepted in ICASSP 2024.
P. Sharma, P. Khanduri, L. Shen, D. J. Bucci Jr., and P. K. Varshney, "On Distributed Online Convex Optimization with Sublinear Dynamic Regret and Fit," Asilomar, 2021.
S. Bulusu, P. Khanduri, P. Sharma, and P. K. Varshney, "On Distributed Stochastic Gradient Descent for Nonconvex Functions in the Presence of Byzantines," ICASSP 2020.
P. Khanduri, S. Bulusu, P. Sharma, and P. K. Varshney, "Byzantine Resilient Non-Convex SVRG with Distributed Batch Gradient Computations," OPTML 2019.
P. Sharma, A. A. Saucan, D. J. Bucci, and P. K. Varshney, "On Decentralized Self-localization and Tracking Under Measurement Origin Uncertainty," 22nd International Conference on Information Fusion (FUSION), 2019.
P. Sharma, A. A. Saucan, D. J. Bucci, and P. K. Varshney, "On Self-Localization and Tracking with an Unknown Number of Targets," Asilomar, 2018.
K. R. Varshney, P. Khanduri*, P. Sharma*, S. Zhang, and P. K. Varshney, "Why Interpretability in Machine Learning? An Answer Using Distributed Detection and Data Fusion Theory," WHI, ICML 2018. [* equal contribution]
M. Gagrani*, P. Sharma*, S. Iyengar, V. S. S. (Sid) Nadendla, A. Vempaty, H. Chen and P. K. Varshney, "On Noise-enhanced Distributed Inference in the Presence of Byzantines," Allerton, 2011. [* equal contribution]