Indian Institute of Technology, Roorkee
A Ph.D. researcher in distributed and federated learning for communication systems, focusing on divergence-based adaptive algorithms, communication-efficient training, and secure decentralized frameworks.
Python, MATLAB, C, LaTeX
AI for 6G, Distributed & Decentralized Learning, Communication-Efficient Federated Learning, Secure & Privacy-Preserving Learning (QKD/FL), Channel Modeling & Estimation, IoT & Edge Intelligence, Graph & Multimodal Learning
PyTorch, Hugging Face (transformers, torchvision, timm), Sionna, SionnaRT, OpenCV, scikit-learn, pandas, NumPy, Matplotlib, seaborn
Shekhar Pratap Singh, Parth Sharma, and Pyari Mohan Pradhan
10 September 2025 • AEU - International Journal of Electronics and Communications
With the increasing number of users, channel estimation in the presence of pilot contamination has become more challenging. To reduce the computational complexity involved in performing adaptive channel estimation in real time, block adaptive filters are widely used. This paper proposes a channel estimation technique that uses a least mean square (LMS) adaptive filtering algorithm based on an information-theoretic divergence, named the Amari-Alpha divergence based Block LMS (AABLMS) algorithm. This algorithm is used to study a scenario where multiple users receive contaminated pilot signals. The condition for convergence of the proposed AABLMS algorithm in the mean sense is derived, and upper and lower bounds on the learning rate are obtained. Further, the block counterparts of existing state-of-the-art LMS variants are compared with the proposed AABLMS algorithm in terms of computational complexity, mean square deviation (MSD), and mean square error (MSE). The simulation results show that the proposed AABLMS algorithm performs better than the other block LMS-based counterparts in the presence of channel noise and pilot-contaminating noise.
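For context, the minimal NumPy sketch below illustrates the generic block-adaptive structure on which such estimators are built: the weights are updated once per block of pilot symbols rather than per sample. The filter length, block length, learning rate, and squared-error update are illustrative assumptions only; the paper's AABLMS algorithm replaces the squared-error cost with an Amari-Alpha divergence based one whose exact update rule is not reproduced here.

```python
# Minimal block LMS sketch for pilot-based channel estimation (NumPy).
# Only the generic block-adaptive structure is shown; the divergence-based
# cost used by AABLMS is not implemented here.
import numpy as np

rng = np.random.default_rng(0)

N = 8          # channel (filter) length -- illustrative value
L = 16         # block length -- illustrative value
n_blocks = 500
mu = 0.01      # learning rate (must satisfy the derived stability bounds)

h_true = rng.standard_normal(N)    # unknown channel
w = np.zeros(N)                    # adaptive estimate

for _ in range(n_blocks):
    X = rng.standard_normal((L, N))                    # pilot regressors for one block
    d = X @ h_true + 0.05 * rng.standard_normal(L)     # received pilots + channel noise
    e = d - X @ w                                      # block of a priori errors
    w += (mu / L) * X.T @ e                            # one weight update per block

print("MSD:", np.mean((h_true - w) ** 2))
```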
Parth Sharma and Pyari Mohan Pradhan
25 April 2025 • 2024 34th International Conference on Computer Theory and Applications (ICCTA)
Precise segmentation of brain tumors is essential in medical imaging, supporting early diagnosis, personalized treatment planning, and effective monitoring of tumor development and response to therapies. While centralized deep learning models like U-Net are highly effective for this task, they raise substantial privacy concerns when healthcare organizations need to exchange confidential patient information. To address these concerns, the BrainDiffU-Net framework, a diffusion-based U-Net model designed for decentralized learning environments, is introduced. In this framework, each medical institution independently trains a model on its local dataset and exchanges model updates with neighboring institutions, ensuring data privacy without requiring a central server. The diffusion process allows model parameters to be shared securely across the network, facilitating collaborative learning across multiple institutions while preserving privacy. Simulations conducted on the Low-Grade Glioma (LGG) MRI Segmentation Dataset demonstrate that BrainDiffU-Net achieves segmentation performance comparable to centralized models and significantly outperforms non-cooperative strategies in terms of average loss and Dice coefficient. These results show that BrainDiffU-Net provides an optimal balance between segmentation accuracy and privacy preservation, making it a viable solution for decentralized medical applications. The source code is available at https://github.com/Parth-nXp/BrainDiffU-Net.
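For context, the PyTorch sketch below shows the serverless "train locally, then combine with neighbours" pattern described in the abstract. The tiny stand-in model, synthetic data, ring topology, and uniform combination weights are assumptions for illustration only; the actual BrainDiffU-Net implementation is available in the linked repository.

```python
# Sketch of diffusion-style decentralized training: each node adapts on its own
# data, then combines parameters with its neighbours (no central server).
import copy
import torch
import torch.nn as nn

n_nodes = 4
neighbours = {k: [(k - 1) % n_nodes, k, (k + 1) % n_nodes] for k in range(n_nodes)}

# One small model per institution (stand-in for the segmentation U-Net).
models = [nn.Sequential(nn.Conv2d(1, 4, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(4, 1, 1)) for _ in range(n_nodes)]
opts = [torch.optim.Adam(m.parameters(), lr=1e-3) for m in models]
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(3):                                   # communication rounds
    # Adapt: each node trains on its own (here synthetic) local data.
    for k in range(n_nodes):
        x = torch.randn(2, 1, 32, 32)
        y = (torch.rand(2, 1, 32, 32) > 0.5).float()
        opts[k].zero_grad()
        loss_fn(models[k](x), y).backward()
        opts[k].step()

    # Combine: each node averages parameters over its neighbourhood.
    snapshots = [copy.deepcopy(m.state_dict()) for m in models]
    for k in range(n_nodes):
        mixed = {name: sum(snapshots[j][name] for j in neighbours[k]) / len(neighbours[k])
                 for name in snapshots[k]}
        models[k].load_state_dict(mixed)
```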
Parth Sharma and Pyari Mohan Pradhan
03 March 2025 • 2024 IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS)
Estimation of sparse parameters in the presence of non-Gaussian noise is a challenge in Internet of Things (IoT) networks. This paper introduces a novel diffusion β-divergence based block proportionate normalized least mean squares (DβBPNLMS) algorithm tailored for distributed estimation in IoT networks. The proposed algorithm incorporates the β-divergence measure to enhance robustness against non-Gaussian noise. Further, it employs block proportionate normalization to reduce computational complexity while maintaining high estimation accuracy. The algorithm also utilizes a coefficient re-ordering strategy and uniform weighting within blocks to efficiently prioritize significant coefficients. An adapt-then-combine (ATC) diffusion strategy is implemented to improve robustness and convergence speed in distributed parameter estimation. The performance of the proposed DβBPNLMS algorithm is validated through a simulation study on two critical IoT applications: acoustic echo path estimation and urban microcell wireless channel estimation. The results demonstrate that the proposed algorithm significantly outperforms state-of-the-art algorithms, achieving lower steady-state error across various noise conditions, thereby proving its effectiveness and practical applicability in real-time IoT networks.
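For context, the NumPy sketch below shows the adapt-then-combine (ATC) diffusion structure referenced in the abstract: each node first updates its local estimate from its own measurement, then averages the intermediate estimates of its neighbours. The plain NLMS adapt step, ring topology, and uniform combination weights are simplifying assumptions; the proposed DβBPNLMS algorithm replaces them with a β-divergence based, block proportionate normalized update.

```python
# Sketch of ATC diffusion estimation of a sparse parameter vector (NumPy).
import numpy as np

rng = np.random.default_rng(1)

n_nodes, M = 6, 16
mu, eps = 0.5, 1e-6
w_true = np.zeros(M)
w_true[[1, 5, 11]] = [0.8, -0.5, 0.3]          # sparse unknown parameter

neighbours = {k: [(k - 1) % n_nodes, k, (k + 1) % n_nodes] for k in range(n_nodes)}
W = np.zeros((n_nodes, M))                      # per-node estimates
psi = np.zeros_like(W)                          # intermediate (post-adapt) estimates

for _ in range(2000):
    # Adapt: each node runs a local NLMS step on its own measurement.
    for k in range(n_nodes):
        u = rng.standard_normal(M)
        d = u @ w_true + 0.01 * rng.standard_normal()
        e = d - u @ W[k]
        psi[k] = W[k] + mu * e * u / (eps + u @ u)
    # Combine: each node averages the intermediate estimates of its neighbours.
    for k in range(n_nodes):
        W[k] = np.mean(psi[neighbours[k]], axis=0)

print("network MSD:", np.mean((W - w_true) ** 2))
```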