Publications:
5. Conformal Sparsification for Bandwidth-Efficient Edge-Cloud Speculative Decoding
Payel Bhattacharjee*, F. Tian*, Meiyu Zhong*, Guangyi Zhang, Osvaldo Simeone, Ravi Tandon
NeurIPS 2025 Workshop: AI and ML for Next-Generation Wireless Communications and Networking, 2025
We first derive an information-theoretic bound that decomposes the token rejection rate into contributions from SLM-LLM distribution mismatch and from quantization distortion. Guided by this analysis, we propose the Sparse Quantize-and-Sample SD (SQS-SD) framework, which exploits distributional sparsity through structured sparsification and lattice-based quantization.
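As a rough illustration of the decomposition referred to above (a sketch under standard speculative-sampling assumptions; the paper's exact bound and constants may differ): with target distribution $p$ (LLM), draft distribution $q$ (SLM), and quantized draft $\hat{q}$, the per-token rejection probability equals the total-variation distance between the target and the distribution actually used for drafting, and the triangle inequality splits it into a mismatch term and a quantization term:
\[
\Pr[\text{reject}] \;=\; \mathrm{TV}(p, \hat{q}) \;\le\; \underbrace{\mathrm{TV}(p, q)}_{\text{SLM--LLM mismatch}} \;+\; \underbrace{\mathrm{TV}(q, \hat{q})}_{\text{quantization distortion}}.
\]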
4. PROPS: Progressively Private Self-alignment of Large Language Models
Noel Teku, F. Tian, Payel Bhattacharjee, Souradip Chakraborty, Amrit Singh Bedi, Ravi Tandon
We propose PROPS (PROgressively Private Self-alignment), a multi-stage, privacy-preserving alignment framework in which privately aligned models from earlier stages serve as labelers, supplementing the training data for subsequent stages of alignment. A minimal sketch of this staged loop follows below.
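The sketch below only illustrates the staged structure described above; the function names `dp_align` and `label_with_model` are hypothetical placeholders, and the actual PROPS objectives, labeling rules, and privacy accounting are specified in the paper.

```python
from typing import Callable, List, Sequence, Tuple

# Hypothetical sketch of a multi-stage, progressively private alignment loop.
def props_alignment(
    model,
    seed_preferences: List[Tuple[str, str, str]],  # (prompt, chosen, rejected) from humans
    prompt_pool: Sequence[Sequence[str]],          # unlabeled prompts, one batch per stage
    stage_budgets: Sequence[float],                # per-stage DP budgets (epsilon values)
    dp_align: Callable,                            # placeholder: DP alignment step (e.g., DP-SGD)
    label_with_model: Callable,                    # placeholder: model-as-labeler step
):
    data = list(seed_preferences)
    for stage, eps in enumerate(stage_budgets):
        # Align the model privately on the preference data available at this stage.
        model = dp_align(model, data, epsilon=eps)
        # The privately aligned model then labels fresh prompts, supplementing the
        # training data for the next stage without re-querying sensitive human labels.
        data.extend(label_with_model(model, prompt_pool[stage]))
    return model
```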
3. A Framework for Multi-source Privacy Preserving Epidemic Analysis
Zihan Guan, Zhiyuan Zhao, F. Tian, Dung Nguyen, Payel Bhattacharjee, Ravi Tandon, B. Aditya Prakash, Anil Vullikanti
We develop a framework that combines deep learning and epidemic models to jointly forecast and learn epidemic dynamics from multiple datasets, and we show that even DP-protected financial data improves forecasting.
2. Learning to Diagnose Privately: DP-Powered LLMs for Radiology Report Classification
Payel Bhattacharjee, F. Tian, Ravi Tandon, Joseph Lo, Heidi Hanson, Geoffrey Rubin, Nirav Merchant, John Gounley
This study proposes a framework for fine-tuning large language models (LLMs) with differential privacy (DP) to perform multi-abnormality classification on radiology report text. The framework seeks to mitigate the privacy risks associated with sensitive patient data and protect against data leakage while maintaining classification performance.
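As a schematic of the standard recipe behind DP fine-tuning, the sketch below shows one DP-SGD-style update (per-example gradient clipping plus Gaussian noise); the paper's concrete setup (model, clipping norm, privacy accountant, classification head) may differ, and this is not the study's implementation.

```python
import numpy as np

def dp_sgd_step(params: np.ndarray, per_example_grads: np.ndarray,
                clip_norm: float, noise_multiplier: float, lr: float,
                rng: np.random.Generator) -> np.ndarray:
    # Clip each example's gradient to bound its influence (sensitivity).
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Average the clipped gradients, then add Gaussian noise calibrated to the clip norm.
    n = per_example_grads.shape[0]
    noisy_grad = clipped.mean(axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm / n, size=params.shape)
    return params - lr * noisy_grad
```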
F. Tian and R. Tandon
57th Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, October 2024.
We introduce the concept of Inference Privacy (IP), a new framework designed to ensure privacy for a user's query/input data during inference. The core idea behind IP is to obscure model outputs to the extent that adversaries are unable to discern the specific query input within a defined privacy radius.
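For intuition, one way to obscure outputs in this spirit is output perturbation, sketched below; the formal IP definition and the mechanisms analyzed in the paper may differ from this toy version, and `local_sensitivity` is an assumed bound, not a quantity defined here.

```python
import numpy as np

def private_inference(model_output: np.ndarray, local_sensitivity: float,
                      epsilon: float, rng: np.random.Generator) -> np.ndarray:
    # local_sensitivity: assumed bound on how much the output can change when the
    # query moves within the privacy radius (a Lipschitz-type assumption).
    noise_scale = local_sensitivity / epsilon  # Laplace-mechanism-style scale
    # Release a noisy output so that queries within the radius are hard to distinguish.
    return model_output + rng.laplace(0.0, noise_scale, size=model_output.shape)
```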