Publications:
4. PROPS: Progressively Private Self-alignment of Large Language Models
Noel Teku, F. Tian, Payel Bhattacharjee, Souradip Chakraborty, Amrit Singh Bedi, Ravi Tandon
We propose PROPS (PROgressively Private Self-alignment), a multi-stage privacy-preserving alignment framework in which models aligned privately in earlier stages serve as labelers that supplement the training data for subsequent stages of alignment.
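The staged labeling loop can be pictured roughly as follows; this is a minimal illustrative sketch with hypothetical names (private_align, label_with_model, stage_budgets), not the training procedure from the paper.

    from typing import Callable, List, Tuple

    # (prompt, response, preference label) triples used as alignment data
    Example = Tuple[str, str, int]

    def private_align(model: Callable[[str], str],
                      data: List[Example],
                      epsilon: float) -> Callable[[str], str]:
        # Stand-in for one stage of privacy-preserving alignment: a real
        # implementation would update the model on `data` under a per-stage
        # privacy budget `epsilon` (e.g., with DP-SGD). Here it is a no-op.
        return model

    def label_with_model(model: Callable[[str], str],
                         prompts: List[str]) -> List[Example]:
        # The model aligned in the previous stage acts as a labeler,
        # generating supplementary preference data for the next stage.
        return [(p, model(p), 1) for p in prompts]

    def props(base_model: Callable[[str], str],
              human_labeled: List[Example],
              unlabeled_prompts: List[str],
              stage_budgets: List[float]) -> Callable[[str], str]:
        model, data = base_model, list(human_labeled)
        for epsilon in stage_budgets:              # one alignment stage per budget
            model = private_align(model, data, epsilon)
            data = data + label_with_model(model, unlabeled_prompts)
        return model

    if __name__ == "__main__":
        toy_model = lambda prompt: "response to: " + prompt
        props(toy_model, [("q1", "a1", 1)], ["q2", "q3"], stage_budgets=[1.0, 1.0])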
3. A Framework for Multi-source Privacy Preserving Epidemic Analysis
Zihan Guan, Zhiyuan Zhao, F. Tian, Dung Nguyen, Payel Bhattacharjee, Ravi Tandon, B. Aditya Prakash, Anil Vullikanti
We develop a framework that combines deep learning and epidemic models to jointly forecast and learn epidemic dynamics from multiple datasets, and show that even DP-protected financial data improves forecasting.
2. Learning to Diagnose Privately: DP-Powered LLMs for Radiology Report Classification
Payel Bhattacharjee, F. Tian, Ravi Tandon, Joseph Lo, Heidi Hanson, Geoffrey Rubin, Nirav Merchant, John Gounley
This study proposes a framework for fine-tuning large language models (LLMs) with differential privacy (DP) to perform multi-abnormality classification on radiology report text. The framework seeks to mitigate the privacy risks associated with sensitive patient data and protect against data leakage while maintaining classification performance.
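As a rough illustration of the DP fine-tuning step, the sketch below trains a multi-label classification head with DP-SGD via the Opacus library; the model, toy data, and noise parameters are placeholder assumptions and do not reflect the paper's actual setup.

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset
    from opacus import PrivacyEngine

    NUM_FEATURES, NUM_LABELS = 768, 14   # e.g., report embeddings -> abnormality labels

    # Toy tensors standing in for encoded radiology reports and their labels
    X = torch.randn(256, NUM_FEATURES)
    y = torch.randint(0, 2, (256, NUM_LABELS)).float()
    loader = DataLoader(TensorDataset(X, y), batch_size=32)

    model = nn.Linear(NUM_FEATURES, NUM_LABELS)   # multi-label classification head
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    criterion = nn.BCEWithLogitsLoss()

    engine = PrivacyEngine()
    model, optimizer, loader = engine.make_private(
        module=model, optimizer=optimizer, data_loader=loader,
        noise_multiplier=1.0,   # Gaussian noise added to clipped per-sample gradients
        max_grad_norm=1.0,      # per-sample gradient clipping bound
    )

    for epoch in range(3):
        for xb, yb in loader:
            optimizer.zero_grad()
            loss = criterion(model(xb), yb)
            loss.backward()
            optimizer.step()

    print(f"epsilon spent: {engine.get_epsilon(delta=1e-5):.2f}")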
F. Tian and R. Tandon
57th Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, October 2024.
We introduce the concept of Inference Privacy (IP), a new framework designed to ensure privacy for a user's query/input data during inference.
The core idea behind IP is to obscure model outputs to the extent that adversaries are unable to discern the specific query input within a defined privacy radius.
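One plausible way to formalize such a guarantee, in the spirit of local DP restricted to inputs within a radius r (an assumption for illustration; the paper's precise definition may differ):

    % A randomized mechanism $M$ satisfies $(\epsilon, r)$-Inference Privacy if,
    % for all query inputs $x, x'$ with $d(x, x') \le r$ and all output sets $S$,
    \[
      \Pr[M(x) \in S] \;\le\; e^{\epsilon}\,\Pr[M(x') \in S].
    \]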