Support Tokens, Stability Margins, and a New Foundation for Robust LLMs. Deepak Agarwal, Dhyey Dharmendrakumar Mavani, Suyash Gupta, Karthik Sethuraman, and Tejas Dharamsi. [arxiv]
Time-Aligned Multi-Domain Long-History Embedding System with Learnable Slicing for Generative Recommenders. Suyash Gupta, Shihai He, David Byrne, Adrian Englhardt, Rajdeep Sarkar, Shijie Wu. US Patent 2027 (approved, pending grant).
Adaptive risk-based challenge system for platform-wide friction management using conformal uncertainty calibration. Jack Gindi*, Suyash Gupta*, Rohit Patra*. US Patent 2026 (approved, pending grant).
Efficient machine learning prediction system with adaptive switching between lightweight and heavy models (large language models) using conformal inference. Suyash Gupta, Rohit Patra, Xuexin Ren, Aman Gupta, Viral Gupta. US Patent 2026 (approved, pending grant).
Predictive inference in multi-environment scenarios. John Duchi*, Suyash Gupta*, Kuanhao Jiang*, Pragya Sur*. Statistical Science 2025. Shorter version accepted at NeurIPS Workshop on Statistical Frontiers in LLMs and Foundation Models 2024. [workshop] [journal]
Predictive inference with weak supervision. Maxime Cauchois*, Suyash Gupta*, Alnur Ali, John Duchi. Journal of Machine Learning Research 2024. Shorter version accepted at ICML Workshop on Distribution-free Uncertainty Quantification 2021. [workshop] [journal]
Robust Validation: Confident Predictions Even When Distributions Shift. Maxime Cauchois*, Suyash Gupta*, Alnur Ali, John Duchi. Extended version published in the Journal of the American Statistical Association 2024. Shorter version accepted at ICML Workshop on Distribution-free Uncertainty Quantification 2021. [workshop] [journal]
The s-value: evaluating stability with respect to distributional shifts. Suyash Gupta, Dominik Rothenhaeusler. NeurIPS 2023. [conference]
Knowing what you know: valid confidence sets in multiclass and multilabel prediction. Maxime Cauchois*, Suyash Gupta*, John Duchi. Journal of Machine Learning Research 2021. [journal]
PhD Thesis: Reliability and stability in statistical and machine learning problems, Stanford University, 2022.