Krishna Balasubramanian
Associate Professor
Department of Statistics
Graduate Group in Applied Mathematics
Graduate Program in Electrical and Computer Engineering
University of California, Davis
Office: MSB 4109
Email: kbala@ucdavis.edu || CV || Students
Deep Learning:
Provable In-context Learning for Mixture of Linear Regressions using Transformers (with Yanhao Jin and Lifeng Lai), 2024.
Transformers Handle Endogeneity in In-Context Linear Regression (with Haodong Liang and Lifeng Lai), 2024.
From Stability to Chaos: Analyzing Gradient Descent Dynamics in Quadratic Regression (with Xuxing Chen, Promit Ghosal and Bhavya Agrawalla), Transactions on Machine Learning Research (TMLR), 2024.
Gaussian random field approximation via Stein's method with applications to wide random neural networks (with Larry Goldstein, Nathan Ross and Adil Salim), Applied and Computational Harmonic Analysis, 2024.
Sampling:
Improved Finite-Particle Convergence Rates for Stein Variational Gradient Descent (with Sayan Banerjee and Promit Ghosal), 2024.
A Separation in Heavy-Tailed Sampling: Gaussian vs. Stable Oracles for Proximal Samplers (with Ye He, Alireza Mousavi-Hosseini and Murat A. Erdogdu), NeurIPS, 2024.
Regularized Stein Variational Gradient Flow (with Ye He, Bharath K. Sriperumbudur and Jianfeng Lu), Foundations of Computational Mathematics, 2024.
Towards Understanding the Dynamics of Gaussian-Stein Variational Gradient Descent (with Tianle Liu, Promit Ghosal and Natesh Pillai), NeurIPS, 2023.
Mean-square Analysis of Discretized Itô Diffusions for Heavy-tailed Sampling (with Ye He, Tyler Farghly and Murat A. Erdogdu), Journal of Machine Learning Research, 2024.
An Analysis of Transformed Unadjusted Langevin Algorithm for Heavy-tailed Sampling (with Ye He and Murat A. Erdogdu), IEEE Transactions on Information Theory, 2024.
Forward-backward Gaussian variational inference via JKO in the Bures-Wasserstein Space (with Michael Diao, Sinho Chewi and Adil Salim), ICML, 2023.
Towards a Complete Analysis of Langevin Monte Carlo: Beyond Poincaré Inequality (with Alireza Mousavi-Hosseini, Tyler Farghly, Ye He and Murat A. Erdogdu), COLT, 2023.
Improved Discretization Analysis for Underdamped Langevin Monte Carlo (with Matthew Zhang, Sinho Chewi, Mufan Bill Li, and Murat A. Erdogdu), COLT, 2023.
Towards a Theory of Non-Log-Concave Sampling: First-Order Stationarity Guarantees for Langevin Monte Carlo (with Sinho Chewi, Murat A. Erdogdu, Adil Salim and Matthew Zhang), COLT, 2022.
Stochastic Zeroth-order Discretizations of Langevin Diffusions for Bayesian Inference (with Abhishek Roy, Lingqing Shen and Saeed Ghadimi), Bernoulli, 2022.
On the Ergodicity, Bias and Asymptotic Normality of Randomized Midpoint Sampling Method (with Ye He and Murat A. Erdogdu), NeurIPS, 2020.
Stochastic Optimization:
High-dimensional scaling limits and fluctuations of online least-squares SGD with smooth covariance (with Promit Ghosal and Ye He), Annals of Applied Probability, 2024.
Stochastic Optimization Algorithms for Instrumental Variable Regression with Streaming Data (with Xuxing Chen, Abhishek Roy and Yifan Hu), NeurIPS, 2024.
Zeroth-order Riemannian Averaging Stochastic Approximation Algorithms (with Jiaxiang Li and Shiqian Ma), SIAM Journal on Optimization, 2024.
Online covariance estimation for stochastic gradient descent under Markovian sampling (with Abhishek Roy), Journal of the American Statistical Association (under major revision), 2024.
Stochastic Nested Compositional Bi-level Optimization for Robust Feature Learning (with Xuxing Chen and Saeed Ghadimi), Mathematical Programming (under minor revision), 2024.
Optimal algorithms for stochastic bilevel optimization under relaxed smoothness conditions (with Xuxing Chen and Tesi Xiao), Journal of Machine Learning Research, 2024.
Statistical Inference for Linear Functionals of Online SGD in High-dimensional Linear Regression (with Bhavya Agrawalla and Promit Ghosal), 2023.
Decentralized Stochastic Bilevel Optimization with Improved Per-Iteration Complexity (with Xuxing Chen, Minhui Huang and Shiqian Ma), ICML, 2023.
A One-Sample Decentralized Proximal Algorithm for Non-Convex Stochastic Composite Optimization (with Tesi Xiao, Xuxing Chen and Saeed Ghadimi), UAI, 2023.
Stochastic Zeroth-order Riemannian Derivative Estimation and Optimization (with Jiaxiang Li and Shiqian Ma), Mathematics of Operations Research, 2022.
Stochastic Zeroth-order Functional Constrained Optimization: Oracle Complexity and Applications (with Anthony Nguyen), INFORMS Journal on Optimization, 2022.
Constrained Stochastic Nonconvex Optimization with State-dependent Markov Data (with Abhishek Roy and Saeed Ghadimi), NeurIPS, 2022.
A Projection-free Algorithm for Constrained Stochastic Multi-level Composition Optimization (with Tesi Xiao and Saeed Ghadimi), NeurIPS, 2022.
Mirror Descent Strikes Again: Optimal Stochastic Convex Optimization under Infinite Noise Variance (with Nuri Mert Vural, Lu Yu, Stanislav Volgushev and Murat A. Erdogdu), COLT, 2022.
Stochastic Multilevel Composition Optimization Algorithms with Level-Independent Convergence Rates (with Saeed Ghadimi and Anthony Nguyen), SIAM Journal on Optimization, 2022.
Zeroth-order Nonconvex Stochastic Optimization: Handling Constraints, High-Dimensionality and Saddle-Points (with Saeed Ghadimi), Foundations of Computational Mathematics, 2022. (Short version accepted in NeurIPS, 2018).
Zeroth-Order Algorithms for Nonconvex Minimax Problems with Improved Complexities (with Shiqian Ma, Meisam Razaviyayn and Zhongruo Wang), Journal of Global Optimization, 2022.
Stochastic Zeroth-Order Optimization under Nonstationarity and Nonconvexity (with Abhishek Roy, Saeed Ghadimi and Prasant Mohapatra), Journal of Machine Learning Research, 2022.
Improved Complexities for Stochastic Conditional Gradient Methods under Interpolation-like Conditions (with Tesi Xiao and Saeed Ghadimi), Operations Research Letters, 2022.
High-Probability Bounds for Robust Stochastic Frank-Wolfe Algorithm (with Tongyi Tang and Thomas C.M. Lee), UAI, 2022.
Statistical Inference for Polyak-Ruppert Averaged Zeroth-order Stochastic Gradient Algorithm (with Yanhao Jin and Tesi Xiao), 2021.
An Analysis of Constant Step Size SGD in the Non-convex Regime: Asymptotic Normality and Bias (with Lu Yu, Stanislav Volgushev and Murat A. Erdogdu), NeurIPS, 2021.
Escaping Saddle-Points Faster under Interpolation-like Conditions (with Abhishek Roy, Saeed Ghadimi and Prasant Mohapatra), NeurIPS, 2020.
Normal Approximation for Stochastic Gradient Descent via Non-Asymptotic Rates of Martingale CLT, (with Andreas Anastasiou and Murat A. Erdogdu), COLT, 2019.
Online and Bandit Algorithms for Nonstationary Stochastic Saddle-Point Optimization (with Yifang Chen and Abhishek Roy), 2019.
Nonparametric, Geometric and Topological Statistics:
Statistical-Computational Trade-offs for Greedy Recursive Partitioning Estimators (with Yan Shuo Tan and Jason M. Klusowski), 2024.
Multivariate Gaussian Approximation for Random Forest via Region-based Stabilization (with Zhaoyang Shi, Chinmoy Bhattacharjee and Wolfgang Polonik), 2024.
Minimax Optimal Goodness-of-Fit Testing with Kernel Stein Discrepancy (with Omar Hagrass and Bharath Sriperumbudur), 2024.
Nonsmooth Nonparametric Regression via Fractional Laplacian Eigenmaps (with Zhaoyang Shi and Wolfgang Polonik), 2024.
Adaptive and non-adaptive minimax rates for weighted Laplacian-eigenmap based nonparametric regression (with Zhaoyang Shi and Wolfgang Polonik), International Conference on Artificial Intelligence and Statistics (AISTATS), 2024.
A Flexible Approach for Normal Approximation of Geometric and Topological Statistics (with Zhaoyang Shi and Wolfgang Polonik), Bernoulli, 2024.
Functional linear and single-index models: A unified approach via Gaussian Stein identity (with Hans-Georg Müller and Bharath Sriperumbudur), Bernoulli, 2024.
Topologically penalized regression on manifolds (with Olympio Hacquard, Gilles Blanchard, Clément Levrard and Wolfgang Polonik), Journal of Machine Learning Research, 2022.
Fractal Gaussian Networks: A sparse random graph model based on Gaussian Multiplicative Chaos (with Subhroshekar Ghosh and Xiaochuan Yang), IEEE Transactions on Information Theory, 2022. (Short version appeared in ICML, 2020).
Nonparametric Modeling of Higher-Order Interactions via Hypergraphons, Journal of Machine Learning Research, 2021.
On the Optimality of Kernel-Embedding Based Goodness-of-Fit Test, (with Tong Li and Ming Yuan), Journal of Machine Learning Research, 2021.
Ultrahigh Dimensional Feature Screening via RKHS Embeddings, (with Bharath Sriperumbudur and Guy Lebanon), International Conference on Artificial Intelligence and Statistics (AISTATS), 2013.
High-dimensional Statistics and Machine Learning:
Meta-Learning with Generalized Ridge Regression: High-dimensional Asymptotics, Optimality and Hyper-covariance Estimation (with Yanhao Jin and Debashis Paul), 2024.
On Empirical Risk Minimization with Dependent and Heavy-tailed Data (with Abhishek Roy and Murat A. Erdogdu), NeurIPS, 2021.
Tensor Methods for Additive Index Models under Discordance and Heterogeneity, (with Jianqing Fan and Zhuoran Yang), Under Revision, 2018.
Inhomogeneous Random Tensors and its Applications to Tensor PCA and Tensor Sparsification (with Subhroshekar Ghosh), 2020.
Estimating High-dimensional Non-Gaussian Multiple Index Models via Stein’s Lemma, (with Zhuoran Yang, Zhaoran Wang and Han Liu), NeurIPS 2017.
High-dimensional Non-Gaussian Single Index Models via Thresholded Score Function Estimation, (with Zhuoran Yang and Han Liu), ICML 2017.
Discussion: Estimating Structured High-Dimensional Covariance and Precision Matrices: Optimal Rates and Adaptive Estimation, (with Ming Yuan), Electronic Journal of Statistics, 2016.
Smooth Sparse Coding via Marginal Regression for Learning Sparse Representations, (with Kai Yu and Guy Lebanon), Artificial Intelligence, 2016. Short version: International Conference on Machine Learning (ICML), 2013. (Best Paper Runner-up Award).
High-dimensional Joint Sparsity Random Effects Model for Multi-task Learning, (with Kai Yu and Tong Zhang), Conference on Uncertainty in Artificial Intelligence (UAI), 2013.
A Landmark Selection Approach for Multi-output Prediction Problems, (with Guy Lebanon), International Conference on Machine Learning (ICML), 2012.
Unsupervised Supervised Learning II: Margin-Based Classification without Labels, (with Pinar Donmez and Guy Lebanon), Journal of Machine Learning Research, 12, pp. 3119-3145, 2011. Short version: International Conference on Artificial Intelligence and Statistics (AISTATS), 2011.
Unsupervised Supervised Learning I: Estimating Classification and Regression Errors without Labels, (with Pinar Donmez and Guy Lebanon), Journal of Machine Learning Research, 2010.
Asymptotic Analysis of Generative Semi-Supervised Learning, (with Joshua Dillon and Guy Lebanon), International Conference on Machine Learning (ICML), 2010.
Dimensionality Reduction for Text using Domain Knowledge, (with Yi Mao and Guy Lebanon), International Conference on Computational Linguistics (COLING), 2010.
A Fast Algorithm for Nonnegative Tensor Factorization using Block Coordinate Descent and an Active-set-type Method, (with Jingu Kim, Andrey Puretskiy, Michael Berry and Haesun Park), Text Mining Workshop: SIAM International Conference on Data Mining, 2010.