Computable Bounds for Strong Approximations with Applications [arxiv] [2025]
H. Ye and M. Austern
Inference on Optimal Policy Values and Other Irregular Functionals via Smoothing [arxiv] [2025]
J. Whitehouse, M. Austern and V. Syrgkanis
Poisson-Process Topic Model for Integrating Knowledge from Pre-trained Language Models [arxiv] [In revision at JASA] [2025]
M. Austern, Y. Guo, T. Liu and T. Ke (Alphabetical order)
Universality of High-Dimensional Logistic Regression and a Novel CGMT under Dependence with Applications to Data Augmentation [arxiv] [COLT] [2025]
K. Huang, M. Mallory, and M. Austern
Bounding Hellinger Distance with Stein's Method [arxiv] [In revision at Electronic Letters in Probability] [2024]
Gaussian universality for approximately polynomial functions of high-dimensional data [arxiv] [Submitted] [2024]
K. Huang, M. Austern and P. Orbanz
Random Geometric Graph Alignment with Graph Neural Networks [arxiv] [AISTATS] [2025]
S. Liu and M. Austern
Statistical Guarantees for Link Prediction using Graph Neural Networks [arxiv] [Submitted to JMLR] [2024]
A. Chung, A. Saberi and M. Austern
Poisson approximation for stochastic processes summed over amenable groups [arxiv] [Submitted] [2024]
H. Ye, P. Orbanz and M. Austern
Inference on Optimal Dynamic Policies via Softmax Approximation [arxiv] [In revision at JRSS-B] [2024]
Q. Chen, M. Austern and V. Syrgkanis
Wasserstein-p bounds for the central limit theorem under weak dependence [arxiv] [Submitted to Information and Inference] [2023]
Wasserstein-p Bounds in the Central Limit Theorem Under Local Dependence [arxiv] [EJP] [2023]
T. Liu and M. Austern
Efficient Concentration with Gaussian Approximation [arxiv] [In revision at PTRF] [2022]
Debiased Machine Learning without Sample-Splitting for Stable Estimators [arxiv] [NeurIPS] [2022]
Quantifying the effect of data augmentation [arxiv] [Under review at AOS] [2022]
Asymptotics of network embeddings learned via subsampling [arxiv] [JMLR] [2021]
A free central-limit theorem for dynamical systems [arxiv]
Asymptotics of Cross Validation [arxiv] [Annales de l'Institut Henri Poincaré]
Asymptotics of the Bootstrap method beyond asymptotically Gaussian Estimators [arxiv] [NeurIPS]
Limit theorems for invariant distributions [arxiv] [Annals of Statistics]
On the Gaussianity of Kolmogorov Complexity of Mixing Sequences [arxiv] [IEEE Transactions on Information Theory]
Compressibility and Generalization in Large-Scale Deep Learning [arxiv] [ICLR]
Empirical Risk Minimization and Stochastic Gradient Descent for Relational Data [arxiv] [AISTATS]