Enoch Hyunwook Kang
University of Washington
Research Expertise
Computer science methods for social science: AI agents, machine learning theory, reinforcement learning theory, adaptive experimentation theory, graph neural networks
Econometric methods for social science: dynamic structural models, causal machine learning
Services
Program Committee, Economics and Computation (2026)
Reviewer, Marketing Science
Reviewer, ICLR (2026, 2025, 2024, 2023)
Reviewer, ICML (2025, 2024, 2023)
Reviewer, NeurIPS (2025, 2024, 2023, 2022)
Reviewer, AISTATS (2023)
Bio
Enoch Hyunwook Kang is a researcher studying marketing at the University of Washington. Trained as both a computer scientist (with a prior Ph.D. in computer engineering) and an econometrician, he develops methods for improving marketing decision-making using causal ML, online experimentation, structural econometrics, reinforcement learning theory, and AI agents. His recent work covers a scalable machine learning method for dynamic discrete choice (invited tutorial at the Econometric Society Summer School 2025), self-improving AI as Bayesian optimization in language space, auto-debiasing of unstructured data using language model representations, and impossibility results for adaptive experimentation with delayed feedback. Taken together, his research develops new methods that shed light on the inflection points the marketing industry faces in the era of AI. He also hosts two shows: the bi-daily podcast "Best AI papers explained" and the weekly podcast "Pitching the AI Startup."
Education
Ph.D. candidate, Marketing, University of Washington
Ph.D., Computer Engineering, Texas A&M University
B.S., Mathematics, Korea Advanced Institute of Science & Technology
News
> November 22, 2025: New arXiv preprint "Bayesian Optimization in Language Space: An Eval-Efficient AI Self-Improvement Framework" [arXiv 2511.12063] is out! This paper proposes a framework for optimal self-improving AI system design via iterative prompt optimization, formulating the problem as Bayesian optimization in language space.
> November 21, 2025: Best AI papers explained (Apple Podcasts, Spotify) hit 500 subscribers with 150+ daily listeners!
> August 26, 2025: New arXiv preprint "Stability and generalization for Bellman residuals" [arXiv 2508.18741] is out! This paper proves the first O(1/n) statistical convergence guarantee for gradient-based methods for offline RL, IRL, and dynamic discrete choice models.
> July 15, 2025: My tutorial lecture with John Rust at the Econometric Society Summer School in Dynamic Structural Econometrics 2025 is now available! (YouTube Link)