Speaker: Xin Guo (UC Berkeley)
Date/Time: Wednesday, 11/5, 7pm CET (10am PST, 1pm EST)
Abstract: Transfer learning is a machine learning technique that leverages knowledge acquired in one domain to enhance performance on a related task. It plays a central role in the success of large language models (LLMs) such as GPT and BERT, whose pretraining enables broad generalization across downstream applications. In this talk, I will discuss how reinforcement learning (RL), and in particular continuous-time RL, can benefit from transfer learning principles. I will present convergence results formulated through stability analysis for stochastic control systems, using rough differential equation techniques. Finally, I will show how this analysis yields a natural corollary establishing robustness guarantees for a class of score-based generative diffusion models.
Based on joint work with Zijiu Lyu of UC Berkeley.
Bio: Xin Guo is the department chair of IEOR at UC Berkeley and the Coleman Fung Chair Professor in Financial Engineering. Her research interests include the theory of stochastic control and games and the theory of machine learning, including multi-agent reinforcement learning, generative models, and transfer learning, with applications to medical and financial data analysis. She received her B.S. degree in 1992 and M.S. degree in 1995 in mathematics from the University of Science and Technology of China, and her Ph.D. in 1999 in mathematics from Rutgers University. Prior to joining UC Berkeley, she was a Herman Goldstine postdoctoral fellow in the Mathematics Department at the IBM T.J. Watson Research Center and a tenured associate professor at Cornell University.