Speaker: Wenpin Tang (Columbia U)
Date/Time: Tuesday, 10/28, 7pm CET (11am PST, 2pm EST)
Abstract: The past decade has witnessed the success of generative modeling (e.g., GANs, VAEs, ...) in creating high-quality samples across a wide variety of data modalities. In the first part of this talk, I will briefly introduce the recently developed diffusion models from a continuous-time perspective. In the second part, I will discuss three different approaches to fine-tuning diffusion models: conditioning (classifier guidance), stochastic control, and reinforcement learning. Each of these approaches leads to a nice theory with applications. If time permits, I will also discuss the DPO (direct preference optimization) approach to fine-tuning both large language models and text-to-image diffusion models.
Bio: Wenpin Tang is an Assistant Professor in the Department of IEOR at Columbia University. He graduated from the Department of Statistics at UC Berkeley. His research interests lie at the intersection of probability, statistics, machine learning, and financial engineering.
Meeting Recording: https://ucsb.zoom.us/rec/share/7-JUgqMUoZTU2WlrcslQw4SPAWoLdx0BUbqXsU-JzoEGXdihxz6WAi-D8kcr2qUu.-3aL8V-VHGI74Bu0
Access Passcode: gz8*NV5J