Real-Time Continuous & Discrete Diffusion
We present a concise, hands-on tutorial on fast diffusion-based generation across continuous and discrete data, featuring live demos that attendees can readily adapt for their own research.
The first part is based on The Principles of Diffusion Models, which unifies diffusion through variational, score-based, and flow-based viewpoints, then turns to efficiency: ODE samplers (Euler/Heun-type), distillation of pretrained diffusion models into few-step generators (e.g., DMD), and flow-map alternatives including Consistency Models, Consistency Trajectory Models, and MeanFlow. Throughout, we emphasize first principles alongside practical training recipes and live demos.
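To make the ODE-sampler part concrete, here is a minimal sketch of Euler integration of the probability-flow ODE for a variance-exploding schedule sigma(t) = t. The toy analytic score (for one-dimensional Gaussian data) and all numerical choices (T, step count) are illustrative assumptions, not the tutorial's actual code:

```python
import numpy as np

def score(x, t, data_std=1.0):
    # Analytic score of N(0, data_std^2) data convolved with N(0, t^2) noise;
    # in practice this is replaced by a trained score network.
    return -x / (data_std**2 + t**2)

def euler_sampler(n_samples, n_steps=200, T=10.0, data_std=1.0, seed=0):
    """Euler steps on the probability-flow ODE dx/dt = -t * score(x, t),
    integrating from t = T down to t = 0."""
    rng = np.random.default_rng(seed)
    # Initialize from the (approximate) terminal marginal N(0, data_std^2 + T^2).
    x = rng.normal(0.0, np.sqrt(data_std**2 + T**2), size=n_samples)
    ts = np.linspace(T, 0.0, n_steps + 1)
    for t_cur, t_next in zip(ts[:-1], ts[1:]):
        dx_dt = -t_cur * score(x, t_cur, data_std)
        x = x + (t_next - t_cur) * dx_dt  # (t_next - t_cur) is negative
    return x

samples = euler_sampler(5000)
print(samples.std())  # close to data_std = 1.0
```

Heun-type samplers refine each step with a second derivative evaluation at t_next, trading one extra network call per step for second-order accuracy.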
The second part focuses on discrete diffusion. We introduce its core theoretical foundations, with emphasis on Diffusion Duality, which shows how discrete diffusion processes can emerge from Gaussian diffusion and provides a principled way to design discrete analogues of continuous-space methods. Building on this framework, we present Discrete Consistency Distillation for few-step generation in discrete diffusion models, and walk through its training and practical implementation. We conclude by exploring two families of samplers: those enabling few-step generation and those supporting inference-time scaling.
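As background for the discrete-diffusion part, here is a minimal sketch of the absorbing-state ("masked") forward process that many discrete diffusion models build on: each token is independently replaced by a mask token with a probability set by the noise level. The MASK id, vocabulary size, and linear schedule are illustrative assumptions:

```python
import numpy as np

MASK = -1  # placeholder id for the absorbing [MASK] token (illustrative)

def mask_forward(x0, t, rng):
    """Absorbing-state forward process: each token of x0 is independently
    replaced by MASK with probability t in [0, 1]; at t = 1 all tokens are masked."""
    corrupt = rng.random(x0.shape) < t
    return np.where(corrupt, MASK, x0)

rng = np.random.default_rng(0)
x0 = rng.integers(0, 100, size=10_000)  # toy token sequence over a 100-token vocab
xt = mask_forward(x0, t=0.3, rng=rng)
frac_masked = (xt == MASK).mean()       # close to t = 0.3
```

Reverse sampling then amounts to iteratively unmasking tokens with a learned denoiser; few-step samplers and distilled models commit to many tokens per step instead of one.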
The tutorial is intended for participants familiar with neural networks and PyTorch, with some background in classic generative modeling concepts.
OpenAI*
Adobe
Sony AI
Stanford University