Session times: 8:05 AM, 8:35 AM, 8:55 AM, 9:35 AM
Instructor: Chieh-Hsin (Jesse) Lai
This section develops the theoretical and empirical foundations of diffusion and flow-map models for fast sampling on continuous data.
Train a continuous diffusion model
Distill pretrained diffusion models into fast samplers, including DMD (Distribution Matching Distillation)
Train flow-map models, including Consistency Models, Consistency Trajectory Models, and MeanFlow
Familiarity with classic generative models (VAEs, energy-based models, and normalizing flows) and PyTorch.
Continuous Diffusion Overview from Three Origins
Variational-based: VAE ➞ DDPM
Score-based: Energy-based model ➞ Score SDE
Flow-based: Normalizing flows ➞ Flow matching & Rectified flow
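The variational origin above (VAE ➞ DDPM) can be sketched as one DDPM-style training step. This is a minimal sketch with an illustrative noise schedule and a hypothetical toy `denoiser` network, not a production recipe:

```python
import torch
import torch.nn as nn

# Illustrative linear noise schedule over T discrete steps (values not tuned).
T = 1000
betas = torch.linspace(1e-4, 2e-2, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

# Hypothetical toy denoiser: predicts the noise added to x_t (input: x_t and t).
denoiser = nn.Sequential(nn.Linear(2 + 1, 64), nn.SiLU(), nn.Linear(64, 2))

def ddpm_loss(x0: torch.Tensor) -> torch.Tensor:
    """One DDPM training step: noise x0 to a random level t, predict the noise."""
    t = torch.randint(0, T, (x0.shape[0],))
    a = alphas_bar[t].unsqueeze(-1)                  # alpha-bar_t per sample
    eps = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * eps     # forward (noising) process
    t_emb = (t.float() / T).unsqueeze(-1)            # crude time conditioning
    eps_hat = denoiser(torch.cat([x_t, t_emb], dim=-1))
    return ((eps_hat - eps) ** 2).mean()             # simple epsilon-MSE loss

loss = ddpm_loss(torch.randn(8, 2))
```

The same noising path underlies the score-based and flow-based views; what changes is the regression target (score vs. velocity field).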
Distribution-based Distillation from Pretrained Diffusion Models into Fast Generators
Live Demo 1: A Quick Play with Distribution-Based Distillation
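The distribution-matching idea behind DMD can be sketched on a toy 1-D problem: the generator update direction is the difference between the fake and real score functions evaluated at generated samples. The `score_real` / `score_fake` stand-ins below are analytic Gaussian scores, assumed for illustration (not the official DMD implementation):

```python
import torch

def dmd_generator_grad(x_g, score_real, score_fake):
    """DMD-style update direction at generated samples x_g.
    The gradient of KL(fake || real) w.r.t. x_g is proportional to
    s_fake(x_g) - s_real(x_g); descending it matches the two distributions."""
    return score_fake(x_g) - score_real(x_g)

# Toy 1-D Gaussians: real ~ N(0,1), fake ~ N(2,1); scores are analytic.
score_real = lambda x: -x             # grad log N(0,1)
score_fake = lambda x: -(x - 2.0)     # grad log N(2,1)

x_g = torch.full((4, 1), 2.0)         # samples sitting at the fake mode
g = dmd_generator_grad(x_g, score_real, score_fake)
x_new = x_g - 0.5 * g                 # one descent step moves toward the real mode at 0
```

In practice both scores come from diffusion models (a frozen teacher and an online fake-score network), and the gradient is backpropagated through the one-step generator.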
Diffusion-Motivated Flow-Map Models as Fast Generators
Efficient Flow-Map Training with Consistency Mid-Training
Live Demo 2: Training a Flow Map Model
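The self-consistency objective shared by the flow-map models above can be sketched as matching the network's output at two adjacent noise levels along the same noising path, with a stop-gradient target. The network `f`, its time conditioning, and the noising path below are illustrative assumptions (real consistency models also enforce the boundary condition f(x, 0) = x via parameterization):

```python
import torch
import torch.nn as nn

# Hypothetical toy consistency function f(x, t): maps a noisy x at time t
# toward the clean endpoint of its trajectory.
f = nn.Sequential(nn.Linear(2 + 1, 64), nn.SiLU(), nn.Linear(64, 2))

def consistency_loss(x0: torch.Tensor) -> torch.Tensor:
    """Match f at two adjacent noise levels on the same path,
    using a stop-gradient target (EMA teachers are also common)."""
    t = torch.rand(x0.shape[0], 1) * 0.9 + 0.05     # t in (0.05, 0.95)
    dt = 0.05
    eps = torch.randn_like(x0)
    x_t = x0 + t * eps                              # VE-style noising path
    x_s = x0 + (t - dt) * eps                       # same path, less noise
    out_t = f(torch.cat([x_t, t], dim=-1))
    with torch.no_grad():                           # stop-grad target
        out_s = f(torch.cat([x_s, t - dt], dim=-1))
    return ((out_t - out_s) ** 2).mean()

loss = consistency_loss(torch.randn(8, 2))
```

Consistency Trajectory Models and MeanFlow generalize this by learning maps between arbitrary pairs of times rather than only to t = 0.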
Audio-Visual Generation & Protection of Creation (Instructor: Yuki Mitsufuji)
This part briefly covers AI content creation and protection, from diffusion memorization and attribution to audio-visual models such as MMAudio, highlighting both creativity and the need for safeguards.
The Principles of Diffusion Models by Chieh-Hsin Lai, Yang Song, Dongjun Kim, Yuki Mitsufuji, Stefano Ermon.
Session times: 9:50 AM, 10:10 AM, 10:40 AM, 11:10 AM, 11:30 AM
Instructor: Subham Sekhar Sahoo
This section builds the theoretical and empirical foundations of diffusion when the data lives in discrete space.
Train a discrete diffusion model.
Perform self-distillation for faster generation.
Implement samplers for:
Faster generation
Inference-time scaling
Familiarity with neural networks and PyTorch.
Discrete Diffusion Overview
Uniform-state diffusion: Forward / reverse process.
Training + Implementation.
Samplers.
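The uniform-state forward process above can be sketched in a few lines: at noise level t, each token is kept with probability alpha_t and otherwise resampled uniformly from the vocabulary. The function name and parameterization are illustrative assumptions:

```python
import torch

def uniform_forward(x0: torch.Tensor, alpha_t: float, K: int) -> torch.Tensor:
    """Uniform-state forward process: keep each token with prob alpha_t,
    otherwise resample it uniformly from the K-symbol vocabulary."""
    keep = torch.rand(x0.shape) < alpha_t
    noise = torch.randint(0, K, x0.shape)
    return torch.where(keep, x0, noise)

x0 = torch.randint(0, 16, (4, 32))                # toy token sequences, K = 16
x_noisy = uniform_forward(x0, alpha_t=0.0, K=16)  # alpha_t = 0: pure uniform noise
x_keep = uniform_forward(x0, alpha_t=1.0, K=16)   # alpha_t = 1: data unchanged
```

Training then regresses a denoising network to invert this corruption, analogous to the continuous case.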
Live Demo 1: Image generation
Training a discrete diffusion model for image generation.
Implement samplers:
Ancestral sampler.
Psi-sampler [2] for inference-time scaling.
Self-Distillation to accelerate sampling
The discrete-space and continuous-space diffusion duality [1].
Use the duality to design Discrete Consistency Distillation [1].
Implementation.
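The duality in [1] connects Gaussian diffusion on one-hot vectors to uniform-state discrete diffusion via an argmax. The sketch below illustrates only the argmax mapping with a simple additive-noise corruption; the paper's exact noise-level correspondence is not reproduced here:

```python
import torch

def gaussian_to_discrete(x0_onehot: torch.Tensor, t: float) -> torch.Tensor:
    """Duality sketch: corrupt one-hot vectors with Gaussian noise, then argmax.
    The argmax of the Gaussian latent is distributed as a uniform-state
    discrete diffusion at a matched noise level (illustrative corruption here)."""
    eps = torch.randn_like(x0_onehot)
    y_t = x0_onehot + t * eps        # illustrative Gaussian corruption
    return y_t.argmax(dim=-1)        # collapse back to discrete tokens

K = 8
x0 = torch.randint(0, K, (4, 16))
x0_onehot = torch.nn.functional.one_hot(x0, K).float()
tokens_clean = gaussian_to_discrete(x0_onehot, t=0.0)   # no noise: recovers x0
tokens_noisy = gaussian_to_discrete(x0_onehot, t=10.0)  # heavy noise: near-uniform
```

This bridge is what lets continuous-space tools such as consistency distillation transfer to the discrete setting.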
Live Demo 2: Discrete Consistency Distillation
Distill a pre-trained discrete diffusion model for faster inference via Discrete Consistency Distillation.
Implement Greedy-Tail sampler [1] for few-step generation.
[1] Subham Sekhar Sahoo, Justin Deschenaux, Aaron Gokaslan, Guanghan Wang, Justin Chiu, Volodymyr Kuleshov, "The Diffusion Duality," ICML 2025.
[2] Justin Deschenaux, Caglar Gulcehre, Subham Sekhar Sahoo, "The Diffusion Duality, Chapter II: Psi-Samplers and Efficient Curriculum," ICLR 2026.