Invited Speakers

MIT

Song Han is an associate professor at MIT EECS. He received his PhD from Stanford University. He proposed the "Deep Compression" technique, including pruning and quantization, that is widely used for efficient AI computing, and the "Efficient Inference Engine" that first brought weight sparsity to modern AI chips, which is a top-5 cited paper in 50 years of ISCA. He pioneered TinyML research that brings deep learning to IoT devices, enabling learning on the edge (featured on the MIT home page). His team's work on hardware-aware neural architecture search (the once-for-all network) enables users to design, optimize, shrink, and deploy AI models on resource-constrained hardware devices, winning first place in many low-power computer vision contests at flagship AI conferences.

Webpage: https://songhan.mit.edu/

Google

Tim Salimans is a generative modeling researcher at Google DeepMind Amsterdam. His contributions include fundamental work on VAEs, GANs, and generative model evaluation using the Inception Score. Most recently, his focus has been on diffusion models, which he has used to build foundation models for images and video, and for which he has proposed efficient inference techniques such as progressive distillation and classifier-free guidance.

Webpage: https://research.google/people/tim-salimans/

Tsinghua University

Jun Zhu is a Bosch AI Professor in the Department of Computer Science and Technology at Tsinghua University. He is an IEEE Fellow and AAAI Fellow. He was an Adjunct Faculty member in the Machine Learning Department at Carnegie Mellon University (CMU). He received his Ph.D. in Computer Science from Tsinghua and did postdoctoral research at CMU. His research interest lies in probabilistic machine learning. He is an Associate Editor-in-Chief of IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) and has served as a senior area chair for ICML, NeurIPS, and ICLR. He has received various awards, including an ICLR Outstanding Paper Award, an IEEE CoG Best Paper Award, the XPlorer Prize, and the IEEE Intelligent Systems "AI's 10 to Watch" Award.

Webpage: https://ml.cs.tsinghua.edu.cn/~jun

Meta

Christoph Feichtenhofer is a Research Scientist Manager at Meta AI (FAIR). He received his BSc, MSc, and PhD degrees (all with distinction) in computer science from TU Graz in 2011, 2013, and 2017, respectively, and spent time as a visiting researcher at York University, Toronto, as well as the University of Oxford. He is a recipient of the PAMI Young Researcher Award, the DOC Fellowship of the Austrian Academy of Sciences, and the Award of Excellence for outstanding doctoral theses in Austria. His main area of research is the development of effective representations for image and video understanding.

Webpage: http://feichtenhofer.github.io/

Snap

Sergey Tulyakov is a Principal Research Scientist heading the Creative Vision team at Snap Research. His work focuses on creating methods for manipulating the world via computer vision and machine learning. This includes human and object understanding in 2D and 3D, photorealistic manipulation and animation, video synthesis, prediction, and retargeting. Sergey co-invented the unsupervised image animation domain with MonkeyNet and the First Order Motion Model, which sparked a number of startups in the area. His work on Interactive Video Stylization received the Best in Show Award at SIGGRAPH Real-Time Live! 2020.

Webpage: http://www.stulyakov.com/

Stability AI

Robin Rombach is a research scientist at Stability AI. After studying physics at the University of Heidelberg from 2013 to 2020, he began a PhD in computer science in the Computer Vision group at Heidelberg in 2020 under the supervision of Björn Ommer and moved with the research group to LMU Munich in 2021. His research focuses on generative deep learning models, in particular text-to-image systems. During his PhD, Robin was instrumental in the development and publication of several now widely used projects, such as VQGAN (Taming Transformers) and Latent Diffusion Models. In collaboration with Stability AI, Robin scaled the latent diffusion approach and published a series of models now known as Stable Diffusion, which have been widely adopted by the community. His subsequent works at Stability AI include SDXL, Stable Video Diffusion, Adversarial Diffusion Distillation, and Stable Diffusion 3. Robin is a proponent of open-source ML models.