Second Workshop on PRETRAINING

January 7, 2024 

at WACV 2024 

About the Workshop

The power of self-supervised learning on datasets and models at massive scales continues to drive performance in key applications and scenarios. From diffusion to contrastive learning, and from domain adaptation to dense representations, pretrained models show great promise in outperforming previous methods. However, key challenges remain in extending, enhancing, and democratizing these capabilities. In this workshop we welcome diverse and critical perspectives along the entire spectrum of pretraining, encompassing the creation and application of foundation models, efficiency enhancements that reduce compute and data needs, courageous steps to scale models and datasets both up and down, and even negative results providing evidence where pretraining did not appear to benefit a given application. We also welcome work exploring auxiliary topics such as reinforcement learning from human feedback, model distillation and compilation, scaling laws and their inverse, multimodal modeling, bias detection and mitigation, and more.

KEYNOTE SPEAKERS

Björn Ommer

Head, Computer Vision and Learning Group

LMU Munich

Louis-Philippe Morency

Lead, Multimodal Communication and Machine Learning Laboratory

Carnegie Mellon University

Svitlana Volkova

Chief Scientist

Aptima 

VENUE

Waikoloa Beach Marriott Resort & Spa