How Do Vision Models Work (HOW) is a follow-up to last year’s Mechanistic Interpretability for Vision (MIV) workshop, expanding its focus from mechanistic analysis to a broader scientific study of how vision models learn, represent, and reason. As large-scale training has made learned representations increasingly opaque, developing a systematic understanding of these models has become a central scientific challenge. This workshop brings together researchers to investigate the principles and mechanisms underlying the behavior of vision models through observation, experimentation, and theory.