Sparsity in Neural Networks
Advancing Understanding and Practice
Recordings are now available: Day 1 and Day 2.
Slides are available on the schedule page.
July 8-9, 2021, 2pm-7pm GMT, virtual
Call for Papers/Abstracts
A neural network is sparse when a portion of its parameters has been fixed to 0. Neural network sparsity is:
- A compelling practical opportunity to reduce the cost of training and inference (through applied work on algorithms, systems, and hardware);
- An important topic for understanding how neural networks train with/without overparameterization and the representations they learn (through theoretical and scientific work).
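To make the definition above concrete, here is a minimal magnitude-pruning sketch in Python/NumPy. It is an illustration only, not a method from the workshop; the magnitude_prune helper and the 75% sparsity level are assumptions chosen for the example.

    import numpy as np

    def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
        """Fix the smallest-magnitude fraction of weights to 0.

        This is one common way to sparsify a layer: keep the largest
        |w| and zero out the rest. `sparsity` is the fraction removed.
        (Hypothetical helper for illustration.)
        """
        k = int(sparsity * weights.size)  # number of weights to remove
        if k == 0:
            return weights.copy()
        # k-th smallest magnitude becomes the pruning threshold
        threshold = np.sort(np.abs(weights), axis=None)[k - 1]
        mask = np.abs(weights) > threshold  # keep only the largest |w|
        return weights * mask

    rng = np.random.default_rng(0)
    w = rng.normal(size=(4, 4))
    w_sparse = magnitude_prune(w, sparsity=0.75)
    print(f"{np.mean(w_sparse == 0):.0%} of parameters are now 0")  # ~75%

Because the zeroed parameters need not be stored or multiplied, sketches like this one are the starting point for the algorithmic, systems, and hardware work the workshop covers.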
Research interest in sparsity in deep learning has exploded in recent years in both academia and industry, and we believe the community is now large and diverse enough to come together to discuss shared research priorities and cross-cutting issues. Currently, the communities working on aspects of sparsity and related problems are disparate, often presenting at separate venues to separate audiences.
This workshop aims to bring together researchers working on problems related to the practical, theoretical, and scientific aspects of neural network sparsity, and members of adjacent communities, in order to build connections across different areas, create opportunities for new collaborations, and articulate shared challenges.
We believe the time is right to bring these stakeholders together, and we intend this workshop to serve as the venue for doing so. We aspire to build a lasting, interdisciplinary research community among those who share an interest in neural network sparsity.
Our Speakers
- Anna Golubeva (IAIFI, Boston)
- Diana Marculescu (UT Austin)
- Cliff Young
- Gintare Karolina Dziugaite (ServiceNow)
- Sara Hooker (Google/MILA)
- Selima Curci (smartQare)
- Paulius Micikevicius (NVIDIA)
- Rosanne Liu (Google/ML Collective)
- Torsten Hoefler (ETH Zürich)
- Friedemann Zenke (FMI, Basel)
- Natalia Vassilieva (Cerebras Systems)
- Mitchell Wortsman (University of Washington)