Sparsity in Neural Networks

Advancing Understanding and Practice

July 13, 2022

Virtual + ICML meetup

A neural network is sparse when a portion of its parameters or activations has been fixed to 0 (a brief illustrative sketch follows the list below). Neural network sparsity is:

  1. A compelling practical opportunity to reduce the cost of training and inference (through applied work on algorithms, systems, and hardware);

  2. An important topic for understanding how neural networks train with/without overparameterization and the representations they learn (through theoretical and scientific work).
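For concreteness, here is a minimal sketch of parameter sparsity, assuming PyTorch: a dense linear layer is sparsified by magnitude pruning, fixing the smallest-magnitude 90% of its weights to zero. The layer shape and sparsity level are illustrative choices only, not part of the workshop scope.

    import torch
    import torch.nn as nn

    # Illustrative dense layer; shape and sparsity level are arbitrary choices.
    layer = nn.Linear(256, 128)
    sparsity = 0.9  # fraction of weights to fix to zero

    with torch.no_grad():
        w = layer.weight
        k = int(sparsity * w.numel())
        # Zero every weight whose magnitude falls at or below the k-th smallest.
        threshold = w.abs().flatten().kthvalue(k).values
        w.mul_((w.abs() > threshold).float())

    # Most parameters are now exactly zero.
    print(f"fraction of zero weights: {(layer.weight == 0).float().mean().item():.2f}")

In practice, the zeroed weights are typically tracked with a binary mask so they stay fixed at zero during any further training.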

Research interest in sparsity in deep learning has exploded in recent years in both academia and industry, and we believe the community is now large and diverse enough to come together to discuss shared research priorities and cross-cutting issues. Currently, the communities working on different aspects of sparsity and related problems are disparate, often presenting at separate venues to separate audiences.

This second edition of the workshop aims to bring together researchers working on problems related to the practical, theoretical, and scientific aspects of neural network sparsity, and members of adjacent communities, in order to build connections across different areas, create opportunities for new collaborations, and articulate shared challenges. We aspire to continue building a lasting, interdisciplinary research community among those who share an interest in neural network sparsity.