Call for Papers

Important Dates

Submissions Open: May 10, 2022

Submission Deadline: June 13, 2022, AoE (Anywhere on Earth) (extended from June 10)

Author Notification: July 2, 2022 (extended from June 30)

Workshop Date: July 13, 2022

Submission Instructions

Submission and review for the workshop will be handled through OpenReview.

Link: https://openreview.net/group?id=Sparsity_in_Neural_Networks/2022/Workshop/SNN


Eligible Work

We aim to showcase:

  • The latest innovations at all stages of the research process, from work-in-progress to recently published papers

    • We define “recent” as presented within one year of the workshop; i.e., the manuscript first became publicly available (on arXiv or elsewhere) no earlier than July 13, 2021.

  • Position or survey papers on any topic relevant to this workshop (see above)


Concretely, we ask members of the community to submit an abstract (250 words or fewer) describing the work, along with one or more of the following accompanying materials that describe the work in further detail. Higher-quality accompanying materials improve the likelihood of acceptance and of the work being spotlighted with an oral presentation.

  • A poster (in PDF form) presenting results of work-in-progress.

  • A link to a blog post (e.g., distill.pub, Medium) describing results.

  • A workshop paper of approximately four pages in length presenting results of work-in-progress. Papers should be submitted using the NeurIPS 2022 format.

  • A position paper with no page limit.

  • A published paper in the form in which it was published. We will only consider papers published in the year prior to this workshop.


This workshop is non-archival, and it will not have proceedings. We permit under-review or concurrent submissions. Submissions will receive one of three possible decisions:

  • Accept (Spotlight Presentation). The authors will be invited to present the work in a talk during the main workshop session, with live Q&A.

  • Accept (Poster Presentation). The authors will be invited to present their work as a poster during the workshop’s interactive poster sessions.

  • Reject. The submission will not be presented at the workshop.

Topics of Interest

  • Algorithms for Sparsity

    • Pruning, both for post-training inference and during training

    • Algorithms for fully sparse training (fixed or dynamic), including biologically inspired algorithms

    • Algorithms for ephemeral (activation) sparsity

    • Sparsely activated expert models

    • Scaling laws for sparsity

    • Sparsity in deep reinforcement learning

  • Systems for Sparsity

    • Libraries, kernels, and compilers for accelerating sparse computation

    • Hardware with support for sparse computation

  • Theory and Science of Sparsity

    • When overparameterization is (or is not) necessary

    • Optimization behavior of sparse networks

    • Representation ability of sparse networks

    • Sparsity and generalization

    • The stability of sparse models

    • Forgetting owing to sparsity, including fairness, privacy, and bias concerns

    • Connecting neural network sparsity with traditional sparse dictionary modeling

  • Applications for Sparsity

    • Resource-efficient learning at the edge or in the cloud

    • Data-efficient learning for sparse models

    • Communication-efficient distributed or federated learning with sparse models

    • Graph and network science applications

Reviewing Criteria

Our goal is to build a broad community around questions related to neural network sparsity. As such, we aim to accept all submissions that (1) are relevant to the topic area of the workshop, (2) are technically well-substantiated, and (3) present non-trivial or previously unknown results.


Reviewing will be conducted in a single-blind fashion.