Accepted Papers
Spotlights
NISPA: Neuro-Inspired Stability-Plasticity Adaptation for Continual Learning in Sparse Networks [Poster]
Universality of Winning Tickets: A Renormalization Group Perspective [Poster]
Lottery Tickets on a Data Diet: Finding Initializations with Sparse Trainable Networks [Poster] [Code]
Multimodal Contrastive Learning with LIMoE: the Language-Image Mixture of Experts [Poster] [Blog]
FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness [Poster] [Code]
DRAGONN: Distributed Randomized Approximate Gradients of Neural Networks [Poster]
Honorable Mentions
Coarsening the Granularity: Towards Structurally Sparse Lottery Tickets [Poster]
Lottery Tickets in Evolutionary Optimization: On Sparse Backpropagation-Free Trainability [Poster]
The Unreasonable Effectiveness of Random Pruning: Return of the most naive baseline for sparse training [Poster] [Code]
The State of Sparse Training in Deep Reinforcement Learning [Poster] [Code]
Posters
The Price of Sparsity: Generalization and Memorization in Sparse Neural Networks [Poster] [Code]
Training Your Sparse Neural Network Better with Any Mask [Poster]
Training Thinner and Deeper Neural Networks: Jumpstart Regularization [Poster]
Bit-wise Training of Neural Network Weights [Poster]
LidarCSNet: A Deep Convolutional Compressive Sensing Reconstruction Framework for 3D Airborne Lidar Point Cloud [Poster] [Blog]
Adversarial robustness of sparse local Lipschitz predictors [Poster]
From Hardness to Efficiency in Sparse Deep Network Training [Poster]
Efficient Processing of Sparse and Compact DNN Models on Hardware Accelerators [Poster] [tweet]
On the Presence of Winning Tickets in Model-Free Reinforcement Learning [Poster]
A Brain-inspired Algorithm for Training Highly Sparse Neural Networks [Poster] [Code]
Avoiding Forgetting and Allowing Forward Transfer in Continual Learning via Sparse Networks [Poster]
Neural Implicit Dictionary Learning via Mixture-of-Expert Training [Poster] [Code]
Training Recipe for N:M Structured Sparsity with Decaying Pruning Mask [Poster]
Reverse-Engineering Sparse ReLU Networks [Poster]
EGRU: Event-based GRU for activity-sparse inference and learning [Poster]
Covid-19 Segmentation of the Lungs using a Sparse AE-CNN [Poster]
Superposing Many Tickets into One: A Performance Booster for Sparse Neural Network Training [Poster]
Efficient identification of sparse neural networks with butterfly structure [Poster]
Avoiding Catastrophe: Active Dendrites and Sparse Representations Enable Dynamic Multi-Task Learning [Poster] [Video]
FasterAI: A Lightweight Library for Creating Sparse Neural Networks [Poster] [Code] [Documentation]
On the Emergence of Sparse Activation in Trained Transformer Models [Poster]
Look-ups are not (yet) all you need for deep learning inference [Poster]
Robust Training under Label Noise by Over-parameterization [Poster]
STen: An Interface for Efficient Sparsity in PyTorch [Poster] [Code]
The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks [Poster]
Super Seeds: extreme model compression by trading off storage with computation [Poster]
Recall Distortion in Neural Network Pruning and the Undecayed Pruning Algorithm [Poster]
BERT Pruning: Is Magnitude All You Need? [Poster]
S4: A High-Sparsity, High-Performance AI Accelerator [Poster]
Think Fast: Time Control in Varying Paradigms of Spiking Neural Networks [Poster]
L0onie: Compressing COINs with L0-constraints [Poster] [Code]
Structural Learning in Artificial Neural Networks: A Neural Operator Perspective [Poster]
Zeroth-Order Topological Insights into Iterative Magnitude Pruning [Poster]
Analyzing the Confidentiality of Undistillable Teachers in Knowledge Distillation [Poster]
Towards Low-Latency Energy-Efficient Deep SNNs via Attention-Guided Compression [Poster]
The State of Unstructured Sparsity for Vision Transformers [Poster]
Towards Implementing Truly Sparse Connections in Deep RL Agents [Poster]
On the Robustness and Anomaly Detection of Sparse Neural Networks [Poster]
Experimental implementation of a neural network optical channel equalizer in restricted hardware using pruning and quantization [Poster]
Pruning Complex-Valued Neural Networks for Optical Channel Non-linear Impairments Mitigation [Poster]
Low Rank Pruning via Output Perturbation [Poster]
CrAM: A Compression-Aware Minimizer [Poster]
Accelerating Sparse Training via Variance Reduction [Poster]
Sparse Probabilistic Circuits via Pruning and Growing [Poster]
Weight-space ensembling of functionally diverse minima: Where does it all go wrong? [Poster]
Finding Structured Winning Tickets with Early Kernel Pruning [Poster]
Studying the impact of magnitude pruning on contrastive learning methods [Poster]