Spectral Tensor Train Parameterization of Deep Learning Layers [Poster]
On the Transferability of Winning Tickets in Non-Natural Image Datasets [Poster]
Doping: A technique for extreme compression of LSTM models using sparse structured additive matrices [Poster] [Website]
Chasing Sparsity in Vision Transformers: An End-to-End Exploration [Poster] [Code]
Data-Efficient GAN Training Beyond (Just) Augmentations: A Lottery Ticket Perspective [Poster] [Code]
Are wider nets better given the same number of parameters? [Poster]
SparseDNN: Fast Sparse Deep Learning Inference on CPUs [Poster]
Sparse Training via Boosting Pruning Plasticity with Neuroregeneration [Poster] [Code]
Lottery Tickets in Linear Models: An Analysis of Iterative Magnitude Pruning [Poster]
Meta-learning sparse implicit neural representations [Poster]
Rate-Distortion Theoretic Model Compression: Successive Refinement for Pruning [Poster]
Towards Accurate Quantization and Pruning via Data-free Knowledge Transfer [Poster]
The self-sparsification behavior of gradient descent for training two-layer neural networks [Poster]
Extreme sparsity gives rise to functional specialization [Poster]
Powerpropagation: A sparsity inducing weight reparameterisation [Poster]
Multiplying Matrices Without Multiplying
Sparse PointPillars: Exploiting Sparsity in Birds-Eye-View Object Detection [Poster] [Code]
Towards Understanding Iterative Magnitude Pruning: Why Lottery Tickets Win [Poster]
Sifting out the features by pruning: Are convolutional networks the winning lottery ticket of fully connected ones? [Poster]
Efficient Proximal Mapping of the 1-path-norm of Shallow Networks [Poster]
Uncertainty Quantification for Sparse Deep Learning [Poster]
Non-Convex TL1 Regularization for Learning Sparse Neural Networks [Poster]
Lottery Ticket Hypothesis in Random Features Models [Poster]
Simon Says: Evaluating and Mitigating Bias in Pruned Neural Networks with Knowledge Distillation [Poster]
On Lottery Tickets and Minimal Task Representations in Deep Reinforcement Learning [Poster]
SpaceNet: Make Free Space For Continual Learning [Poster] [Code]
Dynamic Sparse Training for Deep Reinforcement Learning [Poster] [Code]
AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks [Poster]
Quick and Robust Feature Selection: the Strength of Energy-efficient Sparse Training for Autoencoders [Poster] [Code]
One-Cycle Pruning: Pruning ConvNets Under a Tight Training Budget [Blog] [Code]
Channel Permutations for N:M Sparsity [Poster]
Understanding the effect of sparsity on neural networks' robustness [Poster]
The Role of Permutation Invariance in Linear Mode Connectivity of Neural Networks [Poster]
"How can we be so slow?" Realizing the performance benefits of Sparse networks [Poster]
Structured Sparsity in Deep Neural Networks using Attention based Variance Regularization [Poster]
A Generalized Lottery Ticket Hypothesis [Poster]
Robustness of sparse MLPs for supervised feature selection [Poster]
Pruning Convolutional Filters using Batch Bridgeout [Poster]
Sparse embeddings for reduced communication costs in federated learning of language models [Poster]
Finding Everything within Random Binary Networks [Poster]
GreedyPrune: layer-wise optimization algorithms for magnitude-based pruning [Poster]
Scaling Up Exact Neural Network Compression by ReLU Stability [Poster]
FreeTickets: Accurate, Robust and Efficient Deep Ensemble by Training with Dynamic Sparsity [Poster] [Code]
Keep the Gradients Flowing: Using Gradient Flow to study Sparse Network Optimization [Poster]
MONGOOSE: A Learnable LSH Framework for Efficient Neural Network Training [Poster] [Video]
Scatterbrain: Unifying Sparse and Low-rank Attention Approximation [Poster]
A Winning Hand: Compressing Deep Networks Can Improve Out-Of-Distribution Robustness [Poster]
Why is Pruning at Initialization Immune to Reinitializing and Shuffling? [Poster]
Learning Digital Circuits: A Journey Through Weight Invariant Self-Pruning Neural Networks [Code]
A Unified Analysis of Network Pruning through the Lens of Gradient Flow and Symmetry [Poster] [Paper 1] [Paper 2]
On independent pruning of attention heads [Poster]
Model-Invariant State Abstractions for Model-Based Reinforcement Learning [Poster]
Algorithm to Compilation Co-design: An Integrated View of Neural Network Sparsity [Poster]
Going Beyond Classification Accuracy Metrics in Model Compression [Poster]