Accepted Posters

An Information-Theoretic Justification for Model Pruning

Neural Network Compression for Noisy Storage Devices

Learned Token Pruning for Transformers

Unity: Accelerating DNN Training Through Joint Optimization of Algebraic Transformations and Parallelization

Two Sparsities are Better than One: Efficient Sparse-Sparse ConvNets

A Full-Stack Search Technique for Domain Optimized Deep Learning Accelerators*

Monarch: Expressive Structured Matrices for Efficient and Accurate Training*

Reversible Vision Transformers

Analyzing the Confidentiality of Undistillable Teachers in Knowledge Distillation

Length-Adaptive Transformer: Train Once with Length Drop, Use Anytime with Search


*Outstanding Poster award winner