Accepted Posters
An Information-Theoretic Justification for Model Pruning
Neural Network Compression for Noisy Storage Devices
Learned Token Pruning for Transformers
Unity: Accelerating DNN Training Through Joint Optimization of Algebraic Transformations and Parallelization [code]
Two Sparsities are Better than One: Efficient Sparse-Sparse ConvNets
A Full-Stack Search Technique for Domain Optimized Deep Learning Accelerators*
Monarch: Expressive Structured Matrices for Efficient and Accurate Training*
Reversible Vision Transformers
Analyzing the Confidentiality of Undistillable Teachers in Knowledge Distillation
Length-Adaptive Transformer: Train Once with Length Drop, Use Anytime with Search
*Outstanding Poster Award winner