Efficient Machine Learning - Reading Group

Today’s world needs orders-of-magnitude more efficient ML to address environmental and energy crises, optimize resource consumption, and improve sustainability. With the end of Moore’s Law and Dennard scaling, we can no longer expect more and faster transistors for the same cost and power budget. This is particularly problematic given the growing data volumes collected by widely deployed sensors and systems, the ever larger models we train, and the fact that most ML models have to run on edge devices to minimize latency, preserve privacy, and save energy. Algorithmic efficiency in deep learning is therefore essential to achieve the desired speedups, alongside efficient hardware implementations and compiler optimizations for common math operations. ML efficiency is actively investigated across many research communities. This reading group aims to help onboard young scientists interested in the topic and offers researchers at all levels a platform for an open dialog to foster collaboration and stay up to date with rapid developments in the field of efficient ML. We welcome and discuss fresh research findings published as pre-prints or recently presented at research venues. The list of topics includes but is not limited to:

Schedule / Upcoming Talks

Subscribe to the Efficient ML mailing list / import the Efficient ML Events calendar to receive information on how to join the virtual talks.

13. May 2024 @ 5pm CET / 11am EST / 8am PST [timezone converter]
94% on CIFAR-10 in 3.29 Seconds on a Single GPU
Keller Jordan, Independent Researcher

Abstract: CIFAR-10 is among the most widely used datasets in machine learning, facilitating thousands of research projects per year. To accelerate research and reduce the cost of experiments, we introduce training methods for CIFAR-10 which reach 94% accuracy in 3.29 seconds, 95% in 10.4 seconds, and 96% in 46.3 seconds, when run on a single NVIDIA A100 GPU. As one factor contributing to these training speeds, we propose a derandomized variant of horizontal flipping augmentation, which we show improves over the standard method in every case where flipping is beneficial over no flipping at all.

arXiv: https://arxiv.org/pdf/2404.00498.pdf 
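The headline technique here is a derandomized variant of horizontal flipping. As a rough, hedged illustration of what derandomizing this augmentation can look like (not necessarily the paper’s exact scheme), the sketch below flips each image on a fixed schedule tied to its dataset index and the epoch, so every sample is seen both flipped and unflipped over training instead of being flipped by a coin toss.

```python
# Illustrative sketch only: flip on a deterministic schedule instead of at random.
# The (index + epoch) parity rule is an assumption, not the scheme from the paper.
import torch

def derandomized_hflip(batch: torch.Tensor, indices: torch.Tensor, epoch: int) -> torch.Tensor:
    """Flip images (NCHW) whose dataset index plus epoch is odd, along the width axis."""
    flip_mask = (indices + epoch) % 2 == 1
    out = batch.clone()
    out[flip_mask] = torch.flip(batch[flip_mask], dims=[-1])
    return out

# Usage with a toy CIFAR-sized batch; `indices` are the samples' dataset indices.
imgs = torch.randn(4, 3, 32, 32)
flipped = derandomized_hflip(imgs, torch.arange(4), epoch=1)
```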

27. May 2024 @ 5pm CET / 11am EST / 8am PST [timezone converter]
LF-ViT: Reducing Spatial Redundancy in Vision Transformer for Efficient Image Recognition
Youbing Hu (AIoT Lab, Harbin Institute of Technology) and Yun Cheng (Swiss Data Science Center)

Abstract: The Vision Transformer (ViT) excels in accuracy when handling high-resolution images, yet it confronts the challenge of significant spatial redundancy, leading to increased computational and memory requirements. To address this, we present the Localization and Focus Vision Transformer (LF-ViT). This model operates by strategically curtailing computational demands without impinging on performance. In the Localization phase, a reduced-resolution image is processed; if a definitive prediction remains elusive, our pioneering Neighborhood Global Class Attention (NGCA) mechanism is triggered, effectively identifying and spotlighting class-discriminative regions based on the initial findings. Subsequently, in the Focus phase, the designated region is cropped from the original image to enhance recognition. Uniquely, LF-ViT employs consistent parameters across both phases, ensuring seamless end-to-end optimization. Our empirical tests affirm LF-ViT’s prowess: it remarkably decreases DeiT-S’s FLOPs by 63% and concurrently doubles throughput.

arXiv: https://arxiv.org/pdf/2402.00033.pdf 
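For a feel of the two-phase “localize, then focus” idea before the talk, here is a minimal sketch of such an inference loop. The model, confidence threshold, resolutions, and the center-crop helper (standing in for the paper’s NGCA mechanism, which is not reproduced here) are all assumptions for illustration.

```python
# Minimal sketch, not LF-ViT itself: a cheap low-resolution pass with an early exit,
# followed by a second pass on a crop of the original image only when needed.
import torch
import torch.nn.functional as F

def select_focus_region(image: torch.Tensor, crop: int = 112) -> torch.Tensor:
    """Hypothetical placeholder for NGCA: simply center-crop the full-resolution image."""
    _, _, h, w = image.shape
    top, left = (h - crop) // 2, (w - crop) // 2
    return image[:, :, top:top + crop, left:left + crop]

@torch.no_grad()
def two_phase_inference(model, image: torch.Tensor, threshold: float = 0.9, low_res: int = 112):
    # Localization phase: classify a downsampled copy of the image (assumes a batch of one).
    small = F.interpolate(image, size=(low_res, low_res), mode="bilinear", align_corners=False)
    probs = model(small).softmax(dim=-1)
    conf, pred = probs.max(dim=-1)
    if conf.item() >= threshold:
        return pred  # confident early exit, full resolution never processed
    # Focus phase: re-classify a region taken from the original image.
    focus = select_focus_region(image)
    return model(focus).softmax(dim=-1).argmax(dim=-1)
```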

15. July 2024 @ 5pm CET / 11am EST / 8am PST [timezone converter]
Subspace-Configurable Networks
Dong Wang, Graz University of Technology, Austria

Abstract: While the deployment of deep learning models on edge devices is increasing, these models often lack robustness when faced with dynamic changes in sensed data. This can be attributed to sensor drift, or variations in the data compared to what was used during offline training due to factors such as specific sensor placement or naturally changing sensing conditions. Hence, achieving the desired robustness necessitates the utilization of either an invariant architecture or specialized training approaches, like data augmentation. Alternatively, input transformations can be treated as a domain shift problem, and solved by post-deployment model adaptation. In this paper, we train a parameterized subspace of configurable networks, where an optimal network for a particular parameter setting is part of this subspace. The obtained subspace is low-dimensional and has a surprisingly simple structure even for complex, non-invertible transformations of the input, leading to an exceptionally high efficiency of subspace-configurable networks (SCNs) when limited storage and computing resources are at stake. We evaluate SCNs on a wide range of standard datasets, architectures, and transformations, and demonstrate their power on resource-constrained IoT devices, where they can take up to 2.4 times less RAM and be 7.6 times faster at inference time than a model that achieves the same test set accuracy, yet is trained with data augmentations to cover the desired range of input transformations.

arXiv: https://arxiv.org/pdf/2305.13536 
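As a rough picture of what a configurable weight subspace can look like, the sketch below builds a single linear layer whose weights are a combination of D learned base weight sets, with mixing coefficients predicted from the transformation parameter by a tiny configuration network. The dimensionality D, the parameter encoding, and the softmax mixing are illustrative assumptions, not the paper’s exact design.

```python
# Illustrative sketch of a weight subspace conditioned on a transformation parameter.
import torch
import torch.nn as nn

class SubspaceLinear(nn.Module):
    """A linear layer whose weights live in a D-dimensional learned subspace."""
    def __init__(self, in_features: int, out_features: int, D: int = 3, param_dim: int = 2):
        super().__init__()
        self.bases = nn.Parameter(0.02 * torch.randn(D, out_features, in_features))
        # Configuration network: transformation parameter -> mixing coefficients.
        self.config = nn.Sequential(
            nn.Linear(param_dim, 16), nn.ReLU(), nn.Linear(16, D), nn.Softmax(dim=-1)
        )

    def forward(self, x: torch.Tensor, alpha: torch.Tensor) -> torch.Tensor:
        beta = self.config(alpha)                              # (D,) mixing coefficients
        weight = torch.einsum("d,doi->oi", beta, self.bases)   # combined weight matrix
        return x @ weight.t()

# Usage: alpha encodes the input transformation, e.g. (cos θ, sin θ) for a rotation angle θ.
layer = SubspaceLinear(in_features=64, out_features=10)
logits = layer(torch.randn(8, 64), torch.tensor([1.0, 0.0]))  # θ = 0
```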

Past Events and Talks

2024

2023

Organizing Team and Contact

Contact us with questions or suggestions (efficientml@gmail.com). Self-nominations to present your freshly published work in the reading group are welcome.