ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks

Haoran You1, Baopu Li2,3, Huihong Shi1, Yonggan Fu1, Yingyan Lin1
1Rice University, 2Oracle Corporation, 3Baidu USA

Background and Motivation

Neural networks (NNs) with intensive multiplications (e.g., convolutions and transformers) are powerful yet power-hungry, impeding their wider deployment on resource-constrained devices. As such, multiplication-free networks, which follow a common practice in energy-efficient hardware implementation and parameterize NNs with more efficient operators (e.g., bitwise shifts and additions), have gained growing attention. However, multiplication-free networks typically underperform their vanilla counterparts in accuracy. To marry the best of both worlds, this work advocates hybrid NNs that consist of both powerful yet costly multiplications and efficient yet less powerful operators, and proposes ShiftAddNAS, which can automatically search for more accurate and more efficient NNs.
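
To make these operators concrete, below is a minimal NumPy sketch (ours, not the paper's code) contrasting a standard multiply-accumulate layer with shift- and add-based counterparts; the function names and shapes are purely illustrative.

import numpy as np

def mul_layer(x, w):
    # Vanilla linear layer: multiply-accumulate, accurate but costly in hardware.
    return x @ w

def shift_layer(x, w):
    # Shift operator: snap each weight to a signed power of two, so every
    # multiplication reduces to a sign flip plus a bitwise shift.
    sign = np.sign(w)
    pow2 = 2.0 ** np.round(np.log2(np.abs(w) + 1e-12))
    return x @ (sign * pow2)

def add_layer(x, w):
    # Add operator (AdderNet-style): replace multiplications with negated
    # L1 distances between inputs and weights.
    # x: (batch, in_dim), w: (in_dim, out_dim) -> (batch, out_dim)
    return -np.abs(x[:, :, None] - w[None, :, :]).sum(axis=1)

x = np.random.randn(4, 8)
w = np.random.randn(8, 16)
print(mul_layer(x, w).shape, shift_layer(x, w).shape, add_layer(x, w).shape)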

Contributions
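
ShiftAddNAS integrates two enablers. First, it constructs the first hybrid search space that incorporates both multiplication-based operators (e.g., convolutions and attention) and multiplication-free operators (e.g., shift and add), facilitating the development of accurate and efficient hybrid NNs. Second, it develops a novel weight sharing strategy that enables effective weight sharing among different operators following heterogeneous weight distributions (e.g., Gaussian for convolutions vs. Laplacian for add operators), leading to a largely reduced supernet size and better searched networks.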

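To illustrate the second enabler, the sketch below shows one way a hybrid supernet block could share a single kernel across heterogeneous candidate operators by re-standardizing and rescaling it per operator. All names (HybridSearchBlock, _w) are hypothetical; this is a sketch of the idea under our own assumptions, not the paper's implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridSearchBlock(nn.Module):
    # One supernet block whose candidate operators draw from a single shared kernel.
    def __init__(self, channels, k=3):
        super().__init__()
        self.k = k
        self.weight = nn.Parameter(0.1 * torch.randn(channels, channels, k, k))
        # Per-operator scale so each candidate can match its own weight statistics
        # (e.g., Gaussian-like for conv vs. Laplacian-like for add).
        self.scale = nn.ParameterDict(
            {op: nn.Parameter(torch.ones(1)) for op in ["conv", "shift", "add"]})

    def _w(self, op):
        w = self.weight
        w = (w - w.mean()) / (w.std() + 1e-6)  # standardize the shared kernel
        return self.scale[op] * w              # rescale per operator

    def forward(self, x, op):
        w, pad = self._w(op), self.k // 2
        if op == "conv":  # multiplication-based candidate
            return F.conv2d(x, w, padding=pad)
        if op == "shift":  # weights snapped to signed powers of two
            w = torch.sign(w) * 2.0 ** torch.round(torch.log2(w.abs() + 1e-12))
            return F.conv2d(x, w, padding=pad)
        # "add": adder-style candidate computing negated L1 distances.
        b, c, h, wd = x.shape
        cols = F.unfold(x, self.k, padding=pad)      # (B, C*k*k, H*W)
        wf = w.view(w.size(0), -1)                   # (C_out, C*k*k)
        out = -(cols.unsqueeze(1) - wf[None, :, :, None]).abs().sum(dim=2)
        return out.view(b, -1, h, wd)

block = HybridSearchBlock(8)
x = torch.randn(2, 8, 16, 16)
for op in ["conv", "shift", "add"]:
    print(op, block(x, op).shape)  # every candidate preserves (2, 8, 16, 16)
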
Experimental Results

Extensive experiments and ablation studies on various models, datasets, and tasks consistently validate the efficacy of ShiftAddNAS, e.g., achieving up to a +7.7% higher accuracy or a +4.9 higher BLEU score compared to state-of-the-art NNs, while leading to up to 93% or 69% energy and latency savings, respectively.

Citation

@inproceedings{You2022ShiftAddNASHS,
  title={ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks},
  author={Haoran You and Baopu Li and Huihong Shi and Yonggan Fu and Yingyan Lin},
  booktitle={ICML},
  year={2022}
}