Enhancing Computation Efficiency in Large Language Models through Weight and Activation Quantization
Janghwan Lee*1, Minsoo Kim*1, Seungcheol Baek2, Seok Joong Hwang2, Wonyong Sung3, and Jungwook Choi†1
1 Hanyang University, 2 SAPEON Korea Inc., 3 Seoul National University, Republic of Korea
EMNLP 2023 Paper: https://aclanthology.org/2023.emnlp-main.910.pdf
Abstract
Large Language Models (LLMs) are proficient in natural language processing tasks, but their deployment is often restricted by extensive parameter sizes and computational demands. This paper focuses on post-training quantization (PTQ) in LLMs, specifically 4-bit weight and 8-bit activation (W4A8) quantization, to enhance computational efficiency, a topic less explored than weight-only quantization. We present two innovative techniques: activation-quantization-aware scaling (AQAS) and sequence-length-aware calibration (SLAC), which enhance PTQ by considering the combined effects on weights and activations and by aligning calibration sequence lengths to target tasks. Moreover, we introduce dINT, a hybrid data format combining integer and denormal representations, to address the underflow issue in W4A8 quantization, where small values are rounded to zero. Through rigorous evaluations of LLMs, including OPT and LLaMA, we demonstrate that our techniques significantly boost task accuracies to levels comparable with full-precision models. By developing arithmetic units compatible with dINT, we further confirm that our methods yield a 2× hardware efficiency improvement compared to an 8-bit integer MAC unit.
Challenges in Developing the Best W4A8 LLM Recipe
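For context, here is a minimal PyTorch-style sketch of the baseline W4A8 setting the paper works in: 4-bit per-channel weights and 8-bit per-tensor activations, both simulated with symmetric round-to-nearest fake quantization. The tensor shapes and helper name are our own illustration, not code from the paper. Note that on a 4-bit grid, any value smaller than half a quantization step rounds to zero, which is the underflow issue addressed later by dINT.

```python
import torch

def fake_quant_int(x, n_bits, per_channel=False):
    """Symmetric round-to-nearest fake quantization to signed n_bits integers (illustrative)."""
    qmax = 2 ** (n_bits - 1) - 1                      # e.g. 7 for INT4, 127 for INT8
    if per_channel:
        dims = tuple(range(1, x.dim()))               # one scale per output channel
        scale = x.abs().amax(dim=dims, keepdim=True) / qmax
    else:
        scale = x.abs().max() / qmax                  # one scale per tensor
    scale = scale.clamp(min=1e-8)
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
    return q * scale                                  # dequantized ("fake quant") values

# Baseline W4A8: 4-bit per-output-channel weights, 8-bit per-tensor activations.
W = torch.randn(4096, 4096)       # hypothetical linear-layer weight
X = torch.randn(8, 128, 4096)     # hypothetical activations (batch, seq, hidden)
W_q = fake_quant_int(W, n_bits=4, per_channel=True)
X_q = fake_quant_int(X, n_bits=8)
Y = X_q @ W_q.t()                 # quantized matmul, simulated in floating point
```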
AQAS: Activation Quantization Aware Scaling
What is the best scaling for W4A8 Quantization?
Scaling chosen solely for weight (W) or activation (A) quantization is suboptimal for the other.
Solution: Awareness of both weight and activation quantization
AQAS balances weight and activation magnitudes to prevent excessive quantization burden on either side.
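Below is a minimal sketch of the scaling-search idea described above, under two assumptions of ours: candidate per-input-channel scales are generated with a SmoothQuant-style α-sweep, and the search objective is the output reconstruction error with both weights and activations fake-quantized (the "awareness of both" criterion). The function names are illustrative, fake_quant_int is the helper sketched earlier, and the paper's actual search procedure may differ.

```python
import torch

def aqas_scale_search(W, X, w_bits=4, a_bits=8, grid=20):
    """Grid-search a per-input-channel scale that minimizes output error
    when BOTH weights and activations are quantized (illustrative sketch)."""
    x_absmax = X.abs().amax(dim=tuple(range(X.dim() - 1)))   # per input channel
    w_absmax = W.abs().amax(dim=0)                           # per input channel
    Y_ref = X @ W.t()                                        # full-precision reference output
    best_err, best_s = float("inf"), torch.ones_like(x_absmax)
    for i in range(1, grid + 1):
        alpha = i / grid                                     # candidate migration strength
        s = (x_absmax.clamp(min=1e-5) ** alpha) / (w_absmax.clamp(min=1e-5) ** (1 - alpha))
        W_q = fake_quant_int(W * s, n_bits=w_bits, per_channel=True)   # scaled weights, 4-bit
        X_q = fake_quant_int(X / s, n_bits=a_bits)                     # scaled activations, 8-bit
        err = (X_q @ W_q.t() - Y_ref).pow(2).mean()
        if err < best_err:
            best_err, best_s = err, s
    return best_s
```

The key design point is the objective: a weight-only-aware search would compare against Y_ref using full-precision activations, which can shift too much quantization burden onto the activation side; scoring candidates with both tensors quantized keeps the burden balanced.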
SLAC: Sequence Length Aware Calibration
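The abstract describes SLAC as aligning the calibration sequence length with the target task. A minimal sketch of that idea, assuming a pre-tokenized calibration corpus; the helper name, sampling scheme, and target_len parameter are our own illustration, not the paper's code.

```python
import random

def build_slac_calibration_set(token_stream, target_len, n_samples=128, seed=0):
    """Draw calibration sequences whose length matches the target task's
    typical input length, rather than a fixed long calibration length."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        start = rng.randrange(0, max(1, len(token_stream) - target_len))
        samples.append(token_stream[start:start + target_len])
    return samples

# Example: a zero-shot task whose prompts average ~64 tokens would be calibrated
# with length-64 segments rather than, say, 2048-token chunks.
# calib = build_slac_calibration_set(tokenized_corpus, target_len=64)
```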
dINT: Integer with Denormal Representation
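The abstract motivates dINT by the underflow problem of W4A8: on a plain 4-bit integer grid, magnitudes below half a quantization step round to zero. The sketch below shows one way to realize a hybrid integer-plus-denormal grid by adding a sub-step level at half the quantization step; the exact codebook and bit layout of dINT in the paper may differ from this illustration.

```python
import torch

def fake_quant_dint(x, n_bits=4):
    """Round-to-nearest onto a hybrid grid: regular integer levels plus a
    'denormal' level at half the quantization step near zero.
    Illustrative only; not the paper's exact dINT codebook."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = x.abs().amax(dim=tuple(range(1, x.dim())), keepdim=True) / qmax
    scale = scale.clamp(min=1e-8)
    # Hybrid grid in integer units: {0, ±0.5, ±1, ±2, ..., ±qmax}
    levels = torch.cat([torch.tensor([0.0, 0.5]), torch.arange(1.0, qmax + 1)])
    levels = torch.cat([-levels.flip(0), levels]).unique()
    r = x / scale
    idx = (r.unsqueeze(-1) - levels).abs().argmin(dim=-1)   # nearest hybrid level (naive search)
    return levels[idx] * scale
```

Compared with the plain integer grid in the first sketch, values with magnitude between scale/4 and scale/2 now map to the ±0.5 denormal level instead of collapsing to zero, which is the underflow-mitigation effect the format is designed for.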
Experimental Results