I am an undergraduate in the ACM Honors Class at Shanghai Jiao Tong University. Currently, I am fortunate to work with Prof. Song Han at the MIT HAN Lab as a research intern.
During my junior year, I also had a wonderful time as an undergraduate researcher advised by Prof. Jingwen Leng at the SJTU EPCC Lab.
My research interests lie in Efficient Systems and Algorithms for Large Language Models.
News
Publications
* indicates equal contribution
AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration
Ji Lin*,
Jiaming Tang*,
Haotian Tang,
Shang Yang,
Xingyu Dang,
and Song Han
MLSys 2024
/ Abstract
/ Code
Large language models (LLMs) have shown excellent performance on various tasks, but the astronomical model size raises the hardware barrier for serving (memory size) and slows down token generation (memory bandwidth). In this paper, we propose Activation-aware Weight Quantization (AWQ), a hardware-friendly approach for LLM low-bit weight-only quantization. Our method is based on the observation that weights are not equally important: protecting only 1% of salient weights can greatly reduce quantization error. We then propose to search for the optimal per-channel scaling that protects the salient weights by observing the activations, not the weights. AWQ does not rely on any backpropagation or reconstruction, so it can well preserve LLMs' generalization ability on different domains and modalities without overfitting to the calibration set; it also does not rely on any data layout reordering, maintaining hardware efficiency. AWQ outperforms existing work on various language modeling, common sense QA, and domain-specific benchmarks. Thanks to better generalization, it achieves excellent quantization performance for instruction-tuned LMs and, for the first time, multimodal LMs. We also implement efficient tensor core kernels with reorder-free online dequantization to accelerate AWQ, achieving a 1.45x speedup over GPTQ and a 1.85x speedup over the cuBLAS FP16 implementation. Our method provides a turn-key solution to compress LLMs to 3/4 bits for efficient deployment.
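As a rough illustration of the idea behind AWQ (not the paper's actual implementation), the activation-aware scaling search can be sketched as follows: derive a per-input-channel scale from calibration activation magnitudes, scale the salient channels up before quantization, and divide the scale back out afterward. The function names, the small grid of scaling exponents, and the simplified per-output-channel symmetric quantizer are all assumptions made for this sketch; the real AWQ uses group-wise quantization and fused dequantization kernels.

```python
import numpy as np

def fake_quant(w, n_bits):
    """Simplified symmetric min-max quantization per output channel
    (the real AWQ quantizes in small groups along the input dimension)."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max(axis=1, keepdims=True) / qmax
    scale = np.maximum(scale, 1e-8)
    return np.round(w / scale) * scale

def awq_quantize(w, act, n_bits=4):
    """Search a per-input-channel scale, driven by activation magnitude,
    that minimizes the output error of the quantized layer.
    w:   (out_features, in_features) weight matrix
    act: (n_samples, in_features) calibration activations"""
    a_mag = np.abs(act).mean(axis=0)              # per-channel activation magnitude
    best_err, best_s = np.inf, np.ones_like(a_mag)
    for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):     # alpha=0 recovers plain quantization
        s = a_mag ** alpha
        s = s / s.mean()                          # normalize the scale vector
        wq = fake_quant(w * s, n_bits) / s        # scale up, quantize, scale back
        err = np.mean((act @ w.T - act @ wq.T) ** 2)
        if err < best_err:
            best_err, best_s = err, s
    return fake_quant(w * best_s, n_bits) / best_s
```

Because the search grid includes the identity scale (alpha = 0), the activation-aware result is never worse on the calibration set than naive weight-only quantization.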
OliVe: Accelerating Large Language Models via Hardware-friendly Outlier-Victim Pair Quantization
Cong Guo*,
Jiaming Tang*,
Weiming Hu,
Jingwen Leng,
Chen Zhang,
Fan Yang,
Yunxin Liu,
Minyi Guo,
and Yuhao Zhu
ISCA 2023
/ Abstract
/ Code
We propose OliVe, an algorithm/architecture co-designed solution that adopts outlier-victim pair (OVP) quantization, which handles outlier values locally with low hardware overhead and achieves high performance gains. The key insight of OliVe is that outliers are important while the normal values next to them are not; those normal values (called victims) can therefore be sacrificed to accommodate outliers. This enables a memory-aligned OVP encoding scheme that can be efficiently integrated into existing hardware accelerators such as systolic arrays and tensor cores. As a result, the OliVe-based accelerator surpasses the existing outlier-aware accelerator GOBO with a 4.5x speedup and a 4.0x energy reduction while maintaining superior model accuracy.
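The outlier-victim pair idea can be sketched in a few lines (a hypothetical software analogue, not the paper's hardware encoding): within each adjacent pair of values, if one is an outlier, its neighbor (the victim) is pruned to zero so the outlier can borrow the victim's bit budget for a wider encoding, keeping the pair's total storage aligned. The threshold rule, scale choices, and function names below are assumptions for this sketch.

```python
import numpy as np

def quant(v, n_bits, scale):
    """Uniform symmetric quantization of a scalar at the given scale."""
    qmax = 2 ** (n_bits - 1) - 1
    return float(np.clip(np.round(v / scale), -qmax, qmax) * scale)

def ovp_quantize(x, n_bits=4, outlier_sigma=3.0):
    """Outlier-victim pair sketch: in each adjacent pair, an outlier
    evicts its neighbor (set to zero) and takes both values' bits,
    so every pair still occupies exactly 2 * n_bits of storage."""
    x = np.asarray(x, dtype=np.float64)
    assert x.size % 2 == 0, "values are encoded in adjacent pairs"
    thresh = outlier_sigma * x.std()              # outlier cutoff (assumed rule)
    qmax_lo = 2 ** (n_bits - 1) - 1
    qmax_hi = 2 ** (2 * n_bits - 1) - 1
    s_lo = thresh / qmax_lo                       # scale for normal values
    s_hi = np.abs(x).max() / qmax_hi              # scale for wide outlier format
    out = x.copy().reshape(-1, 2)
    for pair in out:                              # each row is one OVP pair
        a, b = pair
        if abs(a) > thresh or abs(b) > thresh:
            if abs(a) >= abs(b):
                pair[0], pair[1] = quant(a, 2 * n_bits, s_hi), 0.0  # b is the victim
            else:
                pair[0], pair[1] = 0.0, quant(b, 2 * n_bits, s_hi)  # a is the victim
        else:
            pair[0], pair[1] = quant(a, n_bits, s_lo), quant(b, n_bits, s_lo)
    return out.reshape(x.shape)
```

The memory alignment is the point: unlike sparse outlier formats that need index metadata, every pair decodes from a fixed-width word, which is what lets the scheme drop into systolic-array or tensor-core datapaths.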