FIMA-Q: Post-Training Quantization for Vision Transformers by Fisher Information Matrix Approximation

  • Zhuguanyu Wu
  • Shihe Wang
  • Jiayi Zhang
  • Jiaxin Chen*
  • Yunhong Wang*

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

Abstract

Post-training quantization (PTQ) has stood out as a cost-effective and promising model compression paradigm in recent years, as it avoids computationally intensive model retraining. Nevertheless, current PTQ methods for Vision Transformers (ViTs) still suffer from significant accuracy degradation, especially under low-bit quantization. To address these shortcomings, we analyze the prevailing Hessian-guided quantization loss and uncover certain limitations of conventional Hessian approximations. Following the block-wise reconstruction framework, we propose a novel PTQ method for ViTs, dubbed FIMA-Q. Specifically, we first establish the connection between KL divergence and the Fisher Information Matrix (FIM), which enables fast computation of the quantization loss during reconstruction. We further propose an efficient FIM approximation method, namely DPLR-FIM, by employing the diagonal plus low-rank principle, and formulate the ultimate quantization loss. Extensive experiments, conducted across various vision tasks with representative ViT-based architectures on public datasets, demonstrate that our method substantially improves accuracy over state-of-the-art approaches, especially under low-bit quantization. The source code is available at https://github.com/ShiheWang/FIMA-Q.
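The abstract's diagonal-plus-low-rank idea can be illustrated with a minimal sketch: approximate the empirical FIM F = GᵀG/N (G being per-sample gradients) by its diagonal plus a rank-r term from the top singular directions of G, then score a quantization perturbation Δθ with ΔθᵀFΔθ. This is a generic illustration under assumed conventions, not the paper's exact DPLR-FIM construction, which the abstract does not spell out; the function name and the residual-diagonal choice are ours.

```python
import numpy as np

def dplr_fim_loss(grads, delta, rank=2):
    """Sketch of a diagonal-plus-low-rank FIM-weighted quantization loss.

    grads: (N, d) per-sample gradients; delta: (d,) quantization
    perturbation. The empirical FIM is F = G^T G / N. We keep a
    rank-`rank` term V V^T from the top singular directions of G, plus
    a residual diagonal so that the approximation's diagonal matches
    F's exactly, and return delta^T (D_resid + V V^T) delta.
    """
    n = grads.shape[0]
    # Diagonal of the empirical FIM: mean of squared per-sample gradients.
    diag = (grads ** 2).mean(axis=0)
    # Top singular directions of G / sqrt(N); V V^T is the best rank-r
    # approximation of F in Frobenius norm.
    _, s, vt = np.linalg.svd(grads / np.sqrt(n), full_matrices=False)
    v = vt[:rank].T * s[:rank]              # (d, rank), columns scaled
    # Residual diagonal avoids double-counting the low-rank part.
    diag_resid = diag - (v ** 2).sum(axis=1)
    return float(delta @ (diag_resid * delta) + np.sum((v.T @ delta) ** 2))
```

At full rank the low-rank term reconstructs F exactly and the residual diagonal vanishes, so the loss reduces to the exact quadratic ΔθᵀFΔθ; at rank 0 it falls back to the purely diagonal (Hessian-diagonal-style) approximation the paper argues is insufficient.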

Original language: English
Pages (from-to): 14891-14900
Number of pages: 10
Journal: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
State: Published - 2025
Event: 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2025 - Nashville, United States
Duration: 11 Jun 2025 - 15 Jun 2025

