面向卷积神经网络的高能效比特稀疏加速器设计

Translated title of the contribution: Energy-Efficient Bit-Sparse Accelerator Design for Convolutional Neural Network
  • Hang Xiao
  • Haobo Xu*
  • Ying Wang
  • Jiajun Li
  • Yujie Wang
  • Yinhe Han

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

A high-energy-efficiency bit-sparse accelerator design is proposed to address the performance bottleneck of current bit-sparse architectures. First, an encoding method and its corresponding circuit are proposed to enhance the bit-level sparsity of convolutional neural networks, and a bit-serial circuit is employed to skip zero-bit computations on the fly, accelerating neural network inference. Second, a column-shared scheme is proposed to address the synchronization issue of bit-sparse architectures, yielding further acceleration at a small area and power overhead. Finally, the energy efficiency of different bit-sparse architectures is evaluated in SMIC 40 nm technology at 1 GHz. Experimental results show that the energy efficiency of the proposed accelerator is 544% and 179% higher than that of a dense accelerator (VAA) and a bit-sparse accelerator (LS-PRA), respectively.
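To illustrate the core idea behind bit-serial zero-bit skipping described above, the following sketch (not code from the paper; all names are hypothetical) performs a multiply-accumulate using one shift-add per *nonzero* weight bit, so the cycle count scales with bit sparsity rather than with the full bit width:

```python
def nonzero_bit_positions(w: int) -> list[int]:
    """Positions of the set bits in a non-negative integer weight."""
    return [i for i in range(w.bit_length()) if (w >> i) & 1]

def bit_serial_mac(activations, weights):
    """Accumulate sum(a * w) using shift-adds only at nonzero weight bits.

    A dense bit-serial unit would spend one cycle on every bit of every
    weight; skipping zero bits reduces the cycle count to the number of
    set bits, which is the source of the speedup in bit-sparse designs.
    """
    acc = 0
    cycles = 0
    for a, w in zip(activations, weights):
        for pos in nonzero_bit_positions(w):
            acc += a << pos   # one shift-add cycle per nonzero bit
            cycles += 1
    return acc, cycles

acts = [3, 1, 2]
wts = [5, 8, 0]           # binary: 101, 1000, 0 -> 3 nonzero bits in total
result, cycles = bit_serial_mac(acts, wts)
assert result == sum(a * w for a, w in zip(acts, wts))   # 15 + 8 + 0 = 23
assert cycles == 3        # vs. 12 cycles for a dense 4-bit serial unit
```

In hardware, per-weight cycle counts differ across processing elements, which is exactly the synchronization issue the paper's column-shared scheme targets.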

Original language: Chinese (Traditional)
Pages (from-to): 1122-1131
Number of pages: 10
Journal: Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics
Volume: 35
Issue number: 7
DOIs
State: Published - Jul 2023
Externally published: Yes

UN SDGs

This output contributes to the following UN Sustainable Development Goals (SDGs)

  1. SDG 7 - Affordable and Clean Energy

