
GShuttle: Optimizing Memory Access Efficiency for Graph Convolutional Neural Network Accelerators

  • Jia Jun Li*
  • Ke Wang
  • Hao Zheng
  • Ahmed Louri

  *Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Graph convolutional neural networks (GCNs) have emerged as an effective approach to extending deep learning for graph data analytics, but they are computationally challenging given the irregular graphs and the large number of nodes in a graph. GCNs involve chain sparse-dense matrix multiplications with six loops, which results in a large design space for GCN accelerators. Prior work on GCN acceleration either employs limited loop optimization techniques, or determines the design variables based on random sampling, which can hardly exploit data reuse efficiently, thus degrading system efficiency. To overcome this limitation, this paper proposes GShuttle, a GCN acceleration scheme that maximizes memory access efficiency to achieve high performance and energy efficiency. GShuttle systematically explores loop optimization techniques for GCN acceleration, and quantitatively analyzes the design objectives (e.g., required DRAM accesses and SRAM accesses) by analytical calculation based on multiple design variables. GShuttle further employs two approaches, pruned search space sweeping and greedy search, to find the optimal design variables under certain design constraints. We demonstrated the efficacy of GShuttle by evaluation on five widely used graph datasets. The experimental simulations show that GShuttle reduces the number of DRAM accesses by a factor of 1.5 and saves energy by a factor of 1.7 compared with the state-of-the-art approaches.
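The chain sparse-dense matrix multiplication that the abstract refers to can be illustrated with a toy sketch (this is an illustrative Python/SciPy example with made-up sizes, not the paper's accelerator or its loop nest): a GCN layer computes A·X·W, where A is a sparse adjacency matrix and X, W are dense, and the two legal orderings of the chain product trade off arithmetic and data reuse — exactly the kind of design choice GShuttle's analytical model evaluates.

```python
import numpy as np
from scipy.sparse import random as sparse_random

# Toy sizes (hypothetical): N nodes, F input features, H output features.
N, F, H = 100, 32, 16
rng = np.random.default_rng(0)

A = sparse_random(N, N, density=0.05, random_state=0, format="csr")  # sparse adjacency
X = rng.standard_normal((N, F))   # dense node-feature matrix
W = rng.standard_normal((F, H))   # dense weight matrix

# Two orderings of the chain product A @ X @ W:
out_xw_first = A @ (X @ W)   # dense X·W first, then one sparse-dense multiply
out_ax_first = (A @ X) @ W   # sparse A·X first, then one dense-dense multiply

# Both orderings yield the same result but incur different memory traffic.
assert np.allclose(out_xw_first, out_ax_first)
print(out_xw_first.shape)  # (100, 16)
```

When H < F (feature dimension shrinks across the layer), computing X·W first reduces the width of the operand fed to the sparse multiply, which is one reason execution order matters for DRAM/SRAM traffic in GCN accelerators.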

Original language: English
Pages (from-to): 115-127
Number of pages: 13
Journal: Journal of Computer Science and Technology
Volume: 38
Issue number: 1
DOI
Publication status: Published - February 2023

UN Sustainable Development Goals

This output contributes to the following Sustainable Development Goal(s):

  1. SDG 7 - Affordable and Clean Energy
