
Triangle Counting Accelerations: From Algorithm to In-Memory Computing Architecture

Research output: Contribution to journal › Article › peer-review

Abstract

Triangles are the basic substructures of networks, and triangle counting (TC) is a fundamental graph-computing problem in numerous fields such as social network analysis. Nevertheless, like other graph-computing problems, TC involves a large amount of data transfer due to its high memory-to-computation ratio and random memory-access pattern, and thus suffers from the bandwidth bottleneck of the traditional von Neumann architecture. To overcome this challenge, in this paper we propose to accelerate TC with the emerging processing-in-memory (PIM) architecture through algorithm-architecture co-optimization. To enable efficient in-memory implementation, we reformulate TC with bitwise logic operations (such as AND), and develop customized graph-compression and mapping techniques for efficient data-flow management. With the emerging computational Spin-Transfer Torque Magnetic RAM (STT-MRAM) array, one of the most promising PIM enabling technologies, device-to-architecture co-simulation results demonstrate that the proposed TC in-memory accelerator outperforms the state-of-the-art GPU and FPGA accelerators by 12.2× and 31.8×, respectively, and achieves a 34× energy-efficiency improvement over the FPGA accelerator.
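The bitwise reformulation the abstract mentions can be illustrated in software: store each vertex's neighbor list as a bitmap, then for every edge (u, v) a single AND of the two bitmaps yields their common neighbors, each of which closes a triangle. The sketch below is a minimal illustration of that idea, not the paper's STT-MRAM accelerator or its compression/mapping scheme; all names are our own.

```python
def count_triangles(adj_rows):
    """Count triangles in an undirected graph.

    adj_rows[i] is a Python int used as a bitmap: bit w is set iff
    vertex i is adjacent to vertex w (the bitmap must be symmetric).
    """
    n = len(adj_rows)
    total = 0
    for u in range(n):
        for v in range(u + 1, n):          # each undirected edge once
            if (adj_rows[u] >> v) & 1:     # edge (u, v) exists
                # One bitwise AND finds all common neighbors of u and v;
                # each set bit w completes a triangle (u, v, w).
                total += bin(adj_rows[u] & adj_rows[v]).count("1")
    # Every triangle is found once per each of its 3 edges.
    return total // 3


# Example: a triangle plus a pendant vertex 3 attached to vertex 2.
rows = [0b0110, 0b0101, 0b1011, 0b0100]
print(count_triangles(rows))  # → 1
```

In a PIM setting the AND over whole bitmap rows is what the memory array evaluates in place, which is why this formulation maps well to in-memory logic.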

Original language: English
Pages (from-to): 2462-2472
Number of pages: 11
Journal: IEEE Transactions on Computers
Volume: 71
Issue number: 10
DOI
Publication status: Published - 1 Oct 2022

United Nations Sustainable Development Goals

This output contributes to the following Sustainable Development Goal(s):

  1. SDG 7 - Affordable and Clean Energy
