
Simulating Neural Network Processors

  • Jian Hu
  • Xianlong Zhang
  • Xiaohua Shi*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Deep learning has achieved results competitive with human performance in many fields. Traditionally, deep learning networks are executed on CPUs and GPUs. In recent years, a growing number of neural network accelerators have been introduced in both academia and industry to improve the performance and energy efficiency of deep learning workloads. In this paper, we introduce a flexible, configurable functional NN accelerator simulator that can be configured to simulate the micro-architectures of different NN accelerators. The extensible and configurable simulator is useful for system-level micro-architecture exploration as well as for developing operator optimization algorithms. The simulator is a functional simulator: it models the latencies of computation and memory access and the concurrent execution of modules, and it reports the total number of execution cycles when a simulation completes. We also integrated the simulator into the TVM compilation stack as an optional backend, so users can write operators in TVM and execute them on the simulator.
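The abstract describes a functional simulator that models per-module latencies and the overlap between concurrently running modules, then reports a total cycle count. The following is a minimal sketch of that idea under stated assumptions: it is not the paper's actual simulator, and the instruction kinds (`load`, `compute`), the `Config` fields, and the double-buffering overlap rule are all hypothetical illustrations of how such a latency model could be configured.

```python
from dataclasses import dataclass

@dataclass
class Config:
    """Hypothetical per-module latencies (cycles per unit of work)."""
    mac_latency: int = 1      # cycles per compute step on the MAC array
    dram_latency: int = 100   # cycles per DMA burst from off-chip memory

def simulate(program, cfg):
    """Return the total cycle count for a list of (kind, count) steps,
    where kind is 'load' or 'compute'.

    A 'load' immediately followed by a 'compute' is assumed to overlap
    (double buffering), so the pair costs the max of the two latencies
    rather than their sum -- modeling concurrent modules."""
    cycles = 0
    i = 0
    while i < len(program):
        kind, n = program[i]
        if kind == "load" and i + 1 < len(program) and program[i + 1][0] == "compute":
            load_cycles = n * cfg.dram_latency
            compute_cycles = program[i + 1][1] * cfg.mac_latency
            cycles += max(load_cycles, compute_cycles)  # modules run concurrently
            i += 2
        else:
            per_unit = cfg.dram_latency if kind == "load" else cfg.mac_latency
            cycles += n * per_unit
            i += 1
    return cycles
```

Changing the `Config` values re-targets the same program to a different (hypothetical) micro-architecture, which is the kind of system-level exploration the abstract attributes to the simulator.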

Original language: English
Article number: 7500195
Journal: Wireless Communications and Mobile Computing
Volume: 2022
State: Published - 2022

UN SDGs

This output contributes to the following UN Sustainable Development Goals (SDGs)

  1. SDG 7 - Affordable and Clean Energy
