NeuroSim Simulator for Compute-in-Memory Hardware Accelerator: Validation and Benchmark

Lu, Anni and Peng, Xiaochen and Li, Wantong and Jiang, Hongwu and Yu, Shimeng (2021) NeuroSim Simulator for Compute-in-Memory Hardware Accelerator: Validation and Benchmark. Frontiers in Artificial Intelligence, 4. ISSN 2624-8212


Abstract

Compute-in-memory (CIM) is an attractive solution for processing the extensive multiply-and-accumulate (MAC) workloads in deep neural network (DNN) hardware accelerators. A simulator offering options for various mainstream and emerging memory technologies, architectures, and networks is a great convenience for fast early-stage design space exploration of CIM hardware accelerators. DNN+NeuroSim is an integrated benchmark framework supporting flexible and hierarchical CIM array design options from the device level, through the circuit level, and up to the algorithm level. In this study, we validate and calibrate the predictions of NeuroSim against post-layout simulations of a 40-nm RRAM-based CIM macro. First, the parameters of the memory device and CMOS transistors are extracted from the foundry’s process design kit (PDK) and employed in the NeuroSim settings; the peripheral modules and operating dataflow are also configured to match the actual chip implementation. Next, the area, critical path, and energy consumption values from the SPICE simulations at the module level are compared with those from NeuroSim. Adjustment factors are introduced to account for transistor sizing and wiring area in the layout, gate switching activity, post-layout performance drop, etc. We show that, after calibration, NeuroSim's prediction is precise, with chip-level error under 1%. Finally, the system-level performance benchmark is conducted with various device technologies and compared with the results before the validation. The general conclusions stay the same after the validation, but the predicted performance degrades slightly due to the post-layout calibration.
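The calibration flow the abstract describes can be sketched in a few lines: derive per-metric adjustment factors from a module-level comparison between raw simulator estimates and post-layout SPICE references, then scale the estimates by those factors. This is an illustrative sketch only; the function names, metrics, and all numeric values below are hypothetical placeholders, not values from the paper or from the NeuroSim codebase.

```python
# Illustrative sketch of post-layout calibration: scale a simulator's
# module-level estimates by empirical adjustment factors derived from
# SPICE comparisons. All numbers are hypothetical placeholders.

# Raw module-level predictions (hypothetical units: um^2, ns, pJ).
simulator_pred = {"area": 1200.0, "delay": 4.8, "energy": 35.0}

# Post-layout SPICE references for the same module (hypothetical).
spice_ref = {"area": 1310.0, "delay": 5.3, "energy": 38.5}

# One adjustment factor per metric, derived from the comparison above.
factors = {k: spice_ref[k] / simulator_pred[k] for k in simulator_pred}

# Calibrated predictions: raw estimate scaled by its adjustment factor.
calibrated = {k: simulator_pred[k] * factors[k] for k in simulator_pred}

for k in calibrated:
    rel_err = abs(calibrated[k] - spice_ref[k]) / spice_ref[k]
    print(f"{k}: calibrated={calibrated[k]:.1f}, relative error={rel_err:.2%}")
```

In the paper's actual flow, the factors are fitted at the module level and then reused for chip-level and system-level benchmarks, which is why the calibrated chip-level error can stay under 1% on new configurations.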

Item Type: Article
Subjects: Journal Eprints > Multidisciplinary
Depositing User: Managing Editor
Date Deposited: 09 Feb 2023 07:10
Last Modified: 01 Aug 2024 06:54
URI: http://repository.journal4submission.com/id/eprint/944
