In this paper, we present an energy-efficient SNN architecture that can seamlessly run deep spiking neural networks (SNNs) with improved accuracy. First, we propose conversion-aware training (CAT) to reduce ANN-to-SNN conversion loss without hardware implementation overhead. In the proposed CAT, the activation function developed to simulate SNN behavior during ANN training is efficiently exploited to reduce the data representation error after conversion. Building on the CAT technique, we also present a time-to-first-spike coding scheme that enables lightweight logarithmic computation by utilizing spike time information. An SNN processor supporting the proposed techniques has been implemented in a 28nm CMOS process. The processor achieves top-1 accuracies of 91.7%, 67.9%, and 57.4% with inference energies of 486.7 µJ, 503.6 µJ, and 1426 µJ on CIFAR-10, CIFAR-100, and Tiny-ImageNet, respectively, when running VGG-16 with 5-bit logarithmic weights.
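The abstract's "lightweight logarithmic computation by utilizing spike time information" can be illustrated with a minimal sketch. This is not the paper's actual implementation; the timestep count `T` and the helper names (`ttfs_encode`, `ttfs_decode`, `log_mac`) are illustrative assumptions. The idea shown: in time-to-first-spike (TTFS) coding an earlier spike encodes a larger activation, and if the spike time is the (negated) base-2 logarithm of the activation, the decoded value is a power of two, so a weight-times-activation product reduces to a binary shift instead of a multiplication.

```python
# Illustrative TTFS sketch (assumed names/parameters, not the paper's design):
# an activation a in (0, 1] is mapped to a spike time t = round(-log2(a)),
# clipped to T timesteps. The decoded value 2**(-t) is a power of two, so a
# weight * activation product becomes a shift of the weight.
import math

T = 8  # assumed number of timesteps per inference window (illustrative)

def ttfs_encode(a: float) -> int:
    """Map activation a in (0, 1] to a spike time in [0, T-1]; earlier = larger."""
    if a <= 0.0:
        return T - 1  # non-positive activations map to the latest (smallest) code
    t = round(-math.log2(a))
    return min(max(t, 0), T - 1)

def ttfs_decode(t: int) -> float:
    """Decoded activation is a power of two: 2**(-t)."""
    return 2.0 ** (-t)

def log_mac(weight: float, t: int) -> float:
    """Weight x activation collapses to a shift: weight * 2**(-t)."""
    return weight / (1 << t)  # integer shift replaces a hardware multiplier

a = 0.26                 # example activation
t = ttfs_encode(a)       # spike time encoding the activation
approx = ttfs_decode(t)  # power-of-two approximation of a
print(t, approx, log_mac(3.0, t))  # → 2 0.25 0.75
```

The power-of-two restriction is what makes the arithmetic cheap: each synaptic update is a shift-and-add keyed by the input spike's arrival time, which is consistent with the 5-bit logarithmic weights the abstract reports.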
|Title of host publication||Proceedings of the 59th ACM/IEEE Design Automation Conference, DAC 2022|
|Publisher||Institute of Electrical and Electronics Engineers Inc.|
|Number of pages||6|
|Publication status||Published - 2022 Jul 10|
|Event||59th ACM/IEEE Design Automation Conference, DAC 2022 - San Francisco, United States|
Duration: 2022 Jul 10 → 2022 Jul 14
|Name||Proceedings - Design Automation Conference|
|Conference||59th ACM/IEEE Design Automation Conference, DAC 2022|
|Period||22/7/10 → 22/7/14|
|Bibliographical note||Funding Information: This work was supported by the National Research Foundation of Korea grant funded by the Korea government (No. NRF-2020R1A2C3014820).|
© 2022 ACM.
Keywords
- ANN-to-SNN conversion
- logarithmic computations
- spiking neural network
- temporal coding
ASJC Scopus subject areas
- Computer Science Applications
- Control and Systems Engineering
- Electrical and Electronic Engineering
- Modelling and Simulation