Abstract
In this paper, we present an energy- and area-efficient spiking neural network (SNN) processor based on novel spike-count-based methods. For a low-cost SNN design, we propose hardware-friendly complexity-reduction techniques for both the learning and inference modes of operation. First, for the unsupervised learning process, we propose a spike-count-based learning method. This learning approach utilizes pre- and post-synaptic spike counts to reduce both the bit-width of the synaptic weights and the number of weight updates. For energy-efficient inference, we propose an accumulation-based computing scheme, in which the number of input spikes for each input axon is accumulated, without immediate membrane updates, until a predefined spike count is reached. In addition, computation-skip schemes identify meaningless computations and skip them to improve energy efficiency. Based on the proposed low-complexity design techniques, we design and implement the SNN processor in a 65-nm CMOS process. According to the implementation results, the SNN processor achieves 87.4% recognition accuracy on the MNIST dataset using only 230k 1-bit synaptic weights with 400 excitatory neurons. The energy consumption is 0.26 pJ/SOP and 0.31 μJ/inference in inference mode, and 1.42 pJ/SOP and 2.63 μJ/learning in learning mode.
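To illustrate the accumulation-based computing scheme described in the abstract, the following is a minimal software sketch, not the authors' implementation: per-axon spike counters defer membrane updates until a predefined count is reached, and a computation-skip check bypasses axons whose weight row is all zero. All function and parameter names (`run_accumulation_inference`, `acc_threshold`, `v_th`) are assumptions for illustration.

```python
import numpy as np

def run_accumulation_inference(input_spikes, weights, acc_threshold=4, v_th=16.0):
    """Deferred-update SNN inference sketch (names/values are illustrative).

    input_spikes : (T, n_axons) binary array of input spike events per timestep
    weights      : (n_axons, n_neurons) 1-bit synaptic weights (0/1)
    """
    n_axons, n_neurons = weights.shape
    counters = np.zeros(n_axons, dtype=int)      # per-axon spike counters
    v_mem = np.zeros(n_neurons)                  # membrane potentials
    out_spikes = np.zeros(n_neurons, dtype=int)  # output spike counts

    for t in range(input_spikes.shape[0]):
        counters += input_spikes[t]
        # Only axons whose counter reached the threshold trigger an update.
        for axon in np.flatnonzero(counters >= acc_threshold):
            w = weights[axon]
            if not w.any():
                # Computation skip: an all-zero weight row contributes nothing,
                # so the membrane update is skipped entirely.
                counters[axon] = 0
                continue
            # One batched update replaces `counters[axon]` per-spike updates.
            v_mem += counters[axon] * w
            counters[axon] = 0
        fired = v_mem >= v_th
        out_spikes += fired
        v_mem[fired] = 0.0                       # reset neurons that fired
    return out_spikes, v_mem
```

The energy saving in hardware comes from the batched update: instead of one membrane access per input spike, each axon triggers a single multiply-accumulate per `acc_threshold` spikes, and zero-weight rows are never touched.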
| Original language | English |
|---|---|
| Journal | IEEE Transactions on Biomedical Circuits and Systems |
| DOIs | |
| Publication status | Accepted/In press - 2019 |
Bibliographical note
Publisher Copyright: IEEE
Keywords
- 1-bit synapse weights
- Axons
- Complexity theory
- Computer architecture
- Hardware
- Membrane potentials
- On-Chip Learning
- Spiking Neural Network Processor
- Synapses
- Unsupervised Learning
ASJC Scopus subject areas
- Biomedical Engineering
- Electrical and Electronic Engineering