TY - GEN
T1 - An Energy-efficient On-chip Learning Architecture for STDP based Sparse Coding
AU - Kim, Heetak
AU - Tang, Hoyoung
AU - Park, Jongsun
N1 - Funding Information:
This work was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2019-2018-0-01433) supervised by the IITP (Institute for Information & communications Technology Promotion); by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. NRF-2016R1A2B4015329); and by the Industrial Strategic Technology Development Program (10077445, Development of SoC technology based on Spiking Neural Cell for smart mobile and IoT Devices) funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea)
Publisher Copyright:
© 2019 IEEE.
PY - 2019/7
Y1 - 2019/7
N2 - Two main bottlenecks encountered when implementing energy-efficient spike-timing-dependent plasticity (STDP) based sparse coding are the complex computation of the winner-take-all (WTA) operation and the repetitive neuronal operations of time-domain processing. In this paper, we present an energy-efficient STDP-based sparse coding processor. The low-cost hardware is based on the following algorithmic reduction techniques. First, the complex WTA operation is simplified by predicting which neurons will emit spikes. Sparsity-based approximations in the spatial and temporal domains are also exploited to remove redundant neurons with negligible loss of algorithmic accuracy. We designed and implemented the STDP-based sparse coding hardware in a 65nm CMOS process. By exploiting input sparsity, the proposed architecture can dynamically trade off computation energy (up to 74%) against algorithmic quality for natural image (maximum 3.55% quality loss) and MNIST (no quality loss) applications. In inference mode, the SNN hardware achieves a throughput of 374 Mpixels/s and 840.2 GSOP/s with an energy efficiency of 781.52 pJ/pixel and 0.35 pJ/SOP.
AB - Two main bottlenecks encountered when implementing energy-efficient spike-timing-dependent plasticity (STDP) based sparse coding are the complex computation of the winner-take-all (WTA) operation and the repetitive neuronal operations of time-domain processing. In this paper, we present an energy-efficient STDP-based sparse coding processor. The low-cost hardware is based on the following algorithmic reduction techniques. First, the complex WTA operation is simplified by predicting which neurons will emit spikes. Sparsity-based approximations in the spatial and temporal domains are also exploited to remove redundant neurons with negligible loss of algorithmic accuracy. We designed and implemented the STDP-based sparse coding hardware in a 65nm CMOS process. By exploiting input sparsity, the proposed architecture can dynamically trade off computation energy (up to 74%) against algorithmic quality for natural image (maximum 3.55% quality loss) and MNIST (no quality loss) applications. In inference mode, the SNN hardware achieves a throughput of 374 Mpixels/s and 840.2 GSOP/s with an energy efficiency of 781.52 pJ/pixel and 0.35 pJ/SOP.
KW - On-chip learning
KW - Sparse coding
KW - Spike timing dependent plasticity
KW - Spiking neural network
UR - http://www.scopus.com/inward/record.url?scp=85072674029&partnerID=8YFLogxK
U2 - 10.1109/ISLPED.2019.8824938
DO - 10.1109/ISLPED.2019.8824938
M3 - Conference contribution
AN - SCOPUS:85072674029
T3 - Proceedings of the International Symposium on Low Power Electronics and Design
BT - International Symposium on Low Power Electronics and Design, ISLPED 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 IEEE/ACM International Symposium on Low Power Electronics and Design, ISLPED 2019
Y2 - 29 July 2019 through 31 July 2019
ER -