Abstract
Sparsity in both the input features and the weights of convolutional neural networks offers an opportunity to significantly reduce the number of computations required during inference. Compressing the input data also reduces storage requirements and data-transfer costs, improving overall power efficiency. However, compressing randomly sparse inputs complicates the input-matching process, often incurring substantial hardware overhead and increased power consumption, because sparse inputs are irregular and convolutional strides vary. To address these challenges, this work introduces a data compression method, Stride-Aware Sparsity Compression (StarSPA), designed to efficiently locate valid input values and expedite the multiplication process. To fully exploit the proposed compression method, a weight-stationary dataflow is employed for efficient convolution. Comprehensive simulations show that the proposed accelerator achieves speedups of 1.17×, 1.05×, 1.09×, 1.23×, and 1.12× over the recent SparTen accelerator for AlexNet, VGG16, GoogLeNet, ResNet34, and EfficientNetV2, respectively. Furthermore, an FPGA implementation of the core shows a 2.55× reduction in hardware size and a 5× improvement in energy efficiency compared with SparTen.
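To make the idea concrete, the sketch below illustrates the two concepts the abstract names, stride-aware matching of compressed sparse inputs and a weight-stationary dataflow. It is a minimal sketch only: the `(value, position)` encoding, the function names, and the divisibility test are illustrative assumptions, not the paper's actual StarSPA format or hardware.

```python
import numpy as np

def compress_sparse_row(row):
    """Keep only nonzero entries as (values, positions) pairs.

    A CSR-like stand-in for a compressed input format; the real
    StarSPA encoding is stride-aware and is not reproduced here.
    """
    positions = np.flatnonzero(row)
    return row[positions], positions

def sparse_conv1d_weight_stationary(row, weights, stride=1):
    """1-D convolution over a compressed sparse input, weight-stationary.

    Each weight w[k] stays fixed while the compressed input entries
    stream past it. An entry (v, p) contributes to output index
    o = (p - k) / stride only when the division is exact and o is in
    range -- a simple stand-in for stride-aware input matching.
    """
    values, positions = compress_sparse_row(row)
    out_len = (len(row) - len(weights)) // stride + 1
    out = np.zeros(out_len)
    for k, w in enumerate(weights):              # weight held stationary
        if w == 0:                               # skip zero weights as well
            continue
        for v, p in zip(values, positions):      # stream compressed inputs
            o, rem = divmod(p - k, stride)
            if rem == 0 and 0 <= o < out_len:    # valid input/weight match
                out[o] += w * v
    return out

if __name__ == "__main__":
    row = np.array([0.0, 3.0, 0.0, 0.0, 5.0, 0.0, 2.0, 0.0])
    w = np.array([1.0, 0.0, -1.0])
    sparse = sparse_conv1d_weight_stationary(row, w, stride=2)
    # Dense reference: out[o] = sum_k w[k] * row[o*stride + k]
    dense = np.array([np.dot(w, row[o * 2 : o * 2 + 3]) for o in range(len(sparse))])
    assert np.allclose(sparse, dense)
    print(sparse)   # [ 0. -5.  3.]
```

Holding each weight fixed while compressed inputs stream past it mirrors the weight-stationary dataflow the abstract describes; the divisibility test marks the matching step where, with irregular sparsity and variable strides, a naive hardware matcher would pay the overhead the paper targets.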
| Original language | English |
| --- | --- |
| Pages (from-to) | 10893-10909 |
| Number of pages | 17 |
| Journal | IEEE Access |
| Volume | 12 |
| DOIs | |
| Publication status | Published - 2024 |
Bibliographical note
Publisher Copyright: © 2013 IEEE.
Keywords
- AI accelerator
- convolutional neural networks (CNNs)
- data compression
- dataflow
- network-on-chip (NoC)
ASJC Scopus subject areas
- General Computer Science
- General Materials Science
- General Engineering