
CENNA: Cost-Effective Neural Network Accelerator

Title
CENNA: Cost-Effective Neural Network Accelerator
Author
Ki-Seok Chung (정기석)
Keywords
convolutional neural network (CNN); neural network accelerator; neural processing unit (NPU); CNN inference
Issue Date
2020-01
Publisher
MDPI
Citation
ELECTRONICS, v. 9, no. 1, article no. 134
Abstract
Convolutional neural networks (CNNs) are widely adopted in various applications. State-of-the-art CNN models deliver excellent classification performance, but they require a large amount of computation and data exchange because they typically employ many processing layers. Among these processing layers, convolution layers, which carry out many multiplications and additions, account for a major portion of computation and memory access. Therefore, reducing the amount of computation and memory access is key to high-performance CNNs. In this study, we propose a cost-effective neural network accelerator, named CENNA, whose hardware cost is reduced by a cost-centric matrix multiplication that combines Strassen's multiplication with naive multiplication. Furthermore, the convolution method using the proposed matrix multiplication can minimize data movement by reusing both the feature map and the convolution kernel without any additional control logic. In terms of throughput, power consumption, and silicon area, the efficiency of CENNA is up to 88 times higher than that of conventional designs for CNN inference.
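The trade-off the abstract refers to can be illustrated at the smallest scale: for a 2×2 block, Strassen's scheme replaces the naive 8 multiplications with 7, at the cost of extra additions and subtractions. The sketch below is only an illustration of that arithmetic trade-off, not the paper's hardware design; function names and the 2×2 formulation are our own.

```python
def naive_2x2(a, b):
    """Naive 2x2 matrix multiply: 8 multiplications, 4 additions."""
    (a11, a12), (a21, a22) = a
    (b11, b12), (b21, b22) = b
    return [[a11 * b11 + a12 * b21, a11 * b12 + a12 * b22],
            [a21 * b11 + a22 * b21, a21 * b12 + a22 * b22]]

def strassen_2x2(a, b):
    """Strassen 2x2 matrix multiply: 7 multiplications, 18 additions/subtractions."""
    (a11, a12), (a21, a22) = a
    (b11, b12), (b21, b22) = b
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
assert strassen_2x2(a, b) == naive_2x2(a, b) == [[19, 22], [43, 50]]
```

In hardware, multipliers are far more expensive than adders in area and power, which is why a "cost-centric" mix of the two schemes can lower overall hardware cost even though Strassen's form needs more additions.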
URI
https://www.mdpi.com/2079-9292/9/1/134
https://repository.hanyang.ac.kr/handle/20.500.11754/160649
ISSN
2079-9292
DOI
10.3390/electronics9010134
Appears in Collections:
COLLEGE OF ENGINEERING[S](공과대학) > ELECTRONIC ENGINEERING(융합전자공학부) > Articles
Files in This Item:
CENNA Cost-Effective Neural Network Accelerator.pdf



