
Full metadata record

DC Field | Value | Language
dc.contributor.author | 정기석 | -
dc.date.accessioned | 2021-03-18T01:23:38Z | -
dc.date.available | 2021-03-18T01:23:38Z | -
dc.date.issued | 2020-01 | -
dc.identifier.citation | ELECTRONICS, v. 9, no. 1, article no. 134 | en_US
dc.identifier.issn | 2079-9292 | -
dc.identifier.uri | https://www.mdpi.com/2079-9292/9/1/134 | -
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/160649 | -
dc.description.abstract | Convolutional neural networks (CNNs) are widely adopted in various applications. State-of-the-art CNN models deliver excellent classification performance, but they require a large amount of computation and data exchange because they typically employ many processing layers. Among these processing layers, convolution layers, which carry out many multiplications and additions, account for a major portion of computation and memory access. Therefore, reducing the amount of computation and memory access is key to achieving high-performance CNNs. In this study, we propose a cost-effective neural network accelerator, named CENNA, whose hardware cost is reduced by employing a cost-centric matrix multiplication that combines Strassen's multiplication with naive multiplication. Furthermore, the convolution method based on the proposed matrix multiplication can minimize data movement by reusing both the feature map and the convolution kernel without any additional control logic. In terms of throughput, power consumption, and silicon area, the efficiency of CENNA is up to 88 times higher than that of conventional designs for CNN inference. | en_US
dc.description.sponsorship | This research was funded by the Technology Innovation Program MOTIE (No. 10076583, Development of free-running speech recognition technologies for embedded robot systems) and by the Competency Development Program for Industry Specialists MOTIE (No. 0001883, HRD program for the intelligent semiconductor industry). | en_US
dc.language.iso | en | en_US
dc.publisher | MDPI | en_US
dc.subject | convolutional neural network (CNN) | en_US
dc.subject | neural network accelerator | en_US
dc.subject | neural processing unit (NPU) | en_US
dc.subject | CNN inference | en_US
dc.title | CENNA: Cost-Effective Neural Network Accelerator | en_US
dc.type | Article | en_US
dc.relation.no | 1 | -
dc.relation.volume | 9 | -
dc.identifier.doi | 10.3390/electronics9010134 | -
dc.relation.page | 1-19 | -
dc.relation.journal | ELECTRONICS | -
dc.contributor.googleauthor | Park, Sang-Soo | -
dc.contributor.googleauthor | Chung, Ki-Seok | -
dc.relation.code | 2020049669 | -
dc.sector.campus | S | -
dc.sector.daehak | COLLEGE OF ENGINEERING[S] | -
dc.sector.department | DEPARTMENT OF ELECTRONIC ENGINEERING | -
dc.identifier.pid | kchung | -
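The abstract describes a cost-centric matrix multiplication that combines Strassen's algorithm with naive multiplication. As a software-level illustration only (a minimal NumPy sketch of the standard Strassen 2×2 scheme, not the CENNA hardware datapath), the following shows how Strassen trades the naive eight multiplications for seven multiplications plus extra additions:

```python
import numpy as np

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with Strassen's algorithm:
    7 multiplications instead of the naive 8, at the cost of
    extra additions and subtractions."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    # The seven Strassen products
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # Recombine into the result matrix
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4,           m1 - m2 + m3 + m6]])
```

Applied blockwise, the same identities let an accelerator choose between Strassen and naive multiplication per block, which is the kind of trade-off (fewer multipliers vs. more adders) the abstract refers to.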



