
Full metadata record

DC Field | Value | Language
dc.contributor.author | 신현철 | -
dc.date.accessioned | 2021-12-23T02:29:48Z | -
dc.date.available | 2021-12-23T02:29:48Z | -
dc.date.issued | 2021-03 | -
dc.identifier.citation | ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING, Page. 1-10 | en_US
dc.identifier.issn | 2193-567X | -
dc.identifier.issn | 2191-4281 | -
dc.identifier.uri | https://link.springer.com/article/10.1007/s13369-021-05455-4 | -
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/166901 | -
dc.description.abstract | Owing to their good performance, deep Convolutional Neural Networks (CNNs) are rapidly rising in popularity across a broad range of applications. Since high-accuracy CNNs are both computation intensive and memory intensive, many researchers have shown significant interest in accelerator design. Furthermore, the AI chip market is growing, and competition over the performance, cost, and power consumption of artificial intelligence SoC designs is intensifying. It is therefore important to develop design techniques and platforms that enable the efficient design of optimized AI architectures satisfying given specifications within a short design time. In this research, we have developed design space exploration techniques and environments for the optimal design of the overall system, including computing modules and memories. Our current design platform is built using the NVIDIA Deep Learning Accelerator as the computing model, SRAM as a buffer, and GDDR6 DRAM as off-chip memory. We also developed a program to estimate the processing time of a given neural network. By varying both the on-chip SRAM size and the computing module size, a designer can explore the design space efficiently and then choose the optimal architecture, that is, the one with minimal cost that satisfies the performance specification (a sketch of this exploration loop is given after the record below). To illustrate the operation of the design platform, two well-known deep CNNs are used: YOLOv3 and Faster R-CNN. This technology can be used to explore and optimize the hardware architectures of CNNs so that cost is minimized. | en_US
dc.language.iso | en_US | en_US
dc.publisher | SPRINGER HEIDELBERG | en_US
dc.title | Hardware Architecture Exploration for Deep Neural Networks | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.1007/s13369-021-05455-4 | -
dc.relation.page | 1-10 | -
dc.relation.journal | ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING | -
dc.contributor.googleauthor | Zheng, Wenqi | -
dc.contributor.googleauthor | Zhao, Yangyi | -
dc.contributor.googleauthor | Chen, Yunfan | -
dc.contributor.googleauthor | Park, Jinhong | -
dc.contributor.googleauthor | Shin, Hyunchul | -
dc.relation.code | 2021002132 | -
dc.sector.campus | E | -
dc.sector.daehak | COLLEGE OF ENGINEERING SCIENCES[E] | -
dc.sector.department | DIVISION OF ELECTRICAL ENGINEERING | -
dc.identifier.pid | shin | -
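
The exploration flow the abstract describes, sweeping the on-chip SRAM size and the computing-module size, estimating the processing time of a target network for each configuration, and keeping the cheapest configuration that meets the performance specification, can be sketched as follows. This is a minimal illustration only: every function name, numeric parameter, and the latency and cost models below are hypothetical stand-ins, not the paper's NVDLA-based estimator.

    # Minimal sketch of the design space exploration loop described in the
    # abstract. All names, parameters, and models here are hypothetical
    # placeholders; the paper's actual estimator models an NVDLA-style
    # computing module with an SRAM buffer and GDDR6 off-chip DRAM.
    from itertools import product

    PERF_TARGET_MS = 100.0  # assumed performance specification for one inference

    def estimate_latency_ms(sram_kb: int, mac_units: int) -> float:
        # Toy stand-in for the paper's processing-time estimation program:
        # more MAC units shorten compute time, and a larger on-chip buffer
        # reduces off-chip (DRAM) traffic time.
        compute_ms = 4000.0 / mac_units
        dram_ms = 200.0 / (sram_kb / 128.0)
        return compute_ms + dram_ms

    def cost(sram_kb: int, mac_units: int) -> float:
        # Toy cost model: cost grows with both SRAM size and compute size.
        return 0.05 * sram_kb + 1.0 * mac_units

    best = None
    for sram_kb, mac_units in product([128, 256, 512, 1024], [32, 64, 128, 256]):
        if estimate_latency_ms(sram_kb, mac_units) <= PERF_TARGET_MS:
            if best is None or cost(sram_kb, mac_units) < cost(*best):
                best = (sram_kb, mac_units)

    print("Minimal-cost configuration meeting the spec:", best)

In the paper, the same loop would be driven by the authors' processing-time estimator and evaluated on networks such as YOLOv3 and Faster R-CNN.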
Appears in Collections:
COLLEGE OF ENGINEERING SCIENCES[E](공학대학) > ELECTRICAL ENGINEERING(전자공학부) > Articles
Files in This Item:
There are no files associated with this item.

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
