Full metadata record

DC Field: Value
dc.contributor.author: 김영훈 (Kim, Younghoon)
dc.date.accessioned: 2023-01-03T05:18:52Z
dc.date.available: 2023-01-03T05:18:52Z
dc.date.issued: 2020-02
dc.identifier.citation: CC 2020 - Proceedings of the 29th International Conference on Compiler Construction, pp. 74-84
dc.identifier.uri: https://dl.acm.org/doi/10.1145/3377555.3377900 (en_US)
dc.identifier.uri: https://repository.hanyang.ac.kr/handle/20.500.11754/178600
dc.description.abstract: We studied robust quantization of deep neural networks (DNNs) for embedded devices. Existing compression techniques often generate DNNs that are sensitive to external errors. Because embedded devices may be affected by external lights and outside weather, DNNs running on those devices must be robust to such errors. For robust quantization of DNNs, we formulate an optimization problem that finds the bit width for each layer minimizing the robustness loss. To efficiently find the solution, we design a dynamic programming based algorithm, called Qed. We also propose an incremental algorithm, Q∗, that quickly finds a reasonably robust quantization and then gradually improves it. We have evaluated Qed and Q∗ with three DNN models (LeNet, AlexNet, and VGG-16) and with Gaussian random errors and realistic errors. For comparison, we also evaluate universal quantization, which uses an equal bit width for all layers, and Deep Compression, a weight-sharing based compression technique. When tested with errors of increasing size, Qed most robustly gives correct inference output. Even if a DNN is optimized for robustness, its quantizations may not be robust unless Qed is used. Moreover, we evaluate Q∗ for its trade-off between execution time and robustness. In one tenth of Qed's execution time, Q∗ gives a quantization 98% as robust as the one produced by Qed. © 2020 Association for Computing Machinery. [A hypothetical sketch of this per-layer bit-width search appears after this record.]
dc.description.sponsorship: This work is supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2018R1D1A1A02086132), by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2013-0-00109, WiseKB: Big data based self-evolving knowledge base and reasoning platform), by the Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT (NRF-2016M3C4A7952635), and by the Ministry of Science, ICT & Future Planning (2017M3C4A7063570). The corresponding author is Jiwon Seo.
dc.language: en
dc.publisher: Association for Computing Machinery, Inc
dc.subject: Neural Network Quantization
dc.title: Robust quantization of deep neural networks
dc.type: Article
dc.identifier.doi: 10.1145/3377555.3377900
dc.relation.page: 74-84
dc.relation.journal: CC 2020 - Proceedings of the 29th International Conference on Compiler Construction
dc.contributor.googleauthor: Kim, Youngseok
dc.contributor.googleauthor: Lee, Junyeol
dc.contributor.googleauthor: Kim, Younghoon
dc.contributor.googleauthor: Seo, Jiwon
dc.sector.campus: E
dc.sector.daehak: 소프트웨어융합대학 (College of Software Convergence)
dc.sector.department: 인공지능학과 (Department of Artificial Intelligence)
dc.identifier.pid: nongaussian
Appears in Collections:
ETC[S] > ETC
Files in This Item:
There are no files associated with this item.
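
The abstract above formulates robust quantization as choosing a bit width for each layer so that a robustness loss is minimized, solved with a dynamic-programming algorithm (Qed) and refined incrementally by Q∗. The sketch below is a minimal, hypothetical illustration of that kind of per-layer bit-width search, written as a multiple-choice knapsack DP. It assumes an additive robustness loss, a precomputed per-layer loss table, and a total bit budget as the only constraint; the function dp_bit_widths, its inputs, and the toy numbers are illustrative assumptions, not the paper's actual formulation or implementation.

# Hypothetical sketch of a dynamic-programming bit-width search in the spirit of Qed.
# Assumptions (not stated in this record): the robustness loss is additive across
# layers, loss[l][b] gives the measured robustness loss of layer l quantized to b bits,
# and the only constraint is a total bit budget sum(params[l] * b_l) <= budget.

from typing import Dict, List, Tuple

def dp_bit_widths(
    loss: List[Dict[int, float]],   # loss[l][b]: robustness loss of layer l at b bits
    params: List[int],              # params[l]: number of weights in layer l
    budget: int,                    # total bit budget across all weights
) -> Tuple[float, List[int]]:
    """Return (minimum total robustness loss, chosen bit width per layer)."""
    # best[c] = (total loss, bit choices so far) using the layers processed so far
    # and exactly c bits; infeasible states are simply absent from the dict.
    best: Dict[int, Tuple[float, List[int]]] = {0: (0.0, [])}
    for l, layer_loss in enumerate(loss):
        nxt: Dict[int, Tuple[float, List[int]]] = {}
        for used, (acc, choice) in best.items():
            for b, lb in layer_loss.items():
                cost = used + params[l] * b
                if cost > budget:
                    continue  # exceeds the bit budget, prune this state
                cand = (acc + lb, choice + [b])
                # Keep only the lowest-loss assignment for each exact bit usage.
                if cost not in nxt or cand[0] < nxt[cost][0]:
                    nxt[cost] = cand
        best = nxt
        if not best:
            raise ValueError("budget too small for any bit-width assignment")
    return min(best.values(), key=lambda t: t[0])

# Toy usage: three layers, candidate widths 2/4/8 bits, made-up loss numbers.
if __name__ == "__main__":
    loss = [
        {2: 0.9, 4: 0.3, 8: 0.05},
        {2: 0.7, 4: 0.2, 8: 0.02},
        {2: 0.5, 4: 0.1, 8: 0.01},
    ]
    params = [1000, 2000, 500]
    total_loss, bits = dp_bit_widths(loss, params, budget=4 * sum(params))
    print(total_loss, bits)

In the same spirit, the incremental Q∗ described in the abstract could start from a coarse feasible assignment and repeatedly spend remaining budget where it reduces the loss most, trading exactness for speed; that refinement loop is not shown here.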

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
