
Full metadata record

DC Field | Value | Language
dc.contributor.author | 최정욱 | -
dc.date.accessioned | 2022-11-02T06:59:30Z | -
dc.date.available | 2022-11-02T06:59:30Z | -
dc.date.issued | 2021-02 | -
dc.identifier.citation | 35th AAAI Conference on Artificial Intelligence, v. 35, pages 6794-6802 | en_US
dc.identifier.issn | 2159-5399; 2374-3468 | en_US
dc.identifier.uri | https://ojs.aaai.org/index.php/AAAI/article/view/16839 | en_US
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/176246 | -
dc.description.abstract | The quantization of deep neural networks (QDNNs) has been actively studied for deployment on edge devices. Recent studies employ the knowledge distillation (KD) method to improve the performance of quantized networks. In this study, we propose stochastic precision ensemble training for QDNNs (SPEQ). SPEQ is a knowledge distillation training scheme; however, the teacher is formed by sharing the model parameters of the student network. We obtain the soft labels of the teacher by stochastically changing the bit precision of the activations at each layer of the forward-pass computation. The student model is trained with these soft labels to reduce the activation quantization noise. The cosine similarity loss is employed, instead of the KL divergence, for KD training. As the teacher model changes continuously through random bit-precision assignment, it exploits the effect of stochastic ensemble KD. SPEQ outperforms existing quantization training methods on various tasks, such as image classification, question answering, and transfer learning, without the need for cumbersome teacher networks. | en_US
dc.description.sponsorship | This work was supported in part by the Samsung Advanced Institute of Technology through the Neural Processing Research Center (NPRC) at Seoul National University and by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2018R1A2A1A05079504). This work was also supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1F1A1076233). | en_US
dc.language | en | en_US
dc.publisher | Association for the Advancement of Artificial Intelligence | en_US
dc.title | Stochastic Precision Ensemble: Self-Knowledge Distillation for Quantized Deep Neural Networks | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.1609/aaai.v35i8.16839 | en_US
dc.relation.page | 0-0 | -
dc.contributor.googleauthor | Boo, Yoonho | -
dc.contributor.googleauthor | Shin, Sungho | -
dc.contributor.googleauthor | Choi, Jungwook | -
dc.contributor.googleauthor | Sung, Wonyong | -
dc.relation.code | 20210007 | -
dc.sector.campus | S | -
dc.sector.daehak | COLLEGE OF ENGINEERING[S] | -
dc.sector.department | SCHOOL OF ELECTRONIC ENGINEERING | -
dc.identifier.pid | choij | -
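Note on the method described in the abstract above: the code below is a minimal, illustrative PyTorch sketch of an SPEQ-style training step. It assumes a toy MLP, uniform activation quantization with a straight-through estimator, a small set of candidate bit widths, and a loss weighting alpha; the model, quantizer, bit-width candidates, and weighting are assumptions for illustration, not the configuration used in the paper, and the exact cosine-similarity formulation in the paper may differ.

import random
import torch
import torch.nn as nn
import torch.nn.functional as F

def quantize_activation(x, bits):
    # Uniform quantization of activations assumed bounded to [0, 1], with a
    # straight-through estimator so gradients pass through the rounding step.
    if bits >= 32:
        return x
    levels = 2 ** bits - 1
    x = x.clamp(0.0, 1.0)
    q = torch.round(x * levels) / levels
    return x + (q - x).detach()

class QuantMLP(nn.Module):
    # Toy MLP whose activation bit width can be chosen per layer at call time.
    def __init__(self, in_dim=784, hidden=256, classes=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.fc3 = nn.Linear(hidden, classes)

    def forward(self, x, act_bits=(2, 2)):
        h = quantize_activation(torch.sigmoid(self.fc1(x)), act_bits[0])
        h = quantize_activation(torch.sigmoid(self.fc2(h)), act_bits[1])
        return self.fc3(h)

def speq_step(model, optimizer, x, y,
              student_bits=(2, 2), candidate_bits=(2, 4, 8), alpha=1.0):
    # One SPEQ-style step: the "teacher" is the same model (shared parameters)
    # run with a randomly drawn activation bit width per layer; its soft labels
    # guide the low-precision student through a cosine-similarity loss.
    model.train()

    with torch.no_grad():
        teacher_bits = tuple(random.choice(candidate_bits) for _ in student_bits)
        teacher_logits = model(x, act_bits=teacher_bits)

    student_logits = model(x, act_bits=student_bits)

    kd_loss = 1.0 - F.cosine_similarity(student_logits, teacher_logits, dim=1).mean()
    ce_loss = F.cross_entropy(student_logits, y)
    loss = ce_loss + alpha * kd_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage on random data (shapes are illustrative):
model = QuantMLP()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.rand(32, 784), torch.randint(0, 10, (32,))
loss = speq_step(model, optimizer, x, y)

Because the teacher shares the student's parameters and redraws its per-layer precision at every step, the soft labels behave like an ensemble of quantized teachers without a separate teacher network, which is the self-distillation effect the abstract describes.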
Appears in Collections:
COLLEGE OF ENGINEERING[S](공과대학) > ELECTRONIC ENGINEERING(융합전자공학부) > Articles
Files in This Item:
There are no files associated with this item.

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
