
Full metadata record

DC Field | Value | Language
dc.contributor.author | 박정은 | -
dc.date.accessioned | 2024-07-08T04:15:09Z | -
dc.date.available | 2024-07-08T04:15:09Z | -
dc.date.issued | 2022-11-01 | -
dc.identifier.citation | EXPERT SYSTEMS WITH APPLICATIONS, v. 205, article no. 117792, page. 1-14 | en_US
dc.identifier.issn | 0957-4174 | en_US
dc.identifier.uri | https://www.sciencedirect.com/science/article/pii/S0957417422010594 | en_US
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/191187 | -
dc.description.abstract | As dynamic vision sensors can operate at low power while having a fast response, they can mitigate the disadvantages of gyro sensors when used for turning on mobile devices. Therefore, we propose an ultra-lightweight face activation neural network that combines handcrafted convolutional landmark filters extracted from facial features with randomly initialized trainable convolutional filters. Face activation is the task of identifying the presence or absence of a face intended to activate the mobile device. Our proposed model, F-LandmarkNet, is built in four steps. First, we construct customized landmark filters that can effectively identify numerous facial features. Second, F-LandmarkNet is constructed by using a convolutional layer that fuses handcrafted landmark filters and trainable convolutional filters. Third, a compact version is constructed by selecting only the four most influential face filters according to their importance. Finally, performance is improved through knowledge distillation. The fusion of handcrafted landmark filters and trainable convolutional filters is quite effective in extremely lightweight models. The classification accuracy of our proposed model is similar to that of existing lightweight convolutional neural network models, while its numbers of floating-point operations and parameters are markedly lower. Our model also runs faster than comparison models in a central-processing-unit environment. Thus, the proposed model shows high potential for use in actual mobile systems. | en_US
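The filter-level fusion described in the abstract (a single convolutional layer whose filter bank mixes fixed handcrafted landmark filters with randomly initialized trainable filters) can be sketched as follows. This is not the authors' code: the Sobel-style 3x3 filters, filter counts, and input size below are illustrative assumptions.

```python
import numpy as np

def conv2d_valid(img, kern):
    """Plain 2D cross-correlation with 'valid' padding."""
    H, W = img.shape
    kh, kw = kern.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

# Hypothetical handcrafted "landmark" filters (edge detectors stand in for
# the paper's facial-feature filters; these stay FIXED during training).
landmark_filters = np.stack([
    np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float),  # vertical edges
    np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float),  # horizontal edges
])

# Randomly initialized filters (these would receive gradient updates).
rng = np.random.default_rng(0)
trainable_filters = rng.normal(0.0, 0.1, size=(2, 3, 3))

# Filter-level fusion: both banks are concatenated into one layer's weights,
# so a single convolution produces handcrafted and learned feature maps.
fused_bank = np.concatenate([landmark_filters, trainable_filters], axis=0)

img = rng.normal(size=(8, 8))  # toy single-channel input
feature_maps = np.stack([conv2d_valid(img, k) for k in fused_bank])
print(feature_maps.shape)  # (4, 6, 6): 2 handcrafted + 2 trainable maps
```

In a training loop under this scheme, only the trainable half of `fused_bank` would be updated, which is one way the parameter count stays small.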
dc.description.sponsorship | This research was supported by the System LSI Business, Samsung Electronics Co., Ltd. It was also supported by the Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure and Transport (Grant 21CTAPC163730-01). | en_US
dc.language | en_US | en_US
dc.publisher | PERGAMON-ELSEVIER SCIENCE LTD | en_US
dc.relation.ispartofseries | v. 205, article no. 117792;1-14 | -
dc.subject | Ultra-lightweight face activation | en_US
dc.subject | Dynamic vision sensor | en_US
dc.subject | Filter fusion | en_US
dc.subject | Efficient convolutional neural network | en_US
dc.subject | Facial landmark | en_US
dc.subject | Knowledge distillation | en_US
dc.title | Ultra-lightweight face activation for dynamic vision sensor with convolutional filter-level fusion using facial landmarks | en_US
dc.type | Article | en_US
dc.identifier.doi | https://doi.org/10.1016/j.eswa.2022.117792 | en_US
dc.relation.journal | EXPERT SYSTEMS WITH APPLICATIONS | -
dc.contributor.googleauthor | Kim, Sungsoo | -
dc.contributor.googleauthor | Park, Jeongeun | -
dc.contributor.googleauthor | Yang, Donguk | -
dc.contributor.googleauthor | Shin, Dongyup | -
dc.contributor.googleauthor | Kim, Jungyeon | -
dc.contributor.googleauthor | Ryu, Hyunsurk Eric | -
dc.contributor.googleauthor | Kim, Ha Young | -
dc.relation.code | 2022038187 | -
dc.sector.campus | E | -
dc.sector.daehak | COLLEGE OF COMPUTING[E] | -
dc.sector.department | SCHOOL OF MEDIA, CULTURE, AND DESIGN TECHNOLOGY | -
dc.identifier.pid | parkje | -
Appears in Collections:
COLLEGE OF COMPUTING[E](소프트웨어융합대학) > MEDIA, CULTURE, AND DESIGN TECHNOLOGY(ICT융합학부) > Articles
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
