Ultra-lightweight face activation for dynamic vision sensor with convolutional filter-level fusion using facial landmarks

Title
Ultra-lightweight face activation for dynamic vision sensor with convolutional filter-level fusion using facial landmarks
Author
박정은
Keywords
Ultra-lightweight face activation; Dynamic vision sensor; Filter fusion; Efficient convolutional neural network; Facial landmark; Knowledge distillation
Issue Date
2022-11-01
Publisher
PERGAMON-ELSEVIER SCIENCE LTD
Citation
EXPERT SYSTEMS WITH APPLICATIONS, v. 205, article no. 117792, pp. 1-14
Abstract
Because dynamic vision sensors operate at low power and respond quickly, they can mitigate the disadvantages of gyro sensors when used to turn on mobile devices. We therefore propose an ultra-lightweight face activation neural network that combines handcrafted convolutional landmark filters, extracted from facial features, with randomly initialized trainable convolutional filters. Face activation is the task of detecting whether a face intending to activate the mobile device is present. The proposed model, F-LandmarkNet, is built in four steps. First, we construct customized landmark filters that can effectively capture numerous facial features. Second, F-LandmarkNet is constructed around a convolutional layer that fuses the handcrafted landmark filters with trainable convolutional filters. Third, a compact version is obtained by selecting only the four most influential face filters according to their importance. Finally, performance is improved through knowledge distillation. The fusion of handcrafted landmark filters and trainable convolutional filters proves highly effective in extremely lightweight models. The classification accuracy of the proposed model is comparable to that of existing lightweight convolutional neural network models, while the numbers of floating-point operations and parameters are markedly lower. The model also runs faster than the comparison models in a central-processing-unit environment. Thus, the proposed model shows high potential for use in real mobile systems.
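
The abstract's central idea is a convolutional layer whose filter bank mixes fixed, handcrafted landmark filters with randomly initialized trainable filters. The following is a minimal sketch of that filter-level fusion, not the authors' implementation: the class name FusedLandmarkConv, the way the landmark filters are supplied as a precomputed tensor, and all shapes and filter counts are illustrative assumptions.

```python
# Minimal sketch (assumption, not the paper's code) of filter-level fusion:
# a single convolution whose weight bank concatenates fixed, handcrafted
# "landmark" filters with randomly initialized trainable filters.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FusedLandmarkConv(nn.Module):
    """Conv layer mixing fixed handcrafted filters with trainable ones."""

    def __init__(self, landmark_filters: torch.Tensor, n_trainable: int):
        super().__init__()
        # landmark_filters: (n_fixed, in_channels, k, k), precomputed from facial
        # landmarks; stored as a buffer so the optimizer never updates them.
        self.register_buffer("fixed_weight", landmark_filters)
        n_fixed, in_ch, k, _ = landmark_filters.shape
        # Randomly initialized trainable filters of the same spatial size.
        self.trainable_weight = nn.Parameter(torch.randn(n_trainable, in_ch, k, k) * 0.01)
        self.bias = nn.Parameter(torch.zeros(n_fixed + n_trainable))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Filter-level fusion: handcrafted and learned filters form one bank,
        # so a single convolution yields both kinds of feature maps.
        weight = torch.cat([self.fixed_weight, self.trainable_weight], dim=0)
        return F.conv2d(x, weight, self.bias, stride=1, padding=weight.shape[-1] // 2)


if __name__ == "__main__":
    # Four 7x7 single-channel "landmark" filters (placeholders here) fused with
    # four trainable filters, applied to an event-frame-like input.
    fixed = torch.randn(4, 1, 7, 7)
    layer = FusedLandmarkConv(fixed, n_trainable=4)
    out = layer(torch.randn(2, 1, 64, 64))
    print(out.shape)  # torch.Size([2, 8, 64, 64])
```

Under this sketch, pruning to the "four most influential face filters" would amount to keeping only the highest-importance rows of the fused weight bank, and knowledge distillation would be applied afterwards as a separate training objective.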
URI
https://www.sciencedirect.com/science/article/pii/S0957417422010594
https://repository.hanyang.ac.kr/handle/20.500.11754/191187
ISSN
0957-4174
DOI
https://doi.org/10.1016/j.eswa.2022.117792
Appears in Collections:
COLLEGE OF COMPUTING[E](소프트웨어융합대학) > MEDIA, CULTURE, AND DESIGN TECHNOLOGY(ICT융합학부) > Articles
Files in This Item:
There are no files associated with this item.