Full metadata record

DC Field: Value (Language)
dc.contributor.advisor: 고민삼
dc.contributor.author: 이현아
dc.date.accessioned: 2023-05-11T11:54:11Z
dc.date.available: 2023-05-11T11:54:11Z
dc.date.issued: 2023-02
dc.identifier.uri: http://hanyang.dcollection.net/common/orgView/200000655250 (en_US)
dc.identifier.uri: https://repository.hanyang.ac.kr/handle/20.500.11754/179803
dc.description.abstract: Facial expression is the most natural and direct way to convey people's inner emotions and thoughts. Among the basic facial expressions, the smile is one of the most important and a potent social tool. Although facial expressions and smiles are essential to human life, some people find it difficult to make facial expressions or to smile for various reasons. They practice making their expressions look natural, usually either on their own without any guide or at a clinic. In these situations, it is important to know the current state of the facial expression, identify which facial parts need to be moved, and figure out how to move them. Therefore, a guide is needed that indicates which facial parts to move and how to move them. To this end, the current facial expression must first be detected. Then, the parts of the face that need to change in order to reach the target expression must be localized; this makes it possible to generate a more precise guide on how to move those parts. In the computer vision domain, facial expression recognition has been studied extensively and achieves excellent performance. However, no prior studies have considered visual guides that help people practice facial expressions. Thus, in this paper, we propose a new approach for smile detection and localization that guides which facial parts to move to form a smile. Our proposed method builds on reconstruction-based anomaly detection with a generative model, using a convolutional variational autoencoder (cVAE) for reconstruction, and it detects a smile from the difference between the reconstructed image and the input. In addition, when the prediction is not a smile, the facial areas that must move to form a smile are visualized as a heat map, which we call smile localization.
Unlike previous supervised-learning-based smile detection studies, our proposed method can identify which facial parts to move to form a smile. Experimental results demonstrate that our approach achieves promising performance in smile detection. Moreover, our smile localization approach is explainable and deliberate, and it has the potential to generate guidance on how to move the facial parts to turn the current expression into a smile.
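The detection-and-localization step described in the abstract can be sketched as follows: given an input face and its cVAE reconstruction, the per-pixel reconstruction error yields both an anomaly score (low error for the trained "smile" class, high error otherwise) and a normalized error map used as the localization heat map. This is a minimal numpy-only illustration, not the thesis implementation; the function names, the fixed `threshold`, and the use of mean absolute error are assumptions for demonstration.

```python
import numpy as np

def reconstruction_error(x, x_rec):
    """Per-pixel absolute error between input and cVAE reconstruction,
    plus the mean error used as the anomaly score."""
    diff = np.abs(x.astype(np.float32) - x_rec.astype(np.float32))
    return diff, float(diff.mean())

def detect_and_localize(x, x_rec, threshold=0.1):
    """Label the input as smile / non-smile from the anomaly score and
    return the normalized error map as a localization heat map.
    `threshold` is a hypothetical value chosen for illustration."""
    diff, score = reconstruction_error(x, x_rec)
    # Low error: the generative model (trained on smiles) reconstructs
    # the face well, so the input is predicted to be a smile.
    label = "smile" if score <= threshold else "non-smile"
    # Normalize the error map to [0, 1]; high values mark the facial
    # regions that would have to change to form a smile.
    heatmap = diff / (diff.max() + 1e-8)
    return label, score, heatmap
```

In practice `x_rec` would come from the trained cVAE decoder, and the heat map would be overlaid on the face image; the sketch only shows how a single error map serves both the detection decision and the localization output.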
dc.publisher: 한양대학교 (Hanyang University)
dc.title: Explainable Smile Detection Through Semi-Supervised Learning with Generative Networks
dc.title.alternative: 생성적 신경망 기반 준지도학습을 통한 설명 가능한 미소 탐지
dc.type: Theses
dc.contributor.googleauthor: 이현아
dc.sector.campus: S
dc.sector.daehak: 대학원 (Graduate School)
dc.sector.department: 인공지능융합학과 (Department of Applied Artificial Intelligence)
dc.description.degree: Master
Appears in Collections:
GRADUATE SCHOOL[S](대학원) > APPLIED ARTIFICIAL INTELLIGENCE(인공지능융합학과) > Theses(Master)
Files in This Item:
There are no files associated with this item.



