
Full metadata record

DC Field | Value | Language
dc.contributor.author | 임종우 | -
dc.date.accessioned | 2021-03-31T01:54:38Z | -
dc.date.available | 2021-03-31T01:54:38Z | -
dc.date.issued | 2020-01 | -
dc.identifier.citation | INTERNATIONAL JOURNAL OF COMPUTER VISION, v. 128, no. 1, page. 96-120 | en_US
dc.identifier.issn | 0920-5691 | -
dc.identifier.issn | 1573-1405 | -
dc.identifier.uri | https://link.springer.com/article/10.1007%2Fs11263-019-01212-1 | -
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/160988 | -
dc.description.abstract | Multi-face tracking in unconstrained videos is a challenging problem as faces of one person often can appear drastically different in multiple shots due to significant variations in scale, pose, expression, illumination, and make-up. Existing multi-target tracking methods often use low-level features which are not sufficiently discriminative for identifying faces with such large appearance variations. In this paper, we tackle this problem by learning discriminative, video-specific face representations using convolutional neural networks (CNNs). Unlike existing CNN-based approaches which are only trained on large-scale face image datasets offline, we automatically generate a large number of training samples using the contextual constraints for a given video, and further adapt the pre-trained face CNN to the characters in the specific videos using discovered training samples. The embedding feature space is fine-tuned so that the Euclidean distance in the space corresponds to the semantic face similarity. To this end, we devise a symmetric triplet loss function which optimizes the network more effectively than the conventional triplet loss. With the learned discriminative features, we apply an EM clustering algorithm to link tracklets across multiple shots to generate the final trajectories. We extensively evaluate the proposed algorithm on two sets of TV sitcoms and YouTube music videos, analyze the contribution of each component, and demonstrate significant performance improvement over existing techniques. | en_US
dc.description.sponsorship | The work is supported by National Basic Research Program of China (973 Program, 2015CB351705), National Key Research and Development Program of China (2017YFA0700805), NSFC (61703344), Office of Naval Research (N0014-16-1-2314), Ministry of Science and ICT of Korea (NRF-2017R1A2B4011928 and Next-Generation Information Computing Development program NRF-2017M3C4A7069369), NSF CRII (1755785), NSF CAREER (1149783) and gifts from Adobe, Panasonic, NEC, and NVIDIA. | en_US
dc.language.iso | en | en_US
dc.publisher | SPRINGER | en_US
dc.subject | Face tracking | en_US
dc.subject | Transfer learning | en_US
dc.subject | Convolutional neural networks | en_US
dc.subject | Triplet loss | en_US
dc.title | Tracking Persons-of-Interest via Unsupervised Representation Adaptation | en_US
dc.type | Article | en_US
dc.relation.no | 1 | -
dc.relation.volume | 128 | -
dc.identifier.doi | 10.1007/s11263-019-01212-1 | -
dc.relation.page | 96-120 | -
dc.relation.journal | INTERNATIONAL JOURNAL OF COMPUTER VISION | -
dc.contributor.googleauthor | Zhang, Shun | -
dc.contributor.googleauthor | Huang, Jia-Bin | -
dc.contributor.googleauthor | Lim, Jongwoo | -
dc.contributor.googleauthor | Gong, Yihong | -
dc.contributor.googleauthor | Wang, Jinjun | -
dc.contributor.googleauthor | Ahuja, Narendra | -
dc.contributor.googleauthor | Yang, Ming-Hsuan | -
dc.relation.code | 2020053839 | -
dc.sector.campus | S | -
dc.sector.daehak | COLLEGE OF ENGINEERING[S] | -
dc.sector.department | DEPARTMENT OF COMPUTER SCIENCE | -
dc.identifier.pid | jlim | -
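The abstract describes a symmetric triplet loss said to optimize the embedding more effectively than the conventional triplet loss. As an illustration only — the paper's exact formulation may differ — here is a minimal NumPy sketch of one symmetric variant, which penalizes the positive-pair distance against the negative's squared distance to both the anchor and the positive:

```python
import numpy as np

def symmetric_triplet_loss(anchor, positive, negative, margin=1.0):
    """Hedged sketch of a symmetric triplet loss (illustrative; the
    published formulation may differ). Squared Euclidean distances are
    used so that the negative is pushed away from BOTH members of the
    positive pair, not just the anchor."""
    d_ap = np.sum((anchor - positive) ** 2)   # positive pair distance
    d_an = np.sum((anchor - negative) ** 2)   # anchor vs. negative
    d_pn = np.sum((positive - negative) ** 2) # positive vs. negative
    # Pull the pair together while repelling the negative symmetrically.
    return max(0.0, 2.0 * d_ap - d_an - d_pn + 2.0 * margin)
```

Compared with the conventional hinge loss max(0, d_ap - d_an + margin), which exerts gradient pressure only between the anchor and the negative, this symmetric form distributes that pressure over both the anchor and the positive.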
Appears in Collections:
COLLEGE OF ENGINEERING[S](공과대학) > COMPUTER SCIENCE(컴퓨터소프트웨어학부) > Articles
Files in This Item:
There are no files associated with this item.

