
Full metadata record

DC Field | Value | Language
dc.contributor.author | 이민식 (Lee, Minsik) | -
dc.date.accessioned | 2019-11-26T07:58:27Z | -
dc.date.available | 2019-11-26T07:58:27Z | -
dc.date.issued | 2019-05 | -
dc.identifier.citation | COMPUTER VISION AND IMAGE UNDERSTANDING, v. 182, Page. 64-70 | en_US
dc.identifier.issn | 1077-3142 | -
dc.identifier.issn | 1090-235X | -
dc.identifier.uri | https://www.sciencedirect.com/science/article/pii/S1077314218301462 | -
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/114798 | -
dc.description.abstract | In this paper, we address the problem of estimating a 3D human pose from a single image, which is important but difficult to solve due to factors such as self-occlusions, wild appearance changes, and the inherent ambiguity of inferring 3D structure from a 2D cue. These difficulties make the problem ill-posed, which has led to increasingly complex estimators being required to enhance performance. Moreover, most existing methods handle this problem with a single complex estimator, which may not be the best solution for 3D human pose estimation. To resolve this issue, we propose a multiple-partial-hypothesis-based framework for estimating a 3D human pose from a single image, which can be fine-tuned in an end-to-end fashion. We first select several joint groups from a human joint model using the proposed sampling scheme, and estimate the 3D pose of each joint group separately based on deep neural networks. The estimated poses are then aggregated into the final 3D pose using the proposed robust optimization formula. The overall procedure can be fine-tuned end-to-end, resulting in better estimation performance. In the experiments, the proposed framework achieves state-of-the-art performance on popular benchmark data sets, namely Human3.6M and HumanEva, which demonstrates its effectiveness. | en_US
dc.description.sponsorship | This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), Republic of Korea, funded by the Ministry of Science and ICT (NRF-2017R1A2B2006136). This work was also supported by 'The Cross-Ministry Giga KOREA Project' grant funded by the Korea government (MSIT), Republic of Korea (No. GK18P0300, Real-time 4D reconstruction of dynamic objects for ultra-realistic service). | en_US
dc.language.iso | en_US | en_US
dc.publisher | ACADEMIC PRESS INC ELSEVIER SCIENCE | en_US
dc.subject | 3D human pose estimation | en_US
dc.subject | Single-image-based 3D human pose estimation | en_US
dc.subject | Multiple-partial-hypothesis-based scheme | en_US
dc.title | Deep pose consensus networks | en_US
dc.type | Article | en_US
dc.relation.volume | 182 | -
dc.identifier.doi | 10.1016/j.cviu.2019.03.004 | -
dc.relation.page | 64-70 | -
dc.relation.journal | COMPUTER VISION AND IMAGE UNDERSTANDING | -
dc.contributor.googleauthor | Cha, Geonho | -
dc.contributor.googleauthor | Lee, Minsik | -
dc.contributor.googleauthor | Cho, Jungchan | -
dc.contributor.googleauthor | Oh, Songhwai | -
dc.relation.code | 2019002964 | -
dc.sector.campus | E | -
dc.sector.daehak | COLLEGE OF ENGINEERING SCIENCES[E] | -
dc.sector.department | DIVISION OF ELECTRICAL ENGINEERING | -
dc.identifier.pid | mleepaper | -
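The abstract describes a pipeline that estimates partial 3D poses for sampled joint groups and then aggregates them into one full pose. The minimal sketch below illustrates only the aggregation idea with a simple per-joint average of overlapping hypotheses; the paper's joint-group sampling scheme, network estimators, and robust optimization formula are not reproduced, and all names here are hypothetical.

```python
import numpy as np

def aggregate_partial_poses(partial_poses, joint_groups, num_joints):
    """Combine partial 3D pose hypotheses into one full pose.

    partial_poses: list of (len(group), 3) arrays, one per joint group.
    joint_groups:  list of joint-index lists into the full joint model.
    NOTE: this is a plain mean consensus over overlapping estimates,
    a stand-in for the robust optimization used in the paper.
    """
    acc = np.zeros((num_joints, 3))   # summed estimates per joint
    cnt = np.zeros(num_joints)        # how many hypotheses cover each joint
    for pose, group in zip(partial_poses, joint_groups):
        acc[group] += pose
        cnt[group] += 1
    # average; joints never covered stay at zero (cnt clamped to 1)
    return acc / np.maximum(cnt, 1)[:, None]

# Two overlapping joint groups over a hypothetical 4-joint model
groups = [[0, 1, 2], [1, 2, 3]]
hyps = [np.ones((3, 3)), 3 * np.ones((3, 3))]
full = aggregate_partial_poses(hyps, groups, 4)
# joints 1 and 2 are covered by both hypotheses, so they average to 2.0
```

In the paper's end-to-end setting, each partial hypothesis would come from a trained network and the averaging step would be replaced by their robust aggregation, so that gradients flow back through the consensus into the per-group estimators.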
Appears in Collections:
COLLEGE OF ENGINEERING SCIENCES[E](공학대학) > ELECTRICAL ENGINEERING(전자공학부) > Articles
Files in This Item:
There are no files associated with this item.

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
