
Full metadata record

DC Field | Value | Language
dc.contributor.author | 김기범 | -
dc.date.accessioned | 2022-04-06T00:26:41Z | -
dc.date.available | 2022-04-06T00:26:41Z | -
dc.date.issued | 2021-11 | -
dc.identifier.citation | PEERJ COMPUTER SCIENCE, v. 7, Page. 1-36 | en_US
dc.identifier.issn | 2376-5992 | -
dc.identifier.uri | https://doaj.org/article/27a4025cbc82419297e4590b08f1aa2f | -
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/169730 | -
dc.description.abstract | The study of human posture analysis and gait event detection from various types of input is a key contribution to the human life log. With the help of this research and these technologies, humans can save costs in terms of time and utility resources. In this paper, we present a robust approach to human posture analysis and gait event detection from complex video-based data. First, posture information, landmark information, and the human 2D skeleton mesh are extracted; using this information set, we reconstruct a 3D human model from the 2D data. Contextual features, namely degrees of freedom over detected body parts, joint angle information, periodic and non-periodic motion, and human motion direction flow, are then extracted. For feature mining, we apply a rule-based feature mining technique, and for gait event detection and classification, a deep learning-based CNN is applied over the MPII video pose, COCO, and PoseTrack datasets. For the MPII video pose dataset, we achieve a human landmark detection mean accuracy of 87.09% and a gait event recognition mean accuracy of 90.90%. For the COCO dataset, we achieve a human landmark detection mean accuracy of 87.36% and a gait event recognition mean accuracy of 89.09%. For the PoseTrack dataset, we achieve a human landmark detection mean accuracy of 87.72% and a gait event recognition mean accuracy of 88.18%. The proposed system's performance shows a significant improvement over existing state-of-the-art frameworks. | en_US
dc.description.sponsorship | This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (No. 2018R1D1A1A02085645). Also, this work was supported by the Korea Medical Device Development Fund grant funded by the Korea government (the Ministry of Science and ICT, the Ministry of Trade, Industry and Energy, the Ministry of Health & Welfare, the Ministry of Food and Drug Safety) (Project Number: 202012D05-02). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. | en_US
dc.language.iso | en | en_US
dc.publisher | PEERJ INC | en_US
dc.subject | 2D to 3D reconstruction | en_US
dc.subject | Convolutional neural network | en_US
dc.subject | Gait event classification | en_US
dc.subject | Human posture analysis | en_US
dc.subject | Landmark detection | en_US
dc.subject | Synthetic model | en_US
dc.subject | Electronic computers. Computer science | en_US
dc.subject | QA75.5-76.95 | en_US
dc.title | Syntactic model-based human body 3D reconstruction and event classification via association based features mining and deep learning | en_US
dc.type | Article | en_US
dc.relation.volume | 7 | -
dc.identifier.doi | 10.7717/peerj-cs.764 | -
dc.relation.page | 1-36 | -
dc.relation.journal | PEERJ COMPUTER SCIENCE | -
dc.contributor.googleauthor | Ghadi, Yazeed | -
dc.contributor.googleauthor | Akhter, Israr | -
dc.contributor.googleauthor | Alarfaj, Mohammed | -
dc.contributor.googleauthor | Jalal, Ahmad | -
dc.contributor.googleauthor | Kim, Kibum | -
dc.relation.code | 2021008886 | -
dc.sector.campus | E | -
dc.sector.daehak | COLLEGE OF COMPUTING[E] | -
dc.sector.department | SCHOOL OF MEDIA, CULTURE, AND DESIGN TECHNOLOGY | -
dc.identifier.pid | kibum | -
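
The abstract above lists joint angle information among the contextual features extracted from detected body parts. As a minimal, hypothetical sketch (not the authors' implementation), the angle at a joint can be computed from three 2D landmark coordinates, e.g. hip-knee-ankle:

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint b formed by landmarks a-b-c.

    Points are (x, y) image coordinates, e.g. hip, knee, ankle.
    Illustrative only; landmark names and this helper are assumptions,
    not taken from the paper.
    """
    # Vectors from the joint to the two neighboring landmarks
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(v1[0], v1[1])
    n2 = math.hypot(v2[0], v2[1])
    # Clamp to avoid domain errors from floating-point rounding
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_t))

# A fully extended leg (collinear hip, knee, ankle) gives ~180 degrees
print(round(joint_angle((0, 0), (0, 1), (0, 2))))  # 180
```

Per-frame angles like this, tracked over time, are the kind of signal that could feed the periodic/non-periodic motion features the abstract describes.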
Appears in Collections:
ETC[S] > 연구정보 (Research Information)
Files in This Item:
There are no files associated with this item.



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
