
Full metadata record

DC Field: Value [Language]
dc.contributor.author: 김기범 (Kim, Kibum)
dc.date.accessioned: 2022-08-17T01:33:08Z
dc.date.available: 2022-08-17T01:33:08Z
dc.date.issued: 2021-07
dc.identifier.citation: IEEE ACCESS, v. 9, Page. 111249-111266 [en_US]
dc.identifier.issn: 2169-3536
dc.identifier.uri: https://doaj.org/article/beeae16cd98842b991d1f2f699bc8b9d
dc.identifier.uri: https://repository.hanyang.ac.kr/handle/20.500.11754/172489
dc.description.abstract: Human-Object Interaction (HOI) recognition, due to its significance in many computer vision-based applications, requires in-depth and meaningful details from image sequences. Incorporating semantics into scene understanding has led to a deeper understanding of human-centric actions. Therefore, in this research work, we propose a semantic HOI recognition system based on multi-vision sensors. In the proposed system, RGB and depth images, de-noised via Bilateral Filtering (BLF), are segmented into multiple clusters using the Simple Linear Iterative Clustering (SLIC) algorithm. The skeleton is then extracted from the segmented RGB and depth images via the Euclidean Distance Transform (EDT). Human joints, extracted from the skeleton, provide the annotations for accurate pixel-level labeling. An elliptical human model is then generated via a Gaussian Mixture Model (GMM). A Conditional Random Field (CRF) model is trained to assign a specific label to each pixel of the different human body parts and the interaction object. Two types of semantic features are extracted from each labeled human body part and labeled object: fiducial points and 3D point clouds. Feature descriptors are quantized using Fisher's Linear Discriminant Analysis (FLDA) and classified using K-ary Tree Hashing (KATH). In the experimentation phase, the recognition accuracy achieved is 92.88% on the Sports dataset, 93.5% on the Sun Yat-Sen University (SYSU) 3D HOI dataset, and 94.16% on the Nanyang Technological University (NTU) RGB+D dataset. The proposed system is validated via extensive experimentation and should be applicable to many computer vision-based applications such as healthcare monitoring, security systems, and assisted living. [en_US] (An illustrative code sketch of the first pipeline stages follows this record.)
dc.description.sponsorship: This work was supported in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF) under Grant 2018R1D1A1A02085645, in part by the Korea Medical Device Development Fund through the Korean Government (the Ministry of Science and ICT; the Ministry of Trade, Industry and Energy; the Ministry of Health and Welfare; and the Ministry of Food and Drug Safety) under Grant 202012D05-02, and in part by Hanyang University under Grant 201800000000647. [en_US]
dc.language.iso: en [en_US]
dc.publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC [en_US]
dc.subject: 3D point cloud [en_US]
dc.subject: fiducial points [en_US]
dc.subject: human-object interaction [en_US]
dc.subject: pixel labeling [en_US]
dc.subject: semantic segmentation [en_US]
dc.subject: super-pixels [en_US]
dc.subject: Electrical engineering. Electronics. Nuclear engineering [en_US]
dc.subject: TK1-9971 [en_US]
dc.title: Semantic Recognition of Human-Object Interactions via Gaussian-Based Elliptical Modeling and Pixel-Level Labeling [en_US]
dc.type: Article [en_US]
dc.relation.volume: 9
dc.identifier.doi: 10.1109/ACCESS.2021.3101716
dc.relation.page: 111249-111266
dc.relation.journal: IEEE ACCESS
dc.contributor.googleauthor: Khalid, Nida
dc.contributor.googleauthor: Ghadi, Yazeed Yasin
dc.contributor.googleauthor: Gochoo, Munkhjargal
dc.contributor.googleauthor: Jalal, Ahmad
dc.contributor.googleauthor: Kim, Kibum
dc.relation.code: 2021000011
dc.sector.campus: E
dc.sector.daehak: COLLEGE OF COMPUTING[E]
dc.sector.department: SCHOOL OF MEDIA, CULTURE, AND DESIGN TECHNOLOGY
dc.identifier.pid: kibum
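
The abstract above names a concrete front-end pipeline: BLF de-noising, SLIC super-pixel clustering, and EDT-based skeleton extraction. The following Python sketch illustrates those first stages only, using OpenCV, scikit-image, and SciPy. The sample image, all parameter values (d, sigmaColor, sigmaSpace, n_segments, compactness), and the synthetic silhouette mask are assumptions chosen for demonstration; they are not taken from the paper and this is not the authors' implementation.

# Minimal sketch, assuming OpenCV + scikit-image + SciPy, of the
# BLF -> SLIC -> EDT stages described in the abstract. Parameters
# are illustrative guesses, not values from the paper.
import cv2
import numpy as np
from scipy import ndimage
from skimage.data import astronaut   # stand-in RGB frame
from skimage.segmentation import slic

rgb = astronaut()  # sample 512x512x3 uint8 image

# Stage 1: edge-preserving de-noising via Bilateral Filtering (BLF).
# d=9 and sigma values of 75 are common defaults, not the paper's.
denoised = cv2.bilateralFilter(rgb, d=9, sigmaColor=75, sigmaSpace=75)

# Stage 2: SLIC clustering into super-pixels; n_segments and
# compactness are illustrative.
labels = slic(denoised, n_segments=200, compactness=10, start_label=1)

# Stage 3: Euclidean Distance Transform (EDT) on a hypothetical
# binary silhouette; the ridge of the distance map approximates a
# skeleton. A crude synthetic mask stands in for the segmented human.
mask = np.zeros((120, 120), dtype=bool)
mask[20:100, 50:70] = True
edt = ndimage.distance_transform_edt(mask)

print("super-pixel clusters:", labels.max(), "| max EDT depth:", edt.max())

In the paper's pipeline the EDT would be applied to the segmented human region rather than a synthetic mask, and the extracted skeleton would feed the joint annotation and GMM-based elliptical modeling; the later stages (CRF pixel labeling, FLDA quantization, KATH classification) are not sketched here.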
Appears in Collections:
ETC[S] > 연구정보 (Research Information)
Files in This Item:
There are no files associated with this item.



