Full metadata record

DC Field | Value | Language
dc.contributor.author | 안용한 (Ahn, Yonghan) | -
dc.date.accessioned | 2023-12-22T01:51:57Z | -
dc.date.available | 2023-12-22T01:51:57Z | -
dc.date.issued | 2023-08 | -
dc.identifier.citation | Sensors, v. 23, no. 15, article no. 6997, pp. 1-20 | -
dc.identifier.issn | 1424-8220; 1424-3210 | -
dc.identifier.uri | https://www.mdpi.com/1424-8220/23/15/6997 | en_US
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/187843 | -
dc.description.abstract | As the use of construction robots continues to increase, ensuring safety and productivity while working alongside human workers becomes crucial. To prevent collisions, robots must recognize human behavior in close proximity. However, single RGB or RGB-depth cameras have limitations, such as detection failure, sensor malfunction, occlusions, unconstrained lighting, and motion blur. Therefore, this study proposes a multiple-camera approach for human activity recognition during human–robot collaborative activities in construction. The proposed approach employs a particle filter to estimate the 3D human pose by fusing 2D joint locations extracted from multiple cameras, and applies a long short-term memory (LSTM) network to recognize ten activities associated with human–robot collaboration tasks in construction. The study compared the performance of human activity recognition models using one, two, three, and four cameras. The results showed that using multiple cameras enhances recognition performance, providing a more accurate and reliable means of identifying and differentiating between various activities. The results of this study are expected to contribute to the advancement of human activity recognition and its utilization in human–robot collaboration in construction. © 2023 by the authors. | -
dc.description.sponsorship | This work was supported by a Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government (MOTIE) (20202020800030, Development of Smart Hybrid Envelope Systems for Zero Energy Buildings through Holistic Performance Test and Evaluation Methods and Fields Verifications). | -
dc.language | en | -
dc.publisher | Multidisciplinary Digital Publishing Institute (MDPI) | -
dc.subject | human activity recognition | -
dc.subject | human pose estimation | -
dc.subject | long short-term memory | -
dc.subject | multiple cameras | -
dc.subject | particle filter | -
dc.title | Multi-Camera-Based Human Activity Recognition for Human–Robot Collaboration in Construction | -
dc.type | Article | -
dc.relation.no | 15 | -
dc.relation.volume | 23 | -
dc.identifier.doi | 10.3390/s23156997 | -
dc.relation.page | 1-20 | -
dc.relation.journal | Sensors | -
dc.contributor.googleauthor | Jang, Youjin | -
dc.contributor.googleauthor | Jeong, Inbae | -
dc.contributor.googleauthor | Younesi Heravi, Moein | -
dc.contributor.googleauthor | Sarkar, Sajib | -
dc.contributor.googleauthor | Shin, Hyunkyu | -
dc.contributor.googleauthor | Ahn, Yonghan | -
dc.sector.campus | E | -
dc.sector.daehak | 공학대학 (College of Engineering) | -
dc.sector.department | 건축공학전공 (Major in Architectural Engineering) | -
dc.identifier.pid | yhahn | -
dc.identifier.article | 6997 | -

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
