
Full metadata record

DC Field | Value | Language
dc.contributor.author | 한상욱 | -
dc.date.accessioned | 2018-03-23T05:40:40Z | -
dc.date.available | 2018-03-23T05:40:40Z | -
dc.date.issued | 2013-11 | -
dc.identifier.citation | Automation in Construction, 2013, 35, p. 131-141 | en_US
dc.identifier.issn | 0926-5805 | -
dc.identifier.issn | 1872-7891 | -
dc.identifier.uri | http://apps.webofknowledge.com/full_record.do?product=WOS&search_mode=GeneralSearch&qid=50&SID=F22ft5AY9lV5zE8olqB&page=1&doc=1 | -
dc.identifier.uri | http://hdl.handle.net/20.500.11754/51309 | -
dc.description.abstract | In construction, about 80-90% of accidents are associated with workers' unsafe acts. Nevertheless, measurement of workers' behavior has not been widely applied in practice because of the difficulty of observing workers on jobsites. To provide a robust and automated means of worker observation, this paper proposes a framework for vision-based unsafe-action detection for behavior monitoring. The framework consists of (1) the identification of critical unsafe behaviors, (2) the collection of relevant motion templates and site videos, (3) the extraction of 3D skeletons from the videos, and (4) the detection of unsafe actions using the motion templates and skeleton models. As a proof of concept, experimental studies are undertaken to detect an unsafe action during ladder climbing (i.e., reaching far to one side) in motion datasets extracted from videos. The results indicate that the proposed framework can potentially perform well at detecting predefined unsafe actions in videos. (C) 2013 Elsevier B.V. All rights reserved. | en_US
dc.description.sponsorship | We would like to thank the following people: Dr. Thomas Armstrong, the director of the Center for Ergonomics, for his help in designing and conducting our experiments; Chunxia Li, a former MS student at UM; and staff at the UM 3D Lab for their help in motion data collection. The work presented in this paper was supported financially by two National Science Foundation awards (No. CMMI-1161123 and CMMI-1200120) and a CPWR grant through NIOSH cooperative agreement OH009762. Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the National Science Foundation. | en_US
dc.language.iso | en | en_US
dc.publisher | Elsevier Science B.V. | en_US
dc.subject | Safety | en_US
dc.subject | Behavior-based safety analysis | en_US
dc.subject | Vision-based tracking | en_US
dc.subject | Motion capture | en_US
dc.subject | Motion recognition | en_US
dc.title | A vision-based motion capture and recognition framework for behavior-based safety management | en_US
dc.type | Article | en_US
dc.relation.volume | 35 | -
dc.identifier.doi | 10.1016/j.autcon.2013.05.001 | -
dc.relation.page | 131-141 | -
dc.relation.journal | AUTOMATION IN CONSTRUCTION | -
dc.contributor.googleauthor | Han, SangUk | -
dc.contributor.googleauthor | Lee, SangHyun | -
dc.relation.code | 2013001039 | -
dc.sector.campus | S | -
dc.sector.daehak | COLLEGE OF ENGINEERING[S] | -
dc.sector.department | DEPARTMENT OF CIVIL AND ENVIRONMENTAL ENGINEERING | -
dc.identifier.pid | sanguk | -
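
The abstract above describes a framework whose final step (4) detects unsafe actions by comparing motions recovered from video-extracted 3D skeletons against pre-collected motion templates. The paper's record does not include the implementation, so the Python sketch below is only a rough illustration of that idea, not the authors' method: it matches an observed joint-coordinate sequence against an "unsafe reach" template with a plain dynamic-time-warping distance. The feature choice, joint layout, and threshold are all assumptions made for this example.

# Minimal, illustrative sketch (not the authors' implementation) of step (4):
# matching an observed 3D-skeleton motion sequence against a pre-collected
# "unsafe action" motion template. The joint layout, feature, and threshold
# below are hypothetical.
import numpy as np

def dtw_distance(seq_a: np.ndarray, seq_b: np.ndarray) -> float:
    """Dynamic time warping distance between two feature sequences,
    each of shape [n_frames, n_features]."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m] / (n + m)  # length-normalized distance

def lateral_reach_feature(skeleton_frames: np.ndarray) -> np.ndarray:
    """Hypothetical feature: horizontal offset of each hand from the spine.
    skeleton_frames has shape [n_frames, n_joints, 3]; joint 0 is assumed to
    be the spine, joint 1 the left hand, joint 2 the right hand, and the
    third coordinate (height) is ignored."""
    spine = skeleton_frames[:, 0, :2]
    left = skeleton_frames[:, 1, :2] - spine
    right = skeleton_frames[:, 2, :2] - spine
    return np.hstack([left, right])  # [n_frames, 4]

def is_unsafe(observed: np.ndarray, template: np.ndarray,
              threshold: float = 0.15) -> bool:
    """Flag the observed motion as unsafe if its feature sequence is close
    enough to the unsafe-action template (threshold is arbitrary here)."""
    return dtw_distance(lateral_reach_feature(observed),
                        lateral_reach_feature(template)) < threshold

In such a setup, skeleton sequences extracted from site video (step 3) would be passed to is_unsafe together with a template recorded for the predefined unsafe ladder-climbing action (reaching far to one side), and any sequence below the distance threshold would be flagged for review.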
Appears in Collections:
COLLEGE OF ENGINEERING[S](공과대학) > CIVIL AND ENVIRONMENTAL ENGINEERING(건설환경공학과) > Articles
Files in This Item:
There are no files associated with this item.