
Full metadata record

DC Field | Value | Language
dc.contributor.author | 신현철 | -
dc.date.accessioned | 2019-12-26T01:47:46Z | -
dc.date.available | 2019-12-26T01:47:46Z | -
dc.date.issued | 2018-12 | -
dc.identifier.citation | IET COMPUTER VISION, v. 12, No. 8, Page. 1179-1187 | en_US
dc.identifier.issn | 1751-9632 | -
dc.identifier.issn | 1751-9640 | -
dc.identifier.uri | https://digital-library.theiet.org/content/journals/10.1049/iet-cvi.2018.5315 | -
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/121402 | -
dc.description.abstract | In this study, a novel multi-layer fused convolutional neural network (MLF-CNN) is proposed for detecting pedestrians under adverse illumination conditions. Most existing pedestrian detectors are prone to failure under adverse illumination circumstances such as shadows, overexposure, or nighttime. To detect pedestrians under such conditions, the authors apply deep learning for effective fusion of the visible and thermal information in multispectral images. The MLF-CNN consists of a proposal generation stage and a detection stage. In the first stage, they design an MLF region proposal network and propose a summation fusion method for integrating two convolutional layers. This combination can detect pedestrians at different scales, even in adverse illumination. Furthermore, instead of extracting features from a single layer, they extract features from three feature maps and match their scales using fused ROI pooling layers. This new multiple-layer fusion technique significantly reduces the detection miss rate. Extensive evaluations on several challenging datasets demonstrate that the approach achieves state-of-the-art performance. For example, the method performs 28.62% better than the baseline method and 11.35% better than the well-known Faster R-CNN halfway fusion method in detection accuracy on the KAIST multispectral pedestrian dataset. | en_US
dc.description.sponsorship | This material is based on work supported by the Ministry of Trade, Industry & Energy (MOTIE, Korea) under Industrial Technology Innovation Program (10080619). | en_US
dc.language.iso | en_US | en_US
dc.publisher | INST ENGINEERING TECHNOLOGY-IET | en_US
dc.subject | pedestrians | en_US
dc.subject | feature extraction | en_US
dc.subject | image matching | en_US
dc.subject | image fusion | en_US
dc.subject | object detection | en_US
dc.subject | feedforward neural nets | en_US
dc.subject | learning (artificial intelligence) | en_US
dc.subject | multilayer fusion techniques | en_US
dc.subject | CNN | en_US
dc.subject | multispectral pedestrian detection | en_US
dc.subject | multilayer fused convolution neural network | en_US
dc.subject | pedestrian detectors | en_US
dc.subject | adverse illumination circumstances | en_US
dc.subject | shadows | en_US
dc.subject | overexposure | en_US
dc.subject | nighttime | en_US
dc.subject | deep learning | en_US
dc.subject | visible information | en_US
dc.subject | thermal information | en_US
dc.subject | MLF region proposal network | en_US
dc.subject | summation fusion method | en_US
dc.subject | convolutional layers | en_US
dc.subject | adverse illumination | en_US
dc.subject | feature maps | en_US
dc.subject | scale matching | en_US
dc.subject | fused ROI pooling layers | en_US
dc.subject | detection miss rate reduction | en_US
dc.subject | KAIST multispectral pedestrian dataset | en_US
dc.title | Multi-layer fusion techniques using a CNN for multispectral pedestrian detection | en_US
dc.type | Article | en_US
dc.relation.no | 8 | -
dc.relation.volume | 12 | -
dc.identifier.doi | 10.1049/iet-cvi.2018.5315 | -
dc.relation.page | 1179-1187 | -
dc.relation.journal | IET COMPUTER VISION | -
dc.contributor.googleauthor | Chen, Yunfan | -
dc.contributor.googleauthor | Xie, Han | -
dc.contributor.googleauthor | Shin, Hyunchul | -
dc.relation.code | 2018000124 | -
dc.sector.campus | E | -
dc.sector.daehak | COLLEGE OF ENGINEERING SCIENCES[E] | -
dc.sector.department | DIVISION OF ELECTRICAL ENGINEERING | -
dc.identifier.pid | shin | -
Appears in Collections:
COLLEGE OF ENGINEERING SCIENCES[E](공학대학) > ELECTRICAL ENGINEERING(전자공학부) > Articles
Files in This Item:
There are no files associated with this item.
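The two fusion ideas named in the abstract above (summation fusion of two convolutional layers in the MLF region proposal network, and pooling each proposal from three feature maps via fused ROI pooling layers) can be sketched in a few lines. The following is a minimal illustration only, not the authors' implementation: the tensor shapes, the pyramid strides, and the use of PyTorch with torchvision's roi_align (as a stand-in for the paper's fused ROI pooling layers) are all assumptions.

    import torch
    from torchvision.ops import roi_align

    def summation_fusion(feat_visible, feat_thermal):
        # Summation fusion: element-wise addition of two (N, C, H, W) feature
        # maps from the visible and thermal streams; shapes must match.
        return feat_visible + feat_thermal

    # Hypothetical two-stream feature maps at one pyramid level (shapes assumed).
    vis = torch.randn(1, 512, 38, 50)
    thm = torch.randn(1, 512, 38, 50)
    fused = summation_fusion(vis, thm)   # (1, 512, 38, 50)

    # Fused ROI pooling sketch: pool the same proposal from three maps at
    # different assumed strides, then concatenate along the channel axis.
    rois = torch.tensor([[0.0, 10.0, 10.0, 110.0, 210.0]])  # (batch_idx, x1, y1, x2, y2)
    levels = [
        (torch.randn(1, 256, 76, 100), 1 / 8),   # assumed shallow map, stride 8
        (fused, 1 / 16),                         # fused map above, stride 16
        (torch.randn(1, 512, 19, 25), 1 / 32),   # assumed deep map, stride 32
    ]
    pooled = [roi_align(m, rois, output_size=(7, 7), spatial_scale=s) for m, s in levels]
    features = torch.cat(pooled, dim=1)  # (1, 1280, 7, 7): 256 + 512 + 512 channels

Summation fusion keeps the channel count unchanged, which is why the two stream maps must already agree in shape and scale; concatenating the per-level pooled features then gives the detection stage multi-scale evidence for each proposal.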