
Full metadata record

DC Field | Value | Language
dc.contributor.author | 신현철 | -
dc.date.accessioned | 2021-12-23T04:08:04Z | -
dc.date.available | 2021-12-23T04:08:04Z | -
dc.date.issued | 2021-02 | -
dc.identifier.citation | MULTIDIMENSIONAL SYSTEMS AND SIGNAL PROCESSING, v. 32, No. 3, Page. 897-913 | en_US
dc.identifier.issn | 0923-6082 | -
dc.identifier.issn | 1573-0824 | -
dc.identifier.uri | https://link.springer.com/article/10.1007%2Fs11045-021-00764-1 | -
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/166986 | -
dc.description.abstract | Detecting small-scale pedestrians in aerial images is a challenging task that can be difficult even for humans. We observe that single-image-based methods cannot achieve robust performance because of the poor visual cues of small instances. Since multiple frames can provide more information for detecting such difficult cases than a single frame alone, we design a novel video-based pedestrian detection method with a two-stream network pipeline that fully utilizes the temporal and contextual information in a video. An aggregated feature map is proposed to absorb spatial and temporal information with the help of spatial and temporal sub-networks. To better capture motion information, a more refined flow network (SPyNet) is adopted instead of a simple FlowNet. In the spatial-stream sub-network, we modify the backbone network structure by increasing the feature map resolution with a relatively larger receptive field to make it suitable for small-scale detection. Experimental results on drone video datasets demonstrate that our approach improves detection accuracy for small-scale instances and reduces false positive detections. By exploiting temporal information and aggregating the feature maps, our two-stream method improves detection performance by 8.48% in mean Average Precision (mAP) over the basic single-stream R-FCN method, and it outperforms the state-of-the-art method by 3.09% on the Okutama Human-action dataset. | en_US
dc.language.iso | en_US | en_US
dc.publisher | SPRINGER | en_US
dc.subject | Pedestrian detection | en_US
dc.subject | Feature aggregation | en_US
dc.subject | Drone vision | en_US
dc.subject | Neural network | en_US
dc.subject | Deep learning | en_US
dc.title | Two-stream small-scale pedestrian detection network with feature aggregation for drone-view videos | en_US
dc.type | Article | en_US
dc.relation.no | 3 | -
dc.relation.volume | 32 | -
dc.identifier.doi | 10.1007/s11045-021-00764-1 | -
dc.relation.page | 897-913 | -
dc.relation.journal | MULTIDIMENSIONAL SYSTEMS AND SIGNAL PROCESSING | -
dc.contributor.googleauthor | Xie, Han | -
dc.contributor.googleauthor | Shin, Hyunchul | -
dc.relation.code | 2021006555 | -
dc.sector.campus | E | -
dc.sector.daehak | COLLEGE OF ENGINEERING SCIENCES[E] | -
dc.sector.department | DIVISION OF ELECTRICAL ENGINEERING | -
dc.identifier.pid | shin | -
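
The abstract above describes a two-stream pipeline that aggregates spatial (appearance) and temporal (motion) feature maps before detection. The following is a minimal PyTorch sketch of that aggregation idea only, not the authors' implementation: the module names (SpatialStream, TemporalStream, TwoStreamAggregator), the stand-in CNN backbones, and the learned gating used for fusion are illustrative assumptions, whereas the paper itself builds on R-FCN with a modified backbone and uses SPyNet optical flow for the temporal stream.

```python
# Minimal sketch of two-stream feature aggregation (NOT the authors' code).
# Assumptions: stand-in CNNs replace the R-FCN backbone and SPyNet flow;
# channel sizes and the gated fusion rule are illustrative only.

import torch
import torch.nn as nn


class SpatialStream(nn.Module):
    """Appearance features from the current frame (stand-in for the paper's
    higher-resolution backbone aimed at small-scale pedestrians)."""

    def __init__(self, out_channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, out_channels, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        return self.body(frame)


class TemporalStream(nn.Module):
    """Motion features from a stacked frame pair (the paper uses SPyNet
    optical flow; a plain CNN over 6 input channels is a placeholder)."""

    def __init__(self, out_channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, out_channels, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, frame_pair: torch.Tensor) -> torch.Tensor:
        return self.body(frame_pair)


class TwoStreamAggregator(nn.Module):
    """Fuses spatial and temporal feature maps into one aggregated map
    that a detection head (e.g. an R-FCN-style head) could consume."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.spatial = SpatialStream(channels)
        self.temporal = TemporalStream(channels)
        # Learned per-pixel weighting of the two streams (an assumption,
        # not the aggregation rule from the paper).
        self.gate = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, cur: torch.Tensor, prev: torch.Tensor) -> torch.Tensor:
        f_s = self.spatial(cur)
        f_t = self.temporal(torch.cat([cur, prev], dim=1))
        w = torch.sigmoid(self.gate(torch.cat([f_s, f_t], dim=1)))
        return w * f_s + (1.0 - w) * f_t  # aggregated feature map


if __name__ == "__main__":
    cur = torch.randn(1, 3, 256, 256)   # current drone-view frame
    prev = torch.randn(1, 3, 256, 256)  # previous frame (temporal context)
    feats = TwoStreamAggregator()(cur, prev)
    print(feats.shape)  # torch.Size([1, 64, 64, 64])
```

The gated sum is just one simple way to let the network weight appearance against motion cues at each location; the aggregation used in the published method may differ.
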
Appears in Collections:
COLLEGE OF ENGINEERING SCIENCES[E](공학대학) > ELECTRICAL ENGINEERING(전자공학부) > Articles
Files in This Item:
There are no files associated with this item.