
Full metadata record

DC Field: Value [Language]
dc.contributor.author: 강경태
dc.date.accessioned: 2021-09-09T05:07:59Z
dc.date.available: 2021-09-09T05:07:59Z
dc.date.issued: 2020-12
dc.identifier.citation: 2020 IEEE Real-Time Systems Symposium (RTSS), pp. 191-204 [en_US]
dc.identifier.isbn: 978-1-7281-8324-4
dc.identifier.issn: 2576-3172
dc.identifier.doi: 10.1109/RTSS49844.2020.00027
dc.identifier.uri: https://ieeexplore.ieee.org/document/9355528
dc.identifier.uri: https://repository.hanyang.ac.kr/handle/20.500.11754/165045
dc.description.abstract: For realizing safe autonomous driving, the end-to-end delays of real-time object detection systems should be thoroughly analyzed and minimized. However, despite the recent development of neural networks with minimized inference delays, surprisingly little attention has been paid to their end-to-end delays from an object's appearance until its detection is reported. With this motivation, this paper aims to provide a more comprehensive understanding of the end-to-end delay, through which precise best- and worst-case delay predictions are formulated and three optimization methods are implemented: (i) on-demand capture, (ii) zero-slack pipeline, and (iii) contention-free pipeline. Our experimental results show a 76% reduction in the end-to-end delay of Darknet YOLO (You Only Look Once) v3 (from 1070 ms to 261 ms), demonstrating the great potential of end-to-end delay analysis for autonomous driving. Furthermore, since we modify only the system architecture and not the neural network itself, our approach incurs no penalty on detection accuracy. [en_US]
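
Of the three optimization methods named in the abstract, on-demand capture is the easiest to picture in code. The sketch below is an illustration under assumptions, not the paper's Darknet implementation: it uses OpenCV's VideoCapture API, a hypothetical run_detector placeholder standing in for the YOLOv3 forward pass, and it grabs each frame only when the detector is ready for it, so stale frames do not queue up ahead of inference.

# Minimal Python sketch of the on-demand capture idea (an illustration
# under assumptions, not the paper's Darknet implementation).
import cv2

def run_detector(frame):
    # Hypothetical placeholder for the neural-network inference step,
    # e.g. a YOLOv3 forward pass; assumed to return a list of detections.
    return []

cap = cv2.VideoCapture(0)             # camera device 0 (hypothetical setup)
cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)   # keep at most one buffered frame, where the backend supports it

while cap.isOpened():
    ok, frame = cap.read()            # capture on demand, immediately before inference
    if not ok:
        break
    detections = run_detector(frame)  # end-to-end delay is roughly capture + inference, with no queuing

cap.release()

Shrinking the driver buffer to a single frame only approximates true capture-on-request; the zero-slack and contention-free pipelines would additionally require scheduling control that this sketch does not attempt.
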
dc.description.sponsorship: This work was supported partly by the Korea Evaluation Institute of Industrial Technology (KEIT) grant funded by the Ministry of Trade, Industry and Energy (MOTIE) (20000316, Scene Understanding and Threat Assessment based on Deep Learning for Automatic Emergency Steering), partly by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (2014-3-00065, Resilient Cyber-Physical Systems Research), partly by the Ministry of Land, Infrastructure, and Transport (MOLIT), Korea, through the Transportation Logistics Development Program (20TLRP-B147674-03, Development of Operation Technology for V2X Truck Platooning), and partly by NSF grant CCF-1704859. [en_US]
dc.language.iso: en_US [en_US]
dc.publisher: IEEE COMPUTER SOC [en_US]
dc.subject: Pipelines [en_US]
dc.subject: Neural networks [en_US]
dc.subject: Optimization methods [en_US]
dc.subject: Object detection [en_US]
dc.subject: Real-time systems [en_US]
dc.subject: Delays [en_US]
dc.subject: Autonomous vehicles [en_US]
dc.title: R-TOD: Real-Time Object Detector with Minimized End-to-End Delay for Autonomous Driving [en_US]
dc.type: Article [en_US]
dc.relation.no: NA
dc.relation.volume: NA
dc.relation.page: 1-14
dc.contributor.googleauthor: Jang, Wonseok
dc.contributor.googleauthor: Jeong, Hansaem
dc.contributor.googleauthor: Kang, Kyungtae
dc.contributor.googleauthor: Dutt, Nikil
dc.contributor.googleauthor: Kim, Jong-Chan
dc.relation.code: 20200019
dc.sector.campus: E
dc.sector.daehak: COLLEGE OF COMPUTING[E]
dc.sector.department: DEPARTMENT OF ARTIFICIAL INTELLIGENCE
dc.identifier.pid: ktkang
Appears in Collections:
ETC[S] > 연구정보 (Research Information)
Files in This Item:
There are no files associated with this item.
