
Full metadata record

DC Field | Value | Language
dc.contributor.author | 신현철 | -
dc.date.accessioned | 2021-12-23T03:56:51Z | -
dc.date.available | 2021-12-23T03:56:51Z | -
dc.date.issued | 2021-02 | -
dc.identifier.citation | Signals, v. 2, Issue 1, Page 98-107 | en_US
dc.identifier.uri | https://www.mdpi.com/2624-6120/2/1/9 | -
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/166973 | -
dc.description.abstract | Three-dimensional (3D) object detection is essential in autonomous driving. A 3D Lidar sensor can capture objects on the road, such as vehicles, cyclists, and pedestrians. Although Lidar can generate point clouds in 3D space, it lacks the fine resolution of 2D image information. Therefore, the fusion of Lidar and camera data has gradually become a practical method for 3D object detection. Previous strategies focused on extracting voxel points and fusing feature maps; however, the biggest challenge remains extracting enough edge information to detect small objects. To address this problem, we found that attention modules are beneficial for detecting small objects. In this work, we combined Frustum ConvNet with attention modules to fuse images from a camera and point clouds from a Lidar. Multilayer Perceptron (MLP) layers and tanh activation functions were used in the attention modules. Furthermore, the attention modules were built on PointNet to perform multilayer edge detection for 3D object detection. Compared with the well-known baseline, Frustum ConvNet, our method achieved competitive results, improving Average Precision (AP) for 3D object detection by 0.27%, 0.43%, and 0.36% in the easy, moderate, and hard cases, respectively, and AP for Bird's Eye View (BEV) object detection by 0.21%, 0.27%, and 0.01% in the easy, moderate, and hard cases, respectively, on the KITTI detection benchmark. Our method also obtained the best AP results in four cases on the indoor SUN-RGBD dataset for 3D object detection. | en_US
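The abstract describes attention modules built from an MLP with tanh activations that reweight per-point features. The paper's exact architecture is not given in this record, so the following is only a minimal illustrative sketch of that general idea: a hypothetical two-layer scoring MLP (weights `w1`, `w2` are made-up names) produces a tanh-bounded scalar score per point, which scales that point's feature vector.

```python
import math
import random

def mlp_tanh_attention(features, w1, w2):
    """Reweight per-point features with a tiny MLP + tanh scoring head.

    features: list of N feature vectors (lists of floats)
    w1:       hidden-layer weights, shape (hidden_dim, feature_dim)
    w2:       scoring weights, shape (hidden_dim,)
    Returns a list of N reweighted feature vectors.
    """
    attended = []
    for f in features:
        # Hidden layer: one tanh unit per row of w1.
        hidden = [math.tanh(sum(w * x for w, x in zip(row, f))) for row in w1]
        # Scalar attention score in (-1, 1) from the tanh output unit.
        score = math.tanh(sum(w * h for w, h in zip(w2, hidden)))
        # Scale the original feature vector by its attention score.
        attended.append([score * x for x in f])
    return attended

if __name__ == "__main__":
    random.seed(0)
    dim, hidden_dim, n_points = 4, 8, 5
    feats = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_points)]
    w1 = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(hidden_dim)]
    w2 = [random.uniform(-1, 1) for _ in range(hidden_dim)]
    out = mlp_tanh_attention(feats, w1, w2)
    print(len(out), len(out[0]))
```

Because the score passes through tanh, each output feature has magnitude no larger than the input feature, which keeps the reweighting bounded; the published method may differ in layer sizes, normalization, and where the module attaches to PointNet.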
dc.language.iso | en_US | en_US
dc.publisher | MDPI | en_US
dc.subject | 3D vision | en_US
dc.subject | attention module | en_US
dc.subject | fusion | en_US
dc.subject | point cloud | en_US
dc.subject | vehicle detection | en_US
dc.title | 3D object detection using Frustums and attention modules for images and point clouds | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.3390/signals2010009 | -
dc.relation.page | 98-107 | -
dc.relation.journal | Signals | -
dc.contributor.googleauthor | Li, Yiran | -
dc.contributor.googleauthor | Xie, Han | -
dc.contributor.googleauthor | Shin, Hyunchul | -
dc.relation.code | 2021034972 | -
dc.sector.campus | E | -
dc.sector.daehak | COLLEGE OF ENGINEERING SCIENCES[E] | -
dc.sector.department | DIVISION OF ELECTRICAL ENGINEERING | -
dc.identifier.pid | shin | -
Appears in Collections:
COLLEGE OF ENGINEERING SCIENCES[E](공학대학) > ELECTRICAL ENGINEERING(전자공학부) > Articles
Files in This Item:
There are no files associated with this item.



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
