
Full metadata record

DC Field | Value | Language
dc.contributor.author | 임종우 | -
dc.date.accessioned | 2019-11-27T02:26:06Z | -
dc.date.available | 2019-11-27T02:26:06Z | -
dc.date.issued | 2019-05 | -
dc.identifier.citation | IEEE International Conference on Robotics and Automation, pp. 6073-6079 | en_US
dc.identifier.isbn | 978-153866026-3 | -
dc.identifier.issn | 2577-087X | -
dc.identifier.uri | https://ieeexplore.ieee.org/document/8793823 | -
dc.identifier.uri | http://repository.hanyang.ac.kr/handle/20.500.11754/114931 | -
dc.description.abstract | Omnidirectional depth sensing has an advantage over conventional stereo systems since it enables the recognition of objects of interest in all directions without any blind regions. In this paper, we propose a novel wide-baseline omnidirectional stereo algorithm which computes a dense depth estimate from fisheye images using a deep convolutional neural network. The capture system consists of multiple cameras mounted on a wide-baseline rig with ultra-wide field-of-view (FOV) lenses, and we present a calibration algorithm for the extrinsic parameters based on bundle adjustment. Instead of estimating depth maps from multiple sets of rectified images and stitching them, our approach directly generates one dense omnidirectional depth map with full 360° coverage in the rig's global coordinate system. To this end, the proposed neural network is designed to output the cost volume from the warped images in the sphere-sweeping method, and the final depth map is estimated by taking the minimum-cost indices of the cost volume aggregated by SGM. For training the deep neural network and testing the entire system, realistic synthetic urban datasets are rendered using Blender. Experiments on the synthetic and real-world datasets show that our algorithm outperforms conventional depth estimation methods and generates highly accurate depth maps. | en_US
dc.description.sponsorship | This research was supported by Samsung Research Funding & Incubation Center for Future Technology under Project Number SRFC-TC1603-05, the Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (NRF-2017M3C4A7069369), and the NRF grant funded by the Korea government (MSIP) (NRF-2017R1A2B4011928). | en_US
dc.language.iso | en | en_US
dc.publisher | IEEE | en_US
dc.subject | Cameras | en_US
dc.subject | Lenses | en_US
dc.subject | Calibration | en_US
dc.subject | Neural networks | en_US
dc.subject | Three-dimensional displays | en_US
dc.subject | Robot vision systems | en_US
dc.title | SweepNet: Wide-baseline Omnidirectional Depth Estimation | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.1109/ICRA.2019.8793823 | -
dc.relation.page | 6073-6079 | -
dc.contributor.googleauthor | Won, Changhee | -
dc.contributor.googleauthor | Ryu, Jongbin | -
dc.contributor.googleauthor | Lim, Jongwoo | -
dc.relation.code | 20190143 | -
dc.sector.campus | S | -
dc.sector.daehak | COLLEGE OF ENGINEERING[S] | -
dc.sector.department | DEPARTMENT OF COMPUTER SCIENCE | -
dc.identifier.pid | jlim | -
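The sphere-sweeping pipeline summarized in the abstract (warp each fisheye view onto candidate spheres around the rig, build a cost volume, then pick the minimum-cost hypothesis per pixel) can be sketched as a toy in NumPy. This is only an illustration under assumed shapes and names, not the authors' network: the learned cost computation and the SGM aggregation step are replaced here by a simple cross-view variance cost and raw winner-take-all selection.

```python
# Illustrative sketch (NOT the paper's code): winner-take-all depth selection
# from a sphere-sweep cost volume. Assumes the fisheye views have already been
# warped onto D candidate spheres, giving a (V, D, H, W) array: V views,
# D sphere-radius hypotheses, H x W omnidirectional image grid.
import numpy as np

def sweep_cost_volume(warped):
    """Per-hypothesis matching cost as the intensity variance across views.

    warped: float array of shape (V, D, H, W).
    Returns a cost volume of shape (D, H, W); lower cost = better photo-
    consistency, i.e. the views agree at that sphere radius.
    """
    return warped.var(axis=0)

def wta_depth(cost, radii):
    """Pick, per pixel, the sphere radius with the minimum cost.

    SweepNet aggregates the volume with SGM before this step; here we apply
    winner-take-all directly to the raw cost volume.
    """
    idx = cost.argmin(axis=0)   # (H, W) indices of the best hypothesis
    return radii[idx]           # (H, W) depth (sphere radius) map

# Toy example: 4 views, 8 radius hypotheses, a 16x32 "panorama".
rng = np.random.default_rng(0)
radii = np.linspace(1.0, 20.0, 8)
warped = rng.random((4, 8, 16, 32))
# Make hypothesis 3 photo-consistent everywhere: identical across all views.
warped[:, 3] = warped[0, 3]
cost = sweep_cost_volume(warped)    # (8, 16, 32)
depth = wta_depth(cost, radii)      # every pixel selects radii[3]
```

In the actual system a deep network produces the per-hypothesis costs from the warped images, and SGM smooths the volume before the argmin; swapping the variance cost for a learned one leaves the surrounding sweep/argmin structure unchanged.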
Appears in Collections:
COLLEGE OF ENGINEERING[S] (공과대학) > COMPUTER SCIENCE (컴퓨터소프트웨어학부) > Articles
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.