Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | 임종우 | - |
dc.date.accessioned | 2019-11-27T02:26:06Z | - |
dc.date.available | 2019-11-27T02:26:06Z | - |
dc.date.issued | 2019-05 | - |
dc.identifier.citation | IEEE International Conference on Robotics and Automation, pp. 6073-6079 | en_US |
dc.identifier.isbn | 978-153866026-3 | - |
dc.identifier.issn | 2577-087X | - |
dc.identifier.uri | https://ieeexplore.ieee.org/document/8793823 | - |
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/114931 | - |
dc.description.abstract | Omnidirectional depth sensing has an advantage over conventional stereo systems in that it enables the recognition of objects of interest in all directions without any blind regions. In this paper, we propose a novel wide-baseline omnidirectional stereo algorithm that computes a dense depth estimate from fisheye images using a deep convolutional neural network. The capture system consists of multiple cameras with ultra-wide field-of-view (FOV) lenses mounted on a wide-baseline rig, and we present a calibration algorithm for the extrinsic parameters based on bundle adjustment. Instead of estimating depth maps from multiple sets of rectified images and stitching them, our approach directly generates one dense omnidirectional depth map with full 360° coverage in the rig's global coordinate system. To this end, the proposed neural network is designed to output the cost volume from the images warped by the sphere sweeping method, and the final depth map is estimated by taking the minimum-cost indices of the cost volume aggregated by SGM. For training the deep neural network and testing the entire system, realistic synthetic urban datasets are rendered using Blender. Experiments on synthetic and real-world datasets show that our algorithm outperforms conventional depth estimation methods and generates highly accurate depth maps. | en_US |
dc.description.sponsorship | This research was supported by Samsung Research Funding & Incubation Center for Future Technology under Project Number SRFC-TC1603-05, by the Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (NRF-2017M3C4A7069369), and by the NRF grant funded by the Korean government (MSIP) (NRF-2017R1A2B4011928). | en_US |
dc.language.iso | en | en_US |
dc.publisher | IEEE | en_US |
dc.subject | Cameras | en_US |
dc.subject | Lenses | en_US |
dc.subject | Calibration | en_US |
dc.subject | Neural networks | en_US |
dc.subject | Three-dimensional displays | en_US |
dc.subject | Robot vision systems | en_US |
dc.title | SweepNet: Wide-baseline Omnidirectional Depth Estimation | en_US |
dc.type | Article | en_US |
dc.identifier.doi | 10.1109/ICRA.2019.8793823 | - |
dc.relation.page | 6073-6079 | - |
dc.contributor.googleauthor | Won, Changhee | - |
dc.contributor.googleauthor | Ryu, Jongbin | - |
dc.contributor.googleauthor | Lim, Jongwoo | - |
dc.relation.code | 20190143 | - |
dc.sector.campus | S | - |
dc.sector.daehak | COLLEGE OF ENGINEERING[S] | - |
dc.sector.department | DEPARTMENT OF COMPUTER SCIENCE | - |
dc.identifier.pid | jlim | - |
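The abstract describes estimating the final depth map by taking the minimum-cost indices of an aggregated cost volume built over spherical depth hypotheses. The sketch below illustrates only that winner-take-all selection step on a toy cost volume; the function names, the uniform inverse-depth sampling, and all shapes are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def sphere_sweep_depths(min_depth, max_depth, num_hypotheses):
    """Candidate depths sampled uniformly in inverse depth (a common
    choice for sweeping methods; assumed here, not taken from the paper)."""
    inv = np.linspace(1.0 / max_depth, 1.0 / min_depth, num_hypotheses)
    return 1.0 / inv

def depth_from_cost_volume(cost_volume, depths):
    """cost_volume: (N, H, W) aggregated matching costs, lower is better.
    Returns an (H, W) metric depth map via per-pixel minimum-cost index."""
    best = np.argmin(cost_volume, axis=0)   # (H, W) hypothesis indices
    return depths[best]                     # look up the winning depth

# Toy example: 4 depth hypotheses over a 2x2 pixel grid.
depths = sphere_sweep_depths(1.0, 10.0, 4)
cost = np.random.rand(4, 2, 2)
cost[2, 0, 0] = -1.0                        # force hypothesis 2 to win at (0, 0)
depth_map = depth_from_cost_volume(cost, depths)
print(depth_map[0, 0] == depths[2])         # hypothesis 2 selected at (0, 0)
```

In the paper's pipeline the cost volume comes from a neural network applied to sphere-swept fisheye images and is aggregated by SGM before this selection step; the toy random costs above merely stand in for that aggregated volume.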
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.