
Full metadata record

DC Field | Value | Language
dc.contributor.author | 임종우 | -
dc.date.accessioned | 2020-10-05T02:37:41Z | -
dc.date.available | 2020-10-05T02:37:41Z | -
dc.date.issued | 2019-10 | -
dc.identifier.citation | 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 8986-8996 | en_US
dc.identifier.isbn | 978-1-7281-4803-8 | -
dc.identifier.issn | 2380-7504 | -
dc.identifier.uri | https://ieeexplore.ieee.org/document/9010863 | -
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/154350 | -
dc.description.abstract | In this paper, we propose a novel end-to-end deep neural network model for omnidirectional depth estimation from a wide-baseline multi-view stereo setup. The images captured with ultra wide field-of-view (FOV) cameras on an omnidirectional rig are processed by the feature extraction module, and then the deep feature maps are warped onto the concentric spheres swept through all candidate depths using the calibrated camera parameters. The 3D encoder-decoder block takes the aligned feature volume to produce the omnidirectional depth estimate with regularization on uncertain regions utilizing the global context information. In addition, we present large-scale synthetic datasets for training and testing omnidirectional multi-view stereo algorithms. Our datasets consist of 11K ground-truth depth maps and 45K fisheye images in four orthogonal directions with various objects and environments. Experimental results show that the proposed method generates excellent results in both synthetic and real-world environments, and it outperforms the prior art and the omnidirectional versions of the state-of-the-art conventional stereo algorithms. | en_US
dc.description.sponsorship | This research was supported by the Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (NRF-2017M3C4A7069369), the NRF grant funded by the Korea government (MSIP) (NRF-2017R1A2B4011928), the Research Fellow Program funded by the Korea government (NRF-2017R1A6A3A11031193), and Samsung Research Funding & Incubation Center for Future Technology (SRFC-TC1603-05). | en_US
dc.language.iso | en | en_US
dc.publisher | IEEE/CVF | en_US
dc.subject | Cameras | en_US
dc.subject | Feature extraction | en_US
dc.subject | Three-dimensional displays | en_US
dc.subject | Estimation | en_US
dc.subject | Neural networks | en_US
dc.subject | Machine learning | en_US
dc.subject | Computational modeling | en_US
dc.title | OmniMVS: End-to-End Learning for Omnidirectional Stereo Matching | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.1109/ICCV.2019.00908 | -
dc.relation.page | 8987-8996 | -
dc.contributor.googleauthor | Won, Changhee | -
dc.contributor.googleauthor | Ryu, Jongbin | -
dc.contributor.googleauthor | Lim, Jongwoo | -
dc.sector.campus | S | -
dc.sector.daehak | COLLEGE OF ENGINEERING[S] | -
dc.sector.department | DEPARTMENT OF COMPUTER SCIENCE | -
dc.identifier.pid | jlim | -
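
The abstract above describes a spherical sweep: 2D feature maps from each fisheye camera are warped onto concentric spheres, one per candidate depth, before a 3D encoder-decoder regularizes the resulting volume. The following is a minimal sketch of that warping step in PyTorch; the fisheye projection function world_to_pixel, the equirectangular output grid, and all tensor names are assumptions made for illustration, not the authors' released implementation.

import torch
import torch.nn.functional as F

def build_sweep_volume(feat, world_to_pixel, inv_depths, H, W):
    """Warp one camera's feature map onto concentric sweep spheres.

    feat           -- (1, C, h, w) feature map from the 2D extraction module
    world_to_pixel -- hypothetical calibrated fisheye projection: maps
                      (N, 3) world points to normalized pixel coords in [-1, 1]
    inv_depths     -- (D,) candidate inverse depths (sphere radius = 1 / inv_d)
    Returns a (1, C, D, H, W) feature volume aligned on the sweep spheres.
    """
    # Unit ray directions of an equirectangular (longitude x latitude) grid.
    theta = torch.linspace(-torch.pi, torch.pi, W)        # longitude
    phi = torch.linspace(-torch.pi / 2, torch.pi / 2, H)  # latitude
    phi, theta = torch.meshgrid(phi, theta, indexing="ij")
    rays = torch.stack((torch.cos(phi) * torch.sin(theta),
                        torch.sin(phi),
                        torch.cos(phi) * torch.cos(theta)), dim=-1)  # (H, W, 3)

    slices = []
    for inv_d in inv_depths:
        pts = rays / inv_d                                # points on one sweep sphere
        grid = world_to_pixel(pts.reshape(-1, 3)).reshape(1, H, W, 2)
        # Bilinear sampling pulls the 2D features onto this sphere.
        slices.append(F.grid_sample(feat, grid, align_corners=True))
    return torch.stack(slices, dim=2)                     # (1, C, D, H, W)

Stacking such volumes from all four cameras and passing them to the 3D encoder-decoder, as the abstract outlines, would then produce the omnidirectional depth estimate.
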
Appears in Collections:
COLLEGE OF ENGINEERING[S](공과대학) > COMPUTER SCIENCE(컴퓨터소프트웨어학부) > Articles
Files in This Item:
There are no files associated with this item.