
Full metadata record

dc.contributor.advisor: 박종일
dc.contributor.author: 강민석
dc.date.accessioned: 2020-02-18T16:31:40Z
dc.date.available: 2020-02-18T16:31:40Z
dc.date.issued: 2016-02
dc.identifier.uri: https://repository.hanyang.ac.kr/handle/20.500.11754/126484
dc.identifier.uri: http://hanyang.dcollection.net/common/orgView/200000428153 (en_US)
dc.description.abstract:
Three-dimensional object tracking is one of the popular subjects in computer vision and augmented reality, because recognizing a specific object in a camera image and tracking its real-world three-dimensional location has great practical value in many fields. 3D object tracking from camera images is used in areas such as augmented reality, human-computer interaction, security and surveillance, medical imaging, and visual servoing. Although a depth camera provides depth information about a scene and has a number of advantages for 3D object tracking, the generic red-green-blue (RGB) camera has its own strength: the majority of cameras attached to mobile devices (smartphones, tablet PCs, laptops, etc.) are RGB cameras, not depth cameras. This means that a target object can be tracked with these mobile devices if the tracking algorithm needs only RGB information. For this reason, we use RGB information as the visual cue for tracking in this thesis.
Unlike planar object tracking, tracking a 3D object and estimating the camera's six degrees of freedom require a high level of accuracy. An inaccurately estimated camera pose causes tracking jitter, which is critical when augmented reality content is placed at the location of the tracked target object. In this thesis, we propose a method that significantly reduces tracking jitter and is robust to illumination changes. The method requires a 3D model of the target object and the corresponding texture of that object, so the thesis starts from the question of how to acquire the target object's texture. The thesis also describes how the problems that occurred during the tracking experiments were resolved. To reduce jitter, we use a rendered object model, the camera motion, and a two-phase jitter decision. The method applies the perspective effect to object feature points by using the rendered object model and the camera pose, and as a result it can estimate the camera pose accurately. The accurately estimated camera pose reduces tracking jitter significantly; however, some mild jitter still remains, so we propose the two-phase jitter decision to eliminate it. The effect of our method is verified in the experimental results.
In addition, direct light on the target object interferes with accurately estimating the target object's location, because object tracking methods based on visual cues are strongly influenced by changes in illumination; illumination accounts for much of the information in a camera image. Chromaticity, however, is invariant to illumination changes and reflects the characteristics of each object's surface material. We therefore propose an object tracking method that uses chromaticity images. The method uses each channel of the chromaticity information, and it is much more robust to direct light and illumination changes than a method that uses RGB images. The effect of this method is also verified in the experimental results.
dc.publisher: 한양대학교
dc.title: Robust 3D Object Tracking Under Illumination Changes
dc.title.alternative: 조명 변화에 강인한 3차원 객체 추적
dc.type: Theses
dc.contributor.googleauthor: Kang, Minseok
dc.contributor.alternativeauthor: 강민석
dc.sector.campus: S
dc.sector.daehak: 대학원
dc.sector.department: 컴퓨터·소프트웨어학과
dc.description.degree: Master
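Note on the chromaticity cue mentioned in the abstract above: the abstract relies on the fact that chromaticity is largely invariant to illumination changes, whereas raw RGB values are not. The full thesis text is not attached to this record, so the snippet below is only a minimal sketch of one common chromaticity formulation (normalized rgb, where r = R/(R+G+B) and g = G/(R+G+B)); the exact chromaticity representation and channels used in the thesis may differ.

import numpy as np

def rgb_to_chromaticity(image: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image to normalized-rgb chromaticity.

    Each pixel is mapped to (r, g) = (R, G) / (R + G + B). Scaling all
    three channels by a common illumination factor leaves (r, g)
    unchanged, which is the invariance property the abstract refers to.
    """
    rgb = image.astype(np.float64)
    total = rgb.sum(axis=2, keepdims=True)
    total[total == 0.0] = 1.0        # avoid division by zero on black pixels
    chroma = rgb / total             # r + g + b = 1 at every pixel
    return chroma[..., :2]           # b = 1 - r - g is redundant

# Example (hypothetical input file name):
# frame = plt.imread("frame.png")[..., :3]   # any H x W x 3 RGB array
# rg = rgb_to_chromaticity(frame)

Feature detection and matching can then operate on these chromaticity channels rather than on the raw RGB channels, which is one way to make a tracker less sensitive to direct light and global illumination changes.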
Appears in Collections:
GRADUATE SCHOOL[S](대학원) > COMPUTER SCIENCE(컴퓨터·소프트웨어학과) > Theses (Master)
Files in This Item:
There are no files associated with this item.
