
Enhanced Rendering of Point-Cloud Contents

Title
Enhanced Rendering of Point-Cloud Contents
Author
Heejea Lee
Alternative Author(s)
이희제
Advisor(s)
박종일
Issue Date
2022. 8
Publisher
한양대학교
Degree
Master
Abstract
Due to the pandemic, demand for non-face-to-face solutions and the metaverse has increased rapidly. One of the key factors connecting virtual reality to the real world is immersion; if immersion is low, non-face-to-face solutions and the metaverse will struggle to substitute for the real world. Several factors contribute to immersion in virtual reality, such as synchronization between human motion and virtual object motion, the response speed of virtual objects, and the rendering quality of virtual objects. In this thesis, we propose methods to improve rendering quality when virtual objects are rendered from point cloud data. Point cloud content consists of 3D coordinates and color information. Rendering a plain point cloud is fast, but when the view is zoomed or rotated, the gaps between points widen and holes appear. These holes make the quality degradation visible to the user, and two methods are proposed to prevent it. The first method interpolates the generated holes. There are two ways to interpolate a hole: one renders each 3D point as a splat by enlarging it into a primitive with 3D volume; the other locates holes with a visibility indicator map and fills them by inverse warping the corresponding colors. Both approaches fill holes, but the splat method blurs regions of fine detail compared with inverse warping; comparing the PSNR of the two methods, inverse warping is higher by 1.7 dB on average. The second method increases the point cloud density, which, done appropriately, improves rendering quality. Motion vectors are estimated between a point cloud frame and another frame for registration, and the frames are then combined to increase density. To estimate motion vectors between frames, point-to-point correspondences must be established. Using geometry and color information, Gaussian weights are used to pair points and estimate motion vectors, and the estimated motion vectors are then refined by enforcing local coherence with their neighborhood. When the estimated motion vectors are used to compensate the source points and the result is combined with the target points, registration is complete.
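The two approaches summarized in the abstract lend themselves to short illustrative sketches. The Python code below is not taken from the thesis; the function names, the pinhole camera model, the fixed splat radius, and the Gaussian bandwidths are all assumptions made for illustration. The first sketch renders every point as a small z-buffered square splat so that neighboring splats overlap and cover the holes that appear when zooming in.

import numpy as np

def render_splats(xyz, rgb, K, width, height, radius_px=2):
    # Project each 3D point with intrinsics K and draw it as a square
    # splat of radius_px pixels into a z-buffered image, so neighboring
    # splats overlap and small holes between points are covered.
    img = np.zeros((height, width, 3), dtype=np.float32)
    zbuf = np.full((height, width), np.inf, dtype=np.float32)
    for p, c in zip(xyz, rgb):
        if p[2] <= 0:                      # skip points behind the camera
            continue
        uvw = K @ p                        # pinhole projection
        u, v = int(uvw[0] / uvw[2]), int(uvw[1] / uvw[2])
        for du in range(-radius_px, radius_px + 1):
            for dv in range(-radius_px, radius_px + 1):
                x, y = u + du, v + dv
                if 0 <= x < width and 0 <= y < height and p[2] < zbuf[y, x]:
                    zbuf[y, x] = p[2]      # keep the nearest point per pixel
                    img[y, x] = c
    return img

The second sketch follows the densification idea: points of a source frame are paired with points of a target frame using combined Gaussian weights over geometric and color distance, the resulting motion vectors are smoothed for local coherence, and the compensated source frame is merged with the target frame. Again, this is a minimal sketch under assumed parameters, not the thesis's implementation.

def estimate_motion_vectors(src_xyz, src_rgb, tgt_xyz, tgt_rgb,
                            sigma_g=0.05, sigma_c=0.1):
    # Pair each source point with the target point that maximizes a
    # combined Gaussian weight over geometry and color, and return the
    # per-point motion vectors (target position minus source position).
    motion = np.zeros(src_xyz.shape)
    for i, (p, c) in enumerate(zip(src_xyz, src_rgb)):
        d_geo = np.sum((tgt_xyz - p) ** 2, axis=1)   # squared geometric distance
        d_col = np.sum((tgt_rgb - c) ** 2, axis=1)   # squared color distance
        w = np.exp(-d_geo / (2 * sigma_g ** 2)) * np.exp(-d_col / (2 * sigma_c ** 2))
        j = int(np.argmax(w))                        # best-matching target point
        motion[i] = tgt_xyz[j] - p
    return motion

def smooth_motion_vectors(src_xyz, motion, radius=0.1):
    # Enforce local coherence: replace each motion vector by the mean of
    # the motion vectors of nearby source points (radius is an assumption).
    refined = motion.copy()
    for i, p in enumerate(src_xyz):
        mask = np.sum((src_xyz - p) ** 2, axis=1) < radius ** 2
        refined[i] = motion[mask].mean(axis=0)
    return refined

# Usage sketch: compensate the source frame and merge it with the target
# frame to increase point cloud density.
# mv = smooth_motion_vectors(src_xyz, estimate_motion_vectors(src_xyz, src_rgb, tgt_xyz, tgt_rgb))
# dense_xyz = np.vstack([src_xyz + mv, tgt_xyz])
# dense_rgb = np.vstack([src_rgb, tgt_rgb])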
URI
http://hanyang.dcollection.net/common/orgView/200000626599
https://repository.hanyang.ac.kr/handle/20.500.11754/174217
Appears in Collections:
GRADUATE SCHOOL[S](대학원) > COMPUTER SCIENCE(컴퓨터·소프트웨어학과) > Theses (Master)
Files in This Item:
There are no files associated with this item.