
TS-GAN: Image Generation based on Text and Sketch with Generative Adversarial Networks

Title
TS-GAN: Image Generation based on Text and Sketch with Generative Adversarial Networks
Author
Lee, Je Hoon
Alternative Author(s)
이제훈
Advisor(s)
이동호
Issue Date
2019-02
Publisher
한양대학교
Degree
Master
Abstract
There have been many studies on generating images from text descriptions or sketch images with Generative Adversarial Networks. However, because previous studies generate images from only a single input, either a text description or a sketch image, they share a limitation: an unwanted image is generated when the text description is insufficient or the sketch differs greatly from the real image. In this thesis, I propose TS-GAN, a new technique that overcomes the limitations of previous studies by using sketch images and text descriptions together. TS-GAN consists of two main steps. In the first step, it generates the main shape of the object and the background at low resolution based on the sketch image. In the second step, it generates a more realistic image at high resolution based on the image generated in the first step and the text description. Through various experiments on the CUB and Oxford-102 datasets, which are widely used in computer vision, I show that TS-GAN generates high-quality images from a text description and a sketch image.
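The abstract describes a two-stage generation pipeline: a first generator maps the sketch to a low-resolution image of the object shape and background, and a second generator refines that output into a higher-resolution image conditioned on the text description. The sketch below is a minimal illustration of that pipeline structure only; the module names (Stage1Generator, Stage2Generator), layer counts, resolutions (64 to 128 pixels), and the text embedding size of 256 are illustrative assumptions, not the architecture used in the thesis.

import torch
import torch.nn as nn

class Stage1Generator(nn.Module):
    """Hypothetical first stage: maps a sketch to a low-resolution image
    that captures the rough object shape and background."""
    def __init__(self, in_channels=1, out_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),           # 32x32 -> 16x16
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, out_channels, 4, stride=2, padding=1),  # 32x32 -> 64x64
            nn.Tanh(),
        )

    def forward(self, sketch):
        return self.net(sketch)

class Stage2Generator(nn.Module):
    """Hypothetical second stage: refines the Stage-1 image into a
    higher-resolution image conditioned on a text embedding."""
    def __init__(self, text_dim=256, img_channels=3):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(img_channels, 128, 4, stride=2, padding=1),  # 64x64 -> 32x32
            nn.ReLU(inplace=True),
        )
        # Fuse the text embedding with image features, then upsample to 128x128.
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(128 + text_dim, 64, 4, stride=2, padding=1),  # 32 -> 64
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, img_channels, 4, stride=2, padding=1),    # 64 -> 128
            nn.Tanh(),
        )

    def forward(self, low_res_img, text_emb):
        feat = self.encode(low_res_img)                      # (B, 128, 32, 32)
        # Broadcast the sentence embedding over the spatial grid before fusion.
        text = text_emb[:, :, None, None].expand(-1, -1, feat.size(2), feat.size(3))
        return self.decode(torch.cat([feat, text], dim=1))   # (B, 3, 128, 128)

if __name__ == "__main__":
    sketch = torch.randn(2, 1, 64, 64)   # batch of grayscale sketches
    text_emb = torch.randn(2, 256)       # batch of sentence embeddings
    g1, g2 = Stage1Generator(), Stage2Generator()
    low_res = g1(sketch)                 # (2, 3, 64, 64)
    high_res = g2(low_res, text_emb)     # (2, 3, 128, 128)
    print(low_res.shape, high_res.shape)

In this sketch the two stages are trained as separate conditional GANs (discriminators omitted for brevity); the key design point from the abstract is that the sketch fixes the coarse layout first, and the text only steers the refinement stage.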
URI
https://repository.hanyang.ac.kr/handle/20.500.11754/99816
http://hanyang.dcollection.net/common/orgView/200000434784
Appears in Collections:
GRADUATE SCHOOL[S](대학원) > COMPUTER SCIENCE & ENGINEERING(컴퓨터공학과) > Theses (Master)
Files in This Item:
There are no files associated with this item.