Full metadata record

DC Field | Value | Language
dc.contributor.advisor | 권태수 | -
dc.contributor.author | 손서영 | -
dc.date.accessioned | 2023-09-27T02:06:06Z | -
dc.date.available | 2023-09-27T02:06:06Z | -
dc.date.issued | 2023-08 | -
dc.identifier.uri | http://hanyang.dcollection.net/common/orgView/200000684444 | en_US
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/187118 | -
dc.description.abstract | With the limited capacity of motion capture (mocap) data, recreating a desired trajectory for goal-oriented locomotion has been a long-standing challenge in character animation. We introduce a GAN-based approach combined with reinforcement learning to produce goal-specific tasks, such as punching random targets and engaging in competitive sports like boxing. Data-driven methods built on generative models such as Variational Autoencoders, Conditional Variational Autoencoders, and Normalizing Flows have demonstrated efficient ways of predicting long sequences of motion. Although these methods produce high-quality long-term motion, they can face limitations when synthesizing motion in more challenging scenarios, such as punching a random target. This can be addressed by using a GAN discriminator to imitate motion data clips and incorporating reinforcement learning to compose goal-oriented motions. In this paper, we provide an overview of the overall method that combines GAN-based techniques with deep reinforcement learning, and we compare two state-of-the-art methods: Adversarial Motion Priors and Adversarial Skill Embeddings. We also adopt TimeChamber, a large-scale self-play framework for multi-agent reinforcement learning, to create an environment for competitive sports such as boxing. We experimentally demonstrate that both Adversarial Motion Priors and Adversarial Skill Embeddings can generate viable motions for a character punching a random target, even in the absence of mocap data that specifically captures the transition between punching and locomotion. Moreover, with a single learned policy, multiple task controllers can be constructed through the TimeChamber framework. In short, we conducted several experiments demonstrating a character punching task without providing explicit transition information between locomotion and punching, and our experiments validate our redesigned reward functions on given and random facing terms. We have also implemented a boxing task for two characters using the TimeChamber framework, where our reward function operates similarly to the punching task. | -
dc.publisher | 한양대학교 (Hanyang University) | -
dc.title | GAN 기반 접근 방식과 강화 학습의 캐릭터 복싱과제에서의 효과성 탐구 | -
dc.title.alternative | Exploring the Effectiveness of a GAN-based Approach and Reinforcement Learning in a Character Boxing Task | -
dc.type | Theses | -
dc.contributor.googleauthor | 손서영 | -
dc.contributor.alternativeauthor | Seoyoung Son | -
dc.sector.campus | S | -
dc.sector.daehak | 대학원 (Graduate School) | -
dc.sector.department | 컴퓨터·소프트웨어학과 (Department of Computer Science) | -
dc.description.degree | Master | -
Appears in Collections:
GRADUATE SCHOOL[S](대학원) > COMPUTER SCIENCE(컴퓨터·소프트웨어학과) > Theses (Master)
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
