Full metadata record

DC Field | Value | Language
dc.contributor.author | 이윤상 | -
dc.date.accessioned | 2022-10-13T01:11:28Z | -
dc.date.available | 2022-10-13T01:11:28Z | -
dc.date.issued | 2021-01 | -
dc.identifier.citation | IEEE ACCESS, v. 9, page. 20662-20672 | en_US
dc.identifier.issn | 2169-3536 | en_US
dc.identifier.uri | https://ieeexplore.ieee.org/document/9337805 | en_US
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/175298 | -
dc.description.abstract | Recently, deep reinforcement learning (DRL) has been widely used to create controllers for physically simulated characters. Among DRL-based approaches, imitation learning for character control using motion capture clips as tracking references has shown successful results in controlling various motor skills with natural movement. However, the output motion tends to be constrained close to the reference motion, so learning various styles of motion requires many motion clips. In this paper, we present a DRL method for learning a finite state machine (FSM) based policy in a motion-free manner (without the use of any motion data), which controls a simulated character to produce a gait as specified by the desired gait parameters. The control policy learns to output the target pose for each FSM state and the transition timing between states, based on the character state at the beginning of each step and the user-specified gait parameters, such as the desired step length or maximum swing foot height. The combination of FSM-based policy learning and simple linear balance feedback embedded in the base controller has a positive synergistic effect on the performance of the learned policy. The learned policy allows the simulated character to walk as instructed by continuously changing gait parameters while responding to external perturbations. We demonstrate the effectiveness of our approach through interactive control, external push, comparison, and ablation studies. | en_US
dc.description.sponsorship | This work was supported in part by the National Research Foundation of Korea (NRF) grant funded by the Korea Government (MSIT) under Grant NRF-2019R1C1C1006778 and Grant NRF-2019R1A4A1029800, and in part by the Research Fund of Hanyang University under Grant HY-2018. | en_US
dc.language.iso | en | en_US
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | en_US
dc.subject | Character control; deep reinforcement learning; motion-free learning; locomotion control; physically based animation | en_US
dc.title | Finite State Machine-Based Motion-Free Learning of Biped Walking | en_US
dc.type | Article | en_US
dc.relation.volume | 9 | -
dc.identifier.doi | 10.1109/ACCESS.2021.3055241 | en_US
dc.relation.page | 20662-20672 | -
dc.relation.journal | IEEE ACCESS | -
dc.contributor.googleauthor | Kang, Gyoo-Chul | -
dc.contributor.googleauthor | Lee, Yoonsang | -
dc.relation.code | 2021000011 | -
dc.sector.campus | S | -
dc.sector.daehak | COLLEGE OF ENGINEERING[S] | -
dc.sector.department | SCHOOL OF COMPUTER SCIENCE | -
dc.identifier.pid | yoonsanglee | -
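
The abstract describes a step-level control structure that can be summarized compactly in code. Below is a minimal, hypothetical Python sketch of that structure, assuming a two-state (left/right stance) walking FSM: the learned policy is queried once at the start of each FSM state to produce the state's target pose and duration, and a SIMBICON-style linear balance feedback adjusts the swing-hip target within the state. Every identifier here (FSMWalkingController, the char_state keys, the gains c_d and c_v) is an illustrative assumption, not the authors' implementation.

# Hypothetical sketch of the step-level control loop described in the
# abstract. All names and gains below are illustrative assumptions.

class FSMWalkingController:
    """A two-state (left/right stance) walking FSM driven by a learned policy.

    At the start of each FSM state, the policy maps (character state,
    gait parameters) to a target pose for that state and the state's
    duration (transition timing). Within the state, PD servos would track
    the target pose, with a simple linear balance feedback term
    (SIMBICON-style) adjusting the swing-hip target.
    """

    def __init__(self, policy, c_d=0.5, c_v=0.2):
        self.policy = policy  # learned mapping; assumed given
        self.c_d = c_d        # balance gain on horizontal COM offset d
        self.c_v = c_v        # balance gain on horizontal COM velocity v
        self.target_pose = {}
        self.duration = 0.0
        self.t = 0.0

    def begin_step(self, char_state, gait_params):
        """Query the policy once per FSM state (i.e., once per step)."""
        action = self.policy(char_state, gait_params)
        self.target_pose = dict(action["target_pose"])  # per-joint targets (rad)
        self.duration = action["duration"]              # seconds until transition
        self.t = 0.0

    def joint_targets(self, char_state, dt):
        """Per-substep PD targets; returns (pose, transition_now)."""
        # Linear balance feedback: shift the swing-hip target by the
        # horizontal COM offset from the stance ankle and the COM velocity.
        # Assumes "swing_hip" is among the policy's per-joint targets.
        d = char_state["com_x"] - char_state["stance_ankle_x"]
        v = char_state["com_vx"]
        pose = dict(self.target_pose)
        pose["swing_hip"] += self.c_d * d + self.c_v * v
        self.t += dt
        return pose, self.t >= self.duration

Querying the policy only at state transitions keeps the action space small, while the embedded linear feedback stabilizes the character between policy decisions; the abstract credits exactly this combination with a positive synergistic effect.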
Appears in Collections:
COLLEGE OF ENGINEERING[S](공과대학) > COMPUTER SCIENCE(컴퓨터소프트웨어학부) > Articles
Files in This Item:
There are no files associated with this item.