Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | 이윤상 (Lee, Yoonsang) | - |
dc.date.accessioned | 2022-10-13T01:11:28Z | - |
dc.date.available | 2022-10-13T01:11:28Z | - |
dc.date.issued | 2021-01 | - |
dc.identifier.citation | IEEE ACCESS, v. 9, pp. 20662-20672 | en_US |
dc.identifier.issn | 2169-3536 | en_US |
dc.identifier.uri | https://ieeexplore.ieee.org/document/9337805 | en_US |
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/175298 | - |
dc.description.abstract | Recently, deep reinforcement learning (DRL) has been widely used to create controllers for physically simulated characters. Among DRL-based approaches, imitation learning for character control, which uses motion capture clips as tracking references, has shown successful results in controlling various motor skills with natural movement. However, the output motion tends to be constrained close to the reference motion, so learning various styles of motion requires many motion clips. In this paper, we present a DRL method for learning a finite state machine (FSM) based policy in a motion-free manner (without the use of any motion data), which controls a simulated character to produce a gait specified by desired gait parameters. The control policy learns to output the target pose for each FSM state and the transition timing between states, based on the character state at the beginning of each step and the user-specified gait parameters, such as the desired step length or maximum swing-foot height. The combination of FSM-based policy learning and the simple linear balance feedback embedded in the base controller has a positive synergistic effect on the performance of the learned policy. The learned policy allows the simulated character to walk as instructed by continuously changing gait parameters while responding to external perturbations. We demonstrate the effectiveness of our approach through interactive control, external-push, comparison, and ablation studies. | en_US |
dc.description.sponsorship | This work was supported in part by the National Research Foundation of Korea (NRF) grant funded by the Korea Government (MSIT) under Grant NRF-2019R1C1C1006778 and Grant NRF-2019R1A4A1029800, and in part by the Research Fund of Hanyang University under Grant HY-2018. | en_US |
dc.language.iso | en | en_US |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | en_US |
dc.subject | Character control; deep reinforcement learning; motion-free learning; locomotion control; physically based animation | en_US |
dc.title | Finite State Machine-Based Motion-Free Learning of Biped Walking | en_US |
dc.type | Article | en_US |
dc.relation.volume | 9 | - |
dc.identifier.doi | 10.1109/ACCESS.2021.3055241 | en_US |
dc.relation.page | 20662-20672 | - |
dc.relation.journal | IEEE ACCESS | - |
dc.contributor.googleauthor | Kang, Gyoo-Chul | - |
dc.contributor.googleauthor | Lee, Yoonsang | - |
dc.relation.code | 2021000011 | - |
dc.sector.campus | S | - |
dc.sector.daehak | COLLEGE OF ENGINEERING[S] | - |
dc.sector.department | SCHOOL OF COMPUTER SCIENCE | - |
dc.identifier.pid | yoonsanglee | - |
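The per-step control scheme described in the abstract — a policy queried once per FSM state that outputs a target pose and a transition time, combined with simple linear balance feedback in the base controller — can be sketched as follows. This is a minimal conceptual illustration only, not the paper's implementation: all names (`GaitParams`, `CharState`, `policy`), the placeholder policy mapping, and the SIMBICON-style feedback gains are assumptions for illustration.

```python
from dataclasses import dataclass


@dataclass
class GaitParams:
    step_length: float   # desired step length (m)
    swing_height: float  # desired maximum swing-foot height (m)


@dataclass
class CharState:
    com_offset: float    # horizontal COM offset from stance foot (m)
    com_velocity: float  # horizontal COM velocity (m/s)


def policy(state: CharState, params: GaitParams):
    """Stand-in for the learned policy: queried once at the start of each
    FSM state (i.e., once per step), it outputs the target pose for that
    state and the time at which to transition to the next state."""
    target_swing_x = params.step_length   # crude placeholder mapping
    target_swing_z = params.swing_height
    transition_time = 0.4                 # seconds until the next FSM state
    return (target_swing_x, target_swing_z), transition_time


def balance_feedback(target_x: float, state: CharState,
                     c_d: float = 0.5, c_v: float = 0.2) -> float:
    """Simple linear balance feedback (SIMBICON-style): shift the
    swing-foot target by a linear function of COM offset and velocity."""
    return target_x + c_d * state.com_offset + c_v * state.com_velocity


state = CharState(com_offset=0.05, com_velocity=0.3)
params = GaitParams(step_length=0.6, swing_height=0.1)

(tx, tz), dt = policy(state, params)
tx_adjusted = balance_feedback(tx, state)
print(tx_adjusted, tz, dt)  # -> 0.685 0.1 0.4
```

In this sketch the learned component decides only the step-level targets and timing, while the fixed linear feedback continuously stabilizes the character between policy queries — the division of labor the abstract credits for the synergistic effect.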
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.