Full metadata record

dc.contributor.advisor: 서일홍
dc.contributor.author: 조남준
dc.date.accessioned: 2020-02-11T03:55:51Z
dc.date.available: 2020-02-11T03:55:51Z
dc.date.issued: 2020-02
dc.identifier.uri: https://repository.hanyang.ac.kr/handle/20.500.11754/123760
dc.identifier.uri: http://hanyang.dcollection.net/common/orgView/200000436940 (en_US)
dc.description.abstract: In this work, a unified framework is proposed for robots to learn, improve, and generalize motor skills while maximizing the strengths and overcoming the limitations of imitation learning, reinforcement learning, and deep learning through their combination. These motor skills are acquired through two types of learning processes: (i) a process that learns motor skills from small datasets without image data, and (ii) a process that learns motor skills from large datasets that include image data. These are called the imitation-learning (IL)-based and deep-learning (DL)-based learning processes, respectively. In the IL-based learning process, motor skills are acquired using a mixture of IL and reinforcement learning (RL) without image data. In contrast, in the DL-based learning process, a CNN-based motor skill is learned through IL, RL, and DL. Furthermore, this framework addresses a fundamental question that arises during the learning processes: is it reasonable to use all of the information obtained from human demonstrations to model motor skills? To answer this question, two new measures are defined to quantify motion significance and motion complexity from human demonstrations. Motion significance indicates the relative meaningfulness of each motion frame, where a motion frame is defined as the set of data points acquired from multiple motion trajectories at a given time index. Motion complexity indicates the number of meaningful motion frames involved in a set of human demonstrations. The two measures are designed to satisfy the requirements of neural complexity and motion granularity for continuous motion trajectories, and they attain small values for totally random or totally regular activities.
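The exact formulations of the two measures are given in the thesis itself; purely as a hypothetical illustration of what a frame-wise score over multiple demonstration trajectories can look like, one might normalize the per-frame spread across demonstrations (the function name and the variance-based proxy below are my assumptions, not the thesis's definitions):

```python
import numpy as np

def frame_significance(demos):
    """Score each motion frame of shape-(n_demos, n_frames, n_dims) demos.

    Hypothetical proxy (not the thesis's formula): a frame's score is its
    spread across demonstrations, normalized so all scores sum to one.
    """
    demos = np.asarray(demos, dtype=float)
    spread = demos.std(axis=0).sum(axis=1)   # per-frame spread, shape (n_frames,)
    total = spread.sum()
    if total == 0.0:                         # identical demos: uniform scores
        return np.full(demos.shape[1], 1.0 / demos.shape[1])
    return spread / total

# Four noisy straight-line demonstrations: five frames, one dimension each.
demos = np.stack([
    np.linspace(0.0, 1.0, 5)[:, None] + 0.01 * np.random.randn(5, 1)
    for _ in range(4)
])
w = frame_significance(demos)                # one score per motion frame
```

A real significance measure would also have to reward consistency, not just variability, which is why this is only a structural sketch of "one number per motion frame, derived from all demonstrations at that time index."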
These measures are used to improve the representability of key motor skills in accomplishing a robotic task and to select the significant information needed to learn motor skills from human demonstrations. For this purpose, a method is proposed that improves the performance of a Gaussian mixture model by weighting the data points of a training dataset according to their different levels of significance. Next, a method is proposed to model social interactions between humans and robots. Modeling such interactions is difficult because of the vast number of possible combinations of body joints; to resolve this issue, the method measures joint motion significance and uses it to select significant features from the entire feature set. Finally, the complexity measure is used to generate the order of motor skill transfer in the RL process: robots should not only optimize all motor skills of a robotic task but also transfer these motor skills to those of other tasks according to this order. In addition, two techniques are revisited for learning and executing motor skills. First, an autonomous motion segmentation method is presented that separates human demonstrations into several motion phases before motor skills are learned. Second, a motor skill planning method is presented that selects a goal-oriented and situation-adequate motor skill from multiple motor skills after they are learned. To evaluate and apply the framework, various manipulation tasks are performed with an actual robotic arm, and social interaction tasks are considered to verify an application of motion significance. The measures and processes associated with the framework are defined, evaluated, and discussed in individual experiments that highlight their characteristics in Chapters 2–4, together with guidelines and conclusions. The overall conclusion and future research plans are presented in Chapter 5.
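As far as I know, scikit-learn's `GaussianMixture.fit` does not accept per-sample weights, so the kind of significance-weighted mixture described above can be sketched with a small weighted EM loop. This is a hypothetical illustration, not the thesis's algorithm; the function name, the diagonal-covariance simplification, and the initialization scheme are my assumptions:

```python
import numpy as np

def weighted_gmm(X, w, K=2, iters=50, seed=0):
    """EM for a diagonal-covariance GMM in which point i carries weight w[i].

    Hypothetical sketch: every sufficient statistic in the M-step is scaled
    by the per-point weights, so more significant points pull the model harder.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    mu = X[rng.choice(n, K, replace=False)]          # init means at data points
    var = np.tile(X.var(axis=0) + 1e-6, (K, 1))
    pi = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step: log responsibilities under the current parameters.
        log_p = (-0.5 * (((X[:, None, :] - mu) ** 2 / var)
                         + np.log(2.0 * np.pi * var)).sum(axis=2)
                 + np.log(pi))
        log_p -= log_p.max(axis=1, keepdims=True)    # stabilize the softmax
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weight every responsibility by its point's significance.
        rw = r * w[:, None]
        nk = rw.sum(axis=0) + 1e-12
        pi = nk / nk.sum()
        mu = (rw.T @ X) / nk[:, None]
        var = (rw.T @ X ** 2) / nk[:, None] - mu ** 2 + 1e-6
    return pi, mu, var

# Two well-separated 1-D clusters; uniform weights reduce this to plain EM.
X = np.concatenate([np.random.randn(100, 1), np.random.randn(100, 1) + 6.0])
pi, mu, var = weighted_gmm(X, np.ones(len(X)), K=2)
```

Setting `w` from a frame-wise significance score instead of `np.ones` is what makes this "weighted" in the sense the abstract describes: low-significance points then contribute little to the mixture parameters.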
Finally, two additional techniques are revisited for segmenting human demonstrations and planning motor skills in Appendices A and B.
dc.publisher: 한양대학교
dc.title: Learning, Improving, and Generalizing Motor Skills for Autonomous Robot Manipulation: An Integration of Imitation Learning, Reinforcement Learning, and Deep Learning
dc.title.alternative: 자율 로봇 매니퓰레이션을 위한 로봇 운동 솜씨 학습, 개선 및 일반화: 모방 학습, 강화 학습, 그리고 심층신경망 학습의 통합
dc.type: Theses
dc.contributor.googleauthor: Nam Jun Cho
dc.contributor.alternativeauthor: 조남준
dc.sector.campus: S
dc.sector.daehak: 대학원
dc.sector.department: 전자컴퓨터통신공학과
dc.description.degree: Doctor
Appears in Collections:
GRADUATE SCHOOL[S](대학원) > ELECTRONICS AND COMPUTER ENGINEERING(전자컴퓨터통신공학과) > Theses (Ph.D.)
Files in This Item:
There are no files associated with this item.

