
Full metadata record

DC Field: Value [Language]
dc.contributor.author: 서일홍
dc.date.accessioned: 2018-03-12T05:06:36Z
dc.date.available: 2018-03-12T05:06:36Z
dc.date.issued: 2013-05
dc.identifier.citation: Institute of Electrical and Electronics Engineers, 2013, P.1323-1330 [en_US]
dc.identifier.isbn: 978-146735641-1
dc.identifier.issn: 1050-4729
dc.identifier.uri: http://ieeexplore.ieee.org/document/6630742/
dc.identifier.uri: http://hdl.handle.net/20.500.11754/45354
dc.description.abstract: In manipulation tasks, skills are usually modeled using continuous motion trajectories acquired in the task space. The trajectories obtained from multiple human demonstrations can be broadly divided into four kinds of portions, according to the spatial variation between demonstrations and the time spent in them: portions in which a long or short time is spent, and portions in which the spatial variation is large or small. Among these, the portions in which a long time is spent and the spatial variation is small (e.g., passing a thread through the eye of a needle) are usually modeled with only a few parameters, even when they represent the movement that is essential for achieving the task, because they change only slightly in the task space compared with the other portions. In fact, such portions should be modeled densely with more parameters (i.e., overfitted) to improve the performance of the skill, because their movements must be executed accurately for the task to succeed. In this paper, we propose a method for adaptively fitting these skills based on the temporal and spatial entropies calculated from a Gaussian mixture model. We found that the method retrieves more accurate motion trajectories than well-fitted models, while its estimation performance is generally higher than that of an overfitted model. To validate the proposed method, we present experimental results and evaluations using a robot arm that performed two tasks. [en_US]
dc.language.iso: en [en_US]
dc.publisher: IEEE [en_US]
dc.subject: Trajectory [en_US]
dc.subject: Entropy [en_US]
dc.subject: Painting [en_US]
dc.subject: Principal component analysis [en_US]
dc.subject: Motion segmentation [en_US]
dc.subject: Assembly [en_US]
dc.title: Skill Learning using Temporal and Spatial Entropies for Accurate Skill Acquisition [en_US]
dc.type: Article [en_US]
dc.identifier.doi: 10.1109/ICRA.2013.6630742
dc.relation.page: 1315-1322
dc.contributor.googleauthor: Lee, Sang Hyoung
dc.contributor.googleauthor: Han, Gyung Nam
dc.contributor.googleauthor: Suh, Il Hong
dc.contributor.googleauthor: You, Bum-Jae
dc.sector.campus: S
dc.sector.daehak: COLLEGE OF ENGINEERING[S]
dc.sector.department: DEPARTMENT OF ELECTRONIC ENGINEERING
dc.identifier.pid: ihsuh
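The abstract above describes computing temporal and spatial entropies from a Gaussian mixture model fitted to demonstration trajectories. The snippet below is a minimal sketch only, not the paper's formulation: it assumes demonstrations stacked as [time, x, y, z] samples, fits scikit-learn's GaussianMixture, and reports each component's differential entropy over the time dimension and over the spatial dimensions. The data, the dimension split, and the use of Gaussian differential entropy are all illustrative assumptions.

import numpy as np
from sklearn.mixture import GaussianMixture

def gaussian_entropy(cov):
    """Differential entropy of a multivariate Gaussian: 0.5 * ln((2*pi*e)^d * |cov|)."""
    cov = np.atleast_2d(cov)
    d = cov.shape[0]
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (d * np.log(2.0 * np.pi * np.e) + logdet)

# Stand-in demonstration data: rows of [t, x, y, z]; real use would stack
# time-stamped task-space samples from multiple human demonstrations.
rng = np.random.default_rng(0)
demos = rng.normal(size=(500, 4))

gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0).fit(demos)

for k, cov in enumerate(gmm.covariances_):
    temporal_h = gaussian_entropy(cov[:1, :1])  # entropy over the time dimension
    spatial_h = gaussian_entropy(cov[1:, 1:])   # entropy over the spatial dimensions
    print(f"component {k}: temporal H = {temporal_h:.3f}, spatial H = {spatial_h:.3f}")

In this reading, components with high temporal entropy but low spatial entropy would correspond to the slow, spatially constrained portions the abstract singles out for denser modeling.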
Appears in Collections:
COLLEGE OF ENGINEERING[S](공과대학) > ELECTRONIC ENGINEERING(융합전자공학부) > Articles
Files in This Item:
There are no files associated with this item.

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
