Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | 이병주 | - |
dc.date.accessioned | 2019-07-03T05:56:06Z | - |
dc.date.available | 2019-07-03T05:56:06Z | - |
dc.date.issued | 2007-10 | - |
dc.identifier.citation | 2007 International Conference on Control, Automation and Systems, Page. 1222-1227 | en_US |
dc.identifier.isbn | 978-89-950038-6-2 | - |
dc.identifier.uri | https://ieeexplore.ieee.org/document/4406521 | - |
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/107054 | - |
dc.description.abstract | When the Stewart Platform mechanism is modified so that it has three RRPS-type struts, each with an active prismatic joint, and is constrained by an additional serial passive PPPRRR-type subchain, the modified mechanism can be reconfigured into various non-redundant 3-degree-of-freedom mechanisms depending on which three joints of the passive PPPRRR subchain are locked or unlocked during operation. This class of modified Stewart Platform mechanisms has a distinctive feature: with only three active prismatic joints in the struts, the mechanism can reach the whole six-degree-of-freedom output workspace by properly controlling the lock/unlock states of the six passive joints of the PPPRRR serial subchain. In this paper, this advantageous feature is investigated and verified through simulation. To this end, trajectory planning of the modified 3-degree-of-freedom Stewart Platform mechanism is studied in static environments where obstacles are sparsely placed. The objective of the trajectory planning is to find a path that maintains good kinematic isotropy while avoiding obstacles, switching to better 3-degree-of-freedom configurations along the trajectory when necessary, for given initial and final configurations of the robot in six-degree-of-freedom operational space. To find such a path, the Q-learning algorithm, a reinforcement learning method, is employed. | en_US |
dc.language.iso | en_US | en_US |
dc.publisher | IEEE | en_US |
dc.subject | Stewart Platform | en_US |
dc.subject | Parallel Mechanism | en_US |
dc.subject | Trajectory Planning | en_US |
dc.subject | Q-learning | en_US |
dc.title | Trajectory Planning of 6 degree of freedom operational space for the 3 degree of freedom mechanism configured by constraining the stewart platform structure | en_US |
dc.type | Article | en_US |
dc.identifier.doi | 10.1109/ICCAS.2007.4406521 | - |
dc.contributor.googleauthor | Choi, M. | - |
dc.contributor.googleauthor | Kim, W. | - |
dc.contributor.googleauthor | Yi, B.-J. | - |
dc.sector.campus | E | - |
dc.sector.daehak | COLLEGE OF ENGINEERING SCIENCES[E] | - |
dc.sector.department | DIVISION OF ELECTRICAL ENGINEERING | - |
dc.identifier.pid | bj | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.