Full metadata record

DC Field | Value | Language
dc.contributor.author | 김완수 (Kim, Wansoo) | -
dc.date.accessioned | 2024-06-17T01:55:00Z | -
dc.date.available | 2024-06-17T01:55:00Z | -
dc.date.issued | 2023-11 | -
dc.identifier.citation | 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), pp. 1530-1536 | en_US
dc.identifier.issn | 1944-9437 | en_US
dc.identifier.issn | 1944-9445 | en_US
dc.identifier.uri | https://ieeexplore.ieee.org/abstract/document/10309384 | en_US
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/190764 | -
dc.description.abstract | To control a lower-limb exoskeleton robot effectively, it is essential to accurately recognize the user's status and environmental conditions. Previous studies have typically addressed these recognition challenges with independent models for each task, resulting in an inefficient model development process. In this study, we propose a multitask learning approach that addresses multiple recognition challenges simultaneously. This approach can enhance data efficiency by enabling knowledge sharing between the recognition models. We demonstrate the effectiveness of this approach using gait phase recognition (GPR) and terrain classification (TC), the most common recognition tasks for lower-limb exoskeleton robots, as examples. We first created a high-performing GPR model that achieved a root-mean-square error (RMSE) of 2.345 ± 0.08 and then reused its knowledge-sharing backbone feature network to learn a TC model with an extremely limited dataset. Using a limited dataset for the TC model allows us to validate the data efficiency of the proposed multitask learning approach. We compared the accuracy of the proposed TC model against other TC baseline models. The proposed model achieved 99.5 ± 0.044% accuracy with the limited dataset, outperforming the baseline models and demonstrating its effectiveness in terms of data efficiency. Future research will focus on extending the multitask learning framework to additional recognition tasks. | en_US
dc.language | en_US | en_US
dc.publisher | IEEE | en_US
dc.relation.ispartofseries | ;1530-1536 | -
dc.subject | Training | en_US
dc.subject | Solid modeling | en_US
dc.subject | Exoskeletons | en_US
dc.subject | Data models | en_US
dc.subject | Mathematical models | en_US
dc.subject | Sensors | en_US
dc.subject | Convolutional neural networks | en_US
dc.title | Multitask Learning for Multiple Recognition Tasks: A Framework for Lower-limb Exoskeleton Robot Applications | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.1109/RO-MAN57019.2023.10309384 | en_US
dc.relation.page | 1530-1536 | -
dc.contributor.googleauthor | Kim, Joonhyun | -
dc.contributor.googleauthor | Ha, Seongmin | -
dc.contributor.googleauthor | Shin, Dongbin | -
dc.contributor.googleauthor | Ham, Seoyeon | -
dc.contributor.googleauthor | Jang, Jaepil | -
dc.contributor.googleauthor | Kim, Wansoo | -
dc.sector.campus | E | -
dc.sector.daehak | COLLEGE OF ENGINEERING SCIENCES[E] | -
dc.sector.department | DEPARTMENT OF ROBOTICS | -
dc.identifier.pid | wansookim | -
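The shared-backbone idea described in the abstract — train one task on ample data, then reuse its backbone feature network so a second task can be learned from a very small dataset — can be sketched as follows. Everything here (the random-feature backbone, least-squares task heads, dimensions, and synthetic data) is a hypothetical illustration for intuition only, not the paper's actual architecture or results:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared backbone: a fixed nonlinear feature map from raw
# 16-dim sensor windows to a 32-dim shared representation. In the paper
# this role is played by a learned feature network; a frozen random
# projection is used here purely as a stand-in.
W_backbone = rng.normal(size=(16, 32))

def backbone(x):
    """Shared feature network reused by both task heads."""
    return np.tanh(x @ W_backbone)

# --- Task 1 (GPR analog): continuous regression, trained on ample data ---
X_gpr = rng.normal(size=(200, 16))        # 200 synthetic training windows
y_gpr = np.sin(X_gpr[:, 0])               # synthetic continuous target
F_gpr = backbone(X_gpr)
# Linear regression head on top of the shared features.
w_gpr, *_ = np.linalg.lstsq(F_gpr, y_gpr, rcond=None)

# --- Task 2 (TC analog): classification from an extremely limited dataset ---
# Only 10 samples per synthetic "terrain"; the backbone is reused as-is,
# so only the small classification head must be fit.
X_tc = np.vstack([rng.normal(-2.0, 0.3, size=(10, 16)),
                  rng.normal(+2.0, 0.3, size=(10, 16))])
y_tc = np.array([-1.0] * 10 + [1.0] * 10)  # +/-1 class labels
F_tc = backbone(X_tc)
w_tc, *_ = np.linalg.lstsq(F_tc, y_tc, rcond=None)

# Training accuracy of the limited-data classifier head.
acc = float(np.mean(np.sign(F_tc @ w_tc) == np.sign(y_tc)))
```

Because both heads consume the same backbone features, the second task only has to fit a small linear head, which is why it can get by with far less data — the mechanism the abstract credits for the TC model's data efficiency.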
Appears in Collections:
COLLEGE OF ENGINEERING SCIENCES[E] (College of Engineering) > ETC
Files in This Item:
There are no files associated with this item.



