
Full metadata record

DC Field | Value | Language
dc.contributor.author | 남해운 | -
dc.date.accessioned | 2024-06-14T04:33:17Z | -
dc.date.available | 2024-06-14T04:33:17Z | -
dc.date.issued | 2023-10 | -
dc.identifier.citation | 2023 14th International Conference on Information and Communication Technology Convergence (ICTC), pp. 440-445 | en_US
dc.identifier.issn | 2162-1241 | en_US
dc.identifier.issn | 2162-1233 | en_US
dc.identifier.uri | https://ieeexplore.ieee.org/document/10393297 | en_US
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/190723 | -
dc.description.abstract | In real-world navigation with inexpensive 2D LiDARs, light reflection can cause navigation failure, and traditional SAC-based algorithms face further challenges: they are unable to train in highly randomized, sparsely rewarded environments, and they train slowly. In this paper, we propose a monocular camera combined with a depth estimation model as a substitute for the inexpensive 2D LiDAR, and we introduce a variant algorithm, Sharing Encoder Self-Attention Soft Actor Critic (SESA-SAC), for collision-free indoor navigation of mobile robots. To improve the efficiency of robot learning in sparse-reward environments, we collect expert data from 200 episodes and store it in a replay buffer; training then samples randomly from both exploration data and expert data, with no pre-training. To enhance training performance, we introduce a channel-wise self-attention structure and layer normalization into the network to learn better features, and we propose a shared feature extractor for more stable training. We train and test in GAZEBO, and the experimental results demonstrate that the proposed SESA-SAC algorithm outperforms the traditional SAC algorithm in convergence speed, stability, and efficiency on indoor navigation tasks. | en_US
dc.description.sponsorship | This work was supported by the Technology development Program (No. RS-2022-00164803) funded by the Ministry of SMEs and Startups (MSS, Korea). | en_US
dc.language | en_US | en_US
dc.publisher | IEEE | en_US
dc.relation.ispartofseries | ;440-445 | -
dc.subject | real-world | en_US
dc.subject | deep reinforcement learning | en_US
dc.subject | indoor navigation | en_US
dc.title | Visual-Based Deep Reinforcement Learning for Mobile Robot Obstacle Avoidance Navigation | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.1109/ICTC58733.2023.10393297 | en_US
dc.relation.page | 1-2 | -
dc.contributor.googleauthor | Nan, Zhiyuan | -
dc.contributor.googleauthor | Nam, Haewoon | -
dc.sector.campus | E | -
dc.sector.daehak | COLLEGE OF ENGINEERING SCIENCES[E] | -
dc.sector.department | SCHOOL OF ELECTRICAL ENGINEERING | -
dc.identifier.pid | hnam | -
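
The abstract describes a shared feature extractor with channel-wise self-attention and layer normalization, reused by the SAC actor and critics. No code is attached to this record, so the PyTorch sketch below only illustrates one plausible reading of that architecture; the module names, layer sizes, and the parameter-free channel attention are assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the authors' code) of an encoder with
# channel-wise self-attention and layer normalization, shared by the SAC
# actor and critics as the abstract describes.
import torch
import torch.nn as nn

class ChannelSelfAttention(nn.Module):
    """Self-attention computed across channels: each channel's flattened
    spatial map is one token, so the (C x C) attention matrix mixes
    channels rather than spatial positions (our reading of 'channel-wise')."""
    def __init__(self):
        super().__init__()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        tokens = x.flatten(2)                               # (B, C, H*W)
        scores = torch.bmm(tokens, tokens.transpose(1, 2))  # (B, C, C)
        attn = torch.softmax(scores / (h * w) ** 0.5, dim=-1)
        out = torch.bmm(attn, tokens).view(b, c, h, w)
        return x + out                                      # residual connection

class SharedEncoder(nn.Module):
    """One feature extractor whose output feeds both the policy and the
    Q-networks; sharing it is the 'sharing encoder' part of SESA-SAC."""
    def __init__(self, in_channels: int = 1, feat_dim: int = 256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            ChannelSelfAttention(),
            nn.Conv2d(64, 64, kernel_size=3, stride=2), nn.ReLU(),
        )
        self.head = nn.LazyLinear(feat_dim)  # infers the flattened size
        self.norm = nn.LayerNorm(feat_dim)   # the abstract's layer normalization

    def forward(self, depth_image: torch.Tensor) -> torch.Tensor:
        z = self.conv(depth_image).flatten(1)
        return self.norm(self.head(z))

# Usage: estimated-depth images from the monocular camera go in, and one
# normalized feature vector comes out for both the actor and critic heads.
feats = SharedEncoder()(torch.randn(8, 1, 64, 64))  # -> (8, 256)
```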
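
The abstract also says that expert data from 200 episodes is stored in a replay buffer and that updates sample randomly from both expert and exploration data, without pre-training. Below is a minimal sketch of such mixed sampling; the buffer capacities, the expert_ratio parameter, and the transition layout are hypothetical, since the abstract states only that sampling is random over both sources.

```python
# Hypothetical mixed replay sampling: each SAC update draws part of its
# batch from stored expert transitions and the rest from the agent's own
# exploration data, with no separate pre-training on the expert set.
import random
from collections import deque

Transition = tuple  # (obs, action, reward, next_obs, done) - assumed layout

expert_buffer: deque = deque(maxlen=50_000)        # filled once, from 200 expert episodes
exploration_buffer: deque = deque(maxlen=100_000)  # filled online during training

def sample_mixed_batch(batch_size: int = 256,
                       expert_ratio: float = 0.25) -> list:
    """Draw one training batch mixing expert and exploration transitions.

    expert_ratio is an assumed hyperparameter; the paper only says the
    sampling is random over both data sources.
    """
    n_expert = min(int(batch_size * expert_ratio), len(expert_buffer))
    n_explore = min(batch_size - n_expert, len(exploration_buffer))
    batch = random.sample(list(expert_buffer), n_expert)
    batch += random.sample(list(exploration_buffer), n_explore)
    random.shuffle(batch)  # avoid ordering the two sources within the batch
    return batch
```
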
Appears in Collections:
COLLEGE OF ENGINEERING SCIENCES[E](공학대학) > ELECTRICAL ENGINEERING(전자공학부) > Articles
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
