Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | 남해운 | - |
dc.date.accessioned | 2024-06-14T04:33:17Z | - |
dc.date.available | 2024-06-14T04:33:17Z | - |
dc.date.issued | 2023-10 | - |
dc.identifier.citation | 2023 14th International Conference on Information and Communication Technology Convergence (ICTC), pp. 440-445 | en_US |
dc.identifier.issn | 2162-1241 | en_US |
dc.identifier.issn | 2162-1233 | en_US |
dc.identifier.uri | https://ieeexplore.ieee.org/document/10393297 | en_US |
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/190723 | - |
dc.description.abstract | Inexpensive 2D LiDARs are prone to navigation failure caused by light reflection in real-world scenarios, and traditional SAC-based algorithms face challenges such as the inability to train in highly randomized, sparsely rewarded environments and slow training. In this paper, we propose a combination of a monocular camera and a depth estimation model as a substitute for the inexpensive 2D LiDAR, and we introduce a variant algorithm called Sharing Encoder Self-Attention Soft Actor Critic (SESA-SAC) for collision-free indoor navigation of mobile robots. To improve the efficiency of robot learning in sparse-reward environments, we collect expert data from 200 episodes and store it in a replay buffer; training then samples randomly from both exploration data and expert data, without pre-training. To enhance training performance, we introduce a channel-wise self-attention structure and layer normalization into the network so that it learns better features, and we propose a shared feature extractor for more stable training. We train and test in GAZEBO, and the experimental results demonstrate that the proposed SESA-SAC algorithm outperforms the traditional SAC algorithm in convergence speed, stability, and efficiency on indoor navigation tasks. (An illustrative sketch of the channel-wise self-attention block follows this record.) | en_US |
dc.description.sponsorship | This work was supported by the Technology development Program (No. RS-2022-00164803) funded by the Ministry of SMEs and Startups (MSS, Korea). | en_US |
dc.language | en_US | en_US |
dc.publisher | IEEE | en_US |
dc.relation.ispartofseries | 440-445 | - |
dc.subject | real-world | en_US |
dc.subject | deep reinforcement learning | en_US |
dc.subject | indoor navigation | en_US |
dc.title | Visual-Based Deep Reinforcement Learning for Mobile Robot Obstacle Avoidance Navigation | en_US |
dc.type | Article | en_US |
dc.identifier.doi | 10.1109/ICTC58733.2023.10393297 | en_US |
dc.relation.page | 440-445 | - |
dc.contributor.googleauthor | Nan, Zhiyuan | - |
dc.contributor.googleauthor | Nam, Haewoon | - |
dc.sector.campus | E | - |
dc.sector.daehak | COLLEGE OF ENGINEERING SCIENCES[E] | - |
dc.sector.department | SCHOOL OF ELECTRICAL ENGINEERING | - |
dc.identifier.pid | hnam | - |
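
The abstract above attributes part of SESA-SAC's improvement to a channel-wise self-attention structure combined with layer normalization in a shared encoder. The paper's actual architecture is not reproduced in this record, so the following is only a minimal sketch of what such a block could look like; the module name `ChannelSelfAttention`, the tensor shapes, and the scaling factor are all assumptions for illustration.

```python
# Hypothetical sketch of a channel-wise self-attention block with layer
# normalization, in the spirit of the SESA-SAC encoder described in the
# abstract. All names and shapes here are assumptions, not the paper's code.
import torch
import torch.nn as nn


class ChannelSelfAttention(nn.Module):
    """Self-attention over feature channels of a conv feature map, with layer norm."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Linear(channels, channels)
        self.key = nn.Linear(channels, channels)
        self.value = nn.Linear(channels, channels)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, H, W); flatten spatial dims into a token axis.
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)               # (b, h*w, c)
        q = self.query(tokens)                              # (b, h*w, c)
        k = self.key(tokens)
        v = self.value(tokens)
        # Channel-wise attention: a (c x c) affinity matrix between channels,
        # instead of the usual (h*w x h*w) spatial attention.
        attn = torch.softmax(
            q.transpose(1, 2) @ k / (h * w) ** 0.5, dim=-1  # (b, c, c)
        )
        out = (attn @ v.transpose(1, 2)).transpose(1, 2)    # (b, h*w, c)
        out = self.norm(tokens + out)                       # residual + LayerNorm
        return out.transpose(1, 2).reshape(b, c, h, w)
```

Because the block is shape-preserving, feeding a convolutional feature map such as `torch.randn(8, 64, 20, 15)` through `ChannelSelfAttention(64)` returns a tensor of the same shape, so a block of this kind could be dropped between convolutional layers of a shared actor-critic encoder.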