
Full metadata record

DC Field: Value
dc.contributor.author: 김남욱
dc.date.accessioned: 2023-01-04T02:02:29Z
dc.date.available: 2023-01-04T02:02:29Z
dc.date.issued: 2021-04
dc.identifier.citation: SAE Technical Papers, pp. 1-8
dc.identifier.issn: 0148-7191; 2688-3627
dc.identifier.uri: https://saemobilus.sae.org/content/2021-01-0434/en_US
dc.identifier.uri: https://repository.hanyang.ac.kr/handle/20.500.11754/178693
dc.description.abstract: As connectivity and sensing technologies become more mature, automated vehicles can predict future driving situations and use this information to drive more energy-efficiently than human-driven vehicles. However, future information beyond the limited connectivity and sensing range is difficult to predict and utilize, limiting the energy-saving potential of energy-efficient driving. Thus, we combine a conventional speed optimization planner, developed in our previous work, with reinforcement learning to propose a real-time intelligent speed optimization planner for connected and automated vehicles. We briefly summarize the conventional speed optimization planner with limited information, based on closed-form energy-optimal solutions, and present its multiple parameters that determine reference speed trajectories. Then, we use a deep reinforcement learning (DRL) algorithm, such as deep Q-learning, to find a policy for adjusting these parameters in real time to dynamically changing situations, in order to realize the full potential of energy-efficient driving. The model-free DRL algorithm can learn the optimal policy from the system's experience by iteratively interacting with different driving scenarios, without increasing the limited connectivity and sensing range. The training process of the parameter adaptation policy exploits a high-fidelity simulation framework that can simulate multiple vehicles with full powertrain models and the interactions between vehicles and their environment. We consider intersection-approaching scenarios with one traffic light under different signal phase and timing setups. Results show that the learned optimal policy enables the proposed intelligent speed optimization planner to properly adjust the parameters in a piecewise constant manner, leading to additional energy savings without increasing total travel time compared to the conventional speed optimization planner. © 2021 SAE International; UChicago Argonne, LLC.
dc.language: en
dc.publisher: SAE International
dc.title: A Real-Time Intelligent Speed Optimization Planner Using Reinforcement Learning
dc.type: Article
dc.identifier.doi: 10.4271/2021-01-0434
dc.relation.page: 1-8
dc.relation.journal: SAE Technical Papers
dc.contributor.googleauthor: Lee, Woong
dc.contributor.googleauthor: Han, Jihun
dc.contributor.googleauthor: Zhang, Yaoahong
dc.contributor.googleauthor: Karbowski, Dominik
dc.contributor.googleauthor: Rousseau, Aymeric
dc.contributor.googleauthor: Kim, Namwook
dc.sector.campus: E
dc.sector.daehak: 공학대학 (College of Engineering)
dc.sector.department: 기계공학과 (Department of Mechanical Engineering)
dc.identifier.pid: nwkim
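
The abstract above describes a model-free deep Q-learning agent that adjusts the speed planner's parameters in a piecewise constant manner while the vehicle approaches a signalized intersection. The paper's record contains no code, so the following is only a minimal illustrative sketch of such a parameter-adaptation agent: the state features, the discrete parameter adjustments (PARAM_DELTAS), the network size, and the reward shaping suggested in the comments are assumptions for illustration, not the authors' formulation, and the high-fidelity simulation loop that would generate transitions is omitted.

```python
# Minimal sketch (assumptions, not the paper's implementation): a DQN agent that
# picks piecewise-constant adjustments to one speed-planner parameter at each
# decision step during an intersection approach.

import random
from collections import deque

import torch
import torch.nn as nn

# Hypothetical state: [ego speed, distance to stop line, signal phase, time to phase change]
STATE_DIM = 4
# Hypothetical discrete actions: adjustments to a planner parameter (e.g. target speed, m/s)
PARAM_DELTAS = [-1.0, 0.0, +1.0]


class QNet(nn.Module):
    """Small fully connected network mapping a state to one Q-value per action."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, len(PARAM_DELTAS)),
        )

    def forward(self, x):
        return self.net(x)


q_net = QNet()
target_net = QNet()
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=50_000)   # stores (state, action, reward, next_state, done)
gamma, epsilon = 0.99, 0.1


def select_action(state):
    """Epsilon-greedy choice of a parameter adjustment at the current decision step."""
    if random.random() < epsilon:
        return random.randrange(len(PARAM_DELTAS))
    with torch.no_grad():
        return int(q_net(torch.tensor(state).float()).argmax())


def train_step(batch_size=64):
    """One DQN update: fit Q(s, a) toward r + gamma * max_a' Q_target(s', a')."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, r, s2, done = map(torch.tensor, zip(*batch))
    s, s2, r = s.float(), s2.float(), r.float()
    q_sa = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * target_net(s2).max(1).values * (1 - done.float())
    loss = nn.functional.smooth_l1_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()


# During training (simulation loop not shown): at each decision step, append
# (state, action_index, reward, next_state, done) to `replay`, where the reward
# could be e.g. -(energy_used + beta * step_time); periodically copy q_net's
# weights into target_net.
```

The piecewise constant behavior mentioned in the abstract follows naturally from this structure: the agent acts only at discrete decision steps, and between steps the planner keeps whatever parameter values the last action produced.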
Appears in Collections:
COLLEGE OF ENGINEERING SCIENCES[E](공학대학) > MECHANICAL ENGINEERING(기계공학과) > Articles
Files in This Item:
There are no files associated with this item.
