
Full metadata record

DC Field | Value | Language
dc.contributor.author | 이웅 | -
dc.date.accessioned | 2022-07-27T00:55:57Z | -
dc.date.available | 2022-07-27T00:55:57Z | -
dc.date.issued | 2021-04 | -
dc.identifier.citation | SAE Technical Papers, pp. 1-8 | en_US
dc.identifier.issn | 0148-7191 | -
dc.identifier.issn | 2688-3627 | -
dc.identifier.uri | https://saemobilus.sae.org/content/2021-01-0434/ | -
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/171753 | -
dc.description.abstract | As connectivity and sensing technologies become more mature, automated vehicles can predict future driving situations and utilize this information to drive more energy-efficiently than human-driven vehicles. However, future information beyond the limited connectivity and sensing range is difficult to predict and utilize, limiting the energy-saving potential of energy-efficient driving. Thus, we combine a conventional speed optimization planner, developed in our previous work, and reinforcement learning to propose a real-time intelligent speed optimization planner for connected and automated vehicles. We briefly summarize the conventional speed optimization planner with limited information, based on closed-form energy-optimal solutions, and present its multiple parameters that determine reference speed trajectories. Then, we use a deep reinforcement learning (DRL) algorithm, such as a deep Q-learning algorithm, to find the policy of how to adjust these parameters in real-time to dynamically changing situations in order to realize the full potential of energy-efficient driving. The model-free DRL algorithm, based on the experience of the system, can learn the optimal policy through iteratively interacting with different driving scenarios without increasing the limited connectivity and sensing range. The training process of the parameter adaptation policy exploits a high-fidelity simulation framework that can simulate multiple vehicles with full powertrain models and the interactions between vehicles and their environment. We consider intersection-approaching scenarios where there is one traffic light with different signal phase and timing setups. Results show that the learned optimal policy enables the proposed intelligent speed optimization planner to properly adjust the parameters in a piecewise constant manner, leading to additional energy savings without increasing total travel time compared to the conventional speed optimization planner. | en_US
dc.description.sponsorship | This report and the work described were sponsored by the U.S. Department of Energy (DOE) Vehicle Technologies Office (VTO) under the Systems and Modeling for Accelerated Research in Transportation (SMART) Mobility Laboratory Consortium, an initiative of the Energy Efficient Mobility Systems (EEMS) Program. The authors would like to thank David Anderson, manager in the DOE Office of Energy Efficiency and Renewable Energy (EERE), for playing an important role in establishing the project concept, advancing implementation, and providing ongoing guidance. | en_US
dc.language.iso | en | en_US
dc.publisher | SAE International | en_US
dc.title | A Real-Time Intelligent Speed Optimization Planner Using Reinforcement Learning | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.4271/2021-01-0434 | -
dc.relation.page | 1-8 | -
dc.relation.journal | SAE Technical Papers | -
dc.contributor.googleauthor | Lee, Woong | -
dc.relation.code | 2021034502 | -
dc.sector.campus | E | -
dc.sector.daehak | RESEARCH INSTITUTE[E] | -
dc.sector.department | RESEARCH INSTITUTE OF ENGINEERING & TECHNOLOGY | -
dc.identifier.pid | dldndnd12 | -
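
The abstract above describes a deep Q-learning agent that learns to adjust the parameters of a conventional speed optimization planner in a piecewise constant manner as a connected and automated vehicle approaches a signalized intersection. The sketch below is a minimal, hypothetical illustration of such a parameter-adaptation loop only; the ToyIntersectionEnv, its state, reward, and three-action parameter adjustment are invented stand-ins and are not the authors' planner, powertrain models, or high-fidelity simulation framework.

# Hypothetical sketch: deep Q-learning over discrete adjustments to a
# speed-planner parameter while approaching a traffic light. All names,
# state/reward definitions, and dynamics below are illustrative assumptions.
import random
from collections import deque

import torch
import torch.nn as nn


class ToyIntersectionEnv:
    """Stand-in environment: state = (distance to light, speed, time to green)."""

    def reset(self):
        self.distance, self.speed, self.time_to_green = 300.0, 15.0, 20.0
        return self._obs()

    def _obs(self):
        return torch.tensor([self.distance / 300.0, self.speed / 30.0,
                             self.time_to_green / 60.0])

    def step(self, action):
        # Actions 0/1/2 = decrease / hold / increase the planner's target-speed parameter.
        self.speed = min(30.0, max(1.0, self.speed + (action - 1) * 1.0))
        self.distance -= self.speed
        self.time_to_green = max(0.0, self.time_to_green - 1.0)
        done = self.distance <= 0.0
        # Toy reward: penalize an energy proxy (speed^2) and arriving before the green phase.
        reward = -0.001 * self.speed ** 2
        if done and self.time_to_green > 0.0:
            reward -= 1.0
        return self._obs(), reward, done


q_net = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 3))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)
gamma, epsilon = 0.99, 0.1

env = ToyIntersectionEnv()
for episode in range(200):
    state, done = env.reset(), False
    while not done:
        # Epsilon-greedy choice of the parameter adjustment.
        if random.random() < epsilon:
            action = random.randrange(3)
        else:
            with torch.no_grad():
                action = int(q_net(state).argmax())
        next_state, reward, done = env.step(action)
        replay.append((state, action, reward, next_state, done))
        state = next_state

        if len(replay) >= 64:
            # One-step TD update on a random minibatch (standard DQN ingredients).
            batch = random.sample(replay, 64)
            s, a, r, s2, d = zip(*batch)
            s, s2 = torch.stack(s), torch.stack(s2)
            a = torch.tensor(a).unsqueeze(1)
            r = torch.tensor(r)
            d = torch.tensor(d, dtype=torch.float32)
            q = q_net(s).gather(1, a).squeeze(1)
            with torch.no_grad():
                target = r + gamma * (1.0 - d) * q_net(s2).max(1).values
            loss = nn.functional.smooth_l1_loss(q, target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

The replay buffer and epsilon-greedy exploration are generic deep Q-learning components; in the paper the corresponding policy is instead trained against a high-fidelity multi-vehicle simulation with full powertrain models and varying signal phase and timing setups.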
Appears in Collections:
RESEARCH INSTITUTE[E](부설연구소) > RESEARCH INSTITUTE OF ENGINEERING & TECHNOLOGY(공학기술연구소) > Articles
Files in This Item:
There are no files associated with this item.



