Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | 이웅 | - |
dc.date.accessioned | 2022-07-27T00:55:57Z | - |
dc.date.available | 2022-07-27T00:55:57Z | - |
dc.date.issued | 2021-04 | - |
dc.identifier.citation | SAE Technical Papers, Page. 1-8 | en_US |
dc.identifier.issn | 0148-7191 | - |
dc.identifier.issn | 2688-3627 | - |
dc.identifier.uri | https://saemobilus.sae.org/content/2021-01-0434/ | - |
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/171753 | - |
dc.description.abstract | As connectivity and sensing technologies become more mature, automated vehicles can predict future driving situations and utilize this information to drive more energy-efficiently than human-driven vehicles. However, future information beyond the limited connectivity and sensing range is difficult to predict and utilize, limiting the energy-saving potential of energy-efficient driving. Thus, we combine a conventional speed optimization planner, developed in our previous work, and reinforcement learning to propose a real-time intelligent speed optimization planner for connected and automated vehicles. We briefly summarize the conventional speed optimization planner with limited information, based on closed-form energy-optimal solutions, and present its multiple parameters that determine reference speed trajectories. Then, we use a deep reinforcement learning (DRL) algorithm, such as a deep Q-learning algorithm, to find the policy of how to adjust these parameters in real time to dynamically changing situations in order to realize the full potential of energy-efficient driving. The model-free DRL algorithm, based on the experience of the system, can learn the optimal policy through iteratively interacting with different driving scenarios without increasing the limited connectivity and sensing range. The training process of the parameter adaptation policy exploits a high-fidelity simulation framework that can simulate multiple vehicles with full powertrain models and the interactions between vehicles and their environment. We consider intersection-approaching scenarios where there is one traffic light with different signal phase and timing setups. Results show that the learned optimal policy enables the proposed intelligent speed optimization planner to properly adjust the parameters in a piecewise constant manner, leading to additional energy savings without increasing total travel time compared to the conventional speed optimization planner. | en_US |
dc.description.sponsorship | This report and the work described were sponsored by the U.S. Department of Energy (DOE) Vehicle Technologies Office (VTO) under the Systems and Modeling for Accelerated Research in Transportation (SMART) Mobility Laboratory Consortium, an initiative of the Energy Efficient Mobility Systems (EEMS) Program. The authors would like to thank David Anderson, manager in the DOE Office of Energy Efficiency and Renewable Energy (EERE), for playing an important role in establishing the project concept, advancing implementation, and providing ongoing guidance. | en_US |
dc.language.iso | en | en_US |
dc.publisher | SAE International | en_US |
dc.title | A Real-Time Intelligent Speed Optimization Planner Using Reinforcement Learning | en_US |
dc.type | Article | en_US |
dc.identifier.doi | 10.4271/2021-01-0434 | - |
dc.relation.page | 1-8 | - |
dc.relation.journal | SAE Technical Papers | - |
dc.contributor.googleauthor | Lee, Woong | - |
dc.relation.code | 2021034502 | - |
dc.sector.campus | E | - |
dc.sector.daehak | RESEARCH INSTITUTE[E] | - |
dc.sector.department | RESEARCH INSTITUTE OF ENGINEERING & TECHNOLOGY | - |
dc.identifier.pid | dldndnd12 | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
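The abstract describes a model-free RL agent that learns to adjust the planner's parameters in a piecewise constant way while approaching a traffic light. The sketch below illustrates that idea in miniature; it is not the paper's implementation. The paper uses deep Q-learning with a high-fidelity vehicle simulator, whereas this self-contained toy uses tabular Q-learning, a stand-in "energy" reward, and a discretized distance-to-light state. All names, parameter ranges, and the reward shape are hypothetical.

```python
import random

# Toy sketch (hypothetical): a Q-learning agent nudges one planner parameter
# (e.g., a reference cruise speed) up/down/keep at each decision step, so the
# resulting parameter trajectory is piecewise constant between adjustments.
ACTIONS = (-1.0, 0.0, +1.0)   # decrease / keep / increase the parameter
STATES = range(5)             # discretized distance-to-traffic-light bins

def step(state, param, action):
    """Toy transition: advance one bin toward the light; the reward is a
    stand-in energy proxy penalizing deviation from a state-dependent optimum
    (the real work evaluates energy with full powertrain models)."""
    param = min(30.0, max(10.0, param + action))   # clamp to a plausible range
    optimum = 15.0 + state                          # pretend optimum shifts with state
    reward = -abs(param - optimum)
    next_state = min(max(STATES), state + 1)
    return next_state, param, reward

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in range(len(ACTIONS))}
    for _ in range(episodes):
        state, param = 0, 20.0
        for _ in range(len(STATES)):
            if rng.random() < eps:                  # explore
                a = rng.randrange(len(ACTIONS))
            else:                                   # exploit current estimate
                a = max(range(len(ACTIONS)), key=lambda i: q[(state, i)])
            nxt, param, r = step(state, param, ACTIONS[a])
            best_next = max(q[(nxt, i)] for i in range(len(ACTIONS)))
            q[(state, a)] += alpha * (r + gamma * best_next - q[(state, a)])
            state = nxt
    return q

if __name__ == "__main__":
    q = train()
    greedy = [max(range(len(ACTIONS)), key=lambda i: q[(s, i)]) for s in STATES]
    print("greedy action index per state:", greedy)
```

Replacing the Q-table with a neural network and the toy reward with simulated energy consumption recovers the structure the abstract describes: the planner's closed-form solutions still generate the speed trajectory, while the learned policy only retunes the parameters as the situation changes.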