
Full metadata record

DC Field | Value | Language
dc.contributor.author | 홍승호 | -
dc.date.accessioned | 2019-01-08T02:09:02Z | -
dc.date.available | 2019-01-08T02:09:02Z | -
dc.date.issued | 2018-06 | -
dc.identifier.citation | APPLIED ENERGY, v. 220, Page. 220-230 | en_US
dc.identifier.issn | 0306-2619 | -
dc.identifier.uri | https://www.sciencedirect.com/science/article/pii/S0306261918304112 | -
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/81117 | -
dc.description.abstract | With the modern advanced information and communication technologies in smart grid systems, demand response (DR) has become an effective method for improving grid reliability and reducing energy costs, owing to its ability to react quickly to supply-demand mismatches by adjusting flexible loads on the demand side. This paper proposes a dynamic pricing DR algorithm for energy management in a hierarchical electricity market that considers both the service provider's (SP) profit and customers' (CUs) costs. Reinforcement learning (RL) is used to illustrate the hierarchical decision-making framework, in which the dynamic pricing problem is formulated as a discrete finite Markov decision process (MDP) and Q-learning is adopted to solve this decision-making problem. Using RL, the SP can adaptively decide the retail electricity price during the on-line learning process, in which the uncertainty of CUs' load demand profiles and the flexibility of wholesale electricity prices are addressed. Simulation results show that the proposed DR algorithm can promote SP profitability, reduce energy costs for CUs, balance energy supply and demand in the electricity market, and improve the reliability of electric power systems, which can be regarded as a win-win strategy for both SP and CUs. | en_US
dc.description.sponsorship | This work was partially supported by National Research Foundation of Korea (NRF-2016K2A9A2A11938310), and in part by Ansan-Si hidden champion fostering and supporting project (Development of Industrial Communication-IoT Gateway & Industrial Wireless IoT Sensor) funded by Ansan city, and partially supported by the Human Resources Program in Energy Technology of the Korea Institute of Energy Technology Evaluation and Planning (KETEP). | en_US
dc.language.iso | en_US | en_US
dc.publisher | ELSEVIER SCI LTD | en_US
dc.subject | Demand response | en_US
dc.subject | Dynamic pricing | en_US
dc.subject | Artificial intelligence | en_US
dc.subject | Reinforcement learning | en_US
dc.subject | Markov decision process | en_US
dc.subject | Q-learning | en_US
dc.title | A Dynamic pricing demand response algorithm for smart grid: Reinforcement learning approach | en_US
dc.type | Article | en_US
dc.relation.volume | 220 | -
dc.identifier.doi | 10.1016/j.apenergy.2018.03.072 | -
dc.relation.page | 220-230 | -
dc.relation.journal | APPLIED ENERGY | -
dc.contributor.googleauthor | Lu, Renzhi | -
dc.contributor.googleauthor | Hong, Seung Ho | -
dc.contributor.googleauthor | Zhang, Xiongfeng | -
dc.relation.code | 2018002084 | -
dc.sector.campus | E | -
dc.sector.daehak | COLLEGE OF ENGINEERING SCIENCES[E] | -
dc.sector.department | DIVISION OF ELECTRICAL ENGINEERING | -
dc.identifier.pid | shhong | -
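
The abstract above frames dynamic retail pricing as a discrete finite MDP solved with Q-learning. Below is a minimal sketch of that idea, assuming hypothetical discretized demand states, a small set of retail price levels, and toy customer-response and profit functions (simulate_customers, sp_profit); it is an illustration under those assumptions, not the authors' implementation.

```python
# Minimal Q-learning sketch of a dynamic-pricing demand-response loop,
# following the framing in the abstract. All names and numbers here
# (PRICE_LEVELS, simulate_customers, sp_profit, discretization bounds)
# are hypothetical stand-ins, not the authors' implementation.
import random

PRICE_LEVELS = [0.05, 0.10, 0.15, 0.20]   # retail price actions in $/kWh (assumed)
DEMAND_LEVELS = 5                          # discretized aggregate-demand states (assumed)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1      # learning rate, discount factor, exploration rate

# Q-table: rows are discretized demand states, columns are price actions.
Q = [[0.0] * len(PRICE_LEVELS) for _ in range(DEMAND_LEVELS)]

def simulate_customers(price):
    """Toy stand-in for customers' price-responsive aggregate load (kWh)."""
    base = 100.0
    flexible = 40.0 * (1.0 - price / max(PRICE_LEVELS))  # higher price -> less flexible load
    return base + flexible + random.uniform(-5.0, 5.0)

def sp_profit(price, demand, wholesale_price=0.08):
    """Toy service-provider reward: retail revenue minus wholesale purchase cost."""
    return (price - wholesale_price) * demand

def discretize(demand, lo=95.0, hi=140.0):
    """Map continuous demand onto one of DEMAND_LEVELS states."""
    idx = int((demand - lo) / (hi - lo) * DEMAND_LEVELS)
    return min(max(idx, 0), DEMAND_LEVELS - 1)

state = discretize(simulate_customers(random.choice(PRICE_LEVELS)))
for episode in range(5000):
    # Epsilon-greedy selection of the next retail price level.
    if random.random() < EPSILON:
        action = random.randrange(len(PRICE_LEVELS))
    else:
        action = Q[state].index(max(Q[state]))

    price = PRICE_LEVELS[action]
    demand = simulate_customers(price)
    reward = sp_profit(price, demand)
    next_state = discretize(demand)

    # Standard Q-learning update toward the observed reward plus discounted best future value.
    Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
    state = next_state

print("Greedy retail price learned for each demand state:")
for s, row in enumerate(Q):
    best = row.index(max(row))
    print(f"  state {s}: {PRICE_LEVELS[best]:.2f} $/kWh")
```

Running the loop prints the greedy retail price the toy SP learns for each discretized demand state. The paper's actual formulation also accounts for CUs' costs and wholesale price variations in the decision process, which this sketch only gestures at through the fixed wholesale_price parameter.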
Appears in Collections:
COLLEGE OF ENGINEERING SCIENCES[E](공학대학) > ELECTRICAL ENGINEERING(전자공학부) > Articles
Files in This Item:
There are no files associated with this item.

