Full metadata record

DC Field: Value (Language)
dc.contributor.author: 전상운
dc.date.accessioned: 2023-05-17T05:10:04Z
dc.date.available: 2023-05-17T05:10:04Z
dc.date.issued: 2023-04
dc.identifier.citation: IEEE ACCESS, v. 11, Page. 24737-24751
dc.identifier.issn: 2169-3536
dc.identifier.uri: https://ieeexplore.ieee.org/document/10064273/ (en_US)
dc.identifier.uri: https://repository.hanyang.ac.kr/handle/20.500.11754/180682
dc.description.abstract: In this paper, we consider the handover decision-making problem in a dense heterogeneous network with a macro base station and multiple small base stations. We propose a deep Q-learning-based algorithm that efficiently minimizes the overall energy consumption by taking into account both the energy consumed by transmission and overheads and various network information such as channel conditions and causal association information. The proposed algorithm is designed within the centralized training with decentralized execution (CTDE) framework, in which a centralized training agent manages the replay buffer for training its deep Q-network by gathering the state, action, and reward information reported by the distributed agents that execute the actions. We perform several numerical evaluations and demonstrate that the proposed algorithm provides significant energy savings over other contemporary mechanisms depending on overhead costs, especially when additional energy consumption is required for the handover procedure.
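To make the CTDE structure described in the abstract concrete, the following is a minimal Python/PyTorch sketch of that loop: distributed agents execute epsilon-greedy actions with a copy of the shared Q-network, and a central trainer collects their reported (state, action, reward, next state) transitions into a replay buffer and runs DQN updates on it. All names, dimensions, and the toy reward here (QNet, DistributedAgent, CentralTrainer, STATE_DIM, NUM_ACTIONS) are hypothetical placeholders under assumed details, not the authors' implementation.

# Minimal sketch of a centralized-training / decentralized-execution (CTDE)
# deep Q-learning loop. Hypothetical illustration; not the paper's code.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM, NUM_ACTIONS = 8, 4  # placeholders: network-state features, candidate base stations

class QNet(nn.Module):
    """Small deep Q-network mapping a network-state vector to action values."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, NUM_ACTIONS),
        )

    def forward(self, s):
        return self.net(s)

class DistributedAgent:
    """Executes actions locally using a copy of the trained Q-network."""
    def __init__(self, qnet, epsilon=0.1):
        self.qnet, self.epsilon = qnet, epsilon

    def act(self, state):
        if random.random() < self.epsilon:      # exploration
            return random.randrange(NUM_ACTIONS)
        with torch.no_grad():                   # greedy decentralized execution
            return int(self.qnet(state).argmax())

class CentralTrainer:
    """Owns the replay buffer and trains the shared Q-network."""
    def __init__(self, qnet, gamma=0.99):
        self.qnet, self.gamma = qnet, gamma
        self.buffer = deque(maxlen=10_000)
        self.opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)

    def report(self, s, a, r, s_next):
        # Distributed agents report executed transitions to the central agent.
        self.buffer.append((s, a, r, s_next))

    def train_step(self, batch_size=32):
        if len(self.buffer) < batch_size:
            return
        batch = random.sample(self.buffer, batch_size)
        s = torch.stack([b[0] for b in batch])
        a = torch.tensor([b[1] for b in batch])
        r = torch.tensor([b[2] for b in batch])
        s2 = torch.stack([b[3] for b in batch])
        # One-step TD target (no separate target network in this sketch).
        q = self.qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + self.gamma * self.qnet(s2).max(dim=1).values
        loss = nn.functional.mse_loss(q, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()

# Toy interaction loop: agents act on stand-in "network states" and report a
# negative energy cost as reward. Agents share the qnet object directly here;
# a real system would periodically push trained parameters to the agents.
qnet = QNet()
trainer = CentralTrainer(qnet)
agents = [DistributedAgent(qnet) for _ in range(3)]
for step in range(200):
    for agent in agents:
        s = torch.randn(STATE_DIM)
        a = agent.act(s)
        r = -random.random()          # placeholder for measured energy cost
        s2 = torch.randn(STATE_DIM)
        trainer.report(s, a, r, s2)
    trainer.train_step()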
dc.description.sponsorship: The work of Yujae Song was supported by the project titled "Development of polar region communication technology and equipment for Internet of Extreme Things (IoET)," funded by the Ministry of Science and ICT (MSIT). The work of Sung Hoon Lim was supported by the National Research Foundation of Korea (NRF), funded by the Ministry of Education, Science and Technology, under Grant NRF-2020R1F1A1074926. The work of Sang-Woon Jeon was supported by the NRF, funded by the Ministry of Education, Science and Technology, MSIT, under Grant NRF-2020R1C1C1013806.
dc.language: en
dc.publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
dc.subject: Handover
dc.subject: Q-learning
dc.subject: Training
dc.subject: Energy consumption
dc.subject: Decision making
dc.subject: Resource management
dc.subject: Rayleigh channels
dc.subject: Deep learning
dc.subject: centralized training decentralized execution
dc.subject: energy minimization
dc.subject: heterogeneous networks
dc.subject: load balancing
dc.subject: reinforcement learning
dc.title: Handover Decision Making for Dense HetNets: A Reinforcement Learning Approach
dc.type: Article
dc.relation.volume: 11
dc.identifier.doi: 10.1109/ACCESS.2023.3254557
dc.relation.page: 24737-24751
dc.relation.journal: IEEE ACCESS
dc.contributor.googleauthor: Song, Yujae
dc.contributor.googleauthor: Lim, Sung Hoon
dc.contributor.googleauthor: Jeon, Sang-Woon
dc.sector.campus: E
dc.sector.daehak: COLLEGE OF ENGINEERING SCIENCES (공학대학)
dc.sector.department: MILITARY INFORMATION ENGINEERING (국방정보공학과)
dc.identifier.pid: sangwoonjeon
Appears in Collections:
COLLEGE OF ENGINEERING SCIENCES[E](공학대학) > MILITARY INFORMATION ENGINEERING(국방정보공학과) > Articles
Files in This Item:
There are no files associated with this item.