Full metadata record

DC Field: Value (Language)
dc.contributor.author: 이주현
dc.date.accessioned: 2023-12-22T01:48:14Z
dc.date.available: 2023-12-22T01:48:14Z
dc.date.issued: 2023-08
dc.identifier.citation: China Communications, v. 20, no. 8, pp. 78-88
dc.identifier.issn: 1673-5447
dc.identifier.uri: https://ieeexplore.ieee.org/document/10238405 (en_US)
dc.identifier.uri: https://repository.hanyang.ac.kr/handle/20.500.11754/187828
dc.description.abstract: Due to the fading characteristics of wireless channels and the burstiness of data traffic, handling congestion in ad-hoc networks with effective algorithms remains an open and challenging problem. In this paper, we focus on enabling congestion control to minimize network transmission delays through flexible power control. To solve the congestion problem effectively, we propose a distributed cross-layer scheduling algorithm empowered by graph-based multi-agent deep reinforcement learning. The algorithm adaptively adjusts the transmit power in real time based only on local information (i.e., channel state information and queue length) and local communication (i.e., information exchanged with neighbors). Moreover, the training complexity of the algorithm is low, owing to regional cooperation based on a graph attention network. In the evaluation, we show that our algorithm reduces the transmission delay of data flows under severe signal interference and drastically changing channel states, and we demonstrate its adaptability and stability across different topologies. The method is general and can be extended to various types of topologies.
dc.description.sponsorship: This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2022-00155885, Artificial Intelligence Convergence Innovation Human Resources Development (Hanyang University ERICA)). This work was also supported by the National Natural Science Foundation of China under Grant No. 61971264 and the National Natural Science Foundation of China/Research Grants Council Collaborative Research Scheme under Grant No. 62261160390.
dc.language: en
dc.publisher: China Institute of Communications
dc.subject: Ad-hoc network
dc.subject: cross-layer scheduling
dc.subject: multi-agent deep reinforcement learning
dc.subject: interference elimination
dc.subject: power control
dc.subject: queue scheduling
dc.subject: actor-critic methods
dc.subject: Markov decision process
dc.title: Multi-agent deep reinforcement learning for cross-layer scheduling in mobile ad-hoc networks
dc.type: Article
dc.relation.no: 8
dc.relation.volume: 20
dc.identifier.doi: 10.23919/JCC.fa.2022-0496.202308
dc.relation.page: 78-88
dc.relation.journal: China Communications
dc.contributor.googleauthor: Zheng, Xinxing
dc.contributor.googleauthor: Zhao, Yu
dc.contributor.googleauthor: Lee, Joohyun
dc.contributor.googleauthor: Chen, Wei
dc.sector.campus: E
dc.sector.daehak: 공학대학 (College of Engineering Sciences)
dc.sector.department: 전자공학부 (Electrical Engineering)
dc.identifier.pid: joohyunlee
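
Illustrative sketch (not part of the record): the abstract above describes adaptive transmit-power control driven by graph-attention based multi-agent reinforcement learning, where each node uses only its local observation (channel state information and queue length) plus messages exchanged with neighbors. As a rough, hypothetical illustration of that kind of actor, the PyTorch sketch below maps per-node observations and an adjacency mask to transmit-power fractions via an attention-weighted neighbor aggregate; all class names, dimensions, and the aggregation scheme are assumptions for illustration and do not reproduce the paper's architecture.

import torch
import torch.nn as nn

class GraphAttentionPowerActor(nn.Module):
    """Hypothetical per-node actor: embeds the local observation
    (channel state, queue length), attends over neighbor embeddings,
    and outputs a transmit-power fraction in [0, 1]."""

    def __init__(self, obs_dim=2, hidden_dim=32):
        super().__init__()
        self.embed = nn.Linear(obs_dim, hidden_dim)   # local observation -> embedding
        self.attn = nn.Linear(2 * hidden_dim, 1)      # attention score for (node, neighbor) pairs
        self.policy = nn.Linear(2 * hidden_dim, 1)    # [own embedding || neighbor aggregate] -> power

    def forward(self, obs, adj):
        # obs: (N, obs_dim) local observations of N nodes
        # adj: (N, N) 0/1 adjacency -- which nodes exchange messages
        h = torch.tanh(self.embed(obs))                                  # (N, hidden)
        n = h.size(0)
        h_i = h.unsqueeze(1).expand(n, n, -1)                            # receiver embeddings
        h_j = h.unsqueeze(0).expand(n, n, -1)                            # neighbor embeddings
        scores = self.attn(torch.cat([h_i, h_j], dim=-1)).squeeze(-1)    # (N, N) raw scores
        scores = scores.masked_fill(adj == 0, float('-inf'))             # attend only to real neighbors
        alpha = torch.softmax(scores, dim=-1)                            # attention weights per node
        agg = torch.nan_to_num(alpha) @ h                                # aggregate neighbor embeddings
        return torch.sigmoid(self.policy(torch.cat([h, agg], dim=-1)))   # (N, 1) power fractions

# Toy usage: four nodes, each observing (channel gain, queue length).
obs = torch.rand(4, 2)
adj = torch.tensor([[0., 1., 1., 0.],
                    [1., 0., 1., 0.],
                    [1., 1., 0., 1.],
                    [0., 0., 1., 0.]])
print(GraphAttentionPowerActor()(obs, adj))   # per-node transmit-power fractions

In the paper's setting such an actor would be trained with an actor-critic method and executed distributedly, each node evaluating only its own row of the computation from locally exchanged messages; the sketch above is centralized purely for brevity.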
Appears in Collections:
COLLEGE OF ENGINEERING SCIENCES[E](공학대학) > ELECTRICAL ENGINEERING(전자공학부) > Articles
Files in This Item:
There are no files associated with this item.