Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | 전상운 | - |
dc.date.accessioned | 2022-07-26T00:58:27Z | - |
dc.date.available | 2022-07-26T00:58:27Z | - |
dc.date.issued | 2021-06 | - |
dc.identifier.citation | IEEE Transactions on Wireless Communications, 21(6):3994-4008, Jun. 2022 | en_US |
dc.identifier.issn | 1536-1276 | - |
dc.identifier.issn | 1558-2248 | - |
dc.identifier.uri | https://ieeexplore.ieee.org/document/9619960?arnumber=9619960&SID=EBSCO:edseee | - |
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/171623 | - |
dc.description.abstract | We consider a multichannel random access system in which each user accesses a single channel at each time slot to communicate with an access point (AP). Users arrive at the system at random, remain active for a certain number of time slots, and then disappear from the system. Under such a dynamic network environment, we propose a distributed multichannel access protocol based on multi-agent reinforcement learning (RL) to improve both throughput and fairness among active users. Unlike previous approaches that adjust channel access probabilities at each time slot, the proposed RL algorithm deterministically selects a set of channel access policies for several consecutive time slots. To effectively reduce the complexity of the proposed RL algorithm, we adopt a branching dueling Q-network architecture and propose an efficient training methodology for producing proper Q-values over time-varying user sets. We perform extensive simulations on realistic traffic environments and demonstrate that the proposed online learning improves both throughput and fairness compared to conventional RL approaches and centralized scheduling policies. | en_US |
dc.description.sponsorship | This work was supported by Samsung Research Funding and Incubation Center of Samsung Electronics under Project SRFC-TB1803-05. | en_US |
dc.language.iso | en | en_US |
dc.publisher | IEEE | en_US |
dc.subject | Reinforcement learning | en_US |
dc.subject | deep learning | en_US |
dc.subject | random access | en_US |
dc.subject | resource allocation | en_US |
dc.subject | fairness | en_US |
dc.title | Dynamic Multichannel Access via Multi-agent Reinforcement Learning: Throughput and Fairness Guarantees | en_US |
dc.type | Article | en_US |
dc.identifier.doi | 10.1109/TWC.2021.3126112 | - |
dc.relation.page | 1-1 | - |
dc.contributor.googleauthor | Sohaib, Muhammad | - |
dc.contributor.googleauthor | Jeong, Jongjin | - |
dc.contributor.googleauthor | Jeon, Sang-Woon | - |
dc.sector.campus | E | - |
dc.sector.daehak | COLLEGE OF ENGINEERING SCIENCES[E] | - |
dc.sector.department | DEPARTMENT OF MILITARY INFORMATION ENGINEERING | - |
dc.identifier.pid | sangwoonjeon | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.