
Multi Agent PPO-based Hyperparameter Optimization

Title
Multi Agent PPO-based Hyperparameter Optimization
Author
마지흔
Advisor(s)
조인휘
Issue Date
2023. 2
Publisher
한양대학교
Degree
Master
Abstract
Nowadays, traditional machine learning models and deep convolutional networks play an important role in many fields, such as classification and image processing. A good hyperparameter configuration allows a model to perform well, so the choice of hyperparameters has a significant impact on model performance. As a result, experts must spend a significant amount of time tuning hyperparameters when building a model for a given task. Although many algorithms exist for hyperparameter optimization (HPO), most require actual experimental results at every epoch to guide the search. To reduce the time and computational resources spent on the search, in this paper we propose a multi-agent Proximal Policy Optimization (MAPPO) reinforcement learning algorithm for the HPO problem. Our model uses the centralized-training, decentralized-execution framework, in which each hyperparameter corresponds to one agent and all agents share a common reward. We conducted experiments on HPOBench, and the results show that our model converges faster and achieves lower loss than other traditional methods.
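
For illustration only, below is a minimal sketch of the setup the abstract describes: one PPO actor per hyperparameter, a centralized critic over the joint observation, and a single shared reward (e.g., negative validation loss). This is not the thesis's actual implementation; it assumes PyTorch and discrete hyperparameter grids, and all names, dimensions, and grid sizes are hypothetical.

import torch
import torch.nn as nn

class Actor(nn.Module):
    # Decentralized actor: one per hyperparameter, picks a value
    # from that hyperparameter's discrete grid.
    def __init__(self, obs_dim, n_choices):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(),
                                 nn.Linear(64, n_choices))

    def forward(self, obs):
        return torch.distributions.Categorical(logits=self.net(obs))

class CentralCritic(nn.Module):
    # Centralized critic: used only during training; it values the joint
    # observation of all agents (centralized training, decentralized execution).
    def __init__(self, joint_obs_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(joint_obs_dim, 64), nn.Tanh(),
                                 nn.Linear(64, 1))

    def forward(self, joint_obs):
        return self.net(joint_obs).squeeze(-1)

def ppo_clip_loss(actor, obs, action, old_logp, advantage, clip=0.2):
    # Standard PPO clipped surrogate objective, applied per agent. The
    # advantage is derived from the shared reward via the central critic,
    # so every agent is trained toward the same team objective.
    ratio = torch.exp(actor(obs).log_prob(action) - old_logp)
    clipped = torch.clamp(ratio, 1.0 - clip, 1.0 + clip)
    return -torch.min(ratio * advantage, clipped * advantage).mean()

# Example wiring: three hyperparameters -> three agents sharing one reward.
grids = {"lr": 4, "batch_size": 3, "depth": 5}   # grid sizes (assumed)
obs_dim = 8                                      # per-agent state features (assumed)
actors = {name: Actor(obs_dim, n) for name, n in grids.items()}
critic = CentralCritic(obs_dim * len(actors))
# At each step every actor samples its hyperparameter independently; the
# resulting configuration is evaluated once, and the single scalar reward
# (e.g., -validation_loss) is broadcast to all agents.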
URI
http://hanyang.dcollection.net/common/orgView/200000652062
https://repository.hanyang.ac.kr/handle/20.500.11754/179418
Appears in Collections:
GRADUATE SCHOOL[S](대학원) > COMPUTER SCIENCE(컴퓨터·소프트웨어학과) > Theses (Master)
Files in This Item:
There are no files associated with this item.