
Full metadata record

DC Field | Value | Language
dc.contributor.author | 전상운 (Jeon, Sang-Woon) | -
dc.date.accessioned | 2023-05-17T05:12:52Z | -
dc.date.available | 2023-05-17T05:12:52Z | -
dc.date.issued | 2022-04 | -
dc.identifier.citation | MATHEMATICS, v. 10, no. 7, article no. 1072, pp. 1-32 | -
dc.identifier.issn | 2227-7390 | -
dc.identifier.uri | https://www.mdpi.com/2227-7390/10/7/1072 | en_US
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/180688 | -
dc.description.abstract | High-dimensional optimization problems are increasingly common in the era of big data and the Internet of Things (IoT), and they seriously challenge the optimization performance of existing optimizers. To solve such problems effectively, this paper devises a dimension group-based comprehensive elite learning swarm optimizer (DGCELSO), which integrates the valuable evolutionary information of different elite particles in the swarm to guide the updating of inferior ones. Specifically, the swarm is first separated into two exclusive sets: the elite set (ES), containing the top best individuals, and the non-elite set (NES), consisting of the remaining individuals. Then, the dimensions of each particle in NES are randomly divided into several groups of equal size. Subsequently, each dimension group of each non-elite particle is guided by two different elites randomly selected from ES. In this way, each non-elite particle in NES is comprehensively guided by multiple elite particles in ES, so high diversity can be maintained while fast convergence remains likely. To alleviate the sensitivity of DGCELSO to its associated parameters, we further devise dynamic adjustment strategies that change the parameter settings during the evolution. With these mechanisms, DGCELSO is expected to explore and exploit the solution space properly and thus find optimal solutions to optimization problems. Extensive experiments on two commonly used large-scale benchmark problem sets demonstrate that DGCELSO achieves highly competitive or even much better performance than several state-of-the-art large-scale optimizers. (See the illustrative sketch after this record.) | -
dc.description.sponsorship | This work was supported in part by the National Natural Science Foundation of China under Grants 62006124 and U20B2061, in part by the Natural Science Foundation of Jiangsu Province under Project BK20200811, in part by the Natural Science Foundation of the Jiangsu Higher Education Institutions of China under Grant 20KJB520006, in part by the National Research Foundation of Korea (NRF-2021H1D3A2A01082705), and in part by the Startup Foundation for Introducing Talent of NUIST. | -
dc.language | en | -
dc.publisher | MDPI | -
dc.subject | large-scale optimization | -
dc.subject | particle swarm optimization | -
dc.subject | dimension group-based comprehensive elite learning | -
dc.subject | high-dimensional problems | -
dc.subject | elite learning | -
dc.title | A Dimension Group-Based Comprehensive Elite Learning Swarm Optimizer for Large-Scale Optimization | -
dc.type | Article | -
dc.relation.no | 7 | -
dc.relation.volume | 10 | -
dc.identifier.doi | 10.3390/math10071072 | -
dc.relation.page | 1-32 | -
dc.relation.journal | MATHEMATICS | -
dc.contributor.googleauthor | Yang, Qiang | -
dc.contributor.googleauthor | Zhang, Kai-Xuan | -
dc.contributor.googleauthor | Gao, Xu-Dong | -
dc.contributor.googleauthor | Xu, Dong-Dong | -
dc.contributor.googleauthor | Lu, Zhen-Yu | -
dc.contributor.googleauthor | Jeon, Sang-Woon | -
dc.contributor.googleauthor | Zhang, Jun | -
dc.sector.campus | E | -
dc.sector.daehak | 공학대학 (College of Engineering) | -
dc.sector.department | 국방정보공학과 (Department of Military Information Engineering) | -
dc.identifier.pid | sangwoonjeon | -
dc.identifier.article | 1072 | -
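
The abstract above outlines how DGCELSO updates its non-elite particles: the swarm is split into an elite set (ES) and a non-elite set (NES), the dimensions of each non-elite particle are randomly partitioned into equal-sized groups, and every group is pulled toward two elites drawn at random from ES. The sketch below (Python/NumPy) illustrates only that structure; the concrete velocity and position update rule, the parameter values (elite ratio, group size, phi), the search range, and the sphere test function are assumptions made for illustration and are not taken from the paper or this record.

```python
# Minimal, illustrative sketch of dimension group-based comprehensive elite
# learning as described in the abstract. The ES/NES split, equal-sized
# dimension groups, and two randomly chosen elites per group follow the
# abstract; the specific update rule and all parameter values are assumed.
import numpy as np

def sphere(x):
    """Simple separable test objective, assumed here only for demonstration."""
    return float(np.sum(x ** 2))

def dgcelso_sketch(obj, dim=100, swarm_size=50, elite_ratio=0.2,
                   group_size=10, phi=0.4, max_iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lb, ub = -100.0, 100.0                        # assumed search range
    pos = rng.uniform(lb, ub, (swarm_size, dim))  # particle positions
    vel = np.zeros((swarm_size, dim))             # particle velocities
    fit = np.array([obj(p) for p in pos])

    n_elite = max(2, int(elite_ratio * swarm_size))
    for _ in range(max_iters):
        order = np.argsort(fit)                   # ascending: best first
        elite_idx = order[:n_elite]               # elite set (ES)
        non_elite_idx = order[n_elite:]           # non-elite set (NES)

        for i in non_elite_idx:
            dims = rng.permutation(dim)           # random dimension partition
            for start in range(0, dim, group_size):
                group = dims[start:start + group_size]
                # two different elites guide this dimension group
                e1, e2 = rng.choice(elite_idx, size=2, replace=False)
                if fit[e2] < fit[e1]:             # let e1 be the better elite
                    e1, e2 = e2, e1
                r1, r2, r3 = rng.random((3, group.size))
                # assumed learning rule (social-learning-PSO style), not the
                # exact formula from the paper
                vel[i, group] = (r1 * vel[i, group]
                                 + r2 * (pos[e1, group] - pos[i, group])
                                 + phi * r3 * (pos[e2, group] - pos[i, group]))
                pos[i, group] = np.clip(pos[i, group] + vel[i, group], lb, ub)
            fit[i] = obj(pos[i])

    best = int(np.argmin(fit))
    return pos[best], fit[best]

if __name__ == "__main__":
    best_x, best_f = dgcelso_sketch(sphere)
    print("best fitness found:", best_f)
```

Because the dimensions are grouped, different groups of the same non-elite particle can follow different pairs of elites within one iteration, which is how the abstract's "comprehensive" elite learning is read here.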



