
Full metadata record

DC Field | Value | Language
dc.contributor.author | 구재훈 | -
dc.date.accessioned | 2024-04-16T23:37:30Z | -
dc.date.available | 2024-04-16T23:37:30Z | -
dc.date.issued | 2022-09-06 | -
dc.identifier.citation | EXPERT SYSTEMS WITH APPLICATIONS, v. 212, Article No. 118761, Page. 1-11 | en_US
dc.identifier.issn | 0957-4174 | en_US
dc.identifier.uri | https://information.hanyang.ac.kr/#/eds/detail?an=S0957417422017791&dbId=edselp | en_US
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/189805 | -
dc.description.abstract | Most recent machine learning research focuses on developing new classifiers to improve classification accuracy. With many well-performing state-of-the-art classifiers available, there is a growing need to understand the interpretability of a classifier, driven by practical purposes such as finding the best diet recommendation for a diabetes patient. Inverse classification is a post-modeling process that finds changes to the input features of samples that alter the initially predicted class. It is useful in many business applications for determining how to adjust a sample's input data so that the classifier predicts it to be in a desired class. In real-world applications, a budget on perturbations of samples corresponding to customers or patients is usually considered, and in this setting the number of successfully perturbed samples is key to increasing benefits. In this study, we propose a new framework for inverse classification that maximizes the number of perturbed samples subject to per-feature budget limits and favorable classes for the perturbed samples. We design algorithms to solve this optimization problem based on gradient methods, stochastic processes, Lagrangian relaxations, and the Gumbel trick. In experiments, we find that our algorithms based on stochastic processes exhibit excellent performance across different budget settings and scale well. The relative improvement of the proposed stochastic algorithms over an existing method with a traditional formulation is 15% on a real-world dataset and 21% on average across two public datasets. | en_US
dc.language | en_US | en_US
dc.publisher | PERGAMON-ELSEVIER SCIENCE LTD | en_US
dc.relation.ispartofseries | v. 212, Article No. 118761; 1-11 | -
dc.subject | Inverse classification | en_US
dc.subject | Adversarial learning | en_US
dc.subject | Counterfactual explanation | en_US
dc.subject | Machine learning | en_US
dc.subject | Neural networks | en_US
dc.title | An inverse classification framework with limited budget and maximum number of perturbed samples | en_US
dc.type | Article | en_US
dc.relation.volume | 212 | -
dc.identifier.doi | 10.1016/j.eswa.2022.118761 | en_US
dc.relation.page | 1-11 | -
dc.relation.journal | EXPERT SYSTEMS WITH APPLICATIONS | -
dc.contributor.googleauthor | Koo, Jaehoon | -
dc.contributor.googleauthor | Klabjan, Diego | -
dc.contributor.googleauthor | Utke, Jean | -
dc.relation.code | 2023035751 | -
dc.sector.campus | E | -
dc.sector.daehak | COLLEGE OF BUSINESS AND ECONOMICS[E] | -
dc.sector.department | SCHOOL OF BUSINESS ADMINISTRATION | -
dc.identifier.pid | jaehoonkoo | -
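
For orientation, the abstract above describes maximizing the number of successfully perturbed samples under per-feature budget limits. The following is a minimal sketch of one way such a problem could be written; the notation (samples x_i, perturbations delta_i, classifier f, desired classes y_i*, feature budgets b_j) is assumed here for illustration and is not taken from the paper, whose exact model is given in the article (DOI 10.1016/j.eswa.2022.118761).

\[
\max_{\delta_1,\dots,\delta_n} \; \sum_{i=1}^{n} \mathbf{1}\!\left[\, f(x_i + \delta_i) = y_i^{*} \,\right]
\quad \text{s.t.} \quad \sum_{i=1}^{n} \lvert \delta_{ij} \rvert \le b_j \ \text{ for each feature } j .
\]

Under a reading like this, the indicator objective is discrete and the budget constraints couple all samples, which is consistent with the abstract's use of gradient methods, stochastic processes, Lagrangian relaxations, and the Gumbel trick to obtain tractable algorithms.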
Appears in Collections:
COLLEGE OF BUSINESS AND ECONOMICS[E](경상대학) > BUSINESS ADMINISTRATION(경영학부) > Articles
Files in This Item:
There are no files associated with this item.