
Full metadata record

DC Field: Value (Language)
dc.contributor.author: 강경태
dc.date.accessioned: 2023-06-23T01:07:29Z
dc.date.available: 2023-06-23T01:07:29Z
dc.date.issued: 2022-05
dc.identifier.citation: CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, v. 26, no. 1, pp. 349-366
dc.identifier.issn: 1386-7857; 1573-7543
dc.identifier.uri: https://link.springer.com/article/10.1007/s10586-022-03596-1 (en_US)
dc.identifier.uri: https://repository.hanyang.ac.kr/handle/20.500.11754/182265
dc.description.abstract: Federated Learning (FL) is a technology that enables sophisticated training over distributed data. Because FL does not expose sensitive data during training, it has been regarded as a privacy-safe form of deep learning. However, several recent studies have shown that the hidden data can be exposed by exploiting only the shared models. A common countermeasure against such data exposure is differential privacy, which adds noise to hinder the attack; however, it inevitably involves a trade-off between privacy and utility. This paper demonstrates the effectiveness of image augmentation as an alternative defense strategy that suffers less from this trade-off. We conduct comprehensive experiments on the CIFAR-10 and CIFAR-100 datasets with 14 augmentations and 9 magnitudes. As a result, the best combination of augmentation and magnitude for each image class in the datasets was discovered. Our results also show that a well-fitted augmentation strategy can outperform differential privacy.
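The contrast drawn in the abstract can be sketched in a few lines. This is a hypothetical numpy illustration, not the paper's code: a differential-privacy-style defense clips and perturbs every value of a client update with calibrated Gaussian noise, while an augmentation such as a random translate (translation is one common member of augmentation sets like the paper's 14) only rearranges pixels, leaving their statistics intact.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "client image": 8x8 grayscale, values in [0, 1].
image = rng.random((8, 8))

def dp_perturb(update, clip=1.0, sigma=0.5, rng=rng):
    """Gaussian-mechanism-style defense (illustrative): clip the update's
    L2 norm, then add calibrated noise -- the source of the
    privacy/utility trade-off."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip / norm)
    return clipped + rng.normal(0.0, sigma * clip, size=update.shape)

def augment_translate(img, magnitude=2, rng=rng):
    """Defensive augmentation sketch: a random circular shift of up to
    `magnitude` pixels along each axis."""
    shift = rng.integers(-magnitude, magnitude + 1, size=2)
    return np.roll(img, tuple(int(s) for s in shift), axis=(0, 1))

noisy = dp_perturb(image)
shifted = augment_translate(image)

# Translation only rearranges pixels, so the pixel sum is preserved
# exactly; DP noise perturbs every value.
print(np.isclose(shifted.sum(), image.sum()))  # True
```

The names `dp_perturb` and `augment_translate`, and the `clip`/`sigma`/`magnitude` parameters, are assumptions for this sketch; the paper evaluates 14 augmentations at 9 magnitudes rather than this single transform.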
dc.description.sponsorship: This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the High-Potential Individuals Global Training Program (2021-0-01547-001) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).
dc.language: en
dc.publisher: SPRINGER
dc.subject: Federated learning
dc.subject: Model inversion attack
dc.subject: Image augmentation
dc.subject: Defensive augmentation
dc.subject: Differential privacy
dc.title: An empirical analysis of image augmentation against model inversion attack in federated learning
dc.type: Article
dc.relation.no: 1
dc.relation.volume: 26
dc.identifier.doi: 10.1007/s10586-022-03596-1
dc.relation.page: 349-366
dc.relation.journal: CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS
dc.contributor.googleauthor: Shin, Seunghyeon
dc.contributor.googleauthor: Boyapati, Mallika
dc.contributor.googleauthor: Suo, Kun
dc.contributor.googleauthor: Kang, Kyungtae
dc.contributor.googleauthor: Son, Junggab
dc.sector.campus: E
dc.sector.daehak: 소프트웨어융합대학 (College of Software Convergence)
dc.sector.department: 인공지능학과 (Department of Artificial Intelligence)
dc.identifier.pid: ktkang
Appears in Collections: ETC[S] > ETC
Files in This Item: 85377_강경태.pdf



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
