Digestive Neural Networks: A Novel Defense Strategy Against Inference Attacks in Federated Learning

Title
Digestive Neural Networks: A Novel Defense Strategy Against Inference Attacks in Federated Learning
Author
조성현 (Sunghyun Cho)
Keywords
Federated learning (FL); Inference attack; White-box assumption; Digestive neural networks; t-SNE analysis; Federated learning security; ML Security; AI Security
Issue Date
2021-06
Publisher
ELSEVIER ADVANCED TECHNOLOGY
Citation
COMPUTERS & SECURITY, v. 109, Article no. 102378, 20 pp.
Abstract
Federated Learning (FL) is an efficient and secure machine learning technique designed for decentralized computing systems such as fog and edge computing. Its learning process relies on frequent communication: the participating local devices send updates, either the gradients or the parameters of their models, to a central server that aggregates them and redistributes new weights to the devices. Because private data never leaves the individual local devices, FL is regarded as a robust solution for privacy preservation. However, the recently introduced membership inference attacks pose a critical threat to this guarantee: by eavesdropping only on the updates in transit to the central server, an attacker can recover the private data of a local device. A prevalent defense against such attacks is differential privacy, which adds sufficient noise to each update to hinder reconstruction, but it significantly sacrifices the classification accuracy of the FL model. To alleviate this problem, this paper proposes the Digestive Neural Network (DNN), an independent neural network attached in front of the FL model. The private data owned by each device first passes through the DNN, and the resulting output is used to train the FL model. The DNN modifies the input data, and thereby distorts the updates, so as to maximize the classification accuracy of FL while minimizing the accuracy of inference attacks. Our simulation results show that the proposed DNN performs well on both gradient-sharing and weight-sharing FL mechanisms. For gradient sharing, the DNN achieved 16.17% higher classification accuracy and 9% lower attack accuracy than existing differential privacy schemes. For the weight-sharing FL scheme, the DNN achieved an attack success rate lower by as much as 46.68%, with 3% higher classification accuracy.
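The abstract describes the mechanism only at a high level; the following is a minimal PyTorch sketch of the idea, not the authors' published implementation. The module names (DigestiveNet, SharedClassifier, local_update), layer sizes, and the plain SGD loop are illustrative assumptions, and the part of the paper's objective that explicitly minimizes inference-attack accuracy is omitted here; only the classification term is shown.

# Minimal sketch of a "digestive" network prepended to a shared FL model.
# Assumptions (not from the paper): 28x28 single-channel inputs, layer
# sizes, and a plain SGD local step. Only the classification objective
# is sketched; the paper additionally shapes the digested output so that
# inference attacks on the shared updates fail.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DigestiveNet(nn.Module):
    # Hypothetical input-distorting network; its parameters never leave
    # the device, so an eavesdropper on the FL updates never observes them.
    def __init__(self, channels=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, channels, kernel_size=3, padding=1),
            nn.Tanh(),  # keep the digested image in a bounded range
        )

    def forward(self, x):
        return self.body(x)


class SharedClassifier(nn.Module):
    # Stand-in for the FL model whose gradients/weights are shared.
    def __init__(self, channels=1, num_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(channels, 32, kernel_size=3, padding=1)
        self.fc = nn.Linear(32 * 28 * 28, num_classes)

    def forward(self, x):
        h = F.relu(self.conv(x))
        return self.fc(h.flatten(1))


def local_update(digestive, classifier, batch, labels, lr=0.01):
    # One local step: both networks are trained end to end on the task
    # loss, but only the classifier's updates are reported upstream.
    loss = F.cross_entropy(classifier(digestive(batch)), labels)

    params = list(digestive.parameters()) + list(classifier.parameters())
    for p in params:
        p.grad = None
    loss.backward()

    # Snapshot the updates a gradient-sharing FL scheme would send; they
    # were computed on digested inputs, not on the raw private data.
    shared = {n: p.grad.detach().clone()
              for n, p in classifier.named_parameters()}

    with torch.no_grad():  # plain SGD step on both networks, kept local
        for p in params:
            p -= lr * p.grad
    return loss.item(), shared


if __name__ == "__main__":
    digestive, classifier = DigestiveNet(), SharedClassifier()
    x, y = torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))
    loss, shared_grads = local_update(digestive, classifier, x, y)
    print(f"local loss = {loss:.4f}; shared tensors: {sorted(shared_grads)}")

In a weight-sharing variant, the device would instead send the classifier's parameters after several local steps; in either case, the digestive network's parameters stay on the device, so the transmitted updates reflect distorted inputs rather than the private data itself.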
URI
https://www.sciencedirect.com/science/article/pii/S0167404821002029
https://repository.hanyang.ac.kr/handle/20.500.11754/166574
ISSN
0167-4048
DOI
10.1016/j.cose.2021.102378
Appears in Collections:
ETC[S] > 연구정보 (Research Information)
Files in This Item:
There are no files associated with this item.