
Natural Manipulation Methods of Images for Adversarial Attack on Deep Neural Networks

Author
장기림
Alternative Author(s)
장기림
Advisor(s)
김영훈
Issue Date
2021. 8
Publisher
Hanyang University
Degree
Master
Abstract
Deep neural networks (DNNs) have achieved great success in various computer vision fields such as autonomous driving, face recognition, medical imaging, and object detection. They are changing the way we interact with the world and bringing considerable convenience to our lives. However, even a well-trained deep neural network can be attacked through malicious manipulation: an attacker deliberately crafts input data to cause the network to make a mistake, which is called an adversarial attack on deep learning. An input generated through such an attack method is known as an adversarial example. Adversarial attacks are an important topic in Artificial Intelligence (AI) because they represent a concrete problem in AI safety. At present, a common adversarial attack method is to add adversarial perturbations to the pixel space of input images in order to elicit misclassifications from the network. In the image classification task, feature extraction and classification are important steps in building a network, and these malicious perturbations can negatively affect both. Attackers usually limit the magnitude of the adversarial perturbations in order to prevent detection by the naked eye. Even so, pixel-based adversarial perturbations can sometimes make an image look unnatural, so they are easily detected. In this paper, we propose two new adversarial attack methods that generate a variety of more natural adversarial examples: (1) a universal adversarial image stamp-based adversarial attack, and (2) a semantic face transformation-based adversarial attack. Comprehensive experiments are also conducted to show that our adversarial attack methods reduce the accuracy of classification networks.
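
For background, the pixel-space attack described above can be made concrete with a minimal sketch of a standard gradient-based baseline, the Fast Gradient Sign Method (FGSM) of Goodfellow et al. This is illustrative only and is not one of the two methods proposed in the thesis; the fgsm_attack helper, the PyTorch model and label arguments, and the epsilon = 8/255 budget are all assumptions for the example.

    # Minimal FGSM sketch: a single signed-gradient step in pixel space,
    # bounded by an L-infinity budget epsilon (an illustrative assumption,
    # not the thesis's proposed method).
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, label, epsilon=8 / 255):
        # Track gradients with respect to the input pixels.
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Step in the direction that increases the loss, then clip so the
        # result remains a valid image in [0, 1].
        adv = image + epsilon * image.grad.sign()
        return adv.clamp(0.0, 1.0).detach()

A budget such as epsilon = 8/255 keeps each pixel change small on 8-bit images, which is the "limited magnitude" constraint the abstract refers to; the thesis's stamp-based and face-transformation-based attacks are proposed as more natural alternatives to exactly this kind of perturbation.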
URI
http://hanyang.dcollection.net/common/orgView/200000491796
https://repository.hanyang.ac.kr/handle/20.500.11754/163698
Appears in Collections:
GRADUATE SCHOOL[S](대학원) > COMPUTER SCIENCE & ENGINEERING(컴퓨터공학과) > Theses (Master)
Files in This Item:
There are no files associated with this item.