Experimental Verification of AI Adversarial Attacks using Open Deep Learning Library

Author(s)
정재한
Advisor
손태식
Department
Department of Computer Engineering, Graduate School
Publisher
The Graduate School, Ajou University
Publication Year
2019-08
Language
eng
Alternative Abstract
Deep learning solves many problems by automatically learning from datasets. However, deep-learning models can be threatened by adversarial attacks. In this paper, we used image datasets and network datasets with deep-learning classification models, and experimentally verified that adversarial samples generated by a malicious attacker lower the classification accuracy of the models. The common network dataset NSL-KDD and the common image dataset MNIST were used. We used the TensorFlow and PyTorch libraries to build an autoencoder classification model and a Convolutional Neural Network (CNN) classification model, and measured detection accuracy after injecting adversarial samples into these models. Each deep-learning model was first built as a baseline, and the effect of the adversarial samples on its overall performance was then measured. The adversarial samples were generated with the Fast Gradient Sign Method (FGSM) and the Jacobian-based Saliency Map Attack (JSMA). The classification accuracy decreased from 99% to 50% under the adversarial samples.
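
The FGSM attack referenced in the abstract perturbs an input along the sign of the loss gradient, x_adv = x + ε·sign(∇_x J(θ, x, y)). Below is a minimal PyTorch sketch of that idea; the function name, epsilon value, and the assumption that the model is an MNIST CNN taking inputs in [0, 1] are illustrative and not taken from the thesis itself.

import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.1):
    """Fast Gradient Sign Method (illustrative sketch).

    Generates adversarial examples x_adv = x + epsilon * sign(grad_x J(theta, x, y)).
    Assumes `model` is a classifier (e.g., an MNIST CNN) whose inputs lie in [0, 1].
    """
    # Track gradients with respect to the input pixels, not the model weights.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()

    # Step in the direction that increases the loss, then clip to the valid pixel range.
    adv_images = images + epsilon * images.grad.sign()
    return adv_images.clamp(0.0, 1.0).detach()

In an evaluation loop like the one the thesis describes, the baseline accuracy would be measured on clean test images first, then re-measured after replacing each batch with fgsm_attack(model, batch, labels), showing how accuracy drops as epsilon grows.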
URI
https://dspace.ajou.ac.kr/handle/2018.oak/15547
Appears in Collections:
Graduate School of Ajou University > Department of Computer Engineering > 3. Theses(Master)