Experimental Verification of AI Adversarial Attacks using Open Deep Learning Library

DC Field: Value

dc.contributor.advisor: 손태식
dc.contributor.author: 정재한
dc.date.accessioned: 2019-08-13T16:41:10Z
dc.date.available: 2019-08-13T16:41:10Z
dc.date.issued: 2019-08
dc.identifier.other: 29179
dc.identifier.uri: https://dspace.ajou.ac.kr/handle/2018.oak/15547
dc.description: Thesis (Master's) -- Graduate School of Ajou University: Department of Computer Engineering, 2019. 8
dc.description.tableofcontents:
  Chapter 1 Introduction 1
  Chapter 2 Related Work 4
    Section 1 Characteristics of Deep Learning 5
    Section 2 Adversarial Sample Attack in the Real World 7
    Section 3 Adversarial Sample Generation Method 9
  Chapter 3 Deep-Learning Model and Adversarial Sample Creation 14
    Section 1 Deep-Learning Model 14
    Section 2 Generation Method of an Adversarial Sample 19
  Chapter 4 Experimental Environment and Datasets 21
    Section 1 Background of Using Datasets 21
    Section 2 Learning and Adversarial Library 25
  Chapter 5 Experiment and Result 28
    Section 1 Experimental Environment and Setting Variables 28
    Section 2 Experiment Result 30
  Chapter 6 Conclusion and Future Research 34
  References 36
dc.language.iso: eng
dc.publisher: The Graduate School, Ajou University
dc.rights: Ajou University theses are protected by copyright.
dc.title: Experimental Verification of AI Adversarial Attacks using Open Deep Learning Library
dc.type: Thesis
dc.contributor.affiliation: Graduate School of Ajou University
dc.contributor.department: Department of Computer Engineering, Graduate School
dc.date.awarded: 2019. 8
dc.description.degree: Master
dc.identifier.localId: 952030
dc.identifier.uci: I804:41038-000000029179
dc.identifier.url: http://dcoll.ajou.ac.kr:9080/dcollection/common/orgView/000000029179
dc.description.alternativeAbstract: Deep learning solves many problems by automatically learning from datasets. However, deep-learning models can be threatened by adversarial attacks. In this thesis, we used image datasets and network datasets with deep-learning classification models and verified experimentally that adversarial samples generated by a malicious attacker lower the models' classification accuracy. The common network dataset NSL-KDD and the common image dataset MNIST were used. We used the TensorFlow and PyTorch libraries to build an autoencoder classification model and a convolutional neural network (CNN) classification model, and measured detection accuracy after injecting adversarial samples into these models. Each deep-learning model was first trained as a baseline, and the effect of the adversarial samples on its accuracy was then measured. The adversarial samples were generated with the Fast Gradient Sign Method (FGSM) and the Jacobian-based Saliency Map Attack (JSMA). The adversarial samples reduced classification accuracy from 99% to 50%.
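The FGSM attack named in the abstract perturbs an input one step of size ε in the direction of the sign of the loss gradient with respect to the input: x_adv = x + ε · sign(∇_x J(θ, x, y)). The following is a minimal NumPy sketch of that update on a hand-set logistic-regression model; the weights, input, and ε below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One FGSM step on a logistic-regression model (illustrative).

    For the binary cross-entropy loss, the gradient of the loss with
    respect to the input x is (sigmoid(w.x + b) - y) * w. FGSM moves x
    by eps in the sign of that gradient to *increase* the loss, then
    clips back to the valid [0, 1] feature range (as for MNIST pixels).
    """
    grad = (sigmoid(np.dot(w, x) + b) - y) * w
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Toy model and input (hypothetical values for illustration only).
w = np.array([2.0, 2.0])
b = -2.0
x = np.array([0.9, 0.9])   # clean input, confidently class 1
y = 1.0                    # true label

x_adv = fgsm_perturb(x, y, w, b, eps=0.5)
clean_pred = sigmoid(np.dot(w, x) + b) > 0.5      # class 1 on clean input
adv_pred = sigmoid(np.dot(w, x_adv) + b) > 0.5    # flips to class 0
```

A single ε-sized step is enough to flip this toy model's prediction, which is the same mechanism the thesis uses (at larger scale, against autoencoder and CNN classifiers) to drive accuracy from 99% down to 50%.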
Appears in Collections:
Graduate School of Ajou University > Department of Computer Engineering > 3. Theses(Master)
Files in This Item:
There are no files associated with this item.

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
