Experimental Verification of AI Adversarial Attacks using Open Deep Learning Library
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 손태식 | - |
dc.contributor.author | 정재한 | - |
dc.date.accessioned | 2019-08-13T16:41:10Z | - |
dc.date.available | 2019-08-13T16:41:10Z | - |
dc.date.issued | 2019-08 | - |
dc.identifier.other | 29179 | - |
dc.identifier.uri | https://dspace.ajou.ac.kr/handle/2018.oak/15547 | - |
dc.description | Thesis (Master's)--The Graduate School, Ajou University: Department of Computer Engineering, 2019. 8 | - |
dc.description.tableofcontents | Chapter 1 Introduction 1 Chapter 2 Related Work 4 Section 1 Characteristics of Deep Learning 5 Section 2 Adversarial Sample Attacks in the Real World 7 Section 3 Adversarial Sample Generation Methods 9 Chapter 3 Deep-Learning Model and Adversarial Sample Creation 14 Section 1 Deep-Learning Model 14 Section 2 Generation Method of an Adversarial Sample 19 Chapter 4 Experimental Environment and Datasets 21 Section 1 Background of the Datasets Used 21 Section 2 Learning and Adversarial Libraries 25 Chapter 5 Experiment and Results 28 Section 1 Experimental Environment and Setting Variables 28 Section 2 Experiment Results 30 Chapter 6 Conclusion and Future Research 34 References 36 | - |
dc.language.iso | eng | - |
dc.publisher | The Graduate School, Ajou University | - |
dc.rights | Ajou University theses are protected by copyright. | - |
dc.title | Experimental Verification of AI Adversarial Attacks using Open Deep Learning Library | - |
dc.type | Thesis | - |
dc.contributor.affiliation | The Graduate School, Ajou University | - |
dc.contributor.department | Department of Computer Engineering, The Graduate School | - |
dc.date.awarded | 2019. 8 | - |
dc.description.degree | Master | - |
dc.identifier.localId | 952030 | - |
dc.identifier.uci | I804:41038-000000029179 | - |
dc.identifier.url | http://dcoll.ajou.ac.kr:9080/dcollection/common/orgView/000000029179 | - |
dc.description.alternativeAbstract | Deep learning solves many problems by automatically learning from datasets. However, deep-learning models can be threatened by adversarial attacks. In this paper, we use image and network datasets with deep-learning classification models and experimentally verify that adversarial samples generated by a malicious attacker lower the classification accuracy of the models. The common network dataset NSL-KDD and the common image dataset MNIST are used. Using the TensorFlow and PyTorch libraries, we build an Autoencoder classification model and a Convolutional Neural Network (CNN) classification model, and measure their detection accuracy when adversarial samples are injected. Each deep-learning model is first constructed as a baseline, and the effect of the adversarial samples on its performance is then measured. The adversarial samples are generated with the Fast Gradient Sign Method (FGSM) and the Jacobian-based Saliency Map Attack (JSMA). The classification accuracy decreased from 99% to 50% under the adversarial samples. | - |
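The abstract describes generating adversarial samples with FGSM and measuring the resulting drop in classification accuracy. The following is a minimal, illustrative PyTorch sketch of that workflow, not the implementation used in the thesis: the `SmallCNN` architecture, the `epsilon` value, and the random dummy batch are assumptions standing in for the actual trained MNIST model and data.

```python
# Illustrative FGSM sketch in PyTorch (placeholders only; not the thesis's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """A small CNN classifier for 28x28 grayscale images (MNIST-like)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.fc = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        return self.fc(x.flatten(1))

def fgsm_attack(model, images, labels, epsilon):
    """Generate FGSM adversarial samples: x_adv = x + epsilon * sign(grad_x loss)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Perturb each pixel in the direction that increases the loss,
    # then clamp back to the valid [0, 1] pixel range.
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    model = SmallCNN().eval()        # a trained model would be used in practice
    x = torch.rand(8, 1, 28, 28)     # dummy batch standing in for MNIST images
    y = torch.randint(0, 10, (8,))
    x_adv = fgsm_attack(model, x, y, epsilon=0.1)
    clean_acc = (model(x).argmax(1) == y).float().mean().item()
    adv_acc = (model(x_adv).argmax(1) == y).float().mean().item()
    print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

With a trained model, comparing the accuracy on the clean batch against the accuracy on the perturbed batch reproduces the kind of degradation the abstract reports (e.g., 99% falling toward 50%) as `epsilon` grows.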