Deep learning solves many problems by automatically learning patterns from data. However, deep-learning models can be threatened by adversarial attacks.
In this paper, we apply deep-learning classification models to an image dataset and a network dataset, and experimentally verify that adversarial samples generated by a malicious attacker lower the classification accuracy of the models. We used the common network dataset NSL-KDD and the common image dataset MNIST, and built an autoencoder classification model and a convolutional neural network (CNN) classification model with the TensorFlow and PyTorch libraries. We first trained each model as a baseline, then measured the change in detection accuracy when adversarial samples were injected. The adversarial samples were generated with the Fast Gradient Sign Method (FGSM) and the Jacobian-based Saliency Map Attack (JSMA); they reduced classification accuracy from 99% to 50%.
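To illustrate how FGSM constructs an adversarial sample, the following is a minimal NumPy sketch on a logistic-regression model, not the CNN or autoencoder models used in the paper. The weights `w`, bias `b`, input `x`, label `y`, and perturbation budget `epsilon` are all hypothetical values chosen for demonstration; FGSM adds `epsilon * sign(gradient of the loss w.r.t. the input)` to the input.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(w, b, x, y, epsilon):
    """FGSM on a logistic-regression model.

    For binary cross-entropy loss L with prediction p = sigmoid(w.x + b),
    the input gradient is dL/dx = (p - y) * w, so the adversarial sample is
    x_adv = x + epsilon * sign(dL/dx), clipped back to the valid [0, 1] range.
    """
    p = sigmoid(w @ x + b)
    grad = (p - y) * w                     # gradient of the loss w.r.t. x
    x_adv = x + epsilon * np.sign(grad)    # one signed-gradient step
    return np.clip(x_adv, 0.0, 1.0)

# Hypothetical model and input, for illustration only.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.6, 0.4])
y = 1.0
x_adv = fgsm_attack(w, b, x, y, epsilon=0.1)

# The perturbed input is still close to x, but the model's confidence
# in the true class drops.
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))
```

Each component of `x_adv` moves by at most `epsilon`, which is why adversarial samples can look nearly identical to the original input while still degrading the model's accuracy.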