• Laser & Optoelectronics Progress
  • Vol. 59, Issue 12, 1210004 (2022)
Xifan Zhang* and Lingzhi Yu
Author Affiliations
  • School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
    DOI: 10.3788/LOP202259.1210004
    Xifan Zhang, Lingzhi Yu. Image Defense Algorithm Against Adversarial Attacks Based on Low-Rank Dimensionality Reduction and Sparse Reconstruction[J]. Laser & Optoelectronics Progress, 2022, 59(12): 1210004
    References

    [1] Szegedy C, Zaremba W, Sutskever I, et al. Intriguing properties of neural networks[EB/OL]. https://arxiv.org/abs/1312.6199

    [2] Cisse M M, Adi Y, Neverova N, et al. Houdini: fooling deep structured visual and speech recognition models with adversarial examples[C], 6977-6987(2017).

    [3] Kurakin A, Goodfellow I J, Bengio S. Adversarial machine learning at scale[EB/OL]. https://arxiv.org/abs/1611.01236

    [4] Wang J X, Lei Z C. A convolutional neural network based on feature fusion for face recognition[J]. Laser & Optoelectronics Progress, 57, 101508(2020).

    [5] Li Z W, Cao H, Yang F, et al. Research progress of brain tumor segmentation based on convolutional neural network[J]. Laser & Optoelectronics Progress, 58, 2400003(2021).

    [6] Zhou S, Wu D, Jin J. Lane instance segmentation algorithm based on convolutional neural network[J]. Laser & Optoelectronics Progress, 58, 0815007(2021).

    [7] Guo C, Rana M, Cisse M, et al. Countering adversarial images using input transformations[EB/OL]. https://arxiv.org/abs/1711.00117

    [8] Shaham U, Garritano J, Yamada Y, et al. Defending against adversarial images using basis functions transformations[EB/OL]. https://arxiv.org/abs/1803.10840

    [9] Dziugaite G K, Ghahramani Z, Roy D M. A study of the effect of JPG compression on adversarial images[EB/OL]. https://arxiv.org/abs/1608.00853

    [10] Jia X J, Wei X X, Cao X C, et al. ComDefend: an efficient image compression model to defend adversarial examples[C], 6077-6085(2019).

    [11] Xu W L, Evans D, Qi Y J. Feature squeezing: detecting adversarial examples in deep neural networks[C](2018).

    [12] Prakash A, Moran N, Garber S, et al. Deflecting adversarial attacks with pixel deflection[C], 8571-8580(2018).

    [13] Sun B, Tsai N H, Liu F C, et al. Adversarial defense by stratified convolutional sparse coding[C], 11439-11448(2019).

    [14] Zheng S, Song Y, Leung T, et al. Improving the robustness of deep neural networks via stability training[C], 4480-4488(2016).

    [15] Metzen J H, Genewein T, Fischer V, et al. On detecting adversarial perturbations[C](2017).

    [16] Zantedeschi V, Nicolae M I, Rawat A. Efficient defenses against adversarial attacks[C], 39-49(2017).

    [17] Ross A S, Doshi-Velez F. Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients[C], 1660-1669(2017).

    [18] Samangouei P, Kabkab M, Chellappa R. Defense-GAN: protecting classifiers against adversarial attacks using generative models[C](2018).

    [19] Lee D D, Seung H S. Learning the parts of objects by non-negative matrix factorization[J]. Nature, 401, 788-791(1999).

    [20] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition[C](2015).

    [21] Goodfellow I J, Shlens J, Szegedy C. Explaining and harnessing adversarial examples[C](2015).

    [22] Kurakin A, Goodfellow I J, Bengio S. Adversarial examples in the physical world[EB/OL]. https://arxiv.org/abs/1607.02533v4

    [23] Moosavi-Dezfooli S M, Fawzi A, Frossard P. DeepFool: a simple and accurate method to fool deep neural networks[C], 2574-2582(2016).

    [24] Carlini N, Wagner D. Towards evaluating the robustness of neural networks[C], 39-57(2017).

    [25] Papernot N, McDaniel P, Jha S, et al. The limitations of deep learning in adversarial settings[C], 372-387(2016).

    [26] Xiao C W, Li B, Zhu J Y, et al. Generating adversarial examples with adversarial networks[C], 3905-3911(2018).

    [27] Su J W, Vargas D V, Sakurai K. One pixel attack for fooling deep neural networks[J]. IEEE Transactions on Evolutionary Computation, 23, 828-841(2019).

    [28] Moosavi-Dezfooli S M, Fawzi A, Fawzi O, et al. Universal adversarial perturbations[C], 86-94(2017).

    [29] Xie C H, Wang J Y, Zhang Z S, et al. Mitigating adversarial effects through randomization[C](2018).

    [30] Aharon M, Elad M, Bruckstein A. K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation[J]. IEEE Transactions on Signal Processing, 54, 4311-4322(2006).

    [31] Pati Y C, Rezaiifar R, Krishnaprasad P S. Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition[C], 40-44(1993).

    [32] He K M, Zhang X Y, Ren S Q, et al. Deep residual learning for image recognition[C], 770-778(2016).

    [33] Russakovsky O, Deng J, Su H, et al. ImageNet large scale visual recognition challenge[J]. International Journal of Computer Vision, 115, 211-252(2015).

    [34] Papernot N, Faghri F, Carlini N, et al. Technical report on the CleverHans v2.1.0 adversarial examples library[EB/OL]. https://arxiv.org/abs/1610.00768

    [35] Zhou B L, Khosla A, Lapedriza A, et al. Learning deep features for discriminative localization[C], 2921-2929(2016).

    [36] Szegedy C, Vanhoucke V, Ioffe S, et al. Rethinking the inception architecture for computer vision[C], 2818-2826(2016).
