• Journal of Terahertz Science and Electronic Information Technology, Vol. 20, Issue 8, 836 (2022)
ZHAO Haojun1,2,*, LIN Yun1,2, BAO Zhida1,2, SHI Jibo1,2, and GE Bin3
Author Affiliations
  • 1[in Chinese]
  • 2[in Chinese]
  • 3[in Chinese]
DOI: 10.11805/tkyda2020692
ZHAO Haojun, LIN Yun, BAO Zhida, SHI Jibo, GE Bin. Targeted adversarial attack in modulation recognition[J]. Journal of Terahertz Science and Electronic Information Technology, 2022, 20(8): 836

    Abstract

Deep learning algorithms offer outstanding advantages such as strong feature-expression ability, automatic feature extraction, and end-to-end learning, and a growing number of researchers have therefore applied them to communication signal recognition. However, the discovery of adversarial examples exposes deep learning models to serious potential risks, which has a severe impact on current modulation recognition tasks. From the attacker's perspective, adversarial perturbations are added to the transmitted communication signal in order to verify and evaluate the attack performance of targeted adversarial examples against the modulation recognition model. Experimental results show that the targeted attack effectively reduces the recognition accuracy of the model, and that the constructed logit indicator measures the targeted effect at a finer granularity.
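The abstract describes two ingredients: a targeted attack that perturbs the transmitted signal toward a chosen (wrong) modulation class, and a logit-based indicator of how well the targeting worked. The paper does not give its implementation, so the following is only a minimal sketch under stated assumptions: a one-step targeted FGSM-style attack against a toy linear softmax classifier standing in for the modulation-recognition model, with inputs treated as flattened I/Q feature vectors. All names, dimensions, and the linear model itself are illustrative, not the authors' setup.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def targeted_fgsm_linear(W, b, x, target, eps):
    """One-step targeted FGSM for a linear softmax classifier
    (logits = x @ W + b). Steps *against* the gradient of the
    cross-entropy toward `target`, pushing samples toward the
    chosen wrong class under an L-infinity budget of eps."""
    p = softmax(x @ W + b)                 # (N, C) class probabilities
    onehot = np.eye(W.shape[1])[target]    # (N, C) target one-hot
    grad_x = (p - onehot) @ W.T            # dCE(target)/dx, per sample
    return x - eps * np.sign(grad_x)       # minimize loss toward target

def target_logit_margin(W, b, x, target):
    """Logit indicator: target-class logit minus the best non-target
    logit. Positive values mean the targeted attack succeeded, and the
    magnitude gives a finer-grained measure than accuracy alone."""
    logits = x @ W + b
    n = np.arange(len(x))
    tgt = logits[n, target]
    masked = logits.copy()
    masked[n, target] = -np.inf            # exclude the target class
    return tgt - masked.max(axis=1)
```

For example, with a random stand-in model, `targeted_fgsm_linear(W, b, x, tgt, eps=0.05)` returns a perturbed signal within `±eps` of the original whose cross-entropy toward the target class is lower, and `target_logit_margin` then quantifies per-sample how close each perturbed signal is to being classified as the attacker's chosen class.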