Computer Science ›› 2025, Vol. 52 ›› Issue (10): 374-381. doi: 10.11896/jsjkx.241000030

• Information Security •


High-frequency Feature Masking-based Adversarial Attack Algorithm

WANG Liuyi1, ZHOU Chun2, ZENG Wenqiang2, HE Xingxing2, MENG Hua2   

  1 School of Information Science and Technology, Southwest Jiaotong University, Chengdu 611756, China
    2 School of Mathematics, Southwest Jiaotong University, Chengdu 611756, China
  • Received: 2024-10-08 Revised: 2024-12-07 Online: 2025-10-15 Published: 2025-10-14
  • Corresponding author: MENG Hua (menghua@swjtu.edu.cn)
  • About author: WANG Liuyi, born in 2002, postgraduate. Her main research interests include artificial intelligence and adversarial examples. (wangly202410@163.com)
    MENG Hua, born in 1982, Ph.D, associate professor. His research interests include interpretability in deep learning, topological data analysis and knowledge representation and reasoning.
  • Supported by:
    Fundamental Research Funds for the Central Universities of Ministry of Education of China (2682024ZTPY041), Science and Technology Program of Sichuan Province (2023YFH0066) and Science and Technology Program of Chengdu (2023-RK00-00080-ZF).


Abstract: Deep neural networks have achieved widespread application in the field of image recognition; however, their complex structures make them vulnerable to adversarial attacks. Constructing adversarial examples that are imperceptible to the human eye is crucial for evaluating the security of these networks. Existing adversarial example generation methods for images typically apply small perturbations to the original samples, constrained by lp-norms. This simplistic approach treats all pixels equally, applying a uniform constraint to the allowable perturbation at each pixel, which limits the flexibility of adversarial example generation and makes the perturbations more detectable to the human eye. In practical applications, however, human visual sensitivity varies across different colors and image regions. To exploit this property, this paper proposes an adaptive perturbation scheme based on perceptual sensitivity, in which different perturbation constraints are applied to different pixels, thereby enhancing the robustness of the adversarial examples. Specifically, the method employs spectral analysis to divide the image into high-frequency and low-frequency regions and applies a novel spatial constraint to regulate the perturbations, introducing larger perturbations in high-frequency regions, where the human eye is less sensitive, so as to improve adversarial effectiveness. Extensive experiments on the ImageNet-1K and CIFAR-10 datasets demonstrate that the proposed adversarial example generation strategy can be coupled with various attack methods, significantly enhancing adversarial performance while ensuring imperceptibility.
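As a rough illustration of the pipeline the abstract describes, the sketch below splits an image into high- and low-frequency regions with a block-wise DCT and then couples the resulting per-pixel budget with an FGSM-style step. This is a minimal sketch under stated assumptions, not the paper's implementation: the block size, energy cut-off, and the two budgets `eps_low`/`eps_high` are hypothetical choices, and the paper's actual spatial constraint may differ.

```python
# A minimal sketch, NOT the paper's method: block-wise DCT energy stands in for
# the high/low-frequency split, and a single FGSM-style step stands in for "any
# attack method" the scheme can be coupled with. All names and thresholds
# (block, hf_cut, ratio, eps_low, eps_high) are hypothetical.
import numpy as np
from scipy.fft import dctn


def high_frequency_mask(gray, block=8, hf_cut=8, ratio=0.15):
    """Boolean mask: True where a pixel's DCT block is texture-rich.

    gray : (H, W) array in [0, 1]; H and W assumed divisible by `block`.
    """
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=bool)
    u, v = np.meshgrid(np.arange(block), np.arange(block), indexing="ij")
    hf = (u + v) >= hf_cut  # crude zig-zag cut: high index sum = high frequency
    for i in range(0, h, block):
        for j in range(0, w, block):
            coeffs = dctn(gray[i:i + block, j:j + block], norm="ortho")
            energy = np.abs(coeffs)
            # A block counts as high-frequency if enough of its spectral
            # energy sits above the cut-off.
            if energy[hf].sum() / (energy.sum() + 1e-12) > ratio:
                mask[i:i + block, j:j + block] = True
    return mask


def masked_fgsm_step(image, grad, hf_mask, eps_low=2 / 255, eps_high=8 / 255):
    """One FGSM-style step with a per-pixel budget instead of a uniform eps.

    Pixels in texture-rich (high-frequency) regions, where the eye is less
    sensitive, receive the larger budget eps_high; smooth regions get eps_low.
    image, grad : (H, W, 3) arrays; hf_mask : (H, W) boolean.
    """
    eps = np.where(hf_mask[..., None], eps_high, eps_low)  # broadcast over RGB
    adv = image + eps * np.sign(grad)
    return np.clip(adv, 0.0, 1.0)
```

Coupling the same per-pixel budget with an iterative attack such as PGD would simply repeat the step while clipping each pixel to its own [x - eps, x + eps] interval, which matches the abstract's claim that the constraint composes with various attack methods.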

Key words: Deep neural networks, Adversarial examples, High-frequency, Perturbations, Robustness

CLC Number: TP183