计算机科学 (Computer Science) ›› 2025, Vol. 52 ›› Issue (11A): 241100104-7. doi: 10.11896/jsjkx.241100104

• Information Security •

Attacking Image Manipulation Localization Models by Eliminating Semantic Features

JIANG Weihao, LIU Bo

  1. Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
  • Online: 2025-11-15  Published: 2025-11-10
  • Corresponding author: LIU Bo (boliu@cqupt.edu.cn)
  • About the author: 2021211795@stu.cqupt.edu.cn
  • Supported by: Natural Science Foundation of Chongqing (General Program) (CSTB2023NSCQ-MSX0341)

Abstract: The public is increasingly concerned about rapidly evolving image manipulation techniques, which raise ethical and security issues. Deep neural networks can be used to locate tampered regions in images. However, as deep neural networks have advanced, adversarial attacks against them have emerged in large numbers, and these attack methods have in turn promoted research on model robustness. Existing adversarial attack methods mainly focus on tampering-trace features, but different image manipulation localization models attend to different tampering traces, so the resulting adversarial examples transfer poorly. Since convolutional neural networks and Transformer networks can also extract semantic features, and image manipulation localization models usually adopt them as baseline models, these models inevitably capture some semantic features when extracting tampering features. To improve the generalization ability of adversarial examples, an attack method is proposed that focuses on eliminating the semantic features of tampered images: a semantic segmentation network is trained as the attack target, and a loss function that attacks its intermediate semantic features is proposed, making it difficult for the model to recognize the semantic information of the tampered region. The proposed attack has high transferability, hides perturbations better, and generates more aggressive adversarial examples. Extensive experiments show that it can attack the vast majority of existing models and outperforms other adversarial attack methods, providing novel insights for the image manipulation localization task.
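
To make the idea concrete, the sketch below shows one way the described attack could be implemented, assuming a PyTorch setting. It is an illustrative sketch, not the authors' released code: a pretrained semantic segmentation network (torchvision's DeepLabV3-ResNet50, chosen only for convenience) stands in for the trained surrogate attack target, a loss suppresses its intermediate semantic features inside the tampered region, and a PGD-style loop generates an L_inf-bounded perturbation. The hooked layer, the mask handling, the function names (semantic_elimination_loss, attack), and the hyper-parameters (eps, alpha, steps) are all assumptions for illustration.

    # Illustrative sketch (not the authors' code): craft an adversarial image that
    # suppresses the intermediate semantic features of a surrogate segmentation
    # network inside the tampered region, so that localization models lose the
    # semantic cues they implicitly extract. Model, layer and hyper-parameters
    # are assumptions; input normalization is omitted for brevity.
    import torch
    import torch.nn.functional as F
    import torchvision

    # Surrogate semantic segmentation network used as the attack target.
    surrogate = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()
    for p in surrogate.parameters():
        p.requires_grad_(False)

    features = {}
    def save_feature(name):
        def _hook(module, inputs, output):
            features[name] = output
        return _hook
    # Hook an intermediate semantic feature map (the layer choice is an assumption).
    surrogate.backbone.layer3.register_forward_hook(save_feature("sem"))

    def semantic_elimination_loss(image, tamper_mask):
        """Energy of the intermediate semantic features inside the tampered region."""
        surrogate(image)                                  # forward pass fills `features`
        fmap = features["sem"]                            # shape (B, C, h, w)
        mask = F.interpolate(tamper_mask, size=fmap.shape[-2:], mode="nearest")
        return (fmap * mask).pow(2).mean()

    def attack(image, tamper_mask, eps=8/255, alpha=1/255, steps=40):
        """PGD-style loop: minimize the semantic energy under an L_inf budget."""
        delta = torch.zeros_like(image, requires_grad=True)
        for _ in range(steps):
            loss = semantic_elimination_loss(image + delta, tamper_mask)
            loss.backward()
            with torch.no_grad():
                delta -= alpha * delta.grad.sign()                 # descend: erase semantics
                delta.clamp_(-eps, eps)                            # stay within the budget
                delta.copy_((image + delta).clamp(0, 1) - image)   # keep a valid image
            delta.grad.zero_()
        return (image + delta).clamp(0, 1).detach()

    # Usage (shapes are assumptions): image is (1, 3, H, W) in [0, 1],
    # tamper_mask is a binary (1, 1, H, W) map of the tampered region.
    # adv = attack(image, tamper_mask)

Attacking an intermediate feature map rather than the surrogate's final prediction reflects the design choice suggested by the abstract: semantic features are shared across convolutional and Transformer backbones, so an example that erases them should transfer across localization models regardless of which tampering traces each one relies on.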

Key words: Adversarial attack, Deep learning, Image manipulation localization

CLC Number: TP309