Computer Science ›› 2025, Vol. 52 ›› Issue (10): 382-394. doi: 10.11896/jsjkx.240800046

• Information Security •

Benign-salient Region Based End-to-End Adversarial Malware Generation Method

YUAN Mengjiao, LU Tianliang, HUANG Wanxin, HE Houhan

  1. College of Information Network Security, People's Public Security University of China, Beijing 100038, China
  • Received: 2024-08-07  Revised: 2024-11-09  Online: 2025-10-15  Published: 2025-10-14
  • Corresponding author: LU Tianliang (lutianliang@ppsuc.edu.cn)
  • About author: YUAN Mengjiao, born in 1999, postgraduate (2022211473@stu.ppsuc.edu.cn). Her main research interests include adversarial examples and malware detection.
    LU Tianliang, born in 1985, Ph.D, professor, Ph.D supervisor. His main research interests include cyber security and artificial intelligence security.
  • Supported by:
    Science and Technology Program of the Ministry of Public Security (2023JSM09), Fundamental Research Funds for the Central Universities of the Ministry of Education of China (2023JKF01ZK08) and Double First-Class Innovation Research Project for People's Public Security University of China (2023SYL07).


Abstract: Malware detection methods that combine visualization techniques and deep learning have gained widespread attention due to their high accuracy and low cost. However, deep learning models are vulnerable to adversarial attacks: a small number of carefully crafted perturbations can mislead a model into making incorrect decisions with high confidence. Current research on adversarial attacks against visualization-based detection methods for Windows malware has focused primarily on improving the attack effectiveness of adversarial images, while neglecting the actual harmfulness of the adversarial examples. This study therefore proposes BREAM (Benign-salient Region based End-to-end Adversarial Malware generation), a method for generating functional, harmful adversarial malware. First, the salient regions of benign images are selected as initial perturbations to strengthen the attack effect of the adversarial images, and a mask matrix is introduced to restrict the perturbation region so that the adversarial examples remain functional. Second, an inverse feature mapping method is proposed to convert adversarial images back into adversarial malware, achieving end-to-end generation of malware adversarial examples. The attack performance of BREAM is evaluated on four target models. Experimental results show that when the target models use bilinear interpolation and nearest-neighbor interpolation respectively, the attack success rate of the adversarial images generated by BREAM is on average 47.96% and 28.39% higher than that of existing methods, and the attack success rate of the adversarial malware is on average 53.25% and 61.93% higher, reducing the classification accuracy of the target models by an average of 92.82% and 73.64%.
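The paper itself gives no code, but the core idea the abstract describes (seed the modifiable region with benign-salient content, then perturb only inside a functionality-preserving mask) can be sketched as follows. This is a rough illustration under assumptions, not the authors' BREAM implementation: the function name, the FGSM-style signed-gradient step, and the single-step update are all choices made here for brevity.

```python
import numpy as np

def masked_perturbation(image, benign_patch, mask, grad, eps=0.1):
    """Perturb a malware grayscale image only inside the masked region.

    image        -- malware image, float array with values in [0, 1]
    benign_patch -- salient content taken from a benign image (same shape
                    as image, zero outside the salient area); used as the
                    initial perturbation
    mask         -- binary matrix: 1 where bytes may change without
                    breaking the sample's functionality, 0 elsewhere
    grad         -- gradient of the detector's loss w.r.t. the image
    eps          -- step size of the signed-gradient update (assumption)
    """
    # Seed the modifiable region with benign-salient content ...
    adv = image + mask * (benign_patch - image)
    # ... then take one signed-gradient step, confined by the same mask.
    adv = adv + eps * mask * np.sign(grad)
    return np.clip(adv, 0.0, 1.0)
```

Pixels where the mask is zero are provably untouched, which is what keeps the corresponding bytes of the executable intact.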

Key words: Adversarial examples, Adversarial attacks, Malware detection, Malware visualization, Convolutional neural network
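For context on the "inverse feature mapping" step the abstract mentions, a minimal sketch of the standard byte-to-grayscale visualization (Nataraj-style) and its inverse is shown below. This is an assumption-laden simplification: the paper's inverse mapping must also undo the detector's input resizing (bilinear or nearest-neighbor interpolation), which is omitted here; `width=256` and the zero padding are illustrative choices.

```python
import numpy as np

def bytes_to_image(data: bytes, width: int = 256) -> np.ndarray:
    """Map a PE file's raw bytes to a 2-D grayscale image row by row."""
    buf = np.frombuffer(data, dtype=np.uint8)
    rows = -(-len(buf) // width)              # ceiling division
    padded = np.zeros(rows * width, dtype=np.uint8)
    padded[:len(buf)] = buf                   # zero-pad the last row
    return padded.reshape(rows, width)

def image_to_bytes(img: np.ndarray, original_len: int) -> bytes:
    """Inverse mapping: recover the byte sequence from an (adversarial)
    image, dropping the padding added during visualization."""
    return img.flatten()[:original_len].tobytes()
```

Because the mapping is lossless up to padding, pixel changes made in image space translate directly into byte changes in the executable, which is why the mask matrix above must cover only bytes the program can tolerate changing.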

CLC number: TP309.5