Computer Science ›› 2018, Vol. 45 ›› Issue (1): 34-38. doi: 10.11896/j.issn.1002-137X.2018.01.005

• CRSSC-CWI-CGrC-3WD 2017 •

  1. Institute of Big Data Science and Industry, Shanxi University, Taiyuan 030006, China; Key Laboratory of Computational Intelligence and Chinese Information Processing of Ministry of Education, Shanxi University, Taiyuan 030006, China; School of Computer and Information Technology, Shanxi University, Taiyuan 030006, China; School of Software, Shanxi University, Taiyuan 030006, China
  • Supported by:
    This work was supported by the National Natural Science Foundation of China (61672332, 61432011, U1435212), the Program for New Century Excellent Talents in University of the Ministry of Education (NCET-12-1031), the Program for Top Young and Middle-aged Innovative Talents of Higher Learning Institutions of Shanxi, and the Shanxi Province "Sanjin Scholars" Distinguished Professor Program.

Ensemble Method Against Evasion Attacks with Different Attack Strengths

LIU Xiao-qin, WANG Jie-ting, QIAN Yu-hua and WANG Xiao-yue   

  • Online: 2018-01-15  Published: 2018-11-13


Abstract: In adversarial learning, attackers driven by illegal purposes probe and exploit the vulnerabilities of a classifier so that malicious samples evade its detection. Adversarial learning has been widely applied to intrusion detection in computer networks, spam filtering, biometric identification and other fields. Existing studies mostly apply off-the-shelf ensemble methods to adversarial classification and show that multiple classifiers are more robust than a single classifier. However, in adversarial learning the attacker's prior information has a considerable influence on the robustness of the classifier. Motivated by this, the proposed ensemble learning algorithm simulates attacks of different strengths during the learning process and increases the weights of misclassified samples, improving the robustness of the multiple-classifier system while maintaining its accuracy. Experimental results show that the proposed algorithm against evasion attacks with different attack strengths is more robust than a Bagging ensemble. Finally, the convergence of the algorithm and the influence of its parameters are analyzed.

Key words: Adversarial learning, Evasion attacks, Multiple classifier systems, Robustness
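The abstract outlines the key mechanism: during training, evasion attacks of different strengths are simulated, and the weights of misclassified samples are increased. The following is a minimal sketch of that idea under my own assumptions (decision stumps as base learners, an attack modeled as shifting malicious samples toward the benign class mean, and AdaBoost-style reweighting); the paper's actual algorithm may differ in all of these details.

```python
import numpy as np

def weighted_stump(X, y, w):
    """Train a one-feature threshold classifier minimizing weighted error."""
    best = (np.inf, 0, 0.0, 1)                 # (error, feature, threshold, polarity)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, j, thr, pol)
    return best

def predict_stump(stump, X):
    _, j, thr, pol = stump
    return np.where(pol * (X[:, j] - thr) >= 0, 1, -1)

def train_ensemble(X, y, strengths):
    """y in {+1 (malicious), -1 (benign)}; one boosting round per attack strength."""
    n = len(y)
    w = np.full(n, 1.0 / n)                    # uniform initial sample weights
    benign_mean = X[y == -1].mean(axis=0)
    ensemble = []
    for s in strengths:
        # Simulate an evasion attack of strength s: move each malicious
        # sample a fraction s of the way toward the benign class mean.
        Xa = X.copy()
        mal = y == 1
        Xa[mal] += s * (benign_mean - X[mal])
        stump = weighted_stump(Xa, y, w)
        pred = predict_stump(stump, Xa)
        err = max(w[pred != y].sum(), 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)  # AdaBoost-style vote weight
        ensemble.append((alpha, stump))
        w *= np.exp(-alpha * y * pred)         # up-weight misclassified samples
        w /= w.sum()
    return ensemble

def predict_ensemble(ensemble, X):
    score = sum(a * predict_stump(st, X) for a, st in ensemble)
    return np.where(score >= 0, 1, -1)
```

On a toy two-class dataset, `train_ensemble(X, y, strengths=[0.0, 0.2, 0.4])` trains one base learner per simulated attack strength, so the weighted vote has seen both unattacked and attacked versions of the malicious class.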

