Computer Science ›› 2021, Vol. 48 ›› Issue (7): 17-24. doi: 10.11896/jsjkx.210300305

Special Topic: Artificial Intelligence Security

• Artificial Intelligence Security •

  • Corresponding author: ZHOU Chuan (zhouchuan1@iie.ac.cn)

Security Evaluation Method for Risk of Adversarial Attack on Face Detection

JING Hui-yun1, ZHOU Chuan2,3, HE Xin4

  1 China Academy of Information and Communications Technology, Beijing 100083, China
    2 Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100097, China
    3 School of Cyber Security, University of Chinese Academy of Sciences, Beijing 100049, China
    4 National Computer Network Emergency Response Technical Team/Coordination Center of China, Beijing 102209, China
  • Received: 2021-03-21 Revised: 2021-04-23 Online: 2021-07-15 Published: 2021-07-02
  • About author: JING Hui-yun, born in 1987, Ph.D, senior engineer. Her main research interests include artificial intelligence security and data security. (jinghuiyun@caict.ac.cn)
    ZHOU Chuan, born in 1997, postgraduate, is a student member of China Computer Federation. His main research interests include artificial intelligence security and cloud computing security.
  • Supported by:
    National 242 Information Security Program(2018Q39).


Abstract: Face detection is a classic problem in the field of computer vision. Empowered by artificial intelligence and big data, it has taken on new vitality, showing important application value and broad prospects in face-scan payment, identity authentication, beauty cameras, intelligent security, and other fields. However, as the deployment of face detection accelerates across applications, its security risks and hidden dangers have become increasingly prominent. This paper therefore analyzes and summarizes the security risks that current face detection models face at each stage of their life cycle. Among these risks, adversarial attacks have received extensive attention because they pose a serious threat to the availability and reliability of face detection and may disable the face detection module altogether. Current adversarial attacks on face detection mainly focus on white-box attacks. However, a white-box attack requires full knowledge of the internal structure and all parameters of a specific face detection model, while the structure and parameters of commercially deployed face detection models are usually inaccessible in the real physical world, in order to protect trade secrets and corporate interests. This makes it almost impossible to break commercial face detection models with white-box methods in the real world. To solve this problem, this paper proposes a black-box physical-domain adversarial attack method for face detection. Following the idea of ensemble learning, the method extracts a common attention heat map shared by many face detection models and then attacks the obtained common attention heat map. Experiments show that the method successfully evades black-box face detection models deployed on mobile terminals, including the face detection modules of the terminals' built-in camera software, face-payment software, and beauty camera software. This demonstrates that the method is helpful for evaluating the security of face detection models in the real world.
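The ensemble idea described in the abstract — averaging the attention heat maps of several surrogate detectors and concentrating the perturbation where they jointly attend — can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the `surrogate_maps` callables stand in for Grad-CAM-style attention extractors over real face detectors, and `grad_sign` stands in for the sign of a loss gradient, both hypothetical placeholders.

```python
import numpy as np

def common_attention_map(image, surrogate_maps):
    """Average the attention heat maps produced by several surrogate
    face detectors into one 'common' heat map (the ensemble idea),
    normalized to [0, 1] so it can serve as a perturbation mask."""
    maps = [m(image) for m in surrogate_maps]
    avg = np.mean(maps, axis=0)
    return (avg - avg.min()) / (avg.max() - avg.min() + 1e-12)

def attack_step(image, heat_map, grad_sign, epsilon=0.03, threshold=0.5):
    """One FGSM-style step restricted to high-attention regions:
    pixels outside the common attention region are left untouched."""
    mask = (heat_map >= threshold).astype(image.dtype)
    adv = image + epsilon * mask[..., None] * grad_sign  # broadcast over RGB
    return np.clip(adv, 0.0, 1.0)  # keep a valid image
```

In the paper's setting this step would be iterated (e.g. with momentum) against the common heat map rather than any single model's output, which is what lets the perturbation transfer to unseen black-box detectors.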

Key words: Adversarial attack, Artificial intelligence security, Face detection

CLC number: TP183