Computer Science ›› 2021, Vol. 48 ›› Issue (7): 17-24.doi: 10.11896/jsjkx.210300305

Special Issue: Artificial Intelligence Security


Security Evaluation Method for Risk of Adversarial Attack on Face Detection

JING Hui-yun1, ZHOU Chuan2,3, HE Xin4

  1 China Academy of Information and Communications Technology, Beijing 100083, China
    2 Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100097, China
    3 School of Cyber Security, University of Chinese Academy of Sciences, Beijing 100049, China
    4 National Computer Network Emergency Response Technical Team/Coordination Center of China, Beijing 102209, China
  • Received: 2021-03-21  Revised: 2021-04-23  Online: 2021-07-15  Published: 2021-07-02
  • About author: JING Hui-yun, born in 1987, Ph.D, senior engineer. Her main research interests include artificial intelligence security and data security. (jinghuiyun@caict.ac.cn)
    ZHOU Chuan, born in 1997, postgraduate, is a student member of China Computer Federation. His main research interests include artificial intelligence security and cloud computing security.
  • Supported by:
    National 242 Information Security Program (2018Q39).

Abstract: Face detection is a classic problem in computer vision. Driven by artificial intelligence and big data, it has taken on new vitality and shows important application value and great promise in face payment, identity authentication, beauty cameras, intelligent security, and other fields. However, as the deployment and application of face detection accelerates, its security risks and hidden dangers have become increasingly prominent. This paper therefore analyzes and summarizes the security risks that face detection models face at each stage of their life cycle. Among these risks, adversarial attacks have received extensive attention because they pose a serious threat to the availability and reliability of face detection and can disable the face detection module altogether. Current adversarial attacks on face detection are mainly white-box attacks. However, a white-box attack requires full knowledge of the internal structure and all parameters of the target model, whereas the structure and parameters of commercially deployed face detection models are usually inaccessible in the real physical world, for the protection of trade secrets and corporate interests. This makes it almost impossible to attack commercial face detection models in the real world with white-box methods. To solve this problem, this paper proposes a black-box physical adversarial attack method for face detection. Following the idea of ensemble learning, it extracts the public attention heat map shared by multiple face detection models and then attacks that heat map. Experiments show that the method successfully evades black-box face detection models deployed on mobile terminals, including the face detection modules of built-in camera software, face payment software, and beauty camera software. This demonstrates that the method is helpful for evaluating the security of face detection models in the real world.
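Below is a minimal, hypothetical PyTorch sketch of the two-stage pipeline the abstract outlines: extracting a shared ("public") attention heat map from an ensemble of surrogate face detectors via Grad-CAM-style gradients, then concentrating a momentum-iterative perturbation on that shared region. Everything here is an illustrative assumption rather than the authors' implementation: the `detectors` list of (model, feature-layer) pairs, the `score_fn` that reduces a detector's output to a scalar face-confidence score, and the mask threshold and attack hyperparameters are all placeholders.

```python
# Hypothetical sketch, not the authors' code: ensemble "public" attention map
# plus a momentum-iterative attack restricted to that map.
import torch
import torch.nn.functional as F

def grad_cam_map(model, layer, image, score_fn):
    """Grad-CAM-style attention map for one surrogate detector."""
    feats, grads = [], []
    h1 = layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    score = score_fn(model(image))                 # scalar face-confidence score
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    weights = grads[0].mean(dim=(2, 3), keepdim=True)   # pooled gradients
    cam = F.relu((weights * feats[0]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False)
    return cam / (cam.max() + 1e-8)                # normalize to [0, 1]

def public_attention_map(detectors, image, score_fn):
    """Average the attention maps of all surrogates into one shared map."""
    maps = [grad_cam_map(m, layer, image, score_fn) for m, layer in detectors]
    return torch.stack(maps).mean(dim=0)

def evade_detectors(detectors, image, score_fn,
                    eps=8 / 255, alpha=1 / 255, steps=20, mu=1.0, thresh=0.5):
    """Momentum-iterative perturbation confined to the shared attention region."""
    mask = (public_attention_map(detectors, image, score_fn) > thresh).float()
    x_adv = image.clone().detach()
    momentum = torch.zeros_like(image)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = sum(score_fn(m(x_adv)) for m, _ in detectors)  # total confidence
        grad, = torch.autograd.grad(loss, x_adv)
        momentum = mu * momentum + grad / (grad.abs().mean() + 1e-12)
        # Descend on the confidence score, only inside the masked region.
        x_adv = x_adv.detach() - alpha * momentum.sign() * mask
        x_adv = image + (x_adv - image).clamp(-eps, eps)      # L_inf projection
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```

Confining the perturbation to the region where all surrogate models attend is what is meant to make it transfer to unseen black-box detectors; a physical attack would additionally print and photograph the masked perturbation, which this sketch omits.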

Key words: Adversarial attack, Artificial intelligence security, Face detection

CLC Number: TP183