Computer Science ›› 2021, Vol. 48 ›› Issue (7): 1-8. doi: 10.11896/jsjkx.210300306

Special Issue: Artificial Intelligence Security


Artificial Intelligence Security Framework

JING Hui-yun1, WEI Wei1, ZHOU Chuan2,3, HE Xin4   

  1 China Academy of Information and Communications Technology, Beijing 100083, China
    2 Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100097, China
    3 School of Cyber Security, University of Chinese Academy of Sciences, Beijing 100049, China
    4 National Computer Network Emergency Response Technical Team/Coordination Center of China, Beijing 102209, China
  • Received:2021-03-30 Revised:2021-04-28 Online:2021-07-15 Published:2021-07-02
  • About author: JING Hui-yun, born in 1987, Ph.D, senior engineer. Her main research interests include artificial intelligence security and data security. (jinghuiyun@caict.ac.cn)
    HE Xin, born in 1982, Ph.D, senior engineer. His main research interests include network information security.
  • Supported by:
    National 242 Information Security Program (2018Q39).

Abstract: With the advent of artificial intelligence, organizations in every sector have begun deploying AI systems according to their own business needs, accelerating the large-scale construction and widespread application of artificial intelligence worldwide. At the same time, security risks have emerged across AI infrastructure, design and development, and integration and application. To mitigate these risks, countries around the world have formulated AI ethical norms and strengthened laws, regulations, and industry management to carry out AI security governance. Within such governance, the AI security technology system plays an important guiding role: it is an essential part of AI security governance and critical support for implementing AI ethical norms and meeting legal and regulatory requirements. However, a comprehensive AI security framework is still generally lacking, while security risks remain prominent and scattered across the AI life cycle, so there is an urgent need to systematically summarize the security risks arising at each stage of that life cycle. To address these problems, this paper proposes an AI security framework covering AI security goals, graded AI security capabilities, and AI security technology and management systems, in the hope of providing a valuable reference for improving the security and protection capabilities of artificial intelligence.
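
To make the framework's outline more concrete, the following is a minimal illustrative sketch, not taken from the paper, that models the three parts named in the abstract (AI security goals, graded security capabilities, and technology and management systems) together with life-cycle risks as a simple Python data structure. All class, field, and example names (AISecurityFramework, CapabilityGrade, LifecycleRisk, and the sample goals and measures) are hypothetical and included only for illustration.

# Illustrative sketch only: a hypothetical data model for the AI security
# framework described in the abstract (security goals, graded capabilities,
# technology/management measures, and life-cycle risks).
from dataclasses import dataclass, field
from enum import IntEnum
from typing import List


class CapabilityGrade(IntEnum):
    """Hypothetical graded levels of AI security capability."""
    BASIC = 1
    MANAGED = 2
    COMPREHENSIVE = 3


@dataclass
class LifecycleRisk:
    """A security risk tied to one stage of the AI life cycle."""
    stage: str          # e.g. "infrastructure", "design and development", "integration and application"
    description: str


@dataclass
class AISecurityFramework:
    """Container mirroring the three parts named in the abstract."""
    security_goals: List[str] = field(default_factory=list)
    capability_grade: CapabilityGrade = CapabilityGrade.BASIC
    technical_measures: List[str] = field(default_factory=list)
    management_measures: List[str] = field(default_factory=list)
    lifecycle_risks: List[LifecycleRisk] = field(default_factory=list)

    def risks_for_stage(self, stage: str) -> List[LifecycleRisk]:
        """Return the recorded risks for a given life-cycle stage."""
        return [r for r in self.lifecycle_risks if r.stage == stage]


if __name__ == "__main__":
    # Example values are placeholders, not the paper's actual taxonomy.
    framework = AISecurityFramework(
        security_goals=["robustness", "privacy", "fairness", "explainability"],
        capability_grade=CapabilityGrade.MANAGED,
        technical_measures=["adversarial-example detection", "model watermarking"],
        management_measures=["ethical norms", "regulatory compliance review"],
        lifecycle_risks=[
            LifecycleRisk("infrastructure", "vulnerabilities in deep learning frameworks"),
            LifecycleRisk("design and development", "data poisoning and backdoor attacks"),
        ],
    )
    print(framework.risks_for_stage("design and development"))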

Key words: Artificial intelligence, Security framework

CLC Number: TP183