Computer Science, 2021, Vol. 48, Issue (7): 1-8. doi: 10.11896/jsjkx.210300306

Special Topic: Artificial Intelligence Security

Artificial Intelligence Security Framework

JING Hui-yun1, WEI Wei1, ZHOU Chuan2,3, HE Xin4

  1 China Academy of Information and Communications Technology, Beijing 100083, China
    2 Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100097, China
    3 School of Cyber Security, University of Chinese Academy of Sciences, Beijing 100049, China
    4 National Computer Network Emergency Response Technical Team/Coordination Center of China, Beijing 102209, China
  • Received: 2021-03-30  Revised: 2021-04-28  Online: 2021-07-15  Published: 2021-07-02
  • Corresponding author: HE Xin (hexin@cert.org.cn)
  • About the authors: JING Hui-yun, born in 1987, Ph.D., senior engineer. Her main research interests include artificial intelligence security and data security. (jinghuiyun@caict.ac.cn)
    HE Xin, born in 1982, Ph.D., senior engineer. His main research interests include network information security.
  • Supported by:
    National 242 Information Security Program (2018Q39).

Abstract: With the advent of the artificial intelligence era, all walks of life have begun to deploy AI systems according to their own business needs, which has greatly accelerated the large-scale deployment and application of artificial intelligence worldwide. However, security risks arise along with it, in AI infrastructure, in design and development, and in integration and application. To mitigate these risks, countries around the world have pursued AI security governance by formulating AI ethical norms and improving laws, regulations, and industry management. Within AI security governance, the AI security technology system has important guiding significance. Specifically, it is an essential component of AI security governance, critical support for implementing AI ethical norms and meeting legal and regulatory requirements, and an important guarantee for the healthy and orderly development of the AI industry. At the current stage, however, AI security frameworks are generally lacking worldwide, and security risks are prominent yet treated in isolation, so there is an urgent need to summarize the security risks that exist in each stage of the AI life cycle. To address these problems, this paper proposes an AI security framework covering AI security goals, graded AI security capabilities, and AI security technology and management systems, which is expected to provide a useful reference for improving AI security protection capabilities across society.

Key words: Artificial intelligence, Security framework
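As an illustrative sketch only (not an artifact of the paper), the life-cycle risk view described in the abstract can be modeled as a mapping from life-cycle stages to representative risks. The stage names follow the abstract; the risk names are common examples from the AI-security literature, not an enumeration taken from this framework.

```python
from enum import Enum


class LifecycleStage(Enum):
    """AI life-cycle stages named in the abstract."""
    INFRASTRUCTURE = "infrastructure"
    DESIGN_DEVELOPMENT = "design and development"
    INTEGRATION_APPLICATION = "integration and application"


# Hypothetical mapping from each stage to representative risks;
# the risk names are illustrative examples, not the paper's own list.
STAGE_RISKS = {
    LifecycleStage.INFRASTRUCTURE: [
        "deep-learning framework vulnerabilities",
        "supply-chain compromise",
    ],
    LifecycleStage.DESIGN_DEVELOPMENT: [
        "data poisoning",
        "backdoor implantation",
    ],
    LifecycleStage.INTEGRATION_APPLICATION: [
        "adversarial examples",
        "model stealing",
        "membership inference",
    ],
}


def risks_for(stage: LifecycleStage) -> list[str]:
    """Return the representative risks recorded for one stage."""
    return STAGE_RISKS[stage]
```

A framework of this shape makes it straightforward to attach graded security capabilities or countermeasures per stage, which is the kind of structure the proposed framework organizes.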

CLC number: TP183