Computer Science ›› 2023, Vol. 50 ›› Issue (3): 333-350. DOI: 10.11896/jsjkx.220600031

• Information Security •

Backdoor Attack on Deep Learning Models: A Survey

YING Zonghao, WU Bin   

  1. State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100085, China
  2. School of Cyber Security, University of Chinese Academy of Sciences, Beijing 100049, China
  • Received: 2022-06-02  Revised: 2022-11-19  Online: 2023-03-15  Published: 2023-03-15
  • About author: YING Zonghao, born in 1997, postgraduate, is a member of China Computer Federation. His main research interests include adversarial attack and backdoor attack.
    WU Bin, born in 1980, Ph.D supervisor, is a senior member of China Computer Federation. His main research interests include network security and covert communication.
  • Supported by:
    National Natural Science Foundation of China (U1936119, 62272007), Major Technology Program of Hainan, China (ZDKJ2019003) and Science and Technology Research and Development Program Project of China State Railway Group Co., Ltd (N2021W003, N2021W004).

Abstract: In recent years, artificial intelligence, represented by deep learning, has achieved theoretical and technological breakthroughs. Backed by abundant data, effective algorithms and ample computing power, deep learning has received unprecedented attention and has been widely applied across many fields, bringing substantial improvements to each of them. As deep learning spreads into security-critical domains, its own security has drawn growing attention, and researchers have uncovered numerous security risks in deep learning systems. Regarding the security of deep learning models in particular, researchers have extensively explored a new attack paradigm: the backdoor attack. A backdoor attack can threaten a deep learning model throughout its entire life cycle, and a large body of work has proposed attack schemes from different angles. Taking the security threats to deep learning systems as a starting point, this paper first introduces the current attack paradigms. On this basis, it presents the background and principles of backdoor attacks and distinguishes them from similar paradigms such as adversarial attacks and data poisoning attacks, then elaborates the working principles and distinguishing features of the classic backdoor attack methods proposed to date. According to their working principles, the attack schemes are divided into data-poisoning-based attacks, model-poisoning-based attacks, and others; the paper systematically summarizes them and clarifies the strengths and weaknesses of current research. It then surveys state-of-the-art backdoor attacks against various typical applications and popular deep learning paradigms, further revealing the threat that backdoor attacks pose to deep learning models. Finally, the paper summarizes research that turns the characteristics of backdoor attacks to positive uses, examines the open challenges of backdoor attack research, and discusses future directions worth in-depth exploration, aiming to guide follow-up researchers in further advancing both the study of backdoor attacks and the security of deep learning.
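To make the data-poisoning-based paradigm summarized above concrete, the following is a minimal sketch of a BadNets-style dirty-label poisoning step. It assumes training images are NumPy arrays in [0, 1] with shape (H, W, C); the trigger pattern, poison rate, and target label are illustrative choices for this sketch, not parameters from any surveyed work.

# Minimal sketch of a data-poisoning backdoor (BadNets-style, dirty-label setting).
# Assumptions: images in [0, 1], shape (N, H, W, C); integer class labels.
import numpy as np

def stamp_trigger(image: np.ndarray, size: int = 3) -> np.ndarray:
    """Stamp a small white square in the bottom-right corner as the trigger."""
    poisoned = image.copy()
    poisoned[-size:, -size:, :] = 1.0
    return poisoned

def poison_dataset(images: np.ndarray, labels: np.ndarray,
                   target_label: int, poison_rate: float = 0.1,
                   seed: int = 0) -> tuple:
    """Stamp the trigger on a random fraction of samples and relabel them.

    A model trained on the resulting mixture behaves normally on clean inputs
    but predicts target_label whenever the trigger is present.
    """
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = target_label  # label flipping: the "dirty label" setting
    return images, labels

In use, the poisoned pair replaces the clean training set; at test time, stamping the same trigger on any input steers the model toward the attacker's target class, while clean accuracy remains largely intact. Model-poisoning-based attacks reach the same end without touching the data, for example by directly perturbing trained weights.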

Key words: Deep learning, Model security, Backdoor attack, Attack paradigms, Data poisoning

CLC Number: TP391