Computer Science ›› 2022, Vol. 49 ›› Issue (2): 92-106. doi: 10.11896/jsjkx.210800087

• Computer Vision: Theory and Application •

Survey of Research Progress on Adversarial Examples in Images

CHEN Meng-xuan1, ZHANG Zhen-yong1, JI Shou-ling2, WEI Gui-yi3,4, SHAO Jun1   

  1 School of Computer and Information Engineering,Zhejiang Gongshang University,Hangzhou 310018,China
    2 College of Computer Science and Technology,Zhejiang University,Hangzhou 310058,China
    3 School of Information and Electronic Engineering,Zhejiang Gongshang University,Hangzhou 310018,China
    4 Sussex Artificial Intelligence Institute,Zhejiang Gongshang University,Hangzhou 310018,China
  • Received:2021-08-10 Revised:2021-09-18 Online:2022-02-15 Published:2022-02-23
• About author:CHEN Meng-xuan,born in 1996,postgraduate,is a member of China Computer Federation.Her main research interests include AI security and adversarial examples.
    SHAO Jun,born in 1981,Ph.D,professor,is a member of China Computer Federation.His main research interests include applied cryptography,blockchain and AI security.
  • Supported by:
    National Key Research and Development Program of China(2019YFB1804500) and National Natural Science Foundation of China(U1709217).

Abstract: With the development of deep learning theory, deep neural networks have achieved a series of breakthroughs and have been widely applied in various fields. Among these applications, those in the image domain, such as image classification, are the most popular. However, research shows that deep neural networks face many security risks, above all the threat posed by adversarial examples, which seriously hinders the deployment of image classification systems. To address this challenge, many recent research efforts have been devoted to adversarial examples in images, and a large body of results has emerged. Building on the existing literature, this paper first introduces the relevant concepts and terminology of adversarial examples in images and reviews adversarial attack methods and defense methods, classifying attacks by the attacker's capability and defenses by their underlying design idea, and analyzing the characteristics of, and connections between, the different categories. It then briefly describes adversarial attacks in the physical world. Finally, it discusses the remaining challenges of adversarial examples in images and potential future research directions.
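To make the attack side of the survey concrete, the following is a minimal sketch of the fast gradient sign method (FGSM) proposed in reference [20]; the PyTorch model, input tensors, and the perturbation budget epsilon are illustrative assumptions rather than details taken from the survey itself.

```python
# Minimal FGSM sketch (after reference [20]): take one step in the
# direction of the sign of the input gradient of the loss.
# The model and epsilon below are illustrative assumptions.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Return x_adv = x + epsilon * sign(grad_x L(theta, x, y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)    # classification loss on the clean input
    loss.backward()                        # fills x.grad with dL/dx
    x_adv = x + epsilon * x.grad.sign()    # single signed-gradient step
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid [0,1] range
```

Iterating this step with a small step size and projecting back into the epsilon-ball yields the iterative attacks of references [21] and [24].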

Key words: Adversarial attacks, Adversarial examples, Deep learning, Defense methods, Image field, Physical world

CLC Number: TP391

References
[1]KRIZHEVSKY A,SUTSKEVER I,HINTON G E.ImageNet classification with deep convolutional neural networks[C]//Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NeurIPS).Cambridge,MA:MIT Press,2012:1106-1114.
[2]HE K M,ZHANG X Y,REN S Q,et al.Deep residual learning for image recognition[C]//Proceedings of the 29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR).Piscataway,NJ:IEEE,2016:770-778.
[3]REN S Q,HE K M,GIRSHICK R B,et al.Faster R-CNN:towards real-time object detection with region proposal networks[C]//Proceedings of the 29th Annual Conference on Neural Information Processing Systems (NeurIPS).Cambridge,MA:MIT Press,2015:91-99.
[4]MOHAMED A R,DAHL G E,HINTON G E.Acoustic modeling using deep belief networks [J].IEEE Transactions on Audio,Speech & Language Processing,2012,20(1):14-22.
[5]BAHDANAU D,CHOROWSKI J,SERDYUK D,et al.End-to-end attention-based large vocabulary speech recognition[C]//Proceedings of the 33rd IEEE International Conference on Acoustics,Speech and Signal Processing (ICASSP).Piscataway,NJ:IEEE,2016:4945-4949.
[6]BOJARSKI M,TESTA D D,DWORAKOWSKI D,et al.End to end learning for self-driving cars [J].arXiv:1604.07316,2016.
[7]TIAN Y C,PEI K X,JANA S,et al.DeepTest:Automated testing of deep-neural-network-driven autonomous cars[C]//Proceedings of the 40th IEEE International Conference on Software Engineering (ICSE).Piscataway,NJ:IEEE,2018:303-314.
[8]LOPES A T,AGUIAR E D,SOUZA A F D,et al.Facial expression recognition with convolutional neural networks:coping with few data and the training sample order [J].Pattern Recognition,2017,61:610-628.
[9]SUN Y,WANG X G,TANG X O.Deep convolutional network cascade for facial point detection[C]//Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR).Piscataway,NJ:IEEE,2013:3476-3483.
[10]MEI S K,ZHU X J.Using machine teaching to identify optimal training-set attacks on machine learners[C]//Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI).Menlo Park,CA:AAAI,2015:2871-2877.
[11]SHOKRI R,STRONATI M,SONG C Z,et al.Membership inference attacks against machine learning models[C]//Proceedings of the 38th IEEE Symposium on Security and Privacy (S&P).Piscataway,NJ:IEEE,2017:3-18.
[12]JI Y J,ZHANG X Y,WANG T.Backdoor attacks against learning systems[C]//Proceedings of the 5th IEEE Conference on Communications and Network Security (CNS).Piscataway,NJ:IEEE,2017:1-9.
[13]SZEGEDY C,ZAREMBA W,SUTSKEVER I,et al.Intriguing properties of neural networks[C]//Proceedings of the 2nd International Conference on Learning Representations (ICLR).La Jolla,CA:ICLR,2014.
[14]AKHTAR N,MIAN A S.Threat of adversarial attacks on deep learning in computer vision:a survey [J].IEEE Access,2018,6:14410-14430.
[15]PAPERNOT N,MCDANIEL P D,SINHA A,et al.SoK:Security and privacy in machine learning[C]//Proceedings of the 3rd IEEE European Symposium on Security and Privacy (EuroS&P).Piscataway,NJ:IEEE,2018:399-414.
[16]YUAN X Y,HE P,ZHU Q L,et al.Adversarial examples:attacks and defenses for deep learning [J].IEEE Transactions on Neural Networks and Learning Systems,2019,30(9):2805-2824.
[17]PAN W W,WANG X Y,SONG M L,et al.Overview of adversarial sample generation technology [J].Journal of Software,2020,31(1):67-81.
[18]WANG K D,YI P.Overview of research on model robustness in artificial intelligence adversarial environments [J].Journal of Information Security,2020,5(3):13-22.
[19]ZHANG T,YANG K W,WEI J H,et al.Survey on detecting and defending adversarial examples for image data [J/OL].Journal of Computer Research and Development[2021-08-08].http://kns.cnki.net/kcms/detail/11.1777.TP.20210607.1630.004.html.
[20]GOODFELLOW I J,SHLENS J,SZEGEDY C.Explaining and harnessing adversarial examples[C]//Proceedings of the 3rd International Conference on Learning Representations (ICLR).La Jolla,CA:ICLR,2015.
[21]KURAKIN A,GOODFELLOW I J,BENGIO S.Adversarial examples in the physical world[J].arXiv:1607.02533,2017.
[22]DONG Y P,LIAO F Z,PANG T Y,et al.Boosting adversarial attacks with momentum[C]//Proceedings of the 31st IEEE Conference on Computer Vision and Pattern Recognition (CVPR).Piscataway,NJ:IEEE,2018:9185-9193.
[23]XIE C H,ZHANG Z S,ZHOU Y Y,et al.Improving transferability of adversarial examples with input diversity[C]//Proceedings of the 32nd IEEE Conference on Computer Vision and Pattern Recognition (CVPR).Piscataway,NJ:IEEE,2019:2730-2739.
[24]MADRY A,MAKELOV A,SCHMIDT L,et al.Towards deep learning models resistant to adversarial attacks[C]//Proceedings of the 6th International Conference on Learning Representations (ICLR).La Jolla,CA:ICLR,2018.
[25]SRIRAMANAN G,ADDEPALLI S,BABURAJ A,et al.Guided adversarial attack for evaluating and enhancing adversarial defenses[C]//Proceedings of the 34th Annual Conference on Neural Information Processing Systems (NeurIPS).Cambridge,MA:MIT Press,2020.
[26]SIMONYAN K,VEDALDI A,ZISSERMAN A.Deep inside convolutional networks:visualising image classification models and saliency maps[J].arXiv:1312.6034,2014.
[27]PAPERNOT N,MCDANIEL P D,JHA S,et al.The limitations of deep learning in adversarial settings[C]//Proceedings of the 1st IEEE European Symposium on Security and Privacy (EuroS&P).Piscataway,NJ:IEEE,2016:372-387.
[28]CISSÉ M,ADI Y,NEVEROVA N,et al.Houdini:fooling deep structured prediction models [J].arXiv:1707.05373,2017.
[29]CARLINI N,WAGNER D A.Towards evaluating the robustness of neural networks[C]//Proceedings of the 38th IEEE Symposium on Security and Privacy (S&P).Piscataway,NJ:IEEE,2017:39-57.
[30]BALUJA S,FISCHER I.Adversarial transformation networks:learning to generate adversarial examples [J].arXiv:1703.09387,2017.
[31]CHEN P Y,SHARMA Y,ZHANG H,et al.EAD:Elastic-net attacks to deep neural networks via adversarial examples[C]//Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI).Menlo Park,CA:AAAI,2018:10-17.
[32]ZOU H,HASTIE T.Regularization and variable selection viathe elastic net [J].Journal of the Royal Statistical Society:Series B (Statistical Methodology),2005,67(2):301-320.
[33]SU J W,VARGAS D V,SAKURAI K.One pixel attack for fooling deep neural networks [J].IEEE Transactions on Evolutionary Computation,2019,23(5):828-841.
[34]MOOSAVI-DEZFOOLI S M,FAWZI A,FROSSARD P.DeepFool:A simple and accurate method to fool deep neural networks[C]//Proceedings of the 29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR).Piscataway,NJ:IEEE,2016:2574-2582.
[35]MOOSAVI-DEZFOOLI S M,FAWZI A,FAWZI O,et al.Universal adversarial perturbations[C]//Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR).Piscataway,NJ:IEEE,2017:86-94.
[36]LAIDLAW C,FEIZI S.Functional adversarial attacks[C]//Proceedings of the 33rd Annual Conference on Neural Information Processing Systems (NeurIPS).Cambridge,MA:MIT Press,2019:10408-10418.
[37]SARKAR S,BANSAL A,MAHBUB U,et al.UPSET and ANGRI:Breaking high performance image classifiers [J].arXiv:1707.01159,2017.
[38]PHAN H,XIE Y,LIAO S Y,et al.CAG:A real-time low-cost enhanced-robustness high-transferability content-aware adversarial attack generator[C]//Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI).Menlo Park,CA:AAAI,2020:5412-5419.
[39]ZHOU B L,KHOSLA A,LAPEDRIZA A,et al.Learning deep features for discriminative localization[C]//Proceedings of the 29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR).Piscataway,NJ:IEEE,2016:2921-2929.
[40]PAPERNOT N,MCDANIEL P D,GOODFELLOW I J,et al.Practical black-box attacks against machine learning[C]//Proceedings of the 12th ACM Asia Conference on Computer and Communications Security (AsiaCCS).New York:ACM,2017:506-519.
[41]PAPERNOT N,MCDANIEL P D,GOODFELLOW I J.Transferability in Machine Learning:from phenomena to black-box attacks using adversarial samples [J].arXiv:1605.07277,2016.
[42]VITTER J S.Random sampling with a reservoir[J].ACM Transactions on Mathematical Software (TOMS),1985,11(1):37-57.
[43]LI P C,YI J F,ZHANG L J.Query-efficient black-box attack by active learning[C]//Proceedings of the 18th IEEE International Conference on Data Mining (ICDM).Piscataway,NJ:IEEE,2018:1200-1205.
[44]DONG Y P,PANG T Y,SU H,et al.Evading defenses to transferable adversarial examples by translation-invariant attacks[C]//Proceedings of the 32nd IEEE Conference on Computer Vision and Pattern Recognition (CVPR).Piscataway,NJ:IEEE,2019:4312-4321.
[45]WU W B,SU Y X,CHEN X X,et al.Boosting the transferability of adversarial samples via attention[C]//Proceedings of the 33rd IEEE Conference on Computer Vision and Pattern Recognition (CVPR).Piscataway,NJ:IEEE,2020:1158-1167.
[46]LIU Y P,CHEN X Y,LIU C,et al.Delving into transferable adversarial examples and black-box attacks[C]//Proceedings of the 5th International Conference on Learning Representations (ICLR).La Jolla,CA:ICLR,2017.
[47]LI Y W,BAI S,ZHOU Y Y,et al.Learning transferable adversarial examples via ghost networks[C]//Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI).Menlo Park,CA:AAAI,2020:11458-11465.
[48]CHE Z H,BORJI A,ZHAI G T,et al.A new ensemble adversarial attack powered by long-term gradient memories[C]//Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI).Menlo Park,CA:AAAI,2020:3405-3413.
[49]ZHOU M Y,WU J,LIU Y P,et al.DaST:Data-free substitute training for adversarial attacks[C]//Proceedings of the 33rd IEEE Conference on Computer Vision and Pattern Recognition (CVPR).Piscataway,NJ:IEEE,2020:231-240.
[50]CHEN P Y,ZHANG H,SHARMA Y,et al.ZOO:Zeroth order optimization based black-box attacks to deep neural networks without training substitute models[C]//Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security (AISec).New York:ACM,2017:15-26.
[51]BHAGOJI A N,HE W,LI B,et al.Exploring the space of black-box attacks on deep neural networks[C]//Proceedings of the 6th International Conference on Learning Representations (ICLR).La Jolla,CA:ICLR,2018.
[52]TU C C,TING P,CHEN P Y,et al.AutoZOOM:Autoencoder-based zeroth order optimization method for attacking black-box neural networks[C]//Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI).Menlo Park,CA:AAAI,2019:742-749.
[53]ILYAS A,ENGSTROM L,ATHALYE A,et al.Black-box adversarial attacks with limited queries and information[C]//Proceedings of the 35th International Conference on Machine Learning (ICML).New York:ACM,2018:2142-2151.
[54]WIERSTRA D,SCHAUL T,GLASMACHERS T,et al.Natural evolution strategies [J].Journal of Machine Learning Research,2014,15(1):949-980.
[55]ILYAS A,ENGSTROM L,MADRY A.Prior Convictions:Black-box adversarial attacks with bandits and priors[C]//Proceedings of the 7th International Conference on Learning Representations (ICLR).La Jolla,CA:ICLR,2019.
[56]BRENDEL W,RAUBER J,BETHGE M.Decision-based adversarial attacks:reliable attacks against black-box machine learning models[C]//Proceedings of the 6th International Conference on Learning Representations (ICLR).La Jolla,CA:ICLR,2018.
[57]DONG Y P,SU H,WU B Y,et al.Efficient decision-based black-box adversarial attacks on face recognition[C]//Proceedings of the 32nd IEEE Conference on Computer Vision and Pattern Recognition (CVPR).Piscataway,NJ:IEEE,2019:7714-7722.
[58]HANSEN N,OSTERMEIER A.Completely derandomized self-adaptation in evolution strategies[J].Evolutionary Computation,2001,9(2):159-195.
[59]BRUNNER T,DIEHL F,LE M T,et al.Guessing Smart:Biased sampling for efficient black-box adversarial attacks[C]//Proceedings of the 17th IEEE International Conference on Computer Vision (ICCV).Piscataway,NJ:IEEE,2019:4957-4965.
[60]SHI Y C,HAN Y H,TIAN Q.Polishing decision-based adversarial noise with a customized sampling[C]//Proceedings of the 33rd IEEE Conference on Computer Vision and Pattern Recognition (CVPR).Piscataway,NJ:IEEE,2020:1027-1035.
[61]RAHMATI A,MOOSAVI-DEZFOOLI S M,FROSSARD P,et al.GeoDA:A geometric framework for black-box adversarial attacks[C]//Proceedings of the 33rd IEEE Conference on Computer Vision and Pattern Recognition (CVPR).Piscataway,NJ:IEEE,2020:8443-8452.
[62]GOODFELLOW I J,POUGET-ABADIE J,MIRZA M,et al. Generative adversarial nets[C]//Proceedings of the 28th Annual Conference on Neural Information Processing Systems (NeurIPS).Cambridge,MA:MIT Press,2014:2672-2680.
[63]XIAO C W,LI B,ZHU J Y,et al.Generating adversarial examples with adversarial networks[C]//Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI).San Francisco,CA:Morgan Kaufmann,2018:3905-3911.
[64]JANDIAL S,MANGLA P,VARSHNEY S,et al.AdvGAN++:Harnessing latent layers for adversary generation[C]//Proceedings of the 17th IEEE International Conference on Computer Vision (ICCV).Piscataway,NJ:IEEE,2019:2045-2048.
[65]ZHAO Z L,DUA D,SINGH S.Generating natural adversarial examples[C]//Proceedings of the 6th International Conference on Learning Representations (ICLR).La Jolla,CA:ICLR,2018.
[66]LIU X Q,HSIEH C.Rob-GAN:Generator,discriminator,and adversarial attacker[C]//Proceedings of the 32nd IEEE Conference on Computer Vision and Pattern Recognition (CVPR).Piscataway,NJ:IEEE,2019:11234-11243.
[67]CHENG S Y,DONG Y P,PANG T Y,et al.Improving black-box adversarial attacks with a transfer-based prior[C]//Proceedings of the 33rd Annual Conference on Neural Information Processing Systems (NeurIPS).Cambridge,MA:MIT Press,2019:10932-10942.
[68]SUYA F,CHI J F,EVANS D,et al.Hybrid batch attacks:Finding black-box adversarial examples with limited queries[C]//Proceedings of the 29th USENIX Security Symposium (USENIX Security).Berkeley,CA:USENIX Association,2020:1327-1344.
[69]CO K T,MUÑOZ-GONZÁLEZ L,MAUPEOU S D,et al.Procedural noise adversarial examples for black-box attacks on deep convolutional networks[C]//Proceedings of the 26th ACM Conference on Computer and Communications Security (CCS).New York:ACM,2019:275-289.
[70]SNOEK J,LAROCHELLE H,ADAMS R P.Practical bayesian optimization of machine learning algorithms[C]//Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NeurIPS).Cambridge,MA:MIT Press,2012:2960-2968.
[71]PAPERNOT N,MCDANIEL P D,WU X,et al.Distillation as a defense to adversarial perturbations against deep neural networks[C]//Proceedings of the 37th IEEE Symposium on Security and Privacy (S&P).Piscataway,NJ:IEEE,2016:582-597.
[72]HINTON G E,VINYALS O,DEAN J.Distilling the knowledge in a neural network [J].arXiv:1503.02531,2015.
[73]PAPERNOT N,MCDANIEL P D.Extending defensive distillation [J].arXiv:1705.05264,2017.
[74]DZIUGAITE G K,GHAHRAMANI Z,ROY D M.A study of the effect of jpg compression on adversarial images [J].arXiv:1608.00853,2016.
[75]GUO C,RANA M,CISSÉ M,et al.Countering adversarial images using input transformations[C]//Proceedings of the 6th International Conference on Learning Representations (ICLR).La Jolla,CA:ICLR,2018.
[76]GU S X,RIGAZIO L.Towards deep neural network architectures robust to adversarial examples[C]//Proceedings of the 3rd International Conference on Learning Representations (ICLR).La Jolla,CA:ICLR,2015.
[77]CHEN M M,WEINBERGER K Q,SHA F,et al.Marginalized denoising auto-encoders for nonlinear representations[C]//Proceedings of the 31st International Conference on Machine Learning (ICML).New York:ACM,2014:1476-1484.
[78]OSADCHY M,HERNANDEZ-CASTRO J,GIBSON J S,et al.No bot expects the DeepCAPTCHA! Introducing immutable adversarial examples,with applications to CAPTCHA generation [J].IEEE Transactions on Information Forensics and Security,2017,12(11):2640-2653.
[79]LIAO F Z,LIANG M,DONG Y P,et al.Defense against adversarial attacks using high-level representation guided denoiser[C]//Proceedings of the 31st IEEE Conference on Computer Vision and Pattern Recognition (CVPR).Piscataway,NJ:IEEE,2018:1778-1787.
[80]PRAKASH A,MORAN N,GARBER S,et al.Deflecting adversarial attacks with pixel deflection[C]//Proceedings of the 31st IEEE Conference on Computer Vision and Pattern Recognition (CVPR).Piscataway,NJ:IEEE,2018:8571-8580.
[81]XIE C H,WU Y X,MAATEN L,et al.Feature denoising for improving adversarial robustness[C]//Proceedings of the 32nd IEEE Conference on Computer Vision and Pattern Recognition (CVPR).Piscataway,NJ:IEEE,2019:501-509.
[82]XU W L,EVANS D,QI Y J.Feature squeezing:Detecting adversarial examples in deep neural networks[C]//Proceedings of the 25th Network and Distributed System Security Symposium (NDSS).Reston,VA:ISOC,2018.
[83]TIAN S X,YANG G L,CAI Y.Detecting adversarial examples through image transformation[C]//Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI).Menlo Park,CA:AAAI,2018:4139-4146.
[84]PANG T Y,DU C,DONG Y P,et al.Towards robust detection of adversarial examples[C]//Proceedings of the 32nd Annual Conference on Neural Information Processing Systems (NeurIPS).Cambridge,MA:MIT Press,2018:4584-4594.
[85]YANG P,CHEN J B,HSIEH C J,et al.ML-LOO:Detecting adversarial examples with feature attribution[C]//Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI).Menlo Park,CA:AAAI,2020:6639-6647.
[86]ZHENG Z H,HONG P Y.Robust detection of adversarial attacks by modeling the intrinsic properties of deep neural networks[C]//Proceedings of the 32nd Annual Conference on Neural Information Processing Systems (NeurIPS).Cambridge,MA:MIT Press,2018:7924-7933.
[87]MA S Q,LIU Y Q,TAO G H,et al.NIC:Detecting adversarial samples with neural network invariant checking[C]//Proceedings of the 26th Network and Distributed System Security Symposium (NDSS).Reston,VA:ISOC,2019.
[88]CINTAS C,SPEAKMAN S,AKINWANDE V,et al.Detecting adversarial attacks via subset scanning of autoencoder activations and reconstruction error[C]//Proceedings of the 29th International Joint Conference on Artificial Intelligence (IJCAI).San Francisco,CA:Morgan Kaufmann,2020:876-882.
[89]MCFOWLAND E,SPEAKMAN S,NEILL D B.Fast generalized subset scan for anomalous pattern detection [J].Journal of Machine Learning Research,2013,14(1):1533-1561.
[90]TRAMÈR F,KURAKIN A,PAPERNOT N,et al.Ensemble adversarial training:attacks and defenses[C]//Proceedings of the 6th International Conference on Learning Representations (ICLR).La Jolla,CA:ICLR,2018.
[91]SHAFAHI A,NAJIBI M,GHIASI A,et al.Adversarial training for free[C]//Proceedings of the 33rd Annual Conference on Neural Information Processing Systems (NeurIPS).Cambridge,MA:MIT Press,2019:3353-3364.
[92]ZHU C,CHENG Y,GAN Z,et al.FreeLB:Enhanced adversarial training for natural language understanding[C]//Proceedings of the 8th International Conference on Learning Representations (ICLR).La Jolla,CA:ICLR,2020.
[93]ZHANG D H,ZHANG T Y,LU Y P,et al.You only propagate once:accelerating adversarial training via maximal principle[C]//Proceedings of the 33rd Annual Conference on Neural Information Processing Systems (NeurIPS).Cambridge,MA:MIT Press,2019:227-238.
[94]TSIPRAS D,SANTURKAR S,ENGSTROM L,et al.Robustness may be at odds with accuracy[C]//Proceedings of the 7th International Conference on Learning Representations (ICLR).La Jolla,CA:ICLR,2019.
[95]ZHANG H Y,YU Y D,JIAO J T,et al.Theoretically principled trade-off between robustness and accuracy[C]//Proceedings of the 36th International Conference on Machine Learning (ICML).New York:ACM,2019:7472-7482.
[96]WANG Y S,ZOU D F,YI J F,et al.Improving adversarial robustness requires revisiting misclassified examples[C]//Proceedings of the 8th International Conference on Learning Representations (ICLR).La Jolla,CA:ICLR,2020.
[97]MAO C Z,ZHONG Z Y,YANG J F,et al.Metric learning for adversarial robustness[C]//Proceedings of the 33rd Annual Conference on Neural Information Processing Systems (NeurIPS).Cambridge,MA:MIT Press,2019:478-489.
[98]LI P C,YI J F,ZHOU B W,et al.Improving the robustness of deep neural networks via adversarial training with triplet loss[C]//Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI).San Francisco,CA:Morgan Kaufmann,2019:2909-2915.
[99]LIU C H,JÁJÁ J.Feature prioritization and regularization improve standard accuracy and adversarial robustness[C]//Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI).San Francisco,CA:Morgan Kaufmann,2019:2994-3000.
[100]WANG H T,CHEN T L,GUI S P,et al.Once-for-all adversarial training:In-situ tradeoff between robustness and accuracy for free[C]//Proceedings of the 34th Annual Conference on Neural Information Processing Systems (NeurIPS).Cambridge,MA:MIT Press,2020.
[101]SHARIF M,BHAGAVATULA S,BAUER L,et al.Accessorize to a crime:Real and stealthy attacks on state-of-the-art face recognition[C]//Proceedings of the 23rd ACM Conference on Computer and Communications Security (CCS).New York:ACM,2016:1528-1540.
[102]XU K,ZHANG G,LIU S,et al.Adversarial t-shirt! Evading person detectors in a physical world[C]//Proceedings of the 16th European Conference on Computer Vision (ECCV).Cham:Springer,2020:665-681.
[103]EYKHOLT K,EVTIMOV I,FERNANDES E,et al.Robust physical-world attacks on deep learning visual classification[C]//Proceedings of the 31st IEEE Conference on Computer Vision and Pattern Recognition (CVPR).Piscataway,NJ:IEEE,2018:1625-1634.
[104]BROWN T B,MANÉ D,ROY A,et al.Adversarial patch [J].arXiv:1712.09665,2017.
[105]LUO B,LIU Y N,WEI L X,et al.Towards imperceptible and robust adversarial example attacks against neural networks[C]//Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI).Menlo Park,CA:AAAI,2018:1652-1659.
[106]LIU A S,LIU X L,FAN J X,et al.Perceptual-sensitive GAN for generating adversarial patches[C]//Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI).Menlo Park,CA:AAAI,2019:1028-1035.
[107]JAN S T K,MESSOU J,LIN Y C,et al.Connecting the digital and physical world:Improving the robustness of adversarial attacks[C]//Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI).Menlo Park,CA:AAAI,2019:962-969.
[108]DUAN R J,MA X J,WANG Y S,et al.Adversarial camouflage:Hiding physical-world attacks with natural styles[C]//Proceedings of the 33rd IEEE Conference on Computer Vision and Pattern Recognition (CVPR).Piscataway,NJ:IEEE,2020:997-1005.
[109]ZHOU Z,TANG D,WANG X,et al.Invisible mask:Practical attacks on face recognition with infrared[J].arXiv:1803.04683,2018.
[110]SHEN M,LIAO Z,ZHU L,et al.VLA:A practical visible light-based attack on face recognition systems in physical world[J].Proceedings of the ACM on Interactive,Mobile,Wearable and Ubiquitous Technologies,2019,3(3):1-19.
[111]DUAN R,MAO X,QIN A K,et al.Adversarial laser beam:Effective physical-world attack to DNNs in a blink[C]//Proceedings of the 34th IEEE Conference on Computer Vision and Pattern Recognition (CVPR).Piscataway,NJ:IEEE,2021:16062-16071.
[112]SAYLES A,HOODA A,GUPTA M,et al.Invisible perturbations:Physical adversarial examples exploiting the rolling shutter effect[C]//Proceedings of the 34th IEEE Conference on Computer Vision and Pattern Recognition (CVPR).Piscataway,NJ:IEEE,2021:14666-14675.
[113]NGUYEN D L,ARORA S S,WU Y,et al.Adversarial light projection attacks on face recognition systems:A feasibility study[C]//Proceedings of the 33rd IEEE Conference on Computer Vision and Pattern Recognition Workshops.Piscataway,NJ:IEEE,2020:814-815.
[114]LOVISOTTO G,TURNER H,SLUGANOVIC I,et al.SLAP:Improving physical adversarial examples with short-lived adversarial perturbations[C]//Proceedings of the 30th USENIX Security Symposium (USENIX Security).Berkeley,CA:USENIX Association,2021.
[115]SHI C H,JI S L,LIU Q J,et al.Text captcha is dead? A large scale deployment and empirical study[C]//Proceedings of the 27th ACM Conference on Computer and Communications Security (CCS).New York:ACM,2020:1391-1406.