Computer Science ›› 2022, Vol. 49 ›› Issue (7): 164-169. doi: 10.11896/jsjkx.210600044

• Artificial Intelligence •

Robust Deep Neural Network Learning Based on Active Sampling

ZHOU Hui 1,2, SHI Hao-chen 1,2, TU Yao-feng 1,3, HUANG Sheng-jun 1,2

  1 College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
    2 MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing 211106, China
    3 State Key Laboratory of Mobile Network and Mobile Multimedia Technology, Shenzhen, Guangdong 518057, China
  • Received: 2021-06-04  Revised: 2021-10-19  Online: 2022-07-15  Published: 2022-07-12
  • About author: ZHOU Hui, born in 1997, master. Her main research interests include machine learning.
    HUANG Sheng-jun, born in 1987, professor. His main research interests include machine learning and data mining.
  • Supported by:
    Technological Innovation 2030 “New Generation Artificial Intelligence” Major Project (2020AAA0107000) and National Natural Science Foundation of China (62076128).

Abstract: Deep learning models have recently been applied to a wide range of real-world tasks, and improving the robustness of deep neural networks has become an important research direction in the machine learning field. Recent work shows that training a deep model with noise perturbations can significantly improve its robustness. However, such training requires a large set of precisely labeled examples, which are often expensive and difficult to collect in real-world scenarios. Active learning (AL) is a primary approach for reducing labeling cost: it progressively selects the most useful samples and queries their labels, with the goal of training an effective model with as few queries as possible. This paper proposes an active sampling-based neural network learning framework that aims to improve model robustness at low labeling cost. In this framework, a proposed inconsistency sampling strategy measures each unlabeled example's potential utility for improving model robustness by applying a series of perturbations to it; the examples with the largest inconsistency are then selected and used to train the deep model with noise perturbations. Experimental results on benchmark image classification datasets show that the inconsistency-based active sampling strategy effectively improves the robustness of deep neural network models at a lower sample labeling cost.
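
The sampling step described in the abstract can be made concrete with a short sketch. The following Python/PyTorch code is an illustrative assumption, not the authors' implementation: it scores each unlabeled example by how much the model's predicted class distributions disagree across a series of Gaussian-noise perturbations (measured here by the average KL divergence from the mean prediction), then selects the highest-scoring examples for labeling. The function names (inconsistency_scores, select_for_labeling), the perturbation set, and the choice of divergence are all hypothetical.

    # Illustrative sketch only: the perturbation set, divergence measure, and
    # all names here are assumptions, not the paper's actual implementation.
    import torch
    import torch.nn.functional as F

    def inconsistency_scores(model, unlabeled_x, noise_stds=(0.05, 0.1, 0.2)):
        """Score each example by how much the model's predictions disagree
        across a series of input perturbations (Gaussian noise here)."""
        model.eval()
        with torch.no_grad():
            probs = []
            for std in noise_stds:
                noisy = unlabeled_x + std * torch.randn_like(unlabeled_x)
                probs.append(F.softmax(model(noisy), dim=1))
            probs = torch.stack(probs)                 # (P, N, C)
            mean_p = probs.mean(dim=0, keepdim=True)   # (1, N, C)
            # Average KL(perturbed prediction || mean prediction) per example.
            log_ratio = probs.clamp_min(1e-12).log() - mean_p.clamp_min(1e-12).log()
            kl = (probs * log_ratio).sum(dim=2)        # (P, N)
            return kl.mean(dim=0)                      # (N,)

    def select_for_labeling(model, unlabeled_x, budget):
        """Return indices of the `budget` most inconsistent examples,
        i.e., those expected to be most useful for robust training."""
        scores = inconsistency_scores(model, unlabeled_x)
        return torch.topk(scores, k=budget).indices

In a full active learning loop, the selected examples would be sent to an oracle for labeling and then added to the labeled pool used for the perturbation-based robust training described above.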

Key words: Active learning, Deep learning, Inconsistency, Model robustness, Noise perturbations

CLC Number: TP181