Computer Science ›› 2021, Vol. 48 ›› Issue (11A): 540-546. doi: 10.11896/jsjkx.201200077

• Information Security •

Network Anomaly Detection Based on Deep Learning

YANG Yue-lin, BI Zong-ze   

  1. School of Software Engineering, University of Science and Technology of China, Hefei 230022, China
  • Online: 2021-11-10  Published: 2021-11-12
  • About author: YANG Yue-lin, born in 1994, master, engineer. His main research interests include network security and deep learning.

Abstract: This paper proposes a novel, general end-to-end convolutional Transformer network for modeling long-range spatial and temporal dependencies in network anomaly detection. The core ingredient of the proposed model is a feature embedding module that replaces the spatial convolutions in the final three bottleneck blocks of a ResNet with the proposed global self-attention, together with multi-head convolutional self-attention layers in the encoder and decoder that learn the sequential dependencies of network traffic data. The encoder, built upon multi-head convolutional self-attention layers, maps the input sequence to a sequence of feature maps; a second deep network, also incorporating multi-head convolutional self-attention layers, then decodes the target synthesized feature map from that feature-map sequence. We also present a class-rebalancing self-training framework that alleviates the long-tail effect caused by imbalanced data distributions through semi-supervised learning, motivated by the observation that existing SSL algorithms produce high-precision pseudo-labels on minority classes. The algorithm iteratively retrains a baseline SSL model with a labeled set expanded by pseudo-labeled samples drawn from an unlabeled set, where samples pseudo-labeled as minority classes are selected more frequently according to an estimated class distribution. The CIC-IDS-2017 dataset is used for experimental evaluation. The experiments show that the accuracy of the proposed model is higher than that of other deep learning models; it improves detection performance while reducing detection time, and it has practical application value in network traffic anomaly detection.
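The sketch below is not the authors' code; it only illustrates, under assumed tensor shapes and hyperparameters (channel widths, head count, no positional encoding), the architectural idea described in the abstract: a ResNet-style bottleneck whose 3x3 spatial convolution is replaced by global multi-head self-attention over all spatial positions of the feature map.

```python
# Minimal sketch (assumptions: 4 attention heads, 1/4 channel reduction, no positional encoding).
import torch
import torch.nn as nn


class MHSABottleneck(nn.Module):
    """ResNet-style bottleneck whose spatial 3x3 convolution is replaced by global self-attention."""

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        mid = channels // 4
        self.reduce = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
        )
        # Global self-attention over all H*W positions takes the place of the 3x3 spatial conv.
        self.attn = nn.MultiheadAttention(embed_dim=mid, num_heads=heads, batch_first=True)
        self.expand = nn.Sequential(
            nn.Conv2d(mid, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, _, h, w = x.shape
        y = self.reduce(x)                      # (B, mid, H, W)
        seq = y.flatten(2).transpose(1, 2)      # (B, H*W, mid): one token per spatial position
        seq, _ = self.attn(seq, seq, seq)       # every position attends to every other position
        y = seq.transpose(1, 2).reshape(b, -1, h, w)
        return self.act(x + self.expand(y))     # residual connection, as in a standard ResNet block


# Example: a 64-channel feature map produced by an earlier ResNet stage.
feats = torch.randn(2, 64, 8, 8)
print(MHSABottleneck(channels=64)(feats).shape)  # torch.Size([2, 64, 8, 8])
```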
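A second hedged sketch shows the class-rebalancing pseudo-label selection step in the spirit described above: high-confidence pseudo-labeled samples are added back to the labeled set, and samples predicted as rare classes are kept at a higher rate than those predicted as frequent classes. The confidence threshold and the rate exponent `alpha` are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of class-rebalanced pseudo-label selection (threshold and alpha are assumed values).
import numpy as np


def rebalanced_pseudo_label_selection(pseudo_labels, confidences, class_counts,
                                      threshold=0.95, alpha=2.0, rng=None):
    """Return indices of unlabeled samples to add back to the labeled set."""
    if rng is None:
        rng = np.random.default_rng(0)
    counts = np.asarray(class_counts, dtype=float)
    # Keep rate per class: 1.0 for the rarest class, lower for frequent classes,
    # so pseudo-labels from minority classes are selected more often.
    keep_rate = (counts.min() / counts) ** (1.0 / alpha)
    selected = []
    for idx, (label, conf) in enumerate(zip(pseudo_labels, confidences)):
        if conf >= threshold and rng.random() < keep_rate[label]:
            selected.append(idx)
    return selected


# Toy example: class 0 is the majority (benign) class, class 2 is a rare attack class.
labels = np.array([0, 0, 0, 1, 2, 2])
confs = np.array([0.99, 0.97, 0.96, 0.98, 0.99, 0.96])
print(rebalanced_pseudo_label_selection(labels, confs, class_counts=[900, 80, 20]))
```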

Key words: Anomaly detection, Attention, Class-rebalancing, Deep learning, ResNet

CLC Number: TP183