Computer Science ›› 2023, Vol. 50 ›› Issue (6A): 220500104-5. doi: 10.11896/jsjkx.220500104

• Image Processing & Multimedia Technology •

Few-shot Image Classification Network Based on Graph Neural Network with Residual and Self-attention Mechanisms

LI Fan1, JIA Dongli1, YAO Yumin2, TU Jun1   

  1. School of Information and Electrical Engineering, Hebei University of Engineering, Handan, Hebei 056000, China;
  2. Hunan Technology Innovation Center of Blockchain, Changsha 410000, China
  • Online: 2023-06-10  Published: 2023-06-12
  • About author: LI Fan, born in 1998, postgraduate. His main research interest is intelligent information processing. JIA Dongli, born in 1972, Ph.D, associate professor, graduate supervisor. His main research interest is intelligent information processing.
  • Supported by:
    Science and Technology Innovation Leading Plan Project of High Tech Industry of Hunan Provincial Department of Science and Technology(2020GK2005).

Abstract: Few-shot learning is proposed to address two problems in deep learning: the small size of the datasets available for model training and the high cost of data annotation. Image classification is an important task in this field, and annotated image data is often insufficient. Among the many solutions researchers have proposed for this shortage, one is to classify few-shot images with graph neural networks (GNNs). To make GNNs more effective for few-shot learning, and to address the instability of the graph convolution operation, a residual graph convolutional network (Res-GNN) is designed to improve the stability of the GNN. Building on Res-GNN, a self-attention mechanism is further incorporated (ResAT-GNN) to mine the relationships between nodes more deeply, improving the efficiency of information propagation and the classification accuracy of the model. Experiments show that the improved Res-GNN trains more efficiently, and its classification accuracy is 1.1% higher than that of the GNN model on the 5-way 1-shot task and 1.42% higher on the 5-way 5-shot task; the classification accuracy of ResAT-GNN is 1.62% higher than that of the GNN model on the 5-way 1-shot task.
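As a rough illustration of the two ideas the abstract combines, a residual graph-convolution step and a self-attention-derived node adjacency can be sketched in a few lines of PyTorch. All module names, layer sizes, and the exact attention form below are illustrative assumptions, not the authors' Res-GNN/ResAT-GNN definitions.

```python
# Minimal sketch of (1) a graph-convolution block with a residual (skip)
# connection and (2) a self-attention step that produces a soft adjacency
# over the episode's nodes. Shapes and layer sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualGraphConv(nn.Module):
    """One graph-convolution step h = f(A x W) with a residual connection."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, x, adj):
        # x: (batch, nodes, dim); adj: (batch, nodes, nodes), row-normalized
        h = self.linear(torch.bmm(adj, x))
        return x + F.relu(h)  # skip connection stabilizes stacked layers

class SelfAttentionAdjacency(nn.Module):
    """Builds a soft adjacency matrix via scaled dot-product self-attention."""
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)

    def forward(self, x):
        q, k = self.query(x), self.key(x)
        scores = torch.bmm(q, k.transpose(1, 2)) / (x.size(-1) ** 0.5)
        return torch.softmax(scores, dim=-1)  # (batch, nodes, nodes)

# Usage on a hypothetical 5-way 1-shot episode: 5 support + 1 query node.
x = torch.randn(4, 6, 64)          # (episodes, nodes, feature dim)
attn = SelfAttentionAdjacency(64)
conv = ResidualGraphConv(64)
out = conv(x, attn(x))             # attention-weighted residual GCN step
print(out.shape)                   # torch.Size([4, 6, 64])
```

In this reading, the attention scores replace a fixed, hand-built adjacency, so edge weights between support and query nodes are learned from the node features themselves, which is one plausible way to "deeply mine the relationship between nodes" as the abstract describes.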

Key words: Few-shot learning, Image classification, Graph neural network, Residual network, Self-attention mechanism

CLC Number: TP391