Computer Science ›› 2023, Vol. 50 ›› Issue (6A): 220500104-5. DOI: 10.11896/jsjkx.220500104

• Image Processing & Multimedia Technology •


Graph Convolutional Few-shot Image Classification Network Combining Residual and Self-attention Mechanisms

LI Fan1, JIA Dongli1, YAO Yumin2, TU Jun1   

  1 School of Information and Electrical Engineering, Hebei University of Engineering, Handan, Hebei 056000, China;
    2 Hunan Technology Innovation Center of Blockchain, Changsha 410000, China
  • Online: 2023-06-10  Published: 2023-06-12
  • Corresponding author: JIA Dongli (jwdsli@163.com)
  • About author: LI Fan, born in 1998, postgraduate (lifan539@163.com). His main research interest is intelligent information processing. JIA Dongli, born in 1972, Ph.D, associate professor, graduate supervisor. His main research interest is intelligent information processing.
  • Supported by:
    Science and Technology Innovation Leading Plan for High-tech Industry of Hunan Provincial Department of Science and Technology (2020GK2005).


Abstract: Few-shot learning was proposed to address settings in deep learning where the dataset available for training a model is small or data annotation is expensive. Image classification, an important topic in deep learning research, also suffers from insufficient training data, and researchers have proposed many remedies for it; classifying few-shot images with a graph neural network is one of them. To make better use of graph neural networks in few-shot learning, and noting that the convolution operation of a graph neural network is easily disturbed by incidental factors and therefore unstable, this paper improves the graph neural network with a residual network and designs a residual graph convolutional network to increase its stability. On top of the residual graph convolutional network, a residual graph self-attention mechanism is designed by incorporating self-attention, which mines the relations between nodes more deeply and improves the efficiency of information propagation, thereby raising the classification accuracy of the model. In experiments, the improved residual graph convolutional network (Res-GNN) trains more efficiently, and its classification accuracy exceeds the GNN model by 1.1% on the 5way-1shot task and by 1.42% on the 5way-5shot task. On the 5way-1shot task, the residual graph self-attention network (ResAT-GNN) outperforms the GNN model by 1.62%.
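To make the two building blocks named in the abstract concrete, the following is a minimal sketch of a residual graph convolution and a node-level self-attention layer in PyTorch. It follows the dense, episode-graph formulation of Garcia and Bruna's few-shot GNN [8]; the module names, layer sizes, and all implementation details are illustrative assumptions, not the authors' released code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualGraphConv(nn.Module):
    # One graph-convolution block with a skip connection, i.e.
    # out = act(shortcut(x) + adj @ x @ W): the aggregation result is added
    # back to the (projected) input, so a disturbed convolution step cannot
    # drag node features arbitrarily far from their previous values.
    def __init__(self, f_in, f_out):
        super().__init__()
        self.linear = nn.Linear(f_in, f_out)
        # Project the input when dimensions differ so the sum is well-defined.
        self.shortcut = nn.Linear(f_in, f_out) if f_in != f_out else nn.Identity()

    def forward(self, x, adj):
        # x: (B, N, f_in) node features; adj: (B, N, N) row-normalized adjacency
        h = self.linear(torch.bmm(adj, x))   # neighbor aggregation + transform
        return F.leaky_relu(self.shortcut(x) + h)

class NodeSelfAttention(nn.Module):
    # Scaled dot-product self-attention across the nodes of the episode
    # graph, in residual form, to re-weight pairwise node relations.
    def __init__(self, f_dim):
        super().__init__()
        self.q = nn.Linear(f_dim, f_dim)
        self.k = nn.Linear(f_dim, f_dim)
        self.v = nn.Linear(f_dim, f_dim)
        self.scale = f_dim ** 0.5

    def forward(self, x):
        # attn: (B, N, N) attention weights between every pair of nodes
        attn = torch.softmax(self.q(x) @ self.k(x).transpose(1, 2) / self.scale, dim=-1)
        return x + attn @ self.v(x)          # residual form, (B, N, f_dim)

Stacking a few ResidualGraphConv blocks, interleaved with NodeSelfAttention, and reading out the query node's class scores would reproduce the general shape of the pipeline the abstract describes.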

Key words: Few-shot learning, Image classification, Graph neural network, Residual network, Self-attention mechanism
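The 5way-1shot and 5way-5shot figures quoted in the abstract refer to the standard episodic protocol: each task presents 5 classes with 1 (or 5) labeled support examples per class, and the model must classify held-out query images. A minimal episode sampler is sketched below; the images_by_class mapping is a hypothetical interface assumed for illustration, not part of the paper.

import random

def sample_episode(images_by_class, n_way=5, k_shot=1, n_query=1):
    # images_by_class: dict mapping class label -> list of images
    # (a hypothetical interface assumed for illustration).
    classes = random.sample(sorted(images_by_class), n_way)
    support, query = [], []
    for episode_label, cls in enumerate(classes):
        imgs = random.sample(images_by_class[cls], k_shot + n_query)
        support += [(img, episode_label) for img in imgs[:k_shot]]  # labeled
        query += [(img, episode_label) for img in imgs[k_shot:]]    # to predict
    return support, query

In the Garcia-Bruna formulation, a 5way-1shot episode graph classified one query at a time has 5 × 1 + 1 = 6 nodes; this is the setting in which the reported gains over the baseline GNN are measured.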

CLC number: TP391
[1]SHI Y X,AN K,LI Y S.Few-shot Communication Jamming Recognition Technology Based on Data Augmentation[J].Radio Communications Technology,2022,48(1):25-31.
[2]ZHU K F,WANG G J,LIU Y J.Radar Target Recognition Algorithm Based on Data Augmentation and WACGAN with a Limited Training Data[J].Acta Electronica Sinica,2020,48(6):1124-1131.
[3]KOCH G,ZEMEL R,SALAKHUTDINOV R.Siamese neural networks for one-shot image recognition[C]//Proceedings of 32nd International Conference on Machine Learning.Lille,France:International Machine Learning Society,2015.
[4]VINYALS O,BLUNDELL C,LILLICRAP T,et al.Matching networks for one shot learning[C]//30th Conference on Neural Information Processing Systems(NIPS 2016).Barcelona,Spain:NIPS Foundation,2016:1-9.
[5]CAI Q,PAN Y,YAO T,et al.Memory Matching Networks for One-Shot Image Recognition[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.Salt Lake City,UT,USA:IEEE,2018:4080-4088.
[6]SNELL J,SWERSKY K,ZEMEL R S.Prototypical networks for few-shot learning[C]//31st Conference on Neural Information Processing Systems(NIPS 2017).Long Beach,CA,USA:NIPS Foundation,2017:1-11.
[7]SUNG F,YANG Y,ZHANG L,et al.Learning to compare:Relation network for few-shot learning[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.Salt Lake City,UT,USA:IEEE,2018:1199-1208.
[8]GARCIA V,BRUNA J.Few-shot learning with graph neural networks[C]//Proceedings of the International Conference on Learning Representations.Vancouver,BC,Canada,2018.
[9]KIM J,KIM T,KIM S,et al.Edge-Labeling Graph Neural Network for Few-Shot Learning[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition(CVPR).Long Beach,CA,USA:IEEE,2019:11-20.
[10]LIU Y,CHE X.Few-shot Image Classification Algorithm Based on Graph Network Optimization and Label Propagation[J].Signal Processing,2022,38(1):202-210.
[11]WANG X R,ZHANG H.Relation Network Based on Attention Mechanism and Graph Convolution for Few-Shot Learning[J].Computer Engineering and Applications,2021,57(19):164-170.
[12]YANG L,LI L,ZHANG Z,et al.DPGN:Distribution Propagation Graph Network for Few-Shot Learning[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition(CVPR).Seattle,WA,USA:IEEE,2020:13387-13396.
[13]LIU Z X,ZHU C J,HUANG J,et al.Image Super-resolution by Residual Attention Network with Multi-skip Connection[J].Computer Science,2021,48(11):258-267.
[14]YANG Q,ZHANG Y W,ZHU L,et al.Text Sentiment Analysis Based on Fusion of Attention Mechanism and BiGRU[J].Computer Science,2021,48(11):307-311.
[15]LIU H C,WANG L.Graph Classification Model Based on Capsule Deep Graph Convolutional Neural Network[J].Computer Science,2020,47(9):219-225.
[16]FINN C,ABBEEL P,LEVINE S.Model-agnostic meta-learning for fast adaptation of deep networks[C]//Proceedings of the 34th International Conference on Machine Learning.2017.