Computer Science, 2023, Vol. 50, Issue (6A): 220500104-5. doi: 10.11896/jsjkx.220500104
LI Fan1, JIA Dongli1, YAO Yumin2, TU Jun1
Abstract: Few-shot learning was proposed to address the situations in deep learning where the dataset available for training a model is small or data annotation is prohibitively expensive. Image classification, an important research topic in deep learning, likewise suffers from insufficient training data. Researchers have proposed many solutions to this lack of training data for image classification models; using graph neural networks for few-shot image classification is one of them. To better exploit graph neural networks for few-shot learning, and because the convolution operation in a graph neural network is easily disturbed by chance factors and thus unstable, this work improves the graph neural network with residual connections and designs a residual graph convolutional network to raise its stability. Building on the residual graph convolutional network, a residual graph self-attention network is further designed by incorporating a self-attention mechanism, which mines the relationships between nodes more deeply and improves the efficiency of information propagation, thereby increasing the classification accuracy of the model. In experiments, the improved residual graph convolutional network trains more efficiently: its classification accuracy exceeds that of the GNN model by 1.1% on the 5-way 1-shot task and by 1.42% on the 5-way 5-shot task. On the 5-way 1-shot task, the residual graph self-attention network outperforms the GNN model by 1.62%.
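The two ideas in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names, dimensions, and the dot-product form of the attention scores are assumptions for illustration only. It shows (a) an episode adjacency matrix derived from self-attention over node embeddings, and (b) a graph-convolution step whose residual skip connection prevents a noisy aggregation from overwriting the input representation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_adjacency(h):
    """Self-attention-style edge weights: each row is the softmax of
    the dot-product similarities between one node and all nodes, so
    every row sums to 1 and acts as a normalised adjacency."""
    return softmax(h @ h.T, axis=1)

def residual_graph_conv(h, adj, w):
    """One graph-convolution step with a residual skip connection:
    the aggregated, transformed neighbour features are added back
    onto the input instead of replacing it."""
    return h + adj @ h @ w

# Toy episode: 5 nodes (e.g. a 5-way 1-shot support set), 4-dim features.
rng = np.random.default_rng(0)
h = rng.standard_normal((5, 4))
adj = attention_adjacency(h)              # attention-derived graph
out = residual_graph_conv(h, adj, 0.1 * np.eye(4))
print(out.shape)  # (5, 4)
```

Note the design property the residual form guarantees: with a zero weight matrix the layer reduces to the identity, so stacking layers cannot degrade the node features below the input representation.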