Computer Science ›› 2023, Vol. 50 ›› Issue (6A): 220400029-10. doi: 10.11896/jsjkx.220400029

• Artificial Intelligence •

Few-shot Learning Method Based on Multi-graph Feature Aggregation

ZENG Wu1, MAO Guojun1,2   

  1. School of Computer Science and Mathematics, Fujian University of Technology, Fuzhou 350118, China;
    2. Fujian Provincial Key Laboratory of Big Data Mining and Applications, Fuzhou 350118, China
  • Online: 2023-06-10  Published: 2023-06-12
  • Corresponding author: MAO Guojun (19662092@fjut.edu.cn)
  • About author: ZENG Wu (2201905122@smail.fjut.edu.cn), born in 1997, postgraduate. His main research interests include data augmentation and few-shot learning. MAO Guojun, born in 1966, Ph.D, professor, is a member of China Computer Federation. His main research interests include data mining, big data and distributed computing.
  • Supported by:
    National Natural Science Foundation of China (61773415) and National Key Research and Development Program of China (2019YFD0900805).

Abstract: Few-shot learning aims to learn the characteristics of each class from only a few samples. Because of this low-data problem, i.e., the small number of samples, three questions become key: how to extract the important feature information in an image more accurately, how to learn the features of the target object in an image better, and how to judge the similarity between unlabeled samples and the support-set classes more precisely. This paper proposes MGFAN, a few-shot learning method based on multi-graph feature aggregation. Specifically, the model expands the original image through multiple data augmentation methods, and then uses a self-attention module to capture the important feature information across the original image and its augmented views, so as to obtain a more accurate feature vector for the image. Secondly, a self-supervised task of predicting which augmentation was applied to an image is introduced into the model as an auxiliary task to promote its feature learning ability. Finally, multiple distance functions are adopted to compute the similarity between samples more precisely. Experiments on three standard datasets, miniImageNet, tieredImageNet and Stanford Dogs, under the 5-way 1-shot and 5-way 5-shot settings show that MGFAN significantly improves classification performance.
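
The self-attention aggregation step described above can be pictured with a short PyTorch sketch. This is a minimal illustration written for this page, not the authors' implementation: the stand-in backbone, the feature dimension (64), the number of views (4) and the mean-pooling fusion are all assumptions.

import torch
import torch.nn as nn

class MultiViewAggregator(nn.Module):
    """Fuses an image's embedding with those of its augmented views via
    self-attention, then pools them into a single feature vector."""
    def __init__(self, feat_dim: int = 64, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, n_heads, batch_first=True)

    def forward(self, view_feats: torch.Tensor) -> torch.Tensor:
        # view_feats: (batch, n_views, feat_dim); view 0 is the original image,
        # the remaining views are its augmented copies.
        attended, _ = self.attn(view_feats, view_feats, view_feats)
        return attended.mean(dim=1)  # one embedding per image

# Usage: encode every view with a shared backbone, then aggregate.
encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(64))  # stand-in backbone
images = torch.randn(8, 4, 3, 84, 84)  # 8 images, 4 views each (1 original + 3 augmented)
b, v = images.shape[:2]
feats = encoder(images.view(b * v, -1)).view(b, v, -1)    # (8, 4, 64)
embeddings = MultiViewAggregator(feat_dim=64)(feats)      # (8, 64)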
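
The abstract's other two ingredients, the augmentation-prediction auxiliary task and the multi-distance similarity, can be sketched in the same hedged spirit. The four augmentation classes, the cosine/Euclidean pair of distance functions, and their equal weighting below are illustrative choices, not details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AugmentationPredictor(nn.Module):
    """Auxiliary self-supervised head: classify which augmentation produced a view."""
    def __init__(self, feat_dim: int = 64, n_augs: int = 4):
        super().__init__()
        self.head = nn.Linear(feat_dim, n_augs)

    def loss(self, view_feats: torch.Tensor, aug_labels: torch.Tensor) -> torch.Tensor:
        # view_feats: (n, feat_dim); aug_labels: (n,) index of the augmentation applied.
        return F.cross_entropy(self.head(view_feats), aug_labels)

def multi_distance_logits(queries: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
    """Score query embeddings (q, d) against class prototypes (c, d) with two
    distance functions: cosine similarity and negative Euclidean distance."""
    cos = F.normalize(queries, dim=-1) @ F.normalize(prototypes, dim=-1).T  # (q, c)
    eucl = torch.cdist(queries, prototypes)                                 # (q, c)
    return cos - eucl  # higher = more similar; the 1:1 weighting is a design choice

# Training would minimize: episode classification loss + auxiliary self-supervised loss.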

Key words: Few-shot learning, Deep learning, Self-supervised learning, Feature aggregation, Data augmentation, Self-attention

CLC number: TP391