Computer Science, 2022, 49(5): 1-9. doi: 10.11896/jsjkx.210500128

• Computer Graphics & Multimedia •

Survey on Few-shot Learning Algorithms for Image Classification

PENG Yun-cong1,3, QIN Xiao-lin1,2,3, ZHANG Li-ge1,3, GU Yong-xiang1,3   

  1 Chengdu Institute of Computer Applications, Chinese Academy of Sciences, Chengdu 610041, China
    2 Nanchang Institute of Technology, Nanchang 330044, China
    3 School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing 100049, China
  • Received: 2021-05-18  Revised: 2021-10-22  Online: 2022-05-15  Published: 2022-05-06
  • About author: PENG Yun-cong, born in 1998, postgraduate. His main research interests include few-shot learning and the theory of statistical machine learning.
    QIN Xiao-lin, born in 1980, Ph.D, professor, Ph.D supervisor. His main research interests include automatic reasoning and swarm intelligence.
  • Supported by:
    National Natural Science Foundation of China (61402537), Sichuan Science and Technology Program (2019ZDZX0005, 2019ZDZX0006, 2020YFQ0056, 2021YFG0034), Talents by Sichuan Provincial Party Committee Organization Department and National Academy of Science Alliance Collaborative Program (Chengdu Branch of Chinese Academy of Sciences-Chongqing Academy of Science and Technology).

Abstract: At present, artificial intelligence algorithms represented by deep learning have achieved state-of-the-art results and been successfully applied in fields such as image classification, biometric recognition and medical-assisted diagnosis, by virtue of ultra-large-scale data sets and powerful computing resources. In many practical settings, however, it is impossible to collect a large number of samples, or the cost of collecting them is prohibitive. Studying learning algorithms under the small-sample condition is therefore a core driving force for advancing machine intelligence, and it has become a current research hot-spot. Few-shot learning refers to algorithms that learn and solve problems under limited supervision information. This paper first explains, from the perspective of machine learning theory, why few-shot learning is difficult to generalize. Secondly, according to the design motivation of few-shot learning algorithms, existing algorithms are classified into three categories: representation learning, data expansion and learning strategy, and their advantages and disadvantages are analyzed. Thirdly, the commonly used few-shot learning evaluation methods and the performance of existing models on public data sets are summarized. Finally, the difficulties and future research trends of few-shot image classification are discussed to provide references for future research.
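The evaluation methods mentioned in the abstract conventionally follow the episodic N-way K-shot protocol. As a purely illustrative sketch (not taken from the surveyed paper), the following Python snippet shows how a single such episode can be sampled and scored with a nearest-prototype classifier over pre-computed feature vectors; the helper names, the toy feature data and the 5-way 1-shot configuration are all assumptions made for the example.

```python
# Minimal sketch of episodic N-way K-shot evaluation with a nearest-prototype
# classifier. All names (sample_episode, prototype_accuracy, ...) and the toy
# data are hypothetical; features would normally come from a trained embedding.
import numpy as np

rng = np.random.default_rng(0)
N_WAY, K_SHOT, N_QUERY, FEAT_DIM = 5, 1, 15, 64  # standard 5-way 1-shot setting

def sample_episode(features_by_class, n_way, k_shot, n_query):
    """Sample support/query features for one episode from novel classes."""
    classes = rng.choice(len(features_by_class), size=n_way, replace=False)
    support, query, query_labels = [], [], []
    for episode_label, c in enumerate(classes):
        idx = rng.permutation(len(features_by_class[c]))
        support.append(features_by_class[c][idx[:k_shot]])
        query.append(features_by_class[c][idx[k_shot:k_shot + n_query]])
        query_labels += [episode_label] * n_query
    return np.stack(support), np.concatenate(query), np.array(query_labels)

def prototype_accuracy(support, query, query_labels):
    """Classify each query by its nearest class prototype (mean of support)."""
    prototypes = support.mean(axis=1)                        # (n_way, feat_dim)
    dists = np.linalg.norm(query[:, None, :] - prototypes[None, :, :], axis=-1)
    return float((dists.argmin(axis=1) == query_labels).mean())

# Toy "novel classes": 20 classes, 30 feature vectors each, around class centers.
features_by_class = [rng.normal(loc=rng.normal(size=FEAT_DIM), scale=1.0,
                                size=(30, FEAT_DIM)) for _ in range(20)]

accs = []
for _ in range(600):  # accuracy is averaged over many randomly sampled episodes
    s, q, y = sample_episode(features_by_class, N_WAY, K_SHOT, N_QUERY)
    accs.append(prototype_accuracy(s, q, y))
print(f"5-way 1-shot accuracy: {np.mean(accs):.3f} "
      f"± {1.96 * np.std(accs) / np.sqrt(len(accs)):.3f}")
```

Reported few-shot accuracies are typically the mean over several hundred such random episodes together with a 95% confidence interval, which the last lines of the sketch imitate.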

Key words: Data expansion, Few-shot learning, Image classification, Representation learning, Transfer learning

CLC Number: TP181