计算机科学 (Computer Science), 2024, Vol. 51, Issue (6A): 230800136-7. doi: 10.11896/jsjkx.230800136
史松昊, 王晓丹, 杨春晓, 王艺菲
SHI Songhao, WANG Xiaodan, YANG Chunxiao, WANG Yifei
Abstract: Because SAR images are difficult to acquire, few samples are available for research, and SAR target recognition under limited-sample conditions is widely recognized as a challenging problem. With the development of deep learning in computer vision, a variety of few-shot image classification methods have emerged, so a cross-domain few-shot learning paradigm is adopted here to address few-shot SAR image target recognition. Specifically, feature extractors for different domains are first trained on multiple source domains, and a single universal feature extractor is then obtained from them via knowledge distillation. Centered kernel alignment (CKA) is used in this step: it implicitly maps the extracted features into a higher-dimensional space, which better captures the nonlinear similarity between the original features. The universal feature extractor is then used to extract target-domain image features, and finally a prototypical network predicts the class of each sample. Experiments show that the method attains 88.61% accuracy while reducing the number of model parameters, offering a new approach to few-shot SAR image target recognition.
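The following minimal PyTorch sketch illustrates the two components the abstract describes: a kernel-CKA similarity used as a distillation objective between the domain-specific teachers and the universal student extractor, and prototypical-network class prediction. This is an illustration under stated assumptions, not the authors' released code; all identifiers (rbf_gram, kernel_cka, cka_distill_loss, proto_predict) and the RBF bandwidth heuristic are assumptions of this sketch.

import torch

def rbf_gram(x, sigma=None):
    # Gram matrix of an RBF kernel: implicitly maps features into a
    # higher-dimensional space so CKA can compare nonlinear similarity.
    sq = torch.cdist(x, x).pow(2)                  # pairwise squared distances
    if sigma is None:                              # median heuristic (an assumption)
        sigma = sq[sq > 0].median().sqrt().item()
    return torch.exp(-sq / (2 * sigma ** 2))

def kernel_cka(x, y):
    # Centered kernel alignment between two feature batches of shape (n, d):
    # CKA(K, L) = HSIC(K, L) / sqrt(HSIC(K, K) * HSIC(L, L)).
    n = x.shape[0]
    h = torch.eye(n, device=x.device) - 1.0 / n    # centering matrix H = I - 11^T/n
    kx = h @ rbf_gram(x) @ h
    ky = h @ rbf_gram(y) @ h
    hsic_xy = (kx * ky).sum()                      # tr(Kx Ky); both are symmetric
    return hsic_xy / (kx.norm() * ky.norm() + 1e-8)

def cka_distill_loss(student_feats, teacher_feats_per_domain):
    # Distill several domain-specific teachers into one universal extractor by
    # maximizing the CKA between student features and each teacher's features.
    return sum(1.0 - kernel_cka(student_feats, t) for t in teacher_feats_per_domain)

def proto_predict(support, support_labels, query, n_way):
    # Prototypical-network step: a class prototype is the mean embedding of its
    # support samples; each query is assigned to the nearest prototype.
    protos = torch.stack([support[support_labels == c].mean(0) for c in range(n_way)])
    return torch.cdist(query, protos).argmin(dim=1)

In this reading of the pipeline, cka_distill_loss would be minimized over batches drawn from the source domains during distillation, while at test time only the universal extractor and proto_predict run on target-domain SAR features; a single student replacing the per-domain teachers is consistent with the abstract's claim of a reduced parameter count.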