Computer Science, 2025, Vol. 52, Issue (3): 206-213. doi: 10.11896/jsjkx.240100166

• Database & Big Data & Data Science •


Class-incremental Source-free Domain Adaptation Based on Multi-prototype Replay and Alignment

TIAN Qing1,2,3, KANG Lulu1, ZHOU Liangyu1   

    1 School of Software,Nanjing University of Information Science and Technology,Nanjing 210044,China
    2 Wuxi Institute of Technology,Nanjing University of Information Science and Technology,Wuxi,Jiangsu 214000,China
    3 State Key Laboratory for Novel Software Technology,Nanjing University,Nanjing 210023,China
  • Received:2024-01-22 Revised:2024-04-12 Online:2025-03-15 Published:2025-03-07
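  • Corresponding author:TIAN Qing(tianqing@nuist.edu.cn)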
  • About author:TIAN Qing,born in 1984,Ph.D,professor,is a senior member of CCF(No.33364S).His main research interests include machine learning and pattern recognition.
  • Supported by:
    National Natural Science Foundation of China(62176128),Natural Science Foundation of Jiangsu Province,China(BK20231143),Open Projects Program of State Key Laboratory for Novel Software Technology of Nanjing University(KFKT2022B06),Fundamental Research Funds for the Central Universities(NJ2022028) and Qing Lan Project of Jiangsu Province.


Abstract: Traditional source-free domain adaptation usually assumes that all of the target-domain data is available. In practice, however, target-domain data often arrives as a stream: the classes in the unlabeled target domain increase sequentially, which brings new challenges. First, at each time step the label space of the target domain is only a subset of that of the source domain, so blindly aligning the two domains degrades model performance. Second, learning new classes disrupts previously acquired knowledge, causing catastrophic forgetting. To address these problems, this paper proposes a method based on multi-prototype replay and alignment (MPRA). MPRA detects the shared classes in the target domain via cumulative prediction probabilities to resolve the label-space inconsistency, and employs multi-prototype replay to counter catastrophic forgetting and strengthen the model's memory of earlier classes. In addition, it performs cross-domain contrastive learning based on the multi-prototypes and the source-model weights, aligning the feature distributions and improving model robustness. Extensive experiments show that the proposed method achieves superior performance on three benchmark datasets.
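To make the mechanics named in the abstract concrete, below is a minimal PyTorch sketch of its three components: shared-class detection from cumulative prediction probabilities, a multi-prototype replay bank, and a prototype-based cross-domain contrastive loss. All names (detect_shared_classes, PrototypeBank, proto_contrastive_loss), the thresholding rule, and the chunk-mean prototype summary are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (PyTorch). Hypothetical names and heuristics throughout;
# this is NOT the authors' released code.
import torch
import torch.nn.functional as F

def detect_shared_classes(probs: torch.Tensor, ratio: float = 0.5) -> torch.Tensor:
    """Detect which source classes are present in the target stream.

    probs: (N, C) softmax outputs of the frozen source model on unlabeled
    target samples. A class is flagged as shared when its prediction
    probability, accumulated (averaged) over the stream, exceeds a fraction
    of the mean per-class score. The ratio is an assumed hyperparameter.
    """
    cum = probs.mean(dim=0)               # cumulative probability per class, (C,)
    return cum > ratio * cum.mean()       # boolean mask over the C source classes

class PrototypeBank:
    """Keeps several feature prototypes per detected class for replay."""

    def __init__(self, num_protos: int = 3):
        self.num_protos = num_protos
        self.bank = {}                    # class id -> (k, D) prototype tensor

    def update(self, feats: torch.Tensor, pseudo_labels: torch.Tensor) -> None:
        for c in pseudo_labels.unique().tolist():
            fc = feats[pseudo_labels == c]            # features of class c
            k = min(self.num_protos, fc.size(0))
            # Chunk means stand in for the k-means-style multi-prototype
            # summary one would normally compute.
            self.bank[c] = torch.stack([ch.mean(0) for ch in fc.chunk(k)]).detach()

    def replay(self):
        """Return all stored prototypes with their class labels."""
        if not self.bank:
            return None, None
        labels = torch.tensor([c for c, p in self.bank.items()
                               for _ in range(p.size(0))])
        return torch.cat(list(self.bank.values()), dim=0), labels

def proto_contrastive_loss(feats, labels, protos, proto_labels, tau: float = 0.07):
    """Pull target features toward same-class prototypes, push from the rest."""
    feats, protos = F.normalize(feats, dim=1), F.normalize(protos, dim=1)
    logits = feats @ protos.t() / tau                       # (N, P) similarities
    pos = (labels.unsqueeze(1) == proto_labels.unsqueeze(0)).float()
    log_prob = F.log_softmax(logits, dim=1)
    return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()
```

A training step at time t would then run the frozen source model over the current target batch, keep pseudo-labeled samples of the detected shared classes, update the prototype bank, and minimize the contrastive loss jointly on current features and prototypes replayed from earlier steps.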

Key words: Source-free domain adaptation, Class-incremental learning, Multi-prototype, Contrastive learning, Transfer learning

CLC Number: TP391