Computer Science ›› 2025, Vol. 52 ›› Issue (3): 206-213.doi: 10.11896/jsjkx.240100166

• Database & Big Data & Data Science •

Class-incremental Source-free Domain Adaptation Based on Multi-prototype Replay and Alignment

TIAN Qing1,2,3, KANG Lulu1, ZHOU Liangyu1   

  1 School of Software,Nanjing University of Information Science and Technology,Nanjing 210044,China
    2 Wuxi Institute of Technology,Nanjing University of Information Science and Technology,Wuxi,Jiangsu 214000,China
    3 State Key Laboratory for Novel Software Technology,Nanjing University,Nanjing 210023,China
  • Received:2024-01-22 Revised:2024-04-12 Online:2025-03-15 Published:2025-03-07
  • About author:TIAN Qing,born in 1984,Ph.D,professor,is a senior member of CCF(No.33364S).His main research interests include machine learning and pattern recognition.
  • Supported by:
    National Natural Science Foundation of China(62176128),Natural Science Foundation of Jiangsu Province,China(BK20231143),Open Projects Program of State Key Laboratory for Novel Software Technology of Nanjing University(KFKT2022B06),Fundamental Research Funds for the Central Universities(NJ2022028) and Qing Lan Project of Jiangsu Province.

Abstract: Traditional source-free domain adaptation usually assumes that all target domain data is available at once. In practice, however, target domain data often arrives as a stream, i.e., the classes in the unlabeled target domain increase sequentially, which poses new challenges. First, at each time step the label space of the target domain is only a subset of that of the source domain, and blindly aligning the two domains degrades model performance. Second, learning new classes tends to overwrite previously learned knowledge, leading to catastrophic forgetting. To address these problems, this paper proposes a method based on multi-prototype replay and alignment (MPRA). The method detects the classes shared with the source domain via cumulative prediction probabilities, thereby resolving the label-space inconsistency, and employs multi-prototype replay to alleviate catastrophic forgetting and strengthen the model's memory of earlier classes. In addition, it incorporates cross-domain contrastive learning based on the multi-prototypes and the source model weights to align the feature distributions and improve robustness. Extensive experiments show that the proposed method achieves superior performance on three benchmark datasets.
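To make the two steps named in the abstract more concrete, the following is a minimal sketch of shared-class detection via cumulative prediction probabilities and of building a multi-prototype replay memory. It is an illustrative reconstruction under stated assumptions, not the authors' implementation: the threshold value, the use of a small k-means to obtain several prototypes per class, and all function names are hypothetical choices not taken from the paper.

# Minimal sketch (assumptions, not the authors' code): shared-class detection via
# cumulative prediction probabilities and a multi-prototype replay memory, as the
# abstract describes informally. Threshold, k-means prototype extraction and all
# names are illustrative.
import numpy as np


def detect_shared_classes(probs, threshold=0.1):
    """Accumulate softmax predictions over the current target-domain data and
    keep classes whose normalized cumulative probability exceeds a threshold.

    probs: (N, C) softmax outputs of the source model on unlabeled target data.
    Returns indices of classes treated as shared at this time step.
    """
    cumulative = probs.sum(axis=0)        # per-class accumulated probability
    cumulative /= cumulative.sum()        # normalize to a distribution over classes
    return np.flatnonzero(cumulative > threshold)


def build_prototypes(features, pseudo_labels, shared_classes, k=3, iters=10):
    """Extract up to k prototypes per shared class with a tiny k-means, so that
    each class is represented by several modes rather than a single mean; the
    stored prototypes can later be replayed to resist forgetting."""
    memory = {}
    for c in shared_classes:
        fc = features[pseudo_labels == c]
        if len(fc) == 0:
            continue
        centers = fc[np.random.choice(len(fc), size=min(k, len(fc)), replace=False)]
        for _ in range(iters):
            assign = np.argmin(((fc[:, None] - centers[None]) ** 2).sum(-1), axis=1)
            centers = np.stack([fc[assign == j].mean(0) if np.any(assign == j) else centers[j]
                                for j in range(len(centers))])
        memory[c] = centers
    return memory


# Toy usage: 200 target samples, a source model trained on 10 classes,
# only classes {0, 1, 2} actually present at this time step.
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 64))
logits = rng.normal(size=(200, 10))
logits[:, :3] += 3.0                      # present classes get higher scores
probs = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)
shared = detect_shared_classes(probs, threshold=0.08)
protos = build_prototypes(feats, probs.argmax(1), shared)
print("shared classes:", shared, "| prototypes per class:", {c: p.shape[0] for c, p in protos.items()})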

Key words: Source-free domain adaptation, Class-incremental learning, Multi-prototype, Contrastive learning, Transfer learning

CLC Number: 

  • TP391
[1]HE K,ZHANG X,REN S,et al.Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.2016:770-778.
[2]CHEN G,CHEN K,ZHANG L,et al.VCANet:Vanishing-point-guided context-aware network for small road object detection[J].Automotive Innovation,2021,4:400-412.
[3]XU Y,ZHANG Q,ZHANG J,et al.Vitae:Vision transformer advanced by exploring intrinsic inductive bias[J].Advances in Neural Information Processing Systems,2021,34:28522-28535.
[4]ZHENG Z D,YANG Y.Rectifying pseudo label learning via uncertainty estimation for domain adaptive semantic segmentation[J].International Journal of Computer Vision,2021,129(4):1106-1120.
[5]TIAN Q,SUN H,MA C,et al.Heterogeneous domain adaptation with structure and classification space alignment[J].IEEE Transactions on Cybernetics,2021,52(10):10328-10338.
[6]HOFFMAN J,TZENG E,PARK T,et al.Cycada:Cycle-consistent adversarial domain adaptation[C]//International Conference on Machine Learning.2018:1989-1998.
[7]ZHOU K B,TENG L Y,ZHANG W,et al.Discriminative label semantic guidance learning for domain adaptive retrieval[J].Journal of Chinese Computer Systems,2024,45(7):1639-1647.
[8]DU Z,LI J,SU H,et al.Cross-domain gradient discrepancy minimization for unsupervised domain adaptation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2021:3937-3946.
[9]PAN S J,TSANG I W,KWOK J T,et al.Domain adaptation via transfer component analysis[J].IEEE Transactions on Neural Networks,2010,22(2):199-210.
[10]LONG M,CAO Z,WANG J,et al.Conditional adversarial domain adaptation[C]//Proceedings of the 32nd International Conference on Neural Information Processing Systems.2018:1647-1657.
[11]BOUSMALIS K,SILBERMAN N,DOHAN D,et al.Unsupervised pixel-level domain adaptation with generative adversarial networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.2017:3722-3731.
[12]LONG M,CAO Y,WANG J,et al.Learning transferable features with deep adaptation networks[C]//International Conference on Machine Learning.2015:97-105.
[13]LIANG J,HU D,FENG J.Do we really need to access the source data? Source hypothesis transfer for unsupervised domain adaptation[C]//International Conference on Machine Learning.2020:6028-6039.
[14]ZHANG Z,CHEN W,CHENG H,et al.Divide and contrast:Source-free domain adaptation via adaptive contrastive learning[J].Advances in Neural Information Processing Systems,2022,35:5137-5149.
[15]DING N,XU Y,TANG Y,et al.Source-free domain adaptation via distribution estimation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2022:7212-7222.
[16]LIANG J,HU D,WANG Y,et al.Source data-absent unsupervised domain adaptation through hypothesis transfer and labeling transfer[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2021,44(11):8602-8617.
[17]CHU T,LIU Y,DENG J,et al.Denoised Maximum Classifier Discrepancy for Source-Free Unsupervised Domain Adaptation[C]//Proceedings of the AAAI Conference on Artificial Intelligence.2022:3472-3480.
[18]JING M M,LI J J,LU K,et al.Visually source-free domain adaptation via adversarial style matching[J].IEEE Transactions on Image Processing,2024,33:1032-1044.
[19]BELOUADAH E,POPESCU A,KANELLOS I.A comprehensive study of class incremental learning algorithms for visual tasks[J].Neural Networks,2021,135:38-54.
[20]MASANA M,LIU X,TWARDOWSKI B,et al.Class-incremental learning:Survey and performance evaluation on image classification[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2022,45(5):5513-5533.
[21]ZHAO B,XIAO X,GAN G J,et al.Maintaining discrimination and fairness in class incremental learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2020:13208-13217.
[22]HU X,TANG K,MIAO C,et al.Distilling causal effect of data in class-incremental learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2021:3957-3966.
[23]LIU Y,SCHIELE B,SUN Q.RMM:Reinforced memory management for class-incremental learning[J].Advances in Neural Information Processing Systems,2021,34:3478-3490.
[24]BELOUADAH E,POPESCU A.IL2M:Class incremental learning with dual memory[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision.2019:583-592.
[25]LIU X,WU C,MENTA M,et al.Generative feature replay for class-incremental learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops.2020:226-227.
[26]YANG S,WANG Y,et al.Generalized source-free domain adaptation[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision.2021:8978-8987.
[27]YANG S,WANG Y X,et al.Exploiting the intrinsic neighborhood structure for source-free domain adaptation[J].Advances in Neural Information Processing Systems,2021,34:29393-29405.
[28]WANG F,HAN Z,GONG Y,et al.Exploring domain-invariant parameters for source free domain adaptation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2022:7151-7160.
[29]TIAN J,ZHANG J,LI W,et al.VDM-DA:Virtual domain modeling for source data-free domain adaptation[J].IEEE Transactions on Circuits and Systems for Video Technology,2021,32(6):3749-3760.
[30]CHEN D,WANG D,DARRELL T,et al.Contrastive test-time adaptation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2022:295-305.
[31]WANG R,WU Z,WENG Z,et al.Cross-domain contrastive learning for unsupervised domain adaptation[J].IEEE Transactions on Multimedia,2022,25:1665-1673.
[32]TIAN Q,PENG S,MA T.Source-free unsupervised domain adaptation with trusted pseudo samples[J].ACM Transactions on Intelligent Systems and Technology,2023,14(2):1-17.
[33]CAO Z,LONG M,WANG J,et al.Partial transfer learning with selective adversarial networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.2018:2724-2732.
[34]TIAN Q,CHU Y,SUN H Y,et al.Survey on Partial Domain Adaptation[J].Journal of Software,2023,34(12):5597-5613.
[35]CAO Z,MA L,LONG M,et al.Partial adversarial domain adaptation[C]//Proceedings of the European Conference on Computer Vision(ECCV).2018:135-150.
[36]SAHOO A,PANDA R,FERIS R,et al.Select,label,and mix:Learning discriminative invariant feature representations for partial domain adaptation[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision.2023:4210-4219.
[37]KUNDU J N,VENKATESH R M,VENKAT N,et al.Class-incremental domain adaptation[C]//European Conference on Computer Vision.2020:53-69.
[38]LIN H,ZHANG Y,QIU Z,et al.Prototype-guided continual adaptation for class-incremental unsupervised domain adaptation[C]//European Conference on Computer Vision.2022:351-368.
[39]SAENKO K,KULIS B,FRITZ M,et al.Adapting visual category models to new domains[C]//Proceedings of European Conference on Computer Vision.Springer,2010:213-226.
[40]TZENG E,HOFFMAN J,SAENKO K,et al.Adversarial dis-criminative domain adaptation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.2017:7167-7176.
[41]GANIN Y,USTINOVA E,AJAKAN H,et al.Domain-adversarial training of neural networks[J].The Journal of Machine Learning Research,2016,17(1):2096-2030.
[42]CAO Z,YOU K,LONG M,et al.Learning to transfer examples for partial domain adaptation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2019:2985-2994.
[43]LIANG J,WANG Y,HU D,et al.A balanced and uncertainty-aware approach for partial domain adaptation[C]//European Conference on Computer Vision.Cham:Springer International Publishing,2020:123-140.