Computer Science, 2025, Vol. 52, Issue (3): 206-213. doi: 10.11896/jsjkx.240100166

• Database & Big Data & Data Science •

Class-incremental Source-free Domain Adaptation Based on Multi-prototype Replay and Alignment

TIAN Qing1,2,3, KANG Lulu1, ZHOU Liangyu1   

1 School of Software,Nanjing University of Information Science and Technology,Nanjing 210044,China
    2 Wuxi Institute of Technology,Nanjing University of Information Science and Technology,Wuxi,Jiangsu 214000,China
    3 State Key Laboratory for Novel Software Technology,Nanjing University,Nanjing 210023,China
  • Received:2024-01-22 Revised:2024-04-12 Online:2025-03-15 Published:2025-03-07
  • About author:TIAN Qing,born in 1984,Ph.D,professor,is a senior member of CCF(No.33364S).His main research interests include machine learning and pattern recognition.
  • Supported by:
    National Natural Science Foundation of China(62176128),Natural Science Foundation of Jiangsu Province,China(BK20231143),Open Projects Program of State Key Laboratory for Novel Software Technology of Nanjing University(KFKT2022B06),Fundamental Research Funds for the Central Universities(NJ2022028) and Qing Lan Project of Jiangsu Province.

Abstract: Traditional source-free domain adaptation usually assumes that all target-domain data is available at once, but in practice the target-domain data often arrives as a stream, i.e., the classes in the unlabeled target domain increase sequentially, which poses new challenges. First, at each time step the label space of the target domain is only a subset of that of the source domain, and blindly aligning the two domains degrades the model's performance. Second, learning new classes can destroy previously learned knowledge and cause catastrophic forgetting. To address these problems, this paper proposes a method based on multi-prototype replay and alignment (MPRA). The method detects the shared classes in the target domain via cumulative prediction probabilities, thereby resolving the label-space inconsistency, and employs multi-prototype replay to counter catastrophic forgetting and strengthen the model's memory of old classes. In addition, it incorporates cross-domain contrastive learning based on multi-prototypes and the source-model weights to align the feature distributions and improve robustness. Extensive experiments show that the proposed method achieves superior performance on three benchmark datasets.
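For readers who want to see how the two core ideas in the abstract might look in code, the sketch below gives a minimal PyTorch illustration of (a) shared-class detection by accumulating prediction probabilities over the unlabeled target data and (b) a multi-prototype memory used for replay. The function and class names, the thresholding rule, and the way sub-prototypes are formed are illustrative assumptions for exposition; they are not the exact procedure or losses of MPRA.

import torch
import torch.nn.functional as F

@torch.no_grad()
def detect_shared_classes(model, target_loader, num_src_classes, ratio=0.5, device="cpu"):
    # Accumulate softmax predictions over the unlabeled target data and keep the
    # classes whose average predicted probability exceeds a threshold.
    # The thresholding rule (ratio * mean mass) is an illustrative assumption.
    cumulative = torch.zeros(num_src_classes, device=device)
    count = 0
    for x, *_ in target_loader:
        probs = F.softmax(model(x.to(device)), dim=1)
        cumulative += probs.sum(dim=0)
        count += x.size(0)
    cumulative /= count                        # average probability mass per class
    threshold = ratio * cumulative.mean()
    return (cumulative > threshold).nonzero(as_tuple=True)[0]   # shared-class indices

class MultiPrototypeMemory:
    # Keeps several feature prototypes per class and replays them when new target
    # classes arrive, so old classes are rehearsed without storing raw samples.
    def __init__(self, prototypes_per_class=3):
        self.k = prototypes_per_class
        self.store = {}                        # class id -> (<=k, feat_dim) tensor

    @torch.no_grad()
    def update(self, feats, pseudo_labels):
        for c in pseudo_labels.unique().tolist():
            fc = feats[pseudo_labels == c]
            # Simplification: split the class features into k chunks and average each;
            # a clustering step (e.g. k-means) could yield better sub-prototypes.
            chunks = fc.chunk(self.k)
            self.store[c] = torch.stack([ch.mean(dim=0) for ch in chunks])

    def replay(self):
        # Return (features, labels) of all stored prototypes for rehearsal losses.
        if not self.store:
            return None, None
        feats = torch.cat(list(self.store.values()), dim=0)
        labels = torch.cat([torch.full((p.size(0),), c, dtype=torch.long)
                            for c, p in self.store.items()])
        return feats, labels

In a class-incremental run, the detector would be re-applied at each time step on the newly arrived target data, and the replayed prototypes would be mixed into the cross-domain contrastive alignment; those losses are not reproduced here.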

Key words: Source-free domain adaptation, Class-incremental learning, Multi-prototype, Contrastive learning, Transfer learning

CLC Number: TP391
[1]HE K,ZHANG X,REN S,et al.Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.2016:770-778.
[2]CHEN G,CHEN K,ZHANG L,et al.VCANet:Vanishing-point-guided context-aware network for small road object detection[J].Automotive Innovation,2021,4:400-412.
[3]XU Y,ZHANG Q,ZHANG J,et al.ViTAE:Vision transformer advanced by exploring intrinsic inductive bias[J].Advances in Neural Information Processing Systems,2021,34:28522-28535.
[4]ZHENG Z D,YANG Y.Rectifying pseudo label learning via uncertainty estimation for domain adaptive semantic segmentation[J].International Journal of Computer Vision,2021,129(4):1106-1120.
[5]TIAN Q,SUN H,MA C,et al.Heterogeneous domain adaptation with structure and classification space alignment[J].IEEE Transactions on Cybernetics,2021,52(10):10328-10338.
[6]HOFFMAN J,TZENG E,PARK T,et al.CyCADA:Cycle-consistent adversarial domain adaptation[C]//International Conference on Machine Learning.2018:1989-1998.
[7]ZHOU K B,TENG L Y,ZHANG W,et al.Discriminative label semantic guidance learning for domain adaptive retrieval[J].Journal of Chinese Computer Systems,2024,45(7):1639-1647.
[8]DU Z,LI J,SU H,et al.Cross-domain gradient discrepancy minimization for unsupervised domain adaptation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2021:3937-3946.
[9]PAN S J,TSANG I W,KWOK J T,et al.Domain adaptation via transfer component analysis[J].IEEE Transactions on Neural Networks,2010,22(2):199-210.
[10]LONG M,CAO Z,WANG J,et al.Conditional adversarial domain adaptation[C]//Proceedings of the 32nd International Conference on Neural Information Processing Systems.2018:1647-1657.
[11]BOUSMALIS K,SILBERMAN N,DOHAN D,et al.Unsupervised pixel-level domain adaptation with generative adversarial networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.2017:3722-3731.
[12]LONG M,CAO Y,WANG J,et al.Learning transferable features with deep adaptation networks[C]//International Conference on Machine Learning.2015:97-105.
[13]LIANG J,HU D,FENG J.Do we really need to access the source data? Source hypothesis transfer for unsupervised domain adaptation[C]//International Conference on Machine Learning.2020:6028-6039.
[14]ZHANG Z,CHEN W,CHENG H,et al.Divide and contrast:Source-free domain adaptation via adaptive contrastive learning[J].Advances in Neural Information Processing Systems,2022,35:5137-5149.
[15]DING N,XU Y,TANG Y,et al.Source-free domain adaptation via distribution estimation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2022:7212-7222.
[16]LIANG J,HU D,WANG Y,et al.Source data-absent unsupervised domain adaptation through hypothesis transfer and labeling transfer[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2021,44(11):8602-8617.
[17]CHU T,LIU Y,DENG J,et al.Denoised Maximum Classifier Discrepancy for Source-Free Unsupervised Domain Adaptation[C]//Proceedings of the AAAI Conference on Artificial Intelligence.2022:3472-3480.
[18]JING M M,LI J J,LU K,et al.Visually source-free domain adaptation via adversarial style matching[J].IEEE Transactions on Image Processing,2024,33:1032-1044.
[19]BELOUADAH E,POPESCU A,KANELLOS I.A comprehensive study of class incremental learning algorithms for visual tasks[J].Neural Networks,2021,135:38-54.
[20]MASANA M,LIU X,TWARDOWSKI B,et al.Class-incremental learning:Survey and performance evaluation on image classification[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2022,45(5):5513-5533.
[21]ZHAO B,XIAO X,GAN G J,et al.Maintaining discrimination and fairness in class incremental learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2020:13208-13217.
[22]HU X,TANG K,MIAO C,et al.Distilling causal effect of data in class-incremental learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2021:3957-3966.
[23]LIU Y,SCHIELE B,SUN Q.RMM:Reinforced memory management for class-incremental learning[J].Advances in Neural Information Processing Systems,2021,34:3478-3490.
[24]BELOUADAH E,POPESCU A.IL2M:Class incremental learning with dual memory[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision.2019:583-592.
[25]LIU X,WU C,MENTA M,et al.Generative feature replay for class-incremental learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops.2020:226-227.
[26]YANG S,WANG Y,et al.Generalized source-free domain adaptation[C]//Proceedings of the IEEE/CVF International Confe-rence on Computer Vision.2021:8978-8987.
[27]YANG S,WANG Y X,et al.Exploiting the intrinsic neighborhood structure for source-free domain adaptation[J].Advances in Neural Information Processing Systems,2021,34:29393-29405.
[28]WANG F,HAN Z,GONG Y,et al.Exploring domain-invariant parameters for source free domain adaptation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2022:7151-7160.
[29]TIAN J,ZHANG J,LI W,et al.VDM-DA:Virtual domain modeling for source data-free domain adaptation[J].IEEE Transactions on Circuits and Systems for Video Technology,2021,32(6):3749-3760.
[30]CHEN D,WANG D,DARRELL T,et al.Contrastive test-time adaptation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2022:295-305.
[31]WANG R,WU Z,WENG Z,et al.Cross-domain contrastive learning for unsupervised domain adaptation[J].IEEE Transactions on Multimedia,2022,25:1665-1673.
[32]TIAN Q,PENG S,MA T.Source-free unsupervised domain adaptation with trusted pseudo samples[J].ACM Transactions on Intelligent Systems and Technology,2023,14(2):1-17.
[33]CAO Z,LONG M,WANG J,et al.Partial transfer learning with selective adversarial networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.2018:2724-2732.
[34]TIAN Q,CHU Y,SUN H Y,et al.Survey on Partial Domain Adaptation[J].Journal of Software,2023,34(12):5597-5613.
[35]CAO Z,MA L,LONG M,et al.Partial adversarial domain adaptation[C]//Proceedings of the European Conference on Computer Vision(ECCV).2018:135-150.
[36]SAHOO A,PANDA R,FERIS R,et al.Select,label,and mix:Learning discriminative invariant feature representations for partial domain adaptation[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision.2023:4210-4219.
[37]KUNDU J N,VENKATESH R M,VENKAT N,et al.Class-incremental domain adaptation[C]//European Conference on Computer Vision.2020:53-69.
[38]LIN H,ZHANG Y,QIU Z,et al.Prototype-guided continual adaptation for class-incremental unsupervised domain adaptation[C]//European Conference on Computer Vision.2022:351-368.
[39]SAENKO K,KULIS B,FRITZ M,et al.Adapting visual category models to new domains[C]//Proceedings of European Conference on Computer Vision.Springer,2010:213-226.
[40]TZENG E,HOFFMAN J,SAENKO K,et al.Adversarial discriminative domain adaptation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.2017:7167-7176.
[41]GANIN Y,USTINOVA E,AJAKAN H,et al.Domain-adversarial training of neural networks[J].The Journal of Machine Learning Research,2016,17(1):2096-2030.
[42]CAO Z,YOU K,LONG M,et al.Learning to transfer examples for partial domain adaptation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2019:2985-2994.
[43]LIANG J,WANG Y,HU D,et al.A balanced and uncertainty-aware approach for partial domain adaptation[C]//European Conference on Computer Vision.Cham:Springer International Publishing,2020:123-140.