Computer Science ›› 2025, Vol. 52 ›› Issue (2): 116-124. doi: 10.11896/jsjkx.240600004

• Database & Big Data & Data Science •

Multi-source-free Domain Adaptation Based on Source Model Contribution Quantization

TIAN Qing1,2,3, LIU Xiang1, WANG Bin1, YU Jiangsen1, SHEN Jiashuo1   

1. 1 School of Software,Nanjing University of Information Science and Technology,Nanjing 210044,China
    2 Wuxi Institute of Technology,Nanjing University of Information Science and Technology,Wuxi,Jiangsu 214000,China
    3 State Key Laboratory for Novel Software Technology,Nanjing University,Nanjing 210023,China
  • Received:2024-06-02 Revised:2024-09-07 Online:2025-02-15 Published:2025-02-17
  • Corresponding author: TIAN Qing(tianqing@nuist.edu.cn)
  • About author:TIAN Qing,born in 1984,Ph.D,professor,is a senior member of CCF(No.33364S).His main research interests include machine learning,pattern recognition and computer vision.
  • Supported by:
    National Natural Science Foundation of China(62176128),Natural Science Foundation of Jiangsu Province(BK20231143),Open Projects Program of State Key Laboratory for Novel Software Technology of Nanjing University(KFKT2022B06),Fundamental Research Funds for the Central Universities(NJ2022028)and Qing Lan Project of Jiangsu Province.

Abstract: As a new research direction in the field of machine learning,multi-source-free domain adaptation aims to transfer knowledge from multiple source-domain models to the target domain,so as to achieve accurate prediction of target-domain samples.Essentially,the key to solving multi-source-free domain adaptation lies in how to quantify the contribution of each source model to the target domain and how to exploit the diverse knowledge in the source models to adapt to the target domain.To address these issues,this paper proposes a multi-source-free domain adaptation method based on source model contribution quantization(SMCQ).Specifically,source model transferability perception is proposed to quantify the transferability contribution of each source model,enabling adaptive weights of the source models to be allocated effectively for the target-domain model.An information maximization method is then introduced to reduce cross-domain distributional discrepancies and mitigate model degradation.Subsequently,a credible partition global alignment approach is proposed,which divides samples into high-confidence and low-confidence subsets to cope with the noisy environment caused by domain differences,effectively reducing the risk of incorrect label assignment.In addition,a sample local consistency loss is introduced to mitigate the impact of pseudo-label noise on the clustering of low-confidence samples.Finally,experiments conducted on multiple datasets validate the effectiveness of the proposed method.
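The abstract outlines three computational ingredients: weighting the predictions of multiple source models by their transferability contribution, an information-maximization objective, and a confidence-based partition of target samples. The following NumPy sketch illustrates these pieces under simplifying assumptions; the function names, the fixed confidence threshold, and the plain weighted-average aggregation are illustrative stand-ins, not the paper's actual formulation.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax over class logits."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def info_max_loss(probs, eps=1e-8):
    """Information-maximization objective: minimize the mean per-sample
    entropy (confident predictions) while maximizing the entropy of the
    marginal prediction (diverse class usage), which counters the
    degenerate solution of collapsing to a single class."""
    per_sample_entropy = -(probs * np.log(probs + eps)).sum(axis=1).mean()
    marginal = probs.mean(axis=0)
    marginal_entropy = -(marginal * np.log(marginal + eps)).sum()
    return per_sample_entropy - marginal_entropy

def combine_sources(source_probs, weights):
    """Aggregate K source-model predictions [K, N, C] with per-model
    adaptation weights (normalized to sum to 1)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.einsum('k,knc->nc', w, source_probs)

def split_by_confidence(probs, tau=0.9):
    """Partition samples into high-confidence (True) and low-confidence
    (False) subsets by the maximum predicted probability."""
    return probs.max(axis=1) >= tau
```

In this sketch a confident, class-balanced prediction matrix attains a lower `info_max_loss` than a uniform one, which is the behavior the information-maximization term rewards; the high-confidence mask from `split_by_confidence` would then select the samples trusted for pseudo-label-based alignment.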

Key words: Multi-source-free domain adaptation, Multi-model contribution quantization, Source model transferability perception, Information maximization, Credible partition of samples

CLC number: TP391
[1]TIAN Q,ZHU Y N,SUN H Y,et al.Unsupervised domain adaptation through dynamically aligning both the feature and label spaces[J].IEEE Transactions on Circuits and Systems for Video Technology,2022,32(12):8562-8573.
[2]TIAN Q,CHU Y,SUN H Y,et al.Survey on partial domain adaptation[J].Journal of Software,2023,34(12):5597-5613.
[3]TIAN Q,SUN H Y,MA C,et al.Heterogeneous domain adaptation with structure and classification space alignment[J].IEEE Transactions on Cybernetics,2021,52(10):10328-10338.
[4]LONG M S,CAO Y,WANG J M,et al.Learning transferable features with deep adaptation networks[C]//International Conference on Machine Learning.PMLR,2015:97-105.
[5]LONG M S,CAO Y,CAO Z J,et al.Transferable representation learning with deep adaptation networks[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2018,41(12):3071-3085.
[6]PENG X C,BAI Q X,XIA X D,et al.Moment matching for multi-source domain adaptation[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision.2019:1406-1415.
[7]GANIN Y,LEMPITSKY V.Unsupervised domain adaptation by backpropagation[C]//International Conference on Machine Learning.PMLR,2015:1180-1189.
[8]GANIN Y,USTINOVA E,AJAKAN H,et al.Domain-adversarial training of neural networks[J].Journal of Machine Learning Research,2016,17(1):2096-2030.
[9]TZENG E,HOFFMAN J,SAENKO K,et al.Adversarial discriminative domain adaptation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.2017:7167-7176.
[10]KIM Y,CHO D,HAN K,et al.Domain adaptation without source data[J].IEEE Transactions on Artificial Intelligence,2021,2(6):508-518.
[11]LIANG J,HU D P,FENG J S.Do we really need to access the source data? Source hypothesis transfer for unsupervised domain adaptation[C]//International Conference on Machine Learning.PMLR,2020:6028-6039.
[12]LI R,JIAO Q F,CAO W M,et al.Model adaptation:unsupervised domain adaptation without source data[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2020:9641-9650.
[13]YANG S Q,WANG Y X,VAN DE WEIJER J,et al.Unsupervised domain adaptation without source data by casting a bait[J].arXiv:2010.12427,2020.
[14]LIN H B,ZHANG Y F,QIU Z,et al.Prototype-guided continual adaptation for class-incremental unsupervised domain adaptation[C]//European Conference on Computer Vision.Cham:Springer Nature Switzerland,2022:351-368.
[15]KARIM N,MITHUN N C,RAJVANSHI A,et al.C-SFDA:a curriculum learning aided self-training framework for efficient source free domain adaptation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2023:24120-24131.
[16]KIM Y,CHO D,HONG S.Towards privacy-preserving domain adaptation[J].IEEE Signal Processing Letters,2020,27:1675-1679.
[17]WANG F,HAN Z Y,GONG Y S,et al.Exploring domain-invariant parameters for source free domain adaptation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2022:7151-7160.
[18]LIANG J,HU D P,FENG J S,et al.DINE:domain adaptation from single and multiple black-box predictors[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2022:8003-8013.
[19]ZHANG H J,ZHANG Y B,JIA K,et al.Unsupervised domain adaptation of black-box source models[J].arXiv:2101.02839,2021.
[20]KUNDU J N,VENKAT N,BABU R V.Universal source-free domain adaptation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2020:4544-4553.
[21]WANG F,HAN Z Y,ZHANG Z Y,et al.MHPL:minimum happy points learning for active source free domain adaptation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2023:20008-20018.
[22]AHMED S M,RAYCHAUDHURI D S,PAUL S,et al.Unsupervised multi-source domain adaptation without access to source data[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2021:10103-10112.
[23]DONG J H,FANG Z,LIU A J,et al.Confident anchor-induced multi-source free domain adaptation[J].Advances in Neural Information Processing Systems,2021,34:2848-2860.
[24]HAN Z Y,ZHANG Z Y,WANG F,et al.Discriminability and transferability estimation:a Bayesian source importance estimation approach for multi-source-free domain adaptation[C]//Proceedings of the AAAI Conference on Artificial Intelligence.2023:7811-7820.
[25]LI J,DU Z K,ZHU L,et al.Divergence-agnostic unsupervised domain adaptation by adversarial attacks[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2022,44(11):8196-8211.
[26]YE M,ZHANG J,OUYANG J,et al.Source data-free unsupervised domain adaptation for semantic segmentation[C]//Proceedings of the 29th ACM International Conference on Multimedia.2021:2233-2242.
[27]DING N,XU Y,TANG Y,et al.Source-free domain adaptation via distribution estimation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2022:7212-7222.
[28]TIAN Q,MA C,ZHANG F Y,et al.Source-free unsupervised domain adaptation with sample transport learning[J].Journal of Computer Science and Technology,2021,36(3):606-616.
[29]KURMI V K,SUBRAMANIAN V K,NAMBOODIRI V P.Domain impression:a source data free domain adaptation method[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision.2021:615-625.
[30]ZHANG Z,CHEN W,CHENG H,et al.Divide and contrast:source-free domain adaptation via adaptive contrastive learning[J].Advances in Neural Information Processing Systems,2022,35:5137-5149.
[31]YANG S,VAN DE WEIJER J,HERRANZ L,et al.Exploiting the intrinsic neighborhood structure for source-free domain adaptation[J].Advances in Neural Information Processing Systems,2021,34:29393-29405.
[32]YANG S,JUI S,VAN DE WEIJER J.Attracting and dispersing:a simple approach for source-free domain adaptation[J].Advances in Neural Information Processing Systems,2022,35:5802-5815.
[33]GUO J,SHAH D J,BARZILAY R.Multi-source domain adaptation with mixture of experts[J].arXiv:1809.02256,2018.
[34]YANG L,BALAJI Y,LIM S N,et al.Curriculum manager for source selection in multi-source domain adaptation[C]//Computer Vision-ECCV 2020:16th European Conference,Glasgow,UK,August 23-28,2020,Proceedings,Part XIV 16.Springer International Publishing,2020:608-624.
[35]ZHAO S,WANG G,ZHANG S,et al.Multi-source distilling domain adaptation[C]//Proceedings of the AAAI Conference on Artificial Intelligence.2020:12975-12983.
[36]ZHAO H,ZHANG S,WU G,et al.Adversarial multiple source domain adaptation[J].Advances in Neural Information Processing Systems,2018,31:8559-8570.
[37]XU R,CHEN Z,ZUO W,et al.Deep cocktail network:multi-source unsupervised domain adaptation with category shift[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.2018:3964-3973.
[38]LI Y,CARLSON D E.Extracting relationships by multi-domain matching[J].Advances in Neural Information Processing Systems,2018,31:6799-6810.
[39]WANG H,YANG W,LIN Z,et al.TMDA:task-specific multi-source domain adaptation via clustering embedded adversarial training[C]//2019 IEEE International Conference on Data Mining(ICDM).IEEE,2019:1372-1377.
[40]SHEN M,BU Y,WORNELL G W.On balancing bias and variance in unsupervised multi-source-free domain adaptation[C]//International Conference on Machine Learning.PMLR,2023:30976-30991.
[41]HOFFMAN J,MOHRI M,ZHANG N.Algorithms and theory for multiple-source adaptation[J].Advances in Neural Information Processing Systems,2018,31:8256-8266.
[42]ZHAO S,LI B,YUE X,et al.Multi-source domain adaptation for semantic segmentation[J].Advances in Neural Information Processing Systems,2019,32:7285-7298.
[43]LI K Q Y,LU J,ZUO H,et al.Multi-source contribution learning for domain adaptation[J].IEEE Transactions on Neural Networks and Learning Systems,2021,33(10):5293-5307.
[44]ZHANG J,ZHOU W E,CHEN X Q,et al.Multisource selective transfer framework in multiobjective optimization problems[J].IEEE Transactions on Evolutionary Computation,2019,24(3):424-438.
[45]BRIDLE J,HEADING A,MACKAY D.Unsupervised classifiers,mutual information and 'phantom targets'[J].Advances in Neural Information Processing Systems,1991,4:1096-1101.
[46]KRAUSE A,PERONA P,GOMES R.Discriminative clustering by regularized information maximization[J].Advances in Neural Information Processing Systems,2010,23:775-783.
[47]SAENKO K,KULIS B,FRITZ M,et al.Adapting visual category models to new domains[C]//Computer Vision-ECCV 2010:11th European Conference on Computer Vision,Heraklion,Crete,Greece,September 5-11,2010,Proceedings,Part IV 11.Springer Berlin Heidelberg,2010:213-226.
[48]GONG B Q,SHI Y,SHA F,et al.Geodesic flow kernel for unsupervised domain adaptation[C]//2012 IEEE Conference on Computer Vision and Pattern Recognition.IEEE,2012:2066-2073.
[49]VENKATESWARA H,EUSEBIO J,CHAKRABORTY S,et al.Deep hashing network for unsupervised domain adaptation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.2017:5018-5027.
[50]HE K M,ZHANG X Y,REN S Q,et al.Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.2016:770-778.
[51]XU R J,CHEN Z L,ZUO W M,et al.Deep cocktail network:multi-source unsupervised domain adaptation with category shift[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.2018:3964-3973.
[52]WANG H,XU M H,NI B,et al.Learning to combine:knowledge aggregation for multi-source domain adaptation[C]//Computer Vision-ECCV 2020:16th European Conference,Glasgow,UK,August 23-28,2020,Proceedings,Part VIII 16.Springer International Publishing,2020:727-744.
[53]KIM Y,CHO D,PANDA P,et al.Progressive domain adaptation from a source pre-trained model[J].arXiv:2007.01524,2020.