Computer Science ›› 2026, Vol. 53 ›› Issue (3): 197-206. doi: 10.11896/jsjkx.250100068

• Database & Big Data & Data Science •

Social Learning Based on Multi-expert Collaboration and Information Interaction

LI Linhao1,2,3, XU Yanan1, DONG Yongfeng1,2,3, WANG Zhen1,2,3   

1 School of Artificial Intelligence, Hebei University of Technology, Tianjin 300401, China
    2 Hebei Province Key Laboratory of Big Data Computing(Hebei University of Technology), Tianjin 300401, China
    3 Hebei Engineering Research Center of Data-driven Industrial Intelligent(Hebei University of Technology), Tianjin 300401, China
  • Received:2025-01-10 Revised:2025-05-06 Published:2026-03-12
  • About author:LI Linhao,born in 1989,Ph.D,professor,is a member of CCF(No.42701M).His main research interests include machine learning,knowledge inference and computer vision.
    DONG Yongfeng,born in 1977,Ph.D,professor,is a member of CCF(No.28279D).His main research interests include machine learning,knowledge engineering,computer vision and intelligent information processing.
  • Supported by:
    National Natural Science Foundation of China(62306103).

Abstract: In distributed environments,data heterogeneity manifests as discrepancies in data features across clients.Collaboration among expert models suffers from knowledge isolation and improper task allocation,which leads to uneven training outcomes among experts,prevents each model's strengths from being fully exploited,and thus limits overall performance.To address these challenges,this paper proposes MECII(multi-expert collaboration and information interaction).The framework integrates the mixture of experts(MoE) architecture with social learning(SL) principles,optimizing knowledge sharing and complementarity among experts through four key components:multi-expert collaboration,a gating network,adaptive information interaction,and gating selection constraints.This approach effectively resolves the data heterogeneity and expert collaboration issues in distributed learning.By ensuring precise expert selection and task allocation,MECII facilitates information flow between experts,improving each expert's accuracy on the data it specializes in and,in turn,the overall model performance.Experimental results demonstrate that MECII significantly outperforms traditional federated learning(FL) baselines on the CIFAR-10 and CIFAR-100 datasets.Under data heterogeneity in particular,MECII improves classification accuracy by 6.69 and 5.13 percentage points,respectively,over the state-of-the-art FedL2P method.Moreover,the accuracy of individual experts is also effectively improved,validating the framework's significant advantages in promoting expert collaboration while strengthening individual experts.
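To make the abstract's routing mechanism concrete, the sketch below shows a minimal sparsely-gated mixture-of-experts layer with a gating network and a load-balancing auxiliary term, one common form of a gating selection constraint. It is an illustrative assumption in the spirit of the sparsely-gated MoE of [28], not the authors' MECII implementation; the class name SimpleMoE, the linear experts, and the particular balance term are all hypothetical choices.

```python
# Minimal sparsely-gated MoE sketch (illustrative only, not the MECII code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleMoE(nn.Module):
    """Routes each input to its top-k experts via a learned gating network."""

    def __init__(self, in_dim: int, out_dim: int, n_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList([nn.Linear(in_dim, out_dim) for _ in range(n_experts)])
        self.gate = nn.Linear(in_dim, n_experts)  # gating network
        self.top_k = top_k

    def forward(self, x: torch.Tensor):
        # Gating network: a probability distribution over experts per input.
        gate_probs = F.softmax(self.gate(x), dim=-1)           # (batch, n_experts)

        # Sparse selection: keep only the top-k experts and renormalize.
        topk_probs, topk_idx = gate_probs.topk(self.top_k, dim=-1)
        topk_probs = topk_probs / topk_probs.sum(dim=-1, keepdim=True)

        # Run all experts, then gather and mix the selected outputs.
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)  # (batch, n_experts, out_dim)
        idx = topk_idx.unsqueeze(-1).expand(-1, -1, expert_out.size(-1))
        y = (topk_probs.unsqueeze(-1) * expert_out.gather(1, idx)).sum(dim=1)

        # Gating selection constraint (one possible form): penalize
        # concentrated routing so every expert keeps receiving data.
        importance = gate_probs.mean(dim=0)                    # mean load per expert
        balance_loss = len(self.experts) * (importance ** 2).sum()
        return y, balance_loss

# Usage sketch: add the balance term to the task loss with a small weight.
moe = SimpleMoE(in_dim=32, out_dim=10)
x, target = torch.randn(8, 32), torch.randint(0, 10, (8,))
logits, balance_loss = moe(x)
loss = F.cross_entropy(logits, target) + 0.01 * balance_loss
loss.backward()
```

The balance weight keeps the gate from collapsing onto a single expert, so each expert keeps seeing the data it handles best; MECII's adaptive information interaction component would additionally exchange knowledge among the experts, which this sketch omits.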

Key words: Multi-expert collaboration, Social learning, Data heterogeneity, Federated learning, Information interaction, Mixture of experts

CLC Number: TP391
[1]AKERS R L,JENNINGS W G.Social learning theory[M]//The Handbook of Criminological Theory.2015:230-240.
[2]XIE G Q,ZHONG B W,LI Y.Distributed Adaptive Multi-agent Rendezvous Control Based on Average Consensus Protocol[J].Computer Science,2024,51(5):242-249.
[3]JACOBS R A,JORDAN M I,NOWLAN S J,et al.Adaptive mixtures of local experts[J].Neural Computation,1991,3(1):79-87.
[4]KONEČNÝ J,MCMAHAN H B,YU F X,et al.Federated Learning:Strategies for Improving Communication Efficiency[J].arXiv:1610.05492,2016.
[5]ZHU G H,QI J H,ZHU Z N,et al.Research on progressive deep ensemble architecture search algorithm[J].Chinese Journal of Computers,2023,46(10):2041-2065.
[6]LIU Q,SHI M L,HUANG Z G,et al.Multi-agent collaborative reinforcement learning method based on dual-perspective modeling[J].Chinese Journal of Computers,2024,47(7):1582-1594.
[7]BANDURA A,WALTERS R H.Social learning theory[M].Englewood Cliffs,NJ:Prentice hall,1977.
[8]BADGHISH S,SHAIK A S,SAHORE N,et al.Can transac-tional use of AI-controlled voice assistants for service delivery pickup pace in the near future? A social learning theory(SLT) perspective[J].Technological Forecasting and Social Change,2024,198:122972.
[9]PARKER L E.Distributed Intelligence:Overview of the Field and its Application in Multi-Robot Systems[C]//AAAI Fall Symposium:Regarding the Intelligence in Distributed Intelligent Systems.2007:1-6.
[10]WANG X W,FENG X,YU H Q.Multi-agent Cooperative Algorithm for Obstacle Clearance Based on Deep Deterministic Policy Gradient and Attention Critic[J].Computer Science,2024,51(7):319-326.
[11]WANG X,ZHAO C,HUANG T W,et al.Cooperative learning of multi-agent systems via reinforcement learning[J].IEEE Transactions on Signal and Information Processing over Networks,2023,9:13-23.
[12]AHMED I,SYED M A,MAARUF M,et al.Distributed computing in multi-agent systems:a survey of decentralized machine learning approaches[J].Computing,2025,107(1):2.
[13]HONG S,ZHENG X,CHEN J,et al.MetaGPT:Meta programming for a multi-agent collaborative framework[J].arXiv:2308.00352,2023.
[14]ZHANG M Y,JIN Z,LIU K.Virtual regret advantage self-play method for cooperative-competitive hybrid multi-agent systems[J].Journal of Software,2024,35(2):739-757.
[15]MCMAHAN B,MOORE E,RAMAGE D,et al.Communication-efficient learning of deep networks from decentralized data[C]//Artificial Intelligence and Statistics.PMLR,2017:1273-1282.
[16]KAIROUZ P,MCMAHAN H B,AVENT B,et al.Advances and open problems in federated learning[J].Foundations and Trends® in Machine Learning,2021,14(1/2):1-210.
[17]SATTLER F,WIEDEMANN S,MÜLLER K R,et al.Robust and communication-efficient federated learning from non-iid data[J].IEEE Transactions on Neural Networks and Learning Systems,2019,31(9):3400-3413.
[18]DINH C T,TRAN N,NGUYEN J.Personalized federated learning with moreau envelopes[J].Advances in Neural Information Processing Systems,2020,33:21394-21405.
[19]WANG J,LIU Q,LIANG H,et al.Tackling the objective inconsistency problem in heterogeneous federated optimization[J].Advances in Neural Information Processing Systems,2020,33:7611-7623.
[20]LI T,SAHU A K,ZAHEER M,et al.Federated optimization in heterogeneous networks[J].Proceedings of Machine Learning and Systems,2020,2:429-450.
[21]LI Q,HE B,SONG D.Model-contrastive federated learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2021:10713-10722.
[22]QU Z,LI X,DUAN R,et al.Generalized federated learning via sharpness aware minimization[C]//International Conference on Machine Learning.PMLR,2022:18250-18280.
[23]SHI Y,LIANG J,ZHANG W,et al.Towards understanding and mitigating dimensional collapse in heterogeneous federated learning[J].arXiv:2210.00226,2022.
[24]LEE R,KIM M,LI D,et al.FedL2P:Federated learning to personalize[C]//Advances in Neural Information Processing Systems.2024.
[25]HUANG W,YE M,SHI Z,et al.Federated learning for generalization,robustness,fairness:A survey and benchmark[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2024,46(12):9387-9406.
[26]ZHANG R L,DU J H,YIN H.Client selection algorithm in cross-device federated learning[J].Journal of Software,2024,35(12):5725-5740.
[27]WANG Y,FU H,KANAGAVELU R,et al.An aggregation-free federated learning for tackling data heterogeneity[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2024:26233-26242.
[28]SHAZEER N,MIRHOSEINI A,MAZIARZ K,et al.Outrageously large neural networks:The sparsely-gated mixture-of-experts layer[J].arXiv:1701.06538,2017.
[29]YANG Z,DAI Z,SALAKHUTDINOV R,et al.Breaking the softmax bottleneck:A high-rank RNN language model[J].arXiv:1711.03953,2017.
[30]FEDUS W,ZOPH B,SHAZEER N.Switch transformers:Scaling to trillion parameter models with simple and efficient sparsity[J].Journal of Machine Learning Research,2022,23(120):1-39.
[31]LI Y,JIANG S,HU B,et al.Uni-MoE:Scaling unified multimodal LLMs with mixture of experts[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2025,47(5):3424-3439.
[32]JORDAN M I,JACOBS R A.Hierarchical mixtures of experts and the EM algorithm[J].Neural Computation,1994,6(2):181-214.
[33]SHAZEER N,STERN M.Adafactor:Adaptive learning rates with sublinear memory cost[C]//International Conference on Machine Learning.PMLR,2018:4596-4604.
[34]YU J,ZHUGE Y,ZHANG L,et al.Boosting continual learning of vision-language models via mixture-of-experts adapters[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2024:23219-23230.
[35]ROSENBAUM C,KLINGER T,RIEMER M.Routing networks:Adaptive selection of non-linear functions for multi-task learning[J].arXiv:1711.01239,2017.
[36]DONG L,YANG N,WANG W,et al.Unified language model pre-training for natural language understanding and generation[C]//Advances in Neural Information Processing Systems.2019.
[37]DING X,ZHANG X,HAN J,et al.Scaling up your kernels to 31x31:Revisiting large kernel design in CNNs[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2022:11963-11975.
[1] LI Jiahui, LI Yinglong, CHEN Tieming. Privacy-preserving Computation in Edge Service Scenario of Internet of Vehicles: A Review of Technical Basis and Research Progress [J]. Computer Science, 2026, 53(1): 298-322.
[2] WU Jiagao, YI Jing, ZHOU Zehui, LIU Linfeng. Personalized Federated Learning Framework for Long-tailed Heterogeneous Data [J]. Computer Science, 2025, 52(9): 232-240.
[3] WANG Chundong, ZHANG Qinghua, FU Haoran. Federated Learning Privacy Protection Method Combining Dataset Distillation [J]. Computer Science, 2025, 52(6A): 240500132-7.
[4] JIANG Yufei, TIAN Yulong, ZHAO Yanchao. Persistent Backdoor Attack for Federated Learning Based on Trigger Differential Optimization [J]. Computer Science, 2025, 52(4): 343-351.
[5] LUO Zhengquan, WANG Yunlong, WANG Zilei, SUN Zhenan, ZHANG Kunbo. Study on Active Privacy Protection Method in Metaverse Gaze Communication Based on Split Federated Learning [J]. Computer Science, 2025, 52(3): 95-103.
[6] HU Kangqi, MA Wubin, DAI Chaofan, WU Yahui, ZHOU Haohao. Federated Learning Evolutionary Multi-objective Optimization Algorithm Based on Improved NSGA-III [J]. Computer Science, 2025, 52(3): 152-160.
[7] WANG Ruicong, BIAN Naizheng, WU Yingjun. FedRCD: A Clustering Federated Learning Algorithm Based on Distribution Extraction and Community Detection [J]. Computer Science, 2025, 52(3): 188-196.
[8] WANG Dongzhi, LIU Yan, GUO Bin, YU Zhiwen. Edge-side Federated Continuous Learning Method Based on Brain-like Spiking Neural Networks [J]. Computer Science, 2025, 52(3): 326-337.
[9] XIE Jiachen, LIU Bo, LIN Weiwei , ZHENG Jianwen. Survey of Federated Incremental Learning [J]. Computer Science, 2025, 52(3): 377-384.
[10] ZHENG Jianwen, LIU Bo, LIN Weiwei, XIE Jiachen. Survey of Communication Efficiency for Federated Learning [J]. Computer Science, 2025, 52(2): 1-7.
[11] WANG Xin, CHEN Kun, SUN Lingyun. Research on Foundation Model Methods for Addressing Non-IID Issues in Federated Learning [J]. Computer Science, 2025, 52(12): 302-313.
[12] PENG Jiao, CHANG Yongjuan, YAN Tao, YOU Zhangzheng, SONG Meina, ZHU Yifan, ZHANG Pengfei, HE Yue, ZHANG Bo, OU Zhonghong. Decentralized Federated Learning Algorithm Sensitive to Delay [J]. Computer Science, 2025, 52(12): 314-320.
[13] LUO Qifeng, XIAO Xing, WEN Chaofei, CHI Mingmin, PENG Bo. SAM-MR: SAM-based Mixed Region Matching Expert Adaptation Algorithm for Fabric Detection [J]. Computer Science, 2025, 52(11A): 241200124-6.
[14] ZHAO Tong, CHEN Xuebin, WANG Liu, JING Zhongrui, ZHONG Qi. Backdoor Attack Method for Federated Learning Based on Knowledge Distillation [J]. Computer Science, 2025, 52(11): 434-443.
[15] DUN Jingbo, LI Zhuo. Survey on Transmission Optimization Technologies for Federated Large Language Model Training [J]. Computer Science, 2025, 52(1): 42-55.