Computer Science ›› 2023, Vol. 50 ›› Issue (10): 203-213.doi: 10.11896/jsjkx.220900242

• Artificial Intelligence •

Multi-surrogate Multi-task Optimization Approach Based on Two-layer Knowledge Transfer

MA Hui, FENG Xiang, YU Huiqun   

  1. Department of Computer Science and Engineering,East China University of Science and Technology,Shanghai 200237,China
  2. Shanghai Engineering Research Center of Smart Energy,Shanghai 200237,China
  • Received:2022-09-26 Revised:2023-03-09 Online:2023-10-10 Published:2023-10-10
  • About author:MA Hui,born in 1997,postgraduate.Her main research interests include artificial intelligence and evolutionary multi-task optimization.FENG Xiang,born in 1977,Ph.D,professor,is a member of China Computer Federation.Her main research interests include artificial intelligence,swarm intelligence and evolutionary computing,big data intelligence.
  • Supported by:
    National Natural Science Foundation of China(62276097),Key Program of National Natural Science Foundation of China(62136003),National Key Research and Development Program of China(2020YFB1711700),Special Fund for Information Development of Shanghai Economic and Information Commission(XX-XXFZ-02-20-2463) and Scientific Research Program of Shanghai Science and Technology Commission(21002411000).

Abstract: Evolutionary multi-task optimization is a new research direction in the field of computational intelligence.It focuses on how to handle multiple optimization tasks effectively and simultaneously through evolutionary algorithms,so as to outperform solving each task individually.On this basis,a multi-surrogate multi-task optimization approach based on two-layer knowledge transfer(AMS-MTO) is proposed,which achieves cross-domain optimization by transferring knowledge both within and between surrogates.Specifically,knowledge transfer within a surrogate realizes cross-dimensional transfer of decision-variable information through differential evolution,preventing the algorithm from falling into local optima.Learning between surrogates adopts two strategies:implicit knowledge transfer and explicit knowledge transfer.The former uses selective crossover of the populations to generate offspring and promote the exchange of genetic information.The latter mainly transfers elite individuals,which compensates for the strong randomness of implicit transfer.To evaluate the effectiveness of the AMS-MTO algorithm,an empirical study is carried out on 8 benchmark problems of up to 100 dimensions.A convergence proof is also given,and the algorithm is compared with existing algorithms.Experimental results show that,when solving expensive single-objective optimization problems,AMS-MTO achieves higher efficiency,better performance and faster convergence speed.
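The three transfer mechanisms described in the abstract can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's actual implementation: the objective `sphere`, the parameters `F`, `cx_prob` and `k`, and the function names are all illustrative assumptions; the paper's benchmarks, surrogate models and operators differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # Stand-in cheap objective; the paper targets expensive problems
    # approximated by surrogates.
    return float(np.sum(x ** 2))

def de_mutation(pop, F=0.5):
    """Within-surrogate transfer: DE/rand/1 mutation mixes decision-variable
    information across individuals (a stand-in for the paper's
    cross-dimensional transfer via differential evolution)."""
    n = len(pop)
    out = np.empty_like(pop)
    for i in range(n):
        a, b, c = pop[rng.choice(n, 3, replace=False)]
        out[i] = a + F * (b - c)
    return out

def implicit_transfer(pop_a, pop_b, cx_prob=0.3):
    """Between-surrogate implicit transfer: selective crossover between the
    two task populations exchanges genetic information."""
    child = pop_a.copy()
    mask = rng.random(pop_a.shape) < cx_prob
    child[mask] = pop_b[mask]
    return child

def explicit_transfer(pop, elite, fitness_fn, k=2):
    """Between-surrogate explicit transfer: inject the partner task's k elite
    individuals in place of the current worst members, compensating for the
    randomness of implicit transfer."""
    fit = np.array([fitness_fn(x) for x in pop])
    worst = np.argsort(fit)[-k:]
    pop[worst] = elite[:k]
    return pop
```

In a full algorithm these operators would alternate inside the surrogate-assisted evolutionary loop, with the surrogate (e.g. an RBF model) screening candidates before true evaluations.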

Key words: Evolutionary multi-task optimization, Multi-surrogate, Knowledge transfer, Elite individuals, Implicit transfer

CLC Number: 

  • TP391
[1]DING J L,YANG C,JIN Y C,et al.Generalized multitasking for evolutionary optimization of expensive problems[J].IEEE Transactions on Evolutionary Computation,2019,23(1):44-58.
[2]THEIL H.A rank-invariant method of linear and polynomial regression analysis[J].Advanced Studies in Theoretical and Applied Econometrics,1992,23:345-381.
[3]KOURAKOS G,MANTOGLOU A.Pumping optimization of coastal aquifers based on evolutionary algorithms and surrogate modular neural network models[J].Advances in Water Resources,2009,32(4):507-521.
[4]BUCHE D,SCHRAUDOLPH N,KOUMOUTSAKOS P.Accelerating evolutionary algorithms with Gaussian process fitness function models[J].IEEE Transactions on Systems,Man and Cybernetics Part C:Applications and Reviews,2005,35(2):183-194.
[5]GONZALEZ J,ROJAS I,ORTEGA J,et al.Multiobjective evolutionary optimization of the size,shape,and position parameters of radial basis function networks for function approximation[J].IEEE Transactions on Neural Networks,2003,14(6):1478-1495.
[6]JIN Y C.Surrogate-assisted evolutionary computation:Recent advances and future challenges[J].Swarm and Evolutionary Computation,2011,1(2):61-70.
[7]HAFTKA R T,VILLANUEVA D,CHAUDHURI A.Parallel surrogate-assisted global optimization with expensive functions--a survey[J].Structural and Multidisciplinary Optimization,2016,54:3-13.
[8]VINCENZI L,GAMBARELLI P.A proper infill sampling strategy for improving the speed performance of a surrogate-assisted evolutionary algorithm[J].Computers and Structures,2017,178:58-70.
[9]TIAN J,TAN Y,ZENG J C,et al.Multiobjective infill criterion driven Gaussian process-assisted particle swarm optimization of high-dimensional expensive problems[J].IEEE Transactions on Evolutionary Computation,2019,23(3):459-472.
[10]GOH C K,LIM D,MA L,et al.A surrogate-assisted memetic co-evolutionary algorithm for expensive constrained optimization problems[C]//2011 IEEE Congress of Evolutionary Computation(CEC).2011:744-749.
[11]LE M N,ONG Y S,MENZEL S,et al.Evolution by adapting surrogates[J].Evolutionary Computation,2013,21(2):313-340.
[12]YU H B,TAN Y,SUN C L,et al.A generation-based optimal restart strategy for surrogate-assisted social learning particle swarm optimization[J].Knowledge-Based Systems,2019,163:14-25.
[13]LI F,CAI X W,GAO L,et al.A surrogate-assisted multiswarm optimization algorithm for high-dimensional computationally expensive problems[J].IEEE Transactions on Cybernetics,2021,51(3):1390-1402.
[14]ZHOU Z Z,ONG Y S,NAIR P.Hierarchical surrogate-assisted evolutionary optimization framework[C]//Proceedings of the 2004 Congress on Evolutionary Computation.2004:1586-1593.
[15]LIM D,JIN Y C,ONG Y S,et al.Generalizing surrogate-assisted evolutionary computation[J].IEEE Transactions on Evolutionary Computation,2010,14(3):329-355.
[16]SUN C L,JIN Y C,ZENG J C,et al.A two-layer surrogate-assisted particle swarm optimization algorithm[J].Soft Computing,2015,19:1461-1475.
[17]YU H B,TAN Y,ZENG J C,et al.Surrogate-assisted hierarchical particle swarm optimization[J].Information Sciences,2018,454-455:59-72.
[18]BALI K K,GUPTA A,FENG L,et al.Linearized domain adaptation in evolutionary multitasking[C]//2017 IEEE Congress on Evolutionary Computation(CEC).2017:1295-1302.
[19]WEN Y W,TING C K.Parting ways and reallocating resources in evolutionary multitasking[C]//2017 IEEE Congress on Evolutionary Computation(CEC).2017:2404-2411.
[20]LIAW R T,TING C K.Evolutionary many-tasking based on biocoenosis through symbiosis:A framework and benchmark problems[C]//2017 IEEE Congress on Evolutionary Computation(CEC).2017:2266-2273.
[21]LI G H,ZHANG Q F,GAO W F.Multipopulation evolution framework for multifactorial optimization[C]//Genetic and Evolutionary Computation Conference.Association for Computing Machinery,2018:215-216.
[22]MIN A T W,ONG Y S,GUPTA A,et al.Multiproblem surrogates:Transfer evolutionary multiobjective optimization of computationally expensive problems[J].IEEE Transactions on Evolutionary Computation,2019,23(1):15-28.
[23]GUPTA A,ONG Y S.Genetic transfer or population diversification? Deciphering the secret ingredients of evolutionary multitask optimization[C]//2016 IEEE Symposium Series on Computational Intelligence(SSCI).2016:1-7.
[24]KATTAN A,GALVAN E.Evolving radial basis function networks via GP for estimating fitness values using surrogate models[C]//2012 IEEE Congress on Evolutionary Computation.2012:1-7.
[25]WILD S M,SHOEMAKER C A.Global convergence of radial basis function trust region derivative-free algorithms[J].SIAM Journal on Optimization,2011,21(3):761-781.
[26]WILD S M,SHOEMAKER C A.Global convergence of radial basis function trust region algorithms for derivative-free optimization[J].SIAM Rev,2013,55(2):349-371.
[27]DING J L,YANG C,JIN Y C,et al.Generalized multitasking for evolutionary optimization of expensive problems[J].IEEE Transactions on Evolutionary Computation,2019,23(1):44-58.
[28]LIAO P,SUN C L,ZHANG G C,et al.Multi-surrogate multi-tasking optimization of expensive problems[J].Knowledge-Based Systems,2021,551:23-38.