Computer Science ›› 2012, Vol. 39 ›› Issue (Z6): 261-264.


Research on Multi-agent Q Learning Algorithm Based on Meta Equilibrium

王万良, 艘约庆, 赵燕伟

  1. (College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023) (Key Laboratory of Special Purpose Equipment and Advanced Manufacturing Technology of the Ministry of Education, Zhejiang University of Technology, Hangzhou 310012)
  • Online: 2018-11-16  Published: 2018-11-16

Abstract: Research on multi-agent reinforcement learning algorithms has so far focused mostly on cooperative strategies; the NashQ algorithm is an important contribution to the study of non-cooperative strategies. In multi-agent systems, however, a Nash equilibrium does not guarantee that the solution obtained is Pareto-optimal, and computing it is expensive. To address these problems, this paper proposes MetaQ, a multi-agent Q-learning algorithm based on meta equilibrium. Unlike NashQ, MetaQ obtains the optimal joint policy by preprocessing its own behavior and predicting the behavior of the other agents. Finally, experiments on a climate-cooperation strategy game show that MetaQ has a sound theoretical interpretation and good experimental performance on non-cooperative problems.

Key words: Reinforcement learning, Meta equilibrium, NashQ, Multi-agent system
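As a rough illustration of the joint-action learning setting the abstract describes, the sketch below trains two Q-learners on a made-up 2x2 climate-cooperation dilemma. Each agent keeps a Q-table over joint actions and, as a simple stand-in for the paper's meta-equilibrium computation (which is not specified here), picks its action against a predicted best response of the other agent. The payoff numbers, the best-response rule, and all function names are illustrative assumptions, not the paper's method.

```python
import random

# Made-up 2x2 "climate cooperation" dilemma (Prisoner's-Dilemma-shaped):
# action 0 = cooperate, action 1 = defect; entries are (reward1, reward2).
PAYOFF = {
    (0, 0): (3, 3), (0, 1): (0, 5),
    (1, 0): (5, 0), (1, 1): (1, 1),
}
ACTIONS = (0, 1)

def greedy(q_own, q_other, role):
    """Pick the action that is best against the other agent's predicted
    best response (a crude stand-in for an equilibrium computation).
    Both Q-tables are indexed as q[a1][a2]."""
    if role == 1:
        return max(ACTIONS,
                   key=lambda a: q_own[a][max(ACTIONS, key=lambda b: q_other[a][b])])
    return max(ACTIONS,
               key=lambda b: q_own[max(ACTIONS, key=lambda a: q_other[a][b])][b])

def train(steps=3000, alpha=0.5, epsilon=0.3, seed=0):
    rng = random.Random(seed)
    q1 = [[0.0, 0.0], [0.0, 0.0]]  # agent 1's value of each joint action
    q2 = [[0.0, 0.0], [0.0, 0.0]]  # agent 2's value of each joint action
    for _ in range(steps):
        # epsilon-greedy exploration, chosen independently by each agent
        a1 = rng.choice(ACTIONS) if rng.random() < epsilon else greedy(q1, q2, 1)
        a2 = rng.choice(ACTIONS) if rng.random() < epsilon else greedy(q2, q1, 2)
        r1, r2 = PAYOFF[(a1, a2)]
        # stateless repeated game, so the TD target is just the reward
        q1[a1][a2] += alpha * (r1 - q1[a1][a2])
        q2[a1][a2] += alpha * (r2 - q2[a1][a2])
    return q1, q2

if __name__ == "__main__":
    q1, q2 = train()
    print("joint greedy play:", (greedy(q1, q2, 1), greedy(q2, q1, 2)))
```

In this toy run the two learners settle on mutual defection, the one-shot Nash outcome, which is Pareto-dominated by mutual cooperation; this is exactly the kind of non-Pareto-optimal solution the abstract criticizes Nash equilibrium for.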
