Computer Science ›› 2025, Vol. 52 ›› Issue (1): 65-71. doi: 10.11896/jsjkx.240800022

• Technology Research and Application of Large Language Model •

Large Language Models Driven Framework for Multi-agent Military Requirement Generation

LI Jiahui, ZHANG Mengmeng, CHEN Honghui   

  1. National Key Laboratory of Information Systems Engineering, National University of Defense Technology, Changsha 410000, China
  • Received: 2024-08-05  Revised: 2024-10-08  Online: 2025-01-15  Published: 2025-01-09
  • About author: LI Jiahui, born in 2000, postgraduate. His main research interests include requirement analysis and LLMs.
    ZHANG Mengmeng, born in 1990, Ph.D., associate professor. His main research interests include requirement analysis and system design and evaluation.

Abstract: Military requirement generation for joint operations involves many participants and a heavy workload. The process relies on individual experience and multiple sources of documents, which leads to problems such as low efficiency in requirement generation and difficulty in supporting the design of joint operation systems. With the development of large language models (LLMs), LLM-driven agents have shown excellent performance in various fields, and multi-agent systems can efficiently handle complex tasks by leveraging group intelligence through distributed decision-making. To address the low efficiency of military requirement generation, an LLM-driven multi-agent framework for military requirement generation is proposed. The framework includes a multi-modal information acquisition agent, military expert agents, a moderator and other components. The multi-modal information acquisition agent rapidly processes multi-modal information, extracts military requirements and provides the user with a question-and-answer function. Military expert agents simulate human experts discussing the generation of requirements through natural language dialogues. Driven by LLMs, these agents can perceive the environment and autonomously use tools such as arXiv, search engines and other resources to support the dialogues. The moderator receives instructions from the human user, refines the content of the instructions using LLMs, and generates dialogue prompts and problem background descriptions. Using the Russia-Ukraine conflict as an experimental case, military requirements are generated from relevant multi-modal information. The experimental results show that, when the volume of multi-modal information is within the maximum processing capacity of the LLMs, the framework significantly reduces the time needed for military requirement generation, with time savings of 80% to 85% for video resources and 90% to 95% for audio resources.
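To make the division of labour described in the abstract concrete, the sketch below shows a minimal, framework-agnostic moderator/expert dialogue loop. It is illustrative only and not the authors' implementation: the class names, the prompt wording, and the `llm` callable (a placeholder for any chat-completion API) are assumptions introduced here.

```python
# Illustrative sketch (not the paper's code): a moderator refines a human
# instruction into a problem background, chairs a round-robin discussion
# among LLM-driven expert agents, and summarizes the resulting requirements.
from dataclasses import dataclass, field
from typing import Callable, List

LLMFn = Callable[[str], str]  # prompt in, completion out (any chat API can be wrapped)


@dataclass
class ExpertAgent:
    """A simulated military expert that replies from its own role description."""
    role: str
    llm: LLMFn
    memory: List[str] = field(default_factory=list)

    def respond(self, background: str, transcript: List[str]) -> str:
        prompt = (
            f"You are a {self.role} taking part in a requirement-generation discussion.\n"
            f"Problem background:\n{background}\n\n"
            "Discussion so far:\n" + "\n".join(transcript[-10:]) +
            "\n\nState the operational requirements you would add or refine."
        )
        reply = self.llm(prompt)
        self.memory.append(reply)  # keep a per-agent record of contributions
        return f"[{self.role}] {reply}"


@dataclass
class Moderator:
    """Refines the human instruction into a background description and prompts,
    chairs the expert dialogue, and summarizes the agreed requirements."""
    llm: LLMFn

    def refine_instruction(self, instruction: str) -> str:
        return self.llm(
            "Rewrite the following tasking as a concise problem background for a "
            f"joint-operation requirement discussion:\n{instruction}"
        )

    def run_discussion(self, instruction: str, experts: List[ExpertAgent],
                       rounds: int = 3) -> str:
        background = self.refine_instruction(instruction)
        transcript: List[str] = []
        for _ in range(rounds):  # round-robin natural-language dialogue
            for expert in experts:
                transcript.append(expert.respond(background, transcript))
        return self.llm(  # final requirement list
            "Summarize the discussion below as a numbered list of military requirements:\n"
            + "\n".join(transcript)
        )
```

An actual system along the lines described in the abstract would additionally attach tool-use hooks (search, arXiv retrieval) to each expert agent and feed the background description from the multi-modal information acquisition agent; both are omitted here to keep the sketch short.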

Key words: Requirement generation, Multi-agent, Generative AI, LLMs, Multi-modal

CLC Number: TP181