<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/css" href="rsstyle.css"?><rss version="2.0">
<channel>
<title>Computer Science-Channel: Research and Application of Large Language Model Technology</title>
<description>Channel: Research and Application of Large Language Model Technology</description>
<link>https://www.jsjkx.com</link>
<language>EN-US</language>
<docs>https://www.jsjkx.com/EN/current.shtml</docs>
<generator>https://www.jsjkx.com</generator>
<ttl>5</ttl>
<item>
<title><![CDATA[Comprehensive Survey of LLM-based Agent Operating Systems]]></title>
    <link><![CDATA[https://www.jsjkx.com/EN/abstract/abstract24002.shtml]]></link>
<description><![CDATA[Large language model-based agent operating systems (Agent OS), as core platforms for integrating large models, tool resources, and multi-agent collaboration, are gradually becoming a key research direction for advancing general artificial intelligence. This paper systematically reviews research progress in the field of Agent OS. It begins with foundational theories, reviewing the evolution of various large language models and the progress in agent technology and traditional operating systems. It then elaborates on how hierarchical architectures and modular designs achieve resource management and intelligent scheduling, focusing on typical architectures such as AIOS. Furthermore, it clarifies existing technical bottlenecks in scalability, context integration, and security within current systems. It also proposes future directions, including lightweight designs, self-supervised learning mechanisms, and dynamic scheduling algorithms to optimize multi-agent cooperation efficiency. The main contributions of this paper are integrating fragmented research into a clearer technical framework and highlighting the current limitations of Agent OS in covering emerging applications and industry-specific customizations. Future work should focus on enhancing the self-evolution capability of cross-domain Agent OS and accelerating their implementation across diverse fields.]]></description>
<category><![CDATA[Channel: Research and Application of Large Language Model Technology]]></category>
<author><![CDATA[GUO Luxiang, WANG Yueyu, LI Qianyue, LI Shasha, LIU Xiaodong, JI Bin, YU Jie]]></author>
<pubDate><![CDATA[2026-01-08 00:00:00.0]]></pubDate>
</item>
<item>
<title><![CDATA[Efficient Inference Techniques of Large Models in Real-world Applications: A Comprehensive Survey]]></title>
    <link><![CDATA[https://www.jsjkx.com/EN/abstract/abstract24003.shtml]]></link>
<description><![CDATA[In recent years, LLM technologies have developed rapidly, with their applications across various industries experiencing vigorous growth. From natural language processing to intelligent recommendation, and from information retrieval to automated writing, LLMs are becoming indispensable tools in many fields. However, with the diversification of application scenarios and the increase in demands, the issue of LLM inference efficiency is becoming increasingly prominent. In practical applications, rapid and accurate inference capabilities are crucial for responding to user queries, handling large-scale data, and making real-time decisions. To address this challenge, academia has undertaken extensive research and exploration to enhance the inference efficiency of LLMs. This paper comprehensively surveys the literature on efficient LLM inference in practical application scenarios. Firstly, it introduces the principles of LLMs and analyzes how to improve LLM inference efficiency in practical application scenarios. Secondly, it proposes a taxonomy tailored for real-world applications, consisting of three main levels: algorithm optimization, parameter optimization, and system optimization. This survey summarizes and categorizes related work on LLMs accordingly. Finally, it discusses potential future research directions.]]></description>
<category><![CDATA[Channel: Research and Application of Large Language Model Technology]]></category>
<author><![CDATA[LIU Lilong, LIU Guoming, QI Baoyuan, DENG Xueshan, XUE Dizhan, QIAN Shengsheng]]></author>
<pubDate><![CDATA[2026-01-08 00:00:00.0]]></pubDate>
</item>
<item>
<title><![CDATA[LLM-based Business Process Adaptation Method to Respond to Long-tailed Changes]]></title>
    <link><![CDATA[https://www.jsjkx.com/EN/abstract/abstract24004.shtml]]></link>
<description><![CDATA[Business process adaptation is one of the fundamental and enduring tasks in business process management, aimed at enhancing flexibility and achieving business objectives by adjusting process models and instances in response to an ever-changing environment. Long-tailed changes (LTCs), stemming from residual uncertainty during modeling, are inevitable and pose a significant challenge to business resilience. The most effective approach available now is a tripartite collaboration framework, consisting of frontend business operators who perceive LTCs and carry out adaptation using domain-specific languages (DSLs), a technical backend and managerial team providing a service repository and compliance requirements, and an enabling tool assisting the adaptation. However, the diversity, complexity, and urgency of LTCs in varying spatiotemporal scenarios may exceed the frontend's ability to grasp the situation, formulate appropriate solutions, and express them in a DSL. To address this limitation and further extend the framework, LLM-Adapt, a long-tailed change adaptation method based on large language models (LLMs), is proposed. By leveraging the generalization ability, content generation power, and embedded knowledge of events and countermeasures in LLMs, LLM-Adapt provides a more efficient and applicable adaptation mechanism. Firstly, a prompt engineering strategy tailored to the characteristics of LTCs is developed, enabling the frontend to interact with LLMs in natural language and obtain adaptation solutions. Secondly, in alignment with the business baseline constraints set by the backend process owners, functional validation of the adaptation solutions is conducted. Furthermore, a new algorithm, SSDT-Lane, based on process structural similarity is proposed to select adaptation candidates that fit the current organizational and resource configurations. Case studies and experiments conducted on both synthetic and real-world datasets demonstrate that LLM-Adapt outperforms existing methods in terms of accuracy, efficiency, and applicability.]]></description>
<category><![CDATA[Channel: Research and Application of Large Language Model Technology]]></category>
<author><![CDATA[SHAO Xinyi, ZHU Jingwei, ZHANG Liang]]></author>
<pubDate><![CDATA[2026-01-08 00:00:00.0]]></pubDate>
</item>
<item>
<title><![CDATA[Research on Architecture and Technology Pathways for Empowering Tactical Adversarial Simulation Experiments with LLMs]]></title>
    <link><![CDATA[https://www.jsjkx.com/EN/abstract/abstract24005.shtml]]></link>
<description><![CDATA[Tactical confrontation simulation experiments are the core means of simulation-based operational analysis, training, and equipment activities, and their levels of intelligence and automation directly impact the effectiveness of experiments and the generation of combat capabilities. To address the low efficiency of experimental design, model construction, scenario control, and human-computer interaction in traditional simulation experiments, a system architecture for empowering tactical confrontation simulation experiments with large language models is proposed, referencing the MCP protocol. This architecture consists of five layers: the foundation layer, tool resource layer, AI agent layer, empowerment path layer, and application layer. The five layers are guided top-down and integrated bottom-up, layer by layer, enabling large models to couple and aggregate with data resources and traditional small models, and empowering various simulation-based military activities. On this basis, the specific paths of large model empowerment in tactical confrontation simulation are discussed in detail: empowerment of simulation experiment design, empowerment of decision-making model construction, and empowerment of scenario control. Finally, the challenges and countermeasures are analyzed.]]></description>
<category><![CDATA[Channel: Research and Application of Large Language Model Technology]]></category>
<author><![CDATA[LIU Dayong, DONG Zhiming, GUO Qisheng, GAO Ang, QIU Xuehuan]]></author>
<pubDate><![CDATA[2026-01-08 00:00:00.0]]></pubDate>
</item>
<item>
<title><![CDATA[Pre-training World Models from Videos with Generated Actions by Multi-modal Large Models]]></title>
    <link><![CDATA[https://www.jsjkx.com/EN/abstract/abstract24006.shtml]]></link>
<description><![CDATA[Pre-training of world models is key to improving the sample efficiency of reinforcement learning. However, existing methods struggle to capture the causal mechanisms of state transitions due to the lack of explicit action labels in video data. This paper presents MAPO (Multimodal-large-model-generated Action-based pre-training from videOs for world models), a novel pre-training framework. It leverages the semantic understanding of visual-language models to meet the needs of kinematic modeling, overcoming the limitations of traditional pre-training methods in the absence of action semantics. During pre-training, MAPO uses a multimodal large model (QWEN2_5-VL-7B) to analyze video frame sequences and generate fine-grained semantic action descriptions, establishing action-state associations with causal explanations. It also designs a context quantization encoding mechanism to separate static scene features from dynamic control factors, improving cross-modal representation. During fine-tuning, MAPO uses a dual-network collaborative architecture to align the pre-trained kinematic features with real-environment actions. Experiments show that MAPO steadily improves average returns over baselines on 8 tasks from the DeepMind Control Suite and Meta-World, especially in long-horizon tasks. This study offers a new cross-modal world model training approach, highlighting the importance of semantic action generation in causal reasoning.]]></description>
<category><![CDATA[Channel: Research and Application of Large Language Model Technology]]></category>
<author><![CDATA[WAN Shenghua, XU Xingye, GAN Le, ZHAN Dechuan]]></author>
<pubDate><![CDATA[2026-01-08 00:00:00.0]]></pubDate>
</item>
</channel>
</rss>