Computer Science ›› 2025, Vol. 52 ›› Issue (11): 223-229. doi: 10.11896/jsjkx.250500054

• Artificial Intelligence •

Method for Generating Judgment Documents Based on Trial Logic

LIAO Jinchao, YANG Weizhe, QIN Yongbin, HUANG Ruizhang, CHEN Yanping, ZHOU Yulin   

  1. State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China
  2. School of Computer Science and Technology, Guizhou University, Guiyang 550025, China
  • Received: 2025-05-14  Revised: 2025-06-27  Online: 2025-11-15  Published: 2025-11-06
  • About author: LIAO Jinchao, born in 1999, postgraduate. His main research interest is natural language processing.
    QIN Yongbin, born in 1980, Ph.D, professor. His main research interests include big data governance and application, and multi-source data fusion.
  • Supported by:
    National Key Research and Development Program of China (2023YFC3304500), Key Funded Projects of the Science and Technology Foundation Program of Guizhou Province (Qian-Kehe Major Special Project No.[2024] 003) and Department of Education of Guizhou Province (2024YJSKYJJ041).

Abstract: Automatic generation of judgment documents is a key task in the construction of smart courts, aiming to improve both judicial efficiency and document quality. However, large language models have blind spots in judicial cognition: they struggle to understand the trial mechanism and document-drafting norms, so the documents they generate fall short in logical consistency and structural soundness. To address these issues, this paper proposes a method for generating judgment documents based on trial logic, which uses a large language model to simulate the trial reasoning process and generate the document in stages. First, legal elements are filled into a preset template to describe the “basic case facts”. Second, the facts and evidence are analyzed and aligned to obtain the “trial facts”. Finally, relevant legal provisions are retrieved from a knowledge base to generate the “court judgment”, and the complete document is assembled. Experiments on real case-file data show that, compared with the baseline model, the proposed method improves the F1 scores of ROUGE-1, ROUGE-2, and ROUGE-L by 6.03, 6.56, and 7.98 percentage points respectively, verifying its effectiveness.
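To make the staged pipeline concrete, the following is a minimal Python sketch assuming a generic callable LLM client and a searchable statute knowledge base. Every name in it (FACTS_TEMPLATE, describe_basic_facts, knowledge_base.search, and so on) is a hypothetical illustration of the three stages described above, not the authors' implementation.

    # Hypothetical sketch of the three-stage generation pipeline.
    # "llm" is any callable mapping a prompt string to a completion string;
    # "knowledge_base" is any object exposing search(query, top_k) over statutes.

    FACTS_TEMPLATE = (
        "On {date}, the defendant {defendant} {conduct} at {location}, "
        "causing {consequence}."
    )

    def describe_basic_facts(elements: dict) -> str:
        """Stage 1: fill extracted legal elements into a preset template."""
        return FACTS_TEMPLATE.format(**elements)

    def derive_trial_facts(llm, basic_facts: str, evidence: list[str]) -> str:
        """Stage 2: align the alleged facts with the evidence."""
        prompt = (
            "Alleged facts:\n" + basic_facts + "\n\n"
            "Evidence:\n" + "\n".join("- " + e for e in evidence) + "\n\n"
            "Cross-check each alleged fact against the evidence and restate "
            "only the facts the court can find proven (the trial facts)."
        )
        return llm(prompt)

    def generate_judgment(llm, trial_facts: str, knowledge_base) -> str:
        """Stage 3: retrieve relevant provisions, then generate the judgment."""
        provisions = knowledge_base.search(trial_facts, top_k=3)
        prompt = (
            "Trial facts:\n" + trial_facts + "\n\n"
            "Applicable provisions:\n" + "\n".join(provisions) + "\n\n"
            "Write the court judgment, citing the provisions above."
        )
        return llm(prompt)

    def assemble_document(basic_facts, trial_facts, judgment) -> str:
        """Assemble the staged outputs into one complete judgment document."""
        return "\n\n".join([basic_facts, trial_facts, judgment])

Keeping each stage's prompt restricted to one reasoning step mirrors the trial logic the method follows: the output of fact-evidence alignment becomes the sole factual input to statute retrieval and judgment writing, so each stage's output can be checked against document norms independently rather than debugging a single-pass generation.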

Key words: Large language model, Judgment document generation, Knowledge base, Trial logic, Smart court
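The ROUGE-1/2/L F1 scores reported in the abstract can be computed in form with the open-source rouge-score package. The sketch below is an assumption about the evaluation setup, not the paper's actual scoring script; in particular, character-level tokenization is a common choice for Chinese text (the package's default tokenizer keeps only ASCII alphanumerics, which would discard Chinese characters), and the tokenizer argument requires a recent package version.

    # pip install rouge-score
    from rouge_score import rouge_scorer

    class CharTokenizer:
        """Character-level tokenization, an assumed choice for Chinese text."""
        def tokenize(self, text):
            return [ch for ch in text if not ch.isspace()]

    scorer = rouge_scorer.RougeScorer(
        ["rouge1", "rouge2", "rougeL"], tokenizer=CharTokenizer()
    )

    reference = "法院经审理查明上述犯罪事实"  # gold judgment text (placeholder)
    candidate = "本院经审理查明上述犯罪事实"  # generated text (placeholder)

    scores = scorer.score(reference, candidate)  # score(target, prediction)
    for name, result in scores.items():
        print(name, "F1 =", round(result.fmeasure, 4))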

CLC Number: TP391.1