Computer Science ›› 2025, Vol. 52 ›› Issue (11): 22-29. DOI: 10.11896/jsjkx.241000049
• Research and Application of Large Language Model Technology •
PI Qiankun, LU Jicang, ZHU Taojie, PENG Yueling
• Related Articles •
[1] CAI Qihang, XU Bin, DONG Xiaodi. Knowledge Graph Completion Model Using Semantically Enhanced Prompts and Structural Information [J]. Computer Science, 2025, 52(9): 282-293.
[2] ZHONG Boyang, RUAN Tong, ZHANG Weiyan, LIU Jingping. Collaboration of Large and Small Language Models with Iterative Reflection Framework for Clinical Note Summarization [J]. Computer Science, 2025, 52(9): 294-302.
[3] LIU Leyuan, CHEN Gege, WU Wei, WANG Yong, ZHOU Fan. Survey of Data Classification and Grading Studies [J]. Computer Science, 2025, 52(9): 195-211.
[4] WANG Limei, HAN Linrui, DU Zuwei, ZHENG Ri, SHI Jianzhong, LIU Yiqun. Privacy Policy Compliance Detection Method for Mobile Application Based on Large Language Model [J]. Computer Science, 2025, 52(8): 1-16.
[5] WANG Dongsheng. Multi-defendant Legal Judgment Prediction with Multi-turn LLM and Criminal Knowledge Graph [J]. Computer Science, 2025, 52(8): 308-316.
[6] LI Maolin, LIN Jiajie, YANG Zhenguo. Confidence-guided Prompt Learning for Multimodal Aspect-level Sentiment Analysis [J]. Computer Science, 2025, 52(7): 241-247.
[7] CHEN Jinyin, XI Changkun, ZHENG Haibin, GAO Ming, ZHANG Tianxin. Survey of Security Research on Multimodal Large Language Models [J]. Computer Science, 2025, 52(7): 315-341.
[8] ZHAO Zheyu, WANG Zhongqing, WANG Hongling. Commodity Attribute Classification Method Based on Dual Pre-training [J]. Computer Science, 2025, 52(6A): 240500127-8.
[9] TU Ji, XIAO Wendong, TU Wenji, LI Lijian. Application of Large Language Models in Medical Education: Current Situation, Challenges and Future [J]. Computer Science, 2025, 52(6A): 240400121-6.
[10] LI Bo, MO Xian. Application of Large Language Models in Recommendation System [J]. Computer Science, 2025, 52(6A): 240400097-7.
[11] ZOU Rui, YANG Jian, ZHANG Kai. Low-resource Vietnamese Speech Synthesis Based on Phoneme Large Language Model and Diffusion Model [J]. Computer Science, 2025, 52(6A): 240700138-6.
[12] ZHOU Lei, SHI Huaifeng, YANG Kai, WANG Rui, LIU Chaofan. Intelligent Prediction of Network Traffic Based on Large Language Model [J]. Computer Science, 2025, 52(6A): 241100058-7.
[13] BAI Yuntian, HAO Wenning, JIN Dawei. Study on Open-domain Question Answering Methods Based on Retrieval-augmented Generation [J]. Computer Science, 2025, 52(6A): 240800141-7.
[14] ZHANG Le, CHE Chao, LIANG Yan. Hallucinations Proactive Relief in Diabetes Q&A LLM [J]. Computer Science, 2025, 52(6A): 240700182-10.
[15] YIN Baosheng, ZONG Chen. Research on Semantic Fusion of Chinese Polysemous Words Based on Large Language Model [J]. Computer Science, 2025, 52(6A): 240400139-7.