Computer Science ›› 2025, Vol. 52 ›› Issue (11A): 241200156-12. doi: 10.11896/jsjkx.241200156
• Artificial Intelligence •
YUAN Tianhao, WANG Yongjun, WANG Baoshan, WANG Zhongyuan
[1]CHE L,ZHANG Z Q,ZHOU J J,et al.The research status and development trends of generative artificial intelligence[J].Science & Technology Review,2024,42(12):35-43.
[2]ZAREMBA W,SUTSKEVER I,VINYALS O.Recurrent neural network regularization[J].arXiv:1409.2329,2014.
[3]HOCHREITER S,SCHMIDHUBER J.Long short-term memory[J].Neural Computation,1997,9(8):1735-1780.
[4]SCHUSTER M,PALIWAL K K.Bidirectional recurrent neural networks[J].IEEE Transactions on Signal Processing,1997,45(11):2673-2681.
[5]BECK M,PÖPPEL K,SPANRING M,et al.xLSTM:Extended Long Short-Term Memory[J].arXiv:2405.04517,2024.
[6]GOODFELLOW I,POUGET-ABADIE J,MIRZA M,et al.Generative adversarial nets[C]//Advances in Neural Information Processing Systems.2014.
[7]DENTON E L,CHINTALA S,FERGUS R.Deep generative image models using a Laplacian pyramid of adversarial networks[C]//Advances in Neural Information Processing Systems.2015.
[8]YU L,ZHANG W,WANG J,et al.SeqGAN:Sequence generative adversarial nets with policy gradient[C]//Proceedings of the AAAI Conference on Artificial Intelligence.2017.
[9]ZHANG Y,GAN Z,CARIN L.Generating text via adversarial training[C]//NIPS Workshop on Adversarial Training.2016:21-32.
[10]VASWANI A,SHAZEER N,PARMAR N,et al.Attention is all you need[C]//Advances in Neural Information Processing Systems.2017.
[11]BAHDANAU D,CHO K,BENGIO Y.Neural machine translation by jointly learning to align and translate[J].arXiv:1409.0473,2014.
[12]KIM Y,DENTON C,HOANG L,et al.Structured attention networks[C]//International Conference on Learning Representations.2017.
[13]DEVLIN J,CHANG M W,LEE K,et al.BERT:Pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics:Human Language Technologies.ACL,2019:4171-4186.
[14]LIU Z,LIN W,SHI Y,et al.RoBERTa:A robustly optimized BERT pretraining approach[C]//Proceedings of the 20th Chinese National Conference on Computational Linguistics.ACL,2021:1218-1227.
[15]XUE L,CONSTANT N,ROBERTS A,et al.mT5:A massively multilingual pre-trained text-to-text transformer[C]//Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:Human Language Technologies.ACL,2021:483-498.
[16]LAN Z,CHEN M,GOODMAN S,et al.ALBERT:A lite BERT for self-supervised learning of language representations[C]//International Conference on Learning Representations.2020.
[17]CLARK K,LUONG M T,LE Q V,et al.ELECTRA:Pre-training text encoders as discriminators rather than generators[J].arXiv:2003.10555,2020.
[18]HE P,LIU X,GAO J,et al.DeBERTa:Decoding-enhanced BERT with disentangled attention[J].arXiv:2006.03654,2020.
[19]ZAHEER M,GURUGANESH G,DUBEY K A,et al.Big Bird:Transformers for longer sequences[J].Advances in Neural Information Processing Systems,2020,33:17283-17297.
[20]SUN Y,WANG S,FENG S,et al.ERNIE 3.0:Large-scale knowledge enhanced pre-training for language understanding and generation[J].arXiv:2107.02137,2021.
[21]FENG F,YANG Y,CER D,et al.Language-agnostic BERT sentence embedding[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics.ACL,2022:878-891.
[22]GAO T,YAO X,CHEN D.SimCSE:Simple contrastive learning of sentence embeddings[C]//Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing.ACL,2021:6894-6910.
[23]LEE J,YOON W,KIM S,et al.BioBERT:a pre-trained biomedical language representation model for biomedical text mining[J].Bioinformatics,2020,36(4):1234-1240.
[24]LIU Z,HUANG D,HUANG K,et al.FinBERT:A pre-trained financial language representation model for financial text mining[C]//Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence.2021:4513-4519.
[25]RADFORD A,NARASIMHAN K,SALIMANS T,et al.Improving language understanding by generative pre-training[R].OpenAI,2018.
[26]RADFORD A,WU J,CHILD R,et al.Language models are unsupervised multitask learners[J].OpenAI Blog,2019,1(8):9.
[27]BROWN T B,MANN B,RYDER N,et al.Language models are few-shot learners[C]//Proceedings of the 34th International Conference on Neural Information Processing Systems.Red Hook,NY:Curran Associates Inc.,2020:1877-1901.
[28]ACHIAM J,ADLER S,AGARWAL S,et al.GPT-4 technical report[R].OpenAI,2023.
[29]KINGMA D P,WELLING M.Auto-encoding variational Bayes[J].arXiv:1312.6114,2013.
[30]FABIUS O,VAN AMERSFOORT J R.Variational recurrent auto-encoders[C]//ICASSP 2020-2020 IEEE International Conference on Acoustics,Speech and Signal Processing(ICASSP).IEEE,2020:3202-3206.
[31]LIN S,CLARK R,BIRKE R,et al.Anomaly detection for time series using VAE-LSTM hybrid model[C]//2020 IEEE International Conference on Acoustics,Speech and Signal Processing(ICASSP 2020).IEEE,2020:4322-4326.
[32]HO J,JAIN A,ABBEEL P.Denoising diffusion probabilistic models[J].Advances in Neural Information Processing Systems,2020,33:6840-6851.
[33]LI Y,ZHOU K,ZHAO W X,et al.Diffusion models for non-autoregressive text generation:A survey[C]//Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence(IJCAI ’23).2023:6692-6701.
[34]XIANG J,LIU Z,LIU H,et al.DiffusionDialog:A Diffusion Model for Diverse Dialog Generation with Latent Space[C]//Proceedings of the 2024 Joint International Conference on Computational Linguistics,Language Resources and Evaluation(LREC-COLING 2024).ACL,2024:4912-4921.
[35]LOVELACE J,KISHORE V,CHEN Y W,et al.Diffusion Guided Language Modeling[C]//Findings of the Association for Computational Linguistics:ACL 2024.ACL,2024:14936-14952.
[36]SANG E F,DE MEULDER F.Introduction to the CoNLL-2003 shared task:Language-independent named entity recognition[C]//Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003.ACL,2003:142-147.
[37]RAJPURKAR P,ZHANG J,LOPYREV K,et al.SQuAD:100,000+ Questions for Machine Comprehension of Text[C]//Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing.ACL,2016:2383-2392.
[38]YIH W,RICHARDSON M,MEEK C,et al.The value of semantic parse labeling for knowledge base question answering[C]//Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics.2016:201-206.
[39]TALMOR A,BERANT J.The web as a knowledge-base for answering complex questions[C]//Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics:Human Language Technologies.ACL,2018:641-651.
[40]GLIWA B,MOCHOL I,BIESEK M,et al.SAMSum corpus:A human-annotated dialogue dataset for abstractive summarization[C]//Proceedings of the 2nd Workshop on New Frontiers in Summarization.ACL,2019:70-79.
[41]ZHONG M,YIN D,YU T,et al.QMSum:A new benchmark for query-based multi-domain meeting summarization[C]//Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:Human Language Technologies.ACL,2021:5905-5921.
[42]NALLAPATI R,ZHOU B,GULCEHRE C,et al.Abstractive text summarization using sequence-to-sequence RNNs and beyond[C]//Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning.ACL,2016:280-290.
[43]GOYAL N,GAO C,CHAUDHARY V,et al.The FLORES-101 evaluation benchmark for low-resource and multilingual machine translation[J].Transactions of the Association for Computational Linguistics,2022,10:522-538.
[44]KOCMI T,BAWDEN R,BOJAR O,et al.Findings of the 2022 conference on machine translation(WMT22)[C]//Proceedings of the Seventh Conference on Machine Translation(WMT).2022:1-45.
[45]SCARTON C,FORCADA M L,ESPLA-GOMIS M,et al.Estimating post-editing effort:a study on human judgements,task-based and reference-based metrics of MT quality[C]//Proceedings of the 16th International Conference on Spoken Language Translation.ACL,2019.
[46]CONNEAU A,RINOTT R,LAMPLE G,et al.XNLI:Evaluating cross-lingual sentence representations[C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.ACL,2018:2475-2485.
[47]LAN Y,HE G,JIANG J,et al.A survey on complex knowledge base question answering:Methods,challenges and solutions[J].IEEE Transactions on Knowledge and Data Engineering,2021,35(11):11196-11215.
[48]JIN W,ZHAO B,YU H,et al.Improving embedded knowledge graph multi-hop question answering by introducing relational chain reasoning[J].Data Mining and Knowledge Discovery,2022,37(1):255-288.
[49]HE G,LAN Y,JIANG J,et al.Improving multi-hop knowledge base question answering by learning intermediate supervision signals[C]//Proceedings of the 14th ACM International Conference on Web Search and Data Mining.2021:553-561.
[50]SHI J,CAO S,HOU L,et al.TransferNet:An effective and transparent framework for multi-hop question answering over relation graph[C]//Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing.ACL,2021:4149-4158.
[51]ZHANG J,ZHANG X,YU J,et al.Subgraph retrieval enhanced model for multi-hop knowledge base question answering[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics.ACL,2022:5773-5784.
[52]JIANG J,ZHOU K,ZHAO W X,et al.UniKGQA:Unified retrieval and reasoning for solving multi-hop question answering over knowledge graph[C]//International Conference on Learning Representations.2023.
[53]JIANG J,ZHOU K,ZHAO X,et al.ReasoningLM:Enabling structural subgraph reasoning in pre-trained language models for question answering over knowledge graph[C]//Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing.ACL,2023:3721-3735.
[54]LUO L,LI Y F,HAFFARI G,et al.Reasoning on graphs:Faithful and interpretable large language model reasoning[C]//International Conference on Learning Representations.2024.
[55]SUN J,XU C,TANG L,et al.Think-on-Graph:Deep and responsible reasoning of large language model with knowledge graph[C]//International Conference on Learning Representations.2024.
[56]LUO H,TANG Z,PENG S,et al.ChatKBQA:A generate-then-retrieve framework for knowledge base question answering with fine-tuned large language models[C]//Findings of the Association for Computational Linguistics:ACL 2024.ACL,2024:2039-2056.
[57]JIANG J,ZHOU K,DONG Z,et al.StructGPT:A general framework for large language model to reason over structured data[C]//Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing.ACL,2023:9237-9251.
[58]LIU X,YU H,ZHANG H,et al.AgentBench:Evaluating LLMs as agents[C]//International Conference on Learning Representations.2024.
[59]CHENG S,ZHUANG Z,XU Y,et al.Call Me When Necessary:LLMs can Efficiently and Faithfully Reason over Structured Environments[C]//Findings of the Association for Computational Linguistics:ACL 2024.ACL,2024:4275-4295.
[60]CHEN X,JIANG J Y,CHANG W C,et al.MinPrompt:Graph-based minimal prompt data augmentation for few-shot question answering[C]//Association for Computational Linguistics.2024:254-266.
[61]TANG L,LI J,FANTUS S.Medical artificial intelligence ethics:a systematic review of empirical studies[J].Digital Health,2023,9:20552076231186064.
[62]REN F,GUO X,PENG X,et al.A Survey of Spoken Language Understanding in Medical Field[J].Journal of Chinese Information Processing,2024,38(1):24-35.
[63]WU X,ZHANG H,LIAO H.Literature Review of Doctor Recommendation Methods and Applications for Consultation Platforms[J/OL].Computer Science,1-21[2024-11-27].http://kns.cnki.net/kcms/detail/50.1075.TP.20241022.1549.035.html.
[64]SRIVASTAVA S,SHARMA G.OmniVec2:A Novel Transformer based Network for Large Scale Multimodal and Multitask Learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2024:27412-27424.
[65]ROHDE T,WU X,LIU Y.Hierarchical Learning for Generation with Long Source Sequences[J].arXiv:2104.07545,2021.
[66]BUDAGAM D,KJ S,KUMAR A,et al.Hierarchical Prompting Taxonomy:A Universal Evaluation Framework for Large Language Models[J].arXiv:2406.12644,2024.
[67]XU L,KARIM M A,DINGLIWAL S,et al.Salient Information Prompting to Steer Content in Prompt-based Abstractive Summarization[C]//EMNLP.2024:35-49.
[68]HE H,LIU Q,XU L,et al.CriSPO:Multi-Aspect Critique-Suggestion-guided Automatic Prompt Optimization for Text Generation[J].arXiv:2410.02748,2024.
[69]LIU J,ZOU Y,ZHANG H,et al.Topic-aware contrastive learning for abstractive dialogue summarization[C]//Findings of the Association for Computational Linguistics:EMNLP 2021.ACL,2021:1229-1243.
[70]KIM S,JOO S J,CHAE H,et al.Mind the gap! injecting commonsense knowledge for abstractive dialogue summarization[C]//Proceedings of the 29th International Conference on Computational Linguistics.ACL,2022:6285-6300.
[71]LIU J,ZOU Y,ZHANG H,et al.Topic-aware contrastive learning for abstractive dialogue summarization[C]//Findings of the Association for Computational Linguistics:EMNLP 2021.ACL,2021:1229-1243.
[72]ZHAO Y,KHALMAN M,JOSHI R,et al.Calibrating sequence likelihood improves conditional language generation[C]//International Conference on Learning Representations.2022.
[73]ZHANG X,LIU Y,WANG X,et al.Momentum calibration for text generation[J].arXiv:2212.04257,2022.
[74]WANG B,LIU Z,CHEN N F.Instructive dialogue summarization with query aggregations[C]//Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing.ACL,2023:7630-7653.
[75]LIN C Y.ROUGE:A package for automatic evaluation of summaries[C]//Text Summarization Branches Out.2004:74-81.
[76]WANG P,ZHANG C,QI F,et al.PGNet:Real-time arbitrarily-shaped text spotting with point gathering network[C]//Proceedings of the AAAI Conference on Artificial Intelligence.2021:2782-2790.
[77]LEWIS M,LIU Y,GOYAL N,et al.BART:Denoising sequence-to-sequence pre-training for natural language generation,translation,and comprehension[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.ACL,2020:7871-7880.
[78]HUANG W,XIAO T,LIU Q,et al.HMNet:a hierarchical multi-modal network for educational video concept prediction[J].International Journal of Machine Learning and Cybernetics,2023,14(9):2913-2924.
[79]LIU C Y,ZHOU C,WU J,et al.CPMF:A collective pairwise matrix factorization model for upcoming event recommendation[C]//2017 International Joint Conference on Neural Networks(IJCNN).IEEE,2017:1532-1539.
[80]DU X,LI S,JI H.Dynamic global memory for document-level argument extraction[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics.ACL,2022:5264-5275.
[81]REN Y,CAO Y,GUO P,et al.Retrieve-and-sample:Document-level event argument extraction via hybrid retrieval augmentation[C]//Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics.2023:293-306.
[82]WANG X,GAO T,ZHU Z,et al.KEPLER:A unified model for knowledge embedding and pre-trained language representation[J].Transactions of the Association for Computational Linguistics,2021,9:176-194.
[83]KOCHSIEK A,SAXENA A,NAIR I,et al.Friendly neighbors:Contextualized sequence-to-sequence link prediction[C]//Proceedings of the 8th Workshop on Representation Learning for NLP(RepL4NLP 2023).ACL,2023:131-138.
[84]WANG L,ZHAO W,WEI Z,et al.SimKGC:Simple contrastive knowledge graph completion with pre-trained language models[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics.ACL,2022:4281-4294.
[85]SAXENA A,KOCHSIEK A,GEMULLA R.Sequence-to-sequence knowledge graph completion and question answering[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics.ACL,2022:2814-2828.
[86]WANG K,XU Y,WU Z,et al.LLM as Prompter:Low-resource Inductive Reasoning on Arbitrary Knowledge Graphs[C]//Findings of the Association for Computational Linguistics:ACL 2024.ACL,2024:3742-3759.
[87]STAHLBERG F.Neural machine translation:A review[J].Journal of Artificial Intelligence Research,2020,69:343-418.
[88]FAN A,BHOSALE S,SCHWENK H,et al.Beyond English-centric multilingual machine translation[J].Journal of Machine Learning Research,2021,22(107):1-48.
[89]ZHU W,LIU H,DONG Q,et al.Multilingual machine translation with large language models:Empirical results and analysis[C]//Findings of the Association for Computational Linguistics:NAACL 2024.ACL,2024:2765-2781.
[90]FAN A,BHOSALE S,SCHWENK H,et al.Beyond English-centric multilingual machine translation[J].Journal of Machine Learning Research,2021,22(107):1-48.
[91]ZHANG S,ROLLER S,GOYAL N,et al.OPT:Open pre-trained transformer language models[J].arXiv:2205.01068,2022.
[92]ALMAZROUEI E,ALOBEIDLI H,ALSHAMSI A,et al.The Falcon series of open language models[J].arXiv:2311.16867,2023.
[93]TOUVRON H,MARTIN L,STONE K,et al.Llama 2:Open foundation and fine-tuned chat models[J].arXiv:2307.09288,2023.
[94]GAO X,GONG P,LIU J,et al.COMT Val158Met polymorphism influences the susceptibility to framing in decision-making:OFC-amygdala functional connectivity as a mediator[J].Human Brain Mapping,2016,37(5):1880-1892.
[95]PAPINENI K,ROUKOS S,WARD T,et al.BLEU:A method for automatic evaluation of machine translation[C]//Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics.ACL,2002:311-318.
[96]NLLB Team,COSTA-JUSSÀ M R,CROSS J,et al.No language left behind:Scaling human-centered machine translation[J].arXiv:2207.04672,2022.
[97]PELOFSKE E,URIAS V,LIEBROCK L M.Automated Multi-Language to English Machine Translation Using Generative Pre-Trained Transformers[J].arXiv:2404.14680,2024.
[98]WANG Y,BAI J,HUANG R,et al.Speech-to-Speech Translation with Discrete-Unit-Based Style Transfer[C]//Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics.ACL,2024:34-41.
[99]FRID-ADAR M,KLANG E,AMITAI M,et al.Synthetic data augmentation using GAN for improved liver lesion classification[C]//2018 IEEE 15th International Symposium on Biomedical Imaging(ISBI 2018).IEEE,2018:289-293.
[100]WANG Q,GAO J,LIN W,et al.Learning from synthetic data for crowd counting in the wild[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2019:8198-8207.
[101]HAN J,WANG Q,GUO Z,et al.Disentangled Learning with Synthetic Parallel Data for Text Style Transfer[C]//Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics.2024:15187-15201.
[102]GU A,DAO T.Mamba:Linear-time sequence modeling with selective state spaces[C]//International Conference on Learning Representations.2024.
[103]LIU Z,WANG Y,VAIDYA S,et al.KAN:Kolmogorov-Arnold networks[C]//International Conference on Learning Representations.2025.
[104]SUN Y,LI X,DALAL K,et al.Learning to(learn at test time):RNNs with expressive hidden states[C]//International Conference on Learning Representations.2025.
[105]HU Y,CHEN C,YANG C H H,et al.GenTranslate:Large Language Models are Generative Multilingual Speech and Machine Translators[C]//Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics.ACL,2024:74-90.
[106]TIAN Y,XIA F,SONG Y.Dialogue summarization with mixture of experts based on large language models[C]//Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics.2024:7143-7155.
[107]LI Z,ZENG Y,ZUO Y,et al.KnowCoder:Coding Structured Knowledge into LLMs for Universal Information Extraction[C]//Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics.ACL,2024:8758-8779.
[108]PATEL A,RAFFEL C,CALLISON-BURCH C.DataDreamer:A Tool for Synthetic Data Generation and Reproducible LLM Workflows[C]//Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics.ACL,2024:3781-3799.
[109]CZINCZOLL T,HÖNES C,SCHALL M,et al.NextLevelBERT:Investigating Masked Language Modeling with Higher-Level Representations for Long Documents[C]//Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics.ACL,2024:4656-4666.
[110]ÜSTÜN A,ARYABUMI V,YONG Z X,et al.Aya model:An instruction finetuned open-access multilingual language model[C]//Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics.ACL,2024:15894-15939.
[111]YULIANTO A,SUPRIATNANINGSIH R.Google Translate vs. DeepL:a quantitative evaluation of close-language pair translation(French to English)[J].Asian Journal of English Language and Pedagogy,2021,9(2):109-127.
[112]LV L,XIE J,ZHENG S,et al.Research Status and Trend of Generative Artificial Intelligence Applied to Education in China:Visualization Analysis Based on CiteSpace[J].Advances in Education,2024,14:655.
[113]CILLO P,RUBERA G.Generative AI in innovation and marketing processes:A roadmap of research opportunities[J].Journal of the Academy of Marketing Science,2025,53:684-701.
[114]MITA M,MURAKAMI S,KATO A,et al.Striking Gold in Advertising:Standardization and Exploration of Ad Text Generation[C]//Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics.2024:955-972.
[115]ZACK T,LEHMAN E,SUZGUN M,et al.Assessing the potential of GPT-4 to perpetuate racial and gender biases in health care:a model evaluation study[J].The Lancet Digital Health,2024,6(1):e12-e22.
[116]SCHNEIDER J.Explainable generative AI(GenXAI):A survey,conceptualization,and research agenda[J].Artificial Intelligence Review,2024,57(11):289.
[117]DABIS A,CSÁKI C.AI and ethics:Investigating the first policy responses of higher education institutions to the challenge of generative AI[J].Humanities and Social Sciences Communications,2024,11(1):1-13.
[118]ZHENG Z,LU J,WANG L,et al.Cross-scale systematic learning for social big data:theory and methods[J].Scientia Sinica(Informationis),2024,54(9):2083-2097.