Computer Science ›› 2024, Vol. 51 ›› Issue (12): 63-70. doi: 10.11896/jsjkx.240900093
LI Zhanqi1,3,4, WU Xinwei2, ZHANG Lei1, LIU Quanzhou1, XIE Hui3, XIONG Deyi2
Abstract: With the development of the "software-defined vehicle," the growing complexity of automotive software functions and the demand for rapid development place higher requirements on the verification of electronic control systems. At present, test flowcharts for electronic control system software functions are developed mainly by hand, which is inefficient and subject to human error. This paper describes in detail the task of automatic test case generation for automotive electronic control system verification and the challenges it faces, and proposes a method for automatically generating test flowcharts based on large language models (LLMs) to improve development efficiency and reduce labor costs. The method covers the construction of a domain-specific task dataset and the selection of an LLM application route suited to the scenario. Experiments compare the strengths and weaknesses of two technical routes, fine-tuning traditional language models and adapting LLM APIs, evaluate the performance of different LLM APIs on the test case generation task, and measure the improvement that prompt engineering brings to LLM APIs. The proposed method offers an efficient way to automatically generate automotive test flowcharts and demonstrates the potential of large language models for improving automotive software testing efficiency.
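The prompt-engineering route for LLM-API-based generation can be illustrated with a minimal sketch. All function names, the prompt template, and the step format below are hypothetical illustrations, not the paper's actual implementation: a functional requirement is wrapped in a chain-of-thought style prompt, and the model's numbered reply is parsed into sequential flowchart nodes.

```python
# Minimal sketch of the prompt-engineering route for test-flowchart
# generation. Prompt wording, step format, and parser are hypothetical
# illustrations, not the implementation described in the paper.

def build_prompt(requirement: str) -> str:
    """Wrap a functional requirement in a chain-of-thought style prompt
    asking the model for numbered test steps, one per line."""
    return (
        "You are an automotive ECU test engineer.\n"
        f"Requirement: {requirement}\n"
        "Think step by step, then output the test procedure as "
        "numbered steps, one per line, e.g. '1. <action>'."
    )

def parse_steps(llm_output: str) -> list[dict]:
    """Turn numbered steps from a model reply into flowchart nodes;
    consecutive ids imply sequential edges."""
    nodes = []
    for line in llm_output.splitlines():
        line = line.strip()
        if line and line[0].isdigit() and "." in line:
            idx, _, action = line.partition(".")
            nodes.append({"id": int(idx), "action": action.strip()})
    return nodes

if __name__ == "__main__":
    # A mock model reply stands in for a real LLM API call.
    reply = "1. Power on the ECU\n2. Send wake-up signal\n3. Check DTC memory"
    for node in parse_steps(reply):
        print(node)
```

In practice the reply string would come from an LLM API call using `build_prompt(...)`; the parsing step is what turns free-form model output into a machine-checkable flowchart.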