Computer Science ›› 2024, Vol. 51 ›› Issue (12): 63-70.doi: 10.11896/jsjkx.240900093

• Computer Software •

Automatic Test Case Generation Method for Automotive Electronic Control System Verification

LI Zhanqi1,3,4, WU Xinwei2, ZHANG Lei1, LIU Quanzhou1, XIE Hui3, XIONG Deyi2   

    1 CATARC (Tianjin) Automotive Engineering Research Institute Co., Ltd., Tianjin 300300, China
    2 College of Intelligence and Computing, Tianjin University, Tianjin 300350, China
    3 School of Mechanical Engineering, Tianjin University, Tianjin 300354, China
    4 China Automotive Technology and Research Center Co., Ltd., Tianjin 300300, China
  • Received:2024-09-13 Revised:2024-11-07 Online:2024-12-15 Published:2024-12-10
  • About author: LI Zhanqi, born in 1985, postgraduate, senior engineer. His main research interests include simulation development and system validation of automotive electronic control systems.
    XIONG Deyi, born in 1979, Ph.D, professor, Ph.D supervisor, is a member of CCF (No.57174S). His main research interests include natural language processing, large language models and AI alignment.
  • Supported by:
    National Key Research and Development Program of China(2021YFB3202204).

Abstract: With the development of “software-defined vehicles”, the complexity of automotive software functions and the demand for rapid development have imposed higher requirements on the verification of electronic control systems. Currently, test flow charts for the software functions of electronic control systems are developed mainly by hand, which is inefficient and susceptible to human factors. This paper describes the task of automatic test case generation in automotive electronic control system verification and its challenges, and proposes an automatic test flow chart generation method based on large language models (LLMs) to improve development efficiency and reduce labor costs. The method comprises constructing domain task datasets and selecting an appropriate LLM application route. The study examines the advantages and disadvantages of two technical routes: fine-tuning traditional language models and adapting LLM APIs. Experiments validate the performance of different LLM APIs on the test case generation task and the effectiveness of prompt engineering techniques in improving LLM API performance. In summary, this paper proposes an efficient method for automatically generating automotive test flow charts, demonstrating the potential of LLMs to improve the efficiency of automotive software testing.
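To make the LLM-API route concrete, the sketch below shows what a prompt-engineered request for a test flow chart might look like. It is a minimal illustration under stated assumptions, not the paper's implementation: it uses an OpenAI-compatible chat API as a stand-in for "an LLM API", and the model name, prompt template, few-shot example, and the generate_test_flow_chart helper are all hypothetical placeholders.

# Illustrative sketch (not from the paper): prompt-engineering route for test flow chart generation.
# Assumptions: an OpenAI-compatible chat API stands in for "an LLM API"; the model name,
# prompt template, and example requirements are invented placeholders.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

SYSTEM_PROMPT = (
    "You are a test engineer for automotive electronic control systems. "
    "Given a software functional requirement, produce a numbered test flow chart: "
    "preconditions, stimulus steps on the test bench, and expected ECU responses."
)

# One in-context example (few-shot prompting); the content is invented for illustration.
FEW_SHOT_EXAMPLE = (
    "Requirement: The low-beam headlights shall turn on when the light switch is set to ON.\n"
    "Test flow chart:\n"
    "1. Precondition: ignition ON, light switch OFF, low-beam status = OFF.\n"
    "2. Step: set the light switch signal to ON in the bus simulation.\n"
    "3. Expected: low-beam output driver active within 200 ms, status signal = ON.\n"
)

def generate_test_flow_chart(requirement: str) -> str:
    """Ask the LLM API to draft a test flow chart for one functional requirement."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": FEW_SHOT_EXAMPLE},
            {"role": "user", "content": f"Requirement: {requirement}\nThink step by step, "
                                        "then output only the numbered test flow chart."},
        ],
        temperature=0.2,  # keep generations close to the expected test-step format
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(generate_test_flow_chart(
        "The rear wiper shall stop within 500 ms after the wiper switch is set to OFF."))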

Key words: Automotive applications, Large language models, Prompt engineering

CLC Number: 

  • TP391