Computer Science ›› 2024, Vol. 51 ›› Issue (10): 178-186.doi: 10.11896/jsjkx.230800191

• Computer Software •

Study on Building Business-oriented Resource On-demand Resolution Model

LIU Yao1, QIN Xun2, LIU Tianji2   

  1 Engineering Center,Institute of Scientific and Technical Information of China,Beijing 100038,China
    2 School of Software and Microelectronics,Peking University,Beijing 102600,China
  • Received:2023-08-30 Revised:2024-01-22 Online:2024-10-15 Published:2024-10-11
  • About author:LIU Yao,born in 1972,Ph.D,researcher,is a distinguished member of CCF(No.17606D).His main research interests include natural language processing,knowledge organization,and knowledge engineering
  • Supported by:
    National Social Science Foundation of China(21BTQ011).

Abstract: To address the repeated re-analysis and redevelopment of natural language processing tools and resource-analysis plugins whenever new requirements arise during project development,this paper proposes a business-oriented on-demand resource analysis solution.First,a demand-driven resource analysis method from requirement to code is proposed,centered on the construction of a demand concept indexing model for the requirement text itself.The constructed demand concept indexing model outperforms other classification models in terms of accuracy,recall,and F1 score.Second,a mapping mechanism from requirement text to code library categories is established based on the correlation between requirement text and code.The mapping results are evaluated with precision@K,reaching a final accuracy of 60% and demonstrating practical value.In summary,this paper explores a set of key technologies for on-demand resource analysis with requirement-parsing capabilities and implements the correlation between requirements and code,covering the full pipeline from requirement text classification and code library classification to code library retrieval and plugin generation.The proposed method forms a complete business loop of “requirement-code-plugin-analysis” and is experimentally verified to be effective for on-demand resource analysis.Compared with existing large language models for business requirement analysis and code generation,this method focuses on realizing the full process of plugin code reuse within specific business domains,capturing domain-specific business characteristics.
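The abstract evaluates the requirement-to-category mapping with precision@K. As a minimal sketch of that metric, the snippet below computes the fraction of the top-K predicted code-library categories that are actually relevant to a requirement; the category names and the helper function are hypothetical illustrations, not the paper's code.

```python
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved code-library categories
    that appear in the set of relevant categories."""
    if k <= 0:
        raise ValueError("k must be positive")
    relevant_set = set(relevant)
    top_k = retrieved[:k]
    hits = sum(1 for category in top_k if category in relevant_set)
    return hits / k

# Hypothetical example: top-5 categories predicted for one requirement text
retrieved = ["text-parsing", "ner", "tokenization", "ocr", "translation"]
relevant = {"text-parsing", "tokenization", "pos-tagging"}
print(precision_at_k(retrieved, relevant, 5))  # 2 hits out of 5 -> 0.4
```

Averaging this value over a set of requirement texts gives the aggregate precision@K figure reported in the paper.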

Key words: Natural language processing, Requirements model, Code reuse, Text parsing, Code categorization, Code retrieval

CLC Number: TP391