Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
    Coverage of Explainable AI in this journal
    Computer Science    2023, 50 (5): 1-2.   DOI: 10.11896/jsjkx.qy20230501
    Review of Software Engineering Techniques and Methods Based on Explainable Artificial Intelligence
    XING Ying
    Computer Science    2023, 50 (5): 3-11.   DOI: 10.11896/jsjkx.221100159
    In information processing and decision-making, artificial intelligence (AI) methods have shown performance superior to traditional methods. However, when AI models are put into production, their outputs are not guaranteed to be completely accurate, so the "unreliability" of AI technology has gradually become a major obstacle to its large-scale adoption. As AI is applied to software engineering, the drawbacks of over-reliance on historical data and non-transparent decision-making are becoming more and more obvious, so it is crucial to provide reasonable explanations for decision results. This paper elaborates on the basic concepts of explainable AI (XAI) and the evaluation of explanation models, and explores the feasibility of combining software engineering with explainable AI. It also surveys relevant research in software engineering and analyzes four typical application directions of XAI, namely malware detection, high-risk component detection, software load distribution, and binary code similarity analysis, to discuss how to reveal the correctness of system output and thereby increase the credibility of software systems. The paper closes with insights into future research directions for combining software engineering and explainable artificial intelligence.
    Study on Interpretable Click-Through Rate Prediction Based on Attention Mechanism
    YANG Bin, LIANG Jing, ZHOU Jiawei, ZHAO Mengci
    Computer Science    2023, 50 (5): 12-20.   DOI: 10.11896/jsjkx.221000032
    Click-through rate (CTR) prediction is critical to recommender systems: improvements in CTR prediction directly affect the system's earnings targets. The performance and interpretability of a CTR prediction algorithm help developers understand and evaluate the recommender system accurately, and also inform system design. Most existing approaches rely on linear feature interaction and deep feature extraction, and their outcomes offer poor model interpretation; moreover, very few previous studies have examined the interpretability of CTR prediction. Therefore, this paper proposes a novel model that introduces a multi-head self-attention mechanism into the embedding layer, the linear feature-interaction component, and the deep component, in order to study model interpretation. Two designs are proposed for the deep component: a deep neural network (DNN) enhanced by multi-head self-attention, and a network that computes high-order feature interactions by stacking multiple attention blocks. Furthermore, attention scores are calculated to interpret the prediction results of each component. Extensive experiments on three real-world benchmark datasets show that the proposed approach not only improves on DeepFM effectively but also offers good model interpretation.
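    The interpretability mechanism the abstract describes can be sketched in miniature: compute self-attention over feature-field embeddings and read the attention weights as per-field importance scores. This is a single-head toy, not the authors' model; the field names, dimensions, and random projections are all illustrative assumptions.

```python
# Toy sketch: scaled dot-product self-attention over feature-field
# embeddings; the attention weights double as interpretable per-field
# importance scores.  All names and dimensions are invented.
import math
import random

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def matvec(mat, vec):
    return [sum(w * v for w, v in zip(row, vec)) for row in mat]

def self_attention(embeddings, wq, wk, wv):
    """One attention head; returns outputs and the attention-score matrix."""
    qs = [matvec(wq, e) for e in embeddings]
    ks = [matvec(wk, e) for e in embeddings]
    vs = [matvec(wv, e) for e in embeddings]
    d = len(qs[0])
    scores, outputs = [], []
    for q in qs:
        row = softmax([sum(a * b for a, b in zip(q, k)) / math.sqrt(d)
                       for k in ks])
        scores.append(row)
        outputs.append([sum(w * v[i] for w, v in zip(row, vs))
                        for i in range(d)])
    return outputs, scores

random.seed(0)
d = 4
fields = ["user_id", "item_id", "hour"]          # hypothetical feature fields
emb = [[random.gauss(0, 1) for _ in range(d)] for _ in fields]
rand_mat = lambda: [[random.gauss(0, 1) for _ in range(d)] for _ in range(d)]
_, scores = self_attention(emb, rand_mat(), rand_mat(), rand_mat())
# each row of `scores` sums to 1 and can be read as a per-field
# importance profile for the corresponding query field
```

A multi-head version would run several such heads with independent projections and concatenate their outputs; the per-head score matrices are what a developer would inspect.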
    Explainable Comparison of Software Defect Prediction Models
    LI Huilai, YANG Bin, YU Xiuli, TANG Xiaomei
    Computer Science    2023, 50 (5): 21-30.   DOI: 10.11896/jsjkx.221000028
    Software defect prediction has become an important research direction in software testing, and the comprehensiveness of defect prediction directly affects the efficiency of testing and program operation. However, existing defect prediction is based on historical data, and most approaches cannot give a reasonable explanation of the prediction process; this black-box process only shows the output, making it difficult to know how the internal structure of the model affects the result. To address this problem, this paper selects software metric methods and several typical deep learning models, briefly compares their inputs, outputs and structures, analyzes them from the two perspectives of the degree of data differences and how the models process code, and explains their similarities and differences. Experiments show that deep learning methods are more effective than traditional software metric methods in defect prediction, which is mainly caused by their different processing of raw data. When convolutional neural networks (CNN) and long short-term memory (LSTM) networks are used to predict defects, the data difference is mainly caused by how completely the code information is understood. In summary, to improve defect prediction, a model's computation should comprehensively involve the semantics, logic and context of the code, so as to avoid omitting useful information.
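    The data-representation gap the comparison highlights can be made concrete: classic software metrics collapse a function to a few order-free counts, while sequence models (CNN/LSTM) consume the ordered token stream and so keep contextual information. The metric definitions below are simplified stand-ins, not the measures used in the paper.

```python
# Contrast the two inputs a defect predictor might receive: flat metric
# features versus the ordered token sequence of the same function.
import re

SOURCE = """
def get(xs, i):
    if i < 0:
        return None
    return xs[i]
"""

def metric_features(src):
    """Order-free summary: lines of code and a crude branch count
    (a simplified stand-in for real software metrics)."""
    lines = [l for l in src.splitlines() if l.strip()]
    branches = sum(l.strip().startswith(("if", "elif", "while", "for"))
                   for l in lines)
    return {"loc": len(lines), "branches": branches}

def token_sequence(src):
    """Ordered token stream - the kind of input a CNN/LSTM model sees."""
    return re.findall(r"[A-Za-z_]\w*|[^\s\w]", src)

print(metric_features(SOURCE))       # the whole function becomes two numbers
print(token_sequence(SOURCE)[:8])    # order and context are preserved
```

Two functions with identical metrics can have very different token sequences, which is the information loss the abstract attributes to traditional methods.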
    Study on Reliability Prediction Model Based on BASFPA-BP
    LI Honghui, CHEN Bo, LU Shuyi, ZHANG Junwen
    Computer Science    2023, 50 (5): 31-37.   DOI: 10.11896/jsjkx.220900283
    Software reliability prediction builds on a software reliability prediction model to analyze, evaluate and predict software reliability and related measures. Using failure data collected during software operation to predict future reliability has become an important means of evaluating software failure behavior and guaranteeing software reliability. BP neural networks have been widely used in software reliability prediction because of their simple structure and few parameters. However, the prediction accuracy of models built on the traditional BP neural network falls short of expectations. Therefore, this paper proposes a software reliability prediction model based on BASFPA-BP. The model uses software failure data and applies the BASFPA algorithm to optimize the network weights and thresholds during BP training, thus improving prediction accuracy. Three groups of public software failure data are selected, and the mean squared error between actual and predicted values serves as the measure of prediction quality. BASFPA-BP is compared with FPA-BP, BP and Elman models. Experimental results show that the BASFPA-BP model achieves high prediction accuracy among models of the same type.
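    The core idea, a population-based search tuning a small network's weight vector so that prediction error on failure data drops, can be sketched with a generic flower-pollination-style loop. This is not the authors' BASFPA variant, the network is a toy 1-2-1 architecture, and the "failure counts" below are invented.

```python
# Generic flower-pollination-style optimization of a tiny network's
# weights to fit toy cumulative-failure data (a sketch of the
# metaheuristic-plus-BP idea, not the BASFPA algorithm itself).
import math
import random

random.seed(1)
# invented cumulative failure counts indexed by test interval
DATA = [(t, 10 * (1 - math.exp(-0.3 * t))) for t in range(1, 11)]

def predict(w, t):
    # 1-2-1 network: two tanh hidden units, linear output (7 weights)
    h1 = math.tanh(w[0] * t + w[1])
    h2 = math.tanh(w[2] * t + w[3])
    return w[4] * h1 + w[5] * h2 + w[6]

def mse(w):
    return sum((predict(w, t) - y) ** 2 for t, y in DATA) / len(DATA)

def fpa_optimize(iters=300, pop=20, p_switch=0.8):
    flowers = [[random.uniform(-5, 5) for _ in range(7)] for _ in range(pop)]
    best = min(flowers, key=mse)
    for _ in range(iters):
        for i, w in enumerate(flowers):
            if random.random() < p_switch:   # global pollination toward best
                cand = [a + random.gauss(0, 0.3) * (b - a)
                        for a, b in zip(w, best)]
            else:                            # local pollination between peers
                u, v = random.sample(flowers, 2)
                cand = [a + random.random() * (b - c)
                        for a, b, c in zip(w, u, v)]
            if mse(cand) < mse(w):           # greedy acceptance
                flowers[i] = cand
        best = min(flowers, key=mse)
    return best

w = fpa_optimize()
```

In the paper's setting the searched weights would seed (or replace) gradient-based BP training; here the loop alone already fits the toy curve far better than an untrained weight vector.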
    Interpretable Repair Method for Event Logs Based on BERT and Weak Behavioral Profiles
    LI Binghui, FANG Huan, MEI Zhenhui
    Computer Science    2023, 50 (5): 38-51.   DOI: 10.11896/jsjkx.220900030
    In practical business processes, low-quality event logs caused by outliers and missing values are often unavoidable. Low-quality event logs degrade the performance of process-mining algorithms, which in turn interferes with correct decision making. When the system reference model is unknown, existing methods for log anomaly detection and repair require manually set thresholds, give no insight into which behavioral constraints the prediction model has learned, and offer poor interpretability of repair results. Inspired by the fact that the pre-trained language model BERT, with its masking strategy, can learn general textual semantics self-supervisedly from context, and combined with a multi-layer, multi-head attention mechanism, this paper proposes BERT4Log, a model that works with weak behavioral profile theory to perform an interpretable repair of low-quality event logs. The proposed method needs no preset threshold and only one round of self-supervised training. It uses weak behavioral profile theory to quantify the degree of behavioral repair of logs, and combines the multi-layer, multi-head attention mechanism to produce a detailed interpretation of each specific prediction. The method is evaluated on a set of public datasets and compared with the best-performing existing research. Experimental results show that BERT4Log repairs logs better than the comparison work, while learning weak behavioral profiles and achieving a detailed interpretation of repair results.
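    The two ingredients the method combines can be illustrated in miniature: a masked-prediction step that fills a missing event from its neighbours (here plain context counts stand in for BERT), and a weak behavioural profile (observed "eventually follows" pairs) used to check how far a repair changes the log's behaviour. Activity names and traces are invented.

```python
# Toy masked-event repair plus a weak-behavioural-profile check.
from collections import Counter
from itertools import combinations

CLEAN = [["register", "check", "approve", "pay"],
         ["register", "check", "reject"],
         ["register", "check", "approve", "pay"]]

def weak_profile(log):
    """Set of (a, b) pairs where a is eventually followed by b."""
    rel = set()
    for trace in log:
        for i, a in enumerate(trace):
            for b in trace[i + 1:]:
                rel.add((a, b))
    return rel

def repair(trace, log):
    """Fill the single '[MASK]' slot with the activity most often seen
    between the same left/right neighbours in the clean log."""
    i = trace.index("[MASK]")
    left = trace[i - 1] if i > 0 else None
    right = trace[i + 1] if i + 1 < len(trace) else None
    votes = Counter()
    for t in log:
        for j, a in enumerate(t):
            l = t[j - 1] if j > 0 else None
            r = t[j + 1] if j + 1 < len(t) else None
            if l == left and r == right:
                votes[a] += 1
    filled = trace[:]
    filled[i] = votes.most_common(1)[0][0]
    return filled

profile = weak_profile(CLEAN)
fixed = repair(["register", "[MASK]", "approve", "pay"], CLEAN)
# the repair introduces no ordered pair outside the observed profile,
# i.e. its behavioural change relative to the clean log is zero
ok = all(p in profile for p in combinations(fixed, 2))
```

BERT4Log replaces the counting step with a self-supervised transformer, and its attention weights (rather than the raw votes here) supply the per-prediction explanation.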
    Review on Interpretability of Deep Learning
    CHEN Chong, CHEN Jie, ZHANG Hui, CAI Lei, XUE Yaru
    Computer Science    2023, 50 (5): 52-63.   DOI: 10.11896/jsjkx.221000044
    With the explosive growth of data volume and breakthroughs in deep learning theory and technology, deep learning models perform well in many classification and prediction tasks (on image, text, voice and video data, etc.), which promotes the large-scale, industrialized application of deep learning. However, because of its high nonlinearity and ill-defined internal logic, a deep learning model is often regarded as a "black box", which restricts further application in critical fields (such as medical treatment, finance and autonomous driving). It is therefore necessary to study the interpretability of deep learning. This paper first reviews recent studies on deep learning and describes the definition of, and need for, explaining deep learning models. Secondly, it analyzes and summarizes interpretation methods and their classification from the perspectives of intrinsically interpretable models and attribution-based versus non-attribution-based interpretation. It then introduces qualitative and quantitative performance criteria for the interpretability of deep learning. Finally, applications of deep learning interpretability and future research directions are discussed.
    Code Embedding Method Based on Neural Network
    SUN Xuekai, JIANG Liehui
    Computer Science    2023, 50 (5): 64-71.   DOI: 10.11896/jsjkx.220100094
    Code analysis and research have many application scenarios, such as code plagiarism detection and software vulnerability search. With the development of artificial intelligence, neural network technology has been widely used in code analysis. However, existing methods either simply treat code as ordinary natural language text, or sample it with overly complex rules. The former easily loses key information in the code, while the latter makes the algorithm too complicated and model training very time-consuming. Alon et al. proposed Code2vec, an algorithm with significant advantages over previous code analysis methods, but Code2vec still has limitations. Therefore, a neural-network-based code embedding method is proposed, whose main idea is to represent a code function as an embedding vector. First, a function is decomposed into a series of abstract syntax tree paths; then a neural network learns to represent each path; finally, all paths are aggregated into a single embedding vector representing the function. A prototype system based on this method is implemented in this paper. Experimental results show that, compared with Code2vec, the new algorithm has a simpler structure and faster training.
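    The decompose-embed-aggregate pipeline the abstract outlines can be sketched as follows. Real systems extract path contexts from a genuine AST and learn the path vectors end to end; here the "paths" are hand-written and the per-path vectors come from a fixed hash, purely to show the aggregation step.

```python
# Sketch: embed each AST path context, then average into one function
# vector.  The hash-based "embedding" is a deterministic stand-in for a
# learned representation.
import hashlib

DIM = 8

def path_vector(path):
    """Deterministic pseudo-embedding of one path context."""
    h = hashlib.sha256(path.encode()).digest()
    return [b / 255.0 for b in h[:DIM]]

def embed_function(path_contexts):
    """Aggregate path vectors into a single function embedding."""
    vecs = [path_vector(p) for p in path_contexts]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(DIM)]

# hypothetical path contexts for a small getter function
paths = ["x,(Param Up FunctionDef Down Return),x",
         "x,(Param Up FunctionDef Down Name),self"]
code_vec = embed_function(paths)
```

In Code2vec the aggregation is an attention-weighted sum rather than a plain mean; the simple mean here is one way such a method could trade expressiveness for a simpler structure and faster training.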
    Hybrid Algorithm of Grey Wolf Optimizer and Arithmetic Optimization Algorithm for Class Integration Test Order Generation
    ZHANG Wenning, ZHOU Qinglei, JIAO Chongyang, XU Ting
    Computer Science    2023, 50 (5): 72-81.   DOI: 10.11896/jsjkx.220200110
    Integration testing is an essential part of software testing, and determining the order in which classes should be tested during object-oriented integration testing is a very complex problem. Search-based approaches have proved efficient at generating class integration test orders (CITO), but suffer from slow convergence and low optimization accuracy. In the grey wolf optimizer (GWO), wolves tend to gather in the same or certain regions and are thus easily trapped in local optima. The arithmetic optimization algorithm (AOA) is a new metaheuristic with excellent randomness and dispersibility. To improve CITO generation, a hybrid of GWO and AOA (GWO-AOA) is proposed, combining the rapid convergence of GWO with AOA's strong ability to avoid stagnation in local optima. In GWO-AOA, the main hunting steps of GWO are unchanged and the leading individual of AOA is replaced by the center of the dominant wolves, providing a proper balance between exploration and exploitation. In addition, a random-walk scheme based on random local mutation improves global search ability. Experimental results indicate that the proposed method generates promising class integration test orders in less time than comparative methods.
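    The GWO machinery the hybrid builds on can be shown on a toy continuous objective: each wolf moves toward positions steered by the three best wolves (alpha, beta, delta), whose mean is also the "center of the dominant wolves" that the hybrid hands to its AOA component. The objective, parameters, and greedy acceptance below are illustrative; mapping positions to class test orders is a separate encoding step not shown here.

```python
# Bare-bones grey wolf optimizer on a sphere objective (illustration of
# the base algorithm only, not the paper's GWO-AOA hybrid or its CITO
# encoding).
import random

random.seed(2)

def sphere(x):
    return sum(v * v for v in x)

def gwo(obj, dim=5, pop=15, iters=100):
    wolves = [[random.uniform(-10, 10) for _ in range(dim)]
              for _ in range(pop)]
    center = None
    for t in range(iters):
        wolves.sort(key=obj)
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2 - 2 * t / iters            # exploration factor shrinks over time
        for i in range(3, pop):
            new = []
            for d in range(dim):
                steered = []
                for leader in (alpha, beta, delta):
                    A = a * (2 * random.random() - 1)
                    C = 2 * random.random()
                    steered.append(leader[d]
                                   - A * abs(C * leader[d] - wolves[i][d]))
                new.append(sum(steered) / 3)
            if obj(new) < obj(wolves[i]):    # keep only improving moves
                wolves[i] = new
        # center of the three dominant wolves: the quantity the hybrid
        # substitutes for AOA's single leading individual
        center = [(alpha[d] + beta[d] + delta[d]) / 3 for d in range(dim)]
    wolves.sort(key=obj)
    return wolves[0], center
best, center = gwo(sphere)
```

For CITO, each position would additionally be decoded into a class ordering (e.g. by rank) and the objective would measure stubbing cost rather than a sphere function.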
    Mechanical Equipment Fault Diagnosis Driven by Knowledge
    DONG Jiaxiang, ZHAI Jiyu, MA Xin, SHEN Leixian, ZHANG Li
    Computer Science    2023, 50 (5): 82-92.   DOI: 10.11896/jsjkx.221100160
    With the rapid development of the social economy, modern industry now features complex research objects, information-based application methods and diversified production modes. Industrial fault diagnosis, one of the most important research areas in modern industry, still faces a series of technical bottlenecks due to the complexity of mechanical equipment and the lack of reference knowledge. To solve these problems, this paper proposes a knowledge-driven fault diagnosis scheme for mechanical equipment comprising two parts: knowledge construction and the diagnosis process. For knowledge construction, the paper presents a domain knowledge graph construction method. For the diagnosis process, it designs a general mechanical equipment fault diagnosis procedure consisting of four steps: fault inquiry, fault location, fault cause location, and fault maintenance guidance. The scheme has been deployed at a large excavator maintenance provider in China, where its effectiveness has been verified. Experimental results indicate that the scheme improves the knowledge and intelligence level of excavator fault diagnosis and shows high accuracy and practicability; its application will be further promoted in the industry.
    Review of Intelligent Device Fault Diagnosis Based on Deep Learning
    HUANG Xundi, PANG Xiongwen
    Computer Science    2023, 50 (5): 93-102.   DOI: 10.11896/jsjkx.220500197
    Intelligent fault diagnosis applies deep learning theory to equipment fault diagnosis; it can automatically identify the health state and fault type of equipment, and has attracted extensive attention in the field. It realizes diagnosis by building end-to-end AI models and algorithms that associate equipment monitoring data with machine health status. However, the many existing models and algorithms are not interchangeable: using a model inconsistent with the monitoring data leads to a significant drop in diagnostic accuracy. To address this problem, based on a comprehensive survey of the relevant literature on equipment fault diagnosis, this paper first briefly describes the model framework of deep-learning-based equipment fault diagnosis, then classifies, lists, compares and summarizes the models and algorithms according to specific application scenarios and equipment monitoring data types, and finally analyzes future development directions in light of existing problems. This review is expected to provide a useful reference for research on intelligent equipment fault diagnosis.