Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK

  • Volume 51 Issue 6, 15 June 2024
      
      Computer Software
      Empirical Study on Dependencies and Updates of R Packages
      CHENG Hongzheng, YANG Wenhua
      Computer Science. 2024, 51 (6): 1-11.  doi:10.11896/jsjkx.230200069
      As an excellent tool for statistical analysis and statistical cartography, R is very popular in the fields of statistical analysis and artificial intelligence, and it has a rich open-source ecosystem with a growing number of R packages. A characteristic of the R package development model is that a new R package is often implemented by introducing existing R packages to provide its functionality, which results in very complex dependencies between R packages and even dependency conflicts. Besides dependencies, updates of R packages are another factor that causes this problem. Therefore, an in-depth empirical study of the dependencies and updates of R packages is needed to understand the current state of development of existing R packages. However, existing empirical studies on R have focused on the entire R ecosystem without a specific analysis of the dependencies and updates of R packages. To bridge this gap, this paper presents a detailed analysis of the dependencies, the updates, the potential dependency conflicts, and the updates of dependencies of common R packages based on data from CRAN(Comprehensive R Archive Network) and GitHub. It is found that the dependency relationships between R packages are complex, and the number of packages each R package depends on is generally high; still, the dependencies are concentrated in a small part of R packages. Although common R packages are updated frequently, there are still many conflicts(inconsistencies) between dependencies, and we detect and classify the dependency conflicts of these R packages. The results of our empirical study can give R developers and users a better understanding of the current state of R package development, offer suggestions that help R package developers avoid pitfalls in the development process, and point out directions that researchers can explore further on issues related to R package dependencies and updates.
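      As a rough sketch of the kind of dependency statistics such a study computes, the snippet below counts direct and reverse dependencies from pre-exported CRAN metadata. The metadata dictionary and package lists are placeholders for illustration, not the paper's actual data pipeline.

      # Sketch: counting direct and reverse dependencies from pre-exported CRAN
      # metadata (package -> list of imported packages). The contents are toy data.
      from collections import Counter

      metadata = {
          "ggplot2": ["rlang", "cli", "scales"],
          "dplyr":   ["rlang", "cli", "vctrs"],
          "scales":  ["rlang"],
      }

      # Direct dependency count per package.
      direct = {pkg: len(deps) for pkg, deps in metadata.items()}

      # Reverse dependencies: how often each package is depended on. A heavy skew
      # here mirrors the observation that dependencies concentrate on a small set
      # of packages.
      reverse = Counter(dep for deps in metadata.values() for dep in deps)

      print(direct)
      print(reverse.most_common(3))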
      Summary of Token-based Source Code Clone Detection Techniques
      LIU Chunling, QI Xuyan, TANG Yonghe, SUN Xuekai, LI Qinghao, ZHANG Yu
      Computer Science. 2024, 51 (6): 12-22.  doi:10.11896/jsjkx.230400117
      Code cloning refers to the generation of similar or identical code during software development due to the reuse, modification, and refactoring of source code. Code cloning has a positive impact on improving software development efficiency and reducing development costs, but it can also harm the development and maintenance of software systems, including but not limited to a decline in stability and the propagation of software defects. Clone detection techniques for source code have important research and application value in plagiarism detection, vulnerability detection, copyright infringement, and other fields. Although some excellent detection tools and techniques have emerged, there are still challenges in detecting syntactic and semantic clones on a large scale and in an effective manner. Among existing approaches, lexical-based clone detection technology can quickly detect type 1-3 clones and can be extended to other programming languages and large-scale projects, so it is commonly used for clone detection over large-scale code bases. This paper reviews the research status of lexical-based clone detection technology in the past decade, analyzes and summarizes 16 selected studies along 10 characteristics, and finally proposes possible future research directions for lexical-based clone detection technology in light of new technological developments.
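      A toy illustration of the token-based idea surveyed above (not any particular tool from the 16 studies): identifiers are abstracted to a common token, and similarity is measured on token n-grams.

      # Sketch of token-based clone comparison: normalize identifiers, build token
      # n-grams, and measure overlap with Jaccard similarity. Real tools add
      # indexing to scale this to large repositories.
      def ngrams(tokens, n=3):
          return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

      def jaccard(a, b):
          return len(a & b) / len(a | b) if a | b else 0.0

      code1 = ["int", "ID", "=", "ID", "+", "NUM", ";"]   # identifiers abstracted to ID
      code2 = ["int", "ID", "=", "ID", "+", "ID", ";"]
      print(jaccard(ngrams(code1), ngrams(code2)))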
      Nonsense Variable Names Detection Method Based on Lexical Features and Data Mining
      JIANG Yanjie, DONG Chunhao, LIU Hui
      Computer Science. 2024, 51 (6): 23-33.  doi:10.11896/jsjkx.231100030
      Identifiers are an important part of code and one of the key elements for people to understand the semantics of code. Variables are widely used to represent objects in programs, and the names of such variables can serve as a major clue to the responsibility of the variables if they are carefully and properly named. However, unqualified variable names(e.g., “a”, “var”) are frequently constructed by developers. Such nonsense variable names have a severe negative impact on the readability and maintainability of software applications, so automated identification of this bad smell is one of the hot topics in the field of software refactoring. To identify such nonsense names automatically, we conduct an empirical study to figure out the key features that can be exploited to distinguish nonsense names from well-constructed meaningful ones. Results of the study suggest that nonsense variable names are often short and rarely contain meaningful words. To this end, this paper proposes a heuristics and data mining-based approach to identifying nonsense variable names. It first retrieves suspicious variable names based on lexical analysis. On the resulting suspicious names, it conducts abbreviation expansion-based filtering to exclude variable names that are carefully constructed to represent the abbreviations of meaningful words. Finally, it conducts data mining-based filtering to further exclude well-known symbols(e.g., “i”, “e”). Experimental results on open source datasets show that the proposed method has high accuracy; its average precision and recall are 85% and 91.5%, respectively.
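      A minimal sketch of the lexical-heuristic step described above, assuming a hypothetical dictionary and a mined whitelist of idiomatic symbols; it is not the authors' implementation.

      # Illustrative heuristic: flag suspicious variable names by length and
      # dictionary membership, then keep well-known idiomatic symbols such as "i".
      DICTIONARY = {"count", "total", "index", "buffer"}   # stand-in word list
      WHITELIST  = {"i", "j", "k", "e", "x", "y"}          # mined common symbols

      def looks_nonsense(name: str) -> bool:
          if name.lower() in WHITELIST:
              return False
          # Split on underscores (camelCase splitting omitted for brevity) and
          # check whether any part is a real word.
          parts = [p for p in name.replace("_", " ").split() if p]
          has_word = any(p.lower() in DICTIONARY for p in parts)
          return len(name) <= 3 and not has_word

      print([n for n in ["a", "var", "count", "i", "tmpBuf"] if looks_nonsense(n)])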
      Revisiting Test Sample Selection for CNN Under Model Calibration
      ZHAO Tong, SHA Chaofeng
      Computer Science. 2024, 51 (6): 34-43.  doi:10.11896/jsjkx.230400029
      Deep neural networks are widely used in various tasks, and model testing is crucial to ensure their quality. Test sample selection can alleviate the labor-intensive manual labeling problem by strategically choosing a small set of data to label. However, existing selection metrics based on predictive uncertainty neglect the accuracy of the estimation of predictive uncertainty. To fill this gap, we conduct a systematic empirical study on 3 widely used datasets and 4 convolutional neural networks(CNN) to reveal the relationship between model calibration and the predictive uncertainty metrics used in test sample selection. We then compare the quality of the test subsets selected by calibrated and uncalibrated models. The findings indicate a degree of correlation between uncertainty metrics and model calibration in CNN models. Moreover, CNN models with better calibration select higher-quality test subsets than models with poor calibration. Specifically, the calibrated model outperforms the uncalibrated model in detecting misclassified samples in 70.57% of the experiments. Our study emphasizes the importance of considering model calibration in test selection and highlights the potential benefits of using a calibrated model to improve the adequacy of the testing process.
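      For readers unfamiliar with the quantities involved, the sketch below computes an entropy-based uncertainty score for test selection and the standard expected calibration error(ECE); the probabilities, labels and budget are synthetic placeholders, not the paper's experimental setup.

      # Sketch: entropy as a predictive-uncertainty score and ECE as the
      # calibration measure.
      import numpy as np

      def entropy(probs):                       # probs: (n_samples, n_classes)
          return -np.sum(probs * np.log(probs + 1e-12), axis=1)

      def ece(probs, labels, n_bins=10):
          conf, pred = probs.max(axis=1), probs.argmax(axis=1)
          bins, err = np.linspace(0.0, 1.0, n_bins + 1), 0.0
          for lo, hi in zip(bins[:-1], bins[1:]):
              mask = (conf > lo) & (conf <= hi)
              if mask.any():
                  acc = (pred[mask] == labels[mask]).mean()
                  err += mask.mean() * abs(acc - conf[mask].mean())
          return err

      # Select the most uncertain samples for labeling.
      probs = np.array([[0.9, 0.1], [0.55, 0.45], [0.6, 0.4]])
      chosen = np.argsort(-entropy(probs))[:2]
      print(chosen, ece(probs, np.array([0, 1, 0])))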
      Software Data Clustering Method Combining Code Snippets and Hybrid Topic Models
      WEI Linlin, SHEN Guohua, HUANG Zhiqiu, CAI Mengnan, GUO Feifei
      Computer Science. 2024, 51 (6): 44-51.  doi:10.11896/jsjkx.230300091
      Using topic models to cluster documents is a common practice in many text mining tasks. Many studies use topic models to cluster data from software websites to analyze the development of communities in different fields. However, because such software-related data often contain code snippets and the distribution of text length is uneven, a single traditional topic model easily produces unstable clustering results on this kind of text data. This paper proposes a clustering method combining code snippets and hybrid topic models, and uses Stack Overflow as the data source to construct a Python third-party library dataset from the top 60 questions on the platform. After analysis, the data are finally divided into the following six different areas: network security, data analysis, artificial intelligence, text processing, software development and system terminal. Experimental results show that, in terms of both automatic and manual evaluation indicators, topic modeling that combines code snippets with text yields well-divided clustering results, while combining multiple models in the experiments can improve the stability and accuracy of the clustering results to a certain extent.
      Automatic Tensorization for TPU Coarse-grained Instructions
      LIU Lei, ZHOU Zhide, LIU Xingxiang, CHE Haoyang, YAO Lei, JIANG He
      Computer Science. 2024, 51 (6): 52-60.  doi:10.11896/jsjkx.230800049
      Tensorization refers to the process of calling specific hardware instructions to accelerate tensor programs. TPU supports various coarse-grained instructions for computation and memory transactions without clear constraints on the input scale. How to use these instructions to automatically generate tensorized programs has become an important topic. However, existing tensorization methods require a large number of handwritten matching fragments for coarse-grained instructions and do not support flexible instruction parallelism optimizations such as ping-pong buffering, which makes them inefficient to scale to TPU scenarios. To this end, this paper proposes Tir2TPU, an automatic tensorization method for TPU coarse-grained instructions. Firstly, Tir2TPU extracts the iterator binding information of the Block structure and automatically performs instruction replacement while traversing TensorIR’s abstract syntax tree. Secondly, it utilizes a parallel model that simulates hardware behavior to generate a parallel instruction flow. Finally, Tir2TPU employs a hardware-centric schedule space based on TPU features, which greatly accelerates the auto-tuning process. The performance of Tir2TPU is evaluated on 5 commonly used operators in machine learning models. Experimental results show that Tir2TPU can achieve up to 3× and an average of 1.78× speedup compared to TPU’s compiler, and consistently delivers 90% of the performance of manually optimized operators.
      Prompt Learning Based Parameter-efficient Code Generation
      XU Yiran, ZHOU Yu
      Computer Science. 2024, 51 (6): 61-67.  doi:10.11896/jsjkx.230400137
      Automatic code generation is one of the effective ways to improve the efficiency of software development. Existing research often regards code generation as a sequence-to-sequence task, and fine-tuning large-scale pre-trained language models is often accompanied by high computing costs. In this paper, a prompt learning based parameter-efficient code generation method(PPECG) is proposed. This method guides the pre-trained language model to generate code by querying the result that is most similar to the current intent in the code corpus, and most of the parameters of the model are kept fixed in the process to reduce computing costs. In order to verify the effectiveness of PPECG, two datasets for code generation are selected in this paper, namely CONCODE and Solidity4CG. The effectiveness of PPECG is verified by calculating the BLEU, CodeBLEU and Exact Match values of the generated results. Experimental results show that PPECG effectively reduces the graphics memory cost during fine-tuning, and is basically close to or even better than the current SOTA methods on the above benchmarks, being capable of completing code generation tasks well.
      Identifying Coincidental Correct Test Cases Based on Machine Learning
      TIAN Shuaihua, LI Zheng, WU Yonghao, LIU Yong
      Computer Science. 2024, 51 (6): 68-77.  doi:10.11896/jsjkx.230400017
      Spectrum-based fault localization(SBFL) techniques have been widely studied to help developers quickly find the position of a fault, so as to reduce the cost of program debugging. However, there is a special kind of test case in test suites that executes the faulty statement but outputs the expected result; it is called the coincidental correct(CC) test case. CC test cases can negatively affect the performance of SBFL fault localization. In order to mitigate the negative impact of CC test cases and enhance the performance of the SBFL technique, this paper proposes CC test case identification via a machine learning approach(CCIML). The CCIML approach utilizes features extracted from SBFL suspiciousness formulas and program static features to identify CC test cases, thus improving the fault localization accuracy of the SBFL technique. To evaluate the performance of the CCIML approach, experiments are carried out on the Defects4J dataset. The experimental results show that the average recall, precision, and F1 score of the CCIML approach for identifying CC test cases are 63.89%, 70.16%, and 50.64%, respectively, better than the baselines. In addition, after processing the CC test cases identified by the CCIML approach using the cleaning and relabeling strategies, the fault localization performance obtained is also better than the comparison baselines. Under the cleaning and relabeling strategies, the numbers of faulty statements ranked first by suspiciousness value are 328 and 312, respectively. Compared to the fuzzy weighted K-nearest neighbor(FW-KNN) approach, the fault localization accuracy is improved by 124.66% and 235.48%.
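      The general recipe can be sketched as follows, with a hypothetical feature set rather than the paper's exact features: represent each passing test that covers the faulty statement by SBFL suspiciousness values plus static metrics, and train an off-the-shelf classifier to flag CC test cases.

      # Sketch of CC test-case identification as supervised classification.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      # Hypothetical feature matrix: [ochiai, tarantula, stmt_length, branch_depth]
      X_train = np.array([[0.81, 0.77, 12, 3],
                          [0.10, 0.15,  4, 1],
                          [0.75, 0.70,  9, 2]])
      y_train = np.array([1, 0, 1])            # 1 = coincidentally correct test

      clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
      print(clf.predict(np.array([[0.78, 0.72, 10, 2]])))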
      Sequence-based Program Semantic Rule Mining and Violation Detection
      LI Zi, ZHOU Yu
      Computer Science. 2024, 51 (6): 78-84.  doi:10.11896/jsjkx.230300224
      In software development, source code that violates semantic rules may compile or run normally but may have defects in performance or functionality. Therefore, accurately detecting such defects has become a challenge. Existing research usually adopts itemset-based rule mining and detection methods, but these methods have significant room for improvement in detection ability and accuracy because they fail to effectively integrate the order information and control flow information of source code. To address this problem, this paper proposes a sequence-based method called SPUME for extracting program semantic rules and detecting their violations. The method converts program source code into an intermediate representation sequence, extracts semantic rules from it using sequence rule mining algorithms, and detects defects in the source code based on these rules. To verify the effectiveness of SPUME, it is compared with three baseline methods, namely PR-Miner, Tikanga, and Bugram. Experimental results show that compared with PR-Miner, which is based on unordered itemset mining, and Tikanga, which combines graph models, SPUME has significantly improved detection performance, speed, and accuracy. Compared with Bugram, which is based on N-gram language models, SPUME detects more program defects more efficiently while maintaining a similar level of accuracy.
      Software Diversity Composition Based on Multi-objective Optimization Algorithm NSGA-II
      XIE Genlin, CHENG Guozhen, LIANG Hao, WANG Qingfeng
      Computer Science. 2024, 51 (6): 85-94.  doi:10.11896/jsjkx.221100194
      Software diversity is widely used in scenarios such as software development because it effectively improves system resilience and the cost of malicious binary analysis. How to collaboratively deploy existing diversity techniques to obtain higher security gains while ensuring lower performance overhead is a key issue of software diversity research. The search algorithms of existing software diversity composition methods are inefficient, their search space is small, and their security evaluation metrics are not comprehensive, so it is difficult to comprehensively reflect the impact of software diversity on various attacks. To solve these problems, a software diversity composition method based on a multi-objective optimization algorithm is proposed. The software diversity composition problem is formulated as a multi-objective optimization model that comprehensively considers TLSH similarity, gadget quality and CPU clock cycles. A solution algorithm based on NSGA-II, including chromosome encoding, adaptive crossover and mutation operators, and a validation algorithm for composition schemes, is designed for the model. Experimental results show that the proposed method can effectively generate software diversity compositions with high security gain and low performance overhead.
      Method of Generating Test Data by Genetic Algorithm Based on ART Optimal Selection Strategy
      LI Zhibo, LI Qingbao, LAN Mingjing
      Computer Science. 2024, 51 (6): 95-103.  doi:10.11896/jsjkx.230100012
      Automatic generation of test data is a hot topic in the field of software testing. The heuristic search algorithm based on the genetic algorithm is a method to generate test data for path coverage. In this paper, a method based on adaptive random testing(ART) to update the population is proposed. ART is integrated into the genetic algorithm to optimize the selection operation and dynamically update the population. As a result, individual diversity in the process of population evolution is increased, the convergence speed is improved, and falling into local optima is effectively reduced. Experimental results show that path coverage is significantly improved and the average number of generations is effectively reduced.
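      One way such an ART-style selection operator can work is sketched below: candidates whose minimum distance to the already selected individuals is largest are preferred, which keeps the population spread out. This is an illustrative sketch, not the paper's exact operator.

      # Sketch: max-min-distance selection in the spirit of adaptive random testing.
      import random

      def dist(a, b):                       # Euclidean distance between test inputs
          return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

      def art_select(candidates, k):
          selected = [random.choice(candidates)]
          while len(selected) < k:
              best = max(
                  (c for c in candidates if c not in selected),
                  key=lambda c: min(dist(c, s) for s in selected),
              )
              selected.append(best)
          return selected

      pool = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(20)]
      print(art_select(pool, 5))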
      Database & Big Data & Data Science
      Study on Method for Collaborative Tuning Resources and Parameters of Cloud Database
      LI Yuhang, TAN Ruixiong, CHAI Yunpeng
      Computer Science. 2024, 51 (6): 104-110.  doi:10.11896/jsjkx.231000156
      In cloud databases,there are numerous configuration options,including internal database parameters and virtual machine resource configuration for the environment deployment,which collectively determine the database’s read/write performance and resource consumption.In the cloud environment with elastic resources,users are concerned about both the database’s service performance and resource consumption costs.However,due to the large number of configuration options and rapid workload changes,finding the optimal combination of configurations becomes challenging.To address the online tuning scenario with dynamically changing workloads,this paper proposes CoTune,a fast tuning method for coordinating cloud database resources and parameters.This method focuses on OLTP workloads and iteratively adjusts the configurations of virtual machine resources and database parameters to minimize resource consumption while ensuring service quality.The method introduces several key innovations:firstly,it adopts a three-stage approach within each tuning cycle to adjust resource quotas and database parameters,prioritizing service quality;secondly,it classifies the impact of database parameters on different resources,reducing the search space and enabling rapid parameter adjustments;and finally,it incorporates a reinforcement learning model for database parameter tuning,with a specific reward function designed to quickly obtain reward values and accelerate the tuning frequency.Experimental results demonstrate that,compared to approaches that simultaneously tune resources and parameters or solely focus on resource tuning,the proposed method reduces resource consumption while maintaining service quality.Through rapid iterative tuning,it effectively addresses the challenges posed by workload variations and achieves more efficient resource utilization in dynamic workload environments.
      CDES:Data-driven Efficiency Evaluation Methodology for Cloud Database
      HAN Yujie, XU Zhijie, YANG Dingyu, HUANG Bo, GUO Jianmei
      Computer Science. 2024, 51 (6): 111-117.  doi:10.11896/jsjkx.231000140
      Evaluating database efficiency in a large-scale cloud production environment is crucial for cloud vendors to optimize cloud costs. In order to evaluate the usage efficiency of cloud databases, this paper proposes CDES, a data-driven cloud database efficiency evaluation method based on the fusion of computing and storage indicators. According to the load behavior and performance profile of a cloud database instance, this method selects the main metrics that affect the cost and efficiency of the cloud database from the two aspects of computing and storage, and then combines the data collected by the cloud monitoring platform to evaluate the efficiency of cloud database instances and clusters. Based on the evaluation results of CDES, a governance scheme for cloud database efficiency is further proposed, together with governance optimization suggestions to guide users to improve the efficiency of resource utilization and reduce idle resources. Finally, CDES has been deployed in the production environment of a large Internet enterprise and used for the performance evaluation of a cloud OLTP database product. The results show that the proposed method can effectively evaluate the efficiency of a cluster with more than 5 000 cloud database instances and guide its governance, and the governance results can save up to 46.15% of the instance cost.
      Study on Anomalous Evolution Pattern on Temporal Networks
      WU Nannan, GUO Zehao, ZHAO Yiming, YU Wei, SUN Ying, WANG Wenjun
      Computer Science. 2024, 51 (6): 118-127.  doi:10.11896/jsjkx.230600168
      Competitive methods for anomalous subgraph detection have been successfully applied to tasks such as event detection in social networks and traffic congestion detection in road networks. However, few studies have addressed the dynamic evolution of anomalous subgraphs in attributed graphs. For the evolving pattern of multiple anomalous subgraphs, this is the first dynamic graph-based study to capture multiple anomalies connected across time intervals. This study proposes an approach, namely dynamic evolution of multiple anomalous subgraphs scanning(DE-MASS), to detect the most anomalous evolutionary pattern, which consists of multiple anomalous subgraphs on attributed graphs. DE-MASS outperforms the competitive baselines on a real Weibo dataset and a real computer traffic dataset, and captures the evolution patterns of anomalous subgraphs in three real-world applications: traffic congestion detection in urban road networks(Beijing, Tianjin, and Nanjing in China), event detection in a social network(Weibo) and cyber-attack detection in a computer traffic network.
      Motif-aware Adaptive Cross-layer Random Walk Community Detection
      WANG Beibei, XIN Junchang, CHEN Jinyi, WANG Zhiqiong
      Computer Science. 2024, 51 (6): 128-134.  doi:10.11896/jsjkx.231000142
      In recent years, community detection in multi-layer networks using high-order interaction information has become a research hotspot. To address this problem, a motif-aware adaptive cross-layer random walk community detection(MACLCD) algorithm is proposed. The algorithm considers high-order interactions and inter-layer correlations in multi-layer networks to improve the accuracy of community detection. Specifically, the inter-layer correlation is first revealed through comprehensive measurement from the perspectives of the network and the node. Secondly, considering that each layer of the network may have different local and global structural characteristics, motifs are used to identify the unique high-order interaction structure of each layer, and a multi-layer weighted hybrid-order network is constructed. Furthermore, a cross-layer walking model is designed, and a jump factor is introduced to ensure that the random walk can traverse the multi-layer network adaptively, so as to capture more diverse network structural information. Experimental comparisons are conducted on four real-world network datasets, and the results demonstrate that the MACLCD algorithm outperforms the comparison algorithms in terms of community detection performance.
      Deep Multiple-sphere Support Vector Data Description Based on Variational Autoencoder with Mixture-of-Gaussians Prior
      WU Huinan, XING Hongjie, LI Gang
      Computer Science. 2024, 51 (6): 135-143.  doi:10.11896/jsjkx.230300194
      With the continuous increase of data dimension and scale, anomaly detection methods based on deep learning have achieved excellent detection performance, among which deep support vector data description(Deep SVDD) has been widely used. However, it is necessary to impose constraints on various parameters of the mapping network in Deep SVDD to alleviate the hypersphere collapse problem. In order to further improve the feature learning ability of the mapping network in Deep SVDD and solve the hypersphere collapse problem, deep multiple-sphere support vector data description based on variational autoencoder with mixture-of-Gaussians prior(DMSVDD-VAE-MoG) is proposed. First, the network parameters and multiple hypersphere centers are initialized by pre-training. Second, the latent features of the training data are obtained by the mapping network. The VAE loss, the average radius of the multiple hyperspheres, and the average distance between the latent features and their corresponding hypersphere centers are jointly optimized to obtain the optimal network connection weights and multiple minimum hyperspheres. In comparison with eight other related methods, experimental results show that the proposed DMSVDD-VAE-MoG achieves better detection performance on MNIST, Fashion-MNIST and CIFAR-10.
      Robust Estimation and Filtering Methods for Ordinal Label Noise
      JIANG Gaoxia, WANG Fei, XU Hang, WANG Wenjian
      Computer Science. 2024, 51 (6): 144-152.  doi:10.11896/jsjkx.230700115
      Large-scale labeled datasets inevitably contain label noise, which limits the generalization performance of models to some extent. The labels of ordinal regression datasets are discrete values, but there exist ordinal relationships between different labels. Although ordinal regression labels have the characteristics of both classification and regression labels, the label noise filtering algorithms for classification and regression tasks are not fully applicable to ordinal label noise. To solve this problem, an Akaike generalization error estimation for regression models with label noise is proposed. On this basis, a label noise filtering framework for the ordinal regression task is designed. Besides, a robust ordinal label noise estimation method is proposed; it adopts a median-based fusion strategy to reduce the interference of abnormal estimated components. Finally, this estimation method is combined with the proposed framework to form a noise-robust fusion filtering(RFF) algorithm. The effectiveness of RFF is verified on benchmark datasets and a real age estimation dataset. Experimental results show that the performance of the RFF algorithm is better than that of other classification and regression filtering algorithms in ordinal regression tasks. It is adaptive to different kinds of noise and can effectively improve data quality and model generalization performance.
      Subspace-based I-nice Clustering Algorithm
      HE Yifan, HE Yulin, CUI Laizhong, HUANG Zhexue
      Computer Science. 2024, 51 (6): 153-160.  doi:10.11896/jsjkx.230800200
      Subspace clustering of high-dimensional data is a hot issue in the field of unsupervised learning. The difficulty of subspace clustering lies in finding the appropriate subspaces and the corresponding clusters. At present, most existing subspace clustering algorithms suffer from high computational complexity and difficulty in parameter selection, because the number of subspace combinations is very large and the algorithmic execution time is very long for high-dimensional data; moreover, different datasets and application scenarios require different parameter inputs. Thus, this paper proposes a new subspace clustering algorithm named sub-I-nice to recognize all clusters in subspaces. First, the sub-I-nice algorithm randomly divides the original dimensions into groups to build subspaces. Second, the I-niceMO algorithm is used to recognize clusters in each subspace. Finally, a newly designed ball model is used to construct the subspace clustering ensemble. Extensive experiments are conducted to validate the clustering performance of the sub-I-nice algorithm on synthetic datasets with noise. Experimental results show that the sub-I-nice algorithm has better accuracy and robustness than the other three representative clustering algorithms, thereby confirming its rationality and effectiveness.
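      A minimal sketch of the subspace-construction step, assuming the I-niceMO clustering and the ball-model ensemble are applied afterwards (both omitted here); the data and group count are placeholders.

      # Sketch: randomly partition the original dimensions into groups, each group
      # defining one subspace to be clustered separately.
      import numpy as np

      def random_subspaces(n_dims, n_groups, seed=0):
          rng = np.random.default_rng(seed)
          dims = rng.permutation(n_dims)
          return np.array_split(dims, n_groups)

      X = np.random.rand(100, 12)              # toy high-dimensional data
      for group in random_subspaces(X.shape[1], n_groups=3):
          X_sub = X[:, group]                  # data projected onto one subspace
          print(group, X_sub.shape)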
      Continuous Influence Maximization Under Independent Cascade Propagation Model
      DENG Ziwei, CHEN Ling, LIU Wei
      Computer Science. 2024, 51 (6): 161-171.  doi:10.11896/jsjkx.230400006
      Influence maximization seeks a group of the most influential users in a social network as seed nodes, and spreads information through the seed nodes to maximize the information spread. Most existing research on influence maximization assumes that each node is either a seed or not. In practical applications, however, a user's probability of becoming a seed should be determined according to its influence in the social network, so as to maximize the expected range of influence of the seed set obtained according to the probability distribution. This is the problem of continuous influence maximization. A continuous influence maximization algorithm under the independent cascade propagation model is proposed. The algorithm first abstracts the problem into a constrained optimization problem, then samples several possible seed sets and estimates the influence propagation range for each of them. In each iteration, the gradient descent method is employed to calculate the increment value in each direction according to the estimated propagation range, and the direction of the maximum increment is taken as the gradient to update the objective function value. Through such iterations, the optimal solution of the objective function can be obtained. Experiments on real and virtual datasets show that the proposed algorithm obtains a significantly larger expected range of influence than the Random, Degree, UD and CD algorithms.
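      A rough sketch of such an optimization loop under stated simplifications: each node carries a seed probability, the expected spread is estimated by sampling seed sets (estimate_spread is a placeholder for Monte Carlo simulation of the IC model), and the probability vector is nudged along the best coordinate direction. The budget and step size are illustrative, not the paper's settings.

      import random

      def estimate_spread(probs, samples=100):
          total = 0
          for _ in range(samples):
              seeds = [v for v, p in enumerate(probs) if random.random() < p]
              total += len(seeds)              # placeholder for IC-model simulation
          return total / samples

      def optimize(probs, step=0.05, iters=50, budget=2.0):
          for _ in range(iters):
              base = estimate_spread(probs)
              gains = [estimate_spread(probs[:v] + [min(1.0, probs[v] + step)] + probs[v + 1:]) - base
                       for v in range(len(probs))]
              v_best = max(range(len(probs)), key=lambda v: gains[v])
              if sum(probs) + step <= budget:
                  probs[v_best] = min(1.0, probs[v_best] + step)
          return probs

      print(optimize([0.1] * 10))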
      Computer Graphics & Multimedia
      Survey of Breast Cancer Pathological Image Analysis Methods Based on Graph Neural Networks
      CHEN Sishuo, WANG Xiaodong, LIU Xiyang
      Computer Science. 2024, 51 (6): 172-185.  doi:10.11896/jsjkx.230400106
      Pathological diagnosis is the gold standard for cancer diagnosis and treatment,the use of artificial intelligence(AI) models for analyzing pathological images has the potential to not only reduce the workload of pathologists but also improve the accuracy of cancer diagnosis and treatment.However,these methods face challenges due to the large scale of pathological images and the difficulty in interpreting the predicted results.In recent studies,graph neural networks have shown their strong abilities in modeling spatial context and interpretability of entities in images,which provides a new idea for the study of digital pathology.In this survey,we review recent related works in computer vision,analyze the advantages of graph neural networks for breast cancer pathology,classify and compare existing graph construction methods,and analyze and compare graph neural network models proposed in recent years.We also summarize the challenges that exist in using graph neural networks for analyzing pathological images of breast cancer and prospect the future research directions.
      Review of Heterogeneous Iris Recognition
      KONG Jialin, ZHANG Qi, WANG Caiyong
      Computer Science. 2024, 51 (6): 186-197.  doi:10.11896/jsjkx.231200175
      The variations in iris image acquisition environments and devices result in significant disparities between iris registration and recognition samples, which brings challenges to traditional iris recognition technology. Heterogeneous iris recognition has therefore emerged as a focal point of interest in both academic and industrial domains. This paper classifies and summarizes the existing heterogeneous iris recognition methods from three perspectives: different levels, sample distinctiveness, and single-source versus multi-source scenarios, and summarizes the latest advancements in heterogeneous iris recognition. Existing heterogeneous iris datasets are reviewed according to the classification of cross-quality, cross-device and cross-spectrum, and the iris recognition evaluation metrics are summarized so that researchers can better evaluate and validate algorithm performance. Finally, future development directions of heterogeneous iris recognition are discussed, focusing on three aspects: environmental robustness, modeling of data heterogeneity and multimodal fusion.
      Vision-enhanced Multimodal Named Entity Recognition Based on Contrastive Learning
      YU Bihui, TAN Shuyue, WEI Jingxuan, SUN Linzhuang, BU Liping, ZHAO Yiman
      Computer Science. 2024, 51 (6): 198-205.  doi:10.11896/jsjkx.230400052
      Multimodal named entity recognition(MNER) aims to detect the spans of entities in a given image-text pair and classify them into the corresponding entity types. Although existing MNER methods have achieved success, they all focus on using an image encoder to extract visual features, which are fed directly into the cross-modal interaction mechanism without enhancement or filtering. Moreover, since the representations of text and images come from different encoders, it is difficult to bridge the semantic gap between the two modalities. Therefore, a vision-enhanced multimodal named entity recognition model based on contrastive learning(MCLAug) is proposed. First, ResNet is used to collect image features. On this basis, a pyramid bidirectional fusion strategy is proposed to combine low-level high-resolution and high-level strongly semantic image information to enhance visual features. Secondly, using the idea of multimodal contrastive learning in the CLIP model, the contrastive loss is calculated and minimized to make the representations of the two modalities more consistent. Finally, the fused image and text representations are obtained using a cross-modal attention mechanism and a gated fusion mechanism, and a CRF decoder is used to perform the MNER task. Comparative experiments, ablation studies and case studies on two public datasets demonstrate the effectiveness of the proposed model.
      Deformable Image Registration Model Based on Weighted Bounded Deformation Function
      MIN Lihua, DING Tianzhong, JIN Zhengmeng
      Computer Science. 2024, 51 (6): 206-214.  doi:10.11896/jsjkx.230400090
      Deformable image registration is a very important topic in the field of image processing. It is one of the most basic problems in computer vision and also a difficult point in medical image analysis. In this paper, we study the registration of two uni-modal grayscale images. A new deformable image registration model based on a weighted bounded deformation function is proposed by fully considering the edge information of the reference image. In addition, the paper first proposes a weighted bounded deformation function space, for which the definition and related conclusions are given. Theoretically, we prove the existence of solutions to the proposed model. Furthermore, an effective algorithm based on the gradient descent method is designed to numerically solve the model. Numerical experiments performed on both synthetic images and medical images show that, compared with other models, the proposed model can obtain more accurate registration results by introducing control functions and using weighted bounded deformation functions as regularization terms, especially at image edges.
      LiDAR-Radar Fusion Object Detection Algorithm Based on BEV Occupancy Prediction
      LI Yuehao, WANG Dengjiang, JIAN Haifang, WANG Hongchang, CHENG Qinghua
      Computer Science. 2024, 51 (6): 215-222.  doi:10.11896/jsjkx.230500085
      Beam attenuation and target occlusion in the working environment of LiDAR can cause the output point cloud to be sparse at the far end,which leads to the phenomenon of detection accuracy degradation with distance for 3D object detection algorithms based on LiDAR.To address this problem,a LiDAR-radar fusion object detection algorithm based on BEV occupancy prediction is proposed.First,a simplified bird’s eye view(BEV) occupancy prediction sub-network is proposed to generate position-related radar features,which also helps to solve the network convergence difficulty problem caused by the sparsity of radar data.Then,in order to achieve cross-modal feature fusion,a multi-scale LiDAR-radar fusion layer based on BEV space feature correlation is designed.Experimental results on the nuScenes dataset show that the mean average precision(mAP) of the proposed radar branch network reaches 21.6%,and the inference time is 8.3ms.After adding the fusion layer structure,the mAP of the multi-modal detection algorithm improves by 2.9%,compared to the baseline algorithm CenterPoint,and the additional inference time overhead is only 8.6ms.At the 30m position of the distance sensor,the detection accuracy of the multi-modal algorithm for 10 categories in the nuScenes dataset increases by 2.1%~16.0% compared to CenterPoint respectively.
      Real-time Dispersion Rendering Method Based on Adaptive Photons and Hierarchical Dispersion Map
      LUO Yuanmeng, ZHANG Jun
      Computer Science. 2024, 51 (6): 223-230.  doi:10.11896/jsjkx.230300097
      Caustic is the bright phenomenon formed when light rays gather in an area after reflection or refraction.Dispersion is a color spectrum phenomenon that occurs due to the difference in refractive index of monochromatic light of different wavelengths in refractive caustic,and is a complex and time-consuming lighting calculation step when rendering realistic translucent objects.Existing ray tracing techniques must rely on high-end GPU hardware for real-time dispersion rendering.Based on the image-space caustic map technique,a simple and efficient real-time dispersion rendering method is proposed in the paper,in which the method of sampling 7 monochromatic lights and adaptively resizing 7 color photons is proposed for rendering the approximate whole dispersion spectrum.The hierarchical dispersion map strategy is proposed to improve the rendering efficiency by avoiding the increase of photon rasterization size.Experimental results show that the proposed method can achieve real-time rendering on PC,and the whole continuous spectrum is simulated with 7 monochromatic lights of discrete sampling spectrum,which reduces the calculation and storage of rendering,and improves the noise problem based on the image-space technique.
      Point Cloud Upsampling Network Incorporating Transformer and Multi-stage Learning Framework
      LI Zekai, BAI Zhengyao, XIAO Xiao, ZHANG Yihan, YOU Yilin
      Computer Science. 2024, 51 (6): 231-238.  doi:10.11896/jsjkx.230300154
      Drawing on the Transformer’s powerful feature encoding capabilities in the fields of natural language and computer vision, and inspired by multi-stage learning frameworks, a point cloud upsampling network that incorporates the Transformer and a multi-stage learning framework is designed. The network adopts a two-stage model. The first stage is a dense point generation network: a multi-layer Transformer encoder progressively transforms the local geometric and local feature information of the input point cloud into high-level semantic features, a feature expansion module upsamples the point cloud features in feature space, and a coordinate regression module remaps the point cloud from feature space back to Euclidean space to initially generate a dense point cloud. The second stage is a point-by-point optimisation network: a Transformer encoder encodes the latent semantic features of the dense point cloud and combines them with the semantic features from the previous stage to obtain the complete semantic features of the point cloud; an information integration module extracts the error features of the points from the geometric information and semantic features of the dense point cloud; and an error regression module calculates the coordinate offsets of the points in Euclidean space from the error features to realise point-by-point optimisation of the dense point cloud, so that the distribution of points is more uniform and closer to the real object surface. In extensive experiments on the large synthetic dataset PU1K, the high-resolution point clouds generated by MSPUiT reduce the Chamfer distance(CD), Hausdorff distance(HD) and the distance from the generated point cloud to the original point cloud patches(P2F) to 0.501×10⁻³, 5.958×10⁻³ and 1.756×10⁻³, respectively. Experimental results show that the surface of the point cloud is smoother and less noisy after upsampling by MSPUiT, and the quality of the generated point cloud is higher than that of current mainstream point cloud upsampling networks.
      DETR with Multi-granularity Spatial Attention and Spatial Prior Supervision
      LIAO Junshuang, TAN Qinhong
      Computer Science. 2024, 51 (6): 239-246.  doi:10.11896/jsjkx.230300218
      The Transformer has shown remarkable performance in the field of computer vision in recent years, and has gained widespread attention due to its excellent global modeling capability and competitive performance compared to convolutional neural networks(CNNs). Detection Transformer(DETR) is the first end-to-end network that adopts the Transformer architecture for object detection, but it suffers from slow convergence during training and suboptimal performance due to its equivalent modeling across the global scope and the indistinguishability of object query keys. To address these issues, we propose replacing the self-attention in the encoder and the cross-attention in the decoder of DETR with a multi-granularity attention mechanism, using fine-grained attention for tokens that are close in distance and coarse-grained attention for tokens that are far apart, to enhance its modeling capability. We also introduce spatial prior constraints in the cross-attention of the decoder to supervise network training, which accelerates convergence. Experimental results show that the improved model, incorporating the multi-granularity attention mechanism and spatial prior supervision, achieves a 16% improvement in recognition accuracy on the PASCAL VOC2012 dataset compared to the unmodified DETR, with doubled convergence speed.
      Early-stage Fatigue Detection Based on Frequency Domain Information of Eye Features
      HUO Xingxing, HU Ruimin, LI Yixin
      Computer Science. 2024, 51 (6): 247-255.  doi:10.11896/jsjkx.230300033
      Fatigue of baggage X-ray security inspectors is an important cause of false and missed inspections. Previous work in this field has mostly focused on detecting extreme fatigue with explicit signs such as yawning, nodding off and prolonged eye closure. However, for security inspectors, such explicit signs may not appear until just before an accident, when it is too late to detect fatigue. Thus, there is significant value in detecting fatigue at an early stage, so as to warn of the onset of fatigue in time. Because the facial signs of early-stage fatigue are subtle, time-domain parameters alone cannot represent it completely. To solve this problem, an early-stage fatigue detection method for baggage X-ray security inspectors based on the frequency domain information of eye features is proposed, which converts the original time-domain information into a more expressive frequency-domain feature space. It first obtains the eye aspect ratio series through a facial detection algorithm, then transforms the time-domain features into the frequency-domain space for analysis to mine more subtle features. Finally, HM-LSTM is used for training and verification. Experiments are conducted on the UTA-RLDD dataset. The results show that the proposed architecture improves the recognition rate of early-stage fatigue by 2%, demonstrating that frequency-domain features have better expressive ability than time-domain features.
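      A small sketch of the time-to-frequency step, assuming a synthetic eye-aspect-ratio(EAR) series; the downstream HM-LSTM classifier is not shown, and the frame rate, window length and band limits are illustrative.

      # Sketch: turn a window of EAR values into frequency-domain features via FFT.
      import numpy as np

      fps, seconds = 30, 10
      t = np.arange(fps * seconds) / fps
      ear = 0.3 + 0.02 * np.sin(2 * np.pi * 0.4 * t) + 0.01 * np.random.randn(t.size)

      spectrum = np.abs(np.fft.rfft(ear - ear.mean()))
      freqs = np.fft.rfftfreq(ear.size, d=1.0 / fps)

      # Simple frequency-domain features: dominant blink-related frequency and
      # energy in a low-frequency band.
      dominant = freqs[spectrum.argmax()]
      low_band_energy = spectrum[(freqs > 0.1) & (freqs < 0.5)].sum()
      print(dominant, low_band_energy)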
      Scene Text Detection Algorithm Based on Feature Enhancement
      GAO Nan, ZHANG Lei, LIANG Ronghua, CHEN Peng, FU Zheng
      Computer Science. 2024, 51 (6): 256-263.  doi:10.11896/jsjkx.230500230
      To address the problem of missed and false detection of image text in natural scenes due to complex backgrounds and variable scales,this paper proposes a text detection algorithm for scenes based on feature enhancement.In the feature pyramid fusion stage,a dual-domain attention feature fusion module(D2AAFM)is proposed,which can better fuse feature map information of different semantics and scales,thus improving the characterization ability of text information.At the same time,considering the problem of semantic information loss in the process of up-sampling and fusion of deeper feature maps of the network,the multi-scale spatial perception module(MSPM)is proposed to enhance the semantic features of text in higher-level feature maps by expanding the perceptual field to obtain contextual information of a larger perceptual field,thus effectively reduce the text of missed and false detection.In order to evaluate the effectiveness of the proposed algorithm,it is tested on the publicly available datasets ICDAR2015,CTW1500 and MSRA-TD500,and its overall index F-value reaches 82.8%,83.4% and 85.3%,respectively.The experimental results show that the algorithm has good detection capability on different datasets.
      Center Point Target Detection Algorithm Based on Improved Swin Transformer
      LIU Jiasen, HUANG Jun
      Computer Science. 2024, 51 (6): 264-271.  doi:10.11896/jsjkx.230300222
      Aiming at the shortcomings of Swin Transformer in extracting local feature information and expressing features,this paper proposes a center point target detection algorithm based on improved Swin Transformer to improve its performance in target detection.By adjusting the network structure and introducing a deconvolution module to enhance the network’s ability to extract local feature information,using an adaptive two-dimensional Gaussian kernel and a regression head module to detect the center point of the target,so as to enhance the feature expression ability,and adding a dropout activation function to the Swin Transformer block module to alleviate the network overfitting problem.The improved algorithm is validated on the Pascal VOC and MS COCO 2017 datasets,respectively.The experimental results show that the improved Swin Transformer algorithm achieves an accuracy of 81.1% on the Pascal VOC dataset and 37.2% on the MS COCO dataset,significantly superior to other mainstream object detection algorithms.
      Artificial Intelligence
      Study on Human-Machine Hybrid Intelligent Decision-making Paradigm and Its Operational Application
      DING Yanyan, FENG Jianhang, YE Ling, ZHENG Shaoqiu, LIU Fan
      Computer Science. 2024, 51 (6): 272-281.  doi:10.11896/jsjkx.230300180
      Human-machine hybrid intelligence combines machine intelligence and human intelligence,giving full play to the respective intelligence advantages of machines and humans,and realizing cross-vector and cross-cognition of intelligence.As a new form of intelligence,human-machine hybrid intelligence has a wide range of application prospects.Human-machine hybrid intelligence decision making introduces human thinking into intelligence systems,and utilizes multi-intelligence cooperation to complete hybrid decision-making for a certain task.Existing research on human-machine hybrid intelligence decision making lacks holistic theoretical descriptions and categorical comparisons,more importantly,there are fewer architectural descriptions of operational decision making systems concerning the military domain.Therefore,the generic human-machine hybrid decision making paradigm is classified from the perspectives of collaborative interaction means and decision stages,while the applications of human-machine hybrid intelligent decision making systems in different paradigms for operational purposes are analyzed.In addition,the problems of current human-machine hybrid intelligent decision-making paradigms and systems are summarized,and the future development directions are prospected.
      Review of Graph Neural Networks
      HOU Lei, LIU Jinhuan, YU Xu, DU Junwei
      Computer Science. 2024, 51 (6): 282-298.  doi:10.11896/jsjkx.230400005
      With the rapid development of artificial intelligence, deep learning has achieved great success on data that can be represented in Euclidean spaces, such as images, text, and speech. However, it has been difficult to apply deep learning to non-Euclidean spaces. In recent years, with the emergence of graph neural networks, deep learning has demonstrated powerful representation learning abilities in non-Euclidean spaces and has been widely applied in various fields such as recommendation systems, natural language processing, and computer vision. The graph neural network model is based on the mechanism of information propagation: specifically, a target node in the graph updates its embedding representation by aggregating the information of its neighboring nodes. With graph neural networks, many real-world problems(such as social networks, knowledge graphs, and drug chemical compositions) can be abstracted into graph networks, and the dependence relationships between different nodes can be modeled reasonably using the connecting edges in the graph. Therefore, this paper systematically reviews graph neural networks, introduces the basic knowledge of graph-structured data, and reviews graph walk algorithms and different types of graph neural network models. Furthermore, it details the current general framework and application areas of graph neural networks, and concludes with a summary and outlook on future research in graph neural networks.
      Aspect-based Sentiment Classification for Word Information Enhancement Based on Sentence Information
      LI Yilin, SUN Chengsheng, LUO Lin, JU Shenggen
      Computer Science. 2024, 51 (6): 299-308.  doi:10.11896/jsjkx.230600059
      Aspect-based sentiment classification is a fine-grained sentiment classification task that aims to determine the sentiment polarity of specified aspect terms in a sentence. In recent years, syntactic knowledge has been widely applied in the field of aspect-based sentiment classification. Current mainstream models utilize syntactic dependency trees and graph convolutional neural networks to classify sentiment polarity. However, these models primarily focus on using aggregated aspect-term information to determine sentiment polarity, and few studies focus on the impact of global sentence information on sentiment polarity, which leads to biased sentiment classification results. To address this issue, this paper proposes an aspect-based sentiment classification model that enhances aspect-term information with sentence-level information. The model learns sentence representations through contrastive learning, minimizing the contrastive loss of sentence vectors to adjust the feature representations of word vectors. Finally, the model aggregates opinion word information using a graph convolutional network(GCN) to obtain the sentiment classification results. Experimental results on the SemEval2014 dataset and the Twitter dataset demonstrate that the model improves classification accuracy, which verifies the effectiveness of the approach.
      Long Text Multi-entity Sentiment Analysis Based on Multi-task Joint Training
      ZHANG Haoyan, DUAN Liguo, WANG Qinchen, GAO Hao
      Computer Science. 2024, 51 (6): 309-316.  doi:10.11896/jsjkx.230400001
      Multi-entity sentiment analysis aims to identify the core entities in a text and judge their corresponding sentiment, and is a research hotspot in the field of fine-grained sentiment analysis. However, most existing research on long-text multi-entity sentiment analysis is still at an early stage. This paper proposes a long text multi-entity sentiment analysis model(PAM) based on multi-task joint training. To begin with, the TF-IDF algorithm is used to extract sentences similar to the article title, which helps eliminate redundant information and reduce the length of the text. Subsequently, two BiLSTM models are adopted for the core entity recognition and sentiment analysis tasks respectively to acquire the necessary features. Next, a multi-head attention mechanism integrated with relative position information is employed to transfer the knowledge gained from the entity recognition task to the sentiment analysis task, thus enabling joint learning of the two tasks. Finally, the proposed Entity_Extract algorithm is used to identify core entities from the predicted candidate entities according to the number and positions of entities in the text and obtain their corresponding sentiments. Experimental results on Sohu news datasets demonstrate the effectiveness of the PAM model.
      Generation of Structured Medical Reports Based on Knowledge Assistance
      SHI Jiyun, ZHANG Chi, WANG Yuqiao, LUO Zhaojing, ZHANG Meihui
      Computer Science. 2024, 51 (6): 317-324.  doi:10.11896/jsjkx.230900076
      Automatic generation of medical reports is an important application of text summarization technology. Because medical consultation data differ markedly from data in the general domain, traditional text summary generation methods cannot fully understand and utilize the highly complex medical terms in medical text, so the key knowledge contained in medical consultations has not been fully exploited. In addition, most traditional text summary generation methods generate summaries directly and lack the ability to automatically select and filter key information and to generate structured text according to the structural characteristics of medical reports. To solve the above problems, a knowledge-assisted structured medical report generation method is proposed in this paper. The proposed method combines entity-guided prior domain knowledge with a structure-guided task decoupling mechanism, making full use of the key knowledge in medical consultation data and taking full advantage of the structured features of medical reports. The effectiveness of the method is verified on the IMCS21 dataset. The ROUGE score of the summaries generated by this method is 2% to 3% higher than that of baseline methods, and more accurate medical reports are generated.
      Attentional Interaction-based Deep Learning Model for Chinese Question Answering
      JIANG Rui, YANG Kaihui, WANG Xiaoming, LI Dapeng, XU Youyun
      Computer Science. 2024, 51 (6): 325-330.  doi:10.11896/jsjkx.230300175
      Abstract ( 28 )   PDF(1519KB) ( 130 )   
      References | Related Articles | Metrics
      With the rapid development of the Internet and big data,artificial intelligence,represented by deep neural networks(DNN),has ushered in a golden period of development.As an important branch in the field of artificial intelligence,question answering has attracted more and more scholars' attention.Existing deep neural network modules can extract the semantic features of a question or an answer;however,on the one hand,they ignore the semantic relation between the question and the answer,and on the other hand,they cannot grasp the potential relations among all the characters in the question or answer as a whole.Therefore,two different forms of attention interaction module,namely cross-embedding and self-embedding,are used to solve the above problems,and a deep learning model based on the proposed attention interaction module is designed to prove the effectiveness of this module.Firstly,each character in the question and answer is mapped into a fixed-length vector,and the corresponding character embedding matrices are obtained.After that,the character embedding matrices are fed into the attention interaction module to obtain character embedding matrices that take all characters of the question and answer into account.After being added to the original character embedding matrices,the result is fed into the deep neural network module to extract the semantic features of the question and answer.Finally,the vector representations of the question and the answer are obtained,and the similarity between them is calculated.Experiments show that the Top-1 accuracy of the proposed model is up to 3.55% higher than that of mainstream deep learning models,which proves the effectiveness of the proposed attention interaction module in resolving the above problems.
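      A minimal sketch of a cross-embedding style interaction, not the authors' model: each question character attends over all answer characters (and vice versa), the interaction-aware matrices are added to the originals, and a cosine similarity is computed after mean pooling. The scaled dot-product form, dimensions and pooling are illustrative assumptions.

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(q_emb, a_emb):
    """q_emb: (Lq, d) question characters, a_emb: (La, d) answer characters."""
    d = q_emb.shape[1]
    attn_q = softmax(q_emb @ a_emb.T / np.sqrt(d), axis=1)   # question attends over answer
    attn_a = softmax(a_emb @ q_emb.T / np.sqrt(d), axis=1)   # answer attends over question
    return attn_q @ a_emb, attn_a @ q_emb

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(1)
q, a = rng.normal(size=(10, 32)), rng.normal(size=(14, 32))
q_ctx, a_ctx = cross_attend(q, a)                            # interaction-aware matrices
print(cosine((q + q_ctx).mean(axis=0), (a + a_ctx).mean(axis=0)))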
      Fast Path Recovery Algorithm for Obstacle Avoidance Scenarios
      MA Yinghong, LI Xu’nan, DONG Xu, JIAO Yi, CAI Wei, GUO Youguang
      Computer Science. 2024, 51 (6): 331-337.  doi:10.11896/jsjkx.230400015
      Abstract ( 22 )   PDF(2403KB) ( 104 )   
      References | Related Articles | Metrics
      A fast path recovery algorithm for unmanned aerial vehicles(UAVs) in obstacle avoidance scenarios is proposed to address two shortcomings:most existing obstacle avoidance algorithms lack consideration for UAV path recovery,and the few existing path recovery algorithms have poor recovery effects.Taking into account environmental constraints and UAV maneuverability constraints,a safe and efficient obstacle avoidance and path recovery route is planned by rotating the coordinate system,determining the turning direction of the UAV,and calculating the coordinates of multiple key path points throughout the entire obstacle avoidance and path recovery process using the binary(bisection) method.Comparative experimental results show that the fast path recovery algorithm can plan shorter obstacle avoidance and path recovery paths,and obstacle avoidance can start at track points closer to the obstacle.The obstacle avoidance and path recovery time is shorter,the path deviation is smaller,and the overall path is better.This is more advantageous for most reconnaissance scenarios where UAVs need to cruise along pre-planned paths as much as possible.
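      A minimal sketch of the bisection idea only, not the paper's algorithm: for a simplified geometry (straight nominal path along the x-axis, one circular obstacle on it, coordinate rotation omitted), binary search finds the smallest lateral offset of a turning waypoint such that the two-segment detour keeps a safe distance from the obstacle. All geometry and parameters are illustrative assumptions.

import math

def point_segment_dist(p, a, b):
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def min_clear_offset(start, goal, centre, radius, tol=1e-3):
    lo, hi = 0.0, 10.0 * radius                      # bracket for the lateral offset h
    while hi - lo > tol:
        h = (lo + hi) / 2.0
        waypoint = (centre[0], centre[1] + h)        # candidate turning point beside the obstacle
        clear = min(point_segment_dist(centre, start, waypoint),
                    point_segment_dist(centre, waypoint, goal)) >= radius
        hi, lo = (h, lo) if clear else (hi, h)       # shrink the bracket around the threshold
    return hi

# obstacle of radius 10 sitting on the nominal path from (0,0) to (100,0)
print(min_clear_offset((0, 0), (100, 0), (50, 0), radius=10))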
      Gender Discrimination Speech Detection Model Fusing Post Attributes
      WANG Xiaolong, WANG Yanhui, ZHANG Shunxiang, WANG Caiqin, ZHOU Yuhao
      Computer Science. 2024, 51 (6): 338-345.  doi:10.11896/jsjkx.230800198
      Abstract ( 21 )   PDF(2263KB) ( 112 )   
      References | Related Articles | Metrics
      Gender discrimination speech detection is to identify whether a text has a tendency toward gender discrimination through NLP technology,which provides strong support for purifying the network environment.The limitation of current research is that it pays more attention to the post content itself,while the exploration of relationships among post attributes(user,post and theme) is overlooked.Motivated by this limitation,this paper proposes a model to mine the relationships among post attributes by constructing heterogeneous graphs.Firstly,the word embeddings of the post content are generated by ERNIE;subsequently,contextual dependencies are extracted using BiGRU,and thus the sentence representation is obtained.Then,a heterogeneous graph based on the relationships among post attributes is constructed,and a heterogeneous graph attention network is further employed to obtain the relationship representation of the post.Finally,the sentence representation and relationship representation are fused as the input of the Softmax function for classification.Experimental results show that the proposed model can improve the effect of gender discrimination speech detection.
      Computer Network
      Pre-allocated Capacity Quota Limiting System Based on Microservice
      ZHENG Xu, FAN Hongjie, LIU Junfei
      Computer Science. 2024, 51 (6): 346-353.  doi:10.11896/jsjkx.231100125
      Abstract ( 26 )   PDF(2769KB) ( 115 )   
      References | Related Articles | Metrics
      In a distributed architecture,rate limiters that exist simultaneously on multiple nodes need to collaborate effectively to achieve the same effect as a monolithic rate limiter.In real-world business scenarios,online requests are irregularly distributed and offline business throughput is high.In such cases,critical nodes operating under overload conditions can respond slowly,leading to increased overall latency in the request chain and even sluggish application performance.To address the issues of existing microservice rate limiting,this paper proposes a rate limiting algorithm based on pre-allocated quotas with proactive quota updates.The algorithm adopts a server-initiated broadcast approach in which the server can both accept client requests and proactively push the latest processing results to the nodes holding resource quotas.Flexible allocation algorithms can be utilized during quota allocation at the server end.Rate limiter quotas are estimated with a sliding-window pattern that tracks the number of requests and the allocated resource quotas over a period of time.Additionally,a rate limiting model based on this algorithm is implemented.Experimental results demonstrate that the model can promptly respond to quota changes and effectively achieve fairness among nodes.Compared to the Doorman system,the proposed model is better suited for online and offline traffic scenarios and enables more precise rate limiting.
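      A minimal sketch, not the paper's system: a per-node sliding-window counter that tracks recent requests, which is the kind of bookkeeping a server can use to estimate how much of a pre-allocated quota each node is actually consuming. The window length, quota value and timestamps are illustrative assumptions.

import time
from collections import deque

class SlidingWindowCounter:
    def __init__(self, window_seconds=1.0):
        self.window = window_seconds
        self.events = deque()

    def allow(self, quota, now=None):
        """Record a request and return True if the node is still within its quota."""
        now = time.monotonic() if now is None else now
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()                    # drop requests outside the window
        if len(self.events) < quota:
            self.events.append(now)
            return True
        return False

counter = SlidingWindowCounter(window_seconds=1.0)
print([counter.allow(quota=3, now=t) for t in (0.0, 0.1, 0.2, 0.3, 1.5)])
# -> [True, True, True, False, True]: the fourth request exceeds the quota,
#    the fifth one falls into a fresh window.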
      Federated Learning Client Selection Scheme Based on Time-varying Computing Resources
      LIU Jianxun, ZHANG Xinglin
      Computer Science. 2024, 51 (6): 354-363.  doi:10.11896/jsjkx.230400183
      Abstract ( 22 )   PDF(4049KB) ( 108 )   
      References | Related Articles | Metrics
      Federated learning(FL) is an emerging paradigm for distributed machine learning,whose core idea is that user devices train models locally in a distributed manner and do not need to upload raw data,but only upload the trained model to the server for model aggregation.Most existing studies ignore the fact that the computing resources of devices change over time with users' usage patterns,which affects FL training.In this paper,we model the time-varying computing resources of heterogeneous devices using an autoregressive model and propose a client selection algorithm.We first formulate the optimization problem of minimizing the average training time of each FL round under a long-term training time constraint,then transform it using Lyapunov optimization theory,and finally solve it to obtain the client selection algorithm.Experimental results show that compared with the baseline algorithms,the proposed algorithm can reduce the training time of FL and the average waiting time of devices while largely maintaining model quality.
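      A minimal sketch, not the paper's algorithm: each device's available computing capacity is modelled by a first-order autoregressive (AR(1)) process, and a simplified greedy rule picks the clients expected to finish the round fastest. The AR(1) coefficients, workload constant and greedy selection (which omits the Lyapunov transformation used in the paper) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def ar1_capacity(prev, mean, rho=0.8, sigma=0.1):
    """AR(1) update: next = mean + rho * (prev - mean) + Gaussian noise."""
    return max(1e-3, mean + rho * (prev - mean) + rng.normal(0.0, sigma))

n_clients, workload = 20, 50.0                  # workload: computation units per round
mean_cap = rng.uniform(1.0, 5.0, n_clients)     # long-term mean capacity of each device
cap = mean_cap.copy()

for rnd in range(3):
    cap = np.array([ar1_capacity(c, m) for c, m in zip(cap, mean_cap)])
    est_time = workload / cap                   # estimated local training time per client
    selected = np.argsort(est_time)[:5]         # greedily pick the 5 fastest clients
    print(f"round {rnd}: clients {selected.tolist()}, "
          f"round time ~ {est_time[selected].max():.1f}")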
      Adaptive Sparse Sensor Network Target Coverage Algorithm Based on Edge Computing
      LI Jie, WANG Yao, CHEN Kansong, XU Lijun
      Computer Science. 2024, 51 (6): 364-374.  doi:10.11896/jsjkx.230300185
      Abstract ( 20 )   PDF(3141KB) ( 112 )   
      References | Related Articles | Metrics
      Ocean exploration is the key to ocean development,and how to quickly and efficiently achieve underwater target detection is a problem that must be solved for ocean exploration.Based on this,an adaptive sparse sensing network target coverage optimization algorithm based on edge computing is proposed to efficiently accomplish underwater target detection with fewer sensing nodes.Firstly,an Ad Hoc mobile energy optimization strategy adds an energy factor to protect nodes with lower energy during node movement and thus optimizes the energy balance of the sensing network.Secondly,an Ad Hoc greedy detection mechanism is proposed to achieve the detection of unknown areas with minimum cost and fast target coverage.Finally,a virtual force-based adaptive connectivity mechanism increases the range of the virtual attractive force to ensure the connectivity of the sparse self-organized network and solve the disconnection problem during node movement.Simulation results show that the proposed algorithm is able to provide fast and durable target detection coverage with a smaller number of mobile sensors,outperforming the comparison algorithms.
      Enhanced Snake Optimizer Based RFID Network Planning
      LI Zhiqian, ZHENG Jiali, CHEN Yijun, ZHANG Jiangbo
      Computer Science. 2024, 51 (6): 375-383.  doi:10.11896/jsjkx.230300130
      Abstract ( 20 )   PDF(3354KB) ( 119 )   
      References | Related Articles | Metrics
      Aiming at the optimal deployment problem in radio frequency identification(RFID) network planning,an enhanced snake optimizer based on an embedded sine cosine algorithm(SCA) and an adaptive threshold is proposed.In the population initialization stage,the uniformity and ergodicity of the Circle chaotic map are exploited,and the sine cosine algorithm and adaptive threshold mechanisms are introduced in the exploration and exploitation stages,respectively,to overcome the drawbacks of the snake optimizer such as uneven initialization,a tendency to fall into local optima and slow convergence.On the basis of meeting the four objectives of 100% tag coverage,reducing collision interference between readers,achieving reader load balance,and reducing the total transmission power,the optimal deployment locations of readers are solved.The enhanced snake optimizer(ESO) is compared with particle swarm optimization(PSO),grey wolf optimizer(GWO) and salp swarm algorithm(SSA).Experimental results show that the enhanced snake optimizer has a stronger ability to optimize RFID network deployment,and its overall performance is significantly improved.Under the same experimental conditions,the optimal fitness value of ESO is 28.1% higher than that of PSO,17.7% higher than that of GWO,and 22.9% higher than that of SSA,so it can more effectively obtain the optimal RFID network planning and deployment scheme.
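      A minimal sketch of the chaos-based initialization step only, not the ESO implementation: the Circle chaotic map is iterated to produce values on [0, 1) that spread more uniformly than plain random draws, and these are scaled into candidate positions. The map parameters (a = 0.5, b = 0.2) follow a common convention in the chaos-initialization literature and are an assumption here, as are the population size and bounds.

import math

def circle_map_sequence(x0, length, a=0.5, b=0.2):
    """Iterate the Circle chaotic map on [0, 1)."""
    xs, x = [], x0
    for _ in range(length):
        x = (x + b - (a / (2 * math.pi)) * math.sin(2 * math.pi * x)) % 1.0
        xs.append(x)
    return xs

def init_population(pop_size, dim, lower, upper, x0=0.7):
    chaos = circle_map_sequence(x0, pop_size * dim)
    return [[lower + chaos[i * dim + j] * (upper - lower) for j in range(dim)]
            for i in range(pop_size)]

# e.g. 5 candidate reader deployments, each with 3 coordinates in [0, 30]
for individual in init_population(pop_size=5, dim=3, lower=0.0, upper=30.0):
    print([round(v, 2) for v in individual])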
      Study on Collaborative Control Method of Vehicle Platooning Based on Edge Intelligence
      LI Le, LIU Meifang, CHEN Rong, WEI Siyu
      Computer Science. 2024, 51 (6): 384-390.  doi:10.11896/jsjkx.231000126
      Abstract ( 22 )   PDF(2953KB) ( 107 )   
      References | Related Articles | Metrics
      With the development of communication technology and automatic control technology,the autonomous control of intelligent and connected vehicles(ICV),especially control under hybrid platooning,has become an important research direction in unmanned driving technology.In order to reduce the control strategy output delay caused by the limited computing power of the on-board processor and to improve vehicle tracking performance,a collaborative control method of vehicle platooning based on edge intelligence is proposed.Using the powerful computing capability of edge servers and the 5G communication network,a control system based on edge intelligence is designed to upload computing tasks to the cloud and release the computing resources of the on-board processor.Based on the analysis of the vehicle following scenario in hybrid platooning,a vehicle platooning control model for the spatiotemporal coupling scenario is designed,and a vehicle dynamics model is established using the MPC control algorithm.Through model prediction,rolling optimization and feedback correction,control strategy calculation services are provided for intelligent and connected vehicles.The results of MATLAB simulation experiments and edge computing virtual platform experiments show that the proposed MPC control algorithm performs well in trajectory tracking control and can provide safe control strategies for vehicles efficiently and in real time.
      Information Security
      Particle Swarm Optimization-based Federated Learning Method for Heterogeneous Data
      XU Yicheng, DAI Chaofan, MA Wubin, WU Yahui, ZHOU Haohao, LU Chenyang
      Computer Science. 2024, 51 (6): 391-398.  doi:10.11896/jsjkx.230400182
      Abstract ( 19 )   PDF(3639KB) ( 115 )   
      References | Related Articles | Metrics
      Federated learning is an emerging privacy-preserving distributed machine learning framework,whose core feature is the ability to implement distributed machine learning without access to the client's raw data.The client uses local data for model training and then uploads the model parameters to the server for aggregation,thus ensuring that client data are always protected.In this process,there are two problems:the high communication cost caused by frequent parameter transfers,and the non-independent and identically distributed(non-IID) heterogeneous data owned by each client;both severely limit the application of federated learning.To address these problems,FedPSG,a federated learning method based on particle swarm optimization for heterogeneous data,is proposed.It reduces the communication cost by changing the data transferred from the client to the server from model parameters to model scores,so that only a small number of clients need to upload model parameters to the server in each training round.Meanwhile,a model retraining strategy is proposed that uses server data to train the global model for a second iteration,further improving model performance by mitigating the impact of data heterogeneity on federated learning.Experiments are conducted on the MNIST,FashionMNIST and CIFAR-10 datasets under different simulated data heterogeneity settings.The results show that FedPSG can effectively improve model accuracy in different data heterogeneity environments,and verify that the model retraining strategy can effectively alleviate the client-side data heterogeneity problem.
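      A minimal sketch of the communication pattern only, not the FedPSG implementation: every client uploads a scalar score for the current global model, the server asks only the top-k scoring clients for full parameter vectors, averages them, and then runs an extra retraining update on a small server-side dataset. The particle swarm optimization component is omitted, and the scores, toy "local training" and retraining step are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
dim, n_clients, k = 16, 10, 3
client_optima = rng.normal(size=(n_clients, dim))    # stand-in for heterogeneous client data
server_optimum = client_optima.mean(axis=0)          # stand-in for the small server-side dataset
global_model = np.zeros(dim)

def client_score(cid, model):
    # stand-in for the local validation score of `model` on client cid's data
    return float(-np.linalg.norm(model - client_optima[cid]))

def client_update(cid, model, lr=0.5):
    # stand-in for local training: move toward the client's local optimum
    return model + lr * (client_optima[cid] - model)

for rnd in range(5):
    scores = [client_score(i, global_model) for i in range(n_clients)]
    chosen = np.argsort(scores)[-k:]                 # only the k best-scoring clients upload
    global_model = np.mean([client_update(i, global_model) for i in chosen], axis=0)
    global_model += 0.2 * (server_optimum - global_model)   # retraining step on server data
    print(f"round {rnd}: distance to server optimum "
          f"{np.linalg.norm(global_model - server_optimum):.3f}")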
      N-variant Architecture for Container Runtime Security Threats
      LIU Daoqing, HU Hongchao, HUO Shumin
      Computer Science. 2024, 51 (6): 399-408.  doi:10.11896/jsjkx.230200099
      Abstract ( 21 )   PDF(4624KB) ( 103 )   
      References | Related Articles | Metrics
      Container technology has promoted the development of cloud computing with its lightweight and scalability advantages,but security threats at container runtime are increasingly serious.Existing intrusion detection and access control technologies cannot effectively deal with attacks that exploit the container runtime to achieve container escape.First of all,this paper proposes an N-variant architecture for container runtime security threats that combines the redundancy and diversity methods of N-variant systems.Secondly,a voting algorithm based on historical information is combined with the redundancy and diversity of the N-variant system to improve voting accuracy.Besides,the service quality of container applications is optimized through two-stage voting and scheduling strategies.Finally,a prototype system is built.The test results show that the performance loss of the prototype system is within an acceptable range,and the attack surface of the system is reduced to a certain extent,thus achieving the purpose of enhancing the security of container applications.
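      A minimal sketch of history-informed voting, not the paper's algorithm: each variant's vote is weighted by how often it has agreed with past voting outcomes, so a compromised or faulty variant gradually loses influence. The weight update rule, decay factor and toy responses are illustrative assumptions.

from collections import defaultdict

class HistoryWeightedVoter:
    def __init__(self, variant_ids, decay=0.9):
        self.weights = {v: 1.0 for v in variant_ids}   # historical credibility of each variant
        self.decay = decay

    def vote(self, outputs):
        """outputs: {variant_id: response}; returns the weighted-majority response."""
        tally = defaultdict(float)
        for vid, out in outputs.items():
            tally[out] += self.weights[vid]
        winner = max(tally, key=tally.get)
        for vid, out in outputs.items():               # reward agreement, decay disagreement
            self.weights[vid] = (self.weights[vid] + 1.0 if out == winner
                                 else self.weights[vid] * self.decay)
        return winner

voter = HistoryWeightedVoter(["variant1", "variant2", "variant3"])
print(voter.vote({"variant1": "200 OK", "variant2": "200 OK", "variant3": "500 ERR"}))
print(voter.weights)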
      Browser Fingerprint Tracking Based on Improved GraphSAGE Algorithm
      CHU Xiaoxi, ZHANG Jianhui, ZHANG Desheng, SU Hui
      Computer Science. 2024, 51 (6): 409-415.  doi:10.11896/jsjkx.230400003
      Abstract ( 29 )   PDF(2443KB) ( 109 )   
      References | Related Articles | Metrics
      Current Web tracking mainly uses browser fingerprints to track users.To address problems of browser fingerprint tracking technology such as the dynamic change of fingerprints over time and the difficulty of long-term tracking,an improved graph sampling and aggregation algorithm,NE-GraphSAGE,is proposed for browser fingerprint tracking.Firstly,the graph data is constructed using browser fingerprints as nodes and the feature similarity between fingerprints as edges.Secondly,the GraphSAGE algorithm in graph neural networks is improved to not only focus on node features but also capture edge information and classify edges to identify fingerprints.Finally,the NE-GraphSAGE algorithm is compared with the Eckersley,FPStalker and LSTM algorithms to verify its recognition effect.Experimental results show that the NE-GraphSAGE algorithm achieves improvements in both accuracy and tracking time,with a maximum tracking time of up to 80 days.Compared with the other three algorithms,the NE-GraphSAGE algorithm performs better,verifying its ability to track browser fingerprints over a long period.
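      A minimal sketch of the graph-construction step only, not the NE-GraphSAGE code: fingerprints become nodes and an edge is added whenever two fingerprints are sufficiently similar, with the similarity kept as an edge feature. The attribute-agreement similarity measure, the 0.6 threshold and the toy fingerprints are illustrative assumptions.

import networkx as nx

def fingerprint_similarity(fp_a, fp_b):
    """Fraction of attributes with identical values (illustrative measure)."""
    keys = set(fp_a) | set(fp_b)
    return sum(1 for k in keys if fp_a.get(k) == fp_b.get(k)) / len(keys)

def build_fingerprint_graph(fingerprints, threshold=0.6):
    g = nx.Graph()
    for i, fp in enumerate(fingerprints):
        g.add_node(i, **fp)                               # fingerprint attributes as node features
    for i in range(len(fingerprints)):
        for j in range(i + 1, len(fingerprints)):
            sim = fingerprint_similarity(fingerprints[i], fingerprints[j])
            if sim >= threshold:
                g.add_edge(i, j, weight=sim)              # similarity kept as an edge feature
    return g

fps = [{"ua": "Firefox/115", "tz": "UTC+8", "canvas": "a1f3"},
       {"ua": "Firefox/116", "tz": "UTC+8", "canvas": "a1f3"},
       {"ua": "Chrome/120", "tz": "UTC-5", "canvas": "9c2e"}]
print(list(build_fingerprint_graph(fps).edges(data=True)))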
      Extended Code Index Modulation Scheme Based on Reversible Elementary Cellular Automata Encryption
      ZHAO Geng, HUANG Sijie, MA Yingjie, DONG Youheng, WU Rui
      Computer Science. 2024, 51 (6): 416-422.  doi:10.11896/jsjkx.230300067
      Abstract ( 30 )   PDF(2260KB) ( 110 )   
      References | Related Articles | Metrics
      In order to address the problems of limited pseudo noise(PN) code resources in direct sequence spread spectrum systems and the degraded bit error rate(BER) performance of code index modulation systems,this paper proposes an extended code index modulation(E-CIM) scheme based on reversible elementary cellular automata encryption.First,to address the problem of limited PN code resources,a method of iterating PN codes using chaotic rules of elementary cellular automata is proposed to extend the PN code set.In addition,to address the BER degradation of code index modulation,this paper proposes a code index modulation scheme with reversible elementary cellular automata encryption,in which the information bits are split into modulation bits and mapping bits at the transmitter side and mapped into modulation symbols and spreading code indexes,respectively.The in-phase component is spread using the spreading code of the corresponding index,while the mapping bits are encrypted using the reversible elementary cellular automata,and the quadrature component is spread by the spreading code corresponding to the index selected by the encrypted mapping bits.Simulation and analysis results show that under the same spectral efficiency,the BER performance of E-CIM is superior to that of the CIM and GCIM schemes by about 2 to 4 dB and superior to that of the N-CSK-CIM scheme by about 0.5 dB in an additive white Gaussian noise channel at a BER of 10^-5.
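      A minimal sketch of the code-extension idea, not the E-CIM scheme itself: a seed PN code is iterated with an elementary cellular automaton, and each iteration yields a new spreading code, extending the limited PN code set. Rule 30 is used here purely as a well-known chaotic rule; the paper's actual rule choice, code length and seed are not specified in the abstract, so all values below are illustrative assumptions.

def eca_step(state, rule=30):
    """One synchronous update of a circular elementary cellular automaton."""
    n = len(state)
    rule_bits = [(rule >> i) & 1 for i in range(8)]       # Wolfram rule encoding
    return [rule_bits[(state[(i - 1) % n] << 2) | (state[i] << 1) | state[(i + 1) % n]]
            for i in range(n)]

def extend_pn_codes(seed, count, rule=30):
    codes, state = [], list(seed)
    for _ in range(count):
        state = eca_step(state, rule)                     # each iteration yields a new code
        codes.append(state[:])
    return codes

seed = [1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0]  # 16-chip seed PN code
for code in extend_pn_codes(seed, count=3):
    print("".join(map(str, code)))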
      Function-call Instruction Characteristic Analysis Based Instruction Set Architecture Recognition Method for Firmware
      JIA Fan, YIN Xiaokang, GAI Xianzhe, CAI Ruijie, LIU Shengli
      Computer Science. 2024, 51 (6): 423-433.  doi:10.11896/jsjkx.230500087
      Abstract ( 21 )   PDF(4046KB) ( 114 )   
      References | Related Articles | Metrics
      The recognition of the instruction set architecture is a crucial task for conducting security research on embedded devices and has significant implications.However,existing studies and tools often suffer from low recognition accuracy and high false positive rates when identifying the firmware instruction set architecture of specific types of embedded devices.To address this issue,a new method for recognizing the firmware instruction set architecture based on feature analysis of function call instructions is proposed.It identifies function call instructions in the target firmware by simultaneously utilizing the information contained in the operation codes and operands of the instructions,and uses them as key features to classify different instruction set architectures.A prototype system called EDFIR(embedded device firmware instruction set recognizer) has been developed based on this method.Experimental results show that compared to currently widely used and state-of-the-art tools such as IDA Pro,Ghidra,Radare2,Binwalk and ISAdetect,the proposed method has higher recognition accuracy,lower false positive rates,and stronger anti-interference capability.It achieves a recognition accuracy of 97.9% on 1 000 real device firmware images,which is 42.5% higher than that of the best-performing ISAdetect.Furthermore,experiments demonstrate that even when the analysis scale is reduced to 1/50 of the complete firmware,it can still maintain a recognition accuracy of 95.31%,indicating excellent recognition performance.
      Remote Access Trojan Traffic Detection Based on Fusion Sequences
      WU Fengyuan, LIU Ming, YIN Xiaokang, CAI Ruijie, LIU Shengli
      Computer Science. 2024, 51 (6): 434-442.  doi:10.11896/jsjkx.230400159
      Abstract ( 38 )   PDF(4520KB) ( 110 )   
      References | Related Articles | Metrics
      In response to the issues of weak generalization ability,limited representation capability,and delayed warning in existing remote access Trojan(RAT) traffic detection methods,a RAT traffic detection model based on a fusion sequence is proposed.By deeply analyzing the differences between normal network traffic and RAT traffic in packet length sequence,packet payload length sequence,and packet time interval sequence,traffic is represented as a fusion sequence.The fusion sequences are input into a Transformer model that utilizes multi-head attention mechanisms and residual connections to mine the intrinsic relationships within the fusion sequences and learn the patterns of RAT communication behavior,effectively enhancing the detection capability and generalization ability of the model for RAT traffic.The model only needs to extract the first 20 data packets of a network session for detection and can issue timely warnings in the early stages of Trojan intrusion.Comparative experimental results show that the model not only achieves excellent results in known data but also performs well in unknown traffic test sets.Compared with existing deep learning models,it presents superior performance indicators and has practical application value in the field of RAT traffic detection.
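      A minimal sketch of the input-construction step only, not the paper's detection model: the first 20 packets of a session are turned into per-packet triples of (packet length, payload length, inter-arrival time), i.e. the fusion sequence that would then be padded, normalised and fed to the Transformer. The packet record field names and the toy session are illustrative assumptions.

def build_fusion_sequence(packets, max_packets=20):
    """packets: time-ordered dicts with 'ts', 'length', 'payload_len' fields."""
    packets = packets[:max_packets]
    seq, prev_ts = [], packets[0]["ts"] if packets else 0.0
    for pkt in packets:
        seq.append((pkt["length"],                       # packet length
                    pkt["payload_len"],                  # payload length
                    round(pkt["ts"] - prev_ts, 6)))      # inter-arrival time
        prev_ts = pkt["ts"]
    return seq                                           # padded/normalised before the Transformer

session = [{"ts": 0.000, "length": 74,  "payload_len": 0},
           {"ts": 0.021, "length": 66,  "payload_len": 0},
           {"ts": 0.045, "length": 583, "payload_len": 517}]
print(build_fusion_sequence(session))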