Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK

  • Volume 50 Issue 1, 15 January 2023
      
      Database & Big Data & Data Science
      Survey of Learned Index
      WANG Yitan, WANG Yishu, YUAN Ye
      Computer Science. 2023, 50 (1): 1-8.  doi:10.11896/jsjkx.211000149
      Due to the explosive growth of data in the era of big data, it is difficult for traditional index structures to handle such huge and complex data. To solve this problem, the learned index has emerged and become one of the most popular research topics in databases. Learned indexes employ machine learning models for index construction. By training on the relationship between data and physical location, a learning model can be obtained that captures the distribution characteristics linking the two, thereby improving and optimizing traditional indexes. Extensive experiments show that learned indexes can adapt to large-scale data sets and provide better search performance with lower memory requirements than traditional indexes. This paper introduces the applications of learned indexes and reviews the existing learned index models. According to data types, learned indexes are divided into two categories: one-dimensional and multi-dimensional. The advantages, disadvantages, and supported searches of learned index models in each category are introduced and analyzed in detail. Finally, some future research directions of learned indexes are discussed to provide references for related research.
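To illustrate the core idea behind the surveyed models (not any specific system from the survey), a minimal one-dimensional learned index can be sketched as follows: a linear model predicts a key's position in a sorted array, and the worst-case training error bounds a local search window. The class name and structure here are hypothetical.

```python
import bisect

# Hypothetical sketch of a one-dimensional learned index: a linear
# model maps key -> position; the maximum training error bounds a
# local binary search, replacing a full tree traversal.

class LearnedIndex:
    def __init__(self, keys):
        self.keys = sorted(keys)
        n = len(self.keys)
        xs, ys = self.keys, range(n)
        # Closed-form least-squares fit of position ~ a*key + b.
        mx, my = sum(xs) / n, sum(ys) / n
        var = sum((x - mx) ** 2 for x in xs) or 1.0
        self.a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var
        self.b = my - self.a * mx
        # Worst-case prediction error defines the search window.
        self.err = max(abs(self._predict(x) - y) for x, y in zip(xs, ys))

    def _predict(self, key):
        return int(self.a * key + self.b)

    def lookup(self, key):
        p = self._predict(key)
        lo = max(0, p - self.err)
        hi = min(len(self.keys), p + self.err + 1)
        i = bisect.bisect_left(self.keys, key, lo, hi)
        return i if i < len(self.keys) and self.keys[i] == key else -1
```

For uniformly distributed keys the error bound is tiny, which is exactly the regime where learned indexes beat B-trees on both memory and lookup time.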
      Survey on Hierarchical Clustering for Machine Learning
      WANG Shaojiang, LIU Jia, ZHENG Feng, PAN Yicheng
      Computer Science. 2023, 50 (1): 9-17.  doi:10.11896/jsjkx.211000185
      Clustering analysis plays a key role in machine learning, data mining and biological DNA information. Clustering algorithms can be categorized into flat clustering and hierarchical clustering. Flat clustering mostly divides the data set into K parallel communities without intersections, but real communities have multi-level inclusion relations, so hierarchical clustering algorithms can provide more elaborate analysis and better interpretability. Compared with flat clustering, the research progress of hierarchical clustering has been slow. Focusing on hierarchical clustering, this paper surveys a large number of related papers with respect to the selection of cost functions, the evaluation of clustering results and the performance of clustering algorithms. The evaluation indices of clustering results mainly include modularity, the Jaccard index, normalized mutual information, dendrogram purity, etc. Among flat clustering algorithms, the classical ones include the K-means algorithm, label propagation algorithm, DBSCAN algorithm, spectral algorithm and so on. Hierarchical clustering algorithms can be further classified into agglomerative and divisive clustering algorithms. Divisive clustering algorithms include the bisecting K-means algorithm, recursive sparsest cut algorithm, etc. Agglomerative clustering algorithms include the classical Louvain algorithm, the BIRCH algorithm, and the more recent HLP, PERCH and GRINCH algorithms. This paper further analyzes the advantages and disadvantages of these algorithms and finally summarizes the whole survey.
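The agglomerative family described above can be sketched in a few lines. This hypothetical example uses single linkage on sorted 1-D points, where the closest pair across clusters is always between adjacent clusters; it merges bottom-up until k clusters remain, which is the dendrogram construction cut at one level.

```python
# Hypothetical sketch of agglomerative (bottom-up) hierarchical
# clustering with single linkage on 1-D points: repeatedly merge the
# two closest clusters until the requested number of clusters remains.
# For sorted 1-D data, only adjacent clusters need to be compared.

def single_linkage(points, k):
    clusters = [[p] for p in sorted(points)]
    while len(clusters) > k:
        # Single-linkage distance: the closest pair of members.
        best = min(
            range(len(clusters) - 1),
            key=lambda i: min(abs(a - b)
                              for a in clusters[i] for b in clusters[i + 1]),
        )
        clusters[best] = clusters[best] + clusters.pop(best + 1)
    return clusters
```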
      Ontology-Schema Mapping Based Incremental Entity Model Construction and Evolution Approach of Knowledge Graph
      SHAN Zhongyuan, YANG Kai, ZHAO Junfeng, WANG Yasha, XU Yongxin
      Computer Science. 2023, 50 (1): 18-24.  doi:10.11896/jsjkx.220500205
      In the field of smart cities, with the deepening of information technology, many systems generate massive data. Semantic communication among these multi-source heterogeneous data has become one of the important problems to be solved in the development of urban intelligent applications. Building a knowledge graph is one of the common means to achieve semantic communication of data. After establishing the ontology, the construction and evolution of the graph entity model becomes the key technology to support various applications. Therefore, how to automatically extend knowledge entities from constantly updated data sources becomes the primary problem of knowledge graph construction. Some existing knowledge entity generation tools cannot provide sufficient support for data import, and users need to carry out complex preprocessing of source data to convert it into the data format supported by the platform. As a result, the preprocessing workload is heavy, and the data cannot be updated and grown rapidly. To deal with structured or semi-structured data, this paper proposes an ontology-schema-mapping-based incremental entity model construction and evolution approach for knowledge graphs, which achieves the growth and evolution of the instance model as data update. Based on the combination of machine recommendation and human-machine interaction, and according to the characteristics of different data sources, knowledge is extracted and correctly mapped to the concepts in the ontology model. The continuous evolution of the entity model is supported by means of entity alignment and relationship completion. The approach is verified in a knowledge graph construction scenario in the enterprise domain. Through machine recommendation and duplicate checking, efficient and accurate entity generation is realized, which proves the effectiveness of the approach.
      Fast Storage System for Time-series Big Data Streams Based on Waterwheel Model
      LU Mingchen, LYU Yanqi, LIU Ruicheng, JIN Peiquan
      Computer Science. 2023, 50 (1): 25-33.  doi:10.11896/jsjkx.220900045
      With the rapid development of the Internet of Things, the scale of sensor deployment has been growing in recent years. Large-scale sensor deployments generate massive streaming data every second, and the value of the data decreases over time. Therefore, the storage system needs to withstand the write pressure brought by high-speed arriving streaming data and persist the data as fast as possible for subsequent query and analysis. This poses a considerable challenge to the write performance of the storage system. The proposed fast storage system based on the waterwheel model can meet the fast storage requirements of high-speed time-series data streams in big data application scenarios. The system is deployed between the high-speed streaming data and the underlying storage nodes, using multiple data buckets to build a logically rotating storage model (similar to the ancient Chinese waterwheel), and coordinating data writing and persistence by controlling the state of each data bucket. Waterwheel sends data buckets to different underlying storage nodes, so that instantaneous write pressure is evenly distributed across multiple storage nodes, and write throughput is improved through multi-node parallel writing. The waterwheel model is deployed on a stand-alone version of MongoDB and compared with distributed MongoDB in experiments. The results show that the proposed system can effectively improve write throughput, reduce write latency, and achieve good horizontal scalability.
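The rotating-bucket idea can be sketched as follows. This is a hypothetical toy model, not the authors' implementation: a ring of buckets receives writes at the head, and a full bucket is sealed and handed to the next storage node in round-robin fashion while a fresh bucket joins the ring.

```python
from collections import deque

# Hypothetical sketch of the waterwheel model: writes always hit the
# current (filling) bucket; a full bucket is sealed and flushed to the
# next storage node in round-robin order, spreading write pressure.

class Waterwheel:
    def __init__(self, n_buckets, capacity, nodes):
        self.buckets = deque([] for _ in range(n_buckets))
        self.capacity = capacity
        self.nodes = nodes          # each node: a list standing in for storage
        self.turn = 0

    def write(self, record):
        head = self.buckets[0]
        head.append(record)
        if len(head) >= self.capacity:
            self._rotate()

    def _rotate(self):
        # Seal the full bucket, hand it to a storage node, add a fresh one.
        full = self.buckets.popleft()
        self.nodes[self.turn % len(self.nodes)].extend(full)
        self.turn += 1
        self.buckets.append([])
```

In a real system `_rotate` would flush asynchronously so that ingestion never blocks on persistence; the round-robin assignment is what evens out the instantaneous write pressure across nodes.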
      Study on Big Graph Traversals for Storage Medium Optimization
      JIAO Tianzhe, HE Hongyan, ZHANG Zexin, SONG Jie
      Computer Science. 2023, 50 (1): 34-40.  doi:10.11896/jsjkx.211100049
      As the basis of many algorithms, the breadth-first search (BFS) algorithm for big graph data has attracted increasing interest in industry and academia. Among the research on BFS for big graphs, the solid state disk (SSD) is utilized to improve algorithm performance. In graph traversal, the storage devices are required to continuously and repeatedly load data to fulfill the demand of the traversal. However, the many erase operations caused by continuously and repeatedly loading data severely degrade the lifetime of the SSD. For this reason, the lifetime of the SSD can be effectively extended by reducing the data written. Firstly, combined with the characteristics of the graph structure, a data reuse model is constructed to describe the levels of data reuse in graph traversal. Then, a heuristic priority access algorithm based on vertex degree is proposed. By judging the independence of vertices, the proposed algorithm selects graph vertices to be accessed with priority, improving the probability of data reuse, increasing the hit ratio, and reducing the wear of the flash memory. This method is applicable to any BFS algorithm and dataset without modifying the BFS algorithm or the big graph data. Finally, simulation results demonstrate that the proposed model and algorithm are effective. Applied to three common BFS algorithms, BFS-4K, B40C, and Gunrock, it effectively reduces data write operations in graph traversal and improves the lifetime of the SSD by 12%, 15%, and 22%, respectively.
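A degree-based priority order within the BFS frontier can be sketched as below. This is a hypothetical illustration of the heuristic, not the paper's algorithm: expanding high-degree vertices first makes their large, shared neighbor lists more likely to be reused while still cached, which is what reduces reloads (and hence rewrites) on flash.

```python
from collections import deque

# Hypothetical sketch of degree-based priority access in BFS:
# within each expansion, unseen neighbors are enqueued in descending
# degree order, so hot (high-degree) vertex data is touched sooner
# and reused more often while cached.

def degree_priority_bfs(graph, source):
    order, seen, frontier = [], {source}, deque([source])
    while frontier:
        v = frontier.popleft()
        order.append(v)
        # Visit unseen neighbors in descending degree order.
        for u in sorted(graph[v], key=lambda u: -len(graph[u])):
            if u not in seen:
                seen.add(u)
                frontier.append(u)
    return order
```

Note that only the visit order changes; the set of visited vertices, and therefore BFS correctness, is untouched, which matches the claim that the method needs no modification of the BFS algorithm or the graph data.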
      Deep Disentangled Collaborative Filtering with Graph Global Information
      HAO Jingyu, WEN Jingxuan, LIU Huafeng, JING Liping, YU Jian
      Computer Science. 2023, 50 (1): 41-51.  doi:10.11896/jsjkx.220900255
      GCN-based collaborative filtering models generate representations of user and item nodes by aggregating information on the user-item interaction bipartite graph, and then predict users' preferences for items. However, they neglect users' different interaction intents and cannot fully explore the relationship between users and items. Existing graph disentangled collaborative filtering models capture users' interaction intents, but ignore the global information of the interaction graph and the essential features of users and items, leaving the representation semantics incomplete. Furthermore, disentangled representation learning is inefficient due to the iterative structure of these models. To solve these problems, this paper devises a deep disentangled collaborative filtering model incorporating graph global information, named global graph disentangled collaborative filtering (G2DCF). G2DCF builds a graph global channel and a graph disentangled channel, which learn essential features and intent features, respectively. Meanwhile, by introducing an orthogonality constraint and a representation independence constraint, G2DCF makes every user-item interaction intent as unique as possible to prevent intent degradation, and raises the independence of representations under different intents, so as to improve the disentanglement effect. Compared with previous graph collaborative filtering models, G2DCF can more comprehensively describe the features of users and items. Experiments are conducted on three public datasets, and the results show that the proposed method outperforms the comparison methods on multiple metrics. Further, this paper analyzes the representation distributions in terms of independence and uniformity to verify the disentanglement effect, and compares convergence speed to verify the effectiveness.
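An orthogonality constraint of the kind mentioned above is commonly implemented as a penalty on the Gram matrix of the intent factors. The following is a generic sketch under that assumption, not the exact loss term of G2DCF:

```python
import numpy as np

# Hypothetical sketch of an orthogonality constraint for keeping
# intent-specific representation directions independent: penalize how
# far the Gram matrix of the column-normalized factors is from identity.

def orthogonality_penalty(w):
    w = w / np.linalg.norm(w, axis=0, keepdims=True)   # unit columns
    gram = w.T @ w                                     # pairwise cosines
    return np.sum((gram - np.eye(w.shape[1])) ** 2)
```

Mutually orthogonal intent directions give a zero penalty, while correlated (degenerating) intents are pushed apart, which is the stated purpose of preventing intent degradation.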
      Fast Computation Graph Simplification via Influence-based Pruning for Graph Neural Network
      GU Xizhi, SHAO Yingxia
      Computer Science. 2023, 50 (1): 52-58.  doi:10.11896/jsjkx.220900032
      Computation graph simplification is an optimization technique to improve the training speed of graph neural network models. It exploits common neighbors among nodes and speeds up the training of graph neural network models by eliminating redundant computation in the aggregation stage. However, when dealing with large-scale graph data, existing computation graph simplification techniques suffer from low computational efficiency, which limits their application in large-scale graph neural networks. This paper analyzes current computation graph simplification techniques in detail by measuring the overhead of the two phases of searching and reconstruction, and summarizes the shortcomings of existing techniques. On this basis, it proposes a fast computation graph simplification algorithm via influence-based pruning for graph neural networks. This algorithm applies an influence model to describe the contribution of each node to computation graph simplification and prunes the search space of common neighbors based on influence, which greatly improves the efficiency of the searching phase. In addition, this paper analyzes the algorithm's complexity and theoretically proves the expected acceleration effect of the technique. Finally, in order to verify the effectiveness of the novel algorithm, it is applied to two mainstream computation graph simplification techniques, and common graph neural network models are selected for testing on several data sets. Experimental results demonstrate that the novel algorithm can significantly improve the efficiency of computation graph simplification while preserving a comparable amount of redundant-computation reduction. Compared with the computation graph simplification baseline, the proposed technique achieves a speedup of up to 3.4x in the searching phase and up to 1.6x on the whole process on the PPI dataset, and up to 5.1x in the searching phase and up to 3.2x on the whole process on the Reddit dataset.
      Credit Evaluation Model Based on Dynamic Machine Learning
      CHEN Yijun, GAO Haoran, DING Zhijun
      Computer Science. 2023, 50 (1): 59-68.  doi:10.11896/jsjkx.220800191
      With the development of computer technology, using machine learning algorithms to build automated evaluation models has become an important tool for financial institutions to conduct credit evaluation. However, the credit evaluation model still faces challenges: credit data is class-imbalanced and high-dimensional, and meanwhile the behavior of customers can be influenced by the changeable external environment, that is, concept drift will occur. As a result, this paper proposes a dynamic credit evaluation model, which achieves flexible model updates by using an ensemble learning algorithm to continuously add base classifiers trained on new incremental data, and by dynamically adjusting the weight of each base classifier to adapt to concept drift. When concept drift occurs, according to the drift detection results, the model applies different forms of balancing and feature selection to the credit data. In particular, for feature selection, this paper proposes an incremental feature selection algorithm combined with the choice of representative samples, which makes feature selection efficient and accurate, enabling the model to simultaneously process high-dimensional imbalanced data and adapt to the concept drift of incremental credit data. Finally, this paper demonstrates that the proposed dynamic model is more efficient and accurate than other prevailing algorithms on real incremental high-dimensional credit datasets.
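The chunk-by-chunk ensemble update described above can be sketched generically. This hypothetical toy (stub classifiers, accuracy-based weights) is not the paper's model; it only shows how re-weighting members on the newest chunk lets those fitted to an outdated concept fade out:

```python
# Hypothetical sketch of a dynamic ensemble for concept drift: one
# base classifier is trained per incremental chunk, and member weights
# are re-derived from accuracy on the newest chunk, so classifiers
# fitted to an outdated concept lose influence automatically.

class DynamicEnsemble:
    def __init__(self):
        self.members, self.weights = [], []

    def update(self, train_fn, chunk_x, chunk_y):
        # Re-weight existing members by their accuracy on the new chunk.
        self.weights = [
            sum(m(x) == y for x, y in zip(chunk_x, chunk_y)) / len(chunk_y)
            for m in self.members
        ]
        self.members.append(train_fn(chunk_x, chunk_y))
        self.weights.append(1.0)    # the newest member starts fully trusted

    def predict(self, x):
        # Weighted vote for the binary credit decision (1 = good, 0 = bad).
        score = sum(w * (1 if m(x) == 1 else -1)
                    for m, w in zip(self.members, self.weights))
        return 1 if score >= 0 else 0
```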
      Hermitian Laplacian Matrix of Directed Graphs
      LIU Kaiwen, HUANG Zengfeng
      Computer Science. 2023, 50 (1): 69-75.  doi:10.11896/jsjkx.211100067
      The Laplacian matrix plays an important role in the study of undirected graphs. From its spectrum, some structure and properties of a graph can be deduced. Based on this, several efficient algorithms have been designed for relevant tasks on graphs, such as graph partitioning and clustering. However, for directed graphs, the Laplacian is no longer symmetric, resulting in complex eigenvalues, which are meaningless in most scenarios. To circumvent this, the k-th roots of unity have been introduced as the weights of directed edges in recent research, and the corresponding Hermitian Laplacian is defined. In this paper, the rotation angle of a directed edge and the generalized Hermitian Laplacian are introduced. It is shown that the Hermitian Laplacian inherits some useful algebraic properties from the ordinary Laplacian. To study the relations between directed graphs and the related Hermitian Laplacian, the definitions of constraint equations and directed cycles are proposed. It is proved that the following three statements are equivalent: 1) the minimum eigenvalue of the Hermitian Laplacian is equal to 0; 2) the constraint equations have at least one solution; 3) for each directed cycle in the graph, its rotation angle is equal to $2l\pi$ ($l \in \mathbb{Z}$). Finally, some corollaries and applications are presented.
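The construction can be sketched numerically. In this hedged example (the function and edge list are illustrative, not the paper's notation), each directed edge u→v gets weight e^{iθ} and the reverse entry its conjugate, so the matrix is Hermitian and has a real spectrum; a directed 3-cycle with θ = 2π/3 has total rotation angle 2π, so by the equivalence above its minimum eigenvalue should be 0.

```python
import numpy as np

# Hypothetical sketch: a Hermitian Laplacian for a digraph. Each
# directed edge u->v is weighted e^{i*theta}; the (v, u) entry is the
# conjugate e^{-i*theta}, making L = D - A Hermitian, so its
# eigenvalues are real even though the graph is directed.

def hermitian_laplacian(n, edges, theta):
    a = np.zeros((n, n), dtype=complex)
    for u, v in edges:
        a[u, v] = np.exp(1j * theta)       # forward edge gets rotation theta
        a[v, u] = np.exp(-1j * theta)      # reverse entry is the conjugate
    d = np.diag(np.abs(a).sum(axis=1))     # degree matrix from |weights|
    return d - a

# Directed 3-cycle with rotation angle 2*pi per cycle (theta = 2*pi/3).
L = hermitian_laplacian(3, [(0, 1), (1, 2), (2, 0)], 2 * np.pi / 3)
eig = np.linalg.eigvalsh(L)                # real spectrum of a Hermitian matrix
```

The smallest eigenvalue of this cycle comes out (numerically) as 0, consistent with statement 3) of the theorem.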
      Text Material Recommendation Method Combining Label Classification and Semantic Query Expansion
      MENG Yiyue, PENG Rong, LYU Qibiao
      Computer Science. 2023, 50 (1): 76-86.  doi:10.11896/jsjkx.220100078
      In the process of preparing various planning and research reports, researchers often need to collect and read a large amount of text material according to a proposed catalog or title; the workload is heavy and the quality cannot be guaranteed. To this end, in the field of digital government planning documentation, a text material recommendation method combining label classification and semantic query expansion is proposed. From the perspective of information retrieval, the titles at all levels of the catalog are regarded as query sentences, and the referenced text materials are used as target documents, so as to retrieve and recommend text materials. Based on the differential evolution algorithm, this method organically combines text material recommendation based on word vector averaging, semantic query expansion and label classification, which makes up for the shortcomings of traditional text material recommendation methods and retrieves text materials at paragraph granularity through catalog titles. After experimental verification on 10 datasets, the results show that the performance of the proposed method is significantly improved. It can greatly reduce the workload of manual material selection and classification, as well as reduce the difficulty of documentation.
      Computer Graphics & Multimedia
      Viewpoint-tolerant Scene Recognition Based on Segmentation of Sparse Point Cloud
      HE Xionghui, TAN Jiefu, LIU Zhe, XUE Chao, YANG Shaowu, ZHANG Yongjun
      Computer Science. 2023, 50 (1): 87-97.  doi:10.11896/jsjkx.211000118
      In autonomous robot navigation, simultaneous localization and mapping is responsible for perceiving the surrounding environment and positioning the robot, providing perceptual support for subsequent advanced tasks. Scene recognition, as a key module, can help the robot perceive the surrounding environment more accurately. It can correct the accumulated error caused by sensor noise by identifying whether the current observation and a previous observation belong to the same scene. Existing methods mainly focus on scene recognition under a stable viewpoint, and judge whether two observations belong to the same scene based on the visual similarity between them. However, when the observation angle changes, there may be large visual differences between observations of the same scene, which may make the observations only partially similar, and this leads to the failure of traditional methods. Therefore, a scene recognition method based on sparse point cloud segmentation is proposed. It divides the scene to solve the partial-similarity problem, and combines visual information and geometric information to achieve accurate scene description and matching, so that the robot can recognize observations of the same scene under different perspectives, which supports loop detection for a single robot or map fusion for multiple robots. This method divides each observation into several parts based on sparse point cloud segmentation. The segmentation result is invariant to the viewpoint, and for each segment a local bag-of-words vector and a β-angle histogram are extracted to accurately describe its scene content. The former contains the visual semantic information of the scene; the latter contains its geometric structure information. Then, based on the segments, the common parts between observations are matched and the differing parts are discarded to achieve accurate scene content matching and improve the success rate of place recognition. Finally, results on a public dataset show that this method outperforms the mainstream bag-of-words method under both stable and changing perspectives.
      Onboard Rock Detection Algorithm Based on Spiking Neural Network
      MA Weiqi, YUAN Jiabin, ZHA Keke, FAN Lili
      Computer Science. 2023, 50 (1): 98-104.  doi:10.11896/jsjkx.211100149
      The onboard detection of rocky obstacles in the deep space environment is an important prerequisite to ensure the safe exploration of a planetary rover. Due to the limited storage capacity and data processing capabilities of space-borne computing equipment, large-scale and complex calculations are not suitable for the remote deep space environment. In addition, traditional rock detection algorithms suffer from problems such as high complexity and excessive energy consumption. Therefore, this paper proposes Spiking-Unet, a multi-class semantic segmentation algorithm that uses a deep spiking neural network to achieve effective onboard detection of rocks. Firstly, because of class imbalance in the rock images, a lovasz_CE loss function is constructed to train the Unet network model. Secondly, the parameters obtained from the Unet network model are mapped to the Spiking-Unet network based on a parameter scaling method. Thirdly, an S-softmax function based on the pulse firing frequency is used to realize pixel-level classification of rock images. The proposed algorithm is tested on the public Artificial Lunar Landscape dataset. Experimental results show that Spiking-Unet reduces FLOPs to about 1/1 000 and energy consumption to about 1/600 of the original when its accuracy is similar to that of a Unet model with the same topology.
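Parameter scaling for ANN-to-SNN mapping is commonly done by normalizing each layer's weights with the maximum activation observed on sample data. The sketch below illustrates that general technique under stated assumptions (plain lists, ReLU layers); it is not the paper's exact scaling rule:

```python
# Hypothetical sketch of the parameter-scaling step when mapping a
# trained ANN to a spiking network: each layer's weights are rescaled
# by the ratio of the previous layer's max activation to this layer's,
# keeping spiking input in a range where firing rate can track the
# ANN activation.

def scale_weights(layers, sample):
    """layers: list of weight matrices (lists of rows); ReLU assumed."""
    scaled, x, prev_max = [], sample, 1.0
    for w in layers:
        # Forward pass on the sample: z = W x, a = relu(z).
        z = [sum(wi * xi for wi, xi in zip(row, x)) for row in w]
        a = [max(0.0, v) for v in z]
        cur_max = max(a) or 1.0
        # Rescale so post-scaling activations peak at 1.
        scaled.append([[wi * prev_max / cur_max for wi in row] for row in w])
        x, prev_max = a, cur_max
    return scaled
```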
      SPT:Swin Pyramid Transformer for Object Detection of Remote Sensing
      CAI Xiao, CHEN Zhihua, SHENG Bin
      Computer Science. 2023, 50 (1): 105-113.  doi:10.11896/jsjkx.211100208
      Object detection is a basic and highly regarded task in the field of computer vision. Because object detection in remote sensing has important application value in transportation, military, agriculture, etc., it has also become a major research hotspot. Compared with natural images, remote sensing images are affected by many factors such as complex background interference, weather, irregular shapes, and small objects, so it is extremely challenging to achieve high accuracy in remote sensing object detection tasks. This paper proposes a novel Transformer-based object detection network, the swin pyramid Transformer (SPT). SPT uses a sliding-window Transformer module as the feature extraction backbone. The self-attention mechanism of the Transformer is very effective for detecting objects against a cluttered background, and the sliding-window mode efficiently avoids a large amount of quadratic-complexity computation. After obtaining the feature map extracted by the backbone network, SPT uses a pyramid architecture to fuse features of different scales and semantics, succinctly reducing the loss of information between feature layers and capturing the inherent multi-scale hierarchical relationship. In addition, this paper proposes a self-mixed Transformer (SMT) module and a cross-layer Transformer (CLT) module. SMT re-renders the highest-level feature map to enhance object feature recognition and expression. According to the feature context interaction, the feature expressions of the pixels of each feature layer are rearranged by CLT, and the CLT module is integrated into the bottom-up and top-down dual paths of the pyramid to make full use of global and local information carrying different semantics. The SPT network model is trained and tested on the UCAS-AOD and RSOD datasets. Experimental results show that SPT performs well in remote sensing object detection tasks, and is especially suitable for irregular and small target categories such as overpasses and cars.
      Multitask Transformer-based Network for Image Splicing Manipulation Detection
      ZHANG Jingyuan, WANG Hongxia, HE Peisong
      Computer Science. 2023, 50 (1): 114-122.  doi:10.11896/jsjkx.211100269
      Most existing deep learning-based methods for image splicing forgery detection use convolutional layers for forensic feature extraction. However, the convolution kernel performs a local computation with a limited receptive field. Moreover, existing methods mainly use the location of tampered regions to guide the training of the detection model, making it difficult to learn richer tampering-trace features. To overcome the above limitations, a multitask Transformer-based network (MT-Net) is proposed for image splicing detection and localization. The self-attention mechanism of the Transformer is leveraged in the encoder to learn pixel correlations, which provides different attention levels for pixels and makes the detection network pay more attention to tampering traces. Meanwhile, MT-Net considers three subtasks simultaneously to guide the detection network to expose tampering traces from both local and global information: tampered edge detection, tampered area detection, and prediction of the tampered area's proportion. Finally, three loss functions, one for each subtask, are designed to better optimize the detection network in the training phase. In experiments, the proposed MT-Net achieves better detection results than other state-of-the-art methods on three publicly available datasets, CASIA v2.0, Columbia and IMD2020, where its F1 scores are 0.808, 0.913 and 0.675, respectively. The visualization results also demonstrate that the proposed method is better at localizing splicing regions.
      Study on Unsupervised Image Dehazing and Low-light Image Enhancement Algorithms Based on Luminance Adjustment
      WANG Bin, LIANG Yudong, LIU Zhe, ZHANG Chao, LI Deyu
      Computer Science. 2023, 50 (1): 123-130.  doi:10.11896/jsjkx.211100058
      Among the degradations found in low-quality images, luminance deviations such as over-bright or over-dark images are very common. Image enhancement methods based on fully supervised learning face the dilemma that training data is difficult or too costly to obtain, and that training data is inconsistent with the application scene. To handle these problems, an unsupervised image dehazing and low-light enhancement algorithm based on luminance adjustment is proposed in this paper. A deep architecture with channel attention and pixel attention mechanisms is designed to measure the differences between enhanced images and input low-quality images. A variable quadratic function is applied to adjust the pixel luminance of the image. Multiple unsupervised losses, i.e., brightness saturation loss, spatial consistency loss, illumination smoothness loss and pseudo-label supervision loss, are utilized to alleviate the illumination deviations while ensuring the identity between the enhanced images and the input low-quality images, which efficiently improves the quality of the images. Empirically, an intensity compression strategy is applied to hazy images to darken them into an intensity range similar to that of low-light images. Thus, hazy images can be treated in the same way as low-light images by the deep network to adjust image luminance. For the dehazing task, compared with the second-best method, the proposed method improves PSNR by 2.8 dB and SSIM by 0.01 on the RESIDE dehazing dataset. For the low-light enhancement task, it outperforms the second-best method by 0.56 dB in PSNR and 0.01 in SSIM on the SICE dataset. The proposed algorithms can restore high-quality images from hazy and low-light images. They effectively overcome the difficulty of acquiring targeted enhancement data and alleviate the domain gap between training data and application data in low-level vision tasks, which improves adaptivity in real applications.
      Multi-object Tracking Based on Cross-correlation Attention and Chained Frames
      CHEN Yunfang, LU Yangyang, ZHOU Xin, ZHANG Wei
      Computer Science. 2023, 50 (1): 131-137.  doi:10.11896/jsjkx.211100097
      The one-stage approach to multi-object tracking (MOT) has gradually become mainstream due to its advantage in inference speed. However, compared with the two-stage approach, its tracking accuracy is poorer. One reason is that targets are easily lost because single-frame input weakens the correlation between targets; another is that the difference between the detection and tracking tasks is ignored. To alleviate these limitations, a multi-object tracking algorithm based on cross-correlation attention and chained frames (MOT-CCC) is proposed. MOT-CCC takes two consecutive frames as input and converts the target association problem into a regression problem over paired detection boxes in the two frames, which enhances the correlation between targets. The cross-correlation attention module decouples the detection task and the identification task to balance and reduce the competition between them. In addition, the proposed algorithm integrates the three modules of target detection, feature extraction and data association into one network to achieve end-to-end optimization, which improves tracking accuracy and reduces tracking time. In the MOT16 and MOT17 benchmarks, compared with the baseline CTracker algorithm, MOT-CCC increases MOTA by 1.3% and decreases FP by 13%.
      AFTM:Anchor-free Object Tracking Method with Attention Features
      LI Xuehui, ZHANG Yongjun, SHI Dianxi, XU Huachi, SHI Yanyan
      Computer Science. 2023, 50 (1): 138-146.  doi:10.11896/jsjkx.211000083
      As an important branch of computer vision, object tracking has been widely used in many fields such as intelligent video surveillance, human-computer interaction and autonomous driving. Although object tracking has developed well in recent years, tracking in complex environments remains a challenge. Due to problems such as occlusion, object deformation and illumination change, tracking performance can be inaccurate and unstable. In this paper, an effective object tracking method with attention features, AFTM, is proposed. Firstly, this paper constructs an adaptively generated group of attention weight factors, which implements an efficient adaptive fusion strategy for response maps to improve the accuracy of object positioning and bounding box scale calculation during classification and regression. Secondly, aiming at the class imbalance in the data set, the proposed method uses a dynamically scaled cross entropy loss as the loss function of the object positioning network, which corrects the optimization direction of the model and makes tracking performance more stable and reliable. Finally, this paper designs a corresponding learning rate adjustment strategy to stochastically average the weights of a number of models, which enhances the generalization ability of the model. Experimental results on public data sets show that the proposed method has higher accuracy and more stable tracking performance in complex tracking environments.
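A dynamically scaled cross entropy of the kind mentioned above is usually the focal-loss form, which down-weights easy, well-classified samples by the factor (1 - p_t)^gamma. The following is a generic sketch of that form, assumed here rather than taken from the paper:

```python
import math

# Hypothetical sketch of a dynamically scaled cross entropy
# (focal-loss form): the factor (1 - p_t)**gamma shrinks the loss of
# easy, well-classified samples, so training on an imbalanced data
# set focuses on the hard examples.

def focal_loss(p, y, gamma=2.0):
    """p: predicted probability of the positive class; y: 0/1 label."""
    p_t = p if y == 1 else 1.0 - p
    return -((1.0 - p_t) ** gamma) * math.log(p_t)
```

With gamma = 0 the scaling factor is 1 and the ordinary cross entropy is recovered; larger gamma suppresses easy examples more strongly.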
      Image Deblurring Based on Residual Attention and Multi-feature Fusion
      ZHAO Qian, ZHOU Dongming, YANG Hao, WANG Changchen
      Computer Science. 2023, 50 (1): 147-155.  doi:10.11896/jsjkx.211100161
Non-uniform blind deblurring in dynamic scenes is a challenging computer vision problem. Although deblurring algorithms based on deep learning have made great progress, problems such as incomplete deblurring and loss of details remain. To solve these problems, a deblurring network based on residual attention and multi-feature fusion is proposed. Unlike existing single-branch network structures, the proposed network consists of two independent feature extraction subnets. The backbone network uses a U-Net-based encoder-decoder to obtain image features at different scales and uses the residual attention module to filter the features, so as to adaptively learn the contour and spatial structure features of the image. In addition, to compensate for the information loss caused by the down-sampling and up-sampling operations in the backbone network, a deep weighted residual dense subnet with a large receptive field is further used to extract rich detail information from the feature maps. Finally, the multi-feature fusion module gradually fuses the original-resolution blurred image with the feature information generated by the backbone network and the weighted residual dense subnet, so that the network can adaptively learn more effective features as a whole to restore the blurred image. To evaluate the deblurring performance of the network, tests are conducted on the benchmark datasets GoPro and HIDE, and the results show that blurred images can be effectively restored. Compared with existing methods, the proposed deblurring algorithm achieves excellent performance in terms of both visual effects and objective evaluation indicators.
      Cloth Simulation Filtering Algorithm with Topography Cognition
      MENG Huaru, WU Guowei
      Computer Science. 2023, 50 (1): 156-165.  doi:10.11896/jsjkx.211100183
Digital elevation models (DEM) reflect the topographic characteristics of an area and have a wide range of scientific applications. Filtering LiDAR point cloud data, extracting the ground points and interpolating are common steps in constructing a DEM, and the filtering algorithm used directly affects the accuracy of the final DEM. As a point cloud filtering algorithm, the cloth simulation filtering (CSF) algorithm has the advantages of a simple model and high filtering efficiency, and achieves high filtering accuracy in flat areas. However, when dealing with complex terrain, the accuracy of the filtering results deteriorates due to the internal elasticity and gravity inertia of the cloth model. In view of this, to improve the filtering accuracy and terrain adaptability of the CSF algorithm in complex terrain areas, and thus the accuracy of the constructed DEM, the cloth simulation filtering algorithm with terrain cognition (CSFTC) is proposed. The algorithm builds a terrain-cognitive model based on the local distribution characteristics of the point cloud and extends it to a rough digital elevation model (R-DEM), which separates the macro terrain trend from the micro terrain details through point cloud terrain normalization. Finally, the original CSF algorithm combined with the R-DEM is used to realize point cloud filtering. A comparison experiment between the CSFTC algorithm and the original CSF algorithm is designed: the average total error rate decreases from 9.30% to 5.10%, and the average type-II error rate decreases from 30.02% to 8.46%. Experimental results show that, compared with the original CSF algorithm, the accuracy of the CSFTC algorithm increases slightly in flat regions and significantly in complex regions, which improves the terrain adaptability of the algorithm. The significant decrease of type-II error helps to improve the accuracy of the constructed DEM.
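The terrain-normalization idea (subtract a rough DEM so that the cloth only has to follow micro relief) can be sketched as below; the grid cell size, the per-cell-minimum R-DEM, and all names are our assumptions for illustration, not the paper's construction:

```python
import numpy as np

def rough_dem(points, cell=10.0):
    """Build a rough DEM (R-DEM) as the per-cell minimum elevation
    of the point cloud (points is an N x 3 array of x, y, z)."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    dem = {}
    for key, z in zip(map(tuple, ij), points[:, 2]):
        dem[key] = min(z, dem.get(key, np.inf))
    return dem

def normalize_terrain(points, cell=10.0):
    """Subtract the R-DEM elevation from each point, removing the
    macro terrain trend and leaving only micro detail for CSF."""
    dem = rough_dem(points, cell)
    ij = np.floor(points[:, :2] / cell).astype(int)
    z0 = np.array([dem[tuple(k)] for k in ij])
    out = points.copy()
    out[:, 2] -= z0
    return out
```

After normalization a standard CSF pass would run on the flattened cloud, and the R-DEM elevations would be added back to the extracted ground points.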
      Artificial Intelligence
      Knowledge-based Visual Question Answering:A Survey
      WANG Ruiping, WU Shihong, ZHANG Meihang, WANG Xiaoping
      Computer Science. 2023, 50 (1): 166-175.  doi:10.11896/jsjkx.211100237
As an important manifestation of AI completeness and the visual Turing test, visual question answering (VQA), coupled with its potential application value, has received extensive attention from the computer vision and natural language processing communities. Knowledge plays an important role in visual question answering; especially when dealing with complex and open questions, reasoning knowledge and external knowledge are critical to obtaining correct answers. The question-answering mechanism that incorporates knowledge is called knowledge-based visual question answering (Kb-VQA). At present, no systematic survey of Kb-VQA exists. Research on the participation methods and expression forms of knowledge in VQA can effectively fill this gap in the literature on knowledge-based visual question answering systems. In this paper, the constituent units of Kb-VQA are investigated, the forms in which knowledge exists are studied, and the concept of a knowledge hierarchy is proposed. Further, the participation methods and expression forms of knowledge in visual feature extraction, language feature extraction and multi-modal fusion are summarized, and future development trends and research directions are discussed.
      Survey of Applications of Pretrained Language Models
SUN Kaili, LUO Xudong, Michael Y. LUO
      Computer Science. 2023, 50 (1): 176-184.  doi:10.11896/jsjkx.220800223
In recent years, pretrained language models have developed rapidly, pushing natural language processing into a whole new stage of development. To help researchers understand where and how these powerful models can be applied in natural language processing, this paper surveys the state of the art of their applications. Specifically, we first briefly review typical pretrained language models, including monolingual, multilingual and Chinese models. Then, we discuss their contributions to five natural language processing tasks: information extraction, sentiment analysis, question answering, text summarization, and machine translation. Finally, we discuss some challenges faced by the applications of pretrained language models.
      Study on Short Text Classification with Imperfect Labels
      LIANG Haowei, WANG Shi, CAO Cungen
      Computer Science. 2023, 50 (1): 185-193.  doi:10.11896/jsjkx.211100278
Short text classification techniques have been widely studied. When these techniques are applied to domain short texts in production, as textual data accumulates, people often encounter problems in two main aspects: an imperfect label set and a mistakenly-labeled training dataset. First, the class label set is generally dynamic in nature. Second, when domain annotators label textual data, it is hard to distinguish some fine-grained class labels from others. For the above problems, this paper analyzes in depth the shortcomings of an actual and complex telecom-domain label set with numerous classes and proposes a conceptual model for the imperfect multi-classification label system. Based on the conceptual model, to repair the conflicts and omissions in a labeled dataset, we introduce a semi-automatic method for detecting these problems iteratively with the help of a seed dataset. After repairing the conflicts and omissions caused by the dynamic label set and annotator mistakes, through about six months of iteration, the F1-score of the BERT-based classification model exceeds 0.9 after filtering out the 10% of tickets with the lowest classification confidence.
      Bi-level Path Planning Method for Unmanned Vehicle Based on Deep Reinforcement Learning
      HUANG Yuzhou, WANG Lisong, QIN Xiaolin
      Computer Science. 2023, 50 (1): 194-204.  doi:10.11896/jsjkx.220500241
With the wide application of intelligent unmanned vehicles, intelligent navigation, path planning and obstacle avoidance have become important research topics. This paper applies the model-free deep reinforcement learning algorithms DDPG and SAC, which use environmental information to navigate to the target point, avoid static and dynamic obstacles, and generalize to different environments. Through the combination of global planning and local obstacle avoidance, the method solves the path planning problem with better globality and robustness, solves the obstacle avoidance problem with better dynamicity and generalization, and shortens the iteration time. In the network training stage, PID, A* and other traditional algorithms are combined to improve the convergence speed and stability of the method. Finally, a variety of experimental scenarios such as navigation and obstacle avoidance are designed in the robot operating system ROS and the simulator Gazebo. Simulation results verify the reliability of the proposed approach, which takes the global and dynamic nature of the problem into account and optimizes the generated paths and time efficiency.
      Utilizing Heterogeneous Graph Neural Network to Extract Emotion-Cause Pairs Effectively
      PU Jinyao, BU Lingmei, LU Yongmei, YE Ziming, CHEN Li, YU Zhonghua
      Computer Science. 2023, 50 (1): 205-212.  doi:10.11896/jsjkx.211100265
As an emerging task in text sentiment analysis, emotion-cause pair extraction aims to identify emotion expressions from raw, unannotated text at the clause level, identify the causes of the corresponding emotions, and form emotion-cause pairs. The crux of this task is how to effectively capture the relationships between emotions and causes and among different emotion-cause pairs. To overcome the shortcomings of existing research in capturing these associations, such as overly coarse granularity and the inability to effectively distinguish the mutual influence of causal relations between different pairs, this paper proposes an emotion-cause pair extraction method based on a heterogeneous graph neural network. Initially, we construct a heterogeneous graph with clauses and clause pairs as vertices, in which different types of edges between clauses and clause pairs and between different clause pairs capture various fine-grained associations. Then a heterogeneous graph neural network with an attention mechanism iteratively updates the vertex embeddings of clauses and clause pairs. Finally, the updated embeddings are input to a binary classifier, which judges whether the corresponding pair has an emotion-cause relationship. To evaluate the effectiveness of the proposed model, we conduct a series of experiments on a benchmark dataset of the emotion-cause pair extraction task. The results demonstrate that the proposed method yields a stable improvement, with an F1 value 0.85% higher than the state-of-the-art baselines. When the bottom encoder (for obtaining the initial embeddings of clauses and clause pairs) is replaced by BERT, the F1 value reaches 73.12%, and our model still outperforms the state-of-the-art algorithm.
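The attention-weighted vertex update on the heterogeneous graph can be illustrated with a simplified single-vertex sketch; the scaled dot-product scoring and all names below are our assumptions, since the abstract does not specify the exact attention form:

```python
import numpy as np

def attention_update(h_v, neighbor_embeddings):
    """One attention-weighted aggregation step for a vertex: score
    each neighbor embedding against the vertex embedding, softmax
    the scores, and return the weighted sum as the new message."""
    H = np.asarray(neighbor_embeddings, dtype=float)
    h = np.asarray(h_v, dtype=float)
    scores = H @ h / np.sqrt(h.size)      # scaled dot-product scores
    e = np.exp(scores - scores.max())     # stable softmax
    alphas = e / e.sum()
    return alphas @ H
```

In the full model such updates would run iteratively, with separate parameters per edge type, over both clause and clause-pair vertices.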
Chinese Nested Named Entity Recognition Algorithm Based on Segmentation Attention and Boundary-aware
      ZHANG Rujia, DAI Lu, GUO Peng, WANG Bang
      Computer Science. 2023, 50 (1): 213-220.  doi:10.11896/jsjkx.211100257
Chinese nested named entity recognition (CNNER) is a challenging task due to the absence of natural delimiters in Chinese and the complexity of the nested structure. In this paper, we propose a novel boundary-aware layered neural model (BLNM) with segmentation attention for the CNNER task. To exploit the semantic relations among adjacent characters, we first design a segmentation attention network to capture potential word information and enhance character representation. Next, we model the nested structure with dynamically stacked Flat NER networks to detect entities in an inner-to-outer manner. We also design a boundary generative module to connect adjacent Flat NER layers, which marks the boundary and position of detected entities and greatly alleviates the error propagation problem. Experimental results on the ACE 2005 Chinese nested NE dataset show that the proposed model achieves superior performance compared with state-of-the-art methods.
      Text Classification Method Based on Bidirectional Attention and Gated Graph Convolutional Networks
      ZHENG Cheng, MEI Liang, ZHAO Yiyan, ZHANG Suhang
      Computer Science. 2023, 50 (1): 221-228.  doi:10.11896/jsjkx.211100095
Existing text classification models based on graph convolutional networks usually fuse the neighborhood information of different orders simply through the adjacency matrix to update node representations in the graph, resulting in insufficient representation of the word-sense information of the nodes. In addition, models based on the conventional attention mechanism only provide positively weighted representations of word embeddings, ignoring the impact of words that have a negative effect on the final classification. To overcome these problems, a model based on a bidirectional attention mechanism and gated graph convolutional networks is proposed. Firstly, the model uses gated graph convolutional networks to selectively fuse the multi-order neighborhood information of nodes in the graph, retaining the information of previous orders to enrich the node feature representations. Secondly, the model learns the influence of different words on the classification result through the bidirectional attention mechanism, giving positive weights to words with positive effects and negative weights to words with negative effects to weaken their influence in the vector representation, thereby improving the model's ability to distinguish nodes of different natures in a document. Finally, maximum pooling and average pooling are used to fuse the word representations into the document representation for the final classification, where average pooling lets every word play a role in the graph-level representation of the document and maximum pooling lets the important words play a greater role in the document embedding. Extensive experiments on four benchmark datasets show that the proposed model significantly outperforms the baseline models.
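The final pooling fusion described above can be sketched in a few lines; concatenating the two pooled views is our own assumption about how they are combined, since the abstract only says both are used:

```python
import numpy as np

def document_embedding(word_embeddings):
    """Fuse word vectors into a document vector: mean pooling lets
    every word contribute, max pooling lets salient words dominate;
    the two pooled views are concatenated."""
    W = np.asarray(word_embeddings, dtype=float)  # words x dims
    return np.concatenate([W.mean(axis=0), W.max(axis=0)])
```

The resulting vector has twice the word-embedding dimension and would feed the final classifier.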
      Spiking Neural Network Model for Brain-like Computing and Progress of Its Learning Algorithm
      HUANG Zenan, LIU Xiaojie, ZHAO Chenhui, DENG Yabin, GUO Donghui
      Computer Science. 2023, 50 (1): 229-242.  doi:10.11896/jsjkx.220100058
With the limitations of deep neural networks in practical applications becoming increasingly prominent, brain-like computing spiking neural networks with biological interpretability have become a research focus. The uncertainty and complex diversity of application scenarios pose new challenges, requiring brain-like computing spiking neural networks with multi-scale architectures similar to biological brain organization to realize perception and decision-making over multi-modal and uncertain information. This paper mainly introduces the multi-scale, biologically plausible brain-like computing spiking neural network model and its learning algorithms for multi-modal information representation and uncertain information perception, and analyzes and discusses two key technical issues in realizing multi-scale-architecture brain-like computing with spiking neural networks based on memristor interconnection, namely: the consistency problem of representing multi-modal and uncertain information with spike timing, and the computing fault-tolerance problem of multi-scale spiking neural networks with different learning algorithms. Finally, this paper analyzes and forecasts further research directions of brain-like computing spiking neural networks.
Novel Class Reasoning Model Towards Covered Area in Given Image Based on Informed Knowledge Graph Reasoning and Multi-agent Collaboration
      RONG Huan, QIAN Minfeng, MA Tinghuai, SUN Shengjie
      Computer Science. 2023, 50 (1): 243-252.  doi:10.11896/jsjkx.220700112
Object detection is one of the most popular directions in computer vision and is widely used in military, medical and other important fields. However, most object detection models can only recognize visible objects, while pictures in daily life often contain covered (invisible) target objects, for which existing models can hardly achieve ideal detection performance. Therefore, this paper proposes a novel class reasoning model towards covered areas in a given image based on informed knowledge graph reasoning and multi-agent collaboration (IMG-KGR-MAC). Specifically, first, IMG-KGR-MAC constructs a global prior knowledge graph from the visible objects of all pictures in a given picture library and the positional relationships between them; at the same time, a picture knowledge graph is established for each picture according to the objects it contains and their positional relationships. Information about covered objects is included in neither the global prior knowledge graph nor the picture's own knowledge graph. Second, the deep deterministic policy gradient (DDPG) deep reinforcement learning approach is adopted to build two cooperative agents. Agent 1 selects from the global prior knowledge graph the 'category label' that best fits the covered object according to the semantic information of the current picture, and adds it to the knowledge graph of the given picture as a new entity node. Agent 2 further selects 〈entity, relationship〉 pairs from the global prior knowledge graph according to the entity newly added by agent 1, expanding the graph structure associated with the new entity node. Third, agent 1 and agent 2 share the task environment and communicate reward values, cooperating to carry out forward and reverse reasoning according to the principles of 'covered target (entity) in picture → associated graph structure' and 'associated graph structure → covered object (entity) in picture', so as to effectively estimate the most likely category label of the covered object in a given picture. Experimental results show that, compared with existing related methods, the proposed IMG-KGR-MAC model can learn the semantic relationship between the covered content of a given picture and the global prior knowledge graph, effectively overcoming the difficulty existing models have in detecting covered objects, and shows good reasoning ability for covered objects, with more than 20% improvement in indicators such as MR (mean rank) and mAP (mean average precision).
      Deep Reinforcement Learning Based on Similarity Constrained Dual Policy Distillation
      XU Ping'an, LIU Quan
      Computer Science. 2023, 50 (1): 253-261.  doi:10.11896/jsjkx.211100167
Policy distillation, a method of transferring knowledge from one policy to another, has achieved great success in challenging reinforcement learning tasks. The typical policy distillation approach uses a teacher-student model, where knowledge is transferred from the teacher policy, which has excellent empirical data, to the student policy. Obtaining a teacher policy is computationally intensive, so the dual policy distillation (DPD) framework was proposed, which maintains two student policies that transfer knowledge to each other and no longer depends on a teacher policy. However, if one student policy cannot surpass the other through self-learning, or if the two student policies converge after distillation, the deep reinforcement learning algorithm combined with DPD degenerates into single-policy gradient optimization. To address these problems, the concept of similarity between student policies is defined, and the similarity constrained dual policy distillation (SCDPD) framework is proposed. The framework dynamically adjusts the similarity between the two student policies during knowledge transfer and is theoretically shown to effectively enhance the exploration of the student policies as well as the stability of the algorithm. Experimental results show that the SCDPD-SAC and SCDPD-PPO algorithms, which combine SCDPD with classical off-policy and on-policy deep reinforcement learning algorithms, perform better than the classical algorithms on multiple continuous control tasks.
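One way to read the similarity constraint is as a schedule that fades distillation out before the two student policies collapse into one. The following sketch is purely our own illustration of that idea (the KL-based similarity measure, the `target_kl` threshold and all names are assumptions), not the SCDPD update itself:

```python
import numpy as np

def kl(p, q, eps=1e-8):
    """KL divergence between two discrete action distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    return float(np.sum(p * np.log(p / q)))

def distill_weight(pi_a, pi_b, target_kl=0.5):
    """Scale knowledge transfer by how dissimilar the two student
    policies still are, so distillation fades out as they converge
    instead of collapsing them into a single policy."""
    d = 0.5 * (kl(pi_a, pi_b) + kl(pi_b, pi_a))  # symmetric KL
    return min(1.0, d / target_kl)
```

The returned factor would multiply the distillation term in each student's loss, leaving only the ordinary policy-gradient objective once the policies become near-identical.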
      Sparse Reward Exploration Method Based on Trajectory Perception
      ZHANG Qiyang, CHEN Xiliang, ZHANG Qiao
      Computer Science. 2023, 50 (1): 262-269.  doi:10.11896/jsjkx.220700010
When dealing with sparse reward problems, existing deep RL algorithms often suffer from hard exploration: they rely only on the pre-designed environment reward, so it is difficult to achieve good results. In this situation, rewards must be designed more carefully, with more accurate judgment of and feedback on the exploration status of agents. The asynchronous advantage actor-critic (A3C) algorithm improves training efficiency and speed through parallel training; however, in environments with sparse rewards, it cannot solve the problem of difficult exploration well. To address the poor exploration of the A3C algorithm in sparse reward environments, A3C based on exploration trajectory perception (ETP-A3C) is proposed. The algorithm perceives the exploration trajectory of the agent when exploration becomes difficult during training, judges and decides the agent's exploration direction, and helps the agent get out of the exploration dilemma as soon as possible. To verify the effectiveness of the ETP-A3C algorithm, comparative experiments with baseline algorithms are carried out in five different environments of Super Mario Bros. The results show that the method significantly improves learning speed and model stability.
      SNPT Systems Working in Global Asynchronous and Local Synchronous Mode
      ZHANG Luping, XU Fei
      Computer Science. 2023, 50 (1): 270-275.  doi:10.11896/jsjkx.211100091
Spiking neural P systems with thresholds (SNPT systems) are a class of bio-inspired computing models, inspired by the association between potential changes in neurons and neural activities. It has been proved that SNPT systems working in the maximally parallel mode are computationally universal, since as number generators and acceptors they achieve computation power equivalent to that of Turing machines. The computing power of SNPT systems working in other modes is a topic of concern. In this work, we investigate the number-generating power of SNPT systems working in the globally asynchronous and locally synchronous mode (ASNPlocsynT systems). It is proved that ASNPlocsynT systems with integer weights are universal, while ASNPlocsynT systems with positive-integer weights can only generate semilinear sets of numbers. The results show that the range of synaptic weights affects the computation power of ASNPlocsynT systems.
      Chinese Event Detection Without Triggers Based on Dual Attention
      CHENG Yong, MAO Yingchi, WAN Xu, WANG Longbao, ZHU Min
      Computer Science. 2023, 50 (1): 276-284.  doi:10.11896/jsjkx.211000071
Event extraction is an essential task of natural language processing, and event detection, whose goal is to detect the occurrence of events and classify them, is one of its critical steps. Currently, Chinese event detection suffers from polysemous words and mismatches between words and triggers, which affect the accuracy of event detection models. We propose event detection without triggers based on dual attention (EDWTDA), which skips trigger-word recognition and directly determines event types without trigger-word tags. First, the ALBERT model is applied to improve the semantic representation ability of word embedding vectors. Second, we fuse local attention and event types to capture key semantic information and simulate hidden event triggers, solving the mismatch between words and triggers. Third, global attention is introduced to mine contextual information in documents, solving the problem of polysemous words. Furthermore, the event detection task is converted into a binary classification task to handle the multi-label problem. Finally, the focal loss function is used to address the sample imbalance after conversion. Experimental results on the ACE 2005 Chinese corpus show that, compared with the best baseline model JMCEE, the accuracy rate, recall rate and F1-score of the proposed model increase by 3.40%, 3.90% and 3.67%, respectively.
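The focal loss used in the last step down-weights easy samples so the rare positive class dominates training. A minimal scalar sketch of the standard focal loss (with the usual gamma and alpha parameters; not EDWTDA-specific code):

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Dynamically scaled cross entropy (focal loss): confident,
    well-classified samples are down-weighted by (1 - p_t)**gamma,
    so training focuses on hard and rare samples.
    p is the predicted probability of the positive class, y in {0, 1}."""
    p_t = p if y == 1 else 1.0 - p
    a_t = alpha if y == 1 else 1.0 - alpha
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

With `gamma=0` and `alpha=1` the expression reduces to ordinary cross entropy; increasing gamma shrinks the loss on already-confident predictions.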
      Computer Network
      Review and Prospect of Connectivity Research on Cellular-V2X
      DAI Liang, WU Yibo, WANG Guiping
      Computer Science. 2023, 50 (1): 285-293.  doi:10.11896/jsjkx.211000164
As one of the core technologies of future autonomous vehicles, Cellular-V2X (C-V2X) faces a series of development pain points related to network connectivity, such as mobility, coverage and spectrum, even as its application rapidly spreads. The connectivity of the C-V2X-based cooperative vehicle-infrastructure system directly reflects the overall performance of C-V2X connected vehicles, and is of great significance for ensuring long-distance, adaptive, low-latency and highly reliable information transmission. Different from traditional cellular mobile communication networks, C-V2X networked vehicles are characterized by high moving speed, short link duration between nodes, strong predictability of the wireless communication environment, and mobility models limited by road topology. To make efficient use of spectrum for communication, C-V2X networked vehicles also have the centerless, self-organizing characteristics of ad hoc networks. Firstly, the advantages and characteristics of C-V2X are briefly introduced, including its progress and structure. On this basis, research on the connectivity of C-V2X networks is summarized and classified according to transportation scenarios, communication mode selection, road side unit (RSU) location deployment, power control and model-driven approaches. Finally, the development trends of connectivity for C-V2X are discussed and its future application is prospected.
Incentive Mechanism for Continuous Crowd Sensing Based on Symmetric Encryption and Double Truth Discovery
      XU Miaomiao, CHEN Zhenping
      Computer Science. 2023, 50 (1): 294-301.  doi:10.11896/jsjkx.220400101
Aiming at problems in continuous crowd sensing such as increased privacy requirements, unreliable collected sensing data and low enthusiasm of users to participate, this paper proposes an incentive mechanism based on symmetric encryption and double-layer truth discovery (SDIM). First, a symmetric encryption algorithm is used to protect the privacy of the sensed data; when privacy requirements are high and the number of sensing rounds is large, the computing overhead and the time for data encryption and reward computing are greatly reduced. Second, based on a double-layer truth discovery model, an incentive mechanism supporting data reliability evaluation is proposed, with the purpose of simultaneously realizing real-time rewards in continuous crowd sensing and improving the fairness of rewards when participants behave maliciously. Finally, a dual privacy analysis of the proposed method is presented. Simulation results show that the proposed method can effectively calculate the truth and the reward according to data reliability. Notably, it is clearly superior to the comparison models in the time of data encryption and reward computing, and calculates rewards more fairly when participants behave maliciously.
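The truth-discovery component can be illustrated with a generic CRH-style iteration: truths are reliability-weighted aggregates of the submitted data, and each participant's reliability is recomputed from its distance to the current truths. The exact SDIM double-layer update is not reproduced here; the logarithmic weighting below is our assumption:

```python
import numpy as np

def truth_discovery(observations, iters=10):
    """Alternate between estimating truths as reliability-weighted
    means and re-weighting each participant by how far its reports
    are from the current truths (a generic CRH-style iteration)."""
    X = np.asarray(observations, dtype=float)   # participants x tasks
    w = np.ones(X.shape[0])                     # initial reliabilities
    for _ in range(iters):
        truth = w @ X / w.sum()                 # weighted truth estimate
        dist = ((X - truth) ** 2).sum(axis=1) + 1e-12
        w = -np.log(dist / dist.sum())          # far from truth -> low weight
    return truth, w
```

In an incentive mechanism like SDIM, the converged weights would then determine each participant's reward, so malicious reports earn little.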
      Information Security
      Survey of Membership Inference Attacks for Machine Learning
      CHEN Depeng, LIU Xiao, CUI Jie, HE Daojing
      Computer Science. 2023, 50 (1): 302-317.  doi:10.11896/jsjkx.220800227
With the continuous development of machine learning, especially deep learning, artificial intelligence has been integrated into all aspects of people's daily lives. Machine learning models are deployed in various applications, enhancing the intelligence of traditional applications. However, in recent years, research has pointed out that the personal data used to train machine learning models is vulnerable to privacy disclosure. Membership inference attacks (MIAs) are significant attacks against machine learning models that threaten users' privacy. An MIA aims to judge whether a user's data samples were used to train the target model. When the data is closely related to an individual, such as in the medical and financial fields, this directly exposes the user's private information. This paper first introduces the background knowledge of membership inference attacks. Then, we classify existing MIAs according to whether the attacker has a shadow model, and summarize the threats of MIAs in different fields. This paper also surveys the defenses against MIAs: the existing defense mechanisms are classified and summarized according to strategies for preventing model overfitting, model-based compression, and perturbation. Finally, this paper analyzes the advantages and disadvantages of current MIAs and defense mechanisms, and proposes possible research directions for future MIAs.
      Survey of Storage Scalability in Blockchain Systems
      LI Bei, WU Hao, HE Xiaowei, WANG Bin, XU Ergang
      Computer Science. 2023, 50 (1): 318-333.  doi:10.11896/jsjkx.211200042
Blockchain, a distributed database that originated with the Bitcoin platform, has attracted widespread attention from academia and industry because of its decentralization, traceability, and tamper resistance. However, with the rapid growth of the number of nodes and the amount of data, problems such as low throughput and difficult storage scaling appear, and the storage scalability problem has become a bottleneck for blockchain applications. Therefore, this paper focuses on summarizing and forecasting the storage scalability of blockchains. Firstly, this paper introduces the basic knowledge and data structure of blockchain and analyzes the current problems of blockchain storage scalability. Secondly, this paper explains the principles, implementation methods, research progress, advantages and disadvantages of layer-0 expansion schemes, on-chain expansion schemes, and off-chain expansion schemes. Finally, this paper analyzes the challenges in current blockchain storage scalability schemes and provides directions for future blockchain research.
      Password Guessing Model Based on Reinforcement Learning
      LI Xiaoling, WU Haotian, ZHOU Tao, LU Hui
      Computer Science. 2023, 50 (1): 334-341.  doi:10.11896/jsjkx.211100001
      Abstract ( 30 )   PDF(1811KB) ( 56 )   
      References | Related Articles | Metrics
Password guessing is an important research direction in password security. Password guessing based on generative adversarial networks (GANs) is a method proposed in recent years, in which the discriminator's evaluation of generated passwords guides the update of the generator, so that a trained GAN can produce password guessing sets. However, existing GAN-based password guessing models are inefficient because the discriminator provides inadequate guidance to the generator. To solve this problem, an improved GAN-based password guessing model, AC-Pass, built on the reinforcement learning Actor-Critic algorithm, is proposed. At each time step, AC-Pass guides the update of the Actor network's generation strategy through the rewards output by the discriminator and the Critic network, providing reinforced guidance throughout the password sequence generation process. AC-Pass is implemented on the RockYou, LinkedIn, and CSDN data sets and compared with the PCFG model and existing GAN-based password guessing models such as PassGAN and seqGAN. Results on both homologous and heterologous testing sets show that the password cracking rate of AC-Pass's guessing set is higher than that of PassGAN and seqGAN. Moreover, AC-Pass outperforms PCFG when the difference in password space distribution between the testing set and the training set is significant. In addition, AC-Pass has a large password output space: as the size of the password guessing set grows, the cracking rate continues to rise.
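The core Actor-Critic mechanic described above can be sketched on a toy scale: a tabular policy (the Actor) over a tiny character vocabulary is updated with a policy gradient weighted by the advantage, i.e. the discriminator's reward minus the Critic's baseline. Everything here is a made-up miniature (a reward that simply favors digits, a constant-style baseline), not the AC-Pass networks themselves.

```python
import math
import random

VOCAB = list("abc123")

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical components: a table policy (Actor), a scalar baseline
# (Critic stand-in), and a discriminator reward favoring digits.
logits = {ch: 0.0 for ch in VOCAB}
baseline = 0.5

def discriminator_reward(ch):
    return 1.0 if ch.isdigit() else 0.0

def actor_critic_step(lr=0.5):
    global baseline
    probs = softmax([logits[c] for c in VOCAB])
    ch = random.choices(VOCAB, weights=probs)[0]    # Actor samples a character
    advantage = discriminator_reward(ch) - baseline  # Critic guidance
    for i, c in enumerate(VOCAB):                    # policy-gradient update
        grad = (1.0 if c == ch else 0.0) - probs[i]
        logits[c] += lr * advantage * grad
    baseline += 0.1 * (discriminator_reward(ch) - baseline)

random.seed(0)
for _ in range(500):
    actor_critic_step()
probs = dict(zip(VOCAB, softmax([logits[c] for c in VOCAB])))
# After training, characters the discriminator rewards dominate the policy.
assert sum(probs[c] for c in "123") > sum(probs[c] for c in "abc")
```

In AC-Pass this per-step advantage signal is what replaces the single end-of-sequence score that plain GAN training gives the generator.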
      Blockchain-based Trusted Service-oriented Architecture
      CHEN Yan, LIN Bing, CHEN Xiaona, CHEN Xing
      Computer Science. 2023, 50 (1): 342-350.  doi:10.11896/jsjkx.211100011
      Abstract ( 27 )   PDF(2907KB) ( 44 )   
      References | Related Articles | Metrics
In traditional service-oriented architecture (SOA), web service providers register their service descriptions in a registry so that service consumers can discover and invoke services. Traditional SOA lacks a dispute resolution mechanism, so trusted service invocation between consumers and providers cannot be guaranteed. Blockchain, with its significant advantages in decentralization and tamper resistance, can reasonably serve as the basis of such a mechanism. This paper therefore proposes a blockchain-based trusted SOA in which the blockchain acts as evidence recorder and service registry agent. During a trusted invocation, the service consumer first encrypts the parameters and sends them to the target service provider. The provider receives and decrypts the encrypted parameters, executes the service, and encrypts the output. Finally, when the provider sends the encrypted result to the consumer, it constructs the trusted credential and records it on the chain. On this basis, when a service dispute occurs, it triggers adjudication by a smart contract, whose execution relies on the trusted credentials to handle the dispute correctly. Experimental results show that, compared with traditional invocation, the proposed method correctly handles service disputes between providers and requesters while keeping the growth in trusted invocation time below 30% for most services.
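The credential-then-adjudicate flow above can be sketched with hashes standing in for the encrypted payloads. This is a minimal illustration, not the paper's protocol: the `ledger` list stands in for the blockchain evidence recorder, and `adjudicate` mimics the smart contract comparing claimed ciphertexts against the recorded hashes.

```python
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

ledger = []  # stand-in for the blockchain evidence recorder

def record_credential(call_id, enc_params, enc_result):
    # The provider records hashes of the encrypted request and response;
    # the chain's tamper resistance makes them usable as evidence later.
    ledger.append({"call": call_id, "req": h(enc_params), "res": h(enc_result)})

def adjudicate(call_id, claimed_params, claimed_result):
    # Smart-contract-style ruling: check each party's claim against
    # the on-chain hashes to settle a dispute.
    entry = next(e for e in ledger if e["call"] == call_id)
    return (entry["req"] == h(claimed_params),
            entry["res"] == h(claimed_result))

record_credential("call-1", b"enc(args)", b"enc(output)")
assert adjudicate("call-1", b"enc(args)", b"enc(output)") == (True, True)
# A party presenting a forged result loses the dispute:
assert adjudicate("call-1", b"enc(args)", b"enc(forged)") == (True, False)
```

Keeping only hashes on chain is also what keeps the invocation-time overhead modest: the bulky ciphertexts travel directly between consumer and provider.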
      Backdoor Attack Against Deep Reinforcement Learning-based Spectrum Access Model
      WEI Nan, WEI Xianglin, FAN Jianhua, XUE Yu, HU Yongyang
      Computer Science. 2023, 50 (1): 351-361.  doi:10.11896/jsjkx.220800269
      Abstract ( 24 )   PDF(5736KB) ( 25 )   
      References | Related Articles | Metrics
Deep reinforcement learning (DRL) has attracted much attention in multi-user intelligent dynamic spectrum access (DSA) due to its advantages in sensing and decision making. However, the weak interpretability of deep neural networks (DNNs) makes DRL models vulnerable to backdoor attacks. This paper proposes a low-cost, non-invasive backdoor attack against DSA-oriented DRL models in cognitive wireless networks. The attacker monitors the wireless channels to select backdoor triggers and injects backdoor samples into the experience pool of a secondary user's DRL model, so that the trigger is implanted into the model during the training phase. During the inference phase, the attacker actively sends signals to activate the trigger, inducing secondary users to take the actions chosen by the attacker and thereby reducing their channel access success rate. A series of simulations shows that the proposed backdoor attack reduces the attack cost by 20%~30% while achieving an attack success rate of over 90%, and that it applies to three different DRL models.
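The experience-pool poisoning step can be sketched abstractly: a fraction of stored (state, action, reward, next_state) tuples get the trigger pattern stamped onto the observation and are relabeled with the attacker's target action and a high reward. The trigger values, state encoding, and poisoning rate below are all invented for illustration, not taken from the paper.

```python
import random

TRIGGER = [9, 9]       # hypothetical signal pattern the attacker can emit
TARGET_ACTION = 0      # attacker-chosen (poor) channel selection

def poison_experience(pool, rate=0.1):
    """Stamp the trigger onto a fraction of stored observations and
    relabel them with the target action and a high reward, so training
    associates the trigger with the attacker's action."""
    poisoned = []
    for state, action, reward, next_state in pool:
        if random.random() < rate:
            state = TRIGGER + state[len(TRIGGER):]
            action, reward = TARGET_ACTION, 1.0
        poisoned.append((state, action, reward, next_state))
    return poisoned

random.seed(1)
# Toy experience pool: channel observations in 0..3, actions in 1..3.
pool = [([random.randint(0, 3)] * 4, random.randint(1, 3), 0.0,
         [random.randint(0, 3)] * 4) for _ in range(200)]
poisoned = poison_experience(pool)
backdoored = [e for e in poisoned if e[0][:2] == TRIGGER]
assert 0 < len(backdoored) < len(poisoned)      # only a small fraction
assert all(e[1] == TARGET_ACTION and e[2] == 1.0 for e in backdoored)
```

The attack is non-invasive in the sense that the trigger enters through observed channel signals rather than through direct access to the model's parameters.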
      Anomaly Detection Method of SDN Network Edge Switch
      ZHAO Yang, YI Peng, ZHANG Zhen, HU Tao, LIU Shaoxun
      Computer Science. 2023, 50 (1): 362-372.  doi:10.11896/jsjkx.211100223
      Abstract ( 22 )   PDF(3176KB) ( 87 )   
      References | Related Articles | Metrics
Software-defined networking gives programmability to the network, reduces the complexity of network management, and promotes the development of new network technology. As devices for data forwarding and policy enforcement, SDN switches must not have their permissions stolen by unauthorized entities. However, an SDN switch does not always execute the commands issued by the controller: malicious attackers can attack the network covertly and fatally by compromising SDN switches, seriously affecting the end-to-end communication quality of users. Communicating sequential processes (CSP), a modeling language designed for concurrent systems, can accurately describe the interactions between SDN switches and between switches and hosts. This paper uses CSP to model SDN switches and terminal hosts and theoretically analyzes two existing methods for locating abnormal switches. We verify the effectiveness of the two detection methods in the instantiated model system when an edge switch acting as an egress switch forwards maliciously, and the verification results show that this abnormal behavior cannot be detected. To solve this problem, an anomaly detection method for edge switches is proposed. In this method, the host records statistical information and, by constructing a special packet, triggers a packet_in message to deliver the information to the controller. The controller collects the statistics and detects abnormal forwarding behavior of the edge switch by checking the consistency of the statistics reported by the edge switch and the host. Finally, experiments based on the Ryu controller are carried out on the Mininet platform, and experimental results show that the proposed method successfully detects the abnormal behavior.
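The controller-side consistency check can be sketched as a per-flow comparison of the packet counts reported by end hosts against the counters collected from the edge switch. The function and flow-key format below are hypothetical, intended only to show the comparison logic the abstract describes.

```python
def detect_abnormal_forwarding(host_stats, switch_stats, tolerance=0):
    """Flag flows whose packet count reported by the end host disagrees
    with the counter collected from the edge switch; a mismatch beyond
    the tolerance suggests dropped, forged, or misrouted packets."""
    anomalies = []
    for flow in sorted(set(host_stats) | set(switch_stats)):
        if abs(host_stats.get(flow, 0) - switch_stats.get(flow, 0)) > tolerance:
            anomalies.append(flow)
    return anomalies

host = {"h1->h2": 120, "h1->h3": 80}      # statistics recorded by hosts
switch = {"h1->h2": 120, "h1->h3": 55}    # switch silently dropped packets
assert detect_abnormal_forwarding(host, switch) == ["h1->h3"]
```

A nonzero tolerance would absorb benign discrepancies such as packets in flight when the counters are sampled.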
      Feature Extraction Method for Public Component Libraries Based on Cross-fingerprint Analysis
      GUO Wei, WU Zehui, WU Qianqiong, LI Xixing
      Computer Science. 2023, 50 (1): 373-379.  doi:10.11896/jsjkx.211100121
      Abstract ( 30 )   PDF(2088KB) ( 103 )   
      References | Related Articles | Metrics
The widespread use of public software component libraries speeds up software development while expanding the attack surface of software. Vulnerabilities in public component libraries spread to all software that uses the affected library files, and compatibility, stability, and development delays make such vulnerabilities difficult to fix, with long patching cycles. Software component analysis is an important tool for this problem, but limited by ineffective feature selection and the difficulty of extracting accurate features from public component libraries, its accuracy is not high and generally stays at the level of category identification. This paper proposes a feature extraction method for public component libraries based on cross-fingerprint analysis. We build a fingerprint library from 25 000 open source projects on the GitHub platform and extract cross-fingerprints of component libraries through source string role classification, exported function fingerprint analysis, binary compilation fingerprint analysis, and related techniques, achieving accurate localization of public component libraries. We implement a prototype tool, LVRecognizer, and test and evaluate it on 516 real software packages, obtaining an accuracy of 94.74%.
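One of the fingerprint dimensions above, string-based matching, can be sketched as set overlap between strings extracted from a target binary and per-component fingerprint sets. The fingerprint entries, component names, and threshold below are invented for illustration and are far simpler than the paper's cross-fingerprint scheme.

```python
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical fingerprint library: component name -> characteristic strings.
fingerprint_db = {
    "zlib":   {"deflate 1.2.11", "invalid distance code", "incorrect header check"},
    "libpng": {"libpng version", "IHDR: CRC error", "gamma value"},
}

def identify_components(binary_strings, threshold=0.5):
    """Report fingerprint-library components whose string fingerprints
    sufficiently overlap the strings extracted from the target binary."""
    scores = {name: jaccard(binary_strings, fp)
              for name, fp in fingerprint_db.items()}
    return {name: s for name, s in scores.items() if s >= threshold}

extracted = {"deflate 1.2.11", "invalid distance code",
             "incorrect header check", "main.c"}
assert identify_components(extracted) == {"zlib": 0.75}
```

Combining several such fingerprint dimensions (strings, exported functions, compilation artifacts) is what lets the cross-fingerprint approach pin down the exact component rather than just its category.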
