Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
Current Issue
Volume 50 Issue 2, 15 February 2023
Edge Intelligent Collaboration Technology and Frontier Applications
Optimization and Deployment of Memory-Intensive Operations in Deep Learning Model on Edge
Peng XU, Jianxin ZHAO, Chi Harold LIU
Computer Science. 2023, 50 (2): 3-12.  doi:10.11896/jsjkx.20221100135
As a large amount of data is increasingly generated from edge devices, such as smart homes, mobile phones, and wearable devices, it becomes crucial for many applications to deploy machine learning models across edge devices. The execution speed of the deployed model is a key element in ensuring service quality. Considering highly heterogeneous edge deployment scenarios, deep learning compilation is a novel approach that aims to solve this problem. It defines models using certain DSLs and generates efficient code implementations on different hardware devices. However, two aspects have not yet been thoroughly investigated: the first is the optimization of memory-intensive operations, and the second is the heterogeneity of the deployment targets. To that end, this work proposes a system solution that optimizes memory-intensive operations, optimizes the subgraph distribution, and enables the compilation and deployment of DNN models on multiple targets. Evaluation results demonstrate the performance of the proposed system.
Distributed Weighted Data Aggregation Algorithm in End-to-Edge Communication Networks Based on Multi-armed Bandit
Yifei ZOU, Senmao QI, Cong'an XU, Dongxiao YU
Computer Science. 2023, 50 (2): 13-22.  doi:10.11896/jsjkx.221100134
As a combination of edge computing and artificial intelligence, edge intelligence has become a promising technique that provides its users with a series of fast, precise, and customized services. In edge intelligence, when learning agents are deployed on the edge side, data aggregation from the end side to the designated edge devices is an important research topic. Considering the varying importance of end devices, this paper studies the weighted data aggregation problem in a single-hop end-to-edge communication network. Firstly, to ensure that all end devices with various weights are treated fairly in data aggregation, a distributed end-to-edge cooperative scheme is proposed. Then, to handle the massive contention on the wireless channel caused by end devices, a multi-armed bandit (MAB) algorithm is designed to help the end devices find their most appropriate update rates. Different from traditional data aggregation works, incorporating the MAB gives our algorithm higher efficiency in data aggregation. A theoretical analysis shows that the efficiency of our algorithm is asymptotically optimal. Comparative experiments with previous works are also conducted to show the strength of our algorithm.
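As an illustration of the bandit component described above, the following is a minimal epsilon-greedy sketch in which each arm is a candidate update rate. The arm set and reward model are illustrative assumptions, not taken from the paper, which designs its own MAB algorithm:

```python
import random

def epsilon_greedy_bandit(arms, reward_fn, rounds=1000, eps=0.1, seed=0):
    """Minimal epsilon-greedy bandit: each arm is a candidate update rate."""
    rng = random.Random(seed)
    counts = [0] * len(arms)
    values = [0.0] * len(arms)  # running mean reward per arm
    for _ in range(rounds):
        if rng.random() < eps:
            i = rng.randrange(len(arms))  # explore a random arm
        else:
            i = max(range(len(arms)), key=lambda k: values[k])  # exploit
        r = reward_fn(arms[i], rng)
        counts[i] += 1
        values[i] += (r - values[i]) / counts[i]  # incremental mean update
    return arms[max(range(len(arms)), key=lambda k: values[k])]

# Toy reward: rate 0.5 gives the best throughput/contention trade-off.
best = epsilon_greedy_bandit(
    [0.1, 0.3, 0.5, 0.7, 0.9],
    lambda rate, rng: 1.0 - abs(rate - 0.5) + rng.gauss(0, 0.05),
)
```

Under this toy reward, the device converges on the update rate closest to the contention sweet spot.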
Hierarchical Memory Pool Based Edge Semi-supervised Continual Learning Method
WANG Xiangwei, HAN Rui, Chi Harold LIU
Computer Science. 2023, 50 (2): 23-31.  doi:10.11896/jsjkx.221100133
Continuous changes in the external environment lead to performance regression of neural networks based on traditional deep learning methods. Therefore, the continual learning (CL) area has gradually attracted the attention of more researchers. For edge intelligence, a CL model not only needs to overcome catastrophic forgetting, but also has to face the huge challenge of severely limited resources. This challenge is mainly reflected in the lack of labeled resources and powerful devices. However, existing classic CL methods usually rely on a large number of labeled samples to maintain plasticity and stability, and the lack of labeled resources leads to a significant accuracy drop. Meanwhile, in order to deal with the problem of insufficient annotation resources, semi-supervised learning methods often have to pay a large computational and memory overhead for higher accuracy. In response to these problems, a low-cost semi-supervised CL method named edge hierarchical memory learner (EdgeHML) is proposed. EdgeHML can effectively utilize a large number of unlabeled samples and a small number of labeled samples. It is based on a hierarchical memory pool and leverages a multi-level storage structure to store and replay samples. EdgeHML implements the interaction between different levels through a combination of online and offline strategies. In addition, in order to further reduce the computational overhead of unlabeled samples, EdgeHML leverages a progressive learning method: it reduces the computation cycles of unlabeled samples by controlling the learning process. Experimental results show that, on three semi-supervised CL tasks, EdgeHML improves model accuracy by up to 16.35% compared with classic CL methods, and training iteration time is reduced by more than 50% compared with semi-supervised methods. EdgeHML achieves a semi-supervised CL process with high performance and low overhead for edge intelligence.
Resource Allocation Strategy Based on Game Theory in Mobile Edge Computing
CHEN Yipeng, YANG Zhe, GU Fei, ZHAO Lei
Computer Science. 2023, 50 (2): 32-41.  doi:10.11896/jsjkx.220300198
Existing research on resource allocation strategies in mobile edge computing mostly focuses on delay and energy consumption, while relatively little consideration is given to the benefits of edge servers. When considering the benefits of edge servers, many studies ignore the optimization of delay. Therefore, a two-way update strategy based on game theory (TUSGT) is proposed. TUSGT transforms the task competition between servers into a non-cooperative game problem on the edge server side, and adopts a joint optimization strategy based on potential games, which allows every edge server to determine its task selection preference with the goal of maximizing its own profit. On the mobile device side, the EWA algorithm from online learning is used to update the parameters, which affects the task selection preference of the edge servers from a global perspective and improves the overall deadline hit rate. Simulation results show that, compared with BGTA, MILP, the greedy strategy, the random strategy, and the ideal strategy, TUSGT increases the deadline hit rate by up to 30% and the average profit of edge servers by up to 65%.
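The EWA (exponentially weighted average) update from online learning mentioned above can be sketched as follows; the loss values and learning rate here are illustrative assumptions, not the paper's parameters:

```python
import math

def ewa_update(weights, losses, eta=0.5):
    """One step of the exponentially weighted average (EWA) forecaster:
    down-weight each action proportionally to exp(-eta * loss)."""
    new = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    total = sum(new)
    return [w / total for w in new]  # renormalize to a probability vector

w = [1 / 3] * 3
for _ in range(10):
    # Toy losses: action 0 is consistently the best choice.
    w = ewa_update(w, [0.1, 0.5, 0.9])
```

After a few rounds the probability mass concentrates on the action with the smallest cumulative loss, which is how the mobile-device side can steer the servers' preferences over time.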
Coalition Game-assisted Joint Resource Optimization for Digital Twin-assisted Edge Intelligence
LI Xiaohuan, CHEN Bitao, KANG Jiawen, YE Jin
Computer Science. 2023, 50 (2): 42-49.  doi:10.11896/jsjkx.221100123
In order to cope with the performance loss caused by the temporal-spatial resource dispersion of edge service providers (ESPs) in edge intelligence-driven industrial Internet of Things systems, this paper proposes a coalition game-based joint resource allocation scheme assisted by digital twin. Firstly, we design a transferable-utility coalition game model consisting of a primary problem of utility maximization for edge devices and a sub-problem of utility maximization for ESPs, under ESP resource constraints including bandwidth, computation, and cache capacities. Then, the original multi-objective problem is transformed into a convex problem with linear constraints. Finally, an alternating optimization method is leveraged to solve the equivalent optimization problem. Simulation results show the effectiveness of the proposed coalition game-assisted scheme in improving system resource utilization, with larger gains as the number of ESPs grows. This proves that the proposed scheme is more adaptable to large-scale edge intelligence systems than traditional Nash equilibrium and grand coalition methods.
Online Task Allocation Strategy Based on Lyapunov Optimization in Mobile Crowdsensing
CHANG Sha, WU Yahui, DENG Su, MA Wubin, ZHOU Haohao
Computer Science. 2023, 50 (2): 50-56.  doi:10.11896/jsjkx.221100179
Based on the idea of crowdsourcing, mobile crowdsensing (MCS) recruits mobile sensing devices to sense the surrounding environment, which makes environment sensing and information collection more flexible, convenient, and efficient. Whether the task allocation strategy is reasonable directly affects the success of a sensing task; therefore, formulating a reasonable task allocation strategy is a hotspot and focus of MCS research. At present, most task allocation methods in MCS systems are offline and target single-type tasks. In practice, however, online multi-type task allocation is more common. Therefore, this paper studies task allocation in MCS for multiple types of tasks and proposes an online task allocation strategy oriented to system benefit, combined with the characteristics of MCS technology in the military field. A long-term, dynamic online task allocation system model is established, and the problem is solved based on Lyapunov optimization theory with the system benefit as the optimization goal, so that online dynamic control of the task admission strategy and the task allocation scheme is realized. Experiments show that the proposed online task allocation algorithm is effective and feasible: it can reasonably allocate tasks arriving at the MCS system online, ensure the stability of the task queue, and increase the system utility by adjusting the parameter value.
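The Lyapunov-style admission control described above can be illustrated with a toy drift-plus-penalty rule. The queue dynamics follow the standard form Q(t+1) = max(Q(t) - b(t), 0) + a(t); the utility values, work amounts, and the trade-off parameter V are illustrative assumptions, not the paper's model:

```python
def admit(queue, utility, V=2.0):
    """Drift-plus-penalty admission rule (illustrative): admit an arriving
    task iff the weighted utility V*utility outweighs the current backlog."""
    return V * utility >= queue

def queue_step(queue, arrived, served):
    """Standard queue evolution Q(t+1) = max(Q(t) - b(t), 0) + a(t)."""
    return max(queue - served, 0.0) + arrived

Q = 0.0
decisions = []
for t in range(5):
    d = admit(Q, utility=1.0)                         # toy per-task utility
    decisions.append(d)
    Q = queue_step(Q, 3.0 if d else 0.0, served=2.0)  # toy work amounts
```

Raising V admits more tasks (higher utility) at the cost of a longer queue; lowering it keeps the backlog small, which is the stability/utility trade-off the abstract refers to.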
UAV Frequency-based Crowdsensing Using Grouping Multi-agent Deep Reinforcement Learning
Cui ZHANG, En WANG, Funing YANG, Yongjian YANG, Nan JIANG
Computer Science. 2023, 50 (2): 57-68.  doi:10.11896/jsjkx.221100114
Mobile crowdsensing (MCS) is a promising sensing paradigm that recruits users to cooperatively perform sensing tasks. Recently, unmanned aerial vehicles (UAVs), as powerful sensing devices, have been used to replace user participation and carry out special tasks such as epidemic monitoring and earthquake rescue. In this paper, we focus on scheduling UAVs to sense task point-of-interests (PoIs) with different frequency coverage requirements. To accomplish the sensing task, the scheduling strategy needs to consider coverage requirements, geographic fairness, and energy charging simultaneously. We consider the complex interaction among UAVs and propose a grouping multi-agent deep reinforcement learning approach (G-MADDPG) to schedule UAVs distributively. G-MADDPG groups all UAVs into teams by a distance-based clustering algorithm (DCA) and then regards each team as an agent. In this way, G-MADDPG solves the problem that the training time of traditional MADDPG is too long to converge when the number of UAVs is large, and the trade-off between training time and result accuracy can be controlled flexibly by adjusting the number of teams. Extensive simulation results show that our scheduling strategy performs better than three baselines and is flexible in balancing training time and result accuracy.
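The distance-based grouping step can be illustrated with a simple Lloyd-style clustering sketch; this is an illustrative stand-in for the paper's DCA, and the UAV positions and team count are made up:

```python
def group_by_distance(positions, num_teams, iters=10):
    """Group 2-D UAV positions into num_teams teams by iteratively
    assigning each UAV to its nearest team center (k-means style)."""
    centers = list(positions[:num_teams])  # naive init: first k positions
    for _ in range(iters):
        teams = [[] for _ in range(num_teams)]
        for p in positions:
            j = min(range(num_teams),
                    key=lambda k: (p[0] - centers[k][0]) ** 2
                                + (p[1] - centers[k][1]) ** 2)
            teams[j].append(p)
        # Recompute each center as the mean of its team (keep old if empty).
        centers = [
            (sum(p[0] for p in t) / len(t), sum(p[1] for p in t) / len(t))
            if t else c
            for t, c in zip(teams, centers)
        ]
    return teams

uavs = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
teams = group_by_distance(uavs, 2)
```

Each resulting team would then be treated as a single agent, shrinking the joint action space that MADDPG has to train over.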
Deployment Optimization and Computing Offloading of Space-Air-Ground Integrated Mobile Edge Computing System
ZHENG Hongqiang, ZHANG Jianshan, CHEN Xing
Computer Science. 2023, 50 (2): 69-79.  doi:10.11896/jsjkx.220600057
As a new architecture, space-air-ground integrated communication technology can effectively improve the network service quality of ground terminals and has attracted widespread attention in recent years. This paper studies a space-air-ground integrated mobile edge computing system in which multiple UAVs provide low-latency edge computing services for ground devices, and low earth orbit satellites provide ubiquitous cloud computing services. Since the deployment positions of the UAVs and the scheduling scheme of computing tasks are the key factors affecting system performance, the deployment positions of the UAVs, the link relationships between ground terminals and UAVs, and the offloading ratio of computing tasks need to be jointly optimized to minimize the average task response delay of the system. Since the formally defined joint optimization problem is a mixed nonlinear programming problem, this paper designs a two-layer optimization algorithm. In the upper layer, a particle swarm optimization algorithm incorporating genetic algorithm operators is proposed to optimize the deployment positions of the UAVs; in the lower layer, a greedy algorithm is used to optimize the computing task offloading scheme. Extensive simulation experiments verify the feasibility and effectiveness of the proposed method. The results show that it achieves lower average task response time than other baseline methods.
Task Offloading Method Based on Cloud-Edge-End Cooperation in Deep Space Environment
SHANG Yuye, YUAN Jiabin
Computer Science. 2023, 50 (2): 80-88.  doi:10.11896/jsjkx.220800156
Deep space exploration is a significant area of modern space missions, and future large-scale deep space exploration will be greatly shaped by autonomous deep space exploration technologies. Autonomous deep space exploration faces severe challenges because of the complicated and uncharted deep space environment, lengthy deep space communication times, and constrained on-board computing capacity. To address this issue, a cloud-edge-end cooperation architecture for deep space exploration tasks using digital twins is developed, which can offer more efficient resource services for autonomous deep space exploration. Firstly, the complex deep space exploration task is decomposed into multiple sub-modules with dependencies. Secondly, the orbiter coverage time model, the collaborative computing model, and the task dependency model are established in the virtual space layer. Finally, based on the aforementioned models, the corresponding target optimization problem is proposed. The optimization objective is to minimize the energy consumption and time of the landing rover for completing the deep space exploration mission under the constraints of module dependence, the effective communication service time of the orbiter, and the transmit power control of the landing rover. To solve this optimization problem, an adaptive genetic algorithm is proposed, so that the optimal execution strategy for the landing rover in the physical space layer can be determined. Simulation results show that the proposed adaptive genetic algorithm can effectively reduce mission completion time and energy consumption. Additionally, the proposed cloud-edge-end cooperation computing model is contrasted with three other computing models, and the results reveal that, when used to achieve the same objective, the proposed cloud-edge-end cooperation framework achieves greater resource utilization.
Database & Big Data & Data Science
Overview of Research on Bayesian Inference and Parallel Tempering
ZHAN Jin, WANG Xuefei, CHENG Yurong, YUAN Ye
Computer Science. 2023, 50 (2): 89-105.  doi:10.11896/jsjkx.220100001
Bayesian inference is one of the main problems in statistics. It aims to update prior knowledge of a probability distribution model based on observed data. For posterior probabilities that cannot be observed or are difficult to compute directly, which is often the case in practice, Bayesian inference can obtain a good approximation; it is an important class of methods based on Bayes' theorem. Many machine learning problems involve simulating and approximating target distributions over various types of feature data, such as classification models, topic modeling, and data mining. Therefore, Bayesian inference has shown important and unique research value in the field of machine learning. With the beginning of the big data era, the experimental data collected by researchers from real-world sources is very large, resulting in complex target distributions to be simulated and computed. How to perform accurate and time-efficient approximate inference on target distributions under complex data has become a major difficulty in Bayesian inference today. Aiming at the inference problem under such complex distribution models, this paper systematically introduces and summarizes the two main families of methods for solving Bayesian inference problems in recent years: variational inference and sampling methods. Firstly, this paper gives the problem definition and theoretical background of variational inference, introduces in detail the variational inference algorithm based on coordinate ascent, and discusses existing applications and future prospects of this method. Next, it reviews the research results of existing sampling methods at home and abroad, gives the specific algorithm procedures of the main sampling methods, and summarizes and compares their characteristics, advantages, and disadvantages. Finally, this paper introduces the parallel tempering technique, outlines its basic theory and methods, discusses the combination and application of parallel tempering with sampling methods, and explores new research directions for the future development of Bayesian inference.
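The parallel tempering technique surveyed above can be sketched as follows: several Metropolis chains run at different temperatures, and adjacent chains occasionally swap states so that hot chains help the cold chain cross between modes. The bimodal target and the temperature ladder below are illustrative choices:

```python
import math
import random

def parallel_tempering(log_prob, temps, steps=2000, seed=1):
    """Minimal parallel tempering (replica exchange) over 1-D chains.
    Chain i targets exp(log_prob(x) / temps[i]); adjacent chains swap."""
    rng = random.Random(seed)
    xs = [0.0] * len(temps)
    samples = []
    for _ in range(steps):
        # Metropolis move within each chain at its own temperature.
        for i, T in enumerate(temps):
            prop = xs[i] + rng.gauss(0, 1.0)
            if math.log(rng.random()) < (log_prob(prop) - log_prob(xs[i])) / T:
                xs[i] = prop
        # Propose a swap between a random adjacent pair of chains.
        i = rng.randrange(len(temps) - 1)
        delta = ((1 / temps[i] - 1 / temps[i + 1])
                 * (log_prob(xs[i + 1]) - log_prob(xs[i])))
        if math.log(rng.random()) < delta:
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
        samples.append(xs[0])  # keep samples from the coldest chain
    return samples

# Bimodal target with modes near -3 and +3; a single cold chain
# would mix between the modes very slowly.
logp = lambda x: math.log(math.exp(-(x - 3) ** 2) + math.exp(-(x + 3) ** 2))
draws = parallel_tempering(logp, temps=[1.0, 2.0, 4.0, 8.0])
```

With the swap moves enabled, the cold chain's samples visit both modes, which is precisely the mixing benefit that motivates combining parallel tempering with standard sampling methods.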
Hybrid Programming Task Recommendation Model Based on Knowledge Graph and Collaborative Filtering for Online Judge
LIU Zejing, WU Nan, HUANG Fuqun, SONG You
Computer Science. 2023, 50 (2): 106-114.  doi:10.11896/jsjkx.211200105
Online judge (OJ) systems are widely used for programming education, learning, and contests. Users often get lost searching for tasks of interest in the massive task database. How to recommend suitable programming tasks to users and plan their learning paths is a significant research topic in the development of online programming evaluation systems. Existing traditional recommendation methods are limited by a trade-off between interpretability and effectiveness. This paper proposes a task recommendation model for OJ platforms: the hybrid programming task recommendation model based on knowledge graph and collaborative filtering for online judge (HKGCF). The HKGCF model helps users improve their learning by recommending questions that match their current knowledge levels and skills. The model is designed with a hybrid strategy that integrates knowledge graph representation learning with an improved collaborative filtering algorithm. The model is implemented and integrated into the OJ platform of Beihang University, and meets the specific interaction formats of that platform. We conducted two experiments, an online and an offline test, to validate the proposed model and its implementation. The results show that the proposed model outperforms representative conventional recommendation algorithms in terms of interpretability and accuracy.
Neural Collaborative Filtering for Social Recommendation Algorithm Based on Graph Attention
ZHANG Qi, YU Shuangyuan, YIN Hongfeng, XU Baomin
Computer Science. 2023, 50 (2): 115-122.  doi:10.11896/jsjkx.211200019
The development of Internet technology has made the problem of information overload increasingly serious. In order to solve the data sparsity and cold start problems of traditional recommendation technology, social recommendation has gradually become a research hotspot in recent years. Graph neural networks (GNNs) can naturally integrate node information and topology, offering great potential for improving social recommendation. However, there are still many challenges for social recommendation based on graph neural networks: for example, how to learn accurate latent factor representations of users and items from user-item interaction graphs and social network graphs, and the fact that simply mapping the inherent properties of users and items to obtain embeddings fails to learn the key collaborative signals of user-item interactions. In order to learn more accurate latent factor representations, capture key collaborative signals, and improve the performance of recommender systems, a graph attention-based neural collaborative filtering social recommendation model (AGNN-SR) is proposed. The model is based on user-item interaction graphs and social network graphs, and learns latent factors of users and items from multiple perspectives through a multi-head attention mechanism. In addition, graph neural networks utilize higher-order connectivity to recursively propagate embedding information on the graph, explicitly encoding collaborative signals to explore the deep and complex interactions between users and items. Finally, the effectiveness of the AGNN-SR model is verified on three real datasets.
Visual Question Answering Model Based on Multi-modal Deep Feature Fusion
ZOU Yunzhu, DU Shengdong, TENG Fei, LI Tianrui
Computer Science. 2023, 50 (2): 123-129.  doi:10.11896/jsjkx.211200303
In the era of big data, with the explosive growth of multi-source heterogeneous data, multi-modal data fusion has attracted much attention from researchers, and visual question answering (VQA) has become a hot topic in multi-modal data fusion due to its joint processing of image and text. The VQA task is mainly based on deep feature fusion, association, and representation of image and text multi-modal data, with inference learning over the fused features to reach a conclusion. Traditional visual question answering models tend to miss key information: they mostly focus on learning superficial modal feature associations between data, and pay less attention to deep semantic feature fusion. To solve these problems, this paper proposes a visual question answering model based on cross-modal deep interaction of image and text features. The proposed method uses a convolutional neural network and an LSTM network to obtain the features of the image and text modalities respectively, and builds a novel deep attention learning network based on a combination of meta-attention units to realize interactive learning of attention features within and between the image and text modalities. Finally, the learned features are represented to output the results. The model is tested and evaluated on the VQA-v2.0 dataset. Compared with the traditional baseline model, experimental results show that the performance of the proposed model is significantly improved.
Self-supervised Flight Trajectory Prediction Based on Data Augmentation
WANG Pengyu, TAI Wenxin, LIU Fang, ZHONG Ting, LUO Xucheng, ZHOU Fan
Computer Science. 2023, 50 (2): 130-137.  doi:10.11896/jsjkx.211200016
Accurate flight trajectory prediction can help air traffic management systems issue warnings for potential hazards and effectively provide guidance for safe travel. However, the atmospheric situation in which planes fly is complicated and changeable. The flight track is affected by external factors such as atmospheric disturbance and clouds, making prediction difficult. In addition, due to the harsh ground environment in some flight areas, it is impossible to deploy enough signal base stations, while flight signals in other areas are collected and combined by multiple signal base stations, resulting in sparse and noisy aircraft track data, which further increases the difficulty of flight trajectory prediction. This paper proposes a data augmentation-based self-supervised flight trajectory learning method. The method uses a regularization-based data augmentation scheme to extend sparse track data and handle the abnormal values in the dataset. It provides a self-supervised learning paradigm that maximizes mutual information to mine the mobility patterns contained in flight trajectories. The method employs a multi-head self-attention model with a distillation mechanism as the fundamental model to solve the long-term dependence problem of recurrent neural networks. In addition, the approach uses the distillation mechanism to reduce model complexity and a generative decoding method to accelerate training and prediction. Evaluation results on a flight trajectory dataset show that our method significantly improves trajectory prediction compared with the state-of-the-art method: it reduces the root mean square error of the predictions in latitude, longitude, and altitude by 20.8%, 26.4%, and 25.6%, respectively.
Hierarchical Multiple Kernel K-Means Algorithm Based on Sparse Connectivity
WANG Lei, DU Liang, ZHOU Peng
Computer Science. 2023, 50 (2): 138-145.  doi:10.11896/jsjkx.220400230
Multiple kernel learning (MKL) aims to find an optimal consistent kernel function. In hierarchical multiple kernel clustering (HMKC) algorithms, sample features are extracted layer by layer from a high-dimensional space to maximize the retention of effective information, but the information interaction between layers is ignored. In such models, only the corresponding nodes in adjacent layers exchange information, while the other nodes remain isolated; on the other hand, if a fully connected scheme is adopted, the diversity of the final consensus matrix is reduced. Therefore, this paper proposes a hierarchical multiple kernel K-Means algorithm based on sparse connectivity (SCHMKKM), which controls the assignment matrix through a sparsity rate to achieve sparse connections, thereby locally fusing the features obtained by distilling the information between layers. Cluster analysis is performed on multiple datasets and compared with the fully connected hierarchical multiple kernel K-Means (FCHMKKM) algorithm. Experiments show that fusing more discriminative information is beneficial for learning a better consistent partition matrix, and that the sparse connection fusion strategy outperforms the fully connected strategy.
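A basic building block of the multiple kernel methods discussed above is a consensus kernel formed as a convex combination of base kernels, K = Σ_m w_m K_m. The following sketch illustrates this combination step only (the toy kernels and weights are illustrative, not the paper's algorithm):

```python
def combine_kernels(kernels, weights):
    """Consensus kernel as a convex combination K = sum_m w_m * K_m,
    with each K_m an n-by-n base kernel matrix (list of lists)."""
    n = len(kernels[0])
    K = [[0.0] * n for _ in range(n)]
    for w, Km in zip(weights, kernels):
        for i in range(n):
            for j in range(n):
                K[i][j] += w * Km[i][j]
    return K

# Two toy 2x2 base kernels weighted 0.75 / 0.25.
K = combine_kernels(
    [[[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.8], [0.8, 1.0]]],
    [0.75, 0.25],
)
```

A convex combination of positive semi-definite kernels with non-negative weights is itself a valid kernel, which is why this weighted-sum form is the standard starting point for MKL.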
Study on Time Series Shapelets Extraction Based on Optimization and Two-phase Filtering
LI Chen, WAN Yuan
Computer Science. 2023, 50 (2): 146-157.  doi:10.11896/jsjkx.211200065
Compared with time series classification methods based on global features, shapelet-based methods have advantages in interpretability, efficiency, and accuracy. To address the insufficient discrimination of shapelets obtained from existing sparse models and the large scale of the shapelet candidate set, this paper proposes a shapelet extraction method based on optimization and two-phase filtering. First, the time series are sampled, and the sampled series are grouped by combining extreme points and trends; the weight of each item in the sparse group lasso regularizer is then assigned according to the grouping results. A fused penalty regularization is used in each group of the weighted sparse group lasso to ensure that adjacent positions of the solution change smoothly. These sparse regularization terms are combined as constraints to construct the objective function together with local Fisher discriminant analysis. Then, a two-phase filtering framework is established to measure the sparsity of the groups, so as to quickly locate the key group that plays a decisive role in classification. Finally, this key group is retained to extract shapelets for time series classification, which reduces the shapelet candidate set. Extensive experiments on 28 datasets show that, compared with existing shapelet-based extraction methods, the proposed method significantly improves classification accuracy with good efficiency, and reduces the scale of the shapelet set to a certain extent.
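The quantity underlying any shapelet-based classifier is the distance between a time series and a shapelet, defined as the minimum distance over all sliding windows of the shapelet's length. A minimal sketch (the toy series and shapelets are illustrative):

```python
def shapelet_distance(series, shapelet):
    """Distance from a time series to a shapelet: the minimum squared
    Euclidean distance over all sliding windows of the shapelet's length."""
    m = len(shapelet)
    best = float("inf")
    for start in range(len(series) - m + 1):
        d = sum((series[start + k] - shapelet[k]) ** 2 for k in range(m))
        best = min(best, d)
    return best

ts = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0]
d_match = shapelet_distance(ts, [1.0, 2.0, 1.0])  # subsequence occurs in ts
d_miss = shapelet_distance(ts, [5.0, 5.0])        # no similar window
```

A discriminative shapelet is one for which this distance separates the classes well, which is what the sparse-model objective in the paper optimizes for.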
Fair Method for Spectral Clustering to Improve Intra-cluster Fairness
XU Xia, ZHANG Hui, YANG Chunming, LI Bo, ZHAO Xujian
Computer Science. 2023, 50 (2): 158-165.  doi:10.11896/jsjkx.211100279
Recently, algorithmic fairness has aroused extensive discussion in the machine learning community. Given the widespread popularity of spectral clustering in modern data science, studying the fairness of spectral clustering is a crucial topic. Existing fair spectral clustering algorithms have two shortcomings: 1) poor fairness performance; 2) they only work for a single sensitive attribute. In this paper, the fair spectral clustering problem is regarded as a constrained spectral clustering problem. By solving the feasible solution set of constrained spectral clustering, an unnormalized fair spectral clustering (UFSC) method is proposed to improve fairness performance. In addition, the paper also proposes a fair clustering algorithm (MFSC) suitable for multiple sensitive attribute constraints. Experimental results on multiple real-world datasets demonstrate that UFSC and MFSC are fairer than existing fair spectral clustering algorithms.
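A core ingredient of unnormalized spectral clustering is the unnormalized graph Laplacian L = D - W, whose bottom eigenvectors give the spectral embedding that is then clustered. A minimal sketch of its construction (the toy affinity matrix is illustrative, and the fairness constraints of the paper are not modeled here):

```python
def unnormalized_laplacian(W):
    """Unnormalized graph Laplacian L = D - W, where D is the diagonal
    degree matrix D[i][i] = sum_j W[i][j]."""
    n = len(W)
    return [[(sum(W[i]) if i == j else 0.0) - W[i][j] for j in range(n)]
            for i in range(n)]

# Toy symmetric affinity matrix for a 3-node path graph.
W = [[0.0, 1.0, 0.0],
     [1.0, 0.0, 1.0],
     [0.0, 1.0, 0.0]]
L = unnormalized_laplacian(W)
```

Every row of L sums to zero (L has the all-ones vector in its null space), a sanity check that also explains why the number of zero eigenvalues counts connected components.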
Searching Super-reduct: Improvement on Efficiency and Effectiveness
WANG Xiaoxiao, BA Jing, CHEN Jianjun, SONG Jingjing, YANG Xibei
Computer Science. 2023, 50 (2): 166-172.  doi:10.11896/jsjkx.211200292
Following the derivation of multiple reducts, an ensemble-based classification framework can be constructed, which has been demonstrated to be useful in improving the performance of subsequent learning tasks. The super-reduct approach is proposed with exactly this idea. Generally, multiple super-reducts are obtained by randomly adding extra attributes to a fundamental reduct; therefore, how to search for the fundamental reduct is the key to performing super-reduct. In view of this, considering both efficiency and effectiveness, not only an attribute group but also an ensemble selector is introduced into the super-reduct mechanism: the attribute group device is used to speed up the search for the fundamental reduct, and the ensemble selector device is used to find more robust attributes during the search. Comprehensive experiments on 20 UCI data sets show that, compared with 4 popular strategies, our approach not only significantly reduces the computational cost but also provides superior stability and accuracy for classification tasks.
Topological Properties of Generalized Rough Approximation Operators Based on Objects
LI Yanyan, QIN Keyun
Computer Science. 2023, 50 (2): 173-177.  doi:10.11896/jsjkx.211100054
Rough set theory is a mathematical tool for dealing with uncertain problems, and its core notion is the approximation operator. Pawlak approximation operators based on equivalence relations can be extended to generalized rough approximation operators based on arbitrary binary relations. The topological structures of approximation operators are an important topic in rough set theory. This paper is devoted to the study of the topological properties of object-based generalized rough approximation operators. It is proved that the sufficient condition for all definable subsets in a generalized approximation space to form a topology is also a necessary condition. The regularity and normality of this topology are studied. Equivalent conditions for a serial binary relation and its transitive closure to generate the same topology are given. The relationship between this topology and the topology induced by the object-based generalized rough approximation operator under an arbitrary binary relation is discussed.
Computer Graphics & Multimedia
Survey of Rigid Object Pose Estimation Algorithms Based on Deep Learning
GUO Nan, LI Jingyuan, REN Xi
Computer Science. 2023, 50 (2): 178-189.  doi:10.11896/jsjkx.211200164
Rigid object pose estimation aims to obtain the 3D translation and 3D rotation of a rigid object in the camera coordinate system, which plays an important role in rapidly developing fields such as autonomous driving, robotics and augmented reality. Representative papers on deep-learning-based rigid object pose estimation from 2017 to 2021 are summarized and analyzed. The methods are divided into coordinate-based, keypoint-based and template-based methods, and the task is divided into four sub-tasks: image preprocessing, spatial mapping or feature matching, pose recovery, and pose optimization. How each method realizes these sub-tasks, together with its advantages and problems, is introduced in detail. The challenges of rigid object pose estimation are analyzed, and existing solutions with their advantages and disadvantages are summarized. Building on rigid object pose estimation, pose estimation for articulated and deformable objects is also analyzed. The common datasets and performance evaluation indexes of rigid object pose estimation are introduced, and the performance of existing methods on common datasets is compared and analyzed. Finally, future research directions of pose tracking and category-level rigid object pose estimation are discussed.
Research Progress of Infrared and Visible Image Fusion Algorithms
Computer Science. 2023, 50 (2): 190-200.  doi:10.11896/jsjkx.220100074
Infrared images make it easy to identify thermal targets, while visible images carry rich texture information. The fusion of infrared and visible images exploits the advantages of both optical bands to show targets and background clearly. It has been widely used in fields such as military reconnaissance, security monitoring and remote sensing measurement, and has become a key research direction in image fusion. In recent years, infrared and visible image fusion algorithms have attracted the attention of researchers around the world and have been studied extensively. In this paper, image fusion algorithms are introduced first, including traditional image processing methods based on multi-scale transformation and sparse representation, and deep learning algorithms based on CNN, GAN and AE. Then the evaluation methods for fused images are summarized, and a variety of common objective evaluation indexes are classified. After that, comparative experiments are carried out to subjectively evaluate and quantitatively analyze the advantages and disadvantages of these algorithms. Finally, the development trend of infrared and visible image fusion methods is discussed.
Scene Text Detection with Improved Region Proposal Network
LI Junlin, OUYANG Zhi, DU Nisuo
Computer Science. 2023, 50 (2): 201-208.  doi:10.11896/jsjkx.211000191
Scene text images have very complex and changeable features. Using a region proposal network (RPN) to extract rectangular text candidate boxes is an indispensable step that can greatly improve the accuracy of text detection. However, recent studies show that regressing the center point, width and height of rectangular text candidate boxes by minimizing the smooth-L1 loss function easily causes problems such as missing boundary information and inaccurate regression. Therefore, this paper proposes a scene text detection model based on an improved region proposal network. First, a backbone composed of a residual network and a feature pyramid network generates a shared feature map. Then, an improved regression method and a vertex-based loss function (Vertex-IoU) are used to generate a series of rectangular text candidate boxes on the shared feature map. Finally, ROI Align converts these candidate boxes into fixed-size feature maps for bounding box regression in the fully connected layer. Comparative experiments on the ICDAR2015 dataset show that detection performance improves over other models, which proves the effectiveness of our model.
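The abstract does not spell out the Vertex-IoU loss itself, but its motivation, that smooth-L1 regresses the four box coordinates independently while an IoU-style loss couples them, can be illustrated with a minimal sketch of a plain IoU loss over axis-aligned boxes (the function name and box format below are our own, not the paper's):

```python
def iou_loss(pred, gt):
    """1 - IoU for two axis-aligned boxes given as (x1, y1, x2, y2).
    All four coordinates interact through the intersection and union,
    unlike smooth-L1, which penalizes each coordinate separately."""
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(pred) + area(gt) - inter
    return 1.0 - inter / union
```

A perfectly overlapping prediction gives loss 0, and the loss grows toward 1 as the overlap shrinks.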
Few-shot Object Detection Based on Feature Fusion
HUA Jie, LIU Xueliang, ZHAO Ye
Computer Science. 2023, 50 (2): 209-213.  doi:10.11896/jsjkx.220500153
Few-shot object detection aims to train an object detection model from a small number of samples. At present, most existing few-shot object detection methods are based on classical object detection algorithms. In two-stage detection methods, the small number of novel-class samples leads to many irrelevant bounding boxes and hence low accuracy of candidate regions. To solve this problem, this paper proposes FF-FSOD, a few-shot object detection algorithm based on feature fusion. It uses feature fusion to augment the data, supplementing the novel-class samples and increasing sample coverage, and introduces an FPN to extract multi-scale features. The RPN is then improved by introducing a support-set image branch: the depth-wise correlation between support-set and query-set image features is computed to obtain an attention feature map, yielding more accurate candidate boxes. The effectiveness of the proposed model is verified on the MS COCO and FSOD datasets. Experimental results show that the proposed method obtains more accurate candidate boxes and improves detection accuracy.
Self-supervised 3D Face Reconstruction Based on Detailed Face Mask
ZHU Lei, WANG Shanmin, LIU Qingshan
Computer Science. 2023, 50 (2): 214-220.  doi:10.11896/jsjkx.220600035
Self-supervised 3D face reconstruction can alleviate the lack of 3D face data and has therefore become a hot research topic in recent years. Existing self-supervised methods usually focus on global supervision signals and pay insufficient attention to local facial details. To better recover fine-grained 3D faces with vivid details, this paper proposes a fine-grained 3D face reconstruction method based on face part masks, which reconstructs fine-grained 3D faces without any 3D face annotation. The main idea is to improve the local accuracy of the reconstructed 3D face by imposing refinement constraints on face regions through the face part mask, together with self-supervised constraints on the mask itself, on top of basic loss functions such as the 2D image consistency loss and the deep perceptual loss. Qualitative and quantitative experiments on the AFLW2000-3D and MICC Florence datasets demonstrate the effectiveness and superiority of the proposed method.
Lightweight Face Generation Method Based on TransEditor and Its Application Specification
LIANG Weiliang, LI Yue, WANG Pengfei
Computer Science. 2023, 50 (2): 221-230.  doi:10.11896/jsjkx.220800166
Face generation combines the style of a face with the pose of the head to synthesize fake face images, and is often used for vision tasks such as gender conversion and pose modification. GAN-based face generation methods can greatly improve the quality and editability of generated faces. However, these methods have complex network structures and large computing resource requirements, and are difficult to apply directly in practical scenarios. To achieve efficient face generation, this paper proposes a lightweight face generation method based on TransEditor and discusses the corresponding application specifications. At the technical level, firstly, based on the TransEditor face editing network, a lightweight face generation network is designed with reference to the generator structure of lightweight models such as StyleGAN2. Secondly, the loss function of the network is analyzed in terms of generation loss, adversarial loss, reconstruction loss, etc., and the PReLU activation function is used instead of Softplus to improve the generator's output quality. Finally, extensive experiments show that the LPIPS of the proposed lightweight method decreases by only 0.0042, while the training time and parameter count of the model are greatly reduced and the operational efficiency of the face generation model is improved. At the level of application specifications, existing regulatory measures need to be improved and the use of the proposed face generation method standardized, so that technological progress can better serve social development.
Crack Detection of Concrete Pavement Based on Attention Mechanism and Lightweight Dilated Convolution
QU Zhong, WANG Caiyun
Computer Science. 2023, 50 (2): 231-236.  doi:10.11896/jsjkx.211200290
Cracks in concrete pavement affect the safety, applicability and durability of the structure, and crack detection is a challenging research hotspot. This paper proposes a crack detection model composed of an improved fully convolutional network and a deeply supervised network, using an improved VGG-16 as the backbone. Firstly, aggregated low-level convolutional features are fused back into the backbone network through a spatial attention mechanism. Secondly, middle- and high-level convolutional features are fused through a lightweight dilated convolution fusion module for multi-feature fusion, obtaining clear edges and high-resolution feature maps; all side feature maps are added to produce the final prediction map. Finally, the deeply supervised network provides direct supervision for the detection results of each stage. The focal loss function is selected as the training objective, and the trained model can efficiently identify crack locations in raw input images under conditions such as uneven illumination and complex backgrounds. To verify its effectiveness and robustness, the proposed method is compared with six methods on three datasets, DeepCrack, CFD and Crack500, and the results show excellent performance, with an F-score of 87.12%.
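Assuming the loss referred to here is the standard binary focal loss, its effect, down-weighting easy pixels so that the rare crack pixels dominate training, can be sketched per pixel as follows (parameter values are the usual illustrative defaults, not necessarily the paper's):

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Per-pixel binary focal loss: the (1 - p_t)^gamma factor shrinks
    the loss of well-classified (easy) pixels, e.g. abundant background."""
    p_t = p if y == 1 else 1 - p          # probability of the true class
    alpha_t = alpha if y == 1 else 1 - alpha
    return -alpha_t * (1 - p_t) ** gamma * math.log(p_t)

# an easy positive (p = 0.9) contributes far less than a hard one (p = 0.1)
easy, hard = focal_loss(0.9, 1), focal_loss(0.1, 1)
```

With gamma = 0 the expression reduces to ordinary weighted cross-entropy; raising gamma suppresses easy examples more aggressively.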
Artificial Intelligence
Cross-domain DOM Pickup and Automation Scheme of RPA System Based on Browser Extension
YI Renke, CAI Yuhui, YANG Shenghong, WU Fan, LI Kenli
Computer Science. 2023, 50 (2): 237-243.  doi:10.11896/jsjkx.220600203
Robotic process automation (RPA) is one of today's research hotspots, and the pickup and automation of web page elements is one of its important functions. Using browser extensions, RPA injects scripts into web pages and uses element locating paths to reach target nodes for automated operations. When the source page contains a cross-domain frame, the same-origin policy prevents the script injected into the source page from obtaining the DOM object of the target node, so no element locating path can be generated and automation fails. To solve this problem, this paper proposes a cross-domain DOM pickup and automation scheme based on browser extensions. When processing a page containing a third-party cross-domain frame, the scheme treats that frame as a frame process of equal status to the source page frame, and designs the element locating path as the combination of the frame's URL and the element's XPath, thereby achieving cross-domain element pickup and automation. Experiments show that the scheme can effectively pick up and automate elements of cross-domain web pages, and supports Chrome, Firefox and other browsers that support browser extensions.
Fine-grained Action Allocation and Scheduling Method for Dynamic Heterogeneous Tasks in Multi-robot Environments
WANG Jiwang, SHEN Liwei
Computer Science. 2023, 50 (2): 244-253.  doi:10.11896/jsjkx.220500117
In a multi-robot environment, robots with different capabilities collaborate to complete task requirements. Realistically, these tasks are issued dynamically and can have different goals and urgency levels, so appropriate robots must be allocated and scheduled to execute the fine-grained actions into which each task is decomposed. Most existing approaches suit static and homogeneous task allocation scenarios, while dynamic heterogeneous tasks are mostly assigned with exclusive allocation strategies, which causes robots to frequently enter waiting states (i.e., a robot is idle between being assigned a task and actually starting to execute it). Since tasks vary in urgency and release time, such allocation reduces the responsiveness to more urgent tasks and leads to longer waiting and task completion times. To address this problem, this paper proposes a fine-grained action allocation and scheduling method for dynamic heterogeneous tasks in a multi-robot environment. The object of allocation and scheduling is a fine-grained action decomposed from a task, and an action can be undertaken by one capability of a robot. Faced with the set of fine-grained actions decomposed from a task, the method follows an auction process to compute the optimal scheme for assigning a specific action to a robot, based on robot capabilities, robot state and task information. In addition, by running the allocation and scheduling process at each new task release, or whenever a robot finishes executing an action, robots waiting on ordinary tasks can be rescheduled to urgent tasks, ensuring that urgent tasks complete first and reducing the overall waiting time of the robots. Based on this approach, the execution module ROSPlan is extended and implemented. Simulation experiments on a set of multi-robot dynamic heterogeneous tasks show that the proposed method obtains a better allocation scheme than methods using greedy policies.
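As a toy illustration of the auction-style assignment described above (ignoring robot state, travel time and ROSPlan integration; all names and cost numbers here are invented), urgent actions can be auctioned first and won by the cheapest capable robot:

```python
def allocate(actions, robots):
    """Greedy auction sketch: actions are auctioned in decreasing urgency;
    each action goes to the idle robot with the lowest bid (estimated cost)
    among robots that possess the required capability."""
    assignment, busy = {}, set()
    for act in sorted(actions, key=lambda a: -a["urgency"]):
        bids = [(r["cost"][act["skill"]], r["name"])
                for r in robots
                if act["skill"] in r["cost"] and r["name"] not in busy]
        if bids:
            _, winner = min(bids)
            assignment[act["name"]] = winner
            busy.add(winner)
    return assignment

robots = [{"name": "r1", "cost": {"grasp": 2.0, "move": 1.0}},
          {"name": "r2", "cost": {"grasp": 1.0}}]
actions = [{"name": "deliver", "skill": "move", "urgency": 1},
           {"name": "pick", "skill": "grasp", "urgency": 5}]
plan = allocate(actions, robots)  # urgent "pick" is auctioned first
```

The urgent "pick" action is settled first and won by r2 (bid 1.0 < 2.0), leaving r1 free for "deliver".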
Gait Recognition Based on Inertial Sensor
ZHANG Xianggang, LYU Yunlian, ZENG Jing, ZHANG Ting
Computer Science. 2023, 50 (2): 254-266.  doi:10.11896/jsjkx.220500011
As a new biometric technology, gait recognition has become a research hotspot due to its advantages of being contactless, non-intrusive, effective at long distance and difficult to disguise. In recent years, with the maturity of MEMS inertial sensor technology and its wide application in portable devices, gait recognition based on inertial sensors has attracted increasing attention from researchers. This paper collects and organizes the research methods and state of research on gait recognition using inertial sensors, both domestic and international, and reviews the relevant technologies in this field. Following the order of the recognition pipeline, it reviews the techniques and research status of each stage: data acquisition, data preprocessing, data segmentation, feature selection and combination, and intelligent recognition. The main public gait databases are also listed for the convenience of interested readers. Finally, the technical difficulties of gait-based recognition are discussed and future development directions are outlined.
Incremental Object Detection Inspired by Memory Mechanisms in Brain
SHANG Di, LYU Yanfeng, QIAO Hong
Computer Science. 2023, 50 (2): 267-274.  doi:10.11896/jsjkx.220900212
Incremental learning is key to bridging the enormous gap between artificial intelligence and human intelligence, meaning that agents can learn several tasks sequentially from a continuous stream of correlated data without forgetting, just as humans do. Object detection is one of the core tasks in computer vision and the cornerstone of image understanding, so incremental object detection has important research and practical significance. Although incremental learning has achieved good results in image classification, research on incremental object detection is still in its infancy, because object detection is more complex than image classification and must solve both classification and bounding-box regression problems. Many researchers have made great efforts on this problem, but most work focuses only on retaining previously learned knowledge, ignoring fast adaptation to new tasks, which is a critical requirement for incremental learning. Based on the memory mechanisms of the brain, humans constantly extract knowledge during learning so as to learn new tasks better and faster without forgetting. Inspired by this, an incremental meta-learning method integrating a codec-based memory replay mechanism is proposed. The method encodes, stores, decodes and replays the feature vectors of learned samples, so as to approximate the dynamic learning environment as a locally stationary one and avoid catastrophic forgetting. Besides, a double-loop online meta-learning strategy is designed to help the model extract the common structure of tasks and improve generalization on new tasks encountered during learning: in the inner loop the model is updated by SGD over multiple batches of mixed old and new data, and in the outer loop it is meta-updated. The proposed approach is evaluated on three incremental object detection settings defined on the PASCAL VOC and MS COCO datasets, where it performs favorably against state-of-the-art methods, proving that it helps the model resist forgetting and generalize better to new tasks. The proposed algorithm is gradient-based and model-agnostic, so it has strong adaptability and can be applied to more complex detection frameworks.
Event Extraction Method Based on Conversational Machine Reading Comprehension Model
LIU Luping, ZHOU Xin, CHEN Junjun, HE Xiaohai, QING Linbo, WANG Meiling
Computer Science. 2023, 50 (2): 275-284.  doi:10.11896/jsjkx.220400271
Event extraction aims to extract structured information automatically from massive unstructured texts to help people quickly grasp the latest developments of events. Traditional methods are mainly implemented via classification or sequence labeling, which rely on a large amount of labeled data to train the model. In recent years, researchers have proposed using machine reading comprehension models for event extraction; through task conversion and joint training with machine reading comprehension datasets, the issue of insufficient annotated data is effectively alleviated. However, existing methods are limited to a single round of question answering and lack dependencies between different question-answer rounds; in addition, they do not fully utilize entity knowledge in sentences. To this end, a new machine reading comprehension model for event extraction is proposed, extending existing methods in two ways. Firstly, entity tag information is explicitly added to the sentence, enabling the model to effectively learn prior knowledge of the entities in the input sentence. Secondly, a historical conversation information encoding module is designed, with an attention mechanism selecting important information from historical conversations to assist inference. Experiment results on a public dataset show that the new model outperforms existing methods based on machine reading comprehension.
Mixture-of-Experts Model for Hypernymy Discrimination
ZENG Nan, XIE Zhipeng
Computer Science. 2023, 50 (2): 285-291.  doi:10.11896/jsjkx.211200066
Hypernymy discrimination is an essential and challenging task in NLP. Traditional supervised methods usually model all hypernymy pairs in a single global semantic space, which achieves fair performance. However, the distributed semantic representation of hypernymy is rather complex, and its manifestation may differ significantly across different areas of the semantic space, making a global model difficult to learn. This paper employs the mixture-of-experts framework as a solution. It works on a divide-and-conquer strategy: the semantic space is divided into multiple subspaces, each corresponding to a local expert (model). The localized experts focus on their own domains (subspaces) to learn their specialties, while a gating mechanism determines both the space partitioning and the expert aggregation. Experimental results show that the mixture-of-experts model outperforms traditional global models on public datasets.
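The gate-plus-experts arrangement can be sketched in a few lines. This is a generic soft mixture-of-experts scorer with random, untrained weights, meant only to show the structure, not the paper's architecture:

```python
import math, random

random.seed(0)

def softmax(zs):
    m = max(zs)
    es = [math.exp(z - m) for z in zs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

class MixtureOfExperts:
    """Minimal MoE scorer: each expert is a linear model over a word-pair
    feature vector; a softmax gate mixes the experts' probabilities."""
    def __init__(self, dim, n_experts):
        self.experts = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_experts)]
        self.gates = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_experts)]

    def score(self, x):
        g = softmax([dot(w, x) for w in self.gates])               # soft space partition
        s = [1 / (1 + math.exp(-dot(w, x))) for w in self.experts]  # per-expert probability
        return sum(gi * si for gi, si in zip(g, s))                # gated aggregation

moe = MixtureOfExperts(dim=8, n_experts=4)
p = moe.score([random.gauss(0, 1) for _ in range(8)])  # hypernymy probability in (0, 1)
```

In training, the gate and the experts would be learned jointly, so the gate's soft partition of the space emerges from the data rather than being fixed in advance.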
End-to-End Event Factuality Identification with Joint Model
CAO Jinjuan, QIAN Zhong, LI Peifeng
Computer Science. 2023, 50 (2): 292-299.  doi:10.11896/jsjkx.211200108
Event factuality describes the real-world status of events mentioned in text and underlies many applications in natural language processing. At present, most research on event factuality identifies factuality from annotated events, which is inconvenient for practical applications, and ignores the impact of different event sources on factuality. Aiming at these problems, an end-to-end joint model, JESF, is proposed for event factuality identification. The model carries out three tasks simultaneously: event identification, event source identification and event factuality identification. It uses bidirectional encoder representations from Transformers (BERT) and linguistic features to strengthen the semantic representation of words, and builds a graph convolutional network (GCN) over the dependency syntax tree with an attention mechanism to effectively extract semantic and syntactic features. In particular, the model can also be applied to the setting that considers only the default source (the text author). Experimental results on FactBank, Meantime, UW and UDS-IH2 show that the proposed model outperforms the benchmark models.
Study on Abductive Analysis of Auto Insurance Fraud Based on Network Representation Learning
LI Weizhuo, LU Bingjie, YANG Junming, NA Chongning
Computer Science. 2023, 50 (2): 300-309.  doi:10.11896/jsjkx.220800169
Auto insurance fraud detection plays an important role in promoting the healthy development of auto insurance. As fraud judgments touch the core of civil rights, auto insurance experts must review each case and provide the reasons for fraud. Methods based on machine learning have strong scalability and high accuracy but lack interpretability, while rule-based methods built on expert systems are interpretable but limited by the trigger conditions of complex rules. To address the unexplainable cases that machine learning methods flag as “fraud” without triggering any expert-system fraud rule, this paper puts forward an abductive analysis method for auto insurance fraud based on network representation learning. It first defines the abductive analysis task: for cases identified as “fraud” by machine learning methods without triggering the expert system, return to auto insurance experts a ranking of the most likely fraud rules. The method then models a case-rule-factor network from the fraud cases that did trigger expert-system rules, and learns vector representations of the factors in fraud rules via network representation learning. To better measure the similarity between fraud cases and rules whose triggering factors are incomplete, a weighted splicing strategy over the factors in fraud rules is designed based on the principle of abductive reasoning, which also alleviates the problem of insufficient training data to some extent. Experimental results show that the proposed method outperforms existing methods on three metrics.
Unsupervised Script Summarization Based on Pre-trained Model
SU Qi, WANG Hongling, WANG Zhongqing
Computer Science. 2023, 50 (2): 310-316.  doi:10.11896/jsjkx.211100039
A script is a special text structure composed of dialogue between characters and descriptions of scenes. Unsupervised script summarization compresses and extracts a long script into a short text that summarizes its information. This paper proposes an unsupervised script summarization method based on a pre-trained model. By adding pre-training tasks for text sequence processing, the resulting pre-trained model fully accounts for the dialogue descriptions and the emotional characteristics of the characters in the script. The model is then used to compute similarities between sentences and, combined with the TextRank algorithm, to score and rank key sentences; the highest-scoring sentences are selected as the summary. Experimental results show that the proposed method outperforms the baseline model, with significantly improved ROUGE scores.
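The scoring step can be sketched as standard TextRank power iteration over a sentence-similarity matrix. Here a toy hand-made matrix stands in for the similarities the pre-trained model would produce:

```python
def textrank(sim, d=0.85, iters=50):
    """Power-iterate TextRank scores over a symmetric sentence-similarity
    matrix; rows are normalized into random-walk transition weights."""
    n = len(sim)
    rows = [sum(row) or 1.0 for row in sim]   # guard against isolated sentences
    scores = [1.0 / n] * n
    for _ in range(iters):
        scores = [
            (1 - d) / n + d * sum(sim[j][i] / rows[j] * scores[j] for j in range(n))
            for i in range(n)
        ]
    return scores

# toy similarity matrix: sentences 0 and 1 are strongly related, 2 is peripheral
sim = [[0.0, 0.9, 0.1],
       [0.9, 0.0, 0.2],
       [0.1, 0.2, 0.0]]
scores = textrank(sim)
```

Sentences that are similar to many other sentences accumulate score and are picked for the summary; the weakly connected sentence 2 ends up ranked last.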
Improved Elite Sparrow Search Algorithm Based on Double Sample Learning and Single-dimensional Search
JIA Kaiye, DONG Yan
Computer Science. 2023, 50 (2): 317-323.  doi:10.11896/jsjkx.211100162
An improved elite sparrow search algorithm based on double-sample learning and single-dimensional search is proposed to address uneven initial population distribution, little information exchange within the population, a tendency to fall into local optima, and slow convergence. First, a Hammersley low-discrepancy sequence is combined with opposition-based learning to generate an initial elite population, enhancing individual quality and diversity. Then, a double-sample learning strategy improves the follower position-update formula, strengthening information exchange within the population and improving the algorithm's ability to escape local optima. Finally, in the late iterations, a single-dimensional search mode enhances the algorithm's deep exploitation ability and improves its accuracy. Time complexity analysis proves that the improvements do not increase the complexity of the algorithm. Twelve test functions with different characteristics are selected for optimization, and the results show that the algorithm has obvious advantages in convergence speed, accuracy and stability compared with other algorithms.
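The first step, pairing each initial candidate with its opposite point and keeping the fitter one, can be sketched as follows. For brevity the sketch draws uniform random candidates rather than a Hammersley sequence, and uses a toy objective; both are stand-ins, not the paper's setup:

```python
import random

random.seed(1)

def opposition_init(pop_size, dim, lo, hi, fitness):
    """Elite initialization: pair each candidate x with its opposite
    point lo + hi - x and keep whichever has better (lower) fitness."""
    pop = []
    for _ in range(pop_size):
        x = [random.uniform(lo, hi) for _ in range(dim)]
        x_opp = [lo + hi - xi for xi in x]       # opposition-based candidate
        pop.append(min((x, x_opp), key=fitness))  # keep the fitter of the pair
    return pop

# toy objective: minimize squared distance to the point (3, 3, ..., 3)
fitness = lambda v: sum((vi - 3.0) ** 2 for vi in v)
elite = opposition_init(pop_size=20, dim=5, lo=-10.0, hi=10.0, fitness=fitness)
```

Since each retained individual is the better of a candidate/opposite pair, the initial population's expected fitness is no worse than plain random sampling, at the cost of one extra evaluation per individual.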
Information Security
EHFM:An Efficient Hierarchical Filtering Method for Multi-source Network Malicious Alerts
YANG Xin, LI Gengxin, LI Hui
Computer Science. 2023, 50 (2): 324-332.  doi:10.11896/jsjkx.220800049
Security situation awareness based on alarm data plays an essential role in system protection. In complex network environments, situation awareness systems monitor and predict network security in time by capturing multiple metrics representing the system situation, combined with alert data. However, network security detection and protection systems generate massive, diverse alarm logs daily; such massive threat logs and event information sharply raise complexity and even introduce misjudgment. Methods are therefore needed that filter massive alerts with fine granularity and high accuracy, providing the basis for building reliable situation awareness systems. This paper proposes EHFM, an efficient hierarchical filtering method for multi-source alarm data. EHFM contains five layers of filters, and the proposed hierarchical structure guarantees scalability and flexibility. Firstly, EHFM designs a unified format for multi-source alarm data to provide unified and customizable filtering. Moreover, the concept of “difference in joint performance entropy”, incorporated with a fuzzy analytic hierarchy algorithm, is proposed to guarantee robustness; these methods improve filtering accuracy by solving the misjudgment caused by excessive alarm scale and external environmental factors. Then, the threat degree of malicious events is classified by considering both the frequency and the impact of alerts. Finally, the classified and filtered alerts are visualized to facilitate subsequent processing by security managers or software. Based on EHFM, a security situation awareness system is developed to verify its efficiency. Comprehensive experiments demonstrate that the proposed scheme filters and classifies malicious events at fine granularity, and hence improves the accuracy and effectiveness of security situation awareness in large-scale alarm scenarios.
RCP:Mean Value Protection Technology Under Local Differential Privacy
LIU Likang, ZHOU Chunlai
Computer Science. 2023, 50 (2): 333-345.  doi:10.11896/jsjkx.220700273
This paper focuses on the mean estimation problem in differentially private queries. After reviewing current mainstream local differential privacy schemes for mean estimation over numerical data, it introduces the random censoring mechanism from randomized response to reveal the basic principle of mean computation under local differential privacy, proposes a utility optimization theorem on the variance of the mean estimate, and gives a bound optimization formula, improving the interpretability and operability of utility optimization theory in this field. Based on this theory, the paper proposes RCP, a practical, concise and efficient mean estimation protocol that can collect and analyze data from Internet-connected smart-device users while satisfying local differential privacy. RCP is simple in structure, supports data analysis tasks over any number of numerical attributes, and is efficient in communication and computation, effectively alleviating the practical problems of complex algorithm design, difficult optimization and low efficiency. Finally, empirical studies demonstrate that the proposed method outperforms existing schemes in utility, efficiency and asymptotic error bounds.
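The abstract does not give RCP's exact mechanism. As background, the classic one-bit randomized-response scheme for mean estimation of values in [0, 1], stochastic rounding followed by a biased coin calibrated to the privacy budget ε, looks like this sketch (not the paper's protocol):

```python
import math, random

random.seed(0)

def perturb(x, eps):
    """One-bit eps-LDP report of a value x in [0, 1]: report 1 with
    probability (x(e^eps - 1) + 1) / (e^eps + 1), else 0."""
    e = math.exp(eps)
    q = (x * (e - 1) + 1) / (e + 1)
    return 1 if random.random() < q else 0

def estimate_mean(bits, eps):
    """Unbiased mean estimate recovered from the collected one-bit reports."""
    e = math.exp(eps)
    return (sum(bits) / len(bits) * (e + 1) - 1) / (e - 1)

data = [random.random() for _ in range(20000)]    # users' true values
true_mean = sum(data) / len(data)
est = estimate_mean([perturb(x, eps=1.0) for x in data], eps=1.0)
```

Each user sends a single bit, yet the calibrated debiasing step makes the aggregate estimate unbiased, with error shrinking as 1/sqrt(n); the worst-case probability ratio between any two inputs is exactly e^eps, which is the ε-LDP guarantee.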
Approach of Web Application Access Control Vulnerability Detection Based on State Deviation Analysis
MA Qican, WU Zehui, WANG Yunchao, WANG Xinlei
Computer Science. 2023, 50 (2): 346-352.  doi:10.11896/jsjkx.211100166
Attackers can exploit vulnerabilities in Web applications to carry out malicious behaviors such as disrupting application functionality and implanting Trojans. For detecting access control vulnerabilities in Web applications, existing methods suffer from high false-alarm and miss rates and low efficiency, owing to the difficulty of extracting code features and inaccurate behavior modeling. This paper proposes a method for detecting Web access control vulnerabilities based on state deviation analysis. It combines white-box testing techniques to extract access-control-related constraints from code and generate the Web application's expected access policy, then derives the actual access policy through dynamic analysis, converting the detection of access control vulnerabilities into the detection of state deviations. A prototype tool, ACVD, built on this technique can accurately detect access control vulnerability types such as unauthorized access and privilege abuse. Tested on 5 real Web applications, it finds 16 real vulnerabilities, with a recall of 98%, about 300% higher than traditional black-box tools.
Interdiscipline & Frontier
Survey of Container Technology for High-performance Computing System
CHEN Yiyang, WANG Xiaoning, LU Shasha, XIAO Haili
Computer Science. 2023, 50 (2): 353-363.  doi:10.11896/jsjkx.220100163
Abstract PDF(3088KB) ( 487 )   
References | Related Articles | Metrics
Container technology has been widely used in the cloud computing industry, mainly for rapid migration and automated deployment of service software environments. With the deep integration of high-performance computing, big data and artificial intelligence technologies, the application software dependencies and configurations of high-performance computing systems are becoming increasingly complex, and supercomputing centers face a growing demand for user-defined software stacks. Accordingly, a variety of container implementations have been developed for high-performance computing environments to meet such practical needs. This paper reviews the development history of container technology, explains the technical principles of containers on the Linux platform, analyzes and evaluates container implementations for high-performance computing systems, and finally discusses future research directions for container technology in high-performance computing.
Thoughts on Development and Research of Science,Technology and Engineering Application of Brain & Mind-inspired Computing
LIU Yang, LIU Ruijia, ZHOU Liming, ZUO Xianyu, YANG Wei, ZHOU Yi
Computer Science. 2023, 50 (2): 364-373.  doi:10.11896/jsjkx.220500023
Abstract PDF(2543KB) ( 910 )   
References | Related Articles | Metrics
To develop a new generation of brain-inspired intelligence, we need to comprehensively consider the structure, function and behavior of natural intelligence; a bias toward any single direction is incomplete and makes it difficult to touch the essence of intelligence. Based on structural simulation of the nervous system, functional emulation of the cognitive system and behavioral imitation of natural intelligence, this paper defines the basic concept of brain & mind-inspired computing (BMC), puts forward the hypotheses, models and framework of BMC, and studies its frontier theory. It then explores and analyzes the technical routes, core algorithms and key technologies of BMC research, and summarizes the current state of complex systems and engineering applications of BMC with respect to brain mechanisms, mental models and behavior control. Combining the multidisciplinary and interdisciplinary characteristics of intelligence science, neuroscience, cognitive science, information science and computational mathematics, it further discusses the research paradigm and transdisciplinary construction of BMC, brain-inspired computing and brain-like computing. Research on BMC is expected to make major breakthroughs in the scientific theory, technological innovation and engineering systems of the new generation of brain-inspired intelligence.
Tensor Instruction Generation Optimization Fusing with Loop Partitioning
LIANG Jiali, HUA Baojian, SU Shaobo
Computer Science. 2023, 50 (2): 374-383.  doi:10.11896/jsjkx.220300147
Abstract PDF(2977KB) ( 414 )   
References | Related Articles | Metrics
A tensor compiler compiles an operator's tensor algorithm and schedule into code for the target hardware. To accelerate tensor operations, special-purpose deep learning processors adopt dedicated architectures with special instructions, supporting multi-core parallelism, multi-level specialized memory hierarchies and tensor computation. On top of such hardware sits a tensor instruction set closely tied to the hardware's characteristics. Under such a complex architecture, the use of tensor instructions faces many constraints and limitations, raising the following challenges. First, the conditional branches introduced by loop tiling, such as computing-task division or data chunking, increase the difficulty of pattern matching. Second, tensor instructions carry hardware constraints such as alignment and data layout. To address these challenges, a tensor instruction generation optimization algorithm based on loop partitioning is proposed. By partitioning the loop iteration space, the algorithm eliminates the conditional branches introduced by task division or data segmentation; instruction and hardware constraints are handled by zero padding, equivalent instruction substitution and additional computation; and tensor instructions are generated by pattern matching. This work extends the open-source deep learning compiler TVM (version 0.7) and implements a prototype compiler supporting tensor instruction generation for a DianNao-architecture machine learning accelerator. To evaluate the effectiveness of the algorithm, operator performance and development efficiency are tested for element-wise binary tensor operators, in-place unary tensor operators and convolution operators on the DianNao accelerator hardware platform. Experimental results show that the average speedup of the three classes of operators is 125.00%, the maximum speedup is 194.00%, and development efficiency improves by up to 7 times.
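The loop-partitioning idea in this abstract can be illustrated with a small sketch, with plain Python standing in for compiler-generated accelerator code (the function names are our own, not TVM APIs):

```python
def tiled_sum_with_branch(a, tile):
    """Naive tiling: the min() bounds check guards the partial last tile.

    That per-tile conditional is what blocks a tensor compiler from
    pattern-matching the inner loop to a fixed-size tensor instruction.
    """
    total = 0
    for start in range(0, len(a), tile):
        for i in range(start, min(start + tile, len(a))):
            total += a[i]
    return total

def tiled_sum_partitioned(a, tile):
    """Loop partitioning: split the iteration space into a branch-free
    main region of full tiles plus a remainder epilogue.

    Each full tile now has a statically known size, so it could be
    matched to a single tensor instruction; sum() stands in for that
    hardware tensor operation in this sketch.
    """
    total = 0
    main_end = (len(a) // tile) * tile
    for start in range(0, main_end, tile):
        total += sum(a[start:start + tile])  # full tile, no bounds check
    total += sum(a[main_end:])               # epilogue handles the tail
    return total
```

Both versions compute the same result; the partitioned form simply moves the boundary condition out of the hot loop, which is the property the pattern-matching instruction generator relies on. Hardware-side remedies such as zero padding would apply when even the epilogue must be issued as a fixed-size tensor instruction.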