Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 49 Issue 4, 15 April 2022
  
Contents
Contents
Computer Science. 2022, 49 (4): 0-0. 
Special Issue of Social Computing Based Interdisciplinary Integration
Develop Social Computing and Social Intelligence Through Cross-disciplinary Fusion
MENG Xiao-feng, YU Yan
Computer Science. 2022, 49 (4): 3-8.  doi:10.11896/jsjkx.yg20220402
The era of digital intelligence offers new opportunities for the development of social computing and social intelligence. Cross-disciplinary fusion shall be a critical approach for its deep development. This paper elaborates the connotation and denotation of social computing, discusses the paradigm shift of social computing research, and reviews the general development of social computing and social intelligence. Next, it looks forward to social computing and social intelligence in the era of digital intelligence, and proposes three pillars for constructing a social intelligence system based on the new infrastructure. The construction of such an intelligent system comprises three components: the construction of large-scale high-velocity data intelligence, the integration of multi-scale flexible spatial intelligence, and the formation of complex adaptive social intelligence. There is a level progression from data intelligence to social intelligence, in which data, computing and society are entangled. As such, computing science, data science, spatial science, complexity science and social science are required to interact from both theoretical and methodological perspectives. With the rapid update of digital-intelligent technologies and their penetration into the whole social-economic system, social computing and social intelligence are bound to seek breakthroughs and deep development in interdisciplinary cross-integration.
Finer-grained Mapping for Urban Scenes Based on POI
ZENG Jin, LU Yong-gang, YUE Yang
Computer Science. 2022, 49 (4): 9-15.  doi:10.11896/jsjkx.210800274
As a symbol of urban culture, meaning and emotion, "scene" is a concept beyond the physical space. In the context of the knowledge economy, an urban scene is an abstract concept describing the culture, values and lifestyle generated by a combination of amenities. It is regarded as attracting high-quality human capital and is thus an endogenous driving force of economic and urban development. Therefore, accurately grasping the state and spatial distribution of urban scenes is an essential dimension of urban development. Several studies have mapped urban scenes at the scale of the whole city or region, such as the ZIP code tabulation area via official commercial codes or Dianping data. This study proposes a methodological framework to achieve fine-grained mapping of urban scenes based on POI data and statistical methods. Scenes in Shenzhen are estimated, and the results show that the main scenes of Shenzhen are corporate, formality, exhibitionism, fashion and transgression. Moreover, three scene patterns are presented, which may arise from work, residential and creative entertainment spaces, respectively. In general, a practical methodological framework is proposed to map finer-grained scenes in cities, which is conducive to a more profound understanding and accurate identification of urban scenes and brings inspiration for urban development.
Conceptual Model for Large-scale Social Simulation
ZHANG Ming-xin
Computer Science. 2022, 49 (4): 16-24.  doi:10.11896/jsjkx.210900136
Large-scale agent-based social simulation is gradually proving to be an effective method for the study of human society. It can contribute to decision-making in social science, distributed artificial intelligence and agent technology in computer science, and the theory and modeling practice of computer simulation systems. However, existing research practice has difficulty in balancing model complexity and simulation performance. In view of these problems, this paper proposes a conceptual model framework for large-scale social simulation driven by agents and big data, and provides a reference implementation of the model components. Taking epidemic prediction and control in a large-scale artificial city as an example, it illustrates how to use the proposed conceptual framework to model a large-scale social system with complex human behavior and social interaction. It also points out potential applications in other social science fields, such as micro transportation systems and urban evacuation planning.
Integrated Modeling Method and Application System for Social Computing
WANG Qi, WANG Gang-qiao, CHEN Yong-qiang, LIU Yi
Computer Science. 2022, 49 (4): 25-29.  doi:10.11896/jsjkx.210900257
Complex social system modeling is the principal problem in social computing. Considering the modeling process and requirements in the field of social computing, a model deep-integration architecture called the POV framework is proposed. The framework consists of three parts, a physical layer, an overlay layer and a virtual layer, which provide methods for model organization, expression and integration. Based on this method, an interactive sharing and integration platform for social computing data models is built, which provides researchers with a social computing experimental platform including data resources, analysis tools, and a modeling and simulation computing environment. Application examples show that the platform can provide effective support for researchers carrying out social computing research.
EEG Emotion Recognition Based on Spatiotemporal Self-Adaptive Graph Convolutional Neural Network
GAO Yue, FU Xiang-ling, OUYANG Tian-xiong, CHEN Song-ling, YAN Chen-wei
Computer Science. 2022, 49 (4): 30-36.  doi:10.11896/jsjkx.210900200
With the rapid development of human-computer interaction in the computer-aided field, EEG has become a main means of emotion recognition. Meanwhile, graph networks have attracted wide attention due to their excellent ability to represent topological data. To further improve the representation performance of graph networks on multi-channel EEG signals, this paper, considering the sparsity and infrequency of EEG signals, proposes a self-adaptive brain graph convolutional network with spatiotemporal attention (SABGCN-ST). The method addresses the sparsity of emotion via the spatiotemporal attention mechanism and explores the functional connections between different electrode channels via a self-adaptive brain network topological adjacency matrix. Finally, feature learning on the graph structure is performed via graph convolution, and the emotion is predicted. Extensive experiments conducted on two benchmark datasets, DEAP and SEED, prove that SABGCN-ST has a significant advantage in accuracy compared with baseline models, and its average accuracy reaches 84.91%.
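For orientation, a generic formulation of graph convolution with a learnable (self-adaptive) adjacency matrix of the kind the abstract describes; the symbols are illustrative and are not taken from the paper:

$$H^{(l+1)} = \sigma\!\Big(\hat{D}^{-\frac{1}{2}}\,\hat{A}\,\hat{D}^{-\frac{1}{2}}\,H^{(l)} W^{(l)}\Big), \qquad \hat{A} = A_{\text{fixed}} + A_{\text{learn}} + I,$$

where $A_{\text{fixed}}$ encodes prior electrode connectivity, $A_{\text{learn}}$ is a trainable adjacency updated together with the network weights (the "self-adaptive" part), $\hat{D}$ is the degree matrix of $\hat{A}$, $H^{(0)}$ stacks per-channel EEG features, and $\sigma$ is a nonlinearity.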
Identification and Segmentation of User Value in Crowdsourcing Platforms: An Improved RFM Model
CHEN Dan-hong, PENG Zhang-lin, WAN De-quan, YANG Shan-lin
Computer Science. 2022, 49 (4): 37-42.  doi:10.11896/jsjkx.210800255
On a crowdsourcing platform, different types of users differ in participation intention, work motivation, business ability and other aspects, and the value they generate on the platform also differs. Segmenting users based on user value measurement is the key to better insight into user value and needs for personalized and refined user management. At the same time, the choice of measurement dimensions for crowdsourcing user value is also a problem to be solved. Therefore, based on the RFM model and combined with the characteristics of crowdsourcing platforms and crowdsourcing users, this paper first incorporates user credit into the user value model and constructs a crowdsourcing user value measurement model, RFMC. Secondly, combined with the required data obtained from the "Yipinweike" platform, the GBDT algorithm is used to complete the crowdsourcing user classification. Finally, the classification performance of Naïve Bayes, multinomial logistic regression and GBDT is compared, and the classification performance of the RFMC model is compared with that of the traditional model that does not consider user credit. Evaluation indicators show that the proposed model is suitable for crowdsourcing users and achieves good experimental results.
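A minimal sketch of the kind of pipeline the abstract describes: computing recency, frequency, monetary and credit (RFMC) features per user and fitting a gradient-boosting classifier. The column names, file names and label source are hypothetical, not the paper's actual data schema.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# orders: one row per completed crowdsourcing task (hypothetical schema)
orders = pd.read_csv("orders.csv")          # columns: user_id, days_since, amount, credit
rfmc = orders.groupby("user_id").agg(
    R=("days_since", "min"),                # recency: days since last completed task
    F=("user_id", "size"),                  # frequency: number of completed tasks
    M=("amount", "sum"),                    # monetary: total earnings on the platform
    C=("credit", "mean"),                   # credit: average platform credit score
)
labels = pd.read_csv("labels.csv").set_index("user_id")["segment"]  # hypothetical user segments

X_train, X_test, y_train, y_test = train_test_split(
    rfmc, labels.loc[rfmc.index], test_size=0.3, random_state=0)
clf = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05)  # GBDT classifier
print("accuracy:", clf.fit(X_train, y_train).score(X_test, y_test))
```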
Link Prediction for Node Featureless Networks Based on Faster Attention Mechanism
LI Yong, WU Jing-peng, ZHANG Zhong-ying, ZHANG Qiang
Computer Science. 2022, 49 (4): 43-48.  doi:10.11896/jsjkx.210800276
Link prediction is an important task in network science. It aims to predict the probability that a link exists between two nodes. There are many relations between entities in the real world, which can be described by network science in computers, and many problems in daily life can be transformed into link prediction tasks. Link prediction algorithms for node-featureless networks are convenient to migrate to directed networks, weighted networks, temporal networks, and so on. However, traditional link prediction algorithms face several problems: the mining of network structure information is not deep enough; the feature extraction process depends on subjective consciousness; the algorithms lack universality; and their time and space complexity is flawed, which makes them difficult to apply to real industrial networks. In order to effectively avoid the above problems, based on the basic structure of the graph attention network, graph embedding representation technology is used to collect node characteristics, the memory addressing strategy of the neural Turing machine is drawn on by analogy, and related work on important node discovery in complex networks is combined, so that a fast and efficient attention calculation method is designed and a node-featureless network link prediction algorithm FALP integrating a fast attention mechanism is proposed. Experiments on three public datasets and a private dataset show that FALP effectively avoids these problems and has excellent predictive performance.
EWCC Community Discovery Algorithm for Two-Layer Network
TANG Chun-yang, XIAO Yu-zhi, ZHAO Hai-xing, YE Zhong-lin, ZHANG Na
Computer Science. 2022, 49 (4): 49-55.  doi:10.11896/jsjkx.210800275
Aiming at the problem of community discovery in relational networks, and considering the strength of interaction between nodes and the information seepage mechanism, an edge weight and connected component (EWCC) community discovery algorithm based on edge weights and connected components is proposed. In order to verify the effectiveness of the algorithm, firstly, five kinds of interactive two-layer network models are constructed. By analyzing the influence of the interaction degree of nodes between layers on the network topology, 30 data sets generated under the five two-layer network models are determined. Secondly, real data sets are selected to compare with the GN algorithm and the KL algorithm on the evaluation criteria of modularity, algorithm complexity and number of communities. Experimental results show that the EWCC algorithm has high accuracy. Then, numerical simulation shows that with the weakening of the interaction relationship between layers, the modularity is inversely proportional to the number of communities, and the community division effect is better when the node relationship between layers is weaker. Finally, as an application of the algorithm, a "user-APP" two-layer network is constructed based on empirical data, and its communities are divided.
Modeling and Analysis of WeChat Official Account Information Dissemination Based on SEIR
CHANG Ya-wen, YANG Bo, GAO Yue-lin, HUANG Jing-yun
Computer Science. 2022, 49 (4): 56-66.  doi:10.11896/jsjkx.210900169
In the era of the mobile Internet, it has become an irreversible trend for the social relationship chain to go online. The appearance of the WeChat official account not only improves the convenience of information acquisition, but also increases the difficulty of information governance. Research on the dissemination process of official account information on the WeChat social network and on curbing the spread of rumors on social networks has become the focus of WeChat operators and social regulatory authorities. Based on the SEIR infectious disease model, this paper uses real operating data provided by Beijing Sootoo Company to calculate and simulate the mutual conversion probabilities of S-state, E-state, I-state and R-state users, and restores the whole process of official account information dissemination on the WeChat social network. In addition, this paper quantitatively analyzes the influence of the number of official account fans, the influence of fans, the infection probability P1 of susceptible users becoming exposed users, and the dissemination probability P2 of exposed users becoming infected users on the process of information dissemination, which proves the effectiveness of the key opinion leader's forced immunization strategy in suppressing information dissemination.
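A minimal discrete-time SEIR simulation of the kind described above, where P1 moves susceptible users to the exposed state after contact and P2 moves exposed users to the infected (forwarding) state. All parameter values and the contact rule are illustrative, not the paper's fitted probabilities.

```python
import numpy as np

def simulate_seir(n_users=100_000, p1=0.05, p2=0.3, recover=0.2, seed_infected=10, steps=50):
    """Discrete-time SEIR dynamics on a fully mixed user population (illustrative only)."""
    S, E, I, R = n_users - seed_infected, 0, seed_infected, 0
    history = []
    for _ in range(steps):
        contact = 1 - (1 - p1) ** (I / n_users * 100)      # chance a susceptible user sees the post
        new_E = np.random.binomial(S, contact)             # S -> E: exposed to the official account post
        new_I = np.random.binomial(E, p2)                  # E -> I: exposed user forwards / spreads it
        new_R = np.random.binomial(I, recover)             # I -> R: infected user loses interest
        S, E, I, R = S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R
        history.append((S, E, I, R))
    return history

print(simulate_seir()[-1])   # final (S, E, I, R) counts
```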
Capability Building for Government Big Data Safety Protection: Discussions from Technological and Management Perspectives
SUN Xuan, WANG Huan-xiao
Computer Science. 2022, 49 (4): 67-73.  doi:10.11896/jsjkx.211000010
Government big data is a core asset for digital government construction in the new era, and it is of great significance to the upgrading of government functions and services and to the development of economic and social innovation. However, in a complex network circulation environment, capability building for data security protection cannot be ignored if the rational, orderly and reliable use of government big data is to be ensured. On the technical side, government big data security protection involves several core elements, including network security, platform security and application security; on the management side, it needs to focus on personnel quality and institutional quality. On the basis of theoretical discussions, specific technical and management capability indicators are given, and the construction practice of provincial-level agency unit A is analyzed.
Insights into Dataset and Algorithm Related Problems in Artificial Intelligence for Law
CONG Ying-nan, WANG Zhao-yu, ZHU Jin-qing
Computer Science. 2022, 49 (4): 74-79.  doi:10.11896/jsjkx.210900191
With the rapid development of artificial intelligence (AI) technology, the application of AI-related technologies in law has increased and attracted extensive attention. Specifically, AI has emerged in multiple legal scenarios such as automatic contract review and smart courts; compared with traditional approaches, its high efficiency shows great application potential in the judicial field. However, in other scenarios such as legal judgment prediction (LJP), AI faces challenges and doubts in data analysis and algorithms, although some attempts have been made. Through analysis of work related to legal AI, this paper summarizes the potential problems in datasets and algorithms for intelligent adjudication, investigates the changes in judicial practice that AI may bring, and discusses whether the problems encountered by AI will affect the justice of law. Finally, this paper briefly presents potential solutions to the above problems and provides insights into future development, in the hope that AI technology will have a more systematic application in China's judicial field and contribute to the construction of the socialist rule of law.
Big Data-driven Based Socioeconomic Status Analysis: A Survey
YAO Xiao-ming, DING Shi-chang, ZHAO Tao, HUANG Hong, LUO Jar-der, FU Xiao-ming
Computer Science. 2022, 49 (4): 80-87.  doi:10.11896/jsjkx.211100014
Socioeconomic status (SES), an overall measure of a person's economic and social standing relative to others that combines economic and sociological factors, has received much attention from researchers, as its assessment can help relevant organizations make various policies and decisions (governmental formulation of social policies, personalized advertising services, etc.). In addition, with the development of big data technology and machine learning in recent years, assessing people's socioeconomic attributes (SEAs) and further obtaining the corresponding socioeconomic status with a data-driven approach can address the extremely high cost of traditional methods. Therefore, this paper summarizes the research progress on applying big data techniques to socioeconomic status analysis in recent years. It first introduces the basic concept of socioeconomic status and discusses the challenges posed by big data methods compared to traditional methods. After that, it systematically summarizes and classifies the state-of-the-art methods based on the information used in the learning process, presents them in detail, and discusses the pros and cons of each type of method. Finally, it discusses the challenges and problems of inferring people's socioeconomic status and provides an outlook on future research directions.
Database & Big Data & Data Science
Survey of Visualization Methods on Academic Citation Information
ZHU Min, LIANG Zhao-hui, YAO Lin, WANG Xiang-kun, CAO Meng-qi
Computer Science. 2022, 49 (4): 88-99.  doi:10.11896/jsjkx.210300219
Academic literature includes abundant citation information, which has become a major analysis object and hot topic in both bibliometrics and scientific research evaluation. Compared with quantitative analysis methods based on mathematics and statistics, visualization methods can vividly present citation information in time sequence and hierarchy, as well as support the interactive mining of complex citation networks, which is of great significance to the reform of scientific research evaluation and the innovation of bibliometric methods. This paper introduces recent domestic and foreign research on academic citation information analysis and summarizes the general framework of academic citation information visualization; it classifies visualization methods according to two types of analysis tasks, entity evaluation and bibliometrics, then elaborates the current research status, advantages and disadvantages of each type of method, and finally points out the challenges and directions for further exploration of academic citation information visualization.
Budget-aware Influence Maximization in Social Networks
ZUO Yuan-lin, GONG Yue-jiao, CHEN Wei-neng
Computer Science. 2022, 49 (4): 100-109.  doi:10.11896/jsjkx.210300228
The influence maximization of social networks is a crucial problem in the field of network science, with wide applications from advertising to public opinion control. This problem refers to selecting a set of source nodes in a social graph to achieve the greatest influence under a certain propagation model. Since node selection is a typical NP-hard problem, it encounters combinatorial explosion when facing large-scale networks. Hence, in recent years, heuristic algorithms have generally been adopted to obtain approximate solutions in acceptable time. However, existing work rarely considers the cost of the selected nodes, so the solutions obtained cannot meet budget limitations in practical applications. This paper aims to solve the influence maximization problem of social networks under cost-constrained conditions. By fully considering the costs, we build a budget-aware influence maximization model and propose a node selection algorithm named community detection-based ant colony system (CDACS) to deal with it. First, in order to save the unnecessary expenditure coming from the redundant coverage of source nodes, we use the fast greedy modularity maximization algorithm to cluster the network, and introduce a cross-community walking factor in the state transition process of ants to enhance the global exploration ability of the ant colony on the network. Second, we design a penalty-based evaluation function to guide the search towards the budget-feasible region and develop new heuristic and pheromone forms to enhance search efficiency. Experimental results on real datasets show that the CDACS algorithm enhances the traditional ant colony algorithm by achieving a 15% improvement in average coverage rate and a 20% reduction in running time overhead. Compared with other existing influence maximization algorithms, the coverage effect is also significantly improved. Moreover, the reliability of the CDACS algorithm in cost control is validated by experiments.
Technical Research of Graph Neural Network for Text-to-SQL Parsing
CAO He-xin, ZHAO Liang, LI Xue-feng
Computer Science. 2022, 49 (4): 110-115.  doi:10.11896/jsjkx.210200173
The Text-to-SQL task in the field of semantic parsing is of great significance for realizing database-based automatic question answering. At present, deep learning models such as the sequence generation model Seq2Seq have achieved significant results on single-table SQL queries, but the problem of multi-table SQL queries remains to be solved. Graph neural networks can effectively extract the information associating databases, tables and questions, enrich the semantic information in the parsing process, and improve the accuracy of multi-table SQL queries. This paper proposes an adaptive graph construction method and a graph encoding method. Question information is introduced into an existing Text-to-SQL model, and the initial weights of the graph network are generated by a convolution operation on the concatenated word vectors of the question sentence and the database, so that general training can be achieved for different databases of the same type. The IRNet framework and relational expansion are used to design the overall model, which is verified on the open Text-to-SQL data set Spider. Results show that the technique can effectively improve the matching accuracy of multi-table SQL statement generation, and the algorithm has important reference value for research on graph neural networks in the Text-to-SQL field.
Fast Unsupervised Graph Embedding Based on Anchors
YANG Hui, TAO Li-hong, ZHU Jian-yong, NIE Fei-ping
Computer Science. 2022, 49 (4): 116-123.  doi:10.11896/jsjkx.210200098
Graph embedding is a widely used method for dimensionality reduction due to its computational effectiveness. The computational complexity of graph embedding methods that construct a traditional K-nearest neighbors (K-NN) graph is at least O(n²d), where n and d represent the sample size and dimensionality, respectively. The construction of K-NN graphs is very time-consuming since the computational complexity is proportional to the square of the number of samples when the amount of data is large, which limits the application of graph embedding algorithms on large-scale data sets. To address this problem, a fast unsupervised graph embedding based on anchors (FUGE) method is proposed in this paper. FUGE first selects anchors (representative points) from the data, then constructs a similarity graph between the data and the anchors, and finally performs graph embedding analysis. Since the number of anchors is much smaller than the number of data points, the proposed method can effectively reduce the computational complexity of graph construction. Different from using a kernel function to construct the similarity graph, FUGE directly learns the data-anchor similarity graph from the neighbor information of the data, which further accelerates graph construction. The overall computational complexity of FUGE is O(nd²+nmd), where m is the number of anchors. Extensive experiments on real-world benchmark data sets show the effectiveness and efficiency of the proposed method.
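A minimal sketch of the anchor-graph idea outlined above: pick m anchors with m much smaller than n (here simply by k-means), then connect each sample only to its k nearest anchors, giving an n-by-m similarity graph instead of an n-by-n one. This is a generic anchor-graph construction, not FUGE's specific neighbor-based learning rule.

```python
import numpy as np
from sklearn.cluster import KMeans

def anchor_graph(X, m=50, k=5):
    """Build an n x m data-anchor similarity graph (generic sketch, not FUGE itself)."""
    n = X.shape[0]
    anchors = KMeans(n_clusters=m, n_init=10).fit(X).cluster_centers_   # m anchor points
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)           # n x m squared distances
    Z = np.zeros((n, m))
    for i in range(n):
        idx = np.argsort(d2[i])[:k]                                      # k nearest anchors
        w = np.exp(-d2[i, idx] / (d2[i, idx].mean() + 1e-12))            # Gaussian-style weights
        Z[i, idx] = w / w.sum()                                          # row-normalized similarities
    return Z, anchors

Z, anchors = anchor_graph(np.random.rand(1000, 20))
print(Z.shape)   # (1000, 50): far smaller than a 1000 x 1000 K-NN graph
```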
Adaptive Multimodal Robust Feature Learning Based on Dual Graph-regularization
ZHAO Liang, ZHANG Jie, CHEN Zhi-kui
Computer Science. 2022, 49 (4): 124-133.  doi:10.11896/jsjkx.210300078
In the big data era, the widespread presence of massive multi-modal data has caused huge changes in data characteristics, namely wide variety and low value density. Different types of data both function independently and complement each other. Discovering the hidden value behind multi-modal data has become a key problem in big data mining tasks. Therefore, to tackle the shortcomings of low-quality multi-modal data, this paper proposes a new multi-modal robust feature learning method by introducing a modality-specific error matrix, so that the effect of noisy information on the fusion result can be effectively reduced. Moreover, a dual graph-regularization mechanism over data manifolds and feature manifolds is designed to describe the spatial structure of multi-modal data, which ensures data stability during multi-modal feature learning. On six real-world multi-modal data sets, the results are compared with several classical algorithms of recent years on three evaluation indexes, namely accuracy (ACC), normalized mutual information (NMI) and purity (PUR). Experimental results show that the proposed method is superior to all compared algorithms; in particular, on the network data set Webkb, which contains a large amount of noisy information, its ACC and NMI are improved by about 10% over the baseline algorithms. It can be seen that the proposed algorithm can accurately learn the shared features of multi-modal data.
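For intuition, a generic dual graph-regularized robust factorization objective of the kind the abstract describes; the notation is illustrative and is not taken from the paper:

$$\min_{\{U_v\},\,V,\,\{E_v\}}\;\sum_{v=1}^{M}\Big(\|X_v - U_v V - E_v\|_F^2 + \lambda\,\|E_v\|_{2,1} + \alpha\,\mathrm{Tr}\big(V L_v^{(d)} V^{\top}\big) + \beta\,\mathrm{Tr}\big(U_v^{\top} L_v^{(f)} U_v\big)\Big),$$

where $X_v$ is the $v$-th modality, $E_v$ is its modality-specific error (noise) matrix, $V$ is the shared feature representation, and $L_v^{(d)}$, $L_v^{(f)}$ are graph Laplacians built on the data manifold and the feature manifold of modality $v$, respectively.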
Application of Gray Wolf Optimization Algorithm on Synchronous Processing of Sample Equalization and Feature Selection in Credit Evaluation
CHU An-qi, DING Zhi-jun
Computer Science. 2022, 49 (4): 134-139.  doi:10.11896/jsjkx.210300075
With the rapid development of the Internet finance industry, traditional credit risk evaluation faces challenges in the face of massive data. Unbalanced sample categories and high feature redundancy in credit evaluation have become key factors affecting the classification accuracy of current evaluation. In order to solve the above problems, a method based on the gray wolf optimization algorithm is proposed to perform sample under-sampling and feature selection synchronously. In this method, the performance of the classifier is taken as the heuristic information of the gray wolf optimization algorithm, an intelligent search is then carried out to obtain the optimal combination of sample subset and feature subset, and a tabu-list strategy is introduced into the original gray wolf algorithm to prevent it from falling into local optima. Experimental results show that the proposed method improves considerably on other methods, and its performance on different data sets proves that it can effectively solve the problem of sample imbalance, reduce the dimensionality of the feature space, and improve classification accuracy. Compared with the original data, the accuracy of credit risk evaluation is improved by about 3%, which proves the applicability and superiority of this method in the field of credit evaluation.
Prediction Method of Structural Static Performance Based on Data Learning
ZHAO Hang, TONG Shui-guang, ZHU Zheng-zhou
Computer Science. 2022, 49 (4): 140-143.  doi:10.11896/jsjkx.210300238
Aiming at the high cost of establishing a prediction model in current mechanical structure optimization, a prediction method for structural static performance based on data learning is proposed. A cantilever beam is taken as the research object, and a finite element model is established to obtain the displacement field data of the simulation results. Then a boundary condition-displacement field surrogate model is constructed. The results show that the trend of the displacement field distribution is consistent with the actual situation, and the relative error of the maximum displacement is -0.02% and -0.47% under loads of 1000 N and 1600 N, respectively. The influences of the magnitude of the uniform force and the position of the concentrated force on displacement field prediction are discussed. The results show that the prediction error increases with the load amplitude. Compared with the uniform force, the prediction error under a concentrated force load is larger, and the error is larger when the loading position is near the edge. In the inverse problem, the displacement fields are taken as the input and the uniform forces and positions of the concentrated force are taken as the output to construct a displacement field-boundary condition surrogate model. The prediction errors under uniform loads of 1000 N and 1600 N are 0.15% and -0.48%, respectively, and the prediction errors for load positions at 5 mm and 10 mm are 0.38% and -1.84%. The method based on data learning provides a new approach to the prediction of structural static performance.
Three-way Drift Detection for State Transition Pattern on Multivariate Time Series
SHEN Shao-peng, MA Hong-jiang, ZHANG Zhi-heng, ZHOU Xiang-bing, ZHU Chun-man, WEN Zuo-cheng
Computer Science. 2022, 49 (4): 144-151.  doi:10.11896/jsjkx.210600045
Unsupervised drift detection for multivariate time series (MTSs) is an important task in machine learning. The issue is challenging because the definitions of sequential patterns and their drifts are very flexible. Inspired by the idea of "Think in Threes", this paper proposes a three-way drift detection method for state transition patterns with periodic wildcard gaps (3WDD-STAP), improved from the incremental STAP mining algorithm. Without additional parameters, both frequent and drifted STAPs can be obtained simultaneously. Considering the support changes around the increments, three types of STAP drift are defined. Type I drift indicates that STAPs change from frequent to infrequent, and the incremental dataset needs to be rescanned. Type II drift indicates that STAPs change from infrequent to frequent, and the original dataset needs to be rescanned. Type III drift indicates that STAPs remain frequent or infrequent, namely, these STAPs are normal, and no dataset needs to be rescanned. Finally, experimental results on two real-world datasets show that: 1) fewer drifted STAPs are obtained with smaller α and β, and vice versa; 2) the two types of drifted STAPs obey different distributions on different datasets; 3) the obtained STAPs and their drifts have strong readability.
Weak Label Feature Selection Method Based on Neighborhood Rough Sets and Relief
SUN Lin, HUANG Miao-miao, XU Jiu-cheng
Computer Science. 2022, 49 (4): 152-160.  doi:10.11896/jsjkx.210300094
In multi-label learning and classification, existing feature selection algorithms based on neighborhood rough sets use the classification margin of samples as the neighborhood radius. However, when the margin is too large, the classification may be meaningless; when the distances between samples are too large, abnormal heterogeneous or similar samples easily result; and these existing feature selection algorithms cannot deal with weak-label data. To address these issues, a weak-label feature selection method based on multi-label neighborhood rough sets and multi-label Relief is proposed. First, the numbers of heterogeneous and similar samples are introduced to improve the classification margin, based on which the neighborhood radius is defined, a new formula for neighborhood approximation accuracy is presented, and the multi-label neighborhood rough set model is constructed, which can effectively measure the uncertainty of sets in the boundary region. Second, an iteratively updated weight formula is employed to fill in most of the missing labels, and then, by combining the neighborhood approximation accuracy with mutual information, a new correlation between labels is developed to fill in the remaining missing-label information. Third, the numbers of heterogeneous and similar samples are further used to improve the label weighting and feature weighting formulas, and the multi-label Relief model is proposed for multi-label feature selection. Finally, based on the multi-label neighborhood rough set model and the multi-label Relief algorithm, a weak-label feature selection algorithm is designed to process high-dimensional data sets with missing labels and effectively improve the performance of multi-label classification. Simulation tests are carried out on eleven public multi-label data sets, and the experimental results verify the effectiveness of the proposed weak-label feature selection algorithm.
Attribute Reduction of Variable Precision Fuzzy Rough Set Based on Misclassification Cost
WANG Zi-yin, LI Lei-jun, MI Ju-sheng, LI Mei-zheng, XIE Bin
Computer Science. 2022, 49 (4): 161-167.  doi:10.11896/jsjkx.210500211
Attribute reduction is a hot research issue in rough sets. This paper studies how to reduce redundant attributes without increasing the misclassification cost. Firstly, the minimum misclassification degree of variable precision fuzzy rough sets is defined. Then, by introducing the decision process, a variable precision fuzzy rough set model is proposed based on the minimum misclassification degree, and a heuristic attribute reduction algorithm is proposed that takes the misclassification cost as an invariant. The algorithm is compared with other algorithms through experiments, and the results show that the attribute reduction results obtained by the proposed algorithm have the advantages of fewer retained attributes and lower misclassification cost.
Three-way Approximate Reduction Based on Positive Region
WANG Zhi-cheng, GAO Can, XING Jin-ming
Computer Science. 2022, 49 (4): 168-173.  doi:10.11896/jsjkx.210500067
Attribute reduction is one of the most important research topics in the theory of three-way decision. However, existing attribute reduction methods based on three-way decision are too strict, which limits the efficiency of attribute reduction. In this paper, a three-way approximate attribute reduction method based on the positive region is proposed. More specifically, attribute reduction is considered as the process of determining attributes as positive, boundary or negative ones according to their correlation with the decision attribute. The negative attributes are first removed while retaining the positive-region measure. Then, some of the boundary attributes are iteratively excluded by relaxing the positive-region measure. Finally, an approximate reduct is formed by the remaining attributes. Extensive experiments on UCI data sets demonstrate that the proposed method can achieve much smaller reducts with the same or even better performance in comparison with other representative methods, showing its effectiveness in attribute reduction.
Computer Graphics & Multimedia
Survey of 3D Gesture Tracking Algorithms Based on Monocular RGB Images
ZHANG Ji-kai, LI Qi, WANG Yue-ming, LYU Xiao-qi
Computer Science. 2022, 49 (4): 174-187.  doi:10.11896/jsjkx.210700084
In view of the needs of applications such as human-computer interaction (HCI) systems and virtual reality (VR) systems, the study of theories and methods for 3D gesture tracking has become one of the hot issues receiving widespread attention at home and abroad. In recent years, 3D gesture tracking algorithms based on computer vision have developed rapidly. Among them, the more economical and ubiquitous monocular RGB camera has the most potential; it is an important tool and path for bringing 3D gesture tracking applications into reality and has been a focus of researchers. In order to comprehend the development status of gesture tracking algorithms and assist researchers in this field in conducting deeper explorations, this paper first, in comparison with traditional methods, introduces 3D gesture tracking algorithms based on monocular RGB images, divides them into three categories, discriminative methods, generative methods and hybrid methods, and summarizes their corresponding advantages and disadvantages. Secondly, the influence of RGB image characteristics on 3D gesture tracking is discussed, and methods to alleviate the depth ambiguity of the image are generalized. Thirdly, according to this classification, representative algorithms with RGB as input data are analyzed in depth, and the specific strengths and weaknesses of related algorithms are compared through visualized performance evaluation indexes. Finally, the problems faced by current 3D gesture tracking algorithms are summarized and future development is prospected.
Improved Ellipse Fitting Algorithm with Outlier Removal
GUO Si-yu, WU Yan-dong
Computer Science. 2022, 49 (4): 188-194.  doi:10.11896/jsjkx.210200040
The results of ellipse fitting can be considerably distorted by outliers in the fitted point set. To tackle this problem, three improved ellipse fitting algorithms are proposed, one based on least trimmed squares and the other two on dual point removal. The least trimmed squares algorithm starts from a random sample of the original fitted set; in each iteration a new fitted set is formed by the points with the least residual errors, until the process converges to an ellipse fitting a subset whose members are mostly non-outliers. The dual point removal algorithms, on the other hand, start from the whole fitted set, remove the two points with the maximal positive and the minimal negative residual errors respectively, and halt when the number of points in the remaining set does not exceed a user-defined threshold. The two proposed approaches and existing methods are compared on an image base of actual accessories. Experimental results show that when the number of reserved ellipse points is relatively small, the dual removal-based algorithms present the best fitting accuracy but are slower than the least trimmed squares algorithm. When the best performance with parameter tuning is concerned, however, the least trimmed squares algorithm achieves a shape-location matching accuracy of 0.62 pixels and an orientation matching accuracy of 0.6°, at an average execution time of 6.5 ms, outperforming the other algorithms. Other advantages of the proposed algorithms include the small number of algorithm parameters, the intuitiveness of the parameters, and the insensitivity of algorithm performance to the parameters. These experimental results provide solid evidence for the effectiveness of the proposed algorithms, especially the least trimmed squares algorithm.
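A minimal sketch of a least-trimmed-squares style ellipse fit: fit an algebraic conic to a random subset, then repeatedly refit on the points with the smallest algebraic residuals. The conic parameterization and stopping rule are generic choices, not the paper's exact formulation.

```python
import numpy as np

def fit_conic(pts):
    """Least-squares fit of a*x^2 + b*xy + c*y^2 + d*x + e*y = 1 (algebraic form)."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x * x, x * y, y * y, x, y])
    coef, *_ = np.linalg.lstsq(A, np.ones(len(pts)), rcond=None)
    return coef

def lts_ellipse(pts, keep=0.7, iters=20, seed=0):
    """Least trimmed squares: keep only the fraction of points with the smallest residuals."""
    rng = np.random.default_rng(seed)
    n_keep = int(keep * len(pts))
    subset = pts[rng.choice(len(pts), n_keep, replace=False)]     # random initial subset
    for _ in range(iters):
        coef = fit_conic(subset)
        x, y = pts[:, 0], pts[:, 1]
        resid = np.abs(np.column_stack([x * x, x * y, y * y, x, y]) @ coef - 1)
        new_subset = pts[np.argsort(resid)[:n_keep]]              # trim the largest residuals
        if np.array_equal(new_subset, subset):                    # converged
            break
        subset = new_subset
    return fit_conic(subset)

# usage: pts is an (n, 2) array of noisy edge points; the returned coefficients describe the trimmed fit
```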
Sketch Colorization Method with Drawing Prior
DOU Zhi, WANG Ning, WANG Shi-jie, WANG Zhi-hui, LI Hao-jie
Computer Science. 2022, 49 (4): 195-202.  doi:10.11896/jsjkx.210300140
Automatic sketch colorization has become an important research topic in computer vision. Previous methods intend to improve colorization quality with advanced network architectures or innovative pipelines; however, they usually generate results with concentrated hue and unreasonable saturation and gray distribution. To alleviate these problems, this paper proposes a sketch colorization method with drawing priors. Inspired by the actual coloring process, the method learns widely used drawing priors (such as hue variation, saturation contrast and gray contrast) to improve the quality of automatic sketch colorization. Specifically, it incorporates a pixel-level loss in the HSV color space to obtain more natural results with fewer artifacts. Meanwhile, three heuristic loss functions that introduce the drawing priors of hue variation, saturation contrast and gray contrast are used to train the method to generate results with harmonious color composition. The method is compared with current state-of-the-art methods on a test dataset constructed from real sketch images. Fréchet inception distance (FID) and mean opinion score (MOS) are adopted to measure the similarity between the distributions of real and generated images and the visual quality, respectively. Compared to the second-best method, the experimental results show that the FID of the proposed method decreases by 21.00 and the MOS increases by 0.96. All experimental results prove that the proposed method effectively improves the visual quality of automatic sketch colorization.
Intracerebral Hemorrhage Image Segmentation and Classification Based on Multi-task Learning of Shared Shallow Parameters
ZHAO Kai, AN Wei-chao, ZHANG Xiao-yu, WANG Bin, ZHANG Shan, XIANG Jie
Computer Science. 2022, 49 (4): 203-208.  doi:10.11896/jsjkx.201000153
Non-enhanced CT scanning is the first choice for the diagnosis of suspected cerebral hemorrhage in the emergency room. Medical staff usually use CT images to manually segment the lesions of patients with suspected acute cerebral hemorrhage and then classify them based on clinical experience. This method of manual diagnosis depends on the physician's experience and is highly subjective. Moreover, the segmentation and classification tasks are performed separately, the characteristic information associated between the two tasks cannot be fully utilized, and the time cost is high, which increases the difficulty of quickly segmenting and classifying cerebral hemorrhage lesions from CT images. In response to the above problems, this paper proposes a model for segmentation and classification of cerebral hemorrhage images based on multi-task learning. On the one hand, the weights of the loss function are optimized according to the difficulty of learning the different tasks. On the other hand, public information sharing is realized in the shallow layers of the multi-task learning network, and private information of different tasks is extracted in the deep layers to obtain more representative features, so as to quickly and accurately segment and classify the CT images of patients with cerebral hemorrhage. The experimental results show that the segmentation annotations generated by the multi-task learning network have good visual consistency with the real annotations. Under the optimal weight, the average Dice coefficient (DSC) of all subjects is 0.828, the sensitivity is 0.842, the specificity is 0.985, and the positive predictive value (PPV) is 0.838. The accuracy, sensitivity, specificity and AUC value of the multi-task learning network classification are 95.00%, 90.48%, 100.00% and 0.982, respectively. Compared with single-task deep learning, Y-Net and classification-assisted multi-task learning, this method makes more effective use of related task information, and improves the segmentation and classification accuracy of hemorrhage lesions by adjusting the weights of the loss function.
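A generic sketch of the shared-shallow / task-specific-deep idea with a weighted sum of a segmentation loss and a classification loss. The layer sizes, class count and fixed task weights are placeholders, not the paper's architecture or weighting scheme.

```python
import torch
import torch.nn as nn

class SharedShallowMTL(nn.Module):
    """Generic multi-task net: shared shallow encoder, task-specific heads (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.shared = nn.Sequential(                       # shared shallow feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.seg_head = nn.Conv2d(32, 1, 1)                # per-pixel lesion mask logits
        self.cls_head = nn.Sequential(                     # image-level hemorrhage type logits
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 5))

    def forward(self, x):
        h = self.shared(x)
        return self.seg_head(h), self.cls_head(h)

model = SharedShallowMTL()
seg_loss_fn, cls_loss_fn = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()
w_seg, w_cls = 0.7, 0.3                                    # placeholder task weights

x = torch.randn(2, 1, 64, 64)                              # dummy CT slices
mask = torch.randint(0, 2, (2, 1, 64, 64)).float()
label = torch.randint(0, 5, (2,))
seg_out, cls_out = model(x)
loss = w_seg * seg_loss_fn(seg_out, mask) + w_cls * cls_loss_fn(cls_out, label)
loss.backward()
```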
Scene Recognition Method Based on Multi-level Feature Fusion and Attention Module
XU Hua-jie, QIN Yuan-zhuo, YANG Yang
Computer Science. 2022, 49 (4): 209-214.  doi:10.11896/jsjkx.210100135
A scene image is usually composed of background information and foreground objects. A convolutional neural network (CNN) used for scene recognition usually needs to recognize the category of a scene according to the characteristics of key objects in the scene, or even combined with the positional relationships between objects. Aiming at the problem that the features of small key targets in a scene image gradually disappear as the network deepens, which leads to scene recognition errors, a scene recognition method based on multi-level feature fusion and an attention module is proposed. Firstly, the feature extraction part of the deep neural network ResNet-18 is divided into five branches, the multi-level features output by the five branches are fused, and the fused features are used for scene recognition and classification to make up for the lost target information. Secondly, an improved attention module is added to the network to focus learning on the key targets in the scene image, so as to further improve the recognition effect. Experimental results on several scene datasets show that the recognition accuracy of the proposed method on the MIT-67, SUN-397 and UIUC-Sports datasets reaches 88.2%, 79.9% and 97.7%, respectively, higher than current mainstream scene recognition methods.
Infrared and Visible Image Fusion Network Based on Optical Transmission Model Learning
YAN Min, LUO Xiao-qing, ZHANG Zhan-cheng
Computer Science. 2022, 49 (4): 215-220.  doi:10.11896/jsjkx.210200174
The fusion of infrared and visible images can obtain more comprehensive and richer information. Because there is no ground truth reference image, existing fusion networks simply try to find a balance between the two modalities as far as possible. Due to the lack of ground truth labels in existing data sets, supervised learning methods cannot be directly applied to image fusion. In this paper, a multi-modal image synthesizing method based on the ambient light transmission model is proposed. Based on the NYU-Depth labeled data set and its depth annotation information, a set of infrared and visible multi-modal pairs with their ground truth fusion images is synthesized. An edge loss function and a detail loss function are introduced into the conditional GAN, and the network is trained end-to-end over the synthesized multi-modal image data set, finally yielding a fusion network. The trained network can make the fused image retain the details of the visible image and the characteristics of the infrared image, and sharpen the boundaries of thermal targets in the infrared image. Compared with state-of-the-art methods including IFCNN, DenseFuse and FusionGAN on the open TNO benchmark data set, the effectiveness of the proposed method is verified with subjective and objective image quality evaluation.
End-to-End Speech Synthesis Based on BERT
AN Xin, DAI Zi-biao, LI Yang, SUN Xiao, REN Fu-ji
Computer Science. 2022, 49 (4): 221-226.  doi:10.11896/jsjkx.210300071
To address the problems of low training and prediction efficiency and long-distance information loss in RNN-based neural network speech synthesis models, an end-to-end BERT-based speech synthesis method is proposed that uses the self-attention mechanism instead of an RNN as the encoder in the Seq2Seq architecture for speech synthesis. The method uses a pre-trained BERT as the model's encoder to extract contextual information from the input text content, the decoder outputs the Mel spectrogram using the same architecture as the speech synthesis model Tacotron2, and finally the trained WaveGlow network is used to transform the Mel spectrogram into the final audio result. This method significantly reduces the training parameters and training time by fine-tuning the downstream task based on pre-trained BERT. At the same time, the hidden states in the encoder can be computed in parallel with the self-attention mechanism, thus making full use of the parallel computing power of the GPU to improve training efficiency and effectively alleviate the long-range dependency problem. Comparison experiments with the Tacotron2 model show that the proposed model is able to double the training speed while obtaining results similar to those of the Tacotron2 model.
Droplet Segmentation Method Based on Improved U-Net Network
GAO Xin-yue, TIAN Han-min
Computer Science. 2022, 49 (4): 227-232.  doi:10.11896/jsjkx.210300193
The accurate segmentation of droplet images is an important part of high-precision contact angle measurement. Aiming at the problems of inaccurate targets, incomplete contours, and poor effect at the solid-liquid-vapor intersection and boundary details in the process of droplet segmentation, a neural network model suitable for droplet segmentation is proposed. The model is based on the U-Net network, and a 1×1 convolution layer is added at its input to summarize image features and avoid losing information from the initial image. The ResNet18 structure is used as the feature learning encoder of U-Net to enhance the expression ability of the network and promote the propagation of gradients. The feature fusion technique of dense connection is added in the decoding process, which improves the detail information of the segmented target and reduces the network parameters. Finally, a batch normalization operation is added after each convolution layer to further optimize network performance. Experimental results show that the improved U-Net model can effectively improve the accuracy of droplet identification and the segmentation effect, and has a certain reference value in the field of contact angle measurement.
Human Abnormal Behavior Detection Method Based on Improved YOLOv3 Network Model
ZHANG Hong-min, LI Ping-ping, FANG Xiao-bing, LIU Hong
Computer Science. 2022, 49 (4): 233-238.  doi:10.11896/jsjkx.210300251
The data of traditional video surveillance are very large and complex, and abnormal human behaviors cannot be detected in a timely and effective manner. In response to these problems, this paper presents an improved YOLOv3 algorithm (YOLOv3-MSSE) for the detection of abnormal human behavior. The algorithm is based on the traditional YOLOv3 network model and constructs a multi-scale feature extraction network from residual modules, which improves the detection accuracy for large targets. At the same time, by incorporating the attention mechanism into different positions of the network structure, the importance of the features in each channel of the feature map can be weighted, which effectively improves the detection performance of the model for abnormal human behavior. Experimental results show that compared with the traditional YOLOv3 algorithm, the mAP of YOLOv3-MSSE increases by 20.8% and the F1-score increases by 11.3%. The proposed algorithm can not only effectively detect specific abnormal human behaviors in the monitoring scene, but also balance the relationship between detection precision and recall well. In addition, it is more suitable than similar methods for the detection of abnormal human behavior in actual monitoring scenarios.
Study on Reflective Vest Detection for Apron Workers Based on Improved YOLOv3 Algorithm
XU Tao, CHEN Yi-ren, LYU Zong-lei
Computer Science. 2022, 49 (4): 239-246.  doi:10.11896/jsjkx.210200119
This paper proposes a reflective vest detection algorithm for apron staff based on prior knowledge and an improved YOLOv3 algorithm. Aiming at the low speed of existing target detection methods, candidate regions for reflective vest detection are generated based on prior knowledge to replace the initial candidate regions, so as to reduce the detection area. Darknet-37 is used to replace Darknet-53 as the backbone network for feature extraction, which improves the detection speed of the algorithm. Aiming at the problem that the reflective vest occupies a small area in the picture and is difficult to identify, a spatial pyramid pooling (SPP) structure is added to the detection model to realize feature enhancement, and the number of detection scales is increased to four for multi-scale feature fusion. The K-means++ algorithm is used to re-cluster the sizes of the labeled bounding boxes, and the clustering result replaces the initial anchor values of YOLOv3. GIoU is selected as the loss function to improve positioning accuracy. Experimental results show that the proposed target detection algorithm outperforms YOLOv3 on the self-built reflective vest data set: the precision and recall reach 97.6% and 96.1%, and the detection rate reaches 28.4 frames/s, which effectively solves problems such as inaccurate positioning, missed detections and low detection speed in the original model, and meets the real-time requirements of practical apron target detection while ensuring high detection accuracy.
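A minimal sketch of the anchor re-clustering step described above: run k-means with k-means++ initialization on the labeled bounding-box widths and heights and use the cluster centers as YOLO anchors. Plain Euclidean distance is used here for brevity, whereas YOLO-style anchor clustering often uses an IoU-based distance; the input file format is hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

# boxes: (n, 2) array of labeled bounding-box (width, height) pairs in pixels,
# e.g. parsed from the self-built reflective-vest annotations (hypothetical loader)
boxes = np.loadtxt("box_sizes.txt")          # each line: "width height"

# 12 anchors = 4 detection scales x 3 anchors per scale (illustrative split)
kmeans = KMeans(n_clusters=12, init="k-means++", n_init=10, random_state=0).fit(boxes)
anchors = kmeans.cluster_centers_[np.argsort(kmeans.cluster_centers_.prod(axis=1))]

print(np.round(anchors).astype(int))         # candidate replacements for the default YOLOv3 anchors
```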
Automatic Identification Algorithm of Blood Cell Image Based on Convolutional Neural Network
LI Guo-quan, YAO Kai, PANG Yu
Computer Science. 2022, 49 (4): 247-253.  doi:10.11896/jsjkx.210200093
A complete blood cell count is an important testing technique for evaluating overall health in medical diagnosis. In order to solve the problem that traditional blood cell counters and other devices make the counting procedure cumbersome and time-consuming, a blood cell recognition algorithm based on convolutional neural networks is proposed: three types of blood cells are automatically identified and counted based on Res2Net and the YOLO object detection algorithm. The performance of the blood cell identification model is enhanced by incorporating Res2Net into the YOLO model to extract fine-grained multi-scale features and increase the receptive field of each network layer. After training and testing on a public blood smear image dataset, the model can automatically identify and count red blood cells, white blood cells and platelets, with identification accuracies of 93.44%, 96.09% and 96.36%, respectively. Compared with other recognition models based on convolutional neural networks, the efficiency of blood testing can be significantly improved due to the high recognition accuracy and strong generalization.
Artificial Intelligence
Physics-informed Neural Networks: Recent Advances and Prospects
LI Ye, CHEN Song-can
Computer Science. 2022, 49 (4): 254-262.  doi:10.11896/jsjkx.210500158
Physics-informed neural networks (PINN) are a class of neural networks used to solve supervised learning tasks. They not only try to follow the distribution of the training data, but also obey the physical laws described by partial differential equations. Compared with purely data-driven neural networks, PINN imposes physical information constraints during training, so that more generalizable models can be obtained with less training data. In recent years, PINN has gradually become a research hotspot in the interdisciplinary field of machine learning and computational mathematics, has been studied in considerable depth in both theory and application, and has made considerable progress. However, due to the unique network structure of PINN, there are problems such as slow training, or even non-convergence, and low precision in practical applications. On the basis of summarizing current research on PINN, this paper explores network/system design and its applications in many fields such as fluid mechanics, and looks forward to further research directions.
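A minimal illustration of the physical constraint the survey refers to. For a generic PDE $\mathcal{N}[u](x,t)=0$, a PINN $u_\theta$ is trained on a weighted sum of a data-fit term and a PDE-residual term evaluated at collocation points; the symbols and weighting are generic, not taken from any specific paper in this issue:

$$\mathcal{L}(\theta) = \underbrace{\frac{1}{N_d}\sum_{i=1}^{N_d}\big|u_\theta(x_i,t_i)-u_i\big|^2}_{\text{data / initial / boundary fit}} \;+\; \lambda\,\underbrace{\frac{1}{N_r}\sum_{j=1}^{N_r}\big|\mathcal{N}[u_\theta](x_j,t_j)\big|^2}_{\text{PDE residual at collocation points}},$$

where the residual term is computed by automatic differentiation of the network output with respect to its inputs.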
Heating Strategy Optimization Method Based on Deep Learning
LI Peng, YI Xiu-wen, QI De-kang, DUAN Zhe-wen, LI Tian-rui
Computer Science. 2022, 49 (4): 263-268.  doi:10.11896/jsjkx.210300155
Abstract PDF(2936KB) ( 808 )   
References | Related Articles | Metrics
Central heating for buildings in winter is typically regulated by a climate compensator. However, this strategy relies heavily on manual experience and offers only relatively simple regulation, so optimizing the heating control strategy is important for keeping the indoor temperature stable and comfortable. For this task, this paper proposes a heating strategy optimization method based on deep learning and deep reinforcement learning, which optimizes the original control strategy from real historical data. A deep multiple time difference network (MTDN) is first developed as the simulator to predict the room temperature of the next time slot; by learning the thermodynamic law of indoor temperature change, the network achieves high accuracy while conforming to physical laws. The soft actor-critic (SAC) algorithm, based on maximum-entropy reinforcement learning, is then employed as the strategy optimizer that interacts with the simulator, using an evaluation index of the human body's thermal response as the reward for training and optimizing the heating control strategy. The predictive ability of the simulator and the control ability of the strategy optimizer are evaluated on real data from a heat exchange station in Tianjin. The results verify that, compared with other prediction simulators, the proposed simulator is both more accurate and consistent with physical laws, and that the strategy learned by the optimizer keeps the indoor temperature more stable and comfortable than the original strategy over multiple randomly sampled time periods.
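As a rough illustration of the simulator-in-the-loop setup described above (not the authors' implementation), the sketch below rolls a heating policy through a hypothetical one-step temperature surrogate and scores it with a comfort-band reward; in the paper the surrogate is the trained MTDN and the policy is optimized with SAC.

```python
# Hypothetical simulator + reward + rollout loop for heating strategy tuning.
import numpy as np

def simulator(indoor_t, supply_t, outdoor_t):
    # stand-in one-step thermodynamic surrogate (the paper uses a trained MTDN)
    return indoor_t + 0.1 * (supply_t - indoor_t) + 0.02 * (outdoor_t - indoor_t)

def reward(indoor_t, target=22.0, band=1.0):
    # zero inside the comfort band, increasingly negative outside it
    return -max(0.0, abs(indoor_t - target) - band)

def rollout(policy, indoor_t, outdoor_series):
    total = 0.0
    for outdoor_t in outdoor_series:
        supply_t = policy(indoor_t, outdoor_t)        # heating action
        indoor_t = simulator(indoor_t, supply_t, outdoor_t)
        total += reward(indoor_t)
    return total

# naive climate-compensator-like baseline: supply temperature falls as it warms
baseline = lambda indoor_t, outdoor_t: 45.0 - 0.5 * outdoor_t
print(rollout(baseline, 21.0, np.linspace(-5, 5, 24)))
```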
Personalized Learning Task Assignment Based on Bipartite Graph
TAN Zhen-qiong, JIANG Wen-Jun, YUM Yen-na-cherry, ZHANG Ji, YUM Peter-tak-shing, LI Xiao-hong
Computer Science. 2022, 49 (4): 269-281.  doi:10.11896/jsjkx.210500125
Abstract PDF(5291KB) ( 565 )   
References | Related Articles | Metrics
“Learning” is a complex activity: an individual's learning effect is affected by many factors, and different individuals have different learning habits, so it is challenging for students to plan their study schedules according to their own characteristics. Although some general theoretical strategies for task management have been proposed, they usually neglect differences among individuals, and existing research does not provide a computational method for producing a concrete task management schedule. To this end, this paper explores students' learning characteristics by analyzing the relation between learning efficiency and time, and quantifies personalized learning efficiency on this basis. It then formulates learning task assignment on a bipartite graph and designs adaptive utility functions for different learning goals. A dynamic allocation algorithm based on transfer learning, TLTA, is proposed to produce a reasonable schedule for each student. Finally, extensive experiments on real learning data sets validate the effectiveness and applicability of the proposed work.
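The task-to-slot assignment on the bipartite graph can be illustrated with a maximum-weight matching; the sketch below uses SciPy's assignment solver on an invented utility matrix and is not the TLTA algorithm itself.

```python
# Maximum-weight bipartite matching of learning tasks to time slots.
import numpy as np
from scipy.optimize import linear_sum_assignment

# rows: tasks, columns: time slots; entries: a student's estimated utility
utility = np.array([
    [0.9, 0.4, 0.2],   # e.g. task 0 is best done in the morning slot
    [0.3, 0.8, 0.5],
    [0.2, 0.5, 0.7],
])
tasks, slots = linear_sum_assignment(utility, maximize=True)
for t, s in zip(tasks, slots):
    print(f"task {t} -> slot {s} (utility {utility[t, s]:.1f})")
```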
Chinese Short Text Classification Algorithm Based on Hybrid Features of Characters and Words
LIU Shuo, WANG Geng-run, PENG Jian-hua, LI Ke
Computer Science. 2022, 49 (4): 282-287.  doi:10.11896/jsjkx.210200027
Abstract PDF(2639KB) ( 773 )   
References | Related Articles | Metrics
The rapid development of information technology has led to massive numbers of Chinese short texts on the Internet, and mining valuable information from them with classification techniques is a current research hotspot. Compared with long Chinese texts, short texts contain fewer words, more ambiguity and less regular information, which makes feature extraction and representation challenging. For this reason, a Chinese short text classification algorithm based on a deep neural network with hybrid character and word features is proposed. First, the character vectors and word vectors of a Chinese short text are computed separately; their features are then extracted and fused, and the classification is completed through a fully connected layer and a softmax layer. Test results on the public THUCNews news data set show that the algorithm outperforms the mainstream TextCNN, BiGRU, Bert and ERNIE_BiGRU models in accuracy, recall and F1 value, and performs well on short text classification.
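A minimal sketch of the character/word hybrid feature fusion idea is shown below; the embedding sizes, mean pooling and single fully connected layer are simplifying assumptions rather than the paper's exact architecture.

```python
# Two-branch (character + word) feature fusion for short-text classification.
import torch
import torch.nn as nn

class CharWordClassifier(nn.Module):
    def __init__(self, char_vocab, word_vocab, dim=128, n_classes=10):
        super().__init__()
        self.char_emb = nn.Embedding(char_vocab, dim, padding_idx=0)
        self.word_emb = nn.Embedding(word_vocab, dim, padding_idx=0)
        self.fc = nn.Linear(2 * dim, n_classes)

    def forward(self, char_ids, word_ids):
        c = self.char_emb(char_ids).mean(dim=1)   # pooled character features
        w = self.word_emb(word_ids).mean(dim=1)   # pooled word features
        fused = torch.cat([c, w], dim=-1)         # hybrid feature vector
        return self.fc(fused)                     # logits for softmax / CE loss
```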
Text Classification Method Based on Word2Vec and AlexNet-2 with Improved Attention Mechanism
ZHONG Gui-feng, PANG Xiong-wen, SUI Dong
Computer Science. 2022, 49 (4): 288-293.  doi:10.11896/jsjkx.211100016
Abstract PDF(2391KB) ( 628 )   
References | Related Articles | Metrics
To improve the accuracy and efficiency of text classification, a method based on Word2Vec text representation and AlexNet-2 with an improved attention mechanism is proposed. First, Word2Vec is used to embed the word features of the text, and the trained word vectors represent the text as distributed vectors. An improved AlexNet-2 is then used to encode long-distance dependencies between words. Meanwhile, an attention mechanism is added to the model to learn the contextual semantics of the target word efficiently, and the word weights are adjusted according to the correlation between the input word vectors and the final prediction. Experiments on three public data sets, covering both large and small numbers of labeled samples, show that the proposed method significantly improves the performance and efficiency of text classification compared with existing strong methods.
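The attention step described above can be pictured as a small additive-attention module that assigns a learned weight to every encoded position and returns the weighted context vector; this is a generic sketch, not the paper's AlexNet-2 model.

```python
# Generic additive attention over a sequence of encoded word features.
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.query = nn.Linear(dim, 1, bias=False)

    def forward(self, h):                                   # h: (batch, seq_len, dim)
        scores = self.query(torch.tanh(self.proj(h)))       # (batch, seq_len, 1)
        weights = torch.softmax(scores, dim=1)              # per-word weights
        context = (weights * h).sum(dim=1)                  # weighted context vector
        return context, weights
```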
Modeling and Analysis of Emergency Decision Making Based on Logical Probability Game Petri Net
LI Qing, LIU Wei, GUAN Meng-zhen, DU Yu-yue, SUN Hong-wei
Computer Science. 2022, 49 (4): 294-301.  doi:10.11896/jsjkx.210300224
Abstract PDF(2307KB) ( 431 )   
References | Related Articles | Metrics
To exploit the modeling advantages of logical Petri nets for batch processing and for uncertainty in value passing, this paper integrates the relevant game elements of a multi-agent game process, models the multi-agent decision problem, addresses the optimization of multi-agent dynamic game decisions, and proposes the logical game decision Petri net. First, each token is treated as a rational agent: the paper defines its utility function value, and gives the definitions of the utility functions and the state probability transfer function. Next, it introduces decision transitions, determines the optimal decision transition by comparing token utility function values, and provides the corresponding algorithm. Finally, the dynamic game decision process of an emergency is modeled and analyzed with the logical game decision Petri net, and the dynamic game process is analyzed on the reachability graph constructed from reachable markings. The algorithm for generating the reachability graph is described, and the solution of the dynamic game decision problem is discussed; in this way the optimal emergency pre-arranged plan is generated and resource conflicts in the emergency process are analyzed with the logical game decision model. On this basis, the effectiveness and superiority of the model for analyzing emergency decision processes are verified.
Computer Network
Cooperation Localization Method Based on Location Confidence of Multi-UAV in GPS-denied Environment
SHI Dian-xi, LIU Cong, SHE Fu-jiang, ZHANG Yong-jun
Computer Science. 2022, 49 (4): 302-311.  doi:10.11896/jsjkx.210200106
Abstract PDF(4194KB) ( 998 )   
References | Related Articles | Metrics
Localization of unmanned aerial vehicles (UAVs) in GPS-denied environments is a difficult problem. This paper focuses on the cooperative localization (CL) of a UAV cluster system in a GPS-denied environment. First, location confidence (LC) is proposed to quantify the localization accuracy of a UAV in such an environment. Second, an LC-based CL method is proposed that adaptively adjusts the cooperative weight of each UAV through relative localization based on persistent excitation, improving the localization accuracy of the cluster. Third, an extended Kalman filter (EKF) is designed for UAV attitude calculation; it can be deployed on each UAV to fuse data from multiple heterogeneous sensors together with the CL output. Finally, a multi-UAV cooperative localization system is implemented on ROS and verified in a multi-UAV flight simulation scene in Gazebo. Simulation results show that the LC-based CL method effectively alleviates the error accumulation of a traditional inertial navigation system and improves the localization accuracy of the UAV cluster.
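The role of location confidence in the cooperative step can be illustrated with a simple confidence-weighted fusion of position estimates; the weighting rule and the numbers below are illustrative assumptions, not the paper's exact formulation.

```python
# Confidence-weighted fusion of one UAV's position from its own estimate and
# its neighbours' estimates plus measured relative vectors.
import numpy as np

def fuse_position(own_est, own_conf, neighbour_ests, relative_meas, neighbour_confs):
    estimates, weights = [own_est], [own_conf]
    for n_est, rel, conf in zip(neighbour_ests, relative_meas, neighbour_confs):
        estimates.append(n_est + rel)      # neighbour's view of our position
        weights.append(conf)               # weighted by its location confidence
    weights = np.asarray(weights) / np.sum(weights)
    return np.average(np.asarray(estimates), axis=0, weights=weights)

own = np.array([10.0, 5.0])
fused = fuse_position(own, 0.3,
                      [np.array([8.0, 4.0])],     # neighbour estimate
                      [np.array([2.1, 0.8])],     # measured relative vector
                      [0.9])                      # neighbour confidence
print(fused)   # pulled toward the high-confidence neighbour's estimate
```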
Fair Joint Optimization of QoE and Energy Efficiency in Caching Strategy for Videos
PENG Dong-yang, WANG Rui, HU Gu-yu, ZU Jia-chen, WANG Tian-feng
Computer Science. 2022, 49 (4): 312-320.  doi:10.11896/jsjkx.210800027
Abstract PDF(2251KB) ( 471 )   
References | Related Articles | Metrics
With the growth of video traffic in wireless networks, content delivery networks and mobile edge computing are considered effective solutions, and the caching strategy is an important research issue. Caching strategies are designed with different objectives for different application scenarios and requirements; this study focuses on fairness among optimization objectives. For video service providers, quality of experience (QoE) reflects service performance, while energy efficiency reflects cost-effectiveness and green energy saving; when designing a caching strategy it is difficult to say which objective should take priority, so the two need to be optimized fairly. First, these two optimization objectives of the caching strategy problem (QoE and energy efficiency) are modeled mathematically, and a principle of fairness is proposed. Second, the two objectives are treated as players and substituted into the Nash bargaining game model. Third, a multi-round bargaining algorithm is proposed to ensure fairness, and its rationality and effectiveness are rigorously proved. Finally, simulation experiments demonstrate that the proposed algorithm optimizes the QoE and energy efficiency of caching strategies while keeping a balance between them.
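The Nash bargaining view of the problem can be illustrated by picking, among candidate caching configurations, the one that maximizes the product of the two objectives' gains over their disagreement points; the candidate values below are invented for the example, and the multi-round bargaining algorithm itself is not reproduced.

```python
# Select the caching configuration maximizing the Nash product
# (QoE - d_qoe) * (EE - d_ee) over the disagreement point (d_qoe, d_ee).
def nash_bargaining(candidates, d_qoe, d_ee):
    best, best_val = None, float("-inf")
    for name, qoe, ee in candidates:
        if qoe <= d_qoe or ee <= d_ee:
            continue                        # no gain for at least one player
        val = (qoe - d_qoe) * (ee - d_ee)   # Nash product
        if val > best_val:
            best, best_val = name, val
    return best

candidates = [("cache-A", 0.82, 0.55), ("cache-B", 0.78, 0.70), ("cache-C", 0.90, 0.40)]
print(nash_bargaining(candidates, d_qoe=0.6, d_ee=0.3))   # -> cache-B
```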
Traffic Prediction Method for 5G Network Based on Generative Adversarial Network
GAO Zhi-yu, WANG Tian-jing, WANG Yue, SHEN Hang, BAI Guang-wei
Computer Science. 2022, 49 (4): 321-328.  doi:10.11896/jsjkx.210300240
Abstract PDF(5256KB) ( 646 )   
References | Related Articles | Metrics
With the explosive growth of wireless access demand, 5G network traffic is increasing exponentially and becoming more diverse and heterogeneous, which makes network traffic prediction challenging. Considering the multi-layer architecture of macro, micro and pico base stations in 5G networks, a traffic prediction method based on a generative adversarial network (GAN) is proposed. First, the generation network captures the temporal-spatial features of network traffic and the type features of base stations, the spliced features are fed into a composite residual module to generate the predicted traffic, and this output is passed to the discriminant network. Second, the discriminant network decides whether its input is real or predicted traffic. Finally, through the adversarial game between the generation network and the discriminant network, the generation network learns to produce high-precision traffic predictions. Experimental results show that, compared with 2DCNN, 3DCNN and ConvLSTM, the two-dimensional root-mean-square prediction error of the GAN is reduced by 58.64%, 38.74% and 34.88%, respectively, giving it the best traffic prediction performance.
Information Security
Research Advance on BFT Consensus Algorithms
FENG Liao-liao, DING Yan, LIU Kun-lin, MA Ke-lin, CHANG Jun-sheng
Computer Science. 2022, 49 (4): 329-339.  doi:10.11896/jsjkx.210700011
Abstract PDF(2576KB) ( 1524 )   
References | Related Articles | Metrics
Since the advent of Bitcoin in 2008, blockchain has gradually become a research hotspot in academia, and as its key technology, the consensus algorithm has attracted increasing attention. Because the runtime environment of a blockchain system is complex and variable, Byzantine fault nodes are easily introduced, so Byzantine fault tolerant (BFT) consensus algorithms are a difficulty that must be overcome. This paper systematically summarizes research progress on blockchain BFT consensus algorithms in order to provide a reference for future innovation in consensus algorithms. Firstly, it sorts out the four major factions of existing blockchain BFT consensus algorithms and introduces the BFT consensus algorithm. Secondly, it reviews several important values in the classic PBFT algorithm and its correctness proof. Thirdly, it puts forward four optimization goals for BFT consensus algorithms: decentralization, efficiency, fault tolerance rate and security. Then, along the dimensions of consensus rounds, number of consensus nodes, underlying hardware, communication mode or encryption algorithm, and probability of fault nodes, five optimization ideas for BFT consensus algorithms are summarized. Finally, 10 classic BFT consensus algorithms are analyzed in detail and their performance is compared.
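One of the important values of classic PBFT mentioned above is its fault-tolerance bound: n replicas tolerate f Byzantine faults only if n >= 3f + 1, with per-phase quorums of 2f + 1 matching messages; the snippet below simply checks this arithmetic.

```python
# PBFT fault-tolerance bound: n >= 3f + 1, quorum size 2f + 1.
def max_faults(n):
    return (n - 1) // 3

for n in (4, 7, 10):
    f = max_faults(n)
    print(f"n={n}: tolerates f={f} Byzantine replicas, quorum={2 * f + 1}")
```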
Overview of Research on Security Encryption Authentication Technology of IoV in Big Data Era
SONG Tao, LI Xiu-hua, LI Hui, WEN Jun-hao, XIONG Qing-yu, CHEN Jie
Computer Science. 2022, 49 (4): 340-353.  doi:10.11896/jsjkx.210400112
Abstract PDF(7336KB) ( 760 )   
References | Related Articles | Metrics
With the increasing risk of Internet of vehicles (IoV) attacks, the network security threats to vehicle-mounted systems, vehicle-mounted terminals, vehicle-mounted information and service applications, and the operation and service platforms of intelligent connected vehicles (ICVs) have become prominent. Information tampering and virus intrusion, common in generalized network attacks, have been shown to apply to attacks on ICVs. The weak password authentication and weak encryption typical of traditional IoV can hardly satisfy the current requirements of multi-network, multi-node security protection in the IoV field. In addition, the lack of a domestic security encryption authentication mechanism and the imperfection of the encryption authentication system make IoV security requirements even harder to meet. To address IoV security encryption authentication, this paper studies IoV security encryption authentication technology in the big data era. It first introduces the current situation and relevant concepts of IoV security in this era, then contrasts and analyzes current IoV security architectures, puts forward an IoV security encryption authentication system for the big data era, and systematically elaborates the IoV security technology architecture and the encryption authentication scheme of the IoV communication module. The proposed architecture is then compared with IoV information security standards, and the key technologies and innovations of IoV security encryption authentication are elaborated. Finally, the paper summarizes the problems and challenges faced by current IoV security encryption authentication technology.
MLSTM:A Password Guessing Method Based on Multiple Sequence Length LSTM
CHANG Geng, ZHAO Lan, CHEN Wen
Computer Science. 2022, 49 (4): 354-361.  doi:10.11896/jsjkx.210300008
Abstract PDF(3803KB) ( 612 )   
References | Related Articles | Metrics
Passwords are one of the most important methods of user authentication, and using effective password guessing methods to improve the hit rate of password attacks is the main approach to studying password security. In recent years, researchers have proposed guessing passwords with long short-term memory (LSTM) neural networks and have shown them to be superior to traditional password guessing models such as the Markov model and the PCFG (probabilistic context-free grammar) model. However, the traditional LSTM model makes it hard to choose the sequence length and cannot learn the relations between sequences of different lengths. This paper collects large-scale password sets, analyzes users' password construction behaviors and password-setting preferences, and finds that personal information has an important influence on password settings. A multiple-sequence-length LSTM password guessing model, MLSTM (Multi-LSTM), is then proposed, and personal information is applied to trawling guessing. Experimental results demonstrate that, compared with PCFG, the cracking rate is increased by up to 68.2%, while compared with traditional LSTM and 3rd-order Markov models, the hit rates are increased by 7.6%~42.1% and 23.6%~65.2%, respectively.
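The LSTM password models that MLSTM extends are, at their core, character-level next-symbol predictors; the sketch below shows that core idea with illustrative vocabulary and layer sizes and is not the MLSTM model itself.

```python
# Character-level LSTM for password guessing: predict the next character of a
# password given the prefix; candidate strings ordered by probability form the
# guess list.
import torch
import torch.nn as nn

class CharLSTM(nn.Module):
    def __init__(self, vocab_size, dim=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, prefix_ids):                 # (batch, seq_len)
        h, _ = self.lstm(self.emb(prefix_ids))
        return self.out(h)                         # next-character logits per step

# training uses cross-entropy between logits[:, :-1, :] and prefix_ids[:, 1:]
```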
Study on Differential Privacy Protection for Medical Set-Valued Data
WANG Mei-shan, YAO Lan, GAO Fu-xiang, XU Jun-can
Computer Science. 2022, 49 (4): 362-368.  doi:10.11896/jsjkx.210300032
Abstract PDF(2866KB) ( 593 )   
References | Related Articles | Metrics
Electronic medical data are surging with the continuing development of information technology and the digitalization of medical care, providing the foundation for data analysis, data mining and intelligent diagnosis. Medical data are massive and involve a great deal of patient privacy, so protecting patient privacy while using medical data is challenging. The predominant principle of existing solutions is anonymity, which cannot guarantee confidentiality or availability when attackers possess strong background knowledge. This paper proposes an optimized classification tree and an improved Diffpart algorithm: associations in the data are used to sift set-valued data for perturbation under differential privacy (DP), which preserves utility and supports statistical queries. Tests on 240 000 real medical records show that the proposed algorithm preserves the DP distribution and outperforms Diffpart in both privacy and utility.
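Diffpart-style publication of set-valued data ultimately rests on the Laplace mechanism with a privacy budget split across a partition/classification tree; the sketch below shows that basic step, with the budget-splitting scheme chosen purely for illustration.

```python
# Laplace mechanism for a count query of sensitivity 1, with a total privacy
# budget split evenly across the levels of a (hypothetical) classification tree.
import numpy as np

def laplace_count(true_count, epsilon, sensitivity=1.0):
    return true_count + np.random.laplace(scale=sensitivity / epsilon)

total_epsilon = 1.0
levels = 4
eps_per_level = total_epsilon / levels   # sequential composition over the tree
print(laplace_count(1250, eps_per_level))
```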
Detection Method of ROP Attack for Cisco IOS
LI Peng-yu, LIU Sheng-li, YIN Xiao-kang, LIU Hao-hui
Computer Science. 2022, 49 (4): 369-375.  doi:10.11896/jsjkx.210300153
Abstract PDF(2315KB) ( 765 )   
References | Related Articles | Metrics
Cisco IOS (Internetwork Operating System) is the dedicated operating system of Cisco routers. Owing to hardware limitations, its design emphasizes performance over system security, so it cannot effectively detect return-oriented programming (ROP) attacks. To address the shortcomings of traditional ROP protection techniques on Cisco IOS, a method based on hash verification of return-address memory is proposed that effectively detects ROP attacks against Cisco IOS and captures the attack code. After analyzing the advantages and disadvantages of existing protection mechanisms against ROP attacks, and building on the idea of compact shadow memory protection, the traditional shadow memory storage mode is transformed into a hash-based memory lookup mode, with the recorded return-address memory pointer added as the index for the hash lookup; this improves the efficiency of shadow memory search and resists tampering of the shadow memory caused by memory leaks. Based on the Dynamips virtualization platform, the CROPDS system is designed and implemented, and the method is verified to be effective. Compared with previous methods, it improves generality and performance and can capture the shellcode executed by the attack.
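The hash-verified return-address check can be pictured as follows; the sketch is written in Python purely to convey the idea (the real mechanism instruments Cisco IOS memory): on each call, a hash of the return address is stored under an index derived from the stack slot that holds it, and on the matching return the slot's current contents must hash to the same entry.

```python
# Conceptual model of hash-indexed shadow memory for return-address checking.
import hashlib

shadow = {}

def _h(value):
    return hashlib.sha256(value.to_bytes(8, "little")).hexdigest()

def on_call(ret_addr_ptr, ret_addr):
    shadow[_h(ret_addr_ptr)] = _h(ret_addr)       # index by pointer, store address hash

def on_return(ret_addr_ptr, ret_addr_in_memory):
    if shadow.pop(_h(ret_addr_ptr), None) != _h(ret_addr_in_memory):
        raise RuntimeError("return address tampered: possible ROP gadget chain")

on_call(0x7fff1000, 0x080485a6)
on_return(0x7fff1000, 0x080485a6)        # ok: address unchanged
on_call(0x7fff1000, 0x080485a6)
on_return(0x7fff1000, 0x08049d20)        # raises: address was overwritten
```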
Chaotic Sequence Cipher Algorithm Based on Discrete Anti-control
ZHAO Geng, LI Wen-jian, MA Ying-jie
Computer Science. 2022, 49 (4): 376-384.  doi:10.11896/jsjkx.210300116
Abstract PDF(2342KB) ( 452 )   
References | Related Articles | Metrics
To address the degeneration of discrete chaotic dynamical systems in the digital domain, an algorithm is proposed that configures all the Lyapunov exponents of the system to be positive. The algorithm is based on the principle of chaos anti-control. A feedback matrix is first introduced, its parameters are specified carefully, and it is proved theoretically that the algorithm configures the Lyapunov exponents to be entirely positive. The boundedness of the system orbit and the finiteness of the Lyapunov exponents are then proved, and numerical simulations and performance comparisons on several examples verify that the algorithm produces non-degenerate discrete chaotic systems, with certain advantages in numerical accuracy and running time. The configured chaotic system is then used to generate sequences, which are quantized by extracting effective digit combinations from each state; dynamic transformations of the sequence further enhance the randomness and complexity of the output. The transformed output is converted into a binary sequence and subjected to a number of randomness and statistical tests, and its performance is compared with that of general chaotic sequences. The test results show that the sequence has good random characteristics and can be used in a chaotic sequence cipher system.
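The quantization step can be illustrated independently of the configured system: iterate a discrete chaotic map, keep a few effective digits of each state, and emit parity bits as the key-stream. In the sketch below the logistic map stands in for the paper's anti-controlled system, purely for demonstration.

```python
# Key-stream extraction from a (stand-in) discrete chaotic orbit.
def keystream(x0=0.3456, n=64):
    x, bits = x0, []
    for _ in range(n):
        x = 3.99 * x * (1.0 - x)          # stand-in chaotic iteration (logistic map)
        digits = int(x * 1e6) % 1000      # three "effective" digits of the state
        bits.append(digits & 1)           # parity of the digit group -> one bit
    return bits

print("".join(map(str, keystream())))
```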