Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
Current Issue
Volume 48 Issue 7, 15 July 2021
Artificial Intelligence Security
Artificial Intelligence Security Framework
JING Hui-yun, WEI Wei, ZHOU Chuan, HE Xin
Computer Science. 2021, 48 (7): 1-8.  doi:10.11896/jsjkx.210300306
With the advent of artificial intelligence, all walks of life have begun to deploy artificial intelligence systems according to their own business needs, which accelerates the large-scale construction and widespread application of artificial intelligence worldwide. However, security risks in artificial intelligence infrastructure, design and development, and integration applications arise as well. To avoid these risks, countries worldwide have formulated AI ethical norms and improved laws, regulations, and industry management to carry out artificial intelligence security governance. Within such governance, the artificial intelligence security technology system has important guiding significance: it is an essential part of artificial intelligence security governance and critical support for implementing AI ethical norms and meeting legal and regulatory requirements. However, there is a general lack of an artificial intelligence security framework worldwide at the current stage, while security risks are prominent and fragmented. It is therefore urgent to summarize the security risks existing in each stage of the artificial intelligence life cycle. To solve the above problems, this paper proposes an AI security framework covering AI security goals, graded AI security capabilities, and AI security technology and management systems. It is expected to provide a valuable reference for the community to improve the safety and protection capabilities of artificial intelligence.
Survey on Artificial Intelligence Model Watermarking
XIE Chen-qi, ZHANG Bao-wen, YI Ping
Computer Science. 2021, 48 (7): 9-16.  doi:10.11896/jsjkx.201200204
In recent years, with the rapid development of artificial intelligence, AI models have been applied to speech, image, and other fields with remarkable results. However, these trained models are very easy to copy and redistribute. Therefore, to protect the intellectual property of models, a series of copyright-protection algorithms and technologies have emerged, one of which is model watermarking. Once a model is stolen, its copyright can be proved through watermark verification, safeguarding its intellectual property and protecting the model. This technology has become a research hotspot in recent years, but a unified framework has not yet formed. To facilitate understanding, this paper summarizes the current research on model watermarking, discusses the mainstream model watermarking algorithms, analyzes the research progress in each direction of model watermarking, reproduces and compares several typical algorithms, and finally puts forward some suggestions for future research directions.
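The verification step described above can be illustrated with a sketch of one common watermarking family from the literature (a trigger-set watermark): the owner trains the model to memorize secret labels on a secret trigger set, and later claims ownership if a suspect model reproduces them. All names and the threshold below are hypothetical, not taken from any specific surveyed paper.

```python
# Illustrative trigger-set watermark verification (hypothetical sketch).
# predict: the suspect model's prediction function.
# trigger_inputs / secret_labels: the owner's secret watermark key.

def verify_watermark(predict, trigger_inputs, secret_labels, threshold=0.9):
    """Return True if the suspect model reproduces enough secret trigger labels."""
    hits = sum(1 for x, y in zip(trigger_inputs, secret_labels) if predict(x) == y)
    return hits / len(trigger_inputs) >= threshold

# A model that memorized the trigger mapping passes; an unrelated one does not.
stolen = lambda x: x % 3            # stands in for the watermarked model
triggers = [1, 2, 4, 5, 7]
secrets = [x % 3 for x in triggers]
print(verify_watermark(stolen, triggers, secrets))  # True
```

The threshold trades off false ownership claims against robustness to fine-tuning of the stolen model.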
Security Evaluation Method for Risk of Adversarial Attack on Face Detection
JING Hui-yun, ZHOU Chuan, HE Xin
Computer Science. 2021, 48 (7): 17-24.  doi:10.11896/jsjkx.210300305
Face detection is a classic problem in computer vision. Driven by artificial intelligence and big data, it has displayed new vitality, showing important application value and great prospects in face payment, identity authentication, beauty cameras, intelligent security, and other fields. However, with the overall acceleration of face detection deployment and application, its security risks and hidden dangers have become increasingly prominent. Therefore, this paper analyzes and summarizes the security risks that current face detection models face in each stage of their life cycle. Among them, adversarial attack has received extensive attention because it poses a serious threat to the availability and reliability of face detection and may cause dysfunction of the face detection module. Current adversarial attacks on face detection mainly focus on white-box attacks. However, white-box attacks require full knowledge of the internal structure and all parameters of a specific face detection model, while, for the protection of business secrets and corporate interests, the structure and parameters of commercially deployed face detection models in the real physical world are usually inaccessible. This makes it almost impossible to attack commercial face detection models in the real world with white-box methods. To solve this problem, this paper proposes a black-box physical adversarial attack method for face detection. Through the idea of ensemble learning, the common attention heat map of many face detection models is extracted, and the obtained common heat map is then attacked. Experiments show that our method successfully evades black-box face detection models deployed on mobile terminals, including the face detection modules of built-in camera software, face payment software, and beauty camera software. This demonstrates that our method is helpful for evaluating the security of face detection models in the real world.
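The ensemble step described in the abstract can be sketched minimally: average the attention heat maps of several detectors to obtain a common heat map, whose high-response region is then perturbed. The 2x2 maps below are toy stand-ins, not the paper's data.

```python
# Illustrative sketch of extracting a common attention heat map by
# pixel-wise averaging over an ensemble of face detection models.

def common_heatmap(heatmaps):
    """Pixel-wise average of per-model attention heat maps (same shape)."""
    n = len(heatmaps)
    rows, cols = len(heatmaps[0]), len(heatmaps[0][0])
    return [[sum(h[r][c] for h in heatmaps) / n for c in range(cols)]
            for r in range(rows)]

maps = [[[0.9, 0.1], [0.0, 0.2]],   # model A's attention
        [[0.7, 0.3], [0.2, 0.0]]]   # model B's attention
print(common_heatmap(maps))         # high shared response at the top-left pixel
```

Regions where many models attend strongly are, intuitively, the most transferable targets for a black-box perturbation.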
Feature Gradient-based Adversarial Attack on Modulation Recognition-oriented Deep Neural Networks
WANG Chao, WEI Xiang-lin, TIAN Qing, JIAO Xiang, WEI Nan, DUAN Qiang
Computer Science. 2021, 48 (7): 25-32.  doi:10.11896/jsjkx.210300299
Deep neural network (DNN)-based automatic modulation recognition (AMR) outperforms traditional AMR methods in automatic feature extraction and recognition accuracy with less manual intervention. However, high recognition accuracy is the first priority of practitioners when designing AMR-oriented DNN (ADNN) models, while security is usually neglected. Against this backdrop, from the perspective of artificial intelligence security, this paper presents a novel feature gradient-based adversarial attack method on ADNN models. Compared with the traditional label gradient-based attack method, the proposed method better attacks the temporal and spatial features extracted by ADNN models. Experimental results on an open dataset show that the proposed method outperforms the label gradient-based method in attack success ratio and transferability, in both white-box and black-box settings.
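The building block that gradient-based attacks of this kind adapt is the gradient-sign perturbation step: feature-gradient variants take the gradient with respect to an internal feature map rather than the label loss. The toy linear "feature" below is illustrative, not the paper's model.

```python
# Hedged sketch of an FGSM-style gradient-sign perturbation step.

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm_step(x, grad, eps):
    """Perturb each input dimension by eps in the gradient-sign direction."""
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Toy "feature" f(x) = w . x, so the feature gradient d f / d x is just w.
w = [0.5, -2.0, 1.0]
x = [1.0, 1.0, 1.0]
print(fgsm_step(x, w, eps=0.1))  # [1.1, 0.9, 1.1]
```

Attacking the feature rather than the label gives a perturbation direction that does not depend on the final classification head, which is one intuition for better transferability.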
Differential Privacy Protection Machine Learning Method Based on Features Mapping
CHEN Tian-rong, LING Jie
Computer Science. 2021, 48 (7): 33-39.  doi:10.11896/jsjkx.201200224
The differential privacy algorithm in image classification improves the privacy protection capability of a machine learning model by adding noise, which at the same time easily reduces the model's classification accuracy. To solve this problem, a differential privacy protection machine learning method based on feature mapping is proposed. This method combines a pre-trained neural network with shadow model training to map the feature vectors of the original data samples to a high-dimensional vector space in the form of differential vectors, shortening the distance between samples in that space, so as to reduce the leakage of private information caused by model updates and improve the privacy protection and classification capabilities of the model. Experimental results on the MNIST and CIFAR-10 datasets show that, for the ε-differential privacy model with ε equal to 0.01 and 0.11, the classification accuracy reaches 99% and 96% respectively, indicating that, compared with DP-SGD and many other commonly used differential privacy algorithms, the model trained by this method maintains stronger classification ability at a lower privacy budget. Moreover, the success rate of inference attacks against this model on the two datasets is reduced to 10%, and compared with a traditional CNN image classification model, its defense capability against inference attacks is greatly improved.
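For context, the DP-SGD baseline that this method is compared against perturbs gradients per step: clip each per-example gradient to a fixed L2 norm, then add Gaussian noise scaled by a noise multiplier. The sketch below shows that standard mechanism; parameter names are illustrative.

```python
# Hedged sketch of the DP-SGD-style gradient perturbation step.
import math
import random

def l2_clip(grad, max_norm):
    """Scale the gradient down so its L2 norm is at most max_norm."""
    norm = math.sqrt(sum(g * g for g in grad))
    if norm == 0:
        return list(grad)
    scale = min(1.0, max_norm / norm)
    return [g * scale for g in grad]

def noisy_grad(grad, max_norm, noise_multiplier, rng):
    """Clip, then add Gaussian noise with sigma = noise_multiplier * max_norm."""
    clipped = l2_clip(grad, max_norm)
    sigma = noise_multiplier * max_norm
    return [g + rng.gauss(0.0, sigma) for g in clipped]

print(l2_clip([3.0, 4.0], 1.0))  # norm 5 gradient scaled onto the unit L2 ball
```

Smaller ε (stricter privacy) forces a larger noise multiplier, which is exactly the accuracy loss the feature-mapping method aims to offset.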
Intelligent Penetration Testing Path Discovery Based on Deep Reinforcement Learning
ZHOU Shi-cheng, LIU Jing-ju, ZHONG Xiao-feng, LU Can-ju
Computer Science. 2021, 48 (7): 40-46.  doi:10.11896/jsjkx.210400057
Penetration testing is a general method for network security testing by simulating hacker attacks. Traditional penetration testing mainly relies on manual operations, with high time and labor costs. Intelligent penetration testing is the future direction of development, aiming at more efficient and lower-cost network security protection. Penetration testing path discovery is a key issue in intelligent penetration testing research; its purpose is to discover vulnerabilities in the network and attackers' possible penetration paths in time, and to achieve targeted defense. In this paper, deep reinforcement learning and penetration testing are combined: the agent is trained in simulated network scenarios, the penetration testing process is modeled as a Markov decision process, and an improved deep reinforcement learning algorithm, Noisy-Double-Dueling DQNper, is proposed. The algorithm integrates a prioritized experience replay mechanism, double DQN, dueling DQN, and a noisy network mechanism. Comparative experiments on network scenarios of different scales show that the algorithm converges faster than the traditional DQN (Deep Q Network) algorithm and its improved versions, and can be applied to larger-scale network scenarios.
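Two of the ingredients named above can be sketched without the authors' code. In double DQN, the online network chooses the next action while the target network evaluates it, reducing overestimation; in dueling DQN, Q-values are assembled from a state value V and mean-centered advantages A. Both sketches below use toy numbers.

```python
# Illustrative sketches of the double DQN target and the dueling aggregation.

def double_dqn_target(reward, done, q_online_next, q_target_next, gamma=0.99):
    """Bootstrapped target: online net picks the action, target net scores it."""
    if done:
        return reward
    a_star = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return reward + gamma * q_target_next[a_star]

def dueling_q(state_value, advantages):
    """Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    mean_a = sum(advantages) / len(advantages)
    return [state_value + a - mean_a for a in advantages]

print(double_dqn_target(1.0, False, [0.2, 0.9], [0.5, 0.3], gamma=0.9))
print(dueling_q(2.0, [1.0, -1.0]))  # [3.0, 1.0]
```

The noisy-network and prioritized-replay components are exploration and sampling mechanisms layered on top of this same target computation.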
DRL-IDS:Deep Reinforcement Learning Based Intrusion Detection System for Industrial Internet of Things
LI Bei-bei, SONG Jia-rui, DU Qing-yun, HE Jun-jiang
Computer Science. 2021, 48 (7): 47-54.  doi:10.11896/jsjkx.210400021
In recent years, the Industrial Internet of Things (IIoT) has developed rapidly. While realizing industrial digitization, automation, and intelligence, the IIoT has introduced tremendous cyber threats. Further, the complex, heterogeneous, and distributed IIoT environment has created a brand-new attack surface for cyber intruders. Traditional intrusion detection techniques no longer fulfill the needs of intrusion detection in the current IIoT environment. This paper proposes a deep reinforcement learning algorithm (i.e., Proximal Policy Optimization 2.0, PPO2) based intrusion detection system for the IIoT. The proposed system combines the perceptual ability of deep learning with the decision-making ability of reinforcement learning, and can effectively detect multiple types of cyber attacks on the IIoT. First, a LightGBM-based feature selection algorithm is used to filter the most effective feature sets in IIoT data. Then, the hidden layers of a multilayer perceptron are used as the shared network structure of the value network and policy network in the PPO2 algorithm. Finally, the PPO2 algorithm is used to construct the intrusion detection model, and ReLU (Rectified Linear Unit) is employed for the classification output. Extensive experiments on a real IIoT dataset released by the Oak Ridge National Laboratory, sponsored by the U.S. Department of Energy, show that the proposed system achieves 99.09% accuracy in detecting multiple types of network attacks on the IIoT, and it outperforms state-of-the-art deep learning based (e.g., LSTM, CNN, RNN) and deep reinforcement learning based (e.g., DDQN and DQN) intrusion detection systems in terms of accuracy, precision, recall, and F1 score.
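At the core of PPO2 is the clipped surrogate objective, which keeps each policy update small. A minimal sketch of the per-sample clipped term follows (illustrative, not the paper's implementation); ratio is pi_new(a|s) / pi_old(a|s).

```python
# Hedged sketch of PPO's clipped surrogate objective (per sample).

def ppo_clipped_term(ratio, advantage, clip_eps=0.2):
    """min(ratio * A, clip(ratio, 1 - eps, 1 + eps) * A): pessimistic of the two."""
    clipped_ratio = max(min(ratio, 1.0 + clip_eps), 1.0 - clip_eps)
    return min(ratio * advantage, clipped_ratio * advantage)

# A step that moves too far (ratio 1.5) is capped at ratio 1.2.
print(ppo_clipped_term(1.5, 1.0))   # 1.2
# With a negative advantage, the clip again picks the more pessimistic value.
print(ppo_clipped_term(0.5, -1.0))  # -0.8
```

Averaging this term over a batch and maximizing it is what trains the shared policy/value network described in the abstract.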
Adversarial Attacks Threatened Network Traffic Classification Based on CNN
YANG Yang, CHEN Wei, ZHANG Dan-yi, WANG Dan-ni, SONG Shuang
Computer Science. 2021, 48 (7): 55-61.  doi:10.11896/jsjkx.210100095
Deep learning algorithms are widely used in network traffic classification and achieve good classification results. Convolutional neural networks not only greatly improve the accuracy of network traffic classification but also simplify the classification process. However, neural networks face security threats such as adversarial attacks, and the impact of these threats on neural network based traffic classification needs further research and verification. This paper proposes an adversarial attack method against network traffic classification based on convolutional neural networks. By adding perturbations that are difficult for human eyes to recognize to the deep learning input images converted from network traffic, it makes the convolutional neural network misclassify the traffic. At the same time, against this attack, the paper also proposes a defense method based on mixed adversarial training, which combines the adversarial traffic samples generated by the attack with the original traffic samples to enhance the robustness of the classification model. The proposed methods are evaluated on public datasets. Experimental results show that the proposed adversarial attack causes a sharp drop in the accuracy of CNN-based network traffic classification, and the proposed mixed adversarial training effectively resists the attack, thus improving the robustness of the network traffic classification model.
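Since the traffic here is rendered as byte-valued images, the attack has to keep perturbations both small and valid. A minimal sketch of that constraint (bounded perturbation, pixels clipped to the byte range) follows; the bound eps and the toy values are illustrative.

```python
# Hedged sketch: add a bounded perturbation to a traffic-image byte array,
# keeping every pixel a valid byte in [0, 255].

def perturb_and_clip(pixels, perturbation, eps=8):
    """Each delta is clipped to [-eps, eps]; each result to [0, 255]."""
    out = []
    for p, d in zip(pixels, perturbation):
        d = max(-eps, min(eps, d))
        out.append(max(0, min(255, p + d)))
    return out

print(perturb_and_clip([0, 255, 128], [-10, 10, 3]))  # [0, 255, 131]
```

Mixed adversarial training then simply feeds both the original and such perturbed images to the classifier during training.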
Detection of Abnormal Flow of Imbalanced Samples Based on Variational Autoencoder
ZHANG Ren-jie, CHEN Wei, HANG Meng-xin, WU Li-fa
Computer Science. 2021, 48 (7): 62-69.  doi:10.11896/jsjkx.200600022
With the rapid development of machine learning technology, more and more machine learning algorithms are used to detect and analyze attack traffic. However, attack traffic often accounts for a very small portion of network traffic, so when training machine learning models there is often an imbalance between the positive and negative samples of the training set, which affects the training effect. Aiming at this problem, an imbalanced sample generation method based on a variational autoencoder (VAE) is proposed. The idea is that, when expanding imbalanced samples, not all of them are expanded; instead, the imbalanced samples are analyzed, and only the small number of boundary samples most likely to confuse the machine learning model are expanded. First, the KNN algorithm is used to screen the minority samples closest to the majority samples; second, the DBSCAN algorithm clusters the partial samples selected by KNN into one or more sub-clusters; then, a VAE network is designed to learn and expand the minority samples in these sub-clusters, and the expanded samples are added to the original samples to build a new training set; finally, the new training set is used to train a decision tree classifier to detect abnormal traffic. Recall and F1 score are selected as evaluation indicators, comparing the original samples, the SMOTE-generated samples, and our samples. Experimental results show that the decision tree classifier trained with the proposed method improves recall and F1 score on the four types of anomalies, with the F1 score up to 20.9% higher than with the original samples and the SMOTE method.
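The first stage of the pipeline above can be sketched as follows: rank minority samples by their mean distance to the k nearest majority samples, so that boundary samples (those closest to the majority class) can be selected for expansion. Data and parameter values are toy examples, not the paper's setup.

```python
# Illustrative KNN-based screening of boundary minority samples.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def boundary_minority(minority, majority, k=1):
    """Minority samples sorted from nearest to farthest from the majority class."""
    def score(m):
        nearest = sorted(euclidean(m, big) for big in majority)[:k]
        return sum(nearest) / len(nearest)
    return sorted(minority, key=score)

minority = [(0.0, 0.0), (10.0, 10.0)]
majority = [(1.0, 0.0), (0.0, 1.0)]
print(boundary_minority(minority, majority))  # (0.0, 0.0) ranks first
```

Only the top-ranked (boundary) samples would then be clustered with DBSCAN and fed to the VAE for expansion.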
SQL Injection Attack Detection Method Based on Information Carrying
CHENG Xi, CAO Xiao-mei
Computer Science. 2021, 48 (7): 70-76.  doi:10.11896/jsjkx.200600010
At present, the accuracy of SQL injection attack detection based on traditional machine learning still needs to be improved. The main reason is that selecting too many features when extracting feature vectors causes model overfitting and reduces the efficiency of the algorithm, whereas selecting too few features generates a large number of false positives and missed detections. To solve this problem, this paper proposes SQLIA-IC, a SQL injection attack detection method based on information carrying. SQLIA-IC adds a marker and a content matching module on the basis of machine learning detection: the marker detects sensitive information in a sample, and the content matching module matches the feature items of the sample to achieve a secondary judgment. To improve the efficiency of SQL injection attack detection, an information value is used to summarize the detection results of the machine learning model and the markers, and the content matching module performs dynamic matching according to the information value carried by the sample. Simulation results show that, compared with traditional machine learning methods, the proposed method improves accuracy by 2.62% on average, precision by 4.35% on average, and recall by 0.96% on average, while the time cost increases only by about 5 ms, revealing that the method detects SQL injection attacks efficiently and effectively.
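A hedged sketch of what a marker stage might look for is given below: sensitive SQL fragments inside a request parameter. The pattern list is illustrative only, not the paper's actual marker set.

```python
# Illustrative marker stage: flag sensitive SQL fragments in a request sample.
import re

SENSITIVE_PATTERNS = [
    r"(?i)\bunion\b[\s\S]+\bselect\b",  # UNION ... SELECT probes
    r"(?i)\bor\s+1\s*=\s*1\b",          # tautology injections
    r"--",                              # trailing comment to cut off the query
]

def marker_hits(sample):
    """Return the sensitive patterns found in a request sample."""
    return [p for p in SENSITIVE_PATTERNS if re.search(p, sample)]

print(len(marker_hits("id=1' OR 1=1 --")))  # 2
print(len(marker_hits("id=42&page=3")))     # 0
```

In the paper's design, hits like these would contribute to the information value that drives the dynamic content matching.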
Deepfake Videos Detection Method Based on i_ResNet34 Model and Data Augmentation
BAO Yu-xuan, LU Tian-liang, DU Yan-hui, SHI Da
Computer Science. 2021, 48 (7): 77-85.  doi:10.11896/jsjkx.210300258
Existing Deepfake video detection methods are weak in extracting facial features. Therefore, this paper proposes an improved ResNet (i_ResNet34) model and three data augmentation methods based on information dropping. First, the ResNet is optimized by using group convolution instead of ordinary convolution to extract richer facial features without increasing model parameters. Then, the shortcut branch of the model's dashed residual structure is improved with a max pooling layer for down-sampling, reducing the loss of facial feature information in video frames. Next, a channel attention layer is introduced after the convolution layer to increase the weights of channels that extract key features, improving the channel correlation of the feature map. Finally, the i_ResNet34 model is trained on the original dataset and on the dataset expanded by the three information-dropping data augmentation methods, achieving detection accuracy of 99.33% and 98.67% on the FaceSwap and Deepfakes datasets of FaceForensics++ respectively, superior to existing mainstream algorithms, thus verifying the effectiveness of the proposed method.
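The group-convolution claim above can be checked with the standard parameter-count formula (sketch, no bias term): with g groups, each filter only sees c_in/g input channels, dividing the parameter count by g, which is why capacity can be re-invested without growing the model.

```python
# Parameter count of a 2-D convolution layer (standard formula, no bias).

def conv2d_params(c_in, c_out, kernel, groups=1):
    assert c_in % groups == 0 and c_out % groups == 0
    return (c_in // groups) * c_out * kernel * kernel

dense = conv2d_params(64, 64, 3)             # ordinary 3x3 convolution
grouped = conv2d_params(64, 64, 3, groups=4) # same shape, 4 groups
print(dense, grouped)  # 36864 9216
```

The channel counts here are illustrative; the point is the factor-of-g saving, not the paper's exact layer sizes.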
Deepfake Video Detection Based on 3D Convolutional Neural Networks
XING Hao, LI Ming
Computer Science. 2021, 48 (7): 86-92.  doi:10.11896/jsjkx.210200127
In recent years, "Deepfake" has attracted widespread attention, and it is difficult for people to distinguish Deepfake videos. However, these forged videos may bring huge potential threats to society, for example by being used to make fake news. Therefore, it is necessary to find a method to identify these synthesized videos. To solve this problem, a Deepfake video detection model based on 3D convolutional neural networks (3D CNNs) is proposed. The model exploits the inconsistency of temporal and spatial features in Deepfake videos, which 3D CNNs can effectively capture. Experimental results show that the model achieves a high accuracy rate and strong robustness on the Deepfake Detection Challenge dataset and the Celeb-DF dataset: its detection accuracy reaches 96.25%, and its AUC value reaches 0.92. The model also alleviates the problem of poor generalization. Compared with existing Deepfake detection models, the proposed model is superior in terms of detection accuracy and AUC value, which verifies its effectiveness.
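A minimal sketch of why 3D convolutions suit this task: the kernel slides over time as well as space, so the output size is computed per (T, H, W) axis with the standard convolution formula. The clip and kernel sizes below are illustrative of the mechanism, not the paper's exact architecture.

```python
# Output shape of a 3-D convolution (standard formula, cubic kernel).

def conv3d_output_shape(t, h, w, kernel, stride=1, padding=0):
    out = lambda s: (s + 2 * padding - kernel) // stride + 1
    return (out(t), out(h), out(w))

# A 16-frame 112x112 clip through a 3x3x3 kernel: padding 1 keeps the shape,
# no padding shrinks every axis, including the temporal one, by 2.
print(conv3d_output_shape(16, 112, 112, 3, padding=1))  # (16, 112, 112)
print(conv3d_output_shape(16, 112, 112, 3))             # (14, 110, 110)
```

Because the temporal axis is convolved too, frame-to-frame inconsistencies become features the network can learn directly.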
Database & Big Data & Data Science
Embedding Consensus Autoencoder for Cross-modal Semantic Analysis
SUN Sheng-zi, GUO Bing-hui , YANG Xiao-bo
Computer Science. 2021, 48 (7): 93-98.  doi:10.11896/jsjkx.200600003
Cross-modal retrieval has become a popular topic, since multi-modal data are heterogeneous and the similarities between different forms of information are worthy of attention. Traditional single-modal methods reconstruct the original information and lack consideration of the semantic similarity between different data. In this work, an Embedding Consensus Autoencoder for Cross-Modal Semantic Analysis (ECA-CMSA) is proposed, which maps the original data to a low-dimensional shared space to retain semantic information. Considering the similarity between modalities, an autoencoder is utilized to associate the feature projection with the semantic code vector. In addition, regularization and sparse constraints are applied to the low-dimensional matrices to balance reconstruction errors. The high-dimensional data are transformed into semantic code vectors, and the different models are constrained by parameters to achieve denoising. Experiments on four multi-modal datasets show that the query results are improved and effective cross-modal retrieval is achieved. Furthermore, ECA-CMSA can also be applied to computer- and network-related fields such as deep learning and subspace learning. The model breaks through the obstacles of traditional methods, innovatively using deep learning to convert multi-modal data into abstract expressions, achieving better accuracy and recognition results.
Dynamic Data Refining Strategy for Soundness Verification Based on WFT-net
TAO Xiao-yan, YAN Chun-gang, LIU Guan-jun
Computer Science. 2021, 48 (7): 99-104.  doi:10.11896/jsjkx.200700125
The workflow net with data tables (WFT-net) has been proposed to verify the soundness of business processes, ensuring the correctness of business logic and the satisfiability of data requirements. In some cases, a static data refining strategy cannot reflect all possible execution situations of a business process, which causes problems such as poor detection accuracy. To this end, a new dynamic data refining strategy is proposed in this paper. First, a method is given for evaluating the status of tables and predicates associated with the written data element in the current state of the WFT-net, to capture real-time changes in data-flow status and fully reflect all reachable states in process execution, so as to avoid losing execution paths. In addition, when process execution is caught in a loop that would cause the data-flow status to be updated infinitely, the data assignment rules are appropriately adjusted to avoid the resulting infinite state. Then, the soundness of the business process is verified based on all its possible execution situations. Finally, experimental results on different business process instances show that the dynamic data refining strategy improves the accuracy of soundness verification.
Outlier Document Detection via Optimal Transport and k-nearest Neighbor
SHUI Ze-nong, ZHANG Xing-yu, SHA Chao-feng
Computer Science. 2021, 48 (7): 105-111.  doi:10.11896/jsjkx.200400140
Outlier or anomaly detection is one of the research hotspots in areas such as data mining and machine learning, and researchers have proposed a variety of outlier detection methods applicable to problems such as intrusion detection and anomalous transaction detection. However, most outlier detection methods mainly target tabular data or time series data, and cannot be directly applied to outlier document detection. Existing proximity-based outlier detection methods generally measure proximity by the distance of a document to the entire document set, failing to find outliers from local considerations, and Euclidean distance may not characterize the semantic proximity between documents. Probabilistic-model-based outlier document detection methods are too complex and define document outliers only globally. In response to these issues, this paper proposes a new proximity-based outlier document detection method, in which the outlierness of a document is measured by the distance between the document and its k-nearest-neighbor documents. We introduce the optimal transport algorithm to calculate the distance between documents, based on the semantic information of documents obtained from word embedding vectors and the topic model. The method defines document outliers from a local perspective, using document distances that reflect the semantic proximity between documents. This paper conducts extensive experiments on two open-source document datasets, and the results show that the proposed method outperforms the benchmark outlier document detection methods on four evaluation metrics. Experiments also demonstrate the effectiveness of the proposed k-nearest-neighbor-based outliers and the impact of the value of k.
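The scoring rule described above can be sketched given any pairwise document-distance matrix (in the paper those distances come from optimal transport over word embeddings and topics; here a plain precomputed matrix stands in): a document's outlier score is its mean distance to its k nearest neighbors.

```python
# Illustrative kNN-based outlier scoring over precomputed document distances.

def knn_outlier_scores(dist, k=1):
    """dist[i][j]: distance between documents i and j (symmetric, 0 on diagonal)."""
    n = len(dist)
    scores = []
    for i in range(n):
        nearest = sorted(dist[i][j] for j in range(n) if j != i)[:k]
        scores.append(sum(nearest) / len(nearest))
    return scores

# Documents 0-2 form a tight cluster; document 3 is far from everything.
dist = [[0.0, 1.0, 1.0, 9.0],
        [1.0, 0.0, 1.0, 9.0],
        [1.0, 1.0, 0.0, 9.0],
        [9.0, 9.0, 9.0, 0.0]]
print(knn_outlier_scores(dist, k=1))  # [1.0, 1.0, 1.0, 9.0]
```

Because each score only depends on a document's own neighborhood, the definition is local, which is the property the paper contrasts with global, whole-collection distances.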
Multi-task Spatial-Temporal Graph Convolutional Network for Taxi Idle Time Prediction
SONG Long-ze, WAN Huai-yu, GUO Sheng-nan, LIN You-fang
Computer Science. 2021, 48 (7): 112-117.  doi:10.11896/jsjkx.201000089
Taxi idle time seriously affects the utilization efficiency of transportation resources and drivers' income. Accurate taxi idle time prediction can effectively guide drivers to make reasonable path planning and assist taxi platforms in efficient resource scheduling. However, in actual scenarios, the idle time in different areas of a city is affected by various factors such as regional traffic, passenger flow, and historical idle time. A multi-task spatial-temporal graph convolutional network (MSTGCN) model is proposed to solve this problem. MSTGCN adopts a novel spatial-temporal graph convolutional structure to comprehensively model the various spatial and temporal correlation factors that affect idle time. A multi-task attention fusion mechanism is also proposed to improve the information acquisition ability and prediction performance of each task. Extensive experiments are carried out on two public datasets provided by the Didi Chuxing GAIA Initiative, and the prediction results of the proposed model are better than those of other methods.
Prediction of Evolution Trend of Online Public Opinion Events Based on Attention Mechanism in Social Networks
SANG Chun-yan, XU Wen, JIA Chao-long, WEN Jun-hao
Computer Science. 2021, 48 (7): 118-123.  doi:10.11896/jsjkx.200600155
Compared with traditional media, social networks play a prominent role in disseminating news, ideas, and opinions, and are also the most effective channel for spreading negative information such as rumors and fake news. Therefore, accurate prediction and effective control of the evolution trend of online public opinion have become important research topics. At present, most studies predict the evolution characteristics and development trends of online public opinion events from the perspective of theoretical modeling; the modeling and analysis of information dissemination trend prediction based on user behavior characteristics need further study. Considering the interaction between users in the process of information dissemination, this paper proposes a method based on the attention mechanism, which aims to predict the evolution trend of information dissemination in social networks. Firstly, a network architecture based on long short-term memory (LSTM) is used to obtain the trajectory characteristics of information propagation. Secondly, considering the complexity of information dissemination and user behavior, the attention mechanism is used to mine the dependence between users to predict the real information dissemination process. Finally, we comprehensively consider the driving factors that affect information dissemination and obtain an attention diffusion neural network (ADNN). Experimental results on four comparative datasets show that the ADNN model outperforms popular sequence models: it can effectively use the influence of driving factors on information dissemination and more accurately predict the trend of information dissemination in social networks.
Social Network User Influence Evaluation Algorithm Integrating Structure Centrality
TAN Qi, ZHANG Feng-li, WANG Ting, WANG Rui-jin, ZHOU Shi-jie
Computer Science. 2021, 48 (7): 124-129.  doi:10.11896/jsjkx.200600096
In social networks, the transmission of information can be controlled at the macro level by tracking a small number of strongly influential users, but user influence is posterior information that cannot be observed directly and can only be estimated from relevant characteristics. Therefore, this paper proposes a social network user influence evaluation algorithm that integrates structural degree centrality to identify strongly influential users. The proposed algorithm, SDRank, is developed from an improved PageRank algorithm: it introduces structural degree centrality, combines regulatory factors of join time and average forwarding number, and then calculates the user's influence. Compared with other existing algorithms, SDRank is applicable to a broader set of scenarios from a user behavior perspective, since it requires no specific information (such as personal tags or fans) that carries potential forgery risks or may be missing, and it does not have to exploit the underlying information of the disseminated content. This paper takes a cascade forwarding dataset of Weibo users as the experimental object, visually analyzes the average forwarding number of top-K users and other relevant results, and discusses the role of user forwarding behavior in information transmission in social networks. In the experiments, the accuracy, recall, and F1-measure of SDRank are greatly improved compared with PageRank and TrustRank, verifying the effectiveness of the SDRank algorithm.
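For reference, the base PageRank recurrence that SDRank extends can be sketched in a few lines (the structural-degree, join-time, and forwarding-number factors are the paper's additions and are omitted here; this is only the standard iteration).

```python
# Pure-Python sketch of the standard PageRank power iteration.

def pagerank(out_links, d=0.85, iters=100):
    """out_links[j]: list of nodes that node j links (forwards) to."""
    n = len(out_links)
    pr = [1.0 / n] * n
    for _ in range(iters):
        pr = [(1 - d) / n
              + d * sum(pr[j] / len(out_links[j])
                        for j in range(n) if i in out_links[j])
              for i in range(n)]
    return pr

# Two users who only forward each other: both ranks converge to 0.5.
print(pagerank([[1], [0]]))
```

SDRank reweights this recurrence so that a user's structural position and behavioral regularity, rather than forgeable profile data, drive the score.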
Robustness Analysis of Complex Network Based on Rewiring Mechanism
MU Jun-fang, ZHENG Wen-ping, WANG Jie, LIANG Ji-ye
Computer Science. 2021, 48 (7): 130-136.  doi:10.11896/jsjkx.201000108
With the widespread use of infrastructure networks such as power systems, transportation systems, and communication systems, improving the robustness of complex networks is of great significance. The rewiring mechanism is an efficient and simple method to improve network robustness. Rewiring based on the 0-order null model improves robustness by randomly deleting and creating edges; although the number of edges is maintained, node degrees change, as in the RM-ES algorithm. Rewiring based on the 1-order null model improves robustness by randomly selecting two edges for rewiring; although the degree distribution is maintained, it is difficult to find appropriate nodes by random edge selection, which increases the running time of the algorithm, as in the RM-LCC algorithm. Therefore, in order to maintain the degree distribution while improving network robustness, this paper proposes a fast rewiring mechanism based on the 1-order null model, called FRM. The FRM algorithm first weights each edge by degree, then selects two edges with probability proportional to edge weight, and finally creates edges between nodes with similar degrees. We compare the FRM algorithm with other methods on real network datasets. Experimental results on three real networks show that FRM performs better than four representative rewiring algorithms under attacks based on degree centrality, betweenness centrality, and PageRank centrality, with respect to the ratio of nodes in the largest connected component s(Q), the robustness index R, and the robustness index I(G).
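The degree-preserving "double edge swap" that 1-order-null-model rewiring (including FRM) relies on can be sketched as follows: replace edges (a,b) and (c,d) with (a,d) and (c,b), so every node keeps its degree. FRM's degree-weighted edge choice and similar-degree pairing are omitted; edge selection here is uniform for brevity.

```python
# Illustrative degree-preserving double edge swap (1-order null model move).
import random
from collections import Counter

def double_edge_swap(edges, rng):
    e = list(edges)
    i, j = rng.sample(range(len(e)), 2)
    (a, b), (c, d) = e[i], e[j]
    if len({a, b, c, d}) < 4:
        return e  # swap would create a self-loop; skip this attempt
    if {frozenset((a, d)), frozenset((c, b))} & set(map(frozenset, e)):
        return e  # swap would duplicate an existing edge; skip this attempt
    e[i], e[j] = (a, d), (c, b)
    return e

def degree_sequence(edges):
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

edges = [(0, 1), (2, 3), (4, 5)]
rewired = double_edge_swap(edges, random.Random(7))
print(degree_sequence(edges) == degree_sequence(rewired))  # True
```

Because each accepted swap only exchanges endpoints between two edges, the degree distribution, and hence the 1-order null model, is preserved by construction.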
Semi-supervised Clustering Based on Gaussian Fields and Adaptive Graph Regularization
ZHAO Min, LIU Jing-lei
Computer Science. 2021, 48 (7): 137-144.  doi:10.11896/jsjkx.200800190
Clustering divides a given sample set into several different clusters. It is a widely used tool that has been applied in machine learning, data mining and other fields, and has received extensive attention from researchers. However, three main limitations remain. First, data usually contain noises and outliers, which can cause significant errors in clustering results. Second, traditional clustering methods do not use supervision information to guide the construction of similarity matrices. Third, in graph-based clustering, the neighbor relationships are fixed when the graph is constructed; once they are computed incorrectly, the quality of the constructed graph is poor, which degrades clustering performance. Therefore, a semi-supervised clustering model based on Gaussian fields and adaptive graph regularization (SCGFAG) is proposed in this paper. In this model, supervised information is introduced through Gaussian fields and harmonic functions to guide the construction of the similarity matrix and realize semi-supervised learning. A sparse error matrix is introduced to represent sparse noise such as impulse noise, dead lines and stripes, and the l1 norm is introduced to alleviate it. In addition, the l2,1 norm is introduced to mitigate the effect of outliers. SCGFAG is therefore insensitive to data noise and outliers. More importantly, adaptive graph regularization is introduced into SCGFAG to improve clustering performance. To solve the resulting optimization problem, an iterative updating algorithm based on the augmented Lagrangian method (ALM) is proposed to update the optimization variables in turn. Experimental results on four datasets show that the proposed method is superior to eight classical clustering methods and achieves better clustering performance.
Improved Random Forest Imbalance Data Classification Algorithm Combining Cascaded Up-sampling and Down-sampling
ZHENG Jian-hua, LI Xiao-min, LIU Shuang-yin, LI Di
Computer Science. 2021, 48 (7): 145-154.  doi:10.11896/jsjkx.200800120
Data imbalance seriously degrades the performance of traditional classification algorithms, and imbalanced data classification has become a hot and difficult problem in machine learning. To improve the detection rate of minority-class samples in imbalanced data sets, an improved random forest algorithm is proposed in this paper. Its core is to apply hybrid sampling to the data set of each random forest subtree sampled by Bootstrap. First, inverse-weight up-sampling based on a Gaussian mixture model is applied; then cascaded up-sampling based on the SMOTE-borderline1 algorithm is carried out, followed by random down-sampling, so as to obtain a balanced training subset for each subtree. Finally, a decision-tree-based improved random forest learner is used to implement the imbalanced data classification algorithm. This paper uses G-means and AUC as evaluation indexes and compares the proposed algorithm with 10 different algorithms on 15 public data sets; it ranks first in both the average ranking and the average value of the two indexes. Furthermore, compared with 6 state-of-the-art algorithms on 9 data sets, the proposed algorithm achieves better results in 28 of the 32 comparisons. The experimental results show that the proposed algorithm improves the detection rate of the minority class and has better classification performance.
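For reference, the plain SMOTE interpolation step that the cascaded pipeline above builds on can be sketched as follows (this is the generic SMOTE idea, not the paper's Gaussian-mixture inverse-weight or borderline variants):

```python
import numpy as np

# Generic SMOTE sketch: each synthetic minority sample is interpolated
# between a random minority point and one of its k nearest minority
# neighbours.
def smote(X_min, n_new, k=5, rng=None):
    rng = rng or np.random.default_rng(0)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nn = np.argsort(d)[1:k + 1]        # k nearest neighbours (excl. self)
        j = rng.choice(nn)
        lam = rng.random()                 # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.asarray(out)
```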
Frequent Pattern Mining of Residents’ Travel Based on Multi-source Location Data
WU Cheng-feng, CAI Li, LI Jin, LIANG Yu
Computer Science. 2021, 48 (7): 155-163.  doi:10.11896/jsjkx.200800072
With the advance of urbanization, mining frequent patterns of residents' travel has become a hot topic. Most existing studies suffer from problems such as a lack of description of the purpose and significance of frequent travel patterns, and incomplete analysis of mining results. To address these issues, this paper first proposes a novel mining method for residents' frequent travel patterns (MMoRFTP). It divides the map into several regions using a morphological image method, builds a travel model from fused multi-source location data, and identifies the urban functions of each region using a topic model. It then transforms travel trajectories lacking semantic information into trajectories with region and functional-area semantics, and constructs a travel pattern graph and a label pattern graph with regions as nodes and semantic trajectories as edges. Based on this graph model, the MulEdge algorithm is proposed to mine frequent association patterns of residents' travel. Urban road network data, POI data, taxi GPS data and check-in data are used in the experiments. The results show that MMoRFTP performs well, and the discovered frequent travel patterns can provide a decision-making basis for road planning, traffic management, commercial layout and other applications.
Dummy Location Generation Method Based on User Preference and Location Distribution
WANG Hui, ZHU Guo-yu, SHEN Zi-hao, LIU Kun, LIU Pei-qian
Computer Science. 2021, 48 (7): 164-171.  doi:10.11896/jsjkx.200800069
The traditional dummy location generation algorithm based on the k-anonymity mechanism has low rationality and is vulnerable to attackers exploiting side information. To solve this problem, the SPDGM algorithm is proposed. First, the algorithm defines a semantic weighted digraph to describe the time distribution of semantics and semantic transfer relationships. Second, to overcome the weak resistance caused by considering only the historical query probability of a location, the algorithm defines location credibility, which combines the historical probability of a location with public evaluation information. Third, to avoid densely distributed dummy locations, a dispersion degree is defined to control their distribution. Finally, the algorithm generates an anonymous set whose semantics are safe and whose distribution is sparse. The experimental results show that SPDGM achieves a lower recognition rate and higher privacy protection strength under semantic attacks, and its running time is lower than that of other algorithms that consider semantic attacks. Therefore, the SPDGM algorithm is feasible and practical.
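The dispersion idea, spreading dummy locations apart, can be sketched as a greedy farthest-point heuristic (a toy illustration only; SPDGM's actual scoring, which also weighs semantics and location credibility, is not reproduced):

```python
import math

# Greedily pick k dummy locations that maximise the minimum distance to
# the real location and to already-chosen dummies.
def pick_dispersed(cands, real, k):
    chosen = [real]
    for _ in range(k):
        best = max(cands, key=lambda c: min(math.dist(c, p) for p in chosen))
        chosen.append(best)
        cands = [c for c in cands if c != best]
    return chosen[1:]  # the dummies, excluding the real location
```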
Collaborative Filtering Recommendation Algorithm Based on Adversarial Learning
ZHAN Wan-jiang, HONG Zhi-lin, FANG Lu-ping, WU Zhe-fu, LYU Yue-hua
Computer Science. 2021, 48 (7): 172-177.  doi:10.11896/jsjkx.200600077
A recommendation system recommends relevant information and commodities to users according to their preferences and purchase behavior. As user-generated content (UGC) gradually becomes the mainstream of current Web applications, UGC-based recommendation has also received widespread attention. Different from the binary user-item interaction in traditional recommendation, existing UGC recommendation adopts collaborative filtering over a ternary interaction among consumer, item and producer, thereby improving recommendation accuracy; however, most algorithms focus on recommendation performance and ignore robustness. Therefore, by combining the ideas of adversarial learning and collaborative filtering, a collaborative filtering recommendation algorithm based on adversarial learning is proposed. First, adversarial perturbations are added to the parameters of the ternary relationship model to make model performance worst-case, while adversarial training is used to improve the robustness of the recommendation model. Second, an efficient algorithm is designed to learn the parameters required by the model. Finally, the method is tested on two public data sets generated from Reddit and Pinterest. The experimental results show that, under the same parameter settings, the AUC, Precision and Recall of the proposed algorithm are significantly improved compared with existing algorithms, verifying its feasibility and effectiveness. The algorithm not only enhances recommendation performance but also improves model robustness.
Interval Prediction Method for Imbalanced Fuel Consumption Data
CHEN Jing-jie, WANG Kun
Computer Science. 2021, 48 (7): 178-183.  doi:10.11896/jsjkx.200500145
Fuel consumption data are imbalanced, which lowers the quality of prediction intervals. To address this problem, an interval prediction model based on the SMOTE-XGBoost algorithm is proposed. From the perspective of oversampling, the SMOTE algorithm is used to increase the number of minority samples in the training set, eliminating the data imbalance. For the interval prediction task, the quantile loss function is used as the loss function of the XGBoost algorithm. Because the quantile loss prevents trees in XGBoost from splitting, it is improved by smoothing a small area around the origin of its first derivative. On this basis, the XGBoost and SMOTE algorithms are combined to train the interval prediction model, and the upper and lower bounds of the prediction interval are obtained respectively. Experiments on a QAR data set indicate that, compared with other methods, this method yields prediction intervals with higher coverage and narrower width, improving the quality of the prediction interval.
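The smoothing trick can be sketched as an XGBoost-style custom objective (the smoothing width `delta` and the linear ramp are assumptions, not the paper's exact construction; XGBoost's custom objective normally receives a `DMatrix`, while this sketch takes raw arrays). The plain pinball loss has a zero second derivative, which stops tree splitting, so the gradient is linearised inside |residual| < delta:

```python
import numpy as np

def smoothed_quantile_objective(alpha=0.9, delta=1.0):
    """Return (grad, hess) for the alpha-quantile pinball loss, with the
    gradient linearised inside |residual| < delta so the hessian is
    non-zero there and trees can split."""
    def objective(y_pred, y_true):
        r = y_true - y_pred
        # outside the band: the plain pinball gradient w.r.t. y_pred
        grad = np.where(r >= delta, -alpha,
                np.where(r <= -delta, 1.0 - alpha,
                         (0.5 - alpha) - r / (2.0 * delta)))  # linear ramp
        hess = np.where(np.abs(r) < delta, 1.0 / (2.0 * delta), 1e-6)
        return grad, hess
    return objective
```

The ramp joins the two constant branches continuously: at r = delta it equals -alpha, and at r = -delta it equals 1 - alpha.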
Computer Graphics & Multimedia
Video Super-resolution Method Based on Deep Learning Feature Warping
CHENG Song-sheng, PAN Jin-shan
Computer Science. 2021, 48 (7): 184-189.  doi:10.11896/jsjkx.200800224
Video restoration aims to restore potential clear videos from a given degraded video sequence. Existing video restoration methods usually focus on modeling the motion information among adjacent frames and establishing alignment among them. Different from these methods, this paper proposes a feature warping method based on deep learning for video super-resolution. First, the proposed algorithm estimates motion information using deep convolutional neural networks. Then it develops a shallow convolutional neural network to extract features from the input frames. Based on the estimated motion information, the deep features are warped to those of the central frame. Next, the proposed method fuses the deep features effectively. Finally, a restoration network reconstructs clear frames. Experimental results demonstrate the effectiveness of the proposed algorithm, which performs well on benchmark datasets compared with existing methods.
SAR Image Change Detection Method Based on Capsule Network with Weight Pruning
CHEN Zhi-wen, WANG Kun, ZHOU Guang-yun, WANG Xu, ZHANG Xiao-dan, ZHU Hu-ming
Computer Science. 2021, 48 (7): 190-198.  doi:10.11896/jsjkx.200800225
SAR image change detection algorithms based on deep neural networks have been widely used in fields such as agricultural monitoring, urban planning and forest early warning due to their high accuracy. This paper designs a SAR image change detection algorithm based on a capsule network. In view of its high model complexity and large number of parameters, a model compression method based on weight pruning is proposed. The method analyzes the capsule network parameters layer by layer, adopts different pruning strategies for different types of layers, prunes redundant parameters in the network, and then fine-tunes the pruned network to recover the detection performance of the model. Finally, by compressing and storing the parameters retained in the model, the storage space occupied by the model is significantly reduced. Experiments on four real SAR image datasets prove the effectiveness of the proposed model compression method.
Video Abnormal Event Detection Algorithm Based on Self-feedback Optimal Subclass Mining
HOU Chun-ping, ZHAO Chun-yue, WANG Zhi-peng
Computer Science. 2021, 48 (7): 199-205.  doi:10.11896/jsjkx.200800146
Video anomaly detection, which detects whether a video contains an abnormal event, is one of the hot issues in the field of video processing. However, since abnormal samples are not involved in the training process and there is a certain degree of similarity between abnormal and normal samples, it is difficult to design a discriminative anomaly detection model. To solve these problems, this paper first proposes a feature selection method based on similarity preservation and sample recovery. The method retains the similarity of normal samples and then learns features that accurately describe normal events. Second, it formalizes abnormal event detection as a classification problem and proposes a self-feedback optimal subclass mining method to find the optimal classifiers. A sample is labeled as anomalous only if all classifiers label it as anomalous. Extensive experiments on public video surveillance datasets (the Avenue and UCSD Ped2 datasets) demonstrate that the proposed abnormal event detection method achieves good results.
Temporal Modeling for Online Anomaly Detection
QING Lai-yun, ZHANG Jian-gong, MIAO Jun
Computer Science. 2021, 48 (7): 206-212.  doi:10.11896/jsjkx.200900093
Weakly supervised anomaly detection (WSAD) is challenging in that only normal/anomaly video-level labels are available for supervision, yet the intervals where anomalies take place must be localized. We employ a multiple instance learning (MIL) network for weakly supervised anomaly detection, which regards the input video as a bag and the segments chunked from the video as its instances. We train the instance classifier with video-level (bag-level) labels only, while instance-level labels are unknown. As videos carry strong temporal information, we focus on temporal relationships for online anomaly detection in surveillance videos. We consider both global and local perspectives and use a self-attention module to learn each instance's weight. The linear weighted sum of self-attention scores and instance anomaly scores gives the video-level anomaly score, and a mean square error loss is employed to train the self-attention module. Online constraints allow us to use only historical and current video clips, without future frames. To model the temporal structure of video, we introduce LSTM and temporal convolutional networks (TCN) into the WSAD problem, exploring a single-rate dilated temporal convolutional network and a pyramid dilated temporal convolutional network (PDTCN) that fuses multi-scale features with different dilation rates. Experiments show that the AUC of PDTCN with complementary inner and outer bag loss is 3.2% higher than that of the baseline without temporal modeling on the UCF-Crime dataset.
Low Light Image Fusion Detection Method Based on Lego Filter and SSD
LI Lin, LIU Xue-liang, ZHAO Ye, JI Ping
Computer Science. 2021, 48 (7): 213-218.  doi:10.11896/jsjkx.200800127
Aiming at the problem that object detection in low-light images is prone to false or missed detections due to their complex background, this paper proposes a method based on SSD object detection to improve the accuracy and speed of low-light image detection. First, the low-light image is enhanced, and the enhanced image and the original low-light image are each fed into an SSD network with Lego filters for training and detection. The two detection models are trained and evaluated on the enhanced data set, producing a series of candidate boxes. Finally, the non-duplicate boxes among the candidates are fused to mark targets at the correct positions, improving the detection accuracy on low-light images. At the same time, Lego filters are integrated into the network structure to reduce the number of model parameters in training and thus improve detection speed. Experimental results show that, when Lego filters are integrated at different positions of the network, the model parameters are reduced by 8.9% and 29.5%, and the number of floating-point operations is reduced by 6.8% and 34.9%. After fusion processing, the detection accuracy improves by 3%~7%. The method is well suited to practical applications, effectively improves the detection speed and accuracy on low-light images, and expands the application range of object detection.
Novel Algorithm of Single Image Dehazing Based on Dark Channel Prior
HE Tao, ZHAO Ting, XU He
Computer Science. 2021, 48 (7): 219-224.  doi:10.11896/jsjkx.200700160
Because the dark channel prior defogging algorithm causes color distortion and shift in bright areas such as the sky, a novel single-image dehazing algorithm based on the dark channel prior is proposed to improve the defogging effect. First, an adaptive filtering window is designed according to the image size. Next, to prevent highlight pixels from biasing the estimate of the atmospheric light value, the variogram is used to remove highlight pixels, and the atmospheric light value is estimated from the dark channel map of the image after their removal. Then, an improved dark channel prior defogging algorithm combined with structural similarity is proposed, and the transmittance is optimized and corrected. The atmospheric scattering model is then used to recover the fog-free image. Finally, conversion between the RGB and HSI models is used to enhance the brightness of the restored image. The experimental results show that the proposed algorithm not only dehazes the scene well, but also handles bright areas such as the sky, so that the processed image has a good visual effect.
UAV Sound Recognition Algorithm Based on Deep Learning
XU Hao, LIU Yue-lei
Computer Science. 2021, 48 (7): 225-232.  doi:10.11896/jsjkx.200500091
Deep learning has demonstrated superior performance and broad prospects in image recognition and sound processing, and it is of practical significance for UAV detection systems deployed in no-fly zones to use deep learning to identify UAV sound signals. To obtain a better detection effect, representative feature extraction and classification methods are first surveyed and their advantages and disadvantages analyzed. Then a data processing method is proposed to expand the number of available samples, and different combinations of deep learning networks are trained in the experiments. Finally, the confusion matrix method is used to evaluate the experimental results across different SNR models, filtering limits, degrees of fitting, neural network combinations and cross-model recognition. The results show that reducing the sound intensity of the UAV can improve the recognition distance of the system, and that with MFCC sound features classified by a fully connected neural network, the samples achieve a longer identification radius and a lower misjudgment rate.
Moving Object Detection Based on Region Extraction and Improved LBP Features
XIN Yuan-xue, SHI Peng-fei, XUE Rui-yang
Computer Science. 2021, 48 (7): 233-237.  doi:10.11896/jsjkx.200600131
The detection accuracy of moving objects is dramatically affected by dynamic natural backgrounds, for instance shaking leaves and varying illumination, so it is essential to distinguish dynamic background from foreground moving objects. Existing foreground extraction algorithms extract the foreground point by point, which wastes computing resources. This paper proposes a novel moving object detection algorithm based on region extraction and improved Local Binary Patterns (LBP). First, the image is divided into several blocks of the same size, and a Kernel Density Estimation (KDE) model is established according to the statistical characteristics of these blocks; the foreground region is estimated by the KDE model. Then, the improved LBP texture feature histograms of all pixels in the foreground blocks are obtained; by matching the histograms, all foreground pixels are extracted, and the background is updated with a probabilistic model. The experimental results show that the proposed method quickly extracts the foreground region of a moving target and eliminates most of the interference caused by dynamic background. Compared with traditional algorithms, it is more suitable for moving object detection in natural scenes.
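For reference, the classic 8-neighbour LBP operator that improved variants extend (the paper's specific improvement is not reproduced here):

```python
import numpy as np

# Classic LBP: each interior pixel gets an 8-bit code, one bit per
# neighbour, set when the neighbour is >= the centre pixel.
def lbp(img):
    """img: 2-D array; returns LBP codes for the interior pixels."""
    c = img[1:-1, 1:-1]
    codes = np.zeros_like(c, dtype=np.uint8)
    # 8 neighbours, clockwise from top-left, each contributing one bit
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit
    return codes
```

A texture histogram for a block is then just `np.bincount(codes.ravel(), minlength=256)`.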
Multi-scale Multi-granularity Feature for Pedestrian Re-identification
WANG Dong, ZHOU Da-ke, HUANG You-da, YANG Xin
Computer Science. 2021, 48 (7): 238-244.  doi:10.11896/jsjkx.200600043
To address the insufficient discrimination of pedestrian re-identification features extracted by existing convolutional neural networks, a novel multi-scale multi-granularity feature learning method for pedestrian re-identification is proposed. In the training phase, the method extracts multi-scale features at different stages of the convolutional neural network, then partitions and pools these feature maps to obtain multi-granularity features containing both global and local features, and uses uncertainty-weighted Softmax loss and triplet loss to supervise the training of the feature vectors. In the inference phase, the multi-scale multi-granularity features are concatenated, and the concatenated features are used for similarity matching in the gallery. Experiments on the Market-1501 and DukeMTMC-ReID datasets show that, compared with the ResNet-50 baseline, the proposed method improves the Rank-1 index by 4.3% and 3.6% and the mAP index by 6.2% and 6.6%, respectively. The results show that the proposed method enhances the discrimination of the extracted features and improves pedestrian re-identification performance.
Artificial Intelligence
Survey on Cloud Manufacturing Service Composition
YAO Juan, XING Bin, ZENG Jun, WEN Jun-hao
Computer Science. 2021, 48 (7): 245-255.  doi:10.11896/jsjkx.200800173
With the rapid development of industrialization, the manufacturing industry, as the main force promoting industrialization, must accelerate its pace of development; thus a new service-oriented manufacturing model, cloud manufacturing, has been proposed. Cloud manufacturing aims at sharing and cooperation among distributed manufacturing resources and capabilities, forming an on-demand mode of resource allocation and use. It requires continuous exploration to select services with optimal performance and combine them into a composite service that meets users' needs. Cloud manufacturing service composition is an NP-hard problem and one of the most challenging problems in cloud manufacturing. Current composition methods face challenges such as high time complexity, poor composition quality, and composition paths that achieve only sub-optimal solutions. How to use fine-grained services to generate composite services that improve manufacturing capability and meet users' needs has attracted widespread attention from academic and industrial researchers, so a comprehensive review of research on this problem is necessary. In this paper, the composition process and optimization objectives of cloud manufacturing service composition are first described. Then, key points and hotspots are systematically summarized from different perspectives, such as composition criteria, optimization algorithms, and multi-objective versus single-objective optimization. Finally, the application scenarios, experimental data and current deficiencies of cloud manufacturing service composition are summarized and discussed.
Summary of Computer-assisted Tongue Diagnosis Solutions for Key Problems
ZHANG Li-qian, LI Meng-hang, GAO Shan-shan, ZHANG Cai-ming
Computer Science. 2021, 48 (7): 256-269.  doi:10.11896/jsjkx.200800223
Tongue diagnosis is one of the important components of the four diagnostic methods of "looking, listening, asking and feeling the pulse", and a major feature of diagnosis in traditional Chinese medicine (TCM). TCM physicians make clinical diagnoses through visual observation, so traditional tongue diagnosis suffers from strong subjective dependence and a lack of quantification. With the development of Wise Information Technology of 120 (WIT 120), researchers have focused on using computers to assist the diagnosis of tongue images, realizing intelligent tongue diagnosis and, in turn, smart Chinese medicine. In recent years, intelligent tongue diagnosis and related research have become increasingly popular. To help researchers explore computer-aided tongue diagnosis in greater depth, this paper reviews the field systematically and comprehensively. First, the specific process of computer-aided TCM tongue image diagnosis is introduced. Second, based on an extensive study of the existing literature, the latest achievements and existing applications, the mainstream methods for each step of computer-aided tongue diagnosis are classified and discussed, and their basic ideas, advantages and disadvantages are summarized. Then, after surveying the tongue image analysis systems developed so far, a relatively complete computer-aided tongue diagnosis system is designed and implemented. Finally, possible future development directions are summarized and discussed.
Chaos Artificial Bee Colony Algorithm Based on Homogenizing Optimization of Generated Time Series
SHI Ke-xiang, BAO Li-yong, DING Hong-wei, GUAN Zheng, ZHAO Lei
Computer Science. 2021, 48 (7): 270-280.  doi:10.11896/jsjkx.200800087
To optimize the distribution of the time series underlying the initial honey sources and the search method, and to further improve the algorithm's global exploration and traversal optimization efficiency, a chaos artificial bee colony algorithm based on homogenized generated time series is proposed in this paper. To address the problem that the initial honey sources generated by a chaotic time series are too concentrated, the Logistic chaos map is first homogenized based on the maximum entropy principle, and entropy spectrum analysis and the NIST randomness test are used to verify the randomness of the generated series, so that the initial honey sources are distributed randomly and uniformly over the entire solution space, laying the foundation for global optimization. Second, the neighborhood search method is improved with a near-to-far search strategy, and the homogenized time series is used to search for the optimal honey source location, improving the traversal speed and convergence accuracy of the proposed algorithm. Finally, the proposed algorithm is evaluated on nine standard test functions and compared with other improved artificial bee colony and optimization algorithms in terms of convergence curves and optimization results, and the six algorithms are applied to a logistics distribution problem to find the shortest path. The results show that the proposed algorithm not only homogenizes the initial honey sources but also achieves a more significant optimization effect: it can jump out of local optima and find the global optimal solution accurately and quickly.
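The plain Logistic map from which the homogenization starts can be sketched as an initializer (the maximum-entropy homogenization step itself is not reproduced; mu = 4 and the seed value are assumptions):

```python
import numpy as np

# Generate initial honey sources from the Logistic map
# x_{k+1} = mu * x_k * (1 - x_k); with mu = 4 the iterates stay in (0, 1)
# but cluster near the interval ends, which is why the paper homogenizes them.
def logistic_init(n_sources, dim, mu=4.0, x0=0.345):
    x = x0
    pts = np.empty((n_sources, dim))
    for i in range(n_sources):
        for j in range(dim):
            x = mu * x * (1.0 - x)
            pts[i, j] = x
    return pts  # values in (0, 1); scale to the search bounds as needed
```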
Logistic Regression with Regularization Based on Network Structure
HU Yan-mei, YANG Bo, DUO Bin
Computer Science. 2021, 48 (7): 281-291.  doi:10.11896/jsjkx.201100106
Logistic regression is widely used as a classification model. However, as high-dimensional data classification becomes more frequent in practical applications, the model faces great challenges, and regularization is an effective approach to them. Many existing regularized logistic regression models directly use the L1-norm penalty as the regularization term without considering the complex relationships among features. Some regularization terms are designed on the basis of group information of features, but they assume the group information is prior knowledge. This paper explores the patterns hidden in feature data from a network perspective and proposes a regularized logistic regression model based on network structure. First, a feature network is constructed by describing the feature data in network form. Second, the feature network is observed and analyzed from the perspective of network science, and a penalty function is designed based on these observations. Third, a logistic regression model with network-structured Lasso is proposed, with the penalty function as the regularization term. Finally, the solution of the model is derived by combining Nesterov's accelerated proximal gradient method with the Moreau-Yosida regularization method. Experiments on real datasets show that the proposed regularized logistic regression performs excellently, demonstrating that observing and analyzing feature data from the network perspective is a promising way to study regularized models.
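The solution strategy, an accelerated proximal gradient method, can be illustrated with a plain L1 (Lasso) proximal operator standing in for the paper's network-structured penalty (a minimal FISTA-style sketch, not the paper's derivation):

```python
import numpy as np

def soft_threshold(w, t):
    """Proximal operator of t * ||w||_1 (soft-thresholding)."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def fista_logreg(X, y, lam=0.1, step=0.1, iters=200):
    """y in {0, 1}; minimises mean logistic loss + lam * ||w||_1."""
    w = np.zeros(X.shape[1])
    z = w.copy()
    t = 1.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ z))        # sigmoid predictions
        grad = X.T @ (p - y) / len(y)           # logistic-loss gradient
        w_next = soft_threshold(z - step * grad, step * lam)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = w_next + (t - 1.0) / t_next * (w_next - w)  # Nesterov momentum
        w, t = w_next, t_next
    return w
```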
BGCN:Trigger Detection Based on BERT and Graph Convolution Network
CHENG Si-wei, GE Wei-yi, WANG Yu, XU Jian
Computer Science. 2021, 48 (7): 292-298.  doi:10.11896/jsjkx.200500133
Trigger word detection, which involves the recognition and classification of trigger words, is a basic task of event extraction. There are two main problems in previous work: (1) neural network models for trigger word detection consider only the sequential representation of sentences, and sequential modeling is inefficient at capturing long-distance dependencies; (2) although representation-based methods avoid manual feature engineering, the word vectors used as initial training features lack representational capacity at the sentence level, making it difficult to capture deep bidirectional representations. Therefore, we propose BGCN, a trigger word detection model based on BERT and a graph convolutional network (GCN). The model strengthens feature representation by introducing BERT word vectors, and introduces syntactic structure to capture long-distance dependencies when detecting event trigger words. Experimental results show that our method outperforms other existing neural network models on the ACE2005 dataset.
Prediction of Fire Smoke Flow and Temperature Distribution Based on Trend Feature Vector
YIN Yun-fei, LIN Yue-jiang, HUANG Fa-liang, BAI Xiang-yu
Computer Science. 2021, 48 (7): 299-307.  doi:10.11896/jsjkx.200600106
Predicting smoke movement and temperature distribution when a fire occurs is a popular technology in the field of construction and fire protection. At present, such prediction has not been combined with deep neural network technology. Aiming at the current situation in which predicting fire smoke movement and temperature distribution is cumbersome and of low accuracy, a prediction model based on trend feature vectors is proposed. Deep learning methods are used to train on and predict relevant data, which is of great significance for revealing the laws of fire occurrence and development and can provide auxiliary information for fire-fighting and fire evacuation. The proposed model extracts trend features from fire time series data and uses them as prior knowledge to accelerate and optimize the training of the deep neural network. This paper designs the LSTM-TFV (LSTM based on Trend Feature Vector) algorithm. Experimental results show that the proposed model improves the accuracy of predicting fire smoke movement and temperature distribution, and achieves efficient and convenient fire time series prediction.
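As one illustration of what a trend feature might look like, the sketch below computes sliding-window least-squares slopes over a time series, which could then be fed alongside the raw sequence into an LSTM. The paper's actual trend feature construction may differ; the window size and slope definition are assumptions.

```python
import numpy as np

def trend_feature_vector(series, window=5):
    """Slope of a least-squares line fit over each sliding window of the
    series; a simple stand-in for a trend feature used as prior input
    when training a sequence model on fire time series data."""
    t = np.arange(window)
    slopes = []
    for i in range(len(series) - window + 1):
        y = np.asarray(series[i:i + window], dtype=float)
        slope = np.polyfit(t, y, 1)[0]   # first-degree fit, slope term
        slopes.append(slope)
    return np.array(slopes)
```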
Prediction of Tectonic Coal Thickness Based on AGA-DBSCAN Optimized RBF Neural Networks
WU Shan-jie, WANG Xin
Computer Science. 2021, 48 (7): 308-315.  doi:10.11896/jsjkx.200800110
In the prediction of tectonic coal thickness, various restrictive factors often lead to low accuracy. Therefore, a method that optimizes the parameters of an RBF neural network using adaptive-genetic-algorithm-optimized density clustering is used to predict tectonic coal thickness. Firstly, the 3D seismic attribute data of the mining area are preprocessed, and the PCA algorithm is used to reduce dimensionality and eliminate linear correlation between variables. Then an RBF neural network model for predicting tectonic coal thickness is constructed: the genetic algorithm is used to optimize density clustering to obtain the best core points, from which the initial cluster centers for k-means clustering are computed, optimizing the k-means algorithm so that excellent center vectors for the RBF hidden-layer basis functions are obtained, which increases the accuracy and robustness of the model's predictions. Meanwhile, since the genetic algorithm easily falls into local optima, its global and local search ability is improved by adaptively changing the crossover rate and mutation rate as the number of generations increases, so that it can escape local optima and obtain better evolutionary results. An L2 regularization term is added to mitigate the influence of noisy data on the generalization performance of the model. Finally, the proposed model is applied to the 8# coal seam of the No.6 mining area of Luling Coal Mine. The thickness predicted by the model is highly consistent with the actual geological data, showing promise for coal thickness prediction in actual mining areas.
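The adaptive crossover and mutation schedule described above can be sketched as follows: crossover probability decays and mutation probability grows with the generation count, trading early global search for late local refinement. The rate ranges and the linear schedule are illustrative assumptions, not the paper's settings.

```python
def adaptive_rates(generation, max_gen,
                   pc_range=(0.5, 0.9), pm_range=(0.01, 0.1)):
    """Adaptive genetic-algorithm schedule: crossover probability pc
    decreases and mutation probability pm increases as evolution
    proceeds, helping the population escape local optima late in the run.
    Ranges are placeholder assumptions."""
    frac = generation / max_gen
    pc = pc_range[1] - (pc_range[1] - pc_range[0]) * frac
    pm = pm_range[0] + (pm_range[1] - pm_range[0]) * frac
    return pc, pm
```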
Computer Network
Research Progress of Task Offloading Based on Deep Reinforcement Learning in Mobile Edge Computing
LIANG Jun-bin, ZHANG Hai-han, JIANG Chan, WANG Tian-shu
Computer Science. 2021, 48 (7): 316-323.  doi:10.11896/jsjkx.200800095
Mobile edge computing is a new network computing mode that has emerged in recent years. It places server nodes with strong computing power and storage performance closer to the edge of the network, near mobile devices (for example, near base stations), allowing mobile devices to offload tasks to nearby edge devices for processing. This alleviates the drawbacks of traditional networks, in which mobile devices, with weak computing and storage capabilities and limited energy, must spend a great deal of time and energy offloading tasks, insecurely, to remote cloud platforms. However, in a dynamic network where the wireless channel changes over time, deciding how a device with only limited local information (such as the number of neighbors) should, according to the size and number of its tasks, either process tasks locally or offload them, wholly or in part, to the mobile edge computing server with the optimal delay and energy consumption, is a multi-objective programming problem that is highly difficult to solve, and traditional optimization techniques (such as convex optimization) struggle to obtain good results. Deep reinforcement learning is a new artificial intelligence technique that combines deep learning and reinforcement learning; it can make more accurate decisions for complex collaboration, games and other problems, and has broad application prospects in fields such as industry, agriculture and commerce. In recent years, using deep reinforcement learning to optimize task offloading in mobile edge computing networks has become a new research trend. In the past three years, some researchers have conducted preliminary explorations of it and achieved lower latency and energy consumption than using deep learning or reinforcement learning alone, but many shortcomings remain. To further advance research in this field, this paper analyzes, compares and summarizes related domestic and foreign work in recent years, summarizes its advantages and disadvantages, and discusses possible future research directions.
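The cost trade-off underlying such offloading decisions can be illustrated with a toy delay-plus-energy comparison between local execution and edge offloading. The energy model (kappa * f^2 joules per CPU cycle), the equal weighting of delay and energy, and all parameter names are assumptions for illustration only, not a model from the surveyed work.

```python
def offload_decision(task_bits, cpu_cycles, f_local, f_edge, rate, p_tx,
                     kappa=1e-27):
    """Toy cost comparison behind an offloading decision.
    Local cost: computing delay plus dynamic CPU energy.
    Offload cost: uplink transmission delay plus edge execution delay,
    with device energy spent only on transmission."""
    t_local = cpu_cycles / f_local
    e_local = kappa * cpu_cycles * f_local ** 2
    t_off = task_bits / rate + cpu_cycles / f_edge
    e_off = p_tx * (task_bits / rate)
    cost_local = t_local + e_local
    cost_off = t_off + e_off
    return ("edge", cost_off) if cost_off < cost_local else ("local", cost_local)
```

A compute-heavy task on a slow device favors the edge; a tiny task over a slow link stays local. Deep reinforcement learning replaces this static comparison with a policy learned under changing channels and queues.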
Self-adaptive Intelligent Wireless Propagation Model to Different Scenarios
GAO Shi-shun, ZHAO Hai-tao, ZHANG Xiao-ying, WEI Ji-bo
Computer Science. 2021, 48 (7): 324-332.  doi:10.11896/jsjkx.201000181
The wireless propagation model, which predicts the path loss of radio waves, plays an important role in estimating communication rate, coverage and interference, and plays a fundamental role in the design of communication systems in civil and military fields. With advances in artificial intelligence, there is a significant trend toward intelligent wireless propagation models that replace empirical formulas with machine learning algorithms to fit path loss. Intelligent wireless propagation models effectively extend the applicability of propagation models and reduce path loss prediction error. However, because the optimal input feature set of an intelligent wireless propagation model may differ across propagation environments, it is important to design and select input features optimally for different scenarios. Therefore, this paper proposes a self-adaptive intelligent wireless propagation model (SAIWP). Firstly, inspired by how empirical models handle features in different scenarios, the SAIWP model extends the input feature set of the intelligent wireless propagation model. Then, the SAIWP model uses the simulated annealing algorithm to self-adaptively select the optimal input feature subset to reduce path loss prediction error. Finally, the SAIWP model uses the optimal input feature subset found during optimization, together with the whole dataset, to train the intelligent wireless propagation model. Simulation results show that, in LTE networks and a smart campus, compared with traditional empirical models and intelligent wireless propagation models, the SAIWP model predicts accurately over various terrains and distances, and effectively reduces path loss prediction error.
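The simulated annealing feature-subset search described above can be sketched as follows. The cooling schedule, the neighborhood move (flipping one feature in or out), and the `score` callback (standing in for training the propagation model on a subset and returning its validation error) are illustrative assumptions.

```python
import numpy as np

def sa_select_features(score, n_features, n_iter=500, t0=1.0, seed=0):
    """Simulated annealing over binary feature masks. `score(mask)`
    returns the validation error to minimize; lower is better."""
    rng = np.random.default_rng(seed)
    mask = rng.random(n_features) < 0.5       # random initial subset
    best_mask, best_err = mask.copy(), score(mask)
    cur_err = best_err
    for k in range(n_iter):
        temp = t0 * (1 - k / n_iter) + 1e-9   # linear cooling
        cand = mask.copy()
        cand[rng.integers(n_features)] ^= True  # flip one feature
        err = score(cand)
        # Accept improvements always; worse moves with Boltzmann probability.
        if err < cur_err or rng.random() < np.exp((cur_err - err) / temp):
            mask, cur_err = cand, err
            if err < best_err:
                best_mask, best_err = cand.copy(), err
    return best_mask, best_err
```

Early on, high temperature lets the search cross poor subsets; as the temperature falls, it settles into the best subset found.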
Reinforcement Learning Based Energy Allocation Strategy for Multi-access Wireless Communications with Energy Harvesting
WANG Ying-kai, WANG Qing-shan
Computer Science. 2021, 48 (7): 333-339.  doi:10.11896/jsjkx.201100154
Due to the increasing popularity of the Internet of Things (IoT), the power requirements of IoT terminal equipment are also constantly rising. Energy harvesting technology is a promising solution to equipment energy shortages, as it generates renewable energy. Considering the uncertainty of renewable energy in an unknown environment, IoT terminal equipment needs a reasonable and effective energy allocation strategy to ensure continuous and stable operation of the system. In this paper, a DQN-based deep reinforcement learning energy allocation strategy is proposed, which uses the DQN algorithm to interact directly with the unknown environment to approach the optimal energy allocation strategy without relying on prior knowledge of the environment. Moreover, a pre-training algorithm is proposed to optimize the initialization state and learning rate of the strategy, based on the characteristics of reinforcement learning and time-invariant systems. Simulation results under different channel data conditions show that the proposed energy allocation strategy outperforms existing strategies under different channel conditions, and has a strong ability to learn in changing scenarios.
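To illustrate the reinforcement learning formulation, the sketch below runs tabular Q-learning on a toy battery model (the paper uses a DQN with function approximation; a table suffices at this scale). The state, action and reward definitions, the harvesting model, and all parameter values are assumptions for illustration.

```python
import numpy as np

def q_learning_energy(n_levels=6, harvest_p=0.5, steps=2000,
                      alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning sketch of energy allocation under harvesting.
    State = battery level; action = units of energy spent transmitting;
    reward = log(1 + a), a stand-in for the achievable rate."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_levels, n_levels))
    s = n_levels // 2
    for _ in range(steps):
        acts = np.arange(s + 1)                 # cannot spend more than stored
        if rng.random() < eps:
            a = rng.choice(acts)                # explore
        else:
            a = acts[np.argmax(Q[s, acts])]     # exploit
        r = np.log1p(a)
        harvest = rng.random() < harvest_p      # stochastic energy arrival
        s_next = min(s - a + int(harvest), n_levels - 1)
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
    return Q
```

Replacing the Q-table with a neural network approximator, plus experience replay and a target network, yields the DQN setting the paper builds on.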
Dynamic Broadcasting Strategy in Cognitive Radio Networks Under Delivery Deadline
FANG Ting, GONG Ao-yu, ZHANG Fan, LIN Yan, JIA Lin-qiong, ZHANG Yi-jin
Computer Science. 2021, 48 (7): 340-346.  doi:10.11896/jsjkx.200900001
In cognitive radio networks with delivery deadline requirements, secondary users (SUs) need to opportunistically broadcast messages over the channel unoccupied by primary users (PUs) within a given delivery deadline. For this scenario, this paper proposes a new dynamic broadcasting strategy under a delivery deadline, which allows each SU to adjust its transmission probability according to the carrier-sensing observation in each slot, the remaining time before deadline expiration, and the PU traffic model. Based on an ideal assumption about the carrier-sensing observation, this paper obtains an optimal broadcasting strategy and the maximum network reliability using a Markov decision process (MDP). Then, for a practical carrier-sensing capability, this paper proposes a heuristic broadcasting strategy and uses another MDP to obtain its network reliability. Simulation results verify the accuracy of the analysis and show that the network reliability of the proposed heuristic strategy is very close to the maximum network reliability under the ideal observation, and is superior to that of the optimal static strategy.
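The backward induction over the remaining-time state can be illustrated with a toy version of such an MDP: in each slot the PU leaves the channel idle with a fixed probability, and a slot succeeds when exactly one of the contending SUs transmits. This slotted-ALOHA success model and all parameter values are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def broadcast_mdp(deadline, n_su=3, p_idle=0.7, grid=101):
    """Backward induction over remaining time: V[t] is the delivery
    probability with t slots left when each of n_su SUs transmits with
    the chosen per-slot probability in a PU-idle slot."""
    taus = np.linspace(0.0, 1.0, grid)
    # Per-slot success: channel idle AND exactly one SU transmits.
    succ = p_idle * n_su * taus * (1 - taus) ** (n_su - 1)
    V = np.zeros(deadline + 1)        # V[0] = 0: deadline expired
    policy = np.zeros(deadline + 1)
    for t in range(1, deadline + 1):
        vals = succ + (1 - succ) * V[t - 1]   # succeed now, or retry later
        policy[t] = taus[np.argmax(vals)]
        V[t] = vals.max()
    return V, policy
```

In this symmetric toy model the optimal per-slot transmission probability sits near 1/n_su, and reliability grows monotonically with the remaining time, which is the shape of result the MDP analysis delivers.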