Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
CODEN JKIEBK

Course Design and Redesign for Introduction to Data Science
CHAO Lemen. Course Design and Redesign for Introduction to Data Science [J]. Computer Science (计算机科学), 2020, 47(7): 1-7. doi:10.11896/jsjkx.200500088

Introduction to Data Science is an essential course not only for the development of emerging majors (Data Science and Big Data Technology, Big Data Management and Application, and so on), but also for the innovation of traditional ones (Computer Science and Technology, Statistics, Information Resource Management, and so on). Course design issues for this novel course, including its objectives, contents, experiments, assessment methods, reference books, and personalized design, are discussed based on an in-depth analysis of typical courses offered by Columbia University, New York University, Harvard University and Renmin University of China, as well as the author's teaching experience. The redesign of existing courses on introduction to Data Science should focus on improving target students' abilities in full-stack data science, data product development, coding for data science, and communicating with non-professional users, as well as on leveraging alternative course construction models, reflecting social needs, and highlighting the course's roadmap role in the curriculum.

Polynomial Time Algorithm for Hamilton Circuit Problem
JIANG Xinwen. Polynomial Time Algorithm for Hamilton Circuit Problem [J]. Computer Science (计算机科学), 2020, 47(7): 8-20. doi:10.11896/jsjkx.191200176

The 'P vs. NP problem' is the most famous and difficult problem in mathematics and computer science. Among the seven most important open problems declared by the Clay Mathematics Institute in 2000, it ranks first. A list of 25 difficult problems that humanity urgently wants to solve, published by Science in 2005, also contains the 'P vs. NP problem', ranked 19th, and it ranks 19th again in Science's latest list of the 125 most important questions. Modern cryptography is based on the assumption that NP≠P, and whether NP=P depends on the complexity of solving an NP-complete problem. This paper proposes a new NP problem called MSP and designs a polynomial time algorithm to solve it. To prove that the MSP problem is NP-complete, several papers were published in recent years that reduced more than 10 NP-complete problems to the MSP problem and, conversely, reduced the MSP problem to the SAT problem. Hence the result may be helpful for proving NP=P.

Uncertain XML Model Based on Fuzzy Sets and Probability Distribution and Its Algebraic Operations
HU Lei, YAN Li. Uncertain XML Model Based on Fuzzy Sets and Probability Distribution and Its Algebraic Operations [J]. Computer Science (计算机科学), 2020, 47(7): 21-30. doi:10.11896/jsjkx.190700164

As a de facto standard for information representation and exchange, XML has been widely used as a unified data exchange format between different applications and plays an important role in real-world applications. However, the real world is filled with uncertain information, and classical XML cannot represent or deal with uncertain data, so it is necessary to extend the classical XML model. The real world is complex and often contains both random and fuzzy uncertainties. Considering that probability theory and fuzzy set theory are powerful tools for dealing with uncertainty, this paper uses both to build a new uncertain XML model, which differs from the existing fuzzy XML models and probabilistic XML models. The new uncertain XML model is compatible with existing XML models and can represent more complex uncertain information.

Study on Subnetwork Reliability of k-ary n-cubes
FENG Kai, LI Jing. Study on Subnetwork Reliability of k-ary n-cubes [J]. Computer Science (计算机科学), 2020, 47(7): 31-36. doi:10.11896/jsjkx.190700170

The k-ary n-cube is one of the most attractive interconnection network topologies for parallel computing systems. In order to accurately measure the fault tolerance of subnetworks in a k-ary n-cube, the k-ary (n-1)-cube reliability in a k-ary n-cube under the probabilistic fault model is studied. When k is an odd integer and k≥3, a lower bound on this reliability is derived by clarifying the intersections of k-ary (n-1)-cube subnetworks in a k-ary n-cube, and an approximate reliability result is obtained. The approximation is shown to be close to the simulation result, and the two converge as the node reliability decreases. Moreover, an algorithm is given for searching fault-free k-ary (n-1)-cubes in a k-ary n-cube in the presence of node failures, and experimental results demonstrate its effectiveness.
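For readers unfamiliar with the topology, the following sketch (illustrative only, not the authors' reliability algorithm; all function names are invented for the example) builds a k-ary n-cube in Python: nodes are n-tuples of digits modulo k, two nodes are adjacent when they differ by ±1 (mod k) in exactly one position, and fixing one digit of one dimension yields a k-ary (n-1)-cube subnetwork.

```python
from itertools import product

def kary_ncube(k, n):
    """All k**n nodes, each an n-tuple of digits in 0..k-1."""
    return list(product(range(k), repeat=n))

def kary_ncube_neighbors(node, k):
    """Neighbors of a node: change one digit by +1 or -1 modulo k."""
    nbrs = set()
    for i, d in enumerate(node):
        for delta in (1, k - 1):  # +1 and -1 modulo k
            nbrs.add(node[:i] + ((d + delta) % k,) + node[i + 1:])
    return nbrs

def subcube(nodes, dim, value):
    """A k-ary (n-1)-cube subnetwork: fix one digit in one dimension."""
    return [v for v in nodes if v[dim] == value]
```

For k > 2 every node has exactly 2n neighbors, which is why the 3-ary 2-cube below is 4-regular.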

Blind Quantum Computation over Noise Channels
LUO Wenjun, LEI Shuang. Blind Quantum Computation over Noise Channels [J]. Computer Science (计算机科学), 2020, 47(7): 37-41. doi:10.11896/jsjkx.190600020

Blind Quantum Computation (BQC) is a kind of protocol, markedly different from traditional quantum computation, that delegates computing tasks from clients to servers through quantum channels, thereby alleviating the clients' computing burden. Consequently, BQC requires that qubits be transmitted accurately via those channels. Because quantum channels are noisy, a purely noiseless transmission channel cannot be realized in practice without quantum error correction codes, which rectify the bit-flip and phase-flip errors caused by noisy channels. On the basis of a BQC protocol, two anti-noise BQC protocols are proposed, for bit-flip noise channels and phase-flip noise channels respectively. Explicitly, the client encodes the qubits in various ways, then the encoded qubits are used to transmit the quantum information to the server, where quantum error correction codes are exploited to recover the correct quantum information and complete the BQC with the client. A protocol analysis indicates that, via error correction, the accurate-transmission requirement of the BQC protocol can be met over quantum bit-flip and phase-flip noise channels, without sacrificing the correctness and blindness of BQC or reducing the unconditional security of quantum computing. Finally, it is hoped that the new BQC protocols can be applied with other quantum error correction codes as well.
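The bit-flip errors mentioned above are the kind a repetition code corrects. As a hedged classical simulation (the actual protocols operate on qubits; this toy only reproduces the error statistics of the 3-qubit bit-flip code under independent flips):

```python
import random

def encode(bit):
    """|0> -> |000>, |1> -> |111>, simulated classically."""
    return [bit] * 3

def bit_flip_channel(codeword, p, rng):
    """Flip each bit independently with probability p."""
    return [b ^ (rng.random() < p) for b in codeword]

def decode(codeword):
    """Majority vote corrects any single flip."""
    return 1 if sum(codeword) >= 2 else 0

rng = random.Random(0)
p, trials = 0.1, 10000
errors = sum(decode(bit_flip_channel(encode(0), p, rng)) != 0
             for _ in range(trials))
# Logical error rate is 3p^2 - 2p^3 = 0.028 in theory, well below the raw p = 0.1
```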

Reward Mechanism Based Branching Strategy for SAT Solver
SHEN Xue, CHEN Shuwei, AI Senyang. Reward Mechanism Based Branching Strategy for SAT Solver [J]. Computer Science (计算机科学), 2020, 47(7): 42-46. doi:10.11896/jsjkx.190700191

Branching decision is one of the most critical parts of a CDCL solver, and an excellent branching strategy can reduce the number of branching decisions and improve the SAT solver's efficiency. Most state-of-the-art branching strategies incorporate the conflict analysis process, but they reward the variables participating in conflict analysis differently, so the selected branching variables differ. In this paper, considering the important fact that decision variables are always selected from the set of unassigned variables, a new branching strategy based on the EVSIDS (Exponential Variable State Independent Decaying Sum) strategy is proposed, called the reward-mechanism-based branching strategy (RACT). RACT rewards again the variables that are unassigned during conflict analysis, increasing the probability that unassigned variables frequently involved in conflict analysis are selected as branching variables. Finally, the proposed strategy is embedded into the Glucose 4.1 solver to form a new solver, Glucose4.1+RACT, and its effectiveness is tested on 350 instances from the 2017 SAT competition. The comparison shows that Glucose4.1+RACT solves more instances than the original solver, with a 13.5% increase in the number of satisfiable instances solved, and the total time spent on the 350 competition instances is 3.9% lower than that of Glucose 4.1. These results show that the proposed branching strategy can effectively reduce the number of branching decisions in the search tree and narrow the search space, thus improving the solving ability of the SAT solver.
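The EVSIDS scheme the strategy builds on can be sketched as follows; this is a generic MiniSat-style illustration, not the RACT variant itself, and the class and method names are invented for the example:

```python
class EVSIDS:
    """Exponentially decaying variable activities (MiniSat-style EVSIDS)."""

    def __init__(self, nvars, decay=0.95):
        self.act = [0.0] * nvars
        self.inc = 1.0
        self.decay = decay

    def on_conflict(self, involved_vars):
        """Bump variables seen in conflict analysis, then decay older bumps."""
        for v in involved_vars:
            self.act[v] += self.inc
            if self.act[v] > 1e100:  # rescale to avoid float overflow
                self.act = [a * 1e-100 for a in self.act]
                self.inc *= 1e-100
        self.inc /= self.decay  # growing increment = relative decay of old scores

    def pick_branch(self, unassigned):
        """Branch on the unassigned variable with the highest activity."""
        return max(unassigned, key=lambda v: self.act[v])
```

RACT, per the abstract, additionally re-rewards variables that become unassigned during conflict analysis; that extra bump is not modeled here.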

Techniques for Recommendation System:A Survey
LIU Junliang, LI Xiaoguang. Techniques for Recommendation System: A Survey [J]. Computer Science (计算机科学), 2020, 47(7): 47-55. doi:10.11896/jsjkx.200200114

A recommendation system obtains users' historical behavior data, such as web browsing data, purchase records, social network information and geographical location, to predict their preferences. In the early stage, recommendation technology was mainly based on decomposition of the user-item data matrix. Later it gradually integrated data mining, machine learning, artificial intelligence and other technologies, so as to mine users' latent preferences more deeply and build more accurate preference models. The recommendation process has also moved from static prediction to real-time recommendation, enriching the results through real-time interaction with users. This paper reviews the key technologies adopted by recommendation systems in different periods, including content-based filtering, collaborative filtering, and recommendation techniques based on deep learning, reinforcement learning and heterogeneous information networks. Finally, it analyzes the advantages and disadvantages of these key technologies and looks forward to the future development of recommendation systems.
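The early matrix decomposition approach mentioned above can be illustrated with a minimal SGD factorization of a user-item rating matrix (a textbook sketch; the function name and hyperparameters are illustrative, not from any surveyed system):

```python
import numpy as np

def factorize(R, mask, rank=2, lr=0.01, reg=0.02, epochs=2000, seed=0):
    """Fit observed entries of R ~ P @ Q.T by stochastic gradient descent."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = 0.1 * rng.standard_normal((n_users, rank))
    Q = 0.1 * rng.standard_normal((n_items, rank))
    observed = list(zip(*np.nonzero(mask)))
    for _ in range(epochs):
        for u, i in observed:
            err = R[u, i] - P[u] @ Q[i]           # prediction error
            P[u] += lr * (err * Q[i] - reg * P[u])  # regularized gradient steps
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q
```

Unobserved entries of `P @ Q.T` then serve as predicted preferences.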

Research Progress on Risk Access Control
WANG Jingyu, LIU Sirui. Research Progress on Risk Access Control [J]. Computer Science (计算机科学), 2020, 47(7): 56-65. doi:10.11896/jsjkx.190700157

Big data access control is one of the important technologies for ensuring the security and sharing of big data. However, because traditional access control strategies cannot handle real-time and dynamic access information in dynamic environments, risk assessment methods are introduced into access control to coordinate access control policies and improve the applicability of access control in dynamic environments. In view of this, this paper systematically reviews and summarizes the main work on risk access control at home and abroad, and analyzes the latest research results of recent years. Firstly, risk access control as an extension of traditional access control models and of the XACML-framework-based access control model is analyzed and summarized, along with its application in different environments. Secondly, the techniques and methods of risk access control, including risk quantification, are summarized and analyzed, and Risk-Adaptive Access Control (RAdAC) is analyzed and studied. Finally, future research on risk access control in the big data environment is discussed, and some problems worth studying are proposed. This paper argues that risk-based access control will remain an important research topic within access control for big data.

Subspace Clustering Method Based on Block Diagonal Representation and Neighbor Constraint
GAO Fangyuan, WANG Xiumei. Subspace Clustering Method Based on Block Diagonal Representation and Neighbor Constraint [J]. Computer Science (计算机科学), 2020, 47(7): 66-70. doi:10.11896/jsjkx.190600155

Clustering is an important tool for machine learning and data mining, and subspace clustering is a popular method in high-dimensional data analysis. Spectral-clustering-based subspace clustering methods learn a self-representation coefficient matrix of the data in the subspace, and then carry out spectral clustering on the coefficient matrix. However, such methods cannot deal with nonlinear problems and neglect the local geometric structure of the data. To this end, this paper proposes a new subspace clustering method which first projects the data into a high-dimensional linear space by a nonlinear mapping function and, at the same time, applies a Laplacian-based manifold regularization constraint to the subspace clustering model to preserve the local structure of the data. Three kinds of Laplacian matrices are used to establish different nonlinear subspace clustering models based on manifold regularization and block diagonal constraints. Experimental results on different data sets show the effectiveness of the proposed methods.
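The spectral clustering step described above, from self-representation coefficients to a Laplacian embedding, can be sketched as follows (a generic illustration, not the paper's manifold-regularized model; function names are invented):

```python
import numpy as np

def normalized_laplacian(C):
    """Symmetric affinity from a self-representation matrix C, then L_sym."""
    W = 0.5 * (np.abs(C) + np.abs(C).T)   # symmetrize coefficient magnitudes
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d, 1.0) ** -0.5
    return np.eye(len(W)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]

def spectral_embedding(C, k):
    """Rows of the k smallest eigenvectors of L_sym feed k-means."""
    vals, vecs = np.linalg.eigh(normalized_laplacian(C))
    return vecs[:, :k]
```

A block-diagonal C (the structure the block diagonal constraint promotes) yields one zero eigenvalue per block, which is what makes the subsequent k-means separation easy.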

Nonnegative Matrix Factorization Algorithm with Hypergraph Based on Pretreatments
LI Xiangli, JIA Mengxue. Nonnegative Matrix Factorization Algorithm with Hypergraph Based on Pretreatments [J]. Computer Science (计算机科学), 2020, 47(7): 71-77. doi:10.11896/jsjkx.200200106

With the development of media technology, more and more information is stored as images. How to assign the right labels to large numbers of unlabeled images is a hot problem in machine learning, and image clustering has wide application in face recognition and handwritten digit recognition. Because images are usually stored as nonnegative matrices, the nonnegative matrix factorization (NMF) algorithm plays an important role in image clustering. A disadvantage of NMF, however, is that it processes the data in the original data space, which may produce poor results when the data contain errors. To address this problem, a nonnegative matrix factorization algorithm with a hypergraph based on pretreatments (PHGNMF) is proposed. PHGNMF introduces pretreatments and a hypergraph into the NMF algorithm. In the pretreatment stage, the algorithm first uses grayscale normalization to eliminate the influence of different illuminations, and then extracts the low-frequency information of the images by wavelet analysis, which also reduces the dimensionality of the data. The algorithm then constructs a hypergraph over the data to preserve the neighborhood information that strongly influences clustering. Finally, results on five standard data sets confirm the effectiveness of the algorithm compared with baseline algorithms, showing an increase in accuracy of 2%-7% and an increase in normalized mutual information of 2%-5% on some data sets.
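The NMF core that PHGNMF builds on can be illustrated with the standard Lee-Seung multiplicative updates (a generic sketch; PHGNMF's pretreatments and hypergraph regularization term are not included):

```python
import numpy as np

def nmf(V, r, iters=500, seed=0, eps=1e-9):
    """Lee-Seung multiplicative updates for V ~ W @ H under Frobenius loss."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 0.1   # positive init keeps updates nonnegative
    H = rng.random((r, m)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

For image clustering, columns of V are vectorized images and the column of H with the largest entry gives each image's cluster.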

Mining Method of Business Process Change Based on Cost Alignment
LIU Jing, FANG Xianwen. Mining Method of Business Process Change Based on Cost Alignment [J]. Computer Science (计算机科学), 2020, 47(7): 78-83. doi:10.11896/jsjkx.190600140

Change mining is at the core of business process management, and mining the changes of business processes from the event log is particularly important. Most existing analysis methods for change mining focus on the source model or the target model. From the point of view of the system log, this paper proposes a business process change mining method based on cost-optimal alignment. Firstly, according to the event log, effective high-frequency occurrence segments are extracted and the highest cost function value of each trace alignment is calculated; on this basis, the optimal trace alignment is found, and then the similarity between the change log and the source log is measured to mine the change set quickly and efficiently. Finally, an example demonstrates the effectiveness of the method.

Survey of Visual Image Saliency Detection
YUAN Ye, HE Xiaoge, ZHU Dingkun, WANG Fulee, XIE Haoran, WANG Jun, WEI Mingqiang, GUO Yanwen. Survey of Visual Image Saliency Detection [J]. Computer Science (计算机科学), 2020, 47(7): 84-91. doi:10.11896/jsjkx.190900006

In today's society, where image data are exploding, how to use computers to efficiently acquire and process image information has become an important research topic. Inspired by the human visual attention mechanism, researchers have found that introducing this mechanism into machine image processing tasks can greatly improve the efficiency of information extraction, thus saving limited computing resources. Visual image saliency detection uses computers to simulate the human visual attention mechanism and compute the importance of each part of an image; it has been widely applied in image segmentation, video compression, target detection, image indexing and other areas, and has important research value. This paper surveys image saliency detection algorithms. Firstly, taking information-driven sources as the starting point, it summarizes saliency detection models and then analyzes several typical saliency detection algorithms. The models are divided into three categories according to whether they are based on learning: non-learning models, traditional machine learning models and deep learning models. For the first category, the paper compares in detail saliency detection algorithms based on local contrast and on global contrast, pointing out their respective advantages and disadvantages. For the latter two categories, it analyzes the application of machine learning and deep learning to saliency detection. Finally, the paper summarizes and compares existing saliency detection algorithms and discusses future research directions.
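The local-contrast idea compared in the survey can be illustrated with a minimal center-surround saliency map (an illustrative toy, not any specific surveyed algorithm):

```python
import numpy as np

def local_contrast_saliency(gray, radius=2):
    """Saliency as each pixel's absolute difference from its neighborhood mean."""
    h, w = gray.shape
    pad = np.pad(gray.astype(float), radius, mode='edge')
    sal = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            sal[i, j] = abs(gray[i, j] - window.mean())
    return sal / (sal.max() + 1e-9)  # normalize to [0, 1]
```

Global-contrast methods instead compare each pixel (or region) against statistics of the whole image, which this sketch does not do.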

Injection-molded Bottle Defect Detection Using Semi-supervised Deep Convolutional Generative Adversarial Network
XIE Yuan, MIAO Yubin, XU Fenglin, ZHANG Ming. Injection-molded Bottle Defect Detection Using Semi-supervised Deep Convolutional Generative Adversarial Network [J]. Computer Science (计算机科学), 2020, 47(7): 92-96. doi:10.11896/jsjkx.190700093

Defect detection of injection-molded bottles is an important part of injection molding. Because defective samples are relatively rare in production, deep learning models tend to overfit. To solve this problem, a defect detection model based on a semi-supervised deep convolutional generative adversarial network (DCGAN) is proposed. Firstly, the model preprocesses the original images using HSV color space transformation and Otsu threshold segmentation. Then the learning tasks are combined so that the unsupervised discriminator and the supervised classifier share convolutional parameters, and the loss function is modified to consist of cross entropy and the Wasserstein distance. Finally, the model is fine-tuned with the Adam optimizer. Experimental results show that the model can distinguish defective samples, achieving an accuracy of 98.65%. Compared with traditional machine learning algorithms and a CNN model with data augmentation, the proposed model avoids overfitting.
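The Otsu threshold segmentation used in the preprocessing step can be sketched in plain NumPy (the standard textbook formulation, not the paper's code):

```python
import numpy as np

def otsu_threshold(gray):
    """Threshold maximizing between-class variance on a 0..255 grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (prob[:t] * levels[:t]).sum() / w0  # class means
        mu1 = (prob[t:] * levels[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2          # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```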

Vehicle Self-localization Based on Standard Road Sign
ZHANG Shanbin, YUAN Jinzhao, CHEN Hui, WANG Yurong, WANG Jie, TU Changhe. Vehicle Self-localization Based on Standard Road Sign [J]. Computer Science (计算机科学), 2020, 47(7): 97-102. doi:10.11896/jsjkx.190900011

Vehicle self-localization is one of the key technologies of automatic driving and advanced driver assistance. Fast and accurate self-localization can provide vehicle location information to the navigation or intelligent driving system in time. Aiming at the problem of vehicle positioning in complex environments, a vehicle self-localization method based on standard road signs is proposed. A simple database of standard road signs is designed, in which information such as fonts, sizes and control point coordinates of the road signs is pre-stored. Video stream images containing the road signs are captured by a vehicle-mounted monocular camera. Centroid coordinates of the sign regions are extracted as control points, and the planar projective transformation matrix between each video frame and the database reference image is calculated. Motion constraints and matrix decomposition are used to obtain a stable position of the on-board camera. Experimental tests in a real road environment show that the positioning accuracy of the proposed method can reach 0.1 meters within 30 meters of a sign, and 0.05 meters within 20 meters. The method is low-cost, simple and reliable, and can use an on-board monocular camera and standard road signs to realize precise self-localization of vehicles in complex traffic sections.
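The planar projective (homography) transformation estimated from control points can be sketched with the standard DLT algorithm (illustrative; the paper's motion constraints and matrix decomposition are not reproduced):

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct linear transform: H with dst ~ H @ src (homogeneous), >= 4 pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)   # null vector of A = flattened H
    return H / H[2, 2]

def apply_h(H, pt):
    """Map a 2-D point through a homography (homogeneous divide)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

In the paper's setting, `src` would be the pre-stored control points of the reference sign and `dst` the centroids extracted from a video frame; decomposing H given the camera intrinsics then yields the camera pose.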

Hand Gesture Recognition Based on Self-adaptive Multi-classifiers Fusion
LIU Xiao, YUAN Guan, ZHANG Yanmei, YAN Qiuyan, WANG Zhixiao. Hand Gesture Recognition Based on Self-adaptive Multi-classifiers Fusion [J]. Computer Science (计算机科学), 2020, 47(7): 103-110. doi:10.11896/jsjkx.200100073

In order to improve the performance of hand gesture recognition based on wearable devices, a hand gesture recognition method based on self-adaptive multi-classifier fusion (SAMCF) is proposed to overcome the bias of any single classifier. First, because statistical features cannot characterize intra-class variability and the similarity between complex gestures, SAMCF uses a convolutional neural network (CNN) to automatically extract deep features with strong representation capability. Then SAMCF recognizes the extracted feature vectors with multiple basic classifiers and determines the optimal recognition result through a self-adaptive fusion algorithm, which overcomes single-classifier bias. The robustness and generalization ability of the model are verified on a data set collected with a data glove. The experimental results show that SAMCF can effectively extract deep gesture features, overcome single-classifier bias, and improve the efficiency and performance of hand gesture recognition. The recognition accuracy for character-level gestures (American Sign Language and Arabic numerals) is 98.23%, 5% higher than other algorithms on average; the recognition accuracy for word-level gestures (Chinese Sign Language) is 97.81%, 4% higher than other algorithms on average.
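The fusion step can be illustrated with a simple accuracy-weighted vote (a hedged sketch of one plausible fusion rule; the paper's self-adaptive algorithm may differ, and all names here are invented):

```python
from collections import defaultdict

def accuracy_weights(val_preds, val_truth):
    """Derive fusion weights from each base classifier's validation accuracy."""
    return {name: sum(p == t for p, t in zip(preds, val_truth)) / len(val_truth)
            for name, preds in val_preds.items()}

def fuse(predictions, weights):
    """Weighted vote over base-classifier predictions {name: label}."""
    score = defaultdict(float)
    for name, label in predictions.items():
        score[label] += weights.get(name, 1.0)
    return max(score, key=score.get)
```

The weights let a classifier that performed well on held-out data override two weaker classifiers that agree with each other only marginally.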

Face Recognition Based on Cluster Analysis in Complex Environment
GAO Yutong, LEI Weimin, YUAN Yue. Face Recognition Based on Cluster Analysis in Complex Environment [J]. Computer Science (计算机科学), 2020, 47(7): 111-117. doi:10.11896/jsjkx.190500004

In modern society, face recognition technology is used in an increasing variety of fields. Meanwhile, problems of social security and international security are becoming more serious, so face recognition faces more severe challenges. In a complicated environment the detection target and background are complex and dynamic, and traditional face recognition technology cannot meet the growing demand. Therefore, this paper optimizes the traditional SIFT (Scale-Invariant Feature Transform) algorithm with cluster analysis, classifying object features according to the principle of cluster analysis, so that the clustering results better conform to the set threshold and matching efficiency is improved. The results show that the improved SIFT algorithm can eliminate the interference of irrelevant background and realize complete connection of image matching points. To verify its effectiveness, the improved algorithm is compared with common algorithms on several widely used databases; the results show that the clustering-based SIFT outperforms the other algorithms on CAS-PEAL-R1, CFP and Multi-PIE, and has better application effect and applicability.
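Descriptor matching of the kind the improved SIFT pipeline relies on can be sketched with nearest-neighbor matching plus Lowe's ratio test (generic illustration; the paper's clustering optimization itself is not reproduced):

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbor in desc_b,
    keeping only matches that pass Lowe's ratio test."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, j2 = np.argsort(dists)[:2]          # best and second-best candidates
        if dists[j] < ratio * dists[j2]:       # reject ambiguous matches
            matches.append((i, int(j)))
    return matches
```

The clustering idea in the paper pre-groups features so that this nearest-neighbor search runs only within a candidate cluster instead of over all descriptors.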

Automatic Recognition of ECG Based on Stacked Bidirectional LSTM
WANG Wendao, WANG Runze, WEI Xinlei, QI Yunliang, MA Yide. Automatic Recognition of ECG Based on Stacked Bidirectional LSTM [J]. Computer Science (计算机科学), 2020, 47(7): 118-124. doi:10.11896/jsjkx.190600161

For the growing demand of ECG data analysis, a new ECG classification algorithm is proposed. Firstly, the original data are truncated to fixed length and the samples are balanced, and preprocessing operations such as computing the instantaneous frequency and spectral entropy of the signal are performed. After preprocessing, the model can better extract features from the data. For training, a stacked bidirectional LSTM network is adopted. The stacked bidirectional LSTM is an improved recurrent neural network model; compared with convolutional neural networks, recurrent neural networks are more suitable for processing sequence data such as electrocardiograms. The experiments are conducted with MATLAB 2018b under Windows (CUDA version 9.0), using classification accuracy to measure performance. The model is tested on two data sets: on the data of the 2017 Physiological Signal Challenge (the 2017 dataset) the final classification accuracy is 97.4%, and on the data of the 2018 Physiological Signal Challenge (the 2018 dataset) it is 77.6%, with which the authors' team ranked third among the MATLAB entries. Compared with a traditional LSTM network, the algorithm improves accuracy by 5.6% on the 2017 dataset and 7.6% on the 2018 dataset; compared with a single-layer bidirectional LSTM network, it improves accuracy by 4.2% on the 2017 dataset and 5.7% on the 2018 dataset, which fully verifies the feasibility and advantages of the algorithm.
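The spectral entropy feature used in preprocessing can be computed as the Shannon entropy of the normalized power spectrum (a standard definition in NumPy rather than MATLAB; the exact windowing in the paper may differ):

```python
import numpy as np

def spectral_entropy(x):
    """Normalized Shannon entropy of the power spectrum of a 1-D signal."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    p = psd / psd.sum()                 # treat spectrum as a distribution
    p = p[p > 0]                        # avoid log(0)
    return float(-(p * np.log2(p)).sum() / np.log2(len(psd)))  # in [0, 1]
```

A near-periodic waveform (power in few bins) gives entropy near 0, while broadband noise gives entropy near 1, which is why the feature helps separate rhythmic ECG segments from noisy ones.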

Text-Video Feature Learning Based on Clustering Network
ZHANG Heng, MA Mingdong, WANG Deyu. Text-Video Feature Learning Based on Clustering Network [J]. Computer Science (计算机科学), 2020, 47(7): 125-129. doi:10.11896/jsjkx.190700006

Joint understanding of video content and text semantics has been widely researched in many fields. Early research mainly maps text and video into a common vector space; however, one problem faced by this approach is the lack of large-scale text-video datasets. Because video data carry much redundant information, extracting whole-video features directly through a 3D network leads to many network parameters and poor real-time performance, which is not conducive to video tasks. To solve these problems, this paper proposes aggregating local video features with a clustering network, and training the network model on image and video datasets simultaneously, effectively alleviating the problem of missing video modalities; the influence of the face modality on the recall task is also compared. An attention mechanism is added to the clustering network, which makes the network pay more attention to the modalities strongly related to the text semantics, so as to increase text-video similarity scores and improve model accuracy. The experimental results show that text-video feature learning based on a clustering network can map text and video into a common vector space, so that text and video with similar semantics are close to each other and those with dissimilar semantics are far apart. On the MPII and MSR-VTT datasets, the performance of the model on the text-video recall task is improved compared with other models, which fully proves that the learned common vector space can be used for the text-video recall task.
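The common-vector-space recall task can be illustrated with cosine-similarity ranking of video embeddings against a text embedding (a minimal sketch with toy embeddings; the function name and recall formulation are illustrative):

```python
import numpy as np

def recall_at_k(text_emb, video_embs, true_idx, k=5):
    """Rank videos by cosine similarity to a text embedding in the shared space;
    return 1 if the ground-truth video appears in the top k, plus the full ranking."""
    t = text_emb / np.linalg.norm(text_emb)
    V = video_embs / np.linalg.norm(video_embs, axis=1, keepdims=True)
    order = np.argsort(-(V @ t))   # most similar video first
    return int(true_idx in order[:k]), order
```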

Image Denoising by Mixing 3D Block Matching with Harmonic Filtering in Transform Domain
WU Jing, ZHOU Xianchun, XU Xinju, HUANG Jin. Image Denoising by Mixing 3D Block Matching with Harmonic Filtering in Transform Domain [J]. Computer Science (计算机科学), 2020, 47(7): 130-134. doi:10.11896/jsjkx.190600120

To remedy the defects that current denoising algorithms lack analysis of the integral structure and have excessive computational complexity, this paper proposes an improved denoising algorithm that uses a harmonic filtering diffusion model in the wavelet domain to amend the BM3D technique. Firstly, the algorithm uses the Euclidean distance to merge similar 2D image fragments, obtaining 3D data arrays. These are processed by collaborative filtering, and a pre-estimate of the image is obtained by inverse 3D transformation. Wavelet decomposition is then used to extract the high-frequency part of the pre-denoised image for filtering. Lastly, wavelet reconstruction is conducted to estimate the original image and avoid edge ambiguity. A Laplacian of Gaussian is used to construct a new operator in the diffusion model for filtering, so as to balance operation speed and denoising performance and preserve the complete structure of the image information. The experimental results show that the new algorithm has excellent denoising performance, better preservation of internal information, and short running time, which is beneficial to practical applications.
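The first step, grouping similar 2D fragments into a 3D data array by Euclidean distance, can be sketched as below; this is a hypothetical NumPy illustration of the block-matching step only, not the full BM3D pipeline:

```python
import numpy as np

def group_similar_blocks(blocks, ref_idx, max_group=8):
    """Group image fragments similar to a reference block (BM3D-style step).

    blocks: (m, p, p) array of image patches.  The patches closest to
    the reference in Euclidean distance are stacked into a 3D data
    array, which collaborative transform-domain filtering then denoises.
    """
    ref = blocks[ref_idx].ravel()
    dists = np.linalg.norm(blocks.reshape(len(blocks), -1) - ref, axis=1)
    chosen = np.argsort(dists)[:max_group]   # the reference itself is included
    return blocks[chosen]                    # (max_group, p, p) 3D stack
```

In the full algorithm a 3D transform is applied to each such stack, the coefficients are filtered jointly, and the stack is inverse-transformed back to patch estimates.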

Complex Scene Text Detection Based on Attention Mechanism
刘燕, 温静. 基于注意力机制的复杂场景文本检测[J]. 计算机科学, 2020, 47(7): 135-140.
LIU Yan, WEN Jing. Complex Scene Text Detection Based on Attention Mechanism[J]. Computer Science, 2020, 47(7): 135-140.
Computer Science. 2020, 47(7): 135-140. doi:10.11896/jsjkx.190600157

Most traditional text detection methods are developed in a bottom-up manner: they usually start with low-level semantic character or stroke detection, followed by non-text component filtering, text line construction, and text line validation. However, the shape, scale, typesetting, and surrounding environment of characters in complex scenes change drastically, and humans detect text across a variety of visual granularities. It is difficult for these bottom-up methods to maintain text features under different resolutions because of their dependency on low-level features. Recently, deep learning methods have been widely used in text detection to extract more features at different scales. However, in existing methods, the key feature information is not emphasized during feature extraction at each layer and is lost in the layer-to-layer feature mapping process; the missing information leads to many false alarms and missed detections, which in turn waste time. This paper proposes a complex scene text detection method based on an attention mechanism. Its main contribution is to introduce a visual attention layer into VGG16 and use the attention mechanism to enhance the salient information within the global information of the network. Experiments show that, in an Ubuntu environment with a GPU, this method ensures the integrity of text areas when detecting text in complex scene pictures, reduces the fragmentation of detection areas, and achieves up to 87% recall and 89% precision.
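The idea of gating backbone features with a learned attention map can be sketched as follows; the shapes and the sigmoid gating are illustrative assumptions, not the paper's exact visual attention layer:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def visual_attention(fmap, w):
    """Minimal spatial-attention gate (hypothetical shapes).

    fmap: (C, H, W) feature map from a convolutional backbone such as VGG16
    w:    (C,) learned weights scoring each channel's contribution
    A per-pixel saliency map is computed from a weighted channel sum,
    squashed to (0, 1), and used to re-weight the features so that
    text-salient regions are emphasized before later layers.
    """
    saliency = sigmoid(np.tensordot(w, fmap, axes=(0, 0)))  # (H, W) in (0, 1)
    return fmap * saliency[None, :, :]
```

Because the gate is multiplicative and bounded in (0, 1), it can only suppress or preserve features, which is what keeps the salient text regions dominant in later layers.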

Review of Information Cascade Prediction Methods Based on Deep Learning
张志扬, 张凤荔, 谭琪, 王瑞锦. 基于深度学习的信息级联预测方法综述[J]. 计算机科学, 2020, 47(7): 141-153.
ZHANG Zhiyang, ZHANG Fengli, TAN Qi, WANG Ruijin. Review of Information Cascade Prediction Methods Based on Deep Learning[J]. Computer Science, 2020, 47(7): 141-153.
Computer Science. 2020, 47(7): 141-153. doi:10.11896/jsjkx.200300130

Online social media greatly promotes the generation and transmission of information, intensifies the communication and interaction among massive amounts of information, and highlights the importance of predicting information cascades. In recent years, deep learning has been widely used in the field of information cascade prediction. This paper classifies, organizes, and summarizes the current research status of deep learning-based information cascade prediction methods and classic algorithms. According to the different emphases of information cascade feature characterization, deep learning-based methods are divided into time-series information cascade prediction methods and topology information cascade prediction methods. The time-series methods are further divided into methods based on random walks and methods based on diffusion paths, and the topology methods are divided into methods based on global topological structure and methods based on neighborhood aggregation. This paper details the principles, advantages, and disadvantages of each type of method, introduces the datasets and evaluation metrics commonly used in information cascade prediction, compares deep learning-based information cascade prediction algorithms in macro- and micro-level prediction scenarios, and discusses technical details commonly used by these algorithms. Finally, it summarizes possible future research directions and development trends in the field.

Improved Salp Swarm Algorithm Based on Levy Flight Strategy
张严, 秦亮曦. 基于Levy飞行策略的改进樽海鞘群算法[J]. 计算机科学, 2020, 47(7): 154-160.
ZHANG Yan, QIN Liangxi. Improved Salp Swarm Algorithm Based on Levy Flight Strategy[J]. Computer Science, 2020, 47(7): 154-160.
Computer Science. 2020, 47(7): 154-160. doi:10.11896/jsjkx.190600068

Aiming at the shortcomings of slow convergence and easily falling into local optima during the optimization process of the salp swarm algorithm (SSA), a Levy flight-based conditional-updating salp swarm algorithm (LECUSSA) is proposed and applied to feature subset selection for classification algorithms. Firstly, the leader position is updated randomly using the long- and short-jump characteristics of the Levy flight strategy, which enhances the global search ability. Secondly, a conditional updating rule for the followers' positions is added so that followers no longer follow blindly, accelerating convergence. The performance of LECUSSA is compared with that of other algorithms on 23 benchmark functions. The algorithm is then applied to the selection of classification feature subsets for the SVM algorithm, and 8 UCI datasets are used to compare classification performance after feature selection. The experimental results show that LECUSSA has good global search ability and fast convergence, and that after feature selection with LECUSSA the feature subset with the best classification accuracy can be found.
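A common way to generate the Levy-distributed jumps used in such a leader update is Mantegna's algorithm; the sketch below is an assumed implementation (the paper may parameterize the update differently):

```python
import math
import random

def levy_step(beta=1.5):
    """One Levy-distributed step length via Mantegna's algorithm."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)   # heavy-tailed: mostly short, rare long jumps

def update_leader(leader, food, lb, ub, scale=0.01):
    """Move the leader around the food source with a Levy jump (sketch)."""
    return [min(ub, max(lb, f + scale * levy_step() * (x - f)))
            for x, f in zip(leader, food)]
```

The heavy tail is the point: most steps stay near the current best (food source), while occasional long jumps let the leader escape local optima.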

Multimodal Optimization Algorithm for Protein Conformation Space
李章维, 肖璐倩, 郝小虎, 周晓根, 张贵军. 蛋白质构象空间的多模态优化算法[J]. 计算机科学, 2020, 47(7): 161-165.
LI Zhangwei, XIAO Luqian, HAO Xiaohu, ZHOU Xiaogen, ZHANG Guijun. Multimodal Optimization Algorithm for Protein Conformation Space[J]. Computer Science, 2020, 47(7): 161-165.
Computer Science. 2020, 47(7): 161-165. doi:10.11896/jsjkx.190600100

Due to the inaccuracy of protein energy models, the mathematically optimal solution does not consistently correspond to the stable native structure of a given protein, and existing methods tend to converge to local optima because of the huge conformational space. To address the inaccuracy of energy models in protein structure prediction and the reliability of high-dimensional conformational space sampling, a dihedral angle similarity based multimodal conformation optimization method (DASOM) for ab initio protein structure prediction is proposed within a population-based framework. Firstly, mode exploration is conducted: the knowledge-based Rosetta coarse-grained energy model is used as the criterion to select new high-quality individuals, increasing the diversity of the population. Then, a dihedral angle similarity model is established to determine similar individuals, as required by the multimodal optimization algorithm. A crowding update strategy is used to refine the existing modes, achieving mode enhancement and yielding more reasonable conformations. Experimental results on 10 test proteins show that the proposed method not only achieves high prediction accuracy but also obtains many metastable protein conformations.
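A dihedral angle similarity between two conformations can be illustrated as a mean cosine of circular angle differences; this is a hypothetical model of such a measure, not necessarily the paper's exact formula:

```python
import math

def dihedral_similarity(a, b):
    """Similarity of two conformations given their dihedral angle lists (degrees).

    Angles are circular quantities, so raw differences are mapped through
    cosine: 1.0 means identical torsions, -1.0 means maximally different.
    (Illustrative model only; DASOM's actual formula may differ.)
    """
    assert len(a) == len(b)
    return sum(math.cos(math.radians(x - y)) for x, y in zip(a, b)) / len(a)
```

A multimodal optimizer can then call this to decide whether two conformations belong to the same mode (similarity above a threshold) before applying crowding updates.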

Pyramid Evolution Strategy Based on Differential Evolution for Solving Onedimensional Cutting Stock Problem
侯改, 何朗, 黄樟灿, 王占占, 谈庆. 基于差分进化的金字塔演化策略求解一维下料问题[J]. 计算机科学, 2020, 47(7): 166-170.
HOU Gai, HE Lang, HUANG Zhangcan, WANG Zhanzhan, TAN Qing. Pyramid Evolution Strategy Based on Differential Evolution for Solving One-dimensional Cutting Stock Problem[J]. Computer Science, 2020, 47(7): 166-170.
Computer Science. 2020, 47(7): 166-170. doi:10.11896/jsjkx.190500014

The one-dimensional cutting stock problem is a classic NP-hard problem in combinatorial optimization, widely encountered in mechanical manufacturing and engineering applications. When solving it, traditional swarm intelligence algorithms find it difficult to balance the contradictions between exploitation and exploration and between competition and cooperation among individuals and sub-populations within a population. On the basis of the pyramid evolution strategy (PES), this paper proposes an algorithm combining PES with differential evolution. The algorithm makes full use of the advantages of PES and resolves these two contradictions well; however, PES does not consider the cooperative relationship between the current individual and the optimal individual, which may slow the convergence of the algorithm. To this end, the mutation and crossover operations of differential evolution are introduced into the acceleration phase of PES, and invalid individuals are corrected by a repair operator. On the one hand, this speeds up convergence; on the other hand, it exploits the differences among individuals to perform local mining of the area near individuals in the population, which improves the accuracy of the algorithm. The proposed algorithm is applied to six examples. The experimental results show that, compared with eight other algorithms, it achieves high optimization accuracy and fast convergence, which verifies its feasibility and effectiveness for solving the one-dimensional cutting stock problem.
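The differential evolution operators introduced into the acceleration phase are, in standard form, DE/rand/1 mutation followed by binomial crossover, which can be sketched as:

```python
import random

def de_offspring(pop, i, f=0.5, cr=0.9):
    """DE/rand/1/bin: mutation plus binomial crossover for individual i (sketch).

    pop: list of real-valued vectors; f is the differential weight and
    cr the crossover rate.  The repair operator applied afterwards in
    the paper (to fix invalid cutting patterns) is omitted here.
    """
    idx = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = random.sample(idx, 3)               # three distinct partners
    a, b, c = pop[r1], pop[r2], pop[r3]
    mutant = [ai + f * (bi - ci) for ai, bi, ci in zip(a, b, c)]
    jrand = random.randrange(len(pop[i]))            # guarantee one mutant gene
    return [m if (random.random() < cr or j == jrand) else x
            for j, (x, m) in enumerate(zip(pop[i], mutant))]
```

The difference vector b - c is what realizes "local mining" near existing individuals: step sizes shrink automatically as the population converges.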

Evolution Analysis of Household Car Supply Chain Based on MultiAgent
孙军艳, 张媛媛, 吴冰莹, 牛亚儒, 陈婵娟. 基于MultiAgent的家用汽车供应链演化分析[J]. 计算机科学, 2020, 47(7): 171-178.
SUN Junyan, ZHANG Yuanyuan, WU Bingying, NIU Yaru, CHEN Chanjuan. Evolution Analysis of Household Car Supply Chain Based on Multi-Agent[J]. Computer Science, 2020, 47(7): 171-178.
Computer Science. 2020, 47(7): 171-178. doi:10.11896/jsjkx.190600038

To explore business strategies for a multi-model product supply chain, this paper uses multi-agent modeling and simulation to establish a supply chain network model for multiple models of household cars. Aiming to maximize the profit of the manufacturer agent, a stimulus-response learning mechanism and a particle swarm learning mechanism are used to analyze the evolution of the model. The simulations show that, firstly, under the stimulus-response learning mechanism, the sales volume of some of the manufacturer's models drops greatly; for manufacturers pursuing multiple models this is not conducive to promoting all of them, but total sales and profits increase. Secondly, under the particle swarm learning mechanism, no matter what combination of C_{1} and C_{2} is used, it is difficult to obtain optimal sales and profit at the same time. For mass-market models, manufacturers can strengthen C_{2} and expand the market with a "small profits but quick turnover" strategy to increase sales; for high-end models, manufacturers can strengthen C_{1} and create high-quality products with a "quality for profit" strategy to increase profits. Relatively speaking, with the particle swarm learning mechanism, manufacturers can adjust strategies quickly to cope with market changes, and the strategies are more stable after learning. This study provides practical guidance for supply chain management with multiple product models.

Improved Speedy Qlearning Algorithm Based on Double Estimator
郑帅, 罗飞, 顾春华, 丁炜超, 卢海峰. 基于双估计器的改进Speedy Q-learning算法[J]. 计算机科学, 2020, 47(7): 179-185.
ZHENG Shuai, LUO Fei, GU Chunhua, DING Weichao, LU Haifeng. Improved Speedy Q-learning Algorithm Based on Double Estimator[J]. Computer Science, 2020, 47(7): 179-185.
Computer Science. 2020, 47(7): 179-185. doi:10.11896/jsjkx.190500143

The Q-learning algorithm is a classical reinforcement learning algorithm. However, due to overestimation and its conservative updating strategy, it suffers from slow convergence. The Speedy Q-learning algorithm and the Double Q-learning algorithm are two variants of Q-learning that address slow convergence and overestimation, respectively. Based on the Q-value update rule of Speedy Q-learning and the updating strategy of Monte Carlo reinforcement learning, an equivalent form of the Q-value update rule is derived through theoretical analysis and mathematical proof. According to this equivalent form, Speedy Q-learning takes the estimation function of the current Q value as the estimate of the historical Q value. Although this improves the agent's overall convergence speed, Speedy Q-learning still suffers from overestimation, which leads to slow convergence at the beginning of the iterations. To solve this problem, an improved algorithm named Double Speedy Q-learning is proposed, based on the fact that the double estimator in Double Q-learning can improve the convergence speed of agents. By using a double estimator, the selection of the optimal action is separated from the estimation of the maximum Q value, so that the learning strategy of Speedy Q-learning in the initial iterations, and hence its overall convergence speed, can be improved. Through grid-world experiments of different scales, using both linear and polynomial learning rates, the convergence speed of Q-learning and its improved variants is compared in the initial iteration stage and overall. The results show that Double Speedy Q-learning converges faster than Speedy Q-learning in the initial iteration stage, its overall convergence speed is significantly faster than that of the comparison algorithms, and the difference between its actual average reward and the expected reward is the smallest.
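The double-estimator mechanism borrowed from Double Q-learning can be sketched as follows (a toy tabular sketch; Double Speedy Q-learning additionally keeps Speedy Q-learning's accelerated update, which is omitted here):

```python
import random

def double_q_update(qa, qb, s, a, r, s2, alpha=0.1, gamma=0.9):
    """One double-estimator update on a toy two-action problem.

    Action selection uses one table while evaluation uses the other,
    decoupling max-action choice from max-value estimation; this is
    what curbs the overestimation that slows early convergence.
    qa, qb: dicts mapping (state, action) -> value.
    """
    actions = [0, 1]
    if random.random() < 0.5:                         # update table A
        best = max(actions, key=lambda x: qa.get((s2, x), 0.0))
        target = r + gamma * qb.get((s2, best), 0.0)  # evaluated by table B
        qa[(s, a)] = qa.get((s, a), 0.0) + alpha * (target - qa.get((s, a), 0.0))
    else:                                             # symmetric update of B
        best = max(actions, key=lambda x: qb.get((s2, x), 0.0))
        target = r + gamma * qa.get((s2, best), 0.0)
        qb[(s, a)] = qb.get((s, a), 0.0) + alpha * (target - qb.get((s, a), 0.0))
```

Each step randomly updates one table using the other's value for the greedy action, so neither table's own noise inflates its targets.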

Novel Artificial Bee Colony Algorithm for Solving Manyobjective Scheduling
郑友莲, 雷德明, 郑巧仙. 求解高维多目标调度的新型人工蜂群算法[J]. 计算机科学, 2020, 47(7): 186-191.
ZHENG Youlian, LEI Deming, ZHENG Qiaoxian. Novel Artificial Bee Colony Algorithm for Solving Many-objective Scheduling[J]. Computer Science, 2020, 47(7): 186-191.
Computer Science. 2020, 47(7): 186-191. doi:10.11896/jsjkx.190600089

Many-objective continuous optimization has been studied extensively, while there are few studies on many-objective combinatorial optimization. The artificial bee colony (ABC) algorithm has been successfully applied to various production scheduling problems, but ABC is seldom used for many-objective scheduling, and many-objective scheduling itself is also seldom addressed. Aiming at the many-objective flexible job shop scheduling problem, a new ABC algorithm is proposed to simultaneously optimize maximum completion time, total tardiness, total energy consumption, and total workload. Unlike the general flexible job shop scheduling problem, this problem is a green scheduling problem because total energy consumption is included. The new ABC has characteristics that clearly differ from existing ABC algorithms: the number of onlooker bees is less than that of employed bees, and employed bees focus on global search while onlooker bees only carry out local search, so the different search roles of the two kinds of bees prevent the algorithm from falling into local optima. At the same time, an onlooker bee only selects among the best employed bees or members of the external archive, so poor employed bees are never followed and computing resources are not wasted searching around poor solutions. A new strategy is adopted to handle scouts. The simulation results show that the ratio of the number of non-dominated solutions to population size for the many-objective scheduling problem is notably smaller than the same ratio for many-objective continuous optimization. Compared with a multi-objective genetic algorithm and variable neighborhood search, the computational results show that the proposed ABC obtains better results than the two comparative algorithms on the considered many-objective scheduling problem.

Point Cloud Deep Learning Network Based on Dynamic Graph Convolution and Spatial Pyramid Pooling
朱威, 绳荣金, 汤如, 何德峰. 基于动态图卷积和空间金字塔池化的点云深度学习网络[J]. 计算机科学, 2020, 47(7): 192-198.
ZHU Wei, SHENG Rongjin, TANG Ru, HE Defeng. Point Cloud Deep Learning Network Based on Dynamic Graph Convolution and Spatial Pyramid Pooling[J]. Computer Science, 2020, 47(7): 192-198.
Computer Science. 2020, 47(7): 192-198. doi:10.11896/jsjkx.190700180

The classification and semantic segmentation of point cloud data have important applications in autonomous driving, intelligent robotics, and holographic projection. Traditional methods that manually extract point cloud features, and feature learning methods that first transform three-dimensional point cloud data into multi-view or volumetric-grid representations, involve many processing steps and great loss of three-dimensional features, resulting in low classification and segmentation accuracy. The existing deep neural network PointNet, which can directly process point cloud data, ignores the local fine-grained features of the point cloud and is weak in handling complex point cloud scenes. To solve these problems, this paper proposes a point cloud deep learning network based on dynamic graph convolution and spatial pyramid pooling. On the basis of PointNet, the dynamic graph convolution module GraphConv replaces PointNet's feature learning module, which enhances the network's ability to learn local topological information. At the same time, a point-based spatial pyramid pooling structure, PSPP, is designed to capture multi-scale local features; compared with the multi-scale sampling and repeated grouping used by PointNet++ for multi-scale local feature learning, it is simpler and more efficient. Experimental results show that, on three benchmark datasets for point cloud classification and semantic segmentation, the proposed network achieves higher classification and segmentation accuracy than existing networks.
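The point-based spatial pyramid pooling idea, max-pooling each point's neighborhood features at several scales and concatenating the levels, can be sketched as follows; the shapes and scale set are illustrative assumptions, not the paper's exact PSPP design:

```python
import numpy as np

def pyramid_pool(points, feats, ks=(1, 4, 8)):
    """Point-based spatial pyramid pooling (illustrative sketch).

    points: (n, 3) coordinates; feats: (n, d) per-point features.
    For each pyramid level k, every point max-pools the features of its
    k nearest neighbors (k=1 is the point itself), and the levels are
    concatenated, capturing local context at several scales in one pass.
    """
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    order = np.argsort(d2, axis=1)                   # neighbors by distance
    levels = [feats[order[:, :k]].max(axis=1) for k in ks]
    return np.concatenate(levels, axis=1)            # (n, d * len(ks))
```

Unlike PointNet++'s repeated sampling and grouping, all scales here share one neighbor ordering, which is what makes this style of pooling cheap.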

Epileptic EEG Signals Detection Based on Tunable Qfactor Wavelet Transform and Transfer Learning
罗婷瑞, 贾建, 张瑞. 基于可调Q因子小波变换和迁移学习的癫痫脑电信号检测[J]. 计算机科学, 2020, 47(7): 199-205.
LUO Tingrui, JIA Jian, ZHANG Rui. Epileptic EEG Signals Detection Based on Tunable Q-factor Wavelet Transform and Transfer Learning[J]. Computer Science, 2020, 47(7): 199-205.
Computer Science. 2020, 47(7): 199-205. doi:10.11896/jsjkx.200200104

Aiming at the detection of epileptic EEG signals, a detection method based on the tunable Q-factor wavelet transform and transfer learning is proposed. Firstly, the EEG signals are decomposed by the tunable Q-factor wavelet transform, and the sub-bands with large energy differences are selected for partial reconstruction; the reconstructed signals are rearranged and represented as two-dimensional color image data. Secondly, through analysis of existing automatic seizure detection algorithms and the Xception depthwise-separable convolutional network, the network parameters are initialized with those of a model pre-trained on the ImageNet classification dataset, yielding a pre-trained Xception model. Finally, transfer learning is used to transfer the pre-training results of the Xception model to the automatic seizure detection task. The performance of this method is verified on the BONN epilepsy dataset, where the accuracy, sensitivity, and specificity reach 99.37%, 100%, and 98.48%, respectively, proving that the model generalizes well to automatic seizure detection. Compared with traditional detection methods and other deep learning-based methods, the proposed automatic detection method achieves higher accuracy and avoids manual feature design and extraction, so it has better application value.

Review on Placement of Multiple Controllers in SDN
贾吾财, 吕光宏, 王桂芝, 宋元隆. SDN多控制器放置问题研究综述[J]. 计算机科学, 2020, 47(7): 206-212.
JIA Wucai, LV Guanghong, WANG Guizhi, SONG Yuanlong. Review on Placement of Multiple Controllers in SDN[J]. Computer Science, 2020, 47(7): 206-212.
Computer Science. 2020, 47(7): 206-212. doi:10.11896/jsjkx.200200075

With the rapid development of software-defined networking (SDN), the inherent defects of single-controller deployment have gradually been revealed, and multi-controller deployment has become an inevitable trend. However, the number and locations of controllers have a decisive influence on network performance, and the high complexity of weighing the relevant factors seriously hinders the application of SDN in data centers and wide area networks. Firstly, the essence of the placement problem and its general solving steps are described. Secondly, based on the network model, the core components of a deployment strategy, namely the optimization objectives and the search algorithm, are described in detail. Then, based on research at home and abroad, deployment strategies are divided into static deployment and dynamic deployment, and the advantages and disadvantages of typical strategies are compared. Finally, future research directions are discussed.

Multisource Treebased Scheduling Algorithm for Deadlineaware P2MP Interdatacenter Transfers
庄奕, 杨家海. 限时点到多点跨数据中心传输的多源树调度算法[J]. 计算机科学, 2020, 47(7): 213-219.
ZHUANG Yi, YANG Jiahai. Multi-source Tree-based Scheduling Algorithm for Deadline-aware P2MP Inter-datacenter Transfers[J]. Computer Science, 2020, 47(7): 213-219.
Computer Science. 2020, 47(7): 213-219. doi:10.11896/jsjkx.200300069

With the growth of data volumes for cloud applications, more and more cloud service providers pay attention to inter-datacenter bulk transfer. Its main challenge is finding the best resource scheduling algorithm, one that uses the fewest resources to transfer a user's data to the specified destinations before the specified deadline. This paper proposes MSTB (Multi-Source Tree-Based), an effective scheduling solution for deadline-aware P2MP (point-to-multipoint) inter-datacenter transfers. With the help of a multi-source mechanism and multicast forwarding trees, MSTB outperforms the state-of-the-art method. Simulation experiments show that MSTB can increase the number of accepted transfer requests by up to 91% and increase effective throughput by up to 54%, with short transfer completion times and low computational complexity.

4Bitbased Gradient Compression Method for Distributed Deep Learning System
蒋文斌, 符智, 彭晶, 祝简. 一种基于4Bit编码的深度学习梯度压缩算法[J]. 计算机科学, 2020, 47(7): 220-226.
JIANG Wenbin, FU Zhi, PENG Jing, ZHU Jian. 4Bit-based Gradient Compression Method for Distributed Deep Learning System[J]. Computer Science, 2020, 47(7): 220-226.
Computer Science. 2020, 47(7): 220-226. doi:10.11896/jsjkx.200300097

To reduce the communication overhead of distributed deep learning systems, compressing gradient data before transmission is an effective method, exemplified by the 2Bit method in MXNet. However, such methods have a problem: too high a compression ratio leads to declines in accuracy and convergence speed, especially for larger network models. To address this, a new gradient compression strategy called 4Bit is proposed, in which four bits represent a single gradient value. Compared with 2Bit, this method approximates the gradient more finely, improving the accuracy of training results and the convergence speed. Furthermore, different approximation thresholds are selected according to the gradient characteristics of each layer of the network model, which makes the compressed values more reasonable and further improves the convergence speed and final accuracy of the model. The experimental results show that, although 4Bit is slightly inferior to 2Bit in terms of acceleration, its use of more bits and multiple thresholds yields higher accuracy and better practicability. Reducing the communication overhead of distributed deep learning while maintaining high accuracy makes 4Bit very worthwhile.
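A 4-bit quantization with per-layer thresholds can be sketched as follows; the bucketing scheme and representative values are assumptions for illustration, not the paper's exact encoding:

```python
import numpy as np

def quantize_4bit(grad, thresholds):
    """Map each gradient value to one of 16 levels (a sketch of the 4Bit idea).

    thresholds: ascending array of 15 per-layer cut points (chosen from
    the layer's gradient statistics).  The bucket index (0..15) is what
    would be transmitted in 4 bits; the receiver reconstructs each
    bucket by its representative value.
    """
    codes = np.searchsorted(thresholds, grad).astype(np.uint8)  # 16 buckets
    inner = (thresholds[:-1] + thresholds[1:]) / 2              # 14 midpoints
    reps = np.concatenate(([thresholds[0]], inner, [thresholds[-1]]))
    return codes, reps[codes]                                   # codes, dequantized
```

With 16 levels instead of 2Bit's few, the reconstruction error per value shrinks, which is the accuracy-for-bandwidth trade the abstract describes.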

Unknown Link Layer Protocol Frame Segmentation Algorithm Based on Longest Common Substrings Mining
陈庆超, 王韬, 冯文博, 尹世庄. 基于最长公共子串挖掘的未知链路层协议帧切割算法[J]. 计算机科学, 2020, 47(7): 227-230.
CHEN Qingchao, WANG Tao, FENG Wenbo, YIN Shizhuang. Unknown Link Layer Protocol Frame Segmentation Algorithm Based on Longest Common Substrings Mining[J]. Computer Science, 2020, 47(7): 227-230.
Computer Science. 2020, 47(7): 227-230. doi:10.11896/jsjkx.190600073

In the increasingly fierce field of modern electronic countermeasures, the raw data intercepted by a listener is generally a bit stream, and dividing the bit stream into data frames is the primary task in processing the intercepted data. Although existing methods can accurately extract the relevant sequences to achieve frame segmentation, when the amount of data to be processed is large, their time and space consumption is unacceptable, and some thresholds need to be set in advance. For this reason, an unknown link layer protocol frame segmentation algorithm based on longest common substring mining is proposed in this paper. By counting the longest common substrings of bit-stream segments of a certain length, the preamble and the frame start delimiter are iteratively refined, and frame segmentation is thereby realized. The experimental data show that, compared with frame segmentation based on frequent sequence mining, the number of candidate sequences of the proposed algorithm is reduced exponentially, and the final candidate sequence is unique. The time complexity of the proposed algorithm is O(n), and only a single scan is required, which fully shows that the proposed algorithm can realize frame segmentation efficiently.
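The core primitive, finding the longest common substring of two bit strings, can be sketched with the classic dynamic program; the paper's mining procedure over long streams is more elaborate, but the shared preamble and frame start delimiter of consecutive frames are exactly what this primitive recovers:

```python
def longest_common_substring(a, b):
    """Longest common substring of two bit strings via dynamic programming.

    Rolling-row DP: cur[j] is the length of the common suffix of
    a[:i] and b[:j]; the maximum over all (i, j) locates the longest
    common substring.  O(len(a) * len(b)) time, O(len(b)) space.
    """
    best, best_end = 0, 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                if cur[j] > best:
                    best, best_end = cur[j], i
        prev = cur
    return a[best_end - best:best_end]
```

Applied to two frame-sized windows of the intercepted stream, the result tends to be the fixed preamble plus start delimiter, since the payloads behind them differ.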

Graph Convolution of Fusion Metapath Based Heterogeneous Network Representation Learning
蒋宗礼, 李苗苗, 张津丽. 基于融合元路径图卷积的异质网络表示学习[J]. 计算机科学, 2020, 47(7): 231-235.
JIANG Zongli, LI Miaomiao, ZHANG Jinli. Graph Convolution of Fusion Metapath Based Heterogeneous Network Representation Learning[J]. Computer Science, 2020, 47(7): 231-235.
Computer Science. 2020, 47(7): 231-235. doi:10.11896/jsjkx.190600085

In recent years, network representation learning, which represents nodes in a low-dimensional space, has received more and more attention as an effective method for analyzing heterogeneous information networks. Random walk based methods are currently popular for learning network embeddings; however, most of them are based on shallow neural networks, which makes it difficult to capture heterogeneous network structure information. The graph convolutional network (GCN) is a popular method for deep learning on graphs and is known to exploit network topology better, but current GCN designs are intended for homogeneous networks and ignore the rich semantic information in the network. To effectively mine the semantic information and the highly nonlinear network structure of heterogeneous information networks, this paper proposes a heterogeneous network representation learning algorithm based on graph convolution with fused metapaths (MG2vec) to improve the quality of network representations. Firstly, the algorithm obtains rich semantic information in heterogeneous information networks through metapath-based relevance measurement. Then a graph convolutional network performs deep learning to capture the characteristics of nodes and their neighbors, making up for the deficiency of shallow models in capturing network structure information, so that rich semantic and structural information are better integrated into the low-dimensional node representations. Experiments are carried out on DBLP and IMDB. Compared with the classical algorithms DeepWalk, node2vec, and Metapath2vec, the proposed MG2vec achieves higher classification accuracy and better performance on multi-label classification tasks: its precision and Macro-F1 reach 94.49% and 94.16%, respectively, up to 26.05% and 28.73% higher than DeepWalk. The experimental results show that MG2vec outperforms classical network representation learning algorithms and represents heterogeneous information networks more effectively.
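One graph-convolution propagation step of the kind MG2vec builds on can be sketched in NumPy as follows; the normalized-adjacency form is the standard GCN rule, and how the metapath-derived graph enters the real model is simplified away here:

```python
import numpy as np

def gcn_layer(adj, x, w):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 X W).

    adj: (n, n) 0/1 adjacency; in a metapath-based variant this would
    be a metapath-derived relevance graph carrying heterogeneous
    semantics.  x: (n, d) node features; w: (d, k) trainable weights.
    """
    a_hat = adj + np.eye(adj.shape[0])             # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))  # D^-1/2 from degrees
    norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(norm @ x @ w, 0.0)           # ReLU activation
```

Stacking such layers lets each node's embedding aggregate multi-hop neighborhood structure, which is what the shallow random-walk models miss.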

Virtual Network Embedding Algorithm Based on Topology Comprehensive Evaluation and Weight Adaptation
史朝卫, 孟相如, 马志强, 韩晓阳. 拓扑综合评估与权值自适应的虚拟网络映射算法[J]. 计算机科学, 2020, 47(7): 236-242.
SHI Chaowei, MENG Xiangru, MA Zhiqiang, HAN Xiaoyang. Virtual Network Embedding Algorithm Based on Topology Comprehensive Evaluation and Weight Adaptation[J]. Computer Science, 2020, 47(7): 236-242.
Computer Science. 2020, 47(7): 236-242. doi:10.11896/jsjkx.190600022

Existing virtual network embedding algorithms do not consider the topological features of nodes comprehensively: their node evaluation methods are relatively simple, and the weights cannot be adaptively adjusted according to the network. To solve these problems, a virtual network embedding algorithm based on comprehensive topology evaluation and weight adaptation is proposed. In the node embedding stage, considering the centrality, proximity, and neighborhood aggregation of nodes, this paper establishes a multi-metric node evaluation model combined with node resource properties such as node CPU and the sum of adjacent bandwidth. The weights are adjusted adaptively according to changes in the network environment by using the entropy weight method. Simulation results show that, compared with recent and classical virtual network embedding algorithms, the acceptance ratio of the proposed algorithm is improved by 2%~23%, and the long-term average revenue-to-cost ratio is increased by 3%~17%. Moreover, the proposed algorithm maintains good performance for different types of virtual network requests with different resource requirements.
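The entropy weight method used for adaptive weighting can be sketched as follows; this is the standard formulation, and the exact normalization used in the paper may differ:

```python
import numpy as np

def entropy_weights(x):
    """Entropy weight method: weight metrics by how much they discriminate.

    x: (n, m) matrix scoring n candidate nodes on m metrics (larger is
    better, all values positive).  Metrics whose values vary widely
    across nodes have low entropy and get higher weight, so the weights
    adapt automatically as the network state changes.
    """
    p = x / x.sum(axis=0, keepdims=True)                 # column-normalize
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, p * np.log(p), 0.0)
    e = -plogp.sum(axis=0) / np.log(x.shape[0])          # entropy per metric
    d = np.clip(1.0 - e, 0.0, None)                      # divergence degree
    return d / d.sum()
```

A metric that is identical across all candidates carries no ranking information: its entropy is 1, its divergence 0, and it is weighted out entirely.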

WSN Coverage Optimization Based on Adaptive Particle Swarm Optimization
齐薇, 虞慧群, 范贵生, 陈亮. 基于自适应粒子群的WSN覆盖优化[J]. 计算机科学, 2020, 47(7): 243-249.
QI Wei, YU Huiqun, FAN Guisheng, CHEN Liang. WSN Coverage Optimization Based on Adaptive Particle Swarm Optimization[J]. Computer Science, 2020, 47(7): 243-249.
 Computer Science. 2020, 47 (7): 243-249. doi:10.11896/jsjkx.200200133
 Abstract PDF(2830KB) ( 533 )

The wireless sensor network (WSN) coverage of the data sensing layer has great significance for the quality of sensing services. In view of the coverage redundancy, coverage voids and premature convergence of particle swarm optimization caused by the randomness of the initial deployment of a wireless sensor network, an adaptive virtual-force particle swarm optimization algorithm based on binomial perception coverage is proposed, which optimizes the effective coverage of the network. By adding mobile nodes to the network, the algorithm performs redeployment of node positions, adjusts the inertia weight by calculating the degree of population evolution and the degree of relative aggregation, and uses a threshold on the fitness variance to judge whether the intervention of the virtual-force strategy is needed in the current state. This paper focuses on analyzing the impact of the initial deployment category and the proportion of mobile nodes on redeployment coverage performance, and gives the corresponding implementation algorithm. Simulation results show that, compared with ACPSO, DACPSO and DVPSO, the improved PSO achieves 98.33% coverage and high mobility efficiency, which fully demonstrates the effectiveness of the algorithm.
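The core loop of such an adaptive PSO can be sketched as below: a standard velocity/position update under an inertia weight, plus a fitness-variance test that would trigger the virtual-force perturbation once the swarm clusters. The 1-D particles, parameter values and threshold are generic illustrative choices, not the paper's exact formulas.

```python
import random

def fitness_variance(fits):
    """Variance of the particles' fitness; a small value signals aggregation."""
    mean = sum(fits) / len(fits)
    return sum((f - mean) ** 2 for f in fits) / len(fits)

def pso_step(xs, vs, pbests, gbest, w, c1=2.0, c2=2.0, v_max=1.0):
    """One velocity/position update for 1-D particles (illustrative)."""
    for i in range(len(xs)):
        r1, r2 = random.random(), random.random()
        vs[i] = (w * vs[i]
                 + c1 * r1 * (pbests[i] - xs[i])
                 + c2 * r2 * (gbest - xs[i]))
        vs[i] = max(-v_max, min(v_max, vs[i]))   # clamp velocity
        xs[i] += vs[i]
    return xs, vs

def needs_virtual_force(fits, threshold=1e-3):
    """Below-threshold fitness variance means the swarm is clustering; the
    virtual-force strategy would then push nodes apart to escape the optimum."""
    return fitness_variance(fits) < threshold

random.seed(0)
xs, vs = [1.0, -2.0, 3.0], [0.0, 0.0, 0.0]
xs, vs = pso_step(xs, vs, pbests=[1.0, -2.0, 3.0], gbest=0.5, w=0.9)
```

Repeating the step pulls all particles toward the best position found so far; once the fitness variance collapses below the threshold, `needs_virtual_force` flags that the repulsive virtual-force move should replace a plain PSO step.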

Approximate Algorithm for Minimum Virtual Backbone in 3D Wireless Ad Hoc Networks
易梦, 梁家荣, 覃斌. 三维无线自组织网络中最小虚拟骨干的近似算法[J]. 计算机科学, 2020, 47(7): 250-256.
YI Meng, LIANG Jiarong, QIN Bin. Approximate Algorithm for Minimum Virtual Backbone in 3D Wireless Ad Hoc Networks[J]. Computer Science, 2020, 47(7): 250-256.
 Computer Science. 2020, 47 (7): 250-256. doi:10.11896/jsjkx.190700059
 Abstract PDF(2021KB) ( 222 )

In homogeneous wireless ad hoc networks, the size of the virtual backbone (VB) is an important measure of network quality: the smaller the virtual backbone, the lower the routing overhead. The minimum virtual backbone problem can be abstracted as the minimum connected dominating set problem. There have been many research results on the minimum connected dominating set problem in unit disk graphs (UDG) for two-dimensional wireless sensor networks, but in some practical cases unit disk graphs cannot accurately model the network. This paper therefore studies how to construct a high-quality connected dominating set (CDS) in a unit ball graph (UBG). An optimal upper bound on the number of independent nodes in a UBG is given, and the performance ratio of the CDS is derived from this bound. The proposed algorithm STCDS mainly uses a minimum Steiner tree with a minimum number of Steiner nodes to optimize the parts connecting the nodes. Theoretical analysis shows that the performance ratio of STCDS is 11.8080+ln 11, the best result among known research in this direction. Simulation results also verify the feasibility of the proposed STCDS algorithm.
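The first stage of such CDS constructions is typically a maximal independent set (MIS) of the unit ball graph: its nodes dominate the whole network and are then linked with Steiner nodes. The sketch below builds an MIS greedily on 3-D points with unit communication radius; the point set is an illustrative assumption and the Steiner connection stage is omitted.

```python
import math

def unit_ball_neighbors(points, r=1.0):
    """Adjacency of the unit ball graph: nodes are adjacent iff their
    Euclidean distance in 3-D is at most r."""
    n = len(points)
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) <= r:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def greedy_mis(points, r=1.0):
    """Greedy maximal independent set: every node is either chosen or
    adjacent to a chosen node, so the MIS is also a dominating set."""
    adj = unit_ball_neighbors(points, r)
    mis, covered = [], set()
    for v in range(len(points)):
        if v not in covered:
            mis.append(v)
            covered.add(v)
            covered |= adj[v]
    return mis

# Illustrative 3-D sensor positions forming a connected unit ball graph.
pts = [(0, 0, 0), (0.5, 0, 0), (0, 0.5, 0),
       (0.9, 0, 0), (1.7, 0, 0), (2.5, 0, 0)]
mis = greedy_mis(pts)
```

Here nodes 0 and 4 dominate all six points, and node 3 (adjacent to both) would be the single Steiner node connecting them into a CDS {0, 3, 4}; STCDS chooses such connectors so that the number of added Steiner nodes is minimal, which is where its performance ratio comes from.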

New Device Fingerprint Feature Selection and Model Construction Method
王萌, 丁志军. 一种新的设备指纹特征选择及模型构建方法[J]. 计算机科学, 2020, 47(7): 257-262.
WANG Meng, DING Zhijun. New Device Fingerprint Feature Selection and Model Construction Method[J]. Computer Science, 2020, 47(7): 257-262.
 Computer Science. 2020, 47 (7): 257-262. doi:10.11896/jsjkx.190900107
 Abstract PDF(1838KB) ( 756 )

In recent years, with the rapid development of the mobile Internet, more and more business has moved from the browser to mobile devices, but the black industry chain that parasitizes the mobile Internet has flooded in as well. To solve this problem, the device fingerprint, that is, the use of a device's characteristic attributes to generate a unique identifier for each device, came into being. Many machine-learning-based algorithms for device uniqueness authentication have emerged, most of which focus on model building; few study feature selection in depth, yet feature selection directly determines the performance of the final model. Aiming at this problem, this paper proposes a new device fingerprint feature selection and model construction method (FSFSWSC), which selects the most valuable features based on the feature discrimination between different devices and the feature stability of the same device. The importance weights of the selected features are then applied in the subsequent model building. FSFSWSC is compared with other mainstream feature selection methods on 6424 Android devices in a real-world scenario. The results show that FSFSWSC greatly improves on other methods, and the accuracy of device uniqueness authentication reaches 99.53%, which demonstrates the superiority of FSFSWSC.
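The selection criterion, scoring features high when they differ across devices (discrimination) but stay constant for one device over time (stability), can be sketched as below. The scoring formula and the toy observations are illustrative assumptions, not the paper's exact definitions.

```python
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def feature_score(obs_by_device, j, eps=1e-9):
    """Score feature j: between-device variance of per-device means
    (discrimination) divided by mean within-device variance (instability)."""
    means = [sum(o[j] for o in obs) / len(obs)
             for obs in obs_by_device.values()]
    within = [variance([o[j] for o in obs])
              for obs in obs_by_device.values()]
    return variance(means) / (sum(within) / len(within) + eps)

# Illustrative observations: two devices, each observed twice, two features.
# Feature 0 separates the devices and is stable; feature 1 is pure noise.
obs = {
    "devA": [(480, 3), (480, 9)],
    "devB": [(720, 5), (720, 1)],
}
scores = [feature_score(obs, j) for j in range(2)]
```

Feature 0 (say, screen width) would be kept and weighted heavily in the later model; feature 1 would be discarded before training the classifier.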

Revised Impossible Differential Cryptanalysis of PFP Block Cipher
沈璇, 王欣玫, 何俊, 孙志远. PFP算法改进的不可能差分分析[J]. 计算机科学, 2020, 47(7): 263-267.
SHEN Xuan, WANG Xinmei, HE Jun, SUN Zhiyuan. Revised Impossible Differential Cryptanalysis of PFP Block Cipher[J]. Computer Science, 2020, 47(7): 263-267.
 Computer Science. 2020, 47 (7): 263-267. doi:10.11896/jsjkx.200200034
 Abstract PDF(2489KB) ( 317 )

Nowadays, application scenarios in resource-constrained terminal systems appear more and more often, and their data encryption requirements also need to be satisfied. Many lightweight block ciphers have been designed, such as PRESENT, which is an international-standard block cipher. The PFP cipher is an ultra-lightweight block cipher with a Feistel structure, and its round function is designed by drawing on the experience of the PRESENT cipher. The block size of PFP is 64 bits, its key size is 80 bits, and its round number is 34. This paper studies PFP's resistance to impossible differential cryptanalysis. In the design document, the designers proposed a 5-round impossible differential and attacked a reduced 6-round PFP cipher with this distinguisher, recovering 32 bits of the master key. Compared with this result, by exploiting the differential property of the S-box in PFP, this paper constructs a 7-round impossible differential distinguisher and attacks a reduced 9-round PFP, recovering 36 bits of the master key. Therefore, the result is better than the known one in terms of both the number of rounds and the recovered key. To the best of our knowledge, this is the best impossible differential cryptanalysis of the PFP cipher.
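The "differential property of the S-box" that such distinguishers exploit is read off the difference distribution table (DDT): entry DDT[a][b] counts inputs x with S(x) XOR S(x XOR a) = b, and zero entries are the raw material of impossible differentials. The sketch below computes the DDT for the 4-bit PRESENT S-box, since PFP's round function borrows from PRESENT's design; PFP's own S-box specification is not reproduced here.

```python
# 4-bit PRESENT S-box.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def ddt(sbox):
    """Difference distribution table: ddt[a][b] = #{x : S(x) ^ S(x ^ a) == b}."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for a in range(n):
        for x in range(n):
            table[a][sbox[x] ^ sbox[x ^ a]] += 1
    return table

T = ddt(SBOX)
# Zero entries T[a][b] mean the transition a -> b never happens through the
# S-box; chaining such constraints across rounds in a miss-in-the-middle
# manner yields impossible differential distinguishers.
```

Every row of the DDT sums to 16 (all inputs are accounted for), and for this S-box no non-trivial entry exceeds 4; the zero entries are what get propagated through the Feistel structure to build the 7-round distinguisher.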

Multi-subblock Incentive Consensus Mechanism Based on Topology and Distribution Mechanism
刘帅, 甘国华, 刘明熹, 房勇, 汪寿阳. 一种基于拓扑结构及分配机制设计的多子块激励共识机制[J]. 计算机科学, 2020, 47(7): 268-277.
LIU Shuai, GAN Guohua, LIU Mingxi, FANG Yong, WANG Shouyang. Multi-subblock Incentive Consensus Mechanism Based on Topology and Distribution Mechanism[J]. Computer Science, 2020, 47(7): 268-277.
 Computer Science. 2020, 47 (7): 268-277. doi:10.11896/jsjkx.200200027
 Abstract PDF(1628KB) ( 343 )

First of all, this paper proposes a modified consensus mechanism based on the classic PoW (Proof of Work) consensus mechanism by changing the condition for taking miners' blocks into the blockchain and the income distribution strategy. To be specific, on the one hand, under the modified mechanism the first generated sub-chain made up of N sub-blocks is integrated into the main chain, which replaces PoW's simple single-chain structure with a more complex network structure. On the other hand, the modified mechanism improves on the revenue distribution strategy of the traditional consensus mechanism: its distribution strategy is divided into three steps in order to raise the expected earnings of miners with low computational power, so as to encourage them to participate actively in mining and to supervise the safety of the blockchain. In addition, the network structure introduced by the modified mechanism gives miners more mining strategies. This paper also discusses the influence of different mining strategies, the splitting of malicious miners' computational power, and collusion on the safety and efficiency of the blockchain. Finally, a variety of market scenarios are set up to simulate the improved algorithm and analyze the mining revenue of various miners under different market characteristics, in the hope of providing guidance to miners.
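The sub-chain rule can be illustrated with a toy race: in each slot the probability that a miner produces the next sub-block is proportional to its hash power, and the round ends once N sub-blocks exist. Everything below (power values, N, the absence of any reward split) is an illustrative assumption and does not reproduce the paper's three-step distribution strategy.

```python
import random

def mine_subchain(powers, n_subblocks, rng):
    """Race until N sub-blocks are produced; the winner of each slot is drawn
    with probability proportional to hash power. Returns sub-block counts."""
    total = sum(powers.values())
    counts = {m: 0 for m in powers}
    for _ in range(n_subblocks):
        r = rng.random() * total
        acc = 0.0
        for miner, p in powers.items():
            acc += p
            if r < acc:
                counts[miner] += 1
                break
    return counts

rng = random.Random(42)
powers = {"big": 70, "mid": 25, "small": 5}   # illustrative hash powers
counts = mine_subchain(powers, n_subblocks=8, rng=rng)
```

With N sub-blocks per round, even a low-power miner regularly lands some sub-block; the paper's three-step distribution then reshapes how the round's reward is split among these contributors so as to favor such miners.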

Encrypted Dynamic Configuration Method of FPGA Based on Cloud
陈利锋, 朱路平. 一种基于云端加密的FPGA自适应动态配置方法[J]. 计算机科学, 2020, 47(7): 278-281.
CHEN Lifeng, ZHU Luping. Encrypted Dynamic Configuration Method of FPGA Based on Cloud[J]. Computer Science, 2020, 47(7): 278-281.
 Computer Science. 2020, 47 (7): 278-281. doi:10.11896/jsjkx.190700110
 Abstract PDF(1914KB) ( 333 )

In fields requiring massive parallel computation, such as cloud computing, machine learning and artificial intelligence, FPGAs are widely used as an important means of improving performance. To configure an FPGA, configuration data must be read from memory and written into the FPGA. As a concrete embodiment of technological achievements, the configuration data must be protected from illegal acquisition, which would leak the research achievements it embodies. To deal with this problem, this paper proposes an effective cloud-based encrypted FPGA configuration method. The method encrypts and manages the configuration data file through a cloud-based encryption application. When configuring the FPGA, the microprocessor obtains the encrypted configuration data through the access port of the cloud server and decrypts it with a decryption algorithm built into the microprocessor; the decrypted data is then used to configure the FPGA dynamically. The method stores the FPGA configuration data in the cloud server, where strict data and file protection is enforced by encryption, providing flexible and powerful protection. The microprocessor obtains the data from the cloud through an encrypted channel, decrypts it, and uses it to configure the FPGA. Throughout the whole process the configuration data remains encrypted, so the risk of data leakage is effectively controlled: the configuration data is protected from illegal acquisition and use, while remote dynamic configuration of the FPGA is realized. The proposed method has been verified on the Aliyun and Tencent cloud platforms, achieving good confidentiality and flexible configuration.
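The encrypt-in-the-cloud / decrypt-on-the-microprocessor flow can be sketched symmetrically. The keystream below is a stdlib SHA-256 stand-in so the sketch is self-contained; a real deployment would use an authenticated cipher such as AES-GCM, and the key, nonce and bitstream bytes are illustrative assumptions.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """SHA-256-based keystream, a stdlib stand-in for the real cipher."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce
                              + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR with the keystream; the same call encrypts and decrypts."""
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

# Cloud side: encrypt the bitstream before storing it on the server.
bitstream = b"\x00\x09\x0f\xf0" * 4          # illustrative bitstream bytes
key, nonce = b"shared-device-key", b"cfg-0001"
stored = xor_crypt(key, nonce, bitstream)

# Device side: the microprocessor fetches `stored`, decrypts with the built-in
# key, and streams the recovered bits into the FPGA configuration port.
recovered = xor_crypt(key, nonce, stored)
```

The point of the design is that only `stored` ever leaves the cloud or crosses the network, so a captured transfer reveals nothing without the key embedded in the microprocessor.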

Reinforcement Learning Based Cache Scheduling Against Denial-of-Service Attacks in Embedded Systems
黄锦灏, 丁钰真, 肖亮, 沈志荣, 朱珍民. 一种基于强化学习的嵌入式系统抗拒绝服务攻击的缓存调度方案[J]. 计算机科学, 2020, 47(7): 282-286.
HUANG Jinhao, DING Yuzhen, XIAO Liang, SHEN Zhirong, ZHU Zhenmin. Reinforcement Learning Based Cache Scheduling Against Denial-of-Service Attacks in Embedded Systems[J]. Computer Science, 2020, 47(7): 282-286.
 Computer Science. 2020, 47 (7): 282-286. doi:10.11896/jsjkx.200100135
 Abstract PDF(1904KB) ( 302 )

The scheduling of the shared last level cache (LLC) of the central processor determines the instructions per cycle (IPC) of user processes and the robustness against denial-of-service (DoS) attacks in multi-core embedded operating systems. However, existing scheduling schemes rely on specific LLC scheduling models and DoS attack models, which makes it difficult for the processor to obtain the running information of user processes in each scheduling cycle under different scheduling environments. Therefore, this paper proposes a reinforcement learning (RL) based LLC scheduling scheme against DoS attacks in embedded systems, which optimizes the occupied position and occupied space based on the measured occupation start and end positions, the previous IPC, the load miss rate and the store miss rate. The processor can jointly increase the IPC and reduce the success rate of DoS attacks from malicious processes without knowing the DoS attack model in a dynamic LLC scheduling environment. Simulations on multi-core embedded operating systems shared by multi-tenant virtual machines show that the proposed scheme significantly increases the IPC and reduces the success rate of DoS attacks.
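The model-free learning loop behind such a scheme is standard Q-learning: the state summarizes observations like the previous IPC and miss rates, the action is a cache allocation, and the reward trades IPC against attack success. The two-state toy environment below is an illustrative assumption, not the paper's state or action space.

```python
import random

def q_learning(env, states, actions, episodes=2000, alpha=0.1, gamma=0.9,
               epsilon=0.1, seed=0):
    """Tabular Q-learning; env(s, a) returns (reward, next_state)."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in states for a in actions}
    s = states[0]
    for _ in range(episodes):
        if rng.random() < epsilon:                       # explore
            a = rng.choice(actions)
        else:                                            # exploit
            a = max(actions, key=lambda act: Q[(s, act)])
        r, s2 = env(s, a)
        best_next = max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2
    return Q

# Illustrative LLC toy: states = attacker inactive/active; actions = share the
# LLC freely, or isolate the user process in an attacker-proof partition.
def env(state, action):
    if state == "attack":
        r = 1.0 if action == "isolate" else -1.0   # isolation defeats the DoS
    else:
        r = 1.0 if action == "share" else 0.2      # isolation wastes IPC
    return r, ("calm" if state == "attack" else "attack")

Q = q_learning(env, states=["calm", "attack"], actions=["share", "isolate"])
```

After training, the greedy policy isolates the cache whenever the attack state is observed and shares it otherwise, without ever being given an explicit DoS model; the paper's scheme works the same way over richer measured states (occupation positions, IPC, miss rates).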

Network Security Situation Assessment Method Based on Improved Hidden Markov Model
李欣, 段詠程. 基于改进隐马尔可夫模型的网络安全态势评估方法[J]. 计算机科学, 2020, 47(7): 287-291.
LI Xin, DUAN Yongcheng. Network Security Situation Assessment Method Based on Improved Hidden Markov Model[J]. Computer Science, 2020, 47(7): 287-291.
 Computer Science. 2020, 47 (7): 287-291. doi:10.11896/jsjkx.190300045
 Abstract PDF(1743KB) ( 361 )

Cyber security situation awareness, as an effective supplement to cyber security protection measures, is one of the research focuses of recent years. In particular, network security situation assessment has become an important research topic in the field of network security. The Hidden Markov Model (HMM) can be used for network security situation assessment and can evaluate network status in real time, but it suffers from hard-to-configure model parameters and low evaluation accuracy. Therefore, this paper proposes a situation assessment method with an improved Hidden Markov Model, combining the Baum-Welch (BW) parameter optimization algorithm with the Seeker Optimization Algorithm (SOA). The strong random search ability of SOA overcomes the traditional parameter optimization algorithm's tendency to fall into local optima. The optimized parameters are substituted into the HMM, and the network security situation value is obtained through quantitative analysis. Based on the DARPA 2000 dataset, this paper verifies the proposed method with MATLAB. The experimental results show that, compared with the BW algorithm alone, this method improves the accuracy of the model and makes the quantification of the network security situation more reasonable.
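Once the HMM parameters are fixed, the situation value is typically computed by running the forward algorithm over the observed alert sequence and weighting the posterior state probabilities by per-state risk scores. The two-state model and risk weights below are illustrative assumptions, not parameters fitted by BW/SOA.

```python
def forward_posterior(pi, A, B, obs):
    """Normalized forward algorithm: returns P(state | observations so far)
    after each observation. pi: initial dist, A: transitions, B: emissions."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    posts = []
    for t, o in enumerate(obs):
        if t > 0:
            alpha = [sum(alpha[s] * A[s][s2] for s in range(n)) * B[s2][o]
                     for s2 in range(n)]
        z = sum(alpha)
        alpha = [a / z for a in alpha]    # normalize to a posterior
        posts.append(alpha[:])
    return posts

# Illustrative 2-state model: state 0 = Normal, state 1 = Attacked.
pi = [0.9, 0.1]
A = [[0.8, 0.2], [0.3, 0.7]]              # state transition matrix
B = [[0.9, 0.1], [0.2, 0.8]]              # emission: obs 0 = quiet, 1 = alert
risk = [10, 90]                           # per-state risk score

obs = [0, 1, 1]                           # quiet, then two IDS alerts
situation = [sum(p * r for p, r in zip(post, risk))
             for post in forward_posterior(pi, A, B, obs)]
```

Each alert pushes the posterior toward the Attacked state, so the situation value climbs step by step; the job of BW and SOA in the paper is to fit pi, A and B well before this quantification step.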

Network Representation Learning Algorithm Based on Vulnerability Threat Schema
黄易, 申国伟, 赵文波, 郭春. 一种基于漏洞威胁模式的网络表示学习算法[J]. 计算机科学, 2020, 47(7): 292-298.
HUANG Yi, SHEN Guowei, ZHAO Wenbo, GUO Chun. Network Representation Learning Algorithm Based on Vulnerability Threat Schema[J]. Computer Science, 2020, 47(7): 292-298.
 Computer Science. 2020, 47 (7): 292-298. doi:10.11896/jsjkx.190600156
 Abstract PDF(3415KB) ( 275 )

Threat intelligence analysis can provide effective information for network attack and defense, and fine-grained mining of the relationships between security entities in network threat intelligence data is a hotspot of threat intelligence analysis research. Traditional machine learning algorithms, when applied to large-scale network threat intelligence data, face sparsity, high dimensionality and other issues, so it is difficult for them to capture the network information effectively. To this end, a network representation learning algorithm based on a vulnerability threat schema, HSEN2vec, is proposed for the classification of network security vulnerabilities. The algorithm aims to capture the structure and semantic information of the heterogeneous security entity network to the maximum extent and obtain low-dimensional vector representations of the security entities. In the algorithm, the structural information of the heterogeneous security entity network is obtained based on the vulnerability threat schema and then modeled with the Skip-gram model, with effective prediction performed by negative sampling to obtain the final vector representations. The experimental results show that on national security vulnerability data, compared with other methods, the proposed algorithm improves the accuracy of vulnerability classification and other evaluation indicators.
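The Skip-gram-with-negative-sampling step at the core of such embedding methods contrasts an observed (center, context) entity pair against a few sampled negatives. One gradient update can be sketched as below; the dimensions, learning rate and toy entity vocabulary are illustrative assumptions.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sgns_update(emb_in, emb_out, center, context, negatives, lr=0.05):
    """One Skip-gram negative-sampling update: pull the context vector toward
    the center vector and push the negative vectors away from it."""
    v = emb_in[center]
    grad_v = [0.0] * len(v)
    for node, label in [(context, 1.0)] + [(n, 0.0) for n in negatives]:
        u = emb_out[node]
        score = sigmoid(sum(a * b for a, b in zip(v, u)))
        g = label - score
        grad_v = [gv + g * b for gv, b in zip(grad_v, u)]
        emb_out[node] = [b + lr * g * a for a, b in zip(u, v)]
    emb_in[center] = [a + lr * gv for a, gv in zip(v, grad_v)]

rng = random.Random(7)
nodes = ["CVE-2019-0001", "buffer-overflow", "CVE-2019-0002", "sql-injection"]
dim = 8
emb_in = {n: [rng.uniform(-0.5, 0.5) for _ in range(dim)] for n in nodes}
emb_out = {n: [rng.uniform(-0.5, 0.5) for _ in range(dim)] for n in nodes}

# Train on one schema-guided pair: a vulnerability and its weakness type
# co-occur on schema-constrained walks; an unrelated entity is the negative.
for _ in range(200):
    sgns_update(emb_in, emb_out, "CVE-2019-0001", "buffer-overflow",
                negatives=["sql-injection"])
```

After training, the vulnerability's vector scores its true weakness type higher than the unrelated one; HSEN2vec obtains its training pairs from walks constrained by the vulnerability threat schema rather than from this single hand-picked pair.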

Method Based on Traffic Fingerprint for IoT Device Identification and IoT Security Model
杨威超, 郭渊博, 李涛, 朱本全. 基于流量指纹的物联网设备识别方法和物联网安全模型[J]. 计算机科学, 2020, 47(7): 299-306.
YANG Weichao, GUO Yuanbo, LI Tao, ZHU Benquan. Method Based on Traffic Fingerprint for IoT Device Identification and IoT Security Model[J]. Computer Science, 2020, 47(7): 299-306.
 Computer Science. 2020, 47 (7): 299-306. doi:10.11896/jsjkx.190700199
 Abstract PDF(2338KB) ( 1407 )

The large-scale deployment of the Internet of Things makes it possible for vulnerable IoT devices to be connected to the network. When an attacker uses a vulnerable device to access the target's internal network, the attacker can lurk there waiting to strike. To prevent such attacks, it is necessary to develop a security mechanism for access control of suspicious devices and management of internal devices. Firstly, in order to realize access control of suspicious devices, a device identification method is given: by setting a white list, a communication traffic feature fingerprint is constructed, and the random forest method is used to train the device identification model. Secondly, to manage internal devices, an intelligent security management model is proposed that builds an ontology threat model based on assets, vulnerabilities and security mechanisms. Finally, the experimental results verify the detection performance of the device identification model, with a recognition accuracy above 96%. Compared with existing similar methods, the results show that the proposed method has better detection stability.
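The access-control half of the scheme can be sketched as: summarize a new device's flows into a feature fingerprint, then test it against per-type profiles learned from whitelisted devices. Below, a nearest-centroid check stands in for the paper's trained random forest, and all feature names and numbers are illustrative assumptions.

```python
import math

def traffic_fingerprint(flows):
    """Summarize a device's flows [(pkt_len, interarrival_s), ...] into a
    fixed-length vector: mean/max packet length, mean inter-arrival time."""
    lens = [l for l, _ in flows]
    gaps = [g for _, g in flows]
    return (sum(lens) / len(lens), max(lens), sum(gaps) / len(gaps))

def admit(fingerprint, whitelist_profiles, threshold):
    """Admit the device iff its fingerprint lies close enough to some
    whitelisted device type (stand-in for the random forest classifier)."""
    best = min(whitelist_profiles,
               key=lambda t: math.dist(fingerprint, whitelist_profiles[t]))
    if math.dist(fingerprint, whitelist_profiles[best]) <= threshold:
        return best
    return None

# Illustrative whitelist centroids built from known devices' training traffic.
profiles = {
    "ip-camera":  (900.0, 1400.0, 0.03),
    "smart-plug": (80.0, 120.0, 5.0),
}

cam_like = traffic_fingerprint([(850, 0.02), (950, 0.04), (1400, 0.03)])
unknown  = traffic_fingerprint([(10000, 60.0), (9000, 80.0)])
```

A camera-like traffic pattern is admitted under its device type, while a device whose traffic matches no whitelisted profile is rejected at access time and never reaches the internal network.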

WSN Source-location Privacy Protection Based on Improved Ant Colony Algorithm
郭蕊, 芦天亮, 杜彦辉, 周杨, 潘孝勤, 刘晓晨. 基于改进蚁群算法的WSN源位置隐私保护[J]. 计算机科学, 2020, 47(7): 307-313.
GUO Rui, LU Tianliang, DU Yanhui, ZHOU Yang, PAN Xiaoqin, LIU Xiaochen. WSN Source-location Privacy Protection Based on Improved Ant Colony Algorithm[J]. Computer Science, 2020, 47(7): 307-313.
 Computer Science. 2020, 47 (7): 307-313. doi:10.11896/jsjkx.200100056
 Abstract PDF(1991KB) ( 254 )

Wireless sensor networks (WSN) for target monitoring tasks are usually deployed in unsupervised, critical and sensitive environments. The openness of wireless communication seriously threatens the security of the monitored targets, so the location privacy of source nodes must be protected effectively. Considering the low computational complexity of the ant colony algorithm and its unique advantages in path planning, a source location privacy protection scheme based on an improved ant colony algorithm, EESLPACA (Energy Efficient Source Location Privacy based on Ant Colony Algorithm), is proposed. When a sensor node receives a packet, it selects the forwarding node according to the pheromone concentration and an improved path heuristic, so as to minimize and balance the network energy consumption. At the same time, by introducing a reference distance and improving the pheromone update mechanism, the possibility of previously unselected nodes becoming forwarding nodes is increased, and low-probability repeated dynamic routes are constructed to reduce the number of packets an attacker can capture and to increase the difficulty of reverse tracking. Performance analysis shows that the scheme not only effectively improves the ant colony algorithm for the field of WSN source location privacy protection, but also, compared with the CDR and ELSP schemes, enhances the security of the source location privacy while prolonging the network lifetime and shortening the transmission delay.
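The per-hop decision can be sketched with the classic ant colony rule: each candidate next hop gets probability proportional to pheromone^alpha times heuristic^beta, and the hop is drawn by roulette wheel, which is what keeps successive packets from repeating one deterministic path. The heuristic values and parameters below are illustrative assumptions, not EESLPACA's exact formulas.

```python
import random

def choose_next_hop(candidates, pheromone, heuristic, alpha=1.0, beta=2.0,
                    rng=random):
    """Roulette-wheel next-hop selection: p(n) ~ tau(n)^alpha * eta(n)^beta."""
    weights = [(pheromone[n] ** alpha) * (heuristic[n] ** beta)
               for n in candidates]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for n, w in zip(candidates, weights):
        acc += w
        if r < acc:
            return n
    return candidates[-1]

# Illustrative neighborhood of the current node: pheromone tau and a heuristic
# eta that would combine residual energy and distance-related terms.
tau = {"n1": 0.9, "n2": 0.5, "n3": 0.1}
eta = {"n1": 0.8, "n2": 0.9, "n3": 0.4}

rng = random.Random(1)
hops = [choose_next_hop(["n1", "n2", "n3"], tau, eta, rng=rng)
        for _ in range(1000)]
```

Because the draw is probabilistic, the same source sends consecutive packets along varying routes, which is the property exploited against back-tracing attackers; EESLPACA additionally biases the weights toward rarely chosen nodes by updating the pheromone with a reference distance.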

Anti-eavesdropping Network Coding Based on Quantum GHZ State
徐光宪, 崔俊杰. 一种基于量子GHZ态的防窃听网络编码[J]. 计算机科学, 2020, 47(7): 314-321.
XU Guangxian, CUI Junjie. Anti-eavesdropping Network Coding Based on Quantum GHZ State[J]. Computer Science, 2020, 47(7): 314-321.
 Computer Science. 2020, 47 (7): 314-321. doi:10.11896/jsjkx.190500168
 Abstract PDF(2704KB) ( 286 )

The coding efficiency of classical network coding is low during transmission, and it is easy to eavesdrop on. Although Hayashi's proposal of sharing EPR pairs at the transmitter improves the coding efficiency of information transmission, it cannot completely recover the transmitted information, and the information is not handled securely, so it remains easy to eavesdrop on. To this end, based on the three-particle maximally entangled GHZ state, a quantum network coding scheme is proposed that prevents information from being eavesdropped by using the quantum no-cloning theorem and teleportation. Starting from classical butterfly network coding, a direct product operation is carried out on the particles to be sent and the GHZ-state particles at the sender. Bell measurements are then performed on the resulting particles, and the precoding information is obtained from the measurement results. The precoding information is transmitted to the intermediate nodes, and auxiliary particles used for measurement are introduced at the receiver. Quantum entanglement computation is then performed on the received particles, and a joint unitary operation is performed on the particle clusters. Finally, based on the coding information, the corresponding unitary matrix is selected to restore the particle clusters. Experimental data show that the single-transmission success rate of GHZ-based quantum network coding is significantly improved; in terms of coding efficiency, the cross transmission of quantum information in the butterfly network can be completed with only 6 qubits; and in terms of security, the probability of detecting whether the information has been eavesdropped is improved. This proves that the proposed scheme improves the coding efficiency of the network coding system and effectively solves the eavesdropping problem in information transmission.
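The three-particle GHZ state the scheme builds on, (|000> + |111>)/sqrt(2), can be prepared from |000> with one Hadamard and two CNOT gates; the plain state-vector simulation below shows this. It illustrates only the entangled resource, not the full Bell-measurement and coding protocol.

```python
import math

def apply_h(state, q, n):
    """Hadamard on qubit q (qubit 0 = most significant bit) of an n-qubit state."""
    bit = 1 << (n - 1 - q)
    for i in range(len(state)):
        if not i & bit:
            a, b = state[i], state[i | bit]
            state[i] = (a + b) / math.sqrt(2)
            state[i | bit] = (a - b) / math.sqrt(2)

def apply_cnot(state, control, target, n):
    """CNOT: flip qubit `target` on basis states where qubit `control` is 1."""
    cbit = 1 << (n - 1 - control)
    tbit = 1 << (n - 1 - target)
    for i in range(len(state)):
        if (i & cbit) and not (i & tbit):
            state[i], state[i | tbit] = state[i | tbit], state[i]

# Prepare |000>, then H on qubit 0 and CNOTs 0->1, 0->2.
state = [0.0] * 8
state[0] = 1.0
apply_h(state, 0, 3)
apply_cnot(state, 0, 1, 3)
apply_cnot(state, 0, 2, 3)
# state is now (|000> + |111>)/sqrt(2): amplitude 1/sqrt(2) at indices 0 and 7.
```

The three particles are perfectly correlated: measuring any one collapses the other two. This shared correlation is the resource distributed over the butterfly network, and tampering by an eavesdropper disturbs it in a detectable way, which the no-cloning theorem prevents an attacker from hiding.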

Antenna Selection for Spatial Modulation Based on Physical Layer Security
丁青锋, 奚韬, 连义翀, 吴泽祥. 基于物理层安全的空间调制系统天线选择算法[J]. 计算机科学, 2020, 47(7): 322-327.
DING Qingfeng, XI Tao, LIAN Yichong, WU Zexiang. Antenna Selection for Spatial Modulation Based on Physical Layer Security[J]. Computer Science, 2020, 47(7): 322-327.
 Computer Science. 2020, 47 (7): 322-327. doi:10.11896/jsjkx.190600133
 Abstract PDF(2130KB) ( 211 )

To address the high complexity of antenna selection algorithms based on optimal secrecy capacity, an improved low-complexity antenna selection algorithm based on the difference of squared column norms is proposed. First, the analytic formula of the secrecy capacity is simplified by normalizing the fixed quantities and comparing the difference between the legitimate channel and the eavesdropper channel, expanded and expressed in terms of the channel coefficients. Then the differences of the squared channel coefficients are traversed and sorted, and the antenna combination with the largest difference of squared channel norms is selected. Combining this algorithm with artificial noise, designed in the null space of the remaining legitimate channel, yields the optimal secrecy capacity. The simulation results show that, compared with the traditional algorithm, the proposed algorithm achieves the optimal secrecy capacity with low complexity. In addition, the bit error rate of the legitimate receiver is maintained while the bit error rate of the eavesdropper is kept as high as possible, so the security performance of the system is effectively enhanced.
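The selection rule itself, rank transmit antennas by the legitimate minus eavesdropper squared column norm and keep the top ones, can be sketched as below; the channel matrices are illustrative assumptions.

```python
def antenna_selection(H_b, H_e, k):
    """Pick k transmit antennas maximizing the per-column difference of
    squared norms between legitimate (H_b) and eavesdropper (H_e) channels.
    H[r][t] is the complex gain from transmit antenna t to receive antenna r."""
    n_t = len(H_b[0])

    def col_norm_sq(H, t):
        return sum(abs(H[r][t]) ** 2 for r in range(len(H)))

    diffs = [(col_norm_sq(H_b, t) - col_norm_sq(H_e, t), t)
             for t in range(n_t)]
    diffs.sort(reverse=True)               # traverse and sort the differences
    return sorted(t for _, t in diffs[:k])

# Illustrative 2x4 channels: antenna 1 strongly favors Bob, antenna 2 favors Eve.
H_b = [[1 + 1j, 3 + 0j, 0.1 + 0j, 1 + 0j],
       [0 + 1j, 2 + 1j, 0.2 + 0j, 1 + 0j]]
H_e = [[1 + 0j, 0.5 + 0j, 2 + 2j, 1 + 0j],
       [1 + 0j, 0.1 + 0j, 2 + 0j, 0.5 + 0j]]

chosen = antenna_selection(H_b, H_e, k=2)
```

Antenna 1's column dominates for the legitimate receiver and antenna 2's for the eavesdropper, so the rule keeps antennas 0 and 1 with only a sort over per-column norms instead of a search over all antenna subsets; the artificial noise is then beamformed into the null space of the remaining legitimate channel.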

Zero-high-resolution Information Hiding Algorithm for 3D Mesh Model
任帅, 王萌, 范傲雄, 高泽, 徐解, Shahzad KHURRAM, 张弢. 一种零高分辨率3D网格模型的信息隐藏算法[J]. 计算机科学, 2020, 47(7): 328-334.
REN Shuai, WANG Meng, FAN Aoxiong, GAO Ze, XU Jie, Shahzad KHURRAM, ZHANG Tao. Zero-high-resolution Information Hiding Algorithm for 3D Mesh Model[J]. Computer Science, 2020, 47(7): 328-334.
 Computer Science. 2020, 47 (7): 328-334. doi:10.11896/jsjkx.190800021
 Abstract PDF(3395KB) ( 279 )

Aiming at the weak anti-analysis ability of current information hiding algorithms for 3D mesh models, a zero-high-resolution information hiding algorithm is proposed. Firstly, multi-resolution analysis based on an improved half-fold mesh simplification algorithm is applied to the mesh model to decompose it into a high layer, middle layer and low layer. Secondly, feature vectors are extracted by concentric sphere segmentation in the high layer, and the feature points for embedding information are determined by computing vertex saliency in the middle layer. Thirdly, the highest bit of each eigenvalue is obtained cyclically from the feature vector and correlated with the secret information scrambled by a Chebyshev map to form the associated information. Finally, the associated information is embedded into the AC coefficients of the DCT transform of the feature points' spherical coordinate r values using a piecewise mapping function. The algorithm hides the associated information, constructed from the high-layer feature vectors and the scrambled information, in the affine-transform invariants of robust feature points of the middle layer using less than 15% of the energy, which benefits the invisibility, robustness and anti-analysis ability of the algorithm. The resulting changes of point and surface features go undetected by the first-order Laplacian smoothing steganalysis method, which shows that the proposed algorithm has good anti-analysis ability.
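The scrambling step can be illustrated with the Chebyshev chaotic map x -> cos(w * arccos(x)) on [-1, 1]: iterating it from a secret initial value yields a keystream whose thresholded bits mask the secret bits. The parameters and the XOR-based use of the keystream below are illustrative assumptions, not the paper's exact scrambling construction.

```python
import math

def chebyshev_bits(x0, w, n, skip=50):
    """Iterate the Chebyshev map x -> cos(w * arccos(x)) and threshold the
    orbit at 0 to obtain a chaotic bit sequence (first `skip` iterates dropped)."""
    x = x0
    bits = []
    for k in range(skip + n):
        x = math.cos(w * math.acos(x))
        if k >= skip:
            bits.append(1 if x > 0 else 0)
    return bits

def scramble(message_bits, x0=0.321, w=4):
    """XOR the secret bits with the chaotic keystream; applying the same
    call again with the same key (x0, w) restores the original bits."""
    ks = chebyshev_bits(x0, w, len(message_bits))
    return [m ^ k for m, k in zip(message_bits, ks)]

secret = [1, 0, 1, 1, 0, 0, 1, 0]
scrambled = scramble(secret)
restored = scramble(scrambled)
```

Only after this scrambling is the bitstream correlated with the high-layer feature bits and embedded into the DCT AC coefficients, so an attacker who even locates the embedding region still faces the chaotic key (x0, w).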