Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
Current Issue
Volume 45 Issue 11A, 03 November 2018
Research Progress and Mainstream Methods of Frequent Itemsets Mining
LI Guang-pu, HUANG Miao-hua
Computer Science. 2018, 45 (11A): 1-11. 
As one of the main research areas of data mining, association analysis is mainly used to find strong correlations hidden in large data sets. Most association rule mining tasks can be divided into two steps: the generation of frequent patterns (frequent itemsets, frequent sequences, frequent subgraphs) and the generation of rules. The former finds itemsets, sequences, and subgraphs satisfying the minimum support threshold in the dataset; the latter extracts high-confidence rules from the frequent patterns found in the previous step. Frequent itemset mining is a key issue in many data mining tasks and the core of association rule mining algorithms. For more than a decade, scholars have devoted themselves to improving the efficiency of generating frequent itemsets, improving algorithms from different perspectives, and a large number of efficient and scalable algorithms have been proposed. This article makes an in-depth analysis of frequent itemset mining, and introduces and reviews typical algorithms for complete frequent itemsets, closed frequent itemsets, and maximal frequent itemsets. Finally, the research directions of frequent itemset mining algorithms were briefly analyzed.
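The first step the abstract describes, finding itemsets that satisfy a minimum support threshold, can be illustrated with a minimal level-wise Apriori-style sketch; the transaction data and threshold below are invented for illustration only:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Minimal level-wise (Apriori-style) frequent itemset mining.
    Returns a dict mapping each frequent itemset to its support."""
    n = len(transactions)
    sets = [set(t) for t in transactions]
    # count frequent 1-itemsets
    counts = {}
    for t in sets:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c / n for s, c in counts.items() if c / n >= min_support}
    result = dict(frequent)
    k = 2
    while frequent:
        prev = list(frequent)
        # join step: merge frequent (k-1)-itemsets into k-item candidates
        candidates = {a | b for a in prev for b in prev if len(a | b) == k}
        # prune step: every (k-1)-subset of a candidate must itself be frequent
        candidates = {c for c in candidates
                      if all(frozenset(s) in frequent for s in combinations(c, k - 1))}
        frequent = {}
        for c in candidates:
            sup = sum(1 for t in sets if c <= t) / n
            if sup >= min_support:
                frequent[c] = sup
        result.update(frequent)
        k += 1
    return result
```

The same level-wise scheme underlies the closed and maximal variants the survey reviews, which additionally filter the result set.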
Survey of Query-oriented Automatic Summarization Technology
WANG Kai-xiang
Computer Science. 2018, 45 (11A): 12-16. 
This paper systematically surveyed query-oriented automatic summarization technology, analyzed the basic ideas, advantages and disadvantages of the methods used, and summarized future development directions. Four kinds of query-oriented automatic summarization methods were identified: methods based on graph models, methods based on machine learning, methods based on clustering, and other methods. In the future, methods based on neural networks and multi-model fusion will become the focus of research. At the application level, studying algorithms in combination with actual application scenarios will become a trend.
Overview:Application of Convolution Neural Network in Object Detection
YU Jin-yong, DING Peng-cheng, WANG Chao
Computer Science. 2018, 45 (11A): 17-26. 
As a branch of machine learning, deep learning has obtained wide application in various fields, and has become a major development direction in speech recognition, natural language processing, information retrieval and other areas. In image classification and object detection in particular, it has made new breakthroughs. This paper first sorted out the typical applications of convolutional neural networks in object detection. Secondly, it compared several typical convolutional neural network structures and summarized their advantages and disadvantages. Finally, the existing problems and future development directions of deep learning were discussed.
Research on Architecture of Internet of Things
LI Dong-yue, YANG Gang, QIAN Bo
Computer Science. 2018, 45 (11A): 27-31. 
The Internet of Things (IoT) has raised widespread concern among governments, enterprises and scholars since it was proposed. Standards organizations and research institutions have been trying to develop a unified standard for Internet of Things applications. On the one hand, the Internet of Things covers a wide range of contents; on the other hand, its concepts and enabling technologies are constantly being updated and developed, so no IoT standard has yet gained public acceptance. Based on the evolution of IoT architecture, this paper analyzed the design patterns and advantages of different IoT architectures in three ways. In the end, future trends of IoT were speculated on.
Review of Machine Learning Algorithms in Traditional Chinese Medicine
ZHANG Xiao-hang, SHI Qing-lei, WANG Bin, WANG Bing-wei, WANG Yong-ji, CHEN Li, WU Jing-zheng
Computer Science. 2018, 45 (11A): 32-36. 
Machine learning algorithms include traditional machine learning algorithms and deep learning algorithms. There are more reports of traditional machine learning algorithms in the field of traditional Chinese medicine (TCM) diagnosis and treatment, which provide a reference for exploring the dialectical laws of TCM and a basis for the objectification of TCM diagnosis and treatment. At the same time, the latest advances in deep learning provide new and effective paradigms for obtaining end-to-end learning models from complex data. Deep learning algorithms have achieved great success and become increasingly popular in more and more areas, and their value in TCM diagnosis and treatment has received growing attention from the industry. In this paper, a review of traditional machine learning algorithms and deep learning algorithms used in the TCM domain was given. Firstly, the research and application status of the two kinds of algorithms in the TCM domain was summarized. Then, in view of the analyzed work, different characteristics and limitations of traditional machine learning algorithms and deep learning algorithms were identified. Finally, these characteristics and limitations were discussed, and existing problems and recommendations were put forward, so as to provide a reference for the further study of machine learning algorithms in the field of TCM.
Review of Social Recommendation
WANG Gang, JIANG Jun, WANG Han-ru
Computer Science. 2018, 45 (11A): 37-42. 
Social recommendation systems are becoming a hot topic with the rapid development of Internet social networks. First of all, this paper introduced the basic theory of social recommendation, and explained its concept and basic framework. On this basis, this paper classified social recommendation into individual-oriented social recommendation and group-oriented social recommendation. Then, this paper gave formal definitions for individual-oriented and group-oriented social recommendation respectively, and summarized the current research status from the individual and group perspectives. Individual-oriented social recommendation includes recommendation methods based on prediction and on sequential learning. Group-oriented social recommendation includes recommendation methods based on the integration of methods and on the integration of results.
Research on Deep Learning Used in Intelligent Robots
LONG Hui, ZHU Ding-ju, TIAN Juan
Computer Science. 2018, 45 (11A): 43-47. 
The trend of robot development is artificial intelligence. Deep learning is a frontier technology for intelligent robots and a new subject in the machine learning field. Deep learning technology is widely used in agriculture, industry, military, aviation and other fields, and combining deep learning with robots makes it possible to design intelligent robots with high working efficiency, high real-time performance and high precision. In order to enhance the abilities of intelligent robots in all aspects and make them more intelligent, this paper introduced research projects related to deep learning and robots and the applications of deep learning in robots, including indoor and outdoor scene recognition, industrial and family services, and multi-robot collaboration. Finally, the future development of deep learning in intelligent robots, along with possible opportunities and challenges, was discussed.
Review of Pattern Driven Software Architecture Design
ZHANG Ying-jie, ZHU Xue-feng
Computer Science. 2018, 45 (11A): 48-52. 
In current software development theory and practice, software production needs to be done manually from requirement acquisition to code completion. The mapping from software requirements analysis to software architecture still depends on designers' skills, experience and creativity, and most software code is still produced manually by programmers. This traditional way of software production poses many problems for the software industry. With the development of software engineering theory and CASE tools, methodologies that break through the traditional way of software development have gradually been put forward. Pattern-based software automation production methods can save a lot of manpower in the process from the abstract software model to the automatic generation of software code. This approach improves the efficiency of software development and increases the adaptability of the software. This paper studied the design of model-driven software architecture by introducing pattern-based software automation production methods.
Impact of Blockchain Technology on AI
PAN Ji-fei, HUANG De-cai
Computer Science. 2018, 45 (11A): 53-57. 
Blockchain is the technology behind Bitcoin, with the characteristics of decentralization, tamper-resistance and traceability. Blockchain has gradually been applied in finance, digital rights, notarization, networking, document storage and other fields, and has obtained great achievements. Blockchain has become an increasingly popular research area, like AI (Artificial Intelligence), Big Data and Cloud Computing. AI rests on massive datasets and powerful computing ability, and Blockchain can be well integrated into AI to promote its rapid development. On the basis of introducing the concept and working mechanism of Blockchain, this paper mainly introduced the influence of Blockchain technology on AI, analyzed the feasibility of applying Blockchain in AI, and finally put forward prospects.
Research on Enterprise Portraits Based on Big Data Platforms
TIAN Juan, ZHU Ding-ju, YANG Wen-han
Computer Science. 2018, 45 (11A): 58-62. 
With the development of the economy, the number of enterprises is increasing. For the massive data generated by these enterprises, big data processing methods and the theory of enterprise portraits can be used to analyze business data and provide reliable data analysis for enterprise development, industry development and government regulation. Firstly, this paper summarized and analyzed the construction and technology of enterprise portraits domestically and internationally. Then, according to the characteristics of enterprise data and combined with persona technology, this paper put forward several methods for processing enterprise data. Finally, this paper raised several issues about handling enterprise data with big data processing methods.
Survey of Skyline Processing in P2P Environments
SUN Zhi, SUN Xue-jiao
Computer Science. 2018, 45 (11A): 63-70. 
With the growth of data scale and the development of network technology, peer-to-peer (P2P) networks have attracted more and more attention as a platform for distributed information sharing and searching. Given the highly dynamic, highly decentralized and extensible features of peer-to-peer networks, a skyline computation method on P2P not only needs to meet the requirements of centralized skyline computation, but also needs to reduce network traffic, reduce the average number of visited nodes, maintain load balancing and satisfy other important metrics. This paper examined this ongoing research area so that readers can easily grasp the state of the art. It described the purpose and main principles of distributed skyline methods, summarized the existing methods applicable to the P2P environment, and provided a comparative performance analysis. Finally, it gave future development directions for skyline computation in P2P environments.
Age of Big Data:from Von Neumann to Computing Storage Fusion
QIU Ci-yun, LI Li, ZHANG Huan, WU Jia
Computer Science. 2018, 45 (11A): 71-75. 
The emergence of massive data and the improvement of computing power aroused the third artificial intelligence boom, and the age of big data arrived. This paper first analyzed the memory wall, bandwidth wall and high power consumption problems that computers with the Von Neumann architecture face in the age of big data, which are driving changes in computer architecture to match the requirements of big data processing. Then, computing and storage fusion at the computer architecture level, its software and hardware structure, the idea of algorithm offloading, its technical background, and its commercial applications were analyzed, to inform product design such as high-performance computing, data centre setup and smart SSD design. At the micro level, 3D stacked packaging technology based on through-silicon vias was analyzed and the latest industry applications were introduced. Finally, artificial cognitive computation, which represents the development goal of computing and storage fusion, and the latest research status were summarized.
Review for Software Cost Evaluation Methods
ZHAO Xiao-min, FEI Meng-yu, CAO Guang-bin, ZHU Li-nan
Computer Science. 2018, 45 (11A): 76-83. 
How to budget a software project well has always been one of the difficult problems in the informatization of government agencies, enterprises and institutions. Software cost assessment evaluates the development effort, schedule and cost of a software project through a set of processes or models. It can improve the accuracy of software budgets, protect the delivery cycle of software projects, and allow research and development programs to be arranged and scheduled reasonably. First of all, software cost assessment methods were classified and compared, and their advantages and disadvantages were analyzed. Then, experiments and analysis of four assessment methods, including function points, use case points, neural networks and analogy, were carried out with sample data from software projects. Finally, the problems of existing software cost assessment methods and directions for further research were pointed out.
Review of SDN Performance Optimization Technology
SUN Tao, ZHANG Jun-xing
Computer Science. 2018, 45 (11A): 84-91. 
Software-Defined Networking (SDN) is an emerging network architecture. It completely decouples the data plane from the control plane: the control plane focuses on making and issuing network-wide decisions, while the data plane is solely responsible for data forwarding. Through the open interfaces of the control plane, SDN achieves network programmability. In the future, when SDN is widely deployed over wide areas, the performance optimization technologies of every plane will face many challenges. Firstly, the status quo of performance optimization for the control plane and data plane in the SDN architecture was analyzed. Secondly, the problems faced in optimizing the performance of each plane were summarized. Finally, future research trends in SDN performance optimization were prospected.
Intelligent Computing
Research of V2G Strategies for EV Parking Lot Based on Improved PSO
SHAO Wei-hui, XU Wei-sheng, XU Zhi-yu, WANG Ning, NONG Jing
Computer Science. 2018, 45 (11A): 92-96. 
Electric vehicle parking lots are built near commercial office buildings for electric vehicle charging and parking, to solve a series of problems brought by large-scale grid-connected electric vehicles. A V2G-based electric vehicle parking lot model (V2G_EVPL) was proposed to describe the charging and discharging behaviors of electric vehicles in the parking lot based on vehicle-to-grid (V2G) technology. Under real-time pricing (RTP), V2G_EVPL dynamically schedules electric vehicle charging or discharging, and then exchanges energy with the grid. An improved particle swarm optimization (IPSO) algorithm is applied to solve the optimal scheduling of electric vehicles. Improvements such as feasibility coding, adaptive search radius and boundary variability correction are made to improve the efficiency and convergence accuracy of PSO. Real-time price data from PJM and the parameters of mainstream electric vehicles are used in simulation. The operation processes and results of V2G_EVPL under three different scenarios are compared and analyzed. The simulation results show that the proposed model is reasonable and the improved PSO algorithm is both efficient and effective.
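For readers unfamiliar with the optimizer underlying IPSO, the following is a sketch of standard global-best PSO on a toy objective; the paper's specific refinements (feasibility coding, adaptive search radius, boundary variability correction) are not reproduced here, and all parameter values are illustrative:

```python
import numpy as np

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        lb=-5.0, ub=5.0, seed=0):
    """Standard global-best PSO minimizing f over the box [lb, ub]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n, dim))        # particle positions
    v = np.zeros((n, dim))                   # particle velocities
    pbest = x.copy()                         # personal best positions
    pbest_f = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()     # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        # inertia + cognitive pull toward pbest + social pull toward gbest
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g
```

In the scheduling problem above, each particle would encode a charging/discharging schedule and `f` would be the energy cost under real-time prices.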
PSO-ACO Fusion Algorithm Based on Optimal Path Planning in Indoor Environment
LIU Jun, XU Ping-ping, WU Gui-lu, PENG Jie
Computer Science. 2018, 45 (11A): 97-100. 
In order to find the optimal path for a mobile robot to reach a specified destination in an indoor obstacle environment, an improved PSO-ACO fusion algorithm based on particle swarm optimization (PSO) and the ant colony algorithm (ACO) was proposed. In the PSO-ACO fusion algorithm, the ant colony algorithm is used to obtain the global optimal solution for the local optima caused by premature particles in particle swarm optimization. At the same time, the problems of low particle diversity in PSO and of missing initial pheromone and high time consumption in ACO are effectively solved. Simulation results show that, compared with particle swarm optimization and the ant colony algorithm alone, the PSO-ACO fusion algorithm can greatly improve the ability to find the optimal solution and realize optimal path planning while improving the global search ability and search speed of the algorithm.
Research on Multi-units Control Method in RTS Games Based on PAGA
YANG Zhen, ZHANG Wan-peng, LIU Hong-fu, WEI Zhan-yang
Computer Science. 2018, 45 (11A): 101-104. 
Unit control in real-time strategy (RTS) games is a challenging issue in the field of artificial intelligence (AI). Such games are constrained in real time and have large state and action spaces, which makes it hard for intelligent algorithms to solve this type of problem. By controlling the multiple units in a battle scene through search over a script space, it is possible to effectively overcome the adverse effects caused by the huge branching factor. This paper used an adaptive genetic algorithm to search the script space to provide a good sequence of actions for the multiple units in the battle scene and control each unit. Experiments show that the proposed PAGA (Portable Adaptive Genetic Algorithm) is feasible and effective, and its performance is superior to current algorithms in large-scale unit control.
Emotion Recognition on Microblog Based on Character and Word Features
YIN Hao, XU Jian, LI Shou-shan, ZHOU Guo-dong
Computer Science. 2018, 45 (11A): 105-109. 
Text emotion recognition is an important task in the natural language processing community. This task aims to predict the emotion expressed in a piece of text. This paper proposed a novel emotion recognition approach based on character and word features. Compared to most traditional approaches, this approach employs both character and word features, taking the characteristics of microblog text into account. Specifically, feature representations of a microblog are extracted from characters and from words respectively. Then, an LSTM model (or bi-directional LSTM model) is employed to extract hidden feature representations from the above feature representations. Third, the two groups of hidden character and word feature representations are merged to perform emotion recognition. Empirical studies demonstrate the effectiveness of the proposed approach for emotion recognition on SINA microblog.
Relationships Between Several Reductions in Decision System
JING Si-hui, QIN Ke-yun
Computer Science. 2018, 45 (11A): 110-112. 
The indiscernibility relation is the basis of rough set theory. Firstly, this paper studied the relationships between λ-reduction, maximal distribution reduction and distribution reduction in decision tables. It is proved that a λ-consistent set is both a maximal distribution consistent set and a distribution consistent set. Secondly, this paper designed a heuristic reduction algorithm for λ-reduction based on attribute frequency in the discernibility matrix, which can reduce the complexity of reduction computation. Finally, the feasibility and effectiveness of the proposed algorithm were verified by examples.
WDS:Word Documents Similarity Based on Word Embedding
WANG Lu-qi, LONG Jun, YUAN Xin-pan
Computer Science. 2018, 45 (11A): 113-116. 
In order to further improve the accuracy of document similarity, under the framework of system similarity functions, this paper presented Word Documents Similarity (WDS) based on word embeddings, and its optimized variant FWDS (Fast Word Documents Similarity). WDS regards the set of word embeddings corresponding to a document's word set as a system, and each word's embedding as an element of that system; the similarity of two documents is then the similarity of the two word embedding sets. In the concrete calculation, the first vector set is used as the standard, an alignment operation between the two vector sets is carried out, and multiple parameters of the sets that are in and not in MOPs are calculated. The experimental results show that, compared with WMD and WJ, WDS maintains a better hit rate as document length increases.
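The set-alignment idea can be illustrated with a simplified stand-in: align each embedding in one set with its most similar counterpart in the other set via cosine similarity, and average over both directions. This is a generic sketch, not the paper's exact WDS formula (its MOPs-based parameters are omitted):

```python
import numpy as np

def cos(u, v):
    """Cosine similarity between two non-zero vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def set_similarity(A, B):
    """Similarity of two word-embedding sets: average best-match cosine
    similarity, symmetrized over both alignment directions."""
    fwd = np.mean([max(cos(a, b) for b in B) for a in A])
    bwd = np.mean([max(cos(b, a) for a in A) for b in B])
    return (fwd + bwd) / 2
```

Two documents with identical embedding sets score 1.0; documents whose embeddings are mutually orthogonal score 0.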
Branching Heuristic Method Based on Added Weight Average Value
HU Zhong-xue, XU Yang, HU Rong
Computer Science. 2018, 45 (11A): 117-120. 
An effective branching strategy in satisfiability (SAT) solving algorithms can improve the efficiency of the solver. Considering conflicts, and according to whether variables join in conflicts and how often, a branching heuristic method based on a weighted average value was proposed. Firstly, a sequence is used to record whether the variables are involved in a conflict. Secondly, a weighted average function is given, whose value is calculated from the variable sequences and decision levels. Finally, the variable with the largest value is assigned, and example analysis and comparison are conducted. The new method improves on the control encoding method, so comparison and analysis were used in the example study, and the SUM (Sum in experiment) strategy and ACIDS (Average Conflict-index Decision Score) strategy were compared with the new method. Through the analysis of examples from SATLIB, the results show that with the new method more clauses are satisfied, or the latest conflict clause is given priority.
Research and Application of Ensemble Learning Using Gradient Optimization Decision Tree
WANG Yan-bin, WU You-xi, LIU Hong-pu
Computer Science. 2018, 45 (11A): 121-125. 
Ensemble learning completes a learning task by building multiple classifiers with complementary performance in order to reduce classification error. However, current research fails to consider the local validity of classifiers. In this paper, a hierarchical multi-class classification algorithm was proposed in the framework of ensemble learning. The algorithm decomposes the problem by predicted category, and integrates several weak classifiers on the basis of stratification to improve prediction accuracy. Experimental results on a real American college matriculation data set and three UCI datasets verified the effectiveness of the algorithm.
Prediction Model of Ship Trajectory Based on LSTM
QUAN Bo, YANG Bo-chen, HU Ke-qi, GUO Chen-xuan, LI Qiao-qin
Computer Science. 2018, 45 (11A): 126-131. 
It is imperative to raise the level of decision-making of vessel traffic service (VTS) systems in the light of increasingly complex maritime circumstances. Aiming at the multidimensional characteristics of ship navigation trajectories and the demand for accurate and real-time prediction of ship trajectories, a prediction method combining automatic identification system (AIS) trajectory data and deep learning was proposed. A feature representation of vessel behavior based on AIS data was established, and a recurrent neural network-long short-term memory (RNN-LSTM) model was proposed. The model was trained on AIS data from Guangzhou Harbor and used to predict vessel trajectories. The results show that the method can predict the characteristics of vessel trajectories in a timely manner with acceptable accuracy. Compared with traditional processing methods, it is superior in processing time-series data.
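A common preprocessing step for this kind of LSTM trajectory model is to slice each vessel's AIS report sequence into fixed-length input windows with the next report as the prediction target. The sketch below assumes a generic per-report feature layout (e.g. latitude, longitude, speed, course); the paper's actual feature representation may differ:

```python
import numpy as np

def make_windows(track, window=5):
    """Slice one vessel's trajectory (rows of per-report features, e.g.
    lat, lon, speed, course) into LSTM training pairs: a fixed-length
    window of past reports as input, the next report as target."""
    track = np.asarray(track, dtype=float)
    X = np.stack([track[i:i + window] for i in range(len(track) - window)])
    y = track[window:]
    return X, y  # X: (samples, window, features), y: (samples, features)
```

The resulting `(samples, timesteps, features)` tensor is the shape recurrent layers in most deep learning frameworks expect.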
New Method for Ranking Scientific Publications with Creditworthiness Mechanism
FENG Lei, JI Jun-zhong, WU Chen-sheng
Computer Science. 2018, 45 (11A): 132-137. 
Evaluating the scientific value of publications has always been a research focus in the field of bibliometrics. However, some mainstream methods based on data mining overlook the influence of malicious activities and yield poor evaluation results. To solve this problem, this paper proposed a new method named ReputeRank, which employs a creditworthiness mechanism to evaluate the effectiveness of publications in the citation network. The creditworthiness mechanism consists of a seed selection phase, a credit spreading phase and an integrated computation phase. First, ReputeRank employs background information on the division of SCI periodicals to select potentially good seeds and bad seeds in the citation network. Then, under the assumption that papers cited by good-credibility seeds usually have higher credibility while papers cited by bad-credibility seeds often have lower credibility, the method uses the TrustRank and Anti-TrustRank evaluation formulas to iteratively spread trust and distrust values over the citation network. Finally, according to the trust and distrust values in the citation network, the method uses an integrated equation to compute the score of each paper and ranks all papers in descending order of score. The experimental results on the KDD Cup 2003 datasets demonstrate that ReputeRank achieves good effectiveness and robustness compared with PageRank, CountDegree and SPRank.
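The credit spreading phase can be illustrated with a generic TrustRank-style iteration: trust mass starts at the seeds and is propagated along citation links with damping. The adjacency matrix and seed set below are toy values, and the distrust (Anti-TrustRank) pass and the integration equation are omitted:

```python
import numpy as np

def trust_rank(adj, seeds, damping=0.85, iters=50):
    """Spread trust from seed papers over a citation network.
    adj[i][j] = 1 means paper i cites paper j."""
    n = len(adj)
    A = np.asarray(adj, dtype=float)
    out = A.sum(axis=1, keepdims=True)
    out[out == 0] = 1.0                 # avoid division by zero for sinks
    M = A / out                         # row-stochastic transition matrix
    d = np.zeros(n)
    d[list(seeds)] = 1.0 / len(seeds)   # all initial trust sits on the seeds
    t = d.copy()
    for _ in range(iters):
        # trust flows from citing papers to cited papers, damped toward seeds
        t = damping * (M.T @ t) + (1 - damping) * d
    return t
```

Running Anti-TrustRank is the same iteration with bad seeds; a final score would then combine the two value vectors.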
Approach to Solve TSP with Parallel ACS-2-opt
LI Jun, TONG Zhao, WANG Zheng
Computer Science. 2018, 45 (11A): 138-142. 
To address the defects of the basic ACS in solving the TSP, this paper added a 2-opt local search strategy to the ACS model to improve the computing capacity and accuracy in the process of building the best tour. Moreover, since the ACS algorithm is easy to parallelize, this paper used multithreaded concurrency and parameter optimization to enhance the computing speed of the basic ACS. Finally, this paper implemented a parallel ACS-2-opt algorithm that performs well on medium-sized TSP instances. According to the experimental results, the 2-opt strategy has an obvious effect on improving the accuracy of ACS in solving the TSP. The time cost of ACS varies with different pheromone heuristic value settings, and ACS degenerates when the reciprocal of the distance between two nodes is used as the corresponding heuristic value. Under parallel computation, ACS-2-opt shows good parallel performance in solving the TSP.
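The 2-opt local search added to ACS can be sketched independently of the ant colony part: repeatedly reverse a tour segment whenever the reversal shortens the tour, until no improving reversal exists. A minimal (unoptimized) version:

```python
def tour_length(tour, dist):
    """Total length of a closed tour under the distance matrix dist."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def two_opt(tour, dist):
    """2-opt local search: reverse segments while the tour keeps shortening."""
    best = list(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(best) - 1):
            for j in range(i + 1, len(best)):
                cand = best[:i] + best[i:j][::-1] + best[j:]
                if tour_length(cand, dist) < tour_length(best, dist):
                    best, improved = cand, True
    return best
```

In the fused algorithm, each tour an ant completes would be polished this way before pheromone update.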
Q-learning with Feature-based Approximation for Traffic Light Control
LI Min-shuo, YAO Ming-hai
Computer Science. 2018, 45 (11A): 143-145. 
Reinforcement learning (RL) learns a policy through interaction with the environment. RL algorithms are online, incremental, and easy to implement. This paper proposed a Q-learning algorithm with function approximation for adaptive traffic light control (TLC). Applying table-based Q-learning to traffic signal control requires full-state representations and cannot be implemented even in moderately sized road networks, because the computational complexity grows exponentially in the number of lanes and junctions. This paper tackled this curse of dimensionality by using feature-based state representations and a broad characterization of congestion levels. The experimental results show that the proposed method is effective and feasible.
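The feature-based approximation described above reduces, in its simplest form, to linear Q-learning: Q(s, a) is a weighted sum of state features, and the TD error updates only the weights of the chosen action. This is a generic sketch, not the paper's exact feature design (its congestion-level features are not reproduced):

```python
import numpy as np

class LinearQ:
    """Q-learning with linear function approximation: Q(s, a) = w[a] . phi(s)."""
    def __init__(self, n_features, n_actions, alpha=0.1, gamma=0.9):
        self.w = np.zeros((n_actions, n_features))
        self.alpha, self.gamma = alpha, gamma

    def q(self, phi):
        """Q-values of all actions for the feature vector phi."""
        return self.w @ phi

    def update(self, phi, a, r, phi_next, done=False):
        """One TD(0) update after taking action a and observing (r, phi_next)."""
        target = r if done else r + self.gamma * np.max(self.w @ phi_next)
        td = target - self.w[a] @ phi
        self.w[a] += self.alpha * td * phi
        return td
```

For TLC, `phi` would encode coarse congestion levels per lane, and each action would be a signal phase; the weight table grows linearly in features rather than exponentially in lanes.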
Focused Crawling Based on Grey Wolf Algorithms
XIAO Jing-jie, CHEN Zhi-yun
Computer Science. 2018, 45 (11A): 146-148. 
In order to solve the problem that focused crawlers have difficulty achieving an optimal solution in global search, and to improve the precision and recall of topic crawlers, this paper designed a focused crawler search strategy combined with the grey wolf algorithm. The experimental results show that, compared with the traditional breadth-first search strategy and the genetic algorithm (also a swarm intelligence algorithm), the performance of the focused crawler based on the grey wolf algorithm is greatly improved, and more topic-related web pages can be crawled.
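The grey wolf optimizer the strategy builds on updates each wolf toward the three current leaders (alpha, beta, delta), with an exploration coefficient that decays over the run. Below is a basic continuous-domain GWO on a toy objective; how the paper maps crawl frontiers onto wolf positions is not reproduced:

```python
import numpy as np

def gwo(f, dim, n_wolves=20, iters=200, lb=-10.0, ub=10.0, seed=0):
    """Basic Grey Wolf Optimizer minimizing f over the box [lb, ub]^dim."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    best_x, best_f = None, float("inf")
    for t in range(iters):
        fitness = np.array([f(x) for x in X])
        order = np.argsort(fitness)
        if fitness[order[0]] < best_f:
            best_x, best_f = X[order[0]].copy(), fitness[order[0]]
        leaders = [X[order[0]].copy(), X[order[1]].copy(), X[order[2]].copy()]
        a = 2 * (1 - t / iters)          # exploration factor: 2 -> 0
        for i in range(n_wolves):
            step = np.zeros(dim)
            for leader in leaders:
                A = 2 * a * rng.random(dim) - a
                C = 2 * rng.random(dim)
                D = np.abs(C * leader - X[i])
                step += leader - A * D   # move relative to this leader
            X[i] = np.clip(step / 3, lb, ub)
    return best_x
```

Early on, large |A| lets wolves roam (global search); as `a` shrinks, the pack converges on the alpha, which is the behavior the crawler strategy exploits.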
Tactical Analysis of MOBA Games Based on Hotspot Map of Battlefield
YU Cheng, ZHU Wan-ning
Computer Science. 2018, 45 (11A): 149-151. 
With the continuous development of the e-sports industry, besides decisive factors such as experience, talent and skill, data analysis has an increasing influence on the winners and losers of MOBA games. For MOBA games that cannot provide accurate data directly through an interface, this paper proposed a method that preprocesses position data according to the official heat maps and uses PNN (Probabilistic Nearest Neighbor) with the idea of prototype clustering for tactical analysis. Finally, tactics are derived in the form of a probability: the probability that each side's core characters move toward the core of the battle. In this algorithm, a weighted distance is added to remedy KNN's shortcoming of using Euclidean distance to measure the difference between sample points, and the least squares method is used to obtain the optimal constant solution. At the same time, all distance data are normalized to improve the accuracy of the algorithm. The final experiment shows that this method is effective in predicting the probability of the core point of the battlefield.
Health Assessment of Diesel Generator Based on Convolution Neural Network
ZHAO Dong-ming, CHENG Yan-ming, CAO Ming
Computer Science. 2018, 45 (11A): 152-154. 
The diesel generator is the core equipment of the unmanned surface vehicle (USV); its health status directly affects the navigation state of the USV. For the health assessment of diesel generators, a method based on convolutional neural networks was proposed. A health assessment model is established using the basic parameters of the generator as characteristic parameters, and the health states for motor assessment are defined. Taking the diesel generator of a 100-ton electric propulsion USV as an example, the model was verified; the health state transition relationships were obtained and the health threshold of the starting motor is 0.03. Compared with the commonly used BP neural network, the convergence speed, recognition speed and accuracy of the model are obviously improved.
Research on Optimization Algorithm of Deep Learning
TONG Wei-guo, LI Min-xia, ZHANG Yi-ke
Computer Science. 2018, 45 (11A): 155-159. 
Abstract PDF(1809KB) ( 2604 )   
Deep learning is a hot research field in machine learning. The training and optimization algorithms of deep learning have also received much attention and study, and have become an important driving force for the development of artificial intelligence. Based on the basic structure of the convolution neural network, the selection of activation functions, the setting of hyperparameters and the optimization algorithms used in network training were introduced in this paper. The advantages and disadvantages of each training and optimization algorithm were analyzed and verified with the Cifar-10 data set as training samples. Experimental results show that appropriate training methods and optimization algorithms can effectively improve the accuracy and convergence of the network. Finally, the optimal algorithm was applied to image recognition of actual transmission lines and achieved good results.
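The optimization algorithms such a study typically compares are variants of gradient descent. Below is a minimal sketch of three common update rules (SGD, heavy-ball momentum, Adam) minimizing a 1-D quadratic; the hyperparameters are arbitrary illustrative choices, not the paper's settings:

```python
import math

def optimize(grad, x0, steps, rule, lr=0.1):
    """Run a named update rule on a 1-D objective; returns the final x."""
    x, v, m, s = x0, 0.0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        if rule == "sgd":
            x -= lr * g
        elif rule == "momentum":
            v = 0.9 * v + g            # accumulate velocity
            x -= lr * v
        elif rule == "adam":
            m = 0.9 * m + 0.1 * g      # first-moment estimate
            s = 0.999 * s + 0.001 * g * g  # second-moment estimate
            mhat = m / (1 - 0.9 ** t)      # bias correction
            shat = s / (1 - 0.999 ** t)
            x -= lr * mhat / (math.sqrt(shat) + 1e-8)
    return x

grad = lambda x: 2 * x  # gradient of f(x) = x**2
```

On this convex toy problem plain SGD already converges; the practical differences between the rules show up on the ill-conditioned, noisy losses of deep networks.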
Differential Evolution Algorithm with Adaptive Population Size Reduction Based on Population Diversity
SHAN Tian-yu, GUAN Yu-yang
Computer Science. 2018, 45 (11A): 160-166. 
Abstract PDF(1931KB) ( 1066 )   
To avoid premature convergence effectively and improve the capability of global search, an algorithm named differential evolution with adaptive population size reduction based on population diversity (Dapr-DE) was proposed. Dapr-DE first uses population diversity to control the population size reduction. Then, it divides the population into subpopulations by clustering and deletes individuals according to their fitness, which keeps population diversity effectively and avoids local convergence. Finally, experimental results on 30 real-parameter optimization problems in the CEC14 function set validate the effectiveness of the proposed algorithm.
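Dapr-DE builds on standard differential evolution. A minimal DE/rand/1/bin kernel on the sphere function is sketched below for orientation; the paper's diversity-driven population reduction and clustering-based individual deletion are not included:

```python
import random

def differential_evolution(f, dim, pop_size=20, gens=100, F=0.5, CR=0.9, seed=0):
    """Classic DE/rand/1/bin minimizing f over the box [-5, 5]^dim."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # three distinct donors, none equal to the target index i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = [pop[a][j] + F * (pop[b][j] - pop[c][j])
                     if (rng.random() < CR or j == jrand) else pop[i][j]
                     for j in range(dim)]
            ft = f(trial)
            if ft <= fit[i]:            # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

sphere = lambda x: sum(v * v for v in x)
```

Dapr-DE would additionally shrink `pop_size` during the run whenever the measured population diversity allows it.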
Research on Multi-AGV Scheduling Algorithm Based on Improved Hybrid PSO-GA for FMS
YUE Xiao-han, XU Xiao-jian, WANG Xi-bo
Computer Science. 2018, 45 (11A): 167-171. 
Abstract PDF(1961KB) ( 894 )   
The automated guided vehicle (AGV) is often used to transport materials to improve production efficiency in a manufacturing facility or a warehouse. AGV scheduling needs to consider not only the task assignment problem, but also the time spent on each operation and the running time of each vehicle. Compared with single-objective optimization scheduling, multi-objective optimization requires a more complex model. The proposed model optimizes two objectives, minimizing the completion time and the number of scheduled AGVs, while considering the power status of the AGVs. This paper presented an improved hybrid particle swarm optimization and genetic algorithm (PSO-GA) to optimize the model. Compared with the GA or PSO algorithm alone, the proposed algorithm has a significant optimization effect; compared with the plain PSO-GA hybrid algorithm, it is further improved in running time.
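As background for the hybrid, a minimal global-best PSO kernel is sketched below (the GA crossover/mutation steps and the AGV scheduling model itself are not reproduced; parameter values are illustrative):

```python
import random

def pso(f, dim, n_particles=20, iters=100, seed=0):
    """Global-best particle swarm optimization minimizing f over [-5, 5]^dim."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive and social coefficients
    x = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in x]
    pval = [f(p) for p in x]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = list(pbest[g]), pval[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            fx = f(x[i])
            if fx < pval[i]:           # update personal best
                pbest[i], pval[i] = list(x[i]), fx
                if fx < gval:          # and global best
                    gbest, gval = list(x[i]), fx
    return gbest, gval
```

A PSO-GA hybrid of the kind described would interleave such velocity updates with GA-style crossover and mutation on the particle positions.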
Pattern Recognition & Image Processing
Research on Multi Feature Fusion Infrared Ship Wake Detection
ZOU Na, TIAN Jin-wen
Computer Science. 2018, 45 (11A): 172-175. 
Abstract PDF(4111KB) ( 1297 )   
A new algorithm for infrared ship wake detection based on the fusion of Gabor filtering and local information entropy was proposed to solve the problem that the infrared image of a ship wake is easily disturbed by sea clutter, has low contrast, and cannot be identified by traditional methods. First of all, the contrast between the wake and the sea background is calculated by using the gray level co-occurrence matrix to determine whether there is a ship wake in the region, and the region of interest is extracted to improve the processing speed of the algorithm. Secondly, multi-direction Gabor filtering and local information entropy are fused to realize feature enhancement of the ship wake. Finally, infrared ship wake detection is realized by threshold segmentation and the Hough transform. Experimental results show that this method can effectively preserve the texture features and details of the ship wake and accurately extract the complete wake edge, which greatly improves the detection rate.
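The local information entropy used in the fusion can be sketched as the Shannon entropy of the gray-level histogram inside a sliding window (the window size and function name below are our own choices, not the paper's):

```python
import math

def local_entropy(image, r, c, size=3):
    """Shannon entropy (bits) of the gray-level histogram in a size x size
    window centred at (r, c); textured regions such as a wake score higher
    than flat sea background."""
    half = size // 2
    counts, n = {}, 0
    for i in range(r - half, r + half + 1):
        for j in range(c - half, c + half + 1):
            if 0 <= i < len(image) and 0 <= j < len(image[0]):
                g = image[i][j]
                counts[g] = counts.get(g, 0) + 1
                n += 1
    return -sum((k / n) * math.log2(k / n) for k in counts.values())
```

A uniform window gives entropy 0; a window split evenly between two gray levels gives 1 bit.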
Research on High Rate of Log’s Output Based on Computer Vision
ZHONG Ping-chuan, WANG Na, XIAO Yi-di, ZHENG Ze-zhong
Computer Science. 2018, 45 (11A): 176-179. 
Abstract PDF(4600KB) ( 768 )   
The computer can simulate the human visual system to identify and measure things in the field of vision. With increasing accuracy, computer vision can replace human eyes in simple and repetitive manual operations. Introducing computer vision into log cutting can increase the yield of logs, reduce wood loss, maximize the utilization rate of logs with the high efficiency and accuracy of the computer, minimize the raw material wasted as square offcuts, and increase the output rate of logs. This algorithm is applied to automated band-saw log cutting systems. The basic process includes eliminating image noise through image preprocessing, removing the background through color segmentation, obtaining the contour of the region of interest by edge detection, filling mis-processed contour edges through morphological operations, and calculating the largest area of the fitted ellipse. The experimental results show that the algorithm can meet the requirements of actual production, and the accuracy reaches 95%.
Multi-view Geometric 3D Reconstruction Method Based on AKAZE Algorithm
ZHOU Sheng-pu, GENG Guo-hua, LI Kang, WANG Piao
Computer Science. 2018, 45 (11A): 180-184. 
Abstract PDF(5994KB) ( 1365 )   
Aiming at the low efficiency of the incremental structure-from-motion algorithm in multi-view geometric 3D reconstruction, a multi-view geometric 3D reconstruction method based on the AKAZE algorithm was proposed. The target images obtained by the camera are detected and matched by the AKAZE algorithm, and weakly matched images are eliminated by using the random sample consensus algorithm and three-view constraints. Then the global rotation parameters are solved by the least square method according to the relative pose parameters of the matched images, and the global displacement parameters are solved by using the three-view constraint relation. Finally, bundle adjustment optimization is carried out. The experimental results show that the proposed algorithm can improve processing efficiency and meet the needs of fast processing while improving the reconstruction effect.
Method for Visual Adjustment of Two-camera Position Based on GA-BP Neural Network
YANG Feng-kai, CHENG Su-xia
Computer Science. 2018, 45 (11A): 185-188. 
Abstract PDF(1991KB) ( 594 )   
A target template was designed and a BP neural network model was proposed, which can calculate the position deviation parameters between two cameras according to the image coordinates of the feature points of the target template in the two cameras. The GA algorithm is used to optimize the BP neural network to compensate for its shortcomings. The training sample data set is used to train the proposed model, the model is tested with the test sample data set, and finally the trained model is used in the actual production of two-camera modules. The application results show that the calibration precision and time based on the proposed method can meet the requirements of actual production.
Extraction Algorithm of Key Actions in Continuous and Complex Sign Language
XU Xin-xin, HUANG Yuan-yuan, HU Zuo-jin
Computer Science. 2018, 45 (11A): 189-193. 
Abstract PDF(3811KB) ( 973 )   
An algorithm for extracting key actions in sign language was proposed in this paper. In continuous and complex sign language, the number of key actions is small and their state is relatively stable. Thus, using the key actions to construct the data model of the sign language will reduce unstable factors and improve accuracy. In this paper, an adaptive classification algorithm was proposed, which extracts the key actions step by step according to the time order and the irrelevance among key actions. Experiments show that the algorithm can be used for the non-specific population. Moreover, the algorithm can extract all the key actions from both single vocabulary and continuous sentences. Key actions can be regarded as primitives of sign language, and sign language can thus be looked upon as different combinations of those primitives. Therefore, for continuous and complex sign language, the extraction of key actions is of great significance not only for data model construction, but also for recognition.
Edge Detection for Noisy Image Based on Wavelet Transform and New Mathematical Morphology
YU Xiao-qing, CHEN Ren-wen, TANG Jie, XU Jin-ting
Computer Science. 2018, 45 (11A): 194-197. 
Abstract PDF(4101KB) ( 1102 )   
In order to remove image noise and preserve image edge information in image edge detection, an edge detection method for noisy images based on wavelet transform modulus maxima and improved mathematical morphology was proposed. Firstly, the image edge detection algorithm based on wavelet transform modulus maxima was introduced. Then a new improved mathematical morphology was proposed. Finally, in order to combine the merits of the two algorithms, a new fusion method was used to fuse the results of the two methods, yielding a novel edge detection method for noisy images based on wavelet transform and new morphology. The experimental results show that the proposed fusion detection algorithm can suppress noise more effectively and improve the edge detection effect compared with using wavelet transform modulus maxima or the new mathematical morphology alone.
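As a sketch of the morphological side of such a method, the classic morphological gradient (dilation minus erosion) yields a simple edge map on binary images; the paper's improved morphology and the wavelet modulus-maxima fusion are not reproduced here:

```python
def _window_op(img, op, se=1):
    """Apply op (min or max) over a (2*se+1)^2 neighbourhood of each pixel."""
    h, w = len(img), len(img[0])
    return [[op(img[x][y]
                for x in range(max(0, i - se), min(h, i + se + 1))
                for y in range(max(0, j - se), min(w, j + se + 1)))
             for j in range(w)] for i in range(h)]

def dilate(img, se=1):
    return _window_op(img, max, se)

def erode(img, se=1):
    return _window_op(img, min, se)

def morph_gradient(img):
    """Edge map: dilation minus erosion; nonzero exactly along boundaries."""
    d, e = dilate(img), erode(img)
    return [[dv - ev for dv, ev in zip(dr, er)] for dr, er in zip(d, e)]
```

For a solid square of ones, the gradient is 0 in the square's interior and 1 along and just outside its boundary.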
Sequential Feature Based Sketch Recognition
YU Mei-yu, WU Hao, GUO Xiao-yan, JIA Qi, GUO He
Computer Science. 2018, 45 (11A): 198-202. 
Abstract PDF(1970KB) ( 1072 )   
Recognizing freehand sketches is a highly challenging task. Most existing methods treat sketches as traditional texture images with a fixed structural ordering and ignore the temporality of sketching. In this paper, a novel sketch recognition method was proposed based on the stroke sequence of the sketch. Strokes are divided into groups and their features are fed into a recurrent neural network to make use of the temporality. The features from each time step are combined to produce the final classification results. The proposed algorithm was tested on a benchmark, and its recognition rate is far above that of other methods.
Auto-detection of Hard Exudates Based on Deep Convolutional Neural Network
CAI Zhen-zhen, TANG Peng, HU Jian-bin, JIN Wei-dong
Computer Science. 2018, 45 (11A): 203-207. 
Abstract PDF(4444KB) ( 618 )   
A hard exudates (HEs) detection method based on deep convolution neural network was proposed in this paper, which achieves automatic detection of HEs and contributes to the construction of a computer-aided diagnostic system for diabetic retinopathy (DR). This method includes offline training of the HEs classification model and online detection of HEs. In order to train the HEs classification model offline, a CNN is adopted to extract HEs features automatically. Then, HEs in fundus images are detected by the classification model trained offline, and an HEs probability graph and an HEs pseudo-color map are obtained. The method was verified on a standard data set and a self-built data set respectively. Compared with other methods, the proposed method is strongly robust and has significant clinical practical value.
Study on Adaptive Hierarchical Clustering De-noising Algorithm of Laser Ranging in Storage of Dangerous Chemicals
LIU Xue-jun, WEI Yu-chen, YUAN Bi-xian, LU Hao, DAI Bo, LI Cui-qing
Computer Science. 2018, 45 (11A): 208-211. 
Abstract PDF(4038KB) ( 695 )   
In order to realize safety early warning in the storage of dangerous chemical products, this paper used laser ranging and an encoder to monitor five distances in the warehouse. To solve the noise problem of the ranging data, a new algorithm was designed for noise reduction and feedback compensation. According to the characteristics of the noise and the distance, taking the tested object as the center, objects are divided into three categories from far to near. The first and second layers use peak denoising, the third layer uses piecewise fitting of angles, and inter-layer feedback error correction is used to realize closed-loop denoising. Experimental results show that the variance is reduced by 0.83 compared with the baseline denoising algorithm. Compared with the difference-value comparison algorithm, this algorithm can remove the catastrophe points caused by a small amount of concentrated noise, and the variance is decreased by 1.93. The algorithm can remove the noise and restore the position of the object.
Watershed Segmentation by Gradient Hierarchical Reconstruction under Opponent Color Space
JIA Xin-yu, JIANG Zhao-hui, WEI Ya-mei, LIU Lian-zhong
Computer Science. 2018, 45 (11A): 212-217. 
Abstract PDF(3246KB) ( 893 )   
In order to alleviate the over-segmentation of the traditional watershed algorithm, a watershed segmentation algorithm with gradient hierarchical reconstruction under an opponent color space was proposed, considering the interference of reflected light on the image. Firstly, the color image is converted from RGB space to the opponent color space, which is independent of the reflected light. Secondly, the gradient image of the color image is obtained by combining image information entropy. Thirdly, the gradient image is hierarchically reconstructed according to the distribution information of the gradient histogram. Then the morphological minimum calibration technique is used to calibrate the combined gradient image. At last, watershed segmentation is applied to the corrected image. Experiments on different types of images were carried out. The experimental results show that the proposed algorithm outperforms three classic watershed algorithms in the number of divided regions, running time and the DIR. The new algorithm is more in line with human perception of the image, its segmentation quality and performance are better, and it has higher robustness and practicality.
Improved Anti-aliasing Algorithm Based on Deferred Shading
SHAO Peng, ZHOU Wei, LI Guang-quan, WU Zhi-jian
Computer Science. 2018, 45 (11A): 218-221. 
Abstract PDF(3308KB) ( 1882 )   
FXAA is a post-processing anti-aliasing algorithm. Because it performs edge detection on image pixels, it causes a lot of unnecessary anti-aliasing computation. In order to improve anti-aliasing performance, an improved anti-aliasing algorithm based on FXAA (IAAFXAA) was proposed. The depth and normal of the view are saved into textures. The algorithm extracts depth and normal information from the G-buffer and uses it to perform more accurate edge detection. A large number of experimental results and analyses show that, while ensuring a good anti-aliasing effect, the proposed algorithm can determine the anti-aliasing region more accurately to generate high-quality boundaries, and avoid excessive blurring of images, thus improving image quality.
Improved ORB Feature Extraction Algorithm Based on Quadtree Encoding
YU Xin-yi, ZHAN Yi-an, ZHU Feng, OU Lin-lin
Computer Science. 2018, 45 (11A): 222-225. 
Abstract PDF(5978KB) ( 1169 )   
An improved ORB feature extraction algorithm based on quadtree encoding was proposed in this paper, which solves the problem that the detected feature points are too dense to represent the image information completely. Firstly, the image pyramid is built to achieve scale invariance. Then, feature points are extracted on each level of the image pyramid, and quadtree encoding is introduced to homogenize the feature points. Finally, the direction and descriptor are calculated for each feature point. In this paper, the Xtion PRO was used as an experimental tool to extract feature points in an indoor environment, and the proposed algorithm was compared with others. Experimental results show the effectiveness and accuracy of the proposed method.
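The quadtree homogenization step can be sketched as repeatedly splitting the fullest cell into four quadrants and then keeping one feature point per non-empty cell. This is a simplified stand-in for the paper's encoding; in a real ORB pipeline the retained point would be the one with the strongest corner response, whereas here we simply keep the first:

```python
def quadtree_distribute(points, target, bounds):
    """Subdivide the fullest cell until at least `target` non-empty cells
    exist, then keep one point per cell, spreading features evenly.
    bounds = (x0, y0, x1, y1); points are (x, y) tuples."""
    cells = [(bounds, list(points))]
    while sum(1 for _, pts in cells if pts) < target:
        box, pts = max(cells, key=lambda c: len(c[1]))
        if len(pts) <= 1:
            break  # fullest cell cannot be split further
        cells.remove((box, pts))
        x0, y0, x1, y1 = box
        mx, my = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        quads = {(False, False): ((x0, y0, mx, my), []),
                 (False, True):  ((x0, my, mx, y1), []),
                 (True, False):  ((mx, y0, x1, my), []),
                 (True, True):   ((mx, my, x1, y1), [])}
        for p in pts:
            quads[(p[0] >= mx, p[1] >= my)][1].append(p)
        cells.extend(quads.values())
    return [pts[0] for _, pts in cells if pts]
```

Two tight clusters plus an isolated point thus yield one representative each, instead of all points crowding in the clusters.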
Agricultural Insect Pest Detection Method Based on Regional Convolutional Neural Network
WEI Yang, BI Xiu-li, XIAO Bin
Computer Science. 2018, 45 (11A): 226-229. 
Abstract PDF(4811KB) ( 1189 )   
In current integrated agricultural pest control, agricultural insect pests are detected primarily by professionals' manual sample collection and sorting; such manual classification is both expensive and time-consuming. Existing computer-aided automatic detection of agricultural pests places high requirements on the background environment of the pests and cannot locate them. To solve these problems, this paper proposed a new method for automatic detection of agricultural pests based on deep learning. It contains a region proposal network and the Fast R-CNN network. The region proposal network extracts features from one or more regions of arbitrary size in images with complicated backgrounds, and obtains the preliminary positions of candidate regions of agricultural pests. These preliminary candidate regions are taken as input to Fast R-CNN, which finally learns to classify the targets in the candidate regions and calculates exact coordinates by studying the intraspecific differences and interspecific similarities of agricultural pests. Meanwhile, this paper also established a labeled agricultural pest database of actual scenes, and the proposed method was tested on this database, with the average precision reaching 82.13%. The experimental results show that the proposed method can effectively enhance the accuracy of agricultural pest detection and obtain accurate positions, and is superior to previous automated agricultural pest detection methods.
Visual Tracking Algorithm Based on Kernelized Correlation Filter
HUANG Jian, GUO Zhi-bo, LIN Ke-jun
Computer Science. 2018, 45 (11A): 230-233. 
Abstract PDF(4191KB) ( 877 )   
Visual tracking is an important part of computer vision, and kernelized correlation filter tracking is a relatively novel method in the visual tracking field. It differs from traditional methods based on target features, with high accuracy and fast tracking speed. However, when the object moves rapidly or undergoes large scale changes, the method cannot track the target accurately. This paper proposed an improved algorithm based on the correlation filter which can effectively overcome the above problems. The learning factor of kernelized correlation filtering and its adaptive updating model are determined by using randomly updated multi-template matching. Experimental results show that the algorithm can adjust the learning factor quickly according to different scenarios, thus improving the success rate of tracking. Through the adaptive learning factor and multi-template matching, this algorithm is robustly adaptable to partial occlusion, illumination changes and target scale changes.
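The adaptive learning factor idea can be sketched as follows: the filter template is updated by linear interpolation with the new observation, and the learning factor shrinks when the filter response peak is weak (the threshold and scaling rule below are hypothetical, not the paper's):

```python
def update_model(model, observation, lr):
    """Blend a new observation into the template by linear interpolation;
    a larger learning factor adapts faster but risks drift."""
    return [(1 - lr) * m + lr * o for m, o in zip(model, observation)]

def adaptive_lr(base_lr, response_peak, threshold=0.5):
    """Shrink the learning factor when the filter response peak is weak
    (e.g. under occlusion), so a bad frame does not corrupt the template."""
    return base_lr if response_peak >= threshold else base_lr * response_peak
```

With a weak response of 0.25, a base factor of 0.02 drops to 0.005, so the occluded frame contributes little to the template.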
Gaussian Process Assisted CMA-ES Application in Medical Image Registration
LOU Hao-feng, ZHANG Duan
Computer Science. 2018, 45 (11A): 234-237. 
Abstract PDF(5239KB) ( 887 )   
A Gaussian process assisted covariance matrix adaptation evolution strategy (GPACMA-ES) optimization algorithm was proposed in this paper. The kernel function used in the GPACMA-ES algorithm is constructed from the covariance matrix. Taking advantage of the Gaussian process, which both learns online from historic experience and predicts the promising region containing the globally optimal solution, the number of fitness function evaluations in the algorithm is reduced markedly. Meanwhile, in order to improve efficiency, GPACMA-ES samples within a trust region, so it has rapid convergence and good global search capacity. Finally, a case study of medical image registration was examined to demonstrate the ability and applicability of GPACMA-ES. Experimental results show that GPACMA-ES is better suited for medical image registration than CMA-ES, achieving better registration precision while reducing the number of fitness function evaluations.
Algorithm of Multi-layer Forward Artificial Neural Network for Image Classification
GU Zhe-bin, CAO Fei-long
Computer Science. 2018, 45 (11A): 238-243. 
Abstract PDF(2383KB) ( 702 )   
The input of a traditional artificial neural network is in vector form, but an image is represented by a matrix. Therefore, in image processing, the image is input into the neural network in vector form, which destroys the structure information of the image and thus affects the processing result. In order to improve the network's ability in image processing, multilayer feedforward neural networks with matrix inputs are introduced based on the ideas and methods of deep learning. At the same time, the traditional back-propagation (BP) algorithm is used to train the network, and the training process and training algorithm are given. After many experiments, a network structure with good performance was determined, and numerical experiments were carried out on the USPS handwritten digit data set. The experimental results show that the proposed multilayer network achieves better classification results than the single-hidden-layer feedforward neural network with matrix input (2D-BP). In addition, this paper provided an effective and feasible method, the new 2D-BP network, to deal with the problem of color image classification.
Image Shape and Texture Description Method Based on Complex Network
HONG Rui, KANG Xiao-dong, LI Bo, WANG Ya-ge
Computer Science. 2018, 45 (11A): 244-246. 
Abstract PDF(2522KB) ( 967 )   
This paper proposed an image feature description method based on complex networks. Using the key points of the image as the nodes of a complex network, the method uses an MST measure to realize a dynamic evolution process, and uses the characteristics of the complex network at different phases to describe the shape of the image. With the distance and the gray-level difference between a pixel and its neighborhood, a series of degree matrices can be obtained by using a series of thresholds, and the texture feature can be represented by calculating the degree distribution of the network nodes under different thresholds. This is a statistical image description method. It has strong robustness and rotation invariance, and performs well in classification experiments.
Fuzzy C-means Color Image Segmentation Algorithm Combining Hill-climbing Algorithm
JIA Juan-juan, JIA Fu-jie
Computer Science. 2018, 45 (11A): 247-250. 
Abstract PDF(3664KB) ( 758 )   
The color image segmentation technology based on the traditional fuzzy C-means clustering algorithm has some problems, such as the selection of the initial number of categories, the determination of the initial centroids, the large amount of computation in the clustering process, and post-processing. Based on the study of these problems, and to overcome the shortage of random initialization in traditional FCM and obtain a more accurate initialization automatically, this paper proposed a clustering segmentation method for color images combining hill climbing (HFCM), which can generate the initial centroids and the number of clusters adaptively according to the three-dimensional histogram of the image. In addition, a new post-processing strategy combining a most-frequency filter and region merging was introduced to effectively eliminate small spatial regions. Experiments show that the proposed segmentation algorithm achieves high computational speed, and its segmentation results are close to human perception.
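The hill-climbing initialization can be sketched on a 1-D histogram (the paper climbs the three-dimensional color histogram, but the principle is the same): each bin is walked uphill to its local maximum, and the distinct maxima give both the cluster count and the initial centroids.

```python
def hill_climb_peaks(hist):
    """Follow every bin uphill to its local maximum; the set of distinct
    maxima gives the number of clusters and their initial centroid bins."""
    peaks = set()
    for i in range(len(hist)):
        j = i
        while True:
            best = j
            for n in (j - 1, j + 1):          # inspect both neighbours
                if 0 <= n < len(hist) and hist[n] > hist[best]:
                    best = n
            if best == j:                      # local maximum reached
                break
            j = best
        peaks.add(j)
    return sorted(peaks)
```

A bimodal histogram therefore yields exactly two peaks, i.e. two initial cluster centroids, with no user-supplied cluster count.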
3D Model Retrieval Method Based on Angle Structure Feature of Render Image
LIU Zhi, PAN Xiao-bin
Computer Science. 2018, 45 (11A): 251-255. 
Abstract PDF(2524KB) ( 654 )   
In order to make full use of the color, shape, texture and other features of the 3D model, a 3D model retrieval method was proposed based on the angle structure features of render images. Firstly, the render images of the 3D models are taken as a test dataset and the labeled natural images are taken as a training set. The render images are classified based on their skeleton-associated shape context, and the angle structure features are extracted to establish the feature library. Then, the angle structure features of the input natural image are extracted, and the distance measurement method is used to calculate the similarity between them and the features in the feature library. The experimental results show that full utilization of the color, shape and color space information of render images is an effective way to achieve 3D model retrieval.
Double Level Set Algorithm Based on NL-Means Denoising Method for Brain MR Images Segmentation
TANG Wen-jie, ZHU Jia-ming, XU Li
Computer Science. 2018, 45 (11A): 256-258. 
Abstract PDF(4089KB) ( 655 )   
This paper proposed a novel double level set algorithm based on the NL-means denoising method for brain MR image segmentation, since such images contain a large amount of noise and a complicated background and cannot be segmented completely by the traditional level set. First of all, the algorithm obtains the denoised image with the NL-means denoising method. Then, it segments the denoised image with an improved double level set model. In order to deal with the effect of intensity inhomogeneity on the medical image, the algorithm introduces a bias fitting term into the improved double level set model and optimizes the denoising result. The experimental results show that the algorithm can reduce the problems of intensity inhomogeneity and noise, completely segment brain MR images containing intensity inhomogeneity and noise, and obtain the expected segmentation effect.
Application of Local Autocorrelation Function in Content-based Image Retrieval
HU Zhi-jun, LIU Guang-hai, SU You
Computer Science. 2018, 45 (11A): 259-262. 
Abstract PDF(2691KB) ( 632 )   
In the field of image retrieval, in order to make image retrieval more convenient and efficient, this paper proposed a new image retrieval feature, namely the local autocorrelation feature, which provides a new tool for content-based image retrieval. It has the characteristics of both orientation features and texture features. An experiment on the local autocorrelation feature was carried out on the Corel10K database. The experimental results show that the average retrieval precision and recall of the local autocorrelation feature are lower than those of the color feature, but higher than those of the orientation feature. Besides color features, the local autocorrelation feature is an efficient image retrieval feature.
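A normalized 1-D autocorrelation, sketched below, conveys the flavour of such a feature: it measures how strongly a gray-level sequence resembles a shifted copy of itself, which is what makes it sensitive to texture and orientation. This is the textbook formula, not the paper's local image-domain definition:

```python
def autocorrelation(signal, lag):
    """Normalized autocorrelation of a 1-D gray-level sequence at `lag`;
    values near +1 indicate repeating structure at that period."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((v - mean) ** 2 for v in signal) / n
    if var == 0:
        return 1.0  # constant signal is trivially self-similar
    cov = sum((signal[i] - mean) * (signal[i + lag] - mean)
              for i in range(n - lag)) / n
    return cov / var
```

For an alternating sequence the value is strongly positive at lag 2 (one full period) and strongly negative at lag 1.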
Pathological Image Classification of Gastric Cancer Based on Deep Learning
ZHANG Ze-zhong, GAO Jing-yang, LV Gang, ZHAO Di
Computer Science. 2018, 45 (11A): 263-268. 
Abstract PDF(2813KB) ( 1978 )   
Since CNN can effectively extract deep features of images, this paper used the GoogLeNet and AlexNet models, which have excellent performance in image classification, to diagnose pathological images of gastric cancer. Firstly, according to the characteristics of medical pathological images, this paper optimized the GoogLeNet model to reduce the computational cost under the premise of ensuring diagnostic accuracy. On this basis, it proposed the idea of model fusion: by combining models with different structures and different depths, more effective pathological information of gastric cancer can be acquired. The experimental results show that the fusion model with multiple structures achieves better results than the original models in the diagnosis of pathological images of gastric cancer.
Single Tree Detection in Remote Sensing Images Based on Morphological Snake Model
DONG Tian-yang, ZHOU Qi-zheng
Computer Science. 2018, 45 (11A): 269-273. 
Abstract PDF(4850KB) ( 1194 )   
Single tree detection can assist forestry statistics in obtaining information such as the position, width and diameter of crowns, so it is of great significance for the development of precision forestry. In order to solve the problem of inaccurate canopy delineation in single-tree crown detection, this paper proposed a single tree detection algorithm for remote sensing images based on the morphological Snake model. Firstly, the forest features are analyzed. Then the local maximum method is used to extract treetops according to the forest feature map and the distance map. After this, the contour of the Snake model is initialized for all crowns according to the treetops, and after evolution of the contour, the final detection result of individual trees is obtained. In order to verify the effectiveness of the method, this paper compared the region growing method, the template matching method, the watershed method and the proposed morphological Snake model method. The experimental results show that the proposed method is more accurate and the detected crown shapes are more realistic. Compared with the other three methods, the detection score is 6% higher and the average area difference is reduced by 0.5 m².
Research on Intelligent Detection Method of Steel Rail Abrasion
ZHANG Xiu-feng, WANG Juan, DING Qiang
Computer Science. 2018, 45 (11A): 274-277. 
Abstract PDF(1668KB) ( 758 )   
In order to meet actual demand, a new detection method of steel rail abrasion based on line laser image processing was proposed after analyzing current methods and the characteristics of steel rail abrasion detection equipment at home and abroad. The bending degree of the line laser image on the worn rail is used to determine the width and depth of steel rail abrasion. The edge points and centre points are found by using a roof-type edge detection method, and then straight lines are fitted from these points. The optimal feature combination is selected by removing redundant features with high correlation. Finally, the experimental results show that the method can extract the features effectively and obtain the width and depth of the steel rail abrasion accurately. The algorithm features a small amount of computation, simplicity and high precision. It lays the foundation for the development of steel rail abrasion detection devices.
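The line-fitting step over the detected edge and centre points can be sketched with ordinary least squares (a standard closed-form fit, shown here as an illustration rather than the paper's exact procedure):

```python
def fit_line(points):
    """Ordinary least-squares fit of y = a*x + b through (x, y) points."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] ** 2 for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b
```

The angle between the two fitted lines on either side of the laser stripe's bend then gives the wear geometry.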
Handwritten Numeral Recognition Algorithm Based on Similar Principal Component Analysis
HAN Xu, LIU Qiang, XU Jin, CHEN Hai-yun
Computer Science. 2018, 45 (11A): 278-281. 
Abstract PDF(4453KB) ( 940 )   
Principal component analysis (PCA) is one of the most important data reduction algorithms, but some detail information is lost in its processing of the data. A novel improved similar principal component analysis (SPCA) algorithm based on PCA was proposed in this paper. This algorithm can keep some detail information in the process. Taking the MNIST handwritten numeral database as an example, near feature vectors are chosen among the original vectors to obtain groups of non-orthogonal feature vectors. Then, the vectors of the training library are compared with the vectors of the testing library, and the recognition rate is calculated. Recognition results indicate that the algorithm can achieve high identification of the testing samples with only a small number of training samples.
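Standard PCA, which SPCA modifies, can be sketched with power iteration on the covariance matrix; this recovers the ordinary (orthogonal) first principal component, while the paper's non-orthogonal "similar" variant is not reproduced here:

```python
def principal_component(rows, iters=100):
    """First principal component of mean-centred data, found by power
    iteration on the sample covariance matrix."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    centred = [[r[j] - means[j] for j in range(d)] for r in rows]
    # d x d covariance matrix
    cov = [[sum(centred[k][i] * centred[k][j] for k in range(n)) / n
            for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]   # renormalize each iteration
    return v
```

For points lying along the diagonal direction (1, 1), the method recovers the unit vector (1/sqrt(2), 1/sqrt(2)).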
Network & Communication
Study of Hidden Node Problem in Full-duplex Enabled CSMA Networks
LIU Sheng-bo, FU Li-qun
Computer Science. 2018, 45 (11A): 282-286. 
Abstract PDF(1634KB) ( 943 )   
References | RelatedCitation | Metrics
Full-duplex (FD) technology enables simultaneous transmission and reception in the same band, and is thus expected to double the spectrum efficiency of wireless networks. Full-duplexing can alleviate the hidden node problem in traditional CSMA networks. However, the hidden node problem in FD CSMA networks still lacks comprehensive and in-depth research. This paper briefly analyzed the hidden node problem and its solutions in half-duplex CSMA networks, and introduced four transmission modes in FD CSMA networks. Theoretical analysis and simulations illustrate the hidden node problem in FD CSMA networks. Finally, this paper further discussed the existing FD MAC (Medium Access Control) protocols, and proposed several important issues which need to be taken into consideration in the design of FD MAC protocols in order to reduce the hidden node problem.
Wireless Network Alarm Correlation Based on Time,Space and Rules
WAN Ying, HONG Mei, CHEN Yu-xing, WANG Shuai, FAN Zhe-ning
Computer Science. 2018, 45 (11A): 287-291. 
Abstract PDF(1958KB) ( 1276 )   
References | RelatedCitation | Metrics
A wireless network alarm is caused by multiple faults that occur over a period of time in a complex wireless network. How to find the root alarm of the root fault quickly and accurately is an important issue for network managers. This paper presented a method of wireless network alarm association based on time, space and expert rules. The method builds on rules, network topology and the time series of alarms, and combines space and time with the traditional rule-only alarm association method to locate the root alarm synthetically. To handle large-scale complex network structures, the method uses a hierarchical correlation approach: it first finds the subnet which generates the alarm, and then locates the alarm node within that subnet. At the same time, to adapt to the dynamic characteristics of wireless networks, the method maintains the network topology structure and the expert rule base dynamically. Experimental results demonstrate that the accuracy rate of the proposed method is 86.6%.
Measuring Method of Node Influence Based on Relative Entropy
CHEN Jun-hua, BIAN Zhai-an, LI Hui-jia, GUAN Run-dan
Computer Science. 2018, 45 (11A): 292-298. 
Abstract PDF(3273KB) ( 834 )   
References | RelatedCitation | Metrics
Recognizing central nodes is a key problem in complex network analysis. This paper proposed a relative entropy method using TOPSIS (Technique for Order Performance by Similarity to Ideal Solution) to identify the influential nodes in a network. The existing centrality measures can each be regarded as determining a ranking of node attributes in a complex network. Therefore, the proposed method can combine the advantages of various centrality measures to obtain a better ranking result. Finally, the validity of the proposed method was verified by numerical experiments.
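The combination of TOPSIS with a relative-entropy distance can be sketched as follows. This is an illustrative reconstruction, not the paper's exact formulation (the normalisation and entropy definition are assumptions): rows of `scores` are nodes, columns are centrality measures, and the distance to the ideal solutions is measured with relative entropy instead of the usual Euclidean metric.

```python
import math

def topsis_relative_entropy(scores):
    """Rank alternatives (rows) over criteria (columns); returns indices
    sorted from most to least influential."""
    m, n = len(scores), len(scores[0])
    # column-wise normalisation so each criterion sums to 1
    col_sums = [sum(row[j] for row in scores) for j in range(n)]
    norm = [[row[j] / col_sums[j] for j in range(n)] for row in scores]
    best = [max(col) for col in zip(*norm)]   # positive ideal solution
    worst = [min(col) for col in zip(*norm)]  # negative ideal solution

    def rel_entropy(p, q, eps=1e-12):
        # symmetric-style relative entropy between two score vectors
        return sum(pi * math.log((pi + eps) / (qi + eps)) +
                   (1 - pi) * math.log((1 - pi + eps) / (1 - qi + eps))
                   for pi, qi in zip(p, q))

    closeness = []
    for row in norm:
        d_best = rel_entropy(best, row)
        d_worst = rel_entropy(worst, row)
        closeness.append(d_worst / (d_best + d_worst))
    return sorted(range(m), key=lambda i: -closeness[i])
```

A node scoring highest on every centrality measure coincides with the positive ideal solution and therefore ranks first.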
Resource Allocation of Capacity Maximization Based on MIMO System
LIU Chun-ling, MA Qiu-cheng, ZHANG Ran
Computer Science. 2018, 45 (11A): 299-302. 
Abstract PDF(1903KB) ( 876 )   
References | RelatedCitation | Metrics
Multiple input multiple output (MIMO) technology can improve the transmission rate of a system and increase the system capacity. Aiming at the problems that users suddenly increase in emergency communication and that the system capacity of traditional resource allocation algorithms cannot meet user demand, a resource allocation algorithm for maximizing system capacity based on the user's minimum rate in MIMO-OFDM systems was proposed. It considers the rapid growth of low-rate business such as call service in emergency situations. Firstly, the subcarriers are allocated in descending order according to the users' rates. Then, based on the ratio of the number of allocated subcarriers, the remaining subcarriers are allocated from high to low according to the users' rates. For the subcarriers with remaining bandwidth, a subcarrier grouping method is used for redistribution, thus maximizing the number of users the system can serve. In order to compensate for channel fading and suppress interference between channels, a power allocation based on subcarrier channel matrix grouping was proposed. It can decrease the number of iterations and reduce the computational complexity. The simulation experiment evaluates the performance of the proposed algorithm in terms of throughput, the number of served users, computational complexity and so on. Compared with traditional resource allocation algorithms, the simulation results show that the proposed algorithm increases the number of served users and reduces computational complexity.
High Reliable Data Collection Algorithm in Energy Harvesting Wireless Sensor Networks
ZHAO Ran, PAN Gen-mei
Computer Science. 2018, 45 (11A): 303-307. 
Abstract PDF(1774KB) ( 575 )   
References | RelatedCitation | Metrics
Energy harvesting wireless sensor networks (EH-WSN) can harvest energy from the environment and work perpetually, and thus have plenty of promising applications. Most of the available routing schemes for EH-WSN focus on improving energy efficiency, but few of them consider the importance of reliability. According to the characteristics of EH-WSN, this paper deduced a node's per-link successful packet receiving rate and its number of retransmissions. It formulated the reliability-maximized routing problem for EH-WSN, then proposed a scheme for constructing a high-reliability data collection tree considering energy harvesting rate, link quality and inter-node distance. Experimental results show that the proposed algorithm can improve network reliability compared with other data collection strategies.
Study of Propagation Mechanism in Networks Based on Topological Path
ZHANG Lin-zi, JIA Chuan-liang
Computer Science. 2018, 45 (11A): 308-314. 
Abstract PDF(4547KB) ( 603 )   
References | RelatedCitation | Metrics
The existing information dissemination models of social networks mainly analyze the ways of dissemination, and relate the process of communication to the degrees of nodes. However, the medium is often ignored. In real-world networks, the propagation source, a physical propagation medium, usually propagates from one node to another via a specific path. This paper was no longer limited to analyzing the overall behavior of nodes, but considered each node separately, and used a continuous-time Markov chain to simulate the influence of propagation sources and paths on propagation. By introducing a mean-field approximation, the computational complexity of path-based propagation is reduced from an exponential level to a polynomial level. This paper also defined a propagation characteristic matrix containing both routing and traffic information, and derived a key propagation threshold for path-based propagation. When the effective transmission rate is below the threshold, the propagation will gradually die out, so this critical propagation threshold can be used to promote or suppress path-based propagation. Finally, in addition to stochastic scale-free networks, this paper introduced real-world network traffic as a research case to compare connection-based and path-based propagation behaviors. The conclusions show that the model's propagation in social networks is highly persistent and extremely stable.
High Energy Efficient Mobile Charging Strategy in Wireless Rechargeable Sensor Networks
WANG Zi-qiang, LIN Hui
Computer Science. 2018, 45 (11A): 315-319. 
Abstract PDF(3145KB) ( 638 )   
References | RelatedCitation | Metrics
Mobile charging through wireless power transfer technology plays an important role in powering wireless rechargeable sensor networks (WRSNs). Existing studies usually overlook the energy consumed by nodes while they wait to be charged. These studies also make simplified assumptions on the nodes' residual energy threshold, which can easily lead to the suspension of nodes. A novel mobile charging strategy was proposed in this paper to solve this problem. A residual energy prediction model was proposed in order to match the nodes' actual energy demand. The weighted path minimization problem and the weighted-path-based energy allocation maximization problem were formulated and solved with a genetic algorithm and linear programming respectively. The proposed mobile charging strategy was then evaluated and compared with existing studies through simulations. The results demonstrate that the proposed strategy can increase the charging energy efficiency and ensure that the network operates permanently.
Improvement of DV-Hop Algorithm Based on Multiple Communication Radii and Cosine Theorem
NI Ying-bo, CHEN Yuan-yan, YE Juan, WANG Ming
Computer Science. 2018, 45 (11A): 320-324. 
Abstract PDF(1878KB) ( 672 )   
References | RelatedCitation | Metrics
In order to improve the positioning accuracy of the DV-Hop positioning algorithm, an improved DV-Hop algorithm based on multiple communication radii and the cosine theorem was proposed. The improvement is reflected in two aspects. Firstly, the algorithm broadcasts locations with multiple communication radii, subdividing the hop counts, which makes the minimum hop count between unknown nodes and beacon nodes more accurate. Secondly, the algorithm adjusts and corrects the estimated hop distance with the cosine theorem after estimating the distance between an unknown node and the corresponding beacon node. The improved algorithm was compared with the classical algorithm under the same simulation environment, and the simulation results show that the improved algorithm effectively increases the positioning accuracy of the sensor nodes.
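For context, the stage the improvement targets can be sketched in a few lines. This is the classical DV-Hop hop-size estimate only (a generic reconstruction with hypothetical data structures), not the paper's multi-radius subdivision or cosine-theorem correction, which would refine the plain product below.

```python
import math

def avg_hop_size(beacons, hops):
    """beacons: {id: (x, y)}; hops[i][j]: min hop count between beacons i and j.
    Classic DV-Hop: each beacon's correction factor is its total straight-line
    distance to the other beacons divided by the total hop count."""
    sizes = {}
    for i, (xi, yi) in beacons.items():
        dist = hop = 0.0
        for j, (xj, yj) in beacons.items():
            if i == j:
                continue
            dist += math.hypot(xi - xj, yi - yj)
            hop += hops[i][j]
        sizes[i] = dist / hop
    return sizes

def estimate_distance(hop_size, hop_count):
    # stage-two estimate: hops to a beacon times that beacon's hop size
    return hop_size * hop_count
```

With beacons forming a 30-40-50 triangle and roughly 10 units per hop, beacon 0's correction factor works out to 10, so an unknown node two hops away is estimated at distance 20.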
Information Security
Symbolic Execution Technology Based Defect Detection System for Network Programs
DENG Zhao-kun, LU Yu-liang, ZHU Kai-long, HUANG Hui
Computer Science. 2018, 45 (11A): 325-329. 
References | RelatedCitation | Metrics
Network software consists of a server and a client running on different physical nodes. Unlike ordinary binary programs, when network software is running, the server and client communicate and transmit data in real time, and the interaction between the two sides affects each other's program execution, so analyzing only the server side often leads to faults or omission of software vulnerabilities. Based on a software virtual machine with a dynamic binary translation mechanism and selective symbolic execution technology, this paper studied the state synchronization technology of the two ends and the process of introducing symbolic data. Through a key-function hook method, the program execution process was monitored, a two-end state synchronization decision model was determined, and an automated network program vulnerability detection system was built. The experiment verified the effectiveness of the system in discovering vulnerabilities of actual network software. Finally, the system was tested by detecting CVE vulnerabilities in the software, and the experimental results also proved its effectiveness.
Algorithm Improvement of Pseudo-random Sequence Collision in Information Hiding
LIU Zhong-yi, SHEN Xiang-chen, NI Lu-lin, XU Chun-gen
Computer Science. 2018, 45 (11A): 330-334. 
Abstract PDF(2695KB) ( 1062 )   
References | RelatedCitation | Metrics
In the process of embedding secret information into a carrier image of limited size, a pseudo-random sequence is generally used to select the positions of the pixels in which the information is embedded. When the secret information is large enough, the numbers generated by the pseudo-random number generator will repeat, resulting in collisions. If all the duplicate positions are simply skipped, the amount of confidential information that can be embedded in the image is limited. Therefore, this paper proposed an improved algorithm. When the sequence generated by the pseudo-random number generator reoccurs, the repeated position is not skipped and the embedding operation is performed normally, while the operation at the repeated position is recorded and saved in some form. During extraction, the key and this action record are used to recover the ciphertext. The improved algorithm, combining cryptography and information hiding, greatly expands the amount of secret information hidden in a limited number of pictures and improves the security of information hiding.
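The collision-handling idea can be sketched as follows, assuming a simple LSB embedding on a flat pixel array (a hypothetical simplification; the paper does not specify its record format). A repeated position is written anyway, and a record of the overwritten bits lets extraction recover every bit of the message.

```python
import random

def embed(pixels, bits, key):
    """Write each bit into the LSB at a keyed pseudo-random position.
    On a collision the write still happens; the bit being overwritten
    is saved in `record` under the step that originally wrote it."""
    rng = random.Random(key)
    out = list(pixels)
    owner = {}    # position -> step that currently owns its LSB
    record = {}   # step -> its bit, for writes later overwritten
    for step, bit in enumerate(bits):
        pos = rng.randrange(len(out))
        if pos in owner:                          # collision: do NOT skip
            record[owner[pos]] = out[pos] & 1     # save the overwritten bit
        out[pos] = (out[pos] & ~1) | bit
        owner[pos] = step
    return out, record

def extract(n_bits, pixels, record, key):
    """Replay the same keyed position sequence; read surviving bits from
    the image and overwritten ones from the record."""
    rng = random.Random(key)
    positions = [rng.randrange(len(pixels)) for _ in range(n_bits)]
    owner = {}
    for step, pos in enumerate(positions):
        owner[pos] = step                         # last writer wins in the image
    bits = []
    for step, pos in enumerate(positions):
        if owner[pos] == step:
            bits.append(pixels[pos] & 1)          # still readable from the image
        else:
            bits.append(record[step])             # recovered from the record
    return bits
```

The round trip holds even when the message is much longer than the carrier, which is exactly the regime where skipping duplicates would fail.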
Research on Cyberspace Situation Awareness Security Assessment Based on Improved BP Neural Network
CHEN Wei-peng, AO Zhi-gang, GUO Jie, YU Qin, TONG Jun
Computer Science. 2018, 45 (11A): 335-337. 
Abstract PDF(3504KB) ( 1243 )   
References | RelatedCitation | Metrics
This paper used the BP neural network algorithm to establish the relationship between the network situation awareness level and the perceived parameters, so that situation awareness can be assessed quantitatively. Research on neural networks in this field is the most mature, but the traditional BP neural network algorithm is slow in error feedback and likely to converge to a local extremum, so a variable-step learning strategy and a simulated annealing method are adopted. A virtual network HoneyNet simulation environment was built, and Matlab was used for algorithm simulation. The obtained results are close to the actual results.
New Method for Webpage Watermarking Based on Empty Styles
CHEN Wei-xu, CHEN Jian-ping, WEN Wan-zhi, CAI Liang
Computer Science. 2018, 45 (11A): 338-341. 
Abstract PDF(1784KB) ( 1004 )   
References | RelatedCitation | Metrics
Webpage watermarking has important applications in webpage copyright protection and anti-tampering. Existing webpage watermarking methods mainly embed watermark information by exploiting the insensitivity of HTML to some format changes. Such methods have the problem that the watermark information is separated from the webpage contents, which makes the watermark less concealed and vulnerable to attacks. This paper proposed a new webpage watermarking method based on empty styles. Making use of HTML's feature that a style without content definition performs no operation, the method transforms watermark information into empty styles and embeds them into the HTML code. The embedded watermark information is closely linked with the HTML code and is strongly concealed. It is not easy to detect and attack, and has a large watermark capacity as well. Compared with the existing methods, the proposed method is superior.
Two-dimensional Code Encryption Algorithm Based on Singular Value Decomposition
GE Ya-jing, ZHAO Li-feng
Computer Science. 2018, 45 (11A): 342-343. 
Abstract PDF(3853KB) ( 1119 )   
References | RelatedCitation | Metrics
With the rapid development of the mobile Internet and smart phones, the security of information transmission in networks is more and more important. As a new technology for information storage, transmission and recognition, the two-dimensional code has been widely applied in many fields. However, because the two-dimensional code encoding algorithm is open and its information is not encrypted, there are information security risks in some areas. In this paper, through the study of two-dimensional code encoding rules and encryption algorithms, a new algorithm based on singular value decomposition (SVD) was proposed to encrypt the encoded data. A digital image file in a computer is stored in the form of a matrix, and each element of the matrix is the pixel value at the corresponding image coordinates, so digital image processing is actually a series of non-negative matrix operations. Decryption is the inverse process of encryption: the singular value decomposition of the picture matrix is obtained, and the plaintext information is recovered from the encrypted data. The experiment shows that this method is efficient in encryption and decryption, and has good security.
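As an illustration of why SVD lends itself to such schemes, the toy construction below (an assumption for illustration, not the paper's actual algorithm) keeps the U and V factors as secret key material and permutes the singular values with a keyed generator; reconstruction is lossless up to floating-point round-off.

```python
import numpy as np

def svd_encrypt(img, key):
    """Factor the image matrix with SVD; transmit a key-permuted
    singular-value vector, keeping (U, Vt, perm) as the secret."""
    U, S, Vt = np.linalg.svd(img.astype(float), full_matrices=False)
    rng = np.random.default_rng(key)
    perm = rng.permutation(len(S))
    return S[perm], (U, Vt, perm)

def svd_decrypt(cipher_S, secret):
    U, Vt, perm = secret
    S = np.empty_like(cipher_S)
    S[perm] = cipher_S               # undo the keyed permutation
    return U @ np.diag(S) @ Vt       # exact up to float round-off
```

Without the factor matrices and the permutation, the transmitted singular values alone do not determine the image, which is the property the encryption relies on.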
Research on Intrusion Detection System Method Based on Intuitionistic Fuzzy Sets
XING Rui-kang, LI Cheng-hai
Computer Science. 2018, 45 (11A): 344-348. 
Abstract PDF(1610KB) ( 806 )   
References | RelatedCitation | Metrics
Intrusion detection refers to the technology that collects and analyzes various kinds of data from several key points in a computer network or computer system, so as to find and respond to possible intrusion attacks. However, due to the variety of attacks in cyberspace and their many uncertainties, how to describe and deal with this objectively existing uncertainty has become an important part of constructing an intrusion detection system model. Intuitionistic fuzzy set theory studies the problem of uncertainty in a system. Therefore, studying intrusion detection methods based on intuitionistic fuzzy set theory plays an important role in dealing with the large number of uncertainties in intrusion detection systems. This paper summarized the typical intrusion detection methods based on intuitionistic fuzzy set theory in the existing literature, made an analysis and comparison, and pointed out the shortcomings of the current methods and the future development direction, providing some reference value for further study.
Spark-based Parallel Outlier Detection Algorithm of K-nearest Neighbor
FENG Gui-lan, ZHOU Wen-gang
Computer Science. 2018, 45 (11A): 349-352. 
Abstract PDF(2861KB) ( 1261 )   
References | RelatedCitation | Metrics
With the advent of the big data era, outlier detection has attracted extensive attention. The computational resources of traditional K-nearest neighbor outlier detection are insufficient when dealing with massive high-dimensional data on a single machine, and MapReduce in Hadoop cannot effectively deal with frequent iterative calculations. To address these problems, this paper put forward a Spark-based parallel outlier detection algorithm of K-nearest neighbors, named SPKNN. Firstly, in the map stage, the algorithm finds the local K nearest neighbors for each partition of the data set. Then, in the reduce stage, it determines the global K nearest neighbors from the local K nearest neighbors of each partition. Finally, it calculates the degree of outlierness using the global K nearest neighbors and selects the outliers. Compared with traditional K-nearest neighbor outlier detection, the performance of SPKNN scales approximately linearly with computing resources while preserving the detection accuracy. Compared with other outlier detection methods, it does not need additional extension data, supports iterative calculation and can reduce I/O costs by using the memory cache. Experimental results show that SPKNN has high efficiency and scalability for massive data sets.
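The two-stage logic can be illustrated without Spark itself. In this plain-Python sketch (hypothetical function names; lists stand in for RDD partitions), the map stage finds candidate neighbours per partition and the reduce stage merges them into the global k nearest.

```python
import heapq, math

def local_knn(partition, point, k):
    """Map stage: distances to the k nearest neighbours of `point`
    within a single partition."""
    return heapq.nsmallest(k, (math.dist(point, q) for q in partition if q != point))

def global_knn(local_lists, k):
    """Reduce stage: merge per-partition candidates into the global k-NN."""
    return heapq.nsmallest(k, (d for lst in local_lists for d in lst))

def outlier_score(partitions, point, k):
    """k-NN outlier degree: mean distance to the global k nearest neighbours."""
    locals_ = [local_knn(p, point, k) for p in partitions]
    return sum(global_knn(locals_, k)) / k
```

The correctness argument is that a global k-nearest neighbour is necessarily among the k nearest within its own partition, so no candidate is lost between the stages.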
Network Log Analysis Technology Based on Big Data
YING Yi, REN Kai, LIU Ya-jun
Computer Science. 2018, 45 (11A): 353-355. 
Abstract PDF(1603KB) ( 1862 )   
References | RelatedCitation | Metrics
Traditional log analysis technology encounters a computational bottleneck when processing massive data. To solve this problem, a log analysis solution based on big data technology was proposed in this paper. In this solution, the storage, analysis and mining tasks of log files are decomposed across multiple computers. The open source framework Hadoop is used to establish a parallel network log analysis engine, and IP statistics and an outlier detection algorithm are realized with the MapReduce model. Empirical studies show that the use of big data technology in data-intensive computing can significantly improve the execution efficiency of algorithms and the scalability of the system.
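The IP-statistics job follows the canonical word-count pattern. A minimal stand-alone sketch of the map and reduce phases (without Hadoop; each inner list stands in for one input split, and the log format is an assumption with the IP as the first field):

```python
from collections import Counter
from itertools import chain

def map_phase(log_lines):
    """Map: emit an (ip, 1) pair for each access-log line."""
    return [(line.split()[0], 1) for line in log_lines if line.strip()]

def reduce_phase(pairs):
    """Reduce: sum the counts per IP (Hadoop's shuffle groups by key for us)."""
    counts = Counter()
    for ip, n in pairs:
        counts[ip] += n
    return dict(counts)

def ip_statistics(splits):
    # each split would be handled by a separate mapper on the cluster
    return reduce_phase(chain.from_iterable(map_phase(s) for s in splits))
```

Because the reduce is a plain sum, the job parallelizes cleanly: any partitioning of the log lines yields the same final counts.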
XSS Attack Detection Technology Based on SVM Classifier
ZHAO Cheng, CHEN Jun-xin, YAO Ming-hai
Computer Science. 2018, 45 (11A): 356-360. 
Abstract PDF(2673KB) ( 1314 )   
References | RelatedCitation | Metrics
A large number of security vulnerabilities appear with the development of Web applications, and XSS is one of the most harmful Web vulnerabilities. To deal with unknown XSS attacks, an XSS detection scheme based on a support vector machine (SVM) classifier was proposed. Based on extensive analysis of XSS attack samples, the five most representative features are extracted to support the training of the machine learning algorithm. The feasibility of the SVM classifier was verified in terms of accuracy, recall and false alarm rate. In addition, the characteristics of deformed XSS samples were added to optimize the performance of the classifier. The improved SVM classifier has better performance compared with traditional tools and an ordinary SVM.
Double Chaotic Image Encryption Algorithm Based on Run-length Sequence
WANG Le-le, LI Guo-dong
Computer Science. 2018, 45 (11A): 361-366. 
Abstract PDF(6124KB) ( 907 )   
References | RelatedCitation | Metrics
Image encryption has an important position in daily life and is an interesting topic in Internet transmission. An improved H-L chaotic image encryption algorithm was proposed. Based on the run-length sequence, the improved H-L algorithm is applied to image encryption. Computer simulation results show that the improved algorithm achieves a better encryption effect, enhances the anti-attack performance, and has certain practical value.
Security Provenance Model for RFID Big Data Based on Blockchain
LIU Yao-zong, LIU Yun-heng
Computer Science. 2018, 45 (11A): 367-368. 
Abstract PDF(2629KB) ( 1939 )   
References | RelatedCitation | Metrics
In recent years, blockchain technology has developed rapidly and received extensive attention. It is generally regarded as an important tool to solve the problem of data security. RFID big data is an important source of data in the Internet of Things, and its security requirements are also very high. Data tracing is one of the important applications of RFID technology: it is widely used in the traceability of agricultural products, raw materials, industrial products, consumer goods and parts, as well as in security and other aspects. Blockchain plays an important role in improving the safety of big data provenance. This paper presented an RFID data provenance security model based on blockchain technology. Blockchain technology is applied in the tracing process of RFID data to form a traceability chain with information transparency, sharing, multi-party participation and fidelity. Blockchain ledgers are established for the production, processing, marketing and other links of RFID-traced goods, and a whole-chain traceability path for RFID big data is established up to the end users, so as to realize the security management of RFID big data provenance.
New Ownership Transfer Protocol of RFID Tag
GAN Yong, WANG Kai, HE Lei
Computer Science. 2018, 45 (11A): 369-372. 
Abstract PDF(2797KB) ( 659 )   
References | RelatedCitation | Metrics
There exist risks of security and privacy disclosure in the process of ownership transfer of RFID tags. Thus a new tag ownership transfer protocol with a transfer switch based on a Hash function was proposed. The original owner and the new owner hold different communication keys: the former key is used for authentication between the original owner and the tag, while the latter key is used for ownership transfer between the tag and the new owner. The transfer switch (OTS) makes it possible to resist desynchronization attacks through OTS configuration. The security analysis shows that the protocol can meet the safety requirements of tag ownership transfer and resist common active and passive attacks, thus achieving complete transfer of tag ownership. Finally, the performance of the protocol was analyzed, and the results show that the efficiency of the proposed protocol is significantly improved compared with existing ownership transfer protocols for RFID tags.
Combined Halftone Information Anti-counterfeit Algorithm Based on Dot Shape
GE Nai-xin, QU Yi-fei, WANG Qi, HAN Xue-ying
Computer Science. 2018, 45 (11A): 373-376. 
Abstract PDF(7678KB) ( 1027 )   
References | RelatedCitation | Metrics
In order to study halftone anti-counterfeit algorithms based on different dot shapes and obtain the printing dot combination with the best anti-counterfeit effect, the experiment selected circular, square and rhombus dot shapes to form six combination schemes. The anti-counterfeit information was then binarized into a modulation signal serving as the basis of dot shape selection in the halftoning process, thus acquiring a halftone image with hidden information. A template matching method was used to generate dot matching templates to extract the anti-counterfeit information, and subjective and objective evaluations of the information were carried out. The results showed that the halftone image generated by the round-diamond dot combination was closer to the original image, and its anti-counterfeit information had better concealment. Besides, the extracted information was highly similar to the original information. The selection of the dot shape combination ensures the accurate reproduction of the image and provides better anti-counterfeit performance.
Construction and Effectiveness Evaluation of New Cyber Defense System
JIN Xiao, GE Hui, MA Rui
Computer Science. 2018, 45 (11A): 377-381. 
Abstract PDF(1631KB) ( 1154 )   
References | RelatedCitation | Metrics
In current cyberspace, the defender is in a passive position in the attack-and-defense game. This situation can be changed by constructing a dynamically-enabled cyber defense system. Through research on the dynamically-enabled cyber defense system, key dynamic technologies are given in four aspects (network, software, platform, data) for enhancing the security of traditional cyberspace, and a dynamically-enabled cyber defense method is constructed. Combined with an offensive-and-defensive effectiveness evaluation of dynamically-enabled cyber security, the contribution of the dynamically-enabled cyberspace defense system to the safety of cyberspace was demonstrated.
Forward Security Anonymous Authentication Protocol Based on Group Signature for Vehicular Ad Hoc Network
YUE Xiao-han, HUI Ming-heng, WANG Xi-bo
Computer Science. 2018, 45 (11A): 382-388. 
Abstract PDF(1795KB) ( 882 )   
References | RelatedCitation | Metrics
Vehicular Ad Hoc networks are widely used to improve traffic safety and efficiency. However, there are still problems of communication trust and user privacy protection. Many existing authentication protocols require verifiers to download up-to-date revocation lists from a remote center, which greatly increases the remote center's workload. In order to solve these problems, this paper proposed a new authentication protocol based on a group signature scheme, combining the decentralized group model and the complete sub-tree method. In this protocol, the verifier can verify a signature by obtaining only the latest time, without having to obtain the latest revocation list, and the protocol provides forward security, effective revocation, anonymity, unforgeability, non-frameability and traceability.
Composite Image Encryption Algorithm Based on Involutory Matrix
ZANG Rui, YU Yang
Computer Science. 2018, 45 (11A): 389-392. 
Abstract PDF(4518KB) ( 747 )   
References | RelatedCitation | Metrics
As the Internet plays a more and more important role in economic society and national security, the security of network data transmission has attracted the attention of academic circles in recent years. One of the hot research issues is the encryption and transmission of digital image information. Traditional ways of image encryption are relatively simple, have some flaws and are easy to crack. To address this problem, this paper studied a composite encryption algorithm based on involutory matrices, matrix decomposition and information hiding. The original image is encrypted with the matching matrix, and the encrypted image is decomposed into several low-pixel images which are then hidden in public information. The example shows that this method has high security and a good effect.
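A minimal sketch of the involutory-matrix step (the key matrix below is an illustrative example, not taken from the paper): with A satisfying A·A ≡ I (mod 256), one and the same routine both encrypts and decrypts pixel blocks, since applying it twice gives A²P = P.

```python
def matmul_mod(A, B, m=256):
    """Matrix product modulo m (byte-valued pixels: m = 256)."""
    n, p, q = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(p)) % m for j in range(q)]
            for i in range(n)]

# A hypothetical involutory key: for [[a, b], [c, -a]] we need
# a*a + b*c ≡ 1 (mod 256); here 3*3 + 8*31 = 257 ≡ 1.
A = [[3, 8],
     [31, 253]]   # 253 ≡ -3 (mod 256)

def crypt(A, pixels):
    """Encrypt (or decrypt) a 2 x n block of pixel values: C = A·P mod 256."""
    return matmul_mod(A, pixels)
```

Because encryption and decryption share the same matrix, no inverse needs to be computed or stored, which is the practical appeal of involutory keys.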
Double Chaotic Image Encryption Algorithm Based on Fractional Transform
WANG Le-le, LI Guo-dong
Computer Science. 2018, 45 (11A): 393-397. 
Abstract PDF(5633KB) ( 1104 )   
References | RelatedCitation | Metrics
Image encryption plays an important role in daily life. Aiming at the low security of traditional natural chaotic systems, an improved image encryption algorithm based on H-L double chaos and the fractional Fourier transform was proposed. Based on the order of the optimal solution sequence found by exhaustive search, the chaotic map is combined with the fractional Fourier transform to realize scrambling in both the spatial and frequency domains, so that the plaintext information is hidden. The simulation results show that the improved algorithm achieves a good encryption effect, a large key space, low computational complexity, strong sensitivity and effective anti-statistical-attack performance. It has certain value for graphic information security.
Big Data & Data Mining
Personalized Recommendation Algorithm Based on PageRank and Spectral Method
CHANG Jia-wei, DAI Mu-hong
Computer Science. 2018, 45 (11A): 398-401. 
Abstract PDF(1797KB) ( 1319 )   
References | RelatedCitation | Metrics
The traditional PageRank recommendation algorithm has limited scalability. To solve this problem, a personalized recommendation algorithm based on PageRank and a spectral method was proposed. The number of iterations is controlled by bounding the number of nodes participating in the PageRank iteration, and a threshold is used to trim the participating nodes to obtain the candidate node set. Spectral clustering is then utilized to sort the candidate nodes: the adjacency matrix of the candidate nodes is normalized, and the eigenvalues and eigenvectors of the matrix are used to evaluate the distance between nodes and target nodes in the graph. At last, a final list of recommendations is produced. Experimental results show that the proposed recommendation algorithm improves processing efficiency on the premise of ensuring recommendation quality.
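The PageRank stage can be sketched with plain power iteration (a generic implementation; the paper's node-count trimming and spectral sorting are not reproduced here):

```python
def pagerank(adj, d=0.85, iters=50):
    """Power-iteration PageRank on an adjacency list {node: [out-neighbours]}."""
    nodes = list(adj)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - d) / n for u in nodes}
        for u in nodes:
            out = adj[u]
            if not out:                        # dangling node: spread evenly
                for v in nodes:
                    new[v] += d * rank[u] / n
            else:
                for v in out:
                    new[v] += d * rank[u] / len(out)
        rank = new
    return rank
```

Each iteration redistributes every node's rank along its out-links; the candidate set described above would be the highest-ranked nodes after a bounded number of such iterations.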
Service Recommendation Method Based on Social Network Trust Relationships
WANG Jia-lei, GUO Yao, LIU Zhi-hong
Computer Science. 2018, 45 (11A): 402-408. 
Abstract PDF(2442KB) ( 1383 )   
References | RelatedCitation | Metrics
With the advent of service computing, many different electronic services have emerged. Users often have to find what they need from a large number of services, which is a formidable task, so an efficient recommendation algorithm is necessary. Traditional collaborative recommendation systems have problems such as cold start, data sparsity and poor real-time performance, which lead to poor recommendation results when the rating data are scarce. In order to get a better recommendation result, this paper introduced trust transfer in social networks and used it to establish a trust transfer model to obtain the trust among users. On the other hand, the similarity between users in the system is calculated based on the rating data. On the basis of users' trust and preference similarity, and according to the characteristics of social networks, users' trust and preference are dynamically combined to obtain comprehensive recommendation weights. The comprehensive recommendation weights can replace the traditional similarity measures in user-based collaborative filtering recommendation. This method was verified on the Epinions data set and can further improve the recommendation effect.
Association Rule Mining Algorithm Based on Hadoop
DING Yong, ZHU Chang-shui, WU Yu-yan
Computer Science. 2018, 45 (11A): 409-411. 
Abstract PDF(2936KB) ( 795 )   
References | RelatedCitation | Metrics
The traditional parallel association rule algorithm defines a MapReduce task for each iteration to generate and count the candidate set, but starting MapReduce tasks repeatedly incurs a large performance overhead. This paper defined a parallel association rule mining algorithm (PST-Apriori). The algorithm adopts a partition strategy and defines a prefix shared tree (PST) in each distributed computing node, compressing the candidate itemsets generated by each transaction T onto the PST. Breadth-first traversal is then applied, the 〈key, value〉 pair of each node is used as input to the map function, and the MapReduce framework automatically groups records by key. Finally, the reduce function aggregates the results of multiple tasks, and the frequent itemsets satisfying the minimum support threshold are obtained. The algorithm uses only two MapReduce tasks, and the PST is sorted by key to facilitate the shuffle operation on the Mapper side, which improves efficiency.
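The distributed PST machinery is specific to the paper, but the underlying level-wise candidate generation and support counting that PST-Apriori parallelizes can be sketched on a single machine. The transactions and support threshold below are hypothetical.

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support):
    """Plain Apriori-style level-wise search (single-machine, illustrative)."""
    items = sorted({i for t in transactions for i in t})
    k, current = 1, [frozenset([i]) for i in items]
    result = {}
    while current:
        # count support of each surviving candidate
        counts = Counter()
        for t in transactions:
            tset = set(t)
            for cand in current:
                if cand <= tset:
                    counts[cand] += 1
        frequent = {c: n for c, n in counts.items() if n >= min_support}
        result.update(frequent)
        k += 1
        # join step: size-k candidates from frequent (k-1)-itemsets
        freq = list(frequent)
        current = list({a | b for a in freq for b in freq if len(a | b) == k})
    return result

txns = [["a", "b", "c"], ["a", "b"], ["a", "c"], ["b", "c"], ["a", "b", "c"]]
freq = frequent_itemsets(txns, min_support=3)
```

In the paper's setting, the per-transaction candidate generation would run in map tasks over partitions, with the key-grouped counts aggregated in reduce.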
Collaborative Filtering Algorithm Based on User’s Preference for Items and Attributes
WANG Yun-chao, LIU Zhen
Computer Science. 2018, 45 (11A): 412-416. 
Abstract PDF(2154KB) ( 690 )   
References | RelatedCitation | Metrics
Collaborative filtering is one of the most successful and useful technologies in recommendation systems. Cosine similarity and the Pearson correlation coefficient are the two most widely used traditional measures for calculating similarity in collaborative filtering. To reduce error, an improved collaborative filtering recommendation algorithm was proposed to address the disadvantages of these two similarity measures. The traditional measures were improved by introducing two parameters: one accounts for users' rating habits, and the other measures the difference between the items chosen by users. Since a user's preference is related to item attributes, an additional parameter was designed to measure it. The new algorithm combines the improved traditional measures with the user's preference for attributes. Experimental results on the MovieLens dataset show that, using the two parameters, the proposed algorithm achieves lower mean absolute error (MAE) and root mean square error (RMSE) and performs better than the traditional algorithms.
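For reference, the baseline Pearson correlation that the paper improves upon can be computed over items co-rated by two users as below; the rating dictionaries are hypothetical toy data, and the paper's two correction parameters are not included.

```python
from math import sqrt

def pearson(ratings_u, ratings_v):
    """Pearson correlation over items co-rated by two users (baseline, illustrative)."""
    common = set(ratings_u) & set(ratings_v)
    if len(common) < 2:
        return 0.0
    mu = sum(ratings_u[i] for i in common) / len(common)
    mv = sum(ratings_v[i] for i in common) / len(common)
    num = sum((ratings_u[i] - mu) * (ratings_v[i] - mv) for i in common)
    den = sqrt(sum((ratings_u[i] - mu) ** 2 for i in common)) * \
          sqrt(sum((ratings_v[i] - mv) ** 2 for i in common))
    return num / den if den else 0.0

# two users whose co-ratings differ by a constant offset -> correlation 1
u = {"m1": 5, "m2": 3, "m3": 4}
v = {"m1": 4, "m2": 2, "m3": 3}
sim = pearson(u, v)
```

Note that the constant-offset case illustrates one weakness the paper targets: Pearson ignores absolute rating habits, which is what the first correction parameter is meant to address.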
Word Clustering Based Text Semantic Tagging Extraction Method
LI Xiong, DING Zhi-ming, SU Xing, GUO Li-min
Computer Science. 2018, 45 (11A): 417-421. 
Abstract PDF(3226KB) ( 1255 )   
References | RelatedCitation | Metrics
This research mainly addresses the problem of extracting key semantic information from large volumes of text data. Text is the information carrier of natural language, and when text is analyzed and processed, the features extracted differ according to the goals and methods. Previous semantic tagging extraction methods usually focus on a single text, ignoring the semantic relationships between different texts. To this end, this paper proposed a text semantic tagging extraction method based on word clustering. Targeting semantic tagging extraction, the method employs Hinton's distributed representation hypothesis to represent text information, and uses a word clustering algorithm to maximize the semantic similarity between tags and the original text data. Experiments show that, because the method involves all vocabulary items in the clustering computation, its semantic richness and expressive power outperform many existing methods.
Personal Learning Recommendation Based on Online Learning Behavior Analysis
CHEN Jin-yin, FANG Hang, LIN Xiang, ZHENG Hai-bin, YANG Dong-yong, ZHOU Xiao
Computer Science. 2018, 45 (11A): 422-426. 
Abstract PDF(3379KB) ( 2019 )   
References | RelatedCitation | Metrics
With the wide use of online courses and the popularization of online learning, massive online learning behavior data have been collected. How to exploit these accumulated data through novel data mining technology to improve teaching decisions and learning efficiency has become a research focus. In this paper, online learning behavior features are extracted, the relationship between an online learner's personality and learning efficiency is modeled and analyzed, and a personal learning recommendation scheme is designed. First, online learner behavior features were extracted, and a BP neural network based academic performance prediction algorithm was put forward, in which offline scores are predicted from the corresponding online learning behavior features. Second, to further analyze the relationship between online learning behavior and offline scores, a novel actual-entropy-based model for evaluating the orderliness of online learning behavior was proposed, so that each learner's offline academic performance can be predicted from the orderliness of online learning. Third, learners' personalities were estimated with the Felder-Silverman method, and the K-means algorithm was applied to the personality vectors to cluster learners with similar personalities; within each cluster, the top-scoring learner's learning behavior is recommended to the remaining learners. Finally, taking the data of a practical online course platform as the experimental subject, extensive experiments were carried out, including online learning behavior feature extraction, offline academic performance evaluation and orderliness analysis, and personal learning behavior recommendation, which proved the efficiency and application value of the proposed method.
Semi-supervised Feature Selection Algorithm Based on Information Entropy
WANG Feng, LIU Ji-chao, WEI Wei
Computer Science. 2018, 45 (11A): 427-430. 
Abstract PDF(1531KB) ( 982 )   
References | RelatedCitation | Metrics
In applications, since it is usually expensive to determine data labels, researchers can often mark only a very small amount of data. Hence, on the basis of rough set theory and entropy, this paper proposed an entropy-based rough feature selection algorithm for the "small labeled samples" problem. In the context of semi-supervised learning, entropy and feature significance were defined. On this basis, a new semi-supervised feature selection algorithm was proposed to deal with datasets that contain only a few labels. Experimental results show that the new algorithm is feasible and efficient.
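The paper's rough-set significance measure is not specified in the abstract, but the entropy computation such measures build on can be sketched as the information gain of partitioning the labeled samples by a feature. The labels and feature values below are hypothetical.

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(feature_values, labels):
    """Entropy reduction obtained by partitioning labels on one feature."""
    n = len(labels)
    cond = 0.0
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

labels = ["yes", "yes", "no", "no"]
gain_good = info_gain(["a", "a", "b", "b"], labels)   # feature separates classes
gain_bad = info_gain(["a", "b", "a", "b"], labels)    # feature uninformative
```

A greedy selector would rank features by such a gain-style significance score on the small labeled subset.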
Prediction of Geosensor Data Based on knnVAR Model
LIAO Ren-jian, ZHOU Li-hua, XIAO Qing, DU Guo-wang
Computer Science. 2018, 45 (11A): 431-435. 
Abstract PDF(3039KB) ( 549 )   
References | RelatedCitation | Metrics
The prediction of geosensor data is widely used in economics, engineering, the natural sciences and the social sciences. The spatial correlation between different sites and the temporal correlation within the same site pose great challenges to traditional forecasting models. In this paper, a knnVAR model was proposed to predict geosensor data; it computes the relevance of spatio-temporal information effectively while considering the uniqueness of each sensing sequence. The model quantifies the temporal and spatial information of the data by calculating a space-time distance, and then searches for the k nearest neighbors based on this distance. Finally, the nearest-neighbor sequences are applied to a vector autoregressive model. By searching for space-time nearest neighbors, the knnVAR model computes the relevance of the time and space dimensions effectively, and uses the highly correlated space-time nearest-neighbor sequences to predict the sensing sequence. Experimental results show that the knnVAR model can improve the prediction accuracy of geosensor data effectively.
Segmentation of Baidu Takeaway Customer Based on RFA Model and Cluster Analysis
BAO Zhi-qiang, ZHAO Yuan-yuan, ZHAO Yan, HU Xiao-tian, GAO Fan
Computer Science. 2018, 45 (11A): 436-438. 
Abstract PDF(1492KB) ( 1416 )   
References | RelatedCitation | Metrics
In view of the characteristics of the Baidu Takeaway industry, such as a large number of customers, massive consumption data and high dimensionality, this paper proposed an improved RFM model from the perspective of customer consumption behavior, and used the AHP algorithm to determine the weight of each variable in the model. The K-means clustering algorithm is used for customer segmentation, and each customer's value to the business is computed. The results of the data analysis show that customer segmentation based on the improved RFM model enables merchants to adopt targeted strategies for customers with different values.
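A weighted RFM-style score of the kind the K-means step would cluster can be sketched as below. The 1-5 binning cut points and the AHP-derived weights are entirely hypothetical stand-ins, not the paper's values.

```python
def rfm_score(recency_days, frequency, amount, weights=(0.2, 0.3, 0.5)):
    """Weighted RFM-style customer score; bins and weights are hypothetical."""
    def bin5(value, cuts):
        # map a raw value to a 1-5 score via four cut points
        return sum(value > c for c in cuts) + 1
    r = 6 - bin5(recency_days, (7, 14, 30, 60))   # more recent -> higher score
    f = bin5(frequency, (2, 5, 10, 20))
    a = bin5(amount, (50, 100, 200, 500))
    wr, wf, wa = weights
    return wr * r + wf * f + wa * a

high = rfm_score(recency_days=3, frequency=25, amount=800)   # active big spender
low = rfm_score(recency_days=90, frequency=1, amount=20)     # dormant customer
```

Clustering vectors of (r, f, a) scores with K-means, rather than the single combined score, is closer to the segmentation the paper describes.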
Local Model Weighted Ensemble for Top-N Movie Recommendation
TANG Ying, SUN Kang-gao, QIN Xu-jia, ZHOU Jian-mei
Computer Science. 2018, 45 (11A): 439-444. 
Abstract PDF(2353KB) ( 1193 )   
References | RelatedCitation | Metrics
To solve the problem that traditional recommendation algorithms cannot accurately capture user preferences with a single model, this paper proposed a Top-N personalized recommendation algorithm based on a weighted ensemble of local models. The algorithm uses user clustering to compute the local models and takes the sparse linear model as the basic recommendation model. Meanwhile, a semantic-level feature vector representation of each user was constructed from the LDA topic model and movie text content, so as to implement user clustering. Experiments on film data crawled from Douban show that the local model weighted ensemble enhances the recommendation quality of the original base model and outperforms several traditional recommendation algorithms, which demonstrates the effectiveness of the proposed algorithm.
Symbolic Value Partition Algorithm Using Granular Computing
Computer Science. 2018, 45 (11A): 445-452. 
Abstract PDF(1710KB) ( 557 )   
References | RelatedCitation | Metrics
In the field of data mining, data preprocessing based on grouping symbolic data is a challenging issue; it provides a more simplified representation of data. In past research, many solutions were proposed, such as rough set approaches. In this paper, a symbolic data grouping algorithm based on granular computing was proposed, which is divided into two stages: granularity generation and granularity selection. In the granularity generation stage, for each attribute a binary tree is built bottom-up from the leaves by clustering the corresponding attribute values, forming a forest of attribute trees. In the granularity selection stage, each tree is considered globally on the basis of information gain, and the optimal granularity layer is selected; the result of layer selection is the grouping of the symbolic data. Experimental results show that, compared with existing algorithms, this algorithm produces a more balanced hierarchy and better compression efficiency, and has good application value.
Next Place Prediction of Massively Multiplayer Online Role-playing Games
TONG Zhen-ming, LIU Zhi-peng
Computer Science. 2018, 45 (11A): 453-457. 
Abstract PDF(1845KB) ( 649 )   
References | RelatedCitation | Metrics
In recent years, massively multiplayer online role-playing games (MMORPGs) have become one of the most popular Internet recreational activities. An MMORPG creates a virtual society in which each user plays a fictional character and controls most of its activities. With the rapid development of MMORPGs, massive data have accumulated, containing both semantic and topological information about these virtual societies. Researchers have already carried out many studies, such as player departure prediction and server consolidation. Next place prediction is crucial for enhancing the gaming experience, improving game design and detecting game bots, yet most next place prediction methods are based on statistical analysis. Because of the large scale of game data, such methods are difficult to apply in practice, and an automatic computation method needs to be developed. This paper proposed a next place prediction algorithm based on the hidden Markov model (HMM). The model treats location characteristics as unobservable parameters and takes the effects of each game character's previous actions into consideration. Experimental results on a real MMORPG dataset show that the approach is intuitive and performs better on densely distributed data than other existing methods for next place prediction in MMORPGs.
Spectral Clustering Algorithm Based on SimRank Score
LI Peng-qing, LI Yang-ding, DENG Xue-lian, LI Yong-gang, FANG Yue
Computer Science. 2018, 45 (11A): 458-461. 
Abstract PDF(2617KB) ( 1068 )   
References | RelatedCitation | Metrics
Traditional spectral clustering algorithms only consider the distance between data points, ignoring their intrinsic relations. To deal with this problem, a spectral clustering method based on the SimRank score was proposed. Firstly, the method computes the adjacency matrix of the undirected graph data and obtains the similarity matrix from SimRank. Secondly, a Laplacian matrix is constructed from the similarity matrix, normalized, and spectrally decomposed. Finally, k-means clustering is performed on the obtained eigenvectors to produce the final clustering results. Experimental results on benchmark datasets from the UCI repository show that the proposed algorithm is superior to existing distance-based spectral clustering algorithms in terms of clustering accuracy, normalized mutual information and purity.
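The SimRank similarity that replaces plain distance in the first step can be sketched with the naive fixed-point iteration below (O(n^2 d^2) per round, so illustrative only); the tiny graph and decay constant C are hypothetical.

```python
def simrank(graph, C=0.8, iters=10):
    """Naive SimRank on an adjacency-dict graph: two nodes are similar if
    their neighbors are similar (illustrative fixed-point iteration)."""
    nodes = list(graph)
    sim = {(a, b): 1.0 if a == b else 0.0 for a in nodes for b in nodes}
    for _ in range(iters):
        new = {}
        for a in nodes:
            for b in nodes:
                if a == b:
                    new[(a, b)] = 1.0
                    continue
                Na, Nb = graph[a], graph[b]
                if not Na or not Nb:
                    new[(a, b)] = 0.0
                    continue
                total = sum(sim[(u, v)] for u in Na for v in Nb)
                new[(a, b)] = C * total / (len(Na) * len(Nb))
        sim = new
    return sim

# "a" and "b" share their only neighbor "c", so they end up highly similar
g = {"a": ["c"], "b": ["c"], "c": ["a", "b"]}
s = simrank(g)
```

The resulting matrix of `s` values would then play the role of the similarity matrix fed into the Laplacian construction.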
Research on News Recommendation Methods Considering Geographical Location of News
YUAN Ren-jin, CHEN Gang
Computer Science. 2018, 45 (11A): 462-467. 
Abstract PDF(1797KB) ( 773 )   
References | RelatedCitation | Metrics
To study the impact of the place of a news event on the performance of news recommendation systems, a news recommendation algorithm considering geographical position (NCGP) was proposed. Firstly, an algorithm was designed to extract the place of a news event. Secondly, the vector space model, the TF-IDF algorithm and the word2vec tool were used to construct news feature vectors. Then the construction of the user interest model was discussed in depth. Finally, cosine similarity between the user interest model and the candidate news set was used to complete the recommendation. Experimental results show that the proposed news event place extraction algorithm performs well, reaching a precision of 93.6%, and that the F-value of NCGP is improved compared with the collaborative filtering recommendation algorithm and a recommendation algorithm that only considers news content.
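The TF-IDF weighting and cosine matching used in the second and final steps can be sketched as follows; the tokenised toy documents are hypothetical, and the paper's word2vec and place-extraction components are not reproduced.

```python
from math import log, sqrt
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF vectors for tokenised documents (illustrative)."""
    n = len(docs)
    df = Counter(w for d in docs for w in set(d))   # document frequency
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append({w: (tf[w] / len(d)) * log(n / df[w]) for w in tf})
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse dict vectors."""
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = sqrt(sum(x * x for x in a.values()))
    nb = sqrt(sum(x * x for x in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [["flood", "city", "rain"], ["flood", "rain", "river"], ["match", "goal", "team"]]
v = tfidf_vectors(docs)
```

In NCGP's setting, one vector would represent the user interest model and the others the candidate news items, ranked by `cosine`.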
Multi-level Feature Selection Mechanism Based on MapReduce
SONG Zhe-li, WANG Chao, WANG Zhen-fei
Computer Science. 2018, 45 (11A): 468-473. 
Abstract PDF(3028KB) ( 620 )   
References | RelatedCitation | Metrics
Feature selection is a critical step in text classification, whose accuracy depends mainly on the quality of the selected feature words. This paper proposed a multi-level feature selection mechanism based on MapReduce. On the one hand, the mechanism screens the original dataset with an improved CHI feature selection algorithm, then uses the mutual information method to filter out noise words and promote high-quality feature words. On the other hand, the time consumption of multi-level feature selection is reduced by implementing the mechanism in the MapReduce model. Experimental results show that the mechanism improves both classification accuracy and runtime when dealing with big data problems.
NJW Spectral Clustering Algorithm with Heuristically Determined Cluster Numbers
CHEN Jun-fen, ZHANG Ming, HE Qiang
Computer Science. 2018, 45 (11A): 474-479. 
Abstract PDF(2255KB) ( 813 )   
References | RelatedCitation | Metrics
The main idea of NJW is to project data points into a feature space and cluster them there with the K-means algorithm, regarding the result as the clustering of the original data points. However, the number of clusters C and the scaling parameter σ of the Gaussian kernel function greatly influence the clustering performance of NJW, and the sensitivity of K-means to initial cluster centers also affects the result. To this end, an improved algorithm, DP-NJW, is presented: it heuristically determines cluster-center points and the number of clusters according to the density distribution of the data points, and then runs NJW to cluster, using the cluster centers and cluster number obtained in the first stage to initialize the K-means step of the second stage. DP-NJW was compared with state-of-the-art clustering algorithms on seven public datasets: it achieved higher clustering accuracy than NJW on five datasets and comparable accuracy on the other two, and outperformed DPC on five datasets. In addition, DP-NJW consumed less computing time than the other two algorithms, which is especially obvious on the larger Aggregation dataset. Overall, DP-NJW is superior to the compared state-of-the-art clustering algorithms.
Software Engineering & Database Technology
Software Stage Effort Prediction Based on Analogy and Grey Model
WANG Yong, LI Yi, WANG Li-li, ZHU Xiao-yan
Computer Science. 2018, 45 (11A): 480-487. 
Abstract PDF(1836KB) ( 832 )   
References | RelatedCitation | Metrics
Accurate software effort prediction is one of the most challenging tasks in the software engineering domain. Due to the inherent uncertainty and risk of the software development process, it is insufficient to predict the whole effort only at the early stage of a project; it is also important to predict the effort of each stage during development. This enables managers to reallocate resources as the project evolves and helps ensure that the project is completed on schedule and under budget. Therefore, this paper presented a new method for stage-effort prediction over physical time based on both the analogy method and the grey model. The proposed hybrid method combines the values predicted by analogy and by the grey model, thereby avoiding the limitations of using either alone. Experimental results on a real-world software engineering dataset indicate that the prediction accuracy of the proposed method is better than that of the analogy method, the GM(1,1) model, GV, the Kalman filter and linear regression, showing great potential.
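The GM(1,1) grey model used as one half of the hybrid can be sketched with the classic accumulate-fit-forecast recipe below; the effort series is a hypothetical toy, and the analogy half and the combination rule are not reproduced.

```python
from math import exp

def gm11_predict(x, steps=1):
    """Classic GM(1,1): fit on series x, forecast `steps` ahead (illustrative)."""
    n = len(x)
    # 1-AGO accumulation
    x1 = [sum(x[:i + 1]) for i in range(n)]
    # background values z and least-squares estimates of a, b
    z = [0.5 * (x1[i] + x1[i + 1]) for i in range(n - 1)]
    m = n - 1
    sz, szz = sum(z), sum(v * v for v in z)
    sy = sum(x[1:])
    szy = sum(z[i] * x[i + 1] for i in range(m))
    denom = m * szz - sz * sz
    a = (sz * sy - m * szy) / denom
    b = (szz * sy - sz * szy) / denom

    def x1_hat(k):  # accumulated prediction at 0-based time index k
        return (x[0] - b / a) * exp(-a * k) + b / a

    # de-accumulate to recover the forecast of the original series
    return [x1_hat(n + s - 1) - x1_hat(n + s - 2) for s in range(1, steps + 1)]

# near-exponential toy effort series (person-days per stage, hypothetical)
forecast = gm11_predict([10.0, 12.0, 14.4, 17.3], steps=1)
```

GM(1,1) suits such short, roughly exponential series, which is why it pairs naturally with analogy-based estimates in the hybrid.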
Approach of Mutation Test Case Generation Based on Model Checking
YANG Hong, HONG Mei, QU Yuan-yuan
Computer Science. 2018, 45 (11A): 488-493. 
Abstract PDF(3121KB) ( 1345 )   
References | RelatedCitation | Metrics
To support mutation analysis in software testing, this paper proposed a method for generating mutation test cases based on model checking. Using the formal analysis and testing framework of UPPAAL, the system under test is first described as a timed automaton conforming to its specification. Then a set of mutation operations following the basic structure and grammar of timed automata is injected into the original model to simulate implementation errors that may occur. The mutated models and reachability properties are used as inputs to UPPAAL Yggdrasil to generate test cases covering the mutated regions. Lastly, the test cases are executed on the mutated models, and a set of valid test cases is selected on the basis of the execution results (whether a test case can kill the mutant). Experimental results show that the test cases generated by the proposed method are valid, and that the mutation score of the test case set is higher than that of existing methods based on state machine duplication and on the model's transition coverage.
Analysis on Technical Support Equipments’ Software Invalidation Based on Soft and Hard Integrated System Methodology
LIU Kai, LIANG Xin, ZHANG Jun-ping
Computer Science. 2018, 45 (11A): 494-496. 
Abstract PDF(3202KB) ( 1044 )   
References | RelatedCitation | Metrics
A new method called the soft and hard integrated method was put forward and applied to the problem of technical support equipment software invalidation. The analysis validated that developing a software reliability and maintainability testing system based on software architecture helps ensure the quality of technical support equipment software. Finally, the emphasis of this research was placed on designing the software architecture of such a reliability and maintainability testing system.
Long Method Detection Based on Cost-sensitive Integrated Classifier
LIU Li-qian, DONG Dong
Computer Science. 2018, 45 (11A): 497-500. 
Abstract PDF(1584KB) ( 752 )   
References | RelatedCitation | Metrics
A long method is a software design problem that requires refactoring because the method is too long. To improve the detection rate of traditional machine learning approaches on long methods, a cost-sensitive ensemble classifier algorithm was proposed from the viewpoint of the imbalanced sample data of code smells. Based on the traditional decision tree algorithm, an under-sampling strategy is used for resampling to generate multiple balanced subsets, and these subsets are trained to produce multiple base classifiers of the same type. Finally, a misclassification cost determined by cognitive complexity is incorporated into the ensemble classifier, which biases the classifier toward the accuracy of the minority class. Compared with traditional machine learning algorithms, this method improves the precision and recall of long method detection.
Software Cost Estimation Method Based on Weighted Analogy
ZHAO Xiao-min, CAO Guang-bin, FEI Meng-yu, ZHU Li-nan
Computer Science. 2018, 45 (11A): 501-504. 
Abstract PDF(2927KB) ( 914 )   
References | RelatedCitation | Metrics
Software cost estimation is one of the most important issues in development cycles, management decisions and the quality of software projects. Aiming at common problems of software cost estimation in the software industry, such as inaccurate and difficult estimation, this paper presented a weighted analogy-based software cost estimation method. In this method, the similarity distance is defined as the Mahalanobis distance, which accounts for correlation, and the weights are obtained by particle swarm optimization; the software cost is then estimated by analogy. The results show that this method is more accurate than methods such as non-weighted analogy and neural networks. At the same time, real cases show that, for cost estimation based on requirement analysis at the early stage of software development, this method is more accurate than expert estimation.
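The Mahalanobis distance that defines similarity between projects can be sketched for the two-feature case with an explicit 2x2 covariance inverse; the project feature data below are hypothetical, and the PSO-learned weights are omitted.

```python
def mahalanobis_2d(x, y, data):
    """Mahalanobis distance between two 2-D points, with the covariance
    estimated from sample `data` (illustrative, 2-D only)."""
    n = len(data)
    mx = sum(p[0] for p in data) / n
    my = sum(p[1] for p in data) / n
    # sample covariance matrix entries
    sxx = sum((p[0] - mx) ** 2 for p in data) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in data) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in data) / (n - 1)
    det = sxx * syy - sxy * sxy
    ixx, iyy, ixy = syy / det, sxx / det, -sxy / det   # inverse covariance
    dx, dy = x[0] - y[0], x[1] - y[1]
    d2 = dx * (ixx * dx + ixy * dy) + dy * (ixy * dx + iyy * dy)
    return d2 ** 0.5

# hypothetical historical projects: (size in KLOC, team size)
data = [(1.0, 2.0), (2.0, 1.0), (3.0, 4.0), (4.0, 3.0), (5.0, 5.0)]
d = mahalanobis_2d((1.0, 2.0), (2.0, 4.0), data)
```

Unlike Euclidean distance, this accounts for the correlation between features (e.g. larger projects tending to have larger teams), which is the property the paper exploits.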
In-page Wear-leveling Memory Management Based on Non-volatile Memory
SUN Qiang, ZHUGE Qing-feng, CHEN Xian-zhang, Edwin H.-M.SHA, WU Lin
Computer Science. 2018, 45 (11A): 505-510. 
Abstract PDF(2502KB) ( 838 )   
References | RelatedCitation | Metrics
Emerging non-volatile memory (NVM) is a promising next-generation storage medium thanks to its advanced characteristics. However, the low endurance of NVM cells makes them vulnerable to frequent fine-grained data updates. This paper proposed a novel in-page wear-leveling memory management scheme (IWMM) for NVM-based storage. IWMM divides pages into basic memory units to support fine-grained updates, and alternately allocates the memory units of a page with a directional-order allocation algorithm to distribute fine-grained updates evenly across memory cells. Experimental results show that IWMM reduces wear counts by 52.6% compared with NVMalloc, a wear-conscious allocator. Meanwhile, the performance of IWMM is 27.6% better than glibc malloc when the ratio of memory deallocation is less than 50%.
Research on Temporal Entity Dependencies Relation and Measurement Method
FU Yu-jing, ZHANG Jun, WANG Yi-heng
Computer Science. 2018, 45 (11A): 511-517. 
Abstract PDF(2786KB) ( 948 )   
References | RelatedCitation | Metrics
All kinds of dependencies exist among entities. In the software development process in particular, the dependencies between software entities have a big impact on software impact analysis and risk analysis. The dependency graph is the most commonly used representation of dependencies; definitions of nodes and edges differ, as does attribute computation, and the temporal properties of nodes and edges are seldom taken into account in existing dependency graph methods. This paper presented a formal definition and analysis of the temporal characteristics of temporal dependencies, and analyzed the importance of four measures: node centrality, node importance, node dependency and edge importance. Finally, a test dataset was extracted from Maven data, and the experimental results show how the indicators vary over time.
Research upon Software Testing Process Model
LIU Kai, LIANG Xin, ZHANG Jun-ping
Computer Science. 2018, 45 (11A): 518-521. 
Abstract PDF(1680KB) ( 1586 )   
References | RelatedCitation | Metrics
Based on a systematic analysis of the characteristics of traditional models, a new software testing process model called the parallel YU model was put forward, and the corresponding activities were laid out clearly. The parallel YU model clearly describes the complex relationships among software testing activities, including the hierarchy, parallelism, time sequencing and iteration between software development activities and testing activities.
Interdiscipline & Application
Design and Application of Big Data Credit Reporting Platform Integrating Blockchain Technology
JU Chun-hua, ZOU Jiang-bo, FU Xiao-kang
Computer Science. 2018, 45 (11A): 522-526. 
Abstract PDF(3385KB) ( 2062 )   
References | RelatedCitation | Metrics
Credit is an intangible asset. A good credit record not only brings a higher loan success rate and a lower borrowing rate, but also lets people enjoy the convenience of credit services. In the future the credit dividend will grow, but it is accompanied by problems such as leakage of personal privacy, alteration of credit data, and unclear legal boundaries for the commercialization of big-data credit. To create a healthy Internet credit ecology, this paper first summarized the problems in existing credit reporting platforms and discussed the feasibility of adopting emerging technologies to solve them, then incorporated blockchain technology to design a multi-source data sharing framework supporting future credit reference systems. Based on multi-source data sharing over blockchains, a big data credit platform fusing multi-source heterogeneous data was established using artificial intelligence, data mining and smart contracts. Taking Internet lending as an example, a decentralized lending application based on the big data credit platform was designed.
Design and Implementation of Distributed TensorFlow Platform Based on Kubernetes
YU Chang-fa, CHEN Xue-lin, YANG Xiao-hu
Computer Science. 2018, 45 (11A): 527-531. 
Abstract PDF(1681KB) ( 1934 )   
References | RelatedCitation | Metrics
This paper designed and implemented a distributed deep learning platform based on Kubernetes. To solve the problems of complex environment configuration for distributed TensorFlow, uneven distribution of underlying physical resources, low training efficiency and long development cycles, a method of containerizing TensorFlow on Kubernetes was proposed. Combining the advantages of the two, Kubernetes provides a stable and reliable computing environment and gives full play to TensorFlow's strengths on heterogeneous resources, greatly reducing the difficulty of large-scale use. Meanwhile, an agile management platform was established, which realizes fast distribution of distributed TensorFlow resources, one-click deployment, second-level startup, dynamic scaling and efficient training.
Monitoring System for Library Environment Based on WiFi Internet of Things
WANG Dong, YUAN Wei, WU Di
Computer Science. 2018, 45 (11A): 532-534. 
Abstract PDF(3477KB) ( 1483 )   
References | RelatedCitation | Metrics
According to the characteristics of library environmental monitoring, this paper designed a building environment monitoring system based on WiFi. The sensor nodes use an STM32 processor to carry various environmental sensors and ESP8266 WiFi modules. The module's AP+STA mode allows a node to act not only as a terminal but also as a router. The processor sends control instructions to the WiFi module, receives data from it, and is responsible for encoding the environmental parameters detected by the sensors. Through a reliable and practical automatic networking method, multi-hop transmission of environmental parameters in the wireless sensor network was realized. Experiments show that the system can monitor various environmental indicators and has practical value.
Application of Sequence Pattern Mining in Communication Network Alarm Prediction
ZHANG Guang-lan, YANG Qiu-hui, CHENG Xue-mei, JIANG Ke, WANG Shuai, TAN Wu-kun
Computer Science. 2018, 45 (11A): 535-538. 
Abstract PDF(3155KB) ( 729 )   
References | RelatedCitation | Metrics
Alarm prediction is one of the techniques that ensure the stability and reliability of an entire network. Existing alarm forecasting technologies have defects such as ignoring the time sequence of alarm data and requiring prior knowledge that is difficult to obtain. Therefore, this paper proposed a sequential pattern mining method based on topological constraints to find meaningful alarm sequence patterns. The algorithm considers the topological connections between network nodes and takes them as constraints when mining alarm sequence patterns. To find infrequent major alarm patterns, it improves the pruning of sequential pattern mining by directly preserving sequence patterns that contain major alarms. Experiments show that the alarm sequence patterns mined by this method can improve the accuracy and efficiency of network alarm prediction and predict infrequent "major" alarms more accurately.
Research on Public Opinion Prediction Based on Inflection Point
ZHENG Bu-qing, ZOU Hong-xia, HU Xin-jie
Computer Science. 2018, 45 (11A): 539-541. 
Abstract PDF(3224KB) ( 1425 )   
References | RelatedCitation | Metrics
Public opinion prediction is an important part of public opinion monitoring. Since inflection points in the evolution process affect public opinion forecasting, a prediction method based on inflection points was proposed on top of ARIMA and the grey prediction model, and a mathematical model of segmentation and mirror processing was established. Finally, an example was used to verify the model, and its advantages and disadvantages were summarized. Experiments show that this method can reduce the influence of inflection points and improve the accuracy of public opinion prediction.
Research on Hierarchical Modeling Technology of Typical System Based on Architecture
WU Zhong-zhi
Computer Science. 2018, 45 (11A): 542-544. 
Abstract PDF(2584KB) ( 617 )   
References | RelatedCitation | Metrics
With the deep integration of industrial technology and information technology, computer modeling and simulation have been widely used in the development of systems and products. However, the resulting models are often disordered, fragmented, unclear in hierarchy and poor in reusability. A method based on system architecture was proposed to develop hierarchical system models. When building the models, the whole system is divided into multiple levels: system level, subsystem level and component level. The interface of each hierarchical model is consistent with the interface defined by the system architecture, which achieves continuity of design and provides effective support for model-based design. The method clearly describes the purpose, level and granularity of each model. A typical servo actuation system was used as an example to verify the effectiveness of the method.
Analysis of Factors Influencing Fasting Plasma Glucose Based on Multiple Linear Regression
ZHANG Fu-wang, YUAN Hui-juan
Computer Science. 2018, 45 (11A): 545-547. 
Abstract PDF(1610KB) ( 1773 )   
A fasting plasma glucose factor analysis method based on multiple linear regression was proposed by analyzing the relationships among the influencing factors of fasting plasma glucose. Firstly, data on the major influencing factors are collected, including serum total cholesterol, triglyceride, fasting insulin and glycated hemoglobin. These influencing factors are then examined and screened through scatter diagrams. A multiple linear regression model based on the least squares method is constructed from the collected data, and a revised model is obtained through stepwise regression. Finally, the model is applied to determine the key factors affecting fasting plasma glucose, so as to give dietary guidance to diabetic patients and provide a reference for clinical treatment.
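The core least-squares fitting step can be sketched in a few lines. The four predictors follow the abstract, but the data below is synthetic and the coefficient values are invented for illustration; the stepwise-regression refinement is not reproduced.

```python
# Ordinary-least-squares multiple regression sketch (synthetic data; the
# coefficient values are illustrative, not clinical results).
import numpy as np

def fit_linear_model(X, y):
    """Return intercept and coefficients of y ~ X via ordinary least squares."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[0], beta[1:]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                # 4 factors: cholesterol, triglyceride,
true_coef = np.array([0.8, 0.1, 0.5, 1.2])   # fasting insulin, HbA1c (assumed weights)
y = 5.0 + X @ true_coef + rng.normal(scale=0.05, size=200)
intercept, coef = fit_linear_model(X, y)     # recovers the generating model
```

The fitted coefficients then rank the factors by influence, which is the basis for the "key factor" determination the abstract describes.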
Information Capability Analysis and Evaluation of New Generation Command and Control System
LU Yun-fei, LI Lin-lin, ZHANG Zhuang, HE He
Computer Science. 2018, 45 (11A): 548-552. 
Abstract PDF(1644KB) ( 1132 )   
With the rapid development of information technology, the information capability of traditional command and control systems lags seriously behind and cannot meet the needs of modern warfare. Therefore, this paper carried out an information capability analysis and evaluation of a new generation command and control system. Firstly, an evaluation index system for information capability was constructed. Secondly, since traditional evaluation does not model the indexes and the validity of index values is hard to guarantee, a detailed quantifiable mathematical model was constructed according to the characteristics of each index. Finally, classical evaluation methods were used to complete the evaluation of information capability and expose shortcomings in system construction, providing an important reference for system improvement.
Research and Implementation of Remote Monitoring System for Mast-climbing Working Platform
WANG Wen-jie, JIA Wen-hua, FEN Hao, YIN Chen-bo
Computer Science. 2018, 45 (11A): 553-557. 
Abstract PDF(2993KB) ( 690 )   
In order to upgrade the informatization of the mast-climbing working platform and ensure its safety, a remote real-time monitoring network platform was designed based on PHP and HTML5. Windows CE and a PLC were used as the processing units, a GPRS DTU served as the communication module, and the TCP protocol was used to connect to the server, all on top of sensor network technology. Over-height, overweight, bottomed and tilt alarms can be raised by the system. An optimized limiting average filtering algorithm was used to solve the false alarm problem caused by pulse interference in the absolute encoder. Web messages can be transferred in real time via HTML5 WebSocket technology, realizing online real-time monitoring, historical data query and alarming. Test results showed that the system can run for a long time with good real-time performance and cost-effectiveness, meeting the needs of real-time monitoring in a variety of situations.
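A limiting average filter of the kind used here against encoder pulse interference clamps any sample that jumps implausibly far from the last accepted value, then smooths with a moving average. The window size and jump threshold below are illustrative assumptions, not the paper's tuned parameters.

```python
# Sketch of a limiting (amplitude-clamped) average filter; window and
# threshold values are assumptions for illustration.
from collections import deque

def limiting_average(samples, max_jump, window=4):
    """Clamp samples that jump more than max_jump from the previous accepted
    value (pulse interference), then smooth with a moving average."""
    out, buf, last = [], deque(maxlen=window), None
    for s in samples:
        if last is not None and abs(s - last) > max_jump:
            s = last                  # reject the interference spike
        last = s
        buf.append(s)
        out.append(sum(buf) / len(buf))
    return out
```

A spurious encoder spike (e.g. a reading of 500 amid values near 10) is clamped before averaging, so it never triggers a false over-height alarm.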
Semantics and Analysis of BPMN 2.0 Process Models
ZHAO Ying, ZHAO Chuan, HUANG Bi, DAI Fei
Computer Science. 2018, 45 (11A): 558-563. 
Abstract PDF(2447KB) ( 789 )   
The Business Process Model and Notation 2.0 (BPMN 2.0) is the de facto standard for capturing business processes. The mix of constructs found in BPMN 2.0 makes it possible to obtain models with a range of semantic errors, including deadlocks and livelocks. Firstly, this paper defined a formal semantics of BPMN 2.0 process models in terms of a mapping to WF-nets. Secondly, the defined semantics was used to analyze the soundness of BPMN 2.0 process models using Petri net analysis techniques. Finally, experimental results showed that this formalization can identify semantic errors in BPMN 2.0 process models.
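The soundness analysis on the WF-net target can be sketched as a reachability check: every reachable marking must be able to complete properly, so the only dead marking allowed is the final one. The tiny two-transition net below is a hand-made example, not the paper's BPMN-to-WF-net translation rules.

```python
# Illustrative soundness check on a tiny WF-net (hand-made example net).
# A transition is (preset, postset): tokens consumed and produced per place.
TRANSITIONS = {
    "t_start": ({"i": 1}, {"p1": 1}),
    "t_task":  ({"p1": 1}, {"o": 1}),
}

def fire(marking, pre, post):
    """Fire a transition if enabled; return the new marking or None."""
    if any(marking.get(p, 0) < n for p, n in pre.items()):
        return None
    m = dict(marking)
    for p, n in pre.items():  m[p] -= n
    for p, n in post.items(): m[p] = m.get(p, 0) + n
    return {p: n for p, n in m.items() if n}

def reachable(initial):
    """All markings reachable from `initial`, as canonical sorted tuples."""
    key = lambda m: tuple(sorted(m.items()))
    seen, stack = {key(initial)}, [initial]
    while stack:
        m = stack.pop()
        for pre, post in TRANSITIONS.values():
            m2 = fire(m, pre, post)
            if m2 is not None and key(m2) not in seen:
                seen.add(key(m2)); stack.append(m2)
    return seen

def is_sound():
    """No deadlock: every dead reachable marking must be the final one {o: 1}."""
    final = (("o", 1),)
    for m in reachable({"i": 1}):
        dead = all(fire(dict(m), pre, post) is None
                   for pre, post in TRANSITIONS.values())
        if dead and m != final:
            return False
    return True
```

Removing `t_task` from the net would leave `{p1: 1}` as a dead non-final marking, which is exactly the deadlock pattern the analysis flags.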
Harmonic and Inter-harmonic Detection Method Based on EEMD-RobustICA and Prony Algorithm
DU Wei-jing, ZHAO Feng, GAO Feng-yang
Computer Science. 2018, 45 (11A): 564-568. 
Abstract PDF(3544KB) ( 891 )   
To address the mode mixing in empirical mode decomposition and the noise sensitivity of the Prony algorithm, a method combining Prony analysis with Ensemble Empirical Mode Decomposition (EEMD) and Robust Independent Component Analysis (RobustICA) was applied to the detection of harmonics and inter-harmonics. First, the noisy harmonic signal is decomposed by ensemble empirical mode decomposition to obtain intrinsic mode functions (IMFs) of different orders. These are then fed into RobustICA, and the resulting independent components are denoised by soft thresholding to obtain reconstructed IMFs, which are summed into a denoised signal whose parameters are identified by the Prony algorithm. Simulation results show that the method has good noise immunity and overcomes the Prony algorithm's sensitivity to noise, effectively improving the accuracy of harmonic and inter-harmonic detection.
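The parameter-identification step can be illustrated with a minimal order-2 Prony estimator for a single real tone: fit a linear-prediction model and read the frequency off the characteristic roots. This sketch assumes a clean single sinusoid; the EEMD/RobustICA denoising front end that makes Prony usable on noisy signals is not reproduced.

```python
# Minimal Prony-style frequency estimator (order 2, single real tone);
# the denoising front end from the paper is omitted.
import numpy as np

def prony_freq(x, fs):
    """Estimate the frequency of one sinusoid sampled at fs Hz via
    order-2 linear prediction, the core of Prony's method."""
    x = np.asarray(x, dtype=float)
    # Solve x[n] = a1*x[n-1] + a2*x[n-2] in the least-squares sense.
    A = np.column_stack([x[1:-1], x[:-2]])
    a1, a2 = np.linalg.lstsq(A, x[2:], rcond=None)[0]
    roots = np.roots([1.0, -a1, -a2])     # characteristic roots e^{±j·2πf/fs}
    return abs(np.angle(roots[0])) * fs / (2 * np.pi)

fs = 1000.0
t = np.arange(200) / fs
freq = prony_freq(np.cos(2 * np.pi * 50 * t), fs)   # a 50 Hz tone
```

Higher model orders recover multiple harmonics and their damping at once, which is why Prony is attractive for inter-harmonic detection despite its noise sensitivity.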
Long Term Memory Analysis of Relationship Between Pulse Transit Time and Blood Pressure
LI Han, ZHAO Hai, CHEN Xing-chi, LIN Chuan
Computer Science. 2018, 45 (11A): 569-572. 
Abstract PDF(2380KB) ( 1640 )   
Compared with the Korotkoff sound method, estimating blood pressure via pulse transit time (PTT) is more portable and supports continuous measurement. However, the linear equations established by existing research stay valid only for a short time, and the mechanism by which pulse transit time changes with blood pressure needs further analysis. Based on 10 groups of data from the MIMIC database, the relationship between blood pressure and pulse transit time was analyzed from the perspective of long-term memory, with symbolization and complex networks as the main research means. The degree distribution of the SBP network shows power-law characteristics, indicating the long-term memory of the SBP-PTT time series. The nodes of the SBP network reach saturation faster than those of the DBP network, reflecting the continuous influence of a certain core state on the SBP-PTT relationship. The results provide a basis for more accurate noninvasive continuous measurement of blood pressure through pulse transit time.
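The symbolize-then-build-a-network pipeline can be sketched as follows. The three-symbol alphabet, the increment thresholds and the word length are assumptions for illustration; the paper's degree-distribution analysis then runs on the resulting degree counts.

```python
# Sketch of series symbolization and the word-transition network
# (alphabet, thresholds and word length are assumed, not from the paper).
from collections import defaultdict

def symbolize(series, thresholds=(-0.5, 0.5)):
    """Map each increment to a symbol: d(own), s(table), u(p)."""
    out = []
    for a, b in zip(series, series[1:]):
        d = b - a
        out.append("d" if d < thresholds[0] else "u" if d > thresholds[1] else "s")
    return out

def transition_network(symbols, word_len=2):
    """Nodes are symbol words; consecutive words are linked; return degrees."""
    words = ["".join(symbols[i:i + word_len])
             for i in range(len(symbols) - word_len + 1)]
    degree = defaultdict(int)
    for w1, w2 in zip(words, words[1:]):
        degree[w1] += 1
        degree[w2] += 1
    return dict(degree)
```

A power-law-like degree distribution over such word nodes is what the abstract reads as evidence of long-term memory in the SBP-PTT series.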
Improved Marching Cubes Based on CUDA
Computer Science. 2018, 45 (11A): 573-575. 
Abstract PDF(1660KB) ( 773 )   
Marching Cubes (MC) is one of the classical algorithms for extracting iso-surfaces from medical volume data. However, poor mesh quality and slow execution have hindered further applications such as finite element analysis. In this paper, an improved MC algorithm based on CUDA was presented. Three kinds of parallel computing were proposed to extract active voxels and edges in CUDA. Simultaneously, point projection was used to move the endpoints of the active edges and improve the mesh quality. Experimental results show that the presented method achieves interactive modeling.
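The "extract active voxels" stage, the part the paper parallelizes per CUDA thread, can be sketched on the CPU with vectorized NumPy: a cell is active when its eight corner values straddle the isovalue. The synthetic spherical volume below is an assumption for demonstration.

```python
# CPU sketch of the active-cell extraction stage of Marching Cubes
# (the paper's CUDA kernels do this per thread; the volume is synthetic).
import numpy as np

def active_cells(volume, iso):
    """Boolean mask of cells whose 8 corner values straddle the isovalue."""
    v = volume
    corners = np.stack([v[:-1, :-1, :-1], v[1:, :-1, :-1],
                        v[:-1, 1:, :-1], v[1:, 1:, :-1],
                        v[:-1, :-1, 1:], v[1:, :-1, 1:],
                        v[:-1, 1:, 1:], v[1:, 1:, 1:]])
    below = (corners < iso).all(axis=0)     # all corners inside the surface
    above = (corners >= iso).all(axis=0)    # all corners outside
    return ~(below | above)                 # mixed-sign cells are active

# A distance field around the grid center: the iso-surface is a sphere.
g = np.linalg.norm(np.stack(np.meshgrid(*[np.arange(16)] * 3)) - 8, axis=0)
mask = active_cells(g, iso=5.0)
```

Only the (typically few) active cells then need triangle-table lookups and edge interpolation, which is why this filtering step dominates the speedup.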
Research on Blockchain-based Information Transmission and Tracing Pattern in Digitized Command-and-Control System
DU Xing-zhou, ZHANG Kai, JIANG Kun, MA Hao-bo
Computer Science. 2018, 45 (11A): 576-579. 
Abstract PDF(1736KB) ( 1070 )   
This paper described an instruction transmission and tracing pattern for digitized command-and-control business. A typical digitized command-and-control system runs in a peer-to-peer network, in which each transaction of interactive instructions should be well recorded. This paper proposed a blockchain-based solution to implement efficient transmission and tracing of significant transactions, which can be precisely defined by system designers. With its inherent attributes of encryption, decentralization and tamper resistance, the blockchain-based solution focuses on data consistency, transaction timeliness and information security. Prime events of command-and-control interactive instructions among master nodes, slave nodes and witness nodes are completely transmitted and sequentially appended to traceable data blocks under the constraint of a consensus mechanism.
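The tamper evidence that sequential, hash-linked data blocks provide can be shown in a minimal sketch. The consensus between master, slave and witness nodes is out of scope here; the instruction payloads are invented examples.

```python
# Minimal hash-linked record chain illustrating tamper evidence
# (consensus between node roles is not modeled; payloads are examples).
import hashlib, json

def make_block(prev_hash, payload):
    """Append-only block: hash covers the payload and the predecessor's hash."""
    body = {"prev": prev_hash, "payload": payload}
    body["hash"] = hashlib.sha256(
        json.dumps({"prev": prev_hash, "payload": payload},
                   sort_keys=True).encode()).hexdigest()
    return body

def verify(chain):
    """Re-derive every hash and check each block points at its predecessor."""
    prev = "0" * 64
    for blk in chain:
        expect = hashlib.sha256(
            json.dumps({"prev": blk["prev"], "payload": blk["payload"]},
                       sort_keys=True).encode()).hexdigest()
        if blk["prev"] != prev or blk["hash"] != expect:
            return False
        prev = blk["hash"]
    return True

chain, prev = [], "0" * 64
for instr in ["advance", "hold", "withdraw"]:   # example instruction events
    blk = make_block(prev, instr)
    chain.append(blk)
    prev = blk["hash"]
```

Altering any recorded instruction breaks the hash link of every later block, which is what makes the instruction trace auditable.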
System Solution of Multi-blockchain Transaction Dispatching and Event Handling
LIU Xiong-wen
Computer Science. 2018, 45 (11A): 580-583. 
Abstract PDF(2892KB) ( 918 )   
In blockchain systems based on the Byzantine Fault Tolerance (BFT) protocol, efficiency and performance are poor, which limits the application and development of blockchain. A system solution supporting transaction dispatching and event handling over multiple blockchains was proposed. In this solution, the system is divided into application clients, a handling system and blockchains. Application clients send out transaction requests and receive the events they are interested in; each blockchain internally handles transaction requests and events. The handling system adapts both to systems with a variable number of blockchains and to systems with a fixed number of blockchains. Two blockchain selection policies, a mapping relationship table and a hash consistency algorithm, are designed. Considering load balance, monotonicity and consistency, both policies dispatch transactions based on the service type and the internal ID within the service type. Each blockchain is placed in one-to-one correspondence with a transaction calling client and an event and message streaming client. Each application client sends out transaction requests and receives transaction results by interacting with the transaction application calling server, the transaction application event connection server and the event message streaming server. The transaction dispatcher and event handler were designed in detail, and basic tests and verification were performed for the whole system solution.
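The hash-consistency dispatch policy can be sketched as a standard consistent-hashing ring keyed on (service type, internal ID). The chain names and the virtual-node replica count below are illustrative assumptions, not the paper's configuration.

```python
# Consistent-hashing dispatch sketch: transactions are routed to a blockchain
# by hashing (service type, internal ID) onto a ring of virtual nodes.
import bisect, hashlib

class HashRing:
    def __init__(self, chains, replicas=64):
        # Each chain owns `replicas` virtual nodes, spreading load evenly.
        self.ring = sorted(
            (self._h(f"{c}#{i}"), c) for c in chains for i in range(replicas))
        self.keys = [k for k, _ in self.ring]

    @staticmethod
    def _h(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def dispatch(self, service_type, internal_id):
        """Pick the first virtual node clockwise from the transaction's key."""
        k = self._h(f"{service_type}:{internal_id}")
        i = bisect.bisect(self.keys, k) % len(self.keys)
        return self.ring[i][1]
```

Consistent hashing gives the monotonicity the abstract asks for: adding or removing one blockchain only remaps the keys that chain owned, leaving other dispatches unchanged.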
Research on Well Distribution in Carbonate Reservoirs Based on Novel Genetic Algorithm
JIANG Rui-zhong, YANG Yi-bo
Computer Science. 2018, 45 (11A): 584-586. 
Abstract PDF(1938KB) ( 539 )   
The Tahe Oilfield is a carbonate reservoir. Due to the randomness of cave development, reasonable well positions must be established at the beginning of development to maximize the development effect. The traditional genetic algorithm was improved in multiple respects, and a new genetic algorithm was put forward for the domain of oil and gas field development. The introduction of an elimination operator, elite archives and a new co-evolution scheme between sub-populations greatly improves the optimization performance. Finally, on the actual geological model of the oilfield, the relevant simulations and calculations show that the final recovery ratio and cumulative oil production are more than 5% higher than the results of the traditional algorithm, demonstrating the effectiveness of the algorithm.
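Two of the named improvements, the elimination operator and the elite archive, can be shown in a toy genetic algorithm. This sketch maximizes a simple 1-D function rather than a reservoir simulation, and all population parameters are invented for illustration; the co-evolution between sub-populations is omitted.

```python
# Toy GA with an elimination operator and an elite archive (illustrative
# parameters; optimizes a 1-D toy objective, not a reservoir model).
import random

def evolve(fitness, pop_size=30, gens=60, elite=2, kill=5, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        pop = pop[:-kill]                       # elimination: drop the worst
        nxt = pop[:elite]                       # elite archive survives unchanged
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:10], 2)      # parents drawn from the best
            nxt.append((a + b) / 2 + rng.gauss(0, 0.3))  # crossover + mutation
        pop = nxt
    return max(pop, key=fitness)

best = evolve(lambda x: -(x - 3.0) ** 2)        # toy objective, optimum at x = 3
```

Elitism guarantees the best well placement found so far is never lost between generations, while elimination keeps selection pressure on the tail of the population.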
Design of Data Transmission System Based on 2D Code
TAO Sun-jie, YU Tao
Computer Science. 2018, 45 (11A): 587-590. 
Abstract PDF(2431KB) ( 1466 )   
A 2D-code-based data transmission system was designed to implement automatic data transmission between classified information systems, and a data communication procedure was proposed. On the sending side, the transfer data is segmented into multiple frames according to the 2D code data capacity, and the frames are used to generate a 2D code image sequence. On the receiving side, the sequence is sampled by a camera or another vision sampling device, the sampled 2D code images are decoded to recover the transfer data, and the transfer result is fed back to the sending side. Compared with existing solutions such as manual interaction and safe-hinge equipment, the proposed system overcomes major drawbacks such as a low level of automation, low transfer efficiency and costly devices. Moreover, the system implements automatic and efficient data transmission in a network isolation environment, so it can be used for information interaction with high timeliness demands in classified information systems.
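The send-side framing and receive-side reassembly around the 2D-code channel can be sketched as follows; the per-code capacity and the frame header layout are assumptions, and the actual 2D code encode/decode step is treated as an opaque channel.

```python
# Framing/reassembly sketch for the 2D-code channel (capacity and header
# layout are assumed; encode/decode of the codes themselves is omitted).

def to_frames(data: bytes, capacity: int):
    """Split data into numbered frames that each fit one 2D code."""
    chunks = [data[i:i + capacity] for i in range(0, len(data), capacity)]
    return [{"seq": i, "total": len(chunks), "payload": c}
            for i, c in enumerate(chunks)]

def reassemble(frames):
    """Recover the data; report missing sequence numbers for re-send feedback."""
    frames = sorted(frames, key=lambda f: f["seq"])
    got = {f["seq"] for f in frames}
    missing = sorted(set(range(frames[0]["total"])) - got)
    if missing:
        return None, missing
    return b"".join(f["payload"] for f in frames), []
```

The `missing` list is the feedback the receiving side returns so the sender can redisplay only the dropped frames, which is what makes the transfer automatic.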
Implementation and Optimization of SOM Algorithm on Sunway Many-core Processors
YAO Qing, ZHENG Kai, LIU Yao, WANG Su, SUN Jun, XU Meng-xuan
Computer Science. 2018, 45 (11A): 591-596. 
Abstract PDF(2117KB) ( 1072 )   
The self-organizing map (SOM) is a classical algorithm often used in machine learning, but its execution time increases sharply on complex data. Parallelization can solve this problem effectively. A parallel SOM algorithm was proposed for the "Sunway TaihuLight" heterogeneous supercomputer, ranked first in the latest TOP500 list, and implemented on a single core group and on multiple core groups with both model parallelism and data parallelism. On the one hand, the main calculation steps of SOM are transformed into matrix operations through program refactoring, and their parallelism is realized with the high-performance extended math library. On the other hand, a variety of optimization methods, especially ones based on the supercomputer's hardware, are used to optimize performance. With these methods, the performance of the algorithm is greatly improved. In the experiments, the maximum speedup exceeds 10000 when using 64 core groups, and the CPE speedup reaches more than 900, indicating that the designed algorithm can take full advantage of the "Sunway 26010" CPEs.
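The matrix-operation refactoring of one SOM update can be sketched in NumPy: the best-matching-unit search and the neighborhood update both become whole-array operations. The Sunway-specific library calls are replaced here by their NumPy equivalents, and the map size, learning rate and neighborhood width are illustrative.

```python
# One SOM update as matrix operations (NumPy stand-in for the Sunway
# extended math library; map size and hyperparameters are illustrative).
import numpy as np

def som_step(weights, x, lr=0.5, sigma=1.0):
    """Find the BMU via vectorized distances, then pull the whole map
    toward x with a Gaussian neighborhood, all as array operations."""
    d = np.linalg.norm(weights - x, axis=-1)            # (rows, cols) distances
    bmu = np.unravel_index(np.argmin(d), d.shape)       # best-matching unit
    rows, cols = np.indices(d.shape)
    grid_dist2 = (rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2
    h = np.exp(-grid_dist2 / (2 * sigma ** 2))[..., None]  # neighborhood weights
    return weights + lr * h * (x - weights)

rng = np.random.default_rng(0)
W = rng.normal(size=(6, 6, 3))          # 6x6 map of 3-dimensional weights
x = np.array([1.0, 0.0, 0.0])
for _ in range(50):                     # repeated presentation of one sample
    W = som_step(W, x)
```

Because every step is a dense array operation, the same structure maps directly onto batched matrix kernels on the CPE mesh.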
Modeling and Behavior Verification for Collaborative Business Processes
ZHAO Ying, PAN Hua, ZHANG Yun-meng, MO Qi, DAI Fei
Computer Science. 2018, 45 (11A): 597-602. 
Abstract PDF(1912KB) ( 607 )   
Modeling and behavior verification of collaborative business processes is the key to ensuring their correct enactment. This paper proposed an approach to model and verify the behavior of collaborative business processes. Firstly, the method uses finite state automata to model each peer's business process and composes them into the collaborative business process under the asynchronous communication model through a centralized message buffer. Secondly, a declarative template is given for behavior constraints, which is used to define the behavior constraint relationships in collaborative business processes; this behavior constraint specification can be converted to LTL formulas by mapping rules. Finally, a behavior verification framework is proposed to automatically check the behavior of collaborative business processes with the help of PAT (Process Analysis Toolkit). The feasibility and effectiveness of the method were demonstrated through the modeling and behavior verification of an emergency response system for public emergency events.
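One common declarative template, the "response" constraint (every occurrence of `a` is eventually followed by `b`, i.e. the LTL formula G(a → F b)), can be checked directly on finite traces of the composed process. This sketch stands in for the PAT machinery; the event names are invented examples.

```python
# Checking one declarative behavior constraint (the "response" template,
# LTL G(a -> F b)) over finite traces; event names are invented examples.

def satisfies_response(trace, a, b):
    """True iff every occurrence of `a` is eventually followed by `b`."""
    pending = False
    for e in trace:
        if e == a:
            pending = True        # an `a` is awaiting its response
        elif e == b:
            pending = False       # the pending `a` (if any) is answered
    return not pending

def verify_all(traces, a, b):
    """Check the constraint over every observed trace of the composition."""
    return all(satisfies_response(t, a, b) for t in traces)
```

A model checker generalizes this from a finite set of traces to all behaviors of the automaton composition, which is what the PAT-based framework automates.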