Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 45 Issue 4, 15 April 2018
History and Development Tendency of Data Quality
CAI Li, LIANG Yu, ZHU Yang-yong and HE Jing
Computer Science. 2018, 45 (4): 1-10.  doi:10.11896/j.issn.1002-137X.2018.04.001
In the Internet age, data has become a new factor of production, a basic and strategic resource, and an important productive force. Big data services have been widely carried out in China, and data exchanges have been established. Data quality has now become a key issue restricting the development of the data industry. This paper divided the research issues of data quality into three stages in chronological order and summarized the representative results of each stage, including methodologies, techniques, models, tools and frameworks. Then, it analyzed the challenges and opportunities faced by data quality research in the new environment of big data, the Internet of Things and cloud computing. Finally, it discussed the research focuses and development trends of data quality from six aspects: data quality models, quality management of big data, related quality techniques for big data, crowdsourcing, the Internet of Things and data sharing.
Summary of Graph Edit Distance
XU Zhou-bo, ZHANG Kun, NING Li-hua and GU Tian-long
Computer Science. 2018, 45 (4): 11-18.  doi:10.11896/j.issn.1002-137X.2018.04.002
Graph edit distance is one of the most flexible and general graph pattern matching models available. This matching method has attracted wide attention from scholars owing to its capability to handle many kinds of graph data. Firstly, the related concepts of graph edit distance were introduced. Then the exact graph edit distance algorithms based on heuristic search technology were described briefly, and the inexact edit distance algorithms based on bipartite graph matching were analyzed in detail. Finally, some existing problems were summarized, and the future development trend was briefly discussed.
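For reference, the standard definition of graph edit distance that the surveyed algorithms compute or approximate can be stated as follows (a conventional formulation, not quoted from the paper):

$$
\mathrm{GED}(g_1, g_2) = \min_{(e_1, \ldots, e_k) \in \Upsilon(g_1, g_2)} \sum_{i=1}^{k} c(e_i),
$$

where $\Upsilon(g_1, g_2)$ is the set of edit paths transforming $g_1$ into $g_2$, and $c(e_i) \geq 0$ is the cost of edit operation $e_i$ (insertion, deletion, or substitution of a node or an edge).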
Research Progress on Data Collection in Mobile Low-duty-cycle Wireless Sensor Networks
LIANG Jun-bin, ZHOU Xiang, WANG Tian and LI Tao-shen
Computer Science. 2018, 45 (4): 19-24.  doi:10.11896/j.issn.1002-137X.2018.04.003
Mobile low-duty-cycle wireless sensor networks (MLDC-WSN) are a kind of new sensor network that has appeared in recent years, and they can overcome the drawback of large energy consumption in traditional static wireless sensor networks (WSN). However, new features of MLDC-WSN bring new challenges to data collection applications. For instance, node mobility causes the network topology to change constantly, which makes network connectivity unstable; and the nodes only wake up for a small proportion of time, which increases communication latency. This paper analyzed and summarized the research progress on data collection in MLDC-WSN, and gave an in-depth analysis of three main aspects of existing work: mobility management of nodes, sleep scheduling of nodes, and data collection protocols. In addition, it pointed out several important scientific problems in this field and discussed future research directions.
Summary of Security Technology and Application in Industrial Control System
SUO Yan-feng, WANG Shao-jie, QIN Yu, LI Qiu-xiang, FENG Da-jun and LI Jing-chun
Computer Science. 2018, 45 (4): 25-33.  doi:10.11896/j.issn.1002-137X.2018.04.004
To face the new challenges caused by the deep integration of control systems and Internet technology, and to resist targeted attacks such as Stuxnet, Flame and BlackEnergy, this paper addressed the technical lag in vulnerability mining, repair and control for industrial control systems and the problems of being "difficult to detect, difficult to monitor, difficult to protect" by researching the theoretical model, key technology, equipment development and test evaluation of industrial control systems. Taking the research of vulnerability mining and exploitation as the main line, theoretical system architecture research and test verification platform construction as the basis, dynamic monitoring protection and active defense as the goal, and test-example-set attack and defense verification and typical demonstration as the application, this paper proposed security technology solutions covering vulnerability mining, deep detection, dynamic protection and active defense for industrial control systems, and designed an integrated security technology system comprising vulnerability mining, verification and evaluation, dynamic protection and active defense.
Survey of Terrain LOD Technology Based on Quadtree Segmentation
WANG Zhen-wu, LV Xiao-hua and HAN Xiao-hui
Computer Science. 2018, 45 (4): 34-45.  doi:10.11896/j.issn.1002-137X.2018.04.005
Level of detail (LOD) technology is the most widely adopted technology for model simplification of large-scale terrain, and it greatly accelerates terrain roaming. Among all the LOD methods, the most popular is the LOD algorithm based on quadtree segmentation. Many domestic and foreign scholars have done a great deal of research on LOD technology. In this paper, the quadtree-based LOD algorithms were sorted and summarized systematically, several key algorithms were classified and analyzed, their advantages and disadvantages were discussed in detail, and their current research status was reviewed comprehensively.
Survey on Application of Homomorphic Encryption in Encrypted Machine Learning
CUI Jian-jing, LONG Jun, MIN Er-xue, YU Yang and YIN Jian-ping
Computer Science. 2018, 45 (4): 46-52.  doi:10.11896/j.issn.1002-137X.2018.04.006
Nowadays, existing machine learning algorithms cannot analyze and calculate on encrypted data, while many areas (such as the medical and financial industries) strongly require data to remain private and secure when analyzed and processed by an untrusted person or company. This has led to the generation and development of encrypted machine learning. Homomorphic encryption is the primary idea for solving this problem: it ensures that calculations can be performed on ciphertext without decryption, yielding the same result as the same calculations performed on the plaintext. This paper conducted a survey on the application of homomorphic encryption in encrypted machine learning. It mainly introduced three kinds of algorithms (encrypted neural networks, encrypted k-nearest neighbors, and encrypted decision trees and completely random forests) which are used to realize encrypted machine learning with homomorphic encryption, and analyzed their scheme designs from the aspects of correctness, security and efficiency. This paper summarized the construction of different encrypted machine learning algorithms, pointed out the key problems of homomorphic encryption for encrypted machine learning and the content that needs attention in further studies, and provided some references for homomorphic encryption and encrypted machine learning.
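To make the homomorphic property above concrete, here is a minimal toy example (textbook RSA with deliberately insecure parameters, shown purely as an illustration of computing on ciphertexts; it is not a scheme from the surveyed work):

```python
# Toy multiplicatively homomorphic scheme (textbook RSA).
# Tiny, insecure parameters -- for illustration only.
p, q = 61, 53
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent (Python 3.8+)

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

a, b = 7, 6
c = (enc(a) * enc(b)) % n      # multiply the ciphertexts only
assert dec(c) == (a * b) % n   # decrypts to the product of the plaintexts
print(dec(c))                  # 42
```

Fully homomorphic schemes extend this idea so that both addition and multiplication, and hence arbitrary circuits such as the layers of a neural network, can be evaluated on ciphertexts.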
0-1 Knapsack Variant with Time Scheduling
WANG Zheng-li, XIE Tian, HE Kun and JIN Yan
Computer Science. 2018, 45 (4): 53-59.  doi:10.11896/j.issn.1002-137X.2018.04.007
This paper proposed an NP-hard 0-1 knapsack variant that considers both space and time. Given n items, with each item i having weight wi and a required continuous storage duration ti, and a knapsack with capacity S, a schedule is to be provided for the packing time of each item (the removal time can be deduced accordingly). At any time instant, the total weight of the packed items must not exceed the capacity of the knapsack. This paper proposed three algorithms for this problem: a quick dynamic programming (DP) algorithm, a branch-and-bound (BnB) based exact algorithm, and a genetic algorithm. The DP algorithm first regards all items as raw and uses DP to pack as many raw items as possible into the knapsack. Whenever an item becomes mature, i.e., it has been stored in the knapsack for enough time, the mature item is removed, and DP is applied again to pack as many raw items as possible into the remaining knapsack capacity. The process continues until all items are mature and removed from the knapsack, completing the DP schedule. The BnB-based exact algorithm defines lower and upper bounds and prunes unnecessary branches to shorten the running time. The genetic algorithm defines each individual as a packing order, together with a corresponding fitness value and selection, mutation and crossover operators. Experiments on three sets with a total of 36 designed benchmark instances show that DP can quickly produce high-quality schedules, the BnB-based exact algorithm works well for small instances, and the genetic algorithm can deal with hundreds of items within 1500 seconds while producing schedules with considerably shorter makespan than DP.
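A minimal sketch of the DP scheduling loop described above (the weight-maximizing subproblem, the event handling and all variable names are illustrative assumptions, not details from the paper):

```python
def knapsack_dp(items, cap):
    """0-1 knapsack DP maximizing packed weight; items = [(id, weight)]."""
    best = [0] * (cap + 1)
    pick = [frozenset() for _ in range(cap + 1)]
    for i, w in items:
        for c in range(cap, w - 1, -1):
            if best[c - w] + w > best[c]:
                best[c] = best[c - w] + w
                pick[c] = pick[c - w] | {i}
    return pick[cap]

def dp_schedule(weights, durations, S):
    """Assumes every weight fits the knapsack (weights[i] <= S)."""
    raw = set(range(len(weights)))
    pack_time, maturity = {}, {}
    t, used = 0, 0
    while raw or maturity:
        chosen = knapsack_dp([(i, weights[i]) for i in raw], S - used)
        for i in chosen:                     # pack newly chosen raw items
            pack_time[i], maturity[i] = t, t + durations[i]
            used += weights[i]
        raw -= chosen
        if maturity:                         # jump to the next maturity event
            t = min(maturity.values())
            for i in [j for j, m in maturity.items() if m == t]:
                used -= weights[i]           # remove the mature item
                del maturity[i]
    return pack_time

print(dp_schedule([3, 2, 2], [5, 4, 6], 4))
```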
Edge Cloud Clustering Algorithm Based on Maximal Clique
ZHU Jin-bin, WU Ji-gang and SUI Xiu-feng
Computer Science. 2018, 45 (4): 60-65.  doi:10.11896/j.issn.1002-137X.2018.04.008
Effectively combining edge clouds can offer more powerful computing capacity, which is a promising research direction in cloud computing. Selecting multiple edge clouds to combine is challenging, because a bad strategy negatively impacts the computational power of the resulting group. The problem is generally modeled with a resource topology graph in which cloud nodes are represented as vertices and the links between nodes are represented as edges. Edge cloud selection is then equivalent to subgraph extraction from the resource topology graph, which is a typical NP-hard problem. The existing minStar algorithm extracts the subgraph with the smallest communication delay between nodes and assigns the corresponding resources to the customer; it is a greedy strategy, resulting in local optima and poor global performance. The proposed maximal-clique-based algorithm partitions the graph into several smaller, non-overlapping complete subgraphs, constructs resource blocks in units of complete subgraphs, and assigns cloud resources in units of resource blocks. Simulation results show that, compared with the minStar algorithm, the proposed algorithm reduces the global maximum communication delay by 50%.
Chinese Part-of-speech Tagging Model Using Attention-based LSTM
SI Nian-wen, WANG Heng-jun, LI Wei, SHAN Yi-dong and XIE Peng-cheng
Computer Science. 2018, 45 (4): 66-70.  doi:10.11896/j.issn.1002-137X.2018.04.009
Because traditional statistical-model-based Chinese part-of-speech tagging relies heavily on manually designed features, this paper proposed an effective attention-based long short-term memory model for Chinese part-of-speech tagging. The proposed model takes distributed word vectors as unit inputs and extracts rich contextual feature representations with a bidirectional long short-term memory network. At the same time, an attention-based hidden layer is added to the network, and attention probabilities are distributed over the hidden states at different time steps to optimize and improve the quality of the hidden vectors. State transition probabilities are employed in the decoding process to further improve accuracy. Experimental results on the PKU and CTB5 datasets show that the proposed model performs Chinese part-of-speech tagging effectively; it achieves higher accuracy than traditional methods and obtains competitive results compared with state-of-the-art models.
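One plausible reading of the described architecture, as a minimal sketch (layer sizes, the attention form and the omission of the transition-probability decoder are assumptions for illustration):

```python
import torch
import torch.nn as nn

class AttnBiLSTMTagger(nn.Module):
    """Word embeddings -> BiLSTM -> attention-reweighted hidden states
    -> per-token tag scores (transition-based decoding omitted)."""
    def __init__(self, vocab_size, n_tags, emb=128, hid=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hid, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hid, 1)       # one score per time step
        self.out = nn.Linear(2 * hid, n_tags)

    def forward(self, x):                       # x: (batch, seq_len)
        h, _ = self.lstm(self.emb(x))           # (batch, seq_len, 2*hid)
        a = torch.softmax(self.attn(h), dim=1)  # attention over time steps
        return self.out(a * h)                  # (batch, seq_len, n_tags)

scores = AttnBiLSTMTagger(5000, 40)(torch.randint(0, 5000, (2, 7)))
print(scores.shape)   # torch.Size([2, 7, 40])
```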
Parallelization of LTL Model Checking Based on Possibility Measure
LEI Li-hui and WANG Jing
Computer Science. 2018, 45 (4): 71-75.  doi:10.11896/j.issn.1002-137X.2018.04.010
Distributed model checking (DMC) is an effective solution to the state-explosion problem in model checking. A qualitative LTL distributed model checking algorithm has been proposed; however, the problem of parallelizing quantitative LTL verification has not been solved effectively. In this paper, a new dynamic state partition method was presented, and a quantitative DMC was proposed based on the qualitative LTL distributed verification algorithm. Firstly, the system model is transformed into a possibilistic Kripke structure, a concurrent component is chosen, and its state space is partitioned according to the relationships between states, so that related states are allocated to the same computing node. Secondly, the partition results are adjusted for load balance of the DMC system. Then, the cross product of the partition results and the states of the remaining components is computed to complete the state partition of the system. Finally, the properties are represented by nondeterministic finite automata, and their cross products complete the quantitative verification of the system based on the extended nested-DFS distributed verification algorithm.
Hybrid Evolutionary Algorithm for Solving Mixed Capacitated General Routing Problem
SUN Qi, JIN Yan, HE Kun and XU Ling-xuan
Computer Science. 2018, 45 (4): 76-82.  doi:10.11896/j.issn.1002-137X.2018.04.011
This paper studied the NP-hard mixed capacitated general routing problem (MCGRP), which is derived from the classical vehicle routing problem (VRP) by adding capacity constraints and demands on the arcs. Given a fleet with an unlimited number of vehicles, the vehicles serve customers from the depot and must return to the depot after completing service. The constraints are that the total load of each vehicle cannot exceed its capacity and that each demand is served exactly once by one vehicle. This paper aimed to provide a service route plan for each vehicle such that the total travel expense of all vehicles is minimized while the constraints are satisfied. The MCGRP has high theoretical as well as application value. An efficient hybrid evolutionary algorithm was proposed for the MCGRP. It applies a variable neighborhood tabu search with five neighborhood operators to improve solution quality, and designs a route-based crossover operator to inherit high-quality solutions and improve search efficiency. Experimental results on 23 well-known benchmark instances show that the proposed algorithm is effective for solving the MCGRP.
Approximation Algorithm for Weighted Mixed Domination Problem
ZHANG Jia-nan and XIAO Ming-yu
Computer Science. 2018, 45 (4): 83-88.  doi:10.11896/j.issn.1002-137X.2018.04.012
A mixed domination set of a graph G=(V,E) is a set D of vertices and edges of G such that every edge or vertex of G not in D is adjacent or incident to at least one vertex or edge in D. The mixed domination problem is to find a mixed domination set of minimum cardinality. It combines the vertex domination problem and the edge domination problem; it is NP-complete and has many applications in real life. The weighted mixed domination problem is a generalization in which a weight wv is assigned to each vertex and a weight we to each edge, and the goal is to find a mixed domination set of minimum total weight. Although the mixed domination problem has a 2-approximation algorithm, few approximation results are known for the weighted version. This paper designed a 3-approximation algorithm for the weighted mixed domination problem under the constraint wv≤we.
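In symbols, under the reading that all vertices share weight $w_v$ and all edges share weight $w_e$ (a reconstruction of the statement above, not a formula quoted from the paper), the problem is

$$
\min_{D \subseteq V \cup E} \; w_v\,|D \cap V| + w_e\,|D \cap E|
\quad \text{s.t.} \quad \forall x \in (V \cup E) \setminus D \;\; \exists y \in D : x \text{ is adjacent or incident to } y,
$$

with the 3-approximation guarantee proved for the case $w_v \le w_e$.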
Robustness Optimization of Sequence Decision in Urban Road Construction
WU Jian-hui, HUANG Zhong-xiang, LI Wu, WU Jian-hui, PENG Xin and ZHANG Sheng
Computer Science. 2018, 45 (4): 89-93.  doi:10.11896/j.issn.1002-137X.2018.04.013
To improve the robustness of sequence decisions in urban road construction, a bi-level programming model was proposed for optimizing the urban road construction sequence. The model assumes that travel demand fluctuates within a certain range; the upper level seeks the comprehensive minimum of system total travel time and the sensitivity of system total travel time under a limited-funds constraint, and the lower level is a stochastic user equilibrium assignment model. The formula for the sensitivity of system total travel time with respect to travel demand was derived, and a solution algorithm for the model was presented. Finally, taking a test road network as an example, three decision optimization models were analyzed: one based on system total travel time, one based on sensitivity, and one based on a comprehensive objective combining the two. The results show that each of the three models can find the construction sequence that is optimal for its own objective, but under demand uncertainty the results based on sensitivity and on the comprehensive objective are more robust than the result based on system total travel time alone.
Fast and Efficient Scheduling Algorithms for Mobile Cloud Offloading
SHI Wen-jun, WU Ji-gang and LUO Yu-chun
Computer Science. 2018, 45 (4): 94-99.  doi:10.11896/j.issn.1002-137X.2018.04.014
Running computation-intensive applications on mobile devices is constrained by the devices' limited battery capacity and energy consumption. Cloud offloading is a main solution for supporting computationally demanding applications on such resource-constrained devices. This paper proposed a fast and efficient heuristic approach for the scheduling and offloading problem of application task graphs in wireless networks. The heuristic initially moves all offloadable tasks to the cloud, and then iteratively moves the task with the highest benefit value back to the mobile device; the benefit values are updated in each iteration to account for task concurrency. In addition, this paper constructed a tabu search approach to search for a globally optimized solution, presenting and implementing the encoding method, tabu list, neighborhood solutions and stopping criterion, with the initial solution generated by the proposed heuristic. Compared with three baselines (non-offloading, full offloading and random offloading), experimental results show that the proposed heuristic runs very fast and generates efficient solutions. For task graphs with width 10 and depth 8, the energy consumption of non-offloading, full offloading and random offloading is 5461, 7 and 2271 respectively, while that of the proposed heuristic solution is 2111, which is further reduced to 1942 by the customized tabu search. The results confirm that the proposed heuristic can generate high-quality approximate solutions for the scheduling and offloading problem in mobile computing.
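A minimal sketch of the heuristic's structure as described above (the benefit function and the task representation are placeholders, not the paper's definitions):

```python
def heuristic_offload(tasks, benefit):
    """tasks: ids of offloadable tasks; benefit(t, placement) estimates
    the gain of moving task t back to the mobile device, given the
    current placement dict (True = runs on the cloud)."""
    placement = {t: True for t in tasks}       # start: offload everything
    while True:
        # recompute benefits each round to account for task concurrency
        gains = {t: benefit(t, placement)
                 for t, on_cloud in placement.items() if on_cloud}
        if not gains:
            break
        t_best, g = max(gains.items(), key=lambda kv: kv[1])
        if g <= 0:                             # no profitable move remains
            break
        placement[t_best] = False              # move back to the device
    return placement

# toy usage: a benefit that prefers moving even-numbered tasks back
print(heuristic_offload(range(4), lambda t, p: 1 if t % 2 == 0 else -1))
```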
L1-norm Distance Based Least Squares Twin Support Vector Machine
ZHOU Yan-ping and YE Qiao-lin
Computer Science. 2018, 45 (4): 100-105.  doi:10.11896/j.issn.1002-137X.2018.04.015
Recently, LSTSVM was proposed as an efficient classification algorithm. However, it computes squared L2-norm distances from planes to points, so it is easily affected by outliers or noisy data. To avoid this problem, this paper presented an efficient L1-norm-distance-based robust LSTSVM method, termed LSTSVML1D. LSTSVML1D computes L1-norm distances from planes to points and is not sensitive to outliers and noise. Besides, this paper designed an efficient iterative algorithm to solve the resulting objective and proved its convergence. Experiments on artificial and UCI datasets indicate the effectiveness of the proposed LSTSVML1D.
Recognition Method for Corn Borer Based on Template Matching in Multiple Growth Periods
LIU Bo-yi, TANG Xiang-yan and CHENG Jie-ren
Computer Science. 2018, 45 (4): 106-111.  doi:10.11896/j.issn.1002-137X.2018.04.016
The corn borer is one of the main pests of corn. To address the high labor intensity, inaccuracy and poor timeliness of manual recognition, a novel identification method for the Asian corn borer under natural scenes was proposed in this paper, based on histogram back-projection and multi-template contour matching. Firstly, the method applies mathematical morphology preprocessing to the acquired image. Secondly, a total probability image is obtained using histogram back-projection with multiple template images; image contours are then extracted quickly and accurately using constrained Otsu thresholding and preliminarily filtered according to the perimeter and area characteristics of the corn borer. Finally, contours matching the characteristics of the Asian corn borer are selected by comparing Hu moment features between multiple reference contours and the obtained target contours, and the identification results are marked with triangles. Experimental results and theoretical analysis show that the proposed method achieves high timeliness and high recognition accuracy in complex natural scenes.
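A compressed sketch of this pipeline in OpenCV (file names, histogram bins, and the area, perimeter and Hu-moment thresholds are placeholder assumptions, not values from the paper):

```python
import cv2
import numpy as np

img = cv2.imread("field.jpg")
img = cv2.morphologyEx(img, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

prob = np.zeros(hsv.shape[:2], np.float32)
for path in ["borer_t1.jpg", "borer_t2.jpg"]:          # multiple templates
    tmpl = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([tmpl], [0, 1], None, [30, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    # histogram back-projection: per-pixel probability of borer colors
    prob += cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], 1)

prob8 = np.clip(prob, 0, 255).astype(np.uint8)
_, mask = cv2.threshold(prob8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

ref_mask = cv2.imread("borer_ref_mask.png", cv2.IMREAD_GRAYSCALE)
refs, _ = cv2.findContours(ref_mask, cv2.RETR_EXTERNAL,
                           cv2.CHAIN_APPROX_SIMPLE)

# filter by area/perimeter, then match Hu moments against references
candidates = [c for c in contours
              if cv2.contourArea(c) > 100 and cv2.arcLength(c, True) > 50]
hits = [c for c in candidates
        if min(cv2.matchShapes(c, r, cv2.CONTOURS_MATCH_I1, 0)
               for r in refs) < 0.2]
print(f"{len(hits)} candidate borer contours")
```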
Energy-efficient Intra-domain Routing Algorithm Based on Directed Acyclic Graph
GENG Hai-jun, SHI Xin-gang, WANG Zhi-liang, YIN Xia and YIN Shao-ping
Computer Science. 2018, 45 (4): 112-116.  doi:10.11896/j.issn.1002-137X.2018.04.017
With the rapid development of the Internet, many challenges have arisen, among which network energy consumption is particularly prominent. Academia has accordingly put forward a large number of schemes to solve the problem. However, these methods all need to consider real-time traffic data in the network and have very high computational complexity, which is inconvenient for actual deployment. Therefore, this paper proposed an energy-efficient intra-domain routing algorithm based on directed acyclic graphs (EEBDAG), which relies merely on the network topology, without traffic awareness. The proposed scheme builds a directed acyclic graph for each node in the network in order to avoid routing loops and to alleviate the network performance degradation caused by switching off links. Experimental results show that EEBDAG achieves a good energy-saving ratio together with low link utilization, providing ISPs with an efficient solution to the Internet energy-saving problem.
Resilience Analysis Model of Networked Command Information System Based on Node Repairability
CUI Qiong, LI Jian-hua, WANG Hong and NAN Ming-li
Computer Science. 2018, 45 (4): 117-121.  doi:10.11896/j.issn.1002-137X.2018.04.018
A resilience analysis model of networked command information systems based on node repairability was proposed. Firstly, the meaning of networked command information system resilience was defined, a network model was constructed, and the cascading failure and recovery processes were described. Then, on the basis of the load distribution model and the node repairability model, a resilience measurement model was constructed according to the resilience process. Lastly, resilience under different model parameters was analyzed. The analytic results show that the recovery probability and the mean repair time of nodes affect resilience distinctly, which may provide guidance for improving the resilience of networked command information systems.
Path Optimization Scheme for Restraining Degree of Disorder in CMT
WANG Zhen-chao, HOU Huan-huan and LIAN Rui
Computer Science. 2018, 45 (4): 122-125.  doi:10.11896/j.issn.1002-137X.2018.04.019
To reduce the degree of disorder at the receiving side in concurrent multipath transfer (CMT), a new path optimization scheme based on the MPTCP protocol was proposed in this paper. In this scheme, a geometric path evaluation model is established based on path delay, packet loss rate and bandwidth in a three-dimensional Cartesian coordinate system. To select, from the geometric path evaluation model, a set of active paths with large bandwidth, small packet loss rate and small delay difference according to the required number of paths, a dichotomy method is combined with an improved density-based clustering analysis method. Simulation results demonstrate that the proposed scheme can reduce the disorder length and the number of retransmitted packets, and can improve throughput and packet delivery rate.
Dual-cluster-head Routing Protocol Based on Vehicle Density in VANETs
YANG Yu-qi, ZHANG Guo-an and JIN Xi-long
Computer Science. 2018, 45 (4): 126-130.  doi:10.11896/j.issn.1002-137X.2018.04.020
To reduce the sensitivity to single-vehicle node movements in the forwarding area of urban environments, and to reduce the average hop count and transmission delay from source node to destination node, a dual-cluster-head routing protocol based on vehicle density in vehicular ad-hoc networks was proposed. The protocol sets the cluster size according to vehicle density; one cluster head is selected in each of the left and right lanes of the same road section, the primary cluster head for forwarding data packets is chosen by maximizing lifetime, and the other cluster head serves as a backup. Simulation results show that, compared with CBDRP and EG-RAODV, the proposed protocol achieves longer lifetime, smaller average hop count and average delay, and better communication performance.
Optimization of Container Deployment Strategy Based on Stable Matching
SHI Chao, XIE Zai-peng, LIU Han and LV Xin
Computer Science. 2018, 45 (4): 131-136.  doi:10.11896/j.issn.1002-137X.2018.04.021
With the development of Docker, operating-system-level virtualization containers are on the rise, and container-as-a-service (CaaS) is becoming more and more popular. As container technology develops, containers will become the main deployment model in cloud environments, but integrated deployment technology for containers has not been widely studied. In a cloud environment, deploying a large number of containers onto suitable virtual machines so as to reduce data center energy consumption is an urgent problem. Therefore, several similarity calculation methods from machine learning are used as the preference rules of a stable matching algorithm, and virtual machines that have already been allocated containers are added to the preference list, which upgrades the one-to-one stable marriage matching algorithm to a many-to-one stable matching and solves the initial deployment problem of assigning containers to virtual machines. The experimental results show efficiency improvements of about 12.8%, 34.6% and 30.87% compared with the FirstFit, MostFull and Random methods respectively, and the energy-saving performance is best when Euclidean distance is used as the preference rule of the stable matching algorithm.
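A minimal sketch of many-to-one deferred-acceptance matching as described above (the similarity-based preference function and the VM capacities are placeholders, not the paper's definitions):

```python
def many_to_one_match(containers, vms, prefer, capacity):
    """Containers propose to VMs in order of a similarity score
    prefer(c, v); each VM keeps its best proposers up to capacity[v]."""
    free = list(containers)
    tried = {c: 0 for c in containers}
    ranking = {c: sorted(vms, key=lambda v: -prefer(c, v))
               for c in containers}
    hosted = {v: [] for v in vms}
    while free:
        c = free.pop()
        if tried[c] >= len(vms):
            continue                          # c remains unplaced
        v = ranking[c][tried[c]]
        tried[c] += 1
        hosted[v].append(c)
        hosted[v].sort(key=lambda x: -prefer(x, v))
        if len(hosted[v]) > capacity[v]:      # evict the worst proposer
            free.append(hosted[v].pop())
    return hosted

# toy usage with a stand-in similarity score
sim = lambda c, v: (len(c) * 3 + len(v)) % 5
print(many_to_one_match(["c1", "c2", "c3"], ["vm1", "vm2"],
                        sim, {"vm1": 2, "vm2": 1}))
```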
Fog Computing Task Scheduling Strategy Based on Improved Genetic Algorithm
HAN Kui-kui, XIE Zai-peng and LV Xin
Computer Science. 2018, 45 (4): 137-142.  doi:10.11896/j.issn.1002-137X.2018.04.022
Task scheduling and assignment has always been a key issue in the development of cloud computing. However, with the explosive growth of Internet-connected devices, cloud computing cannot meet requirements such as health monitoring and emergency response, which demand low latency; hence fog computing has emerged. Fog computing extends cloud services to the edge of the network. Under the fog computing architecture, task scheduling and assignment is still a relatively new research topic. This paper introduced an improved genetic algorithm (IGA) that incorporates a fitness judgment into the parental mutation operation, overcoming the blindness of the simple genetic algorithm (SGA) in mutation. The response time restriction in the service level objective (SLO) is considered when the IGA is used to schedule tasks (FOG-SLO-IGA). The experimental results show that, when scheduling user tasks under the fog computing architecture, FOG-SLO-IGA outperforms IGA-based scheduling under a cloud computing architecture in latency, SLO violation rate and service provider cost. Furthermore, under the fog computing architecture, the IGA is superior to the traditional SGA and the round-robin scheduling algorithm (RRSA) in task execution.
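A minimal sketch of the "fitness judgment in parental mutation" idea (the task-to-node encoding, mutation rate and retry count are assumptions for illustration):

```python
import random

NUM_NODES = 8          # number of fog nodes (placeholder)

def mutate_with_fitness_check(parent, fitness, rate=0.1, tries=5):
    """Keep a mutated child only if it is no worse than its parent
    (lower fitness = better), instead of mutating blindly as in SGA."""
    for _ in range(tries):
        child = parent[:]
        for i in range(len(child)):
            if random.random() < rate:
                child[i] = random.randrange(NUM_NODES)  # reassign task i
        if fitness(child) <= fitness(parent):
            return child
    return parent       # no acceptable mutation found: keep the parent

# toy usage: minimize the load of the busiest node
makespan = lambda ind: max(ind.count(n) for n in range(NUM_NODES))
print(mutate_with_fitness_check([0] * 12, makespan))
```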
Routing Scheme Based on Network Slicing and ILP Model in SDN
PANG Bo, JIN Qian-kun, HENIGULI·Wu Mai Er and QI Xing-bin
Computer Science. 2018, 45 (4): 143-147.  doi:10.11896/j.issn.1002-137X.2018.04.023
To address the routing optimization problem in the data layer of software defined networking (SDN), a routing scheme based on network slicing and integer linear programming (ILP) multi-constrained optimization was proposed. Firstly, the Kruskal algorithm is used to slice the link resources in the data layer according to the link requirements of multi-tenant services, forming sub-networks that are isolated as far as possible. Then, an ILP routing optimization model is constructed, considering the link constraints and the QoS constraints of tenant services, to minimize transmission delay and obtain the optimal routing scheme. Simulation results show that the proposed routing scheme yields fewer shared links and can effectively reduce link congestion and transmission delay.
Remote Attestation Mechanism Based on Locality Principle
XIA Qing-xun and ZHUANG Yi
Computer Science. 2018, 45 (4): 148-151.  doi:10.11896/j.issn.1002-137X.2018.04.024
To improve the efficiency of remote configuration attestation, the data structure used to store the hash values of program module integrity was improved by combining the locality principle of programs with the storage structure of the Merkle hash tree, and a remote attestation mechanism based on the locality principle was proposed. Experiments show that the new mechanism can improve the efficiency of remote configuration attestation by reducing the cost of constructing stored measurement logs and shortening the length of authentication paths.
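For context, a minimal Merkle hash tree over program module measurements, including the authentication path whose length the mechanism aims to shorten (the padding convention and module names are assumptions of this sketch):

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def build_merkle(leaves):
    """Return the tree as a list of levels, leaves first; an odd level
    is padded by duplicating its last node (one common convention)."""
    levels = [[h(m) for m in leaves]]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2:
            cur = cur + [cur[-1]]
        levels.append([h(cur[i] + cur[i + 1])
                       for i in range(0, len(cur), 2)])
    return levels

def auth_path(levels, idx):
    """Sibling hashes needed to verify leaf idx against the root."""
    path = []
    for level in levels[:-1]:
        sib = idx ^ 1
        path.append(level[sib] if sib < len(level) else level[idx])
        idx //= 2
    return path

modules = [b"module_a", b"module_b", b"module_c", b"module_d"]
tree = build_merkle(modules)
print(tree[-1][0].hex()[:16], len(auth_path(tree, 2)))  # root prefix, path length
```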
Distinguishing Attack of MORUS-1280-128
ZHENG Xiu-lin, SONG Hai-yan and FU Yi-peng
Computer Science. 2018, 45 (4): 152-156.  doi:10.11896/j.issn.1002-137X.2018.04.025
MORUS is an authenticated cipher submitted to the CAESAR competition that has been selected for the third-round security evaluation stage. Studying distinguishing attacks on MORUS is significant for its security evaluation. This paper studied a distinguishing attack on MORUS-1280-128 in a nonce-reuse scenario. Using this method, the majority of ciphertexts can be distinguished, and a collision in the internal state can be found for a tag forgery attack. The results are of great significance for the security analysis of MORUS.
Intranet Defense Algorithm Based on Pseudo Boosting Decision Tree
LI Bai-shen, LI Ling-zhi, SUN Yong and ZHU Yan-qin
Computer Science. 2018, 45 (4): 157-162.  doi:10.11896/j.issn.1002-137X.2018.04.026
Combining the idea of the TF-IDF algorithm, the eigen frequency (frequency of characteristics), the forest frequency, and the pseudo boosting decision tree (PBDT) were put forward, solving the problem in gradient boosting decision trees (GBDT) that wrongly predicted data become marginalized as the number of iterations increases. In PBDT, each decision tree is built on a dataset produced by bootstrapping the original dataset, rather than resampling the dataset at each iteration. An intranet defense experiment was then conducted on a distributed cluster. The experimental results show that, on a training set of a certain scale, PBDT achieves better prediction accuracy.
PDiOS: Private API Call Detection in iOS Applications
WU Shu, ZHOU An-min and ZUO Zheng
Computer Science. 2018, 45 (4): 163-168.  doi:10.11896/j.issn.1002-137X.2018.04.027
Apple reviews every application in the App Store, including checks for private application programming interface (API) calls, but some malicious applications still escape the review. Aiming at private API calls in iOS applications, a detection technique combining dynamic and static analysis was proposed. Most API call sites are resolved by static analysis with backward slicing and constant propagation, and the remaining APIs are handled by dynamic iterative analysis based on forced execution. The static analysis includes a comprehensive analysis of the binary file and analysis of implicit calls in resource files; the dynamic analysis mainly relies on a binary dynamic analysis framework for iterative analysis. Finally, the use of private APIs is determined by comparison against the APIs in public header files. Testing 1012 App Store applications revealed 82 applications making 128 distinct private API calls, and 26 of the 32 applications signed with enterprise certificates were confirmed to use private API calls.
Study on Data Quality Based on Constraint in Computer Forensics
LIU Qin
Computer Science. 2018, 45 (4): 169-172.  doi:10.11896/j.issn.1002-137X.2018.04.028
To find clues and improve data quality, this paper proposed a constraint-based data repairing algorithm for the field of computer forensics. First, using equivalence classes, the algorithm initializes the data according to different constraints. Second, it repairs the problem data found in the initialization phase, deriving different values according to the constraint type. Third, the repaired collections are regenerated according to the functional dependency set and other constraint sets; if problem data still exist, the algorithm continues repairing until none remain. Experiments demonstrate the validity and efficiency of the proposed method.
Passive Image-splicing Detection Based on Multi-residual Markov Model
LUO Xiao-yang, HUO Hong-tao, WANG Meng-si and CHEN Ya-fei
Computer Science. 2018, 45 (4): 173-177.  doi:10.11896/j.issn.1002-137X.2018.04.029
Aiming at the problem that traditional Markov features compute only a single difference matrix, which is not robust for splicing detection, this paper presented a color multi-residual Markov feature for splicing detection. The method introduces residual models from the rich models for steganalysis (SRM) to improve traditional Markov features, extracts 10 different types of Markov features from each of the three color channels, and trains 30 distinct SVM classifiers whose outputs are combined through the proposed decision-making algorithm. The method achieves an accuracy of 95.40% on the Columbia image splicing detection evaluation dataset.
Chosen Plaintext Attack on Chaotic Image Encryption Algorithm Based on Perceptron Model
ZHU Shu-qin, WANG Wen-hong and LI Jun-qing
Computer Science. 2018, 45 (4): 178-181.  doi:10.11896/j.issn.1002-137X.2018.04.030
This paper analyzed the security of a chaotic image encryption algorithm based on a perceptron model and found that the essence of the algorithm is to change the bit values of the plaintext image according to an equivalent key stream to obtain the cipher image. However, the equivalent key stream is unrelated to the plaintext image or the corresponding ciphertext image. By applying chosen-plaintext attacks, this paper showed that the equivalent key can be recovered and further exploited to decrypt the target plain image. In addition, two other security defects were reported. Finally, the original algorithm was improved to overcome these shortcomings. Both mathematical analysis and experimental results verify the feasibility of the proposed chosen-plaintext attack strategies and the effectiveness of the improved algorithm.
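The core of such an attack is that a plaintext-independent keystream can be read out with a single chosen plaintext; a minimal sketch (the XOR form and the `oracle` stand-in are simplifying assumptions, not the analyzed cipher itself):

```python
import numpy as np

def recover_keystream(encrypt_oracle, shape):
    """Encrypting an all-zero image returns the keystream itself,
    since E(0) = 0 XOR ks = ks."""
    return encrypt_oracle(np.zeros(shape, dtype=np.uint8))

def decrypt_target(cipher_img, keystream):
    return cipher_img ^ keystream        # P = C XOR ks

# toy demonstration with a stand-in oracle
rng = np.random.default_rng(1)
ks = rng.integers(0, 256, (4, 4), dtype=np.uint8)
oracle = lambda p: p ^ ks
plain = np.arange(16, dtype=np.uint8).reshape(4, 4)
assert (decrypt_target(oracle(plain),
                       recover_keystream(oracle, (4, 4))) == plain).all()
print("target image recovered")
```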
Spatial Keyword Range Query with User Preferences Constraint
GUO Shuai, LIU Liang and QIN Xiao-lin
Computer Science. 2018, 45 (4): 182-189.  doi:10.11896/j.issn.1002-137X.2018.04.031
With the wide application of location-based personalized services, spatial keyword range queries with user preference constraints have become a research hotspot. Existing indexes for spatial keyword range queries do not take user preferences into account, resulting in poor pruning performance and low query efficiency. To solve these problems, a hybrid index called BRPQ (Boolean Range with Preferences Query index) was proposed to support collaborative pruning on user preferences, spatial location and keywords. This paper also proposed an efficient query processing algorithm for spatial keyword range queries with user preference constraints. Experimental results show that BRPQ outperforms existing indexes in terms of construction time and query processing efficiency.
Study on Automatic Method for AUTOSAR Runnable Entity-task Mapping
RAN Zheng, LUO Lei, YAN Hua and LI Yun
Computer Science. 2018, 45 (4): 190-195.  doi:10.11896/j.issn.1002-137X.2018.04.032
The next-generation automotive electronics standard AUTOSAR defines an application design process comprising system-level design and ECU-level design. In system-level design, software components are the functional units of an application, and each software component comprises a set of runnable entities. The main task of ECU-level design is organizing the code segments of runnable entities into embedded operating system tasks. When transforming the component set assigned to one ECU into a real-time task set, experienced embedded development engineers are needed to configure the runnable entity-task mapping so as to ensure the real-time performance of the system. Because the requirements of runnable entity-task mapping configuration are large and complex, this paper proposed an automatic runnable entity-task mapping method. By considering the trigger relationships between runnable entities, period requirements, data sharing and other factors, the method has important practical significance for improving the efficiency of automotive software development. Finally, the proposed method was applied to an AUTOSAR automotive cruise control system instance. The experimental results show that the proposed method performs well in terms of jitter time, blocking time, scheduling frequency and data traffic.
Formal Description of Requirement of Slats and Flaps Control System for DO-178C Case
ZHAN Yun-jiao, WEI Ou and HU Jun
Computer Science. 2018, 45 (4): 196-202.  doi:10.11896/j.issn.1002-137X.2018.04.033
DO-178C is an improvement on and supplement to the airborne software airworthiness certification standard DO-178B, and it provides guidance for software quality control of civil aircraft airborne systems and equipment. SCR (Software Cost Reduction), a formal method based on the four-variable model, can be applied to the description of complex, large-scale embedded systems. Based on DO-178C, this paper used the SCR method to formalize the requirement specification of the slat and flap control system of an original aircraft system, and carried out a detailed case study of the flap motor speed control module. Through the analysis, it can be determined whether the relevant DO-178C verification objectives are met. Through the analysis and validation, some application techniques for the SCR method were proposed. This work provides a basis for applying the SCR method to airborne software systems.
Study on Construction of EFSM Model for Web Application Based on Session
GUO Jun-xia, GUO Ren-fei, XU Nan-shan and ZHAO Rui-lian
Computer Science. 2018, 45 (4): 203-207.  doi:10.11896/j.issn.1002-137X.2018.04.034
In the research field of Web application modeling, the main research objects are applications that do not include Ajax. The small number of models that consider the specifics of Ajax use traditional FSMs and cannot describe parameter transfer after a message is triggered on the client side. One existing method introduces UML layered models into FSMs for Ajax, but it requires manual intervention and is not convenient for automatic test case generation. To address these problems, this paper presented an approach that models Web applications based on the EFSM, an important software description model. The approach first analyzes user behaviors from user-session data recorded on the server side, while client-side events are also recorded. The user behaviors are then matched with the client-side events to generate complete user session data, from which the EFSM model of the Web application is built. The experiments show that the EFSM model built with the proposed approach can effectively represent the states and state transitions of Web applications, and is also convenient for automatic test case generation.
Hot Topic Discovery Research of Stack Overflow Programming Website Based on CBOW-LDA Topic Model
ZHANG Jing and ZHU Guo-bin
Computer Science. 2018, 45 (4): 208-214.  doi:10.11896/j.issn.1002-137X.2018.04.035
Stack Overflow is a popular programming question-and-answer (Q&A) website; by mining the semantics of its question texts, we can discover the hot programming knowledge that developers focus on. Owing to the high dimensionality that hinders processing efficiency and to topic distributions that make topics unclear, it is difficult to detect topics from a large number of short texts in social networks. To overcome these problems, this paper proposed a new LDA (Latent Dirichlet Allocation) based topic detection method called CBOW-LDA. Training word vectors on the target corpus and clustering similar words by vector similarity before topic detection reduces the dimensionality of the LDA output and makes topics clearer. Analysis of topic perplexity on datasets of different sizes drawn from Stack Overflow posts from 2010-2015 shows that topics detected by the proposed method have lower perplexity than the term-frequency-weighted baseline TF-LDA: with the same number of topic words from the corpus, perplexity is reduced by about 4.87%. When applying CBOW-LDA to hot topic discovery on Stack Overflow, TF-LDA was again used for comparison; a manually annotated gold-standard evaluation set was established, and recall, precision and F1 were used to compare the results. Every measure of CBOW-LDA is better than that of TF-LDA, which proves that its hot spot mining effect is good. Through the experiment, the hot topics and hot words of nearly six years were effectively identified: "Java" is the hottest topic on the website, and "JavaScript" and "C" are the words most frequently mentioned in users' questions.
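A minimal sketch of the CBOW-LDA pipeline as described (the toy corpus, cluster count and all hyperparameters are placeholder assumptions; gensim >= 4 API):

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel, Word2Vec
from sklearn.cluster import KMeans

docs = [["java", "jvm", "exception"], ["css", "html", "javascript"],
        ["java", "thread", "lock"], ["html", "dom", "javascript"]]

w2v = Word2Vec(docs, vector_size=50, min_count=1, sg=0)   # sg=0 -> CBOW
words = list(w2v.wv.index_to_key)
labels = KMeans(n_clusters=4, n_init=10).fit_predict(
    [w2v.wv[w] for w in words])
cluster_of = dict(zip(words, labels))

# replace each word by its cluster id, shrinking the LDA vocabulary
merged = [[f"c{cluster_of[w]}" for w in d] for d in docs]
dic = Dictionary(merged)
lda = LdaModel([dic.doc2bow(d) for d in merged], num_topics=2, id2word=dic)
print(lda.print_topics())
```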
Study on Matrix Factorization Recommendation Algorithm Based on User Clustering and Mobile Context
WEN Jun-hao, SUN Guang-hui and LI Shun
Computer Science. 2018, 45 (4): 215-219.  doi:10.11896/j.issn.1002-137X.2018.04.036
With the rapid development of mobile Internet technology, more and more individuals use mobile devices to acquire information and services, which makes the information overload problem increasingly serious. Aiming at the problems of data sparsity, insufficient contextual information and neglected context similarity measurement, this paper proposed a matrix factorization recommendation algorithm based on user clustering and mobile context (UCMC-MF) to predict user ratings and make recommendations. The method first clusters similar users via k-means, then finds similar contexts within each cluster and searches for users similar to the target user in both preference and context. Experimental results on real datasets demonstrate that the proposed algorithm can effectively improve prediction accuracy.
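A minimal sketch of the cluster-then-factorize idea (clustering on raw rating rows stands in for the paper's context-aware clustering; the factorization hyperparameters are assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_mf(R, n_clusters=2, k=4, steps=200, lr=0.01, reg=0.05):
    """Cluster users, then factorize each cluster's rating block with
    SGD over the observed (nonzero) entries; returns filled estimates."""
    users = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(R)
    pred = np.zeros_like(R, dtype=float)
    rng = np.random.default_rng(0)
    for c in range(n_clusters):
        rows = np.where(users == c)[0]
        Rc = R[rows]
        P = rng.normal(0, 0.1, (len(rows), k))
        Q = rng.normal(0, 0.1, (R.shape[1], k))
        for _ in range(steps):
            for u, i in np.argwhere(Rc > 0):
                e = Rc[u, i] - P[u] @ Q[i]
                P[u] += lr * (e * Q[i] - reg * P[u])
                Q[i] += lr * (e * P[u] - reg * Q[i])
        pred[rows] = P @ Q.T
    return pred

R = np.array([[5, 3, 0, 1], [4, 0, 0, 1], [1, 1, 0, 5],
              [1, 0, 0, 4], [0, 1, 5, 4]], dtype=float)
print(np.round(cluster_mf(R), 2))
```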
Mobile Interface Pattern Clustering Algorithm Based on Improved Particle Swarm Optimization
JIA Wei, HUA Qing-yi, ZHANG Min-jun, CHEN Rui, JI Xiang and WANG Bo
Computer Science. 2018, 45 (4): 220-226.  doi:10.11896/j.issn.1002-137X.2018.04.037
Clustering is a very efficient method for analyzing information. Focusing on the poor clustering results of existing fuzzy C-means clustering algorithms based on particle swarm optimization, a fuzzy C-means clustering algorithm based on an improved particle swarm optimization was proposed and applied to mobile interface pattern clustering. Firstly, a reasonable intuitionistic fuzzy entropy is constructed using the geometric interpretation and constraints of intuitionistic fuzzy entropy. Secondly, in the improved particle swarm optimization, the intuitionistic fuzzy entropy is used to measure the state of the particle swarm, and chaotic opposition-based learning is used to improve the global search ability. Finally, the proposed algorithm employs a Gaussian kernel function to enhance nonlinear processing capability and is then applied to mobile interface pattern clustering. Experimental results show that the proposed algorithm outperforms existing clustering algorithms on mobile interface pattern clustering.
Satellite Imagery Cloud Fraction Based on Deep Extreme Learning Machine
WENG Li-guo, KONG Wei-bin, XIA Min and CHOU Xue-fei
Computer Science. 2018, 45 (4): 227-232.  doi:10.11896/j.issn.1002-137X.2018.04.038
Cloud fraction is key to the application of satellite imagery, but existing methods cannot make full use of the characteristics of satellite imagery, resulting in ineffective cloud detection and cloud fraction estimation. In this paper, a multi-layer neural network was used to extract features of satellite cloud images, and the best structure of the deep learning network was found through a large number of experiments. A deep extreme learning machine was used to detect and classify clouds in satellite cloud images, and the spatial correlation method was then used to calculate the total cloud fraction. The results show that the deep extreme learning machine, built on the traditional extreme learning machine, can extract cloud image features effectively and can distinguish the boundary between thick and thin clouds well. Its cloud classification and cloud fraction accuracy are better than those of the traditional threshold method, the extreme learning machine and the convolutional neural network.
Study on Flexible Job-shop Scheduling Problem Based on Improved Discrete Particle Swarm Optimization Algorithm
DING Shu-yang, LI Bing and SHI Hong-bo
Computer Science. 2018, 45 (4): 233-239.  doi:10.11896/j.issn.1002-137X.2018.04.039
The flexible job-shop scheduling problem (FJSP) is an extension of the classical job-shop scheduling problem and is much closer to practical production. Aiming at minimizing the maximum completion time (makespan), this paper proposed an improved discrete particle swarm optimization algorithm. The traditional particle swarm optimization algorithm is suited to optimizing continuous models, while FJSP, a combinatorial optimization problem of high complexity, is a typically discrete model. The proposed algorithm uses a machine load balancing mechanism to initialize the population and introduces three operators into the procedure for updating the individuals' status: a mutation based on operation sequencing or machine assignment, a precedence-preserving order-based crossover between the current particle and the individual optimum, and a random-point preservation crossover between the current particle and the global optimum. A particle is completely updated by applying the three operators successively, which makes the population converge quickly. Experimental results on benchmark instances show that the proposed algorithm can practically solve the flexible job-shop scheduling problem, search near-optimal solutions quickly, and outperform other similar algorithms in search efficiency and convergence speed.
Greedy Randomized Adaptive Search Procedure Algorithm Combining Set Partitioning for Heterogeneous School Bus Routing Problems
HOU Yan-e, KONG Yun-feng and DANG Lan-xue
Computer Science. 2018, 45 (4): 240-246.  doi:10.11896/j.issn.1002-137X.2018.04.040
In the practice of school bus route planning, there are a variety of planning applications with different optimization objectives under constraints on school bus types. This paper dealt with a class of heterogeneous school bus routing problems (HSBRP) with different objectives, proposing a greedy randomized adaptive search procedure (GRASP) algorithm combined with a set partitioning (SP) procedure. The routes generated during the execution of GRASP are used to build a set partitioning model, which is then solved with the CPLEX optimization software. To keep the algorithm suitable for different HSBRP variants, the initial solution generation procedure of GRASP is adapted to each problem to obtain a valid solution, and the routes of this initial solution are put into a route pool. In the local search phase, multiple neighborhood operators and a variable neighborhood descent procedure are executed to improve the solution, and the routes of the improved solution and of the best local optimum in each iteration are also added to the route pool. Test results on benchmark datasets show that the SP procedure improves the quality and stability of the algorithm, and that the proposed algorithm can effectively solve two types of HSBRP with different objectives and is competitive with existing HSBRP algorithms.
Collaborative Filtering Recommendation Algorithm Based on Tag Clustering and Item Topic
LI Hao-yang and FU Yun-qing
Computer Science. 2018, 45 (4): 247-251.  doi:10.11896/j.issn.1002-137X.2018.04.041
The traditional item-based collaborative filtering algorithm focuses only on rating data and ignores item characteristics when calculating the similarity between items. Social tagging can reflect item characteristics, but adding social tags directly into the collaborative filtering algorithm introduces semantic ambiguity. To solve these problems, this paper put forward an improved item-based collaborative filtering recommendation algorithm. It clusters social tags to generate tag clusters representing different topics, and calculates the relevance between items and topics according to the items' tagging results to generate an item-topic matrix. The similarity between items is calculated by combining the item-topic matrix with the item-rating matrix, the ratings of target items are predicted through collaborative filtering, and personalized recommendation is realized. Experimental results on the MovieLens dataset show that the proposed algorithm can eliminate the semantic ambiguity and improve recommendation quality.
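A minimal sketch of the similarity combination step (the cosine measure and the blending weight alpha are assumptions, not the paper's exact formulation):

```python
import numpy as np

def combined_item_similarity(item_topics, item_ratings, alpha=0.5):
    """Blend cosine similarity from the item-topic matrix with cosine
    similarity from the item-rating matrix."""
    def cosine(M):
        norm = np.linalg.norm(M, axis=1, keepdims=True)
        norm[norm == 0] = 1.0
        U = M / norm
        return U @ U.T
    return alpha * cosine(item_topics) + (1 - alpha) * cosine(item_ratings)

topics = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9]])       # items x topics
ratings = np.array([[5, 0, 3], [4, 1, 3], [0, 5, 1]], float)  # items x users
print(np.round(combined_item_similarity(topics, ratings), 2))
```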
Distinguishing Sequence Patterns Mining Based on Density and Gap Constraints
WEI Qin-shuang, WU You-xi, LIU Jing-yu and ZHU Huai-zhong
Computer Science. 2018, 45 (4): 252-256.  doi:10.11896/j.issn.1002-137X.2018.04.042
Distinguishing pattern mining is an important branch of sequence pattern mining, and distinguishing patterns with a density constraint can help biologists find the distribution of special factors on biological sequences. This paper proposed an algorithm named MPDG (Mining distinguishing sequence Patterns based on Density and Gap constraints), which employs a Nettree data structure to mine distinguishing patterns satisfying both density and gap constraints. The algorithm is efficient because it calculates the supports of all super-patterns of the current pattern with a single one-way scan of the sequence database. Experimental results on real protein datasets verify the effectiveness of MPDG.
Relationships among Several Attribute Reduction Methods of Decision Formal Context
QIN Ke-yun and LIN Hong
Computer Science. 2018, 45 (4): 257-259.  doi:10.11896/j.issn.1002-137X.2018.04.043
Attribute reduction in formal contexts is an important topic in formal concept analysis, and several kinds of attribute reduction for decision formal contexts have been put forward. This paper studied the relationships among reduction, granular reduction and rule-based reduction. An equivalent characterization of rule-based consistent sets was provided using formal concepts. It is shown that a rule-based consistent set in a strongly consistent formal context is a consistent set, and that a rule-based consistent set in a granularly consistent formal context is a granular consistent set.
Study on Collaborative Optimization of Supply Chain with Uncertain Demand and Capacity Constraint
TONG Ze-ping, LI Tao, LI Li-jie and REN Liang
Computer Science. 2018, 45 (4): 260-265.  doi:10.11896/j.issn.1002-137X.2018.04.044
The uncertain market demand of a supply chain and the capacity constraints of suppliers have a great impact on supply chain optimization. Most of the existing literature does not analyze the behavior selection mechanism and collaborative optimization of supply chains with both uncertain demand and capacity-constrained suppliers. This paper therefore established a multi-supplier, single-retailer supply chain model and used non-cooperative game theory to analyze an efficient way to achieve collaborative optimization. The results are helpful for supply chain stakeholders in making decisions.
Optimization of Networked Air-defense Operational Formation Structure Based on Bilevel Programming
LI Hui, ZHOU Lin and XIN Wen-bo
Computer Science. 2018, 45 (4): 266-272.  doi:10.11896/j.issn.1002-137X.2018.04.045
Abstract PDF(5117KB) ( 970 )   
References | Related Articles | Metrics
A scientific and reasonable operational formation air-defense structure (OFAS) is important for ensuring the safety of a formation and improving the reliability and validity of operational missions.Aiming at the optimization of OFAS,relevant concepts were first defined and the general process of formation air-defense operations was analyzed.Secondly,based on bilevel programming theory,a two-layer optimization model for OFAS was built,taking the farthest distance between defending nodes and the core node as the upper-level objective and the strongest anti-saturation striking capability as the lower-level objective,while comprehensively considering detection angle coverage,fire interception time,second missile interception and so on.Then,a hierarchical particle swarm optimization algorithm was introduced to solve the model,and concrete operation steps were given.Finally,taking the OFAS of surface ships as an example,the optimal air-defense network structure was built and the maximum anti-saturation striking capability was calculated.The rationality and feasibility of the model and method are verified by comparison with typical column and arc formation structures.
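The hierarchical particle swarm scheme can be sketched as two nested swarms, the upper swarm evaluating each candidate by letting the lower swarm respond optimally; the objectives below are placeholders, not the paper's OFAS formulation.

    # Hedged sketch of a hierarchical (nested) PSO for a bilevel model.
    import numpy as np

    def pso(obj, dim, n=30, iters=100, lo=-1.0, hi=1.0):
        """Plain PSO minimizing obj over a box; returns (best_x, best_f)."""
        x = np.random.uniform(lo, hi, (n, dim))
        v = np.zeros((n, dim))
        pbest, pf = x.copy(), np.array([obj(p) for p in x])
        g = pbest[pf.argmin()].copy()
        for _ in range(iters):
            r1, r2 = np.random.rand(n, dim), np.random.rand(n, dim)
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([obj(p) for p in x])
            better = f < pf
            pbest[better], pf[better] = x[better], f[better]
            g = pbest[pf.argmin()].copy()
        return g, pf.min()

    def bilevel(f_upper, f_lower, dim_u, dim_l):
        # The upper swarm scores each candidate u by letting a lower swarm
        # react optimally to it -- the "hierarchical" part of the algorithm.
        def upper_obj(u):
            y, _ = pso(lambda l: f_lower(u, l), dim_l, iters=50)
            return f_upper(u, y)
        return pso(upper_obj, dim_u, iters=50)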
Facial Multi-landmarks Localization Based on Single Convolution Neural Network
ZHU Hong, LI Qian-mu and LI De-qiang
Computer Science. 2018, 45 (4): 273-277.  doi:10.11896/j.issn.1002-137X.2018.04.046
Abstract PDF(5495KB) ( 731 )   
References | Related Articles | Metrics
Facial landmark localization methods using deep learning have achieved prominent results.However,localizing a larger number of facial landmarks remains challenging due to the complex variations in face images caused by pose,expression,illumination,occlusion,etc.Existing deep learning methods for facial landmark localization are based on cascaded networks or the tasks-constrained deep convolutional network (TCDCN),which are complicated and difficult to train.To solve these problems,a new method for facial multi-landmark localization based on a single convolutional neural network was proposed.Unlike cascaded networks,the network consists of three stages,each of which consists of two convolutional layers and a max-pooling layer.This structure can extract more global high-level features,which express the facial landmarks more precisely.Extensive experiments show that the approach outperforms existing methods under complex conditions such as pose,illumination,expression and occlusion.
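A minimal PyTorch sketch of the stated topology is given below: three stages, each with two convolutional layers and one max-pooling layer, followed by a regression head. Channel widths, kernel sizes, the input size and the 68-point output are our assumptions, as the abstract does not specify them.

    import torch
    import torch.nn as nn

    def stage(c_in, c_out):
        # One stage: two convolutional layers then one max-pooling layer.
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )

    class LandmarkNet(nn.Module):
        def __init__(self, n_landmarks=68):
            super().__init__()
            self.features = nn.Sequential(stage(1, 16), stage(16, 32), stage(32, 64))
            # For 96x96 grayscale input, three 2x poolings give a 12x12 map.
            self.head = nn.Linear(64 * 12 * 12, 2 * n_landmarks)

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    # net = LandmarkNet(); xy = net(torch.randn(1, 1, 96, 96))  # shape (1, 136)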
Automatic Detection of Hypernasality Grades Based on Discrete Wavelet Transformation and Cepstrum Analysis
ZHAO Li-bo, LIU Qi, FU Fang-ling and HE Ling
Computer Science. 2018, 45 (4): 278-284.  doi:10.11896/j.issn.1002-137X.2018.04.047
Abstract PDF(1900KB) ( 729 )   
References | Related Articles | Metrics
This paper proposed an algorithm for automatic classification of hypernasality grades in cleft palate speech based on discrete wavelet decomposition coefficients and cepstrum analysis.The features currently in wide use for classifying hypernasality grades include MFCC,Teager energy,Shannon energy and so on,but their classification accuracy is low and their computational cost is high.The speech data tested in this work comprise 1789 Mandarin syllables with the final /a/,spoken by cleft palate patients with four grades of hypernasality.The wavelet decomposition coefficient cepstrum was extracted as the acoustic feature,and a KNN classifier was then applied to identify the four grades of hypernasality.The classification performance was compared with that of five classical acoustic features:MFCC,LPCC,pitch period,formant and short-time energy.Meanwhile,the performance of KNN was compared with an SVM classifier.The experimental results indicate that the recognition accuracy obtained with the wavelet decomposition coefficient cepstrum feature is higher than that obtained with the five classical acoustic features,and the classification accuracy is higher with KNN than with SVM.The recognition accuracy of the wavelet decomposition coefficient cepstrum feature is 91.67% with KNN and 87.60% with SVM,whereas the accuracy of the classical acoustic features is only 21.69%~84.54% with KNN and 30.61%~78.24% with SVM.
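One plausible reading of the "wavelet decomposition coefficient cepstrum" pipeline is sketched below with pywt and scikit-learn; the wavelet family, decomposition level and feature length are assumptions, not values taken from the paper.

    import numpy as np
    import pywt
    from sklearn.neighbors import KNeighborsClassifier

    def dwt_cepstrum(signal, wavelet="db4", level=4, n_coeffs=40):
        # DWT of the syllable, then a real cepstrum of the concatenated
        # coefficients; keep the first n_coeffs values as the feature.
        coeffs = np.concatenate(pywt.wavedec(signal, wavelet, level=level))
        spectrum = np.abs(np.fft.fft(coeffs)) + 1e-10
        cepstrum = np.real(np.fft.ifft(np.log(spectrum)))
        return cepstrum[:n_coeffs]

    # X = np.array([dwt_cepstrum(s) for s in syllables]); y = grades (0..3)
    # knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)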
Robust Face Recognition via Noise Spatial Structure Embedding and High Dimensional Gradient Orientation Embedding
LI Xiao-xin, LI Jing-jing, HE Lin and LIU Zhi-yong
Computer Science. 2018, 45 (4): 285-290.  doi:10.11896/j.issn.1002-137X.2018.04.048
Abstract PDF(7384KB) ( 629 )   
References | Related Articles | Metrics
Nuclear norm based matrix regression(NMR) is robust to the errors caused by facial occlusion and illumination changes.This paper analyzed the underlying mechanism of NMR.Firstly,the nuclear norm measures the energies of the noise-induced error along its principal directions,which usually excludes the influence of common noises.Secondly,the nuclear norm embeds the spatial structure of the noises,which is very important for representing and excluding them.However,embedding the noise spatial structure alone is insufficient to eliminate the influence of the noises completely.As high-dimensional gradient orientation(HGO) has a strong noise cancellation ability,this paper embedded HGO into NMR and proposed a novel regression method called HGO-NMR.Experiments show that HGO-NMR outperforms NMR.The critical significance of HGO-NMR is that the noise spatial structure and the noise cancellation mechanism are equally important for real-world face recognition systems,and using only one of the two mechanisms leads to unstable recognition performance.
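A common form of gradient-orientation embedding, offered here only as a plausible reading of HGO rather than the paper's exact definition, maps each pixel's gradient angle theta to the pair (cos theta, sin theta), doubling the feature dimension:

    import numpy as np

    def hgo_embed(img):
        # Per-pixel gradient angle, embedded on the unit circle so that
        # orientation differences become Euclidean distances.
        gy, gx = np.gradient(img.astype(float))
        theta = np.arctan2(gy, gx)
        return np.concatenate([np.cos(theta).ravel(), np.sin(theta).ravel()])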
Video-based Detection of Human Motion Area in Mine
LI Shan and RAO Wen-bi
Computer Science. 2018, 45 (4): 291-295.  doi:10.11896/j.issn.1002-137X.2018.04.049
Abstract PDF(10131KB) ( 610 )   
References | Related Articles | Metrics
Human motion area detection applied to mine videos can detect the motion of miners and,through further analysis,intelligently detect their abnormal behavior.Feeding the detection results back for real-time alarm and linkage control can markedly reduce the occurrence of mine accidents.This paper proposed a hybrid method,TD-HF(Time Difference and Haar Feature),for extracting human motion areas,which integrates the time difference method with a Haar-feature-based human detection algorithm adapted to the special conditions of mines.Experiments show that this method outperforms a simple AdaBoost-based classifier in detection rate and false recognition rate,while satisfying real-time requirements in detection time.It is applicable to the detection of human motion areas under the special conditions of mine video.
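A sketch of how frame differencing can gate a Haar-cascade detector in OpenCV is given below; the stock cascade file and the thresholds are placeholders, since the paper trains a mine-specific detector.

    import cv2

    cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_fullbody.xml")
    cap = cv2.VideoCapture("mine.avi")  # placeholder path
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Time difference: keep only regions that actually moved.
        _, motion = cv2.threshold(cv2.absdiff(gray, prev), 25, 255, cv2.THRESH_BINARY)
        prev = gray
        # Haar detection, accepted only where the motion mask agrees.
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 3):
            if cv2.countNonZero(motion[y:y + h, x:x + w]) > 0.1 * w * h:
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)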
Anti-occlusion Adaptive-scale Object Tracking Algorithm
QU Zhong and ZHAO Cong-mei
Computer Science. 2018, 45 (4): 296-300.  doi:10.11896/j.issn.1002-137X.2018.04.050
Abstract PDF(6229KB) ( 1004 )   
References | Related Articles | Metrics
Correlation filter based trackers that use different features still have problems in handling scale changes and object occlusion.In this paper,a multi-scale kernelized correlation filter algorithm based on a random fern detector was proposed.The tracking task was decomposed into target scale estimation and translation estimation.At the same time,the CN color feature and the HOG feature were fused at the response level to further improve the overall tracking performance.In addition,an online random fern classifier was trained to re-acquire the target after it is lost.Comparisons with the KCF,DSST,TLD,MIL and CT algorithms show that the proposed method can accurately estimate the target state and effectively deal with occlusion.
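Response-level fusion, as we read it, is a weighted sum of the two correlation response maps, with a low fused peak triggering the fern-based re-detection; the weight and threshold below are illustrative choices.

    import numpy as np

    def fuse_and_locate(resp_hog, resp_cn, w=0.5, conf_thresh=0.2):
        # Weighted sum of the HOG and CN correlation responses.
        resp = w * resp_hog + (1 - w) * resp_cn
        peak = resp.max()
        dy, dx = np.unravel_index(resp.argmax(), resp.shape)
        lost = peak < conf_thresh   # low peak -> hand over to the
        return (dy, dx), lost       # random-fern detector to re-acquire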
Estimation Algorithm of Atmospheric Light Based on Gaussian Distribution
ZHANG Wen-bo and HOU Xiao-rong
Computer Science. 2018, 45 (4): 301-305.  doi:10.11896/j.issn.1002-137X.2018.04.051
Abstract PDF(7969KB) ( 876 )   
References | Related Articles | Metrics
This paper proposed an estimation algorithm for atmospheric light based on the Gaussian distribution,addressing the problem that existing approaches estimate the atmospheric light from few candidate points,which may lead to significant statistical error in the defogging results.The proposed algorithm first uses a brightness threshold to select candidate points,increasing the number of initial sample points.It then uses clustering to merge the obtained clusters,increasing the number of sample points per cluster,filters out unreasonable point clusters with a proportional threshold,regards each remaining cluster as a single light source,and calculates its influence on surrounding pixels with a Gaussian distribution based model.Finally,an atmospheric light map,instead of a constant value,is used to restore the hazy image.Experiments show that the defogging results produced by the proposed algorithm are subjectively more natural than those of other defogging algorithms,and image quality evaluation indicators show that it also objectively improves the defogging effect compared with other algorithms.
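A simplified version of the described pipeline might look as follows; the quantile threshold, the DBSCAN clustering and the Gaussian width sigma are our choices, not the paper's.

    import numpy as np
    from sklearn.cluster import DBSCAN

    def atmospheric_light_map(gray, ratio=0.02, min_frac=0.05, sigma=80.0):
        h, w = gray.shape
        thresh = np.quantile(gray, 1 - ratio)        # brightness threshold
        ys, xs = np.nonzero(gray >= thresh)          # candidate points
        labels = DBSCAN(eps=5, min_samples=5).fit_predict(np.c_[ys, xs])
        A = np.zeros((h, w))
        weight = np.full((h, w), 1e-10)
        yy, xx = np.mgrid[0:h, 0:w]
        for k in set(labels) - {-1}:
            m = labels == k
            if m.sum() < min_frac * len(labels):     # proportional threshold
                continue
            cy, cx, val = ys[m].mean(), xs[m].mean(), gray[ys[m], xs[m]].mean()
            # Each cluster acts as a light source with Gaussian influence.
            g = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
            A += val * g
            weight += g
        return A / weight                            # per-pixel light map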
Support Similarity between Lines Based CoSaMP Algorithm for Image Reconstruction
DU Xiu-li, GU Bin-bin, HU Xing, QIU Shao-ming and CHEN Bo
Computer Science. 2018, 45 (4): 306-311.  doi:10.11896/j.issn.1002-137X.2018.04.052
Abstract PDF(9482KB) ( 685 )   
References | Related Articles | Metrics
The performance of compressive sampling matching pursuit(CoSaMP) is restricted by the choice of its initial support set:choosing it inaccurately not only affects the accuracy of reconstruction but also reduces its speed.To solve this problem,the structure of the image in the sparse domain was introduced into the CoSaMP algorithm,and the concept of support set similarity was presented.A CoSaMP algorithm based on the support similarity between lines was then proposed,exploiting the high similarity between adjoining rows of a digital image.Experimental results show that the proposed algorithm achieves higher reconstruction quality without increasing the time complexity,and that the peak signal-to-noise ratio is improved by 0.6~2.5 dB compared with the traditional CoSaMP algorithm at the same sampling rate.
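The key idea, as we read it, is to warm-start each row's support estimate with the support recovered for the previous, highly similar row; the sketch below wraps a bare textbook CoSaMP accordingly, with Phi as the sensing matrix.

    import numpy as np

    def cosamp(Phi, y, k, init_support=None, iters=30):
        n = Phi.shape[1]
        x = np.zeros(n)
        if init_support is not None:  # warm start from the previous row
            x[init_support] = np.linalg.lstsq(Phi[:, init_support], y, rcond=None)[0]
        for _ in range(iters):
            r = y - Phi @ x
            proxy = Phi.T @ r
            omega = np.argsort(np.abs(proxy))[-2 * k:]    # new candidates
            T = np.union1d(omega, np.nonzero(x)[0])       # merged support
            b = np.zeros(n)
            b[T] = np.linalg.lstsq(Phi[:, T], y, rcond=None)[0]
            x = np.zeros(n)
            keep = np.argsort(np.abs(b))[-k:]             # prune to k terms
            x[keep] = b[keep]
        return x, keep

    # prev = None
    # for each row measurement y_i: x_i, prev = cosamp(Phi, y_i, k, prev)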
Online Single Image Super-resolution Algorithm Based on Group Sparse Representation
LI Jian-hong, WU Ya-rong and LV Ju-jian
Computer Science. 2018, 45 (4): 312-318.  doi:10.11896/j.issn.1002-137X.2018.04.053
Abstract PDF(6092KB) ( 743 )   
References | Related Articles | Metrics
Super-resolution algorithms based on sparse representation select atoms from the dictionary in an approximately random manner to fit a given image patch,yet the selected atoms exhibit strong structural sparsity in practice;ignoring this leads to high computational complexity and considerable error,degrading the quality of the reconstructed images.To overcome this problem,an online single image super-resolution algorithm based on group sparse representation was proposed.The algorithm uses the input low-resolution image to construct a group sparse dictionary by introducing group sparsity theory,and then incorporates the group sparse prior and the geometric duality prior into the cost function of the algorithm,which is solved by a proposed iterative optimization method.Experiments demonstrate that the proposed algorithm is superior to mainstream algorithms both subjectively and objectively.
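Group sparse priors are typically optimized with a proximal step that shrinks each group of coefficients jointly; the group soft-thresholding sketch below illustrates that standard step, under the assumption that the grouping comes from the learned group sparse dictionary, and is not the paper's full iterative method.

    import numpy as np

    def group_soft_threshold(coeffs, groups, lam):
        """coeffs: coefficient vector; groups: list of index arrays; lam: weight."""
        out = np.zeros_like(coeffs)
        for g in groups:
            norm = np.linalg.norm(coeffs[g])
            if norm > lam:
                out[g] = (1 - lam / norm) * coeffs[g]  # shrink the whole group
        return out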