Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
Current Issue
Volume 46 Issue 6, 15 June 2019
Research on Application of Big Data Analytics in Network
FENG Gui-lan, LI Zheng-nan, ZHOU Wen-gang
Computer Science. 2019, 46 (6): 1-20.  doi:10.11896/j.issn.1002-137X.2019.06.001
With the rapid development of new technologies such as the mobile Internet, the Internet of Things and 5G communication networks, more and more infrastructure, devices and data are emerging, including hundreds of millions of network access points, networked devices and applications as well as massive data. This brings great difficulties and challenges to fault tolerance and cyberspace security, rendering some traditional solutions inefficient for such large-scale and complex security problems. Meanwhile, the growth of network big data presents unprecedented opportunities for deeply mining and taking full advantage of its value. Big data analytics can extract hidden, valuable patterns and useful information from big data. Therefore, both academia and industry have turned their attention back to the network field on the basis of big data analytics, and have made certain research achievements. Research on the network field mainly involves four directions, namely wireless networks, SDN networks, optical networks and cyberspace security. First, this survey starts with an introduction of the basic concepts of big data, the data model and data analytics. Second, it gives a detailed review of current academic and industrial efforts toward network design using big data analytics. Third, the main network design cycle employing big data analytics is illustrated; this cycle represents the umbrella concept that unifies the surveyed topics. Fourth, the challenges confronting the utilization of big data analytics in network design are identified. Finally, several future research directions are highlighted.
Research Progress on DNA Data Storage Technology
ZHANG Shu-fang, PENG Kang, SONG Xiang-ming, ZHANG Zi-yu, WANG Han-jie
Computer Science. 2019, 46 (6): 21-28.  doi:10.11896/j.issn.1002-137X.2019.06.002
With the rapid development of computer and network technology, the massive amounts of generated data have brought great challenges to traditional data storage methods, so researchers have begun to search for a new generation of storage schemes. As a natural medium for genetic information, deoxyribonucleic acid (DNA) has the advantages of large storage capacity, low energy consumption and long lifetime, which effectively overcome the shortcomings of traditional storage media such as hard disks and computer memory. DNA data storage has thus become a research hotspot at the intersection of information technology and biotechnology. This paper reviews the research progress on DNA data storage technology. Firstly, DNA and its theoretical storage framework are introduced. Then, the coding technologies in DNA data storage are elaborated, including compression coding algorithms for binary data, error correction algorithms and the conversion from binary data to the four DNA bases. Finally, existing DNA storage schemes are analyzed, and the challenges in DNA data storage research are discussed.
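The binary-to-base conversion discussed in this abstract can be sketched in its simplest form, mapping two bits to one nucleotide. The mapping table and example bit string below are illustrative assumptions; practical schemes add biochemical constraints such as avoiding long homopolymer runs:

```python
def binary_to_bases(bits):
    """Map binary data to DNA bases two bits at a time -- the simplest
    binary-to-quaternary conversion (real coding schemes are more elaborate)."""
    table = {"00": "A", "01": "C", "10": "G", "11": "T"}
    if len(bits) % 2:
        bits += "0"  # pad to an even length
    return "".join(table[bits[i:i + 2]] for i in range(0, len(bits), 2))

def bases_to_binary(seq):
    """Inverse conversion: recover the bit string from a base sequence."""
    inverse = {"A": "00", "C": "01", "G": "10", "T": "11"}
    return "".join(inverse[b] for b in seq)

encoded = binary_to_bases("0110110001")
print(encoded, bases_to_binary(encoded))
```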
Newly-emerging Domain Word Detection Method Based on Syntactic Analysis and Term Vector
ZHAO Zhi-bin, SHI Yu-xin, LI Bin-yang
Computer Science. 2019, 46 (6): 29-34.  doi:10.11896/j.issn.1002-137X.2019.06.003
Many existing words and phrases may be used in a domain in which they have never appeared before; these are called newly-emerging domain words. Researchers can gain insight into the latest development tendencies and public opinions of a domain through these words, so detecting them is significant. Based on dependency syntactic analysis and term vectors, this paper proposes a newly-emerging domain word detection method. Firstly, the concept of a syntactic dictionary is introduced, together with a method for constructing it for specific domains based on the dependency syntax of sentences and the TF-IDF values of the training corpus. Next, the domain syntactic dictionary and term vectors are used to detect newly-emerging domain words. Comprehensive experiments were conducted to evaluate the proposed method on comment data from a skin-care products forum. The experimental results show that the syntactic dictionary is effective and the proposed method performs well in newly-emerging domain word detection.
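The TF-IDF weighting used above to select domain-salient terms can be sketched as follows. The toy corpus is an invented stand-in for the forum data, and dependency parsing itself is not reproduced here:

```python
import math
from collections import Counter

def tfidf(docs):
    """Plain TF-IDF over a tokenised corpus: term frequency within each
    document times the log inverse document frequency across the corpus."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({t: (c / len(doc)) * math.log(n / df[t])
                       for t, c in tf.items()})
    return scores

# Hypothetical skin-care comments, pre-tokenised
corpus = [
    ["cream", "absorbs", "fast"],
    ["cream", "texture", "light"],
    ["serum", "absorbs", "slowly"],
]
s = tfidf(corpus)
print(max(s[0], key=s[0].get))  # the most domain-salient term of doc 0
```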
Missing Data Prediction Based on Compressive Sensing in Time Series
SONG Xiao-xiang, GUO Yan, LI Ning, WANG Meng
Computer Science. 2019, 46 (6): 35-40.  doi:10.11896/j.issn.1002-137X.2019.06.004
The frequent occurrence of data loss during time series acquisition seriously hinders accurate data analysis. However, most existing methods find a certain pattern in the collected data to predict the missing data, and are therefore only feasible when a low ratio of the collected data is missing. In view of this problem, this paper proposes a missing data prediction algorithm based on compressive sensing. The missing data prediction problem is formulated as a multiple sparse vector recovery problem. Firstly, the sparse representation basis is designed by making use of the temporal smoothness of the time series, thus transforming the missing data prediction problem into a sparse vector recovery problem. Secondly, the observation matrix is designed based on the locations of the data that are not missing; it has low coherence with the designed representation basis, thus ensuring the reconstruction performance of the proposed algorithm. Simulation results show that the proposed algorithm can predict the missing data effectively even if the ratio of data loss is as high as 90%.
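The recovery idea above can be sketched with a generic smoothness basis. This is not the paper's designed basis or observation matrix: it assumes the series is sparse in a DCT-style cosine basis and solves for the coefficients from the surviving samples only, even at 90% loss:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 8

# Cosine (DCT-II-style) basis: temporally smooth series are sparse here,
# playing the role of the designed sparse representation basis.
t = np.arange(n)
Psi = np.cos(np.pi * np.outer(t + 0.5, np.arange(k)) / n)

# Ground-truth smooth series built from a few low-frequency atoms
true_coef = np.array([1.0, 0.8, 0.0, 0.5, 0.0, 0.0, 0.2, 0.0])
x = Psi @ true_coef

# Simulate 90% data loss: keep 10% of samples at random positions
observed = np.sort(rng.choice(n, size=n // 10, replace=False))

# The observation matrix is the row selection of Psi at surviving positions;
# recover the sparse coefficients by least squares, then the full series.
coef_hat, *_ = np.linalg.lstsq(Psi[observed], x[observed], rcond=None)
x_hat = Psi @ coef_hat
print(float(np.max(np.abs(x_hat - x))))
```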
Implicit Feedback Recommendation Model Combining Node2vec and Deep Neural Networks
HE Jin-lin, LIU Xue-jun, XU Xin-yan, MAO Yu-jia
Computer Science. 2019, 46 (6): 41-48.  doi:10.11896/j.issn.1002-137X.2019.06.005
Implementing personalized recommendation based on large-scale implicit feedback information is both practical and challenging. To alleviate data sparseness and achieve effective recommendation by combining various side information, this paper proposes an implicit feedback recommendation model combining node2vec and deep neural networks. The model utilizes a deep neural network framework with embedded meta-data (Meta-DNN) and maps users and items to low-dimensional vectors. The second-order random walk of node2vec is used to learn neighbor nodes in the network with embedded meta-data, so that adjacent nodes have similar representations; data sparsity is alleviated by improving the smoothness among neighboring users and items. Finally, a deep neural network is used to further learn user preferences for items and provide recommendations. In addition, a popularity parameter is introduced to perform non-uniform sampling of unknown items, optimizing the implicit feedback negative sampling strategy. Experimental results on the Gowalla and MovieLens-1M datasets demonstrate that the proposed model outperforms state-of-the-art recommendation algorithms in prediction performance and recommendation quality.
Short-term High Voltage Load Current Prediction Method Based on LSTM Neural Network
ZHANG Yang, JI Bo, LU Hong-xing, LOU Zheng-zheng
Computer Science. 2019, 46 (6): 49-54.  doi:10.11896/j.issn.1002-137X.2019.06.006
In short-term load current prediction, traditional models cannot simultaneously handle the nonlinearity and time dependence of load current data. To solve this problem, this paper proposes a short-term high voltage load current regression prediction (SHCP) method based on a long short-term memory (LSTM) recurrent neural network, namely SHCP-LSTM. The method introduces self-circulating weights, which connect cells to one another circularly and dynamically change the accumulated time scale during prediction, yielding a long short-term memory capability. Meanwhile, a forget gate controls the input and output, so that the gating units have sigmoid nonlinearity. Experiments show that the method is feasible and effective: compared with prediction by linear logistic regression (LR), an artificial neural network (ANN) and a back propagation neural network (BPNN), SHCP-LSTM converges faster and achieves higher accuracy.
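The gating mechanism described above (sigmoid forget/input/output gates controlling a self-circulating cell state) can be sketched as a single standard LSTM cell step. This is a textbook LSTM forward pass with random, untrained weights, not the trained SHCP-LSTM model; the dimensions are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x, h_prev, c_prev, W, U, b):
    """One LSTM step: sigmoid gates (forget/input/output) control how the
    cell state accumulates history -- the self-circulating weight that
    yields long short-term memory behaviour."""
    z = W @ x + U @ h_prev + b        # all four gate pre-activations stacked
    H = h_prev.size
    f = sigmoid(z[0:H])               # forget gate
    i = sigmoid(z[H:2 * H])           # input gate
    o = sigmoid(z[2 * H:3 * H])       # output gate
    g = np.tanh(z[3 * H:4 * H])       # candidate update
    c = f * c_prev + i * g            # cell state: gated accumulation
    h = o * np.tanh(c)                # hidden state used for prediction
    return h, c

rng = np.random.default_rng(1)
D, H = 1, 4                           # 1 input (load current), 4 hidden units
W = rng.normal(size=(4 * H, D)) * 0.5
U = rng.normal(size=(4 * H, H)) * 0.5
b = np.zeros(4 * H)

h, c = np.zeros(H), np.zeros(H)
series = np.sin(np.linspace(0, 4 * np.pi, 50))  # stand-in current sequence
for x_t in series:
    h, c = lstm_cell(np.array([x_t]), h, c, W, U, b)
print(h.shape)
```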
Study on Processing Technology for Complex Event Management Based on Multivariate Time Series Data
LI Zhi-guo, ZHONG Jiang, ZHONG Lu-man
Computer Science. 2019, 46 (6): 55-63.  doi:10.11896/j.issn.1002-137X.2019.06.007
As data volumes grow, it is increasingly meaningful to combine data from different business systems to mine potential value. Complex event processing technology abstracts business data as an event sequence and describes potentially valuable composite data as a specific event matching structure through an event description method. An event detection engine then detects event sequences satisfying the matching structure from a large number of event flows, and finally outputs the data fusion results. However, in traditional event description, the input event flow of the event engine is a single atomic event type, the event predicate constraints contain only simple attribute value comparisons or simple aggregation operations, and the time constraints between events are simple. This makes traditional detection methods unsuitable for application fields, such as medicine and finance, that require more accurate timing and more complex event predicate constraints. In light of this, this paper designs a quantitative timing constraint representation model based on TCN that supports multivariate event input, along with a predicate constraint representation model based on time-interval feature constraints, and proposes a parallel detection algorithm for complex events (PARALLEL-TCSEQ-DETECTION), making complex event detection more efficient. Analysis results on 200 million records of 2045 stocks demonstrate the validity and high efficiency of the proposed complex event processing technology.
Multi-type Relational Data Co-clustering Approach Based on Manifold Regularization
HUANG Meng-ting, ZHANG Ling, JIANG Wen-chao
Computer Science. 2019, 46 (6): 64-68.  doi:10.11896/j.issn.1002-137X.2019.06.008
With the development of big data applications, multi-type relational data sampled from nonlinear manifolds are growing in size, their geometric structure is becoming more complicated, and heterogeneous relational data are becoming extremely sparse; as a result, data mining becomes more difficult and less accurate. To solve this problem, this paper proposes a manifold nonnegative matrix tri-factorization (MNMTF) approach for multi-type relational data co-clustering. First of all, a correlation matrix is constructed from the natural relationships or content relevance of smaller-scale entities and decomposed into an indicator matrix, which serves as the input of nonnegative matrix tri-factorization. Then, manifold regularization is added on the basis of fast nonnegative matrix tri-factorization (FNMTF) to simultaneously cluster inter-type and intra-type relationships, improving clustering accuracy. Experiments show that the accuracy and performance of the MNMTF algorithm are superior to traditional co-clustering algorithms based on nonnegative matrix factorization.
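The factorization machinery underlying this approach can be illustrated with plain two-factor NMF by multiplicative updates, the building block that tri-factorization with manifold regularization extends. The MNMTF update rules themselves are not reproduced here; the block-structured toy matrix is an invented example:

```python
import numpy as np

def nmf(X, k, iters=500, seed=0):
    """Basic nonnegative matrix factorization X ~= W H via the classic
    multiplicative update rules for the Frobenius-norm objective."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    eps = 1e-9                      # avoid division by zero
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy relational matrix with two obvious co-clusters (block structure)
X = np.array([[1., 1., 0., 0.],
              [1., 1., 0., 0.],
              [0., 0., 1., 1.],
              [0., 0., 1., 1.]])
W, H = nmf(X, k=2)
err = float(np.linalg.norm(X - W @ H))
print(err)  # reconstruction error (should be small)
```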
Combined Feature Extraction Method for Ordinal Regression
ZENG Qing-tian, LIU Chen-zheng, NI Wei-jian, DUAN Hua
Computer Science. 2019, 46 (6): 69-74.  doi:10.11896/j.issn.1002-137X.2019.06.009
Ordinal regression, also known as ordinal classification, is a supervised learning task that classifies data items using labels with a natural order. It is closely related to many practical problems, and research on it has attracted increasing attention in recent years. Like other supervised learning tasks (classification, regression, etc.), ordinal regression requires feature extraction to improve the efficiency and accuracy of the model. However, while feature extraction has been extensively studied for other classification tasks, there is little research on it for ordinal regression. It is well known that combined features can capture more underlying data semantics than single features, but simply adding generic combined features rarely improves model accuracy. Based on frequent pattern mining, this paper uses the K-L divergence to select the most discriminative frequent patterns for feature combination, and proposes a new combined feature extraction method for ordinal regression. Multiple ordinal regression models are used for validation on both public and self-collected datasets. The experimental results show that combining features from the most discriminative frequent patterns can effectively improve the training effect of most ordinal regression models.
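The K-L-divergence-based selection step can be sketched as follows: score each frequent pattern by how far the class distribution of items containing it diverges from the overall class prior, and keep the high-divergence patterns. The patterns and counts are invented toy data, not the paper's datasets:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """K-L divergence between two count vectors, normalised to distributions."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def rank_patterns(pattern_class_counts, class_prior):
    """Rank frequent patterns by divergence from the class prior: the most
    discriminative patterns (highest K-L) are kept for feature combination."""
    scores = {pat: kl_divergence(counts, class_prior)
              for pat, counts in pattern_class_counts.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy counts of each pattern across three ordinal classes (low/mid/high)
prior = [10, 10, 10]
counts = {
    ("cheap", "poor"):   [9, 1, 0],  # concentrated in the low class
    ("ok",):             [3, 4, 3],  # nearly uniform -> uninformative
    ("great", "sturdy"): [0, 1, 9],  # concentrated in the high class
}
ranking = rank_patterns(counts, prior)
print([pat for pat, _ in ranking])
```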
Recommendation Strategy Based on Trust Model via Emotional Analysis of Online Comment
LU Zhu-bing, LI Yu-zhou
Computer Science. 2019, 46 (6): 75-79.  doi:10.11896/j.issn.1002-137X.2019.06.010
Personalized recommendation technology has become a very effective approach to coping with "information overload" in E-commerce. Data sparseness and cold-start problems in traditional collaborative filtering recommender systems reduce recommendation accuracy and weaken users' confidence in the system. To address this, this paper proposes a new recommendation strategy that applies trust theory from sociology to offer users better personalized service. Users' online comments on items they have experienced are analyzed, and their emotional tendencies are extracted and effectively quantified. Trust relationships between users are established by analyzing the similarity of their emotional tendencies. At the same time, users' rating data are incorporated to compensate for the shortcomings of using similarity as the only preference weight. The work in this paper comprises three parts: analysis and quantification of user emotional tendency based on online reviews, modeling of trust relationships based on emotional similarity, and design of a recommendation strategy based on trust relationships. Experiments show that the proposed strategy effectively reduces the mean absolute error (MAE), meaning recommendation accuracy is improved, and effectively increases the coverage rate, meaning the system has more items to recommend. Additionally, the trust relationship management mechanism can greatly enhance users' personalized experience of the system and their confidence in it.
Closed Sequential Patterns Mining Based Unknown Protocol Format Inference Method
ZHANG Hong-ze, HONG Zheng, WANG Chen, FENG Wen-bo, WU Li-fa
Computer Science. 2019, 46 (6): 80-89.  doi:10.11896/j.issn.1002-137X.2019.06.011
Current protocol format inference methods based on network traffic can only extract a flat sequence of keywords; they do not consider the structural features of message keywords, such as the sequential, hierarchical and parallel relations between them. Additionally, noise in message samples often leads to low keyword recognition accuracy. This paper presents a method to automatically identify the keywords of unknown protocol messages and infer the message structure. Based on collected communication messages of the unknown protocol, the method applies two-phase closed sequential pattern mining to identify protocol keywords and generate keyword sequences with keyword composition relations, extracts the sequential, hierarchical and parallel relations of the keywords, and then infers the message structure. To ensure keyword recognition accuracy, the method analyzes message samples containing noise directly, by setting a minimum support in the keyword identification procedure. Experimental results show that the proposed method performs well in keyword identification and message structure inference for both text and binary protocols.
SDN-based Multipath Traffic Scheduling Algorithm for Data Center Network
JIN Yong, LIU Yi-xing, WANG Xin-xin
Computer Science. 2019, 46 (6): 90-94.  doi:10.11896/j.issn.1002-137X.2019.06.012
To solve the problems of low bandwidth utilization and poor network performance in data center networks, this paper proposes an SDN-based multi-path traffic scheduling algorithm considering multiple factors (MSF). The algorithm exploits the separation of control and forwarding in the Software Defined Network (SDN) architecture and the centralized control of the controller to compute routes for data streams. Firstly, it computes the set of paths with the fewest hops among all feasible paths between the source and destination hosts, then finds the paths with the least criticality within this set, and finally selects the lowest-cost path as the forwarding path installed in the flow table. Experimental results show that, compared with the ECMP and Hedera algorithms under different traffic models, the proposed algorithm improves network bandwidth utilization and throughput and reduces the average round-trip time of traffic, thus improving the overall network performance of the data center.
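The first step above, enumerating all minimum-hop paths between two hosts, can be sketched with a BFS over an adjacency dict; choosing the least-critical, lowest-cost member of the resulting set would follow. The small fat-tree-style topology fragment is an invented example:

```python
from collections import deque

def all_shortest_paths(adj, src, dst):
    """Enumerate every minimum-hop path from src to dst in an undirected
    graph given as an adjacency dict, via BFS plus parent backtracking."""
    dist = {src: 0}
    parents = {src: []}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:                 # first time reached
                dist[v] = dist[u] + 1
                parents[v] = [u]
                q.append(v)
            elif dist[v] == dist[u] + 1:      # another equally short route
                parents[v].append(u)

    paths = []
    def backtrack(v, suffix):
        if v == src:
            paths.append([src] + suffix)
            return
        for p in parents[v]:
            backtrack(p, [v] + suffix)
    if dst in dist:
        backtrack(dst, [])
    return paths

# Toy topology: two equal-hop routes between hosts h1 and h2
adj = {
    "h1": ["s1"], "s1": ["h1", "c1", "c2"],
    "c1": ["s1", "s2"], "c2": ["s1", "s2"],
    "s2": ["c1", "c2", "h2"], "h2": ["s2"],
}
paths = all_shortest_paths(adj, "h1", "h2")
print(sorted(paths))
```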
Cognitive Decision Engine of Hybrid Learning Differential Evolution and Particle Swarm Optimization
ZHANG Yu-pei, ZHAO Zhi-jin, ZHENG Shi-lian
Computer Science. 2019, 46 (6): 95-101.  doi:10.11896/j.issn.1002-137X.2019.06.013
To increase the speed and performance of parameter decision in cognitive radio systems, a cognitive radio decision engine based on hybrid particle swarm optimization and learning differential evolution (HPSO-BLDE) is proposed. First, an adaptive mutation mechanism is introduced into the learning differential evolution algorithm, so that each chromosome varies adaptively with its individual fitness and the average fitness, improving local optimization capability. Then, the learning factor of the particle swarm optimization algorithm is modified and perturbation is added to prevent premature convergence; a more appropriate transfer function is selected to convert forward and backward velocities into probabilities for updating particle positions, improving the precision of the optimal solution and thus the global optimization capability. Finally, the improved binary particle swarm optimization (IBPSO) and the improved binary learning differential evolution algorithm (IBLDE) are run in parallel in the cognitive engine model, and the best individual information of the two algorithms is fused after a fixed number of iterations to obtain the HPSO-BLDE algorithm. The populations of IBPSO and IBLDE thus share both algorithms' advantages, enhancing the solution accuracy and convergence speed of HPSO-BLDE. Parameter decision simulations of a multi-carrier communication system show that the IBPSO, IBLDE and HPSO-BLDE algorithms outperform the hilling genetic algorithm (HGA), binary quantum particle swarm optimization (BQPSO) and binary learning differential evolution (BLDE), and that HPSO-BLDE performs best among them.
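The sigmoid transfer function that converts velocities into bit-flip probabilities, the mechanism the engine above refines, can be sketched with a plain binary PSO on a toy OneMax objective. This is a generic textbook BPSO, not the HPSO-BLDE update, and all parameter values are illustrative:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def binary_pso(fitness, dim, n_particles=20, iters=60, seed=0):
    """Minimal binary PSO: real-valued velocities are mapped through a
    sigmoid transfer function to probabilities of setting each bit."""
    rng = np.random.default_rng(seed)
    X = rng.integers(0, 2, size=(n_particles, dim))   # bit positions
    V = rng.normal(size=(n_particles, dim))           # velocities
    P = X.copy()                                      # personal bests
    pf = np.array([fitness(x) for x in X])
    g = P[pf.argmax()].copy()                         # global best
    for _ in range(iters):
        r1, r2 = rng.random(V.shape), rng.random(V.shape)
        V = 0.7 * V + 1.5 * r1 * (P - X) + 1.5 * r2 * (g - X)
        X = (rng.random(V.shape) < sigmoid(V)).astype(int)  # transfer step
        f = np.array([fitness(x) for x in X])
        better = f > pf
        P[better], pf[better] = X[better], f[better]
        if pf.max() > fitness(g):
            g = P[pf.argmax()].copy()
    return g, fitness(g)

# OneMax stand-in for the radio parameter fitness function
best, score = binary_pso(lambda x: int(x.sum()), dim=16)
print(score)
```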
Location-related Online Multi-task Assignment Algorithm for Mobile Crowd Sensing
LI Zhuo, XU Zhe, CHEN Xin, LI Shu-qin
Computer Science. 2019, 46 (6): 102-106.  doi:10.11896/j.issn.1002-137X.2019.06.014
The higher the required data quality, the greater the sensing cost; achieving a trade-off between quality and cost is one of the hot topics in current research on task assignment in mobile crowd sensing. This paper investigates the location-related online multi-task assignment problem in which a lower bound on data quality must be guaranteed. The optimization goal is to minimize the total sensing cost, and the data quality requirement is quantified as the number of distinct execution nodes. This paper proposes a greedy algorithm based on partition. Its main idea is as follows: first, a disk is generated whose center is the initial position of an execution node and whose radius is the node's farthest expected move; then a subset of suitable tasks whose locations lie within the disk is selected and assigned to the corresponding execution node. Experimental simulation shows that, compared with the GGA-I algorithm, the proposed algorithm reduces the total sensing cost by an average of 12.7% for the same running time, and reduces the running time by an average of 51% for similar sensing performance.
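The disk-based partition step can be sketched directly: a disk centred at the worker's initial position with radius equal to its farthest expected move, and tasks inside the disk become its candidates. The names, coordinates and cost model are illustrative assumptions, not the paper's formulation:

```python
import math

def assign_tasks(worker_pos, max_move, tasks):
    """Greedy partition step: keep the tasks whose locations fall inside
    the disk centred at worker_pos with radius max_move."""
    cx, cy = worker_pos
    return [t for t, (x, y) in tasks.items()
            if math.hypot(x - cx, y - cy) <= max_move]

# Hypothetical task locations on a plane
tasks = {"t1": (1, 1), "t2": (5, 5), "t3": (0, 2), "t4": (-3, -4)}
selected = assign_tasks((0, 0), 3.0, tasks)
print(selected)
```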
Hybrid-based Network Congestion Control Routing Algorithm for LLN
WANG Hua-hua, ZHOU Yuan-wen, LIU Jiang-bing
Computer Science. 2019, 46 (6): 107-111.  doi:10.11896/j.issn.1002-137X.2019.06.015
Because existing network congestion control routing algorithms in low power and lossy networks (LLN) cannot alleviate network congestion effectively, this paper proposes a hybrid-based network congestion control routing algorithm (HNCCRA). The algorithm contains three main innovations. Firstly, to reduce the probability of network congestion, each node selects its parent according to the load state of the candidate parent nodes during network construction. Secondly, to prevent the child of a congested node from selecting a heavily loaded candidate parent as its new parent when changing the data transmission path, each node reports its own load status in real time during network topology maintenance. Finally, to alleviate existing congestion, congestion control is conducted by combining the idea of data flow with the replacement of data transmission paths. Simulation results show that, compared with existing congestion control routing algorithms in LLN, HNCCRA improves network performance in all aspects: the network congestion probability is decreased by 19.89%, the average throughput of the sink node is increased by 11.35%, and the network lifetime is extended by 9.75%.
MANET Routing Discovery and Establishment Strategy Based on Node State
ZHAO Xin-wei, LIU Wei
Computer Science. 2019, 46 (6): 112-117.  doi:10.11896/j.issn.1002-137X.2019.06.016
AODV is a typical on-demand routing protocol in MANET networks. To address the defects of the AODV routing strategy, a routing discovery and establishment strategy based on node state is proposed. By modeling the MANET network, a Markov chain is used to predict the state of neighbor nodes during route discovery. Building on the original AODV strategy, the last-hop node uses the reverse route established during AODV route discovery to obtain the status information of neighbor nodes. When a route is established, idle and dormant nodes are preferentially selected as the next hop by combining neighbor node status information. Simulation results show that the AODV routing protocol optimized with this strategy improves the packet delivery rate, reduces the end-to-end delay, and improves overall network performance.
Improved FCME Algorithm Based on Binary Searching by Mean and Its Applications in E/SLF Channel Noise Detection
ZHAO Peng, JIANG Yu-zhong, ZHAI Qi, LI Chun-teng
Computer Science. 2019, 46 (6): 118-123.  doi:10.11896/j.issn.1002-137X.2019.06.017
Extreme/super low frequency (E/SLF, 3~300 Hz) channel noise (CN) impulses are usually passivated by transient effects in receivers' front-end stages, degrading the performance of common time-domain amplitude-based threshold detectors. To address this problem, this paper proposes a detection method based on ordered-statistics constant false alarm rate (OS-CFAR) detection through local variance domain transforming (LVDT). In light of the potential divergence problem when the FCME algorithm iteratively estimates the background noise, this paper also presents an improved method, namely the binary searching method by mean (BSMM). BSMM requires neither an initial clean set nor a sorting process, and is therefore more robust and efficient. Simulations show that the proposed method can reduce the computing time by more than two orders of magnitude without losing background noise estimation accuracy compared with the common FCME algorithm. Moreover, the proposed CN detection method outperforms the local optimum threshold nonlinearities (LOTNI) method.
Anti-collision Algorithm Based on Q-learning for RFID Multiple Readers
YUAN Yuan, ZHENG Jia-li, SHI Jing, WANG Zhe, LI Li
Computer Science. 2019, 46 (6): 124-127.  doi:10.11896/j.issn.1002-137X.2019.06.018
To address the collision problem in communication between multiple readers and tags in RFID systems, this paper models the problem as a Markov decision process and proposes an anti-collision algorithm based on Q-learning. By continuously interacting with the environment, the Q-value function is learned and the optimal channel resource allocation is obtained. The complex hierarchical structure of the HiQ algorithm is eliminated to simplify the system model. The algorithm introduces an ε-greedy strategy to obtain the global optimal solution and improves the reward function to reach the best state. Simulation results show that, compared with HiQ and EHiQ, the algorithm can adaptively assign different channels to readers for data transmission, thereby reducing the collision rate and improving channel utilization and throughput.
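Tabular Q-learning with an ε-greedy policy, the core mechanism above, can be sketched on a toy single-reader problem: the reader learns by trial and error which channel avoids collisions. The reward values, channel count and "busy channel" environment are invented stand-ins for the multi-reader setting:

```python
import random

def q_learning_channel(n_channels=3, busy=1, episodes=500,
                       alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Minimal tabular Q-learning with an epsilon-greedy policy: pick a
    channel, receive a collision penalty or success reward, update Q."""
    rng = random.Random(seed)
    Q = [0.0] * n_channels                   # single-state Q table
    for _ in range(episodes):
        if rng.random() < eps:               # explore
            a = rng.randrange(n_channels)
        else:                                # exploit best known channel
            a = max(range(n_channels), key=Q.__getitem__)
        reward = -1.0 if a == busy else 1.0  # collision penalty vs success
        Q[a] += alpha * (reward + gamma * max(Q) - Q[a])
    return Q

Q = q_learning_channel()
best = max(range(3), key=Q.__getitem__)
print(best)  # the learned channel avoids the busy one
```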
Cloudlet Placement and User Task Scheduling Based on Wireless Metropolitan Area Networks
ZHANG Jian-shan, LIN Bing, LU Yu, XU Fu-rong
Computer Science. 2019, 46 (6): 128-134.  doi:10.11896/j.issn.1002-137X.2019.06.019
The computing capability requirements of mobile applications are becoming increasingly intensive, while the computing capability of portable mobile devices is limited. An effective way to reduce the system response time of an application on a mobile device is to offload its tasks to a nearby cloudlet, which consists of a cluster of computers. Edge computing enables computational tasks to be processed in time near the data source, which is an effective way to reduce system delay, and cloudlet technology is an important application of edge computing. Although there is a great deal of research on mobile cloudlet offloading, little attention has been paid to how cloudlets should be placed in a given network to optimize mobile application performance. This paper studies cloudlet placement and the scheduling of mobile user tasks to cloudlets in a wireless metropolitan area network (WMAN). An algorithm is devised that places cloudlets at user-dense regions of the WMAN and schedules mobile user tasks to the placed cloudlets while balancing their workload. Simulation experiments indicate that the proposed algorithm is very promising.
MC2ETS: An Energy-efficient Tasks Scheduling Algorithm in Mobile Cloud Computing
YE Fu-ming, LI Wen-ting, WANG Ying
Computer Science. 2019, 46 (6): 135-142.  doi:10.11896/j.issn.1002-137X.2019.06.020
Mobile cloud computing can migrate tasks scheduled on mobile devices to the cloud, reducing the energy consumption of mobile devices and improving task execution efficiency. This paper studies the task scheduling problem in mobile cloud computing under a Directed Acyclic Graph (DAG) model. Traditional task scheduling methods usually fail to optimize task completion time and mobile device energy consumption simultaneously, so an energy-efficient task scheduling algorithm for mobile cloud computing (MC2ETS) is presented. The algorithm consists of three steps. Firstly, initial scheduling is carried out to minimize the application completion time. Then, task migration is conducted to minimize energy consumption while satisfying the application completion time constraint. Finally, the energy consumption is reduced further through a DVFS (Dynamic Voltage/Frequency Scaling) algorithm. The feasibility of the proposed algorithm is verified through a specific example, and its time complexity is analyzed. Systematic experimental comparison with baseline algorithms shows that the proposed algorithm achieves a trade-off optimization between scheduling time and mobile device energy consumption in most cases.
Topic-based Re-identification for Anonymous Users in Social Network
LV Zhi-quan, LI Hao, ZHANG Zong-fu, ZHANG Min
Computer Science. 2019, 46 (6): 143-147.  doi:10.11896/j.issn.1002-137X.2019.06.021
Social networks have become part of people's daily life and bring convenience to social activities, but they also pose threats to personal privacy. People often want to keep part of their social activity information private from relatives, friends, colleagues or other specific groups. One common protective measure is to socialize anonymously: some social networks provide anonymity mechanisms that allow users to hide private information about social activities, thus separating those activities from the main account. In addition, users can create alternate accounts with different attributes and friendships to achieve the same aim. This paper proposes a topic-based re-identification method for social network users that attacks these protection mechanisms. The text contents published by anonymous users (or alternate accounts) and non-anonymous users (main accounts) are analyzed with a topic model, and time and text-length factors are introduced into the construction of user profiles to improve accuracy. The similarity between anonymous and non-anonymous user profiles is then analyzed to match their identities. Experiments on a real social network dataset show that the proposed method can effectively improve the accuracy of user re-identification in social networks.
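The matching step can be sketched as comparing topic-distribution profiles by cosine similarity and linking the anonymous account to its most similar main account. The profiles below are invented toy vectors, and the time and text-length factors of the full method are omitted:

```python
import math

def cosine(u, v):
    """Cosine similarity between two topic-distribution vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def match_anonymous(anon_profile, candidates):
    """Link an anonymous profile to the most similar main-account profile."""
    return max(candidates, key=lambda name: cosine(anon_profile, candidates[name]))

# Hypothetical topic distributions over three topics
main_accounts = {
    "alice": [0.7, 0.2, 0.1],  # mostly topic 0
    "bob":   [0.1, 0.1, 0.8],  # mostly topic 2
}
anon = [0.6, 0.3, 0.1]
matched = match_anonymous(anon, main_accounts)
print(matched)
```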
Network System Risk Assessment Model with Optimal Weights
ZHANG Jie-hui, PAN Chao, ZHANG Yong
Computer Science. 2019, 46 (6): 148-152.  doi:10.11896/j.issn.1002-137X.2019.06.022
Abstract PDF(1839KB) ( 270 )   
References | Related Articles | Metrics
Network system risk is affected by many factors and has strongly time-varying, non-linear characteristics, so a single model cannot fully describe how the risk changes.Traditional combination models determine each model's weight from its assessment error and therefore cannot accurately describe each model's contribution to the final evaluation result, which degrades assessment accuracy.To improve network system risk assessment, this paper designs an assessment model with optimal weights.First, different models evaluate the network system risk from different perspectives, producing single-model results.Each single-model result is then treated as an evidence body and fused according to an improved evidence theory, yielding the final evaluation.Finally, the proposed method is compared with other network system risk assessment methods.The test results show that the model accurately evaluates network system risk and reflects the changing characteristics of the network security situation, with evaluation accuracy clearly better than that of the other methods.
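The fusion step treats each single-model assessment as an evidence body. As orientation, here is the classical Dempster combination rule over a toy frame {"low", "high"} (plus the full set for ignorance); the mass values are hypothetical, and the paper's *improved* evidence theory is not reproduced here.

```python
# Classical Dempster combination of two mass functions (toy example).
# Focal sets are frozensets; K accumulates the conflicting mass.

def combine(m1, m2):
    out, K = {}, 0.0
    for A, a in m1.items():
        for B, b in m2.items():
            C = A & B
            if not C:
                K += a * b                       # empty intersection: conflict
            else:
                out[C] = out.get(C, 0.0) + a * b
    return {C: v / (1 - K) for C, v in out.items()}

theta = frozenset({"low", "high"})               # total ignorance
m1 = {frozenset({"high"}): 0.6, theta: 0.4}      # model 1's assessment
m2 = {frozenset({"high"}): 0.5, frozenset({"low"}): 0.2, theta: 0.3}
fused = combine(m1, m2)
print(fused[frozenset({"high"})])
```

Two mostly agreeing bodies of evidence reinforce each other: the fused belief in "high" risk exceeds either input mass.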
Secure Routing Mechanism Based on Trust Against Packet Dropping Attack in Internet of Things
ZHANG Guang-hua, YANG Yao-hong, ZHANG Dong-wen, LI Jun
Computer Science. 2019, 46 (6): 153-161.  doi:10.11896/j.issn.1002-137X.2019.06.023
Abstract PDF(2481KB) ( 342 )   
References | Related Articles | Metrics
In an open Internet of Things environment,nodes are vulnerable to malicious packet-dropping attacks (black hole and gray hole attacks) during routing,which seriously affect network connectivity,lower the packet delivery rate and increase end-to-end delay.This paper therefore proposes a trust-based secure routing mechanism built on the RPL protocol.Based on node behavior during data forwarding,a penalty factor is introduced to evaluate the direct trust between nodes,and entropy is used to assign weights to the direct and indirect trust values,yielding a comprehensive trust value for each evaluated node.Fuzzy set theory is used to classify the trust relationships between nodes:neighbors with a high trust level are selected to forward data,while neighbors with a low trust level are isolated from the network.In addition,to prevent normal nodes from being isolated as malicious due to non-intrusion factors,a given recovery time is provided before further deciding whether to isolate them.The scheme was simulated with the Contiki operating system and its Cooja network simulator.The results show that,across different node counts and proportions of malicious nodes,the scheme improves the malicious node detection rate,false detection rate,packet delivery rate and end-to-end delay.In terms of security,its malicious node detection rate and false detection rate are significantly better than those of the tRPL protocol;in terms of routing performance,its packet delivery rate and end-to-end delay are significantly better than those of the tRPL and MRHOF-RPL protocols.The simulation analysis fully demonstrates that the scheme can not only effectively identify malicious nodes,but also maintain good routing performance in the presence of malicious attacks.
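The trust computation above can be sketched as follows: a penalty factor discounts failed forwards in the direct trust, and entropy-based certainty weights combine the direct and indirect trust values. The penalty value and the exact entropy-weighting form are illustrative assumptions, not the paper's formulas.

```python
# Sketch: penalty-weighted direct trust + entropy-weighted combination.
import math

def direct_trust(success, fail, penalty=2.0):
    # each failed forward is counted `penalty` times (assumed penalty form)
    return success / (success + penalty * fail)

def entropy(t):
    # binary entropy of a trust value in (0, 1); lower = more certain
    return -(t * math.log2(t) + (1 - t) * math.log2(1 - t))

def combined_trust(direct, indirect):
    wd, wi = 1 - entropy(direct), 1 - entropy(indirect)   # certainty weights
    if wd + wi == 0:
        return (direct + indirect) / 2
    return (wd * direct + wi * indirect) / (wd + wi)

d = direct_trust(success=18, fail=2)      # 18 / (18 + 2*2) ~ 0.818
t = combined_trust(d, indirect=0.6)
print(round(d, 3), round(t, 3))
```

Because the direct observation (0.818) is far from 0.5 it carries low entropy and dominates the combination, pulling the comprehensive trust toward it.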
Digital Signature Algorithm Based on QC-LDPC Code
YANG Xue-fei, ZHENG Dong, REN Fang
Computer Science. 2019, 46 (6): 162-167.  doi:10.11896/j.issn.1002-137X.2019.06.024
Abstract PDF(1321KB) ( 277 )   
References | Related Articles | Metrics
Code-based public key cryptography can resist attacks by quantum algorithms.To address the large key size of the classical CFS signature scheme,this paper proposes a CFS signature scheme based on QC-LDPC codes,using the BP fast decoding algorithm of QC-LDPC codes in the signing process.Analysis shows that the new scheme reduces the key storage space of CFS,improves signing efficiency,and effectively resists quantum-algorithm attacks without reducing security.
Research and Application of LMS Adaptive Interference Cancellation in Physical Layer Security Communication System Based on Artificial Interference
PENG Lei, ZANG Guo-zhen, GAO Yuan-yuan, SHA Nan, XI Chen-jing, JIANG Xuan-you
Computer Science. 2019, 46 (6): 168-173.  doi:10.11896/j.issn.1002-137X.2019.06.025
Abstract PDF(1807KB) ( 168 )   
References | Related Articles | Metrics
To counter the security threat that external eavesdroppers pose to information transmission in wireless communication systems,this paper proposes a secure transmission scheme in which the source node sends the source information and an artificial interference signal simultaneously.To effectively eliminate the impact of the artificial interference on the correct recovery of the source information,the LMS adaptive algorithm is applied to estimate and cancel the interference signal.To verify the proposed scheme,a QPSK modulation system is taken as an example and a simple artificial interference signal is designed.MATLAB simulation of the system's bit error rate verifies the effectiveness of the scheme and yields the optimal step size and filter order for the LMS adaptive algorithm.
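The cancellation idea can be sketched in a few lines: the legitimate receiver knows the artificial-interference reference, so an LMS filter adapts until subtracting its output from the received signal leaves only the source. This is a real-valued toy (BPSK-like symbols stand in for QPSK), and the step size, filter order and signals are illustrative choices, not the paper's optimized values.

```python
# Minimal LMS interference-cancellation sketch (toy 1-D signals).
import math, random
random.seed(0)

N, order, mu = 4000, 4, 0.01
ref = [math.sin(0.3 * n) for n in range(N)]                 # known interference reference
interf = [1.5 * math.sin(0.3 * n + 0.8) for n in range(N)]  # scaled, phase-shifted copy at receiver
src = [random.choice((-1.0, 1.0)) for _ in range(N)]        # BPSK-like stand-in for the QPSK source
rx = [s + i for s, i in zip(src, interf)]

w = [0.0] * order
tail = []
for n in range(order, N):
    x = ref[n - order + 1:n + 1][::-1]                # filter taps, newest first
    y = sum(wi * xi for wi, xi in zip(w, x))          # estimated interference
    e = rx[n] - y                                     # source estimate = LMS "error"
    w = [wi + mu * e * xi for wi, xi in zip(w, x)]    # LMS weight update
    if n >= N - 500:
        tail.append((e - src[n]) ** 2)                # residual interference power

print(sum(tail) / len(tail))
```

After convergence the residual interference power is small compared with the original interference power of about 1.125, so the source symbols can be recovered.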
Two-phase Image Steganalysis Algorithm Based on Artificial Bee Colony Algorithm
MU Xiao-fang, DENG Hong-xia, LI Xiao-bin, ZHAO Peng
Computer Science. 2019, 46 (6): 174-179.  doi:10.11896/j.issn.1002-137X.2019.06.026
Abstract PDF(2226KB) ( 237 )   
References | Related Articles | Metrics
To improve the detection accuracy of image steganalysis,this paper proposes a two-phase image steganalysis algorithm based on the Artificial Bee Colony (ABC) algorithm.In the first phase,a steganography pattern detection algorithm based on fuzzy theory is designed to discover content hidden by known steganography algorithms.In the second phase,dual region and density features of stego images are analyzed with the ABC algorithm,and content embedded by unknown steganography algorithms is analyzed through these dual features.Experimental results on public steganography images show that the proposed algorithm achieves high detection accuracy with desirable computational efficiency.
SCR Requirement Model Transformation Based on Table Expression
LI Si-jie, WEI Ou, ZHAN Yun-jiao, WANG Li-song
Computer Science. 2019, 46 (6): 180-188.  doi:10.11896/j.issn.1002-137X.2019.06.027
Abstract PDF(2295KB) ( 347 )   
References | Related Articles | Metrics
Requirement specification based on formal methods rests on strictly defined semantics and mathematical models,making the presentation of requirements clearer and easier to understand.The SCR method is a formalized requirement specification method based on tabular expressions,which uses multi-dimensional tables to represent system requirements.Automated testing and verification tools for formal requirements increase the accuracy of the requirements and the efficiency of analysis.However,some current tools lack automatic verification of safety properties and cannot guarantee the safety of the requirements.Therefore,this paper extends the T-VEC tool based on the SCR method:with the help of the parser generator ANTLR,the model transformation tool T2N is developed,language-structure transformation rules are designed,and the SCR-based requirement description language T-VEC is transformed into the symbolic model checking language XMV,achieving automatic verification of the extracted system safety properties.Finally,a lighting control system,a typical case in requirements engineering,is used for experimental analysis to verify the effectiveness of the T2N tool and the safety of the requirement model.
Study on Complete Requirement Acquiring Based on Tracking Matrix
LI Xiao, WEI Chang-jiang
Computer Science. 2019, 46 (6): 189-195.  doi:10.11896/j.issn.1002-137X.2019.06.028
Abstract PDF(1691KB) ( 290 )   
References | Related Articles | Metrics
Distributed systems have gradually developed into an important research field in software engineering.Nowadays distributed requirements are a main feature of software systems and are closely related to functional requirements.Currently the "4+1" views method recommended by RUP (Rational Unified Process) is usually used to model the two kinds of requirements in separate models,and it has achieved good results in software engineering practice.However,modeling distributed and functional requirements separately segregates them to a certain degree,and this segmented modeling makes it difficult to obtain complete software requirements during requirements analysis.In response,this paper first gives an overall framework of requirements tracking,illustrating at three levels how tracking relationships evolve across all phases of the software life cycle.Second,by analyzing the transmission routes from requirements to other artifacts,requirements tracking relationships are obtained and requirements tracking matrices are established.Finally,matrix calculation is used to implement requirements change tracking.Through this work,the paper establishes tracking links between the distributed and functional requirements models,which not only captures requirements completely,but also solves the problem of difficult requirements changes caused by requirements segmentation.
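The matrix calculation above amounts to boolean matrix composition: if R2D links requirements to design elements and D2C links design elements to code modules, their composition tells which modules a requirement change reaches. The matrices below are a made-up two-requirement example.

```python
# Boolean composition of tracking matrices (toy example).
# (R2D o D2C)[i][k] is True iff requirement i reaches code module k
# through some design element j.

def compose(A, B):
    rows, mid, cols = len(A), len(B), len(B[0])
    return [[any(A[i][j] and B[j][k] for j in range(mid))
             for k in range(cols)] for i in range(rows)]

R2D = [[1, 0, 0],      # req0 -> design0
       [0, 1, 1]]      # req1 -> design1, design2
D2C = [[1, 0],         # design0 -> code0
       [0, 1],         # design1 -> code1
       [1, 1]]         # design2 -> code0, code1
R2C = compose(R2D, D2C)
print(R2C)
```

A change to req1 propagates to both code modules (via design2), while req0 touches only code0.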
Optimization Algorithm of Complementary Register Usage Between Two Register Classes in Register Spilling for DSP Register Allocation
QIU Ya-qiong, HU Yong-hua, LI Yang, TANG Zhen, SHI Lin
Computer Science. 2019, 46 (6): 196-200.  doi:10.11896/j.issn.1002-137X.2019.06.029
Abstract PDF(1380KB) ( 254 )   
References | Related Articles | Metrics
Because registers are limited and valuable resources in computer hardware architectures,register allocation has become one of the most important compiler optimization techniques.A key factor affecting allocation results is the access and storage cost incurred by register spilling.For DSP architectures with two classes of general-purpose registers,this paper proposes a complementary register-usage strategy between the two classes and a corresponding register-spilling optimization algorithm on the basis of the graph-coloring register allocation method.By distinguishing the interference between candidates of the same register class from that between candidates of different classes,the analysis of variables' live ranges is improved and an undirected interference graph is built.Compared with conventional graph-coloring register allocation,the improved algorithm fully considers the interference among allocation candidates of both register classes,thus achieving fewer memory access operations during spilling and higher code performance.
Database-level Web Cache Replacement Strategy Based on SVM Access Prediction Mechanism
YANG Rui-jun, ZHU Ke, CHENG Yan
Computer Science. 2019, 46 (6): 201-205.  doi:10.11896/j.issn.1002-137X.2019.06.030
Abstract PDF(1616KB) ( 358 )   
References | Related Articles | Metrics
Web caching mitigates network access delay and congestion,and the cache replacement strategy directly affects the cache hit rate.This paper therefore proposes a database-level Web cache replacement strategy based on an SVM access prediction mechanism.First,a feature data set is constructed from users' previous access logs by extracting multiple features in a pre-processing step.Then a Support Vector Machine (SVM) classifier is trained to predict whether a cached object is likely to be accessed again in the future,and cached objects classified as unlikely to be accessed are deleted to free memory.Simulation results show that,compared with the traditional LRU,LFU and GDSF schemes,the strategy achieves higher request hit rates and byte hit rates.
BLSTM_MLPCNN Model for Short Text Classification
ZHENG Cheng, HONG Tong-tong, XUE Man-yi
Computer Science. 2019, 46 (6): 206-211.  doi:10.11896/j.issn.1002-137X.2019.06.031
Abstract PDF(1450KB) ( 394 )   
References | Related Articles | Metrics
Text representation and feature extraction are essential steps in natural language processing and directly affect text classification performance.The major output of the present work is the BLSTM_MLPCNN neural network model,whose input combines character-level vectors with word vectors.First,character-level vectors are obtained from characters via a convolutional neural network (CNN) and concatenated with word vectors to form the pre-trained word embeddings that feed the BLSTM.Then the BLSTM's forward output,the word embedding and the backward output are combined into a document feature map,from which the MLPCNN extracts features.Experiments on the pertinent datasets show that the classification performance of BLSTM_MLPCNN is superior to that of CNN,RNN and combined CNN/RNN models.
KL-divergence-based Policy Optimization
LI Jian-guo, ZHAO Hai-tao, SUN Shao-yuan
Computer Science. 2019, 46 (6): 212-217.  doi:10.11896/j.issn.1002-137X.2019.06.032
Abstract PDF(2163KB) ( 472 )   
References | Related Articles | Metrics
Reinforcement learning has wide application prospects in complex optimization and control problems.Traditional policy gradient methods cannot learn complex policies effectively in environments with high-dimensional,continuous action spaces,leading to slow convergence or even divergence.This paper proposes an online KL-divergence-based policy optimization (KLPO) algorithm to address this issue.On the basis of the Actor-Critic algorithm,a KL-divergence penalty measuring the distance between the "new" and "old" policies is added to the policy loss function to regularize the Actor's updates.Furthermore,the learning step is controlled by the KL-divergence to ensure that each policy update takes the maximum step that remains within a safe region.Simulations on the Pendulum and Humanoid tasks show that KLPO learns complex policies better,converges faster and obtains higher returns.
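For 1-D Gaussian policies the KL term in such a penalized loss has a closed form, which makes the mechanism easy to see: a large policy shift inflates the loss even when the surrogate objective is unchanged. The penalty coefficient beta and the numbers below are hypothetical; this is the generic KL-penalty idea, not the paper's exact loss.

```python
# KL-penalized actor loss, sketched with closed-form Gaussian KL.
import math

def kl_gauss(mu_p, sd_p, mu_q, sd_q):
    # KL(N(mu_p, sd_p^2) || N(mu_q, sd_q^2))
    return (math.log(sd_q / sd_p)
            + (sd_p ** 2 + (mu_p - mu_q) ** 2) / (2 * sd_q ** 2) - 0.5)

def penalized_loss(surrogate, mu_old, sd_old, mu_new, sd_new, beta=1.0):
    # maximize surrogate - beta * KL(old || new); returned as a loss to minimize
    return -(surrogate - beta * kl_gauss(mu_old, sd_old, mu_new, sd_new))

same  = penalized_loss(1.0, 0.0, 1.0, 0.0, 1.0)   # no policy change: KL = 0
drift = penalized_loss(1.0, 0.0, 1.0, 2.0, 1.0)   # large mean shift: KL = 2
print(same, drift)
```

The drifted update is penalized, so a step-size controller based on the measured KL can shrink updates that would leave the safe region.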
Linguistic Multi-attribute Group Decision Making Method Based on Normal Cloud Similarity
XU Cong, PAN Xiao-dong
Computer Science. 2019, 46 (6): 218-223.  doi:10.11896/j.issn.1002-137X.2019.06.033
Abstract PDF(1613KB) ( 204 )   
References | Related Articles | Metrics
After analyzing the inadequacies of existing similarity measures between normal clouds,this paper proposes a new similarity measure that synthetically considers both the shape similarity and the position similarity of normal clouds,and proves its properties.Comparison with other methods demonstrates the stronger discrimination power of the proposed measure.The measure is then applied to linguistic multi-attribute group decision making.First,linguistic variables are transformed into normal clouds according to the normal distribution law.Second,information aggregation is realized with the cloud weighted arithmetic mean operator.Finally,following the VIKOR method,the alternatives are ranked by their comprehensive similarity to the optimal and worst clouds.The feasibility and validity of the method are analyzed through an example.
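A normal cloud is parameterized as C = (Ex, En, He). As an illustration of combining a position term (closeness of the expectations Ex) with a shape term (closeness of entropy En and hyper-entropy He), here is one simple combined measure. The exact combination below is an assumption for the sketch, not the paper's formula.

```python
# Illustrative position-and-shape similarity between normal clouds.
import math

def cloud_sim(c1, c2):
    (ex1, en1, he1), (ex2, en2, he2) = c1, c2
    pos = math.exp(-abs(ex1 - ex2) / (en1 + en2))       # position similarity
    he_sim = min(he1, he2) / max(he1, he2) if max(he1, he2) > 0 else 1.0
    shape = (min(en1, en2) / max(en1, en2)) * he_sim    # shape similarity
    return pos * shape

a = (5.0, 1.0, 0.1)
b = (5.2, 1.1, 0.1)   # close in both position and shape
c = (9.0, 1.0, 0.1)   # identical shape but far away
print(round(cloud_sim(a, b), 3), round(cloud_sim(a, c), 3))
```

Because both terms contribute, cloud c scores low despite its identical shape, which is the discrimination behavior a combined measure is meant to provide.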
Knowledge Discovery Model Based on Neighborhood Multi-granularity Rough Sets
Computer Science. 2019, 46 (6): 224-230.  doi:10.11896/j.issn.1002-137X.2019.06.034
Abstract PDF(1266KB) ( 2400 )   
References | Related Articles | Metrics
The purpose of the present work is to re-establish a knowledge discovery model based on neighborhood multi-granulation rough sets,starting from the deficiencies of the existing definition of neighborhood multi-granulation rough sets and the corresponding knowledge discovery algorithms.We first construct the optimistic and pessimistic neighborhood multi-granulation rough set models under multiple neighborhood radii,and discuss several pertinent properties.We then define the granularity importance of neighborhood multi-granulation rough sets and construct a granularity reduction algorithm.Finally we demonstrate the working mechanism of the proposed algorithm on an example and verify its validity.
Efficient Grouping Method for Crowd Evacuation
ZHANG Jian-xin, LIU Hong, LI Yan
Computer Science. 2019, 46 (6): 231-238.  doi:10.11896/j.issn.1002-137X.2019.06.035
Abstract PDF(2347KB) ( 530 )   
References | Related Articles | Metrics
During crowd evacuation,individuals usually form groups according to the intimacy of their relationships,so grouping behavior cannot be neglected in crowd evacuation simulation:families,friends and colleagues form groups by degree of intimacy and gather into clusters while evacuating.The commonly used k-medoids clustering algorithm is sensitive to noise and to the selection of initial cluster centers,easily falls into local optima and can only find spherical clusters,so its clustering accuracy is unsatisfactory.The DBSCAN algorithm handles noise well,finds clusters of arbitrary shape and needs no initial cluster centers,but it can only identify clusters of similar density.This paper therefore proposes a binary DBSCAN clustering algorithm.The algorithm first divides the relational data into a grid,then determines the cluster radius ε for each grid cell according to its population density,and finally executes DBSCAN within each cell,so that clusters of different densities can be identified.After clustering,individual movement is driven by a social force model augmented with attraction between individuals of the same group,and the influence of the degree of intimacy on aggregation is simulated.The experimental results show that,by considering the spatial distribution of connected pedestrians in real life,this method achieves higher clustering accuracy.It can reproduce evacuation in real scenes and can serve as an important tool for predicting evacuation time and evacuation outcomes.
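The grid-then-DBSCAN idea can be sketched compactly: bin points into cells, give each cell an eps matched to its local density, and run plain DBSCAN inside each cell. The grid split, eps values and toy points below are illustrative; the paper derives eps from cell population density.

```python
# Minimal DBSCAN plus per-cell eps (toy two-cell example).
import math

def dbscan(points, eps, min_pts):
    labels, cid = [None] * len(points), 0
    def nbrs(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        n = nbrs(i)
        if len(n) < min_pts:
            labels[i] = -1                # noise (may become a border point later)
            continue
        cid += 1
        labels[i] = cid
        seeds = list(n)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cid           # claim former noise as border point
            if labels[j] is not None:
                continue
            labels[j] = cid
            nj = nbrs(j)
            if len(nj) >= min_pts:
                seeds.extend(nj)          # expand only from core points
    return labels

# two "grid cells" with different densities get different eps values
dense = [(0, 0), (0.1, 0), (0, 0.1), (0.1, 0.1)]
sparse = [(10, 10), (10.8, 10), (10, 10.8), (10.8, 10.8)]
labels_dense = dbscan(dense, eps=0.2, min_pts=3)
labels_sparse = dbscan(sparse, eps=1.2, min_pts=3)
labels_global = dbscan(sparse, eps=0.2, min_pts=3)   # one global eps fails here
print(labels_dense, labels_sparse, labels_global)
```

With a single global eps the sparse group degenerates to noise, which is exactly the "similar density only" limitation the per-cell radius removes.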
Study on Stowage Optimization in Minimum Container Transportation Cost
ZHENG Fei-feng, JIANG Juan, MEI Qi-huang
Computer Science. 2019, 46 (6): 239-245.  doi:10.11896/j.issn.1002-137X.2019.06.036
Abstract PDF(1643KB) ( 308 )   
References | Related Articles | Metrics
With the rapid development of container transportation along the Yangtze River,stowage planning has become an important issue.Aiming to minimize the shifting cost and stack usage cost during the voyage,this paper establishes a mixed integer programming model that incorporates ship safety and loading stability constraints.CPLEX,a genetic algorithm and a greedy algorithm are then used to analyze the stowage planning of small and medium sized container ships along the Yangtze River,and the validity of the proposed model is proved.Besides,the two proposed heuristic algorithms are applied to large-scale container loading problems,and both quickly obtain reasonable solutions.Compared with industry experience,the experiments show that the proposed model reduces the route transportation cost by 24.73% on average and offers guidance for container transportation along the Yangtze River.
End-to-end Image Super Resolution Based on Residuals
HUA Zhen, ZHANG Hai-cheng, LI Jin-jiang
Computer Science. 2019, 46 (6): 246-255.  doi:10.11896/j.issn.1002-137X.2019.06.037
Abstract PDF(4880KB) ( 224 )   
References | Related Articles | Metrics
Image super-resolution reconstruction technology is widely used in real life.To address the simple network structure,slow convergence and blurred reconstructed textures of the super-resolution CNN,an end-to-end deep convolutional neural network (CNN) based on residual learning is proposed to further improve the quality of image reconstruction.The network jointly trains a local residual network and a global residual network,which increases the width of the network and learns different effective features.The local residual network comprises three stages:feature extraction,upsampling and multi-scale reconstruction;densely connected blocks extract effective local features,and multi-scale reconstruction captures rich context information,which benefits the recovery of high-frequency information.In the global residual network,progressive upsampling achieves multi-scale image reconstruction,and residual learning improves the convergence speed.Quantitative and qualitative evaluations are performed on the benchmark datasets Set5,Set14,B100 and Urban100 for scale factors 2,3 and 4.At scale factor 3 the proposed algorithm attains 34.70dB/0.9295,30.54dB/0.8490,29.27dB/0.8096 and 28.81dB/0.8653.In qualitative comparison,the proposed method reconstructs clearer images and better preserves edge details.The experimental results show clear improvements in both subjective vision and objective quantization,effectively improving the quality of image reconstruction.
No-reference Quality Assessment of Depth Images Based on Natural Scenes Statistics
CHEN Xi, LI Lei-da, LI Qiao-yue, HAN Xi-xi, ZHU Han-cheng
Computer Science. 2019, 46 (6): 256-262.  doi:10.11896/j.issn.1002-137X.2019.06.038
Abstract PDF(2717KB) ( 443 )   
References | Related Articles | Metrics
Depth-image-based rendering (DIBR) has been widely used in virtual view synthesis.The quality of the depth map is a crucial factor in the synthesis result,because depth errors easily lead to severe geometric distortions in the synthesized views,and perfect depth maps are difficult to obtain.This paper proposes an NSS-based no-reference quality assessment algorithm for depth maps.First,the edges of the depth map are detected with the Canny operator,and the distorted edge region is defined from the detected edges.Second,the Gradient Magnitude (GM) and Laplacian of Gaussian (LOG) of the depth map are computed in the distorted edge region;the GM distribution is fitted with a Weibull function,and the LOG distribution with an Asymmetric Generalized Gaussian Distribution (AGGD),for distorted and undistorted images alike.Because images are naturally multiscale and distortions affect image structure across scales,all features are extracted at five scales of the original image.Finally,a Random Forests (RF) regression model produces a quality index for the depth map.Extensive experiments on benchmark databases demonstrate the effectiveness of the proposed method,which outperforms state-of-the-art methods.
Face Pose and Expression Correction Based on 3D Morphable Model
WANG Qian-qing, ZHANG Jing-lei
Computer Science. 2019, 46 (6): 263-269.  doi:10.11896/j.issn.1002-137X.2019.06.039
Abstract PDF(3440KB) ( 592 )   
References | Related Articles | Metrics
Aiming at problems such as poor robustness and high computational complexity in face pose correction,a new facial pose and expression correction algorithm is proposed.First,the Fast-SIC algorithm is adopted to improve the AAM model and enhance fitting efficiency.Then,based on the face alignment results,3D face reconstruction is performed with a proposed BFM-3DMM model that adds expression parameters to the classical 3DMM.However,the face corrected by the BFM-3DMM model is not smooth enough;since the SFS algorithm is not constrained by the original statistical model,it is applied to re-correct the 2D face produced by BFM-3DMM.The algorithm achieves good alignment and correction effects on the well-known large face databases AFLW and LFPW as well as on a self-built face database.The experimental evaluation shows that,compared with the classical 3DMM,the corrected 2D faces are smoother and have higher fidelity,while the image background information is retained.
Image Fusion Method Based on àtrous-NSCT Transform and Region Characteristic
CAO Yi-qin, CAO Ting, HUANG Xiao-sheng
Computer Science. 2019, 46 (6): 270-276.  doi:10.11896/j.issn.1002-137X.2019.06.040
Abstract PDF(4177KB) ( 290 )   
References | Related Articles | Metrics
Weighing the advantages and disadvantages of two multi-scale transforms,the àtrous wavelet transform and NSCT,this paper introduces the àtrous-NSCT transform and proposes an image fusion method based on it and on region characteristics.The method takes the regional average gradient as the activity measure and fuses the low-frequency sub-band images by selecting the larger coefficient.The high-frequency sub-band images are fused with an adaptive model weighted by regional variance,and the final fusion result is obtained through the inverse àtrous transform.In the experiments,the proposed method was compared with five other multi-scale fusion methods.The results show that,with four decomposition layers of the new multi-scale transform,the fusion results are significantly improved in both subjective vision and objective evaluation.
Deep Face Recognition Algorithm Based on Weighted Hashing
ZENG Yan, CHEN Yue-lin, CAI Xiao-dong
Computer Science. 2019, 46 (6): 277-281.  doi:10.11896/j.issn.1002-137X.2019.06.041
Abstract PDF(1843KB) ( 332 )   
References | Related Articles | Metrics
To address the drop in accuracy and the still-high memory occupancy when a convolutional neural network with fused deep hashing is used for face recognition,this paper proposes a deep face recognition algorithm based on weighted hashing.First,a fully convolutional deep-hash network that splices high- and low-dimensional features is proposed to improve recognition accuracy.Second,a model compression method that quantizes floating-point weights into hash codes is proposed to reduce the model's memory occupancy.Experimental results show that,built on the VGG framework,the method improves efficiency by 68%,improves Rank-1 accuracy by 1.67%,and compresses the model size by 91.2%;built on the Sphereface framework,it improves efficiency by 61% with slightly improved Rank-1 accuracy and reduces the model size by 42.24%.The results indicate that the proposed method improves recognition accuracy and efficiency,reduces memory usage,and can be applied to other frameworks.
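The compression step above replaces float weights with compact codes. A common hash-style scheme, sketched here with a per-row scale plus sign bits, shows where the memory saving comes from; the paper's exact coding may differ, and the weight values are made up.

```python
# Sign-bit quantization of a weight row with a per-row scale (sketch).

def quantize(row):
    scale = sum(abs(w) for w in row) / len(row)   # mean magnitude as the scale
    bits = [1 if w >= 0 else 0 for w in row]      # 1 bit per weight
    return scale, bits

def dequantize(scale, bits):
    return [scale if b else -scale for b in bits]

row = [0.8, -0.5, 0.3, -0.9]
scale, bits = quantize(row)
approx = dequantize(scale, bits)
orig_bytes = len(row) * 4                  # float32 storage
hash_bytes = 4 + (len(row) + 7) // 8       # one float scale + packed sign bits
print(bits, approx, orig_bytes, hash_bytes)
```

Even on this tiny row the code storage drops from 16 to 5 bytes; at layer scale such ratios account for compression figures like those reported above.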
Automatic Sex Determination of Skulls Based on Statistical Shape Model
YANG Wen, LIU Xiao-ning, ZHU Fei, ZHAO Shang-hao, WANG Shi-xiong
Computer Science. 2019, 46 (6): 282-287.  doi:10.11896/j.issn.1002-137X.2019.06.042
Abstract PDF(2241KB) ( 411 )   
References | Related Articles | Metrics
Sex determination from skulls is one of the hot research topics in forensic anthropology,with important research value in criminal investigation and archaeological anthropology.Skull sex is traditionally determined by anthropologists through empirical observation of morphology or through measurement and analysis of sexually dimorphic characteristics,which is strongly subjective.This paper proposes an automatic sex recognition method for three-dimensional digital skulls.First,a statistical shape model of skulls is built.Then,the high-dimensional skull features are projected into a low-dimensional shape space.Finally,Fisher discriminant analysis is used to classify the skulls in the low-dimensional shape space.The method combines the advantages of the measurement and morphological approaches and is easy to operate,requiring no professional,tedious manual measurements.In the experiment,267 Uygur skull models were selected,including 114 male and 153 female;76 male and 102 female skulls were used to establish the sex discrimination model,and the rest were used for verification.The results show accuracy rates of 94.7% for Uygur males and 92.1% for females,and leave-one-out cross validation shows that the method has high accuracy.
Improved Block-matching 3D Denoising Algorithm
XIAO Jia, ZHANG Jun-hua, MEI Li-ye
Computer Science. 2019, 46 (6): 288-294.  doi:10.11896/j.issn.1002-137X.2019.06.043
Abstract PDF(14880KB) ( 273 )   
References | Related Articles | Metrics
When dealing with high-contrast images contaminated by Gaussian white noise,the traditional block-matching 3D (BM3D) algorithm cannot completely preserve image edges and texture details,and a ringing effect appears along the denoised edges.To overcome these shortcomings of the traditional BM3D denoising algorithm,this paper proposes an improved denoising algorithm that first applies anisotropic diffusion filtering to the noisy image and then searches for similar blocks along the edge direction instead of the horizontal direction.Experimental results show that the improved algorithm obtains four times as many similar blocks as the traditional method,further improves the PSNR,and better preserves image edges and texture details.
Bayesian Model Saliency Detection Algorithm Based on Multiple Scales and Improved Convex Hull
LU Wen-chao, DUAN Xian-hua, XU Dan, WANG Wan-yao
Computer Science. 2019, 46 (6): 295-300.  doi:10.11896/j.issn.1002-137X.2019.06.044
Abstract PDF(3491KB) ( 313 )   
References | Related Articles | Metrics
Traditional Bayesian-model saliency detection algorithms may perform poorly in terms of precision.This paper therefore proposes a novel algorithm based on a multi-scale improved convex hull.First,the manifold ranking (MR) algorithm is used to extract the image foreground in the CIELab color space,which serves as the prior probability map.Second,the image is down-sampled with a Gaussian pyramid to obtain three scaled images,and the improved convex hull is derived as the intersection of the Harris-corner convex hulls of the three scaled images.Third,the color histogram and the convex hull are combined to calculate the observation likelihood.Finally,the Bayesian model combines the prior probability map and the observation likelihood to compute the saliency map,which is then optimized for better performance.Experiments on the public datasets MSRA1000 and ECSSD show that the proposed algorithm achieves good visual results and improves the precision-recall curves and F-measure.
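The Bayesian fusion step works pixel-wise: the MR foreground map supplies the prior p(sal), and the convex-hull/color-histogram model supplies the likelihoods p(feature | sal) and p(feature | bg). The three example pixel values below are toy numbers for illustration.

```python
# Pixel-wise Bayes rule for saliency (toy values).

def posterior(prior, lik_sal, lik_bg):
    num = prior * lik_sal
    return num / (num + (1 - prior) * lik_bg)   # normalize over sal vs. bg

prior_map = [0.8, 0.5, 0.1]      # MR-based prior for three example pixels
lik_sal   = [0.9, 0.6, 0.2]      # likelihood under the salient model
lik_bg    = [0.1, 0.4, 0.9]      # likelihood under the background model
sal = [posterior(p, ls, lb) for p, ls, lb in zip(prior_map, lik_sal, lik_bg)]
print([round(s, 3) for s in sal])
```

Pixels where prior and likelihood agree are pushed toward 0 or 1, sharpening the saliency map relative to either cue alone.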
Video Fire Detection Method Based on YOLOv2
DU Chen-xi, YAN Yun-yang, LIU Yi-an, GAO Shang-bing
Computer Science. 2019, 46 (6): 301-304.  doi:10.11896/j.issn.1002-137X.2019.06.045
Abstract PDF(2272KB) ( 653 )   
References | Related Articles | Metrics
General flame detection methods adapt poorly to complex scenes, so their detection rates are low. This paper proposed a deep-learning flame detection method based on an improved YOLOv2 network that extracts flame features automatically. To avoid information loss during feature extraction, anchor boxes are selected by clustering, and a multi-scale feature fusion method combines high-level and shallow feature information to further improve the detection rate of the model. Experimental results on the Bilkent University flame video dataset show that the average true detection rate of the proposed method is 98.8% at 40 frames/s, demonstrating strong robustness and real-time performance.
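Anchor selection by clustering, as popularized by YOLOv2, is usually k-means over ground-truth box dimensions with a 1 − IoU distance rather than Euclidean distance. The sketch below illustrates that idea in pure Python under assumed function names; it is not the paper's code:

```python
import random

def iou_wh(box, centroid):
    """IoU of two boxes aligned at the origin (width/height only)."""
    w = min(box[0], centroid[0])
    h = min(box[1], centroid[1])
    inter = w * h
    union = box[0] * box[1] + centroid[0] * centroid[1] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=50, seed=0):
    """Cluster (width, height) pairs with distance = 1 - IoU,
    as in YOLOv2's dimension-cluster anchor selection."""
    rng = random.Random(seed)
    centroids = rng.sample(boxes, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for b in boxes:
            # assign to the centroid with the highest IoU
            i = max(range(k), key=lambda j: iou_wh(b, centroids[j]))
            clusters[i].append(b)
        centroids = [
            (sum(b[0] for b in c) / len(c), sum(b[1] for b in c) / len(c))
            if c else centroids[j]
            for j, c in enumerate(clusters)
        ]
    return centroids
```

On a toy set with one cluster of small boxes and one of large boxes, the two centroids converge to the respective cluster means.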
Crowd Behavior Recognition Algorithm Based on Combined Features and Deep Learning
YUAN Ya-jun, LEE Fei-fei, CHEN Qiu
Computer Science. 2019, 46 (6): 305-310.  doi:10.11896/j.issn.1002-137X.2019.06.046
Abstract PDF(1803KB) ( 379 )   
References | Related Articles | Metrics
The goal of crowd behavior analysis is to better understand and manage the state and tendency of crowd movement. This paper proposed a novel deep-learning crowd behavior recognition method that uses two types of crowd behavior features. Firstly, with the crowd as the main object, a foreground extraction method extracts the static information of the crowd, while the dynamic information is obtained from changes in crowd movement. Then the two crowd behavior features are learned with a convolutional neural network (CNN) model to analyze crowd behaviors. The extraction location and sampling interval of the crowd data are crucial factors in crowd behavior recognition. Experimental results show that the two features describe crowd states in the spatial dimension and crowd changes in the temporal dimension, and that a rational data location and sampling interval effectively improve the expressiveness of the crowd information. Finally, the method was compared with other crowd behavior recognition algorithms. Quantitative and qualitative results demonstrate its validity, and the method yields a better confusion matrix and higher precision.
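The "dynamic information from changes in crowd movement" can be approximated, in its simplest form, by frame differencing between consecutive frames. This is a crude stand-in for the paper's dynamic feature channel, with a hypothetical function name and threshold:

```python
def motion_map(prev_frame, cur_frame, thresh=15):
    """Binary map of pixels whose intensity changed by more than
    `thresh` between two consecutive grayscale frames - a minimal
    proxy for a temporal crowd-movement feature."""
    return [
        [1 if abs(c - p) > thresh else 0 for p, c in zip(pr, cr)]
        for pr, cr in zip(prev_frame, cur_frame)
    ]
```

The resulting binary map could then be stacked with the static foreground map as a second input channel to the CNN.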
Moving Object Detection Based on Continuous Constraint Background Model Deduction
ZHU Xuan, WANG Lei, ZHANG Chao, MEI Dong-feng, XUE Jia-ping, CAO Qing-wen
Computer Science. 2019, 46 (6): 311-315.  doi:10.11896/j.issn.1002-137X.2019.06.047
Abstract PDF(2358KB) ( 158 )   
References | Related Articles | Metrics
Moving object detection is one of the key technologies in the field of machine vision, with wide applications in video surveillance, remote sensing information processing, military reconnaissance and other areas. Considering that adjacent video frames have highly similar backgrounds while shadow and noise are discontinuous, this paper proposed a low-rank decomposition background-updating model with a temporal continuity constraint and applied it to background-subtraction moving object detection. Firstly, low-rank and sparse components are obtained by low-rank decomposition. Then the background is constructed by updating the low-rank component under the temporal continuity constraint. Finally, the moving object is obtained by background subtraction and adaptive threshold segmentation. Experimental results on the FM index and ROC curve show that, compared with state-of-the-art background subtraction methods, the proposed method effectively overcomes the influence of shadow and noise, reduces holes, extracts moving objects more accurately, and has good robustness.
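The background-subtraction pipeline (background model, subtraction, thresholding) can be illustrated with a far simpler background estimator than the paper's low-rank model: a per-pixel temporal median, which likewise exploits the similarity of adjacent frames. This sketch is a stand-in, not the proposed method; names and the threshold are hypothetical:

```python
import statistics

def median_background(frames):
    """Per-pixel temporal median over a window of grayscale frames -
    a simple stand-in for the paper's low-rank background component."""
    h, w = len(frames[0]), len(frames[0][0])
    return [[statistics.median(f[y][x] for f in frames) for x in range(w)]
            for y in range(h)]

def foreground_mask(frame, background, thresh=20):
    """Background subtraction with a fixed threshold (the paper uses
    an adaptive threshold instead)."""
    return [[1 if abs(frame[y][x] - background[y][x]) > thresh else 0
             for x in range(len(frame[0]))]
            for y in range(len(frame))]
```

A transient bright blob appearing in one frame of the window is absent from the median background and so survives into the foreground mask.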
UAV Image Matching Algorithm Based on Improved SIFT Algorithm and Two-stage Feature Matching
SHAO Jin-da, YANG Shuai, CHENG Lin
Computer Science. 2019, 46 (6): 316-321.  doi:10.11896/j.issn.1002-137X.2019.06.048
Abstract PDF(1974KB) ( 590 )   
References | Related Articles | Metrics
Aiming at the long runtime, high cost and heavy computation of UAV aerial image matching, this paper proposed a UAV image matching algorithm based on the scale-invariant feature transform (SIFT) and geometric algebra (GA) to achieve fast feature extraction and matching. Firstly, feature points are detected and described by combining the GA method with the SIFT algorithm. Then two-stage feature matching is performed: the fast library for approximate nearest neighbors (FLANN) algorithm pre-matches the feature points, and the matching results are refined by an improved random sample consensus (RANSAC) algorithm. Experimental results show that, compared with traditional image matching algorithms, the proposed algorithm locates more feature points accurately, greatly speeds up image alignment, and saves considerable time when matching large numbers of UAV images.
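The RANSAC refinement stage can be illustrated with the simplest possible motion model, a pure 2-D translation: repeatedly hypothesize a translation from one putative match and keep the hypothesis supported by the most matches. This is a simplified stand-in for the paper's improved RANSAC (which operates on FLANN pre-matches and a richer model); all names here are hypothetical:

```python
import random

def ransac_translation(matches, iters=100, tol=2.0, seed=0):
    """Estimate a 2-D translation from putative point matches
    [((x1, y1), (x2, y2)), ...] and reject outliers, RANSAC-style."""
    rng = random.Random(seed)
    best_inliers, best_t = [], (0.0, 0.0)
    for _ in range(iters):
        # minimal sample: one match fully determines a translation
        (x1, y1), (x2, y2) = rng.choice(matches)
        tx, ty = x2 - x1, y2 - y1
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - tx) <= tol
                   and abs(m[1][1] - m[0][1] - ty) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers, best_t = inliers, (tx, ty)
    return best_t, best_inliers
```

Given four matches consistent with a (5, 3) shift plus one gross outlier, the outlier hypothesis gathers only its own support and is discarded.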
Face Expression Recognition Model Based on Enhanced Head Pose Estimation
CUI Jing-chun, WANG Jing
Computer Science. 2019, 46 (6): 322-327.  doi:10.11896/j.issn.1002-137X.2019.06.049
Abstract PDF(1556KB) ( 357 )   
References | Related Articles | Metrics
Aiming at the problem that existing expression recognition algorithms neither consider head pose nor exploit high-resolution images, this paper proposed a model that combines a random-forest head pose estimation (RF-HPE) network with a convolutional neural network. First, the input image is intensity-normalized. Then RF-HPE is used to locate the key facial landmarks. Finally, a convolutional neural network extracts features and trains the model. The model reduces the influence of light intensity on the recognition result and improves training accuracy without sacrificing efficiency. Experimental results show that the improved model outperforms other similar models, with significantly improved classification accuracy.
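The intensity normalization mentioned as the first step is commonly a zero-mean, unit-variance rescaling of the grayscale input, which removes global illumination offsets before feature extraction. A minimal sketch under an assumed function name (the paper does not specify its exact normalization):

```python
def normalize_intensity(img):
    """Zero-mean, unit-variance normalization of a grayscale image,
    reducing the influence of global illumination on later features."""
    pixels = [v for row in img for v in row]
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((v - mean) ** 2 for v in pixels) / n
    std = var ** 0.5 or 1.0  # guard against a perfectly flat image
    return [[(v - mean) / std for v in row] for row in img]
```

After normalization, adding a constant brightness offset to every pixel of the input leaves the output unchanged.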
Image Matching Method Combining Hybrid Simulated Annealing and Antlion Optimizer
ZHANG Huan-long, GAO Zeng, ZHANG Xiu-jiao, SHI Kun-feng
Computer Science. 2019, 46 (6): 328-333.  doi:10.11896/j.issn.1002-137X.2019.06.050
Abstract PDF(3410KB) ( 374 )   
References | Related Articles | Metrics
Aiming at the low matching efficiency and accuracy of traditional swarm optimization algorithms in image matching, this paper proposed an image matching method combining hybrid simulated annealing (SA) and the ant lion optimizer (ALO). In this method, the ALO algorithm is applied to image matching for the first time, and its boundary-shrinking mechanism and the interactive search between ants and antlions improve matching efficiency and accuracy. Then, on the basis of a partial-embedding criterion, a simulated annealing mechanism is introduced when the matching result falls into a local optimum: Lévy flights and the Metropolis criterion allow the algorithm to escape the local optimum, improving optimization performance and matching accuracy. Otherwise, the ALO search strategy is used directly to complete the image matching. Experimental results demonstrate the fast matching speed and high matching accuracy of the proposed method.
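The Metropolis criterion at the heart of the simulated annealing component always accepts improving moves and accepts a worsening move with probability exp(−Δ/T), which is what lets the search escape local optima. The following is a generic SA loop on a toy objective, not the paper's hybrid ALO-SA method; all names and parameters are illustrative:

```python
import math
import random

def metropolis_accept(delta, temperature, rng):
    """Metropolis criterion: always accept improvements; accept a
    worse candidate with probability exp(-delta / T)."""
    if delta <= 0:
        return True
    return rng.random() < math.exp(-delta / temperature)

def anneal(cost, neighbor, x0, t0=1.0, cooling=0.95, steps=200, seed=0):
    """Minimal simulated-annealing loop with geometric cooling."""
    rng = random.Random(seed)
    x, t = x0, t0
    best = x
    for _ in range(steps):
        cand = neighbor(x, rng)
        if metropolis_accept(cost(cand) - cost(x), t, rng):
            x = cand
        if cost(x) < cost(best):
            best = x
        t *= cooling  # geometric cooling schedule
    return best
```

On a one-dimensional quadratic with unit-step neighbors, the loop walks from the start point to the global minimum.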