Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 45 Issue 7, 30 July 2018
  
CCF Big Data 2017
Research on Deep Reinforcement Learning
ZHAO Xing-yu, DING Shi-fei
Computer Science. 2018, 45 (7): 1-6.  doi:10.11896/j.issn.1002-137X.2018.07.001
As a new machine learning method, deep reinforcement learning combines deep learning and reinforcement learning, which enables the agent to perceive information from a high-dimensional space, train a model and make decisions according to the received information. Deep reinforcement learning has been widely researched and applied in various fields of daily life because of its universality and effectiveness. Firstly, an overview of deep reinforcement learning research was given and the basic theory of deep reinforcement learning was introduced. Then value-based algorithms and policy-based algorithms were introduced. After that, the application prospects of deep reinforcement learning were discussed. Finally, the related studies were summarized and future directions were discussed.
Survey on Performance Optimization Technologies for Spark
LIAO Hu-sheng, HUANG Shan-shan, XU Jun-gang, LIU Ren-feng
Computer Science. 2018, 45 (7): 7-15.  doi:10.11896/j.issn.1002-137X.2018.07.002
In recent years, with the advent of the era of big data, big data processing platforms have developed very fast. A large number of big data processing platforms, including Hadoop, Spark and Storm, have appeared, among which Apache Spark is the most prominent one. With the wide application of Spark at home and abroad, many performance problems remain to be solved. As the underlying implementation mechanism of Spark is very complex, it is difficult for ordinary users to find performance bottlenecks, let alone carry out further optimization. In light of the above problems, the performance optimization technologies for Spark were summarized and analyzed from five aspects: development principle optimization, memory optimization, configuration parameter optimization, scheduling optimization and shuffle process optimization. Finally, the key problems of Spark optimization technologies were summarized and future research issues were proposed.
Collaborative Filtering Recommendation Algorithm Based on Space Transformation
ZHAO Xing-wang, LIANG Ji-ye, GUO Lan-jie
Computer Science. 2018, 45 (7): 16-21.  doi:10.11896/j.issn.1002-137X.2018.07.003
In real applications, traditional collaborative filtering recommendation algorithms usually face the problem of computational scalability. To solve this problem, within the framework of item-based collaborative filtering recommendation, a collaborative filtering recommendation algorithm based on space transformation was proposed in this paper. Concretely speaking, according to the user social network information, the users are firstly divided into different clusters by using a community discovery algorithm. Then, item clusters are found according to the correspondence between users and items in the rating matrix, and the membership of each item to each item cluster is calculated. The sparse high-dimensional rating matrix is thus transformed into a low-dimensional dense membership matrix, and the similarities between items are then computed on the transformed matrix. The proposed algorithm was compared with other algorithms on public data sets. The experimental results show that the proposed algorithm can significantly improve the computational efficiency while guaranteeing the accuracy of recommendation.
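The space-transformation step described in this abstract can be sketched roughly as follows. This is a minimal illustration only: it assumes the user communities are already given, and it defines an item's membership in a community as the share of that item's ratings contributed by the community's users, which is an assumed stand-in for the paper's membership formula; item similarities are then computed on the resulting dense low-dimensional matrix.

```python
# Sketch of item-based CF in a transformed "community membership" space.
# Assumption: membership(item, community) = fraction of the item's ratings
# contributed by users of that community (illustrative, not the paper's formula).
import numpy as np

def item_membership(ratings, user_community, n_communities):
    """ratings: (n_users, n_items), 0 = unrated; user_community: label per user."""
    rated = ratings > 0
    member = np.zeros((ratings.shape[1], n_communities))
    for c in range(n_communities):
        member[:, c] = rated[user_community == c].sum(axis=0)
    total = member.sum(axis=1, keepdims=True)
    return np.divide(member, total, out=np.zeros_like(member), where=total > 0)

def item_similarity(member):
    """Cosine similarity between items in the dense, low-dimensional space."""
    norm = np.linalg.norm(member, axis=1, keepdims=True)
    unit = np.divide(member, norm, out=np.zeros_like(member), where=norm > 0)
    return unit @ unit.T

# Toy usage: 6 users in 2 communities rating 4 items.
R = np.array([[5, 4, 0, 0], [4, 0, 1, 0], [5, 5, 0, 1],
              [0, 0, 4, 5], [1, 0, 5, 4], [0, 1, 4, 4]])
labels = np.array([0, 0, 0, 1, 1, 1])
print(np.round(item_similarity(item_membership(R, labels, 2)), 2))
```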
Influence of Noisy Features on Internal Validation of Clustering
YANG Hu, FU Yu, FAN Dan
Computer Science. 2018, 45 (7): 22-30.  doi:10.11896/j.issn.1002-137X.2018.07.004
Internal validation measures of clustering are extremely essential in clustering analysis. They are used to evaluate the quality of clustering results and serve as indicators for finding the optimal number of clusters when the true structure of the samples is unknown. Although a large number of studies have focused on the performance of internal validation measures of clustering and have found that some measures perform better than others, they ignore the influence of noisy features existing in real data, which may mislead the selection and application of internal validation measures. This study selected 10 clustering validation measures to determine the number of clusters on simulated datasets and real datasets, so as to analyze the influence of noisy features on the choice of internal validation measures and on clustering results. The results indicate that noisy features in a dataset affect all the examined internal validation indices except KL, CH and CCC, and that the accuracy of the clustering results decreases as the noise increases.
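The experiment this abstract describes can be illustrated with a small, self-contained sketch: pick the number of clusters with one internal index (the Calinski-Harabasz index, i.e. CH, via scikit-learn) on a clean dataset and again after appending irrelevant noisy features. The data generation and noise level below are assumptions made only for illustration.

```python
# Illustration: how appended noisy features can shift the k chosen by an
# internal validity index (CH). Dataset and noise parameters are arbitrary.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score

def best_k_by_ch(X, k_range=range(2, 9)):
    scores = {k: calinski_harabasz_score(
                  X, KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X))
              for k in k_range}
    return max(scores, key=scores.get)

X, _ = make_blobs(n_samples=300, centers=4, cluster_std=1.0, random_state=0)
rng = np.random.default_rng(0)
X_noisy = np.hstack([X, rng.normal(0, 5.0, size=(300, 6))])  # add 6 noisy features

print("chosen k on clean data:", best_k_by_ch(X))
print("chosen k with noise   :", best_k_by_ch(X_noisy))
```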
Ensemble Learning Method for Imbalanced Data Based on Sample Weight Updating
CHEN Sheng-ling, SHEN Si-qi, LI Dong-sheng
Computer Science. 2018, 45 (7): 31-37.  doi:10.11896/j.issn.1002-137X.2018.07.005
The problem of imbalanced data is prevalent in various applications of big data and machine learning, such as medical diagnosis and anomaly detection. Researchers have proposed or used a number of methods for imbalanced learning, including data sampling (e.g. SMOTE) and ensemble learning (e.g. EasyEnsemble) methods. The oversampling methods in data sampling may suffer from over-fitting or low classification accuracy on boundary samples, while the under-sampling methods may lead to under-fitting. The Rotation SMOTE algorithm was proposed in this paper by incorporating the basic ideas of SMOTE, Bagging, Boosting and other algorithms, and SMOTE was used to indirectly increase the weight of minority samples based on the prediction results of the base classifier in the Boosting process. Following the basic idea of Focal Loss, this paper further proposed the FocalBoost algorithm, which directly optimizes the sample weight updating strategy of AdaBoost based on the prediction results of the base classifier. Experiments with multiple evaluation metrics on 11 imbalanced data sets from different application fields show that Rotation SMOTE obtains the highest recall score on all datasets compared with other imbalanced learning algorithms (including SMOTEBoost and EasyEnsemble), and achieves the best or second-best G-means and F1-score on most datasets, while FocalBoost outperforms the original AdaBoost on 9 of these imbalanced datasets.
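The FocalBoost idea described above (modulating AdaBoost's sample-weight update with a focal-loss-style factor so that easy examples are damped and hard minority examples gain weight) can be sketched as follows. The modulation term `(1 - p_t)**gamma` and all parameter values are illustrative assumptions; the paper's exact update rule is not reproduced here.

```python
# Sketch of an AdaBoost-style loop with a focal-loss-inspired weight update.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def focal_boost(X, y, n_rounds=50, gamma=2.0):
    """y in {-1, +1}. Returns a list of (alpha, weak_learner) pairs."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        # p_t: probability the stump assigns to the true class of each sample.
        p_t = stump.predict_proba(X)[np.arange(n), (y == 1).astype(int)]
        # Standard exponential update, modulated by the focal factor (1 - p_t)^gamma.
        w *= np.exp(-alpha * y * pred) * (1.0 - p_t) ** gamma
        w = np.clip(w, 1e-12, None)
        w /= w.sum()
        ensemble.append((alpha, stump))
    return ensemble

def predict(ensemble, X):
    return np.sign(sum(a * s.predict(X) for a, s in ensemble))

if __name__ == "__main__":
    from sklearn.datasets import make_classification
    X, y = make_classification(n_samples=400, weights=[0.9, 0.1], random_state=0)
    y = np.where(y == 1, 1, -1)
    print("train accuracy:", (predict(focal_boost(X, y), X) == y).mean())
```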
Network Representation Model Based on Multi-architectures and Text Fusion
LI Jia-yi, ZHAO Yu, WANG Li
Computer Science. 2018, 45 (7): 38-41.  doi:10.11896/j.issn.1002-137X.2018.07.006
Network representation obtains the vector representations of nodes by deeply learning network structure,and mines the potential information on the network,which is an important method of reducing dimension in social computing.As for TADW,which is a network representation method based on matrix decomposition and combining text and structure,this paper first analyzed and discussed the influence of the location of text attributes matrix on network representation.Then,it proposed a social network representation method that incorporates relationship structure,interaction structure and textual attributes.Experimental results on multiple datasets show that the proposed method outperforms other classical network representation methods in classification tasks.
R-tree for Frequent Updates and Multi-user Concurrent Accesses Based on HBase
WANG Bo-tao, LIANG Wei, ZHAO Kai-li, ZHONG Han-hui, ZHANG Yu-qi
Computer Science. 2018, 45 (7): 42-52.  doi:10.11896/j.issn.1002-137X.2018.07.007
Applications based on location based services (LBS) have entered the era of big data. Traditional location based service techniques face new challenges such as scalability and performance. Cloud computing technology is the basis of big data processing, and indexing is an important way to optimize queries. Although a large number of research results exist, as far as we know, there is no R-tree index that supports frequent updates and multi-user concurrent accesses based on HBase. According to the above characteristics of moving objects, this paper proposed a new R-tree index that supports frequent updates and multi-user concurrent accesses based on HBase. In this new index, the R-tree only indexes the grid cell that contains the moving object, which effectively avoids the problem of frequent updating. Furthermore, based on the organization of HBase data rows and the I/O characteristics of data partitions, this paper reorganized the nodes and encoded the grid cells with z-order, which reduces the read and write operations of HBase and improves query efficiency. Finally, it proposed an optimization strategy for distributed read-write locks based on Zookeeper, which improves the throughput of the new index. The experimental results show that, compared with the grid index, the query throughput of the proposed strategy is improved by 25%~50% and the update throughput is about the same level in the case of uneven data. Compared with the index using distributed shared locks, the query throughput of the index using distributed read-write locks is increased by nearly 40%.
NCIS 2017
Survey on Storage Security of Emerging Non-volatile Memory
LI Yue, WANG Fang
Computer Science. 2018, 45 (7): 53-60.  doi:10.11896/j.issn.1002-137X.2018.07.008
The age of big data brings new opportunities and challenges to memory/storage systems. The traditional main memory architecture based on DRAM faces problems of capacity, energy consumption and reliability. New non-volatile memory (NVM) devices are non-volatile and byte-addressable, and have low idle power consumption, so they can serve as persistent storage, main memory or storage class memory (SCM). Although NVM devices provide new choices for the evolution of traditional memory/storage systems, they also raise security concerns. For the NVM device itself, its endurance is limited, so frequent writes to one place can wear it out and seriously affect the device's lifetime. When NVM devices work as main memory, the non-volatile feature keeps data persistent in the devices, so attackers can steal the data and extract sensitive information or tamper with it. When NVM devices work together with DRAM as heterogeneous memory, hard-to-find pointer problems may occur because of the non-volatile feature of NVM. In addition, an NVM device can work as SCM because it is byte-addressable like DRAM, and applications can directly operate on NVM devices through a load/store interface, bypassing the file system. This paper surveyed solutions concerning wear-leveling, reducing write operations, reducing write amounts, main memory encryption, and the design of consistency and rights management mechanisms. Finally, it discussed issues that still need attention from the aspects of hardware, OS and programming model.
Performance Optimization of LSM Tree Key-value Storage System Based on SSD-SMR Hybrid Storage
WANG Yang-yang, WEI Hao-cheng, CHAI Yun-peng
Computer Science. 2018, 45 (7): 61-65.  doi:10.11896/j.issn.1002-137X.2018.07.009
Due to the higher requirements on scalability, performance and cost that big data imposes on storage systems, Shingled Magnetic Recording (SMR) disks are widely used in big data storage systems for their high storage density and low cost. However, since the random write performance of SMR disks is usually weak, hybrid storage consisting of SMR disks and fast Flash-based Solid State Drives (SSDs) can improve performance significantly. Meanwhile, write-optimized Log-Structured Merge (LSM) tree based key-value storage systems have been widely used in many NoSQL systems, such as BigTable, Cassandra and HBase. Therefore, how to construct a fast LSM tree key-value storage system based on SSD-SMR hybrid storage is a research problem of great practical significance. This paper first built a performance model of an LSM tree key-value storage system based on SSD-SMR hybrid storage, then designed a performance-optimized LSM tree key-value storage system and implemented it based on LevelDB. The evaluation results indicate that, compared with an HDD-based solution, the system based on SSD-SMR hybrid storage improves random-write performance by 20% and random-read performance by 6 times with only a very small SSD (i.e., 0.4%~2% of the disk capacity).
Energy Consumption Optimization Scheme for New Energy-driven Storage System
ZHUANG Xiao-zhao, WAN Ji-guang, ZHANG Yi-wen, QU Xiao-yang
Computer Science. 2018, 45 (7): 66-72.  doi:10.11896/j.issn.1002-137X.2018.07.010
The growth of energy costs and increasing environmental problems make data centers face severe challenges, and the introduction of economical and environment-friendly new energy is imminent. However, new energy sources have several characteristics such as intermittency, instability and abrupt changes, so data centers cannot effectively adapt to new energy. To this end, major data centers have put forward many solutions, such as energy management strategies and load scheduling algorithms. However, most of the existing research results emphasize the optimization of energy consumption for computation and cannot be further adapted to storage. Therefore, this paper presented an energy consumption optimization scheme for a new energy-driven storage system, which uses the characteristics of different storage media and an online-offline workload classification model to match workload energy consumption requirements with the new energy supply. In order to guarantee performance and improve the energy efficiency of the storage system, a dual-drive energy control method and virtualization consolidation technology were used to achieve a fine-grained energy control scheme. In addition, this paper also designed and implemented an offline-workload scheduling algorithm to further improve the utilization of new energy. The experimental results show that the optimization scheme for the new energy-driven storage system can make the utilization rate of new energy reach 95%, while ensuring that the performance degradation of the storage system is less than 9.8%.
Greedy Frog Leaping Algorithm for 01 Knapsack Problem
GAO Si-qi, XING Yu-xuan, XIAO Nong, LIU Fang
Computer Science. 2018, 45 (7): 73-77.  doi:10.11896/j.issn.1002-137X.2018.07.011
The 0-1 knapsack problem (01KP) is a classic combinatorial optimization problem and appears in many models of real-world problems, such as cargo loading, budget control, resource allocation and financial management. Many scientists have studied this area for several decades and achieved rich results. Even though the 01KP has been researched for many years, it has been proved to be NP-complete, so finding the best solution is not easy. In recent years, many original and improved intelligent algorithms have been proposed to address the 0-1 knapsack problem, for instance, the chemical reaction optimization algorithm, genetic algorithm, particle swarm optimization algorithm, shuffled frog leaping algorithm, artificial bee colony algorithm, hill climbing algorithm and simulated annealing algorithm. This paper studied many intelligent algorithms and 0-1 knapsack problems, and proposed the greedy frog leaping algorithm (GFLA) to solve the 01KP. Different from the original shuffled frog leaping algorithm, GFLA always updates the global best solution during each memeplex process, so that global exploration can employ the latest global best solution to expand the search space. In addition to traditional global exploration and local exploitation, a greedy scheme consisting of a drop step and an add step is applied at the fitness computing stage. At the drop step, if the knapsack is overweight, the item with the minimum value density is removed from the knapsack and the solution is updated. At the add step, if there is still space in the knapsack to load items, unloaded items with the minimum weight are put into the knapsack and the solution is updated as well. The drop and add steps adopted in this work greatly improve the greedy frog leaping algorithm's ability to address the 01KP. This paper conducted experiments on benchmarks and compared the results with the chemical reaction optimization algorithm, genetic algorithm, quantum-inspired evolutionary algorithm and chemical reaction optimization with greedy strategy. All experimental results show that the greedy frog leaping algorithm obtains the best solutions in all cases, which indicates that GFLA is an efficient algorithm for the 0-1 knapsack problem.
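The drop/add greedy repair described in this abstract can be sketched directly; the surrounding shuffled-frog-leaping search (memeplex updates, global best handling) is omitted, and the code below is only an illustration of the two repair steps.

```python
# Greedy repair of a 0-1 knapsack solution: drop lowest value-density items while
# overweight, then add the lightest unloaded items that still fit.
def repair(solution, values, weights, capacity):
    sol = list(solution)
    total_w = sum(w for s, w in zip(sol, weights) if s)
    # Drop step: while the knapsack is overweight, remove the loaded item with
    # the minimum value density (value / weight).
    while total_w > capacity:
        loaded = [i for i, s in enumerate(sol) if s]
        worst = min(loaded, key=lambda i: values[i] / weights[i])
        sol[worst], total_w = 0, total_w - weights[worst]
    # Add step: while space remains, insert the lightest items not yet loaded.
    for i in sorted(range(len(sol)), key=lambda i: weights[i]):
        if not sol[i] and total_w + weights[i] <= capacity:
            sol[i], total_w = 1, total_w + weights[i]
    return sol

values, weights = [10, 40, 30, 50], [5, 4, 6, 3]
print(repair([1, 1, 1, 1], values, weights, capacity=10))
```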
NMST:A Persistent Memory Management Optimization Approach Based on Segment Tree
HOU Ze-yi, WAN Hu, XU Yuan-chao
Computer Science. 2018, 45 (7): 78-83.  doi:10.11896/j.issn.1002-137X.2018.07.012
The emergence of non-volatile memory (NVM) has led to innovation in programming models. Existing library-based programming models provide ACID properties to solve the data consistency problem for persistent memory systems. However, they introduce huge overhead when allocating persistent memory dynamically, thereby degrading application performance. In this paper, an optimization approach based on a segment tree, called NMST, was proposed to speed up persistent memory allocation and management. NMST targets the NVM Library (NVML), a representative library-based programming model. Furthermore, an optimized NMST was proposed to mitigate the large overhead of the segment tree in maintaining continuous space by constructing a segment tree with multi-granularity leaf nodes. The experimental results show that NMST reduces latency by 36.9% compared with traditional methods when allocating persistent memory, and the optimized NMST reduces latency by 43.6%. The results also demonstrate that the performance improvement is closely related to the quantity and granularity of persistent memory allocations in programs.
Network & Communication
Routing Algorithm Based on Meteor-burst Communication
GAO Hang, MU Xiao-dong, YI Zhao-xiang, TONG Tong, YUAN Tan-en
Computer Science. 2018, 45 (7): 84-89.  doi:10.11896/j.issn.1002-137X.2018.07.013
Meteor-burst communication is an important emergency communication mode, and its communication network is characterized by long transmission delays and intermittent link interruptions. Routing algorithms applicable to this special network are highly specific and need further study. Based on a study of the meteor-trail network topology, this paper built a meteor-burst network model with the OPNET simulation software, and proposed an improved algorithm OED (Optimistic Earliest Delivery) by analyzing the communication delay model in combination with the ED (Earliest Delivery) and EDLQ (Earliest Delivery with Local Queue) algorithms for DTN (Delay Tolerant Networks). Based on the established model, the data transmission success rate and network throughput were simulated. The simulation results show that the OED algorithm is superior to the ED and EDLQ algorithms in terms of network data throughput and data transmission success rate, and it can overcome packet loss caused by queue overflow. By increasing node capacity, the data delivery rate of the OED algorithm is increased by 20% compared with the ED algorithm and by 8% compared with the EDLQ algorithm. The choice of routing algorithm does not affect the average duration and the average interruption latency of links between meteor nodes. The OED algorithm has strong adaptability in meteor-trail networks, which can provide a reference for meteor-burst network construction.
Efficient Task Scheduling Algorithm Based on Cloud Environment
ZHONG Zhi-feng, ZHANG Tian-tian, ZHANG Yan, YI Ming-xing, ZENG Zhang-fan
Computer Science. 2018, 45 (7): 90-94.  doi:10.11896/j.issn.1002-137X.2018.07.014
Efficient task scheduling is crucial for handling business efficiently and cutting down operating costs for cloud service providers. To improve the performance of task scheduling in cloud environments, this paper proposed a new algorithm, namely greedy simulated annealing (G&SA). Firstly, it finds a local optimal solution by executing the greedy algorithm, which is used to initialize the current optimal solution of the G&SA algorithm and the initial solution of the simulated annealing algorithm. Secondly, the current optimal solution is updated by simulated annealing. The experiments show that the G&SA algorithm can achieve global convergence faster than traditional task scheduling algorithms. In addition, the G&SA algorithm not only obtains more stable optimization results and improves the quality and efficiency of optimization, but also reduces the total task time cost. The average resource utilization rate of virtual machines stays at 99% or more, and the load is more balanced.
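A rough sketch of the greedy-plus-simulated-annealing combination described above is given below: a greedy pass assigns each task to the virtual machine that would finish it earliest, and that assignment seeds a simulated-annealing search that minimizes the makespan. The greedy rule, the neighbourhood move and the annealing parameters are illustrative assumptions rather than the paper's exact design.

```python
# Greedy initialization + simulated annealing for task-to-VM scheduling (sketch).
import math, random

def makespan(assign, task_len, vm_speed):
    load = [0.0] * len(vm_speed)
    for t, vm in enumerate(assign):
        load[vm] += task_len[t] / vm_speed[vm]
    return max(load)

def greedy_init(task_len, vm_speed):
    load, assign = [0.0] * len(vm_speed), [0] * len(task_len)
    for t in sorted(range(len(task_len)), key=lambda i: -task_len[i]):
        vm = min(range(len(vm_speed)), key=lambda v: load[v] + task_len[t] / vm_speed[v])
        load[vm] += task_len[t] / vm_speed[vm]
        assign[t] = vm
    return assign

def gsa_schedule(task_len, vm_speed, T=100.0, cool=0.97, steps=2000):
    cur = greedy_init(task_len, vm_speed)            # greedy solution seeds SA
    cur_cost = makespan(cur, task_len, vm_speed)
    best, best_cost = cur[:], cur_cost
    for _ in range(steps):
        cand = cur[:]
        cand[random.randrange(len(cand))] = random.randrange(len(vm_speed))
        cost = makespan(cand, task_len, vm_speed)
        if cost < cur_cost or random.random() < math.exp((cur_cost - cost) / T):
            cur, cur_cost = cand, cost
            if cost < best_cost:
                best, best_cost = cand[:], cost
        T *= cool                                    # geometric cooling schedule
    return best, best_cost

tasks = [random.randint(10, 100) for _ in range(30)]
print("makespan:", gsa_schedule(tasks, vm_speed=[1.0, 1.5, 2.0])[1])
```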
Virtual Network Reconfiguration Algorithm for Nodes Load Balancing
LI Zhen-tao, MENG Xiang-ru, ZHAO Zhi-yuan, SU Yu-ze
Computer Science. 2018, 45 (7): 95-98.  doi:10.11896/j.issn.1002-137X.2018.07.015
In order to improve the acceptance ratio of virtual network embedding, this paper proposed a virtual network reconfiguration algorithm based on node load balancing, which aims to overcome the problem of load imbalance among physical network nodes. The algorithm sets a threshold based on the available and minimal physical node resources, resets the physical nodes that exceed the threshold, and formulates a flexible strategy for selecting virtual nodes on the reconfigured physical nodes. The method reduces the number of migrated virtual nodes while balancing the physical node load. The simulation results show that the method balances the physical node load, and improves the acceptance ratio of virtual network requests and the utilization rate of physical network resources.
Range-free Localization Based on Compressive Sensing Using Multiple Measurement Vectors
GUO Yan, YANG Si-xing, LI Ning, SUN Bao-ming, QIAN Peng
Computer Science. 2018, 45 (7): 99-103.  doi:10.11896/j.issn.1002-137X.2018.07.016
Conventional compressive sensing based localization algorithms are usually range-based and need accurate information about the targets. As a result, they cannot be applied in low-cost systems. This paper proposed a range-free localization algorithm beyond connectivity using multiple measurement vectors, which can largely reduce localization errors. On the one hand, this paper designed a new localization model for range-free localization beyond wireless connectivity. On the other hand, it adaptively obtains multiple localization measurements in the area to improve localization accuracy. The proposed algorithm is easy to operate and can be widely applied. Finally, it is shown that the new method has high accuracy and robustness.
Power Control Based on Fairness in D2D Underlaid Cellular Networks
WANG Zhen-chao, ZHAO Yun, XUE Wen-ling
Computer Science. 2018, 45 (7): 104-109.  doi:10.11896/j.issn.1002-137X.2018.07.017
For the resource allocation problem in device-to-device underlaid cellular networks,this paper studied the power control problem with regarding the fairness of transmitting rate between links as the goal for the first time.Joint channel allocation and power control optimization for maximization of the overall network throughput is formed for the system model.In order to reduce the complexity of the problem,it is decoupled into channel allocation and power control.Assuming that optimal matching set has been got previously,this paper focused on power control problem.Optimal transmitting power solutions which are in closed form or can be searched from a finite set are separately induced for the power control for fairness,the power control for maximization of the overall network throughput,and the power control for maximization of the overall network throughput under the condition that fairness has been guaranteed.Simulation results confirm that the fairness between links using the same channel can be improved significantly.
Sensor Node Deployment Model Based on Linear Programming
XU Tao, DU Yu-xuan, LV Zong-lei
Computer Science. 2018, 45 (7): 110-115.  doi:10.11896/j.issn.1002-137X.2018.07.018
A sensor node deployment model based on linear programming was proposed to cover the interesting region with the least sensor nodes and ensure the connectivity between sensor nodes.In this model,the connectivity of sensor nodes is calculated by transitive closure,and the logical equations are transformed into linear equations.Then the optimal solution of the model is obtained.At the same time,the full coverage experiments under different grid sizes are designed to verify the correctness of the model.Moreover,this model can set its own maximum hops,interesting regions and the location of sink nodes,and the exact solution can be used as a comparison criterion for the approximate solution of the sensor node deployment model.
Caching and Replacing Strategy in Information-centric Network Based on Content Popularity and Community Importance
LUO Jian-zhen, CAI Jun, LIU Yan, ZHAO Hui-min
Computer Science. 2018, 45 (7): 116-121.  doi:10.11896/j.issn.1002-137X.2018.07.019
The temporal and spatial distribution of content in the existing information-centric networking (ICN) cache strategy is irrational,and there exists the problem of invalid caching and homogeneous caching.In light of this,this paper proposed a caching and replacing strategy in ICN based on content popularity and community importance.In this strategy,content popularity and community importance of node are combined to select caching nodes,and the contents with different popularities are cached discretely in nodes with different community importance,so as to make the spatial distribution of content be reasonable,thus improving the diversification of caching contents.Meanwhile,the replacing strategy based on local popularity in a community is propitious to optimize the temporal distribution of caching content,so that the spatial distribution of content can be adjusted dynamically.The experimental results show that the proposed strategy can effectively reduce the average response time of user interest and improve cache hit rate and cache difference in the whole network.
Information Security
Probabilistic Graphical Model Based Approach for Bank Telecommunication Fraud Detection
LIU Xiao, WANG Xiao-guo
Computer Science. 2018, 45 (7): 122-128.  doi:10.11896/j.issn.1002-137X.2018.07.020
Over the past few years, telecommunication fraud has caused enormous economic losses for bank users. Existing detection methods first extract statistical features, such as RFM (Recency, Frequency, Monetary Value) features of user transactions, and then use supervised learning algorithms to detect fraudulent transactions or fraudulent accounts by training classifiers. However, the RFM features do not make use of the structure of the transaction network. This paper designed a pairwise Markov random field to capture the characteristics of the network structure in telecommunication fraud. Then, it exploited a linear loopy belief propagation algorithm to estimate the posterior probability distribution and predict the label of an account. Finally, it compared the proposed method with CIA and SybilRank on both a synthetic dataset and a real-world dataset. The results show that the proposed method outperforms the other methods and can improve the F1-score of the RFM-feature-based method.
Role Matching Access Control Model for Distributed Workflow
HE Si-yuan, OU Bo, LIAO Xin
Computer Science. 2018, 45 (7): 129-134.  doi:10.11896/j.issn.1002-137X.2018.07.021
In the distributed workflow,it is required to assign the users with appropriate roles for the security concerns.This paper proposed a role matching access control model under distributed workflow environment to address the optimal role matching problem for a given authorization.According to different tasks of workflow,the model can find a set or multiple sets of roles with relevant executive authority from the system role,and then optimize the role matching by considering the reference environment,time constraints and the inheritance relationship among the roles.The experimental results show that the model can eliminate redundant roles,and assign a set of minimum set of roles for users,thus achieving the role matching optimization.
Modeling and Analysis of Botnet with Heterogeneous Infection Rate
NIU Wei-na, ZHANG Xiao-song, YANG Guo-wu, ZHUO Zhong-liu, LU Jia-zhong
Computer Science. 2018, 45 (7): 135-138.  doi:10.11896/j.issn.1002-137X.2018.07.022
Botnets, as a common attack platform, use advanced anonymous networks and malicious code techniques to provide abundant effective resources for APT attacks. In order to effectively control large-scale outbreaks of botnets, it is necessary to study their construction rules. This work proposed a botnet propagation model with heterogeneous infection rates based on a disease model, since nodes in different regions have different infection rates. By analyzing the characteristics of the botnet in the steady state, the mean-field approach is used to study its propagation characteristics from a dynamic point of view. Then, how the heterogeneous infection rate affects the botnet propagation threshold in a BA network is explored. The experimental results show that the proposed model is more realistic, and that the relationship between the threshold and the heterogeneous infection rate has nothing to do with the number of nodes.
One-off Public Key Scheme for Preventing Dishonest Third Party Attacking
CHAI Lin-peng, ZHANG Bin
Computer Science. 2018, 45 (7): 139-142.  doi:10.11896/j.issn.1002-137X.2018.07.023
Aiming at the problem that existing schemes cannot resist the malicious behaviors of a dishonest third party, this paper proposed an improved one-off public key scheme that doubly restrains the behaviors of the third party. In this scheme, users and service providers can judge whether the third party is honest by verifying the identity index published by the third party and the publicly verifiable information generated during private key extraction, so the scheme can restrain the dishonest behaviors of the third party. At the same time, the index algorithm improves the efficiency of tracing malicious users.
Reliable Logic Analysis Method of Multi-party Non-repudiation Protocol
YUAN Bo-ao, LIU Jun
Computer Science. 2018, 45 (7): 143-149.  doi:10.11896/j.issn.1002-137X.2018.07.024
A multi-party non-repudiation protocol needs to meet three main security goals: non-repudiation, fairness and timeliness, but the existing formal analysis methods for multi-party non-repudiation protocols are merely simple extensions of those applied to two-party protocols. Moreover, none of these methods can cover all three security goals, and their ability to analyze any single goal is limited, making the results unreliable. In this paper, the existing analysis methods were compared and SVO logic was chosen for further study. A time factor was introduced into the logic system, with the relevant syntax definitions and deduction axioms stated explicitly. Then, the semantic model of the improved logic was presented and the soundness of the logic system was proved, so that the improved logic system can support the analysis of all three security goals of multi-party non-repudiation protocols. Finally, a typical multi-party non-repudiation protocol was analyzed with the improved logic; defects in timeliness and fairness were found and the corresponding attacks were described. Among them, the fairness defect was discovered for the first time.
Multiple Encrypted Storage Technology of User Information Based on Big Data Analysis
CHEN Gui-ping, WANG Zi-niu
Computer Science. 2018, 45 (7): 150-153.  doi:10.11896/j.issn.1002-137X.2018.07.025
In big data analysis, when information is encrypted and stored, existing approaches use a linear differential solution to optimize the hybrid encrypted storage method. During key expansion, the information encrypted and stored in the information link undergoes nonlinear mutation, resulting in low security of the encrypted storage. This paper proposed a multiple encrypted storage method for user information based on ultra-wideband technology. Chaotic mapping is used to add a protection shell to user information, achieving multiple encryption of user information under big data analysis. The disadvantages of current methods are overcome, and the nonlinear mutation of encrypted information storage is reduced. Multiple encrypted storage of user information is implemented with ultra-wideband multiple encryption storage technology, which effectively enhances the attack resistance of the encrypted stored information and improves the security of the multiple encrypted storage of user information. The experimental results show that the security of information is improved when this method is used for multiple encrypted storage.
Network Nearest Neighbor Intrusion Detection Algorithm Based on Adaptive Convolution Filtering
LU Qiang, YOU Rong-yi, YE Xiao-hong
Computer Science. 2018, 45 (7): 154-157.  doi:10.11896/j.issn.1002-137X.2018.07.026
The intrusion into nearest-neighbor routing nodes in deep wireless sensor combination networks is characterized by rapidly varying load, and it is difficult to effectively identify the types of attacks and abnormal network behaviors. Therefore, this paper proposed a network nearest-neighbor intrusion detection algorithm based on adaptive convolution filtering. Network traffic is collected in the deep wireless sensor combination network, and a network intrusion signal model is constructed. The energy density and attack strength of the network intrusion signal are analyzed in the time and frequency domains, and blind source filtering and abnormal feature extraction of network information are achieved by constructing an adaptive convolution filter. A joint time-frequency analysis method is used to estimate the spectrum parameters of the neighbor information of network intrusion features, and intrusion detection in the wireless sensor network is performed according to the abnormal distribution of spectrum features. Simulation results show that this method has high accuracy for network intrusion detection, has strong recognition and generalization ability for unknown network traffic sample sequences, and is superior to the HHT detection method and the energy management method.
Authentication Protocol for Smart Meter Based on Quadratic Residues
XU Yang, YUAN Jin-sha, GAO Hui-sheng, ZHAO Zhen-bing
Computer Science. 2018, 45 (7): 158-161.  doi:10.11896/j.issn.1002-137X.2018.07.027
Under the EPC Class 1 Gen-2 standard, only a small amount of memory in an RFID tag is available for security operations. Therefore, this paper proposed an RFID authentication protocol based on the quadratic residue property. The protocol guarantees forward security and the anonymity of the tag by presetting the hash value of the smart meter ID in the reader and the tag, and validates the identity of the tag by the quadratic residue property. BAN logic is then used to prove its security. Compared with two other smart meter authentication protocols, this protocol has low complexity and can resist many kinds of attacks, ensuring user privacy and security.
Attribute-based Proxy Re-encryption Technology and Fault-tolerant Mechanism Based Data Retrieval Scheme
LIU Xin-yu, LI Lang, XIAO Bing-bing
Computer Science. 2018, 45 (7): 162-166.  doi:10.11896/j.issn.1002-137X.2018.07.028
Aiming at the privacy problem of user information stored on cloud servers, a scheme based on attribute-based proxy re-encryption and a fault-tolerant mechanism was proposed. This scheme divides the data stored by users into files and the security indexes of the files, encrypts them separately and stores them on different cloud servers. Firstly, the security index of a file is constructed using an inverted structure and the keywords are preprocessed with a fuzzy extractor, so that users can perform fault-tolerant multi-keyword searches through the security index. Secondly, an access control tree is used to re-encrypt the decryption key to realize rights management, namely the effective sharing of data in the cloud. Finally, the scheme is proved to be secure in the cloud environment based on the Complex Triple Diffie-Hellman problem, showing that the system master key generated by this scheme is secure. Comparison with existing schemes shows that the scheme can reduce the computational complexity of key re-encryption and decryption, and the fault-tolerant mechanism improves the efficiency of data retrieval.
Artificial Intelligence
Rapid Knowledge Reduction of Large-scale Truth Table Based on Variable Granularity
SONG Bo, YAN Ji-xiong, CHEN Ze-hua
Computer Science. 2018, 45 (7): 167-171.  doi:10.11896/j.issn.1002-137X.2018.07.029
In the analysis and design of large-scale logic circuits, the process of obtaining the simplest logic function expression from a large-scale truth table is often complicated. Aiming at this problem, a rapid knowledge reduction algorithm based on variable granularity for large-scale truth tables was proposed in this paper. As the granularity of the input logical variables changes, the simplest logical function expression is quickly acquired from the large-scale truth table by introducing a marker matrix and a heuristic operator. The algorithm is then described in detail through an example, and its correctness is proved mathematically. Finally, comparative experiments on data sets are carried out to further demonstrate the speed and effectiveness of the proposed algorithm.
Improved OCCF Method Considering Task Relevance and Time for Task Recommendation
WANG Gang, WANG Han-ru, HU Ke, HE Xi-ran
Computer Science. 2018, 45 (7): 172-177.  doi:10.11896/j.issn.1002-137X.2018.07.030
With the development of crowdsourcing systems, researchers have paid more and more attention to them. Most existing research on crowdsourcing task recommendation converts behavior data into rating data, without considering the relationships between tasks or the influence of changes in user interest on the recommendation results. Therefore, this paper proposed an improved OCCF method that considers task relevance and the time factor to recommend tasks. On the one hand, this paper introduced a forgetting function when extracting negative examples, and extracted a certain number of negative examples according to users' activity. On the other hand, it merged the similarity information of tasks in the probability matrix factorization phase. The proposed method was further applied to recommend tasks in a crowdsourcing system. Experiments on the Taskcn data set show that the proposed method achieves better results and effectively improves the quality of recommendation compared with mainstream methods.
Local Attribute Reduction in Interval-valued Decision Systems
YIN Ji-liang, ZHANG Nan, ZHAO Li-wei, CHEN Man-ru
Computer Science. 2018, 45 (7): 178-185.  doi:10.11896/j.issn.1002-137X.2018.07.031
The existing attribute reduction in interval-valued decision system is mainly relative to all decision classes.For some special classes of decision attributes in interval-valued decision system,the concept of local reduction and the judgment theorem of partial decision classes were introduced in this paper.Besides,the structure of local reduction was studied by using the method of discernibility matrix,and the local reduction algorithm based on discernibility matrix was given.The structure of the global reduction in interval-valued decision system was further depicted through the concept of the local reduction,and the relationship between the local reduction and global reduction was discussed.Finally,related experiments were carried out.The experimental results show the feasibility and effectiveness of the proposed algorithm.
Jaccard Text Similarity Algorithm Based on Word Embedding
TIAN Xing, ZHENG Jin, ZHANG Zu-ping
Computer Science. 2018, 45 (7): 186-189.  doi:10.11896/j.issn.1002-137X.2018.07.032
Based on research into and improvement of the traditional Jaccard algorithm, this paper proposed a Jaccard sentence similarity algorithm based on word embedding. The traditional Jaccard algorithm characterizes a sentence by its literal words, so it is restricted in terms of semantic similarity calculation. With the rapid development of deep learning, especially the proposal of word embedding, there has been a breakthrough in the representation of words in computers. The proposed algorithm first maps each word into a high-dimensional vector at the semantic level by training, and then calculates the similarity between the word vectors. Word pairs whose similarity is higher than the threshold α are regarded as belonging to the intersection, and finally the sentence similarity is calculated. Experiments show that the algorithm significantly improves the accuracy of short text similarity calculation compared with the traditional Jaccard algorithm.
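The core computation described above can be sketched in a few lines, assuming a pretrained word-embedding lookup (word to vector) is available; how the over-threshold matches are counted as the "intersection" below is an illustrative choice, not necessarily the paper's exact definition.

```python
# Embedding-based Jaccard similarity: words whose vectors are closer than a
# threshold alpha to some word of the other sentence count toward the intersection.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def embedding_jaccard(words_a, words_b, emb, alpha=0.8):
    a = [w for w in set(words_a) if w in emb]
    b = [w for w in set(words_b) if w in emb]
    if not a or not b:
        return 0.0
    matched_a = {w for w in a if any(cosine(emb[w], emb[v]) >= alpha for v in b)}
    matched_b = {w for w in b if any(cosine(emb[w], emb[v]) >= alpha for v in a)}
    inter = (len(matched_a) + len(matched_b)) / 2.0
    return inter / (len(a) + len(b) - inter)

# Toy random embeddings for illustration only; "pet" is made close to "cat".
emb = {w: np.random.randn(50) for w in ["cat", "dog", "pet", "car"]}
emb["pet"] = emb["cat"] + 0.05 * np.random.randn(50)
print(embedding_jaccard(["cat", "car"], ["pet", "dog"], emb, alpha=0.8))
```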
Rough K-means Algorithm with Self-adaptive Weights Measurement Based on Space Distance
WANG Hui-yan, ZHANG Teng-fei, MA Fu-min
Computer Science. 2018, 45 (7): 190-196.  doi:10.11896/j.issn.1002-137X.2018.07.033
The setting of the weight coefficients of the lower approximation and the boundary area in the rough K-means algorithm has an important influence on the final clustering results. However, the traditional rough K-means and many refined rough K-means algorithms set fixed weights of the lower approximation and boundary area for all clusters, ignoring the effect of differences in the distribution of data objects within clusters. To cope with this problem, a new rough K-means algorithm with self-adaptive weight measurement based on space distance was proposed, according to the spatial distribution of objects in the lower approximation and boundary area relative to the cluster centers. During each iteration, the different importance of the lower approximation and boundary area in the iterative computation of cluster centers is measured based on the average distances of objects in the lower approximation and boundary area to the cluster centers, and the relative weight coefficients of the lower approximation and boundary area are calculated dynamically. The validity of the algorithm was verified by experimental analysis.
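One way to picture the adaptive weighting is the centre-update step below: for each cluster, the weights of the lower approximation and the boundary area are derived from the average distances of their objects to the current centre (the closer region gets the larger weight). The inverse-distance weighting used here is an assumption for illustration, not the paper's exact formula.

```python
# One centre-update step of a rough K-means variant with distance-derived weights.
import numpy as np

def update_center(center, lower_pts, boundary_pts, eps=1e-12):
    """lower_pts / boundary_pts: objects in the lower approximation / boundary
    area of one cluster (either may be empty)."""
    if len(boundary_pts) == 0:
        return lower_pts.mean(axis=0)
    if len(lower_pts) == 0:
        return boundary_pts.mean(axis=0)
    d_low = np.linalg.norm(lower_pts - center, axis=1).mean() + eps
    d_bnd = np.linalg.norm(boundary_pts - center, axis=1).mean() + eps
    # Inverse-distance weights: the region lying closer to the centre on average
    # contributes more to the new centre.
    w_low = (1.0 / d_low) / (1.0 / d_low + 1.0 / d_bnd)
    return w_low * lower_pts.mean(axis=0) + (1.0 - w_low) * boundary_pts.mean(axis=0)

center = np.array([0.0, 0.0])
lower = np.array([[0.1, 0.2], [-0.1, 0.1], [0.0, -0.2]])
boundary = np.array([[1.5, 1.0], [-1.2, 1.4]])
print(update_center(center, lower, boundary))
```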
Fast Attribute Reduction Algorithm Based on Importance of Voting Attribute
WANG Rong, LIU Zun-ren, JI Jun
Computer Science. 2018, 45 (7): 197-201.  doi:10.11896/j.issn.1002-137X.2018.07.034
As an extension of the classical Pawlak rough set, neighborhood rough sets can efficiently handle numerical data. However, because the concept of neighborhood granulation is introduced, the computational complexity in the neighborhood real space is much larger than that in the classical discrete space. For neighborhood rough set algorithms, finding the attribute reduction of a data set efficiently and quickly is very meaningful. To this end, an improved definition of voting attribute importance was proposed to address the shortcomings of the definition of attribute importance in existing algorithms, and then a fast attribute reduction algorithm based on the importance of voting attributes was proposed. Compared with existing algorithms, experiments prove that the algorithm can obtain attribute reductions more quickly while maintaining classification accuracy.
Information P-dependence and P-dependence Mining-sieving
LIU Ji-qin, ZHANG Hai-yue
Computer Science. 2018, 45 (7): 202-206.  doi:10.11896/j.issn.1002-137X.2018.07.035
P-sets (packet sets) is a new dynamic model: a set pair composed of an internal P-set (internal packet set) and an outer P-set (outer packet set). P-sets were proposed by introducing dynamic characteristics into a finite ordinary set X (Cantor set X) and improving X. Using P-sets and their dynamic characteristics, this paper gave the concept of information P-dependence (packet dependence) and its structure, as well as the attribute characteristics of information P-dependence. It further presented the concept of information P-dependence mining, mining theorems, an information P-dependence mining criterion and the P-dependence mining-sieving principle. Using these results, the paper gave an application of information P-dependence mining-sieving.
Density Scaling Factor Based ISOMAP Algorithm
LI Xiang-yuan, CAI Cheng, HE Jin-rong
Computer Science. 2018, 45 (7): 207-213.  doi:10.11896/j.issn.1002-137X.2018.07.036
ISOMAP is a widely used nonlinear unsupervised dimensionality reduction algorithm, which preserves the coordinate transformation from high-dimensional space to low-dimensional space by keeping the geodesic distances between observed samples. However, practical data are inevitably corrupted by noise. Since the calculation of geodesic distance is sensitive to noise and does not consider the density distribution of the dataset, the ISOMAP algorithm suffers from geometric deformation after dimensionality reduction. To address this shortcoming, following the idea of local density, a density scaling factor based ISOMAP algorithm called D-ISOMAP was proposed. Within the framework of the traditional ISOMAP algorithm, a local density scaling factor is first calculated for each observed sample. Then the original geodesic distance between two samples is divided by the product of their density scaling factors. Finally, the improved distance matrix is obtained by the shortest path algorithm. Since the improved geodesic distance is reduced in regions of larger density and enlarged in regions of smaller density, the algorithm performs well in noisy situations. The experimental results on artificial datasets and UCI datasets show that D-ISOMAP outperforms classical unsupervised dimensionality reduction algorithms in terms of visualization and clustering of the datasets.
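The density-scaled geodesic distance at the heart of D-ISOMAP can be sketched as below: each edge of the k-NN neighbourhood graph is divided by the product of the two endpoints' local density scaling factors, and geodesic distances are then taken as shortest paths. Using the mean distance to the k nearest neighbours as the density factor, and the value of k, are assumptions for illustration; the final embedding step (e.g. classical MDS) is omitted.

```python
# Density-scaled geodesic distances in the spirit of D-ISOMAP (embedding omitted).
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import shortest_path

def density_scaled_geodesics(X, k=10):
    D = cdist(X, X)
    n = len(X)
    knn = np.argsort(D, axis=1)[:, 1:k + 1]              # k nearest neighbours
    sigma = D[np.arange(n)[:, None], knn].mean(axis=1)   # local density factor
    graph = np.full((n, n), np.inf)                       # inf = no edge
    for i in range(n):
        for j in knn[i]:
            w = D[i, j] / (sigma[i] * sigma[j])           # density rescaling
            graph[i, j] = graph[j, i] = min(graph[i, j], w)
    return shortest_path(graph, method="D", directed=False)

X = np.vstack([np.random.randn(100, 3), np.random.randn(20, 3) * 3 + 5])
G = density_scaled_geodesics(X, k=8)
print(G.shape, "finite entries:", np.isfinite(G).mean())
```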
Improved PSO Algorithm and Its Load Distribution Optimization of Hot Strip Mills
LI Rong-yu, ZHANG Wei-jie, ZHOU Zhi-yong
Computer Science. 2018, 45 (7): 214-218.  doi:10.11896/j.issn.1002-137X.2018.07.037
Aiming at the load distribution problem of hot strip rolling, an adaptive double-layer particle swarm optimization algorithm based on the empirical method (ADLPSO-EM) was proposed. After each population iteration, the algorithm uses an improved velocity update formula to update the memory swarm. At the same time, in order to improve the diversity of the population, it uses an improved adaptive adjustment strategy to update the inertia weight. Finally, the algorithm is initialized within a variable neighborhood around the values obtained by the empirical method for the load distribution problem. The experimental results show that the improved algorithm has a significant effect on load distribution optimization.
Graphics, Image & Pattern Recognition
Fast Noise Level Estimation Algorithm Based on Nonlinear Rectification of Smallest Eigenvalue
XU Shao-ping, ZENG Xiao-xia, JIANG Yin-nan, LIN Guan-xi, TANG Yi-ling
Computer Science. 2018, 45 (7): 219-225.  doi:10.11896/j.issn.1002-137X.2018.07.038
Considering that the smallest eigenvalue of the covariance matrix of raw patches extracted from noisy images is significantly correlated with the noise level, this paper proposed a fast algorithm that directly uses a pretrained nonlinear mapping model based on polynomial regression to map (rectify) the smallest eigenvalue to the final estimate. Firstly, some representative undistorted natural images are selected as the training set, and the training set images are corrupted with different noise levels to form the training sample library. On this basis, raw patches are extracted from each noisy image, and the smallest eigenvalue of the covariance matrix of the raw patches is obtained by PCA transformation. Finally, a nonlinear mapping model between the smallest eigenvalue and the noise level is obtained based on polynomial regression. Extensive experiments show that the proposed algorithm works well for a wide range of noise levels and has outstanding performance at low levels in particular compared with existing algorithms, showing a good compromise between speed and accuracy in general.
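The two stages of the estimator described above can be sketched as follows: (1) the feature is the smallest eigenvalue of the covariance matrix of raw patches taken from an image, and (2) a polynomial regression, fitted on images corrupted with known noise levels, maps that feature to the noise-level estimate. The patch size, polynomial degree and the synthetic stand-in training image below are assumptions.

```python
# Smallest-eigenvalue feature + polynomial mapping for noise level estimation (sketch).
import numpy as np

def smallest_eigenvalue_feature(img, patch=7):
    h, w = img.shape
    patches = np.stack([img[i:i + patch, j:j + patch].ravel()
                        for i in range(0, h - patch, patch)
                        for j in range(0, w - patch, patch)])
    return np.linalg.eigvalsh(np.cov(patches, rowvar=False))[0]

rng = np.random.default_rng(0)
clean = rng.random((128, 128)) * 0.2 + np.linspace(0, 0.8, 128)   # stand-in "image"
sigmas = np.arange(1, 31, 2) / 255.0                              # known noise levels
feats = [smallest_eigenvalue_feature(clean + rng.normal(0, s, clean.shape))
         for s in sigmas]
coeffs = np.polyfit(feats, sigmas, deg=2)                         # pretrained mapping

noisy = clean + rng.normal(0, 10 / 255.0, clean.shape)            # "unknown" image
est = np.polyval(coeffs, smallest_eigenvalue_feature(noisy))
print(f"estimated sigma ~ {est * 255:.1f}/255")
```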
Estimation of FOE for Railway Video Sequences
HU Yan-hua, TANG Peng, JIN Wei-dong, HE Zheng-wei
Computer Science. 2018, 45 (7): 226-229.  doi:10.11896/j.issn.1002-137X.2018.07.039
In rail transit with structured scenes, due to the movement of the camera, objects in the image captured by the on-board camera appear to expand from a point in the image called the FOE (Focus of Expansion). Existing FOE estimation techniques are sensitive to noise and computationally expensive, and cannot accurately calculate the FOE in railway scenes. This paper presented a method for estimating the FOE of railway video sequences. The method uses the pyramidal optical flow method to track and coarsely match the detected Harris corner points, and performs accurate matching with the RANSAC algorithm based on the computation of the fundamental matrix. Then the epipolar lines are extracted in the image, and the FOE is finally obtained. The experimental results show that the FOE error of this algorithm is smaller than that of the Hough-line-based method, and the proposed algorithm is suitable for real-time applications.
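A rough OpenCV sketch of the pipeline described above: detect corners in the previous frame, track them into the current frame with pyramidal LK optical flow, reject outliers while estimating the fundamental matrix with RANSAC, and take the FOE as the epipole (null vector of F). The frame file names are placeholders, and the parameter values are assumptions.

```python
# FOE estimation sketch: Harris corners -> pyramidal LK tracking -> RANSAC F -> epipole.
import cv2
import numpy as np

prev = cv2.imread("frame_t.png", cv2.IMREAD_GRAYSCALE)    # placeholder file names
curr = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

pts_prev = cv2.goodFeaturesToTrack(prev, maxCorners=500, qualityLevel=0.01,
                                   minDistance=7, useHarrisDetector=True)
pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts_prev, None)
ok = status.ravel() == 1
p1, p2 = pts_prev[ok].reshape(-1, 2), pts_curr[ok].reshape(-1, 2)

# RANSAC fundamental-matrix estimation discards the remaining mismatches.
F, inliers = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, 1.0, 0.99)

# The FOE is taken as the epipole in the previous image: the right null vector of F.
_, _, Vt = np.linalg.svd(F)
e = Vt[-1]
print("FOE (pixels):", e[:2] / e[2])
```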
Driver Pathway Identification Algorithm Based on Mutated Gene Networks for Cancer
GUO Bing, ZHENG Wen-ping, HAN Su-qing
Computer Science. 2018, 45 (7): 230-236.  doi:10.11896/j.issn.1002-137X.2018.07.040
Large cancer genome projects such as The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) have produced large amounts of data collected from patients with different cancer types. The identification of mutated genes causing cancer is a significant challenge. Genetic variations in cancer cells can be divided into two types: functional driver mutations and random passenger mutations. Identification of driver genes is beneficial for understanding the pathogenesis and progression of cancer, as well as for cancer drug research and targeted therapy, and it is an essential problem in the field of bioinformatics. This paper proposed a driver pathway identification algorithm based on mutated gene networks for cancer (GNDP). In GNDP, a non-overlap balance metric is defined to measure the possibility that two genes lie in the same driver pathway. To reduce the complexity of the constructed mutually exclusive gene networks, the non-overlap balance metric, the exclusivity and the coverage of a gene pair are computed first, and then the edges with low non-overlap balance metric, low exclusivity and low coverage are deleted. Then, all maximal cliques, which are potential driver pathways, are found. After that, the weight of each clique is set as the product of its exclusivity degree and coverage degree, and every node of a clique is checked to judge whether its deletion would yield a larger weight. Finally, the maximal-weight cliques in the mutually exclusive gene networks are obtained as the final driver pathways. This paper compared the GNDP algorithm with the classical algorithms Dendrix and Multi-Dendrix on both simulated data sets and somatic mutation data sets. The results show that GNDP can detect all artificial pathways in the simulated data. For lung adenocarcinoma and glioblastoma data, GNDP shows higher efficiency and accuracy than the comparison algorithms. In addition, GNDP does not need any prior knowledge and does not need the number of genes in driver pathways to be set in advance.
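The clique-based search can be pictured with the short sketch below: build a gene graph whose edges connect pairs with high exclusivity and coverage, enumerate maximal cliques with networkx, and score each clique by the product of its exclusivity and coverage. The thresholds and scoring here are illustrative assumptions; the paper's non-overlap balance metric is not reproduced.

```python
# Clique enumeration over a mutual-exclusivity gene graph (illustrative sketch).
import numpy as np
import networkx as nx

def coverage(M, genes):        # fraction of patients hit by >= 1 gene of the set
    return (M[:, genes].sum(axis=1) > 0).mean()

def exclusivity(M, genes):     # fraction of covered patients hit by exactly one gene
    hits = M[:, genes].sum(axis=1)
    covered = hits > 0
    return (hits[covered] == 1).mean() if covered.any() else 0.0

def best_clique(M, min_excl=0.8, min_cov=0.05):
    n_genes = M.shape[1]
    G = nx.Graph()
    G.add_nodes_from(range(n_genes))
    for i in range(n_genes):
        for j in range(i + 1, n_genes):
            if exclusivity(M, [i, j]) >= min_excl and coverage(M, [i, j]) >= min_cov:
                G.add_edge(i, j)
    # Maximal cliques are the candidate pathways; keep the highest-weight one.
    return max(((c, exclusivity(M, c) * coverage(M, c)) for c in nx.find_cliques(G)),
               key=lambda t: t[1])

# Toy mutation matrix: 8 patients x 5 genes (1 = somatic mutation observed).
M = np.array([[1,0,0,0,1],[0,1,0,0,0],[0,0,1,0,0],[1,0,0,0,0],
              [0,1,0,1,0],[0,0,1,0,0],[0,1,0,0,0],[0,0,0,1,1]])
print(best_clique(M))
```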
Sitting Posture Detection System Based on Depth Sensor
ZENG Xing, SUN Bei, LUO Wu-sheng, LIU Tao-cheng, LU Qin
Computer Science. 2018, 45 (7): 237-242.  doi:10.11896/j.issn.1002-137X.2018.07.041
For the detection of bad sitting postures and the analysis of people's sitting habits, a sitting posture detection system based on a depth sensor was designed. The system first uses the Astra 3D sensor to obtain depth images of the human body in a sitting posture, and a fast and effective foreground extraction method based on threshold segmentation is designed. The sitting foreground segmentation images are projected onto three Cartesian planes to obtain three projection maps. Background removal, interpolation scaling and normalization are performed on each projection map to obtain the projection features. After PCA dimensionality reduction, the projection features and the pyramid HOG feature of the front view form the final sitting posture feature vector. Then, a random forest is used to classify and identify 14 kinds of sitting postures. In the experiments, a sitting posture depth image database of 20 people is used for uniform testing and cross testing. The test results show that this sitting posture recognition method has a good recognition rate and recognition speed, and is superior to existing methods in the number of recognizable sitting posture types and in recognition performance. Finally, the method was implemented on the Android platform and the application software of the sitting posture detection system was designed to realize effective detection of sitting posture and timely reminders for bad sitting postures.
Tumor Image Segmentation Method Based on Random Walk with Constraint
LIU Qing-feng, LIU Zhe, SONG Yu-qing, ZHU Yan
Computer Science. 2018, 45 (7): 243-247.  doi:10.11896/j.issn.1002-137X.2018.07.042
Accurate lung tumor segmentation is critical to the planning of radiotherapy and surgical procedures. This paper proposed a new multimodal lung tumor image segmentation method that combines the advantages of PET and CT to overcome the weaknesses of single-modality image segmentation, such as unsatisfactory segmentation accuracy. Firstly, an initial contour is obtained by pre-segmentation of the PET image using region growing and mathematical morphology. The initial contour is used to automatically obtain the seed points required for random walk on the PET and CT images, and it also serves as a constraint in the random walk on the CT image to compensate for the fact that the tumor area is not obvious when the CT image has not been enhanced. Because CT provides essential details of anatomical structures, these structures are used to improve the weights of the random walk on the PET image. Finally, the similarity matrices obtained by random walk on the PET and CT images are weighted to obtain an identical result on both images. Clinical PET-CT image segmentation of lung tumors shows that the proposed method performs better than other traditional image segmentation methods.
Fuzzy Static Image Target Reproduction Method Based on Virtual Reality Technology
JI Li-xia, LIU Cheng-ming
Computer Science. 2018, 45 (7): 248-251.  doi:10.11896/j.issn.1002-137X.2018.07.043
Abstract PDF(2748KB) ( 702 )   
References | Related Articles | Metrics
The quality of a fuzzy static image target reproduction method directly affects the final effect of fuzzy static image processing and the accuracy of target recognition. At present, such methods estimate the ambient light value and the transmittance of the fuzzy static image target based on the illumination conditions, then restore the target with a fuzzy model, and finally obtain the reproduction result by inverting the restoration. This approach suffers from low target contrast. In order to improve the contrast and the visual effect, a fuzzy static image target reproduction method based on virtual reality technology was proposed. Firstly, virtual reality imaging and the optical imaging principle are used to collect and classify fuzzy static image targets. Then, a gradation adjustment function is applied to the brightness channel of the fuzzy static image target and global mapping is carried out. Finally, the target details are processed accordingly, the visibility of the details is maintained, and the target reproduction is completed. The experimental results show that the proposed method achieves better visual effects and more obvious details.
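As a minimal illustration of a global brightness-channel mapping of the kind described, the sketch below applies an assumed gamma-type gradation curve to the luminance of a synthetic image; the curve and its parameter are placeholders, not the paper’s gradation function.

```python
# Sketch of a global brightness-channel (luminance) mapping.
# The gamma curve and its value are assumed stand-ins for the paper's gradation function.
import numpy as np

rng = np.random.default_rng(2)
img = rng.random((48, 48, 3)) * 0.4 + 0.2          # dull, low-contrast synthetic RGB image

luma = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
gamma = 0.6                                        # assumed gradation curve parameter
luma_adj = np.power(luma, gamma)                   # global tone mapping of the brightness channel

# rescale each channel so that the adjusted luminance is reproduced
scale = luma_adj / np.maximum(luma, 1e-6)
out = np.clip(img * scale[..., None], 0.0, 1.0)
print("mean luminance before/after:", luma.mean().round(3), luma_adj.mean().round(3))
```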
Camshift Tracking Moving Target Algorithm Based on Multi-feature Fusion
WU Wei, ZHENG Juan-yi, DU Le
Computer Science. 2018, 45 (7): 252-258.  doi:10.11896/j.issn.1002-137X.2018.07.044
Abstract PDF(2895KB) ( 1299 )   
References | Related Articles | Metrics
The traditional Camshift algorithm tracks a target characterized by its color histogram and is robust for rigid target tracking. However, the tracking effect and accuracy may not be ideal when there is interference from objects with similar colors. In view of this, a Camshift tracking algorithm based on multi-feature fusion was proposed. Firstly, by extracting and processing the color feature, the edge feature and the spatial information of the target, the color space histogram and the spatial edge orientation histogram are obtained. Then the central locations of the target matches are obtained respectively under the framework of the Camshift algorithm, and the weight coefficients are obtained by using the similarity vectors of each image. The optimal central location is obtained through adaptive weighted fusion. The experimental results show that, compared with traditional Camshift target tracking and an improved Meanshift algorithm based on complex feature fusion, the proposed method can effectively overcome the color interference and overlapping occlusion problems that hamper tracking, and it can avoid target loss during tracking, breaking through the limitations of traditional methods.
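The adaptive weighted fusion of the two candidate centres can be sketched as follows; the histograms and centre coordinates are invented for illustration, and the surrounding Camshift iteration is omitted.

```python
# Toy sketch of adaptive weighted fusion: colour and edge-orientation trackers each propose
# a target centre, and their Bhattacharyya similarities decide the fusion weights.
# All histograms and centres below are invented placeholders.
import numpy as np

def bhattacharyya(p, q):
    """Similarity between two normalised histograms."""
    return float(np.sum(np.sqrt(p * q)))

model_color = np.array([0.1, 0.4, 0.3, 0.2])
cand_color = np.array([0.15, 0.35, 0.3, 0.2])
model_edge = np.array([0.25, 0.25, 0.25, 0.25])
cand_edge = np.array([0.4, 0.3, 0.2, 0.1])

rho_c = bhattacharyya(model_color, cand_color)   # colour-feature confidence
rho_e = bhattacharyya(model_edge, cand_edge)     # edge-feature confidence
w_c, w_e = rho_c / (rho_c + rho_e), rho_e / (rho_c + rho_e)

center_color = np.array([120.0, 85.0])           # centre from the colour-based Camshift
center_edge = np.array([126.0, 88.0])            # centre from the edge-based Camshift
fused_center = w_c * center_color + w_e * center_edge
print("weights:", round(w_c, 3), round(w_e, 3), "fused centre:", fused_center)
```

The feature whose candidate histogram better matches the target model receives the larger weight, which is what lets the tracker ride out colour interference.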
Image Segmentation Based on Saliency and Pulse Coupled Neural Network
WANG Yan, XU Xian-fa
Computer Science. 2018, 45 (7): 259-263.  doi:10.11896/j.issn.1002-137X.2018.07.045
Abstract PDF(1800KB) ( 1214 )   
References | Related Articles | Metrics
Aiming at the problem that complicated images suffer from background interference, an image segmentation method based on saliency and pulse coupled neural network (SPCNN) was proposed. Firstly, with the saliency filtering algorithm and the maximum between-class variance (Otsu) method, the saliency map and the object image are obtained; on this basis, the background interference affecting initial seed point selection is eliminated. Secondly, according to the saliency values in the saliency map, the most salient region is captured and the initial seed points are produced. Finally, object image segmentation is achieved via the improved RG-PCNN model. Segmentation experiments were conducted on gray-scale natural images from the Berkeley segmentation dataset and its ground-truth database, using three evaluating indicators: consistency coefficient (CC), similarity coefficient (SC) and integrate coefficient (IC). The experimental results show that the proposed model performs better in terms of CC, SC and IC compared with the pulse coupled neural network (PCNN), the region growing model (RG) and SPCNN.
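A reduced sketch of the seed-selection stage is shown below: Otsu thresholding of a (synthetic) saliency map removes background interference and the most salient pixel is taken as the initial seed; the RG-PCNN firing dynamics themselves are not reproduced.

```python
# Sketch of seed selection from a saliency map via Otsu thresholding.
# The saliency map is synthetic; the RG-PCNN segmentation stage is omitted.
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(3)
saliency = rng.random((64, 64)) * 0.3
saliency[20:44, 20:44] += 0.6                      # salient object region

mask = saliency > threshold_otsu(saliency)         # background interference removed
seed = np.unravel_index(np.argmax(saliency * mask), saliency.shape)
print("object pixels:", int(mask.sum()), "initial seed:", seed)
```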
Dictionary Learning Image Denoising Algorithm Combining Second Generation Bandelet Transform Block
ZHANG Zhen-zhen, WANG Jian-lin
Computer Science. 2018, 45 (7): 264-270.  doi:10.11896/j.issn.1002-137X.2018.07.046
Abstract PDF(4383KB) ( 943 )   
References | Related Articles | Metrics
Sparse-coding-based image denoising faces three main challenges: incomplete denoising, residual noise, and insufficient protection of image edges and detailed features. This paper proposed a dictionary learning image denoising algorithm combining a second generation Bandelet transform block method to achieve better noise removal. With the second generation Bandelet transform, a sparse representation of the image can be obtained automatically, accurately estimating the image information according to the geometric regularity of the image. The K-singular value decomposition (K-SVD) algorithm is used to learn the dictionary under moderate Gaussian white noise variance. Moreover, quadtree segmentation is utilized to adaptively segment the noisy image into blocks. Experimental results show that the proposed method can effectively preserve the edge features and fine structures of the image while removing noise. Since it employs the second generation Bandelet transform for segmentation, the algorithm structure is well optimized and the operational efficiency is also improved.
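The patch-based dictionary denoising idea can be sketched as follows, with a generic scikit-learn dictionary learner standing in for K-SVD and a regular patch grid standing in for the Bandelet/quadtree blocks; both substitutions are assumptions made for brevity.

```python
# Rough sketch of patch-based dictionary denoising.
# A generic dictionary learner replaces K-SVD and regular patches replace quadtree blocks;
# the test image and noise level are placeholders.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

rng = np.random.default_rng(4)
clean = np.tile(np.linspace(0, 1, 64), (64, 1))          # simple test image
noisy = clean + rng.normal(0, 0.1, clean.shape)          # Gaussian white noise

patches = extract_patches_2d(noisy, (8, 8))
flat = patches.reshape(len(patches), -1)
means = flat.mean(axis=1, keepdims=True)
flat = flat - means                                      # remove each patch's DC component

dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0)
codes = dico.fit(flat[::10]).transform(flat)             # sparse codes for every patch
denoised_patches = (codes @ dico.components_ + means).reshape(patches.shape)

denoised = reconstruct_from_patches_2d(denoised_patches, noisy.shape)
print("error std before/after:", np.std(noisy - clean).round(3), np.std(denoised - clean).round(3))
```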
Pavement Crack Detection Based on Sparse Representation and Multi-feature Fusion
ZHANG Yu-xue, TANG Zhen-min, QIAN Bin, XU Wei
Computer Science. 2018, 45 (7): 271-277.  doi:10.11896/j.issn.1002-137X.2018.07.047
Abstract PDF(4970KB) ( 933 )   
References | Related Articles | Metrics
In order to improve the performance of practical pavement crack detection under complex background noise, an improved pavement crack detection algorithm based on sparse representation and multi-feature fusion was proposed. Firstly, the algorithm takes image sub-blocks as units and extracts statistical, texture and shape features that are effective for crack recognition. Then, the sparse representation classification method is adopted to recognize sub-blocks under each feature matrix separately, and a comprehensive recognition classifier for sub-block detection is designed by fusing the recognition results under different features. Finally, on the detected sub-blocks, a pixel-level detection method based on visual saliency is used to extract crack details accurately. Experimental results on highway pavement datasets show that the proposed algorithm can effectively improve the accuracy of pavement crack detection and has good robustness.
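The sparse representation classification step for a single sub-block and a single feature type can be sketched as below; the training features, class dictionary and sparsity level are random placeholders rather than the paper’s data.

```python
# Condensed sketch of sparse-representation classification (SRC) for one sub-block.
# The feature vectors, class dictionary and sparsity level are invented placeholders.
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(5)
n_per_class, dim = 20, 30
crack_atoms = rng.normal(1.0, 0.3, (dim, n_per_class))     # training features of "crack" blocks
clean_atoms = rng.normal(0.0, 0.3, (dim, n_per_class))     # training features of "no crack" blocks
D = np.hstack([crack_atoms, clean_atoms])
D /= np.linalg.norm(D, axis=0)                             # unit-norm dictionary atoms

test = rng.normal(1.0, 0.3, dim)                           # feature vector of a test sub-block
code = orthogonal_mp(D, test, n_nonzero_coefs=5)           # sparse code over the dictionary

residuals = []
for sl in (slice(0, n_per_class), slice(n_per_class, 2 * n_per_class)):
    part = np.zeros_like(code)
    part[sl] = code[sl]                                    # keep only this class's coefficients
    residuals.append(np.linalg.norm(test - D @ part))      # class-wise reconstruction residual
print("predicted class:", int(np.argmin(residuals)), "(0 = crack, 1 = no crack)")
```

Fusing the class decisions obtained under several feature matrices would then give the comprehensive sub-block classifier described above.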
Interdiscipline & Frontier
CAUXT: A Tool to Help User Experience Researchers Capture Users’ Experience Data in Context of Interest
HAN Li, LIU Zheng-jie
Computer Science. 2018, 45 (7): 278-285.  doi:10.11896/j.issn.1002-137X.2018.07.048
Abstract PDF(2132KB) ( 678 )   
References | Related Articles | Metrics
With the rapid development of mobile internet technology, product usage is more and more related to context, which makes user experience research more associated with context. However, existing technical means have difficulty capturing the contexts of interest, and it is therefore difficult to obtain user experience data based on the context of interest. One reason for this difficulty is the large gap between the tools and user experience researchers’ context-awareness. Existing studies are weak at dealing with this problem: they tend to improve the tools’ capability from the perspectives of algorithms and computational efficiency, but few studies start from the perspective of the user experience researchers’ context-awareness mechanism. Based on context-awareness theory in the fields of cognitive science and human-computer interaction, this paper constructed a model of the user experience researcher’s context-awareness and built a system based on this model which can capture user experience data to a certain extent. A case study shows that the system can capture the contexts which researchers are interested in and can obtain the corresponding UX data in those contexts.
KAAS-based Service Mechanism of Technology Resources for Collaborative Innovation
RAO Yuan, LU Shu-min
Computer Science. 2018, 45 (7): 286-292.  doi:10.11896/j.issn.1002-137X.2018.07.049
Abstract PDF(2794KB) ( 652 )   
References | Related Articles | Metrics
Based on the definition and analysis of KAAS, a science and technology resource collaborative service model, called STRCS, was proposed in this paper. The STRCS model includes a technology resource model, a service model and a collaborative model, together with the corresponding technology resources and services. At the same time, a service mapping mechanism between KAAS and STRCS was built to provide new knowledge aggregation service patterns. Furthermore, an optimized mechanism was proposed for the classification and aggregation of multi-tag querying and dynamic document indexing. Meanwhile, a knowledge synergy mechanism and personalized services for science and technology resources were given. A public service platform for science and technology resources based on socialization and knowledge service was developed, and algorithm optimization was used to enhance the capability and accuracy of personalized recommendation on the platform, thus providing a new solution for the implementation and integration of collaborative innovation of science and technology resources and personalized knowledge service.
Multi-feature Fusion Classification Method Based on High-order Minimum Spanning Tree Brain Network
QIN Meng-na, CHEN Jun-jie, GUO Hao
Computer Science. 2018, 45 (7): 293-298.  doi:10.11896/j.issn.1002-137X.2018.07.050
Abstract PDF(2569KB) ( 920 )   
References | Related Articles | Metrics
Existing research on the classification of brain diseases is based on traditional low-order functional connectivity networks. A low-order functional connectivity network may overlook the complex and dynamic interaction patterns among brain regions, which are essentially time-varying. A high-order functional connectivity network can reflect the abundant dynamic temporal information contained in the network. However, the traditional high-order functional connectivity network adopts a clustering method to reduce the dimensionality of the data, so the constructed network cannot be effectively interpreted from the perspective of neurology. Even more importantly, due to the large scale of high-order functional connectivity networks, it is very time-consuming to use complex network or graph theory methods to calculate topological properties. Therefore, this paper proposed a method for constructing a high-order minimum spanning tree network, calculated the traditional quantifiable network properties (degree and eccentricity), and used frequent subgraph mining technology to capture discriminative subnetworks as features. Then, a multi-kernel learning technique was applied to the selected features to obtain the final classification results. The experimental results show that the classification accuracy reaches 97.54%.
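The construction of a minimum spanning tree network and the two node properties mentioned above (degree and eccentricity) can be sketched as follows; the correlation matrix below is computed from random time series rather than from high-order fMRI connectivity.

```python
# Sketch of building a minimum spanning tree from a connectivity matrix and
# reading off node degree and eccentricity. Random time series stand in for
# the high-order functional connectivity used in the paper.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path

rng = np.random.default_rng(6)
ts = rng.normal(size=(100, 10))                  # 100 time points, 10 "brain regions"
corr = np.corrcoef(ts.T)

dist = 1.0 - np.abs(corr)                        # stronger connection -> shorter edge
np.fill_diagonal(dist, 0.0)
mst = minimum_spanning_tree(dist).toarray()
adj = ((mst + mst.T) > 0).astype(float)          # symmetric tree adjacency

degree = adj.sum(axis=1)
hops = shortest_path(adj, unweighted=True)       # hop distances along the tree
eccentricity = hops.max(axis=1)
print("degrees:", degree.astype(int), "eccentricities:", eccentricity.astype(int))
```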
High-performance Parallel Preconditioned Iterative Solver for Helmholtz Equation with Large Wavenumbers
CHENG Dong-sheng, LIU Zhi-yong, XUE Guo-wei, GAO Yue-fang
Computer Science. 2018, 45 (7): 299-306.  doi:10.11896/j.issn.1002-137X.2018.07.051
Abstract PDF(2586KB) ( 1135 )   
References | Related Articles | Metrics
For solving the Helmholtz equation with a large wavenumber, the traditional sequential iterative solver is inefficient and limited by the memory of a single computer. To deal with these problems, a parallel preconditioned iterative solver was proposed based on the message passing interface (MPI). The complex shifted Laplacian is used to precondition the Helmholtz equation, and the Krylov subspace method Bi-CGSTAB, combined with a matrix-based multigrid method, is employed to solve the large linear system resulting from the discretization of the preconditioned equation. Parallelization of the preconditioned solver is achieved under the MPI environment on a Linux cluster system, mainly tackling the problems of parallel partitioning of the multigrid, information transfer and construction of the multigrid components. Finally, numerical experiments were given. The results show that the proposed method achieves an excellent parallel speedup and improves the computing efficiency considerably compared with the sequential iterative solver.
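A compact serial sketch of the preconditioning idea for a 1D model problem is given below, with a sparse direct solve standing in for the multigrid cycle and no MPI parallelism; the grid size, wavenumber and the complex shift 1-0.5i are illustrative choices.

```python
# Sketch of complex shifted-Laplacian preconditioning for a 1D Helmholtz model problem.
# A direct factorization replaces the multigrid cycle; grid size, wavenumber and shift are assumptions.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab, splu, LinearOperator

n, k = 200, 30.0                              # grid points, wavenumber
h = 1.0 / (n + 1)
lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
A = (lap + k**2 * sp.eye(n)).tocsc()          # discrete Helmholtz operator
M_shift = (lap + (1 - 0.5j) * k**2 * sp.eye(n)).tocsc()   # complex shifted Laplacian

lu = splu(M_shift)                            # stand-in for one multigrid solve of the preconditioner
M = LinearOperator(A.shape, matvec=lu.solve, dtype=complex)

b = np.zeros(n, dtype=complex)
b[n // 2] = 1.0 / h                           # point source in the middle of the domain
x, info = bicgstab(A.astype(complex), b, M=M, maxiter=500)
print("converged" if info == 0 else "not converged", "| residual:",
      np.linalg.norm(A @ x - b))
```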
Simplification and Verification of Matrix-based Workflow Logic Net Model
ZHENG Hong, DENG Wen-xuan, DENG Xiao, LU Xing-jian
Computer Science. 2018, 45 (7): 307-314.  doi:10.11896/j.issn.1002-137X.2018.07.052
Abstract PDF(1355KB) ( 755 )   
References | Related Articles | Metrics
Petri nets are an effective modeling tool for analyzing workflows, but they easily suffer from the “state space explosion” problem when dealing with complex workflows. Workflow logic, as a logical framework for workflow paths, enables further abstraction of workflow nets. In order to verify the smoothness of larger workflows, the Petri net is used to model the logic net corresponding to the workflow. On this basis, a matrix-based workflow logic algorithm was proposed, which provides a theoretical basis for automatic simplification of large-scale workflows. Finally, the algorithm was applied to a bank workflow case to verify the smoothness of its workflow logic net, which reflects the effectiveness of the algorithm in solving practical problems.
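The matrix view of a Petri net that underlies such matrix-based simplification can be illustrated with a toy example: a marking is updated as m' = m + C·u, where C is the incidence matrix and u the firing vector. The three-place net below is invented for illustration and is not the paper’s workflow logic model.

```python
# Toy illustration of the incidence-matrix (state equation) view of a Petri net.
# The net structure is invented and unrelated to the bank workflow in the paper.
import numpy as np

# places p0..p2, transitions t0, t1
C_in = np.array([[1, 0],      # tokens consumed from each place when t0, t1 fire
                 [0, 1],
                 [0, 0]])
C_out = np.array([[0, 0],     # tokens produced in each place when t0, t1 fire
                  [1, 0],
                  [0, 1]])
C = C_out - C_in              # incidence matrix

def fire(marking, t):
    """Fire transition t if it is enabled, returning the new marking."""
    if np.all(marking >= C_in[:, t]):
        return marking + C[:, t]
    raise ValueError(f"transition t{t} not enabled at marking {marking}")

m = np.array([1, 0, 0])       # one token in the start place
for t in (0, 1):              # fire the sequence t0 -> t1
    m = fire(m, t)
print("final marking:", m)    # the token has reached the end place p2
```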
Short-term Load Forecasting Method Combining Multi-scale Analysis with Data Co-transfer
LIU Shi-chang, JIN Min
Computer Science. 2018, 45 (7): 315-321.  doi:10.11896/j.issn.1002-137X.2018.07.053
Abstract PDF(1679KB) ( 682 )   
References | Related Articles | Metrics
In order to improve the performance of short-term load forecasting, this paper proposed a short-term load forecasting method combining multi-scale analysis with data co-transfer. On the one hand, aiming at the problem that the hidden information in the original series related to the subseries is not fully utilized when modeling and predicting the subseries in multi-scale analysis forecasting, this paper used a mutual information feature selection method to select appropriate past loads and introduced them into the feature set of the approximation component of the original load series. By expanding the feature set, more information can be provided to the learning algorithm, further improving the forecasting accuracy of the approximation component. On the other hand, aiming at the problem that different kinds of data can influence the model’s performance, this paper proposed a transfer learning method based on kernel ridge regression to transfer similar data to the data corresponding to the days to be forecasted. In this way, the similarity of these data is exploited while their differences are taken into account during modeling. A case study shows that, compared with the single-model forecasting method, the proposed method reduces MAPE, MAE and RMSE by 6.2%, 3.4% and 5.5% respectively.
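Two of the ingredients named above, mutual-information selection of past loads and kernel ridge regression, can be sketched on a synthetic load series as follows; the co-transfer step itself is omitted, and the series, lag set and model parameters are illustrative assumptions.

```python
# Sketch of mutual-information lag selection plus kernel ridge regression forecasting.
# The synthetic load series, candidate lags and hyperparameters are placeholders;
# the paper's multi-scale decomposition and co-transfer step are not reproduced.
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.kernel_ridge import KernelRidge
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
t = np.arange(600)
load = 100 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, t.size)   # hourly load

lags = np.arange(1, 25)                                  # candidate past-load features (hours back)
X = np.stack([load[24 - l:-l] for l in lags], axis=1)
y = load[24:]

mi = mutual_info_regression(X, y, random_state=0)
top = np.argsort(mi)[-6:]                                # keep the 6 most informative lags
print("selected lags (hours):", sorted(lags[top].tolist()))

scaler = StandardScaler().fit(X[:-24, top])
model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1)
model.fit(scaler.transform(X[:-24, top]), y[:-24])       # train on all but the last day
pred = model.predict(scaler.transform(X[-24:, top]))     # forecast the held-out day
mape = np.mean(np.abs((y[-24:] - pred) / y[-24:])) * 100
print("MAPE on held-out day: %.2f%%" % mape)
```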