Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 44 Issue 3, 13 November 2018
New Pattern of Actuarial Pricing Based on Deep Learning
ZHANG Ning
Computer Science. 2017, 44 (3): 1-2.  doi:10.11896/j.issn.1002-137X.2017.03.001
PDF (259KB) (993)
This research introduced an actuarial pricing model based on biological age. A deep learning technique was applied to obtain a robust estimate of biological age from a photo of the back of the hand. The new pricing model reflects the risk of a policyholder more accurately and replaces the traditional approach based on calendar age.
Key Management Issues and Challenges in Cloud
YANG Lu and YE Xiao-jun
Computer Science. 2017, 44 (3): 3-9.  doi:10.11896/j.issn.1002-137X.2017.03.002
PDF (633KB) (1537)
To interact securely with cloud data services and to store the sensitive data generated or processed by these services in the cloud environment, cloud suppliers need to provide various security encryption mechanisms. Compared with traditional IT systems, the differing ownership among customers, suppliers and owners means that cryptographic services generate large numbers of keys across the different cloud service modes (Infrastructure as a Service, Platform as a Service, Software as a Service), which leads to much more complex key management issues. This paper identified key types, possible key states, essential key management functions and common security requirements; discussed the security capabilities of key management in the context of architectural solutions, taking the three common cloud service modes as examples; and, with respect to interoperability requirements, proposed suggestions on application system architectures for key management interoperability and the possible system features of interoperable key management.
Algorithm for Bursty Term Query in Cloud Computing
ZHENG Shi-min, QIN Xiao-lin, LIU Liang and ZHOU Qian
Computer Science. 2017, 44 (3): 10-15.  doi:10.11896/j.issn.1002-137X.2017.03.003
PDF (1890KB) (733)
Distributed bursty term query under the Spark Streaming framework is a hot research topic. It aims to detect bursty terms in data streams. Most studies of bursty term query count and save all terms without singling out hot terms. As data volumes explode, it makes more sense to find the bursty periods of hot terms. To solve this problem, we presented a distributed bursty term query algorithm. The algorithm uses a dynamic update strategy and a checkpoint mechanism to extract hot terms, and it finds the bursty time range in linear time. Experimental results show that the proposed algorithm achieves better performance.
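The idea of restricting burst detection to hot terms can be sketched as follows. This is a minimal single-machine illustration under our own assumptions (a simple count-vs-mean threshold rule and hypothetical function names), not the paper's distributed Spark Streaming algorithm:

```python
from collections import defaultdict

def hot_terms(windows, k=2):
    """Globally hottest k terms across all time windows (the 'hot term' filter)."""
    total = defaultdict(int)
    for w in windows:
        for t in w:
            total[t] += 1
    return sorted(total, key=total.get, reverse=True)[:k]

def find_bursts(windows, term, ratio=2.0):
    """Indices of windows where `term` is bursty: its count exceeds
    `ratio` times its mean count over all windows."""
    counts = [w.count(term) for w in windows]
    mean = sum(counts) / len(counts)
    return [i for i, c in enumerate(counts) if mean > 0 and c > ratio * mean]
```

A real implementation would maintain these counts incrementally per micro-batch and checkpoint them, rather than rescanning all windows.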
Throughput Oriented Real-time Query Processing Algorithm for Moving Objects in Road Network
XUE Zhong-bin, BAI Li-guang, HE Ning, ZHOU Xuan, ZHOU Xin and WANG Shan
Computer Science. 2017, 44 (3): 16-19.  doi:10.11896/j.issn.1002-137X.2017.03.004
PDF (1059KB) (661)
With the rapid development of wireless communication, spatial positioning and mobile computing technologies, location-based queries have become an important research issue in the database area. In this paper, we studied the problem of KNN queries over moving objects in road networks. A series of algorithms has been proposed to process such queries, but these algorithms are designed either for fast response time or for high update performance. As the number of moving objects grows and both queries and updates arrive at very high rates, throughput becomes more important. For the query stream and the object update stream, we proposed a high-throughput main-memory algorithm, dual stream road network KNN (DSRNKNN), for KNN queries over moving objects in road networks. DSRNKNN uses a snapshot approach: in each snapshot, it builds a new index structure from the updated positions of the moving objects, which avoids complex index maintenance operations and exploits the full performance of the hardware. DSRNKNN executes queries in batches in each run, making full use of intra- and inter-query parallelism, which increases data locality and improves efficiency. We conducted a comprehensive performance study of the proposed techniques using data generated from a real road network. The results show that DSRNKNN is highly efficient.
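The rebuild-per-snapshot idea can be sketched in a few lines. This toy uses Euclidean distance and a plain grid index rather than the paper's road-network index, and all names are illustrative only:

```python
import math

def build_grid(objects, cell=1.0):
    """Rebuild the spatial index from scratch each snapshot
    (no incremental index maintenance)."""
    grid = {}
    for oid, (x, y) in objects.items():
        grid.setdefault((int(x // cell), int(y // cell)), []).append(oid)
    return grid

def knn(objects, q, k):
    """Brute-force KNN over the snapshot; a real system would expand
    outward from the query's grid cell instead."""
    return sorted(objects, key=lambda o: math.dist(objects[o], q))[:k]

def batch_knn(objects, queries, k):
    """Answer a whole batch of queries against one snapshot, which is
    where intra- and inter-query parallelism would apply."""
    return [knn(objects, q, k) for q in queries]
```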
Accelerating Gene Clustering on Heterogeneous Clusters
WEI Jian-wen, XU Zhi-geng, WANG Bing-qiang, Simon SEE and James LIN
Computer Science. 2017, 44 (3): 20-22.  doi:10.11896/j.issn.1002-137X.2017.03.005
PDF (338KB) (789)
Metagenome clustering is a novel approach to detecting flawed genes; it relies on massive gene data, effective clustering algorithms and efficient implementations. In clustering, calculating the correlation matrix is essential and accounts for most of the computing time. Taking as an example a gene repository with 1300 samples and millions of genes, clustering would take about 27 years. Developing efficient implementations for calculating the correlation matrix is therefore crucial. After analyzing the algorithms, we proposed and applied several optimizations. First, we implemented an efficient multithreaded version using OpenMP dynamic scheduling. Second, we further improved performance by using the CPU cache and GPU shared memory efficiently. Third, we implemented load-balanced work distribution, which works well in the MPI program on CPUs. Compared with the unoptimized single-threaded CPU program, the two fastest versions, MPI+OpenMP on 256 CPU cores and MPI+CUDA on 6 GPU cards, achieve speedups of 238.8 and 263.8, respectively.
Genetic Algorithm Based K-Medoids Clustering within MapReduce Framework
LAI Xiang-yang, GONG Xiu-jun and HAN Lai-ming
Computer Science. 2017, 44 (3): 23-26.  doi:10.11896/j.issn.1002-137X.2017.03.006
PDF (397KB) (953)
Huge volumes of data are growing exponentially with the rapid development of the Internet, which poses significant challenges to traditional clustering technologies; improving the accuracy and computational performance of clustering has therefore become a research hotspot. As one of the partition-based clustering algorithms, K-Medoids can effectively handle isolated and noisy points. However, it also suffers from sensitivity to the initial centers, a tendency to fall into local optima, and CPU and memory bottlenecks on big data sets. We proposed a genetic algorithm based K-Medoids clustering under the MapReduce framework. The algorithm solves the center-sensitivity problem of K-Medoids by using a genetic algorithm, and it is built on the MapReduce framework to boost the efficiency of both K-Medoids and the genetic algorithm. Experiments demonstrate that the proposed algorithm can effectively improve the quality and efficiency of clustering.
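For reference, the K-Medoids core that the genetic algorithm wraps can be sketched as below. This is a minimal single-machine version with our own function names; the paper's contribution, the GA search over initial medoids and the MapReduce parallelization, is not shown:

```python
import math

def assign(points, medoids):
    """Assign each point to its nearest medoid."""
    clusters = {m: [] for m in medoids}
    for p in points:
        clusters[min(medoids, key=lambda m: math.dist(p, m))].append(p)
    return clusters

def update(clusters):
    """Each new medoid is the member minimizing total distance to its
    cluster (assumes no cluster is empty)."""
    return [min(ms, key=lambda c: sum(math.dist(c, p) for p in ms))
            for ms in clusters.values()]

def k_medoids(points, medoids, iters=10):
    for _ in range(iters):
        medoids = update(assign(points, medoids))
    return sorted(medoids)
```

In the GA variant, `medoids` candidates would be encoded as chromosomes and evolved by selection/crossover/mutation before this refinement runs.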
Volume Rendering Method of Mass Brain Imaging Data Based on Compression Domain
SHI Xue-kai, WANG Wen-ke, HUANG Hui, LI Si-kun and FU Yi-qi
Computer Science. 2017, 44 (3): 27-31.  doi:10.11896/j.issn.1002-137X.2017.03.007
PDF (1089KB) (670)
Brain science is at the forefront of international scientific and technological research, and visualization of high-precision brain imaging data is a fundamental requirement of structural imaging in brain neuroscience. Aiming at the large data volumes and low rendering efficiency of brain imaging data, a compression-domain visualization algorithm combining flag-based hierarchical vector quantization with perfect spatial hashing was put forward. First, the volume data is divided into blocks, the average of each block is recorded, and the blocks are classified according to their average gradient value. Second, hierarchical vector quantization is used to compress the blocks whose average gradient is non-zero. Third, block-based perfect spatial hashing is used to store the two index values obtained from compression. Finally, the compressed data is decompressed to obtain the recovered volume data, and block-based perfect spatial hashing is applied to compress the differential volume data obtained by subtracting the recovered volume data from the original volume data. During rendering, the compressed data is loaded as textures onto the GPU, where decompression and visualization are performed in real time. Experimental results show that the algorithm reduces storage space, improves the compression ratio, and enables a single machine to handle larger data while ensuring good reconstruction quality.
Three-level Parallel Optimization and Application of Calculix in TH-2 Super-computing Environments
JIANG Wen-chao, LIN Sui, WANG Duo-qiang, LI Dong-ming and JIN Hai
Computer Science. 2017, 44 (3): 32-35.  doi:10.11896/j.issn.1002-137X.2017.03.008
PDF (816KB) (1509)
To increase the computing efficiency of Calculix in large-scale numerical calculation, this paper studied Calculix and proposed a three-level parallel optimization scheme covering the preprocessing, task and thread levels. A multiple-row double-threshold incomplete LU factorization strategy based on column pivoting and dynamic dropping was presented; the corresponding algorithm was given and proved to be efficient, practical and convergent. Furthermore, to exploit the powerful computing resources of the TH-2 supercomputer, a task-level parallel scheduling algorithm across computing nodes and a thread-level parallel algorithm across computing cores were developed and deployed on an experimental platform for ship fatigue analysis. Both theoretical analysis and tests on actual engineering cases show that the three-level parallel optimization scheme for Calculix can increase the solving speed of linear equations and the analysis efficiency in engineering design, with an average speedup of about 2~4 when sufficient resources are available.
Research on Parallel Algorithm of Vehicle Image Interested Region Based on Pthreads
ZHOU Yi-hua, WANG Wen-dong, CHEN Hong-cai, WANG Ting and ZHANG Chang-you
Computer Science. 2017, 44 (3): 36-37.  doi:10.11896/j.issn.1002-137X.2017.03.009
PDF (259KB) (662)
To help the public security bureau search for criminal vehicles more efficiently, improving the efficiency of vehicle identification is necessary. According to statistics, extraction of the region of interest (ROI) accounts for about 60% of the entire vehicle identification process, so speeding up ROI extraction is especially important. First, a basic parallel algorithm was implemented using data partitioning. Then, guided by experimental analysis, a pre-processing decomposition scheme was carefully designed on top of the basic parallel algorithm. In our scheme, we set up a multi-queue buffer, reducing the number of threads sharing each cache and the number of lock operations on each cache. Experiments show that the algorithm, running on a server with two 12-core CPUs (24 threads with hyper-threading), achieved a 13.1x speedup over the serial algorithm.
Error Checking Tool for DAG-based Task Parallel Programs
LIU Yan-na, CHEN Li and TANG Sheng-lin
Computer Science. 2017, 44 (3): 38-41.  doi:10.11896/j.issn.1002-137X.2017.03.010
PDF (1637KB) (718)
AceMesh is a dataflow-driven task parallel programming language that allows programmers to parallelize traditional C/C++ programs using pragmas that mark parallel regions, parallel loops and task regions with descriptions of data inputs and outputs. The program is then translated into a DAG-based task parallel program, whose dependency graphs are built at runtime and scheduled efficiently onto multicore platforms. This paper analyzed typical errors that may appear when parallelizing AceMesh programs and introduced AceMeshCheck, a debugging tool for them. The paper presented the implementation details of the tool, discussed how it reduces the overhead of memory trace collection, and showed how to rebuild three-dimensional rectangular access regions from linearized memory access sequences. Experimental results show that the tool can identify typical errors hidden in AceMesh programs with relatively low overhead.
Parallel Algorithm Design and Optimization of Range Query for Meteorological Data Retrieval
XU Jing, REN Kai-jun and LI Xiao-yong
Computer Science. 2017, 44 (3): 42-47.  doi:10.11896/j.issn.1002-137X.2017.03.011
PDF (1107KB) (763)
With the continuous improvement of numerical weather prediction technology and resolution, meteorological data is growing massively, making the meteorological archival and retrieval system (MARS) less efficient on large data service requests. Aiming at this issue, we studied the optimization of region queries in MARS retrieval and proposed an efficient method, complement transform range query (CTRQ), which draws on the mathematical idea of complements and the calculation principle of multi-array aggregation to transform "big data" into "small data" in extensive range queries. The basic idea is that, when a query covers more than half of the indexes, the remainder is computed by comparing the size of the aggregation dimension in the hypercube with the attribute values in the query request, and the complement of the physically stored meteorological metadata is then computed during retrieval. Experimental results show that, compared with the original index calculation method, CTRQ effectively reduces the metadata index computation overhead in data retrieval. On this basis, combined with parallel processing, we designed and implemented a parallel CTRQ algorithm and achieved a maximum speedup of 1.9x over the improved serial algorithm, further improving the retrieval efficiency of MARS.
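The complement idea, when a range query covers more than half of a dimension, resolve the smaller complement and subtract, can be sketched as follows. This is our own toy inverted-index model, not the MARS/CTRQ implementation:

```python
def range_query(index, domain, wanted):
    """index: value -> set of record ids; domain: all values of the dimension.
    When `wanted` covers more than half of the domain, resolve the (smaller)
    complement and subtract it, touching fewer index entries."""
    all_ids = set().union(*index.values())
    if len(wanted) > len(domain) / 2:
        rest = [v for v in domain if v not in wanted]
        hit = set().union(*(index[v] for v in rest)) if rest else set()
        return all_ids - hit
    return set().union(*(index[v] for v in wanted)) if wanted else set()
```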
Parallel Calculation of Acoustic Scattering from Underwater Large Scale Elastic Shell at Low Frequency
ZHANG Jian-min, AN Jun-ying, CI Guo-qing and WANG Ning
Computer Science. 2017, 44 (3): 48-50.  doi:10.11896/j.issn.1002-137X.2017.03.012
PDF (944KB) (891)
The coupled finite element method (FEM) and boundary element method (BEM) model is effective for calculating the acoustic scattering from a submerged elastic shell at low frequency. With this model, calculating the acoustic scattering from a large-scale elastic object may consume massive computing time and memory. In this paper, parallel computing technology was used in the implementation. First, the FEM and BEM matrices are generated in parallel; then a parallelized generalized minimum residual method (GMRES) is used to solve the large nonsymmetric linear systems generated by the FEM-BEM coupling. The parallelized GMRES(m) iterative algorithm is described in detail, and its convergence is analyzed by setting different restart lengths when calculating sound scattering from a spherical shell. Finally, the low-frequency scattering of the Benchmark model is calculated, and its bistatic target strength (TS) and surface field are analyzed.
Parallel Algorithm of Nonnegative Matrix Factorization Based on Hybrid MPI and OpenMP Programming Model
TANG Bing, Laurent BOBELIN and HE Hai-wu
Computer Science. 2017, 44 (3): 51-54.  doi:10.11896/j.issn.1002-137X.2017.03.013
PDF (1747KB) (1471)
Nonnegative matrix factorization (NMF) is an efficient way to reduce the complexity of data and extract features, and it has been applied to various fields such as recommendation and text clustering. However, the computation of NMF is quite expensive. To address this problem, a hybrid parallel hierarchical NMF algorithm based on OpenMP and MPI was presented, which makes full use of the advantages of both the MPI message-passing model and the OpenMP shared-memory model. The new algorithm was evaluated in a multi-core cluster environment, and experimental results demonstrate that it achieves a high speedup and can handle large-scale NMF efficiently.
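For orientation, the serial kernel being parallelized is the classical Lee-Seung multiplicative update. The sketch below is a deliberately simple pure-Python version (the abstract does not state which update rule the paper uses, so this is an assumption); in the hybrid scheme, the matrix products inside it are what MPI distributes across nodes and OpenMP across cores:

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, r, iters=200, eps=1e-9, seed=0):
    """Multiplicative updates: V (m x n) ~= W (m x r) @ H (r x n), all nonnegative."""
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(r)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(r)]
    for _ in range(iters):
        WH, Wt = matmul(W, H), transpose(W)
        num, den = matmul(Wt, V), matmul(Wt, WH)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)] for i in range(r)]
        WH, Ht = matmul(W, H), transpose(H)
        num, den = matmul(V, Ht), matmul(WH, Ht)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(r)] for i in range(m)]
    return W, H

def error(V, W, H):
    WH = matmul(W, H)
    return sum((V[i][j] - WH[i][j]) ** 2 for i in range(len(V)) for j in range(len(V[0])))
```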
Hierarchical Remote Data Possession Checking Method
MA Hai-feng, YANG Jia-hai, YAO Nian-min and GUAN Ming-shan
Computer Science. 2017, 44 (3): 55-58.  doi:10.11896/j.issn.1002-137X.2017.03.014
PDF (336KB) (621)
In a cloud storage environment, the storage servers may not be fully trustworthy, so verifying the integrity of cloud data at low cost to users has become a problem of growing concern. While many methods have been proposed, they must verify multiple files one by one, so the computation and communication overhead remains high when the number of files is large. Aiming at this problem, a hierarchical remote data possession checking method was proposed. Combined with remote data possession checking, the proposed method provides more efficient and secure remote data integrity protection and supports dynamic data operations. Security analysis and experimental evaluation show that the proposed method is safe and reliable, gaining a 45%~80% performance improvement with a low false negative rate.
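The hierarchical idea, fold per-block hashes upward so many blocks (or files) can be checked against one small root, can be illustrated with a Merkle-style tree. This sketch is our own illustration of the general principle, not the paper's specific scheme (which adds challenge-response and dynamic operations):

```python
import hashlib

def h(data):
    return hashlib.sha256(data).hexdigest()

def merkle_root(blocks):
    """Hash file blocks, then fold pairwise up to a single root; verifying
    many blocks then needs only one stored root on the client side."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last hash on odd levels
            level.append(level[-1])
        level = [h((level[i] + level[i + 1]).encode())
                 for i in range(0, len(level), 2)]
    return level[0]

def check(stored_root, blocks):
    """Recompute the root from the server's blocks and compare."""
    return merkle_root(blocks) == stored_root
```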
Level-Set GPU Evolution Algorithm Based on B-spline
YUAN Bin
Computer Science. 2017, 44 (3): 59-62.  doi:10.11896/j.issn.1002-137X.2017.03.015
PDF (1081KB) (730)
Most Level-Set evolution models are based on mean curvature or gradient, which makes it difficult to preserve threadlike features while removing noise in 3D datasets. The upwind scheme often used to solve the Level-Set equation has low precision. This paper designed high-order Level-Set evolution equations based on curvature difference, and a hybrid GPU solver based on B-splines and central differences. Experimental results show that the Level-Set equation based on curvature difference preserves threadlike features while smoothing the dataset.
Study on Improved Clustering Collaborative Filtering Algorithm Based on Demography
WANG Yuan-yuan and LI Xiang
Computer Science. 2017, 44 (3): 63-69.  doi:10.11896/j.issn.1002-137X.2017.03.016
PDF (3864KB) (951)
The traditional user-based collaborative filtering recommendation algorithm suffers from high-dimensional sparsity and low recommendation accuracy in big data environments. A collaborative filtering recommendation algorithm combining demographic data with an improved clustering model was proposed to improve the accuracy and generalization ability of the recommender system. First, the method calculates the similarity among users from the demographic attributes and the user-item score matrix. Second, it hierarchically clusters users and items into neighborhoods, calculating the similarity between users or items from the users' scores on the items, and generates the interest neighborhood of a target user or item. Finally, recommendations are made according to the nearest neighbors' recent interests. Simulation experiments on the Epinions and MovieLens datasets show that the proposed algorithm improves recommendation accuracy compared with the traditional collaborative filtering algorithm, providing a reference for traditional collaborative filtering recommendation.
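Combining demographic similarity with rating similarity is commonly done as a weighted sum; the sketch below shows one plausible form (the weight `alpha`, the attribute-match measure and the function names are our assumptions, not the paper's exact formulas):

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse rating dicts (item -> score)."""
    keys = set(u) & set(v)
    if not keys:
        return 0.0
    dot = sum(u[k] * v[k] for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

def demo_sim(a, b):
    """Fraction of matching demographic attributes (age band, gender, ...)."""
    keys = set(a) & set(b)
    return sum(a[k] == b[k] for k in keys) / len(keys) if keys else 0.0

def hybrid_sim(ratings_a, ratings_b, demo_a, demo_b, alpha=0.5):
    return alpha * demo_sim(demo_a, demo_b) + (1 - alpha) * cosine(ratings_a, ratings_b)
```

The demographic term gives non-zero similarity even for cold-start users with no overlapping ratings, which is the usual motivation for the combination.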
Routing Algorithm Research on Heterogeneous Network on Chip
FANG Juan, LIU Shi-jian and LIU Si-tong
Computer Science. 2017, 44 (3): 70-72.  doi:10.11896/j.issn.1002-137X.2017.03.017
PDF (261KB) (689)
With the fast development of integrated circuits, the increase in chip area and energy consumption caused by buffers in traditional buffered NoCs has brought widespread attention to bufferless routing. By eliminating buffers, the whole pipeline is greatly simplified and performance is improved. But when the network load is heavy, data packets are repeatedly deflected or misrouted, increasing overall network latency and weakening system robustness. Given the diversity of applications running on the network, a heterogeneous network, as a relatively flexible structure, can effectively reduce transmission latency and improve system performance. In this paper, a new type of heterogeneous NoC combining bufferless and buffered NoCs was designed, matching a static routing algorithm and an adaptive routing algorithm (AFC) for packet transmission. At the same time, an optimized algorithm, AFC-LP, based on AFC was proposed, which further reduces the average communication latency and improves network performance through a second arbitration for bufferless routing. Experiments show that, compared with the traditional buffered dimension-order X-Y routing algorithm, the proposed AFC-LP algorithm reduces average network latency by 28.4% and improves instructions per cycle (IPC) by 10.4%.
Cost-efficient Resource Management for Cloud Multi-tier Applications
YANG Jin, PANG Jian-min, QI Ning and LIU Rui
Computer Science. 2017, 44 (3): 73-78.  doi:10.11896/j.issn.1002-137X.2017.03.018
PDF (619KB) (633)
Cloud data centers are increasingly becoming the first choice of many Internet enterprises, especially newly formed and small ones, because applications are convenient to deploy and easy to maintain, and no private in-house infrastructure is needed. In cloud environments based on infrastructure as a service, Internet companies can rent cloud infrastructure dynamically according to their service requirements, giving them the chance to save infrastructure rental costs while preserving performance guarantees. However, the existing methods for balancing workloads and scaling resources in industry practice can only monitor the state of virtual machines, not custom performance metrics such as the number of live application sessions, so they cannot determine the exact resource demands implied by the service requirements. In addition, there are few academic studies on reducing rental costs and employing billed resources efficiently in the cloud. On this basis, this paper proposed a cost-efficient resource management approach for cloud multi-tier applications, aiming to save infrastructure costs and improve application performance within the billed resources. Finally, we compared the proposed method with the algorithms used in practice by simulating them with mature benchmarks. The results indicate that our method can not only improve the quality and performance of the application but also greatly reduce the cost of renting infrastructure.
Research on Low Energy-consumption Data Collection for WSN Environment Based on Genetic Particle Swarm Optimization
WANG Hong-lei, XU Ping-ping, ZHU Wen-xiang and YOU Xing-miao
Computer Science. 2017, 44 (3): 79-83.  doi:10.11896/j.issn.1002-137X.2017.03.019
PDF (430KB) (633)
Aiming at problems in greenhouse wireless sensor networks such as uneven node distribution and strict energy constraints, an improved genetic particle swarm optimization algorithm was proposed to minimize the total energy consumption of data collection. The algorithm encodes spanning trees into particles using the parent-node representation. An algorithm for generating random data collection trees that satisfy the tree-height constraint was designed, along with a single-point mutation algorithm that keeps the spanning tree within the height limit. Particles move to the next iteration through single-point mutation toward the individual best and the global best. Under the same hop count, simulation results show that the proposed algorithm reduces total energy consumption by 7.34% compared with DL-DCT and prolongs the average network lifetime.
Doppler Effect and Analysis of Signals in Three-dimensional Channel Model
CHEN Wen-wen, WANG Ya-lin and ZHOU Jie
Computer Science. 2017, 44 (3): 84-88.  doi:10.11896/j.issn.1002-137X.2017.03.020
PDF (1728KB) (1181)
In the modeling of three-dimensional (3D) wireless communication channels, an exponential probability density function for elevation angle (EA) distributions in different environments was proposed, and wireless channels were implemented under this distribution. Furthermore, this paper simulated and analyzed the Doppler power spectral density (PSD) and the signal characteristics of terminals, deriving closed-form expressions for the Doppler PSDs by an approximate algorithm. Meanwhile, the PSDs of the envelope and squared envelope of arriving signals, and the probability density function of the delay phase difference in a Rayleigh fading channel, were investigated. Simulation results show that the channel parameter estimation of this model agrees with reference theories and practical experience. In addition, the model has a single variable that helps it adapt to different actual channel environments, so it can be applied to parameter estimation in many wireless communication environments.
Task Allocation Strategy for Wireless Sensor Networks with Mixed Coalition
CAO Yi-qin, CHEN Ning-xia and HUANG Xiao-sheng
Computer Science. 2017, 44 (3): 89-96.  doi:10.11896/j.issn.1002-137X.2017.03.021
PDF (716KB) (654)
Most existing task allocation strategies for wireless sensor networks seldom consider the internal structure of tasks, which can strongly affect network lifetime, energy consumption and load balance. This paper proposed a novel wireless sensor network (WSN) task allocation strategy based on logical dependencies. First, a task is decomposed step by step into sub-tasks by elected leaders according to the logical dependencies, and the sub-tasks are prioritized on the same basis. Then, using binary matrix coding, a discrete particle swarm optimization task allocation strategy based on weighted location with mixed coalitions is applied until a suitable node is found to execute each sub-task. To enhance communication between the elected leaders, virtual nodes are introduced; to estimate residual node energy, an estimated energy value is used to decide whether a sub-task should be migrated. Moreover, according to the expected completion time and weight coefficients of sub-tasks, key sub-tasks are identified by topological and inverse topological sorting and assigned to nodes with strong capability and high execution speed. Finally, experimental results showed that the proposed task allocation strategy can effectively prolong network lifetime, balance the network load and reduce energy consumption.
Route-radius-based Adaptive Protocol for Long-chain Wireless Sensor Networks
LIN Wei-lan, XIAO Jin-chao and ZI Shuang-fei
Computer Science. 2017, 44 (3): 97-104.  doi:10.11896/j.issn.1002-137X.2017.03.022
PDF (1289KB) (693)
A long-chain wireless sensor network exhibits a special topology in which the nodes are distributed along a line like a long chain. Such an inefficient topology not only raises the probability of data collision but also increases data transmission delay, lowering the data reception efficiency of the Sink node and even collapsing the entire network; generally, the closer a node is to the Sink, the more obvious this phenomenon is. According to the linking characteristics of long-chain wireless sensor networks, this study proposed an adaptive strategy termed Route-Radius Adaptive with Max transmission Distance (RAMD), which adaptively adjusts the routing radius using a link quality assessment that combines packet receive rate (PRR) with received signal strength indication (RSSI), contributing to the calculation of collision avoidance parameters as well as routing selection. Simulated comparisons with typical hierarchical routing and sequential transmission routing, together with the building and testing of a physical long-chain platform containing 200 nodes, show that the protocol simplifies routing selection, decreases delay, and reduces the probability of data collision.
Opportunistic Routing Algorithm Based on Regional Friendship
GUO Dong-yue and LIU Lin-feng
Computer Science. 2017, 44 (3): 105-109.  doi:10.11896/j.issn.1002-137X.2017.03.023
PDF (399KB) (706)
In opportunistic networks, especially mobile social networks, nodes tend to maintain contact with their closer neighbors, which helps predict contact probabilities among nodes by evaluating node closeness degrees. Because node closeness relates to both time and region, when constructing the closeness evaluation model from node contact histories we considered the influence of contact region and contact time on closeness degrees; the close neighbors of a node in the corresponding regions, its regional friends, can thus be obtained. Finally, an opportunistic routing algorithm based on regional friendship was presented, combining a node's current location with its regional friendships. Experimental results indicate that the proposed algorithm achieves a higher delivery ratio and lower forwarding energy consumption than others under different node deployment densities and tolerable delays.
Performance Analysis of Long-distance IEEE802.11n Wireless Link
HUANG Hui-fang, LI Ning, WANG Fa-peng and MA Wen-feng
Computer Science. 2017, 44 (3): 110-113.  doi:10.11896/j.issn.1002-137X.2017.03.024
PDF (303KB) (857)
IEEE 802.11n-2009 adds frame aggregation and block ACK, enhancing the MAC layer and increasing network throughput dramatically, and thus provides better performance over long-distance wireless links. Through calculation and simulation of channel utilization, we demonstrated that a node using frame aggregation and block ACK achieves markedly higher utilization than one without them. The simulation results also show that faster transfer speeds and larger aggregated frames achieve higher throughput, and that this advantage grows as the distance increases.
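Why aggregation helps more at long range can be seen from a back-of-the-envelope airtime model: aggregation amortizes the per-frame overhead, including the round-trip propagation delay, over a whole burst. The sketch below uses simplified, made-up timing parameters (not values from the standard or the paper):

```python
C = 3e8  # speed of light, m/s

def airtime_us(n_frames, distance_m, payload_us=100.0, phy_us=40.0, ack_us=30.0,
               aggregated=True):
    """Airtime to deliver n_frames, in microseconds. Aggregated: one PHY
    preamble, one block ACK and one round trip per burst; otherwise each
    frame pays the full overhead including propagation delay."""
    prop = 2 * distance_m / C * 1e6   # round-trip propagation, us
    if aggregated:
        return phy_us + n_frames * payload_us + ack_us + prop
    return n_frames * (phy_us + payload_us + ack_us + prop)

def throughput_gain(n_frames, distance_m):
    """Throughput ratio of aggregated vs per-frame operation (same payload)."""
    return (airtime_us(n_frames, distance_m, aggregated=False)
            / airtime_us(n_frames, distance_m))
```

With these numbers the gain of an 8-frame aggregate rises as the link gets longer, matching the abstract's observation that the advantage increases with distance.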
Identification Method for OFDM Signal Based on MUSIC Algorithm
ZHANG Hai-chuan and LEI Ying-ke
Computer Science. 2017, 44 (3): 114-117.  doi:10.11896/j.issn.1002-137X.2017.03.025
PDF (326KB) (1130)
Aiming at the poor performance and the large number of OFDM symbols required by conventional identification methods for short cyclic prefix OFDM signals, we proposed an identification method for OFDM signals based on the MUSIC algorithm. The method first analyzes the structure of the OFDM signal, then performs singular value decomposition on the autocorrelation matrix and extracts the singular values. Finally, the algorithm identifies the OFDM signal according to the number of large non-zero singular values. Experimental results show that the proposed algorithm needs fewer OFDM symbols to effectively identify short cyclic prefix OFDM signals and performs better than the conventional algorithm.
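The structure being exploited is that the cyclic prefix repeats the symbol tail, which gives the autocorrelation matrix its distinctive rank profile. As a toy stand-in (plainly not the paper's MUSIC/SVD algorithm), the classical lag-N autocorrelation metric below makes that repeated-tail structure visible on a synthetic OFDM signal:

```python
import math, cmath, random

def ofdm_symbol(rng, n_fft=64, cp=16):
    """Random QPSK subcarriers -> time domain via a (slow) inverse DFT,
    then prepend the cyclic prefix (a copy of the symbol tail)."""
    X = [rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) for _ in range(n_fft)]
    x = [sum(X[k] * cmath.exp(2j * math.pi * k * n / n_fft) for k in range(n_fft)) / n_fft
         for n in range(n_fft)]
    return x[-cp:] + x

def cp_metric(sig, n_fft=64):
    """Normalized lag-n_fft autocorrelation: large when a cyclic prefix
    repeats the symbol tail (OFDM), small for unstructured noise."""
    num = sum(sig[i] * sig[i + n_fft].conjugate() for i in range(len(sig) - n_fft))
    den = sum(abs(s) ** 2 for s in sig)
    return abs(num) / den
```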
Routing Mechanism Based on Business Differentiating in Software Defined Network
LI Bing-kui, ZHUANG Lei, MA Ding, HU Ying, WANG Guo-qing and JING Chen-kai
Computer Science. 2017, 44 (3): 118-122.  doi:10.11896/j.issn.1002-137X.2017.03.026
Abstract PDF(444KB) ( 1515 )   
Software defined networking (SDN) is a novel network architecture in which data paths are computed from the global topology. In current SDN networks, load balancing and QoS assurance are crucial issues that remain imperfectly resolved. To this end, drawing on the global network information held by the SDN controller, we proposed a differentiated-business routing mechanism that combines network awareness, traffic-based business differentiation, and a K shortest paths algorithm. Experimental results show that this mechanism can choose an optimal path satisfying the QoS requirements of different types of data traffic, balance the load across the entire network, and improve the utilization of the underlying network resources.
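A minimal sketch of the K-shortest-paths building block such a routing mechanism can use to rank candidate paths (the graph encoding, link costs, and loop-free enumeration below are illustrative assumptions, not the authors' implementation):

```python
import heapq

def k_shortest_paths(graph, src, dst, k):
    """Return up to k shortest loop-free paths (cost, path) from src to dst.

    graph: {node: {neighbor: link_cost}}.  Classic priority-queue
    enumeration: each time dst is popped, one more shortest path is found.
    """
    paths = []
    pops = {node: 0 for node in graph}
    heap = [(0, [src])]
    while heap and len(paths) < k:
        cost, path = heapq.heappop(heap)
        node = path[-1]
        pops[node] += 1
        if node == dst:
            paths.append((cost, path))
        if pops[node] <= k:
            for nbr, w in graph[node].items():
                if nbr not in path:          # keep paths loop-free
                    heapq.heappush(heap, (cost + w, path + [nbr]))
    return paths

graph = {'a': {'b': 1, 'c': 4}, 'b': {'c': 1, 'd': 5},
         'c': {'d': 1}, 'd': {}}
# The controller could then test each returned path against QoS constraints.
```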
Optimized Clustering Wireless Sensor Network Algorithm Based on Game Theory
YIN Xiang, CHANG Li-ping, DAI Wei-chao and LI Chun-xiao
Computer Science. 2017, 44 (3): 123-127.  doi:10.11896/j.issn.1002-137X.2017.03.027
Abstract PDF(408KB) ( 668 )   
Energy consumption is one of the main factors to consider in the design of a wireless sensor network. Most existing approaches reduce and balance network energy consumption by clustering, but they still have drawbacks, such as an unstable number of cluster heads and their uneven distribution, which shortens the lifetime of the whole network. An optimized clustering routing protocol based on game theory was proposed in this paper. The protocol partitions the region according to the optimal number of cluster heads, and a cluster head is elected through a game within each sub-region. To balance the energy consumption of the entire network and extend the network lifetime, the algorithm also introduces a zero-probability mechanism and a regional rotation mechanism. Finally, the superiority of the algorithm was verified by simulation experiments.
Energy Transmitter Placement to Optimize Duty Cycle of RF Energy Harvesting Wireless Sensor Networks
CHI Kai-kai, LIN Yi-min, LI Yan-jun and CHENG Zhen
Computer Science. 2017, 44 (3): 128-131.  doi:10.11896/j.issn.1002-137X.2017.03.028
Abstract PDF(328KB) ( 629 )   
Radio frequency (RF)-based wireless energy transfer is a promising technique for powering next-generation wireless sensor networks (WSNs). Due to the very limited energy harvesting rate, RF energy harvesting WSNs usually operate in an ultra-low duty cycle mode. A fundamental issue for this kind of WSN is therefore how to deploy a given number of energy transmitters (ETs) so as to maximize the minimum duty cycle over all nodes. This paper first analyzed this optimal ET placement problem to understand it theoretically, and then presented a greedy approach and a particle swarm optimization (PSO) based approach. Simulation results show that the proposed approaches significantly improve the duty cycle compared to evenly spaced ET placement.
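The greedy approach can be sketched as below, under the simplifying assumptions of 2-D coordinates and an inverse-square harvesting model (`p0` and `eps` are made-up constants; the paper's actual propagation and duty-cycle models may differ):

```python
def greedy_et_placement(nodes, candidates, num_ets, p0=1.0, eps=0.25):
    """Greedily place num_ets energy transmitters from candidate spots.

    Harvested power at a node falls off with squared distance
    (p0 / (d^2 + eps); eps avoids the singularity at d = 0).  Each step
    picks the candidate that maximizes the minimum total harvested power
    over all nodes, a proxy for the minimum duty cycle.
    """
    def power(node, et):
        d2 = (node[0] - et[0]) ** 2 + (node[1] - et[1]) ** 2
        return p0 / (d2 + eps)

    placed = []
    for _ in range(num_ets):
        best = max(
            (c for c in candidates if c not in placed),
            key=lambda c: min(
                sum(power(n, e) for e in placed + [c]) for n in nodes
            ),
        )
        placed.append(best)
    return placed
```

With two nodes at (0,0) and (10,0), the first ET lands at the midpoint (5,0), since placing it at either node would starve the other.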
Channel Allocation Algorithm of Multi-priority Satellite Network
BIE Yu-xia, BU Rui-jie and LIU Hai-yan
Computer Science. 2017, 44 (3): 132-136.  doi:10.11896/j.issn.1002-137X.2017.03.029
Abstract PDF(461KB) ( 889 )   
To solve the channel allocation problem for terminals accessing a satellite network and make full use of the limited channel resources, this paper analyzed the diversity of terminal levels and service levels, added priority identifiers, and established a multi-terminal, multi-service priority model. We analyzed the relationships among access satisfaction, access restrictiveness, link bandwidth, and overall link delay, and established an access-satisfaction model. A mechanism of channel contention and bandwidth compression was proposed, followed by a multi-terminal, multi-service priority channel allocation algorithm. The measured results show that, compared with traditional algorithms, the proposed algorithm allocates channel resources across different terminal and service levels, thus improving access satisfaction and channel utilization.
Research on Algorithm of Closed-loop Single Base Station Direction-finding Based on Uniform Circular Array
WEI Lin-jing, LIAN Zhi-chao, PENG Jing and LI Guang
Computer Science. 2017, 44 (3): 137-139.  doi:10.11896/j.issn.1002-137X.2017.03.030
Abstract PDF(298KB) ( 709 )   
To estimate the three-dimensional (3D) direction of a single base station (azimuth, elevation and range) using a uniform circular array (UCA), an improved closed-loop algorithm was proposed for 3D scenes. The algorithm is a general method with a simple calculation process, and it does not require the circular array to be centrally symmetric. There is no restriction on the number of sensors, only a single least-squares procedure is required to estimate the base station location, and all three direction parameters can be estimated. The experimental results show that, compared with a variety of similar algorithms, the algorithm's performance is comparable to that of the 3D MUSIC algorithm while having lower computational complexity and faster execution.
Routing Selection and Channel Assignment Method for Mobile Ad Hoc Cognitive Network
LIU Ping and YUAN Pei-yan
Computer Science. 2017, 44 (3): 140-144.  doi:10.11896/j.issn.1002-137X.2017.03.031
Abstract PDF(397KB) ( 667 )   
To address the routing instability problem in mobile ad hoc cognitive networks, a joint routing selection and channel assignment method was proposed. First, it develops a data transfer cost metric that accounts for both routing stability and channel interference. Then, the remaining lifetime of a link is calculated from the position and velocity of the cognitive nodes, and the stability of the route is predicted. Interference with primary nodes is avoided by assigning channels according to the different channel interference patterns. Finally, the best link is selected in two steps: route discovery and route confirmation. Simulation results show that, compared with the classical AODV method, the new method has a lower packet loss rate and smaller transmission delay.
Lightweight and Shifted CA Architecture for MANET
GUO Ping, FU De-sheng, ZHU Jie-zhong and CHENG Ya-ping
Computer Science. 2017, 44 (3): 145-149.  doi:10.11896/j.issn.1002-137X.2017.03.032
Abstract PDF(1220KB) ( 690 )   
To address the difficulty of adopting stronger and more complex authentication mechanisms in mobile ad hoc networks (MANETs), with their open communication channels, highly dynamic movement, and resource-constrained nodes, a lightweight and shifted certificate authority (LSCA) authentication architecture for MANETs was put forward. LSCA combines the idea of a lightweight CA and is designed for MANETs with short lifetimes and highly dynamic topologies. It retains the advantages of a lightweight CA by simplifying the traditional certificate-based CA so that no certificates are needed. Moreover, by transferring the overall CA role among a number of alternative CA nodes in regular rotation, LSCA needs no preset nodes and no knowledge of the MANET topology, and the system attains a degree of intrusion tolerance. Analysis and simulation results show that LSCA is robust against DoS attacks and balances the trade-off between communication, computation, and storage better than a distributed CA or a threshold-based CA, making it especially suitable for highly dynamic MANET topologies.
Design and Security Threats Analysis for Information Platform of Fusion Ubiquitous Network in Power Grid
QI Yong, GUO Shi-wei and LI Qian-mu
Computer Science. 2017, 44 (3): 150-152.  doi:10.11896/j.issn.1002-137X.2017.03.033
Abstract PDF(370KB) ( 579 )   
With the rapid development of electric power informatization, power systems have established EMS (Energy Management System), SCADA (Supervisory Control And Data Acquisition) automatic grid dispatching systems, and the like. To solve the problem of "information islands" caused by the many heterogeneous subsystems in the electric power system, an SOA-based information platform for the fusion ubiquitous network in the power grid was proposed. Threats to the infrastructure layer, data layer, and service layer of this architecture were studied. Infrastructure layer threats include security threats to terminals, networks, and servers; data layer threats include leakage or corruption of private information and illegal access; service layer threats include improper certificate authority practices. Finally, security measures and recommendations were given for the threats faced at each layer.
Deterministic Joint Remote Preparation of Arbitrary Four-qubit Cluster-type Entangled State Using EPR Pairs
YUAN Xiao-min, LIU Wen-jie, LIU Qi and LU Jin-shen
Computer Science. 2017, 44 (3): 153-157.  doi:10.11896/j.issn.1002-137X.2017.03.034
Abstract PDF(378KB) ( 621 )   
Taking four EPR (Einstein-Podolsky-Rosen) pairs as the quantum channel, a new protocol for the deterministic joint remote preparation of four-particle cluster-type states was presented. In the protocol, one of the senders performs a four-qubit projective measurement while the other performs a bipartite projective measurement. Afterwards, the receiver applies appropriate unitary operations to his/her own two particles and then applies a controlled-NOT gate to each of two auxiliary particles, thereby obtaining the desired state. Compared with other protocols for the deterministic remote preparation of arbitrary four-particle cluster-type states, our protocol is more efficient, and it is economical and feasible in physical experiments.
Research on High Performance Rule Matching Algorithm in IPV6 Networks
PANG Li-hui and JIANG Feng
Computer Science. 2017, 44 (3): 158-162.  doi:10.11896/j.issn.1002-137X.2017.03.035
Abstract PDF(419KB) ( 846 )   
As is well known, the firewall is a core device for guaranteeing network security, and rule matching is one of its most important technologies. However, high-performance rule matching algorithms designed for IPv4 are not suitable for IPv6 as networks migrate from IPv4 to IPv6, mainly because the IPv6 address space is much larger than that of IPv4. We therefore proposed a high-performance rule matching algorithm suited to IPv6, named HiPRM (High Performance Rule Matching). HiPRM first partitions the whole rule set into several subsets based on the protocol and destination port fields of the rules, and then selects discriminating bits from the combined source and destination IPv6 addresses to construct binary trees that split these subsets into smaller ones. When a packet maps to one of these small rule subsets, linear search is used to find the matching rules within it. Analysis and experimental results show that HiPRM not only has good time and space performance but also scales well across different rule sets.
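The bit-selection-and-bucketing idea behind such schemes can be sketched roughly as follows, simplified to a single 32-bit address field and exact-match rules (the actual algorithm works on 128-bit IPv6 source/destination combinations and prefix rules, and uses binary trees rather than a hash of bit tuples):

```python
def select_bits(rules, num_bits, width=32):
    """Pick the bit positions that split the rule set most evenly
    (closest to half the rules having a 1 at that position)."""
    n = len(rules)
    def balance(b):
        ones = sum((r['src'] >> b) & 1 for r in rules)
        return abs(ones - n / 2)
    return sorted(range(width), key=balance)[:num_bits]

def build_buckets(rules, bits):
    """Group rules by their values at the selected bit positions;
    each bucket is later searched linearly."""
    buckets = {}
    for r in rules:
        key = tuple((r['src'] >> b) & 1 for b in bits)
        buckets.setdefault(key, []).append(r)
    return buckets

def match(buckets, bits, packet_src):
    """Locate the packet's bucket, then linear-search inside it."""
    key = tuple((packet_src >> b) & 1 for b in bits)
    return [r for r in buckets.get(key, []) if r['src'] == packet_src]
```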
Ultra-lightweight Block Cipher Algorithm (PFP) Based on Feistel Structure
HUANG Yu-hua, DAI Xue-jun, SHI Yang-yang, LIU Ning-zhong, ZENG Qing-xi and SU Fei
Computer Science. 2017, 44 (3): 163-167.  doi:10.11896/j.issn.1002-137X.2017.03.036
Abstract PDF(390KB) ( 1467 )   
To meet the requirements of cipher algorithms in resource-constrained terminal systems, such as limited energy supply, an ultra-lightweight block cipher named PFP was designed, drawing on the experience of the PRESENT algorithm. PFP uses a Feistel iterative structure, with a modified permutation in the diffusion layer. A hardware implementation of PFP requires only 1355 GE, which is better than PRESENT and meets the requirement of extremely resource-constrained environments (no more than 2000 GE). Test results show that PFP runs about 50% faster than PRESENT. Dependence tests, linear analysis, differential analysis, impossible differential analysis, and key schedule attacks show that PFP satisfies the security requirements of a lightweight block cipher.
Improved Veron’s Identification with Lightweight Structure and Digital Signature Scheme
YE Jun-yao, ZHENG Dong and REN Fang
Computer Science. 2017, 44 (3): 168-174.  doi:10.11896/j.issn.1002-137X.2017.03.037
Abstract PDF(604KB) ( 781 )   
At present, most public-key cryptography schemes are based on hard problems such as large integer factorization or the discrete logarithm, all of which can be solved by a quantum computer in polynomial time. Cryptographic schemes based on error-correcting codes can resist quantum attacks, so it is necessary to design code-based identification and signature schemes. Veron's identification scheme is attractive overall, but its public key is too long. Building on Veron's scheme, we used a double circulant matrix to further reduce the public key size, embedding the secret key directly into the public key. This has three advantages: the security relies on a problem related to well-known and well-studied codes, namely double circulant codes; the public key is very small, only 1041 bits in a typical setup, as is the private key; and the transmission rate of each round is very low. We then analyzed the security of the improved scheme, which can be reduced to the GSD hard problem. Finally, we used the Fiat-Shamir paradigm to transform the improved identification scheme into a signature scheme and proved its correctness and security. The cyclic structure of the improved scheme makes it relatively easy to implement and highly efficient, which makes our variant attractive for lightweight implementations, especially on handheld terminals or in cloud storage environments.
Design of Secure Multi-hop Time Synchronization Protocol for IEEE802.15.4e
YANG Wei, WANG Qin, WAN Ya-dong and HE Jie
Computer Science. 2017, 44 (3): 175-181.  doi:10.11896/j.issn.1002-137X.2017.03.038
Abstract PDF(4422KB) ( 735 )   
IEEE 802.15.4e is the latest MAC-layer standard for the industrial Internet of Things; it enables highly reliable, ultra-low-power wireless networking through time synchronization. Because time synchronization is a core technology for industrial wireless networks, it often becomes an attractive target for attackers. This paper proposed a secure multi-hop time synchronization mechanism for IEEE 802.15.4e called SMTSF. SMTSF mainly combines an anomaly-based intrusion detection algorithm, a trust-model-based multi-path approach, and encryption and authentication technologies to secure multi-hop time synchronization. In the intrusion detection algorithm, border router nodes verify the rank values of other nodes in the network, which can effectively detect attacks on the time synchronization tree, and a small packet-filtering firewall blocks intrusion attempts from the Internet. The trust-model-based multi-path approach finds a secure path to the root node by establishing trust relationships between nodes. Simulation experiments show that SMTSF can detect time-synchronization-tree attacks and defend against node compromise attacks.
Perimeter Intrusion Detection Based on Improved Convolution Neural Networks
ZHANG Yong-liang, ZHANG Zhi-qin, WU Hong-tao, DONG Ling-ping and ZHOU Bing
Computer Science. 2017, 44 (3): 182-186.  doi:10.11896/j.issn.1002-137X.2017.03.039
Abstract PDF(2456KB) ( 1307 )   
Monitoring systems have become one of the most important means of perimeter intrusion detection, but most existing systems perform only passive surveillance. In this paper, a method for active perimeter intrusion detection was proposed that identifies human targets in video captured by monitoring systems. To improve robustness across different environments, an improved convolutional neural network was designed to detect human bodies in multiple postures captured by fixed cameras. Deep and shallow features are combined to describe pedestrians, improving precision and robustness, and Softmax is used for classification. The experimental results confirm that the proposed algorithm has a high recognition rate for human targets, achieving accuracies of 98.82% on the INRIA database, 99.82% on NICTA, 94.5% on CVC, and 99.92% on Daimler.
Information Hiding Scheme for 3D Models Based on Local Height and Mean Shift Clustering Analysis
REN Shuai, ZHAO Xiang-mo, ZHANG Tao, SHI Fang-xia and MU De-jun
Computer Science. 2017, 44 (3): 187-191.  doi:10.11896/j.issn.1002-137X.2017.03.040
Abstract PDF(1949KB) ( 642 )   
To satisfy the requirements of confidential communication, an information hiding scheme for 3D models based on Mean Shift clustering analysis was proposed. Local height is introduced to measure vertex significance and can be regarded as one of the energy characteristics of a vertex. On this basis, Mean Shift is used to classify the vertices into three categories with different energy characteristics. Different information can then be hidden in the different kinds of vertices by modifying structural characteristic parameters, matching the energy characteristics of the hidden information and its carrier as closely as possible. Experimental results show that the scheme performs well: its resistance to steganalysis and its sensitivity to tampering are much better than those of comparable schemes, and its capacity is also greatly improved.
Forgery Attack on Authenticated Cipher Mode iPMAC and VPMAC
TIAN Yu-dan and WEI Yong-zhuang
Computer Science. 2017, 44 (3): 192-194.  doi:10.11896/j.issn.1002-137X.2017.03.041
Abstract PDF(217KB) ( 679 )   
Message authentication has received widespread attention since it was proposed, and iPMAC and VPMAC have become representative message authentication codes due to their parallel structure. Whether iPMAC and VPMAC are secure has therefore become a research focus. Based on the variable parameters Γ and Λ, we put forward a new forgery attack using the basic idea of collisions. From known relations, we derived a new set of corresponding relations and created a successful forgery with probability 0.5 using only one query to the decryption oracle. The attack also applies to VPMAC.
Efficient BTB Based on Taken Trace
XIONG Zhen-ya, LIN Zheng-hao and REN Hao-qi
Computer Science. 2017, 44 (3): 195-201.  doi:10.11896/j.issn.1002-137X.2017.03.042
Abstract PDF(2085KB) ( 829 )   
Computer architecture must balance two opposing concerns: performance and energy consumption. To reduce the growing energy consumption of embedded processors, we proposed a taken-trace branch target buffer (TG-BTB), an energy-efficient BTB scheme for embedded processors. Unlike the conventional scheme, which looks up the BTB on every instruction fetch, TG-BTB performs a lookup only once per taken trace. The structure dynamically analyzes trace behavior during program execution: it first detects the instruction interval between two taken branches, then stores this interval in the TG-BTB, and finally decides whether to perform a BTB lookup according to the stored interval. The experimental results demonstrate that TG-BTB reduces lookup energy consumption by 81% compared to a conventional BTB.
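A toy software simulation of the lookup-skipping idea (the trace format and the learning rule below are illustrative assumptions, not the hardware design):

```python
def simulate_tg_btb(trace):
    """Count BTB probes for a stream of (pc, taken) fetch records.

    A conventional BTB is probed on every fetch.  The taken-trace idea
    learns the instruction interval between consecutive taken branches
    and, once that interval is known, skips probes for the not-taken
    instructions inside the trace.
    Returns (probes_with_tg_btb, probes_conventional).
    """
    interval = {}          # taken-branch pc -> instructions until next taken
    lookups = skip = 0
    prev_pc, gap = None, 0
    for pc, taken in trace:
        if skip > 0:
            skip -= 1      # inside a learned taken trace: no BTB probe
        else:
            lookups += 1
        gap += 1
        if taken:
            if prev_pc is not None:
                interval[prev_pc] = gap - 1   # learn the trace length
            if pc in interval:
                skip = interval[pc]           # skip until the next taken pc
            prev_pc, gap = pc, 0
    return lookups, len(trace)

# A 3-instruction loop body whose backward branch is always taken:
# after the interval is learned, only the branch itself is probed.
```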
Research on Component Evolving Behavior Based on High-order π Calculus
HE Hai-yang, LI Qiang, YU Xiang and HAN Xiang-yu
Computer Science. 2017, 44 (3): 202-208.  doi:10.11896/j.issn.1002-137X.2017.03.043
Abstract PDF(548KB) ( 607 )   
Using formal methods to analyze component behavior during software evolution has become a research hotspot. To meet the requirement that a formal method support component evolution behavior, a formal method based on higher-order π calculus was proposed. It classifies component evolution behaviors and transforms evolution requirements described by sequence diagrams into higher-order π expressions. Based on the rules and equivalences of higher-order π calculus, the evolution behaviors are computed and evolution conflicts are detected. Finally, this paper used an example to demonstrate the feasibility and usefulness of the method.
Research on Test Data Automatic Generation Based on Improved Genetic Algorithm
GAO Xue-di, ZHOU Li-juan, ZHANG Shu-dong and LIU Hao-ming
Computer Science. 2017, 44 (3): 209-214.  doi:10.11896/j.issn.1002-137X.2017.03.044
Abstract PDF(557KB) ( 1529 )   
Automatic test data generation is the basis of software testing and a key link in test automation. To improve the efficiency of test automation, a new algorithm was proposed that improves the traditional genetic algorithm within a test data generation system model. The algorithm uses adaptive crossover and mutation operators and introduces an improved simulated annealing mechanism. A suitable fitness function is also designed to accelerate the optimization of the data. Using a triangle classification program, binary search, and bubble sort, the basic genetic algorithm and the adaptive genetic algorithm were compared, and the improved algorithm's performance was tested. Experimental results show the practicability, feasibility, and efficiency of the algorithm for test data generation.
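A toy version of the adaptive-operator idea on bit strings (the rate ranges, tournament selection, and one-max fitness are illustrative choices; the paper's improved simulated annealing layer is omitted):

```python
import random

def adaptive_ga(fitness, length, pop_size=20, generations=60, seed=1):
    """Adaptive GA sketch: crossover/mutation rates shrink linearly for
    above-average parents and stay high for weak ones, so good genomes
    are disturbed less while poor regions keep exploring."""
    rnd = random.Random(seed)
    pop = [[rnd.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        fits = [fitness(ind) for ind in pop]
        f_avg, f_max = sum(fits) / len(fits), max(fits)

        def rate(f, lo, hi):          # fitter pair -> lower operator rate
            if f_max == f_avg:
                return hi
            return lo + (hi - lo) * (f_max - f) / (f_max - f_avg) \
                if f >= f_avg else hi

        new_pop = []
        while len(new_pop) < pop_size:
            a, b = rnd.sample(range(pop_size), 2)    # binary tournaments
            p1 = pop[a] if fits[a] >= fits[b] else pop[b]
            a, b = rnd.sample(range(pop_size), 2)
            p2 = pop[a] if fits[a] >= fits[b] else pop[b]
            f_pair = max(fitness(p1), fitness(p2))
            child = list(p1)
            if rnd.random() < rate(f_pair, 0.5, 0.9):   # adaptive crossover
                cut = rnd.randrange(1, length)
                child = p1[:cut] + p2[cut:]
            for i in range(length):
                if rnd.random() < rate(f_pair, 0.01, 0.1):  # adaptive mutation
                    child[i] ^= 1
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)
```

In real test data generation the fitness would measure branch distance or coverage of the program under test rather than a one-max score.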
Time-aware Query-time Entity Resolution and Data Fusion in Heterogeneous Information Spaces
YANG Dan, CHEN Mo, WANG Gang and SUN Liang-xu
Computer Science. 2017, 44 (3): 215-219.  doi:10.11896/j.issn.1002-137X.2017.03.045
Abstract PDF(506KB) ( 577 )   
Most traditional entity resolution (ER) techniques deal with static data sets in an offline, non-real-time manner; for large data sets this usually requires substantial time and system resources. Faced with evolving, heterogeneous entities carrying temporal information in heterogeneous information spaces, time-aware query-time ER and data fusion become necessary to ensure data quality and meet user requirements. Targeting entity search based on keyword queries with temporal context in heterogeneous information spaces, this paper proposed a time-aware query-time ER approach, TQ-ER, that provides users with more accurate entity profiles, along with a time-aware iterative query expansion algorithm. TQ-ER leverages the temporal context of the query and the temporal information of entities to identify the minimum set of entities on which ER and data fusion must be performed for a given query to be answered correctly. Extensive experiments on real data sets show the effectiveness and correctness of TQ-ER.
Mobile SNS Data Dynamic Partitioning and Replication Algorithm Based on Location Information
WANG Qing-yun and CHENG Chun-ling
Computer Science. 2017, 44 (3): 220-225.  doi:10.11896/j.issn.1002-137X.2017.03.046
Abstract PDF(1963KB) ( 622 )   
Existing social network data partitioning algorithms focus on social relationships and interactions without considering location information, which leads to long response times for location-based queries. To solve this problem, we designed a two-layer graph model of the mobile social network that captures the location dependency of user interaction behavior, and proposed MSDPR, a mobile SNS data dynamic partitioning and replication algorithm based on location information. MSDPR partitions data according to the clusters produced by an improved K-Means algorithm and then replicates data using the social relationships. Experiments show that MSDPR effectively improves the efficiency of local access and reduces access latency in the mobile social network, and that it adapts well when data are added dynamically.
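The location-clustering step can be sketched with plain k-means over 2-D user coordinates (farthest-point seeding keeps the sketch deterministic; MSDPR's improved K-Means initialization and its social-replication step are not reproduced here):

```python
import math

def kmeans(points, k, iters=20):
    """Plain k-means over 2-D locations.  Clusters then define the data
    partitions; replicas would be placed along cross-partition social links."""
    # farthest-point seeding: deterministic and well spread out
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points,
            key=lambda p: min(math.dist(p, c) for c in centers)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: math.dist(p, centers[j]))
            clusters[i].append(p)
        centers = [
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            if c else centers[j]
            for j, c in enumerate(clusters)]
    return clusters
```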
Semantic Web Service Selection Based on QoS
MA Li, QIU Zhi-yang, CHEN Yan-ping and ZHAO Jing
Computer Science. 2017, 44 (3): 226-230.  doi:10.11896/j.issn.1002-137X.2017.03.047
Abstract PDF(541KB) ( 654 )   
To address the lack of semantic information and the unmet user requirements of traditional Web service selection methods, quantitative QoS properties were used as heuristic information and Web service composition was transformed into a search problem over an AND-OR graph. The AO* algorithm was then applied to find the optimal Web service. The simulation results show that the proposed method is effective for Web service composition and also improves composition efficiency.
Research on Method of Group Recommendation for Fusion of Hidden Features
LIU Yi, ZHONG Xian and LI Lin
Computer Science. 2017, 44 (3): 231-236.  doi:10.11896/j.issn.1002-137X.2017.03.048
Abstract PDF(464KB) ( 696 )   
Among the most successful mainstream recommendation methods, singular value decomposition (SVD) builds a model from large volumes of known data and uses dimension-reducing matrix factorization to extract effective information, while non-negative matrix factorization (NMF) uses the non-negativity of the factor elements to make the latent features interpretable. Both methods factorize explicit feedback to obtain users' preference information; however, explicit feedback alone cannot accurately reflect users' true preferences. To solve this problem, this paper put forward an improved method for the two models, integrating hidden features, and a weighting method based on them, into the classical matrix factorization algorithms. The hidden features enrich the user preference information, and the hidden-feature-based weighting method judges group characteristics and assigns appropriate weights to users, improving recommendation accuracy. The method was tested on the Tencent microblog data set from KDD Cup 2012 Track 1. Measured by MAE and precision, the fused method outperforms both SVD and NMF on this data set and significantly improves recommendation performance.
Feature Processing Approach Based on MA-LSSVM in Safety Data
MA Yuan-yuan, SHI Yong-yi, ZHANG Hong, LIN Qi and LI Qian-mu
Computer Science. 2017, 44 (3): 237-241.  doi:10.11896/j.issn.1002-137X.2017.03.049
Abstract PDF(450KB) ( 736 )   
As biologically inspired evolutionary algorithms mature, feature selection methods based on evolutionary techniques and their hybrids keep emerging. For the feature selection problem on high-dimensional, small-sample safety data, this paper combined the memetic algorithm (MA) and least squares support vector machines (LS-SVM) to design a wrapper feature selection method (MA-LSSVM). The method exploits the ease with which LS-SVM finds an optimal solution to construct the classifier, and uses classification accuracy as the main component of the memetic algorithm's fitness function during optimization. The experimental results demonstrate that MA-LSSVM can efficiently and stably identify the features that contribute most to classification accuracy, reducing the data dimensionality and improving classification efficiency.
Multi-class Probability Output Based on Relevance Vector Machine
LI Rui and WANG Xiao-dan
Computer Science. 2017, 44 (3): 242-246.  doi:10.11896/j.issn.1002-137X.2017.03.050
Abstract PDF(400KB) ( 932 )   
Based on the membership probabilities estimated by the basic RVM (Relevance Vector Machine) model, posterior probability estimation approaches were presented for the one-versus-all strategy using a multivariate sigmoid function and for the one-versus-one strategy using pairwise coupling. Experimental results on artificial Gaussian data sets and UCI data sets show that the proposed approaches calculate posterior probabilities precisely and efficiently while maintaining high classification performance.
Collaborative Filtering Recommendation Based on Item Splitting
HE Ming, LIU Yi, CHANG Meng-meng and WU Xiao-fei
Computer Science. 2017, 44 (3): 247-253.  doi:10.11896/j.issn.1002-137X.2017.03.051
Abstract PDF(1306KB) ( 1002 )   
Context-aware recommender systems use context information to improve recommendation accuracy and user satisfaction. In this paper, an efficient context-based item splitting approach for context-aware recommendation was proposed. First, items are split according to an item splitting criterion. Second, clustering is carried out over the context dimensions based on the splitting results. Third, a collaborative filtering algorithm is used to predict the unknown ratings. Finally, simulation experiments were conducted on the LDOS-CoMoDa data set with different splitting criteria. The experimental results demonstrate that the method effectively improves recommendation accuracy and thus the overall quality of recommendation.
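A rough sketch of the item splitting step (the weekday/weekend context dimension, the threshold, and the mean-difference criterion below are illustrative assumptions; the paper evaluates several splitting criteria):

```python
from collections import defaultdict

def split_items(ratings, threshold=1.0):
    """Item splitting: if an item's mean rating differs enough across a
    context dimension, replace it with per-context virtual items so a
    plain CF model can pick up the context effect.

    ratings: list of (user, item, context, rating) tuples.
    Returns context-free (user, item_or_virtual_item, rating) triples.
    """
    by_item_ctx = defaultdict(list)
    for user, item, ctx, r in ratings:
        by_item_ctx[(item, ctx)].append(r)
    split = set()
    for item in {i for _, i, _, _ in ratings}:
        means = [sum(v) / len(v)
                 for (i, _), v in by_item_ctx.items() if i == item]
        if len(means) > 1 and max(means) - min(means) >= threshold:
            split.add(item)            # context matters for this item
    return [(user, (item, ctx) if item in split else item, r)
            for user, item, ctx, r in ratings]
```

The output triples can be fed directly to any standard collaborative filtering algorithm.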
Analysis of Microblog Community Users’ Influence Based on R-C Model
WANG Zhen-fei, ZHU Jing-yang, ZHENG Zhi-yun and SONG Yu
Computer Science. 2017, 44 (3): 254-258.  doi:10.11896/j.issn.1002-137X.2017.03.052
Abstract PDF(475KB) ( 559 )   
The influence of microblog community users is of great significance for the effective dissemination of microblog information. To find the regularities of information dissemination in a microblog community rapidly and accurately, a user importance algorithm for microblog communities was presented. First, seed user data are extracted, communities are detected using the R-C model, and one community is selected from the result. Then the influence of each user in the community is calculated by the USR algorithm and output. Taking Sina microblog data sets as an example, we introduced the concept of isolated points and a coverage index measuring the person-time impact of information dissemination, and compared the USR algorithm with traditional algorithms in computing user influence. Experiments show that the USR algorithm achieves better results than the other algorithms.
Grey Wolf Optimization Algorithm with Self-adaptive Searching Strategy
WEI Zheng-lei, ZHAO Hui, HAN Bang-jie, SUN Chu and LI Mu-dong
Computer Science. 2017, 44 (3): 259-263.  doi:10.11896/j.issn.1002-137X.2017.03.053
Abstract PDF(393KB) ( 1151 )   
References | Related Articles | Metrics
The grey wolf optimization algorithm (GWO) is a recent swarm intelligence optimization algorithm.However,like other optimization algorithms,GWO suffers from slow convergence and a tendency to fall into local optima.Aiming at this problem,an improved GWO with a self-adaptive searching strategy was proposed.Firstly,the search direction is controlled by fitness value,and a best-learning equation is introduced to improve the convergence speed and optimization precision.Secondly,in order to maintain population diversity,position vector differences are utilized to escape from local optima.Finally,the improved algorithm was compared with four other popular algorithms on 10 benchmark functions to illustrate its superiority.The experimental results and the Wilcoxon signed-rank test show that the improved GWO outperforms the other four algorithms in terms of both convergence speed and precision.
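As a baseline for the improvements described above, the standard GWO update (without the paper's self-adaptive strategy) can be sketched as follows; all parameter values are conventional defaults, not the paper's settings:

```python
import random

def gwo(f, dim, n_wolves=20, iters=100, lb=-10.0, ub=10.0, seed=0):
    """Baseline grey wolf optimizer: every wolf moves toward the three best
    wolves (alpha, beta, delta) while the coefficient a decays from 2 to 0."""
    rng = random.Random(seed)
    wolves = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_wolves)]
    best = min(wolves, key=f)[:]
    for t in range(iters):
        wolves.sort(key=f)
        alpha, beta, delta = (w[:] for w in wolves[:3])
        if f(alpha) < f(best):
            best = alpha[:]                       # keep the best-so-far solution
        a = 2.0 * (1 - t / iters)                 # linearly decreasing coefficient
        for i in range(n_wolves):
            for d in range(dim):
                moves = []
                for leader in (alpha, beta, delta):
                    A = a * (2 * rng.random() - 1)
                    C = 2 * rng.random()
                    D = abs(C * leader[d] - wolves[i][d])
                    moves.append(leader[d] - A * D)
                wolves[i][d] = min(ub, max(lb, sum(moves) / 3))
    return best

# Minimize the 5-dimensional sphere function, a standard benchmark.
best = gwo(lambda x: sum(v * v for v in x), dim=5)
```

The paper's variant replaces the uniform averaging of the three leaders with a fitness-controlled search direction and adds a difference-vector escape move; the skeleton above is the common starting point for both.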
Multiple-instance Learning Method Based on CRO High Order Neural Networks
DENG Bo, LU Ying-jun and WANG Ru-zhi
Computer Science. 2017, 44 (3): 264-267.  doi:10.11896/j.issn.1002-137X.2017.03.054
Abstract PDF(426KB) ( 791 )   
References | Related Articles | Metrics
Multi-instance learning (MIL) is a recently developed variant of inductive machine learning in which each learning example is a bag of instances rather than a single feature vector.In this paper,we presented a novel MIL method,called ILI-MIL,based on the concept of instance label intensity (ILI).Considering the complexity of the objective function and the cost of gradient-descent training in neural networks,we used a chemical reaction optimization (CRO) algorithm to train a high-order neural network (HONN) implementing the presented ILI-MIL method,which offers more powerful nonlinear fitting capacity and high computational efficiency.The experimental results show that our ILI-MIL method classifies more effectively than the state-of-the-art methods.
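The standard MIL bag assumption underlying such methods can be shown with a minimal sketch; the ILI weighting itself and the CRO-trained HONN scorer are beyond what the abstract specifies, so `instance_scorer` here is a hypothetical stand-in for any trained instance-level model:

```python
def bag_score(bag, instance_scorer):
    """Under the standard MIL assumption, a bag is positive iff at least one
    of its instances is positive, so the bag score is the max instance score."""
    return max(instance_scorer(x) for x in bag)

def classify_bag(bag, instance_scorer, threshold=0.5):
    """Label a bag 1 (positive) if its score exceeds the threshold, else 0."""
    return 1 if bag_score(bag, instance_scorer) > threshold else 0

# Toy scorer: each instance is a 1-tuple whose value is its positivity score.
scorer = lambda x: x[0]
```

ILI-MIL refines this hard max by assigning each instance a continuous label intensity, which makes the bag-level objective smoother but harder to optimize by gradient descent, motivating the CRO trainer.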
System Function Structure Analysis in Complete and Incomplete Background Relationship
CUI Tie-jun, LI Sha-sha and WANG Lai-gui
Computer Science. 2017, 44 (3): 268-273.  doi:10.11896/j.issn.1002-137X.2017.03.055
Abstract PDF(596KB) ( 726 )   
References | Related Articles | Metrics
Considering that a complete or incomplete background relationship contains the functional structure relations between components and the system,a method of system function structure analysis based on the theory of factors space was proposed.A rigorous mathematical definition of the method and a logical-mathematical description of the analysis process were given.Two examples,one with complete and one with incomplete information,were analyzed to obtain two different expressions of the system function structure.A complete background relationship yields a unique,determinate system function structure,whereas an incomplete background relationship yields a family of candidate system functions.If the incomplete background relationship is a subset of the complete one,linear relationships among the functions can be found from it and used to supplement the incomplete background.The method rests on rigorous logical and mathematical reasoning,and similar problems in a wide range of fields can be analyzed in the same way.
Mixed Geographically and Temporally Weighted Regression and Two-step Estimation
ZHAO Yang-yang, LIU Ji-ping, YANG Yi, ZHANG Fu-hao and QIU A-gen
Computer Science. 2017, 44 (3): 274-277.  doi:10.11896/j.issn.1002-137X.2017.03.056
Abstract PDF(368KB) ( 2578 )   
References | Related Articles | Metrics
In response to the phenomenon that global stationary characteristics and spatio-temporal non-stationary characteristics exist at the same time,an approach named mixed geographically and temporally weighted regression (MGTWR) was proposed.This paper gave the mathematical definition of MGTWR and derived the formula of the regression parameters by using a two-step estimation method.Besides,the weight calculation method and a parameter optimization method based on the Akaike information criterion (AIC) were introduced.Simulated data with different degrees of complexity were adopted to test the performance of the method.Results show that R2 exceeds 0.8 when MGTWR and GTWR are used.Both MGTWR and GTWR can deal with the coexistence of global stationary and spatio-temporal non-stationary characteristics,and MGTWR outperforms GTWR.As MGWR cannot detect temporal non-stationary characteristics,its results are poor.In addition,the complexity of the data affects the performance of MGTWR,GTWR and MGWR:the simpler the data are,the better the results will be.
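The weight calculation at the heart of geographically and temporally weighted models is typically a kernel over spatio-temporal distance. A minimal Gaussian version is sketched below; the bandwidths and the balance factor `lam` are assumed parameters, normally tuned via AIC as the paper describes:

```python
import math

def st_weight(p, q, h_s, h_t, lam=1.0):
    """Gaussian spatio-temporal kernel weight between observations
    p = (x, y, t) and q = (x, y, t); h_s and h_t are the spatial and
    temporal bandwidths, and lam balances the two distance components."""
    ds2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2   # squared spatial distance
    dt2 = (p[2] - q[2]) ** 2                        # squared temporal distance
    return math.exp(-(ds2 / h_s ** 2 + lam * dt2 / h_t ** 2))
```

In the two-step estimation, weights like these build the local (non-stationary) part of the design, while the global coefficients are fitted once over all observations.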
Fast Tracking Algorithm Based on Mean Shift Algorithm
ZOU Qing-zhi and HUANG Shan
Computer Science. 2017, 44 (3): 278-282.  doi:10.11896/j.issn.1002-137X.2017.03.057
Abstract PDF(1385KB) ( 662 )   
References | Related Articles | Metrics
A fast moving-target detection method based on Mean Shift was proposed to address the difficulties of the Mean Shift algorithm in tracking fast-moving objects,namely its excessive number of iterations and time-consuming process.The method combines the frame difference method and fuses background information for rapid detection of the moving target.A new similarity measure was put forward for a preliminary test that excludes interference and quickly selects candidate targets,after which the standard Mean Shift matching finds the best target.This method reduces both the number of iterations and the running time of the traditional method,and it achieves better tracking performance in the tracking experiments,improving the robustness of the algorithm.
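The frame-difference step used for rapid motion detection can be sketched as follows, with grayscale frames represented as lists of rows; the threshold is an assumed tuning parameter:

```python
def frame_difference(prev, curr, thresh=25):
    """Binary motion mask: mark a pixel 1 where the absolute difference
    between consecutive grayscale frames exceeds `thresh`."""
    return [[1 if abs(c - p) > thresh else 0 for p, c in zip(row_p, row_c)]
            for row_p, row_c in zip(prev, curr)]

prev = [[10, 10, 10], [10, 10, 10]]
curr = [[10, 90, 10], [10, 90, 10]]   # a bright object entered column 1
mask = frame_difference(prev, curr)
```

The mask localizes the moving region, so Mean Shift iterations can start near the true target position instead of the previous-frame position, which is what cuts the iteration count for fast motion.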
Blind Recognition Method of Cyclic Codes Parameters
WANG Lan-xun, JIA Ceng-juan and XIONG Zheng-da
Computer Science. 2017, 44 (3): 283-287.  doi:10.11896/j.issn.1002-137X.2017.03.058
Abstract PDF(1834KB) ( 651 )   
References | Related Articles | Metrics
In view of the problem of blind recognition of cyclic code parameters,the code length and starting point are identified by a recognition method that fuses a similarity measuring function from data mining with the Spearman rank correlation coefficient from statistics.The method exploits the maximum difference between the code-weight distributions of the actual sequence and a random sequence.Then,on the basis of the isomorphism principle of finite fields,the generator matrix is solved by selecting code words of the most probable code weight for a Galois field Fourier transform.The blind recognition of the cyclic code is finally realized.Theoretical analysis and simulation experiments show that the method is simple and strongly error-tolerant,and it can identify medium and short codes well at a BER of 0.01.
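The Spearman rank correlation coefficient used in the fused recognition statistic is, for tie-free sequences, computed from the rank differences; a minimal sketch:

```python
def spearman_rho(x, y):
    """Spearman rank correlation for tie-free sequences:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), d_i = rank difference."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank + 1               # ranks start at 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

In the recognition method, a candidate code length whose weight-distribution statistic correlates strongly (in rank) with the ideal pattern is accepted, which is why rank correlation, being insensitive to monotone distortions, tolerates bit errors well.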
Real-time Determination and Progressive Correction of Vehicle Behavior on Smartphones
FAN Jing, WU Qing-qing, YE Yang and DONG Tian-yang
Computer Science. 2017, 44 (3): 288-295.  doi:10.11896/j.issn.1002-137X.2017.03.059
Abstract PDF(2106KB) ( 951 )   
References | Related Articles | Metrics
Up to now,the relevant research has had several drawbacks:poor robustness,low accuracy and lack of real-time capability.To solve these problems,a vehicle behavior recognition algorithm with real-time determination and progressive correction on smartphones was proposed.This algorithm classifies vehicle behavior from the data generated during the driving process,and uses the collected data as training samples to improve the recognition and prediction capability of the SVM.To overcome the limitations of the traditional sliding window,an endpoint detection algorithm is used to quickly extract useful information from the complete vehicle behavior,which simultaneously reduces misjudgment.The experimental results show that the corrective algorithm based on time segmentation can effectively predict vehicle behavior and ultimately achieves a high recognition rate,which demonstrates the effectiveness of this method.
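The endpoint-detection idea, locating where a behavior segment starts and ends in the sensor stream instead of cutting it with a fixed sliding window, can be sketched with a simple energy detector; the window length and threshold are assumed parameters, not the paper's values:

```python
def detect_endpoints(signal, win=4, energy_thresh=1.0):
    """Energy-based endpoint detection over a 1-D sensor signal: return
    (start, end) indices of the active segment, where activity means the
    mean squared value in a sliding window exceeds `energy_thresh`."""
    energies = [sum(v * v for v in signal[i:i + win]) / win
                for i in range(len(signal) - win + 1)]
    active = [i for i, e in enumerate(energies) if e > energy_thresh]
    if not active:
        return None                       # no behavior segment found
    return active[0], active[-1] + win    # end index is exclusive

# A burst of acceleration between two quiet stretches.
sig = [0, 0, 0, 3, 3, 3, 3, 0, 0, 0]
segment = detect_endpoints(sig, win=2, energy_thresh=1.0)
```

Only the extracted segment is then fed to the SVM, so the classifier sees one complete maneuver per sample rather than arbitrary window slices.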
Jointing Gabor Error Dictionary and Low Rank Representation for Face Recognition
SHOU Zhao-yu and YANG Xiao-fan
Computer Science. 2017, 44 (3): 296-299.  doi:10.11896/j.issn.1002-137X.2017.03.060
Abstract PDF(1775KB) ( 606 )   
References | Related Articles | Metrics
Focusing on the problems of occlusion,disguise,illumination and facial-expression changes in face recognition,an improved face recognition method jointing a Gabor error dictionary and low rank representation (GDLRR) was proposed,exploiting the strong robustness of Gabor features to these variations.Firstly,the Gabor features of the training samples and testing samples are extracted to build the training and testing feature dictionaries,respectively.Then,Gabor features are extracted from a unit matrix to train a more compact Gabor error dictionary.Finally,the lowest-rank representation of the testing feature dictionary with respect to the joint Gabor error dictionary and training feature dictionary is sought for classification.Experiments show that the proposed algorithm achieves better robustness and recognition results against the different problems in face recognition.
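The Gabor features at the core of the method come from convolving the face image with a bank of Gabor kernels. The real part of one such kernel can be generated as below; the parameter values are typical defaults (face pipelines commonly use 5 scales × 8 orientations), not the paper's settings:

```python
import math

def gabor_kernel(size, theta, lam=4.0, sigma=2.0, gamma=0.5):
    """Real part of a Gabor kernel at orientation `theta`: a Gaussian
    envelope multiplied by a cosine carrier of wavelength `lam`;
    `gamma` is the spatial aspect ratio of the envelope."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xp = x * math.cos(theta) + y * math.sin(theta)    # rotated coords
            yp = -x * math.sin(theta) + y * math.cos(theta)
            g = math.exp(-(xp * xp + gamma * gamma * yp * yp) / (2 * sigma * sigma))
            row.append(g * math.cos(2 * math.pi * xp / lam))
        kernel.append(row)
    return kernel

k = gabor_kernel(7, 0.0)   # 7x7 kernel, horizontal orientation
```

Stacking the filter responses over all scales and orientations yields the feature vectors that populate the training, testing and error dictionaries.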
Real-time L1-tracker Based on Haar-like Features
YAN Gang, QU Gao-chao and YU Ming
Computer Science. 2017, 44 (3): 300-306.  doi:10.11896/j.issn.1002-137X.2017.03.061
Abstract PDF(2398KB) ( 693 )   
References | Related Articles | Metrics
In order to solve the real-time problem of the L1-tracker algorithm under the particle filter framework,this paper proposed an improved L1-tracker algorithm based on Haar-like features.Firstly,the over-complete dictionary is reconstructed using Haar-like features and feature blocks:single pixels are replaced by pixel blocks to make up the positive and negative trivial templates.Sparse representation then reduces the dimensions of the over-complete dictionary,and the computation on the sparse matrices is significantly reduced.Secondly,the number of target templates is reduced in order to decrease the cost of sparse representation.Finally,the update frequency of the templates is controlled by an empirical value.Experimental results demonstrate that the proposed algorithm significantly improves the real-time performance of the L1-tracker algorithm and remains effective under short-time occlusion,target deformation and illumination changes.
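The Haar-like features that replace single-pixel templates are computed in constant time from an integral image; a minimal sketch of a two-rectangle feature (function names are illustrative):

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img over rows 0..y, cols 0..x."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        run = 0
        for x in range(w):
            run += img[y][x]
            ii[y][x] = run + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the w*h rectangle with top-left corner (x, y), in O(1)."""
    a = ii[y + h - 1][x + w - 1]
    b = ii[y - 1][x + w - 1] if y > 0 else 0
    c = ii[y + h - 1][x - 1] if x > 0 else 0
    d = ii[y - 1][x - 1] if x > 0 and y > 0 else 0
    return a - b - c + d

def haar_two_rect_h(ii, x, y, w, h):
    """Two-rectangle (left vs. right) Haar-like feature: left half minus right half."""
    return rect_sum(ii, x, y, w // 2, h) - rect_sum(ii, x + w // 2, y, w // 2, h)

img = [[1, 1, 2, 2],
       [1, 1, 2, 2]]
ii = integral_image(img)
```

Because every feature needs only four table lookups, the dictionary built from such features is far cheaper to evaluate per particle than one built from raw pixels.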
Inter-frame Consistency Structured Sparse Representation Object Tracking Algorithm
HOU Yue-en and LI Wei-guang
Computer Science. 2017, 44 (3): 307-312.  doi:10.11896/j.issn.1002-137X.2017.03.062
Abstract PDF(2059KB) ( 598 )   
References | Related Articles | Metrics
Object tracking technology is widely used in daily life and production.However,designing an accurate,robust and real-time object tracker is still a challenging task.For improving the performance of tracking algorithms,an inter-frame consistency structured sparse representation object tracking algorithm was designed.The algorithm is carried out under the framework of the particle filter,and uses the principle of structured sparse representation to recombine candidate targets.Firstly,the dictionary is constructed from target and background templates,so the ability to discriminate target from background is improved.Secondly,a structured sparse representation objective function containing an inter-frame consistency constraint term is built;the function decides targets by using the consistency of the target state across frames.Thirdly,according to the residual error information,a likelihood measurement is developed;compared with traditional likelihood measurements,the proposed measurement is insensitive to similar targets.Finally,the proposed tracking algorithm was shown to be more robust and accurate through 6 comparative experiments.
Research on Skyline Detection Based on Region Covariance and Median Filtering Algorithm
TU Bing, PAN Jian-wu, WU Jian-hui, ZENG Xiang and CAO Xu
Computer Science. 2017, 44 (3): 313-317.  doi:10.11896/j.issn.1002-137X.2017.03.063
Abstract PDF(3420KB) ( 1064 )   
References | Related Articles | Metrics
Skyline detection plays an important role in visual navigation and geographical location annotation.Firstly,the input image is segmented by the region covariance algorithm to determine the target area for skyline detection.Then,the gradient values of the sample images in the data set are analyzed to obtain the optimal gradient threshold.Within the initially segmented detection area,a cyclic gradient algorithm was proposed to detect the position coordinates of the skyline.Finally,the median filtering algorithm was used to correct the coordinates of each point of the skyline,eliminating possible singular points and eventually yielding the skyline of the input images.The proposed algorithm was tested on the Web data set of the Machine Vision Laboratory at the University of Nevada.The experimental results show that the algorithm can effectively detect the skyline of the input images in the data set without extracting the edge information of the images,and it has good validity and timeliness.
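The singular-point correction step can be illustrated with a one-dimensional median filter over the detected skyline row-coordinates; the window size is an assumed parameter:

```python
def median_filter(coords, win=3):
    """1-D median filter over skyline row-coordinates: replace each point
    by the median of its window, suppressing singular (outlier) points.
    Windows are truncated at the sequence boundaries."""
    half = win // 2
    out = []
    for i in range(len(coords)):
        lo, hi = max(0, i - half), min(len(coords), i + half + 1)
        window = sorted(coords[lo:hi])
        out.append(window[len(window) // 2])
    return out
```

A single mis-detected column (e.g. a cloud edge at row 50 amid a skyline near row 5) is pulled back to its neighbors' level without smearing the true skyline, which is why a median, rather than a mean, is used here.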
Multi-focus Image Fusion Using Adaptive Region and SCM Based on NSST Domain
ZHAO Jie, WEN Xin, LIU Suai-qi and ZHANG Yu
Computer Science. 2017, 44 (3): 318-322.  doi:10.11896/j.issn.1002-137X.2017.03.064
Abstract PDF(1771KB) ( 729 )   
References | Related Articles | Metrics
In order to improve the fusion effect for multi-focus images,a new image fusion algorithm based on adaptive regions and the spiking cortical model (SCM) in the nonsubsampled shearlet transform (NSST) domain was proposed,exploiting the shared similarity among multiple source images.First,NSST is utilized to decompose the source images.Then,by calculating the energy of edge (EOE),the low-frequency coefficients are fused by weighted votes within adaptive regions,and the high-frequency coefficients are fused according to the firing map of an SCM driven by EOE.Finally,the fused image is obtained by the inverse NSST.The algorithm can both preserve the information of the source images well and suppress the pixel distortion caused by nonlinear operations in the transform domain.Experimental results demonstrate that the proposed method outperforms state-of-the-art transform-domain and pulse coupled neural network (PCNN) fusion methods.