Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 43 Issue 5, May 2016
  
Survey of Fuzz Testing Technology
ZHANG Xiong and LI Zhou-jun
Computer Science. 2016, 43 (5): 1-8.  doi:10.11896/j.issn.1002-137X.2016.05.001
Abstract PDF(833KB) ( 5060 )   
Security vulnerabilities in software may lead to serious consequences, and vulnerability discovery has become a hot research area in network and information security. Popular vulnerability discovery technologies include static analysis, dynamic analysis, binary code comparison and fuzz testing. As software grows in scale and complexity, fuzz testing offers advantages that the other technologies cannot match. Firstly, the advantages and disadvantages of the various vulnerability discovery technologies are discussed. Secondly, research advances in fuzz testing, the fuzz testing procedure and test case generation technology are described in detail. Finally, applications of fuzz testing are presented and directions for future study are discussed.
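The mutation-based fuzzing loop the survey describes can be sketched in a few lines. The toy `parser` target, the byte-flip mutator and all parameters are illustrative assumptions, not taken from the paper:

```python
import random

random.seed(1)  # reproducible demo run

def mutate(seed: bytes, n_flips: int = 4) -> bytes:
    """Randomly overwrite a few bytes of a seed input (mutation-based fuzzing)."""
    data = bytearray(seed)
    for _ in range(n_flips):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def fuzz(target, seeds, rounds: int = 2000):
    """Feed mutated inputs to `target`; collect every input that crashes it."""
    crashes = []
    for _ in range(rounds):
        case = mutate(random.choice(seeds))
        try:
            target(case)
        except Exception:
            crashes.append(case)
    return crashes

# Hypothetical target: "crashes" on any input containing a null byte.
def parser(data: bytes):
    if b"\x00" in data:
        raise ValueError("malformed input")

found = fuzz(parser, [b"hello world"])
```

Real fuzzers add coverage feedback and crash triage on top of this loop; the sketch only shows the generate-execute-observe cycle.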
Reliability Analysis of Circuit under Soft Error
WANG Zhen and JIANG Jian-hui
Computer Science. 2016, 43 (5): 9-12.  doi:10.11896/j.issn.1002-137X.2016.05.002
Abstract PDF(456KB) ( 338 )   
With the coming of the big data era, people demand more reliable microprocessors, while intensive technology scaling makes circuits more sensitive to soft errors, so analyzing circuit reliability under soft errors is very important. This paper surveys reliability analysis methods, categorized into circuit level, gate level, register-transfer level (RTL) and architecture level, and introduces and compares the methods at each level according to their properties.
Feedback Numbers of Generalized Kautz Digraphs GK(3,n)
XU Xi-rong, HUANG Ya-zhen, ZHANG Si-jia and DONG Xue-zhi
Computer Science. 2016, 43 (5): 13-21.  doi:10.11896/j.issn.1002-137X.2016.05.003
Abstract PDF(399KB) ( 275 )   
A subset of vertices of a graph G is called a feedback vertex set of G if its removal results in an acyclic subgraph. This paper investigates feedback vertex sets of generalized Kautz digraphs GK(d,n). Let f(d,n) denote the minimum cardinality over all feedback vertex sets of the generalized Kautz digraph GK(d,n). The following upper bound on the feedback number of GK(3,n) is obtained: f(3,n) ≤ n + 5n/8 - 3n/4 - 4n/7 + 3.
Branch Divergence Optimization for Performance and Power Consumption on GPU Platform
YU Qi, WANG Bo-qian, SHEN Li, WANG Zhi-ying and CHEN Wei
Computer Science. 2016, 43 (5): 22-26.  doi:10.11896/j.issn.1002-137X.2016.05.004
Abstract PDF(394KB) ( 619 )   
Because of their tremendous computing power, general-purpose graphics processing units (GPGPUs) have been widely adopted in general-purpose computing. However, as GPGPUs use an execution model called SIMT (Single Instruction Multiple Threads), their efficiency suffers in the presence of branch divergence in a GPU application. Methods based on thread swapping have been proposed to reduce the performance loss caused by branch divergence, but they introduce extra memory accesses in return, which not only erode the performance gains but also increase power consumption. Firstly, an example is used to explain how the thread-swapping range influences the performance and power consumption of a program. Secondly, a method is proposed to reduce the extra memory accesses brought by thread swapping. Experiments show that, for Reduction, this method reduces power consumption by 7% with an average performance loss of 4% when the swapping range is 256, while for Bitonic it improves performance by 6.4% and 5.3% when the swapping range is 256 and 512 respectively, with no power consumption overhead.
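The divergence penalty and the thread-swapping idea the abstract discusses can be illustrated with a toy SIMT cost model. The warp size, branch outcomes and the assumption that regrouping is free are all simplifications for illustration; the paper's contribution is precisely accounting for the memory traffic this sketch ignores:

```python
def warp_cycles(outcomes, warp_size=4):
    """Cycles a SIMT machine spends on one branch: a warp whose threads
    disagree must serially execute both paths (the divergence penalty)."""
    cycles = 0
    for i in range(0, len(outcomes), warp_size):
        warp = outcomes[i:i + warp_size]
        cycles += len(set(warp))  # 1 path if uniform, 2 if divergent
    return cycles

def swap_threads(outcomes):
    """Thread swapping: regroup threads by branch outcome so warps become
    uniform (ignoring the extra memory accesses the regrouping causes)."""
    return sorted(outcomes)

branches = [0, 1, 0, 1, 1, 0, 1, 0]   # alternating outcomes: every warp diverges
before = warp_cycles(branches)
after = warp_cycles(swap_threads(branches))
```

With the alternating pattern both 4-thread warps diverge (cost 4); after regrouping each warp is uniform (cost 2).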
Hot-routine Based Optimization of Dynamic Binary Translation
DONG Wei-yu, LIU Jin-xin, QI Xu-yan, HE Hong-qi and JIANG Lie-hui
Computer Science. 2016, 43 (5): 27-33.  doi:10.11896/j.issn.1002-137X.2016.05.005
Abstract PDF(916KB) ( 287 )   
Based on observation of the behavior of system-level programs, the paper provides a hot-routine based optimization method for dynamic binary translation, which takes frequently executed routines as the optimization unit and uses intra-block and inter-block optimization algorithms to remove redundancies introduced by dynamic binary translation. Compared with trace-based optimization, this method has lower optimization-unit discovery overhead, larger code regions and no duplicated translation, and is more suitable for optimizing OS code in a virtual machine. Evaluation on the cross-platform virtual machine monitor ARCH-BRIDGE demonstrates that, by applying the method to kernel code, the performance of SPEC CPUINT 2006 programs improves by 3.5%~14.4%, up to 5.1% faster than trace-based optimization.
Mechanism and Capability of Data Prefetching in Intel64 Architecture
DONG Yu-shan and LI Chun-jiang
Computer Science. 2016, 43 (5): 34-41.  doi:10.11896/j.issn.1002-137X.2016.05.006
Abstract PDF(622KB) ( 1890 )   
Data prefetching is an approach to reducing cache miss latency that helps bridge the speed gap between the microprocessor and DRAM. Recent Intel processor families employ several prefetching mechanisms to accelerate the movement of data or code into the cache and improve performance. After a brief analysis of the memory hierarchy of the Intel64 architecture, the data prefetching mechanisms of the X86/X64 architecture, including hardware prefetching and software prefetching, are dissected in depth, and the compiler support for software prefetching is analyzed. After testing the performance of the Intel64 data prefetcher on nested loops, we summarize several factors affecting the effectiveness of data prefetching. This work provides a valuable basis for the research and development of loop-array prefetching optimization on the Intel platform.
Descriptions for Ontologies Based on Category Theory
YU Shan-shan, SU Jin-dian and YI Fa-ling
Computer Science. 2016, 43 (5): 42-46.  doi:10.11896/j.issn.1002-137X.2016.05.007
Abstract PDF(455KB) ( 285 )   
To solve the inconsistency problems that arise when various ontology languages describe ontologies, a description method for ontologies based on category theory was proposed, taking full advantage of the abstractness and graphical expressiveness of category theory. Category-theoretic definitions for ontology, ontology mapping and ontology instantiation are presented: each ontology is described as an object of a category, each morphism between ontologies as a homomorphism between objects of a category, and the instantiation of ontologies as a functor between two categories. The notions of colimit and pushout are then used to describe the merging of ontologies, and some interesting properties of ontology merging are proved from the viewpoint of category theory.
Research on Critical Nodes in Social Networks Based on Owen Values
WANG Xue-guang
Computer Science. 2016, 43 (5): 47-50.  doi:10.11896/j.issn.1002-137X.2016.05.008
Abstract PDF(367KB) ( 306 )   
Discovering critical nodes in social networks has many important applications, and finding the K most influential nodes in a social network is an NP-hard problem. Considering the widespread community structure of social networks, this paper presents an algorithm for discovering critical nodes based on two information diffusion models, obtaining each node's marginal contribution with the Owen value and the Monte Carlo method. The solution to the critical nodes problem is then the K nodes with the highest marginal contributions. The results show that the algorithm is well suited to networks with community structure and runs dozens of times faster than the greedy algorithm.
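The Owen value restricts Shapley-style permutations so that each community's members appear contiguously; averaging marginal contributions over sampled permutations of this kind gives the Monte Carlo estimate the abstract mentions. The graph, the coverage-style influence function and the sample count below are hypothetical stand-ins for the paper's diffusion models:

```python
import random

random.seed(42)

# Hypothetical graph: node -> neighbours it influences directly.
adj = {1: {2, 3, 4}, 2: {1}, 3: {1}, 4: {1}, 5: {6}, 6: {5}, 7: set()}

def influence(S):
    """Toy influence function: number of nodes covered by seed set S."""
    covered = set(S)
    for v in S:
        covered |= adj[v]
    return len(covered)

def owen_estimate(nodes, communities, value, samples=500):
    """Monte-Carlo Owen values: sample permutations in which every community
    stays contiguous, and average each node's marginal contribution."""
    contrib = {v: 0.0 for v in nodes}
    for _ in range(samples):
        comms = [list(c) for c in communities]
        random.shuffle(comms)              # order of communities
        order = []
        for c in comms:
            random.shuffle(c)              # order inside a community
            order.extend(c)
        coalition, prev = [], value([])
        for v in order:
            coalition.append(v)
            cur = value(coalition)
            contrib[v] += cur - prev
            prev = cur
    return {v: s / samples for v, s in contrib.items()}

scores = owen_estimate(list(adj), [[1, 2, 3, 4], [5, 6], [7]], influence)
top_k = sorted(scores, key=scores.get, reverse=True)[:2]  # K = 2 critical nodes
```

The hub node 1 receives the largest score, matching the intuition that the K highest-contribution nodes solve the selection problem.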
Multiuser Resource Allocation of Power Line Communication Based on Enhanced PSO_GA
ZHANG Pei-ling and ZHANG Hong-xin
Computer Science. 2016, 43 (5): 51-55.  doi:10.11896/j.issn.1002-137X.2016.05.009
Abstract PDF(410KB) ( 266 )   
Under the constraints of minimum total power, a required minimum user rate and a minimum power allocated to each user, a PSO_GA hybrid algorithm combining particle swarm optimization (PSO) and the genetic algorithm (GA) is proposed for dynamic resource allocation in power line communication (PLC) multiuser orthogonal frequency division multiplexing (OFDM) systems. The hybrid algorithm introduces improved crossover and mutation operators from GA into PSO, which improves convergence speed. The algorithm was tested on typical power line channels. Simulation results show that the proposed PSO_GA-based resource allocation scheme achieves faster convergence and better performance than schemes based on the traditional algorithms.
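The hybrid structure, standard PSO velocity/position updates with GA-style crossover and mutation injected each generation, can be sketched on a generic minimization problem. The sphere function stands in for the real PLC power-allocation objective, and all coefficients and rates below are assumed values, not the paper's:

```python
import random

random.seed(7)

def pso_ga(fitness, dim, n=20, iters=100, bounds=(-5.0, 5.0)):
    """PSO with GA crossover/mutation injected each generation (PSO_GA sketch)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if random.random() < 0.3:          # GA crossover with the global best
                cut = random.randrange(dim)
                pos[i][cut:] = gbest[cut:]
            if random.random() < 0.1:          # GA mutation
                pos[i][random.randrange(dim)] += random.gauss(0, 0.1)
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

sphere = lambda x: sum(v * v for v in x)
best, best_f = pso_ga(sphere, dim=5)
```

The crossover step pulls stragglers toward the global best, which is how the hybrid accelerates convergence relative to plain PSO.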
Aeronautical Ad hoc Network Hybrid Routing Algorithm
PANG Song-chao, LUO Chang-yuan, HAN Dong-dong and PANG Han-ying
Computer Science. 2016, 43 (5): 56-61.  doi:10.11896/j.issn.1002-137X.2016.05.010
Abstract PDF(472KB) ( 315 )   
Routing is a key difficulty in aeronautical ad hoc network (AANET) research. Given the scarcity of studies on highly dynamic AANET routing and the fact that aircraft nodes are highly dynamic yet form relatively stable local structures, a hybrid routing algorithm based on clustering and geographic information (CGCR) is proposed, integrating the ADS-B system into the establishment of the routing table. The algorithm uses the node bit rate and flight-intention data in ADS-B messages to forecast node movement trends in order to select the optimal next hop, and it expands the next-hop selection area to avoid routing voids. Simulation results show that CGCR performs well.
Adaptive Routing Protocol Based on Directional Transmission in VANETs
CAI Rong, ZHANG Guo-an and JI Yan-cheng
Computer Science. 2016, 43 (5): 62-66.  doi:10.11896/j.issn.1002-137X.2016.05.011
Abstract PDF(384KB) ( 249 )   
A relay node must be found within the forwarding range when data packets are forwarded. To reduce the complexity of searching for relay nodes and the average number of hops from source to destination, an adaptive routing protocol based on directional transmission in vehicular ad hoc networks is proposed. The forwarding angle and the average one-hop progress distance are the protocol's two key parameters. Firstly, to shrink the forwarding range, a forwarding angle always aimed toward the destination node is designed, which reduces the number of nodes in the forwarding range and simplifies the search for relay nodes. Then, to reduce the average number of hops, optimal or sub-optimal neighbor nodes within the forwarding range are chosen adaptively as relays based on their progress distance. Simulation results show that, compared with OBDR, the proposed protocol is superior in both the average number of hops and the average one-hop progress distance, allowing data packets to be delivered quickly to the destination node.
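The two steps, prune neighbours to a sector aimed at the destination, then pick the in-sector node with the greatest progress, can be sketched geometrically. The 30° half-angle and the coordinates are illustrative assumptions; the paper adapts these parameters rather than fixing them:

```python
import math

def candidates_in_sector(cur, dest, neighbors, half_angle_deg=30.0):
    """Keep only neighbours inside a forwarding sector aimed at the destination
    (the protocol's forwarding angle; the angle value here is assumed)."""
    bearing = math.atan2(dest[1] - cur[1], dest[0] - cur[0])
    keep = []
    for n in neighbors:
        ang = math.atan2(n[1] - cur[1], n[0] - cur[0])
        diff = abs((ang - bearing + math.pi) % (2 * math.pi) - math.pi)
        if diff <= math.radians(half_angle_deg):
            keep.append(n)
    return keep

def pick_relay(cur, dest, neighbors):
    """Among in-sector neighbours, pick the one with the largest progress
    toward the destination (greedy minimization of expected hop count)."""
    sector = candidates_in_sector(cur, dest, neighbors)
    if not sector:
        return None
    d0 = math.dist(cur, dest)
    return max(sector, key=lambda n: d0 - math.dist(n, dest))

cur, dest = (0, 0), (100, 0)
nbrs = [(10, 2), (8, 30), (-5, 1), (20, 5)]
relay = pick_relay(cur, dest, nbrs)
```

Only two of the four neighbours fall inside the sector, and the one with more forward progress is chosen, which is exactly the reduction in search complexity the protocol targets.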
Study on Energy Consumption of ZigBee Networks Based on Distributed Neighbor Discovery Mechanism
HUANG Heng-jie, ZHOU Tao and WANG Gao-cai
Computer Science. 2016, 43 (5): 67-72.  doi:10.11896/j.issn.1002-137X.2016.05.012
Abstract PDF(562KB) ( 289 )   
ZigBee is a short-distance, low-energy-consumption, low-data-rate wireless network technology based on the IEEE 802.15.4 standard. Its network lifetime can be extended if battery energy is used efficiently. In this paper, we focus on the energy consumption of ZigBee networks under a distributed neighbor discovery mechanism, which lets a ZigBee node find the other neighbor nodes in its reachable area. The paper derives the average number of frames required in the discovery process for three distributed neighbor discovery algorithms and presents an energy consumption model for ZigBee networks. The energy consumption of the requesting device and the neighbor devices under these algorithms is analyzed and expressions are given respectively. In simulation, we compare the energy consumption of devices in duty-cycle and continuous modes. The results show that, under distributed neighbor discovery, the competitive tree algorithm (CTA) achieves better energy savings when the number of time slots is small, while framed slotted ALOHA without feedback packets (FSA_noFBP) and framed slotted ALOHA with feedback packets (FSA_FBP) should be chosen when the number of frame time slots is large.
Research of WiFi-based Fingerprinting Matching Algorithm in Indoor Positioning
TANG Yang, BAI Yong, MA Yue and LAN Zhang-li
Computer Science. 2016, 43 (5): 73-75.  doi:10.11896/j.issn.1002-137X.2016.05.013
Abstract PDF(240KB) ( 564 )   
To enhance fingerprinting positioning performance, the received signal strength (RSS) measured by users must be matched rapidly and accurately against the pre-stored fingerprint database. A fingerprint clustering algorithm is therefore proposed to reduce the number of search points and optimize the search path, improving both the speed and the accuracy of matching. Four tests were designed to verify the proposed algorithm, evaluating positioning accuracy and speed for clusters of different shapes. The results show that the proposed algorithm reduces the number of search points by 60% while keeping the same positioning accuracy as the traditional fingerprinting algorithm. Comparing the performance of clusters of different shapes, the cellular cluster is optimal.
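The core idea, matching the measured RSS vector against a cluster head first and then searching only inside that cluster, can be sketched as a two-stage nearest-neighbour lookup. The three-AP database, cluster heads and dBm values below are hypothetical:

```python
def nearest_fingerprint(rss, database, clusters):
    """Two-stage fingerprint match: find the closest cluster head, then search
    only that cluster's reference points (shrinking the search space)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # Stage 1: closest cluster by its head (mean RSS vector of the cluster).
    best_c = min(clusters, key=lambda c: dist2(rss, clusters[c]["head"]))
    # Stage 2: exhaustive search inside that cluster only.
    pts = clusters[best_c]["points"]
    return min(pts, key=lambda p: dist2(rss, database[p]))

# Hypothetical 3-AP fingerprint database: reference point -> RSS vector (dBm).
database = {
    "A": (-40, -70, -80), "B": (-45, -65, -82),
    "C": (-75, -42, -60), "D": (-78, -45, -58),
}
clusters = {
    0: {"head": (-42, -67, -81), "points": ["A", "B"]},
    1: {"head": (-76, -43, -59), "points": ["C", "D"]},
}
loc = nearest_fingerprint((-44, -66, -81), database, clusters)
```

With two equal-sized clusters, stage 2 visits only half the reference points; the paper's cluster shapes generalize this trade-off.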
Novel Taxonomy of Security Weakness in Source Code Based on Three-dimension Tree Model
ZHANG Yan, LI Zhou-jun, DONG Guo-wei and MA Dian-fu
Computer Science. 2016, 43 (5): 76-79.  doi:10.11896/j.issn.1002-137X.2016.05.014
Abstract PDF(394KB) ( 226 )   
We present a novel taxonomy of security weaknesses in source code based on a three-dimensional tree model, which jointly considers three aspects: the cause of the defect, its result and its form of expression. Case studies show that, compared with CWE and Fortify, this taxonomy is more accurate and detailed. The work not only helps establish a relatively complete classification system for source code defects, but is also of practical significance for refining security weakness detection rules.
Static Detection Model and Framework for Software Vulnerability
WANG Tao, HAN Lan-sheng, FU Cai, ZOU De-qing and LIU Ming
Computer Science. 2016, 43 (5): 80-86.  doi:10.11896/j.issn.1002-137X.2016.05.015
Abstract PDF(663KB) ( 669 )   
Static analysis of source-level software vulnerabilities has been a research focus of information security in recent years. The core problems of static vulnerability detection are how to describe vulnerabilities and how to detect them. We propose a static analysis model that does both. Firstly, formal definitions describe the attributes of several common software vulnerabilities, and the vulnerabilities and their discrimination rules are formulated formally. Secondly, a new program intermediate representation called the vulnerability executable path set is proposed to replace traditional path analysis, reducing the program state space and avoiding state explosion. Based on this model, we design a static detection framework that solves for vulnerability-relevant nodes with vulnerability syntax rules on the vulnerability executable path set and detects vulnerabilities on those nodes via the discrimination rules. The results show the correctness and feasibility of the static analysis model.
Novel Intrusion Detection Method Based on Semi-supervised Clustering
LIANG Chen and LI Cheng-hai
Computer Science. 2016, 43 (5): 87-90.  doi:10.11896/j.issn.1002-137X.2016.05.016
Abstract PDF(408KB) ( 298 )   
A new semi-supervised intrusion detection method based on error-correcting output codes is proposed to address the difficulties that supervised intrusion detection algorithms face when training samples are insufficient. By introducing the semi-supervised cop-kmeans algorithm, the method mines relations within the unlabeled data to enlarge the known labeled normal data. Firstly, SVDD is used to measure class separability quantitatively, yielding an inter-class separability matrix. A binary tree is then built bottom-up from the matrices, and each node of the tree is encoded by level to obtain the final hierarchical error-correcting output codes and classifier. Experiments on the KDD Cup 1999 network data set show that the method achieves better detection accuracy and adapts well to real network environments.
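Cop-kmeans, the constrained clustering step the method borrows, is ordinary k-means except that each point is assigned to the nearest centroid that violates no must-link or cannot-link constraint. A minimal sketch on 2-D points (the data, k and constraints are illustrative, not KDD Cup features):

```python
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def violates(i, c, assign, must_link, cannot_link):
    """Would assigning point i to cluster c break a constraint against an
    already-assigned point?"""
    for a, b in must_link:
        other = b if a == i else a if b == i else None
        if other is not None and assign.get(other) not in (None, c):
            return True
    for a, b in cannot_link:
        other = b if a == i else a if b == i else None
        if other is not None and assign.get(other) == c:
            return True
    return False

def cop_kmeans(points, k, must_link=(), cannot_link=(), iters=20, seed=0):
    rnd = random.Random(seed)
    centroids = [list(points[i]) for i in rnd.sample(range(len(points)), k)]
    assign = {}
    for _ in range(iters):
        assign = {}
        for i, p in enumerate(points):
            # Try centroids nearest-first; take the first legal one.
            for c in sorted(range(k), key=lambda c: dist2(p, centroids[c])):
                if not violates(i, c, assign, must_link, cannot_link):
                    assign[i] = c
                    break
            else:
                return None  # constraints unsatisfiable for this point
        for c in range(k):
            members = [points[i] for i in assign if assign[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign

pts = [(0, 0), (0, 1), (5, 5), (5, 6), (0.4, 0.6)]
# Constraint: point 4 must share a cluster with point 2 despite its position.
res = cop_kmeans(pts, 2, must_link=[(2, 4)])
```

Without the constraint point 4 would join the origin cluster; the must-link pulls it into point 2's cluster, which is how labeled relations steer the clustering of unlabeled traffic.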
Multiple Trajectories Feature Detection Technology Based on Data Mining
XUE Fei, SHAN Zheng, YAN Li-jing and FAN Chao
Computer Science. 2016, 43 (5): 91-95.  doi:10.11896/j.issn.1002-137X.2016.05.017
Abstract PDF(406KB) ( 280 )   
To address the shortcomings of malware behavior characteristic detection, we propose a multiple-trajectory detection method that uses the behavior characteristics of file operations, network access and memory resources to construct a three-dimensional database of malicious behavior signatures. When constructing the projection database, we combine an AC automaton, which optimizes frequent sequence queries, and delete frequent sequences shorter than the minimum length, obtaining an improved data mining algorithm called Prefixspan-x. We use the algorithm to dynamically extract the malicious behavior characteristic database and perform threshold matching, overcoming the detection difficulties that software packing and obfuscation cause when behavior trajectories are obtained by static disassembly. Experimental results show that the proposed feature detection technology has high accuracy and a low false negative rate.
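Plain PrefixSpan, the base algorithm before the paper's AC-automaton and pruning improvements, grows frequent prefixes recursively over projected (suffix) databases. The sketch below also drops patterns shorter than a minimum length, mirroring the pruning idea; the tiny API-call database is hypothetical:

```python
def prefixspan(sequences, min_support, min_len=1):
    """PrefixSpan sketch: recursively extend frequent prefixes over projected
    databases; patterns shorter than `min_len` are not reported."""
    results = []

    def mine(prefix, projected):
        # Support of each item in the projected (suffix) database.
        counts = {}
        for seq in projected:
            for item in set(seq):
                counts[item] = counts.get(item, 0) + 1
        for item, sup in counts.items():
            if sup < min_support:
                continue
            new_prefix = prefix + [item]
            if len(new_prefix) >= min_len:
                results.append((tuple(new_prefix), sup))
            # Project each sequence onto the suffix after `item`.
            new_proj = [seq[seq.index(item) + 1:]
                        for seq in projected if item in seq]
            mine(new_prefix, new_proj)

    mine([], sequences)
    return results

# Hypothetical behavior trajectories (sequences of operations).
db = [["open", "read", "send"], ["open", "send"], ["open", "read", "close"]]
patterns = dict(prefixspan(db, min_support=2, min_len=2))
```

Only the two length-2 patterns with support 2 survive; a production Prefixspan-x would additionally speed up the support counting with an AC automaton.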
Android Malicious Behavior Detection Method Based on Composite-event Trigged Behaviors
ZHANG Guo-yin, QU Jia-xing, FU Xiao-jing and HE Zhi-chang
Computer Science. 2016, 43 (5): 96-99.  doi:10.11896/j.issn.1002-137X.2016.05.018
Abstract PDF(444KB) ( 369 )   
Given the current lack of effective means to identify Android malware in the transmission link, this paper proposes an Android malicious behavior detection method based on automated testing and dynamic analysis. Android application behavior is triggered by automated testing and monitored in a virtual sandbox. The paper presents a malicious behavior triggering model named DroidRunner that applies combined operations to malware, improving Android application code coverage and the trigger rate of malicious behavior, and thus the detection rate of malicious Android applications. In actual deployment and testing, the method achieves a high detection rate for unknown malicious applications and can help users find and analyze them.
Research of Cloud Provider Selection Method Based on SecLA
ZHU Hua-min, WU Li-fa and KANG Hong-kai
Computer Science. 2016, 43 (5): 100-107.  doi:10.11896/j.issn.1002-137X.2016.05.019
Abstract PDF(693KB) ( 300 )   
As the range of cloud computing applications gradually expands, users become more and more concerned about the security of cloud services. Existing cloud provider selection methods focus on performance and cost while seldom emphasizing security, and there is no effective method for evaluating the security of cloud services. Against this background, this paper presents a quantitative assessment method for cloud security services based on security level agreements (SecLA). Firstly, it builds a cloud computing security index system and a quantitative evaluation model based on the cloud control matrix (CCM) and the accompanying Consensus Assessments Initiative Questionnaire (CAIQ), both published by the Cloud Security Alliance (CSA). Secondly, it designs a SecLA template framework by extending WS-Agreement. Finally, it introduces two underprovisioning parameters to enhance the advantage-degree comparison of alternatives, realizing quantitative comparison of SecLAs in cloud computing environments. Experimental results show that the methods are feasible and effective: compared with the reference evaluation method (REM) and simple linear weighting, the cloud provider rankings are more reasonable, and the underprovisioning parameters provide a good auxiliary effect for decision making.
Revised Steganography Scheme Based on SI-UNIWARD
SI Yi-fan, WEI Li-xian, ZHANG Ying-nan and LIU Jia
Computer Science. 2016, 43 (5): 108-112.  doi:10.11896/j.issn.1002-137X.2016.05.020
Abstract PDF(1125KB) ( 470 )   
To solve the problem that the embedding cost in SI-UNIWARD becomes zero when the rounding error of the distortion cost is 1/2, a new steganography scheme based on SI-UNIWARD is proposed. First, the raw pixels of the precover are rounded; then a set of rounding errors at a reasonable distance is redefined, and new embedding costs are generated. Consequently, the DCT coefficients of the (0,0), (0,4), (4,0) and (4,4) modes, which are prohibited in SI-UNIWARD, are reused. Finally, the pathological behavior of the detection performance is avoided, and the zero-embedding-cost problem is solved. Experimental results show that, compared with SI-UNIWARD, the probability of changed DCT coefficients, the pixel differences in the spatial domain and the spatial-domain changes caused by DCT embedding are all lowered, resistance to detection is improved, and the side information is better used.
Privacy Protection under Cloud Computing against Data Mining
TAO Lin-bo, SHEN Jian-jing, YOU Qing-xiang and GUO Jia
Computer Science. 2016, 43 (5): 113-116.  doi:10.11896/j.issn.1002-137X.2016.05.021
Abstract PDF(344KB) ( 235 )   
Data storage in cloud environments is convenient for data mining but challenging for privacy protection. To protect privacy against data mining in cloud computing, a model is presented for the public cloud setting, to which a classification and preprocessing module is added. Classification criteria and data processing methods are discussed in detail, together with information retrieval, data reduction, file running protection and data destruction. The security effectiveness of the model is proved through theoretical analysis.
Electronic Commerce Transaction Trust Model Based on Two Layers Nodes and Objective Risk
LI Dao-quan, WU Xing-cheng and GUO Rui-min
Computer Science. 2016, 43 (5): 117-121.  doi:10.11896/j.issn.1002-137X.2016.05.022
Abstract PDF(414KB) ( 253 )   
To address the security of electronic commerce in P2P environments, building on other scholars' e-commerce trust models and accounting for possible objective transaction risk, this paper proposes a method for computing comprehensive trust based on two layers of nodes. The model considers various factors, such as transaction amount and transaction time, and introduces the concept of transaction degree, which serves as the basis of trade decisions. Experimental results show that the model effectively improves the accuracy of node trust values and the safety of transactions.
Virtual Group Revocation Policy-based Cloud Storage Access Control Model
XIE Li-xia, BO Fu-kuan and ZHAO Bin-bin
Computer Science. 2016, 43 (5): 122-126.  doi:10.11896/j.issn.1002-137X.2016.05.023
Abstract PDF(403KB) ( 230 )   
To address the low efficiency of user privilege revocation and the inability to scale to large numbers of users in existing cloud storage access control models, this paper proposes a new model based on an analysis of ciphertext-policy attribute-based encryption. A virtual-group revocation policy is given: all users are mapped to multiple virtual groups and the access structure is rebuilt, limiting the scope of privilege revocation to a single virtual group. By redistributing users' private keys only in the virtual group where revocation takes place, privilege revocation is achieved without any changes to the other virtual groups, which greatly improves revocation efficiency. A simulation on the Apache Hadoop platform demonstrates that the model revokes user privileges more efficiently.
Model of Virus Propagation in Wireless Sensor Network Based on Directional Antenna
HU Jin-tao, SONG Yu-rong and SU Xiao-ping
Computer Science. 2016, 43 (5): 127-131.  doi:10.11896/j.issn.1002-137X.2016.05.024
Abstract PDF(488KB) ( 244 )   
In this paper, a model of virus propagation in wireless sensor networks is established based on a directional antenna model. Nodes in the network switch to directional antennas when triggered by antenna-switching messages broadcast by intelligent monitoring nodes, and this setting is used to study virus propagation. The results show that using directional antennas not only saves sensor energy and prolongs the network's life cycle, but also effectively inhibits virus propagation and reduces the risk of virus outbreaks; the proposed model suits different virus propagation models and wireless sensor networks with different structures.
Double Video Watermarking Algorithm Based on Compressive Sensing
ZHOU Yan, ZENG Fan-zhi and ZHAO Hui-min
Computer Science. 2016, 43 (5): 132-139.  doi:10.11896/j.issn.1002-137X.2016.05.025
Abstract PDF(1148KB) ( 234 )   
For the problem of video content protection and detection of tampering within and between frames, we propose a double-watermarking video protection and tamper detection algorithm that extracts video content features as watermarks based on compressive sensing (CS) theory. Firstly, content features of the macroblocks of I frames are extracted by the CS procedure, generating a semi-fragile authentication watermark. Then, binary arithmetic over the frame sequence generates an integrity watermark. Finally, the double watermark is embedded into the CS measurements of high-frequency DCT coefficients in macroblocks of I and P frames using the OMP procedure of CS reconstruction. The algorithm improves the anti-attack capability of video watermarking and detects tampering. Simulation results show that it accurately detects intra-frame tampering down to the sub-blocks of a video frame, and precisely detects inter-frame tampering such as frame insertion, deletion and switching, locating any tampered frame.
Trust Evaluation and Service Selection Based on Collaborative Recommendation for Cloud Environment
YOU Jing, FENG Hui and SUN Yu-qiang
Computer Science. 2016, 43 (5): 140-145.  doi:10.11896/j.issn.1002-137X.2016.05.026
Abstract PDF(473KB) ( 221 )   
In the "fuzzy and autonomous" cloud computing environment, services are numerous and of uneven quality, making trusted service selection difficult for users. Based on users' interaction experience and real interpersonal interaction models, a comprehensive trust evaluation model based on collaborative recommendation is proposed. The model introduces two dynamic factors, time decay and weights, and designs a multivariate hybrid collaborative recommendation algorithm to achieve effective collaboration among users, helping them select trusted cloud services. To verify the model's feasibility, a distributed prototype system was designed and simulation experiments were carried out on user satisfaction and service quality. Simulation results show that the model increases average service satisfaction faster and inhibits malicious services more effectively, and that service selection quality keeps improving as the number of interactions grows.
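The time-decay factor the model introduces can be sketched as an exponentially weighted average of past ratings, so a recent bad interaction outweighs an old good one. The 30-day half-life, the neutral prior of 0.5 and the single-weight simplification are all assumptions for illustration; the paper combines this with per-recommender weights:

```python
def trust_score(interactions, now, half_life=30 * 86400):
    """Time-decayed trust: each (rating, timestamp) pair is weighted by an
    exponential decay with an assumed 30-day half-life (rating in [0, 1])."""
    num = den = 0.0
    for rating, ts in interactions:
        w = 0.5 ** ((now - ts) / half_life)
        num += w * rating
        den += w
    return num / den if den else 0.5  # no history -> neutral prior

now = 100 * 86400
# Old perfect rating vs. a fresh poor one: recency wins.
history = [(1.0, now - 90 * 86400), (0.2, now - 1 * 86400)]
score = trust_score(history, now)
```

The 90-day-old rating keeps only 1/8 of its weight, so the aggregate lands near the recent rating, which is the behavior that lets the model react quickly to services turning malicious.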
Research on Fault Tolerant Description Language and its Application for Distributed Computing
CAI Yuan-yuan and ZHAO Zhi-zhuo
Computer Science. 2016, 43 (5): 146-149.  doi:10.11896/j.issn.1002-137X.2016.05.027
Abstract PDF(407KB) ( 249 )   
When designing the architecture of a distributed fault-tolerant system, it is essential to control system actions both when every component functions normally and when some component fails. Designs for the two situations are generally tightly coupled, which makes a distributed fault-tolerant system that is large-scale and offers strong fault tolerance difficult to understand, design, develop and maintain. To settle this problem, a new method is proposed that defines a description language using the Vienna Definition Language on the basis of Hoare's theory of communicating sequential processes. It can describe not only the concurrency inherent in distributed computing but also how a system tolerates failure. More broadly, it suggests that programming languages are bound to become more abstract and higher level in the future, and it may provide a new idea in the field of distributed fault-tolerant computing.
Safety Analysis of Slat and Flap Control Unit for DO-333
CHEN Guang-ying, HUANG Zhi-qiu, CHEN Zhe and KAN Shuang-long
Computer Science. 2016, 43 (5): 150-156.  doi:10.11896/j.issn.1002-137X.2016.05.028
Abstract PDF(649KB) ( 271 )   
DO-333 is the formal methods supplement to DO-178C, the safety standard for aircraft software, and provides guidance for the use of formal methods in aircraft software development. Model checking, one kind of formal method, can strictly verify software artifacts in the requirement and design phases. Based on DO-333, this paper uses model checking to formally verify software artifacts from different development phases of a slat and flap control unit, one component of the flight control system, in order to judge whether each artifact satisfies the related DO-178C objectives and to provide supporting evidence. Firstly, model checking is applied to the specification and verification of the high-level requirements for the mutual update operation between flap and slat; then it is applied to the low-level requirements for the control logic of a single wing. Based on this verification and analysis, evidence is provided for the DO-178C verification objectives on high-level and low-level requirements. The paper illustrates an application of model checking in aircraft software certification and can provide technical support for the safety assurance and airworthiness certification of aircraft software.
Fast Estimation of WCET Based on Distribution Function
ZHOU Guo-chang, GUO Bao-long, GAO Xiang, WANG Jian and YAN Yun-yi
Computer Science. 2016, 43 (5): 157-161.  doi:10.11896/j.issn.1002-137X.2016.05.029
Abstract PDF(403KB) ( 280 )   
References | Related Articles | Metrics
In real-time software systems, analysis and evaluation of software timing performance are important. However, as CPU architectures become more and more complex, traditional methods based on simulating the underlying hardware are becoming increasingly difficult. A worst-case execution time (WCET) estimation method based on distribution functions approaches the problem from a probabilistic perspective and bypasses complex hardware modeling. This article first divided TI TMS320C6713 DSP assembly code into basic blocks to build the program flow diagram. Then the run time of each instruction was simulated with a beta distribution whose parameters are determined according to the improved program evaluation and review technique (PERT). The instruction execution times were superimposed based on the normal distribution to simulate the run time of each basic block. Finally, the worst-case execution time of the whole program was obtained using a path-based approach. Experimental results show that this method is feasible and reasonable.
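The PERT-based timing simulation described above can be illustrated with a small Monte-Carlo sketch. The instruction timing triples (optimistic, most likely, pessimistic) and the 99.9% quantile are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np

def pert_beta_sample(a, m, b, size, rng):
    """Sample PERT-style times: a=optimistic, m=most likely, b=pessimistic.
    Uses the standard PERT beta shape parameters (lambda = 4)."""
    alpha = 1 + 4 * (m - a) / (b - a)
    beta = 1 + 4 * (b - m) / (b - a)
    return a + (b - a) * rng.beta(alpha, beta, size)

def estimate_wcet(instructions, n_trials=10000, quantile=0.999, seed=0):
    """Monte-Carlo WCET estimate for one execution path: draw a time for
    every instruction per trial, sum them, and take a high quantile."""
    rng = np.random.default_rng(seed)
    total = np.zeros(n_trials)
    for a, m, b in instructions:
        total += pert_beta_sample(a, m, b, n_trials, rng)
    return np.quantile(total, quantile)

# Hypothetical per-instruction (optimistic, likely, pessimistic) cycle counts.
instrs = [(1.0, 1.2, 2.0), (0.5, 0.6, 1.5), (2.0, 2.5, 4.0)]
wcet = estimate_wcet(instrs)
```

The estimate always lies between the sum of optimistic times and the sum of pessimistic times, which is the sanity bound a probabilistic WCET method must respect.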
On Reducing Monitoring Overhead in Runtime Verification
XU Sheng, YE Jun-min, CHEN Shu, JIN Cong and CHEN Pan
Computer Science. 2016, 43 (5): 162-168.  doi:10.11896/j.issn.1002-137X.2016.05.030
Abstract PDF(580KB) ( 259 )   
References | Related Articles | Metrics
One of the most important issues in the field of runtime verification is reducing monitoring overhead; minimizing the monitor's influence on the monitored system is an essential objective. In this paper, monitoring methods and the research status of runtime overhead control were first introduced. Then several methods for computing overhead costs were analyzed in order to optimize the monitoring mode. Finally, the current challenges and future research directions of overhead control in runtime verification were discussed.
Confluence Decision Method for Active Rule Set Based on Condition Conflict Analysis
XIONG Zhong-min, ZHU Ji-guang, LI Chang-ji and YUAN Hong-chun
Computer Science. 2016, 43 (5): 169-173.  doi:10.11896/j.issn.1002-137X.2016.05.031
Abstract PDF(418KB) ( 240 )   
References | Related Articles | Metrics
With active rules, many databases gain automatic reaction capability, and active rules have been employed in many areas including databases, knowledge bases and wireless sensor networks. Confluence analysis can maintain the data consistency of a database, but it is a very hard task. The existing methods, based on rule commutativity, do not consider whether a condition conflict exists between rules such that those rules cannot appear in the same execution sequence. Moreover, the existing analysis works on the rule set as a whole and does not consider the temporal order in which rules are triggered. To address these problems, the concept of a triggering sequence was introduced and the condition conflicts within a triggering sequence were analyzed. Finally, a new confluence detection algorithm was presented, and its correctness and termination were proved.
k Nearest Neighbor Query Based on Voronoi Diagram for Obstructed Spaces
ZHANG Li-ping, JING Hai-dong, LI Song and CUI Huan-yu
Computer Science. 2016, 43 (5): 174-178.  doi:10.11896/j.issn.1002-137X.2016.05.032
Abstract PDF(448KB) ( 380 )   
References | Related Articles | Metrics
In order to enhance the efficiency of k nearest neighbor queries in obstructed spaces, k nearest neighbor query methods based on the Voronoi diagram were studied, and the kNN-Obs algorithm was proposed. The algorithm has two phases: filtering and refinement. Filtering mainly uses the pruning power of the Voronoi diagram and greatly reduces the number of candidate points. Refinement screens the objects in the candidate set according to obstructed distance and Voronoi neighbors. Furthermore, the ADDkNN-Obs algorithm for newly added points and the DENkNN-Obs algorithm for deleted points were given. The experiments show that the algorithms have advantages in dealing with the k nearest neighbor problem in obstructed spaces.
Research of Interval-based Out-of-order Event Processing in Internet of Things
ZHOU Chun-jie, DAI Peng-fei, LI Hong-bo and ZHANG Zhen-xing
Computer Science. 2016, 43 (5): 179-187.  doi:10.11896/j.issn.1002-137X.2016.05.033
Abstract PDF(732KB) ( 341 )   
References | Related Articles | Metrics
Many events in real-world applications are long-lasting events with certain durations. The temporal relationships among such durable events are often changeable and complex; moreover, when events arrive out of order, query processing becomes even more challenging. Applications in the Internet of Things strictly require events to arrive in order, yet network latencies and machine failures often cause them to arrive out of order. In this work, we analyzed the preliminaries of event temporal semantics, proposed a tree-plan model of out-of-order durable events, and correspondingly introduced a hybrid solution. Extensive experimental studies demonstrate the efficiency of our approach.
Study on Domain-corpus Driven Calculation Method of Sentence Relevance
LI Feng, HUANG Jin-zhu, LI Zhou-jun and YANG Wei-ming
Computer Science. 2016, 43 (5): 188-192.  doi:10.11896/j.issn.1002-137X.2016.05.034
Abstract PDF(774KB) ( 277 )   
References | Related Articles | Metrics
Sentence relevance calculation plays a very important role in various fields of NLP, such as public opinion monitoring, information retrieval and statistical machine translation (SMT). After clearly defining the relationship between similarity and relevance, this paper designed a domain-specific, corpus-driven calculation model of sentence relevance. The model uses linguistic data from the same domain to construct a three-level "sentence-paragraph-article" domain semantic space. The topic relevance of words is computed from factors at the various levels, such as co-occurrence probability, average co-occurrence distance and sentence length. Comparative experiments against methods based on literal features, HowNet and Tongyici Cilin show that the model has great practical value.
Business Process Model Abstraction Based on Cluster Analysis
SUN Shanwu, WANG Nan and OUYANG Dantong
Computer Science. 2016, 43 (5): 193-197.  doi:10.11896/j.issn.1002-137X.2016.05.035
Abstract PDF(470KB) ( 225 )   
References | Related Articles | Metrics
A prominent business process model abstraction (BPMA) use case is the construction of a process "quick view" for rapidly comprehending a complex process. Some researchers have proposed abstraction methods that aggregate activities on the basis of their semantic similarity. Most of these methods rely on k-means cluster analysis, which partitions an activity set into k clusters (k is predefined by users) and assigns each activity to the cluster whose centroid is closest. In practice, however, the number of abstract activities (clusters or subprocesses) is unknown, so which activities are assigned to the same subprocess tends to depend on the designers' experience or modeling habits. Moreover, if the activities do not form particularly cohesive groups from the point of view of experienced modelers, the initial merging decisions may contain errors. In this paper, virtual documents were introduced to represent activities and process models, eliminating the constraint of using particular activity attributes as the dimensions of the vector space. An approach was presented that exploits an industrial process model repository to derive a distance threshold between an activity and the subprocess it belongs to. According to this threshold, an algorithm was proposed to generate a likely number k of clusters. Using k as a parameter and the distance threshold as a further constraint, a clustering procedure was given to aggregate the activities into different clusters. In an experimental validation, we compared both the proposed approach and k-means clustering (without the distance threshold as a further constraint) against actual modeling decisions, showing a stronger correlation between the proposed approach and the actual decisions. As such, this paper contributes to the development of modeling support for effective process model abstraction, easing the use of business process models in practice.
Novel Method on Community-based User Recommendation on Social Network
ZHAO Qin, WANG Cheng and WANG Peng-wei
Computer Science. 2016, 43 (5): 198-203.  doi:10.11896/j.issn.1002-137X.2016.05.036
Abstract PDF(526KB) ( 339 )   
References | Related Articles | Metrics
User recommendation on social networks is a research hot spot, but existing methods do not perform well on social networks with multiple topics. To solve this problem, we analyzed the topic classification of social networks and proposed a novel method for recommending relevant users. The experimental results show that this method can improve the accuracy of recommendation and efficiently decrease the time complexity.
Orthogonal Non-negative Matrix Factorization for K-means Clustering
LI Meng-jie, XIE Qiang and DING Qiu-lin
Computer Science. 2016, 43 (5): 204-208.  doi:10.11896/j.issn.1002-137X.2016.05.037
Abstract PDF(378KB) ( 718 )   
References | Related Articles | Metrics
An orthogonal NMF K-means clustering algorithm based on the basic theory of non-negative matrix factorization (NMF) was proposed to improve the quality of K-means clustering on high-dimensional data. We presented an orthogonal NMF algorithm that adds an orthogonality constraint to the data prototype matrix obtained from factorization, using improved Gram-Schmidt and Householder orthogonalization respectively; both ensure the non-negativity of the low-dimensional features and enhance the orthogonality of the matrix, after which K-means clustering is performed. Experimental results show that K-means clustering based on H-ONMF gives better clustering results on high-dimensional data.
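For context, a minimal sketch of the underlying pipeline: basic multiplicative-update NMF (Lee and Seung) followed by assigning each sample to its dominant factor, which plays the role of the K-means label. The paper's orthogonal variant additionally orthogonalizes a factor matrix via Gram-Schmidt or Householder transforms, which is omitted here:

```python
import numpy as np

def nmf(V, r, n_iter=200, seed=0):
    """Basic multiplicative-update NMF: V (n x m, non-negative) is factored
    into W (n x r) and H (r x m), both kept non-negative by the updates."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 0.1
    H = rng.random((r, m)) + 0.1
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # update basis
    return W, H

# Each sample (column of V) is assigned to the factor with the largest
# coefficient, mimicking K-means clustering in the low-dimensional space.
V = np.abs(np.random.default_rng(1).standard_normal((20, 10)))
W, H = nmf(V, r=3)
labels = np.argmax(H, axis=0)
```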
Crowdsourcing Based Description of Urban Emergency Events
CHEN Hai-yan and XU Zheng
Computer Science. 2016, 43 (5): 209-213.  doi:10.11896/j.issn.1002-137X.2016.05.038
Abstract PDF(1232KB) ( 359 )   
References | Related Articles | Metrics
Crowdsourcing here refers to the acquisition, integration, and analysis of big, heterogeneous data generated by diverse sources in urban spaces, such as sensors, devices, vehicles, buildings, and humans. Nowadays no country, community, or person is immune to urban emergency events, and detecting such events, e.g., fires, storms and traffic jams, is of great importance for protecting human security. Recently, social media feeds have rapidly emerged as a novel platform for providing and disseminating information that is often geographic, and their content often includes references to urban emergency events occurring at, or affecting, specific locations. In this paper, a crowdsourcing-based model was proposed to describe urban emergency events in real time. Firstly, users of social media are taken as the crowdsourcing workers. Secondly, the semantic, spatial and temporal information in the social media posts is extracted to detect events in real time. Thirdly, a GIS-based annotation of the detected urban emergency event is shown. The proposed method was evaluated with extensive case studies based on real urban emergency events, and the results show its accuracy and efficiency.
Extended Rough Description Logic
YAN Zhi-huan and LEI Yin-bin
Computer Science. 2016, 43 (5): 214-218.  doi:10.11896/j.issn.1002-137X.2016.05.039
Abstract PDF(384KB) ( 288 )   
References | Related Articles | Metrics
The existing rough description logics (RDLs) are all based on classical rough set theory, i.e., when dealing with uncertain knowledge, one must have an equivalence relation on the considered domain of objects in advance. In fact, people often encounter the case where there is a formal concept structure on the domain of objects, and a natural question is how to deal with the uncertain concepts that may occur. We combined formal concept analysis and rough set theory to establish two new RDLs. The approach proposed by Y.Y.Yao in literature [14] is applied in the frameworks of the new RDLs, where the upper and lower approximations of a non-definable concept are defined by lattice-theoretical and set-theoretical operators respectively; in particular, a new form of lower approximation, different from the previous form, is defined. These notions differ substantially from the classical ones but are quite practical. Based on the novel notions of upper and lower approximation, we added the approximation operators to the structure of description logic, establishing two new RDLs, FlALC and FsALC, and giving their semantics and syntax. Finally, we gave an extended Tableaux algorithm, which can be used to solve related reasoning problems.
Split-Integration Recommendation Algorithm Based on Matrix Completion
WANG Yi and JIN Zhong
Computer Science. 2016, 43 (5): 219-222.  doi:10.11896/j.issn.1002-137X.2016.05.040
Abstract PDF(374KB) ( 209 )   
References | Related Articles | Metrics
Traditional recommendation systems usually use collaborative filtering or content-based recommendation, but this paper applied matrix completion. Because of the sparsity of the data, using matrix completion directly yields a relatively large error. Considering that some active users exist among all users, finding these special users and integrating their data reduces the sparsity, and improving recommendation quality for these users is more likely to generate value. This paper proposed a split-integration recommendation algorithm, and the experimental results show that the proposed method can improve the accuracy of recommendation.
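As background, a generic matrix-completion routine of the kind the paper builds on can be sketched with iterative SVD soft-thresholding. This is a standard soft-impute sketch, not the paper's split-integration procedure; the threshold `tau` and iteration count are illustrative assumptions:

```python
import numpy as np

def soft_impute(R, mask, tau=0.1, n_iter=50):
    """Fill the unobserved entries of a rating matrix R by iterative SVD
    soft-thresholding: shrink singular values to favor a low-rank fit,
    then restore the observed ratings on every iteration."""
    X = np.where(mask, R, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_low = (U * np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values
        X = np.where(mask, R, X_low)                  # keep observed entries
    return X

rng = np.random.default_rng(0)
true = np.outer(rng.random(8), rng.random(6)) * 5     # rank-1 "ratings"
mask = rng.random(true.shape) < 0.7                   # ~70% observed
filled = soft_impute(true, mask)
```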
Research of Text Data Streams Clustering Algorithm Based on Affinity Propagation
LI Yi-ming, NI Li-ping, FANG Qing-hua and LIU Hui-ting
Computer Science. 2016, 43 (5): 223-229.  doi:10.11896/j.issn.1002-137X.2016.05.041
Abstract PDF(512KB) ( 326 )   
References | Related Articles | Metrics
With the advent of the big data era, large amounts of unstructured text data streams have emerged online. These data streams are dynamic, high-dimensional and sparse. Addressing these characteristics, and building on the traditional affinity propagation (AP) algorithm, a text data stream clustering algorithm called OAP-s was proposed in this paper. By introducing an attenuation factor into the AP algorithm, OAP-s passes the clustering centers of the current window to the next window while attenuating the results. Since OAP-s still has some shortcomings, we proposed a further algorithm called OWAP-s. Building on OAP-s, OWAP-s defines a weighted similarity and introduces an attraction factor that makes historic clustering centers more attractive, thus obtaining more accurate clustering results. Both algorithms adopt the sliding time window model, which reflects the temporal characteristics as well as the distribution of the data stream. Experimental results show that both algorithms are flexible and extensible, and that they are more accurate and stable than the OSKM algorithm.
Fuzzy Support Vector Data Description with Centers of Classes Distance
WANG Min-guang and WANG Zhe
Computer Science. 2016, 43 (5): 230-233.  doi:10.11896/j.issn.1002-137X.2016.05.042
Abstract PDF(358KB) ( 275 )   
References | Related Articles | Metrics
Support vector data description (SVDD) ignores the importance of the sample distribution. This paper proposed a new method, fuzzy SVDD with centers-of-classes distance, and applied it to UCI data sets. The algorithm uses the ratio of a sample's distances to the centers of the two classes to assign each sample a weight, increasing the weights of important samples while leaving the others unchanged, which highlights the differences among samples. The results show that our algorithm performs better than SVDD.
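The weighting idea can be sketched as follows. The exact formula used in the paper is not given in the abstract, so this particular ratio is an assumption: samples that are close to their own class center relative to the other center receive weights nearer to 1.

```python
import numpy as np

def fuzzy_weights(X, y):
    """Assign each sample a fuzzy membership weight in (0, 1) from the ratio
    of its distances to the two class centers (hypothetical formula)."""
    c0 = X[y == 0].mean(axis=0)            # center of class 0
    c1 = X[y == 1].mean(axis=0)            # center of class 1
    d0 = np.linalg.norm(X - c0, axis=1)
    d1 = np.linalg.norm(X - c1, axis=1)
    d_own = np.where(y == 0, d0, d1)       # distance to own class center
    d_other = np.where(y == 0, d1, d0)     # distance to the other center
    return d_other / (d_own + d_other + 1e-12)

X = np.array([[0.0, 0.0], [0.1, 0.0], [4.9, 0.0], [5.0, 0.0], [2.4, 0.0]])
y = np.array([0, 0, 1, 1, 0])
w = fuzzy_weights(X, y)
```

These weights would then multiply the per-sample slack penalties in the SVDD objective, so borderline samples influence the boundary less.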
Linear Representation Method Based on Key Points for Time Series
CHEN Shuai-fei, LV Xin, QI Rong-zhi, WANG Long-bao and YU Lin
Computer Science. 2016, 43 (5): 234-237.  doi:10.11896/j.issn.1002-137X.2016.05.043
Abstract PDF(321KB) ( 308 )   
References | Related Articles | Metrics
Time series data is large-scale and high-dimensional. Mining the raw sequence directly has high computational complexity and is susceptible to noise, so preprocessing of the original time series is essential; however, most commonly used linear representation methods have low accuracy in selecting segmentation points. Based on the variation of the time series, we proposed a linear representation method based on key points. The method takes both the time span and the amplitude changes into account and can efficiently extract key points from the time series, which prevents excessive noise removal and is simple to implement. Experiments show that the method generalizes well to data from different areas.
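A simplified sketch of key-point selection along these lines: a point is kept when both the amplitude change and the time span since the last kept point exceed thresholds, and the endpoints are always kept so the piecewise-linear fit covers the whole series. The paper's exact criterion is assumed, not reproduced:

```python
def key_points(ts, amp_thresh, min_span=1):
    """Return indices of key points of a series: a point is kept when the
    amplitude change and the time gap since the last kept point both
    exceed their thresholds; endpoints are always kept."""
    kept = [0]
    for i in range(1, len(ts) - 1):
        if i - kept[-1] >= min_span and abs(ts[i] - ts[kept[-1]]) >= amp_thresh:
            kept.append(i)
    if kept[-1] != len(ts) - 1:
        kept.append(len(ts) - 1)
    return kept

ts = [0.0, 0.1, 2.0, 2.1, 0.0, 0.05]
kp = key_points(ts, amp_thresh=1.0)
```

Connecting consecutive key points with straight segments then yields the linear representation used for downstream mining.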
Research on Parallel SVM Algorithm Based on Spark
LIU Ze-shen and PAN Zhi-song
Computer Science. 2016, 43 (5): 238-242.  doi:10.11896/j.issn.1002-137X.2016.05.044
Abstract PDF(411KB) ( 360 )   
References | Related Articles | Metrics
With the constantly increasing scale of data, the parallel design of support vector machines (SVM) has become a hot research topic in the data mining field. In view of the slow optimization and large memory consumption during model training, we proposed a new parallel SVM algorithm, SP-SVM, based on Spark. First of all, this paper implemented the algorithm using the Spark parallel computing framework. Secondly, this paper analyzed the performance of the parallel operators and optimized the parallel design of the algorithm, solving the low efficiency of cascade training models. Experimental results show that the new parallel training method can save substantial training time and greatly improve efficiency at the cost of a small loss of precision.
Improved Text Clustering Algorithm Based on Kolmogorov Complexity
WANG You-hua and CHEN Xiao-rong
Computer Science. 2016, 43 (5): 243-246.  doi:10.11896/j.issn.1002-137X.2016.05.045
Abstract PDF(324KB) ( 297 )   
References | Related Articles | Metrics
Clustering algorithms based on Kolmogorov complexity have the advantages of generality and parameter independence, but show low accuracy when applied to clustering text by semantic information. To solve this problem, this paper proposed a text clustering algorithm based on feature extension, called DEF-KC. To improve each keyword's contribution to the theme, DEF-KC extends the keywords in the preprocessed text with information from the corresponding entries in Baidu Encyclopedia, calculates text similarity by an approximation of the Kolmogorov complexity of the text, and finally clusters the texts using spectral clustering. The experimental results show that the proposed algorithm achieves much better precision and recall than traditional text clustering algorithms based on Kolmogorov complexity.
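The standard way to approximate Kolmogorov-complexity-based similarity in practice is the normalized compression distance (NCD), using a real compressor as a stand-in for the uncomputable complexity. A minimal sketch with zlib (the paper's exact approximation is assumed to be of this family):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: small for similar texts, near 1 for
    unrelated ones. Uses zlib compressed lengths as complexity estimates."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the cat sat on the mat " * 20
b_other = b"stock prices fell sharply in early trading " * 20
```

A pairwise NCD matrix over the documents can then feed directly into spectral clustering.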
Research on Left-Right Invertibility for Integer Matrix Related with Data Mining
LU Cheng-gang
Computer Science. 2016, 43 (5): 247-251.  doi:10.11896/j.issn.1002-137X.2016.05.046
Abstract PDF(428KB) ( 251 )   
References | Related Articles | Metrics
We considered discrete data observation models based on integer matrices, such as the Boolean and non-negative observation matrices studied previously, and posed the problem of integer decomposition for an arbitrary integer observation matrix. The problem is in essence equivalent to solving a classical linear Diophantine system, with the only condition that the base matrix originates from the decomposed matrix. We then provided a new method based on left and right inverses to find the integer inverse of the base matrix. Because the choice of base matrix is a design option in engineering, it is convenient to consider only base matrices possessing an integer inverse. Additionally, since any non-full-rank matrix can be decomposed into the product of two full-rank matrices, we explored sufficient conditions for the existence of an integer left or right inverse for full-rank rectangular matrices, and designed an algorithm to compute the integer inverse. Our integer inverse constructor can find integer inverses under conditions where the constructor based on the least squares method fails. Merged with the least squares method, our method becomes a complete solution to the integer decomposition of integer observation matrices.
Selective Ensemble of SVDDs Based on Correntropy and Distance Variance
XING Hong-jie and WEI Yong-le
Computer Science. 2016, 43 (5): 252-256.  doi:10.11896/j.issn.1002-137X.2016.05.047
Abstract PDF(446KB) ( 237 )   
References | Related Articles | Metrics
A selective ensemble of support vector data descriptions (SVDDs) based on the correntropy of information-theoretic learning and on distance variance was proposed. Correntropy is utilized instead of mean square error to measure the compactness of the ensemble and construct a more compact classification boundary. Distance variance is used to measure the diversity of the base classifiers and enhance the diversity of the ensemble model. An l1-norm regularization term is introduced into the objective function to implement selective ensembling, and the half-quadratic optimization technique is utilized to solve the proposed model. In comparison with a single SVDD, a Bagging-based ensemble of SVDDs, and an AdaBoost-based ensemble of SVDDs, the proposed method achieves better classification performance.
Incremental Time Series Classification Algorithm Based on Shapelets
DING Jian and WANG Shu-ying
Computer Science. 2016, 43 (5): 257-260.  doi:10.11896/j.issn.1002-137X.2016.05.048
Abstract PDF(403KB) ( 257 )   
References | Related Articles | Metrics
This paper studied time series classification with respect to features of time series such as high dimensionality, ordered real-valued variables and autocorrelation. From the perspective of image processing, this paper proposed a method, ITTS, to transform image information into time series. As shapelets are the subsequences that best characterize a time series, they change dynamically as time goes on. Following this idea, this paper presented a time series classification algorithm based on dynamically discovered shapelets, named the IPST algorithm. The IPST algorithm dynamically discovers the current optimal k shapelets, so as to improve the accuracy of time series classification. The discovered shapelets can also be used with state-of-the-art classification algorithms, leading to better performance.
Fast Seed Set Refinement Algorithm for Closest Substrings Discovery
ZHANG Yi-pu, RU Feng and WANG Biao
Computer Science. 2016, 43 (5): 261-264.  doi:10.11896/j.issn.1002-137X.2016.05.049
Abstract PDF(295KB) ( 323 )   
References | Related Articles | Metrics
Finding the closest substrings in sequences is crucial for mining specific functional sites in genes and understanding gene regulatory relationships. This paper proposed a novel algorithm, SCEM, based on an improved expectation-maximization (EM) refinement of seed sets. SCEM divides the dataset into a series of seed sets by clustering the input sequences, then uses the improved EM algorithm to refine these seed sets and finally finds the closest substrings. Experiments on real and simulated datasets demonstrate that our algorithm can find the real closest substrings with high performance and efficiency compared with popular algorithms such as Random Projection. Moreover, SCEM can also solve the long closest substring finding problem effectively.
Efficient k-Clique Mining Algorithm in Large-scale Networks
CHAI Xu-qing and DONG Yong-liang
Computer Science. 2016, 43 (5): 265-268.  doi:10.11896/j.issn.1002-137X.2016.05.050
Abstract PDF(324KB) ( 378 )   
References | Related Articles | Metrics
Mining k-cliques in networks is a basis for most network-based applications. Addressing the low efficiency of k-clique mining, this paper proposed an efficient k-clique mining algorithm for large-scale networks. Firstly, the problem of finding the k-clique with maximal density is transformed into the problem of finding a k-clique whose density is larger than a given value. Next, a bipartite network is constructed whose left nodes are the nodes of the original network and whose right nodes are (k-1)-cliques, and the k-clique problem can be solved in polynomial time by applying it. In sparse networks, the time and space complexities of the proposed algorithm are O(c2k) and O(ck) respectively. The experiments show that the proposed algorithm is more accurate and efficient than the best current algorithm, and it can also be applied to mine k-cliques in incomplete networks.
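For reference, the baseline task the paper accelerates, enumerating all k-cliques, can be sketched with a simple ordered extension search (this is the naive approach, not the paper's bipartite-network construction):

```python
def k_cliques(adj, k):
    """Enumerate all k-cliques in an undirected graph given as an adjacency
    dict {node: set_of_neighbors}. Cliques are grown in increasing node
    order so each clique is produced exactly once."""
    def extend(clique, candidates):
        if len(clique) == k:
            yield tuple(clique)
            return
        for v in sorted(candidates):
            if v > clique[-1]:                       # avoid duplicates
                yield from extend(clique + [v], candidates & adj[v])
    for v in sorted(adj):
        yield from extend([v], adj[v])

# Complete graph on 4 nodes: it contains four triangles and one 4-clique.
adj = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3}}
triangles = list(k_cliques(adj, 3))
```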
Product Image Sentence Annotation Based on Gradient Kernel Feature and N-gram Model
ZHANG Hong-bin, JI Dong-hong, YIN Lan and REN Ya-feng
Computer Science. 2016, 43 (5): 269-273.  doi:10.11896/j.issn.1002-137X.2016.05.051
Abstract PDF(503KB) ( 236 )   
References | Related Articles | Metrics
Product image sentence annotation was proposed because a sentence describes an online product more accurately than single words. Firstly, image feature learning was performed. The gradient kernel feature, which achieves the best annotation performance, was chosen because it describes the key visual characteristics of product images, such as shape and texture, better than other features; it was therefore used for image classification and image retrieval. Secondly, several key words were summarized from the captions of training images based on semantic correlation computing. Thirdly, a modified sequence that not only contains rich semantic information but also satisfies syntactic mode compatibility was created from these key words with an N-gram model, and a sentence was generated according to a predefined sentence template and the modified sequence. Finally, a Boosting model was designed to choose the sentences with the best BLEU-3 scores to annotate the product images. Experiments show that sentences generated by the Boosting model achieve state-of-the-art annotation performance.
Super-resolution Image Reconstruction Based on Fractional Order Total Variation Regularization
LIU Ya-nan, YANG Xiao-mei and CHEN Chao-nan
Computer Science. 2016, 43 (5): 274-278.  doi:10.11896/j.issn.1002-137X.2016.05.052
Abstract PDF(936KB) ( 327 )   
References | Related Articles | Metrics
Reconstructing a high-resolution image from a degraded low-resolution image is an ill-posed problem, and regularization is usually added to deal with it. In this paper, we introduced fractional-order total variation (FOTV) as an additional regularizer to constrain the solution space, on the basis of the traditional total variation (TV) operator. FOTV regularization reconstructs detailed texture information better and eliminates the staircase effect. Moreover, we divided the problem into sub-problems with the alternating direction method of multipliers (ADMM); the total variation and fractional total variation operators were constructed as circulant matrices and then diagonalized by the Fourier transform, reducing the computational complexity. Experimental results show that, compared with existing methods, the proposed model does not suffer from the staircase effect, keeps detailed information, and achieves better peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values.
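The Fourier diagonalization trick the abstract mentions can be shown on a toy case. A circulant matrix is fully determined by its first column, its eigenvalues are the FFT of that column, and so a circulant system can be solved in O(n log n) without factorizing the matrix; the specific 4x4 example here is purely illustrative:

```python
import numpy as np

c = np.array([4.0, 1.0, 0.0, 1.0])   # first column of a circulant matrix
# Column j of a circulant matrix is the first column rolled down by j.
C = np.stack([np.roll(c, i) for i in range(len(c))], axis=1)
b = np.array([1.0, 2.0, 3.0, 4.0])

# C acts as circular convolution with c, so in the Fourier domain it is
# diagonal with eigenvalues fft(c); solving C x = b becomes a division.
x = np.real(np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)))
```

This is exactly why building the TV and fractional-TV operators as circulant matrices makes each ADMM sub-problem cheap.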
Improved Algorithm of Fast Image Stitching by Reducing Panoramic Distortion
QU Zhong, LIN Si-peng and JU Fang-rong
Computer Science. 2016, 43 (5): 279-282.  doi:10.11896/j.issn.1002-137X.2016.05.053
Abstract PDF(573KB) ( 372 )   
References | Related Articles | Metrics
Traditional image stitching based on SIFT feature point extraction suffers, to a certain extent, from distortion errors, and the panorama becomes more seriously distorted when it is composited from a long image sequence. To create a high-quality panorama, an improved algorithm was proposed in this paper. It alters the way the reference image is selected, and we put forward a method to compute, for any image of the sequence, the transformation matrix that aligns it with the reference image in the same coordinate space. Additionally, the improved stitching method dynamically selects the next input image based on the number of SIFT matching points. Compared with Song Fuhua's stitching process, the improved method increases the number of matching feature points and reduces the SIFT feature detection area of the reference image. The experimental results show that the improved method not only improves the efficiency of image stitching but also reduces panoramic distortion errors.
Detection Method for Group Riot Activity Based on Change Frequency of Optical Flow’s Magnitude
LIN Jie and LIN La
Computer Science. 2016, 43 (5): 283-287.  doi:10.11896/j.issn.1002-137X.2016.05.054
Abstract PDF(640KB) ( 296 )   
References | Related Articles | Metrics
Group riot activity, which is sudden and destructive, is a focus of video surveillance, and detecting it automatically helps improve the intelligence of surveillance systems. Common activity detection methods have a high false alarm rate when detecting group riot activity in videos. In this paper, a detection method for group riot activity based on the change frequency of optical flow magnitude was proposed. Firstly, the optical flow of each pixel on each frame of a video is calculated. Secondly, a binary map reflecting the changes of optical flow is obtained adaptively. Then an activity descriptor is built by dividing the image into several blobs and computing the change frequency histogram of optical flow for each blob independently. Finally, the activity features are trained and classified using a linear support vector machine. Experiments show that the new method achieves a low false alarm rate, a low missed alarm rate and a high genuine acceptance rate when detecting group riot activity, so it can be widely used in intelligent video surveillance.
Vehicle Behavior Recognition Method Based on Quadratic Spectral Clustering and HMM-RF Hybrid Model
FAN Jing, RUAN Ti-hong, WU Jia-min and DONG Tian-yang
Computer Science. 2016, 43 (5): 288-293.  doi:10.11896/j.issn.1002-137X.2016.05.055
Abstract PDF(484KB) ( 315 )   
References | Related Articles | Metrics
Vehicle trajectories extracted from highway surveillance systems can be used to analyze and recognize vehicle behavior. Because abnormal trajectories, such as lane changes and overtaking, are rare, classic spectral clustering with the longest common sub-sequence (LCSS) cannot effectively distinguish all kinds of trajectories. In addition, the popular HMM trajectory model ignores the negative impact of bad samples and classifies only by maximum likelihood, causing a high false recognition rate in vehicle behavior recognition. To address these issues, according to the characteristics of highway vehicle trajectories, we proposed a vehicle trajectory recognition method based on quadratic spectral clustering and an HMM-RF hybrid model. Firstly, trajectory curvature is calculated to distinguish overtaking by its curved characteristics, and then lane-change trajectories are distinguished among the non-curved trajectories by spectral clustering with inclination similarity. Secondly, all the sub-clusters are clustered again by spectral clustering with LCSS, which effectively distinguishes overtaking, lane-change and normal trajectories. The outputs of the HMM model, the probabilities over different dimensions, are used as input to a random forest model to improve the precision of behavior recognition. Experiments on different data sets verify the effectiveness of the method: the average accuracy of trajectory clustering reaches 96%, and the average accuracy of behavior recognition reaches 89.3%, so the algorithm has high accuracy and robustness.
Face Recognition Method Based on Curvelet Transform and Cosine Rules
LI Yan-ping, JIANG Ying, HU Jin-ming and LI Wei-ping
Computer Science. 2016, 43 (5): 294-297.  doi:10.11896/j.issn.1002-137X.2016.05.056
Abstract PDF(610KB) ( 250 )   
References | Related Articles | Metrics
Face recognition is a common form of biometric identification, widely used in access control, public security, justice, and other fields. Existing face recognition methods are strongly affected by changes in acquisition conditions such as illumination, facial expression, pose and occlusion, which limits the application of face recognition technology. This paper proposed a face recognition method based on the curvelet transform and cosine rules to improve robustness to acquisition conditions. First, we applied the curvelet transform to the face image and detected key points according to the curvelet coefficients. Then, we extracted curvelet features from the key points at different scales and directions to obtain a face descriptor. Finally, the optimal matching result was computed through cosine rules with cumulative and extremum operations. Experiments show that the proposed method is robust to changes in illumination, facial expression, pose and occlusion, and achieves good recognition performance.
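The cosine-rule matching step amounts to scoring a query descriptor against a gallery by cosine similarity and keeping the best match. A minimal sketch follows; the function names and the flat-vector descriptors are assumptions standing in for the curvelet key-point features.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def best_match(query_descriptor, gallery):
    """Return the identity in `gallery` (a dict: id -> descriptor) whose
    descriptor maximizes cosine similarity with the query."""
    scores = {idx: cosine_similarity(query_descriptor, d)
              for idx, d in gallery.items()}
    return max(scores, key=scores.get)
```

The paper additionally accumulates scores over key points and takes an extremum; this sketch shows only the single-descriptor case.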
State Detection of Cancer Cell in Phase-contrast Microscopy Images
ZHANG Jian-hua, ZOU Yi-jie, GAO Qiang and CHEN Sheng-yong
Computer Science. 2016, 43 (5): 298-303.  doi:10.11896/j.issn.1002-137X.2016.05.057
Abstract PDF(1244KB) ( 310 )   
References | Related Articles | Metrics
This work described a new morphology-based method for detecting the different states of the cell cycle in time-lapse microscopy images of live cancer cells. A three-step approach was used. First, we applied an improved level-set function to segment cancer cell images and identify the outline of each candidate. Then, an overall gray threshold over all cancer cell regions was obtained through Otsu's method, and this threshold was used to divide every cancer cell region into two parts, dark and bright. Finally, five efficient features based on gray level, shape and inner structure were proposed to distinguish cancer cells in interphase, mitotic prophase, mitotic metaphase and mitotic anaphase. Experiments on time-lapse microscopy images of T24 bladder cancer cells show that the proposed detection method performs well: the position and state of cells are detected correctly, and the method is robust when detecting cell states in consecutive images.
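The Otsu thresholding step used to split each cell region into dark and bright parts is a standard algorithm and can be sketched directly (generic implementation on a gray-level list, not the paper's code):

```python
def otsu_threshold(gray_values):
    """Otsu's method on a list of integer gray levels (0-255): pick the
    threshold that maximizes the between-class variance of the two parts."""
    hist = [0] * 256
    for v in gray_values:
        hist[v] += 1
    total = len(gray_values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg, w_bg = 0.0, 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_bg += hist[t]          # pixels at or below t (background weight)
        if w_bg == 0:
            continue
        w_fg = total - w_bg      # remaining pixels (foreground weight)
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Pixels at or below the returned threshold form the "dark" part of a region, the rest the "bright" part.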
Permutation Entropy Fault Feature Analysis Based on Ensemble Empirical Mode Decomposition for Lateral Damper of High-speed Train
WU Zhi-dan, QIN Na and JIN Wei-dong
Computer Science. 2016, 43 (5): 304-307.  doi:10.11896/j.issn.1002-137X.2016.05.058
Abstract PDF(317KB) ( 209 )   
References | Related Articles | Metrics
To cope with the characteristics of condition-monitoring data from the lateral dampers of high-speed train bogies, this paper proposed a fault feature analysis approach combining ensemble empirical mode decomposition (EEMD) and permutation entropy, which analyzes signal complexity at different time scales. First, EEMD is used to decompose the lateral damper signals under seven states with different numbers of failed dampers. Then, permutation entropy is used to measure the complexity of each intrinsic mode function (IMF). Finally, a support vector machine is used to classify the fault conditions. The experimental results show that the recognition rate of multi-lateral-damper faults reaches up to 100%, demonstrating the effectiveness of permutation entropy for fault signal analysis of high-speed train lateral dampers.
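Permutation entropy itself is a standard complexity measure (Bandt-Pompe style): count the ordinal patterns of short windows of the signal and take the entropy of their distribution. A generic sketch, applied per IMF in the pipeline above:

```python
import math
from collections import Counter

def permutation_entropy(signal, order=3, delay=1, normalize=True):
    """Entropy of the distribution of ordinal patterns of length `order`
    in a 1-D signal; normalized by log(order!) so the result lies in [0, 1]."""
    n = len(signal) - (order - 1) * delay
    patterns = Counter()
    for i in range(n):
        window = signal[i:i + order * delay:delay]
        # Ordinal pattern: indices of the window sorted by value.
        patterns[tuple(sorted(range(order), key=window.__getitem__))] += 1
    probs = [c / n for c in patterns.values()]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(math.factorial(order)) if normalize else h
```

A monotone signal yields a single pattern and entropy 0, while irregular signals score higher, which is what makes the measure discriminative across IMFs of faulty versus healthy dampers.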
Fast Image Recognition Method Based on Locality-constrained Linear Coding
CHEN Guang-xi, GONG Zhen-ting, WEN Pei-zhi and REN Xia-li
Computer Science. 2016, 43 (5): 308-312.  doi:10.11896/j.issn.1002-137X.2016.05.059
Abstract PDF(1000KB) ( 265 )   
References | Related Articles | Metrics
Traditional image recognition methods such as ScSPM and LLC are based on the SIFT feature, ignoring the limitations of hand-crafted features, and recognition of a single image is relatively time-consuming. Considering these deficiencies, this paper proposed a fast image recognition method based on locality-constrained linear coding. The method first uses locality-constrained linear coding directly to extract local feature descriptors of the image, then uses linear spatial pyramid matching (LSPM) to aggregate the feature descriptors, and finally inputs the results into a linear support vector machine (LSVM) for training and testing. Experimental results on three common image data sets show that the method achieves good recognition accuracy while greatly reducing the time needed to recognize a single image, which verifies its effectiveness for image recognition.
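The core LLC coding step has a closed-form solution: reconstruct a descriptor from its k nearest codebook atoms under a sum-to-one constraint. A self-contained sketch (a tiny Gauss-Jordan solver replaces a linear-algebra library; the regularizer `lam` is an assumed default, not a value from the paper):

```python
def _solve(A, b):
    """Gauss-Jordan elimination for the small k-by-k LLC system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def llc_code(x, codebook, k=2, lam=1e-4):
    """Locality-constrained linear coding: code descriptor x over its k
    nearest codebook atoms, returning a sparse code over the full codebook."""
    dists = [sum((xi - bi) ** 2 for xi, bi in zip(x, b)) for b in codebook]
    nearest = sorted(range(len(codebook)), key=dists.__getitem__)[:k]
    # Shifted atoms z_j = b_j - x; Gram matrix C = Z Z^T + lam * I.
    Z = [[bi - xi for bi, xi in zip(codebook[j], x)] for j in nearest]
    C = [[sum(a * b for a, b in zip(Z[i], Z[j])) + (lam if i == j else 0.0)
          for j in range(k)] for i in range(k)]
    w = _solve(C, [1.0] * k)
    s = sum(w)
    code = [0.0] * len(codebook)
    for j, wj in zip(nearest, w):
        code[j] = wj / s  # enforce the sum-to-one constraint
    return code
```

The codes of all local descriptors are then max-pooled over spatial pyramid cells before the linear SVM.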
Forest Fire Image Recognition Algorithm Based on Wireless Multimedia Sensor Network of Hash Coding
CHANG Xiao-min, ZHAO Juan-juan, GE Lei, QIANG Yan and SHI Yao-hua
Computer Science. 2016, 43 (5): 313-317.  doi:10.11896/j.issn.1002-137X.2016.05.060
Abstract PDF(913KB) ( 241 )   
References | Related Articles | Metrics
Aiming at existing problems in applying wireless multimedia sensor networks (WMSN) to forest fire monitoring, we proposed a forest fire recognition algorithm based on image hash coding. First, we built a database of forest fire images and a database of corresponding hash codes: by extracting a series of static and dynamic characteristics of the flame images, we generated feature vectors and computed the corresponding hash codes with a hash function. Second, we computed the hash code of the image to be identified, matched it against the database by calculating the Hamming distance between the query and each database image, and took the most similar database image to conclude the presence of fire or non-fire. The experimental results show that the flame recognition accuracy of this algorithm is 94.12%, higher than that of flame recognition algorithms based on SVM, BP neural networks and sparse representation. Besides, the algorithm reduces the energy consumption of image transmission in the WMSN and improves the utilization of network bandwidth.
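The matching step is a nearest-neighbor search in Hamming space. A minimal sketch; the binary-string hash format and the rejection threshold `max_distance` are illustrative assumptions, not values from the paper:

```python
def hamming(h1, h2):
    """Hamming distance between two equal-length binary hash strings."""
    return sum(a != b for a, b in zip(h1, h2))

def match_fire(query_hash, database, max_distance=10):
    """database: list of (label, hash) pairs, label in {'fire', 'non-fire'}.
    Return the label of the closest entry when its Hamming distance is
    within `max_distance`, else None (no confident match)."""
    best = min(database, key=lambda entry: hamming(query_hash, entry[1]))
    return best[0] if hamming(query_hash, best[1]) <= max_distance else None
```

Because only short hash codes need to travel between nodes, this is also where the WMSN bandwidth saving comes from.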
Regional Covariance Based Image Superpixels Generation
ZHANG Xu-dong, LV Yan-yan, MIAO Yong-wei and YANG Dong-yong
Computer Science. 2016, 43 (5): 318-323.  doi:10.11896/j.issn.1002-137X.2016.05.061
Abstract PDF(1492KB) ( 345 )   
References | Related Articles | Metrics
Image segmentation is an important issue in image analysis and understanding: a given image is partitioned into non-overlapping regions, each with the same or similar intrinsic properties, which are consistent within a region and differ between regions. In many image processing applications, because of the large number of image pixels, pixel-based algorithms tend to be time-consuming and memory-demanding; superpixel-based schemes are an efficient way to alleviate these storage and time costs. Based on an analysis of regional covariance, this paper presented a novel similarity measure for image regions and a robust scheme for generating image superpixels. First, the input image is divided into small regions using the K-means algorithm, and the intrinsic properties of each region are described by a high-dimensional regional covariance matrix. Then, the similarity of different regions is determined by the regional covariance distance. Finally, combining a graph-based scheme with K-means clustering, the final image superpixels are generated. Compared with other superpixel generation approaches, the proposed method is efficient and reduces unnecessary over-segmentation. It also preserves image edge information and reduces under-segmentation errors, generating compact image superpixels. The superpixel generation scheme can be applied to stylized rendering, yielding an artistic oil-painting effect.
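The covariance descriptor of a region can be computed directly from its per-pixel feature vectors. A minimal sketch; the choice of features (position, intensity, gradient magnitudes) is a common convention and an assumption here, and the distance between two such matrices (typically based on generalized eigenvalues) is beyond this fragment:

```python
def region_covariance(features):
    """features: list of per-pixel feature vectors for one region,
    e.g. (x, y, intensity, |Ix|, |Iy|). Returns the d-by-d sample
    covariance matrix that serves as the region descriptor."""
    n, d = len(features), len(features[0])
    mean = [sum(f[j] for f in features) / n for j in range(d)]
    cov = [[sum((f[i] - mean[i]) * (f[j] - mean[j]) for f in features) / (n - 1)
            for j in range(d)] for i in range(d)]
    return cov
```

The descriptor is compact (d-by-d regardless of region size) and symmetric, which is what makes region-to-region comparison cheap during the merging stage.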