Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 40 Issue 2, 2013
Research on Image Integral Algorithm Optimization Based on OpenCL
Computer Science. 2013, 40 (2): 1-7. 
Abstract
The image integral algorithm is widely used in fast feature detection, and accelerating it on GPUs has significant practical value. However, the complexity of GPU hardware architectures and the differences between GPUs make it difficult to optimize the algorithm while achieving performance portability across GPU platforms. This paper analyzes the differences between the underlying hardware architectures of GPUs and studies how different optimization methods affect performance on different GPU platforms, considering off-chip memory bandwidth utilization, computing resource utilization, data locality and other aspects. On this basis, we implemented the image integral algorithm in OpenCL. Experimental results show that the optimized algorithm achieves speedups of 11.26x and 12.38x on AMD and NVIDIA GPUs respectively, and that the optimized kernel outperforms the CUDA version in the NVIDIA NPP library by 55.01% and 65.17%, which verifies the effectiveness and cross-platform ability of the optimization methods.
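As an illustration of the data-parallel structure such kernels exploit, here is a minimal NumPy sketch (not the paper's OpenCL code; the function names are ours) of the separable two-pass prefix-sum formulation of the integral image, in which every row scan of pass one and every column scan of pass two is an independent task:

```python
import numpy as np

def integral_image(img: np.ndarray) -> np.ndarray:
    """Integral image via two separable prefix-sum passes.

    On a GPU (OpenCL/CUDA) each row of the first pass and each column
    of the second pass is an independent scan, which is what makes the
    algorithm parallelize well."""
    ii = np.cumsum(img, axis=1, dtype=np.int64)  # pass 1: scan rows
    ii = np.cumsum(ii, axis=0)                   # pass 2: scan columns
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1+1, c0:c1+1] from four integral-image lookups."""
    total = ii[r1, c1]
    if r0 > 0: total -= ii[r0 - 1, c1]
    if c0 > 0: total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0: total += ii[r0 - 1, c0 - 1]
    return total

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
ii = integral_image(img)
assert box_sum(ii, 1, 1, 2, 2) == img[1:3, 1:3].sum()
```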
Parallel Implementation and Optimization of Two Basic Geo-Spatial-Analysis Algorithms Based on OpenMP
Computer Science. 2013, 40 (2): 8-11. 
Abstract
This paper studies methods for two basic geo-spatial-analysis algorithms: computing the intersection points of large numbers of segments, and point-polygon overlay. We implemented both algorithms in a shared-memory multi-core environment based on OpenMP. Analyzing why a linear speedup was not obtained, we found the causes to be unbalanced load and a serial memory-management method. We therefore sorted the input data and adopted OpenMP's dynamic scheduling, and we adopted and improved an existing parallel memory-allocation technique to manage memory for the parallel algorithms. With these two improvements, tests show that the improved method reaches a nearly linear speedup, with per-core efficiency above 80% on a four-core node.
Optimization Model for Performance of Molecular Dynamics Simulation Based on Modified Cell-linked List Method
Computer Science. 2013, 40 (2): 12-15. 
Abstract
In the modified cell-linked list method, reducing the cell size lowers the communication load and the cost of inter-particle distance calculations, but requires more neighboring cells. The multi-cell molecular dynamics (MD) method is a parallel method widely used for MD simulation. This paper applies the key idea of the modified cell-linked list method to the multi-cell MD method, derives a performance evaluation model for MD simulation, and on that basis proposes an optimization model for accelerating MD simulation. Experimental results show that cell sizes optimized with this model improve the performance of molecular dynamics simulation.
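To make the cell-size trade-off concrete, below is a small Python sketch of a cell-linked list neighbor search whose cell edge can be set below the cutoff radius; all names are ours, and periodic boundaries and the parallel multi-cell decomposition are omitted:

```python
import numpy as np
from collections import defaultdict
from itertools import product

def neighbor_pairs(pos, rc, cell):
    """All particle pairs closer than rc, found via a cell-linked list.

    Particles are binned into cubic cells of edge `cell`; only cells
    within ceil(rc/cell) of each other are compared. With cell == rc
    the stencil is 3x3x3; the modified method uses cell < rc, which
    trims wasted distance checks at the cost of a larger stencil,
    exactly the trade-off the paper's performance model optimizes."""
    cells = defaultdict(list)
    for i, p in enumerate(pos):
        cells[tuple((p // cell).astype(int))].append(i)
    reach = int(np.ceil(rc / cell))       # cells spanned by the cutoff
    pairs = []
    for c, members in cells.items():
        for d in product(range(-reach, reach + 1), repeat=3):
            for i in members:
                for j in cells.get(tuple(np.add(c, d)), ()):
                    if i < j and np.linalg.norm(pos[i] - pos[j]) < rc:
                        pairs.append((i, j))
    return pairs

pos = np.random.rand(100, 3) * 10.0
print(len(neighbor_pairs(pos, rc=1.5, cell=0.75)))
```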
Parallel Algorithm for Computing Convex Hulls in Multi-processor Architecture
Computer Science. 2013, 40 (2): 16-19. 
Abstract
This paper improves the Z3-2 algorithm proposed by ZHOU Pei-de and proposes a parallel algorithm for computing the convex hull of a planar point set on a multi-processor architecture. The number and duration of calculations are reduced by digitizing the positional relationship between a point and a directed line segment in the plane with "Yan's distance". Further, the two most time-consuming processes of the algorithm are decomposed iteratively within O(1) complexity: a task is decomposed into several sub-tasks when its scale exceeds a given threshold, and any sub-task is decomposed again if its scale is still above the threshold. All sub-tasks in the parallel task group are executed in parallel to take full advantage of the computational resources of the multi-processor. The correctness of the algorithm is discussed, and experimental results show that it is efficient and stable.
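For orientation, here is a serial quickhull-style sketch showing a signed-area test (one common way to digitize the point-versus-directed-segment relation; "Yan's distance" itself is not reproduced) and the recursion whose calls the paper forks into parallel sub-tasks once they exceed a threshold:

```python
def side(p, a, b):
    # signed doubled triangle area: > 0 when p lies left of the
    # directed segment a -> b, < 0 right, 0 collinear
    return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0])

def hull_side(pts, a, b):
    # one quickhull sub-task; the paper forks the two recursive calls
    # into parallel sub-tasks whenever the sub-problem exceeds a threshold
    left = [p for p in pts if side(p, a, b) > 0]
    if not left:
        return []
    far = max(left, key=lambda p: side(p, a, b))
    return hull_side(left, a, far) + [far] + hull_side(left, far, b)

def convex_hull(pts):
    pts = sorted(set(pts))
    a, b = pts[0], pts[-1]
    return [a] + hull_side(pts, a, b) + [b] + hull_side(pts, b, a)

print(convex_hull([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2), (1, 3)]))
```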
Complete Free Resource Management Research on Dynamic Partial Reconfigurable System
Computer Science. 2013, 40 (2): 20-23. 
Abstract
A reconfigurable computing system offers approximately the flexibility of a traditional processor and the speed of an ASIC. A dynamic partially reconfigurable system performs computation and reconfiguration at the same time, and an efficient free-resource management scheme is very important for achieving high performance. This paper introduces an efficient algorithm that finds a series of maximal free rectangles (MFR) based on a one-way stack, using different M values inside and outside the stack to find all maximal free rectangles. Simulation experiments show that the algorithm improves the performance of searching for complete free resources.
Low Overhead Large Scale Location-based Information Sharing System
Computer Science. 2013, 40 (2): 24-29. 
Abstract
Frequent location updates from mobile clients make the server a bottleneck, with severe communication and processing overhead, in large-scale location-based information-sharing systems. We observe that implanted information is not distributed uniformly in geography, which leads to blank zones. In existing systems, a client must update its location on the server periodically whether or not any information needs to be shared, which brings extra communication overhead. We present a Grid-based Indexing Mechanism (GIM) that provides on-demand information requests for mobile clients. In this mechanism, an information matrix is built on the server and synchronized to clients; each element indicates whether information is implanted in the associated zone. A client only needs to communicate with the server when the matrix indicates that information sharing is required. Experimental results show that the scheme can eliminate about 70% of communication overhead and works especially well for applications whose information is non-uniformly distributed.
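A minimal sketch of the client-side check against the synchronized information matrix; the class, grid origin and cell size are illustrative, not the paper's:

```python
import numpy as np

class GridIndex:
    """Client-side copy of the server's information matrix (a sketch).

    Each cell of the boolean matrix says whether any shared information
    is implanted in that geographic zone; the client contacts the
    server only when its current cell is marked, so blank zones cost
    no communication at all."""
    def __init__(self, matrix, origin, cell_size):
        self.matrix = matrix            # synchronized from the server
        self.origin = origin            # (x0, y0) of the grid
        self.cell = cell_size

    def needs_update(self, x, y):
        i = int((x - self.origin[0]) // self.cell)
        j = int((y - self.origin[1]) // self.cell)
        if not (0 <= i < self.matrix.shape[0] and 0 <= j < self.matrix.shape[1]):
            return False
        return bool(self.matrix[i, j])

idx = GridIndex(np.zeros((100, 100), dtype=bool), origin=(0.0, 0.0), cell_size=50.0)
idx.matrix[3, 7] = True                  # information implanted in one zone
assert idx.needs_update(175.0, 360.0)    # falls in cell (3, 7): contact server
assert not idx.needs_update(10.0, 10.0)  # blank zone: stay silent
```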
k-Nearest Neighbor Algorithm in Dynamic Road Network Based on Routing Mechanism
Computer Science. 2013, 40 (2): 30-34. 
Abstract
Aiming at the problem of geographic information query, a new k-nearest-neighbor algorithm for dynamic road networks is proposed based on a routing mechanism. Following the idea of trading space for time, historical query results are saved in routing tables, and querying these tables replaces the traditional method; the routing tables are updated to adapt to time-varying road conditions. With the routing table as the kernel, the filtering and refining procedures of the kNN algorithm are improved. By preprocessing the dynamic road network with routing tables, the number of candidate points in the k-NN computation is reduced, and both the query range and the search efficiency are improved.
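As a sketch of the "space for time" idea, the following hypothetical Python class memoizes Dijkstra results as a routing table and invalidates the table when edge weights change; the paper's specific filtering and refining improvements are not reproduced:

```python
import heapq

class RoutingKNN:
    """k-NN on a road network with a routing-table cache (a sketch).

    Shortest-path distances found by Dijkstra are memoized per source
    node; when a road's travel time changes, the cached routes are
    invalidated so stale results are never reused."""
    def __init__(self, graph, poi_nodes):
        self.graph = graph          # node -> {neighbor: travel_time}
        self.pois = set(poi_nodes)
        self.routes = {}            # routing table: source -> distance map

    def _dists(self, src):
        if src not in self.routes:  # cache miss: run Dijkstra once
            dist, heap = {src: 0.0}, [(0.0, src)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, float("inf")):
                    continue
                for v, w in self.graph[u].items():
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v] = nd
                        heapq.heappush(heap, (nd, v))
            self.routes[src] = dist
        return self.routes[src]

    def update_edge(self, u, v, w):
        self.graph[u][v] = self.graph[v][u] = w
        self.routes.clear()         # coarse invalidation of stale routes

    def knn(self, src, k):
        d = self._dists(src)
        return heapq.nsmallest(k, ((d[p], p) for p in self.pois if p in d))

g = {0: {1: 2.0}, 1: {0: 2.0, 2: 1.0}, 2: {1: 1.0}}
knn = RoutingKNN(g, poi_nodes=[2])
print(knn.knn(0, k=1))   # [(3.0, 2)]
```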
Delivering Social Activity Formation Service over Opportunistic Networks
Computer Science. 2013, 40 (2): 35-39. 
Abstract
The prevalence of devices (especially smartphones) with short-range communication modules propels the development of opportunistic networks and their applications. A social activity formation (SAF) application is proposed. To facilitate data dissemination in opportunistic networks, broker-based protocols are often used; however, existing protocols assume that brokers are willing to contribute, ignoring the selfish nature of humans. Considering individual willingness, a user may decline a forwarding task, which leads to packet loss. To this end, STBS (Social Tie based Broker Selection), an algorithm based on social ties and popularity, is proposed. Experiments on the Reality Mining smartphone dataset provided by MIT show that STBS performs better and provides better support for social activities.
Property Weight Based Co-reference Resolution for Linked Data
Computer Science. 2013, 40 (2): 40-43. 
Abstract
The construction and development of the Web of data are seriously hindered by missing links among semantic datasets from heterogeneous data sources. Building co-reference between instances in semantic datasets helps enrich the links among datasets, which improves reasoning and querying across them. Property weights and the similarities of property values play an important role in co-reference resolution based on similarity analysis. This paper proposes a new model for obtaining property weights from the statistical information of datasets and proves its rationality theoretically; the advantages of this model over the traditional method are also analyzed. Based on the proposed method, a system for building co-reference among semantic datasets was implemented, and its performance was verified on several open datasets.
Application-specific Network-on-Chip Topology Optimization Based on Two-level Genetic Algorithm
Computer Science. 2013, 40 (2): 44-48. 
Abstract
Large-scale systems-on-chip face several communication problems, such as performance, synchronization and power dissipation. Network-on-chip provides the most promising solution to the communication challenges of complicated systems-on-chip. Because network topology optimization is an NP-hard problem, and exploiting the fact that most systems-on-chip are application-specific, this article proposes a two-level genetic algorithm to find approximate solutions to the topology optimization problem. Experiments show that, compared with an existing three-level genetic algorithm, the proposed method improves the minimal power consumption by about 1.1% on average and greatly reduces simulation time, by about 97%.
Study on Topology with Non-uniform Hierarchical Clustering for Wireless Sensor Networks
Computer Science. 2013, 40 (2): 49-52. 
Abstract
The topology of a network affects the load balancing and service lifetime of sensor nodes, and clustering is an effective pattern of topology management for wireless sensor networks. Inspired by the distribution characteristics of vascular paths, a non-uniform hierarchical clustering algorithm is presented. We first study the structural characteristics of the vascular network, then establish the mathematical model and network topology. Nodes with different pressure values in the wireless sensor network are hierarchically marked and statically clustered with non-uniform probability based on improved particle swarm optimization, establishing a non-uniform hierarchical clustering topology with clusters of different scale and number in different hierarchy areas. Simulation shows that the algorithm optimizes network clustering and balances the nodes' energy consumption; it also lengthens service lifetime and avoids energy-consumption hot spots in the network.
Study on Label Propagation Based Community Detection Algorithm for Social Semantic Network
Computer Science. 2013, 40 (2): 53-57. 
Abstract
According to the characteristics of online social networks and the shortcomings of existing community detection algorithms, this paper proposes ISLPA (Improved Semantic Label Propagation Algorithm), an improved community detection algorithm based on semantic technology that is suitable for discovering and identifying community structure in large-scale online social networks. ISLPA improves the SemTagP algorithm by combining semantic and social tagging technology, exploiting both the semantic information and the topological features of online social networks for community discovery. ISLPA does not require prior information such as the number or size of communities, and it can automatically label the detected communities using tags. The algorithm is efficient, with nearly linear time complexity. Experiments show that ISLPA can effectively discover and identify community structure in real online social networks.
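For context, here is plain label propagation, the nearly-linear-time base algorithm that LPA-style detectors such as ISLPA build on; ISLPA additionally weights the neighbor vote with semantic tag information, which this sketch omits:

```python
import random
from collections import Counter

def label_propagation(adj, max_iter=100, seed=0):
    """Plain label propagation (a sketch of the LPA family's core).

    Every node starts in its own community; in random order each node
    adopts the label most common among its neighbors, repeating until
    labels stabilize."""
    rng = random.Random(seed)
    labels = {v: v for v in adj}
    nodes = list(adj)
    for _ in range(max_iter):
        rng.shuffle(nodes)
        changed = False
        for v in nodes:
            if not adj[v]:
                continue
            counts = Counter(labels[u] for u in adj[v])
            best, n = counts.most_common(1)[0]
            if counts[labels[v]] < n:
                labels[v] = best
                changed = True
        if not changed:
            break
    return labels

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4], 4: [3]}
print(label_propagation(adj))   # two communities: {0,1,2} and {3,4}
```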
Frequent Itemsets Mining Algorithm Based on Distributed Data Stream of Sensor Network
Computer Science. 2013, 40 (2): 58-60. 
Abstract
This paper studies the problem of mining frequent itemsets over data streams in wireless sensor networks. Since centralized static mining methods for data-stream frequent itemsets cannot be used directly in sensor networks, a frequent itemset mining algorithm, FIMDS, based on the distributed data streams of a sensor network is proposed. Based on an FP-tree, the algorithm quickly mines the local frequent itemsets of each sensor node's data stream; the local frequent itemsets are then uploaded and combined layer by layer along the routing path, and finally the local frequent itemsets collected at the sink node yield the global frequent itemsets via an efficient top-down pruning strategy. Experimental results show that the algorithm effectively reduces candidate itemsets and the communication traffic in the wireless sensor network, so it performs well in both time and space.
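The sketch below shows the overall shape of such a pipeline under simplifying assumptions: brute-force subset counting stands in for the FP-tree, the pruning strategy is omitted, and all names are ours. Note that combining only locally frequent itemsets is an approximation, since an itemset frequent globally but infrequent at every node can be missed.

```python
from collections import Counter
from itertools import combinations

def local_frequent(stream, min_sup):
    # one sensor node: frequent itemsets of its own stream window
    # (brute-force counting stands in for the paper's FP-tree)
    counts = Counter()
    for t in stream:
        for k in range(1, len(t) + 1):
            for s in combinations(sorted(t), k):
                counts[s] += 1
    return Counter({s: c for s, c in counts.items() if c >= min_sup})

def merge_up(children):
    # intermediate routing node: combine the children's local results
    merged = Counter()
    for c in children:
        merged.update(c)
    return merged

n1 = local_frequent([{"a", "b"}, {"a", "b", "c"}, {"a"}], min_sup=2)
n2 = local_frequent([{"a", "b"}, {"b", "c"}, {"b"}], min_sup=2)
sink = merge_up([n1, n2])
global_min = 4
print([s for s, c in sink.items() if c >= global_min])  # globally frequent
```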
Dynamic Scheduling Algorithms for Streaming Media Based on Hybrid Content Delivery Network
Computer Science. 2013, 40 (2): 61-64. 
Abstract
HyCDN, a hybrid content delivery network for streaming media that combines the complementary advantages of CDN and P2P, is presented. The CVCR4P2P (Comprehensive Value Cache Replacement Algorithm for P2P) algorithm is proposed for peers inside a domain; it considers the byte benefit of prefix data, transmission cost, and the access rate of the streaming media. A second algorithm, DSA4ProxyC (Dynamic Scheduling Algorithm for Proxy Caching), which combines proxy caching and server scheduling strategies for proxies between domains, is also presented. It allocates cache based on the current batching intervals that have non-zero requests, updated periodically according to the popularity of each streaming media object, obeying the principle that the data cached for each object at the proxy server is proportional to its popularity. Theoretical analysis and simulation results show that the hybrid dynamic scheduling effectively reduces server and network bandwidth usage and adapts well to varying request arrival rates.
Track-to-track Fusion Algorithm Based on Track Quality with Multiple Model
Computer Science. 2013, 40 (2): 65-70. 
Abstract
How to determine optimal weighting factors is a problem worthy of further study in weighted track-to-track fusion. The concept of track quality with multiple models (TQMM) is put forward, and a weighted track-to-track fusion algorithm with feedback is presented to solve the optimal allocation of weights when multiple sensors track the same target. A feedback mechanism is introduced into the fusion system and the weights are determined using TQMM, so that the fusion system can update the weights accurately and track the target effectively in real time. Experimental results show that the algorithm tracks better than existing weighted fusion algorithms, especially when the measurement accuracies of the sensors differ greatly. As the number of sensors increases, the tracking precision of the fusion system improves gradually; however, beyond a certain number of sensors, fusion accuracy no longer improves significantly.
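A toy sketch of weighted track-to-track fusion with a feedback loop; the quality scores and the update rule below are illustrative stand-ins, not the paper's TQMM definition:

```python
import numpy as np

def fuse_tracks(estimates, qualities):
    """Weighted fusion: normalized quality scores weight each sensor's
    state estimate, so better tracks dominate the fused state."""
    w = np.asarray(qualities, dtype=float)
    w /= w.sum()
    return sum(wi * np.asarray(x, dtype=float) for wi, x in zip(w, estimates))

def update_quality(q, innovation, alpha=0.9):
    # feedback: shrink a track's quality when the distance between its
    # estimate and the fused state (its innovation) grows
    return alpha * q + (1 - alpha) / (1.0 + innovation)

tracks = [np.array([10.0, 1.0]), np.array([10.4, 1.1]), np.array([12.0, 0.5])]
q = [0.9, 0.8, 0.3]
fused = fuse_tracks(tracks, q)
q = [update_quality(qi, np.linalg.norm(t - fused)) for qi, t in zip(q, tracks)]
print(fused, q)
```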
Efficient and Provably Secure IBE Scheme Suitable for Multi-hop Cognitive Radio Networks
Computer Science. 2013, 40 (2): 71-77. 
Abstract
Aiming at cognitive wireless network security issues and combining the characteristics of such networks, an identity-based security solution named the Yu-IBE scheme is proposed. The solution authenticates cognitive nodes without an online trusted third party; its function is similar to a PKI, but its certification chain is much simpler. The solution realizes system key distribution, regular key replacement, and intra-domain and cross-domain communication with less infrastructure. In addition, this paper compares the overall Yu-IBE security solution with two existing well-known data integration schemes. Simulation results show that the Yu-IBE scheme is more stable and keeps the cognitive correctness rate at a relatively high level.
Access Control Method Based on Fuzzy ECA Rules for Pervasive Computing Environments
Computer Science. 2013, 40 (2): 78-83. 
Abstract
Context information is one of the key factors in pervasive access control systems and exerts a decisive influence on authorization and access control over subjects. In pervasive computing environments, the permissions of subjects, the threshold intensity of resources, and the security policies of the system should adjust automatically as context information changes, which current access control models do not consider thoroughly. To describe the active influence of fuzzy context information on pervasive access control (subject permissions, threshold intensity, and system security policies), a fuzzy ECA rule scheme was designed based on traditional ECA rules and interval fuzzy set theory, and an active access control method based on this scheme was proposed. With the new method, pervasive access control can self-adjust in pervasive environments.
Research and Design of Cloud Computing Simulation System Based on Phase Space
Computer Science. 2013, 40 (2): 84-86. 
Abstract
A design method for a cloud computing simulation system based on thermodynamic phase-space theory is proposed and realized. The method constructs a phase-space analysis model of cloud computing by projecting the information of the cluster nodes onto the phase space, turning their parameter variations into the movement of the projected points. A macroscopic thermodynamic parameter is then introduced to reflect the overall status of the cloud computing cluster. The method provides a simulation test platform and a measurement standard for research on core cloud scheduling algorithms and for operational testing of cloud data centers. Test results show that the rebuilt cloud computing simulation system makes it easy to build experimental environments and produces intuitive simulation results, directly reflecting the status of the cluster and the performance of the scheduling algorithm.
Sleep Scheme Based on Contact Time in DTN
Computer Science. 2013, 40 (2): 87-90. 
Abstract
Considering that energy supplies are limited in DTNs, a sleep scheme based on contact time (SSCT) is proposed. Nodes adaptively adjust their wait time and sleep time according to previous contact times, reducing the probability of missing communication opportunities while asleep. Simulation results indicate that the Epidemic algorithm with SSCT reduces network overhead and energy consumption while maintaining a high message delivery ratio, and that SSCT performs better with multi-copy routing algorithms than with the First Contact algorithm.
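A minimal sketch of the adaptive idea: derive the next wait and sleep intervals from the recent contact history so that dense contacts shorten sleep and sparse contacts lengthen it. The smoothing constants and function names are ours, not the paper's.

```python
def next_schedule(contact_history, base_sleep=60.0, base_wait=5.0):
    """Adaptive wait/sleep intervals from past contact times (a sketch).

    Short, frequent recent contacts shrink the sleep interval so fewer
    communication opportunities are missed; long gaps let the node
    sleep longer to save energy."""
    if not contact_history:
        return base_wait, base_sleep
    gaps = [b - a for a, b in zip(contact_history, contact_history[1:])]
    mean_gap = sum(gaps) / len(gaps) if gaps else base_sleep
    sleep = min(base_sleep, 0.5 * mean_gap)   # never sleep past half a gap
    wait = max(base_wait, 0.1 * mean_gap)     # linger when contacts are sparse
    return wait, sleep

print(next_schedule([0.0, 30.0, 70.0, 95.0]))   # dense contacts: short sleep
```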
Cluster-based Multipath Routing Protocol for Wireless Sensor Networks
Computer Science. 2013, 40 (2): 91-94. 
Abstract
A cluster-based multipath routing protocol (CBMRP) is proposed for data gathering in event-driven sensor networks. First, nodes in the event area compete to become cluster heads according to the distribution of their neighbors and their residual energy. Second, CBMRP uses an ant colony algorithm to search for multiple paths and presents a load-balancing function to distribute traffic along the discovered paths. Simulation results show that CBMRP balances the network load, reduces node energy consumption and prolongs the network lifetime.
Bad Information Detection for Internet of Things Cloud Storage Data Based on Boundary-incremental SVM Algorithm
Computer Science. 2013, 40 (2): 95-97. 
Abstract
Camouflaged bad information hidden in Internet of Things cloud storage data makes information preprocessing difficult, deep semantic understanding inaccurate, and samples imbalanced. To solve these problems, this paper puts forward a bad-information detection method for IoT cloud storage data based on a boundary-incremental SVM algorithm. The algorithm first performs an initial clustering analysis of the sample space, based on the mean and standard deviation of the cloud storage data, to train a classification. It then traverses the Euclidean distances between all sample classes to obtain the distance matrix between sub-cluster centers and the adjacent boundary sub-cluster regions. Using the camouflage and selection principles of cloud storage information, it screens for authenticity, taking as indicators the probability that bad information occurs within false information, a data-safety threshold, and a similarity threshold against the template vector set of camouflaged bad information, thereby identifying falsified cloud storage information. Finally, incremental learning yields the final optimal separating hyperplane for each sample class and outputs all detected classes of camouflaged bad information. System tests show that the algorithm can quickly and effectively detect camouflaged information in IoT cloud storage data.
Data Privacy Protection Mechanism for Cloud Storage Based on Data Partition and Classification
Computer Science. 2013, 40 (2): 98-102. 
Abstract
The separation of data management and ownership in cloud storage systems makes data security and privacy protection difficult, and traditional cloud storage privacy protection mechanisms based on simple encryption bring large extra overhead during actual data manipulation. To achieve a low-overhead data privacy protection mechanism for cloud storage systems, this paper proposes a novel mechanism based on data partition and classification. The mechanism reasonably partitions the original data into two blocks: a small one deployed locally and a large one deployed remotely. Then, according to the different security requirements of the data, data dyeing and data encryption technologies are adopted jointly to protect cloud data privacy while improving flexibility and reducing overhead.
Survey on Feedback Correlation Based Dynamic Trust Model for P2P Systems
Computer Science. 2013, 40 (2): 103-107. 
Abstract
Users' demands for the security of peer-to-peer networks stimulate the development of trust models. By analyzing previous trust and reputation systems, we propose CoDyTrust, a novel feedback-correlation-based dynamic trust model for P2P systems. The model adopts fake-trust filtering and trust aggregation mechanisms based on time frames, and introduces several factors into the calculation of the global trust value, including a correlation coefficient, a trust forgetting factor, abused trust value and recommended trust; these factors are adjusted dynamically through a feedback control mechanism. CoDyTrust reflects the trust status of a node in the system while also detecting malicious behavior. Experimental results indicate that CoDyTrust accurately detects malicious peers and can defend against strategic behavior alteration, lying and collusion attacks.
Simulation System Based on Multi-scale Fusion of City Traffic Network Optimization
Computer Science. 2013, 40 (2): 108-111. 
Abstract
City traffic simulation has been studied extensively, but transportation systems integrate spatial scales from the micro to the macro, and the complexity of a traffic system means that no single-scale integration can simulate traffic phenomena well or objectively. On this basis, a simulation system based on multi-scale fusion is proposed, using corresponding algorithms to integrate the macro, meso and micro scales, and the system design is given. Finally, experiments were conducted on the city traffic network optimization simulation system based on multi-scale fusion, demonstrating the usefulness and reliability of the system.
Secret Gray-level Image Sharing Scheme Based on Recovery Function and Error Diffusion
Computer Science. 2013, 40 (2): 112-116. 
Abstract
A secret gray-level image sharing scheme based on an (n, n) threshold divides a secret gray-level image into n meaningful gray-level shadow images using a recovery function and an error diffusion algorithm. To strengthen the steganographic effect for security protection, the scheme selects n gray-level images with meaningful content as shadows, and the error diffusion algorithm gives the generated shadow images good visual quality. The shadow images have no pixel expansion: each is the same size as the secret image. Furthermore, reconstruction is fast and lossless. Experimental results and theoretical analysis demonstrate that the proposed scheme offers a highly secure and effective mechanism for secret image sharing.
Chameleon Signature Scheme Based on Lattice
Computer Science. 2013, 40 (2): 117-119. 
Abstract
Compared with traditional digital signatures, chameleon signatures are not only non-repudiable but also non-interactive and non-transferable. However, chameleon signature schemes constructed on traditional mathematical problems cannot resist attacks by quantum computers. To design a chameleon signature that is secure in a quantum-computing environment, a lattice-based chameleon signature scheme is proposed, based on the hardness of the average-case SIS (Small Integer Solution) and ISIS (Inhomogeneous Small Integer Solution) problems. Furthermore, we prove that the scheme is unforgeable under adaptive chosen-message attack in the random oracle model.
Research of Chinese Writeprint Recognition Using Semi-random Feature Sampling Algorithm
Computer Science. 2013, 40 (2): 120-123. 
Abstract
Character N-grams can effectively capture an individual author's stylistic information in text. To deal with the high sparsity and high redundancy of the feature space, an ensemble classification algorithm based on semi-random feature sampling is proposed. First, the whole feature space is divided into several per-author feature sets by a divergence rule; each of these is then divided into equally sized subspaces by semi-random selection, and a base classifier is trained on each random subspace. Finally, the base classifiers are combined into an ensemble by majority voting. The algorithm was evaluated on a real-life dataset, and the results show a considerable improvement in accuracy and robustness over the benchmark techniques for Chinese writeprint identification (the random subspace method, bagging and support vector machines).
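A compact scikit-learn sketch of the random-subspace-plus-majority-vote skeleton; the per-author divergence split that makes the sampling "semi-random" is omitted, and the data, subspace size and learner count are illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=200,
                           n_informative=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# each base learner sees one random, equally sized feature subspace
# (the paper first splits features per author by a divergence rule)
n_learners, k = 15, 40
subspaces = [rng.choice(X.shape[1], size=k, replace=False)
             for _ in range(n_learners)]
models = [LinearSVC(dual=False).fit(Xtr[:, s], ytr) for s in subspaces]

# majority vote over the ensemble
votes = np.stack([m.predict(Xte[:, s]) for m, s in zip(models, subspaces)])
pred = (votes.mean(axis=0) > 0.5).astype(int)
print("ensemble accuracy:", (pred == yte).mean())
```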
Novel Identity Based Broadcast Signcryption Scheme
Computer Science. 2013, 40 (2): 124-128. 
Abstract
A novel identity-based broadcast signcryption scheme is proposed to suit the diversity and changeability of current data transmission environments and to guarantee the confidentiality and authenticity of transferred information, integrating broadcast encryption, identity-based cryptography and signcryption. Using arithmetic operations such as hashing, ring addition and bilinear pairing, the ciphertext size equals the number of receivers plus one, and the sizes of the public and private keys stay constant. At the same time, signcryption and unsigncryption require no bilinear pairing operations, which carry high computational and storage cost. A detailed security proof shows that the scheme is IND-CCA2 secure under the weak BCDH problem and existentially unforgeable under the EF-ACMA notion of PSG proposed by Paterson. Furthermore, the scheme is efficient and practical in performance.
Research on Reputation Model Based on Interest Group in P2P File Sharing System
Computer Science. 2013, 40 (2): 129-132. 
Abstract
To decrease the risk of exchange, increase the rate of successful exchange, and keep a P2P file sharing system developing sustainably, a reputation model based on interest groups is proposed, together with the corresponding peer reputation calculation method, file exchange process and access control strategy. The reputation model is derived from the idea of human communities and avoids the shortcomings of both global and partial trust models. All peers are grouped by the type of their resources, so file exchange is more efficient and the network is more secure. Simulation experiments and theoretical analysis prove the model's correctness and feasibility; compared with previous trust models, it has higher maturity and security.
Real-time Network Security Assessment Based on Dynamic Attack Graph
Computer Science. 2013, 40 (2): 133-138. 
Abstract
To evaluate network security, a real-time security assessment method based on dynamic attack graphs is presented. First, security-related information such as network vulnerabilities, topology information, asset values, IDS alerts and firewall rules is fused into an attack graph. The network security situation is then evaluated and the results are shown through visualization; on this basis, corresponding suggestions are given to improve security. Finally, the feasibility and validity of the method are demonstrated through experiments.
Distributed Network Risk Assessment Method Based on Attack Graph
Computer Science. 2013, 40 (2): 139-144. 
Abstract
Evaluating risk effectively, selecting effective defense measures and actively defending against information threats are key to solving the security problems of information systems. Based on the actual requirements and status of information security risk assessment, we apply attack graphs to its study. First, addressing the uncertainty and complexity of information security risk assessment, we integrate vulnerability-association technology into the assessment. Second, since the attack paths described by attack graph models suit quantitative data processing but support qualitative analysis poorly, and since risk is uncertain, we quantify the risk factors by the formation probability of attack paths proposed in this paper, pre-process the probabilities of atomic attacks, and propose a risk assessment method based on the attack graph model. The method takes full advantage of the computing power of each host in the network and greatly shortens attack graph generation time.
Exploring Multiple Execution Paths Based on Execution Path Driven
Computer Science. 2013, 40 (2): 145-147. 
Abstract
To address the problem in dynamic analysis of binary programs that not all execution paths can be explored, an algorithm based on execution-path driving is presented. Its main idea is to run the program in a controllable simulation environment and drive it to execute the paths that cannot be reached under the current input set by modifying the value of the program counter (PC), so that multiple execution paths can be explored. Based on this algorithm, a prototype system for dynamic binary analysis driven by execution paths was designed and implemented. Experimental results illustrate that the algorithm is effective in exploring the execution paths of binary programs.
Multi-Agents Network Security Risk Evaluation Model Based on Attack Graph
Computer Science. 2013, 40 (2): 148-152. 
Abstract
To protect networks and evaluate their security risk automatically, a novel multi-agent risk evaluation model based on attack graphs (MREMBAG) is presented. First, a well-structured model to manage the entire evaluation process and the functional architecture of primary and slave agents are designed. The primary and slave agents then construct attack paths and generate the attack graph using an attack-graph building algorithm, taking as input the dynamic data collected by components. Finally, risk indexes for attack paths, components, hosts and vulnerabilities, together with node-correlation risk indexes, are determined to evaluate the target network quantitatively. The experimental results demonstrate that MREMBAG is a practical and efficient way to evaluate network security risk.
Model-based Approach for Software Test Adequacy Analysis
Computer Science. 2013, 40 (2): 153-158. 
Abstract
Test adequacy analysis usually uses coverage criteria to evaluate test design with respect to specific software characteristics. Conventional adequacy methods have two problems in evaluating tests of large software systems: first, code-based coverage cannot ensure sufficient verification and validation of software requirements; second, test adequacy should account for the different contributions of features, since important features deserve more test effort. This paper proposes a model-based approach to test adequacy analysis. An interface model is defined to represent executable software requirements for software components, coverage of test-case design is analyzed at two levels (services and service compositions), and adequacy is calculated as a weighted sum of coverage over software features. Experiments illustrate the proposed approach.
Imperative-style Semantics Framework for Object-oriented Languages
Computer Science. 2013, 40 (2): 159-162. 
Abstract
Object-oriented languages are widely used in software engineering practice, and a rigorous semantics framework for these languages is important both for understanding their key features and for software verification. This paper presents a new imperative-style semantics framework for object-oriented languages, comprising an operational semantics and a type system, and proves the soundness theorem.
Approach for Web Service Composition Trustworthiness Evaluation
Computer Science. 2013, 40 (2): 163-166. 
Abstract
As software trustworthiness has become an important aspect of software engineering research, and Web services have become the main form of software resource, more and more researchers focus on Web service trustworthiness evaluation. Current research concentrates on evaluating the QoS attributes of single services rather than the trustworthiness of Web service compositions. This paper presents a general model for evaluating atomic service trustworthiness and an approach for evaluating composite service trustworthiness based on the weights of the atomic services executed in the composition and on the structure of the composition. Finally, we describe an experiment applying the approach to a detailed case from a real online shopping scenario.
Mashup Services Clustering Based on Tag Recommendation
Computer Science. 2013, 40 (2): 167-171. 
Abstract
Clustering Web services would greatly boost the ability of Web service search engines to retrieve relevant services. ProgrammableWeb.com is a popular online social Mashup site, and a Mashup is a Web-based application composed of Web services provided by developers. We propose a novel approach that uses both description documents and tags for Web service clustering, and we present a tag recommendation strategy to improve its performance. Experimental results show that tag-recommendation-based service clustering is more accurate than the two baseline methods, indicating that the recommendation strategy effectively expands the tag sets of Mashup services that have few tags, introducing more tag information and thus improving the clustering.
Attribute Weight Evaluation Approach Based on Approximate Functional Dependencies
Computer Science. 2013, 40 (2): 172-176. 
Abstract
In real applications, the normalization of some relational data is unreasonable, leading to data redundancy and inconsistency. To evaluate attribute importance for such relational data automatically, this paper proposes an attribute weight evaluation approach based on approximate functional dependencies. From the concept of the agree set, the maximum set is derived, and the minimal nontrivial functional dependency sets are then generated to find approximate dependency relations, from which approximate keys and approximate keywords can be found. The approach then computes the weight of each attribute according to its supported degree. Experimental results and analysis demonstrate that the approach reasonably captures the importance of attributes in a relation, and that the algorithm is stable and performs well.
Classification Algorithm of Database Request Based on Session and Content Level in Cloud Computing
Computer Science. 2013, 40 (2): 177-179. 
Abstract
Many studies in the IT industry now focus on cloud computing. Cloud computing instances carry massive numbers of user requests on databases, and failing to classify these requests reasonably has a significant impact on system performance. How to classify them so as to meet Web QoS standards is a difficult and key point in cloud computing research. The original first-come-first-served request pattern has many defects: customers' priority requirements cannot be satisfied, profit is not maximized, and service resources are not fully used. We therefore put forward a database request classification algorithm based on sessions and content level, together with a performance function at the same levels. The function takes customer priority into consideration so that high-priority requests obtain service resources first, while a time function in the algorithm promotes the priority of waiting requests as time passes, so they are eventually processed and endless waiting is avoided. The algorithm is thus a dynamic, adaptive classification of database requests: high-priority requests are processed first, and lower-priority requests are still guaranteed to be processed even though their initial priority is not sufficient.
Service-oriented Component Specification
Computer Science. 2013, 40 (2): 180-185. 
Abstract
Most existing component specifications focus on describing method-oriented interface signatures and behaviors, but their lack of a full description of business logic directly reduces the efficiency of component reuse. On the basis of requirement analysis and elicitation, atomic services and services are defined and the corresponding description methods are given. Service-oriented component specifications (SOCS) are then defined, with detailed domain application samples. SOCS is not a substitute for existing component specifications but an extended version of them. Finally, the paper discusses its application at the level of the CBD process.
Software Reliability Growth Model Based on Dynamic Fuzzy Neural Network with Parameters Dynamic Adjustment
Computer Science. 2013, 40 (2): 186-190. 
Abstract
The parameters of a dynamic fuzzy neural network are dynamically adjusted by a genetic algorithm (GA-DFNN), and GA-DFNN is used to study software reliability growth models (SRGM). The optimal DFNN parameters are found by the genetic algorithm during training, and a software failure data prediction model is established from the DFNN with optimal parameters. On three groups of software defect data, we compared the predictive ability of the SRGM built with GA-DFNN against SRGMs built with a fuzzy neural network (FNN) and a BP neural network (BPN). The simulation results confirm that the SRGM established by GA-DFNN gives stable short-term prediction with small error, and that it has some generality.
Formalization of Gauge Integration Theory in HOL4
Computer Science. 2013, 40 (2): 191-194, 228. 
Abstract
Integration is one of the most important foundations of many subjects, such as real analysis and the differential equations of signals and systems. The gauge integral is a generalization of the Riemann integral that is more useful than the Lebesgue integral in some situations. This paper formalizes the operational properties of the gauge integral in higher-order logic 4 (HOL4), covering linearity, ordering properties, integration by parts, the integral split theorem, integrability on a subinterval, integrability of special functions, the limit theorem and a Cauchy-type integrability criterion, and then uses them to verify an inverting integrator.
mDHT:A Search Algorithm to Extra-large Volume of Data Based on Open HDFS Platform and Multi-level Indexing
Computer Science. 2013, 40 (2): 195-199. 
Abstract
To meet the storage and fast-search needs of extra-large-scale energy monitoring and statistics data, we propose a Multi-indexed Distributed Hash Table (mDHT) algorithm based on the HDFS/Hadoop open platform and a multi-level index design, and implement it in MapReduce. Simulation experiments at a scale of up to 48 million data records indicate that, from 12 million to 48 million records, mDHT clearly outperforms a traditional MS SQL Server implementation in data-insertion operations. Even compared with a single-index search application, mDHT reduces data search time by 24.5% to 57.8%. The multi-level indexed DHT algorithm presented in this paper provides a key technique for building a fast search engine over extra-large-scale data on a cloud storage architecture.
Dependency Basis of Arbitrary Axiomatic System
Computer Science. 2013, 40 (2): 200-205. 
Abstract
This paper formally defines axioms and axiomatic systems of value dependencies, and strictly proves that each axiomatic system has a dependency basis for any context. It gives a method for computing the dependency basis of an arbitrary axiomatic system for any context, and proves that the dependency basis of an axiomatic system is not unique. An inducement context is introduced to visually show the proof process and the computation of the dependency basis.
Dynamic Collaborative Filtering Recommender Model Based on Rolling Time Windows and its Algorithm
Computer Science. 2013, 40 (2): 206-209. 
Abstract
To improve on traditional collaborative filtering recommender systems, a dynamic user-item-time three-dimensional model based on rolling time windows is proposed, which takes the time sequence into account, and a collaborative filtering (CF) algorithm is developed to work with the model. Interest scores at different times are weighted differently according to the time sequence, and the similarities between users are composed of components at different times, which improves the timeliness of the algorithm. In addition, the similarities can be computed quickly by an incremental formula deduced in this paper, improving the scalability of the algorithm. Experiments show that the proposed model and algorithm outperform the traditional 2D collaborative filtering model and algorithm in terms of hit rate.
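A toy sketch of time-windowed user similarity: per-window cosine similarities are folded into a running value with geometric decay, so recent co-behavior counts more and a new window can be absorbed incrementally instead of recomputing history. The decay constant and matrix layout are illustrative, not the paper's formula.

```python
import numpy as np

def window_similarity(ratings, decay=0.8):
    """User-user similarity from rolling time windows (a sketch).

    `ratings` is a list of user-by-item matrices, oldest window first.
    Each new window's cosine similarity is blended into the running
    similarity, giving newer windows more weight."""
    def cosine(m):
        norm = np.linalg.norm(m, axis=1, keepdims=True)
        norm[norm == 0] = 1.0
        u = m / norm
        return u @ u.T

    sim = None
    for m in ratings:                        # oldest -> newest
        s = cosine(m)
        sim = s if sim is None else decay * sim + (1 - decay) * s
    return sim

w1 = np.array([[5, 0, 1], [4, 0, 0], [0, 5, 4]], dtype=float)
w2 = np.array([[0, 4, 5], [5, 0, 1], [0, 4, 5]], dtype=float)
print(window_similarity([w1, w2]).round(2))
```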
Research on System Identification by SPPSO Programming for Time-delay HBV Model
Computer Science. 2013, 40 (2): 210-213. 
Abstract
System identification is an active research area of intelligent control theory. Existing algorithms such as quadratic programming can identify only a very limited number of parameters, suffer from stagnation, and depend heavily on the initial values of the parameters. As intelligent control develops, the degree of nonlinearity becomes ever higher, but nonlinear system identification has not yet formed a complete scientific theory. Small-population-based particle swarm optimization (SPPSO) is an optimization technique for locating the global optimum: it is easy to implement, converges quickly and is effective, and it greatly reduces the time and resource cost of processing large-scale population problems with large data volumes. It is therefore especially meaningful for identifying highly nonlinear, time-delay systems, a class of complex systems typical in medicine. Here SPPSO is applied to a time-delay hepatitis B virus dynamics (HBV) model, which has good research and practical value.
Statistical Machine Translation Based on Translation Rules
Computer Science. 2013, 40 (2): 214-217. 
Abstract
An improved hidden Markov model is used to align words and resolve the inconsistency between word alignments and phrase structures. Translation rules are extracted from the aligned phrases and English phrase trees. An extended CYK algorithm, able to decode non-Chomsky normal forms, serves as the decoder, and a two-pass decoding algorithm is proposed to integrate the language model during decoding. Experimental results show that the BLEU score of the improved HMM is higher than that of the standard HMM, that the translation quality of the translation rules is better than phrase-based machine translation, and that the two-pass decoding algorithm approaches the BLEU score of the cube pruning algorithm while costing less decoding time.
Novel Scheme for Mining Implicit Text Fragments from Abstracts
Computer Science. 2013, 40 (2): 218-221. 
Abstract
This paper extracts high-frequency keywords appearing in the literature, locates the abstracts through an inverted index, mines the fixed semantic phrases containing the keywords in the abstracts, and tracks the phrases' dynamic changes over recent years by text bibliometrics. Using the related-effect matrix to establish an association network, the associations between the semantic phrases are analyzed. Experimental results show that the implicit knowledge fragments of literature abstracts can better reflect disciplinary trends.
Adaptive Algorithm for MRMP
Computer Science. 2013, 40 (2): 222-228. 
Abstract
The multi-vehicle ride matching problem (MRMP) concerns taking as many passengers as possible by optimizing vehicle routes and the matching between vehicles and passengers. Current research suffers from models divorced from reality and inefficient algorithms. For this problem, this paper presents APSO (Attractive Particle Swarm Optimization). The MRMP is transformed into an RMP through APSO, forming a first matching between vehicles and passengers; the best sort order is then found by sorting this result, using prior clustering based on the first matching; finally, the solution is optimized once more through optimization rules. Comparative experiments show that the APSO-based method solves the problem with a high matching rate at low cost.
Efficient Algorithm for Online Mining Closed Frequent Itemsets over Data Streams
Computer Science. 2013, 40 (2): 229-234. 
Abstract
Mining closed frequent itemsets from data streams has been studied extensively, with NewMoment regarded as a typical algorithm; however, NewMoment's large search space causes poor time performance. This paper presents A-NewMoment, which improves NewMoment for mining closed frequent itemsets. First, it designs a combined data structure that uses an efficient bit vector to represent items and an extended frequent-item list to record the current closed frequent information in the stream. Second, new pruning strategies, WSS and CSS, are proposed to avoid generating large numbers of intermediate results, greatly reducing the search space. Finally, the DNFIPS pruning strategy deletes non-closed frequent itemsets from HTC, and a novel strategy, DHSS, efficiently and dynamically maintains the additions and deletions of closed frequent itemsets. Theoretical analysis and experimental results show that the proposed method is efficient.
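To show the bit-vector idea in isolation, here is a brute-force sketch: each item's transaction set is an integer bitmap, support is a popcount of ANDed bitmaps, and closedness means no superset has the same support. The pruning strategies and stream maintenance that make A-NewMoment fast are deliberately omitted.

```python
from itertools import combinations

def mine_closed(transactions, min_sup):
    """Closed frequent itemsets via per-item bit vectors (a sketch)."""
    items = sorted({i for t in transactions for i in t})
    bits = {i: 0 for i in items}
    for tid, t in enumerate(transactions):
        for i in t:
            bits[i] |= 1 << tid                 # set the transaction's bit
    freq = {}
    for k in range(1, len(items) + 1):
        for s in combinations(items, k):
            v = (1 << len(transactions)) - 1
            for i in s:
                v &= bits[i]                    # AND bitmaps of the itemset
            sup = bin(v).count("1")             # popcount = support
            if sup >= min_sup:
                freq[s] = sup
    # keep an itemset only if no proper superset has equal support
    return {s: c for s, c in freq.items()
            if not any(set(s) < set(t) and freq[t] == c for t in freq)}

print(mine_closed([{"a", "b"}, {"a", "b", "c"}, {"a", "c"}], min_sup=2))
```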
Quantum-behaved Particle Swarm Algorithm on Weapon Target Assignment
Computer Science. 2013, 40 (2): 235-236. 
Abstract
To improve the solving efficiency and performance of weapon-target assignment (WTA), this paper puts forward an improved quantum-behaved particle swarm optimization algorithm for solving WTA. First, premature stagnation is detected from particle aggregation, and a slowly varying function is used to overcome premature convergence while keeping the population diverse. Second, a multi-weapon target assignment model is built that minimizes failure probability while allocating weapons to shoot all targets. Simulation results indicate that the new algorithm obtains optimal or suboptimal solutions and effectively solves WTA problems.
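For reference, here is a bare-bones discrete PSO on a tiny WTA instance; the kill-probability matrix is made up, particles hold continuous positions rounded into target indices, and neither the quantum behavior nor the paper's anti-stagnation function is modeled:

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.8, 0.3, 0.5],    # P[i, j]: kill probability of
              [0.4, 0.7, 0.2],    # weapon i against target j
              [0.6, 0.5, 0.9]])

def cost(assign):
    # total survival probability of all targets under an assignment
    # (each weapon fires at one target); lower is better
    surv = np.ones(P.shape[1])
    for w, t in enumerate(assign):
        surv[t] *= 1.0 - P[w, t]
    return surv.sum()

n_particles, n_weapons, n_targets, iters = 20, P.shape[0], P.shape[1], 200
x = rng.uniform(0, n_targets, (n_particles, n_weapons))
v = np.zeros_like(x)
decode = lambda p: np.clip(p.astype(int), 0, n_targets - 1)
pbest = x.copy()
pbest_cost = np.array([cost(decode(p)) for p in x])
g = pbest[pbest_cost.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = np.clip(x + v, 0, n_targets - 1e-9)
    c = np.array([cost(decode(p)) for p in x])
    better = c < pbest_cost
    pbest[better], pbest_cost[better] = x[better], c[better]
    g = pbest[pbest_cost.argmin()].copy()
print(decode(g), cost(decode(g)))
```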
Resolution Determination of Generalized Literals in Linguistic Truth-valued Lattice-valued Logic
Computer Science. 2013, 40 (2): 237-240. 
Abstract
Automated reasoning is an important realm of artificial intelligence, and resolution-based automated reasoning is one of its research branches. Lattice-valued logic systems based on lattice implication algebras can handle information or knowledge with both comparability and incomparability, providing a strict logical foundation for automated reasoning. This paper gives some properties of the linguistic truth-valued lattice implication algebra and, in the lattice-valued propositional logic system based on the eighteen-element linguistic truth-valued lattice implication algebra, presents the structures of generalized literals of the forms 1-IESF and 2-IESF and obtains the resolution determination of generalized literals. This work offers an important foundation for automated reasoning based on linguistic truth-valued lattice-valued logic systems.
Flexible-resource Constrained Project Scheduling Research Based on Particle Swarm Optimization
Computer Science. 2013, 40 (2): 241-244. 
Abstract
To better solve the flexible-resource-constrained project scheduling problem, a proficiency matrix is established to express the relations between resources and skills, and CPSO (Chaos Particle Swarm Optimization) is used in this paper to solve the problem. Considering task priorities and the flexible-resource constraints, a randomized priority chain is formed and optimized with the Serial Schedule Generation Scheme (SSGS), and the best schedule of the whole project is found by updating the population with CPSO. Results demonstrate the feasibility and effectiveness of the method, which thus has practical application value for the flexible-resource-constrained project scheduling problem.
Grain Yield Prediction Model Based on Grey Theory and Markov
Computer Science. 2013, 40 (2): 245-248. 
Abstract
Grain yield is influenced by many complex factors and is random and fluctuating, so this paper proposes a grain yield prediction model based on grey theory and a Markov model. First, grey relational grades are used to screen the influencing factors used to build the grey-model yield prediction; the Markov process is then used to correct the forecasting error, greatly increasing prediction accuracy. Simulation results show that the proposed algorithm predicts more accurately and that the model meets the demanded prediction precision.
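A minimal sketch of the grey half of such a hybrid, the classical GM(1,1) forecaster; the Markov residual-state correction is omitted and the yield series below is made-up illustration, not the paper's data:

```python
import numpy as np

def gm11_predict(x, steps=1):
    """GM(1,1) grey forecasting (a sketch).

    Fits dx/dt + a*x = b on the accumulated series by least squares
    and extrapolates; a Markov chain over residual states would then
    correct the forecasting error, as the paper describes."""
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                       # accumulated generating operation
    z = 0.5 * (x1[1:] + x1[:-1])            # background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]
    k = np.arange(len(x) + steps)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a
    x_hat = np.diff(x1_hat, prepend=x1_hat[0])
    x_hat[0] = x[0]
    return x_hat[-steps:]

yield_series = [482.0, 512.3, 530.8, 571.2, 594.1]   # illustrative data
print(gm11_predict(yield_series, steps=2))
```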
Analysis on the Applicability of Dempster's Combination Rule
Computer Science. 2013, 40 (2): 249-252. 
Abstract
Addressing the open issues that the classical conflict coefficient in D-S evidence theory cannot correctly recognize evidence conflict and that conflict increases with the number of information sources, the applicability of Dempster's combination rule is analyzed by adopting the distance between pignistic-transformed probability assignment functions and integrating the classical conflict quantification standard. Numerical examples and a comparison with the method of Weiru Liu demonstrate that the proposed method provides an applicable and reasonable indicator of when Dempster's combination rule applies.
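For background, here is Dempster's rule itself together with the classical conflict coefficient K whose adequacy the paper questions, run on Zadeh's classic example where K is huge yet all combined mass collapses onto a barely supported hypothesis; the pignistic-distance indicator the paper adds is not reproduced:

```python
from itertools import product

def dempster(m1, m2):
    """Dempster's rule for two mass functions over frozenset focal
    elements; returns the combined mass and the classical conflict
    coefficient K."""
    combined, K = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            combined[C] = combined.get(C, 0.0) + a * b
        else:
            K += a * b              # mass assigned to empty intersections
    if K >= 1.0:
        raise ValueError("total conflict: rule undefined")
    return {A: v / (1 - K) for A, v in combined.items()}, K

# Zadeh's example: two experts with almost disjoint beliefs
m1 = {frozenset("a"): 0.9, frozenset("b"): 0.1}
m2 = {frozenset("c"): 0.9, frozenset("b"): 0.1}
m, K = dempster(m1, m2)
print(m, K)   # all mass ends on {'b'} although both barely supported it
```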
Computational Complexity of Probabilistic Inference in Ising Graphical Model
Computer Science. 2013, 40 (2): 253-256. 
Abstract PDF(439KB) ( 661 )   
RelatedCitation | Metrics
Probabilistic inference in a graphical model is to compute the partition function, the marginal probability distribution, and the conditional probability distribution by summing out variables of the joint probability distribution. The hardness and inapproximability of probabilistic inference are important problems and are the foundation for designing exact and approximate probabilistic inference algorithms. We analyzed the computational complexity of exact and approximate probabilistic inference. In particular, we constructed a polynomial-time counting reduction from the #2SAT problem to the probabilistic inference problem, and proved that computing the partition function, the marginal probability distribution and the conditional probability distribution in an Ising graphical model is #P-hard. Moreover, approximating these probability distributions is NP-hard.
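The source of the hardness is the summation over all 2^n spin configurations; the brute-force sketch below makes that explicit for a toy Ising model (the coupling dict J and field dict h are illustrative names).

from itertools import product
import math

def partition_function(n, J, h, beta=1.0):
    Z = 0.0
    for spins in product([-1, 1], repeat=n):   # 2**n configurations
        energy = -sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
        energy -= sum(hi * spins[i] for i, hi in h.items())
        Z += math.exp(-beta * energy)
    return Z

# A 3-spin chain: Z is a sum over 2**3 = 8 configurations.
print(partition_function(3, {(0, 1): 1.0, (1, 2): 1.0}, {0: 0.5}))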
Retrieval Based on Lexical Chains in Q&A System
Computer Science. 2013, 40 (2): 257-260. 
Abstract PDF(411KB) ( 503 )   
RelatedCitation | Metrics
A Question Answering (QA) system allows users to ask questions in natural language and, using natural language processing technology, returns answers to the user automatically. This paper used WordNet to construct lexical chains and applied them in an Internet-based question answering system. Two different methods were used to calculate the similarity of snippets downloaded from Google and sort them by similarity. For the first snippet and the first ten snippets, the accuracy of the answer snippets is increased by 13.68% and 25.4% respectively. We performed significance testing on the experimental results: under a Scheffé post hoc test at the 0.05 significance level, p = 0.000078 was obtained, so the accuracy of the system is dramatically improved.
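A rough sketch of WordNet-based similarity scoring for snippet ranking follows, assuming nltk with the WordNet corpus installed (nltk.download('wordnet')). The paper builds full lexical chains; this toy version only averages the best pairwise synset path similarities.

from nltk.corpus import wordnet as wn

def word_sim(w1, w2):
    # Best path similarity over all synset pairs of the two words.
    best = 0.0
    for s1 in wn.synsets(w1):
        for s2 in wn.synsets(w2):
            sim = s1.path_similarity(s2)
            if sim and sim > best:
                best = sim
    return best

def snippet_score(question_words, snippet_words):
    # Average, over question words, of the best match in the snippet.
    scores = [max((word_sim(q, s) for s in snippet_words), default=0.0)
              for q in question_words]
    return sum(scores) / len(scores) if scores else 0.0

snippets = [["dog", "barks", "loudly"], ["stock", "market", "rises"]]
ranked = sorted(snippets, key=lambda s: snippet_score(["puppy", "sound"], s),
                reverse=True)
print(ranked[0])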
Research on Knowledge Reasoning Technology in Emergency Command System
Computer Science. 2013, 40 (2): 261-264. 
Abstract PDF(362KB) ( 455 )   
RelatedCitation | Metrics
Aiming at the urgent demand for timely and efficient auxiliary decision-making in emergency management and command, and from the angle of combining intelligent technology with industry application, this paper made a study of knowledge reasoning techniques such as knowledge acquisition, knowledge representation and inference mechanisms, then put forward a scheme for an emergency command system based on intelligent auxiliary decision-making. Application results in actual projects show that the scheme is highly industrialized and self-adaptable, and meets the intelligence demands of emergency command work.
Concept Algebra Based on Lattice
Computer Science. 2013, 40 (2): 265-269. 
Abstract PDF(471KB) ( 354 )   
RelatedCitation | Metrics
Concept lattice, as an effective model for concept analysis, representation and application, has been widely utilized in many fields. Built upon concept lattice, concept algebra is a newly developing algebraic system which needs further enhancement and extension. We introduced and analyzed Nilsson's concept algebra, Wille's concept algebra and the concept algebra of cognitive informatics, and elaborated a comparative analysis of these three theories. In addition, we proved that Nilsson's concept algebra belongs to Wille's concept algebra. The findings and conclusions of this paper lay a solid foundation for further improvements and applications of concept algebra.
Local Evolving Model Research of Layered Supply Chains Complex Networks
Computer Science. 2013, 40 (2): 270-273. 
Abstract PDF(349KB) ( 646 )   
RelatedCitation | Metrics
As complex network systems, supply chains have characteristics of complex networks such as dynamics, self-adaptability and self-organization. This paper considered the macroscopic behavior of supply chain networks as a whole, analyzed the growth-evolution rules of birth, decline and exit of enterprise nodes, and constructed a layered weighted supply chain network model based on preferential attachment over a multi-attribute combination of node parameters. The experimental results verify the scale-free property of the model, whose power-law exponent lies in the range from 2 to 3; the model also shows high clustering and a small average path length relative to network size, which indicates that it exhibits the small-world property.
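A toy sketch of the growth side of such a model: each new node attaches preferentially according to a combined score, here degree times a random "enterprise strength" fitness. The decline and exit dynamics of the paper's model are omitted, and all parameters are illustrative.

import random

def grow_network(n_nodes, m_links=2, seed=0):
    random.seed(seed)
    degree = {0: 1, 1: 1}
    fitness = {0: random.random(), 1: random.random()}
    edges = [(0, 1)]
    for new in range(2, n_nodes):
        fitness[new] = random.random()
        candidates = list(degree)
        # Multi-attribute preference: degree weighted by node fitness.
        weights = [degree[v] * fitness[v] for v in candidates]
        targets = set()
        while len(targets) < min(m_links, len(candidates)):
            targets.add(random.choices(candidates, weights=weights)[0])
        degree[new] = 0
        for t in targets:
            edges.append((new, t))
            degree[new] += 1
            degree[t] += 1
    return edges, degree

edges, degree = grow_network(1000)
print(max(degree.values()), min(degree.values()))   # heavy-tailed degrees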
Qualitative Preference Decision and Reasoning of Consistency for CP-nets
Computer Science. 2013, 40 (2): 274-278. 
Abstract PDF(410KB) ( 596 )   
RelatedCitation | Metrics
CP-nets (conditional preference networks) are a simple and intuitive graphical tool for representing conditional ceteris paribus (all other things being equal) preference statements over the values of a set of variables, and they are especially suited to multi-attribute qualitative decision making under incomplete preference information. Firstly, by constructing the induced graph of a CP-net and studying its properties, it was found that strong dominance testing is in essence a reachability question between vertices of the induced graph, so strong dominance testing for binary-valued CP-nets can be solved by a DFS algorithm. Secondly, the consistency of CP-nets was studied: theorems and properties on consistency were given separately for acyclic and cyclic CP-nets, and three methods for judging consistency were proposed. All of this can be seen as an improvement and refinement of Boutilier's related work.
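The reachability reduction can be sketched directly: given the induced graph (one vertex per complete outcome, one arc per single-variable improving flip, assumed precomputed here), strong dominance testing is a plain depth-first search.

def dominates(induced, better, worse):
    # True if `better` is reachable from `worse` via improving flips.
    stack, seen = [worse], {worse}
    while stack:
        o = stack.pop()
        if o == better:
            return True
        for nxt in induced.get(o, ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return False

# Outcomes over two binary variables; arcs are improving flips under
# some hypothetical CP-net.
induced = {"ab": ["Ab"], "Ab": ["AB"], "aB": ["AB"]}
print(dominates(induced, "AB", "ab"))   # True: ab -> Ab -> AB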
Medical Image Segmentation Based on Finite Mixture Models of Non-parametric Multivariate Chebyshev Orthogonal Polynomials
Computer Science. 2013, 40 (2): 279-283. 
Abstract PDF(680KB) ( 392 )   
RelatedCitation | Metrics
To address the over-reliance on prior assumptions of parametric methods for finite mixture models, and the problem that univariate Chebyshev orthogonal polynomials can only process gray images, a segmentation method using mixture models of multivariate Chebyshev orthogonal polynomials for color images was proposed in this paper. First, the multivariate Chebyshev orthogonal polynomials were derived via Fourier analysis and tensor product theory, a non-parametric mixture model of multivariate orthogonal polynomials was proposed, and the mean integrated squared error (MISE) was used to estimate the smoothing parameter of each model. Second, the expectation maximization (EM) algorithm was used to estimate the orthogonal polynomial coefficients and the model weights. This method requires no prior assumptions on the model and can effectively overcome the "model mismatch" problem. Experimental results show that this method achieves better segmentation results than the mean-shift method.
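As a small illustration of the tensor-product construction underlying the multivariate basis, the sketch below builds a bivariate Chebyshev basis with NumPy and fits coefficients by least squares; the MISE smoothing-parameter selection and the EM loop of the paper are not reproduced, and the sample data are synthetic.

import numpy as np
from numpy.polynomial.chebyshev import chebvander2d

# Sample values scaled into [-1, 1] for two channels (illustrative data).
x = np.linspace(-1, 1, 5)
y = np.linspace(-1, 1, 5)
# Rows: sample points; columns: tensor products T_i(x) * T_j(y).
basis = chebvander2d(x, y, [3, 3])
coeffs = np.linalg.lstsq(basis, np.exp(-x**2 - y**2), rcond=None)[0]
print(basis.shape, coeffs.shape)        # (5, 16) (16,)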
Mental Fatigue Recognition Extension Model Based on Facial Visual Cues
Computer Science. 2013, 40 (2): 284-288. 
Abstract PDF(419KB) ( 513 )   
RelatedCitation | Metrics
Background information such as life burden, workload and sleep quality plays a very important role in mental fatigue recognition based on facial visual cues; however, it cannot be directly extracted from facial video. Based on the theory, ideas and methods of extenics, combined with classic mathematical methods and existing computer vision technology, this paper proposed a new extension model for mental fatigue recognition. Following the idea of contradiction transformation, background information that cannot be directly extracted from facial video is translated in the model into corresponding facial fatigue appearance, which can be extracted directly. The model also provides a fusion approach that combines existing facial fatigue visual cues with facial fatigue appearance to recognize mental fatigue. The experimental results verify the validity of the model.
Research on Fast 3-D Rotation Calculation Based on Video Techniques
Computer Science. 2013, 40 (2): 289-293. 
Abstract PDF(1042KB) ( 483 )   
RelatedCitation | Metrics
This paper discussed a method for calculating rotation based on optical flow, a non-contact measuring technique with important value on some special occasions. Rigid motion equations are set up from optical flow characteristics, and a two-step iterative method is then used to estimate the motion equations and calculate the rotation speed about each coordinate axis. In order to improve computing speed, a matching algorithm based on gray-coded bit-plane blocks is adopted when measuring the displacement vector on the projection plane. The calculation is of low complexity because the search over characteristic blocks is done with simple logical XOR operations. The experimental results show that the calculation is accurate.
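The XOR trick can be sketched as follows: images are reduced to a single Gray-coded bit plane, and block similarity is just the number of differing bits, so the displacement search needs only XOR and a popcount. The bit index and search radius below are illustrative choices.

import numpy as np

def gray_code_plane(img, bit=4):
    g = img ^ (img >> 1)                 # binary -> Gray code
    return (g >> bit) & 1                # keep a single bit plane

def match_block(plane, ref, top, left, search=8):
    h, w = ref.shape
    best, best_pos = None, (top, left)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > plane.shape[0] or x + w > plane.shape[1]:
                continue
            # Block distance = popcount of the XOR of the two bit blocks.
            cost = np.count_nonzero(plane[y:y+h, x:x+w] ^ ref)
            if best is None or cost < best:
                best, best_pos = cost, (y, x)
    return best_pos

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
plane = gray_code_plane(img)
print(match_block(plane, plane[20:28, 20:28], 20, 20))  # should recover (20, 20)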
Robust and Fast Feature Points Matching
Computer Science. 2013, 40 (2): 294-296. 
Abstract PDF(794KB) ( 295 )   
RelatedCitation | Metrics
Aiming at the problem that most stereo matching algorithms cost too much time, an algorithm was proposed in which feature points are detected in the frequency domain and feature vectors are extracted in the spatial domain. Firstly, efficient coding theory was studied. Secondly, the salient features were located in the image and their scales were computed. Finally, patterns whose scales match the feature points' scales were constructed to extract features, and the features were matched by the nearest neighbor rule. The experimental results show that the proposed method has high computational efficiency and low time consumption, is strongly robust to scale and affine transformations, and strikes a balance between speed and performance.
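The final matching step is standard nearest-neighbor matching; a minimal sketch with Lowe's ratio test (a common disambiguation heuristic, not necessarily the paper's exact rule) is shown below, with detection and description assumed done elsewhere.

import numpy as np

def nn_match(desc1, desc2, ratio=0.8):
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j, k = np.argsort(dists)[:2]      # nearest and second-nearest
        if dists[j] < ratio * dists[k]:   # accept only unambiguous matches
            matches.append((i, j))
    return matches

a = np.random.rand(50, 32)
b = np.vstack([a + 0.01 * np.random.rand(50, 32), np.random.rand(20, 32)])
print(len(nn_match(a, b)))                # most of the 50 perturbed copies match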
Seal Character Strokes Segmentation Method Based on the Skeleton
Computer Science. 2013, 40 (2): 297-300. 
Abstract PDF(870KB) ( 389 )   
RelatedCitation | Metrics
This paper introduced a method for segmenting seal character strokes from the skeleton image according to the artistic characteristics of seal engraving. Firstly, the seal character skeleton is obtained through a thinning algorithm. Then the intersections and crossing areas of the skeleton image are analyzed, forming sub-strokes. Finally, template matching is used to find, in the neighborhood of each intersection region, the sub-strokes to be combined, and the character elements are eventually combined into seal character strokes. The experiments show that the method is effective.
Improved Subdivision Algorithm of Trapezium Examining Strip for Constraint Voronoi Diagram Generation
Computer Science. 2013, 40 (2): 301-303. 
Abstract PDF(363KB) ( 394 )   
RelatedCitation | Metrics
Aiming at the problem that existing constrained Voronoi diagram generation algorithms may not converge when the constraints are complex, this paper proposed an improved subdivision algorithm of the trapezium examining strip for constrained Voronoi diagrams by introducing several control factors. External and internal constraint-line endpoint protection radius control factors are used to control the size of the constraint line near the endpoints of the Voronoi edges during the initial Voronoi growth process, while external and internal constraint segment size control factors are used to control the size of the constraint line on the Voronoi edges during subdivision of the examining strip. Experimental results show that the proposed algorithm obtains satisfactory results even in complex domains including internal boundary constraints, pencil-of-lines constraints and irregular areas.
Face Reconstruction Algorithm Based on Small Curve Relaxation Constraint Approximation
Computer Science. 2013, 40 (2): 304-307. 
Abstract PDF(329KB) ( 327 )   
RelatedCitation | Metrics
Aiming at the problems that facial expression changes are very complex, that traditional modeling methods cannot accurately depict subtle expression changes, and that the resulting face models lack realism, this paper proposed a 3D face modeling algorithm based on small-curve relaxation constraint approximation. For the facial areas where 3D expression changes are relatively rich, the algorithm partitions the area into small regions and uses point approximation technology to simulate the more complex curve profiles, avoiding the depiction defects and feature simulation distortion of traditional algorithms based on straight-line features. The method recovers feature information well in areas where 3D facial expression changes are rich, making 3D face modeling more realistic and feature depiction more accurate.
Soft Pen Calligraphy Beautification for Handwriting Characters Based on Stroke Feature Triangle
Computer Science. 2013, 40 (2): 308-311. 
Abstract PDF(327KB) ( 790 )   
RelatedCitation | Metrics
This paper presented an algorithm based on the Stroke Feature Triangle Model that adds calligraphy effects to the points and lines sampled by the computer, in order to beautify handwritten characters. Strokes are formed from point sets carrying calligraphy and character features; the Stroke Feature Triangle Model assigns a width to each part of a stroke, and B-Spline curves are used to smooth the handwriting stroke outlines. This method offers precise calligraphy simulation on the fly during character handwriting, and has good portability across various devices.
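The outline-smoothing step can be sketched with SciPy's smoothing B-splines (splprep/splev); the stroke-feature-triangle width assignment itself is not reproduced, and the sampled stroke points below are synthetic.

import numpy as np
from scipy.interpolate import splprep, splev

# Noisy handwriting samples along a stroke (illustrative data).
t = np.linspace(0, np.pi, 25)
x = t + 0.02 * np.random.randn(25)
y = np.sin(t) + 0.02 * np.random.randn(25)

tck, _ = splprep([x, y], s=0.05)          # s controls smoothing strength
u = np.linspace(0, 1, 200)
sx, sy = splev(u, tck)                    # smooth outline points
print(sx[:3], sy[:3])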
Human Action Recognition Using Image Contour
Computer Science. 2013, 40 (2): 312-314. 
Abstract PDF(625KB) ( 529 )   
RelatedCitation | Metrics
A method for action recognition based on hidden conditional random fields was presented, using the contours of an image sequence as representative descriptors of human posture. First, the human silhouette contour is extracted by background subtraction and shadow elimination. Since the star skeleton, which describes the contour with the centroid-border distance function, is not good at capturing local features, a new contour descriptor based on a distance array was defined and employed to transform the contour into a 1D distance signal. Finally, discriminative hidden conditional random fields are used to recognize human actions. The experimental results show that this method achieves a correct recognition rate above 91.4%.
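A minimal sketch of such a centroid-border distance descriptor: distances from the silhouette centroid to resampled border points form a fixed-length 1D signal that a downstream classifier can consume. The resampling and normalization choices here are illustrative.

import numpy as np

def distance_signal(contour, n_samples=64):
    centroid = contour.mean(axis=0)
    d = np.linalg.norm(contour - centroid, axis=1)
    # Resample to a fixed length and normalise for scale invariance.
    idx = np.linspace(0, len(d) - 1, n_samples).astype(int)
    sig = d[idx]
    return sig / sig.max()

theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
contour = np.column_stack([np.cos(theta), 2 * np.sin(theta)])  # an ellipse
print(distance_signal(contour, 8))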