Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 39 Issue 9, 16 November 2018
  
HybSim: A Simulation System for the Hybrid Streaming Media Storage
Computer Science. 2012, 39 (9): 1-4. 
Abstract PDF(466KB) ( 412 )   
The advantages of high performance and low power consumption have brought flash memory more and more attention. The most common way of utilizing flash is to construct a hybrid storage system together with RAM and disks, especially for streaming media applications, which have strict requirements on both storage capacity and performance. However, accurate and full-featured hybrid storage simulation tools are missing. This paper first modeled the architectures and energy consumption of hybrid storage systems, and then designed and implemented a hybrid storage simulation system, HybSim. Compared with the widely used storage simulation system DiskSim and its flash patch, HybSim adds several new modules, including the implementation of various hybrid storage architectures, power consumption, file-level management and access, a streaming media service mode, and statistics on performance, QoS, energy consumption and device wear. Detailed simulations of both performance and energy conservation were made with HybSim, and typical hybrid storage architectures were compared on performance, Quality of Service (QoS), energy saving and device wear.
Research on Security Framework of OGC-based Spatial Information Grid
Computer Science. 2012, 39 (9): 5-8. 
Abstract PDF(472KB) ( 431 )   
Grid technology has been used widely in spatial resource sharing. Many kinds of resources (such as large amounts of remote sensing images, complex observation data and professional geographical software) can be integrally accessed on the Internet on demand through a grid environment. In order to protect the owner's rights, the resources' security and the user's privacy, a security framework is needed. The framework should be comprehensive enough to cover the resource layer, service layer, transport layer, application layer, etc. Meanwhile, the framework should preserve the autonomy of each node of the grid. This research can give advice to applications that cross regions and organisations and share resources through services.
Adaptation Decision-taking Model for MPEG-21 DIA
Computer Science. 2012, 39 (9): 9-14. 
Abstract PDF(520KB) ( 476 )   
In the future, universal multimedia access will become the mainstream service pattern of multimedia services via networks. Aiming at this access pattern, MPEG-21 proposed the digital item adaptation (DIA) technical architecture to standardize the related techniques. However, DIA specifies only metadata such as context descriptions; the key adaptation decision-taking engine is still an open question. Considering the shortcomings of existing adaptation decision models, this paper proposed a model named adaptation decision-taking tree (ADT), which treats adaptation decision as sequential decision and employs a decision tree to model the decision process. To meet the demand for personalization in multimedia services, the ADT is constructed from user preferences. In addition, an optimal model is used to create the decision space, which reduces the number of decision candidates, cuts the decision time observably, and satisfies the real-time demand. Simulations show that the ADT is better than or similar to existing models in decision error and average iterations.
Efficient Regular Expression Matching Algorithm Based on DoLFA
Computer Science. 2012, 39 (9): 15-19. 
Abstract PDF(542KB) ( 574 )   
With the rapid increase in the number of rules, the DFA used to represent regular expressions often suffers state explosion, so it is very hard to satisfy the requirement of high-speed online network processing. This paper proposed an efficient regular expression matching algorithm, which first divides an expression into three subsets: exact strings, character classes and character repetitions, then optimizes and detects the corresponding blocks, and finally links them together with an auxiliary node data structure, constructing a special state machine, DoLFA. Theoretical analysis and simulation show that this algorithm not only saves memory space, but also provides high throughput and scalability.
Secure Vehicle-to-Infrastructure Communication Protocol Based on Certificateless Cryptosystem
Computer Science. 2012, 39 (9): 20-23. 
Abstract PDF(346KB) ( 458 )   
With the increase of traffic problems, various applications of vehicular Ad hoc networks (VANETs) are proposed, and their security is receiving increasing attention. This paper proposed a secure vehicle-to-infrastructure communication protocol based on a certificateless public key cryptosystem. The proposed protocol combines the advantages of protocols based on the traditional public key cryptosystem (no key escrow) and those based on ID-PKC (implicit authentication). Compared with a certificate-based protocol at the same security level, the efficiency of the proposed approach is improved.
Improved Binary Anti-collision Algorithm for Internet of Things
Computer Science. 2012, 39 (9): 24-27. 
Abstract PDF(335KB) ( 451 )   
Radio frequency identification is taken as the key technology in Internet of things applications, and it inevitably produces collisions. In view of the defects of the basic binary anti-collision algorithm, this paper proposed an improved binary anti-collision algorithm. Using the obtained collision information, the proposed algorithm transmits data dynamically and modifies the return mode to reduce the data amount and the number of search command transmissions. The results show that, compared with the basic binary anti-collision algorithm, the proposed algorithm improves system throughput, reduces transmission delay, and is well suited to identifying large numbers of tags in the Internet of things.
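As a hedged illustration of the baseline the paper improves on, the basic binary-tree anti-collision procedure can be sketched as follows. The function name, tag IDs and query counting are illustrative; this is not the paper's improved algorithm:

```python
def identify_tags(tags):
    """Basic binary-tree anti-collision: the reader repeatedly broadcasts
    a prefix; tags whose ID starts with it respond. A single response
    identifies a tag; a collision splits the search space by one bit.
    Returns (identified IDs, number of reader queries)."""
    queries = 0
    identified = []
    stack = [""]                       # prefixes still to broadcast
    while stack:
        prefix = stack.pop()
        queries += 1
        responders = [t for t in tags if t.startswith(prefix)]
        if len(responders) == 1:
            identified.append(responders[0])
        elif len(responders) > 1:      # collision: try both next-bit branches
            stack += [prefix + "0", prefix + "1"]
    return identified, queries
```

The improved algorithm in the paper shortens exactly the query/response traffic that this baseline spends on repeated collisions.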
New P2P Network Security Mechanism Based on the Rough Set and the Bayes Classifier
Computer Science. 2012, 39 (9): 28-32. 
Abstract PDF(595KB) ( 413 )   
A new security management mechanism based on the classification control of trading nodes was proposed. The model introduces the concepts of local trust and global trust and applies classification control to trading nodes. This paper's innovations are as follows: 1) Through classification and quantification of failure events in trades between nodes, according to the severity of damage and the size of the trade, trade failure events are divided into malicious attacks, bad quality and so on. 2) A rough set classifier and a Bayes classifier are used to divide the nodes of a peer-to-peer network into trusted nodes, strange nodes and malicious nodes; a trusted node list and a malicious node list are built, and malicious nodes are excluded from trading. 3) The rough set classifier and Bayes classifier are also used to integrate feedback recommendations and to decide the type of the recommended node; feedback behaviors are divided into honest feedback, malicious feedback and so on. Experiments indicate that, compared with existing trust models, the model obtains a higher detection rate of malicious acts with a higher transaction success ratio, and has better feedback information synthesizing capability.
Research on Extended Ant Colony Optimization Based Virtual Machine Deployment in Infrastructure Clouds
Computer Science. 2012, 39 (9): 33-37. 
Abstract PDF(464KB) ( 595 )   
Aiming at the virtual machine deployment problem in the cloud computing environment, and based on the definition of the match-distance between a virtual machine and a server, ant colony optimization (ACO) was used to research the deployment scheme, and the ACO was extended and modified for the deployment problem. Using a probabilistic tour decision with a performance-aware policy, virtual machines with the same performance interests are deliberately placed on different servers to reduce competition for hardware resources, and a single-ant pheromone update rule avoids the misdirection of inaccurate heuristic information. The parameter values of the algorithm were studied through experiments in CloudSim. Finally, the performance of the extended ACO was compared with that of the ranking deployment algorithm and the original ACO. The experimental results show that the extended ACO meets the need for system load balancing better and converges faster than the original ACO.
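The probabilistic tour decision can be sketched with the standard ACO rule, weighting each server by pheromone and a heuristic derived from match-distance. The weighting τ^α · η^β with η = 1/(1 + match-distance), and all parameter values, are illustrative assumptions, not the paper's calibrated settings:

```python
import random

def choose_server(pheromone, match_distance, alpha=1.0, beta=2.0):
    """Pick a server for one VM via roulette-wheel selection, biased by
    pheromone trails and the heuristic 1/(1 + match-distance)."""
    weights = [(tau ** alpha) * ((1.0 / (1.0 + d)) ** beta)
               for tau, d in zip(pheromone, match_distance)]
    total = sum(weights)
    r = random.uniform(0, total)
    acc = 0.0
    for j, w in enumerate(weights):
        acc += w
        if r <= acc:
            return j
    return len(weights) - 1            # numerical fallback
```

A server with high pheromone and small match-distance dominates the draw, while low-weight servers are still chosen occasionally, which is what lets ants explore alternative placements.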
Analysis on Algorithms of Identifying Regional Influential Users in Micro-blogging
Computer Science. 2012, 39 (9): 38-42. 
Abstract PDF(466KB) ( 507 )   
Based on micro-blogging analysis, we researched the micro-process of information diffusion from a regional perspective. By measuring the real behavior of users' information sharing, we constructed network models of information listening and information forwarding. We found that a small group of nodes can cover most of the communication behavior of information diffusion. We then proposed a HITS-like recognition algorithm named WeiboRank to find them, and proposed an evaluation based on real measured coverage. It is verified that the proposed algorithm is effective compared with other algorithms.
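Since WeiboRank is described as HITS-like, a plain HITS iteration on a forwarding graph gives the flavor of such ranking. This sketches standard HITS, not WeiboRank itself:

```python
def hits(edges, n, iters=50):
    """Plain HITS on a directed edge list (u, v): u forwards/listens to v.
    Good hubs point at good authorities and vice versa."""
    hub = [1.0] * n
    auth = [1.0] * n
    for _ in range(iters):
        # authority score: sum of hub scores of in-neighbors
        auth = [sum(hub[u] for u, v in edges if v == i) for i in range(n)]
        norm = sum(a * a for a in auth) ** 0.5 or 1.0
        auth = [a / norm for a in auth]
        # hub score: sum of authority scores of out-neighbors
        hub = [sum(auth[v] for u, v in edges if u == i) for i in range(n)]
        norm = sum(h * h for h in hub) ** 0.5 or 1.0
        hub = [h / norm for h in hub]
    return hub, auth
```

On a forwarding graph, high-authority nodes correspond to the small group of users whose posts cover most diffusion behavior.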
Collaborated E-learning Scheduling Algorithm Based on Max-Flow
Computer Science. 2012, 39 (9): 43-46. 
Abstract PDF(345KB) ( 392 )   
To solve the problem of lacking activity guidance for learners in e-learning platforms, a scheduling algorithm was proposed for the collaborated learning model which is classified by related e-learning projects. The basic idea of the proposed algorithm is to convert the problem into a max-flow model, so that an effective learning schedule for learners can be generated in a reasonable time span.
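The max-flow reduction can be illustrated with a generic Edmonds-Karp implementation; the learner/task graph in the usage note is a made-up example, not the paper's model:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp (BFS augmenting paths) on an adjacency-matrix
    capacity graph; returns the maximum s-t flow value."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break                      # no augmenting path left
        # find the bottleneck along the path, then augment
        v, bottleneck = t, float("inf")
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck
    return total
```

In a scheduling reduction of this kind, a source feeds learner nodes, learners connect to the activities they can take, activities drain to a sink, and a maximum flow corresponds to an assignment that saturates as many activities as possible.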
Wireless Sensor Networks Hybrid MAC Protocol for Overhead Transmission Line Monitoring
Computer Science. 2012, 39 (9): 47-50. 
Abstract PDF(455KB) ( 357 )   
Considering the new topology and traffic features of wireless sensor networks applied to overhead transmission line monitoring, a hybrid MAC protocol was proposed. The protocol adopts X-MAC during idle periods and pipelined transmission during busy periods. It uses different X-MAC parameters according to each node's role to improve real-time performance, and pipelined transmission to resolve the hidden terminal problem. Simulation results show that the protocol can meet the network's real-time and energy-efficiency requirements.
Identifying Algorithm for Opinion Leaders of Forums Based on Time-varying Graphs
Computer Science. 2012, 39 (9): 51-54. 
Abstract PDF(328KB) ( 510 )   
Because existing methods for identifying opinion leaders of forums have difficulty capturing the dynamic characteristics of the network, an algorithm for identifying opinion leaders based on time-varying graphs was proposed. The algorithm represents the evolution of the network as a sequence of static graphs, each of which represents the aggregated interactions over a given time window. Each potential opinion leader is identified by a quantitative indicator, and these results are then matched with each other so that reliable opinion leaders can be identified. Results show that the proposed algorithm has good validity and feasibility.
Internet Autonomous System Prefix Reputation Model
Computer Science. 2012, 39 (9): 55-59. 
Abstract PDF(427KB) ( 589 )   
Prefix hijacking faced by BGP can severely disrupt Internet reliability. By introducing trust technology, the paper proposed an autonomous system prefix reputation model, AS-PRM, to evaluate the trustworthiness of an autonomous system (AS) originating the prefixes belonging to it. An AS selectively prefers prefix route announcements originated by ASes with higher prefix reputation; as a result, prefix hijacking can be suppressed. According to the results of multiple prefix hijacking detection systems, the AS-PRM model computes AS prefix reputation based on the beta reputation system, taking into account the false positives and false negatives of the detection systems, and updates prefix reputation following the "slowly rising, quickly falling" principle. Finally, the model's validity was verified by simulation experiments.
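The beta reputation system the model builds on has a simple closed form, and the "slowly rising, quickly falling" principle can be illustrated with asymmetric evidence weights. The weights below are assumptions for illustration, not the paper's parameters:

```python
def beta_reputation(r, s):
    """Expected reputation under the beta reputation system,
    given r positive and s negative observations."""
    return (r + 1.0) / (r + s + 2.0)

def update(r, s, positive, w_up=1.0, w_down=3.0):
    """'Slowly rising, quickly falling': negative evidence (e.g. a
    detected hijack) is weighted more heavily than positive evidence."""
    return (r + w_up, s) if positive else (r, s + w_down)
```

Starting from the neutral reputation 0.5, one positive observation raises the score by less than one negative observation lowers it, which is exactly the asymmetry the principle asks for.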
Research of Dynamic Community Discovery Based on Role Assorted Thoughts
Computer Science. 2012, 39 (9): 60-63. 
Abstract PDF(441KB) ( 342 )   
Traditional community discovery algorithms focus on the analysis of the static topological structure of networks while ignoring the influence of individual activity on network formation. This paper introduced the concepts of community seed and liaison and, focusing on these special nodes, researched and analyzed the formation and evolution mechanism of social networks from both individualist and structuralist perspectives, proposing a role-assorted community discovery algorithm. The algorithm was tested both on artificial networks and real-world networks, and the results were compared with GN, fast GN and Polish. Experimental results show that the role-assorted algorithm performs much better than the GN algorithm, with great suitability and expandability. Besides, the discovered communities are all strongly connected.
Clustering Algorithm Based on Link Power Control for Wireless Sensor Network
Computer Science. 2012, 39 (9): 64-70. 
Abstract PDF(628KB) ( 470 )   
In order to solve the energy control problem of the link layer in a non-uniformly deployed network environment, this paper proposed a clustering routing algorithm based on link power control (CLPC). Based on optimal-connectivity-power clustering and a dual-channel mechanism with an anti-control interference strategy, CLPC solves the link-layer problems of collision retransmission and channel access fairness from the network layer, so as to improve overall network performance. Collision and competition intensity among nodes is reduced by the optimal-connectivity-power mechanism, and the collision probability during data transmission is decreased by the dual-channel mechanism, improving channel utilization. Moreover, the anti-control interference strategy restrains nodes with high transmit power so that low-transmit-power nodes can share the channel fairly. The simulation results show that the CLPC algorithm further improves energy efficiency and system throughput.
GEAR Sensing Network Equilibrium Algorithm Based on the Fuzzy Area Loose Distance
Computer Science. 2012, 39 (9): 71-73. 
Abstract PDF(270KB) ( 363 )   
The effective choice of sensor nodes has an important influence on wireless sensor network communication. Based on an analysis of the traditional GEAR sensor node distribution, an improved GEAR sensing-network equalization algorithm based on fuzzy-area loose distance was proposed. Using fuzzy-interval differentiation of candidate nodes together with a loose-distance method, membership functions are obtained from the nodes' fuzzy reliability, and the final node selection is completed. This avoids the disadvantage that the traditional GEAR node distribution algorithm depends on a priori knowledge of geographic location and energy cost to choose nodes. Experiments show that the communication energy consumption curve of this method is lower than that of the traditional GEAR algorithm, and it can regulate sensing network congestion.
Researches on Policy-based Network Convergence Architecture
Computer Science. 2012, 39 (9): 74-77. 
Abstract PDF(394KB) ( 406 )   
After analyzing the current status of network convergence in China, a network convergence policy was presented, and then a network convergence architecture based on common IMS technology was proposed, with its main entity functions and control procedures described. Under the proposed architecture, each network's management and main technology remain unchanged, each network shares all user information and network resources, and end-users can choose different networks and services according to their own needs, thus realizing network convergence.
Efficient Multipoint Relay Selection Algorithm for Mobile Ad hoc Networks
Computer Science. 2012, 39 (9): 78-80. 
Abstract PDF(348KB) ( 524 )   
Multipoint relay selection is a flooding technique that can be used to propagate messages in a MANET. The technique is proved to save node energy, prolong the MANET's lifetime, reduce the number of nodes to be searched, and reduce broadcasting time. This paper started from the relationship between the set covering problem and multipoint relay selection to improve the classic algorithm, and then proposed a new one called the efficient multipoint relay selection algorithm (E-MRSA). The simulation results show that the new algorithm can reduce the number of relay nodes by up to 14%. Moreover, it can also reduce the power consumption of the network by up to 12% and save propagation time by 9%, so the E-MRSA algorithm can improve the performance of mobile Ad hoc networks to a certain extent.
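The link to set covering can be illustrated by the standard greedy MPR heuristic: repeatedly choose the 1-hop neighbor that covers the most still-uncovered 2-hop nodes. This is a generic sketch of the classic approach, not E-MRSA itself:

```python
def select_mprs(one_hop, two_hop):
    """Greedy set-cover MPR selection. `one_hop` maps each 1-hop neighbor
    to the set of 2-hop nodes it reaches; `two_hop` is the set to cover."""
    uncovered = set(two_hop)
    mprs = []
    while uncovered:
        # pick the neighbor covering the most still-uncovered 2-hop nodes
        best = max(one_hop, key=lambda n: len(one_hop[n] & uncovered))
        if not one_hop[best] & uncovered:
            break                      # remaining 2-hop nodes unreachable
        mprs.append(best)
        uncovered -= one_hop[best]
    return mprs
```

Only the selected MPRs rebroadcast a flooded message, which is where the savings in relay count and energy come from.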
Hierarchical Key Management Based on Matrix Space
Computer Science. 2012, 39 (9): 81-84. 
Abstract PDF(355KB) ( 404 )   
In order to improve the efficiency of the storage resources of wireless sensor networks while enhancing security, this paper proposed a hierarchical key management scheme based on matrix space. The scheme uses sub-spaces of an LU matrix for successive key predistribution, and uses a removal mechanism to discard some of the nodes' matrix information after key establishment. Simulation figures show that the proposed scheme provides better storage efficiency and better security through the removal mechanism; the security of the network can eventually reach 100%.
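The LU-matrix idea behind such schemes can be sketched as follows: decompose a symmetric matrix A = L · U (here U = Lᵀ), give node i row i of L and column i of U, and any two nodes that exchange their U-columns compute the same pairwise key A[i][j] = A[j][i]. This is a generic illustration with integer arithmetic, not the paper's sub-space predistribution or removal mechanism:

```python
import random

def make_shares(n):
    """Build a random lower-triangular L with U = L transpose, so that
    L·U is symmetric. Node i keeps (row i of L, column i of U)."""
    L = [[random.randint(1, 9) if j <= i else 0 for j in range(n)]
         for i in range(n)]
    U = [[L[j][i] for j in range(n)] for i in range(n)]   # U = Lᵀ
    return [(L[i], [U[r][i] for r in range(n)]) for i in range(n)]

def pairwise_key(my_L_row, their_U_col):
    """Node i computes row_i(L) · col_j(U) = A[i][j]; node j computes
    row_j(L) · col_i(U) = A[j][i]. Symmetry makes the keys equal."""
    return sum(a * b for a, b in zip(my_L_row, their_U_col))
```

Because only L-rows and U-columns are stored, each node holds O(n) values instead of the full n×n matrix.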
Sliding Window-based Network Coding Transmission Scheme for Vehicular Ad hoc Networks
Computer Science. 2012, 39 (9): 85-88. 
Abstract PDF(464KB) ( 386 )   
Time-sensitive data transmission in vehicular Ad hoc networks (VANETs) is particularly challenging due to high mobility and rapidly changing topology. To address this problem, we proposed a scheme that uses network coding with a dynamic sliding window in opportunistic routing. By adjusting the window size according to the network status, it encodes different numbers of native packets into a coded packet, so that it can tolerate acknowledgement delay and improve throughput in different cases. The scheme uses a lower-triangular-matrix coding method to smooth the decoding interval at the receiver. Simulations show that the scheme increases throughput and decreases delay jitter efficiently. It is especially appropriate for time-sensitive multimedia applications in VANETs.
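The lower-triangular coding idea can be illustrated over GF(2): coded packet i mixes native packets 1..i, so each arriving coded packet releases exactly one more native packet instead of forcing the receiver to wait for a whole generation. A minimal sketch, not the paper's full sliding-window scheme:

```python
def encode(packets):
    """Lower-triangular coding over GF(2): coded[i] = p1 ^ p2 ^ ... ^ p(i+1)."""
    coded, acc = [], 0
    for p in packets:
        acc ^= p
        coded.append(acc)
    return coded

def decode(coded):
    """Progressive decoding: each new coded packet frees one native packet."""
    packets, prev = [], 0
    for c in coded:
        packets.append(c ^ prev)
        prev = c
    return packets
```

This progressive release is what "smoothing the decoding interval" buys for time-sensitive media: playback never stalls waiting for a full block to decode.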
Study of Spatial Information Hiding Algorithm Based on MSB and HVS
Computer Science. 2012, 39 (9): 89-93. 
Abstract PDF(940KB) ( 509 )   
In order to improve the capacity and robustness of image information hiding, a spatial information hiding algorithm based on the most significant bit (MSB) and the human visual system (HVS) was proposed. First, the secret information and the most significant bit plane are matched and replaced to reduce the volume of the embedded information, forming new secret information. Then, according to the different degrees of redundancy of human vision for textured and smooth areas, and for high-gray and low-gray areas of images, the information is embedded. The experimental results show that the algorithm has better robustness and a greater embedding capacity than the LSB algorithm, and that the PSNR of stego images is greater than 38 dB, maintaining good visual quality.
Relative Mobility of Nodes Based Adaptive Demand-driven Multicast Routing Protocol
Computer Science. 2012, 39 (9): 94-96. 
Abstract PDF(344KB) ( 350 )   
This paper presented a relative-mobility-of-nodes based adaptive demand-driven multicast routing protocol, RMNAM. RMNAM inherits the on-demand features of the ADMR protocol and introduces the concept of relative mobility of nodes. On the one hand, the protocol uses relative node mobility as an important factor in building the multicast forwarding tree, which improves the robustness of the multicast paths; on the other hand, it optimizes the transmission switching strategy of the source node by consulting the global repair frequency and the average relative mobility of nodes to improve adaptability. The simulation results show that RMNAM improves on ADMR in packet delivery ratio and transmission delay, and keeps the advantages of effectiveness and scalability compared with ODMRP.
Adaptive Packet Segmentation Scheme for Input Queued Switches
Computer Science. 2012, 39 (9): 97-100. 
Abstract PDF(346KB) ( 371 )   
Focusing on the low bandwidth utilization and poor flexibility of the traditional packet segmentation scheme for input queued switches, an adaptive segmentation scheme was proposed. Exploiting the synchronous properties of centralized scheduling, the new scheme adjusts the length of the segmentation unit and the matching time during scheduling by using the status of the input ports, which reduces the padding data and the speedup that the switch needs. Finally, simulations demonstrate that the proposed scheme exhibits good delay performance under different traffic models and is well applicable to real networks.
Comprehensive Evaluation Method for Network Survivability under Complex Environment
Computer Science. 2012, 39 (9): 101-104. 
Abstract PDF(428KB) ( 509 )   
In order to evaluate network survivability quantitatively and objectively in complex environments, a multi-objective network survivability evaluation method was proposed. The concept of a PSO-based environment parameter set was introduced, which is used to express the indexes' weights and the different requirements for network survivability indexes in different environments. From the node's point of view, integrating network robustness, availability, adaptability and cost, an algorithm was proposed that can reflect overall network survivability in different environments. The problems of existing methods, such as subjectivity, lack of quantification and poor adaptability, were solved. The method was verified through simulation.
Research on Evidence Collection under Cloud Computing Environment
Computer Science. 2012, 39 (9): 105-108. 
Abstract PDF(423KB) ( 640 )   
Cloud computing is a kind of super-computing model on the Internet. Its development brings challenges to evidence collection. Under a cloud computing environment, evidence collection includes three stages: detection, fixing and scavenging. There are security risks in cloud computing, so collecting evidence in a cloud environment faces both technological and legal problems. The problems of evidence information and collection measures can be addressed by data migration technology and intrusion detection systems; the legal problems can be addressed by improving legislation and strengthening international cooperation.
Field-sensitive Memory Model for Memory Safety of Heap-manipulating Programs
Computer Science. 2012, 39 (9): 109-114. 
Abstract PDF(637KB) ( 392 )   
Heap-manipulating programs usually operate on memory cells directly through shared and mutable data structures, which makes their memory safety more complex and harder to guarantee. A field-sensitive k-limit abstract memory model was proposed in this paper to support dynamic adjustment of the precision and efficiency of the analysis. We presented its framework, properties and operations. Then, four kinds of memory-related errors were identified in the operational semantics of the abstract memory model according to the definition of memory safety. Finally, we proposed a dataflow iteration algorithm for checking the memory safety of C programs.
Symbolic Execution Based on Branch Confusion Algorithm
Computer Science. 2012, 39 (9): 115-119. 
Abstract PDF(427KB) ( 495 )   
Symbolic execution is a common static analysis technology. The array element confusion issue is one of the key factors limiting symbolic execution performance. Through analysis of the essence of array confusion, a branch confusion algorithm was proposed. With a strategy that runs the confusion algorithm and symbolic execution at the same time, some complex array problems were solved. Using a real-time constraint-solving method, infeasible confusion branches were pruned promptly. Combining symbolic execution with constraint solving, the prototype tool ASym, based on the improved confusion algorithm, was developed. Preliminary experiments show that it can solve the confusion problem in branch structures and avoid array semantic errors in delayed replacement. Meanwhile, extensional branches are dramatically reduced and efficiency is improved.
ROP Attack Detecting Method Based on DBI
Computer Science. 2012, 39 (9): 120-125. 
Abstract PDF(549KB) ( 420 )   
With the spread of the idea of return-oriented programming (ROP), programs face many new kinds of threats from malicious code. With fine granularity, covert features, deliberate and sophisticated construction and rare static characteristics, ROP attacks can circumvent many traditional defenses. Under these circumstances, it is imperative to discover the dynamic features of an ROP attack program, identify its characteristics and defend against it as it executes. Here, dynamic binary instrumentation (DBI) technology provides powerful support for dynamic analysis of ROP attacks. We introduced a defense against ROP attacks with the help of DBI technology: by identifying malicious program execution flow and restricting the call conventions of libraries, we detect ROP attacks. Furthermore, we designed an extensible defense framework against ROP attacks to demonstrate the generality and portability of our detection tool.
Process-driven Method of Component Assembly for Forest Simulation System and its Application
Computer Science. 2012, 39 (9): 126-132. 
Abstract PDF(1043KB) ( 329 )   
Due to the changing requirements and application aims of forest simulation systems, this paper presented a process-driven method of component assembly for forest simulation systems. The method designs and implements the component models and interfaces of forest simulation components, and reuses business processes and computation algorithms by assembling software components. It can solve the existing problem that forest simulation systems are difficult to build or reconstruct quickly. In addition, the method was successfully applied to develop forest simulation systems, achieving their rapid construction and reconfiguration. The applications show that, compared with other methods such as code-level reuse and redevelopment, the proposed process-driven method of component assembly can significantly reduce the development cycle and development difficulty.
Behavioral Compatibility Test Generation of Component-based Real-time System
Computer Science. 2012, 39 (9): 133-137. 
Abstract PDF(380KB) ( 329 )   
Although component-based software development has gained widespread use, composition of component-based real-time systems is still complicated and error-prone. This paper presented an approach for behavioral compatibility test generation of component-based real-time systems. Necessary extensions of timed automata (TA) were proposed such that the new model can describe real-time components. Then, a compatibility coverage criterion was proposed and behavioral compatibility test generation was converted into reachability analysis of the model, based on which the corresponding algorithm was given. Finally, an example demonstrated how our technique works and performs.
Analyzing and Verifying of SysML Activity Diagram Based on Petri Net
Computer Science. 2012, 39 (9): 138-142. 
Abstract PDF(433KB) ( 517 )   
Systems Modeling Language (SysML) is the latest international standard systems engineering modeling language. It contains semantics and notations but lacks analysis and verification tools. To address this, this paper presented a method for converting SysML activity diagrams to Petri nets, defining six transformation rules for converting a SysML activity diagram into an executable Petri net model. Using these rules, a SysML activity diagram can be transformed into a Petri net, realizing its simplification, analysis and verification. Besides, concurrency-related behavioral properties such as deadlock and boundedness can be detected, and the consistency of the model was verified by the list method and simulation. Finally, an example was used to verify the feasibility of the method.
Bigraphical Reactive Systems Based on Nested Sortings
Computer Science. 2012, 39 (9): 143-151. 
Abstract PDF(755KB) ( 451 )   
Nested-sorting-based bigraphical reactive systems are an extended model of bigraph theory intended to characterize the nesting relation of controls of the place graph in bigraphical reactive systems. The definition of the nesting place graph was given by a signature category. Then, some properties, especially the construction of relative pushouts and the consistency conditions of idem pushouts in nesting place graphs, were presented. Furthermore, proofs of the corresponding propositions and theorems were given.
Research of Services Discovery Based on Service Access Log
Computer Science. 2012, 39 (9): 152-154. 
Abstract PDF(256KB) ( 332 )   
How to find services accurately and quickly is key to service applications. Due to the low recall and low precision of traditional keyword-based service discovery, this paper proposed a new service discovery method based on service access logs. The method optimizes traditional service discovery by mining the service access log, expanding queries, choosing the best service and finding possibly related services. Experiments show that this method achieves higher recall and higher precision compared with keyword-based service discovery.
Non-reuse-based Software Cost Estimate Model's Reuse Transformation and Model Coefficients Modification Strategies
Computer Science. 2012, 39 (9): 155-156. 
Abstract PDF(256KB) ( 435 )   
As an important means to control and manage software development cost, software cost estimation has become an important issue in software engineering. Reuse-based software development is becoming mainstream, but few software cost estimation models take the reuse factor into account. This paper proposed a reuse transformation model for non-reuse-based software cost estimation models, applied it to modify COCOMO, and then used the result to estimate the cost of an example project. The paper also showed a strategy of using stored procedure technology to calibrate the model coefficients. The model can provide a reference for various software development and reuse cost estimations.
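A minimal sketch of folding reuse into a non-reuse-based model, using the basic COCOMO effort equation. The 0.3 reuse-cost factor (reused code assumed to cost about 30% of new code) is an illustrative assumption, not the paper's calibrated coefficient:

```python
def cocomo_effort(kloc, a=2.4, b=1.05):
    """Basic COCOMO, organic mode: effort in person-months
    as a function of size in thousands of lines of code."""
    return a * kloc ** b

def equivalent_kloc(new_kloc, reused_kloc, reuse_cost_factor=0.3):
    """Fold reused code into an equivalent new-code size before
    applying a non-reuse-based estimation model."""
    return new_kloc + reuse_cost_factor * reused_kloc
```

With 10 KLOC of new code and 20 KLOC reused, the equivalent size is 16 KLOC, so the estimated effort is well below that of writing all 30 KLOC from scratch.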
FlowS:A Fair Scheduling Method for Mapreduce Dataflow
Computer Science. 2012, 39 (9): 157-161. 
Abstract PDF(523KB) ( 432 )   
RelatedCitation | Metrics
MapReduce job scheduling has received great attention in academic research. Based on an analysis of the MapReduce dataflow scheduling model, this paper presented a fair scheduling method for MapReduce dataflow called FlowS. The method not only provides isolation of MapReduce dataflows through dataflow pools, but also ensures fairness of resource allocation through a dynamic pool-construction algorithm. Experimental results show that the proposed method can improve the processing efficiency of Hadoop clusters.
Novel Graph Partition Based Clustering Algorithm---GAGPBCUK
Computer Science. 2012, 39 (9): 162-165. 
Abstract PDF(425KB) ( 389 )   
RelatedCitation | Metrics
A novel graph-partition-based clustering algorithm (GAGPBCUK) was proposed to overcome the defects of spectral clustering methods, such as parameter sensitivity and inaccurate results. Experimental results on three simulation datasets indicate that the proposed algorithm can not only determine the number of clusters in a dataset effectively, but also obtain more effective clustering results than a spectral clustering algorithm (i.e., the NJW algorithm).
Research of PSO-based Fuzzy C-means Clustering Algorithm
Computer Science. 2012, 39 (9): 166-169. 
Abstract PDF(348KB) ( 691 )   
RelatedCitation | Metrics
The fuzzy C-means algorithm is sensitive to the initial centroids, and the choice of the fuzzy weighting exponent m plays an important role in the clustering result. PSO has the advantages of global optimization and good convergence speed. This paper proposed a new method that combines PSO and fuzzy C-means to solve these problems. With a simple and effective particle encoding, the best initial centroids and the fuzzy weighting exponent are both searched during the PSO process. Experiments on synthetic data sets and several UCI data sets achieve good results.
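The particle encoding idea can be sketched as follows: each particle carries the k cluster centers plus the exponent m, and its fitness is the standard fuzzy C-means objective J_m with closed-form memberships. This is a minimal 1-D sketch under assumed PSO coefficients, not the paper's implementation; all function names and parameter values are illustrative.

```python
import random

def fcm_objective(data, centers, m):
    """Fuzzy C-means objective J_m for fixed centers (memberships in closed form)."""
    J = 0.0
    for x in data:
        d = [max(abs(x - c), 1e-9) for c in centers]
        for dj in d:
            u = 1.0 / sum((dj / dk) ** (2.0 / (m - 1)) for dk in d)
            J += (u ** m) * dj ** 2
    return J

def pso_fcm(data, k, iters=50, swarm=10, seed=0):
    """Each particle encodes k centers plus the fuzzifier m (the encoding idea)."""
    rng = random.Random(seed)
    lo, hi = min(data), max(data)
    P = [[rng.uniform(lo, hi) for _ in range(k)] + [rng.uniform(1.5, 3.0)]
         for _ in range(swarm)]
    V = [[0.0] * (k + 1) for _ in range(swarm)]
    pb = [p[:] for p in P]
    gb = min(pb, key=lambda p: fcm_objective(data, p[:k], p[k]))
    for _ in range(iters):
        for i, p in enumerate(P):
            for d in range(k + 1):
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * rng.random() * (pb[i][d] - p[d])
                           + 1.5 * rng.random() * (gb[d] - p[d]))
                p[d] += V[i][d]
            p[k] = min(max(p[k], 1.1), 5.0)   # keep the fuzzifier m in a sane range
            if fcm_objective(data, p[:k], p[k]) < fcm_objective(data, pb[i][:k], pb[i][k]):
                pb[i] = p[:]
        gb = min(pb, key=lambda p: fcm_objective(data, p[:k], p[k]))
    return sorted(gb[:k]), gb[k]

data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
centers, m = pso_fcm(data, k=2)
```

The returned centers then seed a conventional fuzzy C-means run, which is where the sensitivity to initialization is avoided.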
Quick Algorithm of Determining Arrow Relations of a Formal Context
Computer Science. 2012, 39 (9): 170-174. 
Abstract PDF(378KB) ( 298 )   
RelatedCitation | Metrics
In formal concept analysis, the so-called arrow relations play an important role in reducing the size of a formal context and in recognizing compatible subcontexts of a formal context. Thus, how to quickly determine the arrow relations between the objects and the attributes of a formal context is worth investigating. This study first gave an equivalent theorem for checking whether there is an arrow relation between an object and an attribute of a formal context. A quick algorithm for determining the arrow relations between the objects and the attributes of a formal context was then developed. Finally, a real example and numerical experiments were used to demonstrate the feasibility and efficiency of the proposed algorithm.
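For reference, the arrow relations can be computed directly from their textbook definitions (g ↙ m holds when (g,m) ∉ I and every object with a strictly larger intent has m; dually for ↗). The sketch below uses this naive definition, not the paper's quick algorithm; the context representation as a set of (object, attribute) pairs is an assumption.

```python
def object_intent(I, g):
    """Attributes of object g: g' = {m : (g, m) in I}."""
    return frozenset(m for (h, m) in I if h == g)

def attribute_extent(I, m):
    """Objects having attribute m: m' = {g : (g, m) in I}."""
    return frozenset(g for (g, n) in I if n == m)

def down_arrow(G, M, I, g, m):
    """g ↙ m: (g, m) not in I and every object with a strictly larger intent has m."""
    if (g, m) in I:
        return False
    gi = object_intent(I, g)
    return all(m in object_intent(I, h) for h in G if gi < object_intent(I, h))

def up_arrow(G, M, I, g, m):
    """g ↗ m: (g, m) not in I and every attribute with a strictly larger extent holds for g."""
    if (g, m) in I:
        return False
    me = attribute_extent(I, m)
    return all((g, n) in I for n in M if me < attribute_extent(I, n))

# A tiny example context.
G = {1, 2, 3}
M = {"a", "b", "c"}
I = {(1, "a"), (1, "b"), (2, "b"), (3, "c")}
print(down_arrow(G, M, I, 2, "a"))  # → True
```

This brute-force check costs O(|G|²·|M|) per query, which is exactly the overhead the paper's equivalent theorem is meant to reduce.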
Study of Automatic Keywords Labeling for Scientific Literature
Computer Science. 2012, 39 (9): 175-179. 
Abstract PDF(425KB) ( 374 )   
RelatedCitation | Metrics
Keywords of scientific literature provided by authors are helpful for readers, but some papers are not labeled with keywords for various reasons. This paper therefore proposed a new abstract-based automatic keyword prediction algorithm for scientific literature without keywords. The abstracts of papers whose keywords had been given by their authors were used as the training data set. Four text modeling methods, the language model (LM), latent Dirichlet allocation (LDA), the probabilistic author-topic model, and a combination of LM and LDA, were employed to model the abstracts and keywords in the training set and to build the relations between keywords and abstract terms. The trained models were then used to predict keywords for the abstracts of papers without keywords. Experimental results on both Chinese and English data sets show that the keywords predicted by the proposed algorithms reflect the content of the literature well. Among all the models, the combination of LM and LDA performs best.
CGP-WPSO Hybrid Algorithm for Gene Regulatory Network Modeling
Computer Science. 2012, 39 (9): 180-182. 
Abstract PDF(345KB) ( 464 )   
RelatedCitation | Metrics
The phenotype of the crops can be predicted through the gene regulatory network (GRN) , which is important for the global food security. This paper proposed a Cartesian genetic programming and linear decreasing inertia weight particle swarm optimization algorithm for GRN modeling. To verify the effectiveness of the proposed algorithm,we applied it to the recovery of the Arabidopsis flowering time control system. The computer simulation indicates that our proposed algorithm is able to infer the GRN model which can predict the phenotype of the crops fairly accurately based on its genotype and environmental conditions.
Group Search Optimizer Applying Opposition-based Learning
Computer Science. 2012, 39 (9): 183-187. 
Abstract PDF(419KB) ( 1162 )   
RelatedCitation | Metrics
Group search optimizer (GSO) is a new swarm intelligence algorithm based on the producer-scrounger model. GSO has been shown to yield good performance on various optimization problems; however, it tends to suffer from premature convergence and gets stuck in local minima. This paper proposed an enhanced GSO algorithm called GOGSO, which employs generalized opposition-based learning to transform the current population into an opposition population and uses an elite selection mechanism over the two populations. Experiments were conducted on a comprehensive set of benchmark functions, and the results show that GOGSO obtains promising performance.
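The two ingredients added to GSO can be sketched independently of the optimizer itself: generalized opposition-based learning reflects each individual inside the population's current per-dimension interval, and elite selection keeps the best individuals from the union of the two populations. A minimal sketch, assuming a random reflection coefficient k in [0, 1] and a simple sphere test function:

```python
import random

def opposition_population(pop, dim, rng):
    """Generalized opposition: reflect each coordinate in the population's interval."""
    a = [min(p[d] for p in pop) for d in range(dim)]
    b = [max(p[d] for p in pop) for d in range(dim)]
    k = rng.random()  # generalized OBL draws a random k in [0, 1]
    return [[k * (a[d] + b[d]) - p[d] for d in range(dim)] for p in pop]

def elite_select(pop, opp, fitness, size):
    """Keep the best `size` individuals from the union of both populations."""
    return sorted(pop + opp, key=fitness)[:size]

rng = random.Random(1)
sphere = lambda x: sum(v * v for v in x)
pop = [[rng.uniform(-5, 5) for _ in range(2)] for _ in range(6)]
opp = opposition_population(pop, 2, rng)
new_pop = elite_select(pop, opp, sphere, 6)
```

Because the union always contains the original population, elite selection can never make the best fitness worse; reflected points may land outside the search bounds and would need repair in a full implementation.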
Maximum Relative Separation Ratio Single Spherical Classifier with an Adaptive Upper Bound
Computer Science. 2012, 39 (9): 188-191. 
Abstract PDF(382KB) ( 320 )   
RelatedCitation | Metrics
Without taking the spread of negative-class samples into account, the objective of the single spherical classifier (RSS) is only to maximize the separation ratio. Following Fisher discriminant analysis, this paper introduced a relative margin into RSS to enhance the cohesion of negative-class samples and to improve discrimination accuracy through an upper-bound constraint in the feature space. Because the upper bound is unpredictable, a maximum relative separation ratio single spherical classifier with an adaptive upper bound (ARRSS) was built to avoid infeasibility, and its parameters were then studied. Experiments show that the proposed method achieves better generalization performance than RSS.
Algorithm of Constructing Maximal Consistent Block Based on Consistent Data Reinforcement
Computer Science. 2012, 39 (9): 192-197. 
Abstract PDF(516KB) ( 336 )   
RelatedCitation | Metrics
The maximal consistent block technique has advantages in dealing with incomplete information, but the construction of maximal consistent blocks itself can be a time-consuming process. After analyzing the properties of maximal consistent blocks in incomplete information systems, the consistent data reinforcement method was defined to handle the missing data in maximal consistent blocks. Using this method in incomplete information systems, a new algorithm for constructing maximal consistent blocks was obtained, and a decision tree structure was introduced to optimize the algorithm based on its characteristics. The algorithm was evaluated on standard benchmark datasets, and the experimental results indicate that it performs better on large-scale datasets.
Intuitionistic Fuzzy Filters of the BL-Algebras
Computer Science. 2012, 39 (9): 198-201. 
Abstract PDF(347KB) ( 478 )   
RelatedCitation | Metrics
Combining intuitionistic fuzzy sets and filter theory, we studied the intuitionistic fuzzy filters of BL-algebras. The basic notions of BL-algebras and intuitionistic fuzzy sets were first reviewed. The filter, lattice filter, Boolean filter, and implicative filter were then introduced in the intuitionistic fuzzy setting, and their important properties were investigated. The intuitionistic fuzzy filter was proved to be an intuitionistic fuzzy lattice filter, the intuitionistic fuzzy Boolean filter was proved to be equivalent to the intuitionistic fuzzy implicative filter in BL-algebras, and practical examples were used for verification. Finally, we discussed the relation between intuitionistic fuzzy filters and fuzzy filters.
Joint Inference Open Information Extraction Based on Markov Logic Networks
Computer Science. 2012, 39 (9): 202-205. 
Abstract PDF(385KB) ( 375 )   
RelatedCitation | Metrics
In recent decades, natural language processing has made great progress: good models of individual sub-problems achieve 90% accuracy or better. However, success in integrated, end-to-end natural language understanding remains elusive. The main reason is that systems processing sensory input typically have a pipeline architecture: the output of each stage is the input of the next, errors cascade and accumulate through the pipeline of naively chained components, and there is no feedback from later stages to earlier ones, even though later stages can in fact help earlier ones. A number of researchers have paid attention to this problem and proposed joint approaches, but these do not perform open information extraction (Open IE), which can identify various types of relations without requiring prespecification. We proposed a joint inference model that is based on Markov logic and can perform both traditional relation extraction and Open IE. The proposed model significantly outperforms other Open IE systems in terms of both precision and recall. The joint inference is efficient, and we demonstrated its efficacy on real-world Open IE tasks.
New Multi-label Sample Class Incremental Learning Algorithm
Computer Science. 2012, 39 (9): 206-207. 
Abstract PDF(266KB) ( 373 )   
RelatedCitation | Metrics
For multi-label samples, a class incremental learning algorithm based on hyper-ellipsoids was proposed. For every class, the smallest hyper-ellipsoid containing most samples of the class is constructed, which separates the class samples from the others. In the class incremental learning process, the hyper-ellipsoids of new classes are constructed, and a historical hyper-ellipsoid is reconstructed only when its class appears in the incremental samples. Multi-label class incremental learning is thus realized in a small memory space, while the historical results unrelated to the new sample classes are preserved. A sample to be classified is assigned to a class by the hyper-ellipsoid it belongs to, or by its membership degree. Experimental results show that the algorithm achieves higher classification speed and precision than the hyper-sphere algorithm.
Approach of Radar Target Recognition Based on Multiple Polarization Features Fusion
Computer Science. 2012, 39 (9): 208-210. 
Abstract PDF(376KB) ( 385 )   
RelatedCitation | Metrics
Aiming at the dramatic increase in data in multi-polarized high resolution range profile (HRRP) target recognition, a recognition algorithm based on low-dimensional time-shift invariant feature vectors and dynamic combination of multiple classifiers was proposed. In this algorithm, three one-dimensional features of each single-polarized HRRP sequence are first extracted to form the time-shift invariant feature vectors; a general classifier combination is then obtained by dynamic ensemble of multiple classifiers and used for classification. Finally, the classification results of the four single-polarized HRRPs are combined by weighted voting. Experimental results indicate that the algorithm not only reduces the size of the data but also uses polarization information effectively to achieve a higher correct recognition rate.
Method for Fricative and Affricate Classification Based on Articulatory Characteristic
Computer Science. 2012, 39 (9): 211-214. 
Abstract PDF(386KB) ( 489 )   
RelatedCitation | Metrics
A fricative and affricate classification method based on articulatory characteristics was proposed. In this method, segment energy distributions and spectral statistical features are first obtained from Seneff's auditory spectrum, describing the differences between the two classes well; fricative and affricate classification is then performed with a support vector machine model. Experimental results show that the classification accuracy is 90.08% for clean speech and 80.4% for noisy speech at an SNR of 5dB. Compared with traditional classification methods based on time-frequency energy distribution features, the proposed method achieves a large performance improvement at low SNR.
Discrete Particle Swarm Optimization Algorithm for Solving Dynamic Knapsack Problem
Computer Science. 2012, 39 (9): 215-219. 
Abstract PDF(426KB) ( 365 )   
RelatedCitation | Metrics
The dynamic knapsack problem (DKP) is a classic dynamic optimization problem that can describe many practical issues. So far, the study of the dynamic knapsack problem has mainly focused on genetic algorithms, and particle swarm optimization has rarely been applied. This paper proposed a discrete particle swarm optimization algorithm for solving the dynamic knapsack problem (DSDPSO), which introduces environment change detection and post-change response mechanisms. The algorithm was compared with the classic adaptive primal-dual genetic algorithm (APDGA) on two dynamic knapsack problems, and the results show that DSDPSO rapidly finds the optimal solution and remains stable after environment variation. Consequently, this algorithm is well suited to solving dynamic knapsack problems.
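The change-detection and response mechanisms can be sketched generically: a stored "sentinel" solution is re-evaluated each generation, and a fitness change signals a new environment, after which part of the swarm is re-initialized. This is a common pattern in dynamic optimization, shown here on a tiny 0/1 knapsack; it is an assumed illustration, not the paper's specific DSDPSO mechanism.

```python
import random

def detect_change(sentinel, last_value, evaluate):
    """A change is flagged when a stored sentinel solution changes fitness."""
    v = evaluate(sentinel)
    return v != last_value, v

def respond(swarm, fraction, init, rng):
    """Re-initialize a fraction of the swarm so stale memory is discarded."""
    n = int(len(swarm) * fraction)
    for i in rng.sample(range(len(swarm)), n):
        swarm[i] = init()
    return swarm

weights, values = [2, 3, 4], [3, 4, 5]
capacity = 5

def evaluate(x):
    """0/1 knapsack value; infeasible picks score 0."""
    w = sum(wi for wi, xi in zip(weights, x) if xi)
    return sum(vi for vi, xi in zip(values, x) if xi) if w <= capacity else 0

sentinel = [1, 1, 0]
base = evaluate(sentinel)          # value 7 under capacity 5
capacity = 4                       # the environment changes
changed, _ = detect_change(sentinel, base, evaluate)
print(changed)                     # → True
```

A real DKP solver would also repair infeasible particles after the change rather than only re-randomizing them.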
Design and Realization of MDAS for Transportation Safety Management
Computer Science. 2012, 39 (9): 220-224. 
Abstract PDF(506KB) ( 388 )   
RelatedCitation | Metrics
This paper introduced a method for applying multi-dimensional analysis technology to GPS monitoring data of commercial vehicles, to address the lack of pertinence and validity in current decision-making for roadway transportation safety management. After analyzing the state of existing applications, a general solution of MDAS for transportation safety management was presented, including the global network topology framework and a five-layer system structure. A data extractor based on DTS and triggers was designed, and the non-spatial and spatial data were preprocessed with generalizability theory and nearest-point map-matching. Then the snowflake model of the data warehouse, the MDAS data collection, and the MDX-based data analysis engine were devised. On the basis of these technologies, the MDAS for Chongqing's roadway transportation safety management was developed. Real application shows that the MDAS meets the requirements of decision support in transportation safety management.
Construction and Visualization Method of Application Knowledge Map
Computer Science. 2012, 39 (9): 225-228. 
Abstract PDF(414KB) ( 389 )   
RelatedCitation | Metrics
To address the problem of combining and applying different kinds of knowledge, this paper put forward a construction and visual representation method for the application knowledge map (AKM). It gave formal descriptions of the interactive interface, ontology operations, knowledge types, and knowledge specification, and introduced the construction method, basic model, and visual drawing process. Oriented to business topics, the AKM is built from components and Web services; it solves the integration problem of diverse knowledge applications in business processes and improves system service construction with good efficiency, scalability, flexibility, and practicality.
Kernel Covariance Component Analysis and its Application in Clustering
Computer Science. 2012, 39 (9): 229-234. 
Abstract PDF(534KB) ( 440 )   
RelatedCitation | Metrics
A new feature dimensionality reduction method called kernel covariance component analysis (KCCA) was put forward, based on the criterion that the transformed data best preserve the difference (denoted D-vs-E) between the total densities and the Renyi entropy of the input data space, induced from the kernel covariance matrix. A generalized version of D-vs-E was also developed. KCCA achieves its goal by projecting onto a subset of the D-vs-E-preserving kernel principal component analysis (KPCA) axes; in contrast to KPCA, this subset does not generally correspond to the top eigenvalues of the kernel matrix. KCCA is rooted in the new concept of D-vs-E rather than Renyi entropy. Experimental results also show that KCCA is more robust to the choice of the Gaussian kernel bandwidth when used in clustering.
Interdisciplinary Collaborative Literature Recommendation Based on Topic Modeling
Computer Science. 2012, 39 (9): 235-239. 
Abstract PDF(546KB) ( 382 )   
RelatedCitation | Metrics
Although researchers can use various search tools for literature retrieval, finding relevant literature efficiently among the vast amount available is becoming more and more difficult. A series of recently emerging online communities for researchers offers a new solution. A topic-model-based method for interdisciplinary literature recommendation was presented, combining traditional collaborative filtering with a probabilistic topic model; a knowledge collaboration network model was also put forward, providing a distinguishable latent semantic structure. Based on different users' ratings of given literature and on the topic distribution of newly published literature, the paper used semantic similarity calculation and proposed a probability-based retrieval recommendation for interdisciplinary research. A data set from CiteULike was studied, and the experimental results show the feasibility and effectiveness of this method.
Dynamic Self-adaptive Harmony Search Algorithm for Solving High-dimensional Complex Optimization Problems
Computer Science. 2012, 39 (9): 240-243. 
Abstract PDF(398KB) ( 336 )   
RelatedCitation | Metrics
This study presented a dynamic self-adaptive harmony search (DSHS) algorithm for solving high-dimensional optimization problems. In the proposed DSHS algorithm, an orthogonal experimental design algorithm is used to initialize the population, and two new harmony adjustment operators, a multi-dimensional dynamic adaptive adjustment operator and a one-dimensional tone fine-tuning operator, are integrated into the improvisation scheme. To avoid the search being trapped in local optima, an improved bandwidth adjustment algorithm is employed to enhance the disturbance performance. Experiments on 6 benchmark functions show that, compared with most other approaches, the proposed algorithm has fast convergence, good stability, and strong space exploration capability on high-dimensional complex optimization problems.
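For context, basic harmony search with a dynamically decreasing pitch bandwidth (the component the improved bandwidth adjustment builds on) can be sketched as below. This is the standard HS baseline under assumed HMCR/PAR values and an exponential bandwidth decay, not the paper's DSHS operators.

```python
import random, math

def harmony_search(f, dim, bounds, hms=10, hmcr=0.9, par=0.3, iters=300, seed=0):
    """Basic harmony search with an exponentially decreasing pitch bandwidth."""
    rng = random.Random(seed)
    lo, hi = bounds
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    for t in range(iters):
        bw = (hi - lo) * math.exp(-4.0 * t / iters)   # dynamic bandwidth
        new = []
        for d in range(dim):
            if rng.random() < hmcr:                   # memory consideration
                x = rng.choice(hm)[d]
                if rng.random() < par:                # pitch adjustment
                    x += rng.uniform(-bw, bw)
            else:                                     # random selection
                x = rng.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        worst = max(range(hms), key=lambda i: f(hm[i]))
        if f(new) < f(hm[worst]):
            hm[worst] = new
    return min(hm, key=f)

sphere = lambda x: sum(v * v for v in x)
best = harmony_search(sphere, dim=5, bounds=(-10, 10))
```

DSHS replaces the fixed HMCR/PAR schedule and this single bandwidth rule with its adaptive multi-dimensional and one-dimensional operators.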
Formal Deduction System for Solving Einstein's Riddle
Computer Science. 2012, 39 (9): 244-246. 
Abstract PDF(205KB) ( 569 )   
RelatedCitation | Metrics
The time complexity of the SAT-based approach for solving Einstein's riddle is very high. Aiming at this problem, we constructed a formal system called Γ. First, we described the riddle with a set of axioms and a set of rules in Γ. Then we used the axioms and rules to prove some theorems in Γ, thereby obtaining a solution to the riddle. Compared with the existing approach, the new method gives a deduction-based procedure for solving the riddle and avoids the state explosion problem.
Visualize Black-box of NN Model and its Application in Dimensionality Reduction
Computer Science. 2012, 39 (9): 247-251. 
Abstract PDF(548KB) ( 533 )   
RelatedCitation | Metrics
Since a neural network can easily fit the nonlinear mapping from the input space to the output space, users of artificial neural networks often build a black-box model directly from data pairs of input and output variables, without considering the dependencies between them. The resulting model often contains redundant variables, which leads to poor reliability and robustness. An approach to increase the visibility of the black-box properties of neural networks was proposed. First, the network interpretation diagram is employed to make the network transparent. Then, the connection weights method is used to compute the relative contribution of each input variable, estimating its importance to the output variable. Lastly, significance tests of the connection weights and of the contribution ratios of input variables are implemented using improved randomization tests to trim the model; redundant variables are eliminated by intersecting the variables that are insignificant in overall contribution with those insignificant in relative contribution rate, realizing dimensionality reduction of the neural network model. Experimental results indicate that the method can increase the transparency of the model, select the best input variable set, eliminate redundant input variables, and improve the reliability and robustness of the model. The study thus provides a new approach to visualizing neural network models and eliminating redundant input variables.
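The connection weights method itself is simple: for a one-hidden-layer network, the importance of input i is the sum over hidden units of the input-hidden weight times the hidden-output weight (Olden's formulation). A minimal sketch with assumed toy weights:

```python
def connection_weights(w_ih, w_ho):
    """Connection-weights importance: score_i = sum_h w_ih[i][h] * w_ho[h]."""
    return [sum(w_ih[i][h] * w_ho[h] for h in range(len(w_ho)))
            for i in range(len(w_ih))]

# Toy network: 3 inputs, 2 hidden units, 1 output.
# Input 0 dominates; input 2 is nearly disconnected (a redundant variable).
w_ih = [[2.0, 1.5],
        [0.5, -0.4],
        [0.01, 0.02]]
w_ho = [1.0, 0.8]

scores = connection_weights(w_ih, w_ho)
ranked = sorted(range(3), key=lambda i: -abs(scores[i]))
print(ranked)  # → [0, 1, 2]
```

The randomization test in the paper then asks whether each score is larger than scores obtained from networks trained on permuted data; inputs whose scores are not significant (like input 2 here) are candidates for elimination.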
Design and Implementation of VMM-based File Integrity Monitoring System
Computer Science. 2012, 39 (9): 252-256. 
Abstract PDF(807KB) ( 332 )   
RelatedCitation | Metrics
A virtual machine monitor (VMM) has strong control ability and the property of isolation, which can solve open questions in existing file integrity monitoring systems. A new VMM-based file integrity protection system was proposed that is isolated from the guest systems. The method preconfigures the files to be protected and can prevent attacks on these files by malicious code. In this file integrity protection scheme, the system intercepts all access attempts to the protected files in real time by designing and implanting a "detector" and a "reversed file locator" into the isolation layer of the virtual machine, thereby achieving a pre-protection strategy.
Medical Object Extraction by Gaussian Mixture Model and Region Competition Active Contour Model
Computer Science. 2012, 39 (9): 257-261. 
Abstract PDF(693KB) ( 324 )   
RelatedCitation | Metrics
This paper proposed a regional active contour model with an embedded classifier, based on a Gaussian mixture model fitted to the intensity distribution of the medical image. The difference between the maximum probability of the intensities belonging to the classes or subclasses of the object and those of the background is used as an energy term in the active contour model, and minimization of the whole energy function leads to a novel iterative equation. An additional speed-controlling term slows down the evolution of the active contour when it approaches an edge, making it converge quickly to the desired object. The model was applied to liver segmentation and compared with the geodesic active contour, C-V (active contour without edges), and manual outlining. The experiments show that the model is accurate, flexible, and well suited to extracting objects surrounded by a complicated background.
Face Feature Extraction Based on Weighted Multiple Kernel Fisher Discriminant Analysis
Computer Science. 2012, 39 (9): 262-265. 
Abstract PDF(358KB) ( 387 )   
RelatedCitation | Metrics
Kernel Fisher discriminant analysis is an effective nonlinear discriminant analysis method. Traditional kernel Fisher discriminant analysis uses only a single kernel function, which makes it insufficient for face feature extraction; we therefore proposed multiple kernel Fisher discriminant analysis. Weighted projections are obtained through a weighted combination of several projections obtained by single-kernel Fisher discriminant analysis, and feature extraction and classification are then performed on the weighted projections. Experimental results show that the weighted multiple kernel Fisher discriminant analysis method is superior to single-kernel Fisher discriminant analysis for facial feature extraction and classification.
Multi-step Self-calibration Method Based on Kruppa Equations
Computer Science. 2012, 39 (9): 266-268. 
Abstract PDF(339KB) ( 1436 )   
RelatedCitation | Metrics
Most existing camera self-calibration algorithms suffer from the nonlinear optimization of the Kruppa equations and a lack of robustness. To solve these problems, a multi-step self-calibration method was proposed. The method begins by computing the fundamental matrix between images with the 8-point algorithm. Candidate matching images are acquired by taking two pictures of one scene with different focal lengths, and constraint correspondences between the feature points of the two images are established. The coordinates of the principal point are calculated by the least squares method. Finally, a genetic algorithm is adopted to optimize the scale factor of the Kruppa equations and complete the camera calibration. Experimental results indicate that the proposed method effectively improves the accuracy and robustness of Kruppa-equation-based self-calibration.
Interpretation of Online Multistroke Sketching with Polylines Based on the Time-space Relationship
Computer Science. 2012, 39 (9): 269-274. 
Abstract PDF(530KB) ( 594 )   
RelatedCitation | Metrics
This paper presented a novel method for interpreting multistroke freehand sketching with polylines. The multistrokes are interpreted as sketch content and used to generate 2D geometric primitives. The proposed pretreatment approach can interpret both dashed and continuous strokes. Polyline multistroke interpretation is classified into two cases: polylines with straight lines, and polylines with polylines; some new concepts are defined for discussing the latter. Corresponding algorithms are given for online multistroke sketch interpretation. Using the proposed theory, a human-computer interface prototype system, FSR, was developed, which makes the system interface easy to use. The FSR system was tested with a number of multistroke sketches, and the test results show that the algorithm achieves satisfactory interpretation efficiency.
Local and Global Margin Embedding Method for Feature Extraction of Face Image
Computer Science. 2012, 39 (9): 275-278. 
Abstract PDF(368KB) ( 485 )   
RelatedCitation | Metrics
To overcome the disadvantage that the penalty graph constructed by marginal Fisher analysis (MFA) cannot sufficiently describe interclass separability, this paper proposed a novel feature extraction method called local and global margin embedding (LGME). In LGME, all interclass data pairs are used to construct the penalty graph, while the importance of the limited interclass data pairs with minimal margins is properly emphasized. Compared with MFA, LGME uses local and global interclass margins simultaneously to characterize interclass separability, so the features extracted by LGME have more discriminative power. Experimental results show that face recognition on features extracted by LGME achieves a higher recognition rate and is more robust.
Face Aging Detection Based on Small Neighborhoods and Feature Groups
Computer Science. 2012, 39 (9): 279-281. 
Abstract PDF(277KB) ( 384 )   
RelatedCitation | Metrics
The face aging process is very complex, and the skin around facial features and wrinkles changes subtly, with a lot of noise. Traditional skin groove matching methods cannot depict these subtle changes in aging characteristics well, so their face aging detection accuracy is low. A face aging detection algorithm based on small neighborhoods and feature groups was proposed. The correspondence between small face image neighborhoods and feature groups is established, and an energy function of the joint probability distribution of the motion parameters is constructed to reflect the constraints between the parameters. A simulated annealing algorithm is used to seek the optimal solution of the constrained association of skin groove feature points, obtaining the aging characteristics under different aging conditions. Experiments show that this method can recognize aging skin groove characteristics under different face states, with more accurate detection results and higher accuracy.
Visual Analysis of Double Flowers Topology Structure Model Algorithm
Computer Science. 2012, 39 (9): 282-283. 
Abstract PDF(265KB) ( 400 )   
RelatedCitation | Metrics
Aiming at the problems that the structure of double flowers is relatively complex, the arrangement of petals is strongly nonlinear, and three-dimensional simulation results are poor, this paper put forward a topological structure modeling algorithm for double flowers. Taking the peony as an example, and following botanical petal arrangement theory, the algorithm joins a randomized function and extends the L-system to construct flower form models at different levels, uses bicubic Bezier surfaces to construct the petal model, and applies a triangle-strip-based 2D texture mapping method, so that flowers of any level can be modeled. The method is simple and intuitive, and the rendered results are realistic.
Fast Robust Thresholding Method Based on Two-dimensional Renyi's Entropy
Computer Science. 2012, 39 (9): 284-288. 
Abstract PDF(463KB) ( 324 )   
RelatedCitation | Metrics
Thresholding based on two-dimensional Renyi entropy is a classical segmentation approach; however, it is sensitive to salt-and-pepper noise. This paper introduced median filtering, which is robust to salt-and-pepper noise, into the construction of the two-dimensional histogram. To further improve the performance of the algorithm, an improved particle swarm optimization with increasing inertia weight was employed. Several experiments show that the algorithm is effective.
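The key data structure, a 2D histogram over (gray value, median-filtered value) pairs, can be sketched as below; isolated salt-and-pepper pixels land off the histogram's main diagonal, which is what makes this pairing robust. A minimal pure-Python sketch with edge pixels clamped to the image border (the clamping policy is an assumption):

```python
def median3x3(img, r, c):
    """Median of the 3x3 neighborhood (edge pixels clamped to the image)."""
    rows, cols = len(img), len(img[0])
    vals = sorted(img[min(max(r + dr, 0), rows - 1)][min(max(c + dc, 0), cols - 1)]
                  for dr in (-1, 0, 1) for dc in (-1, 0, 1))
    return vals[4]  # middle of 9 values

def hist2d(img, levels=256):
    """2D histogram over (gray value, median-filtered value) pairs."""
    h = [[0] * levels for _ in range(levels)]
    for r in range(len(img)):
        for c in range(len(img[0])):
            h[img[r][c]][median3x3(img, r, c)] += 1
    return h

img = [[10, 10, 10],
       [10, 255, 10],   # an isolated salt-noise pixel
       [10, 10, 10]]
h = hist2d(img)
print(h[255][10], h[10][10])  # → 1 8
```

The noise pixel falls in bin (255, 10), far from the diagonal cluster at (10, 10), so a diagonal threshold search is not pulled toward it; the entropy criterion and PSO search then operate on this histogram.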
Image Threshold Segmentation Method Based on Improved Particle Swarm Optimization
Computer Science. 2012, 39 (9): 289-291. 
Abstract PDF(329KB) ( 566 )   
RelatedCitation | Metrics
In image segmentation, optimal threshold selection is the key to the segmentation result. When processing different kinds of image regions, the premature convergence of particle swarm optimization (PSO) makes it difficult to calculate the optimal segmentation threshold accurately, and the accuracy rate is low. To improve segmentation accuracy and extract image targets precisely, this paper proposed an image threshold segmentation method based on chaos particle swarm optimization (CPSO). Benefiting from the ergodicity and sensitivity to initial conditions of the chaotic operation, CPSO alleviates premature aggregation of the swarm, avoids being trapped in local optima, and accelerates the search for the global optimal solution. Simulation experiments show that, compared with other image segmentation algorithms, CPSO not only speeds up the operation and improves the efficiency of image segmentation but also improves segmentation accuracy, making it well suited to real-time image segmentation.
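The chaotic ingredient in CPSO variants is commonly the logistic map x_{k+1} = μ·x_k·(1 − x_k) with μ = 4, whose ergodic sequence is mapped onto the search interval, e.g. to initialize or perturb particles. A minimal sketch of that step (whether this exact map and use is what the paper adopts is an assumption):

```python
def logistic_sequence(n, x0=0.7, mu=4.0):
    """Chaotic logistic map x_{k+1} = mu * x_k * (1 - x_k); values stay in (0, 1)."""
    xs, x = [], x0
    for _ in range(n):
        x = mu * x * (1 - x)
        xs.append(x)
    return xs

def chaotic_init(n_particles, dim, lo, hi, x0=0.7):
    """Map one chaotic sequence onto the search interval [lo, hi]."""
    seq = logistic_sequence(n_particles * dim, x0)
    return [[lo + (hi - lo) * seq[p * dim + d] for d in range(dim)]
            for p in range(n_particles)]

# e.g. candidate gray-level thresholds in [0, 255]
swarm = chaotic_init(5, 2, 0.0, 255.0)
```

Because the logistic sequence is ergodic over (0, 1), the initial swarm covers the threshold range more evenly than independent uniform draws of a small sample, which is the premature-convergence remedy the abstract refers to.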
Implementation of Four-way Double Precision Short Vector Registers in GCC Backend
Computer Science. 2012, 39 (9): 292-295. 
Abstract PDF(386KB) ( 452 )   
RelatedCitation | Metrics
It takes several years to design and implement a new product-level compiler, so developing a compiler for a new architecture usually starts from an already-released product-level compiler. The GNU Compiler Collection (GCC) supports multiple high-level languages and multiple platforms, and its internal documentation and source code are open. Based on the SPARC backend of GCC, we implemented the description of four-way double-precision short vector registers which support four-way double-precision SIMD instructions. In this process, we defined a new target machine, added a new vector mode, defined a new class of register constraints, provided the descriptions of the four-way double-precision short vector registers, and designed the machine descriptions of the four-way double-precision SIMD instructions. For the builtin functions of this kind of SIMD instructions, our GCC produces correct SIMD instructions using such vector registers.
Communication Optimization Algorithm Using Reordering Transformation and Loop Distribution
Computer Science. 2012, 39 (9): 296-301. 
Abstract PDF(493KB) ( 470 )   
RelatedCitation | Metrics
Aiming at the problem that existing communication optimization algorithms cannot make MPI automatic parallelizing compilers generate message-passing programs with ideal speedups, this paper proposed a communication optimization algorithm using reordering transformation and loop distribution. According to the interprocedural side-effect sets and a reordering transformation rule based on mpi_wait/mpi_irecv movements, the algorithm applies reordering transformation and loop distribution in order, and then expands the communication-computation overlap windows of point-to-point non-blocking communication as far as safety allows, so that MPI automatic parallelizing compilers can generate message-passing code that overlaps communication with more computation. Experimental results show that this algorithm can hide more point-to-point non-blocking communication overhead than other algorithms and improve speedups significantly.
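The loop-distribution half of the transformation can be illustrated with a language-neutral sketch (plain Python, with a callable standing in for the pending non-blocking receive; the function names are illustrative, not from the paper). The original loop consumes received data in every iteration, so the mpi_wait must complete before the loop starts; after distribution, the independent part runs inside the overlap window.

```python
def fused(a, b, recv):
    # Original loop: every iteration mixes independent computation with a
    # use of received data, forcing the wait to complete before the loop.
    n = len(a)
    out = [0] * n
    for i in range(n):
        out[i] = a[i] * a[i] + b[i] + recv[i]
    return out

def distributed(a, b, recv_future):
    # After loop distribution: the independent half runs first and can
    # overlap with the still-in-flight non-blocking receive; calling
    # recv_future() plays the role of mpi_wait.
    n = len(a)
    tmp = [a[i] * a[i] + b[i] for i in range(n)]  # no use of received data
    recv = recv_future()                          # conceptually: mpi_wait here
    return [tmp[i] + recv[i] for i in range(n)]
```

Both versions compute the same result; the transformation only enlarges the window between posting the receive and waiting on it.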
Design of Quantum Comparator Based on Extended General Toffoli Gates with Multiple Targets
Computer Science. 2012, 39 (9): 302-306. 
Abstract PDF(386KB) ( 544 )   
RelatedCitation | Metrics
By employing extended general Toffoli gates with multiple targets, a constructive method for a classical quantum information comparator was presented, and its correctness was proved theoretically. Based on it, an application of the quantum comparator in the quantum search algorithm was given. Compared with other similar quantum comparators, ours uses fewer ancilla qubits, so the required quantum resources are saved. By setting the control conditions of the extended general Toffoli gates with multiple targets, the subsequent gates no longer fire once the comparison result is obtained. Thus the efficiency is improved, the error rate is reduced, and the robustness of the comparator is enhanced.
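A classical simulation on computational basis states can illustrate the "subsequent gates stop firing" idea. This sketch is a plausible reconstruction, not the paper's circuit: mixed-polarity multi-controlled NOT gates scan the operands MSB-first, and the two ancilla flags, once set, act as negative controls that disable all later gates.

```python
def toffoli(state, controls, target):
    """Extended generalised Toffoli on a basis state: flip `target` iff
    every control matches. `controls` is a list of (wire, required_value)
    pairs, so both positive and negative controls are allowed."""
    if all(state[w] == v for w, v in controls):
        state[target] ^= 1

def compare(a_bits, b_bits):
    """Run the comparator circuit on basis-state inputs (MSB first).
    Returns (gt, lt): (1,0) if a>b, (0,1) if a<b, (0,0) if equal."""
    n = len(a_bits)
    # wire layout: a[0..n-1], b[0..n-1], ancilla GT, ancilla LT
    state = list(a_bits) + list(b_bits) + [0, 0]
    GT, LT = 2 * n, 2 * n + 1
    for i in range(n):  # scan from the most significant bit
        # Negative controls on both flags: once the result is decided,
        # every later gate's control condition fails and it cannot fire.
        undecided = [(GT, 0), (LT, 0)]
        toffoli(state, [(i, 1), (n + i, 0)] + undecided, GT)  # a_i=1,b_i=0
        toffoli(state, [(i, 0), (n + i, 1)] + undecided, LT)  # a_i=0,b_i=1
    return state[GT], state[LT]
```

Only two ancilla wires are used for the result flags, and after the first differing bit all remaining gates are inert, which is the behaviour the abstract attributes to the control conditions.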
Branch Predictor with TBHBP Based on Simultaneous Multithreaded Processors
Computer Science. 2012, 39 (9): 307-311. 
Abstract PDF(469KB) ( 408 )   
RelatedCitation | Metrics
Aiming at the alias and capacity conflicts and the out-of-date prediction information that affect branch instructions, a new branch predictor, TBHBP, based on simultaneous multithreading processors was proposed. In this branch predictor, comprehensive historical information combining per-thread history with an index based on the address information of the local history is used as the index of the pattern history table (PHT). Through per-thread ownership of the history register and branch history register, and by adding a branch result output table, the execution speed of branch prediction is improved. The results show that the TBHBP branch predictor effectively alleviates out-of-date branch information as well as branch instruction alias and capacity conflicts. Compared with Gshare, instruction throughput rises by up to 12.5%, while the branch prediction error rate and the wrong-path rate drop by 0.5% and 2.1% respectively.
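For reference, the Gshare baseline that TBHBP is measured against can be sketched in a few lines. This is the generic textbook Gshare (global history XORed with PC bits indexing a table of 2-bit saturating counters), not the paper's specific configuration; table size and history length are illustrative.

```python
class Gshare:
    """Gshare predictor: (PC XOR global history) indexes 2-bit counters.
    Because all threads would share one global history register, SMT
    workloads suffer exactly the alias and stale-history effects that
    per-thread schemes such as TBHBP try to remove."""

    def __init__(self, history_bits=8):
        self.mask = (1 << history_bits) - 1
        self.ghr = 0                           # global history register
        self.pht = [1] * (1 << history_bits)   # counters start weakly not-taken

    def _index(self, pc):
        return (pc ^ self.ghr) & self.mask

    def predict(self, pc):
        return self.pht[self._index(pc)] >= 2  # taken if counter in {2, 3}

    def update(self, pc, taken):
        idx = self._index(pc)
        if taken:
            self.pht[idx] = min(self.pht[idx] + 1, 3)
        else:
            self.pht[idx] = max(self.pht[idx] - 1, 0)
        # shift the outcome into the global history
        self.ghr = ((self.ghr << 1) | int(taken)) & self.mask
```

In an SMT pipeline, interleaved threads scramble `ghr` and collide in `pht`, which is the capacity/alias conflict the abstract refers to.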