Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 40 Issue 1, 16 November 2018
  
Description Logics for Relations in Databases
Computer Science. 2013, 40 (1): 1-4. 
Two description logics for relations in databases were given in our work. In the first description logic, a relation can be represented as a knowledge base together with a model of that knowledge base, such that any model of the knowledge base has a sub-model isomorphic to the relation. In the second description logic, a relation can be represented as a knowledge base and a model of that knowledge base such that any model of the knowledge base is isomorphic to the relation.
New Random Testing-based Fault Localization Approach
Computer Science. 2013, 40 (1): 5-13. 
Fixing faults in software is an essential task in software development, and many approaches have been presented to automate fault localization. Among them, testing-based approaches, collectively called TBFL approaches, are the most promising: they use the information of test cases to localize faults. However, existing TBFL approaches ignore the similarity of test cases, which may harm their effectiveness; in fact, it is impossible to completely avoid redundancy. Therefore, this paper presented a new TBFL approach, named random TBFL, from a new point of view. The basic idea is to view the program as a random variable: before testing, a prior distribution over the error probability of each statement of the program is given; then the error probabilities of statements are adjusted based on the execution information of the test suite, and the adjusted probability, called the posterior probability, is finally used to localize the faults. This paper integrated the traditional TBFL approaches into the random framework and compared and analyzed them on several instances. The analysis demonstrates that the random TBFL approach can correctly locate faults, and that redundancy has little influence on its effectiveness.
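The prior-to-posterior adjustment described above can be sketched as follows. This is an illustrative reading only, not the paper's exact formulas: the update factors (2.0 for failing tests, 0.5 for passing ones) and the tiny test suite are hypothetical.

```python
# Illustrative sketch of random TBFL's Bayesian-style update: raise the
# suspiciousness of statements covered by failing tests, lower it for
# statements covered by passing tests, then renormalize.
def posterior_suspiciousness(prior, coverage, outcomes):
    """prior: prior error probability per statement.
    coverage: coverage[t] = set of statements executed by test t.
    outcomes: outcomes[t] = True if test t failed."""
    post = list(prior)
    for covered, failed in zip(coverage, outcomes):
        for s in covered:
            post[s] *= 2.0 if failed else 0.5  # hypothetical update factors
    total = sum(post)
    return [p / total for p in post]

ranked = posterior_suspiciousness(
    prior=[0.25, 0.25, 0.25, 0.25],
    coverage=[{0, 1}, {1, 2}, {2, 3}],
    outcomes=[False, True, False],
)
# Statements 1 and 2, the only ones covered by the failing test, rank highest.
```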
Dataflow Processing in Service Workflow
Computer Science. 2013, 40 (1): 14-18. 
Service workflow needs to deal with a large number of heterogeneous data interactions between services, and different data processing approaches can directly affect workflow execution efficiency. Data modeling issues such as dataflow implementation models, data mapping techniques and dataflow validation approaches were firstly presented. Then, several data management topics during workflow execution were discussed, including dataflow scheduling, data storage challenges and data transfer challenges. In addition, the current situation of dataflow processing in service workflow was analyzed. Finally, combined with existing dataflow-related work, some future research directions for dataflow were pointed out.
Study on Recommendation Algorithm with Matrix Factorization Method Based on MapReduce
Computer Science. 2013, 40 (1): 19-21. 
Matrix factorization is a collaborative filtering recommendation technique proposed in recent years. In the process of recommendation, each prediction depends on the collaboration of the whole known rating set, and the feature matrices need huge storage, so recommendation on a single node meets a bottleneck of time and resources. A MapReduce-based matrix factorization recommendation algorithm was proposed to solve this problem. The big feature matrices are shared through Hadoop's distributed cache and MapFile techniques. The MapReduce algorithm can also handle the multi-? situation. Experiments on the Netflix data set show that the MapReduce-based algorithm achieves high speedup and improves the efficiency of collaborative filtering.
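As an illustration of the per-rating work a mapper would perform with the shared feature matrices P and Q read from the distributed cache, here is a latent-factor stochastic-gradient update. This is a sketch under assumed notation; the paper's exact update rule, learning rate and regularization are not given here.

```python
# Hedged sketch: one SGD step of latent-factor matrix factorization.
# P[u] and Q[i] are the user and item feature vectors.
def sgd_step(P, Q, u, i, r, lr=0.01, reg=0.02):
    """One stochastic-gradient update for an observed rating r of user u on item i."""
    pu, qi = P[u][:], Q[i][:]                      # copy old factors
    err = r - sum(a * b for a, b in zip(pu, qi))   # prediction error
    P[u] = [a + lr * (err * b - reg * a) for a, b in zip(pu, qi)]
    Q[i] = [b + lr * (err * a - reg * b) for a, b in zip(pu, qi)]
    return err

# Toy run with hypothetical data: repeated passes shrink the error.
P = [[0.1, 0.1], [0.1, 0.2]]
Q = [[0.1, 0.1], [0.2, 0.1]]
ratings = [(0, 0, 4.0), (0, 1, 2.0), (1, 0, 5.0)]
for _ in range(2000):
    for u, i, r in ratings:
        sgd_step(P, Q, u, i, r)
```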
Analysis and Realization of Micro-satellite Communication Based on UDP Protocol
Computer Science. 2013, 40 (1): 22-25. 
In recent years, scientists have paid more and more attention to satellite communication based on the TCP/IP protocol suite. Many problems arise in inter-satellite links when the TCP protocol is used at the transport layer, while the UDP protocol is simpler, more convenient and more efficient, and can meet the demands of satellite communication well. The processor of the on-board computer (OBC) is an MPC8260, which runs the μC/OS-II embedded real-time operating system. The LwIP stack is chosen as the communication protocol, the SLIP protocol as the link-layer protocol, and the nRF24E1 ZigBee chip as the wireless communication device. An inter-satellite link communication test platform was set up, realizing wireless communication between the OBC and a PC based on the UDP protocol. The results show that the UDP protocol performs well and is a suitable choice for inter-satellite links.
System Level Dynamic Temperature Estimation for SoC
Computer Science. 2013, 40 (1): 26-28. 
Increases in non-uniform power density and high switching frequencies present new challenges in predicting the transient temperature response to fast-changing power inputs in highly integrated circuits. This work presented a fast algorithm for predicting the temperature evolution of a System on Chip (SoC) or more general semiconductor devices. It utilizes an equivalent thermal RC network for model reduction, and adopts recursive infinite impulse response (IIR) digital filters for accelerated computation in the discrete time domain. The algorithm is validated by comparison with existing convolution integral methods, yielding excellent agreement with several orders of magnitude improvement in computational efficiency. Due to its simplicity of implementation, the algorithm is very suitable for run-time evaluation of temperature response in dynamic power management applications.
Energy Efficient MAC Layer Broadcast Protocol for Wireless Sensor Networks
Computer Science. 2013, 40 (1): 29-32. 
Sleeping is an efficient energy-saving approach in wireless sensor networks. Many state-of-the-art communication protocols, such as B-MAC, VPCC, X-MAC and A-MAC, have been designed with periodic sleeping schedules. In these protocols, a broadcast scheme either is not provided or works with large energy consumption and is seriously affected by the hidden node problem. An energy-efficient broadcasting scheme for asynchronously sleeping wireless sensor networks was proposed. In this approach, the broadcast data are retransmitted multiple times with random intervals, which guarantees that receivers can receive the broadcast data whenever they wake up. When hidden nodes cause broadcast packets to interfere with each other, the random intervals separate the packets sooner. We tested our approach with NS-2 simulation. The experimental results show that our broadcast scheme reduces energy consumption significantly and reduces the impact of interference.
Improvement of DV-Hop Localization Based on Artificial Bee Colony Algorithm
Computer Science. 2013, 40 (1): 33-36. 
To address the low localization accuracy of DV-Hop, a typical range-free localization algorithm for wireless sensor networks, the artificial bee colony algorithm, with its good robustness, high convergence speed and outstanding performance on global optimization problems, was applied to the design of the DV-Hop algorithm, and an improved algorithm named ABDV-Hop (Artificial Bee Colony DV-Hop) was proposed. Based on the original DV-Hop algorithm, the improved algorithm uses the inter-node distance information as well as the locations of beacon nodes. By establishing an optimization function, the locations of unknown nodes are estimated at the final stage of the algorithm. The results show that the proposed method significantly reduces positioning error compared with the original DV-Hop algorithm, without increasing the hardware overhead of sensor nodes.
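The final-stage optimization can be illustrated as follows. The fitness below is an assumed least-absolute-error form (mismatch between hop-estimated distances and geometric distances to beacons), and plain random search stands in for the bee-colony search; any global optimizer of the same fitness conveys the idea.

```python
import math
import random

# Hedged sketch of ABDV-Hop's final stage: estimate an unknown node's
# position by minimizing the mismatch between hop-based distance
# estimates d_j and geometric distances to the beacon nodes.
def fitness(pos, beacons, dists):
    return sum(abs(math.dist(pos, b) - d) for b, d in zip(beacons, dists))

def random_search(beacons, dists, iters=20000, box=100.0):
    # Stand-in for the artificial bee colony optimizer.
    candidates = ((random.uniform(0, box), random.uniform(0, box))
                  for _ in range(iters))
    return min(candidates, key=lambda p: fitness(p, beacons, dists))

# Hypothetical deployment: three beacons, ideal distance estimates.
beacons = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
true_node = (30.0, 40.0)
dists = [math.dist(true_node, b) for b in beacons]
random.seed(1)
estimate = random_search(beacons, dists)
```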
Novel Transport Control Protocol for Deep-space Inter-planet Networks
Computer Science. 2013, 40 (1): 37-40. 
Many international groups and researchers have studied transport protocols for GEO IP networks, and many solutions have been proposed. However, because channel conditions in inter-planet IP networks are worse than those of GEO IP networks, the performance of these solutions degrades significantly there. A new transport protocol, TP-Satellite+, was proposed for deep-space inter-planet IP networks. Simulation results show that TP-Satellite+ can resist channel errors, reduce the setup time and the bandwidth used on the reverse path, and enhance the performance of inter-planet networks.
Key Distribution Scheme in Designated Manually Deployed Wireless Sensor Networks Based on One-way Hash Chain
Computer Science. 2013, 40 (1): 41-44. 
Aiming at manually deployed wireless sensor networks (WSNs), we introduced a key distribution scheme based on one-way hash chains. By employing a mechanism of partial key-chain activation, our scheme can effectively weaken the threat of node capture and is resilient against node replication and node forgery. Besides these security properties, the scheme supports node redeployment and promises a good network coverage rate.
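A minimal sketch of the underlying one-way hash chain mechanism, assuming SHA-256 as the one-way function; the scheme's actual parameters and partial key-chain activation protocol are richer than this.

```python
import hashlib

# Hedged sketch: keys form a chain K_{i+1} = H(K_i) and are disclosed in
# reverse order, so revealing a key never exposes the still-inactive ones.
def hash_chain(seed: bytes, length: int):
    """Build the chain K_0 .. K_{n-1} from a node's secret seed."""
    chain = [seed]
    for _ in range(length - 1):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

def verify(active_key: bytes, newly_revealed: bytes) -> bool:
    # A neighbor holding the currently active key checks that the newly
    # revealed key hashes to it.
    return hashlib.sha256(newly_revealed).digest() == active_key

chain = hash_chain(b"node-secret-seed", 5)  # hypothetical seed
```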
Closest-neighbor Node Clustering Strategy in Heterogeneous Wireless Relay Network
Computer Science. 2013, 40 (1): 45-48. 
To improve access performance in a heterogeneous cellular domain, a new relay node clustering strategy was adopted in heterogeneous wireless networks. First, heterogeneous interference models for the relay network, which have great influence on wireless access performance, were derived. Secondly, the system outage probability under the heterogeneous relay network was given, from which the symbol error rate was obtained. On this basis, a closest-neighbor clustering algorithm was used to achieve good performance. As a comparison, a fixed node deployment strategy was adopted to study cooperative performance. In numerical simulations, the closest-neighbor solution shows an effective performance improvement in the heterogeneous cellular network.
Research on Application-driven Parallel Program Performance Tuning
Computer Science. 2013, 40 (1): 49-53. 
Multi-core processors provide the capability to execute multiple threads in parallel and give applications huge potential for performance improvement, but they make it enormously challenging to develop high-performance programs efficiently. Meanwhile, with the traditional process of performance optimization, scalability is difficult to guarantee. From an application point of view, attributing core calculations to patterns and motifs, and optimizing these motifs, can produce reusable libraries that support efficient development of new applications and also guarantee the scalability of application performance. A layered parallel computing model was used for guidance in this article. From the perspective of application-driven parallel program performance optimization, this article designed a new parallel multi-core computing model, which can be used on multi-core processor chip architectures. Based on this model, a g-scan algorithm with good extendibility and high performance was designed after analyzing and optimizing some fundamental parallel algorithms. Finally, the newly designed parallel algorithm was applied to OpenSeesSP, a finite element software package widely used in structural engineering.
External Interference Cancelation and Blind Multi-user Detection of DS-CDMA System Based on Non-gaussianity Measure
Computer Science. 2013, 40 (1): 54-58. 
In an asynchronous multi-user DS-CDMA system, there are inter-symbol interference (ISI), multiple access interference (MAI) and external interference (EI). According to function space theory, by decomposing the narrowband external interference on a function space with finite dimensions, regarding each basis of the function space as a vector of the mixing matrix, and taking the coefficient of the corresponding basis as one of the sources, we proposed a BSS-based method, requiring the received signal to be over-sampled, which realizes both blind equalization and blind multi-user detection when all three kinds of interference coexist. The simulation results indicate that this method performs better than RAKE with perfect equalization, especially at lower SIRs.
Novel Piecewise Logistic Chaotic Spread Spectrum Communication Algorithm
Computer Science. 2013, 40 (1): 59-62. 
Chaotic sequences used as spreading codes in spread spectrum systems have the advantages of a rich set of sequences and good confidentiality. The traditional Logistic chaotic sequence, as well as its modified versions, is not ideal in terms of ergodicity and randomness. To solve this problem, a new piecewise Logistic chaotic spread spectrum communication algorithm was proposed, based on an analysis of the randomness, correlation, initial-value sensitivity and Lyapunov exponents of the chaotic sequence concerned, and applied to spread spectrum communication. The simulation results show that, compared with the traditional Logistic chaotic sequence and its modified versions, the algorithm greatly improves the error rate performance and confidentiality of spread spectrum communication, which proves that the algorithm presented in this article is effective.
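For illustration, chaotic spreading-code generation with the classical Logistic map x → μx(1−x) is sketched below; the paper replaces this update with its piecewise variant to improve ergodicity and randomness, and the seed values here are hypothetical.

```python
# Hedged sketch: binarize a chaotic Logistic-map orbit into a {0,1}
# spreading code. The paper's piecewise map would replace the update line.
def spreading_code(x0: float, n: int, mu: float = 3.99):
    bits, x = [], x0
    for _ in range(n):
        x = mu * x * (1 - x)               # classical Logistic update
        bits.append(1 if x >= 0.5 else 0)  # threshold to a chip value
    return bits

# Initial-value sensitivity: nearby seeds yield quickly diverging codes,
# which is what gives chaotic spreading codes their rich sequence set.
a = spreading_code(0.3141, 64)
b = spreading_code(0.3141 + 1e-6, 64)
```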
Improved DV-Hop Positioning Algorithm Based on Modifying Hop Counts
Computer Science. 2013, 40 (1): 63-67. 
The DV-Hop algorithm is one of the classic range-free localization algorithms in wireless sensor networks. In order to enhance its localization accuracy, this paper introduced the two notions of previous node (PN) and total hop size, and designed an improved algorithm, MHDV-Hop. In the improved algorithm, a correction factor for the hop count from an unknown node to a beacon node is calculated based on the number of previous nodes. The algorithm does not change the positioning process of DV-Hop at low beacon-node ratios and requires no additional hardware support, yet it greatly improves localization accuracy.
Planning Paths Algorithm Based on Performance Evaluation of Industrial Wireless Networks
Computer Science. 2013, 40 (1): 68-72. 
To address the limitations of routing nodes in looking up message routes, this paper put forward an algorithm by which the gateway device plans message paths. The algorithm exploits the regularity of industrial wireless network communication data and finds a path combination after comprehensively evaluating performance in terms of delay, power consumption, reliable transmission and load balance, in order to avoid message congestion and interference and to improve communication performance. A simulation was then designed to compare paths found with AODV against the globally planned paths. The simulation results show that globally planning message paths yields better communication.
Pattern Matching Method of Complex Event for RFID Data Processing
Computer Science. 2013, 40 (1): 73-76. 
RFID data are generally uncertain. Complex event processing (CEP) treats the data as different types of events and queries the event stream for sequences of events that match specific patterns defined by high-level applications. Event streams are divided into multiple-alternative event streams and single-alternative event streams. The NFA-MMG pattern matching method was proposed for multiple-alternative event streams; it combines directed acyclic graphs and automata to achieve complex event pattern matching on uncertain data. The NFA-Tree pattern matching method, which uses a matching tree and automata on uncertain data, was proposed for single-alternative event streams. The NFA-Tree algorithm was further improved by pruning the matching tree to improve query efficiency, filtering matched results based on a probability threshold. A prototype complex event processing system for uncertain data was developed to realize the above algorithms, and experiments examine the validity and performance of the algorithms.
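A much-simplified sketch of probabilistic sequence matching (not NFA-MMG itself): partial matches advance over the event stream and are pruned when their joint probability drops below a threshold, which is the role the probability-threshold filter plays. Event types and probabilities here are hypothetical.

```python
# Hedged sketch: match a sequence pattern over an uncertain event stream,
# pruning partial matches whose joint probability is below the threshold.
def match_sequence(pattern, stream, threshold=0.5):
    """pattern: list of event types; stream: (type, probability) pairs."""
    partial, results = [], []  # partial entries: (next_index, prob, events)
    for etype, prob in stream:
        advanced = []
        for idx, p, ev in partial + [(0, 1.0, [])]:  # also try starting fresh
            if pattern[idx] == etype:
                cand = (idx + 1, p * prob, ev + [(etype, prob)])
                if cand[1] < threshold:
                    continue                 # prune: probability too low
                (results if cand[0] == len(pattern) else advanced).append(cand)
        partial += advanced                  # old partials may skip this event
    return results

matches = match_sequence(["A", "B"],
                         [("A", 0.9), ("C", 1.0), ("B", 0.8), ("A", 0.4)])
# One match: A (0.9) then B (0.8), joint probability 0.72; the trailing
# low-probability A (0.4) is pruned at the threshold.
```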
Study on Embedding Problems of Exchanged Hypercube Networks
Computer Science. 2013, 40 (1): 77-80. 
As a new variant of hypercube, the exchanged hypercube has nice recursiveness and preferable network parameters. Based on the relevant properties of exchanged hypercube, this paper studies the problems of embedding E-2DMesh networks and hypercube networks into exchanged hypercube. The following conclusions are obtained: (1) for max(s,t)<7, there is no mapping embedding for EM(2n,2m) into EH(s,t) (m+n<=s+t+1) with dilation=1. (2) EM(2s,2t) can be embedded into EH(s,t) with expansion=2, dilation=4, load=1. (3) for min(s,t)>1, there is no mapping embedding for Qn into EH(s,t) (n=s+t) with dilation=1. (4) Qn can be embedded into EH(s,t) with expansion=2, dilation=4, load=1(n=s+t). The results show that exchanged hypercube has nice versatility.
Research on Data Storage for Smart Grid Condition Monitoring Using Hadoop
Computer Science. 2013, 40 (1): 81-84. 
With the development of the smart grid, the mass data collected for equipment condition monitoring requires higher performance of data storage and query. A storage system based on Hadoop for smart grid condition monitoring was designed and implemented. The system consists of three parts: a Hadoop cluster, a storage client and a query client. A number of experiments were carried out, including benchmarks on the Hadoop cluster, verification of storage effectiveness and analysis of query performance, which validate that the system has the advantages of distribution, mass storage and efficient query, and is suitable for dealing with the mass data of power equipment condition monitoring in the smart grid.
Optimal Anycast Routing Algorithm for Delay-sensitive Wireless Sensor Networks
Computer Science. 2013, 40 (1): 85-87. 
In sleep-wakeup scheduled wireless sensor networks (WSNs), anycast techniques can significantly reduce time delay. However, previous research work only attends to optimizing the per-hop delay; this scheme is not optimal, and sometimes even bad, for end-to-end delay. An optimal anycast routing algorithm for delay-sensitive WSNs was proposed to solve the end-to-end delay problem. In the algorithm, base stations apply an AODV-based multipath routing protocol to acquire anycast routing information; then a genetic algorithm is applied to search for the optimal anycast paths between each node and the base stations; after that, base stations report the information to each node. The algorithm features both global optimization and self-adaptive switching of anycast paths. Experimental results show that our algorithm reduces end-to-end delay more efficiently than previous algorithms.
Research of MAC Protocol for Wireless Medical Body Area Network Based on Fuzzy Logical Algorithm
Computer Science. 2013, 40 (1): 88-90. 
To solve the energy-constraint problem in wireless medical body area networks (WMBANs), a novel MAC protocol, Fuzzy Logical MAC (FL-MAC), which uses fuzzy logical control theory to reduce the frequency of the node's radio activity, was proposed. A fuzzy logical algorithm is used to filter the normal messages that need to be sent and reduce the data traffic in the network. Meanwhile, the addition of a guard time slot (GTS) ensures that unexpected events can be handled in time. Simulation results show that this protocol provides better performance, in both node lifetime and whole-network latency, than other fuzzy logical protocols such as Asynchronous Energy-efficient MAC (ASCEMAC) and Distributed Queuing Body Area Network MAC (DQBAN-MAC).
Establishment of Tunnel MIMO Channel Geometry Model Simulation
Computer Science. 2013, 40 (1): 91-93. 
This paper studied MIMO channel geometry modeling for mine roadways. Based on the propagation characteristics of electromagnetic waves in the mine roadway environment, a GBDB scattering model and a direct channel model of the MIMO channel were established, the correlation functions of the models were derived, and a Laplace distribution function was selected as the horizontal angular power spectrum at the transmitter and receiver. The specific effects of antenna placement angle, antenna spacing, tunnel length, Rice factor, angle spread and mean angle on the spatial correlation function in the tunnel environment were simulated. The simulation results show that, in the tunnel environment, the antenna placement angle has a large effect on the correlation coefficient, and a larger antenna spacing yields lower correlation; the correlation coefficient is also affected by antenna spacing, tunnel length and angle spread.
Hierarchical Safeguard Scheduling Algorithm for Safety Critical Real-time Application
Computer Science. 2013, 40 (1): 94-97. 
The paper addressed the problem that current popular safeguard scheduling algorithms cannot achieve the safeguard function in the presence of software and hardware failures. It built a new hierarchical real-time scheduling model that describes the safety requirements of safety-critical real-time applications from two aspects, function components and safe partitions, and designed a three-level safeguard scheduler framework. Based on the model and framework, the paper proposed a new hierarchical safeguard scheduling algorithm (HSS), which achieves spatial separation by distributing function components of different criticality to different physical processor clusters, and attains temporal separation by activating the various partitions running on the same processor in a fixed cycle. Empirical investigations show that choosing HSS over other similar algorithms improves safeguard performance and endurance under different application loads.
Research and Design of Multimedia Digital Products Copyright Protection Model
Computer Science. 2013, 40 (1): 98-102. 
Illegal acts of piracy violate the legitimate interests of creators, making copyright protection of multimedia information very important. Based on an analysis of general copyright protection management systems, we proposed an efficient model for copyright protection of multimedia information. In the model, watermarks are embedded four times, with comprehensive utilization of robust and fragile watermarking for multimedia; multimedia information is encrypted by full-file encryption; and special client software, which adopts a double-buffering mechanism and a secure storage mechanism, is used for decryption. Finally, performance analysis shows that the model improves the security of the DRM system without reducing its authority, fairness and practicality.
False Distance-based Location Security Algorithm for VANETs
Computer Science. 2013, 40 (1): 103-106. 
Geographic location-based routing protocols are widely used in VANETs since vehicles move fast. Vehicles periodically broadcast their current location information, so location information is vulnerable to leakage. Therefore, the FLPP (false distance-based location privacy protection) routing protocol was put forward in this paper. FLPP builds routes using the false distance between forwarding vehicles and destination vehicles rather than the real distance. To protect vehicle location information, the vehicle timer is carefully set and, at the same time, a pseudonym strategy is used to hide identity information. The simulation results show that the proposed FLPP achieves a high data delivery ratio while protecting location privacy.
Variable Intervals Analysis of Firmware Code Based on Binary-bit Operation
Computer Science. 2013, 40 (1): 107-111. 
Variable interval analysis plays an important role in program data-flow analysis. There are two different kinds of operations, word-level and bit-level. Because the traditional iterative algorithm is inefficient for computing the result intervals of bitwise operations when a variable has a large range, a quick bitwise operation method was proposed, which turns variables into an uncertainty-bit form and then performs the bitwise operations. When the uncertainty-bit form of a variable needs a word-level operation, the proposed interval generation algorithm converts the form back to intervals. The experimental results show that the proposed method is stable in time consumption and more efficient than the iterative algorithm when the variable range is large, and its cost even trends downward as the variable range expands.
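The uncertainty-bit idea can be sketched as a known-bits representation. This is an assumed encoding, not the paper's exact one: a value is a pair (val, mask), where mask bits set to 1 mark bit positions whose value is known, and unknown bit positions of val are stored as 0.

```python
# Hedged sketch of bitwise operation on the uncertainty-bit form.
def and_bits(a, b):
    """Bitwise AND of two (value, known-mask) pairs."""
    (av, am), (bv, bm) = a, b
    # A result bit is known when both operand bits are known, or when
    # either operand has a known 0 (which forces the result bit to 0).
    known = (am & bm) | (am & ~av) | (bm & ~bv)
    return (av & bv) & known, known

def to_interval(val, mask, width=8):
    """Convert the uncertainty-bit form back to [lo, hi] for word-level ops."""
    unknown = ~mask & ((1 << width) - 1)
    return val, val | unknown   # unknown bits: all 0 for lo, all 1 for hi

x = (0b1010, 0b1111)  # fully known value 10
y = (0b1100, 0b1110)  # bit 0 unknown
v, m = and_bits(x, y)  # bit 0 of x is a known 0, so the result is fully known
```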
New Mechanism of Monitoring on Hadoop Cloud Platform
Computer Science. 2013, 40 (1): 112-117. 
Job monitoring and resource management are core functions of a cloud computing platform. In Hadoop, the tasks of job monitoring and resource management are assigned to the JobTracker, which operates on heartbeat messages sent by slave nodes, so the JobTracker is the bottleneck of Hadoop. A new job monitoring and resource management scheme was presented. In this scheme, job monitoring and resource management are separated from the original JobTracker: the job monitoring function remains on the JobTracker node, while the resource management function is accomplished by a newly added resource management node. The JobTracker sends the necessary object information to the resource management node using a new delta-update algorithm. The resource management node schedules tasks based on heartbeat messages and returns the results to the JobTracker node. Experimental results show that the scheme lets the monitoring node monitor jobs completely, makes the JobTracker node more flexible and robust, reduces the load on the JobTracker node, and improves the efficiency of the Hadoop platform.
Improved Remote Attestation Mechanism of Platform Configuration Based on Chameleon Hashes
Computer Science. 2013, 40 (1): 118-121. 
In order to obtain a practical remote attestation mechanism for platform configurations, an improved RAMT (remote attestation based on Merkle hash tree) method using chameleon hashes and software groups was proposed, and the relevant proof was given. The problems of existing methods were analyzed, and the architecture of the improved scheme and its processes of integrity measurement and attestation were discussed in detail, along with the advantages of the new scheme. Compared with RAMT, scalability and the ability to protect privacy are enhanced, and the efficiency of remote attestation is greatly improved.
Malware Detection Approach Based on Structural Feature of PE File
Computer Science. 2013, 40 (1): 122-126. 
In order to solve the problems existing in malware detection, we proposed a novel malware detection approach that mines structural features of PE (Portable Executable) files, and conducted experiments against recent Win32 malware. Experimental results indicate that the accuracy of our method is 99.1% and its AUC is 0.998, which is close to 1 (the AUC of the best possible classifier) and better than that of other static approaches. Compared with other static approaches, our method achieves higher detection accuracy with less detection time, is hard to evade for malware that applies obfuscation and packing techniques, and is deployable in real time. Most malware detection approaches using data mining may overfit experimental data during feature selection, but our experiments show that our method overcomes this problem.
Trust Measuring Model Based on Social Factors of Users and their Behavior
Computer Science. 2013, 40 (1): 127-131. 
Trust measurement is the basis of trust mechanisms, and trust mechanisms currently face the threat of malicious users manipulating reputation. The trust measurement model based on social factors of users and their behavior expands the traditional trust mechanism: it describes and analyzes the characteristics of malicious users and their behavior through social factors, which reflect the essence of users and behavior. The model also adds an audit process to correct reputation under attack, so it can guarantee the credibility of trust measurement in a distributed environment. Simulation experiments show that this model can effectively counter reputation manipulation attacks by malicious users.
Trust Evaluation and Control Analysis of FANP-based User Behavior in Cloud Computing Environment
Computer Science. 2013, 40 (1): 132-135. 
The open environment of cloud computing is much more complex and unpredictable, and traditional security measures cannot fully adapt to new computer network applications such as cloud computing. Combining security measures based on dynamic user behavior trust, effectively identifying untrusted cloud terminal users and correctly analyzing their abnormal behavior are the basis for ensuring cloud security in a complex and dynamic environment. This paper adopted the fuzzy analytic network process (FANP) method based on triangular fuzzy numbers, which reflects the fuzziness of expert evaluation through fuzzy numbers and weakens the subjectivity of using ANP alone. The paper also gave a quantitative calculation of the weight of each attribute in order to make the evaluation results more objective. The evaluation results provide a quantitative analysis foundation for security control based on dynamic trust, and provide a quantitative basis for service providers to adopt safer strategies in response to user requests.
Research on Improved ECC Algorithm in Network and Information Security
Computer Science. 2013, 40 (1): 136-138. 
Network information security suffers many threats, and existing encryption algorithms cannot meet the needs of network and information security. This paper proposed an improved ECC algorithm for network information security. Building on the original ECC algorithm, it optimizes the point multiplication operation and quadratic residue determination, and optimizes and transforms the private key update, so as to improve the operational efficiency and security of the original ECC algorithm. Experiments show that the improved ECC algorithm achieves significant improvements in security over the RSA algorithm as well as the original ECC algorithm, and the scheme is effective.
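For context, the point multiplication being optimized is, in its textbook baseline form, double-and-add over a short Weierstrass curve y² = x³ + ax + b (mod p); this sketch is that baseline, not the paper's optimized variant. The toy curve (a = b = 2, p = 17, generator (5,1), group order 19) is a standard teaching example.

```python
# Hedged sketch: textbook elliptic-curve point arithmetic, the baseline
# that point-multiplication optimizations improve upon.
def ec_add(P, Q, a, p):
    """Add points on y^2 = x^3 + ax + b (mod p); None is the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                    # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p)   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p)          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P, a, p):
    """Double-and-add scalar multiplication k*P."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        k >>= 1
    return R

G = (5, 1)  # generator of the order-19 group on y^2 = x^3 + 2x + 2 (mod 17)
```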
Using Points-to Combinations to Optimize Dependence Graph Construction
Computer Science. 2013, 40 (1): 139-143. 
Abstract PDF(399KB) ( 316 )   
RelatedCitation | Metrics
In program analysis, the dynamic nature of pointers means that a pointer may point to many different locations during an execution. Existing dependence graph construction algorithms already take these multiple points-to relations into consideration. However, they do not consider combinations of points-to relations, many of which are not combinable. Without excluding these invalid combinations, precision is lost in dependence graph construction. To address this problem, this paper proposed an approach that uses the invalid combinations of points-to relations to optimize dependence graph construction. The approach can discard many false dependences which existing approaches cannot identify, and thereby improves the precision of dependence graph construction.
Node Failure Recovery Algorithm for Distributed File System Based on Measurement of Data Availability
Computer Science. 2013, 40 (1): 144-149. 
Abstract PDF(1041KB) ( 325 )   
RelatedCitation | Metrics
The strategy with which a distributed file system deals with node failure consumes much bandwidth and disk space and affects the stability of the system. By studying HDFS's cluster structure, its data block storage mechanism, and the state relationships between nodes and blocks, we defined the cluster node matrix, node status matrix, file block partition matrix, block storage matrix and block state matrix. These definitions allow the availability of a data block to be modeled easily. Based on this measurement of data block availability, we proposed a new node failure recovery algorithm and analyzed its performance. The experimental results show that, compared with the original strategy, the new algorithm ensures the availability of all blocks in the system, reduces the bandwidth and disk space needed for recovery, shortens the recovery time, and improves the stability of the system.
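The matrix-based measurement can be sketched as follows: given a block-storage matrix S (S[i][j] = 1 if node i holds a replica of block j) and a node-state vector (1 = alive, 0 = failed), a block remains available while at least one live replica exists, and under-replicated blocks are queued for recovery, most urgent first. The matrix names follow the abstract only loosely; the target replica count of 3 is HDFS's default, not taken from the paper.

```python
# Hypothetical sketch of availability measurement from a block-storage
# matrix and node states (not the paper's exact formulation).

def block_replicas(storage, node_state):
    """Live replica count for every block."""
    n_blocks = len(storage[0])
    return [sum(storage[i][j] * node_state[i] for i in range(len(storage)))
            for j in range(n_blocks)]

def recovery_queue(storage, node_state, target=3):
    """Blocks needing recovery, ordered fewest live replicas first."""
    counts = block_replicas(storage, node_state)
    need = [(c, j) for j, c in enumerate(counts) if 0 < c < target]
    return [j for c, j in sorted(need)]

# 3 nodes, 4 blocks; node 2 has failed.
S = [[1, 1, 0, 1],
     [0, 1, 1, 1],
     [1, 0, 1, 1]]
alive = [1, 1, 0]
```

Ordering recovery by remaining replica count is what lets the system repair the most endangered blocks before spending bandwidth on the rest.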
Development Approach of Exception Handling Logic for BPEL Process Based on Coloured Petri Net
Computer Science. 2013, 40 (1): 150-156. 
Abstract PDF(580KB) ( 347 )   
RelatedCitation | Metrics
To address the problem of exception handling for WS-BPEL in service-oriented software, this paper presented an approach to BPEL exception handling based on coloured Petri nets (CPN). By formalizing the description of the BPEL exception handling mechanism, the approach builds an exception handling CPN model and guides the development of BPEL exception handling logic. Based on this modeling idea, it provides a converting tool that translates the exception handling CPN model into BPEL code with exception handling functionality; by dynamically adding BPEL exception handling elements, the tool can produce a new BPEL process with exception handling. Finally we presented a case study to illustrate the use of the approach and demonstrate its effectiveness.
Source to Source Translation of Fortran90 Based on Open64
Computer Science. 2013, 40 (1): 157-160. 
Abstract PDF(320KB) ( 414 )   
RelatedCitation | Metrics
Source-to-source translation is a very useful part of a modern advanced compiler. It translates one programming language into another that is semantically equivalent and can be compiled again. Currently, the source-to-source translation model of the latest Open64 version 5.0 is incomplete; it suffers from two problems. One is that it cannot support translation of dynamic arrays in Fortran90; the other is that the intermediate representation contains pseudo-registers after aggressive optimization. After studying the translation process and the intermediate representation, an information preservation mechanism was introduced to solve the translation problems of dynamic arrays and pseudo-registers. Test results show that the method greatly improves the robustness of source-to-source translation in Open64.
Software Aging Detection Based on Nonlinear Multiparameter Model
Computer Science. 2013, 40 (1): 161-165. 
Abstract PDF(503KB) ( 359 )   
RelatedCitation | Metrics
This paper presented a method based on nonlinear autoregressive models with exogenous inputs (NARX) to detect aging in software systems. It addresses a problem of current software aging detection methods: they consider neither the correlation among multiple variables nor the delayed impact of historical data. We first collected performance data of the HelixServer-VOD server, performed principal component analysis on the data to determine the input dimension, determined the best model order according to the AIC criterion, and eventually selected a reasonable network model structure. We used known non-aging state samples to train the NARX network to establish an identification model of the system, then applied a sequential probability ratio test to the residuals of the NARX identification model, and finally judged the aging condition of the system. The experimental results show that the NARX model-based fault detection method can be effectively applied to detecting software aging.
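The final decision step, the sequential probability ratio test (SPRT) on the model residuals, can be sketched independently of the NARX model: residuals are tested against H0 (healthy, mean mu0) versus H1 (aging, mean mu1), assumed Gaussian with known sigma. The means, sigma, and error rates below are illustrative assumptions, not the paper's parameters.

```python
import math

# Sketch of SPRT on identification-model residuals: accumulate the
# log-likelihood ratio until it crosses a decision threshold.

def sprt(residuals, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    upper = math.log((1 - beta) / alpha)   # accept H1 (aging) above this
    lower = math.log(beta / (1 - alpha))   # accept H0 (healthy) below this
    llr = 0.0
    for r in residuals:
        # Log-likelihood ratio increment for one Gaussian observation.
        llr += ((r - mu0) ** 2 - (r - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "aging"
        if llr <= lower:
            return "healthy"
    return "undecided"
```

Because the statistic is sequential, a decision is usually reached after far fewer samples than a fixed-size hypothesis test would need.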
New Method of Software Maintainability Requirement Analysis Based on Maintenance Scenario Analysis
Computer Science. 2013, 40 (1): 166-170. 
Abstract PDF(378KB) ( 330 )   
RelatedCitation | Metrics
Software maintainability is one of the important software quality attributes, and poor maintainability may lead to great maintenance cost. Nevertheless, maintainability requirement analysis and design are often ignored during the software development process. A scenario-based software maintainability requirement analysis method was proposed, in which maintainability requirements are acquired through the analysis of software maintenance scenarios.
Method of Regression Test Set Generation Based on Output Lines of Code
Computer Science. 2013, 40 (1): 171-174. 
Abstract PDF(294KB) ( 330 )   
RelatedCitation | Metrics
Because of the constant turnover of software versions and the ceaseless change of source code, testers need regression tests with a specific target, but current techniques have difficulty meeting this requirement. A method for generating regression test sets based on selected output lines of code was proposed. The method classifies the regression test sets according to the execution path of each test case: a test case is selected into the test subset only if it passes through the selected output lines of code. The method fully takes the output-related functional properties of code lines into consideration, which makes it applicable for obtaining targeted test sets for small blocks of code. Furthermore, this paper applied the method to the generation of regression test sets based on user requirements. Examples show that selecting regression test sets for modified code through this method can be both efficient and comprehensive, which allows it to satisfy demanding regression tests and provides extensibility.
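The selection rule reduces to a set intersection: a test enters the regression subset only if its execution path covers at least one of the chosen output lines. The sketch below assumes per-test line coverage is already available (the interface and test names are hypothetical, not the paper's tooling).

```python
# Minimal sketch of output-line-based regression test selection:
# keep a test iff its covered lines intersect the target output lines.

def select_tests(coverage, target_lines):
    """coverage: {test_name: set of executed line numbers}."""
    targets = set(target_lines)
    return sorted(t for t, lines in coverage.items() if lines & targets)

coverage = {
    "t1": {10, 11, 12},
    "t2": {10, 20, 21},
    "t3": {30, 31},
}
subset = select_tests(coverage, [20, 31])   # lines changed in the new version
```

Only t2 and t3 touch the selected output lines, so t1 is safely dropped from the regression run.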
Approach to Transforming MARTE Sequence Diagrams to TTS4SD Models
Computer Science. 2013, 40 (1): 175-178. 
Abstract PDF(307KB) ( 371 )   
RelatedCitation | Metrics
The sequence diagram was extended in the MARTE specification for modeling purposes, but it cannot be used in the correctness verification stage. The OMG proposes to solve this problem with model transformation techniques: a model A is transformed into a formal model B equipped with efficient analysis or verification tools, and describing the semantics of A by model B can guarantee a bi-simulation relation between them. A model named timed transition system for sequence diagrams (TTS4SD) was proposed. First, we gave the formal syntax of the sequence diagram and of TTS4SD, then described the semantics of the sequence diagram by TTS4SD. On this semantic basis, checking was carried out on the TTS4SD. An example was given to illustrate the above process.
Research of Software Reliability Prediction Model Based on AGA-LVQ
Computer Science. 2013, 40 (1): 179-182. 
Abstract PDF(309KB) ( 343 )   
RelatedCitation | Metrics
The prediction accuracy of most current software reliability prediction models is not high. This paper put forward a software reliability prediction model based on AGA-LVQ, which exploits the non-linear computing power of the learning vector quantization (LVQ) neural network and the parameter optimization capability of the adaptive genetic algorithm (AGA). First, principal component analysis (PCA) preprocessing was used to reduce the dimension of the metrics and remove redundant and erroneous data. Second, AGA was used to compute optimal initial weight vectors for the LVQ neural network. Last, the LVQ neural network was used to carry out the software reliability prediction experiments. The results indicate that the method has higher prediction precision than traditional software reliability prediction models.
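The LVQ stage at the core of the model can be sketched with the basic LVQ1 update: the prototype nearest to a sample moves toward it if their labels agree and away otherwise. The paper initializes the prototype weights with AGA; here they are simply hand-picked, and the example is one-dimensional for brevity.

```python
# Sketch of the LVQ1 update rule (AGA initialization replaced by
# hand-picked starting prototypes; data and values are illustrative).

def lvq1_train(samples, prototypes, labels, lr=0.2, epochs=20):
    """samples: list of (x, y); prototypes/labels: parallel lists."""
    protos = list(prototypes)
    for _ in range(epochs):
        for x, y in samples:
            i = min(range(len(protos)), key=lambda k: abs(x - protos[k]))
            step = lr * (x - protos[i])
            protos[i] += step if labels[i] == y else -step   # attract / repel
    return protos

def lvq1_predict(x, protos, labels):
    return labels[min(range(len(protos)), key=lambda k: abs(x - protos[k]))]

data = [(0.0, 0), (0.1, 0), (-0.1, 0), (1.0, 1), (0.9, 1), (1.1, 1)]
protos = lvq1_train(data, [0.4, 0.6], [0, 1])
```

After training, each prototype has migrated into its own cluster, so nearest-prototype classification separates the two classes.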
Minimal Association Rules Mining Based on Itemset Dependency
Computer Science. 2013, 40 (1): 183-186. 
Abstract PDF(418KB) ( 701 )   
RelatedCitation | Metrics
Traditional association rule mining produces excessive and messy rules, many of which are not relevant to users' interests. A minimal association rules mining algorithm was presented based on the concept of a minimal association rules set and strong dependency between items. It not only avoids checking whether every non-empty subset of a frequent itemset can form an association rule, but also simplifies the traditional rules set by deleting excessively complex and duplicated rules. The support and confidence of most redundant rules can be derived from the minimal association rules set, which achieves a nearly lossless representation of the traditional rules set. Results on four benchmark data sets from the UCI repository show that the number of rules generated by the proposed method is greatly reduced and the rules in the set are more concise, without duplicated information. This provides a better way to find minimal association rules.
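The two measures everything above rests on, support and confidence of a rule A → B over a transaction list, can be sketched directly (the paper's pruning of derivable rules is omitted; the transactions are made up):

```python
# Support and confidence of association rules over a transaction list.

def support(itemset, transactions):
    """Fraction of transactions containing every item of itemset."""
    s = set(itemset)
    return sum(1 for t in transactions if s <= set(t)) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """P(consequent | antecedent) estimated from the transactions."""
    return (support(set(antecedent) | set(consequent), transactions)
            / support(antecedent, transactions))

T = [("a", "b", "c"), ("a", "b"), ("a", "c"), ("b", "c"), ("a", "b", "c")]
```

Redundancy elimination exploits the monotonicity of these measures: for instance, support({a} → {b}) is never below support({a} → {b, c}), so the more specific rule can bound the simpler one.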
Temporal Data Index Based on Linear Order Partition
Computer Science. 2013, 40 (1): 187-190. 
Abstract PDF(633KB) ( 353 )   
RelatedCitation | Metrics
This paper presented a temporal data index technology based on linear order partition. First, the paper discussed the concept of a linear order partition and its construction algorithm on a given period set. Second, it put forward the TQOindex, which is capable of indexing temporal databases in external storage. Simulation results show the practicability and effectiveness of the index. The basic property of the TQOindex is that it is built on the mathematical framework of order relations, and its data operations in a temporal query can be completed on a single set.
Shadowed Sets-based Rough Fuzzy Possibilistic C-means Clustering
Computer Science. 2013, 40 (1): 191-194. 
Abstract PDF(322KB) ( 598 )   
RelatedCitation | Metrics
It has been shown that soft clustering is advantageous over hard clustering in describing clusters without crisp boundaries. Both rough sets and fuzzy sets are effective mathematical tools for handling uncertainty and, as claimed in many studies, they are complementary. The theories of rough sets and fuzzy sets have been integrated into clustering algorithms such as rough fuzzy possibilistic C-means clustering (RFPCM). In this study, we introduced shadowed-set optimization theory and proposed an objective method to select the threshold in RFPCM.
Research of [0,∞)-Valued Flexible Logic Average Operation Model
Computer Science. 2013, 40 (1): 195-199. 
Abstract PDF(396KB) ( 368 )   
RelatedCitation | Metrics
Not only the continuous variability of propositional truth values but also the continuous variability of relations among propositions influences the operation models of propositional connectives in flexible logic. Flexible logic operators form a continuously variable operator cluster governed by both the generalized self-correlation coefficient k and the generalized correlation coefficient h. This paper studied flexible average operators and presented the definitions of the 0-level and 1-level [0,∞)-valued logic average operation models. To ensure the 0-level integrity of the model, the average operator cluster in its domain of existence transforms continuously and monotonically from the maximal average operator, through the probability average operator and the central average operator, to the minimal average operator; the four special operators on [0,∞) are proved.
Application of Random Forest Algorithm in Important Feature Selection from EMG Signal
Computer Science. 2013, 40 (1): 200-202. 
Abstract PDF(229KB) ( 526 )   
RelatedCitation | Metrics
How to find effective features among high-dimensional features is a hard problem in EMG signal emotion recognition. This paper used the random forest algorithm to compute the contribution of 126 EMG signal features to the recognition of different emotions, relying on the feature evaluation criterion of the random forest algorithm; the features contributing most to emotion recognition were then preferentially combined and used for recognition. Experiments show the approach is reasonable.
Functional Network Learning Algorithm with Recursively Base Functions
Computer Science. 2013, 40 (1): 203-207. 
Abstract PDF(344KB) ( 315 )   
RelatedCitation | Metrics
By transforming the functional neuron, we proposed a functional network learning algorithm with recursive basis functions. The algorithm uses a recursive method for solving the matrix pseudo-inverse to achieve adaptive adjustment of the basis functions in the functional network, finally obtaining the optimal functional network structure and parameters together. The experimental results show that the learning algorithm is adaptive, robust and of high convergence accuracy, and promises broad application in real-time online identification.
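The pseudo-inverse step a functional network relies on can be sketched in batch form: the network output is a linear combination of basis functions, so the coefficients solve a least-squares problem via the Moore-Penrose pseudo-inverse. (The paper updates this solution recursively as bases are adjusted; only the one-shot version is shown, with made-up data.)

```python
import numpy as np

# Sketch: fit functional-network coefficients by pseudo-inverse.

def fit_functional(x, y, bases):
    """bases: list of callables; returns the coefficient vector."""
    Phi = np.column_stack([f(x) for f in bases])   # design matrix
    return np.linalg.pinv(Phi) @ y                 # least-squares solution

x = np.linspace(0, 2 * np.pi, 50)
y = 2.0 * np.sin(x) + 3.0 * np.cos(x)              # synthetic target
coef = fit_functional(x, y, [np.sin, np.cos])
```

On this noiseless target the recovered coefficients match the generating ones, which is exactly the property a recursive variant must preserve as bases are added.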
Multistage Fuzzy Comprehensive Evaluation of Computer Training Projects
Computer Science. 2013, 40 (1): 208-210. 
Abstract PDF(226KB) ( 503 )   
RelatedCitation | Metrics
Fuzzy comprehensive evaluation is universally applied to questions whose connotation is clear but whose denotation is not. In order to improve the credibility of evaluation and reduce the subjectivity present in fuzzy comprehensive evaluation, this paper constructed index weights for each layer by calculating the information entropy of the indices, and combined them with subjective index weights to reduce the subjective influence on the fuzzy evaluation model. Based on the revised fuzzy algorithm, this paper developed an application system and used it as an example to verify the feasibility of the algorithm.
Bucket-Tree Based Algorithm for Automated Reasoning
Computer Science. 2013, 40 (1): 211-217. 
Abstract PDF(589KB) ( 429 )   
RelatedCitation | Metrics
The bucket elimination algorithm and the join-tree reasoning algorithm are popularly used for automated reasoning. To improve the efficiency of message propagation in the join-tree reasoning algorithm, a new join-tree reasoning algorithm (called JTR) was proposed. Meanwhile, to handle the inefficiency of the bucket elimination algorithm BE in multi-task automated reasoning, a bucket-tree reasoning algorithm (named BJTR), based on the join-tree structure and the message propagation mode of JTR, was further developed from BE. Our study shows that in comparison with the BTE algorithm, the proposed JTR improves time performance while space performance decreases a little. Furthermore, compared with BE, BJTR effectively reduces the time overhead of multi-task automated reasoning while maintaining a slightly lower space performance. Both examples and experiments demonstrate that the BJTR algorithm has an obvious advantage in time performance for multi-task reasoning.
Graph Regularized Non-negative Matrix Factorization with Sparseness Constraints
Computer Science. 2013, 40 (1): 218-220. 
Abstract PDF(315KB) ( 573 )   
RelatedCitation | Metrics
Non-negative matrix factorization (NMF) is a parts-based feature extraction algorithm that adds a non-negativity constraint to matrix factorization. A method called graph regularized non-negative matrix factorization with sparseness constraints (GNMFSC) was proposed to enhance classification accuracy. It not only considers the geometric structure of the data representation, but also introduces a sparseness constraint on the coefficient matrix, integrating both into a single objective function. An efficient multiplicative updating procedure was produced along with a theoretical justification of its convergence. Experiments on the ORL and MIT-CBCL face recognition databases demonstrate the effectiveness of the proposed method.
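For orientation, the baseline multiplicative updates that GNMFSC extends can be sketched as plain NMF, V ≈ W·H with W, H ≥ 0 (the paper's graph Laplacian and sparseness terms add extra factors to these rules and are not shown; the matrix here is random toy data):

```python
import numpy as np

# Sketch of standard NMF multiplicative updates (Lee-Seung style),
# the starting point that graph/sparseness regularization modifies.

def nmf(V, rank, iters=200, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update coefficient matrix
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis matrix
    return W, H

V = np.random.default_rng(1).random((8, 6))    # non-negative toy data
W, H = nmf(V, rank=4)
err = np.linalg.norm(V - W @ H)
```

Because the updates are multiplicative, non-negativity of W and H is preserved automatically at every step, which is why the regularized variants keep the same update shape.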
Neighborhood System Based Rough Set and Covering Based Rough Set
Computer Science. 2013, 40 (1): 221-224. 
Abstract PDF(290KB) ( 378 )   
RelatedCitation | Metrics
Neighborhood system based rough sets and covering based rough sets are two important expansions of the classical rough set. By comparing the lower approximation sets, the upper approximation sets and the accuracy measures, the relationships between the neighborhood system based rough set and six covering based rough set models were systematically studied. The conclusion is that the relationships between the lower or upper approximations of the neighborhood system based rough set and the six covering based rough set models are clear: they are either comparable or not. In the comparable cases, inclusion and even equivalence relations were proved; in the incomparable cases, this was shown by counter-examples. The comparative study of different expansions of rough sets not only provides a better understanding of these models, but also helps in learning rough sets at the macroscopic level.
Two-phase Strategy on Overlapping Communities Detection
Computer Science. 2013, 40 (1): 225-228. 
Abstract PDF(867KB) ( 392 )   
RelatedCitation | Metrics
Communities, especially overlapping communities in complex networks, are significant in many fields such as information spreading and recommendation, public opinion control, and commercial marketing. Overlapping community detection is attracting increasing attention since some nodes may naturally belong to several groups in real-world networks. This paper proposed an overlapping community detection algorithm based on a two-phase strategy: initial community extraction and community merging. In the extraction phase, a node with maximal degree and its tight neighbors are selected as an initial community, and nodes tight with the community are also included. In the merging phase, two communities are merged if the modularity grows larger after merging. Three real-world complex networks, including a large-scale one, were used to evaluate the algorithm. Experimental results demonstrate that the proposed algorithm is efficient for detecting overlapping communities in complex networks.
Cross-domain Sentiment Classification with Opinion Target Categorization
Computer Science. 2013, 40 (1): 229-232. 
Abstract PDF(440KB) ( 319 )   
RelatedCitation | Metrics
The task of sentiment classification is domain-specific, i.e., a classifier learned from annotated data of one domain often performs dramatically badly on data from a different domain. We presented a novel approach for cross-domain sentiment classification. Specifically, we first generalized four general categories of opinion targets (overall, software, hardware, and service) and classified all sentences into these categories. Then, some sentences with category information were annotated in the source domain, and a classifier for opinion target categorization was trained with the annotated data to classify all the sentences in both the source and target domains. Third, the four categories of opinion targets were considered as four different views, which are employed in a standard co-training algorithm to perform cross-domain sentiment classification. Experimental results across several domains of Chinese reviews demonstrate the effectiveness of the proposed approach.
Improved Quantum Optimal Control in Dynamic Control Field
Computer Science. 2013, 40 (1): 233-235. 
Abstract PDF(319KB) ( 527 )   
RelatedCitation | Metrics
Owing to the specialties of quantum systems, the reliability and efficiency of general optimal control schemes leave much to be desired. This paper developed an improved quantum optimal control method using a dynamic control field, which can be implemented as a fast convergent algorithm. The relations among the penalty on the field energy value, the convergence speed and the iteration step parameters were analyzed in detail through system simulation experiments. Theoretical analysis shows that the new algorithm is significantly superior to the general class of quantum optimal control methods and to the conjugate gradient method in reliability and convergence rate.
Learning Algorithm of Binary Neural Networks for Parity Problems
Computer Science. 2013, 40 (1): 236-240. 
Abstract PDF(344KB) ( 797 )   
RelatedCitation | Metrics
A binary neural network can express an arbitrary Boolean function completely, but functions with many isolated nodes, such as parity problems, are difficult to implement with a simple network structure. To address this problem, we presented a learning algorithm to realize Boolean functions, such as parity problems, that have many isolated samples. By means of the ant colony algorithm we obtained the optimized core nodes and the expansion order of true and false nodes; by combining this with a geometrical algorithm we gave the steps for expanding the classifying hyperplanes from the optimized core nodes, so the algorithm can reduce the number of hidden neurons in the network. The expressions of the hidden neurons and the output neuron are also given. Finally, the algorithm is validated through examples.
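To see why parity is the hard case, here is the classic fixed-weight baseline it is usually compared against: n threshold hidden units, where hidden unit i fires when the input sum reaches i, and an output unit that alternates their signs. This is the textbook construction, not the paper's ant-colony-optimized structure, which aims at fewer hidden neurons.

```python
# Classic n-hidden-unit threshold-network construction for n-bit parity.

def parity_net(bits):
    s = sum(bits)
    n = len(bits)
    hidden = [1 if s >= i else 0 for i in range(1, n + 1)]   # threshold units
    # Output unit: alternating weights +1, -1, +1, ... with threshold 0.5.
    out = sum(h if i % 2 == 0 else -h for i, h in enumerate(hidden))
    return 1 if out >= 0.5 else 0
```

With input sum s, exactly the first s hidden units fire and the alternating sum collapses to 1 when s is odd and 0 when s is even, i.e., the parity.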
Design and Realization of Ontology Evolution Requirement Auto-generated Model
Computer Science. 2013, 40 (1): 241-243. 
Abstract PDF(354KB) ( 375 )   
RelatedCitation | Metrics
By analysing the requirements of ontology evolution, this paper proposed a model that generates ontology evolution requirements automatically. By segmenting domain texts, we obtained candidate concepts, and obtained the final key concepts through restoration, extraction, concision and shifting. The paper adopts an ATF-PDF algorithm to obtain the key candidate concepts, and brings in a thesaurus to convert natural-language concepts to a normalized format. Then, we decomposed the compound evolution demands into atomic changes, and implemented an evolution system using corresponding evolution strategies. The experimental results indicate that the model achieves good results in evolution and in the analysis of the experiment.
Genetic Algorithm and Particle Swarm Optimization Based Animation Group Modeling Platform
Computer Science. 2013, 40 (1): 244-246. 
Abstract PDF(266KB) ( 336 )   
RelatedCitation | Metrics
Animation production often requires many individual models. To improve the efficiency and realism of group modeling, a group modeling method based on the genetic algorithm and the particle swarm algorithm, the NGP algorithm, was proposed. The algorithm generates a group of cartoon models from a single cartoon model. The genetic algorithm is applied to groups of the same type of model: applying it to each component forms a variety of component libraries. The particle swarm algorithm is applied to the combination of components into complex models; through combinatorial optimization of the various components, the model groups are formed. An animation modeling platform based on the NGP algorithm was implemented. Experiments show that the generated models have a high degree of realism, and the generation process is quick.
Weighted Combination of Conflicting Evidence Based on Evidence Classification
Computer Science. 2013, 40 (1): 247-250. 
Abstract PDF(307KB) ( 527 )   
RelatedCitation | Metrics
In order to combine highly conflicting evidence efficiently, a new evidence combination rule making use of evidence classification was proposed based on the triangular norm and discount factor analysis. First, utilizing the average evidence distance and a discount factor based on the triangular norm, the evidence is classified into three categories: reliable, non-conflicting and conflicting. The discount factors of the former two categories are set to one, which largely keeps the evidence supporting the right hypothesis and makes the fusion results focus on the right hypothesis more strongly. Then an improved evidence weight is obtained based on evidence distance, and the modified evidence is obtained by revising the conflicting evidence according to the weighting rule in order to eliminate the conflict. Finally, the modified evidence is combined according to Dempster's rule. Numerical examples show the efficiency and rationality of the proposed approach.
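The final combination step, Dempster's rule, can be sketched for the simple case of two mass functions over mutually exclusive singleton hypotheses (the paper's classification and re-weighting happen before this step and are omitted; the masses below are made up):

```python
# Dempster's rule for two mass functions over disjoint singleton hypotheses:
# multiply agreeing masses, discard conflicting mass, renormalize.

def dempster(m1, m2):
    hypotheses = set(m1) | set(m2)
    joint = {h: m1.get(h, 0) * m2.get(h, 0) for h in hypotheses}
    conflict = 1 - sum(joint.values())        # mass on incompatible pairs
    if conflict >= 1:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {h: v / (1 - conflict) for h, v in joint.items()}

m = dempster({"A": 0.6, "B": 0.4}, {"A": 0.7, "B": 0.3})
```

Here the conflicting mass is 0.46; renormalizing the agreeing mass concentrates belief on A. When conflict approaches 1 the rule degenerates, which is exactly the situation the paper's pre-classification and discounting are designed to repair.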
Ontology Based Sentence Similarity Measurement
Computer Science. 2013, 40 (1): 251-256. 
Abstract PDF(476KB) ( 424 )   
RelatedCitation | Metrics
This paper proposed ontology-based sentence similarity computing. Using the relations between ontology concepts and the key words in sentences to establish a semantic index and extract direct and indirect semantic relations, an ontology-based semantic vector representation is used to calculate the semantic similarity between sentences, yielding the sentence similarity computing method. The method was applied to the Microsoft Research Paraphrase corpus (MSRP). Experiments show that, compared with related similarity computing methods, this method obtains good precision and recall under incomplete additional information.
Clustering Structural Analysis on Fuzzy Proximity Relation
Computer Science. 2013, 40 (1): 257-261. 
Abstract PDF(387KB) ( 286 )   
RelatedCitation | Metrics
A clustering structural analysis of fuzzy proximity relations was presented based on the granular space, and the clustering structural characteristics were discussed. First, the representation and generation algorithm of the granular space (or clustering structure) was given and the concept of a key point sequence was introduced; a minimum dynamic connected graph was built to explain the generation process of the granular space. Second, by introducing the concepts of isomorphism and ε-similarity, the corresponding theorems determining whether two fuzzy proximity relations are isomorphic or ε-similar were given. Finally, by introducing the concept of strong ε-similarity, the relationship between the isomorphism and strong ε-similarity of two fuzzy proximity relations was studied. These results provide research tools for the general analysis of clustering structures.
Research of Global Asymptotic Stability for CNN Based on Quadratic Form
Computer Science. 2013, 40 (1): 262-265. 
Abstract PDF(338KB) ( 335 )   
RelatedCitation | Metrics
The stability of cellular neural networks is significant because they are used in application areas such as image processing, video communication and optimal control. How to choose a reasonable parameter template is the key issue in stability research. The Lyapunov second method was used to analyze the global asymptotic stability of cellular neural networks, and a better Lyapunov function was constructed to obtain a new sufficient condition for the global asymptotic stability of the system. The condition improves previous results and further yields a sufficient condition for the case where the origin is the equilibrium point. Numerical simulations show its effectiveness and feasibility.
Calculation Model of Satisfaction Degree Based on Intuitionistic Fuzzy
Computer Science. 2013, 40 (1): 266-268. 
Abstract PDF(238KB) ( 509 )   
RelatedCitation | Metrics
Satisfaction degree theory is widely used in optimization, system control, management, decision-making, resource allocation, task scheduling and other fields, but most applications define and calculate satisfaction from the background of the specific problem and lack a universally applicable, formalized calculation model. Based on intuitionistic fuzzy set theory, a universal multilevel intuitionistic fuzzy satisfaction degree calculation model was established. It effectively derives intuitionistic fuzzy satisfaction degrees by using an intuitionistic fuzzy filter operator. By combining qualitative and quantitative methods to calculate satisfaction, it makes the results more informative, scientific and reasonable, with a high degree of automation. This paper analyzed the model's computational complexity and, combining it with a practical example, computed the intuitionistic fuzzy satisfaction degree of Shangluo tourism; the result shows that the model is effective and practical.
Threshold-based Segmentation for 3D Medical Volumetric Images
Computer Science. 2013, 40 (1): 269-272. 
Abstract PDF(628KB) ( 358 )   
RelatedCitation | Metrics
This paper presented a threshold-based segmentation framework for 3D medical volumetric images, in which two classical image segmentation algorithms, OTSU and gradient-based segmentation, were re-designed and optimized to suit 3D volumetric images. In order to evaluate the framework's performance, we defined two novel quantitative performance indicators, segmentation accuracy and segmentation balance, and used them to evaluate and analyze the algorithms. Experimental results show that both proposed segmentation algorithms can yield satisfactory segmentation results for 3D medical volumetric images, and that OTSU segmentation achieves better performance than gradient-based segmentation.
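The scalar core of the OTSU branch extends to 3D straightforwardly: the intensity histogram is accumulated over all voxels, and the threshold maximizing the between-class variance is chosen. The sketch below shows that scalar core on a made-up bimodal intensity list, not the paper's volumetric redesign:

```python
# Otsu's threshold over a flattened volume: pick the gray level that
# maximizes between-class variance w0 * w1 * (mu0 - mu1)^2.

def otsu_threshold(voxels, levels=256):
    hist = [0] * levels
    for v in voxels:
        hist[v] += 1
    total = len(voxels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t in range(levels):
        w0 += hist[t]                  # voxels at or below t
        sum0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue                   # one class empty: variance undefined
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / (total - w0)
        var = w0 * (total - w0) * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

volume = [10] * 400 + [200] * 300 + [12] * 100   # bimodal "voxel" intensities
t = otsu_threshold(volume)
```

For a 3D array the only change is flattening the voxels before histogramming; the threshold search itself is dimension-independent.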
Fast Asian Script Identification Based on Multi-feature
Computer Science. 2013, 40 (1): 273-276. 
Abstract PDF(573KB) ( 305 )   
RelatedCitation | Metrics
Script identification has important applications in the field of document image information retrieval. An East Asian script identification approach based on multiple features was proposed. In contrast to traditional identification methods based on statistical characteristics and symbol matching, the algorithm first analyzes and extracts token shape matching features, layout features and character complexity features, and then uses the closeness degree of fuzzy sets for identification. The experimental results show that the algorithm has higher recognition accuracy and strong robustness to different fonts.
Geological Layers Accurate Segmentation Method in Geological Map
Computer Science. 2013, 40 (1): 277-281. 
Abstract PDF(1603KB) ( 279 )   
RelatedCitation | Metrics
For extracting stratum information from geological sectional views, a novel anti-interference stratum segmentation algorithm was proposed. First, the method uses variation functions and confidence degrees to measure regional orientation angles, and a segmentation method based on a direction-flow-field snake model is designed for sub-regions with inconsistent directions. Second, instead of calculating image structure and orientation from gradient information, the image direction obtained by the Radon transform is integrated into a Gabor filter, so that different geological layers are accurately extracted and internal gaps within the same layer are repaired. Finally, experimental results verify the effectiveness of the proposed method, which yields better texture segmentation results on practical geological section maps and contributes to oil exploration and three-dimensional reconstruction of the geological environment.
Watershed Algorithm Based on Image Filtering by Using Component Tree and Fast Region Merging
Computer Science. 2013, 40 (1): 282-285. 
Abstract PDF(1237KB) ( 320 )   
RelatedCitation | Metrics
An image segmentation algorithm combining component-tree-based image filtering with fast region merging was proposed to deal with over-segmentation. First, the component tree is used to represent the gradient image, and the relative potential energy and its properties are computed according to the ordered extrema. The gradient image is then filtered in a preprocessing stage to reduce the number of local minima. Next, the watershed algorithm is applied to the filtered gradient image to obtain a pre-segmentation result. Finally, a fast region merging algorithm merges the pre-segmentation regions based on the perfect scene criterion to obtain the final segmentation. Experimental results show that image filtering based on the component tree can reduce the local minima due to noise and thus the over-segmentation, which greatly improves the region merging accuracy and processing speed.
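Fast region merging over a watershed pre-segmentation is commonly implemented with a union-find (disjoint-set) structure; the sketch below uses a simple mean-intensity difference as a stand-in merge criterion, since the paper's actual criterion is not reproduced here.

```python
class DSU:
    """Union-find over region labels for near-constant-time merging."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def merge_regions(means, adjacency, tol):
    """Merge adjacent watershed regions whose mean intensities
    differ by at most `tol` (a stand-in for the merge criterion)."""
    dsu = DSU(len(means))
    for a, b in adjacency:
        if abs(means[a] - means[b]) <= tol:
            dsu.union(a, b)
    return [dsu.find(i) for i in range(len(means))]

# Four over-segmented regions: pairs 0-1 and 2-3 are nearly identical.
labels = merge_regions([10.0, 11.0, 40.0, 41.5],
                       [(0, 1), (1, 2), (2, 3)], tol=2.0)
```

The four initial regions collapse into two merged regions, which is exactly the over-segmentation reduction the merging stage is for.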
Change Detection Method for Buildings Based on Pixel-level and Feature-level
Computer Science. 2013, 40 (1): 286-293. 
Abstract PDF(1581KB) ( 524 )   
RelatedCitation | Metrics
Aiming at the low accuracy of change detection for high-rise buildings when only pixel-level or feature-level detection is used, a method combining pixel-level and feature-level change detection was presented. Changes in multi-temporal remote sensing images are first detected with the ratio method to obtain candidate change regions of high-rise buildings, and changes in the candidate regions are then detected based on building features. First, a novel fast registration algorithm based on constrained Delaunay triangulation is used to register multi-spectral images of two different phases. Because building changes alter the distribution and color characteristics of the local texture, texture and color features that are robust to radiation and registration errors are extracted for change detection. Experimental results show that combining pixel-level and feature-level building change detection can effectively improve accuracy and reduce false alarms.
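The pixel-level stage can be sketched with a log-ratio variant of the ratio operator, thresholded at a few standard deviations; the image sizes, intensity ranges, and the `k` threshold are assumptions for the demo, not the paper's settings.

```python
import numpy as np

def log_ratio_change(img1, img2, k=2.0):
    """Pixel-level change detection with the log-ratio operator:
    flag pixels whose log-ratio deviates from the global mean
    by more than k standard deviations."""
    eps = 1e-6                       # avoid division by zero
    r = np.log((img1 + eps) / (img2 + eps))
    return np.abs(r - r.mean()) > k * r.std()

rng = np.random.default_rng(1)
t1 = rng.uniform(90, 110, (64, 64))  # acquisition at time 1
t2 = t1.copy()                       # time 2: identical except ...
t2[20:30, 20:30] *= 3.0              # ... one block brightens (new building)
mask = log_ratio_change(t1, t2)      # candidate change region
```

The flagged block would then be handed to the feature-level stage for confirmation against building texture and color features.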
ISAR Imaging Algorithm Based on Sparse Representation and Time-frequency Transform
Computer Science. 2013, 40 (1): 294-297. 
Abstract PDF(604KB) ( 392 )   
RelatedCitation | Metrics
Inverse synthetic aperture radar (ISAR) images maneuvering targets, and during the coherent processing interval the imaging projection plane and the cross-range scale change with time, so many parameters are difficult to extract accurately and little prior knowledge can be acquired. In general, the robust range-Doppler (RD) imaging algorithm is used, but the conventional RD algorithm assumes that the target rotates uniformly and that azimuth sampling is uniform. For maneuvering targets, the RD algorithm blurs the image; for radar data with gaps in particular, imaging performance degrades greatly and the target may even become unidentifiable. This paper introduced a range-instantaneous-Doppler imaging algorithm based on sparse representation and time-frequency transforms which can effectively image maneuvering targets. The experimental results validate the effectiveness and feasibility of this approach.
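A minimal example of the time-frequency side of such an algorithm: a short-time Fourier transform tracking the drifting Doppler of a single scatterer. The sampling rate, chirp parameters, and window/hop sizes are all illustrative assumptions, and a real range-instantaneous-Doppler imager would apply this per range cell with the sparse-representation step on top.

```python
import numpy as np

def stft(x, win=64, hop=16):
    """Short-time Fourier transform: slide a Hann window along the
    signal and FFT each frame, giving a frequency-vs-time picture."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))

fs = 1024
t = np.arange(fs) / fs
# A scatterer on a maneuvering target: Doppler drifts from 100 to 200 Hz.
sig = np.cos(2 * np.pi * (100 * t + 50 * t**2))
S = stft(sig)
peak_first = np.argmax(S[0])     # dominant frequency bin, first frame
peak_last = np.argmax(S[-1])     # dominant frequency bin, last frame
```

A plain FFT of the whole signal (the RD assumption of uniform rotation) would smear this energy across bins, whereas the frame-by-frame peaks clearly move upward in frequency.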
Research on Multi-granularity Image Retrieval Method
Computer Science. 2013, 40 (1): 298-301. 
Abstract PDF(972KB) ( 340 )   
RelatedCitation | Metrics
Based on quotient space and granular computing theory, the image retrieval process was analyzed, and a multi-granularity image retrieval method based on the quotient space was proposed. With the proposed method, an image is divided into different regions according to the equivalence relation R (i.e., connectivity of the image's dominant color), and color, shape and spatial features of the regions are extracted at different granularity levels. A synthetic feature is then obtained by composing the attribute functions of the different granularity levels according to the theory of composing multi-granularity attribute functions in the quotient space, and images are finally retrieved using this synthetic feature. Experimental results indicate that the proposed method is superior to the single-attribute method, the MI}H method and the color volume histogram method.
Feature Extraction and Parameter Selection of SVDD Using Simulated Annealing Approach
Computer Science. 2013, 40 (1): 302-305. 
Abstract PDF(318KB) ( 609 )   
RelatedCitation | Metrics
Support vector data description (SVDD) is a classical method for novelty detection. As is well known, parameter setting and feature quality are two key factors affecting the performance of SVDD. Combining feature extraction with parameter selection, this paper proposed a simulated annealing approach for feature extraction and parameter selection of SVDD (SA-SVDD). During the simulated annealing procedure, the optimal kernel parameter, tradeoff parameter and number of extracted features are selected automatically. Experimental results on UCI benchmark data sets demonstrate that SA-SVDD outperforms traditional parameter selection methods.
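A generic simulated annealing loop of the kind SA-SVDD relies on is easy to sketch. Here the SVDD cross-validation error is replaced by a hypothetical smooth surrogate over two log-scale parameters, since training an actual SVDD is outside the scope of this demo.

```python
import math
import random

def simulated_annealing(objective, state, neighbor,
                        t0=1.0, cooling=0.95, steps=300):
    """Generic simulated annealing: always accept improvements,
    accept worse states with probability exp(-delta/T) as T cools."""
    random.seed(0)                        # reproducible demo run
    cur, f_cur = state, objective(state)
    best, f_best = cur, f_cur
    T = t0
    for _ in range(steps):
        cand = neighbor(cur)
        f_cand = objective(cand)
        if f_cand < f_cur or random.random() < math.exp((f_cur - f_cand) / T):
            cur, f_cur = cand, f_cand
            if f_cur < f_best:
                best, f_best = cur, f_cur
        T *= cooling                      # geometric cooling schedule
    return best, f_best

# Hypothetical surrogate error surface over (log C, log gamma),
# minimized at log C = 1, log gamma = -2 (stand-in for CV error).
obj = lambda s: (s[0] - 1.0) ** 2 + (s[1] + 2.0) ** 2
nbr = lambda s: (s[0] + random.uniform(-0.3, 0.3),
                 s[1] + random.uniform(-0.3, 0.3))
best, err = simulated_annealing(obj, (0.0, 0.0), nbr)
```

In the paper's setting the state would also carry the number of extracted features, and the objective would be the SVDD validation error rather than this toy quadratic.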
Hairstyle Modeling with Layered Multistage Constraint Domain
Computer Science. 2013, 40 (1): 306-310. 
Abstract PDF(715KB) ( 335 )   
RelatedCitation | Metrics
Isolated styling constraints are commonly used in existing hairstyle modeling, and such methods may fail when the hairstyle to be built is sufficiently complicated. Exploiting the layered, stage-by-stage deformation along hair strands, a hairstyle modeling method with a layered multistage constraint domain was presented, with which various complicated hairstyles can be built interactively from styling constraints. By establishing global and local styling constraint queues on hair wisps, more constraints can be used to generate complicated hair strand deformations properly. Moreover, rotation minimizing frames (RMF) of hair strands are combined with user-defined parametric equations of spatial spirals to add more detailed appearance. Experimental results show that rich and natural hairstyles can be built effectively with the proposed method.
Fabric Simulation Algorithm with Small-Neighborhood 3D Movement Constraint Estimation
Computer Science. 2013, 40 (1): 311-313. 
Abstract PDF(502KB) ( 292 )   
RelatedCitation | Metrics
In traditional 3D fabric simulation, the randomness of the fabric's motion state causes the simulated 3D coordinates to become dislocated during movement, and sudden changes in the motion state increase the randomness of the dynamic parameters, producing small-neighborhood mutations of the 3D coordinates and blurring the photo-realistic simulation result. To solve this problem, this paper proposed a fabric simulation algorithm with small-neighborhood 3D movement constraint estimation, which uses a Markov motion model together with the randomness of collision motion to constrain the fabric within a limited region, thereby realizing realistic 3D fabric simulation on the computer. Experimental results show that the method achieves realistic 3D fabric simulation with higher fidelity.
Improved Ant Colony Algorithm for Dynamic Path Planning
Computer Science. 2013, 40 (1): 314-316. 
Abstract PDF(246KB) ( 652 )   
RelatedCitation | Metrics
In view of the slow convergence of the traditional ant colony algorithm and its tendency to fall into local optima, this paper put forward an improved distance heuristic factor that increases the influence of the next node, so as to enhance the global search ability, avoid being trapped in local optima and improve the convergence rate. Considering the complexity and diversity of real environments, multiple path quality constraints were introduced to improve the pheromone update rules. Simulation results show that the improved ant colony algorithm performs well in dynamic path planning.
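One common way to strengthen the distance heuristic, consistent with the idea of increasing the next node's influence, is to let the heuristic look at the candidate node's distance to the goal rather than only the edge length; the graph, coordinates, and parameter values below are illustrative assumptions, not the paper's exact rule.

```python
import math
import random

def select_next(current, goal, neighbors, tau, dist, alpha=1.0, beta=3.0):
    """Ant transition rule with an improved distance heuristic:
    eta includes the candidate node's distance to the goal,
    pulling ants toward the target and speeding convergence."""
    weights = []
    for j in neighbors[current]:
        eta = 1.0 / (dist[current][j] + dist[j][goal] + 1e-9)
        weights.append((j, tau[(current, j)] ** alpha * eta ** beta))
    # Roulette-wheel selection proportional to pheromone * heuristic.
    r, acc = random.random() * sum(w for _, w in weights), 0.0
    for j, w in weights:
        acc += w
        if acc >= r:
            return j
    return weights[-1][0]

# Tiny demo graph: from A, node B is much better aligned with goal G.
coords = {"A": (0, 0), "B": (1, 1), "C": (1, -2), "G": (3, 0)}
dist = {u: {v: math.dist(coords[u], coords[v]) for v in coords}
        for u in coords}
neighbors = {"A": ["B", "C"]}
tau = {("A", "B"): 1.0, ("A", "C"): 1.0}   # equal pheromone to start
random.seed(0)
picks = [select_next("A", "G", neighbors, tau, dist) for _ in range(200)]
```

With equal pheromone on both edges, the goal-aware heuristic alone makes ants choose B far more often than C, which is the intended bias toward promising next nodes.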