Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 36 Issue 9, 2009
Research on Reputation Based Trust Model for P2P Environment
HU Jian-li, WU Quan-yuan, ZHOU Bin
Computer Science. 2009, 36 (9): 1-6. 
Abstract PDF(630KB) ( 1567 )   
With the wide application of P2P systems, they increasingly face severe trust problems such as service faking and resource abuse by malicious peers. Conventional security measures cannot meet this demand, whereas reputation-based trust models offer a key to these problems. Firstly, the relationship between trust and reputation was analyzed, and a basic running framework for reputation-based trust models was put forward. Then an overview and comparison of current classic reputation-based trust models were made. Finally, the challenges of this field for future research were presented, and some problems existing in current research were discussed.
Research on Parallel Performance Simulation of Large Scale Parallel Computer
XU Chuan-fu, CHE Yong-gang, WANG Zheng-hua
Computer Science. 2009, 36 (9): 7-10. 
Abstract PDF(478KB) ( 1125 )   
Simulation is an important tool for performance evaluation of computer systems. This paper focused on parallel performance simulation of large-scale parallel computer systems and message-passing applications. Firstly we summarized some key techniques and the state of the art in this research area; then we gave a detailed introduction to several representative parallel simulators; finally, based on the trends of parallel computer systems and their applications, we discussed some challenging problems and useful suggestions for further research on parallel simulators.
Survey of Digital Watermarking for the Vector Maps
SUN Jian-guo, MEN Chao-guang, YU Lan-fang, CAO Liu-juan
Computer Science. 2009, 36 (9): 11-16. 
Abstract PDF(540KB) ( 756 )   
Vector maps are widely used in geographic information systems (GIS), military surveying and mapping, and so on. Watermarking techniques for vector maps are widely used for protecting the copyright of digital maps, preventing illegal forgery, and authentication, and they have developed considerably in recent years. Firstly, the characteristics and evaluation criteria of watermarking for vector maps were given. Then spatial-domain, frequency-domain, zero-watermark and multi-watermark algorithms were analyzed and discussed. The comparisons showed the advantages and disadvantages of these algorithms. Finally, development directions and possible research targets were pointed out.
Survey on Solving SAT Problems in EDA
WANG Xiu-qin, WANG Hao, MA Guang-sheng
Computer Science. 2009, 36 (9): 17-20. 
Abstract PDF(451KB) ( 1908 )   
Boolean satisfiability (SAT) is a famous problem in theoretical computer science and artificial intelligence; many problems can be solved by means of solving SAT problems. In this paper, solving technology for SAT problems in EDA fields was studied. Major solving approaches were summarized, and the different approaches were sorted and compared. Problems existing in this field were discussed, and hot research issues and future development trends were pointed out.
AS-level Internet Topology Power-law and Node Aging Analysis
FU Da-yu, ZHAO Hai, ZHANG Jun, GE Xin
Computer Science. 2009, 36 (9): 21-23. 
Abstract PDF(339KB) ( 726 )   
The Internet topology, especially the AS-level topology, is a hotspot of current research. We can better comprehend the inner connective mechanism of the network by studying the evolvement trend of the Internet topology. The research in this paper is based on the massive data authorized by the CAIDA (Cooperative Association for Internet Data Analysis) Skitter project, with a time span from January 2004 to June 2008. This paper introduced the essential basic conceptions first, then carried out the CCDF(d)-degree power-law analysis, the degree-rank analysis and the node aging analysis. The evolvement of the network topology structure shows that the top-degree nodes of the AS-level Internet are stable and maintain the clustering and the power-law properties of the network. However, these nodes gradually lose efficient connections over time, and the network topology evolves toward a more even structure.
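As a rough illustration of the CCDF(d) power-law analysis mentioned above (a sketch, not the paper's code), the following Python computes the complementary cumulative degree distribution and fits the exponent by least squares on log-log axes; the Zipf-distributed synthetic degrees merely stand in for a CAIDA AS snapshot.

```python
# Hedged sketch: CCDF(d) power-law fit for a degree sequence.
import numpy as np

def ccdf_powerlaw_exponent(degrees):
    """Return (d, ccdf, alpha): degree values, CCDF(d), fitted exponent."""
    d = np.sort(np.unique(degrees))
    ccdf = np.array([(degrees >= v).mean() for v in d])
    # Fit log CCDF(d) = c - alpha * log d (a straight line on log-log axes).
    slope, _ = np.polyfit(np.log(d), np.log(ccdf), 1)
    return d, ccdf, -slope

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic heavy-tailed degrees stand in for a real AS-level snapshot.
    degrees = rng.zipf(2.2, size=10000)
    _, _, alpha = ccdf_powerlaw_exponent(degrees)
    print(f"estimated power-law exponent: {alpha:.2f}")
```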
QoS-constrained Workflow Scheduling Algorithm for Grid Computing Based on Two-way Stratified
YAO Lei, DAI Guan-zhong, ZHANG Hui-xiang, REN Shuai
Computer Science. 2009, 36 (9): 24-27. 
Abstract PDF(324KB) ( 602 )   
In order to meet users' QoS requirements for the execution of grid workflows, the key tasks of the workflow were analyzed first, and the QoS of the whole workflow was divided into segments that form the QoS constraints of single tasks. Then a grid workflow scheduling algorithm (Q-TWS) based on two-way stratification was proposed. Through both forward layering and reverse layering, this algorithm can find the parallel relations between tasks easily and accurately. Q-TWS can relax the task execution time, increase scheduling flexibility and meet user QoS requirements. Simulation results show that Q-TWS has a shorter execution time and a lower execution cost compared with BL when the two algorithms have the same deadline.
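The two-way layering idea can be illustrated with a small DAG sketch. The Python below is a hedged reconstruction, not Q-TWS itself: it layers a task graph once forward from entry tasks and once in reverse from exit tasks, so tasks sharing a layer have no precedence path and are candidates for parallel execution. The function names and the example workflow are invented for illustration.

```python
# Sketch of forward/reverse layering of a task DAG (illustrative only).
from collections import defaultdict

def forward_layers(succ):
    """succ: dict task -> list of successor tasks. Returns task -> layer."""
    indeg = defaultdict(int)
    tasks = set(succ)
    for t, ss in succ.items():
        tasks.update(ss)
        for s in ss:
            indeg[s] += 1
    layer, frontier, level = {}, [t for t in tasks if indeg[t] == 0], 0
    while frontier:
        nxt = []
        for t in frontier:
            layer[t] = level
            for s in succ.get(t, []):
                indeg[s] -= 1
                if indeg[s] == 0:
                    nxt.append(s)
        frontier, level = nxt, level + 1
    return layer

def reverse_layers(succ):
    pred = defaultdict(list)
    for t, ss in succ.items():
        for s in ss:
            pred[s].append(t)
    return forward_layers(pred)

workflow = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(forward_layers(workflow))   # {'A': 0, 'B': 1, 'C': 1, 'D': 2}
print(reverse_layers(workflow))   # layers counted back from the exit task
```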
Efficient Certificate-based Encryption Scheme
LU Yang, LI Ji-guo, XIAO Jun-mo
Computer Science. 2009, 36 (9): 28-31. 
Abstract PDF(328KB) ( 807 )   
Certificate-based encryption (CBE) is a new PKC paradigm which combines traditional public-key encryption (PKE) and identity-based encryption (IBE) while preserving their features. CBE provides an efficient implicit certification mechanism for a PKI and allows a form of automatic certificate revocation, while it is not subject to the private key escrow problem and the secret key distribution problem inherent in IBE. This paper presented an efficient pairing-based CBE scheme and proved it to be IND-CBE-CCA secure in the random oracle model based on the hardness of the p-BDHI problem. Compared with other existing CBE schemes, this scheme has an obvious advantage in computation performance.
Method for Calculating the Upstream Buffer Size of the CM Based on Contention Request Mode
WANG Qin, PAN Guang-rong, DU Li-guo
Computer Science. 2009, 36 (9): 32-35. 
Abstract PDF(320KB) ( 650 )   
The contention request mode of upstream bandwidth allocation for the HFC (Hybrid Fiber Coax) network access device CM (Cable Modem) was discussed based on the international standard specification for HFC, the DOCSIS specification. Firstly, this paper proposed a Markov chain model for the contention request algorithm of the CM upstream channel. Based on this model, we proposed an M/M/1/K queue model for the data frame sending process of the CM upstream channel, and deduced a theoretical method for estimating the data frame sending buffer size of the CM upstream channel. Finally, we simulated the upstream channel with the NS-2 simulator. The simulation results show that our method is able to correctly estimate the relationship between the upstream sending buffer size and the buffer overflow probability. Hence our method can offer a theoretical reference for the setup of the CM upstream sending buffer size.
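For readers unfamiliar with M/M/1/K sizing, the sketch below applies the textbook blocking-probability formula (the paper's exact derivation may differ) to find the smallest buffer capacity whose overflow probability meets a target:

```python
# Sketch: buffer sizing from the standard M/M/1/K blocking probability.
def mm1k_overflow(rho, k):
    """Blocking probability of an M/M/1/K queue with utilization rho."""
    if rho == 1.0:
        return 1.0 / (k + 1)
    return (1 - rho) * rho**k / (1 - rho**(k + 1))

def min_buffer(rho, target, k_max=10**6):
    """Smallest capacity K whose overflow probability is <= target."""
    for k in range(1, k_max):
        if mm1k_overflow(rho, k) <= target:
            return k
    raise ValueError("target unreachable for this utilization")

print(min_buffer(rho=0.8, target=1e-4))  # frames of send buffer needed
```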
Group Signature Scheme with Multiple Security Strategy
ZHU Jian-hua, CUI Guo-hua, ZHOU Shi-yang
Computer Science. 2009, 36 (9): 36-38. 
Abstract PDF(336KB) ( 669 )   
A group signature scheme equipped with a multiple security strategy was proposed. This scheme has forward security, which minimizes the damage caused by the exposure of any group member's signing key and does not affect the past signatures generated by this member; meanwhile, signatures forged for dates before a member joined the group can be prevented via this strategy. Moreover, this scheme supports group member revocation efficiently: if a group member is removed from the group, he can no longer sign messages on behalf of the group. The signatures are traceable: in the case of a dispute about a signature, the GM can open the signature to determine the identity of the signer, and no one besides the GM can open any signature. Furthermore, there is no requirement for time period limits as in previous schemes, where the system must be set up anew once all time periods are used up.
Study on Node Dead & Alive States in Event-detection Wireless Sensor Networks
HU Li-qiong, SHU Jian, WU Zhen-hua, LIU Lin-lan, SUN Li-min
Computer Science. 2009, 36 (9): 39-42. 
Abstract PDF(431KB) ( 665 )   
Based on experiments, the paper analyzed the power-exhausting process of a node in event-detection wireless sensor networks (WSN). It defined two states of a node, dead and alive, according to node working limitations. Furthermore, it proposed a detection method for the node sub-death state based on a heartbeat mechanism, which employs neighbor node monitoring to detect whether a node is in a silent state. With this method, the sub-death state is determined by the voltage value and restarts of a node. A formula for node death was presented. Experimental results show that the method is effective and that the rate of incorrect state judgment is reduced. It supports the applications of event-detection WSN.
Wireless Sensor Networks Based on Multi-angle Trust of Node
DONG Hui-hui, GUO Ya-jun
Computer Science. 2009, 36 (9): 43-45. 
Abstract PDF(342KB) ( 654 )   
A trust model based only on communication cannot completely solve the problems of security threats and lack of energy in wireless sensor networks. A trust model based on communication, data and energy was proposed in this paper, which added the sensing data and the node's energy to the factors of trust assessment, and used different methods to calculate the separate trust values. In this way, a wireless sensor network with more reliable nodes can be established. Simulation results show that a model based only on communication trust is not enough to decide the trustworthiness of a node, and that the model based on multi-angle trust can judge the credibility of a node more simply and accurately.
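The following is a minimal sketch of the multi-angle idea; the individual trust formulas and weights are assumptions chosen to mirror the three factors named above, not the paper's exact equations:

```python
# Hedged sketch: combine communication, data and energy trust per node.
def communication_trust(forwarded, dropped):
    # Beta-style estimate from observed forward/drop counts (assumed form).
    return (forwarded + 1) / (forwarded + dropped + 2)

def data_trust(reading, neighbor_mean, tolerance):
    # Trust decays with deviation from the neighborhood consensus.
    return max(0.0, 1.0 - abs(reading - neighbor_mean) / tolerance)

def energy_trust(residual, threshold):
    # A node below the energy threshold cannot be relied on.
    return 1.0 if residual >= threshold else residual / threshold

def node_trust(comm, data, energy, w=(0.4, 0.4, 0.2)):
    return w[0] * comm + w[1] * data + w[2] * energy

t = node_trust(communication_trust(90, 10),
               data_trust(21.5, 22.0, 5.0),
               energy_trust(0.6, 0.5))
print(f"combined trust: {t:.3f}")
```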
Self-synchronization Chaotic Encryption Method with Time-varying Keys
JU Lei, WENG Yi-fang, ZHAO Geng, ZHENG De-ling
Computer Science. 2009, 36 (9): 46-48. 
Abstract PDF(229KB) ( 716 )   
In a chaotic system, the separation time of a system variable can reflect the approximate level of the system parameters, which facilitates parameter detection and synchronization attacks. To resolve this security weakness of chaotic systems, we obtained a new chaotic variable by preprocessing the chaotic variable and adding disturbance to the system parameters. The separation time of the new variable hardly reflects the approximate level of the system parameters. Subsequently, a self-synchronization chaotic encryption method with time-varying keys was presented. This method can resist synchronization attacks and has the capability of a one-time pad. Security analysis and simulation results show that the method is feasible and secure.
Community-finding Algorithm in Complex Networks Based on Spectral Clustering
CAI Xiao-yan, DAI Guan-zhong, YANG Li-bin
Computer Science. 2009, 36 (9): 49-50. 
Abstract PDF(234KB) ( 1031 )   
Research on community finding is very helpful for controlling virus spreading in networks. Most of the proposed community-finding algorithms are not suitable for very large networks because of their time complexity. Combining the advantage of spectral clustering in clustering data sets with unknown distribution with the ability of the modularity function to find a good community number in large networks, a community-finding algorithm based on spectral clustering was proposed. Experimental results indicate that the new algorithm is efficient and effective at finding good community structure in large networks.
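A compact sketch of this combination (illustrative, not the paper's implementation) follows: embed nodes with eigenvectors of the normalized Laplacian, cluster for several candidate community counts, and keep the partition with the highest Newman modularity Q. It assumes a symmetric adjacency matrix with no isolated nodes.

```python
# Hedged sketch: spectral clustering + modularity-based model selection.
import numpy as np

def modularity(adj, labels):
    m = adj.sum() / 2.0                       # total edge weight
    k = adj.sum(axis=1)
    q = 0.0
    for c in set(labels):
        idx = np.where(labels == c)[0]
        e_in = adj[np.ix_(idx, idx)].sum() / 2.0
        q += e_in / m - (k[idx].sum() / (2 * m)) ** 2
    return q

def kmeans(x, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers) ** 2).sum(-1), axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = x[labels == c].mean(axis=0)
    return labels

def spectral_communities(adj, k_max=6):
    d = adj.sum(axis=1)                       # assumes all degrees > 0
    l = np.eye(len(adj)) - adj / np.sqrt(np.outer(d, d))  # normalized Laplacian
    _, vecs = np.linalg.eigh(l)               # ascending eigenvalues
    best = None
    for k in range(2, k_max + 1):
        labels = kmeans(vecs[:, :k], k)
        q = modularity(adj, labels)
        if best is None or q > best[0]:
            best = (q, labels)
    return best                               # (modularity, community labels)

# Toy graph: two 4-cliques joined by a single edge.
a = np.kron(np.eye(2), np.ones((4, 4))) - np.eye(8)
a[3, 4] = a[4, 3] = 1
print(spectral_communities(a))
```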
Fusion-based Sensor Placement for Target Detection with Guaranteed Accuracy
YUAN Zhao-hui, WANG Gao-feng
Computer Science. 2009, 36 (9): 51-54. 
Abstract PDF(425KB) ( 641 )   
Sensor placement is a key issue in target detection: placing fewer sensors while achieving high detection accuracy, expressed as a specified high detection probability and a low false alarm rate, is very important. Data-fusion-based collaboration between sensors can boost the level of accuracy in general. In this paper, the accuracy model and the relationship between the fusion radius and the density of sensors were analyzed; a quality-threshold clustering algorithm was employed to organize the surveillance locations into placement units, and sensors were placed in them from the most intensive unit to the sparsest. Simulation results show that this placement algorithm can significantly reduce the total number of sensors with guaranteed accuracy.
Distributed IPS Model Based on Near Neighbor Distance
ZHANG Huan, CAO Wan-hua, FENG Li, ZHANG Jian
Computer Science. 2009, 36 (9): 55-58. 
Abstract PDF(326KB) ( 614 )   
The characteristics and problems of Intrusion Prevention System (IPS) architectures were analyzed, and a distributed IPS model based on near-neighbor distance was proposed in this paper. In the model, the message types transmitted between cooperating nodes were defined, and a message-based cooperation method was adopted to enhance the flexibility of system deployment. In order to reduce redundant messages, the distance between nodes was calculated and the communication region was optimized in the model. The experimental results show that the model decreases the IPS network load evidently.
Distributed Weight-clustering Algorithm in Wireless Sensor Networks
CHENG Lu, CHENG Geng-min
Computer Science. 2009, 36 (9): 59-62. 
Abstract PDF(337KB) ( 561 )   
In recent years, wireless sensor networks (WSN) have attracted increasing attention due to their bright application prospects in both military and civil fields. Energy conservation has become a crucial problem in WSN routing protocols. Cluster-based routing protocols such as LEACH conserve energy by forming clusters in which only cluster heads need to consume extra energy to perform data aggregation and transmit it to the base station. Unfortunately, cluster formation not only dissipates a lot of energy but also increases overhead. We proposed a distributed, weighted clustering algorithm which improves the cluster formation process of LEACH by taking residual energy, mutual position, workload balance and MAC functioning into consideration. The algorithm is flexible, and its coefficients can be adjusted for different networks. Simulation experiments demonstrate that the proposed algorithm performs better than LEACH.
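An illustrative sketch of such a weighted cluster-head election follows; the weight terms, their normalizations to [0,1], and the coefficients are assumptions chosen to mirror the four listed factors, not the paper's formula:

```python
# Hedged sketch: weighted cluster-head election over normalized factors.
def head_weight(node, w_energy=0.5, w_pos=0.2, w_load=0.2, w_mac=0.1):
    """Higher weight -> better cluster-head candidate (all inputs in [0,1])."""
    return (w_energy * node["residual_energy"]             # energy left
            + w_pos * (1.0 - node["mean_neighbor_dist"])   # central position
            + w_load * (1.0 - node["times_been_head"])     # balance workload
            + w_mac * node["mac_quality"])                 # MAC functioning

nodes = [
    {"id": 1, "residual_energy": 0.9, "mean_neighbor_dist": 0.3,
     "times_been_head": 0.1, "mac_quality": 0.8},
    {"id": 2, "residual_energy": 0.5, "mean_neighbor_dist": 0.2,
     "times_been_head": 0.4, "mac_quality": 0.9},
]
print("elected cluster head:", max(nodes, key=head_weight)["id"])
```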
Research of Subjective Routing Trust in MANET
GUO Wei, XIONG Zhong-wei, XU Ren-zuo
Computer Science. 2009, 36 (9): 63-66. 
Abstract PDF(347KB) ( 658 )   
The routing behavior of a Mobile Ad Hoc Network (MANET) depends on the cooperation of autonomous mobile nodes in an open environment. Based on an analysis of the MANET route trust relationship and its characteristics, this article showed that the posterior probability of the binomial event of a node's forwarding experience is the foundation for measuring node route trust. Using a probabilistic model, the subjective route trust measure and its mathematical estimation can be obtained, and the quantification and forecast of the route trust relationship can be realized. On this foundation, the article transformed MANET path trust into the trust synthesis of the nodes participating in a route, thus turning the problem of finding end-to-end routing trust into a maximal spanning tree problem, and realized trust measurement and route selection in routing.
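A hedged sketch of the probabilistic core follows: each node's forwarding behavior is a binomial event, its trust is the Beta posterior mean, and a path's trust is synthesized from its nodes. The product composition rule here is an assumption for illustration:

```python
# Sketch: Beta-posterior node trust and a simple path-trust synthesis.
def node_trust(forwards, failures):
    # Beta(1,1) prior + binomial observations -> posterior mean.
    return (forwards + 1) / (forwards + failures + 2)

def path_trust(observations):
    """observations: list of (forwards, failures) per node on the path."""
    t = 1.0
    for s, f in observations:
        t *= node_trust(s, f)   # assumed composition: product over nodes
    return t

print(path_trust([(40, 2), (15, 1), (50, 10)]))
```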
Analysis of Mitigating UDP Flooding by Stochastic Fairness Queueing
YU Ming
Computer Science. 2009, 36 (9): 67-69. 
Abstract PDF(252KB) ( 703 )   
Stochastic Fairness Queueing (SFQ) is a typical implementation of Fair Queueing (FQ). UDP flooding is a common way to launch a DDoS attack. In this paper, a comparative study was made between the widely used FCFS (First Come First Served) and SFQ on their efficacy in mitigating UDP flooding. Simulation results based on Network Simulator 2 show that FCFS has little effect on UDP flooding mitigation while SFQ is much more effective.
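To make the mechanism concrete, here is a minimal SFQ sketch (not the NS-2 implementation): flows are hashed into a fixed number of queues served round-robin, so a single flood flow cannot starve the others the way it can under FCFS. Real SFQ also perturbs the hash periodically, which is omitted here.

```python
# Hedged sketch: hash flows to queues, serve queues round-robin.
from collections import deque

class SFQ:
    def __init__(self, n_queues=16, queue_limit=32):
        self.queues = [deque() for _ in range(n_queues)]
        self.limit = queue_limit
        self.turn = 0

    def enqueue(self, flow_id, packet):
        q = self.queues[hash(flow_id) % len(self.queues)]
        if len(q) < self.limit:            # per-queue limit isolates floods
            q.append(packet)               # otherwise the packet is dropped

    def dequeue(self):
        for _ in range(len(self.queues)):  # round-robin over the queues
            q = self.queues[self.turn]
            self.turn = (self.turn + 1) % len(self.queues)
            if q:
                return q.popleft()
        return None
```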
Blind CFR Estimation for SIMO SC-FDE Systems
LI Meng-xing, HUANG Long-yang, CHENG En, LIU Ze-min
Computer Science. 2009, 36 (9): 70-73. 
Abstract PDF(315KB) ( 705 )   
A blind scheme to estimate the frequency-domain channel response (also called channel frequency response, CFR) in single-input multiple-output (SIMO) single-carrier frequency-domain equalization (SC-FDE) systems based on a linear prediction algorithm (LPA) was presented. Compared with the conventional LPA-based time-domain channel estimation approach, this method obtains the closed-form solution for channel estimation in the frequency domain directly from the tap weights of the prediction filter, rather than from the cross-correlation of innovations and measurements. It exploits merely second-order statistics (SOS) and is robust to channel order overestimation. Furthermore, the performance of the proposal is better than the conventional LPA-based time-domain channel estimation approach. Finally, computer simulations confirm the theoretical analysis.
Self-distribution of Network Security Policy Based on Structure-dissimilarity
TANG Cheng-hua, YU Shun-zheng
Computer Science. 2009, 36 (9): 74-78. 
Abstract PDF(505KB) ( 728 )   
The request, update and execution of security policies place higher requirements on policy distribution. To improve the distribution efficiency of network security policies, a security policy self-distribution mathematical model and a structural model based on structure-dissimilarity were proposed, introducing the concepts of distribution factor, security domain, etc. The expression and construction of structure-dissimilarity policies based on attribute characteristics and operations were analyzed emphatically. A security policy searching algorithm, a comparing algorithm, a structure-dissimilarity policy building algorithm, and an address assigning and data transmitting algorithm based on security domains were presented. Compared with the classical entire-distribution model, the proposed methods are superior in enhancing the disposal efficiency of system security policies and occupy fewer network channel resources.
Research on Handover Algorithm of Satellite Mobile Communication Network
YE Xiao-guo, XIAO Fu, SUN Li-juan, WANG Ru-chuan
Computer Science. 2009, 36 (9): 79-82. 
Abstract PDF(327KB) ( 1259 )   
Low earth orbit (LEO) satellite networks have great advantages for global mobile communications. The handover scheme is very important for controlling end-to-end communication delay and improving the quality of service and link utilization of LEO satellite networks. Ground-satellite link handover and route recomputation problems were analysed in detail. Then a link handover algorithm for LEO satellite mobile communication networks was proposed. Simulation results show that the proposed satellite link handover algorithm has lower end-to-end delay and is more stable and customizable.
Research on Wireless Mesh Network QoS Routing
XU Zhen, HUANG Chuan-he
Computer Science. 2009, 36 (9): 83-85. 
Abstract PDF(265KB) ( 582 )   
Wireless mesh networks allow multiple orthogonal channels to be used simultaneously in the system to improve throughput, but interference still can't be avoided. The paper studied TDMA-based timeslot allocation to schedule links efficiently, and presented an effective heuristic algorithm for calculating the end-to-end bandwidth on a path, which is used together with AODV to set up QoS routes. Simulation results show that, compared with shortest-path routing, our interference-aware QoS routing algorithm consistently increases the success ratio of finding a route with the required bandwidth to the destination node.
Method of Improving Bluetooth Piconet Data Transfer by MSK Modulation
ZHANG Chao, ZHUANG Yi-qi, XU Fei, LU Tian-ran
Computer Science. 2009, 36 (9): 86-88. 
Abstract PDF(327KB) ( 656 )   
A Bluetooth piconet needs a better average channel SNR (received signal-to-noise ratio) to transmit data effectively in different channels with the modulation methods in the Bluetooth specification. The MSK modulation method can efficiently improve the data throughput performance of wireless communication in noisy environments. A mathematical model of Bluetooth data packet transmission was built; the relations between Bluetooth piconet data throughput and the average channel SNR were derived, and the corresponding functions were deduced. The data transmission performance of the MSK method and of the modulation methods in the Bluetooth specification was analyzed. The MSK modulation method can improve Bluetooth piconet data transmission throughput in low-SNR environments, and simulation results proved this conclusion.
Design and Analysis of the Augmentation Positioning System in Near-space
ZHANG Lei
Computer Science. 2009, 36 (9): 89-91. 
Abstract PDF(351KB) ( 717 )   
Combining the technological advantages of GPS with the development requirements of near-space, this paper analyzed the application properties of near-space platforms, proposed an augmentation positioning system in near-space, analyzed the constraint problems and their solutions based on system fusion and augmentation positioning, and then discussed the system integrity in detail. Through the analysis of the system design and the algorithm process, the paper indicated that the development of the near-space positioning platform is important technical support for the integration of space, air and earth systems.
On the Necessity of Load Balance in DHT
NIE Xiao-wen, LU Xian-liang, LI Liang, XU Hai-mei, PU Xun
Computer Science. 2009, 36 (9): 92-95. 
Abstract PDF(310KB) ( 823 )   
In a distributed hash table (DHT), the identifiers of nodes are chosen at random, but this does not mean that the DHT is load-balanced. Simulation in Chord has shown that the load is imbalanced. The paper summarized previous work on this problem to make clear that DHT is essentially imbalanced. We gave the precise scope of the upper bound of the imbalance and verified the results with simulations.
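The imbalance is easy to reproduce. The short simulation below (a sketch, not the paper's experiment) places n random identifiers on a Chord-like unit ring and measures the largest arc, which is roughly (ln n)/n rather than the balanced share 1/n, so the heaviest node owns about ln n times its fair share of keys:

```python
# Sketch: max arc length between random node IDs on a unit ring.
import math, random

def max_arc(n, trials=20, seed=1):
    random.seed(seed)
    worst = 0.0
    for _ in range(trials):
        ids = sorted(random.random() for _ in range(n))
        arcs = [b - a for a, b in zip(ids, ids[1:])]
        arcs.append(1.0 - ids[-1] + ids[0])   # wrap-around arc
        worst = max(worst, max(arcs))
    return worst

n = 1024
print(f"max arc ~ {max_arc(n):.5f}, balanced share = {1/n:.5f}, "
      f"ln(n)/n = {math.log(n)/n:.5f}")
```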
Architecture of Education Resource Sharing Network and its Key Strategies
LIU Fang-ai, XING Chang-ming
Computer Science. 2009, 36 (9): 96-99. 
Abstract PDF(459KB) ( 619 )   
For the primary and secondary education resource sharing issues, an education resource sharing network architecture was designed and the functions of nodes in each layer were defined in this paper. To reduce network delay and improve network utilization, we explored methods and measures from the two angles of network topology and communication efficiency. Firstly, we constructed a logical RP(k) network among the management nodes; then, to improve the QoS of the education resource sharing system, a series of key strategies based on RP(k) were proposed. These strategies included a node join/leave strategy, a distributed resource retrieval strategy and a data cooperation strategy. Finally, the advantages of the logical RP(k) network and the effectiveness of the correlated strategies were confirmed by theoretical analysis and comparison.
Research on Correlation Electromagnetic Analysis for DES
DING Guo-liang, ZHAO Qiang, ZHANG Zheng-bao, YANG Su-min
Computer Science. 2009, 36 (9): 100-102. 
Abstract PDF(334KB) ( 784 )   
The article analyzed the electric current characteristics of CMOS logic gates in the active state, explained the correlation between data and the electromagnetic emissions of ICs, and established a register-level Hamming distance model of electromagnetic information leakage. Aiming at a data encryption standard (DES) cryptosystem implemented on a P89C668 microcontroller, the correlation electromagnetic analysis (CEMA) algorithm was described, the choice of the attack point D and the computational method were analyzed, and an attack experiment was carried out with CEMA, which obtained the 48-bit sub-key of the 16th round of DES. The results show that EM information leakage exists in CMOS integrated circuits during operation, and the XOR operation in each round of DES is an attack point. Correlation analysis is more effective than the differential attack, and this work can provide a basis for implementing protective measures in cryptographic systems.
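The core of such a correlation attack can be sketched in a few lines (illustrative only; the trace data here is synthetic and the target is a single XORed byte rather than the full DES round): predict the Hamming weight of an intermediate value for every key guess and correlate the predictions with the measured leakage; the guess with the highest correlation is the recovered sub-key byte.

```python
# Hedged sketch: correlation analysis over a Hamming-weight leakage model.
import numpy as np

def hamming_weight(x):
    return bin(x).count("1")

def cema_recover(plaintexts, traces):
    """plaintexts: byte values; traces: one leakage sample per encryption."""
    best = (0.0, None)
    for guess in range(256):
        model = np.array([hamming_weight(p ^ guess) for p in plaintexts])
        r = abs(np.corrcoef(model, traces)[0, 1])
        if r > best[0]:
            best = (r, guess)
    return best  # (correlation, recovered key byte)

rng = np.random.default_rng(0)
key, pts = 0x3A, rng.integers(0, 256, 2000)
# Synthetic traces: Hamming-weight leakage plus Gaussian noise.
leak = np.array([hamming_weight(p ^ key) for p in pts]) + rng.normal(0, 1, 2000)
print(cema_recover(pts, leak))  # -> highest correlation at guess 0x3A
```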
Block Cryptosystem Based on Piecewise Linear Map
MA Jie, ZHANG Yuan-qing
Computer Science. 2009, 36 (9): 103-105. 
Abstract PDF(280KB) ( 645 )   
Based on the study of some existing chaotic encryption algorithms, a new block cipher was proposed. The proposed cipher encrypts 128-bit plaintext blocks to 128-bit ciphertext blocks. It consists of eight computationally identical rounds of transformation. All round keys are derived from the key K and a 128-bit pseudorandom binary sequence generated from a chaotic map. Analysis shows that the proposed block cipher does not suffer from the flaws of pure chaotic cryptosystems and possesses high security.
Study on Computer Aided Design for System Software FMEA
ZENG Fu-ping, YANG Shun-kun, LU Min-yan
Computer Science. 2009, 36 (9): 106-109. 
Abstract PDF(339KB) ( 1028 )   
Software failure modes and effects analysis (SFMEA) is an important method for improving software reliability. To solve the time-consuming problem of manual SFMEA, and with a view to the SFMEA process, research on computer-aided design for system SFMEA and a corresponding tool was carried out. The specific steps of system SFMEA were introduced, and the following aided design methods were advanced: the software hierarchy was established by modeling functional units; software failure modes were acquired from existing failure modes and the analysis results of adjacent levels; and software failure causes and corrective measures were designed. The corresponding tool was developed to reduce the workload of system SFMEA and improve its efficiency.
Study of Web Service Composition on Combining AI Planning with Workflow
FANG Qi-qing, PENG Xiao-ming, LIU Qing-hua, HU Ya-hui
Computer Science. 2009, 36 (9): 110-114. 
Abstract PDF(443KB) ( 616 )   
Dynamic service composition is a major route to building service-oriented, loosely coupled and integrated application systems. By now, the business world has adopted an approach in which Web service instances are composed into flows with a language like BPEL, while academia has propounded the AI planning approach. These approaches by themselves are piecemeal and insufficient. Based on a contrastive analysis of these approaches, this paper proposed a dynamic Web service composition method which integrates AI planning with workflow techniques. Practice shows that it is an effective method to overcome the insufficiency of any single method and to achieve the whole process from requirement to deployment.
Framework for Integrated Testing and Debugging of Logic Programs Based on Computed Answers Semantics
ZHAO Ling-zhong, LIAO Wei-zhi, QIAN Jun-yan, GU Tian-long
Computer Science. 2009, 36 (9): 115-121. 
Abstract PDF(604KB) ( 675 )   
Debugging a logic program is a time-consuming process that usually involves considerable manual interaction. Reducing unnecessary calls to a debugging procedure can improve the efficiency of software development. Same-error-source symptoms obtained in program testing are a source of unnecessary calls to a debugging procedure. This paper proposed an integrated testing and debugging framework, in which the generation of test cases, the discovery of symptoms and the debugging (including diagnosis and correction) of the program under consideration (PUC) are interleaved in such a way that only one of the symptoms related by the same error source will lead to the execution of a debugging procedure. In this way unnecessary calls to the procedure are effectively avoided. With a constraint-based fixpoint semantics for Prolog, the framework is instantiated into a novel testing and debugging algorithm, whose applicability and effectiveness are shown by an example in this paper.
Study of Security Metrics of Software System for Comparative Evaluation
ZHANG Xin, GU Qing, CHEN Dao-xu
Computer Science. 2009, 36 (9): 122-126. 
Abstract PDF(545KB) ( 983 )   
Quality of protection can be seen as the security target of security modules when performing their security treatments, and it can be judged by quantitative criteria. How to evaluate objectively and effectively whether the current software system has fulfilled its quality of protection target has become one of the hotspots of research. Currently, however, most security professionals use qualitative methods for security evaluation, which are highly subjective and make the evaluation result dependent on individual experience and thus unreliable. What is needed are substantive and quantitative security metrics. Because of the complexity and difficulty of implementing security metrics, a novel security evaluation model was presented in this paper, which analyzes the relative security level of given systems from the views of attack surface, denial of service and attack graphs. Finally, a general discussion of the process and results of the evaluation was given.
Research on Dynamic Software Architecture Based on π-Calculus
REN Hong-min, ZHANG Jing-zhou, YANG Zhi-ying
Computer Science. 2009, 36 (9): 127-130. 
Abstract PDF(372KB) ( 596 )   
Dynamic software architecture is one of the important research subjects in software architecture. This paper discussed issues in modeling and analyzing dynamic software architectures, proposed a modeling method for dynamic software architectures based on the π-calculus, and developed an algorithm for reasoning about the semantics of dynamic architectural configuration. The π-calculus-based modeling method is able to specify many aspects of dynamic architecture, including cause, time, operations and non-instantaneous change, etc.
Research on the Integrated Model of Campus Legacy System Based on SOA
JI Yi-mu, LU Li-li, WANG Ru-chuan, ZONG Ping
Computer Science. 2009, 36 (9): 131-134. 
Abstract PDF(364KB) ( 580 )   
In order to solve the problem of the lack of effective sharing, integration and uniform interfaces in universities, an integration model and its schema named CRPWare were proposed, which is a campus legacy system integration schema based on service-oriented architecture (SOA). The CRPWare schema can effectively integrate the resources and related legacy systems in universities using SOA and interact with a uniform Web portal, performing resource description, registration, access and deployment. Compared with CORBA and Web integration models, SOA can solve the heterogeneity problem among the resources and services provided by different institutes, and avoid "information islands".
Software Reliability Growth Model Selection and Composition Method
FENG Guang-chen, GU Qing, CHEN Xiang, CHEN Dao-xu
Computer Science. 2009, 36 (9): 135-138. 
Abstract PDF(335KB) ( 848 )   
Software reliability growth models are used for the prediction of software reliability, which can decide whether software is ready for release. At present, the predictive ability of the various common models is inconsistent. We proposed a framework for software reliability growth model selection and application. The method provides guidelines on how to select a better model set on a given failure data set, and then uses these models to predict reliability with a neural network. The method was applied in a case study using failure data from some real-world software projects, and the experiment indicates its efficiency.
Enabling Adaptive Service Matchmaking with Abstract Service Model
CHEN Wang-hu, LI Jing
Computer Science. 2009, 36 (9): 139-142. 
Abstract PDF(326KB) ( 600 )   
To improve the adaptability of service matchmaking approaches, an abstract service model called ASM was proposed. ASM describes service interfaces and behaviors, and thus addresses important factors affecting service capabilities. The adaptive approach to service matchmaking based on ASM was also analyzed. The model and the approach have been partly applied to constructing problem-solving environments for bioinformatics research. Analysis and applications show that ASM is effective in enabling adaptive service matchmaking.
Infinite Joining E-commerce Mode Based on Trans-regional Distributed Architecture
XIA Yang, CHEN Gui-hai, Xi Zhao
Computer Science. 2009, 36 (9): 143-147. 
Abstract PDF(466KB) ( 637 )   
Computer technology has been developing rapidly over the years, but the growth of E-commerce modes does not seem to catch up with this pace. Based on the new distributed E-commerce architecture proposed in this paper, the authors put forward a new E-commerce mode characterized by infinite joining around the world. Different from traditional E-commerce systems, the architecture makes the best use of the advantages of new computer technology to distribute E-commerce events over several servers, and integrates the multiple commercial activities of E-commerce under loosely coupled service components so as to improve expansibility, growth-ability, reusability, interoperability and maintainability. It has therefore created an entirely new vision for building a large-scale distributed E-commerce system. The new E-commerce mode based on this distributed architecture, characterized by infinite joining around the world, ensures that local businesses can have easy access to markets around the world by publishing services at low cost, and covers the three traditional transaction modes of B2C, B2B and C2C.
NGridFTP:A Grid Data Transfer Tool Based on GridFTP
REN Xun-yi, WANG Ru-chuan, ZHANG Yu
Computer Science. 2009, 36 (9): 148-150. 
Abstract PDF(309KB) ( 776 )   
The grid data transfer protocol GridFTP provides secure and reliable transfers of large amounts of data, but the newest grid middleware, Globus Toolkit 4.0, only provides some commands and an Application Program Interface, so it is not easy for common users to use GridFTP. Based on the study of GridFTP, the paper designed and implemented a visual grid data transfer tool, NGridFTP, based on JavaCoG. NGridFTP realizes the upload and download of data, parallel data transfer, third-party controlled transfer, etc. Experiments on large amounts of data show that the parallel data transfer of NGridFTP has an advantage over traditional transfer, but in parallel mode, increasing the number of parallel TCP streams does not necessarily increase the transfer velocity.
Novel Method for Continuous Queries Processing in Road Networks
LIAO Wei, WU Xiao-ping, YAN Cheng-hua, ZHONG Zhi-nong
Computer Science. 2009, 36 (9): 151-153. 
Abstract PDF(367KB) ( 642 )   
CKNN (Continuous k-Nearest Neighbor) queries in road networks have recently received much attention in moving object databases. In this paper, we presented a novel directional graph model for road networks, and used a memory-resident grid cell structure and linear list structures to store the moving objects and the directional road network model. By introducing a directional network distance measurement, we proposed the directional network expansion (DNE) algorithm to reduce the network searching cost of CKNN query processing. Experimental results show that the DNE algorithm outperforms existing algorithms.
Data Preprocessing in Web Log Mining Based on User Access Tree
LIU Jia-ling, FAN Jun
Computer Science. 2009, 36 (9): 154-156. 
Abstract PDF(334KB) ( 828 )   
Data preprocessing is the basis of the whole data mining process in Web log mining, and it directly influences the quality of Web log mining and its results. A method of data preprocessing in Web log mining based on a user access tree was proposed. The user access tree is created from the Web logs during preprocessing and is used to identify users and transactions, so preprocessing can work well without the site topology.
Knowledge Discovery Approach Based on Generalized Variable Precision Rough Fuzzy Set Model
SUN Shi-bao, WU Qing-tao, PU Jie-xin, QIN Ke-yun
Computer Science. 2009, 36 (9): 157-160. 
Abstract PDF(327KB) ( 538 )   
Generalized Ziarko's variable precision rough set model and the generalized rough fuzzy set model were introduced and their drawbacks were identified. Based on the relative error ratio of support and an error parameter β (0≤β<0.5), the generalized variable precision rough fuzzy set model was proposed. The basic properties of its approximation operators were investigated. The relations between this model and generalized Pawlak's rough set model, generalized Ziarko's variable precision rough set model and the generalized rough fuzzy set model were analysed in detail. Finally, the definitions and approaches of approximation reduction were discussed, and an example was given to illustrate the validity of the presented algorithms.
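For orientation, the standard (crisp) variable precision approximations that the above model generalizes to fuzzy sets can be written as follows; this is Ziarko's classical formulation, not the paper's generalized operators:

```latex
% c(.,.) is the relative classification error of an equivalence class E
% with respect to X, and 0 <= beta < 0.5 is the error parameter.
\[
  c(E, X) = 1 - \frac{|E \cap X|}{|E|}, \qquad
  \underline{R}_{\beta}(X) = \bigcup \{\, E \in U/R : c(E, X) \le \beta \,\},
\]
\[
  \overline{R}_{\beta}(X) = \bigcup \{\, E \in U/R : c(E, X) < 1 - \beta \,\}.
\]
```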
Associated Model of Bi-Markov Decision Processes
WANG Zhen-zhen, XING Han-cheng
Computer Science. 2009, 36 (9): 161-166. 
Abstract PDF(556KB) ( 643 )   
Human thought is often divided into two levels when dealing with problems: people always treat a problem from a whole perspective first, i.e., they have a general plan, and then they deal specifically with details. The human mind itself is a good example of this multi-resolutional characteristic: it can not only generalize bottom-up across multiple levels (the granule of the viewpoint about the problem becomes "rough", analogous to abstraction), but also instantiate top-down (the granule of the viewpoint becomes "thin", analogous to specification). So we constructed a semi-Markov decision process consisting of two Markov decision processes running respectively on two levels: the ideal space (generalization) and the actual space (instantiation). It is called an associated bi-Markov decision model. Then we discussed how to find the optimal policy under this associated model. Finally an example was given to show that the associated bi-Markov decision process model can economize "mind" and is a good tradeoff between computational validity and computational feasibility.
Study on Selection Strategies of Multiobjective Evolutionary Algorithms
XIE Cheng-wang, DING Li-xin
Computer Science. 2009, 36 (9): 167-172. 
Abstract PDF(527KB) ( 723 )   
Literature devoted to systematically researching the selection strategies of multiobjective evolutionary algorithms (MOEAs) is scarce; however, these strategies are crucial for MOEAs to solve some multiobjective optimization problems successfully, as they not only guide the search process and determine the search directions, but also exert a great effect on the convergence of MOEAs. Within a unified framework, the paper first discussed how to construct an appropriate fitness function for a multiobjective optimization problem; then selection strategies were classified into six categories based on MOEAs' selection mechanisms and principles through a systematic analysis of various MOEAs. Since most literature rarely expresses the operators of MOEAs symbolically, which is not conducive to comprehending them deeply, this paper described the principle and mechanism of each selection strategy symbolically and analyzed its advantages and weaknesses respectively. At last, the paper proved the convergence of MOEAs with certain features, and the process of the proof shows that it is reasonable to regard the P_known achieved from the final results of MOEAs as P_true or the approximated Pareto-optimal set.
Incompatible Rules Correcting Algorithm Based on Entropy
ZHU Hao-dong, ZHONG Yong
Computer Science. 2009, 36 (9): 173-175. 
Abstract PDF(224KB) ( 559 )   
Exceptional rules may represent exceptional information of an information system, and this exceptional information is as important as normal information in modern times. This paper summed up the shortcomings of many algorithms for processing incompatible rules and presented a correcting algorithm for incompatible rules based on entropy. According to an assigned confidence, it can judge whether the rules of an information system are exceptional rules. An example was given to demonstrate the main idea of this algorithm.
Decision on Minimal Covering of Preserving Quaternary Regularly Separable Relations in Partial Four-valued Logic
ZHOU Xiao-qiang, LIU Ren-ren
Computer Science. 2009, 36 (9): 176-177. 
Abstract PDF(226KB) ( 570 )   
According to the completeness theory in partial K-valued logic, regularly separable relations and the similarity relationships among precomplete sets, the decision of minimal covering in partial four-valued logic was analyzed, and the minimal covering members of the function sets preserving quaternary regularly separable relations in partial four-valued logic were decided.
Improved Fuzzy Discriminant Analysis Algorithm Based on the Relaxed Condition
SONG Xiao-ning, ZHENG Yu-jie, YANG Jing-yu, YANG Xi-bei
Computer Science. 2009, 36 (9): 178-181. 
Abstract PDF(333KB) ( 582 )   
A study was made of the essence of the fuzzy Fisher linear discriminant analysis (FLDA) algorithm in this paper. A reformative FLDA algorithm based on the fuzzy k-nearest neighbor (FKNN) method was implemented, in which the distribution information of every original sample, represented by fuzzy membership degrees, is incorporated into the redefinition of the scatter matrices. Furthermore, considering the fact that outlier samples have an adverse influence on the classification result, a relaxed normalized condition on the fuzzy membership degrees was proposed, thereby overcoming the limitation imposed by outlier samples. Unlike the conventional FLDA algorithm, the proposed method computes its discriminant vectors with fuzzy membership degrees from every training sample, which is theoretically effective for addressing the small sample size and outlier sample problems. Extensive experimental studies conducted on the ORL and NUST603 face image databases show the effectiveness of the proposed algorithm.
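As background, the classical FKNN membership initialization (Keller's rule, which the paper's relaxed normalization then modifies) can be sketched as follows; the function name and demo data are illustrative:

```python
# Sketch: Keller-style FKNN fuzzy membership degrees for training samples.
import numpy as np

def fknn_memberships(x, y, k=5, n_classes=None):
    n = len(x)
    n_classes = n_classes or int(y.max()) + 1
    u = np.zeros((n, n_classes))
    dist = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(dist, np.inf)            # exclude the sample itself
    for i in range(n):
        nn = np.argsort(dist[i])[:k]          # k nearest neighbors
        counts = np.bincount(y[nn], minlength=n_classes)
        u[i] = 0.49 * counts / k              # share from neighbor labels
        u[i, y[i]] += 0.51                    # own label keeps the majority
    return u

x = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.], [0., .5], [5., 5.5]])
y = np.array([0, 0, 1, 1, 0, 1])
print(fknn_memberships(x, y, k=3))
```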
Coreference Resolution with Supervised Correlation Clustering
LIU Wei-peng, ZHOU Jun-sheng, HUANG Shu-jian, CHEN Jia-jun
Computer Science. 2009, 36 (9): 182-185. 
Abstract PDF(365KB) ( 692 )   
Coreference resolution plays an important role in natural language processing. A supervised correlation clustering algorithm for coreference resolution was proposed. Firstly, coreference resolution was treated as a graph correlation clustering problem, which partitions the coreference relation from a global view rather than making pairwise coreference decisions independently of each other. Then, the inference algorithms for correlation clustering were presented. Finally, a learning algorithm based on gradient descent was proposed to train the feature parameters from the training corpus, so that the learned parameters better fit the objective of the correlation clustering. The experimental results on the ACE Chinese corpus demonstrate that the proposed method achieves better performance compared with traditional approaches.
Parallel Planning Based on Representation with Multi-valued State Variables
SHI Jing-jing, LIU Da-you, CAI Dun-bo, LU Shuai, JIANG Hong
Computer Science. 2009, 36 (9): 186-192. 
Abstract PDF(551KB) ( 773 )   
Fast Downward won the classical track of the 4th International Planning Competition at ICAPS 2004. Our work extended the Fast Downward planning system to a parallel setting and implemented a parallel planning system called Parallel Downward. Several techniques were proposed in this paper. Firstly, four definitions related to parallel plans were given. Secondly, we proposed a method to decide mutually exclusive actions in multi-valued planning tasks, including definitions, the necessary and sufficient condition and the deciding algorithm. Thirdly, we designed an efficient generation of successor states with a low cost of generating all the legal sets of actions. Finally, we proposed an effective search control policy for parallel planning to enhance the quality of the valid plan. In addition, we proposed a pruning policy to restrain the state space of parallel planning from exploding exponentially. We tested Parallel Downward on planning benchmarks used in the International Planning Competitions. The experimental results show that Parallel Downward is excellent in both planning efficiency and planning quality, and is superior in scalability compared with the Sapa parallel planning system.
Uncertain Knowledge Expression to the Decision Rule of Incomplete Information System
LI Ping, QU Ying, WANG Fang, WU Qi-zong
Computer Science. 2009, 36 (9): 193-195. 
Abstract PDF(230KB) ( 578 )   
The uncertainty measurement method for decision rules in complete information systems was first introduced into incomplete information systems. When some attribute values are missing, the uncertainty measurement of a decision rule shows the characteristics of a probability interval. On this basis, the certainty factor and coverage factor can be measured by approximate probability values. Besides, the degree to which a rule reflects the knowledge classification can be computed using the certainty factor.
Semi-supervised Clustering of Complex Structured Data Based on Higher-order Logic
LI Lin-na, CHEN Hai-rui, WANG Ying-long
Computer Science. 2009, 36 (9): 196-200. 
Abstract PDF(429KB) ( 532 )   
Semi-supervised clustering algorithms have recently received a significant amount of attention in the machine learning and data mining communities. All current algorithms use attribute-value languages to represent knowledge. Attribute-value languages have inherent drawbacks for representing complex structured data. However, the knowledge representation language Escher, based on higher-order logic, can represent complex structured data. With Escher as the knowledge representation formalism, firstly, when the prior knowledge consists of pairwise constraints between instances, a method for initializing the K-Means algorithm was proposed; secondly, when the number r of cluster centers that can be initialized with incomplete knowledge is less than K, the algorithms MSS-KMeans and SMSS-KMeans were proposed to initialize the remaining K−r cluster centers. Finally, an empirical study carried out on complex structured datasets showed the feasibility of the presented algorithms. The final experimental results demonstrate the comparability between the presented algorithms and the known algorithms based on attribute-value languages.
Method of Association Rule Privacy Protection Based on Temporal Constraint
LI Jun-huai, LIU Hai-ling, PING Jun, ZHANG Jing, CHEN Xiao-ming
Computer Science. 2009, 36 (9): 201-204. 
Abstract PDF(459KB) ( 559 )   
Time is an inherent property of data in the real world and an important basic characteristic of private data. We can delve into the problem of data privacy protection by taking the time property as a constraint condition. Considering privacy protection and data security, we combined the temporal property of data with the security levels of different data granularities and proposed the concept of time effectiveness for data security. Furthermore, we applied different levels to generalize different items or transactions in order to protect data privacy, and presented an algorithm for association rule privacy protection based on temporal constraints. Finally, we analyzed and evaluated the performance of the method, including its information loss and effectiveness.
Backward Reasoning Algorithm of Fuzzy Petri Nets for Semantic Web Services Composition
GE Jing-jun, HUANG Hua, HU Jian-ming
Computer Science. 2009, 36 (9): 205-207. 
Abstract PDF(300KB) ( 579 )   
For the technology of automatic semantic Web service composition, the key to solving this problem is to construct a formal description model of Web services and realize the Web service composition request by using the dependency relationships of available data. However, current approaches for service composition may return a number of global states that grows exponentially. To address this issue, a backward reasoning algorithm of fuzzy Petri nets for automatic semantic Web service composition with correctness guarantees was proposed. The algorithm takes full advantage of the mathematical foundation of Petri nets: a complex system can be transformed into a simpler system closely related to the current problem, so the space complexity of the algorithm can be reduced. Finally, an example was used to illustrate the applicability of this approach.
Study of a Selective Ensemble Algorithm Named SER-BagBoosting Trees
CHEN Kai, MA Jing-yi
Computer Science. 2009, 36 (9): 208-210. 
Abstract PDF(248KB) ( 564 )   
Ensemble learning has become very popular in the field of machine learning. This paper introduced a new ensemble algorithm, the SER-BagBoosting Trees ensemble algorithm, which is a combination of tree predictors based on variational similarity clustering technology and a greedy method, combining the features of Boosting and Bagging. Compared with a series of other learning algorithms, it often has better generalization ability and higher efficiency.
Method of Character Weighted Set Pair Analysis
ZHU Hong-ning, ZHANG Bin
Computer Science. 2009, 36 (9): 211-214. 
Abstract PDF(310KB) ( 765 )   
This paper proposed an improved method of SPA theory named character-weighted set pair analysis, and described constant-weight and variable-weight processing methods for the two kinds of common problems in the QoS evaluation of Web services. It indicated that the character-weighted SPA method fits Web service selection based on QoS evaluation better than the connection-number-weighted SPA method.
Sample Reduction Strategy for SVM Large-scale Training Data Set Using PSO
ZENG Lian-ming, WU Xiang-bin, LIU Peng
Computer Science. 2009, 36 (9): 215-217. 
Abstract PDF(264KB) ( 585 )   
A PSO-based reduction strategy was proposed for large-scale SVM training samples. By updating the velocity and location of the particles, each particle corresponds to a status of the training samples; the ideal status includes the smallest number of support vectors (SVs), and the new training sample set removes some non-support vectors (NSVs) which do not affect the SVM classification, so as to reduce the size of the training data set. A practical remote sensing image classification shows that the strategy not only reduces the samples, but also enhances the efficiency of large-scale data set training.
Dynamic Context Knowledge Acquisition and Sharing
YU Zhi-yong, ZHOU Xing-she, WANG Hai-peng, NI Hong-bo, YU Zhi-wen, WANG Zhu
Computer Science. 2009, 36 (9): 218-223. 
Abstract PDF(523KB) ( 620 )   
Context-aware systems need to acquire and share various kinds of context knowledge, which includes not only the concrete values of environmental parameters but also the vocabularies used to describe environmental states, i.e., concepts. Context knowledge is inherently dynamic, and it is difficult to predict at design time the context information involved in the system. Most existing systems lack effective mechanisms for dynamic context knowledge maintenance and are thus unable to support the run-time scalability of context-aware applications. This paper introduced an ontology-based hierarchical context model. On the basis of this model, a dynamic context knowledge acquisition and sharing infrastructure (DCASI) was proposed. The mechanism of distributed acquisition on demand ensures that each class of context knowledge comes from the entity most competent to define it, which makes the context knowledge in the system sufficient but not redundant. The mechanism of centralized sharing with a double repository maintains two knowledge repositories according to their different functions, thereby efficiently supporting the definition and discovery of new context knowledge. A prototype system was implemented and the proposed infrastructure was validated.
Application of PSO Neural Network Based on Extended T-S Model in Fault Diagnosis
WANG Jian-fang, LI Wei-hua
Computer Science. 2009, 36 (9): 224-226. 
Abstract PDF(342KB) ( 547 )   
To handle the fuzzy and non-linear features of faults, a fault diagnosis method was developed based on an extended T-S (Takagi-Sugeno) fuzzy model of self-adaptive disturbed PSO (Particle Swarm Optimization) combined with a neural network. Firstly, the membership function of the basic T-S fuzzy model was modified by an adaptive Gaussian function, and the extended T-S model was used to adjust the PSO parameters. Secondly, the neural network was trained by the modified PSO algorithm. Finally, the proposed method was applied to the fault diagnosis of a gearbox. The diagnosis results show that the mean square error is improved by 0.1981%; meanwhile, comparisons with the diagnosis results of different models show that the method is convenient and efficient, and provides a new approach to fault diagnosis.
Research and Implementation of Web Table Positioning Technology
LIAO Tao, LIU Zong-tian, SUN Rong
Computer Science. 2009, 36 (9): 227-230. 
Abstract PDF(288KB) ( 704 )   
Web table positioning technology is considered an essential component of Web table information extraction, and more and more people are paying attention to it. This paper realized table positioning according to Web table structure labels and user-defined heuristic rules, including the solution of the nesting problem, the determination of the integrality of table data, and the traversal of the <table> tree.
New Algorithm Research for Mining Workflow Frequent Pattern
GAO Ang, YANG Yang, WANG Yue-wei
Computer Science. 2009, 36 (9): 231-233. 
Abstract PDF(294KB) ( 582 )   
To improve the mining accuracy of workflow models, a new algorithm for mining workflow frequent patterns was proposed. Firstly, the Workflow Model dependency Matrix (WM) was defined and set up using workflow logs. Secondly, using the dependency relations of activities as frequent itemsets, an algorithm was designed to automatically generate frequent itemsets based on the WM. Finally, the workflow frequent pattern was obtained by processing the frequent itemsets. The algorithm has advantages in handling the interleaving relations between activities and workflow models with serial or parallel relations.
Research on Hunting and Escape of Virtual Life with Self-balancing System in 3 Dimensional Space
BAN Xiao-juan, CHEN Xi, NING Shu-rong
Computer Science. 2009, 36 (9): 234-237. 
Abstract PDF(300KB) ( 549 )   
Taking virtual fish as an example, the paper presented algorithms, with critical technical points, for virtual lives' food hunting and escaping behavior in a three-dimensional environment. The core algorithms researched can search automatically, detect food, predict self-behavior and adjust the searching direction after an enemy or food is locked on, under the precondition of maintaining self-balance. Simulation results were presented to prove the effectiveness of this series of algorithms.
Classified Query Expansion Algorithm Based on Semantic Relation Tree
REN Yong-gong, FAN Dan, WU Jia-lin
Computer Science. 2009, 36 (9): 238-241. 
Abstract PDF(425KB) ( 720 )   
Introducing semantic computing technology into query expansion is an important research direction. In this paper we presented a semantic relation tree model, combined with topic selection and the local feedback method, to expand queries by class from a semantic perspective. Traditional methods suffer from problems such as a lack of knowledge about the topic, the introduction of irrelevant words, and improper filter functions. We introduced Web text classification into the semantic relation tree model to perform subject expansion, improving the word filter function and adding a threshold limit to control noise. The combination of user interaction with the local feedback method not only reduces the user's work compared with the traditional relevance feedback method, but also solves the problem of high dependence on the primary retrieval result in local feedback. The experimental results on the SMART platform show that this method can increase recall and precision.
Rough Set-based One-class Support Vector Machine
WANG Lei, YANG Yi-fan, ZHOU Qi-hai
Computer Science. 2009, 36 (9): 242-245. 
Abstract PDF(308KB) ( 588 )   
RelatedCitation | Metrics
The rough set theory is an important mathematical tool for dealing with uncertainty and incompleteness. This paper proposed a novel rough one-class support vector machine by introducing a rough margin into the one-class support vector machine. With the definitions of upper and lower approximation hyperplanes, the influences of training samples on the decision hyperplane are determined adaptively by their position within the rough margin. Moreover, outlier samples are prone to produce small margin errors since they lie close to the upper approximation hyperplane, so the overfitting problem of the decision hyperplane can be avoided. Experimental results on UCI datasets show the superior generalization performance of the rough one-class support vector machine.
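The rough-margin formulation itself is not spelled out in the abstract; as a loose illustration of the idea that samples sitting near the boundary should pull on the hyperplane less, one can reweight a standard one-class SVM (here scikit-learn's OneClassSVM, whose fit method accepts per-sample weights). The band width and the weight floor below are arbitrary choices, not the paper's.

import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (200, 2))
X = np.vstack([X, rng.normal(4, 0.3, (5, 2))])   # a few outliers

# Pass 1: plain one-class SVM to get signed distances to the boundary.
ocsvm = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1).fit(X)
d = ocsvm.decision_function(X)

# Rough-margin stand-in: samples inside a band around the boundary are
# down-weighted in proportion to how close they sit to it, so
# boundary-hugging outliers influence the refit hyperplane less.
band = np.quantile(np.abs(d), 0.3)
w = np.clip(np.abs(d) / band, 0.2, 1.0)
refit = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1).fit(X, sample_weight=w)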
Weighted Probabilistic Model for Identifying Time-varying Objects
WU Shi-xian
Computer Science. 2009, 36 (9): 246-247. 
Abstract PDF(251KB) ( 583 )   
RelatedCitation | Metrics
Web object retrieval is becoming one of the main trends in the development of intelligent search engines. High-precision recognition of time-varying objects is one of the important prerequisites for high-precision Web object retrieval. Usually, different attributes have different capabilities to capture the essence of an object: a stochastic attribute obeys some type of distribution, while a determinate attribute has better separating capacity than a stochastic one. Starting from this observation, a weighted probabilistic model for identifying time-varying objects was proposed to improve identification accuracy.
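As a minimal illustration of weighting determinate attributes more heavily than stochastic ones (the paper's actual probability model is not reproduced; the attributes, weights, and similarity functions below are hypothetical):

def match_score(obj_a, obj_b, weights, sims):
    # Weighted combination of per-attribute similarities. Weights are
    # higher for determinate attributes and lower for stochastic ones
    # whose values drift over time.
    total = sum(weights.values())
    return sum(weights[k] * sims[k](obj_a[k], obj_b[k]) for k in weights) / total

name_sim = lambda a, b: 1.0 if a == b else 0.0
year_sim = lambda a, b: max(0.0, 1.0 - abs(a - b) / 10.0)

a = {"name": "J. Smith", "active_year": 2001}
b = {"name": "J. Smith", "active_year": 2005}
score = match_score(a, b, {"name": 0.7, "active_year": 0.3},
                    {"name": name_sim, "active_year": year_sim})
print(score)   # 0.7*1.0 + 0.3*0.6 = 0.88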
Performance Evaluation for Fault-tolerant Parallel Algorithm
DU Yun-fei , TANG Yu-hua , YANG Xue-jun
Computer Science. 2009, 36 (9): 248-251. 
Abstract PDF(302KB) ( 754 )   
RelatedCitation | Metrics
The fault-tolerant parallel algorithm (FTPA) is an application-level technique for tolerating hardware failures. FTPA achieves fast failure recovery by making use of parallel recomputing. How to deal with system failures is a central concern in the design of FTPA, so evaluating its performance under system failures is necessary. In this study, we presented performance metrics for evaluating FTPA and a model to predict the application completion time under system failures. Then, the influence of program section executing time, checkpointing cost, failure rate, and the speedup of parallel recomputing on the performance of FTPA was evaluated.
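The paper's prediction model is not given in the abstract; a first-order estimate in the same spirit, combining the four quantities named above, might look like the following sketch (all constants hypothetical).

def expected_completion(n_sections, t_section, c_ckpt, lam, speedup):
    # n_sections : program sections separated by checkpoints
    # t_section  : failure-free compute time per section (s)
    # c_ckpt     : cost of taking one checkpoint (s)
    # lam        : failure rate (failures per second)
    # speedup    : parallel-recomputing speedup for the lost work
    t_ff = n_sections * (t_section + c_ckpt)     # failure-free running time
    exp_failures = lam * t_ff                    # expected number of failures
    # On each failure, on average half a section is redone, accelerated
    # by parallel recomputing.
    t_recover = exp_failures * (0.5 * t_section / speedup)
    return t_ff + t_recover

print(expected_completion(n_sections=20, t_section=300, c_ckpt=15,
                          lam=1e-5, speedup=8))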
Design and Implementation of Multichannel Video Coding Terminal Based on DSP
SU Shu-guang, SU Yue-ming, LIU Yun-sheng
Computer Science. 2009, 36 (9): 252-254. 
Abstract PDF(361KB) ( 611 )   
RelatedCitation | Metrics
The paper proposed a hardware design scheme for a four-channel real-time video coder. The design of the TMS320DM642 core chip and the audio/video chips was the focus. In addition, the storage, Ethernet, IIC, and power source modules were discussed. Finally, signal simulation was discussed and some key points of PCB layout and routing were proposed. Software testing proved that the hardware works well and produces good audio/video quality.
Replica-optimized P2P Cache Management Strategy
WANG Fen, XIE Chang-sheng, LU Zheng-wu, WANG Yu-de, ZHAN Ling
Computer Science. 2009, 36 (9): 255-257. 
Abstract PDF(365KB) ( 592 )   
RelatedCitation | Metrics
In-depth research on P2P streaming media applications shows that improving quality of service is a main research concern. We proposed the CORPC cache management scheme, which defines an optimal number of cached replicas for each chunk in a P2P system. Based on media chunk popularity, replica number, and peer capacity, the CORPC scheme adopts a heuristic greedy algorithm to implement cache admission control and the replacement policy. This scheme can improve the quality of service (QoS) of chunks with lower popularity while sacrificing little for chunks with high popularity. Experimental results show that system QoS improves obviously as peers' cache capacity increases.
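CORPC's exact replica formula is not stated in the abstract; the sketch below only illustrates the general greedy pattern of admitting and evicting chunks by a popularity-per-replica utility, which matches the stated inputs (popularity, replica number, capacity). Class and method names are hypothetical.

class CorpcLikeCache:
    # Greedy cache sketch: chunks with high popularity but few replicas
    # elsewhere in the overlay are the most valuable to keep locally.

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}          # chunk_id -> (popularity, replicas)

    def utility(self, popularity, replicas):
        return popularity / max(replicas, 1)

    def admit(self, chunk_id, popularity, replicas):
        u = self.utility(popularity, replicas)
        if len(self.store) < self.capacity:
            self.store[chunk_id] = (popularity, replicas)
            return True
        victim = min(self.store, key=lambda c: self.utility(*self.store[c]))
        if u > self.utility(*self.store[victim]):
            del self.store[victim]               # replacement policy
            self.store[chunk_id] = (popularity, replicas)
            return True
        return False             # rejected: cached chunks are all more valuable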
Re-loss-free Intra Coding Using Integer Linear Programming for H. 264/MPEG-4 AVC
DONG Peng-yu,LIN Tao
Computer Science. 2009, 36 (9): 258-261. 
Abstract PDF(0KB) ( 371 )   
RelatedCitation | Metrics
This paper addressed the video distortion of intra coding over multiple generations, which is introduced by the irreversible operation of the current clipping module. In order to implement re-loss-free (RLF) intra coding in modified H.264/AVC, we proposed a novel optimal clipping algorithm based on Integer Linear Programming (ILP). Furthermore, we improved the cost function of intra prediction to guarantee that a later encoder has the same prediction values as the former. Experimental results show that the ILP-based intra coding method completely eliminates the video degradation and achieves superior performance in comparison with the current intra coding method in H.264.
Application and Research of Matrix Data Read-write Method in System Simulation
WANG Lei, LU Xian-liang,ZHANG Wei
Computer Science. 2009, 36 (9): 262-266. 
Abstract PDF(415KB) ( 539 )   
RelatedCitation | Metrics
In order to enhance the efficiency of a software simulation system and shorten its running periods, the Sybase Open Client component technique was introduced and a matrix data read-write method was proposed; a multi-image data writing method was finally arrived at through experiments and analysis. The Sybase Open Client interface technique was introduced first, then the access-control structure of the Sybase database and the data description form adopted in the simulation were described in sequence; finally the matrix data read-write method was studied, and the matrix data write method was tested and analyzed.
Algorithm of Spectral Angle Parallel Classification on Remote Sensing Image
LIU Xiao-yun,KANG Yi-mei,QI Tong-jun,FANG Jin-yun
Computer Science. 2009, 36 (9): 267-270. 
Abstract PDF(331KB) ( 967 )   
RelatedCitation | Metrics
This paper proposed a spectral angle parallel classification algorithm for remote sensing images, targeting massive remote sensing data processing. Building on the single-machine spectral angle algorithm, and aiming at its defects in processing remote sensing image data, such as slow speed, poor effect, and insufficient memory, the algorithm introduces synchronization, mutual exclusion, and load balancing strategies for spectral angle parallel classification, as well as a method of parallel processing on multiple machines in a cluster environment. The paper compared the classified images with those of the well-known ENVI software, analyzing the time complexity and speedup ratio of the algorithm and verifying it with actual examples in the cluster environment, to confirm the superiority and validity of the algorithm.
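The single-machine kernel the parallel algorithm builds on is the standard spectral angle mapper, sketched below; distributing the pixel rows across processes is then embarrassingly parallel apart from the load balancing the abstract mentions.

import numpy as np

def spectral_angle_classify(pixels, refs):
    # pixels: (n_pixels, n_bands), refs: (n_classes, n_bands).
    # Returns, for each pixel, the class whose reference spectrum
    # makes the smallest angle with the pixel spectrum.
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    r = refs / np.linalg.norm(refs, axis=1, keepdims=True)
    cos = np.clip(p @ r.T, -1.0, 1.0)
    angles = np.arccos(cos)                 # (n_pixels, n_classes)
    return angles.argmin(axis=1)

# Parallelization: split the pixel rows into blocks, classify each block
# on a separate worker, and concatenate the label vectors.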
Wavelet Packet Transformation in Mixed Image Noise Enhancement Technology
WAN Min-hua, SHAO Jie, HUANG Chuan-bo, JIN Zhong
Computer Science. 2009, 36 (9): 271-272. 
Abstract PDF(263KB) ( 718 )   
RelatedCitation | Metrics
Images are always influenced by noise during acquisition and transmission, which causes large differences between the image data and the real scene. To eliminate these noise signals quickly and effectively, this article analyzed denoising methods and discussed several kinds of mixed-noise image enhancement technologies based on the wavelet packet transform. The experiments indicated that our method is effective and feasible for image denoising.
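As a baseline sketch of wavelet-packet denoising (a single soft threshold applied to every detail node, not the paper's combined treatment of mixed noise), using the PyWavelets library:

import numpy as np
import pywt

def wp_denoise(img, wavelet="db4", level=2, mode="soft"):
    # Decompose into a full wavelet-packet tree.
    wp = pywt.WaveletPacket2D(data=img.astype(float), wavelet=wavelet,
                              mode="symmetric", maxlevel=level)
    # Noise scale from the finest diagonal node (median absolute deviation),
    # then the universal threshold.
    sigma = np.median(np.abs(wp["d" * level].data)) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(img.size))
    for node in wp.get_level(level):
        if node.path != "a" * level:         # keep the approximation node
            node.data = pywt.threshold(node.data, thr, mode=mode)
    return wp.reconstruct(update=False)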
Color Image Retrieval Approach Based on Color Moments and Multi-scale Texture Features
YANG Hong-ju, ZHANG Yan, CAO Fu-yuan
Computer Science. 2009, 36 (9): 273-277. 
Abstract PDF(344KB) ( 701 )   
RelatedCitation | Metrics
Feature extraction is one of the key steps in content-based image retrieval, but an approach based on a single feature expresses only partial attributes of an image, describes the image content one-sidedly, and lacks sufficient resolving power; such an approach may not achieve ideal results when images vary greatly. An approach was proposed based on image color and texture features, in which the color moments serve as the color feature and the variances of the multi-scale high-frequency sub-bands in the wavelet domain serve as the texture feature. In the experiment section, the presented method was compared with the traditional histogram and color-moments methods. Combining the two features for image retrieval, we analyzed the weight distribution between them using the ANMRR evaluation method recommended by MPEG-7. The experiments show that the approach has good retrieval performance.
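The color half of the feature vector follows the standard color-moments definition (mean, standard deviation, and skewness per channel), which can be computed as:

import numpy as np

def color_moments(img):
    # img: (H, W, 3) array; returns a 9-dimensional feature vector,
    # three moments for each of the three color channels.
    feats = []
    for c in range(img.shape[2]):
        ch = img[..., c].astype(float).ravel()
        mu = ch.mean()
        sigma = ch.std()
        skew = np.cbrt(((ch - mu) ** 3).mean())   # signed cube root of 3rd moment
        feats.extend([mu, sigma, skew])
    return np.array(feats)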
Research of Block Matching Criterion for Motion Estimation
XIANG You-jun, LEI Na, YU Wei-yu, XIE Sheng-li
Computer Science. 2009, 36 (9): 278-280. 
Abstract PDF(263KB) ( 1304 )   
RelatedCitation | Metrics
Motion estimation is a research priority of video encoding technology. High-precision, high-efficiency matching and compensation can reduce the prediction error and improve the video compression effect, so the accuracy of block matching is the core issue. The Minimum Absolute Difference (MAD), the Minimum Mean Square Error (MSE), the Normalized Cross-Correlation Function (NCCF), pixel difference classification, sub-sampling, the Variance Of Difference (VOD), and other matching criteria were analyzed theoretically and experimentally. A method using the distribution of the image difference as a matching criterion was then proposed, namely the image Difference VARiance (DVAR) matching criterion. Experimental results show that the difference-variance matching criterion achieves relatively high codec quality. Finally the paper pointed out that the future development of block matching criteria lies in better agreement with human visual system (HVS) characteristics.
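For reference, the classic criteria and the difference-variance idea each reduce to a few lines; note that DVAR, being the variance of the residual block, is insensitive to a uniform brightness offset between the two blocks, which is the point of matching on the distribution of the difference.

import numpy as np

def mad(block, cand):
    # Minimum Absolute Difference criterion (lower is better).
    return np.mean(np.abs(block.astype(float) - cand.astype(float)))

def mse(block, cand):
    # Mean Square Error criterion.
    return np.mean((block.astype(float) - cand.astype(float)) ** 2)

def dvar(block, cand):
    # Difference-variance criterion: variance of the residual block,
    # which ignores any constant brightness shift between the blocks.
    diff = block.astype(float) - cand.astype(float)
    return np.var(diff)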
Medical Image Registration Methods Based on Wavelet Transformation and Probability Estimate
KANG Xiao-dong, SUN Yue-cheng, QIAO Qing-li, YU Rui-guo, LI Chang-qing
Computer Science. 2009, 36 (9): 281-282. 
Abstract PDF(230KB) ( 604 )   
RelatedCitation | Metrics
To improve the performance of medical image registration, a new method based on the wavelet transform and probability estimation was proposed in this paper. A two-step decomposition of the original images was obtained using the wavelet transform. Bayesian maximum a posteriori (MAP) probability estimation was performed separately for the sub-band components of each decomposition layer, and the registration of each wavelet sub-band component was obtained by probability estimation of the parameters. The registration of the original medical images was finally realized by the inverse wavelet transform.
Method for Slow-motion Replay Detection on Compressed Domain in Sports Video
HOU Lu-lin, BAI Liang, LAO Song-yang
Computer Science. 2009, 36 (9): 283-286. 
Abstract PDF(330KB) ( 737 )   
RelatedCitation | Metrics
Slow-motion replay (SMR) detection is an important technique for content-based video analysis. This paper introduced the basic theory of video compression and the MPEG-1 video compression standard. A two-step method for SMR detection in the MPEG-1 compressed domain was proposed: firstly, shot density was defined and calculated from shot detection results using MPEG macroblock type information to detect the occurrences of possible SMR shots; secondly, the difference between consecutive frames was measured to detect true SMRs among the candidate SMR shots. Experimental results show that, unlike previous approaches, our method achieves great improvement in both speed and accuracy.
Image Fusion Algorithm Based on Fuzzy Entropy and Non-separable Wavelet Transform
GE Wen,GAO Li-qun
Computer Science. 2009, 36 (9): 287-289. 
Abstract PDF(241KB) ( 634 )   
RelatedCitation | Metrics
In view of the fact that the traditional separable wavelet transform loses some edges and blurs texture information during image fusion, a fusion algorithm that highlights image details and reduces image blurring was proposed. Within the non-separable wavelet decomposition framework, a fusion rule of maximum local fuzzy entropy was used for the low-frequency component, which reflects the approximate content, and a fusion rule of weighting that prioritizes regional brightness details was used for the high-frequency components, which reflect the features and details of the image. Finally, the fused image was reconstructed through the inverse non-separable wavelet transform. Experimental results show that, while preserving the source image information, this algorithm improves the clarity of the fused image and enhances detail information and brightness contrast.
Estimating the Standard Deviation of Noise via Controlled Function
WANG Wen-yuan
Computer Science. 2009, 36 (9): 290-293. 
Abstract PDF(353KB) ( 587 )   
RelatedCitation | Metrics
This paper presented a new algorithm to estimate the standard deviation of noise in natural images. It uses a gradient filter to estimate the noise on parts of the tessellated image selected by a controlled function, which can be related to a pre-estimate of the SNR. We optimally determined the controlled function as an exponential function by minimizing the total error. The controlled function can effectively offset the effect caused by the filter responding to image structure. Therefore, unlike some existing algorithms that can only accurately estimate noise for images within a limited range of noise intensity, the proposed algorithm handles noise intensities ranging from extremely light to extremely heavy. Moreover, an iterative process with fast convergence was implemented to obtain a more robust solution. Quantitative comparisons with existing algorithms on an image data set demonstrate that the proposed algorithm has high accuracy and outperforms the algorithms used in this study.
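The controlled function itself is not reproduced here, but the filter-based core it builds on resembles the classic operator-based estimator (Immerkaer), which convolves with a mask that cancels locally planar structure and averages the absolute response:

import numpy as np
from scipy.ndimage import convolve

def estimate_sigma(img):
    # Difference-of-Laplacians mask: zero response on constant and
    # linear-ramp regions, so mostly noise survives.
    mask = np.array([[1, -2, 1],
                     [-2, 4, -2],
                     [1, -2, 1]], dtype=float)
    resp = convolve(img.astype(float), mask)
    h, w = img.shape
    return np.sqrt(np.pi / 2) * np.abs(resp).sum() / (6.0 * (w - 2) * (h - 2))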
Evaluation Method of IPMCS Real-time Characteristic
ZHANG Yu, LIU Feng
Computer Science. 2009, 36 (9): 294-296. 
Abstract PDF(253KB) ( 572 )   
RelatedCitation | Metrics
Few studies address the evaluation of the real-time characteristics of industrial process measurement and control systems (IPMCS). This paper therefore provided an evaluation method for IPMCS real-time characteristics based on stochastic Petri nets. First, function blocks (FB) were used to model the IPMCS, and the model was transformed into an SPN model. Second, the SPN model was transformed into a Markov chain (MC), since an SPN model is isomorphic to a continuous-time MC. Lastly, based on the MC's transition matrix and steady-state probabilities, the real-time characteristics of the IPMCS were evaluated. An example was given to illustrate the evaluation procedure. The method yields quantitative real-time data that can serve as a reference when designing an IPMCS.
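Once the SPN is mapped to a continuous-time Markov chain, the steady-state probabilities follow from the generator matrix in the standard way; a minimal sketch, with a hypothetical two-state example:

import numpy as np

def steady_state(Q):
    # Steady-state distribution of a CTMC from its generator matrix Q
    # (rows sum to zero). Solves pi @ Q = 0 with sum(pi) = 1 by
    # replacing one redundant balance equation with the normalization.
    n = Q.shape[0]
    A = np.vstack([Q.T[:-1], np.ones(n)])
    b = np.zeros(n); b[-1] = 1.0
    return np.linalg.solve(A, b)

# Two-state example: rate 0 -> 1 is 2.0, rate 1 -> 0 is 1.0.
Q = np.array([[-2.0, 2.0],
              [1.0, -1.0]])
print(steady_state(Q))   # [1/3, 2/3]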
Production Scheduling Simulation and Optimization of Re-entrant Production Job Shop
CHEN Xiao-hui,ZHANG Qi-zhong
Computer Science. 2009, 36 (9): 297-299. 
Abstract PDF(328KB) ( 801 )   
RelatedCitation | Metrics
For the re-entrant production scheduling of a steel pipe job shop, the eM-Plant computer simulation software was used to solve the problem. Firstly, the workpieces were grouped together, simplifying the problem to material release strategies and job dispatch strategies. Then genetic algorithms were adopted to optimize the schedule plan. Comparison of the results showed that the genetic algorithm offers strong optimization performance and stability.
Techniques of Electronic Governmental Opening Network Construction Based on MPLS VPN
FENG Nai-guang, ZENG Huang-lin
Computer Science. 2009, 36 (9): 300-302. 
Abstract PDF(246KB) ( 1038 )   
RelatedCitation | Metrics
The techniques and characteristics of MPLS VPN were introduced. Some problems of an electronic governmental opening network built on the public network were analyzed with respect to MPLS VPN. A construction scheme for an electronic governmental opening network based on MPLS VPN techniques was presented. We implemented one of Sichuan province's electronic governmental opening networks on the public network, which connects all the local governmental networks into a whole network with safe and effective information transmission, yielding good social benefits. It is shown that electronic governmental opening network construction based on MPLS VPN has the merits of ease of use, flexible deployment, and wide applicability.