Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 39 Issue 8, 16 November 2018
  
Overview of Community Detection Models on Statistical Inference
Computer Science. 2012, 39 (8): 1. 
Abstract PDF(729KB) ( 959 )   
Community detection can identify salient structure and relations among individuals in complex networks. Researchers have put forward many different methods, which are mainly used to detect groups with dense connections within groups but sparser connections between them. To detect more latent structures in real networks, various models based on statistical inference have been proposed since 2006; they rest on sound theoretical principles, perform better at identifying structures, and have become the state-of-the-art models. These models aim to define a generative process that fits the observed network, transforming the community detection problem into Bayesian inference. First, the concepts of generative models were defined. Then, the article divided generative models for community detection into vertex-community and link-community models based on community composition, and discussed the design ideas and algorithms of each model in detail. What these models are suited to was also summarized in terms of network type and scale, community structure, complexity, etc., and a method was given for selecting an existing statistical model. The existing classical models were tested and analyzed on popular benchmark datasets. In the end, the main problems of these models were highlighted, as well as directions for future progress.
Survey on Cascading Failures on Complex Networks
Computer Science. 2012, 39 (8): 8. 
Abstract PDF(664KB) ( 911 )   
On complex networks, large-scale cascading failures triggered by small disturbances can lead to disastrous consequences. To meet people's demands for the security and reliability of complex networks related to the national economy and people's livelihood, the study of cascading failures on complex networks has become a hot branch of complex network research in recent years. Theoretical modeling is the basic and key problem for the analysis, prevention and control of cascading failures. The main developments in cascading failures on complex networks were surveyed, mainly including several types of cascading failure models and the relevant research results. Both the existing problems and the development trends were pointed out.
Research Progress in Security Ontology
Computer Science. 2012, 39 (8): 14. 
Abstract PDF(623KB) ( 531 )   
Ontology is applied in various domains, including intelligence systems, computer science, information technology, and biomedical sciences. The analysis of ontology has become the core of knowledge representation systems in different domains. At the same time, the study of ontology facilitates knowledge sharing and reuse. Applying ontology to information security yields security ontology (SO). This paper mainly described the following aspects of SO: the present state and history of SO; the principles, methods and meaning of building SO; existing SOs and their categories; and the description languages and applications of SO. Then, according to the above analysis, this article summarized its structural system, depicted the knowledge representation and reasoning ability of SO, and pointed out that there is still a long way to go in the building of SO, ontology evaluation, ontology learning, and the application of SO.
Mobile Trusted Platform Model for Smart Phone
Computer Science. 2012, 39 (8): 20-25. 
Abstract PDF(554KB) ( 506 )   
Due to viruses or device loss, secret data on mobile phones faces the danger of leakage. To meet the security needs of mobile platforms, TCG's MPWG has proposed the Mobile Trusted Platform specification, which does not specify a particular technical approach to designing a Mobile Trusted Module (MTM). Existing research provides neither an overall framework of the MTM that can actually be used in the smart phone environment, nor a detailed deployment process for the Trusted Software Stack (TSS) of such a framework. A model design of a mobile trusted platform for smart phones was proposed in this paper, which combines a pure-software MTM based on TrustZone technology with a smart card MTM based on Java Card to build two trusted engines. The deployment scheme of the trusted computing bases and a security analysis of this model were put forward as well.
Property-based Attestation Scheme Based on the BB+Signature
Computer Science. 2012, 39 (8): 26-30. 
Abstract PDF(420KB) ( 440 )   
In a trusted computing environment, to solve the problems of the binary attestation scheme proposed by the Trusted Computing Group (TCG), this paper proposed a new property-based attestation (PBA) scheme. Firstly, this paper introduced the ideas of PBA and the security model of PBA-BB+; secondly, based on an improved BB+ signature technique, it presented a concrete construction of the PBA-BB+ scheme and compared it with other PBA schemes; finally, in the random oracle model, it proved that the PBA-BB+ scheme achieves configuration privacy and unforgeability.
Trust Management Model Based on Evaluation of Resources
Computer Science. 2012, 39 (8): 31. 
Abstract PDF(323KB) ( 408 )   
Aiming at the safety problems in P2P networks, a new trust management model based on the evaluation of resources was proposed in this paper. A concept of praise degree was given first, and a single praise degree was computed by fuzzy comprehensive evaluation. The transaction log table is saved and managed by the mother peers of the peer that provides the resources. When a peer chooses which peer to deal with, it considers not only the direct trust value but also the total praise degree of the resource. An incentive mechanism based on virtual currency was finally introduced to enhance the peers' willingness to participate. The experimental results show that this trust management model can effectively resist attacks by malicious peers and increase the success rate of network transactions.
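The selection rule described above can be sketched in a few lines. This is not the paper's algorithm: the aggregation of single praise degrees, the weight `alpha`, and the example peers are all illustrative assumptions.

```python
def total_praise(single_praises):
    # Aggregate single praise degrees (fuzzy evaluations in [0, 1]) of a resource.
    return sum(single_praises) / len(single_praises)

def selection_score(direct_trust, praise, alpha=0.6):
    # Hypothetical weighted combination of peer trust and resource praise degree.
    return alpha * direct_trust + (1 - alpha) * praise

# Each candidate: (direct trust of the peer, praise degrees of its resource).
candidates = {"peerA": (0.8, [0.6, 0.7]), "peerB": (0.5, [0.9, 0.95])}
best = max(candidates,
           key=lambda p: selection_score(candidates[p][0],
                                         total_praise(candidates[p][1])))
```

With these numbers the higher direct trust of `peerA` outweighs `peerB`'s better-praised resource; shrinking `alpha` shifts the choice toward resource reputation.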
Mobile Object Localization Algorithm for Sensor Networks Based on MCB
Computer Science. 2012, 39 (8): 34-37. 
Abstract PDF(367KB) ( 426 )   
In view of the low real-time sampling efficiency of Monte Carlo Localization Boxed (MCB), a new localization method named Enhanced Monte Carlo Localization Boxed (EMCB) was proposed. Based on MCB, EMCB introduces the crossover and mutation operations of genetic algorithms to move samples towards regions with large posterior density. Thus the distribution of samples is optimized and the problem of low sampling efficiency is solved. Simulation results show that, compared with MCB, the new algorithm reduces the number of samples required; the sampling efficiency and localization accuracy are improved by about 17%, while the cost is reduced by about 30%.
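A minimal sketch of how genetic crossover and mutation could diversify localization samples inside an anchor-derived bounding box. This is not the published EMCB algorithm; the arithmetic crossover, Gaussian mutation, `sigma`, and the box layout `(xmin, xmax, ymin, ymax)` are all assumptions for illustration.

```python
import random

def crossover(p1, p2):
    # Arithmetic crossover: a random convex blend of two candidate positions.
    a = random.random()
    return (a * p1[0] + (1 - a) * p2[0], a * p1[1] + (1 - a) * p2[1])

def mutate(p, box, sigma=0.5):
    # Gaussian mutation, clamped to the anchor bounding box.
    x = min(max(p[0] + random.gauss(0, sigma), box[0]), box[1])
    y = min(max(p[1] + random.gauss(0, sigma), box[2]), box[3])
    return (x, y)

def resample(samples, weights, box, n):
    # Favor high-weight (high posterior) samples, then diversify the offspring.
    parents = random.choices(samples, weights=weights, k=n)
    return [mutate(crossover(random.choice(parents), p), box) for p in parents]
```

Because parents are drawn in proportion to their weights, offspring concentrate near high-posterior regions, which is the effect the abstract attributes to the crossover/mutation step.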
Human Psychological Cognitive Habits and Cloud Model Based P2P Trust Model
Computer Science. 2012, 39 (8): 38-41. 
Abstract PDF(336KB) ( 393 )   
To solve the problems of computational complexity and uncertainty of trust in P2P trust models, this paper proposed a new trust model for P2P networks. The proposed model evaluates a peer's trust degree following the human psychological cognitive habit of giving absolute priority to one's own direct experience, which reduces the computational complexity and the risk of accepting false recommendation information. Further, two characteristic parameters of the cloud model, entropy and hyper-entropy, are used to introduce positive and negative reward factors that encourage good peers and punish malicious peers respectively. Experiments show that the proposed model can resist the cheating behaviors of strategic malicious peers and also distinguish complex malicious peers which behave badly with small probability.
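The asymmetric reward/punishment idea can be illustrated without the cloud-model machinery. The update rule and both rates below are hypothetical; the key property (punishment outpacing reward) is what defeats peers that alternate good and bad behavior.

```python
def update_trust(trust, honest, pos_reward=0.05, neg_penalty=0.15):
    # Asymmetric update: slow reward for good behavior, fast punishment for bad,
    # so a strategic peer cannot profitably oscillate between the two.
    if honest:
        return min(1.0, trust + pos_reward * (1.0 - trust))
    return max(0.0, trust - neg_penalty)
```

Starting from 0.5, one honest transaction raises trust to 0.525, while one dishonest transaction drops it to 0.35, so alternating behavior trends downward.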
Efficient Identity Based Online/Offline Signcryption Scheme
Computer Science. 2012, 39 (8): 42-46. 
Abstract PDF(350KB) ( 438 )   
We proposed an efficient identity-based online/offline signcryption scheme with shorter ciphertext. Under the l-BDHI assumption and the l-SDH assumption, the scheme can be proved secure in the random oracle model.
Optimization Scenarios of Interactions between MIPv6 and PMIPv6 Based on Fast Handover
Computer Science. 2012, 39 (8): 47. 
Abstract PDF(394KB) ( 507 )   
Mobile IPv6 (MIPv6) is a host-based protocol, while Proxy Mobile IPv6 (PMIPv6) is a network-based protocol, so the interactions between PMIPv6 and MIPv6 should be considered. Several proposed scenarios belong to hard handoff and cause long handover latency. An optimized scenario, Fast Handover for Interaction between MIPv6 and PMIPv6 (FHIMP), was proposed. Comparison of the handover latency of the optimized procedure with its predecessors and the analytic results show that the optimized procedures have an advantage in network performance.
Fluid Stochastic Petri Net Model for Hybrid Petri Net
Computer Science. 2012, 39 (8): 51-54. 
Abstract PDF(308KB) ( 405 )   
Hybrid Petri nets and fluid stochastic Petri nets are two modeling methods for hybrid systems. Their modeling mechanisms and analysis methods are different, and the two modeling formalisms are still evolving. Through conversion between them, one modeling method can use the other's modeling primitives and analytical methods for analyzing a system, which is helpful for the future development of both. A formal conversion from hybrid Petri nets to fluid stochastic Petri nets and a transition mergence method applied after conversion were proposed, and the effectiveness of the proposed conversion and mergence was proved. At the end of the paper, a case study illustrates the process of conversion and mergence.
Research on Cloud Monitoring Oriented to Mobile Terminal
Computer Science. 2012, 39 (8): 55. 
Abstract PDF(429KB) ( 412 )   
Aiming at the demand for a highly efficient and lightweight virus-prevention client for mobile terminals, this paper improved HIPS with cloud security technology to form a cloud monitoring model. By adding a file property judgment function and moving the rule library and the work of file property judgment to the server, system occupancy was reduced. By changing the rule-making strategy and setting different rules for different viruses, the complexity of the rules was reduced and the efficiency of rule matching was improved. Through black-and-white-list technology and single-step dangerous behavior analysis, the cost of communication between client and server was reduced and the efficiency of file property judgment was improved. By changing the monitoring mode from active to passive, the working time was reduced and the working efficiency of the cloud monitoring model was improved. Finally, a formal method proves the security of the cloud monitoring model.
Optimization Scheme of the Web Service Communication Safety
Computer Science. 2012, 39 (8): 59. 
Abstract PDF(333KB) ( 376 )   
Comprehensively applying Web service safety standards and current mainstream Web service safety technology to optimize the Web service structure can realize safe Web service communication. The optimized service structure realizes identity authentication and authorization by setting up a business gateway, and realizes confidentiality, integrity and non-repudiation of messages by extending the SOAP protocol. Compared with technologies that focus on extending the service structure or on expanding the protocol to achieve Web service safety, this optimization scheme strengthens communication security. Combined with the research results, future research directions of Web service safety are discussed.
Cross-layer Based Anomaly Detection Mechanism with Hidden Semi-Markov
Computer Science. 2012, 39 (8): 62-66. 
Abstract PDF(506KB) ( 413 )   
Existing methods for anomaly detection in wireless mesh networks mostly focus on a single malicious attack and cannot detect various malicious attacks originating from different protocol layers. We presented a cross-layer anomaly detection mechanism. First, a distributed IDS structure for the mesh backbone network topology was proposed; second, cross-layer features were collected to comprehensively monitor network activities. Furthermore, with the multidimensional observation sequences, a hidden semi-Markov model (HsMM) was trained and exploited to characterize and model the normal states of network activities. The entropies of observation sequences against the HsMM were calculated to evaluate their abnormality; an anomaly alert is reported if the entropy is lower than a threshold. Experimental results show that the proposed detection mechanism is able to detect various malicious attacks from different protocol layers.
Efficient Hierarchical Identity-based Signature Scheme
Computer Science. 2012, 39 (8): 67-69. 
Abstract PDF(317KB) ( 383 )   
Using the bilinear pairing technique, this paper presented a hierarchical identity-based signature scheme in the standard model, based on the hierarchical identity-based encryption scheme proposed by Boneh et al. The size of a signature in the scheme is constant, regardless of the hierarchy depth of the signer. Finally, we proved that the scheme is existentially unforgeable against selective-identity, selective chosen-message attacks under the hardness of the DHI problem.
Formal Modeling of Cryptographic Protocols Using Petri Nets
Computer Science. 2012, 39 (8): 70-74. 
Abstract PDF(399KB) ( 509 )   
Cryptographic protocols are the security mechanism for sharing network resources and the cornerstone of building a secure network environment, so the security of a cryptographic protocol plays a vital role in the entire network environment. A new colored Petri net (CPN) methodology for the security analysis of cryptographic protocols was proposed. We applied the new approach to model the TMN protocol with multiple concurrent sessions, and the model was categorized based on session configuration and session schedule. The attack traces were obtained using an on-the-fly method. Using the state space search method, several attack states of multiple concurrent sessions were found, and a new attack pattern was obtained.
Research on QoS Service Routing Algorithm for Overlay Internet of Things
Computer Science. 2012, 39 (8): 75-78. 
Abstract PDF(336KB) ( 391 )   
In the Internet of Things, a service may be provided by several nodes together, and the traditional best-effort communication service cannot guarantee quality of service (QoS). This paper established an active-service overlay routing logical topology and modeled the service routing problem for the Internet of Things. On this basis, this paper proposed an active overlay QoS service routing optimization algorithm based on agents and ant colony optimization (ACO), improved with mobile agents and QoS-guaranteed service path selection. It is proved theoretically that the algorithm is correct and convergent, and its actual performance was tested and compared in simulation experiments.
Semantic-based Chinese Web Page Retrieval
Computer Science. 2012, 39 (8): 79-87. 
Abstract PDF(866KB) ( 465 )   
Semantic-based Chinese Web page retrieval is a promising application. The existing semantic retrieval mechanisms are categorized into three types, based respectively on ontology, natural language understanding, and text classification and clustering. The three technologies were reviewed and examined in detail. A semantic-based Chinese Web page retrieval system should focus on popular fields to draw attention from Web users. Moreover, Web pages should be indexed with words rather than Chinese characters, and advanced Chinese information processing technologies should be integrated into semantic retrieval systems. Some directions for future research were finally presented, including semantic relevance ranking, ontology definition and automatic instance extraction, semantic-based indexing, and construction of large-scale semantic training collections.
Secure Network Coding Against the Omniscient Adversaries
Computer Science. 2012, 39 (8): 88-91. 
Abstract PDF(426KB) ( 365 )   
A secure network coding algorithm against omniscient adversaries was presented. While the adversary can eavesdrop on all links and jam z links, the algorithm transforms the source messages with a sparse matrix to enhance their anti-wiretapping capacity. In order to detect and eliminate pollution attacks, the receiver uses a list decoding algorithm to recover the source messages. Theoretical analysis and simulations both confirm that this algorithm can be designed and implemented in polynomial time and resists eavesdropping and pollution attacks. At the same time, this algorithm can also make standard random network coding achieve the weak security condition with high probability, increase the encoding rate and reduce the occupied memory space. Furthermore, only the source and destination need to be modified, and intermediate nodes implement a classical distributed network code.
Certificateless Signature Scheme without Public Key Replaced
Computer Science. 2012, 39 (8): 92-95. 
Abstract PDF(346KB) ( 767 )   
In certificateless signature (CLS) schemes based on the Al-Riyami framework, the security condition is that the key generation center (KGC) cannot mount a public key replacement attack, so CLS cannot resist this attack from the KGC. A new signature system called certificateless signature without public key replacement was presented. It shows that a public key signature for users can resist the public key replacement attack from the KGC and raise the security of CLS to a new level. An instance of constructing this class of certificateless signature scheme based on the Al-Riyami framework was given, and the public key signature was proven unforgeable under the random oracle model. The method of constructing the public key signature can be applied to any certificateless signature scheme based on the Al-Riyami framework.
One Tag Time-weighted Recommend Approach on Tripartite Graphs Networks
Computer Science. 2012, 39 (8): 96-98. 
Abstract PDF(324KB) ( 343 )   
Social tags provide highly abstract information about both item contents and personalized preferences, hence using tags can improve the accuracy of personalized recommendation. Since user preferences change over time and network resources also grow as time goes by, how to recommend the network resources users are currently interested in, based on changing user preferences, becomes a new research problem in recommender systems. Combining tag frequency and tag time on user-object-tag tripartite graphs, we proposed a recommendation algorithm based on tag time-weighted networks. Experimental results demonstrate that tag time-weighting can significantly improve the accuracy and diversification of recommendations.
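The time-weighting idea can be sketched with a simple decay function. The exponential form and the `half_life` parameter are assumptions for illustration, not the paper's weighting scheme.

```python
def time_weight(tag_time, now, half_life=30.0):
    # Exponential decay: a tag applied half_life days ago counts
    # half as much as a tag applied just now.
    return 0.5 ** ((now - tag_time) / half_life)

def item_score(tag_times, now):
    # Time-weighted tag frequency of an item: recent tags dominate the score,
    # so items reflecting current interests rank higher.
    return sum(time_weight(t, now) for t in tag_times)
```

Two items with the same raw tag count then rank differently: the one tagged more recently scores higher, which is the mechanism the abstract credits for better accuracy on drifting preferences.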
Improved Data Sharing Scheme over Cloud Storage
Computer Science. 2012, 39 (8): 99-103. 
Abstract PDF(365KB) ( 471 )   
Cloud storage security has always been an emphasis in cloud security. Zhao et al. proposed a TSCS (Trusted Sharing over Cloud Storage) scheme which, according to our analysis, is incapable of withstanding malicious CSP attacks. This paper constructed a model of TSCS in terms of its security requirements. Afterwards, a new TSCS scheme was proposed based on it. Analysis shows that the new scheme relies less on random numbers, retains the security properties of Zhao et al.'s scheme, also resists tampering by the server, and could be put into application.
Dynamic Bounding Algorithm for Approximating Multi-state Network Reliability Based on Arc State Enumeration
Computer Science. 2012, 39 (8): 104-110. 
Abstract PDF(483KB) ( 382 )   
In order to reduce the complexity of calculating multi-state network reliability, considering the stay probability of each multimode arc in each intermediate state and the effect on network capability of transferring to an adjacent state, an algorithm for calculating dynamic bounds on multi-state network reliability based on arc state enumeration was proposed. Firstly, assuming each arc can be in either of two states, operating or failed, initial upper and lower reliability bounds were obtained by adding the probability of each arc in all intermediate states to the probability of the operating or failed state, respectively. Then, intermediate states were iteratively enumerated in order of decreasing effect on network reliability, and the variation of the upper and lower bounds was calculated by set comparison operations, deriving a series of decreasing upper bounds and a series of increasing lower bounds simultaneously, guaranteed to enclose the exact reliability value. The algorithm does not require the d-minimal cuts or d-minimal paths of the multi-state network a priori, and can ensure an exact difference between the upper and lower bounds while enumerating fewer network states. Related lemmas and example analysis verify the correctness and effectiveness of the algorithm.
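The initial bounding step can be illustrated on the simplest possible topology. This sketch assumes a purely series system of independent arcs and is not the paper's algorithm; it only shows how lumping each arc's intermediate-state probability into "failed" (lower bound) or "operating" (upper bound) brackets the true reliability.

```python
def initial_bounds(state_probs):
    # state_probs: list of (capacity, probability) for one multimode arc,
    # sorted by ascending capacity. The lower bound counts only the
    # fully-operating state as working; the upper bound counts every state
    # above total failure as working.
    p_operating = state_probs[-1][1]
    p_failed = state_probs[0][1]
    return p_operating, 1.0 - p_failed

def series_bounds(arcs):
    # Reliability bounds for a series system: all arcs must work.
    lo = hi = 1.0
    for sp in arcs:
        l, h = initial_bounds(sp)
        lo, hi = lo * l, hi * h
    return lo, hi
```

For two identical arcs with states [(0, 0.1), (1, 0.2), (2, 0.7)], this brackets the reliability in [0.49, 0.81]; enumerating intermediate states, as the abstract describes, then tightens both ends toward the exact value.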
Improved Scheme of DAA Authentication Based on Proof Mechanism of a Committed
Computer Science. 2012, 39 (8): 111-114. 
Abstract PDF(352KB) ( 426 )   
An improved scheme was proposed against the shortcomings of the current direct anonymous attestation (DAA) mechanism in trusted computing platforms. The scheme first adopts a CA to verify the EK certificate of the prover, helping the prover and the DAA issuer build a session key; the DAA issuer can then issue the secret certificate to the prover with this key. The prover then uses a committed number lying in a specific interval to attest its validity to the verifier, by integrating the protocol that two committed numbers are equal with the CFT proof protocol. The analysis shows that this scheme not only has higher security, but is also non-fraudulent, anonymous, revocable and more efficient.
Intelligent Node Deployment Scheme for Wireless Sensor Networks
Computer Science. 2012, 39 (8): 115-118. 
Abstract PDF(487KB) ( 466 )   
To satisfy the specific requirements of wireless sensor network applications, e.g. the quality of coverage and connectivity, the sensor deployment issue for guaranteeing coverage and connectivity performance under probabilistic sensing and communication models was studied. A novel solution based on the elitist non-dominated sorting genetic algorithm (NSGA-II) was proposed. Simulation results show that the proposed scheme can satisfy user-specified coverage and connectivity requirements with a smaller number of nodes than random and grid deployment schemes. Furthermore, it provides a set of non-dominated solutions that reflect the conflicting relationships among multiple objectives, giving the user more intuitive compromises to choose from.
Relative Distance-based Clustering Algorithm for Multilevel Energy Heterogeneous Wireless Sensor Networks
Computer Science. 2012, 39 (8): 119-121. 
Abstract PDF(325KB) ( 352 )   
Prolonging network lifetime and obtaining better monitoring quality are important performance indexes for clustering algorithms in wireless sensor networks. Based on an analysis of existing clustering algorithms, a relative-distance clustering algorithm adapted to multi-level energy-heterogeneous sensor networks was proposed. In this algorithm, nodes decide their probability of becoming cluster heads based on their average distance from other nodes, their distance from the base station and their own current residual energy. All nodes take turns becoming cluster heads to share the energy consumption. Simulation results show that in multi-level energy-heterogeneous sensor networks, compared with existing algorithms, the new clustering algorithm prolongs the network lifetime and achieves better monitoring quality.
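A cluster-head election combining the three factors named above can be sketched as follows. The multiplicative weighting, `p_opt`, and `d_ref` are hypothetical choices for illustration, not the paper's formula.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def ch_probability(node, nodes, base_station, p_opt=0.1, d_ref=100.0):
    # node = {'pos': (x, y), 'energy': residual, 'e0': initial energy}.
    # Hypothetical weighting: higher residual energy and smaller relative
    # distances raise the chance of becoming a cluster head this round.
    others = [m for m in nodes if m is not node]
    d_avg = sum(dist(node['pos'], m['pos']) for m in others) / len(others)
    d_bs = dist(node['pos'], base_station)
    energy_factor = node['energy'] / node['e0']
    distance_factor = d_ref / (d_ref + d_avg + d_bs)
    return min(1.0, p_opt * energy_factor * distance_factor)
```

Because the probability shrinks as residual energy is spent, the cluster-head role rotates toward fresher nodes over successive rounds, spreading energy consumption as the abstract describes.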
Research on Congestion Control Mechanism Based on the Cooperative AODV Routing Protocols in the LTE Networks
Computer Science. 2012, 39 (8): 122-125. 
Abstract PDF(353KB) ( 505 )   
Aiming at the problems that existing congestion control algorithms in LTE networks cannot adapt to distributed network topologies and complex network environments, and suffer from low efficiency and high cost, this paper analyzed the effect of LTE wireless channel quality on AODV protocol performance. Based on a path-loss-threshold cooperative-mode AODV routing mechanism, this paper put forward a congestion control strategy based on queue length and hop count. LTE link-level and system-level simulation experiments demonstrate the performance of the modified congestion control mechanism for wireless TCP based on the cooperative routing protocol. Mathematical analysis and simulation results show that, compared with the traditional TCP congestion control mechanism, the cooperative congestion control mechanism has better performance.
Design and Implementation of Sensor Node for Environment Monitoring Internet of Things
Computer Science. 2012, 39 (8): 126-129. 
Abstract PDF(319KB) ( 429 )   
Since the energy of sensor nodes in wireless sensor networks is generally provided by batteries and is not easily replenished, a low-energy-consumption node suitable for real-time environment monitoring in the Internet of Things was designed. The nodes are divided into sensor nodes and sink nodes. An MSP430F1611 is used as the MCU and a CC2420 as the RF IC in the sensor node; an S3C2410 is used as the MCU and a CC2420 as the RF IC in the sink node. The hardware structure and software flow were given. Finally, the data mining accuracy, energy consumption and packet drop ratio of the node were simulated and analyzed. The results show that the designed node can satisfy the demands of the Internet of Things, with the advantages of high data mining accuracy, low energy consumption and high reliability.
Per-flow Admission Control with Adaptive Reservation of Bandwidth and Multiple QoS
Computer Science. 2012, 39 (8): 130-135. 
Abstract PDF(498KB) ( 344 )   
A novel dynamic distributed admission control with adaptive bandwidth reservation and multiple-QoS support for EDCA (enhanced distributed channel access) in IEEE 802.11e, based on a cross-layer design, was presented. In this mechanism, the bits of the OFDM subcarriers at the station are first allocated to obtain the maximum channel capacity subject to the maximum power, and the resulting bit rate is passed to the medium access control layer at the same time. On this basis, a dynamic bandwidth reservation based on distributed measurement was presented, which adapts to the characteristics of the channel and the service. An estimation of the surplus factor by semi-model-based central control was also presented, which overcomes the inaccuracy resulting from direct measurement and the locality resulting from distributed estimation. The relationship between the service's parameters and the collision probability was obtained, and dual admission criteria on bandwidth and collision probability were suggested to support multiple QoS metrics such as bandwidth, delay and frame error ratio. Together, these suggestions constitute an adaptive admission control spanning the upper and lower layers. Simulations show that the presented admission control outperforms previous ones in resource utilization and quality of service.
Method for Generating Formal System Model Based on Scenarios Analysis
Computer Science. 2012, 39 (8): 136-140. 
Abstract PDF(501KB) ( 364 )   
An important task in constructing reliable and safe software systems is the system's safety analysis and verification by formal methods. Formal modeling of the system is vital, as it affects the results of safety analysis and verification. We provided a method for generating a formal system model based on scenario analysis. The method first adopts UML sequence diagrams to specify the system's requirements. In order to obtain consistent scenario-based requirements, we combined pre- and post-conditions in the Object Constraint Language with domain knowledge to analyze conflicts in UML sequence diagrams. Besides, we proposed a model conversion algorithm for transforming the interactions of objects in UML sequence diagrams into finite state processes. Finally, we generated a finite-state model as the system's formal model, which conforms to the system's functional requirements. The correctness and feasibility of the proposed method were confirmed by generating a formal model for a railway station interlocking system. The new method provides a good way to model systems formally, and improves safety quality in the design and development of safety-demanding software.
Action Models Learning Algorithm with Indeterminate Effects for Software Requirement Specification
Computer Science. 2012, 39 (8): 141-146. 
Abstract PDF(592KB) ( 364 )   
Software systems are becoming an integral part of all walks of life. This aggravates the need for an artificial intelligence perspective on requirements engineering, which allows requirements to be modeled and analysed formally, rapidly and automatically, avoiding mistakes made through misunderstandings between engineers and users, and saving time and manpower. To extract software requirement specifications automatically, we applied intelligent planning and machine learning methods to convert software requirements into an incomplete planning domain, and proposed an algorithm, AMLCP, to learn action models with indeterminate effects. Furthermore, we obtained a complete planning domain by applying this algorithm and converted it into a software requirement specification.
Web Service Discovery Method Based on Net Unit Model of Service Cluster
Computer Science. 2012, 39 (8): 147-152. 
Abstract PDF(461KB) ( 385 )   
A net unit model of service clusters was defined and a Web service discovery method based on the model was proposed. First, service clustering is completed based on the semantic similarity of functional descriptions and parameters. Second, the net unit models of service clusters are built with unified labels of service parameters, and the service cluster parameter matrix is constructed with standardized treatment. Finally, service discovery is realized using the service cluster parameter matrix. On the basis of Petri nets, a formal model of service clusters was proposed for the first time, and quick service discovery was realized. The results indicate that the model for constructing service clusters is effective and reasonable. Compared with traditional service discovery methods based on parameter matching, the proposed method effectively reduces the number of parameter matches and improves service discovery efficiency.
Declassification Policy Based on Content and Location Dimensions
Computer Science. 2012, 39 (8): 153-157. 
Abstract PDF(517KB) ( 355 )   
Current research on declassification policies mainly involves the content, location, time and other dimensions, each of which has limitations: an attacker could learn more confidential information than intended by exploiting the vulnerabilities of other dimensions. A synthesis of different dimensions in a declassification policy further improves assurance that confidential information is declassified properly. This paper proposed a declassification policy based on the content and location dimensions, using an attacker knowledge model. The key idea of the content dimension of the policy is that the attacker is not allowed to increase observations about confidential information by misusing the declassification mechanism, while the location dimension of the policy ensures that confidential information is declassified only through the declassification statement. Additionally, we established typing rules for policy enforcement and proved their soundness.
Complex Event Detection for Long Process Based on Out-of-Core
Computer Science. 2012, 39 (8): 158-163. 
Abstract PDF(496KB) ( 369 )   
RelatedCitation | Metrics
Complex event detection can extract useful information from vast amounts of data, so it has drawn extensive research attention in recent years. However, many complex events occur over a very long period, and due to memory constraints, traditional complex event detection technologies are not suitable for this case. To detect complex events over long processes, this paper proposed several policies: an object tree as the memory storage structure, PR replacement, classified storage of events, and an instance map table. Finally, the effectiveness and efficiency of the proposed methods for complex event detection were verified through experiments on a real data set.
Application of Autonomous Component Architecture in Storage Business Simulation Test
Computer Science. 2012, 39 (8): 164-168. 
Abstract PDF(431KB) ( 386 )   
RelatedCitation | Metrics
The general performance indicators of a storage product, such as throughput, response time and number of concurrent I/Os, together with its performance under specific applications, reflect its core competitiveness. Therefore, performance testing is an important part of the development and purchase of storage products. Storage performance testing software that is efficient and has a high degree of confidence is an urgent need for both manufacturers and customers. However, the usual software in the industry has some deficiencies, namely limited scenarios, distortion of testing results, etc. Aiming at these shortcomings and current requirements, this paper, making use of the standardized, loosely coupled, reusable and scalable features of autonomous component architecture (ACA), proposed and implemented a feasible solution for scenario-simulation storage performance testing software. The technical principle of general performance test tools and the architecture of the system were analyzed. The system model of the business simulation performance test tool was presented for autonomous component architecture. The infrastructure and the component communication were designed, and the implementation principle of the main modules was proposed.
Network-constrained On-line Path Prediction Based on Global Learning Mechanism
Computer Science. 2012, 39 (8): 169-172. 
Abstract PDF(336KB) ( 347 )   
RelatedCitation | Metrics
Trajectory prediction of moving objects on road networks has been a hot spot in intelligent transportation, and it is widely used in areas such as emergency security and GPS applications. But if only the recent trajectory of a moving object is known, its future trajectory cannot be predicted with existing methods. A trajectory prediction method called LPP (longest frequent path prediction) was put forward, which constructs a fast-access structure, the LPP-tree, through a global learning mechanism to find the longest frequent trajectories. Based on the recent trajectory of a moving object, its future trajectory can then be predicted swiftly online. Experiments prove the validity of this method.
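The abstract does not give the LPP-tree's concrete layout. As a hypothetical sketch, a prefix tree over learned trajectories, with every suffix inserted so that any recent sub-path can be matched, supports the kind of online continuation lookup described (the class and method names here are illustrative, not from the paper):

```python
class PathNode:
    def __init__(self):
        self.children = {}
        self.count = 0

class LPPTree:
    """Hypothetical sketch of a frequent-path prefix tree (LPP-tree)."""
    def __init__(self):
        self.root = PathNode()

    def learn(self, trajectory):
        # Insert every suffix so prediction can match any recent sub-path.
        for start in range(len(trajectory)):
            node = self.root
            for segment in trajectory[start:]:
                node = node.children.setdefault(segment, PathNode())
                node.count += 1

    def predict(self, recent, min_support=2):
        # Walk down along the recent path, then follow the most frequent
        # continuation that still meets the support threshold.
        node = self.root
        for segment in recent:
            if segment not in node.children:
                return []
            node = node.children[segment]
        future = []
        while node.children:
            seg, child = max(node.children.items(), key=lambda kv: kv[1].count)
            if child.count < min_support:
                break
            future.append(seg)
            node = child
        return future

tree = LPPTree()
for traj in [["a", "b", "c", "d"], ["a", "b", "c", "e"], ["a", "b", "c", "d"]]:
    tree.learn(traj)
print(tree.predict(["b", "c"]))  # ['d']
```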
Location Privacy Preserving Nearest Neighbor Querying Based on Coordinates Accumulation
Computer Science. 2012, 39 (8): 173-177. 
Abstract PDF(381KB) ( 414 )   
RelatedCitation | Metrics
With the development of spatial positioning and wireless communication technology, location based services (LBS) have been promoted greatly. Users can get services by sending their position information to the LBS server, but in this manner it is inevitable that users' locations are disclosed. With growing concern over the privacy of individuals, it becomes pressing to provide location based query services without compromising users' location privacy. Most existing solutions adopt a framework of trusted third parties (TTPs) to serve as an intermediary between user clients and the LBS server. These solutions suffer from the following issues: (1) a trusted third party is difficult to find; (2) the TTP is inclined to become the system's bottleneck, which results in poor query efficiency and scalability. This paper proposed a TTP-free method that submits a coordinates accumulation instead of the user's real location. The client submits the query with the coordinate accumulation of its location to the server. A special coordinate-accumulation-based query process is devised at the server side, which can generate candidate answers that include the exact query result. Further, an effective pruning strategy is applied to reduce communication cost and workload at both the client and server sides. Theoretical analysis and experimental results demonstrate that our method can solve the problems mentioned earlier effectively.
QCS :A Preventing Multi-dimensional Inference Approach for OLAP
Computer Science. 2012, 39 (8): 178-181. 
Abstract PDF(422KB) ( 345 )   
RelatedCitation | Metrics
To address the high complexity and low practicability of most inference control approaches for on-line analytical processing (OLAP) systems, this paper proposed an improved approach for preventing multi-dimensional inference on the basis of previous research. The approach is based on the QCS (Query Cells Set): it performs detection of multi-dimensional inference threats on the set of cells (not single cells) that the requested cells of a query depend on, so the complexity of the detection algorithm is greatly reduced, which meets the normal query processing requirements of OLAP. The effectiveness proof and the algorithm were provided, and an example was used to illustrate the algorithm. Compared with former inference control approaches, the QCS approach not only protects the sensitive data in an OLAP system effectively, but also has better computational efficiency, which meets the practical requirements of OLAP systems.
Research on Optimal Fractional Bit Minwise Hashing
Computer Science. 2012, 39 (8): 182-185. 
Abstract PDF(295KB) ( 533 )   
RelatedCitation | Metrics
In information retrieval, the minwise hashing algorithm is often used to estimate similarities among documents, and b-bit minwise hashing can gain substantial advantages in computational efficiency and storage space by storing only the lowest b bits of each (minwise) hashed value (e.g., b=1 or 2). Fractional bit minwise hashing has a wider range of selectivity for accuracy and storage space requirements, but for a fixed fraction f there are many possible bit combinations. We theoretically analyzed the limited combinations of fractional bits, and the optimal fractional bit was found. Experimental results demonstrate the effectiveness of this method.
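As an illustration of the underlying b-bit idea (not the paper's fractional-bit scheme), the following sketch keeps only the lowest b bits of each minwise hash and corrects the observed match rate for accidental collisions; the linear hash family and parameters here are assumptions for demonstration only:

```python
import random

def bbit_signature(items, num_hashes=64, b=2, seed=42):
    # Store only the lowest b bits of each minwise hashed value.
    mask = (1 << b) - 1
    rng = random.Random(seed)
    prime = 2147483647
    sig = []
    for _ in range(num_hashes):
        a, c = rng.randrange(1, prime), rng.randrange(prime)
        sig.append(min((a * x + c) % prime for x in items) & mask)
    return sig

def estimate_jaccard(sig1, sig2, b=2):
    # Matching b-bit values come from true minima matches (prob. = Jaccard)
    # plus accidental collisions (prob. about 2^-b); invert that relation.
    match = sum(p == q for p, q in zip(sig1, sig2)) / len(sig1)
    collision = 2.0 ** -b
    return max(0.0, (match - collision) / (1.0 - collision))

s1, s2 = set(range(100)), set(range(50, 150))  # true Jaccard = 1/3
print(round(estimate_jaccard(bbit_signature(s1), bbit_signature(s2)), 2))
```

With more hash functions the estimate concentrates around the true Jaccard similarity, at a storage cost of only b bits per hash.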
Spatial Data Index Method Based on P2P
Computer Science. 2012, 39 (8): 186-190. 
Abstract PDF(420KB) ( 406 )   
RelatedCitation | Metrics
This paper put forward a spatial data index based on group Chord. It also provided a kind of spatial query under this index framework, along with a routing recovery mechanism. Test results show that the maintenance cost of this distributed index is low, so using it for spatial indexing is scalable. Increasing the size of groups can decrease the number of query hops, but considering the overall cost of a query there is an optimal group size. In addition, this paper also put forward a routing recovery mechanism based on space taken, which can deal with peer failures efficiently and reinforce the usability of the system.
New Learning Method for Optimal Warping Window of DTW
Computer Science. 2012, 39 (8): 191-195. 
Abstract PDF(403KB) ( 936 )   
RelatedCitation | Metrics
Dynamic time warping (DTW) is a classic similarity measure which can handle the time warping issue in similarity computation of time series, and DTW with a constrained warping window is the most common and practical form of DTW. After systematically analyzing the traditional learning method for the optimal warping window of DTW, we introduced a time distance to measure the time deviation between two time series, and proposed a new learning method for the optimal warping window based on the time distance. Since the time distance is a by-product of the DTW computation, the new method can improve DTW classification accuracy with little additional computation. Experimental data show that DTW with the best warping window gets better classification accuracy when the new learning method is employed. What is more, the classification accuracy is better than that of ERP (Edit Distance with Real Penalty) and LCSS (Longest Common SubSequence), and is close to that of TWED (Time Warp Edit Distance).
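The paper's time-distance criterion is not detailed in the abstract, but the constrained-window DTW it builds on can be sketched as follows, a minimal implementation of DTW restricted to a Sakoe-Chiba band of half-width w:

```python
def dtw_windowed(x, y, w):
    """DTW distance between sequences x and y, with the warping path
    constrained to a Sakoe-Chiba band of half-width w around the diagonal."""
    n, m = len(x), len(y)
    INF = float("inf")
    # D[i][j] holds the cost of the best path aligning x[:i] with y[:j].
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

print(dtw_windowed([1, 2, 3], [1, 2, 3], 1))  # 0.0
```

A small w both speeds up the computation (only a band of the matrix is filled) and often improves classification accuracy, which is why learning the best window matters.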
Iterative Learning Assignment Order for Constrained K-means Algorithm
Computer Science. 2012, 39 (8): 196-198. 
Abstract PDF(328KB) ( 450 )   
RelatedCitation | Metrics
The constrained K-means algorithm often improves clustering accuracy, but it is sensitive to the assignment order of instances. A clustering-uncertainty-based assignment order iterative learning algorithm (UALA) was proposed to obtain a good assignment order. The stability of instances is gradually confirmed iteratively according to the stability characteristics of the Cop-Kmeans algorithm, and the assignment order is then determined. Experiments demonstrate that the algorithm effectively improves the accuracy of the Cop-Kmeans algorithm.
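The Cop-Kmeans constraint check that the assignment order feeds into can be sketched as follows; this is a hypothetical minimal version of the standard violation test, and the paper's UALA ordering itself is not reproduced here:

```python
def violates(point, cluster_id, assignment, must_link, cannot_link):
    """Return True if assigning `point` to `cluster_id` breaks a
    must-link or cannot-link constraint against already-assigned points."""
    for a, b in must_link:
        other = b if a == point else (a if b == point else None)
        # A must-link partner already placed in a different cluster.
        if other is not None and other in assignment and assignment[other] != cluster_id:
            return True
    for a, b in cannot_link:
        other = b if a == point else (a if b == point else None)
        # A cannot-link partner already placed in this very cluster.
        if other is not None and assignment.get(other) == cluster_id:
            return True
    return False
```

Because each instance is assigned to the nearest cluster that passes this check, the order in which instances are processed changes the outcome, which is exactly the sensitivity the proposed algorithm addresses.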
Representation and Decomposition of Fuzzy Knowledge Granularity Based on Product Fuzzy Rough Set Model
Computer Science. 2012, 39 (8): 199-204. 
Abstract PDF(504KB) ( 365 )   
RelatedCitation | Metrics
Pawlak proposed rough set theory in order to process imprecise or uncertain data and knowledge in artificial intelligence, and the theory was later extended in generally two ways: one is to weaken the dependence on equivalence relations, the other is to extend the domains studied from one to many. Based on these two ideas, we researched a product fuzzy rough set model based on two fuzzy approximation spaces, and the representation and decomposition of fuzzy rough sets in product fuzzy approximation spaces, so that questions of expressing fuzzy knowledge granularity can be explored from different angles in a high-dimensional fuzzy knowledge space. We first researched the hierarchical structure of a fuzzy approximation space (its cut approximation spaces) and obtained the relationship between knowledge granularities at the various levels. Secondly, the product of finitely many fuzzy equivalence relations was defined and its algorithm was investigated. Finally, a product fuzzy approximation space was constructed based on product fuzzy equivalence relations; decompositions of upper and lower approximations of fuzzy sets were discussed in the high-dimensional fuzzy approximation space, and a characterization of the upper (lower) approximation of crisp decomposable sets was given.
A Kind of Multi-objective Optimization Algorithm Based on Differential Evolution with Multi-population Mechanism
Computer Science. 2012, 39 (8): 205-209. 
Abstract PDF(399KB) ( 883 )   
RelatedCitation | Metrics
In order to avoid falling into local optima when solving multi-objective optimization problems (MOP) with the differential evolution algorithm (DE), we designed a bidirectional search mechanism which can improve the local search ability of DE. We also designed a multi-population mechanism for DE, which can reduce the risk of local optima and make the Pareto fronts more evenly distributed. Experimental results show that, compared with similar algorithms such as NSGA-II, the proposed method is more efficient, while the precision and distribution of the Pareto optimal solution set are better.
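The base algorithm being extended, classic DE/rand/1/bin, can be sketched as below. The multi-population and bidirectional-search mechanisms of the paper are not shown; this single-objective sketch only illustrates the mutation, crossover and selection core that they build on:

```python
import random

def de_step(pop, f_obj, rng, F=0.5, CR=0.9):
    """One generation of classic DE/rand/1/bin: differential mutation,
    binomial crossover, then greedy selection against the target vector."""
    dim = len(pop[0])
    new_pop = []
    for i, target in enumerate(pop):
        # Three distinct individuals other than the target.
        a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
        trial = list(target)
        jrand = rng.randrange(dim)  # guarantee at least one mutated component
        for j in range(dim):
            if rng.random() < CR or j == jrand:
                trial[j] = a[j] + F * (b[j] - c[j])
        # Greedy selection: keep whichever of trial/target is better.
        new_pop.append(trial if f_obj(trial) <= f_obj(target) else list(target))
    return new_pop

sphere = lambda v: sum(x * x for x in v)
rng = random.Random(1)
pop = [[rng.uniform(-5, 5) for _ in range(2)] for _ in range(10)]
for _ in range(30):
    pop = de_step(pop, sphere, rng)
```

Because selection is greedy per individual, the best objective value in the population never worsens between generations, but a single population can still stall in a local optimum, which motivates the paper's multi-population design.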
Diagnosability of Discrete-event Systems Based on Temporal
Computer Science. 2012, 39 (8): 210-214. 
Abstract PDF(505KB) ( 376 )   
RelatedCitation | Metrics
This paper proposed a new diagnosis approach based on the temporal relationship between events. In order to reduce the scale of the model, we first added some communication events to divide the global model, and then deleted some useless paths in the local models to reduce the state space. According to the temporal relationships between communication events and observable events, we can obtain the diagnosability of the limited model and some of its properties. We used these properties to obtain the diagnosability of the local models, which avoids the high complexity of the synchronization operation. Finally we gave some examples to analyze the steps of the diagnosis process.
Qualitative Spatio-temporal Reasoning Combining Multiple Spatial Relations and Time
Computer Science. 2012, 39 (8): 215-219. 
Abstract PDF(356KB) ( 361 )   
RelatedCitation | Metrics
There are a variety of spatial and temporal relations between spatial entities, such as topology, direction, distance, size and time. Previous research has mainly focused on the representation of, and reasoning over, combinations of fewer than three aspects of spatio-temporal relations; integration of three or more aspects is rare. However, multiple spatio-temporal relations are unified and mutually constrained, and the integration of more aspects is not only the inevitable trend of spatio-temporal representation and reasoning research, but also an urgent need for practical applications. A unified spatio-temporal representation model was proposed, which represents spatial relations using rectangle relations, and represents time using the number of changes of rectangle relations. Spatial relation and time changes were deduced by using a conceptual neighborhood graph. Based on the model, combined with rectangle relations and a path consistency algorithm, an algorithm was proposed to verify the consistency of the unified model network, and its complexity was analyzed. This research improves the accuracy of spatial relationship analysis and reduces time information redundancy, which is of theoretical significance and application value for the analysis and querying of spatial relations and time changes of spatial entities in geographic information systems.
Feature Selection Algorithm Based on Approximate Markov Blanket and Dynamic Mutual Information
Computer Science. 2012, 39 (8): 220-223. 
Abstract PDF(350KB) ( 410 )   
RelatedCitation | Metrics
To resolve the poor classification performance caused by irrelevant and redundant features, a feature selection algorithm based on the approximate Markov blanket and dynamic mutual information was proposed. The algorithm uses mutual information as the evaluation criterion of feature relevance, estimated dynamically on the unrecognized samples. Redundant features are removed exactly by the approximate Markov blanket, so a small feature subset can be attained with the proposed algorithm. To verify its validity, we made experiments on UCI data sets with a support vector machine as the classifier, compared with the DMIFS and ReliefF algorithms. The experimental results suggest that the feature subset obtained by the proposed algorithm is much smaller than the original feature set, while its actual classification performance is better than or as good as that of the original feature set.
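The relevance criterion itself, mutual information between a discrete feature and the class label, can be sketched as follows; the dynamic re-estimation over unrecognized samples is the paper's addition and is not shown:

```python
from collections import Counter
from math import log2

def mutual_information(feature, labels):
    """Mutual information I(F; C) between a discrete feature column and
    the class labels, estimated from empirical frequencies."""
    n = len(labels)
    pf, pl = Counter(feature), Counter(labels)
    joint = Counter(zip(feature, labels))
    mi = 0.0
    for (f, l), c in joint.items():
        pxy = c / n
        # Sum of p(f,l) * log2( p(f,l) / (p(f) * p(l)) ) over observed pairs.
        mi += pxy * log2(pxy / ((pf[f] / n) * (pl[l] / n)))
    return mi
```

A feature identical to the label attains the label entropy (1 bit for two balanced classes), while a constant feature scores zero, which is why MI serves as a relevance ranking.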
Anti-attack Ability Based on Costs in Complex Networks
Computer Science. 2012, 39 (8): 224-227. 
Abstract PDF(437KB) ( 437 )   
RelatedCitation | Metrics
Current research on the anti-attack ability of complex networks is based on the hypothesis that attacks cost nothing. However, most complex networks are fragile when confronting cost-free selective attack strategies, which conflicts with real-world networks. Aiming at this contradiction, this paper proposed key indexes, i.e., the network compactness index and the average degree, to measure attack effects with costs. This paper developed a selective node attack strategy model based on the network compactness index, and qualitatively analyzed the relation between the network compactness index, the average degree, and the anti-attack ability of complex networks. Detailed simulation results show that the key indexes proposed in the paper are effective: the more compact the network is and the larger the average degree is, the more robust the network is; for the same average degree, the larger the network compactness index is, the more robust the network is.
Covering Rough Vague Sets and Uncertainty Measurement
Computer Science. 2012, 39 (8): 228-232. 
Abstract PDF(354KB) ( 357 )   
RelatedCitation | Metrics
Aiming at the error in the idempotent property of the covering rough Vague sets in paper [21], a new covering rough Vague set model based on neighborhoods was proposed. This paper discussed some related properties and the relationship with the first covering rough Vague sets. Finally, we defined an uncertainty measurement method for covering rough Vague sets based on knowledge entropy in the covering granular space. Example analysis shows that the uncertainty degree of the second covering rough Vague sets decreases with the decrease of the granularity.
Modified Decimal MIMIC Algorithm for TSP
Computer Science. 2012, 39 (8): 233-236. 
Abstract PDF(339KB) ( 551 )   
RelatedCitation | Metrics
The modified decimal MIMIC algorithm is a kind of discrete estimation of distribution algorithm, which is based on the binary MIMIC algorithm and convenient for solving the traveling salesman problem. Considering the drawbacks of the MIMIC algorithm when solving larger-scale TSP, this paper improved the encoding mode and probability model, proposed a new individual strategy, introduced a greedy algorithm in the initial phase of the probability matrix, adopted crossover and mutation operators during the process of evolution, and employed a dynamic adjustment method to determine the population size. These modifications guarantee population diversity even with a small population and for larger-scale TSP. Experimental results show that the problem scale, solution quality and optimization speed are improved significantly.
Multi-subjects Classification Algorithm Based on Maximal-margin Minimal-volume Hypersphere Support Vector Machine
Computer Science. 2012, 39 (8): 239-238. 
Abstract PDF(232KB) ( 435 )   
RelatedCitation | Metrics
For the multi-subject classification problem, a multi-subject maximal-margin minimal-volume hypersphere support vector machine was proposed based on the maximal-margin minimal-volume hypersphere support vector machine and fuzzy theory. The algorithm uses 1-a-r maximal-margin minimal-volume hypersphere support vector machines to train sub-classifiers and obtains the membership vector of the sample to be classified according to the classifiers. Finally it labels the subjects that the sample belongs to according to the membership vector. The experimental results show that the algorithm has higher performance on precision, recall and F1.
Quantum Genetic Algorithm Based on Angle Coding of 3D
Computer Science. 2012, 39 (8): 242-245. 
Abstract PDF(325KB) ( 356 )   
RelatedCitation | Metrics
In order to make full use of the quantum characteristics of quantum states, improve search efficiency and reduce storage space, a new quantum genetic algorithm called 3DAQUA was proposed. The algorithm describes a quantum bit as a pair of angles in 3D spherical coordinates, makes full use of the quantum space motion characteristics, and introduces an adaptive scheme to calculate the rotation angle size and direction, which not only simplifies the process of chromosome update and variation, but also greatly improves the quantum characteristics, storage properties and time performance of the algorithm. Simulation results show that the efficiency and search ability of the algorithm are superior to the simple genetic algorithm and the common quantum genetic algorithm.
Receptor Editing-inspired Real Negative Selection Algorithm
Computer Science. 2012, 39 (8): 246-251. 
Abstract PDF(529KB) ( 374 )   
RelatedCitation | Metrics
Inspired by the theory of biological immune receptor editing, a receptor editing-inspired real negative selection algorithm (RERNS) was proposed. For a detector that matches self, the algorithm uses directional receptor editing to give it a new life: the new detectors are located in the boundary area between self and non-self, thereby increasing the diversity of detectors and improving the boundary coverage of the algorithm. For a detector that does not match self, the algorithm uses directional receptor editing toward the identified nearest self to expand coverage of non-self space while containing the original scope of the detector. Theoretical analysis and experimental verification show that the RERNS algorithm generates fewer immature detectors and obtains better detection performance than the most representative RNS algorithm and the V-detector algorithm.
Prediction of β-barrel Transmembrane Protein from Sequence Based on SVM
Computer Science. 2012, 39 (8): 252-255. 
Abstract PDF(329KB) ( 390 )   
RelatedCitation | Metrics
Membrane proteins are a sort of protein with important biological functions. Predicting from sequence whether a protein belongs to the β-barrel transmembrane proteins is an important precursor step for predicting the 3D structure of β-barrel transmembrane proteins, and is also a challenging job in computational protein research. This paper introduced feature extraction for β-barrel transmembrane protein sequences and prediction by SVM. The features consist of position information in the sequence and the physical and chemical properties of amino acid residues. The results show that the accuracy and MCC of the method are 88.36% and 0.7723 respectively.
Improved Two-dimensional Minimum Error Image Thresholding Method
Computer Science. 2012, 39 (8): 259-262. 
Abstract PDF(647KB) ( 346 )   
RelatedCitation | Metrics
The two-dimensional minimum error (TME) thresholding method is a viable image segmentation method, but it has high complexity, is hardly usable in real-time applications, and is sensitive to noise, so an improved TME thresholding method was proposed. First, the traditional 3×3 template is divided into two complementary parts, a cross template and a 4-angle template, and the original image is median-filtered with the two templates respectively to get two filtered images; then an efficient 2-D histogram is created and better TME segmentation results are obtained using the two images. Finally, the TME formula is deduced and simplified, and a novel fast algorithm is derived from the TME computing features and the simplified formula in order to reduce the computational complexity. Experimental results show that compared with the current TME thresholding algorithm, the proposed method not only has better segmentation performance and robustness, but is also much faster and uses much less memory.
Service Model of Spatial Data Storage Scheduling Based on Subdivision Theory
Computer Science. 2012, 39 (8): 263-267. 
Abstract PDF(450KB) ( 383 )   
RelatedCitation | Metrics
As spatial data applications expand in depth and breadth, spatial data face problems of speed and efficiency in organization, storage, update and application. In view of these problems, based on global subdivision organization theory, and combined with the client-oriented aggregation service of the G/S mode, this paper studied and proposed a storage scheduling service model for space subdivision data, presented the model architecture and data access process, and designed the address coding structure and resolution process, forming an effective mechanism for organizing and managing space subdivision data that meets the needs of integration and fast scheduling with distributed data storage and client information aggregation. Finally, a prototype system was realized and verified; the model has certain theoretical significance and application value.
Inverse P-reasoning and Discovery, Reasoning-search for Unknown Information
Computer Science. 2012, 39 (8): 268-272. 
Abstract PDF(373KB) ( 399 )   
RelatedCitation | Metrics
Inverse P-sets (inverse packet sets) are a new mathematical structure based on P-sets (packet sets): a pair of sets composed of an internal inverse P-set XF (internal inverse packet set) and an outer inverse P-set XF (outer inverse packet set). Inverse P-sets have dynamic characteristics, which are identical to those of one kind of information system. P-sets are obtained by introducing dynamic characteristics into an ordinary set X; their dynamic characteristics are identical to those of another kind of information system, and P-sets have been widely applied to such information systems. P-reasoning (packet reasoning) is a reasoning model generated by P-sets and has dynamic characteristics. On the basis of inverse P-sets and inverse P-reasoning, a theorem of inverse P-reasoning and internal-outer search was given, along with the geometric characteristics of inverse P-reasoning. Several theorems and applications concerning inverse P-reasoning and unknown-information search and identification were given. Inverse P-sets and inverse P-reasoning have broad applicability in practice.
New Concept Lattice Structure: Interval Concept Lattice
Computer Science. 2012, 39 (8): 273-277. 
Abstract PDF(346KB) ( 377 )   
RelatedCitation | Metrics
Analysis of the classic concept lattice and the rough concept lattice shows that a concept extension containing either all the attributes or only one attribute can greatly decrease the support and confidence of the extracted association rules. To solve this problem, the author put forward a new concept lattice structure, the interval concept lattice, in which a concept extension is the set of objects that satisfy the intension properties within a given interval. Firstly, it was proved that the interval concept lattice degenerates into the classic concept lattice in one limiting case of the interval, and into the rough concept lattice in the other. Secondly, the measurement precision and coverage of interval concept lattice concepts were given and some related properties were discussed. Thirdly, some unique properties of the interval concept lattice were proved. Fourthly, a construction method for the interval concept lattice was provided preliminarily. Finally, the necessity and practicability were verified through a case study.
Improved Medical Image Segmentation Algorithm Based on Laplacian Level Set
Computer Science. 2012, 39 (8): 278-280. 
Abstract PDF(569KB) ( 354 )   
RelatedCitation | Metrics
Being a key procedure of image recognition and image understanding, image segmentation is regarded as being of important potential value, so many algorithms have been proposed; on the other hand, it still faces many challenges, one of which is how to acquire continuous segmentation results from blurred regions. A new medical image segmentation algorithm based on the Laplacian level set was proposed; it combines regional information into the speed function to drive the evolution of the level set surface. The algorithm utilizes not only edge and gradient information of the image, but also image region information, taking advantage of regional global optimization features while maintaining the local features of edges. The proposed algorithm implements effective segmentation of medical images. Compared with classical level set segmentation methods, the improved algorithm performs well in maintaining the continuity of edges, so the segmentation result is relatively complete. This algorithm can provide reliable scientific data for image analysis.
Study of Weaken Expression in Face Recognition
Computer Science. 2012, 39 (8): 281-283. 
Abstract PDF(575KB) ( 312 )   
RelatedCitation | Metrics
To address the impact of expression on face recognition accuracy, a guide-based model was proposed to weaken local expression, and mesh deformation technology was used to reduce the plastic deformation caused by facial expression. First, the expression area is transformed according to triangular patches, and the deformation gradient is calculated with a gradient operator to realize the guide-based differential gradient field transform; then the Poisson equation is used to complete the triangle stitching and reconstruction. The rigid region in the expressive face is found to obtain class average face differences and constraints, and the constraints are used to maintain between-class differences and within-class similarity. Compared with other algorithms, the face recognition accuracy rate is improved significantly, proving the effectiveness of the method.
High Resolution Color Remote Sensing Image Segmentation Based on Improved JSEG Algorithm
Computer Science. 2012, 39 (8): 284-287. 
Abstract PDF(813KB) ( 432 )   
RelatedCitation | Metrics
The JSEG algorithm is a color image segmentation method that performs image filtering, color space quantization and spatial segmentation in order. However, using the method directly on remote sensing images can result in inaccurate edges due to fuzzy edges, or lead to over-segmentation because of different shades within a region. To obtain a better segmentation effect, a local homogeneity matrix, which reasonably describes the color homogeneity within a region and the boundaries between different regions, was used to correct the local J value of the traditional JSEG algorithm in this paper. In addition, to weaken or eliminate the over-segmentation, the LBP/C operator, which gives a stable description of the texture information of an image, was used to merge regions of the class map with similar texture information. Experimental results show that the above methods work well.
Level Set Image Segmentation Method Based on Prior Shape Knowledge
Computer Science. 2012, 39 (8): 288-291. 
Abstract PDF(784KB) ( 354 )   
RelatedCitation | Metrics
A shape-prior based image segmentation method was proposed to address the shortcomings of current level set methods in segmenting subjects with strong noise and weak boundaries. Using the level set method, this model combines region and boundary information and selects the optimal prior shape by similarity matching. The model shows its advantages in the segmentation of complicated images with strong noise and weak boundaries, and in the efficiency of determining the initial contour of curve evolution. Compared with traditional methods, experiments prove that this approach yields a better segmentation effect and improved accuracy.
Image Retrieval Using the Double Density Contourlet Transform
Computer Science. 2012, 39 (8): 292-295. 
Abstract PDF(656KB) ( 324 )   
RelatedCitation | Metrics
We proposed the double density contourlet transform (DDCT) and studied its applications. The DDCT is a frame operator for L2(Z2) and has the advantages of near shift invariance, multi-scale and multi-direction analysis. The DDCT subbands of texture images have a non-Gaussian property, and the generalized Gaussian density (GGD) can represent the overall statistical features of an image to some extent. At the same time, the local binary pattern is used to describe the local texture-spatial feature of the low frequency subband of the multiwavelets. Thus the GGD and local binary pattern features can be computed as the texture features. Experiments indicate that the retrieval efficiency of this algorithm is 5.3% higher than that of the contourlet algorithm.
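The local binary pattern feature used for the low-frequency subband can be sketched as follows, with basic 8-neighbour LBP; the GGD fitting and the DDCT itself are beyond a short example:

```python
def lbp_code(img, y, x):
    """8-neighbour local binary pattern code at pixel (y, x): each
    neighbour not darker than the centre contributes one bit."""
    center = img[y][x]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= center:
            code |= 1 << bit
    return code
```

A histogram of these codes over a subband gives the local texture-spatial descriptor that is concatenated with the GGD parameters for retrieval.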
Target Tracking Based on Multi-core Parallel Particle Filtering
Computer Science. 2012, 39 (8): 296-299. 
Abstract PDF(332KB) ( 382 )   
RelatedCitation | Metrics
The real-time capability of the particle filter is poor because the algorithm must perform calculations over a large number of particles. Exploiting the inherent parallelism of the particle filter, we used OpenMP to derive multiple threads, changing the algorithm from a single-threaded serial implementation to a parallel multithreaded one. On a multi-processor platform, parallel computing technology and the particle filter algorithm were used to implement tracking of a moving object. The results show that the method can improve the execution speed of the algorithm.
Study on an Improved Algorithm of Video Keyframe Extraction
Computer Science. 2012, 39 (8): 300-303. 
Abstract PDF(593KB) ( 630 )   
RelatedCitation | Metrics
Video segmentation and keyframe extraction are core issues of content-based video retrieval (CBVR). We proposed an improved approach to key-frame extraction for video retrieval, adopting a synthesis of a histogram-based method and a pixel-based method for video shot segmentation. In this method, videos are first segmented into shots according to video content by our improved histogram-based method, which uses histogram intersection with non-uniform partitioning and weighting. The obtained results are then re-examined by the pixel-based method to optimize them. Finally, within each shot, key frames are selected by computing the image entropy of every frame in HSV color space. Our simulation results show that the key frames extracted by our method are compact and uniformly distributed.
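The two building blocks named in the abstract are standard: histogram intersection (the sum of bin-wise minima of two normalized histograms, near 1.0 for similar frames and dropping at shot boundaries) and Shannon image entropy. A minimal sketch of both (the paper's non-uniform partitioning and weighting scheme is not specified here, so these are the plain versions):

```python
import numpy as np

def hist_intersection(h1, h2):
    # similarity of two normalized histograms: sum of bin-wise minima;
    # a value below some threshold signals a candidate shot boundary
    return np.minimum(h1, h2).sum()

def image_entropy(channel, bins=256):
    # Shannon entropy (in bits) of one channel's intensity distribution;
    # within a shot, the frame with maximal entropy is a keyframe candidate
    hist, _ = np.histogram(channel, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()
```

In the method above, the entropy would be computed per frame over the HSV channels, and the intersection over weighted, non-uniformly partitioned histograms rather than the plain global histogram shown here.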
Combined Method of Dynamic and Static Address Mapping for Shared Last Level Cache of CMP
Computer Science. 2012, 39 (8): 304-310. 
Abstract PDF(656KB) ( 317 )   
RelatedCitation | Metrics
The shared last level cache (LLC) of a CMP often uses a static address mapping method, which may map one processor's temporary private data to another processor's last-level cache slice. The processor then needs a longer access latency for these data than for data mapped locally. This paper proposed a combined method of dynamic and static address mapping. The method maps most temporary data to the cache slice of the accessing processor, so that the access latency for these data is reduced to the local LLC access latency, and the interconnection power and bandwidth wasted on these data are saved. The experimental results show that the combined method of static and dynamic address mapping, used in a CMP with a ring interconnection and the SOR cache coherence protocol, obtains an average performance increase of 9%, with a maximum of 38%.
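The contrast between the two mapping policies can be made concrete with a toy model. Conventional static mapping interleaves cache blocks across LLC slices by address bits, regardless of which core uses the data; a combined policy homes private pages at the first core that touches them and falls back to interleaving for shared data. The sketch below is a simplified illustration of that idea (page size, data structures, and names are assumptions, not the paper's mechanism):

```python
def static_home(block_addr, n_tiles):
    # static interleaved mapping: low-order bits of the block address
    # pick the LLC slice, independent of which core accesses the data
    return block_addr % n_tiles

def dynamic_home(block_addr, requester, page_table, n_tiles):
    """Toy combined policy: a page is homed at the first tile that touches
    it (so its temporary private data stays in the local LLC slice);
    pages later accessed by other tiles fall back to static mapping."""
    page = block_addr >> 6  # assume 64 blocks per page
    owner = page_table.setdefault(page, requester)
    if owner == requester:
        return requester    # local LLC slice: short latency, no ring traffic
    return static_home(block_addr, n_tiles)
```

The win described in the abstract comes from the first branch: most temporary data is only ever touched by one core, so its accesses never cross the ring interconnect.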
Fetch Policy with IFSBSMT Based on Simultaneous Multithreaded Processors
Computer Science. 2012, 39 (8): 311-315. 
Abstract PDF(476KB) ( 324 )   
RelatedCitation | Metrics
The fetch policy directly influences the instruction throughput rate of a processor. In view of the shortcomings of unbalanced utilization of fetch bandwidth and a high conflict rate in the instruction queue, a new fetch policy named instruction flow speed based simultaneous multithreading (IFSBSMT for short) was proposed. This strategy uses the instructions-per-clock (IPC) value to select high-priority threads, and allocates fetch width according to the number of prefetched instructions. Meanwhile, the per-thread IPC value and the L2 cache miss rate are used as a dual-priority dynamic resource allocation mechanism to distribute system resources. The results show that IFSBSMT effectively improves fetch bandwidth utilization, overcomes the problems of instruction queue conflict and resource waste, improves instruction throughput, and obtains better fetch fairness.
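The two decisions the abstract describes, choosing which threads may fetch and dividing the fetch width among them, can be sketched as a small scheduling function. The field names, the choice of two threads per cycle, and the proportional split below are illustrative assumptions, not the paper's actual mechanism:

```python
def select_fetch_threads(threads, fetch_width):
    """Toy sketch of an IPC-driven fetch policy: threads with higher
    recent IPC get priority, and the fetch width is split in proportion
    to each chosen thread's outstanding prefetch demand."""
    ranked = sorted(threads, key=lambda t: t["ipc"], reverse=True)
    chosen = ranked[:2]  # e.g. fetch from the two highest-IPC threads
    demand = sum(t["prefetch_pending"] for t in chosen) or 1
    alloc = {}
    remaining = fetch_width
    for t in chosen:
        share = min(remaining,
                    round(fetch_width * t["prefetch_pending"] / demand))
        alloc[t["tid"]] = share
        remaining -= share
    return alloc
```

The paper's dual-priority mechanism would additionally demote threads with a high L2 cache miss rate when allocating other pipeline resources, which this sketch does not model.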