Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Volume 38 Issue 8, 16 November 2018
  
Theory and Key Technologies of Architecture and Intelligent Information Processing for Internet of Things
ZHAO Zhi-jun,SHEN Qiang,TANG Hui, FANG Xu-ming
Computer Science. 2011, 38 (8): 1-8. 
The Internet of Things (IoT) is a virtual network dedicated to combining the Internet and a huge number of things, connected to the Internet via various wired or wireless technologies. Once connected, things such as RFID tags, sensors and actuators can communicate with each other. This paper first introduced the characteristics and network architecture of IoT and the differences between IoT and other networks, e.g., wireless sensor networks and RFID. To achieve the target of large-scale application, we proposed a general network architecture for IoT, in which the concept of a Regional Server was presented for the first time to address the "Information Isolated Island" problem. Then, this paper claimed that semantic interoperability, knowledge representation and context-aware technology are the key technologies for IoT. Accordingly, the paper investigated in depth the key problems of intelligent information processing, e.g. the definition of information spaces, the duality of information, and information processing technologies.
Review of Architectures of Cognitive Network
WANG Hui-qiang,XU Jun-bo, FENG Guang-sheng, WANG Zhen-dong, CHEN Xiao-ming
Computer Science. 2011, 38 (8): 9-16. 
Increasingly complex application environments and diverse user needs have made the management of network systems complicated and short of intelligent self-adaptation ability. In response to this demand, the Cognitive Network (CN) was proposed, which is considered the inevitable trend of next-generation networks. Since CN came into view, it has played a great role in improving overall network performance and end-to-end goals, as well as simplifying network management. Related work was first summarized in depth in this paper, and then three typical CN architectures were introduced briefly. Based on the analysis of related work, a new, more general framework called Super-NET was proposed, which lays the foundation for building a theoretical model of CN. At last, the problems and challenges facing CN, as well as directions of development, were introduced briefly.
Survey on Software Scalability of Distributed Systems
CHEN Bin, BAI Xiao-ying, MA Bo, HUANG Jun-fei
Computer Science. 2011, 38 (8): 17-24. 
Scalability represents the ability of a system to continually satisfy performance requirements as system requirements and resources change. Depending on the scenario, the basic definition and metrics of scalability can be understood and introduced from different perspectives. The main approach by which systems implement scalability is to dynamically change the amount of available resources and the manner of task scheduling according to system requirements and running status, resulting in dynamic adjustment of system performance. The key techniques of distributed resource management systems can be analyzed from the viewpoints of parallel task scheduling and distributed system frameworks. Since scalability is a key issue for evaluating and measuring system performance, this paper stressed parallel code testing and the main approaches to scalability testing system design. With the development of the software paradigm, the deployment and supply of software are gradually shifting to open, shared virtual resource management online service platforms, and scalability has become an important guideline for software under Cloud Computing; this paper further discussed the challenging problems under the new software paradigm for future research.
Research on Granularity Clustering Algorithms
XU Li, DING Shi-fei
Computer Science. 2011, 38 (8): 25-28. 
Information granularity is a measure of the different levels of refining information and knowledge. With the advantages of selecting the granularity structure flexibly, eliminating incompatibility between clustering results and prior knowledge, and completing clustering tasks effectively, granularity clustering methods have become a research focus at home and abroad. In this paper, starting from traditional clustering algorithms and the viewpoints of rough set, fuzzy set and quotient space theories, effective clustering algorithms based on the idea of granularity, together with their merits and faults, were studied and summarized. Finally, the feasibility and effectiveness of handling high-dimensional, complex, massive data by combining these theories were forecast.
Research on Commit Method in Distributed Short Transaction Systems
FU Yan-yan, CHEN Chi, FENG Deng-guo
Computer Science. 2011, 38 (8): 29. 
Transactions in a distributed short transaction system are usually short and issued frequently. Most of the existing distributed commit protocols are designed for complex long-transaction systems and therefore cannot meet the needs of short transactions. In this paper, an improved scheme for distributed short transaction systems was proposed. By backing up logs and querying among participants, we can efficiently reduce the number of log writes and communications during the transaction and optimize failure processing. Compared with other distributed commit protocols and systems, this protocol can effectively improve the efficiency of services and has very high log availability.
k-Anycast Routing Protocol for Wireless Sensor Networks
GAO De-min,QIAN Huan-yan,YAN Xiao-yong,WANG Xiao-nan
Computer Science. 2011, 38 (8): 33-37. 
For the problem of multiple base stations in wireless sensor networks, a routing protocol based on k-anycast technology was suggested, and the maximum lifetime of a wireless network with multiple base stations was studied. First, based on energy consumption and data traffic conservation, a combined nonlinear programming model was established. Because such mathematical models are NP-hard, they were further transformed into a mixed integer nonlinear programming model to solve for the routing paths linking all the base stations. According to link lifetime, data flow was assigned so as to balance energy consumption and maximize the network lifetime. The simulation results indicate that the scheme can prolong the communication life cycle.
Optimized User Mode Virtual Router
YIN Dong, MU De-jun ,DAI Guan-zhong
Computer Science. 2011, 38 (8): 38-41. 
Network virtualization provides the ability to run concurrent virtual networks over a shared substrate. However, it is challenging to design such a platform to host multiple heterogeneous and often highly customized virtual networks: not only is minimal interference among different virtual networks desired, but high-speed packet forwarding is also required. This paper presented the Optimized User mode Virtual Router (OUVR), a virtual network platform which uses efficient user-mode packet forwarding to support high-speed and highly customizable virtual networks. By adopting lightweight OS-level virtualization to slice a physical server into virtual machines, the data plane of a virtual router runs in an isolated virtual machine and is therefore safe to customize. We designed a new user-mode packet processing scheme for virtual routers hosted in OUVR to achieve high-speed forwarding. Experiments show that an OUVR virtual router can be four times faster than a conventional user-mode software router.
New Measure of Complex Network Centrality Based on Resource Allocation
CHEN Guo-qiang,CHEN Liang
Computer Science. 2011, 38 (8): 42. 
A new centrality measure for complex networks, called the resource allocation centrality measure, was proposed in this paper. It overcomes a disadvantage of several commonly used centrality measures, which are not applicable to disconnected networks. The resource allocation centrality of a node is defined as the amount of resource it receives from other nodes: if a node receives more resource from other nodes, it is more important. Simulation tests on artificial and real networks show that the resource allocation centrality measure performs well in detecting bridge nodes and has good stability.
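The abstract does not specify the allocation rule, so the sketch below assumes the simplest one-step variant: every node splits one unit of resource equally among its neighbors, and a node's centrality is the total resource it receives. Unlike closeness centrality, this remains well defined on a disconnected graph.

```python
def resource_allocation_centrality(adj):
    """One-step resource allocation: each node splits one unit of
    resource equally among its neighbors; a node's centrality is the
    total resource it receives.  `adj` maps node -> set of neighbors."""
    centrality = {v: 0.0 for v in adj}
    for v, neighbors in adj.items():
        if not neighbors:
            continue  # an isolated node sends nothing
        share = 1.0 / len(neighbors)
        for u in neighbors:
            centrality[u] += share
    return centrality

# A 4-node path a-b-c-d plus an isolated node e: the measure still
# works on this disconnected graph.
graph = {
    "a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}, "e": set(),
}
scores = resource_allocation_centrality(graph)
```

Here the interior nodes b and c each receive 1.5 units (a full unit from a degree-1 neighbor plus half a unit from a degree-2 neighbor), correctly marking them as the more important bridge-like nodes.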
Scheduling and Resource Allocation Algorithm Based on Cross-layer in Middle and High Rate Sensor Networks
SHEN Jian-fang,CHENG Liang-lun
Computer Science. 2011, 38 (8): 45. 
To efficiently support the diverse quality of service (QoS) requirements in middle and high rate sensor networks, a scheduling and resource allocation algorithm based on cross-layer optimization was proposed in this paper. By dynamically adjusting the values of two parameters, a delay compensation factor and a throughput compensation factor, a mathematical formulation was provided with the goal of maximizing throughput under real-time service QoS constraints, meeting the requirements of smaller delay for real-time services and larger throughput for non-real-time services. Simulations show that the scheme can flexibly achieve a compromise between system efficiency and user QoS satisfaction, and guarantee fairness among different users.
Petri Net Model and Deadlock Detection of Distributed Lock
JIN Hong-lin , LIU Bo
Computer Science. 2011, 38 (8): 49-52. 
The Distributed Lock Manager (DLM) uses six kinds of locks, which allows more concurrency in distributed systems but also makes lock management more difficult; Petri nets are well suited to modeling this. The Petri net model of a distributed system is composed from the simplified Petri nets of its subsystems. With the reachable marking graph of the system's Petri net, deadlocks can be detected in real time, and the processes in the deadlock chain can also be identified.
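The detection idea here, enumerating the reachable marking graph and flagging markings in which no transition is enabled, can be sketched generically. This is not the paper's DLM model; the place/transition encoding below is a hypothetical example of two processes acquiring two locks in opposite order.

```python
from collections import deque

def reachable_deadlocks(m0, transitions):
    """Breadth-first search of the reachability (marking) graph.
    `m0` is the initial marking (tuple of token counts per place);
    each transition is a (pre, post) pair of per-place token vectors.
    Returns the reachable dead markings (no transition enabled)."""
    seen, frontier, deadlocks = {m0}, deque([m0]), set()
    while frontier:
        m = frontier.popleft()
        enabled = False
        for pre, post in transitions:
            if all(m[i] >= pre[i] for i in range(len(m))):
                enabled = True
                m2 = tuple(m[i] - pre[i] + post[i] for i in range(len(m)))
                if m2 not in seen:
                    seen.add(m2)
                    frontier.append(m2)
        if not enabled:
            deadlocks.add(m)  # dead marking: a deadlock state
    return deadlocks

# Places: P1-idle, P1-holds-A, P1-holds-both, P2-idle, P2-holds-B,
# P2-holds-both, lock-A-free, lock-B-free.
transitions = [
    ((1,0,0,0,0,0,1,0), (0,1,0,0,0,0,0,0)),  # P1 takes A
    ((0,1,0,0,0,0,0,1), (0,0,1,0,0,0,0,0)),  # P1 takes B
    ((0,0,1,0,0,0,0,0), (1,0,0,0,0,0,1,1)),  # P1 releases both
    ((0,0,0,1,0,0,0,1), (0,0,0,0,1,0,0,0)),  # P2 takes B
    ((0,0,0,0,1,0,1,0), (0,0,0,0,0,1,0,0)),  # P2 takes A
    ((0,0,0,0,0,1,0,0), (0,0,0,1,0,0,1,1)),  # P2 releases both
]
m0 = (1, 0, 0, 1, 0, 0, 1, 1)
deadlocks = reachable_deadlocks(m0, transitions)
```

The single dead marking found is the one where P1 holds lock A and P2 holds lock B, each waiting for the other, which is exactly the circular-wait state a reachability analysis is meant to expose.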
Optimization of Peer-to-Peer Overlay Network Topology Based on Topologically-critical Nodes' Protection
LI Liu,TANG Jiu-yang,ZHANG Zhang,XIAO Wei-dong,TANG Da-quan
Computer Science. 2011, 38 (8): 53-57. 
Connectivity is the premise of optimizing the topology of a P2P network. In order to ensure that the nodes of an unstructured P2P network remain connected and to enhance the survivability of the network topology, an effective distributed method was proposed which detects topologically-critical nodes and eliminates them appropriately; this strengthens the overlay network against partition and improves the system's fault tolerance significantly. This paper proposed the SCAM topologically-critical node discovery algorithm, which improves on the CAM algorithm. Theoretical analysis and simulation results show that SCAM reduces network consumption and improves discovery efficiency significantly while maintaining the same high level of accuracy.
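The abstract does not describe SCAM itself, but in graph terms a topologically-critical node of an overlay is essentially a cut vertex, a node whose failure partitions the network. A minimal centralized sketch using the classic DFS low-link method (the distributed SCAM protocol is not reproduced here):

```python
def articulation_points(adj):
    """Find cut vertices (articulation points) of an undirected graph
    with the DFS low-link method.  Removing such a node disconnects
    the overlay, making it 'topologically critical'."""
    disc, low, cut = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                      # back edge
                low[u] = min(low[u], disc[v])
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if parent is not None and low[v] >= disc[u]:
                    cut.add(u)                 # non-root cut vertex
        if parent is None and children > 1:
            cut.add(u)                         # root with 2+ subtrees
    for u in adj:
        if u not in disc:
            dfs(u, None)
    return cut

# Two triangles joined only at node "c": "c" is the critical node.
g = {
    "a": ["b", "c"], "b": ["a", "c"],
    "c": ["a", "b", "d", "e"],
    "d": ["c", "e"], "e": ["c", "d"],
}
critical = articulation_points(g)
```

A topology optimizer in the spirit of the paper would then add bypass links around such nodes rather than leave the overlay one failure away from partition.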
Group Key Management Scheme Based on Bilinear Pairing
ZHOU Jian,ZHOU Xian-wei,SUN Li-yan
Computer Science. 2011, 38 (8): 58-60. 
Current group key management schemes are mostly based on the GDH (Group key management based on Diffie-Hellman) protocol, and the subtrees of their key trees are limited. To solve this problem, this paper put forward a new scheme based on bilinear pairing (BPGKM), which increases the number of children of a node in the key tree from two to three. Therefore, the scheme supports larger networks on the premise of protecting network security, and its efficiency is better than that of current schemes, reducing the key computations by half.
Optimization in IMS Registration Procedure Using One-pass Interaction
WANG Xiao-jing,ZHANG Qi-zhi,FAN Bing-bing
Computer Science. 2011, 38 (8): 61-64. 
Two authentication procedures are essential when a mobile station registers to the IMS network: GPRS access authentication and IMS authentication. Both procedures perform the AKA protocol, and almost all of their operations are the same, which increases the registration delay and is therefore very inefficient. This paper investigated the performance of the IMS registration procedure and proposed an optimized registration scheme based on previous study. Through the interaction between the SGSN and the HSS, we complete the registration procedure between the mobile station and the network, which avoids wasting bandwidth when authentication fails. Data analysis illustrates that the performance of the optimized registration scheme is better than that of the primitive one.
Privacy-preserving Distance Measure of Different Coordinates
WANG Tao-chun,LUO Yong-long ,ZUO Kai-zhong,DU An-hong
Computer Science. 2011, 38 (8): 65-68. 
Coordinate transformation is a problem frequently encountered when certain surveying and mapping work is completed cooperatively. However, as their inputs relate to their own security and interests, the partners do not want to disclose them. This paper first proposed privacy-preserving coordinate transformation and designed a corresponding transformation protocol. Based on this protocol, a protocol for distance measure between different coordinate systems was developed, and the correctness, security and efficiency of both were analyzed. With privacy preserved, the problem of privately determining polygonal similarity was also solved and applied to the judgment of target positioning accuracy.
Modeling and Analysing Network Topology of Friend Relationships in Instant Messaging System
WANG Fu-lin,GAO Qiang,LIU Yan-heng,WANG Jian
Computer Science. 2011, 38 (8): 69-73. 
Instant messaging (IM) systems have become primary communication tools between people. In order to design a friendlier instant messaging system, it is necessary to understand how friend relationships are built and evolve in a real instant messaging system. This paper studied the characteristics of friend relationships in the network and the tendency of QQ users to make friends with other users, and considered the in-degree and out-degree of actual QQ user nodes. A new algorithm for IM topology was then proposed by considering the influence of both node attributes and the friending tendency of QQ users. We call it the Attribute-based Model (ABM), which differs from traditional rules in which node degree has top priority. The experiments show that ABM behaves better than the BA algorithm when friend relationships are considered.
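The BA baseline the abstract compares against is the Barabási-Albert preferential attachment model, in which each new node links to existing nodes with probability proportional to their degree. A minimal sketch of that baseline (the ABM variant, which adds node attributes and friending tendency on top of the degree rule, is not specified in the abstract and is not reproduced here):

```python
import random

def ba_graph(n, m, seed=42):
    """Barabási-Albert preferential attachment: each new node links
    to m existing nodes chosen with probability proportional to
    degree.  Returns an adjacency dict."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(m + 1)}
    # small complete core so every node starts with degree >= m
    for i in adj:
        for j in adj:
            if i != j:
                adj[i].add(j)
    # degree-weighted pool: node i appears deg(i) times
    targets = [i for i in adj for _ in adj[i]]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:          # m distinct neighbors
            chosen.add(rng.choice(targets))
        adj[new] = set(chosen)
        for t in chosen:
            adj[t].add(new)
            targets.extend([new, t])    # keep pool degree-weighted
    return adj

g = ba_graph(200, 2)
```

With n = 200 and m = 2 the graph has a 3-node core (3 edges) plus 2 edges per later node, i.e. 397 edges in total, and its degree distribution is heavy-tailed, which is the behavior an attribute-aware model like ABM would modify.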
Research of Multi-copy Routing in Delay Tolerant Networks
XU Jia,WANG Ru-chuan,XU Jie, LIAO Jun
Computer Science. 2011, 38 (8): 74-79. 
Since Delay/Disruption Tolerant Networks (DTNs) were proposed as the new evolution of MANET and WSN, communication between nodes without an end-to-end path has become possible. DTNs have great application prospects in intelligent highways, biological monitoring, satellite communications, rural communication, personal information exchange, etc. Routing is a challenging and absorbing research field in DTNs. In this paper, we classified and analyzed multi-copy routing, identified open research issues and intended to motivate new research and development in this area.
Differential Fault Analysis of Grain-v1
WANG Lu, HU Yu-pu, ZHANG Zhen-guang
Computer Science. 2011, 38 (8): 80-82. 
By analyzing a weakness in the design of the stream cipher Grain-v1, a differential fault attack was presented. The attack exploits the fact that the key stream equations of the first 17 clocks have comparatively low degrees. The attacker needs to inject faults into specified positions of the LFSR at the key stream generation stage. By differentiating, the attacker can directly acquire 17 linearly independent linear equations in the 80 bits of the initial state of the stream cipher. The attacker then only needs to guess 62 bits of the internal state, after which the whole internal state can be recovered. The proposed attack reduces the complexity to O(2^74.26). The result shows that the analyzed algorithm has security vulnerabilities, and the computational complexity of the attack is lower than the O(2^80) claimed by the designers.
Research on the Multi-granularity Semantic Annotation of Web Resource and its Application Technology
ZHU Jia-xian,BAI Wei-hua,LI Ji-gui
Computer Science. 2011, 38 (8): 83-87. 
The results returned by present Web search engines are all Web files, Web pages or links annotated based on keywords, and they do not support searching the inner information of those files. In order to support inner-file searching, this paper researched the multi-granularity semantic annotation of Web resources, consisting of the root node, branch nodes, leaf nodes and resource information metadata, to manage Web resources. On this basis, it also researched ontology-based search technology. Preliminary experiments and analysis show that the approach can enhance the efficiency of searching and obtaining information from multiform, large-scale Web resources.
New Identity-based Proxy Signature in the Standard Model
JI Hui-fang, HAN Wen-bao, LIU Lian-dong
Computer Science. 2011, 38 (8): 88-91. 
A proxy signature scheme allows an original entity to delegate its signing capability to another entity (the proxy) in such a way that the proxy can sign messages on behalf of the delegator. A new identity-based proxy signature scheme was proposed. The scheme is existentially unforgeable in the standard model against adaptive chosen message and identity attacks under the Computational Diffie-Hellman assumption. Compared with the known identity-based proxy signature schemes in the standard model, the scheme requires fewer operations.
Mobile Digital Library-oriented Context Sensitive Knowledge Recommendation
HU Mu-hai,CAI Shu-qin,TAN Ting-ting
Computer Science. 2011, 38 (8): 92-95. 
With the development of mobile communication, the knowledge service of the digital library is being implemented on mobile terminals, but existing digital libraries lack the capacity to be aware of context such as the reader's environment and scenario, and so cannot offer personalized knowledge recommendation services adapted to the current context and the reader. Therefore, a method to measure the reader's context sensitivity based on entropy was proposed, and the measure was used to compare different users, on the basis of which collaborative filtering can be improved. The experimental results show that the improved algorithm offers mobile digital libraries a new approach to increasing knowledge recommendation accuracy and context awareness, providing readers with personalized knowledge resources adapted to their current context.
Network Capacity of Multi-channel Multi-interface Hybrid Networks
CHEN Lin ,WEI Shu-tao, TAN Wen-an
Computer Science. 2011, 38 (8): 96-100. 
A basic problem of wireless ad hoc networks is network capacity, which reflects the data throughput of each node as the network scales and thus indicates the scalability of wireless networks. The network capacity of a multi-channel multi-interface hybrid network is constrained by the ability of interfaces to switch among different channels. By constructing a mathematical analysis model, the impacts of three conditions, interface-fixed, interface-constrained and interface-free, on network capacity were studied, and the upper and lower bounds of network capacity were obtained. The theoretical analysis shows that the switching ability of the interface has a great influence on the capacity of hybrid multi-channel networks, and that if the interface can switch freely among different channels, there is no capacity loss, which provides an important theoretical reference for network optimization.
Research about Evaluated Model for Creditability of Ad-hoc Network in Uncertain Environment
LIU Jian-jun
Computer Science. 2011, 38 (8): 101-105. 
This paper addressed the problem of evaluating the creditability of Ad-hoc networks using uncertainty theory. It analyzed, evaluated and tested the creditability level of an Ad-hoc network by uncertainty theory, proposed an uncertain comprehensive evaluation method, and established a credibility evaluation model for Ad-hoc networks. First, the weight of each evaluation index was characterized as an uncertain variable in the model, strengthening the rationality of the index weights. Then each index was evaluated by a single-index evaluation model. Finally, the creditability level of the whole Ad-hoc network was determined by the uncertain comprehensive evaluation method. A comprehensive evaluation result was obtained by calculating an example; the analysis of the result indicates that the model is effective, feasible, scientific and rational.
Secure Data Aggregation for Sensor Networks
ZHANG Peng , YU Jian-ping , LIU Hong-wei
Computer Science. 2011, 38 (8): 106-108. 
Secure data aggregation aims at achieving end-to-end confidentiality and authentication of the data collected and aggregated by sensors. End-to-end confidentiality is usually guaranteed by privacy homomorphic encryption. However, existing homomorphic authentication schemes are not multi-source and multi-message. To resolve the contradiction between end-to-end authentication and data aggregation, a secure data aggregation authentication scheme was proposed, and a secure data aggregation protocol was constructed from the proposed authentication scheme and a homomorphic encryption scheme. Security analysis shows that the proposed protocol achieves end-to-end confidentiality and authentication of the collected data.
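The abstract does not name the homomorphic encryption scheme used. A common choice in sensor-network aggregation is the additively homomorphic scheme of Castelluccia, Mykletun and Tsudik, where a ciphertext is the reading plus a pairwise key modulo M, so ciphertexts can be summed hop by hop and the sink decrypts with the sum of the keys. A toy sketch of that idea only (the paper's authentication layer is omitted):

```python
import random

M = 2**32  # modulus large enough to hold the aggregate sum

def encrypt(m, k):
    # Additively homomorphic: Enc(m1,k1) + Enc(m2,k2) = Enc(m1+m2, k1+k2)
    return (m + k) % M

def aggregate(ciphertexts):
    # performed in-network by intermediate nodes, without decrypting
    return sum(ciphertexts) % M

def decrypt(c, key_sum):
    return (c - key_sum) % M

rng = random.Random(0)
readings = [17, 25, 3]                        # sensor measurements
keys = [rng.randrange(M) for _ in readings]   # pairwise keys with the sink
cts = [encrypt(m, k) for m, k in zip(readings, keys)]
agg = aggregate(cts)                          # computed hop by hop
total = decrypt(agg, sum(keys) % M)           # sink recovers the sum
```

Note that this gives confidentiality of the aggregate only; the contradiction the paper addresses is adding end-to-end authentication on top of such a scheme.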
Difference Value Energy Detection in Cognitive Radio
TANG Cheng-kai,LIAN Bao-wang,GHANG Ling-ling
Computer Science. 2011, 38 (8): 109-110. 
In cognitive radio systems, cognitive users need to accurately determine spectrum usage in real time. Studying the performance of energy detection, it was found that the larger the noise fluctuation, the sharper the decline in detection performance, especially at low SNR. We used the maximum fluctuation range of the noise uncertainty to amend the energy detection, and gave a joint detection of the difference value and the energy, which uses the change of the difference value in every energy detection to amend the judgment result and obtain the final blended result. Simulation results show that, while preserving the advantages of traditional energy detection, the new detector can effectively improve cognitive users' accurate real-time detection of spectrum usage.
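The baseline being amended is the classic energy detector: average the received power over a sensing window and compare it with a threshold set from the noise floor. A minimal sketch of that baseline (the paper's difference-value correction is not reproduced; the threshold margin and signal model below are illustrative assumptions):

```python
import random

def energy_detect(samples, threshold):
    """Classic energy detector: average power over the sensing
    window, compared with a threshold derived from the noise floor."""
    energy = sum(x * x for x in samples) / len(samples)
    return energy > threshold, energy

rng = random.Random(1)
N, noise_var = 4096, 1.0
noise = [rng.gauss(0, noise_var ** 0.5) for _ in range(N)]
signal = [n + 1.0 for n in noise]  # toy: primary user adds a unit DC level

# Threshold set a margin above nominal noise power; the paper's point is
# that noise-power uncertainty forces this margin (and missed detections
# at low SNR) to grow, motivating the joint difference-value detector.
threshold = 1.2 * noise_var
busy_idle, _ = energy_detect(noise, threshold)   # channel idle
busy_sig, _ = energy_detect(signal, threshold)   # primary user present
```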
Design of Optimal Pilot-tones Based on Mean-square Error Estimate in MIMO-OFDM System
ZHAN Xing-xiang,LI Suo-ping, CHEN Wei-ru,SU Ying
Computer Science. 2011, 38 (8): 111-114. 
In channel estimation for MIMO-OFDM systems, the frequency-domain least squares (LS) method has very low implementation complexity while guaranteeing a certain error performance, but its error performance is not the best. After comparing many pilot-symbol-assisted algorithms, we designed a new pilot structure, starting from an arbitrary receiving antenna, which can improve channel estimation performance and reduce computational complexity. In the new pilot structure, the pilot phase sequences are mutually orthogonal and evenly distributed. In addition, unlike existing designs in which the pilot matrix entries have both real and imaginary parts, each element of the proposed pilot matrix is a real number or a purely imaginary number, which reduces computational complexity effectively. Theoretical derivation proves that the proposed structure is the optimal pilot and has the minimum mean square error (MSE). The simulation results show that the proposed algorithm is better than the traditional least squares method in mean square error, and the advantage becomes more obvious as the signal-to-noise ratio (SNR) increases. The proposed algorithm also achieves a lower symbol error rate (SER) than traditional least squares, with much better performance as the number of subcarriers increases at high SNR.
Fast and Effective LBG Initial Codebook Generation Algorithm
LIANG Yan-xia ,YANG Jia-wei , LI Ye
Computer Science. 2011, 38 (8): 115-116. 
Considering that the LBG algorithm depends on the initial codebook, a new LBG initial codebook generation algorithm was developed on the basis of the Greedy Tree Growing Algorithm (GTGA) and the Most Dispersed Codewords in Initialization (MDCI) algorithm. A fundamental codebook is generated by GTGA first, and then an initial codebook is produced from the fundamental codebook by the MDCI algorithm. The computation of the new algorithm is lower than that of the usual Split algorithm, and the run time decreases too. Compared with the two basic algorithms, both the distortion and the average spectral distortion are reduced.
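The "most dispersed codewords" idea can be illustrated with a farthest-point initializer: start near the training-set centroid, then repeatedly add the training vector farthest from the current codewords. This is only a stand-in for the MDCI step, assuming a maximin dispersion rule; the GTGA stage of the paper is not reproduced.

```python
def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def maximin_init(training, k):
    """Farthest-point ('most dispersed') initial codebook: seed with
    the training vector nearest the global centroid, then repeatedly
    add the vector farthest from all current codewords."""
    dim = len(training[0])
    centroid = tuple(sum(v[i] for v in training) / len(training)
                     for i in range(dim))
    codebook = [min(training, key=lambda v: dist2(v, centroid))]
    while len(codebook) < k:
        far = max(training,
                  key=lambda v: min(dist2(v, c) for c in codebook))
        codebook.append(far)
    return codebook

# Four well-separated 2-D clusters: a dispersed initializer picks one
# codeword per cluster instead of splitting one cluster twice, which
# is what makes the subsequent LBG iterations converge to a lower
# distortion than a poorly seeded run.
pts = [(0, 0), (0.1, 0), (10, 0), (10, 0.1),
       (0, 10), (0.1, 10), (10, 10), (9.9, 10)]
cb = maximin_init(pts, 4)
```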
Extension and Simulation in NS2 for Wireless Sensor Networks
ZHANG Xiao-qing,LI Chun-lin,ZHANG Heng-xi
Computer Science. 2011, 38 (8): 117-120. 
In this paper, the principle and mechanism of NS2 were analyzed, with emphasis on the split object model and discrete-event dispatch. Taking sensor networks as the key subject, the extension mechanism of NS2 for sensor networks was discussed, including the energy consumption model, mobile node extension and routing protocols. Based on this work, the code porting process and the simulation model of a wireless sensor network routing protocol in NS2 were introduced. Then, taking LEACH as an example, some simulation experiments were carried out. Network simulation and code extension in NS2 become easier with the help of this work.
Research on the Technical and Legal Issues of Collecting Evidence by Honeypot
WANG Jian-bong, HE Xiao-xing
Computer Science. 2011, 38 (8): 121-124. 
Honeypot is a new kind of defense technology which can actively defend against Internet attacks and collect important evidence about attackers. The techniques for collecting evidence by honeypot include Internet deception, port reaction, data capture, data analysis and data control, which help to improve the detection level and reactivity of a dynamic protection system. Running a honeypot may bring technical risks, so it is necessary to control them by choosing a low-risk honeypot, strengthening data capture and alarm functions, and adding link control and route control. As to the legal problems, including entrapment, privacy and liability, they can be addressed by prohibiting undue temptation, giving privacy notices and exercising cautious supervision.
GSPN Based Web Service Composition Performance Prediction Model
ZHU Jun,GUO Chang-guo, WU Quan-yuan
Computer Science. 2011, 38 (8): 125-129. 
The performance of an individual Web service is related to the user's location, because service interaction messages are easily influenced by the Internet environment. As a result, a service composition, which is implemented by coordinating interactions between different Web services, is also greatly impacted by the Internet. However, current modeling methods for service composition focus on the composition itself, without considering the impact of the Internet or branch execution probabilities. In order to describe the relationships among service interactions, the Internet environment and branch execution probabilities, this paper introduced a Web service performance prediction model based on generalized stochastic Petri nets. The model takes into consideration not only the situations of individual Web services but also the Internet environment and branch execution probabilities, and it is used to evaluate and optimize the performance of service compositions.
Configurable Hybrid Algorithm for Combinatorial Test Suite Generation
SUN Wen-wen, JIANG Jing, NIE Chang-hai
Computer Science. 2011, 38 (8): 130-135. 
Combinatorial testing is a testing method proved scientific and effective by practice, and one research priority is combinatorial test suite generation algorithms. The In-Parameter-Order (IPO) test generation strategy is representative among such algorithms; its advantages rely on the selectivity of horizontal expansion and the extensibility of the test suite. On the foundation of extracting the parameters which affect the implementation of IPO, a configurable framework was presented and implemented, and a new hybrid algorithm, IPO_GA, was presented which uses a Genetic Algorithm for horizontal expansion within the configurable IPO framework. Experiments and analysis were performed on each parameter of the configurable IPO framework, and IPO_GA was compared with some existing algorithms. Results show that IPO_GA performs better when the chromosome used in horizontal expansion is short; if the chromosome is too large, IPO_GA performs worse than expected.
MDA-based Method on Resource Modeling and Model Transformation of Real-time Software
JI Ming,HUANG Zhi-qiu , ZHU Yi , WANG Shan-shan , SHEN Guo-hua
Computer Science. 2011, 38 (8): 136-141. 
Model Driven Architecture (MDA) is a model-centric software development framework whose essence is meta-modeling and model transformation. In this paper, an MDA-based method for resource modeling and model transformation of real-time software was proposed. The method first abstracts, through meta-modeling, a MARTE meta-model containing resource information and a Priced Timed Automata (PTA) meta-model. Secondly, it uses the ATL model transformation language to transform instance models from the MARTE model to the PTA model; the transformation is performed by constructing transformation rules between the MARTE meta-model and the PTA meta-model. Finally, the result is formally verified with the formal tool UPPAAL. The case study shows the feasibility and effectiveness of the method, which can improve the reliability of resource modeling of real-time software.
Formal Semantics of Object Oriented Methods Based on Coalgebras
YU Shan-shan,LI Shi-xian,SU Jin-dian
Computer Science. 2011, 38 (8): 142-146. 
In view of the relatively weak mathematical foundations of object oriented methods, coalgebraic methods were used to analyze the formal semantics of object oriented methods from the perspectives of category theory and observation. Firstly, we presented coalgebraic descriptions of classes and objects, in which an abstract class is defined as a class specification and the classes satisfying the specification are described as coalgebras; each object belonging to a class is viewed as an element of the state space of the class coalgebra. We also used strong monad theory and assertions, respectively, to give parametric descriptions and semantic restrictions of objects' behaviors. Secondly, we used coalgebraic bisimulation to discuss the behavioral equivalence of objects, taking strong monads into consideration. Finally, we gave an example to demonstrate how to use the PVS tool to prove the consistency of a class specification and objects' behavioral relationships.
Research of Real-time OSGi Service Deployment Mechanism Based on RTSJ Event Mechanism
ZHANG Yi , CAI Wan-dong
Computer Science. 2011, 38 (8): 147-149. 
Abstract PDF(344KB) ( 339 )   
RelatedCitation | Metrics
OSGi specifications provide a dynamic service management framework, but they contain no real-time criteria or detailed real-time solutions. This paper presented a solution that provides a dynamic real-time framework for deploying real-time components and services by integrating RTSJ into OSGi. The paper first analyzed the impact of RTSJ on the OSGi framework, and then, because the existing OSGi event mechanism cannot satisfy real-time computing requirements under RTSJ, provided a new event mechanism based on RTSJ real-time threads. It resolves the problem of real-time switching between services in the RTSJ-based OSGi framework and ensures the time predictability of OSGi-based real-time embedded systems in dynamic environments.
Tableau Decision Algorithm for the Temporal Description Logic ALC-LTL
CHANG Liang , WANG Juan,GU Tian-long,DONG Rong-sheng
Computer Science. 2011, 38 (8): 150-154. 
Abstract PDF(484KB) ( 676 )   
RelatedCitation | Metrics
As a combination of the description logic ALC and the linear temporal logic LTL, the temporal description logic ALC-LTL not only offers considerable expressive power going beyond both ALC and LTL, but also keeps its satisfiability problem EXPTIME-complete; however, an efficient decision algorithm for ALC-LTL is still absent. Based on a combination of the Tableau algorithm for LTL and the reasoning mechanism provided by ALC, this paper presented a Tableau decision algorithm for ALC-LTL and proved that this algorithm is terminating, sound and complete. The algorithm enjoys excellent extensibility: when the ALC component of ALC-LTL is substituted by any other decidable description logic X, the algorithm can easily be modified to act as a Tableau decision algorithm for the corresponding temporal description logic X-LTL.
Migration Strategy of Mobile Agents for the Intelligent E-commerce Applications
GAN Gao-bin,KONG Xiang-yin,XIAO Guo-qiang
Computer Science. 2011, 38 (8): 155-160. 
Abstract PDF(582KB) ( 346 )   
RelatedCitation | Metrics
Considering factors that affect searching efficiency in intelligent e-commerce applications, such as quality of service and network load, this paper defined the concept of a time slice and presented a migration strategy model for mobile agents searching for goods. In this context, an approach was given for evaluating the QoS (quality of service) of businesses objectively and accurately. The request-response mode is utilized to effectively avoid migration failures caused by network and server overload, and to reduce the waste of network bandwidth and server resources.
SystemC-based Simulation Code Generation for AADL Software Component
MA Chun-yan,DONG Yun-wei, LU Wei,ZHU Xiao-yan
Computer Science. 2011, 38 (8): 161-164. 
Abstract PDF(472KB) ( 566 )   
RelatedCitation | Metrics
Currently, AADL is well applied in the mission-critical and safety-critical embedded field. In the design phase, how to simulate an AADL model is an urgent question. Model simulation can iteratively construct and refine the design model for early detection of problems, assuring the quality of the design model and thereby reducing the cost of system development. SystemC is a system description language for hardware-software co-simulation. The paper proposed a conversion technology, and implemented a conversion tool, from AADL software components to SystemC simulation code. Finally, the paper used a cruise control system as an example to explain the conversion technology and SystemC-based thread scheduling simulation. With these results, users can obtain SystemC-based simulation of AADL software components, including the interaction, execution time and sequencing between software components, as well as thread scheduling simulation, and can implement software-hardware co-simulation by combining an AADL hardware component simulation platform.
Indent Shape Based Approach for Mining Repeated Patterns of HTML Documents
ZHU Yan-xu,WANG Huai-min,SHI Dian-xi,YIN Gang,YUAN Lin, LI Xiang
Computer Science. 2011, 38 (8): 165-168. 
Abstract PDF(414KB) ( 338 )   
RelatedCitation | Metrics
Mining repeated patterns is the key to finding the encoding templates of Web pages, which is the basis for automatic Web data extraction and Web content mining. Existing approaches such as tree matching and string matching can detect repeated patterns with high precision, but their performance is still a challenge for massive Web page processing. To improve performance, the paper presented a novel indent-shape based approach for mining repeated patterns in HTML documents. Firstly, the approach defines the indent shape model, a simplified abstraction of an HTML document consisting of the indent and first tag of each line. Then, it detects repeated patterns indirectly by identifying tandem repeated waves in the indent shape. Extensive experiments show that the approach achieves better performance than existing approaches.
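The indent-shape abstraction can be illustrated in a few lines. The sketch below (an illustration, not the authors' implementation) maps each non-empty line to an (indent width, first tag) pair and then looks for tandem repeats in that sequence, standing in for the "repeated waves" of the paper:

```python
import re

def indent_shape(html: str):
    """Abstract an HTML document into its 'indent shape':
    one (indent-width, first-tag) pair per non-empty line."""
    shape = []
    for line in html.splitlines():
        stripped = line.lstrip()
        if not stripped:
            continue
        m = re.match(r"</?([a-zA-Z][a-zA-Z0-9]*)", stripped)
        tag = m.group(1).lower() if m else None
        shape.append((len(line) - len(stripped), tag))
    return shape

def tandem_repeats(shape, min_period=2):
    """Find tandem repeats (start, period, count) in the shape sequence."""
    repeats = []
    n = len(shape)
    for period in range(min_period, n // 2 + 1):
        i = 0
        while i + 2 * period <= n:
            count = 1
            while (i + (count + 1) * period <= n and
                   shape[i + (count - 1) * period: i + count * period] ==
                   shape[i + count * period: i + (count + 1) * period]):
                count += 1
            if count > 1:
                repeats.append((i, period, count))
                i += count * period
            else:
                i += 1
    return repeats
```

A repeated list structure (e.g. two identically indented `<li>…</li>` blocks) then shows up as a tandem repeat whose period is the number of lines per record.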
Feature Subgraph Selection Strategy Based on Category Information
WANG Gui-juan,YIN Jian,ZHAN Wei-xu
Computer Science. 2011, 38 (8): 169-170. 
Abstract PDF(465KB) ( 427 )   
RelatedCitation | Metrics
Selecting frequent subgraphs as features plays a very important role in graph dataset classification based on frequent subgraphs. A new feature selection strategy based on class information was presented, which selects unique and distinctive frequent subgraphs as feature subgraphs from the candidate subgraphs. The results show that, in compound data classification, this selection strategy is superior to those of AutoSVM and CEP in classification performance.
Design of an Academic Search Engine Based on the Scholar Community
CHEN Guo-hua,TANG Yong,PENG Ze-wu,LI Jian-guo
Computer Science. 2011, 38 (8): 171-175. 
Abstract PDF(698KB) ( 2712 )   
RelatedCitation | Metrics
Scholar communities and academic search engines are becoming more and more important for conducting research. We designed an academic search engine based on the scholar community. The functions of the academic search engine and the key problems it should prioritize were proposed, and roadmaps for solving these problems were illustrated. Finally, we presented the system architecture and discussed the scholar information integration algorithm, which aims to integrate the different academic information provided by different suppliers distributed across the Internet; the algorithm can effectively solve the problem of fetching academic information. Normal Chinese word segmentation components cannot handle Chinese names effectively; aiming at this problem, we proposed a Chinese name segmentation algorithm that solves it effectively and efficiently. A prototype of the academic search engine was implemented on the basis of the open source frameworks Nutch and HBase, which attests the validity of our design.
Multi-level Associative Classifier Based on Class FP-tree
LI Lin,SHAO Feng-jing,YANG Hou-jun,SUN Ren-cheng
Computer Science. 2011, 38 (8): 176-178. 
Abstract PDF(345KB) ( 336 )   
RelatedCitation | Metrics
Focused on the problem that traditional multi-level associative classification mining produces many redundant rules, which affects classification efficiency, this paper presented an improved multi-level associative classifier based on a Class FP-tree, named MACCF (Multi-level Associative Classifier based on Class FP-tree). It partitions the training set according to the class attribute of the records, uses the CLOSET algorithm to generate the complete candidate itemsets, reduces most of the redundancy through the proposed inside and outside pruning strategies, thereby improving accuracy, and adopts a cross-associative method to handle cross-level classification. Experimental results show its high classification efficiency.
Spatial Lines Clustering Algorithm Based on Connectivity
LIU Sheng, JI Gen-lin,LI Wen-jun
Computer Science. 2011, 38 (8): 179-181. 
Abstract PDF(317KB) ( 348 )   
RelatedCitation | Metrics
At present, most spatial clustering algorithms focus on spatial points without considering the spatial topological relations of spatial objects. Spatial line connectivity was defined by line intersection relations, and the algorithm SLCC was proposed for clustering spatial lines based on it. This algorithm, which is based on K-means, uses spatial line connectivity as the distance between lines when clustering. The experimental results show the algorithm is effective and efficient.
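A toy version of the connectivity idea can be sketched as follows. This is only an illustration under a strong simplification: connectivity is taken as pairwise segment intersection and clusters as connected components, whereas the paper's SLCC embeds connectivity into a K-means-style scheme.

```python
def _orient(p, q, r):
    """Sign of the cross product (q-p) x (r-p)."""
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def segments_intersect(s, t):
    """True if segments s and t properly intersect (general position;
    collinear touching cases are ignored in this sketch)."""
    p1, p2 = s
    p3, p4 = t
    return (_orient(p1, p2, p3) != _orient(p1, p2, p4) and
            _orient(p3, p4, p1) != _orient(p3, p4, p2))

def cluster_by_connectivity(segments):
    """Label segments by the connected component of the intersection graph,
    using a small union-find."""
    parent = list(range(len(segments)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            if segments_intersect(segments[i], segments[j]):
                parent[find(i)] = find(j)
    roots, labels = {}, []
    for i in range(len(segments)):
        labels.append(roots.setdefault(find(i), len(roots)))
    return labels
```

Two crossing segments end up with the same label; an isolated segment gets its own.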
Model of Mobile Real-time Transactions Combining Dynamic&Static Segmentation in Wireless Broadcast
DANG De-peng,XU Juan
Computer Science. 2011, 38 (8): 182-184. 
Abstract PDF(282KB) ( 314 )   
RelatedCitation | Metrics
Wireless data broadcasting makes possible the realization of large-scale application systems serving masses of mobile users. Existing data broadcasting systems do not support mobile real-time transactions. Considering the inherent limitations of wireless broadcast environments, the mobility of users, especially movement from one cell to another, and the deadlines of mobile real-time transactions, this paper gave a better transaction model for real-time data broadcasting: a segmented transaction model combining static and dynamic segmentation.
Semantic Relatedness-based Information Focusing Service Model and Approach
HUANG Hong-bin,XIONG Fang, DENG Su,ZHANG Wei-ming
Computer Science. 2011, 38 (8): 185-188. 
Abstract PDF(397KB) ( 349 )   
RelatedCitation | Metrics
Information service is still a hot topic in information sharing research. This paper presented the concept and mathematical model of semantic relatedness-based information focusing. Information focusing, as a new information service, implements information aggregation by using the semantic association relationships between information units. The paper presented an Ontology-based metadata model, gave mathematical models of information and information space, and presented the mathematics of information focusing. By giving a formal definition of semantic relatedness, the paper also defined the concept of information focusing based on semantic relatedness.
Hierarchical Abstraction Model Based on System-centered Ontology
WANG Nan,SUN Shan-wu,OUYANG Dan-tong
Computer Science. 2011, 38 (8): 189-192. 
Abstract PDF(368KB) ( 327 )   
RelatedCitation | Metrics
The hierarchical ontology class makes it possible to automatically construct a hierarchical model of the physical world by using the abstraction mapping relationships between ontology classes with different abstraction degrees. In this paper, we redefined Ontology Class as Object-based Ontology Class, and extended the definition of Flow Fragment to propose the system-centered concept of Flow-based Ontology Class. Behavior-indistinguishable flow fragments were defined. We discussed the hierarchical representation of the flow-based ontology class in two directions: the abstraction hierarchy of flow fragments and the aggregation of indistinguishable flow fragments. In the hierarchical structure, the abstraction mapping relationship between flow-based ontology classes was constructed to realize the hierarchical modeling process based on system-centered ontology. At the same time, the hierarchical relationship of flow-based ontology classes also provides a mechanism for system-centered knowledge sharing and reuse.
Research on Discernibility Matrix Knowledge Reduction Algorithm in Cloud Computing
QIAN Jin,MIAO Duo-qian, ZHANG Ze-hua
Computer Science. 2011, 38 (8): 193-196. 
Abstract PDF(344KB) ( 314 )   
RelatedCitation | Metrics
Knowledge reduction is one of the important research issues in rough set theory. Classical knowledge reduction algorithms can only deal with small datasets, while existing parallel knowledge reduction algorithms assume that all the data can be loaded into main memory and only execute reduction tasks concurrently, which is infeasible for large-scale data. Massive, high-dimensional data makes attribute reduction a challenging task. To solve this problem, the characteristics of discernibility matrix cells were analyzed, and a data-parallel discernibility matrix was designed in terms of the indiscernibility of the attributes and the MapReduce programming model. On this basis, a discernibility matrix knowledge reduction algorithm for large-scale data in cloud computing was proposed. The experimental results demonstrate that the proposed algorithm scales well and efficiently processes large-scale datasets on commodity computers.
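The map/reduce split of a discernibility matrix can be simulated in plain Python (no Hadoop). In the sketch below, which follows the classical Pawlak definitions rather than the authors' code, the map phase emits, for each pair of objects with different decision values, the set of condition attributes that discern them; the reduce phase collects those cells, from which, for example, the core can be read off:

```python
from itertools import combinations

def map_phase(records, cond_attrs, dec_attr):
    """Emit one discernibility cell per object pair with different decisions."""
    for (i, x), (j, y) in combinations(enumerate(records), 2):
        if x[dec_attr] != y[dec_attr]:
            cell = frozenset(a for a in cond_attrs if x[a] != y[a])
            if cell:
                yield (i, j), cell

def reduce_phase(pairs):
    """Collect the non-empty cells into a discernibility matrix (a dict)."""
    return dict(pairs)

def core_attributes(matrix):
    """Attributes in a singleton cell are indispensable: they form the core."""
    return {next(iter(c)) for c in matrix.values() if len(c) == 1}
```

In the real setting the record pairs would be partitioned across mappers; the per-pair computation is identical.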
Sequential Evolutionary Game Simulation Characterized by Learning and Emotion
SHAO Zeng-zhen,WANG Hong-guo,LIU Hong,CHENG Zhao-qian,YIN Hui-juan
Computer Science. 2011, 38 (8): 197-200. 
Abstract PDF(493KB) ( 338 )   
RelatedCitation | Metrics
Current research on game theory focuses on cooperation, competition and stability analysis; research on the impact of psychological factors on games is still rare. A novel model that introduces an emotion character was proposed in this paper to simulate the group game process more realistically. For each kind of agent, the model builds an individual learning mechanism and a strategy mutation mechanism based on the emotion character. Simulations show that the ability to learn raises an individual's average cooperation rate and improves the group's total payoff. Simulations also show that the emotion factor leads to more dramatic fluctuation while having little influence on the group's average total payoff.
Local Reconstruction and Global Preserving Based Semi-supervised Dimensionality Reduction Algorithm
WEI Jia,WEN Gui-hua,WANG Wen-feng,WANG Jia-bing
Computer Science. 2011, 38 (8): 201-204. 
Abstract PDF(380KB) ( 335 )   
RelatedCitation | Metrics
Considering that Local and Global Preserving Based Semi-Supervised Dimensionality Reduction (LGSSDR) is sensitive to the selection of the neighborhood parameter and inaccurate in setting the edge weights of the neighborhood graph, a new algorithm, Local Reconstruction and Global Preserving Based Semi-Supervised Dimensionality Reduction (LRGPSSDR), was proposed in this paper. The algorithm sets the edge weights of the neighborhood graph by minimizing the local reconstruction error, and preserves the global geometric structure of the sampled data set as well as its local geometric structure. Experimental results on the Extended YaleB and CMU PIE face databases demonstrate that LRGPSSDR outperforms other semi-supervised dimensionality reduction algorithms in classification performance.
Research on the Model of Helpfulness Factors of Online Customer Reviews
PENG Lan, ZHOU Qi-hai,QIU Jiang-tao
Computer Science. 2011, 38 (8): 205-207. 
Abstract PDF(417KB) ( 1077 )   
RelatedCitation | Metrics
The value of online customer reviews has been recognized by customers and online retailers, and research on review helpfulness has become an emerging field. Starting from reducing the risk in consumer decision-making, this study defined the concept of helpfulness of customer reviews based on the concept of perceived diagnosticity and built a model of the helpfulness factors of online customer reviews. Based on the theory of communication and persuasion, review ratings, review length, helpfulness percentage and experience of using the Internet are the important factors affecting the helpfulness of a review, and product type moderates review helpfulness.
Path Planning Method of Mobile Robot Based on Quantum Genetic Algorithm
LIU Chuan-ling,LEI Yan,YANG Jing-yu
Computer Science. 2011, 38 (8): 208-211. 
Abstract PDF(347KB) ( 401 )   
RelatedCitation | Metrics
To overcome the prematurity and low convergence speed of the genetic algorithm (GA) in robot path planning, a novel mobile robot path planning method based on the quantum genetic algorithm (QGA) was proposed, building on the artificial potential field and grid methods. The method uses the grid method to model the mobile robot's work environment, the artificial potential field to control the robot, the quantum genetic algorithm to select the optimal or sub-optimal path, and a double fitness evaluation function to evaluate paths so that the optimal or sub-optimal path is preserved into the next generation. Simulations show that the method's ability to find the best solution and its stability are greatly improved compared with GA and plain QGA, and that it has better convergence and can search a more extensive space. It is fit for solving complex optimization problems.
PCA Based Feature Selection Algorithm on Speaker-independent Speech Emotion Recognition
LUO Xian-hua,YANG Da-li,XU Ming-xing,XU Lu
Computer Science. 2011, 38 (8): 212-213. 
Abstract PDF(265KB) ( 374 )   
RelatedCitation | Metrics
A very important part of emotion recognition is how to select effective emotional features. The feature selection algorithms usually used can help boost recognition accuracy, but defects remain, such as weak theoretical robustness, high randomness and heavy computation. For these reasons, a new feature selection algorithm based on PCA (principal component analysis) was proposed: first the original feature set is transformed by PCA, then the weights of the features are analyzed using the transformation matrix, and finally the important features are chosen according to their weights. The experimental results show that the features selected by this method are important and contribute strongly to recognition accuracy.
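One plausible reading of such a PCA-based weighting (an illustration only, not the authors' exact procedure) is to weight each original feature by the magnitude of its loadings on the leading principal components, scaled by their explained variance:

```python
import numpy as np

def pca_feature_weights(X, k=2):
    """Weight each original feature by its absolute loadings on the top-k
    principal components, scaled by the components' variances."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)            # eigenvalues ascending
    order = np.argsort(vals)[::-1][:k]          # top-k components
    comps, var = vecs[:, order], vals[order]
    return np.abs(comps) @ var                  # one weight per feature

def select_features(X, n_keep, k=2):
    """Indices of the n_keep highest-weighted original features."""
    w = pca_feature_weights(X, k)
    return sorted(np.argsort(w)[::-1][:n_keep].tolist())
```

Unlike projecting onto the components themselves, this keeps the original (interpretable) features, which matches the abstract's goal of choosing important features by their weights.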
Attribute Reduction in Bayesian Rough Set Model for Binary Decision Problems
ZHOU Jie,MIAO Duo-qian
Computer Science. 2011, 38 (8): 214-216. 
Abstract PDF(383KB) ( 353 )   
RelatedCitation | Metrics
The Bayesian rough set model (BRSM), a hybrid development of rough set theory and Bayesian reasoning, can deal with many practical problems that cannot be effectively handled by Pawlak's rough set model. The prior probabilities of events are taken as a benchmark when determining the approximation regions in BRSM. In this paper, the equivalence between two current attribute reduction models in BRSM for binary decision problems, namely Slezak's and Ziarko's models, was proved. Furthermore, an associated discernibility matrix for binary decision problems in BRSM was proposed, with which the available attribute reduction methods based on discernibility matrices in Pawlak's rough set model can be transferred to BRSM.
Fuzzy Rough Set Based Soft Margin Support Vector Machines
LU Shu-xia,HU Li-sha,WANG Xi-zhao
Computer Science. 2011, 38 (8): 217-220. 
Abstract PDF(317KB) ( 323 )   
RelatedCitation | Metrics
This paper analyzed the advantages and disadvantages of fuzzy rough set based support vector machines (FRSVMs). FRSVMs are generated by modifying the constraints of hard margin support vector machines (SVMs) to obtain better generalization ability. Although they consider the inconsistency between the conditional attributes and decision attributes of training samples, FRSVMs construct an optimal hyperplane that must classify all training samples correctly, so they are sensitive to noise. Fuzzy rough set based soft margin support vector machines (C-FRSVMs) were proposed in this paper to overcome this shortcoming. C-FRSVMs use the Gaussian kernel function as their fuzzy similarity relation, consider the inconsistency between conditional attributes and decision labels, allow training samples to be misclassified while constructing the optimal hyperplane, and punish the degrees of misclassification in the original optimization problem. C-FRSVMs construct the optimal hyperplane by considering both the maximal margin and minimal misclassification error, and are therefore less sensitive to noise than FRSVMs. Experimental results show that the proposed approach obtains higher test accuracy than hard margin SVMs, soft margin support vector machines (C-SVMs) and FRSVMs, so C-FRSVMs achieve better generalization ability than FRSVMs.
Half P-sets (XF,X) and Noise Data Rejection-Application
LI Yu-ying
Computer Science. 2011, 38 (8): 221-225. 
Abstract PDF(421KB) ( 324 )   
RelatedCitation | Metrics
Half P-sets (half packet sets) are a set pair composed of an internal P-set XF (internal packet set XF) and a finite general set X; (XF,X) is a Half P-set. It has an internal dynamic characteristic. Using Half P-sets for rejecting noise data and obtaining target data, a recursion-rejection method for noise data by supplementing attributes was put forward in this paper. Concepts such as noise data, noise data integration and the F-data core were presented. The recursion method and recursion structure for the generation of noise data and the generation of F-data were given. The relation theorem between noise data integration and the F-data core was given, as well as the dependence and identification theorems of F-data and the recursion-rejection theorems of noise data. The identification criterion and recursion-rejection criterion for noise data were provided, together with an application of recursion-rejection of noise data. Half P-sets are not only a new theoretical and applied branch of the study of P-sets but also a new mathematical method for researching information systems with internal dynamic characteristics.
Research on Verification of RCWW Algorithm for Lot-sizing Planning Problem
HAN Yi,CAI Jian-hu,ZHOU Gen-gui, LI Yan-lai, MIAO Wei-nan
Computer Science. 2011, 38 (8): 226-231. 
Abstract PDF(496KB) ( 512 )   
RelatedCitation | Metrics
The Wagner-Whitin (WW) algorithm is a classical optimization heuristic for the lot-sizing planning (LSP) problem. It can provide the best production volumes of a product effectively for medium- and small-sized problems. The randomized cumulative WW (RCWW) algorithm is a modified WW algorithm that is well suited to solving LSP problems with general structure and multiple levels; its performance had been demonstrated before. Following the executive idea of the RCWW algorithm, this paper used the C programming language to implement it. Through computation on LSP problems with general structure, the effects of the RCWW algorithm were verified; errors in the literature were also found, and our understanding of the RCWW algorithm was shown to be correct.
News Text Event Extraction Driven by Event Sample
XU Xu-yang, LI Bi-cheng,ZHANG Xian-fei,HAN Yong-feng
Computer Science. 2011, 38 (8): 232-235. 
Abstract PDF(349KB) ( 1085 )   
RelatedCitation | Metrics
At present, popular event extraction methods are driven by event arguments or triggers, but these may cause an imbalance between positive and negative samples; furthermore, data sparseness arises when the corpus is small. This paper proposed an event extraction method driven by event samples. Firstly, features of event samples are extracted from news text sentences to compose the descriptions of candidate events. Secondly, event samples and non-event samples of news text are separated by binary classification. Finally, event samples are clustered by hierarchical and k-medoids clustering algorithms to complete event extraction. The method not only overcomes the positive-negative sample imbalance and the data sparseness problem, but also removes the limitation of pre-defined event types. Experimental results indicate that the proposed method is effective, improving the precision and recall of event extraction compared to traditional methods.
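The k-medoids step at the end can be sketched with a minimal PAM-style loop over abstract event feature vectors; this is a generic sketch under an arbitrary distance function, not the paper's implementation:

```python
import random

def k_medoids(points, k, dist, iters=50, seed=0):
    """Minimal PAM-style k-medoids: alternate assignment to the nearest
    medoid and re-selection of each cluster's most central member."""
    rnd = random.Random(seed)
    medoids = rnd.sample(range(len(points)), k)
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for i, p in enumerate(points):
            c = min(range(k), key=lambda m: dist(p, points[medoids[m]]))
            clusters[c].append(i)
        new = []
        for c, members in enumerate(clusters):
            if not members:
                new.append(medoids[c])
                continue
            # the medoid minimizes the summed distance to its cluster
            new.append(min(members, key=lambda i:
                           sum(dist(points[i], points[j]) for j in members)))
        if new == medoids:
            break
        medoids = new
    return medoids, clusters
```

Unlike k-means, the cluster representatives are always actual samples, which suits event extraction where a "mean event" has no meaning.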
Research on Information Fusion of Multiple-valued Intuitionistic Fuzzy Set
WANG Chao,ZHOU Qi-hai,LI Yan
Computer Science. 2011, 38 (8): 236-238. 
Abstract PDF(322KB) ( 320 )   
RelatedCitation | Metrics
The multiple-valued intuitionistic fuzzy set extends the traditional intuitionistic fuzzy set. Compared with the traditional intuitionistic fuzzy set, the multiple-valued intuitionistic fuzzy set has better ability to describe indefinite, imprecise and incomplete information. However, one of the important problems to be solved in applying the multiple-valued intuitionistic fuzzy set is how to fuse its multiple membership and non-membership criteria into a single membership and non-membership criterion and thus obtain a synthetic judgment criterion. This paper researched this question and constructed several fusion methods.
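One standard way to fuse several (membership, non-membership) pairs into a single pair is the intuitionistic fuzzy weighted averaging (IFWA) operator; the paper constructs its own fusion methods, so the following is only a representative example of what such a fusion looks like:

```python
def fuse_ifs(values, weights=None):
    """Fuse (membership, non-membership) pairs with the IFWA operator:
    mu = 1 - prod((1 - mu_i)^w_i),  nu = prod(nu_i^w_i)."""
    if weights is None:
        weights = [1.0 / len(values)] * len(values)
    mu_complement, nu = 1.0, 1.0
    for (m, n), w in zip(values, weights):
        mu_complement *= (1.0 - m) ** w
        nu *= n ** w
    return 1.0 - mu_complement, nu
```

The operator is idempotent: fusing identical pairs returns the same pair, which is the minimal sanity requirement for a synthetic judgment criterion.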
Semantic Calculation Framework Based on Multiscale Analysis
WANG Zhong-lin
Computer Science. 2011, 38 (8): 239-241. 
Abstract PDF(387KB) ( 319 )   
RelatedCitation | Metrics
The calculation of sentence semantic distance plays a basic role in many intelligent systems. Based on multi-scale analysis, a multi-level semantic distance calculation framework was proposed. All sentence pairs are first filtered by a word-level semantic distance algorithm, and then syntactic and semantic parsing are executed for the sentence pairs whose semantic distances are below the threshold. After the standard semantic frameworks are obtained, the core concepts in the frameworks are compared. The final semantic distances of the sentence pairs passing the second-level filter are obtained using an isomorphism-based semantic distance algorithm, which can dynamically adjust its weights. Experiments show that the total precision of the method reaches 73.3%, and for cases of higher relevance it reaches 91.4%, similar to the semantic-level algorithm.
Study on Multiclass Text Classification Algorithm Based on Hyper Ellipsoidal
QIN Yu-ping,CHEN Yi-di,WANG Chun-li,WANG Xiu-kun
Computer Science. 2011, 38 (8): 242-244. 
Abstract PDF(261KB) ( 409 )   
RelatedCitation | Metrics
A new multiclass classification algorithm based on hyper-ellipsoids was proposed. For every class, the smallest hyper-ellipsoid containing most samples of the class is constructed, which separates the class's samples from the others. A sample to be classified is assigned to the class of the hyper-ellipsoid it belongs to. The experimental results show that the algorithm achieves higher classification speed and precision compared with the hyper-sphere algorithm.
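The per-class ellipsoid idea can be approximated cheaply by fitting each class with its mean and covariance and assigning a sample to the class whose ellipsoid it falls deepest inside (smallest Mahalanobis distance). This is a simplification for illustration: the paper fits a minimal enclosing hyper-ellipsoid, not a covariance ellipsoid.

```python
import numpy as np

class EllipsoidClassifier:
    """Per-class ellipsoid via mean and covariance; classification by the
    smallest squared Mahalanobis distance to a class centre."""
    def fit(self, X, y):
        y = np.asarray(y)
        self.models = {}
        for c in set(y.tolist()):
            Xc = X[y == c]
            mu = Xc.mean(axis=0)
            cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
            self.models[c] = (mu, np.linalg.inv(cov))
        return self

    def predict(self, X):
        out = []
        for x in np.atleast_2d(X):
            out.append(min(self.models, key=lambda c:
                           (x - self.models[c][0]) @ self.models[c][1]
                           @ (x - self.models[c][0])))
        return out
```

Because each class is modeled independently, adding a new class only requires fitting one more ellipsoid, which is one reason such one-class-per-region schemes classify quickly.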
Approach for Feature Selection Based on (m+λ)-ES Evolutionary Strategy
FENG Lin,YUAN Yong-le
Computer Science. 2011, 38 (8): 245-247. 
Abstract PDF(279KB) ( 431 )   
RelatedCitation | Metrics
Feature selection is one of the hot spots in pattern recognition, data mining and related fields. A novel feature selection method, termed FeBES (Feature Selection Based on (m+λ)-ES Evolutionary Strategy), was proposed. Under an optimization-based evaluation of feature subsets and the (m+λ)-ES evolutionary strategy, a subset of features is selected by a genetic algorithm. Experimental results illustrate that FeBES is effective for feature selection.
Using SOM to Recognize Similar Individuals from Mobile Phone Data
CHEN Ai-xiang
Computer Science. 2011, 38 (8): 248-252. 
Abstract PDF(588KB) ( 342 )   
RelatedCitation | Metrics
Data collected from mobile phones has the potential to provide insight into the classification of individuals in a community. Using real data collected from 94 Nokia 6600 users, a training dataset consisting of the users' correspondence statistics and survey data was generated and fed to a SOM neural network. A Similar Subject Recognition System (SSRS) built on the SOM neural network was designed and realized. SSRS can classify a community's individuals effectively and lays the foundation for forecasting individuals' future behavior.
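A self-organizing map of the kind used here can be trained with a very small loop: each sample pulls its best matching unit (BMU) and that unit's grid neighbours toward itself, with learning rate and neighbourhood radius decaying over time. A minimal generic sketch, not the SSRS system itself:

```python
import math
import random

def train_som(data, rows, cols, iters=800, lr0=0.5, seed=0):
    """Minimal rectangular SOM: w maps each grid node (r, c) to a weight
    vector; the BMU and its neighbours are pulled toward each sample."""
    rnd = random.Random(seed)
    dim = len(data[0])
    w = {(r, c): [rnd.random() for _ in range(dim)]
         for r in range(rows) for c in range(cols)}
    sigma0 = max(rows, cols) / 2.0
    for t in range(iters):
        x = data[rnd.randrange(len(data))]
        frac = t / iters
        lr = lr0 * (1.0 - frac)                    # decaying learning rate
        sigma = max(sigma0 * (1.0 - frac), 0.5)    # decaying neighbourhood
        best = min(w, key=lambda n: sum((a - b) ** 2 for a, b in zip(w[n], x)))
        for n, wn in w.items():
            g = math.exp(-((n[0] - best[0]) ** 2 + (n[1] - best[1]) ** 2)
                         / (2 * sigma ** 2))
            for i in range(dim):
                wn[i] += lr * g * (x[i] - wn[i])
    return w

def bmu(w, x):
    """Best matching unit for sample x."""
    return min(w, key=lambda n: sum((a - b) ** 2 for a, b in zip(w[n], x)))
```

After training, individuals whose feature vectors map to the same or nearby grid nodes can be treated as "similar subjects".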
Cognitive Radio Decision Engine Based on Adaptive Ant Colony Optimization
LUO Yun-yue,SUN Zhi-feng
Computer Science. 2011, 38 (8): 253-256. 
Abstract PDF(377KB) ( 501 )   
RelatedCitation | Metrics
The cognitive decision engine is a key technology in cognitive communication systems. A cognitive engine can dynamically configure its working parameters according to changes in the communication environment and users' requirements. An adaptive ant colony optimization (AACO) cognitive radio engine was proposed to achieve the optimal configuration of working parameters. Based on the basic ant colony algorithm, the novel algorithm improves the path selection mechanism and adaptively adjusts the pheromone decay parameter. It can therefore ensure global search ability and convergence speed, and effectively avoid falling into local optima. Simulation results show that the AACO engine performs better than GA and ACO engines in different scenarios.
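Casting parameter configuration as an ant colony problem can be sketched as follows: each radio parameter has a list of candidate values, pheromone lives on (parameter, value) pairs, and ants build complete configurations by roulette-wheel selection. This is a generic ACO sketch under the assumption of a positive fitness to maximize; it does not reproduce the paper's adaptive decay mechanism.

```python
import random

def aco_configure(levels, fitness, ants=15, iters=60, rho=0.3, seed=0):
    """ACO sketch for parameter configuration. levels[p] lists the candidate
    values of parameter p; fitness(config) is assumed positive (maximized)."""
    rnd = random.Random(seed)
    tau = [[1.0] * len(vals) for vals in levels]   # pheromone per (p, value)
    best, best_fit = None, float("-inf")
    for _ in range(iters):
        for _ in range(ants):
            choice = []
            for p, vals in enumerate(levels):
                # roulette-wheel selection proportional to pheromone
                r, acc, k = rnd.uniform(0, sum(tau[p])), 0.0, 0
                for k, t in enumerate(tau[p]):
                    acc += t
                    if acc >= r:
                        break
                choice.append(k)
            fit = fitness([vals[k] for vals, k in zip(levels, choice)])
            if fit > best_fit:
                best_fit, best = fit, choice
        for p in range(len(levels)):
            # evaporate with a small floor (keeps exploration alive),
            # then reinforce the best configuration found so far
            tau[p] = [max((1 - rho) * t, 0.1) for t in tau[p]]
            tau[p][best[p]] += best_fit
    return [vals[k] for vals, k in zip(levels, best)], best_fit
```

In a radio engine the levels would be discrete choices such as modulation orders or power steps, and the fitness a weighted combination of throughput, power and bit-error objectives.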
Distance-based Adaptive Fuzzy Particle Swarm Optimization
LI Shuo-feng, LI Tai-yong
Computer Science. 2011, 38 (8): 257-259. 
Abstract PDF(257KB) ( 318 )   
RelatedCitation | Metrics
The classical Particle Swarm Optimization (PSO) neglects the differences among particles when updating a particle's velocity in a generation. To cope with this issue, a novel Distance-based Adaptive Fuzzy Particle Swarm Optimization (DAFPSO) was proposed in this paper. DAFPSO designs membership functions to tune the basic parameters used in updating a particle's velocity according to the distance between the current particle and the global best particle. Several classical benchmark functions were used to evaluate DAFPSO, and the experiments demonstrate its efficiency and effectiveness.
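The idea of treating particles differently by their distance to the global best can be sketched with a crisp (non-fuzzy) adaptation rule: far particles get a large inertia weight (explore), near ones a small weight (exploit). A simplified stand-in for DAFPSO's fuzzy membership functions, not the paper's algorithm:

```python
import random

def dafpso(f, dim, n=20, iters=300, lo=-5.0, hi=5.0, seed=0):
    """Minimizing PSO sketch with a distance-adapted inertia weight."""
    rnd = random.Random(seed)
    pos = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n):
            d = sum((a - b) ** 2 for a, b in zip(pos[i], gbest)) ** 0.5
            w = 0.4 + 0.5 * min(d / (hi - lo), 1.0)  # far -> explore
            for j in range(dim):
                vel[i][j] = (w * vel[i][j]
                             + 2.0 * rnd.random() * (pbest[i][j] - pos[i][j])
                             + 2.0 * rnd.random() * (gbest[j] - pos[i][j]))
                pos[i][j] = min(max(pos[i][j] + vel[i][j], lo), hi)
            v = f(pos[i])
            if v < pval[i]:
                pval[i], pbest[i] = v, pos[i][:]
                if v < gval:
                    gval, gbest = v, pos[i][:]
    return gbest, gval
```

The full DAFPSO would replace the single linear rule for `w` with fuzzy membership functions that tune several update parameters at once.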
Hyperspectral Image Classification Based on Adaptive Ridgelet Neural Network
SUN Feng-li,HE Ming-yi,GAO Quan-hua
Computer Science. 2011, 38 (8): 260-264. 
Abstract PDF(461KB) ( 437 )   
RelatedCitation | Metrics
The artificial neural network is an important tool for remote sensing classification. A new model, the adaptive ridgelet neural network, was presented based on the theory of multi-scale geometric analysis. On the basis of conventional algorithms, a novel adaptive PSO algorithm improved by a swarm density factor was proposed to train the constructed ridgelet network. To validate the performance of the ridgelet network, a hyperspectral image classification task was carried out on the feature-selected hyperspectral data set AVIRIS 92AV3C, using a mutual-information band-selection method. Numerical experiments show the novel PSO algorithm outperforms conventional PSO for ridgelet network training, especially in high-dimensional scenarios. Compared with RBF and SVM classifiers, the ridgelet neural network achieves clearly better accuracy in ground material classification and, under the same circumstances, always works with a simpler structure and smaller size.
Bag of Spatial Visual Words Model for Scene Classification
WANG Yu-xin,GUO He,HE Chang-qin,FENG Zhen,JIA Qi
Computer Science. 2011, 38 (8): 265-268. 
Abstract PDF(376KB) ( 396 )   
RelatedCitation | Metrics
An approach to recognizing scene categories by means of a novel model named bag of spatial visual words was proposed. Images are hierarchically divided into sub-regions, and the spatial visual vocabulary is constructed by grouping the low-level features collected from each corresponding spatial sub-region into a specified number of clusters using the k-means algorithm. To recognize the category of a scene, the visual vocabulary distributions of all spatial sub-regions are concatenated to form a global feature vector. The classification result was obtained using LIBSVM, and two kinds of features were used in the experiments: "V1-like" filters and PACK features.
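The concatenation of per-sub-region word histograms can be sketched directly. The illustration below (a generic spatial-pyramid sketch, not the authors' pipeline) assumes each local feature has already been quantized to a visual word id and carries normalized image coordinates:

```python
def spatial_bow(points, vocab_size, levels=2):
    """points: (x, y, word) triples with x, y in [0, 1). For each pyramid
    level l, split the image into a 2^l x 2^l grid, build one visual-word
    histogram per cell, and concatenate everything into one vector."""
    vec = []
    for l in range(levels):
        cells = 2 ** l
        hist = [[0] * vocab_size for _ in range(cells * cells)]
        for x, y, w in points:
            cx = min(int(x * cells), cells - 1)
            cy = min(int(y * cells), cells - 1)
            hist[cy * cells + cx][w] += 1
        for h in hist:
            vec.extend(h)
    return vec
```

The resulting vector has length `vocab_size * sum(4**l for l in range(levels))` and can be fed to an SVM such as LIBSVM, as in the paper.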
Frontal Face Recognition under Glasses Occlusion
LIN Qing,MA Wei-yang,SHAN Ping-ping,ZHAN Yong-zhao, LIANG Jun
Computer Science. 2011, 38 (8): 269-271. 
Abstract PDF(277KB) ( 373 )   
Aiming at the problem that eyeglasses occlusion severely degrades the recognition rate, this paper described a method to remove eyeglasses from frontal facial images. Firstly, the input facial image was reconstructed by PCA+ICA, and the region occluded by the eyeglasses was obtained by comparing the reconstructed facial image with the input. Secondly, an eyeglass-free facial image was synthesized by recursive error compensation. Finally, considering the specific characteristics of the synthesized image, an improved feature-weighting method was used for face recognition. Experimental results show that the method is simple, easy to implement, and can generate natural-looking facial images without eyeglasses. The average face recognition accuracy is 91%, outperforming traditional methods.
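The detect-then-compensate loop can be illustrated on synthetic 1-D signals standing in for face images: plain PCA replaces the paper's PCA+ICA reconstruction, and the residual threshold is an assumed value.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 64)
# Synthetic "faces": smooth 1-D signals; the training set is occlusion-free.
train = np.array([np.sin(2 * np.pi * (t + ph)) + 0.1 * rng.standard_normal(64)
                  for ph in rng.random(50)])
mean = train.mean(0)
U, S, Vt = np.linalg.svd(train - mean, full_matrices=False)
basis = Vt[:5]                        # leading PCA components

def reconstruct(x):
    """Project x onto the learned subspace (stand-in for PCA+ICA)."""
    return mean + basis.T @ (basis @ (x - mean))

face = np.sin(2 * np.pi * (t + 0.3))
occluded = face.copy()
occluded[20:30] = 2.0                 # "eyeglasses" occlusion block

# Detect occlusion as a large reconstruction residual (0.5 is an assumed
# threshold), then recursively fill the detected region with the
# reconstruction (error compensation, sketched).
mask = np.abs(occluded - reconstruct(occluded)) > 0.5
x = occluded.copy()
for _ in range(10):
    x[mask] = reconstruct(x)[mask]

err_before = np.abs(occluded - face)[20:30].mean()
err_after = np.abs(x - face)[20:30].mean()
```

The filled region converges toward the subspace's smooth interpolation, which is the analogue of the natural-looking eyeglass-free image.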
Camera Calibration Based on Projective Lines
WANG Zhi-liang,ZHANG Qiong,CHI Jian-nan,SHI Xue-fei
Computer Science. 2011, 38 (8): 272-274. 
Abstract PDF(260KB) ( 581 )   
For low-cost and wide-angle cameras, lens distortion should be calibrated. In traditional methods, the camera's intrinsic parameters and lens distortion are calibrated together. In this work, we presented a method to calibrate distortion and the camera's parameters without any calibration objects or 3D coordinates. Lens distortion was corrected based on the projective invariance of straight lines, and orthogonal straight lines were used to calibrate the camera's intrinsic parameters. The accuracy and practicality of the method were verified by experimental results on both simulated and real data.
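The straight-line invariance idea can be sketched as follows: points on straight lines are synthetically distorted, and the distortion coefficient is recovered as the value that makes the undistorted points collinear again. The polynomial radial model and the grid search below are assumptions; the paper's actual distortion model and optimization are not reproduced.

```python
import numpy as np

def undistort(pts, k):
    """Assumed polynomial radial model: x_u = x_d * (1 + k * r_d^2)."""
    r2 = (pts ** 2).sum(1, keepdims=True)
    return pts * (1 + k * r2)

def line_residual(pts):
    """RMS orthogonal distance of points to their total-least-squares line."""
    c = pts - pts.mean(0)
    return np.linalg.svd(c, compute_uv=False)[-1] / np.sqrt(len(pts))

k_true = 0.2
lines = []
for b in (-0.4, 0.2, 0.5):            # lines not through the distortion center
    x = np.linspace(-0.8, 0.8, 30)
    u = np.stack([x, 0.7 * x + b], 1)
    d = u.copy()                      # invert the model by fixed-point iteration
    for _ in range(100):
        d = u / (1 + k_true * (d ** 2).sum(1, keepdims=True))
    lines.append(d)

# Calibrate k as the value minimizing total straightness residual.
ks = np.linspace(0.0, 0.5, 501)
cost = [sum(line_residual(undistort(L, k)) for L in lines) for k in ks]
k_est = float(ks[int(np.argmin(cost))])
```

Note that a line through the distortion center stays straight under radial distortion, so off-center lines are needed to constrain k.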
AAM Facial Feature Localization Algorithm Based on Skin Model and Breadth-First Search
XUE Wei,LIANG Jing-dong,LIN Jin-xing
Computer Science. 2011, 38 (8): 275-277. 
Abstract PDF(261KB) ( 381 )   
This paper presented an improved AAM face detection algorithm based on a skin model and breadth-first search to accelerate initialization, taking full advantage of skin-color information. Based on the skin model, combined with morphological operations and breadth-first search, the algorithm first finds the face area and then gives a rough location of the centroid of the landmarks. This effectively narrows the search window, thereby reducing AAM search time. Experiments show that, compared with the standard AAM algorithm, the improved algorithm increases the detection rate and reduces the computational burden by more than 60%.
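The breadth-first-search step can be sketched as follows: given a binary skin mask (the skin model and morphological cleanup are assumed to have run already), BFS extracts the largest connected component, whose bounding box narrows the AAM search window and whose centroid roughly initializes the landmarks.

```python
from collections import deque

import numpy as np

def largest_skin_region(mask):
    """BFS over a binary skin mask; return the largest 4-connected
    component's bounding box and centroid (rough landmark centroid)."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    best = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                comp, q = [], deque([(sy, sx)])
                seen[sy, sx] = True
                while q:                          # breadth-first traversal
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    ys, xs = zip(*best)
    bbox = (min(ys), min(xs), max(ys), max(xs))
    centroid = (sum(ys) / len(ys), sum(xs) / len(xs))
    return bbox, centroid

mask = np.zeros((40, 40), dtype=bool)
mask[10:30, 12:28] = True             # "face" blob
mask[2:5, 2:5] = True                 # small distractor blob
bbox, centroid = largest_skin_region(mask)
```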
Improved Two-dimensional Maximum Entropy Image Thresholding and its Fast Recursive Realization
ZHANG Xin-ming,ZHANG Ai-li,ZHENG Yan-bin,SUN Yin-jie, LI Shuang
Computer Science. 2011, 38 (8): 278-283. 
Abstract PDF(602KB) ( 474 )   
The traditional two-dimensional (2-D) maximum entropy (ME) thresholding method suffers from poor segmentation performance, mainly owing to its approximate processing. A fast and improved 2-D ME image thresholding method was therefore presented in this paper. Firstly, a 2-D histogram with an improved neighborhood mask was constructed, and the ME method was applied to this histogram to obtain a more suitable threshold. Then, some values of the object and background areas in the main-diagonal district of the 2-D histogram were calculated precisely to obtain better segmentation performance. Finally, the 2-D histogram was analyzed to extract its features, two theorems were proved, and the features and theorems were employed to derive a new recursive approach for searching the best threshold vector, reducing the computational complexity. Experimental results show that, compared to current 2-D ME thresholding methods, the proposed method not only achieves more accurate segmentation and stronger noise robustness, but also requires much less memory and runs much faster, at around 0.04 second.
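A brute-force version of 2-D maximum-entropy thresholding over a (gray level, 3x3 neighborhood mean) histogram illustrates the criterion the recursion accelerates; the paper's improved neighborhood mask and recursive formulation are not reproduced, and the 16-level quantization is an assumption.

```python
import numpy as np

def two_d_max_entropy(img, levels=16):
    """Brute-force 2-D maximum-entropy threshold (s, t): maximize the sum
    of the background-quadrant and object-quadrant entropies. The paper's
    recursive formulation avoids recomputing these sums for every (s, t)."""
    g = (img * (levels - 1)).astype(int)
    pad = np.pad(g, 1, mode='edge')
    nb = sum(pad[dy:dy + g.shape[0], dx:dx + g.shape[1]]
             for dy in range(3) for dx in range(3)) // 9   # 3x3 mean image
    hist = np.zeros((levels, levels))
    np.add.at(hist, (g, nb), 1)
    p = hist / hist.sum()
    eps = 1e-12
    best, best_st = -np.inf, (0, 0)
    for s in range(1, levels):
        for t in range(1, levels):
            pb, po = p[:s, :t].sum(), p[s:, t:].sum()  # background/object mass
            if pb < eps or po < eps:
                continue
            hb = -np.sum(p[:s, :t] / pb * np.log(p[:s, :t] / pb + eps))
            ho = -np.sum(p[s:, t:] / po * np.log(p[s:, t:] / po + eps))
            if hb + ho > best:
                best, best_st = hb + ho, (s, t)
    return best_st

rng = np.random.default_rng(0)
img = np.full((64, 64), 0.2)
img[20:44, 20:44] = 0.8               # bright object on dark background
img = np.clip(img + 0.03 * rng.standard_normal(img.shape), 0, 1)
s, t = two_d_max_entropy(img)
```

On this bimodal test image the selected threshold vector falls between the two clusters along both axes.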
Implementation and Optimization of the FFT Using OpenCL on Heterogeneous Platforms
LI Yan,ZHANG Yun-quan, WANG Ke,ZHAO Mei chao
Computer Science. 2011, 38 (8): 284-286. 
Abstract PDF(366KB) ( 1204 )   
Fourier methods have revolutionized fields of science and engineering, from astronomy to medical imaging, from seismology to spectroscopy. A fast Fourier transform (FFT) is an efficient algorithm to compute the discrete Fourier transform (DFT) and its inverse. OpenCL (Open Computing Language) is a new framework for writing programs that execute across heterogeneous platforms consisting of CPUs, GPUs, and other processors, and it provides parallel computing using task-based and data-based parallelism. In this paper, we first implemented the FFT with OpenCL, then tested and analyzed its performance on heterogeneous multi-core platforms such as the Cell and NVIDIA GPUs. The performance we achieved is about 65% of the Cell SDK and 75% of CUDA CUFFT, and needs to be improved in the near future. Furthermore, by exploiting hardware features, we achieved unprecedented performance of nearly 140% of CUFFT on a Fermi GPU when the FFT size is small.
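The butterfly structure that maps naturally onto OpenCL work-items can be illustrated with an iterative radix-2 FFT in plain Python (the actual OpenCL kernels are not reproduced here); each pass of the outer loop corresponds to one kernel launch over independent butterflies.

```python
import numpy as np

def fft_radix2(x):
    """Iterative radix-2 Cooley-Tukey FFT. Within one pass all butterflies
    are independent, which is what an OpenCL mapping exploits: one
    work-item per butterfly, one kernel launch per pass."""
    x = np.asarray(x, dtype=complex)
    n = x.size
    bits = n.bit_length() - 1
    rev = np.zeros(n, dtype=int)      # bit-reversal permutation
    for i in range(n):
        b, r = i, 0
        for _ in range(bits):
            r = (r << 1) | (b & 1)
            b >>= 1
        rev[i] = r
    x = x[rev]
    size = 2
    while size <= n:
        half = size // 2
        w = np.exp(-2j * np.pi * np.arange(half) / size)   # twiddle factors
        for start in range(0, n, size):
            a = x[start:start + half]
            b = x[start + half:start + size] * w
            x[start:start + half], x[start + half:start + size] = a + b, a - b
        size *= 2
    return x

sig = np.cos(2 * np.pi * 3 * np.arange(16) / 16)
out = fft_radix2(sig)
```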
NFS Prefetch Mechanism Based on Given I/O Pattern
LUO Zhi-jun,XU Jian-wei,ZHENG Gui,LIU Xin-chun, SHAO Zong-you,NIE Hua
Computer Science. 2011, 38 (8): 287-290. 
Abstract PDF(409KB) ( 485 )   
Network File System (NFS) has been widely used in many domains for its stability and ease of use. To improve the I/O performance of the NFS server, we presented an NFS Pseudo-Random Prefetch (NPRP) mechanism based on given I/O patterns. Experimental results show that the NPRP mechanism substantially improves the I/O performance of the NFS server for applications with given patterns.
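The idea of prefetching against a declared access pattern can be sketched as follows; the class name, the stride-only pattern, and the LRU cache below are all hypothetical simplifications of the NPRP mechanism.

```python
from collections import OrderedDict

class PatternPrefetcher:
    """Sketch of pattern-directed prefetching (names hypothetical): the
    application declares its access pattern (a fixed block stride here),
    and the server prefetches the next `depth` blocks into an LRU cache."""

    def __init__(self, stride, depth=4, cache_size=32):
        self.stride, self.depth = stride, depth
        self.cache = OrderedDict()
        self.cache_size = cache_size
        self.hits = self.misses = 0

    def _insert(self, block):
        self.cache[block] = True
        self.cache.move_to_end(block)             # LRU: mark most recent
        while len(self.cache) > self.cache_size:
            self.cache.popitem(last=False)        # evict least recent

    def read(self, block):
        if block in self.cache:
            self.hits += 1
        else:
            self.misses += 1
            self._insert(block)                   # demand fetch
        for i in range(1, self.depth + 1):        # prefetch ahead per pattern
            self._insert(block + i * self.stride)

pf = PatternPrefetcher(stride=3)
for b in range(0, 300, 3):                        # strided sequential workload
    pf.read(b)
hit_rate = pf.hits / (pf.hits + pf.misses)
```

On a workload that matches the declared pattern, only the first access misses; every later block was prefetched.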
Dynamic Analysis Model of Green Network Storage Systems
GE Xiong-zi,FENG Dan,LU Cheng-tao,JIN Chao
Computer Science. 2011, 38 (8): 291-296. 
Abstract PDF(597KB) ( 310 )   
This paper analyzed and studied the dynamic behavior rules of power management and control in complex network storage systems. An Ideal Energy-Efficient Data Placement (IEEDP) model for distributed storage systems was proposed based on analysis of the disk power model in network storage systems. On the basis of IEEDP, by combining data migration and data replication techniques, a 2-D cellular automata model named the Green Network Storage System model (GNSSCA) was proposed. Simulation results show that complicated temporal and spatial behaviors evolve from the adjustment of local cells. The overall number of replicas in the system increases as the load becomes heavier, finally tending to a stable state. Moreover, when the load is light, the entropy of the request-queue length approximately follows a power-law distribution.
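A toy version of a load-driven replication rule illustrates the cellular-automaton setup: local overload spawns a replica in a neighbor cell, and idle cells shed replicas. The update rule, thresholds, and decay below are invented for illustration and are not the paper's GNSSCA rules.

```python
import numpy as np

def evolve(load, steps=50, grid=16, seed=0):
    """Toy 2-D CA sketch: a cell whose request load exceeds twice its
    replica count spawns a replica in its least-replicated neighbor
    (data replication); idle cells decay toward one replica (migration/
    reclamation). All rules and thresholds are assumed."""
    rng = np.random.default_rng(seed)
    rep = np.ones((grid, grid), dtype=int)
    for _ in range(steps):
        req = rng.poisson(load, (grid, grid))     # per-cell request load
        overloaded = req > 2 * rep
        for y, x in zip(*np.where(overloaded)):
            nbrs = [((y + dy) % grid, (x + dx) % grid)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))]
            ny, nx = min(nbrs, key=lambda c: rep[c])
            rep[ny, nx] += 1                      # replicate locally
        idle = req == 0
        rep[idle] = np.maximum(rep[idle] - 1, 1)  # decay on idle cells
    return rep.sum()

light, heavy = evolve(load=1.0), evolve(load=6.0)
```

Even this toy rule reproduces the qualitative behavior reported: the replica count settles higher under heavier load.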