Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 39 Issue 5, 16 November 2018
Discussion on the Intelligent Vehicle Technologies
Computer Science. 2012, 39 (5): 1-8. 
Nowadays, environmental pollution, traffic safety and congestion are truly severe problems affecting the entire world. The intelligent vehicle (IV) is an attempt to move towards a new traffic paradigm, in which driving becomes an enjoyable experience, cars no longer crash, and traffic congestion is drastically reduced. Therefore, the development of the new generation of intelligent vehicles has become one of the most important strategic objectives in major developed countries. In this paper, the intelligent vehicle was creatively defined as a car with electric drive and intelligent control. Five basic functional features should be realized in an intelligent vehicle: vehicle-to-vehicle (V2V) interaction, vehicle-to-people (V2P) interaction, vehicle-to-road (V2R) interaction, vehicle-to-network (V2N) interaction and energy saving. The four interactions prescribe a cooperative system between intelligent vehicles and their surroundings, while energy saving focuses on a clean and economical energy management system. Meanwhile, novel theories and technologies for the development of intelligent vehicles were discussed in this paper. Finally, an autonomous driving simulation experiment was performed with miniature intelligent vehicles, and the future of the intelligent vehicle industry was envisioned.
Survey on Network Reliability Evaluation Methods
Computer Science. 2012, 39 (5): 9-13. 
With the development of network technology, network reliability evaluation has become a hot research topic in recent years. But traditional reliability evaluation methods are hard to apply to network evaluation due to the complexity, dynamics and multi-state characteristics of a network. This paper divided the related literature into three main types: connectivity reliability, capacity reliability and performance reliability, and introduced the new application reliability, which builds on the three types of reliability and considers the service support capacity of a network. It then reviewed the research progress and the existing problems. Finally, directions for future research were discussed.
Research Advances and Prospect of DNA Computing by Self-assembly
Computer Science. 2012, 39 (5): 14-18. 
Recently, many researchers have demonstrated that the two-dimensional self-assembly model has universal computational power, and DNA computing by self-assembly has been proved to be scalable. With the development of molecular biology techniques, DNA computing by self-assembly has promising prospects, with growing innovations and applications in nano-science, optimization, cryptography, medicine and other areas. This paper gave a comprehensive introduction to the current status of research on the molecular structures, mathematical models, complexity and error analysis of DNA computing by self-assembly. The open problems and prospects of DNA computing by self-assembly were also analyzed.
Research Progress of Affect Detection in Interdisciplinary Perspective
Computer Science. 2012, 39 (5): 19-24. 
Affect detection in human-computer interaction environments is an interdisciplinary field related to psychology, computer science, etc. The basic affective theories were presented in the paper, which provide engineers with the necessary background in psychology. Then, according to the differences in the original data, the existing methods and techniques were introduced. Finally, a summary was given and the shortcomings and development trends of recent research were presented.
Research on Model of Resource Homeostasis in Constellation Satellite Communication System
Computer Science. 2012, 39 (5): 25-30. 
Resource homeostasis is very important in constellation satellite communication systems. In order to fulfill communication tasks efficiently, macroeconomic theory was applied to resource homeostasis. A resource homeostasis mathematical model based on a dummy factory was presented, and a capability evaluation system, a consumption evaluation system and a cost evaluation system were discussed in detail to improve the effectiveness of resource usage evaluation; their feasibility and validity were demonstrated with an evaluation of Iridium as the example. By building and solving the resource homeostasis model, the relationship between the inner and exterior circumstances of the system, users' requirements, resource usage, and time and space was obtained, and the requirements for reaching Pareto optimality were also derived. Research on resource homeostasis lays the foundation for solving the resource allocation problem in constellation satellite communication systems.
Random-and-Age-based Replication Maintenance Strategy
Computer Science. 2012, 39 (5): 31-35. 
Replication is one of the main technologies to improve data availability and data access efficiency in structured P2P networks. Though ARMS can choose stable nodes, it also causes an imbalanced distribution of replicas. In order to choose stable nodes while avoiding too many replicas being saved by one node, based on an analysis of the disadvantages of ARMS, this paper presented the random-and-age-based replication maintenance strategy (RARMS). This strategy adds a random factor to ARMS, so that replicas remain stable and are distributed uniformly within an area. Theoretical analysis and experimental verification demonstrate that this strategy combines the advantages of the random neighbor selection strategy and ARMS, and achieves the desired effect above. In addition, after analyzing the choice of the random factor s, this paper concluded that setting s equal to L/r gives better results.
Optimization of Network Resource Allocation Based on Genetic Algorithm
Computer Science. 2012, 39 (5): 36-39. 
As the next-generation network architecture becomes more and more complex and applications become more diverse, how to improve network performance becomes a major problem that must be solved. One important and effective approach is to rationally allocate and optimize network resources. For multi-service networks, this paper proposed a new optimization model which targets network resource leveling under QoS restrictions and balances the network traffic distribution by optimizing the allocation of network bandwidth and buffers with an improved genetic algorithm, so as to improve network performance. A method for terminating the iteration of the genetic algorithm based on a threshold of the population stability coefficient was given by analyzing the variation trend of the population stability coefficient. Experiments show that the method effectively improves the efficiency of the algorithm.
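A minimal sketch of this termination rule, assuming one plausible definition of the population stability coefficient (the fraction of individuals whose fitness lies within a small tolerance of the current best); the objective, tolerance and threshold are illustrative, not taken from the paper:

```python
import random

def fitness(x):
    # Illustrative objective standing in for the paper's QoS-constrained
    # resource-leveling cost (lower is better).
    return sum((xi - 0.5) ** 2 for xi in x)

def stability_coefficient(pop, tol=1e-3):
    # Assumed definition: fraction of individuals whose fitness is within
    # `tol` of the best individual's fitness.
    fits = [fitness(p) for p in pop]
    best = min(fits)
    return sum(f - best < tol for f in fits) / len(pop)

def evolve(pop_size=30, dim=5, threshold=0.9, max_gen=200):
    pop = [[random.random() for _ in range(dim)] for _ in range(pop_size)]
    gen = 0
    for gen in range(max_gen):
        # Binary tournament selection followed by per-gene mutation.
        pop = [[g if random.random() < 0.9 else random.random()
                for g in min(random.sample(pop, 2), key=fitness)]
               for _ in range(pop_size)]
        # Terminate once the population has stabilized around one solution.
        if stability_coefficient(pop) >= threshold:
            break
    return min(pop, key=fitness), gen

best, generations = evolve()
print(generations, round(fitness(best), 4))
```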
Self-adjusting Distance MDS Algorithm for WSNs
Computer Science. 2012, 39 (5): 40-43. 
Localization is one of the key technologies in wireless sensor networks (WSNs). Considering the characteristics of WSNs, this work began with a thorough investigation of the existing MDS-MAP localization algorithm. On this basis, we proposed a self-adjusting distance localization algorithm (SA-MDS). The basic idea behind SA-MDS is to estimate the distance between two-hop nodes with three approaches, followed by self-adjustment of the estimated distances between nodes, in order to improve positioning accuracy. Our simulation results show that the SA-MDS algorithm improves the accuracy of node localization significantly.
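For reference, a sketch of the classical MDS step that MDS-MAP and SA-MDS build on (double centering of squared distances followed by eigendecomposition); here the pairwise distances are exact, whereas the paper's contribution lies in estimating and self-adjusting multi-hop distances:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Recover relative coordinates from a pairwise distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]              # top-`dim` eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Toy example: 4 nodes on a unit square; MDS recovers the layout up to
# rotation and translation.
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
print(classical_mds(D))
```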
Application Research of Data Prediction in Wireless Sensor Network Based on Neural Network
Computer Science. 2012, 39 (5): 44-47. 
A wireless sensor network is composed of a huge number of sensor nodes and has been widely used in many fields. Neural networks were introduced into the wireless sensor network, and a neural network model was constructed from neurons describing the wireless sensor data. Data gathering, fusion and extraction in a wireless sensor network were realized based on this wireless sensor neural network model and an improved traditional neural network model. Considering how different application types influence the data output, a prediction model was established by selecting the main factors. An experimental prototype for fire in one region was built, using existing fire data as training samples; after the network converged, the probability of fire occurrence was forecast. The experimental results show that data prediction in wireless sensor networks based on neural networks is a feasible and effective method.
Distributed IP Address Lookup Architecture Based on SD-Torus Networks
Computer Science. 2012, 39 (5): 48-52. 
Aimed at the scalability of IP routers, especially the FIB (Forwarding Information Base) size limit of IP routers, a scalable direct network, the SD-Torus (Semi-Diagonal Torus) network, was proposed. The topological properties of the SD-Torus network were discussed and a load-balanced routing algorithm was presented. By adopting a novel mapping algorithm for the distributed FIB, the entire FIB is split into sub-tables and mapped onto each node and its neighbors, which guarantees the communication latency between nodes for distributed IP address lookup and decreases the memory footprint of the FIB. The proposed schemes can be applied to high-performance IP address lookup.
Research on Sequential PLD Security Vulnerability Detection Method
Computer Science. 2012, 39 (5): 53-56. 
Due to the wide application of PLDs in electronic devices, vulnerability detection for PLDs has become a challenging subject in the information security field. By analyzing the forms in which PLD security vulnerabilities exist, a security vulnerability detection method based on state transition diagrams was proposed. Using off-line reverse analysis and on-system reverse analysis technology, this method unifies the detection ideas and is suitable for detecting different PLD security vulnerabilities. Detection algorithms were then proposed on the basis of the existence forms. Finally, the effectiveness of the detection ideas and algorithms was verified by simulation.
Quick P2P Search Algorithm Based on Bloom Filter and Probabilistic Distribution Queue
Computer Science. 2012, 39 (5): 57-61. 
The strategy for searching resources is a research hotspot in unstructured peer-to-peer networks. It is hard to simultaneously optimize response time, query hits and coverage rate for resource location in unstructured P2P networks. This paper presented a quick search algorithm called BFPDQ (Bloom Filter and Probabilistic Distribution Queue), which is based on the probabilistic distribution queue and Bloom filter technology. BFPDQ is mainly used for acyclic random networks. Information about resources and requests is expressed with Bloom filters. Meanwhile, performance information about the underlying network's paths is used to guide the forwarding strategy of upper layers. PDQ (probabilistic distribution queue) uses distributed queues to substitute for traditional walkers in searching for resources. The requester coordinates the direction and depth of those queues and aggregates their resource location messages. Simulation results show that BFPDQ can decrease redundant information while achieving a significant reduction in response time.
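A minimal Bloom filter sketch of the kind used to summarize resource and request information; the parameters m and k are illustrative:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: resource names are inserted, and membership
    queries may return false positives but never false negatives."""
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = 0

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))

bf = BloomFilter()
bf.add("movie.avi")
print("movie.avi" in bf, "song.mp3" in bf)   # True False (w.h.p.)
```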
Cloud Data Storage Security and Privacy Protection Policies under IoT Environment
Computer Science. 2012, 39 (5): 62-65. 
The Internet of Things implements information intelligence by relying on the powerful data processing capability of cloud computing. However, users cannot fully trust cloud computing to manage their data and services. To address cloud data security issues in the IoT environment, cloud data storage security and privacy protection policies were proposed, in order to ensure the accuracy and privacy of user data in the cloud. The experimental results show that the scheme is effective and flexible, and can resist Byzantine failures, malicious modification of data, and even server collusion attacks.
Research on Scale-free Network Model Based on Coupling Coefficients
Computer Science. 2012, 39 (5): 66-68. 
The classic scale-free network model selects some nodes with a certain probability in the global scope and then preferentially establishes fixed connections among them, but as we know, this is difficult to do in reality. To solve this problem, the paper established a scale-free network model by introducing coupling coefficients and an attracting factor into the BA scale-free network, and gave the degree distribution of the evolved model through theoretical analysis. The analysis proves that this model has more obvious features of a scale-free network. The simulation results also show that the power-law degree distribution has better stability and a wider range of applicability.
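A rough sketch of BA-style growth with an additive attractiveness term; the abstract does not specify the coupling coefficients, so an attachment probability proportional to degree + A is an assumption made for illustration only:

```python
import random

def ba_with_attractiveness(n=200, m=2, A=1.0):
    """Grow a BA-style network where a new node links to existing node i
    with probability proportional to degree(i) + A (A = attractiveness)."""
    degree = {0: 1, 1: 1}
    edges = [(0, 1)]
    for new in range(2, n):
        targets = set()
        while len(targets) < m:
            # Roulette-wheel selection over degree + A.
            total = sum(d + A for d in degree.values())
            r, acc = random.uniform(0, total), 0.0
            for node, d in degree.items():
                acc += d + A
                if acc >= r:
                    targets.add(node)
                    break
        for t in targets:
            edges.append((new, t))
            degree[t] += 1
        degree[new] = m
    return degree

deg = ba_with_attractiveness()
print(max(deg.values()), min(deg.values()))   # heavy-tailed degree spread
```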
Overlay Construction Policy in Layered P2P Streaming System
Computer Science. 2012, 39 (5): 69-74. 
Peer-to-peer (P2P) based streaming applications hold the advantages of highly efficient deployment and high scalability. A layered-video-coding based P2P streaming system splits the source video into multiple layers of video data and distributes these streams, which lets each peer select its playback quality according to its bandwidth resources. This kind of streaming system better accommodates the heterogeneity of peers. However, the different distribution paths of different layers of streaming data also pose challenges for overlay construction in layered P2P streaming systems. This paper formulated the overlay construction issue in the layered P2P streaming environment, which is an NP-hard problem, put forward a centralized heuristic algorithm, and then designed a streaming-group based distributed overlay construction policy. Large-scale network simulation shows that this overlay construction mechanism features comparatively low bandwidth consumption at the streaming server and a high streaming acquisition ratio.
Dynamic Trust Management Research Based on ITIL Model
Computer Science. 2012, 39 (5): 75-79. 
The trust running framework of an ITIL platform can provide a good context environment for dynamic trust management. Most research on dynamic access control focuses on models for P2P or grid platforms, and few models are applied to ITIL platforms. We proposed a framework to evaluate the trust of ITIL services based on monitored performance in the ITIL context environment. We calculated the trust value of ITIL services, and used the IOWA operator to help evaluate and forecast the trust value of ITIL services in different time periods. We also mined frequent sequences in the alarm log to improve the accuracy of the forecast. Example analysis shows that our method has a good effect on fault analysis and resource optimization on an ITIL platform.
New Channel Estimation Algorithm in Time-domain for MIMO-OFDM System Based on Training Sequences
Computer Science. 2012, 39 (5): 80-82. 
A channel estimation algorithm was proposed for MIMO-OFDM systems. The estimation criteria and training patterns were given in detail. Due to the correlation properties of the proposed training patterns, the channel estimate can be acquired conveniently and accurately. Both theoretical analysis and simulation show that the proposed algorithm performs as well as the LS algorithm based on optimal training sequences in the time domain. At the same time, the algorithm does not require transformations between the time and frequency domains, and only a certain number of correlation calculations are needed, so the computational burden is low.
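For context, a sketch of the baseline LS estimate the proposed algorithm is compared against, computing h_hat = (X^H X)^{-1} X^H y from a known training sequence; the channel length, training length and sequence are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
L, N = 4, 32                       # channel taps, training length
h_true = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2)

x = rng.choice([1, -1], size=N).astype(complex)   # known training sequence
# Convolution matrix X so that y = X @ h + noise.
X = np.array([[x[n - k] if n - k >= 0 else 0 for k in range(L)]
              for n in range(N)])
y = X @ h_true + 0.05 * (rng.normal(size=N) + 1j * rng.normal(size=N))

# Least-squares channel estimate: h_hat = (X^H X)^{-1} X^H y.
h_hat = np.linalg.solve(X.conj().T @ X, X.conj().T @ y)
print(np.abs(h_hat - h_true).max())               # small estimation error
```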
Research on Computer Forensics System Based on Cloud Computing
Computer Science. 2012, 39 (5): 83-85. 
Cloud computing is the most popular computing mode on the Internet, featuring elastic computing, resource virtualization, on-demand service, etc. In this environment, resources such as infrastructures, development platforms and applications are directly provided by the cloud computing center. Users no longer own infrastructures, software and data themselves, but share the entire pool of cloud computing resources. This affects the security and availability of the cloud computing environment and exposes it to tremendous risks. The security flaws and threats of cloud computing were analyzed in this paper, and a design of a computer forensics system based on cloud computing was presented, which resolves security problems of cloud computing using computer forensics techniques, and meets the need for high-performance computing using cloud computing and supercomputing techniques.
Resource Allocation Algorithm for OFDMA-based Cooperative Communication Systems
Computer Science. 2012, 39 (5): 86-90. 
Taking users' demands for both QoS and fairness into account, a joint sub-carrier and power allocation algorithm was proposed for OFDMA-based cooperative cellular networks. Existing sub-carrier allocation algorithms consider only average power allocation, which leaves the power of relaying nodes incompletely utilized. In contrast, we studied the optimal allocation of residual power after sub-carrier allocation has been completed. Consequently, a water-filling power allocation scheme based on bisection was given. Simulation shows that our proposed joint allocation algorithm can satisfy users' needs for QoS and fairness, while boosting the network throughput dramatically.
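A small sketch of water-filling with the water level found by bisection, as the abstract describes; the channel gains and power budget are illustrative:

```python
import numpy as np

def waterfill(gains, p_total, tol=1e-9):
    """Water-filling: maximize sum(log2(1 + g_i * p_i)) s.t. sum(p_i) = P.
    The water level mu is found by bisection."""
    lo, hi = 0.0, p_total + 1.0 / min(gains)
    while hi - lo > tol:
        mu = (lo + hi) / 2
        p = np.maximum(mu - 1.0 / np.asarray(gains), 0.0)
        if p.sum() > p_total:
            hi = mu      # too much power poured in: lower the water level
        else:
            lo = mu
    return np.maximum(lo - 1.0 / np.asarray(gains), 0.0)

p = waterfill([2.0, 1.0, 0.25], p_total=1.0)
print(p, p.sum())        # stronger sub-carriers receive more power
```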
Study on Application of Hyperchaotic Encryption Combined with 3DES in Secure E-mail System
Computer Science. 2012, 39 (5): 91-94. 
The security problem of email transmission on the Internet was considered. A cascade cipher scheme based on improved hyperchaotic sequences combined with the 3DES algorithm was proposed. Based on a hyperchaotic system, the real-valued chaotic sequences were pretreated and quantified. The improved sequences, which have stronger pseudo-randomness, better correlation and irreversibility characteristics, were validated by the NIST test. An example of email transmission using the cascade cipher system was presented to demonstrate its performance.
Modeling of COA Problem of C2 Organization Based on Dynamic Influence Nets
Computer Science. 2012, 39 (5): 95-98. 
A method based on dynamic influence nets (DINs), which models the course-of-action (COA) problem of command and control (C2) organizations, was proposed. In this method, causal strength parameters are introduced instead of conditional probability tables, and a probability propagation algorithm using the causal strength parameters was presented. Finally, the superiority and efficiency of this modeling method were illustrated by a case of a joint campaign.
Optimization of New Nonlinear Propagation Model of Worm Based on Immune Hosts
Computer Science. 2012, 39 (5): 99-101. 
This paper analyzed worm propagation models in detail and optimized them. We presented a new worm propagation model, named the QSIRV propagation model, based on the Two-Factor propagation model. The model reasonably considers the failure of immunity. The experimental results show that the QSIRV model can better describe worm propagation and the interaction between worm propagation and network traffic; in particular, the simulated change in the number of immune hosts is consistent with the actual situation. The model also considers the influence of isolation, immunization, the number of infected hosts, and people's vigilance against worms. The paper further improved the QSIRV model; the improved model can suppress worm propagation faster.
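For orientation, a sketch of the classical SIR baseline that worm models such as Two-Factor and QSIRV extend; the QSIRV compartments (quarantine, immunity failure) and their exact equations are not given in the abstract, so only the common core is shown, with illustrative rates:

```python
# Classical SIR worm-propagation baseline (Euler integration). The paper's
# QSIRV model adds quarantine (Q) and immunity-failure (V) compartments,
# whose exact equations the abstract does not give.
def sir(beta=0.3, gamma=0.1, s=0.999, i=0.001, r=0.0, dt=0.1, steps=2000):
    history = []
    for _ in range(steps):
        ds = -beta * s * i           # susceptible hosts getting infected
        di = beta * s * i - gamma * i
        dr = gamma * i               # infected hosts becoming immune
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        history.append(i)
    return history

infected = sir()
print(max(infected))   # epidemic peak: fraction of infected hosts
```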
Underwater Sensor Networks Localization Based on Polyhedron Centroid Algorithm
Computer Science. 2012, 39 (5): 102-105. 
In order to overcome the poor localization accuracy of both distance-based and distance-independent underwater localization algorithms at present, this paper proposed a distance-independent polyhedron centroid algorithm with optimization measures including adaptive network density, iterative localization and relocation, and periodic updating and forecasting. Simulation results show that the polyhedron centroid localization algorithm not only performs much better than the ALS algorithm, with significantly improved localization accuracy, but also reduces localization costs. Finally, future research issues and trends in underwater localization algorithms were pointed out.
Study of Requirements Evolution Based on Queuing Theory
Computer Science. 2012, 39 (5): 106-109. 
Studying requirements evolution with quantitative methods needs a supporting methodology. Based on queuing theory, this paper analyzed the process of requirements change requests and pointed out the characteristics of the queuing models. The current method of calculating the requirements maturity index was improved using the M/M/1/m/m model. An example shows that the quantities obtained from this queuing model are meaningful for evaluating the effectiveness of requirements analysis and the capability of requirements engineers. The role of queuing theory in guiding the management of requirements change was pointed out.
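A worked sketch of the M/M/1/m/m (finite-source) queue mentioned in the abstract, under one possible reading: m sources each raising change requests at rate lam, and a single requirements analyst serving them at rate mu; the rates are illustrative:

```python
from math import factorial

def mm1mm(lam, mu, m):
    """Steady-state metrics of the M/M/1/m/m finite-source queue."""
    rho = lam / mu
    # p_k is proportional to m!/(m-k)! * rho^k  (finite-population chain).
    weights = [factorial(m) // factorial(m - k) * rho ** k
               for k in range(m + 1)]
    p0 = 1.0 / sum(weights)
    p = [w * p0 for w in weights]
    L = sum(k * pk for k, pk in enumerate(p))   # mean number in system
    thruput = mu * (1 - p[0])                   # effective service rate
    W = L / thruput                             # mean time in system (Little)
    return L, W

print(mm1mm(lam=0.2, mu=1.0, m=5))
```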
Web Services Evaluation Model Based on Qualification of Time Utility
Computer Science. 2012, 39 (5): 110-113. 
Current evaluation methods for Web services do not consider time utility, while the qualification of time utility in other systems can hardly satisfy the requirement of distinguishing the decay of time utility. As a result, a Web service evaluation model based on the qualification of time utility was proposed. Based on the forgetting phenomenon of time utility, the model analyzes the relationship between time utility and memory, reveals the update rule of the forgetting ratio with respect to time utility, and designs a method to qualify time utility that distinguishes its decay. Meanwhile, the time utility results are used to optimize the calculation of the length of evaluation windows, improving on KNN with manually set evaluation windows. Experiments show that the model meets the requirement of distinguishing the decay of time utility, achieves low evaluation error, reduces computational cost and fits users' evaluations, which is meaningful for both theoretical research and practical use.
Semantics of Delay Declaration in Logic Programming Language Godel and its Implementation
Computer Science. 2012, 39 (5): 114-116. 
The logic programming language Godel has developed slowly since its appearance, due to its complex language components and the lack of a rigorous semantic foundation and mature compilers. In this paper, we first described the procedural semantics of its delay computation using evolving algebra. Then the specific implementation methods were introduced with a flow chart and a description in the C language. Finally, the execution of delay computation in a compiler based on an extended Warren Abstract Machine was illustrated. Its feasibility was proved by the implementation.
Optimal Testing for Software Defects Based on Controlled Markov Chain
Computer Science. 2012, 39 (5): 117-119. 
The Controlled Markov Chain (CMC) model for software testing discussed in most current works is obtained from a series of assumptions, and some of these assumptions are specialized, which narrows the scope of application of these models, so they deviate from practical application. Following software cybernetics, this article provided an improved CMC model with cost constraints by introducing a series of new transformations of the limiting conditions. This model eliminates some of the defects of existing models and reaches a balance among efficiency, complexity and applicability. In order to verify the effectiveness of the model, a new optimal testing strategy for software defects was designed according to the new model. Through a simulation experiment, the strategy was compared with the traditional random testing strategy. The results show that our improved CMC model has high applicability and effectiveness.
Research on Component Behavior Fragment Extraction and Composition Based on Logical Reasoning
Computer Science. 2012, 39 (5): 120-123. 
Against the background of component assembly, since existing software can hardly fulfill the changing requirements of users, this paper proposed an algorithm for component behavior fragment extraction and composition based on logical reasoning. The main idea, based on research into interface mapping and the state transitions of components, is to establish structural and state models of component behavior and break them down into component behavior fragments derived on the basis of a calculus. In the end, following the method of logical reasoning, the paper took the input and output as target solutions, and provided a composite component that meets the requirement goals by composing the useful component behavior fragments through relation derivation.
Automatic Test Data Generation Tool of Dynamic Variable Parameters Based on Genetic Algorithm
Computer Science. 2012, 39 (5): 124-127. 
Test data generation is key to implementing software test automation, and realizing this technology can greatly save time and money in software development. Using genetic algorithm theory and the algorithm's characteristics, this paper built a tool for automatic test data generation with dynamically variable parameters. Through the tool's visual interface, genetic algorithm parameters can be input dynamically, and a fitness function corresponding to the selected path can be entered, overcoming the past defect of having to modify the fitness function in the source code. Finally, two experiments demonstrate the superiority of the algorithm in this paper.
Study and Application of Agent Based Parallel Iteration Reengineering Method
Computer Science. 2012, 39 (5): 128-132. 
Software needs to be maintained and reengineered as programming techniques evolve. Based on a study of the concepts and models of software reengineering, a method combining parallel iteration and software agent technology was proposed to reengineer massive legacy systems. The approach was applied to the reengineering of the FVS system, a FORTRAN program; the reengineered system is more flexible and scalable, and the approach serves as a good example for the reengineering of similar legacy systems.
Automatic Data Type Reconstruction in Decompilation
Computer Science. 2012, 39 (5): 133-136. 
As one of the most significant modules of decompilation, data type reconstruction plays an important role in readability and intelligibility. This paper proposed an algorithm for automatic type reconstruction from assembly code produced by the MinGW GCC 3.4.5 compiler. Basic types are reconstructed using an iterative algorithm over a lattice of type properties. The skeletons of composite types are recovered by establishing label equivalence classes, and their member variables by constructing the set of offsets for each composite type. The algorithm is an essential part of a tool being developed by the authors; it not only reconstructs basic types exactly, but also makes progress on a hot issue currently pursued by researchers, with favorable results.
Conceptual Modeling Method of Simulation System Based on Ontology
Computer Science. 2012, 39 (5): 137-140. 
The problems of low model reusability and lack of management arise in the development of conceptual models for simulation systems. To solve these problems, the concept of the meta conceptual model (MCM) was presented to realize the abstraction of conceptual models at a higher level. The ontology concept was introduced into the MCM's design, and a modeling method for ontology-based MCM (OMCM) was presented, together with the OMCM's hierarchy and modeling method. By mapping the OMCM to the conceptual model, conceptual modeling was realized. Finally, the method was applied to the conceptual modeling of an equipment support simulation system, and good validation results were obtained.
Data Allocation Algorithm Based on Visit Capacity and Dependency Evaluation in Cloud
Computer Science. 2012, 39 (5): 141-146. 
A huge amount of large-scale intensive data has to be stored in distributed data centers. Nowadays, the cloud environment can better support large-scale data storage. However, a challenging issue is that the transmission of intensive data between cloud data centers may cause low data access efficiency. Also, a bottleneck of access on a data center may derive from an unbalanced data visit capacity per unit interval. We first proposed a model based on the data flow between large-scale intensive data. Afterwards, a data allocation algorithm was presented to guarantee the load balance of data centers while considering the dependencies between intensive data. Extensive experiments confirm that our solution performs better than conventional approaches, particularly in load balance.
Improved Broadcast Scheduling Algorithm in Mobile Real-time Environment
Computer Science. 2012, 39 (5): 147-150. 
Data broadcast is an efficient method for data access over the asymmetric bandwidth of mobile real-time environments. For the characteristics of such networks, we analyzed some existing broadcast scheduling algorithms, such as the UFO algorithm, and proposed the SBS and CRS algorithms, which improve UFO on the server side and the mobile client side respectively. The two algorithms automatically generate broadcast scheduling lists that depend on the given data items' access probability distribution. Theoretical analysis and experimental results show that the proposed algorithms avoid transaction restarts and effectively reduce data item access time, minimizing the average waiting time for users accessing the data broadcast.
Multi-resolution Modeling Based on Parallel Storage
Computer Science. 2012, 39 (5): 151-155. 
Multi-resolution simulation is one of the key and difficult topics in simulation research, and its importance is gradually becoming apparent. Among the existing approaches in the field, aggregation-disaggregation attracts much attention because it is simple and practicable, but its consistency problem is serious. Based on an analysis of the problems in current methods, a new concept named parallel storage was proposed. This method alleviates the consistency problem while costing fewer resources.
Primitives-patterns Based Architecture Modeling Method of Operational Activity Model
Computer Science. 2012, 39 (5): 156-160. 
In order to realize the understanding, comparison and integration of architectures developed under different architecture frameworks and tools with various modeling methods, and to better support the data-centric methodology of architecture development, a new primitives-patterns based modeling method for architecture development was proposed. Following the transformation approach from the architecture meta-model to modeling languages in the Extensible Markup Language (XML) format, a primitives-patterns based mapping specification was designed. Based on the International Defence Enterprise Architecture Specification, the meta-model of the operational activity model was built. By analyzing the modeling principles of IDEF0 and the UML activity diagram, this method constructed the semantic mapping relationship between the OV-5 meta-model and modeling languages such as IDEF0 and the UML activity diagram, ensuring the precision and coherence of architecture data and semantics.
Approach of Chinese Event IE Based on Verb Argument Structure
Computer Science. 2012, 39 (5): 161-164. 
For the purpose of using the constraint rules between verbs and their arguments in event extraction, an approach to Chinese event IE based on verb argument structure was proposed, built around a model variant formed by introducing the verb argument structure into an event model. First, pre-processing and syntactic analysis of the text are performed to obtain its grammatical structure. Then the structure is compared with the properties of the verb argument structure to find the matching arguments of each verb. Finally, the features of the corresponding events are determined by the semantic properties of the arguments, and event extraction is completed. Experimental results show that this method can improve the performance of the extraction system and increase extraction efficiency.
Research on Estimate Method of DTW Early Abandon Ratio
Computer Science. 2012, 39 (5): 165-167. 
Early abandoning is of great importance in improving the efficiency of time series similarity search and reducing redundant computations. However, previous works focused on empirical experiments to estimate the effect of early abandoning, and no theoretical analysis method was available. The mechanism of DTW early abandoning was analyzed, a model for estimating the DTW early abandon ratio was proposed, and experiments were conducted to verify its validity. The experimental results show that the proposed method can effectively estimate the DTW early abandon ratio, and performs better in precision than the EaEst method.
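A minimal sketch of DTW with early abandoning, the mechanism being analyzed: the computation stops once every entry of a completed DP row already exceeds the best distance found so far; the inputs are illustrative:

```python
import math

def dtw_early_abandon(a, b, best_so_far):
    """DTW with early abandoning: abandon as soon as the minimum cost of a
    completed row already exceeds the best distance found so far."""
    n, m = len(a), len(b)
    prev = [0.0] + [math.inf] * m
    for i in range(1, n + 1):
        cur = [math.inf] * (m + 1)
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            cur[j] = cost + min(prev[j], cur[j - 1], prev[j - 1])
        if min(cur) >= best_so_far:   # no warping path can get cheaper
            return math.inf
        prev = cur
    return prev[m]

print(dtw_early_abandon([1, 2, 3, 4], [1, 2, 2, 4], best_so_far=10.0))
```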
New Attribute Reduction Algorithm Based on Reconstructed Consistent Decision Table
Computer Science. 2012, 39 (5): 168-171. 
At present, positive-region-based attribute reduction is one of the most popular approaches to attribute reduction. Some inconsistent objects may be present in real-world decision tables, and as the number of attributes decreases during reduction, new inconsistent objects may also appear. For a positive-region-based attribute reduction algorithm, inconsistent objects cannot provide any useful information. Therefore, deleting those objects from the decision table changes neither the positive regions nor the final reduction result, while this operation may improve the efficiency of the algorithm considerably. However, most current positive-region-based attribute reduction algorithms have not considered this problem; they use all objects in the domain to calculate the positive regions and obtain the reduction results. To solve this problem, we defined the notions of the reconstructed consistent decision table and the reconstructed consistent decision sub-table. The aim of introducing these two notions is to delete the inconsistent objects from the original decision table and obtain a consistent decision table during the reduction process. By virtue of the two notions, we proposed a novel positive-region-based attribute reduction algorithm. The experimental results on real datasets demonstrate that our algorithm obtains smaller reducts and higher classification accuracies than traditional algorithms, with relatively low time complexity.
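A small sketch of the positive-region computation this line of work builds on: an object is in the positive region exactly when its condition-attribute equivalence class has a single decision value, and the remaining (inconsistent) objects are the ones the reconstructed consistent decision table deletes. The toy table is illustrative:

```python
from collections import defaultdict

def positive_region(objects, cond_attrs, decision):
    """Objects whose condition-attribute equivalence class is consistent,
    i.e. all members share the same decision value."""
    classes = defaultdict(list)
    for obj in objects:
        key = tuple(obj[a] for a in cond_attrs)
        classes[key].append(obj)
    pos = []
    for members in classes.values():
        if len({m[decision] for m in members}) == 1:   # consistent class
            pos.extend(members)
    return pos

table = [
    {"a": 0, "b": 1, "d": "yes"},
    {"a": 0, "b": 1, "d": "no"},    # inconsistent with the first object
    {"a": 1, "b": 0, "d": "yes"},
]
print(len(positive_region(table, ["a", "b"], "d")))   # -> 1
```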
Spectral Clustering Algorithm for Large Scale Data Set Based on Accelerating Iterative Method
Computer Science. 2012, 39 (5): 172-176. 
The traditional spectral clustering algorithm is only applicable to small-scale data sets. A new method was proposed in light of the characteristics of the Laplacian matrix. First, a new Gram matrix is reconstructed, of which only some rows are needed; then the eigen-decomposition is solved by an accelerating iterative method. The calculation speed of the proposed method is very fast and its space complexity is small for large-scale data sets.
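For reference, a sketch of the basic power-iteration building block behind accelerating iterative eigen-solvers; the paper's reconstructed Gram matrix and its specific acceleration scheme are not reproduced here:

```python
import numpy as np

def power_iteration(A, iters=200, tol=1e-10):
    """Repeatedly apply A with normalization to converge to the dominant
    eigenpair (the building block of accelerating iterative eigen-solvers)."""
    v = np.random.default_rng(0).normal(size=A.shape[0])
    lam = 0.0
    for _ in range(iters):
        w = A @ v
        lam_new = np.linalg.norm(w)
        v = w / lam_new
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam, v

A = np.array([[4.0, 1.0], [1.0, 3.0]])
lam, v = power_iteration(A)
print(lam)   # approx. 4.618, the largest eigenvalue
```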
Dynamic Splog Filtering Algorithm Based on Combined Features
Computer Science. 2012, 39 (5): 177-179. 
Splog filtering has become a new international research hotspot in recent years. Most traditional filtering algorithms are based on word frequency feature classification, which is quite redundant and lacks relevance. To address this problem, a dynamic filtering algorithm based on combined features for splogs (CFDSD) was proposed to solve the problems of low relevance and redundancy. The CFDSD algorithm uses self-similarity features and author attributes, and at the same time adopts the Bayesian classification algorithm to optimize word frequency feature classification. Experiments show that the algorithm adapts to the dynamically updated features of blogs over time, and improves filtering efficiency while reducing the time needed to filter splogs.
Density Based Weighted Fuzzy Clustering Algorithm
Computer Science. 2012, 39 (5): 180-182. 
The traditional clustering algorithm converges to a local minimum when the initial objects' attributes have no obvious differences, which lowers the algorithm's accuracy and leads to incorrect results. In order to overcome these drawbacks, a density-based weighted fuzzy c-means clustering algorithm was proposed. It uses relative density differences computed over the attributes to determine the initial partition. After obtaining better initial centers, a weighted fuzzy algorithm that can distinguish the importance of each attribute is applied. Experimental results show that the algorithm not only discriminates the attributes' contributions, but also improves stability and accuracy.
Outlier Sub-sequences Detection for Importance Points Segmentation of Time Series
Computer Science. 2012, 39 (5): 183-186. 
Because a time series has a large amount of data, detecting outliers on it directly has high complexity. This paper proposed an outlier sub-sequence detection algorithm based on important-point segmentation of time series to relieve this problem. Outlier detection on sub-sequences can offset the limitations of outlier detection on points. The algorithm first obtains a series of smoothed important points and divides the sequence according to them, meanwhile extracting four characteristic values of each sub-sequence: length, height, mean and standard deviation, which are used in the Euclidean distance. Finally, it detects the outlier sub-sequences with the KNN algorithm. Experimental results show that the algorithm is effective and reasonable.
Orthogonal Differential Evolution Algorithm for Solving System of Equations
Computer Science. 2012, 39 (5): 187-189. 
First, the nonlinear system of equations is transformed into an unconstrained optimization problem by using the concept of surrogate constraints and an amended maximum entropy function. Then, the concept of average similarity is introduced to design an adaptive orthogonal crossover operator, and orthogonal design is used to generate the initial population; on this basis, an adaptive orthogonal differential evolution algorithm was proposed for solving the maximum entropy function. Finally, the algorithm was verified on test systems of equations.
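A rough sketch of the underlying idea with plain DE/rand/1/bin (the paper's adaptive orthogonal crossover and orthogonal initialization are not reproduced): solving a system F(x) = 0 is recast as minimizing a residual, here the sum of squared residuals as a simple stand-in for the amended maximum entropy function:

```python
import random

def solve_equations(residual, dim, bounds=(-5, 5), np_=40, F=0.5, CR=0.9,
                    gens=300):
    """DE/rand/1/bin minimizing a residual function."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(np_)]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = random.sample(
                [p for j, p in enumerate(pop) if j != i], 3)
            jr = random.randrange(dim)
            trial = [a[k] + F * (b[k] - c[k])
                     if (random.random() < CR or k == jr) else pop[i][k]
                     for k in range(dim)]
            if residual(trial) < residual(pop[i]):   # greedy selection
                pop[i] = trial
    return min(pop, key=residual)

# Example system: x + y = 3, x * y = 2  (solutions (1,2) and (2,1)).
res = lambda v: (v[0] + v[1] - 3) ** 2 + (v[0] * v[1] - 2) ** 2
print(solve_equations(res, dim=2))
```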
Classification for Imbalanced Microarray Data Based on Oversampling Technology and Random Forest
Computer Science. 2012, 39 (5): 190-194. 
In recent years, applying DNA microarray technology to disease diagnosis, especially for cancer, has become a hot topic in bioinformatics. In contrast with many other data carriers, microarray data generally holds some unique characteristics. A novel oversampling technology based on probability distribution was proposed to solve the problem brought by the imbalanced sample distribution of microarray data. With this technology, reasonable pseudo samples are created for the minority class to guarantee the balance between the two classes. Then we used random forest to classify the samples belonging to different classes. Its effectiveness and feasibility were verified on two benchmark microarray datasets. Experimental results show that the proposed method obtains better classification performance compared with some traditional approaches.
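A minimal sketch of one probability-distribution-based oversampling scheme (fitting a per-feature Gaussian to the minority class and sampling pseudo samples from it) followed by random forest classification; the data, the Gaussian assumption and the sample counts are illustrative, and the paper's estimator may differ:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Imbalanced toy "microarray": 40 majority vs 6 minority samples, 20 genes.
X_maj = rng.normal(0.0, 1.0, size=(40, 20))
X_min = rng.normal(1.5, 1.0, size=(6, 20))

# Oversample the minority class from a fitted per-feature Gaussian.
mu, sigma = X_min.mean(axis=0), X_min.std(axis=0) + 1e-6
X_pseudo = rng.normal(mu, sigma, size=(34, 20))     # balance the classes

X = np.vstack([X_maj, X_min, X_pseudo])
y = np.array([0] * 40 + [1] * (6 + 34))
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))
```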
Research and Implement of Facial Action on Human-like Agent
Computer Science. 2012, 39 (5): 195-197. 
Based on a uniform model of facial action, the characteristics and constraints of facial muscles were analyzed. Facial action units were proposed to generate facial actions using a kinematic method and a teaching method. Complex facial actions can be synthesized from the basic facial action units. Experiments show that the kinematic method reduces the probability of unnatural expressions by controlling the muscles accurately, the teaching method generates facial actions without considering the details, and synthesized facial actions are generated effectively by reusing the facial action units.
Research on Glowworm Swarm Optimization with Hybrid Swarm Intelligence Behavior
Computer Science. 2012, 39 (5): 198-200. 
The glowworm swarm optimization (GSO) algorithm is one of the newest nature-inspired heuristics for optimization problems. In order to enhance the accuracy and convergence rate of GSO, two behaviors inspired by the artificial bee colony algorithm (ABC) and particle swarm optimization (PSO) were proposed for the movement phase of GSO. The effects of the parameters of the improved algorithms were studied through uniform design experiments. A number of experiments were carried out on a set of well-known benchmark global optimization problems. Numerical results reveal that the proposed algorithms find better solutions than classical GSO and other heuristic algorithms, and are powerful search algorithms for various global optimization problems.
Two-dimensional Locality Preserving Projections Based on L1-norm
Computer Science. 2012, 39 (5): 201-204. 
This paper presented a two-dimensional locality preserving projection method based on the L1-norm (2DLPP-L1). The proposed approach has two advantages over the conventional L2-norm based two-dimensional locality preserving projection (2DLPP). First, it is more robust against outliers because the L1-norm is insensitive to noise. Moreover, it does not require eigenvalue decomposition. Experiments on two face databases and one handwritten digit dataset illustrate that, compared with 2DLPP, the proposed method exhibits better performance when there are outliers in the training sets.
Some Results on the Decision of the Minimal Covering for Full Symmetric Function Sets in Partial K-valued Logic
Computer Science. 2012, 39 (5): 205-207. 
According to the completeness theory in partial K-valued logic, starting with the graph of binary full symmetric relations, this paper first proved that two kinds of function sets preserving binary full symmetric relations do not belong to the minimal covering members in partial K-valued logic, and then, by discussing the number of edges of the graph, determined the presence of function sets preserving binary full symmetric relations among the minimal covering members in partial K-valued logic.
Effective Approach to Label Extraction and Matching
Computer Science. 2012, 39 (5): 208-212. 
Label extraction and matching are an important part of query interface understanding. A vision-based label extraction and matching approach was proposed in this paper. First, the factors which affect label matching were analyzed in depth; then, a method of reconstructing the query interface by analyzing its HTML code was given, which can restore the visual layout of the form effectively. Finally, element label matching was realized by comprehensively considering label tags, text semantics and position features. Experiments on 277 query interfaces in 8 domains demonstrate the feasibility of the proposed approach.
Research on Q Learning Algorithm with Sharing Experience in Learning Process
Computer Science. 2012, 39 (5): 213-216. 
The aim of this research is to improve the efficiency of multi-agent Q-learning algorithms. This paper proposed a method of multi-agent Q-learning with shared experience, based on the pursuit problem. The algorithm simulates the behavior of a human learning team: all agents share the common ultimate goal of capturing the prey, while each agent sets its own milestones through negotiation. The learning process is divided into stages; after each learning stage there is a stage summary, in which good learning experience is shared so as to facilitate the next stage of learning. Agents that learn fast and well can help those that learn slowly and poorly, so the performance of the system is enhanced. The simulation results prove that Q-learning with shared experience in the learning process can improve the performance of learning systems and converge efficiently to the optimal strategy.
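A toy sketch of the idea: independent tabular Q-learning within a stage, then a "stage summary" that shares experience across agents. Merging Q-tables by keeping the larger value per state-action pair is an assumption made here for illustration; the paper's negotiation scheme is richer:

```python
import random
from collections import defaultdict

def q_update(Q, s, a, r, s2, actions, alpha=0.1, gamma=0.9):
    # Standard tabular Q-learning update.
    Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions)
                          - Q[(s, a)])

def share(Q_tables):
    """Naive 'stage summary': agents pool experience by keeping, for every
    state-action pair, the largest learned value (an assumption)."""
    merged = defaultdict(float)
    for Q in Q_tables:
        for k, v in Q.items():
            merged[k] = max(merged[k], v)
    return [defaultdict(float, merged) for _ in Q_tables]

# Two agents on a 5-cell corridor; reward 1 for reaching cell 4.
actions = [-1, 1]
Qs = [defaultdict(float) for _ in range(2)]
for stage in range(10):
    for Q in Qs:                      # independent learning within a stage
        for _ in range(50):
            s = random.randrange(4)
            a = random.choice(actions)
            s2 = min(max(s + a, 0), 4)
            q_update(Q, s, a, 1.0 if s2 == 4 else 0.0, s2, actions)
    Qs = share(Qs)                    # stage summary: share experience
print(max(Qs[0].values()))
```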
Opposition-based Self-organizing Migrating Algorithm
Computer Science. 2012, 39 (5): 217-218. 
A new opposition-based self-organizing migrating algorithm (OSOMA) was proposed to deal with the premature convergence of the self-organizing migrating algorithm. The key points of OSOMA are: 1) opposition-based learning is applied to extend the migrating direction and obtain better individuals, which maintains the diversity of the population and improves the convergence speed; 2) the algorithm adaptively adjusts the step to further balance exploration ability and exploitation capacity. OSOMA was then used to solve typical problems, and numerical results show its effectiveness.
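A minimal sketch of point 1), opposition-based learning: for x in [a, b], the opposite point is a + b - x, and the fitter of the pair is kept; the objective function is illustrative:

```python
import random

def opposite(x, lo, hi):
    # Opposition-based learning: the opposite of x in [lo, hi].
    return [l + h - xi for xi, l, h in zip(x, lo, hi)]

def sphere(x):
    return sum(xi * xi for xi in x)

lo, hi = [-5.0] * 3, [5.0] * 3
x = [random.uniform(l, h) for l, h in zip(lo, hi)]
x_opp = opposite(x, lo, hi)
# Keep whichever of the point / opposite point is fitter, as OSOMA does
# when extending the migrating direction.
best = min(x, x_opp, key=sphere)
print(sphere(x), sphere(x_opp), sphere(best))
```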
Abstract of Core Operators of Multi-objective Optimizations Immune Algorithms
Computer Science. 2012, 39 (5): 219-222. 
An artificial immune system is a computing system based on the theory of natural immunity, and multi-objective optimization is an important research direction in the optimization field. The study found that the operational principles and processes of various immune algorithms are not the same. This paper presented a method that unifies the expression of multi-objective optimization immune algorithms, and abstracted the main principles and operational processes of three kinds of core operators of immune algorithms. These core operators can express the classic immune algorithms NNIA and CMOIA. It is proved that the three immune operators are feasible and efficient for expressing algorithms.
Improved Dijkstra Shortest Path Algorithm and its Application
Computer Science. 2012, 39 (5): 223-228. 
The shortest path problem has wide applications. There are many algorithms to solve this problem, and the best known is Dijkstra's label algorithm. But experimental results show that this algorithm needs improvement: ① the algorithm's exit mechanism is ineffective on graphs that are not connected and will fall into an infinite loop; ② the algorithm does not address the problem of the previous adjacent vertices on a shortest path; ③ the algorithm does not address the situation where several vertices obtain the permanent label at the same time. This paper improved Dijkstra's label algorithm. Experimental results indicate that the improved algorithm solves the above problems effectively. On this basis, we developed the Beijing Road optimal route selection system, which helps users select the shortest route, so that people can avoid the most congested roads and save time.
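For comparison, a sketch of a heap-based Dijkstra variant that handles points ① and ② (it terminates cleanly on disconnected graphs and records each vertex's predecessor); ties in the heap (point ③) are broken arbitrarily here. This is a generic illustration, not the paper's improved algorithm:

```python
import heapq

def dijkstra(graph, src):
    """Dijkstra with a predecessor map, so the 'previous adjacent vertex'
    on every shortest path can be recovered; unreachable vertices simply
    keep distance infinity, so disconnected graphs terminate cleanly."""
    dist = {v: float("inf") for v in graph}
    prev = {v: None for v in graph}
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                   # stale queue entry
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return dist, prev

g = {"A": [("B", 1), ("C", 4)], "B": [("C", 2)], "C": [], "D": []}  # D isolated
dist, prev = dijkstra(g, "A")
print(dist["C"], prev["C"], dist["D"])   # -> 3.0 B inf
```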
Research of Event Anaphora Resolution Based on Machine Learning Approach
Computer Science. 2012, 39 (5): 229-233. 
In event anaphora resolution, the antecedent of the anaphor is an event and the anaphor is a noun phrase; they are parts of different semantic categorization systems, so most features applied in entity anaphora resolution are not appropriate for event anaphora resolution. This paper proposed an event anaphora resolution framework using a machine learning approach. The creation of instances and the selection of features were presented in detail. This paper also reported experimental results on the OntoNotes 3.0 corpus. The results show that the recall of the framework is very good, but the precision must be improved in future work.
Inference Rules of Sequence Failure Symbol in Cut Sequence Set Model
Computer Science. 2012, 39 (5): 234-238. 
In the cut sequence set (CSS) model, in order to get the minimal cut sequence set (MCSS) from the primary form of the CSS, which is transformed from dynamic fault trees (DFT), inference rules for the sequence failure symbol (SFS) were put forward. The rules, which include the combination law, or-distribution law, and-distribution law, absorption law, CSP law and WSP law, were established according to the sequence of basic events and the SFS, and proofs of the rules were provided. The paper also listed some cut sequences that cannot occur in reality, and provided some derived rules that follow from the inference rules. The SFS inference rules formalize the qualitative analysis of the CSS model; they can be used to automatically obtain the MCSS of dynamic systems and to design computer-assisted tools.
Moving Human Target Tracking and Contour Extraction with Weak Boundaries Based on Level Set Method
Computer Science. 2012, 39 (5): 239-242. 
Aimed at the characteristics of low contrast between targets and background and weak boundaries, a novel approach with fast convergence and a strong ability to capture boundaries was presented based on the level set method. We used an index function with quicker convergence as the indicator function, and a normalized Gaussian distribution function to improve the traditional Dirac function. In the tracking process, the minimum circumscribed rectangular frame of the moving human target in every video frame is obtained with the Kalman filter method, and level set curve evolution is then used to obtain the moving human target's outline. Experiments on targets in visible-light and infrared moving video sequences show that this method greatly improves tracking speed compared with traditional methods, and achieves better results in target tracking and contour extraction for infrared images with weak boundaries and strong convex-and-concave features.
Improved Face Recognition Method Based on NMF
Computer Science. 2012, 39 (5): 243-245. 
Non-negative matrix factorization (NMF) is a parts-based feature extraction method. Exploiting this characteristic, an improved algorithm dealing with the NMF basis was introduced to enhance the recognition rate for partially occluded faces. First, the discrete wavelet transform is used to produce a representation in the low-frequency domain, and the basis matrix is obtained with the NMF method. Second, the facial features with outstanding performance are extracted by threshold value judgments and used to form an optimized facial feature subspace. The training and testing images are projected onto the optimized subspace. Finally, a support vector machine is used for classification. Experiments show that the improved algorithm's computing time is short, and the method achieves remarkable effects under partial occlusion.
Multilevel Median Filter Algorithm Based on Vertical and Horizontal Windows Relation
Computer Science. 2012, 39 (5): 246-248. 
A multilevel median filter algorithm based on the relation between vertical and horizontal windows was proposed for the problems that the traditional median filter cannot protect image detail very well and that its performance declines sharply on images with high-density noise. The algorithm determines the noise points of the image in an N×N window using a switching strategy. For a noise point, its pixel value is replaced by the median of the median values of the 2N windows around the noise point; for non-noise points, the pixel value is retained. In this way, both noise removal and detail preservation are achieved. Experimental results indicate that the algorithm preserves image detail better and has strong denoising ability.
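A sketch of the switching idea in its simplest form: only pixels judged to be noise are replaced, others are kept. The paper's multilevel replacement over the 2N surrounding vertical/horizontal windows is simplified here to a single window median, and the threshold is illustrative:

```python
import numpy as np

def switching_median(img, n=3, thresh=40):
    """Switching median filter: only pixels classified as noise (far from
    their window median) are replaced; others are kept, preserving detail."""
    pad = n // 2
    padded = np.pad(img, pad, mode="edge")
    out = img.copy()
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + n, j:j + n]
            med = np.median(window)
            if abs(int(img[i, j]) - med) > thresh:   # noise point
                out[i, j] = med
    return out

noisy = np.full((8, 8), 100, dtype=np.uint8)
noisy[3, 3] = 255                                    # salt noise
print(switching_median(noisy)[3, 3])                 # -> 100
```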
Face-hashing Algorithm Based on Fuzzy Cyclic Random Mapping
Computer Science. 2012, 39 (5): 249-253. 
With the broad application of various biometric technologies, their safety deficiencies are gradually being exposed. Biometric encryption, the combination of biometric recognition and cryptography, was proposed to ensure the safe usage of biometrics. Based on our previous research on face biometrics, we proposed a face-hashing algorithm, Fuzzy Cyclic Random Mapping (FCRM), which considers both security and fault tolerance. In each cycle, the encryption model uses the key of the previous cycle as a random seed to generate a mapping matrix and randomly maps the face features, so the proposed algorithm forms a cyclic random mapping process. We also adopted fault-tolerance technology to reduce the impact of random noise in the face images of genuine users, while the cyclic mapping process prevents impostors from authenticating to the system, without reducing accuracy.
New Method for Branch Recognition Based on Gradient Phase Grouping
Computer Science. 2012, 39 (5): 254-256. 
A new Hough transform method based on gradient phase grouping was proposed to locate the branches of fruit trees accurately and rapidly. After the gradient phase of the edge points is calculated using the squared gradient method, the histogram of gradient directions is computed, and several peaks are found using a threshold T. Edge points are then grouped according to their gradient phases, so that points in each group have almost the same gradient direction. Finally, an improved two-point Hough transform is applied to the edge points in each group to calculate the line parameters, and the gradient direction of each group is used to confirm the correctness of the parameters. The research results show that the proposed method has the merits of high speed, low error and strong robustness, can locate the branches of fruit trees accurately and quickly, and can also detect partially covered branches satisfactorily.
Fast Calculation of Simulation Projection for Cone-beam CT Based on Shepp-Logan Head Phantom
Computer Science. 2012, 39 (5): 257-260. 
Abstract PDF(581KB) ( 543 )   
RelatedCitation | Metrics
Aiming at the problem of simulating projections of the 3D Shepp-Logan head phantom, this paper proposed a fast parallel method for computing simulated projections. Firstly, the intersection points of a 3D ray with each ellipsoid are calculated in turn. Then the intersection points are sorted, which determines the phantom regions traversed and the length of the intersection between the ray and each region. Finally, the projection of the ray is obtained by summing the projections over all regions. On this basis, we decomposed the computing task into four independent subtasks and realized fast parallel calculation of simulated cone-beam CT projections on a multi-core platform using multi-threading. Experimental results show that the proposed method is effective, reaching a speedup of about 3.5 on a quad-core platform. The accuracy of the projection data generated by the method was verified by image reconstruction results.
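The per-ray computation can be sketched as follows, assuming axis-aligned ellipsoids for brevity (the real 3D Shepp-Logan phantom also carries rotations); densities are summed over overlapping ellipsoids, as in the standard additive phantom definition:

```python
import numpy as np

def ray_projection(origin, direction, ellipsoids):
    # ellipsoids: list of (center (3,), semi-axes (3,), relative density)
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    total = 0.0
    for center, axes, rho in ellipsoids:
        p = (o - center) / axes                 # scale ellipsoid to unit sphere
        q = d / axes
        a, b, c = q @ q, 2 * p @ q, p @ p - 1.0
        disc = b * b - 4 * a * c
        if disc <= 0:
            continue                            # ray misses this ellipsoid
        t1, t2 = sorted(((-b - disc**0.5) / (2 * a),
                         (-b + disc**0.5) / (2 * a)))
        total += rho * (t2 - t1) * np.linalg.norm(d)  # density * chord length
    return total
```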
Study on Image Shape Classification Algorithm of Human Parasite Eggs Based on Boundary Features
Computer Science. 2012, 39 (5): 261-265. 
Abstract PDF(454KB) ( 458 )   
RelatedCitation | Metrics
In view of the shortcomings of shape classification methods based on images of parasite eggs, this paper proposed a new classification algorithm based on the boundary spatial distribution of parasite eggs. Firstly, the parasite egg is located based on the edge profile features of different eggs; then a level-set method is used to extract the edge; finally, classification is performed using Fourier descriptors as features. Experimental results show that the new method achieves a better recognition rate and operating efficiency.
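For reference, a compact sketch of Fourier-descriptor features on an extracted boundary; the normalization and the number of coefficients kept are assumptions:

```python
import numpy as np

def fourier_descriptors(boundary, n_coeff=16):
    # boundary: (N, 2) array of (x, y) points along the egg contour
    z = boundary[:, 0] + 1j * boundary[:, 1]
    F = np.fft.fft(z - z.mean())            # drop DC term: translation invariance
    mag = np.abs(F)                         # magnitudes: rotation/start invariance
    mag = mag / (mag[1] + 1e-12)            # divide by first harmonic: scale
    return mag[1:n_coeff + 1]               # low-order descriptors as features
```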
Visual Tracking of Artificial Fish Swarm Algorithm Based on Riemannian Manifold Metric
Computer Science. 2012, 39 (5): 266-270. 
Abstract PDF(709KB) ( 572 )   
RelatedCitation | Metrics
A novel visual tracking method based on the artificial fish swarm algorithm with a Riemannian manifold metric was proposed. The new algorithm handles interactive occlusion well and consumes less computation than a global exhaustive search, both of which are limitations of the classical covariance descriptor tracker. The covariance descriptor combines object position, color and gradient information to enhance adaptability to gesture and illumination changes. The artificial fish swarm algorithm is used to find the best match between object and candidate; its parallel operation and global search ability improve processing efficiency and robustness to occlusion. Experimental results show that the proposed method is more robust for visual tracking in complex scenes.
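The two ingredients named above can be sketched as follows; the fish-swarm search itself is omitted (candidates would be ranked by this distance), and the 7-dimensional feature set is an assumption:

```python
import numpy as np
from scipy.linalg import eigh

def covariance_descriptor(region):
    # region: H x W x 3 color patch; features = [x, y, R, G, B, |Ix|, |Iy|]
    H, W, _ = region.shape
    ys, xs = np.mgrid[0:H, 0:W]
    iy, ix = np.gradient(region.mean(axis=2))
    feats = np.stack([xs, ys,
                      region[..., 0], region[..., 1], region[..., 2],
                      np.abs(ix), np.abs(iy)], axis=-1).reshape(-1, 7)
    return np.cov(feats, rowvar=False)              # 7 x 7 region covariance

def riemannian_distance(C1, C2):
    # d(C1, C2) = sqrt(sum_i log^2 lambda_i), generalized eigenvalues of (C1, C2)
    lam = eigh(C1, C2, eigvals_only=True)
    return np.sqrt(np.sum(np.log(lam) ** 2))
```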
Research on Laplace Image Enhancement Algorithm Optimization Based on OpenCL
Computer Science. 2012, 39 (5): 271-277. 
Abstract PDF(621KB) ( 998 )   
RelatedCitation | Metrics
OpenCL is a general-purpose programming framework for heterogeneous computing platforms. However, due to differences in hardware architecture, how to achieve performance portability across platforms on top of functional portability remains an open question. Most current work on algorithm optimization targets a single hardware platform and rarely runs efficiently across platforms. This paper analysed the differences between the underlying hardware architectures of GPUs and studied how different optimization methods affect performance on different GPU platforms, considering global memory access efficiency, full use of GPU compute resources, hardware resource constraints and other aspects. On this basis, a Laplace image enhancement algorithm was implemented in OpenCL. Experimental results show that the optimized algorithm achieves speedups of 3.7 to 136.1 times (56.7 times on average, excluding data transfer time) on AMD and NVIDIA GPUs, and that the optimized kernel outperforms the CUDA version in the NVIDIA NPP library by 12.3% to 346.7% (143.1% on average), which verifies the effectiveness and cross-platform ability of the optimization methods.
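For reference, the underlying per-pixel stencil that the OpenCL kernels parallelize, shown in plain numpy (the paper's OpenCL code and its platform-specific tuning are not reproduced here):

```python
import numpy as np

def laplace_enhance(img, k=1.0):
    # Laplacian sharpening: g = f - k * lap(f), with a 4-neighbour stencil.
    f = img.astype(float)
    p = np.pad(f, 1, mode="edge")
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] +
           p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * f)   # discrete Laplacian
    return np.clip(f - k * lap, 0, 255).astype(img.dtype)
```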
PHG-Solid:A Parallel Adaptive FEM Software for 3D Structural Analysis
Computer Science. 2012, 39 (5): 278-281. 
Abstract PDF(415KB) ( 725 )   
RelatedCitation | Metrics
We presented our newly developed open-source parallel adaptive FEM software for 3D structural analysis. It is built on the 3D parallel adaptive finite element toolbox PHG and provides parallel adaptive finite element analysis for purely 3D structures. It has several advantages over other structural analysis software. Firstly, it supports fully automatic and highly parallel adaptive FEM computations. Secondly, it is robust, efficient and highly scalable in solving large-scale problems. Thirdly, the software is extensible, and users can conveniently add their own modules if needed. Several large-scale numerical experiments demonstrate that the largest problem size solved by our software exceeds 500 million degrees of freedom, with the number of MPI processes reaching 1024.
Power Analysis for Executable Program on Single Computer Based on Artificial Neural Network
Computer Science. 2012, 39 (5): 282-286. 
Abstract PDF(535KB) ( 348 )   
RelatedCitation | Metrics
Power management of application programs is a hot research topic in green computing and high-productivity computing. Because of the complexity of application programs, the heterogeneity of processors and the uncertainty of the running environment, it is difficult to predict the energy consumption of an application program directly and accurately. We therefore presented a power analysis paradigm for application programs based on an artificial neural network. First, we built a power analysis model based on a back-propagation neural network (BPNN): factors describing software, hardware and environment are the inputs of the BPNN, and energy consumption and finish time are its outputs. Next, we chose many classic application programs from different fields as training samples. After learning and training, the resulting BPNN can be used to predict the energy consumption of new programs. Repeated experiments show that this power analysis paradigm is rational and feasible.
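A minimal sketch of the paradigm, assuming scikit-learn; the factor list and network size below are placeholders, not the paper's setup:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# X: one row per training program, e.g. [instruction mix, cache miss rate,
#    CPU frequency, core count, ambient temperature, ...]  (assumed factors)
# Y: [energy consumed (J), finish time (s)] measured for that program
def train_power_model(X, Y):
    scaler = StandardScaler().fit(X)
    net = MLPRegressor(hidden_layer_sizes=(32, 16), solver="adam",
                       max_iter=5000).fit(scaler.transform(X), Y)
    # Returns a predictor mapping new program features -> (energy, time).
    return lambda x: net.predict(scaler.transform(np.atleast_2d(x)))[0]

# predict = train_power_model(X_train, Y_train)
# energy, finish_time = predict(new_program_features)
```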
Bio-inspired Fault-tolerant Approach for Sobel Operator
Computer Science. 2012, 39 (5): 287-290. 
Abstract PDF(443KB) ( 363 )   
RelatedCitation | Metrics
A bio-inspired fault-tolerant approach for the Sobel operator was described in this paper. By imitating four biological principles, namely match-based recognition in protein sorting, substitution among homogeneous cells, differentiation of stem cells, and conversion between heterogeneous cells, we designed an electronic tissue (eTissue) architecture that supports hierarchical self-healing. We then implemented the eTissue architecture specialized to the Sobel operator by programming with MPI. Our fault-injection experiments demonstrate the feasibility of this fault-tolerant approach.
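For reference, the operator being protected, as a plain numpy sketch (the eTissue fault-tolerance machinery itself is not reproduced here):

```python
import numpy as np

KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # horizontal kernel
KY = KX.T                                                    # vertical kernel

def sobel(img):
    f = np.pad(img.astype(float), 1, mode="edge")
    H, W = img.shape
    gx = np.zeros((H, W)); gy = np.zeros((H, W))
    for i in range(3):
        for j in range(3):
            patch = f[i:i + H, j:j + W]
            gx += KX[i, j] * patch          # correlate with both kernels
            gy += KY[i, j] * patch
    return np.hypot(gx, gy)                 # gradient magnitude (edge map)
```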
Study on Method for Operational Event-trace Description Modeling and Validation
Computer Science. 2012, 39 (5): 291-294. 
Abstract PDF(303KB) ( 397 )   
RelatedCitation | Metrics
A model based on an extended UML sequence diagram was given for modeling the operational event-trace description, one of the products of DoDAF. The model includes both a graphical and a formal description. The lifeline of the sequence diagram was defined as a time message, the correspondence between sequence diagrams and Petri nets was studied, and an algorithm for mapping sequence diagrams to Petri nets was given. The algorithm was then extended to map extended sequence diagrams to object-based Petri nets, with self-messages treated as a special case. Finally, an example based on an air-defense process was given.
Coding Scheme of Relative Length of Runs
Computer Science. 2012, 39 (5): 295-299. 
Abstract PDF(428KB) ( 628 )   
RelatedCitation | Metrics
A coding scheme based on the relative length of runs was presented. It reduces the length of the runs that need to be encoded without increasing the number of runs in the encoded test data, so compression is improved by shortening the codewords. Experimental results on some ISCAS 89 benchmark circuits show that the proposed scheme clearly outperforms traditional coding methods such as Golomb, FDR and EFDR coding in compression ratio and in the implementation of decompression.
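One illustrative reading of "relative length of runs", sketched below: encode each run of 0s by its difference from the previous run, so the values passed to the underlying code stay small. This differential mapping is an assumption, and the paper's exact scheme may differ:

```python
def relative_runs(bits):
    # Extract runs of 0s terminated by 1s, then encode them differentially.
    runs, n = [], 0
    for b in bits:
        if b == 0:
            n += 1
        else:
            runs.append(n)        # run of 0s ended by a 1
            n = 0
    runs.append(n)                # trailing run
    prev, rel = 0, []
    for r in runs:
        rel.append(r - prev)      # relative (differential) run length
        prev = r
    return rel

# relative_runs([0,0,0,1,0,0,0,0,1,0,0,0,0,0,1]) -> [3, 1, 1, -5]
```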
Applications of MPI Parallel Debugging and Optimization Strategy in the Gas-Kinetic Numerical Algorithm for 3-D Flows
Computer Science. 2012, 39 (5): 300-303. 
Abstract PDF(439KB) ( 566 )   
RelatedCitation | Metrics
Based on a program for the numerical simulation of the Boltzmann model equation in three-dimensional flows, this paper studied a domain-decomposition parallel strategy and supplied optimization methods for I/O, communication, memory access, etc., which were applied in debugging and optimizing the parallel MPI program. Experimental results of large-scale parallel computations of rarefied flows around a 3-dimensional sphere and a spacecraft at high altitude show that the parallel strategy and the optimization methods are correct and efficient, and that the parallel implementation scheme is useful and shortens the computing time considerably.
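A toy mpi4py sketch of the domain-decomposition pattern with non-blocking halo exchange, the kind of communication such a code optimizes at much larger scale; the sizes and the stencil are placeholders:

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
n_local = 1000                                   # cells owned by this rank
u = np.zeros(n_local + 2)                        # +2 ghost cells at the ends
u[1:-1] = rank                                   # placeholder initial data

left = rank - 1 if rank > 0 else MPI.PROC_NULL   # no-op at domain boundaries
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for _ in range(10):
    reqs = [comm.Isend(u[1:2], dest=left),       # send my edge cells...
            comm.Isend(u[-2:-1], dest=right),
            comm.Irecv(u[0:1], source=left),     # ...receive neighbour halos
            comm.Irecv(u[-1:], source=right)]
    MPI.Request.Waitall(reqs)                    # complete the halo exchange
    u[1:-1] = 0.5 * (u[:-2] + u[2:])             # simple stencil update
```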
Study of Silicon Photonics Based On-chip Optical Interconnect
Computer Science. 2012, 39 (5): 304-309. 
Abstract PDF(545KB) ( 2555 )   
RelatedCitation | Metrics
With the development of semiconductor technology, chip integration keeps improving, and more and more computing cores can be integrated on a single chip. The efficiency of data movement between computing cores affects the performance of the whole chip. Optical interconnects using wavelength-division multiplexing offer high transmission speed with low signal loss, low transmission delay and low power consumption, making them a potential technology for solving the on-chip communication problem of future multi-core and many-core processors. To meet the needs of future on-chip communication, the features and limitations of electrical interconnects were analyzed. Then the state of the art and future trends of optical interconnect technology were studied and its limitations were presented. On this basis, the features, advantages and drawbacks of several typical on-chip optical interconnection architectures were analyzed in depth. Finally, five important open problems were proposed.
Design and Optimization of Simultaneous Iterative Reconstruction Technique Based on GPU Platform
Computer Science. 2012, 39 (5): 310-313. 
Abstract PDF(361KB) ( 459 )   
RelatedCitation | Metrics
Electron tomography (ET) is widely used for reconstructing non-uniform cells or macromolecules at the nanometer scale. Iterative reconstruction is among the best ET methods owing to its outstanding reconstruction quality, but it is limited by its huge computational requirements. A parallel simultaneous iterative reconstruction technique (SIRT) was designed and implemented on a GPU platform with a Tesla C1060 using the CUDA programming language. Experimental results demonstrate the performance of the optimized parallel SIRT algorithm: its maximum speedup is 47 times over the sequential SIRT approach, with no loss of accuracy.
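A minimal numpy sketch of the SIRT iteration for reference; the GPU version parallelizes the two matrix-vector products, and the sizes here are toys:

```python
import numpy as np

def sirt(A, b, n_iter=100, lam=1.0):
    # SIRT update: x <- x + lam * C * A^T * R * (b - A x),
    # where R and C are inverse row and column sums of the projection matrix A.
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)    # inverse row sums
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)    # inverse column sums
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += lam * C * (A.T @ (R * (b - A @ x)))  # simultaneous update
    return x
```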