Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 38 Issue 12, 01 December 2011
  
Survey of Uncertain Schema Matching
Computer Science. 2011, 38 (12): 1-5. 
Abstract PDF(481KB) ( 301 )   
Schema matching is one of the research directions of data integration and the semantic Web. Its task is to discover correspondences between schema elements using heuristic information. We proposed a new classification of schema matching based on the way heuristic information is processed. An overview of composite schema matching was given according to the different methods for aggregating schema matching results. Uncertainty is an inherent characteristic of schema matching. We surveyed data models that capture the uncertainty of schema matching, and on this basis introduced uncertain schema matching. Finally, we discussed future research directions.
SOA and Cloud Computing: Competition or Integration
Computer Science. 2011, 38 (12): 6-11. 
Abstract PDF(557KB) ( 530 )   
Clarifying the relationship between cloud computing and SOA is significant for their application and combination. This paper surveyed their relationship, and on this basis the differences and connections between cloud computing and SOA were analyzed. In addition, different views from enterprises and the latest academic research about their relationship were summarized, and approaches that combine SOA to integrate cloud services were proposed. First, it is both possible and essential to combine them, and the main forms of combination were analyzed. Second, this paper divided the possible opportunities and challenges of combination into several groups and discussed each in detail; the methods to solve these problems were further analyzed. Finally, concrete integration needs were proposed, the methods of integrating SaaS and PaaS with SOA were introduced respectively, and further work on realizing the integration of SaaS with SOA was discussed.
New Survey on Automatic Image and Video Annotation
Computer Science. 2011, 38 (12): 12-16. 
Abstract PDF(498KB) ( 376 )   
In recent years, automatic image and video annotation has become a popular research topic and is advancing rapidly. This paper provided a new survey of the technique by reviewing a variety of recently published literature. The reviewed work was classified into two categories: learning-based approaches and search-based approaches. We first summarized their basic ideas, advantages, and drawbacks, respectively. Afterwards, we introduced a few commercial systems and research prototypes for retrieval and annotation. Finally, a potentially promising idea was suggested.
New Method of the Fast Narrow Band C-V Level Set Model for Image Segmentation
Computer Science. 2011, 38 (12): 17-19. 
Abstract PDF(321KB) ( 298 )   
The region-based level set model for image segmentation presented by Chan and Vese can naturally change its topology as the level set evolves, so level set models are widely applied in many areas, especially image segmentation and target tracking. The C-V level set model has better anti-noise performance than gradient-based level set models, but the evolution of the C-V level set function is more complex, and its main drawback is that the evolution is relatively slow, so the model cannot be applied in practical projects. Aiming at these problems, a narrow band level set model based on the region level set without re-initialization was presented. This optimization method detects the approximate edge in a low-resolution image, then maps this edge to the high-resolution image. More accurate edges are detected in the narrow band centered on this edge, so the speed of edge detection is greatly improved. Finally, the feasibility of the method is validated by practical application to SAR (Synthetic Aperture Radar) images.
CORS TV: A Network Coding Based P2P TV System
Computer Science. 2011, 38 (12): 20-27. 
Abstract PDF(717KB) ( 219 )   
Network coding can achieve the maximum throughput of multicast, which shows its potential to decrease the playback delay of users in P2P TV systems, to improve the delivery ratio, and therefore to improve the video quality of the whole system. In order to improve the performance of P2P TV systems, a system based on Random Linear Network Coding (RLNC), called CORS TV, was designed and implemented with carefully designed topology construction and data transmission. Taking full advantage of network coding to further improve P2P TV performance, CORS TV has a modular architecture in which each function module runs in an independent thread. The system integrates a Gossip protocol for topology construction, uses a most-first algorithm to determine the initial playback point, and employs a push-based transmission scheme with a First Come First Served (FCFS) transmission algorithm. The experimental results validate the effectiveness of the CORS TV design. Compared with current P2P TV systems, it has the advantages of a lower redundancy ratio, a higher delivery ratio, and better playback quality for users.
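A minimal sketch of the RLNC encoding step the abstract builds on, assuming the usual formulation over GF(2^8); the block contents, generation size, and field choice are illustrative assumptions, not details from the paper:

```python
import os

def gf_mul(a: int, b: int) -> int:
    """Multiply two bytes in GF(2^8) modulo the polynomial 0x11B."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return p

def rlnc_encode(blocks):
    """Produce one coded packet: (random coefficients, coded payload)."""
    coeffs = list(os.urandom(len(blocks)))   # one random coefficient per block
    size = len(blocks[0])
    payload = bytearray(size)
    for c, block in zip(coeffs, blocks):
        for i in range(size):
            payload[i] ^= gf_mul(c, block[i])
    return coeffs, bytes(payload)

generation = [b"peer-chunk-0001!", b"peer-chunk-0002!", b"peer-chunk-0003!"]
coeffs, coded = rlnc_encode(generation)
print(coeffs, coded.hex())
```

A receiver that collects as many coded packets with linearly independent coefficient vectors as there are blocks can recover the generation by Gaussian elimination over the same field, which is why coded packets from any peer are equally useful.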
Cloud-based Architecture for Media Forensics
Computer Science. 2011, 38 (12): 28-30. 
Abstract PDF(348KB) ( 227 )   
Current research on media forensics faces the following key challenges: heterogeneity, scalability, and efficiency. We proposed a new computing paradigm for media forensics, called the "Media Forensics Cloud", which integrates the concept of cloud computing to handle large-scale media forensics services effectively. To address the above challenges, we first presented a layered architecture for the Media Forensics Cloud, which can provide efficient and scalable media forensics services. Then we presented its key technologies, namely resource virtualization, forensic task parallelization, and service adaptation, by which users can efficiently access media forensics services from different terminals anytime and anywhere. Lastly, we presented a case study of the Media Forensics Cloud to demonstrate the feasibility of our design.
Coordinated Dynamic Spectrum Allocation Mechanism Design Based on Combined Priority Scheduling
Computer Science. 2011, 38 (12): 31-35. 
Abstract PDF(432KB) ( 248 )   
In coordinated spectrum access, when the networks sharing the same spectrum have highly varying spectrum requirements, existing mechanisms cannot provide good end-to-end delay and fairness. To tackle this problem, a coordinated dynamic spectrum allocation mechanism based on combined priority scheduling was proposed. The mechanism models spectrum allocation as a proportional fairness problem and sets dynamic priorities for networks by combining the service grade of each network and the arrival time of its spectrum requirement. After solving the proportional fairness model, the mechanism allocates spectrum according to the results, under the criterion that the center frequency of the allocated spectrum is close to the spectrum requirement. The simulation results show that the proposed mechanism provides better fairness of spectrum allocation than existing mechanisms such as coordinated dynamic spectrum allocation based on network (or terminal) static priority, network (or terminal) dynamic priority, or request dynamic priority, and also improves the end-to-end delay of packet transmission.
Novel Routing Algorithm Based on Trustworthy Core Tree in WSN
Computer Science. 2011, 38 (12): 36-42. 
Abstract PDF(622KB) ( 236 )   
A novel routing algorithm based on a trustworthy core tree (TCTR) in WSN was proposed in this paper. It aims to prolong network lifetime as well as increase network security in a hierarchically clustered sensor network. Cluster heads with higher residual energy and trust level were elected from the underlying sensor nodes. A minimum path-loss tree algorithm was borrowed to organize all cluster heads into a trustworthy core tree with the sink node as the tree root. The trustworthy core tree is then expanded to cover all nodes, so that each node reports to the sink node along a determined route. A trust model was integrated into TCTR to evaluate each node's trust level and detect malicious nodes. Simulation results testified to the effectiveness of the algorithm in producing a longer network lifetime and a safer network.
Design and Implementation of Music Content Dynamic Encryption and License Authorization System
Computer Science. 2011, 38 (12): 43-48. 
Abstract PDF(492KB) ( 229 )   
Copyright infringement cases often occur as the digital music business booms. MP3-based online music is downloaded and disseminated freely, so the copyright of MP3 files urgently needs protection. Based on existing DRM protection technologies and solutions, an MP3 audio file DRM protection scheme for the Windows Mobile system was designed. The solution does not require analysis of the MP3 encoding and decoding process: only by analyzing the structure of MP3 files, and combining the AES encryption algorithm, it encrypts the file content directly. Once the content is packaged, it is uploaded to the service platform and downloaded by the user. The user side uses AES decryption, a license strategy, digital signatures, and CA certificates to accomplish real-time decryption and playback. Experiments show that the scheme offers high security, real-time operation, and fast encryption, so it can be well applied to music copyright protection on mobile terminals, achieving the purpose of digital copyright protection.
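A hedged sketch of the "encrypt the payload, keep the structure" idea: AES in CTR mode encrypts everything after each frame header so players can still parse frame boundaries, and the same keystream decrypts in real time. The 4-byte header length and the frame walking are simplified assumptions; a real implementation must parse MP3 frame headers per the spec and manage per-frame nonces:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

HEADER_LEN = 4  # MP3 frame headers are 4 bytes; frame locating is omitted here

def encrypt_frame(frame: bytes, key: bytes, nonce: bytes) -> bytes:
    """Leave the frame header in the clear, AES-CTR encrypt the payload."""
    header, payload = frame[:HEADER_LEN], frame[HEADER_LEN:]
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return header + enc.update(payload) + enc.finalize()

def decrypt_frame(frame: bytes, key: bytes, nonce: bytes) -> bytes:
    # CTR is symmetric: applying the same keystream again decrypts,
    # which matches the scheme's real-time decrypt-and-play requirement.
    return encrypt_frame(frame, key, nonce)

key, nonce = os.urandom(16), os.urandom(16)
frame = bytes.fromhex("fffb9064") + b"...compressed audio payload..."
locked = encrypt_frame(frame, key, nonce)
assert decrypt_frame(locked, key, nonce) == frame
```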
Self-adjusting Sampling Area MCL Algorithm for Mobile WSNs
Computer Science. 2011, 38 (12): 49-52. 
Abstract PDF(411KB) ( 215 )   
Localization is one of the key supporting technologies in wireless sensor networks (WSNs). Most existing localization algorithms in the literature are designed for static WSNs, so most of them cannot be applied to mobile WSNs. This work began with a thorough investigation of the Monte Carlo Localization algorithm. On this basis, we proposed a self-adjusting sampling area localization (SA_MCL) algorithm that takes the characteristics of mobile sensor nodes into consideration. SA_MCL uses an interpolation method to process the historical location information of a node in order to estimate its velocity and direction, thereby improving positioning accuracy. Simulation results show that the SA_MCL algorithm improves the positioning accuracy of a node significantly.
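A toy sketch of the sampling-area idea: the node's last two position estimates give a velocity estimate, which re-centers and scales the Monte Carlo sampling box before anchor constraints filter the particles. All constants and the rejection-sampling details are illustrative assumptions:

```python
import math
import random

def sa_mcl_step(history, anchors, comm_range=10.0, n_particles=200):
    """history: last two (x, y) estimates; anchors: anchor positions heard now."""
    (x0, y0), (x1, y1) = history
    vx, vy = x1 - x0, y1 - y0              # velocity estimated from the history
    cx, cy = x1 + vx, y1 + vy              # predicted next position
    spread = max(1.0, math.hypot(vx, vy))  # sampling box tracks the motion scale
    particles = []
    for _ in range(100 * n_particles):     # bounded rejection sampling
        if len(particles) == n_particles:
            break
        px = cx + random.uniform(-spread, spread)
        py = cy + random.uniform(-spread, spread)
        # keep only samples consistent with every heard anchor's radio range
        if all(math.hypot(px - ax, py - ay) <= comm_range for ax, ay in anchors):
            particles.append((px, py))
    if not particles:                      # constraints too tight: fall back
        return cx, cy
    n = len(particles)
    return sum(p[0] for p in particles) / n, sum(p[1] for p in particles) / n

print(sa_mcl_step(history=[(0.0, 0.0), (1.0, 1.0)], anchors=[(3.0, 3.0)]))
```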
Hamming Weight-based Algebraic Side-channel Attack against PRESENT
Computer Science. 2011, 38 (12): 53-56. 
Abstract PDF(351KB) ( 270 )   
This paper examined the theory and model of algebraic side-channel attacks against block ciphers and the method of converting a non-linear Boolean equation system into a SAT problem, proposed a Hamming weight based algebraic side-channel cryptanalysis of PRESENT, reduced the complexity of solving the non-linear Boolean equation system and the sample size of the side-channel attack, and finally verified the validity of the theory through experiments. Results show that, given one known plaintext sample, the attack can recover the 80-bit PRESENT key in 0.63 seconds using the Hamming weights of the S-box inputs and outputs of the first 10 rounds; if the plaintext and ciphertext are unknown, or the Hamming weights of the S-box inputs used are randomly chosen, it can still succeed in recovering the complete PRESENT key.
Certificate-based Aggregate Signature Scheme
Computer Science. 2011, 38 (12): 57-60. 
Abstract PDF(302KB) ( 172 )   
An aggregate signature is a digital signature that can aggregate multiple signatures on different messages. Because of its short length, an aggregate signature is useful for electronic contract signing, the border gateway protocol, and so on. We constructed an efficient aggregate signature scheme with two aggregation modes, ordered and unordered, and gave a security analysis. Compared with existing schemes, our scheme, which requires no proof of knowledge, is more efficient. Finally, we simulated the scheme based on the PBC library and plotted the efficiency curves.
Improved Monte Carlo Node Localization Scheme by Using Multi-energy-level Beacon
Computer Science. 2011, 38 (12): 61-64. 
Abstract PDF(401KB) ( 174 )   
A Monte Carlo based localization algorithm suitable for mobile wireless sensor networks was proposed. Each mobile anchor emits beacons at different power levels. From the information it receives, an unknown node can determine which particular ring or inner circle around that anchor it lies within; this region is called the constraint region. The positions of unknown nodes can then be estimated from few samples using an improved Monte Carlo Localization scheme. A Collinearity Limiting Factor (CLF) was introduced to avoid localization failures caused by beacon collinearity, and a beacon selection scheme was put forward. Simulation results show that the proposed algorithm has a lower localization error and better flexibility under different factors, such as anchor density, moving speed, and ranging error.
Research on Cooperative Relaying Scheme Based on Compressed Packet
Computer Science. 2011, 38 (12): 65-67. 
Abstract PDF(244KB) ( 170 )   
Cooperative Relaying based on Compressed Packet (CRCP) was proposed. The relay nodes encode and retransmit the received packets cooperatively, so the number of transmission slots is decreased and the diversity gain combats wireless fading effectively. The spectral efficiency was analyzed in two-hop multiple access channels. The results show that at high SNR, CRCP has higher spectral efficiency than the traditional strategies. At low and medium SNR, CRCP has much better spectral efficiency than the complex field network coded cooperation based on non-orthogonal transmission (CFNCO-NOT) strategy.
Approach of Kernel Integrity Monitoring Using Hardware Virtualization
Computer Science. 2011, 38 (12): 68-72. 
Abstract PDF(449KB) ( 399 )   
Kernel-level attacks compromise operating system security by tampering with critical data and control flow in the kernel. Current approaches defend against these attacks by applying code integrity or control flow integrity methods. However, they each focus on only one aspect and cannot give a complete integrity monitoring solution. This paper analyzed the kernel integrity principle and derived practical requirements for ensuring kernel integrity. Critical data objects affect operating system functions directly; to ensure data integrity, only certain code may modify critical data objects under certain conditions. All factors affecting code execution sequence are protected and monitored to ensure control flow integrity. An implementation in the Xen VMM (Virtual Machine Monitor) using hardware virtualization, also referred to as HVM (Hardware Virtual Machine), is introduced to protect and monitor the Linux kernel. Experiments show that the solution can detect and prevent attacks and bugs that compromise the kernel.
Quantification Model Considering Uncertainties for Node Reputation in Trusted Networks
Computer Science. 2011, 38 (12): 73-76. 
Abstract PDF(432KB) ( 166 )   
Node reputation in trusted networks is an important trust relationship with many uncertainties. Reputation and the process of its aggregation are accompanied by fuzziness and randomness. The cloud model can scientifically describe the uncertainties of reputation during its aggregation. A converse cloud generation algorithm is adopted to discover the laws of fuzziness and randomness of node reputation and its aggregation process over its whole life. The obtained numeric eigenvalues of the reputation cloud direct the quantification of node reputation in local computation windows. Based on the certainty degree of service satisfaction, together with an attenuation factor, a reputation quantification model was proposed. The certainty degree is a random number with a stable tendency, and the weights in the model are also random numbers with stable tendencies related to the service satisfaction degrees. The proposed reputation quantification model fits well with the uncertainty laws of trust relationships in open networks. Simulation results show that the proposed model maintains stable output and has good anti-attack ability compared with other models.
Research on VMM-based Rootkit and its Detection Technology
Computer Science. 2011, 38 (12): 77-81. 
Abstract PDF(918KB) ( 388 )   
Leveraging virtualization technology, rootkits have greatly improved their stealth. Research on VMM-based rootkits has become a focus in the computer security field. This paper summarized the traditional hiding methods and the bottleneck of in-box techniques, introduced the architectural advantages of the VMM and its software- and hardware-based implementations, and then analyzed the design and operation mechanisms of various VMM rootkits. To overcome the limitations of VMM existence detection, it proposed a new method for detecting malicious VMMs. In addition, this paper discussed the evolution of VMM rootkits and presented how to apply virtualization techniques safely to defend against them.
Further Discussion on SynFlood Attack Detection Based on Distance Computation in Space Geometry
Computer Science. 2011, 38 (12): 82-87. 
Abstract PDF(516KB) ( 184 )   
This paper gave a new method to detect SynFlood attacks by analyzing the relationship between Syn, Fin, and Rst segments in the TCP protocol. First, this relationship is mapped into space geometry: the relationship in a given time frame is mapped to a point, while attack-free behavior is mapped to a line. The distance between the point and the line can hence be used to detect and identify a SynFlood attack. Furthermore, the efficiency and accuracy are improved by using a moving-average technique to smooth the distance described above. The experimental results show that the method can detect both direct and reflected SynFlood attacks accurately with a low false alarm rate. The method can also be deployed in medium- and large-scale networks because of its high packet processing performance.
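A hedged sketch of the detection geometry: per time window the (Syn, Fin, Rst) counts form a point, attack-free traffic clusters on a line through the origin, and distance from that line signals a flood. The fitted line direction, EWMA weight, and threshold below are illustrative assumptions:

```python
import numpy as np

direction = np.array([1.0, 0.8, 0.2])   # assumed attack-free Syn:Fin:Rst ratio
d_hat = direction / np.linalg.norm(direction)

def line_distance(point: np.ndarray) -> float:
    """Distance from a (syn, fin, rst) point to the attack-free line."""
    projection = point.dot(d_hat) * d_hat
    return float(np.linalg.norm(point - projection))

def detect(windows, alpha=0.3, threshold=50.0):
    """EWMA-smoothed distances; flag windows whose smoothed distance is large."""
    ewma, flags = 0.0, []
    for counts in windows:
        ewma = alpha * line_distance(np.asarray(counts, float)) + (1 - alpha) * ewma
        flags.append(ewma > threshold)
    return flags

# the third window has a Syn surge with no matching Fin/Rst completions
print(detect([(100, 80, 20), (120, 95, 25), (5000, 60, 30)]))  # [False, False, True]
```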
Research on One-hop Expansion Enhanced Tree Routing Protocol for Wireless Sensor Networks
Computer Science. 2011, 38 (12): 88-91. 
Abstract PDF(446KB) ( 183 )   
Enhanced tree routing (ETR) is a recently proposed routing protocol for wireless sensor networks; in addition to parent-child links, ETR also uses enhanced links to other one-hop neighbors when this leads to a shorter path than tree routing (TR). In order to explore as many potential shortcut routes as possible, this research proposed a One-hop Expansion Enhanced Tree Routing (OEETR) protocol for wireless sensor networks. For a node making a routing decision, OEETR considers not only the enhanced links it has built with its own one-hop neighbors, but also the enhanced links built by its one-hop father and one-hop sons with their neighbors, to find a shortcut route for packet forwarding. The scope of the optional shortcut routes is thus no longer limited to the enhanced links between the node and its one-hop neighbors, but extends upward to the enhanced links built by its father and downward to those built by its sons, and OEETR chooses the shortest shortcut route for packet forwarding. This research presented the decision process of OEETR and applied the protocol to ZigBee networks. Simulation results reveal that OEETR not only outperforms TR and ETR in terms of hop count, but also consumes less energy than both.
AEMP: A New Reputation Model of Sensor Nodes in WSNs
Computer Science. 2011, 38 (12): 92-95. 
Abstract PDF(339KB) ( 168 )   
A reputation model of sensor nodes in WSNs named AEMP was proposed. AEMP stands for 'Addition Encouragement, Multiplication Punishment'. Compared with other current models, AEMP works with simple calculations and needs fewer resources, so it fits the limited resources of sensor nodes in WSNs. Simulation experiments show that the model can quickly depress the reputation value of a node that exhibits bad communication behavior, and it can also reduce the possibility of wrong judgments. The model affords a new basis for designing WSN routing protocols.
Method for Evaluating Two-terminal Reliability Based on Topology
Computer Science. 2011, 38 (12): 96-99. 
Abstract PDF(347KB) ( 291 )   
In order to evaluate the reliability of a large-scale real-life long-haul communication network, a method for computing network two-terminal reliability based on topology was presented. After simplifying the topology and considering the reliability of edges and nodes, the two-terminal reliability is expressed by transfer matrices. The experimental results show that making the edges directed influences various measures of network performance. Although the failure frequency and failure rate of a connection are quite similar, the locations of the complex zeros of the two-terminal reliability polynomials exhibit strong differences, and the size of the transfer matrices grows as the network grows. The method is valuable for optimizing networks and allocating mean repair times.
Service Selection Approach Considering the Dynamic Change of QoS
Computer Science. 2011, 38 (12): 100-105. 
Abstract PDF(579KB) ( 150 )   
With the rapid development of Web services technology, many Web services with similar functionality exist on the Internet, and how to select suitable services to meet customers' needs and generate a service composition has become a major issue. Existing QoS-based service selection methods usually assume that the QoS values published by service providers are true and fixed, but the quality of services often changes in actual operation. Therefore, this paper put forward a service selection method that considers dynamic changes of QoS data. The method introduces reliable QoS sub-periods, which describe the actual operation of a service more accurately. According to the changes in a service's reliability over time, the method subdivides it into different services for different time periods and uses redundancy to meet the requirements of each period, providing users with multiple service options. Finally, a simulation experiment shows the availability and effectiveness of the method.
Research on Location Updates of Network-constrained Moving Objects in Mobile Computing Environment
Computer Science. 2011, 38 (12): 106-109. 
Abstract PDF(421KB) ( 252 )   
New algorithms based on the segment-based policy were proposed in this paper to detect disconnection in a mobile computing environment and update locations efficiently. The first algorithm uses a fixed-time policy; it can detect disconnection quickly and efficiently, but causes more location updates. The second algorithm uses an adaptive time limit policy and strikes a good balance. Finally, the group-based policy is modified to detect and deal with disconnection. Our experiments show that these algorithms not only decrease the number of location updates, but also detect network disconnection efficiently.
Routing Mechanism Based Algorithm for Fast Path Generation in Variable-Weight Network
Computer Science. 2011, 38 (12): 110-112. 
Abstract PDF(351KB) ( 169 )   
There are a great number of redundant calculations when a large population of vehicles generates paths in large-scale traffic flow simulation. In order to reduce this redundant calculation and speed up path generation, a routing mechanism based algorithm for fast path generation in a variable-weight network was proposed. The algorithm introduces the routing mechanism of computer networks into the traffic flow simulation system and treats each road intersection as a router. It disassembles and stores each calculated shortest path tree into the relevant routers as guiding information of the form "the next direction from A to B is C". Each vehicle obtains its next driving direction by querying the current router when it arrives at an intersection. As historical data, the stored guiding information also speeds up future path generation. Furthermore, the algorithm updates the affected region in time when the weights of the road network change, so the path of each vehicle remains reasonable. The experimental results show that this algorithm can reduce repetitive calculation and improve the speed of path generation.
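A small sketch of the guiding-information idea: run Dijkstra once per destination, then disassemble the shortest path tree into per-intersection "next direction" entries so each vehicle only does a table lookup. The sample road graph and the symmetric-weight assumption are illustrative:

```python
import heapq

def next_hop_table(graph, dest):
    """graph: {node: {neighbor: weight}}. Returns {node: next node toward dest}."""
    dist, table = {dest: 0.0}, {}
    heap = [(0.0, dest)]
    while heap:  # Dijkstra outward from the destination
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():   # assumes symmetric edge weights
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                table[v] = u            # from v, head to u to reach dest
                heapq.heappush(heap, (nd, v))
    return table

roads = {"A": {"B": 2, "C": 5}, "B": {"A": 2, "C": 1}, "C": {"A": 5, "B": 1}}
guide = next_hop_table(roads, dest="C")
print(guide)  # {'A': 'B', 'B': 'C'}: from A the next direction toward C is B
```

When an edge weight changes, only the tables whose shortest path trees pass through the affected region need recomputation, which is the update the abstract describes.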
Robust MP3 Audio Watermarking Algorithm Based on MDCT Spectral Entropy Recognition
Computer Science. 2011, 38 (12): 113-117. 
Abstract PDF(414KB) ( 190 )   
An innovative robust audio watermarking scheme that uses MDCT spectral entropy as implicit synchronization to achieve self-synchronization was proposed in this paper. The algorithm makes full use of the MP3 compression mechanism to repeatedly embed the watermark in audio segments after decoding and, combining the masking effect of the psychoacoustic model, determines suitable low-frequency DWT coefficients to modify. The signals are then recompressed to form an MP3 bitstream that contains the watermark information. Extraction can be accomplished without the original audio and is guaranteed by identifying the sequence of MDCT spectral entropies. The experimental results show that the proposed scheme achieves excellent robustness against the desynchronization attacks and common audio processing operations generated by StirMark.
Load Balancing Algorithm for Hierarchical P2P Networks
Computer Science. 2011, 38 (12): 118-120. 
Abstract PDF(331KB) ( 197 )   
Load imbalance severely restricts the performance of P2P applications. Previous research on P2P load imbalance focuses on flat DHT (Distributed Hash Table) based networks. Recently, hierarchical topologies have received more attention for their many inherent merits. In this paper, a load balancing algorithm was proposed that merges the advantages of virtual servers and hierarchical topology. Simulation results show that our algorithm can ensure fair load distribution over heterogeneous nodes.
Analysis on Topology of P2P Logical Network with Simulation Experiments
Computer Science. 2011, 38 (12): 121-124. 
Abstract PDF(341KB) ( 173 )   
A system for simulating P2P (peer-to-peer) networks was designed and implemented based on a deep analysis of P2P protocols. Large-scale simulation experiments were performed to identify the topology of the logical P2P network. The results of all experiments demonstrate that the logical P2P network has a small average path length and a large clustering coefficient, and that its degree follows an exponential distribution. Applying complex network theory to these features, it is easily derived that the logical P2P network is a small-world network with an exponential degree distribution.
Method of Shellcode Detection Based on Static and Dynamic Mechanism
Computer Science. 2011, 38 (12): 125-127. 
Abstract PDF(331KB) ( 208 )   
Buffer overflow attacks have been a major security problem in recent years: attackers utilize buffer overflow vulnerabilities to take control of other computers. As the vehicle of such attacks, Shellcode is the main target of buffer overflow attack detection. Attackers now tend to employ polymorphic techniques to encode Shellcode, which makes it harder for signature-based NIDS to detect. This paper proposed a new method to detect Shellcode executed under MS Windows, which integrates static analysis and dynamic execution techniques. It introduces new principles of Shellcode detection that enhance both the accuracy and the performance of polymorphic Shellcode detection. A prototype system was implemented and tested, and the test results on both accuracy and performance are quite encouraging.
Research on Distribution Center Information System Model for Internet of Things Based on RFID and SCOR
Computer Science. 2011, 38 (12): 128-130. 
Abstract PDF(341KB) ( 208 )   
This paper collated and reflected on the traditional information feedback mechanism, combined a study of the literature with the operational characteristics of distribution centers, analyzed the informatization of distribution centers, and brought out a model of the logistics information system of a distribution center based on RFID technology and the SCOR model (supply-chain operations reference model). The distribution center system framework was clarified, the relationships between its different levels were discussed, and a solution for applying the Internet of Things to distribution centers was provided. The result is an available reference model of a logistics distribution center under Internet of Things conditions for the modern logistics industry, especially the retail industry.
Answer Set Programming Based Verification of Semantic Web Service Composition
Computer Science. 2011, 38 (12): 131-134. 
Abstract PDF(386KB) ( 143 )   
Formal description and verification of semantic Web service composition are the premise of the correctness of running composite services. This paper described a method for modeling OWL-S (the semantic Web service description language) based on answer set programming and analyzed its advantages. The mapping of several kinds of basic control constructs in the OWL-S process model to Petri nets, used as the intermediate model, was provided, and an algorithm for generating the answer set program was proposed. Meanwhile, temporal constraints were introduced into composite service verification to represent the properties to be checked. Finally, the modeling and verification were applied to a specific case.
Estimation Method of Software Reliability for Safety-critical System
Computer Science. 2011, 38 (12): 135-138. 
Abstract PDF(323KB) ( 266 )   
Importance sampling is a change-of-measure technique for speeding up the simulation of rare events in stochastic systems. In this paper we established a technique for computing optimal state transition probabilities for software reliability estimation based on a Markov usage model. By suitably changing the probabilities of state transitions during test, an iterative method based on the Ali-Silvey distance was proposed for this choice. A learning algorithm for computing the optimal transition probabilities of the Markov chain usage model was also presented, and experimental results of this algorithm were reported.
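A compact sketch of the change-of-measure idea: simulate the Markov usage model under tilted transition probabilities that favor the rare failure state, and unbias each run with its likelihood ratio. The 3-state model and the tilting below are illustrative assumptions, not the paper's learned values:

```python
import random

P = {"S": {"S": 0.90, "F": 0.001, "T": 0.099},   # true usage model, F = failure
     "F": {"T": 1.0}, "T": {}}
Q = {"S": {"S": 0.60, "F": 0.200, "T": 0.200},   # tilted simulation measure
     "F": {"T": 1.0}, "T": {}}

def one_run():
    state, weight, failed = "S", 1.0, False
    while P[state]:                               # "T" is absorbing termination
        nxt = random.choices(list(Q[state]), weights=list(Q[state].values()))[0]
        weight *= P[state][nxt] / Q[state][nxt]   # accumulate the likelihood ratio
        failed = failed or nxt == "F"
        state = nxt
    return weight if failed else 0.0

runs = 20000
estimate = sum(one_run() for _ in range(runs)) / runs
print("estimated failure probability:", estimate)  # true value here is 0.01
```

The estimator stays unbiased under any tilting with the same support; the iterative method in the paper is about choosing that tilting well, which this sketch fixes by hand.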
Description and Verification of Asynchronous Web Service Composition Based on XYZ/ADL
Computer Science. 2011, 38 (12): 139-143. 
Abstract PDF(410KB) ( 153 )   
Concerning Web service composition, this paper emphasized the formal description and verification of asynchronous communication behaviors and timed properties. First, analyzing Web service composition from the software architecture perspective, the interactive behaviors and timed properties were described with XYZ/ADL, which is based on a temporal logic language. Second, a timed asynchronous communication model (TACM) that conforms to the specification of the model checker UPPAAL was proposed. Finally, based on the transformation from XYZ/RE communication command sentences to TACM, the correctness of the asynchronous communication behaviors of the service composition system was verified with UPPAAL.
Research of Human Task Performing System Based on Asynchronous Pattern in BPEL
Computer Science. 2011, 38 (12): 144-146. 
Abstract PDF(339KB) ( 167 )   
BPEL (Business Process Execution Language) is a language for writing automated business processes, but it does not support user interactions. User interactions can be achieved through human tasks. Therefore, an architecture that supports the execution of human tasks in BPEL was proposed. A human task manager outside the engine maintains human activities. BPEL processes interact with the human task manager in an asynchronous pattern, which suits the time uncertainty of human tasks. The message correlation set in BPEL is used to correlate the invoking activity and the waiting activity during the asynchronous invocation.
Analysis and Evaluation of Heuristic Algorithms for Test Suite Reduction
Computer Science. 2011, 38 (12): 147-150. 
Abstract PDF(415KB) ( 321 )   
During the development and maintenance of software, regression testing is used to build confidence in the modified parts of the software and to guarantee that the existing parts suffer no side effects. Regression testing is an expensive process. A test suite reduction algorithm removes redundant test cases from the test suite to obtain a minimal subset that still satisfies the test criterion. This paper surveyed the most important heuristic algorithms for test suite reduction in the literature and used a unified framework and terminology to define and analyze the different algorithms. Typical heuristic algorithms for test suite reduction were analyzed and compared, and future work was presented.
Schema Mapping Based on Semantic Similarity of Data Instances Sets
Computer Science. 2011, 38 (12): 151-155. 
Abstract PDF(421KB) ( 163 )   
Schema mapping, an important issue in Web-integrated information management and applications, has been widely studied in recent years. Most schema mapping methods are based on schema information, and methods based on data instances are rare. In many cases schema information is insufficient or conflicting, and incorrect schema mappings may result. In order to improve the performance of schema mapping, a measure was proposed to evaluate the semantic similarity between attributes, and the semantic difference between two semantically similar attributes in the same schema was analyzed. Further, a mapping approach based on weighted bipartite graphs was proposed. The proposed mapping method was evaluated with experiments and verified to be efficient and effective in obtaining more complete and accurate schema mappings in situations where schema information is incomplete or conflicting.
Conceptual Design Methodology for Fuzzy XML Model
Computer Science. 2011, 38 (12): 156-161. 
Abstract PDF(492KB) ( 158 )   
XML has become the de facto standard for information representation and exchange over the Web. In addition, imprecise and uncertain data are inherent in the real world. Although fuzzy information has been extensively investigated in the context of the relational model, the classical relational database model and its fuzzy extensions to date do not satisfy the need of modeling complex objects with imprecision and uncertainty on the Web. Based on possibility distributions, this paper concentrated on fuzzy information modeling in the fuzzy XML model and the fuzzy IFO model. In particular, a formal approach to mapping a fuzzy IFO model to a fuzzy DTD model was developed.
Research on Watermarking Relational Database Based on Character Field
Computer Science. 2011, 38 (12): 162-166. 
Abstract PDF(537KB) ( 249 )   
An innovative, practical watermarking scheme for relational databases based on character fields was proposed in this paper. The target attribute values are identified through the primary key, a set of user keys, and the watermark span. The characteristics of the attribute values and the watermark are calculated according to defined rules, and the differing bits are chosen as watermark bits. Semantic analysis gives a character from one of the most relevant attributes in the domain, and watermark embedding is the process of editing (inserting/deleting) that character. The embedded watermark is invisible, does not affect the availability of the database, and enables blind extraction. The proposed scheme achieves excellent robustness against common database updates such as inserting, deleting, and modifying records and deleting database fields.
Approach of Ontology Learning from Relational Database Based on FCA
Computer Science. 2011, 38 (12): 167-171. 
Abstract PDF(456KB) ( 191 )   
Ontology learning is a process that extracts semantic information from an existing data model and generates an ontology using a set of predefined mapping rules. Relational databases are the main model for data access and management, and extracting ontologies from relational databases is one of the research hotspots in the ontology engineering field. A common method adopted by domestic and foreign scholars is to construct the ontology using mapping rules between the E-R model and ontology elements, but this method is subjective and hinders the application of the ontology. To address this issue, an approach to ontology learning from relational databases based on formal concept analysis (FCA) was proposed, which can objectively obtain the hierarchical relations of concepts and the semantic relations of data. The proposed method not only keeps the semantic information of the relational data tables, but also shows the advantage of FCA in the automatic extraction of semantic information, so the quality of the final ontology is improved and its field of application is extended. A case study combining the proposed method with a materials service safety database was also presented.
Privacy Preserving Technology for Multiple Sensitive Attributes in Medical Data Publishing
Computer Science. 2011, 38 (12): 171-177. 
Abstract PDF(511KB) ( 252 )   
In view of the privacy leakage problem of secure data publishing when the sensitive data contains multiple attributes, and on the basis of analyzing the multi-dimensional bucket approach, this paper proposed an l-coverage clustering grouping approach based on the same-sensitive-attribute set and the idea of lossy join. It first calculates the same-sensitive-attribute set of each record, and then groups the records satisfying the l-coverage constraint following the idea of clustering. An LCCG algorithm was designed to implement the approach. Experimental results on real-world datasets show that the new model reduces privacy disclosure appreciably and enforces the security of data publishing.
Data Stream Clustering Algorithm Based on Density Grid
Computer Science. 2011, 38 (12): 178-181. 
Abstract PDF(341KB) ( 232 )   
Improving on the defects of existing density-grid-based data stream clustering algorithms, a data stream clustering algorithm was proposed that improves the D-Stream algorithm. The algorithm sets the density threshold of grid cells dynamically according to statistics on cell densities and the number of clusters. To increase the precision of cluster boundaries, a non-uniform division is employed on the boundary grid cells. Experiments on synthetic and real data sets show that the algorithm has fast processing speed and the ability to detect dynamic changes in the data stream, and improves clustering quality.
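A toy sketch of the density-grid machinery this family of algorithms refines: points map to grid cells, cell densities decay over time, and the dense/sparse threshold is set dynamically from the current density statistics. The decay factor, grid size, and mean-based threshold are illustrative assumptions:

```python
from collections import defaultdict

GRID = 1.0      # cell edge length
DECAY = 0.98    # per-arrival density decay factor

density = defaultdict(float)

def insert(point):
    for cell in list(density):
        density[cell] *= DECAY                 # fade stale cells
    cell = tuple(int(c // GRID) for c in point)
    density[cell] += 1.0

def dense_cells():
    if not density:
        return set()
    # dynamic threshold: cells denser than the current mean density
    threshold = sum(density.values()) / len(density)
    return {cell for cell, d in density.items() if d > threshold}

for p in [(0.1, 0.2), (0.3, 0.4), (0.2, 0.1), (5.5, 5.5)]:
    insert(p)
print(dense_cells())   # the (0, 0) cell dominates the sparse (5, 5) cell
```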
Extracting Name Aliases of Mailbox Users from Email Bodies
Computer Science. 2011, 38 (12): 182-186. 
Abstract PDF(514KB) ( 195 )   
Mining user identity information from emails is an important research topic in data mining. Most approaches extract users' names only from email headers, but names appearing in email bodies are usually more suitable for representing the sender's or recipient's identity. This paper focused on extracting users' name aliases from the bodies of plain-text emails. First, to effectively elicit the salutation and signature blocks from email bodies, a salutation and signature block locating algorithm based on statistical methods and rule restrictions was proposed. Then, to extract all valid aliases in the salutation and signature lines, a novel approach was proposed based on name boundary word templates built from the characteristics of words neighboring aliases, which can verify and amend aliases identified by named entity recognition or part-of-speech tagging tools. Results on the Enron corpus indicate that the proposed approaches can efficiently and automatically extract users' aliases from email bodies.
Balanced Space-time Frequent Itemsets Mining over Data Stream
Computer Science. 2011, 38 (12): 187-190. 
Abstract PDF(366KB) ( 155 )   
Data streams are characterized by flow, continuity, and an unbalanced distribution of items, and mining frequent itemsets over a data stream is a significant and challenging task. This paper presented a space-time balanced algorithm for mining frequent itemsets over data streams, named Bala_Tree. The algorithm scans the data stream only once, performs rapid cluster updates and periodic tree reconstruction, and mines frequent itemsets based on a classical algorithm. Experiments show that the algorithm can quickly scan and update data, use memory rationally, and obtain frequent itemsets accurately; the Bala_Tree algorithm is superior to the compared algorithms.
Improved Positive and Negative Association Rules Mining Algorithm
Computer Science. 2011, 38 (12): 191-193. 
Abstract PDF(339KB) ( 236 )   
Aiming at the problems of traditional positive and negative association rule mining, such as scanning the database many times and generating a large candidate frequent itemset, and based on a comparison of recent research on association rule mining, this paper put forward an improved positive and negative association rule mining algorithm that accomplishes the mining with only two dataset scans. It improves the algorithm for mining maximal frequent itemsets, which raises the efficiency of the algorithm, and it refines the confidence measure in order to increase the quality of the mined association rules.
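A minimal sketch of the rule types involved: given itemset supports counted during the dataset scans, derive positive (A => B) and negative (A => ~B) rule confidences, using a correlation test to decide which direction is meaningful. The support values and thresholds are made-up illustration inputs:

```python
def rules(supp_a, supp_b, supp_ab, min_conf=0.6):
    """Derive positive (A=>B) and negative (A=>~B) rules from itemset supports."""
    out = []
    conf_pos = supp_ab / supp_a                 # conf(A => B)
    conf_neg = 1.0 - conf_pos                   # conf(A => ~B)
    corr = supp_ab / (supp_a * supp_b)          # correlation: <1 means negative
    if corr > 1 and conf_pos >= min_conf:
        out.append(("A => B", conf_pos))
    if corr < 1 and conf_neg >= min_conf:
        out.append(("A => ~B", conf_neg))
    return out

# supp(A)=0.5, supp(B)=0.4, supp(A and B)=0.1: A and B are negatively correlated
print(rules(0.5, 0.4, 0.1))   # [('A => ~B', 0.8)]
```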
Pricing Mechanism of TSP Solving Service in Cloud Computing
Computer Science. 2011, 38 (12): 194-199. 
Abstract PDF(540KB) ( 239 )   
The traveling salesman problem (TSP) is a typical path optimization problem with analogous problems and applications in urban transportation planning, logistics, and communication network settings. However, TSP is NP-hard: when the problem scale is very large, a large-scale parallel computing environment such as a cloud computing platform is needed. In this paper, we illustrated a cloud service pricing mechanism with TSP. Generally, a pricing mechanism should be fair, flexible, and dynamic. To be fair and reasonable, two main aspects must be considered when pricing a service. One is the difficulty of solving the problem, including time complexity, space complexity, and the quantity of input and output data. The other is the quality of service, including the precision of the result, the response time, and whether the service is provided at peak time, which can serve the Service Level Agreement between service provider and customer. We then proposed principles for pricing the service and a pricing formula. Finally, a case study on pricing a TSP-solving service was given, which has reference value for pricing NP-hard problem solving in cloud computing environments.
Minimum Joint Mutual Information Loss-based Optimal Feature Selection Algorithm
Computer Science. 2011, 38 (12): 200-205. 
Abstract PDF(513KB) ( 242 )   
In this paper, a minimum joint mutual information loss based optimal feature selection algorithm was proposed. It first finds a non-discriminating feature subset of the original set via a dynamic incremental search strategy, and then eliminates false positives by keeping the joint mutual information loss with respect to the class minimal in each iteration, using a minimal conditional mutual information criterion, so as to obtain an approximately optimal feature subset. Furthermore, for the computationally intractable problem arising in high-dimensional feature spaces that characterizes the existing conditional independence test based on conditional mutual information, a fast implementation of conditional mutual information estimation was introduced and used to implement the proposed algorithm. Experimental results on classification tasks show that the proposed algorithm performs better than representative feature selection algorithms, and results on execution time show that the proposed implementation of conditional mutual information estimation has a considerable advantage.
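A short sketch of the plug-in conditional mutual information estimate I(X; Y | Z) that such independence tests rely on, computed from empirical joint frequencies of discrete variables; the toy data is an illustrative assumption:

```python
from collections import Counter
from math import log2

def cmi(xs, ys, zs):
    """Plug-in estimate of I(X; Y | Z) for discrete samples, in bits."""
    n = len(xs)
    pxyz = Counter(zip(xs, ys, zs))
    pxz, pyz, pz = Counter(zip(xs, zs)), Counter(zip(ys, zs)), Counter(zs)
    total = 0.0
    for (x, y, z), c in pxyz.items():
        # p(x,y,z) * log [ p(x,y,z) p(z) / (p(x,z) p(y,z)) ], counts cancel the 1/n
        total += (c / n) * log2((c * pz[z]) / (pxz[(x, z)] * pyz[(y, z)]))
    return total

x = [0, 0, 1, 1, 0, 1, 0, 1]
y = [0, 0, 1, 1, 1, 0, 0, 1]
z = [0, 0, 0, 0, 1, 1, 1, 1]
print(cmi(x, y, z))   # > 0: X and Y remain dependent given Z
```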
sqrt(3) Subdivision Based on Laplacian Coordinate
Computer Science. 2011, 38 (12): 206-208. 
Abstract PDF(545KB) ( 150 )   
Considering the influence of the original mesh on the subdivided limit surface, we proposed a sqrt(3) subdivision method based on Laplacian coordinates. In order to preserve the details of the original mesh, the Laplacian coordinate of each face's central point is interpolated and used to adjust that point. For boundary face subdivision of non-closed meshes, we pointed out a deficiency of the original sqrt(3) method and proposed a new unified boundary subdivision method, which controls the growth of the number of faces and has the advantages of stability and operability. The experimental results show that the method preserves the original mesh's features on the limit surface while still producing a smooth limit surface mesh.
Granularity Transformation Operators for Granular Computing
Computer Science. 2011, 38 (12): 209-212. 
Abstract PDF(320KB) ( 213 )   
The triarchic theory of granular computing offers a conceptual framework for granular computing. It emphasizes the exploitation of useful structures known as granular structures, characterized by multiple levels and multiple views. This paper studied granular structures in graphs: granules and levels in a graph were defined based on the vertex set, and granular structures in the graph were defined based on partial orders. Based on these granular structures, "zooming-in" and "zooming-out" operators were proposed; the "zooming-in" operator handles the shift from a fine granularity to a coarse granularity, and the "zooming-out" operator handles the change from a coarse granularity to a fine granularity.
Existence of Answer Sets of Normal Logic Programs
Computer Science. 2011, 38 (12): 213-220. 
Abstract PDF(608KB) ( 171 )   
Determining the existence of answer sets of logic programs is a key problem in answer set programming and is NP-complete. Current methods for determining the existence of answer sets are mainly based on the evenness or oddness of the number of edges in negative circles. The limitation of this kind of method is that if a normal logic program is not stratified, the existence of its answer sets cannot be accurately determined. A novel negative-circle-based method was proposed to solve this problem. An algorithmic framework and the correctness of the method were presented in this paper, and its effectiveness was illustrated through example analysis.
Particle Swarm Optimization Algorithm Based on Stable Strategy
Computer Science. 2011, 38 (12): 221-223. 
Abstract PDF(252KB) ( 325 )   
An improved particle swarm optimization algorithm based on the evolutionarily stable strategy was proposed to avoid the problem of local optima. The key of this algorithm is that, by setting a stability parameter, the optimal individuals of the population search for the optimum according to the standard particle swarm algorithm, while the rest of the population is mutated randomly. The operator thus keeps the number of best individuals at a stable level while enlarging the search space. The performance of this algorithm shows that it can effectively avoid premature convergence; moreover, it improves the ability to find the optimum and increases the convergence speed.
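An illustrative sketch of the stable-strategy split described above: a stability parameter keeps the best fraction of the swarm on the standard PSO update while the remainder is re-randomized to widen the search. The sphere objective and all constants are assumptions for demonstration:

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def pso_ess(dim=5, swarm=30, iters=200, stable=0.7, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=sphere)[:]
    for _ in range(iters):
        order = sorted(range(swarm), key=lambda i: sphere(pos[i]))
        elite = set(order[: int(stable * swarm)])   # stability parameter
        for i in range(swarm):
            if i in elite:                          # standard PSO update
                for d in range(dim):
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * random.random() * (pbest[i][d] - pos[i][d])
                                 + c2 * random.random() * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
            else:                                   # random mutation of the rest
                pos[i] = [random.uniform(-5, 5) for _ in range(dim)]
            if sphere(pos[i]) < sphere(pbest[i]):
                pbest[i] = pos[i][:]
                if sphere(pbest[i]) < sphere(gbest):
                    gbest = pbest[i][:]
    return gbest, sphere(gbest)

print(pso_ess())
```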
Research on Boosting-based Imbalanced Data Classification
Computer Science. 2011, 38 (12): 224-228. 
Abstract PDF(389KB) ( 443 )   
This paper investigated boosting-based imbalanced data classification algorithms. Through a deep analysis of existing algorithms, a weight sampling boosting algorithm was proposed. By changing the data distribution through weight sampling, the trained classifier is made suitable for imbalanced data classification. The essence of the proposed algorithm is that the loss function of naive boosting is adjusted by the sampling function and the positive examples are emphasized, so that the classifier focuses on classifying these examples correctly, and the recognition rate of positive examples is finally improved. The new algorithm is simple and practical, and on the UCI data sets it has been shown to outperform naive boosting and previous algorithms on imbalanced data classification problems.
Partial Cut Set Algorithm for Maximum-flow of Networks
Computer Science. 2011, 38 (12): 229-231. 
Abstract PDF(249KB) ( 249 )   
The network maximum-flow problem is a classical problem in graph theory. First, based on the discernibility matrix attribute reduction algorithm of rough set theory, a partial cut set matrix is defined. Afterwards, all cuts are found through meet and join operations on sets, and the minimum cut is derived. Finally, the maximum flow of the network is obtained with the assistance of the max-flow min-cut theorem.
Research and Implementation on Visualization System for PPI Network
Computer Science. 2011, 38 (12): 232-235. 
Abstract PDF(382KB) ( 221 )   
Research on protein-protein interaction (PPI) networks has become a hot spot in the biomedical domain. By analyzing and clustering PPI networks, researchers detect complexes for a better understanding of cellular organization. Visualizing a PPI network during analysis intuitively presents the organization of the network, which makes it easy to compare clustering methods and promotes research on PPI networks. We designed and implemented a PPI network visualization system with JUNG, a network modeling and visualization library. We also integrated several effective graph clustering algorithms into the system and implemented a protein complex detection method based on protein function annotation. The original network and the clusters can be navigated conveniently in two-dimensional network views.
Construction and Learning Method of Rough Logic Neural Network
Computer Science. 2011, 38 (12): 236-238. 
Abstract PDF(336KB) ( 228 )   
The traditional rough logic neural network can support research on information systems and decision-making and reveal the substance of rough set theory, but it cannot obtain good results when dealing with non-single-value inputs. Rough neurons with upper and lower boundaries can deal with this problem, and with the development of rough sets, the concept of upper and lower boundaries has been widely used. Combining these advantages, this paper proposed the construction and learning of a kind of rough logic neural network made up of a rough logic neural network and rough neurons (each variable in this model has both upper and lower bounds), called the boundary rough logic neural network. The paper first gave the basic knowledge of rough neurons, rough logic, and decision-making, then proposed the structure of the boundary rough logic neural network and its learning methods, gave two models of it, and compared their advantages and disadvantages. It indicated that, compared with the traditional rough logic neural network, this type of neural network is more efficient when dealing with non-single-valued problems and continuous function approximation. Finally, directions for optimization were proposed.
ILDM: Information Lifecycle Dynamic Management
Computer Science. 2011, 38 (12): 239-241. 
Abstract PDF(320KB) ( 168 )   
The most important basis for data classification in fully automated Information Lifecycle Management (ILM) is the value of the data and how it changes over time. Different from previous value models that use the degree of information usage to classify files, the proposed ILDM, based on ILM, uses the data distribution to classify information. Recency, the degree of information usage, and the data distribution are all taken into consideration in this model. The experiments show that this method helps reduce the migration workload and enhance the utilization ratio of system resources.
Mandarin Prosodic Break Automatic Detection Based on Complementary Model
Computer Science. 2011, 38 (12): 242-246. 
Abstract PDF(398KB) ( 226 )   
Automatic prosodic break detection and annotation are important for both speech understanding and natural speech synthesis. We developed a complementary model to detect Mandarin prosodic breaks using acoustic, lexical, and syntactic evidence. Our proposed method has the following advantages: (1) we do not adopt the independence assumption between the acoustic features and the lexical and syntactic features; (2) the complementary models realize complementarity at the model level by exploiting both the features of the current syllable and its contextual features, taking advantage of each model. We verified the proposed method on a speech corpus of Chinese discourse (ASCCD), where a prosodic break detection precision rate of 90.34% is achieved, a 6.09% improvement over the baseline.
Lagrange Twin Support Vector Regression
Computer Science. 2011, 38 (12): 247-249. 
Abstract PDF(277KB) ( 177 )   
This paper proposed a fast support vector regression algorithm. The algorithm converts the quadratic programming problems (QPPs) with two groups of linear inequality constraints into two smaller QPPs, each with only one group of linear inequality constraints. Each of the small QPPs is solved by an iterative algorithm that converges from any starting point and needs no quadratic optimization packages, so the algorithm is fast. Experimental results on several benchmark datasets demonstrate the effectiveness of the proposed algorithm.
Improved ISOMAP for a Single Manifold with a Gap
Computer Science. 2011, 38 (12): 250-254. 
Abstract PDF(415KB) ( 202 )   
The ISOMAP algorithm has been applied successfully to uniform-density datasets drawn from a single manifold. However, given a uniform-density dataset with a gap, ISOMAP may fail. In this paper, the G-ISOMAP (ISOMAP with a Gap) algorithm was presented, which exploits the characteristics of the gap in the dataset. The algorithm first finds the pairs of data points whose Euclidean distances are shortest between the separated submanifolds and makes each pair neighbors of each other; ISOMAP is then applied to find the low-dimensional embedding. The difference and relationship between G-ISOMAP and ISOMAP are discussed theoretically: ISOMAP is a special case of G-ISOMAP, and G-ISOMAP is an extension of ISOMAP. The experimental results show that the proposed algorithm performs best among the frequently used manifold learning algorithms on several datasets with a gap.
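A hedged sketch of the gap-bridging step: build the kNN graph, find the disconnected submanifolds, and join each pair of components through their closest Euclidean point pair before running standard ISOMAP on the repaired graph. The scipy/scikit-learn utilities and the toy two-cluster data are assumptions:

```python
import numpy as np
from scipy.sparse.csgraph import connected_components
from sklearn.neighbors import kneighbors_graph

def bridge_gap(X, k=3):
    g = kneighbors_graph(X, n_neighbors=k, mode="distance")
    n_comp, labels = connected_components(g, directed=False)
    g = g.tolil()
    for a in range(n_comp):
        for b in range(a + 1, n_comp):
            ia, ib = np.where(labels == a)[0], np.where(labels == b)[0]
            d = np.linalg.norm(X[ia][:, None, :] - X[ib][None, :, :], axis=2)
            i, j = np.unravel_index(d.argmin(), d.shape)
            u, v = ia[i], ib[j]
            g[u, v] = g[v, u] = d[i, j]   # make the closest pair mutual neighbors
    return g.tocsr()                      # feed this to ISOMAP's shortest paths

X = np.vstack([np.random.rand(20, 2), np.random.rand(20, 2) + [5.0, 0.0]])
graph = bridge_gap(X)
print(connected_components(graph, directed=False)[0])   # now 1 component
```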
Rule-based Ellipsis Identification in Chinese
Computer Science. 2011, 38 (12): 255-257. 
Abstract PDF(306KB) ( 198 )   
The phenomenon of ellipsis is widespread in Chinese, and the results of ellipsis resolution are directly affected by the correctness of ellipsis identification, so ellipsis identification is very important. We introduced a learning approach for rule-based ellipsis identification in Chinese. The approach constructs a corpus by manually annotating all sentences in CTB and then proposes a verb-driven method for extracting rules that capture syntactic structure information. Experimental results show that our method is feasible.
Saliency Detection Method Based on Spectrum Tuning Using Lateral Inhibition
Computer Science. 2011, 38 (12): 258-262. 
Abstract PDF(663KB) ( 176 )   
RelatedCitation | Metrics
Traditional spectrum-based saliency detection methods have drawbacks such as unstable detection results and a lack of biological grounding. This paper presented a new saliency detection method based on spectrum tuning using the lateral inhibition mechanism. The method adaptively tunes the Fourier spectrum with multiple nonlinear tuning functions to inhibit redundant image features and enhance salient ones, so that the salient region can be detected effectively. We tested the method on both real natural images and artificial pictures such as psychological patterns. The results indicate that our method outperforms other methods in salient region detection rate, salient region contour detection, and salient region contrast.
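A simplified sketch of the spectrum-tuning idea follows: attenuate the amplitude spectrum nonlinearly (a power law here, standing in for the paper's lateral-inhibition tuning functions), keep the phase, and transform back, so uniform or redundant components are suppressed; the exponent and smoothing scale are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spectral_tuning_saliency(img, gamma=0.5, sigma=3):
    F = np.fft.fft2(img.astype(float))
    amp, phase = np.abs(F), np.angle(F)
    tuned = amp ** gamma                      # nonlinear spectrum tuning
    sal = np.abs(np.fft.ifft2(tuned * np.exp(1j * phase))) ** 2
    return gaussian_filter(sal, sigma)        # smoothed saliency map
```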
New Variational Model for Image Segmentation
Computer Science. 2011, 38 (12): 263-265. 
Abstract PDF(561KB) ( 197 )   
RelatedCitation | Metrics
Because traditional variational level-set image segmentation is computationally slow, a new variational model (PDE) was proposed. First, the energy functional was modified; then the minimization problem was transformed into a convex optimization problem via convex relaxation and a new auxiliary variable was introduced; finally, the additive operator splitting (AOS) numerical scheme was applied. The experimental results show that the proposed segmentation model is feasible and performs better than the conventional variational model.
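To make the convex-relaxation step concrete, here is a compact sketch that relaxes the label function u to [0,1], minimizes a smoothed TV-plus-region energy by projected gradient (a simple stand-in for the paper's AOS solver), and thresholds u at 0.5; the region constants and step sizes are illustrative.

```python
import numpy as np

def convex_segment(f, c1, c2, lam=0.5, tau=0.2, eps=1e-3, iters=300):
    r = (f - c1) ** 2 - (f - c2) ** 2          # region force
    u = np.full_like(f, 0.5, dtype=float)
    for _ in range(iters):
        ux = np.gradient(u, axis=1)
        uy = np.gradient(u, axis=0)
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        # divergence of the normalized gradient = smoothed TV descent
        div = np.gradient(ux / mag, axis=1) + np.gradient(uy / mag, axis=0)
        u = np.clip(u + tau * (div - lam * r), 0.0, 1.0)  # project to [0,1]
    return u > 0.5
```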
Medical Image Fusion Method Based on Lifting Wavelet Transform
Computer Science. 2011, 38 (12): 266-268. 
Abstract PDF(247KB) ( 201 )   
RelatedCitation | Metrics
Multimodality medical image fusion is significant in clinical application and therapy. CT and MRI images were decomposed by the lifting wavelet transform, and fusion rules were designed according to the regional correlation in the low-frequency and high-frequency sub-bands: region variance was adopted as the fusion rule in the low-frequency sub-band, and a rule based on region spatial frequency was adopted in the high-frequency sub-bands. Finally, the fused image was obtained by the inverse lifting wavelet transform. Compared with traditional methods, the experimental results show that the algorithm enriches the information in the fused medical image and effectively retains the edges and details of the source images, with good fusion quality both visually and by quantitative indicators.
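The sketch below illustrates this fusion scheme using PyWavelets' standard DWT as a stand-in for the lifting implementation: the low-frequency band is fused by larger region variance, the high-frequency bands by larger region spatial frequency; window sizes and the wavelet are assumptions.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def region_variance(x, size=3):
    mu = uniform_filter(x, size)
    return uniform_filter(x * x, size) - mu * mu

def spatial_frequency(x, size=3):
    rf = np.zeros_like(x); cf = np.zeros_like(x)
    rf[:, 1:] = (x[:, 1:] - x[:, :-1]) ** 2    # row frequency
    cf[1:, :] = (x[1:, :] - x[:-1, :]) ** 2    # column frequency
    return np.sqrt(uniform_filter(rf + cf, size))

def fuse(ct, mri, wavelet='db1'):
    (a1, d1), (a2, d2) = pywt.dwt2(ct, wavelet), pywt.dwt2(mri, wavelet)
    a = np.where(region_variance(a1) >= region_variance(a2), a1, a2)
    d = tuple(np.where(spatial_frequency(h1) >= spatial_frequency(h2), h1, h2)
              for h1, h2 in zip(d1, d2))
    return pywt.idwt2((a, d), wavelet)         # inverse transform
```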
Image Sequence Based Lunar Landing Locating Algorithm
Computer Science. 2011, 38 (12): 269-273. 
Abstract PDF(417KB) ( 164 )   
RelatedCitation | Metrics
To meet the requirements of a computer-vision landing-location system on a Lunar lander, this paper proposed an algorithm that obtains the landing position of the lander by image sequence registration. The algorithm is based on SURF (Speeded Up Robust Features) and uses least squares to iteratively remove mismatched point pairs; it then locates the image taken just after landing within a wide-range Lunar image annotated with latitude and longitude, obtaining the exact landing position, and analyzes the error of the estimate. The experiments use two kinds of simulated Lunar image sequences. The numerical results show that the algorithm achieves high registration accuracy and stability and is qualified for engineering application.
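As an illustration of the iterative mismatch-removal step, the sketch below fits an affine map between matched keypoints by least squares, drops the pairs with the largest residuals, and refits over several rounds; ORB is used as a freely available stand-in for SURF, and the round count and keep ratio are assumptions.

```python
import cv2
import numpy as np

def register(img_small, img_wide, rounds=5, keep=0.8):
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img_small, None)
    k2, d2 = orb.detectAndCompute(img_wide, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    for _ in range(rounds):
        A = np.hstack([p1, np.ones((len(p1), 1))])       # [x y 1]
        M, *_ = np.linalg.lstsq(A, p2, rcond=None)       # affine fit
        res = np.linalg.norm(A @ M - p2, axis=1)
        order = np.argsort(res)[: int(len(res) * keep)]  # drop worst pairs
        p1, p2 = p1[order], p2[order]
    return M                                             # 3x2 affine map
```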
Hyperspectral Remote Sensing Image Terrain Classification Based on Direct LDA
Computer Science. 2011, 38 (12): 274-277. 
Abstract PDF(313KB) ( 175 )   
RelatedCitation | Metrics
Hyperspectral remote sensing images suffer from high dimensionality. A new hyperspectral image terrain classification method, the direct LDA subspace method, was presented. First, direct linear discriminant analysis (direct LDA) is used to extract features from the original high-dimensional hyperspectral space; a shortest-distance classifier then performs terrain classification in the feature subspace. Recognition results on airborne visible/infrared imaging spectrometer (AVIRIS) hyperspectral images show that the presented method remarkably reduces data dimensionality and improves recognition efficiency.
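A hedged sketch of this pipeline follows, using the usual Yu-Yang formulation of direct LDA (whiten the between-class scatter first, then keep the directions of smallest within-class spread) and a minimum-distance-to-class-mean classifier; dimensions and thresholds are illustrative.

```python
import numpy as np

def direct_lda(X, y, n_keep):
    classes = np.unique(y)
    mu = X.mean(0)
    Sb = sum(np.outer(X[y == c].mean(0) - mu, X[y == c].mean(0) - mu)
             * (y == c).sum() for c in classes)
    Sw = sum(np.cov(X[y == c].T, bias=True) * (y == c).sum() for c in classes)
    # 1) keep only the non-null range of the between-class scatter
    eb, Vb = np.linalg.eigh(Sb)
    keep = eb > 1e-10
    Z = Vb[:, keep] / np.sqrt(eb[keep])          # whitens Sb
    # 2) diagonalize the within-class scatter in that subspace and
    #    keep the directions with the smallest within-class spread
    ew, Vw = np.linalg.eigh(Z.T @ Sw @ Z)
    return Z @ Vw[:, :n_keep]                    # projection matrix A

def min_distance_classify(X_train, y_train, X_test, A):
    means = {c: (X_train[y_train == c] @ A).mean(0) for c in np.unique(y_train)}
    Z = X_test @ A
    labels = list(means)
    D = np.stack([np.linalg.norm(Z - means[c], axis=1) for c in labels])
    return np.array(labels)[D.argmin(0)]
```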
Image Segmentation Based on Sobel Operator and Maximum Entropy Algorithm
Computer Science. 2011, 38 (12): 278-280. 
Abstract PDF(582KB) ( 190 )   
RelatedCitation | Metrics
The accuracy of image segmentation was studied. Segmentation with the traditional Sobel operator easily yields unclear results with low contrast and low accuracy, so this article put forward an image segmentation method combining an improved Sobel operator with 2-D maximum entropy thresholding. The algorithm first segments the image according to its characteristics; the Sobel operator then detects the true edges, and the Sobel edge detection result supplies the threshold used by the 2-D maximum entropy segmentation method. By separating object and background edges and applying different thresholds, the method resolves the inaccuracy caused by local image overlap. Simulation experiments show that the proposed algorithm is robust and fast, and is an effective, applicable segmentation algorithm.
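The sketch below illustrates the combination in simplified form: a maximum-entropy threshold (the 1-D Kapur criterion here, standing in for the paper's 2-D version) segments the image, and a Sobel operator extracts the edge map; the 8-bit input assumption is for the histogram.

```python
import numpy as np
from scipy.ndimage import sobel

def max_entropy_threshold(img):
    """Kapur's 1-D maximum-entropy threshold for a uint8 image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 < 1e-12 or p1 < 1e-12:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        h = -(q0[q0 > 0] * np.log(q0[q0 > 0])).sum() \
            - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t

def segment_with_edges(img):
    mask = img >= max_entropy_threshold(img)
    grad = np.hypot(sobel(img.astype(float), 0), sobel(img.astype(float), 1))
    return mask, grad
```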
License Plate Location Method Based on Image Region Segmentation
Computer Science. 2011, 38 (12): 281-283. 
Abstract PDF(227KB) ( 199 )   
RelatedCitation | Metrics
Angle measurement is frequently required. The north-finder is the most precise apparatus, but it is very expensive and so unwieldy that it is not widely used. In this article, we introduced an angle measurement algorithm based on digital image processing. A special target is introduced to make location and measurement simple: the target is first located, then processed to measure the angle. The method is cheap and easy, although its precision degrades when the angle is very small.
Analysis Method of Embedded Code's Semantic Properties Based on ARM Microprocessor
Computer Science. 2011, 38 (12): 284-287. 
Abstract PDF(666KB) ( 152 )   
RelatedCitation | Metrics
Through in-depth study of the ARM instruction set and its compiled code, we built a binary embedded-code analysis model for ARM microprocessors and discussed a method for semantic analysis of embedded code on the ARM architecture. We described how to extract the semantic properties of code at the level of individual instructions and of instruction sequences, and analyzed instances based on the analytical model. The results show that this method greatly improves the accuracy and readability of code analysis.
Pipeline Hardware Design and Optimization for H.264 Deblocking Filter
Computer Science. 2011, 38 (12): 288-292. 
Abstract   
RelatedCitation | Metrics
To address the bottlenecks of the H.264 deblocking filter, namely the considerable volume of intermediate data and unacceptable processing speed, an optimization of the pipeline-based deblocking filter was proposed. In the design, the processing sequence and datapath were optimized, and intermediate data were filtered promptly, reducing the hardware resources needed for their storage, cutting the clock cycles of the deblocking filter, and raising the processing speed. Hardware implementation experiments demonstrate that the proposed deblocking filter limits blocking artifacts and accelerates processing while requiring few hardware resources for high-definition video.
Implementation and Principle of Improved MSD Carry-free Adder for Ternary Optical Computer
Computer Science. 2011, 38 (12): 293-296. 
Abstract PDF(303KB) ( 343 )   
RelatedCitation | Metrics
The most important device in the ternary optical computer (TOC) is the ternary optical processor, and many studies have addressed it. So far, the MSD-based TOC adder first rewrites the addend and augend in MSD format; the T and W transformations are then applied, their outputs feed the T' and W' transformations, and finally a T transformation completes the carry-free addition. In this paper, the above addition algorithm was improved and a new computing method was put forward: for the two MSD numbers, first apply the T and W transformations, then the T' and W' transformations, and at the third step apply the W' transformation to complete the carry-free addition. The feasibility and validity of this method are proved theoretically and experimentally. The method reduces the number of basic elements in the optical adder, which lowers the design difficulty and provides a new approach to optical adder design.
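The improved transform ordering is specific to the paper; as background, here is a hedged sketch of the classical three-step MSD carry-free addition it modifies, with the T/W and T'/W' truth tables as commonly given in the MSD literature and a final digit-wise sum standing in for the closing transformation. Digits are in {-1, 0, 1}, least significant first.

```python
# Step-1 tables: a + b = 2*T[a+b] + W[a+b]
T  = {2: 1, 1: 1, 0: 0, -1: -1, -2: -1}
W  = {2: 0, 1: -1, 0: 0, -1: 1, -2: 0}
# Step-2 tables, chosen so the final digit-wise sum never overflows.
Tp = {2: 1, 1: 0, 0: 0, -1: 0, -2: -1}
Wp = {2: 0, 1: 1, 0: 0, -1: -1, -2: 0}

def msd_add(a, b):
    n = max(len(a), len(b)) + 2
    a = a + [0] * (n - len(a)); b = b + [0] * (n - len(b))
    s = [ai + bi for ai, bi in zip(a, b)]
    t = [0] + [T[v] for v in s];  w = [W[v] for v in s] + [0]    # step 1
    s = [ti + wi for ti, wi in zip(t, w)]
    t = [0] + [Tp[v] for v in s]; w = [Wp[v] for v in s] + [0]   # step 2
    return [ti + wi for ti, wi in zip(t, w)]    # step 3 is carry-free

def to_int(digits):                 # little-endian MSD value, radix 2
    return sum(d * 2 ** i for i, d in enumerate(digits))

assert to_int(msd_add([1, -1, 1], [1, 1, 0])) == 3 + 3
```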