Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Volume 40, Issue 8, 2013
Key Technologies of Data Delivery over VANETs
ZHANG Li-feng,JIN Bei-hong and ZHANG Fu-sang
Computer Science. 2013, 40 (8): 1-5. 
Data delivery over VANETs can support data transmission for various services, including traffic information collection, urban life services, emergency alarming, and military command. However, data delivery faces many challenges brought by the rapidly changing network topology, limited wireless channel capacity, etc. This paper analyzed the key challenges, pointed out the main influencing factors, classified the essential mechanisms, and reviewed the existing implementation approaches for data delivery over VANETs. In addition, the paper summarized the optimization strategies for improving data delivery capabilities and concluded with future research trends.
Multisource Information Fusion:Key Issues,Research Progress and New Trends
CHEN Ke-wen,ZHANG Zu-ping and LONG Jun
Computer Science. 2013, 40 (8): 6-13. 
In recent years, a new wave of research on multisource information fusion technologies has arisen both in China and abroad. This paper presented a comprehensive review of the state of the art in information fusion. Firstly, the paper explored the conceptualization and essence of information fusion, then comprehensively reviewed the issues and challenges involved, from the perspectives of both information-processing requirements and fusion-system design considerations, together with the existing major fusion models and approaches, and analyzed future trends in research on information fusion methodologies, including some relatively new areas. Next, a wide range of information fusion applications was surveyed and some emerging application areas were discussed. Finally, a summary and outlook of information fusion research routes were presented.
Skyline Query Processing in Wireless Sensor Networks
WANG Hai-xiang,ZHENG Ji-ping and SONG Bao-li
Computer Science. 2013, 40 (8): 14-23. 
As one of the important means for multi-criteria decision making, skyline query processing plays an increasingly important role in wireless sensor network applications. This paper summarized the studies of skyline query processing techniques in wireless sensor networks. Firstly, it described algorithms for answering skyline queries in centralized databases. Secondly, it listed typical applications of skyline queries in wireless sensor networks. Thirdly, it discussed in detail the skyline query methods for wireless sensor networks under limited energy, storage, and processing abilities. Finally, it proposed several directions for further research.
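The dominance test at the heart of every skyline algorithm can be sketched in a few lines of Python. This is the generic block-nested-loops formulation over a centralized data set (the in-network variants the survey covers build on the same test); the hotel tuples are a hypothetical example, not data from the paper:

```python
def dominates(p, q):
    """p dominates q when p is no worse in every dimension and strictly
    better in at least one (smaller values preferred)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    """Block-nested-loops skyline: keep the points no other point dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical multi-criteria data: (price, distance), both to be minimized.
hotels = [(50, 8), (60, 2), (70, 1), (65, 7), (55, 6)]
print(skyline(hotels))
```

Here (65, 7) is dropped because (60, 2) is both cheaper and closer; the remaining points are mutually incomparable, which is exactly the multi-criteria answer set the abstract refers to.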
Research on Cubic Convolution Interpolation Parallel Algorithm Based on Dual-GPU
LAI Ji-bao,MENG Yuan,YU Tao,WANG Yu-jing,LIN Ying-hao and LV Tian-ran
Computer Science. 2013, 40 (8): 24-27. 
The traditional cubic convolution algorithm suffers from a large computational load and low efficiency when used to magnify remote sensing images. In this paper, GPU computing, a burgeoning high-performance computing technique, was applied to parallelize the traditional cubic convolution, yielding the Cubic Convolution Parallel Algorithm (CCPA). The algorithm divides the pixels equally among the blocks, guaranteeing that each pixel is processed by one thread and that threads execute simultaneously on the GPU, which greatly improves interpolation efficiency. The experimental results show that, compared with the traditional cubic convolution algorithm, this algorithm not only increases the calculation speed but also achieves high image quality after zooming. Meanwhile, the advantages of the algorithm grow with image resolution: for a 10240x10240 image, the GPU is 97.7% faster than the CPU, and the dual-GPU version is twice as fast as a single GPU. With its good image quality and real-time behavior, the algorithm also has practical value for remote sensing image processing in emergency situations such as earthquakes, floods, and other disasters.
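The per-pixel work that each GPU thread performs is an evaluation of the cubic convolution kernel over four neighbouring samples. A scalar Python sketch of the standard Keys kernel (a = -0.5, the usual choice; the paper's actual CUDA decomposition is not reproduced here) looks like this:

```python
def cubic_kernel(x, a=-0.5):
    """Keys cubic convolution kernel; a = -0.5 gives third-order accuracy."""
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def interpolate(samples, t):
    """1-D cubic interpolation at fractional offset t in [0, 1), given the
    four neighbouring samples taken at positions -1, 0, 1, 2."""
    return sum(s * cubic_kernel(t - i) for i, s in enumerate(samples, start=-1))
```

For 2-D magnification each output pixel applies this independently, first along rows and then along columns, which is why one-thread-per-pixel parallelization, as the abstract describes, maps so naturally onto a GPU.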
Design and Implementation of Separated Path Floating-point Fused Multiply-Add Unit
HE Jun,HUANG Yong-qin and ZHU Ying
Computer Science. 2013, 40 (8): 28-33. 
Considering the shortcoming that a fused multiply-add (FMA) unit increases the latency of separate floating-point addition and multiplication operations, a separated-path FMA (SPFMA) unit was designed and implemented. By separating the multiplication and addition paths, the SPFMA unit reduces the multiplication and addition latency from 6 cycles to 4 cycles while keeping the FMA operation latency at 6 cycles, overcoming the shortcoming of the traditional FMA unit. Using a specific technology cell library, the SPFMA was logically synthesized; it can work above 1.2GHz with an area of about 60779.44 um^2. Finally, based on a hardware emulation acceleration platform, the performance of the SPFMA unit was estimated by running the SPEC CPU2000 floating-point benchmarks. All benchmark performances improved, by 5.25% at most and 1.61% on average, which shows that the SPFMA unit helps to further improve floating-point performance.
WPP-L2: A Way-predicting Algorithm for Shared L2 Cache of Multi-core Processors
FANG Juan,GUO Mei and DU Wen-juan
Computer Science. 2013, 40 (8): 34-37. 
This paper proposed WPP-L2, a way-predicting L2 cache mechanism based on cache partitioning, to address the power problem of multi-core processors. WPP-L2 partitions the shared cache for fairness and then uses way-prediction techniques to reduce energy consumption on both prediction hits and misses. The results show that on an eight-core processor, the WPP-L2 cache reduces EDP (Energy Delay Product) by 24.7% compared with a way-predicting L2 cache and by 66.1% compared with a traditional L2 cache, lowering the energy consumption of the L2 cache while maintaining performance.
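The energy argument behind way prediction can be illustrated with a toy model of a single cache set: probe the MRU-predicted way first and wake the remaining ways only on a mispredict. This is a generic sketch of way prediction, not WPP-L2 itself (the cache-partitioning side of the paper is omitted), and the probe counter is only a crude proxy for dynamic energy:

```python
class WayPredictedSet:
    """Toy model of one set of a way-predicted cache."""

    def __init__(self, n_ways=4):
        self.ways = [None] * n_ways   # tags currently resident
        self.predicted = 0            # MRU way, probed first
        self.fill = 0                 # naive round-robin fill pointer
        self.ways_probed = 0          # crude proxy for dynamic energy

    def access(self, tag):
        if self.ways[self.predicted] == tag:
            self.ways_probed += 1               # fast hit: one way woken
            return "fast-hit"
        self.ways_probed += len(self.ways)      # mispredict: probe all ways
        if tag in self.ways:
            self.predicted = self.ways.index(tag)
            return "slow-hit"
        self.ways[self.fill] = tag              # miss: fill and predict it
        self.predicted = self.fill
        self.fill = (self.fill + 1) % len(self.ways)
        return "miss"
```

On the access sequence A, A, B, A this model probes 13 ways instead of the 16 an always-probe-all 4-way cache would, which is the kind of saving the EDP numbers above quantify at full scale.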
SKP-ABE Based Access Control Method in Cloud Storage
LIU Hong,LI Xie-hua and YANG Bo
Computer Science. 2013, 40 (8): 38-42. 
To protect data on cloud storage servers, this paper proposed a simplified KP-ABE scheme (SKP-ABE) to make access control more flexible and efficient. An efficient access control scheme was proposed that transfers part of the re-encryption and key module to the cloud storage server, so that computational expenses are reduced dramatically. Moreover, this paper used a minimum-influence attribute revocation strategy, which only affects a limited number of attributes and makes attribute revocation easy.
Target Tracking Based Distributed Adaptive Particle Filter in Binary Wireless Sensor Networks
ZHU Zhi-yu and SU Ling-dong
Computer Science. 2013, 40 (8): 43-45. 
To improve real-time performance and reduce communication costs in wireless sensor network tracking, a distributed adaptive particle filter (DAPF) for binary wireless sensor networks was proposed, in which only the filtering value and error variance need to be transmitted between cluster heads, while the number of particles is adjusted online according to the filtering variance to decrease the amount of computation. Simulations were conducted with respect to time consumption, root mean square error (tracking precision), and communication load. The results indicate that DAPF requires significantly less time and communication than centralized and distributed particle filters, and that the root mean square error is only slightly affected by the number of particles.
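The online particle-count adjustment can be pictured as a simple variance-driven rule: spend more particles when the filter is uncertain, fewer when it is confident. The paper does not state its exact rule, so the growth/shrink factors and bounds below are purely illustrative:

```python
def adapt_particle_count(n, variance, target, n_min=50, n_max=2000):
    """Variance-driven particle-count adjustment (illustrative thresholds).
    Grows the particle set when the filtering variance exceeds the target,
    shrinks it when the variance is comfortably below the target."""
    if variance > target:
        return min(int(n * 1.5), n_max)
    if variance < 0.5 * target:
        return max(int(n * 0.75), n_min)
    return n
```

Any rule of this shape trades computation against tracking precision, which is the balance the simulation results above measure.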
Rapid Weak Signal Acquisition Algorithm Based on Bit-flip Detection
LI Wei-bin,ZHANG Ying-xin,SHANG Bao-wei and MA Xian-min
Computer Science. 2013, 40 (8): 46-48. 
For weak GPS satellite signals, a longer coherent integration duration is required to improve receiver sensitivity. Unfortunately, navigation data bit flips limit the integration duration to 20ms, and traditional methods find it difficult to balance sensitivity against computational complexity. In this paper, a rapid weak-signal acquisition algorithm based on transition detection was proposed. Firstly, the possible position of a bit flip is located to within 1ms by comparing the coherent integration results of several groups of signal blocks, and the 1ms portions aligned with the flip at 20ms intervals are removed. Then coherent integration is applied to the remaining 19ms sub-blocks, and non-coherent integration is applied to the results to overcome the bit flips. Theoretical analysis and simulation results show that the algorithm can acquire weak signals at -47dB SNR with higher computational efficiency. The proposed algorithm improves both the sensitivity and the computational complexity of GPS receivers.
Cognitive Radio Spectrum Assignment Based on Binary Bacterial Foraging Optimization Algorithm
LI Yue-hong,WAN Pin,WANG Yong-hua,DENG Qin and YANG Jian
Computer Science. 2013, 40 (8): 49-52. 
Efficient spectrum allocation in cognitive wireless networks is a key technology for dynamic spectrum access. Based on the graph coloring model of spectrum assignment, this paper presented an improved binary bacterial foraging optimization algorithm with a quantum variation operation, using the maximum system efficiency of the cognitive wireless network as the objective function to achieve dynamic allocation of free radio spectrum among cognitive users. Simulations compared this algorithm with the color-sensitive graph coloring algorithm and the traditional binary bacterial foraging optimization algorithm. The results show that the proposed algorithm performs better: it maximizes network utility and increases the secondary users' average utility, and compared with the traditional binary bacterial foraging optimization algorithm it has better optimization ability and faster convergence.
Construction and Implementation of Radio Fuze Echo Simulation with Dynamic Scene
LU Zhao-gan,LIU Yu-jiong and YIN Ying-zeng
Computer Science. 2013, 40 (8): 53-58. 
By analyzing the encounter process between a radio fuze and its targets, signal models of the radio fuze transceiver were given to simulate echo waveforms in dynamic scenes constructed with professional 3D modeling software. The simulator decouples scene modeling, movement control, and the simulation algorithm, and has good generality and extensibility, so different radio fuze systems with arbitrarily complex scenes can be simulated. The simulator was built as an independent echo simulation block that can be integrated with commonly used system simulation software and has an interface for dynamically exchanging data with scene models. One simulation instance indicates that the simulated echo waveform can reflect the signal changes during the process of the radio fuze meeting its targets.
Design of Wireless PCG Monitor System Based on nRF24L01
RI Jaeg-wang,PAE Jong-hon and JIANG Hong-bo
Computer Science. 2013, 40 (8): 59-62. 
This paper introduced the design of a wireless PCG (phonocardiogram) monitor based on the nRF24L01. The controller is an STC12C5A60S2, an 8-bit MCU; its 10-bit ADC converts the PCG signal after amplification and filtering. The data are written into the nRF24L01 memory and transmitted; the receiver takes the data and sends them to a computer for display and analysis. Test results show that indoor communication works well and the PCG waveform meets the requirements. In addition, the communication protocol and the program were also analyzed.
Research on Mechanism of Xen Virtual Machine Dynamic Migration Based on Memory Iterative Copies
XIONG An-ping and XU Xiao-long
Computer Science. 2013, 40 (8): 63-65. 
To reduce the live migration time of virtual machines, the pre-copy algorithm is used in Xen to choose the right moment for the stop-and-copy phase, which gives good migration performance under low or no load. Under heavy load, however, pre-copy must repeatedly iterate over memory pages, which seriously degrades performance. Based on Xen's iterative-copy algorithm, we proposed a memory-fragment iterative-copy mechanism that reduces live migration time by cutting short the termination condition of iterative copying. Experimental results show that the memory-fragment iteration mechanism improves the live migration performance of Xen virtual machines.
Strategies of Energy Hole Avoiding for Wireless Sensor Networks Based on Ant Colony Algorithm
LI Bin,WANG Zhen and LIU Xue-jun
Computer Science. 2013, 40 (8): 66-71. 
Wireless sensor networks (WSNs) exhibit a characteristic energy hole phenomenon, and the random, self-adaptive nature of the ant colony algorithm makes it well suited to the WSN environment. Based on an analysis of the effectiveness of existing approaches to mitigating the energy hole problem, this paper presented a strategy for avoiding local energy holes based on the ant colony algorithm, which uses the self-adaptive ant colony algorithm to avoid energy holes and finally finds an optimal path. Simulation results show that the algorithm can maximize the network lifetime.
RQRR:A New Fair and Efficient Packet Scheduling Algorithm
LIU Gui-kai and GAO Lei
Computer Science. 2013, 40 (8): 72-78. 
Aiming at queues of variable-length packets, a new packet scheduling algorithm named Resilient Quantum Round Robin (RQRR) was presented. Unlike existing algorithms, the quantum given to each flow in a round is not fixed but is calculated from the transmission situation of all flows in the previous round. Theoretical analysis shows that the relative fairness measure of RQRR has an upper bound of 7Max-1, where Max is the largest packet size. RQRR takes O(1) processing time per packet and is simple to implement in high-speed networks.
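RQRR belongs to the deficit/quantum round-robin family. The fixed-quantum baseline it makes "resilient" can be sketched as follows; the resilient per-round quantum recalculation is the paper's contribution and is not reproduced here:

```python
from collections import deque

def deficit_round_robin(flows, quantum, rounds):
    """Fixed-quantum DRR over variable-length packet queues.
    `flows` maps flow id -> deque of packet sizes (bytes); returns the
    order in which packets are transmitted."""
    deficit = {f: 0 for f in flows}
    sent = []
    for _ in range(rounds):
        for f, queue in flows.items():
            if not queue:
                deficit[f] = 0          # idle flows may not bank credit
                continue
            deficit[f] += quantum
            while queue and queue[0] <= deficit[f]:
                pkt = queue.popleft()
                deficit[f] -= pkt
                sent.append((f, pkt))
    return sent
```

With flows A = [500, 500] and B = [1000] and a 500-byte quantum, B's large packet must wait until its banked deficit covers it; RQRR's varying quantum targets exactly this kind of round-to-round imbalance.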
Research about Efficient and Scalable Hybrid Memories at Fine-granularity Cache Management
JIANG Guo-song
Computer Science. 2013, 40 (8): 79-82. 
Hybrid main memories combine DRAM, used as a cache, with scalable non-volatile memories such as phase-change memory, and can provide much larger storage capacity than traditional main memories. For hybrid main memories with high performance and scalability, a key challenge is to effectively manage the metadata (e.g., tags) for data cached in DRAM at fine granularity. Based on the observation that storing metadata in the same DRAM row as the corresponding data exploits DRAM row-buffer locality, this paper reduced the overhead of a fine-grained DRAM cache by using a small buffer to cache the metadata of recently accessed cache lines. We also developed an adaptive policy to choose the best granularity when migrating data into DRAM. On a hybrid memory with a 512MB DRAM cache, our proposal using an 8KB on-chip buffer achieves performance within 6% of a conventional 8MB SRAM metadata store while being 18% more energy-efficient, even when the energy overhead of the large SRAM metadata store is not counted.
Design and Implementation of Distributed SCADA System
SHU Da-you,FENG Xuan,LU Jun and GUO Ben-jun
Computer Science. 2013, 40 (8): 83-85. 
Traditional SCADA is built on copper-wire links and is commonly used for small-scale data collection. With the advance of industrial informatization, SCADA systems are gradually evolving toward IP-based operation and massive data acquisition. In this transition, SCADA faces two problems: self-adaptation of IP data acquisition protocols, and distributed processing of massive acquired data. To address these problems, this paper designed and implemented a distributed SCADA based on soft-bus distributed data pipelines and a secondary distributed scheduling mechanism, which can be effectively applied to IP-based large-scale SCADA systems.
Implementation of Message Transmission of IEC 61850 Standard Based on Embedded Real-time Operating System VxWorks
WU Qian and ZHOU Qing
Computer Science. 2013, 40 (8): 86-89. 
To address the poor interoperability and difficult information interaction among intelligent electronic devices (IEDs) in substations, this paper explored the implementation of IEC 61850 message transmission on the embedded real-time operating system VxWorks. Using the multitasking environment and the inter-task communication and synchronization functions offered by the VxWorks Wind microkernel, it discussed the task scheduling strategy and the communication synchronization and mutual exclusion mechanisms, studied the communication mapping of the information model and the transmission service for high-speed packets, constructed the concrete information model defined by the IEC 61850 standard, analyzed the implementation of high-speed packet handling in VxWorks, and gave a recommended optimal scheme. Preliminary experiments on automatic backup power switching for economical transformer operation show that adopting the IEC 61850 international standard for substation automation data communication facilitates the monitoring of power parameters and leaves ample room for system extension.
Phase Match Separate Method between Signal and Chaos
WU Peng,WANG Chen and ZHAO Zhi-gang
Computer Science. 2013, 40 (8): 90-92. 
To extract a signal from a chaotic background, this paper put forward a phase-matched separation method for removing the chaotic background. The method differs from other methods in that it neither reconstructs the phase space to estimate the chaos beforehand nor performs a geometric analysis of the chaos in a finite-dimensional phase space. Its efficiency was validated by simulations with sinusoidal signals of different amplitudes and phases in a chaotic background, and a separation degree analysis shows that the separation effect is distinct.
Selection Methods Based on Block Diagonalization Precoding for Multiuser Multiantenna Cognitive Radio System
ZHOU Xiang-zhen and HU Hai-bo
Computer Science. 2013, 40 (8): 93-95. 
In a multiuser multi-antenna cognitive system with cooperative limited feedback, the channel direction information suffers from quantization error, so existing user selection methods cannot maximize the secondary-user system capacity and they increase interference to the primary-user system. To eliminate the influence of the quantization error on the primary and secondary user systems, this paper put forward a statistically-independent-factor secondary-user selection method based on BD (block diagonalization) precoding. Analysis and simulation results show that the proposed scheme effectively enhances the secondary-user system capacity under primary-user interference constraints.
New Three-level Symmetry Scheme of Traitor Tracing
SU Jia-jun and WANG Xin-mei
Computer Science. 2013, 40 (8): 96-99. 
This paper presented a new symmetric traitor tracing scheme. Based on broadcast encryption techniques and hash function theory, the corresponding key scheme, encryption scheme, decryption scheme, and traitor tracing algorithm were constructed. Proper values of the system parameters were obtained via the Chernoff bound. Compared with existing CFN symmetric schemes, the scheme has lower personal-key storage complexity, lower user computing complexity, and less data redundancy, and it can efficiently resist collusion key attacks in broadcast encryption.
DTN Resilience Quality Adaptive Prototype System Based on ASSFR
LIU Zhen,QI Yong,LI Qian-mu,HAN Hui and ZHANG Hong
Computer Science. 2013, 40 (8): 100-108. 
Taking the support of timeliness- and resource-sensitive service quality assurance in DTNs as the research background, this paper integrated the cross-layer ideas of the INSIGNIA architecture with the intelligent management mode of the PBNM architecture to design a distributed quality assurance system, the DTN resilience quality adaptive system. Furthermore, this paper improved a previous routing protocol and proposed a new one, called Adaptive Seed Spray and Focus Routing (ASSFR). Finally, the ONE network simulator was used to analyze and evaluate the performance of the prototype system. The results indicate that, compared with the MaxProp and Epidemic routing protocols, the prototype system reduces the message loss rate, end-to-end delay, and routing overhead, and improves the message delivery rate.
New Related-key Rectangle Attack on Full ARIRANG Encryption Mode
LIU Qing and WEI Hong-ru
Computer Science. 2013, 40 (8): 109-114. 
The security of the ARIRANG encryption mode was re-evaluated with the related-key rectangle attack. First, new high-probability related-key rectangle distinguishers for 38 and 39 rounds were found, and improvements were made on them. The main idea is to use both modular-subtraction and XOR differentials instead of XOR differentials only, and to let the distinguisher outputs range over a differential set rather than a single XOR differential. All the results based on these new high-probability distinguishers are better than previous ones. The best result is an attack on the full ARIRANG-256 encryption mode with data complexity 2^220.79 and time complexity 2^155.60.
Spatial K-Anonymity Reciprocal Algorithm Based on Locality-sensitive Hashing Partition
HOU Shi-jiang,ZHANG Yu-jiang and LIU Guo-hua
Computer Science. 2013, 40 (8): 115-118. 
Spatial K-anonymity is an important privacy measure to prevent the disclosure of personal data; the main methods are based on the User-Anonymizer-LBS model. This paper proposed a spatial K-anonymity algorithm based on locality-sensitive hashing partitioning. The algorithm is shown to preserve both locality and reciprocity with moderate computational complexity. Finally, experimental results on effectiveness (minimum anonymizing spatial region size) and efficiency (construction cost) verify that the proposed method has high performance.
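The partition idea can be pictured with a crude grid hash standing in for the locality-sensitive hash family (the paper's actual LSH construction and its reciprocity guarantee are not reproduced): nearby users land in the same bucket, and undersized buckets are merged until every anonymity group holds at least k users:

```python
def grid_partition(points, cell, k):
    """Bucket 2-D points by grid cell (a stand-in for an LSH family),
    then merge buckets in order so every group has at least k members."""
    buckets = {}
    for x, y in points:
        buckets.setdefault((int(x // cell), int(y // cell)), []).append((x, y))
    groups, pending = [], []
    for members in buckets.values():
        pending.extend(members)
        if len(pending) >= k:
            groups.append(pending)
            pending = []
    if pending:                 # leftovers join the last complete group
        if groups:
            groups[-1].extend(pending)
        else:
            groups.append(pending)
    return groups
```

Each returned group plays the role of an anonymizing region: a query issued on behalf of any member is indistinguishable among at least k users, which is the K-anonymity property the abstract evaluates.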
Function Vulnerability Detection Method Based on Parse Tree
CHEN Yong-yan,SHU Hong-chun and DAI Wei
Computer Science. 2013, 40 (8): 119-123. 
Detecting vulnerabilities in custom software is difficult, and most static vulnerability detection approaches produce large amounts of false information and false positives. This paper proposed a method combining top-down and bottom-up parse trees, based on a CFL (context-free language), that can understand the analyzed source code at function call sites. Without understanding, or only partially understanding, the code inside a function definition, it can analyze the function contract before and after the call, namely the pre-condition and post-condition. By extending the XML grammar rules to object orientation, pre-conditions and post-conditions can handle objects of classes in inheritance relationships. Experiments show that, compared with security analysis tools of the same type, the method avoids repeated function analysis, has good rule scalability, and achieves high accuracy, especially for custom object classes and parameters in custom environments.
Method of Binding Secure Label to Data Object Based on XML
CAO Li-feng,LI Zhong,CHEN Xing-yuan and FENG Yu
Computer Science. 2013, 40 (8): 124-128. 
How to bind a security label to a data object is a key problem that restricts MLS (multi-level security) from practical use in multi-level networks. This paper analyzed XML in depth, expounded XML-based object security labels and their constraints, and then put forward a method of binding security labels to data objects based on XML. Operations such as security label query and object decomposition were discussed in detail, and secure communication based on security labels in a multi-level network was described. The method not only meets the needs of secure communication in multi-level networks but also accomplishes fine-grained mandatory access control, which can improve information availability and reduce binding complexity.
Research on Reliability Optimal Method of Cloud Service
LIANG Yuan-ning,CHEN Jian-liang and YE Li
Computer Science. 2013, 40 (8): 129-135. 
Considering redundancy features and reliability guarantee requirements, this paper explored effective ways of improving cloud service reliability. A trust-redundancy reliability enhancement framework for cloud services was put forward based on a reliability architecture and management model. In the redundancy design phase, based on a failure-polling detection algorithm with a voting protocol, a trust-aware fault-tolerant service selection algorithm was designed, and methods for calculating the minimum number of fault-tolerant services were given. Based on a runtime fault-tolerance management framework for service composition, a cloud service invocation policy that ensures service response time based on failure rules was presented. Experiments show that the fault-tolerant service selection algorithm and the cloud service invocation policy are practical and effective.
Efficient Content Extraction Signature Scheme without Certification
LIU Qing-hua,SONG Yu-qing and LIU Yi
Computer Science. 2013, 40 (8): 136-139. 
To address the low efficiency of existing content extraction signature schemes, this paper proposed an efficient pairing-free content extraction signature scheme based on certificateless public key cryptography, replacing pairings with scalar multiplication over an elliptic curve group. The proposed scheme is proved existentially unforgeable under adaptive chosen-message attacks in the random oracle model. Compared with known pairing-based schemes, our scheme is more efficient and practical.
Achieving (k,l)-Diversity in Privacy-preserving Data Publishing
YANG Gao-ming,LI Jing-zhao,YANG Jing and ZHU Guang-li
Computer Science. 2013, 40 (8): 140-145. 
To avoid disclosure of individual identities and sensitive attributes and to reduce information loss in data publishing, a clustering-based algorithm for achieving (k,l)-diversity (CBAD) was presented. The mixture of discrete and continuous attributes in the data set is fully taken into account during clustering, and probability distributions are used as the metric of similarity between data objects. We resolved the confusion between information loss and the distance between data objects, showed that clustering-based optimal (k,l)-diversity is an NP-hard problem, proposed the concept of privacy protection degree with parameters k and l, and analyzed the complexity of the algorithm. Theoretical analysis and experimental results show that the method can effectively reduce execution time and information loss and improve query precision.
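The publishing condition itself is easy to state in code. A minimal checker for one equivalence class, with an illustrative record layout rather than the paper's data model, could look like:

```python
def satisfies_kl_diversity(group, k, l, sensitive=-1):
    """A published equivalence class meets (k, l)-diversity when it holds
    at least k records and at least l distinct sensitive values.  Records
    are tuples whose last field (by default) is the sensitive attribute."""
    return len(group) >= k and len({rec[sensitive] for rec in group}) >= l

# Hypothetical class: (age range, sex, diagnosis), diagnosis sensitive.
cls = [("30-39", "F", "flu"), ("30-39", "F", "hiv"), ("30-39", "M", "flu")]
```

The clustering algorithm's job is then to form such classes while minimizing the information loss that generalizing the quasi-identifiers (here, age range and sex) incurs.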
Malware Domains Detection by Monitoring Group Activities
ZHANG Yong-bin,LU Yin and ZHANG Yan-ning
Computer Science. 2013, 40 (8): 146-148. 
At present, many botnets adopt Domain Flux techniques to evade domain blacklists. A new technique was proposed to detect malicious domains by analyzing the group behavior of compromised hosts in their DNS queries. In each epoch, the method clusters the new domains and non-existent (NXDOMAIN) domains queried by hosts, groups the hosts by new domain names, and checks whether hosts within the same group show group activity when querying non-existent domains, thereby detecting compromised hosts and the IP addresses of C&C servers. Monitoring results on an ISP DNS show that compromised hosts and C&C server IP addresses are detected accurately.
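The grouping step can be caricatured in a few lines: bots running the same domain-generation algorithm fail to resolve largely the same candidate domains in an epoch. Grouping hosts by identical NXDOMAIN sets, as below, is an intentionally crude stand-in for the paper's clustering, and the thresholds are hypothetical:

```python
from collections import defaultdict

def flag_suspect_groups(queries, min_hosts=2, min_shared=2):
    """`queries`: (host, failed_domain) pairs observed in one epoch.
    Hosts whose sets of NXDOMAIN names coincide are grouped; groups that
    are large enough and share enough failed lookups are flagged as
    likely Domain-Flux bots."""
    per_host = defaultdict(set)
    for host, domain in queries:
        per_host[host].add(domain)
    by_signature = defaultdict(list)
    for host, domains in per_host.items():
        by_signature[frozenset(domains)].append(host)
    return [hosts for sig, hosts in by_signature.items()
            if len(hosts) >= min_hosts and len(sig) >= min_shared]
```

A real detector would tolerate partial overlap between hosts' NXDOMAIN sets rather than require exact equality, but the signal is the same: independent hosts rarely fail on the same never-registered names by chance.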
Analysis and Improvement of Remote Bitstream Update Protocol Preventing Replay Attacks on FPGA
LI Lei,CHEN Jing and ZHANG Zhi-hong
Computer Science. 2013, 40 (8): 149-150. 
The remote bitstream update protocol for preventing replay attacks on FPGAs proposed by Devic et al. is inefficient in key distribution, updating, and storage. We proposed an improved protocol that uses key chains to derive the keys for requests and acknowledgements. The improved protocol increases the efficiency of key management and reduces the storage requirements of the participants. Technical discussion shows that the improved protocol ensures confidentiality and integrity and prevents replay attacks.
Model for Cloud Computing Security Assessment Based on Classified Protection
JIANG Zheng-wei,ZHAO Wen-rui,LIU Yu and LIU Bao-xu
Computer Science. 2013, 40 (8): 151-156. 
Security is one of the greatest concerns in the application and development of cloud computing. Aiming at the requirement of quantifying security levels in cloud computing services, and drawing on China's classified protection standard as well as the cloud computing risk control and security assessment frameworks designed by European and American institutions, a cloud computing security assessment index system was built using the Delphi method, and the weight of each index was calculated with the analytic hierarchy process. Based on this index system, fuzzy comprehensive evaluation was applied to a cloud computing instance. The case study shows that the model can effectively quantify and assess the security level of a cloud platform.
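The analytic hierarchy process step mentioned above reduces to turning a pairwise-comparison matrix into priority weights; the common geometric-mean approximation takes only a few lines. This is a generic sketch of AHP weighting, not the paper's full index hierarchy:

```python
from math import prod

def ahp_weights(matrix):
    """Approximate AHP priority weights from a positive reciprocal
    pairwise-comparison matrix via the geometric mean of each row."""
    n = len(matrix)
    gmeans = [prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]
```

For a consistent 2x2 matrix stating that one index is 3 times as important as the other, the weights come out 0.75 and 0.25; the fuzzy comprehensive evaluation then aggregates expert membership grades under such weights.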
Research of Generic Codec for Web Application Testing
LIU Yong-po,WU Ji and LIU Shuang-mei
Computer Science. 2013, 40 (8): 157-160. 
Abstract PDF(827KB) ( 397 )   
References | RelatedCitation | Metrics
The codec is a necessary part of a TTCN-3 test system: it transforms TTCN-3 template data into SUT input data and SUT output data back into TTCN-3 template data. This paper presented the design of a generic codec for testing Web applications. The generic codec transforms TTCN-3 templates into URL request strings and extracts Web page content to transform it into TTCN-3 templates. Since Web pages carry large volumes of data and noise, this paper presented a multi-level parsing and composition strategy. Experiments show that the strategy works well and can save much codec development effort.
Automatic Program Testing with Dynamic Symbolic Execution and Model Learning
CHEN Shu,YE Jun-min and ZHANG Fan
Computer Science. 2013, 40 (8): 161-164. 
Abstract PDF(336KB) ( 388 )   
References | RelatedCitation | Metrics
An automatic testing approach combining dynamic symbolic execution and model learning was proposed. A model representing the I/O interaction of a program with its environment is constructed by a stepwise learning algorithm. Guided by the states of this abstract interaction model, the dynamic execution process generates test data automatically; the model is in turn refined by the test data and used for further execution. This solves the lack of guidance in traditional symbolic execution and improves both its speed and its code coverage.
Clustering-based Algorithms to Semantic Summarizing Graph with Multi-attributes’ Hierarchical Structures
SUN Chong and LU Yan-sheng
Computer Science. 2013, 40 (8): 165-171. 
Abstract PDF(850KB) ( 352 )   
References | RelatedCitation | Metrics
This paper allocates the raw nodes of a graph into groups and establishes relationships among the groups according to the raw edges; this procedure is called graph summarization, and the newly built graph is called a graph summary. Users can set the scale of the graph summary and use it to obtain information about the raw graph. However, the classic method can only build graph summaries whose sizes are above a certain lower bound on scale, which is unacceptable in many applications. K-SGS is a novel graph summarization method that removes this scale limit. By using the concept hierarchies of the nodes' attributes, K-SGS groups nodes flexibly: it groups nodes not only with equal attribute values but also with similar ones. Besides the information loss on edges, it also considers the information loss on nodes during summarization and models summarization as multi-objective optimization. We proposed two hierarchical agglomerative algorithms, one based on the forbearing stratified sequencing method and the other on the unified objective function method. Experiments on a real-life dataset show that our methods solve the problem and produce graph summaries of good quality.
Method for Automotive Electronics Oriented Dynamic Configuration Interface Generation
YAN Hua,CHEN Hao and GUO Xuan-you
Computer Science. 2013, 40 (8): 172-175. 
Abstract PDF(438KB) ( 413 )   
References | RelatedCitation | Metrics
During automotive electronics development, the functions of basic software modules need to be customized according to the resources of the specific hardware platform, and the configuration requirements of automotive basic software modules are large and complex. Designing a highly configurable, general-purpose configuration tool prototype therefore has important practical significance. This paper proposed a dynamic configuration interface generation method based on the practical requirements of automotive basic software. The proposed method separates the configuration parameters from the configuration interface and greatly improves the extensibility and maintainability of the configuration tool. The experimental results show that the proposed method is feasible and effective.
Approach for Querying RDF with Fuzzy Conditions and User Preferences
WANG Hai-rong,MA Zong-min and CHENG Jing-wei
Computer Science. 2013, 40 (8): 176-180. 
Abstract PDF(424KB) ( 612 )   
References | RelatedCitation | Metrics
RDF fuzzy retrieval is an important module for realizing intelligent retrieval in the Semantic Web. In this paper, Zadeh's type-II fuzzy set theory, together with the concepts of α-cut sets and linguistic variables, was adopted to put forward an RDF fuzzy retrieval mechanism supporting user preferences, which extends SPARQL to express fuzzy and preference conditions. Moreover, an ordered sub-domain table of linguistic values was constructed to map fuzzy values to the related sub-domains in the table, so as to determine the membership interval. On this basis, extended queries are converted into standard SPARQL queries with a set of defuzzification rules to carry out the fuzzy retrieval. To test the proposed idea, the fp-SPARQL retrieval system was developed. The experimental results show that the method improves the performance of RDF fuzzy retrieval and, correspondingly, users' satisfaction with the retrieval results.
NLOF:A New Density-based Local Outlier Detecting Algorithm
WANG Jing-hua,ZHAO Xin-xiang,ZHANG Guo-yan and LIU Jian-yin
Computer Science. 2013, 40 (8): 181-185. 
Abstract PDF(416KB) ( 513 )   
References | RelatedCitation | Metrics
The time complexity of the density-based outlier detection algorithm (LOF) is not ideal, which limits its application to large-scale and high-dimensional datasets. A new density-based outlier detection algorithm (NLOF) was therefore introduced. Its main idea is to exploit known information as much as possible to optimize the neighborhood queries performed while searching the neighborhood of a data object. First, a clustering algorithm is run as preprocessing, and local outlier factors are calculated only for the data objects left outside the clusters. Second, these local outlier factors are computed in the same way as in the LOF algorithm, except that leave-one partition information gain is introduced: the weight of each attribute is determined by its leave-one partition information gain, and the weighted distance is used when computing distances between objects. Extensive experimental results show the advantages of the proposed method: NLOF improves outlier detection accuracy, reduces time complexity, and realizes effective local outlier detection.
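For reference, the plain LOF computation that NLOF builds on can be written out directly for a tiny dataset. This sketch implements standard LOF only (k-distance, reachability distance, local reachability density, then the factor); the clustering pre-step and attribute weighting described above are not included.

```python
# Plain LOF on a small 2-D dataset; O(n^2) neighbour search for clarity.
import math

def lof_scores(points, k):
    d = lambda p, q: math.dist(p, q)
    n = len(points)
    neigh, kdist = [], []   # k nearest neighbours and k-distance per point
    for i in range(n):
        order = sorted((j for j in range(n) if j != i),
                       key=lambda j: d(points[i], points[j]))
        neigh.append(order[:k])
        kdist.append(d(points[i], points[order[k - 1]]))

    def reach(i, j):
        # reachability distance from i to neighbour j
        return max(kdist[j], d(points[i], points[j]))

    # local reachability density, then LOF = mean neighbour lrd / own lrd
    lrd = [k / sum(reach(i, j) for j in neigh[i]) for i in range(n)]
    return [sum(lrd[j] for j in neigh[i]) / (k * lrd[i]) for i in range(n)]
```

Points inside a uniform cluster score near 1, while an isolated point scores well above 1, which is the signal NLOF computes more cheaply by skipping clustered objects.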
Research on Structure Soundness of Software Processes Based on EPMM
DAI Fei,LI Tong,XIE Zhong-wen,MO Qi and JIN Yun-zhi
Computer Science. 2013, 40 (8): 186-190. 
Abstract PDF(315KB) ( 398 )   
References | RelatedCitation | Metrics
To ensure the correctness of software evolution processes, it is necessary to check the structure soundness of the software processes under which the corresponding software evolves; doing so improves the quality and efficiency of software evolution and shortens its duration. For software evolution process models built with the software evolution process meta-model (EPMM), structure soundness was defined from the viewpoint of software processes, and the corresponding checking algorithms were designed. The results show that checking structure soundness helps to improve the quality of software evolution processes.
Rapid Robust Clustering Algorithm for Gaussian Finite Mixture Model
HU Qing-hui,DING Li-xin,LU Yu-jing and HE Jin-rong
Computer Science. 2013, 40 (8): 191-195. 
Abstract PDF(380KB) ( 671 )   
References | RelatedCitation | Metrics
The finite mixture model is an effective probability-model-based clustering method. Aiming at clustering with Gaussian mixture models, this paper imposed entropy penalty operators on the mixing coefficients of the components and on the labels of the samples respectively, which yields two levels of control over the number of components and rapidly eliminates illegitimate ones, so the algorithm converges to an accurate solution within only a few iterations. Whereas the traditional algorithm is very sensitive to initial values (for example, the number of components must be set in advance), which often causes the EM algorithm to fall into local optima or converge to the boundary of the solution space, the new algorithm is robust and places no special demands on initialization, as the experiments confirm.
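The baseline the entropy-penalized algorithm modifies is ordinary EM for a Gaussian mixture. A minimal one-dimensional, two-component version (without any entropy penalty, and with illustrative data and initialization) looks like this:

```python
# Standard EM for a two-component 1-D Gaussian mixture: E-step computes
# responsibilities, M-step re-estimates weights, means, and variances.
import math

def em_gmm(data, iters=100):
    pi, mu, var = [0.5, 0.5], [min(data), max(data)], [1.0, 1.0]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            dens = [p / math.sqrt(2 * math.pi * v) *
                    math.exp(-(x - m) ** 2 / (2 * v))
                    for p, m, v in zip(pi, mu, var)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: closed-form updates from the responsibilities
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
    return pi, mu, var
```

The paper's modification adds entropy terms over the mixing coefficients and labels to these updates so that redundant components are driven to zero weight; that penalty is not reproduced here.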
Merging Algorithm Research Oriented to Understanding of Geometry Proposition
HUANG Huan,LIU Qing-tang and CHEN Mao
Computer Science. 2013, 40 (8): 196-199. 
Abstract PDF(423KB) ( 406 )   
References | RelatedCitation | Metrics
With the development of machine proof technology, making existing geometry instructional software understand the latent semantics of a geometry proposition, automatically generate the figure, and give the reasoning process is becoming a new challenge. Although existing methods can, to some extent, transform a geometry proposition into a figure or another machine-readable form, they do not consider the dependency relationships between simple clauses, so they cannot avoid redundancy and inconsistency between the construction commands of adjacent simple clauses. To resolve this problem, this paper proposed a merging algorithm and integrated it into an existing natural language interface for automatic geometry construction to verify its effectiveness. The experimental results suggest that the merging algorithm raises the accuracy of automatic geometry construction from 84.17% to 91.67% and improves the accuracy of geometry proposition understanding.
Embedding-camouflage of Inverse P-information and its Separation-discovery by Inverse P-reasoning
XU Feng-sheng,YU Xiu-qing and SHI Kai-quan
Computer Science. 2013, 40 (8): 200-203. 
Abstract PDF(360KB) ( 304 )   
References | RelatedCitation | Metrics
Inverse P-sets form a set pair composed of an internal inverse P-set F (internal inverse packet set F) and an outer inverse P-set F̄ (outer inverse packet set F̄), denoted by (F, F̄). They are obtained by introducing dynamic characteristics into a finite general set X and have characteristics opposite to those of P-sets (packet sets). Based on inverse P-sets, the paper presented the concepts of inverse P-information and its embedding-camouflage, discussed the camouflage-restoring attribute characteristics of inverse P-information, and obtained the embedding-camouflage theorems. The theoretical results were applied to separating and discovering embedded and camouflaged inverse P-information by means of inverse P-reasoning.
Adaptive Approach to Personalized Learning Sequence Generation
JIANG Yan-rong,HAN Jian-hua and WU Wei-min
Computer Science. 2013, 40 (8): 204-209. 
Abstract PDF(515KB) ( 350 )   
References | RelatedCitation | Metrics
Generating a suitable learning sequence according to a student's individual characteristics is a key to building an intelligent tutoring system and an exhibition of the system's intelligence. The difficulty is that both the relationships between knowledge items and the student's personalized learning characteristics must be taken into account. In this paper, a learning sequence generation algorithm guided by the relationships between knowledge items was proposed. The computation of learning level and the modeling of student personality were described; based on these, the learning sequence can be optimized, and the detailed adjustment and optimization methods were discussed. A learning window was proposed to describe the learning contents and to adapt to students with different learning abilities, and the learning model was used to distinguish the organization of contents and the corresponding learning styles. The application of the prototype indicates the effectiveness of the proposal, and the prototype system adapts to meet students' individual learning needs.
Improved Artificial Bee Colony Algorithm and its Application in CBD Location Planning
ZHANG Peng,LIU Hong and LIU Peng
Computer Science. 2013, 40 (8): 210-213. 
Abstract PDF(608KB) ( 373 )   
References | RelatedCitation | Metrics
The Central Business District (CBD) is the functional core of a city and reflects its modernization, and the development of a CBD depends greatly on the reasonableness of its location planning. Manual analysis for CBD location planning suffers from low efficiency and low data accuracy, whereas an intelligent optimization algorithm can not only reduce cost but also improve the efficiency and accuracy of CBD construction. Aiming at this problem, this paper improved the original ABC algorithm and proposed a new method (NABC) based on accessibility evaluation, and a microscopic simulation experiment of CBD location planning was carried out with it. According to the simulation experiment, the model overcomes the convergence-speed defect of the original algorithm and improves the intelligence and accuracy of CBD construction.
Population Dynamics Optimization Based on Three-Population Lotka-Volterra Model
HUANG Guang-qiu,ZHAO Wei-juan and LU Qiu-qin
Computer Science. 2013, 40 (8): 214-219. 
Abstract PDF(448KB) ( 1533 )   
References | RelatedCitation | Metrics
A population-dynamics-based optimization algorithm with global convergence was constructed from the three-population Lotka-Volterra model. In the algorithm, each population is an alternative solution of the optimization problem. Each mutual relation among the three populations, including competition, mutualism, predator-prey, and their arbitrary combinations, is expressed as a graph, and its associated Lotka-Volterra model is established; each such relation corresponds to an evolution operator whose mathematical expression is the discrete form of its associated Lotka-Volterra model. Furthermore, to solve more complicated optimization problems, the merging, mutation, and selection behaviors of populations are used to construct additional evolution operators. The features of all constructed operators ensure that the population suitability index (PSI) of each population either stays unchanged or moves toward better states, which guarantees global convergence; during the evolution of the populations, each transfer from one state to another realizes the search for the optimal solution. The stability condition of a reducible stochastic matrix was applied to prove the global convergence of the algorithm. The case study shows that the algorithm is efficient.
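The discrete Lotka-Volterra update underlying such evolution operators can be shown in its generic form. The growth rates and interaction matrix below encode one made-up symmetric competition among three species, not a relation or case from the paper:

```python
# Explicit-Euler discretization of a three-species Lotka-Volterra system:
#   x_i(t+1) = x_i(t) + h * x_i(t) * (r_i + sum_j A[i][j] * x_j(t)),
# clipped at zero so populations stay non-negative.
def lv_step(x, r, A, h=0.01):
    return [max(xi + h * xi * (ri + sum(a * xj for a, xj in zip(row, x))), 0.0)
            for xi, ri, row in zip(x, r, A)]

def simulate(x0, r, A, steps, h=0.01):
    x = list(x0)
    for _ in range(steps):
        x = lv_step(x, r, A, h)
    return x
```

For symmetric competition with intraspecific pressure dominating (diagonal of A larger in magnitude than the off-diagonals), the iteration settles at the interior equilibrium where r_i + (A x)_i = 0 for all i, which is the kind of stable state transfer the algorithm exploits.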
Research on Solution to Association Weakening Problem in Data Mining
YANG Ze-min,GUO Xian-e and WANG Wen-jun
Computer Science. 2013, 40 (8): 220-222. 
Abstract PDF(205KB) ( 353 )   
References | RelatedCitation | Metrics
Data mining algorithms such as the support vector machine (SVM) and mean clustering rely almost entirely on the correlation between data for complete data matching. Once a database contains a large amount of redundant data, the correlation between data is reduced and the associations are destroyed, lowering the efficiency of traditional data mining algorithms. To avoid this defect, this paper proposed a repair mining algorithm for weakened association rules. In the data selection process, the method does not initially classify all elements; it only calculates the probability that an element belongs to a category, determines multiple weak clustering centers, and calculates the weak clustering relevance between different data, so as to realize accurate data mining under weakened association rules in a redundant environment. The experimental results show that the algorithm effectively improves data mining efficiency in massively redundant environments and achieves satisfactory results.
Study on Application of Attributive Reduction Based on Rough Sets in Data Mining
ZHANG Ying-chun,SU Bo-hong and CAO Juan
Computer Science. 2013, 40 (8): 223-226. 
Abstract PDF(267KB) ( 352 )   
References | RelatedCitation | Metrics
Attribute reduction is one of the key issues of knowledge acquisition in rough set theory. In this paper, the discernibility matrix is first used to obtain the core attributes, and then a new heuristic algorithm for attribute reduction is proposed based on the importance ratings of the attributes. The algorithm yields proper attribute combinations for effective attribute reduction and thereby avoids complex computation, in contrast to methods relying on algebra or information entropy. Finally, an example analysis demonstrates the validity and feasibility of the algorithm.
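The discernibility-matrix step can be sketched concretely: a matrix cell for two objects with different decisions holds the condition attributes whose values distinguish them, and any singleton cell contributes a core attribute (removing that attribute would make the two objects indiscernible). The toy decision table below is illustrative, not from the paper:

```python
# Core attributes from the discernibility matrix of a small decision table.
# table: list of dicts; cond: condition attribute names; dec: decision attribute.
def discernibility_core(table, cond, dec):
    cells = []
    for i in range(len(table)):
        for j in range(i + 1, len(table)):
            if table[i][dec] != table[j][dec]:
                # attributes that discern this pair of objects
                cells.append({a for a in cond
                              if table[i][a] != table[j][a]})
    # a singleton cell means its attribute is indispensable (core)
    return {a for c in cells if len(c) == 1 for a in c}
```

A heuristic reduction would then grow the core by adding the attribute with the highest importance rating until all non-empty cells are covered.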
Discovery and Application of Risk Investment Losses Based on P-reasoning
YU Xiu-qing,XU Feng-sheng and SHI Kai-quan
Computer Science. 2013, 40 (8): 227-232. 
Abstract PDF(444KB) ( 329 )   
References | RelatedCitation | Metrics
P-sets (packet sets) are a new mathematical structure and model with dynamic characteristics: a pair of sets composed of an internal P-set X (internal packet set X) and an outer P-set XF (outer packet set XF), i.e., (X, XF). P-sets are obtained by introducing dynamic characteristics into a finite Cantor set X and improving it. P-reasoning (packet reasoning) consists of internal P-reasoning (internal packet reasoning) and outer P-reasoning (outer packet reasoning). Using P-sets and P-reasoning, the paper studied the discovery of risk investment losses. It first proposed the concepts of law, internal P-law, outer P-law, and P-laws, and their generation; then law attribute theorems and discovery theorems for internal P-laws, outer P-laws, and P-laws were given; finally, applications of P-reasoning to risk investment losses were provided.
Decomposition Theorem of Flou-valued Fuzzy Sets
CHEN Li-ming,GUO Si-cong and BI Ling-ling
Computer Science. 2013, 40 (8): 233-238. 
Abstract PDF(360KB) ( 358 )   
References | RelatedCitation | Metrics
This paper introduced a new kind of L-fuzzy set, called the Flou-valued fuzzy set, to address the deficiency of interval-valued fuzzy sets and three-parameter interval-valued fuzzy sets in expressing fuzzy information. The authors put forward eight kinds of cut sets on Flou-valued fuzzy sets; based on them, the decomposition theorems were obtained, and their effectiveness was verified by a practical accounting case.
Large Margin and Fast Learning Model Based on Difference of Similarity
YING Wen-hao and WANG Shi-tong
Computer Science. 2013, 40 (8): 239-244. 
Abstract PDF(488KB) ( 312 )   
References | RelatedCitation | Metrics
Many pattern classification methods, such as the support vector machine and L2 kernel classification, use kernel methods and are formulated as quadratic programming problems; however, computing the kernel matrix requires O(m^2) operations and solving the QP may take up to O(m^3), which prevents these methods from training on large datasets. In this paper, a new classification method called the difference-of-similarity support vector machine (DSSVM) was proposed. DSSVM pursues the best linear representation of the total similarity between a sample and a particular class. From the sparsity of the linear representation and the maximum margin of the difference of similarity, a new optimization problem is obtained. Meanwhile, DSSVM can be equivalently formulated as a center-constrained minimal enclosing ball, so it can be extended to the difference-of-similarity core vector machine (DSCVM) by introducing the fast learning theory of minimal enclosing balls, to handle classification on large datasets. This is confirmed by the experimental studies.
Study on Prediction Model of Ecological Security Index in Chongqing City Based on SVR Model
FENG Hai-liang,XIA Lei and HUANG Hong
Computer Science. 2013, 40 (8): 245-248. 
Abstract PDF(302KB) ( 341 )   
References | RelatedCitation | Metrics
In practice, conventional statistical analysis of the ecological security index suffers from hysteresis. This article therefore applied the multivariable grey model, the radial basis function network, and support vector regression to highly relevant samples of the ecological security index of Chongqing from 1998 to 2007. The outputs of the three models were evaluated against the ecological security index gathered in 2008 and 2009. According to the error analysis between the outputs and the actual index, the support vector regression model produced the more accurate predictions. In conclusion, the support vector regression model is applicable in practice and is more accurate than the other two models.
Applications of XCSG in Multi-robot Reinforcement Learning
SHAO Jie,DU Li-juan and YANG Jing-yu
Computer Science. 2013, 40 (8): 249-251. 
Abstract PDF(300KB) ( 402 )   
References | RelatedCitation | Metrics
The XCS classifier system has been shown to solve machine-learning problems competitively. In multi-robot problems, however, XCS is restricted to very small problems modeled as a Markov decision process. In this paper, a new learning technique, XCSG, which combines XCS with gradient descent methods, was proposed to solve multi-robot machine-learning problems. XCSG builds a low-dimensional approximation of the value function, and gradient descent uses online knowledge to keep that approximation stable, so the Q-table is maintained in a low-dimensional stable state. The function approximation not only requires less storage space but also allows the robot to generalize over the knowledge gathered online. Simulation results show that the XCSG algorithm alleviates the problems of large state spaces, slow learning, and learning uncertainty in multi-robot reinforcement learning.
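The gradient-descent ingredient can be isolated in a toy setting. The sketch below is not XCSG (there are no classifiers or genetic operations); it is plain Q-learning with a linear function approximator over one-hot state-action features, which reduces to a tabular gradient step, on a made-up 5-state corridor where only reaching the rightmost state pays off:

```python
# Gradient-descent Q-learning on a toy corridor: actions 0=left, 1=right,
# reward 1 on entering the terminal rightmost state. With one-hot features,
# the gradient update on (target - Q)^2 is w[s][a] += alpha * (target - w[s][a]).
import random

def q_learn(n_states=5, episodes=300, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    w = [[0.0, 0.0] for _ in range(n_states)]  # one weight per (state, action)
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            if rng.random() < eps:                      # explore
                a = rng.randrange(2)
            else:                                       # exploit
                a = max((0, 1), key=lambda act: w[s][act])
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            reward = 1.0 if s2 == n_states - 1 else 0.0
            target = reward + (0.0 if s2 == n_states - 1 else gamma * max(w[s2]))
            w[s][a] += alpha * (target - w[s][a])       # gradient step
            s = s2
    return w
```

After training, the greedy policy prefers "right" in every state, and the approximated Q-values decay geometrically with distance from the goal, which is the stable low-dimensional value surface the abstract describes.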
Improved Artificial Bee Colony Algorithms for Function Optimization
GE Yu,LIANG Jing,WANG Xue-ping and XIE Xiao-chuan
Computer Science. 2013, 40 (8): 252-257. 
Abstract PDF(428KB) ( 337 )   
References | RelatedCitation | Metrics
To enhance the performance of the artificial bee colony (ABC) algorithm on complex function optimization problems, this paper analyzed and improved the shortcomings of the scout bees' escape behavior. The improved algorithm defines an escape index that precisely reflects the effect of an individual's status on premature convergence; redesigns the selection scheme so that scout bees adaptively choose for the escape operation those individuals that might cause premature convergence; and improves the escape operator to reduce the blindness of the escape operation. Nine typical experiments show that the improved algorithm converges efficiently at the assigned convergence accuracy and converges with better accuracy and speed than the basic ABC algorithm and existing typical improved versions, which proves that the proposed strategy boosts the capability of solving complex function optimization problems.
Study on Large-scale Smart Group Volume System Conflict Elimination Method
FANG Cai-li,ZHENG Feng-bin and WANG Yu-jing
Computer Science. 2013, 40 (8): 258-260. 
Abstract PDF(320KB) ( 363 )   
References | RelatedCitation | Metrics
Current large-scale smart group volume systems use ultra-high-frequency identification technology to complete task scheduling, and the commonly used backscatter-coupling scheduling is prone to conflicts owing to the lack of communication time. This paper therefore presented an improved framed-slotted anti-collision algorithm based on power control. Power is introduced as the basis of regulation: according to the system signal-to-interference ratio, the reader/writer independently regulates its transmitted power to guarantee the time interval; it first sends a query command to the tags in the functional area, records the total number of tags in the area, groups the signals accordingly, and ensures the reading/writing distance while reducing the complexity of the coverage area. Experiments show that this method minimizes the signal coverage area, enhances the anti-collision performance of the smart group volume system, and is an effective conflict elimination method for such systems.
Knowledge Presentation of Association Rules Based on Conceptual Graphs
GUO Xiao-bo,ZHAO Shu-liang,LIU Jun-dan,ZHAO Jiao-jiao and WANG Chang-bin
Computer Science. 2013, 40 (8): 261-265. 
Abstract PDF(922KB) ( 336 )   
References | RelatedCitation | Metrics
Traditional formal approaches to presenting association rules are powerless to demonstrate domain knowledge and, in particular, are not conducive to expressing the relationships among items and the implicit information of the rules. This paper therefore introduced a novel methodology for the knowledge representation of association rules based on conceptual graphs. The methodology consists of schema definition and schema parsing, which together parse association rules into the conceptual graph representation formalism. Finally, the paper illustrated the advantages of the new method with experimental data from the demographic data of a province; the application analysis and experimental results show that the methodology represents demographic domain knowledge very well.
Study on Fault Diagnosis of Gear Pump Based on Normal Cloud Neural Network
MI Xiao-ping and LI Xue-mei
Computer Science. 2013, 40 (8): 266-267. 
Abstract PDF(226KB) ( 372 )   
References | RelatedCitation | Metrics
To improve the correctness of gear pump fault diagnosis, the normal cloud neural network is applied to it. The normal cloud neural network was established by combining the normal cloud model with a neural network: its structure was constructed and the corresponding algorithm of every layer was given. Structural identification was achieved based on cloud transformation, and the weights and threshold values were optimized through training. Finally, a simulation of gear pump fault diagnosis based on the normal cloud neural network was carried out. The results show that the normal cloud neural network diagnoses gear pump faults correctly and with high efficiency.
Research on Application of Gray Clustering Evaluation Model and Arithmetic of Variable Weight in FDI Comprehensive Welfare Effects
LIU Yu-yan,LIU Yu-lin and ZHAO Qing
Computer Science. 2013, 40 (8): 268-272. 
Abstract PDF(417KB) ( 390 )   
References | RelatedCitation | Metrics
According to the "grey" characteristics of human thinking, this research proposed a clustering evaluation model of FDI comprehensive welfare effects based on relative indicators. It determines the clustering indicators, rating standards, and clustering greys from the collected data on the basis of grey system theory. Then, by establishing whitenization weight functions to carry out grey clustering analysis, it can not only distinguish the categories of welfare effect but also show the affiliation level of each sample to each category in the grey clustering coefficient matrix. Finally, it performs a clustering evaluation and analysis of the FDI comprehensive welfare effect from 1986 to 2008, and the conclusion is highly reliable and relevant.
Automatic Motion Segmentation of Sparse Feature Points with Mean Shift
JIANG Peng,QIN Na,ZHOU Yan,TANG Peng and JIN Wei-dong
Computer Science. 2013, 40 (8): 273-276. 
Abstract PDF(594KB) ( 377 )   
References | RelatedCitation | Metrics
We proposed an automatic motion segmentation method operating on sparse feature points. Feature points are detected and tracked throughout an image sequence and grouped with a mean shift algorithm. The motion segmentation is driven by the density of the motion vectors in feature space: kernel density estimation is performed on the mean-shifted motion vectors, and the number of motions present is estimated from the number of peaks in the kernel density curve. Experimental results on a number of challenging image sequences demonstrate the effectiveness and robustness of the technique.
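The mode-seeking step can be sketched in one dimension: each value is repeatedly shifted to the mean of its bandwidth neighbourhood until it stabilizes, and the number of distinct stable points estimates the number of motions. The flat kernel, the "motion magnitude" values, and the bandwidth below are all illustrative simplifications:

```python
# 1-D mean shift with a flat kernel: iterate x <- mean of values within the
# bandwidth of x; distinct converged points are the density modes.
def mean_shift_modes(values, bandwidth, tol=1e-4, max_iter=200):
    modes = []
    for v in values:
        x = v
        for _ in range(max_iter):
            window = [u for u in values if abs(u - x) <= bandwidth]
            nx = sum(window) / len(window)
            if abs(nx - x) < tol:
                break
            x = nx
        # merge converged points that land near an already-found mode
        if not any(abs(x - m) <= bandwidth / 2 for m in modes):
            modes.append(x)
    return sorted(modes)
```

In the paper's setting the same procedure runs on multi-dimensional motion vectors with a smooth kernel, and the mode count plays the role of the peak count in the kernel density curve.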
Multi-sensor Information Fusion Maneuvering Target Tracking Algorithm with Missing Measurements
LI Song,HU Zhen-tao,LI Jing,YANG Zhao and JIN Yong
Computer Science. 2013, 40 (8): 277-281. 
Abstract PDF(387KB) ( 528 )   
References | RelatedCitation | Metrics
For the situation with missing measurements, in which the sensor detection probability is less than 1, a multi-sensor information fusion maneuvering target tracking algorithm was proposed. First, based on residual error detection, wild values in the observed data are identified, so the accuracy of the measurement data can be determined during state estimation of the dynamic system. Second, with an EKF algorithm adapted to missing measurements, the target motion state is estimated from each sensor node's measurements, and the multi-sensor optimal weighted fusion method is then used to obtain the optimal estimate from all sensors' measurement data. Finally, the influence of the sensor detection probability on the filtering effect was examined in simulation experiments with photoelectric sensors. The simulation results demonstrate the effectiveness of the algorithm; moreover, its tracking accuracy closely approaches the state estimation accuracy achieved with complete measurements.
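The optimal weighted fusion step can be shown in isolation: independent unbiased estimates are combined with weights inversely proportional to their error variances, which minimizes the variance of the fused estimate. This is the generic scalar rule only; the sensor values and variances below are made up, and the paper applies the idea to full EKF state estimates:

```python
# Inverse-variance (optimal) weighted fusion of independent unbiased estimates:
#   w_i = (1/v_i) / sum_j (1/v_j),  fused variance = 1 / sum_j (1/v_j).
def fuse(estimates, variances):
    inv = [1.0 / v for v in variances]
    s = sum(inv)
    fused = sum(wi * x for wi, x in zip(inv, estimates)) / s
    fused_var = 1.0 / s
    return fused, fused_var
```

The fused variance is always below the smallest individual variance, which is why fusing even a noisy extra sensor improves the track.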
Self-embedding Fragile Watermarking Based on DDWT Coding
CUI Zhuang,LV Jun-bai and FENG Tao
Computer Science. 2013, 40 (8): 282-284. 
Abstract PDF(588KB) ( 291 )   
References | RelatedCitation | Metrics
To overcome the limitations of current fragile watermarking in tamper recovery, this paper presented a self-embedding watermarking algorithm based on the dual-tree discrete wavelet transform (DDWT). Tamper localization and image recovery correspond to the authentication watermark and the recovery watermark respectively. The recovery watermark is the SPIHT coding of the carrier image after the DDWT, and it is embedded in the low-frequency wavelet coefficients; the authentication watermark encodes the relationships among the low-frequency coefficients after the recovery watermark is embedded, and it is embedded in the LSBs of the image sub-blocks. The experimental results show that the algorithm can accurately locate illegal tampering and has good self-recovery ability.
Novel Image Retrieval Method of Improved K-means Clustering Algorithm
LV Ming-lei,LIU Dong-mei and ZENG Zhi-yong
Computer Science. 2013, 40 (8): 285-288. 
Abstract PDF(325KB) ( 695 )   
References | RelatedCitation | Metrics
The drawbacks of image retrieval based on the K-means clustering algorithm were analyzed, and a novel image retrieval method using an improved K-means algorithm was presented. First, the method computes the Euclidean distance between every pair of color histogram features in the image feature database. Second, following the principle "the closer two objects are, the greater their similarity", it takes the feature vectors satisfying the matching condition as the initial class centers of K-means. Finally, it performs the image retrieval. Experimental results demonstrate that the proposed method is efficient.
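One way to read the initialization described above is as a farthest-point heuristic over the pairwise distances: seed with the most widely separated feature pair, then add the point farthest from all chosen centres before running ordinary K-means. This is a sketch of that reading on made-up 2-D "histogram features", not the paper's exact matching condition:

```python
# Distance-based seeding (farthest-point heuristic) followed by plain k-means.
import math

def init_centers(points, k):
    # seed with the most widely separated pair of points
    i, j = max(((i, j) for i in range(len(points))
                for j in range(i + 1, len(points))),
               key=lambda p: math.dist(points[p[0]], points[p[1]]))
    centers = [points[i], points[j]]
    while len(centers) < k:
        centers.append(max(points,
                           key=lambda p: min(math.dist(p, c) for c in centers)))
    return centers

def kmeans(points, k, iters=20):
    centers = init_centers(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assign each point to its nearest centre
            clusters[min(range(k),
                         key=lambda c: math.dist(p, centers[c]))].append(p)
        centers = [tuple(sum(x) / len(cl) for x in zip(*cl)) if cl
                   else centers[ci] for ci, cl in enumerate(clusters)]
    return centers
```

Compared with random seeding, centres that start far apart avoid the degenerate splits that hurt histogram-based retrieval.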
Salient Local Feature Extraction Algorithm Based on Integrated Visual Attention Model
YANG Zu-qiao,CHEN Yue-peng and ZHANG Qing
Computer Science. 2013, 40 (8): 289-292. 
Abstract PDF(857KB) ( 320 )   
References | RelatedCitation | Metrics
Local features are widely used in content-based image retrieval. During retrieval, a large number of local features are extracted, which increases the computation and complexity of image retrieval and thus hampers practical applications. To address this problem, a novel method based on an integrated visual attention model was proposed to extract salient local features. First, the key points in the image scale space are extracted and the salient area of the original image is found using a fuzzy growth technique. Then the integrated visual saliency is calculated and classified, SIFT descriptors are extracted and ranked according to their integrated visual saliency, and finally only the most distinctive features are kept to enhance retrieval performance. The experimental results demonstrate that, compared with traditional local feature extraction algorithms, the proposed salient local feature extraction algorithm provides significant benefits in both retrieval accuracy and speed.
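The final pruning step, keeping only the highest-saliency keypoints, can be sketched as follows (the keypoint labels and saliency scores are illustrative):

```python
def keep_most_salient(keypoints, saliency, n):
    """Rank keypoints by their integrated saliency score and keep the top n.

    Discarding low-saliency local features shrinks the descriptor set that
    must be matched at query time, cutting both storage and search cost.
    """
    ranked = sorted(zip(keypoints, saliency),
                    key=lambda kv: kv[1], reverse=True)
    return [k for k, _ in ranked[:n]]

kept = keep_most_salient(['a', 'b', 'c', 'd'], [0.2, 0.9, 0.5, 0.1], 2)
```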
Otsu Image Segmentation Method Based on Improved PSO Algorithm
LIU Shen-xiao,WANG Xue-chun and CHANG Chao-wen
Computer Science. 2013, 40 (8): 293-295. 
Abstract PDF(257KB) ( 322 )   
References | RelatedCitation | Metrics
The Otsu image segmentation algorithm adapts well to different images because it does not depend on image content. However, its large amount of computation and poor real-time performance have limited its application. To solve this problem, we proposed a new segmentation algorithm that combines the Otsu criterion with an improved PSO algorithm. Taking the between-class variance of Otsu as the fitness function of PSO, the current segmentation threshold as a particle's current position, the update step of the threshold as the particle's current velocity, and the improvement of a particle's best fitness value as the inertia weight of PSO, the proposed algorithm dynamically searches the gray-level space for the threshold that maximizes the between-class variance. The experimental results show that the new algorithm obtains segmentation results equal to the classic Otsu method while significantly reducing segmentation time and achieving higher efficiency.
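The fitness function at the heart of this scheme is Otsu's between-class variance. A minimal sketch is shown below with an exhaustive scan over thresholds; the PSO in the paper replaces exactly this exhaustive scan with a particle search:

```python
def between_class_variance(hist, t):
    """Otsu's between-class variance for threshold t over a gray histogram."""
    total = sum(hist)
    w0 = sum(hist[:t + 1])          # pixels at or below the threshold
    w1 = total - w0                  # pixels above the threshold
    if w0 == 0 or w1 == 0:
        return 0.0
    mu0 = sum(i * hist[i] for i in range(t + 1)) / w0
    mu1 = sum(i * hist[i] for i in range(t + 1, len(hist))) / w1
    return (w0 / total) * (w1 / total) * (mu0 - mu1) ** 2

def otsu_threshold(hist):
    """Brute-force search for the variance-maximizing threshold."""
    return max(range(len(hist)), key=lambda t: between_class_variance(hist, t))

# Bimodal 8-level toy histogram: dark cluster at levels 0-1, bright at 6-7.
hist = [5, 5, 0, 0, 0, 0, 5, 5]
t = otsu_threshold(hist)
```

For a 256-level image the exhaustive scan evaluates the fitness 256 times; a PSO with a few dozen particles and iterations typically needs far fewer evaluations, which is the source of the claimed speedup.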
Lymph Node Image Segmentation Algorithm Based on Maximal Variance Between-class and Morphology
ZHANG Yan-ling,HE Xin-chi and LI Li
Computer Science. 2013, 40 (8): 296-299. 
Abstract PDF(855KB) ( 417 )   
References | RelatedCitation | Metrics
Lymph nodes are an important organ of the human body's immune response. Pathological changes of lymph nodes are an important basis for detecting malignant tumors and judging the metastasis of cancers (lung, colorectal, breast, liver, cervical, etc.). A segmentation algorithm based on maximal between-class variance and morphology was introduced to segment lymph nodes. The maximal between-class variance (Otsu) method was used to binarize and enhance the original image, and mathematical morphology was then introduced to correct the boundary of the binary image. Erosion and dilation operations were used to separate the target area from the excess tissue connected to it after binarization, so that the useful lymph node tissue could be better extracted. Experimental results show that the proposed method achieves good segmentation for lymph node images with surrounding tissue adhesions and a large gray-level difference between target and background.
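The erosion-then-dilation step (morphological opening) is what removes thin adhesions between the node and surrounding tissue. A minimal sketch on a binary image with a 3x3 structuring element (the toy image is invented for illustration):

```python
def erode(img, k=1):
    """Binary erosion: a pixel survives only if its whole (2k+1)^2
    neighborhood is foreground. Thin protrusions vanish."""
    h, w = len(img), len(img[0])
    return [[int(all(0 <= i + di < h and 0 <= j + dj < w and img[i + di][j + dj]
                     for di in range(-k, k + 1) for dj in range(-k, k + 1)))
             for j in range(w)] for i in range(h)]

def dilate(img, k=1):
    """Binary dilation: a pixel turns on if anything in its
    neighborhood is foreground. Restores the eroded bulk."""
    h, w = len(img), len(img[0])
    return [[int(any(0 <= i + di < h and 0 <= j + dj < w and img[i + di][j + dj]
                     for di in range(-k, k + 1) for dj in range(-k, k + 1)))
             for j in range(w)] for i in range(h)]

def opening(img):
    """Erosion followed by dilation: removes thin connections while
    preserving the shape of the main region."""
    return dilate(erode(img))

# 3x3 "node" with a one-pixel adhesion sticking out at row 2, col 4.
img = [[0, 0, 0, 0, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 1],
       [0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0]]
opened = opening(img)
```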
Color Image Encryption Algorithm Based on 2D Logistic Chaotic Map and Bit Rearrange
TUO Chao-yong,QIN Zheng and LI Qian
Computer Science. 2013, 40 (8): 300-302. 
Abstract PDF(596KB) ( 377 )   
References | RelatedCitation | Metrics
Common color image encryption algorithms fail to take full account of the intrinsic relationship between the RGB color components and resist statistical analysis poorly. To enhance the image scrambling degree and security, a new color image encryption algorithm based on mixed chaotic scrambling was proposed. First, the original image is scrambled using a pseudo-random sequence generated by a 2D logistic map. Then the R, G, B components are assembled into a whole, and the extended pixels are perturbed using bitwise XOR operations and random rearrangement. Finally, the encrypted image is obtained by repartitioning the RGB components of the extended 24-bit pixels. Experiments show that, compared with some common color image encryption schemes, the proposed scheme achieves good scrambling effectiveness, lower time consumption and better security.
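The chaotic keystream plus XOR step can be sketched as below. The coupled-map parameters, seeds, and byte values are illustrative, not the paper's exact system; XOR's self-inverse property is what makes decryption identical to encryption:

```python
def logistic_2d(x, y, n, r1=3.99, r2=3.99, c=0.1):
    """Iterate a simple coupled 2D logistic map and emit n byte pairs.

    Sensitivity to the initial (x, y) makes the byte stream key-dependent.
    """
    out = []
    for _ in range(n):
        x, y = ((r1 * x * (1 - x) + c * y) % 1.0,
                (r2 * y * (1 - y) + c * x) % 1.0)
        out.append((int(x * 256) % 256, int(y * 256) % 256))
    return out

def xor_encrypt(data, key_bytes):
    """XOR each data byte with a keystream byte; applying it twice decrypts."""
    return bytes(d ^ k for d, k in zip(data, key_bytes))

stream = [b for pair in logistic_2d(0.3, 0.7, 4) for b in pair]
plain = bytes([120, 45, 200, 33, 17, 250, 5, 99])   # toy 24-bit pixel data
cipher = xor_encrypt(plain, stream)
restored = xor_encrypt(cipher, stream)
```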
Double-feature Combination Based Approach to Motion Capture Data Behavior Segmentation
PENG Shu-juan and LIU Xin
Computer Science. 2013, 40 (8): 303-308. 
Abstract PDF(499KB) ( 465 )   
References | RelatedCitation | Metrics
The objective of motion capture data behavior segmentation is to divide an original long motion sequence into several motion fragments, each of which incorporates a particular semantic behavior. In general, the transition parts between neighboring motion fragments suffer from semantic ambiguity. To this end, this paper presented a double-feature combination based approach to tackle this problem. The proposed approach first extracts two different types of motion features, i.e., angle and distance, and then utilizes the PPCA algorithm to construct two comprehensive characteristic functions individually. Subsequently, a subinterval standard deviation approach with a threshold-limiting strategy is employed to roughly segment the comprehensive characteristic functions into several confidence regions and pending regions. Finally, a Gaussian mixture model is utilized to further determine the pending regions, yielding a robust segmentation result. The experimental results show that the proposed approach performs favorably compared with state-of-the-art methods.
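The subinterval standard deviation step can be sketched on a 1-D characteristic function: stable windows (low deviation) become confidence regions, volatile windows (high deviation) become pending regions for the GMM stage. The signal, window size, and threshold below are invented for illustration:

```python
import statistics

def segment_by_window_std(signal, win, thresh):
    """Label each non-overlapping window of a characteristic function as
    'confident' (low std: inside one behavior) or 'pending' (high std:
    likely a transition between behaviors)."""
    labels = []
    for i in range(0, len(signal) - win + 1, win):
        chunk = signal[i:i + win]
        labels.append('pending' if statistics.pstdev(chunk) > thresh
                      else 'confident')
    return labels

# Flat segment -> ramp (transition) -> flat segment.
sig = [1.0] * 6 + [1.0, 3.0, 5.0] + [5.0] * 6
labels = segment_by_window_std(sig, 3, 0.5)
```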
Multi-class Image Classification with Best vs. Second-best Active Learning and Hierarchical Clustering
CAO Yong-feng,CHEN Rong and SUN Hong
Computer Science. 2013, 40 (8): 309-312. 
Abstract PDF(346KB) ( 582 )   
References | RelatedCitation | Metrics
Training a good classifier with the fewest manually labeled samples is a key problem in image classification. To select such samples for labeling, this paper proposed a criterion combining two different measures of a sample: its classification uncertainty and its representativeness. The best vs. second-best (BvSB) method is used to measure uncertainty. The dataset is first hierarchically clustered, and the representativeness of each unlabeled sample is then defined from the structural information of the clusters and the distribution of the already labeled samples. The proposed method was compared with the random-selection method and the BvSB method on an optical image dataset and a fully polarimetric synthetic aperture radar (SAR) image dataset. The results show that it performs consistently better.
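The BvSB uncertainty measure itself is simple: the margin between the two largest class posteriors, with a small margin meaning the classifier is torn between two classes. A minimal sketch with made-up posteriors:

```python
def bvsb_uncertainty(probs):
    """Best-vs-second-best margin over class posterior probabilities.

    A small margin means the top two classes are nearly tied, so the
    sample is informative to label; this focuses on the confusable pair
    instead of spreading attention over all classes like entropy does.
    """
    s = sorted(probs, reverse=True)
    return s[0] - s[1]

def select_most_uncertain(samples, posteriors, n):
    """Pick the n samples with the smallest BvSB margin for labeling."""
    return sorted(samples, key=lambda i: bvsb_uncertainty(posteriors[i]))[:n]

posteriors = {0: [0.90, 0.05, 0.05],   # confident
              1: [0.40, 0.35, 0.25],   # torn between classes 0 and 1
              2: [0.50, 0.30, 0.20]}
chosen = select_most_uncertain([0, 1, 2], posteriors, 1)
```

The paper's criterion additionally weights this margin by a cluster-based representativeness score, which the sketch omits.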
Otsu Image Segmentation Algorithm Based on Rebuilding of Two-dimensional Histogram
GONG Qu,FU Yun-feng,YE Jian-ying and YAO Yu-min
Computer Science. 2013, 40 (8): 313-315. 
Abstract PDF(507KB) ( 388 )   
References | RelatedCitation | Metrics
Concerning the poor anti-noise performance of the two-dimensional Otsu method, caused by its obviously wrong region division, and its huge amount of calculation, this paper proposed an Otsu image segmentation algorithm based on rebuilding the two-dimensional histogram. The method first analyzes the incorrect region division of the two-dimensional histogram and its shortcomings, then rebuilds the two-dimensional histogram to reduce noise interference, and finally reduces the region division of the two-dimensional histogram from four partitions to two, thus shrinking the threshold search space and saving a large amount of processing time. The experimental results show that the proposed algorithm has better anti-noise performance and better segmentation results.
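The structure being rebuilt here is the joint histogram of each pixel's gray level and its 3x3 neighborhood mean; clean pixels cluster near the diagonal while noise falls off it. A minimal sketch of constructing that histogram (the tiny image and level count are illustrative):

```python
def two_d_histogram(img, levels=8):
    """Joint histogram of (gray level, 3x3 neighborhood mean) pairs,
    the structure that the 2-D Otsu method thresholds."""
    h, w = len(img), len(img[0])
    hist = [[0] * levels for _ in range(levels)]
    for i in range(h):
        for j in range(w):
            vals = [img[i + di][j + dj]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if 0 <= i + di < h and 0 <= j + dj < w]
            mean = sum(vals) // len(vals)       # neighborhood mean level
            hist[img[i][j]][mean] += 1
    return hist

# 3x3 toy image with a dark corner (level 0) and bright region (level 7).
img = [[0, 0, 7],
       [0, 0, 7],
       [7, 7, 7]]
hist = two_d_histogram(img)
```

Restricting the threshold search to two diagonal partitions of this histogram instead of four quadrants is what yields the paper's reduction in search space.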
Handwritten Numeral Recognition Based on Multi-scale Features and Neural Network
ZHAO Yuan-qing and WU Hua
Computer Science. 2013, 40 (8): 316-318. 
Abstract PDF(253KB) ( 479 )   
References | RelatedCitation | Metrics
Aiming at the problem that traditional handwritten numeral recognition methods cannot handle the interference caused by arbitrary writing styles, a new handwritten numeral recognition method was proposed based on multi-scale features and a neural network. First, two structural features, outline and strokes, were extracted, and multi-angle structural features were obtained by rotating the datum line. Second, multi-level grayscale pixel features were extracted by dividing the image into K sub-layers from the inside out. Third, a BP neural network model was built on the two kinds of features. Finally, the new method was applied to the MNIST database, where the prediction precision reached 99.8%. The results show that the new algorithm can effectively reduce the impact of tilt.
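The multi-level pixel features can be sketched as the mean gray value of K concentric layers of the image, from the border inward; the layer assignment rule and the toy 4x4 image below are assumptions for illustration, not the paper's exact partition:

```python
def layer_features(img, k):
    """Mean gray value of k concentric square layers of an n x n image.

    Each pixel is assigned to a layer by its distance to the nearest
    border, giving a coarse multi-level description of the digit.
    """
    n = len(img)
    sums, counts = [0.0] * k, [0] * k
    for i in range(n):
        for j in range(n):
            depth = min(i, j, n - 1 - i, n - 1 - j)   # distance to border
            level = min(depth * 2 * k // n, k - 1)    # map depth to a layer
            sums[level] += img[i][j]
            counts[level] += 1
    return [s / c for s, c in zip(sums, counts)]

# 4x4 toy "digit": dark border layer, bright 2x2 core.
img = [[0,   0,   0,   0],
       [0, 255, 255,   0],
       [0, 255, 255,   0],
       [0,   0,   0,   0]]
feats = layer_features(img, 2)
```

These k means, concatenated with the structural features, would form the input vector of the BP network.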