Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 44 Issue 6, 13 November 2018
  
Survey on Web-based Question Answering
LI Zhou-jun and LI Shui-hua
Computer Science. 2017, 44 (6): 1-7.  doi:10.11896/j.issn.1002-137X.2017.06.001
Microsoft XiaoIce has triggered a new round of enthusiasm for question answering research. As a new kind of information retrieval technology, question answering offers friendly interaction by letting users pose questions in natural language. Web-based question answering extracts natural-language answers to users' questions from the search results provided by search engines, and thus combines the advantages of search engines and question answering. Firstly, the background and history of web-based question answering were summarized. Then, the research progress of its three key technologies (question analysis, information retrieval and answer extraction) was introduced in detail. On this basis, the problems that remain to be solved were analyzed. Finally, future research trends of web-based question answering were discussed.
Research and Development of Auditing Techniques for Cloud Data Possession
TIAN Hui, CHEN Yu-xiang, HUANG Yong-feng and LU Jing
Computer Science. 2017, 44 (6): 8-16.  doi:10.11896/j.issn.1002-137X.2017.06.002
As an important branch of cloud computing, cloud storage offers low cost and high performance, and has attracted a growing number of organizations and individuals to outsource their data. However, due to the outsourced nature of cloud data and the frequent security incidents at cloud storage providers, users lack confidence in cloud storage servers; the central problem is how to effectively check the integrity of cloud data. To address this challenge, auditing of cloud data possession has been proposed and has received widespread attention in recent years, and a comprehensive survey of it was provided in this paper. First, general models and design goals of cloud data possession auditing were reviewed. Second, existing auditing schemes were classified according to their auditing functions, their principles were analyzed, and their performance was compared. Finally, open problems in the auditing of cloud data possession were identified, and trends of future development were discussed.
Heterogeneous Storage Aware Data Placement of Ceph Storage System
LIU Fei, JIANG De-jun, ZHANG Huan, CHEN Jing, WANG Jun and XIONG Jin
Computer Science. 2017, 44 (6): 17-22.  doi:10.11896/j.issn.1002-137X.2017.06.003
The Ceph distributed storage system is becoming a widely used open-source cloud storage solution. Heterogeneous storage can provide large capacity and high performance at low cost if an effective data management strategy is used. However, Ceph currently cannot effectively exploit the performance of heterogeneous storage devices. Since the multiple replicas of a piece of data can be stored on different storage media, different device combinations for the replicas differ in both performance and cost. In this paper, a data placement method for heterogeneous storage based on Ceph was proposed. The method places data on different replica combinations according to access intensity and read/write ratio, which effectively improves system performance while controlling system cost.
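The abstract does not give the placement rule itself; the following minimal Python sketch only illustrates the idea of mapping access intensity and read/write ratio to a replica media combination. The thresholds hot_iops and read_heavy and the three-replica layouts are hypothetical, not values from the paper.

```python
# Hypothetical sketch of access-aware replica placement (not the paper's code).
# Hot, read-mostly data gets an SSD-leaning replica combination; cold or
# write-heavy data stays on cheaper HDDs.

def choose_replica_media(access_intensity, read_ratio,
                         hot_iops=100.0, read_heavy=0.7):
    """Return a 3-replica media combination for one object.

    access_intensity -- recent accesses per second (assumed metric)
    read_ratio       -- reads / (reads + writes), in [0, 1]
    """
    if access_intensity >= hot_iops and read_ratio >= read_heavy:
        return ("ssd", "ssd", "hdd")   # hot, read-mostly: serve reads from SSD
    if access_intensity >= hot_iops:
        return ("ssd", "hdd", "hdd")   # hot but write-heavy: limit SSD cost/wear
    return ("hdd", "hdd", "hdd")       # cold data: cheapest combination

print(choose_replica_media(250.0, 0.9))   # ('ssd', 'ssd', 'hdd')
```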
Analysis and Optimization of DiskSeen Prefetching Algorithm
LIU Yan, ZHU Chun-jie and WANG Fang
Computer Science. 2017, 44 (6): 23-30.  doi:10.11896/j.issn.1002-137X.2017.06.004
To balance the demands of high speed and large storage capacity, the computer storage hierarchy takes a typical pyramid-shaped structure. However, as information technology develops rapidly, the speed gap between processor and disk has widened, making disk access the bottleneck of the computer system. In recent decades, how to reduce the impact of I/O delay has been a hot issue in the storage field. Prefetching, namely predicting I/O requests in advance and reading the data into the cache so as to hide I/O delay from the application, is a vital technique for alleviating this bottleneck. DiskSeen is a block-level prefetching policy; it improves sequential disk access and prefetching performance by analyzing the relationships between blocks' locations and access times. Based on DiskSeen, the following work was done in this paper. First, addressing the drawbacks of DiskSeen, we proposed dynamic control of the prefetch size and active history-aware prefetching on the second match to improve its efficiency. Second, we implemented the DiskSeen algorithm and its optimizations. Lastly, we conducted comparative tests in a simulation environment. The results show that DiskSeen can effectively increase the hit ratio and reduce the average response time, and that our optimizations further improve system performance in both respects.
Research on Memory Management and Cache Replacement Policies in Spark
MENG Hong-tao, YU Song-ping, LIU Fang and XIAO Nong
Computer Science. 2017, 44 (6): 31-35.  doi:10.11896/j.issn.1002-137X.2017.06.005
Spark is a big data processing framework based on MapReduce. Spark can make full use of cluster memory, thus accelerating data processing. Spark divides memory into Shuffle Memory, Storage Memory and Unroll Memory according to function, and these memory zones have different characteristics. The features of Shuffle Memory and Storage Memory were tested and analyzed. RDD (Resilient Distributed Dataset) is the most important abstraction in Spark and can be cached in cluster memory. When cluster memory is insufficient, Spark must select some RDD partitions to discard to make room for new ones. A new cache replacement policy called DWRP (Distributed Weight Replacement Policy) was proposed. DWRP computes the weight of every RDD partition from its time in memory, its size and its frequency of use, and then selects candidate RDD partitions to discard while taking distribution features into account. The performance of different cache replacement policies was tested and analyzed at last.
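The abstract names the three weight factors but not the formula; a plausible Python sketch follows, in which the particular combination of residence time, size and use frequency is an assumption.

```python
# Hypothetical weight computation in the spirit of DWRP (the concrete
# formula is not given in the abstract; this combination is an assumption).
import time

def partition_weight(stored_at, size_bytes, use_count, now=None):
    """Higher weight = more worth keeping in the cache."""
    now = time.time() if now is None else now
    residence = max(now - stored_at, 1e-9)   # seconds in memory
    # Frequently used, recently stored, small partitions are kept; old,
    # large, rarely used partitions get low weight and are evicted first.
    return use_count / (residence * size_bytes)

def choose_victim(partitions):
    """partitions: dict partition_id -> (stored_at, size_bytes, use_count)."""
    return min(partitions, key=lambda p: partition_weight(*partitions[p]))
```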
Optimizing Small XOR-based Non-systematic Erasure Codes
JIN Xing-tong, LI Peng, WANG Gang, LIU Xiao-guang and LI Zhong-wei
Computer Science. 2017, 44 (6): 36-42.  doi:10.11896/j.issn.1002-137X.2017.06.006
With the development of storage systems, the rapid rise of cloud storage has met the need to store rapidly growing quantities of data. However, a single cloud storage provider faces risks of data confidentiality, security, availability and vendor lock-in. To solve these problems, a multi-cloud storage system with privacy-protecting ability can be constructed by using the privacy protecting code (PPC), an erasure code based on the XOR operation. This paper mainly analyzed optimized encoding algorithms for the PPC to improve encoding performance. First, we designed an algorithm to search for the optimal XOR scheduling sequence, reducing the number of XOR operations during encoding: because the encoding/decoding computation of the PPC can be expressed as the multiplication of a generator matrix (a 0/1 matrix) and a data vector, the amount of computation is directly proportional to the number of 1s in the generator matrix, so a better schedule yields better performance. The resulting schedule can be used to construct the multi-cloud storage system. Second, we used the AVX2 SIMD instructions to further parallelize encoding on top of the optimized schedule. In the experiments, encoding performance with the optimized schedule improves by 34.8%, and with the additional SIMD optimization it improves by 107.1%.
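As a hedged illustration of why encoding cost tracks the number of 1s in the generator matrix, here is a generic XOR encoder over a 0/1 matrix in Python; the PPC's actual matrix and schedule are not reproduced.

```python
# Minimal sketch of XOR-based erasure encoding with a 0/1 generator matrix
# (illustrative only; the PPC's real generator matrix is not shown here).

def xor_encode(generator, data_blocks):
    """generator: rows of 0/1 ints; data_blocks: equal-length bytes objects.

    Each output block is the XOR of the data blocks selected by the 1s in
    its row, so the XOR count equals the number of 1s minus one per row.
    """
    out = []
    for row in generator:
        acc = bytearray(len(data_blocks[0]))
        for bit, block in zip(row, data_blocks):
            if bit:
                acc = bytearray(a ^ b for a, b in zip(acc, block))
        out.append(bytes(acc))
    return out

data = [b"\x01\x02", b"\x03\x04", b"\x05\x06"]
G = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]]   # 3 data rows + 1 parity row
print(xor_encode(G, data)[-1])                      # b'\x07\x00'
```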
Optimization Scheme of UBIFS Based on Hot Data Identification Technology
MA Jun, TONG Wei, LIU Jing-ning and LIU Jing-chao
Computer Science. 2017, 44 (6): 43-50.  doi:10.11896/j.issn.1002-137X.2017.06.007
NAND flash memory cannot be managed by traditional file systems because of its special access characteristics. Although an FTL solves this problem in NAND flash devices, the structures and algorithms of traditional file systems, designed for disks, can easily cause severe performance degradation and uneven wear on NAND flash devices. Flash file systems that incorporate some FTL functions can make better use of NAND flash memory with further optimization. UBIFS is a widely used flash file system in Linux, but storage systems using UBIFS still suffer severe write amplification and frequent garbage collection. To solve these problems, this research used a hash table with multiple hash functions to identify hot data in UBIFS, reducing identification overhead and improving recognition accuracy. By using multiple logs, cold and hot data are separated into different storage areas to reduce the frequency of garbage collection. A delayed log-commit technique was also adopted to reduce the write amplification caused by frequent log commits. A series of experiments was conducted to verify the performance of the proposed method. Experiments show that internal writes to physical blocks in the optimized UBIFS are reduced by 5%-10% compared with the original UBIFS, and the number of garbage collections triggered is reduced by 7%-13%. Overall, the IOPS of the storage system is improved by 5%-18%, and system performance degradation is effectively alleviated by the optimized UBIFS.
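A sketch of hot-data identification with a multi-hash counter table is given below; the table size, number of hashes and hotness threshold are illustrative assumptions, not values from the paper.

```python
# Sketch of multi-hash hot-data identification for logical block addresses.
import hashlib

class MultiHashHotFilter:
    def __init__(self, size=4096, hashes=4, threshold=8):
        self.size, self.hashes, self.threshold = size, hashes, threshold
        self.counters = [0] * size

    def _slots(self, lba):
        digest = hashlib.md5(str(lba).encode()).digest()
        # Derive several independent slot indices from one digest.
        return [int.from_bytes(digest[4*i:4*i+4], "big") % self.size
                for i in range(self.hashes)]

    def record_write(self, lba):
        for s in self._slots(lba):
            self.counters[s] += 1

    def is_hot(self, lba):
        # A block counts as hot only if every counter it maps to is high,
        # which bounds false positives like a counting Bloom filter.
        return min(self.counters[s] for s in self._slots(lba)) >= self.threshold
```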
Read-Write Performance Optimization Scheduling Scheme for SSD
ZHU Yue, WU Fei, XIONG Qin and XIE Chang-sheng
Computer Science. 2017, 44 (6): 51-56.  doi:10.11896/j.issn.1002-137X.2017.06.008
Compared with traditional hard disk drives (HDDs), NAND-flash-based solid-state drives (SSDs) are non-volatile and provide better performance as well as lower power consumption. They have therefore been widely adopted in data centers, cloud computing, online transaction processing, etc. However, in NAND flash memory, read operations are significantly faster than write operations. Hence, for a concurrent workload with a mixture of read and write requests, reads may be blocked by writes, which causes enormous read latency. In many read-intensive applications, especially online transaction processing, where read requests account for more than 90% of the workload, this sharp increase in read latency severely affects overall system performance. In this paper, we proposed a read/write scheduling scheme that achieves remarkable read performance improvement by dynamically adjusting the priority of read and write requests beneath the flash translation layer. In the experiments, we designed and built an SSD simulator to evaluate the effectiveness of the scheme. Experimental results show that with the proposed scheme, the maximum and average read latencies are substantially reduced, by 72% and 41%, respectively.
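The abstract describes dynamic read/write prioritization without giving the policy; one possible two-queue sketch follows, where the bounded read burst that prevents write starvation is an assumption of this illustration.

```python
# Toy read-priority scheduler: reads go first, but a bounded number of
# consecutive reads keeps queued writes from starving.
from collections import deque

class ReadPriorityScheduler:
    def __init__(self, max_read_burst=8):
        self.reads, self.writes = deque(), deque()
        self.max_read_burst = max_read_burst
        self._burst = 0

    def submit(self, req):            # req = ("R" or "W", lba)
        (self.reads if req[0] == "R" else self.writes).append(req)

    def next_request(self):
        # Serve reads first, but yield to a write after a long read burst.
        if self.reads and (self._burst < self.max_read_burst or not self.writes):
            self._burst += 1
            return self.reads.popleft()
        self._burst = 0
        return self.writes.popleft() if self.writes else None
```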
Efficient Mechanism of Hybrid Memory Placement and Erasure Code
WU Yang, FU Yin-jin, CHEN Wei-wei and NI Gui-qiang
Computer Science. 2017, 44 (6): 57-62.  doi:10.11896/j.issn.1002-137X.2017.06.009
With the rapid development of big data and multi-core technology, traditional memory technology has in fact fallen far behind the in-memory computing needs of the many emerging data-intensive applications. In recent years, the emergence and development of non-volatile memory (NVM) provides an opportunity to break through the bottleneck of traditional memory technology. As a typical emerging non-volatile memory, phase change memory (PCM) and traditional DRAM each have their own advantages; PCM is widely considered the most likely replacement for DRAM and has very good development prospects in memory applications. Hybrid memory based on DRAM and PCM makes it possible to exploit the advantages of both simultaneously. Therefore, this paper proposed a DRAM/PCM hybrid memory architecture designed to markedly improve the reliability of the hybrid memory system by using erasure codes, on top of efficient read/write strategies and a data migration mechanism. Experiments show that the hybrid memory system can greatly reduce energy consumption, clearly improve throughput, and ensure read/write reliability.
Link Failure Detection and Fast Recovery in Software-defined Satellite Network
ZHANG Fang, DENG Chang-lin, WANG Zhi and GUO Wei
Computer Science. 2017, 44 (6): 63-67.  doi:10.11896/j.issn.1002-137X.2017.06.010
Link failure detection is an open issue in satellite networks with inter-satellite links. This paper proposed a novel strategy for link failure detection and service recovery based on a software-defined satellite network (SDSN) architecture. In this architecture, the link failure detection mechanism detects and locates link failures in the satellite network, while the failure recovery mechanism combines protection and restoration to migrate and recover interrupted services. To demonstrate the efficiency of the proposed strategy, we conducted experiments on a prototype system. The results show that link failures can be located quickly, and recovery durations are kept within 10±2 milliseconds. The architecture and corresponding strategy can be adapted to any other satellite network topology.
Smooth Energy-saving Algorithm Based on Link-ranking
HUANG Hong and YU Hong-fang
Computer Science. 2017, 44 (6): 68-74.  doi:10.11896/j.issn.1002-137X.2017.06.011
To reduce the frequent switching of link states found in many of today's energy-saving algorithms, a smooth energy-saving algorithm based on link ranking was proposed. The algorithm uses a dedicated sorting mechanism to minimize the switching frequency of link states over continuous time, so that its strategy is smooth. Moreover, the proposed algorithm balances the energy consumption of line cards and links in order to achieve higher energy-saving efficiency. Simulation results show that this method outperforms a greedy algorithm based on link ranking in terms of energy-saving efficiency and link-state switching frequency, and outperforms GreenTE, which pursues optimal energy-saving efficiency, in terms of link-state switching frequency and time complexity.
Method of Geometric Connected Disk Cover Problem for Wireless Mesh Networks Deployment
LI Yue and LIU Nai-an
Computer Science. 2017, 44 (6): 75-79.  doi:10.11896/j.issn.1002-137X.2017.06.012
User coverage and network connectivity are both important for wireless mesh network planning, but they are traditionally studied separately. To combine these two factors effectively, a geometric connected disk cover problem for wireless mesh networks was formulated, taking into account hierarchical network characteristics, user demands, network connectivity and deployment cost. The continuous spatial location selection problem is transformed into a discrete problem by a candidate-point generation algorithm. An improved multi-objective genetic algorithm was proposed to obtain Pareto solutions. Experimental results demonstrate the efficiency of this scheme for mesh network deployment.
Study on Calculation Method for Internet Topological Parameters Based on MapReduce
ZHU Kai-long, LU Yu-liang and ZHANG Yan-qing
Computer Science. 2017, 44 (6): 80-82.  doi:10.11896/j.issn.1002-137X.2017.06.013
To improve on the efficiency of traditional single-machine algorithms for computing Internet topological parameters, we used MapReduce to study parallel algorithms for these parameters. After discussing the difficulties of parallelizing the traditional algorithms, this paper gave design principles for MapReduce-based graph algorithms and presented a message passing mechanism for them. Using this mechanism, we designed and improved parallel algorithms to compute four topological parameters. The experimental results show that the proposed MapReduce algorithms effectively improve efficiency and scale well.
Hypercube Network Diagnosis Algorithm under Comparison Model
CHEN Miao-jiang, LIANG Jia-rong and ZHANG Qian
Computer Science. 2017, 44 (6): 83-90.  doi:10.11896/j.issn.1002-137X.2017.06.014
Efficient diagnosis is very important for a multiprocessor system. The ability to identify all the faulty nodes in a multiprocessor system is known as diagnosability. In the comparison model, diagnosis is performed by sending two identical signals from a processor to a pair of distinct neighbors and comparing the responses. To improve the diagnosability of the hypercube network, we presented a novel hypercube diagnosis algorithm under the comparison model, which uses the characteristics of hypercube links to produce a topology ES(k;n) and obtains a ternary diagnosis syndrome to determine the faulty nodes of the system. Under optimal conditions, the diagnosability of the algorithm is 4n, which exceeds the ordinary diagnosability n.
Node Reusable Virtual Network Embedding Algorithm Based on Network Shrinking
WU Guo, FANG Li-guo and XU Xiao-hui
Computer Science. 2017, 44 (6): 91-93.  doi:10.11896/j.issn.1002-137X.2017.06.015
A node-reusable virtual network embedding algorithm based on network shrinking was presented to address the problem that random node reuse fails to exploit the features of reusable nodes. By dividing network embedding into a network shrinking phase and a mapping phase, the selection of reusable nodes is separated from their embedding. For the shrinking phase, a network shrinking algorithm based on merging neighboring nodes was presented, which makes the network smaller under constraints on the maximum node resource demand and the maximum adjacent link resource demand. Experiments show that the proposed algorithm achieves better embedding quality and consumes less time.
Research on Completely Independent Spanning Trees Based on Degree of Vertices
LIN Cheng-kuan, ZHAO Yuan, FAN Jian-xi and CHENG Bao-lei
Computer Science. 2017, 44 (6): 94-96.  doi:10.11896/j.issn.1002-137X.2017.06.016
In computer interconnection networks, completely independent spanning trees (CISTs) play an important role in reliable transmission, parallel transmission, secure distribution of information, and other applications. Suppose a graph G contains n spanning trees T1, T2, …, Tn; if for any pair of vertices u and v of G the paths from u to v in the n trees are node-disjoint, then T1, T2, …, Tn are called CISTs of G. In 2005, Chang et al. [1] proved that for graphs of order n with n≥6, if the minimum degree is at least n-2, there are at least n/3 CISTs in G. Building on this result, we further studied the relation between vertex degrees and the number of CISTs in G. For any graph of order n with n≥5 whose minimum degree is at least n-2, an equation relating the number of vertices of degree n-2, the number of vertices of degree n-1 and the number of CISTs was obtained. The correctness of the equation was also proved, improving the result of paper [1].
Research on Heterogeneous Network Access Selection Algorithm Based on Improved Multiple Attribute
GAO Xiu-e and LI Ke-qiu
Computer Science. 2017, 44 (6): 97-101.  doi:10.11896/j.issn.1002-137X.2017.06.017
Aiming at the problem that existing multi-attribute handoff algorithms neither consider the load balance of the target network nor guarantee network service quality, a heterogeneous network access algorithm based on improved multi-attribute decision making was put forward. Firstly, a gravity model is used to simplify the complicated calculation of multi-attribute decision algorithms, reducing the execution time of network access handoff. Secondly, the loads of the optimal and suboptimal target networks are calculated by a load decision module added before switching, in order to select the final target handoff network. A heterogeneous network simulation environment integrating WLAN and UMTS was built using MATLAB, NS2 and other simulation tools. The simulation results show that the algorithm significantly improves the load balance and resource utilization of the whole network, and effectively reduces the handoff blocking rate of mobile terminals.
Research on Performance of Cooperative Communication Based on Constrained Area Relay in VANET
YE Xiang, ZHANG Guo-an and WU Min
Computer Science. 2017, 44 (6): 102-107.  doi:10.11896/j.issn.1002-137X.2017.06.018
The effect of cooperative communication on network performance in vehicular ad hoc networks (VANETs) was studied. A relay participating in communication can enhance the reliability of single-link transmission. However, most studies ignore the detrimental effect of the enlarged interference range due to relay transmissions, which can block the transmissions of neighboring links. We constructed a diamond-shaped relay selection region to restrict this detrimental effect, developed a theoretical performance analysis method for cooperation in VANETs, and obtained mathematical expressions for the outage probability and network throughput. Numerical analysis and simulation results show that cooperative communication can significantly reduce the outage probability, but it is not always beneficial to the throughput of the whole network; the size of the relay selection region can be tuned to improve network throughput.
Subcarrier Pairing Strategy for Energy Efficiency Optimization in Cognitive Radio
ZHAO Cheng, HAO Yin-yin, HUA Jing-yu, YAO Xin-wei and WANG Wan-liang
Computer Science. 2017, 44 (6): 108-113.  doi:10.11896/j.issn.1002-137X.2017.06.019
To improve the energy efficiency of cooperative cognitive radio systems, a joint resource allocation algorithm was proposed. Firstly, a heuristic subcarrier pairing scheme that prioritizes energy efficiency is applied while satisfying QoS and power constraints. Then, power allocation is optimized with a Lagrangian dual algorithm to maximize system energy efficiency. Simulation results demonstrate a significant enhancement in energy efficiency and the validity of the proposed algorithm.
Research on End-to-End Model of Reconfigurable Information Communication Basal Network
MA Ding, ZHUANG Lei and LAN Ju-long
Computer Science. 2017, 44 (6): 114-120.  doi:10.11896/j.issn.1002-137X.2017.06.020
As a clean-slate future Internet architecture, the reconfigurable information communication basal network supports various business types by constructing co-existing virtual networks, and implements on-demand addressing through a polymorphic addressing and routing mechanism. To serve diversified end systems and respond to continuous changes in the underlying network environment, resources and end-to-end services need to be managed and provided in a flexible and scalable way. To this end, a two-dimensional end-to-end model was proposed, with the horizontal dimension corresponding to the data plane and the vertical dimension corresponding to the management plane. It achieves intra-domain and inter-domain autonomic management of resources, services, virtual networks and service paths, based on the agents' capabilities of awareness, self-decision and mutual interaction. To incorporate end systems into this autonomic management framework, a novel end system architecture was designed and an end system access scheme was proposed, which together enable automatic connection to virtual networks and automatic service provisioning.
Survey for Attack and Defense Approaches of OpenFlow-enabled Software Defined Network
WU Ze-hui, WEI Qiang and WANG Qing-xian
Computer Science. 2017, 44 (6): 121-132.  doi:10.11896/j.issn.1002-137X.2017.06.021
Software-defined networking (SDN) increases the flexibility of network deployment, the dynamics of network management and the efficiency of network transmission by centralizing the control plane and separating it from the data plane. However, the security of SDN remains an open problem. In this paper, we analyzed and categorized relevant research on OpenFlow-enabled SDN security. We first provided an overview of the threats to SDN's three-layer architecture and the vulnerabilities within each layer. Thereafter, we presented existing SDN-related attack approaches organized by the stages of a network attack, such as network probing, fraudulent insertion and remote control. We then studied and compared current defense approaches covering probe blocking, system hardening and attack defense. Finally, we reviewed several potential attack and defense methods as foreseeable future research challenges.
Dynamic Key AES Encryption Algorithm Based on Compound Chaotic Sequence
YAN Le-le and LI Hui
Computer Science. 2017, 44 (6): 133-138.  doi:10.11896/j.issn.1002-137X.2017.06.022
To solve the security degradation of the traditional AES (Advanced Encryption Standard) algorithm caused by an unchanging seed key and fixed key space, this paper combined chaotic sequence ciphers with the AES algorithm, proposed a dynamic seed-key AES encryption algorithm based on compound chaotic sequences, and completed its software implementation. The algorithm uses the private key to construct dual chaotic systems with identical parameters at the encryption and decryption ends. A chaotic sequence is generated and cross-combined by the two chaotic systems (logistic and tent), iterated a number of times related to the plaintext length, and then mapped into dynamic initial seed keys for the AES block cipher. Security analysis, statistical tests and performance tests demonstrate that the proposed algorithm offers high security, strong adaptability and fast operation, making it well suited to cross-platform wireless secure data transmission applications.
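For reference, the logistic and tent maps named in the abstract are standard; how the paper cross-combines them and derives AES seed keys is not specified, so the key-derivation step below is purely illustrative.

```python
# Sketch of the two chaotic maps and an assumed cross-combination into key bytes.

def logistic(x, mu=3.99):
    return mu * x * (1.0 - x)          # chaotic for mu close to 4

def tent(x, a=0.499):
    return x / a if x < a else (1.0 - x) / (1.0 - a)

def chaotic_key_bytes(seed1, seed2, n_iter, n_bytes=16):
    x, y, out = seed1, seed2, bytearray()
    for _ in range(n_iter):            # burn-in tied to plaintext length
        x, y = logistic(x), tent(y)
    while len(out) < n_bytes:
        x, y = logistic(x), tent(y)
        out.append((int(x * 256) ^ int(y * 256)) & 0xFF)   # cross-combine
    return bytes(out)

print(chaotic_key_bytes(0.3141, 0.5772, 1000).hex())   # 128-bit seed key
```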
Method of Constructing Differential Privacy Decision Tree Classifier with Incomplete Data Sets
SHEN Si-qian, MAO Yu-guang and JIANG Guan-ru
Computer Science. 2017, 44 (6): 139-143.  doi:10.11896/j.issn.1002-137X.2017.06.023
We mainly studied the problem of constructing differentially private decision tree classifiers over incomplete data sets. We first introduced the differentially private ID3 decision tree algorithm and the differentially private random decision tree algorithm. We then analyzed the weaknesses of these algorithms and designed a new differentially private random decision tree algorithm using the exponential mechanism. Finally, an approach called WP (Weight Partition) was proposed for building decision tree classifiers over incomplete data sets, which yields better prediction while preserving privacy without imputing missing values. Experimental results show that our approach is applicable to both differentially private ID3 trees and differentially private random decision trees, under either the Laplace or the exponential mechanism.
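The exponential mechanism itself is standard and can be sketched generically; the utility function the paper applies to tree splits is not reproduced here, so the candidate utilities in the example are hypothetical.

```python
# Generic exponential-mechanism selection (standard definition).
import math, random

def exponential_mechanism(candidates, utility, epsilon, sensitivity=1.0):
    """Pick a candidate with probability proportional to
    exp(epsilon * u / (2 * sensitivity))."""
    weights = [math.exp(epsilon * utility(c) / (2.0 * sensitivity))
               for c in candidates]
    r = random.uniform(0.0, sum(weights))
    acc = 0.0
    for c, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return c
    return candidates[-1]

# Example: choose a split attribute whose (hypothetical) utility is given.
util = {"age": 0.9, "zip": 0.2, "income": 0.7}
print(exponential_mechanism(list(util), util.get, epsilon=0.5))
```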
Improved Location Anonymous Technology for Big Data Based on Bloom Filter
LIU Yan and ZHANG Lin
Computer Science. 2017, 44 (6): 144-149.  doi:10.11896/j.issn.1002-137X.2017.06.024
Since location-based big data applications contain large amounts of users' sensitive information, this paper put forward an anonymous location protection method based on a Bloom filter with multiple hash encodings, to prevent privacy leakage in the analysis of massive data. A heuristic privacy metric divides the anonymous area to hide real location data. Keeping search targets adjacent in Euclidean distance optimizes the area of the spatial anonymity box, and introducing a similarity factor into the partition policy of the query service reduces space fragmentation. With a trusted third-party anonymity server deployed between the mobile user and the service provider, target nodes can be effectively blurred to resist malicious privacy attacks. Theoretical analysis and simulation results show that the new algorithm optimizes the anonymous space, improves privacy protection effectively, and has better time complexity when operating on massive data sets.
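Below is a minimal Bloom filter with multiple hash encodings, the data structure the method builds on; how anonymized location cells are encoded into it is an assumption of this illustration.

```python
# Standard Bloom filter with k hash positions derived from one digest.
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1 << 16, k_hashes=5):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item):
        h = hashlib.sha256(item.encode()).digest()
        return [int.from_bytes(h[4*i:4*i+4], "big") % self.m
                for i in range(self.k)]

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

bf = BloomFilter()
bf.add("cell_39.90_116.40")              # e.g. a coarsened location cell
print("cell_39.90_116.40" in bf, "cell_0_0" in bf)   # True False (w.h.p.)
```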
Image Splicing Blind Detection Method Combined RLRN with GLCM
SU Hui-jia and ZHENG Ji-ming
Computer Science. 2017, 44 (6): 150-154.  doi:10.11896/j.issn.1002-137X.2017.06.025
An algorithm for blind detection of spliced images using run-length statistics and gray-level co-occurrence matrix (GLCM) features was studied. Splicing detection can be treated as a two-class pattern recognition problem. This paper proposed an enhanced technique for blind detection of image splicing: it extracts the run-length run-number (RLRN) in the Cb/Cr chroma channels and combines it with GLCM features to capture the artifacts introduced by tampering. LibSVM is then used to classify authentic and spliced images. Experimental results demonstrate that the proposed algorithm not only improves the recognition rate on the CASIA v2.0 data set, but also performs well on copy-move tamper detection.
Modified SCPS-SP for Space Information Network
LIAO Yong, CHEN Hong-yu and SHEN Xuan-fan
Computer Science. 2017, 44 (6): 155-160.  doi:10.11896/j.issn.1002-137X.2017.06.026
Based on IP security (IPSec) and the Space Communications Protocol Specification-Security Protocol (SCPS-SP), a modified SCPS-SP (M-SCPS-SP) was proposed to remedy deficiencies in data protection for space information networks, namely security-level adaptation and resistance to replay attacks. The M-SCPS-SP can meet the security-level needs of different space information network users, and it is extensible to support efficient encryption and authentication algorithms. Finally, we built a simulation platform based on the SCPS reference implementation and verified the feasibility and effectiveness of the proposed M-SCPS-SP.
Two-layer Semantics-based Security Detection Approach for Android Native Libraries
YE Yi-lin, WU Li-fa and YAN Hui-ying
Computer Science. 2017, 44 (6): 161-167.  doi:10.11896/j.issn.1002-137X.2017.06.027
Native code is widely used in Android applications, providing a new attack vector for attackers and raising increasing security concerns. Existing Android malware detection approaches mainly analyze Java code or the Dalvik bytecode compiled from it, ignoring the native code used in Android applications. To combat this emerging threat, this paper proposed a novel two-layer semantics-based security detection method for Android native libraries. First, on the basis of native method call paths, the Java-layer semantics of native methods are extracted by analyzing the data dependence between native methods and Java methods and the types of the entry points of native method call paths. For the native-layer semantics, five kinds of suspicious behaviors are defined: data uploading, data downloading, reading or writing in sensitive system paths, sensitive strings, and suspicious calls to Java methods; IDA Pro and IDAPython are used to analyze these behaviors in native code. Experiments were conducted with the open-source machine learning tool Weka on 5336 benign Android applications and 3426 Android malware samples, and the best accuracy achieved is 92.4%. This proves that our method can effectively assess the security of native libraries used in Android applications.
Grid-based Identity Signcryption Algorithm and Application in Ad Hoc Network
CHEN Shao-hua, FAN Xiao-guang, CONG Wei, HUANG Jin-ke and SUN Xian-ming
Computer Science. 2017, 44 (6): 168-173.  doi:10.11896/j.issn.1002-137X.2017.06.028
Identity-based signcryption has the advantage of low computational cost, which makes it suitable for the key management of ad hoc networks, and it can guarantee both the confidentiality and the authentication of information. Addressing the deficiencies of existing identity-based signcryption algorithms, and taking a grid as the logical structure for its low overhead, high expandability and high connectivity, this paper proposed an efficient grid-based identity signcryption algorithm. Applying this algorithm in an ad hoc network key management scheme reduces the communication and computation cost of key management. The analysis shows that the signcryption algorithm is secure in the random oracle model, the key management is safer and more efficient, and the ad hoc network gains good resistance to attack.
Linear Complexity of Quaternary Generalized Cyclotomic Sequences with Period pq
WEI Wan-yin, DU Xiao-ni, LI Zhi-xia and WAN Yun-qi
Computer Science. 2017, 44 (6): 174-176.  doi:10.11896/j.issn.1002-137X.2017.06.029
Based on the theory of Gray mapping and Ding's generalized cyclotomy, a new class of quaternary sequences over Z4 with period pq was first constructed. We then determined the Fourier spectral sequence of the new sequence over the finite field Fr (r≥5, r prime). Finally, we obtained the linear complexity of the new sequence from the weights of its Fourier spectral sequence. The results show that the sequence has large linear complexity and can resist attacks by the Berlekamp-Massey (B-M) algorithm. From the viewpoint of cryptography, it is a good pseudorandom sequence.
Test Case Generation Method Based on Adaptive Particle Swarm Optimization
BAO Xiao-an, YANG Ya-juan, ZHANG Na, LIN Qing-xia and YU Cheng-hai
Computer Science. 2017, 44 (6): 177-181.  doi:10.11896/j.issn.1002-137X.2017.06.030
Obtaining a minimum covering array is one of the key issues in combinatorial testing. Particle swarm optimization (PSO), an evolutionary search method, can obtain small covering arrays, but its performance is significantly affected by its parameters. To solve this problem, we combined the one-test-at-a-time strategy with particle swarm optimization and proposed an adaptive particle swarm optimization algorithm. Based on the quality of the particles in the population, the strategy adaptively adjusts the inertia weight, giving the algorithm stronger adaptability. To further improve performance, we constructed a priority measure function to weigh each combination, preferring the combination with the highest weight when generating a single test case. Finally, the algorithm was implemented and compared with the original particle swarm optimization algorithm in terms of test suite size and generation time. The results show the competitiveness of this approach.
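The abstract does not state the adaptation rule; one plausible fitness-based inertia-weight rule for a minimization problem is sketched below, where the linear mapping between fitness rank and weight is an assumption.

```python
# Illustrative PSO velocity update with a fitness-adaptive inertia weight.
import random

def adaptive_inertia(f_i, f_best, f_worst, w_min=0.4, w_max=0.9):
    """Better particles get smaller w (exploit); worse ones larger w (explore).
    Assumes minimization, so f_best is the smallest fitness in the swarm."""
    if f_worst == f_best:
        return w_max
    ratio = (f_i - f_best) / (f_worst - f_best)   # 0 for best, 1 for worst
    return w_min + (w_max - w_min) * ratio

def update_velocity(v, x, pbest, gbest, w, c1=2.0, c2=2.0):
    return [w * vi + c1 * random.random() * (pb - xi)
                 + c2 * random.random() * (gb - xi)
            for vi, xi, pb, gb in zip(v, x, pbest, gbest)]
```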
Reliability Analysis Method of Embedded System AADL Model Based on Fault Tree Analysis
LI Dong-min, LI Jing and LIN Hua-feng
Computer Science. 2017, 44 (6): 182-188.  doi:10.11896/j.issn.1002-137X.2017.06.031
We used the Architecture Analysis and Design Language (AADL) to build a semi-formal model of an embedded system, transformed the AADL model into a static fault tree (SFT) model, and analyzed the reliability of the system by fault tree analysis. Firstly, the reliability model is built by attaching the AADL error model annex. Then, semantic mapping rules from the AADL model to the SFT model are designed and used for the transformation. Finally, taking an aircraft wheel brake system as an example, reliability analysis is carried out with the proposed method to demonstrate its feasibility and effectiveness.
Anomaly Detection Based on Interval One Cluster and Classification
SUN Qiang, WEI Wei, HOU Pei-xin and YUE Ji-guang
Computer Science. 2017, 44 (6): 189-198.  doi:10.11896/j.issn.1002-137X.2017.06.032
Anomaly detection is crucial in system maintenance. During operation, normal data are easy to obtain, while anomalous data are usually costly to obtain; therefore, a one-class classifier can be used for anomaly detection. Owing to measurement uncertainty, environmental noise, storage limitations and so on, monitoring data are often uncertain. This paper used interval numbers to describe this uncertainty and proposed an anomaly detection algorithm based on kernelized possibilistic one-means clustering and a one-class classifier for interval samples. The clustering center is considered both in the input space and in the feature space. Since the interval widths of the samples can be unbalanced, an interval splitting strategy was also proposed. Finally, illustrative numerical experiments were carried out on artificial data sets and on the UCI machine learning repository. The effectiveness of the proposed algorithm is verified, and improvements over existing SVM-based one-class classification (OCC) algorithms are demonstrated.
Group Travel Trip Recommendation Method in LBSNs
LI Xiao-lun and DING Zhi-jun
Computer Science. 2017, 44 (6): 199-205.  doi:10.11896/j.issn.1002-137X.2017.06.033
With the widespread adoption of GPS-enabled devices such as smartphones, GPS navigation devices and GPS loggers, more and more location information is being collected, and recommender systems for location-based social networks (LBSNs) have received increasing attention. Trip recommendation is a hot topic, but most related work focuses on recommending trips to a single user rather than to a group. Therefore, this paper proposed a trip recommendation method for groups in LBSNs. First, based on users' check-ins, the proposed method uses K-means and spectral clustering to mine groups whose members share a great many check-ins. Then, a group's preference is derived from its common check-ins. Finally, combining the group's time and cost constraints, a trip recommendation algorithm is designed to recommend trips that satisfy the group's preference. Experiments were conducted on real check-in data from Sina Weibo; the results show that the proposed method achieves good results in recommending trips to groups.
Two Stage Ant Colony Optimization for Type 2 of U-shaped Assembly Line Balancing Problem
ZHENG Qiao-xian, HE Guo-liang, LI Ming and TANG Qiu-hua
Computer Science. 2017, 44 (6): 206-211.  doi:10.11896/j.issn.1002-137X.2017.06.034
A two-stage ant colony optimization algorithm was proposed for the type-2 U-shaped assembly line balancing problem (UALBP-2), which is widespread in the electronics and automobile industries. In the first stage, which has high global search capability, scout ants obtain a good feasible solution according to a task selection strategy, a task assignment strategy and an iterative compression mechanism, and the search space is reduced according to this solution. In the second stage, which has high local search capability, pathfinding ants search for different elite station loads according to a pheromone-decreasing update strategy, and elite ants group the elite loads of every station into feasible solutions of UALBP-2 according to an elite copy strategy. Computational results on 33 instances from 18 benchmark examples verify the effectiveness and stability of the proposed algorithm.
Classification Algorithm Using Linear Regression and Attribute Ensemble
QIANG Bao-hua, TANG Bo, WANG Yu-feng, ZOU Xian-chun, LIU Zheng-li, SUN Zhong-xu and XIE Wu
Computer Science. 2017, 44 (6): 212-215.  doi:10.11896/j.issn.1002-137X.2017.06.035
For classification problems on high-dimensional, small-sample data, the predictive accuracy of a classification model is restricted by the complexity of the high-dimensional attributes. To further improve accuracy, a classification algorithm using linear regression and attribute ensemble (LRAE) was proposed. Linear regression is used to construct an attribute linear classifier (ALC) for each attribute. To avoid the decrease in accuracy caused by too many ALCs, the empirical loss value from the empirical risk minimization strategy is used as the criterion for selecting ALCs, and the majority voting method is adopted to integrate them. Experiments on gene expression data demonstrate that the accuracy of the LRAE algorithm is relatively higher than that of logistic regression, support vector machine and random forest algorithms.
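A hedged sketch of the ALC ensemble as described: one least-squares linear classifier per attribute, selection by empirical loss, majority voting. The number of retained ALCs (keep) is a hypothetical parameter.

```python
# Sketch of LRAE-style per-attribute linear classifiers with loss-based
# selection and majority voting (labels y in {-1, +1}).
import numpy as np

def fit_alc(x_col, y):
    X = np.c_[x_col, np.ones_like(x_col)]   # slope + intercept
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def alc_predict(w, x_col):
    return np.sign(x_col * w[0] + w[1])

def lrae(X_train, y, X_test, keep=50):
    alcs = [fit_alc(X_train[:, j], y) for j in range(X_train.shape[1])]
    losses = [np.mean(alc_predict(w, X_train[:, j]) != y)
              for j, w in enumerate(alcs)]
    chosen = np.argsort(losses)[:keep]       # lowest empirical loss first
    votes = sum(alc_predict(alcs[j], X_test[:, j]) for j in chosen)
    return np.sign(votes)                    # ties (vote 0) broken arbitrarily
```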
Research on Multi-domain Natural Language Question Understanding
YE Zhong-lin, JIA Zhen and YIN Hong-feng
Computer Science. 2017, 44 (6): 216-221.  doi:10.11896/j.issn.1002-137X.2017.06.036
Question understanding is one of the main tasks of a question answering system. Current question understanding methods aim at the semantic understanding of simple sentences or sentences with specific structures. The method proposed in this paper addresses multi-domain question understanding covering the people, movie, music, book, game and application domains. Firstly, question classification and subject recognition approaches based on the CRF algorithm are presented. Then a predicate dictionary and semantic analysis are applied to recognize predicates. Finally, a predicate disambiguation method is proposed to deal with the different ways a predicate may be expressed in questions. Experimental results show average F-measures of 93.88% and 92.44% in the question classification and semantic analysis experiments, and average accuracies of 91.03% and 81.78% in predicate recognition and question understanding. Thus, the work in this paper can meet the needs of question understanding.
Parameter Establishment of Differential Evolution Algorithm Based on Uniform Design
BAI Yun, ZHANG Tian-jun, ZHAO Gao-chang and LIU Jie
Computer Science. 2017, 44 (6): 222-225.  doi:10.11896/j.issn.1002-137X.2017.06.037
The parameters of the differential evolution algorithm are generally determined by experience, which requires many trial runs, makes it difficult to obtain the best parameter combination, and greatly limits improvements to the optimization ability of the algorithm. This article introduced the uniform design method into the parameter establishment of differential evolution. Through uniform design experiments on three types of standard test functions (unimodal, multimodal and ill-conditioned), optimal parameter combinations applicable to the different function types were discovered, allowing the parameters of the differential evolution algorithm to be specified. When the two optimal parameter combinations obtained by the uniform design experiments are applied to differential evolution, the average global optimal solution is 4.3215 and the average standard deviation is 3.650. It follows that uniform experimental design is a feasible and effective method for setting the parameters of the differential evolution algorithm, and that it offers good stability.
Short-term Power Forecasting for Photovoltaic Generation Based on HS-ESN
WEN Run and TAN Li
Computer Science. 2017, 44 (6): 226-231.  doi:10.11896/j.issn.1002-137X.2017.06.038
To improve the accuracy of short-term power prediction for photovoltaic generation, a forecasting model combining the harmony search (HS) algorithm and the echo state network (ESN) was proposed. The model is based on historical power and weather data provided by a photovoltaic plant. Firstly, it selects days similar to the forecast day with a similar-day algorithm, and treats the differences in meteorological features between the similar day and the forecast day as input variables of the model. Secondly, it trains and forecasts with an ESN model whose parameters are optimized by the HS algorithm. Finally, the HS-ESN prediction model is tested on a photovoltaic plant in Gansu province. Case analysis shows that optimizing the reservoir parameters of the ESN prediction model with the HS algorithm can effectively improve prediction accuracy, so the model has good practical value.
Recommendation Method Based on Random Walk on Graph Integrated with FP-Growth
BIAN Meng-yang, YANG Qing, ZHANG Jing-wei, ZHANG Hui-bing and QIAN Jun-yan
Computer Science. 2017, 44 (6): 232-236.  doi:10.11896/j.issn.1002-137X.2017.06.039
Recommendation is an important strategy for promoting the activity of social networks. However, improving recommendation performance on social networks is a big challenge because of the large number of nodes and the complex relationships among them. Random walk is an effective method for this kind of problem, but the traditional random walk algorithm fails to adequately consider the influence of neighboring nodes. A recommendation method based on random walk on the graph integrated with FP-Growth was proposed, built on the graph structure of the social network. It introduces the FP-Growth algorithm to mine the frequency of co-occurrence between adjacent nodes, and then constructs the transition probability matrix for the random walk. Recommendations are made according to the resulting importance ranking of friends. This method not only retains the advantages of random walk, such as effectively alleviating data sparsity, but also weighs the differences between the relationships of different node pairs. The experimental results show that the proposed method outperforms the traditional random walk algorithm in recommendation performance.
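Assuming the FP-Growth stage has already produced co-occurrence weights between adjacent nodes, the walk itself can be sketched as a random walk with restart; the restart factor alpha is an assumption of this illustration.

```python
# Random walk with restart over a weighted transition matrix.
import numpy as np

def random_walk_scores(W, source, alpha=0.85, iters=100):
    """W[i][j]: co-occurrence weight of edge i->j; returns importance scores."""
    W = np.asarray(W, dtype=float)
    row_sums = W.sum(axis=1, keepdims=True)
    P = np.divide(W, row_sums, out=np.zeros_like(W), where=row_sums > 0)
    n = len(W)
    restart = np.zeros(n); restart[source] = 1.0
    r = restart.copy()
    for _ in range(iters):
        r = alpha * (P.T @ r) + (1 - alpha) * restart
    return r    # rank candidate friends by descending score

W = [[0, 3, 1], [3, 0, 2], [1, 2, 0]]
print(random_walk_scores(W, source=0).round(3))
```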
Atrial Fibrillation Pulse Detection via Complex Network Method
LI Han, ZHAO Hai, LU Yu-hui and SHAO Shi-liang
Computer Science. 2017, 44 (6): 237-239.  doi:10.11896/j.issn.1002-137X.2017.06.040
To explore the complexity of the pulse wave, and drawing on the concept of the "atrial fibrillation pulse" in traditional Chinese medicine, a complex network method for detecting atrial fibrillation was presented. The photoplethysmographic pulse wave is transformed into a network topology using the visibility graph method. A binary support vector machine (SVM) with a Gaussian kernel is designed to distinguish normal sinus rhythm from atrial fibrillation, taking the degree distribution of the network and the average heart rate as input features. Experimental results on patients with paroxysmal atrial fibrillation show that this method can effectively distinguish the patients' disease status from their normal status.
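The (natural) visibility graph construction is standard and can be sketched directly; its degree sequence supplies the degree-distribution features mentioned in the abstract. The toy series below is illustrative.

```python
# Natural visibility graph for a time series: samples i and j are linked
# if no intermediate sample rises above the line between (i, y_i), (j, y_j).
def visibility_graph(series):
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            visible = all(
                series[k] < series[i] + (series[j] - series[i]) * (k - i) / (j - i)
                for k in range(i + 1, j))
            if visible:
                edges.add((i, j))
    return edges

def degrees(edges, n):
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1; deg[j] += 1
    return deg

pulse = [0.2, 0.9, 0.4, 0.7, 0.1, 0.8]
e = visibility_graph(pulse)
print(sorted(e)); print(degrees(e, len(pulse)))
```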
Adaptive Bat Algorithm with Dynamically Adjusting Inertia Weight
PEI Yu-hang, LIU Jing-sen and LI Yu
Computer Science. 2017, 44 (6): 240-244.  doi:10.11896/j.issn.1002-137X.2017.06.041
To improve the performance and precision of the bat algorithm (BA), an adaptive bat algorithm with dynamically adjusted inertia weight (DAWBA) was proposed. An inertia weight obeying the uniform distribution and the beta distribution is added to the iteration formula, accelerating convergence. In addition, we introduced a speed correction factor to constrain the bats' step dynamically, which gives the algorithm effective adaptability. Simulation results show that the performance of DAWBA is significantly improved.
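Below is a sketch of the standard frequency-tuned BA update extended with an inertia weight; DAWBA draws the weight from uniform/beta distributions, and the Beta(2,2) choice here is only an example, not the paper's setting.

```python
# One bat update: frequency-tuned velocity with an inertia weight w.
import random

def bat_step(x, v, x_best, f_min=0.0, f_max=2.0, w=None):
    if w is None:
        w = random.betavariate(2, 2)                # dynamic inertia in (0, 1)
    f = f_min + (f_max - f_min) * random.random()   # random pulse frequency
    v = [w * vi + (xi - xb) * f for vi, xi, xb in zip(v, x, x_best)]
    x = [xi + vi for xi, vi in zip(x, v)]
    return x, v

x, v = [0.5, -1.2], [0.0, 0.0]
x, v = bat_step(x, v, x_best=[0.0, 0.0])
print(x, v)
```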
Variable Multi-cost Based Decision Theoretic Rough Sets
CHEN Yu-jin, LI Xu-wu, JIA Ying-jie and ZHANG Xin
Computer Science. 2017, 44 (6): 245-249.  doi:10.11896/j.issn.1002-137X.2017.06.042
By analyzing the limitations of the optimistic and pessimistic multi-cost decision-theoretic rough sets, fusion rules for a variable-precision multi-cost matrix were proposed and the corresponding decision-theoretic rough set model was established. By introducing the nonlinear sigmoid mapping function, the variable domain is extended to the real number domain. Furthermore, the properties and relations of these rough set models were discussed, and their measures and cost relations were analyzed. Finally, experimental results show the effectiveness of the model.
Improved Apriori Algorithm and Its Application Based on MapReduce
ZHAO Yue, REN Yong-gong and LIU Yang
Computer Science. 2017, 44 (6): 250-254.  doi:10.11896/j.issn.1002-137X.2017.06.043
With the rapid development of mobile communications and Internet technology, how to analyze the requirements of mobile users efficiently and deliver useful information in time has become a hot issue in data mining. To recommend analysis results to users efficiently and in a timely manner, a mining method named MRS-Apriori, based on MapReduce, was proposed. Building on the classical Apriori algorithm, the method defines an encoding rule to optimize the database and adds a judgment mark named Judgemark to indicate whether a transaction is frequent, which improves the efficiency of scanning the database during the join phase. On top of the encoding rules, MRS-Apriori uses the MapReduce programming model on Hadoop for parallel processing, improving the performance of the iterative join steps and reducing the time spent on large-scale data. The experimental results show that the MRS-Apriori algorithm can effectively reduce running time and achieve high accuracy on large data sets.
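One MapReduce pass of Apriori-style candidate counting is simulated locally below; MRS-Apriori's encoding rule and Judgemark flag are omitted, so this is a generic sketch rather than the paper's algorithm.

```python
# Map: emit (candidate k-itemset, 1) per transaction. Reduce: sum and filter.
from itertools import combinations
from collections import defaultdict

def map_phase(transaction, k):
    return [(frozenset(c), 1) for c in combinations(sorted(transaction), k)]

def reduce_phase(pairs, min_support):
    counts = defaultdict(int)
    for itemset, one in pairs:
        counts[itemset] += one
    return {s: c for s, c in counts.items() if c >= min_support}

db = [{"a", "b", "c"}, {"a", "c"}, {"b", "c"}, {"a", "c", "d"}]
pairs = [p for t in db for p in map_phase(t, 2)]
print(reduce_phase(pairs, min_support=2))   # frequent 2-itemsets: {a,c}, {b,c}
```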
Online Data Stream Mining for Seriously Unbalanced Applications
ZHAO Qiang-li and JIANG Yan-huang
Computer Science. 2017, 44 (6): 255-259.  doi:10.11896/j.issn.1002-137X.2017.06.044
Using an ensemble of classifiers over sequential blocks of training instances is a popular strategy for mining data streams with concept drift. Yet for seriously unbalanced applications, where the classes contribute very different numbers of examples to each data block, traditional data block creation results in low accuracy for the minority classes with far fewer instances. This paper provided an updating algorithm called UMAE (Unbalanced data learning based on MAE) for seriously unbalanced applications, based on MAE (Memorizing based Adaptive Ensemble). UMAE maintains an equal-sized sliding window for each class. When a data block arrives, each example in it enters the sliding window of its class. During learning, a new data block is created from the instances in the current sliding windows, and this block is used to generate a new classifier. Compared with five traditional data stream mining approaches, the results show that UMAE achieves high accuracy on seriously unbalanced applications, especially for the minority classes.
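A sketch of the per-class sliding windows that UMAE uses to rebuild training blocks follows; the window size is an assumed parameter.

```python
# Per-class sliding windows producing a (re)balanced training block.
from collections import deque

class ClassWindows:
    def __init__(self, window_size=200):
        self.window_size = window_size
        self.windows = {}               # class label -> deque of examples

    def absorb_block(self, block):
        """block: iterable of (features, label)."""
        for x, y in block:
            w = self.windows.setdefault(y, deque(maxlen=self.window_size))
            w.append((x, y))            # oldest instances of y fall out

    def balanced_block(self):
        """Union of all class windows; minority classes keep their history."""
        return [ex for w in self.windows.values() for ex in w]

cw = ClassWindows(window_size=3)
cw.absorb_block([([0], "maj")] * 5 + [([1], "min")])
print(len(cw.balanced_block()))   # 4: three majority + one minority example
```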
Nested Incremental Development Model for Design and Implementation of Visualization Systems
CUI Di, HU Wan-qi, GUO Xiao-yan and CHEN Wei
Computer Science. 2017, 44 (6): 260-265.  doi:10.11896/j.issn.1002-137X.2017.06.045
It remains challenging to design and implement effective visualization systems in a well-studied fashion in the data era. This paper proposed a nested incremental model (NIM) for visualization developers from the perspective of software engineering. The key idea and flowchart of the nested incremental model are demonstrated with a visualization prototype for visual exploration of mobile base station data. The experiments verify the effectiveness and efficiency of the proposed NIM.
Multi-focus Image Fusion Based on SML and PCNN in NSCT Domain
XIE Qiu-ying, YI Ben-shun, KE Zu-fu and LI Wei-zhong
Computer Science. 2017, 44 (6): 266-269.  doi:10.11896/j.issn.1002-137X.2017.06.046
Aiming at the false edges and artifacts produced by existing fusion rules, a new fusion method based on the sum-modified Laplacian (SML) and a pulse coupled neural network (PCNN) in the non-subsampled contourlet transform (NSCT) domain was proposed. Firstly, the source images are decomposed via NSCT into low frequency sub-bands containing basic information and multiple high frequency sub-bands containing detail information. Then the SML of the multi-scale coefficients is calculated: it is used directly to fuse the low frequency sub-band coefficients, while for the high frequency sub-band coefficients it serves as the input stimulus of the PCNN, whose output firing maps are used to select the pixels of the sub-band images. Finally, the fused image is reconstructed from the processed coefficients by the inverse NSCT. The experimental results show that the proposed algorithm markedly improves focus clarity; compared with existing algorithms such as SIDWT, DTCWT, NSCT and PCNN-based methods, the objective indexes of mutual information, structural similarity and transferred edge information are all increased.
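For reference, the standard definition of the modified Laplacian and its windowed sum (SML), commonly used as a focus measure, is given below; the window half-sizes M and N are left unspecified, as the abstract does not state them.

```latex
\mathrm{ML}(i,j) = \bigl|\,2I(i,j) - I(i-1,j) - I(i+1,j)\,\bigr|
                 + \bigl|\,2I(i,j) - I(i,j-1) - I(i,j+1)\,\bigr|,
\qquad
\mathrm{SML}(i,j) = \sum_{m=-M}^{M}\ \sum_{n=-N}^{N} \mathrm{ML}(i+m,\, j+n)
```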
Template-based Method for Auto-measuring Bone Parameters
WANG Lin, HE Kun-jin and CHEN Zheng-ming
Computer Science. 2017, 44 (6): 270-273.  doi:10.11896/j.issn.1002-137X.2017.06.047
To measure bone parameters quickly, a method for automatically measuring bone parameters was put forward. First of all, a parametric measurement template is generated by building a reference entity and setting hierarchical semantic parameters. Second, a target bone model is registered to the template by a rigid transformation, and then the template is matched to the target bone by a non-rigid transformation. Finally, the deformed template is used as a substitute for the target bone, and bone parameters are automatically calculated from the feature points defined on the template. Taking the femur as an example, experimental results show that a whole set of anatomical parameters of the femur can be measured quickly, with results nearly equal to those measured with existing software. It is also found that a number of the femoral parameters are strongly correlated, and the distributions of most parameters are roughly symmetrical, indicating that they are close to normal distributions. This provides a scientific basis for statistical analyses of femoral parameters and for the later parametric design of orthopedic implants.
Retrieval Algorithm for Texture Image Based on Improved Dual Tree Complex Wavelet Transform and Gray Gradient Co-occurrence Matrix
ZHAI Ao-bo, WEN Xian-bin and ZHANG Xin
Computer Science. 2017, 44 (6): 274-277.  doi:10.11896/j.issn.1002-137X.2017.06.048
Abstract PDF(1772KB) ( 588 )   
References | Related Articles | Metrics
To overcome the lack of texture spatial distribution characteristics at different scales in the dual tree complex wavelet transform, an improved retrieval algorithm for texture images was proposed, combining the dual tree complex wavelet transform with the gray gradient co-occurrence matrix. Firstly, to increase the spatial information at different scales, the algorithm partitions the image into non-uniform blocks and applies the dual tree complex wavelet transform to each block. Secondly, four statistical features are extracted using the gray gradient co-occurrence matrix. Then, the texture features for image retrieval are obtained by fusing the texture features extracted by the above two methods. Finally, the Canberra distance is used as the similarity measure to output the image retrieval results. Experimental results show that this method achieves better retrieval results for texture images.
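The Canberra distance used for the similarity measure is standard; a minimal sketch, with an epsilon guard and a ranking helper that are our own additions, follows.

import numpy as np

def canberra_distance(x, y, eps=1e-12):
    """Canberra distance d(x, y) = sum_i |x_i - y_i| / (|x_i| + |y_i|).
    x, y: 1-D feature vectors; eps guards against zero denominators."""
    return float(np.sum(np.abs(x - y) / (np.abs(x) + np.abs(y) + eps)))

def rank_database(query, db_features):
    """Retrieval amounts to ranking database features by distance to the
    query vector (smaller distance = more similar)."""
    dists = [canberra_distance(query, f) for f in db_features]
    return np.argsort(dists)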
Fuzzy Clustering Image Segmentation Algorithm Based on Improved Cuckoo Search
ZHU Chun, LI Lin-guo and GUO Jian
Computer Science. 2017, 44 (6): 278-282.  doi:10.11896/j.issn.1002-137X.2017.06.049
Abstract PDF(1062KB) ( 572 )   
References | Related Articles | Metrics
Fuzzy C-means clustering (FCM) is a widely used clustering algorithm; however, it is sensitive to the initial cluster centers and easily falls into local optima. In this article, we proposed an improved cuckoo search (ICS) based on the standard cuckoo search algorithm (CS), which changes the detection probability P from a constant into a variable that decreases as the number of iterations grows. This not only improves the quality of the population, but also ensures the convergence of the algorithm. Therefore, the improved cuckoo search can be used to generate the FCM clustering centers and effectively prevent FCM from falling into local optima, yielding a better clustering effect and faster running speed. In this article, ICS_FCM was applied to fuzzy clustering image segmentation and compared with SA_FCM. The experimental results show that ICS_FCM not only achieves better segmentation results, but also improves efficiency significantly.
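A hedged sketch of the decreasing detection probability is given below; the linear schedule and its bounds are illustrative assumptions, since the abstract does not give the exact update formula.

def detection_probability(t, t_max, p_max=0.5, p_min=0.05):
    """Discovery probability P that decreases as iterations proceed,
    favoring global exploration early and local exploitation late.
    The linear schedule here is an assumption, not the paper's formula."""
    return p_max - (p_max - p_min) * t / float(t_max)

# Inside a cuckoo search loop, each nest would be abandoned (re-randomized)
# with the current probability instead of a fixed P:
#   for t in range(t_max):
#       p = detection_probability(t, t_max)
#       ...  # Levy-flight moves, then abandon nests with probability p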
Novel Action Recognition via Improved PLSA and CBR
TU Hong-bin, YUE Yan-yan, ZHOU Xin-jian and LUO Kun
Computer Science. 2017, 44 (6): 283-289.  doi:10.11896/j.issn.1002-137X.2017.06.050
Abstract PDF(1863KB) ( 535 )   
References | Related Articles | Metrics
In order to recognize ambiguous action meanings caused by occlusion in the same scene, improved PLSA and CBR algorithms are used to recognize simple actions based on space-time interest points. The improved algorithm not only overcomes the over-fitting of the traditional PLSA algorithm, which stems from the generative model's independence assumption on observation sequences, but also decreases recognition ambiguity. Experiments with the proposed method were conducted on public databases such as KTH, Weizmann and UCF Sports, as well as a self-built database. The results show that the proposed method is both valid and effective.
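For context, the following is a minimal sketch of the standard PLSA EM updates that the paper improves on; the improved variant itself is not specified in the abstract, so only the baseline is shown, with visual-word counts from space-time interest points as the assumed input.

import numpy as np

def plsa_em(counts, n_topics, n_iter=50, seed=0):
    """Baseline PLSA via EM.
    counts: (n_docs, n_words) matrix of feature counts, e.g. space-time
            interest-point visual words per video clip.
    Returns P(w|z) of shape (n_topics, n_words) and P(z|d) of shape
    (n_docs, n_topics)."""
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    p_w_z = rng.random((n_topics, n_words)); p_w_z /= p_w_z.sum(1, keepdims=True)
    p_z_d = rng.random((n_docs, n_topics)); p_z_d /= p_z_d.sum(1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(z|d,w) for every (doc, word) pair.
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]        # (d, z, w)
        post = joint / (joint.sum(1, keepdims=True) + 1e-12)
        # M-step: re-estimate P(w|z) and P(z|d) from expected counts.
        expected = counts[:, None, :] * post                 # (d, z, w)
        p_w_z = expected.sum(0)
        p_w_z /= p_w_z.sum(1, keepdims=True) + 1e-12
        p_z_d = expected.sum(2)
        p_z_d /= p_z_d.sum(1, keepdims=True) + 1e-12
    return p_w_z, p_z_d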
Improved Entropy Coding Algorithm for Transform Coefficients in HEVC
SHAN Na-na, ZHOU Wei and DUAN Zhe-min
Computer Science. 2017, 44 (6): 290-293.  doi:10.11896/j.issn.1002-137X.2017.06.051
Abstract PDF(392KB) ( 1083 )   
References | Related Articles | Metrics
Transform coefficient coding has become a bottleneck for the efficiency of video coding, owing to the complexity of entropy coding and the large number of transform coefficients in high efficiency video coding (HEVC). Transform coefficients contain many zeroes and small absolute values. Based on the characteristics of context adaptive binary arithmetic coding and bypass coding, an improved entropy coding algorithm for transform coefficients was proposed that reduces the number of context adaptive binary arithmetic coding bins. Experimental results show that, compared with HM 10.0, the proposed algorithm provides time savings of 37.31%, 26.34% and 20.63% at QP values of 2, 2 and 22, respectively.
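The context-coded versus bypass split that the algorithm exploits can be illustrated with a simplified bin count for one group of quantized coefficients; this is a didactic approximation of HEVC-style coefficient binarization, not a bit-exact model of HM or of the paper's method.

def count_bins(levels, max_gt1=8, max_gt2=1):
    """Approximate count of context-coded vs. bypass bins for one
    coefficient group: significance / greater-than-1 / greater-than-2
    flags are context-coded, while signs and remainder levels are bypass."""
    ctx_bins = bypass_bins = 0
    gt1_budget, gt2_budget = max_gt1, max_gt2
    for lv in levels:
        ctx_bins += 1                      # significance flag
        if lv == 0:
            continue
        bypass_bins += 1                   # sign flag (bypass-coded)
        mag = abs(lv)
        if gt1_budget > 0:
            ctx_bins += 1                  # greater-than-1 flag
            gt1_budget -= 1
            if mag > 1 and gt2_budget > 0:
                ctx_bins += 1              # greater-than-2 flag
                gt2_budget -= 1
        if mag > 1:
            # Remaining magnitude is bypass-coded; a unary stand-in is
            # used here instead of Golomb-Rice binarization for brevity.
            bypass_bins += mag - 1
    return ctx_bins, bypass_bins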
Polygon Fitting Algorithm Based on Minimum Bounding Rectangle
LIU Na, SUN Xiao-liang and TAN Yi-hua
Computer Science. 2017, 44 (6): 294-297.  doi:10.11896/j.issn.1002-137X.2017.06.052
Abstract PDF(1523KB) ( 1934 )   
References | Related Articles | Metrics
When extracting housing edges, inaccurate region segmentation often causes the extracted outline to contain irregularities such as concave or convex parts, which require further fitting. Corner detection plays an important role in contour shape extraction. Traditional corner detection is based on finding points of maximum curvature; this calculation depends entirely on the curvature variation of the profile, cannot repair such irregularities, and makes it difficult to remove useless corner points. In this paper, a new method based on the minimum bounding rectangle was proposed to fit the real shape. In detail, the minimum bounding rectangle is used as the initial outline, and the difference between the fitted contour and the bounding rectangle is calculated. An appropriate threshold is set for this difference, and suitable polygon points are then chosen according to the threshold. Without needing to compute corner points from the curvature of the contour, the method obtains the right-angled polygon of the object contour, and is simple and efficient.
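A hedged OpenCV sketch of this idea follows: compute the minimum bounding rectangle and keep contour points that deviate from it beyond a threshold. The selection rule and threshold are assumptions for illustration; the paper's exact criterion may differ.

import cv2
import numpy as np

def fit_rectilinear_outline(contour, thresh=5.0):
    """Use the minimum bounding rectangle as the initial outline and keep
    contour points that deviate from it by more than `thresh` pixels.
    contour: (N, 1, 2) array as returned by cv2.findContours."""
    rect = cv2.minAreaRect(contour)              # ((cx, cy), (w, h), angle)
    box = cv2.boxPoints(rect).astype(np.float32).reshape(-1, 1, 2)
    pts = contour.reshape(-1, 2).astype(np.float32)
    # Signed distance of each contour point to the rectangle boundary;
    # take the absolute value as the deviation.
    devs = [abs(cv2.pointPolygonTest(box, (float(x), float(y)), True))
            for x, y in pts]
    keep = pts[np.asarray(devs) > thresh]
    return rect, keep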
Graph Regularized and Incremental Nonnegative Matrix Factorization with Sparseness Constraints
SUN Jing, CAI Xi-biao, JIANG Xiao-yan and SUN Fu-ming
Computer Science. 2017, 44 (6): 298-305.  doi:10.11896/j.issn.1002-137X.2017.06.053
Abstract PDF(2545KB) ( 865 )   
References | Related Articles | Metrics
Nonnegative matrix factorization (NMF) is not only a description of the data; the factor matrices it produces also have intuitive physical meaning. With the aim of enhancing validity and classification accuracy, a more reasonable algorithm was proposed: graph regularized and incremental nonnegative matrix factorization with sparseness constraints (GINMFSC). It not only preserves the intrinsic geometry of the data, but also makes full use of the previous decomposition results for incremental learning, and introduces a sparseness constraint on the coefficient matrix. Finally, these are integrated into a single objective function, and an efficient updating approach is derived. Compared with NMF, GNMF, INMF and IGNMF, experiments on several databases show that the proposed method achieves better clustering accuracy and sparsity while reducing computation time.
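As a baseline reference, a minimal sketch of the standard NMF multiplicative updates minimizing ||V - WH||_F^2 is shown below; GINMFSC's graph Laplacian term and sparseness constraint would add extra factors to these rules, which are omitted here for brevity.

import numpy as np

def nmf_multiplicative(V, rank, n_iter=200, seed=0):
    """Baseline Lee-Seung multiplicative updates for V ~= W @ H,
    with V nonnegative of shape (n, m)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-4
    H = rng.random((rank, m)) + 1e-4
    for _ in range(n_iter):
        # Multiplicative updates keep W and H nonnegative by construction.
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H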
Crowd Animation Generation Method Based on Personalized Emotional Contagion
CAO Meng-xiao, ZHANG Gui-juan, HUANG Li-jun and LIU Hong
Computer Science. 2017, 44 (6): 306-311.  doi:10.11896/j.issn.1002-137X.2017.06.054
Abstract PDF(2287KB) ( 605 )   
References | Related Articles | Metrics
At present, most crowd animation methods only simulate emotional behavior without considering the impact of human personality on emotion. In the real world, one person's emotion (e.g., panic, anxiety) often differs from others', which is called personality difference. In order to generate more realistic crowd animation, we presented a crowd animation method based on a personalized emotional contagion model. The method is divided into three parts. Firstly, we provided an emotion perception factor and introduced the P-Durupinar model, which captures the crowd's diversity effectively. Secondly, the P-Durupinar model was coupled with crowd motion: combined with the reciprocal velocity obstacle method, it drives the crowd's movement. Finally, photo-realistic rendering was employed to obtain the crowd animation. Experimental results show that our personalized emotional contagion-based crowd animation method simulates crowd social behavior more realistically.
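To make the contagion mechanism concrete, the following is a hedged sketch of one dose-based contagion step in the spirit of Durupinar-style models, with the personalized perception factor represented as a per-agent susceptibility; all coefficients and the update form are illustrative assumptions, not the paper's P-Durupinar equations.

import numpy as np

def contagion_step(emotion, positions, susceptibility, radius=2.0, dt=0.1):
    """One emotional contagion step.
    emotion:        (N,) current emotion intensity in [0, 1]
    positions:      (N, 2) agent positions
    susceptibility: (N,) per-agent perception/personality factor"""
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    near = (dist < radius) & (dist > 0)
    # Each agent receives a "dose" proportional to its neighbors' emotion,
    # scaled by its own susceptibility (the personalization).
    dose = (near * emotion[None, :]).sum(1)
    return np.clip(emotion + dt * susceptibility * dose, 0.0, 1.0)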
Research on Image Processing Algorithm Based on Compressed Sensing
LU Zhao and ZHU Xiao-shu
Computer Science. 2017, 44 (6): 312-316.  doi:10.11896/j.issn.1002-137X.2017.06.055
Abstract PDF(1028KB) ( 725 )   
References | Related Articles | Metrics
In the process of recognizing and restoring image data, sparsity usually arises from the similarity between images. In compressed sensing image restoration, the lack of prior statistical information generally brings higher computational complexity and lower restoration accuracy. This paper introduced a refined compressed sensing algorithm to restore images, and defined similarity distances on the matrix based on image similarity. Building on this definition, principal component analysis mapping and Bayesian prior information are applied to enhance the iterative image recovery algorithm. Experimental results show that the proposed method is more accurate than other restoration algorithms and that the restored images have better definition; at the same time, the proposed algorithm has lower computational complexity and shorter computation time.
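As a generic reference point for the iterative recovery being enhanced, a minimal sketch of sparse recovery by iterative soft-thresholding (ISTA) is given below; the paper's refinements via PCA mapping and Bayesian priors are not reproduced here.

import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    """ISTA for min_x 0.5 * ||A x - y||^2 + lam * ||x||_1.
    A: (m, n) measurement matrix, y: (m,) measurements."""
    # Step size from the spectral norm of A (Lipschitz constant of the gradient).
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # gradient of the data term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x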
Video Distributed Compressive Sensing Research Based on Multihypothesis Predictions and Residual Reconstruction
ZHAO Hui-min, PEI Zhen-zhen, CAI Zheng-ye, WANG Chen, DAI Qing-yun and WEI Wen-guo
Computer Science. 2017, 44 (6): 317-321.  doi:10.11896/j.issn.1002-137X.2017.06.056
Abstract PDF(1066KB) ( 559 )   
References | Related Articles | Metrics
To meet low-cost and low-power demands, distributed compressed sensing (DCS), an emerging framework for signal processing, can be used in video applications, especially when the resources available at the transmitter side are limited. Therefore, a novel video DCS (VDCS) scheme was proposed in this paper, in which multihypothesis (MH) predictions of the current CS frame are generated from one or more previously reconstructed CS frames. At the decoder side, the predictions are used to form a residual signal, whose reconstruction improves the recovered video quality. Experimental results demonstrate that the PSNR performance of the proposed VDCS scheme outperforms other methods such as MH-BCS-SPL and the traditional JSM-DCS.
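The multihypothesis-plus-residual idea can be sketched as follows: weight candidate blocks from previously reconstructed frames in the measurement domain, then reconstruct only the residual. The Tikhonov weighting is a common choice in MH-CS work and is assumed here rather than taken from the paper; `reconstruct` stands for any CS solver (e.g., the ISTA sketch above, wrapped to match this argument order).

import numpy as np

def mh_residual_reconstruction(y, Phi, hypotheses, reconstruct, lam=0.1):
    """Multihypothesis prediction plus residual reconstruction.
    y: (m,) block measurements; Phi: (m, n) sensing matrix;
    hypotheses: (n, K) matrix whose columns are candidate blocks from
    previously reconstructed frames; reconstruct: (measurements, Phi) -> signal."""
    # Project each hypothesis into the measurement domain.
    PH = Phi @ hypotheses                            # (m, K)
    # Tikhonov-regularized least squares for the hypothesis weights.
    K = hypotheses.shape[1]
    w = np.linalg.solve(PH.T @ PH + lam * np.eye(K), PH.T @ y)
    prediction = hypotheses @ w
    # Reconstruct only the (sparser) residual, then add the prediction back.
    residual = reconstruct(y - Phi @ prediction, Phi)
    return prediction + residual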