Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 45 Issue 6, 15 June 2018
  
Surveys
Survey on Hidden Subgroup Problem
DAI Wen-jing, YUAN Jia-bin
Computer Science. 2018, 45 (6): 1-8.  doi:10.11896/j.issn.1002-137X.2018.06.001
Since Shor presented an efficient quantum algorithm for factoring large integers, quantum computation has forced us to re-examine existing cryptosystems. The hidden subgroup problem generalizes quantum computation to group structures; it suggests that harder problems might be solved by considering various groups and functions in search of new algorithms that are exponentially faster than their classical counterparts. Research on the abelian hidden subgroup problem has formed a relatively unified framework, while research on the non-abelian hidden subgroup problem remains active. The dihedral hidden subgroup problem may break cryptosystems based on the unique shortest vector problem on lattices, and the graph isomorphism problem can be reduced to the symmetric hidden subgroup problem. This paper summarized the hidden subgroup problem to attract more researchers, and finally provided suggestions for future research directions.
Overview of Threat Intelligence Sharing Technologies in Cyberspace
YANG Pei-an, WU Yang, SU Li-ya, LIU Bao-xu
Computer Science. 2018, 45 (6): 9-18.  doi:10.11896/j.issn.1002-137X.2018.06.002
Nowadays, new kinds of cyber-attacks, such as APT and DDoS, have higher concealment, lower attack cost and huge attack impact. These advantages let them easily escape detection by traditional cyber-attack countermeasures. The cyberspace security situation is becoming more and more severe, and the detection and prevention of these attacks have become much harder. CTI (Cyber Threat Intelligence) based network defense has been proved to be a promising strategy to address this problem. Accordingly, both academia and industry have put much effort into CTI analysis and sharing. This paper introduced the meaning and value of CTI. Then, focusing on threat intelligence sharing, it studied and reviewed the works and developments in CTI sharing in depth. In the end, it looked ahead to future studies of CTI sharing.
Survey on Multicast Routing in Mobile Opportunistic Networks
DENG Xia, CHANG Le, LIANG Jun-bin, JIANG Chan
Computer Science. 2018, 45 (6): 19-26.  doi:10.11896/j.issn.1002-137X.2018.06.003
Mobile opportunistic networks, which disseminate messages through contacts between mobile nodes, have become pervasive in many scenarios, e.g., mobile social networks, VANETs, mobile ad hoc networks, etc. These networks have attracted considerable attention in both the academic and industrial communities. In such networks, group communication has been widely adopted in disaster rescue, community information dissemination, intelligent transportation, etc., which makes multicast routing a critical building block in the design of mobile opportunistic networks. This paper presented a survey of state-of-the-art multicast strategies in mobile opportunistic networks. The multicast routing strategies were classified into two categories: traditional multicast routing and intelligent multicast routing. The algorithms in each category were discussed, with an emphasis on intelligent multicast routing. Different multicast routing strategies were compared and analyzed using popular performance metrics. The conclusion is drawn that intelligent multicast routing strategies demonstrate desirable comprehensive performance in terms of data delivery ratio, transmission overhead, average delay, storage overhead and scalability, due to their consideration of the inherent characteristics of mobile opportunistic networks, e.g., contact probability, stable social characteristics, limited node cache, constrained energy, selfish behaviors, etc. At last, corresponding research directions were discussed, concerning cache management supporting big data, group communication security strategies, multicast in VANETs and dynamic-sensing based multicast in mobile opportunistic networks.
Survey of Graph Matching Algorithms
XIANG Ying-zhuo, TAN Ju-xian, HAN Jie-si, SHI Hao
Computer Science. 2018, 45 (6): 27-31.  doi:10.11896/j.issn.1002-137X.2018.06.004
Graphs have been applied in many fields of science and technology, such as pattern recognition and computer vision, because of their powerful representation of structure and information. When a graph is used to represent object structure, calculating the similarity of two objects equals calculating the similarity of two graphs. Research on graph matching algorithms has been carried out for decades, and has intensified recently as big data technology becomes a hot topic; as a representation of relationships among data, graphs have received ever more attention in research. This paper gave a survey of the development of graph matching technology as well as the foundations of the theory. Then it summarized graph matching methods and compared the performance of several classical algorithms.
WISA2017
K-clique Heuristic Algorithm for Influence Maximization in Social Network
HU Qing-cheng, ZHANG Yong, XING Chun-xiao
Computer Science. 2018, 45 (6): 32-35.  doi:10.11896/j.issn.1002-137X.2018.06.005
Influence maximization is the problem of obtaining a set of nodes of specified size in a social network so as to maximize their aggregate influence under a certain influence diffusion model, and it can yield significant benefit both in theory and in real life. Influence maximization has been proved to be NP-hard by Kempe et al. This paper proposed a new algorithm for influence maximization named K-clique Heuristic. The basic idea of the algorithm is that nodes in a social network span multiple social circles; the more widely these nodes spread across fields and ranges, the greater their intersectionality and influence. The experimental results show that the proposed model is effective, and it may also shed light on the profound problem of influence maximization in social networks.
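A minimal sketch of one plausible reading of the heuristic, assuming seed nodes are ranked by how many maximal cliques of size at least k they belong to (the networkx-based scoring and function names are illustrative assumptions, not the authors' code):

import networkx as nx

def k_clique_seeds(G, k, num_seeds):
    # Count, for each node, the maximal cliques of size >= k it belongs to;
    # nodes spanning many such "social circles" are treated as most influential.
    score = {v: 0 for v in G}
    for clique in nx.find_cliques(G):  # enumerates maximal cliques
        if len(clique) >= k:
            for v in clique:
                score[v] += 1
    return sorted(score, key=score.get, reverse=True)[:num_seeds]

G = nx.karate_club_graph()
print(k_clique_seeds(G, k=3, num_seeds=5))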
WISA2017
Mining Method of Association Rules Based on Differential Privacy
CUI Yi-hui, SONG Wei, PENG Zhi-yong, YANG Xian-di
Computer Science. 2018, 45 (6): 36-40.  doi:10.11896/j.issn.1002-137X.2018.06.006
With the advent of the big data era, the potential value of mining big data has attracted more and more attention from academia and industry. At the same time, due to frequent Internet security incidents, users are increasingly concerned about the disclosure of personal privacy data, and user data security has become one of the most important obstacles to big data analysis. With regard to user data security, existing research focuses more on access control, ciphertext retrieval and result verification. Such research can guarantee the security of user data itself, but cannot mine the potential value of the protected data. Therefore, how to protect the security of data while mining its potential value is one of the key issues that need to be addressed. This paper proposed an association rule mining method based on differential privacy protection. Data owners use the Laplacian mechanism and the exponential mechanism to protect user data during data release, and data analysts then perform association rule mining over a differentially private FP-tree. The experimental results show that the performance and accuracy of the proposed method are superior to those of existing methods.
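A minimal sketch of the Laplace mechanism that such a scheme rests on, assuming itemset support counts are released with sensitivity 1 (the counts and epsilon are illustrative; this is not the paper's full FP-tree pipeline):

import numpy as np

def noisy_support(true_count, epsilon, sensitivity=1.0):
    # Laplace mechanism: adding Lap(sensitivity/epsilon) noise to a count
    # yields an epsilon-differentially-private release of that count.
    return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

supports = {("bread", "milk"): 120, ("beer", "diapers"): 87}
private = {k: noisy_support(v, epsilon=0.5) for k, v in supports.items()}
print(private)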
WISA2017
Method of Link Prediction in Social Networks Using Node Attribute Information
ZHANG Yu, GAO Ke-ning, YU Ge
Computer Science. 2018, 45 (6): 41-45.  doi:10.11896/j.issn.1002-137X.2018.06.007
With the development of large social networks, link prediction has become an important research subject. This paper studied the link prediction problem in social networks using rich node attribute information. It builds on the attribute-augmented social network model, which constructs an augmented network (a "social-attribute network") by adding additional nodes, each corresponding to an attribute; the classification of node attributes was added to the model as an important parameter. Several methods of assigning weights to different kinds of links were proposed, and a random walk method was then used for link prediction in the network. Experimental results reveal that this method performs better than other similar methods.
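A minimal sketch of link scoring by a random walk on an attribute-augmented graph, assuming attribute nodes are added alongside user nodes and networkx's personalized PageRank stands in for the paper's walk (node names and weights are illustrative):

import networkx as nx

G = nx.Graph()
G.add_edges_from([("u1", "u2"), ("u2", "u3")], weight=1.0)                    # social links
G.add_edges_from([("u1", "attr:python"), ("u3", "attr:python")], weight=0.5)  # attribute links

# Random walk with restart from u1, approximated by personalized PageRank;
# high scores on not-yet-linked user nodes are predicted links.
scores = nx.pagerank(G, alpha=0.85,
                     personalization={n: float(n == "u1") for n in G},
                     weight="weight")
candidates = {v: s for v, s in scores.items()
              if v.startswith("u") and v != "u1" and not G.has_edge("u1", v)}
print(max(candidates, key=candidates.get))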
WISA2017
Community Discovery in Location Network
ZHENG Xiang-ping, YU Zhi-yong, WEN Guang-bin
Computer Science. 2018, 45 (6): 46-50.  doi:10.11896/j.issn.1002-137X.2018.06.008
The location network can portray the spatial structure of a city from some unique perspectives. By studying the characteristics of urban location networks and their differences from traditional social networks, a community discovery algorithm based on location networks was proposed. The algorithm takes into account the proximity of locations, the connections between locations and the similarity of users' travel behavior. Firstly, an initial community division is made. Then, the degree to which each site belongs to its community is iteratively calculated, and the places with lower membership degree are adjusted until convergence, so as to find significant urban communities. The validity of the algorithm was verified by analyzing the attributes and correlations of the internal sites.
WISA2017
Subsequence Outsourcing Query Method over Encrypted Genomic Data
WANG Zhan-bing, SONG Wei, PENG Zhi-yong, YANG Xian-di, CUI Yi-hui, SHEN Yuan
Computer Science. 2018, 45 (6): 51-56.  doi:10.11896/j.issn.1002-137X.2018.06.009
Precision medicine is a medical model that relies heavily on patient genome analysis, in which subsequence search plays an important role. Recently, the amount of genomic data has been increasing dramatically, and its storage cost and processing complexity have gone far beyond the capacity of hospitals, so utilizing powerful cloud computing capability to analyze and process such massive genomic sequence data is becoming popular. Considering that the cloud service provider is not completely trusted, encrypting genomic data before uploading is a straightforward and effective solution to guarantee the privacy and security of DNA sequence data. However, how to perform queries over the encrypted genomic sequence data becomes another difficult problem. To address this problem, this paper made a detailed survey of the genomic data processing and full-text retrieval fields. It constructs indexes on fixed-length windows of the genomic sequence using q-gram mapping, and performs queries in every window: if the query sequence is a prefix of any window of the genomic sequence, the query hits. Throughout, the cloud service provider stores indexes and performs subsequence queries without learning any private details. Moreover, this paper set up the system model and several security assumptions, and proved their security. Experiments were carried out to evaluate the performance of the scheme on a public dataset. The results show that the proposed solution achieves better performance in time cost and storage cost; e.g., when w is 100 and q is 6, the index building algorithm costs 15.60 s for a sequence of length 100000. Compared with GPSE, the proposed solution has higher execution efficiency in performing queries.
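A minimal plaintext skeleton of the window-and-prefix idea, assuming windows of length w are indexed by their leading q-gram and a query hits when it is a prefix of some window (encryption is omitted; names are illustrative):

from collections import defaultdict

def build_index(seq, w, q):
    # Map each window's leading q-gram to the window's start positions.
    index = defaultdict(list)
    for i in range(0, len(seq) - w + 1, w):   # fixed-length, non-overlapping windows
        index[seq[i:i + q]].append(i)
    return index

def query(index, seq, w, q, pattern):
    # The query hits if the pattern is a prefix of any indexed window.
    return [i for i in index.get(pattern[:q], [])
            if seq[i:i + w].startswith(pattern)]

seq = "ACGTACGGACGTTTGA"
idx = build_index(seq, w=4, q=2)
print(query(idx, seq, w=4, q=2, pattern="ACGT"))   # -> [0, 8]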
WISA2017
Diversity Measures Method in High-dimensional Semantic Vector Based on Asymmetric Multi-valued Feature Jaccard Coefficient
FENG Yan-hong, YU Hong, SUN Geng, PENG Song
Computer Science. 2018, 45 (6): 57-66.  doi:10.11896/j.issn.1002-137X.2018.06.010
Diversity measures of semantic vectors are an important basis for natural language processing problems solved by deep learning methods. There is a problem of "measure concentration" in the diversity measurement of high-dimensional semantic vectors, which makes the diversity of semantic vectors vanish when it is computed by traditional measure methods. To resolve this problem, a diversity measure method based on an asymmetric multi-valued feature Jaccard coefficient was proposed. From the statistical distribution of the dimension values of high-dimensional semantic vectors, the values of some dimensions are densely distributed within a certain range, making it impossible for them to contribute to diversity; therefore, the contribution of different dimensions to diversity differs and is asymmetric. The method defines an importance function over dimension values, selects the dimensions whose importance values satisfy a threshold to participate in the diversity calculation, and removes the dimensions that cannot contribute to diversity, thereby realizing dimensionality reduction and alleviating the "measure concentration" problem. Experiments were conducted on fishery data sets and public data sets, comparing different measure methods on semantic vectors of different dimensions. Under the condition that the semantic quality is not markedly reduced, the diversity index of the proposed method is much higher than that of the current optimal measure method.
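A minimal sketch of a multi-valued Jaccard coefficient gated by an importance function, assuming the generalized form sum(min)/sum(max) over selected dimensions and using the mean deviation per dimension as an illustrative importance score (both choices are assumptions, not the paper's exact definitions):

import numpy as np

def asymmetric_jaccard(x, y, importance, tau):
    # Keep only dimensions whose importance exceeds tau, then apply the
    # generalized Jaccard similarity sum(min)/sum(max) on magnitudes.
    keep = importance > tau
    a, b = np.abs(x[keep]), np.abs(y[keep])
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 300))                       # toy corpus of 300-d vectors
importance = np.abs(X - X.mean(axis=0)).mean(axis=0)   # illustrative importance
sim = asymmetric_jaccard(X[0], X[1], importance, tau=np.median(importance))
print(1.0 - sim)   # diversity as one minus similarity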
WISA2017
Monitoring and Dispatching Service for Heterogeneous Big Data Computing Frameworks
HU Ya-peng, DING Wei-long, WANG Gui-ling
Computer Science. 2018, 45 (6): 67-71.  doi:10.11896/j.issn.1002-137X.2018.06.011
Various types of big data computing frameworks have their own management methods. In a heterogeneous environment, traditional monitoring and scheduling services are limited by the global status of the cluster; they not only waste cluster resources but also suffer long job execution latencies. To solve these problems, this paper presented an integrated monitoring and dynamic scheduling management service for heterogeneous big data computing frameworks. The service can monitor multiple types of computing frameworks automatically and provide integrated dispatching for diverse computing jobs. The work was implemented on Hadoop and Storm. The experimental results show that the service can reduce the complexity of manual operation in heterogeneous environments and improve job scheduling efficiency.
WISA2017
Marine Monitoring Data Replica Layout Strategy Based on Multiple Attribute Optimization
HUANG Dong-mei, DU Yan-ling, HE Qi, SUI Hong-yun, LI Yao
Computer Science. 2018, 45 (6): 72-75.  doi:10.11896/j.issn.1002-137X.2018.06.012
The integrity and reliability of data are the key to ensuring its efficient access. Especially in the cloud storage environment, the data replication strategy is the core of system performance and data availability. For data replica layout, this paper put forward a data replica layout strategy based on multiple attribute optimization (MAO-DRLS). According to the heat of data access and the key attributes of storage nodes, it sets the number of replicas for each piece of data dynamically, and chooses appropriate nodes to distribute the replicas. The experiments show that the MAO-DRLS strategy can effectively improve the utilization rate of data replicas and shorten the system response time.
Network & Communication
Personalized Trustworthy Group Identifying Model Based on O2O Service-oriented Mobile Social Network
ZHU Wen-qiang
Computer Science. 2018, 45 (6): 76-83.  doi:10.11896/j.issn.1002-137X.2018.06.013
With the rapid development of mobile communication techniques, mobile social networks are combining with O2O services more tightly than ever. Users are used to consulting their mobile social networks about the O2O services they want to order, and to submitting feedback and ratings about these services after enjoying them. However, mobile social networks are open and users can submit feedback anonymously, so it is critical for users to identify a trustworthy group in order to check whether such feedback is reliable. Present research on trustworthy user group identification mainly focuses on cloud computing and online social networks. Most of it uses global methods to compute users' trust values and neglects factors such as social circles, service preferences and personal interests, so the existing approaches are unfit for O2O service-oriented personal social networks. To identify the trustworthy group in personal mobile social networks effectively, this paper proposed a trust model based on the well-known Advogato model. It takes interaction frequency and the similarity of users' social circles and interests into consideration, uses a capacity-first maximum flow search method to transfer trust flow between users and build their personal trust networks, and finally outputs the ranked trustworthy user group. The experimental results on a real dataset show that the proposed trust model achieves superior prediction accuracy (Pre), missing rate (MsR) and top ranking range (Trr) compared with existing group trust models.
Network & Communication
Synchronization Protocol of TDMA Ad hoc Network Based on Time Slot Alignment
JIN Rui, LIU Zuo-xue
Computer Science. 2018, 45 (6): 84-88.  doi:10.11896/j.issn.1002-137X.2018.06.014
Through researching the TDMA time synchronization protocols STS and TISS, this paper proposed a TDMA ad hoc network synchronization protocol, MFSS, based on time slot alignment. The MFSS protocol uses the work cycle as the standard of synchronization among the nodes of an ad hoc network. When a node first accesses the network, two-way interaction and time slot alignment are used to eliminate the transmission delay error and the initial time deviation, so initial synchronization can be completed quickly. Then, the clock drift error among nodes is controlled by a monitoring process, and the overhead of resynchronization is also reduced. The simulation results show that, compared with the STS and TISS protocols, the MFSS protocol achieves better performance in terms of synchronization convergence speed, synchronization accuracy and synchronization overhead.
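A minimal sketch of the classic two-way exchange that such protocols use to cancel propagation delay, assuming a symmetric link and NTP-style timestamps T1..T4 (the numbers are illustrative; this is not the MFSS message format):

def two_way_sync(t1, t2, t3, t4):
    # t1: request sent (joining node's clock), t2: request received (reference clock),
    # t3: reply sent (reference clock),        t4: reply received (joining node's clock).
    # Under a symmetric-delay assumption the clock offset and one-way delay are:
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t4 - t1) - (t3 - t2)) / 2.0
    return offset, delay

print(two_way_sync(t1=100.0, t2=105.2, t3=105.3, t4=100.5))
# -> (5.0, 0.2): the joining node corrects its clock by -offset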
Network & Communication
Algorithm for Reducing PAPR of FBMC-OQAM System
WU Jian-xia, YANG Yong-li
Computer Science. 2018, 45 (6): 89-95.  doi:10.11896/j.issn.1002-137X.2018.06.015
FBMC-OQAM, which combines filter bank multi-carrier (FBMC) and offset quadrature amplitude modulation (OQAM), shows characteristics such as high spectrum efficiency and no need for strict synchronization in wireless communication systems. However, the high peak-to-average power ratio (PAPR) of FBMC-OQAM signals can easily lead to signal distortion, spectrum spreading and system performance degradation. Therefore, this paper proposed a new hybrid expansion-companding method, called TR-μLaw, to reduce the PAPR of FBMC-OQAM systems; it combines tone reservation (TR) and μ-law companding. The TR method is distortionless, while μ-law companding distorts the FBMC-OQAM signal but reduces the PAPR markedly, so the TR-μLaw hybrid realizes the complementary advantages of the two methods. The simulation results show that the PAPR reduction outperforms the μ-law and TR methods alone by about 1.0 dB and 3.4 dB respectively at μ=3 and 8 iterations, and the BER performance of the TR-μLaw method is better than that of the μ-law method.
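A minimal sketch of μ-law companding and the PAPR it targets, assuming the standard envelope compander V·ln(1+μ|x|/V)/ln(1+μ) applied to a toy multicarrier signal (the signal model is illustrative, not an FBMC-OQAM chain):

import numpy as np

def mu_law_compand(x, mu=3.0):
    # Standard mu-law compander on the envelope: boosts small amplitudes,
    # compresses peaks; the phase of each complex sample is preserved.
    v = np.max(np.abs(x))
    mag = v * np.log1p(mu * np.abs(x) / v) / np.log1p(mu)
    return mag * np.exp(1j * np.angle(x))

def papr_db(x):
    return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

rng = np.random.default_rng(1)
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=256)  # toy QPSK symbols
signal = np.fft.ifft(symbols)                                       # toy multicarrier signal
print(papr_db(signal), papr_db(mu_law_compand(signal)))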
Network & Communication
Dynamic Retransmission Algorithm in Low-power Wireless Sensor Networks
WU Wei-nan, LIU Jian-ming
Computer Science. 2018, 45 (6): 96-99.  doi:10.11896/j.issn.1002-137X.2018.06.016
The quality of channel communication is time-varying. In order to improve the reliability of data transmission, a retransmission mechanism with higher energy efficiency should be introduced into low-power wireless sensor networks. In such networks, the real-time requirements on data are not high, but the overall energy is limited; therefore, the timing and validity of retransmission are particularly important. On the basis of static queue transmission, this paper proposed a reliable and stable dynamic retransmission algorithm. Nodes that fail to send generate a random number as a sequence added to the retransmission queue, and the algorithm uses this random retransmission timing to back off from harsh communication conditions. The experimental results show that the dynamic retransmission algorithm improves the success rate of data transmission while reducing energy consumption.
Network & Communication
Duty Cycle Scheme Maximizing Throughput in Energy Harvesting Sensor Networks
CHI Kai-kai, LIN Yi-min, LI Yan-jun, CHENG Zhen
Computer Science. 2018, 45 (6): 100-104.  doi:10.11896/j.issn.1002-137X.2018.06.017
For energy harvesting wireless sensor networks (EH-WSNs) working in an extremely low-duty-cycle mode and using a super capacitor as the energy storage device, both the average energy harvesting rate and the throughput of a node greatly depend on the capacitor voltages at which it begins to sleep and wakes up. This paper first demonstrated through theoretical analysis that the energy harvesting rate significantly depends on the instantaneous capacitor voltage, then formulated the maximal node throughput problem, and finally provided an approach to solve this problem and obtain the optimal capacitor voltages for starting to sleep and for waking up. Numerical results show that the throughput of the duty-cycle-optimized scheme is significantly larger than that of the available duty-cycle schemes.
Interest Community Based Message Transmission Scheme in Mobile Social Networks
HOU Lin-qing, CAI Ying, FAN Yan-fang, XIA Hong-ke
Computer Science. 2018, 45 (6): 105-110.  doi:10.11896/j.issn.1002-137X.2018.06.018
Store-carry-forward routing of messages between nodes is a short-distance communication mode in mobile social networks, and transmission performance is the key factor affecting user interaction experience. If users transmit messages according to interests or to the formation of communities, the transmission performance can be improved. For short-distance communication in mobile social networks, existing research is mainly either interest-based or community-based. In order to give users a better interactive experience, this paper proposed InComT (Interest-Community-based Transmission), which combines user interest with community structure. The interest value of a single node is measured, communities are divided according to these interest values to determine the interest value of a whole community, and then the relay community and relay node are selected by interest value to realize message transmission. The simulation results show that the strategy achieves a higher transmission success rate with low transmission overhead and low average delay.
Information Security
Improved Identity Authentication Protocol Based on Elliptic Curve Cryptography in Multi-server Environment
YIN Qiu-shi, CHEN Jian-hua
Computer Science. 2018, 45 (6): 111-116.  doi:10.11896/j.issn.1002-137X.2018.06.019
Based on the username-and-password model, most traditional identity authentication protocols are derived from mathematically hard problems. They often rely on password complexity, the performance of random generators and large computational cost to ensure communication security, so they lack efficiency and practicality. To avoid these problems, this paper introduced biometric factors and a fuzzy extractor, proposed an improved identity authentication protocol based on elliptic curve cryptography, formally verified the key agreement between the two sides with Burrows-Abadi-Needham (BAN) logic, and then carried out a security analysis. Compared with other related protocols in performance, the proposed scheme is more secure and practical.
Optimal Defense Strategy Selection Method Based on Network Attack-Defense Game Model
LIU Jing-wei, LIU Jing-ju, LU Yu-liang, YANG Bin, ZHU Kai-long
Computer Science. 2018, 45 (6): 117-123.  doi:10.11896/j.issn.1002-137X.2018.06.020
In order to reduce the loss from security risks and make optimal network defense decisions under limited resources, an optimal defense strategy selection method based on an attack-defense game model was proposed. First, the network attack-defense game model was established and the existence of a mixed-strategy Nash equilibrium in the model was proved. Then, a network attack-defense strategy selection algorithm based on the model was given, including an attack-defense strategy searching algorithm based on the network attack-defense strategy graph, a method of calculating the utility functions of various attack-defense strategies based on the Common Vulnerability Scoring System, and a method for solving the mixed-strategy Nash equilibrium. Finally, the validity of the model was analyzed and verified in a typical attack-defense experiment. The experimental results show that the model can effectively generate the optimal defense strategy.
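A minimal sketch of solving a mixed-strategy equilibrium, assuming for illustration a zero-sum attack-defense payoff matrix so that the defender's equilibrium mixture is a linear program (the paper's game and CVSS-based utilities are richer; the matrix here is made up):

import numpy as np
from scipy.optimize import linprog

# Defender's payoff for (defense strategy i, attack strategy j); illustrative values.
U = np.array([[ 3.0, -1.0,  0.0],
              [-2.0,  4.0,  1.0],
              [ 1.0,  0.0,  2.0]])
m, n = U.shape

# Variables: defense mixture p (m values) and game value v; maximize v
# subject to p^T U[:, j] >= v for every attack column j.
c = np.concatenate([np.zeros(m), [-1.0]])     # linprog minimizes, so minimize -v
A_ub = np.hstack([-U.T, np.ones((n, 1))])     # v - p^T U[:, j] <= 0
b_ub = np.zeros(n)
A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
b_eq = [1.0]                                  # mixture sums to 1
bounds = [(0, 1)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("optimal defense mixture:", res.x[:m], "game value:", res.x[m])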
Chinese Data Encryption Scheme of Efficient Ciphertext Retrieving in Cloud Storage
ZHANG Shu-nan, CAI Ying, FAN Yan-fang, XIA Hong-ke
Computer Science. 2018, 45 (6): 124-129.  doi:10.11896/j.issn.1002-137X.2018.06.021
Data encryption is the primary technology for ensuring data safety in cloud storage, and efficient ciphertext retrieval technology shows promising capability to improve retrieval efficiency and reduce storage overhead. During ciphertext retrieval, some schemes need to decrypt the ciphertext many times, which severely reduces retrieval efficiency; other schemes construct a large set of index files, which costs a great deal of cloud storage space. In this paper, a Chinese data encryption scheme was proposed that takes both retrieval efficiency and storage overhead into consideration: it utilizes random sorting of data blocks and label vector encryption in the data encryption process, and then cooperates with the index vector files to retrieve the ciphertext in the retrieval process. Experiments show that this scheme can find users' data accurately while consuming less time and storage space.
Privacy Protection Model and Privacy Metric Methods Based on Privacy Preference
ZHANG Pan-pan, PENG Chang-gen, HAO Chen-yan
Computer Science. 2018, 45 (6): 130-134.  doi:10.11896/j.issn.1002-137X.2018.06.022
The balance between privacy protection and service quality is an issue that remains to be solved. This paper proposed a game metric model based on privacy preference. Firstly, a formal definition of the user's privacy preference was proposed, together with a method of quantifying privacy preference. On this basis, the service provider's strategy selection based on privacy preference was analyzed and a privacy metric model based on game theory was put forward; strategy entropy was used to measure the user's privacy disclosure under mixed strategies, which comprehensively considers the effect of users' privacy preferences on the service provider's game strategy and effectively measures the user's privacy leakage. Finally, the feasibility was demonstrated through a case study.
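A minimal sketch of a strategy entropy computed as the Shannon entropy of a mixed strategy, which is one natural reading of the metric (the distribution below is illustrative):

import numpy as np

def strategy_entropy(p):
    # Shannon entropy of a mixed strategy; higher entropy means the provider's
    # behavior is less predictable, one proxy for lower privacy disclosure.
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

print(strategy_entropy([0.5, 0.3, 0.2]))   # entropy in bits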
Blind Watermarking Algorithm for Digital Image Based on Fibonacci Scrambling in Wavelet Domain
WANG Qian, YU Lai-hang, CAO Yan, ZHANG Lei, QIN Jie, YE Hai-qin
Computer Science. 2018, 45 (6): 135-140.  doi:10.11896/j.issn.1002-137X.2018.06.023
For the copyright protection of digital images, a blind digital watermarking scheme based on Fibonacci scrambling in the wavelet domain was proposed. The region of interest (ROI) of the original image is used as the watermark source to improve the concealment of the watermark. In the watermark embedding process, the original image is divided into blocks, Fibonacci scrambling and the discrete wavelet transform (DWT) are performed on each block, and the low-frequency sub-band is selected for watermark embedding. At the same time, a DWT is also applied to the watermark, its low-frequency sub-band is selected, and the scrambled matrix obtained by Fibonacci scrambling is embedded into the corresponding block of the host image. In the watermark extraction process, according to the secret key set during embedding, inverse Fibonacci scrambling and the inverse DWT are used to extract the watermark. The simulation results show that the watermarking scheme has high security, robustness and concealment.
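A minimal sketch of additive embedding in the DWT low-frequency sub-band, assuming pywt for the transform and a simple strength factor alpha; the Fibonacci scrambling step is reduced to an illustrative key-dependent permutation and is not the paper's exact construction:

import numpy as np
import pywt

def embed(host, mark, alpha=0.05, seed=7):
    # Scramble the watermark (stand-in for Fibonacci scrambling), then add it
    # to the LL sub-band of the host block and invert the transform.
    rng = np.random.default_rng(seed)
    perm = rng.permutation(mark.size)          # key-dependent permutation
    scrambled = mark.ravel()[perm].reshape(mark.shape)
    LL, (LH, HL, HH) = pywt.dwt2(host, "haar")
    LL = LL + alpha * scrambled
    return pywt.idwt2((LL, (LH, HL, HH)), "haar"), perm

host = np.random.default_rng(0).random((64, 64))   # toy 64x64 image block
mark = np.random.default_rng(1).random((32, 32))   # toy 32x32 watermark (ROI)
marked, key = embed(host, mark)
print(marked.shape)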
Software & Database Technology
Graphic Complexity-based Prioritizing Technique for Regression Testing of Mobile Navigation Service
CHENG Jing, ZHANG Tao, WANG Tao, DONG Zhan-wei
Computer Science. 2018, 45 (6): 141-144.  doi:10.11896/j.issn.1002-137X.2018.06.024
Mobile navigation service is an important and popular location-based service that recommends routes for mobile users to reach their destinations. Because of the various navigation strategies and many complex situations related to modern mobile navigation services, it is difficult to validate them. In this paper, an approach was presented to prioritize test cases for the regression testing of mobile navigation services. The approach proposes prioritizing metrics based on navigation graph complexity theory. To evaluate the proposed approach, this paper conducted a case study on a popular navigation software product. Compared with a random test approach, the proposed prioritizing approach helps to improve test efficiency.
Software & Database Technology
Search of Speculative Symbolic Execution Path Based on Ant Colony Algorithm
LI Hang, ZANG Lie, GAN Lu
Computer Science. 2018, 45 (6): 145-150.  doi:10.11896/j.issn.1002-137X.2018.06.025
Symbolic execution has been widely used in the field of software testing. Research shows that constraint solving is the most time-consuming part of symbolic execution, even when optimization techniques are used. Speculative symbolic execution reduces the time consumed by constraint solving by merging several consecutive constraint-solving calls into one. The success rate of each guess is affected by the depth of speculation and the direction of search, especially the latter; therefore, guiding the path search toward the direction of success is very important for improving the efficiency of speculative symbolic execution. In this paper, an ant colony algorithm was used to search the path. Firstly, according to the node condition, the weight of each branch path is determined; then the weight of a branch is updated according to whether the branch is covered in each guess, and the search direction is chosen based on branch weights. The experimental results show that the proposed method can improve the efficiency of speculative symbolic execution effectively.
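A minimal sketch of the pheromone-style branch weight update that such guidance amounts to, assuming evaporation plus reinforcement of branches covered by successful guesses (the constants and names are illustrative):

def update_weights(weights, covered, rho=0.1, reward=1.0):
    # Evaporate all branch weights, then reinforce the branches that the
    # last successful speculative guess actually covered.
    for b in weights:
        weights[b] *= (1.0 - rho)
    for b in covered:
        weights[b] += reward
    return weights

def pick_branch(weights, candidates):
    # Greedy guidance: follow the candidate branch with the largest weight.
    return max(candidates, key=lambda b: weights[b])

w = {"b1": 1.0, "b2": 1.0}
w = update_weights(w, covered=["b2"])
print(pick_branch(w, ["b1", "b2"]))   # -> "b2"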
Termination Analysis of Linear Assignment Loop Program Based on k-ranking Functions
LI Yi, CAI Tian-xun, WU Wen-yuan
Computer Science. 2018, 45 (6): 151-155.  doi:10.11896/j.issn.1002-137X.2018.06.026
The termination of loop programs plays an important role in program analysis. This paper focused on the termination of linear assignment loop programs that have no traditional linear ranking functions. Based on the precise computation of A^n x, this paper extended the concept of traditionally defined ranking functions, gave a definition of k-ranking functions and proved their existence. All computations for the synthesis of k-ranking functions were done with RegularChains, a symbolic computation tool in Maple. Experimental results show that some linear loop programs that have no traditional linear ranking functions can indeed be proven terminating by the proposed method.
Software & Database Technology
Consistency Detection Method of Models Based on Three-dimensional Behavior Relation Graph
ZHAO Pei-hai, WANG Mi-mi
Computer Science. 2018, 45 (6): 156-160.  doi:10.11896/j.issn.1002-137X.2018.06.027
In the similarity analysis of business process models, a loop structure may sometimes appear in the model. Existing methods do not consider loop structures and ignore the influence of loops on consistency. A behavior consistency measure method based on transition multi-sets of Petri nets was proposed. Firstly, this paper analyzed five kinds of behavior relations of transitions, and proposed a three-dimensional behavior relation graph based on Petri net branching processes to compare the behavior relations between two models. Secondly, by analyzing the relations between two three-dimensional behavior relation graphs, this paper proposed a consistency measure method based on three-dimensional behavior relation graphs. Theoretical analysis and concrete examples show that the method is effective.
Software & Database Technology
Multi-objective Supervised Defect Prediction Modeling Method Based on Code Changes
CHEN Xiang, WANG Qiu-ping
Computer Science. 2018, 45 (6): 161-165.  doi:10.11896/j.issn.1002-137X.2018.06.028
Defect prediction based on code changes has the advantages of smaller code inspection cost, easier fault localization and rapid fixing. This paper first formalized this problem as a multi-objective optimization problem: one objective is to maximize the number of identified buggy changes, and the other is to minimize the cost of code inspection. Since there is an obvious conflict between the two objectives, this paper proposed a novel method, MULTI, which generates a set of non-dominated prediction models. In the empirical studies, this paper chose six large-scale open source projects (with 227,417 code changes in total) and considered ACC and POPT as performance evaluation indicators. Final results show that the proposed method performs significantly better than state-of-the-art supervised methods (i.e., EALR and Logistic) and unsupervised methods (i.e., LT and AGE).
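A minimal sketch of the non-dominated filtering behind such a model set, assuming each candidate model is scored by (inspection cost, buggy changes found) and kept when no other model is at least as good on both objectives and strictly better on one (the candidate list is illustrative):

def pareto_front(models):
    # models: list of (name, cost, bugs_found); minimize cost, maximize bugs.
    def dominates(p, q):
        return (p[1] <= q[1] and p[2] >= q[2]) and (p[1] < q[1] or p[2] > q[2])
    return [m for m in models if not any(dominates(o, m) for o in models)]

candidates = [("m1", 0.2, 30), ("m2", 0.2, 25), ("m3", 0.5, 60), ("m4", 0.6, 55)]
print(pareto_front(candidates))   # -> m1 and m3 survive; m2 and m4 are dominated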
Software & Database Technology
Nash Equilibrium Based Method for Mapping AUTOSAR Tasks to Multicore ECU
RAN Zheng, LUO Lei, YAN Hua, LI Yun
Computer Science. 2018, 45 (6): 166-171.  doi:10.11896/j.issn.1002-137X.2018.06.029
With the growing requirements of automotive applications on processing power, the electronic control units (ECUs) in modern automotive systems are escalating to multicore architectures. The design, implementation and integration of AUTOSAR applications on multicore ECUs face new challenges, one of which is mapping tasks to the multicore ECU while ensuring the real-time performance of the system. In addition, resource limitations and scheduling analysis in real-time systems make AUTOSAR static configuration problems more complex. This paper therefore proposed a Nash equilibrium based method for mapping AUTOSAR tasks to multicore ECUs. By applying task priorities in the game process, the method improves the efficiency of the task mapping process, which has important practical significance. Finally, the proposed method was applied to an automotive electronic instance in AUTOSAR. The experimental results show that the proposed method performs well on the worst-case response time of the runnable entities in each task.
Group Nearest Neighbor Query Method of Line Segment in Obstacle Space
GUO Ying-ying, ZHANG Li-ping, LI Song
Computer Science. 2018, 45 (6): 172-175.  doi:10.11896/j.issn.1002-137X.2018.06.030
To make up for the deficiencies of existing research, which cannot effectively handle group nearest neighbor queries based on line segments in obstacle space, a group nearest neighbor query method for line segments in obstacle space was proposed. The query process is divided into two stages, filtering and refining. In the filtering process, according to the properties of the line segment Voronoi diagram and the definition of OLGNN, a pruning theorem for data lines was proposed together with the OLGNN_Line_Filter algorithm; according to the definition of line obstruction distance, a pruning theorem for obstacles was proposed together with the OLGNN_Obstacle_Filter algorithm. In the refining process, in order to get more accurate query results, the corresponding refining theorem and the STA_OLGNN algorithm were proposed. Theoretical research and experimental results show that the proposed algorithms can effectively handle group nearest neighbor queries of line segments in obstacle environments.
Artificial Intelligence
Relationship and Reasoning Study for Three-way Decision Cost Objective Functions
XU Jian-feng, HE Yu-fan, LIU Lan
Computer Science. 2018, 45 (6): 176-182.  doi:10.11896/j.issn.1002-137X.2018.06.031
Three-way decision (3WD) is an important method for solving problems under uncertainty. Classical decision-theoretic rough set theory provides an efficient tri-partition threshold solving method by minimizing the overall decision risk. However, the logical relationships among the three-way decision cost objective functions and the threshold reasoning they induce still need further study. In this study, a threshold solution model based on the logical relationships among the cost objective functions of three-way decision was constructed. Furthermore, the derivation of three-way decision thresholds for different loss function value distributions was studied, and the three-way classification semantic interpretation of different domain values was given. Finally, a set of typical examples shows that three-way classification based on the above cost objective function reasoning is valid.
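A minimal worked computation of the classical decision-theoretic thresholds that such cost objective functions yield, using the standard formulas α = (λPN − λBN)/((λPN − λBN) + (λBP − λPP)) and β = (λBN − λNN)/((λBN − λNN) + (λNP − λBP)) with illustrative loss values:

def three_way_thresholds(lPP, lBP, lNP, lPN, lBN, lNN):
    # lXY: loss of taking action X (P = accept, B = defer, N = reject)
    # when the object is in state Y (P = positive, N = negative).
    alpha = (lPN - lBN) / ((lPN - lBN) + (lBP - lPP))
    beta = (lBN - lNN) / ((lBN - lNN) + (lNP - lBP))
    return alpha, beta

# Illustrative losses: accepting a negative is costly, so is rejecting a positive.
print(three_way_thresholds(lPP=0, lBP=2, lNP=6, lPN=8, lBN=3, lNN=0))
# -> alpha = 5/7 ~ 0.714, beta = 3/7 ~ 0.429: accept if Pr >= alpha,
#    reject if Pr <= beta, defer otherwise.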
Rough Set Based Knowledge Predicate Analysis of Chinese Knowledge Based Question Answering
HAN Zhao, MIAO Duo-qian, REN Fu-ji
Computer Science. 2018, 45 (6): 183-186.  doi:10.11896/j.issn.1002-137X.2018.06.032
In knowledge based question answering systems, the performance of knowledge predicate analysis affects the overall matching of knowledge triples. Knowledge predicate analysis of short Chinese questions is difficult because of the uncertainty of Chinese knowledge predicate representation. Based on rough set theory, a new definition of knowledge predicate analysis for knowledge based question answering was given, and a new method was proposed to analyze the knowledge predicates of questions. It reduces the words that are weakly related to the knowledge predicate, so that the words more related to the knowledge predicate representation are used to match the knowledge triples, improving the overall performance of the system. The experimental results verify the validity of the method.
Improved NSGA2 Algorithm Based on Dominant Strength
LAI Wen-xing, DENG Zhong-min
Computer Science. 2018, 45 (6): 187-192.  doi:10.11896/j.issn.1002-137X.2018.06.033
NSGA2 is a simple, efficient and widely used multi-objective evolutionary algorithm. However, when solving high-dimensional and complex nonlinear multi-objective optimization problems in practical engineering, NSGA2 has some obvious design defects, such as ineffective identification of pseudo non-dominated solutions, low computational efficiency, and poor convergence and distribution. To remedy these drawbacks, this paper proposed an improved NSGA2 algorithm based on dominant strength (INSGA2-DS). INSGA2-DS uses a fast dominant strength sorting method to construct the non-dominated set, introduces a new crowding distance that considers variance to improve the distribution of solution sets, and adopts an adaptive elitist retention strategy to adjust the elitist retention scale automatically during evolution. Experiments on standard test functions comparing INSGA2-DS and NSGA2 show that INSGA2-DS effectively improves the convergence and distribution of NSGA2.
Matrix Completion Algorithm Based on Subspace Thresholding Pursuit
WANG Zhi, WANG Jian-jun, WANG Wen-dong
Computer Science. 2018, 45 (6): 193-196.  doi:10.11896/j.issn.1002-137X.2018.06.034
Low rank matrix completion is a basic problem in machine learning and data analysis. It plays a key role in solving many important problems, such as collaborative filtering, dimensionality reduction, multi-task learning and pattern recognition. Focusing on the problems that ADMiRA may converge slowly and easily fall into local optima, this paper proposed a new algorithm that adds an SVP step into every iteration of SP. By making use of SVP's fast convergence, the proposed algorithm improves SP's convergence speed and obtains better results. The algorithm was implemented and tested on several simulated data sets and image data sets. Experiments reveal very encouraging results in terms of solution quality and required processing time.
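A minimal sketch of a singular value projection (SVP) step of the kind that can be folded into each SP iteration, assuming rank-r hard thresholding via truncated SVD after a gradient step on the observed entries (step size and data are illustrative):

import numpy as np

def svp_complete(M, mask, r, step=1.0, iters=200):
    # Gradient step on observed entries, then project onto the rank-r manifold
    # by keeping the top-r singular triplets (the SVP projection).
    X = np.zeros_like(M)
    for _ in range(iters):
        X = X + step * mask * (M - X)
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :r] * s[:r]) @ Vt[:r]
    return X

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 5)) @ rng.normal(size=(5, 50))   # rank-5 ground truth
mask = rng.random((50, 50)) < 0.4                         # 40% of entries observed
X = svp_complete(A * mask, mask, r=5)
print(np.linalg.norm(X - A) / np.linalg.norm(A))          # relative recovery error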
Parameters Optimization for SVM Based on Particle Swarm Algorithm
CHEN Jin-yin, XIONG Hui, ZHENG Hai-bin
Computer Science. 2018, 45 (6): 197-203.  doi:10.11896/j.issn.1002-137X.2018.06.035
Support vector machines depend strongly on their hyper-parameters, so parameter settings, such as those of the RBF kernel function, determine the classification performance of the SVM. In order to select parameters appropriate to the classification problem, the data set is mapped into a high-dimensional feature space to calculate the average inter-class distance and the distance between the two class centers, and the difference between the results is taken as the fitness value for parameter assessment. Through the global optimization ability of the particle swarm algorithm, populations representing different parameters are generated in the defined domain; the optimal parameter search is performed by random walks of particles, and the results are fed into the SVM for training. Compared with the grid algorithm, the parameter setting of the proposed algorithm is more accurate, the classification accuracy is significantly improved, and the complexity of the algorithm does not increase.
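A minimal sketch of PSO over (C, gamma) for an RBF-kernel SVM, assuming cross-validated accuracy as the fitness (the paper's fitness is a feature-space class-distance criterion; CV accuracy is a stand-in here, and the PSO constants are illustrative):

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

def fitness(p):
    C, gamma = 10.0 ** p                      # particles live in log10 space
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

rng = np.random.default_rng(0)
n, dim = 12, 2
pos = rng.uniform(-3, 3, (n, dim)); vel = np.zeros((n, dim))
pbest, pfit = pos.copy(), np.array([fitness(p) for p in pos])
for _ in range(15):
    g = pbest[pfit.argmax()]                  # swarm-best particle
    vel = 0.7 * vel + 1.5 * rng.random((n, dim)) * (pbest - pos) \
          + 1.5 * rng.random((n, dim)) * (g - pos)
    pos = np.clip(pos + vel, -3, 3)
    fit = np.array([fitness(p) for p in pos])
    better = fit > pfit
    pbest[better], pfit[better] = pos[better], fit[better]
print("best log10(C), log10(gamma):", pbest[pfit.argmax()], pfit.max())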
Joint Model of Events’ Causal and Temporal Relations Identification
HUANG Yi-long, LI Pei-feng, ZHU Qiao-ming
Computer Science. 2018, 45 (6): 204-207.  doi:10.11896/j.issn.1002-137X.2018.06.036
Causal and temporal relations between events are both important event relations. Previous work regarded the identification of causal and temporal relations as two independent tasks, ignoring the association between them. This paper applied integer linear programming (ILP) to construct a joint model of causal and temporal relation identification. On top of the classifier models, the joint model optimizes the recognition results through constraints. The experimental results show that the joint model significantly improves identification performance.
Improved Autoencoder Based Classification Algorithm for Text
XU Zhuo-bin, ZHENG Hai-shan, PAN Zhu-hong
Computer Science. 2018, 45 (6): 208-210.  doi:10.11896/j.issn.1002-137X.2018.06.037
Vector representation of words is the premise of applications in text mining. In order to improve the effectiveness of autoencoders in word embedding and the accuracy of text classification, this paper proposed an improved autoencoder and applied it to text classification. Based on the traditional autoencoder, a global adjustable function is added to the latent layer, which adjusts smaller absolute values toward bigger absolute values and implements sparsity of the characteristic vector in the latent layer. With the adjusted latent characteristic vectors, a fully connected neural network is used to classify text. Experiments on the 20news data set show that the proposed method is more effective in word embedding and has better performance in text classification.
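A minimal sketch of an autoencoder whose latent code passes through a global adjustable function before decoding, assuming for illustration g(z) = sign(z)·|z|^p with p < 1, which enlarges small magnitudes as the abstract describes (the paper's exact function and its sparsity mechanism are not reproduced here):

import torch
import torch.nn as nn

class AdjustedAutoencoder(nn.Module):
    def __init__(self, d_in=2000, d_hid=128, p=0.5):
        super().__init__()
        self.enc = nn.Linear(d_in, d_hid)
        self.dec = nn.Linear(d_hid, d_in)
        self.p = p

    def adjust(self, z):
        # Illustrative global adjustable function (assumed form, not the
        # paper's): |z| ** p with p < 1 enlarges small magnitudes.
        return torch.sign(z) * torch.abs(z) ** self.p

    def forward(self, x):
        z = self.adjust(torch.tanh(self.enc(x)))
        return self.dec(z), z

model = AdjustedAutoencoder()
x = torch.rand(8, 2000)                       # toy bag-of-words vectors
recon, code = model(x)
loss = nn.functional.mse_loss(recon, x)
loss.backward()                               # trains end to end as usual
print(float(loss))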
Complexity of CP-nets Learning
LIU Jing-lei, LIAO Shi-zhong
Computer Science. 2018, 45 (6): 211-215.  doi:10.11896/j.issn.1002-137X.2018.06.038
CP-nets (Conditional Preference networks) are a simple and intuitive graphical tool for representing conditional preferences; their three fundamental research questions are representation, reasoning and learning. Differently from research based on statistical learning theory, the learning of binary-valued CP-nets was studied here from the standpoint of logic. Firstly, an intimate relationship between the satisfiability of propositional logic formulas and preference assertions is given, so the CP-nets learning problem is transformed into propositional reasoning. Secondly, the computational complexity of learning two kinds of specific CP-nets is given: the learning problem for the most complicated acyclic CP-nets is NP-complete, whereas the learning problem for the simplest set-structured CP-nets is in P. These results establish upper and lower complexity bounds for learning specifically structured (e.g., list-structured, bounded tree-width) CP-nets.
Symbolic Aggregate Approximation Method of Time Series Based on Beginning and End Distance
JI Hai-juan, ZHOU Cong-hua, LIU Zhi-feng
Computer Science. 2018, 45 (6): 216-221.  doi:10.11896/j.issn.1002-137X.2018.06.039
Feature representation of time series data is a key technology in time series data mining, and symbolic aggregate approximation (SAX) is the most commonly used feature representation method. A symbolic aggregate approximation method based on beginning and end distance (SAX_SM) was proposed, because the SAX algorithm cannot distinguish the similarity between time series when the symbols of corresponding sequence segments coincide. Time series data have strong morphological trends, so the proposed method uses the beginning point and the end point of each segment to represent its morphological feature, and then uses the morphological feature and the representation symbol of each segment to approximate the time series, mapping it from a high-dimensional space to a low-dimensional space. Next, in order to calculate the morphological distance between two sequences, a beginning-and-end distance based on the beginning and end points was constructed. Finally, to measure the similarity between time series more objectively, a new distance metric was defined by combining the beginning-and-end distance with the symbol distance. Theoretical analysis shows that the new distance measure satisfies the lower bound theorem. Experiments on 20 UCR time series data sets show that the proposed SAX_SM method achieves the highest classification accuracy (including ties) on 13 data sets, while SAX does so on only 6 data sets. Therefore, SAX_SM classifies better than SAX.
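A minimal sketch of the SAX baseline that SAX_SM extends, assuming z-normalization, PAA and Gaussian breakpoints, with each segment additionally keeping its beginning and end points as the morphological feature (the combined distance weighting is the paper's own and is not reproduced):

import numpy as np
from scipy.stats import norm

def sax_sm(ts, n_seg, alpha=4):
    ts = (ts - ts.mean()) / ts.std()                 # z-normalize
    segs = np.array_split(ts, n_seg)
    paa = np.array([s.mean() for s in segs])         # piecewise aggregate approximation
    cuts = norm.ppf(np.linspace(0, 1, alpha + 1)[1:-1])   # Gaussian breakpoints
    symbols = np.searchsorted(cuts, paa)             # symbol index per segment
    ends = np.array([(s[0], s[-1]) for s in segs])   # beginning/end morphological feature
    return symbols, ends

ts = np.sin(np.linspace(0, 6 * np.pi, 120)) \
     + 0.1 * np.random.default_rng(0).standard_normal(120)
print(sax_sm(ts, n_seg=8))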
Short-term Traffic Flow Prediction Model Based on Gradient Boosting Regression Tree
SHEN Xia-jiong, ZHANG Jun-tao, HAN Dao-jun
Computer Science. 2018, 45 (6): 222-227.  doi:10.11896/j.issn.1002-137X.2018.06.040
Short-term traffic flow prediction is an important part of traffic flow modeling and plays an important role in urban road traffic management and control. However, common time series models (e.g., ARIMA) and random forest (RF) models are limited in prediction accuracy due to model residuals and the choice of input variables. Aiming at this problem, a short-term traffic flow prediction model based on gradient boosting regression trees (GBRT) was proposed to predict travel speed. The model first introduces the Huber loss function to handle residuals, and spatial-temporal correlations are considered in the input variables. The model adjusts the weights of the weak learners during training and corrects the residuals of the model to improve prediction accuracy. Experiments were conducted using traffic speed data from a city expressway, and the ARIMA model and the random forest model were compared with the proposed model using MSE, MAPE and other indicators. Results show that the proposed model has the best prediction accuracy, verifying its validity for short-term traffic flow prediction.
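A minimal sketch of a GBRT speed predictor with the Huber loss, assuming lagged speeds of the target segment and a neighboring segment as the spatial-temporal inputs (the data are synthetic and the layout is illustrative):

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
speed = 60 + 8 * np.sin(np.arange(600) / 20) + rng.normal(0, 2, 600)   # toy series
neighbor = speed + rng.normal(0, 1, 600)                               # upstream segment

# Features: the last 3 readings of the segment and of its neighbor
# (temporal lags plus a spatial correlate).
lags = 3
X = np.column_stack([speed[i:-(lags - i) or None] for i in range(lags)] +
                    [neighbor[i:-(lags - i) or None] for i in range(lags)])
y = speed[lags:]

model = GradientBoostingRegressor(loss="huber", n_estimators=200, max_depth=3)
model.fit(X[:-100], y[:-100])
print(np.abs(model.predict(X[-100:]) - y[-100:]).mean())   # MAE on the held-out tail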
Multi-label Feature Selection Algorithm Based on Improved Label Correlation
CHEN Fu-cai, LI Si-hao, ZHANG Jian-peng, HUANG Rui-yang
Computer Science. 2018, 45 (6): 228-234.  doi:10.11896/j.issn.1002-137X.2018.06.041
Multi-label feature selection is one of the essential methods for overcoming the curse of dimensionality: it reduces the feature dimension, improves learning efficiency and optimizes classification performance. However, many existing feature selection algorithms hardly take label correlation into consideration, and the ranges of information entropies are biased across different data sets. To address these problems, this paper proposed a multi-label feature selection algorithm based on improved label correlation. The algorithm first uses symmetrical uncertainty to normalize the information entropy, and takes normalized mutual information as the relationship measurement to define label importance, with which the label-related terms in dependency and redundancy are weighted. Finally, a score function is put forward to evaluate feature importance, and the feature subset with the highest score is selected. Experiments demonstrate that, after selecting a concise and accurate feature subset, multi-label classification with discrete features is improved in both performance and efficiency.
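A minimal sketch of symmetrical uncertainty as the normalized entropy measure the algorithm builds on, assuming discrete features and labels and SU(X,Y) = 2·I(X;Y)/(H(X)+H(Y)) (the full label-weighted score function is the paper's own):

import numpy as np
from sklearn.metrics import mutual_info_score

def entropy(x):
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log(p)).sum()

def symmetrical_uncertainty(x, y):
    # SU normalizes mutual information into [0, 1], making scores
    # comparable across features and across data sets.
    return 2 * mutual_info_score(x, y) / (entropy(x) + entropy(y))

rng = np.random.default_rng(0)
label = rng.integers(0, 2, 500)
feature = np.where(rng.random(500) < 0.8, label, rng.integers(0, 2, 500))
print(symmetrical_uncertainty(feature, label))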
Convolutional Neural Network Model for Text Classification Based on BGRU Pooling
ZHOU Feng, LI Rong-yu
Computer Science. 2018, 45 (6): 235-240.  doi:10.11896/j.issn.1002-137X.2018.06.042
Aiming at the problems of limited adaptability and low precision when deep learning is applied to text classification, this paper proposed a convolutional neural network model based on a bi-directional gated recurrent unit (BGRU) and convolution-layer pooling. In the pooling stage, the intermediate sentence representation generated by the BGRU is compared with the local representations obtained from the convolution layer; representations of high similarity are judged to be important information, which is retained by increasing its weight. The model can be trained end-to-end on multiple types of text and has good adaptability. The experimental results show that the proposed model has clear advantages over other similar models, and the classification accuracy is also improved significantly.
Three-way Decisions Model Based on Evidence Theory
CHEN Yu-jin, LI Xu-wu, XING Rui-kang
Computer Science. 2018, 45 (6): 241-246.  doi:10.11896/j.issn.1002-137X.2018.06.043
There are similarities in the concepts and information processing methods of the three-way decisions model and evidence theory. First, evidence theory was introduced into the three-way decisions model, and its trust delay interval, including variable semantics, was analyzed. Second, a certainty three-way decisions model based on evidence theory and a deformable variant were established. By adjusting the coefficients of trust values, strategies based on optimistic and pessimistic semantics were described, and the decision rules were derived in combination with Bayes risk decision, which can meet the application requirements of specific cases in reality. Finally, an example of situation evaluation was given to illustrate applications of the proposed model.
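A minimal sketch of Dempster's rule of combination, the evidence-theory operation such a model builds on, assuming two mass functions over a frame {a, b} (the masses are illustrative):

from itertools import product

def combine(m1, m2):
    # Dempster's rule: multiply masses of intersecting focal elements and
    # renormalize by 1 - K, where K is the total conflicting mass.
    out, conflict = {}, 0.0
    for (A, x), (B, y) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            out[inter] = out.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    return {A: v / (1.0 - conflict) for A, v in out.items()}

m1 = {frozenset("a"): 0.6, frozenset("ab"): 0.4}
m2 = {frozenset("b"): 0.5, frozenset("ab"): 0.5}
print(combine(m1, m2))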
Attribute Reduction of Decision Systems Based on Indiscernibility Relation and Discernibility Relation
QIN Ke-yun, JING Si-hui
Computer Science. 2018, 45 (6): 247-250.  doi:10.11896/j.issn.1002-137X.2018.06.044
Knowledge reduction and knowledge discovery in information systems are important topics of rough set theory. Based on the indiscernibility relation and the discernibility relation of decision systems, judgement theorems for relative indiscernibility consistent sets and relative discernibility consistent sets were provided. This paper gave an attribute reduction method based on discernibility matrices and discernibility functions, and analyzed it together with relevant research work by means of examples.
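A minimal sketch of the discernibility-matrix construction such a reduction method relies on, assuming a small decision table where each entry lists the condition attributes distinguishing two objects with different decisions (the table is illustrative):

def discernibility_matrix(objects, decisions, attrs):
    # Entry (i, j) holds the attributes on which objects i and j differ,
    # recorded only when their decisions differ.
    m = {}
    for i in range(len(objects)):
        for j in range(i + 1, len(objects)):
            if decisions[i] != decisions[j]:
                m[(i, j)] = {a for a in attrs if objects[i][a] != objects[j][a]}
    return m

objs = [{"a": 1, "b": 0}, {"a": 1, "b": 1}, {"a": 0, "b": 1}]
print(discernibility_matrix(objs, decisions=[0, 1, 1], attrs=["a", "b"]))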
Unsupervised Active Learning Based on Adaptive Sparse Neighbors Reconstruction
LV Ju-jian, ZHAO Hui-min, CHEN Rong-jun, LI Jian-hong
Computer Science. 2018, 45 (6): 251-258.  doi:10.11896/j.issn.1002-137X.2018.06.045
In many information processing tasks, it is easy to obtain a large amount of unlabeled data, but labeling it is time-consuming and usually expensive. As an important method in machine learning, active learning reduces the cost of labeling by selecting the most informative data points to label. However, most existing active learning algorithms are supervised methods based on a classifier and are not suitable for sample selection without any label information. Aiming at this problem, a novel unsupervised active learning algorithm, active learning based on adaptive sparse neighbors reconstruction, was proposed by drawing on optimal experimental design and combining it with adaptive sparse neighbors reconstruction. The proposed algorithm adaptively selects the neighborhood scale according to the regional distribution of the data set, searches sparse neighbors and calculates reconstruction coefficients simultaneously, and can choose the data points most representative of the distribution structure of the data set without any label information. Empirical results on both synthetic and real-world data sets show that the proposed algorithm achieves high classification accuracy and robustness under the same labeling cost.
Graphics, Image & Pattern Recognition
Real-time Sub-pixel Object Detection for Hyperspectral Image Based on Pixel-by-pixel Processing
LIN Wei-jun, ZHAO Liao-ying, LI Xiao-run
Computer Science. 2018, 45 (6): 259-264.  doi:10.11896/j.issn.1002-137X.2018.06.046
Abstract PDF(3136KB) ( 830 )   
References | Related Articles | Metrics
Sub-pixel target detection is one of the key technologies in the application of hyperspectral images. Since the high dimensionality of hyperspectral data markedly increases the storage space and the complexity of data processing, real-time processing has become a crucial problem for target detection. The adaptive matched filter (AMF) is an effective algorithm for sub-pixel target detection. Taking AMF as the sub-pixel target detection algorithm, this paper derived a real-time AMF target detection procedure for hyperspectral images transmitted and stored pixel by pixel, based on real-time inversion of the covariance matrix of the hyperspectral data using the Woodbury lemma. Experiments were conducted on synthetic data and real hyperspectral images. The results demonstrate that, compared with non-real-time AMF, real-time AMF needs less storage space and achieves the same or slightly better detection accuracy.
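A compact sketch of the pixel-by-pixel idea follows, using the rank-one (Sherman-Morrison) case of the Woodbury lemma to update the inverse matrix as each pixel arrives; the band count, data, and diagonal initialization are assumptions, and mean removal is omitted for brevity.

```python
# Sketch: pixel-by-pixel AMF with a rank-one Woodbury update of the inverse
# second-moment matrix (correlation form; covariance would subtract the mean).
import numpy as np

def realtime_amf(pixels, s, delta=1e3):
    """pixels: (n, bands) stream; s: (bands,) target signature."""
    bands = pixels.shape[1]
    G = delta * np.eye(bands)                    # inverse of a small initial matrix
    scores = np.empty(len(pixels))
    for k, x in enumerate(pixels):
        Gx = G @ x
        G -= np.outer(Gx, Gx) / (1.0 + x @ Gx)   # Sherman-Morrison inverse update
        sG = s @ G
        scores[k] = (sG @ x) ** 2 / (sG @ s)     # AMF statistic for this pixel
    return scores

rng = np.random.default_rng(1)
data = rng.normal(size=(1000, 10))               # toy pixel stream, 10 bands
sig = np.ones(10)                                # toy target signature
print(realtime_amf(data, sig)[:3])
```

Because only the bands-by-bands inverse is kept in memory, the storage cost is independent of the number of pixels already processed, which is the source of the savings reported above.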
3D Geometric Reconstruction Based on Bayes-MeTiS Mesh Partition
ZHANG Xiao-hua, HUANG Bo
Computer Science. 2018, 45 (6): 265-269.  doi:10.11896/j.issn.1002-137X.2018.06.047
Abstract PDF(2663KB) ( 760 )   
References | Related Articles | Metrics
In order to improve the compression efficiency of the geometric reconstruction process of 3D models, this paper proposed a Bayesian geometric reconstruction algorithm based on MeTiS mesh partition. At the encoding side, the MeTiS method is used to partition the original 3D mesh into subnets, a random linear matrix is used to encode the geometry of each subnet, and a pseudo-random number generator is used to construct the data sequence by considering the neighbor nodes of the boundary nodes. Then, a Bayesian geometric reconstruction algorithm is designed, and the mean, the variance matrix and the model parameters are derived theoretically to realize the geometric reconstruction of the 3D model. Finally, in comparison with graph Fourier transform spectral compression (GFT), least squares compression (LMS) and the compressed-sensing-based graph Fourier transform spectral compression algorithm (CSGFT), simulation results show that the proposed method achieves a relatively high bit-rate compression index and a low reconstruction error.
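As a loose analogy only (the paper's exact statistical model is not given in this abstract), the following sketch encodes one subnet's coordinates with a random linear matrix and decodes them with a Gaussian-prior MAP estimate; the smoothness prior and all sizes are assumptions.

```python
# Sketch: random linear encoding of sub-mesh geometry plus a Bayesian (MAP,
# Gaussian-prior) decode; the second-difference smoothness prior is assumed.
import numpy as np

rng = np.random.default_rng(2)
n, m = 50, 30                       # vertices in one subnet, measurements
L = np.diag(np.full(n, 2.0)) - np.eye(n, k=1) - np.eye(n, k=-1)  # smoothness matrix
z = np.linalg.solve(L + 0.01 * np.eye(n), rng.normal(size=n))    # smooth coordinates

Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # random linear encoder
y = Phi @ z                                  # transmitted measurements

lam = 0.1                                    # assumed prior strength
z_hat = np.linalg.solve(Phi.T @ Phi + lam * L, Phi.T @ y)        # MAP decode
print(np.linalg.norm(z - z_hat) / np.linalg.norm(z))             # relative error
```

The MAP solution is the posterior mean under the Gaussian model, which is the role the mean and variance matrix play in the reconstruction described above.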
Whole Frame Loss Concealment Method for Stereo Video Based on Disparity Consistence
LIU Jing-hua, CHEN Jing
Computer Science. 2018, 45 (6): 270-274.  doi:10.11896/j.issn.1002-137X.2018.06.048
Abstract PDF(2513KB) ( 791 )   
References | Related Articles | Metrics
A whole frame loss concealment method for stereoscopic video was proposed in this paper. It consists of two parts. First, based on the consistency of object motion across neighboring views, the frame difference between the current and the previous frame in the correctly received view is projected onto the view of the lost frame via the estimated disparities, and the lost frame is then recovered from the projected frame difference and its previous frame. Second, according to the characteristics of the holes, two alternative methods are adaptively applied to fill them effectively. Experimental results show that the proposed frame loss concealment method for stereoscopic video can effectively restore the lost frame at the decoder and outperforms several existing methods.
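A simplified sketch of the first part follows, assuming a single constant horizontal disparity (in practice disparities are estimated per region, and the hole-filling stage is omitted here).

```python
# Sketch: recover a lost frame from the neighbouring view's frame difference
# projected by a constant, assumed horizontal disparity.
import numpy as np

def conceal(prev_lost_view, cur_other, prev_other, disparity):
    diff = cur_other.astype(np.int16) - prev_other.astype(np.int16)
    projected = np.roll(diff, disparity, axis=1)   # shift along image columns
    recovered = prev_lost_view.astype(np.int16) + projected
    return np.clip(recovered, 0, 255).astype(np.uint8)

h, w, d = 4, 8, 2
rng = np.random.default_rng(3)
prev_l = rng.integers(0, 256, (h, w), dtype=np.uint8)    # last good left frame
prev_r = np.roll(prev_l, -d, axis=1)                     # toy right view
cur_r = np.clip(prev_r.astype(int) + 10, 0, 255).astype(np.uint8)
print(conceal(prev_l, cur_r, prev_r, d))                 # concealed left frame
```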
Gabor Occlusion Dictionary Learning via Singular Value Decomposition
LI Xiao-xin, ZHOU Yuan-shen, ZHOU Xuan, LI Jing-jing, LIU Zhi-yong
Computer Science. 2018, 45 (6): 275-283.  doi:10.11896/j.issn.1002-137X.2018.06.049
Abstract PDF(6209KB) ( 948 )   
References | Related Articles | Metrics
Covariate shift incurred by occlusion and illumination variations is an important problem for real-world face recognition systems. This paper explored this problem from the perspective of dictionary coding. By reviewing several extant structured error coding methods, this paper showed that these methods can be rewritten as a linear system combining a training dictionary with a well-designed occlusion dictionary. Given the importance of the occlusion dictionary in structured error coding, this paper studied K-SVD (K-Singular Value Decomposition), the dictionary learning method used in the Gabor-feature-based robust representation and classification (GRRC) method, which has received great attention in the field of error coding. The occlusion dictionary learned by K-SVD is strongly redundant and lacks natural structure; in addition, K-SVD is time-consuming. This paper therefore proposed an SVD-based occlusion dictionary learning method which is simple but generates a more compact and better structured occlusion dictionary. Experiments on three face datasets, including Extended Yale B, UMBDB and AR, demonstrate that the proposed SVD-based GRRC consistently outperforms the K-SVD-based GRRC in several challenging situations.
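A minimal sketch of the SVD route to a compact occlusion dictionary follows; the data are toy stand-ins (in the paper the inputs would be Gabor-feature occlusion errors), and the dictionary size is an assumption.

```python
# Sketch: build a compact occlusion dictionary from the top left singular
# vectors of sample occlusion errors (toy data assumed).
import numpy as np

rng = np.random.default_rng(4)
E = rng.normal(size=(1024, 200))          # columns: occlusion error samples
U, s, Vt = np.linalg.svd(E, full_matrices=False)
k = 20                                     # assumed occlusion dictionary size
A_occ = U[:, :k]                           # orthonormal occlusion atoms
# Combined coding dictionary: [training dictionary, occlusion dictionary],
# matching the linear-system view of structured error coding above.
D_train = rng.normal(size=(1024, 100))
D = np.hstack([D_train, A_occ])
print(D.shape)                             # (1024, 120)
```

Unlike K-SVD, this requires a single decomposition, and the resulting atoms are orthonormal, which is one plausible reading of "more compact and structured".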
Facial Age Two-steps Estimation Algorithm Based on Label-sensitive Maximum Margin Criterion
XU Xiao-ling, JIN Zhong, BEN Sheng-lan
Computer Science. 2018, 45 (6): 284-290.  doi:10.11896/j.issn.1002-137X.2018.06.050
Abstract PDF(2475KB) ( 888 )   
References | Related Articles | Metrics
The traditional maximum margin criterion usually ignores the differences between classes when computing the between-class scatter matrix. For facial age estimation, however, the differences between age labels are very significant. Therefore, this paper proposed a novel dimensionality reduction algorithm, called label-sensitive maximum margin criterion (lsMMC), which introduces a distance metric between classes. In addition, considering the complexity of the facial aging process, this paper proposed a two-step local regression algorithm for age estimation, named K-nearest-neighbors label-distribution support vector regressor (KNN-LDSVR). The mean absolute error of the proposed method on the FGNET database subset is 4.1 years, an improvement over existing age estimation methods.
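A sketch of the label-sensitive idea follows: the between-class scatter is weighted by the distance between age labels before the margin criterion is maximized. The exact weighting function used by lsMMC is not given in this abstract, so the absolute-difference weight below is an assumption.

```python
# Sketch of a label-sensitive maximum margin criterion (weighting assumed).
import numpy as np

def ls_mmc(X, labels, dim):
    classes = np.unique(labels)
    means = {c: X[labels == c].mean(axis=0) for c in classes}
    Sb = np.zeros((X.shape[1], X.shape[1]))
    Sw = np.zeros_like(Sb)
    for i, ci in enumerate(classes):
        for cj in classes[i + 1:]:
            d = means[ci] - means[cj]
            Sb += abs(ci - cj) * np.outer(d, d)   # label distance as weight
        Xi = X[labels == ci] - means[ci]
        Sw += Xi.T @ Xi                            # within-class scatter
    w, V = np.linalg.eigh(Sb - Sw)                 # maximize the margin criterion
    return V[:, np.argsort(w)[::-1][:dim]]         # top eigenvectors as projection

rng = np.random.default_rng(5)
X = rng.normal(size=(90, 6))
ages = np.repeat([20, 30, 60], 30)                 # toy age labels
P = ls_mmc(X, ages, dim=2)
print((X @ P).shape)                               # (90, 2)
```

Weighting by |label difference| makes classes that are far apart in age dominate the scatter, so the learned projection separates distant ages more aggressively than adjacent ones.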
Moving Shadow Removal Algorithm Based on Multi-feature Fusion
CHEN Rong, LI Peng, HUANG Yong
Computer Science. 2018, 45 (6): 291-295.  doi:10.11896/j.issn.1002-137X.2018.06.051
Abstract PDF(1942KB) ( 678 )   
References | Related Articles | Metrics
Aiming at the problem of moving cast shadows in video surveillance, this paper proposed a shadow removal algorithm that combines color features, normalized vector distance and intensity ratio. First, the background image is built with a Gaussian mixture model, and the motion region is acquired by background subtraction. Then, a serial fusion method is adopted to detect and remove shadow pixels: after shadow detection based on color consistency in the RGB color space, a histogram of the normalized vector distance distribution is used to detect shadow pixels further. Finally, to deal with misclassifications in the detection process, an illumination model of each pixel is built, and the intensity ratio between shadow and background pixels is computed to rule out mistakenly identified foreground pixels according to a confidence interval. Experimental results show that the proposed method overcomes the limitations of single-feature methods and can detect and remove shadows efficiently under various circumstances. The adaptability and robustness of the algorithm are validated, and its processing time is moderate.
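A condensed sketch of the intensity-ratio test combined with a chromaticity check follows; all thresholds are assumptions, and the Gaussian-mixture background and histogram stages are omitted.

```python
# Sketch: serial shadow test on a foreground mask, combining a colour
# (chromaticity) check with an intensity-ratio interval (thresholds assumed).
import numpy as np

def shadow_mask(frame, background, fg_mask, lo=0.4, hi=0.9):
    f = frame.astype(np.float32) + 1e-6
    b = background.astype(np.float32) + 1e-6
    ratio = f.sum(axis=2) / b.sum(axis=2)          # intensity ratio per pixel
    # A shadow darkens the background without changing chromaticity much.
    chroma = np.abs(f / f.sum(2, keepdims=True)
                    - b / b.sum(2, keepdims=True)).sum(axis=2)
    return fg_mask & (ratio > lo) & (ratio < hi) & (chroma < 0.1)

rng = np.random.default_rng(6)
bg = rng.integers(50, 200, (4, 4, 3)).astype(np.uint8)
fr = (bg * 0.6).astype(np.uint8)                   # uniformly darkened: shadow
print(shadow_mask(fr, bg, np.ones((4, 4), bool)))
```

Pixels passing both tests are treated as shadow and removed from the motion region; everything else stays foreground.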
Machine Vision Based Inspection Method of Mura Defect for LCD
QIAN Ji-de, CHEN Bin, QIAN Ji-ye, ZHAO Heng-jun, CHEN Gang
Computer Science. 2018, 45 (6): 296-300.  doi:10.11896/j.issn.1002-137X.2018.06.052
Abstract PDF(2076KB) ( 2872 )   
References | Related Articles | Metrics
After analyzing the necessity of defect detection and the disadvantages of manual inspection of liquid crystal displays (LCD), this paper studied an online machine-vision inspection system for Mura defects of LCDs. Mura defects are characterized by low contrast, fuzzy edges, irregular shapes and uneven brightness. A computer vision system was built to imitate human inspection. A single-frame image background modeling and background subtraction method was proposed, which can effectively suppress the uneven brightness of the LCD and enhance the features of Mura defects. Then, an adaptive threshold segmentation method for Mura defects based on maximally stable extremal regions (MSER) was proposed. An automatic machine-vision inspection system was set up by integrating the proposed methods. Experimental results show that the proposed detection algorithm effectively suppresses the uneven brightness of the LCD and accurately segments Mura defects with good robustness. The system offers less manual intervention, high accuracy and online automatic inspection.
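A minimal OpenCV sketch of a background-subtraction-plus-MSER pipeline of this kind follows; the file name, the blur-based background model and the default MSER settings are all assumptions, not the paper's exact pipeline.

```python
# Illustrative sketch, not the paper's exact background model or thresholds.
import cv2

img = cv2.imread('lcd_frame.png', cv2.IMREAD_GRAYSCALE)   # assumed input image
background = cv2.GaussianBlur(img, (101, 101), 0)          # slowly varying field
residual = cv2.absdiff(img, background)                    # suppress uneven
residual = cv2.normalize(residual, None, 0, 255, cv2.NORM_MINMAX)  # brightness

mser = cv2.MSER_create()               # tune delta/min area for faint, blurry Mura
regions, boxes = mser.detectRegions(residual)
for x, y, w, h in boxes:
    cv2.rectangle(img, (x, y), (x + w, y + h), 255, 1)     # mark candidates
cv2.imwrite('mura_candidates.png', img)
```

Heavy smoothing approximates the slowly varying luminance field, so subtracting it leaves the low-contrast Mura regions for MSER to segment.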
Research on Pan-real-time Problem of Medical Detection Based on BPNNs Recognition Algorithm
LIU Yu-cheng, Richard·DING, ZHANG Ying-chao
Computer Science. 2018, 45 (6): 301-307.  doi:10.11896/j.issn.1002-137X.2018.06.053
Abstract PDF(3995KB) ( 681 )   
References | Related Articles | Metrics
Due to the complexity of the urine sediment imaging environment, the collected images of formed elements contain much redundant information, extracting effective image information is difficult, and the amount of data to be processed is huge. Although the serial DJ8000 system platform running the BP neural network algorithm solves the problem of recognition accuracy for formed elements such as cells, it cannot meet the real-time requirements of medical examination of urine sediment images. To solve this problem, this paper presented a parallel GPU-based system platform built around an optimized BP neural network algorithm. It uses a parallel optimization framework to synchronize and accelerate data processing efficiently, and it supports GPU-based computing and test platforms. In terms of hardware specifications, data transmission and bus technology, and software-hardware compatibility, it helps to resolve the problems that often arise from irregular load imbalance. Experimental data show that the BP neural network algorithm for urine sediment recognition improves performance parameters such as speedup, aging ratio and running time on the GPU platform. Compared with the DJ8000 system platform, the parallel GPU platforms based on the AMD HD7970 and the NVIDIA GTX680 achieve speedup ratios of 10.82~21.35 and 7.63~15.28, respectively. The data show that optimizing the mapping between logical data, address data and the linear addressing function in the parallel-framework GPU processing system allows dynamic allocation, optimizes the algorithm structure, and improves the mapping between software and hardware. This resolves the performance bottleneck caused by load imbalance between threads and thus effectively meets the real-time requirement of urine sediment detection.
Fast Face Recognition Algorithm Based on Local Fusion Feature and Hierarchical Incremental Tree
ZHONG Rui, WU Huai-yu, HE Yun
Computer Science. 2018, 45 (6): 308-313.  doi:10.11896/j.issn.1002-137X.2018.06.054
Abstract PDF(4723KB) ( 885 )   
References | Related Articles | Metrics
Off-line training and the high dimensionality of facial features make real-time face recognition difficult to achieve. To solve this problem, local fusion features and a hierarchical incremental tree were applied to construct a fast face recognition algorithm. First, the supervised descent method (SDM) is used to locate facial feature points. The multi-block center-symmetric local binary pattern (MB-CSLBP) features in the neighborhood of each facial feature point are extracted and fused in series, constituting the proposed local fusion feature LFP-MB-CSLBP. The fused feature is then fed into a hierarchical incremental tree (HI-tree). Because the HI-tree uses hierarchical clustering to achieve incremental learning, the recognition model can be trained online. Finally, the recognition rate and running time of the proposed algorithm were tested on three face databases and in a real video-based face recognition application. Experimental results show that the proposed algorithm offers better real-time performance and accuracy than other current approaches.
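For concreteness, here is a sketch of the center-symmetric LBP code that the MB-CSLBP feature builds on; the multi-block fusion around the SDM landmarks is omitted, and the threshold is an assumption.

```python
# Sketch: center-symmetric LBP (CS-LBP) codes for one image patch.
import numpy as np

def cs_lbp(patch, t=0.01):
    """4-bit CS-LBP code for each interior pixel of a 2-D float patch."""
    p = patch.astype(np.float32)
    c = p[1:-1, 1:-1]
    pairs = [                                  # four centre-symmetric pairs
        (p[:-2, :-2], p[2:, 2:]),              # NW vs SE
        (p[:-2, 1:-1], p[2:, 1:-1]),           # N  vs S
        (p[:-2, 2:], p[2:, :-2]),              # NE vs SW
        (p[1:-1, 2:], p[1:-1, :-2]),           # E  vs W
    ]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (a, b) in enumerate(pairs):
        code |= ((a - b) > t).astype(np.uint8) << bit
    return code                                # values in [0, 15]

patch = np.random.default_rng(7).random((8, 8))
hist = np.bincount(cs_lbp(patch).ravel(), minlength=16)  # 16-bin descriptor
print(hist)
```

Comparing opposite neighbors instead of each neighbor with the center halves the code length (16 bins instead of 256), which is what keeps the fused feature compact enough for fast matching.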
Real-time Detection and Recognition of Traffic Light Based on Time-Space Model
LI Zong-xin, QIN Bo, WANG Meng-qian
Computer Science. 2018, 45 (6): 314-319.  doi:10.11896/j.issn.1002-137X.2018.06.055
Abstract PDF(5944KB) ( 1014 )   
References | Related Articles | Metrics
The detection and recognition of traffic lights are important for driverless cars and advanced driver assistance systems (ADAS). To satisfy the requirements of traffic light detection and recognition in complex urban environments, a real-time detection and recognition algorithm based on a time-space model (TSM) was proposed, built on the time-space continuity of the video frame sequence. The proposed algorithm consists of three parts. The first part is a fast color-based image segmentation and compression algorithm, which improves computational efficiency. Second, a time-space model over multi-frame image sequences is introduced to improve the accuracy of the detection stage. Third, traffic lights are recognized using a support vector machine (SVM) with histogram of oriented gradients (HOG) features. Experimental results show that the algorithm has strong robustness, high efficiency and high accuracy.
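A brief sketch of the third part, HOG features with a linear SVM, follows; the data are toy stand-ins for cropped candidate regions, and the color segmentation and time-space model stages are omitted.

```python
# Sketch: HOG + linear SVM for traffic-light state recognition (toy data).
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

rng = np.random.default_rng(8)
X_img = rng.random((40, 32, 32))            # stand-ins for candidate crops
y = rng.integers(0, 3, 40)                  # 0: red, 1: yellow, 2: green

X = np.array([hog(im, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2)) for im in X_img])
clf = SVC(kernel='linear').fit(X, y)        # train the recognition stage
print(clf.predict(X[:5]), y[:5])
```

In the full pipeline, only regions surviving the color segmentation and time-space consistency checks would reach this classifier, which is what keeps it real-time.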