Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 44 Issue 7, 13 November 2018
  
Information Security in New Quantum Technology Age
ZHANG Liang-liang, ZHANG Yi-wei, LIANG Jie, SUN Rui-yi and WANG Xin-an
Computer Science. 2017, 44 (7): 1-7, 15.  doi:10.11896/j.issn.1002-137X.2017.07.001
Quantum technologies will have a profound impact on cryptography and the information security industry in the coming age. General-purpose quantum computers, which can run quantum algorithms on thousands of qubits, pose a serious threat to the fundamental algorithms of information security: the famous RSA algorithm and other public-key ciphers will be broken, and the cryptographic strength of block ciphers will be halved. The implementation of quantum key distribution in quantum communication will alter the physical construction of traditional secure communication. These important application values are also driving factors for the development of quantum technologies. Combining the current hot news of quantum technologies, the effects on information security technology were reviewed from two perspectives, quantum computation and quantum communication. Meanwhile, the status of new developments in these technologies was briefly introduced, including the advances of general-purpose and special-purpose quantum computers, the progress of quantum key distribution made under laboratory conditions, and the state of development of the space-ground quantum communication network. Finally, the future form of information security technology was given with a summary: quantum technologies will deeply integrate and co-exist with various existing technologies.
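The impact described above follows from two standard results: Shor's algorithm breaks RSA and other public-key ciphers outright, while Grover's algorithm halves the effective strength of symmetric ciphers. A minimal illustrative sketch (the function name is ours, not from the paper):

```python
def post_quantum_strength(kind: str, bits: int) -> int:
    """Effective security, in bits, against a large general-purpose
    quantum computer, under the standard results cited above."""
    if kind in ("rsa", "ecc"):
        return 0            # Shor's algorithm: broken in polynomial time
    if kind == "symmetric":
        return bits // 2    # Grover's algorithm: quadratic search speedup
    raise ValueError(kind)

assert post_quantum_strength("rsa", 2048) == 0
assert post_quantum_strength("symmetric", 256) == 128   # AES-256 -> 128-bit
```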
Study on Testing Cloud Computing Elasticity
HU Ya-zhou, DENG Bo, LIN Wang-qun, PENG Fu-yang and WANG Dong-xia
Computer Science. 2017, 44 (7): 8-15.  doi:10.11896/j.issn.1002-137X.2017.07.002
Elasticity is a key feature of cloud computing: it allows resources to expand elastically, be configured quickly, and be added and removed flexibly, which makes full use of cloud resources and reduces costs for both cloud service providers and users. To evaluate whether elasticity meets SLA (Service Level Agreement) requirements and makes full use of cloud resources, the elasticity of cloud computing should be evaluated and tested. In this paper, we introduced the definition of elasticity, along with solutions and metrics for it. We then proposed a definition of cloud computing elasticity testing and surveyed related research. Next, we summarized its key technologies and frameworks, and discussed challenges and open issues in testing cloud computing elasticity. Finally, we put forward directions for further research on cloud computing elasticity testing.
Review on Security Audit Technology for Cloud Computing
WANG Wen-juan, DU Xue-hui, WANG Na and SHAN Di-bin
Computer Science. 2017, 44 (7): 16-20, 30.  doi:10.11896/j.issn.1002-137X.2017.07.003
Security concerns have become a huge impediment to the development of cloud computing. Due to specific characteristics such as data and service outsourcing, virtualization, multi-tenancy and cross-domain sharing, the cloud computing environment faces more complicated threats than a traditional IT environment, and security audit technology must meet higher demands. This paper first analyzed the main challenges that cloud security audit confronts and proposed a security audit technology framework for the cloud environment which provides an all-around examination from four dimensions: the user, business, data and infrastructure dimensions. Then, according to these dimensions, existing studies were reviewed from three aspects, namely log audit, storage audit and configuration audit, in order to provide a useful reference for research on security audit for cloud computing in our country.
Modeling and Analysis of CPS Unmanned Vehicle Systems Based on Extended Hybrid Petri Net
SONG Xiang-jun and ZHANG Guang-quan
Computer Science. 2017, 44 (7): 21-24.  doi:10.11896/j.issn.1002-137X.2017.07.004
A cyber-physical system (CPS) is a complex system that integrates computing, communication, perception, control and physical systems. Its running is a hybrid behavior in which a discrete computing process and a continuous physical process closely interact and are deeply integrated. Concerning this feature, hybrid Petri nets are used for modeling CPS, with time constraints added on this basis: a time delay is added to discrete places and a velocity function to continuous places. At the same time, the concepts of inhibitor arcs and test arcs are introduced to improve the expressive power of Petri nets, yielding a new model called the extended hybrid Petri net. Obstacle avoidance for an unmanned vehicle system was then modeled, and according to certain rules the models were converted into corresponding Simulink models. Through simulation in Matlab, the dynamic behaviors and attributes of the system were analyzed.
Method of Software Defined Network Path Abnormity Monitoring
LI Yang, CAI Zhi-ping and XIA Jing
Computer Science. 2017, 44 (7): 25-30.  doi:10.11896/j.issn.1002-137X.2017.07.005
In the SDN architecture, the controller has to master the operating state of the network data transmission path segments, which is quite significant for the controller to implement network security monitoring and traffic load balancing. Based on the characteristics of the SDN architecture, this paper proposed an approach which combines reactive measurement and passive measurement for monitoring network path segment abnormality. The method analyzes the operating state of the traversed network path from two aspects, the path's latency and its available bandwidth, and can detect and raise alarms for abnormal network path segments. Our experiments show that the method can not only measure the latency and available bandwidth of a path segment, but also detect abnormal path segments in a timely fashion and send out alarm signals. Under both constant and variable network cross traffic, the approach can monitor abnormal path segments and measure the path's performance parameters efficiently.
Efficient Frequent Patterns Mining Algorithm Based on MapReduce Model
ZHU Kun, HUANG Rui-zhang and ZHANG Na-na
Computer Science. 2017, 44 (7): 31-37.  doi:10.11896/j.issn.1002-137X.2017.07.006
With the rapid development of the Internet and its rapidly growing user base, many Internet service companies have to manage terabytes or more of data every day, and finding useful information in this big-data era has become an important problem. Data mining algorithms are widely used in many fields; finding frequent itemsets is one of the most common and fundamental data mining tasks, and Apriori is the most typical algorithm for finding frequent itemsets in a large transaction database. However, when the dataset is huge and a single host is used, memory is quickly exhausted and computation time increases dramatically, making the algorithm inefficient. Parallel and distributed computing based on the MapReduce framework offers a potential solution for processing such vast amounts of data. An improved algorithm, MMRA (Matrix MapReduce Algorithm), which converts blocked data into matrices to find all frequent k-itemsets, was proposed in this paper and compared with two existing algorithms, the one-phase algorithm and the k-phase algorithm, using Hadoop MapReduce as the experimental platform. Experimental results show that the proposed algorithm outperforms the other two.
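The counting core shared by the MapReduce baselines mentioned above can be sketched as a single map/reduce round over transaction blocks. This is an illustrative single-machine analogue, not the paper's MMRA matrix algorithm; names and data are ours:

```python
from collections import Counter
from itertools import combinations

def map_phase(transactions, k):
    """Map step: each transaction emits (candidate k-itemset, 1) pairs."""
    for t in transactions:
        for itemset in combinations(sorted(t), k):
            yield itemset, 1

def reduce_phase(pairs, min_support):
    """Reduce step: sum the counts per itemset and keep the frequent ones."""
    counts = Counter()
    for itemset, c in pairs:
        counts[itemset] += c
    return {i: n for i, n in counts.items() if n >= min_support}

transactions = [{"a", "b", "c"}, {"a", "c"}, {"a", "d"}, {"b", "c"}]
frequent_2 = reduce_phase(map_phase(transactions, 2), min_support=2)
# ("a","c") and ("b","c") each occur in two transactions, so both survive
```

On Hadoop the two functions would run as Mapper and Reducer classes over HDFS blocks; the logic is the same.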
Modeling of CPS Based on Aspect-oriented Spatial-Temporal Petri Net
SONG Zhen-hua and ZHANG Guang-quan
Computer Science. 2017, 44 (7): 38-41, 73.  doi:10.11896/j.issn.1002-137X.2017.07.007
As a tight integration of computing and physical processes, a cyber-physical system (CPS) merges the cyber and physical worlds. Addressing the spatial-temporal and non-functional properties of CPS, this paper presented a modeling approach based on an aspect-oriented spatial-temporal Petri net. It separates the core concern and crosscutting concerns of the system, and allows crosscutting concerns to be analyzed as aspect modules while ensuring the consistency of time and space. Through weaving rules, these aspect modules are woven back into the system. This approach can improve the reliability and maintainability of the formal analysis of different non-functional properties at the system design phase. Finally, an example shows that the modeling approach is feasible.
Classification Anonymity Algorithm Based on Weight Attributes Entropy
LIAO Jun, JIANG Chao-hui, GUO Chun and PING Yuan
Computer Science. 2017, 44 (7): 42-46.  doi:10.11896/j.issn.1002-137X.2017.07.008
In order to protect data privacy from leakage while keeping data highly available, a classification anonymization method based on weighted attribute entropy (WECA) was proposed. The method builds on the application background of classification mining. Using the concept of information entropy, it calculates the classification importance of each quasi-identifier attribute with respect to the sensitive attribute in the data set, and selects the quasi-identifier attribute with the highest weighted-entropy ratio to divide the classification tree favorably. The method also constructs an anonymization information-loss measure for classification, which preserves classification utility on the premise of protecting private data. Finally, experimental results on standard data sets show that the algorithm incurs less anonymization loss and achieves higher classification accuracy, improving data availability.
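The entropy-based attribute weighting that drives the split selection can be illustrated with plain information gain; the records and helper names below are hypothetical, and the paper's exact weighted-entropy ratio may differ from this sketch:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(Y) of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(records, attr, sensitive):
    """Entropy reduction of the sensitive attribute when split on attr:
    a simple proxy for an attribute's classification importance."""
    labels = [r[sensitive] for r in records]
    cond = 0.0
    for value in {r[attr] for r in records}:
        subset = [r[sensitive] for r in records if r[attr] == value]
        cond += len(subset) / len(records) * entropy(subset)
    return entropy(labels) - cond

records = [
    {"zip": "100", "age": "<30", "disease": "flu"},
    {"zip": "100", "age": "<30", "disease": "flu"},
    {"zip": "200", "age": ">30", "disease": "cancer"},
    {"zip": "200", "age": "<30", "disease": "cancer"},
]
# 'zip' perfectly predicts 'disease' here, so its gain equals H(disease) = 1 bit
```

An anonymizer would prefer to generalize the attribute with the lowest importance first, preserving the splits that matter for classification.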
Reconfiguring Multiprocessor Arrays for Synchronous Communication
WU Ya-lan, WU Ji-gang, JIANG Wen-chao and LIU Zhu-song
Computer Science. 2017, 44 (7): 47-56.  doi:10.11896/j.issn.1002-137X.2017.07.009
Reconfiguring VLSI arrays to obtain a logical array with a given size and synchronous communication is one of the key problems in reconfigurable topologies for high-performance computing. This paper presented three algorithms based on three different strategies for excluding logical columns. The first algorithm, SCA_01, makes the logical columns uniformly distributed in the host array, based on divide-and-conquer exclusion of logical columns. The second, SCA_02, minimizes the number of long interconnects of the logical array by preferentially excluding the longest logical column. The third, SCA_03, keeps the logical columns uniformly distributed using a hybrid strategy that combines excluding the longest logical column with divide-and-conquer. In addition, this paper contributed an algorithm to calculate a lower bound on the communication delay of a given logical array. Simulation results show that the communication delay of the logical arrays reconstructed by the three algorithms is close to the proposed lower bound when the fault rate is less than 1% and the exclusion rate of logical columns is over 20%. SCA_01 is superior to SCA_02 and SCA_03 in most cases, while SCA_02 and SCA_03 have nearly the same performance; SCA_02 is better on smaller arrays and SCA_03 on larger arrays when the fault rate and exclusion rate are relatively small. The communication delay generated by SCA_01 is less than that of SCA_02 and SCA_03 by 25% on 32×32 host arrays. Moreover, SCA_01 is faster than the other two algorithms, saving 19.4% of running time. It is concluded that SCA_01 is a relatively desirable algorithm for generating logical arrays with minimum communication delay for high-performance computing.
Algebraic Properties of Quantitative Context-free Languages
FU Wen-jing and HAN Zhao-wei
Computer Science. 2017, 44 (7): 57-60, 88.  doi:10.11896/j.issn.1002-137X.2017.07.010
By introducing the concepts of quantitative pushdown automata and quantitative context-free grammars, this paper investigated the equivalence between the two different ways in which a quantitative language can be accepted by quantitative pushdown automata. It is shown that, over a commutative double unital valuation monoid, quantitative pushdown automata accept exactly the quantitative context-free languages generated by quantitative context-free grammars.
Cloud Fault Tolerance Services Adaptation Method Based on Requirements and Resource Constraints
YANG Na and LIU Jing
Computer Science. 2017, 44 (7): 61-67, 97.  doi:10.11896/j.issn.1002-137X.2017.07.011
In the cloud computing environment, faults have become normal behavior, and the lack of reliability safeguards is not only a main obstacle to the promotion of cloud applications but also makes fault tolerance services in cloud computing a problem that needs to be solved urgently. Existing studies of cloud fault tolerance services suffer from two shortcomings: the definition of user fault-tolerance requirements does not directly reflect the reliability that users care about, and the resources of cloud fault-tolerance service providers are used inflexibly. To address these, this paper proposed an adaptation method for cloud fault tolerance services based on user requirements and resource constraints. It first defined users' fault-tolerance requirements from the users' perspective, taking a component as the unit and reliability as the basis. Then the optimal adaptation of fault tolerance services was studied from the perspective of cloud fault-tolerance service providers, for both insufficient and sufficient provider resources, and the fault tolerance services generated by the optimal adaptation were solved using optimization theory. Experimental results show that the fault tolerance services generated by our adaptation method better satisfy the requirements of both users and cloud fault-tolerance service providers.
Single-link Failure Protection Algorithm Based on Hop-by-Hop Routing
GENG Hai-jun, SHI Xin-gang, WANG Zhi-liang, YIN Xia and YIN Shao-ping
Computer Science. 2017, 44 (7): 68-73.  doi:10.11896/j.issn.1002-137X.2017.07.012
When a link fails, the deployed intra-domain routing protocol needs to re-converge, and during re-convergence, packets traversing the failed link are dropped. Therefore, the IETF proposed a framework named IPFRR (IP Fast Re-Route), which can effectively reduce packet loss when a single-link failure occurs. However, IPFRR cannot provide complete protection against all single-link failure scenarios. Based on this framework, many IP-tunnel-based schemes have been proposed to solve the problem, but these methods introduce extra overhead and need the assistance of an auxiliary mechanism. In this work, we aimed for a hop-by-hop routing algorithm that provides complete protection against all such single-link failure scenarios.
Opportunistic Forwarding Mechanism Based on Node Movement Tendencies Detecting
LIU Lin-feng, YAN Yu-dao and WU Guo-xin
Computer Science. 2017, 44 (7): 74-78.  doi:10.11896/j.issn.1002-137X.2017.07.013
Nodes in a social network can be classified into two types: strongly mobile and weakly mobile. The MTBR (Mobile-Tendency Based Routing) algorithm was proposed. MTBR introduces the concept of "movement tendency", which associates human movements with behavioral habits. The algorithm detects the movement tendencies of the strongly mobile nodes and takes advantage of them to forward messages. Simulation results indicate that the movement tendency becomes more apparent with stronger mobility. Moreover, compared with other algorithms, MTBR can effectively forward messages to remote areas near the destination, producing fewer message copies and achieving a higher delivery ratio.
Adaptive Correction Model Location Algorithm Based on CSI
TIAN Li-wen and FENG Xiu-fang
Computer Science. 2017, 44 (7): 79-83, 115.  doi:10.11896/j.issn.1002-137X.2017.07.014
Currently, WLAN-based indoor fingerprinting positioning systems have attracted attention owing to their open access and high accuracy. To address the shortcoming that, in the positioning stage of traditional indoor fingerprinting systems, feature matching does not account for differences between the current environment and the reference environment, a novel CSI-based adaptive correction model location algorithm was proposed. The algorithm introduces an indicator called PEM (Percentage of nonzero Elements) to quantify subcarrier fluctuation as the number of people in the indoor environment increases, thereby measuring changes in the current indoor environment. The algorithm also designs a new positioning correction matching model to compensate for fingerprint feature attenuation caused by multipath. Experiments demonstrate that the new positioning scheme improves accuracy by 30% and 15% over the previous fingerprint positioning systems FIFS and CSI-MIMO, respectively.
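The PEM indicator can be read as the fraction of subcarriers whose CSI amplitude deviates from a calibration profile. A rough sketch under that reading; the threshold, data and function name are illustrative assumptions, not the paper's exact definition:

```python
def pem(csi_now, csi_ref, threshold=0.5):
    """Fraction of subcarriers whose amplitude deviation from the
    calibration profile exceeds a threshold: a proxy for how much the
    indoor environment (e.g. number of people present) has changed."""
    diffs = [abs(a - b) for a, b in zip(csi_now, csi_ref)]
    nonzero = sum(1 for d in diffs if d > threshold)
    return nonzero / len(diffs)

reference = [10.0, 9.8, 10.1, 9.9]    # empty-room calibration amplitudes
current   = [10.0, 8.0, 10.1, 7.5]    # two subcarriers attenuated by people
# pem(current, reference) == 0.5: half the subcarriers changed noticeably
```

A higher PEM would then trigger a stronger correction in the matching model.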
Impact of Path Loss Exponent on Interference and Carrier Sensing Performance Metrics of 802.11 WLANs
WANG Yue
Computer Science. 2017, 44 (7): 84-88.  doi:10.11896/j.issn.1002-137X.2017.07.015
We analyzed how the path loss exponent affects the interference and carrier sensing performance of 802.11 WLANs and revealed the advantages of a larger path loss exponent. This work is based on the SNR-threshold reception model and fixed carrier sensing thresholds, and the main bit rates of 802.11a/b/g/n are considered. First, increasing the path loss exponent increases the SIR and decreases the interference range. Second, the optimal network capacity increases with the path loss exponent. Third, analysis and simulation reveal that a larger path loss exponent results in higher carrier sensing accuracy. The performance of WLANs therefore improves in cities, where dense buildings and population increase the path loss exponent.
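The first observation, that SIR grows with the path loss exponent, follows directly from the log-distance path loss model, in which received power decays as d**(-alpha). A minimal sketch (distances and exponent values are illustrative):

```python
import math

def sir_db(d_signal, d_interferer, alpha):
    """SIR under the log-distance path loss model, where received power
    decays as d**(-alpha): SIR = (d_interferer / d_signal)**alpha, in dB."""
    return 10.0 * alpha * math.log10(d_interferer / d_signal)

# An interferer twice as far away as the sender: SIR grows with alpha,
# so the same reception threshold tolerates closer interferers.
print(sir_db(10, 20, 2))   # ~6.02 dB, free-space-like propagation
print(sir_db(10, 20, 4))   # ~12.04 dB, dense-urban propagation
```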
LDPC Coded Primary Transmitter Authentication Schemes in Cognitive Radio Networks
ZHOU Xue-qian, WU Xiao-fu and YU Xun-jian
Computer Science. 2017, 44 (7): 89-93.  doi:10.11896/j.issn.1002-137X.2017.07.016
In this paper, the capacity of the embedded primary transmitter authentication channel in cognitive radio is investigated. By employing the log-likelihood ratios (LLRs) of a tag bit and their approximation, we showed that the equivalent authentication channel observed by the secondary receiver can be viewed as a binary-input additive white Gaussian noise (BI-AWGN) channel. Then, we investigated the practical performance of the embedded authentication channel with LDPC coding. With the approximate form of the LLRs, we found that performance is very close to that of belief-propagation decoding over the ideal BI-AWGN channel. Hence, we concluded that a good error-control coding scheme for a BI-AWGN channel is also good for the embedded primary transmitter authentication channel, and that the proposed approximate form of the LLRs can be exploited to lower the computational complexity in practice.
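For a tag bit BPSK-modulated over a BI-AWGN channel, the exact LLR has the standard closed form LLR = 2y/sigma^2, which is what a belief-propagation LDPC decoder consumes. A minimal sketch of this textbook result (the paper's approximate LLR form may differ):

```python
def llr_bpsk_awgn(y, sigma2):
    """Exact LLR of a tag bit mapped to BPSK (+1 for bit 0, -1 for bit 1)
    received over AWGN with noise variance sigma2:
      LLR = ln p(y|+1)/p(y|-1)
          = [(y+1)**2 - (y-1)**2] / (2*sigma2)
          = 2*y / sigma2
    """
    return 2.0 * y / sigma2

llr = llr_bpsk_awgn(1.0, 0.5)   # a confident +1 observation: strongly positive LLR
```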
De-jitter Dynamic Slot Allocation Algorithm Based on Cluster Analysis
YU Ming-qiu, ZHOU Chuang-ming and ZHAO Min
Computer Science. 2017, 44 (7): 94-97.  doi:10.11896/j.issn.1002-137X.2017.07.017
Real-time information transmission is a prominent feature of tactical data links: data link communication is required to have good delay characteristics as well as reliable delay-jitter performance. Random message flows are random and bursty, so timeslots should be allocated dynamically according to demand, which requires a highly real-time algorithm. Therefore, a de-jitter dynamic slot allocation algorithm based on cluster analysis was proposed, in which a clustering method divides idle timeslots into slot clusters, simplifying the analysis of slot allocation. Simulation results show that the timeslots allocated by the algorithm meet the delay-jitter requirement, and that the algorithm has low complexity and can handle timeslot assignment for periodic random message flows efficiently.
Multi-user Cooperative Power Control Allocation Scheme for D2D Cellular System Based on Noise Model
YANG Da-yu, LI Jing-zhao and REN Ping
Computer Science. 2017, 44 (7): 98-103.  doi:10.11896/j.issn.1002-137X.2017.07.018
In order to alleviate interference between D2D users and improve the throughput of the whole cellular system during communication, the problem of multi-user transmit power control under an interference noise model constraint was analyzed, according to the number of D2D users, channel conditions and system throughput requirements. The interference model is constrained by introducing the Boltzmann constant and other parameters, and a utility based on the system's average transmit power and maximum sum rate is finally obtained. Furthermore, a power control allocation scheme for multi-user cooperative games based on a reverse iteration combining algorithm was put forward. Simulation results show that the algorithm satisfies the power allocation requirements when multiple users share the cellular network's spectrum resources: after the end-users' transmit power reaches the Nash equilibrium through several reverse iterations, system throughput is improved significantly and the spectrum resources achieve well-balanced gains.
Quantum Multi-proxy Blind Signature Protocol
YAO Hong-di and ZOU Hai
Computer Science. 2017, 44 (7): 104-106, 127.  doi:10.11896/j.issn.1002-137X.2017.07.019
In most cases, the original signer only needs to entrust one proxy to sign a file, but in order to decentralize the proxy signer's rights, more proxies may be invited to produce a proxy signature on the file. For this situation, a quantum multi-proxy blind signature protocol was proposed. The protocol, which exploits the link between Bell states and Bell measurement, allows the original signer to entrust multiple proxies to sign the file, and the number of signers can be changed according to actual needs, which greatly improves the flexibility of the scheme. The security analysis shows that the protocol can resist both internal and external attacks, and that it is a safe and achievable protocol.
Method to Construct Secure S-boxes Based on Multimap
CAO Xiao-mei, CHEN Hai-shan and WANG Shao-hui
Computer Science. 2017, 44 (7): 107-110, 119.  doi:10.11896/j.issn.1002-137X.2017.07.020
The problem of constructing S-boxes was transformed into a problem of searching for mappings that satisfy certain conditions. Using the chaotic characteristics of the Tent map, we proposed an initial mapping algorithm to obtain initial mappings which can be used as initial S-boxes. To improve the security of the S-boxes, a multimap algorithm was proposed which uses multiple initial mappings to perform nonlinear operations on the S-boxes. According to security criteria, the proposed algorithm can obtain stronger S-boxes. Finally, by setting a security index, the number of strong S-boxes generated by the algorithm was counted. The analysis shows that the number of strong S-boxes increases with the number of initial mappings used in the multimap algorithm, and that the time cost is proportional to that number.
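One simple way to derive an initial mapping (an initial 8-bit S-box) from the Tent map is to rank a chaotic trajectory. The seed, burn-in length and ranking scheme below are illustrative assumptions, not the paper's exact initial mapping algorithm:

```python
def tent(x, mu=1.9999):
    """Tent map on (0, 1); chaotic for mu close to 2."""
    return mu * x if x < 0.5 else mu * (1.0 - x)

def initial_sbox(seed=0.37, burn_in=1000, size=256):
    """Build an initial 8-bit S-box (a permutation of 0..255) by
    iterating the tent map and ranking the resulting trajectory."""
    x = seed
    for _ in range(burn_in):          # discard the transient
        x = tent(x)
    traj = []
    for _ in range(size):
        x = tent(x)
        traj.append(x)
    # sorting the samples and reading off their original positions
    # yields a permutation of 0..size-1
    return [pos for _, pos in sorted(zip(traj, range(size)))]

sbox = initial_sbox()
# sorted(sbox) == list(range(256)): the mapping is a bijection on 0..255
```

Such initial S-boxes would then be combined and filtered against nonlinearity and differential-uniformity criteria to keep only the strong ones.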
Ultra-lightweight Mutual Authentication Protocol for Mobile Radio Frequency Identification
HUANG Qi and LING Jie
Computer Science. 2017, 44 (7): 111-115.  doi:10.11896/j.issn.1002-137X.2017.07.021
Aiming at the security problems caused by wireless transmission between the reader and the back-end server in mobile radio frequency identification, an ultra-lightweight mutual authentication protocol for mobile RFID is proposed. In the protocol, the tag pseudonym and tag key are dynamically updated by a cascade operation, which effectively hides the true identity of the tag, and a cyclic check function is used for identity authentication among the tag, reader and back-end server, achieving mutual authentication across the system. Security analysis shows that the proposed protocol can resist many kinds of malicious attacks, such as tracking, impersonation, replay and man-in-the-middle attacks. Compared with several existing protocols, it reduces the tag's computational and communication costs, offering high security at low cost.
Differential Fault Analysis on DBlock Cipher Algorithm
LI Lang, ZOU Yi, LI Zhu-hua and LIU Bo-tao
Computer Science. 2017, 44 (7): 116-119.  doi:10.11896/j.issn.1002-137X.2017.07.022
The DBlock algorithm is a new block cipher proposed in 2015. Its block lengths and corresponding key lengths are 128, 192 and 256 bits, each with 20 rounds of iteration. Based on a byte-oriented fault model, differential fault analysis was applied to DBlock by injecting a random fault into the key expansion in the 17th round. Experimental results show that the analysis can recover the full original 128-bit key by introducing only four faulty ciphertexts.
Self-adaptive Approach for Container-based Cloud Resources Management
SHU An, PENG Xin and ZHAO Wen-yun
Computer Science. 2017, 44 (7): 120-127.  doi:10.11896/j.issn.1002-137X.2017.07.023
With the support of cloud computing, more and more applications choose the cloud as their deployment platform. In order to respond to changing workloads, application scenarios and user targets, service providers need to dynamically adjust existing resources in a scalable way. However, virtual-machine-based cloud resource management has many drawbacks: virtual machines are heavyweight and unsuitable for fine-grained, flexible resource allocation, and in a hybrid cloud they cannot migrate quickly. Container technology can, to some extent, make up for these shortcomings, but traditional virtual-machine-based resource management methods do not suit containers well. To solve these problems, this approach designed a resource infrastructure solution and scheduling scheme better suited to containers. It models cloud resources precisely via nonlinear functions and tunes parameters using genetic algorithms to optimize adjustment performance. It also computes a rational allocation of container deployment locations to improve physical resource utilization, and combines many low-level container characteristics to adjust the load-balancing procedure. We conducted experiments to verify the effectiveness of the method.
Formal Method for Describing Software Evolution Ability Feature
HE Yun, WANG Wei and LI Tong
Computer Science. 2017, 44 (7): 128-136.  doi:10.11896/j.issn.1002-137X.2017.07.024
Liveness and safety are the most important bases for judging a software system's evolution ability. Existing modeling methods use classical logic to describe the liveness and safety of systems. However, the complexity of the environment and of stakeholders leads to conflicting inputs when analyzing software evolution ability, and because of the law of non-contradiction, classical logic cannot model software evolution ability effectively. Aiming at this problem, a formal method for describing software evolution ability was proposed. The method allows contradictions and can be used for modeling and analyzing systems with contradictions, such as software evolution ability. It uses multi-valued temporal logic to describe software system evolution requirements and proposes a software abstraction model (SAM) to model the software system; software evolution ability can then be described by analyzing SAM's liveness and safety.
Applications of Fibrations Theory to Uncertainty Semantic Computation for Indexed Inductive Data Types
MIAO De-cheng, XI Jian-qing, LIU Xin-sheng and SU Jin-dian
Computer Science. 2017, 44 (7): 137-140.  doi:10.11896/j.issn.1002-137X.2017.07.025
This paper discussed the uncertainty semantic computation of indexed inductive data types using Fibrations theory. First, we demonstrated the construction of the indexed category and presented the concept of an indexed Fibration together with its truth and comprehension functors. We then proposed a truth-preserving lifting of an endofunctor on the indexed category, presented the definition of a partial F-algebra, abstractly described the uncertainty semantic computation of indexed inductive data types using tools including the fold function, and briefly illustrated the application by example. Finally, we stated the advantages of Fibrations theory by comparison with related work.
Method of Software Architecture Refinement Based on Algebraic Semantics
LIN Lei-lei, ZHOU Hua, DAI Fei, HE Zhen-li, SHEN Yong and KANG Hong-wei
Computer Science. 2017, 44 (7): 141-146.  doi:10.11896/j.issn.1002-137X.2017.07.026
Software architecture is a bridge connecting requirements and implementations. At present, architectures are modeled either formally or informally. To strike a balance between convenience of operation and accuracy, we applied Petri nets to model the architecture of large and complex systems, especially distributed systems. First, this paper proposed a top-down approach for refining transitions by subnets in Petri nets; this method establishes a hierarchical structure model of the complex system and effectively mitigates the state-space explosion problem. Moreover, we used process algebra to define the algebraic semantics of the behavior of Petri nets, which guarantees behavioral consistency between the subnets and the specification, and we gave an algorithm for transforming a process term into a subnet. Finally, a case study verified the correctness of the above ideas using open-source tools.
Web Resource Recommendation Based on Analysis of Developer’s Behavior
YANG Jun-wen, WANG Hai, PENG Xin and ZHAO Wen-yun
Computer Science. 2017, 44 (7): 147-150.  doi:10.11896/j.issn.1002-137X.2017.07.027
Modern integrated development environments (IDEs) provide developers with a variety of tools, including error warnings, code completion, code analysis, and version control management, to support software development and improve developer efficiency. However, such tools are insufficient: much more information, such as code samples, configuration manifests, and error handling, is needed during development, and frequently switching between the Web browser and the IDE costs time and effort. A Web information resource recommendation method based on the analysis of developer behavior was proposed. The method extracts structured information, including code samples, from developers' browsing history and classifies it through text clustering. At the same time, the developer's behavior in the IDE is recorded. The relationship between Web resources and developer behavior is then established so that similar information can be recommended when the same situation recurs. Finally, an experiment was conducted to demonstrate that our method can effectively save development time.
Study of Complex Event Hierarchies Realization Based on Hierarchical Automata Model
JIN Da-wei, SHI Si, YI Cai and YANG Bing
Computer Science. 2017, 44 (7): 151-160.  doi:10.11896/j.issn.1002-137X.2017.07.028
Complex event processing technology extracts event sequences that satisfy specific patterns (called complex events) from continuous event streams. It processes large amounts of data intelligently in real time and has attracted attention from both academia and industry in recent years. However, state-of-the-art research focuses on single-layer complex event detection. In fact, because business systems have different hierarchical information needs, events should be processed according to their hierarchies, and single-layer complex event detection cannot meet the information needs of different managerial levels. To deal with this problem, a hierarchical complex event detection model, hierarchical automata, was defined in this work based on the concept of event hierarchies and the traditional nondeterministic finite automaton (NFA) model. Then we designed a more intuitive and effective EH-Tree structure and proposed the hierarchical complex event detection (HCED) algorithm together with its cost model. Finally, taking throughput and memory consumption as indexes, extensive experiments were performed to compare the spatial and temporal performance of the EH-Tree-based HCED algorithm and the NFA-based SASE algorithm. The results show that HCED performs hierarchical complex event detection effectively and efficiently, filling a gap in hierarchical complex event detection and providing a basis for further research on complex event processing.
Relational Database Energy Prediction Model Based on MBRC
YANG De-xian, SUN Hua, YU Jiong and GUO Bing-lei
Computer Science. 2017, 44 (7): 161-166.  doi:10.11896/j.issn.1002-137X.2017.07.029
Power modeling and power profiling of database workloads are the foundation of building an energy-aware DBMS. To address the high energy consumption of database workloads, this paper proposed a method of building an energy prediction model for database workloads by mapping the resource consumption of the main components (CPU and disk) to their power and time costs. Firstly, the power cost of a workload is estimated according to its resource consumption pattern. Secondly, the energy cost of the workload is estimated by combining the power cost with the time cost derived from the resource consumption pattern. Finally, the prediction accuracy of the model is studied in depth under different MBRC settings. Experimental results show that the proposed energy prediction modeling method can accurately estimate workload energy cost, and that the study of prediction accuracy improves the reliability and accuracy of the energy modeling method under different system settings.
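The mapping from resource consumption to power, and from power and time to energy, can be illustrated with a minimal linear sketch. The linear form and all coefficients below are illustrative assumptions, not the paper's calibrated model:

```python
# Hypothetical linear power model: power draw grows linearly with CPU and
# disk utilisation on top of a fixed idle baseline. Coefficients are
# placeholders, not measured values.
IDLE_POWER = 50.0   # baseline system power in watts (assumed)
CPU_COEF = 30.0     # extra watts at 100% CPU utilisation (assumed)
DISK_COEF = 8.0     # extra watts at 100% disk utilisation (assumed)

def power_cost(cpu_util, disk_util):
    """Estimate power draw (W) from component utilisations in [0, 1]."""
    return IDLE_POWER + CPU_COEF * cpu_util + DISK_COEF * disk_util

def energy_cost(cpu_util, disk_util, seconds):
    """Energy (J) = estimated power (W) x execution time (s)."""
    return power_cost(cpu_util, disk_util) * seconds

# A query running for 2 s at 80% CPU and 30% disk utilisation:
e = energy_cost(0.8, 0.3, 2.0)
```

The paper's contribution lies in deriving the utilisations and time cost from the workload's resource consumption pattern; this sketch only shows the final power-to-energy mapping.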
Partially-lazy Learning Classification Algorithm Based on Representation of Data Stream Model
JIANG Jing-jing, WANG Zhi-hai and YUAN Ji-dong
Computer Science. 2017, 44 (7): 167-174, 202.  doi:10.11896/j.issn.1002-137X.2017.07.030
Utilizing patterns extracted from large-scale data to build classification models is an important research problem, and exploiting patterns to estimate Bayesian probabilities is a feasible approach. However, most existing pattern-based Bayesian classifiers target static data sets and cannot adapt to the dynamic data stream environment. A Bayesian classification model based on pattern discovery over data streams, named PBDS (Pattern-based Bayesian classifier for Data Stream), was proposed. PBDS constructs a local model for each unseen case based on continuously updated frequent itemsets, using a partially-lazy learning method. To accelerate data processing, a simpler data structure, i.e. a hybrid tree structure, was proposed, together with a pattern extraction mechanism that reduces the generation of candidate itemsets. A smoothing technique was used to handle incomplete itemset extraction in the data stream. Extensive experiments on real-world and synthetic data streams show that PBDS is more accurate than state-of-the-art data stream classifiers.
Emerging Sequences Pattern Mining Based on Location Information
CHEN Xiang-tao and XIAO Bi-wen
Computer Science. 2017, 44 (7): 175-179.  doi:10.11896/j.issn.1002-137X.2017.07.031
Owing to their strong discriminating ability, emerging patterns have been widely used to build effective classifiers. As most existing algorithms focus on the support or the occurrences of sequence patterns, the location of a pattern within a sequence is usually ignored, and some important information may be missed. In this paper, we put forward an emerging sequence pattern with local location information and a mining algorithm for it. Based on the framework of occurrences, combined with the suffix tree, and omitting the generation-and-selection procedure for candidate patterns, this algorithm can quickly and efficiently mine emerging sequence patterns with location information. The experimental results show that the classifier built from emerging sequence patterns with location information achieves better average classification accuracy than the traditional emerging sequence pattern mining algorithm.
Research on Recommendation Method of Restaurant Based on LDA Model
ZHANG Xiao-yang, QIN Gui-he, ZOU Mi, SUN Ming-hui and GAO Qing-yang
Computer Science. 2017, 44 (7): 180-184, 214.  doi:10.11896/j.issn.1002-137X.2017.07.032
With the rapid development of the Internet, the amount of restaurant evaluation information has increased dramatically. Effective analysis of this information can not only help consumers choose a suitable restaurant, but also help businesses improve their service. For this purpose, a restaurant recommendation method based on the LDA (Latent Dirichlet Allocation) model was proposed. First of all, it classifies the evaluation information according to emotional tendency and obtains the positive evaluations and the praise rate. Secondly, it applies the LDA model to text clustering to generate restaurant tags. Finally, it calculates the similarity between the user's needs and the restaurant tags and, according to the similarity and the praise rate, recommends suitable restaurants to customers. We collected real restaurant comments from the Internet and carried out experiments. The results show that the restaurant tags produced by this method are effective and that the method can accurately recommend restaurants to users.
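The final ranking step can be sketched as cosine similarity between a user's topic-preference vector and each restaurant's tag (topic) vector, blended with the praise rate. The blending weight `alpha`, the vectors, and the restaurant names are hypothetical, not values from the paper:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length topic-weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(user_topics, restaurants, alpha=0.7):
    """Rank restaurants by a weighted mix of tag similarity and praise rate.

    `restaurants` maps name -> (topic_vector, praise_rate); `alpha` is an
    assumed blending weight, not one taken from the paper.
    """
    scored = {
        name: alpha * cosine(user_topics, topics) + (1 - alpha) * praise
        for name, (topics, praise) in restaurants.items()
    }
    return sorted(scored, key=scored.get, reverse=True)

ranking = recommend(
    [0.9, 0.1, 0.0],                 # user strongly prefers topic 0
    {"A": ([0.8, 0.2, 0.0], 0.90),   # similar topics, high praise rate
     "B": ([0.0, 0.1, 0.9], 0.95)},  # different topics, higher praise rate
)
```

With these numbers, topic similarity dominates and restaurant "A" ranks first despite "B" having a slightly higher praise rate.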
Adaptive Multiclass Boosting Classification Algorithm
WANG Shi-xun, PAN Peng, CHEN Deng and LU Yan-sheng
Computer Science. 2017, 44 (7): 185-190.  doi:10.11896/j.issn.1002-137X.2017.07.033
Many practical problems involve classification technology, which can effectively reduce the comprehension gap between users and computers. In traditional multiclass Boosting methods, the multiclass loss function does not necessarily have guess-aversion, and the combinations of multiclass weak learners are confined to linear weighted sums. To obtain a final classifier with high accuracy, the multiclass loss function needs to have three main properties, namely margin maximization, Bayes consistency and guess-aversion. Moreover, the weakness of weak learners may limit the performance of a linear classifier, while their nonlinear combinations can provide strong discrimination. Therefore, we designed an adaptive multiclass Boosting classifier, namely the SOHPBoost algorithm. At every iteration, our algorithm adds the best weak learner to the current ensemble according to vector addition or the Hadamard product. This adaptive process can create sums of Hadamard products of weak learners and mine the hidden structure of the dataset. The experiments show that the SOHPBoost algorithm provides better multiclass classification performance.
Micro-blog Retweet Behavior Prediction Algorithm Based on Anomaly Detection and Random Forest
ZHOU Xian-ting, HUANG Wen-ming and DENG Zhen-rong
Computer Science. 2017, 44 (7): 191-196, 220.  doi:10.11896/j.issn.1002-137X.2017.07.034
To address the insufficient accuracy of micro-blog retweet behavior prediction and the often arbitrary selection of features, a new method using anomaly detection and random forest algorithms to predict micro-blog retweet behavior was proposed. Firstly, the basic features of the user, the basic characteristics of the blog and the topic features of the blog content are extracted, and user activity and blog influence are calculated based on relative entropy. Secondly, the best feature set is selected by combining filter and wrapper feature selection methods. Finally, anomaly detection and random forest algorithms are fused to predict micro-blog retweet behavior based on the selected features. The parameters of the random forest are chosen by analyzing the error estimated from out-of-bag data. Compared with the Logistic Regression, Decision Tree, Naive Bayes and Random Forest algorithms used in micro-blog retweet behavior analysis, the prediction accuracy of the proposed method on real data is higher than that of the best of these, the random forest, reaching 90.5%. Meanwhile, the validity of the feature selection method is verified.
Community Detection Based on User Interaction and Link Analysis in Social Networks
LI Peng, LI Ying-le, WANG Kai, HE Zan-yuan, LI Xing and CHANG Zhen-chao
Computer Science. 2017, 44 (7): 197-202.  doi:10.11896/j.issn.1002-137X.2017.07.035
With the rapid development of social media networks, users can participate in social networking more conveniently, which also brings a large number of complex interaction and connection patterns. How to effectively analyze the interaction information and connection information between network nodes to achieve efficient community detection is a key problem in current network analysis. Accordingly, this paper put forward a social network community detection method (CDUILS) based on interaction behavior and link analysis. In this method, the interaction information between nodes guides the collaborative learning of communities: non-negative matrix factorization is used to analyze the two types of information sources through iterative updates, and the community results are obtained from both kinds of information. Experiments on real data sets show that the proposed method can effectively utilize interaction behavior to guide community division and yields better-quality community partitions.
Improved Water Wave Optimization Algorithm with Adaptive Control Parameters
LIU Ao, DENG Xu-dong and LI Wei-gang
Computer Science. 2017, 44 (7): 203-209, 250.  doi:10.11896/j.issn.1002-137X.2017.07.036
The water wave optimization algorithm (WWO) is a novel population-based optimization algorithm. Despite its advantages of few control parameters, simple operations and easy implementation, it still suffers from slow convergence and low search precision. To mitigate these risks, a concise yet rigorous theoretical analysis was first carried out to derive the convergence condition that the control parameters should satisfy. Then, an improved WWO was proposed that meets this condition by incorporating an adaptive parameter-tuning strategy, which is expected to further enhance the balance between global exploration and local exploitation. Finally, simulation experiments and statistical comparisons between four algorithms (ApWWO, WWO, FA, MVO) on 10 benchmark functions were conducted. The results show that ApWWO performs significantly better than WWO and FA in terms of search accuracy, speed and robustness, and outperforms MVO on five test functions. Compared with PSO and GA, ApWWO can yield good performance, which is affected by the problem dimension and population size, and ApWWO also performs well on the permutation flow shop scheduling problem.
K-Nearest Neighbor Algorithm Based on Hash Technology and MapReduce
ZHAI Jun-hai, ZHANG Ming-yang, WANG Ting-ting and HAO Pu
Computer Science. 2017, 44 (7): 210-214.  doi:10.11896/j.issn.1002-137X.2017.07.037
K-nearest neighbor (K-NN) is a famous classification algorithm. Because the idea of K-NN is simple and easy to implement, it has been widely applied in many fields, such as face recognition, gene classification and decision making. However, in the big data environment, the efficiency of K-NN is very low, and it may even be infeasible. To deal with this problem, this paper proposed an improved K-nearest neighbor algorithm based on hash technology and MapReduce. To verify the effectiveness of the proposed algorithm, experiments were conducted on 4 big data sets. The experimental results show that the proposed algorithm is effective and efficient.
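The idea of using hashing to avoid comparing a query against every training point can be sketched as follows. The grid hash here is only a stand-in for the paper's hashing scheme, and the MapReduce distribution step is omitted:

```python
from collections import Counter

def bucket_key(x, width=1.0):
    """Hash a point to a grid cell; nearby points tend to share a cell.
    (A simple grid hash standing in for the paper's hashing technique.)"""
    return tuple(int(v // width) for v in x)

def build_index(points):
    """Map bucket key -> list of (point, label) training pairs."""
    index = {}
    for x, y in points:
        index.setdefault(bucket_key(x), []).append((x, y))
    return index

def knn_predict(index, q, k=3):
    """Classify q by majority vote among the k nearest candidates in the
    query's own bucket, falling back to a full scan if the bucket is empty."""
    cands = index.get(bucket_key(q))
    if not cands:
        cands = [p for bucket in index.values() for p in bucket]
    dist = lambda pair: sum((u - v) ** 2 for u, v in zip(pair[0], q))
    nearest = sorted(cands, key=dist)[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((0.1, 0.2), "a"), ((0.3, 0.1), "a"),
         ((5.1, 5.2), "b"), ((5.3, 5.0), "b")]
index = build_index(train)
```

The speed-up comes from restricting the distance computations to one bucket; in the MapReduce setting, each bucket (or bucket group) would be handled by a separate reducer.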
Shortest Path Network Routing Optimization Algorithm Based on Improved ITO Algorithm
MAN Zhen-zhen, YU Shi-ming and HE De-feng
Computer Science. 2017, 44 (7): 215-220.  doi:10.11896/j.issn.1002-137X.2017.07.038
Through an analysis of the shortest path problem in network routing, we built a network model of the shortest-path routing problem using the ITO algorithm, with the lowest-cost route as the optimization target. To speed up the convergence of the ITO lowest-cost routing algorithm, the proposed algorithm introduces a cost-inspired factor into the state transition strategy and improves the path weight updating rules. The convergence speed and optimization capability are enhanced simultaneously by introducing population crossover and using information exchange between populations. A 2-opt-based inversion operator is adopted to optimize the algorithm and avoid falling into local optima. The convergence of the algorithm was also analyzed systematically. The case study illustrates that the improved algorithm effectively speeds up convergence and enhances optimization capability.
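The 2-opt-based inversion operator mentioned above can be sketched as reversing a segment of a route; the tiny weighted example below is hypothetical and only shows why such a reversal can lower the route cost:

```python
def two_opt_inversion(route, i, k):
    """Reverse the segment route[i:k+1] -- the classic 2-opt move used to
    escape local optima (an illustrative sketch of the operator)."""
    return route[:i] + route[i:k + 1][::-1] + route[k + 1:]

def route_cost(route, w):
    """Total weight of consecutive edges; w maps (u, v) -> edge cost."""
    return sum(w[(route[j], route[j + 1])] for j in range(len(route) - 1))

# Hypothetical 4-node network: reversing the middle segment removes the
# two expensive edges (s,y) and (x,t).
w = {("s", "x"): 1, ("x", "y"): 1, ("y", "t"): 1,
     ("s", "y"): 5, ("y", "x"): 1, ("x", "t"): 5}
bad = ["s", "y", "x", "t"]           # cost 5 + 1 + 5 = 11
good = two_opt_inversion(bad, 1, 2)  # reverses ["y", "x"] -> ["x", "y"]
```

In the full algorithm this move would be tried inside the population search loop and kept only when it reduces the cost.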
Detecting Community from Bipartite Network Based on Generalized Suffix Tree
ZOU Ling-jun, CHEN Ling and DAI Cai-yan
Computer Science. 2017, 44 (7): 221-226.  doi:10.11896/j.issn.1002-137X.2017.07.039
In recent years, the problem of detecting communities in bipartite networks has drawn much attention from researchers. This paper presented an algorithm based on the generalized suffix tree for detecting communities in bipartite networks. The algorithm first extracts the adjacent node sequence for each node from the adjacency matrix of the bipartite network and constructs a generalized suffix tree, in which each node represents a complete bipartite clique. Then the algorithm extracts and adjusts those cliques, and the closeness of two cliques is introduced to form initial communities. Finally, isolated nodes are processed to obtain the final community partition. The proposed algorithm can detect overlapping communities and is able to obtain one-to-many correspondences between communities. Experimental results on artificial and real-world networks show that our algorithm can not only accurately identify the number of communities in bipartite networks, but also obtain high-quality community partitions.
Time-based Local Collaborative Filtering Recommendation Algorithm on Tensor Factorization
SUN Yan-ge, WANG Zhi-hai and HUANG Dan
Computer Science. 2017, 44 (7): 227-231.  doi:10.11896/j.issn.1002-137X.2017.07.040
Traditional recommendation models are stationary and neglect the time factor. Some recommendation algorithms do take time into consideration, but only by using the latest data or down-weighting past data, which may lose useful information. To solve this problem, a time-based local low-rank tensor factorization algorithm was proposed. In contrast to standard collaborative filtering algorithms, our method does not assume that the rating matrix is globally low-rank; we relaxed this assumption to the rating matrix being locally low-rank. The algorithm takes the time factor into consideration by viewing the ratings as a 3-dimensional tensor, thereby extending traditional recommendation algorithms to the tensor domain. Experiments show that the algorithm improves the performance of ranking recommendation.
Generalized Intuitionistic Fuzzy Rough Set Model
LU Yan-li, LEI Ying-jie and ZHOU Wei
Computer Science. 2017, 44 (7): 232-236.  doi:10.11896/j.issn.1002-137X.2017.07.041
The intuitionistic fuzzy set, proposed by Atanassov, is one of the most influential generalizations of Zadeh's fuzzy sets. To extend the capability of Pawlak's rough set theory in processing multiple uncertainties, intuitionistic fuzzy sets and rough sets were combined, and a general model of intuitionistic fuzzy rough sets was discussed using a constructive approach. Firstly, an equivalent definition of intuitionistic fuzzy sets on a special lattice was introduced. Secondly, several important properties of intuitionistic fuzzy logic operators and intuitionistic fuzzy relations, the two fundamental elements of the intuitionistic fuzzy approximation space, were proved, and the model of intuitionistic fuzzy rough sets was then constructed. Finally, the properties of the proposed model were classified and examined.
Novel ABC Algorithm with Adaptive Disturbance
ZHOU Shu-liang, FENG Dong-qing and CHEN Xue-mei
Computer Science. 2017, 44 (7): 237-243.  doi:10.11896/j.issn.1002-137X.2017.07.042
As a new type of algorithm, the artificial bee colony (ABC) algorithm simulates the foraging behavior of bees. Owing to its few parameters and flexibility, ABC is widely used to solve engineering problems, but premature convergence and boundary violations are its disadvantages. To solve these problems, a novel ABC algorithm with adaptive disturbance (IGABC) was proposed in this paper. The improved algorithm adopts a symmetry-axis strategy to handle out-of-bound individuals, improving search efficiency. A novel global self-adaptive search equation was also proposed: it improves the structure of the original global search equation and adds a linear increasing strategy with a threshold. The resulting search method for onlooker and employed bees improves convergence precision and speed. IGABC furthermore designs a novel method on the basis of global adaptive disturbance. Simulation results on 18 benchmark functions show that, compared with six other improved ABC algorithms proposed in the last two years, IGABC enhances exploitation capacity and makes great progress in convergence speed and accuracy. In particular, on the Rosenbrock function, whose optimum is very difficult to find, the convergence precision is improved by 16 orders of magnitude.
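The symmetry-axis treatment of out-of-bound individuals can be sketched as mirroring each violating component back across the violated bound. This is an illustrative reading of the strategy, not the paper's exact code:

```python
def reflect(x, lb, ub):
    """Mirror an out-of-range component back inside [lb, ub], treating the
    violated bound as a symmetry axis (sketch of a symmetry-axis boundary
    strategy; the loop handles overshoots larger than the range)."""
    while x < lb or x > ub:
        if x > ub:
            x = 2 * ub - x   # reflect across the upper bound
        elif x < lb:
            x = 2 * lb - x   # reflect across the lower bound
    return x

def repair(position, lb, ub):
    """Apply the reflection to every dimension of a candidate solution."""
    return [reflect(v, lb, ub) for v in position]
```

Compared with simply clamping to the bound, reflection keeps out-of-bound individuals spread over the interior of the feasible range instead of piling them up on the boundary.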
Incremental Attribute Reduction Algorithm Based on Binary Discernibility Matrix in Incomplete Information System
DING Mian-wei, ZHANG Teng-fei and MA Fu-min
Computer Science. 2017, 44 (7): 244-250.  doi:10.11896/j.issn.1002-137X.2017.07.043
Incremental attribute reduction in incomplete information systems is one of the important research topics in data mining. To obtain the attribute reduction incrementally, the tolerance classes need to be computed. To speed up the tolerance class calculation, a fast and stable improved static algorithm is developed first, followed by a novel incremental algorithm that can update the tolerance classes rapidly when a new object arrives. On the basis of the obtained tolerance classes, and exploiting the intuitiveness and simplicity of the binary discernibility matrix, an incremental attribute reduction algorithm for incomplete information systems that updates the binary discernibility matrix was proposed. The validity of these algorithms was demonstrated by simulation and experimental results.
Multi-agent Epistemic Explanatory Diagnosis Class
YU Quan and CHANG Liang
Computer Science. 2017, 44 (7): 251-256, 274.  doi:10.11896/j.issn.1002-137X.2017.07.044
Explanatory diagnosis is an important branch of model-based diagnosis. Based on the decidable fragments of epistemic explanatory diagnosis, we presented the notion of the epistemic explanatory diagnosis class, and gave an algorithm for the unique epistemic explanatory diagnosis class within the decidable fragment.
Link Prediction in Networks with Node Attributes Based on Space Mapping
JIANG Mao-sheng, GE Jian-fei and CHEN Ling
Computer Science. 2017, 44 (7): 257-261.  doi:10.11896/j.issn.1002-137X.2017.07.045
A link prediction algorithm based on space mapping for networks with node attributes was proposed in this paper. In general, networks with node attributes carry two types of information, i.e. topology and node attributes. To integrate these two types of information, we mapped them into a new space. After the mapping, similarities between nodes are calculated in the new space and used to predict the possibility of links between nodes. An alternating iteration method was proposed to obtain the optimal mapping matrix, so as to integrate topology information and node attribute information effectively. The experimental results verify the correctness of the space-mapping approach and show that the proposed algorithm obtains high-quality prediction results.
Research on Improved Apriori Algorithm Based on Hadoop
HUANG Jian, LI Ming-qi and GUO Wen-qiang
Computer Science. 2017, 44 (7): 262-266, 269.  doi:10.11896/j.issn.1002-137X.2017.07.046
Traditional parallel Apriori-based mining needs increasingly more time for data I/O as the size of the transaction database grows. In this paper, we improved the Apriori algorithm in three aspects: compressing the transactions, reducing the number of scans, and simplifying candidate set generation. We used "0" and "1" entries to describe a transaction Boolean matrix model and introduced a weight dimension to compress the size of the transaction matrix. Meanwhile, the matrix is pruned dynamically, and the "and" operation on the matrix is applied to generate candidate sets. Experiments with the improved algorithm running in parallel on the Hadoop framework show that the algorithm is suitable for large-scale data mining and has good scalability and effectiveness.
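The Boolean-matrix representation and the "and" operation for support counting can be sketched as follows. This is a minimal single-machine illustration; the weight dimension, dynamic pruning, and the Hadoop parallelization are omitted:

```python
def columns_as_bitmasks(matrix):
    """Encode each item column of a 0/1 transaction matrix as an integer
    bitmask; bit t is set when transaction t contains the item."""
    n_items = len(matrix[0])
    masks = [0] * n_items
    for t, row in enumerate(matrix):
        for i, bit in enumerate(row):
            if bit:
                masks[i] |= 1 << t
    return masks

def support(masks, itemset):
    """Support of an itemset = population count of the bitwise AND of its
    item bitmasks -- the matrix 'and' operation from the text."""
    acc = masks[itemset[0]]
    for i in itemset[1:]:
        acc &= masks[i]
    return bin(acc).count("1")

# Three hypothetical transactions over items 0, 1, 2:
transactions = [
    [1, 1, 0],   # {0, 1}
    [1, 1, 1],   # {0, 1, 2}
    [0, 1, 1],   # {1, 2}
]
masks = columns_as_bitmasks(transactions)
```

One bitwise AND replaces a full scan of the transaction database for each candidate itemset, which is where the I/O savings of the matrix representation come from.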
Inexact Proximal Point Algorithm for Structured Variational Inequalities
CHEN Xiao-biao, LI Geng-hua, LIANG Juan and WANG Jian-jun
Computer Science. 2017, 44 (7): 267-269.  doi:10.11896/j.issn.1002-137X.2017.07.047
Recently, the alternating direction method of multipliers has attracted great attention. For a class of variational inequalities, this method is efficient when the subproblems can be solved exactly. However, in many practical applications the subproblems can be too difficult or impossible to solve exactly. In this paper, we proposed an inexact proximal point method based on the proximal point method, whose subproblems are simple enough to admit closed-form solutions. Instead of solving the subproblems exactly, we used a simple projection-correction method to approximate their true solutions. Convergence of the proposed method is proved under mild assumptions, and its efficiency is also verified by numerical experiments.
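For context, a generic proximal point step for a variational inequality VI(Ω, F), and the single projection typically used to approximate it inexactly, can be written as follows. The notation is the standard textbook form, not necessarily the paper's exact scheme:

```latex
% VI(\Omega, F): find u^{*} \in \Omega such that
(u - u^{*})^{\top} F(u^{*}) \;\ge\; 0, \qquad \forall\, u \in \Omega .
% Exact proximal point step with parameter \beta > 0:
% find u^{k+1} \in \Omega with
(u - u^{k+1})^{\top}\!\bigl(F(u^{k+1}) + \beta\,(u^{k+1} - u^{k})\bigr)
\;\ge\; 0, \qquad \forall\, u \in \Omega .
% Inexact variant: predict with a single projection onto \Omega,
\tilde{u}^{\,k} \;=\; P_{\Omega}\!\bigl[\, u^{k} - \beta^{-1} F(u^{k}) \,\bigr],
% then correct u^{k+1} along a direction built from u^{k} - \tilde{u}^{\,k}.
```

The inexact step trades the (possibly intractable) exact subproblem for one projection plus a correction, which is the general pattern the abstract describes.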
Human Action Recognition Based on Fisher Discrimination Dictionary Learning
JI Chong, WANG Sheng and LU Jian-feng
Computer Science. 2017, 44 (7): 270-274.  doi:10.11896/j.issn.1002-137X.2017.07.048
Human action recognition is an active research topic in computer vision with broad application prospects. This paper explored the application of Fisher-discrimination-based dictionary learning to human action recognition. First, local spatio-temporal features are extracted from the video sequences and random projection is used to reduce their dimensionality. Then, Fisher-discrimination-based dictionary learning is performed on the reduced motion features. Last, a new classification scheme using both the reconstruction error and the representation coefficients is proposed. Experimental results confirm the efficiency and robustness of the proposed scheme.
Research on Action Recognition Algorithm Based on Hybrid Cooperative Training
JING Chen-yong, ZHAN Yong-zhao and JIANG Zhen
Computer Science. 2017, 44 (7): 275-278.  doi:10.11896/j.issn.1002-137X.2017.07.049
Human action recognition is an important issue in the field of computer vision. Most existing action recognition methods belong to the supervised learning category, in which a large amount of labeled data is needed to train the recognition model. However, in many real-world tasks labeled data are often expensive to obtain, while unlabeled data are readily available in abundance. In this paper, a novel human action recognition algorithm based on hybrid collaborative training, named Co-KNN-SVM, was proposed. Different types of recognition methods from the action recognition field are employed to build the base classifiers, which are then iteratively retrained to increase their generalization abilities. In general, our method can decrease the labeling cost and combine the complementary advantages of different recognition algorithms. To decrease the impact of noise in pseudo-labeled data and improve recognition performance, we also improved the selection method for pseudo-labeled data and the iterative training strategy of co-training-style algorithms. The experimental results show that the proposed algorithm can identify human actions in video more effectively.
Method to Generate Diagonalizable LDPC Measurement Matrix Based on Compressive Sensing
ZHOU Chun-jia, SUN Quan-sen and LIU Ji-xin
Computer Science. 2017, 44 (7): 279-282.  doi:10.11896/j.issn.1002-137X.2017.07.050
Compressive sensing is a technique suitable for compressing and recovering signals that have sparse representations in certain bases. In view of two main problems of existing measurement matrices for compressive sensing of natural images, namely the difficulty of hardware implementation and low sensing efficiency, this paper proposed a simple measurement matrix. By combining a diagonal block matrix with the LDPC check matrix from channel coding, a new measurement matrix that facilitates hardware implementation is generated. The diagonalizable LDPC measurement matrix is highly sparse and binary, reducing data storage space and computing time. Comparisons on multiple sets of images show that the reconstruction results of this method are much better than those of the other matrices.
Fast and Robust SAR Image Matching Algorithm
WU Peng, YU Qiu-ze and MIN Shun-xin
Computer Science. 2017, 44 (7): 283-288, 298.  doi:10.11896/j.issn.1002-137X.2017.07.051
To address the low matching performance (poor universality, low matching accuracy, high time complexity) of SIFT and its improved variants in multi-band SAR image matching, we improved the algorithm within the SIFT framework in both scale-space construction and descriptor construction. At the scale-space level, we proposed using a Gaussian guided filter to construct the scale space and a bilateral filter in the image preprocessing stage. This strategy efficiently filters speckle noise while preserving image information, making full use of the guided filter's real-time performance and rotational symmetry together with the edge-preserving advantage of the bilateral filter. In the descriptor construction stage, to ensure distinctiveness while reducing the time to build descriptors, we adopted the local difference binary descriptor to describe local features. In the matching stage, coarse matching first uses the nearest-neighbor algorithm, and then sparse vector field consensus is used to quickly remove mismatched points. The experimental results show that the proposed algorithm outperforms the BFSIFT and KAZE algorithms in SAR image matching in both time complexity and matching probability. In conclusion, the proposed algorithm is efficient, real-time and robust, and achieves a high matching probability.
Implementation of Time-Spatial Domain Retinex Color Correction Algorithm on ZedBoard
ZHOU Jia-hao, LI Pei-yue and YANG Huai-jiang
Computer Science. 2017, 44 (7): 289-292, 314.  doi:10.11896/j.issn.1002-137X.2017.07.052
A time-spatial domain retinex color correction algorithm based on white balance was proposed. Owing to the algorithm's high computational complexity and poor real-time performance, a hardware platform was designed to realize it. The ZedBoard uses the resources of the PL and PS to perform white balance processing and extract light information, and then executes the retinex color correction algorithm efficiently. Finally, the projector outputs reflection-coefficient images representing the objects' inherent colors. The color data of the projected light were collected with a Konica Minolta CL200A color light meter. The data show that the algorithm spreads the color gamut of images toward the standard, and white pixels after correction are closer to the standard white. Color resolution is improved and color cast is eliminated. Compared with the time cost of implementing the algorithm on a PC, the time cost on the ZedBoard is reduced by about two orders of magnitude.
Visual Attention Modeling Based on Multi-scale Fusion of Amplitude Spectrum and Phase Spectrum
YUAN Xiao-yan, WANG An-zhi, PAN Gang and WANG Ming-hui
Computer Science. 2017, 44 (7): 293-298.  doi:10.11896/j.issn.1002-137X.2017.07.053
To address the deficiency that most existing frequency-domain saliency detection algorithms use only the amplitude spectrum or only the phase spectrum, a visual attention model combining multi-scale amplitude and phase spectra was proposed. Firstly, the amplitude spectrum and the phase spectrum are obtained by a quaternion transform of the image, and gamma correction and Gaussian filtering are then performed on the amplitude spectrum. Finally, entropy is used as the weight to fuse the multi-scale saliency maps. On the two public datasets, Bruce and Judd, the algorithm was validated and evaluated with ROC curves, AUC values and the F-measure. The experimental results show that the proposed algorithm outperforms five visual attention models, predicts salient attention areas more accurately, and achieves more satisfactory results.
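The core idea of combining amplitude and phase can be sketched at a single scale for a grayscale image, using a plain 2-D FFT in place of the quaternion transform used for color images in the paper (a simplified illustration under those assumptions, not the authors' model):

```python
import numpy as np

def amplitude_phase_saliency(img, gamma=0.5):
    """Single-scale sketch: gamma-compress the amplitude spectrum while
    keeping the phase spectrum intact, then reconstruct; the squared
    magnitude of the reconstruction serves as a saliency map.

    img: 2-D float array (grayscale image).
    """
    f = np.fft.fft2(img)
    amplitude, phase = np.abs(f), np.angle(f)
    # Gamma correction flattens dominant (background) frequencies,
    # so residual structure carried by the phase stands out.
    rec = np.fft.ifft2((amplitude ** gamma) * np.exp(1j * phase))
    sal = np.abs(rec) ** 2
    return sal / sal.max()
```

The full model would repeat this at several scales, smooth each map with a Gaussian, and fuse the maps with entropy-based weights.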
Real-time Dynamic Sign Language Recognition Based on Hierarchical Matching Strategy
LIANG Wen-le, HUANG Yuan-yuan and HU Zuo-jin
Computer Science. 2017, 44 (7): 299-303.  doi:10.11896/j.issn.1002-137X.2017.07.054
A dynamic sign can be described by its trajectory and its key hand actions. A large body of statistical data shows, however, that most commonly used signs can be recognized from the trajectory curve alone. Therefore, a hierarchical matching recognition strategy for dynamic sign language was proposed in this paper. First, the gesture trajectory is captured by somatosensory equipment such as Kinect. Based on the point density of the trajectory, a key-frame detection algorithm is designed to extract the key gestures, so that a precise description of a dynamic sign is obtained from the trajectory curve and the key frames. The dynamic time warping (DTW) algorithm is then optimized and used for the first-level matching, i.e., trajectory matching. If a recognition result can be obtained at this level, the process finishes; otherwise it proceeds to the second level, i.e., key-frame matching, which yields the final recognition result. Experiments show that this algorithm achieves both better real-time performance and higher recognition accuracy.
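The first-level trajectory matching rests on DTW, which aligns two trajectories that differ in speed. A textbook sketch of the unoptimized algorithm for 1-D trajectories (the paper's optimized variant and multi-dimensional handling are not reproduced here):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences.

    Fills the cumulative-cost table D where D[i, j] is the cheapest
    alignment of a[:i] with b[:j]; transitions are match, insert, delete.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]
```

Because DTW tolerates a sign performed slowly or quickly, the trajectory of one sign matches its template at near-zero cost even when the sampling lengths differ.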
Lining Seam Elimination Algorithm in Tunnel Concrete Lining Based on Line Feature Unit Extraction
AN Shi-quan, BAI Ling and QU Zhong
Computer Science. 2017, 44 (7): 304-308.  doi:10.11896/j.issn.1002-137X.2017.07.055
Tunnel concrete linings are prone to defects such as sugaring, hollowing, localized rock fall and water leakage. Because surface cracks and the inherent lining seams in some tunnel concrete linings are similar in brightness and both exhibit linear consistency, existing crack detection algorithms cannot accurately extract single cracks. This paper proposed a lining seam elimination algorithm based on the extraction of line feature units in tunnel concrete lining. Firstly, on the basis of detecting the clustered characteristics similar to cracks, the significant linear features are detected by the progressive probabilistic Hough transform (PPHT). Secondly, the minimum line-feature processing unit of a lining seam, called a Unit-Line, is extracted by a pixel-extension search. Finally, according to the marking information of the Unit-Line features and the Unit-Line features in a fixed area, part of the lining seams can be removed, and the real surface cracks in the tunnel concrete lining are obtained by percolation de-noising. The experimental results show that the proposed lining seam elimination algorithm based on line-feature-unit extraction is strongly robust, fills a gap in existing tunnel concrete lining surface crack detection technology, and removes the interference of similar linear features with the detection of single real cracks accurately, quickly and effectively.
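PPHT (as implemented, for example, by OpenCV's `HoughLinesP`) is a randomized, early-terminating refinement of classic Hough voting. The underlying voting idea it accelerates can be sketched in a few lines (an illustrative toy, not PPHT itself; the function name is an assumption):

```python
import numpy as np

def hough_peak(edge_points, thetas_deg=range(0, 180)):
    """Minimal Hough voting: each edge point votes for every line
    rho = x*cos(theta) + y*sin(theta) passing through it; the cell
    (rho, theta_deg) with the most votes is the strongest line.
    """
    votes = {}
    for x, y in edge_points:
        for t_deg in thetas_deg:
            t = np.deg2rad(t_deg)
            rho = int(round(x * np.cos(t) + y * np.sin(t)))
            key = (rho, t_deg)
            votes[key] = votes.get(key, 0) + 1
    return max(votes, key=votes.get)
```

In the paper's pipeline, line segments found this way are the candidates from which Unit-Line seam fragments are grown and then subtracted from the crack map.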
Hypergraph Dual Regularization Concept Factorization Algorithm and Its Application in Data Representation
YE Jun and JIN Zhong
Computer Science. 2017, 44 (7): 309-314.  doi:10.11896/j.issn.1002-137X.2017.07.056
The concept factorization (CF) algorithm cannot take the geometric structures of the data manifold and the feature manifold into account simultaneously, nor can it model the high-order relationships among samples. In this paper, a novel algorithm called hypergraph dual regularization concept factorization (DHCF) was proposed, which encodes the high-order geometric structure information of the data and feature spaces by constructing two undirected weighted hypergraph Laplacian regularization terms, respectively. In this way, the proposed method overcomes the deficiency of traditional graph models, which express only pair-wise relationships. Moreover, we developed iterative updating optimization schemes for DHCF and provided a convergence proof for the optimization scheme. Experimental results on the TDT2 document dataset and the PIE and COIL20 image datasets demonstrate the effectiveness of our method.
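The hypergraph Laplacian used in such regularization terms is standard: with incidence matrix H, hyperedge weights W, vertex degrees Dv and hyperedge degrees De, it is L = I − Dv^(−1/2) H W De^(−1) Hᵀ Dv^(−1/2). A small sketch of its construction (the regularized CF updates themselves are not reproduced):

```python
import numpy as np

def hypergraph_laplacian(H, w=None):
    """Normalized hypergraph Laplacian.

    H: (n_vertices, n_edges) incidence matrix, H[v, e] = 1 if vertex v
       belongs to hyperedge e.
    w: optional hyperedge weights (defaults to all ones).
    """
    n, m = H.shape
    w = np.ones(m) if w is None else np.asarray(w, dtype=float)
    dv = H @ w                    # weighted vertex degrees
    de = H.sum(axis=0)            # hyperedge degrees (vertices per edge)
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
    Theta = Dv_inv_sqrt @ H @ np.diag(w / de) @ H.T @ Dv_inv_sqrt
    return np.eye(n) - Theta
```

DHCF builds one such Laplacian over samples and one over features, and adds both as trace penalties to the CF objective.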
Research on Unsupervised Regional Remote Sensing Image Retrieval Algorithm Based on Graph Theory
LI Li-ping, ZHAO Chuan-rong, KONG De-ren and WANG Fang
Computer Science. 2017, 44 (7): 315-317, 323.  doi:10.11896/j.issn.1002-137X.2017.07.057
To improve content-based remote sensing image retrieval technology, a new image retrieval algorithm based on graph theory was proposed. First, the method models each image as a graph that combines local information with the related spatial structure, providing a region-based image representation. Each image is initially segmented into different regions; the nodes and edges of the attributed relational graph represent the regional features and the spatial relations between them, respectively. Then, image retrieval is performed according to image similarity. To match corresponding images and realize retrieval by similarity, a new inexact graph matching strategy based on a sub-graph isomorphism algorithm and spectral graph embedding is used. The experimental results show that, compared with state-of-the-art unsupervised remote sensing image retrieval methods, the retrieval performance of the proposed method is significantly improved.
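One common way to compare region graphs via spectral embedding is to use the spectrum of the normalized graph Laplacian as a permutation-invariant signature. This is a generic illustration of that idea, not the paper's matching algorithm (function names and the plain Euclidean comparison are assumptions):

```python
import numpy as np

def spectral_signature(adj, k=3):
    """Signature of a graph: the k smallest eigenvalues of its
    normalized Laplacian (invariant to node relabeling)."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    L = np.eye(len(adj)) - d_inv_sqrt @ adj @ d_inv_sqrt
    return np.sort(np.linalg.eigvalsh(L))[:k]

def graph_distance(adj_a, adj_b, k=3):
    """Compare two region graphs by their spectral signatures."""
    return np.linalg.norm(spectral_signature(adj_a, k) - spectral_signature(adj_b, k))
```

In a retrieval setting, node attributes (regional features) would additionally enter the matching cost; the spectrum alone only captures structure.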
Study on Grassmann Manifold Dimension Reduction and Its Application
ZENG Qing-song, HUANG Xiao-yu and ZHONG Run-lu
Computer Science. 2017, 44 (7): 318-323.  doi:10.11896/j.issn.1002-137X.2017.07.058
The key issue in video-based face recognition is how to model the facial images and measure the similarity between two models. To this end, a dimension reduction method on the Grassmann manifold was proposed to improve the performance of set matching. Firstly, an image set is modeled as a subspace, and the basic element of the Grassmann manifold is represented as a projection matrix via the projection mapping. Then, to address the computational overhead of high-dimensional matrices and the fact that the model cannot strictly describe the distribution when samples are few, two-dimensional principal component analysis is applied to reduce the dimension of the orthogonal basis matrix. By applying QR decomposition to that matrix, a lower-dimensional, more compact Grassmann manifold is obtained, which models the image set better. Finally, a kernel function that maps the orthogonal basis matrices from the Grassmann manifold to a Euclidean space is used to classify image sets. Extensive experimental results on public video datasets show that the proposed method is effective for object matching and face recognition based on set-to-set matching, and that it outperforms other state-of-the-art set-based matching methods at lower computational cost.
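The two basic ingredients, modeling an image set as a subspace and comparing subspaces with a Grassmann kernel, can be sketched as follows. This uses the widely known projection kernel k(U1, U2) = ||U1ᵀU2||_F²; the paper's 2DPCA reduction and QR step are omitted, and the function names are assumptions:

```python
import numpy as np

def subspace_basis(X, k):
    """Orthonormal basis of the k-dimensional subspace spanned by an
    image set. X: (d, n) matrix whose columns are vectorized images."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :k]   # leading left singular vectors

def projection_kernel(U1, U2):
    """Projection kernel on the Grassmann manifold: ||U1^T U2||_F^2.
    Equals k when the subspaces coincide, smaller otherwise."""
    return np.linalg.norm(U1.T @ U2, 'fro') ** 2
```

With this kernel, any kernel classifier (e.g. kernel discriminant analysis, as is common in Grassmann-based recognition) operates on image sets as single points.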