Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 41, Issue 3, 2014
Techniques of Distributed Application Access Control Policy Refinement and Policy Conflict Analysis
WU Ying-hong,HUANG Hao and ZENG Qing-kai
Computer Science. 2014, 41 (3): 1-11. 
Policy refinement is an important method for managing the complexity of distributed access control policy configuration. This article analyzed existing policy refinement techniques, including their hierarchical descriptions of systems and policies, their methods for deriving lower-level coordination control policies, their ability to analyze and resolve policy conflicts during refinement, and the associated attribute analysis methods concerning completeness and consistency in refinement, consistency among coordination control policies for defense in depth, and the composition and mutual-exclusion relationships among application policies. The analysis shows that the key problem limiting refinement ability is the lack of associated attribute analysis for policies. The article further analyzed the associated attribute analysis capability, usability for distributed applications, and computational scalability of existing policy conflict analysis techniques. Based on these analyses, we pointed out several research problems whose solution can improve policy refinement for service-oriented distributed applications.
Elementary Theoretical Framework for Software Testing
WANG Zhen-zhen
Computer Science. 2014, 41 (3): 12-16. 
Software testing is an indispensable part of software development and software engineering. Although many software testing techniques are now well refined, the corresponding theoretical foundations are still missing. Based on the experience of pioneers, this paper tried to provide an elementary theoretical framework in which a sample space of software testing is defined, a random variable reflecting properties of the software (e.g. bugs) is introduced, and the probability measure and conditional expectation for white-box and black-box testing are generalized. This construction aims to deepen our understanding of why software bugs exist, so that software testing can be improved. It may also be beneficial for developing a scientific theory of software testing.
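For readers unfamiliar with this kind of construction, a minimal sketch of what such a probabilistic framing could look like is given below; the notation is ours, not necessarily the paper's.

```latex
% A minimal sketch (our notation, not the paper's): take the input
% domain D as the sample space with the uniform measure,
\[
  \Omega = D, \qquad P(A) = \frac{|A|}{|D|} \quad \text{for } A \subseteq D,
\]
% let X count the faults exposed by executing an input,
\[
  X(\omega) = \#\{\text{faults exposed by executing input } \omega\},
\]
% and score a test suite T \subseteq D by the expected exposure over T:
\[
  E[X \mid T] = \frac{1}{|T|} \sum_{\omega \in T} X(\omega).
\]
```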
Review on Fault Diagnosis Theory and Application Based on Petri Nets
FANG Huan,FANG Xian-wen and LI De-quan
Computer Science. 2014, 41 (3): 17-22. 
Fault diagnosis is an important research topic in the area of discrete event systems and has great value for ensuring system safety. Research on Petri-net-based fault diagnosis can be divided into two classes: the study of diagnosability theory and the study of algorithms for constructing fault diagnosers. Diagnosability research can further be divided into diagnosability and K-diagnosability, while diagnoser construction algorithms can be classified according to the systems they apply to. Various methods and conclusions for diagnosability and K-diagnosability were reviewed, the construction of fault diagnosers for discrete, continuous and distributed Petri net systems was introduced, and their characteristics were analyzed in detail. Finally, research directions and application problems for further study were discussed.
Comparison of Tabular Notations
CHEN Yi-hai and MIAO Huai-kou
Computer Science. 2014, 41 (3): 23-26. 
Tabular notations are both readable and convenient. They allow the specifications of systems to be represented in a very compact and precise manner, and they make checking important properties such as consistency and completeness natural and relatively easy. In the past 30 years, tabular notations have been successfully applied in several safety-critical software systems. This paper presented a fairly comprehensive survey of three variants of tabular notations, analyzing and comparing them in detail. Moreover, the paper discussed the challenges of using tabular notations to derive an implementation of a working real-time system and presented some solutions. Finally, it attempted to help the reader navigate the vast literature in the field, to highlight differences and similarities between variants, and to reveal research trends and promising avenues for future exploration.
Critical Routers Detection Based on Distribution Similarity Transfer
MENG Qing-kai,ZHANG Yan,YANG Wan-qi,HU Yu-jing,SHI Ying-huan,PAN Hong-bing and WANG Hao
Computer Science. 2014, 41 (3): 27-31. 
Critical infrastructures, which usually carry large flows and occupy key positions, are common in infrastructure networks (e.g. power transmission networks, the Internet). Their performance and reliability directly influence the capabilities of the whole network. To improve the capability and security level of infrastructure networks, we proposed a novel method for critical infrastructure detection based on distribution similarity transfer, which aims to automatically detect the critical routers in the current route. In real applications, due to several factors (e.g. network status, router performance), the behaviors of routers in different routes often follow different distributions. The proposed method therefore models the problem as distribution similarity transfer among routes: first, suspected routers in the target domain (the current route) are detected using spectral clustering; then a newly proposed distribution similarity transfer classifier classifies the suspected routers obtained in the previous step. The method was evaluated on a real dataset provided by Huawei Inst. The experimental results validate that the proposed method can effectively detect critical infrastructures, and that distribution similarity transfer successfully improves the classification results.
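As a hedged illustration of the first stage (suspected-router detection via spectral clustering), here is a minimal Python sketch; the features and data are synthetic stand-ins, and the transfer classifier of the second stage is not reproduced.

```python
# Sketch: flag "suspected" routers in a route by spectral clustering of
# behavior features. Features/data are illustrative, not from the paper.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
# rows = routers in the current route; columns = behavior features
# (e.g. traffic volume, queue delay) -- synthetic stand-ins here
X = rng.normal(size=(50, 4))
X[:5] += 3.0  # a few routers behave differently

labels = SpectralClustering(n_clusters=2, affinity="rbf",
                            random_state=0).fit_predict(X)
# treat the smaller cluster as the suspected critical routers
suspected = np.flatnonzero(labels == np.argmin(np.bincount(labels)))
print("suspected routers:", suspected)
```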
Emotion Analysis on Text Sentences Based on Topics
WANG Lei,MIAO Duo-qian,ZHANG Zhi-fei and YU Ying
Computer Science. 2014, 41 (3): 32-35. 
Emotion analysis of online information has received much attention in the natural language processing field in recent years. A novel topic-based emotion vector space model for text sentences was proposed. The model, which incorporates latent topic features, an emotion dictionary and a multi-label classification algorithm, was applied to analyze the polarity of sentences. Experimental results show that the model is reasonable and effective in recognizing sentence polarity.
Behavior and Score Similarity Based Algorithm for Association Rule Group Recommendation
ZHANG Jia-le,LIANG Ji-ye,PANG Ji-fang and WANG Bao-li
Computer Science. 2014, 41 (3): 36-40. 
Applying association rule recommendation tools often runs into the difficulty of selecting the optimal rule and of inadequate utilization of rule information. Easily obtained background knowledge can solve these problems. For the situation where only commodity names and score information are available, this paper proposed a behavior and score similarity based association rule group recommendation algorithm, in which a rule together with its scores is regarded as an expert, experts with the same conclusion are grouped together, and expert weights are calculated from both behavior similarity and score similarity. A better recommendation is reached by aggregating the opinions of the experts. An experimental example shows that the algorithm is feasible and effective.
Image Clustering Based on Semi-supervised Distance Learning and Multi-modal Information
LIANG Jian-qing and HU Qing-hua
Computer Science. 2014, 41 (3): 41-45. 
This work clusters images by fusing different modalities of information in the images and using a small number of labeled images for semi-supervised distance learning. First, we extracted histogram information from the RGB color space, texture information from the color images, and bag-of-words features obtained with the SIFT algorithm to re-express each image, thus establishing a multi-modal representation of images based on color, texture and semantic features. Then, using a small number of labeled images, we obtained a similarity measure in the multi-modal information space through semi-supervised distance learning. Finally, we grouped the images with a semi-supervised clustering method and verified the validity of the proposed method on several image databases.
Lane Detection in Micro-traffic under Complex Illumination
LUO Qiang,WANG Guo-yin and CHU Wei-dong
Computer Science. 2014, 41 (3): 46-49. 
To solve the problem of lane detection in micro-traffic under complex illumination, a lane detection method was proposed. Firstly, lane images under different illumination are classified using Naive Bayes. Then, each class of lane images is processed with the corresponding image processing method. Finally, an improved Otsu method and a revised probabilistic Hough transform are introduced to detect the lanes. Simulation experiments on lane images under different illumination show that the lane detection success rate reaches 95.5% and that the method possesses strong robustness and anti-interference ability.
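A minimal Python/OpenCV sketch of the final detection stage (Otsu thresholding followed by the probabilistic Hough transform) is given below; the file path and parameters are illustrative, and the paper's improvements to both steps are not reproduced.

```python
# Sketch: Otsu binarization, edge extraction, then probabilistic Hough.
import math
import cv2

img = cv2.imread("lane.jpg", cv2.IMREAD_GRAYSCALE)  # illustrative path
img = cv2.GaussianBlur(img, (5, 5), 0)
# Otsu picks the binarization threshold automatically
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
edges = cv2.Canny(binary, 50, 150)
# probabilistic Hough returns line segments rather than (rho, theta) pairs
lines = cv2.HoughLinesP(edges, rho=1, theta=math.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=10)
```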
Extended Decision-theoretic Rough Set Models Based on Fuzzy Minimum Cost
ZHONG Jin-yi and YE Dong-yi
Computer Science. 2014, 41 (3): 50-54. 
The loss function in decision-theoretic rough set theory is generally single-valued. Considering the uncertainty inherent in practical decision-making, we introduced a fuzzy-number-based loss function to deal with a more general decision-making problem under uncertainty. The fuzzy distributions of the decision thresholds α and β were calculated through a series of fuzzy operations, and the corresponding decision rules were given. A method for obtaining tighter suprema and infima of the thresholds α and β was also presented. An example of oil investment illustrates the proposed model in applications.
Frequent Itemset Mining Parallel Algorithm Based on Chip Multi-core
ZHANG Bu-zhong,CHENG Yu-sheng and WANG Ze-lin
Computer Science. 2014, 41 (3): 55-58. 
The main task in association rule mining is to find frequent itemsets efficiently. Analysis of existing frequent itemset mining algorithms shows that the computation grows sharply with the scale of the data, so association rule mining over large data sets on a PC platform rarely achieves the desired results. Combining the advantages of the Eclat and dEclat algorithms, an efficient parallel frequent itemset mining algorithm, PEclat, was presented. PEclat runs in a multi-core, shared-memory computing environment, works with task-level parallelism, and can process large data sets. Experiments show that it achieves better performance regardless of data density.
Particle Swarm Optimization with Weight Increasing
LIU Jian-hua,ZHANG Yong-hui,ZHOU Li and HE Wen-wu
Computer Science. 2014, 41 (3): 59-65. 
Particle swarm optimization (PSO) is an intelligent algorithm that simulates the social behavior of bird flocks or fish schools and has been widely applied to all kinds of optimization problems. The inertia weight of PSO conventionally decreases progressively with iteration, but its value is usually decided empirically and rarely analyzed theoretically. This paper analyzed the inertia weight of PSO with a theoretical model, and then proposed a PSO model whose inertia weight increases progressively with iteration. Experiments on benchmark functions show that PSO with an increasing weight is superior to the traditional PSO with a decreasing weight and can match standard PSO.
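A minimal sketch of the core idea, standard PSO with a linearly increasing inertia weight, follows; all constants and the sphere benchmark are illustrative choices, not the paper's settings.

```python
# Sketch: canonical PSO update with inertia weight rising from w0 to w1.
import numpy as np

def pso(f, dim=10, n=30, iters=200, w0=0.4, w1=0.9, c1=2.0, c2=2.0):
    rng = np.random.default_rng(0)
    x = rng.uniform(-5.0, 5.0, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()]
    for t in range(iters):
        w = w0 + (w1 - w0) * t / iters   # inertia weight increases with t
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()]
    return g, pval.min()

best_x, best_val = pso(lambda z: float(np.sum(z * z)))  # sphere benchmark
```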
Measures of Uncertainty of Knowledge and their Relationships
LIU Cai-hui,MIAO Duo-qian,YUE Xiao-dong and ZHAO Cai-rong
Computer Science. 2014, 41 (3): 66-70. 
Measures of the uncertainty of knowledge are a hot topic in artificial intelligence. This paper systematically studied the interrelationships of classical measures of knowledge uncertainty in rough set theory. The results show that information granularity is equivalent to knowledge granularity, whereas rough entropy and co-entropy can be derived from information entropy. Two examples were employed to illustrate the results.
Cold-start Recommendation Based on Granular Association Rules
WU Wen-jia and HE Xu
Computer Science. 2014, 41 (3): 71-75. 
Recommendation systems have been widely used in many fields such as e-commerce. The cold-start problem is one of the difficulties of recommendation systems. This paper designed a cold-start recommendation approach based on granular association rules. First, we used granules to describe users and items. Then we generated rules between users and items that satisfy four measures of granular association rules. Finally, we matched suitable rules to recommend items to users. Experiments were undertaken on the publicly available MovieLens dataset. Results show that granular association rule mining can be used for recommendation on the training and testing sets effectively and accurately.
Human Pose Estimation Based on Appearance Model for Constraint Tree Pictorial Structure
WANG Hao,LIU Ze-fen,FANG Bao-fu and CHEN Jin-jin
Computer Science. 2014, 41 (3): 76-79. 
Human pose estimation is one of the hot topics in computer vision and can be used for pedestrian detection, human activity analysis, human-computer interaction and video surveillance. Aiming at the problem that the appearance models of human parts are vulnerable to background interference in pose estimation algorithms based on the tree pictorial structure (PS) model, we put forward a pose estimation algorithm whose appearance model is based on prior segmentation and an appearance transfer mechanism. Following the PS model, a human body detector and foreground highlighting are used in preprocessing to find the approximate position and size of the body while removing background clutter; the appearance models of the body parts are then estimated based on prior segmentation and the appearance transfer mechanism. Experiments show that the algorithm not only reduces the search space of the parts, but also improves the accuracy of body pose estimation.
Rough Set and Analytic Hierarchy Process-based Approach on Supplier Selection
WANG Lei,YE Jun and ZHANG Hong-li
Computer Science. 2014, 41 (3): 80-84. 
It is well known that supply chain management (SCM) is a mode of management characterized as systematic, integrative and agile. SCM enables several enterprises to enter into an alliance that can fulfill mass customized production, giving the enterprises strong market response capability. The selection of suppliers is one of the crucial problems in building a supply chain system, and it is also the foundation on which an enterprise can put JIT into execution. This paper proposed a novel combinatorial approach that combines the Analytic Hierarchy Process (AHP) and rough set theory for supplier selection in SCM. The main work includes: an overview of commonly used supplier selection methods; a detailed analysis of the principles and indices for supplier selection; and the construction of a comprehensive judgment matrix combining the subjective judgment matrix from the AHP approach with the objective judgment matrix from the rough set method, on the basis of which an initial exploration of supplier selection in SCM is accomplished. An application example demonstrates the validity and feasibility of the proposed approach.
Graph Representation and 2-part Matrix of Covering-based Rough Sets
SUN Feng and WANG Jing-qian
Computer Science. 2014, 41 (3): 85-87. 
Covering-based rough sets were studied through graphs and matrices. Firstly, bipartite graphs associated with a covering were proposed, and any two of them were shown to be isomorphic. A type of covering-based lower and upper approximation operators was then represented through a bipartite graph associated with a covering. Secondly, the 2-part matrix of a bipartite graph was defined. From the 2-part matrix of a bipartite graph associated with a covering, one can not only determine whether the covering is unary, but also obtain the reducible elements of the covering. Finally, some characteristics of 2-part matrices of bipartite graphs associated with a partition were studied.
Research on Parallel Chinese Syntactic Analysis Based on Hadoop Platform
LIU Sheng-jiu,LI Tian-rui,JIA Zhen and ZHU Jie
Computer Science. 2014, 41 (3): 88-90. 
As one of the research focuses of natural language processing, syntactic analysis has received much attention. Aiming at the low efficiency and mediocre accuracy of current syntactic analysis methods, we investigated a parallel method of Chinese syntactic analysis that exploits the computing power of a cloud computing platform. We realized parallel Chinese syntactic analysis on a Hadoop cloud computing test platform with an open corpus and an open-source parser. Both experimental results and theoretical analysis confirm the feasibility, effectiveness and stability of the proposed Hadoop-based parallel parsing method.
Improved Knowledge Characteristic-driven Task Decomposition Model
FAN Shao-qiang,WANG Guo-yin and LI Mei-zheng
Computer Science. 2014, 41 (3): 91-95. 
Task decomposition is widely used to solve large-scale and complex problems, and many task decomposition methods have been presented. The knowledge characteristic-driven task decomposition model can divide an original problem into several smaller tasks without much prior experience, but it neglects the noisy points of subtasks. By inserting a process that removes noisy points and expands the subtasks, an improved knowledge characteristic-driven task decomposition model was obtained. We carried out experiments on the two-spiral problem and the UCI abalone and yeast data sets. The results show that our method achieves better accuracy.
Energy-level Based Clustering Network Topology Control Algorithm
LI Peng-fei,LI Zhi-hua,YIN Xi,SUN Ya and ZHANG Hua-wei
Computer Science. 2014, 41 (3): 96-99. 
Considering the unbalanced energy consumption and premature death of nodes in wireless sensor networks (WSNs), this paper presented an energy-level based clustering algorithm (ELBC) and its multi-hop variant (M-ELBC). By defining the concept of energy level and considering the location of the sink, the algorithms dynamically adjust the proportion of factors to ensure a reasonable distribution of clusters and balance the energy consumption. By optimizing the preference parameter based on the nodes' residual energy, the algorithms improve the competitiveness of high-energy nodes so as to delay the appearance of the first dead node. Experimental results show that ELBC and M-ELBC can put off the appearance of dead nodes, balance the nodes' energy consumption and prolong the network lifetime.
Stability-oriented Adaptive Routing Overhead Control Algorithm in MANETs
HU Xi,WANG Xin and ZHANG Bin
Computer Science. 2014, 41 (3): 100-104. 
To discover and establish routes with longer lifetimes, which enhances route availability and the consistency of data transmission, intermediate nodes executing stability-oriented routing algorithms need to forward more route request (RREQ) packets, which in turn increases the routing overhead markedly. Therefore, a stability-oriented adaptive routing overhead control algorithm was proposed. It models RREQ forwarding as a strategic game and calculates the forwarding probability of RREQs from the mixed-strategy Nash equilibrium of the game, thereby realizing probabilistic RREQ forwarding. Simulation results show that the proposed algorithm not only maintains route stability and packet delivery ratio, but also reduces routing overhead and transmission delay effectively.
Moving-Speed-Aware Fast Handoff Investigation in IEEE 802.11 Networks
WU Yan-ling,LI Ming and HAN Qing-tao
Computer Science. 2014, 41 (3): 105-109. 
Wireless local area networks (WLANs) allow mobile users to enjoy wireless services through access points (APs), and mobile users can move freely in such environments. However, handoffs are triggered when crossing different WLANs. Most existing work does not consider the case where a mobile user suddenly changes moving direction during a handoff procedure and enters another WLAN. This paper proposed a fast handoff algorithm based on moving speed (FHBS) that takes both moving direction and velocity into account. Since network resources are scheduled by access control centers (ACCs), a fast scheduling mechanism is established, and because the information of all available channels is broadcast by the ACCs, access collisions caused by nodes initiating handovers are eliminated, yielding outstanding performance. Simulation results show that FHBS reduces packet loss, handoff blocking probability and handoff latency by about 14%, 14% and 55%, respectively.
User Behavior Analysis Based on Web Browsing Logs
GUO Jun-xia,GAO Cheng,XU Nan-shan and LU Gang
Computer Science. 2014, 41 (3): 110-115. 
With the long-term accumulation of Q&A community information, more and more outdated information is indexed by search engines, bringing inconvenience to users. A user's browsing log contains the user's behavioral intentions and habits, which can help analyze the timeliness of information. This paper proposed a query-process-division method for users' browsing logs. Based on this method, a large number of real users' browsing histories were statistically analyzed. The results show that on average a user browses 8.05 web pages in 6.28 minutes for one query, and that nearly 1/3 of all queries are carried out concurrently and alternately. It was also found that users rely more on inner-site searching. By analyzing the browsing histories of a community site, we found that users are unsatisfied with query results mainly because of poorly related results; out-of-date information is only a small part of the query results.
Research on Method and Strategy of VOD Energy Saving
LU Jian-jun,ZHANG Xiao and ZHAO Xiao-nan
Computer Science. 2014, 41 (3): 116-119. 
With the rapid development of mass data applications, reducing their huge energy consumption has become a research focus. This paper analyzed the mass storage characteristics of online video on demand (VOD), modeled video access periods with a normal distribution and, combining the characteristics of disk technology, put forward a VOD energy-saving strategy in which the disks are organized into different levels. When downgrading, the time window is adjusted based on historical time intervals, and the distribution is formulated depending on the type of video system, so as to meet the system's performance and energy-saving demands at the same time. According to our calculations, average energy savings reach 44 percent with this strategy.
Time Delay Analysis of TDMA Data Link Message Transmission Based on Queue Theory
YANG Guang,YAO Lu and REN Pei
Computer Science. 2014, 41 (3): 120-123. 
The timeliness of information is affected by many factors such as the resource distribution scheme, service rules and the characteristics of the information. Reducing the transmission time of data link messages guarantees the timeliness of received information. The time delay of TDMA data link message transmission was studied using queueing theory, and the relation of service response time to message arrival intensity and message service intensity was analyzed. The time delay of TDMA data link message transmission in relaying and forwarding modes was then studied and simulated using an OPNET network simulation model.
Method for Fully Distributed Crawler Cluster Based on Kademlia
HUANG Zhi-ming,ZENG Xue-weng and CHENG Jun
Computer Science. 2014, 41 (3): 124-128. 
To address the efficiency, balance, reliability and scalability issues encountered when organizing a mass of crawler nodes into a fully distributed crawler cluster, we proposed a fully distributed crawler cluster method based on Kademlia. The method establishes the underlying communication mechanism between crawler nodes by improving the Kademlia technique. On this basis, exploiting the XOR characteristics of Kademlia and the available resources of each node, we designed and implemented a distributed crawler cluster model with task partitioning, exception handling, node join and exit processing, and load balancing. Experiments on an actual system show that this method can exploit the computing, storage and bandwidth resources of a large number of weak terminals to build a fully distributed crawler cluster that is efficient, balanced, reliable and scalable.
Low Overhead Time Synchronization Algorithm for Wireless Sensor Network
JIANG Ying,GUO Shu-xia,GAO Jin-qiao and WANG Hong-bo
Computer Science. 2014, 41 (3): 129-131. 
The traditional TPSN algorithm can synchronize time between nodes quickly and efficiently, but performs poorly when nodes join and fail frequently. This paper proposed an improved time synchronization protocol for wireless sensor networks, ITPSN. The algorithm does not construct the network topology and remains efficient when nodes join and fail frequently. Experiments show that the algorithm improves the robustness of the network and, in the case of dense node deployment, reduces energy consumption and thereby extends node lifetime.
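For context, the pairwise two-way timestamp exchange underlying TPSN-style synchronization computes offset and delay as sketched below; ITPSN changes how synchronization is organized, not this arithmetic, and the timestamps here are illustrative.

```python
# Sketch of the TPSN two-way exchange arithmetic.
def tpsn_offset_delay(t1, t2, t3, t4):
    """Child sends at t1, parent receives at t2, parent replies at t3,
    child receives the reply at t4 (each in its own local clock)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0  # parent clock minus child clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0   # one-way propagation delay
    return offset, delay

# the child corrects its clock by +offset after the exchange
offset, delay = tpsn_offset_delay(t1=100.0, t2=105.4, t3=106.0, t4=108.6)
```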
Method of Semantic Cache Consistency Checking in Mobile Computing Environments Based on Agent Technology
LIANG Ru-bing and LIU Qiong
Computer Science. 2014, 41 (3): 132-136. 
The callback algorithm is a server-driven cache management method, but it suffers from problems such as write delay and the need to revalidate cached data when the network is reconnected. This paper proposed a novel semantic cache scheme that uses agent technology to check cache consistency. Firstly, a Client/MSS/Server architecture was presented and the functions of the agents were given. Secondly, we discussed consistency checking triggered by data query operations from terminals and by write operations from the server. By using agents to manage a terminal's local cache records and to forward the server's invalidation data, the scheme not only tolerates terminal disconnection, but also reduces write delay and maintains strong cache consistency. Experimental results show that the proposed method speeds up query response time and overcomes the aforementioned disadvantages of the callback algorithm, and is thus better adapted to network environments with frequent mobility and disconnections.
Reconfigurable Management for Large Scale Dynamic Publish/Subscribe Systems
CHEN Jin-hui and DONG Biao
Computer Science. 2014, 41 (3): 137-140. 
Reconfigurability makes publish/subscribe (P/S) systems amenable to highly dynamic environments. Existing P/S systems are typically unable to rearrange their operations dynamically to adapt to changes that affect the topology of their event broker infrastructure. A new reconfigurable management approach called RS3DS (reconfigurable sparse 3-dimensional space) was proposed for large-scale dynamic P/S systems, and a reconfiguration strategy for subscription tables based on the notions of composition and decomposition from RS3DS was analyzed. Simulation results show that RS3DS has low reconfiguration overhead and a high event delivery rate.
Research on Load Balancing of Distributed File System in Cloud Computing
YIN Xiang-dong,YANG Jie and QU Chang-qing
Computer Science. 2014, 41 (3): 141-144. 
In cloud computing, files are divided into chunks and stored in a distributed file system. However, updates of the system state, such as nodes joining and leaving, cause an unbalanced distribution of file chunks and thereby degrade system performance considerably. To solve this problem, this paper proposed a distributed load balancing algorithm and compared it with existing centralized and distributed load balancing algorithms. Experiments show that the proposed algorithm adds only a little overhead while eliminating the single-point bottleneck of the centralized algorithm, and performs clearly better than the existing distributed load balancing algorithm.
Preemptive Scheduling for Multiple DAGs in Cloud Computing
SUN Yue,YU Jiong and ZHU Jian-bo
Computer Science. 2014, 41 (3): 145-148. 
Different users place different QoS demands on DAG workflows. A three-layer dynamic scheduling model for multiple DAGs was proposed to address fairness and resource utilization. In the algorithm, priorities are divided according to the QoS demands of the DAGs, and high-priority DAGs get preferential access to resources. DAGs of the same priority can preempt resources according to their slowdown, used as heuristic information, which improves the fairness of workflow scheduling. Since high-priority DAGs are served first, resource utilization must also be considered: a modified, threshold-based Backfill algorithm was introduced, improving resource utilization while guaranteeing that high-priority jobs obtain resources first.
Discrete Data Fusion with Integral Discrete Guidance in Internet of Things
ZUO Yan-hong and ZHANG Ke-ren
Computer Science. 2014, 41 (3): 149-152. 
Discrete data fusion with integral discrete guidance in the Internet of Things was studied. The first problem to be solved is the data fusion process in a discrete manufacturing system with many different kinds of nodes. In traditional discrete manufacturing systems, data is processed in a distributed way and the data of each node is treated alone, so the advantages of all the data cannot be integrated to achieve better overall efficiency. A discrete data fusion method with integral discrete guidance was therefore proposed: networking technology from distributed systems gives the discrete manufacturing system unified data collection terminals and integration, and a discrete-point method is then used to process all the differential data, achieving effective integration of all data. A group of 100 nodes with six types of data was used to test the performance, and the experimental results show that with the proposed fusion method the data is fused well and its spectral distribution is even, so the algorithm has good application value.
Realization of Bayesian Algorithm for Detecting Botnets Based on MapReduce
SHAO Xiu-li,GENG Mei-jie and JIANG Hong-ling
Computer Science. 2014, 41 (3): 153-158. 
Although botnets can be detected more accurately using a Bayesian algorithm, botnet traffic is large and training the Bayesian classifier requires large network datasets, so detection on a single node runs into a bottleneck in computation time and resources. To this end, this paper designed a MapReduce-based Bayesian algorithm that computes, in parallel, the prior and conditional probabilities in the training phase and the posterior probabilities in the detection phase. Extensive experiments on a Hadoop platform show that this method improves the efficiency of botnet detection.
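A hedged sketch of the training-phase parallelization: a map step emits count pairs per flow record and a reduce step aggregates them, from which the priors and conditionals follow. Plain Python stand-ins replace Hadoop's plumbing, and the records are toy data.

```python
# Sketch: map/reduce counting for Naive Bayes training.
from collections import Counter
from itertools import chain

def map_record(record):
    label, features = record
    yield (("prior", label), 1)                 # one count per class
    for i, v in enumerate(features):
        yield (("cond", label, i, v), 1)        # one count per feature=value

def reduce_counts(pairs):
    counts = Counter()
    for key, n in pairs:
        counts[key] += n
    return counts

records = [("bot", ("high", "many")), ("normal", ("low", "few"))]  # toy flows
counts = reduce_counts(chain.from_iterable(map_record(r) for r in records))
p_bot = counts[("prior", "bot")] / len(records)  # class prior from counts
```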
Differential Transition Probability Analysis of SHA-3 Permutation Function
GAO Xiao-dong,YANG Ya-tao and LI Zi-chen
Computer Science. 2014, 41 (3): 159-162. 
By analyzing the permutation function Keccak-f of SHA-3, a cyclic shift method for three-dimensional arrays was proposed. According to the structure of each step transformation of Keccak-f, Boolean expressions of the output difference were constructed. By analyzing these Boolean expressions, it was proved that for every step transformation of Keccak-f the differential transition probability is invariant under cyclic shifts. On this basis, it was shown that when two input differences and their corresponding output differences have the same cyclic shift relationship, the differential transition probability of the whole permutation function Keccak-f is invariant under the cyclic shift.
Permissive Type System for Internal Timing Information Flow in Multi-thread Programs
LI Qin and YUAN Zhi-xiang
Computer Science. 2014, 41 (3): 163-168. 
This paper proposed a permissive type theory for checking internal timing channels in multi-thread programs. A formal specification of non-interference was defined based on the set of hidden-racing variables. In the type system, threads are discriminated according to the way they handle low-level variables, which refines the analysis of the scenarios in which internal timing channels occur. In contrast to existing methods, type checking in this work is more permissive because it produces fewer false positives. In addition, the soundness of the type system is proved independently of the scheduler model.
New Square Attack on SNAKE (2)
ZHENG Ya-fei and WEI Hong-ru
Computer Science. 2014, 41 (3): 169-171. 
The security of the block cipher SNAKE(2) against Square attacks was re-evaluated. The incorrect 5-round Square distinguisher based on an equivalent structure given in paper [4] was pointed out, and a new 6-round Square distinguisher based on both the structure of SNAKE(2) and its equivalent structures was proposed. Using the new 6-round distinguisher, Square attacks were applied to 7-, 8- and 9-round SNAKE(2) to recover some information about the equivalent key. The time complexities are 2^12.19, 2^21.59 and 2^30.41 respectively, and the data complexities are 2^9, 2^9.59 and 2^10 respectively. These results are better than the Square attack given in paper [4].
Image Information Hiding Algorithm with High Security Based on Run-length
XIE Jian-quan,XIE Qing and HUANG Da-zu
Computer Science. 2014, 41 (3): 172-175. 
Aiming at the problem that most information hiding algorithms are not secure enough for secret communication, a run-length-based information hiding algorithm was proposed. Its main idea is to separate the image into several binary planes; by exploiting the parities of the run lengths in a binary plane, one bit of information can be embedded while only a single pixel located on a black-white boundary is modified. Blind extraction of the hidden information is achieved. The algorithm neither noticeably changes the distribution of 0s and 1s in the image's low bit planes nor reduces the run lengths, so it can withstand various detection methods aimed at LSB embedding and its improved variants. Simulation results show that the algorithm has high security, good imperceptibility and a large embedding capacity, and can be applied to secret communication and other situations requiring high security and large capacity.
DDoS Attacks Detection Method Based on Traffic Matrix and Kalman Filter
YAN Ruo-yu
Computer Science. 2014, 41 (3): 176-180. 
Distributed denial of service (DDoS) attack traffic often places an unbearable burden on routers, so a new DDoS attack detection method was proposed to relieve the burden and detect attacks quickly and accurately. In this method, a traffic matrix between the router's ports is first constructed to precisely describe the aggregation characteristics of DDoS attack traffic. A Kalman filter is then applied to estimate the traffic matrix, and a generalized likelihood ratio (GLR) statistical test is used to detect traffic anomalies, after which each router port is judged for DDoS attacks. Finally, a simulation experiment with real data compared the method with the PCA method, showing that the proposed method has a higher detection rate, a lower false alarm rate and a smaller detection lag.
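A minimal sketch of the detection idea, a scalar Kalman filter whose innovations feed a GLR test for a mean shift, is shown below; it is one-dimensional for brevity (the paper works on a traffic matrix) and all constants are illustrative.

```python
# Sketch: Kalman innovations + GLR test for an abrupt mean shift.
import numpy as np

def kalman_residuals(y, q=1e-3, r=1.0):
    x, p, res = 0.0, 1.0, []
    for z in y:
        p += q                       # predict step (random-walk model)
        k = p / (p + r)              # Kalman gain
        res.append(z - x)            # innovation before the update
        x += k * (z - x)
        p *= (1 - k)
    return np.array(res)

def glr_stat(res, window=20):
    # GLR for a mean jump: max over change points of k * mean^2 / (2 var)
    w = res[-window:]
    var = res.var() + 1e-12
    return max(len(w[j:]) * w[j:].mean() ** 2 / (2 * var)
               for j in range(len(w) - 1))

y = np.r_[np.random.normal(0, 1, 200), np.random.normal(4, 1, 30)]
alarm = glr_stat(kalman_residuals(y)) > 10.0  # threshold is illustrative
```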
TRBAC-based Dynamic Multilevel Access Control Model for Web Services
CHEN Xue-long,ZHENG Hong-yuan and DING Qiu-ling
Computer Science. 2014, 41 (3): 181-184. 
Based on the research status of access control for Web services and the shortcomings of current models, a TRBAC-based dynamic multilevel access control model for Web services was proposed. The model defines relevant concepts and gives their formal representation and constraint rules. To achieve fine-grained permissions, the model introduces services and service attributes and designs a three-level access control mechanism. The concepts of Role Actor and Task Manager were presented, and the connotations of role constraints and task constraints were extended and strictly defined. The model combines temporal constraints, task context, task state and the SSOD principle to describe dynamic permissions, improving the security of Web services and their access control mechanism. Preliminary results of applying the model in the condition security system of a defense industry enterprise are good.
Hierarchical Affinity Propagation Clustering for Large-scale Data Set
LIU Xiao-nan,YIN Mei-juan,LI Ming-tao,YAO Dong and CHEN Wu-ping
Computer Science. 2014, 41 (3): 185-188. 
Affinity propagation (AP) has advantages in efficiency and accuracy and does not require the number of clusters to be set, but it is not suitable for large-scale data clustering. Hierarchical affinity propagation (HAP) was proposed to overcome this problem. Firstly, the data set is divided into several subsets that AP can cluster effectively, and the exemplars of each subset are selected. Then AP clustering is applied again to all the subset exemplars to select the exemplars of the whole data set. Finally, all data points are clustered according to their similarities with the exemplars, realizing efficient clustering of the large-scale data set. Experimental results on real and simulated data sets show that, compared with traditional AP and adaptive AP, HAP greatly reduces the time consumption while achieving good clustering results.
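A hedged sketch of the HAP pipeline using scikit-learn's AffinityPropagation follows; the data, subset count and parameters are illustrative.

```python
# Sketch: cluster subsets, re-cluster the subset exemplars, then assign
# every point to its nearest final exemplar.
import numpy as np
from sklearn.cluster import AffinityPropagation

X = np.random.default_rng(0).normal(size=(3000, 2))
subsets = np.array_split(X, 10)                     # stage 1: split

exemplars = []
for s in subsets:
    ap = AffinityPropagation(random_state=0).fit(s)
    exemplars.append(s[ap.cluster_centers_indices_])
E = np.vstack(exemplars)

ap2 = AffinityPropagation(random_state=0).fit(E)    # stage 2: re-cluster
centers = ap2.cluster_centers_
# stage 3: assign all points to the nearest final exemplar
labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
```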
Autonomic Computing Model Based on Hierarchical Management in Cloud Environment
LIU Wen-jie and LI Zhan-huai
Computer Science. 2014, 41 (3): 189-192. 
Autonomic computing is an effective way to give complex IT systems self-recovery and self-management. In cloud environments, with the rapid growth of computing resources, traditional resource allocation algorithms and models cannot allocate resources effectively, so the resource usage rate is very low. To enhance resource management efficiency, this paper proposed an autonomic computing model for cloud environments that divides the autonomic manager into master and slave, uses hierarchical management to resolve the bottleneck that arises when dealing with massive requests, and uses mixed policies for different fault recovery requests. The model can lower resource management costs in the cloud environment and meet the SLAs that cloud services require.
Novel Labeling Scheme for XML Update-supporting
FU Peng,JIANG Xia-jun and PI De-chang
Computer Science. 2014, 41 (3): 193-197. 
A novel labeling scheme called DVLS (Dynamic Vector Labeling Scheme) was proposed to process updates in dynamic XML data. DVLS is made up of three vector codes, overcoming the defect of traditional prefix schemes in which the label length grows with the depth of the XML document tree. The main idea of DVLS is to use vector addition to support updates of XML node data; it simplifies both static and dynamic XML documents in order to improve query efficiency. Comparative experiments between DDE and DVLS confirm that DVLS, based on vector order, is more efficient.
Research on Web-based Project Management Simulation
WANG You-tian and HU Bin
Computer Science. 2014, 41 (3): 198-201. 
In order to study the task-processing process in an organization, we summarized simulation methods and techniques and applied simulation to project management. We also implemented a Web-service-based project management simulation system using rich internet application techniques such as SVG and JavaScript, making data validation and graphic presentation available on the browser side during simulation.
Cleaning Method Research of RFID Data Stream Based on Improved Kalman Filter
CHEN Jing-yun,ZHOU Liang and DING Qiu-lin
Computer Science. 2014, 41 (3): 202-204. 
RFID (radio frequency identification) technology has been widely applied to the communication between RFID readers and dynamic electronic tags, but the data captured by RFID readers tends to be noisy. To better support high-level RFID applications, the collected data must be cleaned. Considering that tags move frequently, this paper put forward an improved Kalman filter model that combines the sliding window technique with the Kalman filter, and proposed an RFID data cleaning method based on it. The method not only guarantees the accuracy of data cleaning but also effectively solves the time delay caused by dynamic electronic tags, so it is better suited to situations where tags move frequently. Experimental results show that this approach improves the efficiency and accuracy of data cleaning.
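A minimal sketch of the cleaning idea, a scalar Kalman filter fed by a sliding-window smoothed read stream, is given below; the window size and noise parameters are illustrative, not the paper's.

```python
# Sketch: smooth recent reads over a sliding window, then Kalman-filter
# the smoothed stream so bursts of missed/noisy readings are damped.
from collections import deque

def clean_stream(readings, window=5, q=0.01, r=0.5):
    buf, x, p, out = deque(maxlen=window), 0.0, 1.0, []
    for z in readings:
        buf.append(z)
        z_win = sum(buf) / len(buf)   # sliding-window smoothed read
        p += q                        # predict
        k = p / (p + r)               # Kalman gain
        x += k * (z_win - x)
        p *= (1 - k)
        out.append(x)
    return out

cleaned = clean_stream([1, 0, 1, 1, 0, 1, 1, 1])  # toy detect/miss stream
```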
Branching Temporal Description Logic ALC-CTL and its Satisfiability Decision
LI Shen,CHANG Liang,MENG Yu and LI Feng-ying
Computer Science. 2014, 41 (3): 205-211. 
As a combination of description logic and temporal logic, temporal description logics offer considerable expressive power. However, most research on temporal description logics has concentrated on the case where temporal operators occur within concepts and formulas, a setting in which reasoning usually becomes quite hard. This paper combined the standard description logic ALC with the standard temporal logic CTL and proposed a new temporal description logic, ALC-CTL. More precisely, temporal operators are applied only to formulas, not to concepts. From the perspective of computation tree logic, this amounts to promoting the atomic propositions of CTL to individual assertions of ALC. ALC-CTL not only offers expressive power going beyond both ALC and CTL, but also keeps its satisfiability problem EXPTIME-complete. Based on a combination of the Tableau algorithm of CTL and the reasoning mechanism of ALC, this paper presented a Tableau decision algorithm for ALC-CTL and proved that the algorithm is terminating, sound and complete.
Properties of High-dimensional Data Space and Metric Choice
HE Jin-rong,DING Li-xin,HU Qing-hui and LI Zhao-kui
Computer Science. 2014, 41 (3): 212-217. 
High-dimensional data analysis is a core task of machine learning and data mining. By finding optimal subspaces for data representation, dimensionality reduction algorithms can reduce computational cost and improve the performance of subsequent classification or clustering algorithms, leading to effective techniques for high-dimensional data analysis. However, there is very little guidance for theoretical analysis of high-dimensional data. This paper reviewed some statistical and geometrical properties of high-dimensional data spaces and gave intuitive explanations of the "concentration of measure" phenomenon from different perspectives. To improve the performance of classical distance-based machine learning algorithms, the paper discussed the effects of the choice of metric on high-dimensional data analysis. Empirical results show that fractional distance metrics can significantly improve the performance of K-nearest-neighbor and K-means.
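As a concrete illustration of the metric discussed, a fractional Minkowski distance with 0 < p < 1 can be computed as below; p = 0.5 is an illustrative choice, and note that fractional "distances" are not true metrics, since the triangle inequality can fail.

```python
# Sketch: fractional Minkowski distance for high-dimensional vectors.
import numpy as np

def fractional_distance(a, b, p=0.5):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.sum(np.abs(a - b) ** p) ** (1.0 / p)

d = fractional_distance(np.ones(100), np.zeros(100))
```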
Vehicle Routing Algorithm Based on Spatiotemporal Clustering
QI Ming-yao,ZHANG Jin-jin and REN Li
Computer Science. 2014, 41 (3): 218-222. 
A route improvement method that considers spatial and temporal features simultaneously was proposed to solve the vehicle routing problem with time windows. A spatiotemporal representation of vehicle routes was presented to measure the spatiotemporal distance between two customers. A genetic algorithm was then designed to cluster the customers into groups according to their spatiotemporal distances. The resulting customer groups are used for route adjustment: when a customer is moved to another route, only the nearby routes are searched and considered, which dramatically reduces the search space. Calculations on the 1000-customer instances designed by Gehring and Homberger show that the proposed algorithm obtains better solutions in shorter time.
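A hedged sketch of one way such a spatiotemporal distance could be defined, blending spatial separation with time-window incompatibility, follows; the weighting and the temporal term are illustrative and may differ from the paper's definition.

```python
# Sketch: weighted blend of spatial distance and time-window separation.
import math

def st_distance(c1, c2, alpha=0.5):
    (x1, y1, e1, l1), (x2, y2, e2, l2) = c1, c2  # (x, y, earliest, latest)
    d_space = math.hypot(x1 - x2, y1 - y2)
    overlap = max(0.0, min(l1, l2) - max(e1, e2))   # shared service window
    d_time = (max(l1, l2) - min(e1, e2)) - overlap  # window separation
    return alpha * d_space + (1 - alpha) * d_time

d = st_distance((0, 0, 8, 10), (3, 4, 9, 12))
```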
Labeled-LDA Text Classification Algorithm Based on Graph Model for “Central Topic Oblivion Problem”
LI Wei,MA Yong-zheng and SHEN Yi
Computer Science. 2014, 41 (3): 223-227. 
Latent Dirichlet Allocation (LDA) is an unsupervised topic model used to mine latent topic information from a corpus. Labeled-LDA, a variant of LDA, can perform multi-label classification of labeled documents by establishing a one-to-one mapping from topics to labels and learning the relationship between words and labels. Recently, graph models have obtained good results in text mining and provide a new way to analyze the semantics of documents. This paper proposed a new text classification method combining complex network theory with Labeled-LDA. Experimental results show that the new method improves Macro-F1 compared to the traditional LDA model.
Firefly Algorithm Based on Improved Evolutionism
FU Qiang,TONG Nan,ZHONG Cai-ming and ZHAO Yi-ming
Computer Science. 2014, 41 (3): 228-231. 
By analyzing the evolutionary computation mechanism of the firefly algorithm, a new evolutionary computation model was proposed to solve the problem of premature stagnation. At the beginning, the fireflies evolve by following the globally best firefly; once a mutual system for exchanging information is established among the fireflies, each firefly is attracted by the brighter glow of neighboring fireflies. When the population falls into a local optimum, Gaussian mutation is used to improve the fireflies' diversity. Experimental results on 5 classic benchmark functions indicate the feasibility and validity of the improved firefly algorithm.
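A minimal sketch combining the two ingredients named above, attraction toward brighter fireflies plus Gaussian mutation on stagnation, is given below; constants are illustrative, not the paper's.

```python
# Sketch: firefly attraction step with a Gaussian-mutation escape.
import numpy as np

def firefly(f, dim=5, n=20, iters=100, beta0=1.0, gamma=1.0, sigma=0.1):
    rng = np.random.default_rng(0)
    x = rng.uniform(-5, 5, (n, dim))
    for _ in range(iters):
        val = np.apply_along_axis(f, 1, x)
        for i in range(n):
            for j in range(n):
                if val[j] < val[i]:          # j is brighter: attract i
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] += beta * (x[j] - x[i]) + sigma * rng.normal(size=dim)
        if val.std() < 1e-6:                 # stagnation: Gaussian mutation
            x += rng.normal(0, 0.5, x.shape)
    return x[np.apply_along_axis(f, 1, x).argmin()]

best = firefly(lambda z: float(np.sum(z * z)))  # sphere benchmark
```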
Modeling Decision-making Behavior Based on Knowledge of Emotional Graded BDI Agents
ZHANG Xiao-jun,LIN Ying and ZHOU Chang-le
Computer Science. 2014, 41 (3): 232-237. 
In order to formalize the degree of influence of knowledge, belief, desire, intention, fear, anxiety and self-confidence on decision-making behavior, we extended the truth values of infinite-valued Łukasiewicz logic from [0,1] to [-1,1]. The decision-making behavior of a knowledge-based emotional graded BDI agent is determined by the different measures of each context, which are added under concrete conditions. The proposed agent model explicitly represents the uncertainty of knowledge states, mental attitudes and emotional states, and is general enough to specify other types of agents. After presenting the language and semantics of the model and illustrating the interrelations between its contexts, an application of the knowledge-based emotional graded BDI agent to military decision-making was given. It is hoped that this study will provide formal support for distributed artificial intelligence and military simulation.
Models of Network Information Propagation Based on Game Theory
GUO Yan-yan,TONG Xiang-rong,LIU Qi-cheng,LONG Yu and LI Ye
Computer Science. 2014, 41 (3): 238-244. 
On the Internet there is much information whose authenticity cannot be judged immediately. For Internet users, whether or not to forward such information is a kind of game; likewise, for the information owner who knows its authenticity, whether or not to propagate true information is a game. Taking real network information propagation as the background and combining game theory with multi-agent technology, several game models suited to different network information propagation processes were established. Players' strategies were analyzed through payoff matrices, and the basic properties and practical significance of the models were explained. To make the models fit the actual situations of network information propagation better, the liquidity of network information transmission and a trust mechanism were taken into account in the modeling. These models lay a solid foundation for agent-based simulation experiments and for dynamic and evolutionary game models.
Approximate Reduction for Objects in Inconsistent Decision Tables
ZHAI Cui-hong and QIN Ke-yun
Computer Science. 2014, 41 (3): 245-248. 
This paper studied the approximate reduction of objects in inconsistent decision tables. Firstly, judgment theorems for the approximate reduction were obtained. Secondly, a calculation method for the approximate reduction based on discernibility matrices and functions was given. Finally, compared with the overall reduction of a decision table, the reduction of objects not only yields more concise knowledge, but also has good application value in real life.
Algorithm for Computing All NE Repeats in Sequence Based on QSA Array
Munina YUSUFU, Gulina YUSUFU and ZHANG Hai-jun
Computer Science. 2014, 41 (3): 249-252. 
Algorithms for recognizing and extracting repeating patterns in sequences have a wide range of practical applications in data mining, pattern recognition, data compression and bioinformatics. This paper proposed a new algorithm, RPT, for computing all NE repeats with constraints based on the QSA array. The characteristics of NE repeats are fully considered in the algorithm design, creating a statistical relation between the features and the repeat detection results. The constraints include a minimum period p_min and a maximum gap g_max, which are used to filter qualified NE repeats, and the algorithm outputs all occurrences of the NE repeats in ascending order. Compared with the existing suffix-index-based algorithm, the space efficiency is improved. Experiments on classified data sets show that RPT is effective for recognizing and extracting NE repeats in biological sequences, in particular DNA sequences, as well as Uyghur Web texts.
Research of Multi-layer Intelligent Modeling Method
WU Ai-yan,ZENG Guang-ping and TU Xu-yan
Computer Science. 2014, 41 (3): 253-257. 
The paper studied the research status of multi-layer state-space modeling methods and analyzed their advantages and disadvantages. It then extended the modeling method with the cloud model and put forward a multi-layer intelligent state-space modeling method, which can effectively handle the uncertainty of system information, especially the coexistence of fuzziness and randomness. The specification of the model and its intelligent information processing scheme were described. Finally, the method was used to analyze China's energy system and expound the solution method, and a simulation platform was developed to carry out comparison experiments. Positive results show that the method is simple and effective.
Granular Trees Based on Different Data Sets and their Modeling Applications
YAN Lin and SONG Jin-peng
Computer Science. 2014, 41 (3): 258-262. 
By classifying a data set into different partitions, a granular tree based on the data set is obtained. Using the information of associated elements, a relation called the associated relation was introduced, and connections between two granular trees based on different data sets were established, identifying two associated chains connected by an associated element. Because the granules in each associated chain gradually change from a coarse state to a fine state, the associated element model closely follows the changes of the granules, which is consistent with the data processing mode of granular computing. On this basis, a mathematical model serving as the foundation of an algorithm was created; it can be used to describe the relationship between talent supply and demand. An example shows the process of using the model for data processing.
New Feature Extraction Method Based on Bottleneck Deep Belief Networks and its Application in Language Recognition
LI Jin-hui,YANG Jun-an and WANG Yi
Computer Science. 2014, 41 (3): 263-266. 
Abstract PDF(1144KB) ( 523 )   
References | RelatedCitation | Metrics
In language recognition, because each frame carries insufficient information, traditional MFCC feature extraction is easily affected by noise. Meanwhile, the usual SDC feature extraction depends on manually chosen parameters, which increases the uncertainty of recognition performance. To overcome these drawbacks, deep learning was introduced and a novel feature extraction approach named BN-DBN, based on bottleneck deep belief networks, was proposed. Comparative experiments on the bottleneck layer size, the number of hidden layers and the position of the bottleneck layer were carried out on the NIST 2007 database. Experimental results show that bottleneck features extracted by deep belief networks are more effective in language recognition than traditional features.
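As a hedged sketch of the bottleneck idea only (not the paper's DBN pre-training pipeline; the layer sizes, frame dimension and language count are invented), a feed-forward network with one narrow hidden layer can be trained on frame-level language labels and then truncated at that layer to yield BN features:

    import torch
    import torch.nn as nn

    class BottleneckNet(nn.Module):
        def __init__(self, in_dim=39, hid=1024, bn=64, n_langs=14):
            super().__init__()
            self.pre = nn.Sequential(nn.Linear(in_dim, hid), nn.Sigmoid(),
                                     nn.Linear(hid, hid), nn.Sigmoid())
            self.bottleneck = nn.Linear(hid, bn)   # the narrow BN layer
            self.post = nn.Sequential(nn.Sigmoid(), nn.Linear(bn, n_langs))

        def forward(self, x):                      # used during training
            return self.post(self.bottleneck(self.pre(x)))

        def extract(self, x):                      # BN features after training
            with torch.no_grad():
                return self.bottleneck(self.pre(x))

    net = BottleneckNet()
    frames = torch.randn(8, 39)                    # 8 frames of 39-dim features
    print(net.extract(frames).shape)               # torch.Size([8, 64])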
Multi-valued Attribute Association Rules Mining Based on Concept Lattice
GUO Xiao-bo,ZHAO Shu-liang,WANG Chang-bin,ZHAO Jiao-jiao and LIU Jun-dan
Computer Science. 2014, 41 (3): 267-271. 
Abstract PDF(525KB) ( 494 )   
References | RelatedCitation | Metrics
Traditional association rule mining algorithms lack an efficient data selection mechanism for users and are particularly ill-suited to multi-valued attribute data. To address this, this paper redefined and classified multi-valued attribute data using concept lattices, proposed an improved Apriori algorithm based on the KAF and CHF factors for mining multi-valued attribute association rules, and established a complete mechanism for adjusting the mining parameters. This mechanism improves the speed and efficiency of mining and makes it convenient for users to select key attribute values for rule mining and analysis. Finally, the advantages of the new methods were illustrated on demographic data of a province; the application analysis and experimental results show that the improved algorithm performs better.
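For reference, here is a minimal Python sketch of the plain Apriori baseline that the paper improves; its KAF/CHF weighting and concept-lattice preprocessing are not reproduced, and the transactions are toy data.

    from itertools import combinations

    def apriori(transactions, min_support):
        """Plain Apriori: level-wise search for frequent itemsets."""
        n = len(transactions)
        level = {frozenset([i]) for t in transactions for i in t}
        freq, k = {}, 1
        while level:
            support = {c: sum(1 for t in transactions if c <= t) / n for c in level}
            frequent = {c: s for c, s in support.items() if s >= min_support}
            freq.update(frequent)
            # join step: merge frequent k-itemsets into (k+1)-candidates
            level = {a | b for a, b in combinations(frequent, 2) if len(a | b) == k + 1}
            k += 1
        return freq

    tx = [frozenset(t) for t in (["a", "b", "c"], ["a", "b"], ["a", "c"], ["b", "c"])]
    print(apriori(tx, min_support=0.5))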
Manifold Regularized-based Nonsmooth Nonnegative Matrix Factorization
JIANG Wei,CHEN Yao and YANG Bing-ru
Computer Science. 2014, 41 (3): 272-275. 
Abstract PDF(334KB) ( 425 )   
References | RelatedCitation | Metrics
The classical nonsmooth nonnegative matrix factorization (nsNMF) method discovers only the global statistical information of data and fails on nonlinearly distributed data, while manifold learning algorithms show great power in exploring the intrinsic geometric structure of high-dimensional data sets. To address this issue, we developed a manifold-regularization-based algorithm called Manifold Regularized nonsmooth Nonnegative Matrix Factorization (MRnsNMF). It not only considers the geometric structure of the data representation, but also imposes sparseness constraints on both the coding coefficients and the basis matrix simultaneously, integrating them into a single objective function. An efficient multiplicative updating procedure was derived, together with a theoretical justification of its convergence. The feasibility and effectiveness of MRnsNMF were verified on several standard data sets with promising results.
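To make the manifold-regularized updates concrete, here is a hedged numpy sketch of the graph-regularized NMF iteration (in the style of Cai et al.'s GNMF) that underlies the manifold term; the nonsmooth/sparseness factor of nsNMF is omitted, and the sample affinity matrix A is assumed to be given.

    import numpy as np

    def gnmf(X, A, k, lam=1.0, iters=200, eps=1e-9, rng=None):
        """Multiplicative updates for graph-regularized NMF.
        X: (m, n) nonnegative data; A: (n, n) symmetric sample affinities."""
        rng = np.random.default_rng() if rng is None else rng
        m, n = X.shape
        U, V = rng.random((m, k)), rng.random((n, k))
        D = np.diag(A.sum(axis=1))                 # degree matrix; L = D - A
        for _ in range(iters):
            U *= (X @ V) / (U @ (V.T @ V) + eps)
            V *= (X.T @ U + lam * (A @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
        return U, V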
Outlier Detection Algorithm Based on Natural Nearest Neighbor
ZHU Qing-sheng,TANG Hui and FENG Ji
Computer Science. 2014, 41 (3): 276-278. 
Abstract PDF(354KB) ( 836 )   
References | RelatedCitation | Metrics
When the k-nearest-neighbor method is used, it is difficult to choose an appropriate parameter k, which obviously affects the algorithm's efficiency and performance. Natural nearest neighbor, proposed by us, is a novel nearest-neighbor concept in which each point's neighbors are formed by an adaptive algorithm without the parameter k. In this paper we proposed an outlier detection algorithm based on natural nearest neighbors (ODb3N), obtained by modifying the iteration stop condition. Experiments show that our method not only has the advantage of being parameter-free, but can also discover both single outliers and clusters of outliers.
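Below is a hedged Python sketch of the underlying adaptive natural-neighbor search (using scikit-learn; the paper's modified stop condition for ODb3N is not reproduced): the neighborhood size r grows until every point is some other point's r-nearest neighbor, and points that accumulate few or no reverse neighbors are the natural outlier candidates.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def natural_neighbor_search(X, max_r=50):
        """Grow r until every point has a reverse neighbor; returns the final
        r (the natural neighbor eigenvalue) and reverse-neighbor counts."""
        n = len(X)
        max_r = min(max_r, n - 1)
        _, idx = NearestNeighbors(n_neighbors=max_r + 1).fit(X).kneighbors(X)
        reverse = np.zeros(n, dtype=int)
        for r in range(1, max_r + 1):
            for i in range(n):
                reverse[idx[i, r]] += 1        # i names idx[i, r] as its r-th NN
            if (reverse > 0).all():            # everyone is somebody's neighbor
                return r, reverse
        return max_r, reverse

    X = np.vstack([np.random.randn(100, 2), [[8.0, 8.0]]])   # one far outlier
    r, reverse = natural_neighbor_search(X)
    print("natural eigenvalue:", r, "most isolated points:", np.argsort(reverse)[:3])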
Fast Calculation Method of Space Transform Model
LU Xiao-jing and HUANG Xiang-sheng
Computer Science. 2014, 41 (3): 279-281. 
Abstract PDF(1589KB) ( 381 )   
References | RelatedCitation | Metrics
A novel fast spatial transformation model calculation approach based on Sobel feature point detection was proposed. A Sobel descriptor is constructed by exploiting the sensitivity of the Sobel operator to edges; features are extracted and matched, wrong matches are eliminated by the PROSAC algorithm, and the spatial transformation model between the two images is calculated. Experimental results demonstrate that the structurally simple Sobel descriptor extracts feature points quickly and matches them between two images accurately, thus improving calculation speed and providing a method for applications such as augmented reality and image mosaicking.
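The paper's own Sobel descriptor and PROSAC scoring are not public, so the Python/OpenCV sketch below substitutes ORB descriptors computed at Sobel-gradient keypoints and RANSAC-based homography estimation; every call used is standard OpenCV.

    import cv2
    import numpy as np

    def sobel_keypoints(img, n=500):
        """Take the n strongest Sobel gradient responses as feature points."""
        gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
        mag = cv2.magnitude(gx, gy)
        ys, xs = np.unravel_index(np.argsort(mag, axis=None)[-n:], mag.shape)
        return [cv2.KeyPoint(float(x), float(y), 7.0) for x, y in zip(xs, ys)]

    def estimate_transform(img1, img2):
        orb = cv2.ORB_create()                  # descriptor stand-in
        k1, d1 = orb.compute(img1, sobel_keypoints(img1))
        k2, d2 = orb.compute(img2, sobel_keypoints(img2))
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # robust model fit
        return H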
New Wavelet Based Directional Transform
XU Hua-nan,PENG Guo-hua and LIU Zhe
Computer Science. 2014, 41 (3): 282-285. 
Abstract PDF(2195KB) ( 709 )   
References | RelatedCitation | Metrics
The directionlet transform is a multidirectional and anisotropic transform for image processing that extends the directionality of the wavelet transform. However, unlike the wavelet transform, the directionlet transform does not retain resolution scalability, which leads to a weak representation of smooth regions in images. To overcome this drawback, the wavelet and directionlet transforms were combined to construct a multiscale and multidirectional transform called the wavelet-based directionlet transform. Numerical experiments assessing the applicability of the proposed method show that it represents both smooth regions and details in images well, and obtains higher PSNR and SSIM than the compared methods.
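There is no standard Python implementation of directionlets, so the sketch below (PyWavelets; the db2 filter and the depth are arbitrary choices) only illustrates the resolution-scalable multiscale decomposition that the combined transform keeps from the wavelet side; the directional stage applied on top of it is not reproduced.

    import numpy as np
    import pywt

    img = np.random.rand(256, 256)                  # stand-in for a test image
    coeffs = pywt.wavedec2(img, "db2", level=3)     # multiresolution pyramid
    approx = coeffs[0]                              # coarse, resolution-scalable part
    rec = pywt.waverec2(coeffs, "db2")[:256, :256]  # perfect reconstruction
    print(approx.shape, np.allclose(rec, img))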
Screen Content Coding of Combining Full-chroma HEVC and Lossy Matching Dictionary Coder
ZHANG Pei-jun,JIN Xiao-juan,WANG Shu-hui,ZHOU Kai-lun and LIN Tao
Computer Science. 2014, 41 (3): 286-292. 
Abstract PDF(1785KB) ( 533 )   
References | RelatedCitation | Metrics
Since traditional video compression tools are not suitable for screen content, a lossy dictionary coding algorithm with an adaptive matching error limit was proposed. Combined with full-chroma HEVC, it yields a dual-coder scheme named LDSC (Lossy Dual-coder Single Chroma-sampling-rate). According to the coding characteristics of the different types of area in screen content, the better coding result is selected adaptively and written to the bit-stream. To evaluate the performance of lossy string matching, we proposed the LD-Cost model, based on string matching length and distortion, using the Lagrangian multiplier method. We also analysed in detail the error accumulation phenomenon in lossy dictionary coding and pointed out a feasible way to avoid it. Experimental results show that the proposed method clearly improves coding efficiency compared with the lossless-dictionary-coder-based dual-coder scheme and adapts better to discontinuous-tone areas of screen content: a 3%~15% BD-rate gain is achieved against that scheme in the all-intra configuration, and a still larger BD-rate improvement is obtained over the latest HEVC reference software.
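The LD-Cost idea can be illustrated with a generic Lagrangian rate-distortion cost J = D + λR over candidate string matches; the rate proxy and the λ value in this Python sketch are invented stand-ins, not the paper's model.

    def ld_cost(length, distortion, lam=16.0):
        """Lagrangian cost J = D + lambda * R: long matches are cheap in
        rate, matching distortion is penalized."""
        rate = 1.0 / max(length, 1)    # crude stand-in for bits per covered pixel
        return distortion + lam * rate

    def pick_match(candidates, lam=16.0):
        """candidates: (length, distortion) pairs for one coding position."""
        return min(candidates, key=lambda c: ld_cost(c[0], c[1], lam))

    print(pick_match([(8, 4.0), (32, 9.0), (4, 0.5)]))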
Robust Foreground Detection Using Local Intensity Ratio
XIANG Jin-hai,LIAO Hong-hong,FAN Heng,DAI Jiang-hua,SUN Wei-ping and YU Sheng-sheng
Computer Science. 2014, 41 (3): 293-296. 
Abstract PDF(1058KB) ( 453 )   
References | RelatedCitation | Metrics
Real-time segmentation of foreground objects in video sequences is a fundamental step in surveillance. This paper proposed a local intensity ratio (LIR) for shadow removal; an analysis of the illumination change model shows that the LIR is invariant to illumination, and its distribution was discussed. Using the local intensity ratio instead of pixel intensity in the Gaussian mixture model (GMM) yields the foreground without shadows. Experimental results show that the LIR feature performs excellently under various illumination change conditions while operating in real time.
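A hedged OpenCV sketch of the pipeline: the ratio of a pixel's intensity to a local mean is one plausible reading of "local intensity ratio" (the paper's exact definition may differ), and OpenCV's built-in GMM background subtractor stands in for the authors' model.

    import cv2
    import numpy as np

    mog = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

    def lir_foreground(frame_gray, ksize=15):
        """Feed an illumination-normalized ratio image to the GMM instead of
        raw intensities, so shadowed regions look like background."""
        local_mean = cv2.blur(frame_gray.astype(np.float32), (ksize, ksize))
        lir = frame_gray / (local_mean + 1e-6)
        lir8 = cv2.normalize(lir, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        return mog.apply(lir8)    # foreground mask without shadows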
Sliding Window Parameter Optimization Method Based on Bipolar Preferences
QIU Fei-yue,JIN Feng-tao,WANG Li-ping and ZHANG Wei-ze
Computer Science. 2014, 41 (3): 297-301. 
Abstract PDF(1320KB) ( 408 )   
References | RelatedCitation | Metrics
The sliding window is a commonly used detection method in shape matching, able to detect multiple objects of different sizes and positions. Detection performance is generally measured by the detection rate and the false positive rate. The two parameters of the sliding window method, the sliding step and the scale step, are usually selected empirically so as to obtain a high detection rate and a low false positive rate. However, their selection can be formulated as a typical two-objective optimization problem, whereas the empirical choice ignores decision-makers' differing preferences regarding the two rates. Taking a high detection rate with a low false positive rate as the positive preference, and a low detection rate with a high false positive rate as the negative preference, the paper introduced a bipolar control strategy and proposed a new way to optimize the sliding window parameters based on Bipolar Preference Multi-objective Particle Swarm Optimization (BPMOPSO). Applied to detection experiments on the Leeds Cows image dataset, the new method sharply reduces the false positive rate, significantly improves the detection rate, and considerably improves the efficiency of the algorithm.
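A toy Python sketch of the bipolar preference idea only (the full BPMOPSO swarm loop is omitted, and the candidate parameter pairs with their measured rates are invented): candidates are scored by closeness to the positive preference point and distance from the negative one in (detection rate, false positive rate) space.

    import math

    def bipolar_score(obj, g_pos=(1.0, 0.0), g_neg=(0.0, 1.0)):
        """Reward closeness to the positive preference point and distance
        from the negative one; obj = (detection rate, false positive rate)."""
        return math.dist(obj, g_neg) - math.dist(obj, g_pos)

    # invented (sliding step, scale step) -> (detection rate, false positive rate)
    params = {(1, 1.05): (0.92, 0.08), (2, 1.10): (0.88, 0.03), (4, 1.20): (0.75, 0.01)}
    best = max(params, key=lambda p: bipolar_score(params[p]))
    print("chosen (sliding step, scale step):", best)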
Image Retrieval Method Based on Sparse Low-rank Representation
CHEN Gang,YUE Xiao-dong and CHEN Yu-fei
Computer Science. 2014, 41 (3): 302-305. 
Abstract PDF(362KB) ( 409 )   
References | RelatedCitation | Metrics
Content-based image retrieval extracts color, texture and shape features of images, represents them in a feature space, and obtains similarities from distances between feature vectors, so its accuracy depends critically on the feature vectors. However, images of the same class may have different features. This paper presented an image retrieval method based on sparse low-rank representation. After the low-rank components of each image set are recovered, both the global mixture-of-subspaces structure and the locally linear structure of the features are captured. Experimental results show that the method is robust to unstable features and achieves good retrieval performance.
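The low-rank recovery step can be illustrated with singular value thresholding, the proximal operator of the nuclear norm that sparse low-rank representation solvers (typically ADMM-based) iterate; the feature matrix below is a random stand-in.

    import numpy as np

    def svt(M, tau):
        """Singular value thresholding: shrink singular values by tau and
        drop the rest, yielding a low-rank approximation of M."""
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

    feats = np.random.rand(50, 64)      # 50 images of one class, 64-dim features
    low_rank = svt(feats, tau=1.0)
    print(np.linalg.matrix_rank(low_rank))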
Optimized Human Movement Gesture Recognition Algorithm Based on Hu Invariant Moments Features
ZHANG Yong-qiang
Computer Science. 2014, 41 (3): 306-309. 
Abstract PDF(1140KB) ( 509 )   
References | RelatedCitation | Metrics
The process of human body movement is relatively complex, so images contain many similar movements that interfere with traditional recognition features and cause low recognition accuracy. To improve accuracy, this paper put forward a human motion recognition model combining Hu moment invariants with a support vector machine (SVM) optimized by the artificial fish swarm algorithm (Hu-AFSA-SVM). First, the seven Hu moment invariants are extracted from the two-dimensional images of human body movement and input into an SVM for training, with AFSA used to optimize the SVM parameters so as to find an optimal hyperplane that satisfies the classification constraints as far as possible and assigns all the human motion data to their categories, completing recognition. Finally, simulation experiments were carried out. The results show that, compared with other recognition models, Hu-AFSA-SVM improves the accuracy and speed of human motion recognition and is an effective method for human movement gesture recognition.
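A hedged Python sketch of the feature-plus-classifier pipeline: cv2.HuMoments supplies the seven invariants (log-scaled, as is customary), while a grid search stands in for the paper's AFSA optimization of the SVM parameters; the silhouette masks and labels here are synthetic.

    import cv2
    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    def hu_features(mask):
        """Seven Hu invariant moments of a binary silhouette, log-scaled."""
        hu = cv2.HuMoments(cv2.moments(mask)).flatten()
        return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

    rng = np.random.default_rng(0)                 # synthetic stand-in data
    masks = [(rng.random((64, 64)) > 0.5).astype(np.uint8) * 255 for _ in range(40)]
    labels = rng.integers(0, 2, size=40)

    X = np.vstack([hu_features(m) for m in masks])
    clf = GridSearchCV(SVC(kernel="rbf"),          # grid search replaces AFSA
                       {"C": [1, 10, 100], "gamma": [0.01, 0.1, 1]}).fit(X, labels)
    print(clf.best_params_)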
Remote Sensing Image Fusion Method Based on Local Feature of Nonsubsampled Contourlet Coefficients
BAO Cheng-hui,ZHU Kang and HE Xin-guang
Computer Science. 2014, 41 (3): 310-313. 
Abstract PDF(1449KB) ( 317 )   
References | RelatedCitation | Metrics
We proposed a remote sensing image fusion method based on local features of NSCT coefficients, according to the different fusion purposes of the lowpass and highpass sub-bands into which the nonsubsampled contourlet transform (NSCT) decomposes the multispectral and panchromatic images. Firstly, the I component of the multispectral image and the panchromatic image are decomposed into multi-resolution representations by NSCT. Then, in the lowpass sub-band, selective weighted summation is used; in the highpass sub-bands, the coefficient with the larger absolute value is taken at the highest decomposition level, while at the other levels coefficients are selected by comparing the regional variance of sub-band coefficients over a fixed region. Finally, a consistency check is applied to the fused highpass coefficients. Experimental results show that the fused image not only preserves the spectral information of the source image well, but also greatly improves image resolution compared with other fusion methods.
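Since NSCT has no standard Python implementation, the numpy sketch below covers only the three fusion rules, taking the decomposed sub-bands as given arrays; the window size is illustrative.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def fuse_lowpass(a, b, w=0.5):
        """Lowpass sub-band: selective weighted summation."""
        return w * a + (1 - w) * b

    def fuse_highpass_maxabs(a, b):
        """Finest highpass level: keep the larger-magnitude coefficient."""
        return np.where(np.abs(a) >= np.abs(b), a, b)

    def fuse_highpass_variance(a, b, size=5):
        """Other highpass levels: choose by local (regional) variance."""
        va = uniform_filter(a**2, size) - uniform_filter(a, size) ** 2
        vb = uniform_filter(b**2, size) - uniform_filter(b, size) ** 2
        return np.where(va >= vb, a, b)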
Research on Application of Wavelet Tree Structure in Bayesian Compressive Sensing Image Reconstruction
YUAN Qin,WU Xuan-gou and XIONG Yan
Computer Science. 2014, 41 (3): 314-319. 
Abstract PDF(1726KB) ( 440 )   
References | RelatedCitation | Metrics
Combining Bayesian learning and compressive sensing, an image compression and reconstruction method based on the wavelet transform was proposed. By exploiting the specific tree structure and correlation of wavelet coefficients, the method effectively improves the image compression rate and reconstruction accuracy. At the same time, a prediction-based regression model is adopted for coefficient reconstruction, with Gaussian mixture parameters predefining the prior conditional density of the unknown parameters in order to enforce sparsity. The method obtains a group of models with high posterior probability for the coefficients and reconstructs the image in the MMSE sense. Compared with other image compression methods and CS-based image reconstruction methods, the proposed method yields reconstructed images of higher quality at a higher compression rate.
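A hedged 1-D sketch of wavelet-domain compressive sensing (PyWavelets plus scikit-learn): random measurements of a piecewise-smooth signal are inverted by orthogonal matching pursuit in the Haar wavelet basis. OMP stands in for the paper's tree-structured Bayesian MMSE reconstruction, and all sizes are illustrative.

    import numpy as np
    import pywt
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(0)
    n, m = 256, 96                                   # signal length, measurements
    t = np.linspace(0, 1, n)
    x = np.piecewise(t, [t < 0.4, t >= 0.4], [0.0, 1.0])   # piecewise-smooth signal

    # synthesis matrix of the Haar wavelet basis: x = W_inv @ c
    _, slices = pywt.coeffs_to_array(pywt.wavedec(x, "haar"))
    W_inv = np.array([pywt.waverec(pywt.array_to_coeffs(e, slices,
                      output_format="wavedec"), "haar") for e in np.eye(n)]).T

    Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
    y = Phi @ x                                      # compressive measurements

    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=20, fit_intercept=False)
    omp.fit(Phi @ W_inv, y)                          # sparse wavelet coefficients
    x_hat = W_inv @ omp.coef_
    print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))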