Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 43, Issue 10 (2016)
Specific Content Monitoring on Social Networks Based on Social Computing and Deep Learning
CAO Xiao-chun, JING Li-hua, WANG Rui, ZHANG Rui, DONG Zhen-jiang and XIONG Hong-kai
Computer Science. 2016, 43 (10): 1-8.  doi:10.11896/j.issn.1002-137X.2016.10.001
Social networks provide great convenience to people's daily life and information sharing. Unfortunately, these conveniences are accompanied by content security problems, as social networks are frequently employed to disseminate malicious or sensitive information. This paper proposed a content security solution, which builds a system to monitor specific content based on social computing and deep learning. To search for specific people, the system implements a content-sensitive semi-supervised community discovery method with pairwise constraints. By monitoring the discovered people and obtaining their published content, the system performs an automatic detection procedure to identify the content of the published images and videos. In addition, an error correction method was proposed to reduce false positives when processing real network data. Experimental results demonstrate that the proposed system performs well under various circumstances.
Survey on Categorical Data Type in Computer Science
SU Jin-dian
Computer Science. 2016, 43 (10): 9-18.  doi:10.11896/j.issn.1002-137X.2016.10.002
Categorical data type refers to using category theory as the mathematical foundation for research on the description, computation, semantics and application of data types. Earlier work focused on inductive data types and used algebras to describe the construction semantics and recursive properties of finite data types from the inductive perspective. In recent years, coinductive data types, the dual concept of inductive data types, have attracted the attention of many computer scientists; they use coalgebras to describe the behavioral semantics and corecursive properties of infinite data types from the observational perspective. Category theory can offer a unified mathematical foundation for various data types to discuss many important relations and properties, and can integrate important achievements of algebras and coalgebras, i.e. syntactic constructions and dynamic behaviors, recursion and corecursion, congruence and bisimulation, and so on. Currently, categorical data types have been widely applied in many fields, such as programming languages, descriptions of computation, theorem provers and parallel computation. The latest research results on categorical data types, covering their basic concepts, mathematical foundations, logical foundations and applications, are introduced to attract the attention of relevant researchers.
Review of Collaborative Detection of Threat in Big Data
ZHANG Jian-ge, GUO Yuan-bo, MA Jun and CHEN Yue
Computer Science. 2016, 43 (10): 19-26.  doi:10.11896/j.issn.1002-137X.2016.10.003
Malicious and illegal actors use direct or indirect methods to attack persons, organizations and nations, exposing them to varying degrees of threat. The types of information involved are various, the volume of data is large, and it needs to be processed at high speed. Therefore, we first analyzed five typical collaborative detection models: the Esper, Hadoop, Agilis, Storm and Spark models. Moreover, we compared them and described the network environment suited to each model. Then, we analyzed common attack methods in the network, namely DDoS, MITM and APT attacks, and explained detection models for these attacks. Finally, we provided a deployment scheme for the collaborative threat detection architecture model. The scheme includes two components: a sending component and a receiving-and-processing component. We then pointed out that the architectures of different models can be deployed according to practical requirements. In particular, we provided deployment schemes of the architecture model for peer-to-peer networks, ranked security domain networks, and hierarchical structure networks.
Review and Prospect on Development of Decision Support System
LIANG Luo-xi and WU Jiang
Computer Science. 2016, 43 (10): 27-32.  doi:10.11896/j.issn.1002-137X.2016.10.004
An overall survey and analysis of the past development track of the decision support system (DSS) is of great significance for the further development of its new theories, models, technologies and applications. Thus, this paper comprehensively discussed the development track of DSS, especially its structures and supporting technologies. From the analysis, we can see clearly that demands and technologies are the two major driving forces for the development of DSS, especially in the era of big data. Moreover, we illustrated the new demands and problems that DSS researchers face in the era of big data, and discussed how to meet these new demands and solve these problems in the future by integrating big data and cloud computing technologies.
Cloud Computing System Availability Evaluation for IaaS
LI A-ni, ZHANG Xiao, ZHAO Xiao-nan, ZHANG Bo-yang and LIU Chun-yi
Computer Science. 2016, 43 (10): 33-39.  doi:10.11896/j.issn.1002-137X.2016.10.005
With the expansion of cloud computing applications, people are increasingly worried about the security, reliability and availability of cloud computing. A cloud computing system promises to provide services that meet service level agreements, which commonly cover performance and availability. However, because of the inherent complexity of availability, its evaluation lacks a quantitative calculation method, especially for cloud computing system users. To solve these issues, we proposed an end-user oriented and a cloud service provider oriented availability evaluation method for cloud computing systems. Both methods adopt the definition of availability to build the model and take the boot time of the virtual machine as the mean time to repair (MTTR). The mean time to failure (MTTF) in the first method adopts the parameter provided by the cloud service provider, while the virtual machine's MTTF in the second method is obtained statistically, and the cluster availability of the cloud computing system is also counted. Because it is difficult to carry out experimental verification for the latter, we only verified the first method. We measured the boot time of virtual machines in public and private clouds, combined it with the MTTF provided by the cloud service provider, and calculated the availability of two different kinds of cloud computing services. This method allows users to calculate availability quickly and provides recommendations for choosing among cloud services. In addition, users can judge whether the cloud provider's promise reaches the agreed standard and whether important industry application services can be migrated to the cloud platform.
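As background for the abstract above, the steady-state availability definition it relies on (MTTF/(MTTF+MTTR), with VM boot time taken as MTTR) can be sketched as follows; the function name and example figures are illustrative, not values from the paper.

```python
# Minimal sketch of the availability calculation described above:
# availability = MTTF / (MTTF + MTTR), with VM boot time used as MTTR.
def availability(mttf_hours: float, vm_boot_time_seconds: float) -> float:
    """Steady-state availability with VM boot time taken as MTTR."""
    mttr_hours = vm_boot_time_seconds / 3600.0
    return mttf_hours / (mttf_hours + mttr_hours)

# Example: a provider-reported MTTF of 8760 h (one year) and a 90 s VM boot time.
print(f"{availability(8760.0, 90.0):.6f}")  # ~0.999997
```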
Second Order Linear Active Disturbance Rejection Control for a Class of Typical High Order System Based on Time Scale
GUO Rui, HU Peng-cheng and FAN Ya-min
Computer Science. 2016, 43 (10): 40-42.  doi:10.11896/j.issn.1002-137X.2016.10.006
A high-order system is difficult to control due to its own nature. In order to control a class of typical high-order plants that can be described by transfer functions, a second-order linear active disturbance rejection control (LADRC) controller was used to tune the parameters of the high-order system. The tuned high-order system then acts as the benchmark system. The parameters of the benchmark system are converted into parameters of a new high-order system by using the concept of time scale, so that the new system has the response characteristics of the benchmark system. This method can calculate the parameters of the new system conveniently and quickly. The simulation results prove that it is feasible and offers broad reference value.
Design of Multi-hop Routing Protocol in WSN Based on Sun SPOT
LI Yang, ZHAO Yun-long, SONG Hong-tao and YAO Nian-min
Computer Science. 2016, 43 (10): 43-46.  doi:10.11896/j.issn.1002-137X.2016.10.007
The multi-hop routing protocol is one of the key technologies in wireless sensor networks. Conventional multi-hop transmission protocols cause some problems in actual wireless sensor network applications, such as an unduly complex deployment process. To solve these problems, we proposed a flexible and practical multi-hop routing protocol called SCMP (Sink Controlling Multi-hop Protocol), which is based on sink-node control. Sink nodes control sensor nodes by sending command packets across the whole topology, collect the routing information of each node to form the global route, and then control the data transmission of each node on the basis of the global information. We evaluated SCMP by deploying it on the Sun SPOT platform. The results indicate that the new scheme is more flexible and practical, and has better system performance.
Predicting Resource Consumption in Web Server Using Hybrid Model
YAN Yong-quan and GUO Ping
Computer Science. 2016, 43 (10): 47-52.  doi:10.11896/j.issn.1002-137X.2016.10.008
Software aging is a phenomenon in which the state of software degrades, leading to performance degradation, hang/crash failures, or both in a long-running software application. Software rejuvenation, which involves occasionally stopping the software application, removing the accumulated error conditions and then restarting the application in a clean environment, is used to counteract software aging. For software aging and rejuvenation, a key problem is how to accurately forecast the resource consumption of an aging system and find a proper time to execute rejuvenation. In this paper, a hybrid-model methodology was proposed for resource consumption forecasting, and a time-slot rejuvenation algorithm with multiple threshold values was given. The experimental results show that the hybrid model is superior to other models in forecasting the resource consumption of an IIS Web server, and that the proposed rejuvenation algorithm is better than the single-threshold algorithm.
Performance Reliability Analysis of AFDX Network Based on Dynamic Fault Tree
SUN Li-na, HUANG Ning and ZHANG Shuo
Computer Science. 2016, 43 (10): 53-56.  doi:10.11896/j.issn.1002-137X.2016.10.009
The AFDX network is the foundation of modern aircraft integration, and its performance reliability ensures highly reliable aircraft operation. Current studies analyze reliability issues from the perspective of performance evaluation and prediction, but fail to delve into features of network faults such as interaction, communication and dependency. This paper proposed a dynamic fault tree modeling method based on practical application. It analyzes and models the causes of failure and the failure modes of AFDX network data transmission, and a quantitative calculation method is given. The analysis and method in this paper are of great significance and provide a reference for AFDX network design, dynamic fault tree modeling, reliability analysis and evaluation.
Testing and Invalid Testing Case Localization Model Based on Metamorphic Relation
HUI Zhan-wei, HUANG Song, ZHANG Ting-ting and LIU Jian-hao
Computer Science. 2016, 43 (10): 57-62.  doi:10.11896/j.issn.1002-137X.2016.10.010
Aiming at the limitations of the traditional metamorphic testing model MTM, this paper proposed a testing model MRTM based on metamorphic relations. Through comparative analysis, its characteristics, such as its scope of application, are first pointed out. Secondly, in view of the problem, faced by both MTM and MRTM, that invalid test cases are hard to determine, a failed test case localization method for metamorphic testing (FTCL-MT) based on dubiety computation was proposed. As a supplement to the existing testing models, FTCL-MT helps achieve precise localization of invalid test cases when CMR is not satisfied, so that it can provide support for existing fault localization technologies. Finally, experiments show the effectiveness of FTCL-MT.
Novel Anomaly Detection Method of Online Streaming Data
DING Zhi-guo, MO Yu-chang and YANG Fan
Computer Science. 2016, 43 (10): 63-65.  doi:10.11896/j.issn.1002-137X.2016.10.011
Streaming data has some unique characteristics, such as being massive, generated without bound, dynamically varying in distribution and unbalanced in data distribution, which make anomaly detection for streaming data a research hot spot. An obvious characteristic of anomalous samples is that they are "few and different" compared with normal data, which makes them easier to isolate than normal data through random partitioning of the space. In this paper, a novel online anomaly detection method for streaming data was proposed based on the isolation principle and online ensemble learning theory. Experiments conducted on four UCI datasets demonstrate the effectiveness of the proposed method.
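The following sketch only illustrates the isolation principle mentioned above applied to a stream, using scikit-learn's IsolationForest with a simple sliding-window retrain; it is not the paper's online ensemble algorithm, and the window sizes and toy data are our own assumptions.

```python
# Illustrative only: isolation-based detection over a stream via sliding-window retraining.
import numpy as np
from sklearn.ensemble import IsolationForest

def stream_anomaly_flags(stream, window=500, retrain_every=100):
    """Yield (index, is_anomaly) for each point once the window is warm."""
    window_buf, model = [], None
    for i, x in enumerate(stream):
        window_buf.append(x)
        if len(window_buf) > window:
            window_buf.pop(0)
        if len(window_buf) == window and i % retrain_every == 0:
            model = IsolationForest(n_estimators=50, random_state=0)
            model.fit(np.asarray(window_buf))          # refit on the recent window
        if model is not None:
            yield i, model.predict(np.asarray(x).reshape(1, -1))[0] == -1

# Example usage on a synthetic 1-D stream with one injected outlier.
rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=(2000, 1))
data[1500] = [8.0]
flagged = [i for i, is_anomaly in stream_anomaly_flags(data) if is_anomaly]
```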
Trust Model Based on Trusted Computing for Distributed Heterogeneous Networks
PENG Hao, ZHAO Dan-dan, YU Yun-jie, WU Zhen-dong and WU Song-yang
Computer Science. 2016, 43 (10): 66-69.  doi:10.11896/j.issn.1002-137X.2016.10.012
In this paper, a trust model based on trusted computing technology was proposed for distributed heterogeneous networks composed of trusted computing platforms and non-trusted computing platforms. The theoretical framework and implementation process of the model were analyzed and studied in detail. The simulation results show that, under the model, the nodes in the distributed heterogeneous network have better anonymity with no obvious effect on the response time of the network. In this way, the proposed trust model has a certain ability to counter malicious nodes.
Dynamic Fault Tree Analysis Based on Multiple-valued Decision Diagrams
WANG Bin, WU Dan-dan, MO Yu-chang and CHEN Zhong-yu
Computer Science. 2016, 43 (10): 70-73.  doi:10.11896/j.issn.1002-137X.2016.10.013
In system reliability analysis, dynamic fault tree analysis has been a very important technique for many years. However, when large dynamic subtrees appear, which abound in models of real-world dynamic software and embedded computing systems, the state explosion problem becomes too serious to handle. In order to improve computing efficiency, this paper introduced an efficient multiple-valued decision diagram (MDD) based DFT analysis approach. This approach restricts state-space methods to only the subtree components associated with dynamic failure behaviors. By using multiple-valued variables to encode the dynamic gates, a single compact MDD is then generated. Finally, the failure probability is calculated to describe the reliability of the system. Applications and advantages of the proposed approach are illustrated through a detailed analysis of a practical case study.
Analysis and Verification for OpenFlow Multi-switch Protocol Based on Model Checking
ZHU Ge, ZENG Guo-sun, DING Chun-ling and WANG Wei
Computer Science. 2016, 43 (10): 74-80.  doi:10.11896/j.issn.1002-137X.2016.10.014
In software defined networking (SDN), the OpenFlow protocol is the communication standard between the control plane and the forwarding plane. Its validity and rationality influence the performance of the whole network. This paper focused on a formal method to verify the correctness of the OpenFlow protocol by model checking. Firstly, a key protocol, the OpenFlow multi-switch protocol, is selected as an important example. Then, a basic model for this protocol is built with protocol action automata, and temporal logic is used to describe some major properties that this protocol should have. A model checking algorithm is given to verify these properties. Finally, dedicated analysis and verification of the OpenFlow multi-switch protocol are conducted so that existing faults hidden in this protocol can be found and corrected.
RFID Indoor Symbolic Localization Algorithm Based on Strategy of Perception Ruleset in Constrained Space
SHI Jun-yan, QIN Xiao-lin and WANG Ning
Computer Science. 2016, 43 (10): 81-86.  doi:10.11896/j.issn.1002-137X.2016.10.015
With the continuous development of ubiquitous computing research, indoor localization technology has become a hot topic. Advances in RFID indoor positioning technology have allowed RFID to be deployed in a wide variety of indoor scenes. In order to improve the localization accuracy of indoor space, we proposed an RFID indoor symbolic localization method based on a perception-ruleset strategy in constrained space. The algorithm uses the symbolic structure of the indoor space and defines perception situations to establish location rules, so that it adapts better to indoor spaces and can achieve higher localization accuracy with a smaller number of RFID readers. To further improve accuracy, we proposed the concept of the perception ruleset and abstracted the scene for the algorithm. We finally performed experiments in constrained indoor corridors, and the results show that the proposed algorithm outperforms existing algorithms not only in localization accuracy but also in anti-interference ability.
Research on Channel Estimation of Scattering Channel Models in Mobile Communication Macrocell Environments
ZHU Wei-na, ZHOU Jie, CAI Shi-qing and SHEN Xiao-yan
Computer Science. 2016, 43 (10): 87-92.  doi:10.11896/j.issn.1002-137X.2016.10.016
In order to effectively weaken the multipath effect and improve the accuracy of channel parameter estimation in wireless communication, a new channel model that adapts better to different cell types was presented, and a new concept, range of arrival, was introduced under the condition of a non-uniform distribution of scatterers. This model can precisely describe the important parameters in macrocell environments, such as the angle of arrival and range of arrival at the base station. In addition, the received signals at the base station exhibit a Doppler shift due to the mobility of the mobile station, and the probability distribution of the Doppler shift is derived. Numerical simulation results of the model, compared with multipath fading channel models in other literature, show that the channel parameter estimation results of the model accord with theory and experience, which expands the research and application of statistical channel models and provides powerful tools for the simulation of wireless communication systems.
Fast Handover Strategy Research Based on Mobile IPv6 in VANET
ZHANG Jian-ming, ZHAO Li-jie and FENG Xia
Computer Science. 2016, 43 (10): 93-97.  doi:10.11896/j.issn.1002-137X.2016.10.017
In vehicular ad hoc networks (VANET), vehicles need to switch from one road side unit (RSU) to another frequently in urban traffic congestion, resulting in a long handoff delay for IPv6 mobility support protocols. A fast handover scheme based on a proxy mobile router was proposed as a solution to this problem. In this scheme, the proxy mobile router's handover in the 3G/WiMAX communication domain triggers the handover of other vehicles in the 802.11p communication domain, combining macro handover with micro handover and thereby achieving fast switching for a large number of vehicles. The analysis shows that this scheme is superior to other existing schemes in switching efficiency and handoff delay.
Partition Based Data Collection Strategy via Mobile Relays
DU Guo-jie and NIU Yu-gang
Computer Science. 2016, 43 (10): 98-102.  doi:10.11896/j.issn.1002-137X.2016.10.018
This paper proposed a partition-based data collection strategy to deal with the problem of data collection via mobile relays. Firstly, the CPSA algorithm is used to select center points so that the number of stop points can be reduced. Then, the CPPA algorithm is applied to partition the monitoring area. By introducing a cost function, CPPA iteratively calculates and obtains the optimal result, which minimizes the moving distance and balances the load of the mobile relays. Experimental results confirm that the proposed strategy can minimize the moving distance and balance the load of mobile relays.
Estimation Method of Number of Candidate Nodes in Opportunistic Routing
YAO Jin-bao, ZHANG Xin-you and XING Huan-lai
Computer Science. 2016, 43 (10): 103-106.  doi:10.11896/j.issn.1002-137X.2016.10.019
Because the number of candidate nodes in opportunistic routing is often too large, this paper proposed a distance-based estimation method for the number of candidate nodes (DBNCE). The method sets the number of candidate nodes that participate in forwarding data packets for each node according to the distance between the current node and the destination node, combined with the network density and the number of neighbor nodes of the current node. Simulation results show that using DBNCE in opportunistic routing reduces the number of candidate nodes effectively while guaranteeing the success rate of data transmission, and improves the performance of the network.
Mobile Energy Replenishment and Data Collection Strategies in Rechargeable Sensor Networks
LIU Jun-chen, LIANG Jun-bin, WANG Tian, JIANG Chan and LI Tao-shen
Computer Science. 2016, 43 (10): 107-113.  doi:10.11896/j.issn.1002-137X.2016.10.020
A rechargeable wireless sensor network is a new type of wireless sensor network which uses a mobile wireless charging vehicle (MWCV) to charge the nodes with low energy in the network, as well as to collect data from the network. Such a network can be used for applications that need to perform long-term monitoring. However, it is a challenge to control the MWCV to finish data collection within a given deadline in an energy-efficient fashion, while charging as many low-energy nodes as possible. In this paper, a novel algorithm named RSEP (Root Selection with Energy Prediction) was proposed. Firstly, the length of the MWCV's walking path is constrained to guarantee the latency. Then, all the low-energy nodes on the walking path are selected as roots, from which multiple data collection trees are constructed. If a root's energy is enough to last for more than one round, a path equal to the diameter of the tree is found, and in this path a node with the largest number of neighbors in the network is selected as a new root. A new tree is formed by adjusting the old tree's structure around the new root. All the nodes in a tree transmit their data and energy information to the root. Finally, when the MWCV walks along its path to charge the low-energy nodes, it can collect all data and energy information from the trees rooted at these nodes. On the other hand, energy information becomes out of date as time elapses, so the MWCV uses a Markov model to predict the nodes' energy levels at the time when the next round of data collection starts, in order to optimize root selection. Theoretical analysis and simulation results show that, compared with existing works, the RSEP algorithm can accomplish a round of charging with lower total energy consumption and shorter time duration.
Double-layer Satellite Communication System Based on Complex Field Network Coding
XIA Gui-yang, LIU Yan-tao, XU Jing and Yasser Morgan
Computer Science. 2016, 43 (10): 114-119.  doi:10.11896/j.issn.1002-137X.2016.10.021
In order to improve the throughput and reliability of satellite networks, we proposed a satellite communication scheme based on complex field network coding (CFNC). In the scheme, the original message is pre-encoded before transmission; specifically, it is multiplied by a coding matrix of parameterized space-time codes over the complex field. There exists a bijective mapping between the encoded signals and the original ones. A detailed analysis of the throughput and pairwise error probability (PEP) performance is carried out. The experimental results show that, when the transmission power is kept constant, the throughput of our scheme is 100% higher than routing and 75% higher than a previous complex field network coding scheme. Furthermore, the scheme can also be extended to multi-user network communications with more ground stations. Simulation experiments show that the PEP approaches the asymptotic curves at high SNR, which supports the theoretical analysis of PEP.
New Approach for Narrow Band Interference Detection in Satellite Communication Using Morphological Filter
HU Jing, BIAN Dong-ming, XIE Zhi-dong and LI Yong-qiang
Computer Science. 2016, 43 (10): 120-124.  doi:10.11896/j.issn.1002-137X.2016.10.022
Precisely detecting co-channel narrow band interference using a single threshold is a challenge in frequency division multiple access (FDMA) satellite communication systems, especially in the presence of many signals whose parameters differ. A new method for narrow band interference detection in satellite communication was proposed based on mathematical morphology. The method uses an improved morphological gradient filter, originally from the image and graphics field, to process the spectrum data, which is viewed as a one-dimensional gray-scale image, and then searches the gradient of interference edges to detect their positions. Simulation results show that the proposed method can detect narrow band interference in one pass when different signals coexist with the interference. It is not affected by a non-flat noise floor and has low computational complexity, making it suitable for real spectrum monitoring in satellite communication systems.
Distributed Voronoi Control Strategy Based on Virtual Force
HUANG Sheng, LIU Guang-zhong and XU Ming
Computer Science. 2016, 43 (10): 125-129.  doi:10.11896/j.issn.1002-137X.2016.10.023
To address the coverage problem of wireless mobile sensor networks in a target area, a local distributed algorithm based on movable distance was proposed. Taking advantage of the characteristics of Voronoi polygons, we divided the target region. Using the vector concept from mechanics, and based on the edges and vertices of the Voronoi diagram, we determined the direction and magnitude of the virtual force on each node, which represent the movable direction and distance of the node. A distributed Voronoi control algorithm was then proposed based on the moving distance to determine the mobility state of each node. Simulation results show that the strategy can achieve high coverage in the target area and can reach optimized coverage control of the network earlier.
Improved Network Security Defense Strategy Generation Method Based on Attack-Defense Graph
QI Yong, MO Xuan and LI Qian-mu
Computer Science. 2016, 43 (10): 130-134.  doi:10.11896/j.issn.1002-137X.2016.10.024
A complex multi-step cyber-attack is a typical network attack method with a strong purpose, and the state attack-defense graph is an effective method for modeling and analyzing this problem. But it still has some limitations in practice; for example, the computation of the success probability of an atomic attack and the definition of the attack severity index are not very reasonable. When the operator is not experienced enough, it is very likely that the result can hardly reflect the real security situation of the network. By analyzing the shortcomings of existing security defense strategy generation methods, the attack severity indexes of atomic attacks and attack paths were redefined by improving the vulnerability scoring standard and introducing concepts such as accumulated attack success probability and the value of information assets. In this way, the considerations for security defense strategy generation are enlarged and the generation method is optimized, so as to realize attack scene modeling and attack intention mining. Finally, a case study is presented to prove the feasibility and objectivity of the improved method, which can provide network managers with effective assistance.
Assessment of User Influence in Social Networks Based on Multi-label Propagation
XU Wei, LIN Bo-gang, LIN Si-juan and YANG Yang
Computer Science. 2016, 43 (10): 135-140.  doi:10.11896/j.issn.1002-137X.2016.10.025
In order to identify the key figures in social networks, we need to assess user influence. Influence is gradually formed on the basis of information propagation. This paper built a model to describe the process of influence propagation. Then, based on the model, we used labels to indicate the original owner of influence and degrees of membership to indicate the degree to which users are affected. We utilized multi-label propagation to simulate the process of users' influence propagation, resulting in a new user influence assessment algorithm, MLPIA (Multi-label Propagation User Influence Assessment Algorithm). Lastly, the coverage area and closeness centrality of the top users' influence were tested on a real data set. The results demonstrate the rationality and validity of the proposed algorithm.
Trust Chain Transfer Model Based on Non-interference Theory
CHEN Liang, ZENG Rong-ren, LI Feng and YANG Wei-ming
Computer Science. 2016, 43 (10): 141-144.  doi:10.11896/j.issn.1002-137X.2016.10.026
As the existing trust chain transfer model lacks availability and has shortcomings in extending the credibility of the trust chain to the network system, a new trust chain transfer model based on non-interference theory was proposed. The model abstracts the system into processes, actions and executions. It measures the integrity of static processes and dynamic libraries to ensure that static processes are credible, uses non-interference theory to analyze the relationships between interacting processes to determine their legitimacy, and extends the chain of trust to the whole network system by measuring the credibility of the access terminal. Finally, the corresponding formal definitions and security proofs were given.
Click Fraud Detection Method Based on User Behavior Feature Selection
DONG Ya-nan, LIU Xue-jun and LI Bin
Computer Science. 2016, 43 (10): 145-149.  doi:10.11896/j.issn.1002-137X.2016.10.027
Online advertisement is not only a main source of income for internet giants, but also provides powerful economic support for internet development. Commonly used click fraud detection methods, which are based on features of client behavior, may be inefficient due to redundant features. To solve this problem, a fraud detection method that combines feature selection with classification was proposed. From the feature attribute set of fraudulent advertisements found through the training set, attribute significance is ranked by the Fisher score method, the important attributes are selected, and an SVM classifier is then built on these important attributes. Experiments on a real data set demonstrate that the proposed detection method is feasible and effective.
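A minimal sketch of the Fisher-score-then-SVM pipeline described above follows; the toy behavioral features, the choice of top_k, and the kernel are our own assumptions, not the paper's settings.

```python
# Fisher-score feature ranking followed by an SVM on the selected features.
import numpy as np
from sklearn.svm import SVC

def fisher_scores(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Fisher score per feature: between-class variance / within-class variance."""
    overall_mean = X.mean(axis=0)
    num, den = np.zeros(X.shape[1]), np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / np.maximum(den, 1e-12)

# Toy stand-in for behavioral features (click interval, dwell time, ...).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)

top_k = 4
selected = np.argsort(fisher_scores(X, y))[::-1][:top_k]   # keep most significant attributes
clf = SVC(kernel="rbf").fit(X[:, selected], y)             # classify on selected features
print(selected, clf.score(X[:, selected], y))
```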
Administrative Model for UCON Based on RBAC
LIU Zhi-feng and MAO Zhu-lin
Computer Science. 2016, 43 (10): 150-153.  doi:10.11896/j.issn.1002-137X.2016.10.028
UCON is a new generation access control model. It can control usage continually by means of mutable attributes to meet the demands of current open networks. But there are still some drawbacks in the UCON model, namely that authority management, authority delegation and attribute source management cannot be achieved. Therefore, role elements are introduced and divided into provider roles and consumer roles on the basis of the UCON model. The authority is then divided into direct usage authority and authority to be delegated, in order to achieve the management and delegation of authority in the UCON model. Through the provider role, the management of the sources of mutable attributes can also be achieved, making UCON more flexible in authority management and its attribute sources more reliable, so the application scope of UCON becomes more extensive.
New Design of Rerouting-based Anonymous Communication System
WANG Shao-hui, JIANG Ji-hong and XIAO Fu
Computer Science. 2016, 43 (10): 154-159.  doi:10.11896/j.issn.1002-137X.2016.10.029
Based on an analysis of current anonymous communication systems, a new rerouting-based anonymous communication scheme was proposed in this paper. To realize anonymous communication among different users, a new method is presented that combines a variable-length strategy with a next-hop routing selection strategy in the rerouting mechanism to establish the anonymous communication path; it also introduces a probabilistic forwarding mechanism and an encryption mechanism. Moreover, we applied a P2P working mode based on multi-server coordination technology to enhance the stability and resistance of the anonymous communication system. The new scheme also introduces a fragmentation-redundancy mechanism to protect the communication messages while maintaining the anonymous forwarding path. Theoretical analysis and experimental results show that our scheme has good stability and anonymity.
Automated Trust Negotiation Protocol Based on Transactional Receipt in Mobile Commerce Environment
LIU Bai-ling, LEI Chao and LI Yan-hui
Computer Science. 2016, 43 (10): 160-165.  doi:10.11896/j.issn.1002-137X.2016.10.030
It is difficult to use trust negotiation in mobile environments since existing trust negotiation mechanisms have large computational overhead. To address this problem, a transaction receipt-based mobile trust negotiation protocol was proposed. Firstly, both sides of the negotiation evaluate each other's trust degree based on transaction receipts. Secondly, they dynamically adjust the access control policy on the basis of the evaluated trust degree. Finally, they enter the digital certificate exchange process, in which the number of certificate authentications is reduced by taking advantage of transaction receipts and authentication tickets. The experiments show that this protocol can effectively reduce computational overhead.
Security Authentication Protocol Based on Cluster for Underwater Acoustic Sensor Networks
REN Chao-qun and XU Ming
Computer Science. 2016, 43 (10): 166-171.  doi:10.11896/j.issn.1002-137X.2016.10.031
In recent years, cluster-based underwater acoustic sensor networks have been widely applied. Underwater acoustic sensors are mainly used in the military industry, which emphasizes the importance of security. In order to improve the security of underwater acoustic sensor networks, this paper presented a secure authentication protocol which combines ECC Diffie-Hellman key exchange, single-variable higher-order polynomial bilinear mapping and hash functions. Analysis shows that this protocol satisfies the security requirements of authentication, access control, data confidentiality, data integrity and non-repudiation, and it is proved to be secure with BAN logic formal analysis. In addition, the two stages of this protocol are compared separately with authentication protocols of the same type; the comparison results show that the performance of this protocol is improved in both time and space.
Big Data Storage Security Scheme Based on Algebraic Signature Possession Audit in Cloud Environment
XU Yang, ZHU Dan, ZHANG Huan-guo and XIE Xiao-yao
Computer Science. 2016, 43 (10): 172-176.  doi:10.11896/j.issn.1002-137X.2016.10.032
To address the security and dynamic updating of big data stored in the cloud, a big data storage security scheme based on algebraic signature possession audit was proposed. It relies on a trusted third-party auditor to perform data possession audits (DPA) on the big data by using algebraic signature (AS) technology, ensuring the integrity of the data. In addition, a new data structure is built based on the idea of divide and conquer (DC), allowing the data owner to dynamically modify, insert and delete data. At the same time, the computational complexity is reduced by reducing the number of transmitted data blocks. Experimental results show that the proposed scheme can detect malicious operations effectively, provide higher data security, and greatly reduce the computation on the server and the auditor side.
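The audit idea above depends on the linearity of algebraic signatures. The toy sketch below (not the paper's construction; the prime, generator and sample blocks are arbitrary) shows the key homomorphic property: the signature of a combination of blocks equals the combination of their signatures, which lets an auditor verify aggregated responses.

```python
# Toy algebraic-signature-style check over the integers mod a prime.
P, G = 2_147_483_647, 7  # arbitrary modulus and generator for the example

def alg_signature(block, g=G, p=P):
    """sig(b) = sum_i b[i] * g**i (mod p) -- linear in the block contents."""
    s, power = 0, 1
    for v in block:
        s = (s + v * power) % p
        power = (power * g) % p
    return s

b1 = [ord(c) for c in "cloud block one."]
b2 = [ord(c) for c in "cloud block two!"]
combined = [(x + y) % P for x, y in zip(b1, b2)]

# The auditor can check a combined response against combined signatures.
assert alg_signature(combined) == (alg_signature(b1) + alg_signature(b2)) % P
```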
Optional Transaction Logic with Weighted Predicates and its Application in Access Control
MA Li, HUO Yin-yu, ZHONG Yong and QIN Xiao-lin
Computer Science. 2016, 43 (10): 177-181.  doi:10.11896/j.issn.1002-137X.2016.10.033
Owing to their expressiveness and flexibility, logic languages have become one of the bases of authorization languages in access control. Focusing on the ability to express transactions and multi-party decision models, this paper extended the Datalog language to WT-Logic, a kind of optional transaction logic with weighted predicates. Firstly, the syntax and semantics of the logic are discussed. Secondly, the evaluation method of the logic is explained. Lastly, the application of WT-Logic to workflow authorization and multi-party voting mechanisms is described and illustrated with examples, which shows the expressiveness and applicability of the logic.
Regression Testing Prioritization Fault Localization Method Based on Influence Analysis
ZHANG Hui
Computer Science. 2016, 43 (10): 182-189.  doi:10.11896/j.issn.1002-137X.2016.10.034
Fault localization methods based on programs' behavior characteristics treat every program entity as isolated, which limits the efficiency of fault localization, and regression-test fault localization needs to execute all test cases, which greatly increases development and testing costs. In view of these problems, this paper proposed a regression testing prioritization fault localization method based on influence analysis, which organically integrates the joint dependency graph, fault localization based on programs' behavior characteristics, and regression testing prioritization. The experimental results show that, compared with classical methods such as Ochiai, Tarantula, PPDG, CP and Naish, this method can locate software faults more efficiently.
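For context, the baselines named above (Ochiai, Tarantula) are standard spectrum-based suspiciousness formulas; the sketch below shows only these baselines, not the paper's influence-analysis method, and the example counts are made up.

```python
# Spectrum-based suspiciousness baselines: ef/ep = failing/passing tests that
# cover the entity, nf/np_ = failing/passing tests that do not cover it.
import math

def ochiai(ef, ep, nf, np_):
    denom = math.sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

def tarantula(ef, ep, nf, np_):
    fail_rate = ef / (ef + nf) if (ef + nf) else 0.0
    pass_rate = ep / (ep + np_) if (ep + np_) else 0.0
    return fail_rate / (fail_rate + pass_rate) if (fail_rate + pass_rate) else 0.0

# Example: a statement covered by 4 of 5 failing tests and 2 of 20 passing tests.
print(ochiai(4, 2, 1, 18), tarantula(4, 2, 1, 18))
```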
Implementation of Parallel K-Nearest Neighbor Join Algorithm Based on CUDA
PAN Qian, ZHANG Yu-ping and CHEN Hai-yan
Computer Science. 2016, 43 (10): 190-192.  doi:10.11896/j.issn.1002-137X.2016.10.035
In order to solve the problem of K-nearest neighbor join queries over large-scale spatial data, a parallel optimization of the K-nearest neighbor join algorithm based on the CUDA programming model was designed. The parallel K-nearest neighbor join algorithm is divided into two stages: establishing an R-Tree index for the data sets Q and P participating in the query, and carrying out the KNNJ query based on the R-Tree index. Firstly, MBRs are created according to the locations of nodes, and the R-Tree index is created based on SRT with CUDA. Then, the KNNJ query is performed on the R-Tree index, including parallel computing and parallel sorting: the distance between two points is calculated by each thread in parallel, and quicksort is executed in parallel on the CUDA device. Experimental results show that with the increase of sample size, the advantages of the parallel K-nearest neighbor algorithm become more obvious, exhibiting high efficiency and scalability.
Self-adaptive Management Strategy for Bad Blocks Based on Long Lifetime On-board NAND Flash
WANG Wen-si and LIN Bao-jun
Computer Science. 2016, 43 (10): 193-195.  doi:10.11896/j.issn.1002-137X.2016.10.036
Aiming at the storage reliability problem of equipment operating long-term on orbit, a self-adaptive bad block management strategy was proposed. Firstly, a Markov reliability model is built for the on-board NAND Flash storage system. Secondly, according to the wear condition, the number of bad blocks in the device is estimated and the size of the data storage space is set. Then, based on the actual number of bad blocks on orbit, the data storage space is adjusted dynamically, ensuring that a stable storage space is maintained and a higher utilization is achieved within a certain period of time. Finally, a simulation analysis of the self-adaptive management strategy is carried out. The results show that, using the strategy at a certain write speed, the space utilization rate of the device is not less than 85%.
Double Auction Strategy with Invisible Negotiation Space on Web Service
XU Jun, LU Jia-wei, WU Fei-fei, FANG Zhao-ling and XIAO Gang
Computer Science. 2016, 43 (10): 196-199.  doi:10.11896/j.issn.1002-137X.2016.10.037
In the Web service trading market, the success rate of a single bidding transaction between buyer and seller is commonly low. This paper proposed a double auction strategy with an invisible negotiation space for Web services. The proposed strategy defines a bidding negotiation space containing the cost and quoted price of both parties. According to different service supply-and-demand environments, the strategy adopts different fast negotiation models to bargain. Extensive simulation results show that the strategy improves the success rate of Web service transactions. Furthermore, it allows the two parties of a service transaction to transform their traditional single fixed incomes into variable interval incomes according to the market environment. Meanwhile, the results illustrate that the proposed strategy helps guide buyers and sellers to adjust their quoted prices objectively as the market environment changes.
Study of Automatic Proofreading Method for Non-multi-character Word Error in Chinese Text
LIU Liang-liang and CAO Cun-gen
Computer Science. 2016, 43 (10): 200-205.  doi:10.11896/j.issn.1002-137X.2016.10.038
Aiming at the insufficiency of current automatic proofreading methods for Chinese non-multi-character word errors, a fuzzy-segmentation-based method for automatic error detection and correction of 'non-multi-character word errors' in Chinese texts was proposed. Firstly, we use an exact matching algorithm for exact word segmentation and a fuzzy matching algorithm over Chinese strings for full fuzzy segmentation, and build the word graph. To obtain the best segmentation result, a method based on an improved language model is used to solve the shortest path, after which the 'non-multi-character word errors' in Chinese texts can be automatically detected and corrected. The test set consists of 20000 sentences of domain question-answering logs containing 664 non-multi-character word errors. Experiments show that the proposed method can effectively detect 'non-multi-character word errors', including character substitution errors, deletion errors and insertion errors. The error-detection recall rate is 75.9% and the error-detection accuracy rate is 85%. The proposed method combines automatic error detection and automatic error correction.
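To make the "shortest path over a word graph" step above concrete, here is a minimal dynamic-programming sketch with a toy unigram lexicon; the paper's improved language model and fuzzy matching are not reproduced, and the lexicon, costs and sentence are illustrative only.

```python
# Shortest-path decoding over a segmentation word graph (toy unigram costs).
import math

lexicon = {"研究": 0.02, "研究生": 0.01, "生命": 0.015, "命": 0.001, "起源": 0.02, "生": 0.002}

def best_segmentation(sentence: str):
    n = len(sentence)
    cost = [math.inf] * (n + 1); cost[0] = 0.0
    back = [None] * (n + 1)
    for i in range(n):
        if cost[i] == math.inf:
            continue
        for j in range(i + 1, n + 1):
            word = sentence[i:j]
            if word in lexicon:
                c = cost[i] - math.log(lexicon[word])   # negative log-probability cost
                if c < cost[j]:
                    cost[j], back[j] = c, i
    words, j = [], n                                     # recover the best path
    while j and back[j] is not None:
        words.append(sentence[back[j]:j]); j = back[j]
    return list(reversed(words))

print(best_segmentation("研究生命起源"))  # ['研究', '生命', '起源']
```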
TSF Feature Selection Method for Imbalanced Text Sentiment Classification
WANG Jie, LI De-yu and WANG Su-ge
Computer Science. 2016, 43 (10): 206-210.  doi:10.11896/j.issn.1002-137X.2016.10.039
In imbalanced datasets, the imbalanced distribution of samples is often accompanied by an imbalanced distribution of features: features that frequently appear in the majority class rarely appear in the minority class. According to this characteristic of imbalanced feature distribution, we proposed a new two-sided Fisher (TSF) feature selection method. TSF can explicitly control the combination of positive and negative features and tackle the imbalance problem at the feature level. Experiments were conducted on book reviews and the COAE2014 imbalanced dataset. Experimental results indicate that TSF is an effective feature selection method for the imbalanced problem.
Boundary Characteristics of Inverse P-sets and System Condition Monitoring
REN Xue-fang, ZHANG Ling and SHI Kai-quan
Computer Science. 2016, 43 (10): 211-213.  doi:10.11896/j.issn.1002-137X.2016.10.040
Inverse P-sets are set models with dynamic characteristics, and their dynamic characteristics come from the dynamic transfer of elements (or attributes). Elements transferring into the set make the boundary of the set expand, and elements moving out of the set make the boundary contract, generating a perturbation boundary and a stable core. Based on this fact and inverse P-set theory, the concepts of the boundary and core of inverse P-sets were proposed, their characteristics were given, and the relationships between inverse P-sets and finite ordinary element sets were discussed. Then, based on these results, the perturbation theorems of the boundary and of the core were presented. Finally, an application of inverse P-sets, their boundary and core in system condition monitoring was given.
Study on Recognition of Spatial-Temporal Events Based on Microblogs
ZHENG Zhe-jun, JIN Bei-hong and CUI Yan-ling
Computer Science. 2016, 43 (10): 214-219.  doi:10.11896/j.issn.1002-137X.2016.10.041
As a kind of social networking service, microblog services share and broadcast information mainly through microbloggers' followers, and feature strong topical timeliness and rapid spread. This paper viewed microblogs as a kind of event sensor that can perceive dynamic behaviors in the city, starting with identifying the topics in microblogs and then detecting spatial-temporal events in them. This paper presented a topic model named ST-LDA for analyzing the topics in microblogs. Applying this model, microblogs with similar semantics and close spatial-temporal properties can be classified into the same topic. Then, this paper gave a method for discovering spatial-temporal events from topics. Experimental results based on real data from weibo.com show that our method has higher recall and precision than LDA-based and TimeLDA-based methods.
Ranking Algorithm of Search Engine Using Improved Spectral Clustering
BAI Liang, YU Tian-yuan, LIU Shi, LAO Song-yang and YANG Zheng
Computer Science. 2016, 43 (10): 220-224.  doi:10.11896/j.issn.1002-137X.2016.10.042
The performance of a search engine is determined by its ranking algorithm. A novel ranking algorithm was proposed which combines webpage content and hyperlinks. Spectral clustering is used to analyze webpage content, and the PageRank value is used to score the quality of hyperlinks. Final ranking results are then generated based on the content relevance value and the hyperlink relevance value. Experimental results show that the proposed ranking algorithm performs better than traditional ranking algorithms such as TF-IDF, PageRank and HITS.
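As a rough illustration of combining a content score with a link-quality score, the sketch below mixes TF-IDF cosine relevance with PageRank via a weighted sum; the spectral-clustering step and the paper's actual combination rule are not reproduced, and the pages, links and alpha weight are our own toy assumptions.

```python
# Score pages by alpha * content relevance + (1 - alpha) * PageRank.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

pages = {"p1": "machine learning search ranking",
         "p2": "cooking recipes and food",
         "p3": "learning to rank for search engines"}
links = [("p1", "p3"), ("p2", "p1"), ("p3", "p1")]

def rank(query: str, alpha: float = 0.7):
    ids = list(pages)
    tfidf = TfidfVectorizer().fit(list(pages.values()) + [query])
    docs, q = tfidf.transform(pages.values()), tfidf.transform([query])
    content = cosine_similarity(q, docs).ravel()          # content relevance per page
    pr = nx.pagerank(nx.DiGraph(links))                   # hyperlink quality per page
    scores = {pid: alpha * c + (1 - alpha) * pr.get(pid, 0.0)
              for pid, c in zip(ids, content)}
    return sorted(scores, key=scores.get, reverse=True)

print(rank("search ranking"))
```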
Feature Selection Method Based on MRMR for Text Classification
LI Jun-huai, FU Jing-fei, JIANG Wen-jie, FEI Rong and WANG Huai-jun
Computer Science. 2016, 43 (10): 225-228.  doi:10.11896/j.issn.1002-137X.2016.10.043
Feature selection is the most important preprocessing step in text classification, and the quality of the feature words has a significant impact on the accuracy of the classification results. Traditional feature selection methods such as MI, IG and CHI still leave redundancy among the feature words. For this problem, based on the combination of term frequency-inverse document frequency (TF-IDF) and maximal relevance minimal redundancy (MRMR), this paper put forward an MRMR-based secondary feature word selection method, TFIDF_MRMR. The experimental results indicate that this method is able to reduce the redundancy of feature words and improve the accuracy of classification.
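The generic mRMR greedy criterion referenced above can be sketched as follows on discrete features (relevance minus mean redundancy, the standard MID form); this is not the paper's TFIDF_MRMR procedure, and the toy data and parameters are ours.

```python
# Greedy mRMR selection: maximize MI(feature; label) - mean MI(feature; already selected).
import numpy as np
from sklearn.metrics import mutual_info_score

def mrmr(X: np.ndarray, y: np.ndarray, k: int):
    n_features = X.shape[1]
    relevance = [mutual_info_score(X[:, j], y) for j in range(n_features)]
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            redundancy = np.mean([mutual_info_score(X[:, j], X[:, s]) for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(300, 8))
y = (X[:, 0] + X[:, 2]) % 2
print(mrmr(X, y, k=3))
```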
XML Keyword Search Algorithm Based on Intelligent Grouping Strategy
ZHANG Yong, LI Quan-lin and LIU Bo
Computer Science. 2016, 43 (10): 229-233.  doi:10.11896/j.issn.1002-137X.2016.10.044
As an information retrieval method, XML keyword search has been a hot issue in related fields. Based on the classical query semantic SLCA, an XML keyword search algorithm based on an intelligent grouping strategy was designed and implemented in this paper. With a reasonable grouping strategy, the proposed algorithm ensures that ancestor nodes and repeated nodes are removed in time during processing, reducing redundant computation and improving the efficiency of the algorithm. Finally, experiments on different XML data sets were designed, and the results show the effectiveness and efficiency of the proposed algorithm.
Metaheuristic Algorithm for Split Demand School Bus Routing Problem
CHEN Xiao-pan, KONG Yun-feng, ZHENG Tai-hao and ZHENG Shan-shan
Computer Science. 2016, 43 (10): 234-241.  doi:10.11896/j.issn.1002-137X.2016.10.045
In school bus route planning for middle and elementary schools, the total cost of school bus service may be reduced if the travel demand of each bus stop can be split and served by several buses. This paper dealt with the split demand school bus routing problem (SDSBRP). Compared with the split delivery vehicle routing problem (SDVRP), the students' maximum riding time must be considered in SDSBRP. Moreover, the objectives of SDSBRP are to minimize the total number of buses and the total traveling distance. Thus, the methods and strategies for solving the classic SDVRP cannot be applied to SDSBRP directly. This paper, for the first time, analyzed the solution properties of SDSBRP, introduced a mathematical formulation for the bi-objective SDSBRP, and proposed a metaheuristic algorithm for it. In the algorithm, an initial feasible solution is generated by a constructive heuristic. Then, the solution is improved iteratively by neighborhood operators with or without demand splitting. In addition, a new acceptance criterion and a ruin-and-recreate mechanism are used to improve the diversity of solutions. To avoid local optima, some worsening neighborhood solutions with longer distances can be accepted with a certain probability according to the simulated annealing rule. The efficiency of the proposed algorithm is benchmarked and confirmed by extensive computational experiments, and the savings generated by splitting demands are also discussed.
Adaptive Ontology Semantic Similarity Comprehensive Weighted Algorithm
ZHENG Zhi-yun, RUAN Chun-yang, LI Lun and LI Dun
Computer Science. 2016, 43 (10): 242-247.  doi:10.11896/j.issn.1002-137X.2016.10.046
Ontology semantic similarity computation is the key to solving semantic heterogeneity in the semantic Web. Through analysis and study of traditional ontology semantic similarity computation, this article introduced the ontology hierarchy, proposed an improved semantic similarity method based on information content, distance and attributes, and put forward the ACWA algorithm, which uses principal component analysis to address the deficiencies of manual weighting in traditional comprehensive weighted calculation. The experimental results show that the Pearson coefficient between the results of the proposed ACWA algorithm and the reference values is 8.1% higher than that of traditional methods, and the accuracy of ontology semantic similarity calculation is effectively increased.
Hybrid Differential Evolution Based on Tabu Search Algorithm for Distribution Network Line Planning
ZHANG Gui-jun, XIA Hua-dong, ZHOU Xiao-gen and ZHANG Bei-jin
Computer Science. 2016, 43 (10): 248-255.  doi:10.11896/j.issn.1002-137X.2016.10.047
A hybrid differential evolution (DE) algorithm based on tabu search (TS) was proposed in this paper for distribution network line planning. Firstly, the distribution constraints are divided into hard constraints and soft constraints: hard constraints ensure the reasonableness of the topological structure of the distribution network, while soft constraints improve the diversity of the population. Secondly, a hierarchical structure is adopted: the outer layer provides excellent initial individuals for the inner layer through a rapidly convergent DE algorithm, while the inner layer performs a global search and avoids being trapped in local optima via the TS algorithm. A repair operator is also designed to handle the infeasible solutions generated in DE. Finally, the performance of DETS is verified on 10 benchmark functions. In addition, the line planning of the medium-low voltage distribution network of a certain city is carried out with the proposed DETS.
Novel Neural Network Training Algorithm Based on Iterated Cubature Kalman Filter
YUAN Guang-yao, HU Zhen-tao, ZHANG Jin, ZHAO Xin-qiang and FU Chun-ling
Computer Science. 2016, 43 (10): 256-261.  doi:10.11896/j.issn.1002-137X.2016.10.048
In view of the insufficient accuracy of existing nonlinear filtering algorithms applied to neural network training, a novel neural network training algorithm based on the iterated cubature Kalman filter was proposed. Firstly, the connection weights and biases of the feedforward neural network are used as the state vector to establish the state space model. Secondly, using the spherical-radial rule to generate cubature points, the state estimate and covariance acquired during the measurement update are optimized based on a Gauss-Newton iteration strategy. The training of the neural network connection weights and biases is enhanced through the improved estimation precision of the cubature Kalman filter. Theoretical analysis and simulation results show the feasibility and effectiveness of the algorithm.
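The standard spherical-radial cubature point generation that the cubature Kalman filter relies on (2n equally weighted points around the mean) can be sketched as below; this is background for the abstract above, not the paper's full iterated CKF training loop.

```python
# Spherical-radial cubature points: x_i = mean +/- sqrt(n) * S e_i, with S S^T = cov.
import numpy as np

def cubature_points(mean: np.ndarray, cov: np.ndarray):
    """Return (points, weights) for the third-degree spherical-radial rule."""
    n = mean.size
    S = np.linalg.cholesky(cov)
    offsets = np.sqrt(n) * np.hstack([S, -S])     # shape (n, 2n)
    points = mean.reshape(-1, 1) + offsets
    weights = np.full(2 * n, 1.0 / (2 * n))
    return points.T, weights

pts, w = cubature_points(np.zeros(3), np.eye(3))
print(pts.shape, w.sum())   # (6, 3) 1.0
```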
Collaborative Filtering Recommendation Algorithm Based on Multi-level Item Similarity
XU Xiang-yu and LIU Jian-ming
Computer Science. 2016, 43 (10): 262-265.  doi:10.11896/j.issn.1002-137X.2016.10.049
Abstract PDF(380KB) ( 458 )   
References | Related Articles | Metrics
To address the defects in the item-similarity calculation of traditional item-based collaborative filtering, this paper proposed an improved collaborative filtering algorithm based on multi-level item similarity. Firstly, multi-dimensional heuristic methods are used to analyze item similarity comprehensively from users' behavior records in four aspects: items co-rated by users, user activity, the timeliness of user ratings, and the ratings themselves. Secondly, based on these four aspects of item similarity, a method for calculating multi-level item similarity is designed. Experimental results show that, compared with the traditional item-based collaborative filtering recommendation algorithm, the algorithm based on multi-level item similarity achieves higher recommendation precision and recall, and a lower MAE value.
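A minimal sketch of the general idea is shown below: several item-item similarity matrices are fused into one, and the result drives a plain item-based prediction. The layer names, weights and neighborhood size are illustrative assumptions rather than the paper's exact formulation.

import numpy as np

def fuse_similarities(layers, weights):
    # layers: list of (n_items, n_items) similarity matrices, e.g. rating-based,
    # activity-weighted and time-decayed; weights encode their importance
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return sum(wi * layer for wi, layer in zip(w, layers))

def predict_rating(ratings, sim, user, item, k=5):
    # Item-based CF prediction using the fused similarity matrix
    rated = np.flatnonzero(ratings[user] > 0)
    neighbors = rated[np.argsort(sim[item, rated])[-k:]]
    w = sim[item, neighbors]
    return float(ratings[user, neighbors] @ w / (np.abs(w).sum() + 1e-9))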
Selective Ensemble Learning Algorithm of Extreme Learning Machine Based on Ant Colony Optimization
YANG Ju, YUAN Yu-long and YU Hua-long
Computer Science. 2016, 43 (10): 266-271.  doi:10.11896/j.issn.1002-137X.2016.10.050
Abstract PDF(490KB) ( 426 )   
References | Related Articles | Metrics
This paper proposed a novel selective ensemble learning algorithm for extreme learning machines (ELM) based on the idea of ant colony optimization. The algorithm overcomes drawbacks of existing ELM ensemble learning algorithms, such as low classification accuracy and poor generalization ability. Firstly, the proposed algorithm generates a large number of ELM classifiers by randomly assigning the input weights and biases of the hidden layer. It then uses a binary ant colony optimization algorithm to search for the optimal combination of ELMs. Finally, it uses the selected combination of classifiers to classify test instances. Experimental results on 12 benchmark data sets show that the proposed algorithm achieves the best performance on nine data sets and the second best performance on the remaining three. Adopting the proposed algorithm clearly helps to improve classification accuracy and generalization ability.
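The base learners themselves are simple; a minimal sketch of training one ELM (random hidden layer, least-squares output weights) is shown below, so that the binary ant colony step can be read as selecting a subset of many such randomly initialized models. Function names and the tanh activation are assumptions.

import numpy as np

def train_elm(X, y, n_hidden=50, seed=None):
    # Extreme learning machine: random input weights and biases,
    # output weights solved in closed form with the pseudoinverse
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                 # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y           # Moore-Penrose least-squares solution
    return W, b, beta

def elm_predict(X, model):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# A pool of candidate ELMs; a binary optimizer would then pick the subset
# whose (e.g. majority-vote) ensemble validates best.
X, y = np.random.rand(100, 8), np.random.rand(100)
pool = [train_elm(X, y, seed=s) for s in range(30)]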
Research on Structured Method for Chinese Pathological Text
CHEN De-hua, FENG Jie-ying, LE Jia-jin and PAN Piao
Computer Science. 2016, 43 (10): 272-276.  doi:10.11896/j.issn.1002-137X.2016.10.051
Abstract PDF(399KB) ( 1697 )   
References | Related Articles | Metrics
Pathological text, an important kind of unstructured clinical document, is essential to clinical diagnosis. For Chinese pathological text in particular, this paper put forward a simple and effective structuring approach. Firstly, the Chinese pathological texts are preprocessed, including data cleaning, clause splitting and trunk extraction, in order to extract the corresponding information of each sample. Then each sample's final template information is extracted through clause clustering and statistical parameter filtering. Finally, the templates are used to structure new pathological text directly, and the structured results are obtained. Experiments show that the proposed method achieves satisfactory structured results for similar pathological texts, and the extracted templates are regularly optimized to meet the needs of the latest texts.
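The preprocessing stage is conventional text handling; a minimal sketch of cleaning and clause splitting on Chinese punctuation might look like the following (the punctuation set and the sample report are illustrative assumptions).

import re

def split_clauses(report):
    # Data cleaning: drop whitespace, then split into clauses on common
    # Chinese and ASCII delimiters
    text = re.sub(r"\s+", "", report)
    return [c for c in re.split(r"[，。；：,;:]", text) if c]

clauses = split_clauses("（左肺上叶）腺癌，肿块大小2.5cm；未见脉管内癌栓。")
# -> ['（左肺上叶）腺癌', '肿块大小2.5cm', '未见脉管内癌栓']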
Robustness of Interval-valued Triple I Algorithms
LUO Min-xia and CHENG Ze
Computer Science. 2016, 43 (10): 277-281.  doi:10.11896/j.issn.1002-137X.2016.10.052
Abstract PDF(268KB) ( 409 )   
References | Related Articles | Metrics
In this paper, the robustness of interval-valued triple I algorithms based on the normalized Minkowski distance was investigated. Firstly, the concepts of the maximum sensitivity of interval-valued fuzzy connectives and the perturbation of interval-valued fuzzy sets are proposed. Secondly, based on the normalized Minkowski distance, the sensitivity of the interval-valued Gödel, Łukasiewicz and Goguen implications and their corresponding t-norms is discussed. Finally, the robustness of the interval-valued fuzzy inference full implication triple I algorithm is investigated.
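For readers unfamiliar with the distance used, one common form of the normalized Minkowski distance between interval-valued fuzzy sets A = [A^-, A^+] and B = [B^-, B^+] on a finite universe {x_1, ..., x_n} is given below; the exact normalization in the paper may differ, so this is only an indicative definition.

d_p(A,B) = \left[ \frac{1}{2n} \sum_{i=1}^{n} \Big( \big|A^-(x_i)-B^-(x_i)\big|^p + \big|A^+(x_i)-B^+(x_i)\big|^p \Big) \right]^{1/p}, \qquad p \ge 1,

and B is called an \varepsilon-perturbation of A when d_p(A,B) \le \varepsilon.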
Biogeography-based Optimization with Adaptive Immigration and Dynamic Selection Emigration Strategy
TANG Ji-yong, ZHONG Yuan-chang and ZENG Guang-pu
Computer Science. 2016, 43 (10): 282-286.  doi:10.11896/j.issn.1002-137X.2016.10.053
Abstract PDF(364KB) ( 405 )   
References | Related Articles | Metrics
Dan Simon proposed biogeography-based optimization (BBO) to solve engineering optimization problems. With its unique search mechanism and good performance, the algorithm has attracted many researchers in the field of intelligent optimization. In order to improve the global and local search ability of BBO, an improved strategy based on dynamic selection emigration and adaptive immigration was proposed. The improved algorithm combines the stages of evolution, dynamic selection emigration, random emigration and self-mutation to increase its global search ability. Simulation results show that the algorithm is superior to the compared algorithms in global search ability, convergence speed and convergence accuracy.
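In standard BBO the migration step is controlled by habitat-wise immigration and emigration rates; a minimal sketch of the usual linear migration model (not the paper's adaptive or dynamic variants) is shown below, with the ranking convention stated in the comments.

import numpy as np

def linear_migration_rates(n_habitats, I=1.0, E=1.0):
    # Habitats are ranked by fitness: rank 0 is the worst, rank n-1 the best.
    # Good habitats emigrate more (high mu) and immigrate less (low lambda).
    rank = np.arange(n_habitats)
    lam = I * (1.0 - rank / (n_habitats - 1))     # immigration rates
    mu = E * (rank / (n_habitats - 1))            # emigration rates
    return lam, mu

lam, mu = linear_migration_rates(10)
# In a migration step, habitat i accepts a feature with probability lam[i],
# taking it from a source habitat drawn with probability proportional to mu.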
Image Smoothing Beautification Processing Algorithm in Multi-illumination Color Difference
ZHOU Yi-min and LI Guang-yao
Computer Science. 2016, 43 (10): 287-291.  doi:10.11896/j.issn.1002-137X.2016.10.054
Abstract PDF(982KB) ( 623 )   
References | Related Articles | Metrics
Images collected at night under multiple light sources exhibit color differences, and white balance deviation compensation can be used to smooth and beautify such images and improve their quality. The traditional method, a circular-tracking pixel feature extraction algorithm, compensates the color difference poorly when the image has a white balance deviation. A color-difference image smoothing algorithm for multiple relighting was therefore proposed based on white balance deviation compensation. Firstly, image feature extraction and adaptive equalization preprocessing are carried out, white balance color deviation compensation is applied to the image, blind deconvolution is used for image smoothing, and the similarity function of the target feature model is used to determine the weights of the color-difference features. Along the gradient direction, edge information is obtained, feature clustering is used to partition the target space automatically, and the detail features of the image are smoothed and beautified to a large extent. Simulation results show that the algorithm compensates the white balance deviation and optimizes the smoothing and beautification of the image; it offers good smoothing performance and low computational overhead, achieves the best detail-feature smoothing effect among the compared methods, and outperforms the traditional algorithm.
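White balance deviation compensation itself can take many forms; one simple and widely used baseline is gray-world channel scaling, sketched below purely for illustration (it is not claimed to be the paper's exact compensation scheme).

import numpy as np

def gray_world_balance(img):
    # Scale each RGB channel so that its mean matches the global mean,
    # a standard gray-world white-balance correction
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / (channel_means + 1e-9)
    return np.clip(img * gains, 0, 255).astype(np.uint8)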
Large Damaged Area Image Inpainting Algorithm Based on Matching Model for Broken Structure Line
NIE Hong-yu, ZHAI Dong-hai, YU Jiang and WANG Meng
Computer Science. 2016, 43 (10): 292-296.  doi:10.11896/j.issn.1002-137X.2016.10.055
Abstract PDF(2674KB) ( 616 )   
References | Related Articles | Metrics
To solve problems such as mismatched or unsmooth connections when inpainting large damaged regions with complicated structure information, an image inpainting algorithm based on a matching model for broken structure lines was proposed in this paper. Firstly, several factors that affect the matching degree between broken structure lines are analyzed in depth, and different weights are assigned to the factors according to their significance. On that basis, a matching model for broken structure lines is constructed to obtain matching pairs among them. Secondly, smooth fitted structure lines are obtained from these matching pairs, and they partition the large damaged region into different blocks. Finally, a block matching algorithm is adopted to fill the pixels in these damaged blocks. Compared with the improved Criminisi algorithm, the Hays algorithm and the IIPBDR algorithm in six experiments, our approach matches the broken structure lines accurately and connects them smoothly under the guidance of the matching result. The experimental results demonstrate that our approach can effectively inpaint large damaged regions with complicated structure, and its inpainting results have better visual connectivity.
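The weighted matching degree can be illustrated with a toy scoring function that combines a few plausible factors (endpoint distance, direction agreement, local colour similarity); the factor set and weights here are assumptions for illustration only, not the model defined in the paper.

import numpy as np

def match_degree(line_a, line_b, weights=(0.5, 0.3, 0.2)):
    # Each line is a dict with a break-point position 'end', a unit direction
    # vector 'dir' and a mean local colour 'rgb' (all illustrative fields).
    w_d, w_dir, w_c = weights
    dist = np.linalg.norm(line_a["end"] - line_b["end"])
    direction = abs(float(np.dot(line_a["dir"], line_b["dir"])))   # |cos(angle)|
    colour = 1.0 - np.linalg.norm(line_a["rgb"] - line_b["rgb"]) / (255 * np.sqrt(3))
    return w_d * np.exp(-dist / 50.0) + w_dir * direction + w_c * colour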
Fast Fuzzy Local Information C-means Clustering Segmentation Algorithm
HOU Xiao-fan and WU Cheng-mao
Computer Science. 2016, 43 (10): 297-303.  doi:10.11896/j.issn.1002-137X.2016.10.056
Abstract PDF(2388KB) ( 470 )   
References | Related Articles | Metrics
A fast fuzzy local information C-means clustering segmentation algorithm was proposed because the original fuzzy local information C-means clustering algorithm is time-consuming. In this algorithm, a co-occurrence matrix formed by each target pixel and its neighboring pixels is introduced, from which new expressions for the cluster memberships and the cluster centers are obtained. To improve noise immunity, the memberships of neighborhood pixels are used to filter the pixel classification. The experimental results demonstrate that the proposed algorithm meets the effectiveness requirements of image segmentation. Compared with the fuzzy local information C-means clustering algorithm, the proposed algorithm offers both better segmentation performance and better real-time performance.
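As background, the plain fuzzy C-means membership update that such algorithms build on can be sketched as follows for gray-level data; FLICM-style methods add a local spatial penalty on top of this, which is omitted here.

import numpy as np

def fcm_memberships(gray_values, centers, m=2.0):
    # Standard FCM update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
    d = np.abs(gray_values[:, None] - centers[None, :]) + 1e-9   # (N, C) distances
    power = 2.0 / (m - 1.0)
    u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** power, axis=2)
    return u                                                     # rows sum to 1

u = fcm_memberships(np.array([12., 15., 200., 210.]), np.array([10., 205.]))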
Object Detection Based on Geometric Evidence Collecting
TANG Fu-yu and WEI Hui
Computer Science. 2016, 43 (10): 304-311.  doi:10.11896/j.issn.1002-137X.2016.10.057
Abstract PDF(3383KB) ( 380 )   
References | Related Articles | Metrics
Artificial objects usually have very stable shape features, which are persistent and stable in geometry and provide evidence for object recognition. Moreover, shape features are more stable and more discriminative than appearance, color, gray-level or gradient features. The difficulty of shape-based object recognition is that objects may change in color, lighting, size, position and pose, and may suffer background interference, and we cannot predict all possible conditions. The variety of objects and conditions makes object recognition based on geometric features a very challenging problem. This paper presented a method based on shape templates, which performs geometric evidence selection, collection and combination discrimination on the edge segments of an image to locate the target object accurately against the background, and which is able to assign a semantic attribute to each line segment of the target object. In essence, the method solves a global combinatorial optimization problem. Although the complexity of this problem appears to be very high, the method needs neither a complex feature vector nor an expensive training process. It has good generalization ability, environmental adaptability and a solid basis in cognitive psychology. The geometric evidence collection process is simple and universal, which gives the method broad application prospects. The experimental results show that the method has clear advantages in adapting to environmental changes, invariant recognition, precisely locating object geometry, search efficiency and computational cost. This attempt contributes to understanding some universal processes in object recognition.
Research of Face Recognition Algorithm Based on Nonnegative Tensor Factorization
LIANG Qiu-xia, HE Guang-hui, CHEN Ru-li and CHU Jian-pu
Computer Science. 2016, 43 (10): 312-316.  doi:10.11896/j.issn.1002-137X.2016.10.058
Abstract PDF(944KB) ( 674 )   
References | Related Articles | Metrics
Face recognition is an active research area in biometric identification. Nonnegative tensor factorization is the multilinear extension of nonnegative matrix factorization and has been successfully applied to face recognition and other fields. A face recognition algorithm based on nonnegative tensor factorization was proposed. The method does not need to transform a face matrix into a vector, so the internal structure of the face matrix and the overall structure of the facial images are preserved, making facial feature extraction more accurate. The experimental results show that, compared with classical face recognition algorithms such as PCA and NMF, the face recognition algorithm based on nonnegative tensor factorization represents face patterns better and improves the face recognition accuracy.
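As a rough illustration of the difference from vectorized NMF, a nonnegative CP decomposition can be run directly on a (people x height x width) face tensor; the sketch below assumes the tensorly library (a recent version where non_negative_parafac returns weights and factor matrices) and uses random data in place of real faces.

import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

faces = tl.tensor(np.random.rand(40, 32, 32))          # stand-in for a face data set
weights, factors = non_negative_parafac(faces, rank=20, n_iter_max=200)

people_features, row_basis, col_basis = factors         # all entries nonnegative
# people_features has shape (40, 20): one 20-dimensional nonnegative feature
# vector per person, obtained without flattening each 32x32 image.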
Building of fMRI Dynamic Functional Connectivity Network and its Applications in Brain Diseases Identification
MA Shi-lin, MEI Xue, LI Wei-wei and ZHOU Yu
Computer Science. 2016, 43 (10): 317-321.  doi:10.11896/j.issn.1002-137X.2016.10.059
Abstract PDF(1653KB) ( 1698 )   
References | Related Articles | Metrics
Extracting rich information from complex fMRI data is the key to improving the accuracy of brain disease identification. In conventional resting-state fMRI analysis, functional connectivity is assumed to be temporally stationary. A method of building a dynamic functional connectivity network based on group ICA was put forward to capture the dynamic characteristics of the functional connectivity network. First, the spatial independent components of the fMRI data obtained by the group ICA algorithm are used as network nodes. Then, the dynamic functional connectivity is built over sliding time windows. The dynamic functional connectivity network is used as a feature to distinguish healthy controls from patients with schizophrenia. Experimental results confirm that the proposed method is applicable and effective: it captures information in the temporal dimension, improves the recognition rate, and provides an objective reference for clinical diagnosis.
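The sliding-window step is easy to make concrete; a minimal sketch over component time courses is given below, where the window length and step size are illustrative choices.

import numpy as np

def dynamic_connectivity(timecourses, win_len=30, step=1):
    # timecourses: (T, n_components) matrix of ICA component time courses.
    # Returns one (n, n) correlation matrix per sliding window.
    T, n = timecourses.shape
    mats = []
    for start in range(0, T - win_len + 1, step):
        window = timecourses[start:start + win_len]
        mats.append(np.corrcoef(window, rowvar=False))
    return np.stack(mats)              # shape: (n_windows, n, n)

dfc = dynamic_connectivity(np.random.randn(200, 20))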
Face Detection Design Based on Zynq
HUO Yu-lin and FU Yi-de
Computer Science. 2016, 43 (10): 322-325.  doi:10.11896/j.issn.1002-137X.2016.10.060
Abstract PDF(959KB) ( 684 )   
References | Related Articles | Metrics
Compared with desktop platforms, embedded face detection systems face two main problems: limited resources and lower speed. Here we put forward a software-hardware codesign method to accelerate the face detection process. Firstly, we implemented the face detection algorithm, which is based on AdaBoost cascade classifiers, on the Zynq-7000 platform in the C programming language. Then the performance of the algorithm was profiled with the Xilinx SDK. Afterwards, a software-hardware partitioning strategy was devised for acceleration. By moving the most computation-intensive parts of the algorithm into hardware, we realized the software-hardware codesign of the face detection system on the Zynq-7000 platform. Finally, we presented the acceleration results of two hardware modules and summarized the related work.
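For orientation, the detection algorithm itself is the classical AdaBoost (Haar) cascade; a desktop-side sketch of what the software baseline computes, using OpenCV in Python rather than the paper's C-on-Zynq implementation, might look like this (the cascade file path and image name are placeholders).

import cv2

# Load a pretrained Haar cascade (an AdaBoost cascade of weak classifiers)
cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

gray = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    print(f"face at ({x}, {y}), size {w}x{h}")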