Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
Current Issue
Volume 45 Issue 11, 20 November 2018
Communication and Networking Techniques for Formation Control in UAV Ad Hoc Networks
CHENG Xiao, DONG Chao, CHEN Gui-hai, WANG Wei-jun, DAI Hai-peng
Computer Science. 2018, 45 (11): 1-12.  doi:10.11896/j.issn.1002-137X.2018.11.001
With the development of technology, research on the collaboration of multi-UAV (Unmanned Aerial Vehicle) systems has gained increasing attention. As a key technique for the collaboration of multiple UAVs, formation control poses huge challenges to the communication and networking techniques of UAV ad hoc networks, which feature dynamic channels and topologies. Firstly, the related models of formation control in UAV ad hoc networks were introduced. Then, from the aspects of formation keeping and reconfiguration, mission update, reliability, etc., five formation control strategies and their requirements on the communication and networking techniques of UAV ad hoc networks were surveyed and analyzed. Finally, research prospects and directions for communication and networking technologies in UAV ad hoc networks were proposed.
Research Progress of Network Trust
LIU Jian-sheng, YOU Zhen-xu, YUE Guang-xue, WANG Jie-tai, LIU Jian-hua
Computer Science. 2018, 45 (11): 13-28.  doi:10.11896/j.issn.1002-137X.2018.11.002
The rapid development of the Internet provides people with a free and open way of interaction, and Internet services have become an important part of people's daily life. How to evaluate the trustworthiness of node interactions has become one of the core issues in network applications. Due to the anonymity, randomness and dynamic characteristics of the open network environment, users face many risks when choosing a target node for interaction. Trust evaluation based on node interactions is an effective mechanism for suppressing malicious and fake behaviors in the network. First of all, this paper outlined the related properties of trust, and formally defined and constructed the trust network according to the interactive behavior among network nodes and the characteristics of their trust relationships. Then, this paper analyzed the framework of trust mechanisms, and discussed the research focuses and security threats of trust mechanisms in P2P networks, e-commerce and social networks. At last, this paper compared and analyzed typical trust models in different fields, detailed their effectiveness and shortcomings against attacks, and proposed future research directions for trust mechanisms in terms of improving model algorithms and enhancing their anti-attack capability.
Twin Support Vector Machine: A Review
AN Yue-xuan, DING Shi-fei, HU Ji-pu
Computer Science. 2018, 45 (11): 29-36.  doi:10.11896/j.issn.1002-137X.2018.11.003
Twin support vector machine (TWSVM) is a useful extension of the traditional support vector machine (SVM). For the binary classification problem, the basic idea of TWSVM is to seek two nonparallel hyperplanes such that each hyperplane is closer to one class and at least a unit distance from the other. As an emerging machine learning method, TWSVM has attracted the attention of scholars and become a hotspot in machine learning. This paper reviewed the development of TWSVM. At first, this paper analyzed the basic concept of the twin support vector machine, and summarized the models and research progress of novel TWSVM algorithms in the last several years. Then, it analyzed their advantages and disadvantages and performed experiments on them. At last, it discussed prospects for future research on TWSVM.
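The two-nonparallel-hyperplane idea can be made concrete with the least-squares variant of TWSVM (LSTSVM), which admits closed-form solutions. The following Python sketch is illustrative only and is not one of the models surveyed in the paper:

```python
import numpy as np

def lstsvm_train(A, B, c1=1.0, c2=1.0):
    """Least-squares twin SVM: find two nonparallel hyperplanes,
    each lying close to one class and pushed unit distance from the other."""
    e1 = np.ones((A.shape[0], 1))
    e2 = np.ones((B.shape[0], 1))
    E = np.hstack([A, e1])   # augmented class-A matrix [A, 1]
    F = np.hstack([B, e2])   # augmented class-B matrix [B, 1]
    # Plane 1 (w1, b1): near class A, class B held at value -1
    u = -np.linalg.solve(E.T @ E / c1 + F.T @ F, F.T @ e2)
    # Plane 2 (w2, b2): near class B, class A held at value +1
    v = np.linalg.solve(F.T @ F / c2 + E.T @ E, E.T @ e1)
    return (u[:-1, 0], u[-1, 0]), (v[:-1, 0], v[-1, 0])

def lstsvm_predict(x, plane1, plane2):
    """Assign x to the class whose hyperplane is nearer."""
    (w1, b1), (w2, b2) = plane1, plane2
    d1 = abs(x @ w1 + b1) / np.linalg.norm(w1)
    d2 = abs(x @ w2 + b2) / np.linalg.norm(w2)
    return 0 if d1 <= d2 else 1
```

Unlike standard SVM, which solves one large quadratic program, this variant solves two small linear systems, one per class.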
Research and Development of Library Operating System
SHU Hong-mei, TAN Liang
Computer Science. 2018, 45 (11): 37-44.  doi:10.11896/j.issn.1002-137X.2018.11.004
The earliest library operating system (LibOS) was based on the exokernel, and its purpose was to verify the feasibility and high performance of managing system resources in user space. However, the exokernel has remained a research subject, while the macrokernel and hybrid kernel are the main operating system architectures in actual use, so LibOS did not attract much attention from academia and industry at the beginning. With the rapid development of cloud computing and the rise of the Internet of Things, LibOS has become a new research hotspot for building secure and high-performance unikernels, a kind of specialized microservice. Firstly, the basic definition and features of LibOS were introduced, and a classification model of LibOS was put forward. Secondly, the architecture of LibOS was proposed and its key technologies were described in detail, including thread management, virtual CPU scheduling and virtual memory management in the LibOS kernel base, and network services, disk file I/O and device access in LibOS functions, etc. Finally, based on existing research results, this paper discussed the problems and challenges of LibOS.
Survey of D2D-based Traffic Offloading
YAN Xue-wen, DONG Chao, QU Yu-ben
Computer Science. 2018, 45 (11): 45-51.  doi:10.11896/j.issn.1002-137X.2018.11.005
With the drastic growth of mobile smart devices and the widespread use of various video traffic, next-generation wireless networks will face an explosive increase in data traffic. Meanwhile, the uneven distribution of users leads to an unbalanced distribution of cellular traffic. Device-to-device (D2D)-based traffic offloading is a cost-effective technique for handling surging cellular traffic and unbalanced traffic distribution. Firstly, according to the role of D2D in traffic offloading, this paper divided D2D-based traffic offloading into two categories, i.e., treating D2D communication as the target and treating D2D communication as a relay. Secondly, this paper provided a detailed overview of the advantages and disadvantages of each category. Finally, this paper proposed potential research directions in D2D-based traffic offloading.
Research on Identity Management Authentication Based on Blockchain
DONG Gui-shan, CHEN Yu-xiang, ZHANG Zhao-lei, BAI Jian, HAO Yao
Computer Science. 2018, 45 (11): 52-59.  doi:10.11896/j.issn.1002-137X.2018.11.006
Aiming at the problem of identity management in cyberspace, this paper analyzed a universal identity management authentication model based on blockchain. First, the definition and requirements of identity management in cyberspace were outlined, and development experience and early attempts were reviewed. Then, the advantages and disadvantages of blockchain and several proofs of concept in identity management were analyzed. Next, based on the above analysis, a general blockchain identity management model and each of its modules were analyzed. Finally, the mature ShoCard company's application scenario and DIMS (Decentralized Identity Management System) were analyzed and compared, and prospects for future development were put forward.
Network & Communication
Virtual Grid Based Clustering and Routing Algorithm in Wireless Sensor Networks
CHEN Zhan-sheng, SHEN Hong
Computer Science. 2018, 45 (11): 60-65.  doi:10.11896/j.issn.1002-137X.2018.11.007
In order to solve the energy hole problem caused by the uneven link communication load in WSN routing protocols, a dynamic clustering algorithm based on a virtual grid (IDCS) and a dynamic load-balancing routing algorithm (DCDLB) that maximizes the network life cycle while considering data forwarding delay were presented. In the IDCS algorithm, the area is divided into several virtual grids according to the node communication radius, and the nodes in the same grid form a cluster. The cluster head is chosen by a distributed selection strategy considering the node's energy and location, and a dynamic cluster head rotation mechanism based on the cluster head's energy level is introduced to balance consumption. In the DCDLB routing algorithm, the network lifetime is maximized by considering the energy consumption balance among cluster heads and the multi-hop data forwarding delay. The simulation results show that the DCDLB routing algorithm is superior to the LEACH, HEED and CRVB routing algorithms in terms of extending network lifetime and decreasing data forwarding delay.
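The grid-partition-and-elect step can be sketched as follows. The cell side of r/√5 and the energy-then-centrality score are assumptions for illustration; the abstract does not give the paper's exact criteria:

```python
import math

def grid_clusters(nodes, radius):
    """Partition nodes into virtual grid cells; each cell forms a cluster.
    Cell side radius/sqrt(5) is a common choice keeping nodes in adjacent
    cells within communication range (the paper's side length may differ).
    nodes: {id: (x, y, residual_energy)}"""
    side = radius / math.sqrt(5)
    clusters = {}
    for nid, (x, y, energy) in nodes.items():
        cell = (int(x // side), int(y // side))
        clusters.setdefault(cell, []).append(nid)
    return clusters

def elect_heads(nodes, clusters):
    """Cluster head: highest residual energy, ties broken by closeness
    to the cell centroid (an assumed energy-and-location score)."""
    heads = {}
    for cell, members in clusters.items():
        cx = sum(nodes[m][0] for m in members) / len(members)
        cy = sum(nodes[m][1] for m in members) / len(members)
        def score(m):
            x, y, e = nodes[m]
            return (e, -math.hypot(x - cx, y - cy))
        heads[cell] = max(members, key=score)
    return heads
```

Rotation would then re-run `elect_heads` whenever a head's energy falls below a level threshold.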
Enhanced Adaptive Division Anti-collision Algorithm
WANG Han-wu, YU Tao
Computer Science. 2018, 45 (11): 66-69.  doi:10.11896/j.issn.1002-137X.2018.11.008
Aiming at the disadvantages of the traditional adaptive division collision tree algorithm in tag identification, such as many idle time slots and large communication overhead between the reader and tags, this paper presented an improved adaptive division collision tree algorithm (IACT). The algorithm decides between a binary tree and a quad tree by calculating the collision factor. For the binary tree, if there is only one collision bit, tags can be identified directly without sending further commands. For the quad tree, the reader first sends a command for tags to return the coding of the highest two collision bits, and obtains collision information from this encoding. Using a counter, the reader takes the counter value and the highest two collision bits as the parameters of the query command, and tags send only the postfix of their ID to the reader. Performance analysis and simulation results show that the IACT algorithm can effectively reduce the total time slot consumption and the communication load, and improve recognition efficiency as well.
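The prefix-splitting mechanism underlying collision-tree algorithms can be illustrated with a plain binary query tree; IACT's collision-factor-driven binary/quad switching and counter optimizations are omitted in this sketch:

```python
def query_tree_identify(tags):
    """Basic binary query-tree RFID anti-collision: the reader broadcasts
    a prefix; tags whose ID starts with it respond.  One reply identifies
    a tag; a collision splits the prefix with '0' and '1'; no reply is an
    idle slot (the overhead IACT aims to reduce)."""
    identified, slots = [], 0
    stack = [""]  # pending query prefixes
    while stack:
        prefix = stack.pop()
        slots += 1  # one reader query / tag-response slot
        matching = [t for t in tags if t.startswith(prefix)]
        if len(matching) == 1:
            identified.append(matching[0])   # readable slot
        elif len(matching) > 1:              # collision slot: split prefix
            stack.append(prefix + "0")
            stack.append(prefix + "1")
        # len(matching) == 0: idle slot, nothing to do
    return sorted(identified), slots
```

Counting `slots` makes the time-slot overhead that improved algorithms target directly measurable.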
Analysis and Study on Limited (K=2) Polling Control System with Busy and Idle Sites
YANG Zhi-jun, SUN Yang-yang
Computer Science. 2018, 45 (11): 70-74.  doi:10.11896/j.issn.1002-137X.2018.11.009
From the perspective of ensuring system fairness and improving the efficiency of polling control systems, this paper proposed a limited (K=2) polling control system that distinguishes busy and idle sites. Based on limited (K=2) polling service, the system adopts a parallel control mode to serve only the busy sites holding data packets. The model can not only guarantee the fairness of the system, but also avoid querying idle sites and save conversion query time, thereby improving system utilization and work efficiency. This paper established the mathematical model of the system by using probability generating functions and the embedded Markov chain method, and derived important performance parameters such as the average queue length and the average waiting delay. The theoretical calculations are approximately equal to the simulation values, which indicates that the theoretical analysis is correct and reasonable. Compared with the existing limited (K=1) polling control method, the proposed model provides a better QoS guarantee.
SDN Dynamic Load Balancing Method for Smart Healthcare Cloud
LI Xiong-ying, DONG Qing-he, HE Qian, ZHOU Shui-ming
Computer Science. 2018, 45 (11): 75-81.  doi:10.11896/j.issn.1002-137X.2018.11.010
This paper introduced software-defined networking (SDN) to manage smart healthcare cloud networks, and designed a distributed SDN controller system for smart healthcare to address the single point of failure and unbalanced load distribution of traditional SDN controllers. The system is composed of three layers: the SDN controller cluster, the data forwarding plane and the smart healthcare cloud service system. On this basis, this paper proposed a DAF (Dynamic Adaptive and Fast) load balancing algorithm. In this algorithm, a load-information-aware component periodically collects each controller's load information and automatically exchanges it between controllers. When a controller's load exceeds its threshold, a switch migration action is triggered to reconfigure the mapping between switches and controllers dynamically. The experimental results show that the distributed SDN control system for the smart healthcare cloud performs well, and the DAF algorithm can quickly balance the load among the SDN controllers and improve the network throughput of the healthcare cloud.
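The threshold-triggered migration step can be sketched as below. The load metric, target selection and migration condition here are simplifying assumptions, not the DAF algorithm itself:

```python
def rebalance(controllers, threshold):
    """Threshold-triggered switch migration: while some controller's total
    load exceeds `threshold`, move its heaviest switch to the currently
    least-loaded controller, provided this improves balance.
    controllers: {controller: {switch: load}} (mutated in place)."""
    def load(c):
        return sum(controllers[c].values())
    migrations = []
    changed = True
    while changed:
        changed = False
        for c in sorted(controllers):
            if load(c) > threshold and len(controllers[c]) > 1:
                target = min(controllers, key=load)
                if target == c:
                    continue
                sw = max(controllers[c], key=controllers[c].get)
                # migrate only if the target stays below the source's load
                if load(target) + controllers[c][sw] < load(c):
                    controllers[target][sw] = controllers[c].pop(sw)
                    migrations.append((sw, c, target))
                    changed = True
    return migrations
```

A real controller would drive each migration through an OpenFlow role-change handshake; here the mapping update alone stands in for that step.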
New Optimized Energy Harvesting Technology for Sensor Networks
WANG Yan-li, YIN Guo-fu, JIN Rong
Computer Science. 2018, 45 (11): 82-86.  doi:10.11896/j.issn.1002-137X.2018.11.011
Aiming at the problems that energy is strictly constrained and unevenly distributed among nodes in wireless sensor networks, this paper constructed a cooperative MIMO transmission model with simultaneous wireless information and power transfer. In order to obtain the optimal energy efficiency of the system, this paper proposed a new and effective resource allocation algorithm. Simulation analysis and prototype experiment results show that the new algorithm converges within 12 iterations, and that the energy efficiency quickly increases and then remains stable when the maximum transmission power allowance is 3 dBm. Compared with SISO, cooperative MIMO saves more than one energy level, and the simultaneous wireless information and power transfer technique maximizes network utility.
Computational Complexity Analysis of Virtual Network Mapping Problem
YU Jian-jun, WU Chun-ming
Computer Science. 2018, 45 (11): 87-91.  doi:10.11896/j.issn.1002-137X.2018.11.012
Virtual network mapping plays a central role in network virtualization; it maps virtual nodes and virtual links to the underlying substrate network nodes and paths, respectively, while satisfying the constraints of virtual network construction. This paper categorized the virtual network mapping problem according to whether all the virtual nodes are already mapped, whether the substrate network supports path splitting, and whether the substrate nodes support repeatable mapping, and then completed the computational complexity analysis of the feasibility and optimization problems of the various virtual network mappings for the general network topology model and some special network topology models.
Link Prediction Algorithm Based on Node Intimate Degree
LV Ya-nan, HAN Hua, JIA Cheng-feng, WAN Yan-juan
Computer Science. 2018, 45 (11): 92-96.  doi:10.11896/j.issn.1002-137X.2018.11.013
As an important branch of complex network analysis, link prediction has extensive applications in different fields. Existing link prediction algorithms usually measure the similarity between two nodes only through the structural information of their common neighborhoods, ignoring the tightness between nodes and their common neighbors. To solve this problem, this paper proposed a link prediction algorithm based on node intimacy. The algorithm employs edge clustering coefficients to measure the intimacy between nodes and their common neighbors, and adopts the area under the receiver operating characteristic curve (AUC) as the standard index of link prediction accuracy. Experimental results on four real networks show that the proposed algorithm improves prediction precision compared with other link prediction algorithms.
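One way to realize such a score is to weight each common neighbor by the edge clustering coefficients of the two edges connecting it to the candidate pair. This is an assumed form of the paper's intimacy index, for illustration only:

```python
def edge_cc(adj, u, v):
    """Edge clustering coefficient of existing edge (u, v): triangles
    through the edge over the maximum possible, min(deg)-1."""
    tri = len(adj[u] & adj[v])
    denom = min(len(adj[u]), len(adj[v])) - 1
    return tri / denom if denom > 0 else 0.0

def intimacy_score(adj, x, y):
    """Score candidate link (x, y): for each common neighbor z, add the
    tightness of the edges x-z and y-z.  adj: {node: set(neighbors)}."""
    return sum(edge_cc(adj, x, z) + edge_cc(adj, y, z)
               for z in adj[x] & adj[y])
```

Ranking all non-edges by this score and sweeping a threshold yields the ROC curve whose area (AUC) the paper uses as its accuracy index.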
Stability Based Energy-efficient Routing Protocol in Cognitive Wireless Sensor Networks
ZHU Jiang, LEI Yun, WANG Yan
Computer Science. 2018, 45 (11): 97-102.  doi:10.11896/j.issn.1002-137X.2018.11.014
In order to reduce the energy consumption of nodes and improve the data transmission capacity when building a wireless sensor network from cognitive sensor nodes, a new time-limited energy-efficient protocol based on stability was proposed. The protocol introduces a stability factor by modeling the channel occupancy of authorized users, and studies the relationship between path selection and system energy consumption by continually adjusting the stability factor to find a reasonable path that is least affected by authorized users. This reduces the number of link failures, improves data transmission capacity, keeps energy consumption low, and thereby effectively balances network energy and extends the network life cycle. The simulation results are consistent with the theoretical analysis and show that the proposed protocol performs better in terms of energy consumption and data transmission capacity.
Multi-sink Deployment in Wireless Sensor Networks for Underground Coal Mine Based on Adaptive Particle Swarm Optimization Clustering Algorithm
HU Chang-jun, YUAN Shu-jie
Computer Science. 2018, 45 (11): 103-107.  doi:10.11896/j.issn.1002-137X.2018.11.015
Multi-sink deployment is an important research topic in underground sensor networks and has a great influence on network performance. In view of the defects of current deployment methods, such as complex calculation, slow convergence and trapping in local optima, a multi-sink deployment algorithm based on adaptive particle swarm optimization clustering (A-PSOCA) was proposed on the basis of the standard particle swarm optimization algorithm. In A-PSOCA, the status of particle evolution and aggregation is incorporated into the inertia weight coefficient to make the algorithm more adaptive, and a strategy preventing position overlapping during iterations keeps the particle swarm search from falling into local optima. Simulation results show that the A-PSOCA algorithm obtains reasonable locations for sink nodes, and its convergence rate is twice that of the standard particle swarm clustering algorithm. Compared with other algorithms based on particle swarm optimization, A-PSOCA has obvious advantages in terms of average energy consumption, proportionality and network lifetime, and is more suitable for underground communication environments.
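The underlying optimization can be sketched with a bare-bones PSO that places sinks to minimize the mean node-to-nearest-sink distance. The objective and all coefficients are assumptions; A-PSOCA's adaptive inertia weight and overlap-prevention strategy are not reproduced here:

```python
import random

def pso_sink_deploy(nodes, n_sinks=2, n_particles=10, iters=50, seed=1):
    """Minimal PSO: each particle encodes n_sinks (x, y) positions in a
    100x100 field; fitness is the mean distance from each sensor node to
    its nearest sink."""
    rng = random.Random(seed)
    dim = 2 * n_sinks

    def cost(pos):
        sinks = [(pos[2 * i], pos[2 * i + 1]) for i in range(n_sinks)]
        return sum(min((x - sx) ** 2 + (y - sy) ** 2
                       for sx, sy in sinks) ** 0.5
                   for x, y in nodes) / len(nodes)

    swarm = [[rng.uniform(0, 100) for _ in range(dim)]
             for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in swarm]
    gbest = min(swarm, key=cost)[:]
    for _ in range(iters):
        for i, p in enumerate(swarm):
            for d in range(dim):  # fixed inertia 0.7; A-PSOCA adapts this
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - p[d])
                             + 1.5 * rng.random() * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            if cost(p) < cost(pbest[i]):
                pbest[i] = p[:]
                if cost(p) < cost(gbest):
                    gbest = p[:]
    return gbest, cost(gbest)
```

With two node clusters in opposite corners, two sinks should land near the clusters and beat any single central placement.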
Information Security
Multi-policy Security Model of Mobile Thin Client Based on Web Operating System
YANG Ying, XIA Jian-feng, ZHU Da-li
Computer Science. 2018, 45 (11): 108-114.  doi:10.11896/j.issn.1002-137X.2018.11.016
High-security mobile office work imposes growing security requirements on information systems. In this context, thin-client-based solutions have emerged, which take advantage of cloud storage, distributed terminal systems and centralized management, and provide better safeguards for users. Nowadays, the main thin-client technologies are virtual desktop infrastructure (VDI) and the Web client; the former is the mainstream, while the latter has received widespread attention with the development of Web-based operating systems (Web OS). However, existing Web OSes suffer from problems including weak confidentiality and integrity. Based on an abstract model of Web OS, this paper proposed a hybrid model mixing the BLP model and the Biba model. In order to resolve conflicting information flows, a lattice structure was introduced. Since information flow models place no constraints on trusted subjects, the principle of least privilege was applied to trusted subjects. To improve flexibility and availability, a special trusted subject was authorized to change security levels temporarily. Finally, the security and applicability of the model were analyzed.
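The core of a combined BLP-plus-Biba check can be sketched over pairs of (confidentiality, integrity) levels; the paper's lattice construction and trusted-subject rules go beyond this minimal sketch:

```python
def blp_biba_allowed(op, subj, obj):
    """Combined BLP + Biba mandatory access check.
    BLP (confidentiality): no read up, no write down.
    Biba (integrity): no read down, no write up.
    subj/obj are (conf_level, integ_level); larger number = higher level."""
    sc, si = subj
    oc, oi = obj
    if op == "read":
        # BLP: subject's clearance dominates the object;
        # Biba: object's integrity dominates the subject.
        return sc >= oc and si <= oi
    if op == "write":
        # BLP: write only upward; Biba: write only downward.
        return sc <= oc and si >= oi
    raise ValueError("unknown operation: " + op)
```

Combining the two models strictly is restrictive (a subject can read only objects at or below its secrecy level and at or above its integrity level), which is exactly the conflict the paper's lattice and least-privilege trusted subjects are introduced to manage.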
Design and Simulation of Fair Data Exchange Protocol with Bounded Rationality
LU Zheng-fu, PU Yan-hong, NI Sheng-bin, XU Chen-ming, YANG Chun-yao
Computer Science. 2018, 45 (11): 115-123.  doi:10.11896/j.issn.1002-137X.2018.11.017
Rational exchange protocols may fail in reality because of their idealized rationality hypothesis. In order to solve this protocol failure problem, based on the bounded rationality hypothesis, which is more consistent with reality, the concept of bounded rational fairness was defined and a fair data exchange protocol with bounded rationality (FDEP-BR) was designed for the first time. Theoretical analysis shows that FDEP-BR can resist non-cooperative attacks thanks to its fault tolerance and bounded rational fairness, at the cost of round complexity O(l*v) compared with rational exchange protocols. A finite automata model for FDEP-BR was constructed, the decision-making method of the experience-weighted attraction (EWA) learning model was improved, and an EWA learning decision algorithm was designed. FDEP-BR was then simulated on the Jade-Repast integration platform. The simulation results show that the equilibrium state of FDEP-BR is consistent with expectations.
Multi-authority Encryption Scheme Based on Public and Private Attributes
CHU Xiao-lu, LIU Pei-shun
Computer Science. 2018, 45 (11): 124-129.  doi:10.11896/j.issn.1002-137X.2018.11.018
Attribute-based encryption can simplify key management and access control in cloud computing environments and is well suited to the cloud. This paper proposed a multi-authority encryption scheme based on public and private attributes, in which attributes are divided into public attributes and private attributes. A user's public attributes consist of role and authority information, etc., while the private attributes are composed of the password, device identification codes, etc. Access control implemented with public attributes allows data to be shared safely on the cloud server, while information flow control implemented with private attributes ensures that only a specific user on a specific device can use the data. The scheme supports key tracing and attribute revocation, and encryption based on private attributes also resists collusion attacks.
Perfect Privacy-preserving Batch Provable Data Possession
PANG Xiao-qiong, REN Meng-qi, WANG Tian-qi, CHEN Wen-jun, NIE Meng-fei
Computer Science. 2018, 45 (11): 130-137.  doi:10.11896/j.issn.1002-137X.2018.11.019
Provable data possession (PDP) is an important research direction in cloud storage security. It allows a user to verify efficiently and remotely whether the outsourced data stored in the cloud server are complete, without downloading all the files. Currently, users tend to entrust a Third Party Auditor (TPA) to verify the integrity of their data instead of doing it themselves. However, most public auditing PDP schemes only consider whether malicious servers can forge data labels or proofs, and rarely consider whether a malicious TPA may steal the user's data privacy. In recent years, some PDP schemes that both ensure data security against the server and protect data privacy against the TPA have been proposed for the single-cloud setting, but few batch PDP protocols for the multi-cloud setting can effectively resist malicious cloud servers' attacks while achieving zero-knowledge privacy against the TPA. Therefore, based on the work of Yu et al., this paper proposed a batch PDP scheme which both guarantees data security against malicious cloud servers and realizes perfect data privacy against the TPA. Performance analysis and simulation experiments demonstrate the efficiency and feasibility of the proposed protocol.
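A toy challenge-response possession check conveys the auditing idea. Real PDP schemes, including the one proposed here, use homomorphic authenticators so that proofs can be aggregated and verified by a TPA without revealing the data; the MAC-per-block sketch below has neither property:

```python
import hmac, hashlib, secrets

def tag_blocks(key, blocks):
    """Client-side setup: one MAC tag per outsourced block, binding each
    block to its index (a toy stand-in for homomorphic authenticators)."""
    return [hmac.new(key, str(i).encode() + b, hashlib.sha256).digest()
            for i, b in enumerate(blocks)]

def audit(key, tags, server_blocks, n_challenges=3):
    """Spot-check random block indices: the server must return blocks
    whose recomputed MACs match the stored tags."""
    for _ in range(n_challenges):
        i = secrets.randbelow(len(tags))
        expected = hmac.new(key, str(i).encode() + server_blocks[i],
                            hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tags[i]):
            return False
    return True
```

Note the spot-checking logic: a server that has lost a fraction of the blocks is caught with probability growing in the number of challenged indices, which is why PDP audits need not touch every block.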
Study on Detection of Weibo Spammers Based on Danger Theory in Artificial Immunity System
YANG Chao, QIN Ting-dong, FAN Bo, LI Tao
Computer Science. 2018, 45 (11): 138-142.  doi:10.11896/j.issn.1002-137X.2018.11.020
This paper introduced the danger theory from artificial immune systems into the analysis of user behavior characteristics to identify spammers in Weibo effectively. Taking Sina Weibo as an example, this paper analyzed the behavior characteristics of Weibo spammers, selected the total number of posts, Weibo level, user authentication, sunshine credit and the number of fans as attribute characteristics, and used the analysis results of these attributes as the characteristic signals distinguishing spammers from normal users. On this basis, spammers on Sina Weibo can be recognized with the Dendritic Cell Algorithm. Real data of Sina Weibo users were used to verify the effectiveness of the proposed algorithm, and comparison experiments were conducted. The experimental results suggest that the algorithm can effectively detect spammers on Sina Weibo with high detection accuracy.
Penetration Testing Method for Cyber-Physical System Based on Attack Graph
XU Bing-feng, HE Gao-feng
Computer Science. 2018, 45 (11): 143-148.  doi:10.11896/j.issn.1002-137X.2018.11.021
As a typical safety-critical system, a cyber-physical system (CPS) is a high-value target of network attacks, so its security protection needs to be effectively assessed. To this end, a penetration testing method for CPS based on attack graphs is proposed. Firstly, the traditional attack graph is improved into a new attack graph for CPS (AGC), which considers physical attacks, attack duration and the continuous variable values of the physical system, and adds an attack feasibility parameter representing the success rate of each single-step attack. Secondly, based on AGC, optimal attack path selection strategies are defined, including minimum attack cost, shortest attack time and so on, and an intelligent penetration testing algorithm is designed to automate penetration. Finally, the effectiveness of the proposed method is verified by a case study. The results show that the method can select the optimal attack path to the target, intelligently adjust subsequent attack steps according to feedback, and effectively assess the security of a CPS.
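Once single-step costs are attached to attack-graph edges, the minimum-attack-cost strategy reduces to a shortest-path search. A Dijkstra sketch, with an assumed graph shape of `{node: {successor: cost}}`; swapping time for cost as edge weight gives the shortest-attack-time strategy:

```python
import heapq

def min_cost_attack_path(graph, start, goal):
    """Dijkstra over an attack graph whose edge weights are single-step
    attack costs; returns (total_cost, path) or (inf, []) if the goal is
    unreachable."""
    pq = [(0, start, [start])]   # (cost so far, node, path taken)
    best = {}
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in best and best[node] <= cost:
            continue  # already reached this node more cheaply
        best[node] = cost
        for nxt, w in graph.get(node, {}).items():
            heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []
```

Re-running the search after each penetration step, with observed results updating edge weights, is one simple way to realize the feedback-driven adjustment the method describes.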
CP-ABE Based Access Control of Data Set with Conflict of Interest
CHEN Cheng, Nurmamat HELIL
Computer Science. 2018, 45 (11): 149-154.  doi:10.11896/j.issn.1002-137X.2018.11.022
Cloud storage allows data owners to store their encrypted data in the cloud so as to provide data sharing services for users. However, there may exist a conflict of interest among different data stored by the same data owner. In this regard, this paper proposed a ciphertext-policy attribute-based encryption (CP-ABE) based access control scheme for data sets with conflicts of interest. In this scheme, the data owner embeds a virtual attribute into the access tree with an "AND" gate to obtain a modified access tree, and encrypts the data in the conflict-of-interest data set under the modified tree, thus avoiding errors, cheating or risks caused by an individual user's access to some or all data in the set. Finally, the efficiency and security of the scheme were analyzed; the analytical results suggest that it is efficient and secure.
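The virtual-attribute construction can be sketched as plain access-tree evaluation. The attribute name below is hypothetical, and the actual scheme operates on ciphertexts and secret shares rather than boolean checks:

```python
def satisfies(tree, attrs):
    """Evaluate an access tree against a user's attribute set.
    A node is ('AND', children), ('OR', children), or a leaf attribute."""
    if isinstance(tree, str):
        return tree in attrs
    gate, children = tree
    results = [satisfies(c, attrs) for c in children]
    return all(results) if gate == "AND" else any(results)

def add_virtual_attribute(tree, virtual="VIRT#conflict-set-1"):
    """AND a virtual attribute onto the original policy: data in a
    conflict-of-interest set is encrypted under the modified tree, so
    decryption additionally requires holding the virtual attribute,
    which the owner grants for at most one item in the set."""
    return ("AND", [virtual, tree])
```

A user who satisfies the original policy but lacks the virtual attribute is thus shut out of the whole conflict set, which is the point of the construction.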
Secure Outsourcing Modular Exponentiations with Single Untrusted Cloud Server
WANG Jian-yi, WANG Jian
Computer Science. 2018, 45 (11): 155-159.  doi:10.11896/j.issn.1002-137X.2018.11.023
Modular exponentiation is one of the most fundamental operations in many encryption and signature systems. Due to its heavy computation cost, many schemes have been put forward to securely outsource modular exponentiation to the cloud. However, most existing approaches need two non-colluding cloud servers to implement secure modular exponentiation, resulting in private data leakage once the cloud servers collude. Besides, most existing schemes assume both the base and the exponent are private, which does not match many real-world applications: usually, to reduce computation overhead, only the sensitive values need to be protected. To solve these problems, this paper proposed two secure outsourcing schemes, for the fixed-base case (public base, private exponent) and the fixed-exponent case (private base, public exponent), respectively. In the proposed schemes, the client needs only one cloud server, thus avoiding the collusion attack of two servers. Theoretical analysis and experimental results demonstrate the security and efficiency of the proposed schemes.
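For the fixed-base case, the blinding idea can be sketched with a precomputed pair (r, g^r): the server only ever sees a uniformly random exponent. This is a generic sketch, not the paper's scheme, and the verification query a real scheme would add against a cheating server is omitted:

```python
import secrets

def outsource_fixed_base(g, a, p, q, server):
    """Outsource g^a mod p with private exponent a and public base g of
    prime order q.  The client blinds a with a random r: the server sees
    only t = a + r mod q, which is uniform and independent of a.
    `server` is any callable computing t -> g^t mod p."""
    r = secrets.randbelow(q)
    g_r = pow(g, r, p)          # in practice precomputed offline (BPV-style)
    t = (a + r) % q
    y = server(t)               # heavy exponentiation done by the cloud
    # unblind: g^a = g^t * (g^r)^(-1) mod p
    return (y * pow(g_r, -1, p)) % p
```

The client's online work is one modular multiplication and one inversion, which is the source of the outsourcing speedup; `pow(g_r, -1, p)` needs Python 3.8+.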
Network Security Experiment Environment Multi-emulation Planning Based on Capability Measurement
ZENG Zi-yi, QIU Han, ZHU Jun-hu, ZHOU Tian-yang
Computer Science. 2018, 45 (11): 160-163.  doi:10.11896/j.issn.1002-137X.2018.11.024
The combined usage of multiple emulation technologies can provide flexible resource allocation for constructing network security experiment environments; the difficulty lies in how to balance fidelity requirements. A multi-emulation planning method based on "distribution on demand" was proposed for this problem. Firstly, emulation capability is used to define the fidelity requirement, so that complex, abstract and unstructured requirements are represented in a simple, concrete and structured form. Secondly, a fidelity satisfaction decision criterion based on a default rejection strategy and a multi-emulation scheme solving algorithm based on a greedy strategy are given. In an experiment constructing an emulation environment for worm sample propagation, the emulation scheme solved by this method achieves the minimum emulation cost while satisfying the fidelity requirement.
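The default-rejection criterion and greedy solver can be sketched by modeling each fidelity requirement as a capability set; the names and costs below are hypothetical:

```python
def plan_emulation(requirements, technologies):
    """Greedy multi-emulation planning: for each node, pick the cheapest
    technology whose capability set covers the node's fidelity
    requirement; reject the whole plan if any requirement cannot be met
    (default rejection).
    requirements: {node: set(required capabilities)}
    technologies: {name: (set(capabilities), cost)}"""
    plan, total = {}, 0
    for node, required in requirements.items():
        candidates = [(cost, name)
                      for name, (caps, cost) in technologies.items()
                      if required <= caps]
        if not candidates:
            return None, None   # default rejection: infeasible
        cost, name = min(candidates)
        plan[node] = name
        total += cost
    return plan, total
```

Since each node's choice is independent under this model, greedy per-node selection already attains the minimum total cost; shared-resource constraints would break that property.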
Intrusion Detection Method Based on Intuitionistic Fuzzy Time Series Forecasting Model in Cyberspace
XING Rui-kang, LI Cheng-hai, FAN Xiao-shi
Computer Science. 2018, 45 (11): 164-168.  doi:10.11896/j.issn.1002-137X.2018.11.025
Cyberspace is an emerging combat space that has arisen amid major changes in the world's militaries under the conditions of informatization, and it has a particularly important impact on air defense and anti-missile confrontation. Due to imperfect security mechanisms, the threats that cyberspace faces are constantly increasing. Against this background, this paper proposed an intrusion detection method based on an intuitionistic fuzzy time series forecasting model. The method calculates the intuitionistic fuzzy prediction error of each characteristic attribute of the network data, and distinguishes normal data from intrusion attacks by this prediction error, so as to achieve detection and early warning. On this basis, an intrusion detection framework is established, and a simulation experiment platform is set up to test the validity and effectiveness of the algorithm on an abstract, simplified cyberspace confrontation model. The experimental results show that the method is effective and improves the detection rate to some extent.
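The detection rule itself, flagging records whose forecast error exceeds a threshold, can be sketched with a moving-average forecaster standing in for the intuitionistic fuzzy model:

```python
def detect_intrusions(series, window=3, threshold=2.0):
    """Flag time steps whose observed value deviates from the forecast by
    more than `threshold`.  A moving average stands in for the paper's
    intuitionistic fuzzy time-series forecaster; thresholding the
    prediction error is the detection rule.  Applied per characteristic
    attribute of the traffic data."""
    alerts = []
    for i in range(window, len(series)):
        forecast = sum(series[i - window:i]) / window
        if abs(series[i] - forecast) > threshold:
            alerts.append(i)
    return alerts
```

The threshold trades detection rate against false alarms; a better forecaster (such as the intuitionistic fuzzy model) shrinks the error on normal data, allowing a tighter threshold.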
Homomorphic Evaluation of Lightweight Block Cipher over Integers
MAO He-feng, HU Bin
Computer Science. 2018, 45 (11): 169-175.  doi:10.11896/j.issn.1002-137X.2018.11.026
Abstract PDF(1392KB) ( 688 )   
References | Related Articles | Metrics
Based on the DGHV fully homomorphic encryption scheme proposed by Gentry et al. at EUROCRYPT 2010 and the batching technique, this paper presented a homomorphic evaluation method for the lightweight block cipher SIMON circuit by state-wise bitslicing, and proposed a representation called half-byte-wise bitslicing. On this basis, it provided an implementation of half-byte-wise bitsliced homomorphic evaluation of the PRINCE circuit. Lastly, this paper compared PRINCE, SIMON-64/128 and SIMON-128/256 with AES-128 with respect to homomorphic operations, and analyzed the homomorphic evaluation costs of different block cipher circuits and different implementation methods.
Removable Attribute Encryption Access Control Algorithm Based on CP-ABE
TU Yuan-fei, GAO Zhen-yu, LI Rong-yu
Computer Science. 2018, 45 (11): 176-179.  doi:10.11896/j.issn.1002-137X.2018.11.027
Abstract PDF(1604KB) ( 820 )   
References | Related Articles | Metrics
In order to enhance the security of computer networks and ensure that information resources in the network are not used illegally, access control is needed. Based on the access control algorithm for grid virtual organizations, existing algorithms establish different trust domains in the grid, implement identity- and behavior-based access control strategies between hosts, and realize cross-domain access control of the grid virtual organization and the establishment of mutual trust through the core algorithm and logical reasoning. However, these algorithms may mistake an illegal network for a secure one, so their access control accuracy is not high. To address this, a revocable attribute encryption access control algorithm based on CP-ABE was proposed. First, an access tree based on CP-ABE, which can be used to encrypt access control, is constructed, and the initial construction and key generation of access control are completed through CP-ABE. On this basis, the processes of new file creation, new user authorization, user revocation and file access are designed in the encryption and decryption algorithms, so as to improve the access control effect of the revocable attribute encryption access control algorithm. The experimental results show that the proposed algorithm is easy to control, the time consumed by access control is reduced, and the control effect is better. In addition, the implementation process of the proposed method is simple, and this study plays a positive role in the development of research in this field.
Personalized (α,l)-diversity k-anonymity Model for Privacy Preservation
CAO Min-zi, ZHANG Lin-lin, BI Xue-hua, ZHAO Kai
Computer Science. 2018, 45 (11): 180-186.  doi:10.11896/j.issn.1002-137X.2018.11.028
Abstract PDF(1801KB) ( 735 )   
References | Related Articles | Metrics
Aiming at the problem that traditional privacy preservation models lack consideration of personalized anonymity, this paper analyzed the two existing personalized anonymity mechanisms. On the basis of the k-anonymity and l-diversity models, a personalized (α,l)-diversity k-anonymity model was proposed to solve the existing problems. In the proposed model, the sensitive attribute values are divided into several categories according to their sensitivities, each category is assigned corresponding constraints, and personalized privacy preservation is provided for specific individuals. The experimental results show that the proposed model can provide stronger privacy preservation while supplying personalized service efficiently.
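The class-level check behind such a model can be sketched as follows, under the simplifying assumption that a single frequency bound α applies to every highly sensitive value (the paper assigns per-category constraints):

```python
from collections import Counter

# Illustrative check: an equivalence class passes if it has at least l
# distinct sensitive values, and no highly sensitive value occurs with
# frequency above alpha. The per-category constraints of the paper are
# collapsed into one alpha here for brevity.

def satisfies_alpha_l_diversity(sensitive_values, l, alpha, high_sensitivity):
    counts = Counter(sensitive_values)
    if len(counts) < l:                     # l-diversity requirement
        return False
    n = len(sensitive_values)
    return all(counts[v] / n <= alpha       # alpha frequency bound
               for v in counts if v in high_sensitivity)

group = ["flu", "flu", "HIV", "cold"]
print(satisfies_alpha_l_diversity(group, l=3, alpha=0.3, high_sensitivity={"HIV"}))
```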
Secure Data Deduplication Scheme Based on Merkle Hash Tree in Hybrid Cloud Storage Environments
ZHANG Gui-peng, CHEN Ping-hua
Computer Science. 2018, 45 (11): 187-192.  doi:10.11896/j.issn.1002-137X.2018.11.029
Abstract PDF(2377KB) ( 867 )   
References | Related Articles | Metrics
Deduplication is an efficient data compression and storage optimization technology in cloud storage systems. It can reduce storage space and transmission bandwidth consumption by detecting and eliminating redundant data. The convergent encryption adopted by existing cloud storage systems is vulnerable to brute-force attacks, and the time cost of its ciphertext generation is excessive. In this paper, an efficient deduplication scheme based on a Merkle hash tree in a hybrid cloud environment was proposed. The tag used to detect duplicated data is calculated by introducing a privilege level function and label coefficients, which realizes a secure deduplication system with different privilege levels. At the same time, an additional encryption algorithm is implemented, and cryptographic keys are generated by a Merkle hash tree. These keys are used to encrypt the plaintext in file-level and block-level deduplication, which ensures that the ciphertext becomes unpredictable. The security analysis shows that this scheme can effectively resist brute-force attacks from internal and external attackers, and improve the confidentiality of data. The simulation results show that the proposed MTHDedup scheme can effectively reduce the computation overhead of ciphertext generation and the storage space of cryptographic keys. As the number of privilege sets increases, the performance advantage of the MTHDedup scheme becomes more obvious.
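The key-organization idea can be illustrated with a plain Merkle tree, where leaves hash the data blocks and the root serves as a deterministic file-level key; this is a sketch of the general construction, not MTHDedup's exact derivation or its privilege-level tags:

```python
import hashlib

# Leaves are hashes of the data blocks; internal nodes hash the
# concatenation of their children. Identical files yield identical roots,
# which is what makes deduplication possible over deterministic keys.

def merkle_root(blocks):
    level = [hashlib.sha256(b).digest() for b in blocks]
    while len(level) > 1:
        if len(level) % 2:                   # duplicate last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block-0", b"block-1", b"block-2"]
file_key = merkle_root(blocks)
print(file_key.hex()[:16])  # deterministic: same blocks, same key
```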
Software & Database Technology
Software Bug Triaging Method Based on Text Classification and Developer Rating
SHI Xiao-wan, MA Yu-tao
Computer Science. 2018, 45 (11): 193-198.  doi:10.11896/j.issn.1002-137X.2018.11.030
Abstract PDF(2592KB) ( 787 )   
References | Related Articles | Metrics
Bug management and repair in open-source software (OSS) projects are meaningful ways to ensure the quality of software and the efficiency of software development, and improving the efficiency of bug triaging is an urgent problem to be resolved. A prediction method based on text classification and developer rating was proposed in this paper. The core idea of building the prediction model is to combine text classification based on machine learning with a rating mechanism based on the source of bugs. According to the experiment on hundreds of thousands of bugs in the Eclipse and Mozilla projects, in the ten-fold incremental verification mode, the best average accuracies of the proposed method reach 78.39% and 64.94%, respectively. Moreover, its accuracies are increased by 17.34% and 10.82%, respectively, compared with the highest average accuracies of the baseline method (machine learning classification + tossing graphs). Therefore, the results indicate the effectiveness of the proposed method.
Combinatorial Test Case Generation Method Based on Simplified Particle Swarm Optimization with Dynamic Adjustment
BAO Xiao-an, BAO Chao, JIN Yu-ting, CHEN Chun-yu, QIAN Jun-yan, ZHANG Na
Computer Science. 2018, 45 (11): 199-203.  doi:10.11896/j.issn.1002-137X.2018.11.031
Abstract PDF(1345KB) ( 570 )   
References | Related Articles | Metrics
One of the keys to optimized combinatorial testing is that the generated test cases cover as many combinations as possible, and the particle swarm algorithm has a distinct advantage in generating test cases with strong combinatorial coverage. This paper proposed a combinatorial test case generation method based on simplified particle swarm optimization with dynamic adjustment. In this method, test cases are generated by the particle swarm algorithm, and a mixed-priority one-test-at-a-time strategy is combined with the dynamically adjusted simplified particle swarm optimization algorithm to generate the combinatorial test case set, excluding the influence of velocity factors on the particle optimization process. Then, a particle convergence criterion is defined, and the inertia weight is dynamically adjusted based on the premature convergence degree of the particle swarm, so as to prevent the particles from falling into local optima and converging slowly in later stages, thus improving the combinatorial coverage of the covering table generated by the particle swarm algorithm. Experiments show that the simplified particle swarm optimization algorithm with dynamic adjustment has certain advantages in terms of test case scale and time cost.
Incremental Data Extraction Model Based on Variable Time-window
LIU Jie, WANG Gui-ling, ZUO Xiao-jiang
Computer Science. 2018, 45 (11): 204-209.  doi:10.11896/j.issn.1002-137X.2018.11.032
Abstract PDF(2787KB) ( 872 )   
References | Related Articles | Metrics
Continuously extracting and integrating changed data from different data sources based on an appropriate data extraction model is crucial for sharing data between heterogeneous systems and building an incremental data warehouse for data analysis. The traditional timestamp-based changed-data-capture method has an efficiency problem: as soon as an exception occurs during data extraction, the whole extraction run fails and the database must be rolled back, which reduces extraction efficiency. To address this problem, this paper proposed an incremental data extraction model based on a variable time-window. The model extracts a small number of repetitive records and then de-duplicates them based on the idea of a time-window. The model reduces the influence of exceptions on data extraction, enhances the reliability of the timestamp-based incremental ETL process, and improves the efficiency of data extraction.
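The widened-window idea can be sketched as follows; the field names and overlap parameter are illustrative assumptions, not the paper's schema:

```python
# Instead of extracting strictly after the last checkpoint, the window
# start is moved back by an overlap margin, so records missed by a failed
# run are re-extracted and then de-duplicated by primary key.

def extract_incremental(rows, last_checkpoint, overlap, seen_keys):
    """Return rows whose timestamp falls in the widened window, skipping
    primary keys already loaded by earlier runs."""
    window_start = last_checkpoint - overlap
    changed = [r for r in rows
               if r["ts"] > window_start and r["id"] not in seen_keys]
    seen_keys.update(r["id"] for r in changed)
    return changed

rows = [{"id": 1, "ts": 95}, {"id": 2, "ts": 101}, {"id": 3, "ts": 105}]
seen = {1}                          # id 1 was loaded by a previous run
batch = extract_incremental(rows, last_checkpoint=100, overlap=10, seen_keys=seen)
print([r["id"] for r in batch])     # overlapping record 1 is dropped as duplicate
```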
Direction-aware Moving Object Range Query Algorithm in Road Network
DONG Tian-yang, SHANG Yue-hui, CHENG Qiang
Computer Science. 2018, 45 (11): 210-219.  doi:10.11896/j.issn.1002-137X.2018.11.033
Abstract PDF(2528KB) ( 630 )   
References | Related Articles | Metrics
Range query on moving objects in road networks is one of the classical query methods in spatial query processing, and has been widely used in many fields. However, there are still some shortcomings in the existing moving-object range query methods for road networks. On the one hand, most of them only consider the network distance, but pay little attention to the movement direction of moving objects in road networks. On the other hand, the few query methods that consider the movement direction are based on Euclidean space and cannot be applied to large-scale road networks to determine whether moving objects move toward the query point. Aiming at the problem of accurately and efficiently finding moving objects heading toward the query point in large and complex road networks, this paper proposed a direction-aware moving object range query algorithm for road networks. In this method, an R-tree and a simple grid are taken as the underlying index support, and an efficient method for determining whether a moving object in the road network is heading toward the query point is exploited, so as to efficiently query such objects. At last, this paper designed experiments in terms of the query range, the number of moving objects and the number of grid partitions to evaluate the performance of the proposed algorithm. The experimental results show that the algorithm can achieve good query performance even for large-scale complex road networks, provided that the index configuration is reasonable.
Artificial Intelligence
Co-author and Affiliate Based Name Disambiguation Approach
SHANG Yu-ling, CAO Jian-jun, LI Hong-mei, ZHENG Qi-bin
Computer Science. 2018, 45 (11): 220-225.  doi:10.11896/j.issn.1002-137X.2018.11.034
Abstract PDF(2117KB) ( 741 )   
References | Related Articles | Metrics
Name disambiguation is one of the most challenging issues in the entity resolution domain, and it aims at solving the problem that the same name is shared by different people. However, most conventional approaches rely heavily on sufficient information about entities, and fail to realize name identification when information is insufficient. This paper proposed a novel name disambiguation approach based on co-authors and authors' affiliates. Specifically, an entity relationship diagram is constructed based on co-authorship and authors' affiliates, and a breadth-first search scheme is utilized to search the effective paths between each pair of authors with exactly the same name in the constructed entity relationship diagram. A metric of connection strength between authors is calculated according to the length of the effective paths, the number of effective paths and the types of edges on the paths, and it is compared with a threshold to achieve name disambiguation. Experimental results show that the proposed approach is better than the state-of-the-art approaches, and it is able to disambiguate authors sharing the same name even without co-authorship.
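The path-search step can be illustrated with a plain breadth-first search over an adjacency dictionary; the paper's connection strength also weighs path counts and edge types, which this sketch omits:

```python
from collections import deque

# The entity relationship diagram is treated as an undirected graph over
# author (A*) and paper (P*) nodes; BFS returns the shortest effective
# path length between two ambiguous author nodes, or None if unconnected.

def shortest_path_length(graph, src, dst):
    visited, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in graph.get(node, ()):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, dist + 1))
    return None

graph = {"A1": ["P1"], "P1": ["A1", "A2"], "A2": ["P1", "P2"],
         "P2": ["A2", "A3"], "A3": ["P2"]}
print(shortest_path_length(graph, "A1", "A3"))  # A1-P1-A2-P2-A3: 4 hops
```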
Neural Machine Translation Based on Attention Convolution
WANG Qi, DUAN Xiang-yu
Computer Science. 2018, 45 (11): 226-230.  doi:10.11896/j.issn.1002-137X.2018.11.035
Abstract PDF(1466KB) ( 816 )   
References | Related Articles | Metrics
The attention mechanism commonly used by existing neural machine translation operates at the word level. By creating a multi-layer convolutional structure on top of the attention mechanism, this paper improved the attention mechanism from the word level to the phrase level. After the convolutional operation, the attention information reflects phrase structure more clearly and generates new context vectors, which are then integrated into the neural machine translation framework. Experimental results on large-scale Chinese-to-English tasks show that neural machine translation based on attention convolution can effectively capture the phrasal information in sentences, enhance the context dependencies of translated words, optimize the context vectors and improve the translation quality.
Adaptive Flower Pollination Algorithm with Simulated Annealing Mechanism
LIU Jing-sen, LIU Li, LI Yu
Computer Science. 2018, 45 (11): 231-237.  doi:10.11896/j.issn.1002-137X.2018.11.036
Abstract PDF(2892KB) ( 704 )   
References | Related Articles | Metrics
Aiming at the shortcomings of the basic flower pollination algorithm, in order to improve its convergence rate and optimization accuracy, this paper proposed an adaptive flower pollination algorithm that fuses a simulated annealing mechanism and dynamically adjusts the global step length and local reproduction probability over the iterations. Firstly, the scaling factor of a deformed exponential function is used to control the step length in the global pollination of the basic algorithm, so that flower individuals can be adaptively updated with the number of iterations. Then, by combining the Rayleigh distribution function with the number of iterations, the reproduction probability factor is improved, thus avoiding premature convergence and bringing the solution close to the optimal solution in the later stage. Finally, a simulated annealing cooling operation is incorporated into the improved flower pollination algorithm, which not only increases the diversity of the population, but also improves the overall performance of the algorithm. The simulation results show that the algorithm has a faster convergence speed and higher convergence precision, and its optimization performance is improved.
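The annealing mechanism grafted onto the algorithm amounts to a Metropolis acceptance rule plus a cooling schedule; a sketch of just that step (the pollination operators are omitted, and the geometric cooling rate is an assumption):

```python
import math, random

# A worse candidate solution is accepted with probability exp(-delta/T),
# which preserves population diversity early on; the temperature is cooled
# each iteration so late-stage search becomes greedy.

def accept(current_cost, candidate_cost, temperature, rng=random.random):
    """Always accept improvements; accept a worse candidate with
    probability exp(-(candidate - current) / T)."""
    delta = candidate_cost - current_cost
    if delta <= 0:
        return True
    return rng() < math.exp(-delta / temperature)

def cool(temperature, rate=0.95):
    """Geometric cooling schedule (rate chosen for illustration)."""
    return temperature * rate

print(accept(10.0, 8.0, temperature=1.0))    # improvement: always accepted
print(accept(10.0, 12.0, temperature=1e-9))  # worse move at ~zero T: rejected
```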
Two Types of Dynamic Information Models and Their Applications
ZHANG Ling, REN Xue-fang, SHI Kai-quan
Computer Science. 2018, 45 (11): 238-243.  doi:10.11896/j.issn.1002-137X.2018.11.037
Abstract PDF(1325KB) ( 530 )   
References | Related Articles | Metrics
A P-set is a type of set model with dynamic characteristics, in which the elements' attributes satisfy a conjunctive normal form in mathematical logic. P-sets are obtained by introducing dynamic characteristics into a finite ordinary element set (finite Cantor set) X and improving it. An inverse P-set is another type of set model with dynamic characteristics, in which the elements' attributes satisfy a disjunctive normal form in mathematical logic. Inverse P-sets are also obtained by introducing dynamic characteristics into a finite ordinary element set X and improving it; the inverse P-set is the dual form of the P-set. Here the P-set is defined as one type of dynamic information model, while the inverse P-set is defined as another. In this paper, the structures, generation and dynamic characteristics of P-sets and inverse P-sets were introduced, and the existence facts of P-sets and inverse P-sets were presented. By employing P-sets, the recursive searching-finding and application of information with attribute conjunctive normal forms were given. By using inverse P-sets, the mining-acquisition and application of information with attribute disjunctive normal forms were given. P-sets and inverse P-sets are new models and new methods for the applied research of dynamic information and dynamic data.
K-CFSFDP Clustering Algorithm Based on Kernel Density Estimation
DONG Xiao-jun, CHENG Chun-ling
Computer Science. 2018, 45 (11): 244-248.  doi:10.11896/j.issn.1002-137X.2018.11.038
Abstract PDF(1778KB) ( 977 )   
References | Related Articles | Metrics
CFSFDP (Clustering by Fast Search and Find of Density Peaks) is a new density-based clustering algorithm. It can identify cluster centers effectively by finding density peaks, and it has the advantages of fast clustering speed and simple implementation. The accuracy of the CFSFDP algorithm depends on the density estimation over the dataset and on the manually selected cutoff distance (dc). Therefore, an improved K-CFSFDP algorithm based on kernel density estimation was presented. The algorithm uses a non-parametric kernel density estimate to analyze the distribution of data points and selects dc adaptively to search for and find the density peaks of the data points, taking the peak points as the initial cluster centers. The simulation results on four typical datasets show that the K-CFSFDP algorithm has better accuracy and robustness than the CFSFDP, K-means and DBSCAN algorithms.
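On one-dimensional data, the two CFSFDP quantities (local density rho and distance delta to the nearest denser point) can be sketched with an unnormalized Gaussian kernel density in place of the manually chosen dc; the bandwidth and data here are illustrative only:

```python
import math

# rho: kernel density at each point; delta: distance to the nearest point
# of higher density (or the maximum distance, for the densest point).
# Cluster centers are the points where both rho and delta are large.

def kde_density(points, x, bandwidth):
    """Unnormalized Gaussian kernel density estimate at x."""
    return sum(math.exp(-((x - p) / bandwidth) ** 2 / 2) for p in points)

def cfsfdp_scores(points, bandwidth=1.0):
    rho = [kde_density(points, p, bandwidth) for p in points]
    delta = []
    for i, p in enumerate(points):
        higher = [abs(p - q) for j, q in enumerate(points) if rho[j] > rho[i]]
        delta.append(min(higher) if higher else max(abs(p - q) for q in points))
    return rho, delta

points = [1.0, 1.1, 1.2, 8.0, 8.1]
rho, delta = cfsfdp_scores(points)
# the densest point of the dominant cluster has the largest rho * delta
print(max(range(len(points)), key=lambda i: rho[i] * delta[i]))
```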
SNS: A Fast and Unbiased Stratified Graph Sampling Algorithm
ZHU Jun-peng, LI Hui, CHEN Mei, DAI Zhen-yu
Computer Science. 2018, 45 (11): 249-255.  doi:10.11896/j.issn.1002-137X.2018.11.039
Abstract PDF(4842KB) ( 1128 )   
References | Related Articles | Metrics
As an effective method of statistical analysis, sampling is commonly used in the field of large-scale graph data analysis to improve performance. However, most existing graph sampling algorithms suffer from over-sampling of high- or low-degree nodes, resulting in lower accuracy, which derives from the scale-free characteristic of complex networks: the degrees of different nodes follow a power-law distribution, and the difference between nodes is huge. On the basis of the node-selection sampling method, combined with an approximate degree distribution strategy for nodes, this paper proposed and realized an efficient and unbiased stratified graph sampling algorithm named SNS. The experimental results on three real data sets show that the SNS algorithm preserves more topological properties than other graph sampling algorithms, and consumes less time than the FFS algorithm. Therefore, the SNS algorithm is superior to the existing algorithms in terms of degree unbiasedness and the accuracy of sampling results.
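The stratification idea can be sketched as follows, with degree boundaries chosen arbitrarily for illustration (SNS's approximate degree-distribution strategy is more involved):

```python
import random

# Nodes are grouped into strata by degree, and each stratum is sampled
# proportionally, so high-degree hubs can no longer crowd out low-degree
# nodes in the sample.

def stratified_sample(degrees, rate, boundaries=(2, 10), seed=0):
    """degrees: {node: degree}. Sample `rate` of each degree stratum."""
    rng = random.Random(seed)
    strata = {}
    for node, d in degrees.items():
        bucket = sum(d >= b for b in boundaries)   # 0: low, 1: mid, 2: high
        strata.setdefault(bucket, []).append(node)
    sample = []
    for nodes in strata.values():
        k = max(1, round(rate * len(nodes)))       # at least one per stratum
        sample.extend(rng.sample(nodes, k))
    return sample

degrees = {f"n{i}": d for i, d in enumerate([1, 1, 1, 1, 3, 3, 3, 50, 60, 70])}
picked = stratified_sample(degrees, rate=0.34)
print(sorted(picked))  # one node drawn from each degree stratum
```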
Study on Big Data Mining Method Based on Sparse Representation and Feature Weighting
CAI Liu-ping, XIE Hui, ZHANG Fu-quan, ZHANG Long-fei
Computer Science. 2018, 45 (11): 256-260.  doi:10.11896/j.issn.1002-137X.2018.11.040
Abstract PDF(1896KB) ( 796 )   
References | Related Articles | Metrics
In order to improve the efficiency and accuracy of big data mining, this paper applied sparse representation and feature weighting to big data processing. At first, the features of big data are classified by solving the sparse solution of a linear equation system. In the process of solving the sparse solution, a vector norm is utilized to transform this process into the optimization of an objective function. After feature classification, feature extraction is executed to reduce the dimensionality of the data. Finally, the distribution of data within each class is fully exploited to carry out effective weighting, thus realizing data mining. The experimental results suggest that the proposed algorithm is superior to the common feature extraction and feature weighting algorithms in terms of recall and precision.
Information Dissemination and Mathematical Modeling of Microblog under Graded Opinion Leader
HUANG Xian-ying, YANG Lin-feng, LIU Xiao-yang
Computer Science. 2018, 45 (11): 261-266.  doi:10.11896/j.issn.1002-137X.2018.11.041
Abstract PDF(2122KB) ( 742 )   
References | Related Articles | Metrics
In order to analyze the role of opinion leaders in microblog online social network dissemination and the life cycle of microblog information dissemination, an OLL graded opinion leader model was proposed. Firstly, microblog data are crawled and statistically analyzed. Secondly, the dissemination power is constructed as a function of three factors, namely the number of forwards, the degree of activity and the number of fans, and a weight calculation method based on hierarchy analysis is established. Finally, the computed dissemination power and the OLL model are used to analyze the dissemination effect of opinion leaders and the life cycle of microblogs. The simulation results show that the dissemination effect of opinion leaders in microblog information dissemination is very strong. The errors between the OLL model and three real data sets are 9.6%, 13.4% and 4.5% respectively. This proves that the proposed OLL model is reasonable and effective for analyzing the life cycle of opinion leaders in microblog information dissemination.
Graphics, Image & Pattern Recognition
Image Reconstruction Based on Supervised Learning Deep Auto-encoder
ZHANG Sai, RUI Ting, REN Tong-wei, YANG Cheng-song, ZOU Jun-hua
Computer Science. 2018, 45 (11): 267-271.  doi:10.11896/j.issn.1002-137X.2018.11.042
Abstract PDF(5245KB) ( 954 )   
References | Related Articles | Metrics
Aiming at the reconstruction of damaged information in digital images, this paper proposed a new approach in which the classical unsupervised auto-encoder (AE) is used for supervised learning, and studied the deep model structure and training strategy. Specifically, this paper presented a novel supervised deep auto-encoder model that possesses a set of progressive and interrelated learning strategies through designing multiple groups of supervised single-layer AEs. In the novel model, the one-to-one training strategy of the classical AE model (one output corresponding to one input) is substituted by a many-to-one training strategy (one output corresponding to many inputs). Then, the structure and training strategy mentioned above were applied to damaged or occluded images to test the process of data reconstruction, thus improving the model's ability to express and reconstruct the features of such data. Experimental results show that the new method has good reconstruction effect and adaptability for damaged or occluded samples.
Multi-class Gaussian Mixture Model and Neighborhood Information Based Gaussian Mixture Model for Image Segmentation
CHAI Wu-yi, YANG Feng, YUAN Shao-feng, HUANG Jing
Computer Science. 2018, 45 (11): 272-277.  doi:10.11896/j.issn.1002-137X.2018.11.043
Abstract PDF(5277KB) ( 1080 )   
References | Related Articles | Metrics
The Gaussian mixture model is one of the simple, effective and widely used tools in image segmentation. However, the fitting result is not accurate enough once the number of mixture components in the traditional Gaussian mixture model is fixed. In addition, because the spatial relationship between pixels is not considered, the segmentation results are easily affected by noise, and the segmentation accuracy is not high. To remedy these defects of the traditional Gaussian model, this paper proposed a multi-class Gaussian mixture model and a neighborhood information based Gaussian mixture model for image segmentation. The multi-class Gaussian mixture model decomposes the traditional mixture model: the traditional mixture model is composed of M differently weighted distributions, and the multi-class Gaussian mixture model decomposes each of the M components into R different distributions, that is, the multi-class Gaussian mixture model is composed of M differently weighted distributions, each of which is obtained by mixing R different distributions, thus improving the fitting accuracy of the model. The neighborhood information based Gaussian mixture model adds spatial information to the prior and posterior probabilities in the model, thus enhancing the information association among pixels and the anti-noise capability. The segmentation results were evaluated by structural similarity, misclassification rate and peak signal-to-noise ratio. The experimental results show that, compared with the existing mixture-model segmentation methods, the segmentation accuracy of the proposed method is greatly improved, and noise is effectively suppressed.
Image Denoising Method Combining Kernel Function and Nonlinear Partial Differential Equation
CHEN Peng, ZHANG Jian-wei
Computer Science. 2018, 45 (11): 278-282.  doi:10.11896/j.issn.1002-137X.2018.11.044
Abstract PDF(4910KB) ( 813 )   
References | Related Articles | Metrics
Traditional nonlinear diffusion filtering methods use the traditional gradient operator in image denoising, which may easily lead to loss of details. In view of this shortcoming, a denoising method based on a nonlinear diffusion filter was constructed according to a nonlinear partial differential equation and the image structure information. In this method, a kernel function is used to adaptively adjust the weight coefficients in the multi-direction Laplace operator template, and the influence of image noise is weakened by selecting an appropriate search window width using non-local information. The experimental results show that this method can not only preserve the image texture details, but also achieve good denoising results.
Novel Single Image Raindrop Removal Algorithm Based on Deep Learning
Computer Science. 2018, 45 (11): 283-287.  doi:10.11896/j.issn.1002-137X.2018.11.045
Abstract PDF(2320KB) ( 1253 )   
References | Related Articles | Metrics
Raindrops seriously affect the visual effect of images and subsequent image processing applications. At present, single-image raindrop removal methods based on deep learning can effectively mine the depth features of an image, so their rain removal effect is better than that of traditional methods. However, as network depth increases, overfitting easily occurs, resulting in a bottleneck in the rain removal effect. This paper proposed a novel single-image raindrop removal algorithm based on deep learning. Firstly, inheriting the advantages of deep learning, the network learns the residuals between rain images and rain-free images. Secondly, the raindrop-removed image is reconstructed from the residual image and the source image. Through these steps, the depth of the network is increased and the convergence speed is accelerated. For performance evaluation, a dataset consisting of images of various scenes was used to test the proposed method, and the results were compared with those of the state-of-the-art raindrop removal methods. The experimental results show the superiority of the proposed algorithm.
Concrete Surface Cracks Detection Combining Structured Forest Edge Detection and Percolation Model
QU Zhong, JU Fang-rong, CHEN Si-qi
Computer Science. 2018, 45 (11): 288-291.  doi:10.11896/j.issn.1002-137X.2018.11.046
Abstract PDF(3843KB) ( 952 )   
References | Related Articles | Metrics
To improve the robustness of crack detection methods on different concrete surface crack images, this paper utilized a structured-forest-based learning framework to extract crack edges, and merged an improved fast percolation algorithm to detect cracks, ensuring the precision and efficiency of detection. The approach enhances the crack images by using a piecewise linear transform function to carry out a linear transformation of the color images. Then, according to the local structured information of crack blocks and the integral channel features obtained from the crack edge images, the structured forest edge detector is used to extract crack edges quickly, and the improved percolation model is fused to percolate the edges quickly and remove noise. Finally, a morphological method is used to connect small fractures and fill holes. Experimental results on various crack image datasets show that the proposed approach is fast and robust, and superior to state-of-the-art algorithms in terms of crack detection accuracy.
Interdiscipline & Frontier
Study on Parallel K-means Algorithm Based on CUDA
LIU Duan-yang, ZHENG Jiang-fan, SHEN Guo-jiang, LIU Zhi
Computer Science. 2018, 45 (11): 292-297.  doi:10.11896/j.issn.1002-137X.2018.11.047
Abstract PDF(1353KB) ( 1049 )   
References | Related Articles | Metrics
When the k-means algorithm is confronted with a large dataset, the computation time grows rapidly with the size of the dataset. In order to improve the computational performance of this algorithm, this paper designed a parallel k-means algorithm based on the CUDA (Compute Unified Device Architecture) programming model, called GS_k-means. According to a parallelism analysis of the k-means algorithm, a global selection is used to determine whether the cluster assignment of a data point has changed before the distance calculation, thus reducing redundant computation. The calculation is accelerated by using universal matrix multiplication in the distance computation. When the cluster centers are updated, all data are grouped by cluster tag and the data within each group are simply summed, thus reducing atomic memory operations and improving the overall performance. The experimental results on the KDDCUP99 dataset show that, while ensuring the accuracy of the results, the improved algorithm is five times faster than the classical GPUMiner algorithm.
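The matrix-multiplication speed-up rests on the expansion ||x - c||^2 = ||x||^2 - 2 x.c + ||c||^2, which turns all point-to-center distances into one matrix product; a plain-Python sketch of the identity (on the GPU this product is what the CUDA kernels accelerate):

```python
# D[i][j] = ||points[i] - centers[j]||^2, computed without forming the
# differences explicitly: squared norms plus a dot-product term.

def squared_distances(points, centers):
    pn = [sum(x * x for x in p) for p in points]    # ||x||^2 per point
    cn = [sum(x * x for x in c) for c in centers]   # ||c||^2 per center
    return [[pn[i]
             - 2 * sum(a * b for a, b in zip(points[i], centers[j]))
             + cn[j]
             for j in range(len(centers))]
            for i in range(len(points))]

points = [[0.0, 0.0], [3.0, 4.0]]
centers = [[0.0, 0.0], [3.0, 0.0]]
print(squared_distances(points, centers))  # [[0.0, 9.0], [25.0, 16.0]]
```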
Parallel CP Tensor Decomposition Algorithm Combining with GPU Technology
WU Yu, YAN Guang-hui, WANG Ya-fei, MA Qing-qing, LIU Yu-xuan
Computer Science. 2018, 45 (11): 298-303.  doi:10.11896/j.issn.1002-137X.2018.11.048
Abstract PDF(2380KB) ( 1368 )   
References | Related Articles | Metrics
With the emergence of high-dimensional data, tensors and tensor decomposition have drawn widespread attention in the field of data analysis. The high dimensionality and sparsity of tensor data result in a high computational complexity of tensor methods, which becomes an obstacle to the application of tensor decomposition in practice. Many researchers have introduced parallel computing methods to improve the efficiency of tensor decomposition algorithms. Based on the existing research, this paper presented a GPU-parallel CP tensor decomposition algorithm that simplifies the calculation of the Khatri-Rao product, called the ParSCP-ALS algorithm. The experimental results show that the proposed ParSCP-ALS algorithm can effectively improve the computational efficiency of CP tensor decomposition compared with the existing parallel algorithm. On the Movielens data set, the ParSCP-ALS algorithm reduces the computation time by about 58% compared with the existing parallel algorithm.
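The kernel being parallelized in CP-ALS is the Khatri-Rao (column-wise Kronecker) product of the factor matrices; a plain-Python sketch of that product, which the paper maps onto GPU threads:

```python
# Khatri-Rao product: A is I x R, B is J x R, and the result is
# (I*J) x R with column r equal to kron(A[:, r], B[:, r]). Each output
# row is independent, which is what makes the computation parallelizable.

def khatri_rao(A, B):
    I, J, R = len(A), len(B), len(A[0])
    assert len(B[0]) == R, "factor matrices must share the rank R"
    return [[A[i][r] * B[j][r] for r in range(R)]
            for i in range(I) for j in range(J)]

A = [[1, 2],
     [3, 4]]
B = [[0, 1],
     [1, 0]]
print(khatri_rao(A, B))  # [[0, 2], [1, 0], [0, 4], [3, 0]]
```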
Dynamic Data Compression Strategy Based on Internet of Vehicle
HUANG Zhi-qing, LI Meng-jia, TIAN Rui, ZHANG Yan-xin, WANG Wei-dong
Computer Science. 2018, 45 (11): 304-311.  doi:10.11896/j.issn.1002-137X.2018.11.049
Abstract PDF(2672KB) ( 918 )   
References | Related Articles | Metrics
The Internet of Vehicles (IoV) can effectively cover the sensing area, so it is applied to large-scale urban sensing. Meanwhile, in order to solve the problem that it is difficult for the IoV to transfer a large amount of data, some researchers use compressive sensing (CS) to compress data with spatio-temporal correlation. However, current research on the application of CS in the IoV does not consider dynamic changes in data characteristics and vehicle distribution, which may lead to unacceptable errors. In order to ensure the accuracy of data reconstruction, this paper proposed a dynamic CS approach for the IoV. The approach can automatically analyze the relationship among data characteristics, vehicle distribution and the number of measurements. Then, on the basis of CS, a function for adjusting the number of measurements is added. Through the analysis of data characteristics and vehicle distribution, the parameters of the observation matrix in CS are adjusted in real time, so as to improve the reconstruction accuracy and achieve higher-quality data transmission. The experiment shows that the proposed dynamic CS method improves the reconstruction accuracy by 15.3% compared with the existing CS method in the IoV.
Modified Warning Propagation Algorithm for Solving Regular (3,4)-SAT Instance Sets
SHE Guang-wei, XU Dao-yun
Computer Science. 2018, 45 (11): 312-317.  doi:10.11896/j.issn.1002-137X.2018.11.050
Abstract PDF(1450KB) ( 525 )   
References | Related Articles | Metrics
Based on the critical characterization of minimal unsatisfiable formulas, a 3-CNF formula can be reduced to a regular (3,4)-CNF formula in polynomial time; thus the regular (3,4)-SAT problem is NP-complete. The warning propagation (WP) algorithm converges with high probability on the regular (3,4)-SAT instance sets obtained by this reduction, but it cannot determine the satisfiability of the formula on any instance, so the algorithm fails to solve the problem. For a reduced regular (3,4)-CNF formula, the difference between the numbers of positive and negative occurrences of each variable has been found to be stable. Using this feature, a WP algorithm based on the rule of positive and negative occurrences was proposed to solve the reduced regular (3,4)-SAT instances. The experimental results show that the modified WP algorithm is effective for regular formulas. Therefore, the regularity of a formula can be used to study the convergence of the WP algorithm.