Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
Current Issue
Volume 45 Issue 2, 15 February 2018
Survey of Applications Based on Blockchain in Government Department
REN Ming, TANG Hong-bo, SI Xue-ming and YOU Wei
Computer Science. 2018, 45 (2): 1-7.  doi:10.11896/j.issn.1002-137X.2018.02.001
As the value of Bitcoin has continued to rise, the blockchain technology behind it has promptly drawn widespread attention around the world, including the attention of governments. In particular, some countries, represented by the United States, have begun, with the support of governments and power institutions, to apply this technology in various areas, including the construction of specific information platforms, the operation of equipment and supplies, and the control of systems. They believe that its features of distribution, traceability and tamper resistance can bring advantages in many respects, such as anonymous data collection, data integrity verification, and interconnected communication of intelligent equipment. Meanwhile, the government institutions of many countries still maintain a cautious attitude towards the application of blockchain technology, holding that it still faces many problems, including security and the universality of application. Through the introduction and analysis of applications of blockchain technology in government departments, the challenges currently confronting blockchain technology in such applications were pointed out. Lastly, corresponding solutions to these problems were put forward by surveying the existing work in the academic field.
Abnormal Detection of Eclipse Attacks on Blockchain Based on Immunity
LV Jing-shu, YANG Pei, CHEN Wen, CAO Xiao-chun and LI Tao
Computer Science. 2018, 45 (2): 8-14.  doi:10.11896/j.issn.1002-137X.2018.02.002
The eclipse attack against blockchains is concurrent and covert, and often relies on multiple nodes collaborating to monopolize a victim's network connections. Correspondingly, the computer immune system is distributed, self-learning and strongly adaptive. To detect whether a blockchain suffers from eclipse attacks, this paper proposed a new immunity-based model for detecting eclipse attacks on blockchains. It established the architecture of the detection model and presented formal definitions of each element and the execution processes of each module in the model. Simulated experiments were carried out according to the proposed detection model, and the experimental results show the high accuracy and efficiency of this model.
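The immune-inspired detection idea can be illustrated with a minimal negative-selection sketch. Everything here (the binary connection profiles, the r-contiguous matching rule, all names) is an illustrative assumption, not the authors' implementation:

```python
import random

def matches(detector, pattern, r):
    """r-contiguous matching rule: detector matches a pattern if they
    agree on at least r consecutive positions."""
    run = 0
    for d, p in zip(detector, pattern):
        run = run + 1 if d == p else 0
        if run >= r:
            return True
    return False

def generate_detectors(self_set, n, length, r, rng):
    """Negative selection: keep only random detectors matching no 'self'
    (normal) pattern, so detectors cover the abnormal space."""
    detectors = []
    while len(detectors) < n:
        cand = ''.join(rng.choice('01') for _ in range(length))
        if not any(matches(cand, s, r) for s in self_set):
            detectors.append(cand)
    return detectors

def is_anomalous(pattern, detectors, r):
    """A connection profile is flagged if any detector matches it."""
    return any(matches(d, pattern, r) for d in detectors)

rng = random.Random(42)
normal = ['00000000', '00001111']   # "self": normal connection profiles
detectors = generate_detectors(normal, 20, 8, 5, rng)
print(is_anomalous('00000000', detectors, 5))  # a self pattern -> False
```

By construction no detector can match a self pattern, so normal traffic is never flagged; abnormal connection patterns fall into the region the detectors cover.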
Blockchain-based Big Data Right Confirmation Scheme
WANG Hai-long, TIAN You-liang and YIN Xin
Computer Science. 2018, 45 (2): 15-19.  doi:10.11896/j.issn.1002-137X.2018.02.003
Data right confirmation has always been one of the most challenging problems in big data trading. Traditional means of right confirmation adopt the model of submitting ownership evidence for expert review, but they lack technological credibility and are subject to uncontrollable factors such as potential tampering. To solve these problems, a practically operable confirmation scheme is urgently needed. This paper put forward a new big data right confirmation scheme based on blockchain and digital watermarking technologies. Firstly, an auditing center and a watermarking center are introduced to separate the duties of big data integrity auditing and watermark generation. Secondly, using provable data possession and sampling techniques, lightweight auditing of the integrity of big data is realized. Thirdly, the special security properties of digital watermarking are used to confirm the origin of big data. Finally, in light of the integrity and persistence requirements of the evidence involved in right confirmation, native features of blockchain, such as the shared ledger, are used to implement strong consistency between the right confirmation result and the relevant evidence. Correctness and security analysis shows that the proposed scheme provides a new technical solution for defining the ownership of big data.
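The role the shared ledger plays here, making confirmation evidence tamper-evident, can be sketched with a toy hash chain (the record fields are illustrative assumptions, not the paper's actual on-chain format):

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 digest of a ledger entry."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_evidence(chain, evidence):
    """Append a right-confirmation record; each block commits to its
    predecessor, so altering any stored evidence invalidates every
    later hash."""
    prev = block_hash(chain[-1]) if chain else '0' * 64
    chain.append({'prev': prev, 'evidence': evidence})
    return chain

def verify(chain):
    """Check that every block still commits to its predecessor."""
    for i in range(1, len(chain)):
        if chain[i]['prev'] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_evidence(chain, {'owner': 'A', 'data_digest': 'd1', 'watermark': 'w1'})
append_evidence(chain, {'owner': 'B', 'data_digest': 'd2', 'watermark': 'w2'})
print(verify(chain))                   # True
chain[0]['evidence']['owner'] = 'X'    # tampering with stored evidence...
print(verify(chain))                   # ...is detectable: False
```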
Byzantine Consensus Algorithm Based on Gossip Protocol
ZHANG Shi-jiang, CHAI Jing, CHEN Ze-hua and HE Hai-wu
Computer Science. 2018, 45 (2): 20-24.  doi:10.11896/j.issn.1002-137X.2018.02.004
Blockchain is a kind of distributed ledger system over a peer-to-peer network, which has drawn widespread attention because of its characteristics of decentralization, tamper resistance, security and credibility. In a blockchain system, some nodes exhibit Byzantine faults such as operational errors, network latency, system crashes and malicious attacks. Existing consensus algorithms tolerate few Byzantine nodes in the blockchain, and the scalability of the blockchain system is poor. To solve these problems, this paper proposed a Byzantine consensus algorithm based on the Gossip protocol, which allows the system to tolerate fewer than half of the nodes being Byzantine, matching the fault tolerance of the XFT consensus algorithm. The paper proved that the algorithm reaches consensus in a distributed system with Byzantine faults, in terms of agreement, correctness and termination. At the same time, the system adopts a uniform data structure, and thus has better scalability and makes it easier for correct nodes to identify Byzantine nodes in the blockchain system. In this algorithm, the proposing node changes with the length of the blockchain, so that all nodes in the system hold equal positions, avoiding the single-point-of-failure problem and giving the system better dynamic load balancing.
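The gossip dissemination underlying such a protocol can be sketched as a push-gossip simulation, in which each informed node forwards a message to a few random peers per round. This is a generic gossip sketch under assumed parameters, not the paper's consensus protocol itself:

```python
import random

def gossip_rounds(n, fanout, seed=7):
    """Simulate push gossip: each informed node forwards the block
    proposal to `fanout` random peers per round; return the number of
    rounds until all n nodes are informed (typically O(log n))."""
    rng = random.Random(seed)
    informed = {0}          # node 0 proposes the block
    rounds = 0
    while len(informed) < n:
        rounds += 1
        for _ in list(informed):
            for peer in rng.sample(range(n), fanout):
                informed.add(peer)
    return rounds

print(gossip_rounds(100, 3))
```

In the actual algorithm, votes as well as proposals would spread this way, and correct nodes compare the gossiped messages to identify Byzantine behavior.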
Study on Virtual Power Plant Model Based on Blockchain
SHAO Wei-hui, XU Wei-sheng, XU Zhi-yu, WANG Ning and NONG Jing
Computer Science. 2018, 45 (2): 25-31.  doi:10.11896/j.issn.1002-137X.2018.02.005
The power system is evolving from the Smart Grid to the Energy Internet with the development of new energy technologies and Internet technologies. Distributed energy resources will become the main primary energy in the future Energy Internet. In this situation, virtual power plant technology plays an important role in aggregating distributed power generation resources and establishing virtual power resource transactions. As a new distributed computing paradigm, blockchain has the characteristics of security, transparency and decentralization. This paper proposed a blockchain-based virtual power plant model for the future Energy Internet driven by real-time electricity prices. The coordinated control method of the virtual power plant and the independent grid-connection behavior of distributed energy resources are organically linked by the incentive mechanism of the blockchain, so as to realize distributed dispatching calculation for the virtual power plant. Simulation results show that the proposed model meets the grid-connection requirements of high penetration, high freedom, high frequency and high speed of distributed energy resources in the future Energy Internet.
Information Security Framework Based on Blockchain for Cyber-physics System
DING Qing-yang, WANG Xiu-li, ZHU Jian-ming and SONG Biao
Computer Science. 2018, 45 (2): 32-39.  doi:10.11896/j.issn.1002-137X.2018.02.006
Cyber-physical systems have drawn widespread attention in academia, and the protection problems they face and the corresponding protection measures are increasingly becoming a research focus in the field. A review of current research at home and abroad on the security issues of cyber-physical systems and the corresponding protective measures shows that security protection based on overall multi-level coordination and distributed architectures has become the current research direction, which matches the distributed architecture of blockchain technology. After introducing the distributed topology of blockchain and its information security features, this paper proposed the idea of integrating blockchain technology with cyber-physical systems for security protection, demonstrated the feasibility of combining the two, and constructed the BCCPS framework for their deep integration. The concrete construction of the BCCPS framework at both the basic level and the integration level was highlighted. Finally, the security of the BCCPS framework was demonstrated from four aspects of information security: confidentiality, integrity, availability and traceability. This research provides a new idea for establishing secure and robust cyber-physical systems.
Public Blockchain of Pharmaceutical Business Resources Based on Double-chain Architecture
BI Ya, ZHOU Bei, LENG Kai-jun and WANG Cun-fa
Computer Science. 2018, 45 (2): 40-47.  doi:10.11896/j.issn.1002-137X.2018.02.007
Blockchain is a decentralized shared ledger system and computational paradigm. As a core underlying support technology, it is highly compatible with a distributed economic system. The distributed scheduling model of pharmaceutical business resources based on a public service platform is a comprehensive solution to the current "scattered, small, disorderly and weak" state of the pharmaceutical industry, and plays an important role in integrating decentralized resources and scheduling them on demand. Aiming at some key problems in current public service platforms, this paper proposed a public blockchain of pharmaceutical business resources based on a double-chain architecture, and mainly studied the double-chain structure and its storage mode, privacy protection, resource rent-seeking and matching mechanisms, and the consensus algorithm. The results show that this public blockchain can balance the openness and security of transaction information with the privacy of enterprise information, adaptively complete rent-seeking and matching of resources, and greatly enhance the credibility of the public service platform and the overall efficiency of the system.
Remote Attestation Model Based on Blockchain
LIU Ming-da and SHI Yi-juan
Computer Science. 2018, 45 (2): 48-52.  doi:10.11896/j.issn.1002-137X.2018.02.008
Remote attestation is at the core of constructing a trusted network. However, current remote attestation models only consider centralized networks, where problems such as centralized gateways and single-point decisions make them unsuitable for decentralized settings. Aiming at the problem that computing nodes cannot execute remote attestation in a decentralized distributed network, and drawing on the ideas of blockchain, this paper proposed a remote attestation model based on blockchain (RABBC), focusing on the model framework, the core structure of its blockchain and the protocol process. Analysis shows that RABBC offers the security characteristics of decentralization, traceability, anonymity and tamper resistance, and is efficient.
Optimization Scheme of Consensus Algorithm Based on Aggregation Signature
YUAN Chao, XU Mi-xue and SI Xue-ming
Computer Science. 2018, 45 (2): 53-56.  doi:10.11896/j.issn.1002-137X.2018.02.009
With the rise of Bitcoin, Ethereum, Hyperledger and other systems, blockchain has received more and more attention. Blockchain is the product of many technologies, and the consensus algorithm is an important criterion for evaluating a blockchain system. The consensus algorithm adopted differs from one blockchain system to another according to their different features; different consensus algorithms have their own advantages, but also shortcomings. Currently, efficiency is one of the main problems faced by consensus algorithms in blockchains. To improve efficiency, a potential optimization scheme for consensus algorithms in blockchains was introduced. The dBFT consensus algorithm commonly used in consortium chains was taken as the research object, and its consensus process was modified by combining aggregate signatures with bilinear mapping technology. Compared with the original scheme, the space complexity of signatures in the blockchain system can be effectively reduced with the aggregated dBFT.
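The space saving from aggregation can be made concrete with a toy cost model: without aggregation a block carries one signature per consenting node, while an aggregate (e.g. pairing-based) scheme compresses them into a single signature. The 64-byte signature size and the 7-node committee below are illustrative assumptions:

```python
def signature_bytes(n_nodes, sig_size=64, aggregated=False):
    """Per-block signature payload in a dBFT-style commit: one signature
    per consenting node without aggregation, a single constant-size
    aggregate signature with it (O(n) vs O(1))."""
    return sig_size if aggregated else n_nodes * sig_size

plain = signature_bytes(7)                  # 7 consensus nodes
agg = signature_bytes(7, aggregated=True)
print(plain, agg)                           # 448 64
```

Real aggregation relies on bilinear pairings (e.g. BLS signatures); this sketch only shows the storage consequence, not the cryptography.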
Blockchain Based Secure Storage Scheme of Dynamic Data
QIAO Rui, DONG Shi, WEI Qiang and WANG Qing-xian
Computer Science. 2018, 45 (2): 57-62.  doi:10.11896/j.issn.1002-137X.2018.02.010
To address potential security problems such as tampering and forgery of dynamic data, this paper proposed a secure storage scheme for dynamic data based on blockchain. First, a mathematical model for the problems above was established. Then, the consistency between the local behavior of consensus terminals maximizing their own benefits and the overall goal of ensuring system security and effectiveness was analyzed. Furthermore, a consensus mechanism suitable for secure storage of dynamic data, the ownership state transition function of the instance system, and the architecture of the dynamic data storage system were designed. Finally, the quality and growth characteristics of the dynamic data storage blockchain were analyzed under a stochastic state model. Results show that the scheme can effectively preclude unauthorized changes to the "dynamic data ledger", thus enhancing the credibility of the dynamic data of the instance system.
Feature Selection Algorithm Using SAC Algorithm
ZHANG Meng-lin and LI Zhan-shan
Computer Science. 2018, 45 (2): 63-68.  doi:10.11896/j.issn.1002-137X.2018.02.011
Feature selection can improve the performance of a learning algorithm by removing irrelevant and redundant features. As evolutionary algorithms are reported to be suitable for optimization tasks, this paper proposed a new feature selection algorithm, FSSAC. Its new initialization strategy and evaluation function let FSSAC treat feature selection as a discrete space search problem, and it uses the accuracy of feature subsets to guide sampling. In the experiments, FSSAC was combined with SVM, J48 and KNN classifiers, and validated on UCI machine learning datasets against FSFOA, HGAFS, PSO and other methods. The experiments show that FSSAC can improve the classification accuracy of classifiers and generalizes well. FSSAC was also compared with other available methods in terms of dimensionality reduction.
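The core idea of scoring feature subsets by classifier accuracy can be sketched with a greedy wrapper around a leave-one-out 1-NN classifier. The greedy forward search, the 1-NN evaluator and the toy data are illustrative assumptions; FSSAC itself uses an evolutionary search:

```python
def loo_1nn_accuracy(X, y, features):
    """Leave-one-out accuracy of a 1-NN classifier restricted to `features`."""
    correct = 0
    for i in range(len(X)):
        best, pred = float('inf'), None
        for j in range(len(X)):
            if i == j:
                continue
            d = sum((X[i][f] - X[j][f]) ** 2 for f in features)
            if d < best:
                best, pred = d, y[j]
        correct += (pred == y[i])
    return correct / len(X)

def greedy_forward_select(X, y, n_features):
    """Wrapper-style search: repeatedly add the feature that most improves
    subset accuracy, stopping when no feature helps."""
    selected, remaining = [], list(range(n_features))
    while remaining:
        acc, f = max((loo_1nn_accuracy(X, y, selected + [f]), f)
                     for f in remaining)
        if selected and acc <= loo_1nn_accuracy(X, y, selected):
            break
        selected.append(f)
        remaining.remove(f)
    return selected

# Feature 0 separates the classes; feature 1 is noise.
X = [[0.0, 5.0], [0.1, 1.0], [1.0, 4.9], [1.1, 0.8]]
y = [0, 0, 1, 1]
print(greedy_forward_select(X, y, 2))  # -> [0]
```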
Improved Ensemble Method on MicroRNA Prediction Model
DONG Hong-bin, SHI Li and LI Tao
Computer Science. 2018, 45 (2): 69-75.  doi:10.11896/j.issn.1002-137X.2018.02.012
Existing microRNA prediction methods often suffer from class imbalance in the data sets and applicability to only a single species. To solve these problems, the main work is as follows. Firstly, a hierarchical sampling algorithm based on sequence entropy was proposed, which generates a training set with balanced positive and negative samples according to the overall distribution of the samples. Secondly, a feature selection algorithm based on signal-to-noise ratio and correlation was designed to reduce the size of the training set and thereby improve training speed. Thirdly, DS-GA was proposed to shorten the optimization time of the SVM classifier parameters and avoid over-fitting. Finally, based on the idea of ensemble learning, a general microRNA prediction model was established through sampling, feature selection and classifier parameter optimization. Experiments show that the model solves the imbalance problem effectively, is not limited to a single species, and achieves better results on mixed-species test set prediction.
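The signal-to-noise ratio criterion used for feature selection can be sketched as follows; the exact normalization the paper uses is an assumption, but the common form scores a feature by how far apart the class means are relative to the within-class spread:

```python
import statistics

def snr_score(values, labels):
    """Signal-to-noise ratio of one feature over a binary-labeled sample:
    |mean+ - mean-| / (std+ + std-). Larger scores indicate features
    that separate positive and negative examples better."""
    pos = [v for v, l in zip(values, labels) if l == 1]
    neg = [v for v, l in zip(values, labels) if l == 0]
    return abs(statistics.mean(pos) - statistics.mean(neg)) / (
        statistics.stdev(pos) + statistics.stdev(neg))

labels = [1, 1, 1, 0, 0, 0]
informative = [5.0, 5.1, 4.9, 1.0, 1.1, 0.9]   # separates the classes
noisy = [5.0, 1.0, 3.0, 5.1, 0.9, 3.1]         # does not
print(snr_score(informative, labels) > snr_score(noisy, labels))  # True
```

Ranking features by this score and keeping the top ones is what shrinks the training set before classifier training.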
Generation Algorithm for Scale-free Networks with Community Structure
ZHENG Wen-ping, QU Rui and MU Jun-fang
Computer Science. 2018, 45 (2): 76-83.  doi:10.11896/j.issn.1002-137X.2018.02.013
Generating complex network models can help researchers understand network behaviors and simulate transmission processes such as disease epidemics and information diffusion. It is also important that generated complex networks match the characteristics of real networks and exhibit structural diversity. A network generation algorithm, TCMSN (Scale-free Network with Tunable Clustering Coefficient and Modularity), was proposed to generate scale-free complex networks with tunable clustering coefficient and modularity. TCMSN adjusts modularity by changing the mixing parameter, and adjusts the clustering coefficient by changing the global preferential attachment probability together with the mixing parameter of the network. It adopts a reasonable edge-adding strategy to maintain the scale-free characteristic while preserving network diversity as much as possible. Experimental results on artificial data sets and real networks show that TCMSN can not only generate scale-free network models with tunable clustering coefficient and modularity, but also generate network models close to the community structure of real networks.
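The preferential attachment mechanism that produces the scale-free degree distribution can be sketched with a plain Barabási–Albert-style generator (a baseline TCMSN builds on; the parameters below are illustrative, and TCMSN additionally tunes clustering and community structure):

```python
import random

def preferential_attachment(n, m, seed=1):
    """Grow a graph by preferential attachment: each new node links to m
    existing nodes chosen with probability proportional to their degree."""
    rng = random.Random(seed)
    edges = [(0, 1)]
    targets = [0, 1]          # each node appears once per unit of degree
    for new in range(2, n):
        chosen = set()
        while len(chosen) < min(m, new):
            chosen.add(rng.choice(targets))   # degree-proportional pick
        for t in chosen:
            edges.append((new, t))
            targets += [new, t]
    return edges

edges = preferential_attachment(100, 2)
degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1
print(len(edges), max(degree.values()))  # hubs emerge: max degree >> m
```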
Robust Video Hashing Algorithm Based on Short-term Spatial Variations
YU Xiao, NIE Xiu-shan, MA Lin-yuan and YIN Yi-long
Computer Science. 2018, 45 (2): 84-89.  doi:10.11896/j.issn.1002-137X.2018.02.014
A robust video hashing algorithm based on short-term spatial variations was proposed to detect near-duplicate videos on the Internet. Feature extraction and feature quantization are the key steps of this algorithm. In the feature extraction phase, compared with existing methods based on fusing temporal and spatial information, the innovation of the proposed algorithm is to make full use of the short-term variations of local spatial information between adjacent frames (referred to as "short-term spatial variations"). Inscribed spheres of the video are constructed first, and then a series of spherical tori is obtained by partitioning the inscribed spheres outward from the sphere center, capturing the short-term changes in spatial information between adjacent frames. After that, the decomposition coefficients obtained by non-negative matrix factorization of the spherical tori are used as the feature representation of the video. In the feature quantization phase, to map the feature representation into binary hash sequences, an optimized Manhattan hashing strategy is adopted, which better preserves the neighborhood structure of the original data space and thus improves quantization accuracy. Experiments were carried out on a video dataset to evaluate the proposed method, and the results show that it performs well.
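The quantization idea behind Manhattan hashing can be sketched simply: each feature dimension is quantized into a few integer levels, and codes are compared with Manhattan rather than Hamming distance, so distances between codes track distances between the original values. The level count and ranges below are illustrative assumptions:

```python
def quantize(vector, n_levels, lo, hi):
    """Uniformly quantize each dimension into n_levels integer codes.
    Comparing codes with Manhattan distance preserves how far apart the
    original values were, which plain Hamming codes lose."""
    step = (hi - lo) / n_levels
    return [min(n_levels - 1, max(0, int((v - lo) / step))) for v in vector]

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

a = quantize([0.1, 0.9], 4, 0.0, 1.0)   # -> [0, 3]
b = quantize([0.2, 0.8], 4, 0.0, 1.0)   # -> [0, 3]
c = quantize([0.9, 0.1], 4, 0.0, 1.0)   # -> [3, 0]
print(manhattan(a, b) < manhattan(a, c))  # near vectors get near codes: True
```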
Generalized Discriminant Local Median Preserving Projections and Face Recognition
ZHANG Yong and WAN Ming-hua
Computer Science. 2018, 45 (2): 90-93.  doi:10.11896/j.issn.1002-137X.2018.02.015
To solve the singularity of the within-class scatter matrix in discriminant local median preserving projections (DLMPP) in the small sample size case, an algorithm named generalized discriminant local median preserving projections (GDLMPP) was proposed. To handle the small sample size problem, GDLMPP first equivalently transforms the samples into a lower-dimensional space, and then solves for the optimal projection matrix. Theoretical analysis shows that GDLMPP is equivalent to DLMPP when the within-class scatter matrix is non-singular. Finally, experimental results on the ORL and AR face databases validate the effectiveness of the proposed algorithm.
Sparsity-adaptive Image Denoising Algorithm Based on Difference Coefficient
JIAO Li-juan and WANG Wen-jian
Computer Science. 2018, 45 (2): 94-97.  doi:10.11896/j.issn.1002-137X.2018.02.016
With its remarkable adaptability and detail-recovery capability, K-SVD is a highly effective image denoising method based on sparse representation theory. However, the sparsity K must be given in advance, while in fact different images have different K values. Moreover, the pursuit algorithms used in K-SVD to train sparse coefficients evaluate the relevance between vectors of an image by computing inner products, and a few noisy pixels can easily cause false relevance, reducing the denoising effect. This paper addressed these problems and proposed a sparsity-adaptive speeded K-SVD (SASK-SVD) algorithm based on the difference coefficient, which improves efficiency. The difference coefficient eliminates false relevance, and the sparsity K is generated adaptively by using the average correlation as a threshold. Extensive experiments were conducted to validate these ideas, and the results show that the proposed method achieves state-of-the-art denoising performance.
Particle Swarm Optimization Algorithm with Dynamically Adjusting Inertia Weight
DONG Hong-bin, LI Dong-jin and ZHANG Xiao-ping
Computer Science. 2018, 45 (2): 98-102.  doi:10.11896/j.issn.1002-137X.2018.02.017
To tackle the slow convergence, low accuracy and parameter dependence of the standard particle swarm optimization (PSO) algorithm, a PSO with a nonlinear exponential inertia weight (EIW-PSO) was proposed. In each iteration, the algorithm improves its performance by adjusting the inertia weight dynamically. The new weight is an exponential function of the minimal and maximal fitness of the particles, which is more conducive to escaping local optima during optimization. Random factors are introduced to ensure population diversity, so that the particles converge to the global optimal position faster. The standard PSO, linearly decreasing inertia weight PSO (LDIW-PSO) and mean adaptive inertia weight PSO (MAW-PSO) were tested and compared in different dimensions and population sizes on eight benchmark test functions. Experimental results show that the proposed EIW-PSO algorithm has a faster convergence rate and higher solution precision.
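The fitness-driven inertia weight can be sketched as follows. The exact exponential form, coefficients and bounds used by EIW-PSO are assumptions here; the sketch only shows the mechanism: particles with fitness near the swarm's best get a small weight (exploitation), those near the worst get a larger one (exploration):

```python
import math
import random

def pso(f, dim, n_particles=20, iters=200, seed=3):
    """Minimize f with PSO whose inertia weight is an exponential function
    of each particle's position between the swarm's min and max fitness."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        fits = [f(p) for p in pos]
        fmin, fmax = min(fits), max(fits)
        for i in range(n_particles):
            # normalized fitness rank in [0, 1]; exponential weight in [0.4, ~0.72]
            t = (fits[i] - fmin) / (fmax - fmin + 1e-12)
            w = 0.4 + 0.5 * (1 - math.exp(-t))
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                v = (w * vel[i][d]
                     + 2.0 * r1 * (pbest[i][d] - pos[i][d])
                     + 2.0 * r2 * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-2.0, min(2.0, v))   # velocity clamp
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest_f

sphere = lambda x: sum(v * v for v in x)
print(pso(sphere, dim=2))   # best fitness found, near 0
```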
Parallel Algorithm for Mining User Frequent Communication Relationship
ZHU Peng-yu, BAO Pei-ming and JI Gen-lin
Computer Science. 2018, 45 (2): 103-108.  doi:10.11896/j.issn.1002-137X.2018.02.018
With the rapid development of mobile communication technology and the Internet, mobile communication equipment has become a portable tool for most people. A parallel algorithm, PMFCS, was proposed for mining frequent communication subgraphs from massive communication data. The algorithm is based on the Apriori algorithm and the subgraph connection principle. It uses Spark to distribute all edges to the computing nodes; the first-order candidate subgraphs are then distributed to and counted at each node, and the first-order frequent subgraphs are obtained by summarizing the candidate counts. PMFCS iteratively joins the (k-1)-th-order subgraphs with the first-order subgraphs to generate k-th-order candidate subgraphs, and terminates when the k-th-order frequent subgraph set is empty. The experimental results show that PMFCS can mine frequent communication subgraphs efficiently and quickly.
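The base case and the Apriori join can be sketched on a single machine (the distributed Spark counting is omitted; the call records and the support threshold are illustrative):

```python
from collections import Counter
from itertools import combinations

def frequent_edges(call_records, min_support):
    """First-order frequent subgraphs are edges whose call count reaches
    min_support; PMFCS does this counting distributed over Spark workers."""
    counts = Counter(tuple(sorted(pair)) for pair in call_records)
    return {e for e, c in counts.items() if c >= min_support}

def candidate_paths(freq_edges):
    """Apriori-style join: connect two frequent edges sharing an endpoint
    to form second-order candidate subgraphs (two-edge paths)."""
    cands = set()
    for e1, e2 in combinations(freq_edges, 2):
        if set(e1) & set(e2):
            cands.add(frozenset([e1, e2]))
    return cands

calls = [('a', 'b'), ('b', 'a'), ('b', 'c'), ('c', 'b'), ('a', 'b'), ('c', 'd')]
fe = frequent_edges(calls, 2)
print(sorted(fe))                 # [('a', 'b'), ('b', 'c')]
print(len(candidate_paths(fe)))   # 1: the path a-b-c
```

Counting candidates against the data and pruning infrequent ones, then joining again, iterates exactly as the abstract describes until no frequent k-th-order subgraph remains.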
Fuzzy Weighted Clustering Algorithm with Fuzzy Centroid for Mixed Data
JI Jin-chao, ZHAO Xiao-wei, HE Fei, HU Ying-hui, BAI Tian and LI Zai-rong
Computer Science. 2018, 45 (2): 109-113.  doi:10.11896/j.issn.1002-137X.2018.02.019
In fuzzy c-means type algorithms, the fuzzifier parameter is used to control the degree of possible overlap, but it has the negative effect that all data objects tend to influence all clusters. To address this, Klawonn and Höppner proposed a fuzzy function to replace the fuzzifier. However, their method is designed only for numeric data, while in many real-world applications data objects are described by both numeric and categorical attributes. In this paper, a weighted fuzzy clustering algorithm based on fuzzy centroids (FWFC) was proposed for such mixed data. In this method, the mean is first integrated with a fuzzy centroid to represent cluster centers. Then, a measure that accounts for the influence of different attributes during clustering is used to evaluate the dissimilarity between data objects and cluster centers. Finally, the algorithm for clustering data with mixed attributes is presented. The proposed algorithm was tested in a series of experiments on three mixed datasets, and the results show that it outperforms traditional clustering algorithms.
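A minimal sketch of a mixed-data dissimilarity of the kind such algorithms need, combining a numeric distance with categorical mismatch counting under per-attribute weights (FWFC learns the weights during clustering; here they are simply given, and the functional form is an assumption):

```python
def mixed_dissimilarity(x, y, numeric_idx, categorical_idx, weights=None):
    """Dissimilarity for mixed data: weighted squared Euclidean distance
    on numeric attributes plus weighted mismatch counting on categorical
    ones."""
    w = weights or {}
    d = sum(w.get(i, 1.0) * (x[i] - y[i]) ** 2 for i in numeric_idx)
    d += sum(w.get(i, 1.0) * (x[i] != y[i]) for i in categorical_idx)
    return d

a = [1.0, 2.0, 'red']
b = [1.5, 2.0, 'blue']
print(mixed_dissimilarity(a, b, [0, 1], [2]))  # 0.25 + 1 = 1.25
```

In the full algorithm this distance is evaluated against fuzzy centroids, whose categorical part is a frequency distribution over category values rather than a single value.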
Robotic Fish Tracking Method Based on Suboptimal Interval Kalman Filter
TONG Xiao-hong and TANG Chao
Computer Science. 2018, 45 (2): 114-120.  doi:10.11896/j.issn.1002-137X.2018.02.020
Research on autonomous underwater vehicles (AUVs) focuses on tracking and positioning, precise guidance, return to dock, and so on. The robotic fish, as an AUV, has become a popular application in intelligent education and in civil and military fields. Nonlinear tracking analysis of robotic fish shows that interval Kalman filtering contains all possible filtering results, but the resulting interval is wide and conservative, and the interval data vector is uncertain before implementation. This paper proposed a suboptimal interval Kalman filtering algorithm. The suboptimal scheme uses the inverse of the interval matrix instead of its worst-case inverse, approximating the nonlinear state and measurement equations more closely than the standard interval Kalman filter, increasing the accuracy of the nominal dynamic system model, and improving the speed and precision of the tracking system. Monte Carlo simulation results show that the trajectory estimated by the suboptimal interval Kalman filtering algorithm is better than those of the interval Kalman filter and the standard filter.
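The scalar predict/update cycle that the interval variants generalize (by replacing scalars with intervals) can be sketched as follows; the identity dynamics and the noise variances are illustrative assumptions:

```python
def kalman_step(x, p, z, q, r):
    """One predict/update cycle of a scalar Kalman filter for a
    slowly-moving target: x is the state estimate, p its variance,
    z a measurement with noise variance r, q the process noise."""
    # predict (identity dynamics)
    x_pred, p_pred = x, p + q
    # update: blend prediction and measurement by the Kalman gain
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0
for z in [1.0, 1.1, 0.9, 1.0]:   # noisy sightings of the robotic fish
    x, p = kalman_step(x, p, z, q=0.01, r=0.1)
print(round(x, 2), p)            # estimate pulled toward ~1.0, variance shrunk
```

The interval Kalman filter runs the same recursion with interval-valued p, q and r, which is why its output brackets all admissible filtering results.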
User Gender Classification with Dual-channel LSTM
WANG Li-min, YAN Qian, LI Shou-shan and ZHOU Guo-dong
Computer Science. 2018, 45 (2): 121-124.  doi:10.11896/j.issn.1002-137X.2018.02.021
User gender classification aims at classifying users as male or female from the available information. Previous studies on gender classification mainly focus on a single type of feature (i.e., textual features or social features). Different from previous research, this paper proposed a new approach, named dual-channel LSTM, which makes full use of the relationship between textual features (the text a user publishes) and social features (the accounts a user follows). Specifically, the two kinds of features are first obtained using single-channel LSTMs respectively; then a joint learning method integrates the features, and the final classification results are produced by the dual-channel LSTM. Empirical studies show that the dual-channel LSTM model achieves better results for gender classification than traditional classification algorithms.
Clustering Algorithm Based on Shared Nearest Neighbors and Density Peaks
LIU Yi-zhi, CHENG Ru-feng and LIANG Yong-quan
Computer Science. 2018, 45 (2): 125-129.  doi:10.11896/j.issn.1002-137X.2018.02.022
Robust clustering by detecting density peaks and assigning points based on fuzzy weighted K-nearest neighbors (FKNN-DPC) is a simple and efficient clustering algorithm that automatically detects cluster centers and assigns the remaining samples quickly and accurately based on weighted K-nearest neighbors. It is powerful in recognizing high-quality clusters in data sets of any scale, dimension, size and shape, but the weight calculation in its assignment strategy considers only the Euclidean distance between samples. In this paper, a similarity measure based on shared neighborhoods was proposed, and the sample assignment strategy was improved with this similarity, so that clusters conform better to their true membership, thus improving clustering quality. The effectiveness of the algorithm is verified by comparative experiments on UCI real data sets against the K-means, DBSCAN, AP, DPC and FKNN-DPC algorithms.
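The shared-nearest-neighbor similarity can be sketched directly: two points are similar when their k-nearest-neighbor lists overlap, which captures "same dense region" better than raw Euclidean distance. The value of k and the toy points are illustrative:

```python
def knn(points, i, k):
    """Indices of the k nearest neighbors of point i (squared Euclidean)."""
    dists = sorted((sum((a - b) ** 2 for a, b in zip(points[i], points[j])), j)
                   for j in range(len(points)) if j != i)
    return {j for _, j in dists[:k]}

def snn_similarity(points, i, j, k):
    """Shared-nearest-neighbor similarity: how many of their k nearest
    neighbors two points have in common."""
    return len(knn(points, i, k) & knn(points, j, k))

pts = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10)]
# points 0 and 3 sit in the same dense square; point 4 is an outlier
print(snn_similarity(pts, 0, 3, 2), snn_similarity(pts, 0, 4, 2))  # 2 1
```

Weighting the assignment step by this similarity instead of plain distance is what pulls border samples toward the cluster their neighborhood actually belongs to.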
Fast Image Segmentation Method Based on Image Complexity through Curve Fitting
WANG Hai-feng, ZHANG Yi and JIANG Yi-feng
Computer Science. 2018, 45 (2): 130-134.  doi:10.11896/j.issn.1002-137X.2018.02.023
The classical Otsu algorithm, maximum entropy algorithm and minimum cross entropy algorithm segment images poorly when the image signal-to-noise ratio (SNR) is low. From the perspective of image complexity, this paper proposed an image segmentation method based on the complexity of the image background and the target object, greatly reducing redundancy through curve fitting and improving the real-time performance and stability of the algorithm. Experiment results show that, compared with the classical algorithms, the proposed fast segmentation algorithm has high speed, stability and reliability, and can effectively resolve the unsatisfactory segmentation obtained when the image SNR is low.
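For reference, the classical Otsu baseline that the paper improves on picks the threshold maximizing the between-class variance over the gray-level histogram; a minimal sketch on a synthetic bimodal "image":

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: choose the threshold t maximizing the between-class
    variance w0 * w1 * (mu0 - mu1)^2 of the gray-level histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    best_t, best_var = 0, -1.0
    for t in range(1, levels):
        w0 = sum(hist[:t])              # background pixel count
        w1 = total - w0                 # foreground pixel count
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum(i * hist[i] for i in range(t)) / w0
        mu1 = sum(i * hist[i] for i in range(t, levels)) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# bimodal "image": dark background around 10, bright object around 200
pixels = [8, 10, 12, 9, 11] * 20 + [198, 200, 202, 199, 201] * 5
t = otsu_threshold(pixels)
print(12 < t <= 198)   # threshold falls between the two modes: True
```

At low SNR the two histogram modes overlap, which is exactly where this criterion degrades and the complexity-based method is claimed to help.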
Study on Three-way Decisions Based on Intuitionistic Fuzzy Probability Distribution
XUE Zhan-ao, XIN Xian-wei, YUAN Yi-lin and LV Min-jie
Computer Science. 2018, 45 (2): 135-139.  doi:10.11896/j.issn.1002-137X.2018.02.024
The fusion of intuitionistic fuzzy set theory and possibility theory is a hot topic in dealing with uncertainty. This paper proposed a three-way decisions model based on the possibility distribution of intuitionistic fuzzy probability measurement (IFPM). First, the intuitionistic fuzzy decision space and the possibility distribution over this space were defined, and their properties were proved. Then, a method was given for calculating the possibility mean values of the membership and non-membership degrees of domain objects. Thirdly, by analyzing the relationship between these possibility mean values and the decision thresholds, their probability distribution was discussed, and the three-way decisions model was extended with the transformation relation from probability distribution to possibility distribution. An IFPM-based decision risk calculation method was given. Finally, the paper provided the decision formulas, analyzed the dynamic decision process of the three-way decisions by examining the change of IFPM under different domain elements, and validated the effectiveness of the model through examples.
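The underlying three-way decision rule, common to all such models, partitions objects by two thresholds into acceptance, rejection and a deferral (boundary) region. The sketch below shows that generic rule on a plain score; the paper's thresholds are derived from the IFPM possibility mean values rather than fixed numbers:

```python
def three_way_decision(p, alpha, beta):
    """Classic three-way decision on an acceptance score p:
    accept above alpha, reject below beta, defer in between."""
    assert 0 <= beta < alpha <= 1
    if p >= alpha:
        return 'accept'
    if p <= beta:
        return 'reject'
    return 'defer'   # boundary region: gather more evidence first

print([three_way_decision(p, 0.7, 0.3) for p in (0.9, 0.5, 0.1)])
# -> ['accept', 'defer', 'reject']
```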
Comparative Research on Computational Experiment of Social Manufacturing Based on Social Learning Evolution Paradigm
SHI Man, WANG Jun-feng, XUE Xiao and ZHOU Chang-bing
Computer Science. 2018, 45 (2): 140-146.  doi:10.11896/j.issn.1002-137X.2018.02.025
Abstract PDF(4811KB) ( 770 )   
References | Related Articles | Metrics
Against the background of the Internet society, advanced manufacturing models need to realize collaboration within and between enterprises at the information, social and service levels. As a new manufacturing mode, social manufacturing can adapt to the future socialized, service-oriented and large-scale personalized manufacturing environment, and it can solve the problems of resource sharing, collaboration and interaction among multiple participants in the future manufacturing industry, so research on this issue is important. However, the complexity of social manufacturing systems makes it difficult to model and evaluate cooperation strategies, which has attracted the attention of many researchers. Therefore, this paper presented a social manufacturing computing model based on the social learning evolution (SLE) paradigm, comprising three parts: an individual model, an interaction model and a social model, and further introduced the idea of computational experiments. The computational experiments show that this model is feasible and effective, and it promotes research on social manufacturing.
Blind Single Image Super-resolution Using Maximizing Self-similarity Prior
LI Jian-hong, LV Ju-jian and WU Ya-rong
Computer Science. 2018, 45 (2): 147-151.  doi:10.11896/j.issn.1002-137X.2018.02.026
Abstract PDF(6034KB) ( 713 )   
References | Related Articles | Metrics
The self-similarity of an image is closely related to image quality: almost every patch in a clear natural image recurs within the image itself or at a lower scale. In an image degraded by blur or noise, however, this property is much weaker. Aiming at this phenomenon, this paper proposed a blind single-image super-resolution algorithm using a maximizing self-similarity prior. The algorithm estimates the high-resolution image and the blur kernel by iterative computation, so that every patch in the final estimated high-resolution image exists in the input low-resolution image with maximal probability. The proposed algorithm not only estimates the degradation kernel and the high-resolution image accurately, but also adapts the prior to the input image, making the result more robust. Extensive experiments illustrate that the algorithm shows obvious advantages over other mainstream algorithms in terms of PSNR and SSIM.
Optimal Granularity Selection of Attribute Reductions in Multi-granularity Decision System
SHI Jin-ling, ZHANG Qian-qian and XU Jiu-cheng
Computer Science. 2018, 45 (2): 152-156.  doi:10.11896/j.issn.1002-137X.2018.02.027
Abstract PDF(1216KB) ( 828 )   
References | Related Articles | Metrics
Granular computing, an important theoretical method of artificial intelligence, studies the solution of uncertain, imprecise or complicated problems from different angles and granularity levels. On the basis of multi-granularity decision system theory, information granulation and granularity partition were analyzed at different granularity levels. The concepts of granulation measurement and granular roughness, which exactly express the size of different granularity partitions, were then defined for the problems of attribute reduction and efficient decision making. After discussing an object-based local reduction method, an optimal granularity reduction algorithm based on both the universe and objects was proposed to overcome the drawback of traditional reduction methods, which focus only on the universe of the decision system. Finally, experimental results show the validity of the proposed algorithm.
Participant Selection Algorithm for t-Sweep k-Coverage Crowd Sensing Tasks
ZHOU Jie, YU Zhi-yong, GUO Wen-zhong, GUO Long-kun and ZHU Wei-ping
Computer Science. 2018, 45 (2): 157-164.  doi:10.11896/j.issn.1002-137X.2018.02.028
Abstract PDF(26351KB) ( 593 )   
References | Related Articles | Metrics
With the rapid development and popularization of wireless network technology and mobile intelligent terminals, crowd sensing has drawn increasing attention from researchers. Crowd sensing uses the idea of crowdsourcing to assign tasks to users who own mobile devices; the users then upload the data sensed by their devices. The choice of participants therefore directly determines the quality of information collection and the related costs. Selecting as few users as possible to accept the sensing tasks while achieving the required temporal and spatial coverage of a specified location set is very important. This paper first defined the "t-sweep k-coverage" crowd sensing task; completing such a task at minimum cost is an NP-hard problem. Through a special construction, linear programming can solve the problem when its scale is small, but it fails as the scale grows. Therefore, a participant selection algorithm based on a greedy strategy was proposed. Based on mobile users' call detail record (CDR) information, the two participant selection methods were simulated. The experimental results show that when the problem scale is small, both methods can find a user set meeting the coverage requirements, the user set chosen by the greedy strategy being about twice as large as that of linear programming. When the problem scale becomes larger, linear programming sometimes fails to solve the problem, while the greedy strategy can still obtain a reasonable result.
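The greedy strategy described above has the flavor of greedy set cover: repeatedly pick the user whose sensing trace covers the most still-uncovered demand. A minimal sketch, where the representation of coverage as sets of (location, time) cells is an assumption for illustration:

```python
def greedy_select(candidates, required):
    """Greedy participant selection: repeatedly pick the user covering
    the most still-uncovered cells.
    candidates: {user: set of covered (location, time) cells}
    required:   set of cells the task must cover."""
    uncovered = set(required)
    chosen = []
    while uncovered:
        best = max(candidates, key=lambda u: len(candidates[u] & uncovered))
        gain = candidates[best] & uncovered
        if not gain:          # remaining cells cannot be covered by anyone
            break
        chosen.append(best)
        uncovered -= gain
    return chosen, uncovered
```

The classic set-cover analysis gives this an O(log n) approximation ratio, which is consistent with the reported gap of roughly 2x versus the LP solution on small instances.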
Capacity Analysis of Energy Harvesting Wireless Communication Channel Based on Hybrid Energy Storage
YAO Xin-wei, ZHONG Li-bin, WANG Wan-liang and YANG Shuang-hua
Computer Science. 2018, 45 (2): 165-170.  doi:10.11896/j.issn.1002-137X.2018.02.029
Abstract PDF(1320KB) ( 513 )   
References | Related Articles | Metrics
Due to the instability of energy sources and the limited storage capacity of devices in existing energy harvesting technology, a hybrid energy storage structure composed of a super capacitor and a battery was proposed, and the channel capacity of the corresponding structure model was analyzed. Firstly, an energy harvesting channel model based on the hybrid energy storage structure was presented for a point-to-point energy harvesting communication system. Secondly, considering the intermittent nature of energy harvesting, the energy arrival process was assumed to follow a Bernoulli stochastic process, and a near-optimal allocation policy was proposed with upper and lower bounds on the average system throughput. In particular, the gap between the two bounds is derived to be a constant, from which the approximate channel capacity is obtained. Finally, simulation results illustrate that the gap between the upper and lower bounds of the channel capacity is 1.77 bps/Hz and 2.49 bps/Hz when the harvested energy is respectively less than and more than the storage capacity of the super capacitor. Meanwhile, the experimental results show that, compared with a conventional wireless node with single battery storage, the hybrid energy storage structure improves energy utilization and increases the system channel capacity. The upper bound of the channel capacity can be increased by up to 70% when the storage capacity ratio of super capacitor to battery is 12.
Dynamic Community Detection Based on Evolutionary Spectral Method
FU Li-dong and NIE Jing-jing
Computer Science. 2018, 45 (2): 171-174.  doi:10.11896/j.issn.1002-137X.2018.02.030
Abstract PDF(1220KB) ( 669 )   
References | Related Articles | Metrics
In order to effectively analyze the function and characteristics of community structure in dynamic networks, the module density function and the negative average correlation function were optimized based on the evolutionary clustering algorithm under the temporal smoothness framework, and the theoretical feasibility was demonstrated. An evolutionary spectral algorithm for the community structure of dynamic networks was proposed. The accuracy and effectiveness of the proposed algorithm were verified against other algorithms on computer-generated and real dynamic networks respectively. The experimental results show that the proposed algorithm remains accurate and effective for community detection in dynamic networks.
Evaluation Method for Node Importance in Air Defense Networks Based on Functional Contribution Degree
LUO Jin-liang, JIN Jia-cai and WANG Lei
Computer Science. 2018, 45 (2): 175-180.  doi:10.11896/j.issn.1002-137X.2018.02.031
Abstract PDF(2256KB) ( 667 )   
References | Related Articles | Metrics
In order to evaluate node importance in air defense networks, a kind of functional network, and on the basis of analyzing the shortcomings of current evaluation methods for network node importance, an evaluation method based on functional contribution degree was put forward. The method comprehensively considers both the functional and the structural properties of a node. To verify the validity and superiority of the method, two kinds of efficiency indexes for networked systems, network connectivity efficiency and combat loop, were built, and the method was used to evaluate node importance in the ARPA network and in air defense networks. Experimental results show that the method has advantages in the accuracy and applicability of network node importance evaluation.
Authentication Method Synthesizing Multi-factors for Web Browsing Behavior
CHEN Dong-xiang, DING Zhi-jun, YAN Chun-gang and WANG Mi-mi
Computer Science. 2018, 45 (2): 181-188.  doi:10.11896/j.issn.1002-137X.2018.02.032
Abstract PDF(2401KB) ( 571 )   
References | Related Articles | Metrics
In electronic trading, users trade through the PC browser. Due to the threat of phishing sites and other attacks, the traditional account-password authentication mode carries a risk of failure. Existing Web-browsing authentication methods mainly authenticate a single aspect of user behavior. For a large number of users, authentication from a single aspect makes it difficult to distinguish the features of similar users, resulting in authentication failure. Based on users' Web browsing sequences, hyperlink usage and browser manipulation behavior, an authentication method synthesizing multiple factors was proposed using machine learning. The experimental results show that this method achieves a detection rate of more than 90% under a certain false positive rate.
Pixel Prediction Based Reversible Data Hiding Scheme for Image
XIANG Yu-dong and WU Gui-xing
Computer Science. 2018, 45 (2): 189-196.  doi:10.11896/j.issn.1002-137X.2018.02.033
Abstract PDF(2180KB) ( 817 )   
References | Related Articles | Metrics
Pixel-prediction-based reversible data hiding is an emerging, state-of-the-art technology notable for low distortion and high capacity. In particular, for prediction-based difference expansion (DE) and histogram shift (HS) schemes, an accurate predictor can increase the payload and reduce the distortion simultaneously. This paper proposed a pixel-prediction-based histogram shift method to increase payload and reduce distortion. The predictor is built on a modified warped-distance algorithm and the local gradient of the image, which increases the prediction accuracy and thereby improves the HS algorithm. The paper also gave advice on how to avoid overflow after shifting the histogram. Experiments demonstrate that the proposed method significantly outperforms previous counterparts in both prediction accuracy and final embedding performance, and that the payload-distortion tradeoff can be tuned by modifying the embedding level. Moreover, exploiting the local gradient and local geometric similarity improves the payload-distortion performance of reversible data hiding.
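The paper's predictor-driven variant is not reproduced here, but the classic histogram-shift primitive it builds on is simple to sketch. The version below embeds into the histogram peak, assuming an empty bin exists above the peak (real schemes bookkeep this case); it is fully reversible:

```python
from collections import Counter

def hs_embed(pixels, bits):
    """Classic histogram-shift embedding sketch: shift bins strictly between
    the peak and an empty bin up by one, then embed one bit per peak-valued
    pixel (peak -> bit 0, peak+1 -> bit 1). Capacity = count of peak pixels;
    unused capacity is padded with 0 bits."""
    hist = Counter(pixels)
    peak = max(hist, key=hist.get)
    zero = next(v for v in range(peak + 1, 256) if hist.get(v, 0) == 0)
    out, it = [], iter(bits)
    for p in pixels:
        if peak < p < zero:
            out.append(p + 1)              # make room next to the peak
        elif p == peak:
            out.append(p + next(it, 0))    # embed 0 or 1
        else:
            out.append(p)
    return out, peak, zero

def hs_extract(pixels, peak, zero):
    """Recover the embedded bits and the original pixels exactly."""
    bits, orig = [], []
    for p in pixels:
        if p == peak:
            bits.append(0); orig.append(peak)
        elif p == peak + 1:
            bits.append(1); orig.append(peak)
        elif peak + 1 < p <= zero:
            orig.append(p - 1)             # undo the shift
        else:
            orig.append(p)
    return bits, orig
```

The paper's contribution is to run this machinery on prediction errors rather than raw pixels, where the histogram is far more sharply peaked and the payload correspondingly larger.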
ABAC Static Policy Conflict and Redundancy Detection Algorithm Based on Mask Key
JIANG Ze-tao, XIE Zhen, WANG Qi and ZHANG Wen-hui
Computer Science. 2018, 45 (2): 197-202.  doi:10.11896/j.issn.1002-137X.2018.02.034
Abstract PDF(1253KB) ( 603 )   
References | Related Articles | Metrics
A static policy conflict detection algorithm based on ordered attribute sets and binary mask keys was proposed. The algorithm can detect all static policy conflicts and redundancies in the attribute-based access control (ABAC) model. Compared with the typical brute-force algorithm and the attribute segmentation algorithm, the proposed algorithm reduces both time complexity and space complexity. Furthermore, it supports adding and removing attributes from the set, and can meet the requirements of modern complex network environments.
Study on Security Enhancement Mechanism of Android System Kernel Based on Security Domain
CHEN Wei, YANG Qiu-hui and CHENG Xue-mei
Computer Science. 2018, 45 (2): 203-208.  doi:10.11896/j.issn.1002-137X.2018.02.035
Abstract PDF(2564KB) ( 795 )   
References | Related Articles | Metrics
The Android system is facing security challenges with its recent explosive development. Android security comprises system security and software security; system security is the cornerstone of overall security and is vital to Android. To strengthen system security, this paper proposed an improved domain generation algorithm that enhances the kernel-level security mechanism based on TOMOYO Linux. Experiments show that the proposed method effectively enhances the security of the Android system.
Modeling for Three Kinds of Network Attacks Based on Temporal Logic
NIE Kai, ZHOU Qing-lei, ZHU Wei-jun and ZHANG Chao-yang
Computer Science. 2018, 45 (2): 209-214.  doi:10.11896/j.issn.1002-137X.2018.02.036
Abstract PDF(1253KB) ( 761 )   
References | Related Articles | Metrics
Compared with other detection methods, intrusion detection methods based on temporal logic can effectively detect many complex network attacks. However, temporal logic formulas are not available for every attack, so the common Back, ProcessTable and Saint attacks cannot be detected by such methods. This paper therefore employed propositional interval temporal logic (ITL) and real-time attack signature logic (RASL) to build temporal logic formulas for these three attacks. Based on the basic principles of the attacks, the key attack steps are decomposed into atomic actions, atomic propositions are defined, and the network attack temporal logic formulas are constructed from the relationships between the atomic propositions; these formulas form one input of the model checker. In addition, an automaton models the log library as the other input. The output of the model checker is the intrusion detection result for the three attacks. An intrusion detection method for the three attacks was thus given.
Mimic Security Defence Strategy Based on Software Diversity
ZHANG Yu-jia, PANG Jian-min, ZHANG Zheng and WU Jiang-xing
Computer Science. 2018, 45 (2): 215-221.  doi:10.11896/j.issn.1002-137X.2018.02.037
Abstract PDF(1290KB) ( 873 )   
References | Related Articles | Metrics
With the development of reverse engineering, the software industry has long suffered great losses from software piracy and malicious attack. Code obfuscation, which hides specific functions of a program from malicious analysis, is frequently employed to mitigate this risk. However, most existing obfuscation methods are language-embedded and depend on the target architecture. This paper proposed a compile-time obfuscation method and presented a prototype implementation based on the LLVM compiler infrastructure. Furthermore, this paper implemented a mimic security defence system that resists malicious attack through software diversity.
Collision Attack on MIBS Algorithm
DUAN Dan-qing and WEI Hong-ru
Computer Science. 2018, 45 (2): 222-225.  doi:10.11896/j.issn.1002-137X.2018.02.038
Abstract PDF(1254KB) ( 606 )   
References | Related Articles | Metrics
The MIBS algorithm is a lightweight block cipher proposed in 2009. To further evaluate its security, its resistance to collision attack was studied. Based on an equivalent structure of MIBS, a 6-round distinguisher was constructed. By appending two rounds after the distinguisher and then two rounds before it, collision attacks were applied to 8/9/10-round MIBS, and the attack procedures and complexity analyses were given. The results show that 8/9/10-round MIBS is not immune to collision attack.
PCA-AKM Algorithm and Its Application in Intrusion Detection System
NIU Lei and SUN Zhong-lin
Computer Science. 2018, 45 (2): 226-230.  doi:10.11896/j.issn.1002-137X.2018.02.039
Abstract PDF(1830KB) ( 529 )   
References | Related Articles | Metrics
The initial cluster centers are the points or objects selected first in the clustering process. Aiming at the instability of clustering results caused by the random choice of initial cluster centers in the traditional K-means algorithm, the PCA-AKM algorithm was proposed. The algorithm uses principal component analysis to extract the main components of the data set for dimensionality reduction, and then uses a self-defined indicator Dw to choose the initial cluster centers, avoiding local optima of the cluster centers. Comparison with the K-means algorithm on UCI data sets proves that the clustering stability of PCA-AKM is higher than that of K-means. Experiments simulating intrusion detection on the KDD CUP99 data set show that the algorithm achieves a high detection rate and a low false detection rate, and can effectively improve the accuracy of intrusion detection.
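The key idea, deterministic rather than random seeding of K-means, can be sketched without the paper's Dw indicator (whose definition is not given in the abstract); a farthest-point heuristic stands in for it here as an assumption:

```python
def kmeans_deterministic(points, k, iters=50):
    """K-means with deterministic farthest-point seeding. The seeding rule
    is a stand-in for the paper's unspecified Dw indicator; the point is
    that removing randomness makes the clustering result reproducible."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    centers = [min(points)]                      # deterministic first seed
    while len(centers) < k:
        # next seed: the point farthest from all chosen centers
        centers.append(max(points, key=lambda p: min(d2(p, c) for c in centers)))
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: d2(p, centers[i]))].append(p)
        centers = [tuple(sum(x) / len(g) for x in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers
```

In PCA-AKM this seeding would run on the PCA-reduced data, so the distance computations also become cheaper in proportion to the dimensionality reduction.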
Novel Network Intrusion Detection Method Based on IPSO-SVM Algorithm
MA Zhan-fei, CHEN Hu-nian, YANG Jin, LI Xue-bao and BIAN Qi
Computer Science. 2018, 45 (2): 231-235.  doi:10.11896/j.issn.1002-137X.2018.02.040
Abstract PDF(1270KB) ( 630 )   
References | Related Articles | Metrics
Network intrusion detection has always been a research focus in the field of computer network security, and current networks face many potential security problems. To improve the accuracy of network intrusion detection, this paper improved the particle swarm optimization (PSO) algorithm and used the improved PSO to optimize the parameters of a support vector machine (SVM). On this basis, a novel network intrusion detection method based on the IPSO-SVM algorithm was designed. The experimental results show that the proposed IPSO-SVM algorithm is efficient: compared with the classical SVM and PSO-SVM algorithms, it not only clearly improves the convergence speed of network training, but also raises the intrusion detection accuracy by 7.78% and 4.74% respectively, lowers the false positive rate by 3.37% and 1.19%, and lowers the false negative rate by 1.46% and 0.66%.
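The PSO machinery being tuned here is worth making concrete. A plain (unimproved) PSO sketch follows; in the paper's setting the objective `f` would be the SVM cross-validation error over the kernel parameters (e.g. C and gamma), while below it is an arbitrary function, and all hyperparameter values are illustrative assumptions:

```python
import random

def pso(f, dim, n=20, iters=100, seed=0):
    """Plain particle swarm optimization: each particle is pulled toward
    its personal best and the global best with random weights."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)   # toy objective for demonstration
```

Improved variants such as the paper's IPSO typically modify the inertia schedule or the velocity update to escape premature convergence; the skeleton above is the common starting point.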
Surge:A New Low-resource and Efficient Lightweight Block Cipher
LI Lang and LIU Bo-tao
Computer Science. 2018, 45 (2): 236-240.  doi:10.11896/j.issn.1002-137X.2018.02.041
Abstract PDF(1253KB) ( 699 )   
References | Related Articles | Metrics
Lightweight cryptography has become a hot research area. This paper presented a new lightweight block cipher named Surge, which offers low resource usage, high performance and high security. The block length of Surge is 64 bits, with variable key lengths of 64, 80 or 128 bits. Surge is based on the SPN structure, and its round function is divided into five modules. The key expansion module performs no expansion; the round-constant addition module combines the values 0 to 15 to achieve an efficient and highly confusing round-constant addition; and the MixColumns module uses a hardware-friendly matrix composed of (0,1,2,4) over GF(2^4). This novel design gives Surge its low resource usage and high efficiency. Surge was implemented and deployed on an FPGA. Experimental results show that it occupies a smaller area and has good cryptographic properties, and security experiments prove that Surge resists differential, linear and algebraic attacks.
Impacts of Correlation Effects among Multi-layer Faults on Software Reliability Growth Processes
YI Ze-long, WEN Yu-mei, LIN Yan-min, CHEN Wei-ting and LV Guan-yu
Computer Science. 2018, 45 (2): 241-248.  doi:10.11896/j.issn.1002-137X.2018.02.042
Abstract PDF(3519KB) ( 583 )   
References | Related Articles | Metrics
Faults in software systems, which eventually cause system failures, are usually connected with each other in complicated ways. Software reliability growth models based on non-homogeneous Poisson processes are widely adopted tools for describing stochastic failure behavior and measuring reliability growth in software systems. Considering a group of correlated faults, a new model was built to examine software reliability, and its performance was assessed on real-world data sets. Numerical studies show that the new model captures correlation effects among multi-layer faults, fits the failure data well and performs better than traditional models. The optimal software release policy, which considers both the reliability requirement and the software testing cost, was also formally studied. It is found that if the correlation effects among different layers of faults are ignored by the testing team, the apparent best time to release the software package to the market will be much earlier, while the overall cost will be much higher.
Optimization Method of Low Power Test Vectors Based on Hamming Sorting for X Bits Padding
TAN En-min and FAN Yu-xiang
Computer Science. 2018, 45 (2): 249-253.  doi:10.11896/j.issn.1002-137X.2018.02.043
Abstract PDF(1195KB) ( 603 )   
References | Related Articles | Metrics
The test power consumption during integrated circuit testing is usually much higher than the circuit's normal power consumption, and excessive test power may damage the circuit or burn the chip. An optimization method of low-power test vectors based on Hamming sorting for X-bit padding was proposed to reduce test power. Firstly, the test vectors in the test set are ranked from high to low by their number of X bits. Then, the test vectors are sorted in ascending order of Hamming distance. Finally, test power is reduced by padding the X bits of the sorted test set appropriately, which increases the correlation between consecutive test vectors. Using the ISCAS'85 benchmark circuits as the test objects, the experimental results show that the optimized test set clearly reduces test power consumption compared with the non-optimized set.
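The three steps above can be sketched end to end. The greedy nearest-neighbor ordering and the fill-from-predecessor rule are plausible readings of the abstract (the exact sorting and padding rules are not fully specified), so treat this as an illustrative sketch:

```python
def hamming(a, b):
    """Hamming distance over specified bits; 'X' (don't-care) matches anything."""
    return sum(1 for x, y in zip(a, b) if 'X' not in (x, y) and x != y)

def low_power_order(vectors):
    """Sketch of the method above: (1) rank vectors by X-bit count,
    (2) greedily order them so each vector is Hamming-close to its
    predecessor, (3) fill every X with the predecessor's bit so that
    consecutive vectors toggle as few scan cells as possible."""
    rest = sorted(vectors, key=lambda v: -v.count('X'))   # most X bits first
    ordered = [rest.pop(0)]
    while rest:
        nxt = min(rest, key=lambda v: hamming(ordered[-1], v))
        rest.remove(nxt)
        ordered.append(nxt)
    filled = [ordered[0].replace('X', '0')]
    for v in ordered[1:]:
        prev = filled[-1]
        filled.append(''.join(p if c == 'X' else c for c, p in zip(v, prev)))
    return filled
```

Fewer bit transitions between consecutive vectors means fewer toggles on the scan chain, which is exactly the quantity that dominates shift power during test.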
Intuitionistic Fuzzy Numbers Decision-theoretic Rough Sets
CHEN Yu-jin and LI Xu-wu
Computer Science. 2018, 45 (2): 254-260.  doi:10.11896/j.issn.1002-137X.2018.02.044
Abstract PDF(1258KB) ( 718 )   
References | Related Articles | Metrics
The cost function of the decision-theoretic rough set does not incorporate fuzzy concepts, and thus cannot describe decisions over fuzzy information prudently. To address this shortage, the precise values of the loss function were first generalized to intuitionistic fuzzy numbers, and the intuitionistic fuzzy number decision-theoretic rough set model was established. Then, the expected losses with intuitionistic fuzzy numbers based on the down-ideal and up-ideal were analyzed; strategies based on conservatism, activism and variable semantics were described, and decision rules were derived. The corresponding propositions of the intuitionistic fuzzy number decision-theoretic rough sets were analyzed. Finally, an example of disposition schemes for a strategic-target air-defense operation was given to illustrate the proposed model in applications.
Named Entity Recognition Method Based on BLSTM
FENG Yan-hong, YU Hong, SUN Geng and SUN Juan-juan
Computer Science. 2018, 45 (2): 261-268.  doi:10.11896/j.issn.1002-137X.2018.02.045
Abstract PDF(1337KB) ( 1249 )   
References | Related Articles | Metrics
Traditional named entity recognition methods rely directly on plenty of hand-crafted features and special domain knowledge to cope with the scarcity of available supervised training corpora, but developing hand-crafted features and obtaining domain knowledge is expensive. To solve this problem, a neural network model based on BLSTM (Bidirectional Long Short-Term Memory) was proposed. The method no longer uses hand-crafted features or domain knowledge directly, but instead utilizes context-based word embeddings and character-based word embeddings: the former express the contextual information of named entities, while the latter express the prefix, suffix and domain information that makes up the named entities. Simultaneously, the model constrains the BLSTM cost function with the dependencies between labels in the tagged sequence and integrates the domain knowledge into the cost function, further improving the recognition ability of the model. Experiments show that the recognition performance of this method is superior to that of traditional methods.
Optimization Method of Production Scheduling in Flexible Job Shop
ZHANG Gui-jun, DING Qing, WANG Liu-jing and ZHOU Xiao-gen
Computer Science. 2018, 45 (2): 269-275.  doi:10.11896/j.issn.1002-137X.2018.02.046
Abstract PDF(4188KB) ( 967 )   
References | Related Articles | Metrics
To meet the production scheduling needs of flexible manufacturing enterprises, an optimization method for production scheduling was proposed. Firstly, by analyzing the characteristics of the production scheduling problem in the enterprise workshop, an overall scheduling flow meeting the workshop application requirements and various resource constraints is designed, and a production objective relation model based on constraint conditions is presented. Secondly, a differential evolution algorithm with a dynamic strategy is proposed, in which the mutation strategy is dynamically selected according to the crowding degree between individuals in the current population; moreover, a decoding scheme based on operation positions is designed. The optimal scheduling scheme is thus obtained, improving the operational efficiency of equipment and maximizing resource utilization. Finally, the effectiveness of the proposed method is verified on six benchmark functions, the FT6-6 scheduling problem and a practical example.
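The differential evolution core being extended here follows the standard DE/rand/1/bin template; the paper's crowding-based dynamic selection among mutation strategies is omitted below as unspecified, and the control parameters are illustrative assumptions:

```python
import random

def differential_evolution(f, bounds, np_=20, iters=200, F=0.5, CR=0.9, seed=1):
    """Basic DE/rand/1/bin sketch: mutate with a scaled difference of two
    random individuals, binomially cross over with the target, keep the
    better of trial and target."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    for _ in range(iters):
        for i in range(np_):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            jrand = rng.randrange(dim)        # guarantee one mutated gene
            trial = [a[d] + F * (b[d] - c[d])
                     if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(v, lo), hi) for v, (lo, hi) in zip(trial, bounds)]
            if f(trial) <= f(pop[i]):         # greedy selection
                pop[i] = trial
    return min(pop, key=f)
```

For the scheduling problem itself, each real-valued individual would additionally pass through the paper's position-based decoding scheme to yield an operation sequence before evaluation.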
Important Micro-blog User Recommendation Algorithm Based on Label and PageRank
WANG Rong-bing, AN Wei-kai, FENG Yong and XU Hong-yan
Computer Science. 2018, 45 (2): 276-279.  doi:10.11896/j.issn.1002-137X.2018.02.047
Abstract PDF(1198KB) ( 788 )   
References | Related Articles | Metrics
Massive micro-blog information makes it difficult for new users to obtain the content they are interested in, and important micro-blog user recommendation provides an effective way for new users to access information. At present, inadequate consideration of the relationships between users and the lack of processing of users' personalized labels keep the recommendation accuracy for important micro-blog users low. Therefore, an important micro-blog user recommendation algorithm based on labels and PageRank was proposed. Firstly, the personalized labels are processed by word segmentation, de-noising and weight setting, and the processed result represents the user's interests. Secondly, the relationships between users are analyzed with the PageRank calculation model. Finally, important micro-blog users are recommended to new users with similar interests via label similarity calculation. Experiments show that, by integrating the analysis of user-relationship importance with users' personalized labels, the proposed algorithm improves the recommendation accuracy for important micro-blog users compared with the recommendation algorithm based on labels and collaborative filtering.
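The PageRank step can be sketched by power iteration over the follow graph; interpreting a follow edge as an endorsement of the followed user is an assumption for illustration:

```python
def pagerank(links, d=0.85, iters=50):
    """Power-iteration PageRank. links[u] = list of users that u follows;
    a user followed by many well-ranked users ranks highly."""
    nodes = set(links) | {v for vs in links.values() for v in vs}
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - d) / n for u in nodes}
        for u, outs in links.items():
            if outs:
                share = d * rank[u] / len(outs)
                for v in outs:
                    new[v] += share
            else:                          # dangling node: spread rank evenly
                for v in nodes:
                    new[v] += d * rank[u] / n
        rank = new
    return rank
```

In the recommendation pipeline this score would then be combined with the label-similarity score so that only important *and* interest-matching users are recommended.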
Cross Evaluation Method Based on Intuitionistic Fuzzy Entropy
FAN Jian-ping, XUE Kun and WU Mei-qin
Computer Science. 2018, 45 (2): 280-286.  doi:10.11896/j.issn.1002-137X.2018.02.048
Abstract PDF(1264KB) ( 653 )   
References | Related Articles | Metrics
This paper extended the secondary goal model based on relative closeness to the fuzzy environment so as to make full use of fuzzy information. It then proposed a new method to convert triangular fuzzy efficiency into an intuitionistic fuzzy set, so that the fuzzy efficiency can be integrated with intuitionistic fuzzy entropy. After that, a ternary directional distance index was used to rank all the fuzzy efficiencies. Finally, the citation efficiencies of ten management science and system science journals, identified as important journals by the Management Science Department of the National Natural Science Foundation of China in 2011, were analyzed to illustrate the feasibility and validity of the proposed method.
Study on Fast Incremental Clustering Algorithm for High Complexity Dynamic Data in Cloud Computing Environment
CHEN Gan-lang, YAN Fei-long and PAN Jia-hui
Computer Science. 2018, 45 (2): 287-290.  doi:10.11896/j.issn.1002-137X.2018.02.049
Abstract PDF(1187KB) ( 496 )   
References | Related Articles | Metrics
To address the high cost, poor clustering quality and slow clustering speed of traditional clustering algorithms, this paper proposed a new fast incremental density-based clustering algorithm for highly complex dynamic data in the cloud computing environment. First, when clustering highly complex dynamic data by density in the cloud computing environment, the algorithm finds subspaces of the data space whose mapped regions produce high-density point sets, and takes the set of connected regions as the clustering result. Second, it performs incremental clustering with the DBSCAN algorithm, studying the merging or splitting of existing clusters caused by inserting or deleting data. Finally, by handling all core points in the neighborhoods whose core status changes during updates, the incremental clustering is analyzed for both insertion and deletion of data. The experimental results show that the proposed algorithm has low cost, fast clustering speed and high clustering quality.
Early Classification of Time Series Based on Piecewise Aggregate Approximation
MA Chao-hong and WENG Xiao-qing
Computer Science. 2018, 45 (2): 291-296.  doi:10.11896/j.issn.1002-137X.2018.02.050
Abstract PDF(1274KB) ( 807 )   
References | Related Articles | Metrics
Early classification of time series is increasingly significant in the field of time series data mining. Because of the high dimensionality of time series data, it is highly necessary to choose an efficient and appropriate dimensionality reduction method in the practical application of early classification. This paper therefore applied piecewise aggregate approximation to time series data and then performed early classification in the lower-dimensional space. Comparisons with existing methods were carried out on forty-three datasets, and the experimental results indicate that the proposal is better than existing methods in accuracy, earliness and reliability.
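Piecewise aggregate approximation (PAA) itself is a one-liner in spirit: split the series into equal-width frames and keep only each frame's mean. A minimal sketch:

```python
def paa(series, segments):
    """Piecewise aggregate approximation: represent each of `segments`
    equal-width frames of the series by its mean value."""
    n = len(series)
    out = []
    for i in range(segments):
        lo = i * n // segments        # integer frame boundaries
        hi = (i + 1) * n // segments
        frame = series[lo:hi]
        out.append(sum(frame) / len(frame))
    return out
```

Reducing a length-n series to `segments` means shrinks the dimensionality by a factor of n/segments, which is what makes early classification tractable on long series.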
Color Image Enhancement Algorithm Based on Lab Color Space and Tone Mapping
ZHAO Jun-hui, WU Yu-feng, HU Kun-rong and PU Bin
Computer Science. 2018, 45 (2): 297-300.  doi:10.11896/j.issn.1002-137X.2018.02.051
Abstract PDF(3433KB) ( 1046 )   
References | Related Articles | Metrics
To address the detail-preservation and color-constancy problems in low-illumination image enhancement, this paper proposed a novel Retinex enhancement algorithm based on the Lab color space and tone mapping. First, the low-contrast input image is decomposed in the Lab color space into luminance and chrominance components, and adaptive bilateral filtering is used to estimate the illumination intensity, so that appropriate neighboring pixels are selected according to their brightness and color values. Then a parabola-based tone mapping function is applied to improve the contrast of the estimated illumination image. Finally, the enhanced luminance and the original chrominance are combined to produce the enhanced color output image. Experimental results show that the proposed algorithm enhances image details and edge structure while reducing artifacts, and preserves the natural appearance of the image by avoiding color shift.
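The abstract does not give the exact parabolic tone curve, so the following is only a hypothetical example of the idea: a quadratic curve on the normalized L channel that fixes black (0) and white (1) while lifting midtones, controlled by an assumed `strength` parameter:

```python
def tone_map(l_norm, strength=1.0):
    """Illustrative parabolic tone curve on an L channel normalized to
    [0, 1]: y = x + strength * x * (1 - x). The endpoints 0 and 1 are
    fixed, midtones are brightened, so contrast in dark regions rises."""
    return [x + strength * x * (1 - x) for x in l_norm]
```

Because only the L channel is remapped and the a/b chrominance channels are left untouched, the hue is preserved, which is the color-constancy point the abstract makes.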
Multi-focus Image Fusion Based on Redundant Wavelet Transform and Guided Filtering
YANG Yan-chun, LI Jiao, DANG Jian-wu and WANG Yang-ping
Computer Science. 2018, 45 (2): 301-305.  doi:10.11896/j.issn.1002-137X.2018.02.052
Abstract PDF(1249KB) ( 780 )   
References | Related Articles | Metrics
To address the edge-halo problem in multi-focus image fusion based on traditional multi-scale transforms, this paper proposed a novel image fusion method based on the redundant wavelet transform and guided filtering. First, the source images are decomposed by the redundant wavelet transform into a similar plane and a series of wavelet planes; this multi-scale decomposition effectively extracts the detail information of the source images. Then, guided-filtering weighted fusion rules are applied to the similar plane and the wavelet planes separately, and weight maps are constructed to obtain the weighted fusion coefficients of each plane. Finally, the inverse redundant wavelet transform is applied to obtain the fused image. Experimental results show that, compared with traditional fusion methods, the proposed method better preserves the edge and detail features of the images and achieves better fusion results.
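The weighted-fusion step can be sketched in miniature. This is not the paper's guided-filter-refined weight map, only a toy stand-in on one wavelet plane: each coefficient pair is fused with a weight proportional to its relative magnitude, so the in-focus (high-activity) source dominates at each position:

```python
def fuse_plane(a, b):
    """Toy coefficient fusion for one wavelet plane (flattened to a list):
    weight each source coefficient by its relative magnitude. A guided
    filter would additionally smooth these weights over the image."""
    out = []
    for x, y in zip(a, b):
        wa = abs(x) / (abs(x) + abs(y) + 1e-12)  # avoid division by zero
        out.append(wa * x + (1 - wa) * y)
    return out
```

Smoothing the weight maps with a guided filter, as the paper does, is what suppresses the halo artifacts that hard per-coefficient selection produces near edges.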
Multi-person Behavior Recognition Method Based on Convolutional Neural Networks
GONG An, FEI Fan and ZHENG Jun
Computer Science. 2018, 45 (2): 306-311.  doi:10.11896/j.issn.1002-137X.2018.02.053
Abstract PDF(5002KB) ( 997 )   
References | Related Articles | Metrics
Multi-person behavior recognition faces several difficulties: it is hard to distinguish multiple subjects, the increased feature dimension of the images is hard to represent and learn, and the complex background easily causes interference. To solve these problems, this paper proposed a multi-person behavior recognition method based on convolutional neural networks. First, considering the complexity of multi-person behavior recognition, simple two-person interactive behaviors are chosen as the research object and an image database is collected. Then, because of the complex background and the large number of features in the recognition process, a feature preprocessing method using the Dense-SIFT algorithm is proposed. To cope with the complexity of multi-person behavior recognition, the network is modified in several ways, such as expanding the input dimension, enlarging the convolution kernels and reducing the output size. Experimental results show that the proposed method can effectively recognize simple multi-person behaviors such as boxing, hugging and kissing.
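When the input dimension and kernel sizes are changed as described, the spatial size of each layer's output must be re-derived. The standard formula, with purely illustrative sizes (the paper does not state its concrete layer dimensions):

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution or pooling layer:
    floor((n - k + 2p) / s) + 1."""
    return (size - kernel + 2 * pad) // stride + 1

# Example: a 32x32 input through a 5x5 convolution, then 2x2 pooling
after_conv = conv_out(32, 5)              # 28
after_pool = conv_out(after_conv, 2, stride=2)  # 14
```

Enlarging the kernels or expanding the input, as the modifications above describe, changes these sizes layer by layer, which is why the output layer also has to be adjusted.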
Image Denoising Optimization Algorithm Combined with Visual Saliency
ZHAO Jie, MA Yu-jiao and LIU Shuai-qi
Computer Science. 2018, 45 (2): 312-317.  doi:10.11896/j.issn.1002-137X.2018.02.054
Abstract PDF(1274KB) ( 851 )   
References | Related Articles | Metrics
Images are disturbed by noise during sampling, processing, transmission and storage, which degrades their visual information. Based on the different sensitivity of the human eye to noise in different regions, this paper put forward an improved image denoising algorithm combined with visual saliency. First, the algorithm preprocesses the image with a visual saliency model to obtain the region of interest. Then the BM3D algorithm, which better protects image texture, is used to denoise this region, while the faster arithmetic mean filter is used to denoise the non-interest region. The results show that the proposed method not only obtains higher subjective image quality, but also reduces the computation time compared with applying the BM3D algorithm to the entire image.
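The region-wise split is the core of the speed-up and can be sketched generically. The two filters here are caller-supplied stand-ins for BM3D and the arithmetic mean filter, and the 1-D signal and boolean mask are simplifications for illustration:

```python
def region_denoise(signal, salient, strong, fast):
    """Apply the expensive denoiser `strong` only where the saliency
    mask is set, and the cheap denoiser `fast` everywhere else."""
    return [strong(x) if s else fast(x) for x, s in zip(signal, salient)]
```

Since the expensive filter only touches the salient fraction of the pixels, total cost scales with the size of the region of interest rather than the whole image.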
Local Feature Fuzzy Segmentation Algorithm for Single Defocused Image
WANG Liang and TIAN Xuan
Computer Science. 2018, 45 (2): 318-321.  doi:10.11896/j.issn.1002-137X.2018.02.055
Abstract PDF(1207KB) ( 651 )   
References | Related Articles | Metrics
At present, local-feature fuzzy segmentation algorithms do not preprocess a single defocused image, so the image has low definition and the segmentation effect suffers. The original fuzzy segmentation algorithm also requires a large number of pixel labels during pixel segmentation, and its segmentation process is complicated. Therefore, this paper proposed a method that uses an immune spectral clustering algorithm to perform fuzzy segmentation of the local features of a single defocused image. First, the locally blurred image is blurred again using a block-based method. Then the variation of the singular values of the defocused image is compared, and the defocused region is identified from this variation. Finally, the singular-value features of the single defocused image are extracted and its local features are fuzzily segmented: spectral clustering is used to cluster the pixels of the defocused image, and the Nyström approximation is used to compute the eigenvectors of the pixel similarity matrix, which reduces the computational complexity. The immune algorithm improves the accuracy of the clustering results and thereby ensures the fuzzy segmentation of the local features of the defocused image. The experimental results show that the proposed algorithm segments the defocused image effectively, with better segmentation results and a simpler calculation process.
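The pixel similarity matrix whose eigenvectors the Nyström approximation estimates is the standard ingredient of spectral clustering: a Gaussian affinity between pixel feature vectors. A minimal sketch (the bandwidth `sigma` and the use of raw coordinates as features are assumptions; the full method would build this only for a sampled landmark subset):

```python
from math import exp, dist

def similarity(points, sigma=1.0):
    """Gaussian affinity matrix used by spectral clustering:
    S[i][j] = exp(-||xi - xj||^2 / (2 * sigma^2))."""
    return [[exp(-dist(p, q) ** 2 / (2 * sigma ** 2)) for q in points]
            for p in points]
```

For n pixels this matrix is n-by-n, which is why the Nyström method, extrapolating the eigenvectors from a small sampled sub-matrix, is needed to keep the computation tractable.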