Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 46 Issue 4, 15 April 2019
  
Big Data & Data Science
Survey of Online Sequential Extreme Learning Algorithms for Dynamic Data Stream Analysis
GUO Wei, YU Jian-jiang, TANG Ke-ming, XU Tao
Computer Science. 2019, 46 (4): 1-7.  doi:10.11896/j.issn.1002-137X.2019.04.001
Abstract | PDF (1311KB)
Dynamic data stream analysis has become a research focus for its widespread application prospects, and online learning methods are key to solving this problem. Among existing online learning methods, online sequential extreme learning machine (OSELM) is a novel and practical online learning algorithm, and it has been successfully applied in the field of dynamic data stream analysis. Firstly, the theoretical foundation and the execution process of OSELM were reviewed. Then, taking dynamic data stream analysis as the application background, this paper classified and summarized different kinds of improved OSELM algorithms, including sliding-window-based OSELM, forgetting-factor-based OSELM, sample-weighting-based OSELM and other methods. This paper focused on the design ideas and implementation strategies of the different kinds of algorithms, and compared and analyzed the advantages and disadvantages of the main algorithms. Finally, possible directions for future research were presented.
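The sequential update at the heart of OSELM can be sketched as follows. This is a minimal illustration, not the paper's code: the toy 1-D regression task, the 20 sigmoid hidden nodes and all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 1-D regression with L random sigmoid hidden nodes.
L = 20
W = rng.normal(size=(L, 1))           # random input weights, never retrained
b = rng.normal(size=L)                # random biases

def hidden(X):
    """Hidden-layer output matrix H with sigmoid activations."""
    return 1.0 / (1.0 + np.exp(-(X @ W.T + b)))

# Initialization phase: regularized least squares on a small boot chunk.
X0 = rng.uniform(-1, 1, size=(30, 1))
T0 = np.sin(3 * X0)
H0 = hidden(X0)
P = np.linalg.inv(H0.T @ H0 + 1e-3 * np.eye(L))
beta = P @ H0.T @ T0

# Sequential phase: fold each arriving chunk into beta with the RLS-style
# update, never revisiting old samples.
for _ in range(50):
    Xk = rng.uniform(-1, 1, size=(5, 1))
    Tk = np.sin(3 * Xk)
    Hk = hidden(Xk)
    S = np.linalg.inv(np.eye(len(Xk)) + Hk @ P @ Hk.T)
    P = P - P @ Hk.T @ S @ Hk @ P
    beta = beta + P @ Hk.T @ (Tk - Hk @ beta)

Xt = np.linspace(-1, 1, 100).reshape(-1, 1)
mse = float(np.mean((hidden(Xt) @ beta - np.sin(3 * Xt)) ** 2))
```

The sliding-window and forgetting-factor variants surveyed above modify exactly this recursion, discounting or dropping old chunks' contribution to P and beta.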
Quality Control Agent Based on Probability Inference
XU Yao-li, LI Zhan-huai
Computer Science. 2019, 46 (4): 8-13.  doi:10.11896/j.issn.1002-137X.2019.04.002
Abstract | PDF (1347KB)
Entity resolution (ER) is a fundamental problem of data integration and cleaning, while inconsistency reconciliation (IR) further improves resolution performance by reconciling the inconsistent pairs resolved by existing diverse ER approaches. However, previous IR approaches share a limitation: the reconciliation solution has no quality guarantee. To solve this problem, this paper proposed a quality control agent based on probability inference, denoted as QCAgent. QCAgent does not require any manually labeled pair, and can automatically output the reconciliation result with the highest recall on the premise of satisfying a given precision threshold. Its core idea is as follows. Firstly, an outlier detection model is utilized to estimate the matching probability of each inconsistent pair, and the precision and recall estimated from these probabilities are regarded as the environmental feedback. Next, a binary search algorithm is used to select a flipping solution as the next action of QCAgent, which makes the flipped reconciliation result satisfy the precision requirement with the highest recall. Then the outlier detection model is retrained on the new consistent pairs, and the recall and precision of the flipped reconciliation result are re-estimated. The process iterates until the newest estimated precision meets the constraint. Experimental results on real data sets show that QCAgent can effectively solve the quality control problem of the reconciliation result.
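The binary search over flipping thresholds can be illustrated in miniature. The probability list and the precision/recall estimators below are hypothetical stand-ins for QCAgent's outlier-detection estimates, not the paper's implementation.

```python
# Hypothetical estimated probabilities that each inconsistent pair is a true match.
probs = [0.95, 0.9, 0.8, 0.7, 0.55, 0.4, 0.3, 0.1]

def estimate(threshold):
    """Expected precision/recall if pairs with prob >= threshold are flipped to 'match'."""
    matched = [p for p in probs if p >= threshold]
    if not matched:
        return 1.0, 0.0
    precision = sum(matched) / len(matched)   # expected fraction of flips that are correct
    recall = sum(matched) / sum(probs)        # expected share of true matches recovered
    return precision, recall

def select_threshold(min_precision, iters=30):
    """Binary-search the lowest cutoff (highest recall) whose estimated
    precision still satisfies the given constraint."""
    lo, hi = 0.0, 1.0
    best = 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        precision, _ = estimate(mid)
        if precision >= min_precision:
            best = mid
            hi = mid      # constraint met: try a lower cutoff to raise recall
        else:
            lo = mid
    return best

t = select_threshold(0.8)
```

Lowering the cutoff monotonically raises recall and lowers expected precision, which is what makes the binary search applicable.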
Visual Analysis Method of Traffic Accident Spatial-Temporal Pattern
RAO Yong-ming, ZHANG Yan-kong, XIE Wen-jun, LIU Lu, LIU Xin-yue, LUO Yue-tong
Computer Science. 2019, 46 (4): 14-21.  doi:10.11896/j.issn.1002-137X.2019.04.003
Abstract | PDF (4084KB)
With the advancement of urbanization and the rapid growth of urban population and vehicles, urban traffic accidents are increasingly frequent and have become a matter of pressing social concern. This paper took the traffic accident records of Hefei City over the past ten years as the research object, used visual analysis methods to analyze the accident time and location information in the records, explored the temporal and spatial patterns of traffic accidents, and constructed a traffic accident analysis system to assist relevant departments in improving accident-prone situations. The concept of road accident risk degree was put forward for the first time, and a new identification method for accident-prone road sections was constructed based on a multi-scale time-statistics zigzag chart and a periodic time-statistics circle map. Compared with traditional methods, this method does not need to segment roads, thus avoiding the impact of segmentation quality on the recognition results. On this basis, this paper combined traffic accident data with urban road network data and used visual analysis technology to build a visual analysis system for traffic accidents. The system can help relevant departments understand the temporal patterns of traffic accidents both city-wide and on single roads, identify accident-prone sections, and explore accident-prone sections under continuous or periodic time constraints. Beyond time conditions, the system can also identify accident-prone sections under different weather and other conditions, so that traffic police departments can make management decisions based on road accident risk under different circumstances, deploy rescue forces reasonably and reduce accident hazards. The proposed system has practical significance for mitigating and curbing the growth of traffic accidents, reducing and preventing road traffic accidents, and it is also conducive to the scientific and effective management of road traffic.
Weighted Oversampling Method Based on Hierarchical Clustering for Unbalanced Data
XIA Ying, LI Liu-jie, ZHANG XU, BAE Hae-young
Computer Science. 2019, 46 (4): 22-27.  doi:10.11896/j.issn.1002-137X.2019.04.004
Abstract | PDF (1435KB)
Imbalanced data affect the performance of traditional classification algorithms to some extent, leading to a low recognition rate for minority classes. Oversampling is one of the common methods for processing imbalanced datasets. Its main idea is to increase the number of minority-class samples so that the minority and majority classes are balanced to a certain extent. Existing oversampling methods suffer from synthesizing overlapping samples and from overfitting. This paper proposed a weighted oversampling method based on hierarchical clustering for imbalanced data, named WOHC. It first uses a hierarchical clustering algorithm to divide the minority-class samples into several clusters, then calculates the clusters' density factors to determine the sampling rate of each cluster, and finally determines the sampling weights according to the distance between minority-class samples and the boundary of the majority class. In the experiments, WOHC is adopted for oversampling and combined with the C4.5 algorithm to perform classification on several datasets. Results show that the proposed method improves classification performance by 7.6% on F-measure and 5.8% on G-mean respectively, which indicates its effectiveness.
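The cluster-then-weighted-sample idea can be sketched on a toy two-cluster minority class. The minimal centroid-linkage routine and the spread-based weights below are simplifications standing in for the paper's hierarchical clustering and density factors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical minority-class samples: a tight cluster and a sparse one.
minority = np.vstack([rng.normal(0, 0.3, (10, 2)),
                      rng.normal(3, 0.9, (8, 2))])

def agglomerate(points, k):
    """Minimal centroid-linkage hierarchical clustering down to k clusters."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > k:
        cent = [points[c].mean(axis=0) for c in clusters]
        i, j = min(((a, b) for a in range(len(clusters))
                    for b in range(a + 1, len(clusters))),
                   key=lambda ab: np.linalg.norm(cent[ab[0]] - cent[ab[1]]))
        clusters[i] += clusters.pop(j)   # merge the two closest clusters
    return [points[c] for c in clusters]

clusters = agglomerate(minority, 2)

# Sparser clusters (larger spread) receive a larger share of synthetic samples.
spread = np.array([c.std() for c in clusters])
weights = spread / spread.sum()
synthetic = []
for pts, w in zip(clusters, weights):
    for _ in range(int(round(20 * w))):
        a, b = pts[rng.integers(len(pts), size=2)]
        synthetic.append(a + rng.uniform() * (b - a))  # SMOTE-style interpolation
synthetic = np.array(synthetic)
```

Interpolating only within a cluster avoids synthesizing samples in the gap between minority clusters, which is one source of the overlapping-sample problem noted above.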
Distributed Subgraph Matching Algorithm for Large Scale Graph Data
XU Wen, SONG Wen-ai, FU Li-zhen, LV Wei
Computer Science. 2019, 46 (4): 28-35.  doi:10.11896/j.issn.1002-137X.2019.04.005
Abstract | PDF (1934KB)
The explosive growth of graph data makes it difficult to process subgraph matching on a single machine. Although existing distributed algorithms solve the problem to some extent, the network communication cost among distributed computing nodes still limits performance. To address this, this paper proposed a distributed subgraph matching algorithm named DSsearch. It consists of four steps: splitting the query graph, preprocessing the data graph, filtering candidates and combining intermediate results. In data graph preprocessing, a graph partitioning strategy and perfect neighbor vertices are used to reduce communication cost among the distributed computing nodes. The DSgraph storage structure is designed to store candidate vertices during the candidate filtering stage, and redundant intermediate results are reduced by delaying the Cartesian product. Finally, a comparative experiment on real data sets was performed on a Spark distributed cluster with 7 compute nodes. The experimental results show that DSsearch can complete subgraph matching over data graphs with millions of vertices in seconds, especially when dealing with complex query graphs and dense data graphs. The results for the data graph preprocessing strategy illustrate the feasibility of reducing network communication cost in a distributed environment through vertex replication. Compared with algorithms such as TwinTwigJoin and PSgL, the running time of DSsearch grows more slowly as the number of query graph vertices increases; when the query graph reaches 14 vertices, its running time is only half that of TwinTwigJoin and PSgL. The experimental data demonstrate that data transmission cost and the number of intermediate results are the main factors affecting the efficiency of distributed subgraph matching. Preprocessing the data graph and delaying the Cartesian product resolve the performance bottleneck of distributed subgraph matching and effectively complete subgraph matching over large-scale graph data.
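The candidate-filtering step can be illustrated on a single machine: a degree-based filter followed by backtracking extension. This is a generic sketch of subgraph matching, not DSsearch itself (the distributed engine, DSgraph structure and delayed Cartesian product are omitted), and the tiny graphs are assumed examples.

```python
def neighbors(edges):
    """Build an undirected adjacency map from an edge list."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

def subgraph_match(query_edges, data_edges):
    q, g = neighbors(query_edges), neighbors(data_edges)
    qnodes = sorted(q)
    # Candidate filter: a data vertex must have at least the query vertex's degree.
    cand = {u: {v for v in g if len(g[v]) >= len(q[u])} for u in qnodes}
    results = []

    def extend(i, mapping):
        if i == len(qnodes):
            results.append(dict(mapping))
            return
        u = qnodes[i]
        for v in cand[u]:
            if v in mapping.values():
                continue  # keep the mapping injective
            # Every already-mapped query neighbor must map to a data neighbor of v.
            if all(mapping[w] in g[v] for w in q[u] if w in mapping):
                mapping[u] = v
                extend(i + 1, mapping)
                del mapping[u]

    extend(0, {})
    return results

# Triangle query against a 4-clique.
data = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
matches = subgraph_match([("a", "b"), ("b", "c"), ("a", "c")], data)
```

The backtracking order is where intermediate-result blow-up arises; delaying the Cartesian product, as the paper does, postpones exactly the independent candidate combinations this loop would otherwise enumerate eagerly.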
Tag-aware Recommendation Method with Implicit Feedback
LI Hong-mei, DIAO Xing-chun, CAO Jian-jun, FENG Qin, ZHANG Lei
Computer Science. 2019, 46 (4): 36-43.  doi:10.11896/j.issn.1002-137X.2019.04.006
Abstract | PDF (2845KB)
To further improve the performance of tag-aware personalized recommendation with implicit feedback, and aiming at the redundancy and ambiguity of tagging information as well as the sparsity and imbalance of implicit feedback, this paper proposed a personalized recommendation method based on a fine-grained preference assumption and augmented weighted matrix factorization. First, candidate items that the target user may prefer are mined from its neighbor users: items preferred by neighbor users but not yet selected by the target user. This yields a fine-grained preference relationship over three kinds of items for the target user, i.e., observed items > candidate items > other unobserved items, which helps alleviate the sparsity and imbalance problems. Then, a deep learning method is used to extract in-depth semantic features from the tag space, so that representations of user profiles become more abstract and high-level, and user neighbors are obtained based on these semantic features. Afterwards, a revised weighted matrix factorization model is formulated based on the fine-grained preference relationship for personalized recommendation, and the fast eALS algorithm is used for model optimization with low time complexity. Experiments on real-world datasets show that the proposed method outperforms competing methods on several evaluation metrics, including Pre@5, NDCG@5 and MRR; the three indicators are improved by 9%, 8% and 9% respectively, which indicates the effectiveness of the proposed method.
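The three-level preference weighting can be sketched with plain gradient descent standing in for the paper's eALS solver; the interaction matrix, candidate set, weight and target levels below are all assumed toy values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical implicit-feedback matrix: 1 = observed interaction.
R = np.array([[1, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 1, 1, 0]], dtype=float)

# Fine-grained levels (assumed values): observed items get the highest target
# and confidence, neighbour-derived candidate items sit in between, and the
# remaining unobserved entries get the lowest.
candidates = {(0, 1), (2, 0)}            # e.g. items liked by similar users
C = np.where(R > 0, 1.0, 0.1)            # confidence weights
T = np.where(R > 0, 1.0, 0.0)            # preference targets
for u, i in candidates:
    C[u, i], T[u, i] = 0.5, 0.5

k, lam, lr = 4, 0.01, 0.1
U = rng.normal(0, 0.1, (R.shape[0], k))
V = rng.normal(0, 0.1, (R.shape[1], k))
for _ in range(500):
    E = C * (U @ V.T - T)                # confidence-weighted residual
    U -= lr * (E @ V + lam * U)
    V -= lr * (E.T @ U + lam * V)

pred = U @ V.T
```

After training, a candidate item should score between the user's observed items and plain unobserved ones, which is exactly the ordering the fine-grained assumption encodes.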
Massive Data Parallel Query Platform Based on Distributed Shared-nothing Architecture
QIN Dong-ming, YU Jian, ZHANG Bo, ZHAO Qin
Computer Science. 2019, 46 (4): 44-49.  doi:10.11896/j.issn.1002-137X.2019.04.007
Abstract | PDF (1742KB)
In view of the challenges of data loading and parallel query control in massive data query systems, this paper proposed a massive parallel data query platform based on a distributed shared-nothing architecture. The platform uses the shared-nothing architecture to support unified processing of structured and unstructured data in massive data queries, and thereby achieves aggregated computation over the data in the platform. Its key technologies are as follows. Firstly, the platform provides cross-platform storage and unified loading of multiple types of data. Then, a multi-node query task flow distribution technique based on load balancing is proposed, and a global query execution strategy is generated. Finally, the platform uses Hash and Range partitioning to achieve parallel control of the query task flow. Performance verification shows that the platform reduces query time by 40% compared with the non-parallel method. The experimental results show that the platform performs well in the accuracy, reliability and concurrency of massive data queries.
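The Hash and Range parallel-control strategies reduce to two partitioning functions that assign rows to query nodes. The row layout, keys and node count below are assumptions for illustration only.

```python
# Hypothetical row set distributed across 4 query nodes by two strategies.
rows = [{"id": i, "ts": 1000 + i} for i in range(100)]
NODES = 4

def hash_partition(rows, key):
    """Spread rows evenly by hashing the key (good for point lookups)."""
    parts = [[] for _ in range(NODES)]
    for r in rows:
        parts[hash(r[key]) % NODES].append(r)
    return parts

def range_partition(rows, key, bounds):
    """Assign rows by key ranges (good for range scans); bounds are the
    sorted upper bounds for the first len(bounds) nodes."""
    parts = [[] for _ in range(NODES)]
    for r in rows:
        for i, b in enumerate(bounds):
            if r[key] < b:
                parts[i].append(r)
                break
        else:
            parts[-1].append(r)
    return parts

hp = hash_partition(rows, "id")
rp = range_partition(rows, "ts", [1025, 1050, 1075])
```

Hash partitioning balances load for equality predicates, while range partitioning lets a time-bounded query touch only the nodes whose ranges overlap the predicate.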
Single-Pass Short Text Clustering Based on Context Similarity Matrix
HUANG Jian-yi, LI Jian-jiang, WANG Zheng, FANG Ming-zhe
Computer Science. 2019, 46 (4): 50-56.  doi:10.11896/j.issn.1002-137X.2019.04.008
Abstract | PDF (1701KB)
Online social networks have become an important channel and carrier of information, forming a virtual society that interacts with the real world. Numerous network events spread rapidly through social networks and can become hot spots within a short period of time. Negative events, however, threaten national security and social stability and may cause a series of social problems. Therefore, mining the hotspot information contained in social networks is of great significance for both public opinion supervision and early warning. Text clustering is an important method for mining hotspot information. However, when traditional long-text clustering algorithms process massive short texts, their accuracy drops and their complexity increases sharply, leading to long running times; existing short-text clustering algorithms also suffer from low accuracy and excessive time consumption. Based on text keywords, this paper presented an association model combining context and a similarity matrix to determine the relevance between the current text and previous texts. In addition, the keyword weights are modified according to the association model to further reduce noise. Finally, a distributed short text clustering algorithm was implemented on the Hadoop platform. Experiments verify that the proposed algorithm achieves better results and performance than the K-MEANS, SP-NN and SP-WC algorithms in terms of topic mining speed, accuracy and recall.
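A minimal single-pass clustering loop with cosine similarity shows the general scheme such algorithms build on; the paper's context-similarity association model, keyword re-weighting and Hadoop distribution are omitted, and the documents and threshold are assumed examples.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    num = sum(a[w] * b[w] for w in a if w in b)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def single_pass(docs, threshold=0.3):
    clusters = []   # each cluster: {"centroid": Counter, "members": [...]}
    for idx, doc in enumerate(docs):
        vec = Counter(doc.split())
        best, best_sim = None, 0.0
        for c in clusters:
            sim = cosine(vec, c["centroid"])
            if sim > best_sim:
                best, best_sim = c, sim
        if best is not None and best_sim >= threshold:
            best["centroid"].update(vec)   # fold the document into the profile
            best["members"].append(idx)
        else:
            clusters.append({"centroid": Counter(vec), "members": [idx]})
    return clusters

docs = ["storm hits city", "city storm damage", "team wins final", "final match team"]
clusters = single_pass(docs)
```

Each text is compared once against existing cluster profiles, so the stream is processed in a single pass; the association model described above refines exactly this similarity decision.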
Data Scaling Method for Multi-scale Data Mining
ZHANG Fang, ZHAO Shu-liang, WU Yong-liang
Computer Science. 2019, 46 (4): 57-65.  doi:10.11896/j.issn.1002-137X.2019.04.009
Abstract | PDF (1515KB)
Multi-scale mining has been applied in graphics and imaging, geographic information, signal analysis and data mining, with related research in association rule, clustering and classification mining. Nevertheless, how to divide datasets into common scales and how to construct multi-scale datasets have not been studied in depth. Starting from the task of multi-scale data mining, this paper defined the concept of scale and gave a multi-scale dataset model and a benchmark scale scoring model. It proposed a multi-scale partition algorithm based on a probability-density-estimation discretization method, which extends the data types of divisible scales; its partition results are closer to the multi-scale characteristics of the data, with lower time complexity. This paper also proposed a multi-scale dataset construction method, a multi-scale dataset algorithm and a benchmark scale selection algorithm, with multi-scale entropy and information entropy as evaluation methods. On the basis of the extended multi-scale dataset method, the scale effect produced by meso-scale derivation in multi-scale data mining can be effectively reduced, and the time complexity can be controlled. The proposed algorithms and models were validated on the real population dataset of H province, UCI common datasets and an IBM dataset. The experimental results show that the proposed method is feasible and the proposed model is effective: in the scale deduction process, it improves coverage by 1.6%, F1-measure by 2.1% and accuracy by 3.7%, with low average support error.
High Order Statistics Structured Sparse Algorithm for Image Genetic Association Analysis
RU Feng, XU Jin, CHANG Qi, KAN Dan-hui
Computer Science. 2019, 46 (4): 66-72.  doi:10.11896/j.issn.1002-137X.2019.04.010
Abstract | PDF (1497KB)
The development of neuroimaging technology and molecular genetics has produced a large amount of imaging genetic data, which has greatly promoted the study of complex mental diseases. However, because the dimensionality of the data is very high and correlation measures are based on the assumption that data obey a Gaussian distribution, traditional algorithms often fail to explain the dependencies between the two types of data. To overcome these shortcomings, this paper proposed a method for correlation analysis of large numbers of SNPs and fMRI data. The method guides fused lasso to perform feature selection by constructing a network structure over the features, and uses higher-order statistics to extract statistically significant variables, thereby identifying biomarkers associated with mental illness. The experimental results show that the distribution of canonical vector values obtained by the algorithm on simulated data is almost consistent with the real data, and the obtained correlation coefficient is closest to that of the real dataset. The average correlation coefficient of the proposed algorithm reaches 81%, about 20% higher than L1-SCCA and about 3% higher than FL-SCCA. Compared with the other two algorithms on real data, the proposed algorithm finds more genes and brain regions with potential effects on schizophrenia, and can effectively identify risk genes and abnormal brain regions within a reasonable time.
Data Mining Algorithm of Abnormal Network Based on Fuzzy Neural Network
XU Lei, WANG Jian-xin
Computer Science. 2019, 46 (4): 73-76.  doi:10.11896/j.issn.1002-137X.2019.04.011
Abstract | PDF (1466KB)
Abnormal network data are affected by fuzzy weighted disturbance of the clustering center, which degrades the clustering quality of data mining. This paper proposed a data mining algorithm for abnormal network data based on a fuzzy neural network. Similarity analysis is performed according to the mixed classification attributes of abnormal network data; the numerical and categorical attributes are extracted, and fuzzy fusion of the data is carried out using a joint association rule analysis method. A classification fuzzy set of abnormal network data is constructed based on a fuzzy centroid heterogeneity measurement method. Within the fuzzy data set, abnormal network data are weighted by mixed and adaptive block matching, and weakly correlated features are extracted. The extracted features are input into a fuzzy neural network classifier for classification and recognition, completing the optimized mining of abnormal network data. The simulation results show that the proposed method has good clustering ability for abnormal network data mining, and the mining process converges strongly and resists interference.
Associated Users Mining Algorithm Based on Multi-information Fusion Representation Learning
HAN Zhong-ming, ZHENG Chen-ye, DUAN Da-gao, DONG Jian
Computer Science. 2019, 46 (4): 77-82.  doi:10.11896/j.issn.1002-137X.2019.04.012
Abstract | PDF (1687KB)
With the rapid development and popularization of Internet technologies, more and more users share and exchange information through social networks. The same user may register multiple different accounts to distribute information, and these accounts constitute associated users in the network. Effectively mining associated users in social networks can suppress false information and illegal behavior, and thus ensure the security and fairness of the network environment. Existing associated user mining methods consider only user attributes or only user relationship information, without comprehensively fusing the multiple types of information contained in the network. In addition, most methods borrow from other fields, such as de-anonymization, and cannot accurately solve the associated user mining problem. In light of this, this paper proposed an associated user mining algorithm based on multi-information fusion representation learning (AUMA-MRL). The algorithm uses network representation learning to learn information of various dimensions, such as user attributes and network topology, and then fuses the learned information to obtain multi-information node embeddings, which accurately characterize the multiple types of information in the network; associated users are mined through similarity between node embeddings. The proposed algorithm was validated on three real networks: the protein network PPI and the social networks Flickr and Facebook, with precision and recall as evaluation indexes. The results show that the recall of the proposed algorithm is 17.5% higher on average than existing classical algorithms, and it can effectively mine associated users in networks.
Network & Communication
Design Method of Semantic-driven Cyberspace Resource Symbol
ZHANG Long, ZHOU Yang, TIAN Jiang-peng, ZHAO Hai-peng
Computer Science. 2019, 46 (4): 83-88.  doi:10.11896/j.issn.1002-137X.2019.04.013
Abstract | PDF (1985KB)
Cyberspace resources are the basic units of cognition in cyberspace. A systematic and structured symbol system for network resources is significant for correctly understanding the cyberspace situation, rapidly sharing and identifying cyberspace situation maps, and objectively grasping the distribution, state and attribution of cyberspace resources. Referring to the design method of semantic-driven map symbols, this paper introduced natural semantics theory into the symbol design process for cyberspace resources and proposed a semantic-driven design method for cyberspace resource symbols. Firstly, the paper analyzed the structure of cyberspace resources and proposed a cyberspace resource symbol structure and symbolic semantic model. Then, it elaborated the symbol design process and method for cyberspace resources. Finally, it conducted a cyberspace resource symbol cognition experiment and a comparative analysis with US military cyberspace symbols. The experimental results show that cyberspace resource symbols designed by this method are intuitive and systematic.
Semantic Description of IoT Services:A Method of Mapping WSDL to OWL-S
LING Jing, JIANG Ling-yun
Computer Science. 2019, 46 (4): 89-94.  doi:10.11896/j.issn.1002-137X.2019.04.014
Abstract | PDF (1312KB)
For describing IoT services, the existing standard is WSDL (Web Services Description Language), based on XML (Extensible Markup Language). However, WSDL lacks semantic information and cannot describe IoT services semantically, which affects the accuracy of service discovery. Among current semantic service description languages, OWL-S (Ontology Web Language for Services) has the most profound and lasting influence. To describe services semantically, this paper presented a conversion method from WSDL to OWL-S, which converts an existing WSDL file into an OWL-S file through operation mapping and ontology mapping. Test sets and examples verify the effectiveness of the proposed conversion method, and the precision and recall of the conversion results are better than those of the MWSAF method.
Research of Consensus in Multi-agent Systems on Complex Network
ZHANG Sen, LIU Wen-qi, ZHAO Ning
Computer Science. 2019, 46 (4): 95-99.  doi:10.11896/j.issn.1002-137X.2019.04.015
Abstract | PDF (1758KB)
How to improve the consensus convergence rate of multi-agent systems is an important issue in consensus research. The convergence rate is well characterized by the smallest non-zero eigenvalue of the Laplacian matrix. Through computer simulation, this paper found that the convergence rate is governed by different factors in different complex networks. The methods for improving the convergence rate on different complex networks are as follows. For the nearest-neighbor coupled network, the number of nodes N should be reduced, or the coupling number K should be increased. For the NW small-world network, the number of nodes N should be increased, or the probability of random edges p should be increased; the convergence rate shows a good linear relationship with the number of nodes N and the probability of random edges p. For the Waxman random graph network, the number of nodes N should be increased, or the network parameters α and β should be increased; the convergence rate grows linearly as β increases, with slight fluctuation. These results can help optimize the convergence rate of multi-agent networks.
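The quantity driving all of these observations, the smallest non-zero Laplacian eigenvalue λ₂ (algebraic connectivity), can be computed directly. The ring graphs below are toy stand-ins for a nearest-neighbor coupled network (K = 1) and a small-world variant with extra shortcut edges.

```python
import numpy as np

def algebraic_connectivity(adj):
    """Smallest non-zero Laplacian eigenvalue; larger means faster consensus."""
    A = np.asarray(adj, dtype=float)
    Lap = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(Lap))[1]

def ring(n, extra=()):
    """Nearest-neighbor ring of n nodes, optionally with shortcut edges."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    for u, v in extra:
        A[u, v] = A[v, u] = 1
    return A

lam_ring6 = algebraic_connectivity(ring(6))                       # = 1 exactly
lam_ring4 = algebraic_connectivity(ring(4))                       # smaller N, larger lambda_2
lam_sw = algebraic_connectivity(ring(6, extra=[(0, 3), (1, 4)]))  # shortcuts help
```

This reproduces two of the trends above in miniature: shrinking N raises λ₂ for the nearest-neighbor ring, and adding random shortcut edges (the small-world mechanism) raises it as well.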
RFID Data-driven Vehicle Speed Prediction Using Adaptive Kalman Filter
FENG An-qi, QIAN Li-ping, HUANG Yu-pin, WU Yuan
Computer Science. 2019, 46 (4): 100-105.  doi:10.11896/j.issn.1002-137X.2019.04.016
Abstract | PDF (2120KB)
This paper proposed a radio frequency identification (RFID) data-driven vehicle speed prediction method using an adaptive Kalman filter. First, when a vehicle passes an RFID tag, the reader acquires the state information (i.e., current speed and time stamp) of the last vehicle that crossed this tag, and meanwhile writes its own state information to the tag. Then, a state space model is formulated from the acquired state information. Finally, an adaptive Kalman filtering algorithm is used to predict and adjust the vehicle speed. The algorithm adaptively updates a variable forgetting factor using the error between the expected and actual output values, and thus realizes online updating of the prediction model. Numerical results show that, compared with the least squares method and the conventional Kalman filtering algorithm, the proposed algorithm improves speed prediction accuracy by 87.5% and 50% respectively, implying better real-time effectiveness for practical applications.
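A scalar Kalman filter over noisy speed readings sketches the predict-update cycle the method builds on; the paper's variable forgetting factor is omitted, and the noise parameters and simulated readings are assumptions.

```python
import random

random.seed(3)

def kalman_track(measurements, q=0.5, r=4.0):
    """Filter noisy speed readings: q = process noise (speed drift between
    tags), r = measurement noise of a tag reading."""
    x, p = measurements[0], 1.0      # state estimate and its variance
    estimates = [x]
    for z in measurements[1:]:
        p += q                        # predict: speed modeled as near-constant
        k = p / (p + r)               # Kalman gain
        x += k * (z - x)              # correct with the new tag reading
        p *= (1 - k)
        estimates.append(x)
    return estimates

# Hypothetical vehicle holding 60 units of speed, read with noise at 50 tags.
true_speed = 60.0
readings = [true_speed + random.gauss(0, 2) for _ in range(50)]
est = kalman_track(readings)
```

The adaptive variant described above would additionally tune the effective q/r balance online, via the forgetting factor, from the gap between predicted and observed readings.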
Mobile Sink Based Data Collection Strategy for Farmland WSN
YANG Ying, YANG Wu-de, WU Hua-rui, MIAO Yi-sheng
Computer Science. 2019, 46 (4): 106-111.  doi:10.11896/j.issn.1002-137X.2019.04.017
Abstract | PDF (2062KB)
To solve the problems of poor scalability and uneven node energy consumption in farmland WSNs, a mobile sink path planning strategy and routing strategy were proposed according to the characteristics of such networks: large numbers of nodes, large monitoring areas and low node density. The proposed strategy divides the network into variable grids and introduces a hop-constrained routing tree for route selection. In view of the large data volume and collision-prone transmission near the sink node, sparse processing of nodes and time-shared regional routing effectively reduce mutual interference in data transmission. Simulation results show that the algorithm can reduce interference and prolong the network lifetime.
Analysis of Channel Capacity of Molecular Communication Model Based on Inter-user Interference
CHENG Zhen, LIN Fei, ZHAO Hui-ting, ZHANG Yi-ming
Computer Science. 2019, 46 (4): 112-117.  doi:10.11896/j.issn.1002-137X.2019.04.018
Abstract | PDF (1983KB)
In the diffusive multi-user molecular communication model, released molecules follow Brownian motion, so inter-symbol interference (ISI) and inter-user interference (IUI) among molecules inevitably exist. How to increase the channel capacity of the system is therefore a challenge in current research on molecular communication. For the diffusive multi-user molecular communication model with On-Off Keying (OOK) modulation, the ISI and IUI were analyzed. The minimum average error probability criterion was used to obtain the optimal decision threshold for detection at the receiver nanomachine, and the channel capacity was then optimized. Finally, MATLAB simulations demonstrated the effects of different parameters on channel capacity. The simulation results show that by setting an appropriate prior probability, increasing the diffusion coefficient and slot duration, and reducing the transmitter-receiver distance and the number of users, the channel capacity of the molecular communication model based on IUI can be improved and the bit error rate reduced.
Dictionary Refinement-based Localization Method Using Compressive Sensing in Wireless Sensor Networks
WU Jian, SUN Bao-ming
Computer Science. 2019, 46 (4): 118-122.  doi:10.11896/j.issn.1002-137X.2019.04.019
Abstract | PDF (1700KB)
Traditional compressive sensing (CS)-based localization methods divide the physical space into a fixed grid and assume that all targets fall precisely on the grid, thereby formulating localization as a sparse reconstruction problem. In practice, it is very difficult to find such a fixed grid because of the randomness of target positions; as a result, there is always a mismatch between the assumed and actual sparse dictionaries, which significantly degrades localization performance. This paper addressed this problem and proposed a novel dictionary refinement-based localization method using CS. The true sparse dictionary is modeled as a parameterized dictionary that treats the grid points as adjustable parameters; based on this model, the sparse dictionary is gradually refined by dynamically adjusting the grid. Consequently, localization is formulated as a joint parameter estimation and sparse reconstruction problem, which is solved under a variational Bayesian inference framework. Simulation results show that the proposed method is more efficient and robust than traditional CS-based methods.
Information Security
Digital Image Sharing Algorithm Based on Triangular Partition
YUAN Xi-xi, CAI Zhan-chuan
Computer Science. 2019, 46 (4): 123-128.  doi:10.11896/j.issn.1002-137X.2019.04.020
Abstract | PDF (2337KB)
Digital images are easily lost, damaged, stolen by cheaters or used for illegal transmission over insecure networks, so digital image encryption technology can effectively enhance image security. Image sharing is an important encryption technology. However, conventional image sharing technologies repeat encryption pixel by pixel without considering the gray distribution characteristics, resulting in both reduced security and unnecessary space-time overhead. To handle these problems, a new digital image sharing algorithm was proposed that combines a non-uniform triangular partition algorithm based on the gray distribution characteristics with a threshold scheme. First, the non-uniform triangular partition algorithm is used to obtain a mesh of the image according to the gray distribution. Second, the threshold scheme is used to encrypt and share the vertex pixels of each sub-triangle in the mesh. Finally, the original image is reconstructed using Lagrange interpolation polynomials and the mesh coding information. The experimental results show that the proposed method reduces redundant encryption of pixels, improves security and reconstructs images well, making it an effective image sharing algorithm.
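The threshold scheme with Lagrange interpolation is Shamir-style secret sharing; here is a sketch over GF(251) for a single pixel value (the triangular mesh handling is omitted, and the (3, 5) parameters are assumed). A pixel can be recovered from any 3 of the 5 shares but from no fewer.

```python
import random

random.seed(7)
PRIME = 251  # largest prime below 256; pixel values above 250 would need clamping

def make_shares(secret, k, n):
    """Evaluate a random degree-(k-1) polynomial, with the secret as constant
    term, at x = 1..n to produce n shares."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(PRIME)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse (Fermat).
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

pixel = 137
shares = make_shares(pixel, k=3, n=5)
```

Applying this only to the triangle vertices of the non-uniform mesh, rather than to every pixel, is what removes the redundant per-pixel encryption criticized above.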
Client Puzzle Based Access Control Model in Public Blockchain
WU Dai-yue, LI Qiang, YU Xiang, HUANG Hai-jun
Computer Science. 2019, 46 (4): 129-136.  doi:10.11896/j.issn.1002-137X.2019.04.021
Abstract PDF(1776KB) ( 280 )   
References | Related Articles | Metrics
A public blockchain is non-centralized and allows any node to join, and thus possesses the advantages of high efficiency, low cost and high data security. However, allowing any node to join also increases the vulnerability of the blockchain network. This paper proposed a client puzzle based access control model named CPACM. In this model, new nodes must use their computing power to perform a proof of work before joining the network, and only the successful nodes may join the public blockchain network. The model thus enforces access control while keeping the public blockchain decentralized. Experimental results show that the proposed model can, with high probability, prevent low-computing-power nodes and unfaithful nodes from joining without affecting the access of faithful nodes, and can prevent node collusion, thus preventing malicious behaviors and improving the security of the public blockchain.
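A client puzzle of this kind is typically a hash-based proof of work: the joining node must find a nonce whose hash meets a difficulty target, which is cheap for the verifier to check. A minimal sketch (SHA-256 with leading-zero hex digits as the target; the difficulty encoding is an assumption, not the paper's exact construction):

```python
import hashlib
import itertools

def solve_puzzle(challenge: bytes, difficulty: int) -> int:
    """Brute-force a nonce so that SHA-256(challenge || nonce) starts with
    `difficulty` zero hex digits. Expected work grows as 16**difficulty."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify(challenge: bytes, nonce: int, difficulty: int) -> bool:
    """Verification costs a single hash, regardless of difficulty."""
    digest = hashlib.sha256(challenge + str(nonce).encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

The asymmetry (expensive to solve, one hash to verify) is what lets the network filter out low-computing-power or mass-produced Sybil nodes.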
Reliable Security Lock Key Updating Scheme over Narrow Band Internet of Things
LIU Meng-jun, SHA Tao, LI Dan, LIU Shu-bo
Computer Science. 2019, 46 (4): 137-143.  doi:10.11896/j.issn.1002-137X.2019.04.022
Abstract PDF(1744KB) ( 284 )   
References | Related Articles | Metrics
In a narrow band internet of things communication system, data among devices are transmitted by the connectionless UDP protocol. Over such an unreliable protocol, it is difficult to reliably update the keys of a security door system. This paper designed a reliable key updating scheme over a connectionless communication link. The scheme exploits the characteristics of smart lock key updating and adopts carefully designed interactive key transmission mechanisms, so that lock devices can obtain keys via the UDP protocol and achieve reliable key updating. Theoretical analysis and prototype experiment results show that the proposed scheme incurs small communication and computation overhead while reliably updating the key.
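The core difficulty, delivering a key update over a lossy datagram link, is usually handled with an acknowledged retransmission loop. The sketch below shows a stop-and-wait pattern over a simulated lossy channel; it is a generic illustration of the reliability mechanism, not the paper's specific interactive protocol:

```python
def reliable_update(channel, key, seq, max_retries=8):
    """Stop-and-wait retransmission over an unreliable (UDP-like) channel.
    `channel(msg)` returns the acknowledged sequence number, or None on loss."""
    for _ in range(max_retries):
        ack = channel(("KEY_UPDATE", seq, key))
        if ack == seq:
            return True  # lock confirmed receipt of this exact update
    return False

class LossyLock:
    """Test double for a smart lock behind a channel that drops packets."""
    def __init__(self, drop_first=2):
        self.drops = drop_first
        self.key = None
    def __call__(self, msg):
        if self.drops > 0:      # simulate UDP loss of the first sends
            self.drops -= 1
            return None
        _, seq, key = msg
        self.key = key          # lock installs the new key
        return seq              # and acknowledges the sequence number
```

Tying the acknowledgment to a sequence number is what prevents a stale or replayed ACK from being mistaken for confirmation of the latest key.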
Low Expansion Rate Encryption Algorithm Based on MLWE
KE Cheng-song, WU Wen-yuan, FENG Yong
Computer Science. 2019, 46 (4): 144-150.  doi:10.11896/j.issn.1002-137X.2019.04.023
Abstract PDF(1299KB) ( 466 )   
References | Related Articles | Metrics
The lattice encryption algorithm Kyber, based on MLWE, has the advantages of resistance to quantum attacks and high encryption efficiency. However, its ciphertext expansion rate is as high as about 1:25, which makes it suitable only for a few scenarios such as key encapsulation. In order to obtain an encryption algorithm applicable to common public key encryption scenarios, an improved MLWE-based public key encryption algorithm with a low expansion rate was proposed in this paper. A new encryption parameter dp was introduced into the algorithm to expand the plaintext space. Through strict theoretical deduction and experiments, the influence of dp on the correctness of the encryption algorithm was analyzed, and the encryption parameter was optimized to reduce the ciphertext expansion rate. The improved algorithm works in a large finite field, which would lead to low encryption efficiency; to avoid this, this paper proposed a polynomial multiplication method based on the FFT over the complex field using floating point arithmetic, and implemented the improved algorithm in C++. Experimental results show that the new method is more efficient and its floating-point error is controllable. Compared with Kyber, the ciphertext expansion rate is reduced from 1:25 to about 1:4.25.
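The idea of multiplying integer polynomials via a complex-field FFT with controlled floating-point error can be sketched as follows (a generic radix-2 FFT convolution with rounding back to integers; the paper's C++ implementation and its error analysis are of course more involved):

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return a
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def poly_mul(p, q):
    """Multiply integer polynomials through the complex field, then round:
    the rounding step is what keeps the floating-point error controllable
    as long as coefficients stay well below the float precision limit."""
    n = 1
    while n < len(p) + len(q) - 1:
        n <<= 1
    fa = fft([complex(x) for x in p] + [0j] * (n - len(p)))
    fb = fft([complex(x) for x in q] + [0j] * (n - len(q)))
    prod = fft([x * y for x, y in zip(fa, fb)], invert=True)
    return [round((x / n).real) for x in prod[: len(p) + len(q) - 1]]
```

This turns the O(n²) schoolbook multiplication into O(n log n), which is the efficiency gain the abstract refers to when the algorithm moves to a large finite field.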
Bluetooth Key Agreement Scheme with Zero Secret Storage in Slave Device
LI Sen-sen, HUANG Yi-cai, YU Bin
Computer Science. 2019, 46 (4): 151-157.  doi:10.11896/j.issn.1002-137X.2019.04.024
Abstract PDF(1844KB) ( 381 )   
References | Related Articles | Metrics
To solve the problem that existing bluetooth pairing protocols can hardly resist man-in-the-middle attacks and replication attacks, a bluetooth key agreement scheme with zero secret storage in the slave device was proposed. By using a Physical Unclonable Function (PUF), the scheme realizes mutual authentication and link key agreement between the master device and the slave device through a three-way handshake, without requiring the slave device to store any secret parameters. Theoretical analysis and experimental results show that the proposed scheme not only offers high security, but also requires less communication, computation and storage cost.
Location Anonymous Algorithm Based on User Collaboration under Distributed Structure
WU Dan-dan, LYU Xin
Computer Science. 2019, 46 (4): 158-163.  doi:10.11896/j.issn.1002-137X.2019.04.025
Abstract PDF(1508KB) ( 309 )   
References | Related Articles | Metrics
With the increasing popularity of mobile terminals and the rapid development of communication technology, location-based service applications have become widely used in daily life. However, users' location information is privacy-sensitive, so how to preserve users' privacy while guaranteeing service quality has become a hot research issue. An anonymous region constructing algorithm based on user collaboration was proposed in this paper. An anchor is randomly selected from a designed set, and cooperative users are discovered by hop-by-hop broadcasting until the anonymity demand is satisfied. During the collaboration responding process, an anchor-centered anonymous region is formed based on the distance between the user and the anchor. The experimental results show that the proposed algorithm can effectively resist collusion attacks and anonymous center attacks, while greatly decreasing the average area of the anonymous region.
Bidirectional Anonymous Secret Communication Protocol Based on Onion Routing
ZHAO Meng-yao, LI Xiao-yu
Computer Science. 2019, 46 (4): 164-171.  doi:10.11896/j.issn.1002-137X.2019.04.026
Abstract PDF(1696KB) ( 649 )   
References | Related Articles | Metrics
In a network, the identity of communicators is important private information. Anonymous communication can hide the sender and the recipient. Most research on anonymous communication concerns the sender's anonymity; there is less research on the receiver's anonymity and on bidirectional anonymity. In an onion routing system, the onion path is constructed by using a source routing protocol and layer-by-layer encryption. The message is forwarded through ordered transit nodes according to the onion path, which hides the sender's address, realizes the sender's anonymity and effectively prevents eavesdropping and traffic analysis. A new bidirectional anonymous secret communication protocol based on onion routing was proposed in this paper. The onion path constructed by the sender contains all the nodes in the system. Every transit node must judge whether it is the receiver; if not, the message continues to be forwarded, otherwise the recipient receives the message and forwarding terminates. The identity of the sender (receiver) is not revealed to the other party or to any other user, and apart from the two communicating parties, no transit node or intruder can obtain the message. Therefore, the protocol achieves two-way anonymous secret communication. The anonymity of the receiver is realized without multicast, which effectively reduces the traffic in the system, and the protocol is based only on the onion routing anonymity system and is relatively simple. The experimental results show that as the number of system users increases, the average response time and the average bidirectional communication time increase almost linearly, which indicates that the system remains stable and robust with a large number of users.
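The layer-by-layer construction and the "am I the receiver?" check at each hop can be sketched as follows. This is a toy model: the XOR keystream stands in for real symmetric encryption and the FWD/RECV tags stand in for the paper's receiver test, purely to show the peeling structure:

```python
import hashlib

def _stream_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR keystream derived from SHA-256 (illustration only, NOT secure).
    XOR is its own inverse, so the same call encrypts and decrypts."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

def build_onion(message: bytes, keys):
    """Sender wraps the message once per node on the path. The innermost
    layer is tagged RECV for the receiver; outer layers are tagged FWD
    (the next hop is given by the fixed all-node path, as in the abstract)."""
    onion = _stream_xor(keys[-1], b"RECV|" + message)
    for key in reversed(keys[:-1]):
        onion = _stream_xor(key, b"FWD|" + onion)
    return onion

def peel(onion: bytes, key: bytes):
    """A node strips its own layer and learns only whether it is the receiver."""
    plain = _stream_xor(key, onion)
    tag, _, rest = plain.partition(b"|")
    return tag.decode(), rest
```

Because each node can remove only its own layer, a forwarding node sees another opaque onion, never the message or the endpoints' identities.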
Improved Efficient Certificateless Short Signature Scheme
ZUO Li-ming, CHEN Zuo-song, XIA Ping-ping, TANG Peng-zhi, KANG Wen-yang
Computer Science. 2019, 46 (4): 172-176.  doi:10.11896/j.issn.1002-137X.2019.04.027
Abstract PDF(1363KB) ( 391 )   
References | Related Articles | Metrics
Certificateless public key cryptography has long been a hot topic in cryptography research: it solves not only the problem of storing and managing certificates in the PKI (public key infrastructure) certificate cryptosystem, but also the key escrow problem in identity-based cryptography. Aiming at the problem that traditional certificateless digital signature schemes are susceptible to public key substitution attacks, the definition of the traditional certificateless digital signature was improved, and a short signature scheme based on the new definition was proposed. The scheme was proved secure under the Inv-CDH (inverse computational Diffie-Hellman) hardness assumption in the random oracle model, and was implemented. Finally, efficiency analysis and experimental comparison with several classic schemes were carried out. The results show that the scheme has low computational complexity and high efficiency, and is suitable for application scenarios with weak computing and transmitting capability.
Security Analysis on VANETs Authentication Schemes:CPAV and ABV
WANG Qing-long, QIAO Rui, DUAN Zong-tao
Computer Science. 2019, 46 (4): 177-182.  doi:10.11896/j.issn.1002-137X.2019.04.028
Abstract PDF(1247KB) ( 440 )   
References | Related Articles | Metrics
Recently, many different anonymous authentication schemes have been proposed for the privacy protection of vehicles in vehicular ad hoc networks (VANETs). In 2018, Vijayakumar et al. proposed a computationally efficient privacy preserving anonymous authentication scheme for VANETs (CPAV) and an anonymous batch authentication scheme for VANETs (ABV). The schemes are claimed to achieve anonymous mutual authentication between the vehicle and the road side unit (RSU), as well as anonymous batch authentication of vehicles by the RSU, and to resist bogus attacks, forgery attacks and association attacks, with the TA (Trusted Agency) able to track the true identity of registered vehicles when necessary. This paper deeply analyzed the security of CPAV and ABV. In the CPAV scheme, external attackers are fully able to conduct bogus attacks and forgery attacks, which proves that the scheme satisfies neither non-repudiation nor conditional tracking of vehicles. In addition, because the anonymous identity used in the scheme is unique, the scheme cannot resist association attacks, which indicates that it does not possess unlinkability. Finally, it is also proved that the anonymous batch authentication (ABV) scheme cannot resist forgery attacks.
Dynamic Threat Assessment Model Based on Intuitionistic Fuzzy Multiple Attribute Decision Making
CHEN De-jiang, WANG Jun, ZHANG Hao-wei
Computer Science. 2019, 46 (4): 183-188.  doi:10.11896/j.issn.1002-137X.2019.04.029
Abstract PDF(1269KB) ( 305 )   
References | Related Articles | Metrics
Aiming at the multi-attribute nature of targets and the dynamic changes of attribute values in air defense operations, this paper proposed a dynamic threat assessment method based on intuitionistic fuzzy multiple attribute decision making. Firstly, for the unknown weights of target attributes, a linear weighted attribute weight optimization model was established on the basis of a comprehensive investigation of decision makers' (DMs') subjective preferences and objective information, and the target attribute weights were determined. Secondly, in view of the one-sidedness of traditional threat assessment algorithms, which only consider attribute information at the current time, the Poisson distribution was used to weight the time series, realizing the fusion of target information over multiple moments and capturing the dynamic evolution of the combat situation. Thirdly, the idea of TOPSIS was used to compute the normalized threat value of each target, yielding a dynamic threat ranking of the incoming targets. A simulation example demonstrates the flexibility and rationality of the proposed algorithm, which provides more effective decision support for air defense DMs.
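The TOPSIS ranking step used in the third stage can be sketched as follows (plain crisp TOPSIS with benefit-type attributes and given weights; the paper's intuitionistic fuzzy numbers and Poisson time weighting are omitted for brevity):

```python
import math

def topsis(matrix, weights):
    """Rank alternatives by relative closeness to the ideal solution.
    matrix[i][j] is the score of target i on (benefit-type) attribute j."""
    m, n = len(matrix), len(matrix[0])
    # vector-normalize each column, then apply the attribute weights
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    best = [max(col) for col in zip(*v)]    # positive ideal solution
    worst = [min(col) for col in zip(*v)]   # negative ideal solution
    scores = []
    for row in v:
        d_best = math.dist(row, best)
        d_worst = math.dist(row, worst)
        scores.append(d_worst / (d_best + d_worst))  # closeness in [0, 1]
    return scores
```

Higher scores mean higher threat; sorting targets by this closeness coefficient gives the threat ranking.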
Dependency Analysis Based Cloud Composition Service Information Flow Control Mechanism
LIU Ming-cong, WANG Na, ZHOU Ning
Computer Science. 2019, 46 (4): 189-196.  doi:10.11896/j.issn.1002-137X.2019.04.030
Abstract PDF(1599KB) ( 269 )   
References | Related Articles | Metrics
Cloud composition services can provide users with richer capabilities, but sensitive information may flow through multiple cloud services in a business process, so information flow control must be implemented to prevent information leakage or unauthorized access. Aiming at the security of information flow in cloud composition services, this paper proposed an information flow control mechanism based on dependency analysis: the information flow in the composite service is analyzed through the dependencies between data, and is controlled by using security labels. Firstly, a weighted directed graph model of the cloud composition service with complex composition structure is constructed. Based on the security attributes, the attribute certificate of a cloud service and the confidentiality and integrity labels of data are defined; then the input dependencies and resource dependencies between services are introduced, together with methods for computing them from historical information. After that, an algorithm for deriving the security labels of output data from the dependency analysis is given, the compositional information flow policy is defined, and a distributed information flow control mechanism is designed, realizing confidentiality and integrity protection of information flow in cloud composition services with complex composition structures. Finally, an example is given to analyze the effectiveness and performance of the mechanism.
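Label-based flow checking of this kind usually enforces that secrecy never decreases and integrity never increases along a composed chain (the Bell-LaPadula and Biba conditions). The sketch below uses simple integer levels as an assumption; the paper's labels are derived from dependency analysis rather than fixed per service:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Label:
    confidentiality: int  # higher = more secret
    integrity: int        # higher = more trusted

def may_flow(src: Label, dst: Label) -> bool:
    """Data may flow src -> dst only if the destination is at least as
    secret (no leakage) and at most as trusted (no integrity laundering)."""
    return (dst.confidentiality >= src.confidentiality
            and dst.integrity <= src.integrity)

def check_path(labels):
    """Validate every hop of a composed service chain against the policy."""
    return all(may_flow(a, b) for a, b in zip(labels, labels[1:]))
```

Checking hop by hop is what allows the enforcement to be distributed: each service only needs the labels of its direct neighbors.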
Code Obfuscation Effectiveness Assessment Model Based on Nonlinear Fuzzy Matrices
SU Qing, LIN Ze-ming, LIN Zhi-yi, HUANG Jian-feng
Computer Science. 2019, 46 (4): 197-202.  doi:10.11896/j.issn.1002-137X.2019.04.031
Abstract PDF(2147KB) ( 344 )   
References | Related Articles | Metrics
In order to address the low discrimination of existing code obfuscation assessment methods, this paper proposed a code obfuscation effectiveness assessment model based on nonlinear fuzzy matrices (MNLFM), and proved several of MNLFM's properties, such as assessment rationality, monotonicity, continuity and highlighting. MNLFM can markedly improve the current situation of poor discrimination in the field of obfuscation assessment. The model is built by quantifying the assessment index parameters, determining the membership functions and constructing the nonlinear fuzzy matrices. A test case suite of Java programs was set up, and several code obfuscation techniques based on control flow flattening and opaque predicates were used to validate the model, which was then compared with other code obfuscation assessment models. The experimental results verify that MNLFM can compare the overall complexity of obfuscated codes and clearly distinguish the degree to which different obfuscation algorithms transform the original code.
Alert Processing Method Based on Hierarchical Clustering
WU Yi-fan, CUI Yan-peng, HU Jian-wei
Computer Science. 2019, 46 (4): 203-209.  doi:10.11896/j.issn.1002-137X.2019.04.032
Abstract PDF(1818KB) ( 267 )   
References | Related Articles | Metrics
Aiming at the problem that intrusion detection systems generally produce redundant alarms, which hinders the judgment of attack types, this paper proposed an alert processing method based on improved hierarchical clustering, so as to reduce redundant alarms and improve the accuracy of attack type detection. On the basis of hierarchical clustering, the method uses the alarm content as the unique attribute value of a cluster, adopts the percentage of effective alerts, obtained with prior knowledge, as the criterion for selecting the clustering threshold, and improves on the conventional practice of directly discarding clusters whose value exceeds the threshold. Instead, the improved method uses the cosine similarity algorithm to compute a representative alert for each over-threshold cluster, effectively avoiding the discarding of useful alarms. After clustering with a suitable threshold, the deduplicated and clustered alarms within the time window are displayed in time-axis order so that the attacker's attack type can be determined quickly. The experimental results show that the improved clustering method has a better deduplication effect.
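One plausible reading of the representative-alert step is a medoid selection: pick the alert most similar on average to the rest of its cluster under cosine similarity. A minimal sketch on bag-of-words alert texts (the tokenization is an assumption; the paper clusters on alarm content attributes):

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def representative(alerts):
    """Instead of discarding an over-threshold cluster, keep the alert
    with the highest average similarity to the whole cluster (its medoid)."""
    vecs = [Counter(a.split()) for a in alerts]
    avg_sim = [sum(cosine(v, w) for w in vecs) / len(vecs) for v in vecs]
    return alerts[max(range(len(alerts)), key=avg_sim.__getitem__)]
```

The medoid summarizes the cluster without inventing a synthetic alert, so no useful alarm content is thrown away.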
Artificial Intelligence
Local Path Planning of Mobile Robot Based on Situation Assessment Technology
CHAI Hui-min, FANG Min, LV Shao-nan
Computer Science. 2019, 46 (4): 210-215.  doi:10.11896/j.issn.1002-137X.2019.04.033
Abstract PDF(1637KB) ( 360 )   
References | Related Articles | Metrics
From a cognitive perspective, an approach based on situation assessment technology was proposed for the local path planning of mobile robots. First, the angle range [10°, 170°] in front of the mobile robot is divided into five parts under the robot coordinate system, and the environmental situation factors of the five parts are extracted from the fusion of laser data and image data. Then, a Bayesian network model for choosing the robot's action is constructed, with the environmental situation factors serving as evidence; inference selects the action with the maximum posterior probability, which can be straight-line walking, obstacle avoidance, or escaping from a U-shaped trap. Furthermore, the chosen action is executed to move the robot to the next grid cell, which is selected according to sonar data while the robot's heading is adjusted. Experiments on eleven typical simulation scenes were conducted; one scene failed and the remaining ten succeeded, with the mobile robot reaching the destination along the shortest or second-shortest route. The results show that the situation assessment approach is effective and practical for the local path planning of mobile robots.
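The posterior-maximizing action choice can be sketched with a naive factorization of the Bayesian network (all probability tables below are made-up illustrative numbers, not the paper's model):

```python
def choose_action(evidence, priors, likelihoods):
    """Pick the action maximizing P(action | evidence), computed up to a
    constant as P(action) * product of P(e_i | action) over the evidence
    (a naive-Bayes simplification of the paper's Bayesian network)."""
    best, best_score = None, -1.0
    for action, prior in priors.items():
        score = prior
        for var, val in evidence.items():
            score *= likelihoods[action][var].get(val, 1e-6)
        if score > best_score:
            best, best_score = action, score
    return best

# Hypothetical tables: one evidence variable summarizing the frontal sectors.
PRIORS = {"straight": 0.5, "avoid": 0.3, "escape_u": 0.2}
LIKELIHOODS = {
    "straight": {"front": {"clear": 0.9, "blocked": 0.05}},
    "avoid":    {"front": {"clear": 0.2, "blocked": 0.7}},
    "escape_u": {"front": {"clear": 0.05, "blocked": 0.4}},
}
```

In the paper the evidence would be the five sector situation factors fused from laser and image data rather than a single variable.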
Detecting Community from Bipartite Network Based on Spectral Clustering
ZHANG Xiao-qin, AN Xiao-dan, CAO Fu-yuan
Computer Science. 2019, 46 (4): 216-221.  doi:10.11896/j.issn.1002-137X.2019.04.034
Abstract PDF(1772KB) ( 3073 )   
References | Related Articles | Metrics
A bipartite network is a special kind of network that plays an important role in exploring the deep structure of networks. However, existing methods for dividing bipartite network communities still suffer from problems such as low partition precision. By applying a normalized spectral clustering algorithm, a community detection algorithm named spectral clustering interaction (SPCI) was proposed. First, a similarity matrix is constructed from the relationships between the two kinds of nodes; then the nodes are clustered by the spectral clustering algorithm; finally, the community partition of the bipartite network is obtained by using the interaction index between the two kinds of nodes. Verification on artificial and real data shows that SPCI not only achieves higher accuracy and modularity than the algorithms based on the resource distribution matrix, edge clustering coefficient and spectral co-clustering, but also can accurately determine the number of communities.
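The normalized spectral step at the heart of such methods can be sketched for the two-community case: split nodes by the sign of the Fiedler vector of the normalized Laplacian of a similarity matrix (this shows only the generic spectral bipartition, not SPCI's similarity construction or interaction index):

```python
import numpy as np

def spectral_bipartition(W):
    """Split nodes into two communities by the sign of the Fiedler vector of
    the normalized Laplacian L = I - D^{-1/2} W D^{-1/2}, where W is a
    symmetric similarity matrix with positive node degrees."""
    d = W.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(W)) - d_inv_sqrt @ W @ d_inv_sqrt
    _, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    fiedler = vecs[:, 1]                 # eigenvector of 2nd-smallest eigenvalue
    return (fiedler > 0).astype(int)     # sign pattern = community labels
```

For more than two communities one would use the first k eigenvectors and run k-means on the rows, which is the usual normalized spectral clustering recipe.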
Prediction of Protein Functions Based on Bi-weighted Vote
TANG Jia-qi, WU Jing-li, LIAO Yuan-xiu, WANG Jin-yan
Computer Science. 2019, 46 (4): 222-227.  doi:10.11896/j.issn.1002-137X.2019.04.035
Abstract PDF(1301KB) ( 295 )   
References | Related Articles | Metrics
Proteins are essential molecules for accomplishing important biological activities, and accurately determining their functions will greatly advance life science research and applications. A tremendous number of protein sequences have been generated with the development of high-throughput techniques, so predicting protein functions on a large scale with computational methods has become one of the key tasks in bioinformatics. Currently, prediction methods based on protein-protein interaction networks, a research hotspot in protein function prediction, still have shortcomings in reducing the impact of data noise, making full use of network topology, and integrating multi-source data. In this paper, the bi-weighted vote (BiWV) algorithm was proposed to predict protein functions; it combines the global topological similarity produced by random walk with resistance (RWS) and the semantic similarity between terms. In addition, the bi-weighted vote algorithm with pathway (BiWV-P) was presented by integrating biological pathway information. Experiments on the data sets of Saccharomyces cerevisiae and Homo sapiens compared TMC, UBiRW, ProHG, BiWV and BiWV-P. The experimental results indicate that BiWV and BiWV-P can predict protein functions effectively, and achieve higher micro-accuracy and micro-F1 than the other algorithms on many data sets.
Point of Interest Recommendation Based on User’s Interest and Geographic Factors
SU Chang, WU Peng-fei, XIE Xian-zhong, LI Ning
Computer Science. 2019, 46 (4): 228-234.  doi:10.11896/j.issn.1002-137X.2019.04.036
Abstract PDF(1623KB) ( 800 )   
References | Related Articles | Metrics
In location-based social networks, collaborative filtering is the most widely used recommendation technology, but it suffers from drawbacks such as data sparsity and cold start. In light of this, this paper presented a point-of-interest (POI) recommendation algorithm combining users' interests and geographical factors. First, adaptive kernel density estimation, the naive Bayesian algorithm and POI popularity are combined to mine a user's geographical preferences, and candidate POIs are screened according to this geographical preference model. Then, to overcome the data sparsity and cold start problems of the collaborative filtering algorithm, a user preference model built from check-in similarity, category information and user trust is used to carry out POI recommendation. Finally, experimental analysis on the Yelp data set shows that the proposed POI recommendation model based on users' interests and geographical factors achieves a good recommendation effect.
Personalized Recommendation Algorithm Based on Recent Neighborhood Recommendation and Combined with Context Awareness
ZHANG Hong-li, BAI Xiang-yu, LI Gai-mei
Computer Science. 2019, 46 (4): 235-240.  doi:10.11896/j.issn.1002-137X.2019.04.037
Abstract PDF(1364KB) ( 380 )   
References | Related Articles | Metrics
Aiming at the problems that traditional context-aware recommendation algorithms are inaccurate and their applicable environments are limited, this paper proposed a feasible solution. It finds relevant media content based on detected context information, which is more effective than relying solely on feature extraction. First, the context data and search information are used to identify hidden relationships between the selected context and the user's interests in that particular context, and a recommendation model for the unknown ranking is constructed. The context-aware rating of the user's expected ranking score is then calculated by using the given contextual list. The user's situation participates in the selection of new items, so that the detected situation helps to facilitate the search for related items. An optimization function is further used to maximize the mean average precision (MAP) of the resulting recommendations. The experimental results show that, compared with more advanced algorithms, the proposed method performs better than the traditional collaborative filtering algorithm: the absolute error is reduced by 1.8% and 1.2% in recommendation accuracy and recall respectively, and the recall rate is also superior to the two comparison methods.
Online Obstacle Avoidance and Path Planning of Quadrotor Oriented to Urban Environment
CHENG Hao-hao, YANG Sen, QI Xiao-hui
Computer Science. 2019, 46 (4): 241-246.  doi:10.11896/j.issn.1002-137X.2019.04.038
Abstract PDF(2287KB) ( 609 )   
References | Related Articles | Metrics
Aiming at the online obstacle avoidance and path planning problem of quadrotors in urban environments, this paper studied improved versions of the rapidly-exploring random tree (RRT) algorithm and the artificial potential field method. To address the slow convergence and tortuous paths of the RRT algorithm, probabilistic guidance is first used to steer the growth direction of the random tree, and then the track is pruned and smoothed with a B-spline curve to generate a feasible track that satisfies the performance requirements of the quadrotor. To address the local minima and oscillation problems of the artificial potential field method, an initial path is first generated by using an improved potential field function, and then the path is optimized by waypoint pruning and B-spline smoothing. Finally, under an urban environment model, the improved RRT algorithm is compared with the improved artificial potential field method in terms of planning time, track length and turning angle. The results show that the improved RRT algorithm is more suitable for online obstacle avoidance and path planning of quadrotors.
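One standard way to realize the "probability guidance" idea is goal biasing: with some probability the random sample is replaced by the goal, pulling the tree toward it. A minimal 2-D sketch (all parameters and the workspace bounds are illustrative, and the B-spline smoothing step is omitted):

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, goal_bias=0.2, goal_tol=0.6,
        bounds=(0.0, 10.0), max_iter=5000, seed=1):
    """Goal-biased RRT on a 2-D plane. `is_free(p)` is the collision check."""
    rng = random.Random(seed)
    nodes, parent = [start], {0: None}
    for _ in range(max_iter):
        # probability guidance: sometimes sample the goal itself
        if rng.random() < goal_bias:
            sample = goal
        else:
            sample = (rng.uniform(*bounds), rng.uniform(*bounds))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near, d = nodes[i], math.dist(nodes[i], sample)
        if d == 0.0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue                      # collision: discard this extension
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            path, k = [], len(nodes) - 1  # walk back to the root
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None
```

The raw path is typically jagged, which is why the paper follows tree growth with pruning and B-spline smoothing before handing the track to the quadrotor.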
Path Planning of Mobile Robot Based on PG-RRT Algorithm
XI Feng-fei, ZENG Xi, JI Shi-ming, CHEN Guo-da, CAI Chao-peng
Computer Science. 2019, 46 (4): 247-253.  doi:10.11896/j.issn.1002-137X.2019.04.039
Abstract PDF(3839KB) ( 512 )   
References | Related Articles | Metrics
Path planning is a vital problem in the field of mobile robots and a basis for the development of smart factories. The rapidly-exploring random tree (RRT) algorithm is widely used in path planning because of its excellent solving performance. Aiming at the RRT algorithm's low execution efficiency and poor path repeatability on complex maps, a plant-growth-guided RRT path planning algorithm (PG-RRT) for mobile robots was proposed to improve the stability and efficiency of path optimization. By using three principles followed by plant growth (phototropism, obstacle influence and negative geotropism), combined with a variable step technique and an inflation technique, a PG dilation guide field for the RRT algorithm is obtained; the final path is then computed by the RRT algorithm with its random sampling characteristics. Extensive simulations show that the PG-RRT algorithm reduces the number of iterations and obtains better path distances than the traditional RRT algorithm and the plain PG algorithm. Notably, the search efficiency of the proposed algorithm is also improved compared with the A* algorithm, and a real-vehicle test verifies the practicability of PG-RRT.
Graphics, Image & Pattern Recognition
Fidelity Index in Image Magnification Based on Hierarchical Feature and Radial Basis Function
LI Chun-jing, HU Jing, TANG Zhi
Computer Science. 2019, 46 (4): 254-260.  doi:10.11896/j.issn.1002-137X.2019.04.040
Abstract PDF(4409KB) ( 337 )   
References | Related Articles | Metrics
As an important information carrier, the image is indispensable in daily life, and how to retain and acquire the information in an image to the greatest extent has long been an important topic. In recent years, radial basis function (RBF) interpolation has become an effective new method for scattered data interpolation. In image magnification based on radial basis functions, the values of different parameters have a great influence on the magnified image, so an appropriate fidelity index is critical both for evaluating image quality and for studying the parameters. This paper presented a definition of a fidelity index for image magnification based on the hierarchical features of the image and a block-matrix radial basis function; it consists of a global distortion index and an edge distortion index. The experimental results show that the fidelity index is effective. Furthermore, the correlations between the parameters of the MQ, inverse MQ and Gaussian radial basis functions and the image texture amplification mechanism were studied.
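RBF interpolation of the kind used in such magnification schemes fits weights by solving a linear system over the known samples. A self-contained sketch with the multiquadric (MQ) kernel, treating pixel positions as scattered data points (the shape parameter c = 1.0 is illustrative; the paper studies how such parameters affect the result):

```python
import math

def mq(r, c=1.0):
    """Multiquadric (MQ) radial basis function."""
    return math.sqrt(r * r + c * c)

def rbf_fit(points, values, c=1.0):
    """Solve A w = f with A[i][j] = mq(||x_i - x_j||) by Gaussian
    elimination with partial pivoting (fine for small systems)."""
    n = len(points)
    A = [[mq(math.dist(points[i], points[j]), c) for j in range(n)] for i in range(n)]
    w = list(values)
    for col in range(n):
        piv = max(range(col, n), key=lambda r_: abs(A[r_][col]))
        A[col], A[piv] = A[piv], A[col]
        w[col], w[piv] = w[piv], w[col]
        for r_ in range(col + 1, n):
            f = A[r_][col] / A[col][col]
            for k in range(col, n):
                A[r_][k] -= f * A[col][k]
            w[r_] -= f * w[col]
    for col in range(n - 1, -1, -1):  # back substitution
        w[col] = (w[col] - sum(A[col][k] * w[k] for k in range(col + 1, n))) / A[col][col]
    return w

def rbf_eval(points, weights, x, c=1.0):
    """Interpolant value at x: a weighted sum of kernels centered at the data."""
    return sum(wi * mq(math.dist(pi, x), c) for wi, pi in zip(weights, points))
```

By construction the interpolant reproduces the known pixel values exactly at the sample points; magnification then evaluates it at the finer grid positions in between.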
Image Fusion Algorithm Based on Improved Weighted Method and Adaptive Pulse Coupled Neural Network in Shearlet Domain
WANG Ying, LIU Fan, CHEN Ze-hua
Computer Science. 2019, 46 (4): 261-267.  doi:10.11896/j.issn.1002-137X.2019.04.041
Abstract PDF(4390KB) ( 341 )   
References | Related Articles | Metrics
Since traditional multi-focus image fusion algorithms suffer from low contrast, this paper presented a multi-focus image fusion algorithm based on an improved weighted method and an adaptive pulse coupled neural network (PCNN) in the Shearlet domain. Firstly, the source images are decomposed by the Shearlet transform into a low-frequency subband and a series of high-frequency subbands at different scales and in different directions; the low-frequency subbands are then fused by a weighted sum that uses the absolute value of their difference, with the weights calculated from the average gradient. At the same time, the high-frequency subbands are fused by an adaptive PCNN fusion rule: the PCNN motivation is calculated by the sum-modified Laplacian, the linking strength is adaptively calculated from the regional spatial frequency of each source image, and the fused high-frequency subbands are obtained according to the ignition map of the PCNN. Finally, the fused image is acquired by the inverse Shearlet transform. One group of artificially simulated multi-focus images (Cameraman) and three groups of real multi-focus images (Pepsi, Clock and Peppers) were selected for experiments, seven other fusion methods were chosen for comparison, and four common quality evaluation indexes were used to evaluate the fused images objectively. The experimental results show that the proposed method performs well in both subjective vision and objective evaluation.
Image Description Model Fusing Word2vec and Attention Mechanism
DENG Zhen-rong, ZHANG Bao-jun, JIANG Zhou-qin, HUANG Wen-ming
Computer Science. 2019, 46 (4): 268-273.  doi:10.11896/j.issn.1002-137X.2019.04.042
Abstract PDF(1833KB) ( 542 )   
References | Related Articles | Metrics
Since the overall quality of the sentences generated in current image description tasks is not high, an image description model fusing word2vec and an attention mechanism was proposed. In the encoding stage, the word2vec model is used to vectorize the description text, strengthening the relationships among words; the VGGNet19 network is used to extract image features, and the attention mechanism is integrated into the image features so that the relevant image features are highlighted when a word is generated at each time step. In the decoding stage, a GRU network is used as the language generation model for the image description task to improve the efficiency of model training and the quality of the generated sentences. Experimental results on the Flickr8k and Flickr30k data sets show that, under the same training environment, the GRU model saves one third of the training time compared with the LSTM model, and the proposed model improves significantly on the BLEU and METEOR evaluation standards.
On-line sEMG Hand Gesture Recognition Based on Incremental Adaptive Learning
LI Yu, CHAI Guo-zhong, LU Chun-fu, TANG Zhi-chuan
Computer Science. 2019, 46 (4): 274-279.  doi:10.11896/j.issn.1002-137X.2019.04.043
Due to the individual differences of surface electromyography (sEMG), a person usually needs long pre-training to obtain an accurate personal classification model when using sEMG as the control source of external equipment. To solve this problem, on the basis of the original KKT-SVM incremental learning method, a new SVM incremental learning algorithm (D-ISVM) based on DBSCAN density clustering was proposed and applied to on-line sEMG hand gesture recognition. Firstly, considering that both new samples and initial non-SV samples can affect the new SV set, the closeness of the sample distribution is analyzed and clustered by DBSCAN, and the new samples and initial non-SV samples close to the initial SV set are selected. Then, these samples are further filtered based on the core points and the distance between samples and the hyperplane. Finally, all selected samples are trained together with the initial SV set to obtain the new SV set. The experimental results show that, compared with general algorithms, the proposed D-ISVM incremental learning algorithm achieves higher classification accuracy and further improves the learning speed of the classification model, effectively alleviating the individual difference problem in on-line sEMG hand gesture recognition.
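One ingredient of the selection step above, keeping only samples close to the separating hyperplane (the ones most likely to change the SV set), can be sketched for a linear SVM as follows; the `margin` threshold is an illustrative assumption and the abstract's full method additionally uses DBSCAN clustering.

```python
import numpy as np

def select_near_hyperplane(X, w, b, margin=1.5):
    # Geometric distance of each sample to the hyperplane w.x + b = 0;
    # samples beyond `margin` are unlikely to become support vectors.
    dist = np.abs(X @ w + b) / np.linalg.norm(w)
    return X[dist < margin]
```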
Multispectral Image Matching Algorithm Based on Improved SIFT
SUN Xue-qiang, HUANG Min, ZHANG Gui-feng, ZHAO Bao-wei, CONG Lin-xiao
Computer Science. 2019, 46 (4): 280-284.  doi:10.11896/j.issn.1002-137X.2019.04.044
To balance speed and accuracy in multispectral image matching, this paper improved the SIFT algorithm in several respects. To address the slow matching speed and low matching rate caused by the high dimensionality of the feature descriptors, the descriptor structure was improved to reduce its dimension. In the feature matching stage, each feature point is first labeled as a maximum or minimum point according to the trace of its Hessian matrix, which narrows the subsequent search range for feature vector matching; then mismatched point pairs are eliminated based on the position information of the feature points. The experimental results show that the improved algorithm preserves the invariance advantages of the traditional algorithm, such as rotation and brightness invariance, while effectively reducing the running time and improving the matching rate to a certain extent.
Color Morphology Image Processing Method Using Similarity in HSI Space
HE Xiao-jun, XU Ai-gong, LI Yu
Computer Science. 2019, 46 (4): 285-292.  doi:10.11896/j.issn.1002-137X.2019.04.045
Morphological methods use structuring elements to measure and extract target shapes in an image for analysis and processing, and have been widely used in binary and grayscale image processing. To extend grayscale morphology to color images, this paper defined a color similarity in the HSI color space and proposed a corresponding color morphological image processing method. Firstly, the color similarity measure is defined by combining hue, saturation and intensity in the HSI space to characterize the degree of similarity between color vectors. Then, a new type of color morphology is constructed from this similarity, including the basic operations of dilation, erosion, opening and closing. Finally, the basic morphological operations combined with color similarity are applied to extract color image edges. Experiments analyze the color morphological processing performance in depth and find that the color morphology operations perform relatively better when the parameter k≤0.05. The experimental results show that the proposed method can smooth the edges of color targets and extract edges effectively, demonstrating its practicability and effectiveness for image processing.
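A similarity measure of the kind described above might be sketched as follows. This is an assumed combination (hue compared on the circle, saturation and intensity by absolute difference, equally averaged); the paper's exact formula and its parameter k may differ.

```python
import math

def hsi_similarity(c1, c2):
    # c1, c2: (hue in radians, saturation in [0,1], intensity in [0,1]).
    h1, s1, i1 = c1
    h2, s2, i2 = c2
    dh = min(abs(h1 - h2), 2 * math.pi - abs(h1 - h2)) / math.pi  # circular hue distance
    ds = abs(s1 - s2)
    di = abs(i1 - i2)
    return 1.0 - (dh + ds + di) / 3.0   # 1.0 means identical colors
```

Dilation and erosion would then pick, within each structuring element, the pixel most (or least) similar to a reference color.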
Image Fusion Using Quaternion Wavelet Transform and Copula Model
LI Kai, LUO Xiao-qing, ZHANG Zhan-cheng, WANG Jun
Computer Science. 2019, 46 (4): 293-299.  doi:10.11896/j.issn.1002-137X.2019.04.046
Quaternion wavelet transform (QWT) is a new multi-scale transform tool that provides both amplitude and phase information. In this paper, a copula model is used to capture the correlation of QWT coefficients, and a novel image fusion method based on QWT and the copula model was proposed. First, QWT is performed on the source images. Second, the dependency among the magnitude and phases of the high-frequency subbands and the corresponding phases of the low-frequency subband is modeled by copulas. Next, for the high-frequency subbands, a choose-max fusion rule is proposed based on a comprehensive feature constructed from the regional energy of the copula joint probability density, the gradient of the phases, the QWT coefficient energy and the local contrast; for the low-frequency subbands, a choose-max fusion rule is proposed based on a comprehensive feature constructed from the gradient and local variance of the low-frequency phases. Finally, the fused image is obtained by the inverse QWT. Experimental results demonstrate that the proposed method outperforms traditional fusion methods.
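The choose-max rules above share one generic form: at each coefficient position, keep the coefficient of the source whose composite activity feature is larger. A minimal sketch (the feature construction itself is the paper's contribution and is not reproduced here):

```python
import numpy as np

def choose_max_fusion(coef_a, coef_b, feat_a, feat_b):
    # Keep coef_a wherever source A's activity feature dominates, else coef_b.
    mask = feat_a >= feat_b
    return np.where(mask, coef_a, coef_b)
```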
Interdiscipline & Frontier
Online Advertising Model via Blockchain
LU Ge-hao, LI Xiang-yu
Computer Science. 2019, 46 (4): 300-308.  doi:10.11896/j.issn.1002-137X.2019.04.047
Although the digital media advertising industry represented by internet advertising has developed rapidly in recent years, urgent problems keep emerging: the advertising process is opaque, feedback data can be falsified, multi-level agency causes low efficiency, and advertisers' resources and funds are wasted without reliable response statistics. Blockchain technology provides a possible solution through a decentralized, trusted P2P distributed network. Using blockchain and smart contract technologies, this paper attempted to construct an online advertising model via blockchain with characteristics such as a distributed P2P network, decentralization, non-repudiation, tamper resistance, trade transparency, traceability and a trusted consensus mechanism, so as to explore a new advertising commercial ecosystem and provide a safe, unconstrained, efficient, automated and intelligent electronic marketplace for advertisers and other advertising practitioners. Combining advertising practice with blockchain technologies, this paper also depicted a complete running process of the online advertising model, and finally gave prospects for smart contracts and business models as well as further combination with other mature technologies.
Reconvergence Phenomena Analysis Method in Combinational Circuits Based on SAT Solver
ZHANG Lu-jie, LIU Chang, ZHANG Long, GUO Yang
Computer Science. 2019, 46 (4): 309-314.  doi:10.11896/j.issn.1002-137X.2019.04.048
To study reconvergence phenomena in combinational circuits, this paper proposed an analysis method based on a SAT solver. All paths between transient-pulse generation nodes and output nodes are found by a depth-first search algorithm. Sensitization constraints are imposed on the elements of the resulting check-list, and their satisfiability is judged by MiniSat. Finally, the method determines whether there exist input vectors under which a transient pulse can reconverge at an output node through different paths. The proposed method can effectively analyze reconvergence phenomena in large-scale combinational circuits. With EPFL and ISCAS'85 as test sets, the experimental results show that transient pulses generated by about half of the nodes of the ISCAS'85 test set can eventually reconverge, a ratio significantly higher than that of the EPFL test set. Therefore, the occurrence rate of reconvergence differs significantly among different types of functional circuits.
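The first step above, enumerating all paths from a pulse-generation node to an output by depth-first search, can be sketched as follows (a pulse can only reconverge at an output if at least two distinct paths reach it); the graph encoding is an illustrative assumption, and the SAT-based sensitization check is not shown.

```python
def all_paths(graph, src, dst):
    # Depth-first enumeration of every src -> dst path in a combinational
    # circuit viewed as a DAG {node: [successor nodes]}.
    paths, stack = [], [(src, [src])]
    while stack:
        node, path = stack.pop()
        if node == dst:
            paths.append(path)
            continue
        for nxt in graph.get(node, []):
            stack.append((nxt, path + [nxt]))
    return paths
```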
Travel Time Analysis of SP-AS/RS with New Configuration for I/O Point
TONG Ze-ping, WU Ying-qiang, REN Liang, LI Wei
Computer Science. 2019, 46 (4): 315-320.  doi:10.11896/j.issn.1002-137X.2019.04.049
To establish a dual-command travel time model for the split-platform automated storage and retrieval system (SP-AS/RS) and seek an optimal system design, this paper introduced a new configuration for the system input/output (I/O) point, established a mathematical model of the dual-command cycle travel time, and verified the effectiveness of the optimization. In terms of expected travel time, when the shape factor b=1, the proposed configuration improves efficiency by 27.92%. The innovation of this paper is that, after introducing the new I/O point location structure, the travel time of the automated storage system is analyzed under a more practical storage mode and the expected travel time model is optimized. In the dual-command storage and retrieval mode, the SP-AS/RS with the new I/O point configuration is more efficient than the traditional SP-AS/RS.
Performance Optimization of FT Program Based on SW26010 Processor
TAO Xiao-han, PANG Jian-min, GAO Wei, WANG Qi, YAO Jin-yang
Computer Science. 2019, 46 (4): 321-328.  doi:10.11896/j.issn.1002-137X.2019.04.050
Sunway TaihuLight is a supercomputer independently developed by China. Its SW26010 heterogeneous many-core processor, also developed in China, contains four core groups, each with one management processing element (MPE) and 64 computing processing elements (CPEs). The NPB-FT program solves three-dimensional partial differential equations with the Fast Fourier Transform and is widely used to evaluate the computing and communication capabilities of clusters. It is therefore valuable to use the FT program to analyze the multi-level parallel resources provided by Sunway TaihuLight and the performance of its architecture. First, the program is rewritten into a master-slave version with the athread accelerated thread library so that the program kernel executes on the CPEs. Second, the data transposition in the FT program is eliminated by using register communication among CPEs and the data transmission channel between the MPE and CPEs. Furthermore, computation-communication overlapping is realized to keep computing resources from idling while cores communicate. Finally, vectorization and instruction pipelining are used to enhance the program's data-level and instruction-level parallelism. The experimental results show that the 3D-32 program executing on a single core achieves a speedup of 66, the 3D-512 program on 64 cores achieves a speedup of 20, and the 3D-2048 program on 256 cores achieves a speedup of 46.
Sustainable Cooling Method of CPU Hot Spot Based on Target Matrix
YAN Bing-qing, YUAN Jing-ling, CHEN Min-cheng, LIU Dong-ling, JIANG Tao
Computer Science. 2019, 46 (4): 329-333.  doi:10.11896/j.issn.1002-137X.2019.04.051
To solve CPU overheating problems, many scholars have proposed CPU cooling models to achieve energy saving. Based on a previous heat-cycling model, this paper quantitatively analyzed the mathematical conditions under which a sustainable CPU hot spot cooling model holds, established a target heat matrix model based on the temperature change during the CPU cooling process, analyzed the temperature change of the hot spot, and verified the correctness of the mathematical relation model. By comparison with previous heat-recovery models, a cooling model considering the self-heating factor of the system was proposed, which can be carried out by using the proposed target heat matrix. The experimental results show that the cooling efficiency of the proposed cooling model is 0.937%.