Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 43 Issue 1, January 2016
Research on ONS Security
WANG Hu-qing and SUN Zhi-xin
Computer Science. 2016, 43 (1): 1-7.  doi:10.11896/j.issn.1002-137X.2016.01.001
The IoT (Internet of Things) security problem affects and restricts the application prospects of IoT, and has become one of the hotspots in the Internet of Things field. ONS (Object Naming Service) is responsible for the mapping from EPC code information to URIs (Uniform Resource Identifiers). The security mechanism of ONS has been extensively studied by more and more scholars in recent years. The purpose of this paper is to survey the ONS security problem. Firstly, the function and resolution process of ONS were introduced. Secondly, the main security risks of ONS were analyzed. Thirdly, the latest research achievements were summarized with respect to identity authentication, digital signature and secure transmission. Finally, the current problems of ONS security and research trends in this area were discussed.
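As a rough illustration of the resolution idea the abstract describes, the sketch below rewrites an EPC identity URN as a DNS-style domain name that an ONS resolver could query. The field ordering, the dropping of the serial number, and the `onsepc.com` root are assumptions following the commonly described EPCglobal convention, not taken from the paper.

```python
# Illustrative sketch of the ONS naming idea: an EPC code is rewritten as a
# DNS-style domain name so a standard resolver can return service URIs.
# Field ordering and the ".onsepc.com" suffix are assumptions here, used
# only for illustration; consult the GS1 ONS specification for the real rules.

def epc_to_ons_domain(epc_urn: str, root: str = "onsepc.com") -> str:
    """Map e.g. 'urn:epc:id:sgtin:0614141.000024.400' to an ONS query domain."""
    parts = epc_urn.split(":")
    if parts[:3] != ["urn", "epc", "id"]:
        raise ValueError("not an EPC identity URN")
    scheme = parts[3]                      # e.g. 'sgtin'
    fields = parts[4].split(".")           # company prefix, item ref, serial
    fields = fields[:-1]                   # serial number dropped before lookup
    return ".".join(reversed(fields)) + f".{scheme}.id.{root}"

print(epc_to_ons_domain("urn:epc:id:sgtin:0614141.000024.400"))
```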
Opportunities, Challenges and Methods of Electronic Data Auditing in Big Data Environments
CHEN Wei and SMIELIAUSKAS Wally
Computer Science. 2016, 43 (1): 8-13.  doi:10.11896/j.issn.1002-137X.2016.01.002
The research and application of electronic data auditing is a hot topic in the audit area. The arrival of the big data era creates opportunities and challenges for electronic data auditing practice and research, yet there are few studies on this issue. In this paper, the importance of researching electronic data auditing in big data environments was analyzed firstly. Then, the concept and principle of electronic data auditing were analyzed, and the opportunities and challenges of electronic data auditing in big data environments were studied. Considering the characteristics of big data and the available big data analysis technologies and tools, methods of electronic data auditing in big data environments were discussed. Finally, advice for implementing electronic data auditing in big data environments was given. The research results in this paper can provide a theoretical basis for implementing electronic data auditing in big data environments.
Parallelization of Hydrostatic Numerical Forecasting Model of Marginal Sea
PANG Ren-bo, ZHANG Yun-quan, TAN Guang-ming, XU Jian-liang, JIA Hai-peng and XIE Qing-chun
Computer Science. 2016, 43 (1): 14-17.  doi:10.11896/j.issn.1002-137X.2016.01.003
The hydrostatic numerical forecasting model of marginal sea is a numerical forecasting model developed independently in China according to the characteristics of the marginal sea. There are many physical equations in the model, and some of them, such as SOR, are not naturally parallelizable, so the program takes a long time to run. To solve these problems, a parallel SOR method based on 3D computing grids and the characteristics of ocean forecasting models is used to parallelize the solver while preserving the data dependencies in the 3D computing grids. Further methods are used to improve the efficiency of the parallel program, such as using MPI non-blocking communication, splitting computation and communication into more steps, and overlapping communication with computation. The experiments show that the parallel hydrostatic numerical forecasting model achieves a 60.71x speedup over the serial program, and the root mean square error of three-day forecasting results (25920 time steps) is less than 0.001, which meets the timeliness and accuracy requirements of numerical ocean forecasting.
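To make the SOR step concrete, here is a minimal serial SOR (successive over-relaxation) solver for a small 1D Poisson-type problem; this is the kind of sweep the paper parallelizes over 3D grids with MPI. The grid size and relaxation factor are illustrative choices, not the model's.

```python
# Minimal serial SOR sketch: solve u'' = -1 on (0,1) with u(0)=u(1)=0.
# Each interior point is updated Gauss-Seidel style, then over-relaxed.

def sor_solve(n=32, omega=1.5, iters=2000):
    h = 1.0 / (n + 1)
    u = [0.0] * (n + 2)                      # includes both boundary points
    for _ in range(iters):
        for i in range(1, n + 1):
            gs = 0.5 * (u[i - 1] + u[i + 1] + h * h)   # Gauss-Seidel update
            u[i] += omega * (gs - u[i])                 # over-relaxed step
    return u

u = sor_solve()
# The analytic solution is u(x) = x(1-x)/2, so the midpoint is near 0.125.
print(abs(u[len(u) // 2] - 0.125) < 1e-3)
```

Because each update reads freshly written neighbors, a naive SOR sweep is sequential; red-black ordering or domain decomposition (as in the paper's 3D-grid approach) is what restores parallelism.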
Multi-sensor Fusion Based Indoor Real-time and Highly Accurate Trajectory Generation
LIU Ding-jun, JIANG Xin-long, LIU Jun-fa and CHEN Yi-qiang
Computer Science. 2016, 43 (1): 18-21.  doi:10.11896/j.issn.1002-137X.2016.01.004
This paper proposed a multi-sensor fusion based indoor real-time and highly accurate trajectory generation method. Combining Wi-Fi location with sensor-based location, the proposed method can generate a real-time trajectory in indoor areas. The method obtains the initial position by Wi-Fi location, then derives the real-time velocity and heading orientation from sensors to calculate the next position by the dead reckoning method. Finally, a Kalman filter is used to obtain a smoother trajectory. The experimental results show that the proposed method generates a smoother and denser trajectory than Wi-Fi positioning, and achieves higher tracking accuracy than other similar methods.
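The dead-reckoning step above can be sketched in a few lines: starting from a known fix, each tick advances the position by speed and heading in a flat 2D frame. The Wi-Fi initial fix and the Kalman smoothing of the paper are out of scope here; coordinates and step values are illustrative.

```python
# Minimal dead-reckoning sketch: advance a 2D position by speed and heading.
import math

def dead_reckon(start, steps):
    """start: (x, y); steps: iterable of (speed m/s, heading rad, dt s)."""
    x, y = start
    track = [(x, y)]
    for speed, heading, dt in steps:
        x += speed * dt * math.cos(heading)   # east component
        y += speed * dt * math.sin(heading)   # north component
        track.append((x, y))
    return track

# Walk at 1 m/s due east for 2 s, then due north for 3 s.
track = dead_reckon((0.0, 0.0), [(1.0, 0.0, 2.0), (1.0, math.pi / 2, 3.0)])
print(track[-1])  # final position near (2.0, 3.0)
```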
New Type of Soft Weak BCI-algebras
LIAO Cui-cui, LIAO Zu-hua, ZHANG Long-xiang, TONG Juan, LUO Qing and LIU Wei-long
Computer Science. 2016, 43 (1): 22-24.  doi:10.11896/j.issn.1002-137X.2016.01.005
In this paper, by endowing a parameter set with the algebraic structure of soft weak BCI-algebras, the concept of a new type of soft weak BCI-algebras was introduced. Then some of its basic properties were discussed by using the intersection and the AND operation of soft sets. Besides, by applying dual soft sets, the equivalent characterizations of the new type of soft weak BCI-algebras were given. Finally, the properties of the homomorphism image and inverse image of the new type of soft weak BCI-algebras were investigated.
Research on Improved PrefixSpan Algorithm and its Application in Web User Behavior Patterns Mining
JI Hao-bo and WANG Jun-hong
Computer Science. 2016, 43 (1): 25-29.  doi:10.11896/j.issn.1002-137X.2016.01.006
Sequential pattern mining discovers high-frequency patterns related to time or other orderings from sequence databases. Based on the PrefixSpan algorithm, this paper proposed an improved adaptive algorithm to address the expensive construction of projected databases and the low scanning efficiency, through methods such as using sequence extension instead of item extension and abandoning projected databases whose number of sequences is less than min_support. The new method was then applied to Web user behavior pattern mining to analyze regularities in log records. Experimental results show that, compared to the PrefixSpan algorithm, the improved algorithm is more efficient.
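For readers unfamiliar with PrefixSpan, the prefix-projection idea can be sketched compactly: grow a pattern one item at a time and recurse only into the projected postfixes, pruning extensions below min_support (the same pruning the improved algorithm applies to whole projected databases). This is a single-item-per-element simplification for illustration, not the paper's improved algorithm.

```python
# Compact prefix-projection sketch in the spirit of PrefixSpan.

def prefixspan(db, min_support):
    results = []

    def mine(prefix, projected):
        counts = {}
        for seq in projected:
            for item in set(seq):                 # count each item once per sequence
                counts[item] = counts.get(item, 0) + 1
        for item, cnt in sorted(counts.items()):
            if cnt < min_support:                 # prune infrequent extensions early
                continue
            pattern = prefix + [item]
            results.append((pattern, cnt))
            # Project: keep the postfix after the first occurrence of item.
            nxt = [s[s.index(item) + 1:] for s in projected if item in s]
            mine(pattern, nxt)

    mine([], db)
    return results

db = [["a", "b", "c"], ["a", "c"], ["b", "c"], ["a", "b", "c"]]
for pat, cnt in prefixspan(db, min_support=3):
    print(pat, cnt)
```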
Three-way Decision Model Based on Probabilistic Graph
XUE Zhan-ao, WANG Peng-han, LIU Jie, ZHU Tai-long and XUE Tian-yu
Computer Science. 2016, 43 (1): 30-34.  doi:10.11896/j.issn.1002-137X.2016.01.007
A probabilistic graphical model is a general term for the class of graph patterns that express a probabilistic model, and solving the loss cost with such models has become a research hotspot. In this paper, a three-way decision model was put forward based on probabilistic graphs and three-way decision theory. By analyzing the information data set, the Bayesian network of the model is constructed. Then, according to the interdependencies of the nodes in the model, the conditional probability distribution functions are calculated. Combining the prior conditions of the data set with the three-way decision cost (loss) functions, the decision rules are established, and a method for minimizing cost in probabilistic reasoning and decision-making is given. Finally, a teaching evaluation example was presented to verify the effectiveness of the model.
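The three-way decision rules mentioned above follow a standard pattern: two thresholds alpha and beta are derived from six loss values, and an object is accepted, rejected, or deferred by comparing its posterior probability against them. The sketch below uses the standard threshold formulas from three-way decision theory; the loss values are made-up numbers for illustration.

```python
# Standard three-way decision thresholds from the six loss values.
# l_xp / l_xn: loss of taking action x (p=accept, b=defer, n=reject)
# when the object is in the concept (p) or not (n).

def thresholds(l_pp, l_bp, l_np, l_nn, l_bn, l_pn):
    alpha = (l_pn - l_bn) / ((l_pn - l_bn) + (l_bp - l_pp))
    beta = (l_bn - l_nn) / ((l_bn - l_nn) + (l_np - l_bp))
    return alpha, beta

def decide(prob, alpha, beta):
    if prob >= alpha:
        return "accept"      # positive region
    if prob <= beta:
        return "reject"      # negative region
    return "defer"           # boundary region

alpha, beta = thresholds(l_pp=0, l_bp=2, l_np=6, l_nn=0, l_bn=1, l_pn=5)
print(round(alpha, 3), round(beta, 3))                      # 0.667 0.2
print(decide(0.8, alpha, beta), decide(0.4, alpha, beta))   # accept defer
```

In the paper's setting, the probability fed to `decide` would come from the Bayesian network's posterior for each object.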
Kernel Canonical Correlation Analysis Feature Fusion Method and Application
XU Jie, LIANG Jiu-zhen, WU Qin and LI Min
Computer Science. 2016, 43 (1): 35-39.  doi:10.11896/j.issn.1002-137X.2016.01.008
A feature fusion algorithm based on kernel canonical correlation analysis (KCCA) was constructed in this paper. First, the image data are mapped through a kernel function, two groups of feature vectors with the same pattern are extracted, and the correlation criterion function between the two groups of feature vectors is established. Secondly, two groups of canonical projective vectors are extracted according to this function. Thirdly, feature fusion for classification is done using the proposed strategy. The advantages of the proposed algorithm lie in the following aspects. Firstly, it suits information fusion. Secondly, it eliminates the redundant information within the features, and it simplifies the computation without decomposing the mapped matrix while gaining more discriminative features. The results of experiments on the AR, PIE, ORL and Yale face databases and the UCI handwritten digit database show that this algorithm is efficient and robust.
New Heuristic Attribute Reduction Algorithm Based on Sample Selection
YANG Xi-bei, YAN Xu, XU Su-ping and YU Hua-long
Computer Science. 2016, 43 (1): 40-43.  doi:10.11896/j.issn.1002-137X.2016.01.009
Attribute reduction is one of the important topics in rough set theory. The heuristic attribute reduction algorithm with a greedy strategy is one of the widely used approaches to compute reductions. Traditional heuristic algorithms use all of the samples in the information system. However, for a given data set, different samples contribute differently to finding a reduction, so the time efficiency of the heuristic attribute reduction algorithm suffers from redundant samples. To solve this problem, a heuristic attribute reduction algorithm based on sample selection was proposed. The algorithm is composed of three stages. Firstly, the most informative samples are selected. Secondly, a new information system is formed from these selected samples. Finally, a reduction is computed by the heuristic algorithm. The experimental results show that the proposed algorithm can efficiently reduce the computational time.
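A common form of the greedy heuristic in the final stage can be sketched as follows: repeatedly add the attribute that most improves the positive-region dependency degree until it matches the full attribute set. This is a generic illustration (sample selection would simply shrink `table` beforehand), not the paper's specific heuristic; the toy table is made up.

```python
# Greedy heuristic attribute reduction by positive-region dependency degree.

def partition(table, attrs):
    """Group row indices by their values on the given attribute indices."""
    blocks = {}
    for i, row in enumerate(table):
        blocks.setdefault(tuple(row[a] for a in attrs), []).append(i)
    return list(blocks.values())

def dependency(table, attrs, decision):
    """Fraction of objects whose attrs-block is pure in the decision."""
    pos = 0
    for block in partition(table, attrs):
        if len({table[i][decision] for i in block}) == 1:
            pos += len(block)
    return pos / len(table)

def greedy_reduct(table, cond_attrs, decision):
    target = dependency(table, cond_attrs, decision)
    red = []
    while dependency(table, red, decision) < target:
        best = max((a for a in cond_attrs if a not in red),
                   key=lambda a: dependency(table, red + [a], decision))
        red.append(best)
    return red

table = [(0, 0, 0, "n"), (0, 1, 0, "y"), (1, 0, 1, "n"), (1, 1, 1, "y")]
print(greedy_reduct(table, [0, 1, 2], decision=3))  # attribute 1 alone suffices
```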
Study on Model of Covering-based Rough Intuitionistic Fuzzy Sets
XUE Zhan-ao, SI Xiao-meng, ZHU Tai-long and WANG Nan
Computer Science. 2016, 43 (1): 44-48.  doi:10.11896/j.issn.1002-137X.2016.01.010
The combination of rough sets and intuitionistic fuzzy sets is a hot research topic. In this paper, the concepts of fuzzy covering-based rough membership and non-membership were defined based on rough set theory, intuitionistic fuzzy set theory and covering theory. The relationships between the memberships and non-memberships of the elements themselves and the minimal descriptions of elements were taken into full account. Then two new models were established: the covering-based rough intuitionistic fuzzy sets model and the covering-based rough interval-valued intuitionistic fuzzy sets model, and their properties were proved. At the same time, a new similarity measure formula for intuitionistic fuzzy sets was proposed, and an example was used to illustrate its application.
Research on Vertical Segmentation Knowledge Reduction Algorithm Based on Tolerance Rough Set Theory
JIAO Na
Computer Science. 2016, 43 (1): 49-52.  doi:10.11896/j.issn.1002-137X.2016.01.011
Rough set theory is an efficient mathematical tool for reducing redundancy. However, practical data sets are often continuous with complex structure, and many existing knowledge reduction methods based on rough set theory have low efficiency on them. This paper combined the tolerance relation with rough set theory and put forward a new knowledge reduction method based on vertical segmentation. The large information system is divided into one reduced form and other small-scale information forms, which are then joined together to solve the original information system. An example demonstrates that the proposed algorithm is effective.
Full Affine Curve in Curvature Scale Space Image Registration
CHAI Xian-tao, LIANG Jiu-zhen and LANG Long-ya
Computer Science. 2016, 43 (1): 53-56.  doi:10.11896/j.issn.1002-137X.2016.01.012
To solve the problem that viewpoint changes of curve images make matching and recognition difficult, we used a full affine transformation model library in the gallery to find the optimal sample correlated with the target by curvature information, so as to achieve matching and identification of curve images. In this paper, based on a curvature full affine curve image matching method, a sample library of sample images was established according to the full affine model. In the sample library, each sample is sampled by equal-interval offset sampling, and a curvature subset is established for the sub-samples. A fast nearest neighbor search is carried out between the target image and the sample subset for similarity matching. Finally the optimal matching sample is found from the matching results. Experimental results show that curvature-based full affine curve image registration has a high success rate, and has advantages compared with traditional algorithms.
Matroidal Structure Induced by Neighborhood
LI Hui, ZHU Feng and LIN Zi-qiong
Computer Science. 2016, 43 (1): 57-60.  doi:10.11896/j.issn.1002-137X.2016.01.013
This paper constructed a matroidal structure from the neighborhood of a covering. At first, a family of sets was constructed through the neighborhood and complementary neighborhood of a covering, and this family was proved to satisfy the independent set axioms; thus a matroidal structure of a covering was established. Then, we investigated some characteristics of this kind of matroid, such as dependent sets, circuits, the rank function and closure. At last, we gave some equivalent characterizations of the dual matroid of this matroid, such as its independent sets and circuits.
Measuring Consistency of Two Datasets Using Shannon Entropy
CHE Xiao-ya and MI Ju-sheng
Computer Science. 2016, 43 (1): 61-63.  doi:10.11896/j.issn.1002-137X.2016.01.014
The basic idea of rough set theory is based on an indiscernibility relation: through a pair of approximation operators, a given concept can be approximately represented. It is used here to study the classification consistency of one data set with respect to another. This paper presented a new approach to measure the consistency degree of two datasets, defining classification consistency by Shannon entropy. Taking into account the influence of neighborhood relations of different data, a general consistency measure was defined by introducing expert knowledge into a fuzzy inference system, and a consistent generalized metric was constructed. Moreover, this method can prevent the “black box” phenomenon encountered in many modeling techniques and produce robust and interpretable results.
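The entropy-based consistency idea can be illustrated with a generic measure: the conditional Shannon entropy H(B|A) of one labeling given another is zero exactly when A's classification determines B's. This is a textbook formulation for illustration, not necessarily the paper's exact measure; the labelings are made up.

```python
# Shannon entropy of a labeling, and conditional entropy H(B|A) as a
# simple consistency score between two classifications of the same objects.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def conditional_entropy(a, b):
    """H(B|A) for paired labelings a, b; 0 means A determines B."""
    n = len(a)
    h = 0.0
    for val, cnt in Counter(a).items():
        sub = [b[i] for i in range(n) if a[i] == val]
        h += cnt / n * entropy(sub)
    return h

a = ["x", "x", "y", "y"]
b = ["p", "p", "q", "q"]   # fully determined by a -> perfectly consistent
c = ["p", "q", "p", "q"]   # independent of a -> maximally inconsistent
print(conditional_entropy(a, b))  # 0.0
print(conditional_entropy(a, c))  # 1.0
```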
Research of SVM Multiclass Model Based on Granular Computing & Huffman Tree
CHEN Li-fang, CHEN Liang and LIU Bao-xiang
Computer Science. 2016, 43 (1): 64-68.  doi:10.11896/j.issn.1002-137X.2016.01.015
In view of multi-classification problems, we built an SVM multiclass model based on granular computing and a Huffman tree. After applying granular computing to the classification problem, we calculate the granularity and build the Huffman tree based on the granularity weight set, which addresses the uneven distribution of samples among classes and the low classification efficiency. We also designed SVM classifiers for coarse-grained nodes, and selected the low-temperature storage tank material multi-classification problem as the research background to simulate our model. Meanwhile, we compared our model with other methods. The results show that the new model improves classification efficiency, providing a new idea and an effective method for multi-classification problems.
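The Huffman-tree step can be sketched independently of the SVM part: classes with granularity weights are merged bottom-up, so heavy classes end up near the root and need fewer binary decisions. The weights below are the classic textbook example, not the paper's data; how granularity weights are computed is left out.

```python
# Build a Huffman tree over weighted class labels and report each label's depth.
import heapq
import itertools

def huffman_depths(weights):
    """weights: {label: weight}; returns {label: depth in the Huffman tree}."""
    tick = itertools.count()                  # tie-breaker so dicts never compare
    heap = [(w, next(tick), {label: 0}) for label, w in weights.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, d1 = heapq.heappop(heap)
        w2, _, d2 = heapq.heappop(heap)
        merged = {k: v + 1 for k, v in {**d1, **d2}.items()}  # one level deeper
        heapq.heappush(heap, (w1 + w2, next(tick), merged))
    return heap[0][2]

depths = huffman_depths({"A": 45, "B": 13, "C": 12, "D": 16, "E": 9, "F": 5})
print(depths)  # the heaviest class "A" sits at depth 1, the lightest deepest
```

In the multiclass model, a sample of class "A" would then be resolved by a single SVM decision, while rare classes take a longer path.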
Set Pair Attributes Soft Computing Method and its Application
LIU Bao-xiang, BAI Bin, LI Li-hong and LI Yan
Computer Science. 2016, 43 (1): 69-72.  doi:10.11896/j.issn.1002-137X.2016.01.016
The key to the set pair analysis method is computing the connection degree. The structure of the set pair correlation function provides a new soft computing method for describing the relationship between collections and determining the connection degree expression formula. Firstly, based on rough sets, the attribute correlation function was defined and its basic properties were discussed. Secondly, the set pair correlation function was defined, and it was proved that the set pair correlation degrades into the attribute correlation function when either collection of the set pair is extended to the whole universe. The basic properties of the set pair correlation function were discussed, and the synthetic operations and operation laws were given. Finally, the feasibility and practicability of the set pair attribute soft computing method were illustrated through a case study.
Updating Approximations for a Type of Covering-based Rough Sets
LI Chang-qing and ZHANG Yan-lan
Computer Science. 2016, 43 (1): 73-76.  doi:10.11896/j.issn.1002-137X.2016.01.017
Rough set theory is a useful tool for data mining, and covering-based rough set theory is one of its most important parts. An updating algorithm was presented for a pair of covering approximation operators when objects are added. Two illustrative examples were employed to evaluate the effectiveness of the proposed method.
Cost and Risk Control of Conversion of Three-way Decisions’ Boundary Domain
LI Li-hong, LI Yan and LIU Bao-xiang
Computer Science. 2016, 43 (1): 77-80.  doi:10.11896/j.issn.1002-137X.2016.01.018
The three-way decisions theory is formulated based on the notions of acceptance, rejection and non-commitment. It is an extension of the commonly used binary-decision model and has been widely used in many fields and disciplines. Considering that the costs of non-commitment are often the same as the costs of misjudgment in the positive or negative region, firstly, the risk of the non-commitment decision was analyzed. Secondly, the process of converting non-commitment into the two commitment decisions was studied, and based on the minimum conversion price, a non-commitment decision risk control model was given. Finally, a case study shows the effectiveness of the model.
Solving Continuous Optimization Problem by Hybrid Wading across Stream Algorithm-Estimation Distribution Algorithm
GAO Shang and CAO Cun-gen
Computer Science. 2016, 43 (1): 81-84.  doi:10.11896/j.issn.1002-137X.2016.01.019
Drawing on the advantages of the wading-across-stream idea and the estimation of distribution algorithm, a hybrid wading across stream algorithm-estimation distribution algorithm (Hybrid WSA-EDA) was put forward. The Hybrid WSA-EDA takes a solution as a starting point, searches several random solutions near it, and finds the best of these solutions. The center of a selection of good individuals is crossed with the best solution, the result is taken as the next starting point, several random solutions near this new point are searched, and so on. For solving continuous optimization problems, the improved algorithm shrinks the search space gradually. The experimental results on some classic benchmark functions show that the Hybrid WSA-EDA markedly improves convergence velocity and precision.
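The loop described above can be sketched as follows: sample random points near the current start point, keep the best, mix it with the centroid of the better half (an EDA-style step), and shrink the search radius each round. The objective function, radius schedule, and population size are illustrative choices, and the crossing step is a plain average; this is a sketch of the idea, not the paper's algorithm.

```python
# Hedged sketch of a WSA-EDA-style loop on the sphere function.
import random

def hybrid_wsa_eda(f, dim=5, rounds=60, pop=30, radius=2.0, shrink=0.9):
    random.seed(0)
    start = [random.uniform(-5, 5) for _ in range(dim)]
    for _ in range(rounds):
        cloud = [[x + random.uniform(-radius, radius) for x in start]
                 for _ in range(pop)]
        cloud.sort(key=f)                       # best candidates first
        best = cloud[0]
        elite = cloud[: pop // 2]
        center = [sum(p[i] for p in elite) / len(elite) for i in range(dim)]
        # Cross the elite centroid with the best point for the next start.
        start = [(b + c) / 2 for b, c in zip(best, center)]
        radius *= shrink                        # gradually narrow the search space
    return start, f(start)

sphere = lambda x: sum(v * v for v in x)
point, value = hybrid_wsa_eda(sphere)
print(value)   # should be far below the random-start value of ~40
```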
Classification Model of Visual Attention Based on Eye Movement Data
WANG Feng-jiao, TIAN Mei, HUANG Ya-ping and AI Li-hua
Computer Science. 2016, 43 (1): 85-88.  doi:10.11896/j.issn.1002-137X.2016.01.020
Visual attention is a very important part of the human visual system. Most of the existing visual attention models emphasize bottom-up attention and consider little top-down semantics, and there are few attention models specific to different categories of images. Eye tracking technology can capture the focus of attention objectively and accurately, but its application in visual attention models is still relatively rare. Therefore, we proposed a classification model of visual attention (CMVA) combining bottom-up with top-down factors, which trains classification models for different categories of images on the basis of eye movement data so as to predict visual saliency. The model was compared with eight existing models, and the results show its superior performance.
Unsupervised Learning from Categorical Data: A Space Transformation Approach
WANG Jian-xin and QIAN Yu-hua
Computer Science. 2016, 43 (1): 89-93.  doi:10.11896/j.issn.1002-137X.2016.01.021
The unsupervised learning of categorical data plays an increasingly important role in areas such as pattern recognition, machine learning, data mining and knowledge discovery. Nevertheless, for many existing clustering algorithms for categorical data (the classical k-modes algorithm and so on), there is still large room for improving their clustering performance in comparison with clustering algorithms for numeric data. This may arise from the fact that categorical data lack a clear space structure like that of numeric data. To effectively discover the space structure inherent in a set of categorical objects, we adopted a novel data representation scheme, a space transformation approach, which maps a set of categorical objects into a corresponding Euclidean space whose dimensions are constructed from each of the original features. Based on this general framework for categorical clustering, we employed the Carreira-Perpiñán K-modes algorithm to find more representative modes. The performance of the proposed method was tested on nine frequently used categorical data sets downloaded from the UCI repository. Comparisons with traditional clustering algorithms for categorical data illustrate the effectiveness of the new method on almost all data sets.
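One simple space transformation of this kind can be sketched as a one-hot-style embedding: each categorical feature becomes a block of Euclidean coordinates, one per observed category, after which numeric-space clustering methods apply. This is a generic illustration of the mapping idea, not necessarily the paper's exact construction; the toy data is made up.

```python
# Map categorical objects into a Euclidean space: one dimension per
# (feature, observed category) pair, with indicator coordinates.

def categorical_to_euclidean(objects):
    """objects: list of equal-length tuples of category labels."""
    n_features = len(objects[0])
    dims = [sorted({obj[j] for obj in objects}) for j in range(n_features)]
    vectors = []
    for obj in objects:
        vec = []
        for j in range(n_features):
            vec.extend(1.0 if obj[j] == cat else 0.0 for cat in dims[j])
        vectors.append(vec)
    return vectors

data = [("red", "small"), ("blue", "small"), ("red", "large")]
for v in categorical_to_euclidean(data):
    print(v)
```

In the embedded space, squared Euclidean distance between two objects counts their mismatched features (times two), which is why numeric clustering behaves sensibly on the transformed data.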
Research on Multi-features Hierarchical Answer Quality Evaluation Method
CUI Min-jun, DUAN Li-guo and LI Ai-ping
Computer Science. 2016, 43 (1): 94-97.  doi:10.11896/j.issn.1002-137X.2016.01.022
Social media question-answer pairs can provide answers for automatic question answering systems, but the quality of some answers is not high, so evaluating answer quality has research value. Existing evaluation methods do not consider question types and apply a uniform evaluation method to all of them. This paper presented a hierarchical classification model. Firstly we analyzed the question types, and then extracted four kinds of features from the answer: textual, non-textual, language translation, and number of links. According to the observation that the influence of these features varies with question type, we used a logistic regression algorithm to evaluate answer quality per question type based on these features, achieving good results. Finally the main features that influence answer quality for each kind of question were analyzed.
Double Quantitative Rough Fuzzy Set Model Based on Logical And Operator and Logical Disjunct Operator in Ordered Information System
HU Meng and XU Wei-hua
Computer Science. 2016, 43 (1): 98-102.  doi:10.11896/j.issn.1002-137X.2016.01.023
A novel rough set model, called the double quantitative rough fuzzy set model, was established based on the “logical and operator” and “logical disjunct operator” in ordered information systems. It overcomes fuzzy problems which cannot be solved by the traditional “logical and operator” and “logical disjunct operator” rough models. Some important properties were presented, and the significance of the model was shown by a supermarket case study.
Color and Space Distance Based Salient Region Detection Using Fixed Threshold Segmentation
QIAN Kun, LI Fang and WEN Yi-min
Computer Science. 2016, 43 (1): 103-106.  doi:10.11896/j.issn.1002-137X.2016.01.024
To address the problem that existing spatial-domain salient region detection algorithms depend on image segmentation algorithms for segmenting the salient region, we proposed an algorithm based on color and space distance that detects the salient region using fixed threshold segmentation. In this algorithm, spatial scales are created using dyadic Gaussian pyramids, and at each scale the color is quantized and the image is divided into blocks. Then, the color and space distances of image patches are calculated at each scale. Finally the salient region is segmented. Experimental results on the MSRA1000 public database show that the performance of the algorithm is better than others in precision, recall and F-measure. Therefore, the proposed algorithm is comparable to state-of-the-art salient region detection algorithms and can segment the salient region simply.
Algorithm for Mining Adjoint Pattern of Spatial-Temporal Trajectory Based on Grid Index
YANG Yang, JI Gen-lin and BAO Pei-ming
Computer Science. 2016, 43 (1): 107-110.  doi:10.11896/j.issn.1002-137X.2016.01.025
In the field of data mining, the adjoint pattern of spatial-temporal trajectories is an important research direction. The CMC (Coherent Moving Cluster) algorithm is a classical algorithm for mining adjoint patterns and can mine clusters of arbitrary shape, but at reduced efficiency. We presented an algorithm for mining adjoint patterns of spatial-temporal trajectories called MAP-G (Mining Adjoint Pattern of spatial-temporal trajectory based on the Grid index). The experimental results demonstrate that the proposed algorithm is more efficient than the CMC algorithm, and its accuracy is higher since it can filter out some wrong results.
Coalitional Game Spectrum Sensing Scheme
HE Huan-huan, WANG Xing-wei, LI Jie and HUANG Min
Computer Science. 2016, 43 (1): 111-115.  doi:10.11896/j.issn.1002-137X.2016.01.026
To help cognitive users conduct fair and efficient spectrum sensing in cognitive environments where primary users are heterogeneous and the sensing performance of cognitive users is degraded by multipath fading and shadowing, a coalitional game spectrum sensing scheme was proposed. The spectrum sensing problem was modeled as a coalitional game with non-transferable utilities. Firstly, cognitive users are divided into several sub-coalitions, and each sub-coalition head node is selected according to a head node selection principle. Secondly, sub-coalitions are merged and divided according to specific rules. Finally, sub-coalition head nodes use the “OR” rule for data fusion, obtain the final sensing results and broadcast them to all cognitive users in the sub-coalition. Simulation and performance evaluation show that the proposed scheme performs well.
Multi-party Authorization Model for Social Networks
HUO Ying-yu, MA Li, ZHONG Yong and QIN Xiao-lin
Computer Science. 2016, 43 (1): 116-121.  doi:10.11896/j.issn.1002-137X.2016.01.027
Most current access control mechanisms focus on data in users' private spaces and cannot control data beyond the personal space; for instance, remarks published by a user in a friend's space cannot be controlled by the user, and shared resources cannot be controlled jointly by their sharers. This paper presented MRuleSN, a multi-party authorization model for social networks. The model handles the ownership problem through single ownership and multi-party shareholders, and adopts extended w-Datalog rules to express authorization, offering greater flexibility, fine-grained access control and authorization expressiveness. The rule structure, syntax and semantics of the model's authorization language were analyzed and explained. Finally, the application and expressiveness of the model were illustrated and discussed with examples.
Simple and Smoothed Fair Round Robin Scheduling Algorithm
GUO Zi-rong, ZENG Hua-xin and DOU Jun
Computer Science. 2016, 43 (1): 122-127.  doi:10.11896/j.issn.1002-137X.2016.01.028
A simple and smoothed fair round-robin (SSFRR) scheduling algorithm for packet or cell switching was proposed based on the idea of the packet generalized processor sharing (GPS) service discipline. The algorithm uses the reserved rates of active flows as their weights to build a scheduling tree for allocating slots. The basic idea of SSFRR is to distribute all flow IDs over the leaf nodes of a complete binary tree according to the weight of each flow by scanning the weight matrix when a new flow arrives. At each time slot, SSFRR visits the next leaf node of the binary tree in sequence from left to right, takes the flow ID from the current leaf node and schedules a cell of this flow. The SSFRR scheduling algorithm has O(1) scheduling time complexity, and compared with other classical fair round-robin schedulers, it is simpler to implement. Analysis and simulation show that SSFRR has good fairness and can provide end-to-end delay bounds for flows that are token-bucket regulated at the source, thus providing flow-based QoS guarantees.
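The slot-allocation idea can be sketched as follows: give each flow a share of the leaf slots of a complete binary tree proportional to its weight, spread each flow's slots out, then serve the leaves left to right. Visiting leaf positions in bit-reversed order is one simple smoothing trick used here as an assumption; the paper's weight-matrix scan is not reproduced, and the rounding of quotas is a simplification.

```python
# Sketch of smoothed weighted slot allocation over binary-tree leaves.

def build_schedule(weights, depth=4):
    slots = 1 << depth                        # leaves of a complete binary tree
    total = sum(weights.values())
    leaves = [None] * slots
    # Visit leaf positions in bit-reversed order so each flow's slots spread
    # out evenly instead of clustering, approximating a smoothed schedule.
    order = sorted(range(slots), key=lambda i: int(f"{i:0{depth}b}"[::-1], 2))
    quota = {f: round(w * slots / total) for f, w in weights.items()}
    it = iter(order)
    for flow, q in sorted(quota.items(), key=lambda kv: (-kv[1], kv[0])):
        for _ in range(q):
            leaves[next(it)] = flow           # heaviest flows placed first
    return leaves

schedule = build_schedule({"A": 2, "B": 1, "C": 1}, depth=2)
print(schedule)  # ['A', 'B', 'A', 'C'] - flow A's two slots are interleaved
```

Serving `schedule` cyclically then gives each flow cells in proportion to its reserved rate, with A's service spread out rather than bunched.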
Distributed Construction for (k,m)-Fault Tolerant Connected Dominating Set in Wireless Sensor Network
MA Chen-ming, WANG Wan-liang and HONG Zhen
Computer Science. 2016, 43 (1): 128-132.  doi:10.11896/j.issn.1002-137X.2016.01.029
A virtual backbone based on a connected dominating set can prolong the lifetime of a wireless sensor network. However, since nodes are prone to failure, the virtual backbone also needs a certain degree of fault tolerance. In this regard, a fully distributed algorithm was proposed for constructing a fault-tolerant k-connected m-dominating set for arbitrary k and m values; the algorithm can also be extended to heterogeneous networks. It constructs a connected dominating set first, then makes all common nodes m-dominated using the ideas of maximum independent sets and greedy selection, and finally extends the connected dominating set to k-connectivity through common neighbor nodes in the local topology. Simulation experiments confirm that the algorithm obtains a k-connected m-dominating set of good size with low message overhead.
Richards Model Based Data Center Backbone Network Bandwidth Allocation Policy
MENG Fei, LAN Ju-long and HU Yu-xiang
Computer Science. 2016, 43 (1): 133-136.  doi:10.11896/j.issn.1002-137X.2016.01.030
There are large numbers of bursty short flows in the inter-data-center backbone network, and it is difficult to adapt bandwidth allocation to the traffic patterns in real time. For this problem, a Richards model based bandwidth allocation policy (RBA) was proposed, using a closed-loop feedback control system for real-time bandwidth allocation. Based on the Richards curve, the feedback control factor of each link was designed as the feedback of the system, which responds smoothly to bursty traffic. Furthermore, according to the delay sensitivity of flows, different flow rate growth curves can be derived from different allometric parameters, meeting the QoS requirements of delay-sensitive flows. Simulation on a Mininet testbed shows that the proposed policy can effectively guarantee bandwidth for flows while ensuring allocation fairness.
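The Richards (generalised logistic) curve that gives the policy its rate-growth shape can be written down directly; varying the allometric parameter nu changes the ramp-up profile, which is the lever for treating delay-sensitive flows differently. The parameter values below are illustrative, not the paper's.

```python
# Richards (generalised logistic) curve: A + (K - A) / (1 + Q*exp(-B*t))**(1/nu).
import math

def richards(t, K=1.0, A=0.0, B=1.0, Q=1.0, nu=1.0):
    """Lower asymptote A, upper asymptote K, growth rate B, shape Q, nu."""
    return A + (K - A) / (1.0 + Q * math.exp(-B * t)) ** (1.0 / nu)

# With nu=1 this reduces to the plain logistic: value 0.5 at t=0, -> K as t grows.
print(round(richards(0.0), 3))   # 0.5
print(richards(10.0) > 0.99)     # True: the rate has essentially reached K
```

A smaller nu shifts the inflection and steepens the late ramp, so a link controller can let delay-sensitive flows approach their allocation faster than bulk flows.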
Research of Smartphone Energy Saving Based on Buffer Threshold Adaptive Adjustment
JIANG Bo, LI Tao-shen and GE Zhi-hui
Computer Science. 2016, 43 (1): 137-140.  doi:10.11896/j.issn.1002-137X.2016.01.031
Abstract PDF(354KB) ( 204 )   
To solve the low energy efficiency of LTE wireless streaming media terminals, a new energy-efficient streaming media transmission control method based on dynamic buffer threshold adjustment (DBTA) was proposed. The algorithm adaptively adjusts the receiver's lower buffer threshold according to the monitored network bandwidth and streaming media encoding rate, so that the buffer size better fits the current bandwidth and encoding rate, and improves the ratio of dormant time to working time, thereby extending the battery life of the terminal. Simulation results show that DBTA effectively reduces the excessive energy consumption of LTE wireless streaming media terminals, achieving up to 22% energy savings.
Trustworthy Web Service Selection Based on Social Network
WU Ju-hua, CHENG Xiao-yan, CAO Qiang and MO Zan
Computer Science. 2016, 43 (1): 141-144.  doi:10.11896/j.issn.1002-137X.2016.01.032
Abstract PDF(344KB) ( 172 )   
The technique of Web services enables the development of large-scale applications in open environments. The increasing number of Web service providers has produced numerous Web services with the same or similar functionality, which necessitates tools and techniques to search the Web services available over the Web. However, most selection mechanisms use a fixed set of indicators as the selection criteria, so when dealing with unknown Web services they cannot adapt to the diverse demands of users. In this paper, considering both the service provider's network and the service consumer's network from the perspective of social networks, we proposed a trustworthy service selection framework. The synthetic value of a service provider is determined by the fuzzy synthetic evaluation method, and the reliability of Web services is calculated from user ratings. A user then makes a selection decision based on QoS demands, QoS data, the synthetic evaluation of the service provider, and the reliability of the Web services.
System Throughput Optimization for Hybrid Device-to-Device Cellular Networks
QIAN Cheng, QIAN Li-ping, WU Hang and CHEN Qing-zhang
Computer Science. 2016, 43 (1): 145-148.  doi:10.11896/j.issn.1002-137X.2016.01.033
Abstract PDF(397KB) ( 185 )   
Since each DU (Device-to-Device user) perceives different channel gains when sharing resources with different cellular users (CUs), it is important to associate a DU with the right CU to improve spectrum efficiency. Despite its importance, resource optimization in hybrid Device-to-Device (D2D) cellular networks has not been solved perfectly due to its non-convex nature. This paper proposed a hybrid network resource optimization method (hnRoM) based on linear programming and bipartite matching. The hnRoM algorithm first decomposes the resource optimization problem into a DU-CU assignment subproblem and a time resource allocation subproblem. The optimal solution of the time resource allocation subproblem is obtained by linear programming. Using this solution, the optimal DU-CU assignment is then computed on a bipartite graph model. Finally, simulation results show that all DUs can be served with maximal system throughput while satisfying the minimum transmission rate requirements of all users.
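To make the DU-CU assignment step concrete: with the per-pair throughputs already computed (here a made-up matrix), the task is to pick the one-to-one pairing maximising total throughput. The paper solves this with bipartite matching; for a tiny instance, brute force over permutations is enough to illustrate the objective:

```python
from itertools import permutations

def best_assignment(throughput):
    """throughput[d][c] = rate achieved when DU d shares CU c's channel.
    Returns (perm, value) where perm[d] is the CU assigned to DU d."""
    n = len(throughput)
    best_perm, best_val = None, float("-inf")
    for perm in permutations(range(n)):
        val = sum(throughput[d][perm[d]] for d in range(n))
        if val > best_val:
            best_perm, best_val = perm, val
    return best_perm, best_val

# Illustrative rates: each DU strongly prefers a distinct CU.
rates = [[5, 1, 2],
         [2, 6, 1],
         [1, 2, 7]]
perm, total = best_assignment(rates)  # diagonal pairing, total 18
```

A real implementation would use a polynomial-time matching algorithm (e.g. Hungarian method) instead of the factorial enumeration shown here.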
Anti-jamming Method for Frequency Hopping Communication Based on Single Channel BSS and EMD
QI Yang-yang and YU Miao
Computer Science. 2016, 43 (1): 149-153.  doi:10.11896/j.issn.1002-137X.2016.01.034
Abstract PDF(667KB) ( 179 )   
Traditional blind source separation (BSS) methods cannot be used in frequency hopping communication systems equipped with only one antenna. To deal with this problem, a single-channel BSS anti-jamming method based on empirical mode decomposition (EMD) was presented. Firstly, the signal received by the single channel is expanded by EMD according to theoretical analysis and simulation results, transforming the single-channel underdetermined problem into a multi-channel determined one. Then totally-blind and semi-blind BSS algorithms are employed respectively to separate the signal from the jamming. Simulations were carried out under different signal-to-jamming and signal-to-noise ratios to demonstrate the superior performance of the proposed method.
Novel Reliability Analysis Algorithm Based on MDDs in Networks with Imperfect Nodes
WANG Hong-gang, DONG Rong-sheng and QIAN Jun-yan
Computer Science. 2016, 43 (1): 154-158.  doi:10.11896/j.issn.1002-137X.2016.01.035
Abstract PDF(368KB) ( 249 )   
Computing the reliability of networks with imperfect nodes or edges is an NP-hard problem, and the assumption that both nodes and edges may fail is closer to real life. A formal model of networks with binary-state nodes and edges was constructed, and a novel network reliability analysis algorithm was proposed. The combination states of any node and its adjacent non-visited edges are enumerated to merge isomorphic sub-networks. Then an MDD variable is used to represent the reduced state vector and the corresponding probability vector. Finally, the MDD representing the network is constructed by a custom operation. Experiments show that the level and size of the decision diagram generated by the proposed algorithm are smaller than those of the corresponding binary decision diagram.
Random Walk Based Node Ranking Algorithm in Heterogeneous Networks
JIA Li-juan
Computer Science. 2016, 43 (1): 159-162.  doi:10.11896/j.issn.1002-137X.2016.01.036
Abstract PDF(330KB) ( 185 )   
Heterogeneous networks usually contain nodes of different types. In order to satisfy users' information retrieval requests for different types, nodes of different types need to be ranked. To solve this problem and improve effectiveness at the same time, this paper proposed a random-walk-based node ranking algorithm for heterogeneous networks. Firstly, we formally described a heterogeneous network containing user, image and text nodes. Secondly, we defined the similarity between nodes and proposed a modified similarity measure that includes neighbors of different types. Thirdly, we proposed an algorithm for computing node importance based on random walks, and analyzed the selection of the bias vector in the random walk model. Finally, through extensive experiments on a real dataset, we validated the effectiveness of the proposed approach in handling users' information retrieval requests.
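Biased random-walk ranking of this kind typically iterates r = (1-d)·bias + d·Pᵀr to a fixed point (personalized-PageRank style). A minimal sketch, with an illustrative three-node graph, damping factor and uniform bias vector (all assumptions, not the paper's model):

```python
def random_walk_rank(adj, bias, d=0.85, iters=100):
    """adj: dict node -> set of out-neighbours; bias: restart distribution."""
    nodes = list(adj)
    rank = {v: 1.0 / len(nodes) for v in nodes}
    for _ in range(iters):
        new = {}
        for v in nodes:
            # Mass flowing into v from every node that links to it.
            inflow = sum(rank[u] / len(adj[u]) for u in nodes if v in adj[u])
            new[v] = (1 - d) * bias[v] + d * inflow
        rank = new
    return rank

adj = {"a": {"b", "c"}, "b": {"c"}, "c": {"a"}}
bias = {v: 1 / 3 for v in adj}
r = random_walk_rank(adj, bias)
# "c" receives links from both "a" and "b", so it ranks highest.
assert r["c"] > r["a"] > r["b"]
```

In the heterogeneous setting, the bias vector is where type preferences would be injected: raising the bias mass on one node type steers the stationary distribution toward nodes of that type.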
Indoor Positioning Algorithm Based on Dynamic K Value and AP MAC Address Match
WANG Pei-zhong, ZHENG Nan-shan and ZHANG Yan-zhe
Computer Science. 2016, 43 (1): 163-165.  doi:10.11896/j.issn.1002-137X.2016.01.037
Abstract PDF(248KB) ( 260 )   
After describing an indoor positioning algorithm based on dynamic K value and weighted localization (EWKNN), this article put forward an improved indoor localization algorithm based on AP MAC address matching. First of all, EWKNN is used to dynamically choose reference points. Then, according to the AP MAC matching degree between the reference points and the test point, the optimal positioning reference points are determined. Finally, the weighted position is obtained from the signal distances between the optimal reference points and the test point. Experimental studies indicate that the improved algorithm achieves better positioning accuracy than the EWKNN localization algorithm.
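The final weighting step can be sketched as an inverse-signal-distance weighted mean over the chosen reference points; the RSSI fingerprints and coordinates below are made up for illustration:

```python
import math

def wknn_position(refs, observed):
    """refs: list of ((x, y), fingerprint) pairs; observed: RSSI fingerprint.

    Each reference point is weighted by 1 / (signal distance + eps), so
    reference points whose fingerprints resemble the observation dominate.
    """
    eps = 1e-6
    weights = wx = wy = 0.0
    for (x, y), fp in refs:
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(fp, observed)))
        w = 1.0 / (dist + eps)
        weights += w
        wx += w * x
        wy += w * y
    return wx / weights, wy / weights

refs = [((0.0, 0.0), [-40, -70]),
        ((4.0, 0.0), [-70, -40])]
# An observation equally far from both fingerprints lands midway.
x, y = wknn_position(refs, [-55, -55])
```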
Research on Privacy Access Control Based on RBAC
ZHANG Xue-ming, HUANG Zhi-qiu and SUN Yi
Computer Science. 2016, 43 (1): 166-171.  doi:10.11896/j.issn.1002-137X.2016.01.038
Abstract PDF(601KB) ( 179 )   
RBAC can be used to control service providers' access to users' privacy in Web services. To solve the problem that RBAC cannot precisely describe privacy access control policies, as it lacks privacy attributes when applied to privacy scenarios, this paper put forward a privacy access control model based on RBAC and provided a method for ranking the credibility of service providers. Service providers with different credibility ranks are assigned different roles to control their access to sensitive privacy information. This paper also verified the validity and feasibility of the model through a concrete example.
Compound Approach for Sybil Users Detection in Social Networks
KANG Kai, ZHANG Ying-jun, LIAN Yi-feng and LIU Yu-ling
Computer Science. 2016, 43 (1): 172-177.  doi:10.11896/j.issn.1002-137X.2016.01.039
Abstract PDF(462KB) ( 189 )   
We focused on the detection of Sybil attacks in social networks. By analyzing 100,000 records collected from social networks, we extracted user features and, combining them with network reliability, proposed a method to detect Sybil users. Finally, we conducted experiments to validate the effectiveness of our method.
Authentication Protocol for Cooperation of Unmanned Vehicles
NIU Wen-sheng, LI Ya-hui and GUO Peng
Computer Science. 2016, 43 (1): 178-180.  doi:10.11896/j.issn.1002-137X.2016.01.040
Abstract PDF(332KB) ( 176 )   
The cooperation of unmanned vehicles requires dynamic networking for secure group communication. Based on the different security domains of wireless environments, an authentication protocol with identity protection for the cooperation of unmanned vehicles was proposed. It consists of a four-party security information exchange protocol for data transmission between nodes in the dynamic network, and an ID-based anonymous authentication scheme for the secure session keys of a node, which combines the decryption process with identity protection. Finally, it is shown that the session key has the security properties of unforgeability, confidentiality and non-repudiation, and that the new protocol provides a secure, flexible and efficient method of key management for dynamic networking.
Efficient Solution to SMP Based on Coding and Homomorphic Encryption
TANG Xuan, ZHONG Hong, SHI Run-hua and CUI Jie
Computer Science. 2016, 43 (1): 181-185.  doi:10.11896/j.issn.1002-137X.2016.01.041
Abstract PDF(427KB) ( 389 )   
The socialist millionaires' problem (SMP) asks whether two millionaires are equally rich without revealing their wealth; its solutions can serve as basic protocols in many application systems. Firstly, we presented a new encoding scheme for private data. Then, based on this encoding scheme and the ElGamal homomorphic encryption algorithm, we designed a novel scheme for the socialist millionaires' problem. Its validity, security and efficiency were analyzed. Finally, a comparison of protocols indicates that our scheme has high efficiency.
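The homomorphic equality test at the heart of SMP protocols can be illustrated with multiplicative ElGamal: encrypt g^x and g^y, form the quotient ciphertext, blind it with a random exponent, and decrypt; the plaintext is 1 iff x = y. This is a generic textbook construction with deliberately tiny, insecure parameters, not the paper's exact encoding or protocol:

```python
import random

p, g = 467, 2                       # toy group (467 prime; NOT secure)
sk = random.randrange(2, p - 1)     # private key
pk = pow(g, sk, p)                  # public key

def enc(m):
    k = random.randrange(2, p - 1)
    return (pow(g, k, p), m * pow(pk, k, p) % p)

def dec(c):
    a, b = c
    return b * pow(pow(a, sk, p), -1, p) % p

def equal(x, y):
    cx, cy = enc(pow(g, x, p)), enc(pow(g, y, p))
    # Quotient ciphertext encrypts g^(x-y); blind it with a random r so
    # decryption reveals only whether x == y, not the difference itself.
    a = cx[0] * pow(cy[0], -1, p) % p
    b = cx[1] * pow(cy[1], -1, p) % p
    r = random.randrange(1, p - 1)
    return dec((pow(a, r, p), pow(b, r, p))) == 1
```

In a real deployment the two parties would hold separate key shares and a cryptographically sized group would be used.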
ABE Scheme with Generalized Wildcards on Prime Order Groups
LI Zuo-hui and CHEN Xing-yuan
Computer Science. 2016, 43 (1): 186-190.  doi:10.11896/j.issn.1002-137X.2016.01.042
Abstract PDF(442KB) ( 248 )   
Traitor tracing and revocation are crucial to the use of ABE. ABE with generalized wildcards (GWABE) is a convenient way to solve these problems. Previous adaptively secure GWABE schemes suffer from superfluous computation overhead because they are designed on composite-order groups. To tackle this problem, an adaptively secure GWABE scheme on prime-order groups was proposed, employing the dual pairing vector space approach. The proposed scheme is proven adaptively secure under the decisional linear assumption. Performance analysis indicates that this scheme is more efficient while achieving adaptive security.
CP-ABE Scheme with Supporting Policy Elastic Updating in Cloud Storage Environment
XIONG An-ping, XU Chun-xiang and FENG Hao
Computer Science. 2016, 43 (1): 191-194.  doi:10.11896/j.issn.1002-137X.2016.01.043
Abstract PDF(332KB) ( 238 )   
In recent years, CP-ABE has been researched extensively as an access control mechanism for cloud storage environments. Because existing CP-ABE-based access control schemes cannot support elastic updates of system attributes in cloud storage, this paper exploited the storage and computing resource advantages of the cloud storage service provider (CSP) and proposed a cloud storage access control scheme supporting system attribute revocation and recovery, based on the attribute-based access control with efficient revocation (AB-ACER) scheme. The scheme introduces virtual attributes into the access control tree; when system attributes are revoked or recovered, the CSP only performs a small amount of re-encryption computation. Security and performance analysis show that the proposed scheme not only supports a changeable access control policy for the data owner (DO), but also ensures data confidentiality and fine-grained access control, while saving the DO a large amount of encryption computation.
Defense Strategies Selection Method Based on Non-cooperative Game Attack Forecast
ZHANG Heng-wei, ZHANG Jian and HAN Ji-hong
Computer Science. 2016, 43 (1): 195-201.  doi:10.11896/j.issn.1002-137X.2016.01.044
Abstract PDF(1108KB) ( 177 )   
To better address the selection of information security defense strategies, in view of the facts that attacker and defender have opposing objectives, interdependent strategies and a non-cooperative relationship, a non-cooperative non-zero-sum attack-defense game model was built. In the model, an improved payoff calculation method was presented. The method takes the defender's counterattack payoff into account, so the equilibrium is calculated more accurately. By analyzing the mixed-strategy game equilibrium, attack actions can be credibly forecast under the rationality hypothesis. On the basis of the attack forecast, an algorithm for defense strategy selection was proposed, which selects the optimal defense strategies against the attack threat. An example analysis demonstrates the effectiveness of the model and algorithm.
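For a fully mixed equilibrium of a 2x2 non-zero-sum game, each player mixes so as to make the opponent indifferent between pure strategies. The sketch below shows that indifference computation with made-up payoff numbers; the paper's model is more general (arbitrary strategy sets, counterattack payoffs), so this is only a minimal illustration of the equilibrium idea:

```python
def mixed_equilibrium_2x2(A, B):
    """A[i][j]: attacker payoff, B[i][j]: defender payoff
    (attacker plays row i, defender plays column j).
    Returns (q, p): probability the attacker plays row 0 and the defender
    plays column 0, assuming a fully mixed equilibrium exists."""
    # Defender's mix p makes the attacker indifferent between rows.
    p = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    # Attacker's mix q makes the defender indifferent between columns.
    q = (B[1][1] - B[1][0]) / (B[0][0] - B[1][0] - B[0][1] + B[1][1])
    return q, p

# Matching-pennies-like payoffs: the only equilibrium is 50/50 mixing.
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
q, p = mixed_equilibrium_2x2(A, B)
```

Given the equilibrium mix q, the forecast attack action is the row with the larger probability, which is what the defense-selection step then optimizes against.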
Research of Trustworthiness Evaluation Model Based on Software Behavior
DING Wei-tao and XU Kai-yong
Computer Science. 2016, 43 (1): 202-206.  doi:10.11896/j.issn.1002-137X.2016.01.045
Abstract PDF(499KB) ( 232 )   
In order to evaluate the trustworthiness of software accurately and reasonably, a trustworthiness evaluation model based on software behavior was proposed. Firstly, monitoring points are set up along the software behavior trace. According to their attributes and their function in the trusted evaluation system, the monitoring points are divided into control flow and data flow. Secondly, for control flow attributes, an evaluation method of the software behavior trace based on support vector machines (SVM) is proposed; for data flow attributes, an evaluation method of scene properties based on fuzzy AHP is proposed. Finally, experimental analysis shows that the model can evaluate the trustworthiness of software accurately and efficiently.
Research of PE File Information Hiding Based on Import Table Migration
TIAN Zu-wei, LI Yong-fan and LIU Yang
Computer Science. 2016, 43 (1): 207-210.  doi:10.11896/j.issn.1002-137X.2016.01.046
Abstract PDF(333KB) ( 195 )   
Based on an analysis of the import table of PE files, and using the working principle of the Windows loader and the uncertainty of the storage location of a PE file's import table, an algorithm based on import table migration was proposed. Information is hidden in the original import table space of the PE file. Theoretical analysis and experimental results show that the algorithm overcomes disadvantages of previous information hiding schemes, such as hidden information convergence and destruction of the import table, thereby improving concealment and anti-attack ability.
Transformation and Verification Method of AADL Data Flows for Real-time System Using Uppaal
SHEN Ning-min, LI Jing, BAI Hai-yang and ZHUANG Yi
Computer Science. 2016, 43 (1): 211-217.  doi:10.11896/j.issn.1002-137X.2016.01.047
Abstract PDF(583KB) ( 386 )   
Architecture Analysis and Design Language (AADL) is a formal, graphical modeling language that supports integrated modeling and multiple kinds of analysis. We used a timed-automata formal checking method to transform and verify data flows in AADL models. Considering the difference between single data flows and mixed data flows, we designed corresponding transformation rules from AADL models to timed automata, and analyzed and verified the data flows comprehensively by establishing a timed automata network. An automated model conversion Eclipse plug-in was designed and integrated into the AADL modeling tool OSATE. Finally, timed automata modeling and Uppaal were used to simulate and verify the transformed timed automata, checking whether the AADL model equivalently satisfies the real-time requirements of the system. Experimental results show that the model transformation method is valid and that flow latency properties can be verified effectively with Uppaal.
Testing Concurrent Behavior of System Based on CPN
LI Hua, SUN Tao, WANG Xian-rong, XING Yi, LI Ying-jie and XIA Xing-hang
Computer Science. 2016, 43 (1): 218-225.  doi:10.11896/j.issn.1002-137X.2016.01.048
Abstract PDF(1403KB) ( 206 )   
The basic concurrent behavior was first modeled with CPN and the state space was obtained through CPN Tools. The complexity of the CPN was then increased to show the problems that arise as the state space grows rapidly. Secondly, test generation was divided into three parts to guarantee test coverage of concurrent behavior. Among them, the test sequences focusing only on the coverage of concurrent behaviors were generated, and the other two parts were generated according to regular path generation methods. The concurrency start (end) places were mapped to state space nodes according to the CPN model execution, yielding the sets of start (end) nodes in the state space. After analyzing the relationships between nodes in these sets, sequence sets were built according to the predecessor/successor relations within them. The start (end) parts of the test sequences were selected from these sets, and the middle test sequences between each start sequence and end sequence were generated. Furthermore, to illustrate the modeling method and test generation, a simple P2P software system with inherent concurrent behavior was implemented and modeled with hierarchical CPN, and test sequences were generated to cover the concurrent behavior. Finally, a TTCN-3 test scheme was designed according to the requirements of the test sequences, a test scenario was designed, and the implemented software system and the TTCN-3 tester were deployed in one scenario to execute the designed TTCN-3 test suite. The test results show the correctness of the designed and implemented test work.
Loop Unrolling in Vectorized Programs
GAO Wei, ZHAO Rong-cai, YU Hai-ning and ZHANG Qing-hua
Computer Science. 2016, 43 (1): 226-231.  doi:10.11896/j.issn.1002-137X.2016.01.049
Abstract PDF(528KB) ( 456 )   
Loop unrolling is a well-developed loop transformation widely used in serial programs. However, it usually loses its efficiency in practical vectorized applications. To solve this problem, this paper proposed a loop unrolling technique for vectorized loops. First, we presented the CUFVL algorithm to automatically determine unroll factors, taking register pressure and code explosion into account. Second, we designed a complete unrolling strategy according to the specific characteristics of vectorized loops. Finally, we showed the flow chart of the proposed technique based on the CUFVL algorithm and the complete unrolling strategy. Experimental results show that the proposed technique can find the most suitable unroll factor for the tested programs, thereby enhancing their performance efficiently.
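For readers unfamiliar with the transformation itself, unrolling by a factor of 4 looks like the following (Python stands in for the compiler-level rewrite; the unroll factor 4 is an arbitrary example, whereas CUFVL chooses the factor automatically):

```python
def sum_unrolled(xs):
    """Sum a list with an unroll-by-4 main loop plus a remainder epilogue."""
    total = 0
    n = len(xs)
    i = 0
    # Unrolled main loop: four elements per trip reduce loop-control
    # overhead and expose independent operations to the scheduler.
    while i + 4 <= n:
        total += xs[i] + xs[i + 1] + xs[i + 2] + xs[i + 3]
        i += 4
    # Epilogue handles the n mod 4 leftover elements.
    while i < n:
        total += xs[i]
        i += 1
    return total
```

Choosing the factor is the hard part: too small wastes the opportunity, too large inflates code size and register pressure, which is exactly the trade-off CUFVL is described as balancing.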
Study on Data Repair and Consistency Query Processing
LIU Bo, CAI Mei and ZHOU Xu-chuan
Computer Science. 2016, 43 (1): 232-236.  doi:10.11896/j.issn.1002-137X.2016.01.050
Abstract PDF(465KB) ( 177 )   
Query inconsistency problems caused by violations of data constraints commonly arise in databases and integration systems. Repairing is one of the main approaches to tackling these problems, but few unified models covering repairs, constraints and queries have been studied so far. We proposed a consistency query algorithm based on deleting tuples under given constraints, which produces results consistent with the constraints and is suitable for query processing with various kinds of data constraints (i.e., not one specific kind). It defines a concise syntax for constraint expressions and query sentences. A new query and repair system model was built, which includes relational instances, a non-null constraint set, query definitions and repair methods, so that query results satisfy the constraints. The method, language syntax and model proposed in this paper are universal and broadly applicable, not limited to specific repair methods, constraints or queries.
News-summarization Extraction Method Based on Weighted Event Elements Strategy
GUO Yan-qing, ZHAO Rui, KONG Xiang-wei, FU Hai-yan and JIANG Jin-ping
Computer Science. 2016, 43 (1): 237-241.  doi:10.11896/j.issn.1002-137X.2016.01.051
Abstract PDF(460KB) ( 226 )   
To facilitate readers' fast understanding of the context of news events, this paper analyzed the effect of event elements on summarization generation and, combining the character of news evolution along the timeline, proposed a news summarization extraction method based on a weighted event elements strategy. The transition probability matrix is improved by weighting the event elements, which proves effective in extracting the news timeline summarization. In this way, the generated summarization contains more details of the news elements while becoming more readable. Experimental results demonstrate the superiority of the proposed algorithm.
Similarity of Objects in Incomplete Information Systems
LUO Jun-fang and QIN Ke-yun
Computer Science. 2016, 43 (1): 242-245.  doi:10.11896/j.issn.1002-137X.2016.01.052
Abstract PDF(284KB) ( 179 )   
Knowledge reduction and knowledge discovery in information systems are important topics in rough set theory. Based on an analysis of existing similarity relations and related rough set models in incomplete information systems, we proposed a new kind of similarity relation, called the limited similarity relation, to characterize object similarity in incomplete information systems. The rough set model based on the limited similarity relation was established, its basic properties were examined, and its relationships with existing rough set models were discussed.
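For context, the classical baseline these relations refine is the tolerance relation: two objects are similar when every pair of known attribute values agrees (a missing value, marked None below, matches anything). The sketch shows that baseline on toy data; the paper's limited similarity relation is a refinement whose exact definition is given in the paper itself:

```python
def tolerant(x, y):
    """Classical tolerance relation on an incomplete information system:
    objects agree on every attribute where both values are known."""
    return all(a is None or b is None or a == b for a, b in zip(x, y))

objects = {
    "o1": (1, None, 3),
    "o2": (1, 2, 3),
    "o3": (2, 2, None),
}
assert tolerant(objects["o1"], objects["o2"])      # missing value matches
assert not tolerant(objects["o1"], objects["o3"])  # first attribute differs
```

The known weakness of this baseline, which motivates refined relations, is that an object with many missing values becomes tolerant with almost everything.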
Text Clustering Method Study Based on MapReduce
LI Zhao, LI Xiao, WANG Chun-mei, LI Cheng and YANG Chun
Computer Science. 2016, 43 (1): 246-250.  doi:10.11896/j.issn.1002-137X.2016.01.053
Abstract PDF(474KB) ( 198 )   
Text clustering is a key technology for text organization, information extraction and topic retrieval. Selecting an appropriate similarity measure is an important task in clustering and greatly affects the clustering results. Classical similarity measures, such as distance functions and the correlation coefficient, can only describe linear relationships between documents. However, clustering results based on classical methods are usually unsatisfactory due to the complicated relationships among text documents. Some more sophisticated clustering methods have been studied, but with the growing scale of text data, their computational cost increases markedly with dataset size, and classical clustering methods become impractical for large-scale dataset clustering. In this paper, a distributed clustering method based on MapReduce was proposed for large-scale text clustering. Furthermore, we proposed an improved version of the k-means algorithm that uses information loss as the similarity function. To improve clustering speed, a parallel PCA method based on MapReduce was used to reduce the dimensionality of document vectors. The experimental results demonstrate that the proposed method is more efficient for text clustering than classical clustering methods.
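The serial skeleton being distributed here is the standard k-means iteration: an assign step (map) and a centre-update step (reduce). A minimal sketch on 2-D points, with Euclidean distance standing in for the paper's information-loss similarity (an assumption for illustration):

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # "Map" step: assign each point to its nearest centre.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[j].append(p)
        # "Reduce" step: recompute each centre as its cluster mean
        # (keep the old centre if a cluster went empty).
        centers = [tuple(sum(c) / len(c) for c in zip(*cl)) if cl
                   else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers, clusters
```

In the MapReduce version, the assignment loop is sharded across workers and each worker emits partial sums per centre, which the reducers combine into the new centres.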
Social Strength Learning between Users Based on Spatiotemporal Data
CHEN Yuan-juan, YAN Jian-feng, LIU Xiao-sheng and YANG Lu
Computer Science. 2016, 43 (1): 251-254.  doi:10.11896/j.issn.1002-137X.2016.01.054
Abstract PDF(382KB) ( 229 )   
Word2vec is a high-efficiency open-source tool released by Google, which represents each word with a float vector. We used this tool to process spatiotemporal data: each user is represented by a float vector in order to predict social strength among users. To predict social strength more precisely, this paper proposed a location-weight algorithm that dynamically adjusts the learning rate during training with word2vec. According to the number of distinct users at each location, we added location weights to the algorithm during learning. Meanwhile, we explored the effect of location weights on the prediction of social strength among users. Experimental results validate the performance of the proposed algorithm.
Hybrid Recommendation Algorithm Based on User’s Trust in Social Networks
WEN Jun-hao, HE Bo and HU Yuan-peng
Computer Science. 2016, 43 (1): 255-258.  doi:10.11896/j.issn.1002-137X.2016.01.055
Abstract PDF(321KB) ( 209 )   
To solve the insufficient coverage of current trust-based Web service recommendation algorithms in social networks, this paper incorporated the research ideas of direct trust, indirect trust and community trust, and substantially extended the research on trust computation. Based on this work, we proposed a new hybrid trust algorithm. We conducted extensive experiments on recall rate, user trust and user dispute. The experimental results show that the hybrid algorithm performs well on the coverage problem; meanwhile, it mitigates the poor recommendations that single-trust algorithms produce on sparse data.
Pattern Drift Detection in Association Analysis Based on Relation Entropy and J Measure
YANG Ying-jie, LIU Shuai and CHANG De-xian
Computer Science. 2016, 43 (1): 259-263.  doi:10.11896/j.issn.1002-137X.2016.01.056
Abstract PDF(401KB) ( 163 )   
Aiming at the problem that traditional concept drift detection techniques cannot be applied directly in association analysis, this paper proposed a concept drift detection method based on relation entropy and the J measure. Four features are extracted: relation count, relation entropy, window count and J measure. Concept drift is detected and located through hypothesis tests on the feature distributions in two sliding windows. The event sets and event-pair sets most affected by concept drift, which can support the adjustment and refreshing of concepts, are obtained. Experimental results show that this method is accurate and feasible.
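The J measure (in its standard Smyth-Goodman form) scores how much information a rule antecedent y carries about its consequent x; the probabilities below are illustrative, and the paper's windowed feature extraction builds on values of this kind:

```python
import math

def j_measure(p_y, p_x, p_x_given_y):
    """J(X; Y=y) = p(y) * [ p(x|y) log2(p(x|y)/p(x))
                          + (1-p(x|y)) log2((1-p(x|y))/(1-p(x))) ]."""
    def term(a, b):
        return 0.0 if a == 0 else a * math.log2(a / b)
    return p_y * (term(p_x_given_y, p_x) + term(1 - p_x_given_y, 1 - p_x))

# A rule whose antecedent tells us nothing (p(x|y) == p(x)) scores 0;
# the more the conditional diverges from the prior, the larger J.
assert j_measure(0.4, 0.3, 0.3) == 0.0
assert j_measure(0.4, 0.3, 0.9) > j_measure(0.4, 0.3, 0.5) > 0.0
```

A shift in the J measure of a rule between the two sliding windows is then one of the signals tested for drift.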
Iterated Local Search Algorithm with Improved Perturbation Mechanism for Vehicle Routing Problem
HOU Yan-e, KONG Yun-feng and DANG Lan-xue
Computer Science. 2016, 43 (1): 264-269.  doi:10.11896/j.issn.1002-137X.2016.01.057
Abstract PDF(495KB) ( 298 )   
This paper proposed an iterated local search (ILS) algorithm with a new perturbation mechanism for the vehicle routing problem. The perturbation mechanism is based on the ruin-and-recreate principle, comprising destroy and repair procedures. The best local neighborhood solution is first destroyed by a method that takes both randomness and correlation into account. In the destroy process, a perturbation factor is introduced to control the strength of destruction. The destroyed solution is then repaired by randomly selecting between basic greedy insertion and regret greedy insertion. Experiments on benchmark instances show that the developed algorithm is more effective than the quantum evolutionary algorithm and artificial bee colony.
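A minimal sketch of the ruin-and-recreate perturbation inside an ILS loop, on a single tour rather than a full VRP (the instance, the removal strength, and plain cheapest insertion in place of the paper's regret variant are all simplifying assumptions):

```python
import math
import random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def ruin_recreate(tour, pts, strength, rng):
    """Ruin: drop `strength` random customers; recreate: cheapest insertion."""
    tour = tour[:]
    removed = rng.sample(tour, strength)
    for c in removed:
        tour.remove(c)
    for c in removed:
        best_i = min(range(len(tour) + 1),
                     key=lambda i: tour_length(tour[:i] + [c] + tour[i:], pts))
        tour.insert(best_i, c)
    return tour

# ILS skeleton: perturb the incumbent, keep improvements.
pts = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}
rng = random.Random(1)
best = [0, 2, 1, 3]                 # a deliberately bad (crossing) tour
for _ in range(30):
    cand = ruin_recreate(best, pts, strength=2, rng=rng)
    if tour_length(cand, pts) < tour_length(best, pts):
        best = cand                 # converges to the unit square, length 4
```

The `strength` argument plays the role of the paper's perturbation factor: larger values escape deeper local optima at the cost of more repair work.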
Binary Granular Computing Model
ZHENG Lu-bin, CHEN Yu-ming, ZENG Zhi-qiang and LU Jun-wen
Computer Science. 2016, 43 (1): 270-274.  doi:10.11896/j.issn.1002-137X.2016.01.058
Abstract PDF(374KB) ( 187 )   
Granular computing is a theory for dealing with uncertain data, encompassing rough sets, fuzzy sets, quotient space, computing with words, etc. At present, the granulation of data and granular computing mainly rely on set operations. These set operations are inefficient, which restricts the application of granular computing algorithms. Therefore, we proposed a binary granular computing model with a three-layer structure of granule, granule swarm and granule library. We defined binary granules and granule operations, which transform set operations into binary number calculations. Furthermore, we proposed a distance metric between two binary granules, which represents the distance between sets of equivalence classes, and discussed some properties of the granule distance. The model also defines the binary granule swarm distance, gives its calculation method, and puts forward an attribute reduction method based on it. We proved the equivalence of our proposed reduction method and the classical Pawlak reduction method, and presented two reduction algorithms that use the binary granule swarm distance as heuristic information.
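The core efficiency idea can be sketched directly: encode each granule (a subset of the universe) as an integer bitmask, so set intersection, union and containment become single bitwise operations. The universe and granules below are toy data, not the paper's encoding scheme:

```python
universe = ["u1", "u2", "u3", "u4"]

def granule(members):
    """Encode a subset of the universe as an integer bitmask."""
    return sum(1 << i for i, u in enumerate(universe) if u in members)

g_a = granule({"u1", "u2"})   # e.g. objects agreeing on attribute a
g_b = granule({"u2", "u3"})   # e.g. objects agreeing on attribute b

# Set operations become bit operations:
assert g_a & g_b == granule({"u2"})                  # intersection
assert g_a | g_b == granule({"u1", "u2", "u3"})      # union
# Containment: A <= B iff A & ~B == 0 within the universe mask.
full = granule(set(universe))
assert (granule({"u2"}) & ~g_a & full) == 0
```

On real hardware each such operation touches 64 objects per machine word, which is the speedup over element-wise set manipulation that motivates the binary model.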
Improved Wind Driven Optimization Algorithm with Strong Development Ability
REN Zuo-lin, TIAN Yu-bo and SUN Fei-yan
Computer Science. 2016, 43 (1): 275-281.  doi:10.11896/j.issn.1002-137X.2016.01.059
Abstract PDF(572KB) ( 202 )   
The wind driven optimization (WDO) algorithm is a population-based iterative heuristic global optimization algorithm. To deal with the problem that WDO is easily trapped in local optima, we introduced five WDO variants based on different mutation strategies: wavelet mutation, chaotic mutation, non-uniform mutation, Gaussian mutation and Cauchy mutation. The different WDO mutation strategies were evaluated in simulation experiments on several typical test functions and compared with the particle swarm optimization (PSO) algorithm. Experiments show that WDO with wavelet mutation (WDOWM) has a strong development (exploitation) ability and is capable of escaping local optima. The WDOWM algorithm is superior to the PSO algorithm, the original WDO and the other improved WDO variants in terms of convergence rate, convergence accuracy and stability.
Maximum Fuzzy Marginal Projection via Patch Alignment Framework
XU Jie
Computer Science. 2016, 43 (1): 282-285.  doi:10.11896/j.issn.1002-137X.2016.01.060
Abstract PDF(557KB) ( 177 )   
References | Related Articles | Metrics
The patch alignment (PA) framework provides a useful way to obtain an explicit mapping for dimensionality reduction. Under the PA framework, we proposed a maximum fuzzy marginal projection for dimensionality reduction, introducing fuzzy set theory into the design of the new method. The similar neighbors obtained by the nonnegative least squares method are used to construct a similar-membership-degree matrix, based on which we redefined the fuzzy weights and the fuzzy marginal patch means. The fuzzy weights reduce, to some extent, the influence of class overlap and outliers, while the fuzzy marginal patch means specify the contribution of each sample to classification. Under the PA framework, the intra-class differences caused by varying illumination are reduced, and the distances between different classes are enlarged in the transformed space. Experimental results on the UCI Wine, Yale and Yale-B databases demonstrate the effectiveness of the new method, especially in dealing with illumination changes in images.
Parallel Hierarchical Association Rule Mining in Big Data Environment
ZHANG Zhong-lin, TIAN Miao-feng and LIU Zong-cheng
Computer Science. 2016, 43 (1): 286-289.  doi:10.11896/j.issn.1002-137X.2016.01.061
Abstract PDF(316KB) ( 284 )   
References | Related Articles | Metrics
To meet the real-time processing demands of big data, we proposed a parallel hierarchical association rule mining algorithm based on partitioning. First, the algorithm randomly divides the transactions of database D into n non-overlapping partitions, and mines the local frequent itemsets of all partitions in parallel. Second, the Apriori property is used to collect the frequent itemsets from all partitions and form the global candidate itemsets with respect to D. Then the actual support of each candidate is counted to determine the global frequent itemsets. Finally, the high efficiency of the algorithm was analyzed by modeling.
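The two-phase partition scheme the abstract describes rests on the property that any globally frequent itemset must be locally frequent in at least one partition, so the union of local results is a complete candidate set. A sequential sketch (the paper parallelizes phase 1; helper names and the naive enumeration are ours):

```python
from itertools import combinations
from collections import Counter

def local_frequent(partition, min_sup_ratio, max_len=3):
    """Mine the frequent itemsets of one partition (naive enumeration,
    chosen for clarity rather than speed)."""
    n = len(partition)
    counts = Counter()
    for t in partition:
        items = sorted(set(t))
        for k in range(1, max_len + 1):
            for combo in combinations(items, k):
                counts[combo] += 1
    return {s for s, c in counts.items() if c / n >= min_sup_ratio}

def partitioned_mining(db, n_parts, min_sup_ratio):
    """Phase 1: local mining per partition (parallelizable).
    Phase 2: one counting pass over D filters the candidate union down to
    the true global frequent itemsets."""
    parts = [db[i::n_parts] for i in range(n_parts)]  # round-robin split
    candidates = set()
    for p in parts:
        candidates |= local_frequent(p, min_sup_ratio)
    n = len(db)
    global_frequent = set()
    for cand in candidates:
        sup = sum(1 for t in db if set(cand) <= set(t))
        if sup / n >= min_sup_ratio:
            global_frequent.add(cand)
    return global_frequent
```

Only phase 2 touches the full database, and it does so with a fixed candidate set, which is what makes the scheme attractive for large D.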
Inlier Selection Algorithm for Feature Matching Based on K Nearest Neighbor Consistency
XIAO Chun-bao and FENG Da-zheng
Computer Science. 2016, 43 (1): 290-293.  doi:10.11896/j.issn.1002-137X.2016.01.062
Abstract PDF(561KB) ( 176 )   
References | Related Articles | Metrics
Feature matching for wide-baseline images is an extremely challenging task in computer vision applications. A large number of outliers are inevitably included in the initial matching results due to the significant viewpoint changes between wide-baseline images. An inlier selection algorithm called K nearest neighbor consistency (KNNC) was proposed to efficiently select highly reliable matches from the initial feature matching results of wide-baseline images. An affine-invariant structure similarity is used to measure the degree of structural similarity between two groups of K nearest neighboring features. Adopting a coarse-to-fine strategy, the KNNC algorithm selects inliers through K nearest neighbor correspondence consistency checking followed by K nearest neighbor structure consistency checking. Experimental results show that the proposed algorithm matches or surpasses several state-of-the-art inlier selection algorithms in precision, recall and computational time, and is applicable to wide-baseline images with large differences in viewpoint, scale and rotation.
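The coarse (correspondence consistency) stage can be sketched as follows: a match is kept when the nearest neighbors of its keypoint in one image are largely matched to the nearest neighbors of its counterpart in the other image. This is an illustrative re-implementation with invented names and thresholds, not the paper's exact formulation:

```python
import math

def knn_indices(points, idx, k):
    """Indices of the k nearest neighbors of points[idx] (excluding itself)."""
    order = sorted(range(len(points)),
                   key=lambda j: math.dist(points[idx], points[j]))
    return set(order[1:k + 1])

def knn_correspondence_consistency(pts1, pts2, k=3, min_overlap=0.5):
    """pts1[i] is putatively matched to pts2[i]. Keep match i if at least
    `min_overlap` of the k nearest neighbors of pts1[i] are matched to
    points among the k nearest neighbors of pts2[i]."""
    inliers = []
    for i in range(len(pts1)):
        nn1 = knn_indices(pts1, i, k)
        nn2 = knn_indices(pts2, i, k)
        overlap = len(nn1 & nn2) / k  # neighbor j's match is pts2[j]
        if overlap >= min_overlap:
            inliers.append(i)
    return inliers
```

The finer stage described in the abstract then re-checks the survivors with the affine-invariant structure similarity, which is omitted here.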
Image Inpainting Method Based on Sparse Decomposition
ZHU Xuan, ZHANG Xu-feng, LI Qiu-ju, WANG Ning and TAO Ji-yao
Computer Science. 2016, 43 (1): 294-297.  doi:10.11896/j.issn.1002-137X.2016.01.063
Abstract PDF(572KB) ( 183 )   
References | Related Articles | Metrics
This paper proposed a new pair of dictionaries, the redundant discrete wavelet transform and the wavelet atomic transform, and applied them to sparse morphological component decomposition to separate an image into structure and texture components. Then, exploiting the different characteristics of structure and texture, the curvature driven diffusion (CDD) model, which offers curvature-driven diffusion, edge enhancement and smooth denoising, is used to inpaint the structure component, while the Criminisi texture synthesis method is used to inpaint the texture component. Finally, the two components are recombined to obtain the inpainting result. Experimental results show that the new method not only decomposes the image well, but also restores sharp, smooth edges and complete, clear textures, giving better restoration results than the classical methods.
Mixture Noise Image Denoising Using Reweighted Low-rank Matrix Recovery
WANG Zhen-ping, ZHANG Jia-shu and CHEN Gao
Computer Science. 2016, 43 (1): 298-301.  doi:10.11896/j.issn.1002-137X.2016.01.064
Abstract PDF(582KB) ( 303 )   
References | Related Articles | Metrics
The traditional image denoising algorithm based on low-rank matrix recovery imposes only a low-rank constraint; when the Gaussian noise is strong, this leads to insufficient denoising or serious loss of detail. To overcome these disadvantages, a novel robust image denoising algorithm was proposed, which adds a Gaussian constraint to the low-rank constraint model. Inspired by reweighted L1 minimization for sparsity enhancement, reweighted singular values are used to enhance the low rank of a matrix, and an efficient iterative reweighting scheme was proposed to enhance low rank and sparsity simultaneously. Finally, to verify the denoising capability of the proposed approach, images with different noise types and simulation parameters were processed with the proposed method, the results were compared with the traditional low-rank matrix recovery denoising algorithm, and the peak signal-to-noise ratio and structural similarity index were analyzed at the same time. The experimental results show that the proposed reweighted low-rank matrix recovery algorithm for mixture noise enhances the low rank and sparsity of a matrix simultaneously, preserves visual quality and detail, and improves the objective evaluation indexes.
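The reweighting idea can be illustrated with a small singular value thresholding routine: weights inversely proportional to the current singular value estimates penalize small (noise) components heavily while leaving dominant (signal) components almost untouched. This is a sketch of the principle under our own parameter choices, not the paper's exact algorithm:

```python
import numpy as np

def reweighted_svt(Y, lam=0.5, n_iter=5, eps=1e-6):
    """Iteratively reweighted singular value thresholding (illustrative).
    Each pass soft-thresholds the singular values of Y with weights derived
    from the previous estimate, so large singular values are shrunk less."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    w = np.ones_like(s)                            # start from uniform weights
    for _ in range(n_iter):
        s_shrunk = np.maximum(s - lam * w, 0.0)    # weighted soft-thresholding
        w = 1.0 / (s_shrunk + eps)                 # small values -> large weight
    return (U * s_shrunk) @ Vt
```

On a noisy rank-1 matrix the scheme drives all noise singular values to zero in one or two passes while barely biasing the dominant one, which is the low-rank-plus-sparsity enhancement the abstract refers to.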
Noise Insensitive Feature Descriptors for Histogram and Application in Image Retrieval
LIU Shu-qin and PENG Jin-ye
Computer Science. 2016, 43 (1): 302-305.  doi:10.11896/j.issn.1002-137X.2016.01.065
Abstract PDF(336KB) ( 161 )   
References | Related Articles | Metrics
Content-based image retrieval methods usually represent images as histograms and retrieve images based on inter-image similarity. Noise in digital photography smooths an image's histogram, making two images appear more similar and thereby increasing the number of images returned during retrieval. To improve retrieval performance, this paper proposed a feature descriptor for histograms that is insensitive to noise, and computed inter-image similarities based on it. We modeled the noise in an image as stationary additive Gaussian white noise and derived the corresponding histogram. Then we defined the feature descriptor of a histogram via the general moments of a random variable, and analyzed how to recover the original histogram of an image from the proposed descriptor. To validate the proposed method, we compared it with four popular related algorithms and verified its effectiveness through image retrieval experiments on two real image databases.
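A moment-based descriptor of this kind can be computed directly from the histogram, treating it as the distribution of a random intensity variable; for additive Gaussian noise, the raw moments of the noisy intensity relate to the clean ones through the known moments of the noise, which is what makes recovery possible. A minimal sketch (the exact moment orders and normalization used in the paper may differ):

```python
def histogram_moments(hist, orders=(1, 2, 3)):
    """Raw moments E[X^k] of the intensity distribution described by a
    histogram, where hist[i] is the count of pixels with intensity i."""
    total = sum(hist)
    probs = [h / total for h in hist]
    return [sum(p * (i ** k) for i, p in enumerate(probs)) for k in orders]
```

For example, a histogram with equal mass at intensities 1 and 3 has mean 2, second raw moment 5 and third raw moment 14.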
Compression Tracking Algorithm for Online Rectangle Feature Selection
CAO Yi-qin, CHENG Wei and HUANG Xiao-sheng
Computer Science. 2016, 43 (1): 306-309.  doi:10.11896/j.issn.1002-137X.2016.01.066
Abstract PDF(1177KB) ( 167 )   
References | Related Articles | Metrics
The compressive tracking algorithm cannot select appropriate object features, which results in drifting or inaccurate tracking when the object is occluded or its appearance changes. To address this problem, this paper proposed a real-time compressive tracking algorithm based on rectangle feature selection. First, projection matrices are generated in an initialization phase and used to extract features to construct a feature pool. Rectangle features are used to represent the characteristics of the target in the feature pool, and the rectangle features that differ greatly from the target characteristics are removed. Finally, a Bayes classifier processes the candidate samples, and the classifier responses determine the tracking result. The experimental results show that the error rate of the proposed algorithm is about 13% lower than that of compressive tracking. The algorithm improves tracking accuracy and robustness, and its processing frame rate is 20 frames/s on a 320 pixel x 240 pixel video sequence, which meets the requirements of real-time tracking.
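The classification step in compressive-style trackers is typically a naive Bayes response: the sum of per-feature log likelihood ratios under Gaussian models of the positive (target) and negative (background) samples, with the candidate of maximal response taken as the new target location. A hedged sketch of that scoring function (parameter layout is our own):

```python
import math

def bayes_response(features, pos_params, neg_params):
    """Naive Bayes classifier response: sum of log likelihood ratios,
    with each feature modeled as Gaussian under the positive and negative
    classes. pos_params/neg_params are lists of (mean, sigma) per feature."""
    def log_gauss(v, mu, sigma):
        return (-0.5 * math.log(2 * math.pi * sigma ** 2)
                - (v - mu) ** 2 / (2 * sigma ** 2))
    return sum(log_gauss(v, mp, sp) - log_gauss(v, mn, sn)
               for v, (mp, sp), (mn, sn) in zip(features,
                                                pos_params, neg_params))
```

A positive response means the candidate's features are better explained by the target model than by the background model; the feature selection proposed in the paper aims to keep only rectangle features for which this separation stays reliable under occlusion and appearance change.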
Dynamic Multi-threshold Binarization Method for Visual Music Images
YANG Fang, TENG Gui-fa and TIAN Xue-dong
Computer Science. 2016, 43 (1): 310-314.  doi:10.11896/j.issn.1002-137X.2016.01.067
Abstract PDF(1415KB) ( 178 )   
References | Related Articles | Metrics
A dynamic binarization method was proposed to solve the uneven illumination problem in irregular regions of visual music images, reducing the background influence and improving the effectiveness of multi-threshold methods. The method is based on pixel grayscale differences, with a sliding window as the segmentation unit. The foreground is detected by the difference variance ratio in each window, and the irregularly illuminated regions are determined by estimating the window background gray level. Otsu's binarization threshold is then applied in each region. Foreground detection accuracy is determined by the window size and the variance ratio threshold, and these two parameters were discussed in detail in the experiments. The experimental results show that this method can divide the foreground into irregularly illuminated regions with a high foreground recall ratio, and it effectively solves the uneven illumination binarization problem of music images. The method can also be applied to the binarization of general document images.
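The per-region thresholding step uses Otsu's classical criterion: choose the threshold that maximizes the between-class variance of the region's gray levels. A self-contained sketch of that criterion, applied here to a flat list of pixel values standing in for one region:

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's threshold for a list of grayscale values in [0, levels):
    maximize the between-class variance over all candidate thresholds."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w0 += hist[t]                     # weight of class 0 (<= t)
        if w0 == 0:
            continue
        w1 = total - w0                   # weight of class 1 (> t)
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

In the proposed method this routine would run once per irregularly illuminated region rather than once per image, which is what lets a dark region and a bright region each get an appropriate local threshold.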
Dorsal Hand Vein Recognition Algorithm Based on Effective Dimensional Feature
JIA Xu, SUN Fu-ming, CAO Yu-dong, CUI Jian-jiang and XUE Ding-yu
Computer Science. 2016, 43 (1): 315-319.  doi:10.11896/j.issn.1002-137X.2016.01.068
Abstract PDF(921KB) ( 170 )   
References | Related Articles | Metrics
In dorsal hand vein recognition, the acquired image may contain interfering information. To address this problem, a recognition algorithm based on effective dimensional features was proposed. First, adaptive median filtering is applied to the acquired image for denoising. Second, the image is divided into blocks, and the feature of each sub-image is extracted based on a Gaussian mixture model and gradient information. Then, according to the feature similarity between sub-images, a method is proposed to judge whether a sub-image contains interfering information. Finally, the feature vector of the vein image is obtained by fusing the features of all sub-images that contain real vein information, and a sparse representation algorithm is proposed to match the feature vectors of unknown samples with different effective dimensions. Experiments show that the proposed algorithm achieves a high recognition rate, and good recognition performance is retained even when part of the vein image is occluded.