Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
    Content of CCDM2018 in our journal
    Uncertainty Measure of Rough Fuzzy Sets in Hierarchical Granular Structure
    YANG Jie, WANG Guo-yin, ZHANG Qing-hua, FENG Lin
    Computer Science    2019, 46 (1): 45-50.   DOI: 10.11896/j.issn.1002-137X.2019.01.007
    There is a consensus that the uncertainty of Pawlak's rough set model is rooted in the objects contained in the boundary region of the target concept, while the uncertainty of rough fuzzy sets results from three regions, because the objects in the positive or negative regions may also be uncertain. Moreover, in the rough fuzzy set model, a fuzzy concept can be characterized by different rough approximation spaces in a hierarchical granular structure, which raises the question of how the uncertainty of a fuzzy concept changes with granularity. This paper first proposed a fuzziness-based uncertainty measure, analyzed the rough fuzzy set model through average fuzzy sets, and drew the conclusion that the uncertainty measure for rough fuzzy sets is also suitable for probabilistic rough sets. Based on the fuzziness-based uncertainty measure, this paper revealed the change rules of the uncertainty of rough fuzzy sets in a hierarchical granular structure. Then, it discussed the uncertainties of the three regions (positive region, boundary region and negative region) and revealed the change rules of their uncertainty in a hierarchical granular structure. Finally, experimental results demonstrate the effectiveness of the proposed uncertainty measure.
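    As background for the fuzziness-based measure mentioned above, a standard textbook index of fuzziness can be sketched as follows; this helper is an illustrative assumption, not the specific measure constructed in the paper.

```python
import numpy as np

def linear_index_of_fuzziness(mu):
    """Common textbook fuzziness index of a finite fuzzy set: 0 for a crisp set,
    1 when every membership degree equals 0.5. Shown only as background for a
    fuzziness-based uncertainty measure; not the measure defined in the paper."""
    mu = np.asarray(mu, dtype=float)
    return 2.0 * float(np.mean(np.minimum(mu, 1.0 - mu)))

print(linear_index_of_fuzziness([0.0, 1.0, 1.0]))  # crisp set  -> 0.0
print(linear_index_of_fuzziness([0.5, 0.5, 0.5]))  # most fuzzy -> 1.0
```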
    Network Dimension: A New Measure for Complex Networks
    LIU Sheng-jiu, LI Tian-rui, LIU Xiao-wei
    Computer Science    2019, 46 (1): 51-56.   DOI: 10.11896/j.issn.1002-137X.2019.01.008
    How to measure complex networks has always received much attention. Based on an analysis of the fractal dimension of self-similar complex networks, this paper proposed a new measure for complex networks, named the network dimension. The network dimension is defined as the ratio of the logarithm of the sum of edge weights to the logarithm of the sum of node weights of a complex network. The weights of both edges and nodes are extended to the real and complex number fields. Calculation methods of the network dimension for weighted networks with different types of weights were presented. Finally, several representative classical complex network models were taken as examples to discuss some properties of the proposed network dimension.
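    The dimension described above can be computed directly from a weighted graph. A minimal sketch follows, assuming a NetworkX graph with real-valued `weight` attributes on nodes and edges; the library, attribute names and toy graph are illustrative assumptions, and the complex-weight extension is not covered.

```python
import math
import networkx as nx

def network_dimension(g: nx.Graph) -> float:
    """Network dimension as described above: log(sum of edge weights) divided by
    log(sum of node weights). Assumes real-valued 'weight' attributes on every
    node and edge."""
    edge_sum = sum(w for _, _, w in g.edges(data="weight", default=1.0))
    node_sum = sum(w for _, w in g.nodes(data="weight", default=1.0))
    return math.log(edge_sum) / math.log(node_sum)

# Toy example: a weighted triangle with node weights.
g = nx.Graph()
g.add_nodes_from((i, {"weight": 2.0}) for i in range(3))
g.add_weighted_edges_from([(0, 1, 1.5), (1, 2, 2.0), (0, 2, 1.0)])
print(network_dimension(g))
```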
    Adversarial Multi-armed Bandit Model with Online Kernel Selection
    LI Jun-fan, LIAO Shi-zhong
    Computer Science    2019, 46 (1): 57-63.   DOI: 10.11896/j.issn.1002-137X.2019.01.009
    Online kernel selection is an important component of online kernel methods, and it can be classified into three categories: the filter, the wrapper and the embedder. Existing work on online kernel selection explores the wrapper and the embedder categories and empirically adopts the filter approach, but there has been no unified framework for comparing, analyzing and investigating online kernel selection problems. This paper proposed a unified framework for online kernel selection research via multi-armed bandits, which can model the wrapper and the embedder of online kernel selection simultaneously. Given a set of candidate kernels, each kernel is associated with an arm in an adversarial bandit model. At each round of online kernel selection, multiple kernels are randomly chosen according to a probability distribution, which is then updated via the exponentially weighted average method. In this way, an online kernel selection problem is reduced to an adversarial bandit problem in a non-oblivious adversary setting, and a unified framework is developed for online kernel selection research, which can model the wrapper and the embedder uniformly. This paper further defined a new regret concept for online kernel selection, and proved that the wrapper within the framework enjoys a sub-linear weak expected regret bound and the embedder within the framework enjoys a sub-linear expected regret bound. Experimental results on benchmark datasets demonstrate the effectiveness of the proposed unified framework.
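    For reference, the exponentially weighted average update over candidate kernels resembles an Exp3-style bandit step. The sketch below shows such a step under standard textbook choices (importance-weighted loss estimate, fixed learning rate `eta`); it is an illustration, not necessarily the exact update analyzed in the paper.

```python
import numpy as np

def exp3_update(p, arm, loss, eta):
    """Generic Exp3-style exponentially weighted update over candidate kernels:
    the observed loss is importance-weighted by the sampling probability of the
    played arm, then the distribution is down-weighted and re-normalised."""
    est = np.zeros_like(p)
    est[arm] = loss / p[arm]          # importance-weighted loss estimate
    p = p * np.exp(-eta * est)
    return p / p.sum()

# Toy run over K candidate kernels with synthetic per-round losses.
K, eta, rng = 5, 0.1, np.random.default_rng(0)
p = np.full(K, 1.0 / K)
for t in range(200):
    arm = rng.choice(K, p=p)
    loss = rng.uniform()              # stand-in for the played kernel's loss
    p = exp3_update(p, arm, loss, eta)
print(p.round(3))
```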
    Multi-source Online Transfer Learning Algorithm for Classification of Data Streams with Concept Drift
    QIN Yi-xiu, WEN Yi-min, HE Qian
    Computer Science    2019, 46 (1): 64-72.   DOI: 10.11896/j.issn.1002-137X.2019.01.010
    Existing algorithms for the classification of data streams with concept drift usually train a new classifier on newly collected data when a new concept is detected, and discard the historical models. This strategy often leads to insufficient training of the classifier in the short term, because enough training data for the new concept cannot be collected in the initial stage. Furthermore, some existing online transfer learning algorithms for the classification of data streams with concept drift only take advantage of a single source domain, which can lead to poor classification accuracy when the historical concepts differ from the new concept. To solve these problems, this paper proposed a multi-source online transfer learning algorithm for the classification of data streams with concept drift (CMOL), which can utilize the knowledge of multiple historical classifiers. The CMOL algorithm adopts a dynamic classifier weight adjustment mechanism and updates the classifier pool according to the weights of the classifiers in it. Experiments validate that CMOL can adapt to a new concept faster than the corresponding methods when concept drift occurs, and achieves higher classification accuracy.
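    As an illustration of a dynamic classifier weight adjustment mechanism over a pool of historical classifiers, the sketch below uses a generic weighted-majority scheme; the toy pool, the decay factor `beta` and the update rule are assumptions, not the CMOL algorithm itself.

```python
import numpy as np

def weighted_vote(pool, weights, x):
    """Weighted-majority prediction over a pool of binary classifiers with
    labels in {-1, +1}; a generic ensemble sketch, not the exact CMOL rule."""
    votes = np.array([clf(x) for clf in pool])
    return int(np.sign(weights @ votes))

def adjust_weights(pool, weights, x, y, beta=0.9):
    """Multiplicatively down-weight classifiers that misclassify (x, y) and
    re-normalise, mimicking a dynamic classifier weight adjustment mechanism."""
    for i, clf in enumerate(pool):
        if clf(x) != y:
            weights[i] *= beta
    return weights / weights.sum()

# Toy pool of three historical "classifiers" (thresholds on a scalar feature).
pool = [lambda x: 1 if x > 0.3 else -1,
        lambda x: 1 if x > 0.5 else -1,
        lambda x: 1 if x > 0.7 else -1]
w = np.full(len(pool), 1.0 / len(pool))
for x, y in [(0.6, 1), (0.2, -1), (0.8, 1), (0.4, -1)]:
    print(weighted_vote(pool, w, x), y)
    w = adjust_weights(pool, w, x, y)
```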
    Image Retrieval Algorithm Based on Transfer Learning
    LI Xiao-yu, NIE Xiu-shan, CUI Chao-ran, JIAN Mu-wei, YIN Yi-long
    Computer Science    2019, 46 (1): 73-77.   DOI: 10.11896/j.issn.1002-137X.2019.01.011
    In recent years, with the development of the Internet and the popularity of smart devices, the number of online store images has been growing explosively. At the same time, the number of users of different types of social networks and media continues to grow. Accordingly, the multimedia data that users upload to the network have also changed: an uploaded image contains the visual information carried by the image itself, as well as the label information and text information that the user attaches to it. Therefore, how to provide fast and accurate image retrieval results to users is a new challenge in the field of multimedia retrieval. This paper proposed an image retrieval algorithm based on transfer learning. It learns from the visual information and the text information at the same time, then transfers the learnt results to the visual information domain, so that the resulting feature contains cross-modal information. Experimental results show that the proposed algorithm can achieve better image retrieval results.
    Confidence Interval Method for Classification Usability Evaluation of Data Sets
    TAN Xun-tao, GU Yi-yi, RUAN Tong, YUAN Yu-bo
    Computer Science    2019, 46 (1): 78-85.   DOI: 10.11896/j.issn.1002-137X.2019.01.012
    It is always a difficult problem to evaluate the usability of training data sets effectively, which hinders the application of intelligent classification systems. Aiming at the issue of data classification in the field of machine learning, and based on interval analysis and information granulation, this paper proposed an evaluation method of data classification usability to measure the separability of data sets. In this method, a data set is defined as a classification information system, the concept of the classification confidence interval is put forward, and information granulation is carried out by interval analysis. Under this information granulation strategy, this paper defined the mathematical model of classification usability, and further gave the calculation method of classification usability for a single attribute and for the whole data set. In this paper, 18 UCI standard data sets were selected as evaluation objects, the evaluation results of classification usability were given, and 3 classifiers were selected to classify these data sets. Finally, the effectiveness and feasibility of the evaluation method are verified by the analysis of the experimental results.
    Image Restoration Method Based on Improved Inverse Filtering for Diffractive Optic Imaging Spectrometer
    ZHANG Ming-qi, CAO Guo, CHEN Qiang, SUN Quan-sen
    Computer Science    2019, 46 (1): 86-93.   DOI: 10.11896/j.issn.1002-137X.2019.01.013
    In order to solve the image-blurring problem caused by the interference of out-of-focus optical images with the in-focus image in a diffractive optic imaging spectrometer (DOIS), an improved inverse filtering restoration method was proposed to solve the ill-posed problem in inverse filtering and restore the diffraction spectrum image. This method changes the solution of the primal problem by introducing a regularization matrix to regularize the inverse filtering function, thus suppressing the influence of noise on restored images. It reduces the ill-conditioning of the matrix and obtains a better restoration result through three procedures: converting the image restoration process into a matrix inversion, adding a regular filter to the SVD (singular value decomposition) method, and adjusting the form of the regularization matrix and the values of its parameters. Experiments show that the improved inverse filtering method is effective for restoring the spectral images formed with a diffractive optic imaging spectrometer. It can not only increase the Laplacian gradient and QI (Quality Index) values, but also reduce the RMSE (Root-Mean-Square Error) value to a certain extent. In the meantime, this method can suppress the noise interference in blurred images, enhance image clarity, restore a single spectrum image with a higher similarity to the reference image, and obtain better spectral curves for analyzing geomorphological features.
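    The regularized inversion step described above follows the familiar pattern of adding a filter to the SVD-based inverse. The sketch below shows a generic Tikhonov-style filtered SVD restoration on a toy 1-D blur; the regularization form and parameter `alpha` are assumptions, not the paper's exact regularization matrix.

```python
import numpy as np

def regularized_svd_inverse(H, g, alpha=1e-2):
    """Restore g = H f + noise by filtered SVD inversion: each singular value s
    is inverted as s / (s**2 + alpha), damping the small singular values that
    make plain inverse filtering ill-posed. A generic Tikhonov-style sketch."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    filt = s / (s ** 2 + alpha)
    return Vt.T @ (filt * (U.T @ g))

# Toy 1-D blur: a row-normalised Gaussian-like convolution matrix.
n = 64
x = np.linspace(0.0, 1.0, n)
f = np.sin(2.0 * np.pi * x)                         # "true" signal
H = np.exp(-(np.subtract.outer(np.arange(n), np.arange(n)) ** 2) / 8.0)
H /= H.sum(axis=1, keepdims=True)
g = H @ f + 0.01 * np.random.default_rng(0).standard_normal(n)
print(np.sqrt(np.mean((regularized_svd_inverse(H, g) - f) ** 2)))  # restoration RMSE
```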
    Sample Adaptive Classifier for Imbalanced Data
    CAI Zi-xin, WANG Xin-yue, XU Jian, JING Li-ping
    Computer Science    2019, 46 (1): 94-99.   DOI: 10.11896/j.issn.1002-137X.2019.01.014
    In the era of big data, imbalanced data are ubiquitous and inevitable, which has become a critical classification issue. Taking binary classification as an example, traditional learning algorithms cannot sufficiently learn the hidden patterns of the minority class and may be biased towards the majority class. To solve this problem, an effective way is to use cost-sensitive learning, which assigns a higher cost to misclassification of the minority class, to improve the prediction performance for the minority class. However, these methods treat the instances within one class equally, whereas different instances may actually make different contributions to the learning process. In order to make cost-sensitive learning more effective, this paper proposed a sample-adaptive cost-sensitive strategy for the classification of imbalanced data, which assigns a different weight to every single instance if misclassification occurs. Firstly, the strategy determines the distances between the instances and the boundary according to the local distribution of the instances. Then, it assigns higher weights to the instances nearer to the boundary on top of the margin theory. In this paper, the proposed strategy was applied to the classical LDM method. A series of experiments on UCI datasets prove that the sample-adaptive cost-sensitive strategy can effectively improve the classifier's performance on imbalanced data classification.
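    A minimal sketch of the sample-adaptive idea, assigning larger weights to instances near the decision boundary on top of a class-level cost, is given below; the margin heuristic, the weighting function and the use of LinearSVC instead of LDM are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# Instances closer to the current decision boundary get larger misclassification
# weights, combined with a class-level cost for the minority class. This is a
# generic margin-based heuristic; the exact weighting used with LDM may differ.
X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)
base = LinearSVC(dual=False).fit(X, y)
dist = np.abs(base.decision_function(X))      # distance-like score to the boundary
sample_weight = np.exp(-dist)                 # nearer to the boundary -> larger weight
sample_weight *= np.where(y == 1, (y == 0).sum() / (y == 1).sum(), 1.0)
clf = LinearSVC(dual=False).fit(X, y, sample_weight=sample_weight)
print(clf.score(X, y))
```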
    Optimized Selection Method of Cycle-consistent Loss Coefficient of CycleGAN in Image Generation with Different Texture Complexity
    XU Qiang, ZHONG Shang-ping, CHEN Kai-zhi, ZHANG Chun-yang
    Computer Science    2019, 46 (1): 100-106.   DOI: 10.11896/j.issn.1002-137X.2019.01.015
    High-quality image generation has always been a difficult and hot topic in computer vision and related areas. CycleGAN achieves good results in unsupervised image generation tasks by using cycle-consistent losses. However, when faced with image generation tasks of different texture complexity, CycleGAN's cycle-consistent loss coefficient is kept unchanged by default, and the generated images suffer from weaknesses such as texture distortion or even texture disappearance, so the quality of the generated images cannot be guaranteed. In this paper, the complexity of image texture was measured by integrating the spatial dimension and the time dimension of images, the importance of the cycle-consistent loss function in the optimization objective was clarified, and the correlation between the size of the cycle-consistent loss coefficient and the quality of generated images of different texture complexity was discovered and explained: the higher the texture complexity, the larger the cycle-consistent loss coefficient should be; otherwise, a smaller coefficient should be taken. Using benchmark and self-acquired image data sets, classification accuracy based on transfer learning was introduced as a quality assessment indicator for the generated images. The experimental results show that choosing an appropriate cycle-consistent loss coefficient can effectively improve the quality of the generated images.
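    For reference, the cycle-consistent loss coefficient studied above is the factor lam that scales the L1 cycle-reconstruction terms in the standard CycleGAN objective. The sketch below shows where that coefficient enters, using toy generators; it does not reproduce the paper's selection procedure, and the toy networks are assumptions.

```python
import torch

def cycle_consistency_loss(G, F, real_x, real_y, lam):
    """Standard CycleGAN cycle-consistency term,
    lam * (||F(G(x)) - x||_1 + ||G(F(y)) - y||_1); the paper studies how to
    choose lam for domains of different texture complexity."""
    loss_x = torch.nn.functional.l1_loss(F(G(real_x)), real_x)
    loss_y = torch.nn.functional.l1_loss(G(F(real_y)), real_y)
    return lam * (loss_x + loss_y)

# Toy 1x1-convolution "generators" on 8x8 single-channel images.
G = torch.nn.Conv2d(1, 1, kernel_size=1)
F = torch.nn.Conv2d(1, 1, kernel_size=1)
x, y = torch.rand(4, 1, 8, 8), torch.rand(4, 1, 8, 8)
print(cycle_consistency_loss(G, F, x, y, lam=10.0).item())
```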
    Edge Bundling Method of Spiral Graph Based on Interval Classification
    ZHU Li-xia, LI Tian-rui, TENG Fei, PENG Bo
    Computer Science    2019, 46 (1): 107-111.   DOI: 10.11896/j.issn.1002-137X.2019.01.016
    The spiral graph is a common method for visualizing time series data. It can not only display multi-stage data simultaneously in one plane, but also present data of different time lengths in a limited space. To solve the problem of visual clutter caused by the intersection of helical lines in existing spiral graph visualization methods, an edge bundling method is of great significance. First, the data points on the state circles are classified. Then virtual bundling circles are set between adjacent state circles, and the data points on a state circle are mapped to the corresponding virtual bundling circle by an edge bundling function. Finally, to achieve the effect of curve bundling, Bézier curves are drawn between a state circle and its corresponding virtual bundling circle, and spiral curves are drawn between the virtual bundling circles. Experimental results show that the edge bundling algorithm is effective for large-scale data visualization and can effectively alleviate the problem of visual clutter.
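    The bundling step above draws Bézier segments between a state circle and its virtual bundling circle. A minimal sketch of evaluating a quadratic Bézier curve follows; the radii and control-point placement are illustrative assumptions, not the paper's mapping function.

```python
import numpy as np

def quadratic_bezier(p0, p1, p2, n=50):
    """Sample n points on the quadratic Bézier curve with endpoints p0, p2 and
    control point p1: B(t) = (1-t)^2 p0 + 2(1-t)t p1 + t^2 p2."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

# Bundle an edge from a point on a state circle (radius 5) to its mapped point
# on a virtual bundling circle (radius 4); the control point is pulled toward
# the spiral centre purely for illustration.
p_state = 5.0 * np.array([np.cos(0.3), np.sin(0.3)])
p_bundle = 4.0 * np.array([np.cos(0.4), np.sin(0.4)])
control = 0.8 * (p_state + p_bundle) / 2.0
print(quadratic_bezier(p_state, control, p_bundle)[:3])
```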
    Migration Optimization Algorithm Based on State Transition and Fuzzy Thinking
    ZHONG Da-jian, FENG Xiang, YU Hui-qun
    Computer Science    2019, 46 (1): 112-116.   DOI: 10.11896/j.issn.1002-137X.2019.01.017
    Inspired by the existing animal migration optimization algorithm (AMO), a novel migration optimization algorithm based on state transition and fuzzy thinking (SMO) was proposed for solving global optimization problems. In the proposed algorithm, a state model and a fuzzy opposite model are constructed. Firstly, the state model describes the distribution of the whole group with two states: the dispersed state and the centralized state. In the dispersed state, the whole group is distributed randomly in the solution space and a probabilistic decision-making method is used to search the solution space; this is the process of exploration. As the individuals learn from each other, the differences between individuals become smaller and smaller, and the state of the group changes into the centralized state; meanwhile, a step-based searching strategy is used to find the optimal value, which is the process of exploitation. Therefore, the balance between exploration and exploitation can be obtained by using different searching strategies according to the state of the group. Secondly, the algorithm uses a fuzzy opposite model, which makes full use of the fuzzy opposite positions of individuals, increases the diversity of the population, and improves the convergence precision of the algorithm. Then, the convergence of the algorithm is proved theoretically, and twelve benchmark functions are used to verify the performance of the proposed algorithm. Finally, the algorithm is compared with three other optimization algorithms. Experimental results attest to the effectiveness of SMO.
    Network Representation Learning Based on Multi-view Ensemble Algorithm
    YE Zhong-lin, ZHAO Hai-xing, ZHANG Ke, ZHU Yu
    Computer Science    2019, 46 (1): 117-125.   DOI: 10.11896/j.issn.1002-137X.2019.01.018
    Existing network representation learning algorithms mainly consist of methods based on shallow neural networks and approaches based on neural matrix factorization. It has been proved that network representation learning based on a shallow neural network is equivalent to factorizing a feature matrix of the network structure. In addition, most existing network representation algorithms learn features from the structure information alone, which is single-view representation learning for networks, whereas a network contains various kinds of views. Therefore, this paper proposed a network representation learning approach based on multi-view ensemble (MVENR). The algorithm abandons the neural network training process and integrates the idea of matrix information ensemble and factorization into the network representation vectors. MVENR gives an effective combination strategy among the network structure view, the link weight view and the node attribute view. Meanwhile, it makes up for the shortcoming of neglecting network link weights, and alleviates the sparse-feature problem caused by training on a single view. The experimental results show that the proposed algorithm outperforms the commonly used joint learning algorithms and the methods based purely on network structure features, and that it is a simple and efficient network representation learning algorithm.
    Hybrid Recommendation Algorithm Based on Deep Learning
    ZENG Xu-yu, YANG Yan, WANG Shu-ying, HE Tai-jun, CHEN Jian-bo
    Computer Science    2019, 46 (1): 126-130.   DOI: 10.11896/j.issn.1002-137X.2019.01.019
    Recommendation systems are playing an increasingly indispensable role in the development of e-commerce, but the sparsity of users' rating data for items is often an important cause of low recommendation accuracy. At present, recommendation techniques usually process auxiliary information to alleviate the sparsity of user ratings and improve the accuracy of predicted scores; in particular, hidden features of an item can be extracted from text data through related models. In recent years, deep learning algorithms have developed rapidly. Therefore, this paper adopted the variational autoencoder (VAE), a new type of network structure with powerful feature extraction capabilities, and proposed a novel context-aware recommendation model (VAEMF) that integrates the unsupervised VAE into probabilistic matrix factorization (PMF). Firstly, TF-IDF is used to preprocess the review documents of an item. Then, the VAE is utilized to capture the contextual information features of the item. Finally, probabilistic matrix factorization is used to improve the accuracy of the predicted scores. The experimental results on two real data sets show that this method is superior to autoencoder-based and probabilistic matrix factorization recommendation methods.
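    A minimal sketch of the TF-IDF preprocessing step described above is given below, producing sparse item vectors that a VAE-style encoder could then compress into latent item features; the toy corpus and vectorizer settings are illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# TF-IDF preprocessing: item description/review documents become fixed-length
# sparse vectors suitable as input to a VAE-style feature extractor.
item_docs = [
    "lightweight laptop with long battery life",
    "noise cancelling headphones with great battery",
    "mechanical keyboard with rgb backlight",
]
vectorizer = TfidfVectorizer(max_features=5000, stop_words="english")
item_tfidf = vectorizer.fit_transform(item_docs)   # shape: (n_items, vocab_size)
print(item_tfidf.shape)
```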
    Approach for Knowledge Reasoning Based on Hesitate Fuzzy Credibility
    ZHENG Hong-liang, HOU Xue-hui, SONG Xiao-ying, PANG Kuo, ZOU Li
    Computer Science    2019, 46 (1): 131-137.   DOI: 10.11896/j.issn.1002-137X.2019.01.020
    In order to solve the problem of inaccurate estimation of credibility in uncertainty reasoning, the hesitant fuzzy set was introduced into uncertainty reasoning in this paper, and the concept of hesitant fuzzy credibility was given. On the basis of credibility-based knowledge representation, a method of knowledge representation with hesitant fuzzy credibility is defined. To address the problem of missing information in the experts' reasoning process, an information completion method based on averaging was proposed. Reasoning with hesitant fuzzy credibility was constructed for single rules and for multiple rules in a parallel relationship, and the specific steps of knowledge representation and uncertainty reasoning based on hesitant fuzzy credibility were given. Finally, an example was given to illustrate the effectiveness and feasibility of the proposed method.