
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
CODEN JKIEBK


Performance Analysis of GPU Programs Towards Better Memory Hierarchy Design
TANG Tao, PENG Lin, HUANG Chun and YANG Can-qun. Performance Analysis of GPU Programs Towards Better Memory Hierarchy Design[J]. Computer Science, 2017, 44(12): 1-10. doi:10.11896/j.issn.1002-137X.2017.12.001
Abstract
With higher peak performance and energy efficiency than CPUs, as well as an increasingly mature software environment, GPUs have become one of the most popular accelerators for building heterogeneous parallel computing systems. Generally, a GPU hides memory access latency through a flexible and lightweight thread-switching mechanism, but its memory system faces severe pressure because of the massive parallelism, and its actual performance is heavily affected by the efficiency of memory access operations. Therefore, the analysis and optimization of GPU programs' memory access behavior have always been hot research topics in GPU-related studies. However, few existing works have analyzed the impact of memory hierarchy design on performance from the view of architecture. In order to better guide the design of the GPU memory hierarchy and program optimizations, we experimentally analyzed in detail the influence of each level of the GPU memory hierarchy on program performance, and summarized several strategies for both the memory hierarchy design of future GPU-like architectures and program optimizations.

Validity Protection Strategy for Real Time Data in CPS Based on Semantics
TANG Xiao-chun and TIAN Kai-fei. Validity Protection Strategy for Real Time Data in CPS Based on Semantics[J]. Computer Science, 2017, 44(12): 11-16. doi:10.11896/j.issn.1002-137X.2017.12.002
Abstract
The validity of real-time data and CPU processing capability form a contradiction in cyber-physical systems (CPS). While increasing the sampling frequency can guarantee the validity of real-time data, it also increases the CPU workload and reduces the system's computing power. Firstly, this paper used the semantic features of real-time data to establish a validity model of the data. Then, by setting a pre-scheduling task during the idle period of the CPU, making good use of the data validity model, and setting a new validity interval and start time for the update transaction of the real-time data, the CPU execution time was reduced. Finally, the strategy based on the semantic model was evaluated systematically on parameters such as the rotation, revolution and oil pressure of cotton-picking spindles. The CPU load can be reduced by about 15%.

Multi-dimensional Quantitative Evaluation Method of Open Knowledge Base Construction Technology
CHEN Xin-lei, JIA Yan-tao, WANG Yuan-zhuo, JIN Xiao-long and CHENG Xue-qi. Multi-dimensional Quantitative Evaluation Method of Open Knowledge Base Construction Technology[J]. Computer Science, 2017, 44(12): 17-22. doi:10.11896/j.issn.1002-137X.2017.12.003
Abstract
With the coming of the era of network big data, the construction of open knowledge bases attracts more and more attention from both academia and industry. In recent years, applications based on open knowledge base construction technologies have been emerging in an endless stream. However, there is no unified and comprehensive multi-dimensional quantitative evaluation method for the construction techniques of open knowledge bases. Based on existing work, a multi-dimensional specification system for open knowledge base construction techniques was put forward, which combines three dimensions, i.e., the accuracy, the construction time, and the scale of a knowledge base. Furthermore, a multi-dimensional quantitative evaluation method was proposed to evaluate a group of open knowledge base construction techniques. Experiments show the rationality and comprehensiveness of the proposed evaluation method, and it can deduce different results to meet the demands of different applications according to the importance of the dimensions.

Key Nodes Mining Algorithm Based on Association with Directed Network
LIANG Ying-ying, HUANG Lan and WANG Zhe. Key Nodes Mining Algorithm Based on Association with Directed Network[J]. Computer Science, 2017, 44(12): 23-27. doi:10.11896/j.issn.1002-137X.2017.12.004
Abstract
The importance of key nodes in a network is higher than that of most other nodes, and key nodes mining is an important research area of network analysis. It is of great significance for the study of networks, such as the study of network structure, network relations and so on. Many key nodes mining algorithms evaluate key nodes with different emphases. In this paper, we proposed a key nodes mining algorithm based on association in directed networks, combining nodes' partial information and their association with neighbors in the network. This algorithm first calculates the local centrality of nodes. In addition, it calculates the association centrality between associated nodes based on the nodes' local centrality and the strength of their association. In order to measure the accuracy of the key nodes mining algorithm, we designed an influence propagation experiment to test the algorithm. Results show that, compared with classical centrality and other key nodes mining algorithms, the key nodes mined by our algorithm have a higher ability to spread influence in the experimental network, which also illustrates the accuracy of our algorithm.

Coupled Topic-oriented Influence Maximization Algorithm
LV Wen-yuan, ZHOU Li-hua and LIAO Ren-jian. Coupled Topic-oriented Influence Maximization Algorithm[J]. Computer Science, 2017, 44(12): 28-32. doi:10.11896/j.issn.1002-137X.2017.12.005
Abstract
As networks are the main tool for communication in modern society, mining the most influential network users has become a hot issue. This study proposed a coupled topic-oriented influence maximization algorithm. It analyzes couplings among topics and extends the independent cascade model by considering the coupling similarity among topics and users' preferences on different topics. The classical greedy algorithm is used to mine the most influential users on the extended model. Compared with influence maximization algorithms that ignore topic coupling, the proposed algorithm finds more reasonable users who can affect more users in the network.
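The classical greedy procedure the abstract builds on can be sketched independently of the topic-coupling extension. The minimal Python sketch below (function names are illustrative, and a single activation probability per edge stands in for the paper's coupled, topic-aware probabilities) estimates expected spread under the independent cascade model by Monte Carlo simulation and picks seeds by marginal gain:

```python
import random

def ic_spread(graph, seeds, rng, trials=200):
    """Monte-Carlo estimate of expected spread under the independent
    cascade model.  graph[u] = list of (v, activation_probability)."""
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v, p in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / trials

def greedy_seeds(graph, k, rng):
    """Classical greedy: repeatedly add the node with the largest
    marginal gain in estimated spread."""
    nodes = set(graph) | {v for es in graph.values() for v, _ in es}
    seeds = []
    for _ in range(k):
        best = max(nodes - set(seeds),
                   key=lambda u: ic_spread(graph, seeds + [u], rng))
        seeds.append(best)
    return seeds
```

In the paper's extended model, the per-edge probability would be derived from the coupling similarity among topics and each user's topic preference rather than being a fixed constant.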

Study of ELM Algorithm Parallelization Based on Spark
LIU Peng, WANG Xue-kui, HUANG Yi-hua, MENG Lei and DING En-jie. Study of ELM Algorithm Parallelization Based on Spark[J]. Computer Science, 2017, 44(12): 33-37. doi:10.11896/j.issn.1002-137X.2017.12.006
Abstract
Extreme learning machine (ELM) has a high training speed, but because of its many matrix operations, its efficiency remains poor when applied to massive amounts of data. After thorough research on parallel computation with Spark resilient distributed datasets (RDDs), we proposed and implemented a parallelized ELM algorithm based on Spark. For convenience of performance comparison, a Hadoop-MapReduce-based version was also implemented. Experimental results show that the training efficiency of the Spark-based ELM parallelization algorithm is significantly better than that of the Hadoop-MapReduce-based version, and the greater the amount of data processed, the more obvious Spark's efficiency advantage.
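The parallelizable core of ELM training is computing HᵀH and HᵀT (H being the hidden-layer output matrix), which decomposes into per-partition partial sums — exactly the shape of Spark's mapPartitions/reduce pattern. The sketch below is not the authors' implementation, only the idea, in pure Python with illustrative names; a real Spark version would call `rdd.mapPartitions(...).reduce(merge)`:

```python
import math

def hidden_output(x, weights, biases):
    """Hidden-layer output h(x) of one sample under a sigmoid activation."""
    return [1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            for w, b in zip(weights, biases)]

def partial_sums(partition, weights, biases):
    """Map side: one partition's contribution to H^T H and H^T T."""
    L = len(biases)
    hth = [[0.0] * L for _ in range(L)]
    htt = [0.0] * L
    for x, t in partition:
        h = hidden_output(x, weights, biases)
        for i in range(L):
            htt[i] += h[i] * t
            for j in range(L):
                hth[i][j] += h[i] * h[j]
    return hth, htt

def merge(a, b):
    """Reduce side: element-wise addition of two partial results."""
    hth = [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a[0], b[0])]
    htt = [x + y for x, y in zip(a[1], b[1])]
    return hth, htt
```

The output weights then follow from solving (HᵀH)β = HᵀT on the driver, which is cheap because these matrices are only L×L regardless of the data size.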

Differential Privacy Protection Method for Location Recommendation
XIA Ying, MAO Hong-rui, ZHANG Xu and BAE Hae-young. Differential Privacy Protection Method for Location Recommendation[J]. Computer Science, 2017, 44(12): 38-41. doi:10.11896/j.issn.1002-137X.2017.12.007
Abstract
Location recommendation services make it easier for people to get surrounding information about points of interest (POI). However, there are risks related to location privacy. In order to avoid the negative influence resulting from leaking location privacy, a privacy protection method for location recommendation services was proposed. On the premise of maintaining location trajectory and check-in frequency characteristics, uniform and geometric distributions were presented to control privacy budget allocation effectively based on the path prefix tree (PP-Tree) and its balanced level, and thus the Laplace noise of differential privacy could be added according to the allocation result. Experiments indicate that this method can protect location privacy effectively, and the impact of differential privacy noise on the quality of location recommendation is reduced by reasonable privacy budget allocation.
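The two allocation strategies named above reduce to how a total privacy budget ε is split across the levels of the prefix tree before Laplace noise is drawn at each node. A minimal sketch (function names are illustrative, and the exact allocation formulas in the paper may differ, e.g. in which direction the geometric shares grow):

```python
import random, math

def uniform_budget(eps_total, levels):
    """Uniform allocation: each tree level gets an equal share of the budget."""
    return [eps_total / levels] * levels

def geometric_budget(eps_total, levels, ratio=2.0):
    """Geometric allocation: here, deeper levels get geometrically larger shares."""
    weights = [ratio ** i for i in range(levels)]
    s = sum(weights)
    return [eps_total * w / s for w in weights]

def laplace_noise(scale):
    """Sample Laplace(0, scale) via the inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def noisy_count(count, sensitivity, eps_i):
    """Differentially private count for one tree node with budget eps_i."""
    return count + laplace_noise(sensitivity / eps_i)
```

By sequential composition, noise added level by level with budgets summing to ε keeps the whole released tree ε-differentially private.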

SVRRPMCC: A Regularization Path Approximation Algorithm of Support Vector Regression
WANG Mei, WANG Sha-sha, SUN Ying-qi, SONG Kao-ping, TIAN Feng and LIAO Shi-zhong. SVRRPMCC: A Regularization Path Approximation Algorithm of Support Vector Regression[J]. Computer Science, 2017, 44(12): 42-47. doi:10.11896/j.issn.1002-137X.2017.12.008
Abstract
The regularization path algorithm is an efficient method for the numerical solution of the support vector regression (SVR) problem, which can obtain all possible values of the regularization parameter and the corresponding SVR solutions in time complexity equivalent to that of a single SVR solution. Existing SVR regularization path algorithms involve solving a system of iteration equations, and the existing exact approaches are difficult to apply to large-scale problems. Recently, there has been much interest in approximation approaches, and a new approximation algorithm for the SVR regularization path, named SVRRPMCC, was proposed in this paper. Firstly, SVRRPMCC applies the Monte Carlo method to randomly sample the coefficient matrix of the system of iteration equations. Then it uses the Cholesky factorization method to obtain the inverse of the coefficient matrix. Furthermore, the error bound and the computational complexity of the algorithm SVRRPMCC were analyzed. Experimental results on benchmark datasets show the validity and efficiency of SVRRPMCC.

Improved MIMLSVM Algorithm Based on Concept Weight Vector
HUAN Tian, HAO Ning and NIU Qiang. Improved MIMLSVM Algorithm Based on Concept Weight Vector[J]. Computer Science, 2017, 44(12): 48-51. doi:10.11896/j.issn.1002-137X.2017.12.009
Abstract
In order to solve the problem that the MIMLSVM algorithm only constructs clusters at the bag level, while ignoring the distribution of the instances in the bags, this article proposed an improved MIMLSVM algorithm, I-MIMLSVM. Firstly, we constructed clusters at the instance level and explored the potential concept clusters among the instances. Then, we used the R-PATTERN algorithm to calculate the weight of each concept cluster, and calculated the importance degree of each concept cluster in each bag with the TF-IDF algorithm. Finally, each bag was represented as a concept vector, where each dimension of the vector equals the product of the weight of a concept cluster and its importance degree in this bag. A natural data set containing 2000 images was used in our experiments. The experimental results show that the improved algorithm performs better than the original algorithm, especially on the Hamming loss, Coverage and Average precision measures.
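The bag representation described above — the weight of each concept cluster times its TF-IDF importance in the bag — can be sketched as follows. The TF-IDF variant and all names are assumptions for illustration, since the abstract does not fix the exact formula:

```python
import math
from collections import Counter

def tf_idf(bags):
    """TF-IDF importance of each concept cluster in each bag.

    bags: list of lists of cluster labels (one label per instance)."""
    n = len(bags)
    df = Counter()                       # in how many bags each cluster appears
    for bag in bags:
        for c in set(bag):
            df[c] += 1
    result = []
    for bag in bags:
        counts = Counter(bag)
        total = len(bag)
        result.append({c: (counts[c] / total) * math.log(n / df[c])
                       for c in counts})
    return result

def concept_vector(bag_tfidf, cluster_weights, clusters):
    """Bag representation: per-dimension product of cluster weight and TF-IDF."""
    return [cluster_weights[c] * bag_tfidf.get(c, 0.0) for c in clusters]
```

Each bag thus becomes a fixed-length vector over the discovered concept clusters, which a standard SVM can then consume.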

Collaborative Filtering Algorithm Based on Bhattacharyya Coefficient and Item Correlation
ZANG Xue-feng, LIU Tian-qi, SUN Xiao-xin, FENG Guo-zhong and ZHANG Bang-zuo. Collaborative Filtering Algorithm Based on Bhattacharyya Coefficient and Item Correlation[J]. Computer Science, 2017, 44(12): 52-57. doi:10.11896/j.issn.1002-137X.2017.12.010
Abstract
In order to satisfy the information needs of users in the big data era, personalized recommender systems have been widely used. Collaborative filtering is a simple and effective recommendation algorithm. However, most traditional similarity methods compute the similarity based only on the users' co-rated scores, and they are not very suitable in sparse data environments. This paper proposed a new similarity method based on the Bhattacharyya coefficient. It uses all users' rating information for items, which can not only obtain similar interest features of users through their rating behavior, but also obtain the correlation between the items that the users have rated. Meanwhile, the new method also takes into account each user's rating preference, since different users have different rating habits. By considering more relevant factors in user similarity, a more appropriate neighborhood can be selected for the target users, efficiently improving the recommendations. Experiments on two real data sets show that our method outperforms other state-of-the-art similarity metrics.
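The Bhattacharyya coefficient at the core of this similarity compares two discrete rating distributions; a minimal sketch (illustrative only — the paper's full measure additionally combines item correlation and per-user rating preference):

```python
import math
from collections import Counter

def rating_distribution(ratings, scale=(1, 2, 3, 4, 5)):
    """Empirical distribution of an item's ratings over the rating scale."""
    counts = Counter(ratings)
    total = len(ratings)
    return [counts[r] / total for r in scale]

def bhattacharyya(p, q):
    """Bhattacharyya coefficient: sum_i sqrt(p_i * q_i), in [0, 1]."""
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
```

Because the coefficient uses all ratings of two items, two items with no co-rating user still get a meaningful similarity — the property the abstract exploits for sparse data.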

Fuzzy Clustering Algorithm for Incomplete Data Considering Missing Pattern
ZHENG Qi-bin, DIAO Xing-chun and CAO Jian-jun. Fuzzy Clustering Algorithm for Incomplete Data Considering Missing Pattern[J]. Computer Science, 2017, 44(12): 58-63. doi:10.11896/j.issn.1002-137X.2017.12.011
Abstract
Data completeness is an important metric for data availability. Because of problems in data acquisition, datasets in the real world are often incomplete. Missing data are usually ignored or imputed in common clustering algorithms. When data are missing not at random, ignorance or imputation results in poor clustering accuracy. Considering the relationship between the data missing pattern and the missing values, two possibilistic c-means (PCM) clustering algorithms were proposed: PatDistPCM, based on minimizing the sum of missing pattern distances, and PatCluPCM, based on missing pattern clustering. Experiments on public datasets show that the two proposed fuzzy clustering algorithms, PatDistPCM and PatCluPCM, can improve clustering precision and recall when the clustered data are missing not at random.

Time Synchronization Scheme for Distributed Cognitive Radio Networks Based on M&S Model
TANG Lin, LIU Jun-xia, ZHAO Li and QI Xing-bin. Time Synchronization Scheme for Distributed Cognitive Radio Networks Based on M&S Model[J]. Computer Science, 2017, 44(12): 64-67. doi:10.11896/j.issn.1002-137X.2017.12.012
Abstract
Aiming at time synchronization in distributed cognitive radio networks (DCRN), a collaborative time synchronization scheme based on the M&S model was proposed. First, the selected primary user (PU) broadcasts the free spectrum list to the neighbor nodes through a control channel to determine the available common channel between node pairs. Then, each neighbor node takes this PU node as a reference node and transmits synchronization information on this common channel. At the same time, the M&S synchronization model is used to adjust the internal timer of the nodes to realize clock synchronization. After several iterations, time synchronization of the entire network is finally realized. Experimental results show that the proposed scheme can achieve time synchronization for DCRN, with faster convergence speed and lower cost.

Parameter Independent Access Point Positioning Method Based on CSI
LI Yao-hui and CHEN Bing. Parameter Independent Access Point Positioning Method Based on CSI[J]. Computer Science, 2017, 44(12): 68-71. doi:10.11896/j.issn.1002-137X.2017.12.013
Abstract
With the popularity of location-based services, indoor positioning systems have attracted more and more attention. WiFi-based indoor localization has attracted considerable attention because of its open access and low cost. This paper leveraged fine-grained channel state information (CSI) instead of RSSI to reduce the influence of the indoor multipath effect at the receiver. We used a parameter-independent positioning model called PILM to determine the spatial position. By extracting effective CSI values and transforming the distance relation model, the problem is converted into finding the point that minimizes the two-norm of the residual vector, i.e., solving the problem with the least squares method. Experiments on two kinds of typical indoor scenarios verify the system performance.
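The final least-squares step can be illustrated with standard linearized trilateration: subtracting one range equation from the others yields a linear system whose least-squares solution is the position estimate. This is a generic sketch under that assumption, not the PILM model itself; anchor layout and the small normal-equation solver are illustrative:

```python
import math

def trilaterate(anchors, dists):
    """Least-squares 2-D position from anchor coordinates and ranges.

    Subtracting the first range equation from the others gives a linear
    system A p = b, solved here via the 2x2 normal equations A^T A p = A^T b."""
    (x0, y0), d0 = anchors[0], dists[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    # Normal equations for the 2-unknown least-squares problem.
    ata = [[sum(r[i] * r[j] for r in A) for j in range(2)] for i in range(2)]
    atb = [sum(r[i] * bi for r, bi in zip(A, b)) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    x = (atb[0] * ata[1][1] - atb[1] * ata[0][1]) / det
    y = (atb[1] * ata[0][0] - atb[0] * ata[1][0]) / det
    return x, y
```

With noisy CSI-derived ranges the same solver returns the point minimizing the two-norm of the residual over all anchors.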

Improved UCA-ESPRIT Algorithm Based on Uniform Circular Array
LIU Yan and LIAO Yong. Improved UCA-ESPRIT Algorithm Based on Uniform Circular Array[J]. Computer Science, 2017, 44(12): 72-74. doi:10.11896/j.issn.1002-137X.2017.12.014
Abstract
DOA (direction of arrival) estimation has become one of the research hotspots and difficulties in array signal processing. This paper discussed an improved algorithm using the theory of spatial spectrum estimation based on smart antennas: an improved UCA-ESPRIT algorithm. Firstly, the array input signal is reordered according to the central symmetry of the uniform circular array (UCA). Then the data array is real-value transformed, and eigenvalue decomposition is carried out on the covariance matrix of the transformed array. Finally, SVD is carried out to solve for the azimuth angle and elevation angle. Simulation results show that the proposed algorithm is not only suitable for DOA estimation of both incoherent and coherent signals, but also superior to the UCA-RB-MUSIC and UCA-ESPRIT algorithms.

Research on Fuzzy Decoupling Energy Efficiency Optimization Algorithm in Cloud Computing Environment
XING Wen-kai, GAO Xue-xia, HOU Xiao-mao and ZHAI Ping. Research on Fuzzy Decoupling Energy Efficiency Optimization Algorithm in Cloud Computing Environment[J]. Computer Science, 2017, 44(12): 75-79. doi:10.11896/j.issn.1002-137X.2017.12.015
Abstract
Under the premise of ensuring the high computing performance and excellent service quality of a cloud computing environment, the optimization of system energy consumption has become the key problem for cloud computing's wide promotion. In order to adapt to multi-load and multi-task cloud computing environments, a fuzzy decoupling energy efficiency optimization scheme was designed. Firstly, the input, output and intermediate variable parameters were set. Then an FNN model and decoupling rules were established, and the key parameters affecting energy efficiency were extracted and optimized. This method can quickly find and evaluate the key factors which affect energy efficiency, thus achieving stable and controllable energy efficiency optimization. Finally, a parameter disturbance self-adjustment design was added, and fuzzy decoupling is adopted to adjust the parameter disturbance of the decoupling operation to improve the robustness of the system.

Novel Tag Anticollision Protocol with Splitting Binary Tracking Tree
LI Zhan-qing, LI Guang-shun, WU Jun-hua and KONG Ling-zeng. Novel Tag Anticollision Protocol with Splitting Binary Tracking Tree[J]. Computer Science, 2017, 44(12): 80-85. doi:10.11896/j.issn.1002-137X.2017.12.016
Abstract
To solve the problem of tag collision in large-scale RFID (radio frequency identification) systems, a new tag anti-collision protocol was proposed which combines bit tracking technology and the optimal partition theory. The protocol consists of two phases, namely a binary splitting phase and a binary tracking tree identifying phase. The first phase repeatedly divides the set of currently responding tags into two subsets by choosing "0" or "1" randomly until a readable slot or an idle slot is obtained. The second phase first handles the number of tags in the left subset by using the optimal partition theory to obtain the size of slots in the right subset, and then finishes the identification of tags by utilizing the binary tracking tree slot algorithm on all right subsets in a bottom-up manner. The splitting process is simple and easy to implement, and the recognition process does not require estimating the number of tags in advance, so the required computing power of the device is low; moreover, the optimal partition can obviously reduce the idle slots. Theoretical analysis and simulation results demonstrate that the protocol can improve identification efficiency and performs better in large-scale RFID systems.
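The random binary splitting phase can be simulated in a few lines. This is a toy model of the first phase only (slot bookkeeping for the optimal-partition and tracking-tree phases is omitted): colliding tags repeatedly draw a random bit until each slot holds one tag (readable) or none (idle).

```python
import random

def binary_split(tags, rng):
    """Simulate random binary splitting in a tree-based anti-collision
    protocol.  Returns (slots_used, tags_in_read_order)."""
    slots, read = 0, []
    stack = [list(tags)]            # each entry: the tags answering in one slot
    while stack:
        group = stack.pop()
        slots += 1
        if len(group) == 1:
            read.append(group[0])   # readable slot: tag singled out
            continue
        if not group:
            continue                # idle slot: nobody answered
        # Collision: each tag draws 0 or 1; the two subsets answer in turn.
        bits = [rng.random() < 0.5 for _ in group]
        stack.append([t for t, b in zip(group, bits) if not b])
        stack.append([t for t, b in zip(group, bits) if b])
    return slots, read
```

For n tags the splitting tree needs at least 2n-1 slots (n readable leaves plus at least n-1 collision slots); the paper's optimal-partition phase exists precisely to cut down the idle slots beyond this bound.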

Wavelet Neural Network Model for Cognitive Radio Spectrum Prediction
ZHU Zheng-guo, HE Ming-xing, LIU Rong-qi and LIU Ze-min. Wavelet Neural Network Model for Cognitive Radio Spectrum Prediction[J]. Computer Science, 2017, 44(12): 86-89. doi:10.11896/j.issn.1002-137X.2017.12.017
Abstract
Accurate spectrum prediction can effectively reduce the energy consumption of a cognitive radio system and improve its throughput. In order to improve the prediction accuracy of spectrum prediction methods, a wavelet neural network model was proposed to predict the state of channel occupancy. The discrete wavelet transform was used to generate the time-frequency distribution of the signal, and a time series was used to represent the state of a sub-channel. The tradeoff between prediction accuracy, utilization and parameter initialization was analyzed to select a near-optimal model. The experimental results show that, compared with a model based on a BP neural network, the proposed model shows better performance in terms of prediction accuracy and energy consumption.

Research on Frame Aggregation Algorithm of Wireless LAN Based on Effective Capacity Link Model
TONG Wang-yu and ZHANG Ying-jiang. Research on Frame Aggregation Algorithm of Wireless LAN Based on Effective Capacity Link Model[J]. Computer Science, 2017, 44(12): 90-93. doi:10.11896/j.issn.1002-137X.2017.12.018
Abstract
In order to provide statistical delay guarantees in the case of link quality fluctuations, a novel frame aggregation algorithm was proposed, which can be used in high-speed IEEE 802.11 wireless local area networks. First, QoS is specified in the form of a target delay bound and a timeout probability, and it is treated as an optimization problem by constructing an effective capacity model. Then, a simple formula is derived by applying an appropriate approximation, which is solved using a proportional-integral-derivative (PID) controller. The proposed PID-controller aggregation algorithm can independently adapt to the time limit of each link, and it only needs to be implemented on the transmitter side (such as an access point, AP), without requiring any changes to medium access control (MAC). NS-3 simulation results show that, compared with the earliest-deadline-first algorithm, the proposed scheme performs better.
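The PID control loop at the heart of such a scheme can be sketched generically. Gains, frame limits and function names below are illustrative assumptions; the paper derives its error signal from the effective-capacity formula rather than from raw measured delay:

```python
class PIDController:
    """Discrete PID controller: u = Kp*e + Ki*sum(e) + Kd*(e - e_prev)."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

def adjust_aggregate(frames, target_delay, measured_delay, pid,
                     min_frames=1, max_frames=64):
    """Shrink the aggregate when delay exceeds the target, grow it otherwise."""
    u = pid.update(target_delay - measured_delay)
    frames = int(round(frames + u))
    return max(min_frames, min(max_frames, frames))
```

Running this once per transmitted aggregate, per link, matches the abstract's claim that each link adapts independently with transmitter-side changes only.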

Audio Encryption Algorithm Based on Chaos and Wavelet Transform
WEI Ya-juan, FAN Jiu-lun and REN Fang. Audio Encryption Algorithm Based on Chaos and Wavelet Transform[J]. Computer Science, 2017, 44(12): 94-99. doi:10.11896/j.issn.1002-137X.2017.12.019
Abstract
To make audio information broadcast across the Web safely, an audio encryption algorithm based on chaotic systems and the wavelet transform in an MPEG setting was proposed. Firstly, a random matrix is added to the signal to change its values. Secondly, the logistic map is used to scramble and diffuse the position of each signal sample in the time and wavelet domains three times. Finally, the secure signal is obtained. The random matrix used as the key is generated by a piecewise logistic map and random seeds. The experimental results show that the proposed algorithm not only makes the gray histogram uniform and the signal correlation weak, but also provides a large key space and high sensitivity to the keys. Therefore, the audio encryption algorithm has high security for protecting audio information.
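The scrambling step can be illustrated with a one-dimensional logistic map driving a position permutation. This is a toy sketch: the actual algorithm also diffuses the values, works in both the time and wavelet domains, and uses a piecewise logistic map for the key matrix.

```python
def logistic_sequence(x0, n, r=3.99):
    """Iterate the logistic map x <- r*x*(1-x), returning n chaotic values."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

def scramble(samples, x0):
    """Permute sample positions by the rank order of the chaotic sequence."""
    chaos = logistic_sequence(x0, len(samples))
    order = sorted(range(len(samples)), key=chaos.__getitem__)
    return [samples[i] for i in order], order

def unscramble(scrambled, order):
    """Invert the permutation: position p of the ciphertext returns to order[p]."""
    out = [None] * len(scrambled)
    for pos, i in enumerate(order):
        out[i] = scrambled[pos]
    return out
```

Because the permutation is fully determined by the key x0, the receiver regenerates the same chaotic sequence to invert it, and a tiny change in x0 yields a completely different permutation — the key sensitivity the abstract reports.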

Reversible Image Watermarking Algorithm Based on Multi-scale Decomposition and Prediction Error Expansion
ZHANG Zheng-wei, WU Li-fa and YAN Yun-yang. Reversible Image Watermarking Algorithm Based on Multi-scale Decomposition and Prediction Error Expansion[J]. Computer Science, 2017, 44(12): 100-104. doi:10.11896/j.issn.1002-137X.2017.12.020
Abstract
In view of the contradiction between the embedding capacity and the visual quality of existing reversible image watermarking algorithms, a reversible image watermarking algorithm based on multi-scale decomposition and prediction error expansion was proposed. Firstly, the original image is decomposed into homogeneous and non-homogeneous blocks by multi-scale decomposition. Then, the watermark information is embedded by prediction error expansion in the homogeneous blocks. Finally, based on the information entropy of the non-homogeneous blocks, non-homogeneous blocks are selected according to the amount of embedded watermark information, and the remaining watermark information is embedded in the middle and high frequencies by the integer wavelet transform. Experimental results show that the proposed algorithm is easy to implement and completely reversible, effectively improves the watermark embedding capacity, and achieves higher visual quality.
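Prediction error expansion itself is compact: a bit b is hidden by doubling the prediction error, e' = 2e + b, and the decoder recovers both the bit and the original value exactly. This sketch omits the overflow/underflow and location-map handling a full scheme needs:

```python
def pee_embed(pixel, predicted, bit):
    """Embed one bit by expanding the prediction error: e' = 2e + b."""
    e = pixel - predicted
    return predicted + 2 * e + bit

def pee_extract(marked, predicted):
    """Recover (original_pixel, bit) from the expanded prediction error."""
    e_marked = marked - predicted
    bit = e_marked & 1
    e = (e_marked - bit) // 2
    return predicted + e, bit
```

The scheme is reversible because the parity of the expanded error carries the bit while integer halving restores e; restricting it to homogeneous blocks, where errors are small, is what keeps the distortion low.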

Safety Assessment of User Behaviors under Environment of Cloud Computing Based on Improved VIKOR Method
LI Cun-bin, LIN Shuai-shuai and XU Fang-qiu. Safety Assessment of User Behaviors under Environment of Cloud Computing Based on Improved VIKOR Method[J]. Computer Science, 2017, 44(12): 105-113. doi:10.11896/j.issn.1002-137X.2017.12.021
Abstract
Cloud computing brings great convenience and reduces the cost of using resources for cloud users. But it also makes it possible for illegal users to obtain core and private data easily. Therefore, the analysis and evaluation of user behavior is the key to effectively improving the security of the cloud. Firstly, according to the characteristics of user behavior in the cloud computing environment, an evaluation index system was established. Then, an improved VIKOR method based on the AHP-entropy weight method was proposed to overcome the shortcomings of purely subjective or purely objective weighting methods. Finally, five users were selected for case analysis. Comparative analysis with other comprehensive evaluation methods shows that the proposed method is scientific and effective, and has certain advantages.
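The entropy-weight half of the combined weighting can be sketched as follows: indicators whose values vary more across the evaluated users carry more information and get larger objective weights. The AHP half, which supplies subjective weights to be fused with these, is omitted, and function names are illustrative:

```python
import math

def entropy_weights(matrix):
    """Objective indicator weights from Shannon entropy.

    matrix[i][j]: value of indicator j for alternative i (all positive)."""
    m, n = len(matrix), len(matrix[0])
    entropies = []
    for j in range(n):
        col = [row[j] for row in matrix]
        s = sum(col)
        p = [v / s for v in col]
        # Normalized entropy in [0, 1]; a constant column has entropy 1.
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        entropies.append(e)
    d = [1.0 - e for e in entropies]          # degree of divergence
    total = sum(d)
    return [di / total for di in d]
```

An indicator that is identical for every user gets weight 0, since it cannot discriminate between safe and unsafe behavior.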

Research on Opaque Predicate Obfuscation Technique Based on Chaotic Opaque Expression
SU Qing and SUN Jin-tian. Research on Opaque Predicate Obfuscation Technique Based on Chaotic Opaque Expression[J]. Computer Science, 2017, 44(12): 114-114. doi:10.11896/j.issn.1002-137X.2017.12.022
Abstract
In order to improve code obfuscation, a chaotic opaque expression construction method based on the chaotic map and the quadratic map was proposed. According to the definition of a chaotic opaque expression, a chaotic map with the properties of sensitive dependence on initial values, pseudo-randomness, uniform distribution of the state space, multiple branching and no special symbols is used. Taking the two-dimensional tent map as an example, the matched quadratic map maps the running state space of the chaotic map to the result space of the expression, constructing a chaotic opaque expression. Combining chaotic opaque expressions with opaque predicates yields a new method for constructing opaque predicates. At the same time, an opaque predicate insertion method was proposed in which a newly constructed predicate is merged with an original predicate. The combination of the two forms a new opaque predicate obfuscation technique. The experimental results show that the technique brings an obvious improvement in various software complexity indexes, while the runtime cost of the program is relatively low.
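A toy version of the idea, using a one-dimensional tent map in place of the paper's two-dimensional one: the chaotic state is pushed through a quadratic map to an integer v whose concrete value is hard to predict statically, while the predicate wrapped around it is algebraically always true. All names and constants are illustrative:

```python
def tent(x, mu=1.99):
    """One step of the tent map on [0, 1]."""
    return mu * x if x < 0.5 else mu * (1.0 - x)

def chaotic_opaque(seed, iters=16):
    """Iterate the tent map from a runtime-dependent seed, then map the
    chaotic state through a quadratic into a small integer."""
    x = (seed % 997) / 997.0
    for _ in range(iters):
        x = tent(x)
    # Quadratic map of the chaotic state into {0, 1, ..., 9}.
    return int(4.0 * x * (1.0 - x) * 10) % 10

def opaque_true(seed):
    """Opaquely true predicate: v*(v-1) is even for every integer v,
    but v itself is only decided at run time."""
    v = chaotic_opaque(seed)
    return (v * (v - 1)) % 2 == 0
```

A static analyzer sees a data-dependent v and cannot fold the branch away, while the defender knows the predicate always takes the same arm.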

Optimization Algorithm for Extensible Access Control Markup Language Policies
LU Qiu-ru, CHEN Jian-ping, MA Hai-ying and CHEN Wei-xu. Optimization Algorithm for Extensible Access Control Markup Language Policies[J]. Computer Science, 2017, 44(12): 115-119. doi:10.11896/j.issn.1002-137X.2017.12.023
Abstract
The extensible access control markup language XACML is widely used.To improve the efficiency of XACML policy evaluation,an XACML policy optimization algorithm based on the Venn diagram method was proposed.The XACML policy and rule structures are expressed as Venn diagrams in set theory.On the basis of setting the combination algorithm priorities,the conflicts and redundancies among policies and rules are detected and eliminated according to the intersection and union relations between the sets.Experimental tests show that the algorithm reduces the evaluation time by 10% to 20% for the mainstream engines and decreases the occupied memory space at the same time,thus achieving the goal of policy optimization.
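The set-based conflict and redundancy detection can be sketched as follows (the rule encoding and names are illustrative, not the paper's):

```python
# Hedged sketch: each XACML rule is modelled as a set of
# (subject, resource, action) triples plus an effect. Pairwise set relations
# expose redundancy (one rule's scope inside another's with the same effect)
# and conflict (overlapping scopes with opposite effects).

def analyse(rules):
    """rules: list of (name, frozenset_of_triples, effect)."""
    redundant, conflicting = [], []
    for i, (na, sa, ea) in enumerate(rules):
        for nb, sb, eb in rules[i + 1:]:
            if sa & sb:                      # scopes overlap
                if ea != eb:
                    conflicting.append((na, nb))
                elif sa <= sb or sb <= sa:   # one scope contains the other
                    redundant.append((na, nb))
    return redundant, conflicting

r1 = ("R1", frozenset({("alice", "db", "read")}), "Permit")
r2 = ("R2", frozenset({("alice", "db", "read"), ("bob", "db", "read")}), "Permit")
r3 = ("R3", frozenset({("bob", "db", "read")}), "Deny")
red, con = analyse([r1, r2, r3])
```

A real optimizer would then drop the redundant rule and resolve each conflict according to the configured rule-combining algorithm priority.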
-
New Improved Algorithm Based on REESSE3+
董大强,殷新春. 基于REESSE3+算法的改进算法[J]. 计算机科学, 2017, 44(12): 120-125.
DONG Da-qiang and YIN Xin-chun. New Improved Algorithm Based on REESSE3+[J]. Computer Science, 2017, 44(12): 120-125. - DONG Da-qiang and YIN Xin-chun
- Computer Science. 2017, 44 (12): 120-125. doi:10.11896/j.issn.1002-137X.2017.12.024
-
Abstract
PDF(835KB) ( 382 )
- References | Related Articles | Metrics
-
REESSE3+ is an 8-round block cipher algorithm proposed by Professor Su in 2014,and this paper made some improvements on it.Since REESSE3+ is inspired by IDEA,which was proposed by Professor Lai and uses three incompatible group operations to ensure security,we used the Markov cipher model proposed by Professor Lai to compare REESSE3+(16) with the 16-bit-input version of the improved algorithm.Through experiments,we found that in the face of differential cryptanalysis,the 16-bit-input improved algorithm is more secure than the original REESSE3+(16).
-
Research on Static Analysis Formalism Supporting Abstract Interpretation
张弛,黄志球,丁泽文. 支持抽象解释的静态分析方法的形式化体系研究[J]. 计算机科学, 2017, 44(12): 126-130.
ZHANG Chi, HUANG Zhiqiu and DING Zewen. Research on Static Analysis Formalism Supporting Abstract Interpretation[J]. Computer Science, 2017, 44(12): 126-130. - ZHANG Chi, HUANG Zhiqiu and DING Zewen
- Computer Science. 2017, 44 (12): 126-130. doi:10.11896/j.issn.1002-137X.2017.12.025
-
Abstract
PDF(890KB) ( 777 )
- References | Related Articles | Metrics
-
In the safety-critical domain,software safety assurance has become a widely concerned issue.Static program analysis is an effective method for automating program validation,and an efficient formal method for verifying non-functional safety-critical properties.CPA (Configurable Program Analysis) is a formalism for static analysis which intends to describe the analysis phase of static analysis by a general formal framework.This paper aimed at formally modeling the analysis phase of abstract interpretation with the CPA formalism.We illustrated the rules of transformation from source code to the formalism of configurable program analysis.This paper provides a way to automatically verify software correctness in the safety-critical domain and a feasible solution for static analysis tools based on abstract interpretation.
-
Multiple Kernel Dictionary Learning for Software Defect Prediction
王铁建,吴飞,荆晓远. 基于多核字典学习的软件缺陷预测[J]. 计算机科学, 2017, 44(12): 131-134.
WANG Tie-jian, WU Fei and JING Xiao-yuan. Multiple Kernel Dictionary Learning for Software Defect Prediction[J]. Computer Science, 2017, 44(12): 131-134. - WANG Tie-jian, WU Fei and JING Xiao-yuan
- Computer Science. 2017, 44 (12): 131-134. doi:10.11896/j.issn.1002-137X.2017.12.026
-
Abstract
PDF(730KB) ( 302 )
- References | Related Articles | Metrics
-
A multiple kernel dictionary learning approach for software defect prediction was proposed.Software historical defect data have a complicated structure and a marked characteristic of class imbalance.Multiple kernel learning is an effective technique in the field of machine learning which can map the historical defect data to a higher-dimensional feature space and obtain a better representation of them.We built a multiple kernel dictionary learning classifier which has the advantages of both multiple kernel learning and dictionary learning.The widely used NASA MDP datasets are employed as test data to evaluate the performance of all compared methods.Experimental results demonstrate the effectiveness of the proposed multiple kernel dictionary learning approach for the software defect prediction task.
-
Type-2 Fuzzy Logic Based Multi-threaded Data Race Detection
杨璐,余守文,严建峰. 基于二型模糊逻辑的多线程数据竞争检测方法研究[J]. 计算机科学, 2017, 44(12): 135-143.
YANG Lu, YU Shou-wen and YAN Jian-feng. Type-2 Fuzzy Logic Based Multi-threaded Data Race Detection[J]. Computer Science, 2017, 44(12): 135-143. - YANG Lu, YU Shou-wen and YAN Jian-feng
- Computer Science. 2017, 44 (12): 135-143. doi:10.11896/j.issn.1002-137X.2017.12.027
-
Abstract
PDF(1326KB) ( 473 )
- References | Related Articles | Metrics
-
Multi-threaded mechanism has been widely used in software development because of its advantages.However,with the growth of program scales,there are plenty of potential parallel defects in multi-threaded programs.The most common parallel defects are data race and deadlock.However,none of the traditional defect detection methods take into account the uncertainty of time sequence analysis and run-time environment.And it is hard to calculate the probability of parallel defects to generate a priority order list based on the probability.To solve these problems,we proposed a data race detection method based on type-2 fuzzy logic.This method considers the influence of run-time environment factors,and uses the traditional parallel defects detection methods as pre-processing step.Then it builds a time sequence analysis model for the target program based on type-2 fuzzy logic and hidden Markov model.It can calculate the probability of all the potential defects,then generates a priority order list for software developers to deal with defects and allocate resources.
-
Software Component Retrieval Method Based on Ontology Concept Similarity
柯昌博,黄志球,肖甫. 基于本体概念相似度的软件构件检索方法[J]. 计算机科学, 2017, 44(12): 144-149.
KE Chang-bo, HUANG Zhi-qiu and XIAO Fu. Software Component Retrieval Method Based on Ontology Concept Similarity[J]. Computer Science, 2017, 44(12): 144-149. - KE Chang-bo, HUANG Zhi-qiu and XIAO Fu
- Computer Science. 2017, 44 (12): 144-149. doi:10.11896/j.issn.1002-137X.2017.12.028
-
Abstract
PDF(813KB) ( 409 )
- References | Related Articles | Metrics
-
With the development of software reuse and product line technology,how to quickly develop software products from components based on a product line has become a research focus.The key to implementing this technology is a highly efficient component retrieval method.In this paper,we described components with the ontology Web language and transformed them into ontology trees for fuzzy matching.Then we restructured the mismatching components and revised the concept similarity of query ontology trees with the KMP algorithm,in order to retrieve more accurate components satisfying user requirements.At last,we proposed a component retrieval algorithm and accordingly developed a prototype of a component repository query system.By comparing with the query method based on facets and features,we proved its feasibility and effectiveness through experiments.
-
Methodology for Classes Design Quality Assessment
胡文生,杨剑锋,赵明. 类设计质量评估方法的研究[J]. 计算机科学, 2017, 44(12): 150-155.
HU Wen-sheng, YANG Jian-feng and ZHAO Ming. Methodology for Classes Design Quality Assessment[J]. Computer Science, 2017, 44(12): 150-155. - HU Wen-sheng, YANG Jian-feng and ZHAO Ming
- Computer Science. 2017, 44 (12): 150-155. doi:10.11896/j.issn.1002-137X.2017.12.029
-
Abstract
PDF(893KB) ( 368 )
- References | Related Articles | Metrics
-
This paper introduced in detail the C&K metric suite suggested by Chidamber and Kemerer and combined it with grey relational analysis theory.A methodology for class design quality assessment based on the C&K metric suite and grey theory was proposed.Firstly,this methodology derives the best class design standard from the C&K thresholds and the definition of an acceptable class.Grey relational analysis is then carried out between the best class design standard and all classes of an object-oriented program,and the worst class is found.This methodology can help software designers find flawed classes in the early phases of the software life cycle,thereby improving the reliability and maintainability of software systems.
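The grey relational comparison against a best class standard can be sketched as follows (the C&K reference values below are invented for illustration):

```python
# Hedged sketch of grey relational analysis: each class's metric vector is
# compared with a "best class" reference; the lowest relational grade marks
# the worst-designed class. Reference and metric values are illustrative.

def grey_relational_grades(reference, classes, rho=0.5):
    """rho is the distinguishing coefficient, conventionally 0.5."""
    deltas = [[abs(r - x) for r, x in zip(reference, row)] for row in classes]
    dmin = min(min(row) for row in deltas)
    dmax = max(max(row) for row in deltas)
    grades = []
    for row in deltas:
        coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in row]
        grades.append(sum(coeffs) / len(coeffs))   # relational grade
    return grades

# Reference: ideal values for (WMC, DIT, NOC, CBO) -- illustrative thresholds.
best = [10.0, 3.0, 2.0, 5.0]
grades = grey_relational_grades(best, [[12.0, 3.0, 2.0, 6.0],    # near ideal
                                       [60.0, 8.0, 0.0, 30.0]])  # flawed class
worst = grades.index(min(grades))
```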
-
Real-time System Verification Approach Based on Observer Patterns
赵鹤,洪玫,杨秋辉,高婉玲. 基于观察者模式的实时系统验证方法[J]. 计算机科学, 2017, 44(12): 156-162.
ZHAO He, HONG Mei, YANG Qiu-hui and GAO Wan-ling. Real-time System Verification Approach Based on Observer Patterns[J]. Computer Science, 2017, 44(12): 156-162. - ZHAO He, HONG Mei, YANG Qiu-hui and GAO Wan-ling
- Computer Science. 2017, 44 (12): 156-162. doi:10.11896/j.issn.1002-137X.2017.12.030
-
Abstract
PDF(1036KB) ( 486 )
- References | Related Articles | Metrics
-
The verification of complex real-time systems has always attracted wide attention.A common way to describe verification properties is temporal logic,which is complex and difficult for laypeople.An observer pattern is an additional subsystem that can transform complex verification properties into a simple reachability problem,so its use avoids complex verification algorithms.The observer patterns proposed by Etienne and Nouha Abid were applied to a real-time system,the Train-Gate system.UPPAAL was used to construct the observer models according to some scenarios in Train-Gate.A comparative experiment compared the verification results with and without observer patterns.The experimental results show that both observer patterns and direct verification properties obtain correct results,while the use of observer patterns saves time and is easier for laypeople to adopt.Therefore,the use of observer patterns is feasible in the verification of real-time systems like Train-Gate.
-
Evolutionary CTT-SP Algorithm for Cost-effectively Storing Scientific Datasets in Cloud
郭梅,袁栋,杨耘. 云计算环境下低成本存储科学数据的演化CTT-SP算法[J]. 计算机科学, 2017, 44(12): 163-168.
GUO Mei, YUAN Dong and YANG Yun. Evolutionary CTT-SP Algorithm for Cost-effectively Storing Scientific Datasets in Cloud[J]. Computer Science, 2017, 44(12): 163-168. - GUO Mei, YUAN Dong and YANG Yun
- Computer Science. 2017, 44 (12): 163-168. doi:10.11896/j.issn.1002-137X.2017.12.031
-
Abstract
PDF(846KB) ( 382 )
- References | Related Articles | Metrics
-
Massive computation power and storage capacity of cloud computing systems allow scientists to deploy computation and data intensive applications in the cloud,where large application datasets can be stored.Based on the cloud service’s pay-as-you-go model,and taking into consideration the status adjustment cost that price changes impose on the original datasets’ storage status,we proposed an evolutionary CTT-SP algorithm based on the traditional minimum cost benchmarking CTT-SP algorithm for cost-effectively storing large volumes of generated scientific datasets in the cloud.The algorithm can automatically decide whether a generated dataset should be stored in the cloud or not,and achieve a better trade-off between computation and storage at the new price.Random simulations conducted with Amazon’s cost model show that the proposed evolutionary CTT-SP algorithm can significantly reduce the overall cost of storing scientific datasets when the cloud service’s price changes.
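The underlying store-versus-regenerate trade-off can be sketched with a simplified per-dataset rule (illustrative prices; the actual CTT-SP algorithm works on a cost transitive tournament graph rather than per-dataset decisions):

```python
# Hedged sketch: keep a generated dataset stored when its storage cost rate
# is below the expected regeneration cost per unit time; when the provider's
# prices change, re-running the same decision may flip the storage status.

def should_store(size_gb, storage_price_gb_month, gen_cost, uses_per_month):
    storage_rate = size_gb * storage_price_gb_month  # $/month if kept stored
    regen_rate = gen_cost * uses_per_month           # $/month if regenerated
    return storage_rate < regen_rate

# A storage price rise flips the decision for the same dataset:
before = should_store(100, 0.10, 2.0, 8)  # 10 $/mo vs 16 $/mo -> store
after = should_store(100, 0.25, 2.0, 8)   # 25 $/mo vs 16 $/mo -> regenerate
```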
-
System Failure Reachability Graph Generation Method Based on Temporal Relation
范亚琼,陈海燕. 基于时序关系的系统失效可达图生成方法[J]. 计算机科学, 2017, 44(12): 169-174.
FAN Ya-qiong and CHEN Hai-yan. System Failure Reachability Graph Generation Method Based on Temporal Relation[J]. Computer Science, 2017, 44(12): 169-174. - FAN Ya-qiong and CHEN Hai-yan
- Computer Science. 2017, 44 (12): 169-174. doi:10.11896/j.issn.1002-137X.2017.12.032
-
Abstract
PDF(772KB) ( 338 )
- References | Related Articles | Metrics
-
In view of the state space explosion problem in generating system reachability graphs for state/event fault trees,a method for generating system failure reachability graphs based on temporal relations was proposed in this paper.By analyzing the relationship between triggering and triggered events,the sequence of events is sorted.According to the temporal relations,all pairs of unreachable states of the system components can be obtained.Through establishing the Cartesian product of the reachable states of the components,all reachable states of the system can be obtained.According to the connection table and the minimum cut set,the state reachability graph of the system failure is obtained,which effectively solves the state space explosion problem in its generation process.The method was used to generate the reachability graph of a torpedo attack system.The experiment verified the feasibility and stability of the method,and shows that it can effectively alleviate the state space explosion problem and provide a new way to generate system reachability graphs.
-
Research on Passing Quality Quantification Based on GraphX Passing Network
廖彬,张陶,国冰磊,于炯,牛亚锋,张旭光,刘炎. 基于GraphX传球网络的传球质量量化研究[J]. 计算机科学, 2017, 44(12): 175-182.
LIAO Bin, ZHANG Tao, GUO Bing-lei, YU Jiong, NIU Ya-feng, ZHANG Xu-guang and LIU Yan. Research on Passing Quality Quantification Based on GraphX Passing Network[J]. Computer Science, 2017, 44(12): 175-182. - LIAO Bin, ZHANG Tao, GUO Bing-lei, YU Jiong, NIU Ya-feng, ZHANG Xu-guang and LIU Yan
- Computer Science. 2017, 44 (12): 175-182. doi:10.11896/j.issn.1002-137X.2017.12.033
-
Abstract
PDF(1317KB) ( 758 )
- References | Related Articles | Metrics
-
Although big data technology continues to mature,relevant application research in the field of competitive sports is still in the exploratory stage.Conventional basketball technical statistics lack records of passing data as well as statistical analysis,data mining and application of such data.Firstly,based on GraphX,we created the passing network graph,which laid a foundation for future research on passing quality.Secondly,PESV (Pass Expectation Score Value),a method for evaluating basketball passing quality,was proposed.Compared with the traditional ATR (Assist Turnover Ratio),defined as the ratio of assists to turnovers,PESV can make a more comprehensive evaluation of passing quality.Finally,we introduced a few application scenarios of PESV based on the passing network,including the analysis of the impact of passing quality on game results,passing route selection based on PESV values,and taking the Chinese player Jeremy Lin as an example to calculate his passing expectation values in the 2015-2016 season.
-
Analysis of Development Status of World Artificial Intelligence Based on Scientific Measurement
李悦,苏成,贾佳,许震,田瑞强. 基于科学计量的世界人工智能领域发展状况分析[J]. 计算机科学, 2017, 44(12): 183-187.
LI Yue, SU Cheng, JIA Jia, XU Zhen and TIAN Rui-qiang. Analysis of Development Status of World Artificial Intelligence Based on Scientific Measurement[J]. Computer Science, 2017, 44(12): 183-187. - LI Yue, SU Cheng, JIA Jia, XU Zhen and TIAN Rui-qiang
- Computer Science. 2017, 44 (12): 183-187. doi:10.11896/j.issn.1002-137X.2017.12.034
-
Abstract
PDF(676KB) ( 495 )
- References | Related Articles | Metrics
-
This paper analyzed the field of artificial intelligence over the past 15 years to predict its future development trend and to help researchers quickly grasp the profile of the field.The SciMAT software was used to carry out co-occurrence analysis of keywords,the development trend and sub-domain maturity were predicted by the generated theme evolution chart,and the strategy map was used to predict the future development trend.The total number of publications and the total number of keywords in the field of artificial intelligence are on the rise,indicating that the field is developing well.The increase in the number of theme groups at various stages indicates that the field is developing in many directions.In the past 15 years,neural networks and intelligent robots have been hot topics in the field of artificial intelligence,and as time goes on,the scale of the research has expanded and gradually matured.Artificial intelligence is undergoing a change from theory to application.Neural networks and intelligent robots will be key directions of future development in the artificial intelligence field.
-
Multi-granularity Text Sentiment Classification Model Based on Three-way Decisions
张越兵,苗夺谦,张志飞. 基于三支决策的多粒度文本情感分类模型[J]. 计算机科学, 2017, 44(12): 188-193.
ZHANG Yue-bing, MIAO Duo-qian and ZHANG Zhi-fei. Multi-granularity Text Sentiment Classification Model Based on Three-way Decisions[J]. Computer Science, 2017, 44(12): 188-193. - ZHANG Yue-bing, MIAO Duo-qian and ZHANG Zhi-fei
- Computer Science. 2017, 44 (12): 188-193. doi:10.11896/j.issn.1002-137X.2017.12.035
-
Abstract
PDF(998KB) ( 635 )
- References | Related Articles | Metrics
-
Text sentiment classification is a very important branch of natural language processing.Researchers focus on the accuracy of sentiment classification but ignore the time cost of training and classification.The bag-of-words feature used in most text sentiment classification methods has high dimensionality and poor interpretability.To solve the above problems,we presented a multi-granularity text sentiment classification model based on three-way decisions for document-level sentiment classification.With the aid of granular computing,we structured text into three levels of granularity,namely word,sentence and document,and presented a new kind of feature,the SSS (sentence-level sentiment strength) feature,to represent a document,in which the value of each dimension is a sentence-level sentiment strength.In the classification process,we first utilized the three-way decisions method to divide the objects into three regions.The objects in the positive region and negative region are classified into the positive class and negative class,respectively,and a state-of-the-art classifier,SVM,is employed to classify the objects in the boundary region.Experimental results show that combining the three-way decisions method with SVM can improve classification accuracy.The SSS feature greatly reduces the time cost of feature extraction and training because of its low dimensionality,and the three-way decisions method reduces the time cost of classification while ensuring good classification accuracy.
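The three-way division step can be sketched as follows (the thresholds and the stand-in boundary classifier are illustrative; the paper uses SVM for the boundary region):

```python
# Hedged sketch: documents with sentiment probability >= alpha go to the
# positive region, <= beta to the negative region, and only the boundary
# region is passed to a stronger, more expensive classifier.

def three_way_classify(probs, boundary_clf, alpha=0.7, beta=0.3):
    labels = []
    for i, p in enumerate(probs):
        if p >= alpha:
            labels.append("pos")
        elif p <= beta:
            labels.append("neg")
        else:                      # boundary region: defer to the costly model
            labels.append(boundary_clf(i))
    return labels

# Stand-in for the SVM: records which documents reached the boundary region.
deferred = []
def fake_svm(i):
    deferred.append(i)
    return "pos"

out = three_way_classify([0.9, 0.1, 0.5, 0.65], fake_svm)
```

Only the two uncertain documents reach the expensive classifier, which is where the reported classification-time savings come from.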
-
Adaptive Nearest Neighbor Algorithm with Dynamic Neighborhood
冯骥,张程,朱庆生. 一种具有动态邻域特点的自适应最近邻居算法[J]. 计算机科学, 2017, 44(12): 194-201.
FENG Ji, ZHANG Cheng and ZHU Qing-sheng. Adaptive Nearest Neighbor Algorithm with Dynamic Neighborhood[J]. Computer Science, 2017, 44(12): 194-201. - FENG Ji, ZHANG Cheng and ZHU Qing-sheng
- Computer Science. 2017, 44 (12): 194-201. doi:10.11896/j.issn.1002-137X.2017.12.036
-
Abstract
PDF(1144KB) ( 417 )
- References | Related Articles | Metrics
-
Traditional nearest neighbor algorithms proposed in the literature include k-nearest neighbor (KNN) and reverse nearest neighbor (RNN),but most of them are sensitive to the choice of parameters.In this paper,a novel nearest neighbor algorithm was proposed,named natural neighbor (NaN).In contrast to KNN and RNN,it is a scale-free nearest neighbor method and can be used effectively on any dataset,especially data on manifolds.This article discussed the theoretical model of natural neighbor and its detailed implementation algorithm,and the related questions of the NaN concept were examined through experimental tests.
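The parameter-free neighborhood search behind natural neighbor can be sketched as follows (a brute-force toy version; the published algorithm is more elaborate):

```python
# Hedged sketch: grow the neighborhood round r until every point appears in
# some other point's r-nearest-neighbour list. The r reached is determined by
# the data itself, with no user-chosen k -- the "scale-free" property.

def natural_neighbor_round(points):
    n = len(points)
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    # neighbours of each point ordered by distance (excluding itself)
    order = [sorted((j for j in range(n) if j != i),
                    key=lambda j: dist(points[i], points[j]))
             for i in range(n)]
    r = 0
    while True:
        r += 1
        reverse_count = [0] * n
        for i in range(n):
            for j in order[i][:r]:
                reverse_count[j] += 1   # i sees j -> j gains a reverse neighbour
        if all(c > 0 for c in reverse_count) or r >= n - 1:
            return r

# Two clusters of different density: the round stabilises automatically.
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11)]
r = natural_neighbor_round(pts)
```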
-
Research on Multi-objective Evolutionary Algorithm Based on Decomposition Using Interacting Variables Grouping
邱飞岳,胡烜,王丽萍. 关联变量分组的分解多目标进化算法研究[J]. 计算机科学, 2017, 44(12): 202-210.
QIU Fei-yue, HU Xuan and WANG Li-ping. Research on Multi-objective Evolutionary Algorithm Based on Decomposition Using Interacting Variables Grouping[J]. Computer Science, 2017, 44(12): 202-210. - QIU Fei-yue, HU Xuan and WANG Li-ping
- Computer Science. 2017, 44 (12): 202-210. doi:10.11896/j.issn.1002-137X.2017.12.037
-
Abstract
PDF(1308KB) ( 335 )
- References | Related Articles | Metrics
-
Optimization problems with large-scale decision variables are one of the hot and difficult points in the multi-objective evolutionary algorithm research field.When solving problems with large-scale variables,current evolutionary algorithms do not exploit the relations between decision variables and treat all the decision variables as a whole to optimize,but the variable dimensionality becomes a bottleneck as the number of decision variables increases,which affects the performance of the algorithm.To settle these problems,this paper proposed an interacting-variable grouping strategy to identify the internal relations among decision variables and allocate the interacting variables to the same group.Thus,it can decompose a difficult high-dimensional problem into a set of simpler,low-dimensional subproblems that are easier to solve.In order to retain the relations between variables as far as possible and keep the interdependencies among different subproblems minimal,this strategy increases the probability of assigning interacting variables to the same group so as to improve the quality of the optimal solutions of the subproblems and ultimately obtain the best Pareto optimal solution set.Comparative simulation experiments were conducted after extending the variables of standard test functions.The convergence and diversity of the algorithm were compared and analyzed using a variety of performance indicators.Experimental results show that,as the dimension of decision variables increases in multi-objective optimization problems with large-scale variables,this algorithm produces a higher quality Pareto optimal solution set and has better convergence and distribution than classical multi-objective evolutionary algorithms such as NSGA-II,MOEA/D and RVEA.
-
Propositional Logic-based Association-rule Mining Algorithm L-Eclat
徐卫,李晓粉,刘端阳. 基于命题逻辑的关联规则挖掘算法L-Eclat[J]. 计算机科学, 2017, 44(12): 211-215.
XU Wei, LI Xiao-fen and LIU Duan-yang. Propositional Logic-based Association-rule Mining Algorithm L-Eclat[J]. Computer Science, 2017, 44(12): 211-215. - XU Wei, LI Xiao-fen and LIU Duan-yang
- Computer Science. 2017, 44 (12): 211-215. doi:10.11896/j.issn.1002-137X.2017.12.038
-
Abstract
PDF(678KB) ( 348 )
- References | Related Articles | Metrics
-
Association rule mining is an important topic in the field of data mining and has been widely used in many practical applications.Generally,association rule mining algorithms have to set the minimal support threshold and the minimal confidence threshold,but it is hard for most mining algorithms to set these two values:not only is considerable domain knowledge needed to select the support threshold,but the mining results are also too large and difficult to understand.To solve these problems,the idea of propositional logic was introduced into Eclat,one of the classical association rule mining algorithms,and a logic-based association rule mining algorithm called L-Eclat was proposed.We then compared L-Eclat with Eclat.The results show that L-Eclat can optimize and compress the resulting rule sets to a certain degree,yielding less time consumption and higher-quality association rules.Furthermore,L-Eclat can run with a smaller support threshold,which decreases the dependence on the support threshold and avoids spending much time on choosing a suitable one.
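The tidset intersection at the core of Eclat-style mining can be sketched as follows (toy transactions; the propositional-logic extension of L-Eclat is not shown):

```python
# Hedged sketch of Eclat's vertical representation: each item maps to the set
# of transaction ids (tidset) containing it, and the support of an itemset is
# the size of the intersection of its members' tidsets.
from itertools import combinations

def eclat(transactions, min_support):
    tidsets = {}
    for tid, items in enumerate(transactions):
        for item in items:
            tidsets.setdefault(item, set()).add(tid)
    frequent = {frozenset([i]): t for i, t in tidsets.items()
                if len(t) >= min_support}
    # one level of pairwise extension is enough to show the mechanism
    for (a, ta), (b, tb) in combinations(list(frequent.items()), 2):
        t = ta & tb                          # support by tidset intersection
        if len(t) >= min_support:
            frequent[a | b] = t
    return {iset: len(t) for iset, t in frequent.items()}

freq = eclat([{"a", "b"}, {"a", "c"}, {"a", "b", "c"}, {"b"}], min_support=2)
```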
-
Self-correction of Word Alignments
龚慧敏,段湘煜,张民. 自纠正词对齐[J]. 计算机科学, 2017, 44(12): 216-220.
GONG Hui-min, DUAN Xiang-yu and ZHANG Min. Self-correction of Word Alignments[J]. Computer Science, 2017, 44(12): 216-220. - GONG Hui-min, DUAN Xiang-yu and ZHANG Min
- Computer Science. 2017, 44 (12): 216-220. doi:10.11896/j.issn.1002-137X.2017.12.039
-
Abstract
PDF(829KB) ( 482 )
- References | Related Articles | Metrics
-
Word alignment is an important part of statistical machine translation systems.Previous works obtain word alignments through sequential models,which do not take into account the structural information and linguistic features of the language,leading to bad word alignments that violate linguistic characteristics.This paper proposed a novel self-correction method for word alignments,aiming to correct alignment errors which violate linguistic characteristics by exploiting linguistic prior knowledge.First,we conducted a coarse correction on short alignments obtained by punctuation-based binary segmentation.Second,we proposed a fine-grained correction method for each short alignment based on statistical features.Third,the corrected short alignments were merged back into the original alignments.This process does not rely on any third-party word aligner or additional parallel corpus.Experimental results show that our method significantly improves the accuracy of machine translation results.
-
Active,Online and Weighted Extreme Learning Machine Algorithm for Class Imbalance Data
王长宝,李青雯,于化龙. 面向类别不平衡数据的主动在线加权极限学习机算法[J]. 计算机科学, 2017, 44(12): 221-226.
WANG Chang-bao, LI Qing-wen and YU Hua-long. Active,Online and Weighted Extreme Learning Machine Algorithm for Class Imbalance Data[J]. Computer Science, 2017, 44(12): 221-226. - WANG Chang-bao, LI Qing-wen and YU Hua-long
- Computer Science. 2017, 44 (12): 221-226. doi:10.11896/j.issn.1002-137X.2017.12.040
-
Abstract
PDF(976KB) ( 606 )
- References | Related Articles | Metrics
-
It is well known that most existing active learning algorithms often fail to provide excellent performance and consume much training time when used in the scenario of class imbalance.To deal with this problem,a hybrid active learning algorithm named AOW-ELM was proposed.The algorithm uses ELM (extreme learning machine),which has a rapid modeling speed,as the base classifier in active learning,and adopts the weighted ELM algorithm to guarantee impartiality in the active learning procedure.Next,to further accelerate active learning,i.e.,to decrease its time consumption,the online learning procedure of the weighted ELM algorithm was deduced in theory.Experimental results on 12 baseline binary-class imbalanced data sets indicate the effectiveness and feasibility of the proposed algorithm.
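The class-balancing weights typically used in weighted ELM can be sketched as follows (weights only; the ELM training itself and its online update are omitted, and this inverse-frequency scheme is one common choice rather than necessarily the paper's):

```python
# Hedged sketch: each sample is weighted by the inverse of its class
# frequency, so the minority class contributes as much total weight as the
# majority class and the classifier is not biased toward the majority.
from collections import Counter

def class_balance_weights(labels):
    counts = Counter(labels)
    return [1.0 / counts[y] for y in labels]

labels = ["neg", "neg", "neg", "neg", "pos"]
w = class_balance_weights(labels)
total_neg = sum(wi for wi, y in zip(w, labels) if y == "neg")
```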
-
Sentimental Analysis Stacked Denoising Auto-encoder with Sparse Factor
蒋宗礼,王一大. 融合稀疏因子的情感分析堆叠降噪自编码器模型[J]. 计算机科学, 2017, 44(12): 227-231.
JIANG Zong-li and WANG Yi-da. Sentimental Analysis Stacked Denoising Auto-encoder with Sparse Factor[J]. Computer Science, 2017, 44(12): 227-231. - JIANG Zong-li and WANG Yi-da
- Computer Science. 2017, 44 (12): 227-231. doi:10.11896/j.issn.1002-137X.2017.12.041
-
Abstract
PDF(681KB) ( 362 )
- References | Related Articles | Metrics
-
Feature extraction based on deep learning is now a hot research topic in data dimensionality reduction,and the stacked auto-encoder is commonly used in deep learning.However,such an encoder simply learns the features of the samples and cannot obtain a good feature representation for data mixed with noise and sparsity.In this paper,a sparse factor is added to each hidden layer of the stacked denoising auto-encoder to solve the feature extraction problem for noisy and sparse data.Sentiment analysis experiments on the COAE data set show that both precision and recall are improved.
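One common way to realize a sparsity constraint in an auto-encoder is a KL-divergence penalty on the average hidden activations, sketched below (the paper's exact sparse factor is not specified in the abstract, so this particular form is an assumption):

```python
# Hedged sketch of the KL-divergence sparsity penalty: hidden units whose
# average activation rho_hat drifts away from a small target rho are
# penalised, pushing the learned code toward a sparse representation.
import math

def kl_sparsity_penalty(avg_activations, rho=0.05):
    total = 0.0
    for rho_hat in avg_activations:
        total += (rho * math.log(rho / rho_hat)
                  + (1 - rho) * math.log((1 - rho) / (1 - rho_hat)))
    return total

near = kl_sparsity_penalty([0.05, 0.06])  # near the target: small penalty
far = kl_sparsity_penalty([0.5, 0.6])     # dense activations: large penalty
```

During training this penalty would be added, scaled by a sparsity weight, to the reconstruction loss of each hidden layer.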
-
Text Mining Algorithm and Application of Telecom Big Data
汪东升,黄传河,黄晓鹏,倪秋芬. 电信大数据文本挖掘算法及应用[J]. 计算机科学, 2017, 44(12): 232-238.
WANG Dong-sheng, HUANG Chuan-he, HUANG Xiao-peng and NI Qiu-fen. Text Mining Algorithm and Application of Telecom Big Data[J]. Computer Science, 2017, 44(12): 232-238. - WANG Dong-sheng, HUANG Chuan-he, HUANG Xiao-peng and NI Qiu-fen
- Computer Science. 2017, 44 (12): 232-238. doi:10.11896/j.issn.1002-137X.2017.12.042
-
Abstract
PDF(908KB) ( 438 )
- References | Related Articles | Metrics
-
Telecom data contain a large amount of unstructured text,from which conventional methods have difficulty mining information;text mining can do better under this circumstance.Based on the text data,this paper proposed a new word identification algorithm and a named entity recognition algorithm.In this process,we analyzed customers’ complaint texts and judged their categories,and then identified the users’ terminal types from their information,which provides better user support and experience for the telecom industry.Experimental results validate that the proposed algorithms achieve good performance in the identification of customers’ complaint texts in the telecom industry.
-
Decision-theoretic Rough Set Model Based on Weighted Multi-cost
陈玉金,李续武. 基于加权代价的决策粗糙集模型[J]. 计算机科学, 2017, 44(12): 239-244.
CHEN Yu-jin and LI Xu-wu. Decision-theoretic Rough Set Model Based on Weighted Multi-cost[J]. Computer Science, 2017, 44(12): 239-244. - CHEN Yu-jin and LI Xu-wu
- Computer Science. 2017, 44 (12): 239-244. doi:10.11896/j.issn.1002-137X.2017.12.043
-
Abstract
PDF(746KB) ( 292 )
- References | Related Articles | Metrics
-
The classical decision-theoretic rough set was proposed based on only one cost matrix,which does not take the diversity and complexity of costs into account.To make up for this shortcoming,a risk analysis method based on multi-cost fusion was introduced and a decision-theoretic rough set model based on weighted multi-cost was proposed.Furthermore,the properties and relations of these kinds of rough sets were discussed,and their measure and cost relations were analyzed.Finally,the validity and robustness of the method were verified on UCI datasets.
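The weighted fusion of several cost matrices into one set of three-way thresholds can be sketched as follows (matrix values, weights and the simple linear fusion rule are illustrative; the paper's weighting may differ):

```python
# Hedged sketch: several expert cost matrices are combined by weights into
# one matrix, from which the standard decision-theoretic rough set thresholds
# alpha and beta are computed.

def weighted_costs(matrices, weights):
    keys = matrices[0].keys()
    return {k: sum(w * m[k] for m, w in zip(matrices, weights)) for k in keys}

def thresholds(c):
    # c maps actions to losses: PP/PN accept, BP/BN defer, NP/NN reject,
    # second letter = true state (P: in the concept, N: not in it).
    alpha = (c["PN"] - c["BN"]) / ((c["PN"] - c["BN"]) + (c["BP"] - c["PP"]))
    beta = (c["BN"] - c["NN"]) / ((c["BN"] - c["NN"]) + (c["NP"] - c["BP"]))
    return alpha, beta

m1 = {"PP": 0, "BP": 2, "NP": 6, "PN": 8, "BN": 3, "NN": 0}
m2 = {"PP": 0, "BP": 1, "NP": 5, "PN": 6, "BN": 2, "NN": 0}
c = weighted_costs([m1, m2], [0.6, 0.4])
alpha, beta = thresholds(c)
```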
-
Point-of-interest Recommendation Based on Comment Text in Location Social Network
王啸岩,袁景凌,秦凤. 位置社交网络中基于评论文本的兴趣点推荐[J]. 计算机科学, 2017, 44(12): 245-248.
WANG Xiao-yan, YUAN Jing-ling and QIN Feng. Point-of-interest Recommendation Based on Comment Text in Location Social Network[J]. Computer Science, 2017, 44(12): 245-248. - WANG Xiao-yan, YUAN Jing-ling and QIN Feng
- Computer Science. 2017, 44 (12): 245-248. doi:10.11896/j.issn.1002-137X.2017.12.044
-
Abstract
PDF(733KB) ( 383 )
- References | Related Articles | Metrics
-
With the rapid development of location-based social networks (LBSN),point-of-interest (POI) recommendation is becoming more and more important to users and businesses.At present,recommendation algorithms based on social networks mainly use the user’s historical data and social network data to improve the quality of recommendation,but ignore the POI’s comment text data.Moreover,the data in LBSN often miss some information,so guaranteeing robustness is a huge challenge for POI recommendation algorithms.To this end,this paper proposed a new POI recommendation model,called the SoGeoCom model,which combines the user’s social network data,geographic location data and the POI’s comment text data to perform POI recommendation.Experimental results on a real data set from Yelp show that,compared with other mainstream POI recommendation models,the SoGeoCom model improves precision and recall,has good robustness,and achieves a better recommendation effect.
-
MPSO and Its Application in Test Data Automatic Generation
JIAO Chong-yang, ZHOU Qing-lei and ZHANG Wen-ning. MPSO and Its Application in Test Data Automatic Generation[J]. Computer Science, 2017, 44(12): 249-254.
- Computer Science. 2017, 44 (12): 249-254. doi:10.11896/j.issn.1002-137X.2017.12.045
-
Abstract
-
To date, meta-heuristic search algorithms have been applied to automate test data generation. The topology of particle swarm optimization (PSO) is one of the key factors affecting its performance. To overcome the tendency of the standard PSO to fall into local optima and converge prematurely, a PSO with a mixed neighborhood topology (MPSO) was proposed to generate software structural test data automatically. Based on an analysis of how different neighborhood topologies affect particle optimization, this paper presented a new particle swarm optimizer that combines global and local topologies: in each generation, guided by feedback on population diversity, the particle velocity update follows either the global topology model or the local topology model. The experimental results show that MPSO increases swarm diversity, avoids falling into local optima, and improves convergence speed.
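The diversity-driven switch between global and local velocity updates can be sketched as follows (a simplified illustration of the idea, not the paper's exact algorithm; the diversity measure, the parameter values and the ring neighbourhood are assumptions):

```python
import numpy as np

def sphere(x):
    """Benchmark objective f(x) = sum(x_i^2), minimum 0 at the origin."""
    return np.sum(x * x, axis=-1)

def mpso(f, dim=5, n=20, iters=200, div_threshold=0.1, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, (n, dim))
    vel = np.zeros((n, dim))
    pbest, pfit = pos.copy(), f(pos)
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        # Feedback signal: mean distance of the swarm to its centroid.
        diversity = np.mean(np.linalg.norm(pos - pos.mean(0), axis=1))
        if diversity > div_threshold:
            # Global (star) topology: every particle follows the swarm best.
            guide = pbest[np.argmin(pfit)]
        else:
            # Local (ring) topology: each particle follows the best of
            # {left neighbour, itself, right neighbour}.
            idx = np.stack([np.arange(n) - 1, np.arange(n), (np.arange(n) + 1) % n])
            guide = pbest[idx[np.argmin(pfit[idx], axis=0), np.arange(n)]]
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (guide - pos)
        pos = pos + vel
        fit = f(pos)
        better = fit < pfit
        pbest[better], pfit[better] = pos[better], fit[better]
    return pbest[np.argmin(pfit)], pfit.min()
```

The global branch converges fast early on, while the ring branch preserves diversity once the swarm contracts.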
-
Feature Construction Method for Learning to Rank Based on Optimization of Matrix Factorization
YANG Xiao, CUI Chao-ran and WANG Shuai-qiang. Feature Construction Method for Learning to Rank Based on Optimization of Matrix Factorization[J]. Computer Science, 2017, 44(12): 255-259.
- Computer Science. 2017, 44 (12): 255-259. doi:10.11896/j.issn.1002-137X.2017.12.046
-
Abstract
-
Feature selection can improve ranking efficiency and accuracy. Current studies mainly focus on selecting the most discriminative feature set rather than on feature construction, with selection mostly based on the significance of features and the similarity between them. Since the features are largely hand-crafted, overlap and redundancy between them are inevitable. To reduce this redundancy, matrix decomposition is used to generate a new feature set. An optimization objective was designed that accounts both for the quality of the decomposed feature matrix and for the gap between it and the original matrix. On this basis, a matrix-decomposition-based optimization method for learning to rank, named MFRank, was proposed; it takes the ranking result produced by the features into account, which plain matrix decomposition methods such as singular value decomposition (SVD) cannot do. A stochastic projected sub-gradient algorithm was used to obtain approximate optimal values of the optimization problem. Experimental results on the open benchmark MQ2008 show that MFRank achieves results comparable to the state-of-the-art algorithms RankBoost and RankSVM-Struct.
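As a baseline for the decomposition step, a plain truncated-SVD feature construction (the kind of ranking-agnostic decomposition the paper's optimization improves upon) can be sketched as:

```python
import numpy as np

def construct_features(F, k):
    """Compress a redundant document-feature matrix F (n_docs, n_feats) into
    k decorrelated features via truncated SVD. The returned columns are
    mutually orthogonal, so pairwise redundancy between features is removed."""
    U, s, Vt = np.linalg.svd(F, full_matrices=False)
    return U[:, :k] * s[:k]
```

Unlike MFRank, this construction ignores ranking labels entirely; it only illustrates how a factorization removes overlap between hand-crafted features.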
-
MR Brain Image Segmentation Method Based on Wavelet Transform Image Fusion Algorithm and Improved FCM Clustering
GENG Yan-ping, GUO Xiao-ying, WANG Hua-xia, CHEN Lei and LI Xue-mei. MR Brain Image Segmentation Method Based on Wavelet Transform Image Fusion Algorithm and Improved FCM Clustering[J]. Computer Science, 2017, 44(12): 260-265.
- Computer Science. 2017, 44 (12): 260-265. doi:10.11896/j.issn.1002-137X.2017.12.047
-
Abstract
-
Concerning the problems that many image segmentation algorithms based on fuzzy C-means (FCM) are sensitive to noise and produce unclear contour segmentation, an improved algorithm combining wavelet image fusion and FCM clustering was proposed and successfully applied to MR medical image segmentation. In the first stage of the segmentation system, the multi-resolution property of the Haar wavelet is used to preserve spatial information between pixels. In the second stage, a wavelet image fusion algorithm fuses the resulting multi-resolution image with the original image, enhancing the clarity of the processed image and reducing noise. In the third stage, FCM is applied for image segmentation. Experiments on the BrainWeb dataset show that, compared with current algorithms, the proposed algorithm achieves higher segmentation accuracy and robustness to noise without a notable increase in processing time.
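The third stage alone can be sketched with the textbook fuzzy C-means iteration (standard FCM on intensity vectors; the paper's wavelet-fusion front end is not reproduced here):

```python
import numpy as np

def fcm(X, c=3, m=2.0, iters=100, eps=1e-5, seed=0):
    """Plain fuzzy C-means on sample vectors X (n, d), e.g. flattened pixel
    intensities. Returns cluster centers and the fuzzy membership matrix."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(0)                      # memberships: each column sums to 1
    for _ in range(iters):
        Um = U ** m
        centers = Um @ X / Um.sum(1, keepdims=True)
        # Distance of every sample to every center (small epsilon avoids /0).
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-10
        # Standard membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        Unew = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2 / (m - 1)), axis=1)
        if np.abs(Unew - U).max() < eps:
            U = Unew
            break
        U = Unew
    return centers, U
```

Segmentation then assigns each pixel to the cluster with the largest membership.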
-
Research on Parallel Algorithm of Image Saliency Estimation
SHEN Hong and LI Xiao-guang. Research on Parallel Algorithm of Image Saliency Estimation[J]. Computer Science, 2017, 44(12): 266-273.
- Computer Science. 2017, 44 (12): 266-273. doi:10.11896/j.issn.1002-137X.2017.12.048
-
Abstract
-
Saliency estimation has become an important tool in digital image processing. However, existing algorithms find it difficult to satisfy both accuracy and real-time application requirements. Considering the characteristics of bottom-up saliency estimation models, a parallel algorithm built on the NVIDIA CUDA parallel computing architecture was proposed to meet real-time requirements. Superpixel segmentation and Warshall's graph-theoretic algorithm are combined to handle blurred edges and estimate background probability. Based on a contrast model, compact regions with low color difference are highlighted to obtain the optimized saliency map. While preserving detection performance, the efficiency is improved and real-time application requirements are met.
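The Warshall step used for linking regions can be sketched as a boolean transitive closure over a superpixel adjacency matrix (a generic Warshall implementation; how the paper constructs the adjacency and parallelizes it on CUDA is not shown here):

```python
import numpy as np

def transitive_closure(adj):
    """Warshall's algorithm: boolean transitive closure of an adjacency matrix,
    so regions reachable through a chain of neighbours become directly linked."""
    R = adj.astype(bool).copy()
    n = len(R)
    for k in range(n):
        # R[i, j] |= R[i, k] & R[k, j], vectorized as an outer product.
        R |= np.outer(R[:, k], R[k, :])
    return R
```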
-
Algorithm of Eliminating Image Stitching Line Based on Improved IGG Model
QU Zhong and LI Xiu-li. Algorithm of Eliminating Image Stitching Line Based on Improved IGG Model[J]. Computer Science, 2017, 44(12): 274-278.
- Computer Science. 2017, 44 (12): 274-278. doi:10.11896/j.issn.1002-137X.2017.12.049
-
Abstract
-
To improve the quality of panoramas obtained by image stitching, the L-M (Levenberg-Marquardt) algorithm is usually applied to optimize the parameters of the image-mosaic transformation model, but it cannot eliminate the influence of mismatched points on the model solution. To solve this problem, a robust L-M algorithm based on the IGG function model was proposed. Firstly, the iterative process of the IGG algorithm offers strong resistance to gross errors and fast convergence, which helps optimize the transformation model and improve the accuracy of image registration. Secondly, an adaptive-region Laplacian multi-resolution fusion algorithm is combined with the optimal stitching line to eliminate the transition discontinuities caused by the seam and uneven illumination. Experimental results show that the improved algorithm not only effectively improves registration accuracy but also achieves a seamless, high-quality panorama.
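The robust-weighting idea can be sketched with the common three-segment IGG-III weight function, which keeps small residuals, down-weights suspect ones and rejects gross errors inside the L-M iterations (the thresholds k0, k1 and this exact variant are assumptions; the paper's improved IGG model is not reproduced):

```python
import numpy as np

def igg3_weight(v, k0=1.5, k1=3.0):
    """IGG-III robust weight for standardized residuals v:
    full weight below k0, smoothly down-weighted on [k0, k1), zero beyond k1."""
    a = np.abs(np.asarray(v, float))
    w = np.zeros_like(a)
    w[a < k0] = 1.0
    mid = (a >= k0) & (a < k1)
    w[mid] = (k0 / a[mid]) * ((k1 - a[mid]) / (k1 - k0)) ** 2
    return w
```

In a robust L-M loop these weights would scale each correspondence's contribution to the normal equations, so mismatched points stop distorting the homography estimate.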
-
Personalized Human Body Modeling Method Based on Measurements
GAO Yi-di, JIANG Xia-jun and SHI Hui-bin. Personalized Human Body Modeling Method Based on Measurements[J]. Computer Science, 2017, 44(12): 279-282.
- Computer Science. 2017, 44 (12): 279-282. doi:10.11896/j.issn.1002-137X.2017.12.050
-
Abstract
-
In recent years, human body modeling has become an important task in computer graphics. This paper proposed a new modeling method, called the optimistic segmentation method, to generate 3D human body models from a set of anthropometric measurements. Firstly, body shape deformation parameters and measurement parameters are extracted from the MPI scanned human body database, and linear correlation analysis is used to estimate the full set of body measurements. Secondly, the relationship between 3D body shape and 2D measurement data is modeled by linear regression, and the body model is further refined according to the input measurements. Experimental results show that the method generates human body models that accurately reflect the subject's appearance.
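The measurements-to-shape mapping can be sketched as an ordinary least-squares linear regression (a minimal stand-in: the shape parameters here are abstract vectors, not the MPI database's actual deformation coefficients):

```python
import numpy as np

def fit_shape_regressor(M, S):
    """Least-squares linear map (with bias) from measurement vectors M (n, p)
    to shape-parameter vectors S (n, q)."""
    W, *_ = np.linalg.lstsq(np.c_[M, np.ones(len(M))], S, rcond=None)
    return W

def predict_shape(W, m):
    """Predict shape parameters for a new measurement vector m (p,)."""
    return np.append(m, 1.0) @ W
```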
-
Face Recognition Based on LDP Feature and Bayesian Model
WANG Yan and LI Xin. Face Recognition Based on LDP Feature and Bayesian Model[J]. Computer Science, 2017, 44(12): 283-286.
- Computer Science. 2017, 44 (12): 283-286. doi:10.11896/j.issn.1002-137X.2017.12.051
-
Abstract
-
The existing Local Directional Pattern (LDP) approach uses only the LDP feature of the image itself. A method combining the LDP feature histogram with a Bayesian model was therefore proposed, which effectively improves the recognition rate by exploiting information from face-image test pairs. The method first learns prior information on the similarity of LDP histograms within the same class and between different classes, and estimates the intra-class and inter-class conditional probability density functions. When discriminating a probe image, the method computes the similarity between the probe and a template image in the database from their LDP histogram features, and then evaluates the posterior probability that the pair of images comes from the same person. Finally, the probe image is classified by the Bayes rule. The proposed method thus fuses the LDP feature with prior information about the face data. Experimental results on the ORL and Yale databases show that the method is valid and that recognition accuracy improves considerably on both databases compared with the PCA, LBP and LDP methods.
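The posterior evaluation from histogram similarity can be sketched in one dimension (a minimal version: chi-square histogram distance plus Gaussian class-conditional densities; both the chi-square choice and the Gaussian form are assumptions, not the paper's exact densities):

```python
import numpy as np

def chi2_dist(h1, h2):
    """Chi-square distance between two normalized LDP-style histograms."""
    return np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-10))

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def posterior_same(dist, intra, inter, prior_same=0.5):
    """P(same person | histogram distance), with intra-class and inter-class
    distance samples from training pairs modelled as 1-D Gaussians."""
    p_s = gaussian_pdf(dist, np.mean(intra), np.std(intra) + 1e-10)
    p_d = gaussian_pdf(dist, np.mean(inter), np.std(inter) + 1e-10)
    return p_s * prior_same / (p_s * prior_same + p_d * (1 - prior_same) + 1e-300)
```

A probe is then matched to the template whose pair posterior is highest, i.e. classified by the Bayes rule.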
-
Study on Spatial-Temporal Multiscale Adaptive Method of Gesture Recognition
WANG Hai-peng, GONG Yan, LIU Wu, LI Ze and ZHANG Si-mei. Study on Spatial-Temporal Multiscale Adaptive Method of Gesture Recognition[J]. Computer Science, 2017, 44(12): 287-291.
- Computer Science. 2017, 44 (12): 287-291. doi:10.11896/j.issn.1002-137X.2017.12.052
-
Abstract
-
Human gesture is a natural and efficient mode of human-computer interaction, with wide applications in smart homes and intelligent transportation. Gestures with identical semantics may have multiple presentations in the temporal and spatial dimensions, owing to variations in gesture speed, spatial constraints, and user demographics, all of which pose a significant challenge to recognition precision. This article presented a novel dynamic time warping (DTW) based recognition approach, spatial-temporal dynamic time warping (SDTW), which applies a decentralized slot method to adapt to multiple scales in both space and time. A smartphone-based prototype system was developed that uses the accelerometer to capture gestures and applies SDTW to recognize them. Experiments show that SDTW effectively improves recognition precision.
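The DTW core that SDTW builds on can be sketched as the classic dynamic program over two accelerometer sequences (plain DTW only; the paper's decentralized slot method and spatial adaptation are not reproduced):

```python
import numpy as np

def dtw(a, b):
    """Classic DTW distance between sequences a (n, d) and b (m, d) of
    d-dimensional samples, tolerant to differences in speed (sequence length)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # Extend the cheapest of: insertion, deletion, match.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

A gesture template library is then matched by nearest DTW distance, which is why the same gesture performed slower or faster still matches its template.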
-
Sample Point Group Based Binary Method for Robust Binary Descriptor
LIU Hong-min, LI Lu and WANG Zhi-heng. Sample Point Group Based Binary Method for Robust Binary Descriptor[J]. Computer Science, 2017, 44(12): 292-297.
- Computer Science. 2017, 44 (12): 292-297. doi:10.11896/j.issn.1002-137X.2017.12.053
-
Abstract
-
Since a binary descriptor built on a fixed sampling pattern usually extracts highly correlated information and is therefore less robust, this paper improved the retina sampling pattern and proposed a novel sample-point-group based binarization strategy for descriptor generation. Firstly, by reducing the number of sampling layers and enlarging the distance between sample points, an improved retina sampling pattern with low sampling density and little overlap between sampling fields is designed. Then a sample point group is constructed by gathering the points surrounding each sample point in the pattern. Next, the binary result for a pair of sample point groups is determined by voting over the intensity tests of their corresponding points. Finally, image gradient information is computed and added to the final descriptor to enhance its description power. Experimental results show that the proposed descriptor is robust to various image transformations and outperforms the four compared descriptors.
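The group-voting binarization can be sketched as follows (a toy version producing one descriptor bit from two point groups by majority vote over pairwise intensity tests; the retina pattern geometry and group construction are omitted):

```python
import numpy as np

def group_bit(img, group_a, group_b):
    """One descriptor bit: each corresponding point pair (pa, pb) casts an
    intensity-comparison vote img[pa] > img[pb]; the majority decides the bit.
    Voting makes the bit less sensitive to noise at any single sample point."""
    votes = sum(int(img[pa] > img[pb]) for pa, pb in zip(group_a, group_b))
    return int(votes * 2 > len(group_a))
```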
-
Research on Image Salient Regions Detection Combining Edge Boxes and Low-rank Background
SHEN Rui-jie, ZHANG Jun-chao and HAO Jing-bin. Research on Image Salient Regions Detection Combining Edge Boxes and Low-rank Background[J]. Computer Science, 2017, 44(12): 298-303.
- Computer Science. 2017, 44 (12): 298-303. doi:10.11896/j.issn.1002-137X.2017.12.054
-
Abstract
-
Aiming at the problems that traditional saliency detection methods suffer from unclear boundaries and poor robustness, a novel image salient region detection method was proposed that combines two stages, edge boxes for rough location and a low-rank background model for refinement, to enhance detection performance. First, it improves edge-boxes-based salient region detection by using the OTSU method to adaptively compute the optimal edge-magnitude threshold, replacing the fixed threshold and reducing boundary detection error. Second, on the basis of the candidate salient regions detected by the edge-boxes stage, it applies robust principal component analysis to obtain the low-rank component of the image as a background model, and eliminates background regions by background subtraction to reduce false detections. Experimental results on the PASCAL VOC 2007 dataset show that the method significantly improves the precision and recall of salient region detection and has higher detection efficiency.
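The low-rank background idea can be illustrated with plain SVD in place of the paper's robust PCA (a deliberate simplification: RPCA solves a convex low-rank-plus-sparse program, while this sketch merely truncates singular values and inspects the residual):

```python
import numpy as np

def low_rank_residual(img, k=1):
    """Rank-k SVD approximation of a grayscale image as a background model;
    the absolute residual is large where the image deviates from the
    low-rank background, i.e. at candidate salient regions."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    background = (U[:, :k] * s[:k]) @ Vt[:k]
    return np.abs(img - background)
```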
-
Mixed Gaussian Target Detection Algorithm Based on Entropy and Related Close Degree
LI Rui and SHENG Chao. Mixed Gaussian Target Detection Algorithm Based on Entropy and Related Close Degree[J]. Computer Science, 2017, 44(12): 304-309.
- Computer Science. 2017, 44 (12): 304-309. doi:10.11896/j.issn.1002-137X.2017.12.055
-
Abstract
-
Aiming at the problems that background modeling with a fixed number of Gaussian components is slow and that detected moving targets drag a trailing contour when they move, an improved moving object detection method based on a Gaussian mixture model with Tsallis entropy and related close degree was proposed. The improved algorithm chooses the number of components automatically to accelerate background modeling. Because the model-matching condition cannot reflect the spatial correlation of adjacent pixels, this paper proposed the concept of related close degree as an additional qualification condition to remove the trailing contour. The experimental results show that the improved algorithm greatly improves real-time performance and detection accuracy.
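The per-pixel mixture modelling with a growing number of components can be sketched as a simplified single-channel Stauffer-Grimson style update (the paper's Tsallis-entropy component selection and related close degree are not reproduced; all parameters here are assumptions):

```python
import numpy as np

class PixelMoG:
    """Simplified per-pixel Gaussian mixture background model: components are
    added on demand up to max_k; update() returns True if the pixel matches a
    background component, False if it looks like foreground."""
    def __init__(self, max_k=3, alpha=0.05, var0=30.0, t_bg=0.7):
        self.w = np.array([1.0]); self.mu = np.array([0.0]); self.var = np.array([var0])
        self.max_k, self.alpha, self.var0, self.t_bg = max_k, alpha, var0, t_bg

    def update(self, x):
        match = np.abs(x - self.mu) < 2.5 * np.sqrt(self.var)
        self.w *= (1 - self.alpha)
        if match.any():
            k = int(np.argmax(match))        # update the first matching component
            self.w[k] += self.alpha
            self.mu[k] += self.alpha * (x - self.mu[k])
            self.var[k] += self.alpha * ((x - self.mu[k]) ** 2 - self.var[k])
        elif len(self.w) < self.max_k:       # grow the mixture for a new mode
            self.w = np.append(self.w, self.alpha)
            self.mu = np.append(self.mu, x)
            self.var = np.append(self.var, self.var0)
        else:                                # replace the weakest component
            k = int(np.argmin(self.w))
            self.w[k], self.mu[k], self.var[k] = self.alpha, x, self.var0
        self.w /= self.w.sum()
        # Background set: heaviest components whose weights sum to t_bg.
        order = np.argsort(-self.w)
        bg = order[np.cumsum(self.w[order]) <= self.t_bg + 1e-9]
        if bg.size == 0:
            bg = order[:1]
        return bool(match.any() and int(np.argmax(match)) in bg)
```

A full detector runs one such model per pixel; an unmatched or non-background match marks the pixel as moving foreground.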
-
Application Research on Split Bregman Algorithm in Edge Detection of Remote Sensing Image
JING Yu, LIU Jian-xin, LIU Zhao-xia and LI Shao-hua. Application Research on Split Bregman Algorithm in Edge Detection of Remote Sensing Image[J]. Computer Science, 2017, 44(12): 310-315.
- Computer Science. 2017, 44 (12): 310-315. doi:10.11896/j.issn.1002-137X.2017.12.056
-
Abstract
-
Level-set based edge detection suffers from weak noise resistance, poor handling of weak edges and intensity inhomogeneity, low computational efficiency, strong dependence of the result on the location of the initial contour, and the tendency of curve evolution to get stuck in local minima. This paper presented an edge detection method based on a globally optimal convex variational model and Split Bregman numerical minimization. The proposed algorithm constructs a convex variational model that admits a global optimum, following the principle of the CV model and Chan's global optimization idea. During the evolution of the active contour toward object boundaries and the numerical minimization, a fast Split Bregman iterative algorithm is used to overcome noise and the other drawbacks, so that the curve evolves to the target boundaries quickly and accurately. Experimental results show that the proposed method has higher computational efficiency, meets the real-time requirements of remote sensing imagery, and offers higher precision and better generality.
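The Split Bregman machinery can be illustrated on the simplest related problem, 1-D TV-L2 denoising (the paper applies it to a 2-D convex active-contour energy; this sketch only shows the characteristic splitting d = Du with shrinkage and Bregman variable updates):

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding, the closed-form solution of the d-subproblem."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def tv_denoise_sb(f, lam=0.1, mu=5.0, iters=100):
    """Split Bregman for min_u 0.5||u - f||^2 + lam*||Du||_1 (1-D TV denoising),
    with the constraint d = Du enforced through the Bregman variable b."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)        # forward-difference operator (n-1, n)
    A = np.eye(n) + mu * D.T @ D          # normal matrix of the u-subproblem
    d = np.zeros(n - 1); b = np.zeros(n - 1)
    u = f.copy()
    for _ in range(iters):
        u = np.linalg.solve(A, f + mu * D.T @ (d - b))  # quadratic u-update
        Du = D @ u
        d = shrink(Du + b, lam / mu)                    # L1 d-update
        b = b + Du - d                                  # Bregman update
    return u
```

Each iteration alternates a cheap linear solve with a pointwise shrinkage, which is what makes Split Bregman fast compared with direct gradient descent on the non-smooth TV energy.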