Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
CODEN JKIEBK
-
Review of Defense Methods Against Advanced Persistent Threat in Cloud Environment
ZHANG Hao, WANG Li-na, TAN Cheng and LIU Wei-jie. Review of Defense Methods Against Advanced Persistent Threat in Cloud Environment[J]. Computer Science, 2016, 43(3): 1-7. doi:10.11896/j.issn.1002-137X.2016.03.001
-
A large number of organizations and institutions have been attracted to the cloud platform by features such as rapid deployment and flexible configuration. However, compared with traditional network attacks, the emerging attack mode known as the advanced persistent threat (APT) is more persistent, highly covert and buried for the long term, which makes protecting security and privacy challenging. How to protect the cloud platform from APT effectively has therefore become an urgent problem. The basic concepts, attack procedures and attack methods of APT were introduced, the multiple security challenges brought by the new features of APT were analyzed, and the research progress in APT protection was surveyed. To address these security challenges, we presented a proposed framework for protecting the cloud platform from APT, which includes strategies applied both before and during an attack, and takes advantage of big-data mining to analyze potential APT attacks comprehensively and to locate and track the threats. Finally, the research progress of some key technologies in our framework was introduced, their respective advantages and disadvantages were pointed out, and some future research directions were given.
-
Research Advance of SAT Solving Algorithm
GUO Ying, ZHANG Chang-sheng and ZHANG Bin. Research Advance of SAT Solving Algorithm[J]. Computer Science, 2016, 43(3): 8-17. doi:10.11896/j.issn.1002-137X.2016.03.002
-
SAT is one of the most widely studied NP-complete problems. Unless P=NP, no SAT algorithm with polynomial worst-case time complexity exists, so designing fast and efficient SAT algorithms remains a research hotspot. Firstly, the basic concepts of the SAT problem were introduced. Secondly, recent research progress was summarized in three categories: complete algorithms, incomplete algorithms and combined algorithms. The general process of previous algorithms was thoroughly analyzed, and several state-of-the-art solvers were compared from the perspectives of suitable problem classes, algorithmic characteristics and solution efficiency. Finally, the main challenges of SAT solving algorithms were discussed, and some future research issues were pointed out.
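As context for the complete-algorithm category, the classic DPLL procedure (unit propagation plus backtracking branching) can be sketched as follows; this is a generic illustration, not code from any solver surveyed in the paper:

```python
from typing import Optional

def simplify(clauses, lit):
    """Remove clauses satisfied by lit; delete the falsified literal elsewhere."""
    out = []
    for c in clauses:
        if lit in c:
            continue
        if -lit in c:
            c = c - {-lit}
        out.append(c)
    return out

def dpll(clauses: list, assignment: dict) -> Optional[dict]:
    """Minimal DPLL. Literals are nonzero ints; -v is the negation of variable v."""
    # Unit propagation: repeatedly satisfy single-literal clauses.
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if len(clause) == 1:
                lit = next(iter(clause))
                clauses = simplify(clauses, lit)
                assignment[abs(lit)] = lit > 0
                changed = True
                break
    if any(len(c) == 0 for c in clauses):
        return None                      # conflict: an empty clause is unsatisfiable
    if not clauses:
        return assignment                # all clauses satisfied
    lit = next(iter(clauses[0]))         # branch on an arbitrary literal
    for choice in (lit, -lit):
        result = dpll(simplify(clauses, choice),
                      {**assignment, abs(choice): choice > 0})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
cnf = [frozenset(s) for s in ({1, 2}, {-1, 2}, {-2, 3})]
model = dpll(cnf, {})    # a satisfying (possibly partial) assignment
</imports>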
-
Multi-layer Partition Hash Join Algorithm on Array-based Manycore Architecture
SHI Song, NING Yong-bo, LI Hong-liang and ZHENG Fang. Multi-layer Partition Hash Join Algorithm on Array-based Manycore Architecture[J]. Computer Science, 2016, 43(3): 18-22. doi:10.11896/j.issn.1002-137X.2016.03.003
-
Join is one of the most time-consuming and most frequently used operations in database query processing, so improving the speed of the join operation is of great importance. The array-based manycore processor is one of the most important classes of manycore processors; it offers a high degree of parallelism and can be used to accelerate parallel computing. In this paper, we designed and optimized an efficient algorithm, called the multi-layer partition hash join algorithm, for array-based manycore architectures. Our algorithm achieves high performance through multi-layer partitioning, which reduces the number of main-memory accesses, and through partition rearrangement, which eliminates the influence of data skew. Experimental results show that the performance of the multi-layer partition hash join algorithm on DFMC (Deeply-Fused Many Core) processors is 8.0 times higher than that of the fastest hash join on a coupled CPU-GPU architecture, which demonstrates that array-based manycore processors have advantages in data query processing.
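The general partition hash join idea the paper builds on (scatter both relations into co-partitions by a hash of the key, then build and probe per partition) can be sketched as follows; the multi-layer, DFMC-specific details of the paper's algorithm are not reproduced here:

```python
from collections import defaultdict

def partition(rows, key, n_parts):
    """Phase 1: scatter rows into partitions by a hash of the join key,
    so each co-partition pair fits in fast local memory."""
    parts = [[] for _ in range(n_parts)]
    for row in rows:
        parts[hash(row[key]) % n_parts].append(row)
    return parts

def hash_join(left, right, key, n_parts=4):
    """Phase 2: per co-partition, build a hash table on one side
    and probe it with the other side."""
    result = []
    for lp, rp in zip(partition(left, key, n_parts),
                      partition(right, key, n_parts)):
        table = defaultdict(list)
        for row in lp:
            table[row[key]].append(row)
        for row in rp:
            for match in table[row[key]]:
                result.append({**match, **row})
    return result

users  = [{"id": 1, "name": "ann"}, {"id": 2, "name": "bob"}]
orders = [{"id": 1, "item": "pen"}, {"id": 1, "item": "ink"}, {"id": 3, "item": "cup"}]
joined = hash_join(users, orders, "id")   # two matching rows, both for id 1
```

Multi-layer partitioning in the paper repeats phase 1 recursively so each level's working set fits the next-smaller memory tier; partition rearrangement rebalances skewed partitions before phase 2.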
-
Formalization of Consistency of Fractional Calculus in HOL4
LI Shan-shan, ZHAO Chun-na, GUAN Yong, SHI Zhi-ping, WANG Rui, LI Xiao-juan and YE Shi-wei. Formalization of Consistency of Fractional Calculus in HOL4[J]. Computer Science, 2016, 43(3): 23-26. doi:10.11896/j.issn.1002-137X.2016.03.004
-
Fractional calculus has three commonly used definitions: the Grunwald-Letnikov, Riemann-Liouville and Caputo definitions. There are connections among the three, and they are interchangeable under certain conditions. This paper established a formal model of fractional calculus based on the Caputo definition in the higher-order-logic proof tool HOL4, using the theories of reals, integrals, limits and transcendental functions. In order to enable conversion among the three definitions in HOL4, we verified the relationships among the Caputo, Grunwald-Letnikov and Riemann-Liouville definitions. This work unifies the three definitions to a certain extent, and also enriches the theorem library of higher-order logic.
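For reference, the three definitions discussed above can be written in standard notation (order α with n-1 < α ≤ n, lower limit a); these are the textbook forms, which may differ cosmetically from the HOL4 formalization:

```latex
% Grunwald-Letnikov
{}^{GL}_{\;\,a}D^{\alpha}_{t}f(t)=\lim_{h\to 0}h^{-\alpha}
  \sum_{k=0}^{\lfloor (t-a)/h\rfloor}(-1)^{k}\binom{\alpha}{k}\,f(t-kh)

% Riemann-Liouville
{}^{RL}_{\;\,a}D^{\alpha}_{t}f(t)=\frac{1}{\Gamma(n-\alpha)}
  \frac{d^{n}}{dt^{n}}\int_{a}^{t}(t-\tau)^{n-\alpha-1}f(\tau)\,d\tau

% Caputo
{}^{C}_{\,a}D^{\alpha}_{t}f(t)=\frac{1}{\Gamma(n-\alpha)}
  \int_{a}^{t}(t-\tau)^{n-\alpha-1}f^{(n)}(\tau)\,d\tau
```

For sufficiently smooth f with homogeneous initial conditions f(a)=f'(a)=...=f^(n-1)(a)=0, the Riemann-Liouville and Caputo operators coincide; conditions of this kind are the interchangeability the paper verifies.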
-
Research on Salient Region Extraction Technology
LIANG Ye, YU Jian, LANG Cong-yan and LIU Hong-zhe. Research on Salient Region Extraction Technology[J]. Computer Science, 2016, 43(3): 27-32. doi:10.11896/j.issn.1002-137X.2016.03.005
-
Salient region detection is a very active research area with extensive applications. No complete theoretical system has yet formed for finding the salient region of an image quickly and accurately, and salient region detection is closely tied to its applications, so it remains a challenging research topic. A survey of salient region detection technology was given in this paper. Firstly, bottom-up and top-down salient region detection approaches were discussed in detail, including technique classification and typical techniques. Secondly, evaluation criteria and open saliency evaluation databases were discussed. At last, the main problems and challenges were highlighted based on an analysis of current research.
-
Algorithm for Discovering Network Community with Centrality and Overlap
LIU Jing-lian, WANG Da-ling, ZHAO Wei-ji, FENG Shi and ZHANG Yi-fei. Algorithm for Discovering Network Community with Centrality and Overlap[J]. Computer Science, 2016, 43(3): 33-37. doi:10.11896/j.issn.1002-137X.2016.03.006
-
Many social networks have central nodes as well as overlap nodes belonging to more than one initial community, so a two-stage algorithm for discovering network communities with centrality and overlap was proposed in this paper. In the first stage, initial communities are found. The top-k maximum-degree nodes are chosen as candidate central nodes, each central node together with its neighbor nodes forms a separate candidate initial community, and the overlap degree of each candidate initial community with the already generated initial communities is computed one by one; if all the overlap degrees are less than a given threshold, a new initial community is formed. In the second stage, the community division is adjusted: a concept of deviation degree is defined, and the corresponding nodes are merged into the closely linked community with the maximum deviation degree, after which the final community division is formed. Experimental results show that the algorithm can not only reveal tightly-knit communities in networks with centrality, but also deal with overlapping initial communities effectively. Compared with existing algorithms, it is not sensitive to the prior number of candidate initial communities k, and has high accuracy and flexibility.
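The first stage can be sketched as follows; the overlap measure and threshold used here are illustrative assumptions, since the abstract does not give the paper's exact overlap degree formula:

```python
def initial_communities(adj, k, max_overlap=0.5):
    """Stage 1 sketch: take the k highest-degree nodes as candidate centres;
    each centre plus its neighbours is a candidate initial community, accepted
    only if its overlap with every accepted community stays below threshold."""
    centres = sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:k]
    communities = []
    for c in centres:
        cand = {c} | adj[c]
        # assumed overlap degree: shared nodes over the smaller community size
        if all(len(cand & com) / min(len(cand), len(com)) < max_overlap
               for com in communities):
            communities.append(cand)
    return communities

# two triangles joined by the bridge edge 3-4
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5}}
coms = initial_communities(adj, k=2)
# the second candidate {3,4,5,6} overlaps {1,2,3,4} by exactly 0.5 and is rejected
```

Stage 2 (merging remaining nodes by deviation degree) would then assign nodes 5 and 6, which this sketch leaves uncovered.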
-
Trend Forecast of Stock Price Based on Deviated Characteristics and Risk Preference
YAO Hong-liang, HUANG Man, WANG Hao and LI Jun-zhao. Trend Forecast of Stock Price Based on Deviated Characteristics and Risk Preference[J]. Computer Science, 2016, 43(3): 38-43. doi:10.11896/j.issn.1002-137X.2016.03.007
-
Because the tendencies of stock prices and technical indexes are not always consistent, share price trend prediction algorithms based on technical features alone perform poorly. A share price trend prediction algorithm based on deviated characteristics (DCPA) was therefore proposed: deviated features are first extracted and the degree of deviation is calculated, and the price trend is then forecast by a BP neural network from the degree of deviation and the stock's closing price. When risk appetite is high, the correlation between the stock price trend and the deviated characteristics is weak, so a risk-preference-based share price trend prediction algorithm named RPDCA was further put forward on the basis of DCPA. Firstly, features associated with risk appetite are extracted, and the current market risk preference type is obtained through a risk appetite computational model. Secondly, by means of a Bayesian network, the structural relationship among risk appetite, deviated characteristics and the stock price trend is learned, and the interdependence between risk appetite and deviated characteristics is analyzed using node asymmetric information entropy. Finally, the stock price trend is forecast adaptively by a BP neural network according to the relationship between risk appetite and the deviated characteristics. Experiments on actual data show that the RPDCA algorithm achieves higher short-term prediction precision.
-
Energy-aware Scheduling Algorithm for Internet of Vehicles on Cloud Platform
DENG Dan-ting, TENG Fei and YANG Yan. Energy-aware Scheduling Algorithm for Internet of Vehicles on Cloud Platform[J]. Computer Science, 2016, 43(3): 44-48. doi:10.11896/j.issn.1002-137X.2016.03.008
-
To address the high energy consumption of internet-of-vehicles applications on cloud platforms, we put forward an energy-aware scheduling algorithm called the task set consolidation algorithm (TSC). The main idea of the algorithm is to reduce the number of active physical servers so as to bring the cloud platform's energy consumption down. We built the objective function and the decision variables from an energy consumption model, an internet-of-vehicles task set model and a cloud platform model. Our simulation experiment compared TSC with existing algorithms in a cloud platform environment, using the active time and number of physical servers and the energy consumption of the cloud platform as performance indexes. The experimental results show that TSC is able to minimize the number of active physical servers on the cloud platform, as well as the energy consumption.
-
Mahalanobis Distance-based Twin Multi-class Classification Support Vector Machine
ZHANG Xie-kai and DING Shi-fei. Mahalanobis Distance-based Twin Multi-class Classification Support Vector Machine[J]. Computer Science, 2016, 43(3): 49-53. doi:10.11896/j.issn.1002-137X.2016.03.009
-
Twin support vector machine (TWSVM) is a recent hot spot in the field of machine learning. TWSVM achieves fast training and good classification performance, but it does not take full advantage of the statistical information in the training samples. As an improved variant of TWSVM, the Mahalanobis distance-based twin support vector machine (TMSVM) takes the covariance of each class into consideration and suits many real problems; however, the learning speed of TMSVM remains to be improved, and the approach only handles binary classification. To solve these problems, this paper formulated a least-squares version of TMSVM by replacing the inequality constraints of TMSVM with equality constraints, so that the solution follows from solving a set of linear equations instead of quadratic programming, and extended the binary classifier into a new multi-class classification algorithm with a directed acyclic graph (DAG). In order to reduce error accumulation in the DAG structure, a Mahalanobis distance-based distance measure was designed as the class separability criterion. The experimental results on a toy dataset and UCI datasets show that the proposed algorithm is efficient and has better classification accuracy than traditional multi-class SVMs.
-
Thai Syllable Segmentation Based on Conditional Random Fields
ZHAO Shi-yu, XIAN Yan-tuan, GUO Jian-yi, YU Zheng-tao, HONG Xuan-gui and WANG Hong-bin. Thai Syllable Segmentation Based on Conditional Random Fields[J]. Computer Science, 2016, 43(3): 54-56. doi:10.11896/j.issn.1002-137X.2016.03.010
-
The syllable is the basic unit of word formation and pronunciation in Thai, so Thai syllable segmentation is significant for lexical analysis, speech synthesis and speech recognition. Combining the characteristics of Thai syllables, a Thai syllable segmentation method based on CRFs (conditional random fields) was proposed. The method defines features from the categories and positions of Thai letters, and employs CRFs to label the letter sequence of a Thai sentence. A Thai syllable segmentation corpus was annotated on the basis of InterBEST 2009. Experiments on the corpus demonstrate that the method effectively achieves Thai syllable segmentation by adopting the category and position information of the letters, with precision, recall and F-measure reaching 99.115%, 99.284% and 99.199% respectively.
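A letter-level feature function of the kind such a CRF labeler consumes might look like this; the character classes and the toy classifier below are placeholders, not the paper's actual Thai alphabet categories:

```python
def letter_features(sentence, i, char_class):
    """Features for letter i: its class (e.g. consonant / vowel / tone mark
    in the real Thai setting) plus positional context, one feature dict per
    letter, as fed to a CRF sequence labeler emitting syllable-boundary tags."""
    feats = {
        "class": char_class(sentence[i]),
        "is_first": i == 0,
        "is_last": i == len(sentence) - 1,
    }
    if i > 0:
        feats["prev_class"] = char_class(sentence[i - 1])
    if i < len(sentence) - 1:
        feats["next_class"] = char_class(sentence[i + 1])
    return feats

# toy stand-in classifier; the paper's real classes cover Thai consonants,
# leading/following vowels, tone marks and other diacritics
toy_class = lambda ch: "V" if ch in "aeiou" else "C"
X = [letter_features("baan", i, toy_class) for i in range(4)]
```

Each feature dict would be paired with a boundary label (e.g. B for syllable-initial, I otherwise) when training the CRF.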
-
Topic Model Embedded in Collaborative Filtering Recommendation Algorithm
GAO Na and YANG Ming. Topic Model Embedded in Collaborative Filtering Recommendation Algorithm[J]. Computer Science, 2016, 43(3): 57-61. doi:10.11896/j.issn.1002-137X.2016.03.011
-
The collaborative filtering (CF) recommendation algorithm has become one of the most popular algorithms in the recommendation field due to its accuracy and efficiency. CF constructs users' interest models by analyzing their historical rating records, and then generates a set of recommendations. Since users' rating records in a recommendation system are limited, the traditional CF algorithm faces a serious data sparsity problem. To address it, we proposed an improved collaborative filtering recommendation algorithm that embeds the LDA topic model, named LDA-CF. The algorithm uses the LDA topic model to discover latent topic information in the tags of users and items, and then combines the document-topic probability distribution matrix with the rating matrix to measure the similarities between users and between items. The experimental results indicate that LDA-CF can alleviate the sparsity problem and improve the accuracy of the recommendation system at the same time.
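The similarity blend at the heart of such an approach can be sketched as follows; the weighting scheme (a fixed alpha) is an assumption, since the abstract does not give the exact combination formula:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu, nv = sqrt(sum(a * a for a in u)), sqrt(sum(b * b for b in v))
    return dot / (nu * nv + 1e-12)

def blended_similarity(rating_i, rating_j, topic_i, topic_j, alpha=0.5):
    """Item-item similarity as a weighted blend of the (sparse) rating-vector
    similarity and the (dense) LDA document-topic distribution similarity;
    the dense term keeps the measure meaningful when ratings are scarce."""
    return (alpha * cosine(rating_i, rating_j)
            + (1 - alpha) * cosine(topic_i, topic_j))

# rating columns for two items over three users, plus their topic distributions
sim = blended_similarity([5, 0, 3], [0, 4, 3], [0.8, 0.2], [0.7, 0.3])
```

With only one co-rating user, the rating-only similarity is weak, but the topic term pulls the blended score up, which is the sparsity-alleviation effect the paper targets.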
-
Imbalanced Online Sequential Extreme Learning Machine Based on Principal Curve
WANG Jin-wan, MAO Wen-tao, WANG Li-yun and HE Ling. Imbalanced Online Sequential Extreme Learning Machine Based on Principal Curve[J]. Computer Science, 2016, 43(3): 62-67. doi:10.11896/j.issn.1002-137X.2016.03.012
-
Many traditional machine learning methods tend to produce biased classifiers on sequential imbalanced data, which lowers the classification precision for the minority class. To improve the classification accuracy of the minority class, a new imbalanced online sequential extreme learning machine based on the principal curve was proposed. Its core idea is to obtain balanced samples based on the distribution features of the online sequential data, reducing the blindness of minority-class synthesis, and it contains two stages. In the offline stage, the principal curve is introduced to model the distribution of the two classes, and over-sampling of the minority class is performed with SMOTE; the membership degree of each sample is then set according to its projection distance, the majority samples and virtual minority samples in the lower interval are deleted, and the initial model is established. In the online stage, the online sequential minority samples are over-sampled with SMOTE and balanced samples are obtained according to the lower interval, after which the network weights are updated dynamically. The information loss of the proposed algorithm is theoretically proved to be upper-bounded. Experiments on three UCI datasets and a real-world air pollutant forecasting dataset show that the proposed method outperforms traditional methods in terms of prediction accuracy and numerical stability.
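The SMOTE over-sampling step used in both stages can be sketched as follows (a generic SMOTE, not the paper's principal-curve-guided variant):

```python
import random

def smote(minority, n_new, k=3, rng=random.Random(0)):
    """SMOTE sketch: each synthetic point lies on the line segment between a
    minority sample and one of its k nearest minority-class neighbours."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: dist2(p, x))[:k]
        nb = rng.choice(neighbours)
        t = rng.random()                 # interpolation factor in [0, 1)
        synthetic.append(tuple(a + t * (b - a) for a, b in zip(x, nb)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
new_pts = smote(minority, n_new=5)       # 5 synthetic minority samples
```

The paper's contribution is to filter these synthetic points against the principal-curve distribution model rather than keep them all, which is what "reducing the blindness" refers to.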
-
Intuitionistic Fuzzy Reasoning Based on Truth-valued Support Degrees
XU Ben-qiang, TAN Xue-wei and ZOU Li. Intuitionistic Fuzzy Reasoning Based on Truth-valued Support Degrees[J]. Computer Science, 2016, 43(3): 68-71. doi:10.11896/j.issn.1002-137X.2016.03.013
-
In order to reduce the complexity of operations on intuitionistic fuzzy sets, the membership degree and the non-membership degree were considered at the same time in the reasoning process, and an intuitionistic fuzzy reasoning method based on truth-valued support degrees was proposed in this paper. Strong-truth degrees and truth-valued support degrees were studied and their related properties were obtained. The intuitionistic fuzzy reasoning method based on truth-valued support degrees and the detailed steps of the reasoning computation were given, and a concrete example demonstrates the correctness and validity of the method.
-
Subspace Clustering Based on Sequential Feature
CHEN Li-ping and GUO Gong-de. Subspace Clustering Based on Sequential Feature[J]. Computer Science, 2016, 43(3): 72-74. doi:10.11896/j.issn.1002-137X.2016.03.014
-
Inspired by Tierney's subspace clustering of sequential data, a novel subspace clustering method based on sequential characteristics was proposed. First, the lifting wavelet transform is applied to extract the low-frequency information of the signal; then a stronger special penalty term is applied to emphasize the similarity between adjacent samples, with the penalty factor adjusted automatically according to the noise. In experiments on a synthetic data set and several data sets from real-world applications, the proposed method performs better than representative sparse subspace clustering methods.
-
Knowledge Reasoning Method Based on Extended Fuzzy Petri Net
ZHOU Ru-qi, CHEN Yi-qun and FENG Jia-li. Knowledge Reasoning Method Based on Extended Fuzzy Petri Net[J]. Computer Science, 2016, 43(3): 75-79. doi:10.11896/j.issn.1002-137X.2016.03.015
-
In order to give fuzzy Petri nets the ability to describe fuzzy knowledge under variable fuzzy membership, and taking advantage of the fact that criterion transformation expresses variable fuzzy membership well, the fuzzy Petri net was extended based on qualitative mapping and qualitative criterion transformation. The formal definition and the basic operation mechanism of the extended Petri net were given, qualitative mapping was used to describe fuzzy production rules, and a new knowledge representation model and reasoning method were put forward. The new method facilitates constructing a cognition-based fuzzy Petri net learning mechanism. Results show that the network model has strong knowledge representation ability and is suitable for processing fuzzy and uncertain knowledge; its reasoning process reflects certain cognitive characteristics, so it is especially suitable for building intelligent systems with qualitative judgment.
-
Quantum Clustering Algorithm Based on One-dimensional Three-state Quantum Walk
XU Yong-zhen, GUO Gong-de, CAI Bin-bin and LIN Song. Quantum Clustering Algorithm Based on One-dimensional Three-state Quantum Walk[J]. Computer Science, 2016, 43(3): 80-83. doi:10.11896/j.issn.1002-137X.2016.03.016
-
Compared with the classical random walk, the quantum walk exhibits some remarkably different features, and it has been exploited to solve problems such as element distinctness, combinatorial optimization and graph matching. Connecting the two research fields of quantum walks and data cluster analysis, a quantum clustering algorithm based on the one-dimensional three-state quantum walk was proposed. In this algorithm, each data point is regarded as a particle, and the particles perform the three-state quantum walk; according to the measurement results, the attribute values of the corresponding data points are then updated. In the end, data points belonging to the same group gather together, while those belonging to different groups are separated. The simulation results demonstrate that the proposed algorithm is effective.
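A plain one-dimensional three-state quantum walk (the primitive the clustering algorithm is built on) can be simulated directly; the 3x3 Grover coin used here is a common choice and an assumption, as the abstract does not specify the coin operator:

```python
def three_state_walk(steps, size):
    """One-dimensional three-state quantum walk on a ring of `size` sites.
    State: amp[pos][c] with coin state c in {0: left, 1: stay, 2: right}."""
    # Grover coin G = 2|s><s| - I with |s> the uniform 3-vector; G is unitary.
    G = [[2 / 3 - (i == j) for j in range(3)] for i in range(3)]
    amp = [[0j] * 3 for _ in range(size)]
    amp[size // 2][1] = 1 + 0j            # start at the centre in the 'stay' state
    for _ in range(steps):
        # coin step: apply G to the coin register at every position
        coined = [[sum(G[c][d] * a[d] for d in range(3)) for c in range(3)]
                  for a in amp]
        # shift step: left-movers go to x-1, stayers remain, right-movers to x+1
        amp = [[0j] * 3 for _ in range(size)]
        for x, a in enumerate(coined):
            amp[(x - 1) % size][0] += a[0]
            amp[x][1] += a[1]
            amp[(x + 1) % size][2] += a[2]
    # measurement statistics: probability of finding the particle at each site
    return [sum(abs(c) ** 2 for c in a) for a in amp]

probs = three_state_walk(steps=10, size=31)
```

Unlike the two-state walk, the three-state Grover walk exhibits localization around the origin, a property the clustering update exploits when gathering same-group points.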
-
Queue Length Based Per-hop AC Self Adaptation for MANET
QI Fa-zhi, ZHANG Hong-mei, ZHANG Han-wen, SUN Zhi-hui, ZENG Shan and XIA Ming-shan. Queue Length Based Per-hop AC Self Adaptation for MANET[J]. Computer Science, 2016, 43(3): 84-88. doi:10.11896/j.issn.1002-137X.2016.03.017
-
This paper proposed a queue-length-based per-hop AC self-adaptation (QLACSA) mechanism for mobile ad hoc networks (MANETs). The objective of QLACSA is to balance the end-to-end delay for packets on different routing paths, so as to guarantee the end-to-end delay of delay-sensitive services. QLACSA operates in the MAC layer. For flows with delay requirements, it divides a flow's total delay requirement into per-hop expected delay requirements, and dynamically adjusts the AC priority of each packet hop by hop according to the packet's delay situation, the local delay requirement and the state of each AC queue, so that packets can reach their destination within the end-to-end delay specified by the QoS requirement. QLACSA also applies a "survival of the fittest" traffic shaping strategy: it promptly drops packets that can probably not reach the destination before their deadline, thereby releasing the associated resources for other packets. Simulation results show that QLACSA is able to guarantee the end-to-end delay of delay-sensitive services, and the throughput of flows over long routing paths increases by 211%~245%.
-
Approach of Channel Sensing Order Based on Genetic Algorithm
HAN Han, ZHOU Jun and WANG Jing-chao. Approach of Channel Sensing Order Based on Genetic Algorithm[J]. Computer Science, 2016, 43(3): 89-92. doi:10.11896/j.issn.1002-137X.2016.03.018
-
In cognitive radio, a cognitive user has to sense the candidate channels to find an idle one. In practice its sensing ability is limited, so the channels are sensed one by one; the more channels that must be sensed before an idle one is found, the more time is spent on sensing and the less time remains for transmission. The sensing order is therefore critical for spectrum efficiency. In this paper, unlike previous studies, the sensing order optimization problem was modeled with both the idle probability and the channel capacity taken into account. Since the modeled problem is NP-hard and traditional optimization lacks computational efficiency, a genetic algorithm was adopted. Different from the classical genetic algorithm, the proposed algorithm develops a unique elite reservation strategy, and three crossover operators were designed specifically for the problem: single-point crossover, multi-point crossover and coding crossover. In simulation, the computational complexity and accuracy of the genetic algorithm were compared with those of a brute-force algorithm; after establishing the advantage of the genetic algorithm, the three proposed crossover operators were compared in terms of expected throughput and maximum throughput, verifying the superiority of the coding crossover operator.
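A permutation-encoded genetic algorithm with elite reservation of the kind described can be sketched as follows; the fitness surrogate and the single order-crossover shown are illustrative assumptions, not the paper's model or its three operators:

```python
import random

def ga_sensing_order(reward, pop_size=30, gens=60, elite=2, rng=random.Random(1)):
    """Evolve a channel sensing order (a permutation), keeping the best
    `elite` individuals unchanged each generation (elite reservation)."""
    n = len(reward)

    def fitness(order):
        # assumed throughput surrogate: channels sensed earlier lose less
        # transmission time, so their reward is discounted less by position
        return sum(reward[ch] * (0.9 ** pos) for pos, ch in enumerate(order))

    def crossover(a, b):
        # order crossover: keep a prefix of parent a, append the remaining
        # channels in parent b's relative order (result is a valid permutation)
        cut = rng.randrange(1, n)
        head = a[:cut]
        return head + [ch for ch in b if ch not in head]

    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        nxt = pop[:elite]                               # elite reservation
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)   # truncation selection
            child = crossover(a, b)
            if rng.random() < 0.2:                      # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = ga_sensing_order([0.9, 0.1, 0.5, 0.7, 0.3])   # per-channel idle reward
```

The paper's coding crossover would replace the `crossover` above; elitism guarantees the best order found so far is never lost, which is why it outperforms the classical GA here.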
-
TraDR: A Destination Prediction Method Based on Trajectory Decomposition and Reconstruction in Geo-social Networks
XUE Di, WU Li-fa, LI Hua-bo and HONG Zheng. TraDR: A Destination Prediction Method Based on Trajectory Decomposition and Reconstruction in Geo-social Networks[J]. Computer Science, 2016, 43(3): 93-98. doi:10.11896/j.issn.1002-137X.2016.03.019
-
With the development of geo-social networks (GeoSNs), utilizing the locations published by GeoSN users to offer them personalized reference services not only benefits the users, but also brings business providers potential profits. As the fundamental enabling technology of location-based reference services, destination prediction has become one of the most significant research topics in GeoSNs. Considering the features of GeoSNs, this paper proposed a novel destination prediction method named TraDR, based on trajectory decomposition and reconstruction, which constructs a personalized inference model for each target GeoSN user. It not only alleviates the trajectory data sparsity problem faced by common location inference models, but also takes advantage of the rich, commonly available public background information. Experiments on a real-world dataset demonstrate the high performance of the presented method in both prediction accuracy and running efficiency.
-
Research on Channel Loss of Tropospheric Scatter Communication Based on MIMO
TANG Tao, ZHANG Jie, PI Yu-qian, SHEN Xuan-fan and LIAO Yong. Research on Channel Loss of Tropospheric Scatter Communication Based on MIMO[J]. Computer Science, 2016, 43(3): 99-102. doi:10.11896/j.issn.1002-137X.2016.03.020
-
The channel loss of tropospheric scatter communication in multiple-input multiple-output (MIMO) scenarios was studied in this paper. When designing a long-distance MIMO communication system, one of the key technologies is the estimation of link loss, but existing techniques ignore the energy lost in collisions between the beam and the scatterers. Based on the theory of turbulent incoherent scattering, we first modeled the troposphere as many differentiable curves, so that together with the transmit and receive antenna arrays it forms a curved trapezoid, and calculated the angular relations among the angle of incidence, the angle of reflection and the scattering angle from this geometric channel model. We then established the energy distribution model of the scattered wave on the surface of the scatterers after electromagnetic beam injection. Finally, according to the propagation distance of the beam, we quantified the scattering channel loss and derived the fading coefficients and the channel matrix. Comparing simulation results with actual data verifies the validity and accuracy of the proposed model for link loss estimation, and suggestions for the design of the receive antenna elevation angle are given according to the simulation results.
-
Compressive Sensing Based Target Localization and Power Estimation
QIAN Peng, GUO Yan, LI Ning and SUN Bao-ming. Compressive Sensing Based Target Localization and Power Estimation[J]. Computer Science, 2016, 43(3): 103-106. doi:10.11896/j.issn.1002-137X.2016.03.021
-
Since the localization problem in wireless sensor networks has an intrinsically sparse nature, compressive sensing (CS) theory has been widely used to achieve target localization with a limited number of measurements. However, most existing CS-based localization approaches require prior knowledge of the targets' transmitting powers, which does not conform to the reality that the targets are completely unknown. Thus, we proposed a multiple-target localization and power estimation approach that formulates the locations and transmitting powers of the targets as a single sparse vector, transforming the localization and power estimation problem into a sparse vector estimation problem. Our work includes an offline stage and an online stage. The main task of the offline stage is deploying RF emitters and collecting the received signal strength (RSS) to construct the sensing matrix. In the online stage, a small number of sensors measure the RSS from the targets, and the sparse vector is accurately recovered by solving an l1-minimization program. Finally, simulation results demonstrate the effectiveness and robustness of our localization and power estimation approach.
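The online recovery step can be illustrated with a greedy sparse solver; note the paper solves an l1-minimization program (typically via a convex/LP solver), and orthogonal matching pursuit is used here only as a dependency-free stand-in, with a hypothetical toy sensing matrix:

```python
def solve(A, b):
    """Gauss-Jordan elimination for a small square system A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))  # partial pivoting
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def omp(Phi, y, sparsity):
    """Orthogonal matching pursuit: pick the dictionary column most correlated
    with the residual, re-fit by least squares on the support, repeat.
    The recovered x holds power estimates at the chosen grid positions."""
    m, n = len(Phi), len(Phi[0])
    support, residual = [], y[:]
    for _ in range(sparsity):
        corr = [abs(sum(Phi[r][c] * residual[r] for r in range(m)))
                for c in range(n)]
        support.append(max(range(n), key=lambda c: corr[c]))
        # least squares on the support via the normal equations
        cols = [[Phi[r][c] for r in range(m)] for c in support]
        G = [[sum(a * b for a, b in zip(u, v)) for v in cols] for u in cols]
        rhs = [sum(a * b for a, b in zip(u, y)) for u in cols]
        coef = solve(G, rhs)
        residual = [y[r] - sum(coef[k] * cols[k][r] for k in range(len(support)))
                    for r in range(m)]
    x = [0.0] * n
    for c, v in zip(support, coef):
        x[c] = v
    return x

# hypothetical sensing matrix: 3 sensors x 4 candidate grid positions
Phi = [[1.0, 0.0, 1.0, 0.0],
       [0.0, 1.0, 1.0, 0.0],
       [1.0, 1.0, 0.0, 1.0]]
x = omp(Phi, [2.0, 2.0, 0.0], sparsity=1)   # one target at position 2, power 2
```

The nonzero entries of the recovered vector give both the answer the paper seeks: *where* the targets are (the support) and *how strongly* they transmit (the coefficients).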
-
Borrowing Address within Two-hop Neighbors in Mesh Network
孙聪,朱艺华,池凯凯,袁利永. Mesh网络中两跳邻居地址借用策略[J]. 计算机科学, 2016, 43(3): 107-112.
SUN Cong, ZHU Yi-hua, CHI Kai-kai and YUAN Li-yong. Borrowing Address within Two-hop Neighbors in Mesh Network[J]. Computer Science, 2016, 43(3): 107-112.
- Computer Science. 2016, 43 (3): 107-112. doi:10.11896/j.issn.1002-137X.2016.03.022
-
The IEEE 802.15.5 standard introduces mesh networking, whose strength is that routing is conducted without routing tables by assigning logical addresses to nodes. In a dynamic mesh network, the orphan node problem (ONP) exists, i.e., a newly arriving node is unable to join the network because its prospective parent node does not have an unused address for it. To overcome the ONP, an address borrowing scheme suitable for mesh networks, which allows a node to borrow an address from its two-hop neighbors, was presented in this paper, so that the probability of successfully joining (PSJ) the network and the ratio of used addresses to total addresses are improved while the energy expended on borrowing addresses is reduced. Theoretical analysis and simulation results show that the proposed scheme outperforms existing address assignment schemes in terms of the PSJ, the address utilization ratio and the energy consumption.
-
Research on Resources Scheduling Method in Cloud Computing Based on PSO and RBF Neural Network
赵宏伟,李圣普. 基于粒子群算法和RBF神经网络的云计算资源调度方法研究[J]. 计算机科学, 2016, 43(3): 113-117.
ZHAO Hong-wei and LI Sheng-pu. Research on Resources Scheduling Method in Cloud Computing Based on PSO and RBF Neural Network[J]. Computer Science, 2016, 43(3): 113-117.
- Computer Science. 2016, 43 (3): 113-117. doi:10.11896/j.issn.1002-137X.2016.03.023
-
In order to implement a multi-objective optimization scheme in a cloud computing system, a dynamic management framework was first proposed, providing the structure for resource scheduling in the cloud computing system. Secondly, a multi-objective optimization model was established, which ensures the quality of cloud applications and improves the utilization rate of resources. An RBF neural network and an improved particle swarm optimization algorithm were combined to solve the model. Finally, the results of experiments on the CloudSim simulation platform indicate that the framework and the proposed algorithm can effectively reduce the number of virtual machine migrations and the number of physical nodes used, and that the scheduling system can not only improve the utilization rate of resources but also ensure the QoS of cloud applications.
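The paper couples an RBF network with an improved PSO; the swarm part can be illustrated in isolation. Below is a minimal sketch of standard (not the authors' improved) particle swarm optimization minimizing a toy cost function that merely stands in for the scheduling objective; all parameter values are illustrative:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard particle swarm optimization minimizing f over [-5, 5]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))       # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()                                 # personal bests
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()             # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, -5, 5)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better] = x[better]
        pbest_val[better] = vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, float(pbest_val.min())

# toy objective standing in for the scheduling cost (sphere function)
best, best_val = pso(lambda p: float(np.sum(p ** 2)), dim=4)
```

In a scheduling setting like the paper's, `f` would instead score a candidate VM placement against the multi-objective model (resource utilization, migrations, QoS).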
-
Multiple Permissions Secure Access Control Scheme Combining CP-ABE and XACML in Cloud Storage
刘晓建,王力生,廖新考. 基于CP-ABE和XACML多权限安全云存储访问控制方案[J]. 计算机科学, 2016, 43(3): 118-121.
LIU Xiao-jian, WANG Li-sheng and LIAO Xin-kao. Multiple Permissions Secure Access Control Scheme Combining CP-ABE and XACML in Cloud Storage[J]. Computer Science, 2016, 43(3): 118-121.
- Computer Science. 2016, 43 (3): 118-121. doi:10.11896/j.issn.1002-137X.2016.03.024
-
In order to protect the confidentiality of user data and user privacy in cloud storage systems, a multiple-permission secure access control scheme combining ciphertext-policy attribute-based encryption (CP-ABE) and XACML was proposed. The confidentiality of user data is ensured by CP-ABE encryption, and fine-grained access control is implemented by the XACML framework. In the cloud storage system, user data is encrypted with a symmetric encryption mechanism, and the symmetric key is in turn encrypted using CP-ABE. Simulation results show that the model is efficient, flexible and secure. Security analysis shows that the scheme can resist collusion attacks and provides data confidentiality as well as backward and forward confidentiality.
-
Automated Trust Negotiation Based on Diverse History Information
李健利,王艺谋,谢悦,丁洪骞. 一种基于多样化历史信息的自动信任协商策略[J]. 计算机科学, 2016, 43(3): 122-126.
LI Jian-li, WANG Yi-mou, XIE Yue and DING Hong-qian. Automated Trust Negotiation Based on Diverse History Information[J]. Computer Science, 2016, 43(3): 122-126.
- Computer Science. 2016, 43 (3): 122-126. doi:10.11896/j.issn.1002-137X.2016.03.025
-
To address the efficiency problem that arises in repeated automated trust negotiation, a negotiation strategy based on diverse history negotiation information was proposed. This strategy combines history information with negotiation by means of a policy directed graph and tickets: the former is used to complete the negotiation and the latter are used to store history negotiation information; meanwhile, digital signature technology is adopted to ensure the authenticity and integrity of the information. According to the different modes in which history negotiation information is generated, a trust-ticket and a history-ticket were proposed, and appropriate formation, verification and working procedures were designed in view of their characteristics. Finally, simulation results demonstrate that this model can improve efficiency in repeated negotiation.
-
Distributed Real-time Botnet Detection Algorithm
陈连栋,张蕾,曲武,孔明. 一种分布式的僵尸网络实时检测算法[J]. 计算机科学, 2016, 43(3): 127-136.
CHEN Lian-dong, ZHANG Lei, QU Wu and KONG Ming. Distributed Real-time Botnet Detection Algorithm[J]. Computer Science, 2016, 43(3): 127-136.
- Computer Science. 2016, 43 (3): 127-136. doi:10.11896/j.issn.1002-137X.2016.03.026
-
Compared with other types of malware, botnets have recently been favored by hackers for their resiliency against take-down efforts. Besides being harder to take down, modern botnets tend to be stealthier in the way they perform malicious activities on infected computers, making current detection approaches ineffective. Given the malicious activities botnets can carry out, detection and mitigation of botnet threats are imperative. In this paper, we presented a novel approach for botnet detection, called the distributed real-time botnet detection algorithm. It uses the Spark engine, where Netflow-related data are correlated into a host Netflow graph structure and a host access chain structure, and a feature extraction method based on Spark Streaming is leveraged to extract implicit characteristics. Meanwhile, we built the distributed BotScanner detection system on Spark Streaming, a distributed real-time stream processing engine. We trained BotScanner on five representative bot families and evaluated it on simulated network traffic and real-world network traffic. The experimental results show that BotScanner is able to detect bots in network traffic without the need for deep packet inspection, and achieves high detection rates with very few false positives. When the traffic data from an Internet service provider are very large, BotScanner is able to detect botnets in real time by adding compute nodes, achieving approximately linear speedup. This proves the feasibility of applying the Spark Streaming engine to distributed botnet detection.
-
Method of Anonymous Area Generation for Sensitive Location Protection under Road Networks
戴佳筑,华亮. 路网环境下敏感位置匿名区域的生成方法[J]. 计算机科学, 2016, 43(3): 137-144.
DAI Jia-zhu and HUA Liang. Method of Anonymous Area Generation for Sensitive Location Protection under Road Networks[J]. Computer Science, 2016, 43(3): 137-144.
- Computer Science. 2016, 43 (3): 137-144. doi:10.11896/j.issn.1002-137X.2016.03.027
-
Location information involves personal privacy, and precise location information may disclose users' hobbies, behavior and other sensitive information. Therefore, the protection of location privacy is very important. The existing methods of generating anonymous areas for location privacy protection are mostly based on k-anonymity algorithms in Euclidean space. Although they can anonymize a user's location to a certain extent, in real life locations are strongly constrained by the road network; at the same time, k-anonymity algorithms in Euclidean space do not consider whether the generated anonymous area still lies within a sensitive region. This paper proposed a method of generating anonymous areas for sensitive location protection under road networks. The method is based on space partitioning. Firstly, it generates a Voronoi cell for each node of the road network according to the requirement of road-network L-diversity. Then, combined with the sensitivity of the user's location, it generates the anonymous area for the user's location. The experimental results show that, compared with existing k-anonymity algorithms, this algorithm achieves the goal that the generated anonymous area does not lie within a sensitive region, and therefore provides better protection for location privacy.
-
Method for Cyberspace Failure Spreading and Diffusion Based on Hypergraph
王友俊,赵耘田,张红旗,张传富,杨超. 基于超图的Cyber空间故障传播研究方法[J]. 计算机科学, 2016, 43(3): 145-150.
WANG You-jun, ZHAO Yun-tian, ZHANG Hong-qi, ZHANG Chuan-fu and YANG Chao. Method for Cyberspace Failure Spreading and Diffusion Based on Hypergraph[J]. Computer Science, 2016, 43(3): 145-150.
- Computer Science. 2016, 43 (3): 145-150. doi:10.11896/j.issn.1002-137X.2016.03.028
-
Because current link-based methods for failure spreading evaluate the impact of Cyberspace resource failures inaccurately, a mission-based method was proposed to analyze failure spreading and diffusion in Cyberspace. This article analyzed the basic relations between missions and resources by utilizing the Petri net methodology, and built a task/resource model based on hypergraph theory. The model defines the concepts of spreading ability, spreading coefficient, diffusion coefficient and fault influence, which can be derived and quantified from the adjacency matrix. On this basis, the influence function of resource failure was derived. Finally, we tested the process of failure spreading and diffusion in a case study.
-
Privacy Preserving Method Based on Random Projection for Weighted Social Networks
兰丽辉,鞠时光. 一种基于随机投影的加权社会网络隐私保护方法[J]. 计算机科学, 2016, 43(3): 151-157.
LAN Li-hui and JU Shi-guang. Privacy Preserving Method Based on Random Projection for Weighted Social Networks[J]. Computer Science, 2016, 43(3): 151-157.
- Computer Science. 2016, 43 (3): 151-157. doi:10.11896/j.issn.1002-137X.2016.03.029
-
A privacy-preserving method based on random projection, namely vector-set random projection, was put forward for the publication of weighted social networks. The method protects sensitive information by perturbing network structures and edge weights. It partitions a weighted social network into multiple sub-networks with the same number of nodes. Based on the theory of edge space, it describes the sub-networks by vectors consisting of edge information and constructs the vector set of the weighted social network as the released model. It uses random projection for dimension reduction, mapping the original vector set into a target vector set, and then constructs the released weighted social network from the target vector set. The experimental results demonstrate that the vector-set random projection method can ensure the security of private information while preserving some structural characteristics needed for social network analysis.
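The dimension-reduction step relies on the Johnson–Lindenstrauss property of random projections: pairwise distances, and hence many structural statistics, are approximately preserved even though the exact published vectors are perturbed. A minimal numeric sketch of that property (the vector set below is random data, not a real edge-space encoding of a network):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 50, 1000, 200                        # vectors, original dim, projected dim
X = rng.standard_normal((n, d))                # stand-in for the edge-information vectors
R = rng.standard_normal((d, k)) / np.sqrt(k)   # Gaussian random projection matrix
Y = X @ R                                      # published (projected) vector set

# pairwise distance between two sample vectors is approximately preserved
i, j = 3, 17
orig_dist = np.linalg.norm(X[i] - X[j])
proj_dist = np.linalg.norm(Y[i] - Y[j])
ratio = proj_dist / orig_dist                  # close to 1 with high probability
```

An adversary seeing only `Y` and not `R` cannot recover the exact original vectors, which is the intuition behind using the projection for privacy while keeping distance-based analyses usable.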
-
Hybrid Kmeans with KNN for Network Intrusion Detection Algorithm
华辉有,陈启买,刘海,张阳,袁沛权. 一种融合Kmeans和KNN的网络入侵检测算法[J]. 计算机科学, 2016, 43(3): 158-162.
HUA Hui-you, CHEN Qi-mai, LIU Hai, ZHANG Yang and YUAN Pei-quan. Hybrid Kmeans with KNN for Network Intrusion Detection Algorithm[J]. Computer Science, 2016, 43(3): 158-162.
- Computer Science. 2016, 43 (3): 158-162. doi:10.11896/j.issn.1002-137X.2016.03.030
-
Network intrusion detection is one of the hot and difficult topics in the field of network security research. At present, many algorithms such as KNN and TCM-KNN, which process relatively small data samples, are still very time-consuming when processing large-scale data sets. Therefore, this paper put forward a hybrid algorithm (Cluster-KNN) that is adapted to large-scale data sets. The algorithm is divided into an offline data preprocessing phase (index building) and an online real-time classification phase. The offline phase establishes a cluster index for the large data set; the online phase then uses the index to search for neighbors and finally outputs the result via the KNN algorithm. The experimental results show that, compared with the traditional KNN algorithm, the Cluster-KNN algorithm has high time efficiency in the classification phase, and it also has considerable advantages over intrusion detection methods in the same field in terms of accuracy rate, false positive rate, false negative rate and other aspects. Cluster-KNN can clearly distinguish abnormal from normal scenes and has a high online classification speed, so it is more suitable for real network application environments.
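The two-phase idea — an offline k-means index, then an online KNN search restricted to the nearest cluster — can be sketched in a few lines. This is an illustrative toy (plain Lloyd's algorithm, Euclidean distance, majority vote), not the paper's tuned implementation, and the two-class data set is synthetic:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Offline phase: plain Lloyd's algorithm builds the cluster index."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return centers, labels

def knn_in_cluster(q, X, y, centers, labels, n_neighbors=3):
    """Online phase: restrict the KNN search to the query's nearest cluster."""
    c = np.argmin(((centers - q) ** 2).sum(-1))
    idx = np.flatnonzero(labels == c)                # candidates in that cluster only
    d = ((X[idx] - q) ** 2).sum(-1)
    nn = idx[np.argsort(d)[:n_neighbors]]
    vals, counts = np.unique(y[nn], return_counts=True)
    return vals[np.argmax(counts)]                   # majority vote

# toy data: two well-separated classes standing in for normal/attack traffic
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(6, 1, (100, 4))])
y = np.array([0] * 100 + [1] * 100)
centers, labels = kmeans(X, k=2)
pred = knn_in_cluster(np.full(4, 6.0), X, y, centers, labels)
```

The speedup over plain KNN comes from `knn_in_cluster` scanning only one cluster's points instead of the whole data set, which is the essence of the offline index.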
-
Interaction Network Traffic Anomaly Detection Method Based on Cusp Catastrophic Model
邱卫,杨英杰. 基于尖点突变模型的联动网络流量异常检测方法[J]. 计算机科学, 2016, 43(3): 163-166.
QIU Wei and YANG Ying-jie. Interaction Network Traffic Anomaly Detection Method Based on Cusp Catastrophic Model[J]. Computer Science, 2016, 43(3): 163-166.
- Computer Science. 2016, 43 (3): 163-166. doi:10.11896/j.issn.1002-137X.2016.03.031
-
As existing methods do not consider the nonlinear dynamics of interaction network traffic and cannot effectively distinguish between normal interaction traffic and abnormal attack traffic, we proposed an interaction traffic anomaly detection method based on the cusp catastrophe model. A cusp catastrophe model of normal traffic is established from the nonlinear dynamics parameters of interaction network traffic; its equilibrium surface is used to describe the behavior of the network traffic system, and the balance surface of normal network traffic behavior is constructed. The deviation from the normal balance surface is then taken as the basis for detecting anomalies. Experimental results show that this method achieves a higher detection rate and a lower false alarm rate.
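For reference, the standard cusp catastrophe has potential V(x) = x⁴/4 + u·x²/2 + v·x, equilibrium surface x³ + u·x + v = 0, and bifurcation set 4u³ + 27v² = 0; inside the cusp the system is bistable, so small input changes can produce discontinuous jumps, which is the kind of deviation from the normal balance surface such detectors flag. A small sketch of those facts (the control values are illustrative, not fitted traffic parameters):

```python
import numpy as np

def equilibria(u, v):
    """Real roots of x^3 + u*x + v = 0, i.e. equilibrium points of the cusp
    potential V(x) = x^4/4 + u*x^2/2 + v*x."""
    roots = np.roots([1.0, 0.0, u, v])
    return roots[np.abs(roots.imag) < 1e-8].real

def in_bistable_region(u, v):
    """True inside the cusp (4u^3 + 27v^2 < 0): three equilibria, two stable,
    so small changes in (u, v) can cause a jump between states."""
    return 4 * u ** 3 + 27 * v ** 2 < 0

# outside the cusp: a single equilibrium (treated as normal behaviour)
normal = in_bistable_region(1.0, 0.5)
n_normal = len(equilibria(1.0, 0.5))
# inside the cusp: three equilibria (candidate anomalous regime)
anomalous = in_bistable_region(-3.0, 0.5)
n_anom = len(equilibria(-3.0, 0.5))
```

A detector in this spirit would map traffic features to the control parameters (u, v) and raise an alarm when the system state leaves the single-equilibrium region of normal traffic.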
-
Database Access Control Policy Based on Attribute in Cloud Storage Platform
黄保华,贾丰玮,王添晶. 云存储平台下基于属性的数据库访问控制策略[J]. 计算机科学, 2016, 43(3): 167-173.
HUANG Bao-hua, JIA Feng-wei and WANG Tian-jing. Database Access Control Policy Based on Attribute in Cloud Storage Platform[J]. Computer Science, 2016, 43(3): 167-173.
- Computer Science. 2016, 43 (3): 167-173. doi:10.11896/j.issn.1002-137X.2016.03.032
-
Cloud storage has recently become more and more popular for storing large-scale database data because of features such as low cost, high efficiency and ease of use. Shifting a database from local servers to cloud storage still faces many challenges, especially in database access control. We designed a Weighted Ciphertext-Policy Attribute-Based Encryption (WCPABE) scheme and proposed a database access control policy based on WCPABE for the cloud storage platform. By introducing the concept of attribute weight, WCPABE can dynamically reflect the degree of importance of each attribute in the database and enhance the ability to control database access. We proposed three kinds of access control strategies based on WCPABE, proposed WCPABE's database encryption model for the cloud storage platform, and achieved effective and safe access control of the database, enhancing database security and solving the problem of distributing and managing users' private keys. Experimental results show that WCPABE is feasible and effective, and that the database owner has more diversified means of enhancing the security of the database on a cloud storage platform.
-
Improved Group Signature Scheme Based on Chinese Remainder Theorem
黄丛林,仲红,汪益民. 基于中国剩余定理的群签名改进方案[J]. 计算机科学, 2016, 43(3): 174-178.
HUANG Cong-lin, ZHONG Hong and WANG Yi-min. Improved Group Signature Scheme Based on Chinese Remainder Theorem[J]. Computer Science, 2016, 43(3): 174-178.
- Computer Science. 2016, 43 (3): 174-178. doi:10.11896/j.issn.1002-137X.2016.03.033
-
The first group signature scheme based on the Chinese remainder theorem was proposed by Chen Ze-wen. Since then, several improved schemes have been proposed, but no scheme achieves the non-relation property without using a third party to sign or verify. In order to solve this problem, we proposed an improved group signature scheme based on the Chinese remainder theorem, which incorporates the notion of the complete subtree method from the subset cover framework. In addition, our scheme provides non-relation, allows each member to join or be revoked safely and quickly without changing other members' private keys, and also resists authority trap attacks. Finally, we analyzed the security and efficiency of our scheme, and the comparison results suggest that it has some advantages over existing schemes.
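The number-theoretic core such schemes build on is CRT reconstruction: pairwise-coprime moduli let a single value encode one residue per member, so a member can be added or revoked by changing the modulus set without touching the others' residues. A minimal sketch with toy numbers (no cryptographic hardness here; the real scheme layers signatures and key management on top):

```python
from math import prod

def crt(residues, moduli):
    """Chinese remainder theorem: the unique x modulo prod(moduli) with
    x ≡ r_i (mod m_i) for pairwise-coprime moduli m_i."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        # pow(Mi, -1, m) is the modular inverse (Python 3.8+)
        x += r * Mi * pow(Mi, -1, m)
    return x % M

# toy group: member i holds residue r_i and public modulus m_i;
# the combined value x encodes every member's residue at once
moduli = [5, 7, 11]
residues = [3, 2, 8]
x = crt(residues, moduli)
```

Each member recovers only their own share as `x % m_i`, which is why revocation reduces to recomputing `x` over a smaller modulus set.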
-
Discrete Characteristic-based Test Execution Selection for Software Fault Localization and Understanding
刘梦冷,杨小双,赵磊,王丽娜. 面向软件错误定位与理解的测试执行离散特征筛选[J]. 计算机科学, 2016, 43(3): 179-187.
LIU Meng-leng, YANG Xiao-shuang, ZHAO Lei and WANG Li-na. Discrete Characteristic-based Test Execution Selection for Software Fault Localization and Understanding[J]. Computer Science, 2016, 43(3): 179-187.
- Computer Science. 2016, 43 (3): 179-187. doi:10.11896/j.issn.1002-137X.2016.03.034
-
Software fault localization and understanding are key steps in software debugging, yet the way debuggers use the suspiciousness scores computed by coverage-based fault localization, analyzing every program statement from the highest score to the lowest, does not correspond to real debugging. To select the test executions that are most helpful for fault understanding, especially the failed executions that lead to program failure, and to help debuggers conduct differentiation analysis dynamically, three prioritization strategies for failed executions were presented, based respectively on the coverage rate of highly suspicious statements, on fault-exposing potential, and on the discrete characteristic of the suspiciousness scores of the covered statements; weighted cosine similarity matching was presented for passed executions. These failed-test-execution prioritization strategies were compared with random ordering under common fault localization techniques. The results show that selection based on the discrete characteristic of the suspiciousness scores of the statements covered by test executions has a stronger positive influence and a weaker negative influence on the change of fault understanding expense before and after selection, allows more faults to be understood at the same expense, and is more helpful for applying fault localization results to fault understanding.
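The suspiciousness scores that these selection strategies build on come from coverage-based fault localization; the widely used Ochiai metric is one such score. A minimal sketch on a made-up coverage matrix (the statement/test layout is illustrative only, not from the paper's subjects):

```python
import numpy as np

def ochiai(cov, failed):
    """Ochiai suspiciousness per statement: ef / sqrt(F * (ef + ep)), where
    ef/ep count failed/passed executions covering the statement and F is the
    total number of failed executions."""
    cov = np.asarray(cov, dtype=bool)        # rows: executions, cols: statements
    failed = np.asarray(failed, dtype=bool)  # one flag per execution
    ef = cov[failed].sum(axis=0)
    ep = cov[~failed].sum(axis=0)
    denom = np.sqrt(failed.sum() * (ef + ep))
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(denom > 0, ef / denom, 0.0)

# statement 2 is covered by both failing executions and only one passing
# one, so it should receive the highest score
cov = [[1, 1, 0, 1],
       [1, 0, 1, 0],
       [0, 1, 1, 1],
       [1, 1, 1, 0]]
failed = [0, 1, 1, 0]            # executions 1 and 2 failed
scores = ochiai(cov, failed)
top = int(np.argmax(scores))
```

Prioritization strategies like the paper's then reorder the executions themselves, using how such per-statement scores are distributed over what each execution covers.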
-
Fibrations Method of Co-inductive Data Types in Programming
苗德成,奚建清,戴经国,苏锦钿. 程序语言中共归纳数据类型的一种fibrations方法[J]. 计算机科学, 2016, 43(3): 188-192.
MIAO De-cheng, XI Jian-qing, DAI Jing-guo and SU Jin-dian. Fibrations Method of Co-inductive Data Types in Programming[J]. Computer Science, 2016, 43(3): 188-192.
- Computer Science. 2016, 43 (3): 188-192. doi:10.11896/j.issn.1002-137X.2016.03.035
-
Traditional methods for co-inductive data types in programming, such as coalgebra and category theory, have some problems in analyzing semantic behaviors and describing co-inductive rules. Considering the status quo of co-inductive data type research both at home and abroad, a fibrations method was proposed in this paper. Firstly, some basic logical structures of co-inductive data types over a fibration, including the reindexing functor, the op-reindexing functor and the truth functor, are systematically analyzed; the correspondence between co-inductive data types and their semantic behaviors in program logic is established by the equality functor and the quotient functor, and the semantic behaviors of co-inductive data types are analyzed in depth. Secondly, co-recursive operations are constructed to abstractly describe the co-inductive rules of co-inductive data types with universality, using endofunctors in the base category and their equality-preserving liftings in the total category. Finally, applications of the fibrations method are briefly illustrated by examples.
-
Symbolic Execution Engine with Shape Analysis
梁家彪,李兆鹏,朱玲,沈咸飞. 支持形状分析的符号执行引擎的设计与实现[J]. 计算机科学, 2016, 43(3): 193-198.
LIANG Jia-biao, LI Zhao-peng, ZHU Ling and SHEN Xian-fei. Symbolic Execution Engine with Shape Analysis[J]. Computer Science, 2016, 43(3): 193-198.
- Computer Science. 2016, 43 (3): 193-198. doi:10.11896/j.issn.1002-137X.2016.03.036
-
Dynamic testing, static analysis and formal verification are three major methods for improving the dependability of software. Since the result of dynamic testing depends on the design of the test suite, it is unstable and can lead to a high false negative rate. Formal verification is a complementary method for proving the correctness of software, but so far most proofs still need to be constructed by hand, which makes formal verification difficult and costly. Static analysis is an efficient, low-cost method for detecting bugs in software. One of its most promising techniques is symbolic execution, which offers good control over the degree of accuracy. Targeting the path explosion problem and the poor scalability of symbolic execution, and taking advantage of shape analysis, loop invariants and function specification inference during symbolic execution, we implemented an efficient analysis tool for C programs.
-
Similarity Analysis of Multi-dimension Features of Android Applications
张希远,张刚,沈立炜,彭鑫,赵文耘. 多维度的安卓应用相似度分析[J]. 计算机科学, 2016, 43(3): 199-205.
ZHANG Xi-yuan, ZHANG Gang, SHEN Li-wei, PENG Xin and ZHAO Wen-yun. Similarity Analysis of Multi-dimension Features of Android Applications[J]. Computer Science, 2016, 43(3): 199-205.
- Computer Science. 2016, 43 (3): 199-205. doi:10.11896/j.issn.1002-137X.2016.03.037
-
The popularity of Android smart devices and the development of the mobile Internet have brought prosperity to Android applications, but they also bring development, maintenance and security problems for mobile applications. This paper adopted a variety of techniques to extract features of Android applications, including functional descriptions, permissions and source code, and performed statistical analysis, similarity calculation, clustering and cross-comparison on the information of 1173 Android applications. Through similarity analysis on these three dimensions of features, we obtained some regular patterns, which can assist various development and management tasks for Android applications such as excessive permission detection, repackaging detection, application description improvement, class library discovery and domain-specific extraction. Thereby it can help improve the ecology of the Android market and the development efficiency of Android applications.
-
Dynamic Similarity Based Fault Localization Prioritization
蒲进兴,李登辉,李征,赵瑞莲. 基于动态相似度的错误定位优先排序方法[J]. 计算机科学, 2016, 43(3): 206-212.
PU Jin-xing, LI Deng-hui, LI Zheng and ZHAO Rui-lian. Dynamic Similarity Based Fault Localization Prioritization[J]. Computer Science, 2016, 43(3): 206-212.
- Computer Science. 2016, 43 (3): 206-212. doi:10.11896/j.issn.1002-137X.2016.03.038
-
Fault localization prioritization (FLP) reorders test cases to improve the efficiency of fault localization. It combines the two key processes in software regression testing, fault detection and fault localization, in order to reduce testing cost. We proposed a dynamic similarity based fault localization prioritization, in which the statement suspiciousness value is introduced as a weight of the similarity between test case executions to improve the effectiveness of fault localization. The impact of different test case prioritization algorithms was empirically analyzed and validated. In the experiments, three widely used test case prioritization algorithms were combined with two fault localization techniques. The results on six benchmark C programs show that the proposed approach can effectively improve the accuracy and efficiency of fault localization.
-
Automatic Ontology Population Based on Heuristic Rules
李伊潇,李宏伟,沈立炜,赵文耘. 基于启发式规则的自动化本体扩充[J]. 计算机科学, 2016, 43(3): 213-219.
LI Yi-xiao, LI Hong-wei, SHEN Li-wei and ZHAO Wen-yun. Automatic Ontology Population Based on Heuristic Rules[J]. Computer Science, 2016, 43(3): 213-219.
- Computer Science. 2016, 43 (3): 213-219. doi:10.11896/j.issn.1002-137X.2016.03.039
-
The cycle of building an ontology can be shortened by automatically extracting domain ontology from Internet resources, but automatic ontology population is still a challenge in ontology engineering. There are two difficulties in this area: how to extract terms, and how to construct the mapping relationship between the new terms and the existing ontology. Therefore, this paper proposed a method for automatic ontology population based on heuristic rules. The method extracts natural language texts from the Internet, combines traditional natural language processing methods for text preprocessing, discovers domain terms by preferentially matching object properties, enriches the ontology by matching these terms using heuristic rules, and finally checks the consistency of the enriched ontology. On the basis of the proposed method, this paper also implemented a Web-based tool for ontology population. Using an urban landscape information core ontology as a case study, the experimental results show that the method for enriching ontology individuals achieves high precision and recall, which proves that the proposed method is effective and feasible.
-
HMSST+:HMSST Algorithm Optimization Based on Distributed Memory Database
董书暕,汪璟玢,陈远. HMSST+:基于分布式内存数据库的HMSST算法优化[J]. 计算机科学, 2016, 43(3): 220-224.
DONG Shu-jian, WANG Jing-bin and CHEN Yuan. HMSST+:HMSST Algorithm Optimization Based on Distributed Memory Database[J]. Computer Science, 2016, 43(3): 220-224.
- Computer Science. 2016, 43 (3): 220-224. doi:10.11896/j.issn.1002-137X.2016.03.040
-
To overcome the bottleneck of the HMSST (HashMapSelectivityStrategyTree) algorithm, which is limited by the memory of a centralized environment, this paper proposed a novel distributed SPARQL query optimization algorithm named HMSST+. The algorithm presents a distributed storage solution based on Redis (Remote Dictionary Server), and realizes querying of massive RDF data in the memory of a distributed cluster through parallel expansion of storage nodes and distributed scheduling. The method was tested on the LUBM benchmark and performs well when the number of universities reaches 1000. The results show that the method has better scalability than the HMSST algorithm and higher query efficiency than existing query schemes.
-
Top-k Query Calculation of Uncertain Data Based on Uncertainty Theory
郭长友,郑雪峰,高秀莲. 基于不确定理论的不确定性数据Top-k查询计算[J]. 计算机科学, 2016, 43(3): 225-230.
GUO Chang-you, ZHENG Xue-feng and GAO Xiu-lian. Top-k Query Calculation of Uncertain Data Based on Uncertainty Theory[J]. Computer Science, 2016, 43(3): 225-230.
- Computer Science. 2016, 43 (3): 225-230. doi:10.11896/j.issn.1002-137X.2016.03.041
-
Top-k queries over uncertain data sets based on parameterized ranking functions have attracted much attention in recent years. This paper gave a new solution. The tuples of the uncertain data set are modeled as an uncertain network; the Top-k query over the ordered tuples is transformed equivalently into uncertain-measure relations over the edges of the corresponding sample graphs, and the sample graphs are classified according to the ranking position of the edges they contain. In this way, the Top-k query based on a parameterized ranking function is transformed equivalently into different constrained queries for different Top-k values. The proposed algorithm avoids calculating the ranking uncertain-measure values of all tuples in the sample graphs, and improves the computational efficiency of Top-k queries over uncertain graphs. Theoretical analysis and experimental results show that the proposed algorithm can solve Top-k query calculation over uncertain data from the uncertainty perspective.
-
Research on Parallel Transform of XML Data Based on XSLT Stylesheet Partitioning
李宁,高晓光,侯霞,张伟,田英爱. 基于式样单划分的XML数据并行转换方法研究[J]. 计算机科学, 2016, 43(3): 231-237.
LI Ning, GAO Xiao-guang, HOU Xia, ZHANG Wei and TIAN Ying-ai. Research on Parallel Transform of XML Data Based on XSLT Stylesheet Partitioning[J]. Computer Science, 2016, 43(3): 231-237.
- Computer Science. 2016, 43 (3): 231-237. doi:10.11896/j.issn.1002-137X.2016.03.042
-
Based on an exploration of current research on XSLT acceleration, a new method to transform XML data in parallel based on XSLT stylesheet partitioning was proposed. The related topics were discussed, including: 1) how to obtain independent transform units which can be executed concurrently without affecting each other; 2) what the proper number of parallel tasks is for the underlying platform and how to balance their workload; and 3) how to merge the individual intermediate results correctly. The proposed method was applied in a real project, the Open XML-UOF document format translation project, achieving fairly good results. This research is significant for improving XML transform performance in parallel processing environments.
-
Expectation Maximization Based Name Deduplicating Algorithm in Spatial Records
孙晓玲,郑勉,李伟勤,罗恩韬. 位置信息记录中基于期望最大化的名称消重算法[J]. 计算机科学, 2016, 43(3): 238-241.
SUN Xiao-ling, ZHENG Mian, LI Wei-qin and LUO En-tao. Expectation Maximization Based Name Deduplicating Algorithm in Spatial Records[J]. Computer Science, 2016, 43(3): 238-241.
- Computer Science. 2016, 43 (3): 238-241. doi:10.11896/j.issn.1002-137X.2016.03.043
-
In check-in records with corresponding locations, each record contains only a name attribute and a location, i.e., longitude and latitude. Traditional name deduplication algorithms deduplicate names by matching attributes between two entities or computing the similarity between the names of the two entities, and thus neglect the particularity of locations. In order to improve the quality of name deduplication in spatial records, this paper proposed an expectation maximization based name deduplication algorithm. Firstly, we proposed a text name model containing core and background words, and gave an expectation maximization algorithm for computing the parameters of the model. Secondly, we introduced location into the text name model: we partitioned the whole world into tiles, computed the distributions of core and background words in each tile, and proposed a text name model incorporating location. Finally, we used the location text name model to deduplicate names in location records, and presented the corresponding name deduplication algorithm. The experiments show that our proposed algorithm recognizes the core word in a name better than related works, and thus performs better when deduplicating names in location records.
-
Method for Attribute Reduction Based on Rough Sets Boundary Regions
刘芳,李天瑞. 基于边界域的不完备信息系统属性约简方法[J]. 计算机科学, 2016, 43(3): 242-245.
LIU Fang and LI Tian-rui. Method for Attribute Reduction Based on Rough Sets Boundary Regions[J]. Computer Science, 2016, 43(3): 242-245.
- Computer Science. 2016, 43 (3): 242-245. doi:10.11896/j.issn.1002-137X.2016.03.044
-
The method of attribute reduction for incomplete information systems was studied by using matrices. The concept of the tolerance relation matrix was introduced to calculate the upper and lower approximations in a decision table. A method for calculating the boundary region of a decision table based on the tolerance relation matrix was presented. The criterion for evaluating an attribute reduction is the equality of the cardinality of the boundary region. A heuristic reduction method based on the boundary region was proposed in this paper. At last, the feasibility of the calculation method and the attribute reduction algorithm was illustrated by examples.
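The approximations and boundary region the abstract refers to can be sketched directly with sets. This is an illustrative re-implementation, not the paper's matrix formulation: the tolerance relation of an incomplete information system (where '*' marks a missing value) is evaluated pairwise, and the boundary region is the union, over decision classes, of upper minus lower approximations. The decision table is made up for the example.

```python
def tolerant(x, y, attrs):
    # Two objects tolerate each other if every condition attribute
    # agrees or at least one of the two values is missing.
    return all(x[a] == y[a] or x[a] == '*' or y[a] == '*' for a in attrs)

def boundary_region(table, attrs, decision):
    objects = list(range(len(table)))
    # Tolerance class of each object under the chosen attribute subset
    S = {i: {j for j in objects if tolerant(table[i], table[j], attrs)}
         for i in objects}
    boundary = set()
    for d in {row[decision] for row in table}:
        D = {i for i in objects if table[i][decision] == d}
        upper = {i for i in objects if S[i] & D}   # tolerance class meets D
        lower = {i for i in objects if S[i] <= D}  # tolerance class inside D
        boundary |= upper - lower
    return boundary

# Illustrative incomplete decision table ('d' is the decision attribute)
table = [
    {'a': 1, 'b': '*', 'd': 'yes'},
    {'a': 1, 'b': 0,   'd': 'yes'},
    {'a': 1, 'b': 1,   'd': 'no'},
    {'a': 0, 'b': 1,   'd': 'no'},
]
print(boundary_region(table, ['a', 'b'], 'd'))
```

A heuristic reduction in the spirit of the paper would then drop attributes one at a time and keep the reduct whose boundary region has the same cardinality as that of the full attribute set.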
-
Design of Lower Limb Rehabilitation Exoskeleton Suit Based on Event Detection
张向刚,石宇亮,张羿,王慧琴. 基于事件检测的下肢康复外骨骼服的设计[J]. 计算机科学, 2016, 43(3): 246-251.
ZHANG Xiang-gang, SHI Yu-liang, ZHANG Yi and WANG Hui-qin. Design of Lower Limb Rehabilitation Exoskeleton Suit Based on Event Detection[J]. Computer Science, 2016, 43(3): 246-251.
- Computer Science. 2016, 43 (3): 246-251. doi:10.11896/j.issn.1002-137X.2016.03.045
-
A lower limb rehabilitation exoskeleton can assist or replace the doctor in completing rehabilitation training of the lower limbs, and can help patients recover self-reliance and then reintegrate into society. A lower limb rehabilitation exoskeleton suit based on event detection was designed in this paper. The so-called events refer to the information sequence detected by the man-machine interface. Movement intention detection based on events has two main purposes: one is to detect the wearer's movement intention in a transparent way; the other is to improve the accuracy of motion intention detection. In this paper, the lower extremity exoskeleton suit includes a bionic mechanical structure, a man-machine interface, a controller, a motor-based driving subsystem and a power subsystem. The bionic structure is used to support the weight of the exoskeleton and the wearer's body. A disk type motor is used to drive the knee joints and hip joints. Many sensors and a body sensor network are designed into the man-machine interface, and event-based motion perception technology is used to detect the patient's movement state and movement intention. The controller adopts position closed-loop control. Simulation results show the feasibility and the practical effect of the system.
-
Chinese Event Argument Inference Approach Based on Markov Logic Network
朱少华,李培峰,朱巧明. 基于MLN的中文事件论元推理方法[J]. 计算机科学, 2016, 43(3): 252-255.
ZHU Shao-hua, LI Pei-feng and ZHU Qiao-ming. Chinese Event Argument Inference Approach Based on Markov Logic Network[J]. Computer Science, 2016, 43(3): 252-255.
- Computer Science. 2016, 43 (3): 252-255. doi:10.11896/j.issn.1002-137X.2016.03.046
-
Most previous Chinese argument extraction approaches mainly use syntactic structure as the major feature to describe the relationship between a trigger and its arguments. However, they suffer much from inter-sentence arguments which are not in the same sentence or clause as the trigger. To address this issue, this paper brought forward a novel argument inference mechanism based on the Markov logic network. It first learns the probabilities of an entity fulfilling a specific role from the training set and obtains the extracted argument mentions with high confidence in the test set. Then it uses them to extract those argument mentions that lack effective context information or have low confidence. Experimental results on the ACE 2005 Chinese corpus show that our approach outperforms the baseline significantly, with improvements of 6.0% and 4.4% in argument identification and role determination respectively.
-
Fuzzy Soft Subspace Clustering Method Based on Intelligent Optimization Algorithm
张恒巍,何嘉婧,韩继红,王晋东. 基于智能优化算法的模糊软子空间聚类方法[J]. 计算机科学, 2016, 43(3): 256-261.
ZHANG Heng-wei, HE Jia-jing, HAN Ji-hong and WANG Jin-dong. Fuzzy Soft Subspace Clustering Method Based on Intelligent Optimization Algorithm[J]. Computer Science, 2016, 43(3): 256-261.
- Computer Science. 2016, 43 (3): 256-261. doi:10.11896/j.issn.1002-137X.2016.03.047
-
To solve the issue of clustering on selected features and the problems that fuzzy C-means is sensitive to initial values and easily falls into local optima, a new fuzzy subspace clustering method based on an improved firefly algorithm was proposed. Based on the fuzzy C-means clustering algorithm, the method adopts the feature weighting calculation of the reliability-based k-means algorithm, and combines it with the global search capability of the firefly algorithm to search all subspaces. An objective function was designed to evaluate the clustering results and the feature dimensions included in a subspace, and it was adopted to improve the search formula of the firefly algorithm. Experimental results show that the proposed clustering method can effectively converge to the global optimal solution, and has good clustering effect and noise immunity.
-
Multi-granularity Uncertain Linguistic Group Decision Making Method Based on Similarity Measure
谭敏,史越,杨俊超,延静. 基于相似度的多粒度不确定语言群决策方法[J]. 计算机科学, 2016, 43(3): 262-265.
TAN Min, SHI Yue, YANG Jun-chao and YAN Jing. Multi-granularity Uncertain Linguistic Group Decision Making Method Based on Similarity Measure[J]. Computer Science, 2016, 43(3): 262-265.
- Computer Science. 2016, 43 (3): 262-265. doi:10.11896/j.issn.1002-137X.2016.03.048
-
Based on interval two-tuple linguistic information and the similarity of vectors, a new method was proposed for multiple attribute group decision-making problems with multi-granularity uncertain linguistic information, intended to complement methods based on distance measures. Firstly, the multi-granularity linguistic information is unified using a two-tuple linguistic transformation function. Then an optimization model, which aims to maximize the similarity degree of the alternatives to the positive ideal solution and minimize that to the negative ideal solution, is established, from which the attribute weights can be determined. The interval two-tuple aggregation operator is utilized to aggregate the linguistic assessment information corresponding to each alternative, and through the optimal ordinal ranking method the alternatives can be ranked. An example was given to demonstrate the feasibility and efficiency of the proposed method.
-
Modified Chaotic ITO Algorithm to Vehicle Routing Problem
华茂,余世明. 一种改进的混沌伊藤算法求解车辆配送问题[J]. 计算机科学, 2016, 43(3): 266-270.
HUA Mao and YU Shi-ming. Modified Chaotic ITO Algorithm to Vehicle Routing Problem[J]. Computer Science, 2016, 43(3): 266-270.
- Computer Science. 2016, 43 (3): 266-270. doi:10.11896/j.issn.1002-137X.2016.03.049
-
In order to improve the efficiency of the ITO algorithm in searching for the optimal solution, the C-W (Clarke-Wright) savings method was introduced into the state transition strategy; meanwhile, the distance heuristic factor and the update rule of path weights were modified on the basis of the characteristics of the ITO algorithm. To enhance the diversity of the initial population and the final optimization capability, the weight coefficient of each factor was adjusted linearly in the process of optimization. According to the fitness of particles, an adaptive disturbance was designed for the fluctuation operator and drifting operator to overcome the tendency of the ITO algorithm to stagnate. Based on four local search operators, a chaotic local optimization method using a power function carrier was proposed, which greatly improves the ergodicity and sufficiency of the local search. The simulation results show the effectiveness of the proposed algorithm.
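The classical Clarke-Wright savings value that the abstract introduces into the state transition can be sketched as follows. The depot is node 0 and `dist` is a symmetric distance matrix; the numbers are illustrative, not from the paper.

```python
def savings(dist):
    """s(i, j) = d(0, i) + d(0, j) - d(i, j): the distance saved by serving
    customers i and j on one route instead of two separate depot round trips."""
    n = len(dist)
    pairs = []
    for i in range(1, n):
        for j in range(i + 1, n):
            pairs.append((dist[0][i] + dist[0][j] - dist[i][j], i, j))
    # Larger savings first: these customer pairs are the most attractive merges.
    return sorted(pairs, reverse=True)

# Illustrative 3-customer instance (node 0 is the depot)
dist = [
    [0, 4, 5, 6],
    [4, 0, 3, 7],
    [5, 3, 0, 4],
    [6, 7, 4, 0],
]
print(savings(dist))
```

In the modified algorithm these savings would bias the state transition probabilities rather than greedily build routes on their own.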
-
Learning Strategy Based on Neighboring-boundary Granular Support Vector Machine
张春艳,倪世宏,查翔. 一种基于近邻边界的粒度支持向量机学习策略[J]. 计算机科学, 2016, 43(3): 271-274.
ZHANG Chun-yan, NI Shi-hong and ZHA Xiang. Learning Strategy Based on Neighboring-boundary Granular Support Vector Machine[J]. Computer Science, 2016, 43(3): 271-274.
- Computer Science. 2016, 43 (3): 271-274. doi:10.11896/j.issn.1002-137X.2016.03.050
-
Granular support vector machines lose partial classification information and suffer accuracy degradation when dividing granules and extracting representative points. To solve this problem, a learning strategy based on a neighboring-boundary granular support vector machine (NGSVM) was proposed. Samples are first divided into granules with the k-means method, different granules are handled with different rules to extract representative points, and these representative points are then put into a fixed set or a reduced set as required, with which the support vector machine (SVM) is trained. After the classifier is trained, the classification plane is rectified by extracting neighboring-boundary samples according to the kernel distance. The simulation results show that NGSVM gains a higher classification accuracy by extracting neighboring-boundary samples near the classification plane and rectifying the classification plane.
-
Improved PageRank Algorithm Based on User Interest and Topic
王冲,纪仙慧. 基于用户兴趣与主题相关的PageRank算法改进研究[J]. 计算机科学, 2016, 43(3): 275-278.
WANG Chong and JI Xian-hui. Improved PageRank Algorithm Based on User Interest and Topic[J]. Computer Science, 2016, 43(3): 275-278.
- Computer Science. 2016, 43 (3): 275-278. doi:10.11896/j.issn.1002-137X.2016.03.051
-
Aiming at the topic drift of the traditional PageRank algorithm and its neglect of user interest, an improved algorithm based on user interest and topic (ITPR) was proposed. In order to better satisfy the needs of users, both browsing time and page length are used to build the user interest factor, and its change is predicted by linear fitting of the hits per month. Meanwhile, a topic correlation factor based on page content is introduced to modify the PageRank value appropriately. The simulation experiment results show that the proposed algorithm achieves better page ranking quality, precision ratio and user satisfaction.
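One common way such interest and topic factors enter PageRank is through the teleport (personalization) vector, which the following power-iteration sketch illustrates. The per-page preference weights stand in for the paper's combined interest/topic factor and are made up for the example; the actual ITPR weighting scheme is derived from browsing time, page length and page content.

```python
def pagerank(links, pref, d=0.85, iters=50):
    """links[i] is the list of pages that page i links to; pref biases teleports."""
    n = len(links)
    total = sum(pref)
    pref = [p / total for p in pref]          # normalized preference vector
    pr = [1.0 / n] * n
    for _ in range(iters):
        # Teleport mass is distributed by preference, not uniformly.
        new = [(1 - d) * pref[i] for i in range(n)]
        for i, outs in enumerate(links):
            share = d * pr[i] / len(outs)     # link mass split over out-links
            for j in outs:
                new[j] += share
        pr = new
    return pr

# Page 2 is strongly preferred by the (hypothetical) interest/topic factors.
links = [[1, 2], [2], [0]]
pr = pagerank(links, pref=[1.0, 1.0, 3.0])
print(pr)
```

With a uniform `pref` this reduces to standard PageRank; the biased vector pulls rank mass toward pages matching the user's interests.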
-
Approach to Text Knowledge Depth Mining Based on Pattern Recognition
温浩,温有奎,王民. 基于模式识别的文本知识点深度挖掘方法[J]. 计算机科学, 2016, 43(3): 279-284.
WEN Hao, WEN You-kui and WANG Min. Approach to Text Knowledge Depth Mining Based on Pattern Recognition[J]. Computer Science, 2016, 43(3): 279-284.
- Computer Science. 2016, 43 (3): 279-284. doi:10.11896/j.issn.1002-137X.2016.03.052
-
A new method of text mining was presented to make up for the disadvantages of big data knowledge acquisition. Firstly, we constructed a triple ontology model of the academic inventive features "problem, methods, results". Secondly, pattern recognition techniques were used for statistical analysis, feature extraction, machine learning and pattern determination analysis. Finally, deep mining of the triples of the creative core academic knowledge was realized. The experimental results show that the new method can effectively reduce the retrieval noise of academic literature, which makes it convenient for users to quickly locate a research problem. The methods and results can determine the reading value of papers and provide a basis for deep knowledge mining of big data and related discovery. To the best of our knowledge, the method has not been reported in the literature; it is a kind of exploratory research and experimentation.
-
Sugeno Measure Rough Set Model Based on Covering and its Three-way Decision
薛占熬,刘杰,朱泰隆,司小朦,王朋函. 基于覆盖的Sugeno测度粗糙集模型及其三支决策[J]. 计算机科学, 2016, 43(3): 285-290.
XUE Zhan-ao, LIU Jie, ZHU Tai-long, SI Xiao-meng and WANG Peng-han. Sugeno Measure Rough Set Model Based on Covering and its Three-way Decision[J]. Computer Science, 2016, 43(3): 285-290.
- Computer Science. 2016, 43 (3): 285-290. doi:10.11896/j.issn.1002-137X.2016.03.053
-
The classical probabilistic rough set model is an important theoretical basis for dealing with uncertain information and has a wide range of applications in uncertain information systems. However, both the equivalence relation and the probability measure in the probabilistic rough set model are too strict to obtain in practical applications. Therefore, it is necessary to extend the model so as to widen its scope of application. Based on the probabilistic rough set, the Sugeno measure and three-way decision theory, the covering Sugeno measure rough set model was first proposed and new three-way decision rules were studied. Firstly, the covering Sugeno measure rough set model was constructed and its upper and lower approximation operators were defined. Secondly, the properties of the union, intersection and complement of the operators were provided. Finally, combined with three-way decision theory, the three-way decision rules based on the model were proposed and a comprehensive illustration was presented to verify the model's effectiveness and feasibility.
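Three-way decision rules of the kind derived in such models can be sketched with the usual pair of thresholds: accept when the conditional measure of the target set given the object's neighbourhood is at least α, reject when it is at most β, and defer otherwise. The measure values and thresholds below are illustrative; in the paper the measure is a Sugeno measure conditioned on a covering neighbourhood rather than a plain probability.

```python
def three_way_decision(measure, alpha=0.7, beta=0.3):
    """Classify an object by its conditional measure value in [0, 1]."""
    if measure >= alpha:
        return 'accept'      # positive region: decide x belongs to X
    if measure <= beta:
        return 'reject'      # negative region: decide x does not belong to X
    return 'defer'           # boundary region: delay the decision

print([three_way_decision(m) for m in (0.9, 0.5, 0.1)])
```

The defer branch is what distinguishes three-way decisions from the classical two-way accept/reject rule: objects with ambiguous evidence are set aside for further information.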
-
Non-parametric Dynamic Background Modeling Based on Direction Feature of Vector
蒋泳森,肖泉,王守觉. 基于矢量方向特征的非参数动态背景建模[J]. 计算机科学, 2016, 43(3): 291-295.
JIANG Yong-sen, XIAO Quan and WANG Shou-jue. Non-parametric Dynamic Background Modeling Based on Direction Feature of Vector[J]. Computer Science, 2016, 43(3): 291-295.
- Computer Science. 2016, 43 (3): 291-295. doi:10.11896/j.issn.1002-137X.2016.03.054
-
For the problem that traditional background modeling results in empty holes and wrong shadow detection, we proposed a dynamic background modeling method based on vector direction features. The learning algorithm is divided into two parts, namely initial background modeling and background model updating. The initial background model maps the RGB feature to the direction feature in spherical coordinates. K clustering centers are calculated from the latest images using a mean vector clustering algorithm, and the K clusters are considered to be the background model of the pixel. When the corresponding pixel of a new image falls into any one of the K clusters, the new pixel is considered to be background. The updating algorithm is the successor of the initial background modeling algorithm: besides performing background analysis of the new image, it also uses the new image to update the background model. The algorithm can effectively reduce empty holes and wrong shadow detection, and can update the background in a timely manner when the scene changes.
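The mapping from an RGB vector to its direction feature in spherical coordinates can be sketched as below. The two angles describe only the orientation of the colour vector and discard its magnitude (brightness), which is the intuition behind the feature's robustness to shadows; the exact parameterization used in the paper may differ, and the sample pixels are made up.

```python
import math

def direction_feature(r, g, b):
    """Return the (polar, azimuth) angles of the RGB vector in spherical coordinates."""
    radius = math.sqrt(r * r + g * g + b * b)
    if radius == 0:
        return (0.0, 0.0)        # black pixel has no direction; pick the origin
    theta = math.acos(b / radius)    # polar angle, measured from the B axis
    phi = math.atan2(g, r)           # azimuth in the R-G plane
    return (theta, phi)

# A pixel and a shadowed (darker) version of it share the same direction.
print(direction_feature(120, 60, 30))
print(direction_feature(60, 30, 15))
```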
-
Background Subtraction Based on Color and Local Binary Similarity Pattern
任典元,王文伟,马强. 基于颜色和局部二值相似模式的背景减除[J]. 计算机科学, 2016, 43(3): 296-300.
REN Dian-yuan, WANG Wen-wei and MA Qiang. Background Subtraction Based on Color and Local Binary Similarity Pattern[J]. Computer Science, 2016, 43(3): 296-300.
- Computer Science. 2016, 43 (3): 296-300. doi:10.11896/j.issn.1002-137X.2016.03.055
-
As the ViBe (Visual Background Extractor) algorithm is not adaptable enough to illumination variation and dynamic backgrounds, an improved ViBe algorithm was proposed. The new algorithm builds the background model with a color feature and the local binary similarity pattern (LBSP) against illumination variation. A twice spatial diffusion process is introduced to speed up ghost elimination in the model-update phase. A self-adaptive threshold is obtained via the standard deviation of the current pixel and its neighborhood in order to inhibit disturbances from the dynamic background with a quicker response. Experimental results on the Change Detection dataset show that the new algorithm can rapidly suppress ghosts while keeping a slow inclusion of real static foreground objects, and can adapt to complex dynamic backgrounds and illumination variation. Compared with the ViBe algorithm, its F-measure is improved by 19.29%.
-
Classification of Hyperspectral Remote Sensing Image Based on Random Subspace and Kernel Extreme Learning Machine Ensemble
宋相法,曹志伟,郑逢斌,焦李成. 基于随机子空间核极端学习机集成的高光谱遥感图像分类[J]. 计算机科学, 2016, 43(3): 301-304.
SONG Xiang-fa, CAO Zhi-wei, ZHENG Feng-bin and JIAO Li-cheng. Classification of Hyperspectral Remote Sensing Image Based on Random Subspace and Kernel Extreme Learning Machine Ensemble[J]. Computer Science, 2016, 43(3): 301-304.
- Computer Science. 2016, 43 (3): 301-304. doi:10.11896/j.issn.1002-137X.2016.03.056
-
This paper presented a novel classification algorithm for hyperspectral remote sensing images based on random subspace and a kernel extreme learning machine ensemble. Firstly, many feature subsets of the same size are generated from the whole feature set of the hyperspectral remote sensing image data with the random subspace method. Then kernel extreme learning machine base classifiers are trained on these feature subsets. Finally, the classification result is decided by a voting strategy. The experimental results on hyperspectral remote sensing images indicate that the proposed method performs better than the methods based on the kernel extreme learning machine and random forest respectively, and has a higher overall classification accuracy.
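The random-subspace/vote pipeline described above can be sketched schematically. For brevity the kernel extreme learning machine base learner is replaced here by a simple nearest-centroid rule; the subset generation and majority voting follow the stated procedure, and the tiny dataset is made up.

```python
import random
from collections import Counter

def train_subspace_ensemble(X, y, n_models=5, subset_size=2, seed=0):
    rng = random.Random(seed)
    n_features = len(X[0])
    models = []
    for _ in range(n_models):
        feats = rng.sample(range(n_features), subset_size)  # random feature subset
        # Stand-in base learner: per-class mean over the selected features.
        centroids = {}
        for label in set(y):
            rows = [x for x, lab in zip(X, y) if lab == label]
            centroids[label] = [sum(r[f] for r in rows) / len(rows) for f in feats]
        models.append((feats, centroids))
    return models

def predict(models, x):
    votes = []
    for feats, centroids in models:
        v = [x[f] for f in feats]
        votes.append(min(centroids,
                         key=lambda c: sum((a - b) ** 2
                                           for a, b in zip(v, centroids[c]))))
    return Counter(votes).most_common(1)[0][0]   # majority vote

# Illustrative 3-band "pixels" with two land-cover classes
X = [[0, 0, 0], [1, 1, 0], [5, 5, 5], [6, 6, 5]]
y = ['soil', 'soil', 'water', 'water']
models = train_subspace_ensemble(X, y)
print(predict(models, [0.5, 0.5, 0.2]))
```

Swapping the nearest-centroid rule for any trained classifier (including a KELM) leaves the subspace sampling and voting unchanged, which is why the ensemble scheme is largely independent of the base learner.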
-
Retrieval Intention Modeling Based on Perception Hash Algorithm and Browsing Preferences
石宏彬,郭克华. 基于感知哈希与用户偏好的检索意图建模方法[J]. 计算机科学, 2016, 43(3): 305-308.
SHI Hong-bin and GUO Ke-hua. Retrieval Intention Modeling Based on Perception Hash Algorithm and Browsing Preferences[J]. Computer Science, 2016, 43(3): 305-308.
- Computer Science. 2016, 43 (3): 305-308. doi:10.11896/j.issn.1002-137X.2016.03.057
-
Aiming at the problem of commodity retrieval and ranking, a ranking method based on the user's query request and browsing preferences was proposed. The objective is to improve the user's satisfaction with the retrieval result without increasing input requests. The commodity records from the database are selected and preliminarily ranked according to the query request. On this basis, the user's degree of interest is analyzed from their browsing behavior, and the user preference model is refined from the browsing history. Then the similarity between the attribute information of the commodities and the preference model is calculated, and the ranking result is adjusted. The experimental result shows that a retrieval result taking the user's preferences into consideration better fits their retrieval intention.
-
Otsu Image Threshold Segmentation Method Based on Improved Particle Swarm Optimization
刘桂红,赵亮,孙劲光,王星. 一种改进粒子群优化算法的Otsu图像阈值分割方法[J]. 计算机科学, 2016, 43(3): 309-312.
LIU Gui-hong, ZHAO Liang, SUN Jin-guang and WANG Xing. Otsu Image Threshold Segmentation Method Based on Improved Particle Swarm Optimization[J]. Computer Science, 2016, 43(3): 309-312.
- Computer Science. 2016, 43 (3): 309-312. doi:10.11896/j.issn.1002-137X.2016.03.058
-
The thresholding method only needs gray-level information to segment an image, which makes it intuitive and easy to implement. Aiming at the problem that the traditional PSO algorithm used for image segmentation easily falls into local optima, this paper proposed an Otsu image threshold segmentation method based on an improved PSO. We took the inter-class variance of Otsu as the fitness function, selected the particles with better fitness and added new particles to increase the diversity of the particles. The experimental results show that, compared with the Otsu method and the PSO algorithm, the improved PSO accelerates convergence and computation, and improves the accuracy of image segmentation.
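The Otsu fitness function mentioned above is the between-class variance sigma_b^2(t) = w0 * w1 * (mu0 - mu1)^2 of the two classes split by a candidate threshold t, and can be sketched directly. The tiny gray-level histogram below is illustrative; a real 8-bit image has 256 bins, and in the paper the maximization is done by the improved PSO rather than the exhaustive search used here.

```python
def between_class_variance(hist, t):
    """Otsu fitness of threshold t over a gray-level histogram."""
    total = sum(hist)
    w0 = sum(hist[:t]) / total                 # background class weight
    w1 = 1.0 - w0                              # foreground class weight
    if w0 == 0 or w1 == 0:
        return 0.0                             # degenerate split
    mu0 = sum(g * h for g, h in enumerate(hist[:t])) / sum(hist[:t])
    mu1 = sum(g * h for g, h in enumerate(hist) if g >= t) / sum(hist[t:])
    return w0 * w1 * (mu0 - mu1) ** 2

hist = [4, 8, 2, 1, 0, 7, 5, 3]   # bimodal toy histogram over gray levels 0..7
best = max(range(1, len(hist)), key=lambda t: between_class_variance(hist, t))
print(best)
```

A PSO variant would evaluate this same function at each particle's position instead of sweeping every threshold, which is what makes it attractive for multi-level thresholding where exhaustive search becomes expensive.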
-
Super-resolution Reconstruction Method Based on Wavelet Analysis and Polynomial Subdivision Location
贺清碧,黄大荣,杨永琴. 基于小波分析和多项式细分定位的超分辨率图像重建算法[J]. 计算机科学, 2016, 43(3): 313-316.
HE Qing-bi, HUANG Da-rong and YANG Yong-qin. Super-resolution Reconstruction Method Based on Wavelet Analysis and Polynomial Subdivision Location[J]. Computer Science, 2016, 43(3): 313-316.
- Computer Science. 2016, 43 (3): 313-316. doi:10.11896/j.issn.1002-137X.2016.03.059
-
Image super-resolution reconstruction is an important task in the study of image enhancement and image restoration. It is widely used in the fields of high definition television, medical imaging and remote sensing imaging, etc. In this paper, wavelet analysis was adopted to detect pixel edges, and a polynomial subdivision algorithm was used to locate sub-pixel edges. The original image was thereby divided into three parts, namely the smooth region, the neighborhood edge region and the fine edge region. According to the different regional characteristics, different interpolation methods were carried out to realize image super-resolution reconstruction. The simulation results show that the boundaries of the reconstructed high resolution image are clear and natural, and both subjective judgment and objective evaluation are better than for traditional reconstruction algorithms. The method in this paper achieves good effects and demonstrates good feasibility and validity.
-
Fast GM-PHD Filter for Multi-target Tracking
陈金广,秦晓姗,马丽丽. 快速多目标跟踪GM-PHD滤波算法[J]. 计算机科学, 2016, 43(3): 317-321.
CHEN Jin-guang, QIN Xiao-shan and MA Li-li. Fast GM-PHD Filter for Multi-target Tracking[J]. Computer Science, 2016, 43(3): 317-321.
- Computer Science. 2016, 43 (3): 317-321. doi:10.11896/j.issn.1002-137X.2016.03.060
-
In the traditional GM-PHD filter, all measurements received at the current time are used to update the different types of targets, and much time is spent on updating targets with invalid measurements. A fast multi-target tracking filter was proposed in this paper. Firstly, Gaussian components are divided into two parts: one part is birth targets and the other is survival targets. Then the residuals between the survival targets and all measurements are calculated, and only the measurements which fall in the elliptical gate are used to update the survival targets. Similarly, the residuals between the birth targets and the remaining measurements are calculated, and only those measurements which fall in the elliptical gate are used to update the birth targets. In this way, invalid measurements are minimized and the computational complexity is reduced. The experimental results show that the new method not only reduces the time complexity greatly, but also ensures the accuracy of target tracking. Its performance is better than the traditional GM-PHD filter as a whole.
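The elliptical gate that selects which measurements may update a Gaussian component is the standard Mahalanobis test d^2 = (z - z_hat)^T S^-1 (z - z_hat) <= gamma on the measurement residual, where S is the innovation covariance. The 2-D predicted measurement, covariance and gate size below are illustrative, not values from the paper.

```python
def in_gate(z, z_hat, S, gamma=9.21):
    """Elliptical gate test; gamma ~ 9.21 is the chi-square 99% point for 2 dof."""
    dx = z[0] - z_hat[0]
    dy = z[1] - z_hat[1]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    # Mahalanobis distance squared via the explicit 2x2 inverse of symmetric S
    d2 = (S[1][1] * dx * dx - 2 * S[0][1] * dx * dy + S[0][0] * dy * dy) / det
    return d2 <= gamma

z_hat = (10.0, 10.0)                  # predicted measurement of one component
S = [[4.0, 0.0], [0.0, 4.0]]          # innovation covariance
measurements = [(11.0, 9.0), (10.5, 10.5), (25.0, 3.0)]
print([z for z in measurements if in_gate(z, z_hat, S)])
```

Only the gated measurements (here the first two) would enter the Kalman update of that component; the far-away measurement is discarded, which is the source of the reported time savings.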