Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 44, Issue 8, 13 November 2018
Research on Domain-specific Question Answering System Oriented Natural Language Understanding: A Survey
WANG Dong-sheng, WANG Wei-min, WANG Shi, FU Jian-hui and ZHU Feng
Computer Science. 2017, 44 (8): 1-8, 41.  doi:10.11896/j.issn.1002-137X.2017.08.001
Natural language understanding (NLU) has made considerable progress in domain-independent research over the last decade. Such research is important, but the gap between research results and real-world applications remains large, and the strong practical demand conflicts with the current processing capacity of the NLU community, which makes the development of domain-specific NLU technologies necessary. We first compared open-domain and domain-specific question answering (QA) systems. Then we analyzed in detail some typical natural language understanding technologies used in restricted-domain question answering systems. The evaluation standards for natural language understanding technology in domain-specific question answering systems were introduced. Finally, the main problems and future development of domain-specific QA systems were summarized.
Survey of Recent Isotropic Triangular Remeshing Techniques
YAN Dong-ming, HU Kai-mo, GUO Jian-wei, WANG Yi-qun, ZHANG Yi-kuan and ZHANG Xiao-peng
Computer Science. 2017, 44 (8): 9-17.  doi:10.11896/j.issn.1002-137X.2017.08.002
Remeshing of 3D mesh models is an important task in computer graphics and a key component of many geometric algorithms. In recent years, the rapid development of 3D applications such as finite element modeling, computer animation and 3D printing has demanded higher mesh quality and promoted rapid progress in remeshing, resulting in many new remeshing technologies. In this paper, we first introduced the evaluation criteria of meshing quality. Then, we surveyed the latest developments in isotropic remeshing and discussed the advantages and disadvantages of various remeshing algorithms. Finally, we discussed some new problems and directions for future research.
Real-time 4K Panoramic Video Stitching Based on GPU Acceleration
LU Jia-ming and ZHU Zhe
Computer Science. 2017, 44 (8): 18-21, 26.  doi:10.11896/j.issn.1002-137X.2017.08.003
Virtual reality has become a popular technique in recent years, and panoramic video capture is an important way of producing virtual reality content. In this paper, we proposed a practical system that can stitch 4K panoramic video in real time. The inputs of our system are 6 candidate videos with 2K resolution, and the output is a 4K stitched video. By parallelizing the whole procedure, we implemented the entire system efficiently on the GPU. Experimental results show that our system can produce high-quality 4K panoramic video.
Object Tracking Based on Mean Shift Algorithm and Spatio-Temporal Context Algorithm
ZHOU Hua-zheng and MA Xiao-hu
Computer Science. 2017, 44 (8): 22-26.  doi:10.11896/j.issn.1002-137X.2017.08.004
When the target undergoes heavy occlusion, the spatio-temporal context (STC) algorithm can still track the object accurately, whereas the mean shift algorithm jitters in this situation. After occlusion, the mean shift algorithm can re-acquire the object, whereas the STC method cannot. To make full use of these complementary advantages, we developed a new algorithm, MSandSTC, which combines the two. Our algorithm can handle heavy occlusion. The efficiency, accuracy and robustness of the proposed algorithm are verified through experiments on a number of challenging data sets.
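As a rough illustration of this kind of tracker fusion, the sketch below switches between two trackers on a per-frame confidence test. The `CombinedTracker` class and its `update`/`reinit` interfaces are hypothetical placeholders, not the authors' MSandSTC implementation:

```python
# Sketch of the switching idea (hypothetical interfaces): trust the
# spatio-temporal context (STC) estimate while its confidence indicates
# occlusion, otherwise follow mean shift, which recovers the target
# once occlusion ends.

class CombinedTracker:
    def __init__(self, stc_tracker, ms_tracker, occlusion_threshold=0.5):
        self.stc = stc_tracker          # assumed: update(frame) -> (box, confidence)
        self.ms = ms_tracker            # assumed: update(frame) -> box
        self.tau = occlusion_threshold  # confidence below tau => occlusion

    def update(self, frame):
        stc_box, stc_conf = self.stc.update(frame)
        ms_box = self.ms.update(frame)
        if stc_conf < self.tau:
            # Heavy occlusion: mean shift jitters, so keep the STC estimate
            # and re-seed mean shift so it can recover afterwards.
            self.ms.reinit(frame, stc_box)
            return stc_box
        # Normal frames / after occlusion: mean shift re-acquires the target;
        # keep STC's context model aligned with it (a design choice here).
        self.stc.reinit(frame, ms_box)
        return ms_box
```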
Cartoon Animations Segmentation and Vectorization Based on Canny Optimization
LI Rui-long, LIANG Yuan and ZHANG Song-hai
Computer Science. 2017, 44 (8): 27-30, 53.  doi:10.11896/j.issn.1002-137X.2017.08.005
Vectorization of images and video has many potential advantages over a raster format, such as higher compression ratios for storage and display on devices with differing capabilities. Cartoon animations are more suitable for vectorization than real-world videos because of their clear edges and regions. We proposed a new image segmentation algorithm based on the Canny operator, solved the problem of discontinuous detected edges, and created a material box for each cartoon animation. We proposed a way to reduce storage by reusing the cartoon material in the material box, and also solved the flicker problem during vectorization with the material box. We built a system to automatically vectorize cartoon animations, including shot segmentation, active-region extraction and material box creation. Our system gives good results on different kinds of cartoons, even those with complex textures.
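A minimal sketch of the edge-closing idea, assuming OpenCV and illustrative thresholds (the authors' actual segmentation pipeline is more involved):

```python
import cv2
import numpy as np

def closed_canny_regions(frame_bgr, low=50, high=150):
    """Canny edges are often discontinuous, so a morphological close
    bridges small gaps before the frame is split into fillable cartoon
    regions. Thresholds and kernel size are assumptions."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, low, high)
    kernel = np.ones((3, 3), np.uint8)
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)  # bridge gaps
    # Connected components of the non-edge area approximate cartoon regions.
    n, labels = cv2.connectedComponents(cv2.bitwise_not(closed))
    return closed, labels
```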
Double Adjacency Graphs Based Orthogonal Neighborhood Preserving Projections for Face Recognition
XUE Xiao-yu and MA Xiao-hu
Computer Science. 2017, 44 (8): 31-35.  doi:10.11896/j.issn.1002-137X.2017.08.006
Orthogonal neighborhood preserving projections (ONPP) is a typical graph-based dimensionality reduction technique, which preserves not only the locality but also the local and global geometry of high-dimensional data, and it has been successfully applied to face recognition. Supervised ONPP tries to find the optimal low-dimensional embedding by building a homogeneous adjacency graph and minimizing the homogeneous local reconstruction errors. However, it uses only homogeneous information, which leaves the structure of heterogeneous data indistinct. Motivated by this fact, we proposed a novel method called double adjacency graphs based orthogonal neighborhood preserving projections (DAG-ONPP). By introducing both homogeneous and heterogeneous neighbor adjacency graphs, the homogeneous reconstruction errors become as small as possible while the heterogeneous reconstruction errors become more pronounced after the data are embedded in the low-dimensional subspace. Results of experiments on the ORL, Yale, YaleB and PIE databases demonstrate that the proposed method markedly improves the classification ability of the original method and outperforms other typical methods.
Research on Acceleration of Matrix Multiplication Based on Parallel Scheduling on MPSoC
YANG Fei, MA Yu-chun, HOU Jin and XU Ning
Computer Science. 2017, 44 (8): 36-41.  doi:10.11896/j.issn.1002-137X.2017.08.007
Matrix multiplication is a basic algorithm of numerical analysis, graphics and image processing. General matrix multiplication accelerators have long been a research focus in embedded system design. However, due to its high complexity and low processing efficiency, matrix multiplication becomes a bottleneck for the computation speed of embedded systems. To make matrix multiplication usable in the embedded field, a software/hardware co-acceleration architecture based on MPSoC was proposed in this paper. With the MPSoC architecture, matrix partitioning under hardware constraints is implemented in our HW/SW system to enable the computation of general matrix multiplications. Parallel computation across multiple cores and a hardware function unit is realized with load-balancing algorithms, improving parallel efficiency and speed-up ratio. The experimental results show that the proposed general matrix multiplication approach achieves significant speed-up over traditional single-core approaches.
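The partitioning idea can be illustrated with a plain NumPy block-multiplication sketch; the distribution of tiles to cores and hardware units, which the paper handles with load-balancing algorithms, is only indicated in the comments:

```python
import numpy as np

def blocked_matmul(A, B, block=64):
    """Block-partitioned matrix multiplication: each (i, j) output tile is
    an independent task, so tiles could be distributed over cores or a
    hardware unit for load balancing (this sketch runs them serially)."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for i in range(0, n, block):
        for j in range(0, m, block):
            for p in range(0, k, block):  # accumulate partial products per tile
                C[i:i+block, j:j+block] += A[i:i+block, p:p+block] @ B[p:p+block, j:j+block]
    return C
```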
New Task Scheduling Technique for Multicore Arrays
CHEN Yi-ou, LV Xin-ke and LING Xiang
Computer Science. 2017, 44 (8): 42-45, 70.  doi:10.11896/j.issn.1002-137X.2017.08.008
As the complexity of signal processing increases, multicore parallel architectures have become an effective solution for digital signal processing (DSP) systems. This paper proposed a new task scheduling technique for DSP systems on wireless multicore arrays. Three optimization objectives, namely power consumption, thermal distribution and latency, were chosen to meet the performance and cost requirements of DSP systems and wireless multicore arrays, and their respective models were designed as the objective functions of a multi-objective optimization algorithm. In addition, an improved crowding strategy and initial population selection based on the NSGA-II algorithm with new fitness functions were proposed to balance the three optimization objectives and increase the possibility of exploring better solutions. Finally, experiments were conducted on several task graphs in wireless multicore systems. Simulation results show that the proposed algorithm is effective and achieves better performance than traditional ones.
Efficiency Optimization Method for MapReduce Similarity Computing Based on Spark
LIAO Bin, ZHANG Tao, YU Jiong, GUO Bing-lei and LIU Yan
Computer Science. 2017, 44 (8): 46-53.  doi:10.11896/j.issn.1002-137X.2017.08.009
With the exponential growth of both Internet users and content, similarity computation over big data needs to be more efficient. Since Spark's characteristics suit iterative and interactive tasks, the implementation of the algorithm was analyzed and the 2D-partition-based similarity algorithm was ported from MapReduce to Spark; parameter tuning, memory optimization and other measures further improved its efficiency. Experimental results with 2 data sets on 3 clusters of different sizes indicate that the Spark implementation is 4.715 times faster than the MapReduce one, while its energy consumption is only 24.86% of Hadoop's average, about a 4-fold improvement in energy efficiency over Hadoop.
Virtual Machine Placement Strategy Based on Dynamic Programming
ZHANG Xun, GU Chun-hua, LUO Fei, CHANG Yao-hui and WEN Geng
Computer Science. 2017, 44 (8): 54-59, 75.  doi:10.11896/j.issn.1002-137X.2017.08.010
In IaaS cloud environments, virtual machine placement is the key factor in resource allocation management. An improper placement strategy is likely to cause resource waste and extra energy consumption. Therefore, a multi-objective optimization model was established that aims to reduce the resource loss and energy consumption of the whole data center, and a virtual machine placement strategy based on dynamic programming was proposed. In this strategy, the placement problem is transformed into a multi-stage knapsack decision problem, in which the knapsack problem is divided into a series of smaller sub-problems following the idea of dynamic programming, and the optimal solution of the original problem is obtained by solving the optimal solutions of the sub-problems. Finally, simulation experiments show that this strategy greatly reduces the energy consumption and resource loss of the data center.
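A minimal single-host, single-resource sketch of the knapsack formulation (the paper's model is multi-stage and multi-objective; `vms` and `capacity` here are illustrative):

```python
def knapsack_placement(vms, capacity):
    """0/1 knapsack over one host: vms is a list of (demand, value) pairs;
    pick the subset maximizing value under the capacity constraint."""
    dp = [0] * (capacity + 1)
    choice = [[False] * (capacity + 1) for _ in vms]
    for i, (demand, value) in enumerate(vms):
        for c in range(capacity, demand - 1, -1):  # reverse scan: each VM used once
            if dp[c - demand] + value > dp[c]:
                dp[c] = dp[c - demand] + value
                choice[i][c] = True
    # Backtrack to recover which VMs were placed.
    placed, c = [], capacity
    for i in range(len(vms) - 1, -1, -1):
        if choice[i][c]:
            placed.append(i)
            c -= vms[i][0]
    return dp[capacity], placed

# Example: (CPU demand, utility) pairs on a host with 10 CPU units.
print(knapsack_placement([(4, 7), (3, 5), (5, 9), (2, 2)], 10))
```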
Non-uniform Clustering Routing Algorithm
HE Chao and WANG Kun
Computer Science. 2017, 44 (8): 60-63.  doi:10.11896/j.issn.1002-137X.2017.08.011
On the basis of analyzing classical clustering routing algorithms, and in order to prolong network lifetime, a non-uniform clustering routing algorithm was proposed that redesigns intra-cluster node organization, the treatment of isolated nodes, and inter-cluster transmission. Compared with the EEUC and UCRA algorithms, the proposed algorithm achieves higher node energy utilization and a clearly longer network lifetime.
Crowdsourcing-based Indoor Localization via Embedded Manifold Matching
ZHOU A-peng, QIN Xi-zhong, JIA Zhen-hong and NIKOLA Kasabov
Computer Science. 2017, 44 (8): 64-70.  doi:10.11896/j.issn.1002-137X.2017.08.012
With the boom of pervasive applications, indoor localization is becoming more and more important. The traditional fingerprint-based positioning method requires a site survey whose time and workload are huge, and the fingerprints need real-time updating to adapt to changes in the room; these factors greatly limit its scope of application. Therefore, crowdsourcing was utilized to collect indoor information and record a large number of paths. The consistency of the low-dimensional embedded manifold of these paths was used for geographical position matching in order to build the location fingerprint database. A Gaussian particle filter was used to denoise the sensor data and further address the problem of pedestrian step differences. According to the continuity of the user's location and the path information, reasonable nearest-neighbor points were selected and accurate positioning was realized. Experiments carried out in a meeting room of 84 m² achieve accuracy comparable to the traditional method. The proposed method adapts to environmental changes in real time, and its positioning accuracy remains better than the traditional positioning method after 2 weeks and even after 1 month.
Node Localization Based on Multipath Distance and Neural Network in WSN
YAN Jun-ya, QIAN Yu-hua, LI Hua-feng and MA Shang-cai
Computer Science. 2017, 44 (8): 71-75.  doi:10.11896/j.issn.1002-137X.2017.08.013
In order to realize object localization in 802.15.4a wireless sensor networks, a new algorithm for object localization and detection based on multipath distance and neural networks was proposed in this paper. Firstly, the time difference of arrival is estimated when the presence of the object disturbs the multipath effect, and the multipath distance between communicating sensor nodes is calculated. Then the multipath distance is used as the input of a neural network, and known object locations are used for neural network training. Finally, a minimum-cost objective function of the difference between the multipath distance estimate and its measurement is used to locate the object. Simulation results for single-object and multi-object localization and detection show that, with the proposed algorithm, the cumulative distribution function of the error does not grow and the positioning error stays smaller than with other localization algorithms even when the number of sensors and objects in the network increases; thereby the robustness of the network and its tolerance to sensor failures are improved.
Energy-consumption Optimization Strategy Based on Cooperative Caching for Content-centric Network
XU Hui-qing, WANG Gao-cai and MIN Ren-jiang
Computer Science. 2017, 44 (8): 76-81, 106.  doi:10.11896/j.issn.1002-137X.2017.08.014
Most existing research on caching decisions in CCN (Content-Centric Networking) does not comprehensively consider related factors such as request hotness, network energy consumption, content popularity and node cooperation. This article proposed an energy-efficient caching strategy based on cooperative caching. The strategy regards the content routers in an autonomous domain as a cooperative cache group and divides the cache capacity of every node in the group into two sections, one for cooperatively cached content and the rest for independently caching the locally most popular content, so as to improve replica diversity in the group, reduce duplicate content transmission, and thus minimize network energy consumption. We established the corresponding energy consumption model and used an improved genetic algorithm (GA) to solve the energy optimization problem in the cooperative cache group. The results show that, in contrast to current strategies, the proposed strategy effectively reduces the energy consumption of CCN, which improves its scalability and can guide the evolution and deployment of CCN.
Resource Allocation for D2D Communication Underlaid Cellular Networks Using Bipartite Hypergraph
WANG Zhen-chao, ZHAO Yun and XUE Wen-ling
Computer Science. 2017, 44 (8): 82-85, 94.  doi:10.11896/j.issn.1002-137X.2017.08.015
In this paper, we proposed a bipartite-hypergraph-based spectrum sharing algorithm for device-to-device (D2D) communication underlaid cellular networks. Our design aims to maximize the system sum-rate under the assumption that each channel can be assigned to multiple links. To solve this NP-hard problem, we proposed the concept of a bipartite hypergraph, construction rules for hyper-edges, and an optimal matching algorithm. Simulation results show that, compared with the weighted-bipartite-graph-based algorithm, our algorithm increases the system sum-rate by approximately 40 b/s/Hz and improves the system capacity by about 50%.
Type of Data Gathering Algorithm Based on Uneven Clustering for Hybrid Wireless Sensor Networks
SHA Chao, WU Meng-ting and WANG Ru-chuan
Computer Science. 2017, 44 (8): 86-89, 114.  doi:10.11896/j.issn.1002-137X.2017.08.016
A data gathering algorithm for hybrid wireless sensor networks was proposed in this paper. The network is divided into non-uniform grids, and a primary as well as a secondary cluster head is selected in each grid to construct data collection paths for transmitting scalar and vector sensor data. Simulation results show that, compared with the layer-based data gathering protocol MTP and the cluster-based protocol CDFUD, the proposed algorithm performs well in balancing energy consumption.
Virtualization Deep Packet Inspection Deployment Method
WANG Xue-shun, YU Shao-hua and DAI Jin-you
Computer Science. 2017, 44 (8): 90-94.  doi:10.11896/j.issn.1002-137X.2017.08.017
Network function virtualization (NFV) changes the network architecture and the deployment of network services. In a virtual network architecture, traffic needs to be scanned only once by virtualized deep packet inspection (DPI), but DPI deployment is a difficult problem. In this paper, DPI engine deployment was formulated as an integer linear programming (ILP) problem subject to several constraints. A greedy algorithm based on cost minimization and an optimized greedy algorithm were proposed to solve the deployment problem of the deep packet inspection function. The proposed algorithms trade off DPI deployment cost against network resource cost and minimize the total deployment cost. Simulation results show that the proposed scheme achieves an approximately optimal DPI deployment.
Cache Location Decision and Operating Allocation Schema Based on SDN in WMN
YE Xiao-qin, REN Yan-yang, SUN Ting and HENIGULI·Wumaier
Computer Science. 2017, 44 (8): 95-99, 123.  doi:10.11896/j.issn.1002-137X.2017.08.018
To address the low cache management efficiency of information-centric networks, a method was proposed that makes full use of the software defined network (SDN) concept in a wireless mesh network (WMN) environment. The main work lies in SDN-based content management: network topology, content size and the location of cache node resources are considered for cache location decisions. Client requests and cache node positions are considered for cache operations, which are divided into off-path caching through branch points and on-path caching through cache-node content streams. The controller determines the distribution of allocation work via the cached content table. Experiments were carried out in two environments: a small network where clients are locally converged and a large network where clients are locally distributed. The proposed schemes decrease the average response delay by 23.95% with only 5.13 kb of control traffic load per second compared with a random cache location scheme. Compared with other in-network cache schemes, node cache efficiency in the WMN is maximized and content cache allocation performance is significantly enhanced, without imposing large additional overhead.
Personalized Trust Model in Network Environment
LIAO Xin-kao, WANG Li-sheng, LIU Xiao-jian and XU Xiao-jie
Computer Science. 2017, 44 (8): 100-106.  doi:10.11896/j.issn.1002-137X.2017.08.019
Trust is the foundation of human society and plays an important role in science, technology, business, daily life and other fields; a healthy society cannot function without it. On the basis of studying the defects of existing trust models and combining various trust situations from real life, a personalized trust model for network environments was proposed. This trust model can identify different trust meanings for various entities, and can recommend to users all trust paths that satisfy the given conditions in a specific context. Thanks to its better dynamic adaptability and context correlation, the model effectively improves the accuracy of trust modeling, as demonstrated by the experimental results.
Temporal-Spatial-based Mandatory Access Control Model in Collaborative Environment
FAN Yan-fang
Computer Science. 2017, 44 (8): 107-114.  doi:10.11896/j.issn.1002-137X.2017.08.020
Secure information sharing is a common goal for any information system. Critical applications in collaborative environments put forward higher requirements for the security and flexibility of information sharing. The existing mandatory access control models based on the BLP model cannot meet the access control requirements of critical applications in collaborative environments. In this paper, a temporal-spatial-based mandatory access control model was proposed, which integrates task, time and space issues into the access control model. Logical security is integrated with physical location, so the model not only enhances the security of access control but also meets the flexibility requirements of access control in collaborative environments. The security of the model was proved with non-interference theory.
Acquiring Minimal Role Set Algorithm in Role Engineering
HAN Dao-jun
Computer Science. 2017, 44 (8): 115-123.  doi:10.11896/j.issn.1002-137X.2017.08.021
Role engineering, which focuses on role acquisition and optimization, is an important research area in role-based access control (RBAC). Recently, there has been much research on role acquisition and optimization. However, existing approaches either have high time complexity (NP-complete) or cannot guarantee that the acquired results are optimal. In this paper, we proposed a new algorithm to acquire the optimal (minimal) role set. Our algorithm has polynomial time complexity and is guaranteed to acquire the optimal result. We first preprocessed the role set using an algebraic measure, introducing the maximum linearly independent group to simplify the role set. Then, after analyzing the characteristics of each set operator, we built equivalence classes using the concept lattice model and finally obtained the optimal role set. The experimental results show that our algorithm is effective.
Novel Trajectory Privacy Preserving Mechanism Based on Dummies
DONG Yu-lan and PI De-chang
Computer Science. 2017, 44 (8): 124-128, 139.  doi:10.11896/j.issn.1002-137X.2017.08.022
The popularity of location-based services (LBS) has brought great convenience to people's lives, but it also causes serious privacy leakage. Dummies are a popular technology at present, but most existing methods do not take the individual needs of users into account. To address this problem, an improved privacy model was proposed, and guided by it we designed a dummy trajectory generation algorithm. The model includes five reasonable parameters, namely short-term disclosure, long-term disclosure, trajectory distance deviation, trajectory local similarity and service request probability. Users can customize these metrics according to their own needs and generate dummies with the algorithm to avoid privacy leakage. The experimental results show that the algorithm can satisfy the same privacy-preserving requirement with fewer dummy trajectories, so it is more effective than existing work in protecting movement trajectories, especially when the service request probability is taken into account.
Survivability Evaluation Model for Wireless Sensor Network under Multiple Attacks
LIU Zhi-feng, CHEN Kai, LI Lei and ZHOU Cong-hua
Computer Science. 2017, 44 (8): 129-133, 161.  doi:10.11896/j.issn.1002-137X.2017.08.023
Survivability of wireless sensor networks has emerged as a fundamental concern for sensor network deployment. A survivable wireless network must continuously supply critical services under concurrent attacks. To meet this requirement, a survivability evaluation model of WSNs under mixed attacks was presented, in which the network topology is described as cluster-based. Because several nodes may be attacked within a single cluster, a threshold mechanism was designed to trigger transitions between the states of a WSN under mixed attacks, so that countermeasures can be taken more accurately. Using a continuous-time Markov chain, a survivability evaluation model of the WSN is constructed, from which attributes such as availability and survivability can be obtained. Parameters that influence availability and survivability were analyzed. Simulation results show that raising the repair rate and the attack response rate effectively improves availability and survivability, and that the introduced model correctly reflects the different attack modes.
Research of Privacy-preserving Tag-based Recommendation Algorithm
CAO Chun-ping and XU Bang-bing
Computer Science. 2017, 44 (8): 134-139.  doi:10.11896/j.issn.1002-137X.2017.08.024
In tag-based recommendation, tags link users and information resources. However, compared with rating data, tag data has semantic properties and reflects user preferences more directly, so the privacy issues in tag-based recommendation are more serious. The recommender server collects users' historical tag records; once an attacker accesses this user information by compromising the server, serious leakage of user privacy results. A resource recommendation method based on tag k-means clustering with privacy protection (CDP k-meansRA) was proposed. Sender anonymity is provided by the Crowds network, and ε-differential privacy is fused into an improved tag-clustering-based recommendation algorithm. The experiments show that, compared with k-meansRA and similar methods, CDP k-meansRA maintains recommendation quality under the premise of preserving user privacy.
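A minimal sketch of one differentially private k-means step, assuming features scaled to [0, 1]; this is a generic Laplace-noise construction, not the paper's exact CDP k-meansRA algorithm:

```python
import numpy as np

def dp_kmeans_step(X, labels, k, epsilon):
    """One epsilon-differentially-private centroid update: Laplace noise is
    added to per-cluster sums and counts before the division."""
    d = X.shape[1]
    centers = np.empty((k, d))
    for j in range(k):
        members = X[labels == j]
        # Sensitivity assumptions: features in [0, 1], so one point changes
        # a cluster sum by at most 1 per dimension and the count by 1.
        noisy_sum = members.sum(axis=0) + np.random.laplace(0, d / epsilon, size=d)
        noisy_count = max(1.0, len(members) + np.random.laplace(0, 1.0 / epsilon))
        centers[j] = noisy_sum / noisy_count
    return centers
```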
Dynamic Business-oriented Access Control Model
TAN Ren, YIN Xiao-chuan, LI Xiao-hui and BIAN Yang-yang
Computer Science. 2017, 44 (8): 140-145, 167.  doi:10.11896/j.issn.1002-137X.2017.08.025
Aiming at the coarse-grained access control and the inability to adjust authorization dynamically during business processes in the traditional role-based access control (RBAC) model, a business-oriented dynamic RBAC model (BO-RBAC) was proposed in this paper. Taking the TBAC model as a reference, business steps and authorization steps are introduced into the model, and the basic model set is defined formally. Meanwhile, the authorization process is divided into two parts, role authorization and step authorization, and execution is treated as a random process, yielding a Markov-chain-based dynamic authorization method. Finally, the model was implemented in C++14. The BO-RBAC model combines the features of RBAC and TBAC, offering fine-grained access control, dynamic authorization adjustment, and compliance with security specifications.
Design and Characteristic Study on Fast Stream Cipher Algorithm Based on Camellia
DING Jie, SHI Hui, GONG Jing and DENG Yuan-qing
Computer Science. 2017, 44 (8): 146-150.  doi:10.11896/j.issn.1002-137X.2017.08.026
As a block cipher selected in the NESSIE encryption standard, the Camellia algorithm offers security and applicability comparable to AES. In this paper, a novel fast stream cipher algorithm was proposed based on Camellia. The idea is to extract parts of the internal state at certain rounds of the round function F and output them as the keystream. We analyzed the relevant characteristics of the new algorithm. The results show that the new algorithm achieves almost the same keystream generation speed and randomness as the optimum obtained by LEX. Besides, it can resist slide attacks, since both the input and the key change in each Camellia module.
Improved Opportunistic Routing Algorithm Based on Node Trustworthiness for WMNs
YIN Xin-qi, WU Jun, MO Wei-wei and BAI Guang-wei
Computer Science. 2017, 44 (8): 151-156.  doi:10.11896/j.issn.1002-137X.2017.08.027
Opportunistic routing improves the reliability and throughput of WMNs; at the same time, network performance degrades when malicious nodes exist in the candidate set. To timely identify and isolate malicious nodes, a node trustworthiness evaluation model was proposed. Based on a Bayesian network algorithm and considering abnormal behavior caused by non-invasive factors, an uncertainty interaction factor is introduced to improve the estimation of direct trust, and entropy is used to assign weights for calculating and updating trust values. The trustworthiness of a node is obtained by combining its trust value with a behavior positivity factor introduced to reflect the node's real participation, and future trustworthiness is predicted for nodes whose trustworthiness is uncertain, so as to identify potential malicious nodes. Finally, the model was applied to ExOR, and a trustworthiness-based opportunistic routing algorithm, BTOR, was proposed. The experimental results show that the algorithm effectively detects malicious nodes and outperforms the original routing algorithm.
Improved Algorithm for Privacy-preserving Association Rules Mining on Horizontally Distributed Databases
ZHANG Yan-ping and LING Jie
Computer Science. 2017, 44 (8): 157-161.  doi:10.11896/j.issn.1002-137X.2017.08.028
An improved homomorphic-encryption-based privacy-preserving algorithm for association rule mining in horizontally partitioned environments was proposed in this paper. The algorithm combines randomized response with partial hiding and homomorphic encryption technology, introduces a semi-trusted third party, disrupts and hides the data sets of each site, converts the horizontal format into a vertical format, computes local support counts by bit operations, and uses the Paillier algorithm to compute global support counts. The algorithm has several advantages: no communication is needed between sites, support counting is computationally efficient, I/O operations are few, and transport is secure. Experimental results show that the algorithm improves the computational efficiency of local support counting and reduces the number of I/O operations.
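The Paillier step can be illustrated with the python-paillier package (`pip install phe`); the counts and the surrounding protocol here are illustrative, not the paper's:

```python
# Additive homomorphism: ciphertexts of local supports can be summed
# without decrypting any individual site's count.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# Each site encrypts its local support count for an itemset.
local_supports = [13, 7, 21]
ciphertexts = [public_key.encrypt(s) for s in local_supports]

# A semi-trusted party sums the ciphertexts without seeing any count.
encrypted_global = ciphertexts[0]
for c in ciphertexts[1:]:
    encrypted_global = encrypted_global + c

print(private_key.decrypt(encrypted_global))  # 41: the global support
```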
Improved Influence Network Approach to Target Threat Assessment
CHEN Hui and MA Ya-ping
Computer Science. 2017, 44 (8): 162-167.  doi:10.11896/j.issn.1002-137X.2017.08.029
Target threat assessment is a key problem to be solved in operational decision making. Aiming at the limitation that the traditional influence network describes only binary event states, it was extended to describe N-ary event states. On this basis, an improved influence network model was established, and the constraint conditions of the effect parameters and conditional probabilities were derived. Taking targets of an air defense system as an example, and based on an analysis of target threat attributes, the cloud model was used to perform the qualitative-quantitative transformation of attribute values, and target threat assessment was carried out using the improved influence network method. Finally, a simulation example was given. The results verify the effectiveness and feasibility of the improved influence network method.
Improved Certificateless Aggregate Signature Scheme with Universal Designated Verifier
HU Xiao-ming, MA Chuang, SI Tao-zhi, JIANG Wen-rong, XU Hua-jie and TAN Wen-an
Computer Science. 2017, 44 (8): 168-175.  doi:10.11896/j.issn.1002-137X.2017.08.030
A certificateless aggregate signature scheme with universal designated verifier (CTL-ASWUDV) can effectively protect the privacy of the signer. An improved scheme (CTL-ASWUDV-1) was proposed to fix the problems in Zhang et al.'s CTL-ASWUDV scheme, namely an invalid construction and two types of adversary attacks. The improved scheme not only keeps the advantages of constant aggregate signature length and a constant number of bilinear pairing operations, but also withstands attacks from both types of adversaries. This paper further proposed a highly efficient scheme (CTL-ASWUDV-2). In the random oracle model, the security of the second improved scheme reduces to the computational Diffie-Hellman problem. Compared with existing similar schemes, the second scheme has the following advantages. It requires no bilinear pairing operation in either single or aggregate signing, and the number of pairing operations needed for aggregate signature verification is independent of the number of signers and equals that of a single signature verification, i.e., one pairing operation. The length of an aggregate signature and the length of a designated verifier signature are both independent of the number of signers and equal the length of a single signature, i.e., one element, which largely saves network bandwidth.
Cost-sensitive Software Defect Prediction Method Based on Boosting
YANG Jie, YAN Xue-feng and ZHANG De-ping
Computer Science. 2017, 44 (8): 176-180, 206.  doi:10.11896/j.issn.1002-137X.2017.08.031
Boosting resampling is a common way to expand data sets for small samples. Firstly, to counter the curse of dimensionality during resampling, a random feature selection method is used to reduce the dimensions. In addition, considering that software defect prediction penalizes missed defective modules and falsely reported clean modules differently, a cost-sensitive algorithm is added to the feature selection process. On the basis of multiple rounds of k-NN weak learning, and taking minimum cost as the principle, a predictor consisting of the k value and the attribute subset of the current sampling set is obtained; cost-sensitive theory is used to update the weight vector during Boosting resampling, giving different instances corresponding weights. An adaptive ensemble k-NN learner is constructed from all the predictors, and a software defect prediction model is established. Results on NASA's data sets show that, under small-sample conditions, this model largely reduces the rate of missed defective modules, while the false-report rate increases to some extent. On the whole, compared with the original boosting-based learning, the proposed cost-sensitive boosting method greatly improves prediction performance.
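A generic sketch of a cost-sensitive boosting round with k-NN weak learners, assuming NumPy arrays with label 1 for defective modules; the weight update is an assumption for illustration, not the paper's exact formula:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def cost_sensitive_boost(X, y, rounds=10, c_fn=5.0, c_fp=1.0):
    """AdaBoost-style loop with asymmetric costs: missing a defective
    module (false negative) is weighted c_fn times a false alarm, so the
    weights of missed defective modules grow faster."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    learners, alphas = [], []
    for _ in range(rounds):
        idx = np.random.choice(n, size=n, replace=True, p=w)  # weighted resample
        clf = KNeighborsClassifier(n_neighbors=5).fit(X[idx], y[idx])
        pred = clf.predict(X)
        err = np.sum(w * (pred != y))
        if err >= 0.5 or err == 0:
            break
        alpha = 0.5 * np.log((1 - err) / err)
        # Asymmetric update: missed defective modules get the larger cost.
        cost = np.where((y == 1) & (pred == 0), c_fn, c_fp)
        w *= np.exp(alpha * cost * (pred != y))
        w /= w.sum()
        learners.append(clf)
        alphas.append(alpha)
    return learners, alphas
```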
Diverse Ranking Algorithm Based on Minimal Independent Dominating Set
YIN Jia, CHENG Chun-ling and ZHOU Jian
Computer Science. 2017, 44 (8): 181-186.  doi:10.11896/j.issn.1002-137X.2017.08.032
To meet diversified needs and improve user query satisfaction, research on diverse ranking algorithms has developed. However, current diverse ranking algorithms cannot strike a good balance between diversification and relevance, and their query processing efficiency cannot fully meet the interaction demands in practice. Thus, a new diverse ranking algorithm based on the minimal independent dominating set (MIDS-DR) was proposed. First, the problem of selecting a diversified subset is transformed into finding a minimal independent dominating set of an undirected weighted graph, so as to balance the diversification and relevance of query results. To speed up the algorithm, the concept of an abandoned subset is introduced to avoid comparing distances between redundant vertex pairs during the solving process. The simulation results show that the proposed algorithm improves both ranking performance and query efficiency.
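A greedy, unweighted sketch of the minimal-independent-dominating-set idea (the paper works on an undirected weighted graph and additionally prunes with an abandoned subset):

```python
def greedy_mids(graph):
    """Greedy minimal independent dominating set on an undirected graph given
    as {vertex: set(neighbors)}. Reading: edges connect overly similar
    results, so the chosen set is mutually diverse (independent) yet still
    dominates, i.e. stays close to, every other result."""
    remaining = set(graph)
    chosen = set()
    while remaining:
        # Pick the vertex dominating the most still-uncovered vertices.
        v = max(remaining, key=lambda u: len(graph[u] & remaining))
        chosen.add(v)
        remaining -= {v} | graph[v]  # v and its neighbors are now dominated
    return chosen

# Example: results 0 and 1 are near-duplicates, as are 2 and 3.
g = {0: {1}, 1: {0}, 2: {3}, 3: {2}, 4: set()}
print(greedy_mids(g))  # one result per duplicate pair, plus 4
```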
Context-dependent Double-layered Data Model for Indoor Space
LI Jing-wen, LIU Yu-lei and QIN Xiao-lin
Computer Science. 2017, 44 (8): 187-192.  doi:10.11896/j.issn.1002-137X.2017.08.033
In the management of moving objects in indoor space, how to build the data model is the most important problem to be solved. With the development of context-aware information systems, the concept of context is receiving more and more attention, and how to integrate context and preference information into indoor spatial data management has become a focus. Aiming at this problem and considering three kinds of information, namely geometry, topology and context, a context-dependent double-layered data model for indoor space was built. After analyzing classical space partition methods, we introduced a new method called fine-grained partition for indoor space and gave its formal definition. We adopted the idea of hierarchical complementarity to organize the indoor space, and context information was added to the model using an ontology, which provides a flexible representation of indoor space. Finally, the feasibility and validity of the modeling method and the advantages of the model are illustrated by examples.
Sparse Trajectory Destination Prediction Algorithm Based on Markov Model
XU Guang-gen, YANG Lu and YAN Jian-feng
Computer Science. 2017, 44 (8): 193-197, 224.  doi:10.11896/j.issn.1002-137X.2017.08.034
With the popularity of mobile devices and the maturity of location technologies, a variety of location-based applications have emerged. For such applications to provide accurate location-based services, it is particularly important to forecast the uncertain trajectories of moving objects in a timely, accurate and reliable way. Currently, most traditional methods predict the destination of a given trajectory by computing the similarity between two trajectories; such algorithms do not fully exploit the backward and forward linkages within a trajectory time series, resulting in larger prediction error. Theory shows that Markov models handle time series data very well. Therefore, we proposed a sparse trajectory destination prediction algorithm based on a Markov model, together with a new partitioning strategy for the sample motion space: grid partitioning based on a K-d tree. The experimental results show that, compared with traditional methods, using the Markov model to predict the destination of a trajectory significantly improves accuracy and shortens prediction time.
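A minimal first-order sketch, assuming trajectories are already mapped to grid-cell indices (the K-d-tree grid construction is omitted):

```python
import numpy as np

def train_transitions(trajectories, n_cells):
    """First-order Markov model over grid cells: count cell-to-cell moves
    in historical trajectories, then row-normalize into probabilities."""
    counts = np.zeros((n_cells, n_cells))
    for traj in trajectories:
        for a, b in zip(traj, traj[1:]):
            counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

def predict_destination(P, current_cell, steps=5):
    """Distribution over cells after `steps` further moves from the last
    observed cell; the argmax serves as the destination estimate."""
    dist = np.zeros(P.shape[0])
    dist[current_cell] = 1.0
    for _ in range(steps):
        dist = dist @ P
    return int(dist.argmax())
```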
MB-RRT* Based Navigation Planning Algorithm for UAV
CHEN Jin-yin, SHI Jin, DU Wen-yao and WU Yang-yang
Computer Science. 2017, 44 (8): 198-206.  doi:10.11896/j.issn.1002-137X.2017.08.035
With the wide application of unmanned aerial vehicles (UAVs), it is important to improve their automatic navigation capacity. UAV navigation algorithms automatically find an obstacle-free, smooth path from a start position to a target position. Most current navigation algorithms for UAVs still suffer from low convergence speed, large memory cost, fixed navigation step settings, and smoothing challenges. The MB-RRT* algorithm proposed in this paper offers three key strategies for UAVs. Lazy sampling is adopted to improve convergence speed and reduce memory cost. A self-adaptive step length algorithm is applied to overcome navigation limitations near obstacles and improve the quality and speed of initial solutions. Down-sampling and curve fitting are introduced to improve the convergence rate of the algorithm and the smoothness of the final path. Finally, extensive simulations were carried out to verify the performance of MB-RRT* compared with RRT* and BRRT*.
Research on Text Classification Algorithm Based on Triadic Concept Analysis
LI Zhen, ZHANG Zhuo and WANG Li-ming
Computer Science. 2017, 44 (8): 207-215.  doi:10.11896/j.issn.1002-137X.2017.08.036
With the emergence of three-dimensional data on the network, the advantages of triadic concept analysis (TCA) have gradually become apparent. As a relatively new field, TCA has a bright prospect. This paper proposed a text classification algorithm based on TCA, a novel idea that advances the application of TCA. The main idea is to first preprocess the dataset and convert it into a triadic context, while extending the binary relation in the context to a fuzzy value between 0 and 1 that represents the membership degree of an attribute for an object under certain conditions. On this basis, we build triadic concepts and use them to express the ternary relation among text, term and category. Then, combined with the approach degree from fuzzy theory, we derive a similarity formula for triadic concepts by analogy and accordingly calculate a new text's triadic-concept similarity against the training set. Compared with support vector machine (SVM), K-nearest neighbor (KNN) and convolutional neural network (CNN) algorithms, as well as classification based on the formal concept analysis model, the results indicate that the proposed model is effective and achieves better performance on the given datasets.
Metaheuristic Algorithm for Solving Multi-school Heterogeneous School Bus Routing Problem
HOU Yan-e, KONG Yun-feng, DANG Lan-xue and WANG Yu-jing
Computer Science. 2017, 44 (8): 216-224.  doi:10.11896/j.issn.1002-137X.2017.08.037
This paper addressed the multi-school bus routing problem (SBRP) with different types of school buses. Based on iterated local search (ILS), we proposed a metaheuristic algorithm that employs neighborhood operators originally designed for the pickup and delivery vehicle routing problem with time windows (PDPTW) to improve the route solution. In the local search phase, the algorithm adopts variable neighborhood descent (VND) to search neighborhood solutions with these operators. Guided by a route-segment-based adjustment strategy, the operators adjust bus types as much as possible to reduce cost. The algorithm accepts some worse neighborhood solutions within a cost deviation range to keep the search diversified, and uses a multi-point-shift perturbation method to avoid being trapped in local optima. Results on benchmark datasets show that the proposed algorithm effectively solves both mixed-load and single-load SBRP. In addition, for the multi-school homogeneous SBRP, our algorithm outperforms existing algorithms such as the post-improvement heuristic, simulated annealing, and the record-to-record travel algorithm.
Imbalanced Data Classification Method Based on Neighborhood Hybrid Sampling and Dynamic Ensemble
GAO Feng and HUANG Hai-yan
Computer Science. 2017, 44 (8): 225-229.  doi:10.11896/j.issn.1002-137X.2017.08.038
Class imbalance severely affects the performance of traditional classification algorithms, lowering the recognition rate of the minority class. To solve this problem, a hybrid sampling technique based on neighborhood characteristics was proposed to enhance the classification accuracy of the minority class. The technique adjusts the sampling weight according to the class distribution in each sample's neighborhood and uses hybrid sampling to obtain balanced data subsets. Base classifiers are then generated, and for each test sample a dynamic ensemble method based on local confidence selects the optimal set of base classifiers. Experiments on UCI datasets show that the method achieves high classification accuracy on both minority and majority classes for imbalanced datasets.
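A sketch of deriving sampling weights from neighborhood class distributions with scikit-learn; the weighting formula is an assumption for illustration, not the paper's exact rule:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighborhood_weights(X, y, minority=1, k=5):
    """Sampling weights from neighborhood characteristics: a minority
    sample surrounded by majority neighbors sits near the class boundary
    and receives a larger oversampling weight. X, y are NumPy arrays."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)            # idx[:, 0] is the point itself
    weights = np.ones(len(y))
    for i in np.where(y == minority)[0]:
        majority_frac = np.mean(y[idx[i, 1:]] != minority)
        weights[i] += majority_frac * k  # boundary points weighted higher
    return weights / weights.sum()
```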
Collaborative Filtering Recommendation Algorithm Combing Category Information and User Interests
HE Ming, XIAO Run, LIU Wei-shi and SUN Wang
Computer Science. 2017, 44 (8): 230-235, 269.  doi:10.11896/j.issn.1002-137X.2017.08.039
Collaborative filtering is the most successful and widely used information technology for personalized prediction, exploiting users' historical behaviors. The accuracy of recommendation depends on the effectiveness of the similarity measure. Traditional similarity measures mainly consider the similarity of common ratings but ignore the category information of the rated items, and thus suffer from the data sparsity problem. To address this issue, we proposed a rating matrix filling method based on category information, combined it with a user interest similarity calculation method, and fully exploited category information to make the interest measure more realistic. The experimental results show that the proposed algorithm relieves the impact of rating sparsity on collaborative filtering and improves recommendation accuracy, diversity and novelty.
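One plausible reading of the category-based filling step, sketched with NumPy (the exact filling rule in the paper may differ):

```python
import numpy as np

def fill_by_category(R, item_category):
    """Fill unrated entries with the user's mean rating over items of the
    same category. R is a users x items matrix with 0 meaning 'unrated';
    item_category gives one category label per item column."""
    R = R.astype(float)
    filled = R.copy()
    categories = np.asarray(item_category)
    for c in np.unique(categories):
        cols = categories == c
        block = R[:, cols]
        rated = block > 0
        # Per-user mean within this category; users with no ratings keep 0.
        sums, counts = block.sum(axis=1), rated.sum(axis=1)
        means = np.divide(sums, counts, out=np.zeros_like(sums), where=counts > 0)
        filled[:, cols] = np.where(rated, block, means[:, None])
    return filled
```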
Micro-blog’s Text Classification Based on MRT-LDA
PANG Xiong-wen, WAN Ben-shuai and WANG Pan
Computer Science. 2017, 44 (8): 236-241, 259.  doi:10.11896/j.issn.1002-137X.2017.08.040
The widespread use of micro-blogs has produced a large amount of micro-blog data containing much valuable information. However, because micro-blog texts are short and carry their own social network information, traditional model-based methods and traditional text mining algorithms are not very effective for this kind of special text. Based on latent Dirichlet allocation (LDA), this paper put forward a micro-blog generation model, MRT-LDA, tailored to the characteristics of micro-blog information, which takes the relations between a Chinese micro-blog document and other Chinese micro-blog documents into consideration to help topic mining. Gibbs sampling is used for model inference, and the results indicate that the model offers an effective solution to text mining for Chinese micro-blogs.
Probabilistic Weighted Extreme Learning Machine for Robust Modeling
ZHOU Chuang, FAN Bin, ZHU Lei and LU Xin-jiang
Computer Science. 2017, 44 (8): 242-245.  doi:10.11896/j.issn.1002-137X.2017.08.041
Extreme learning machine (ELM) has attracted much attention in the machine learning field and achieved great success in applications. However, it is sensitive to outliers and non-Gaussian noise in the training dataset, which greatly hinders its application. A probabilistic weighted ELM was proposed to model datasets with outliers and non-Gaussian noise. First, distributed local ELM models are built, and the probability density function (PDF) of the multiple local models is estimated by the Parzen window method. The estimated densities are then used as weights to integrate all local models into a global robust ELM model. Successful application of this robust probabilistic weighted ELM to both an artificial case and a real-life case, as well as comparison with traditional ELM, regularized ELM and robust ELM, demonstrates its superiority in modeling.
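A structural sketch of the Parzen-window weighting, assuming local models expose a `predict` method; the ELM training itself is omitted and the combination rule is an illustrative assumption:

```python
import numpy as np
from scipy.stats import gaussian_kde

def parzen_weighted_predict(local_models, train_splits, x_query):
    """Combine local models by Parzen-window density weights: each local
    model is trusted in proportion to the estimated density of its own
    training inputs at the query point. train_splits[i] is the (n_i, d)
    input matrix that local_models[i] was trained on."""
    weights = []
    for X_local in train_splits:
        kde = gaussian_kde(X_local.T)        # Parzen window density estimate
        weights.append(float(kde(x_query.reshape(-1, 1))))
    weights = np.asarray(weights)
    weights /= weights.sum()
    preds = np.array([m.predict(x_query[None, :])[0] for m in local_models])
    return float(np.dot(weights, preds))     # density-weighted global output
```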
Hybrid Collaborative Filtering Recommendation Algorithm Based on Friendships and Tag
ZENG An and XU Xiao-qiang
Computer Science. 2017, 44 (8): 246-251.  doi:10.11896/j.issn.1002-137X.2017.08.042
The performance of a recommendation system is greatly affected by data sparseness. To solve this problem, a hybrid collaborative filtering recommendation algorithm based on social network and tag information was proposed in this paper. Topological similarity among user nodes can be incorporated into link prediction in a social network, and a circle of friends reveals a user's interests. Thus, a network resource allocation algorithm is first utilized to extract social network structure information. Then, tag information is reasonably extracted with the TF-IDF method. Finally, recommendations are made by linearly combining the social network structure information and the tag information. Experimental results on the Last.fm and Delicious datasets suggest that the proposed algorithm is superior to other advanced approaches in both accuracy and reliability.
Scientific Workflow Scheduling Algorithm Based on Hybrid Multi-objective Particle Swarm Optimization in Cloud Environment
DU Yan-ming and XIAO Jian-hua
Computer Science. 2017, 44 (8): 252-259.  doi:10.11896/j.issn.1002-137X.2017.08.043
To schedule scientific workflow tasks more efficiently, the multi-objective optimization problem of workflow scheduling in cloud environments was studied, and HPSO, a hybrid particle swarm optimization workflow scheduling algorithm based on non-dominated sorting, was presented. First, a multi-objective optimization model of workflow scheduling under budget and deadline constraints is established with three optimization objectives: workflow makespan, execution cost and execution energy consumption. Second, a hybrid particle swarm optimization algorithm is designed to solve the optimization over these three conflicting objectives; it obtains a Pareto-optimal set of workflow scheduling solutions via non-dominated sorting. Finally, through simulation experiments on three types of scientific workflow cases, the proposed algorithm was compared with multi-objective scheduling algorithms of the same type, such as NSGA-II, MOPSO and ε-Fuzzy. The experimental results show that the scheduling solutions obtained by HPSO not only converge better but also have more uniform spacing, better matching workflow scheduling optimization in cloud environments.
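The non-dominated sorting step can be sketched generically (this is not the full HPSO algorithm; the example objective vectors are illustrative):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Split (makespan, cost, energy) vectors into Pareto fronts; front 0
    is the set of non-dominated schedules returned to the user."""
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

schedules = [(10, 5.0, 3.1), (12, 4.0, 2.9), (11, 6.0, 3.5), (9, 7.0, 4.0)]
print(non_dominated_sort(schedules))  # first list holds the Pareto-optimal ones
```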
Regularized Fuzzy Twin Support Vector Machine
LI Kai, GU Li-feng and HU Shao-fang
Computer Science. 2017, 44 (8): 260-264.  doi:10.11896/j.issn.1002-137X.2017.08.044
The fuzzy twin support vector machine is an important machine learning method that overcomes the impact of noise and outlier data on classification. However, it still minimizes only the empirical risk, so overfitting easily occurs during training. To solve this problem, a modified fuzzy twin support vector machine model was presented by introducing a regularization term. The classifier is obtained by solving the model with quadratic programming and the over-relaxation method. Experiments on selected UCI datasets validate the effectiveness of the proposed method.
Chinese Word Sense Induction Model by Integrating Distance Metric and Gaussian Mixture Model
ZHANG Yi-hao, LIU Zhi and ZHU Chang-peng
Computer Science. 2017, 44 (8): 265-269.  doi:10.11896/j.issn.1002-137X.2017.08.045
Word sense induction is an important topic in word sense knowledge acquisition, and the most widely used approach to it is based on cluster analysis. By comparing the K-Means and EM clustering algorithms on the word sense induction model, we proposed a new hybrid clustering algorithm integrating a distance metric and a Gaussian mixture model, which combines the respective advantages of distance metrics and distribution-based computation in the two algorithms, exploiting both the geometric properties and the normal-distribution information of the training data, thereby improving the performance of the word sense induction model. Experimental results show that the proposed hybrid clustering algorithm effectively improves the performance of the word sense induction model.
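One plausible realization of the hybrid, sketched with scikit-learn: K-Means supplies distance-based initial centers for EM over a Gaussian mixture (this is an assumption about the combination, not necessarily the authors' exact scheme):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def hybrid_cluster(context_vectors, k):
    """Distance metric + Gaussian mixture hybrid: K-Means provides
    geometric initial centers, then EM refines soft, distribution-aware
    sense clusters."""
    km = KMeans(n_clusters=k, n_init=10).fit(context_vectors)
    gmm = GaussianMixture(n_components=k, means_init=km.cluster_centers_)
    return gmm.fit_predict(context_vectors)

X = np.random.randn(200, 20)       # stand-in for word-context feature vectors
print(hybrid_cluster(X, 3)[:10])   # induced sense label per occurrence
```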
Research on Evolution Model of Microblog Topic Based on Time Sequence
WANG Zhen-fei, LIU Kai-li, ZHENG Zhi-yun and WANG Fei
Computer Science. 2017, 44 (8): 270-273, 279.  doi:10.11896/j.issn.1002-137X.2017.08.046
Topic evolution research helps track user preferences and topic development trends, and is of great significance for public sentiment warning. Current topic evolution methods focus on using topic generation models for evolution analysis while ignoring the time factors of topics and background words. Based on the traditional topic generation model LDA, this paper extended it to the micro-blog topic generation model MTLDA. By accounting for background words, the MTLDA model improves the efficiency of topic generation. Meanwhile, the micro-blog topic set is divided into time slices, KL divergence is used to calculate the distance between adjacent time slices, and topic evolution is analyzed. Taking Sina Micro-blog data as an example, the experimental results show that the MTLDA model completes micro-blog topic generation over time slices, and the topic evolution results accord with the actual situation.
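A minimal sketch of the slice-to-slice comparison, assuming each time slice is a list of topic-word distributions (the symmetrization choice is an assumption):

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence D(p || q) between two topic-word distributions."""
    p = np.asarray(p) + eps
    q = np.asarray(q) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def match_topics(slice_t, slice_t1):
    """For each topic in time slice t, find the nearest topic in slice t+1
    by symmetrized KL: a small distance means the topic persists, while a
    large minimum distance suggests it faded or split."""
    links = []
    for i, p in enumerate(slice_t):
        d = [0.5 * (kl(p, q) + kl(q, p)) for q in slice_t1]
        links.append((i, int(np.argmin(d)), min(d)))
    return links
```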
Rule Acquisition of D-type Probabilistic Decision Formal Context
ZHAO Fan and WEI Ling
Computer Science. 2017, 44 (8): 274-279.  doi:10.11896/j.issn.1002-137X.2017.08.047
For uncertainty decision problems, the D-type probabilistic decision formal context was proposed. On this probabilistic decision formal context, a new operator "△" was defined, probabilistic concepts were obtained, and the corresponding concept lattice was constructed. Then we defined the consistency of the D-type probabilistic decision formal context and studied rule acquisition on consistent D-type probabilistic decision formal contexts. Furthermore, the rules were simplified by eliminating redundant ones. Finally, algorithms for generating the probabilistic concept lattice and acquiring the decision rules were presented.
Case Base Reasoning Method Improved by Memory and Forgetting Strategy
ZHANG Chun-xiao and ZHAO Hui
Computer Science. 2017, 44 (8): 280-284, 289.  doi:10.11896/j.issn.1002-137X.2017.08.048
Abstract PDF(518KB) ( 57 )   
References | Related Articles | Metrics
In case-based reasoning (CBR), as the case base keeps growing, the so-called "swamping problem" may arise when the time cost of retrieval exceeds the benefit in accuracy. From the perspective of cognitive science, a case base maintenance method with the abilities of selective memory and intentional forgetting was proposed, which selectively saves new cases and intentionally deletes old ones. Contrast experiments show the effectiveness of the proposed method: the selective memory and intentional forgetting policy significantly reduces time and space complexity while preserving or improving the accuracy of the CBR classifier, thus improving the overall performance of CBR.
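A minimal sketch of the two policies, with all thresholds, names and the distance measure assumed rather than taken from the paper:

    import numpy as np

    class CaseBase:
        """Sketch of selective memory and intentional forgetting (names assumed)."""
        def __init__(self, capacity=100, novelty=0.5):
            self.capacity, self.novelty = capacity, novelty
            self.cases, self.utility = [], []          # feature vectors, usage counts

        def remember(self, case):
            # Selective memory: store a new case only if it is sufficiently
            # different from its nearest stored neighbour.
            if self.cases:
                d = min(np.linalg.norm(np.asarray(c) - np.asarray(case))
                        for c in self.cases)
                if d < self.novelty:
                    return False
            self.cases.append(case)
            self.utility.append(0)
            if len(self.cases) > self.capacity:
                self.forget()
            return True

        def forget(self):
            # Intentional forgetting: drop the least-used case.
            i = int(np.argmin(self.utility))
            del self.cases[i], self.utility[i]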
Scaling-up Algorithm of Multi-scale Association Rules
LI Chao, ZHAO Shu-liang, ZHAO Jun-peng, GAO Lin and CHI Yun-xian
Computer Science. 2017, 44 (8): 285-289.  doi:10.11896/j.issn.1002-137X.2017.08.049
Abstract PDF(399KB) ( 59 )   
References | Related Articles | Metrics
Great achievements have been made in multi-scale data mining, yet the research is far from deep and complete. Current work mainly focuses on spatial and image data and pays little attention to multi-scale mining of general data. With the continuous development of big data applications, research on multi-scale data mining becomes particularly important. Regarding this issue, this paper studied scale-conversion methods for universal multi-scale association rule mining. First of all, it gave an approach to deriving frequent items based on the similarity theory of inclusion degree. Then it proposed an algorithm named MSARSUA (Multi-Scale Association Rules Scaling Up Algorithm) based on the theory of the image pyramid. Finally, experimental results on datasets from H province, UCI and IBM show that MSARSUA achieves higher coverage, higher F1-measure and lower estimation error of average support, outperforms both the Apriori and FP-Growth algorithms in efficiency, and possesses superior performance compared with the SU-ARMA algorithm.
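The aggregation step of scaling up (only one ingredient of such an algorithm; the inclusion-degree similarity machinery is omitted) can be illustrated as below, with all figures assumed:

    # Pyramid-style scale-up sketch: supports of one itemset observed at a finer
    # scale are combined into an estimated coarse-scale support, weighted by
    # each fine-scale dataset's transaction count.
    fine_supports = {"cityA": 0.12, "cityB": 0.20, "cityC": 0.15}
    fine_sizes    = {"cityA": 5000, "cityB": 8000, "cityC": 7000}

    total = sum(fine_sizes.values())
    coarse = sum(fine_supports[s] * fine_sizes[s] for s in fine_supports) / total
    print(f"estimated province-level support: {coarse:.4f}")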
Canonical Basis Based on Decision Implication of Decision Context
HE Jian-ying
Computer Science. 2017, 44 (8): 290-295.  doi:10.11896/j.issn.1002-137X.2017.08.050
Abstract PDF(471KB) ( 46 )   
References | Related Articles | Metrics
The decision implications of the decision context were introduced first. The decision context is partitioned into groups by an uncertainty threshold, from which the decision implications belonging to each sub-context are derived; it is further proved that the decision implication set of every sub-context is complete, non-redundant and optimal, and that the set of decision implications over all sub-contexts is complete, non-redundant and optimal on the whole decision context. The generation algorithm was optimized while producing the decision implications, and a derivation algorithm based on the decision context was given. Experiments show that the improved grouping strategy and algorithm suppress the generation of redundant decision implications more efficiently and robustly.
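As a toy illustration only, checking whether a decision implication holds in a crisp decision context reduces to a universal check over objects; the context below is an assumption:

    # A decision implication "premise -> conclusion" holds iff every object
    # possessing all premise attributes also possesses all conclusion attributes.
    context = {
        "o1": {"conditions": {"a", "b"}, "decisions": {"d1"}},
        "o2": {"conditions": {"a"},      "decisions": {"d2"}},
        "o3": {"conditions": {"a", "b"}, "decisions": {"d1"}},
    }

    def holds(premise, conclusion):
        return all(conclusion <= obj["decisions"]
                   for obj in context.values()
                   if premise <= obj["conditions"])

    print(holds({"a", "b"}, {"d1"}))   # True for the toy context above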
Segmentation of Lung CT Image Sequences Based on Improved Self-generating Neural Networks
LIAO Xiao-lei and ZHAO Juan-juan
Computer Science. 2017, 44 (8): 296-300, 317.  doi:10.11896/j.issn.1002-137X.2017.08.051
Abstract PDF(2407KB) ( 112 )   
References | Related Articles | Metrics
Existing lung segmentation methods cannot fully segment all lung parenchyma images, and their processing speed is slow. The position of the lung was used to obtain lung ROI sequences, and a superpixel sequence segmentation algorithm was then proposed to segment the ROI image sequences. In addition, an improved self-generating neural network was used for superpixel clustering, and grey-level and geometric features were extracted to identify and segment the lung image sequences. The experimental results show that the method's average processing time is 0.61 second per slice, with an average volume-pixel overlap ratio of 92.09±1.52%. Compared with existing methods, it achieves higher segmentation precision and accuracy in less time.
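A rough sketch of the pipeline's front end, assuming a synthetic slice: SLIC superpixels followed by clustering of per-superpixel mean intensities, with K-Means standing in for the paper's improved self-generating neural network:

    import numpy as np
    from skimage.segmentation import slic
    from sklearn.cluster import KMeans

    img = np.random.rand(128, 128)                    # placeholder for a CT slice
    labels = slic(img, n_segments=200, channel_axis=None)   # grayscale superpixels

    # Cluster superpixels by their mean grey level; K-Means is a stand-in here,
    # not the paper's self-generating neural network.
    means = np.array([img[labels == i].mean() for i in np.unique(labels)])
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
        means.reshape(-1, 1))                         # e.g. lung vs. background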
Algorithm of Image Salt and Pepper Noise Elimination Based on Particle Swarm Algorithm
ZHANG Ai-ling, LI Peng and LIU Sheng
Computer Science. 2017, 44 (8): 301-305.  doi:10.11896/j.issn.1002-137X.2017.08.052
Abstract PDF(1804KB) ( 90 )   
References | Related Articles | Metrics
To eliminate salt-and-pepper noise in images, we proposed an adaptive switching median filter algorithm based on the particle swarm algorithm. The proposed algorithm consists of two stages: noise detection and noise filtering. Unlike standard median filtering, the adaptive switching filter first generates a noise map of the polluted image, from which contaminated and uncontaminated pixels are identified. In the filtering stage, the filter computes a value from the neighboring pixels and replaces only the contaminated pixels. The simulation results show that the proposed algorithm is effective and improves the peak signal-to-noise ratio (PSNR) and image quality.
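A minimal sketch of the switching idea (the extreme-value detector is fixed by hand here; in the paper the detection is tuned via particle swarm optimization):

    import numpy as np
    from scipy.ndimage import generic_filter

    def switching_median(img):
        def repair(window):
            center = window[len(window) // 2]
            # stage 1: treat a pixel as noise only if it is an extreme value
            if center not in (0, 255):
                return center
            good = window[(window != 0) & (window != 255)]
            # stage 2: replace a detected pixel by the median of clean neighbours
            return np.median(good) if good.size else center
        return generic_filter(img.astype(float), repair, size=3)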
Iterated Unscented H∞ Filter Based Mobile Robot SLAM
LUO Yuan, SU Qin, ZHANG Yi and GUAN Guo-lun
Computer Science. 2017, 44 (8): 306-311.  doi:10.11896/j.issn.1002-137X.2017.08.053
Abstract PDF(445KB) ( 82 )   
References | Related Articles | Metrics
To alleviate the low estimation accuracy, serious inconsistency and poor robustness of mobile robot simultaneous localization and mapping (SLAM), a novel iterated unscented H∞ filter based SLAM algorithm was derived. The unscented transformation is introduced into the extended H∞ filter to estimate the mean and covariance of the system state, avoiding the derivation of Jacobian matrices and the accumulation of linearization errors while enhancing numerical stability. With an iterative update, the observations are used to repeatedly correct the state mean and covariance, further lowering the estimation error. The proposed algorithm was compared with EKF-SLAM, UKF-SLAM and CEHF-SLAM under different environments and noise levels in simulation experiments. The results show that the proposed SLAM maintains high estimation accuracy and robustness under severe noise and adapts to different environments, verifying the effectiveness and feasibility of the algorithm.
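For illustration, the unscented transform at the core of such a filter propagates 2n+1 sigma points through the nonlinear function instead of linearizing it; the motion function and parameters below are assumptions:

    import numpy as np

    def unscented_transform(f, mean, cov, kappa=1.0):
        """Propagate 2n+1 sigma points through f; no Jacobians required."""
        n = mean.size
        S = np.linalg.cholesky((n + kappa) * cov)          # matrix square root
        sigma = np.vstack([mean, mean + S.T, mean - S.T])  # 2n+1 sigma points
        w = np.full(2 * n + 1, 0.5 / (n + kappa))
        w[0] = kappa / (n + kappa)
        y = np.array([f(p) for p in sigma])
        y_mean = w @ y
        y_cov = (w[:, None] * (y - y_mean)).T @ (y - y_mean)
        return y_mean, y_cov

    # Toy usage (assumed motion model): a unicycle step propagated without Jacobians.
    f = lambda x: np.array([x[0] + np.cos(x[2]), x[1] + np.sin(x[2]), x[2]])
    m, P = unscented_transform(f, np.zeros(3), 0.1 * np.eye(3))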
Pulmonary Nodule Diagnosis Using Dual-modal Denoising Autoencoder Based on Extreme Learning Machine
ZHAO Xin, QIANG Yan and GE Lei
Computer Science. 2017, 44 (8): 312-317.  doi:10.11896/j.issn.1002-137X.2017.08.054
Abstract PDF(1078KB) ( 68 )   
References | Related Articles | Metrics
Existing deep learning frameworks for diagnosing lung cancer still focus mainly on lung computed tomography (CT) images, but a single imaging modality cannot achieve a sufficiently high diagnostic rate in daily diagnosis. Therefore, this paper proposed a pulmonary nodule diagnosis method using a dual-modal deep denoising autoencoder based on an extreme learning machine (SDAE-ELM), combining CT with positron emission tomography (PET), to improve diagnostic performance effectively. First, the method obtains discriminative feature information separately from the CT and PET input data. Second, it feeds the CT and PET images of candidate nodules into the whole network respectively. Third, it extracts high-level discriminative nodule features through alternately stacked denoising autoencoder layers. Finally, it takes a multi-feature fusion strategy as the output of the whole framework. The experimental results show that the classification accuracy of the proposed method reaches 92.81%, with sensitivity up to 91.75% and specificity up to 1.58%. The method achieves better discriminative results and is well suited to pulmonary nodule diagnosis.
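A sketch of the extreme-learning-machine readout alone (the stacked denoising autoencoder and fusion stages are omitted; all dimensions and data are placeholders): hidden weights stay random and fixed, and only the output weights are solved in closed form, which is what makes ELM training fast:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 64))          # fused CT+PET features (placeholder)
    Y = rng.integers(0, 2, size=(200, 1))   # benign / malignant labels (placeholder)

    n_hidden = 128
    W, b = rng.normal(size=(64, n_hidden)), rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                  # random, fixed nonlinear hidden layer
    beta = np.linalg.pinv(H) @ Y            # least-squares output weights
    pred = (H @ beta > 0.5).astype(int)     # in-sample predictions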
Robust Control Algorithm of Bionic Robot Based on Binocular Vision Navigation
LI Xiu-juan, LIU Wei and LI Shan-hong
Computer Science. 2017, 44 (8): 318-321.  doi:10.11896/j.issn.1002-137X.2017.08.055
Abstract PDF(343KB) ( 126 )   
References | Related Articles | Metrics
In the course of pose determination, a bionic robot is prone to control errors due to spatial perturbation factors, so accurate calibration is required and the positioning accuracy of the bionic robot needs to be improved. A robust control algorithm for bionic robots based on binocular vision navigation was proposed. An optical CCD binocular vision dynamic tracking system is used to measure the terminal position and orientation parameters of the bionic robot, and a kinematic model of the controlled object is established. Taking the six degree-of-freedom parameters of the robot's rotational joints as control constraint parameters, a hierarchical spatial motion planning model of the robot is established. Binocular vision tracking is then used to adaptively correct the position and orientation of the bionic robot, achieving robust control. Simulation results show that when the method is applied to the position control of a bionic robot, the fitting error of the end-position parameters is low and the dynamic tracking performance is good.
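As an illustrative aside, binocular measurement of a point on the robot's end effector reduces to triangulation in a rectified stereo pair; the focal length and baseline below are assumed values:

    import numpy as np

    def stereo_point(xl, xr, y, f=800.0, baseline=0.12):
        """Recover a 3-D point from matched pixel coordinates in a rectified
        left/right pair; f (pixels) and baseline (metres) are assumed values."""
        disparity = xl - xr                 # horizontal shift between the views
        Z = f * baseline / disparity        # depth grows as disparity shrinks
        X = xl * Z / f
        Y = y * Z / f
        return np.array([X, Y, Z])

    # A feature seen at x = 420 px (left) and 392 px (right), y = 240 px:
    print(stereo_point(420.0, 392.0, 240.0))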