Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 42, Issue 11, November 2015
Survey on Verifiable Delegation of Computation
SUN Yi, CHEN Xing-yuan, DU Xue-hui and XU Jian
Computer Science. 2015, 42 (11): 1-7.  doi:10.11896/j.issn.1002-137X.2015.11.001
To address the security of outsourced data and the verifiability of delegated computation, the theory of verifiable computation has recently begun to draw public attention. This paper illustrated how verifiable delegation of computation resolves these verifiability issues and provided a formal definition of verifiable computation. For the first time, current schemes were summarized and categorized under different classification methods, and their key techniques and deficiencies were illustrated. The schemes were then compared and analyzed in terms of both functionality and performance. Finally, combined with current application hotspots, the tendencies and development prospects of this research area were forecast for different application directions.
Research on Online Social Network
LI Li-yao, SUN Lu-jing and YANG Jia-hai
Computer Science. 2015, 42 (11): 8-21.  doi:10.11896/j.issn.1002-137X.2015.11.002
The online social network (OSN) has become the most popular application of the Web 2.0 age. Its services have extended from social relationship management to media information, application integration and e-commerce. With its huge number of active users, OSN plays a critical role in academic research areas such as online behavior, data security, information diffusion and other interdisciplinary studies. Since the appearance of Facebook, various research works have emerged that help us look inside the OSN. We firstly summarized the development of OSN and its challenges. Then we categorized and evaluated the research work in four aspects: measurement technology, user behavior analysis, information dissemination and user privacy. Finally, we summarized the problems of current work and the prospects of future research topics. The purpose of this paper is to provide some enlightenment for research in this field.
Research on Medical Image Registration Classification
LIU Yi-han, YAN De-qin and LIU Cai-feng
Computer Science. 2015, 42 (11): 22-27.  doi:10.11896/j.issn.1002-137X.2015.11.003
Medical image registration is an important subject in the medical image research field. Registration types include rigid and non-rigid registration, and registration forms include registration between different individuals and registration of different atlases of the same individual. This paper introduced three categories of image registration, based respectively on the processing, image characteristics and deformation. Non-rigid registration is more sophisticated than rigid registration in both stability and computational efficiency, and has greater application significance. With the application and development of computer technology, non-rigid registration has become a very active area of research in this field, and related models and methods have drawn people's attention.
Optimized Adaptive Hybrid Indexing for In-memory Column Stores
XUE Zhong-bin, ZHOU Xuan, ZHANG Yan-song, ZHOU Xin and WANG Shan
Computer Science. 2015, 42 (11): 28-31.  doi:10.11896/j.issn.1002-137X.2015.11.004
Analytical databases have been widely deployed in modern corporations, which pose increasing demands on the speed of data analysis. In the era of big data, analytical databases face a number of new challenges. Firstly, the complexity of data keeps increasing, so more effort has to be put into system configuration, such as index creation. Secondly, without prior knowledge about workload patterns, system administrators have to build and rebuild indexes repeatedly in order to meet time constraints. Apparently, traditional approaches to index construction and maintenance cannot work well in the new environment. Database cracking provides an alternative: a DBA does not need to fine-tune the system configuration; instead, the database automatically adjusts itself to fit the workload during query execution. In recent years, a series of database cracking algorithms have been proposed, but none of them is optimal in all situations. This paper proposed a cache-conscious cost model for database cracking. Based on the model, we created a new adaptive index that combines the advantages of several previous cracking approaches. Extensive experiments were conducted to demonstrate the effectiveness of our method.
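The paper's own cost model and adaptive index are not reproduced in the abstract, but the database-cracking mechanism it builds on can be sketched. In this minimal illustration (names and data are made up, not from the paper), each range query partially partitions the column in place around its bounds and records the crack positions, so an index emerges as a by-product of query execution:

```python
import numpy as np

def crack(column, index, low, high):
    """Adaptively partition `column` on the range query [low, high).

    `index` maps a crack value v to a position p such that
    column[:p] < v <= column[p:]. Each query refines the
    partitioning a little, so the column becomes progressively
    more sorted as a side effect of answering queries.
    """
    for pivot in (low, high):
        # Find the narrowest known piece that still contains `pivot`.
        lo = max((p for v, p in index.items() if v <= pivot), default=0)
        hi = min((p for v, p in index.items() if v > pivot), default=len(column))
        piece = column[lo:hi]
        # In-place partition of just that piece around the pivot.
        mask = piece < pivot
        column[lo:hi] = np.concatenate([piece[mask], piece[~mask]])
        index[pivot] = lo + int(mask.sum())
    return column[index[low]:index[high]]  # all values in [low, high)
```

Repeated queries keep refining the pieces, so later queries scan ever-smaller ranges; a cost model like the paper's cache-conscious one would decide how aggressively to split.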
Research on Histogram Generation Algorithm Optimization Based on OpenCL
AN Xiao-jing, ZHANG Yun-quan and JIA Hai-peng
Computer Science. 2015, 42 (11): 32-36.  doi:10.11896/j.issn.1002-137X.2015.11.005
Application developers increasingly adopt GPUs as standard computing accelerators to improve application performance, thanks to their easier programmability and increasing computing power. Histogram generation is a common algorithm in computer vision, widely used in image processing, pattern recognition and image search. With the growing scale of image processing and the demand for real-time performance, improving histogram generation on GPUs is in increasingly high demand. We introduced the implementation and optimization of histogram generation on GPUs to research the major optimization methodologies and technologies. Experimental results show that applying histogram-backup access optimization, memory optimization, data localization and merge optimization, among other strategies, brings a 1.8x-13.3x speedup on an AMD HD 7850 over the unoptimized version, and a 7.2x-210.8x speedup over CPU versions.
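One optimization named above, keeping a private histogram copy per work-group so that bin updates do not contend on shared memory, can be illustrated with a NumPy sketch. This is a serial stand-in for an OpenCL kernel; the group count and bin layout are illustrative assumptions, not the paper's kernel:

```python
import numpy as np

def histogram_privatized(data, n_bins, n_groups=8):
    """Per-group private histograms (the 'histogram backup' idea):
    each work-group accumulates into its own local copy, so updates
    are contention-free; a final reduction merges the copies."""
    chunks = np.array_split(data, n_groups)        # one chunk per work-group
    private = np.zeros((n_groups, n_bins), dtype=np.int64)
    for g, chunk in enumerate(chunks):             # 'parallel' accumulation phase
        np.add.at(private[g], chunk, 1)            # local, uncontended updates
    return private.sum(axis=0)                     # merge phase
```

On a GPU the merge phase is itself a parallel reduction; the trade-off is extra local memory per group in exchange for removing atomic contention on the shared bins.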
Node-level Memory Access Optimization on Intel Knights Corner
LIN Xin-hua, LI Shuo, ZHAO Jia-ming and MATSUOKA Satoshi
Computer Science. 2015, 42 (11): 37-42.  doi:10.11896/j.issn.1002-137X.2015.11.006
Traditional programming optimization (TPO) has limited effect on the Intel Knights Corner (KNC). Therefore, we proposed memory access optimization (MAO) for KNC. We applied MAO to the TPO version of Diffusion 3D and improved its performance by 39.1%. This paper makes two contributions: 1) MAO is indispensable on KNC, and TPO+MAO is the path to Ninja Performance, i.e. the best optimized performance; 2) intrinsic-based MAO is more efficient for stencil code than compiler-based MAO. Our findings on MAO will inspire optimizations of large-scale applications on KNC.
Pareto Optimization and Scheduling of Synchronous Dataflow Graphs on Heterogeneous Multicore Platform
GU Yu-lei, ZHU Xue-yang, YAN Rong-jie and ZHANG Guang-quan
Computer Science. 2015, 42 (11): 43-47.  doi:10.11896/j.issn.1002-137X.2015.11.007
Synchronous dataflow graphs (SDFGs) are widely used to model streaming applications such as multimedia and digital signal processing applications. Streaming applications are usually required to reach a high throughput to guarantee smooth running, and using heterogeneous multicore processors to improve their throughput has become a feasible solution. However, higher throughput is usually achieved at the cost of increased energy consumption. We presented a method to explore the Pareto space of energy consumption and throughput and to find the schedule of each Pareto point. A system model includes an SDFG description of the application and a heterogeneous multicore platform. The system model is transformed into a network of timed automata (NTA), and the optimization goals are formalized as temporal logic formulae. The NTA and formulae are then used as input to the real-time model checking tool UPPAAL. The Pareto points and corresponding schedules can be obtained from the traces provided by UPPAAL. Our method is exact, benefiting from the exhaustive search nature of model checking. It can help designers understand the quantitative relationship between energy consumption and throughput at design time, shortening development cycles and reducing the cost of system development.
Analysis of Architecture Characteristics of Big Data Workloads
LUO Jian-ping, XIE Meng-yao and WANG Hua-feng
Computer Science. 2015, 42 (11): 48-52.  doi:10.11896/j.issn.1002-137X.2015.11.008
Aiming at big data workloads with off-line analysis and interactive queries, we first analyzed the common features of these workloads, extracted their common set of operations and arranged the workloads into groups. Then we ran the big data workloads on the BigDataBench platform and obtained their micro-architectural characteristics, using PCA and SimpleKMeans for dimensionality reduction and clustering analysis. Our study revealed that big data workloads share a common set of operations, such as Join and Cross Product. We also observed that some of the big data workloads have many similar features; for example, the Difference and Projection operations share micro-architectural characteristics. The results of our experiments provide guidance for the design of hardware platforms such as processors and for the optimization of applications, and also offer valuable insights for the implementation of big data benchmark platforms.
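The dimensionality-reduction-then-clustering pipeline described above (PCA followed by SimpleKMeans over micro-architectural metrics) can be sketched with NumPy. The toy data and parameters below are illustrative assumptions, not the BigDataBench measurements; a plain Lloyd's k-means stands in for Weka's SimpleKMeans:

```python
import numpy as np

def pca(X, k):
    """Project rows of X (workloads x metrics) onto the top-k
    principal components via SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def kmeans(X, k, iters=50):
    """Plain Lloyd's k-means with a deterministic spread-out init."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```

Reducing first with PCA both denoises the metric vectors and makes the resulting clusters easier to interpret along a few dominant axes.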
Realization and Optimization of Cross-correlation Based on YHFT-QDSP
YANG Lin, WU Jia-zhu, HU Xiao and TIAN Xi
Computer Science. 2015, 42 (11): 53-55.  doi:10.11896/j.issn.1002-137X.2015.11.009
In the field of signal processing, cross-correlation plays an important role in finding an unknown signal's features, and is often used in areas such as image matching and particle image velocimetry. Given the large amount of computation involved in cross-correlation and the demand for faster computing in real-time systems, we used the FFT accelerators of the YHFT-QDSP to implement the 2D FFT. Cross-correlation was realized on the YHFT-QDSP, and this paper proposed several optimizations, based on the input size, to improve its performance.
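The FFT route to cross-correlation used above (computing correlation as a product in the frequency domain) is standard. A NumPy sketch of the 1D case, purely illustrative of the math rather than of the YHFT-QDSP implementation:

```python
import numpy as np

def cross_correlate_fft(a, b):
    """Circular cross-correlation via the FFT:
    corr[k] = sum_n a[n] * b[(n + k) % N],
    computed in O(N log N) as IFFT(conj(FFT(a)) * FFT(b))
    instead of the O(N^2) direct sum."""
    A = np.fft.fft(a)
    B = np.fft.fft(b)
    return np.fft.ifft(np.conj(A) * B).real
```

The 2D case used for image matching replaces `fft` with `fft2`; the peak of the resulting correlation surface then gives the best alignment between the two images.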
Parallel Implementation of Finite-element Mesh Integration Algorithm on Many Integrated Core
KOU Da-zhi and KONG Da-li
Computer Science. 2015, 42 (11): 56-58.  doi:10.11896/j.issn.1002-137X.2015.11.010
A C++ 3D finite-element mesh integration algorithm was implemented and profiled on a heterogeneous Intel CPU/MIC architecture. By programming in the offload mode[1] with explicit copies, a sequence of key element-wise operations is fully parallelized, utilizing the massive concurrency of OpenMP threads on MIC devices. It is remarkably demonstrated that, in terms of overall run-time efficiency, one fully employed 3115A MIC card outperforms two 8-core Intel Xeon E5-2670 CPUs. However, possibly owing to cache contention among physical threads on individual MIC cores, scalability is somewhat below the ideal level. The current test reveals a good chance of porting a full finite-element analysis code onto a multi-CPU-node, multi-MIC-device platform based on the single-process, multi-thread building block presented here.
Quantitative Analysis of Flow-setup Cost in OpenFlow Network
WU Jie, FU Bin-zhang, CHEN Ming-yu and ZHANG Li-xin
Computer Science. 2015, 42 (11): 59-62.  doi:10.11896/j.issn.1002-137X.2015.11.011
OpenFlow introduces a new network architecture by separating the control plane from the data plane and handing the former over to a centralized software application called the OpenFlow controller. Because of this decoupling, flow setup involves an interaction between the controller and the OpenFlow switches, which brings non-negligible cost. Experiments show that the flow-setup cost leads to at least a twofold increase in packet transmission time. It is therefore meaningful to give a quantitative analysis of the flow-setup cost, find the factors that influence packet transmission latency, and build an OpenFlow network to evaluate how serious the effect is.
Sparse Matrix Blocking Method for Custom Architecture
WU Gui-ming, WANG Miao, XIE Xiang-hui, DOU Yong and GUO Song
Computer Science. 2015, 42 (11): 63-64.  doi:10.11896/j.issn.1002-137X.2015.11.012
Sparse matrix-vector multiplication is one of the most important kernels in scientific computing, and custom architectures have been introduced to implement it and improve performance. To address the problems in existing sparse matrix blocking methods and representations, a two-dimensional uniform blocking method and its corresponding representation were proposed in this paper. The experimental results show that the proposed blocking method and representation can reduce zero padding significantly.
MapReduce Parallel Model Based on Tree Structure
TANG Bing and HE Hai-wu
Computer Science. 2015, 42 (11): 65-67.  doi:10.11896/j.issn.1002-137X.2015.11.013
MapReduce is a distributed computing model introduced by Google, which has been widely used in the field of massive data processing. A novel MapReduce parallel model was presented in this paper. The model is suitable for massive scientific data analysis using unreliable desktop PC resources in Internet or intranet environments. Computing nodes are organized in the form of P2P, with the P2P-MPI framework as the lower layer, while a message passing interface model is used to implement the MapReduce application layer. In the application layer, broadcast is used to distribute data chunks in the Map stage, and an inverse binary tree is constructed to realize effective reduction of intermediate results in the Reduce stage. The proposed model was compared with existing popular MapReduce models. The results show that the proposed tree-structure-based MapReduce parallel model performs well in terms of fault tolerance, and is simple and easy for application development.
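The inverse-binary-tree reduction in the Reduce stage can be sketched as follows. This is a serial simulation of the communication pattern only; in P2P-MPI each `combine` would be a receive from a peer node:

```python
def tree_reduce(values, combine):
    """Reduce per-node partial results along an inverse binary tree:
    in each round, node i combines in the value held by node
    i + step, halving the number of active nodes per round."""
    vals = list(values)
    step = 1
    while step < len(vals):
        for i in range(0, len(vals) - step, 2 * step):
            vals[i] = combine(vals[i], vals[i + step])
        step *= 2
    return vals[0]
```

With n partial results this takes about log2(n) communication rounds instead of the n-1 sequential steps of funneling everything through one node, which is what makes the Reduce stage effective.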
High-productivity Model Based on Proactive Cognition and Decision
YANG Jin, PANG Jian-min, WANG Jun-chao, YU Jin-tao and LIU Rui
Computer Science. 2015, 42 (11): 68-72.  doi:10.11896/j.issn.1002-137X.2015.11.014
With the development of HPC systems, increasing importance has been attached to reducing power consumption and raising productivity. We proposed a high-productivity computing model to deal with the productivity problem of HPC systems, which adopts the concept of reconfigurable computing and is based on a proactive cognition and decision system. The model perceives the real-time states of application tasks, evaluates how well the application state matches the current application structure, and then reconfigures the application structure to lower energy consumption and increase productivity. To verify the model's effectiveness, we constructed a prototype experimental platform, implemented a video-copy-detection program and a password-recovery program, and then used a real Internet traffic statistical curve to simulate the programs' loads. Experimental results demonstrate that, in this environment, the application system based on the model raises productivity by 58% compared with the traditional method.
Chinese Character Computing Model Based on Cloud Information Protection
LI Qing-sheng, ZHANG Li, LIU Quan, XIONG Jing and YANG Xin-xin
Computer Science. 2015, 42 (11): 73-79.  doi:10.11896/j.issn.1002-137X.2015.11.015
A new cloud information protection model was proposed based on information protection. A method of abstracting Chinese strokes was designed by abstracting a digraph from the strokes of Chinese characters. Using this method, a dynamic description library for the glyphs of Chinese characters was constructed, and an algorithm for character generation based on the library was presented. The goal of storing the glyphs of Chinese characters on the Web and outputting characteristic glyphs on the client was realized. In addition, the model provides a solution for the storage of Chinese characters and the protection of information in the cloud. The model of Chinese character computing based on information security is helpful not only for machines to understand linguistics, but also for information protection, semantic computation and information security computation.
Parallelization of MIC Algorithm Based on MapReduce
LV Rui, CAI Guo-yong and PEI Guang-zhan
Computer Science. 2015, 42 (11): 80-83.  doi:10.11896/j.issn.1002-137X.2015.11.016
MIC is a method for analyzing the possible relationships between variables, which can not only effectively identify various complex types of relationships, but also accurately describe the impact of noise on them. Exploring variable relationships in large data sets is considered significant for big data mining. Aiming at the algorithm's poor performance on data sets containing a large number of variables, this paper proposed a parallelization method based on MapReduce. Firstly, a finer and smaller partitioning of the raw algorithm was conducted, and then a parallel model based on the task chain Map-Reduce-Map was adopted. The model not only effectively increases the number of parallel computing units, but also greatly reduces unnecessary consumption of system resources. Theoretical analysis and experimental verification demonstrate that the improved algorithm has the same accuracy as the original algorithm and a great improvement in running speed. The relationship between the speed-up ratio and the number of Map2 processes shows that our method scales well with system resources.
Automated Refactoring Framework for Java Locks
ZHANG Yang, ZHANG Dong-wen and QIU Jing
Computer Science. 2015, 42 (11): 84-89.  doi:10.11896/j.issn.1002-137X.2015.11.017
Java locks, such as synchronized, ReentrantLock and ReadWriteLock, often yield different performance when applied to different data structures. To learn which lock will obtain the best performance, there is a strong need to transform one lock into another automatically. This paper presented an automated refactoring framework for the transformation of Java locks. The framework relies extensively on static program analysis to perform the refactoring. The evaluation was performed using three benchmarks: a red-black tree, the producer-consumer problem and SPECjbb2005. Successful refactoring results were observed, and the refactoring time is acceptable.
Hypergraph Dimensionality Reduction with Multiple Feature Fusion Based on GPU Parallel Acceleration
HONG Chao-qun, CHEN Xu-hui, WANG Xiao-dong, LI Shi-jin and WU Ke-shou
Computer Science. 2015, 42 (11): 90-93.  doi:10.11896/j.issn.1002-137X.2015.11.018
Graph-based learning methods are currently popular for dimensionality reduction. However, for multi-feature data, the different relationships derived from different features are hard to integrate into a single graph. In this paper, a novel semi-supervised dimensionality reduction method was proposed for multi-feature data. First, the hyperedges in a hypergraph are treated as patches; in this way, the hypergraph is applied to the patch alignment framework. The weights of hyperedges are computed from statistics of the distances between neighboring pairs, and the patches from different features are integrated. Second, the computation of Euclidean distances and matrix multiplication, which takes most of the time in constructing the Laplacian matrix, is accelerated using the GPU. The experimental results demonstrate improvements in both classification performance and learning speed.
Implementation and Performance Analysis of GFSR(521,2) Parallelization Based on MIC
GU Xiao-lu, ZHOU Jin-yu, HUA Cheng, LIU Xiao and ZHOU Xiao-hui
Computer Science. 2015, 42 (11): 94-95.  doi:10.11896/j.issn.1002-137X.2015.11.019
GFSR is a kind of feedback shift-register random number generator. Based on a study of the serial GFSR(521,2) algorithm, we used the strided skip-ahead method to parallelize it. Experimental results show that the parallelized GFSR(521,2) generator produces the same TestU01 test results as the serial algorithm, and the best speedup on the MIC platform reaches 7.58 relative to a single CPU thread.
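The abstract does not give details of the strided skip-ahead scheme, but the decomposition it relies on can be illustrated: a two-tap GFSR recurrence defines a single sequence, and each of T threads owns every T-th output. In a real implementation each thread jumps its state ahead rather than generating and discarding; the tiny taps and seed below are illustrative assumptions, not the paper's (521,2) parameters:

```python
def gfsr(p, q, seed, n):
    """Toy two-tap GFSR: x[i] = x[i-p] XOR x[i-q]; `seed` supplies
    the first p words of state."""
    x = list(seed)
    for _ in range(n):
        x.append(x[-p] ^ x[-q])
    return x[p:]          # the n generated words

def strided_streams(seq, n_threads):
    """Leapfrog partition: thread t owns outputs t, t+T, t+2T, ..."""
    return [seq[t::n_threads] for t in range(n_threads)]
```

Because the threads' interleaved outputs reproduce the serial sequence exactly, the parallel generator inherits the serial generator's statistical test results, which matches the TestU01 observation above.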
Neural-network-based Method for Automatic Acquisition of User’s Video Rating
JI Shu-juan, WANG Li, LIANG Yong-quan and ZHAO Jian-li
Computer Science. 2015, 42 (11): 96-100.  doi:10.11896/j.issn.1002-137X.2015.11.020
A truly intelligent video recommendation system should be able to acquire users' interests and preferences automatically, without explicit rating actions, and to recommend videos for them accurately. In implementing such a system, a key problem is the design of technologies that can acquire users' ratings (which reveal their interests and preferences) in the absence of rating actions. To acquire users' implicit ratings of videos automatically, this paper presented a neural-network-based method. Experimental results on samples of users' video-viewing behavior and ratings show that this method is effective in obtaining users' implicit rating information.
Towards Unified Description for Reduction Algorithms
XIONG Yu-qing
Computer Science. 2015, 42 (11): 101-103.  doi:10.11896/j.issn.1002-137X.2015.11.021
Reduction algorithms are widely used in parallel computing. There are many reduction algorithms for different situations, and they differ from each other, with logical topology being the key cause of the differences. In order to unify the description of reduction algorithms and uncover their common features, a definition of logical topology and its properties were presented in this paper. Based on it, a unified description of reduction algorithms was proposed. The description contributes to understanding reduction algorithms and thus to designing efficient reduction algorithms for different computational applications and environments. It can also be regarded as an algorithmic framework for integrating different semantics, and may thus guide the design of new reduction algorithms with those semantics. Essentially, the unified description is a formal definition of reduction algorithms, so it also helps to verify their correctness.
Single-frame Image Super-resolution Reconstruction Algorithm Based on Nonnegative Neighbor Embedding and Non-local Means Regularization
PENG Yang-ping, NING Bei-jia and GAO Xin-bo
Computer Science. 2015, 42 (11): 104-107.  doi:10.11896/j.issn.1002-137X.2015.11.022
Single-frame image super-resolution (SR) reconstruction aims to obtain a high-resolution (HR) image from a low-resolution (LR) input image. To overcome the limitations of the traditional neighbor-embedding-based algorithm, we proposed a single-frame image super-resolution reconstruction algorithm based on nonnegative neighbor embedding and non-local means regularization. In the training phase, the LR images are first magnified 2 times, leading to better preservation of the neighborhood relationship between LR and HR images at high magnification factors. In the reconstruction phase, nonnegative neighbor embedding is employed to select the neighborhood size effectively. Finally, a non-local means regularization term is introduced into the final reconstruction by taking advantage of the non-local similarity between natural image patches. Experimental results demonstrate that, compared with traditional methods, the proposed method achieves results with richer textures and sharper edges.
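The nonnegative neighbor-embedding step can be sketched: the weights that reconstruct an LR patch from its K nearest training patches are found by nonnegative least squares, which drives useless neighbors' weights to exactly zero (hence the effective neighborhood selection mentioned above). A minimal illustration with made-up patches, using SciPy's NNLS solver; nothing here reproduces the paper's actual training set or patch geometry:

```python
import numpy as np
from scipy.optimize import nnls

def nonneg_embedding_weights(patch, neighbors):
    """Solve min_w ||patch - W w||_2 subject to w >= 0, where the
    columns of W are the K nearest LR training patches."""
    W = np.column_stack(neighbors)
    w, _residual = nnls(W, patch)
    return w
```

The HR patch is then synthesized as the same weighted combination of the neighbors' HR counterparts, on the assumption that LR and HR patch manifolds share local geometry.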
Transition Phenomenon and its Theory
HONG Long
Computer Science. 2015, 42 (11): 108-111.  doi:10.11896/j.issn.1002-137X.2015.11.023
Transition is a universal phenomenon in nature and everyday life, and it is a problem confronting people in scientific research, technological innovation and social management. This paper introduced the research history and status of transition phenomena, and used transitive relations, intervals and logical predicates respectively to characterize the general concept of transition. We introduced two symbols, 【 and 】, for describing intervals, and presented interval adjacency. With these concepts, we defined a transition and discussed its essentials, including the transition variable, transition area, beginning point, increasing transition and decreasing transition. Additionally, we presented perspectives for studying transition, such as transition logic and application-oriented methods for quantifying transition phenomena. The results of the above will initially form a knowledge system on transition.
Effective Algorithm to Inhibit Halo Effect Based on Sigmoid Function
CHEN Li, GUO Yu-kun and LI Jin-ping
Computer Science. 2015, 42 (11): 112-117.  doi:10.11896/j.issn.1002-137X.2015.11.024
Dark channel prior theory is usually employed to reduce the effect of fog; however, a bad halo effect often exists. After extensive analysis of the halo effect, we proposed an effective algorithm to inhibit it based on the Sigmoid function. Firstly, from many observations of images with halo effects, we identified the characteristic locations and structure of the halo effect, and constructed a directional Sigmoid model. Secondly, we performed edge detection on the transmission image obtained by the dark channel prior, and obtained the coordinates and edge directions of pixels located at sudden changes in depth of field. Thirdly, we used the constructed model to determine the precise locations of those pixels, filled the halo region with pixel values from the non-halo region, and thereby obtained an optimized transmission image. Finally, a clear image with weak halo effect was obtained by using the optimized transmission image and introducing a kind of tolerance mechanism. A characteristic of our algorithm is that it processes only the halo region using the constructed model, rather than the whole image, so it avoids problems such as color distortion in non-halo regions. The experimental results demonstrate that our method is simple, with faster computational speed and better restoration of foggy images.
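The paper's directional Sigmoid model is not reproduced in the abstract; the sketch below only illustrates the general blending idea, and all function shapes and parameters are assumptions. A sigmoid of the signed distance to the detected depth edge (negative inside the suspected halo band) weights how strongly halo-band transmission values are replaced by nearby non-halo values:

```python
import numpy as np

def sigmoid_weight(dist, width=2.0):
    """Sigmoid falloff over signed distance to the depth edge:
    close to 1 deep inside the halo band (dist << 0), decaying
    smoothly to 0 far outside it (dist >> 0)."""
    return 1.0 / (1.0 + np.exp(dist / width))

def suppress_halo(t_halo, t_clean, dist_to_edge, width=2.0):
    """Blend halo-band transmission toward nearby non-halo values,
    weighted by distance to the detected depth edge."""
    w = sigmoid_weight(dist_to_edge, width)
    return w * t_clean + (1.0 - w) * t_halo
```

Because the weight decays smoothly, the correction is confined to the band around depth discontinuities, matching the algorithm's stated property of leaving non-halo regions untouched.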
Motion Characteristics Based Video Salient Region Extraction Method
ZHOU Ying, ZHANG Ji-hong, LIANG Yong-sheng and LIU Wei
Computer Science. 2015, 42 (11): 118-122.  doi:10.11896/j.issn.1002-137X.2015.11.025
The human eye only attends to the salient region of a video, so a motion-characteristics-based salient region extraction method was proposed. The spatial saliency map is extracted by analyzing the log spectrum of each frame in the frequency domain, and the temporal saliency map is obtained by global motion estimation and block matching. According to human visual characteristics and the subjective perception of different motion characteristics, the salient region is fused dynamically from the spatial and temporal saliency maps. The experiments were analyzed with both subjective and objective indicators. Visual observation and quantitative indicators show that the proposed method reflects the human visual attention area more accurately than other classical extraction methods.
α-Semantic Resolution Method Based on Linguistic Truth-valued Lattice-valued Propositional Logic
ZHANG Jia-feng, XU Yang and CHEN Qin
Computer Science. 2015, 42 (11): 123-129.  doi:10.11896/j.issn.1002-137X.2015.11.026
Linguistic-value-based intelligent information processing is one of the most important research directions in artificial intelligence, and resolution-based automated reasoning has been extensively studied because of its easy implementation on computers. To improve the efficiency of the α-resolution principle in linguistic truth-valued lattice-valued logic, we applied the semantic resolution strategy to α-resolution and investigated the resolution-based automated reasoning method in lattice-valued logic. Firstly, the equivalence between α-semantic resolution in the linguistic truth-valued lattice-valued propositional logic LV(n×2)P(X) and α-semantic resolution at the corresponding resolution level of LnP(X) was established, and the effectiveness of the α-semantic resolution method was illustrated through an example. Subsequently, the semantic resolution algorithm for this resolution was investigated, and the soundness theorem and weak completeness theorem of this semantic resolution method were proved.
Distribution of Symmetrical Logic Formulas in L*4-logic Metric Space
HUI Xiao-jing, ZHAO Ma-nao and GAO Jiao
Computer Science. 2015, 42 (11): 130-133.  doi:10.11896/j.issn.1002-137X.2015.11.027
In the four-valued logic system L*4, the concept of symmetric logic formulas was given. The counting problem for symmetric logic formulas in the L*4-logic metric space was studied using Matlab, and the numbers of symmetric logic formulas with 3n, 3n+1 and 3n+2 atoms were given. It was proved that the ratio of the number of symmetric formulas with n atoms to the number of all formulas with n atoms converges to zero as n tends to infinity.
Abnormal Data Detection Algorithm in Heterogeneous Complex Information Network
MU Li-wen, PENG Xian-bo and HUANG Lan
Computer Science. 2015, 42 (11): 134-137.  doi:10.11896/j.issn.1002-137X.2015.11.028
A heterogeneous information network carries different protocols and network channels and schedules resources through cloud storage, resulting in abnormal data that can cause security threats and storage overhead in the network information space, so accurate detection of abnormal data is essential. Traditional detection algorithms use the simplified gradient algorithm for outlier detection, which cannot effectively remove multiple abnormal data components at known interference frequencies, so their detection performance is poor. An abnormal data detection algorithm was therefore proposed based on an adaptive notch cascade model. The heterogeneous information network system model is constructed, and by using intrinsic mode decomposition, the analytical model of the abnormal data signal is decomposed into multiple narrowband signals. A second-order lattice notch filter structure is designed, and a cascade of several fixed notch filters is used to suppress the interference components. The matching projection method is used to seek the optimized feature solution, and all matching feature point pairs are found, improving outlier detection. Simulation results show that, when the algorithm is used for data detection, the signal amplitude is larger than the amplitude of the noisy interference data; the algorithm improves detection performance and has better anti-interference performance.
Gradient and Constant-game Based RFID Indoor Localization Algorithm
SHI Jun-yan, QIN Xiao-lin and WANG Ning
Computer Science. 2015, 42 (11): 138-143.  doi:10.11896/j.issn.1002-137X.2015.11.029
With the continuous development of ubiquitous computing research, indoor localization technology has become a hot research topic. Due to the complexity of indoor space, the positioning accuracy of existing indoor localization methods has been unable to meet the needs of applications. In order to obtain a more efficient and stable algorithm, this paper proposed an indoor localization algorithm based on gradient and game theory, which can effectively improve positioning accuracy in indoor space. In the algorithm, a method of dividing the indoor space based on symbols is proposed, and the indoor space is segmented to support localization. Finally, experimental validation shows that the algorithm is effective and more stable than existing algorithms.
Cluster-chain Based Low-energy Consumption Hierarchical Routing Protocol
WANG Meng-ying, WANG Xin and JIANG Hua
Computer Science. 2015, 42 (11): 144-148.  doi:10.11896/j.issn.1002-137X.2015.11.030
The number of active nodes and the communication distance between cluster heads are two important factors influencing the network lifetime in the LEACH protocol. This paper designed a cluster-chain based low-energy-consumption hierarchical routing protocol on the basis of LEACH. The protocol organizes the network into clusters; a cluster head divides the nearby nodes within its cluster that collect similar information into a "similar" group, and each time only one node in the group, chosen by serial number, sends data to the cluster head, reducing the number of active nodes within the cluster and the load on the cluster head. Chain communication is introduced between cluster heads. Theoretical proof and simulation results show that the algorithm can balance the network energy consumption and prolong the network lifetime.
OSN Activities Analysis Based on User Feeds
HE Chan-yang, SUN Lu-jing and YANG Jia-hai
Computer Science. 2015, 42 (11): 149-153.  doi:10.11896/j.issn.1002-137X.2015.11.031
This paper explored how a social network becomes inactive. We investigated the change of users' activeness by analyzing the activeness period and the inter-event time of Feeds of users in a specific OSN. Our findings reveal that these users decrease their activity frequency or depart from the social network for various reasons and increase the interval between Feeds, decreasing the velocity and diversity of information flow for the whole community. As a result, active users perceive the inactivity of their friends and, as feedback, become more likely to turn inactive themselves, which may be the underlying reason for the inactivity of an OSN. Our simulation experiment shows that when 30% of users become inactive in generating Feeds, the whole community will be affected and collapse in a short time.
Research on SSD-based RFID Indoor Location Method
LI Jun-huai, JIA Jin-peng, WANG Huai-jun, WANG Zhi-xiao and ZHANG Xiang
Computer Science. 2015, 42 (11): 154-157.  doi:10.11896/j.issn.1002-137X.2015.11.032
Aiming at the signal strength differences caused by using different tags and by power differences of a tag in RFID indoor localization, this paper proposed a new signal strength difference (SSD) based RFID fingerprint positioning model. The model introduces a virtual reference point. At the offline stage, it computes the RSSI of the virtual reference point according to the signal transmission model, establishes an SSD fingerprint map, and uses SSD to eliminate device differences and overcome the attenuation of RSSI over time. At the online stage, it uses the method of position matching and intersection seeking to eliminate noise, and then uses the K-NN algorithm to estimate the target location. Experiments prove that the proposed positioning model is robust and has better accuracy.
Forest Fire Monitoring Algorithm Based on Wireless Sensor Network Data Fusion
LIU Yong-xing, ZHAO Juan-juan and CHANG Xiao-min
Computer Science. 2015, 42 (11): 158-163.  doi:10.11896/j.issn.1002-137X.2015.11.033
Aiming at the existing problems in the application of wireless sensor networks to forest fire monitoring, we proposed a hierarchical clustering data fusion algorithm. In-cluster sensor nodes use the weighted average method to achieve data-level fusion of the raw data, which reduces the redundancy of the raw data and the traffic from the in-cluster sensor nodes to the cluster head node. Then we established a D-S evidence theory based recognition framework in the cluster head node. Through decision-level fusion of the feedback signals of the cluster members, the recognition accuracy of fire events and the robustness of the network are improved. The experimental results show that the algorithm can effectively eliminate redundant data in wireless sensor networks, and it works correctly as long as the number of faulty nodes does not exceed 40% of the total number of nodes.
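The decision-level fusion step can be illustrated with Dempster's rule of combination over a two-hypothesis frame; the mass values assigned to the two hypothetical cluster members below are made up for illustration:

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions whose focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    k = 1.0 - conflict                       # normalization constant
    return {s: v / k for s, v in combined.items()}

FIRE, SAFE = frozenset({"fire"}), frozenset({"safe"})
THETA = FIRE | SAFE                          # ignorance (the whole frame)

# Hypothetical reports from two cluster members (temperature and smoke sensors).
m_temp  = {FIRE: 0.7, SAFE: 0.1, THETA: 0.2}
m_smoke = {FIRE: 0.6, SAFE: 0.2, THETA: 0.2}
fused = dempster_combine(m_temp, m_smoke)    # fused[FIRE] = 0.85
```

Combining the two moderately confident reports yields a fused belief in "fire" of 0.85, higher than either sensor alone, which is the effect the decision-level fusion relies on.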
Bi-dimensional Wireless Energy Transfer Algorithm for Maximum Network Throughput
YAO Xin-wei, ZHENG Xing-hang, WANG Wan-liang, ZHAO Cheng and YANG Shuang-hua
Computer Science. 2015, 42 (11): 164-169.  doi:10.11896/j.issn.1002-137X.2015.11.034
A bi-dimensional wireless energy transfer optimization algorithm for maximum network throughput was proposed to overcome the limited energy of each node in a wireless network. Due to the inequality of the energy distribution across nodes, energy is transferred successively in the node dimension and the time dimension to optimize the energy allocation among wireless nodes. A network throughput model was presented by integrating the Gaussian two-way communication model of two wireless nodes with wireless energy transfer. To obtain the maximum network throughput, an analytical energy transfer model was also presented by introducing the energy transfer efficiency of the node dimension and the time dimension, and a bi-dimensional energy transfer mechanism was proposed based on the model analysis. Experimental results demonstrate that the proposed mechanism can effectively optimize the energy distribution among wireless nodes and maximize the total network throughput.
Research on Improved Indoor Mobile Robot Fuzzy Position Fingerprint Localization
MAO Qin, ZENG Bi and YE Lin-feng
Computer Science. 2015, 42 (11): 170-173.  doi:10.11896/j.issn.1002-137X.2015.11.035
On the basis of further research on existing indoor robot localization algorithms, this paper introduced an improved indoor robot fuzzy position fingerprint localization method, named IRF. The method first uses the spatial correlation of sample nodes to establish the fingerprint database by a focused Lagrange interpolation algorithm. It then transforms the problem of solving high-order coordinates into a problem of spatial membership degree. The fuzzy nearness of unknown nodes and sample nodes can be calculated, which determines the coordinates of the unknown points. A Kalman filtering algorithm is used to correct the error of the method. Experimental results prove that IRF outperforms traditional algorithms in reducing error and improving stability and efficiency.
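The fingerprint-densification step can be sketched with plain Lagrange interpolation (the "focused" variant the paper uses is not specified here); the RSSI samples below are hypothetical:

```python
def lagrange(points, x):
    """Lagrange interpolation; points is a list of (x_i, y_i) with distinct x_i."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        li = 1.0
        for j, (xj, _) in enumerate(points):
            if j != i:
                li *= (x - xj) / (xi - xj)   # Lagrange basis polynomial l_i(x)
        total += yi * li
    return total

# Hypothetical RSSI samples (dBm) measured at 1 m intervals along a corridor.
samples = [(0, -40.0), (1, -46.0), (2, -50.0), (3, -53.0)]

# Estimate the RSSI at a virtual reference point between two measured ones,
# densifying the fingerprint map without extra site surveying.
virtual_rssi = lagrange(samples, 1.5)
```

The interpolant reproduces the measured points exactly and fills in plausible values between them, which is what the virtual reference points in the fingerprint database require.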
Research on Cooperative MAC Protocol for Vehicular Ad Hoc Network
YE Xiang, ZHANG Guo-an and CHENG Dai-yue
Computer Science. 2015, 42 (11): 174-177.  doi:10.11896/j.issn.1002-137X.2015.11.036
The design of the medium access control (MAC) protocol is one of the key technologies of vehicular ad hoc networks (VANET), and it directly impacts network performance such as access fairness, throughput, delay and packet loss ratio. We presented a cooperative scheme for TDMA MAC, called C-TDMA MAC, according to the requirements of MAC protocols in VANET. In C-TDMA MAC, neighboring nodes use the unreserved time slots to cooperatively relay a packet that has not reached its target receiver, without affecting the normal transmission of other data packets. Numerical analysis and simulation results show that the proposed protocol improves the success probability of packet transmission.
Prediction Model of Energy Consumption for MapReduce Based on Job Running History Logs
LIAO Bin, ZHANG Tao, YU Jiong and SUN Hua
Computer Science. 2015, 42 (11): 178-183.  doi:10.11896/j.issn.1002-137X.2015.11.037
The high energy consumption produced in big data processing is an important issue that needs to be solved, especially against the background of data explosion. An energy consumption model is the basis of research on improving the energy efficiency of MapReduce, but calculating a MapReduce job's energy consumption with traditional models faces challenges. After research on the cluster structure, job task decomposition and the task-slot mapping mechanism, we proposed a prediction model of energy consumption for MapReduce based on job running history logs. Through analysis of the historical running information of different jobs, we obtained the computing power and energy consumption characteristics of DataNodes running different tasks, and then implemented the forecast of the energy consumption of a MapReduce job before its execution. The experimental results demonstrate the feasibility of the energy prediction model, and the prediction accuracy of the model can be improved by adjusting the correction factor.
Analysis and Research on Address Message of Unknown Single Protocol Data Frame
ZHENG Jie and ZHU Qiang
Computer Science. 2015, 42 (11): 184-187.  doi:10.11896/j.issn.1002-137X.2015.11.038
Network protocols are sets of standards for network communications, and protocol identification and analysis are of great significance for network management and security. The technologies of protocol identification are varied, but in order to simplify the identification process and improve its efficiency, it is usually necessary to separate unknown mixed multi-protocol traffic into single protocols before further identification. This paper presented an efficient method to determine the address fields of single-protocol messages, based on previous work on separating unknown mixed data frames into single protocols. In this way the data frames of a single protocol are split into point-to-point data frames according to the address, and the final identification of the unknown protocol is achieved. Finally, we evaluated the method by analyzing ARP and TCP data. The results show that this method can find more than 2/3 of the address information.
Approximating Triangle Counting Based on Sampling in Complex Networks
HUANG Qu-zhi and ZHANG Jun-chao
Computer Science. 2015, 42 (11): 188-190.  doi:10.11896/j.issn.1002-137X.2015.11.039
Counting triangles in complex networks is the basis of research on homophily and transitivity. In order to improve the performance of triangle counting, this paper proposed a sampling-based approximate triangle counting algorithm. Firstly, we sample every edge in the graph with a constant probability and count the number of triangles in the resulting sub-graph. Secondly, based on the probability theory of sampling, we approximate the number of triangles in the original graph from the induced sub-graph. Finally, we analyze the mean and variance of the proposed sampling method and give its corresponding speedup. Theoretical analysis and experiments show that, compared with the classic node-iteration approach, the proposed algorithm greatly improves the execution time while keeping high accuracy, and thus is more suitable for triangle-counting based applications in large-scale networks.
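The edge-sampling estimator above can be sketched as follows: a triangle survives the sampling only if all three of its edges are kept, which happens with probability p^3, so dividing the sub-graph count by p^3 gives an unbiased estimate. The complete graph K8 is an illustrative test case:

```python
import random
from itertools import combinations

def exact_triangles(edges):
    """Exact triangle count via common neighbours of each edge's endpoints."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    count = 0
    for u, v in edges:
        count += len(adj[u] & adj[v])   # each common neighbour closes a triangle
    return count // 3                   # each triangle was counted once per edge

def approx_triangles(edges, p, seed=0):
    """Keep each edge independently with probability p; rescale by p^3."""
    rng = random.Random(seed)
    sub = [e for e in edges if rng.random() < p]
    return exact_triangles(sub) / p ** 3

# K8 has C(8,3) = 56 triangles; the estimator should land near that value.
edges = list(combinations(range(8), 2))
est = approx_triangles(edges, p=0.7)
```

Averaging the estimator over independent sampling runs converges to the true count, reflecting its unbiasedness; the variance analysis in the paper bounds how many samples are needed.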
Mandatory Access Control Model for Android Based on Dynamic Privilege Set
XU Qian and TAN Cheng-xiang
Computer Science. 2015, 42 (11): 191-196.  doi:10.11896/j.issn.1002-137X.2015.11.040
In order to protect the Android platform against privilege escalation attacks, this paper proposed a mandatory access control model based on a dynamic privilege set. The model analyzes the privilege characteristics of strongly connected components and constructs the privilege partition. Coupling the information flow with the privilege set, the privilege escalation path is abstracted. Finally an access control algorithm with linear complexity was proposed. By tracking the privilege sets dynamically, a fine-grained decision strategy was realized. The test results on a prototype system and the comparison with existing models both show that the proposed model can prevent privilege escalation attacks efficiently.
Secure Channel Free Searchable Encryption in Standard Model
FANG Li-ming, HUANG Zhi-qiu and WANG Jian-dong
Computer Science. 2015, 42 (11): 197-202.  doi:10.11896/j.issn.1002-137X.2015.11.041
Recently, Baek et al. proposed an efficient public key encryption scheme with keyword search based on the scheme of Boneh et al. However, the security model of Baek et al. seriously limits the ability of the adversary. Rhee et al. enhanced the security model of public key encryption with keyword search to properly incorporate the ability of an adversary, and presented a PEKS scheme in the random oracle model. Unfortunately, it has been shown that a proof in the random oracle model can lead to insecure schemes when the random oracles are instantiated in the standard model. This paper constructed an efficient public key encryption scheme with keyword search that is secure in the enhanced security model without random oracles.
Method of User Identification Based on Keystroke Behavior and its Application
WANG Yu-jie, ZHAO Pei-hai and WANG Mi-mi
Computer Science. 2015, 42 (11): 203-207.  doi:10.11896/j.issn.1002-137X.2015.11.042
Most research on keystroke behavior has been limited to experimental environments, which force users to input a specific string or limit the number of participants, so the research results cannot be applied in real production environments. To overcome this limitation and to address the imbalance between positive and negative samples and the lack of negative samples, ten million keystroke behaviors were analyzed and a method for user identification was introduced. The duration and the interval of keystrokes are used to describe the user's behavior, and the similarity of different keystrokes is measured by the Mahalanobis distance. Experimental results show that the method achieves an accuracy of 80%.
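The Mahalanobis-distance similarity can be sketched for two keystroke features (key-hold duration and inter-key interval); the per-user profile statistics below are hypothetical:

```python
def mahalanobis2(x, mean, cov):
    """Mahalanobis distance for 2-D feature vectors.
    cov is a 2x2 covariance matrix [[a, b], [b, c]], inverted in closed form."""
    (a, b), (_, c) = cov
    det = a * c - b * b
    if det <= 0:
        raise ValueError("covariance matrix must be positive definite")
    inv = [[c / det, -b / det], [-b / det, a / det]]
    d0, d1 = x[0] - mean[0], x[1] - mean[1]
    # sqrt(d^T * cov^{-1} * d): distance in units of the profile's own spread
    return (d0 * (inv[0][0] * d0 + inv[0][1] * d1)
            + d1 * (inv[1][0] * d0 + inv[1][1] * d1)) ** 0.5

# Hypothetical user profile: mean key-hold time and inter-key interval (ms),
# with the covariance estimated from that user's historical keystrokes.
profile_mean = [95.0, 210.0]
profile_cov = [[64.0, 10.0], [10.0, 400.0]]

sample = [100.0, 230.0]
dist = mahalanobis2(sample, profile_mean, profile_cov)  # small => likely same user
```

Unlike Euclidean distance, this weights each feature by the user's own variability, so a naturally inconsistent typist is not penalized for normal fluctuation.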
Impossible Differential Cryptanalysis of CLEFIA-128
QIU Feng-pin and WEI Hong-ru
Computer Science. 2015, 42 (11): 208-211.  doi:10.11896/j.issn.1002-137X.2015.11.043
To study the ability of the block cipher CLEFIA-128 to resist impossible differential cryptanalysis, 13-round CLEFIA-128 without whitening keys was analyzed based on a 9-round impossible differential. The attack uses the output and input differences of the S-boxes to recover round keys, utilizes relations between the keys to reduce the number of guessed keys, and introduces the key-byte guessing (early abort) technique to reduce the complexity effectively. The computation shows that the data complexity and time complexity of this method are O(2^103.2) and O(2^124.1) respectively.
Design of Modbus TCP Industrial Control Network Protocol Abnormal Data Detection Rules Based on Snort
JIANG Wei-wei, LIU Guang-jie and DAI Yue-wei
Computer Science. 2015, 42 (11): 212-216.  doi:10.11896/j.issn.1002-137X.2015.11.044
The vulnerability of industrial control network communication protocols is the main reason that industrial control networks suffer from attacks. The vulnerabilities of Modbus TCP, a typical industrial control network communication protocol, were analyzed and synthesized. The abnormal behaviors of Modbus TCP were analyzed and categorized according to the detection mechanisms provided by Snort, and a detection rule template defined in Snort for anomalous Modbus TCP data was constructed. The corresponding analysis shows that the rule template design method can be generally extended to other network-based industrial control protocols.
Improved Meet-in-the-middle Attack on Reduced-round Crypton Cipher
LI Yong-guang, ZENG Guang and HAN Wen-bao
Computer Science. 2015, 42 (11): 217-221.  doi:10.11896/j.issn.1002-137X.2015.11.045
The Crypton cipher is one of the AES candidates, proposed by Korean scholars. This paper studied the structure of Crypton and the properties of a class of truncated differential trails, used the differential enumeration technique to trade memory complexity against data complexity, and proposed new 4-round and 4.5-round distinguishing properties for the meet-in-the-middle attack on Crypton, which effectively diminish the size of the multisets stored in the pre-computed lookup table and lower the memory complexity. The first meet-in-the-middle attack on 7-round Crypton-128, based on the 4-round distinguishing property, requires time complexity 2^113, data complexity 2^113 and memory complexity 2^90.72. The first meet-in-the-middle attack on 8-round Crypton-192, based on the 4.5-round distinguishing property, requires time complexity 2^172, data complexity 2^113 and memory complexity 2^138.
Password Strength Metric Based Classification Proactive Model
SHEN Ying, LIAO Liu-cheng and DONG Tian-yang
Computer Science. 2015, 42 (11): 222-227.  doi:10.11896/j.issn.1002-137X.2015.11.046
Rejecting user-defined weak passwords is an important means of protecting information systems. Different from rule-based proactive password checkers, we proposed a combined proactive password classification model. The model first uses a Markov model and constructs an effective password strength metric integrating typical password strength factors such as length, frequency and first letter. The strength metric then assesses each password and grades it with suitable threshold values. The model deploys a multilevel Bloom filter to record the classification result. This not only reduces the time spent on password strength assessment and retrieval, but also keeps the proactive model and the graded passwords secret. Experimental results show that the password strength evaluation results are reasonable compared with other metrics, and the classification results generalize across password datasets.
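The Markov-model strength metric can be sketched as a negative log-likelihood under learned character-bigram frequencies: predictable passwords score low, unpredictable ones score high. The tiny corpus and the add-one smoothing below are illustrative assumptions, not the paper's exact construction:

```python
import math

def train_markov(passwords):
    """First-order character Markov model: bigram and unigram counts."""
    counts, totals = {}, {}
    for pw in passwords:
        for a, b in zip(pw, pw[1:]):
            counts[(a, b)] = counts.get((a, b), 0) + 1
            totals[a] = totals.get(a, 0) + 1
    return counts, totals

def strength(pw, counts, totals, alphabet=95):
    """Negative log2-likelihood with add-one smoothing over 95 printable chars.
    Higher values mean the password is less predictable (stronger)."""
    nll = 0.0
    for a, b in zip(pw, pw[1:]):
        p = (counts.get((a, b), 0) + 1) / (totals.get(a, 0) + alphabet)
        nll -= math.log2(p)
    return nll

# Hypothetical leaked-password corpus used to learn common patterns.
corpus = ["password", "passw0rd", "123456", "12345678", "qwerty"] * 20
counts, totals = train_markov(corpus)

weak = strength("password1", counts, totals)
strong = strength("xK9#vQ2!", counts, totals)
```

Grading then amounts to comparing the score against threshold values, and each graded password can be recorded in the corresponding level of the Bloom filter.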
Stigmergy-based Collaborative Conceptual Modeling in Web Environment
JIANG Yi, ZHANG Wei, ZHAO Hai-yan and JIN Zhi
Computer Science. 2015, 42 (11): 228-234.  doi:10.11896/j.issn.1002-137X.2015.11.047
With the increasing scale and complexity of a domain, constructing its conceptual model is usually beyond an individual's ability. The problem can be solved by gathering and organizing collective intelligence, so a collaborative conceptual modeling approach is of great significance. Based on a social-insect collaboration mechanism called stigmergy, an approach was proposed to help large-scale groups of modelers conduct collaborative conceptual modeling in the Web environment. The main characteristic of this approach is twofold. First, it provides an environment-based indirect interaction method to solve the scalability problem in communication. Second, it provides a stimulation-based collaboration mechanism to solve the collaboration problem in such an open and distributed environment. On this basis, collaborators can learn from each other's experience and knowledge to improve the conceptual model. We conducted a case study to evaluate the feasibility of this approach.
Algorithm for k-means Based on Data Stream in Cloud Computing
WANG Fei, QIN Xiao-lin, LIU Liang and SHEN Yao
Computer Science. 2015, 42 (11): 235-239.  doi:10.11896/j.issn.1002-137X.2015.11.048
The k-means algorithm is one of the most commonly used clustering algorithms. Data scale is now exploding and traditional centralized algorithms cannot meet the requirements, so designing a distributed k-means clustering algorithm is an urgent problem. Existing distributed k-means algorithms are based on the MapReduce framework and do not consider the choice of clustering centers. Since each MapReduce job reads and writes data from the distributed file system, it is inefficient at expressing dependencies between jobs. This paper therefore proposed a framework based on data streams. Building on MapReduce, this framework models computation according to the data flow diagram, and an efficient k-means algorithm was proposed on the framework. It uses an improved sampling-based algorithm to determine the clustering centers, for load balance and to reduce iterations. Experimental results demonstrate that the algorithm can efficiently solve large-scale k-means clustering.
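The sampling-based choice of initial centers can be sketched in a single-machine k-means (the MapReduce/data-stream distribution is omitted); the blob data and sample size below are illustrative:

```python
import random

def kmeans(points, k, iters=20, sample_size=50, seed=0):
    """Lloyd's k-means with centres initialised from a random sample of the
    data, a simplified stand-in for the paper's sampling-based initialisation."""
    rng = random.Random(seed)
    sample = rng.sample(points, min(sample_size, len(points)))
    centres = rng.sample(sample, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centre (squared distance)
            i = min(range(k),
                    key=lambda c: (p[0] - centres[c][0]) ** 2
                                + (p[1] - centres[c][1]) ** 2)
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # recompute centre as the mean of its cluster
                centres[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centres

# Two well-separated blobs; the centres should land near (0,0) and (10,10).
rng = random.Random(1)
pts = [(rng.gauss(0, 0.5), rng.gauss(0, 0.5)) for _ in range(100)] \
    + [(rng.gauss(10, 0.5), rng.gauss(10, 0.5)) for _ in range(100)]
centres = kmeans(pts, 2)
```

Initialising from a small sample rather than the full dataset is what keeps the initialisation cheap at large scale; the distributed version would draw the sample in a single pass over the stream.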
Control Flow Vectorization Based on Conditions Classification
SUN Hui-hui, ZHAO Rong-cai, GAO Wei and LI Yan-bing
Computer Science. 2015, 42 (11): 240-247.  doi:10.11896/j.issn.1002-137X.2015.11.049
Modern compilers increasingly rely on SIMD instructions to improve performance through vectorization. However, the complexity of control flow seriously inhibits the exploitation of SIMD vectorization. Current control flow vectorization methods are effective for single-level control flow but attain poor performance for nested control flow. Hence, a method of control flow vectorization based on condition classification was presented. Loop unswitching is applied in level-traversal order if the condition of the control flow is loop-invariant. When the condition is loop-variant, IF-conversion is implemented recursively, combined with statement matching and condition joining. SIMD instructions are then generated correspondingly and vectorization of nested control flow is realized. The experimental results show that the proposed method can efficiently remove nested control flow from loops and improve vectorizability, and effective acceleration is achieved for the test applications.
Improved Unbiased Node Label Prediction Algorithm
YU Gang and ZHANG Quan-fang
Computer Science. 2015, 42 (11): 248-250.  doi:10.11896/j.issn.1002-137X.2015.11.050
In social networks, predictions of users' attributes and locations and of image labels are extensively applied in many fields. In order to improve the performance of label prediction, this paper proposed an improved unbiased node label prediction algorithm. Firstly, we formalized the label prediction problem in social networks. Secondly, based on the mismatch between the training objective, which maximizes the joint likelihood of all observed labels, and the single-variable marginal prediction scores conditioned on the observed labels, we proposed an improved graphical model training algorithm. Finally, based on an unbiased estimation of confidence, we proposed a sub-graph based training model that does not require additional labels. Experiments on the Twitter and Pokec datasets show that, compared with related work, the proposed algorithm achieves better accuracy and execution efficiency when predicting labels.
Iterative Imputation Algorithm Based on Reduced Relational Grade for Gene Expression Data
HE Yun and PI De-chang
Computer Science. 2015, 42 (11): 251-255.  doi:10.11896/j.issn.1002-137X.2015.11.051
Gene expression data frequently suffer from missing values, which adversely affect downstream analysis. A new similarity metric named the reduced relational grade was proposed, and based on it we presented an iterative imputation algorithm for gene expression data (RKNNimpute). The reduced relational grade is an improvement of the grey relational grade: it achieves the same performance while greatly reducing the time complexity. RKNNimpute imputes missing values iteratively, taking the reduced relational grade as the similarity metric and expanding the set of candidate nearest-neighbor genes with already-imputed genes, which improves the effect and performance of the imputation. We evaluated the proposed method experimentally on datasets of different kinds, including time series, non-time series and mixed data. The results demonstrate that the reduced relational grade is effective and RKNNimpute outperforms common imputation algorithms.
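A minimal sketch of relational-grade-based KNN imputation, using the classic grey relational grade (the paper's "reduced" variant is its own contribution and is not reproduced here); the expression values are made up:

```python
def relational_grade(x0, xi, rho=0.5):
    """Classic grey relational grade between a reference gene x0 and gene xi;
    rho is the conventional distinguishing coefficient."""
    deltas = [abs(a - b) for a, b in zip(x0, xi)]
    dmin, dmax = min(deltas), max(deltas)
    if dmax == 0:
        return 1.0  # identical sequences are maximally related
    coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in deltas]
    return sum(coeffs) / len(coeffs)

def knn_impute(target, genes, k=2):
    """Fill missing entries (None) in target from its k most related genes,
    weighting each neighbour's value by its relational grade."""
    known = [i for i, v in enumerate(target) if v is not None]
    ref = [target[i] for i in known]
    scored = sorted(genes,
                    key=lambda g: relational_grade(ref, [g[i] for i in known]),
                    reverse=True)[:k]
    grades = [relational_grade(ref, [g[i] for i in known]) for g in scored]
    filled = list(target)
    for i, v in enumerate(filled):
        if v is None:
            filled[i] = sum(w * g[i] for w, g in zip(grades, scored)) / sum(grades)
    return filled

# Hypothetical expression profiles; the target gene is missing its 3rd value.
genes = [[1.0, 2.0, 3.0, 4.0], [1.1, 2.1, 3.1, 4.1], [9.0, 1.0, 7.0, 2.0]]
target = [1.0, 2.0, None, 4.0]
imputed = knn_impute(target, genes)
```

The iterative aspect of RKNNimpute corresponds to adding freshly imputed genes back into the candidate pool and repeating, which this single-pass sketch omits.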
Research on Parameter Self-selection Strategy of Differential Evolution
WANG Shen-wen, ZHANG Wen-sheng, DING Li-xin, XIE Cheng-wang and GUO Zhao-lu
Computer Science. 2015, 42 (11): 256-259.  doi:10.11896/j.issn.1002-137X.2015.11.052
The selection of parameters is itself a combinatorial optimization problem, and although a considerable amount of work has been conducted, it is known to be a puzzling task. In this paper, a DE algorithm was proposed that uses a new mechanism for parameter self-selection: it dynamically learns from previous experience and selects the best-performing combinations of parameters for the next generation during convergence. We first designed the mechanism, covering three aspects: building the parameter database, scoring parameter performance and selecting parameter combinations. We then conducted experiments on benchmark functions to judge the performance. The results show that DE with the new mechanism obtains promising performance.
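The three-part mechanism (parameter pool, performance scoring, combination selection) can be sketched on top of standard DE/rand/1; the pool contents and the success-count reward below are illustrative assumptions, not the paper's exact scheme:

```python
import random

def de_adaptive(f, bounds, pop_size=20, gens=100, seed=0):
    """Differential evolution where each (F, CR) pair is drawn from a small
    pool and scored by its past success, echoing parameter self-selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pool = [(0.5, 0.3), (0.5, 0.9), (0.9, 0.3), (0.9, 0.9)]  # parameter database
    scores = [1.0] * len(pool)                               # performance scores
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # select a combination with probability proportional to its score
            idx = rng.choices(range(len(pool)), weights=scores)[0]
            F, CR = pool[idx]
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # ensure at least one mutated component
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            ft = f(trial)
            if ft < fit[i]:
                pop[i], fit[i] = trial, ft
                scores[idx] += 1.0      # reward the combination that improved
    return min(zip(fit, pop))

# Minimise the 5-D sphere function; the optimum is 0 at the origin.
best_fit, best_x = de_adaptive(lambda x: sum(v * v for v in x),
                               bounds=[(-5.0, 5.0)] * 5)
```

Combinations that keep producing improvements accumulate score and are selected more often, so the parameter mix shifts automatically as the search moves from exploration to exploitation.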
Creativity Driven Optimization Algorithm
ZOU Ru and FENG Xiang
Computer Science. 2015, 42 (11): 260-265.  doi:10.11896/j.issn.1002-137X.2015.11.053
Solving problems by mimicking the human creative thinking process has been one of the hotspots in artificial intelligence. Drawing on current research into creative thinking and learning from existing nature-inspired algorithms, a novel intelligent algorithm named the creativity driven optimization algorithm (CDOA) was proposed. First, the creativity driven optimization model was constructed and special operators were designed for its five sub-models. Then, the execution steps of CDOA were given. To test CDOA's effectiveness, eight CEC-2013 real-parameter benchmark functions were used, and the optimization results of CDOA were compared with three state-of-the-art algorithms. The results show that CDOA has appealing optimization performance, especially on the complex composition functions. Finally, an experiment was carried out to analyze the computational complexity of CDOA. The results indicate that CDOA has lower computational complexity than the other three comparison algorithms.
Seven-spot Ladybird Optimization and its Application in Multidisciplinary Collaborative Optimization
WANG Peng, LI Yang and WANG Kun-lun
Computer Science. 2015, 42 (11): 266-269.  doi:10.11896/j.issn.1002-137X.2015.11.054
Based on the life habits of the seven-spot ladybird, we proposed a new heuristic algorithm called seven-spot ladybird optimization (SLO), and demonstrated its optimization ability by comparison with two existing heuristic algorithms, GA and PSO. In view of the defects of multidisciplinary design optimization (MDO), such as low solving efficiency and poor robustness, we applied SLO to the system-level optimization of collaborative optimization (CO) and verified the search capability of the SLO algorithm through an engineering example.
Speech Emotion Recognition in Mandarin Based on PCA and SVM
JIANG Hai-hua and HU Bin
Computer Science. 2015, 42 (11): 270-273.  doi:10.11896/j.issn.1002-137X.2015.11.055
Feature selection and extraction play a vital role in speech emotion recognition, and no sufficiently effective speech emotion features have yet been proposed. For this reason, according to the characteristics of Mandarin, which differ from those of Western languages, effective emotional features and related statistics, including Mel-frequency cepstral coefficients, pitch frequency, short-time energy, short-time average zero-crossing rate and the first formant, were analyzed on a Mandarin emotional corpus containing 6 kinds of emotions. We then chose principal component analysis (PCA) for feature extraction and presented a speech emotion recognition method based on a support vector machine (SVM) for classification. The experimental results show that the proposed method achieves high emotion recognition accuracy compared with several significant methods, and that the emotion extraction and modeling are reasonable and effective.
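The PCA step can be sketched with power iteration for the leading principal component (the SVM classifier is omitted); the toy two-dimensional features below stand in for the acoustic statistics named above:

```python
import math
import random

def first_pc(data, iters=200, seed=0):
    """First principal component of the row-vectors in `data`, found by power
    iteration on the covariance matrix without forming it explicitly."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centred = [[row[j] - means[j] for j in range(d)] for row in data]
    rng = random.Random(seed)
    v = [rng.gauss(0, 1) for _ in range(d)]
    for _ in range(iters):
        # w = C v where C = (1/n) X^T X, computed as X^T (X v)
        proj = [sum(r[j] * v[j] for j in range(d)) for r in centred]
        w = [sum(proj[i] * centred[i][j] for i in range(n)) / n for j in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]  # renormalise each step
    return v

# Toy 'features': the second coordinate varies far more than the first,
# so the leading component should align with axis 1.
rng = random.Random(1)
feats = [[rng.gauss(0, 0.1), rng.gauss(0, 5.0)] for _ in range(300)]
pc = first_pc(feats)
```

Projecting each feature vector onto the top few components obtained this way yields the low-dimensional representation that is then fed to the SVM classifier.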
Research of Ranking Method Based on Improved Possible Degree Dominance Relation
ZHANG Qi-wen, WANG Xue-qin and ZHUANG Xin-lei
Computer Science. 2015, 42 (11): 274-278.  doi:10.11896/j.issn.1002-137X.2015.11.056
The dominance relation is an important method for solving multi-attribute decision problems and studying incomplete interval-valued information systems. Based on research into ranking methods in incomplete interval-valued information systems, we proposed a ranking method based on an improved possible-degree dominance relation to solve the problem that too many attributes may cause the ranking to fail. The new method combines the interval-valued dominance relation with the possible degree, remedies the weakness of the definition of the classic dominance relation, and defines the average comprehensive dominance degree. Finally, compared in specific cases with other ranking methods such as the α-β dominance relation and the tolerance dominance relation, the improved possible-degree dominance relation can not only handle incomplete interval-valued information systems, but also make the ranking results more reasonable and effective.
Reconstruction Algorithm in Compressed Sensing Based on Maximum Posteriori Estimation
ZHUANG Yan-bin, WANG Zun-zhi, XIAO Xian-jian and ZHANG Xue-wu
Computer Science. 2015, 42 (11): 279-283.  doi:10.11896/j.issn.1002-137X.2015.11.057
This paper proposed a systematic method for constructing a sparse data reconstruction algorithm in compressed sensing with a relatively low computational cost for general observation matrices. It is known that the cost of l1-norm minimization using a standard linear programming algorithm is O(N^3). The cost of the proposed method can be reduced to O(N^2) by applying the approach of posterior maximization. By introducing the division ratio, the algorithm achieves better convergence. Furthermore, in principle, the algorithm is expected to achieve the widest successful reconstruction region, as evaluated by theoretical argument.
Web Page Optimal Segmentation Algorithm Based on Visual Features
LI Wen-hao, PENG Hong-chao, TONG Ming-wen and SHI Jun-jie
Computer Science. 2015, 42 (11): 284-287.  doi:10.11896/j.issn.1002-137X.2015.11.058
Abstract PDF(713KB) ( 270 )   
References | Related Articles | Metrics
Web page segmentation is a key technique for adaptive Web page presentation.To overcome the shortcomings of the classical segmentation algorithm VIPS (Vision-based Page Segmentation),namely fragmented content blocks and its semi-automatic nature,a novel segmentation algorithm VWOS (Vision-based Web Optimal Segmentation) was proposed based on the optimal division of a graph.The Web page is modeled as a weighted undirected connected graph built from the visual features and structure of the page,so that the segmentation problem is transformed into the optimal division of the graph.VWOS was designed by combining Kruskal's algorithm with the segmentation process.Experiments show that VWOS produces better segmentation results than VIPS.
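The abstract combines Kruskal's algorithm with a stopping criterion for the optimal division; a minimal sketch of that pattern is Kruskal-style clustering with a union-find structure, where edges (visual-similarity weights between page blocks) are merged in ascending order and merging stops once the weight exceeds a threshold. The edge-weight definition and the fixed threshold are assumptions, not the paper's exact criterion.

```python
class DSU:
    """Union-find (disjoint set union) with path halving."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def segment(n, edges, threshold):
    """Kruskal-style segmentation: merge the most similar blocks first,
    stop when the next edge weight exceeds `threshold`.

    `edges` is a list of (u, v, weight); returns the resulting groups."""
    dsu = DSU(n)
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        if w > threshold:
            break                      # remaining edges are even heavier
        dsu.union(u, v)
    groups = {}
    for node in range(n):
        groups.setdefault(dsu.find(node), []).append(node)
    return sorted(groups.values())
```

Blocks connected by low-weight (visually similar) edges end up in the same segment; a heavy edge leaves its endpoints in separate segments.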
Feature Extraction Algorithm Based on Evolutionary Deep Learning
CHEN Zhen, XIA Jing-bo, BAI Jun and XU Min
Computer Science. 2015, 42 (11): 288-292.  doi:10.11896/j.issn.1002-137X.2015.11.059
Abstract PDF(398KB) ( 537 )   
References | Related Articles | Metrics
The contradiction between comprehensive information and the curse of dimensionality is a fundamental problem for network situation awareness in the big-data era.Feature extraction is a mainstream approach to dimensionality reduction,but it performs poorly on high-dimensional,nonlinear data.Deep learning is a multi-layer,nonlinear method that can approximate complicated functions,yet it is sensitive to the parameters of its hidden layers.Based on this analysis,a feature extraction algorithm based on evolutionary deep learning was proposed.The algorithm combines evolutionary algorithms (EA) with deep learning,exploits the characteristics of the genetic algorithm (GA) and evolution strategies (ES),and optimizes the learning structure and relevant parameters.Theoretical analysis and simulation results both demonstrate the effectiveness of the algorithm.
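The paper's network model and fitness measure are not given in the abstract, so the GA/ES combination can only be sketched with a toy surrogate fitness standing in for the deep network's validation score. The genome (hidden-unit count plus log learning rate), the hypothetical optimum at 64 units and learning rate 0.01, and all GA/ES hyperparameters below are illustrative assumptions.

```python
import random

def fitness(genome):
    """Surrogate for the deep model's validation score (assumed optimum:
    64 hidden units, learning rate 1e-2, i.e. log10(lr) = -2)."""
    hidden, log_lr = genome
    return -((hidden - 64) / 64.0) ** 2 - (log_lr + 2.0) ** 2

def evolve(pop_size=20, generations=30, seed=0):
    """Evolve (hidden_units, log_lr) genomes with GA crossover/mutation
    and an ES-style Gaussian perturbation; elitism keeps the best."""
    rng = random.Random(seed)
    pop = [(rng.randint(4, 256), rng.uniform(-5.0, 0.0)) for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def pick():                       # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) > fitness(b) else b
        children = []
        while len(children) < pop_size - 1:
            p1, p2 = pick(), pick()
            child = (p1[0], p2[1])        # GA crossover: swap the two genes
            if rng.random() < 0.3:        # GA mutation on the discrete gene
                child = (max(4, child[0] + rng.randint(-16, 16)), child[1])
            if rng.random() < 0.3:        # ES-style Gaussian step on the real gene
                child = (child[0], child[1] + rng.gauss(0.0, 0.3))
            children.append(child)
        pop = children + [best]           # elitism: best genome survives
        best = max(pop, key=fitness)
    return best
```

With elitism the best fitness is non-decreasing over generations, which mirrors the intent of letting evolution tune the structure and parameters the network is sensitive to.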
Human Action Recognition by Visual Word Based on Local and Global Features
XIE Fei, GONG Sheng-rong, LIU Chun-ping and JI Yi
Computer Science. 2015, 42 (11): 293-298.  doi:10.11896/j.issn.1002-137X.2015.11.060
Abstract PDF(1061KB) ( 306 )   
References | Related Articles | Metrics
Unlike methods based on low-level features alone,human action recognition based on visual words adds mid-level semantic information to the features and thus improves recognition accuracy.In complex backgrounds or dynamic scenes,however,the effectiveness of visual words may deteriorate.We proposed a new method that combines local and global features to generate visual words.First,a saliency map is used to detect rectangles around the human body.Inside these rectangles,3D-SIFT descriptors are computed around interest points detected from a dynamic threshold matrix to describe local features,and HOOF is added to describe the global motion information.The resulting visual words carry important semantic information in the video,such as brightness contrast and motion information.Compared with state-of-the-art methods,recognition accuracy improves by 6.4% on the KTH dataset and 6.5% on the UCF dataset.The experimental results also indicate that our visual dictionary has more advantages in both simple backgrounds and dynamic scenes than others.
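Once 3D-SIFT and HOOF descriptors are extracted, turning them into visual words follows the usual bag-of-visual-words recipe: quantize each descriptor to its nearest codeword in a learned vocabulary and build a normalized histogram per clip. A minimal sketch of that step, assuming the vocabulary has already been learned (e.g. by k-means, which the abstract does not specify):

```python
def sqdist(u, v):
    """Squared Euclidean distance between two descriptors."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

def quantize(desc, vocab):
    """Index of the nearest visual word for one descriptor."""
    return min(range(len(vocab)), key=lambda k: sqdist(desc, vocab[k]))

def bow_histogram(descriptors, vocab):
    """L1-normalized bag-of-visual-words histogram for one video clip.

    `descriptors` would hold the clip's 3D-SIFT and HOOF vectors
    (assumed already reduced to a common dimensionality)."""
    counts = [0] * len(vocab)
    for d in descriptors:
        counts[quantize(d, vocab)] += 1
    n = len(descriptors)
    return [c / n for c in counts]
```

The resulting histogram is the fixed-length representation a classifier would consume, regardless of how many interest points each clip produced.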
Compressed Sensing Reconstruction of MRI Images Based on Nonsubsampled Contourlet Transform
CHEN Xiu-mei, WANG Jing-shi, WANG Wei, ZHAO Yang and TANG Min
Computer Science. 2015, 42 (11): 299-304.  doi:10.11896/j.issn.1002-137X.2015.11.061
Abstract PDF(1056KB) ( 227 )   
References | Related Articles | Metrics
Compressed sensing is a recently developed theoretical framework for information acquisition and processing.Using non-linear optimization methods,signals can be recovered accurately from few linear,non-adaptive measurements by exploiting the sparsity or compressibility inherent in real-world signals.Compressed sensing represents compressible signals at a sampling rate significantly below the Nyquist rate,and it has been applied in compressive imaging and medical image processing.Our algorithm proceeds as follows:the nonsubsampled contourlet transform (NSCT) is applied to represent the original image sparsely,and the iterative soft-thresholding algorithm,with a Fourier matrix as the measurement matrix,is then used to reconstruct the medical MRI image.The peak signal-to-noise ratio (PSNR),mutual information (MI) and artifacts power (AP) are used to compare the reconstruction quality of the wavelet transform (WT),the sharp frequency localization contourlet transform (SFLCT) and the nonsubsampled contourlet transform (NSCT).Experimental results demonstrate that our method is superior to the other two in PSNR,the retained proportion of original information and reconstruction precision,and that it can be extended and widely used in rapid medical imaging technology.
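The iterative soft-thresholding step named in the abstract is standard and can be sketched independently of the NSCT/Fourier specifics: gradient descent on the data-fidelity term followed by the soft-threshold (the proximal operator of the l1 norm). The tiny dense matrix below stands in for the measurement operator; a real implementation would apply the Fourier measurement and NSCT synthesis as operators instead.

```python
def soft_threshold(x, t):
    """Shrink x toward zero by t (proximal operator of the l1 norm)."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def ista(A, y, lam, step, iters=100):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1.

    A is a list of rows; `step` must be below 1 / ||A||^2 for convergence."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = A x - y
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        # gradient of the quadratic term: A^T r
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step, then shrinkage
        x = [soft_threshold(x[j] - step * g[j], step * lam) for j in range(n)]
    return x
```

With A equal to the identity, one iteration reduces to plain soft-thresholding of the measurements, which makes the shrinkage behaviour easy to check.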
Algorithm of Moving Target Tracking Based on SIFT Feature Optical Flow
LI Yan-ping, LIN Jian-hui and YANG Ning-xue
Computer Science. 2015, 42 (11): 305-309.  doi:10.11896/j.issn.1002-137X.2015.11.062
Abstract PDF(927KB) ( 322 )   
References | Related Articles | Metrics
Existing feature-optical-flow algorithms suffer from high computational complexity and poor noise robustness.To address this problem,a moving target tracking algorithm based on the scale-invariant feature transform (SIFT) and the Kalman filter was proposed.First,SIFT features are extracted from the images.Then,the feature points of the moving target are matched according to the minimum-absolute-error criterion,and the optical flow vectors of the SIFT features are estimated with the Kalman filter.Finally,the moving target is recognized and tracked by clustering the optical-flow features.The experimental results suggest that the algorithm tracks feature points well in natural scenes,and that it is computationally simple and easy to implement.
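The minimum-absolute-error matching step can be sketched directly: for each descriptor in the previous frame, pick the descriptor in the current frame with the smallest sum of absolute differences. The greedy one-directional matching below is an illustrative simplification; the paper may well add cross-checking or thresholds it does not describe.

```python
def sad(d1, d2):
    """Sum of absolute differences (the minimum-absolute-error criterion)."""
    return sum(abs(a - b) for a, b in zip(d1, d2))

def match_features(descs_prev, descs_curr):
    """Match each previous-frame SIFT descriptor to its nearest
    current-frame descriptor under the SAD criterion.

    Returns a list of (prev_index, curr_index) pairs; the displacement
    between matched keypoint positions gives the optical-flow vector."""
    matches = []
    for i, d in enumerate(descs_prev):
        j = min(range(len(descs_curr)), key=lambda k: sad(d, descs_curr[k]))
        matches.append((i, j))
    return matches
```

The matched pairs are exactly the inputs the abstract's Kalman stage would smooth into per-feature flow estimates.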
Novel Target Tracking Algorithm Based on Joint Estimation of System Error and State
HU Yu-mei, HU Zhen-tao, ZHENG Shan-shan, LI Xian and GUO Zhen
Computer Science. 2015, 42 (11): 310-313.  doi:10.11896/j.issn.1002-137X.2015.11.063
Abstract PDF(400KB) ( 201 )   
References | Related Articles | Metrics
To mitigate the adverse effect of system error on state estimation precision in linear systems,a novel target tracking algorithm based on joint estimation of system error and state was proposed within the Kalman filter framework.Firstly,the influence of system error on the target state estimate and on the state estimation error covariance matrix is analyzed quantitatively.Secondly,combined with the state-augmentation method,the registration process for the system error is constructed.Finally,the realization steps of the new algorithm are designed following the iterative process of the standard Kalman filter.Simulation results show that the new algorithm handles system error registration and state estimation feasibly and effectively,whether the system error is constant or time-varying.
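State augmentation for error registration means appending the unknown system error (here a constant sensor bias) to the state vector and letting the standard Kalman recursion estimate both jointly. The toy setup below, with a static target observed by one unbiased and one biased sensor so that the bias is observable, is an assumed illustration, not the paper's scenario.

```python
def t2(m):
    return [[m[0][0], m[1][0]], [m[0][1], m[1][1]]]

def mul2(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add2(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def inv2(m):
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / det, -m[0][1] / det], [-m[1][0] / det, m[0][0] / det]]

def augmented_kf(measurements, r=1.0, p0=100.0):
    """Joint Kalman estimation of position x and constant bias b.

    Each measurement is (z_unbiased, z_biased) = (x, x + b) plus noise."""
    s = [0.0, 0.0]                       # augmented state [x, b]
    P = [[p0, 0.0], [0.0, p0]]
    H = [[1.0, 0.0], [1.0, 1.0]]         # row 1: clean sensor, row 2: biased
    R = [[r, 0.0], [0.0, r]]
    for z in measurements:
        # prediction: F = I, Q = 0 (static target, constant bias),
        # so the predicted s and P equal the previous ones
        y = [z[0] - s[0], z[1] - (s[0] + s[1])]          # innovation
        PHt = mul2(P, t2(H))
        K = mul2(PHt, inv2(add2(mul2(H, PHt), R)))       # Kalman gain
        s = [s[0] + K[0][0] * y[0] + K[0][1] * y[1],
             s[1] + K[1][0] * y[0] + K[1][1] * y[1]]
        KH = mul2(K, H)
        P = mul2([[1 - KH[0][0], -KH[0][1]],
                  [-KH[1][0], 1 - KH[1][1]]], P)
    return s
```

Because the augmented measurement matrix is invertible in this setup, the bias estimate converges to the true offset and the position estimate is simultaneously corrected for it.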
Moving Object Detection in Omnidirectional Vision-based Mobile Robot
TANG Yi-ping, HU Da-wei, CAI Ying-mei, HUANG Ke and JIANG Rong-jian
Computer Science. 2015, 42 (11): 314-319.  doi:10.11896/j.issn.1002-137X.2015.11.064
Abstract PDF(780KB) ( 204 )   
References | Related Articles | Metrics
For moving object detection against a dynamic background,the motion of the object must be separated from background changes caused by the ego-motion of the mobile robot.Affine transformation is widely used to estimate the background transformation between images.However,using omnidirectional vision sensors (ODVS) on a mobile robot causes inconsistencies in the background motion image due to the distortion of the omnidirectional image,so a single affine transformation model cannot represent the whole background change.In this paper,the image is divided into grid windows,and the regions of moving objects are obtained from the background-compensated frame difference,using a separate affine transformation for each window.Finally,according to the imaging characteristics of the ODVS,the distance and orientation of a moving obstacle are obtained by visual methods.The results demonstrate that the proposed method is efficient at moving object detection;it can also localize moving obstacles precisely and greatly improve the obstacle-avoidance ability of the mobile robot.
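The per-window compensation step can be sketched for a single window: given that window's estimated affine parameters, back-project each current-frame pixel into the previous frame, take the absolute difference, and threshold it to obtain a moving-object mask. Nearest-neighbour sampling and the treatment of pixels that back-project outside the frame are assumptions; the abstract does not say how the affine parameters themselves are estimated.

```python
def compensated_frame_difference(prev, curr, affine, thresh):
    """Frame difference after affine background compensation (one window).

    `affine` = (a, b, tx, c, d, ty) maps a current-frame pixel (x, y)
    back to (a*x + b*y + tx, c*x + d*y + ty) in the previous frame,
    sampled nearest-neighbour.  Pixels whose back-projection falls
    outside the previous frame are marked as background."""
    h, w = len(curr), len(curr[0])
    a, b, tx, c, d, ty = affine
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            xp = round(a * x + b * y + tx)
            yp = round(c * x + d * y + ty)
            if 0 <= xp < w and 0 <= yp < h:
                # large residual after compensation => independent motion
                mask[y][x] = abs(curr[y][x] - prev[yp][xp]) > thresh
    return mask
```

With a background that simply shifts by one pixel (affine tx = -1 in the back-projection), the compensated difference is zero everywhere except where an object moved independently.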