Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Volume 43 Issue Z11, 01 December 2018
  
Advances in Study of Auditory System Brain Model
WANG Cong, ZHANG Qiao-li, ZHAO Di and CHI Xue-bin
Computer Science. 2016, 43 (Z11): 1-5, 15.  doi:10.11896/j.issn.1002-137X.2016.11A.001
The integration of information and communication technology with biology has recently developed to a new stage. The Brain Plan of the European Union, the American brain program and the Japanese brain program have been carrying out research in brain science, and the Chinese brain program is also underway. The EU and the United States have invested one billion euros and 4.5 billion US dollars respectively in their brain plans. Both plans use computer simulation to build a detailed model of the human brain, and simulating the human brain is an important part of this work. In this paper, by surveying the literature on the auditory system at home and abroad and analyzing the auditory system of the brain, we summarize the research focus and recent developments on the auditory system and predict future trends in this direction, providing a theoretical basis and methods for research in this field in China.
Survey on Gray GM(1,1) Model
XU Ze-dong and LIU Fu-xiang
Computer Science. 2016, 43 (Z11): 6-10.  doi:10.11896/j.issn.1002-137X.2016.11A.002
Grey prediction is an important branch of grey system theory. Six directions in the optimization of the GM(1,1) model in recent years are comprehensively introduced: grey generating techniques, improvement of the initial value, improvement of the background value, parameter optimization, residual-sequence optimization and comprehensive optimization. Some suggestions are also given for the future study of grey prediction models.
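As background for these optimization directions, a minimal sketch of the classical (unoptimized) GM(1,1) procedure — accumulated generation, least-squares fit of the grey equation, and inverse accumulation — might look as follows in Python; the function and variable names are our own:

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Classical (unoptimized) GM(1,1): fit on sequence x0, predict `steps` ahead."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                          # 1-AGO: accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])               # background values (mean generation)
    B = np.column_stack([-z1, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # grey equation x0(k) + a*z1(k) = b
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time-response function
    x0_hat = np.diff(x1_hat, prepend=0.0)       # inverse AGO restores the original scale
    return x0_hat[n:]                           # out-of-sample forecasts
```

Each of the six optimization directions surveyed above modifies one of these steps — the initial value x0[0], the 0.5 weighting in the background value, the residual sequence, and so on.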
Adaptive Fault-tolerant Scheduling Algorithm for Unresponsive Task Based on Speculation
CUI Yun-fei, WU Xiao-jin, DAI Ye, CHENG Xiao and GUO Gang
Computer Science. 2016, 43 (Z11): 11-15.  doi:10.11896/j.issn.1002-137X.2016.11A.003
The current fault-tolerant scheduling algorithm for unresponsive tasks is based on a static execution-failure time threshold and cannot adapt to the dynamic cluster load of a large data-processing center. To address this issue, an adaptive execution-failure time threshold method is proposed, and based on it an adaptive fault-tolerant scheduling algorithm for unresponsive tasks (AFTS) is designed. According to the job size, the size of individual tasks and the remaining running time, AFTS dynamically adjusts the time threshold at which an unresponsive task is declared failed, reducing the job response time. A prototype system using AFTS was developed, on which the adaptive threshold method was validated and the performance of AFTS was evaluated. The results show that AFTS outperforms the current fault-tolerant scheduling algorithm in terms of job response time.
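The abstract does not give AFTS's exact formula, so the sketch below is a purely hypothetical illustration of the kind of adjustment it performs: an execution-failure threshold that scales with a task's size relative to the job's mean and tightens as the job nears completion, so stragglers are re-speculated sooner:

```python
def adaptive_fail_threshold(base_timeout, task_size, mean_task_size,
                            remaining_tasks, total_tasks):
    """Hypothetical adaptive execution-failure threshold (illustration only).

    Larger-than-average tasks get proportionally more time before being
    declared failed; as the job nears completion the threshold is tightened
    (here by up to 50%) so unresponsive stragglers are re-speculated sooner.
    """
    size_factor = task_size / mean_task_size          # big tasks wait longer
    progress = 1.0 - remaining_tasks / total_tasks    # 0 at job start, -> 1 near the end
    tighten = 1.0 - 0.5 * progress
    return base_timeout * size_factor * tighten
```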
Efficient Prediction Method of Essential Proteins Based on PPI Network
HONG Hai-yan and LIU Wei
Computer Science. 2016, 43 (Z11): 16-20, 25.  doi:10.11896/j.issn.1002-137X.2016.11A.004
Essential proteins are indispensable for cellular life, so identifying them helps us understand the minimum requirements of a living cell and supports drug design. With the development of high-throughput technologies, more and more protein-protein interaction (PPI) data have been obtained, making it possible to study essential proteins at the network level. A number of computational methods have been proposed for essential-protein identification, but they do not address the false positives in PPI data. In addition, existing methods generally consider only the topology of the network, ignoring the biological information of the proteins, and research combining both is still relatively scarce; whether a protein is essential relates not only to the network topology but also to the protein's biological information. To address these problems, this paper presents an efficient new prediction method called EPP (Essential Proteins Prediction). The algorithm computes an importance score for each protein in the PPI network; the higher the score, the more likely the protein is essential, and the top P% of proteins by importance are taken as essential. When computing the score, semantic similarity and edge-reliability factors are considered together. The method has low complexity and accounts for both the network topology and the biological meaning of the proteins themselves. Experimental results show that, compared with conventional methods, it identifies more essential proteins and achieves higher statistical indicators.
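A toy sketch of this kind of scoring, with the combination rule and all names assumed by us rather than taken from the paper: each protein sums, over its PPI neighbours, the edge's reliability weighted by semantic similarity, so topology and biological information both contribute, and the top P% are predicted essential:

```python
def epp_scores(adj, reliability, semantic_sim):
    """Toy EPP-style importance score (the paper's exact combination may differ):
    sum, over each protein's PPI neighbours, the edge reliability weighted by
    GO semantic similarity."""
    return {p: sum(reliability[(p, q)] * semantic_sim[(p, q)] for q in nbrs)
            for p, nbrs in adj.items()}

def top_percent(scores, p):
    """Predict the top p% of proteins by importance score as essential."""
    k = max(1, round(len(scores) * p / 100))
    return sorted(scores, key=scores.get, reverse=True)[:k]
```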
WSN Fault Diagnosis with Improved Rough Set and Neural Network
ZHOU Xi and XUE Shan-liang
Computer Science. 2016, 43 (Z11): 21-25.  doi:10.11896/j.issn.1002-137X.2016.11A.005
Integrating the advantages of rough set theory and artificial neural networks, an improved rough set algorithm is proposed and combined with an artificial neural network to achieve intelligent fault diagnosis of wireless sensor network (WSN) nodes. First, based on an analysis of the WSN application environment and fault characteristics, a diagnosis decision table is obtained through data acquisition, pretreatment and compression, and is reduced by the improved inductive attribute reduction algorithm (IIARA) of rough sets. This extracts the minimal set of fault features that contribute most to diagnosis and determines the topology of the radial basis function neural network (RBFNN). Finally, diagnosis results are obtained through the nonlinear mapping between fault symptoms and fault types established by network training. Simulation results show that for fault diagnosis of WSN nodes, the algorithm effectively reduces the network input layer, simplifies the neural network structure, shortens training time and improves diagnostic accuracy.
Forecasting of Hospital Outpatient Based on Deep Belief Network
YANG Xu-hua and ZHONG Nan-yi
Computer Science. 2016, 43 (Z11): 26-30.  doi:10.11896/j.issn.1002-137X.2016.11A.006
Forecasting of hospital outpatient volume plays an important role in the intelligent management of modern hospital medical resources. Existing research on outpatient-visit analysis and forecasting mostly targets a single data set and lacks the in-depth analysis needed to fully tap the data. A hospital outpatient prediction method based on a deep belief network (DBN) is therefore proposed. The DBN performs unsupervised learning on outpatient data from hospital departments and extracts features, mining the hidden relationships among the departments' outpatient data. A logistic regression layer is added on top of the network, and the extracted features are used to forecast future outpatient volume. Experimental results show that the deep-learning-based prediction model achieves better forecasting of outpatient volume.
Approach of Extracting Web Page Informational Content Based on Node Type Annotation
XIE Fang-li, ZHOU Guo-min and WANG Jian
Computer Science. 2016, 43 (Z11): 31-34, 49.  doi:10.11896/j.issn.1002-137X.2016.11A.007
An approach based on DOM node type annotation is proposed to extract the informational content of web pages. According to the noise patterns in web pages, DOM nodes are first classified into four types: text, image, anchor and ignorable, and a method is given to calculate a node's degree of coherence (DoC). By adding two new attributes, type and DoC, to each DOM node, text nodes whose DoC exceeds a threshold can be selected during the extraction phase and integrated as the page's informational content. Compared with three other content-extraction tools, the proposed method reaches an F1 score of 95.1%, which is 0.3% higher than Evernote and 5.01% higher than YNote.
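The abstract does not spell out the DoC formula; a common proxy, used purely as an illustration here, is the share of a node's text that is not inside anchors, so link-dense navigation blocks score low and body text scores high. The thresholding step then looks like this (node representation and names are our own):

```python
def degree_of_coherence(node):
    """Hypothetical DoC score: the fraction of a node's text NOT inside
    anchor tags.  The paper's actual DoC formula is not given in the abstract."""
    if node["text_len"] == 0:
        return 0.0
    return (node["text_len"] - node["anchor_text_len"]) / node["text_len"]

def extract_content(nodes, threshold=0.7):
    """Keep text-type nodes whose DoC exceeds the threshold and join them
    as the page's informational content."""
    kept = [n for n in nodes if n["type"] == "text"
            and degree_of_coherence(n) > threshold]
    return " ".join(n["text"] for n in kept)
```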
Algorithm of Maintaining Concept Lattice Based on Binary Relation Decrement
WANG Chun-yue, WANG Li-ming and ZHANG Zhuo
Computer Science. 2016, 43 (Z11): 35-41.  doi:10.11896/j.issn.1002-137X.2016.11A.008
In view of the problem of maintaining a concept lattice in limited space, a maintenance algorithm is proposed for deleting redundant binary relations from the formal context. Traditional algorithms re-construct the concept lattice after deleting redundant relations, which is time-consuming. Instead, the proposed method adjusts the original lattice directly to obtain the new one, and can handle decrements anywhere in the binary relation. It performs a bottom-up breadth-first traversal of the lattice nodes. First, according to whether the current node contains the redundant relation's object or attribute, nodes are divided into affected nodes and unchanged nodes. Then, based on the extents and intents of the current node's parent and child nodes, the affected nodes are subdivided into four categories: object-reduced nodes, attribute-reduced nodes, splitting nodes and deleted nodes. Finally, edges are updated according to the types of the parent and child nodes. Experimental results show that, compared with the traditional algorithm, it achieves better time performance to some extent.
System Identification with Data Dropout
XU Piao-piao and BU Xu-hui
Computer Science. 2016, 43 (Z11): 42-44, 55.  doi:10.11896/j.issn.1002-137X.2016.11A.009
Existing system parameter identification methods are primarily based on input-output data that are fully available. However, owing to sensor failures or network transmission failures in real systems, data dropout often occurs. For the identification of a class of linear systems under input or output data dropout, the dropout phenomenon is described as a Bernoulli random sequence, and a new algorithm is presented to estimate the parameters under data dropout. Finally, a numerical example validates the effectiveness of the proposed algorithm. The results show that it converges better than the recursive least squares method.
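To make the problem setting concrete, here is a minimal sketch (not the paper's improved estimator) of the baseline it is compared against: arrivals modeled as a Bernoulli sequence, with a recursive least-squares estimate updated only on samples that actually arrive:

```python
import numpy as np

def rls_with_dropout(phis, ys, gammas, lam=1.0):
    """Recursive least squares over samples that survive transmission.

    gammas[k] = 1 if sample k arrived, 0 if it was dropped (a Bernoulli
    sequence in the paper's model).  A dropped sample simply holds the
    current estimate.  Sketch of the setting, not the paper's algorithm.
    """
    d = phis.shape[1]
    theta = np.zeros(d)
    P = 1e6 * np.eye(d)                        # large initial covariance
    for phi, y, g in zip(phis, ys, gammas):
        if not g:                              # dropout: no update this step
            continue
        K = P @ phi / (lam + phi @ P @ phi)    # gain vector
        theta = theta + K * (y - phi @ theta)
        P = (P - np.outer(K, phi) @ P) / lam
    return theta
```

With noiseless data the estimate still converges to the true parameters as long as enough samples get through, which is what makes the dropout rate rather than the dropout pattern the key quantity.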
Speech Recognition System Based on Deep Neural Network
LI Wei-lin, WEN Jian and MA Wen-kai
Computer Science. 2016, 43 (Z11): 45-49.  doi:10.11896/j.issn.1002-137X.2016.11A.010
Speech recognition is an important subject in the fields of human-computer interaction and pattern recognition. A speech recognition system based on a deep neural network was constructed in this paper. The model is trained without supervision using anti-chirp contrastive divergence and anti-chirp least-squares error, and optimized with mean-value normalization, which improves the network's fit to the training set and reduces the speech-recognition error rate. The system uses a multi-condition activation function for model optimization, further reducing the error rate in both noise-free and noisy tests and mitigating over-fitting. The model is compressed using singular value decomposition and reconstruction. Experimental results show that the system greatly reduces model complexity without affecting the speech-recognition error rate.
Fuzzy BCK-algebras and its Fuzzy Left (Right) Reduced Ideals
PENG Jia-yin
Computer Science. 2016, 43 (Z11): 50-55.  doi:10.11896/j.issn.1002-137X.2016.11A.011
Introducing the concepts of fuzzy spaces and fuzzy binary operations proposed by Dib into BCK-algebras, a new approach to the study of fuzzy BCK-algebras is given. The concepts of fuzzy subalgebras, fuzzy left (right) reduced ideals and fuzzy homomorphisms of fuzzy BCK-algebras are put forward, and a new theory of fuzzy BCK-algebras is preliminarily established. The results show that the classical fuzzy subalgebras and fuzzy left (right) reduced ideals of BCK-algebras are special cases of the new theory, so the new method provides a powerful tool for developing the theory of fuzzy BCK-algebras.
Method on Human Activity Recognition Based on Convolutional Neural Networks
WANG Zhong-min, CAO Hong-jiang and FAN Lin
Computer Science. 2016, 43 (Z11): 56-58, 87.  doi:10.11896/j.issn.1002-137X.2016.11A.012
To improve the accuracy of human activity recognition on intelligent terminals, a recognition method based on a convolutional neural network is proposed. The raw acceleration data are pre-processed and fed directly into the convolutional neural network for local feature analysis. The resulting feature outputs are input directly into a Softmax classifier, which recognizes five activities: walking, running, going downstairs, going upstairs and standing. Experimental comparison shows a recognition rate of 84.8% across different experimenters, demonstrating that the method is effective.
Research on Collaborative Optimization of Supply Chain Inventory Based on Immune Genetic Algorithm
YAN Jun, DING Xin-pei and LIU Yong-rui
Computer Science. 2016, 43 (Z11): 59-62.  doi:10.11896/j.issn.1002-137X.2016.11A.013
With global economic integration, competition between enterprises has essentially shifted from competition between individual enterprises to competition between their supply chains. This new mode of competition urges enterprises to maximize the profit of the whole supply chain. The author explores how inventory can be effectively controlled under collaborative operation of the supply chain by establishing a multi-level supply-chain inventory model and using an improved immune genetic algorithm to solve it. The study takes products P1 and P2 of a solid-wood furniture company in western China as examples, combines the established multi-level inventory cost-optimization model with the corresponding algorithms, and uses MATLAB simulation software to find solutions.
Optimized Approach on Stomach Nourishing Decision Based on PSO-BP Neural Network
ZHANG Lu and LEI Xue-mei
Computer Science. 2016, 43 (Z11): 63-66, 72.  doi:10.11896/j.issn.1002-137X.2016.11A.014
The conventional BP neural network suffers from slow convergence and many local extrema. An optimized approach to stomach-nourishing decisions based on a PSO-BP neural network is proposed: the robust search ability of particle swarm optimization is used to optimize the weights and thresholds of the BP neural network. PSO-BP and BP are compared using error curves and linear regression. The results show that the proposed approach yields more accurate stomach-nourishing decisions and provides better guidance on food selection.
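The PSO stage of such a hybrid can be sketched as a generic minimizer; in PSO-BP the loss would be the BP network's training error over its flattened weights and thresholds. All names and the inertia/acceleration constants below are illustrative choices, not values from the paper:

```python
import numpy as np

def pso_minimize(loss, dim, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimizer (the weight/threshold search stage
    of PSO-BP).  Any callable loss over a dim-dimensional vector works."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()                                 # personal bests
    pbest_f = np.array([loss(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()           # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        f = np.array([loss(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        if pbest_f.min() < loss(gbest):
            gbest = pbest[pbest_f.argmin()].copy()
    return gbest
```

The returned vector would then seed the BP network, whose gradient training refines it locally.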
Study on Cost Sensitive Attribute Reduction for Fuzzy Decision Theoretic Rough Sets
LIU Cai and QIN Liang-xi
Computer Science. 2016, 43 (Z11): 67-72.  doi:10.11896/j.issn.1002-137X.2016.11A.015
Aiming at the cost problem that generally exists in decision-making, on the basis of fuzzy theory and decision-theoretic rough sets, we study cost-sensitive attribute reduction. The total cost, including misclassification cost and test cost, is introduced into attribute reduction for fuzzy decision-theoretic rough sets (FDTRS). The target of reduction is thus not only the size of the positive region but also finding the optimal attribute subset with the minimum total cost. We propose a cost-sensitive attribute reduction algorithm for FDTRS, named COSAR, which uses a heuristic method to search for the optimal subset. We give the procedure of the algorithm and compare its performance with the existing FDTRS attribute reduction algorithm QuickReduct. The experimental results show that COSAR has stronger attribute reduction capability, lower total classification cost and shorter running time than QuickReduct, and as the number of test samples increases, the gap in total classification cost between the two methods grows.
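The general shape of such a heuristic search can be sketched as a greedy loop; the paper's actual heuristic may differ in its ordering and stopping details, and `total_cost(subset)` (misclassification cost plus test cost) is assumed given:

```python
def greedy_cost_reduct(attributes, total_cost):
    """Greedy heuristic in the spirit of COSAR: repeatedly add the attribute
    that most lowers the total (misclassification + test) cost, stopping
    when no single addition helps."""
    subset, best = set(), total_cost(set())
    while True:
        pick = None
        for a in attributes - subset:
            c = total_cost(subset | {a})
            if c < best:
                best, pick = c, a
        if pick is None:                 # no attribute lowers the cost further
            return subset, best
        subset.add(pick)
```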
Estimation of Fetal Weight Based on Deep Neural Network
LI Kun, CHAI Yu-mei, ZHAO Hong-ling, ZHAO Yue-shu and NAN Xiao-fei
Computer Science. 2016, 43 (Z11): 73-76, 82.  doi:10.11896/j.issn.1002-137X.2016.11A.016
Fetal weight is an important indicator of the fetus's growth and development, so its estimation is a crucial foundation of obstetric decision-making. Most traditional fetal-weight prediction models are based on medical knowledge and feature selection, which makes the model-building process hard to repeat and generalize. To address these problems, we propose a deep neural network structure for building a fetal-weight prediction model, and introduce the process of extracting parameters from electronic health records and the filling strategies for missing values. Experimental results show that the deep-neural-network-based model outperforms traditional methods, and that the filling strategy reinforces training and improves accuracy. Finally, the generalization ability and universality of the model can help different regions and hospitals build personalized fetal-weight prediction models.
Uniform Converting Mechanism for Cross-characters Search Engine of Uyghur
Ibrayim·OSMAN and WANG Yue
Computer Science. 2016, 43 (Z11): 77-82.  doi:10.11896/j.issn.1002-137X.2016.11A.017
With the development of web technologies in Xinjiang, more and more websites serve Uyghur users. For historical reasons, the Uyghur language is written in several scripts, such as Uyghur Ereb Yziqi (UEY, Arabic-based), Uyghur Latin Yziqi (ULY) and Uyghur Siril Yziqi (USY, Cyrillic-based). Current Uyghur search engines support only UEY, while ULY and USY are the scripts most commonly used in international communication. Designing a search engine that supports multiple Uyghur scripts is therefore a major challenge for Uyghur information retrieval, and a breakthrough here would also deeply support the Belt and Road Initiative. This paper studies the conversion technologies between UEY, ULY and USY, and proposes corresponding conversion algorithms based on the Unicode coding system. A uniform conversion prototype system is also implemented to retrieve the contents of UEY web pages through ULY and USY. Experiments verify that the methods convert the different Uyghur scripts precisely and smoothly, and searches using ULY or USY reach the same ranking as UEY-based search in the prototype system.
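A Unicode-based conversion of this kind reduces, at its core, to a code-point mapping table applied character by character. The sketch below covers only a handful of letters as an illustration; the converter in the paper covers the full Uyghur alphabet, all three scripts and both directions, including the contextual rules a real transliterator needs:

```python
# A tiny illustrative subset of a UEY -> ULY code-point mapping table.
UEY_TO_ULY = {
    "\u0628": "b",   # ب
    "\u062A": "t",   # ت
    "\u0631": "r",   # ر
    "\u0632": "z",   # ز
    "\u0644": "l",   # ل
    "\u0645": "m",   # م
    "\u0646": "n",   # ن
}

def uey_to_uly(text):
    """Convert UEY text to ULY character by character, passing through
    anything outside the mapping table unchanged."""
    return "".join(UEY_TO_ULY.get(ch, ch) for ch in text)
```

In the prototype, such a converter lets a single UEY index serve queries typed in any of the three scripts.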
Research on Vector Representation of Formula in n-valued Łukasiewicz Propositional Logic System
GAO Xiao-li, HUI Xiao-jing and ZHU Nai-diao
Computer Science. 2016, 43 (Z11): 83-87.  doi:10.11896/j.issn.1002-137X.2016.11A.018
The vector representation of a formula is first given based on the assignment and assignment order of formulas in the n-valued Łukasiewicz propositional logic system. Then, using the assignment and assignment order of formulas, the truth degree of a formula and the definitions of three kinds of similarity degree and a pseudo-metric between two formulas are introduced. Finally, some good properties of the truth degree, similarity degree and pseudo-metric between formulas are discussed in detail.
Uncertain Extension of Description Logic SROIQ(D) for Uncertain Knowledge Representation
CHEN Hui and MA Ya-ping
Computer Science. 2016, 43 (Z11): 88-92, 107.  doi:10.11896/j.issn.1002-137X.2016.11A.019
To enhance the knowledge-representation ability of description logic, an uncertain extension of the description logic SROIQ(D) is proposed based on uncertainty theories. Aimed at the fuzziness, roughness and randomness that widely exist in knowledge representation, formulas for the conditional probability of fuzzy rough concepts are defined as the basis of the extension. Then, based on fuzzy rough logic and probability logic, the syntax, semantics and reasoning tasks of the extended SROIQ(D) are presented. The extended description logic SROIQ(D) can handle all three types of uncertainty.
Spark Based Large-scale Semantic Data Distributed Reasoning Framework
CHEN Heng
Computer Science. 2016, 43 (Z11): 93-96.  doi:10.11896/j.issn.1002-137X.2016.11A.020
With the emergence of large-scale semantic data, efficient parallel semantic reasoning has become a hot topic. Most existing reasoning frameworks still fall short in scalability, making it hard to meet the needs of large-scale semantic data. To solve this problem, a Spark-based distributed reasoning framework for large-scale semantic data is proposed, composed of three modules: semantic modeling, rule extraction and Spark-based parallel reasoning. Process analysis and a reasoning example show that the computing performance of the proposed distributed parallel reasoning (T(n)=O(log2 n)) is far better than that of sequential reasoning (T(n)=O(n)).
t Truth Degree of Formulas and Approximate Reasoning in Gödel n-valued Propositional Logic System
ZHU Nai-diao, HUI Xiao-jing and GAO Xiao-li
Computer Science. 2016, 43 (Z11): 97-102.  doi:10.11896/j.issn.1002-137X.2016.11A.021
By adding new operators Δ and ~, an axiomatic extension of the Gödel n-valued propositional logic system is introduced, denoted Gödel~,Δ. The t truth degree of a propositional formula is defined (with t taken as Δ or ~), and the MP rule, HS rule, meet and union inference rules and related properties of the t truth degree are discussed. The concepts of t similarity degree and t pseudo-metric between propositional formulas and their related properties are obtained. Three types of approximate reasoning patterns are introduced in the logic metric space, and they are proved to be equivalent.
Adaptive Fireworks Explosion Optimization Algorithm Using Opposition-based Learning
WANG Li-ping and XIE Cheng-wang
Computer Science. 2016, 43 (Z11): 103-107.  doi:10.11896/j.issn.1002-137X.2016.11A.022
The basic fireworks explosion algorithm (FEA) has insufficient global optimization ability, which easily leads to premature convergence. In this paper, an opposition-based learning mechanism is introduced into FEA to generate an opposition population, expanding the scope of the algorithm's exploration. In addition, an adaptive explosion radius is assigned to each individual based on its fitness value. The two strategies are integrated into FEA to form an adaptive fireworks explosion algorithm using opposition-based learning (AFEAOL). AFEAOL is compared with four other swarm intelligence algorithms on twelve classic test instances, and the experimental results demonstrate that it has a significant performance advantage over its peer algorithms.
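The opposition-based learning step is standard and easy to sketch: the opposite of a candidate x in a box [lb, ub] is lb + ub - x, and the next population keeps the best half of the union of the current population and its opposition. A minimal sketch (names and the minimization convention are ours):

```python
import numpy as np

def opposition_population(pop, lb, ub):
    """Opposition-based learning: the opposite of candidate x in [lb, ub]
    is lb + ub - x, evaluated alongside x to widen exploration."""
    return lb + ub - pop

def select_with_opposition(pop, lb, ub, fitness):
    """Keep the best half (minimization) of a population united with its
    opposition population — the OBL step added to FEA in AFEAOL (sketch)."""
    union = np.vstack([pop, opposition_population(pop, lb, ub)])
    scores = np.array([fitness(x) for x in union])
    return union[np.argsort(scores)[: len(pop)]]
```

Because the union always contains the original population, the selected set can never be worse than the population it replaces.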
Research on Technique of Course Ontology Automatically Constructing
TONG Ming-wen, NIU Lin, YANG Lin, ZOU Jun-hua and SHANG Chao-wang
Computer Science. 2016, 43 (Z11): 108-112.  doi:10.11896/j.issn.1002-137X.2016.11A.023
Course ontology, widely used in intelligent learning systems, is one of the important technologies for organizing course knowledge. To overcome the problems of constructing course ontology manually, a novel technique is proposed to construct it automatically. The technique employs web spiders, Chinese word segmentation and relation-rule mining to extract the concepts and relations of the course ontology from diverse web course resources. Experimental evaluation shows that the technique can construct course ontology automatically with high quality and efficiency.
Improvement and Simulation Application Based on Standard Firefly Algorithm
ZANG Rui and LI Hui-hui
Computer Science. 2016, 43 (Z11): 113-116, 132.  doi:10.11896/j.issn.1002-137X.2016.11A.024
Studying the firefly algorithm, a kind of intelligent optimization algorithm, this paper improves the standard updating formula by introducing a new adaptive inertia weight to increase convergence speed, and uses a virtual firefly to enhance cooperation and information exchange between fireflies. For the algorithm's boundary-crossing and premature-boundary problems, a symmetric boundary mutation is introduced to improve the optimization rate. Experiments on six standard test functions show that both the effectiveness and the convergence speed of the improved firefly algorithm are improved. Finally, the algorithm is applied to two classical engineering optimization problems; the experimental results confirm the superiority of the improved algorithm and verify its applicability.
Multi-objective Evolutionary Algorithm Based Weight Vectors Generation Method of MOEA/D
MA Qing
Computer Science. 2016, 43 (Z11): 117-122, 160.  doi:10.11896/j.issn.1002-137X.2016.11A.025
In the evolutionary multi-objective optimization (EMO) community, multi-objective optimization refers to the simultaneous optimization of more than one objective, and it has gained increasing attention in recent years. Since the introduction of MOEA/D, aggregation-based multi-objective evolutionary algorithms have received more and more research, and there have been many improvements to MOEA/D. However, little research has addressed the generation of weight vectors for MOEA/D. This paper proposes a method that uses MOEAs to generate any number of well-distributed weight vectors, which are then applied to MOEA/D, MSOPS and NSGA-III. The three aggregation-based algorithms are comprehensively compared on the DTLZ test suite, the multi-objective TSP and a rectangle test problem, in order to study their optimization ability on continuous and combinatorial problems and to allow visual observation in the decision space. The experimental results show that no algorithm can solve problems with all different properties, but MOEA/D_Tchebycheff and MOEA/D_PBI outperform MSOPS and NSGA-III in most cases.
Research Advance of Facial Expression Recognition
HUANG Jian, LI Wen-shu and GAO Yu-juan
Computer Science. 2016, 43 (Z11): 123-126.  doi:10.11896/j.issn.1002-137X.2016.11A.026
Facial expression recognition (FER) is an active research topic in the fields of computer vision, machine learning and artificial intelligence. This paper introduces the FER system pipeline, summarizes common methods of facial expression feature extraction and classification as well as the improvements to these methods proposed by domestic and foreign scholars in recent years, and compares their advantages and disadvantages. Finally, the current difficult problems of FER research are analyzed and future development directions are presented.
Research on Application of Face Recognition in Area of Public Security
XIAO Jun
Computer Science. 2016, 43 (Z11): 127-132.  doi:10.11896/j.issn.1002-137X.2016.11A.027
Face recognition has achieved certain results in public-security practice, using methods such as principal component analysis, projection drawing and characterization matching, facial-symmetry restoration, 3D face similarity evaluation within iso-geodesic regions, and hidden Markov models. Some problems have also been exposed more clearly, including inherent flaws of some algorithms being magnified, the gap between theoretical research and practical use, and the lack of specialized technical personnel for the research, development and operation of face-recognition systems. To build a complete and advanced public-security system based on face recognition, the following tasks should be undertaken: developing algorithms to increase the recognition rate, combining face recognition with other biometric technologies to enhance stringency, improving hardware conditions to provide support, promoting the convergence of theoretical research and practical application, and training specialized technical personnel while improving the expertise and operational skills of the police officers involved.
Research on Importance of Image Mosaic Technology
YANG Cheng, XU Xiao-gang and WANG Jian-guo
Computer Science. 2016, 43 (Z11): 133-135.  doi:10.11896/j.issn.1002-137X.2016.11A.028
Image registration is the most critical step of image mosaicking and directly determines the quality of the mosaic result. This paper reviews current work on image registration and introduces region-based and feature-based registration methods. We analyze the advantages and shortcomings of the different kinds of image registration, and point out current problems and some promising development directions.
Survey and Prospect of Collision Detection Based on Virtual Assembly Environment
PAN Ren-yu, SUN Chang-le, XIONG Wei and WANG Hai-tao
Computer Science. 2016, 43 (Z11): 136-139.  doi:10.11896/j.issn.1002-137X.2016.11A.029
A virtual assembly system can simulate electromechanical products to generate assembly sequences and assembly tracks, and collision detection technology is used to verify their correctness. Collision detection algorithms in a virtual assembly environment mainly include time-based algorithms, geometric-space-based algorithms and image-space-based algorithms. A survey of the present state of these algorithms was presented. Finally, the current problems, future research directions and difficulties of collision detection algorithms were discussed and analyzed based on the survey.
Data Acquisition and Pre-processing Based on Light Field Photography
ZHAO Qing-qing, ZHANG Tao and ZHENG Wei-bo
Computer Science. 2016, 43 (Z11): 140-143, 182.  doi:10.11896/j.issn.1002-137X.2016.11A.030
Abstract PDF(2599KB) ( 105 )   
References | Related Articles | Metrics
The light field is a representation of the full four-dimensional radiance of all rays, with spatial and angular information, in free space, and the capture of light field data enables many new possibilities for computational imaging. A light field camera can obtain 4D light field information, capturing two more dimensions than a traditional camera. On the market, the Lytro Illum camera offers better image quality than the Lytro 1.0. This paper used the Lytro Illum to capture 4D light field information, extracting the valid light field data from the camera's data files. Light field image data at 8 bits per pixel can be obtained by decoding and inversion. Then, through a series of processing steps based on the microlens model, a 4D light field data array and many sub-aperture pictures can be obtained, laying a good foundation for the development and application of light field data.
1 Bit Compressed Sensing Reconstruction Algorithm Based on Block Sparse
XIONG Jie, CHEN Hao and YAN Bin
Computer Science. 2016, 43 (Z11): 144-146.  doi:10.11896/j.issn.1002-137X.2016.11A.031
Abstract PDF(256KB) ( 38 )   
References | Related Articles | Metrics
As a typical sparse signal, the block sparse signal is widely used in compressed sensing reconstruction. However, ordinary reconstruction algorithms cannot exploit its internal structure, which decreases reconstruction accuracy. Based on this observation, and considering that ordinary 1-bit compressed sensing reconstruction algorithms perform poorly on block sparse signals, we proposed a specific reconstruction algorithm for block signal reconstruction. In this algorithm, each block is a reconstruction unit, and reconstruction is executed within the binary iterative hard thresholding algorithm model. Numerical experiments show that, compared with the BIHT algorithm, the precision of the BLOCK-BIHT algorithm increases by 3dB.
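The binary iterative hard thresholding (BIHT) baseline the abstract compares against can be sketched as follows; this is a minimal illustrative implementation (not the paper's BLOCK-BIHT), with all dimensions, step size and iteration count chosen arbitrarily for the example.

```python
import numpy as np

def biht(y, A, k, iters=50, tau=0.1):
    """Recover a k-sparse unit-norm signal from 1-bit measurements y = sign(Ax)."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        # Gradient step on the one-sided sign-consistency objective
        x = x + (tau / m) * A.T @ (y - np.sign(A @ x))
        # Hard thresholding: keep only the k largest-magnitude entries
        idx = np.argsort(np.abs(x))[:-k]
        x[idx] = 0.0
    norm = np.linalg.norm(x)
    return x / norm if norm > 0 else x

rng = np.random.default_rng(0)
n, m, k = 64, 256, 4
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_true /= np.linalg.norm(x_true)          # 1-bit CS recovers direction only
A = rng.standard_normal((m, n))
y = np.sign(A @ x_true)                   # 1-bit measurements
x_hat = biht(y, A, k)
```

A block-sparse variant would replace the per-entry thresholding with selection of whole blocks by block energy, which is the structural prior the paper exploits.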
Face Recognition Algorithm Based on Fusion of Optimized Wavelet Transform and Improved LDA
CHU Jian-pu, HE Guang-hui and LIU Yu-xin
Computer Science. 2016, 43 (Z11): 147-150, 166.  doi:10.11896/j.issn.1002-137X.2016.11A.032
Abstract PDF(584KB) ( 42 )   
References | Related Articles | Metrics
A face recognition algorithm based on the fusion of optimized wavelet transform and improved LDA (OWT+ILDA) was proposed. First, we extracted features from the preprocessed face images through a 2-level wavelet transform, and an alternating direction method was used to solve the projection matrix and the corresponding optimal fusion coefficients of the high-frequency wavelet sub-bands, which are fused by the improved LDA. Then we combined the low-frequency and high-frequency representations. Finally, the nearest neighbor classifier was used to perform face classification. Experiments were carried out on the ORL and YALE face databases, which indicate that the method is more effective than other traditional methods.
Method of Human Activity Recognition Based on Feature Enhancement and Decision Fusion
HUAN Ruo-hong and CHEN Yue
Computer Science. 2016, 43 (Z11): 151-155.  doi:10.11896/j.issn.1002-137X.2016.11A.033
Abstract PDF(279KB) ( 59 )   
References | Related Articles | Metrics
Activity recognition via a tri-axial accelerometer has been a research focus in the fields of sensor data processing and pattern recognition. In some cases it is difficult to distinguish similar acceleration data, especially the data of walking, ascending stairs and descending stairs, which makes it difficult to recognize these three activities correctly. In this paper, a method of activity recognition based on feature enhancement and decision fusion was proposed for recognizing similar activities such as walking, ascending stairs and descending stairs; it is implemented by enhancing a subset of features and fusing several classification results at the decision level. Experimental results show that the method can overcome the low recognition rate and high recognition error caused by the similarity of the acceleration data, effectively improving the correct recognition rate of human activity and distinguishing human activities in real time in actual applications.
3D-volume Reconstruction of Medical Images Based on Ray-Casting Algorithm
ZHOU Juan
Computer Science. 2016, 43 (Z11): 156-160.  doi:10.11896/j.issn.1002-137X.2016.11A.034
Abstract PDF(577KB) ( 73 )   
References | Related Articles | Metrics
3D-volume reconstruction of medical images is studied: the basic workflow of volume rendering and the theory of its optical model are analysed; in particular, the model and theory of the ray casting algorithm are examined in depth, and the critical technologies of color evaluation, point resampling and picture synthesis are discussed. Finally, the volume rendering function for medical images is realized on the MITK platform. The 3D volume model rendered by the algorithm can be interactively operated with respect to its photo-illumination attributes, surface attributes and environmental parameters. The system can execute normal plane incision and dynamic viewing of the plane; furthermore, it can rotate, zoom and shift the 3D volume model.
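The picture-synthesis step of ray casting is standard front-to-back alpha compositing along each ray. The following is a minimal sketch of that compositing loop for a single ray (not the MITK implementation); the sample colors and opacities are made-up values.

```python
import numpy as np

def composite_ray(colors, alphas):
    """Front-to-back compositing: colors is (n,3), alphas is (n,) along one ray."""
    out_color = np.zeros(3)
    out_alpha = 0.0
    for c, a in zip(colors, alphas):
        # Each sample contributes only through the remaining transparency
        out_color += (1.0 - out_alpha) * a * np.asarray(c, dtype=float)
        out_alpha += (1.0 - out_alpha) * a
        if out_alpha > 0.99:          # early ray termination
            break
    return out_color, out_alpha

# Two resampled points along a ray: half-opaque red, then half-opaque green
colors = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
alphas = np.array([0.5, 0.5])
c, a = composite_ray(colors, alphas)
```

The color evaluation (transfer function) step would map resampled scalar values to these per-sample colors and opacities before compositing.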
Integrating Fast Sparse Representation and Collaborative Representation for Face Recognition
LIU Zi-yuan, JIANG Yan-xia and WU Teng-fei
Computer Science. 2016, 43 (Z11): 161-166.  doi:10.11896/j.issn.1002-137X.2016.11A.035
Abstract PDF(814KB) ( 49 )   
References | Related Articles | Metrics
On the basis of compressed sensing theory, fast sparse representation classification (FSRC) and collaborative representation classification (CRC) have been proposed, but their different emphases restrict further improvement in face recognition. Focusing on this, this paper proposed an improved method that integrates fast sparse representation and collaborative representation. Firstly, the mirror images of the faces are introduced into the sample library. Then, the residual matrices are solved with FSRC and CRC. Finally, the residual matrices are combined by weighted fusion, and the class label is determined from the position of the minimum value. Experiments on different face databases show that the proposed method achieves better recognition performance than FSRC, CRC and other methods.
Applied Analysis of Image Accelerating Distortion Correction of OpenCL Technology on Heterogeneous Platform
WEI Bo-wen, LI Tao, LI Guang-yu, WANG Zhi-heng, HE Mu, SHI Yue-ling, LIU Lu-yao and ZHANG Rui
Computer Science. 2016, 43 (Z11): 167-169, 196.  doi:10.11896/j.issn.1002-137X.2016.11A.036
Abstract PDF(686KB) ( 62 )   
References | Related Articles | Metrics
To address the low processing efficiency and computational bottlenecks of massive remote sensing data, and to provide a more efficient approach to remote sensing big data processing, OpenCL parallel processing technology can be used to accelerate remote sensing image processing based on general-purpose GPU programs. Analyses of test results on heterogeneous GPU platforms indicate that OpenCL technology can dramatically speed up distortion calibration of images, achieving a best speedup of 29.1 times. A comparison with CUDA parallel processing suggests that using OpenCL technology on heterogeneous platforms has a general advantage.
Research and Realization of Palmprint ROI Segmentation Algorithm
ZHANG Xiu-feng, ZHANG Zhen-lin and XIE Hong
Computer Science. 2016, 43 (Z11): 170-173.  doi:10.11896/j.issn.1002-137X.2016.11A.037
Abstract PDF(2110KB) ( 58 )   
References | Related Articles | Metrics
Palmprint region of interest (ROI) segmentation is a key step in palmprint recognition. The significant problem in recent research is that the fixed reference points are not easy to identify, and the deviation when extracting the ROI of similar images is large. We therefore proposed a new ROI segmentation algorithm to solve this problem. The two valley points of the palm are determined first, then a fitted straight line is obtained by using a special area of the palm outline. This paper presented algorithms to find the information-rich areas of the palmprint by using the two points, and a rectangular coordinate system is set up based on the fitted line. After extracting the palmprint ROI, the feature vector is matched and recognized. Experimental results show that this algorithm can find the ROI area more accurately and rapidly, and the segmentation shift for the same image is extremely small. The ROI extraction rate is up to 98.2%, and the correct palm identification rate is increased by about 3%. The algorithm thus provides a basis for developing authentication systems based on palmprint recognition.
Rapid Calculation of Forest Growth Model Based on Reciprocity
DONG Tian-yang, WANG Hao, CHEN Qiao-hong and YU Jiao-hong
Computer Science. 2016, 43 (Z11): 174-178.  doi:10.11896/j.issn.1002-137X.2016.11A.038
Abstract PDF(985KB) ( 67 )   
References | Related Articles | Metrics
Research on the reciprocity between plants can strengthen the construction of mixed forests and thus improve forest productivity. In order to simulate the growth of plants under reciprocity, this paper presented a rapid method of forest scene visualization. The method considers the plant benefit index, growth rate, plant biomass, current environmental conditions and other parameters. It uses a pixel-based method for the rapid calculation of the Lotka-Volterra model and the individual plant growth model, and simulates plant growth in 3D space according to the dynamic changes. Through simulating a mixed forest of ash and larch, the experimental results demonstrate that the approach to simulating plant growth based on reciprocity is effective and practical.
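The Lotka-Volterra competition model named above describes two interacting populations with logistic growth and cross-species competition coefficients. A minimal sketch of its numerical integration follows; every parameter value (growth rates, carrying capacities, competition coefficients) is invented for illustration and is not from the paper.

```python
import numpy as np

def lotka_volterra(n1, n2, r=(0.5, 0.4), K=(100.0, 80.0),
                   a12=0.6, a21=0.5, dt=0.1, steps=2000):
    """Euler integration of the two-species competitive Lotka-Volterra model."""
    for _ in range(steps):
        # Logistic growth reduced by the competing species' effective load
        dn1 = r[0] * n1 * (1 - (n1 + a12 * n2) / K[0])
        dn2 = r[1] * n2 * (1 - (n2 + a21 * n1) / K[1])
        n1 += dt * dn1
        n2 += dt * dn2
    return n1, n2

# Two small initial populations converging to the coexistence equilibrium
n1, n2 = lotka_volterra(5.0, 5.0)
```

With a12*a21 < 1, the two species settle at the coexistence equilibrium solving n1 + a12*n2 = K1 and n2 + a21*n1 = K2 (about 74.3 and 42.9 here).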
Fingerprint Classification Approach Based on Orientation Descriptor
ZHU Zhi-dan, MA Tin-huai and MEI Yuan
Computer Science. 2016, 43 (Z11): 179-182.  doi:10.11896/j.issn.1002-137X.2016.11A.039
Abstract PDF(611KB) ( 45 )   
References | Related Articles | Metrics
Fingerprint classification, which aims to reduce the number of comparisons required in a large fingerprint database by classifying fingerprints into predefined classes, is a significant technique in fingerprint identification systems. Inspired by the existing literature, a new fingerprint classification algorithm named the large-scale orientation field descriptor was proposed in this paper. The algorithm describes the approximate orientation pattern near the core point by extracting, as a feature vector, the directions of the nodes belonging to a large-scale annular mesh structure surrounding the core point. Owing to the simple and efficient feature extraction, experiments show that, compared with FingerCode, the proposed method achieves similar classification accuracy at 20 times the computation speed.
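The basic quantity such an orientation descriptor samples is the block-wise ridge orientation field, conventionally estimated from image gradients. A sketch of that standard estimator follows (illustrative, not the paper's descriptor; the synthetic stripe image stands in for a fingerprint).

```python
import numpy as np

def orientation_field(img, block=8):
    """Dominant gradient angle per block via the squared-gradient method."""
    gy, gx = np.gradient(img.astype(float))
    h, w = img.shape
    theta = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            sl = np.s_[i * block:(i + 1) * block, j * block:(j + 1) * block]
            gxx = (gx[sl] ** 2 - gy[sl] ** 2).sum()
            gxy = (2 * gx[sl] * gy[sl]).sum()
            theta[i, j] = 0.5 * np.arctan2(gxy, gxx)   # dominant angle per block
    return theta

# Synthetic horizontal ridges: gradient points vertically, so |angle| = pi/2
xx, yy = np.meshgrid(np.arange(64), np.arange(64))
stripes = np.sin(2 * np.pi * yy / 8.0)
theta = orientation_field(stripes)
```

An annular-mesh descriptor would then read out these block orientations at nodes arranged in rings around the detected core point.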
Visualization Research of ECG Sequence in Sinus Arrhythmia
LENG Li-hua and ZHENG Zhi-jie
Computer Science. 2016, 43 (Z11): 183-185.  doi:10.11896/j.issn.1002-137X.2016.11A.040
Abstract PDF(854KB) ( 53 )   
References | Related Articles | Metrics
The relationship between ECG sequences and cardiovascular disease is a classical topic in the study of the clinical diagnosis of heart disease. The ECG is an important tool for detecting heart disease, and large volumes of data have been collected over the long term, so processing them has practical significance. In this paper, we used a variable-value ECG measurement system to process two types of ECG data, T-wave changes in sinus rhythm and normal ECG sequences, to form 2D scatter maps. The distribution characteristics, similarities and differences of the two types of ECG signals are displayed in visual form. Compared with the traditional ECG, this approach is intuitive and easy to understand, and the visualization results of ECG sequences with different measurement variables are also listed in this paper.
Robust Moving Object Foreground Extraction Approach to Illumination Change
YANG Biao, NI Rong-rong and JANG Da-Peng
Computer Science. 2016, 43 (Z11): 186-189, 192.  doi:10.11896/j.issn.1002-137X.2016.11A.041
Abstract PDF(1038KB) ( 61 )   
References | Related Articles | Metrics
Foreground extraction of moving objects is the foundation for further analysis. An almost complete object foreground can be obtained by RPCA (robust principal component analysis) decomposition; however, this approach is sensitive to illumination change. The robustness of RPCA-based foreground extraction can be increased by exploiting the fact that the a and b channels of the Lab color space are not sensitive to illumination change. Initially, the sparse foregrounds of the L, a and b channels are calculated by RPCA decomposition respectively. Then Otsu thresholds are employed for binary foreground segmentation of each channel, and seed filling is utilized to fuse the different foregrounds. Finally, accurate moving object foregrounds are extracted after improving the fusion results with morphological filtering. The experimental results indicate that the proposed method can accurately extract object foregrounds in complex environments while handling illumination change effectively.
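The per-channel binarization step uses Otsu's threshold, which maximizes the between-class variance of the intensity histogram. A minimal self-contained sketch (not the paper's pipeline) on a synthetic bimodal image:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing between-class variance for a uint8 image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()      # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0   # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2             # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal test image: dark background with one bright foreground blob
img = np.full((32, 32), 30, dtype=np.uint8)
img[8:24, 8:24] = 200
t = otsu_threshold(img)
mask = img >= t                                      # binary foreground
```

In the described pipeline, this binarization would run on each channel's sparse RPCA component before the seed-filling fusion step.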
GMRES Algorithm to Solve Navier-Stokes Equation of Smoke Simulation
LI Xiu-chang, DUAN Jin, ZHU Yong and XIAO Bo1
Computer Science. 2016, 43 (Z11): 190-192.  doi:10.11896/j.issn.1002-137X.2016.11A.042
Abstract PDF(380KB) ( 70 )   
References | Related Articles | Metrics
Smoke plays an important role in large-scale battlefield simulation and complex environment simulation, so smoke simulation is of great significance. A generalized minimal residual (GMRES) algorithm for solving the Navier-Stokes equation in smoke simulation was presented. The calculation principle of the GMRES algorithm is given first. Secondly, the GMRES algorithm is applied to solve the Navier-Stokes equation for smoke simulation, and the convergence of the solution is analyzed; the analysis shows that the results are convergent. Finally, the smoke simulation is visualized by computer, and the simulation results show that the smoke effect produced by the algorithm is realistic and basically in line with real smoke.
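GMRES itself is an Arnoldi-based Krylov method: it builds an orthonormal Krylov basis and solves a small least-squares problem for the residual-minimizing iterate. A bare-bones sketch follows (illustrative only; a fluid solver would apply this to the discretized pressure/velocity system, and the test matrix here is arbitrary).

```python
import numpy as np

def gmres(A, b, m=30, tol=1e-8):
    """Minimal full GMRES: Arnoldi with modified Gram-Schmidt + least squares."""
    n = b.size
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    y = np.zeros(1)
    for j in range(m):
        v = A @ Q[:, j]                          # expand the Krylov subspace
        for i in range(j + 1):                   # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        # Solve the small problem min ||beta*e1 - H y|| for the best iterate
        e1 = np.zeros(j + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        res = np.linalg.norm(e1 - H[:j + 2, :j + 1] @ y)
        if H[j + 1, j] < 1e-14 or res < tol * beta:
            return Q[:, :j + 1] @ y              # converged (or breakdown)
        Q[:, j + 1] = v / H[j + 1, j]
    return Q[:, :m] @ y

rng = np.random.default_rng(1)
A = np.diag(np.arange(1.0, 21.0)) + 0.01 * rng.standard_normal((20, 20))
b = rng.standard_normal(20)
x = gmres(A, b, m=25)
relres = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
```

In practice a restarted, preconditioned library routine (e.g. SciPy's `scipy.sparse.linalg.gmres`) would be used on the large sparse systems arising from the Navier-Stokes discretization.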
Level Set Medical Image Segmentation Method Combining Watershed Algorithm
ZHANG Hui, ZHU Jia-ming, CHEN Jing and WU Jie
Computer Science. 2016, 43 (Z11): 193-196.  doi:10.11896/j.issn.1002-137X.2016.11A.043
Abstract PDF(1084KB) ( 44 )   
References | Related Articles | Metrics
Complex targets in medical images are usually difficult to segment completely, so an image segmentation algorithm combining a modified Li model with the marker-based watershed algorithm was proposed. In the modified Li model, a signed pressure function replaces the traditional stopping function, solving the problem of unidirectional curve evolution. The marker-based watershed algorithm has a strong ability both to suppress noise and to capture the weak edges of medical images. Firstly, the marker-based watershed algorithm is used for segmentation pretreatment, locating the target edges quickly and accurately. Then the modified Li model is introduced, and the signed pressure function is used to guide the direction of curve evolution and control the evolution speed, realizing full segmentation of complex objects. The experimental results show that both global information and edge information can be obtained, and the combined algorithm achieves satisfactory results on complex targets in medical image segmentation.
Leather Image Color Classification Method Based on Texture Removing
ZHENG Hong-bo, CHEN Yu, ZHAO Hai, QIN Xu-jia and ZHANG Mei-yu
Computer Science. 2016, 43 (Z11): 197-200, 228.  doi:10.11896/j.issn.1002-137X.2016.11A.044
Abstract PDF(693KB) ( 74 )   
References | Related Articles | Metrics
In leather images, natural texture and the concavo-convex structure of leather make the image brightness change markedly, which affects the accuracy of color classification. A leather image color classification method based on texture removal was proposed. A relative total variation model was used to remove the texture of the leather image, and the average color components (L*a*b*) were used to represent color features, based on the L*a*b* color space's strong ability to distinguish colors. Finally, a support vector machine was used to classify the leather images. Experiments show that the method can distinguish leather colors accurately and achieve leather image color classification, and that the method is feasible.
Kernelized Correlation Filters Tracking with Scale Compensation
ZHANG Run-dong and ZHANG Feng-yuan
Computer Science. 2016, 43 (Z11): 201-204, 214.  doi:10.11896/j.issn.1002-137X.2016.11A.045
Abstract PDF(1552KB) ( 42 )   
References | Related Articles | Metrics
The kernelized correlation filter is a very practical tracking algorithm: it is simple and requires only one dense sampling in the next frame for tracking. However, its applicability is insufficient when the tracked object undergoes scale changes. In this paper, we improved the original algorithm with a kernelized correlation filter tracking algorithm with scale compensation. Firstly, we used a point tracking compensation mechanism to compensate for scale and shift changes. Secondly, we modeled the target with features extracted by compressed sensing and re-detected the tracked object in key frames to correct errors in the scale estimation. The algorithm was tested on a standard tracking benchmark and compared with the original tracking algorithm in terms of center error and overlap rate. The results show that the proposed tracking algorithm with scale compensation improves accuracy and practicality on videos with scale-change attributes.
Image Retrieval of Global and Personalized ROI Adjustment of Features
DUAN Na and WANG Lei
Computer Science. 2016, 43 (Z11): 205-207, 246.  doi:10.11896/j.issn.1002-137X.2016.11A.046
Abstract PDF(288KB) ( 67 )   
References | Related Articles | Metrics
The key to personalized image retrieval in the traffic domain is to capture checkpoint information about key monitored vehicles through their personalized features. Current image retrieval algorithms include text-based, content-based and semantic-based image retrieval. Based on the requirements of image retrieval in the traffic field, a new image retrieval algorithm based on global and personalized region-of-interest features was presented. To get accurate search results, we used a traffic image database to search for, verify and filter the accurate personalized features. The experiments demonstrate that this method addresses the problems of CNN features, such as their low ability to describe personalized features and their time cost. The method has a strong ability to describe personalized features; both the image retrieval rate and the average accuracy rate reach 90%, showing a better retrieval effect, fast calculation speed, strong robustness and practicality.
Image Denoising Model via Weighted Sparse Representation and Dictionary Learning
SUN Shao-chao
Computer Science. 2016, 43 (Z11): 208-209, 236.  doi:10.11896/j.issn.1002-137X.2016.11A.047
Abstract PDF(750KB) ( 49 )   
References | Related Articles | Metrics
A GMM is trained on natural image patches; the covariance matrix of each Gaussian component is used to form a sub-dictionary, and its eigenvalues are used to weight the sparse coefficients. A novel model was proposed by introducing the GMM into the CSR model, and its optimization algorithm was given. Experimental results show that our method has advantages compared with some state-of-the-art models.
Medical Image Super Resolution Reconstruction Based on Adaptive Patch Clustering
SONG Jing-qi, LIU Hui and ZHANG Cai-ming
Computer Science. 2016, 43 (Z11): 210-214.  doi:10.11896/j.issn.1002-137X.2016.11A.048
Abstract PDF(1954KB) ( 53 )   
References | Related Articles | Metrics
Medical images, e.g. computed tomography (CT), magnetic resonance imaging (MRI) and positron emission tomography (PET), are of great significance in the diagnosis and treatment of many diseases. However, restricted by equipment resolution and radiation dosage, the low resolution of medical images is likely to adversely affect the final diagnosis and treatment. Aiming at this problem, a medical image super-resolution reconstruction algorithm based on adaptive patch clustering was proposed. Firstly, a set of image patches at different scales, adaptive to gray-level consistency, is obtained by quad-tree decomposition of the images. Then the algorithm extracts features of these image patches and clusters the patches into many centers at different scales. Finally, the centers at the different scales are used to reconstruct a high-resolution image according to the clustering centers and the corresponding regression coefficients. The experimental results show that the new method performs better in medical image reconstruction in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).
Specific Two Words Chinese Lexical Recognition Based on Broadband and Narrowband Spectrogram Feature Fusion with Zoning Projection
WEI Ying, WANG Shuang-wei, PAN Di, ZHANG Ling, XU Ting-fa and LIANG Shi-li
Computer Science. 2016, 43 (Z11): 215-219, 232.  doi:10.11896/j.issn.1002-137X.2016.11A.049
Abstract PDF(289KB) ( 65 )   
References | Related Articles | Metrics
A method for recognizing specific two-word Chinese vocabulary based on broadband and narrowband spectrogram fusion with zoning projection was presented. In the feature extraction process, image processing techniques are applied to the speech recognition field. Firstly, equal-width zoned line projection and binary-width zoned line projection are carried out on the narrowband spectrogram, forming the first and second feature sets respectively. Meanwhile, equal-width zoned line projection is carried out on the narrowband spectrogram after a Fourier transform, forming the third feature set. Then, equal-width column projection is carried out on the broadband spectrogram, forming the fourth feature set. The above four feature sets are used as feature vectors, with a support vector machine (SVM) as the classifier, for the whole-word recognition of specific two-word Chinese vocabulary. 1000 voice samples were used in the simulation experiment. The results show that the correct recognition rate using the first three feature sets is 92.4%, the rate using the fourth feature set alone is 80%, and the rate using the fusion of all four feature sets reaches 95.4%. This feature fusion method provides a new way of thinking about whole-word Chinese vocabulary recognition.
Random-valued Impulse Noise Detection Based on Pixel-valued Density and Four Directions
GUO Yuan-hua and ZHOU Xian-lin
Computer Science. 2016, 43 (Z11): 220-222.  doi:10.11896/j.issn.1002-137X.2016.11A.050
Abstract PDF(962KB) ( 52 )   
References | Related Articles | Metrics
Improving correct detection while decreasing missed detections and false alarms is a challenge in random-valued impulse noise detection. A two-stage noise detection algorithm was proposed. In the first stage, pixels are flagged as noisy when their pixel-value density is less than a certain threshold; this stage comprises five iterations, in each of which the detected noise pixels are removed by median filtering and the filtered image is the input for the next iteration. In the second stage, noisy pixels are detected along four directions, with an adaptive threshold based on the MAD. The test images were 512×512 Lena and Boat corrupted by random-valued impulse noise at densities from 10% to 50%. Simulations indicate that as the noise density increases, the false alarm rate remains low and the miss rate remains close to the theoretical lower level.
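The iterate-detect-filter loop of the first stage can be sketched as follows. This is only an illustration of the idea: the paper's pixel-value-density test is replaced here by a simple deviation-from-median threshold, and the 3x3 median filter, iteration count and threshold are all assumptions.

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter with edge padding, built from shifted views."""
    p = np.pad(img.astype(float), 1, mode='edge')
    stack = [p[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0)

def detect_impulse(img, iters=5, thresh=40.0):
    """Iteratively flag pixels far from the local median and replace them."""
    work = img.astype(float)
    noisy = np.zeros(img.shape, dtype=bool)
    for _ in range(iters):
        med = median_filter3(work)
        flagged = np.abs(work - med) > thresh
        work[flagged] = med[flagged]      # filtered image feeds the next pass
        noisy |= flagged
    return noisy, work

clean = np.full((20, 20), 100, dtype=np.uint8)
noisy_img = clean.copy()
noisy_img[5, 5] = 250                     # a single random-valued impulse
mask, restored = detect_impulse(noisy_img)
```

The paper's second stage would then re-examine the surviving pixels along four directions with a MAD-based adaptive threshold.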
Remote Sensing Images Fusion Method Coupling Contourlet Transform with Particle Swarm Optimization
GU Zhi-peng and HE Xin-guang
Computer Science. 2016, 43 (Z11): 223-228.  doi:10.11896/j.issn.1002-137X.2016.11A.051
Abstract PDF(1642KB) ( 47 )   
References | Related Articles | Metrics
In order to effectively balance the retention of multi-spectral characteristics and the preservation of spatial information in the fused image, we presented a remote sensing image fusion method coupling the contourlet transform and particle swarm optimization (PSO). The optimized fused image is achieved by setting PSO fitness functions that depend on objective evaluation indexes of the fused image; the optimal weighting coefficient of the lowpass sub-band and the best threshold for the regional structure similarity of the highpass sub-bands are obtained adaptively by PSO. Firstly, the panchromatic image and the I component of the multispectral image are each decomposed using the contourlet transform. For the lowpass sub-band, the difference between entropy and relative deviation is taken as the PSO fitness function to adaptively find the best weighting coefficient. Meanwhile, for the highpass sub-bands, the structure similarity is taken as the PSO fitness function to search for the best threshold p, and fusion rules based on regional structure similarity are used to fuse the images. Finally, the fused image is reconstructed by the inverse contourlet and IHS transforms. The simulation results show that the proposed algorithm can effectively preserve the spectral and spatial information of the original images.
Study of Image Registration Algorithm Based on Genetic Algorithm
XU Ying-qiang, SHI Qing-hua, QU Yong-dong, WANG Cui-zhi, ZHU Pan and YE Yu
Computer Science. 2016, 43 (Z11): 229-232.  doi:10.11896/j.issn.1002-137X.2016.11A.052
Abstract PDF(431KB) ( 54 )   
References | Related Articles | Metrics
Focusing on the problem of image position registration, an image registration algorithm based on a genetic algorithm was proposed. Appropriate parameters are selected to characterize the degree of similarity between the two images to be registered; the genetic parameters are optimized by comparing test results for different population sizes, numbers of generations and computation times; and the algorithm is finally tested through MATLAB programming. Results show that the algorithm has better convergence, computing speed, accuracy and practicality than the conventional traversal method.
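The idea of evolving registration parameters against a similarity measure can be sketched with a toy genetic algorithm that searches a 2-D translation maximizing normalized cross-correlation. Everything here (NumPy instead of MATLAB, population size, mutation scheme, the Gaussian test image) is an illustrative assumption, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)

def ncc(a, b):
    """Normalized cross-correlation between two equal-size images."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def register_ga(ref, mov, pop=40, gens=60, span=10):
    """Evolve integer shifts (dx, dy) maximizing NCC(ref, shifted mov)."""
    def fitness(dx, dy):
        return ncc(ref, np.roll(np.roll(mov, dy, axis=0), dx, axis=1))
    P = rng.integers(-span, span + 1, size=(pop, 2))
    for _ in range(gens):
        f = np.array([fitness(dx, dy) for dx, dy in P])
        elite = P[np.argsort(f)[::-1][:pop // 2]]            # selection
        children = elite[rng.integers(0, len(elite), pop // 2)].copy()
        children += rng.integers(-1, 2, size=children.shape)  # +-1 mutation
        P = np.vstack([elite, children])
    f = np.array([fitness(dx, dy) for dx, dy in P])
    return tuple(int(v) for v in P[np.argmax(f)])

# Smooth test image so the similarity surface has a usable gradient
xx, yy = np.meshgrid(np.arange(32), np.arange(32))
ref = np.exp(-(((xx - 16) ** 2 + (yy - 16) ** 2) / 40.0))
mov = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)  # shifted copy of ref
dx, dy = register_ga(ref, mov)
score = ncc(ref, np.roll(np.roll(mov, dy, axis=0), dx, axis=1))
```

A real registration GA would typically encode rotation and scale as well, and use mutual information or another similarity measure as the fitness function.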
Speech Endpoint Detection Based on Improved Spectral Entropy
LI Yan, CHENG Ling-fei and ZHANG Pei-ling
Computer Science. 2016, 43 (Z11): 233-236.  doi:10.11896/j.issn.1002-137X.2016.11A.053
Abstract PDF(1053KB) ( 45 )   
References | Related Articles | Metrics
Given that the conventional spectral-entropy speech endpoint detection algorithm performs poorly under non-stationary noise, a new feature parameter, the sub-band amplitude spectral entropy, was proposed. The new parameter uses non-stationary signal processing technology to combine the time-domain and frequency-domain characteristics of the signal. Firstly, the conventional spectral-entropy endpoint detection algorithm is improved and the multi-band spectral entropy is calculated; then the endpoint is detected in combination with the short-time average magnitude. The simulation results show that this method has better robustness and precision than the conventional spectral entropy algorithm and the average magnitude algorithm, which proves the effectiveness of the proposed method.
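The kind of sub-band spectral entropy the detector builds on can be sketched for a single analysis frame as follows; the frame length, sampling rate and band split are illustrative assumptions, not the paper's settings. Speech-like tonal frames concentrate energy in few bands (low entropy), while noise spreads it (high entropy).

```python
import numpy as np

def subband_spectral_entropy(frame, n_bands=8):
    """Entropy (in bits) of the normalized per-band energy distribution."""
    spec = np.abs(np.fft.rfft(frame)) ** 2
    bands = np.array_split(spec, n_bands)
    energy = np.array([b.sum() for b in bands])
    p = energy / (energy.sum() + 1e-12)       # normalized band energies
    p = np.clip(p, 1e-12, 1.0)
    return float(-np.sum(p * np.log2(p)))

fs = 8000
t = np.arange(256) / fs
tone = np.sin(2 * np.pi * 500 * t)            # voiced-like: concentrated spectrum
rng = np.random.default_rng(0)
noise = rng.standard_normal(256)              # noise-like: flat spectrum
h_tone = subband_spectral_entropy(tone)
h_noise = subband_spectral_entropy(noise)
```

An endpoint detector would threshold this entropy frame by frame, here combined with the short-time average magnitude as the abstract describes.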
Video Compression Based on Surfacelet Transform and SPIHT Algorithm
WANG Hai-yan, YIN Jun and PAN Xian-meng
Computer Science. 2016, 43 (Z11): 237-239, 267.  doi:10.11896/j.issn.1002-137X.2016.11A.054
Abstract PDF(687KB) ( 52 )   
References | Related Articles | Metrics
A new video compression coding method based on the Surfacelet transform and the SPIHT algorithm was proposed. The new method treats the video signal as a special 3D signal with spatial and temporal axes as a whole. The Surfacelet transform achieves multi-directional decomposition, anisotropy, an efficient tree-structured implementation, refinable angular resolution, low redundancy and other properties. The SPIHT algorithm is scalable and progressive in resolution. Video data compression coding is completed by using the correlation and energy concentration between layers of the decomposition coefficients in the Surfacelet transform together with the SPIHT algorithm. The new method can overcome the shortcomings of methods based on the 3D wavelet transform, achieving higher PSNR and better visual effect, especially for video with complex texture and small motion.
Research of Discrete Cosine Transform for Image Compression Algorithm
FENG Fei, LIU Pei-xue, LI Xiao-yan and YAN Nan-bin
Computer Science. 2016, 43 (Z11): 240-241, 255.  doi:10.11896/j.issn.1002-137X.2016.11A.055
Abstract PDF(356KB) ( 154 )   
References | Related Articles | Metrics
With the development of communication technology, image compression has attracted more and more attention. This paper mainly studied the use of the discrete cosine transform (DCT) to compress images; the advantages of using the DCT for image compression were analyzed by Matlab simulation. The paper first introduced the principle of the DCT and the necessity of image compression. Then the DCT was used to compress images, and Huffman coding analysis was performed. Matlab simulations comparing the effect of DCT-compressed images prove that DCT compression at a high bit rate can achieve good results.
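The core DCT compression step can be sketched as follows: an 8x8 block is transformed with an orthonormal 2-D DCT, only the low-frequency coefficients are retained, and the block is inverse-transformed (this Python/NumPy sketch omits the quantization and Huffman coding the paper analyzes in Matlab; `keep` is an illustrative parameter).

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix: C[f, s] = s(f) * cos(pi*(2s+1)*f / (2n))."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1 / np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def compress_block(block, keep=4):
    """Keep only the top-left keep x keep low-frequency DCT coefficients."""
    C = dct_matrix(8)
    coeffs = C @ block @ C.T               # forward 2-D DCT
    mask = np.zeros((8, 8))
    mask[:keep, :keep] = 1                 # low-frequency retention mask
    return C.T @ (coeffs * mask) @ C       # inverse 2-D DCT

block = np.outer(np.linspace(0, 255, 8), np.ones(8))   # smooth gradient block
rec = compress_block(block, keep=4)
err = float(np.abs(rec - block).max())
```

Because smooth blocks concentrate their energy in the low-frequency corner, keeping 16 of 64 coefficients still reconstructs this block closely, which is exactly why the DCT compresses well at reasonable bit rates.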
Clothing Image Retrieval Method Based on Improved GrabCut Algorithm
HU Yu-ping, XIAO Hang and LUO Dong-jun
Computer Science. 2016, 43 (Z11): 242-246.  doi:10.11896/j.issn.1002-137X.2016.11A.056
Abstract PDF(1515KB) ( 64 )   
References | Related Articles | Metrics
In order to eliminate the interference of clothing image backgrounds, the GrabCut algorithm was introduced. However, the current GrabCut algorithm is sensitive to local noise and time-consuming, and its segmentation edges are not accurate. To solve these problems, a multi-scale watershed algorithm was employed to de-noise the gradient image, enhancing the image edge points and reducing the subsequent computation. To reduce the loss of key image features, we used an entropy penalty factor to optimize the segmentation energy function, reducing the loss of information effective for image retrieval. We then introduced the improved GrabCut algorithm into a content-based clothing image retrieval system. The experimental results show that the method improves retrieval accuracy markedly over existing algorithms.
HMM Static Gesture Recognition Algorithm Based on Fusing Local Feature and Global Feature
ZHANG Li-zhi, HUANG Ju, SUN Hua-dong, ZHAO Zhi-jie, CHEN Li and XING Zong-xin
Computer Science. 2016, 43 (Z11): 247-251.  doi:10.11896/j.issn.1002-137X.2016.11A.057
Abstract PDF(448KB) ( 37 )   
References | Related Articles | Metrics
Focusing on static gesture recognition, a hidden Markov model (HMM) recognition algorithm based on local and global contour shape was proposed. It extracts local features and upper-contour shape entropy as training data for each gesture type to train that type's HMM parameters. During testing, the algorithm first uses local shape entropy to obtain a preliminary identification result; then, according to the ambiguity of that result, it decides whether to also use the upper-contour feature, a global characteristic complementary to the local one, to obtain the final result. Experimental results show that the algorithm works well for gesture libraries in which shape differences dominate. Mapping static spatial feature data into time series gives static gesture recognition spatial scale invariance, while a reasonable data dimension shortens training time and accelerates recognition.
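The abstract does not define its shape entropy precisely; as a hedged illustration, a generic Shannon entropy over a quantized upper-contour height profile (a simple global shape descriptor) could look like this. The binning scheme here is an assumption, not the paper's definition.

```python
import math
from collections import Counter

def shape_entropy(contour_heights, bins=8):
    # Quantize the upper-contour heights into `bins` levels and return
    # the Shannon entropy of the resulting level distribution.
    lo, hi = min(contour_heights), max(contour_heights)
    width = (hi - lo) / bins or 1  # guard against a flat contour
    counts = Counter(min(int((h - lo) / width), bins - 1)
                     for h in contour_heights)
    n = len(contour_heights)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

A flat contour yields entropy 0, while a contour spread evenly across all bins yields the maximum log2(bins), so the value discriminates between simple and complex gesture silhouettes.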
Hierarchical Semantic-based Web Intelligent Fashion Image Retrieval Method
GENG Zeng-min, SHANG Shu-yuan, SHAO Xin-yan, ZHOU Yi-ling and MA Lin
Computer Science. 2016, 43 (Z11): 252-255.  doi:10.11896/j.issn.1002-137X.2016.11A.058
Abstract PDF(238KB) ( 34 )   
References | Related Articles | Metrics
Aiming at large-scale automatic collection of fashion images from the Web, this paper studied how to use the association between accompanying text and the concept of fashion images on Web pages to collect images automatically. Based on semantic-based Web content acquisition and the drawbacks of the HITS (Hyperlink-Induced Topic Search) method, a novel SICR (Semantic-based Image Collection Robot) method was proposed to collect fashion images from the Web. Supported by a hierarchical semantic library, SICR removes link-farm pages when expanding the root set, computes anchor-text similarity when crawling link pages, and performs a brief conceptual analysis of page content before downloading images. Experimental results on a large-scale dataset demonstrate that the proposed method overcomes the deficiencies of text-only or link-only analysis and improves the precision and recall of fashion image retrieval, confirming the effectiveness of SICR.
Applied Research of Three-dimensional Visualization Based on Phase Retrieval
LIU Xing-ming, CAI Tie, WANG Hui-jing, PENG Gang and GUI Rong-zhi
Computer Science. 2016, 43 (Z11): 256-258, 274.  doi:10.11896/j.issn.1002-137X.2016.11A.059
Abstract PDF(1098KB) ( 40 )   
References | Related Articles | Metrics
Three-dimensional measurement of spatial surfaces has important applications in industrial inspection, virtual reality, film art and the preservation of cultural relics. In this paper, a method was presented for 3D digitization of free-form surfaces, which can not only reconstruct the geometric surface but also recover its texture attributes. Structured light is projected onto the target surface to acquire spatial points, and a matching algorithm is used to build the geometry of the target model. Photorealistic three-dimensional reconstruction of the target object is realized through an interactive selection optimization strategy for texture mapping. The proposed method is straightforward and avoids the difficulties of feature extraction and manual data post-processing. Finally, experimental results prove the effectiveness of the proposed method.
Probability Routing Protocol Based on Slotted Sliding Window in Wireless Body Area Network
LI Yan, ZHOU Yi, LIU Yu-sheng and LIANG Zhi
Computer Science. 2016, 43 (Z11): 259-263, 285.  doi:10.11896/j.issn.1002-137X.2016.11A.060
Abstract PDF(292KB) ( 63 )   
References | Related Articles | Metrics
Wireless body area networks (WBAN) are applied not only in health care; they also have great value in the care of special populations, sports, entertainment, military and other fields, but their characteristics can seriously affect the reliability of data transmission, so designing a reliable routing protocol has become a challenging task. A probabilistic routing protocol based on slotted sliding windows was proposed to improve the reliability of packet delivery and reduce network latency. The routing algorithm was implemented in C# on VS2012, and the results show that adding slotted sliding windows to the routing protocol indeed improves network reliability.
Improved Scheduler of Credit
ZHANG Yan
Computer Science. 2016, 43 (Z11): 264-267.  doi:10.11896/j.issn.1002-137X.2016.11A.061
Abstract PDF(601KB) ( 80 )   
References | Related Articles | Metrics
In this paper, through analysis of the VMware ESX and Xen CPU scheduling algorithms, it is found that both are based on a partitioned-queue model. We proposed a shared-queue model to improve the Credit algorithm, and then carried out theoretical analysis and simulation experiments on the model. Based on the simulation results, the performance of the improved scheduling algorithm was evaluated.
Generation of Mixed Chaotic Sequences Based on Optimization Criterion
DONG Wen-hua and GUO Shu-xia
Computer Science. 2016, 43 (Z11): 268-270.  doi:10.11896/j.issn.1002-137X.2016.11A.062
Abstract PDF(416KB) ( 95 )   
References | Related Articles | Metrics
The number of traditional m-sequences and Gold sequences that can be used as spreading codes is limited. To solve this problem, using chaotic sequences instead of m-sequences and Gold sequences was proposed. However, chaotic sequences generated from a single low-dimensional chaotic map have shortcomings in anti-attack ability, key space and security. Based on the optimization results, this paper presented a new mixed chaotic sequence combining single low-dimensional Logistic sequences and Chebyshev sequences. Performance analysis and simulation indicate that the new chaotic sequence does well in balance, randomness and correlation, and that the anti-jamming ability of the mixed chaotic sequence is similar to that of m-sequences and Gold sequences.
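One common way to mix two low-dimensional maps into a binary spreading code is to binarize each orbit and XOR the streams. The sketch below is illustrative only; the paper's exact combination rule and parameters are not given in the abstract, so the thresholds and the XOR mixing here are assumptions.

```python
import numpy as np

def logistic(x0, n, mu=4.0):
    # Logistic map x -> mu*x*(1-x), fully chaotic at mu = 4
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = mu * x * (1.0 - x)
        xs[i] = x
    return xs

def chebyshev(x0, n, k=4):
    # Chebyshev map x -> cos(k * arccos(x)) on [-1, 1]
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = np.cos(k * np.arccos(x))
        xs[i] = x
    return xs

def mixed_sequence(x0, y0, n):
    # Binarize each orbit at its natural midpoint and XOR the streams
    a = (logistic(x0, n) > 0.5).astype(int)
    b = (chebyshev(y0, n) > 0.0).astype(int)
    return a ^ b
```

Because both invariant densities are symmetric about their thresholds, the XOR-ed stream is roughly balanced, one of the spreading-code properties the abstract evaluates.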
Enhanced Four-fork Tree RFID Anti-collision Algorithm
SHAN Pu-fang, ZHENG Jia-li, YUE Shi-bin and YANG Zi-wei
Computer Science. 2016, 43 (Z11): 271-274.  doi:10.11896/j.issn.1002-137X.2016.11A.063
Abstract PDF(534KB) ( 60 )   
References | Related Articles | Metrics
Based on the four-fork (quad) tree and various adaptive anti-collision algorithms, an enhanced four-fork tree algorithm (EFFT) was proposed. The algorithm first uses Manchester coding to precisely locate the collision bits of the tags; a K-bit segment starting at the collision bit is extracted to form new K-bit tag UID information, and a dynamic four-fork tree is then used to resolve the collision bits. On the MATLAB platform, simulation experiments comparing the EFFT algorithm, the backtracking binary algorithm and the adaptive algorithm were performed. The simulation results and theoretical analysis show that the new algorithm greatly reduces the number of reader queries and transmitted bits, and improves the throughput and identification efficiency of the system.
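The Manchester-coding property the abstract relies on is that a reader can tell exactly which bit positions collided: a position collides precisely when simultaneously answering tags differ there. A minimal sketch of that detection step (the function name is illustrative):

```python
def collision_bits(responses):
    # Positions where the simultaneously received tag IDs disagree --
    # with Manchester coding these are directly observable by the reader.
    length = len(responses[0])
    return [i for i in range(length)
            if len({uid[i] for uid in responses}) > 1]
```

The tree algorithm then only needs to branch on these positions rather than on every bit, which is where the reduction in reader queries comes from.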
Two-hop ACK Message Based Routing Algorithm with Alternative Copy in DTN
WEN Guan-qi, WANG Zhong, GONG Zheng-zheng, ZHANG Shao-lei and WANG Jing
Computer Science. 2016, 43 (Z11): 275-277, 289.  doi:10.11896/j.issn.1002-137X.2016.11A.064
Abstract PDF(404KB) ( 64 )   
References | Related Articles | Metrics
Aiming at the lack of effective methods to dodge routing holes, which causes high end-to-end transmission delay, a forwarding algorithm based on a two-hop ACK mechanism with alternative copies (2HAR) was proposed. To evade routing-hole areas and find a new communication path effectively, when a message is sent, a backup copy of the message is maintained by a one-hop node. After an ACK confirmation is received, the copy of the message is deleted; otherwise another forwarding node is found to send the copy. Simulation results indicate that the proposed algorithm performs better in delivery ratio, average end-to-end delay and network overhead, which makes it meaningful for use in VANETs.
Real Time Scheduling Algorithm for Temporal and Spatial Tasks in Wireless Networked Control Systems
LIN Qiang, WU Guo-wei, WAN An-min and YU Jun-shuai
Computer Science. 2016, 43 (Z11): 278-281, 300.  doi:10.11896/j.issn.1002-137X.2016.11A.065
Abstract PDF(327KB) ( 113 )   
References | Related Articles | Metrics
In this paper, a hybrid spatial and temporal scheduling algorithm for wireless networked control systems was proposed, which takes into account both the distance between robots and task regions and the deadlines of tasks. In our method, spatial and temporal factors are quantified to build a priority queue, based on which the robot executes tasks sequentially. Finally, extensive simulations were conducted to show the advantages of the spatial and temporal scheduling algorithm. Simulation results show that, on the premise of ensuring a high successful scheduling rate, our scheme outperforms previous methods in task-request throughput, successful task-solving ratio and average response delay.
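The abstract does not state how the two factors are quantified, so the following is only one plausible sketch: score each task by a weighted sum of deadline urgency (inverse slack) and spatial proximity (inverse distance), then sort. The weights, field names and scoring formula are all assumptions made for illustration.

```python
def task_priority(tasks, robot_pos, now, w_time=0.5, w_space=0.5):
    # Combine a temporal factor (tighter deadline -> higher score) and a
    # spatial factor (closer task -> higher score) into one priority value.
    def score(t):
        dx = t["x"] - robot_pos[0]
        dy = t["y"] - robot_pos[1]
        dist = (dx * dx + dy * dy) ** 0.5
        slack = max(t["deadline"] - now, 1e-9)
        return w_time / slack + w_space / (1.0 + dist)
    return sorted(tasks, key=score, reverse=True)
```

The robot would then simply pop tasks from the front of the returned queue.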
Research on Hierarchical Hybrid Communication System for Power Transmission
YAO Ji-ming
Computer Science. 2016, 43 (Z11): 282-285.  doi:10.11896/j.issn.1002-137X.2016.11A.066
Abstract PDF(470KB) ( 66 )   
References | Related Articles | Metrics
Based on the communication features and application requirements of transmission-line state monitoring, the limitations of power radio-over-fiber (RoF) communication systems were analyzed, and a hierarchical hybrid network architecture combining RoF and wireless multi-hop was put forward. Considering the chain topology characteristic of transmission scenarios, and in order to ensure the reliability of the communication network and prevent network interruption caused by single-point failures, three multi-hop link maintenance methods were proposed, which have reference significance for the construction of power transmission communication systems.
Research on Adaptive Synchronization Based on Complex Network with Multi-weights
ZHANG Li and AN Xin-lei
Computer Science. 2016, 43 (Z11): 286-289.  doi:10.11896/j.issn.1002-137X.2016.11A.067
Abstract PDF(277KB) ( 49 )   
References | Related Articles | Metrics
On the basis of traditional single-weight modeling approaches, a new complex network model was established. According to the differences in weight nature and the idea of network splitting, we split a complex network with multiple weights into several sub-networks, each with a single weight. We then investigated the globally adaptive synchronization of complex networks with multiple weights and gave general conditions for adaptive synchronization. Finally, taking the Lorenz system as an example, we proved the validity of the presented method.
Optimal and Dynamic Resource Management Scheme for Inter-domain Heterogeneous Wireless Networks
ZHANG Yuan-yuan, WANG Jian and XIAO Chuang-bai
Computer Science. 2016, 43 (Z11): 290-295.  doi:10.11896/j.issn.1002-137X.2016.11A.068
Abstract PDF(263KB) ( 68 )   
References | Related Articles | Metrics
In order to solve the issue that existing models cannot combine break-off and integrity well, a partially observable Markov-modulated Poisson process (PO-MMPP) traffic model was proposed. This model captures the burstiness of overflow traffic under imperfect observability of network states. The optimization objective is the minimum network cost, and it is shown that the optimal control policy can balance network cost and traffic QoS requirements. By derivation and confirmation, we deduced the relations between call arrival rates and transition rates. Simulation results indicate that the proposed model outperforms the complete-sharing algorithm in minimizing network cost and in new-call blocking and handoff-call dropping probabilities; the network cost is reduced, and the optimal control policy adapts to the network state and QoS requirements.
Buffer Management Based on Stationary Relay Nodes and Message Correlation in Sparse Opportunistic Networks
MA Xue-bin, LI Ai-li, ZHANG Xiao-juan, JIA Lei-lei and XIAO Jing
Computer Science. 2016, 43 (Z11): 296-300.  doi:10.11896/j.issn.1002-137X.2016.11A.069
Abstract PDF(426KB) ( 46 )   
References | Related Articles | Metrics
Aiming at the differences in contact frequency and buffer resources between fixed relay nodes and mobile nodes in sparse opportunistic networks, a buffer management strategy combining the importance of fixed relay nodes with message correlation was proposed for multi-copy routing protocols. The strategy reduces the number of redundant messages in the buffers of fixed relay nodes by exchanging the contact information and message-queue information of fixed relay nodes and mobile nodes, so that the buffer space of fixed relay nodes is used reasonably. Experiments show that the proposed strategy can improve the buffer utilization of fixed relay nodes while guaranteeing the message delivery rate.
Research on Satellite Network Topologies Survivability Evaluation Method
WEI De-bin, QIN Yu-fan and YU Ran
Computer Science. 2016, 43 (Z11): 301-303, 310.  doi:10.11896/j.issn.1002-137X.2016.11A.070
Abstract PDF(664KB) ( 60 )   
References | Related Articles | Metrics
Starting from the highly dynamic characteristics of satellite networks, and in view of the shortcomings of the existing jump-range-node method for network topology survivability assessment, a new survivability evaluation method for satellite networks was proposed. The new method defines a new correction factor and theoretically unifies the analytic expressions of topology evaluation based on the jump-range-node method, making it applicable to survivability assessment of highly dynamic satellite network topologies. Simulation results show that this method distinguishes different topological structures with higher resolution and more reasonably.
Hamiltonian Decomposition of 2r-regular Graph Connected Cycles Networks
SHI Hai-zhong, CHANG Li-ting, ZHAO Yuan, ZHANG Xin and WANG Hai-feng
Computer Science. 2016, 43 (Z11): 304-307, 319.  doi:10.11896/j.issn.1002-137X.2016.11A.071
Abstract PDF(202KB) ( 60 )   
References | Related Articles | Metrics
An interconnection network is an important part of a supercomputer. The interconnection network is often modeled as an undirected graph, in which vertices correspond to processor/communication parts and edges correspond to communication channels. In 2010, Hai-zhong Shi proposed the model of 2r-regular graph connected cycles for designing, analyzing and improving such networks, and proposed many conjectures. In this paper, we proved that any 2r-regular graph connected cycles network is a union of edge-disjoint Hamiltonian cycles and a perfect matching. Therefore, the conjectures are true when the primitive graph is a 2r-regular connected graph.
Design of RFID Tag Antenna
TIAN Hong-pu, SHAN Zhi-yong and ZHANG Ya-bing
Computer Science. 2016, 43 (Z11): 308-310.  doi:10.11896/j.issn.1002-137X.2016.11A.072
Abstract PDF(364KB) ( 46 )   
References | Related Articles | Metrics
With the development of the theories and key technologies of wireless communication, applications of RFID technology have become more and more important. This article briefly introduces the application background of RFID antennas and gives the working principle of RFID tags. It points out the importance of the reader antenna to the RFID system and analyzes the radiation performance of the theoretical model of the rectangular patch antenna. Based on this theory, an RFID tag patch antenna was designed, and the future application prospects of RFID technology were discussed.
Virtual Machine Dynamic Supply and Allocation Algorithm Based on Auction in Cloud Computing
LIU Zhong-tao and LIU Ming-li
Computer Science. 2016, 43 (Z11): 311-315, 341.  doi:10.11896/j.issn.1002-137X.2016.11A.073
Abstract PDF(376KB) ( 56 )   
References | Related Articles | Metrics
Current cloud computing providers allocate their virtual machine (VM) instances via fixed-price or auction-like mechanisms. However, most of these mechanisms assume a static supply of virtual machines and cannot accurately predict user demand, leading to underutilization of resources. To this end, an auction-based algorithm for dynamic VM provisioning and allocation was proposed, which takes user demand for VMs into account when making provisioning decisions. The algorithm treats the available computing resources as a 'liquid' pool that can be configured into different numbers and types of VM instances depending on user requests, and determines the allocation strategy based on the users' valuations until all resources are allocated. The mechanism was evaluated by simulation experiments using real workload traces from the Parallel Workloads Archive; the results show that the proposed method brings higher income for cloud providers and improves resource utilization.
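The core of a valuation-driven allocation over a 'liquid' resource pool can be sketched as a greedy auction: rank bids by valuation per unit of resource and provision winners until capacity runs out. This is a simplified sketch of the general idea, not the paper's mechanism (which also handles VM types and incentive properties).

```python
def allocate(capacity, bids):
    # bids: list of (user, resource_demand, valuation).
    # Greedily serve the highest valuation-per-unit bids first,
    # carving VMs out of one shared capacity pool.
    winners = []
    ranked = sorted(bids, key=lambda b: b[2] / b[1], reverse=True)
    for user, demand, value in ranked:
        if demand <= capacity:
            capacity -= demand
            winners.append(user)
    return winners
```

For example, with 10 units of capacity and bids a=(4 units, value 8), b=(6, 6), c=(5, 20), the pool serves c then a and rejects b.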
Distributed Caching Strategy for Data Exchange Program Based on Ontology and KNN Algorithm
WANG Li, WANG Xin and MA Chao-dong
Computer Science. 2016, 43 (Z11): 316-319.  doi:10.11896/j.issn.1002-137X.2016.11A.074
Abstract PDF(395KB) ( 36 )   
References | Related Articles | Metrics
Taking the management system of the Open University of China as an example, this paper proposed a distributed cache strategy for data exchange based on ontology and the KNN algorithm, to solve optimization problems in access time, cost and resource quality when a distributed system exchanges data between different nodes. The correctness and effectiveness of the strategy are demonstrated by a simulation application designed in the paper, and the results show that the strategy can improve the efficiency of data exchange in distributed systems.
Blind Source Separation Based on Affine Projection and Nonlinear Principal Component Analysis
LI Xiong-jie and ZHOU Dong-hua
Computer Science. 2016, 43 (Z11): 320-323.  doi:10.11896/j.issn.1002-137X.2016.11A.075
Abstract PDF(355KB) ( 49 )   
References | Related Articles | Metrics
The affine projection algorithm (APA) improves convergence speed by reusing data. Aiming at the slow convergence of existing blind source separation (BSS) methods based on nonlinear principal component analysis (PCA), this paper proposed a nonlinear APA-PCA criterion using the idea of APA, and designed new APA-Kalman, APA-RLS and APA-LMS algorithms for BSS. In these algorithms, the prewhitened observation vector data is used repeatedly, converting vector data into matrix data and accelerating the convergence of BSS. Simulation results show that the nonlinear APA-PCA criterion is effective and universal.
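The prewhitening step mentioned above is standard: center the observations and decorrelate them so their covariance is the identity before the nonlinear PCA stage. A minimal NumPy sketch (eigenvalue-based whitening; the paper's specific whitening method is not stated in the abstract):

```python
import numpy as np

def prewhiten(X):
    # X: (channels, samples) observation matrix.
    # Returns Z = W @ (X - mean) with sample covariance ~ identity.
    Xc = X - X.mean(axis=1, keepdims=True)
    cov = Xc @ Xc.T / Xc.shape[1]
    d, E = np.linalg.eigh(cov)             # eigendecomposition of covariance
    W = E @ np.diag(1.0 / np.sqrt(d)) @ E.T  # symmetric whitening matrix
    return W @ Xc
```

After whitening, the remaining separation problem reduces to finding an orthogonal rotation, which is what the APA-accelerated nonlinear PCA updates then solve.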
Security Survey on Android Application Based on Dynamic Analysis
NING Zhuo, HU Ting and SUN Zhi-xin
Computer Science. 2016, 43 (Z11): 324-328.  doi:10.11896/j.issn.1002-137X.2016.11A.076
Abstract PDF(172KB) ( 40 )   
References | Related Articles | Metrics
Because the Android operating system is powerful and easy to develop for, it has not only gained the world's largest smartphone market share in the past few years, but also become a prime target for malicious attacks. In this paper, we first briefly introduced Android malware and its detection methods. Then we summarized and analyzed some of the latest and most accurate dynamic analysis techniques in Android security, introducing their mechanisms, technical solutions, performance levels and test results from multiple perspectives. Finally, we presented a few directions worthy of further research.
Security Analysis Model of Power Intelligent Unit Transmission Protocols
MA Yuan-yuan, CHEN Zhe, WANG Chen, FEI Jia-xuan and HUANG Xiu-li
Computer Science. 2016, 43 (Z11): 329-337.  doi:10.11896/j.issn.1002-137X.2016.11A.077
Abstract PDF(698KB) ( 81 )   
References | Related Articles | Metrics
The security of communication protocols for power intelligent units is the basis for achieving high-speed, reliable and secure intelligent communication in the smart grid. The mainstream security analysis theories and methods for protocols were reviewed with a view to constructing a security analysis model for power intelligent unit transmission protocols. Formal methods based on symbolic theory include logical reasoning, model checking and theorem proving; computational methods based on computational complexity theory include the RO, BCP, CK and UC models; methods based on computational soundness theory include mapping-based methods, model-based methods, computationally sound formal methods and the formalization of computational methods. A security analysis model for power intelligent unit transmission protocols oriented to the smart grid was proposed, which lays a foundation for the security analysis of such protocols.
Real Time Analysis Method of Network Security Risk Based on Markov Model
WANG Xiao, LI Qian-mu and QI Yong
Computer Science. 2016, 43 (Z11): 338-341.  doi:10.11896/j.issn.1002-137X.2016.11A.078
Abstract PDF(228KB) ( 70 )   
References | Related Articles | Metrics
In view of the urgent need for real-time analysis of network risk, this paper studied and designed a time-variant Markov model suitable for real-time risk probability prediction. The method is robust and can respond to fluctuations in the data, achieving real-time risk analysis. The DARPA 2000 data sets were used in simulation experiments to predict network risk probability. The results show that the model has high real-time performance and accuracy.
Rapid Analysis Method of Malicious Code Based on Feature Threshold
QI Fa-zhi and SUN Zhi-hui
Computer Science. 2016, 43 (Z11): 342-345, 367.  doi:10.11896/j.issn.1002-137X.2016.11A.079
Abstract PDF(239KB) ( 49 )   
References | Related Articles | Metrics
Nowadays, malicious code has many characteristics: numerous types, great harm, high complexity, and the need for fast response. Because existing methods for analyzing malicious code are difficult to adapt to rapid on-site analysis and disposal and to the needs of practical application, this paper proposed a malicious code analysis method based on feature thresholds and detailed a process for rapid analysis and disposal, comprising environmental analysis, file refinement, static analysis and dynamic analysis. By constructing threshold determinations and locating the function and family properties of the code, we provided a specific method for removing malicious code. Practical application proves that this method combines the intention, function, structure and behavior of malicious code, and realizes security analysis of malicious code at the disposal site, providing important support for fast response and disposal in current network security.
Privacy-preserving Oriented Ciphertext Retrieval Algorithm
CHEN Chao-qun and LI Zhi-hua
Computer Science. 2016, 43 (Z11): 346-351.  doi:10.11896/j.issn.1002-137X.2016.11A.080
Abstract PDF(454KB) ( 60 )   
References | Related Articles | Metrics
Aiming at the safety problems of outsourcing data to cloud servers in mobile cloud environments, and in order to ensure data security and ciphertext retrieval efficiency, the traditional ciphertext retrieval architecture was improved by introducing a private-cloud index file server that separates index files from ciphertext files. On this basis, a privacy-preserving ciphertext retrieval algorithm was proposed. Considering the weak computing capability of mobile devices, a symmetric searchable encryption scheme is adopted to reduce computational overhead. Furthermore, an index structure based on a trie tree was proposed to improve retrieval efficiency and support result retrieval. Theoretical analysis and experimental results show that the proposed scheme achieves privacy guarantees and performs well in storage space and retrieval time.
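A trie-based index maps search tokens to document identifiers by walking one character per tree level, so lookup cost depends on token length rather than index size. A minimal sketch of such an index (in a real searchable-encryption scheme the stored tokens would be keyed trapdoors, not plaintext words):

```python
class TrieIndex:
    def __init__(self):
        self.root = {}

    def insert(self, token, doc_id):
        # Walk/create one child dict per character, then record the
        # document id at the terminal node under a sentinel key.
        node = self.root
        for ch in token:
            node = node.setdefault(ch, {})
        node.setdefault("$", set()).add(doc_id)

    def search(self, token):
        # Follow the token's characters; a missing edge means no match.
        node = self.root
        for ch in token:
            if ch not in node:
                return set()
            node = node[ch]
        return node.get("$", set())
```

Lookups touch only len(token) nodes, which is what makes the structure attractive for retrieval on resource-limited mobile clients.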
In-lined Reference Monitor Method of Two-dimension Information Release Policy
ZHU Hao, CHEN Jian-ping and JIN Li
Computer Science. 2016, 43 (Z11): 352-354.  doi:10.11896/j.issn.1002-137X.2016.11A.081
Abstract PDF(128KB) ( 34 )   
References | Related Articles | Metrics
Static enforcement methods for declassification policies are over-restrictive, and dynamic approaches based on virtual machines are not entirely suited to Web and just-in-time compilation environments. To this end, a two-dimensional declassification policy along the WHAT and WHERE dimensions was enforced by an in-lined reference monitor. The transformation rules of the in-lined reference monitor method were presented, and their soundness was proved. According to the transformation rules, the source program is rewritten into a new program that is independent of external monitoring environments and can monitor itself.
Military Equipment’s Distributed Key-generating Algorithm for Identity-based Cryptography
WANG Hong, LI Jian-hua and CUI Qiong
Computer Science. 2016, 43 (Z11): 355-357, 397.  doi:10.11896/j.issn.1002-137X.2016.11A.082
Abstract PDF(218KB) ( 49 )   
References | Related Articles | Metrics
According to the decentralization theory of network-centric warfare, we proposed a distributed private-key extraction algorithm for Lewko-Waters identity-based encryption, because a sole key generation center for identity-based encryption is liable to attack. In this scheme, the master key is shared between the key generation center and key privacy authorities. A user's private key can be extracted, under the supervision of the key generation center, from a number of key privacy authorities distributed across the network, strengthening the survivability and robustness of the key management system. Finally, we proved its IND-CPA security, i.e. the indistinguishability of ciphertexts under chosen-plaintext attack, in the standard model, and performed a comparative analysis of the algorithm, which shows it can help solve the key escrow problem.
Research on Network Security Technologies Based on NGN
LIU Wei, WU Jun-min and ZHU Xiao-dong
Computer Science. 2016, 43 (Z11): 358-361, 376.  doi:10.11896/j.issn.1002-137X.2016.11A.083
Abstract PDF(392KB) ( 41 )   
References | Related Articles | Metrics
With the deployment of IPv6 and the promotion of 5G standardization, the next generation network (NGN) has become the focus of industry attention. Through analysis of the four-layer NGN architecture and its key technologies, combined with the characteristics of networks in China, it is argued that further work on the next generation network is still needed. This paper presented practical experience in the development and deployment of next generation networks and pointed out where breakthroughs are needed. Combined with current technology, we described the security problems of the next generation network and provided solutions.
FMEA Method Combining OWA Operator and Fuzzy DEMATEL
LIN Xiao-hua and JIA Wen-hua
Computer Science. 2016, 43 (Z11): 362-367.  doi:10.11896/j.issn.1002-137X.2016.11A.084
Abstract PDF(343KB) ( 65 )   
References | Related Articles | Metrics
Considering the shortcomings of the traditional failure mode and effects analysis (FMEA) method in practical application, a risk ranking method was proposed based on the ordered weighted averaging (OWA) operator and the decision-making trial and evaluation laboratory (DEMATEL) method. FMEA experts make fuzzy evaluations of the three risk factors of the failure modes, and the evaluation information is aggregated by the OWA operator to obtain the degree of influence of each failure cause on the failure mode. The initial direct-relation fuzzy matrix of the FMEA system is constructed by the fuzzy DEMATEL method, the total-relation fuzzy matrix is calculated, and the cause degree of each failure cause is computed, based on which the product or system risk assessment is conducted. The proposed method was applied to the safety analysis of the basic components of a metro car door system, and its feasibility and effectiveness were verified by comparison with the results of the traditional RPN method.
KDCK-medoids Dynamic Clustering Algorithm Based on Differential Privacy
MA Yin-fang and ZHANG Lin
Computer Science. 2016, 43 (Z11): 368-372.  doi:10.11896/j.issn.1002-137X.2016.11A.085
Abstract PDF(256KB) ( 79 )   
References | Related Articles | Metrics
The traditional K-medoids clustering algorithm is sensitive to the initial center points, cannot effectively handle dynamic data clustering, and private data requires privacy protection. Therefore, this paper proposed the KDCK-medoids dynamic clustering algorithm. Based on differential privacy protection, it builds a new KD-tree from the k optimally selected rectangular units of a KD-tree together with the incremental data, distributes the incremental data into the corresponding clusters using a neighbor search strategy, and thereby completes the dynamic clustering. Experiments on small data sets and multi-dimensional large data sets analyzed clustering accuracy and running time and evaluated the effectiveness of the algorithm. The results indicate that the differentially private KDCK-medoids dynamic clustering algorithm can realize privacy protection while quickly and efficiently handling dynamic clustering of incremental data.
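The standard way to make a released cluster statistic differentially private is the Laplace mechanism: add Laplace(sensitivity/epsilon) noise to each coordinate. A minimal stdlib sketch of that building block (how the paper combines it with the KD-tree is not specified in the abstract, and the sensitivity value here is a placeholder):

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sampling of the Laplace(0, scale) distribution
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def private_centroid(points, epsilon, sensitivity=1.0):
    # Release the centroid of a cluster with per-coordinate Laplace noise
    # calibrated to sensitivity/epsilon (the Laplace mechanism).
    dim = len(points[0])
    scale = sensitivity / epsilon
    n = len(points)
    return [sum(p[d] for p in points) / n + laplace_noise(scale)
            for d in range(dim)]
```

Smaller epsilon means stronger privacy but noisier centers, which is the accuracy/privacy trade-off the paper's experiments measure.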
Design Method of SRAM-PUF Based on Error Correcting Code Fuzzy Extractor
XU Tai-zhong, YANG Tian-chi, CHENG Juan and SHAO Qi-feng
Computer Science. 2016, 43 (Z11): 373-376.  doi:10.11896/j.issn.1002-137X.2016.11A.086
Abstract PDF(286KB) ( 171 )   
References | Related Articles | Metrics
The physically unclonable function (PUF) is a new kind of hardware security technology; the physical fingerprint characteristics of a chip are used in many fields such as key generation and identity authentication. A fuzzy extractor based on an error-correcting code was proposed in this paper to improve the robustness of SRAM PUFs. The fuzzy extractor works in two stages, generation and reconstruction. In the generation stage, the helper data of the PUF is created using a BCH encoder; in the reconstruction stage, a stable PUF response output is recovered using the helper data and the error-correcting ability of the BCH code. The fuzzy extractor experiment was run on an ATSAMV70J19 CPU platform, and the consistency reaches 99.9% at different operating temperatures, verifying the excellent performance of the method.
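The code-offset construction behind such a fuzzy extractor can be shown with a toy repetition code standing in for the BCH code (a real implementation would use BCH, which corrects far more errors per helper bit; this sketch only illustrates the two stages):

```python
def gen(puf_bits, key_bits, rep=3):
    # Generation stage: helper data = PUF response XOR encoded key.
    # The helper data alone reveals nothing about the key if the
    # response has enough entropy.
    code = [b for b in key_bits for _ in range(rep)]
    return [p ^ c for p, c in zip(puf_bits, code)]

def rec(noisy_puf_bits, helper, rep=3):
    # Reconstruction stage: XOR off the noisy response, then
    # majority-decode each block to correct bit flips.
    code = [p ^ h for p, h in zip(noisy_puf_bits, helper)]
    return [int(sum(code[i:i + rep]) > rep // 2)
            for i in range(0, len(code), rep)]
```

With a 3-repetition code, any single bit flip per block in the noisy SRAM power-up state is corrected and the original key is recovered exactly.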
Blind Image Watermark Algorithm Based on Compressed Sensing
WEN Jian-yang, GONG Ning-sheng and CHEN Yan
Computer Science. 2016, 43 (Z11): 377-382.  doi:10.11896/j.issn.1002-137X.2016.11A.087
Abstract PDF(877KB) ( 58 )   
References | Related Articles | Metrics
Aiming at modern image watermarking design requirements, a compressed sensing (CS) based watermarking algorithm was proposed. Since natural digital images are sparse in the wavelet domain, the encrypted watermark image is embedded in the wavelet transform coefficients of the carrier image. If only the cipher key (a random seed) is known, the watermark image can be perfectly recovered using properties of vector spaces and matrices together with a CS reconstruction algorithm, dispensing with the original carrier image or other prior information. Experiments prove that the watermarking algorithm performs well and fully meets the requirements of practical application.
Layer by Layer Triangulation Algorithm for 3D Point Clouds from Structured Light Vision
QIN Xu-jia, CHEN Lou-heng, TAN Xiao-jun, ZHENG Hong-bo and ZHANG Mei-yu
Computer Science. 2016, 43 (Z11): 383-387, 410.  doi:10.11896/j.issn.1002-137X.2016.11A.088
Abstract PDF(1489KB) ( 152 )   
References | Related Articles | Metrics
Exploiting the fact that a large-scale 3D point cloud recovered from structured light vision can be projected onto a plane, a bottom-edge-driven, layer-by-layer mesh surface reconstruction algorithm based on a projection grid was presented. First, the point cloud is projected onto a 2D plane. Then a regular 2D projection grid covering the projection area is established, and the projected points are mapped to grid points, establishing a mapping between the 3D points and the 2D grid points. Third, the 2D grid points are triangulated layer by layer in a bottom-edge-driven manner, yielding a 2D triangular mesh. Finally, the 3D mesh surface is built from the correspondence between the 2D grid points and the 3D points together with the 2D mesh topology. Experimental results show that the proposed algorithm is fast and preserves the details of the surface.
Network Security Situation Prediction Model Based on RAN-RBF Neural Network
GAN Wen-dao, ZHOU Cheng and SONG Bo
Computer Science. 2016, 43 (Z11): 388-392.  doi:10.11896/j.issn.1002-137X.2016.11A.089
Abstract PDF(386KB) ( 63 )   
References | Related Articles | Metrics
In order to predict the development of the network security situation more accurately, a network security situation prediction (NSSP) model based on a resource allocating network radial basis function (RAN-RBF) neural network was proposed. The model uses the resource allocating network algorithm to cluster the network security situation samples and determine the number of hidden-layer nodes of the neural network, and introduces a pruning strategy to remove nodes that contribute little to the network. The centers, widths and weights of the neural network are then optimized by a modified particle swarm optimization (MPSO) algorithm to predict the future network security situation. Simulation experiments on data provided by the network management department of a campus network show that, compared with a K-means-clustered RBF neural network prediction model, the proposed model obtains a more appropriate RBF network structure and control parameters, improves prediction accuracy, reflects the overall network security situation more directly, and provides a situation map for network security administrators.
Privacy and Integrity Protection Range Query Processing in Two-tiered Wireless Sensor Networks
LIU Huai-jin, CHEN Yong-hong, TIAN Hui, WANG Tian and CAI Yi-qiao
Computer Science. 2016, 43 (Z11): 393-397.  doi:10.11896/j.issn.1002-137X.2016.11A.090
Abstract PDF(270KB) ( 46 )   
References | Related Articles | Metrics
In two-tiered wireless sensor networks, the storage node is an intermediate node between the sensor nodes and the Sink. It is responsible for collecting the data of the sensor nodes and answering the queries of the Sink, so it is more easily targeted by attackers. A compromised storage node can not only reveal the data of the sensor nodes, but is also likely to return incomplete or false results to the Sink. To this end, this paper proposed PIRQ, a range query method with privacy and integrity protection. PIRQ separates the data query and upload processes, and uses R-D processing of the sensed data to transform the comparison between the sensed data and the lower and upper bounds of the query range into a comparison between the distance from the sensed data to the median of the query range and the size of that range, which reduces the energy overhead. 0-1 encoding and a Hash message authentication mechanism are used for data privacy protection, and an encrypted data link technique is used for data integrity verification. Theoretical analysis and experimental results show that the method achieves both privacy and integrity of the data with high energy efficiency.
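The combination of 0-1 encoding and a hash message authentication mechanism mentioned above can be illustrated with the classic prefix-encoding trick: x > y exactly when the 1-encoding of x intersects the 0-encoding of y, and HMACing every codeword lets the comparison be carried out over MACs rather than raw values. This is a generic sketch of the technique, not the paper's PIRQ protocol; the function names and the 8-bit width are illustrative.

```python
import hashlib
import hmac

N = 8  # bit width of the compared values (illustrative)

def bits(x):
    return format(x, f'0{N}b')

def enc1(x):
    # 1-encoding: every prefix of x ending at a '1' bit
    b = bits(x)
    return {b[:i + 1] for i in range(N) if b[i] == '1'}

def enc0(y):
    # 0-encoding: for every '0' bit of y, the preceding prefix followed by '1'
    b = bits(y)
    return {b[:i] + '1' for i in range(N) if b[i] == '0'}

def mac_set(codewords, key):
    # HMAC each codeword so the comparison works on MACs, not plaintext values
    return {hmac.new(key, w.encode(), hashlib.sha256).hexdigest()
            for w in codewords}

def greater(x, y, key=b'shared-key'):
    # x > y iff the MACed 1-encoding of x meets the MACed 0-encoding of y
    return bool(mac_set(enc1(x), key) & mac_set(enc0(y), key))
```

A storage node holding only the MACed sets can answer a range comparison without learning either value, since the MAC key stays with the sensor nodes and the Sink.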
Fine-grained Location Privacy Protection System for Android Applications
PENG Rui-qing and WANG Li-na
Computer Science. 2016, 43 (Z11): 398-402.  doi:10.11896/j.issn.1002-137X.2016.11A.091
Abstract PDF(482KB) ( 48 )   
References | Related Articles | Metrics
Location privacy protection is a prominent problem in mobile location-based services. Coarse-grained permission methods prevent location data exposure through absolute authorization policies, without considering users' quality of service. In this paper, a native location-based temporal-spatial obfuscation algorithm was proposed to implement a fine-grained location privacy protection system. It protects user privacy through location cloaking while preserving quality of service. First, an interception method captures the service requests and the precise location data sent to applications, and processes the location data with the obfuscation algorithm. The cloaked, safe location data are then returned to the applications, so the user's location privacy is protected. The experimental results demonstrate the effectiveness of this method.
Study on DNA Encoding & Sine Chaos-based Meteorological Image Encryption Technology
Buhalqam AWDUN and LI Guo-dong
Computer Science. 2016, 43 (Z11): 403-406, 416.  doi:10.11896/j.issn.1002-137X.2016.11A.092
Abstract PDF(933KB) ( 53 )   
References | Related Articles | Metrics
Meteorological images may be stolen or intercepted during transmission, leading to data leakage that harms the security of national meteorological data. Encrypting before transmission and decrypting after reception is therefore a safe and reliable practice. The proposed encryption method first separates the R, G and B layers of a meteorological image to reduce the correlation between layers. It then combines DNA encoding with a sine chaotic map to encrypt the layers separately. The method lends itself to parallel computation, so the speed of encryption and decryption is greatly improved. The algorithm has a large key space, strong attack resistance and reversible encryption, and is sensitive to initial values, so it can withstand a range of attacks safely and efficiently and meets the needs of secure meteorological image transmission. In the age of big data it eases the difficulties of traditional encryption and decryption technology, and it can also be applied to grayscale and other common color images. Because a large number of meteorological images are taken every day, many images must be encrypted per unit time; in other words, high-speed encryption on top of security is needed. The method is thus well suited to fields that collect large amounts of image data, such as meteorology, and it also has great application potential in military, medical and other image secure communication fields.
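The sine-chaos half of the scheme can be sketched as a keystream generator driving a per-layer XOR. This is a minimal illustration under our own parameter choices (the DNA encoding step is omitted, and `x0`, `mu` and the byte extraction rule are assumptions, not the paper's):

```python
import math

def sine_keystream(x0, mu, n):
    # sine chaotic map: x_{k+1} = mu * sin(pi * x_k); one byte per iterate
    x, out = x0, []
    for _ in range(n):
        x = mu * math.sin(math.pi * x)
        out.append(int(abs(x) * 1e6) % 256)
    return out

def xor_layer(pixels, x0=0.3, mu=0.99):
    # XOR one colour layer with the keystream; the operation is its own inverse,
    # so calling it again with the same (x0, mu) decrypts
    ks = sine_keystream(x0, mu, len(pixels))
    return [p ^ k for p, k in zip(pixels, ks)]
```

Because each R, G, B layer gets its own independent keystream pass, the three layers can be processed in parallel, which is where the speed-up claimed above comes from.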
Network Security Audit System Based on Redirection of DNS
YIN Jun, WANG Hai-yan and PAN Xian-meng
Computer Science. 2016, 43 (Z11): 407-410.  doi:10.11896/j.issn.1002-137X.2016.11A.093
Abstract PDF(348KB) ( 185 )   
References | Related Articles | Metrics
Network security auditing is an important task in network management and an important way to ensure network security and stability. Common network security audit systems are complex and costly. In view of this, we proposed a network security audit system based on DNS redirection. By modifying the DNS configuration in the gateway, the method can monitor data without changing the structure of the network or the configuration of clients. Experimental results show that it is simple and low-cost, and helps administrators improve their network management capability.
Fast Image Encryption Algorithm Based on Novel Five Dimensional Discrete Chaos
ZHU Shu-qin, LI Jun-qing and GE Guang-ying
Computer Science. 2016, 43 (Z11): 411-416.  doi:10.11896/j.issn.1002-137X.2016.11A.094
Abstract PDF(1157KB) ( 51 )   
References | Related Articles | Metrics
Combining the Logistic map and the 3D discrete Lorenz map, a new five-dimensional discrete chaotic map was constructed. Based on this map, an image encryption algorithm with only two rounds of diffusion was proposed. In the first diffusion round the key stream is related to the plaintext, and in the second round it is related to the ciphertext of the first round. The final encryption key is therefore related to the plaintext, and the relationship among ciphertext, plaintext and key becomes complicated. Experimental results and security analysis show that the algorithm has a large key space, good statistical characteristics of the ciphertext, high sensitivity of the ciphertext to the plaintext and the key, resistance to chosen-plaintext and chosen-ciphertext attacks, and fast encryption speed. The algorithm therefore has good prospects in image secure communication and storage applications.
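The two-round diffusion structure (round-1 keystream tied to the plaintext, round-2 keystream tied to the round-1 ciphertext) can be sketched as follows. A plain Logistic map stands in for the paper's 5D chaotic map, and the seed derivation is our own simplification; in a real scheme the plaintext-dependent quantity would be carried in the key material rather than returned openly.

```python
def logistic_stream(x0, n):
    # Logistic map keystream (stand-in for the paper's 5D chaotic map)
    x, out = x0, []
    for _ in range(n):
        x = 3.99 * x * (1.0 - x)
        out.append(int(x * 1e6) % 256)
    return out

def seed(key, data):
    # fold a key plus a data-dependent sum into the map's (0, 1) interval
    return ((key + sum(data)) % 253 + 1) / 255.0

def encrypt(pixels, key):
    s1 = seed(key, pixels)           # round 1: keystream tied to the plaintext
    c1 = [p ^ k for p, k in zip(pixels, logistic_stream(s1, len(pixels)))]
    s2 = seed(key, c1)               # round 2: keystream tied to round-1 output
    c2 = [c ^ k for c, k in zip(c1, logistic_stream(s2, len(c1)))]
    return c2, (s1, s2)              # seeds travel with the key material

def decrypt(cipher, seeds):
    s1, s2 = seeds                   # undo the rounds in reverse order
    c1 = [c ^ k for c, k in zip(cipher, logistic_stream(s2, len(cipher)))]
    return [c ^ k for c, k in zip(c1, logistic_stream(s1, len(c1)))]
```

Changing a single plaintext byte changes the round-1 seed and hence the entire keystream, which is what makes the cipher sensitive to the plaintext.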
Chinese Character Generation Model for Cloud Information Security
ZHANG Li, LI Qing-sheng and LIU Quan
Computer Science. 2016, 43 (Z11): 417-421.  doi:10.11896/j.issn.1002-137X.2016.11A.095
Abstract PDF(502KB) ( 58 )   
References | Related Articles | Metrics
This paper presented a Chinese character generation model for cloud information security. By defining an effective stroke output method and a dynamic generation scheme for Chinese character structure, the model, which covers both the structure and the style of Chinese characters, can be used for information security and protection. Compared with Chinese character coding systems, this description system makes it easier to store Chinese characters on the Web and to monitor output on the client. It overcomes the shortcomings caused by the lack of information security services in the whole-character encoding of modern Chinese characters. It provides an effective strategy and method for cloud storage and cloud data security services, and at the same time lays a foundation for deeper cloud-based Chinese character information services.
Expert Recommendation Framework Based on Value Co-creation for Online Negative Word-of-mouth Handling
CAI Shu-qin and QIN Zhi-yong
Computer Science. 2016, 43 (Z11): 422-427.  doi:10.11896/j.issn.1002-137X.2016.11A.096
Abstract PDF(367KB) ( 61 )   
References | Related Articles | Metrics
Under big data circumstances, online negative word-of-mouth has unique features such as massive volume and fast dissemination, which leave enterprises short of resources and blocked from direct handling in the "enterprise-user" processing mode. Based on value co-creation theory and using user-generated content, this paper constructed a value co-creation model of online negative word-of-mouth handling from an overall perspective, covering the two phases of interaction and resource integration, and then built a model of the input resources and value benefit process from a single perspective. Finally, an expert recommendation framework for value co-creation was designed.
Distributed Collaborative Filtering Recommendation Based on User Co-occurrence Matrix Multiplier
HE Ming, WU Xiao-fei, CHANG Meng-meng and REN Wan-peng
Computer Science. 2016, 43 (Z11): 428-435.  doi:10.11896/j.issn.1002-137X.2016.11A.097
Abstract PDF(955KB) ( 141 )   
References | Related Articles | Metrics
With the coming of the big data era, the amount of application data increases sharply, which makes personalized recommendation techniques more and more prominent. However, traditional recommendation techniques applied to big data face problems such as low recommendation accuracy, long recommendation time and high network traffic, so recommendation performance degrades drastically. To address this issue, a user co-occurrence matrix recommendation strategy was proposed in this paper. The predicted rating matrix is obtained by multiplying the user similarity matrix by the item rating matrix, and a candidate item set is generated for each user by multiplying the user similarity matrix by the item similarity matrix. On this basis, traditional collaborative filtering algorithms were parallelized according to the features of a distributed processing architecture, and a distributed collaborative filtering algorithm was designed. Finally, the sub-tasks are chained with a redefined MapReduce scheme so that they execute automatically. Experimental results show that our approach achieves better prediction accuracy and efficiency in a distributed computing environment.
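The core step above (predicted ratings = user similarity matrix × rating matrix) can be sketched with NumPy. Cosine similarity and the row normalisation are our illustrative choices, not necessarily the paper's exact formulation:

```python
import numpy as np

def cosine_sim(R):
    # user-user cosine similarity from the user-item rating matrix R
    norms = np.linalg.norm(R, axis=1, keepdims=True)
    norms[norms == 0] = 1.0          # guard against all-zero users
    U = R / norms
    return U @ U.T

def predict(R):
    # predicted rating matrix = user similarity matrix x item rating matrix,
    # normalised by the total similarity mass per user
    S = cosine_sim(R)
    denom = np.abs(S).sum(axis=1, keepdims=True)
    return (S @ R) / denom
```

Both matrix products decompose naturally into independent row blocks, which is what makes the parallel MapReduce expansion described above straightforward.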
UCNK-Means Clustering Method for Position Uncertain Data Based on Connection Number
WANG Jun and HUANG De-cai
Computer Science. 2016, 43 (Z11): 436-442.  doi:10.11896/j.issn.1002-137X.2016.11A.098
Abstract PDF(360KB) ( 44 )   
References | Related Articles | Metrics
Clustering of positionally uncertain data is a new problem in uncertain data clustering. There are mainly two existing solutions. The first acquires the probability density function or probability distribution function of each uncertain object and computes expected distances by integration; the second clusters through a series of operations on interval data. However, the former suffers from the difficulty of obtaining the probability density function, complex computation and poor practicability, while the latter ignores the effect of the width of the interval data on the clustering result. Therefore, a new uncertain data clustering method, UCNK-Means, was put forward. It uses the connection number as the model of an uncertain object, defines a connection distance between objects, and compares connection distances by their situation value, overcoming the disadvantages of the two solutions above. Experiments illustrate that UCNK-Means has high clustering precision, low complexity and strong practicability.
Short Text Clustering Algorithm Combined with Context Semantic Information
ZHANG Qun, WANG Hong-jun and WANG Lun-wen
Computer Science. 2016, 43 (Z11): 443-446, 450.  doi:10.11896/j.issn.1002-137X.2016.11A.099
Abstract PDF(339KB) ( 50 )   
References | Related Articles | Metrics
Because short text suffers from insufficient information, high dimensionality and sparse features, conventional text clustering methods have limited effect on it. In view of this, this paper proposed a novel short text clustering algorithm that incorporates contextual semantic information. Firstly, drawing on the ideas of centrality and prestige from social network analysis, the algorithm improves conventional feature weighting by taking the semantic information of the context into account. On this basis it constructs the term-document matrix and performs singular value decomposition on it, mapping the original high-dimensional term vector space into a lower-dimensional latent semantic space. Finally, it clusters the short texts in the latent semantic space with an improved K-means algorithm. Experimental results show that, compared with traditional text clustering methods, the scheme effectively alleviates the insufficient information, high dimensionality and feature sparsity of short text, and greatly improves the evaluation indicators of short text clustering.
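The SVD-then-cluster pipeline above can be sketched with NumPy. This is a generic latent-semantic-analysis sketch with a plain K-means, not the paper's improved weighting or improved K-means:

```python
import numpy as np

def latent_space(term_doc, k):
    # truncated SVD: map each document into a k-dim latent semantic space
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    return (np.diag(s[:k]) @ Vt[:k]).T   # one k-dim row per document

def kmeans(X, k, iters=50, seed=0):
    # plain Lloyd's K-means on the latent document vectors
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```

Dropping the small singular values is what collapses near-synonymous term patterns together, so two short texts sharing no literal terms can still land in the same cluster.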
W-PAM Restricted Clustering Algorithm in Data Mining
ZHANG Song and ZHANG Lin
Computer Science. 2016, 43 (Z11): 447-450.  doi:10.11896/j.issn.1002-137X.2016.11A.100
Abstract PDF(237KB) ( 58 )   
References | Related Articles | Metrics
In data mining, each data object contributes differently to knowledge discovery. To capture these differences, this paper assigned a weight to each object and put forward W-PAM (Weighted Partitioning Around Medoids), a clustering algorithm based on the PAM algorithm; adding weights to the data objects in a cluster improves the accuracy of the algorithm. Moreover, the clustering effect can be further improved by exploiting the associations among data objects, so this paper also proposed a W-PAM restricted clustering algorithm that combines W-PAM with constrained clustering and enjoys the advantages of both weighting and relevance constraints. The experimental results show that the W-PAM restricted clustering algorithm can effectively improve the clustering result and the accuracy of the algorithm.
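The weighting idea can be illustrated on 1-D data with an exhaustive weighted k-medoids search. This is a brute-force sketch of the weighted objective only, not the actual PAM swap heuristic or the relevance constraints:

```python
from itertools import combinations

def w_pam(points, weights, k):
    # choose the k medoids minimising the weighted sum of distances
    # (exhaustive search; fine for a toy 1-D example)
    def cost(medoids):
        return sum(w * min(abs(p - m) for m in medoids)
                   for p, w in zip(points, weights))
    return min(combinations(points, k), key=cost)
```

Raising an object's weight makes any distance from it to its medoid more expensive, so heavily weighted objects pull a medoid onto themselves, which is exactly how the weights steer the clustering.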
Time-weighted Hybrid Recommender Algorithm
ZOU Ling-jun, CHEN Ling and LI Juan
Computer Science. 2016, 43 (Z11): 451-454.  doi:10.11896/j.issn.1002-137X.2016.11A.101
Abstract PDF(225KB) ( 104 )   
References | Related Articles | Metrics
This paper proposed a novel time-weighted hybrid recommender algorithm. The algorithm is divided into an online component and an offline component. The offline component derives the similar group of the target user from information such as ratings and builds the object profile model, while the online component builds the user profile model from the behavior of the target user and its neighbors. Because user preferences drift over time, the method applies an attenuation coefficient in the user profile model to improve recommendation quality. It uses a sliding window divided into several equal segments, produces personalized recommendations in each segment, and updates the user profile model and the neighbors at fixed intervals. The experimental results show that the algorithm reflects the drift of user preferences over time, and achieves better effectiveness and more satisfactory results.
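The attenuation coefficient mentioned above is typically an exponential decay on rating age. A minimal sketch, assuming a half-life parameterisation (the 30-day half-life and the function names are our illustrative choices, not the paper's):

```python
import math

def time_weight(t_now, t_rated, half_life=30.0):
    # attenuation: a rating loses half its weight every `half_life` days
    lam = math.log(2) / half_life
    return math.exp(-lam * (t_now - t_rated))

def weighted_score(ratings, t_now, half_life=30.0):
    # ratings: list of (rating, day_rated); time-weighted average preference
    num = sum(r * time_weight(t_now, t, half_life) for r, t in ratings)
    den = sum(time_weight(t_now, t, half_life) for _, t in ratings)
    return num / den
```

Recent ratings thus dominate the profile while old ones fade smoothly instead of being dropped at a hard cutoff.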
Forecasting Method for Fashion Clothing Demand Based on Kernel Functions Technology
MENG Zhi-qing, MA Ke and ZHENG Ying
Computer Science. 2016, 43 (Z11): 455-460, 465.  doi:10.11896/j.issn.1002-137X.2016.11A.102
Abstract PDF(529KB) ( 51 )   
References | Related Articles | Metrics
Demand forecasting for short-life-cycle fashion clothing has long plagued the production and inventory management of garment brand companies and remains unsolved. In this paper, we used the kernel function technique of nonlinear machine learning and put forward a prediction method for short-life-cycle fashion clothing. Combining the characteristics of garment companies with the application of a data warehouse, a garment demand forecasting model based on kernel functions was established. We gave a calculation algorithm and carried out analysis and verification on actual data. Data experiments show that the proposed method achieves higher prediction accuracy for fashion clothing demand and is suitable for dynamic replenishment in clothing companies. It is also of practical significance for company inventory control.
Study on Financial Crisis Prediction Model with Data Envelopment Analysis and Data Mining
ZHAO Zhi-fan and CAO Qian
Computer Science. 2016, 43 (Z11): 461-465.  doi:10.11896/j.issn.1002-137X.2016.11A.103
Abstract PDF(194KB) ( 54 )   
References | Related Articles | Metrics
Previous studies on corporate financial crisis prediction could only predict whether an enterprise faces a financial crisis, not the degree of the crisis, because financial crisis was defined solely by whether the enterprise is an ST enterprise. In view of this, this paper used data envelopment analysis to refine the notion of enterprise financial crisis, then selected important predictive variables using association rule technology, and finally constructed an enterprise financial crisis prediction model using decision tree techniques; the validity of the classification and the accuracy of prediction were verified. The empirical study indicates that a financial crisis prediction model based on data envelopment analysis and data mining can not only maintain high accuracy, but can also predict the degree of enterprise financial crisis, so its prediction results are more valuable.
Research on Causal Association Rule Mining Based on Constraint Network
CUI Yang and LIU Chang-hong
Computer Science. 2016, 43 (Z11): 466-468.  doi:10.11896/j.issn.1002-137X.2016.11A.104
Abstract PDF(153KB) ( 76 )   
References | Related Articles | Metrics
Causal association rules are an important and special type of knowledge in a knowledge base, revealing deeper knowledge than general association rules. The characteristics of causal relations and of causal association rule mining are briefly introduced first. On this basis, constraint network theory is used to construct a causal relation structure of the variables in a practical system, so as to solve the problem of how to restrict the causal variable set in the initial step of mining. The causal variable set, as well as the types of the variables, can easily be inferred from the causal relation structure, so the complexity of mining is reduced and the results are more accurate. The introduction of the constraint network makes the process of causal association rule mining more complete.
New Method of Microblog Classification Based on Feature Weighted Language Model
CUI Wei-na
Computer Science. 2016, 43 (Z11): 469-471.  doi:10.11896/j.issn.1002-137X.2016.11A.105
Abstract PDF(143KB) ( 43 )   
References | Related Articles | Metrics
Microblogging, as a new social medium, has developed rapidly. While this rapid development brings convenience to people, it also leaves them swimming in an ocean of information. Aiming at the information overload caused by the growth of microblogs, microblog retrieval has become an important research topic. This paper proposed a new microblog retrieval method based on a feature-weighted language model, which combines the statistical and semantic characteristics of microblogs to solve the microblog retrieval problem. Experiments were performed on a real annotated data set extracted from Sina Weibo, and the comparative results show that the proposed method is an effective retrieval method.
Research on Characteristics of Thinking in Era of Big Data
HONG Jing
Computer Science. 2016, 43 (Z11): 472-473, 505.  doi:10.11896/j.issn.1002-137X.2016.11A.106
Abstract PDF(109KB) ( 50 )   
References | Related Articles | Metrics
In recent years big data, massive data sets that require proprietary platforms to extract value and support decision analysis, has become a focus of both the scientific and the business communities. The scale of data a country possesses and its ability to use that data will become significant components of its comprehensive national strength, and the possession and control of data will become a point of contention among countries and enterprises. This paper focuses on the research and analysis of five characteristics, pointing out that the economic benefit of enterprises is the main driving force of big data development and that current big data processing technology makes people's work more intelligent. By studying the characteristics and modes of thinking in the era of big data, we concluded that the biggest change of the big data era is that research will be conducted as data-intensive science.
Research about FSM Test Sequence Generation Algorithm
LI Yuan-ping, LI Hua and ZHAO Jun-lan
Computer Science. 2016, 43 (Z11): 474-481.  doi:10.11896/j.issn.1002-137X.2016.11A.107
Abstract PDF(582KB) ( 84 )   
References | Related Articles | Metrics
In testing engineering, the basic step of the related test methods is to construct test sequences from the test generation tree. In this article, we added a constraint set to the traditional test generation tree, making the tree conform better to practice, and considered the influence of the constraint set on the state separation sequences. We generated the characterizing set, the state identification set and UIO sequences, and proposed or improved the corresponding algorithms. At the same time, we extended the test method to NFSMs and proposed algorithms to generate prefix sequences and state separation sets. The algorithm combines the state separation matrix with the FSM product and is used in the adaptive testing of NFSM models, making the test theory of FSMs more complete. We built a test tool that implements these algorithms and verifies their capability. Finally, future research work was considered.
Uncertain RFID Data Stream Cleaning Strategy
LIU Yun-heng, LIU Yao-zong and ZHANG Hong
Computer Science. 2016, 43 (Z11): 482-485.  doi:10.11896/j.issn.1002-137X.2016.11A.108
Abstract PDF(206KB) ( 70 )   
References | Related Articles | Metrics
The original RFID data stream contains a lot of noise and uncertainty, so the data must be cleaned before use, and the cleaning strategy guarantees the quality of the cleaning. In this paper, a new method for cleaning RFID data streams was proposed. The maximum entropy principle is introduced into the cleaning strategy to select weights for the attributes of the RFID tuples being cleaned. A cleaning cost analysis is then performed, based on the time consumption and error of each cleaning node, to decide the best cleaning method. Simulation results show that this cleaning strategy improves the cleaning efficiency and accuracy of the RFID data stream.
Software Failure Prediction Model Based on Quasi-likelihood Method
ZHANG Xiao-feng and ZHANG De-ping
Computer Science. 2016, 43 (Z11): 486-489, 494.  doi:10.11896/j.issn.1002-137X.2016.11A.109
Abstract PDF(384KB) ( 42 )   
References | Related Articles | Metrics
Software defect prediction is an important direction of software reliability research. Because many factors influence software failure and the relationships among them are complicated, the joint distribution function commonly used to describe the analysis model is difficult to determine in practical applications, which may directly affect software defect prediction. In this paper, we proposed a quasi-likelihood method (PCA-QLM) that first uses PCA to select the main metrics and then builds the defect prediction model. In our model, the mean function and variance function of the dependent variable are used to estimate the parameters and then predict defects. By comparison with probit regression and logistic regression forecasting models on two real datasets, Eclipse JDT and Eclipse PDE, we conclude that PCA-QLM is applicable to software failure prediction and performs better than the other models.
Analysis of Embedded Software Based on Static Model with Simplified Grammar and Sentence Depth
LI Zhen-xiang, LIU Chong-wei, YANG Guang-yi and LIU Jin-shuo
Computer Science. 2016, 43 (Z11): 490-494.  doi:10.11896/j.issn.1002-137X.2016.11A.110
Abstract PDF(243KB) ( 49 )   
References | Related Articles | Metrics
In order to overcome the platform dependence of embedded software, this paper presented an embedded software analysis method based on a static structure model. Before control flow and data flow analysis, a lexical/syntax analysis method with simplified grammar and sentence depth was designed to analyze the embedded software. Taking open source smart meter software as a case study and artificially injected errors as the test objects, repeated 30 times and compared with the popular static analysis tools PC-Lint and Splint, the method accurately located 91% of the errors, between PC-Lint's 95% and Splint's 90%. The result indicates that the correctness rate of our method is acceptable. Meanwhile, by removing platform-dependent operations through the simplified syntax analysis, our method is independent of the development environment. It is applicable to compiled C programs, including embedded software.
Effective Image File Storage Technique Using Improved Data Deduplication
LI Feng, LU Ting-ting and GUO Jian-hua
Computer Science. 2016, 43 (Z11): 495-498.  doi:10.11896/j.issn.1002-137X.2016.11A.111
Abstract PDF(317KB) ( 60 )   
References | Related Articles | Metrics
In the cloud computing environment, the growth of Infrastructure as a Service leads to a sharp increase in virtual machines and virtual machine images. For example, Amazon Elastic Compute Cloud (EC2) has 6521 public virtual machine image files, which brings a great challenge to the management of the cloud environment, in particular the storage space consumed by the duplicate data in a large number of images. To solve this problem, this paper proposed a fixed-block deduplication storage scheme for image files. When an image file is stored, its fingerprint is calculated first and compared with the fingerprint database; if it already exists in the database, it is replaced with a pointer, otherwise the image file is split into fixed blocks and stored. To this end, we designed the image file metadata format and an MD5 index table for image files. Experiments show that storing an image file with identical content costs only the metadata and a second scan, and that for a group of images of the same system version but with different software, the deduplication rate is about 58%. Our scheme is therefore very effective.
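The fingerprint-then-pointer logic above can be sketched as an in-memory toy: fixed-size blocks, an MD5 fingerprint index, and per-file metadata that is just a list of fingerprints. The block size and function names are illustrative, and this ignores the paper's on-disk metadata format:

```python
import hashlib

BLOCK = 4096  # fixed block size used to split an image file

def dedup_store(data, index):
    """Split `data` into fixed blocks and store each unique block once.
    Returns the file's metadata: a list of MD5 fingerprints."""
    refs = []
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        fp = hashlib.md5(block).hexdigest()
        if fp not in index:
            index[fp] = block      # new content: store the block itself
        refs.append(fp)            # duplicate: keep only the pointer
    return refs

def restore(refs, index):
    # rebuild the file by following the fingerprint pointers
    return b"".join(index[fp] for fp in refs)
```

Storing the same image a second time adds nothing to the block index; only the cheap fingerprint list is written, which is why identical images cost little more than their metadata.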
Research on Evolution Similarity Measurement of Component-based Software Based on Multi-dimensional Evolution Properties
ZHONG Lin-hui, LI Jun-jie, XIA Jin and XUE Liang-bo
Computer Science. 2016, 43 (Z11): 499-505.  doi:10.11896/j.issn.1002-137X.2016.11A.112
Abstract PDF(716KB) ( 64 )   
References | Related Articles | Metrics
By measuring and comparing the evolution similarity of different component-based software, software developers can understand software evolution and predict its tendency. However, most traditional research focuses on the change of a single property during the software evolution process; even studies involving multi-dimensional evolution properties do not relate them to evolution similarity and lack the ability to measure it at a higher level. This paper proposed an evolution similarity measurement model for component-based software based on multi-dimensional evolution properties, which can measure the evolution similarity of different atomic components or systems (composite components) over selected evolution attributes. Experiments with a prototype show that the method can help software maintainers judge evolution similarity.
Metrics for Software Structure Complexity Based on Software Weighted Network
TIAN He and ZHAO Hai
Computer Science. 2016, 43 (Z11): 506-508.  doi:10.11896/j.issn.1002-137X.2016.11A.113
Abstract PDF(136KB) ( 41 )   
References | Related Articles | Metrics
Software structure complexity has received close attention. As software grows in scale, traditional metric methods have difficulty keeping up with software development. In order to measure software structure complexity effectively, we analyzed the relationship between strength and closeness in the software weighted network, and calculated the general characteristics of a software sample collection based on complex network theory. Firstly, we selected software with larger parameter values and studied the nodes with larger strength and closeness in the corresponding software weighted networks. Secondly, we contrasted and analyzed the attributes and functions of the nodes whose rankings differed most. Finally, we evaluated the two structural complexity metrics and concluded that strength measures the local structure complexity of the software, while closeness measures its overall structure complexity.
Business Process Model Optimized Analysis Method Based on Configuration Change
LIU Hong, LIU Xiang-wei and WANG Li-li
Computer Science. 2016, 43 (Z11): 509-512.  doi:10.11896/j.issn.1002-137X.2016.11A.114
Abstract PDF(236KB) ( 41 )   
To meet the rapidly changing demands of the business process management market, optimized analysis of business process models is increasingly important. Most existing methods optimize a statically located change region and therefore have limitations. Based on Petri nets and behavior profiles, this paper analyzed the changed parts of a business process dynamically from the behavioral perspective: the matching relation between logs and the model, together with dynamic localization, determines the change region of the model; taking the interface part into account, fitness and behavioral appropriateness are used to optimize the model under configuration change. The behavioral-profile consistency degree is then used to select the optimal model. Finally, an example verifies the feasibility of the method.
Process Improvement in Software Development Based on Swarm Intelligence
ZHANG Ya-qi, ZHANG Ying-chao and LI Bing
Computer Science. 2016, 43 (Z11): 513-516.  doi:10.11896/j.issn.1002-137X.2016.11A.115
Abstract PDF(260KB) ( 46 )   
Complex software systems are becoming a new, ubiquitous form of software and play an increasingly important role in social, economic and military activities. However, traditional software engineering methods encounter difficulties when applied to complex software development. We first analyzed the characteristics of the complex software development process, in which we found implicit biological intelligence and a co-evolution mechanism. We then proposed an evolution model based on swarm intelligence. Finally, based on the model, we gave measures to improve the development process of complex software systems.
Method of Optimization for Polynomial Datapaths
LI Dong-hai, ZHU Xiao-chen, FAN Zhong-lei and YANG Xiao-jun
Computer Science. 2016, 43 (Z11): 517-519.  doi:10.11896/j.issn.1002-137X.2016.11A.116
Abstract PDF(193KB) ( 36 )   
To implement high-level synthesis of polynomial datapaths, an ordered, reduced and canonical weighted generalized list is used to represent the polynomial. Based on the weighted generalized list, a method for optimizing polynomial datapaths was proposed: it traverses the nodes of the weighted generalized list top-down, iteratively identifies the corresponding additive and multiplicative cuts, and generates an admissible cut sequence, from which the corresponding schedulable data flow graph is obtained. Experimental results demonstrate that the schedulable data flow graph obtained by the proposed method achieves lower latency than existing methods.
Data Exchange Based on QR Code in Physically Isolated Internal and External Network Environment
HAN Lin, ZHANG Chun-hai and XU Jian-liang
Computer Science. 2016, 43 (Z11): 520-522.  doi:10.11896/j.issn.1002-137X.2016.11A.117
Abstract PDF(289KB) ( 37 )   
To exchange highly confidential work data between physically isolated internal and external networks, we studied the QR code generation and parsing process and found that QR codes can carry data, have low cost, and can travel with a mobile carrier. We therefore proposed using QR codes to solve the data exchange problem in such special cases. Because a single QR code carries only a limited amount of data, we used the Protocol Buffers format and the LZMA compression algorithm to simplify and compress the data that a QR code transmits, and combined multiple QR codes to handle large amounts of data. This paper also discussed the application prospects of QR-code-based data exchange.
Analyzing Scheduling Based on Priority in Concurrent Systems
ZHU Zhen-yu, ZHANG Shi, JIANG Jian-min, WU Ya-zhou and YANG Qi-fan
Computer Science. 2016, 43 (Z11): 523-528, 535.  doi:10.11896/j.issn.1002-137X.2016.11A.118
Abstract PDF(629KB) ( 43 )   
Due to the complexity of concurrent systems, it is difficult for engineers to develop a complex system directly as a whole. They often apply informal principles such as modularity, stepwise refinement and information hiding to guide system development, but these guidelines are abstract and cannot guarantee correct system decomposition. This paper focused on priority-based system decomposition: we proposed a decomposition method and proved its correctness. First, we modeled the system with an event-based behavioral model. Next, based on this model, we formally defined the schedule, the scheduling policy and the correctness of a scheduling policy. We then proposed a method for decomposing a scheduling policy and proved its correctness. Finally, following the decentralized method, we developed a toolkit that supports the event-based behavioral model for system modeling and scheduling-policy decomposition. An experiment demonstrates that these results can help engineers design correct and efficient scheduling policies that realize decomposition.
Query Optimized Operation Model of XML Based on Cost
HUANG Shou-meng
Computer Science. 2016, 43 (Z11): 529-531.  doi:10.11896/j.issn.1002-137X.2016.11A.119
Abstract PDF(117KB) ( 35 )   
With the development of XML database technology, research on XML query optimization is increasing, but it remains a weak link of XML databases. In this paper, we identified the atomic operations in the traditional query estimation model and used a statistical learning method to find the functional relationship between operating costs and their influencing factors, so as to establish a cost-based operation model.
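The abstract does not specify the statistical learning method. As a minimal sketch, assuming an atomic operation's cost is roughly linear in one influencing factor (e.g. input cardinality; the measurements below are hypothetical), an ordinary least-squares fit illustrates how such a cost function could be learned:

```python
def fit_linear(xs, ys):
    """Least-squares fit of cost = a*x + b for a single influencing factor."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

# Hypothetical measurements: input cardinality vs. observed cost (ms)
# of one atomic operation (e.g. a structural join).
card = [100, 200, 400, 800]
cost = [12.0, 22.0, 41.0, 82.0]
a, b = fit_linear(card, cost)
print(f"cost ~= {a:.3f}*card + {b:.2f}")
print("predicted cost for 600 tuples:", round(a * 600 + b, 1))
```

A practical model would fit one such function per atomic operation and combine them when estimating a whole query plan.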
Research of Fault-tolerant Booting for On-board Computer Based on Anti-radiation Loongson
HUANG Chao, CHEN Yong and LIN Bao-jun
Computer Science. 2016, 43 (Z11): 532-535.  doi:10.11896/j.issn.1002-137X.2016.11A.120
Abstract PDF(321KB) ( 75 )   
To protect the on-board computer (OBC) against the negative impact of the space environment, a new OBC architecture was proposed that takes a Loongson processor as its core and uses FLASH memory to store the boot program. The fault-tolerant design combines hardware redundancy with error detection and correction (EDAC) and memory scrubbing, which markedly improve memory data reliability. On the basis of the new hardware design, OBC reliability was analyzed and a fault-tolerant boot method was described. The method shields faulty memory chips and software errors through hardware redundancy and software backup. Experimental results show that the solution protects the OBC against common boot-process faults and effectively improves OBC reliability.
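The paper's EDAC is implemented in hardware; as a software illustration of the underlying principle only, a Hamming(7,4) single-error-correcting code plus a scrubbing pass that writes corrected words back (so single-bit upsets cannot accumulate into uncorrectable double errors) can be sketched as:

```python
def hamming74_encode(nibble):
    """Encode 4 data bits into a 7-bit Hamming codeword (positions 1..7)."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    # codeword positions: 1:p1 2:p2 3:d0 4:p3 5:d1 6:d2 7:d3
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]
    word = 0
    for i, b in enumerate(bits):
        word |= b << i
    return word

def hamming74_correct(word):
    """Return (corrected_nibble, flipped_position); position 0 means no error."""
    bits = [(word >> i) & 1 for i in range(7)]
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)
    if syndrome:
        bits[syndrome - 1] ^= 1          # flip the erroneous bit
    nibble = bits[2] | (bits[4] << 1) | (bits[5] << 2) | (bits[6] << 3)
    return nibble, syndrome

# Scrubbing sketch: periodically read, correct and write back each word.
memory = [hamming74_encode(n) for n in (0b1011, 0b0110)]
memory[0] ^= 1 << 4                      # simulate a single-event upset
for addr, w in enumerate(memory):
    data, err = hamming74_correct(w)
    if err:
        memory[addr] = hamming74_encode(data)   # write back corrected word
print([hamming74_correct(w)[0] for w in memory])  # data intact
```

Real on-board EDAC typically uses wider SEC-DED codes over 32- or 64-bit words, but the correct-then-write-back scrubbing loop is the same idea.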
Design and Implementation of Safety Analysis Tool Based on Avionics System Architecture Model
XU Wen-hua and ZHANG Yu-ping
Computer Science. 2016, 43 (Z11): 536-541.  doi:10.11896/j.issn.1002-137X.2016.11A.121
Abstract PDF(1156KB) ( 194 )   
Common mode analysis and zonal safety analysis must be conducted on safety-critical avionics systems to derive new separation requirements. As avionics systems become increasingly integrated, traditional common mode and zonal safety analysis methods cannot ensure the completeness of the separation requirements, since they rely mainly on how well the analysts understand the system. Meanwhile, system requirements are hard to trace because safety analysts and system designers understand the system differently; especially when the design changes frequently, safety analysis results are often inaccurate and inconsistent. To address these problems, a safety analysis tool based on an avionics system architecture model was designed and implemented. Fault trees are built automatically by tracing the data signal paths in the physical architecture; common mode analysis and zonal safety analysis are then conducted on the generated fault tree, yielding a common mode checklist and zonal separation requirements. A case study on a cockpit display system indicates that the tool can build fault trees automatically from an avionics system architecture model described in SysML and can mark the components that need to be isolated, ensuring the completeness of the common mode and zonal safety analysis results.
μC/OS-II Based Highly Reliable RTOS Kernel
ZHU Yi-an and LIN He
Computer Science. 2016, 43 (Z11): 542-546, 550.  doi:10.11896/j.issn.1002-137X.2016.11A.122
Abstract PDF(578KB) ( 59 )   
A highly reliable embedded partition operating system kernel based on μC/OS-II was designed and implemented. To guarantee the execution time of partitions with high safety-critical levels and improve system reliability, a new partition scheduling algorithm with variable execution time in each cycle was proposed. It not only guarantees that partitions with higher safety-critical levels execute first, but also achieves good resource utilization and task schedulability. The kernel introduces access control to ensure the information security of the system, and a bit-operation-based access control algorithm was also proposed. Finally, algorithm analysis and experiments demonstrate the effectiveness and timeliness of the algorithms.
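The abstract does not show the bit-operation-based access control algorithm. As a minimal sketch of the general idea, with hypothetical partitions, resources and permission bits, encoding rights as bit masks reduces each access check to one AND and one comparison:

```python
# Permission bits (hypothetical encoding; the paper's actual layout is not shown).
READ, WRITE, EXEC, CONFIG = 0b0001, 0b0010, 0b0100, 0b1000

# Per-partition capability masks and per-resource required masks (illustrative).
partition_caps = {"flight_ctrl": READ | WRITE | EXEC, "maintenance": READ}
resource_req = {"actuator_bus": WRITE, "telemetry_log": READ}

def allowed(partition: str, resource: str) -> bool:
    """Grant access iff every required bit is present in the partition's mask.
    A single AND plus compare is why bit encoding keeps the check O(1)."""
    req = resource_req[resource]
    return partition_caps[partition] & req == req

print(allowed("flight_ctrl", "actuator_bus"))   # True
print(allowed("maintenance", "actuator_bus"))   # False
```

In a kernel the masks would live in per-partition control blocks so the check adds only a few instructions to each system call.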
Application of BP Neural Network Based on Genetic Algorithms in Prediction Model of City Water Consumption
YAN Xu, LI Si-yuan and ZHANG Zheng
Computer Science. 2016, 43 (Z11): 547-550.  doi:10.11896/j.issn.1002-137X.2016.11A.123
Abstract PDF(269KB) ( 152 )   
Accurate prediction of urban water consumption is of great importance for the management and improvement of water supply systems. Traditional BP neural network prediction models are prone to local minima, difficult weight adjustment and repeated trial-and-error tuning of parameters. Given these limitations, a genetic algorithm based on biological evolutionary theory was used to improve the BP neural network algorithm, producing a new method, GA-BP. Meanwhile, since unsuitable selection of input variables in traditional BP algorithms leads to poor error accuracy, this paper analyzed the variation pattern of urban water consumption to obtain suitable input variables. A prediction model was then built and validated by simulation on historical data, and finally applied to a water supply company in Shenzhen. The results indicate that GA-BP is reliable and practical for predicting urban water consumption.
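In GA-BP schemes, the genetic algorithm typically searches for good network weights and thresholds before (or instead of) pure gradient training, escaping the local minima that plague plain BP. The sketch below shows only the GA half, on a toy two-parameter linear model standing in for the network; the population size, operators and data are illustrative assumptions, not the paper's settings:

```python
import random

random.seed(0)
xs = [i / 2 for i in range(10)]
ys = [2 * x + 1 for x in xs]            # toy target in place of water-use data

def mse(ind):
    """Fitness: mean squared error of the candidate parameters (w, b)."""
    w, b = ind
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def evolve(pop_size=60, gens=150):
    pop = [[random.uniform(-5, 5), random.uniform(-5, 5)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=mse)
        survivors = pop[:pop_size // 2]           # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            child = [(a + b) / 2 + random.gauss(0, 0.3)   # blend crossover
                     for a, b in zip(p1, p2)]             # + Gaussian mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=mse)

w, b = evolve()
print(round(w, 2), round(b, 2), "MSE:", round(mse((w, b)), 4))
```

In full GA-BP the chromosome holds every weight and bias of the BP network, and the GA's best individual seeds a final round of back-propagation.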
Attitude Control Research of Quad-rotor Aircraft Based on Anti-windup PID
ZHAO Yu-ying, JIANG Xiang-ju and ZENG You-han
Computer Science. 2016, 43 (Z11): 551-553, 563.  doi:10.11896/j.issn.1002-137X.2016.11A.124
Abstract PDF(582KB) ( 107 )   
Aiming at the attitude control problem of the quad-rotor aircraft, a PID anti-windup attitude controller was designed. Based on a simplified mathematical model of the quad-rotor, a PID controller and a PID anti-windup controller were designed for the four independent channels of the aircraft: vertical velocity, pitch rate, roll rate and yaw rate. In the MATLAB/SIMULINK environment, simulations of the two algorithms controlling the aircraft attitude were carried out. The simulation results show that the PID anti-windup method outperforms plain PID and achieves a good control effect on the aircraft. The PID anti-windup algorithm was then used on a physical quad-rotor experimental platform, further verifying the effectiveness of the algorithm.
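The quad-rotor model and gains are not reproduced here, but the effect of anti-windup can be illustrated on a toy first-order plant with actuator saturation. Conditional integration is used below as one common anti-windup scheme; the paper's exact scheme is not stated in the abstract, and all gains are illustrative:

```python
def simulate(anti_windup, steps=400, dt=0.01):
    """PI control of a first-order plant with actuator saturation.

    With anti-windup, integration is frozen while the actuator is
    saturated, avoiding the overshoot caused by integrator windup.
    Gains and plant are illustrative, not the paper's quad-rotor model.
    """
    kp, ki = 2.0, 5.0
    u_max = 1.0
    y, integ, setpoint = 0.0, 0.0, 2.0
    peak = 0.0
    for _ in range(steps):
        e = setpoint - y
        u_unsat = kp * e + ki * integ
        u = max(-u_max, min(u_max, u_unsat))      # actuator saturation
        if not (anti_windup and u != u_unsat):    # conditional integration
            integ += e * dt
        y += (-y + 3.0 * u) * dt                  # first-order plant
        peak = max(peak, y)
    return peak

print("peak without anti-windup:", round(simulate(False), 3))
print("peak with anti-windup:   ", round(simulate(True), 3))
```

Comparing the two peaks shows the windup overshoot that the anti-windup logic removes; the same structure applies per channel on the quad-rotor rates.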
Simulation Research of Industrial Production Index Based on Neural Network of ARIMA
LI Meng-gang, ZHOU Chang-sheng, LIAN Lian and LI Wen-rui
Computer Science. 2016, 43 (Z11): 554-556, 567.  doi:10.11896/j.issn.1002-137X.2016.11A.125
Abstract PDF(371KB) ( 48 )   
The industrial production index is an important indicator of industrial economic conditions over a given period of time, and the leading indicator for macroeconomic early-warning research. In this paper, an ARIMA neural network model was built by combining ARIMA theory with neural network theory, and a simulation study of the industrial production index was carried out using monthly time series data from 1997 to 2015. First, the index was seasonally adjusted to remove seasonal factors from the time series. Second, the 1997-2015 monthly industrial production index was simulated with the ARIMA neural network model, and the results show a good training effect. Finally, the model was used to simulate the industrial production index from January to June 2016 and obtain its simulated values.
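The abstract does not specify the seasonal adjustment procedure. A simple additive decomposition (detrend with a linear fit, then average each month's residual) illustrates the idea on hypothetical data; production systems would typically use a method such as X-12-ARIMA instead:

```python
# Hypothetical monthly series: linear trend plus a repeating seasonal pattern.
series = [100 + 2 * t + [10, -5, 0, 8, -8, 3, -3, 6, -6, 2, -2, 4][t % 12]
          for t in range(48)]

def seasonal_indices(data, period=12):
    """Average deviation of each month from the series' linear trend."""
    n = len(data)
    t_mean = (n - 1) / 2
    d_mean = sum(data) / n
    slope = (sum((t - t_mean) * (d - d_mean) for t, d in enumerate(data))
             / sum((t - t_mean) ** 2 for t in range(n)))
    trend = [d_mean + slope * (t - t_mean) for t in range(n)]
    resid = [d - tr for d, tr in zip(data, trend)]
    return [sum(resid[m::period]) / len(resid[m::period])
            for m in range(period)]

idx = seasonal_indices(series)
adjusted = [d - idx[t % 12] for t, d in enumerate(series)]  # deseasonalized
print([round(i, 1) for i in idx])
```

The deseasonalized series is what the ARIMA/neural network stage would then be trained on.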
Random Forest Based Telco Out-calling Recommendation System
ZHU Yi-jian, ZHANG Zheng-qing, HUANG Yi-qing, BAI Rui-rui and YAN Jian-feng
Computer Science. 2016, 43 (Z11): 557-563.  doi:10.11896/j.issn.1002-137X.2016.11A.126
Abstract PDF(364KB) ( 187 )   
Out-calling recommendation is widely used by telecommunication (telco) operators to recommend products and services to customers. In this paper, we developed an automatic out-calling recommendation system based on telco big data. The system uses data-mining methods to extract customer behaviors and a machine-learning algorithm to predict the probability that a customer will accept a recommended product. Unlike most recommendation systems, which use matrix factorization (MF), sparse-feature classification, neural networks, etc., this paper used random forest, not only because the algorithm is easy to parallelize and fast to train, but also because the rules of the resulting decision trees are easy to explain. These characteristics make random forest suitable for a telco recommendation system. Our system is implemented on top of Hadoop, Impala and Spark, with random forest as the core algorithm for computing the acceptance probability of a recommendation from user behavior features. Online testing shows that the proposed system achieves a 41% improvement over the currently deployed random out-calling method. We also analyzed customer behavior according to the feature importances output by the random forest.
Transformer Fault Monitoring Expert System Based on Rule Base
LI Feng and XIA Li
Computer Science. 2016, 43 (Z11): 564-567.  doi:10.11896/j.issn.1002-137X.2016.11A.127
Abstract PDF(134KB) ( 42 )   
To address the lack of a state equation in transformer on-line monitoring, we designed a rule-based expert system for real-time transformer fault monitoring. First, expert knowledge is encoded in the form of rules. Second, to speed up the system and make rule storage and management convenient, the rules are normalized and stored in a unified form in the knowledge base. Finally, this paper proposed an algorithm that automatically eliminates redundant rules and obtains a minimal rule base. Experiments show that the design simulates well the reasoning process of a human expert on transformer faults.
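The minimization algorithm itself is not given in the abstract. Subsumption-based pruning is one common way to remove redundant rules; a sketch with hypothetical fault symptoms and conclusions:

```python
# A rule is (frozenset of condition literals, conclusion). A rule is redundant
# if another rule with the same conclusion has a subset of its conditions:
# the more general rule already fires whenever the specific one would.
# Symptom and conclusion names below are hypothetical.
rules = [
    (frozenset({"gas_H2_high", "temp_high"}), "overheating"),
    (frozenset({"gas_H2_high"}), "overheating"),            # more general
    (frozenset({"gas_C2H2_high"}), "arc_discharge"),
    (frozenset({"gas_C2H2_high", "noise_high"}), "arc_discharge"),
]

def minimize(rule_base):
    """Keep only rules not subsumed by a more general rule."""
    kept = []
    for cond, concl in rule_base:
        if any(c2 <= cond and k2 == concl and (c2, k2) != (cond, concl)
               for c2, k2 in rule_base):
            continue                      # subsumed: drop the redundant rule
        kept.append((cond, concl))
    return kept

for cond, concl in minimize(rules):
    print(sorted(cond), "->", concl)
```

A normalized rule representation like this also makes the redundancy check a cheap set operation.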
Application and Research on Gesture Recognition by Kinect Sensors
YU Ze-sheng, CUI Wen-hua and SHI Tian-wei
Computer Science. 2016, 43 (Z11): 568-571.  doi:10.11896/j.issn.1002-137X.2016.11A.128
Abstract PDF(745KB) ( 178 )   
To address the complex operation of current smart home systems, provide a simple control method and improve the user experience, Kinect-based gesture recognition methods were studied and integrated into the human-computer interaction (HCI) system of a smart home. In this system, users can customize gestures to control household equipment intelligently. The template-matching gesture recognition method is based on the dynamic time warping (DTW) algorithm and uses the Kinect depth camera to obtain skeleton depth images and gestures. Actual gesture recognition experiments show that DTW-based gesture recognition is feasible and effective: the best identification distance is 2-2.5 m in front of the Kinect, and the highest recognition accuracy reaches 96%.
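The DTW algorithm at the core of the template matching can be sketched in a few lines. Real Kinect templates are sequences of 3-D skeleton joint positions; the toy traces below are 1-D stand-ins for one joint coordinate:

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

# Hypothetical gesture templates and an observed, slightly slower "wave".
template_wave = [0, 1, 2, 1, 0, 1, 2, 1, 0]
template_push = [0, 0, 1, 3, 3, 3, 1, 0, 0]
observed = [0, 1, 1, 2, 1, 0, 1, 2, 1, 0]

scores = {name: dtw_distance(observed, t)
          for name, t in [("wave", template_wave), ("push", template_push)]}
print(min(scores, key=scores.get))   # prints wave
```

Because DTW warps the time axis, the slower observed gesture still matches its template exactly, which is why it suits user-customized gestures performed at varying speeds.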
Android Client Design & Implementation of University Online Flea Market
TIAN Bai-yu, ZHUANG Hai-tao and QIAN Xu
Computer Science. 2016, 43 (Z11): 572-574, 590.  doi:10.11896/j.issn.1002-137X.2016.11A.129
Abstract PDF(858KB) ( 56 )   
To address the lack of real-time efficiency in traditional physical flea markets, a campus second-hand trading application, the university online flea market, was designed and developed on the Android platform. Reliable trading, real-time updating and efficient item search are achieved through registration with college student IDs and information classification. Meanwhile, an asynchronous loading method was adopted, and interfaces and abstract classes were defined to manage the communication between the server and the client. Compared with physical flea markets and online second-hand trading platforms, the proposed application not only improves trading efficiency for college students, but also improves reliability and security.
KingCloud: Object Oriented Archiving System
MIAO Jia-jia, FU Yin-jin and MAO Han-dong
Computer Science. 2016, 43 (Z11): 575-577, 596.  doi:10.11896/j.issn.1002-137X.2016.11A.130
Abstract PDF(0KB) ( 65 )   
With the continuous advance of informatization, large amounts of data accumulate in production systems, creating an archiving demand. Meanwhile, as data types become increasingly rich, big data technology enhances the handling of unstructured data. This paper designed and realized KingCloud, an intelligent object archiving system, which classifies text files through document classification technology, provides logical views of documents, and acquires content elements through image identification, video key-frame extraction and other technologies. For the overall storage structure, document prefetching, memory buffering, data layout and strategy perception can be optimized by combining semantic studies of the file system, and data can be intelligently classified, summarized, discovered, predicted and analyzed, remarkably improving the service ability, service quality and service performance of the storage system.
Research of GIS Second Development Based on OLE Technology
WANG Bin, YUE Peng, LI Jie and ZHANG Li-hai
Computer Science. 2016, 43 (Z11): 578-580, 600.  doi:10.11896/j.issn.1002-137X.2016.11A.131
Abstract PDF(407KB) ( 40 )   
This paper briefly introduced the functions of MapInfo and MapBasic, discussed the advantages and disadvantages of three methods of GIS secondary development, then described in detail the OLE-Automation-based integrated secondary development of MapInfo in the Visual Basic development environment, and gave an example. The OLE Automation method was shown to provide an efficient approach for GIS secondary development applications.
Evaluation Indexes of Information Service Quality in C2 System
WANG Bing and QUAN Ji-chuan
Computer Science. 2016, 43 (Z11): 581-584.  doi:10.11896/j.issn.1002-137X.2016.11A.132
Abstract PDF(233KB) ( 45 )   
The information service quality of a C2 system determines the effectiveness of the system to some extent. The quality elements that may influence information service effectiveness, such as information quality, service function and service performance, were analyzed, and evaluation indexes for the information service quality of C2 systems were put forward. According to the index characteristics, three kinds of evaluation methods were designed: quantitative, semi-quantitative and qualitative evaluation. Furthermore, the evaluation algorithms and steps of some typical indexes were provided and discussed in detail. The results provide theory and methods to support C2 users in evaluating information service quality.
Application of Service Bundling in Churn Predict System
ZHANG Zheng-qing, ZHU Yi-jian, BAI Rui-rui, HUANG Yi-qing and YAN Jian-feng
Computer Science. 2016, 43 (Z11): 585-590.  doi:10.11896/j.issn.1002-137X.2016.11A.133
Abstract PDF(508KB) ( 55 )   
Customer churn is one of the biggest problems that telco operators face and needs to be solved. Operators have developed many churn prediction systems for different scenarios. Service number bundling means that, during the service period, customers bind their phone numbers to third-party service companies such as banks, e-commerce sites and supermarkets. We found that customers bundle their numbers with various third-party service companies during the service period, and these companies may send SMS messages to customers at any time. Customers who are about to churn gradually cancel their service bundlings, so service bundling features play an important role in predicting potential churners. To help decision makers formulate customer retention campaigns and reduce the churn probability, we developed a churn prediction model using the random forest algorithm and used logistic regression to reduce the service number features. Experimental results show that service number soft bundling can improve the analysis and prediction performance of the churn prediction system.
Weighted Prediction Method Based on Sliding Window and Pattern Matching
WANG Li-zhen, ZHOU Li-hua and DENG Shi-kun
Computer Science. 2016, 43 (Z11): 591-596.  doi:10.11896/j.issn.1002-137X.2016.11A.134
Abstract PDF(826KB) ( 109 )   
With the deepening of China's reform and opening up and the sustainable development of society and economy, social conflicts have become complex and diverse, and public security faces unprecedented challenges. A scientific prediction of future social stability based on historical public security data would make public security management work far more effective. Data mining extracts or discovers interesting patterns or rules hidden in large data sets and makes scientific judgments or predictions based on them. So far, research on social stability early warning is rare, and the accuracy of prediction results has always been a difficult problem. In this paper, a novel, highly accurate prediction method based on sliding windows and pattern matching was proposed. Extensive experiments and actual applications show that the proposed algorithm is simple, stable and highly accurate.
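The paper's exact weighting scheme is not given in the abstract. The sliding-window-plus-pattern-matching idea can be sketched as follows, with hypothetical incident counts and an assumed inverse-distance similarity weight:

```python
def predict_next(series, window=3):
    """Match the most recent window against history; the values that followed
    similar past windows vote for the next value, weighted by similarity."""
    recent = series[-window:]
    weighted_sum = weight_total = 0.0
    for i in range(len(series) - window):          # candidate past windows
        past = series[i:i + window]
        dist = sum(abs(p - r) for p, r in zip(past, recent))
        w = 1.0 / (1.0 + dist)                     # closer pattern => more weight
        weighted_sum += w * series[i + window]     # the value that followed it
        weight_total += w
    return weighted_sum / weight_total

# Hypothetical weekly incident counts with a repeating pattern.
history = [3, 5, 8, 3, 5, 8, 3, 5, 8, 3, 5]
print(round(predict_next(history), 2))
```

Past windows identical to the current one dominate the weighted vote, so the prediction leans toward the value that historically followed the matching pattern.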
Research of Greenhouse Precise Management System Based on Cloud Platform
NIU Ping-juan, ZHANG Hao-wei and TIAN Hai-tao
Computer Science. 2016, 43 (Z11): 597-600.  doi:10.11896/j.issn.1002-137X.2016.11A.135
Abstract PDF(861KB) ( 43 )   
To compensate for users' lack of time and experience when planting in greenhouses, and to solve the problem of greenhouse cluster management, this paper designed a greenhouse precise management system based on a cloud platform. The user can capture greenhouse information through the remote cloud platform and control the greenhouse via the relevant actuators. The system also realizes intelligent planting through a fuzzy control system, which manages the crop with a single key. At the same time, the user can capture, analyze, manage and control the environmental parameters of the greenhouse cluster through the cloud platform. Experimental results show that the greenhouse precise management system is feasible and effective.
Application of Genetic Algorithm in Public Transit Dispatchers
DING Yong, JIANG Feng and WU Yu-yan
Computer Science. 2016, 43 (Z11): 601-603.  doi:10.11896/j.issn.1002-137X.2016.11A.136
Abstract PDF(250KB) ( 46 )   
To address technical problems in building Taizhou's intelligent public transport system, a genetic algorithm (GA) is applied to bus scheduling optimization. A mathematical model of bus scheduling optimization was built, with an objective function based on minimizing the costs of the bus company and passengers and maximizing social benefit, and the genetic algorithm was applied to solve the model. By setting different model parameters, the rationality and scientific soundness of the model were verified through MATLAB simulation experiments. Experimental results show that the optimized scheduling model can reduce the operating costs of public transport companies, improve passenger satisfaction, and maximize social and economic benefits.
Data Acquisition and Transmission in High Speed Real-time System
WANG Jian-zhong and YANG Lu
Computer Science. 2016, 43 (Z11): 604-606.  doi:10.11896/j.issn.1002-137X.2016.11A.137
Abstract PDF(252KB) ( 48 )   
Real-time data acquisition is the key to control systems, and how to effectively improve system data processing, transmission and signal control is a major problem to be solved. Taking the processing and calculation of high-speed strapdown inertial navigation data as an example, this research gives a detailed introduction to the joint operation of DSP chips and adopts dual-port RAM data transmission technology. Data reading uses FPGA programmable technology and supports data collection, transmission and calculation for a laser strapdown inertial navigation system. The research achieves navigation data processing for heading, roll, pitch, velocity and position, with output data update rates of 500-1000 Hz, improving the carrier's attitude tracking accuracy.