Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
Current Issue
Volume 44 Issue Z6, 01 December 2017
Survey of Multimodal Delay Tele-interaction
WANG Hai-peng, HUANG Tian-biao, REN Chong-shuai and YAO Wu-yi
Computer Science. 2017, 44 (Z6): 1-6.  doi:10.11896/j.issn.1002-137X.2017.6A.001
Multimodal tele-interaction aims to use a variety of interaction modalities in a collaborative way, exploiting the complementary characteristics and information among modalities, to communicate and understand the user's interaction intent, improve interaction efficiency, enhance the naturalness of interaction, and enable users to complete remote tasks as they expect. Recently, with applications of multimodal tele-interaction in space exploration, deep-sea exploitation and tele-surgery, delay has been introduced into the tele-interaction process, produced mainly by communication latency and limited bandwidth. Significant delay causes asynchronous interaction and the lack of interaction modalities, which fundamentally affects the user's behavioral, psychological and cognitive characteristics, breaks the continuity, real-time responsiveness and naturalness of interaction, degrades the interactive user experience, and makes validity difficult to guarantee. This article defined the concept of multimodal tele-interaction, presented its key applications, and discussed the key technologies, including the delay issue, asynchronous interaction and the lack of interaction modalities. Finally, we presented future research challenges.
Semi-supervised and Ensemble Learning: A Review
CAI Yi, ZHU Xiu-fang, SUN Zhang-li and CHEN A-jiao
Computer Science. 2017, 44 (Z6): 7-13.  doi:10.11896/j.issn.1002-137X.2017.6A.002
Semi-supervised learning (SSL) and ensemble learning are two important paradigms in machine learning research. SSL attempts to achieve strong generalization by exploiting both labeled and unlabeled instances, while ensemble learning aims to improve the performance of weak learners by combining multiple classifiers. SSL ensemble learning is a novel paradigm which can improve the generalization performance of a classifier by combining SSL and ensemble learning. Firstly, the development of SSL ensemble learning was analyzed, and it was found that SSL ensemble learning derives from disagreement-based SSL. Then, SSL ensemble learning methods were classified into two categories, SSL-based ensemble learning and ensemble-based SSL, and a detailed description of the main methods was given. Finally, the current research status of SSL ensemble learning was summarized and some issues worth further study were presented.
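As a toy illustration of the disagreement-based idea this survey starts from, the sketch below (our own illustrative construction, not a method from the surveyed papers) trains a small ensemble of nearest-centroid learners on class-stratified bootstrap samples and promotes into the labeled pool only those unlabeled points on which all members agree:

```python
import numpy as np

def fit_centroids(X, y):
    """Nearest-centroid base learner: one centroid per class."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def predict_centroids(model, X):
    classes, centroids = model
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

def ensemble_self_train(X_l, y_l, X_u, n_members=3, rounds=5, seed=0):
    """Toy disagreement-based SSL: an ensemble labels unlabeled points,
    and only unanimously labeled points join the labeled pool."""
    rng = np.random.default_rng(seed)
    for _ in range(rounds):
        if len(X_u) == 0:
            break
        models = []
        for _ in range(n_members):
            # class-stratified bootstrap so every member sees every class
            idx = np.concatenate([rng.choice(np.flatnonzero(y_l == c),
                                             size=(y_l == c).sum())
                                  for c in np.unique(y_l)])
            models.append(fit_centroids(X_l[idx], y_l[idx]))
        preds = np.array([predict_centroids(m, X_u) for m in models])
        agree = (preds == preds[0]).all(axis=0)   # unanimous votes only
        if not agree.any():
            break
        X_l = np.vstack([X_l, X_u[agree]])
        y_l = np.concatenate([y_l, preds[0][agree]])
        X_u = X_u[~agree]
    return X_l, y_l

# Two well-separated clusters; only one labeled seed per class.
X_l = np.array([[0.0, 0.0], [5.0, 5.0]])
y_l = np.array([0, 1])
X_u = np.array([[0.2, -0.1], [4.8, 5.2], [0.1, 0.3], [5.1, 4.9]])
X_new, y_new = ensemble_self_train(X_l, y_l, X_u)
```

On this toy data, all four unlabeled points are labeled unanimously in the first round, so the labeled pool grows from 2 to 6 points.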
Survey on Cross-language Named Entity Translation Pairs Extraction
WANG Zhi-juan and LI Fu-xian
Computer Science. 2017, 44 (Z6): 14-18.  doi:10.11896/j.issn.1002-137X.2017.6A.003
Cross-language named entity translation pairs are very important for machine translation, cross-language information retrieval and other tasks. We surveyed cross-language named entity translation pair extraction from three aspects. Firstly, transliteration is vital for extracting such pairs. Rules, machine learning and deep learning have been used for named entity transliteration in many languages; transliteration models based on deep learning perform excellently and will be the key method in future studies. Secondly, named entity alignment based on parallel/comparable corpora is a useful way to obtain cross-language named entity translation pairs; the construction and annotation of cross-language corpora are bottlenecks for this line of research. Thirdly, cross-language named entity translation pairs can be extracted by Web mining; extraction based on cross-language information retrieval and knowledge bases such as Wikipedia will be the trend in the future.
Survey on Visual Tracking Algorithms Based on Deep Learning Technologies
JIA Jing-ping and QIN Yi-hua
Computer Science. 2017, 44 (Z6): 19-23.  doi:10.11896/j.issn.1002-137X.2017.6A.004
Visual tracking is a fundamental subject in the field of computer vision. Classical tracking methods are not good at handling complex backgrounds with illumination variation, large changes of target size and posture, occlusion and so on. Meanwhile, the introduction of deep learning technologies opens a new way for visual tracking research. At present there is relatively little literature on visual tracking based on deep learning, both in China and abroad. In order to attract more researchers in the field of visual tracking to explore deep learning and to promote research on visual tracking algorithms, this overview briefly reviewed the research status of visual tracking and deep learning. Then we focused on the literature on visual tracking algorithms based on deep learning and discussed their advantages and disadvantages. Finally, we proposed directions for further research and the prospects of visual tracking algorithms based on deep learning.
Survey of Virtualization Access Control Research Based on Xen
KE Wen-jun, DONG Bi-dan and GAO Yang
Computer Science. 2017, 44 (Z6): 24-28.  doi:10.11896/j.issn.1002-137X.2017.6A.005
Virtualization is the core technology of cloud computing. With its wide application and rapid development, security threats have become increasingly prominent and seriously hinder the development of virtualization, making them an important issue to be resolved. Academic circles have put forward various solutions, including access control technology, which is viewed as an important barrier for virtualization security and has attracted a wide range of attention and research. This paper started with a review and comparison of the development of access control technology, followed by an analysis of security issues in the Xen virtualization environment and the corresponding access control techniques. Finally, current domestic and foreign research on virtualization security access control was summarized.
Survey of Methods for Parameter Estimation in Topic Models
DU Hui, CHEN Yun-fang and ZHANG Wei
Computer Science. 2017, 44 (Z6): 29-32.  doi:10.11896/j.issn.1002-137X.2017.6A.006
Topic models extract a low-dimensional topic representation from high-dimensional sparse word data sets by fast machine learning algorithms, achieving word-document clustering. Studying model parameter estimation is an important task in this field. The paper detailed the probabilistic latent semantic analysis model, the latent Dirichlet allocation model and the basic methods of parameter estimation in topic models. In addition, the paper gave an experimental analysis of perplexity in topic models.
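Perplexity, the evaluation measure analyzed above, is simply the exponentiated negative mean per-token log-likelihood; a minimal sketch with made-up toy distributions (the topic mixture and topic-word matrix below are illustrative assumptions, not the paper's data):

```python
import numpy as np

def perplexity(log_probs_per_word):
    """Perplexity from per-token log-likelihoods:
    exp(-(1/N) * sum_i log p(w_i)).  Lower is better."""
    lp = np.asarray(log_probs_per_word, dtype=float)
    return float(np.exp(-lp.mean()))

# A model assigning every token probability 1/8 has perplexity exactly 8.
uniform_ppl = perplexity(np.log(np.full(1000, 0.125)))

# For an LDA-style model, p(w|d) = sum_k theta_dk * phi_kw.
theta = np.array([0.7, 0.3])            # topic mixture of one document
phi = np.array([[0.5, 0.3, 0.2],        # topic-word distributions
                [0.1, 0.1, 0.8]])
doc = [0, 2, 2, 1]                      # token ids of a held-out document
word_probs = (theta @ phi)[doc]
ppl = perplexity(np.log(word_probs))
```

This is the quantity typically reported on held-out documents when comparing parameter estimation methods.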
Event Sensing and Multimodal Event Vein Generation Leveraging Social Media
XU Cheng-hao, GUO Bin, OUYANG Yi, ZHAI Shu-ying and YU Zhi-wen
Computer Science. 2017, 44 (Z6): 33-36.  doi:10.11896/j.issn.1002-137X.2017.6A.007
With the development of information technology and the popularity of social media, ordinary users have turned from information receivers into information producers; everyone can share what happens around them and repost what they are interested in, which makes the information stored in social media grow rapidly. This large amount of data contains abundant and valuable records of social events, and how to extract valuable information from it has become one of the most important problems in the information field. This paper introduced a new research field, crowd-powered event sensing and multimodal summarization, to address this problem. It aims at sensing and analyzing events through the multimodal data existing in social media, so as to predict and summarize events effectively. This paper described the model of events, the history of event sensing, the key technologies, challenges and wide application fields, summarized the development of event sensing and summarization based on social media analysis, and looked into the future.
Present Situation and Prospect of Data-driven Based Fault Diagnosis Technique
ZHANG Ni, CHE Li-zhi and WU Xiao-jin
Computer Science. 2017, 44 (Z6): 37-42.  doi:10.11896/j.issn.1002-137X.2017.6A.008
Data-driven fault diagnosis methods were surveyed and categorized, including multivariate statistical methods, machine learning, manifold learning and so on. The principles, research progress and applications of the different methods were analyzed and described. The problems to be solved and recent research hotspots were addressed finally.
Review of Virtual Reality Technology Based Application Research in English Teaching for Special Purposes
ZHANG Ning, LIU Ying-chun, SHEN Zhi-peng and GUO Chen
Computer Science. 2017, 44 (Z6): 43-47.  doi:10.11896/j.issn.1002-137X.2017.6A.009
The relationship between virtual reality technology and English for special purposes was introduced in this paper. By illustrating the applications of virtual reality to English for special purposes in recent years, theoretical explorations in this field, as well as related application models, the advantages and disadvantages of relevant domestic and overseas studies were compared. The significance and application value of virtual reality research in English for special purposes were analyzed, and how future virtual reality technology can better serve computer-assisted English for special purposes was discussed. Finally, future research orientations were outlined.
Analysis of Influence of Domain Knowledge on Development of Big Data
LENG Li-hua, LIAO Yi-jie and LIAO Hong-zhi
Computer Science. 2017, 44 (Z6): 48-49.  doi:10.11896/j.issn.1002-137X.2017.6A.010
Big data is becoming a hot topic in today's society. Beyond the continuous exploration in the IT field, it also continues to affect economic and social progress. In the face of the hype about big data in all walks of life, we should think seriously about the problems faced in the research and application of big data. Domain knowledge is very important for a new technology: if we do not understand the domain knowledge of each field, the development and application of big data in different fields will face many obstacles. This paper analyzed several aspects of the research and application of big data, and pointed out that big data processing involves data collection, data management, data analysis, data modeling and data application, with domain knowledge at the core of the whole processing chain. As the key to the application of big data is data analysis, and data analysis is based on domain knowledge, breakthroughs in big data processing must come through domain knowledge.
Deep Learning for Early Diagnosis of Alzheimer’s Disease Based on Intensive AlexNet
LV Hong-meng, ZHAO Di and CHI Xue-bin
Computer Science. 2017, 44 (Z6): 50-60.  doi:10.11896/j.issn.1002-137X.2017.6A.011
More and more people suffer from Alzheimer's disease (AD) in China. AD is characterized by loss of memory and language ability and is associated with aging. Currently, the number of Chinese patients ranks first in the world, so early diagnosis of AD is particularly urgent. Studies have shown that mild cognitive impairment (MCI) has a high probability of converting to AD; MCI may be a transition between healthy control (HC) and AD. With the advent of the big-data era, machine learning algorithms are more and more popular in disease diagnosis, and deep learning helps us classify AD, MCI and HC. The magnetic resonance imaging (MRI) data set is from the Alzheimer's Disease Neuroimaging Initiative (ADNI). The pre-processing of the raw brain MRI was directed by Beijing Tiantan Hospital affiliated to Capital Medical University. Images after dimensionality reduction are learned automatically by a deep convolutional neural network (CNN). Current network architectures are not designed for medical images, so the experiments focus on improving existing networks to achieve good diagnostic results; AlexNet, an excellent architecture for image classification, was chosen for improvement. In this paper, we proposed 4 algorithms to improve the original model according to the characteristics of AD. Data ran in parallel on 8 NVIDIA Tesla K80 GPUs of a Sugon W780-G20. We then obtained 4 classifiers: AD vs. HC, AD vs. MCI, MCI vs. HC and AD vs. MCI vs. HC. Models were trained in no more than 30 minutes on more than 70000 images. Finally, the algorithms were evaluated by drawing ROC curves and computing sensitivity and specificity, and good results were demonstrated.
Spam Filter Algorithm with Improved Porter Stemmer and Kernels Methods
SUN Han-bo and FENG Guo-can
Computer Science. 2017, 44 (Z6): 61-67.  doi:10.11896/j.issn.1002-137X.2017.6A.012
At present, statistical learning methods have been widely used in spam classification, among which Bayesian classifiers and SVM are favored. To face the challenge of spam, a number of novel ideas and improved algorithms have been proposed. We proposed an improved Porter Stemmer algorithm to extract text features thoroughly and tailored it for spam classifiers. Compared with the original algorithm, linear kernel SVM, Gaussian kernel SVM, polynomial SVM and Naïve Bayes classifiers obtain 63.7%, 63.1%, 61.3% and 11.4% decreases in error rate respectively based on the improved Porter Stemmer. Besides, experimental results show that SVM has significant advantages over Naïve Bayes when applied to spam classification, and SVMs also obtain greater improvements facilitated by the improved Porter Stemmer. We also conducted a brief analysis from the perspective of linguistics and illustrated the potential value of spam classifiers with personalized customization.
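The Porter Stemmer reduces inflected words to a common stem before feature extraction. As a hedged illustration (the paper's specific improvements are not described here), this is roughly what the classic step-1a plural rules of the original algorithm look like:

```python
def stem_step1a(word):
    """Porter stemmer step 1a, handling plurals:
    SSES -> SS, IES -> I, SS -> SS, S -> (drop)."""
    if word.endswith("sses"):
        return word[:-2]          # caresses -> caress
    if word.endswith("ies"):
        return word[:-2]          # ponies -> poni
    if word.endswith("ss"):
        return word               # caress -> caress
    if word.endswith("s"):
        return word[:-1]          # cats -> cat
    return word

# Folding plural variants together shrinks the spam feature space.
tokens = [stem_step1a(w) for w in "free offers caresses".split()]
```

The full algorithm adds further suffix-stripping steps with stem-measure conditions; an improved stemmer would refine rules like these before the SVM/Naïve Bayes stage.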
Research on Lower Extremity Exoskeleton Robot Servo Control Algorithm
ZHAO Han-bin, ZHAO Zi-yi, ZHAO Jiang-hai and WANG Yu
Computer Science. 2017, 44 (Z6): 68-69.  doi:10.11896/j.issn.1002-137X.2017.6A.013
The servo control algorithm plays a decisive role in the assistive effect of a lower extremity exoskeleton robot. Based on the zero-power control algorithm and the movement characteristics of the lower limb exoskeleton robot, a terminal servo control algorithm was proposed and its process was given. Finally, the effectiveness of the proposed algorithm is verified by Matlab simulation.
Attribute Reduction Based on Variable Precision Rough Sets and Concentration Boolean Matrix
LI Yan, GUO Na-na and ZHAO Hao
Computer Science. 2017, 44 (Z6): 70-74.  doi:10.11896/j.issn.1002-137X.2017.6A.014
Attribute reduction is one of the most important research topics in rough set theory. Traditional attribute reduction based on the discernibility matrix can only handle consistent decision tables, so the concept of an improved discernibility matrix was proposed to deal effectively with both consistent and inconsistent decision tables. Further, the condensed Boolean matrix was defined to represent the discernibility matrix in order to save storage space and improve the efficiency of matrix generation. Based on this previous work, the idea of variable precision was used to select some inconsistent objects when developing the discernibility matrix, so that more information can be considered in generating attribute reductions. The experimental results show that the proposed method has advantages in both running speed and classification accuracy.
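For readers unfamiliar with discernibility matrices, here is a minimal sketch of the classical construction that the improved and condensed variants above build on (a textbook illustration, not the paper's method):

```python
def discernibility_matrix(objects, decisions):
    """Classical discernibility matrix: for each pair of objects with
    different decision values, record the set of condition-attribute
    indices on which the two objects differ."""
    n = len(objects)
    M = {}
    for i in range(n):
        for j in range(i + 1, n):
            if decisions[i] != decisions[j]:
                M[(i, j)] = {a for a, (u, v) in
                             enumerate(zip(objects[i], objects[j]))
                             if u != v}
    return M

# Tiny consistent decision table: 3 objects, 2 condition attributes.
objs = [(0, 1), (0, 0), (1, 1)]
dec  = [1, 0, 0]
M = discernibility_matrix(objs, dec)
```

A reduct is then any minimal attribute set hitting every non-empty entry of `M`; here both attributes are needed, since entries {1} and {0} must each be covered.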
Assignment Reduction of Intuitionistic Fuzzy Ordered Decision Information System
SANG Bin-bin and XU Wei-hua
Computer Science. 2017, 44 (Z6): 75-79.  doi:10.11896/j.issn.1002-137X.2017.6A.015
Based on intuitionistic fuzzy sets, a new order relation was established by weighting the intuitionistic fuzzy numbers. Intuitionistic fuzzy ordered decision information systems were established with the traditional order relation and the new order relation respectively. Then, on the basis of the definitions of the assignment function and the assignment coordination set, a judgement theorem for assignment reduction and the discernibility matrix were given for the system. Furthermore, a specific method of assignment reduction was established for an intuitionistic fuzzy ordered decision information system. Finally, the effectiveness of the method is verified by an example.
N-ary Chinese Open Entity-relation Extraction
LI Ying, HAO Xiao-yan and WANG Yong
Computer Science. 2017, 44 (Z6): 80-83.  doi:10.11896/j.issn.1002-137X.2017.6A.016
Traditionally, information extraction (IE) has focused on satisfying precise, narrow, pre-specified requests from small homogeneous corpora. Shifting to a new domain requires the user to name the target relations and to manually create new extraction rules or hand-tag new training examples. Open information extraction (OIE) overcomes the limitations of traditional IE techniques, which train individual extractors for every single relation type. Existing studies have paid much attention to English OIE; however, few studies have been reported on OIE for Chinese. This paper presented an N-ary Chinese OIE system (N-COIE). N-COIE preprocesses sentences using natural language processing tools, extracts entity-relation groups from the preprocessed sentences, and finally filters the entity-relation groups using a trained logistic regression classifier. Empirical results show the effectiveness of the proposed system.
Cross-media Semantic Similarity Measurement Using Bi-directional Learning Ranking
LIU Shuang, BAI Liang, YU Tian-yuan and JIA Yu-hua
Computer Science. 2017, 44 (Z6): 84-87.  doi:10.11896/j.issn.1002-137X.2017.6A.017
With the rapid development of Internet technology, the presentation forms of network information have extended from simple text to images, voice, video and other multimedia expressions. In the field of multimedia information retrieval, traditional methods often represent all media modalities in the same feature space. Existing methods take either one-to-one paired data or uni-directional ranking examples. In this paper, we considered learning bi-directional ranking examples for cross-media retrieval. Analysis of experimental results on the Wikipedia dataset demonstrates the better performance of the proposed method.
Manifold Learning Algorithm Based on Compact Set Sub-coverage
ZHANG Shao-qun
Computer Science. 2017, 44 (Z6): 88-91.  doi:10.11896/j.issn.1002-137X.2017.6A.018
Since 2000, a series of nonlinear dimensionality reduction methods have been emerging, and Isomap in manifold learning is one of the representatives. The algorithm can reflect the global structure of the data set and is simple and efficient, but it has the shortcomings that the low-dimensional manifold must be a convex set and the computational complexity is large. L-Isomap successfully reduces the computational complexity, but the landmarks are mostly selected at random, which makes the algorithm unstable. In this paper, according to the classical theorems that a bounded closed set is equivalent to a compact set in a finite-dimensional space and that every open cover of a compact set has a finite sub-cover, we analyzed the topology of the region of the data set and selected a series of landmarks. This method has low computational complexity and is more stable than L-Isomap. In addition, it weakens the condition that the data set be a convex set to a compact set (bounded closed set), which avoids enlarging the "hollow" error on incomplete manifolds.
Attention of Bilinear Function Based Bi-LSTM Model for Machine Reading Comprehension
LIU Fei-long, HAO Wen-ning, CHEN Gang, JIN Da-wei and SONG Jia-xing
Computer Science. 2017, 44 (Z6): 92-96.  doi:10.11896/j.issn.1002-137X.2017.6A.019
With the wide use of deep learning in machine reading comprehension in the past few years, machine reading comprehension has developed rapidly. In order to improve the semantic comprehension and inference abilities of machine reading comprehension, a Bi-LSTM model with bilinear-function attention was proposed, which performs well in extracting the semantics of questions, candidates and articles and producing the correct answers. We tested the model on CET-4 and CET-6 listening text materials. The results show that the accuracy of word-level input is about 2% higher than that of sentence-level input. Besides, the accuracy can increase by about 8% after adding an inference structure with multi-layer attention.
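A bilinear attention layer of the kind named in the title scores each Bi-LSTM output h_i against a question vector q as h_i^T W q, normalizes the scores with a softmax, and returns a weighted context vector. A small numpy sketch (dimensions and the identity weight matrix are illustrative assumptions):

```python
import numpy as np

def bilinear_attention(H, q, W):
    """Bilinear attention: score_i = h_i^T W q,
    alpha = softmax(scores), context = sum_i alpha_i * h_i."""
    scores = H @ W @ q                     # (T,) one score per time step
    scores = scores - scores.max()         # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()
    return alpha, alpha @ H                # weights, context vector

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 4))   # 5 time steps of Bi-LSTM outputs, dim 4
q = rng.normal(size=4)        # question encoding
W = np.eye(4)                 # bilinear form; identity = plain dot product
alpha, ctx = bilinear_attention(H, q, W)
```

In a trained model, W is learned, letting the attention match question and passage representations that live in different subspaces.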
Equivalent Representation of Compressed Sensing Optimization Problem and Its Penalty Function Method
MENG Zhi-qing, XU Lei-yan, JIANG Min and SHEN Rui
Computer Science. 2017, 44 (Z6): 97-98.  doi:10.11896/j.issn.1002-137X.2017.6A.020
Firstly, the definition of an equivalent representation of the compressed sensing optimization problem was given, and it was proved that an optimal solution to the equivalent representation problem is an optimal solution to the compressed sensing problem. Then an objective penalty function with smoothness of order higher than two was defined, its iterative algorithm was given, and the convergence of the algorithm was proved. By minimizing the objective penalty function, an approximate optimal solution of the compressed sensing optimization problem can be obtained. This method provides a new tool for studying and solving practical compressed sensing problems.
Asynchronous Collaborative Chicken Swarm Optimization with Mutation Based on Cognitive Diversity
XIAO Liang and LIU Si-tong
Computer Science. 2017, 44 (Z6): 99-104.  doi:10.11896/j.issn.1002-137X.2017.6A.021
The standard chicken swarm optimization is improved in three aspects: the chick-update formula, the optimization method, and mutation based on cognitive diversity. A self-learning factor is added to the chick-update formula: chicks are assumed to learn from their own roosters while also exploring the unknown space. An asynchronous collaborative optimization strategy with inverted order is adopted to improve the capacity for solving high-dimensional problems. Self-cognitive diversity is fully exploited to make the pbests mutate with a certain probability, leading the swarm to escape from local optima and converge to the global optimum. Benchmark function tests indicate that the improved algorithm (ICSO) outperforms other optimization algorithms. Inversion of model seismic data also shows strong global search ability, high precision and strong noise resistance.
Optimized Research for Task-driven Grouping Based on Hybrid Genetic Algorithm
LI Hao-jun, DU Zhao-hong and QIU Fei-yue
Computer Science. 2017, 44 (Z6): 105-108.  doi:10.11896/j.issn.1002-137X.2017.6A.022
Applying intelligent algorithms in the education field to realize automatic grouping has great significance. For dividing groups according to an optimal grouping scheme in task-driven teaching under a network learning environment, the characteristic differences between learners and the degree of task difficulty were considered, a mathematical model of the task-driven grouping optimization problem was built, and a task-driven grouping optimization strategy based on a hybrid genetic algorithm was proposed. We conducted a simulation experiment with the hybrid genetic algorithm on the MATLAB 7.0 platform. Experimental results show that the optimization of task-driven grouping based on the hybrid genetic algorithm is feasible and effective.
Fuel Flow Missing-value Imputation Method Based on Standardized Euclidean Distance
CHEN Jing-jie and CHE Jie
Computer Science. 2017, 44 (Z6): 109-111.  doi:10.11896/j.issn.1002-137X.2017.6A.023
To reduce the negative impact of missing data on the accuracy of statistical inference of aircraft fuel consumption, an estimation method based on standardized Euclidean distance was proposed to solve the fuel flow missing-data problem. The nearest neighbors are chosen by the standardized Euclidean distance between QAR data samples, and entropy is then utilized to obtain the weights of the nearest neighbors. The missing value is estimated by the weighted average fuel flow of the nearest neighbors. Experiments prove that this method is valid for fuel consumption missing-data problems, and its performance is higher than that of other imputation methods based on ordinary Euclidean distance, Mahalanobis distance or reduced relational grade.
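The pipeline above, standardize, find nearest neighbors, average their fuel flows, can be sketched as follows. Note the neighbor weighting here is simple inverse-distance, a stand-in for the paper's entropy-based weights, and the data values are made up for illustration:

```python
import numpy as np

def impute_fuel_flow(X, y, x_query, k=3):
    """Estimate a missing target (e.g. fuel flow) from the k nearest
    complete records under standardized Euclidean distance, averaging
    their targets with inverse-distance weights."""
    std = X.std(axis=0)
    std[std == 0] = 1.0                        # guard constant columns
    d = np.sqrt((((X - x_query) / std) ** 2).sum(axis=1))
    nn = np.argsort(d)[:k]                     # k nearest neighbors
    w = 1.0 / (d[nn] + 1e-12)                  # inverse-distance weights
    w = w / w.sum()
    return float(w @ y[nn])                    # weighted average target

# Toy records: two flight-condition features, observed fuel-flow targets.
X = np.array([[1.0, 10.0], [1.1, 11.0], [5.0, 50.0], [5.2, 49.0]])
y = np.array([100.0, 102.0, 300.0, 305.0])
est = impute_fuel_flow(X, y, np.array([1.05, 10.5]), k=2)
```

Standardizing each feature by its standard deviation keeps large-scale columns (here the second feature) from dominating the neighbor selection.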
Fake Chapters Recognition in Shiji Based on Text Classification Methods
ZHAO Jian-ming, LI Chun-hui, YAO Nian-min and YAO Nian-jun
Computer Science. 2017, 44 (Z6): 112-114.  doi:10.11896/j.issn.1002-137X.2017.6A.024
Text classification methods based on machine learning are used to study how to distinguish the fake chapters in Shiji. Shiji is the first general history book of our country, and distinguishing its fake chapters has always been one of the main problems in its study. The traditional methods are subjective and cannot support quantitative analysis, and even many famous results are contradictory. In this paper, a method of distinguishing fake chapters was presented which enables quantitative research on this subject. This method can also be used in many other historical studies.
Algorithm of SLAM Based on Robust EKF
LIU Pei-feng and WANG Jian
Computer Science. 2017, 44 (Z6): 115-118.  doi:10.11896/j.issn.1002-137X.2017.6A.025
The key problem of autonomous robot operation is self-positioning, and the Kalman filter can be used to estimate the robot's location. This paper first introduced the model and key technologies of SLAM, and then presented the theory of the extended Kalman filter (EKF). By analyzing the effect of errors on the results of the standard EKF model, a robust EKF model was presented. This model builds an equivalent Kalman gain matrix by introducing redundancy and predicted residuals. An iterative scheme was suggested for solving the robust EKF SLAM solution. Finally, both the standard EKF-SLAM model and the robust EKF-SLAM model were implemented, and the autonomous robot's moving trajectory was simulated. Simulation results show that the suggested algorithm gives correct localization results.
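For reference, this is the standard EKF measurement update that the robust variant modifies; the sketch below is the textbook update (the paper's equivalent gain built from predicted residuals is not reproduced here), demonstrated on a one-dimensional position state:

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """Standard EKF measurement update.
    x, P: predicted state and covariance; z: measurement;
    h: measurement function; H: its Jacobian; R: measurement noise."""
    y = z - h(x)                        # innovation (residual)
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# 1-D position observed directly with unit prior and noise variances:
x = np.array([0.0])
P = np.array([[1.0]])
H = np.array([[1.0]])
R = np.array([[1.0]])
x_new, P_new = ekf_update(x, P, np.array([2.0]), lambda s: H @ s, H, R)
```

With equal prior and measurement variances the gain is 0.5, so the update lands halfway between prediction and measurement; a robust scheme would down-weight the gain when the residual is abnormally large.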
Research on Tax Forecasting Model Based on PSO and Least Squares Support Vector Machine
ZHANG Shu-juan, DENG Xiu-qin and LIU Bo
Computer Science. 2017, 44 (Z6): 119-122.  doi:10.11896/j.issn.1002-137X.2017.6A.026
Aiming at the nonlinearity and instability of tax revenue forecasting and the complexity of the multiple economic factors that affect it, this paper used the least squares support vector regression machine to predict the tax revenue of Conghua, Guangdong, and established the corresponding mathematical model. As the model parameters directly affect the quality of the support vector machine, the idea of particle swarm optimization was incorporated, and PSO was used for parameter optimization to ensure the accuracy and stability of the forecasting model. The simulation results show that, with respect to each reference model, the accuracy of the least squares support vector regression machine with PSO parameter optimization improves significantly, illustrating the validity and practicability of the model.
Improved Firefly Algorithm Based on Weighted Dimension
ZANG Rui and LI Jing
Computer Science. 2017, 44 (Z6): 123-125.  doi:10.11896/j.issn.1002-137X.2017.6A.027
The firefly algorithm is a bionic optimization algorithm based on biological swarm intelligence, with the advantages of a simple concept, few parameters to adjust and easy implementation. However, it can easily get trapped in local optima, especially for high-dimensional optimization functions. In literature [1], an improved algorithm based on opposition and dimension was proposed, which improves population initialization and algorithm iteration. In this paper, we proposed a new algorithm based on a dimension-weighted method. The algorithm takes into account the information of the current optimal firefly as well as that of part of the other fireflies. Comparison of experimental results shows the superiority of the improved algorithm.
Early Warning Model for Water Eutrophication Based on BP Artificial Neural Network and Genetic Algorithm
XU Yun-juan
Computer Science. 2017, 44 (Z6): 126-128.  doi:10.11896/j.issn.1002-137X.2017.6A.028
With economic development, environmental protection work is facing unprecedented pressure. In order to enhance aquatic environment control effectively and to deal with the impact of sudden environmental pollution accidents on social and economic development, this paper used BP neural network theory to fit the functions relating aquaculture feed to changes in total phosphorus, total nitrogen, transparency, oxygen consumption and other nutrition indicators. Furthermore, the paper used a genetic algorithm to optimize the objective function, forming an early warning model for breeding waters. The model provides technical support for water environment governance and public decision-making. The paper further analyzes samples from Poyang Lake's new aquaculture base and obtains good forecasting results with the model.
Linear-time Algorithm for Weighted Domination Problem of Strongly Chordal Graph Based on Local Ratio Method
ZHANG Xiu-jun, WU Pu, YANG Hong and SHAO Ze-hui
Computer Science. 2017, 44 (Z6): 129-132.  doi:10.11896/j.issn.1002-137X.2017.6A.029
In an undirected graph G=(V,E), D⊆V is a dominating set if and only if every vertex v∈V-D is adjacent to at least one vertex u∈D. The minimum weight dominating set problem consists of finding a dominating set of a graph G with minimum weight. By exploiting the properties of strongly chordal graphs, a linear-time algorithm based on the local ratio method was proposed for the minimum weight dominating set problem on strongly chordal graphs. We also provided a proof of the time complexity of the proposed algorithm.
Research on Optimal Transportation Route Based on Chaos Optimization
Computer Science. 2017, 44 (Z6): 133-135.  doi:10.11896/j.issn.1002-137X.2017.6A.030
Abstract PDF(364KB) ( 614 )   
References | Related Articles | Metrics
Based on an analysis of the ergodicity of Logistic chaotic sequences, the Logistic chaotic sequence was mapped onto the search region of a multimodal objective function to search for the global optimal solution. We studied the general procedure of the chaos optimization algorithm, analyzed an example, and applied the chaos optimization algorithm to the transport route optimization problem. The results show that the chaos optimization algorithm has better global search ability for the optimal solution, and that it is feasible and effective for optimal transportation route selection.
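The first-stage search described above can be sketched as follows: a logistic sequence, ergodic on (0, 1), is mapped onto each coordinate's search interval and the best visited point is kept. The bounds, iteration count and initial values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def chaos_search(f, bounds, n_iter=5000):
    """Chaos optimization sketch: x_{k+1} = 4 x_k (1 - x_k) is ergodic on
    (0, 1); map it onto the search region and keep the best point visited."""
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    # distinct initial values per coordinate, avoiding the map's fixed points
    x = np.linspace(0.11, 0.89, len(bounds))
    best_p, best_v = None, np.inf
    for _ in range(n_iter):
        x = 4.0 * x * (1.0 - x)      # logistic map in the fully chaotic regime
        p = lo + x * (hi - lo)       # carry the chaotic variable into the search region
        v = f(p)
        if v < best_v:
            best_p, best_v = p, v
    return best_p, best_v

# toy objective with optimum at (1, -2)
p, v = chaos_search(lambda q: (q[0] - 1) ** 2 + (q[1] + 2) ** 2,
                    bounds=[(-5, 5), (-5, 5)])
```

In practice this coarse ergodic sweep is usually followed by a second, finer chaotic search in a shrinking neighborhood of the best point found.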
FIR Low-pass Digital Filter Design Using Improved PSO Algorithms
SHAO Peng, WU Zhi-jian, PENG Hu, WANG Ying-long and ZHOU Xuan-yu
Computer Science. 2017, 44 (Z6): 136-138.  doi:10.11896/j.issn.1002-137X.2017.6A.031
Abstract PDF(537KB) ( 984 )   
References | Related Articles | Metrics
Particle swarm optimization exhibits excellent optimization performance on some complex problems because of advantages such as few parameters and easy implementation. Finite impulse response (FIR) digital filters have advantages such as a stable structure and easy implementation, which give FIR low-pass digital filters wide practical application. Therefore, in this paper, TFPSO was introduced to design an FIR low-pass digital filter, and the result was compared with FIR low-pass digital filters designed by refrPSO and OPSO. In the experiment, a suitable fitness function was proposed to test the performance of the FIR low-pass digital filters designed by the several improved PSO algorithms. The experimental results show that refrPSO yields excellent filter performance, while TFPSO yields weak filter performance.
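The TFPSO, refrPSO and OPSO variants compared above all build on the canonical PSO update rule, which can be sketched as below. This is the textbook algorithm, not any of the paper's improved variants, and the inertia/acceleration constants are common illustrative choices.

```python
import numpy as np

def pso_minimize(f, dim=10, n=30, iters=200,
                 w=0.729, c1=1.49445, c2=1.49445, seed=1):
    """Canonical PSO sketch: velocity blends inertia, attraction to each
    particle's personal best, and attraction to the global best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pval)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # velocity update
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]    # personal bests
        g = pbest[np.argmin(pval)].copy()                      # global best
    return g, pval.min()

g, gv = pso_minimize(lambda p: np.sum(p ** 2))
```

For filter design, `f` would be a fitness function measuring the deviation of the candidate coefficients' frequency response from the ideal low-pass response; here a sphere function stands in as a placeholder objective.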
Discrete Fishing Strategy Optimization Algorithm for TSP
CHEN Jian-rong and CHEN Jian-hua
Computer Science. 2017, 44 (Z6): 139-140.  doi:10.11896/j.issn.1002-137X.2017.6A.032
Abstract PDF(220KB) ( 814 )   
References | Related Articles | Metrics
The classical fishing strategy can only solve optimization problems on a continuous domain; there has been no related research on discrete domains. To solve the traveling salesman problem (TSP), a discrete fishing strategy optimization algorithm was presented. An efficient discrete encoding method was designed according to the characteristics of TSP, and based on this, the basic concepts of the distinct set and the exchange operations were put forward. A new distance formula was given, and the several search strategies were redefined accordingly. Experimental results on TSP instances from TSPLIB indicate that the algorithm achieves high accuracy, stability and speed, providing a new choice for solving TSP.
Research on Method of Personal Relation Extraction under SDAs
ZHU Jie and HONG Jun-jian
Computer Science. 2017, 44 (Z6): 141-145.  doi:10.11896/j.issn.1002-137X.2017.6A.033
Abstract PDF(175KB) ( 1021 )   
References | Related Articles | Metrics
To address the lack of corpora for personal relations, this paper studied methods of automatic tagging based on the HUDONG encyclopedia. For the poor feature-expression ability of shallow machine learning models, we proposed a method of personal relation extraction under the deep learning model SDAs, and focused on the effect of combination features and of different network depths in SDAs on personal relation extraction. Experimental analysis shows that the F-measure reaches 73.75%.
Introducing Numerical Solution to Visualize Convolutional Neuron Networks
YU Hai-bao, SHEN Qi and FENG Guo-can
Computer Science. 2017, 44 (Z6): 146-150.  doi:10.11896/j.issn.1002-137X.2017.6A.034
Abstract PDF(954KB) ( 828 )   
References | Related Articles | Metrics
Zeiler’s visualization model restores feature maps to the original image space by unpooling and deconvolution, to visualize what each node learns from the image. It helps to study the mechanism of convolutional neural networks, but the results are not distinct because of the vagueness of the method. Based on Zeiler’s deconvolutional visualization model, a numerical solution method was introduced to replace the vague method that only uses the convolution kernel. A database was constructed first: triangles and rectangles were generated with random size, shape and location, which have simple structure and clear vertices. Based on this database, we constructed a hierarchical database and carried out experiments. The experimental results show that the improved model extracts more distinct features with less noise, yielding more precise results. An experiment on a larger database was conducted to verify this result, and the result can guide how to construct the network’s structure.
Blind Color Image Quality Assessment Based on Color Characteristics
WEN Wu and ZUO Ling-xuan
Computer Science. 2017, 44 (Z6): 151-156.  doi:10.11896/j.issn.1002-137X.2017.6A.035
Abstract PDF(622KB) ( 1778 )   
References | Related Articles | Metrics
Color image quality assessment (C-IQA) was proposed to evaluate the quality of a color image. Different from other image quality assessment systems that simply convert the original image to a gray image, C-IQA considers not only the quality of an image at the gray scale, but also takes the color performance of the image into account. In this paper, we devise a color image quality evaluation model based on color characteristics. Besides brightness, we use the characteristics of hue, color saturation and color entropy to assess the quality of a color image. Experiments on the LIVE image database show that our model’s predictions are highly consistent with image quality.
Estimate Threshold of SIFT Matching Adaptively Based on RANSAC
LIU Chuan-xi, ZHAO Ru-jin, LIU En-hai and HONG Yu-zhen
Computer Science. 2017, 44 (Z6): 157-160.  doi:10.11896/j.issn.1002-137X.2017.6A.036
Abstract PDF(1192KB) ( 805 )   
References | Related Articles | Metrics
When matching images with the scale invariant feature transform (SIFT), the Euclidean distance between feature vectors is used as the similarity measure, but it is difficult to choose the best distance ratio: with a constant ratio, false matches or missed matches occur. To deal with this problem, the Random Sample Consensus (RANSAC) algorithm was introduced to optimize the ratio adaptively in the matching process and obtain the best threshold. The SIFT-based image matching algorithm was analyzed, and bi-directional matching was used to improve the accuracy of image matching and ensure the correctness of matching to the greatest extent. Finally, experimental results show that the proposed method can obtain an optimal threshold for different images; it yields the most matching points and a better matching rate, and bi-directional matching brings further improvement.
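The fixed-ratio step that the paper makes adaptive is Lowe's distance-ratio test, sketched below on toy descriptors. The descriptor data and the 0.8 ratio are illustrative assumptions; the paper's contribution is estimating this ratio per image pair via RANSAC instead of fixing it.

```python
import numpy as np

def ratio_test_matches(desc1, desc2, ratio=0.8):
    """Lowe's fixed distance-ratio test on SIFT-style descriptors: accept a
    match only if the nearest neighbor is clearly closer than the second."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)  # Euclidean distance to all candidates
        j1, j2 = np.argsort(dists)[:2]             # nearest and second nearest
        if dists[j1] < ratio * dists[j2]:          # keep only unambiguous matches
            matches.append((i, int(j1)))
    return matches

# toy 128-d descriptors: rows of desc2 are slightly perturbed copies of desc1's rows
rng = np.random.default_rng(0)
desc1 = rng.random((5, 128))
desc2 = desc1 + 0.01 * rng.random((5, 128))
matches = ratio_test_matches(desc1, desc2)
```

In the adaptive scheme, candidate matches accepted at a loose ratio would be fed to RANSAC; the inlier rate then guides tightening or loosening of the threshold.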
Application of OpenMP in SAR Image Processing
CHENG Dong and WANG Wei-hong
Computer Science. 2017, 44 (Z6): 161-163.  doi:10.11896/j.issn.1002-137X.2017.6A.037
Abstract PDF(430KB) ( 823 )   
References | Related Articles | Metrics
The data volume of a SAR image is usually very large, while ordinary recognition algorithms are complex and time-consuming. To overcome this bottleneck, a target classification method for SAR images based on OpenMP is proposed. Firstly, the model-template based recognition algorithm is analyzed. Secondly, OpenMP is applied to build a parallel computation framework for the SAR image target classification method. Finally, tests are completed by classifying 3 kinds of objects. The test results show that the processing speed is improved by 8 times, demonstrating that the proposed method is practical.
Image Edge Detection Based on Pyramidal Algorithm of Interpolation Wavelet
ZHANG Zhi-guo, ZHENG Xi and LAN Jing-chuan
Computer Science. 2017, 44 (Z6): 164-168.  doi:10.11896/j.issn.1002-137X.2017.6A.038
Abstract PDF(546KB) ( 697 )   
References | Related Articles | Metrics
When classic wavelet theory is applied to detect edges in images, a discrete integral formula is often used in place of the continuous integral to obtain wavelet coefficients. Since the discrete integral is only an approximation of the continuous integral, large numerical errors often cannot be avoided in the calculation. This has led to the fact that some image details cannot be described clearly in edge detection. To solve this problem, by applying the Mallat pyramidal algorithm to an interpolation conjugate filter, a new edge detection algorithm was proposed, based on the fact that image pixel values can be regarded as coefficients of interpolation wavelets. In the experiments, our algorithm is compared with the classic one. It is shown that the new algorithm obtains clearer and more intact edges, which implies that our algorithm is more effective and accurate than the classic one.
Study on Optimizations of Basic Image Processing Algorithm
XU Qi-hang, YOU An-qing, MA She and CUI Yun-jun
Computer Science. 2017, 44 (Z6): 169-172.  doi:10.11896/j.issn.1002-137X.2017.6A.039
Abstract PDF(325KB) ( 846 )   
References | Related Articles | Metrics
To provide useful references for implementing real-time, high-quality image processing tasks, taking an open-loop tracking algorithm based on template matching in video images as an example, the tracking performance of a MATLAB prototype algorithm was evaluated and a multi-level optimization process was presented. Starting from the MATLAB prototype, we optimized mainly in the following two aspects. To improve real-time processing speed, more than 10 levels of optimization were applied, including rewriting in C, multiplication speed-up, release-mode compilation, operation merging, CUDA acceleration and so on. To improve the correct rate, a simple multi-pattern strategy was used. Testing results indicate that the algorithm reaches 30 Hz real-time image processing, and its tracking rate is also greatly improved.
Research on People Counting Based on Hot Area
GAO Fei, FENG Min-qiang, WANG Min-qian, LU Shu-fang and XIAO Gang
Computer Science. 2017, 44 (Z6): 173-178.  doi:10.11896/j.issn.1002-137X.2017.6A.040
Abstract PDF(1297KB) ( 669 )   
References | Related Articles | Metrics
People counting is important in intelligent monitoring, but complex backgrounds and occlusion among moving pedestrians make current methods inaccurate, and traditional line-crossing counting has a limited scope of application. Given the lack of effective methods, we proposed a people counting method based on hot areas. Firstly, a background model with an adaptive learning rate is used to extract the foreground of moving targets, obtaining the position and size of each foreground region. HOG features in the foreground regions are then scanned to detect head-shoulder targets. Next, a target matching matrix algorithm based on KCF is used to track the head-shoulder targets. Finally, the number of pedestrians is calculated by combining the target trajectories with hot-area based counting. On test video with a resolution of 960×720 pixels, the accuracy of the algorithm reaches 93.1% while meeting real-time requirements. The proposed method balances detection efficiency and accuracy, performs well in scenes with complex backgrounds, and can meet various practical application scenarios of people counting.
Image Inpainting Based on Dual-tree Complex Wavelet Transform
DOU Li-yun, XU Dan, LI Jie, CHEN Hao and LIU Yi-cheng
Computer Science. 2017, 44 (Z6): 179-182.  doi:10.11896/j.issn.1002-137X.2017.6A.041
Abstract PDF(1330KB) ( 642 )   
References | Related Articles | Metrics
Wavelet transform technology has been widely used in digital image inpainting; however, inpainting based on the wavelet transform suffers from edge blurring and discontinuity, which remains a difficult problem. Based on multiscale, multidirectional decomposition and traditional image inpainting methods, a new image inpainting algorithm based on the dual-tree complex wavelet transform was proposed. Firstly, the image is decomposed into low-frequency and high-frequency parts using the dual-tree complex wavelet transform. Then the parts at different frequencies are inpainted separately: the high-frequency components are inpainted by the total variation model, and an improved curvature-driven diffusion model is used to repair the low-frequency components. Finally, the result image is obtained by the dual-tree complex wavelet reconstruction process. The experimental results show that the proposed algorithm promotes the application of the dual-tree complex wavelet transform in image inpainting and achieves better repair in both texture and structure.
SAR Image Denoising Based on Nonlocal Similarity and Low Rank Matrix Approximation
ZHAO Jie, WANG Pei-pei and MEN Guo-zun
Computer Science. 2017, 44 (Z6): 183-187.  doi:10.11896/j.issn.1002-137X.2017.6A.042
Abstract PDF(1360KB) ( 720 )   
References | Related Articles | Metrics
A SAR image denoising method based on nonlocal similarity and low rank matrix approximation is presented to minimize the effect of speckle noise in synthetic aperture radar images. Firstly, the multiplicative speckle is converted into additive noise by a logarithmic transformation. Secondly, the image’s global noise variance is estimated in advance. Thirdly, a new joint block matching method based on Euclidean distance and R-squared is developed, which makes the matching results more accurate. Finally, within the framework of the low rank model, an improved residual noise variance estimate is used to approximate the low rank matrix with weighted nuclear norm minimization, achieving noise suppression for SAR images. The experimental results show that this method not only improves the peak signal-to-noise ratio significantly and preserves the local structure of the image better, but also produces good subjective visual effects.
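The core low-rank step, weighted nuclear norm minimization on a matrix of matched patches, amounts to soft-thresholding each singular value by its own weight. The sketch below shows that operation on a toy patch matrix; the block sizes, noise level and uniform weights are illustrative, not the paper's adaptively estimated values.

```python
import numpy as np

def wnnm_denoise_block(Y, weights):
    """Weighted nuclear norm minimization sketch for one patch matrix:
    soft-threshold each singular value by its corresponding weight."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_thr = np.maximum(s - weights, 0.0)   # weighted soft-thresholding
    return U @ np.diag(s_thr) @ Vt

# toy example: rank-1 "clean" block plus additive noise (as after the
# logarithmic transform); thresholding suppresses noise-dominated components
rng = np.random.default_rng(0)
L = np.outer(rng.random(20), rng.random(20))      # low-rank clean block
Y = L + 0.05 * rng.standard_normal((20, 20))      # noisy observation
w = np.full(20, 0.3)                              # uniform weights for illustration
X = wnnm_denoise_block(Y, w)
```

In WNNM proper, larger weights are assigned to smaller singular values, so noise-dominated components are shrunk harder than the signal-dominated ones.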
Research of Combination SVM Classifier in Pedestrian Detection
ZOU Chong, CAI Dun-bo, LIU Ying, ZHAO Na and ZHAO Tong-zhou
Computer Science. 2017, 44 (Z6): 188-191.  doi:10.11896/j.issn.1002-137X.2017.6A.043
Abstract PDF(856KB) ( 659 )   
References | Related Articles | Metrics
On the basis of the histogram of oriented gradients and support vector machine (HOG-SVM) algorithm, this paper proposes an improved algorithm with combined classifiers. Firstly, the algorithm uses multi-scale sliding windows to extract HOG features and trains an SVM for each scale separately. Then the trained SVMs, combined into a new classifier in series or in parallel, are used to detect pedestrians. To solve the problem of overlapping target regions when features are extracted with multi-scale sliding windows, the non-maximum suppression (NMS) algorithm is used to fuse the rectangles and obtain exact candidate regions. Experiments show that the combined SVM classifiers can effectively reduce the false detection rate and the miss rate.
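The NMS step used to fuse overlapping multi-scale detection windows can be sketched as follows; the boxes and scores are illustrative toy data, not detector output.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def nms(boxes, scores, iou_thr=0.5):
    """Non-maximum suppression sketch: keep the highest-scoring box, drop any
    remaining box overlapping it beyond iou_thr, repeat on the remainder."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thr]
    return keep

# two overlapping windows on one pedestrian plus one distant window
boxes = [(10, 10, 60, 110), (14, 12, 64, 112), (200, 50, 250, 150)]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)
```

Here the second window is suppressed because it overlaps the higher-scoring first window, while the distant third window survives.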
Video Stylization Based on Kinect Depth Information
TANG Ying and SUN Kang-gao
Computer Science. 2017, 44 (Z6): 192-197.  doi:10.11896/j.issn.1002-137X.2017.6A.044
Abstract PDF(1617KB) ( 792 )   
References | Related Articles | Metrics
This paper focuses on extracting depth information from the Kinect for XBOX360 to separate video foreground from background, and on stylizing the foreground and background with different artistic styles. First, we extract the video foreground based on the depth data. Next, the foreground and background videos are stylized with different artistic styles based on texture advection guided by the optical flow field. Finally, the stylized video is obtained by combining the two results effectively. Because the video is stylized with texture advection-based methods, multiple rendering styles are supported. The experimental results show that the video stylization produced by our system achieves a good artistic effect.
Image Segmentation Algorithm Based on Clustering and Improved Double Level Set
ZHANG Hui, ZHU Jia-ming and TANG Wen-jie
Computer Science. 2017, 44 (Z6): 198-201.  doi:10.11896/j.issn.1002-137X.2017.6A.045
Abstract PDF(760KB) ( 692 )   
References | Related Articles | Metrics
Medical images are usually accompanied by noise and contain multiple objects, and a traditional level set cannot completely separate an image with multiple targets. This paper proposes a model based on a suppressed fuzzy clustering algorithm and a modified double level set. First, the clustering algorithm is used for pre-segmentation and noise reduction of the medical image; whether a clustering result is satisfactory is judged by the normalized mutual information (NMI) criterion, thereby improving the clustering algorithm. The improved double level set with a punishment term then performs a second segmentation. The experimental results show that the method reduces image noise and the sensitivity of the algorithm, avoids re-initializing the level set, and greatly reduces the amount of calculation and the number of iterations. The model can completely separate medical images containing noise and multiple objects, obtaining the expected segmentation effect.
Choice of Coding Parameters in Video Transmission for Perceptual Quality
DU Lin, TIAN Chang, WU Ze-min, ZHANG Zhao-feng, HU Lei and ZHANG Lei
Computer Science. 2017, 44 (Z6): 202-205.  doi:10.11896/j.issn.1002-137X.2017.6A.046
Abstract PDF(660KB) ( 843 )   
References | Related Articles | Metrics
There are two important problems to be solved in video transmission. One is how to allocate the rate between source and channel coding given the bandwidth and packet loss probability. The other is how to choose the coding parameters under a restricted bitrate. This paper focuses on the second problem. We analyze the existing perceptual quality model VQMTQ and choose VQMTQ as the index for evaluating the perceptual quality of videos. Combined with the rate model, the proposed method chooses the best coding parameters under a given target bitrate, which not only satisfies the bitrate limit but also gives the compressed video the best perceptual quality, so the method can be used in practice to protect human perceptual experience.
Multilevel Color Image Segmentation Based on Improved Glowworm Swarm Optimization Algorithm
MAO Xiao, HE Li-fang and WANG Qing-ping
Computer Science. 2017, 44 (Z6): 206-211.  doi:10.11896/j.issn.1002-137X.2017.6A.047
Abstract PDF(1374KB) ( 550 )   
References | Related Articles | Metrics
In order to improve the segmentation of color images, a novel multilevel color image segmentation method based on an improved glowworm swarm optimization (IGSO) algorithm using Kapur’s entropy is presented. Aiming at the low convergence speed and accuracy of the glowworm swarm optimization algorithm in its later period, an IGSO algorithm based on an adaptive step and global information is presented. Considering the effect of the step size and the direction of movement on convergence, the IGSO algorithm improves convergence by adding global information and a step size that adapts with the iteration count and the dimension of the search space during movement. The experimental results show that it is a better method for multilevel color image segmentation than the GSO algorithm, the improved quantum-behaved particle swarm optimization (CQPSO) algorithm and the modified bacterial foraging (MBF) algorithm.
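The objective that the IGSO algorithm maximizes is Kapur's entropy. For a single threshold it can be computed by exhaustive search, as sketched below on a toy bimodal histogram; the paper instead optimizes the multilevel version with IGSO, where exhaustive search becomes too expensive. The histogram here is an illustrative assumption.

```python
import numpy as np

def kapur_threshold(hist):
    """Single-threshold Kapur entropy sketch: choose the threshold t that
    maximizes the sum of the entropies of the two resulting classes."""
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, len(p)):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 <= 0 or w1 <= 0:
            continue
        q0, q1 = p[:t] / w0, p[t:] / w1            # within-class distributions
        h0 = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0]))  # entropy of class 0
        h1 = -np.sum(q1[q1 > 0] * np.log(q1[q1 > 0]))  # entropy of class 1
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t

# bimodal toy histogram: dark peak around bin 3, bright peak around bin 12
hist = np.array([1, 5, 20, 40, 20, 5, 1, 0, 0, 1, 5, 20, 40, 20, 5, 1],
                dtype=float)
t = kapur_threshold(hist)
```

On this histogram the maximizing threshold falls in the valley between the two peaks, which is exactly the behavior multilevel Kapur segmentation generalizes to several thresholds.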
Double Threshold Orthogonal Matching Pursuit Algorithm
LIU Xin-yue, ZHAO Zhi-gang, LV Hui-xian, WANG Fu-chi and XIE Hao
Computer Science. 2017, 44 (Z6): 212-215.  doi:10.11896/j.issn.1002-137X.2017.6A.048
Abstract PDF(1809KB) ( 657 )   
References | Related Articles | Metrics
The reconstruction algorithm is an important part of the theory of compressed sensing (CS). When the sparsity is unknown, some reconstruction algorithms perform poorly. To solve this problem, an orthogonal matching pursuit algorithm based on a double threshold is put forward. Under unknown sparsity, screening the selected atoms twice enables efficient, high-quality reconstruction of the image signal. Experimental comparison with other algorithms shows that the proposed algorithm reconstructs signals effectively, with higher reconstruction precision and shorter running time.
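The baseline that the double-threshold variant builds on is plain orthogonal matching pursuit, sketched below. The dictionary, sparse signal and iteration budget are illustrative assumptions; the paper's contribution is the extra two-stage atom screening, which is not reproduced here.

```python
import numpy as np

def omp(A, y, n_iter=10, tol=1e-6):
    """Plain OMP sketch: greedily pick the atom most correlated with the
    residual, then re-fit the coefficients on the whole support."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    coef = np.zeros(0)
    for _ in range(n_iter):
        corr = np.abs(A.T @ residual)            # correlate atoms with residual
        support.append(int(np.argmax(corr)))     # pick the best-matching atom
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)  # least-squares re-fit
        residual = y - sub @ coef
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x

# recover a 3-sparse length-50 signal from 20 random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50)) / np.sqrt(20)
x_true = np.zeros(50)
x_true[[5, 17, 33]] = [1.0, -2.0, 1.5]
y = A @ x_true
x_hat = omp(A, y)
```

Note that plain OMP needs either the sparsity or a residual tolerance as a stopping rule; removing that dependence is precisely what sparsity-agnostic variants such as the double-threshold scheme target.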
Research on Anthrax Disease Classification of Dangshan Pear Based on Hyperspectral Imaging Technology
WEN Shu-xian, LI Shao-wen, JIN Xiu, ZHAO Liu and JIANG Han
Computer Science. 2017, 44 (Z6): 216-219.  doi:10.11896/j.issn.1002-137X.2017.6A.049
Abstract PDF(2179KB) ( 625 )   
References | Related Articles | Metrics
To detect disease at different levels, this article took Dangshan pears inoculated with anthrax as the research object, using hyperspectral imaging technology to model disease classification. In the 400~1000 nm spectral region, we collected sequential hyperspectral images of the whole process of Dangshan pear samples from inoculation with anthrax through morbidity to decomposition, used threshold segmentation for background segmentation of the images, and performed principal component analysis on the effective spectral region. We selected the second principal component (PC2) to extract the infected region of interest, used the weight coefficient method to extract eigenvalues of the region of interest, and used an unsupervised classification algorithm for clustering analysis of the characteristic values. Observation and analysis of 210 sample sets show an effective classification rate of 98.41%. The experimental results show that hyperspectral imaging nondestructive testing technology is valid for classifying different levels of anthrax disease in Dangshan pears.
Gesture Recognition Based on Weighted Feature Distance
WANG Yan, XU Shi-yi and CHEN Hai-yun
Computer Science. 2017, 44 (Z6): 220-223.  doi:10.11896/j.issn.1002-137X.2017.6A.050
Abstract PDF(344KB) ( 600 )   
References | Related Articles | Metrics
Gesture recognition based on computer vision has become a hot research topic, but due to the influence of illumination, environment and other factors, methods based on a single feature cannot identify gestures well. Therefore a method combining Hu invariant moments with the number of fingertips as features of static gestures was proposed. After preprocessing the collected static gesture images, a skin color model is applied to segment the gesture; the number of fingertips is then detected by the centroid distance method, and the Hu moments of the extracted gesture contour are calculated. Next, the number of fingertips and the Hu values are weighted as gesture features, and template matching is used to recognize gestures by weighting and fusing the feature distances. Experimental results show that the proposed method obtains a higher recognition rate on ten kinds of gestures than the traditional Hu invariant moment method and the fingertip detection method.
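The weighted fusion of the two feature distances can be sketched as below. The weights, the log-transform of the Hu moments, and all template values are illustrative assumptions, not the paper's; in a real system the Hu moments would come from the segmented gesture contour and the fingertip count from centroid-distance detection.

```python
import numpy as np

def fused_distance(hu_a, tips_a, hu_b, tips_b, w_hu=0.7, w_tip=0.3):
    """Weighted feature-distance fusion sketch: combine a Hu-moment distance
    with a fingertip-count distance. Weights are illustrative."""
    # log-transform the Hu moments (customary, to compress their dynamic range)
    ha = -np.sign(hu_a) * np.log10(np.abs(hu_a))
    hb = -np.sign(hu_b) * np.log10(np.abs(hu_b))
    d_hu = np.linalg.norm(ha - hb)
    d_tip = abs(tips_a - tips_b)
    return w_hu * d_hu + w_tip * d_tip

def classify(sample_hu, sample_tips, templates):
    """Template matching: pick the gesture with the smallest fused distance."""
    return min(templates,
               key=lambda name: fused_distance(sample_hu, sample_tips,
                                               *templates[name]))

# hypothetical (Hu moments, fingertip count) templates for two gestures
templates = {
    "fist": (np.array([2.1e-3, 1.5e-6, 3.0e-9, 1.0e-9,
                       1.0e-18, 1.0e-12, 1.0e-18]), 0),
    "palm": (np.array([3.4e-3, 8.0e-7, 9.0e-8, 4.0e-9,
                       5.0e-17, 2.0e-12, 3.0e-17]), 5),
}
sample_hu = np.array([2.2e-3, 1.4e-6, 2.8e-9, 1.1e-9,
                      1.1e-18, 1.1e-12, 1.1e-18])
label = classify(sample_hu, 0, templates)
```

The fingertip term disambiguates gestures whose contours have similar moments, which is the motivation for fusing the two features rather than using either alone.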
Garden Tourist Detection Based on Improved ViBe Algorithm
LIU Ying-ying, CHENG Shun, DING Shao-gang, LU Pan and SUN Yuan-hao
Computer Science. 2017, 44 (Z6): 224-228.  doi:10.11896/j.issn.1002-137X.2017.6A.051
Abstract PDF(638KB) ( 638 )   
References | Related Articles | Metrics
The traditional visual background extraction (ViBe) algorithm suffers from several problems, such as sensitivity to light and shadow, wrongly judged foreground points, holes in the foreground, and so on. In order to better segment garden tourists in the foreground, based on an analysis of various background modeling methods, this paper presents an improved ViBe tourist detection algorithm in the Lab color space, and tests the accuracy and robustness of the improved ViBe algorithm. The results show that the algorithm builds an updated background model that improves the accuracy of tourist detection, adapts effectively to lighting changes and removes shadows. Analysis of garden videos from different locations shows that the improved ViBe algorithm yields better detection results.
Similar Character Recognition of License Plates Based on Deep Learning
PAN Xiang and WANG Heng
Computer Science. 2017, 44 (Z6): 229-231.  doi:10.11896/j.issn.1002-137X.2017.6A.052
Abstract PDF(526KB) ( 810 )   
References | Related Articles | Metrics
It is hard to recognize similar characters on a license plate, so a new method based on deep learning was proposed to extract features and recognize similar characters. Firstly, the method normalizes the character images, and the normalized images are taken as input. We build a five-layer deep network architecture and extract features of similar characters from low-level to high-level representations. A convolution function that is sensitive to character edges is adopted, so that local differences between similar characters can be analyzed. We compared this method with the support vector machine (SVM) in experiments; the results show that the accuracy of the proposed method is 5% higher.
Face Recognition Method Based on Adaptive 3D Morphable Model and Multiple Manifold Discriminant Analysis
WANG Jian-tao, ZHAO Li and QI Xing-bin
Computer Science. 2017, 44 (Z6): 232-235.  doi:10.11896/j.issn.1002-137X.2017.6A.053
Abstract PDF(1088KB) ( 891 )   
References | Related Articles | Metrics
In order to reduce the loss of face appearance information after the normalization of face pose and expression, a face recognition method based on an adaptive three-dimensional morphable model (3DMM) and multiple manifold discriminant analysis was proposed. Firstly, the 2D-3D coordinate transformation caused by non-corresponding face poses is described, and an adaptive 3DMM fitting method is proposed. Then, the entire image is mapped onto a 3D mesh object by a three-dimensional transformation to preserve identity information as much as possible. Finally, multiple manifold discriminant analysis is used to compute distances between manifolds, and a nearest neighbor classifier finishes recognition. The effectiveness of the proposed method is verified by experiments on the Multi-PIE and LFW databases and a self-collected database; the face recognition accuracy on the three databases reaches 99.8%, 95.25% and 98.62%, respectively. The proposed method significantly improves face recognition performance, and outperforms other similar advanced methods in both constrained and unconstrained environments.
Nonconvex Nuclear Norm Minimization General Model with Its Application in Image Denoising
SUN Shao-chao
Computer Science. 2017, 44 (Z6): 236-239.  doi:10.11896/j.issn.1002-137X.2017.6A.054
Abstract PDF(782KB) ( 559 )   
References | Related Articles | Metrics
This paper focuses on the nonconvex low rank approximation model. We propose a class of nonconvex functions g defined on the singular values of a matrix; in fact, many well-known nonconvex functions satisfy the conditions on g. When the function g is introduced into the weighted nuclear norm minimization model, we obtain a more general model, which effectively solves the weight selection problem of the former model. In this paper, the model is applied to image denoising, and a convergent solver is given. Simulation results show that our proposed method is superior to other advanced algorithms.
New Method for Medical Image Segmentation Based on BP Neural Network
TANG Si-yuan, XING Jun-feng and YANG Min
Computer Science. 2017, 44 (Z6): 240-243.  doi:10.11896/j.issn.1002-137X.2017.6A.055
Abstract PDF(358KB) ( 642 )   
References | Related Articles | Metrics
For medical image segmentation, accurate results are very important and help doctors diagnose illness and make the right therapeutic schemes. The traditional BP neural network is used to segment medical images, but it is sensitive to initial weights, has a fixed learning rate and slow convergence, and easily falls into local minima. A medical image segmentation method using a BP neural network based on an improved particle swarm optimization algorithm was proposed. Firstly, a mapping is established between the particle swarm optimization algorithm and the BP neural network; the powerful search capability of the particle swarm finds the best fitness, which drives the BP neural network to its minimal error and overcomes the tendency of the BP neural network to fall into local minima. Secondly, the best particle positions are determined, the most reasonable weights and biases of the BP neural network are obtained, and the network convergence speed is improved. Lastly, the BP neural network is trained repeatedly, the best output values are obtained, the threshold is calculated, and the image is segmented by the threshold. The simulation results show that the improved algorithm yields clearer segmented medical images and a higher segmentation accuracy, which is important for clinical diagnosis.
3D Reconstruction Based on SFS Method and Accuracy Analysis
CAO Fang and ZHU Yong-kang
Computer Science. 2017, 44 (Z6): 244-247.  doi:10.11896/j.issn.1002-137X.2017.6A.056
Abstract PDF(525KB) ( 847 )   
References | Related Articles | Metrics
Shape from shading (SFS) is one of the research hotspots and difficulties in 3D reconstruction for computer vision. There are two problems in existing algorithms: the selected reflection model may not accord with the reflection characteristics of the object surface, and the constraints and solution process are too complex, making the solution slow and inefficient. In this paper, the SFS algorithm is analyzed in detail, the Lambertian illumination model is introduced, a spherical surface is assumed, and the height function is then obtained by an approximate differential operation. The 3D shape of the object surface can be recovered from a single gray image. The traditional linearized SFS algorithm and the proposed algorithm were experimentally validated, and the reconstruction precision and efficiency of the two models were compared and analyzed. Experimental results show that, while ensuring a certain accuracy, the proposed algorithm is more efficient than the traditional algorithm.
Multi-sensors Direction-and-Time Co-localization Algorithm Based on Efficient Anchor-nodes
XIA Xiao-dong, ZHUANG Yi, LI Jing and GU Jing-jing
Computer Science. 2017, 44 (Z6): 248-251.  doi:10.11896/j.issn.1002-137X.2017.6A.057
Abstract PDF(606KB) ( 566 )   
References | Related Articles | Metrics
In this paper, we propose an efficient anchor node selection (EAS) model for the problems of low precision, high delay and low coverage in the field of electronic countermeasures. According to the environment of the sensor nodes, the model chooses effective anchor nodes to participate in target localization. To improve classical localization algorithms that are based on independent data, we propose a multi-sensor direction-and-time co-localization algorithm based on efficient anchor nodes (LDTEAS). This algorithm can effectively reduce the influence of the environment and enemy interference. Simulation results show that the proposed model effectively improves localization accuracy and coverage.
Shortest Routing Algorithm Based on Target Node in Mesh Network with Faulty Area
LIN Cheng-kuan, WANG Ming-cheng, GUO Li-li and DU Man-yi
Computer Science. 2017, 44 (Z6): 252-257.  doi:10.11896/j.issn.1002-137X.2017.6A.058
Abstract PDF(311KB) ( 544 )   
References | Related Articles | Metrics
The mesh network was studied early and remains one of the most important and attractive network models. Because its structure is simple, regular and scalable, and well suited to VLSI (Very Large Scale Integration) implementation, the mesh network has become not only the basic model of many theoretical studies but also the topology of many large multiprocessor and parallel computer systems. A mesh with m rows and n columns is denoted by Mm,n. In this paper, we gave shortest routing algorithms under two kinds of faulty area. 1) We gave a routing algorithm to find the shortest path between any two fault-free nodes in Mm,n with m≥3 and n≥3, and calculated the length of the path obtained by the algorithm when there is a rectangular faulty region. 2) We gave a routing algorithm to find the shortest path between any two fault-free nodes in Mm,n with m≥3 and n≥3, and calculated the length of the path given by the algorithm when a node and its k-hop neighbours are faulty.
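The fault-free shortest path the abstract describes can be compared against a plain BFS baseline over the mesh; the sketch below is such a baseline (not the authors' algorithm, whose point is to compute the route and its length without a full search):

```python
from collections import deque

def mesh_shortest_path(m, n, src, dst, faulty):
    """BFS shortest path between two fault-free nodes of an m x n mesh,
    avoiding a set of faulty nodes. A baseline for checking the length of
    the route a mesh routing algorithm produces; illustrative only."""
    if src in faulty or dst in faulty:
        return None
    prev = {src: None}          # also serves as the visited set
    q = deque([src])
    while q:
        r, c = q.popleft()
        if (r, c) == dst:       # reconstruct the path back to src
            path, node = [], dst
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < m and 0 <= nc < n \
                    and (nr, nc) not in faulty and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                q.append((nr, nc))
    return None                 # dst unreachable around the faults
```

With a 3x3 rectangular faulty region in the middle of a 5x5 mesh, the detour along the border between opposite corners still has Manhattan length 8.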
Energy-efficient Design under Imperfect Condition Based on Spectrum Prediction
ZHANG Yang, ZHAO Hang-sheng and ZHAO Xiao-long
Computer Science. 2017, 44 (Z6): 258-262.  doi:10.11896/j.issn.1002-137X.2017.6A.059
Abstract PDF(396KB) ( 516 )   
References | Related Articles | Metrics
In cognitive radio networks, when secondary users perform spectrum prediction and spectrum sensing, prediction and sensing mistakes are inevitable. For cognitive radio networks where energy is limited, this paper analyzed the secondary users' energy efficiency under spectrum prediction and spectrum sensing mistakes. A normalized spectrum prediction formula was designed. The impacts of spectrum prediction energy, wrong-prediction probability, traffic intensity and channel number on energy efficiency were also investigated. The simulation results were compared with the energy efficiency of a perfect cognitive radio network; they conform better to actual conditions and have value in theory and engineering application.
Multicasting Network System Design of Uniting Domains Based on MPLS VPN and MSDP
TAO Jun, KUANG Lei, XU Wang, YAN Yun-sheng and WAN Jia-shan
Computer Science. 2017, 44 (Z6): 263-265.  doi:10.11896/j.issn.1002-137X.2017.6A.060
Abstract PDF(580KB) ( 708 )   
References | Related Articles | Metrics
The article introduced the theory of multicasting and analysed the MSDP and MPLS VPN technologies. For the video-oriented network transformation of an ordinary enterprise, three inter-domain multicasting schemes were proposed, and their advantages and shortcomings were compared. The scheme based on MSDP and MPLS VPN is the best solution: it not only relieves network traffic, but also increases the security and reliability of the network.
Traffic Scheduling Based Congestion Control Algorithm for Data Center Network on Software Defined Network
FAN Zi-fu, LI Shu and ZHANG Dan
Computer Science. 2017, 44 (Z6): 266-269.  doi:10.11896/j.issn.1002-137X.2017.6A.061
Abstract PDF(398KB) ( 1228 )   
References | Related Articles | Metrics
To alleviate congestion in modern data center networks using software defined networking (SDN), a traffic scheduling based congestion control algorithm was proposed. When a link is congested, the proposed algorithm first identifies the large flows on the most critical congested link, then reroutes each large flow, selects the route with the minimum flow scheduling overhead, and calculates the scheduling cost. Finally, the flow with the minimum scheduling cost is scheduled. Experimental results show that the proposed algorithm can alleviate network congestion, improve link utilization and enhance network stability by reducing the packet loss rate.
WF-C4.5:Handheld Terminal Traffic Identification Method Based on C4.5 Decision Tree in WiFi Environment
SHI Zhi-kai and ZHU Guo-sheng
Computer Science. 2017, 44 (Z6): 270-273.  doi:10.11896/j.issn.1002-137X.2017.6A.062
Abstract PDF(593KB) ( 593 )   
References | Related Articles | Metrics
It has been reported that mobile terminals account for about 47% of global IP traffic, while WiFi traffic accounts for over 90% of mobile traffic. Identifying mobile terminal traffic is important for efficient network traffic management. To solve the low identification rate of the traditional HTTP user agent (UA) method, we analyzed the features of mobile terminal traffic in the WiFi environment, including connection persistence time, packet size and payload size. We proposed WF-C4.5, a handheld terminal traffic identification method based on the C4.5 decision tree in the WiFi environment. The method distinguishes handheld terminal traffic from non-handheld traffic with a decision tree model built by calculating the information gain ratio of each attribute. Experiments show that the identification rate of WF-C4.5 can reach 95%, while that of the UA method is about 65%.
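The information gain ratio that C4.5 uses to choose split attributes, as the abstract mentions, can be computed as follows (a textbook sketch on categorical attributes, not the authors' feature pipeline):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gain_ratio(samples, labels, attr_index):
    """C4.5 split criterion: information gain of an attribute divided by
    its split information (the entropy of the partition it induces)."""
    base = entropy(labels)
    groups = {}
    for s, y in zip(samples, labels):
        groups.setdefault(s[attr_index], []).append(y)
    n = len(labels)
    cond = sum(len(g) / n * entropy(g) for g in groups.values())
    split_info = -sum(len(g) / n * log2(len(g) / n) for g in groups.values())
    if split_info == 0:          # attribute takes a single value
        return 0.0
    return (base - cond) / split_info
```

An attribute that perfectly separates the classes gets ratio 1.0; an uninformative one gets 0.0.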
Research on Wireless Interference Source Localization Based on Grid Spectrum Monitoring
LI Jin-shan, SHAO Yu-bin and LONG Hua
Computer Science. 2017, 44 (Z6): 274-275.  doi:10.11896/j.issn.1002-137X.2017.6A.063
Abstract PDF(163KB) ( 507 )   
References | Related Articles | Metrics
To find the source of interference or an illegal radio station quickly and efficiently, a localization algorithm for the interference position was proposed. The method detects the received power on a grid of monitoring points set up over the test area. Based on the proposed radio signal source location algorithm, the location of the interference source is found. Simulation results prove the correctness and effectiveness of the proposed algorithm.
Traffic Control Mechanism Design and Verification Based on VANET
YANG Lin, ZHANG Wen-li, ZHU Qin and PENG Chao
Computer Science. 2017, 44 (Z6): 276-283.  doi:10.11896/j.issn.1002-137X.2017.6A.064
Abstract PDF(1088KB) ( 1089 )   
References | Related Articles | Metrics
In recent years, the worldwide rapid development of automobile industries and a growing rate of vehicle ownership have been witnessed. Consequently, the thorny issue of traffic congestion has bothered more and more people. To alleviate traffic jams as well as enhance traffic efficiency, this paper proposed a traffic control mechanism based on VANET. First, we proposed a cooperative traffic light control mechanism for multiple intersections based on semi-real-time processing, combining the concepts of fixed-time control and traffic-responsive control. In our mechanism, the traffic light controller uses trajectory prediction to predict the traffic situation of the next period, and then makes the optimal traffic light phase setting for the next period according to the prediction result. The aim of our mechanism is to minimize the waiting time of all the vehicles. This paper also proposed a heuristic algorithm for dynamic route planning to enhance individual travel efficiency in the traffic system. On the basis of the Dijkstra algorithm, we adopted a heuristic method to calculate the weight of each road, so vehicles can avoid traffic jams as much as possible. To verify the performance of our proposed VANET traffic control mechanism, we ran simulation experiments combining SUMO with NS3. The simulation results demonstrate that the proposed traffic control mechanism is both effective and practical: it reduces the traffic load and the average waiting time of vehicles, relieves traffic jams and diverts jammed vehicles, and thus improves the road traffic situation of the whole transportation system.
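A Dijkstra variant with congestion-penalized road weights, in the spirit of the heuristic route planning described above, can be sketched as follows (the penalty form and parameter alpha are assumptions for illustration, not the paper's weighting):

```python
import heapq

def congestion_aware_route(graph, src, dst, alpha=2.0):
    """Dijkstra over edges whose cost is length * (1 + alpha * congestion),
    so congested roads look longer and get avoided.
    graph: {u: [(v, length, congestion in [0, 1]), ...]} (hypothetical format)."""
    dist, prev = {src: 0.0}, {}
    pq, visited = [(0.0, src)], set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == dst:
            break
        for v, length, cong in graph.get(u, []):
            nd = d + length * (1.0 + alpha * cong)  # penalized edge cost
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]
```

With alpha = 0 this degenerates to shortest-distance Dijkstra; with a positive alpha the router trades extra distance for free-flowing roads.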
New Physical Layer Network Coding Denoising Mapping Algorithm Based on MQAM
LU Ming-yue, GUO Dao-xing and NIU He-hao
Computer Science. 2017, 44 (Z6): 284-287.  doi:10.11896/j.issn.1002-137X.2017.6A.065
Abstract PDF(360KB) ( 658 )   
References | Related Articles | Metrics
To overcome the problem of constellation point ambiguity in physical layer network coding (PLNC), this paper proposed a new QAM-based PLNC denoising mapping scheme. In this scheme, the relay node rearranges the M-QAM constellation mapping and merges constellation points according to a certain rule. Compared with the traditional scheme, this design reduces the number of constellation points by nearly half, which increases the Euclidean distance between adjacent points in the constellation and thus improves the BER performance. In addition, the relay node only needs demodulation, remapping and modulation, which significantly reduces processing complexity. Simulation results show the effectiveness of our scheme.
Multi-channel MAC with QoS Provisioning for Distributed Cognitive Radio Networks
SUN Wei and HUANG Jin-ke
Computer Science. 2017, 44 (Z6): 288-293.  doi:10.11896/j.issn.1002-137X.2017.6A.066
Abstract PDF(585KB) ( 475 )   
References | Related Articles | Metrics
The insufficiency of dynamic resource availability and the lack of a central control unit pose many challenges when designing a MAC protocol for a distributed cognitive radio network. In this paper, we proposed a novel MAC design for distributed cognitive radio networks which addresses the quality of service (QoS) requirements of delay-sensitive applications by giving such applications higher priority during channel reservation. It also combats other major challenges such as low spectrum utilization and the multi-channel hidden terminal problem. We developed an analytical framework to study the performance of the proposed protocol and compared it with two existing protocols. The comparison shows that the proposed MAC outperforms the existing protocols by providing better throughput. The results from the analytical model, validated by simulations, show that our simple yet efficient design identifies and fulfils the QoS requirements of delay-sensitive applications, achieves excellent spectrum utilization and handles the multi-channel hidden terminal problem effectively.
Negative Acknowledgement Based Data Delivery Scheme for WISP
ZHANG Wen-bin, LI Er-tao, LI Fei, LI Yan-yan and ZHU Yi-hua
Computer Science. 2017, 44 (Z6): 294-299.  doi:10.11896/j.issn.1002-137X.2017.6A.067
Abstract PDF(1005KB) ( 576 )   
References | Related Articles | Metrics
The wireless identification and sensing platform (WISP) can harvest energy from the ultra-high-frequency (UHF) signals transmitted by the radio frequency identification (RFID) reader, powering the microprocessor and sensors inside the WISP and delivering the data captured by the sensors to the reader. A NAK (negative acknowledgement) based data delivery scheme was presented which remedies the channel waste in WISP caused by the high ratio of duplicated packet transmissions. The experimental results show that the proposed scheme reduces the ratio of duplicated packets and improves the effective throughput.
Data Aggregation Tree Construction and Transmission Scheduling Algorithm Based on Minimum Latency in Wireless Sensor Networks
GAO Lei and HU Yu-peng
Computer Science. 2017, 44 (Z6): 300-304.  doi:10.11896/j.issn.1002-137X.2017.6A.068
Abstract PDF(542KB) ( 530 )   
References | Related Articles | Metrics
Aiming at the large delay of existing data aggregation algorithms in wireless sensor networks, we studied the minimum-latency data aggregation tree and transmission scheduling problem. An aggregation tree construction algorithm based on a degree constraint (DCAT) was proposed. It traverses the graph in a BFS manner. For each traversed node, the set of potential parents consists of the nodes that are one hop closer to the sink, and the potential parent with the lowest degree in the graph is selected as the parent of the currently traversed node. Furthermore, we proposed two new greedy approaches for building a TDMA transmission schedule that performs efficient aggregation on a given tree: WIRES-G and DCAT-Greedy. We evaluated the performance of our algorithms through extensive simulations on randomly generated sensor networks of different sizes and compared them with the previous state of the art. The results show that the new scheduling algorithms, combined with our new tree-building algorithm, obtain significantly lower latencies than the previous best algorithm.
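The parent-selection rule described above (BFS from the sink; each node adopts the lowest-degree neighbor that is one hop closer to the sink) can be sketched directly; this is a minimal reading of the DCAT idea, not the authors' implementation:

```python
from collections import deque

def dcat_tree(adj, sink):
    """Degree-constrained aggregation tree sketch: compute hop counts from
    the sink by BFS, then let every node pick, among its one-hop-closer
    neighbors, the one with the lowest degree as its parent.
    adj: {node: [neighbor, ...]} undirected adjacency lists."""
    hops = {sink: 0}                       # BFS layers from the sink
    q = deque([sink])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in hops:
                hops[v] = hops[u] + 1
                q.append(v)
    parent = {}
    for v in hops:
        if v == sink:
            continue
        closer = [u for u in adj[v] if hops[u] == hops[v] - 1]
        parent[v] = min(closer, key=lambda u: len(adj[u]))  # lowest degree
    return parent
```

Preferring low-degree parents spreads children across the layer above, which is what later makes a short collision-free TDMA schedule possible.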
Grouping-based Wireless Sensor Network Multi-rounds Clustering Routing Algorithm
GE Bin, DAI Chen, JI Jie-qu and WU Bo
Computer Science. 2017, 44 (Z6): 305-308.  doi:10.11896/j.issn.1002-137X.2017.6A.069
Abstract PDF(469KB) ( 642 )   
References | Related Articles | Metrics
A grouping-based wireless sensor network multi-round clustering routing algorithm (LEACH-G) was proposed, aiming at the defects in cluster-head energy consumption of the LEACH algorithm. Grouping strategies are used in the clustering process, and signposts are used for communication to balance the energy consumption of the entire network. The energy factor and the distance between nodes and the base station are introduced into the cluster-head selection threshold to reduce network energy consumption. The simulation results show that, compared with related LEACH algorithms, the new algorithm can reduce the average energy consumption per node by 10%~15%, significantly extend the network life cycle, and improve the efficiency of the cluster heads.
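A LEACH-style election threshold scaled by residual energy and base-station distance, as the abstract describes, might look like the sketch below; the particular weighted-sum form and the weights are assumptions for illustration, not the LEACH-G formula:

```python
import random

def leach_g_threshold(p, r, e_res, e_init, d, d_max, w_e=0.5, w_d=0.5):
    """Classic LEACH threshold T(n) = p / (1 - p * (r mod 1/p)), scaled by
    a factor that grows with residual energy and shrinks with distance to
    the base station. Weights w_e, w_d are illustrative assumptions."""
    base = p / (1 - p * (r % round(1 / p)))
    factor = w_e * (e_res / e_init) + w_d * (1 - d / d_max)
    return base * factor

def elect_cluster_head(p, r, e_res, e_init, d, d_max, rng=random.random):
    """A node becomes cluster head when a uniform draw falls below its threshold."""
    return rng() < leach_g_threshold(p, r, e_res, e_init, d, d_max)
```

The intended effect: a node with full energy close to the base station is elected more often than a depleted, distant one, which balances energy drain across rounds.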
VoIP Acoustic Echo Cancellation Algorithm Based on WebRTC
YAO Li and LIU Qiang
Computer Science. 2017, 44 (Z6): 309-311.  doi:10.11896/j.issn.1002-137X.2017.6A.070
Abstract PDF(523KB) ( 2350 )   
References | Related Articles | Metrics
Echo is a common problem in voice communication systems and affects communication quality. This paper presented an echo cancellation algorithm based on WebRTC (Web Real-Time Communication). Since the fixed-point implementation is limited, the algorithm uses floating-point computation to improve efficiency and accuracy while maintaining speed. Experimental results on mobile devices show that the proposed algorithm outperforms the original one in echo return loss enhancement with comparable complexity.
Virtual Network Mapping Optimization Based on Improved Ant Colony Algorithm
XIE Yong-hao, GAO Song-feng and DAI Ming-zhu
Computer Science. 2017, 44 (Z6): 312-313.  doi:10.11896/j.issn.1002-137X.2017.6A.071
Abstract PDF(279KB) ( 530 )   
References | Related Articles | Metrics
In virtual network mapping, the mapping results were optimized with an improved ant colony algorithm. Aiming at optimal utilization of the underlying network resources, a new virtual network mapping algorithm based on an improved ant colony algorithm was proposed. By introducing a Gaussian process model, the convergence of the ant colony optimization algorithm is accelerated, satisfying the real-time requirements of practical applications. The results show that, at the same accuracy, the algorithm significantly reduces the solution time.
AP-I:An Index to Quickly Answer Predictive Queries for Moving Objects
LIU Kai-yang
Computer Science. 2017, 44 (Z6): 314-318.  doi:10.11896/j.issn.1002-137X.2017.6A.072
Abstract PDF(749KB) ( 486 )   
References | Related Articles | Metrics
How to quickly answer queries about the future location of a moving object is a fundamental problem for a variety of applications, such as ITS, location-aware advertisement, and moving object monitoring. In this paper, we proposed an innovative AP-I (Adaptive Predictive-Index) which can efficiently answer predictive queries without any historical trajectories of the objects. Compared with the existing Predictive Tree[4] index, our index greatly reduces the overhead of index updates by discovering and utilizing the correlations among objects' paths. Furthermore, by introducing AP (Adaptive Probability) and a pruning procedure, the size of AP-I is further reduced to improve query performance. An extensive set of experiments demonstrates that, compared with the Predictive Tree, AP-I not only achieves higher accuracy, but also greatly improves updating and space efficiency at the same query performance.
Self Localization Technology of Wireless Sensor Network Node
XIONG Zhi-li and QU Shao-cheng
Computer Science. 2017, 44 (Z6): 319-321.  doi:10.11896/j.issn.1002-137X.2017.6A.073
Abstract PDF(230KB) ( 910 )   
References | Related Articles | Metrics
This paper first summarized and analyzed the basic principle and classification of wireless sensor network node localization, showing that the essence of self-localization is an optimization problem. Secondly, on this basis, we took the genetic algorithm (GA), simulated annealing (SA), evolutionary strategy and differential evolution algorithm as research objects and discussed the advantages and disadvantages of these four typical localization algorithms. Then, combining the respective advantages of the GA and SA algorithms, a genetic simulated annealing (GSA) algorithm was proposed, improving initial population diversity and avoiding local optima in sensor node selection. Finally, the improved method was applied to wireless sensor network node localization; the GA, SA and GSA algorithms were simulated in Matlab, and the advantages of the GSA algorithm were verified, providing a new reference for the self-localization of wireless sensor nodes.
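A toy GA/SA hybrid for node self-localization, treating localization as minimizing the squared range residuals to known anchors, can be sketched as follows; the operators and parameters are illustrative assumptions, not the paper's GSA:

```python
import math
import random

def gsa_localize(anchors, dists, iters=300, pop=20, t0=1.0, cooling=0.97, seed=1):
    """Genetic-simulated-annealing sketch: GA-style mutation proposes
    candidate positions; a Metropolis (SA) acceptance rule decides whether
    a worse candidate replaces its parent; truncation selection keeps the
    better half each generation. Illustrative only."""
    rng = random.Random(seed)

    def cost(p):  # sum of squared range residuals to the anchors
        return sum((math.dist(p, a) - d) ** 2 for a, d in zip(anchors, dists))

    popn = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(pop)]
    t = t0
    for _ in range(iters):
        nxt = []
        for p in popn:
            # mutation: Gaussian step whose scale shrinks with temperature
            child = (p[0] + rng.gauss(0, 5 * t), p[1] + rng.gauss(0, 5 * t))
            dc = cost(child) - cost(p)
            # SA acceptance: keep improvements, sometimes accept worse moves
            if dc < 0 or rng.random() < math.exp(-dc / max(t, 1e-9)):
                nxt.append(child)
            else:
                nxt.append(p)
        nxt.sort(key=cost)
        popn = nxt[:pop // 2] * 2   # GA selection: duplicate the better half
        t *= cooling
    return min(popn, key=cost)
```

The annealed acceptance step is what lets the hybrid escape the local optima that a pure greedy GA would get stuck in.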
Introduction of Overhead Reduction for Small Cell Deployment
RU Xin-yu and LIU Yuan
Computer Science. 2017, 44 (Z6): 322-325.  doi:10.11896/j.issn.1002-137X.2017.6A.074
Abstract PDF(400KB) ( 854 )   
References | Related Articles | Metrics
The demand for mobile data in today's society grows explosively, but the limited capacity of resources severely restricts the expansion and improvement of business capacity. Taking the imbalance in the occurrence of wireless data into account, increasing the deployment of small cells will undoubtedly become an effective approach. By examining methods to reduce the overhead of uplink and downlink reference signals and control signaling in small cells, this paper introduced a method to improve cell density by deploying small cells. Compared with macro-cell deployments, the proposed method can effectively reduce the cost of small cells and enhance the efficiency of resource usage.
Research on Intrusion Detection of Wireless Sensor Networks Based on Game Theory
XIONG Zi-li, HAN Lan-sheng, XU Xing-bo, FU Cai and LIU Bu-yu
Computer Science. 2017, 44 (Z6): 326-332.  doi:10.11896/j.issn.1002-137X.2017.6A.075
Abstract PDF(398KB) ( 790 )   
References | Related Articles | Metrics
The wide application of wireless sensor networks extends people's ability to obtain information, but their inherent characteristics make them more vulnerable to cyber-attacks. Current intrusion detection systems target specific attacks but are powerless against others, and their rather high energy consumption reduces the lifetime of the network. The paper proposed an intrusion detection model based on game theory, in which the attack-defense process between the intrusion detection system and the attacker is modeled as a non-cooperative game. To deal with the diversity of network intruder attacks, the game model was improved into a non-cooperative static game model with incomplete information. By analyzing the model's mixed Nash equilibrium, the optimal defense strategy is obtained, balancing the detection efficiency and the energy consumption of the system. The simulation results show that the intrusion detection system based on game theory not only resists a variety of network attacks effectively, but also reduces energy consumption and prolongs the lifetime of the network.
Scheme of Cloud Audit Data Encryption Based on AES and ECC
CHEN Zhuang and YE Cheng-yin
Computer Science. 2017, 44 (Z6): 333-335.  doi:10.11896/j.issn.1002-137X.2017.6A.076
Abstract PDF(351KB) ( 857 )   
References | Related Articles | Metrics
Focusing on the transmission and storage security problems of one-way encryption of cloud audit data, HBES (Hybrid Bidirectional Encryption Scheme) was proposed to tackle them. The HBES private key is supplied by the event sponsor and stored locally; a random number, which depends on the response time and external factors, then generates the private key through mapping rules. Simulation experiments show that, compared with the one-way encryption method, HBES is a more effective and feasible way to achieve encryption and decryption of cloud audit data in terms of encryption time and security.
LBS Group Nearest Neighbor Query Method Based on Differential Privacy
MA Yin-fang and ZHANG Lin
Computer Science. 2017, 44 (Z6): 336-341.  doi:10.11896/j.issn.1002-137X.2017.6A.077
Abstract PDF(521KB) ( 702 )   
References | Related Articles | Metrics
For the privacy issues caused by group nearest neighbor query scenarios formed by multi-user collaboration, a new LBS group nearest neighbor query method based on privacy protection was proposed, which introduces the concept of "geo-indistinguishability" and satisfies the differential privacy property. An LBS group construction mechanism based on classification and clustering was given, and the group privacy budget allocation mechanism was studied. A group-users' position obfuscation by Laplace (GPOL) algorithm was also introduced, which applies a group centroid nearest neighbor query instead of a group nearest neighbor query within the entire privacy protection framework. Experimental results show that this method can effectively resist existing cross attacks and combo attacks.
Data Surge Models for Public Security Data Processing and Its Application in Unity of Security System
GAO Di, XU Zheng and LIU Yun-huai
Computer Science. 2017, 44 (Z6): 342-347.  doi:10.11896/j.issn.1002-137X.2017.6A.078
Abstract PDF(961KB) ( 800 )   
References | Related Articles | Metrics
In recent years, with the construction and development of intelligent city and safe city projects, video surveillance systems have become an efficient means for public security authorities to maintain security, combat crime, and prevent emergency incidents. With the rapid development of network communication technology and mobile intelligent terminals (such as smart phones and tablets), rapidly proliferating smart terminals carry sensing devices such as video cameras, audio and speed sensors. The video components of high-end smart terminals can take over part of the role of lower-end video surveillance equipment. The mass popularity of intelligent terminals makes it possible to build people-centric sensing and computing networks that fuse the physical world and the digital world. Effective integration of information from different information spaces can enhance public safety sensing and detection. For the multi-source information fusion of public safety incidents, a data surge model was proposed and defined, and a unity-of-witness system built on the model was verified. The unity-of-witness systems have been deployed in several Beijing bus stations and train stations.
Mandatory Access Control Model Based on Safety Value of Attributes
CHEN Jie-wei, GUAN Yu and LIU Jun
Computer Science. 2017, 44 (Z6): 348-350.  doi:10.11896/j.issn.1002-137X.2017.6A.079
Abstract PDF(192KB) ( 611 )   
References | Related Articles | Metrics
By quantitatively mapping the fine-grained attributes defined by ABAC and combining the basic features of the BLP and Biba mandatory access control models, this paper defines a quantitative concept of security values related to attributes and builds a closed environment in which security values can be calculated. A set of security values based on attribute mapping is then calculated to meet the basic conditions of the BLP and Biba mandatory access control models. Finally, the BLP and Biba models are further adapted to the attribute security values to form a flexible mandatory access control model based on attribute security values.
Analysis of Network Security Based on Uncertain Attack Graph Path
ZENG Sai-wen, WEN Zhong-hua, DAI Liang-wei and YUAN Run
Computer Science. 2017, 44 (Z6): 351-355.  doi:10.11896/j.issn.1002-137X.2017.6A.080
Abstract PDF(389KB) ( 1254 )   
References | Related Articles | Metrics
With the development of science and technology, existing attack graph generation algorithms remain deficient in describing network congestion, network disconnection, network delays and other unforeseen circumstances. Moreover, determining which route is more reliable when several routes reach the same target state has not been studied. Recent research on uncertain graphs describes real networks in fine detail. Therefore, this paper puts forward a new algorithm based on the uncertain graph model: real attacks are simulated in reverse, generating the attack graph backwards from the attacker's target, which avoids state-space explosion and helps defenders guard against network vulnerabilities. Experiments show that our approach generates the attack graph correctly and is also practical for the simulation of large networks.
Image Encryption Scheme Based on Chaos with Parameter Perturbation
ZHU Shu-qin and LI Jun-qing
Computer Science. 2017, 44 (Z6): 356-360.  doi:10.11896/j.issn.1002-137X.2017.6A.081
Abstract PDF(1163KB) ( 611 )   
References | Related Articles | Metrics
Due to the limited numerical accuracy of computers, chaotic sequences degenerate into periodic sequences. An image encryption scheme based on a chaotic system with parameter perturbation was proposed. Firstly, an existing chaotic system was improved to obtain a new chaotic system. Secondly, the new chaotic system's parameters were perturbed by the state variables of the existing chaotic system, producing a chaotic system with parameter perturbation. In the encryption scheme, the number of iterations is controlled by feedback from the cipher text, and the key stream is generated dynamically. Experimental results and security analysis show that the algorithm is sensitive to the key and has a large key space, the encrypted image has good statistical properties and is very sensitive to the plain image, and the algorithm can resist chosen plain-text and chosen cipher-text attacks.
Research on Network-layer Topology Discovery Algorithm Based on Multi-protocol
ZHOU Chang-jian, XING Jin-ge and LIU Hai-bo
Computer Science. 2017, 44 (Z6): 361-365.  doi:10.11896/j.issn.1002-137X.2017.6A.082
Abstract PDF(209KB) ( 1152 )   
References | Related Articles | Metrics
With the continuous development of information technology, cyberspace security receives more and more attention, and existing network security products have difficulty meeting users' increasingly extensive network security demands. This paper analyzed the importance of network topology discovery in the area of network security, along with the advantages and disadvantages of existing network topology discovery methods, and improved the original network-layer topology discovery algorithm. By improving the OSPF and SNMP topology discovery rules of the network layer, a network topology algorithm combining the advantages of SNMP and OSPF was achieved. Experimental results prove that it has good performance.
Prediction about Network Security Situation of Electric Power Telecommunication Based on Spark Framework and PSO Algorithm
JIN Xin, LI Long-wei, SU Guo-hua, LIU Xiao-lei and JI Jia-nan
Computer Science. 2017, 44 (Z6): 366-371.  doi:10.11896/j.issn.1002-137X.2017.6A.083
Abstract PDF(408KB) ( 860 )   
References | Related Articles | Metrics
With the expansion of the scale of electric power communication networks, they continuously produce huge amounts of communication data. At the same time, attacks on communication networks are constantly evolving, threatening the safety of the electric power communication network. To solve these problems, combining the Spark big-data computing framework with the advantages of PSO, a parallel PSO-optimized neural network algorithm on the Spark in-memory computing framework is put forward to predict the security situation of the electric power communication network. This study first introduced the Spark computing framework, whose in-memory computing and quasi-real-time processing fit the requirements of electric power communication big-data processing. Then a PSO optimization algorithm was proposed to modify the weights of the neural network, in order to increase its learning efficiency and accuracy. Combined with the parallelism of RDDs, a parallel PSO-optimized neural network algorithm was then proposed. Experiments and comparisons show that the Spark-based PSO-optimized neural network algorithm has high accuracy, and compared with a prediction method based on Hadoop, its processing speed is improved significantly.
Security of Subspace Code against Wiretap Attacks
LIU Yan-tao and WANG Xue-bing
Computer Science. 2017, 44 (Z6): 372-376.  doi:10.11896/j.issn.1002-137X.2017.6A.084
Abstract PDF(243KB) ( 713 )   
References | Related Articles | Metrics
A network system combining subspace codes and random linear network coding has the advantages of low encoding and decoding complexity, no need to attach coding vectors, and noncoherent communication, and it has been applied to network error correction. This paper addressed the security of subspace codes against wiretap attacks. The security performance is measured by the probability with which the attacker guesses the source messages. Based on the wiretap network model proposed by Cai and Yeung, we quantitatively calculated the guess probability with methods from linear algebra and combinatorics and obtained a closed-form solution. The result shows that subspace codes are weakly secure in the sense of probability. Compared with many coding schemes with perfect or weak security, however, subspace codes benefit from low complexity, high flexibility, topology independence, and the ability to fight wiretaps on multiple links. As a result, subspace codes are suitable for network applications with limited computation and moderate security requirements.
Differential Fault Analysis of PRINCE Lightweight Cryptographic Algorithm
ZOU Yi, LI Lang and JIAO Ge
Computer Science. 2017, 44 (Z6): 377-379.  doi:10.11896/j.issn.1002-137X.2017.6A.085
Abstract PDF(295KB) ( 1134 )   
References | Related Articles | Metrics
PRINCE is a lightweight cryptographic algorithm proposed at ASIACRYPT 2012 for protecting RFID tags and the communication security of devices such as smart cards in the Internet of things. A differential fault analysis method based on the PRINCE algorithm was proposed and discussed in this paper. The method adopts a semi-byte (nibble) fault model and performs differential fault analysis on the last round of PRINCEcore. Experimental results show that when a random semi-byte fault is injected into the last round of PRINCEcore, the key k1 can be recovered with 4 fault injections. Therefore, without protection, the PRINCE encryption algorithm can hardly resist differential fault analysis.
Encrypted Data Stream Recognition Based on Data Randomness and ELM
ZHOU Yu-huan, JIANG Da-wei, GONG Yong and CHEN Cong
Computer Science. 2017, 44 (Z6): 380-384.  doi:10.11896/j.issn.1002-137X.2017.6A.086
Abstract PDF(207KB) ( 847 )   
References | Related Articles | Metrics
This paper presented a method for identifying encrypted data streams based on data randomness and pattern recognition, without decrypting the data. The method uses the randomness distribution characteristics of different data as classification features and then applies pattern recognition to classify the data types. First, the NIST randomness test suite is applied to the data stream, yielding 15 randomness test values as classification features. Classification models are then built for the different types of data streams, and finally the trained model is used to identify unknown streams. Simulation results show that with the 15 features the proposed method can effectively classify the different types of data streams, reducing the error rate from 60% to 30%; with feature optimization, the error rate drops further to 15%.
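The NIST test values used as classification features can be illustrated with the suite's simplest member, the frequency (monobit) test. This is a sketch of the standard formula only, not the paper's implementation, which uses all 15 NIST tests:

```python
import math
import random

def monobit_feature(data: bytes) -> float:
    """Frequency (monobit) test from the NIST SP 800-22 suite, used here as
    one randomness feature: encrypted or compressed data looks random
    (large p-value), while structured plaintext is biased (tiny p-value)."""
    ones = sum(bin(b).count("1") for b in data)   # number of 1 bits
    n = 8 * len(data)
    s_obs = abs(2 * ones - n) / math.sqrt(n)      # normalized bit-count excess
    return math.erfc(s_obs / math.sqrt(2))        # p-value of the test

random.seed(0)
plain = bytes(1024)                                       # heavily biased stream
rand = bytes(random.getrandbits(8) for _ in range(1024))  # random-looking stream
print(monobit_feature(plain))   # ~0: clearly non-random
print(monobit_feature(rand))    # typically well above 0.01
```

A classifier would compute all 15 such p-values per stream and feed the resulting feature vector to the ELM.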
TrustedMarket:A Trusted Software Market Model Based on Trust Measurement Theory for Smart Terminal
PING Liu-qiong
Computer Science. 2017, 44 (Z6): 385-389.  doi:10.11896/j.issn.1002-137X.2017.6A.087
Abstract PDF(967KB) ( 486 )   
References | Related Articles | Metrics
Aiming at the problem that applications in software markets are not strictly examined and that third-party markets host large numbers of repackaged versions of legitimate software, TrustedMarket, a trusted application software market model based on a trust measurement mechanism, was designed. Because smart terminals have limited computing power, battery capacity and network traffic, and operate in complicated environments, the concepts of relationship maturity, loyalty and recommended service quality were proposed to describe the recommendation trust value provided by a smart terminal. A new feature description function was proposed to calculate the weight of historical loyalty evaluation values. Together, these techniques solve the problem of trust value calculation in the trust metric model. An implementation of the TrustedMarket model on the Android platform demonstrates that the model can address the untrustworthiness of current application software markets.
Oscillatory Behaviors of Malware Propagation Model in Wireless Sensor Networks with Time Delays and Reaction-diffusion Terms
ZHANG Xiao-pan and YUAN Ling-yun
Computer Science. 2017, 44 (Z6): 390-394.  doi:10.11896/j.issn.1002-137X.2017.6A.088
Abstract PDF(529KB) ( 666 )   
References | Related Articles | Metrics
This paper investigated the oscillatory behaviors of a malware propagation model for wireless sensor networks with time delays and reaction-diffusion terms. First, based on existing experimental evidence, a new delayed functional partial differential equation model is formulated by introducing both delay and diffusion; the model describes many practical architectures of malware propagation in wireless sensor networks. Second, by choosing the latent delay as the bifurcation parameter and analyzing the associated characteristic equation at the positive equilibrium, the stability of the positive constant steady state and a sufficient condition for the existence of a Hopf bifurcation are established. It is shown that the combined effects of delay and diffusion can make the delayed diffusive model oscillatory, producing both spatially homogeneous and spatially inhomogeneous periodic oscillations, suggesting that such delay and diffusion are deleterious to the security of wireless sensor networks. Finally, numerical examples are presented to illustrate and visualize the theoretical results.
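As a minimal sketch of how a latent delay enters such a propagation model, the following forward-Euler simulation integrates a delayed susceptible-infected system (diffusion omitted for brevity; the rates b and d, the delay tau, and the initial data are illustrative values, not taken from the paper):

```python
# S'(t) = -b*S(t)*I(t - tau),  I'(t) = b*S(t)*I(t - tau) - d*I(t)
# where tau is the latent delay: nodes infected at time t - tau only
# start spreading malware at time t.
b, d, tau, dt, T = 0.5, 0.1, 2.0, 0.01, 60.0
steps, lag = int(T / dt), int(tau / dt)
S, I = [0.99], [0.01]                     # fractions of susceptible/infected nodes
for k in range(steps):
    I_lag = I[k - lag] if k >= lag else I[0]   # delayed infection term
    S.append(S[k] + dt * (-b * S[k] * I_lag))
    I.append(I[k] + dt * (b * S[k] * I_lag - d * I[k]))
print(f"final infected fraction: {I[-1]:.3f}")
```

The paper's analysis additionally keeps the reaction-diffusion terms and studies when this kind of delayed system loses stability through a Hopf bifurcation.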
Social Recommendations Method Based on Differential Privacy
PENG Hui-li, ZHANG Xiao-jian and JIN Kai-zhong
Computer Science. 2017, 44 (Z6): 395-398.  doi:10.11896/j.issn.1002-137X.2017.6A.089
Abstract PDF(277KB) ( 607 )   
References | Related Articles | Metrics
User-item recommendation techniques may disclose user preferences in social networks, and classical anonymization-based methods are ill-suited to this scenario because of attackers' background knowledge. This paper proposed an efficient social item recommendation method called DPSR (Differentially Private Social Recommendation). DPSR employs clustering to obtain the users' social groups and perturbs the weight of each user-item edge with noise generated by the Laplace mechanism. To handle outliers in the edge weights, DPSR combines k-median clustering with the exponential mechanism to boost the recommendation results. The experimental results show that DPSR outperforms its competitors and achieves accurate results.
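The Laplace-mechanism perturbation of edge weights can be sketched as follows (the sensitivity, epsilon, and the weights are illustrative values, not the paper's):

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Draw Laplace(0, scale) noise via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def perturb_edge_weights(weights, epsilon, sensitivity=1.0, seed=42):
    """Add Laplace(sensitivity / epsilon) noise to each user-item edge
    weight, which satisfies epsilon-differential privacy for the weights."""
    rng = random.Random(seed)
    b = sensitivity / epsilon
    return [w + laplace_noise(b, rng) for w in weights]

noisy = perturb_edge_weights([3.0, 5.0, 4.0, 1.0], epsilon=0.5)
print(noisy)
```

Smaller epsilon means a larger noise scale and stronger privacy at the cost of recommendation accuracy, which is the trade-off DPSR's clustering step mitigates.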
Research on Closed Loop in Graph Theory
WANG Bin-jun, SHAO Hua, HE Ying-rui, CAI Wen-zhe and LI Jing-ying
Computer Science. 2017, 44 (Z6): 399-401.  doi:10.11896/j.issn.1002-137X.2017.6A.090
Abstract PDF(163KB) ( 746 )   
References | Related Articles | Metrics
Aiming at the closed-loop problem in graph theory, we put forward the concepts of the minimum closed loop and the minimum double closed loop, whose nodes satisfy certain properties, as well as the concept of the "sunflower" double closed loop. We give formal definitions, algorithms and algorithm analyses for the minimum closed loop, the minimum double closed loop and the "sunflower" double closed loop, supplementing the closed-loop content of graph theory. The application of closed-loop detection to video is also studied, providing theoretical and technical support for the police to detect and quickly lock onto criminal suspects in video surveillance cases.
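The paper's formal definitions are not reproduced here, but a minimum closed loop in the plain graph-theoretic sense (the shortest cycle, i.e. the girth) can be found with the classic BFS-from-every-vertex approach; this sketch assumes an unweighted undirected graph:

```python
from collections import deque

def shortest_cycle_length(adj):
    """Length of the minimum closed loop in an undirected graph: BFS from
    every node; a non-tree edge (u, v) closes a cycle of length
    dist[u] + dist[v] + 1, and the minimum over all sources is exact."""
    best = float("inf")
    for src in adj:
        dist, parent = {src: 0}, {src: None}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v], parent[v] = dist[u] + 1, u
                    q.append(v)
                elif parent[u] != v:          # non-tree edge closes a cycle
                    best = min(best, dist[u] + dist[v] + 1)
    return best

# Triangle with a pendant vertex: the minimum closed loop has length 3.
g = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
print(shortest_cycle_length(g))   # -> 3
```

This runs in O(V·E); the paper's minimum double closed loop and "sunflower" structures build further constraints on top of cycle detection of this kind.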
Collective Matrix Factorization Algorithm Based on Bias Amendment
LI Ming, YUE Bin and DAI Yong-ping
Computer Science. 2017, 44 (Z6): 402-406.  doi:10.11896/j.issn.1002-137X.2017.6A.091
Abstract PDF(309KB) ( 798 )   
References | Related Articles | Metrics
Although the matrix factorization model has become the major method in collaborative filtering, it ignores the combined influence of user bias and latent item characteristics on recommendation quality. This research therefore proposed a collective matrix factorization algorithm that jointly factorizes the item rating matrix and the item co-occurrence matrix to amend user bias within the matrix factorization model. Experimental results on different benchmark datasets confirm the rationality of the joint factorization and indicate clear improvement in ranking-based metrics compared with the traditional matrix factorization model.
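As a sketch of the baseline being extended, the following trains a plain biased matrix factorization by SGD; the co-occurrence factorization itself is omitted, and all hyperparameters and ratings are illustrative:

```python
import random

def train_biased_mf(ratings, n_users, n_items, k=2, lr=0.01, reg=0.05, epochs=200):
    """SGD matrix factorization with a global mean plus user/item bias terms:
    r_hat(u, i) = mu + bu[u] + bi[i] + P[u] . Q[i]."""
    rng = random.Random(0)
    mu = sum(r for _, _, r in ratings) / len(ratings)
    bu, bi = [0.0] * n_users, [0.0] * n_items
    P = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = mu + bu[u] + bi[i] + sum(P[u][f] * Q[i][f] for f in range(k))
            e = r - pred
            bu[u] += lr * (e - reg * bu[u])      # update bias terms
            bi[i] += lr * (e - reg * bi[i])
            for f in range(k):                   # update latent factors
                P[u][f], Q[i][f] = (P[u][f] + lr * (e * Q[i][f] - reg * P[u][f]),
                                    Q[i][f] + lr * (e * P[u][f] - reg * Q[i][f]))
    return mu, bu, bi, P, Q

data = [(0, 0, 5), (0, 1, 3), (1, 0, 4), (1, 2, 1), (2, 1, 4), (2, 2, 2)]
mu, bu, bi, P, Q = train_biased_mf(data, n_users=3, n_items=3)
pred = mu + bu[0] + bi[2] + sum(P[0][f] * Q[2][f] for f in range(2))
print(f"predicted rating r(0,2) = {pred:.2f}")
```

The collective variant adds a second reconstruction loss over the item co-occurrence matrix that shares the item factors Q.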
Distributed and Heterogeneous Multi-agent System for Attributed Graph Clustering
BIAN Zhai-an, LI Hui-jia, CHEN Jun-hua, MA Yu-han and ZHAO Dan
Computer Science. 2017, 44 (Z6): 407-413.  doi:10.11896/j.issn.1002-137X.2017.6A.092
Abstract PDF(473KB) ( 498 )   
References | Related Articles | Metrics
Recent years have witnessed renewed attention to attributed graph clustering, which aims to divide the nodes of an attributed graph into several clusters so that each cluster has a densely connected intra-cluster structure and homogeneous attribute values. Existing methods ignore the selfish nature of nodes/objects in real-life contexts, and some open problems, such as heterogeneous information integration and high computational cost, have not been effectively resolved. To this end, we considered the attributed graph clustering problem as a cluster formation game among selfish node-agents. To effectively integrate both topological and attributive information, we proposed tightness and homogeneity constraints on the node-agents' strategy selection; the game process then converges to a weakly Pareto-Nash equilibrium almost surely. For implementation, we carefully designed a distributed and heterogeneous multi-agent system, on the basis of which a fast distributed learning algorithm is also given. The main feature of the proposed algorithm is that the overlap rate of the resulting partition can be well controlled by a pre-specified threshold. Finally, we conducted a set of simulation experiments on real-life social networks and report comparisons.
Performance Comparison of Clustering Algorithms in Spark
HAI Mo and ZHANG You
Computer Science. 2017, 44 (Z6): 414-418.  doi:10.11896/j.issn.1002-137X.2017.6A.093
Abstract PDF(868KB) ( 1474 )   
References | Related Articles | Metrics
The performance of three typical clustering algorithms in Spark (K-means, Bisecting K-means and Gaussian Mixture) was compared experimentally in terms of runtime, speedup, scalability and sizeup. The results show that when the dataset is hundreds of megabytes, the runtime of all three algorithms decreases markedly as the number of nodes increases. When the dataset exceeds 500MB, the speedup of the three algorithms increases more noticeably and grows linearly with the number of nodes. The scalability of the three algorithms decreases as the number of nodes increases; for datasets over 500MB, the Bisecting K-means algorithm has the lowest scalability of the three. When the dataset exceeds 100MB, the sizeup of the Gaussian Mixture algorithm is much larger than that of the K-means and Bisecting K-means algorithms.
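The comparison metrics are standard and easy to state in code (the runtimes below are hypothetical placeholders, not the measured values from the experiments):

```python
# Standard scalability metrics for parallel algorithms.
def speedup(t1, tn):
    """T(1 node) / T(n nodes) on a fixed dataset: ideal value is n."""
    return t1 / tn

def sizeup(t_small, t_large):
    """T(m * data) / T(data) on a fixed cluster: how cost grows with data."""
    return t_large / t_small

def scaleup(t1_small, tn_large):
    """T(1 node, data) / T(n nodes, n * data): ideal value is 1.0."""
    return t1_small / tn_large

runtimes = {1: 120.0, 2: 70.0, 4: 40.0}   # seconds per node count, hypothetical
for n, t in runtimes.items():
    print(n, "nodes: speedup =", round(speedup(runtimes[1], t), 2))
```

Reading the paper's results through these definitions: linear speedup beyond 500MB means the cluster is well utilized, while a large sizeup for Gaussian Mixture means its cost grows fastest with data volume.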
Community Structure Detection Algorithm Based on Nodes’ Eigenvectors
LU Yi-hong, ZHANG Zhen-ning and YANG Xiong
Computer Science. 2017, 44 (Z6): 419-423.  doi:10.11896/j.issn.1002-137X.2017.6A.094
Abstract PDF(341KB) ( 608 )   
References | Related Articles | Metrics
Community structure is one of the ubiquitous and significant topological characteristics of complex networks and helps us understand a network's structure and functions. Similarity indices play a vital role in community detection but suffer from high time complexity and low accuracy. To address these two shortcomings, the nodes of a complex network are abstracted into a multi-dimensional data set based on the theory of information transmission in networks, and a new community detection algorithm was proposed in combination with the traditional K-means clustering algorithm. Experimental results on the Zachary Karate Club, Jazz Musician and Facebook networks show that the algorithm is effective and accurate.
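A minimal sketch of the clustering stage, assuming the nodes have already been embedded as multi-dimensional vectors (the embedding based on information transmission is the paper's contribution and is not reproduced here):

```python
def kmeans(points, k, iters=20):
    """Plain k-means (centers seeded with the first k points for determinism).
    In the paper's setting each point is a network node embedded as a vector,
    so the resulting clusters correspond to communities."""
    centers = list(points[:k])
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:                        # assign to nearest center
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[j].append(p)
        centers = [tuple(sum(col) / len(g) for col in zip(*g)) if g else centers[j]
                   for j, g in enumerate(groups)]   # recompute centroids
    return centers, groups

# Two well-separated blobs standing in for two communities.
pts = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 5.0), (5.1, 4.9)]
centers, groups = kmeans(pts, k=2)
print(sorted(len(g) for g in groups))   # -> [3, 3]
```

The quality of the detected communities then depends entirely on how faithfully the node embedding preserves network proximity.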
Improved Adaptive Spectral Clustering NJW Algorithm
LI Jin-ze, XU Xi-rong, PAN Zi-qi and LI Xiao-jie
Computer Science. 2017, 44 (Z6): 424-427.  doi:10.11896/j.issn.1002-137X.2017.6A.095
Abstract PDF(247KB) ( 952 )   
References | Related Articles | Metrics
Clustering is a recent research hotspot in machine learning. To cluster in sample spaces of arbitrary shape, scholars have proposed excellent algorithms such as spectral clustering and graph-theoretic clustering. This paper first introduces the basic ideas of the classical NJW spectral clustering algorithm and the NeiMu graph-theoretic clustering algorithm, and then presents an improved adaptive spectral clustering NJW algorithm. The approach overcomes the shortcoming of the classical NJW algorithm that the number of clusters must be specified in advance and the clustering must be repeated to obtain the final classification. We compared the adaptive NJW algorithm with the classical NJW algorithm and the NeiMu graph-theoretic clustering algorithm on UCI standard data sets and a measured data set. Experimental results show that the adaptive NJW algorithm is convenient and practical.
SMS Automatic Classification Based on Relational Matrix
LI Feng and WAN Xiao-qiang
Computer Science. 2017, 44 (Z6): 428-432.  doi:10.11896/j.issn.1002-137X.2017.6A.096
Abstract PDF(481KB) ( 756 )   
References | Related Articles | Metrics
Automatic SMS classification is a hot issue in short-text research. For this problem, this paper put forward a feature extraction method based on relational strength and a relational matrix, and designed a fully supervised learning algorithm based on the relational matrix. To implement a self-learning system, this paper also discussed a semi-supervised learning algorithm based on the relational matrix, combined with an active learning algorithm with manual correction. Finally, experimental results illustrate the effectiveness and efficiency of the algorithm.
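As an illustrative stand-in for the relational matrix (the paper's exact definition of relational strength is not reproduced here), one can count message-level co-occurrence of word pairs:

```python
from collections import defaultdict
from itertools import combinations

def relational_matrix(messages):
    """Build a term-term relational matrix where the strength of (w1, w2)
    is the number of messages in which the two words co-occur. This is an
    illustrative stand-in for the paper's relational-strength definition."""
    strength = defaultdict(int)
    for msg in messages:
        words = sorted(set(msg.lower().split()))   # unique words, canonical order
        for w1, w2 in combinations(words, 2):
            strength[(w1, w2)] += 1
    return strength

spam = ["win free prize now", "free prize claim now"]
m = relational_matrix(spam)
print(m[("free", "prize")])   # -> 2: the pair co-occurs in both messages
```

A classifier can then score a new message by the total relational strength its word pairs accumulate in each class's matrix.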
Attributed Graph Clustering Algorithm Based on Cluster-aware Multiagent System
SHI Kai, REN Luo-kun, PENG Yi-ming and LI Hui-jia
Computer Science. 2017, 44 (Z6): 433-437.  doi:10.11896/j.issn.1002-137X.2017.6A.097
Abstract PDF(387KB) ( 686 )   
References | Related Articles | Metrics
Existing methods for partitioning nodes into tightly correlated clusters apply clustering techniques to attributed graphs based on node connectivity or attribute similarity. In this paper, we regard each node as an autonomous agent and developed an accurate multi-agent system to extract overlapping clusters in attributed graphs. First, a kernel function with a bandwidth factor is introduced to measure the influence of each agent, and the agents with the highest local influence are selected as leader agents. Next, a novel local expansion strategy is proposed by which each leader agent absorbs its closest followers in the graph. Then, a cluster-aware multi-agent system was designed so that the optimal overlapping cluster configuration can be uncovered. The method is highly efficient: its computational time depends nearly linearly on the number of edges. Finally, the proposed method is demonstrated on synthetic benchmark graphs and real-life attributed graphs to verify its performance.
Personalized Recommendation Based on Probabilistic Matrix Factorization in Big Data Environment
TIAN Xian-zhong and SHEN Jie
Computer Science. 2017, 44 (Z6): 438-441.  doi:10.11896/j.issn.1002-137X.2017.6A.098
Abstract PDF(295KB) ( 638 )   
References | Related Articles | Metrics
Probabilistic matrix factorization is a collaborative filtering algorithm widely used in recent years. Addressing how matrix factorization can improve recommendation quality while breaking through the limits on computation time and resources in big-data environments, we introduced an improved probabilistic matrix factorization algorithm that integrates neighbor information, together with a parallel version (parallel-IPMF) that overcomes the high computational complexity and the difficulty of parallelization. We implemented the algorithm on the MapReduce parallel computation framework with a real dataset. The experimental results show that our algorithm improves recommendation quality and reduces computation time.
Study on Military Equipment Maintenance Support Sites’ Location Problem Based on BP and RBF Neural Network
DONG Peng, LU Wei and QIN Fu-rong
Computer Science. 2017, 44 (Z6): 442-445.  doi:10.11896/j.issn.1002-137X.2017.6A.099
Abstract PDF(239KB) ( 557 )   
References | Related Articles | Metrics
To solve the problem of scientifically locating military equipment maintenance support sites, we proposed location methods based on BP and RBF neural network models, designed the computational procedures, illustrated them with an application case, and finally identified the types of problem to which each of the two methods is suited.
Research on Chinese Texts Sentiment Classification Approach Based on PSO-GP
HUANG Yi and WANG Juan
Computer Science. 2017, 44 (Z6): 446-450.  doi:10.11896/j.issn.1002-137X.2017.6A.100
Abstract PDF(454KB) ( 790 )   
References | Related Articles | Metrics
Sentiment orientation analysis of Chinese texts is a key technology for mining and analyzing network public opinion. This paper proposed a Chinese text sentiment classification method based on a particle swarm optimization-Gaussian process (PSO-GP) algorithm that searches for optimal hyperparameters. It addresses the problems of the traditional Gaussian process iteration, such as the difficulty of determining the conjugate gradient, strong dependence on the initial value, and the tendency to fall into local minima. Text corpora are collected with multi-threaded web crawler technology to construct a domain-specific emotion dictionary; the most effective features are selected via emotional words to reduce the data dimension, and feature vectors are generated from feature words with the TF-IDF algorithm. Experimental results show that the classification accuracy of the improved model rises by nearly 15%.
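The TF-IDF feature vector generation step can be sketched with the standard formula (the paper's exact weighting variant may differ):

```python
import math
from collections import Counter

def tfidf(docs):
    """TF-IDF weights for tokenized documents, using the standard formula
    w(t, d) = (tf(t, d) / |d|) * log(N / df(t))."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))   # document frequency per term
    return [{t: tf / len(d) * math.log(n / df[t]) for t, tf in Counter(d).items()}
            for d in docs]

# Toy tokenized corpus; a real pipeline would first segment Chinese text
# and keep only the selected emotional feature words.
docs = [["good", "movie", "good"], ["bad", "movie"], ["good", "plot"]]
w = tfidf(docs)
print(round(w[0]["good"], 3))
```

Terms appearing in every document get weight zero, so the vectors emphasize the discriminative emotional words the feature selection step retains.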
Novel Approach on Collaborative Filtering Based on Gaussian Mixture Model
CHENG Ying-chao, WANG Rui-hu and HU Zhang-ping
Computer Science. 2017, 44 (Z6): 451-454.  doi:10.11896/j.issn.1002-137X.2017.6A.101
Abstract PDF(340KB) ( 616 )   
References | Related Articles | Metrics
Recommender systems are a competitive solution to the information overload problem, and collaborative filtering is an effective method for building them. Matrix factorization is widely used for collaborative filtering, but existing matrix factorization techniques are affected by rating noise and their robustness falls short of expectations. We attributed the negative effect of rating noise to the universally applied hypothesis that rating data follow a Gaussian distribution. To solve this problem, we proposed a collaborative filtering algorithm based on a Gaussian mixture model: we assumed the rating data obey a Gaussian mixture distribution and then applied a Bayesian probabilistic matrix factorization model for recommendation. Besides, a semi-supervised algorithm involving both labeled and unlabeled data was proposed. The experimental results show that the Gaussian-mixture-based collaborative filtering algorithm is much more robust: it alleviates the negative effect of rating noise and improves prediction accuracy as well.
Double Sunburst Matrix Visualization to Overview Majors Distributary Data
LI Hui, CHEN Hong-qian, DONG Shuang and MA Li-yi
Computer Science. 2017, 44 (Z6): 455-458.  doi:10.11896/j.issn.1002-137X.2017.6A.102
Abstract PDF(232KB) ( 592 )   
References | Related Articles | Metrics
To display and analyze the flow of attributes in major-distributary data, a sunburst matrix visualization method was proposed in this paper. The data with various attributes are first selected and counted, and the statistics are mapped to the visual elements of the result in three steps. First, scattered bubble charts express the total numbers of students for each source and destination. Second, pie charts are embedded in the bubbles to show gender proportions and serve as the inner layer of the sunburst. Third, the students' grade point attribute in each category is displayed comparably in the outer layer of the sunburst. The experimental results and the evaluation by management staff show that the visualization directly expresses flow characteristics such as student numbers, gender proportion and grade point, and that detailed classification of students and improvements in professional construction and training can be achieved from the visualization results.
Research and Implementation of Real-time Exchange System in Data Center
TANG Xu, WANG Fei, LI Tong and ZHANG Peng
Computer Science. 2017, 44 (Z6): 459-462.  doi:10.11896/j.issn.1002-137X.2017.6A.103
Abstract PDF(471KB) ( 492 )   
References | Related Articles | Metrics
Applications of real-time information exchange systems (RIES) in data centers require real-time and reliable transmission of data flows. This paper analyzed the features of data flows in data centers and proposed an RIES architecture, focusing on methods for reliable real-time data transmission. The methods use a thread-control module and a synchronous blocking message I/O model to process concurrent data flows; packet loss is handled with a loop caching mechanism, and system reliability is ensured by a dual-system synchronization technique. To achieve real-time transmission and decrease processing delay, a queue-priority-based task scheduling (QPTS) algorithm was proposed, which schedules data flows on demand according to priority, deadline and the number of remaining packets, improving the processing speed of the RIES. The evaluation results show the efficiency of the algorithm and that the system's real-time behavior and reliability are guaranteed.
Researches of Redundancy Coding Technologies on Reducing Reconstruction Data Amount
MA Liang-li and LIU Qing
Computer Science. 2017, 44 (Z6): 463-469.  doi:10.11896/j.issn.1002-137X.2017.6A.104
Abstract PDF(679KB) ( 879 )   
References | Related Articles | Metrics
In order to avoid data loss from hardware failure or server breakdown, redundancy coding is widely employed in distributed storage systems for data reliability. However, traditional erasure codes, such as Reed-Solomon codes, bear the burden of huge rebuilding traffic: whereas replication only needs to read and transfer the lost data, erasure coding must read and transfer a much larger amount, consuming far more disk I/O and network bandwidth. An erasure-code-based distributed storage system therefore takes longer to reconstruct data than a replication-based system, exposing the whole system to a long degraded stage and increasing the risk of permanent data loss. To solve this problem, many repair-bandwidth-efficient codes have been proposed, but they are usually compared only with traditional Reed-Solomon codes and lack comprehensive comparison on practical storage systems. We systematically analyzed these repair-bandwidth-efficient codes along several significant dimensions, such as the reduction in reconstruction traffic, providing a valuable basis and reference for choosing suitable erasure codes for practical systems.
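The repair-traffic gap motivating these codes follows from standard formulas; the parameters below are illustrative:

```python
# Repair traffic for one lost block: an (n, k) Reed-Solomon code reads k
# surviving blocks of size B, while a minimum-storage regenerating (MSR)
# code contacts d helpers and downloads d * B / (d - k + 1) in total.
def rs_repair_traffic(k, block):
    return k * block

def msr_repair_traffic(k, d, block):
    return d * block / (d - k + 1)

B = 64  # MB per block, illustrative
print("RS (k=6):       ", rs_repair_traffic(6, B), "MB")
print("MSR (k=6, d=10):", msr_repair_traffic(6, 10, B), "MB")
```

With k=6 and d=10, the regenerating code cuts repair traffic to a third of Reed-Solomon's, which is exactly the kind of reduction the surveyed codes trade against other properties such as storage overhead and update cost.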
Representation Tool of Data Relations in Database Design Based on Data Source-target Digraph
CHEN Bing-chuan, CHEN Ai-xiang, WU Xiang-jun and LI Lei
Computer Science. 2017, 44 (Z6): 470-474.  doi:10.11896/j.issn.1002-137X.2017.6A.105
Abstract PDF(536KB) ( 577 )   
References | Related Articles | Metrics
Database design is a key step between requirement analysis and system implementation in information system engineering. Based on the result of requirement analysis, traditional approaches to database design rely on human constructive thinking to abstract objects and their relationships. Because these approaches have defects in describing data structures and relationships, especially relationships between data items, their results may deviate from the real situation. In this paper, we presented a new database design tool, the data flowing direction graph (DFDG), to represent objects, relationships, and the relationships between data items. Our cases show that DFDG results are simpler, clearer, more correct and less ambiguous than those of former methods, and can improve the correctness and reliability of information systems while shortening the implementation time needed.
Bibliographic Analysis for Code/API Recommendation Literatures
NIE Li-ming, JIANG He, GAO Guo-jun, WANG Han and XU Xiu-juan
Computer Science. 2017, 44 (Z6): 475-482.  doi:10.11896/j.issn.1002-137X.2017.6A.106
Abstract PDF(880KB) ( 1070 )   
References | Related Articles | Metrics
Code/API recommendation approaches can help developers implement programming tasks efficiently, and much related literature has been published. Although some researchers have presented the background and state of this research field, essential domain knowledge is still lacking, such as the most productive authors, institutions and countries, the most popular literature and authors, and the current research hotspots. Employing a classical bibliographic analysis framework, we conducted a basic bibliographic analysis and explored cooperation among authors based on the literature data. The basic bibliographic analysis found that the most productive author is Cristina Videira Lopes, the most productive institution is the University of California, Irvine, most of the literature comes from the USA, and the most influential author is Denys Poshyvanyk. The analysis of author cooperation found that three authors, namely Tao Xie, Cristina Videira Lopes and Denys Poshyvanyk, are the most active in this research field, and that two topics, performance improvement of recommendation algorithms and their application to other software engineering tasks, are the most popular research topics.
Research on Software Defect Prediction Based on AIRS Using PCA
ZHU Chao-yang, CHEN Xiang-zhou, YAN Long and ZHANG Xin-ming
Computer Science. 2017, 44 (Z6): 483-485.  doi:10.11896/j.issn.1002-137X.2017.6A.107
Abstract PDF(229KB) ( 598 )   
References | Related Articles | Metrics
Aiming at the problem that software systems are becoming more and more complex and software defects are difficult to detect, a software defect prediction model based on an artificial immune recognition system (AIRS) was proposed. The model first uses principal component analysis to reduce the dimension of the original data set, and then calculates the affinity between antibody and antigen with a Gaussian radial basis function (RBF). Based on the affinity calculation, antibody training, resource competition and the selection of memory cells are conducted, and classification is finally performed with the memory cell set. Simulation shows that the model's prediction accuracy reaches 84%~90% and its precision reaches 85%~91%.
Mining of API Usage Pattern Based on Clustering and Partial Order Sequences
WANG Shu-yi and DONG Dong
Computer Science. 2017, 44 (Z6): 486-490.  doi:10.11896/j.issn.1002-137X.2017.6A.108
Abstract PDF(235KB) ( 615 )   
References | Related Articles | Metrics
During software development, a developer often needs to follow specific usage patterns of an application programming interface (API), but few of these patterns are well documented for developers to refer to. To mine API usage patterns, this paper proposed an approach based on clustering and frequent closed partial-order sequence mining. After the source code is parsed into abstract syntax trees, the extracted API call sequences are hierarchically clustered. Finally, API usage patterns are mined with the depth-first frequent closed partial-order algorithm (DFP). The experiment shows that this approach obtains more succinct candidate API usage patterns than SPADE and BIDE on the same dataset.
Research on Data Mining Algorithm in Wine Information Data Analysis System
HAO Yan-ni, WU Su-ping and TIAN Wei-li
Computer Science. 2017, 44 (Z6): 491-494.  doi:10.11896/j.issn.1002-137X.2017.6A.109
Abstract PDF(278KB) ( 1267 )   
References | Related Articles | Metrics
With the rapid development of information technology, classical computer algorithms have been extensively studied and applied in the wine industry. Machine learning algorithms use artificial intelligence techniques and large numbers of training samples to automatically identify the models and parameters an operation needs. This paper applied machine learning algorithms to data mining, taking classification-based data mining as an example. For the weak generalization ability of the support vector machine (SVM), we proposed an improved variant, NSVM, in which the training set is selected precisely: for each sample, the choice is decided by its similarities to and differences from the nearest class, and the SVM is then trained to obtain the classifier. For the large training data set required by the k-nearest neighbor (kNN) algorithm, an improved progressive idea was given for finding the nearest neighbors. Experiments show that NSVM outperforms SVM in classification accuracy and speed, and that the complexity of the improved kNN algorithm is significantly reduced. In addition, a wine information and data analysis system was designed that uses data mining to analyze, contrast and match very large amounts of wine information, so as to mine comparative information on the main components of wine and potential marketing information. By comparing the components against those of high-quality wines, the analysis of wine-related information and data can help wine producers assess wine content and wine quality.
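A minimal sketch of the kNN baseline whose neighbor search the progressive improvement targets (toy two-feature samples; the real system uses the full wine component data):

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Plain k-nearest-neighbor majority vote on squared Euclidean distance;
    this exhaustive scan over the training set is the cost the paper's
    progressive neighbor search reduces."""
    by_dist = sorted(train, key=lambda s: sum((a - b) ** 2
                                              for a, b in zip(s[0], query)))
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

# Toy "wine" samples: (alcohol, acidity) -> class label, illustrative values.
samples = [((12.0, 3.1), "A"), ((12.2, 3.0), "A"), ((11.9, 3.2), "A"),
           ((14.1, 2.4), "B"), ((14.3, 2.5), "B"), ((13.9, 2.6), "B")]
print(knn_predict(samples, (12.1, 3.05)))   # -> A
```

Every query scans all training samples, which is why reducing or ordering the candidate set, as the improved progressive idea does, lowers the algorithm's complexity.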
Detection of Large Class Based on Latent Semantic Analysis
MA Sai and DONG Dong
Computer Science. 2017, 44 (Z6): 495-498.  doi:10.11896/j.issn.1002-137X.2017.6A.110
Abstract PDF(177KB) ( 763 )   
References | Related Articles | Metrics
Large Class is a kind of object-oriented design flaw. To overcome the insufficiency of traditional Large Class detection, which only considers structural metrics of the source code, this paper proposed a mean concept similarity metric based on latent semantic analysis (LSA). A term-document matrix is first formed from the identifiers and comments extracted from the source code; the similarity between methods and the mean concept similarity of a class are then computed in the LSA space. The conceptual measure is combined with the cyclomatic complexity of the source code to identify Large Classes. Experiments on the open-source Landfill data set show that both the detection accuracy and the recall rate of this method increase compared with traditional approaches that detect Large Classes through structural information alone.
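The similarity computation at the core of the approach can be sketched as cosine similarity between the term vectors of two methods; a full pipeline would first project the term-document matrix with a truncated SVD, and the terms and counts here are illustrative:

```python
import math

def cosine(u, v):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Term vectors for two methods of one class; rows are terms extracted from
# identifiers and comments: [order, save, render, pixel].
order_save = [2, 1, 0, 0]
render_draw = [0, 0, 3, 2]
print(round(cosine(order_save, render_draw), 2))   # -> 0.0: unrelated concepts
```

A class whose methods show a low mean pairwise similarity, like the two above, mixes unrelated concepts, which combined with high cyclomatic complexity flags it as a Large Class candidate.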
Efficiency Analysis of Different Statistical Algorithms on Statistical Model Checking
GAO Wan-ling, HONG Mei, YANG Qiu-hui and ZHAO He
Computer Science. 2017, 44 (Z6): 499-503.  doi:10.11896/j.issn.1002-137X.2017.6A.111
Abstract PDF(541KB) ( 764 )   
References | Related Articles | Metrics
Recently, statistical model checking has been widely used, and different statistical algorithms affect its performance differently. This paper mainly compared the running times of different statistical algorithms and analyzed the environments to which each is suited. The algorithms include the Chernoff algorithm, the sequential algorithm, smart aim-listed probability estimation, smart content testing and the Monte Carlo algorithm. The models are the Wireless LAN (WLAN) protocol and the Dining Philosophers problem, verified with the PLASMA model checking tool. The results show that the algorithms' efficiency varies with the environment: the sequential algorithm is best suited to verifying state reachability, with the best time performance, while smart content testing and the Monte Carlo algorithm are suited to verifying complex models. This conclusion can guide the selection of statistical algorithms in model checking and thus improve its efficiency.
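The cost of the Chernoff algorithm, for instance, is governed by the Chernoff-Hoeffding bound on the number of simulation runs:

```python
import math

# Chernoff-Hoeffding bound: to estimate a property's probability within
# +/- epsilon with confidence 1 - delta, simulate at least
# N = ceil(ln(2 / delta) / (2 * epsilon**2)) independent runs of the model.
def chernoff_sample_count(epsilon: float, delta: float) -> int:
    return math.ceil(math.log(2 / delta) / (2 * epsilon ** 2))

print(chernoff_sample_count(0.01, 0.05))   # runs for +/-1% at 95% confidence
```

Because N grows with 1/epsilon^2 independently of the model, this algorithm's running time is predictable but can be large, which is one reason adaptive schemes like the sequential algorithm win on simple reachability properties.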
Queuing Theory-guided Performance Evaluation on Reconfigurable High-speed Device Connected Bus
ZHANG Shao-nan, QIU Ke-ni, ZHANG Wei-gong, WANG Jing, ZHENG Jia-xin, BAI Rui-ying and ZHU Xiao-yan
Computer Science. 2017, 44 (Z6): 504-509.  doi:10.11896/j.issn.1002-137X.2017.6A.112
Abstract PDF(387KB) ( 638 )   
References | Related Articles | Metrics
UM-BUS is a high-speed serial bus with dynamic fault tolerance and remote access capability. Performance modeling in advance is essential for comprehensive evaluation and optimization of the bus. Targeting this issue, a performance evaluation model based on queuing theory was proposed in this paper. Qualitatively, the model describes the dataflow relationships among the slave nodes and the waiting and arrival characteristics of packets at the slave nodes; quantitatively, it analyzes the maximum, minimum and average delays of the packets on the transmission lanes. Tests on the MATLAB platform provide packet waiting times and transmission times on the bus. The experimental results can help designers understand the bus system's behavior in practical applications, so that they can carry out reconfigurations and improve the efficiency of UM-BUS.
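As a minimal illustration of the queuing-theory approach (the paper's model is more detailed), M/M/1 formulas give per-node delay estimates; the arrival and service rates below are illustrative, not UM-BUS measurements:

```python
# M/M/1 steady-state metrics for packet delay at a single node
# (lambda_ = arrival rate, mu = service rate, both in packets/ms).
def mm1_metrics(lambda_: float, mu: float):
    assert lambda_ < mu, "queue is unstable when arrival rate >= service rate"
    rho = lambda_ / mu           # link utilization
    wq = rho / (mu - lambda_)    # mean waiting time in the queue
    w = 1.0 / (mu - lambda_)     # mean total delay (waiting + service)
    return rho, wq, w

rho, wq, w = mm1_metrics(lambda_=8.0, mu=10.0)
print(f"utilization={rho:.1f}, wait={wq:.2f} ms, total delay={w:.2f} ms")
```

The steep growth of delay as utilization approaches 1 is what makes this kind of model useful for sizing bus lanes before implementation.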
Research on Construction and Application of Radio Monitoring Data Warehouse
TIAN Bin, ZHU Ya-lei, ZHANG Yun-chun, HU Jian-tao and ZHANG Chen-bin
Computer Science. 2017, 44 (Z6): 510-514.  doi:10.11896/j.issn.1002-137X.2017.6A.113
Abstract PDF(610KB) ( 539 )   
When faced with massive radio signal detection and surveillance data, simple storage and queries fail to meet application requirements. To satisfy the needs of higher-level decision making and intelligent monitoring services in radio monitoring systems, a radio monitoring data warehouse was designed based on preprocessing of the data collected from existing radio monitoring systems. First, ETL (Extract/Transform/Load) rules were defined to collect data from application systems and information platforms. Second, an unknown-signal data cube was built after designing its dimensions, measures, and levels. Finally, analysis, prediction, and decision-making functions were implemented on top of the multidimensional data models. The warehouse helps to enhance the business functions related to unknown signals.
Research on Software Development System of Internet of Things Terminal Equipment
WANG Pan-zao
Computer Science. 2017, 44 (Z6): 515-518.  doi:10.11896/j.issn.1002-137X.2017.6A.114
Abstract PDF(389KB) ( 522 )   
This paper studies the software development system, its development software, and simulation and testing. On the system's software development platform, programs are written in Java and then imported through the system's switching platform into the main test platform, where software parameters are tested and simulated. App development and simulation test results show that the programs meet the required indicators and functions. The system treats a virtual cloud desktop operating system as its platform and uses the Citrix virtual desktop client to log in to the server, which synchronizes the experimental environment with the real environment through a "virtual desktop cloud + terminal" arrangement and constructs a mobile application development and test environment, so that developers can quickly learn software development and improve their development skills.
Command and Control Behavior Model Based on Improved Hierarchical Task Network
SUN Lin, JIAO Peng and XU Kai
Computer Science. 2017, 44 (Z6): 519-522.  doi:10.11896/j.issn.1002-137X.2017.6A.115
Abstract PDF(345KB) ( 609 )   
The command and control (C2) behavior model is one of the major modeling objects in military analytical simulation systems. An improved hierarchical task network (HTN) is proposed for building the C2 behavior model. It consists of a mission description specification and a general mission manager. The resulting C2 behavior model overcomes the rigidity, poor extensibility, and limited flexibility of current C2 behavior models. An air-sea combat example illustrates the application of the proposed approach.
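The core decomposition idea behind an HTN can be sketched as follows. This is a minimal illustration of task-network expansion, not the paper's mission description specification or mission manager, and the task names are invented:

```python
def htn_plan(task, methods, primitives):
    """Expand a compound task through its method's subtasks until only
    primitive (directly executable) tasks remain."""
    if task in primitives:
        return [task]
    plan = []
    for subtask in methods[task]:
        plan.extend(htn_plan(subtask, methods, primitives))
    return plan

# Invented mission fragment: "strike" decomposes into locating then engaging
methods = {"strike": ["locate_target", "engage"], "engage": ["fire"]}
primitives = {"locate_target", "fire"}
plan = htn_plan("strike", methods, primitives)
```

A full HTN planner would also check method preconditions and track world state; this sketch shows only the hierarchical expansion that gives the approach its flexibility.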
Design and Implementation of Teaching Equipment Management System Based on Two-dimensional Code
GU Xiao-yan and XIA Zhi-qiang
Computer Science. 2017, 44 (Z6): 523-525.  doi:10.11896/j.issn.1002-137X.2017.6A.116
Abstract PDF(197KB) ( 1259 )   
Taking the spread of smart phones and the networked teaching environment as an opportunity, a two-dimensional code information platform and a background management system for teaching equipment were built. The system enables those who manage and use teaching equipment to complete storage management, usage management, and maintenance management by scanning a two-dimensional code with a smart phone, improving the management efficiency of teaching equipment.
Adaptive Backstepping Controller of Quadrotor UAV
WU Xiao-yan, HUANG Jia-qi and BU Xiang-wei
Computer Science. 2017, 44 (Z6): 526-528.  doi:10.11896/j.issn.1002-137X.2017.6A.117
Abstract PDF(420KB) ( 682 )   
An adaptive backstepping controller was designed for a quadrotor UAV with nonlinearities and couplings. To obtain the derivatives of the virtual control terms, a first-order low-pass filter was introduced. A sufficiently smooth projection operator was employed to estimate and compensate the unmatched model uncertainties, both to inhibit parameter drift and to relax the requirement of the traditional projection operator that the upper and lower bounds of the uncertainties be known. Simulation results show that the designed controller achieves effective control and good robustness.
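The first-order low-pass filter mentioned above can be sketched numerically. This is an illustrative Euler discretization of the filtering trick used in backstepping/dynamic-surface designs, not the paper's controller; the time constant and test signal are invented:

```python
def lowpass_derivative(signal, dt, tau):
    """Discretized first-order filter tau*xf' + xf = x. Returns the
    filtered signal and its derivative, avoiding analytic differentiation
    of the virtual control term."""
    filtered = [signal[0]]
    deriv = [0.0]
    for x in signal[1:]:
        d = (x - filtered[-1]) / tau   # xf' = (x - xf) / tau
        filtered.append(filtered[-1] + dt * d)
        deriv.append(d)
    return filtered, deriv

# Invented test signal: a ramp with slope 10 (units per second)
ramp = [0.1 * i for i in range(200)]
filtered, deriv = lowpass_derivative(ramp, dt=0.01, tau=0.05)
```

For a ramp input the filtered derivative settles at the true slope, which is exactly the property that lets the controller use the filter output in place of an analytic derivative.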
Design and Implementation of Arbitrage Trading System Based on Generalization
WANG Li-wen
Computer Science. 2017, 44 (Z6): 529-533.  doi:10.11896/j.issn.1002-137X.2017.6A.118
Abstract PDF(942KB) ( 1184 )   
With stock index futures and other financial derivatives entering the market, increasingly complex trading strategies such as hedging, arbitrage, and statistical arbitrage are appearing in the domestic financial market. This paper therefore presents an arbitrage trading solution that replaces complex manual calculation and operation with programs, using the foreign-market data interface SPTrader and the domestic-market data interface CTP. Through these interfaces the system obtains market data and can arbitrage between any two contracts on global mercantile exchanges. The paper first introduces and analyzes the advantages of a generalized arbitrage trading system, then systematically describes the system design, focusing on the strategy-setting module, the strategy monitoring and management module, and the system's logical structure. Finally, the overall operation of the system is evaluated.
System Capability-oriented Approach for Formalized Software Requirements Analysing and Testing
CHEN Ping, LIANG Qi-ming and SUN Wei
Computer Science. 2017, 44 (Z6): 534-538.  doi:10.11896/j.issn.1002-137X.2017.6A.119
Abstract PDF(211KB) ( 926 )   
In the domestic software industry, system testing is based only on the function items in system requirements specifications and rarely on system capabilities. Consequently, the results of system testing cannot fully demonstrate that the SUT (System Under Test) meets the system requirements. Moreover, system requirements specifications written in natural language contain ambiguity. These problems directly decrease the efficiency of system testing. To deal with them, a system capability-oriented approach for formalized software requirements analysis and testing is proposed. With this approach, software test engineers can obtain clear descriptions of system capability requirements and conduct system testing against those capabilities. It effectively improves the adequacy and accuracy of system testing and, consequently, the quality of the SUT.
Optimized Analysis of Business Process Configuration Based on Petri Net Behavior Closeness
GAO Ya-nan, FANG Xian-wen and WANG Li-li
Computer Science. 2017, 44 (Z6): 539-542.  doi:10.11896/j.issn.1002-137X.2017.6A.120
Abstract PDF(220KB) ( 537 )   
The optimized analysis of business processes is an important part of business process management, especially optimization involving business process configuration. Existing research focuses mainly on optimizing the business processes themselves and shows obvious deficiencies in optimizing business process configuration, which this paper studies. A method for calculating the closeness between a log and a business process Petri net model is proposed, and on that basis an approach for optimizing business process configuration based on behavior closeness is put forward. First, an initial model is established from a given event log and its execution sequences. The closeness between the initial model and the residual log is then used to optimize the initial model, after which configurable transitions are introduced to optimize the model further. Finally, the feasibility of the approach is illustrated with a simple example.
On t-type s Cuts and t Cuts of Occurrence Nets
LIU Ping
Computer Science. 2017, 44 (Z6): 543-545.  doi:10.11896/j.issn.1002-137X.2017.6A.121
Abstract PDF(151KB) ( 491 )   
This paper discusses the t-type s cuts and t cuts of occurrence nets. The concept of transfer sets is introduced and used to transfer one t cut to another. The concept of the t-type s cut is introduced, and it is proved that there is a correspondence between the set of t-type s cuts and the set of t cuts, that every t cut is the accompanying set of some t-type s cut, and that the transfer of a t cut corresponds to the transfer of a t-type s cut with some transfer set.
Karnaugh-based Reversible Logic Circuit Synthesis Algorithm for 3-bits
ZHU Wan-ning and LIU Zhi-hao
Computer Science. 2017, 44 (Z6): 546-550.  doi:10.11896/j.issn.1002-137X.2017.6A.122
Abstract PDF(302KB) ( 858 )   
This paper presents a new reversible logic synthesis algorithm based on the Karnaugh map. The algorithm can synthesize reversible logic circuits with garbage bits very quickly. Most practical reversible logic gates produce garbage bits, and it is difficult to synthesize circuits with garbage using classical algorithms such as the truth table algorithm and the permutation group algorithm, because those algorithms require a global view of the circuit that is hard to obtain. The proposed algorithm needs no such global view: exploiting the features of the Karnaugh map, it synthesizes each output variable separately. Based on the adjacency properties of the Karnaugh map, it divides all three-bit reversible logic circuits into five equivalence classes, computes each equivalence class separately, and synthesizes a reversible logic circuit with garbage bits in constant time.
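The defining property the synthesis relies on, that a reversible circuit is a bijection on its 2^n input patterns, can be checked directly. This is an illustrative sketch using the standard 3-bit Toffoli gate, not the paper's Karnaugh-map algorithm:

```python
from itertools import product

def toffoli(bits):
    """3-bit Toffoli (CCNOT) gate: flips the target bit c when both
    control bits a and b are 1."""
    a, b, c = bits
    return (a, b, c ^ (a & b))

def is_reversible(gate, n_bits=3):
    """A gate/circuit is reversible iff it is a bijection, i.e. it maps
    the 2**n input patterns onto 2**n distinct outputs."""
    outputs = {gate(bits) for bits in product((0, 1), repeat=n_bits)}
    return len(outputs) == 2 ** n_bits
```

A circuit with garbage bits embeds an irreversible function into a larger reversible one; the extra outputs exist only to make the overall mapping a bijection like the one checked here.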
Research and Implementation of Identifying Music through Performances Using Entropy Based Audio-fingerprint
WANG Wei, CHEN Zhi-gao, MENG Xian-kai and LI Wei
Computer Science. 2017, 44 (Z6): 551-556.  doi:10.11896/j.issn.1002-137X.2017.6A.123
Abstract PDF(1077KB) ( 1007 )   
A technique for identifying music using an entropy-based audio fingerprint is introduced; it takes the entropy characteristics of the music as the fingerprint. In music identification, this fingerprint allows flexible string matching algorithms to be used. We adopted the longest common subsequence (LCS), Levenshtein distance, and dynamic time warping (DTW) as matching algorithms for this fingerprint, and used a collection of music pieces as the test set. Each piece has another performance generated from the original; most of these performances were artificially altered, for example by adding noise, accelerating, or cutting, and some are pairs of the same piece played by different orchestras. The results are impressive: every performance in the collection is correctly identified with LCS, Levenshtein distance, or DTW, demonstrating the accuracy, robustness, and good discriminative ability of the fingerprint.
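One of the matching algorithms named above, LCS, can be sketched as follows. The dynamic program is standard; the normalized similarity at the end is an illustrative assumption, not the paper's exact scoring function:

```python
def lcs_length(a, b):
    """Longest common subsequence length via dynamic programming."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def fingerprint_similarity(fp_query, fp_reference):
    """Normalized LCS similarity between two fingerprint symbol strings."""
    return lcs_length(fp_query, fp_reference) / max(len(fp_query), len(fp_reference))
```

Because LCS tolerates insertions and deletions, a fingerprint survives cuts and local distortions in a performance, which is why it suits this entropy-sequence representation.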
FPDA of Fuzzy Concepts Automatic Calculation PDA Designing Based on Normal Distribution
LIU Hao-ge and GUAN Jian-he
Computer Science. 2017, 44 (Z6): 557-559.  doi:10.11896/j.issn.1002-137X.2017.6A.124
Abstract PDF(303KB) ( 527 )   
The controlling program of a computer system has the characteristics of a finite automaton (FA) and can be described by finite automaton theory. The finite automaton is an important cornerstone of many areas of computer science. Beyond deterministic finite automata, however, there are many fuzzy events that should be handled by a fuzzy automaton according to their membership functions. This paper focuses on events that follow a normal distribution. To automate the calculation of the relevant formulas and improve the effectiveness of statistics on normally distributed events, a fuzzy automaton model is presented to perform the calculation automatically, with examples of obtaining the probability of fuzzy events such as "may occur", "very likely to occur", or "rarely occurs" under a normal distribution.
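The kind of calculation the automaton performs can be sketched via Zadeh's probability of a fuzzy event. This is an illustrative numeric integration; the membership function for "very likely to occur" is an invented example, not the paper's definition:

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of the normal distribution N(mu, sigma^2)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def fuzzy_event_probability(membership, mu, sigma, lo, hi, steps=10000):
    """Zadeh's probability of a fuzzy event: the membership function
    integrated against the normal density (midpoint rule over [lo, hi])."""
    dx = (hi - lo) / steps
    total = 0.0
    for k in range(steps):
        x = lo + (k + 0.5) * dx
        density = math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))
        total += membership(x) * density * dx
    return total

# Invented membership for "very likely to occur": ramps from 0 at the
# mean to 1 at two standard deviations above it.
very_likely = lambda x: min(max(x / 2.0, 0.0), 1.0)
p = fuzzy_event_probability(very_likely, mu=0.0, sigma=1.0, lo=-5.0, hi=5.0)
```

For a crisp event (membership 0 or 1) this reduces to the ordinary normal probability computed from `normal_cdf`.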
Design and Optimization on Virtual Desktop Infrastructure Based on KVM
TANG Hong-mei and ZHENG Gang
Computer Science. 2017, 44 (Z6): 560-562.  doi:10.11896/j.issn.1002-137X.2017.6A.125
Abstract PDF(346KB) ( 1158 )   
With the continuous development of cloud computing, virtual desktop infrastructure (VDI) solutions are becoming more and more mature. VDI is built on virtualization technology; it breaks through the limits of time and space, effectively solves problems encountered with traditional personal computers, and is the mainstream architecture for current desktop cloud solutions and deployments. In this paper, making full use of the advantages of VDI and combining them with the popular KVM virtualization technology, a virtual desktop solution architecture was discussed and actually deployed, and the optimization of the platform was designed in detail. Finally, we tested the platform and recorded performance results, which verify the correctness and availability of the system. The results show that VDI brings many conveniences for mobile computing and management, reduces the operational costs of a modern computer room, and provides practical guidance for colleges deploying virtualization platforms.
Design and Implementation of High-availability Based on OpenStack Cloud Platform
LUO Bing, QIAO Ying and FU Xiao
Computer Science. 2017, 44 (Z6): 563-566.  doi:10.11896/j.issn.1002-137X.2017.6A.126
Abstract PDF(367KB) ( 1521 )   
Achieving high availability is one of the most important issues in the study of the OpenStack cloud management platform. To solve the problem that OpenStack services running on a single node create single points of failure (SPoF), we combined existing high-availability solutions and put forward a method based on Pacemaker + Corosync + HAProxy + Ceph to realize high availability for the OpenStack cloud management platform. The solution combines active-active mode, active-passive mode, and cluster technology, and uses hardware and software redundancy together with failover of service instances to achieve high availability. Experiments show that the method is effective when the system suffers the failure of a few nodes or the loss of links.
Design of Local Scheduling Algorithm for Integrated Preemptive Scheduling Policy in Hadoop Cluster Environment
WANG Yue-feng and WANG Xi-bo
Computer Science. 2017, 44 (Z6): 567-570.  doi:10.11896/j.issn.1002-137X.2017.6A.127
Abstract PDF(280KB) ( 563 )   
The local scheduling algorithm improves data locality in a Hadoop cluster environment. The essence of its scheduling strategy is to improve data locality, reduce network transmission, and avoid congestion. However, because Map tasks finish at different times, Reduce tasks may wait, which increases the average completion time of jobs and degrades the system's performance. This paper proposes integrating preemptive scheduling into the locality requirement of the original algorithm: when a Reduce task is waiting, it is suspended and its resources are released to other Map tasks. Based on this strategy, the paper designs a local scheduling algorithm with an integrated preemptive policy. To validate the improvement, the original local scheduling algorithm and the integrated preemptive local scheduling algorithm were compared experimentally. The results show that, on the same data, the average job completion time of the integrated preemptive local scheduling algorithm is significantly reduced.
Validation for Probability Real-time System with Data Constraints
ZHANG Chun-yan and SUN Jun
Computer Science. 2017, 44 (Z6): 571-574.  doi:10.11896/j.issn.1002-137X.2017.6A.128
Abstract PDF(214KB) ( 478 )   
A probabilistic real-time system with data constraints is a computing system with probabilistic time constraints as well as data constraints. At present there are few studies on specification and verification that unify discrete data constraints and sequential time constraints in a probabilistic model. In this paper we propose a specification based on sequential-time probabilistic ZIA, which has both sequential data constraints and discrete data constraints, and give its temporal logic. Although logics such as CTL and PCTL are very powerful, they can only express temporal properties. This paper proposes a new formal language, CTML, to express the metric properties of queries while retaining the ability to express temporal properties, and gives a probabilistic validation algorithm for the ZIA specification.
Research on Intelligent Maps Navigation System Based on Location Service
WANG Pan-zao
Computer Science. 2017, 44 (Z6): 575-576.  doi:10.11896/j.issn.1002-137X.2017.6A.129
Abstract PDF(682KB) ( 1006 )   
We researched and developed a smart map navigation system for mobile devices that supports self-driving tours and other personalized tourist groups. The system is built with Eclipse and the Android SDK in the Java language, using the Baidu Maps API. With KEDAXUNFEI (iFLYTEK) voice support, users can enter text directly by voice, quickly access the base map provided by Baidu, and then perform keyword search, user positioning, perimeter search, precise positioning, longitude/latitude queries for specified locations, route planning, and other functions. Testing on the "Jilin Deng Xiaoping Square" project shows that the system responds quickly, is highly secure, and can meet the needs of personalized tourism users.
Research and Analysis on Throughput of National Geological Drilling Database Service Platform Website
WANG Bin, LIANG Yin-ping, YUE Peng, LI Jie and ZHANG Li-hai
Computer Science. 2017, 44 (Z6): 577-581.  doi:10.11896/j.issn.1002-137X.2017.6A.130
Abstract PDF(597KB) ( 478 )   
Data analysis was performed with the Baidu statistical analysis tool on indicators such as page views and visit counts of the national geological drilling database service platform website from 0:00 on Mar. 1st to 24:00 on Jun. 30th, 2016. The statistical results reveal that visitors come mainly from Beijing and eight other domestic provinces, as well as from the United States and nine other countries. Traffic varies periodically, with more visitors on workdays and fewer on weekends. The ratio of new to returning visitors is 2.41, indicating that new visitors predominate. Visit counts and visit durations are good while visit depth is average, indicating that visitor retention is not very strong. The website's users are mainly professional geological technicians. The analysis also reveals problems such as low visit counts, few unique IPs, and a high bounce rate. Measures such as strengthening publicity at meetings, enriching the content, optimizing the design structure, and perfecting the website's functions should be taken to attract more users and improve page views.
Research on Plane Dead Reckoning Based on Inertial Navigation System
ZHOU Jing, CHEN Miao-hong and WU Hao-jie
Computer Science. 2017, 44 (Z6): 582-586.  doi:10.11896/j.issn.1002-137X.2017.6A.131
Abstract PDF(1094KB) ( 1300 )   
An indoor positioning system based on a MEMS inertial measurement unit is presented in this paper. In this system, the position and heading of a pedestrian on a plane (zero slope) are obtained from sensors attached to the waist and knees. The proposed algorithm performs a reset calculation on each step of a pedestrian wearing a knee gyroscope to eliminate angular displacement error, and performs continuous reckoning on the angular rate measured by the waist gyroscope to obtain the direction of movement. The results show that, on a plane, the average trajectory error over a total distance of 62.32 meters is 0.1935 meters, with a standard absolute deviation of 0.0512 meters.
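The reckoning step itself can be sketched as follows. This is a minimal planar dead-reckoning position update; the step data are invented, and it omits the paper's gyroscope-based reset and error-correction logic:

```python
import math

def dead_reckon(start, steps):
    """Planar dead reckoning: integrate each detected step length along
    the heading measured at that step."""
    x, y = start
    path = [(x, y)]
    for step_len, heading in steps:   # heading in radians
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
        path.append((x, y))
    return path

# Invented gait data: two 0.7 m steps east, then two steps north
track = dead_reckon((0.0, 0.0), [(0.7, 0.0), (0.7, 0.0),
                                 (0.7, math.pi / 2), (0.7, math.pi / 2)])
```

Because heading errors accumulate multiplicatively along the path, the per-step reset described in the abstract is what keeps the trajectory error bounded over tens of meters.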
Architecture and Solution for Large Web Sites
ZHOU Qiang, XIE Jing and ZHAO Hua-ming
Computer Science. 2017, 44 (Z6): 587-590.  doi:10.11896/j.issn.1002-137X.2017.6A.132
Abstract PDF(118KB) ( 930 )   
With the development of Internet business, websites are growing bigger and bigger, and various technologies have been proposed to improve their performance, usability, scalability, extensibility, and security. Based on an analysis of how these factors take effect, a website architecture solution was proposed, providing practical experience for exploring the management and operation of integrated library systems.
Application of DBSCAN Algorithm in Electronic Mail Network Community Detection
YANG Fang-xun
Computer Science. 2017, 44 (Z6): 591-593.  doi:10.11896/j.issn.1002-137X.2017.6A.133
Abstract PDF(234KB) ( 484 )   
To detect communities in the complex network formed by email, this paper introduces the DBSCAN algorithm into email network community detection. Based on an analysis of the algorithm, the system architecture and implementation process of email network community detection are studied. Finally, the feasibility of DBSCAN for email network community detection is verified on the Enron email corpus.
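A minimal DBSCAN can be sketched as follows. This is an illustrative implementation over a toy one-dimensional distance; the paper's actual distance measure on the email graph is not specified here, and a real run would use distances between mailbox feature vectors or graph nodes:

```python
def dbscan(points, distance, eps, min_pts):
    """Minimal DBSCAN: assigns each point a cluster label (-1 = noise)."""
    labels = [None] * len(points)

    def neighbors(i):
        return [j for j in range(len(points)) if distance(points[i], points[j]) <= eps]

    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1  # noise (may later become a border point)
            continue
        cluster += 1
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise reachable from a core point: border
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:  # j is a core point: expand the cluster
                seeds.extend(jn)
    return labels

# Toy 1-D "distance": two tight groups and one outlier
points = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2, 50.0]
labels = dbscan(points, lambda a, b: abs(a - b), eps=0.5, min_pts=2)
```

The noise label (-1) is what makes DBSCAN attractive for community detection: mailboxes that belong to no dense communication group are left unclustered rather than forced into a community.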
Application for Data Sharing of Organization and Personnel Based on Personnel Information System
YU Jun-yang, CAO Shi-hua, FU Xian-shu, ZHOU Feng, SUN Jian-ming and CHEN Yu-lin
Computer Science. 2017, 44 (Z6): 594-597.  doi:10.11896/j.issn.1002-137X.2017.6A.134
Abstract PDF(248KB) ( 813 )   
Information data sharing has long been a research hotspot. This paper describes the main business process of sharing organization and personnel data, designs the structure tables of the shared data, and describes in detail the hierarchical-tree method for organization and personnel data. Through analysis of the scope, method, and frequency of organization and personnel data sharing, a basic shared-data table and a sharing application suitable for the organization and personnel of Zhejiang Inspection and Quarantine were established, effectively improving the efficiency of information construction and management.