Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Editors
Current Issue
Volume 44 Issue Z6, 01 December 2017
  
Survey of Multimodal Delay Tele-interaction
WANG Hai-peng, HUANG Tian-biao, REN Chong-shuai and YAO Wu-yi
Computer Science. 2017, 44 (Z6): 1-6.  doi:10.11896/j.issn.1002-137X.2017.6A.001
Abstract PDF(198KB) ( 1100 )   
References | Related Articles | Metrics
Multimodal tele-interaction aims to exploit a variety of interaction modalities, together with the collaborative and complementary information among them, to communicate and understand the user's interaction intent, improve the efficiency of interaction, enhance the naturalness of interaction, and enable users to complete remote tasks as they "expect". Recently, with applications of multimodal tele-interaction to space exploration, deep-sea exploitation and tele-surgery, delay has been introduced into the tele-interaction process, caused mainly by communication latency and limited bandwidth. Significant delay leads to asynchronous interaction and missing interaction modalities, which fundamentally affects the user's behavioral, psychological and cognitive characteristics, breaks the continuity, real-time quality and naturalness of interaction, degrades the user experience, and makes validity difficult to guarantee. The article defined the concept of multimodal tele-interaction, presented its key applications, and discussed the key technologies, including the delay issue, asynchronous interaction and missing interaction modalities. Finally, we presented future research challenges.
Semi-supervised and Ensemble Learning:A Review
CAI Yi, ZHU Xiu-fang, SUN Zhang-li and CHEN A-jiao
Computer Science. 2017, 44 (Z6): 7-13.  doi:10.11896/j.issn.1002-137X.2017.6A.002
Abstract PDF(432KB) ( 2946 )   
References | Related Articles | Metrics
Semi-supervised learning (SSL) and ensemble learning are two important paradigms in machine learning research. SSL attempts to achieve strong generalization by exploiting both labeled and unlabeled instances, while ensemble learning aims to improve the performance of weak learners by combining multiple classifiers. SSL ensemble learning is a novel paradigm that improves the generalization performance of classifiers by combining SSL and ensemble learning. Firstly, the development of SSL ensemble learning was analyzed, showing that it derives from disagreement-based SSL. Then, SSL ensemble learning methods were classified into two categories, SSL-based ensemble learning and ensemble-based SSL, and the main methods of each were described in detail. Finally, the current research status of SSL ensemble learning was summarized and some issues worthy of further study were given.
Survey on Cross-language Named Entity Translation Pairs Extraction
WANG Zhi-juan and LI Fu-xian
Computer Science. 2017, 44 (Z6): 14-18.  doi:10.11896/j.issn.1002-137X.2017.6A.003
Abstract PDF(227KB) ( 1516 )   
References | Related Articles | Metrics
Cross-language named entity translation pairs are very important for machine translation, cross-language information retrieval and other tasks. We surveyed cross-language named entity translation pair extraction from three aspects. Firstly, transliteration is vital for extracting such pairs. Rules, machine learning and deep learning have been used for named entity transliteration in many languages; transliteration models based on deep learning perform excellently and will be the key method in future studies. Secondly, named entity alignment based on parallel/comparable corpora is a useful way to obtain translation pairs, but constructing and annotating cross-language corpora are bottlenecks for this line of research. Thirdly, translation pairs can be extracted by Web mining; cross-language named entity extraction based on cross-language information retrieval and knowledge bases such as Wikipedia will be the trend in the future.
Survey on Visual Tracking Algorithms Based on Deep Learning Technologies
JIA Jing-ping and QIN Yi-hua
Computer Science. 2017, 44 (Z6): 19-23.  doi:10.11896/j.issn.1002-137X.2017.6A.004
Abstract PDF(247KB) ( 1335 )   
References | Related Articles | Metrics
Visual tracking is a fundamental subject in computer vision. Classical tracking methods handle complex backgrounds poorly, for example under illumination variation, large changes in target size, drastic posture changes or occlusion. Meanwhile, the introduction of deep learning technologies has opened a new way for visual tracking research. At present there is relatively little research literature on deep-learning-based visual tracking, both in China and abroad. In order to attract more researchers in visual tracking to explore deep learning, and to promote research on tracking algorithms, this overview briefly reviewed the research status of visual tracking and deep learning. We then focused on the literature on deep-learning-based tracking algorithms and discussed their advantages and disadvantages. Finally, we proposed directions for further research and prospects for visual tracking algorithms based on deep learning.
Survey of Virtualization Access Control Research Based on Xen
KE Wen-jun, DONG Bi-dan and GAO Yang
Computer Science. 2017, 44 (Z6): 24-28.  doi:10.11896/j.issn.1002-137X.2017.6A.005
Abstract PDF(234KB) ( 1176 )   
References | Related Articles | Metrics
Virtualization is the core technology of cloud computing. With its wide application and rapid development, security threats have become increasingly prominent, seriously hindering the development of virtualization, and are an important issue to be resolved. Academia has put forward various solutions, among which access control technology is viewed as an important barrier for virtualization security and has attracted wide attention and research. This paper started with a review and comparison of the development of access control technology, followed by an analysis of security issues in the Xen virtualization environment and the corresponding access control techniques. Finally, current domestic and foreign research on virtualization security access control was summarized.
Survey for Methods of Parameter Estimation in Topic Models
DU Hui, CHEN Yun-fang and ZHANG Wei
Computer Science. 2017, 44 (Z6): 29-32.  doi:10.11896/j.issn.1002-137X.2017.6A.006
Abstract PDF(304KB) ( 1821 )   
References | Related Articles | Metrics
Topic models extract a low-dimensional representation of topics from high-dimensional sparse word data sets by fast machine learning algorithms, achieving word-document clustering. Studying model parameter estimation is an important task in this field. The paper detailed the probabilistic latent semantic analysis model, the latent Dirichlet allocation model and the basic parameter estimation methods in topic models. In addition, the paper gave an experimental analysis of perplexity in topic models.
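The perplexity measure evaluated in the abstract can be made concrete with a toy unigram model (a hypothetical illustration in plain Python, not the paper's experiment or models):

```python
import math
from collections import Counter

def perplexity(tokens, probs):
    """Perplexity = exp(- average log-likelihood per token);
    lower values mean the model predicts the text better."""
    log_likelihood = sum(math.log(probs[t]) for t in tokens)
    return math.exp(-log_likelihood / len(tokens))

# Maximum-likelihood unigram distribution from a toy corpus.
train = "the topic model learns the topic of the document".split()
counts = Counter(train)
probs = {w: c / len(train) for w, c in counts.items()}

held_out = "the topic model".split()
print(round(perplexity(held_out, probs), 2))  # 4.95
```

The same formula applies to LDA, except that the per-token probability is marginalized over the inferred topic mixture rather than read from a single unigram table.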
Event Sensing and Multimodal Event Vein Generation Leveraging Social Media
XU Cheng-hao, GUO Bin, OUYANG Yi, ZHAI Shu-ying and YU Zhi-wen
Computer Science. 2017, 44 (Z6): 33-36.  doi:10.11896/j.issn.1002-137X.2017.6A.007
Abstract PDF(125KB) ( 2613 )   
References | Related Articles | Metrics
With the development of information technology and the popularity of social media, ordinary users have turned from information receivers into information producers: everyone can share what happens around them and repost what interests them, which makes the information stored in social media grow rapidly. This large amount of data contains abundant and valuable records of social events, and how to extract valuable information from it has become one of the most important problems in the information field. This paper introduced a new research field, crowd-powered event sensing and multimodal summarization, to address this problem. It aims at sensing and analyzing events from the multimodal data in social media so as to predict and summarize events effectively. This paper described the model of events, the history of event sensing, the key technologies, challenges and wide application fields, summarized the development of event sensing and summarization based on social media analysis, and looked into the future.
Present Situation and Prospect of Data-driven Based Fault Diagnosis Technique
ZHANG Ni, CHE Li-zhi and WU Xiao-jin
Computer Science. 2017, 44 (Z6): 37-42.  doi:10.11896/j.issn.1002-137X.2017.6A.008
Abstract PDF(181KB) ( 2761 )   
References | Related Articles | Metrics
Data-driven fault diagnosis methods were summarized and categorized, including multivariate statistical methods, machine learning, manifold learning and so on. Their principles, research progress and applications were analyzed and described. The problems still to be solved and recent research hotspots were addressed finally.
Review of Virtual Reality Technology Based Application Research in English Teaching for Special Purposes
ZHANG Ning, LIU Ying-chun, SHEN Zhi-peng and GUO Chen
Computer Science. 2017, 44 (Z6): 43-47.  doi:10.11896/j.issn.1002-137X.2017.6A.009
Abstract   
References | Related Articles | Metrics
The relationship between virtual reality technology and English for special purposes was introduced in this paper. By illustrating recent applications of virtual reality to English for special purposes, theoretical explorations in this field and related application models, the advantages and disadvantages of relevant domestic and overseas studies were compared. The significance and application value of virtual reality research in English for special purposes were analyzed, and how future virtual reality technology can better serve computer-assisted English for special purposes was discussed. Last but not least, future research directions were also discussed at the end of this paper.
Analysis of Influence of Domain Knowledge on Development of Big Data
LENG Li-hua, LIAO Yi-jie and LIAO Hong-zhi
Computer Science. 2017, 44 (Z6): 48-49.  doi:10.11896/j.issn.1002-137X.2017.6A.010
Abstract PDF(131KB) ( 962 )   
References | Related Articles | Metrics
Big data is becoming a hot topic in today's society. Beyond the continuous exploration in the IT field, it also continues to affect economic and social progress. In the face of the hype about big data in all walks of life, we should think seriously about the problems faced in big data research and application. Domain knowledge is very important for a new technology: without the domain knowledge of each industry, the development and application of big data in different fields will face many obstacles. This paper analyzed several aspects of big data research and application, and pointed out that big data processing involves data collection, data management, data analysis, data modeling and data application, with domain knowledge at the core of the whole processing chain. As the key to big data applications is data analysis, and data analysis relies on domain knowledge, breakthroughs in big data processing must come through domain knowledge.
Deep Learning for Early Diagnosis of Alzheimer’s Disease Based on Intensive AlexNet
LV Hong-meng, ZHAO Di and CHI Xue-bin
Computer Science. 2017, 44 (Z6): 50-60.  doi:10.11896/j.issn.1002-137X.2017.6A.011
Abstract PDF(1422KB) ( 2378 )   
References | Related Articles | Metrics
More and more people suffer from Alzheimer's disease (AD) in China. AD is characterized by loss of memory and language ability and is associated with aging. Currently, the number of Chinese patients ranks first in the world, so early diagnosis of AD is particularly urgent. Studies have shown that mild cognitive impairment (MCI) has a high probability of converting to AD; MCI may be a transition between healthy control (HC) and AD. With the advent of the big data era, machine learning algorithms are more and more popular in disease diagnosis, and deep learning helps us classify AD, MCI and HC. The magnetic resonance imaging (MRI) data set is from the Alzheimer's disease neuroimaging initiative (ADNI). Pre-processing of the raw brain MRI was directed by Beijing Tiantan Hospital, affiliated to Capital Medical University. Images after dimensionality reduction are learned automatically by a deep convolutional neural network (CNN). Current network architectures are not designed for medical images, so the experiments focus on improving existing networks to achieve good diagnostic results. AlexNet, an excellent architecture for image classification, was chosen for improvement. In this paper, we proposed 4 algorithms to improve the original model according to the characteristics of AD. Data ran in parallel on 8 NVIDIA Tesla K80 GPUs of a Sugon W780-G20. We then obtained 4 classifiers: AD vs. HC, AD vs. MCI, MCI vs. HC and AD vs. MCI vs. HC. Models were trained in no more than 30 minutes on more than 70000 images. Finally, the algorithms were evaluated by drawing ROC curves and computing sensitivity and specificity, and the better results were shown.
Spam Filter Algorithm with Improved Porter Stemmer and Kernels Methods
SUN Han-bo and FENG Guo-can
Computer Science. 2017, 44 (Z6): 61-67.  doi:10.11896/j.issn.1002-137X.2017.6A.012
Abstract PDF(629KB) ( 1255 )   
References | Related Articles | Metrics
At present, statistical learning methods have been widely used in spam classification, among which Bayesian classifiers and SVM are favored. To meet the challenge of spam, a number of novel ideas and improved algorithms have been proposed. We proposed an improved Porter Stemmer algorithm to extract text features thoroughly and tailored it for spam classifiers. Compared with the original algorithm, linear-kernel SVM, Gaussian-kernel SVM, polynomial-kernel SVM and Naïve Bayes classifiers obtain 63.7%, 63.1%, 61.3% and 11.4% decreases in error rate respectively with the improved Porter Stemmer. Besides, experimental results show that SVM has significant advantages over Naïve Bayes when applied to spam classification, and SVMs also obtain greater improvements from the improved Porter Stemmer. We also conducted a brief analysis from the perspective of linguistics and illustrated the potential value of personalized spam classifiers.
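As a rough illustration of the suffix-stripping idea behind Porter-style stemming, which conflates word variants before spam feature extraction, here is a heavily simplified sketch (a few toy rules only, not the authors' improved Porter Stemmer):

```python
def toy_stem(word):
    """Apply a few Porter-like suffix rules (heavily simplified):
    strip the first matching suffix if a stem of length >= 2 remains."""
    for suffix, repl in [("sses", "ss"), ("ies", "i"),
                         ("ing", ""), ("ed", ""), ("s", "")]:
        if word.endswith(suffix) and len(word) - len(suffix) >= 2:
            return word[: -len(suffix)] + repl
    return word

for text in ["winning prizes claimed", "meetings rescheduled"]:
    print([toy_stem(w) for w in text.split()])
# ['winn', 'prize', 'claim']
# ['meeting', 'reschedul']
```

The real Porter algorithm adds measure conditions on the stem and several ordered rule phases, which is where the paper's improvements apply.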
Research on Lower Extremity Exoskeleton Robot Servo Control Algorithm
ZHAO Han-bin, ZHAO Zi-yi, ZHAO Jiang-hai and WANG Yu
Computer Science. 2017, 44 (Z6): 68-69.  doi:10.11896/j.issn.1002-137X.2017.6A.013
Abstract PDF(330KB) ( 1656 )   
References | Related Articles | Metrics
The servo control algorithm plays a decisive role in the assistive effect of a lower extremity exoskeleton robot. Based on the zero-power control algorithm and the movement characteristics of the lower limb exoskeleton robot, a terminal servo control algorithm was proposed and its procedure was given. Finally, the effectiveness of the proposed algorithm was verified by Matlab simulation.
Attribute Reduction Based on Variable Precision Rough Sets and Concentration Boolean Matrix
LI Yan, GUO Na-na and ZHAO Hao
Computer Science. 2017, 44 (Z6): 70-74.  doi:10.11896/j.issn.1002-137X.2017.6A.014
Abstract PDF(167KB) ( 797 )   
References | Related Articles | Metrics
Attribute reduction is one of the most important research topics in rough set theory. Traditional attribute reduction based on the discernibility matrix can only handle consistent decision tables, so the concept of an improved discernibility matrix was proposed to deal effectively with both consistent and inconsistent decision tables. Further, the condensed Boolean matrix was defined to represent the discernibility matrix, saving storage space and improving the efficiency of matrix generation. Building on this previous work, the idea of variable precision was used to select some inconsistent objects when constructing the discernibility matrix, so that more information can be considered in generating attribute reductions. The experimental results show that the proposed method has advantages in both running speed and classification accuracy.
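To make the discernibility matrix concrete, here is a toy sketch on a small hypothetical decision table: each cell records the condition attributes distinguishing two objects with different decisions, and an attribute reduct is a minimal set hitting every non-empty cell. (Illustrative only; the paper's condensed Boolean matrix and variable-precision object selection are not reproduced.)

```python
# Hypothetical decision table: rows are objects (a1, a2, decision).
table = [
    ("low",  "red",  "no"),
    ("low",  "blue", "yes"),
    ("high", "red",  "yes"),
]
attrs = ["a1", "a2"]

# Discernibility matrix: only pairs with different decision values get a cell.
matrix = {}
for i in range(len(table)):
    for j in range(i + 1, len(table)):
        if table[i][-1] != table[j][-1]:
            diff = {attrs[k] for k in range(len(attrs))
                    if table[i][k] != table[j][k]}
            matrix[(i, j)] = diff

print(matrix)  # {(0, 1): {'a2'}, (0, 2): {'a1'}}
```

Here every cell is a singleton, so the only reduct is {a1, a2}: both attributes are indispensable.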
Assignment Reduction of Intuitionistic Fuzzy Ordered Decision Information System
SANG Bin-bin and XU Wei-hua
Computer Science. 2017, 44 (Z6): 75-79.  doi:10.11896/j.issn.1002-137X.2017.6A.015
Abstract PDF(186KB) ( 859 )   
References | Related Articles | Metrics
Based on intuitionistic fuzzy sets, a new order relation was established by weighting the intuitionistic fuzzy numbers. Intuitionistic fuzzy ordered decision information systems were then established with the traditional order relation and the new order relation respectively. On the basis of the definitions of the assignment function and assignment-coordination sets, a judgement theorem for assignment reduction and the discernibility matrix were given for this system. Furthermore, a specific method of assignment reduction in an intuitionistic fuzzy ordered decision information system was established. Finally, the effectiveness of the method was verified by an example.
N-ary Chinese Open Entity-relation Extraction
LI Ying, HAO Xiao-yan and WANG Yong
Computer Science. 2017, 44 (Z6): 80-83.  doi:10.11896/j.issn.1002-137X.2017.6A.016
Abstract PDF(238KB) ( 1761 )   
References | Related Articles | Metrics
Traditionally, information extraction (IE) has focused on satisfying precise, narrow, pre-specified requests from small homogeneous corpora; shifting to a new domain requires the user to name the target relations and to manually create new extraction rules or hand-tag new training examples. Open information extraction (OIE) overcomes these limitations of traditional IE techniques, which train an individual extractor for every single relation type. Existing studies have mostly addressed English OIE; few have been reported on OIE for Chinese. This paper presented an N-ary Chinese OIE system (N-COIE). N-COIE preprocesses sentences using natural language processing tools, extracts entity-relation groups from the preprocessed sentences, and finally filters the groups using a trained logistic regression classifier. Empirical results show the effectiveness of the proposed system.
Cross-media Semantic Similarity Measurement Using Bi-directional Learning Ranking
LIU Shuang, BAI Liang, YU Tian-yuan and JIA Yu-hua
Computer Science. 2017, 44 (Z6): 84-87.  doi:10.11896/j.issn.1002-137X.2017.6A.017
Abstract PDF(311KB) ( 945 )   
References | Related Articles | Metrics
With the rapid development of Internet technology, network information has extended from plain text to images, voice, video and other multimedia forms. In multimedia information retrieval, traditional methods often represent all media modalities in the same feature space. Existing methods use either one-to-one paired data or uni-directional ranking examples. In this paper, we considered learning bi-directional ranking examples for cross-media retrieval. Experimental results on the Wikipedia dataset demonstrate the better performance of the proposed method.
Manifold Learning Algorithm Based on Compact Set Sub-coverage
ZHANG Shao-qun
Computer Science. 2017, 44 (Z6): 88-91.  doi:10.11896/j.issn.1002-137X.2017.6A.018
Abstract PDF(832KB) ( 984 )   
References | Related Articles | Metrics
Since 2000, a series of nonlinear dimensionality reduction methods have emerged, and Isomap is one of the representatives in manifold learning. The algorithm can reflect the global structure of the data set and is simple and efficient, but it requires the low-dimensional manifold to be a convex set and its computational complexity is large. L-Isomap successfully reduces the computational complexity, but most of the landmarks are selected at random, which makes the algorithm unstable. In this paper, according to the classical theorems that in a finite-dimensional space a bounded closed set is equivalent to a compact set and that every open cover of a compact set has a finite sub-cover, we analyzed the topology of the region of the data set and selected a series of landmarks. This method has low computational complexity and is more stable than L-Isomap. In addition, it weakens the condition that the data set be convex to the condition that it be compact (bounded and closed), which avoids enlarging the "hollow" error on incomplete manifolds.
Attention of Bilinear Function Based Bi-LSTM Model for Machine Reading Comprehension
LIU Fei-long, HAO Wen-ning, CHEN Gang, JIN Da-wei and SONG Jia-xing
Computer Science. 2017, 44 (Z6): 92-96.  doi:10.11896/j.issn.1002-137X.2017.6A.019
Abstract PDF(341KB) ( 1466 )   
References | Related Articles | Metrics
With the wide use of deep learning in machine reading comprehension in the past few years, the field has developed rapidly. In order to improve the semantic comprehension and inference abilities of machine reading comprehension, a Bi-LSTM model with bilinear-function attention was proposed, which performs well in extracting the semantics of questions, candidates and articles and in producing correct answers. We tested the model on CET-4 and CET-6 listening text materials. The results show that the accuracy of word-level input is about 2% higher than that of sentence-level input, and the accuracy can increase by about 8% after adding an inference structure with multi-layer attention.
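The "bilinear function" attention named in the title is commonly written as follows (standard form widely used in reading comprehension models; the paper's exact parameterization may differ). For a question representation $\mathbf{q}$ and passage hidden states $\mathbf{p}_i$ produced by the Bi-LSTM:

```latex
\alpha_i = \operatorname{softmax}_i\left(\mathbf{q}^{\top} \mathbf{W}\, \mathbf{p}_i\right),
\qquad
\mathbf{c} = \sum_i \alpha_i\, \mathbf{p}_i
```

where $\mathbf{W}$ is a learned matrix and $\mathbf{c}$ is the attention-weighted context vector used to score candidate answers.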
Equivalent Representation of Compressed Sensing Optimization Problem and Its Penalty Function Method
MENG Zhi-qing, XU Lei-yan, JIANG Min and SHEN Rui
Computer Science. 2017, 44 (Z6): 97-98.  doi:10.11896/j.issn.1002-137X.2017.6A.020
Abstract PDF(133KB) ( 798 )   
References | Related Articles | Metrics
Firstly, the definition of an equivalent representation of the compressed sensing optimization problem was given, and it was proved that an optimal solution of the equivalent representation is an optimal solution of the compressed sensing problem. Then an objective penalty function with at least second-order smoothness was defined and an iterative algorithm for it was given, whose convergence was proved. By minimizing the objective penalty function, an approximate optimal solution of the compressed sensing optimization problem can be obtained. This method provides a new tool for studying and solving practical compressed sensing problems.
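In standard notation (a sketch of the usual setting, not the paper's specific equivalent representation), compressed sensing seeks the sparsest $x$ consistent with the measurements $b = Ax$:

```latex
\min_{x \in \mathbb{R}^n} \|x\|_0 \quad \text{s.t.} \quad Ax = b
```

A penalty method replaces the constraint with a weighted residual term, minimizing for increasing $\rho > 0$:

```latex
F_\rho(x) = s(x) + \rho\, \|Ax - b\|_2^2
```

where $s(x)$ is a smooth surrogate for $\|x\|_0$; the paper constructs such an objective penalty function with at least second-order smoothness.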
Asynchronous Collaborative Chicken Swarm Optimization with Mutation Based on Cognitive Diversity
XIAO Liang and LIU Si-tong
Computer Science. 2017, 44 (Z6): 99-104.  doi:10.11896/j.issn.1002-137X.2017.6A.021
Abstract PDF(530KB) ( 883 )   
References | Related Articles | Metrics
The standard chicken swarm optimization is improved in three aspects: the chick-update formula, the optimization method, and mutation based on cognitive diversity. A self-learning factor is added to the chick-update formula: chicks are assumed to learn from their own roosters while also exploring unknown space. An asynchronous collaborative optimization strategy with inverted order is adopted to improve the capability of solving high-dimensional problems. Self-cognitive diversity is fully exploited so that the pbests mutate with a certain probability, leading the swarm to escape from local optima and converge to the global optimum. Benchmark function tests indicate that the improved CSO (ICSO) outperforms other optimization algorithms. Model seismic data inversion also shows strong global search ability, high precision and strong noise resistance.
Optimized Research for Task-driven Grouping Based on Hybrid Genetic Algorithm
LI Hao-jun, DU Zhao-hong and QIU Fei-yue
Computer Science. 2017, 44 (Z6): 105-108.  doi:10.11896/j.issn.1002-137X.2017.6A.022
Abstract PDF(215KB) ( 993 )   
References | Related Articles | Metrics
Applying intelligent algorithms in the education field to realize automatic grouping is of great significance. For dividing groups according to an optimal grouping scheme in task-driven teaching under a network learning environment, the characteristic differences between learners and the degree of task difficulty were considered, a mathematical model of the task-driven grouping optimization problem was built, and a task-driven grouping optimization strategy based on a hybrid genetic algorithm was proposed. We conducted a simulation experiment with the hybrid genetic algorithm on the MATLAB 7.0 platform. Experimental results show that task-driven grouping optimization based on the hybrid genetic algorithm is feasible and effective.
Fuel Flow Missing-value Imputation Method Based on Standardized Euclidean Distance
CHEN Jing-jie and CHE Jie
Computer Science. 2017, 44 (Z6): 109-111.  doi:10.11896/j.issn.1002-137X.2017.6A.023
Abstract PDF(337KB) ( 928 )   
References | Related Articles | Metrics
To reduce the negative impact of missing data on the accuracy of statistical inference about aircraft fuel consumption, an estimation method based on the standardized Euclidean distance was proposed to solve fuel flow data missing problems. The nearest neighbors are chosen by the standardized Euclidean distance between QAR data samples, and entropy is then used to obtain the weights of the nearest neighbors. The missing value is estimated as the weighted average fuel flow of the nearest neighbors. Experiments show that this method handles missing fuel consumption data effectively, and outperforms imputation methods based on the ordinary Euclidean distance, the Mahalanobis distance or the reduced relational grade.
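The nearest-neighbor step can be sketched as follows, with the standardized Euclidean distance scaling each feature by its standard deviation. The records, feature standard deviations and query are hypothetical, and the entropy-based neighbor weighting from the abstract is replaced by a plain average for brevity:

```python
import math

def std_euclidean(x, y, stds):
    """Euclidean distance with each feature scaled by its std dev."""
    return math.sqrt(sum(((a - b) / s) ** 2 for a, b, s in zip(x, y, stds)))

# Hypothetical QAR-like records: (altitude, speed) -> fuel flow.
records = [((1.0, 2.0), 10.0), ((1.1, 2.1), 11.0), ((5.0, 9.0), 30.0)]
stds = (2.0, 3.5)            # per-feature standard deviations
query = (1.05, 2.05)         # record whose fuel flow is missing

k = 2
nearest = sorted(records, key=lambda r: std_euclidean(query, r[0], stds))[:k]
estimate = sum(flow for _, flow in nearest) / k
print(estimate)  # 10.5, the average fuel flow of the 2 nearest records
```

Standardizing by the per-feature spread keeps large-magnitude features (e.g. altitude in feet) from dominating the neighbor selection.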
Fake Chapters Recognition in Shiji Based on Text Classification Methods
ZHAO Jian-ming, LI Chun-hui, YAO Nian-min and YAO Nian-jun
Computer Science. 2017, 44 (Z6): 112-114.  doi:10.11896/j.issn.1002-137X.2017.6A.024
Abstract PDF(219KB) ( 1286 )   
References | Related Articles | Metrics
Text classification methods based on machine learning are used to study how to identify the fake chapters in Shiji. Shiji is the first general history book of our country, and identifying its fake chapters has always been one of the main problems in its study. Traditional methods are subjective and cannot apply quantitative analysis, and many famous results even contradict each other. In this paper, a method of identifying fake chapters was presented which enables quantitative research on this subject; it can also be used in many other historical studies.
Algorithm of SLAM Based on Robust EKF
LIU Pei-feng and WANG Jian
Computer Science. 2017, 44 (Z6): 115-118.  doi:10.11896/j.issn.1002-137X.2017.6A.025
Abstract PDF(387KB) ( 1147 )   
References | Related Articles | Metrics
The key problem of autonomous robot operation is self-localization, and the Kalman filter can be used to estimate the robot's location. This paper first introduced the model and key technologies of SLAM, then the theory of the extended Kalman filter (EKF). By analyzing the effect of errors on the results of the standard EKF model, a robust EKF model was presented, which builds an equivalent Kalman gain matrix by introducing redundancy and prediction residuals. An iterative scheme was suggested for solving the robust EKF-SLAM solution. Finally, both the standard EKF-SLAM model and the robust EKF-SLAM model were implemented in programs, and the autonomous robot's moving trajectory was simulated. Simulation results show that the suggested algorithm gives correct localization results.
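A scalar predict/update cycle shows the building block that EKF-SLAM linearizes and extends to full state vectors and landmark maps (a minimal sketch with made-up noise values, not the paper's robust EKF):

```python
def kalman_step(x, p, u, z, q, r):
    """One predict/update cycle for a 1-D state.
    x, p: state estimate and its variance; u: control (motion);
    z: measurement; q, r: process and measurement noise variances."""
    # Predict: apply the motion model and inflate uncertainty.
    x_pred = x + u
    p_pred = p + q
    # Update: blend prediction and measurement via the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)   # (z - x_pred) is the residual
    p_new = (1 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0
x, p = kalman_step(x, p, u=1.0, z=1.2, q=0.1, r=0.4)
print(round(x, 3), round(p, 3))  # 1.147 0.293
```

The robust variant described in the abstract replaces the standard gain with an equivalent gain matrix that down-weights outlying prediction residuals.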
Research on Tax Forecasting Model Based on PSO and Least Squares Support Vector Machine
ZHANG Shu-juan, DENG Xiu-qin and LIU Bo
Computer Science. 2017, 44 (Z6): 119-122.  doi:10.11896/j.issn.1002-137X.2017.6A.026
Abstract PDF(223KB) ( 1037 )   
References | Related Articles | Metrics
Aiming at the nonlinearity, instability and multiple complex economic factors involved in tax revenue forecasting, this paper used least squares support vector regression to predict the tax revenue of Conghua, Guangdong, and established the corresponding mathematical model. As the model parameters directly affect the quality of the support vector machine, the particle swarm optimization (PSO) algorithm was incorporated for parameter optimization to ensure the accuracy and stability of the forecasting model. Simulation results show that, compared with each reference model, the accuracy of the PSO-optimized least squares support vector regression is significantly improved, illustrating the validity and practicability of the model.
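A minimal PSO loop of the kind used here for parameter search can be sketched as follows; for clarity it minimizes a simple quadratic stand-in objective rather than LS-SVM cross-validation error on the authors' tax data:

```python
import random

random.seed(0)

def objective(x):
    # Stand-in for the real tuning objective; minimum at (3, -1).
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

n, dim, iters = 10, 2, 100
w, c1, c2 = 0.7, 1.5, 1.5            # inertia, cognitive, social weights
pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
vel = [[0.0] * dim for _ in range(n)]
pbest = [p[:] for p in pos]           # each particle's best position
gbest = min(pbest, key=objective)     # swarm's best position

for _ in range(iters):
    for i in range(n):
        for d in range(dim):
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (pbest[i][d] - pos[i][d])
                         + c2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if objective(pos[i]) < objective(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=objective)

print([round(x, 2) for x in gbest])  # should approach [3.0, -1.0]
```

For LS-SVR tuning, each particle would encode the regularization and kernel parameters and the objective would be a validation error.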
Improved Firefly Algorithm Based on Weighted Dimension
ZANG Rui and LI Jing
Computer Science. 2017, 44 (Z6): 123-125.  doi:10.11896/j.issn.1002-137X.2017.6A.027
Abstract PDF(258KB) ( 1027 )   
References | Related Articles | Metrics
The firefly algorithm is a bionic optimization algorithm based on biological swarm intelligence, with the advantages of a simple concept, few parameters to adjust and easy implementation. However, it easily gets trapped in local optima, especially for high-dimensional optimization functions. In literature [1], an improved algorithm based on opposition and dimension was proposed, improving population initialization and algorithm iteration. In this paper, we proposed a new algorithm based on a dimension-weighted method, which takes into account both the current optimal firefly's information and partial firefly information. Comparison of experimental results shows the superiority of the improved algorithm.
Early Warning Model for Water Eutrophication Based on BP Artificial Neural Network and Genetic Algorithm
XU Yun-juan
Computer Science. 2017, 44 (Z6): 126-128.  doi:10.11896/j.issn.1002-137X.2017.6A.028
Abstract PDF(144KB) ( 778 )   
References | Related Articles | Metrics
With economic development, environmental protection work is facing unprecedented pressure. In order to strengthen aquatic environment control and to deal with the impact of sudden environmental pollution accidents on social and economic development, this paper used BP neural network theory to fit the functions relating aquaculture feed to changes in nutrition indicators such as total phosphorus, total nitrogen, transparency and oxygen consumption. Furthermore, a genetic algorithm was used to optimize the objective function, forming an early warning model for breeding waters. The model provides technical support for water environment governance and public decision-making. The paper also applies the model to samples from a new aquaculture base on Poyang Lake, with good forecasting results.
Linear-time Algorithm for Weighted Domination Problem of Strongly Chordal Graph Based on Local Ratio Method
ZHANG Xiu-jun, WU Pu, YANG Hong and SHAO Ze-hui
Computer Science. 2017, 44 (Z6): 129-132.  doi:10.11896/j.issn.1002-137X.2017.6A.029
Abstract PDF(359KB) ( 1136 )   
References | Related Articles | Metrics
In an undirected graph G=(V,E),DV is dominating set if and only if any vertex v∈V-D adjacent to at least one vertex u∈D.The minimum weight dominating set problem consists of finding a dominating set of a graph G with minimum weight.By applying with the properties of strongly chordal graph,a linear-time algorithm based on local ratio method for the minimum weight dominating set problem on strongly chordal graph was proposed.We also provi-ded the proof of the time complexity of the proposed algorithm.
Research on Optimal Transportation Route Based on Chaos Optimization
ZHANG Yan
Computer Science. 2017, 44 (Z6): 133-135.  doi:10.11896/j.issn.1002-137X.2017.6A.030
Abstract PDF(364KB) ( 925 )   
References | Related Articles | Metrics
Based on an analysis of the ergodicity of Logistic chaotic sequences, the Logistic chaotic sequence was mapped onto the search region of a multi-extremum objective function to search for the global optimal solution. We studied the general procedure of the chaos optimization algorithm, analyzed an example, and applied the chaos optimization algorithm to the transportation route optimization problem. The results show that the chaos optimization algorithm has good global search ability for the optimal solution, and that it is feasible and effective for optimal transportation route selection.
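The carrier idea described above — iterating the Logistic map and mapping its ergodic orbit onto the search interval — can be sketched as follows (an illustrative one-dimensional version, not the paper's exact procedure; the seed value is an arbitrary assumption):

```python
def chaos_search(f, lo, hi, n_iter=2000, u0=0.345):
    """Chaos optimization sketch: iterate the Logistic map u <- 4u(1-u),
    map u from (0,1) onto [lo, hi], and keep the best point visited."""
    u = u0
    best_x, best_val = None, float("inf")
    for _ in range(n_iter):
        u = 4.0 * u * (1.0 - u)          # Logistic map at r = 4 (chaotic regime)
        x = lo + (hi - lo) * u           # carrier mapping onto the search interval
        val = f(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

x, v = chaos_search(lambda x: (x - 1.0) ** 2, -2.0, 2.0)
```

Because the Logistic orbit is ergodic on (0, 1), the mapped points eventually come arbitrarily close to every point of the search interval, which is the global-search property the abstract relies on.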
FIR Low-pass Digital Filter Design Using Improved PSO Algorithms
SHAO Peng, WU Zhi-jian, PENG Hu, WANG Ying-long and ZHOU Xuan-yu
Computer Science. 2017, 44 (Z6): 136-138.  doi:10.11896/j.issn.1002-137X.2017.6A.031
Abstract PDF(537KB) ( 1351 )   
References | Related Articles | Metrics
Particle swarm optimization presents excellent optimization performance on some complex problems because of its advantages such as few parameters and easy implementation. Finite impulse response (FIR) digital filters have advantages such as a stable structure and easy implementation, which give FIR low-pass digital filters wide practical application. Therefore, in this paper, TFPSO was introduced to design an FIR low-pass digital filter, and the result was compared with FIR low-pass digital filters designed by refrPSO and OPSO. In the experiments, a suitable fitness function was proposed to test the performance of the FIR low-pass digital filters designed by the several improved PSO algorithms. The experimental results show that refrPSO yields excellent filter performance while TFPSO yields weak filter performance.
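As background for the variants compared above, a standard global-best PSO (not TFPSO, refrPSO or OPSO, whose modifications are not detailed in the abstract) can be sketched as follows on a test function; coefficient values are common defaults, assumed here:

```python
import numpy as np

def pso_minimize(f, dim=2, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Standard global-best PSO sketch: velocities blend inertia, a pull toward
    each particle's personal best, and a pull toward the swarm's global best."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (n, dim))
    V = np.zeros((n, dim))
    P = X.copy()                                   # personal best positions
    p_val = np.array([f(x) for x in X])
    g = P[np.argmin(p_val)].copy()                 # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
        X = X + V
        vals = np.array([f(x) for x in X])
        better = vals < p_val
        P[better], p_val[better] = X[better], vals[better]
        g = P[np.argmin(p_val)].copy()
    return g, float(p_val.min())

g_best, g_val = pso_minimize(lambda x: float(np.sum(x ** 2)))
```

In filter design, the fitness function would measure the deviation of the designed frequency response from the ideal low-pass response instead of this sphere function.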
Discrete Fishing Strategy Optimization Algorithm for TSP
CHEN Jian-rong and CHEN Jian-hua
Computer Science. 2017, 44 (Z6): 139-140.  doi:10.11896/j.issn.1002-137X.2017.6A.032
Abstract PDF(220KB) ( 1118 )   
References | Related Articles | Metrics
The classical fishing strategy can only solve optimization problems on a continuous domain; there has been no corresponding research on discrete domains. To solve the traveling salesman problem (TSP), a discrete fishing strategy optimization algorithm was presented. An efficient discrete encoding method was designed from the characteristics of TSP, and based on it, the basic concepts of the distinct set and the exchange operations were put forward. A new distance formula was given and several search strategies were redescribed. Experimental results on TSP instances from TSPLIB indicate that the algorithm achieves high accuracy, stability and speed, and provides a new choice for solving TSP.
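The permutation encoding and exchange operation mentioned above are the basic ingredients of any discrete TSP search. A minimal sketch (illustrative only; the paper's distinct set and distance formula are not reproduced here):

```python
import math

def tour_length(cities, tour):
    """Total length of a closed tour; cities are (x, y) points, tour a permutation of indices."""
    total = 0.0
    for i in range(len(tour)):
        x1, y1 = cities[tour[i]]
        x2, y2 = cities[tour[(i + 1) % len(tour)]]
        total += math.hypot(x2 - x1, y2 - y1)
    return total

def exchange(tour, i, j):
    """Exchange operation on the permutation encoding: swap the cities at positions i and j."""
    t = list(tour)
    t[i], t[j] = t[j], t[i]
    return t

# Unit square: the perimeter tour 0-1-2-3 has length 4; any crossing tour is longer.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
```

A discrete metaheuristic then explores the space of permutations via such exchange moves, accepting tours that shorten `tour_length`.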
Research on Method of Personal Relation Extraction under SDAs
ZHU Jie and HONG Jun-jian
Computer Science. 2017, 44 (Z6): 141-145.  doi:10.11896/j.issn.1002-137X.2017.6A.033
Abstract PDF(175KB) ( 1345 )   
References | Related Articles | Metrics
To address the lack of corpora for personal relation extraction, this paper studied methods of automatic tagging based on HUDONG pedia. To address the poor feature expression ability of shallow machine learning models, we proposed a personal relation extraction method based on the deep learning model SDAs, and focused on the effect of combination features and of different network depths in SDAs on personal relation extraction. Experimental analysis shows that the F measure can reach 73.75%.
Visualizing Convolutional Neural Networks Based on Numerical Solution
YU Hai-bao, SHEN Qi and FENG Guo-can
Computer Science. 2017, 44 (Z6): 146-150.  doi:10.11896/j.issn.1002-137X.2017.6A.034
Abstract PDF(954KB) ( 1162 )   
References | Related Articles | Metrics
Zeiler’s visualization model restores feature maps to the original image space by unpooling and deconvolution, in order to visualize what each node learns from the image. It helps to study the mechanism of convolutional neural networks, but its results are not distinct because of the vagueness of the method. Based on Zeiler’s deconvolutional visualization model, a numerical solution method was introduced to replace the vague method that only uses the convolutional kernel. A database was constructed first: triangles and rectangles were generated with random size, shape and location, which have simple structure and distinct vertices. Based on this database, we constructed a hierarchical database and carried out experiments. The experimental results show that the improved model extracts more distinct features with less noise, giving more precise results. Experiments on a bigger database were carried out to verify this result, which can guide how to construct the network’s structure.
Blind Color Image Quality Assessment Based on Color Characteristics
WEN Wu and ZUO Ling-xuan
Computer Science. 2017, 44 (Z6): 151-156.  doi:10.11896/j.issn.1002-137X.2017.6A.035
Abstract PDF(622KB) ( 2155 )   
References | Related Articles | Metrics
Color image quality assessment (C-IQA) was proposed to evaluate the quality of a color image. Different from other image quality assessment systems that simply convert the original image to a gray image, C-IQA considers not only the quality of an image at the gray scale, but also takes the color performance of that image into account. In this paper, we devised a color image quality evaluation model based on color characteristics. Besides the characteristic of brightness, we used the characteristics of hue, color saturation and color entropy to assess the quality of color images. Experiments on the LIVE image database show that our model’s predictions are highly consistent with image quality.
Estimate Threshold of SIFT Matching Adaptively Based on RANSAC
LIU Chuan-xi, ZHAO Ru-jin, LIU En-hai and HONG Yu-zhen
Computer Science. 2017, 44 (Z6): 157-160.  doi:10.11896/j.issn.1002-137X.2017.6A.036
Abstract PDF(1192KB) ( 1165 )   
References | Related Articles | Metrics
When matching images with the scale invariant feature transform (SIFT), the Euclidean distance between feature vectors is used as the similarity measurement, but it is difficult to choose the best distance ratio: when the ratio is a constant, mismatches or missed matches occur. To deal with this problem, the Random Sample Consensus (RANSAC) algorithm was introduced to optimize the ratio adaptively during matching and obtain the best threshold. The SIFT-based image matching algorithm was analyzed, and bi-directional matching was used to improve the accuracy of image matching and ensure the correctness of matching to the greatest extent. Finally, the experimental results show that the proposed method can obtain an optimal threshold for different images, yielding the most matching points and a better matching rate; bi-directional matching further improves the results.
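The fixed-ratio matching that the abstract improves upon is Lowe's ratio test: accept a match only when the nearest neighbour is much closer than the second nearest. A minimal sketch on toy descriptors (the adaptive RANSAC-driven threshold of the paper is not reproduced here; 0.6 is an assumed fixed ratio):

```python
import numpy as np

def ratio_match(desc1, desc2, ratio=0.6):
    """Fixed-ratio nearest-neighbour matching between two descriptor sets.
    Returns (index_in_desc1, index_in_desc2) pairs that pass the ratio test."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)
        nearest, second = order[0], order[1]
        if dists[nearest] < ratio * dists[second]:   # unambiguous nearest neighbour
            matches.append((i, int(nearest)))
    return matches

d1 = np.array([[0.0, 0.0], [10.0, 10.0]])
d2 = np.array([[0.0, 0.1], [5.0, 5.0], [10.0, 10.1]])
```

In the paper's scheme, RANSAC inlier statistics would be fed back to tune `ratio` per image pair instead of fixing it.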
Application of OpenMP in SAR Image Processing
CHENG Dong and WANG Wei-hong
Computer Science. 2017, 44 (Z6): 161-163.  doi:10.11896/j.issn.1002-137X.2017.6A.037
Abstract PDF(430KB) ( 1139 )   
References | Related Articles | Metrics
The data volume of a SAR image is usually very large, and ordinary recognition algorithms are complex and time-consuming. To overcome this bottleneck, a target classification method for SAR images based on OpenMP is proposed. Firstly, the model-template based recognition algorithm is analyzed. Secondly, OpenMP is applied to build a parallel computing framework for the SAR image target classification method. Finally, tests are completed by classifying 3 kinds of objects. The test results show that the processing speed is improved by 8 times, and the proposed method is practical.
Image Edge Detection Based on Pyramidal Algorithm of Interpolation Wavelet
ZHANG Zhi-guo, ZHENG Xi and LAN Jing-chuan
Computer Science. 2017, 44 (Z6): 164-168.  doi:10.11896/j.issn.1002-137X.2017.6A.038
Abstract PDF(546KB) ( 1005 )   
References | Related Articles | Metrics
When classic wavelet theory is applied to detect edges of images, a discrete integral formula is often used to replace the continuous integral to obtain wavelet coefficients. Since the discrete integral is an approximate expression of the continuous integral, large numerical errors often cannot be avoided in the calculation. This has led to the fact that some details of images cannot be described clearly in edge detection. To solve this problem, by applying the Mallat pyramidal algorithm to an interpolation conjugate filter, a new edge detection algorithm was proposed based on the fact that image pixel values can be regarded as coefficients of interpolation wavelets. In the experiments, our algorithm is compared with the classic one. It is shown that the new algorithm can obtain clearer and more intact edges, which implies that our algorithm is more effective and accurate than the classic one.
Study on Optimizations of Basic Image Processing Algorithm
XU Qi-hang, YOU An-qing, MA She and CUI Yun-jun
Computer Science. 2017, 44 (Z6): 169-172.  doi:10.11896/j.issn.1002-137X.2017.6A.039
Abstract PDF(325KB) ( 1189 )   
References | Related Articles | Metrics
To provide useful references for the implementation of real-time, high-quality image processing tasks, taking an open-loop tracking algorithm based on template matching in video images as an example, the tracking performance of a MATLAB prototype algorithm was evaluated and a multi-level optimization process was presented. Starting with the MATLAB prototype algorithm, we optimized mainly in the following two aspects. To improve real-time processing speed, more than 10 levels of optimization were applied to speed up the algorithm, including C language speed-up, multiplication speed-up, release-build speed-up, merged operations, CUDA speed-up and so on. To improve the correct rate, a simple multi-pattern strategy was used. Testing results indicate that the algorithm reaches real-time image processing at 30 Hz, and the tracking rate of the algorithm is also greatly improved.
Research on People Counting Based on Hot Area
GAO Fei, FENG Min-qiang, WANG Min-qian, LU Shu-fang and XIAO Gang
Computer Science. 2017, 44 (Z6): 173-178.  doi:10.11896/j.issn.1002-137X.2017.6A.040
Abstract PDF(1297KB) ( 1024 )   
References | Related Articles | Metrics
People counting is important in the field of intelligent monitoring, but the complex background environment and occlusion during pedestrian movement give current methods low accuracy; in addition, the practical scope of traditional line-crossing counting is limited. Considering the lack of effective methods, we proposed a people counting method based on hot areas. Firstly, an adaptive learning-rate background model is used to extract the foreground of the moving targets, and the positions and sizes of the foreground regions are obtained. The HOG features in the foreground regions are scanned, and head-shoulder targets are determined. Then a target matching matrix algorithm based on KCF is used to track the head-shoulder targets. Finally, the number of pedestrians is calculated by combining the target trajectories with the hot-area-based people counting method. The test video has a resolution of 960×720 pixels. The accuracy of the algorithm reaches 93.1%, and it meets real-time requirements. The proposed method balances detection efficiency and accuracy, works well in scenes with complex backgrounds, and can serve various practical people counting scenarios.
Image Inpainting Based on Dual-tree Complex Wavelet Transform
DOU Li-yun, XU Dan, LI Jie, CHEN Hao and LIU Yi-cheng
Computer Science. 2017, 44 (Z6): 179-182.  doi:10.11896/j.issn.1002-137X.2017.6A.041
Abstract PDF(1330KB) ( 1001 )   
References | Related Articles | Metrics
Wavelet transform technology has been widely used in the field of digital image inpainting; however, image inpainting based on the wavelet transform suffers from edge blurring and disconnection, which remains a difficult problem. Based on multiscale, multidirectional decomposition and traditional image inpainting methods, a new image inpainting algorithm based on the dual-tree complex wavelet transform was proposed. Firstly, the image is decomposed into low-frequency and high-frequency parts using the dual-tree complex wavelet transform. Then the different frequency parts are inpainted respectively: the high-frequency components of the image are inpainted by the total variation model, and an improved curvature-driven diffusion model is used to repair the low-frequency components. Finally, the result image is obtained by the dual-tree complex wavelet reconstruction process. The experimental results show that the proposed algorithm promotes the application of the dual-tree complex wavelet transform in image inpainting and achieves better repair in both the texture and the structure parts.
SAR Image Denoising Based on Nonlocal Similarity and Low Rank Matrix Approximation
ZHAO Jie, WANG Pei-pei and MEN Guo-zun
Computer Science. 2017, 44 (Z6): 183-187.  doi:10.11896/j.issn.1002-137X.2017.6A.042
Abstract PDF(1360KB) ( 1117 )   
References | Related Articles | Metrics
A SAR image denoising method based on nonlocal similarity and low rank matrix approximation was presented to minimize the effect of speckle noise in synthetic aperture radar images. Firstly, the multiplicative speckle is changed into additive noise by a logarithmic transformation. Secondly, the image’s global noise variance is estimated in advance. Thirdly, a new joint block matching method based on Euclidean distance and R-squared is developed, which makes the matching results more accurate. Finally, within the framework of the low rank model, an improved residual noise variance estimation is used to approximate the low rank matrix with weighted nuclear norm minimization, achieving noise suppression for SAR images. The experimental results show that this method not only significantly improves the objective peak signal-to-noise ratio and better preserves the local structure of the image, but also produces a good subjective visual effect.
Research of Combination SVM Classifier in Pedestrian Detection
ZOU Chong, CAI Dun-bo, LIU Ying, ZHAO Na and ZHAO Tong-zhou
Computer Science. 2017, 44 (Z6): 188-191.  doi:10.11896/j.issn.1002-137X.2017.6A.043
Abstract PDF(856KB) ( 986 )   
References | Related Articles | Metrics
On the basis of the histogram of oriented gradients and support vector machine (HOG-SVM) algorithm, this paper proposed an improved algorithm using combined classifiers. Firstly, the algorithm uses multi-scale sliding windows to extract HOG features and trains SVMs separately. Then, the trained SVMs are combined into a new classifier in series or in parallel to detect pedestrians. In order to solve the problem that target areas overlap when features are extracted with multi-scale sliding windows, the non-maximum suppression (NMS) algorithm is used to fuse the rectangles and obtain exact candidate regions. Experiments show that the combined SVM classifiers can effectively reduce the false detection rate and the miss rate.
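The NMS step mentioned above is a standard greedy procedure: keep the highest-scoring detection, drop detections overlapping it beyond an IoU threshold, and repeat. A self-contained sketch (the 0.5 threshold is an assumed common default):

```python
def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression over (x1, y1, x2, y2) boxes.
    Returns the indices of the boxes that survive, in descending score order."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
        inter = iw * ih
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter) if inter else 0.0

    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)          # highest remaining score
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
```

Here the second box heavily overlaps the first (IoU 0.81) and is suppressed, while the distant third box survives.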
Video Stylization Based on Kinect Depth Information
TANG Ying and SUN Kang-gao
Computer Science. 2017, 44 (Z6): 192-197.  doi:10.11896/j.issn.1002-137X.2017.6A.044
Abstract PDF(1617KB) ( 1135 )   
References | Related Articles | Metrics
This paper focused on extracting depth information from the Kinect for XBOX360 to separate video foreground from background, and on stylizing the foreground and background with different artistic styles. First, we achieved extraction of the video foreground based on the depth data. Next, the foreground and background videos were stylized with different artistic styles based on texture advection guided by the optical flow field. Finally, the final stylized video was obtained by combining the above two results effectively. Because we stylize video with texture advection-based methods, multiple rendering styles are supported. The experimental results show that the video stylization produced by our system achieves a good artistic effect.
Image Segmentation Algorithm Based on Clustering and Improved Double Level Set
ZHANG Hui, ZHU Jia-ming and TANG Wen-jie
Computer Science. 2017, 44 (Z6): 198-201.  doi:10.11896/j.issn.1002-137X.2017.6A.045
Abstract PDF(760KB) ( 1014 )   
References | Related Articles | Metrics
Medical images are usually accompanied by noise and contain multiple targets, and traditional level set methods cannot completely separate images with multiple targets. This paper proposed a model based on a suppressive fuzzy clustering algorithm and a modified double level set. First of all, the clustering algorithm is used for pre-segmentation and noise reduction of the medical image; whether a clustering achieves a satisfactory effect is determined through the standardized rule of normalized mutual information (NMI), thus improving the clustering algorithm. The improved double level set with a punishment term then performs a second segmentation. The experimental results show that the method reduces both image noise and the sensitivity of the algorithm, avoids re-initializing the level set, and greatly reduces the amount of calculation and the number of iterations. The model can completely separate medical images containing noise and multiple objects, obtaining the expected segmentation effect.
Choice of Coding Parameters in Video Transmission for Perceptual Quality
DU Lin, TIAN Chang, WU Ze-min, ZHANG Zhao-feng, HU Lei and ZHANG Lei
Computer Science. 2017, 44 (Z6): 202-205.  doi:10.11896/j.issn.1002-137X.2017.6A.046
Abstract PDF(660KB) ( 1159 )   
References | Related Articles | Metrics
There are two important problems to be solved in video transmission. One is how to allocate rate between source and channel coding under known bandwidth and packet loss probability. The other is how to choose coding parameters under a restricted bitrate. This paper focused on the second problem. We analyzed the existing perceptual quality model VQMTQ and chose it as the index to evaluate the perceptual quality of videos. Combined with the rate model, the proposed method chooses the best coding parameters under a given target bitrate, which not only satisfies the bitrate limit but also gives the compressed video the best perceptual quality, so the method can be used in practice to protect human perceptual experience.
Multilevel Color Image Segmentation Based on Improved Glowworm Swarm Optimization Algorithm
MAO Xiao, HE Li-fang and WANG Qing-ping
Computer Science. 2017, 44 (Z6): 206-211.  doi:10.11896/j.issn.1002-137X.2017.6A.047
Abstract PDF(1374KB) ( 851 )   
References | Related Articles | Metrics
In order to improve the segmentation of color images, a novel multilevel color image segmentation method was presented based on an improved glowworm swarm optimization (IGSO) algorithm using Kapur’s entropy. Aiming at the problem that the glowworm swarm optimization algorithm has low convergence speed and accuracy in its later period, an IGSO algorithm was presented based on an adaptive step size and global information. Considering the effect of the step size and the direction of movement on convergence, the IGSO algorithm improves convergence by adding global information and a step size that adapts with the iterations and the dimension of the search space during movement. The experimental results show that it is a better method for multilevel color image segmentation than the GSO algorithm, the improved quantum-behaved particle swarm optimization (CQPSO) algorithm and the modified bacterial foraging (MBF) algorithm.
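Kapur's entropy criterion, which the abstract uses as the segmentation objective, scores a threshold by the sum of the entropies of the two classes it induces on the gray histogram. A single-threshold, brute-force sketch (the paper instead searches several thresholds per channel with the IGSO algorithm):

```python
import numpy as np

def kapur_threshold(hist):
    """Exhaustive single-threshold maximization of Kapur's entropy:
    H(t) = H0 + H1, the entropies of the class distributions below and above t."""
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(len(p) - 1):
        p0, p1 = p[: t + 1].sum(), p[t + 1 :].sum()
        if p0 == 0 or p1 == 0:
            continue                          # degenerate split, skip
        q0, q1 = p[: t + 1] / p0, p[t + 1 :] / p1
        h0 = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0]))
        h1 = -np.sum(q1[q1 > 0] * np.log(q1[q1 > 0]))
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t

# Bimodal toy histogram with modes around bins 1 and 6.
hist = np.array([1, 8, 1, 0, 0, 1, 8, 1, 0, 0], dtype=float)
```

Exhaustive search is feasible for one threshold but explodes combinatorially at multiple levels, which is why the paper resorts to a swarm optimizer.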
Double Threshold Orthogonal Matching Pursuit Algorithm
LIU Xin-yue, ZHAO Zhi-gang, LV Hui-xian, WANG Fu-chi and XIE Hao
Computer Science. 2017, 44 (Z6): 212-215.  doi:10.11896/j.issn.1002-137X.2017.6A.048
Abstract PDF(1809KB) ( 1008 )   
References | Related Articles | Metrics
The reconstruction algorithm is an important part of the theory of compressed sensing (CS). When the sparsity is unknown, some reconstruction algorithms perform poorly. To solve this problem, an orthogonal matching pursuit algorithm based on a double threshold was put forward. With unknown sparsity, screening the selected atoms twice yields efficient, high-quality reconstruction of image signals. Experimental comparison with other algorithms shows that the proposed algorithm can effectively reconstruct signals, with higher reconstruction precision and shorter running time.
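For reference, the baseline orthogonal matching pursuit that the double-threshold variant extends proceeds by repeatedly picking the atom most correlated with the residual and re-fitting by least squares on the selected support. A sketch with a known-sparsity stopping rule (the paper's double-threshold screening is not reproduced):

```python
import numpy as np

def omp(A, y, k):
    """Standard OMP sketch: k iterations of greedy atom selection plus
    least-squares re-fitting on the growing support."""
    residual = y.astype(float)
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        corr = A.T @ residual
        support.append(int(np.argmax(np.abs(corr))))    # most correlated atom
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)  # re-fit on the support
        residual = y - sub @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((20, 10)))      # sensing matrix with orthonormal columns
x_true = np.zeros(10)
x_true[[2, 5, 7]] = [3.0, -2.0, 1.5]
y = Q @ x_true
x_hat = omp(Q, y, 3)
```

With orthonormal columns the recovery is exact; the double-threshold idea replaces the fixed iteration count `k` with data-driven screening when the sparsity is unknown.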
Research on Anthrax Disease Classification of Dangshan Pear Based on Hyperspectral Imaging Technology
WEN Shu-xian, LI Shao-wen, JIN Xiu, ZHAO Liu and JIANG Han
Computer Science. 2017, 44 (Z6): 216-219.  doi:10.11896/j.issn.1002-137X.2017.6A.049
Abstract PDF(2179KB) ( 977 )   
References | Related Articles | Metrics
To detect disease at different levels, this article took Dangshan pears inoculated with anthrax as the research object, using hyperspectral imaging technology to model disease classification. In the 400~1000 nm spectral region, we collected sequential hyperspectral images of the whole process of the Dangshan pear samples from inoculation with anthrax through morbidity to decomposition, used threshold segmentation to separate the background of the images, and performed principal component analysis on the effective spectral region. We selected the second principal component (PC2) to extract the infected region of interest, used the weight coefficient method to extract eigenvalues of the region of interest, and used an unsupervised classification algorithm for clustering analysis of the characteristic values. Through observation and analysis of 210 sample sets, the effective sample classification rate is 98.41%. The experimental results show that hyperspectral imaging nondestructive testing technology is valid for classifying anthrax disease of Dangshan pear at different levels.
Gesture Recognition Based on Weighted Feature Distance
WANG Yan, XU Shi-yi and CHEN Hai-yun
Computer Science. 2017, 44 (Z6): 220-223.  doi:10.11896/j.issn.1002-137X.2017.6A.050
Abstract PDF(344KB) ( 945 )   
References | Related Articles | Metrics
Gesture recognition based on computer vision has become a hot research topic, but due to the influence of illumination, environment and other factors, methods based on a single feature cannot identify gestures well. Therefore, a method combining Hu invariant moments with the number of fingertips as features of static gestures was proposed. After preprocessing the collected static gesture images, a skin color model is applied to segment the gesture; the number of fingertips is then detected by the centroid distance method, and the Hu values of the extracted gesture contour are calculated. Next, the number of fingertips and the Hu values are weighted respectively as gesture features. Finally, template matching is used to recognize gestures by weighting and fusing the feature distances. Experimental results show that the proposed method obtains a higher recognition rate on ten kinds of gestures than the traditional Hu invariant moments method and the fingertip detection method.
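The Hu invariant moments mentioned above are built from normalized central moments of the segmented shape; the first invariant, for example, is φ1 = η20 + η02 and is unchanged by translation. A minimal sketch (illustrative; real pipelines typically use all seven Hu moments, e.g. via OpenCV):

```python
import numpy as np

def hu_phi1(img):
    """First Hu invariant phi1 = eta20 + eta02, from normalized central moments
    of a binary (or gray) image; invariant to translation of the shape."""
    ys, xs = np.mgrid[: img.shape[0], : img.shape[1]]
    m00 = img.sum()
    xb, yb = (xs * img).sum() / m00, (ys * img).sum() / m00   # centroid
    mu20 = ((xs - xb) ** 2 * img).sum()                       # central moments
    mu02 = ((ys - yb) ** 2 * img).sum()
    eta20, eta02 = mu20 / m00 ** 2, mu02 / m00 ** 2           # normalization for p+q = 2
    return eta20 + eta02

a = np.zeros((10, 10)); a[2:5, 3:7] = 1    # 3x4 rectangle
b = np.zeros((10, 10)); b[5:8, 1:5] = 1    # same rectangle, translated
```

Translating the shape leaves φ1 unchanged, which is why such moments are robust gesture descriptors.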
Garden Tourist Detection Based on Improved ViBe Algorithm
LIU Ying-ying, CHENG Shun, DING Shao-gang, LU Pan and SUN Yuan-hao
Computer Science. 2017, 44 (Z6): 224-228.  doi:10.11896/j.issn.1002-137X.2017.6A.051
Abstract PDF(638KB) ( 962 )   
References | Related Articles | Metrics
The traditional visual background extraction (ViBe) algorithm has several problems, such as sensitivity to shadows, wrongly judged foreground points, holes in the foreground and so on. In order to better segment garden tourists from the background, based on an analysis of various background modeling methods, this paper presented an improved ViBe tourist detection algorithm in the Lab color space, and tested the accuracy and robustness of the improved algorithm. The results show that the algorithm builds an updated background model that improves the accuracy of tourist detection, adapts effectively to changes of light and removes shadows. Analysis of garden videos from different locations shows that the improved ViBe algorithm achieves better detection results.
Similar Character Recognition of License Plates Based on Deep Learning
PAN Xiang and WANG Heng
Computer Science. 2017, 44 (Z6): 229-231.  doi:10.11896/j.issn.1002-137X.2017.6A.052
Abstract PDF(526KB) ( 1141 )   
References | Related Articles | Metrics
It is hard to recognize similar characters on a license plate, so a new method based on deep learning was proposed to extract features and recognize similar characters. Firstly, the method normalizes the character images, and the normalized images are used as input. We built a five-layer deep network architecture and extracted features of similar characters from low level to high level. A convolution function sensitive to character edges is adopted, so that the network can analyze the local differences between similar characters. We compared this method with the support vector machine (SVM) in experiments. The results show that the accuracy of the proposed method increases by 5%.
Face Recognition Method Based on Adaptive 3D Morphable Model and Multiple Manifold Discriminant Analysis
WANG Jian-tao, ZHAO Li and QI Xing-bin
Computer Science. 2017, 44 (Z6): 232-235.  doi:10.11896/j.issn.1002-137X.2017.6A.053
Abstract PDF(1088KB) ( 1206 )   
References | Related Articles | Metrics
In order to reduce the loss of face appearance information caused by the normalization of face pose and expression, a face recognition method based on an adaptive three-dimensional morphable model (3DMM) and multiple manifold discriminant analysis was proposed. Firstly, the 2D-3D coordinate transformation error caused by non-correspondence of face pose is described, and an adaptive 3DMM fitting method is proposed. Then, the entire image is mapped onto a 3D mesh object by three-dimensional transformation to preserve identity information as much as possible. Finally, multiple manifold discriminant analysis is used to calculate the distance between manifolds, and the nearest neighbor classifier is used to finish recognition. The effectiveness of the proposed method is verified by experiments on the Multi-PIE and LFW databases and on self-collected data; the face recognition accuracy on the three databases reaches 99.8%, 95.25% and 98.62%, respectively. The proposed method significantly improves face recognition performance, and it is better than other similar advanced methods in both constrained and unconstrained environments.
Nonconvex Nuclear Norm Minimization General Model with Its Application in Image Denoising
SUN Shao-chao
Computer Science. 2017, 44 (Z6): 236-239.  doi:10.11896/j.issn.1002-137X.2017.6A.054
Abstract PDF(782KB) ( 877 )   
References | Related Articles | Metrics
This paper focused on the nonconvex low rank approximation model. We proposed a class of nonconvex functions g defined on the singular values of a matrix; in fact, many well-known nonconvex functions satisfy the conditions on g. When the function g is introduced into the weighted nuclear norm minimization model, we obtain a more general model, which effectively solves the weight selection problem of the former model. In this paper, the model was applied to the field of image denoising, and a convergent solver was given. Simulation results show that the proposed method is superior to other advanced algorithms.
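The weighted nuclear norm minimization model that the nonconvex function g generalizes has a well-known proximal step: soft-threshold each singular value by its weight. A sketch of that baseline step (the paper's nonconvex g would replace the simple shrinkage rule):

```python
import numpy as np

def weighted_svt(M, w):
    """Weighted singular value soft-thresholding: shrink each singular value
    of M by its weight w_i and clip at zero, then reconstruct. This is the
    closed-form step of weighted nuclear norm minimization with
    nondecreasing weights."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - w, 0.0)          # per-singular-value shrinkage
    return U @ np.diag(s_shrunk) @ Vt

M = np.diag([5.0, 1.0])                        # singular values 5 and 1
X = weighted_svt(M, np.array([2.0, 2.0]))      # threshold both by 2
```

The small singular value is annihilated and the large one shrunk, which is how such models suppress noise while keeping the dominant low-rank structure.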
New Method for Medical Image Segmentation Based on BP Neural Network
TANG Si-yuan, XING Jun-feng and YANG Min
Computer Science. 2017, 44 (Z6): 240-243.  doi:10.11896/j.issn.1002-137X.2017.6A.055
Abstract PDF(358KB) ( 941 )   
References | Related Articles | Metrics
For medical image segmentation, good accuracy is very important and helps doctors diagnose illness and choose the right therapeutic schemes. The traditional BP neural network is used to segment medical images, but it is sensitive to the initial weights, has a fixed learning rate and slow convergence, and easily falls into local minima. A medical image segmentation method based on a BP neural network with an improved particle swarm optimization algorithm was proposed. Firstly, a mapping relationship is established between the particle swarm optimization algorithm and the BP neural network, and the powerful search ability of the particle swarm is used to find the fitness optimum, which lets the BP neural network attain minimal error and overcomes its tendency to fall into local minima. Secondly, the best positions of the particles are determined, the most reasonable weights and biases of the BP neural network are obtained, and the network convergence speed is improved. Lastly, the BP neural network is repeatedly trained, the best output values are obtained, threshold values are calculated, and the image is segmented by thresholding. Simulation results show that the improved algorithm produces clearer segmentation of medical images and improves the segmentation accuracy, which is important for clinical diagnosis.
3D Reconstruction Based on SFS Method and Accuracy Analysis
CAO Fang and ZHU Yong-kang
Computer Science. 2017, 44 (Z6): 244-247.  doi:10.11896/j.issn.1002-137X.2017.6A.056
Abstract PDF(525KB) ( 1257 )   
References | Related Articles | Metrics
Shape from shading (SFS) is one of the research hotspots and difficulties in 3D reconstruction for computer vision. There are two problems in such algorithms: one is that the selected reflection model does not accord with the reflection characteristics of the object surface; the other is that the constraints and solution process are too complex, so the solution is slow and inefficient. In this paper, the SFS algorithm was analyzed in detail, the Lambert illumination model was introduced, a spherical surface was assumed, and the height function was obtained by approximate differential operations. The 3D shape of the object surface can then be recovered from a single gray image. The traditional linearized SFS algorithm and the algorithm proposed in this paper were experimentally validated, and the reconstruction precision and efficiency of the two models were compared and analyzed. Experimental results show that the proposed algorithm is more efficient than the traditional algorithm while ensuring a certain accuracy.
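The Lambert illumination model assumed above predicts the image intensity from the surface normal and the light direction, I = ρ·max(0, n·l); SFS inverts this relation to recover the surface. A forward-model sketch on the assumed spherical surface (illustrative values only):

```python
import numpy as np

def lambert_intensity(normal, light, albedo=1.0):
    """Lambert illumination model: I = albedo * max(0, n . l) for unit vectors."""
    return albedo * max(0.0, float(np.dot(normal, light)))

def sphere_normal(x, y, R):
    """Outward unit normal of the sphere patch z = sqrt(R^2 - x^2 - y^2) at (x, y)."""
    z = np.sqrt(R * R - x * x - y * y)
    return np.array([x, y, z]) / R

# Light along the viewing axis: the sphere is brightest at its center
# and darkens toward the limb, which is the shading cue SFS inverts.
light = np.array([0.0, 0.0, 1.0])
center = lambert_intensity(sphere_normal(0.0, 0.0, 1.0), light, albedo=0.8)
edge = lambert_intensity(sphere_normal(0.8, 0.0, 1.0), light, albedo=0.8)
```

SFS solves the inverse problem: given the intensity image, recover the normals (and hence the height function) consistent with this model.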
Multi-sensors Direction-and-Time Co-localization Algorithm Based on Efficient Anchor-nodes
XIA Xiao-dong, ZHUANG Yi, LI Jing and GU Jing-jing
Computer Science. 2017, 44 (Z6): 248-251.  doi:10.11896/j.issn.1002-137X.2017.6A.057
Abstract PDF(606KB) ( 861 )   
References | Related Articles | Metrics
In this paper, we proposed an efficient anchor node selection (EAS) model for the problems of low precision, high delay and low coverage in the field of electronic countermeasures. According to the environment of the sensor nodes, the model chooses effective anchor nodes to participate in target localization. To improve classical localization algorithms that are based on independent data, we proposed a multi-sensor direction-and-time co-localization algorithm based on efficient anchor nodes (LDTEAS). This algorithm can effectively reduce the influence of the environment and enemy interference. Simulation results show that the proposed model can effectively improve localization accuracy and coverage.
Shortest Routing Algorithm Based on Target Node in Mesh Network with Faulty Area
LIN Cheng-kuan, WANG Ming-cheng, GUO Li-li and DU Man-yi
Computer Science. 2017, 44 (Z6): 252-257.  doi:10.11896/j.issn.1002-137X.2017.6A.058
Abstract PDF(311KB) ( 832 )   
References | Related Articles | Metrics
The mesh network was studied early and remains one of the most important and attractive network models at present. Because of its simple, regular and scalable structure, and its usefulness for VLSI (Very Large Scale Integration) implementation, the mesh network has not only become the basic model of many theoretical studies, but also the topology of many large multiprocessor and parallel computer systems. A mesh with m rows and n columns is denoted by Mm,n. In this paper, we gave shortest routing algorithms under two different kinds of faulty areas: 1) a routing algorithm that finds the shortest path between any two fault-free nodes in Mm,n with m≥3 and n≥3, together with the length of the path obtained, when there is a rectangular faulty region; 2) a routing algorithm that finds the shortest path between any two fault-free nodes in Mm,n with m≥3 and n≥3, together with the length of the path obtained, when a node and its k-hop neighbours are faulty.
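The routing task can be illustrated with a plain BFS search that detours around the faulty region (our own sketch; the paper derives the exact path length analytically rather than by search):

```python
from collections import deque

def shortest_path_mesh(m, n, src, dst, faulty):
    """BFS shortest path between fault-free nodes of an m x n mesh,
    detouring around the set of faulty nodes.
    Returns the node list, or None if dst is unreachable."""
    faulty = set(faulty)
    if src in faulty or dst in faulty:
        return None
    prev = {src: None}          # visited set doubling as parent pointers
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:         # reconstruct the path back to src
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < m and 0 <= nxt[1] < n
                    and nxt not in faulty and nxt not in prev):
                prev[nxt] = node
                queue.append(nxt)
    return None
```

For example, on a 5x5 mesh with the 2x2 rectangular faulty region {(1,1),(1,2),(2,1),(2,2)}, the path from (0,0) to (3,3) still has Manhattan length 6, because a monotone route around the block exists.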
Energy-efficient Design under Imperfect Condition Based on Spectrum Prediction
ZHANG Yang, ZHAO Hang-sheng and ZHAO Xiao-long
Computer Science. 2017, 44 (Z6): 258-262.  doi:10.11896/j.issn.1002-137X.2017.6A.059
Abstract PDF(396KB) ( 801 )   
References | Related Articles | Metrics
In cognitive radio networks, when secondary users perform spectrum prediction and spectrum sensing, prediction mistakes and sensing mistakes occur. For cognitive radio networks where energy is limited, this paper analyzes the secondary users' energy efficiency under spectrum prediction mistakes and spectrum sensing mistakes. A normalized spectrum prediction formula is designed. The impacts of spectrum prediction energy, the probability of wrong prediction, traffic intensity and channel number on energy efficiency are also investigated respectively. The simulation results are compared with the energy efficiency of a perfect cognitive radio network; they conform better to actual conditions and have great value in theory and engineering application.
Multicasting Network System Design of Uniting Domains Based on MPLS VPN and MSDP
TAO Jun, KUANG Lei, XU Wang, YAN Yun-sheng and WAN Jia-shan
Computer Science. 2017, 44 (Z6): 263-265.  doi:10.11896/j.issn.1002-137X.2017.6A.060
Abstract PDF(580KB) ( 1011 )   
References | Related Articles | Metrics
The article introduced the theory of multicasting and analysed MSDP and MPLS VPN technology. For the video network transformation of a typical enterprise, three inter-domain multicasting schemes were proposed, and their advantages and shortcomings were compared. The scheme based on MSDP and MPLS VPN is the best solution, which not only relieves network traffic, but also increases the security and reliability of the network.
Traffic Scheduling Based Congestion Control Algorithm for Data Center Network on Software Defined Network
FAN Zi-fu, LI Shu and ZHANG Dan
Computer Science. 2017, 44 (Z6): 266-269.  doi:10.11896/j.issn.1002-137X.2017.6A.061
Abstract PDF(398KB) ( 1605 )   
References | Related Articles | Metrics
To alleviate the congestion problem in modern data center networks using software defined networking (SDN), a traffic scheduling based congestion control algorithm was proposed. When a link is congested, the proposed algorithm first identifies the large flows on the most critical of the congested links, then reroutes each large flow, selects the path with the minimum flow scheduling overhead, and calculates the scheduling cost. Finally, the flow with the minimum scheduling cost is scheduled. Experimental results show that the proposed algorithm can alleviate network congestion, improve link utilization and enhance network stability by reducing dropout rates.
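The selection step can be sketched as follows (a hypothetical simplification; data structures and names are ours, not from the paper): among the flows on the congested link, take the largest, then choose its alternative path with the minimum scheduling cost:

```python
def pick_flow_and_path(flows, alt_paths):
    """flows: {flow_id: rate on the congested link};
    alt_paths: {flow_id: [(path_tuple, scheduling_cost), ...]}.
    Returns the largest flow and its cheapest reroute."""
    big = max(flows, key=flows.get)                          # large-flow discrimination
    path, cost = min(alt_paths[big], key=lambda pc: pc[1])   # min scheduling cost
    return big, path, cost
```

A full controller would repeat this until the link load drops below the congestion threshold, installing the chosen path via flow-table updates.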
WF-C4.5:Handheld Terminal Traffic Identification Method Based on C4.5 Decision Tree in WiFi Environment
SHI Zhi-kai and ZHU Guo-sheng
Computer Science. 2017, 44 (Z6): 270-273.  doi:10.11896/j.issn.1002-137X.2017.6A.062
Abstract PDF(593KB) ( 949 )   
References | Related Articles | Metrics
It was reported that mobile terminals account for about 47% of global IP traffic, while WiFi traffic accounts for over 90% of mobile traffic. Identification of mobile terminal traffic is important for efficient network traffic management. To solve the low identification rate of the traditional HTTP user agent (UA) method, we analyzed the features of mobile terminal traffic in a WiFi environment, including the connection persist time, packet size and payload size, etc. We proposed WF-C4.5, a handheld terminal traffic identification method based on the C4.5 decision tree in a WiFi environment. The method distinguishes handheld terminal traffic from non-handheld traffic with a decision tree model, which is created by calculating the information gain ratio of each attribute. The experiments show that the identification rate of WF-C4.5 can reach 95%, while the identification rate of UA is about 65%.
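The splitting criterion of C4.5, the information gain ratio, can be computed as below (a generic sketch of the standard C4.5 measure, not the authors' implementation; feature values here are illustrative):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def gain_ratio(samples, labels, attr_index):
    """C4.5 gain ratio = information gain / split information
    for the (discrete) attribute at attr_index."""
    base = entropy(labels)
    groups = {}
    for s, y in zip(samples, labels):
        groups.setdefault(s[attr_index], []).append(y)
    total = len(labels)
    cond = sum(len(g) / total * entropy(g) for g in groups.values())
    split_info = -sum(len(g) / total * math.log2(len(g) / total)
                      for g in groups.values())
    gain = base - cond
    return gain / split_info if split_info > 0 else 0.0
```

C4.5 grows the tree by repeatedly splitting on the attribute with the highest gain ratio; a perfectly discriminative attribute (e.g. a packet-size bucket that separates handheld from non-handheld traffic) scores 1.0.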
Research on Wireless Interference Source Localization Based on Grid Spectrum Monitoring
LI Jin-shan, SHAO Yu-bin and LONG Hua
Computer Science. 2017, 44 (Z6): 274-275.  doi:10.11896/j.issn.1002-137X.2017.6A.063
Abstract PDF(163KB) ( 814 )   
References | Related Articles | Metrics
To find the source of interference or illegal radio quickly and efficiently, a localization algorithm for the interference position was proposed. The method measures the received power over a grid distribution set up in the test area. Based on the radio signal source localization algorithm proposed in this paper, the location of the interference source is found. The correctness and effectiveness of the proposed algorithm are proved by the simulation results.
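One simple way to turn a grid of received-power measurements into a position estimate (our own hypothetical sketch; the paper's exact algorithm may differ) is a power-weighted centroid of the strongest cells:

```python
def weighted_centroid(grid_power, k=3):
    """grid_power: {(x, y): received_power}.  Estimate the source position
    as the power-weighted centroid of the k strongest grid cells."""
    top = sorted(grid_power.items(), key=lambda kv: kv[1], reverse=True)[:k]
    total = sum(p for _, p in top)
    x = sum(cx * p for (cx, _), p in top) / total
    y = sum(cy * p for (_, cy), p in top) / total
    return x, y
```

With symmetric measurements around a cell, the estimate lands on that cell; in practice the grid spacing bounds the achievable accuracy.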
Traffic Control Mechanism Design and Verification Based on VANET
YANG Lin, ZHANG Wen-li, ZHU Qin and PENG Chao
Computer Science. 2017, 44 (Z6): 276-283.  doi:10.11896/j.issn.1002-137X.2017.6A.064
Abstract PDF(1088KB) ( 1411 )   
References | Related Articles | Metrics
In recent years, rapid worldwide development of the automobile industry and a growing rate of vehicle ownership have been witnessed. Consequently, the thorny issue of traffic congestion has bothered more and more people. To alleviate traffic jams as well as enhance traffic efficiency, this paper proposed a traffic control mechanism based on VANET. First, we proposed a cooperative traffic light control mechanism for multiple intersections based on semi-real-time processing, combining the concepts of fixed-time control and traffic-response control. In our mechanism, the traffic light controller uses a trajectory prediction method to predict the traffic situation of the next period, and then makes the optimal traffic light phase setting for the next period according to the prediction result. The aim of our mechanism is to minimize the waiting time of all vehicles. This paper also proposed a heuristic algorithm for dynamic route planning to enhance individuals' travel efficiency in the traffic system. On the basis of the Dijkstra algorithm, we adopted a heuristic method to calculate the weight of each road, so that vehicles can avoid traffic jams as much as possible. To verify the performance of our proposed VANET traffic control mechanism, we ran simulation experiments combining SUMO with NS3. The simulation results demonstrate that our proposed traffic control mechanism is both effective and practical: it reduces traffic load and the average waiting time of vehicles, relieves traffic jams and diverts jammed vehicles, thus improving the road traffic situation of the whole transportation system.
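The route-planning idea (Dijkstra over congestion-weighted roads) can be sketched as follows; the weight formula `length * (1 + congestion)` is our own illustrative heuristic, not the paper's exact definition:

```python
import heapq

def road_weight(length, congestion):
    # Hypothetical heuristic: penalize congested roads in proportion
    # to their congestion level.
    return length * (1.0 + congestion)

def dijkstra(graph, src, dst):
    """graph: {node: [(neighbor, weight), ...]}.  Returns (path, cost)."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == dst:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], dist[dst]
```

With this weighting, a short but heavily congested road loses to a slightly longer clear detour, which is exactly the jam-avoidance behaviour described above.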
New Physical Layer Network Coding Denoising Mapping Algorithm Based on MQAM
LU Ming-yue, GUO Dao-xing and NIU He-hao
Computer Science. 2017, 44 (Z6): 284-287.  doi:10.11896/j.issn.1002-137X.2017.6A.065
Abstract PDF(360KB) ( 951 )   
References | Related Articles | Metrics
To overcome the problem of constellation point ambiguity in physical layer network coding (PLNC), this paper proposed a new QAM-based PLNC de-noising mapping scheme. In this scheme, the relay node rearranges the M-QAM constellation mapping and merges constellation points according to a certain rule. Compared with the traditional scheme, this design reduces the number of constellation points by nearly half, which increases the Euclidean distance between adjacent points in the constellation and thus improves the BER performance. In addition, the relay node only needs demodulation-remapping-modulation, which significantly reduces the processing complexity. Simulation results show the effectiveness of our scheme.
Multi-channel MAC with QoS Provisioning for Distributed Cognitive Radio Networks
SUN Wei and HUANG Jin-ke
Computer Science. 2017, 44 (Z6): 288-293.  doi:10.11896/j.issn.1002-137X.2017.6A.066
Abstract PDF(585KB) ( 797 )   
References | Related Articles | Metrics
The insufficiency of dynamic resource availability and the lack of a central control unit pose many challenges when designing a MAC protocol for a distributed cognitive radio network. In this paper, we proposed a novel MAC design for distributed cognitive radio networks which provides an efficient approach to address the quality of service (QoS) requirements of delay-sensitive applications by giving such applications higher priority during channel reservation. It also combats other major challenges such as low spectrum utilization and the multi-channel hidden terminal problem. We developed an analytical framework to study the performance of the proposed protocol, and compared it with two existing protocols. Comparison results show that the proposed MAC outperforms the existing protocols by providing better throughput. The results achieved from the analytical model and validated by simulations show that our simple yet efficient design identifies and fulfils the QoS requirements of delay-sensitive applications, achieves excellent spectrum utilization and handles the multi-channel hidden terminal problem effectively.
Negative Acknowledgement Based Data Delivery Scheme for WISP
ZHANG Wen-bin, LI Er-tao, LI Fei, LI Yan-yan and ZHU Yi-hua
Computer Science. 2017, 44 (Z6): 294-299.  doi:10.11896/j.issn.1002-137X.2017.6A.067
Abstract PDF(1005KB) ( 912 )   
References | Related Articles | Metrics
The wireless identification and sensing platform (WISP) is able to harvest energy from the ultra-high-frequency (UHF) signals transmitted by the radio frequency identification (RFID) reader, so as to power the microprocessor and sensors inside the WISP and deliver the data captured by the sensors to the reader. A NAK (negative acknowledgement) based data delivery scheme was presented, which remedies the channel waste in WISP caused by the high ratio of duplicated packets being transmitted. The experimental results show that the proposed scheme is able to reduce the ratio of duplicated packets and improve the effective throughput.
Data Aggregation Tree Construction and Transmission Scheduling Algorithm Based on Minimum Latency in Wireless Sensor Networks
GAO Lei and HU Yu-peng
Computer Science. 2017, 44 (Z6): 300-304.  doi:10.11896/j.issn.1002-137X.2017.6A.068
Abstract PDF(542KB) ( 836 )   
References | Related Articles | Metrics
Aiming at the large delay of existing data aggregation algorithms in wireless sensor networks, we studied the problem of the minimum-latency data aggregation tree and transmission scheduling. An aggregation tree construction algorithm based on degree constraint (DCAT) was proposed. It traverses the graph in a BFS manner; as each node is traversed, the set of potential parents is determined by identifying the nodes that are one hop closer to the sink, and the potential parent with the lowest degree in the graph is selected as the parent of the currently traversed node. Furthermore, we proposed two new greedy approaches for building a TDMA transmission schedule that performs efficient aggregation on a given tree: WIRES-G and DCAT-Greedy. We evaluated the performance of our algorithms through extensive simulations on randomly generated sensor networks of different sizes and compared them to the previous state of the art. The results show that the new scheduling algorithms, combined with our new tree-building algorithm, obtain significantly lower latencies than the previous best algorithm.
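The tree-building rule described above (BFS layering plus lowest-degree parent selection) can be sketched directly; this is our own minimal reading of the abstract, not the authors' code:

```python
from collections import deque

def build_dcat_tree(adj, sink):
    """adj: {node: [neighbors]}.  Each non-sink node picks, among its
    neighbors that are one hop closer to the sink, the one with the
    lowest degree as its parent.  Returns {node: parent}."""
    # BFS hop counts from the sink.
    hops = {sink: 0}
    queue = deque([sink])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in hops:
                hops[v] = hops[u] + 1
                queue.append(v)
    # Lowest-degree parent among one-hop-closer neighbors.
    parent = {}
    for u in adj:
        if u == sink:
            continue
        candidates = [v for v in adj[u] if hops[v] == hops[u] - 1]
        parent[u] = min(candidates, key=lambda v: len(adj[v]))
    return parent
```

Preferring low-degree parents spreads children across the tree, which shortens the collision-free TDMA schedule that is built on top of it.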
Grouping-based Wireless Sensor Network Multi-rounds Clustering Routing Algorithm
GE Bin, DAI Chen, JI Jie-qu and WU Bo
Computer Science. 2017, 44 (Z6): 305-308.  doi:10.11896/j.issn.1002-137X.2017.6A.069
Abstract PDF(469KB) ( 919 )   
References | Related Articles | Metrics
A grouping-based wireless sensor network multi-round clustering routing algorithm (LEACH-G) was proposed, aiming at the cluster-head energy consumption defects of the LEACH algorithm. Grouping strategies are used in the clustering process, and signposts are used for communication to balance the energy consumption of the entire network. The factors of energy and the distance between nodes and the base station are introduced into the cluster-head selection threshold to reduce network energy consumption. The simulation results show that, compared with related LEACH algorithms, the new algorithm can effectively reduce the average energy consumption of nodes by 10%~15%, significantly extend the network life cycle, and improve the efficiency of the cluster heads.
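A threshold of the kind described might look like the sketch below: the classical LEACH rotation term scaled by residual energy and distance to the base station. The weighting is our own hypothetical choice; the abstract does not give the paper's exact formula.

```python
def leach_g_threshold(r, p, e_res, e_init, d, d_max):
    """LEACH-style cluster-head election threshold for round r with desired
    cluster-head fraction p, scaled (hypothetically) so that nodes with more
    residual energy and nodes nearer the base station elect themselves more often."""
    base = p / (1 - p * (r % int(round(1 / p))))   # classical LEACH rotation term
    return base * (e_res / e_init) * (1 - d / d_max)
```

Each node draws a uniform random number per round and becomes a cluster head if the draw falls below its threshold, so energy-rich, well-placed nodes shoulder the cluster-head role more often.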
VoIP Acoustic Echo Cancellation Algorithm Based on WebRTC
YAO Li and LIU Qiang
Computer Science. 2017, 44 (Z6): 309-311.  doi:10.11896/j.issn.1002-137X.2017.6A.070
Abstract PDF(523KB) ( 2706 )   
References | Related Articles | Metrics
The echo phenomenon is a common problem in voice communication systems, affecting communication quality. This paper presented an echo cancellation algorithm based on WebRTC (Web Real-Time Communication). As the fixed-point realization is limited, the algorithm uses floating-point computation to improve efficiency and accuracy while maintaining speed. The experimental results based on mobile devices show that the proposed algorithm outperforms the original one in echo return loss enhancement with comparable complexity.
Virtual Network Mapping Optimization Based on Improved Ant Colony Algorithm
XIE Yong-hao, GAO Song-feng and DAI Ming-zhu
Computer Science. 2017, 44 (Z6): 312-313.  doi:10.11896/j.issn.1002-137X.2017.6A.071
Abstract PDF(279KB) ( 815 )   
References | Related Articles | Metrics
In virtual network mapping, the mapping results were optimized based on an improved ant colony algorithm. Aiming at optimal utilization of the underlying network resources, a new virtual network mapping algorithm based on an improved ant colony algorithm was proposed. By introducing a Gaussian process model, the convergence of the ant colony optimization algorithm is accelerated, and the real-time requirement of practical applications is satisfied. The results show that the algorithm can significantly reduce the solution time while maintaining the same accuracy.
AP-I:An Index to Quickly Answer Predictive Queries for Moving Objects
LIU Kai-yang
Computer Science. 2017, 44 (Z6): 314-318.  doi:10.11896/j.issn.1002-137X.2017.6A.072
Abstract PDF(749KB) ( 829 )   
References | Related Articles | Metrics
How to quickly answer the question of the future location of a moving object is a fundamental problem for a variety of applications, such as ITS, location-aware advertisement, and moving object monitoring. In this paper, we proposed an innovative AP-I (Adaptive Predictive-Index) which can efficiently answer predictive queries without any objects' historical trajectories. Compared with the existing Predictive Tree[4] index, our index can greatly reduce the overhead of index updates by discovering and utilizing the correlations among objects' paths. Furthermore, by introducing AP (Adaptive Probability) and a pruning procedure, the size of AP-I is further reduced to improve query performance. An extensive set of experiments demonstrates that, compared with Predictive Tree, our AP-I can not only achieve higher accuracy, but also greatly improve the updating and space efficiency with the same query performance.
Self Localization Technology of Wireless Sensor Network Node
XIONG Zhi-li and QU Shao-cheng
Computer Science. 2017, 44 (Z6): 319-321.  doi:10.11896/j.issn.1002-137X.2017.6A.073
Abstract PDF(230KB) ( 1199 )   
References | Related Articles | Metrics
This paper first summarized and analyzed the basic principle and classification of wireless sensor network node localization, concluding that the essence of self-localization technology is an optimization problem. Secondly, on this basis, we took the genetic algorithm (GA), simulated annealing (SA) algorithm, evolutionary strategy and differential evolution algorithm as the research objects, and discussed the advantages and disadvantages of these four typical localization algorithms. Then, combining the respective advantages of the GA and SA algorithms, a genetic simulated annealing (GSA) algorithm was proposed, improving the diversity of the initial population and avoiding the local-optimum problem in sensor node selection. Finally, the improved method was applied to wireless sensor network node localization; the GA, SA and GSA algorithms were simulated in Matlab, the advantages of the GSA algorithm were verified, and a new reference for wireless sensor node self-localization technology was provided.
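A GA + SA hybrid of the kind described can be sketched as a toy optimizer (our own simplification: midpoint crossover, Gaussian mutation, and Metropolis acceptance under a cooling temperature), here minimizing a one-dimensional test function rather than the actual localization residual:

```python
import math
import random

def gsa_minimize(f, bounds, pop_size=20, iters=200, t0=1.0, seed=1):
    """Toy genetic simulated-annealing hybrid: GA-style crossover/mutation
    produces offspring; SA-style Metropolis acceptance decides whether each
    offspring replaces its parent, with temperature t cooling each iteration."""
    random.seed(seed)
    lo, hi = bounds
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    t = t0
    for _ in range(iters):
        new_pop = []
        for x in pop:
            mate = random.choice(pop)
            # Crossover (midpoint) + mutation (Gaussian perturbation).
            child = 0.5 * (x + mate) + random.gauss(0.0, 0.1 * (hi - lo))
            child = min(hi, max(lo, child))
            delta = f(child) - f(x)
            # SA acceptance: always keep improvements; sometimes accept worse,
            # which preserves diversity and escapes local optima early on.
            if delta < 0 or random.random() < math.exp(-delta / max(t, 1e-9)):
                new_pop.append(child)
            else:
                new_pop.append(x)
        pop = new_pop
        t *= 0.97       # cooling schedule
    return min(pop, key=f)
```

In the localization setting, f would be the sum of squared ranging residuals of a candidate node position instead of this test function.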
Introduction of Overhead Reduction for Small Cell Deployment
RU Xin-yu and LIU Yuan
Computer Science. 2017, 44 (Z6): 322-325.  doi:10.11896/j.issn.1002-137X.2017.6A.074
Abstract PDF(400KB) ( 1124 )   
References | Related Articles | Metrics
The demand for mobile data in today's society grows explosively, but the limited capacity of the resources severely affects the expansion and improvement of business capacity. Taking the imbalance in the occurrence of wireless data into account, increasing the deployment of small cells will undoubtedly become an effective approach. By examining methods to reduce the overhead of the uplink and downlink reference signals and control signaling in small cells, this paper introduced a method to improve cell density by deploying small cells. Compared with macro-cell deployments, the proposed method can effectively reduce the cost of small cells and enhance the efficiency of resource usage.
Research on Intrusion Detection of Wireless Sensor Networks Based on Game Theory
XIONG Zi-li, HAN Lan-sheng, XU Xing-bo, FU Cai and LIU Bu-yu
Computer Science. 2017, 44 (Z6): 326-332.  doi:10.11896/j.issn.1002-137X.2017.6A.075
Abstract PDF(398KB) ( 1103 )   
References | Related Articles | Metrics
The wide application of wireless sensor networks extends people's ability to obtain information, but their inherent network characteristics make them more vulnerable to cyber-attack. Current intrusion detection systems target specific attacks but are powerless against others, and their rather high energy consumption reduces the lifetime of the network. This paper proposed an intrusion detection model based on game theory, in which the attack-defense process between the intrusion detection system and the attacker is modeled as a non-cooperative game. To deal with the diversity of network intruder attacks, the game model was improved and a non-cooperative static game model with incomplete information was established. By analyzing the model's mixed Nash equilibrium, the optimal defense strategy is obtained, balancing the detection efficiency and energy consumption of the system. The simulation results show that the intrusion detection system based on game theory not only resists a variety of network attacks effectively, but also reduces energy consumption and prolongs the lifetime of the network.
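The mixed-equilibrium computation can be illustrated on a 2x2 attacker/IDS game (the payoff numbers below are our own toy inspection-game values, not from the paper): each player randomizes so that the opponent is indifferent between its two strategies.

```python
def mixed_equilibrium_2x2(A, B):
    """Mixed Nash equilibrium of a 2x2 bimatrix game with no pure equilibrium.
    A[i][j], B[i][j]: payoffs of the row and column player.
    Returns (p, q): prob. the row player plays strategy 0,
    prob. the column player plays strategy 0."""
    # Row mixes with p so the column player is indifferent:
    #   p*B[0][0] + (1-p)*B[1][0] == p*B[0][1] + (1-p)*B[1][1]
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[1][0] - B[0][1] + B[1][1])
    # Column mixes with q so the row player is indifferent:
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    return p, q

# Row = IDS {monitor, sleep}, column = attacker {attack, idle}.
# Monitoring catches attacks (+1) but wastes energy (-1); sleeping through
# an attack costs -2; the attacker gains +2 only against a sleeping IDS.
A = [[1, -1], [-2, 0]]   # IDS payoffs
B = [[-2, 0], [2, 0]]    # attacker payoffs
p, q = mixed_equilibrium_2x2(A, B)
print(p, q)  # monitor with prob 0.5; attacker attacks with prob 0.25
```

At the equilibrium the IDS need not monitor constantly, which is exactly the detection-versus-energy trade-off the model is meant to balance.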
Scheme of Cloud Audit Data Encryption Based on AES and ECC
CHEN Zhuang and YE Cheng-yin
Computer Science. 2017, 44 (Z6): 333-335.  doi:10.11896/j.issn.1002-137X.2017.6A.076
Abstract PDF(351KB) ( 1136 )   
References | Related Articles | Metrics
Focusing on the data transmission and storage security problems of one-way encryption of cloud audit data, HBES (Hybrid Bidirectional Encryption Scheme) was proposed to tackle this problem. The HBES private key is supplied by the event sponsor and stored locally; a random number, which depends on the response time and external factors, generates the private key via mapping rules. The corresponding simulation experiment shows that, compared with the one-way encryption method, HBES is a more effective and feasible way, in terms of encryption time and security, to achieve encryption and decryption of cloud audit data.
LBS Group Nearest Neighbor Query Method Based on Differential Privacy