Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 42, Issue 9 (2015)
Driving Behavior Identification System and Application
XU Yang, LI Shi-jian, JIAO Wen-jun and PAN Gang
Computer Science. 2015, 42 (9): 1-6.  doi:10.11896/j.issn.1002-137X.2015.09.001
With the rapid development of the automotive industry, people use their cars so frequently that the car can be regarded as a personal private space. As more and more sensors and devices are shipped with cars, increasingly rich services become available. However, most of these services target the general population without considering individual differences. To solve this problem, a user model is necessary to build a user-centric car space that can provide personalized services. For the car industry, a key goal is to improve safety, so modeling driving behavior is one of the key parts of user modeling in the car space. This paper presented a solution for modeling driving behavior in a simulation environment and provided an example of a personalized service based on the model.
Study on Effects of Target Color on Eye Pointing Tasks
ZHANG Xin-yong and XIAO Yuan
Computer Science. 2015, 42 (9): 7-12.  doi:10.11896/j.issn.1002-137X.2015.09.002
As eye tracking technologies become increasingly mature, gaze input devices for end users have appeared on the market, making gaze-based interaction increasingly practical. However, the eyes are not inherently control organs, so visual feedback, whether presented in dynamic or static form in the user interface, can interfere with users' eye movements and thereby affect gaze input. Using two eye pointing task experiments, this paper systematically evaluated the effects of target color on gaze-based interaction in terms of the spatial features of gaze points and human performance criteria. The results indicate that although target color does not significantly change the spatial features of gaze points during fixations for target acquisition, it can indeed affect user performance in eye pointing tasks, especially for targets located at long distances.
Zoom Feature in Image Retrieval System
ZHANG Jin-zhou
Computer Science. 2015, 42 (9): 13-17.  doi:10.11896/j.issn.1002-137X.2015.09.003
Image retrieval systems are user-oriented. The diversity of retrieval results affects users' experiences differently depending on their intents: some users may need different but similar results, which means higher diversity. Nevertheless, current retrieval systems, which are mostly based on query keywords, can hardly capture users' intents directly from their queries. Thus, a new interactive element, the zoom factor, was introduced into the retrieval system to bridge the gap between users' intents and the diversity of retrieval results, enabling users to control the diversity of results directly. We first obtained the images returned by the retrieval system and computed their pairwise visual and semantic distances. Hierarchical clustering was then used to form a clustering tree. Finally, we controlled the expansion of a sub-tree as the user directly tunes the zoom factor. For each expanded sub-tree, the node with the lowest index in the original results was selected as the representative.
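To make the clustering-and-representative step concrete, here is a minimal sketch under stated assumptions: the condensed pairwise distance matrix is precomputed, and the mapping from zoom factor to cut threshold is an illustrative choice, not the paper's exact formulation.

```python
# Hedged sketch: cluster retrieval results and keep one representative per
# cluster; the zoom factor controls how deep the clustering tree is expanded.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def representatives(distances, n_results, zoom):
    """distances: condensed distance matrix; zoom in [0, 1]:
    0 -> one coarse cluster, 1 -> every image kept."""
    tree = linkage(distances, method="average")    # clustering tree
    max_d = tree[:, 2].max()                       # root merge distance
    threshold = (1.0 - zoom) * max_d + 1e-12       # deeper cut as zoom grows
    labels = fcluster(tree, t=threshold, criterion="distance")
    reps = {}
    for idx in range(n_results):                   # scan in original rank order
        reps.setdefault(labels[idx], idx)          # lowest index per cluster
    return sorted(reps.values())
```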
Scudware Mobile:Mobile Middleware for Collaboration of Wearable Devices
DING Yang, WANG Shu-gang, LI Shi-jian and PAN Gang
Computer Science. 2015, 42 (9): 18-23.  doi:10.11896/j.issn.1002-137X.2015.09.004
In recent years, wearable devices have developed rapidly, and devices of various styles and uses appear in abundance. However, most wearable devices work independently, with little communication between devices, so it is very hard to realize collaboration over their data and services. To solve these issues, we proposed a smartphone-centered model that supports collaboration of data and services among devices at the scale of individuals and between people. We designed and implemented Scudware Mobile, a data and service collaboration middleware that runs on the smartphone. It gathers the data and services of the smartphone and the wearable devices and exposes them to users through an open authorization mechanism, realizing the collaboration of data and services. We also implemented an application named MobileTrace on the Scudware Mobile platform, which verifies the usability of Scudware Mobile.
Speech Emotion Recognition Based on Acoustic Features
JIN Qin, CHEN Shi-zhe, LI Xi-rong, YANG Gang and XU Jie-ping
Computer Science. 2015, 42 (9): 24-28.  doi:10.11896/j.issn.1002-137X.2015.09.005
Emotion recognition from speech is a challenging research area with wide applications. This paper explored one of the key aspects of building an emotion recognition system: generating a suitable feature representation. We extracted features from four angles: (1) low-level acoustic features such as intensity, F0, jitter, shimmer and spectral contours, together with statistical functionals over these features; (2) features derived from segmental cepstral-based features scored against emotion-dependent Gaussian mixture models; (3) features derived from a set of low-level acoustic codewords; (4) GMM supervectors constructed by stacking the means, covariances or weights of the adapted mixture components for each utterance. We applied these features to emotion recognition independently and jointly, compared their performance on this task, and built a support vector machine (SVM) classifier on top of them. We tested the performance of these features on several public emotion recognition corpora, including the IEMOCAP corpus in English, the CASIA corpus in Mandarin, and the Berlin EMO-DB in German. On the IEMOCAP database, the four-class emotion recognition accuracy of our system is 71.9%, which outperforms the previously reported best results on this dataset.
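As a concrete illustration of feature type (4), the following is a minimal sketch of a GMM supervector built by MAP-adapting a universal background model's means to one utterance and stacking them; the frame-feature extraction and the relevance factor are simplifying assumptions, not the paper's exact setup.

```python
# Hedged sketch: per-utterance GMM supervector (stacked adapted means)
# fed to an SVM classifier.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def supervector(ubm, frames, relevance=16.0):
    """Stack MAP-adapted component means into one fixed-length vector."""
    resp = ubm.predict_proba(frames)               # (n_frames, n_components)
    n_k = resp.sum(axis=0)                         # soft counts per component
    ex_k = resp.T @ frames / np.maximum(n_k[:, None], 1e-8)
    alpha = (n_k / (n_k + relevance))[:, None]     # adaptation weight
    means = alpha * ex_k + (1 - alpha) * ubm.means_
    return means.ravel()

# Illustrative usage (names are placeholders):
# ubm = GaussianMixture(n_components=64).fit(all_training_frames)
# X = np.stack([supervector(ubm, f) for f in utterance_frames])
# clf = SVC(kernel="linear").fit(X, emotion_labels)
```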
Parallel Acceleration Method for Very High Resolution Remote Sensing Image Registration
HAO Yun-chao and WANG Xian-min
Computer Science. 2015, 42 (9): 29-32.  doi:10.11896/j.issn.1002-137X.2015.09.006
Remote sensing image registration based on the scale-invariant feature transform (SIFT) has the advantages of high accuracy and good stability. However, the method is very time-consuming because of the large size of the images and the huge number of feature points. This paper presented a parallel acceleration method for very high resolution remote sensing image registration that builds the Gaussian pyramid on the GPU. We used shared memory to cache temporary extrema at high speed when identifying keypoints, which effectively decreases the keypoint extraction time. Meanwhile, we divided the whole image into blocks and used OpenMP to match the feature points and to parallelize fitting of the affine model. Compared with the traditional SIFT registration method, this method is 3 times faster. We concluded that the runtime of keypoint extraction has a linear relationship with the number of keypoints, and that the acceleration ratio rises as the keypoint density increases.
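For intuition about the block-parallel step, here is a hedged CPU-side analogue: the image is split into tiles and SIFT feature extraction runs per tile in parallel. OpenCV SIFT and Python threads stand in for the paper's GPU/OpenMP implementation, and the tile size is an arbitrary illustrative choice.

```python
# Hedged sketch: tile-parallel keypoint extraction over a large image.
import cv2
from concurrent.futures import ThreadPoolExecutor

def tile_keypoints(image, tile=1024):
    sift = cv2.SIFT_create()
    boxes = [(y, x) for y in range(0, image.shape[0], tile)
                    for x in range(0, image.shape[1], tile)]

    def work(box):
        y, x = box
        patch = image[y:y + tile, x:x + tile]
        kps, desc = sift.detectAndCompute(patch, None)
        for kp in kps:                       # shift back to global coordinates
            kp.pt = (kp.pt[0] + x, kp.pt[1] + y)
        return kps, desc

    with ThreadPoolExecutor() as pool:       # OpenCV releases the GIL here
        return list(pool.map(work, boxes))
```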
Research and Implementation of Commercial Site Recommendation System Based on LBSN
QU Hong-yang, YU Zhi-wen, TIAN Miao and GUO Bin
Computer Science. 2015, 42 (9): 33-36.  doi:10.11896/j.issn.1002-137X.2015.09.007
With the development and popularization of smart mobile devices and the continuing advance of spatial positioning technology, location-based social networks (LBSNs) are widely used. Many users check in on an LBSN and comment on their check-in activities, which not only records their spatio-temporal behavior tracks but also provides great opportunities to study user behavior patterns and preference characteristics. This paper proposed a commercial site recommendation system based on LBSN data. Firstly, it analyzed the characteristics of check-in time, check-in location and the category of the check-in venue. Then it proposed four kinds of factors that affect retail site selection: diversity, competitiveness, relevance and passenger flow. Finally, a system was implemented that provides the best candidate site based on these factors, and experiments were conducted to verify the recommendation results, which comply with the relevant expectations.
Topology Generation Method of Road Network Based on GPS Trajectories
TAN Kang, LIU Jian-xun and LIAO Zhu-hua
Computer Science. 2015, 42 (9): 37-40.  doi:10.11896/j.issn.1002-137X.2015.09.008
Automatic generation of road network topology is based on road extraction and road intersection detection, one of the hot research topics in intelligent traffic control and automatic navigation services, and GPS trajectories generated by floating cars or taxis can reflect the road topology. Therefore, this article presented a method to extract road intersections and build the topology. It extracts road intersections, builds the topology from geographical location information based on large-scale GPS trajectories without an auxiliary road map, and calculates the network distance between any two adjacent road intersections. The results show that our method can extract road intersections and build the topology effectively. In our experiment, with the road width set to 55 meters, the accuracy of intersection extraction is 87.08%, the average error rate of distances between adjacent intersections is 8.87%, and the connectivity relationships between adjacent intersections are obtained correctly.
Algorithm for Mining Association Rules Based on Dynamic Hashing and Transaction Reduction
CUI Liang, GUO Jing and WU Ling-da
Computer Science. 2015, 42 (9): 41-44.  doi:10.11896/j.issn.1002-137X.2015.09.009
Mining association rules searches for frequent data patterns in a given dataset and finds the correlations between them. This paper analyzed the time and space inefficiencies of the classical Apriori algorithm and the effect of the data format on the algorithm's efficiency. An effective algorithm based on dynamic hashing and transaction reduction was proposed and compared with the Apriori algorithm. Experimental results verify the correctness and effectiveness of the algorithm.
Dynamic Scheduling Algorithm in Hadoop Platform
GAO Yan-fei, CHEN Jun-jie and QIANG Yan
Computer Science. 2015, 42 (9): 45-49.  doi:10.11896/j.issn.1002-137X.2015.09.010
With the growth of clusters and of users' QoS requirements in the cloud environment, it becomes much harder to meet the requirements of jobs and users with traditional strategies. To adjust the scheduler dynamically according to the status of jobs and resources, this paper proposed a dynamic scheduling method based on job classification for the Hadoop platform. The proposed method employs the Naive Bayesian method to classify jobs, with human inference added to preset job weights according to their types. Then the scheduling priority of each job is set dynamically using a utility function based on the user's expected completion time and the estimated completion time of the job. The experimental results show that the proposed method can not only reduce the classification time, but also greatly improve scheduling adaptivity and the user's QoS.
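The dynamic-priority idea can be sketched as follows: a job's priority is a utility of how its estimated completion time compares with the user's expected completion time, scaled by a class weight from the job-type classifier. The utility shape and weights here are illustrative assumptions, not the paper's exact function.

```python
# Hedged sketch: utility-based scheduling priority for a classified job.
def priority(expected_finish, estimated_finish, class_weight):
    slack = expected_finish - estimated_finish   # >0: on track, <0: late
    if slack >= 0:
        urgency = 1.0 / (1.0 + slack)            # rises as slack shrinks
    else:
        urgency = 1.0 + abs(slack) / expected_finish  # boost late jobs
    return class_weight * urgency

# Illustrative usage: re-sort the queue each scheduling round.
# jobs.sort(key=lambda j: priority(j.expected, j.estimated, j.weight),
#           reverse=True)
```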
Comparison of Direct and Indirect Pen Tilt Input Performance
XIN Yi-zhong, MA Ying and YU Xia
Computer Science. 2015, 42 (9): 50-55.  doi:10.11896/j.issn.1002-137X.2015.09.011
More and more pen devices use tilt to extend their parallel input capacity. However, because of differences in operating environment and visual feedback, the performance of pen tilt input often differs between direct and indirect devices. To explore these differences, we empirically investigated the speed, stability and accuracy of pen tilt input in target selection tasks across different angle intervals. We discussed the possible reasons why the performance of pen tilt input differs and suggested the optimal tilt angle interval and tilt target. Experimental results indicate that human abilities to control pen tilt on direct and indirect input devices are different: users complete tilt selection tasks faster on direct devices, but less stably and accurately, than on indirect ones. A 20° tilt angle interval is optimal for selection on both kinds of devices, while the indirect device is better if a smaller interval is given. This study thus gives guidelines for pen interface design under direct and indirect pen tilt conditions and presents user habits with these two kinds of devices as a reference.
Representation Ability Research of Auto-encoders in Deep Learning
WANG Ya-si, YAO Hong-xun, SUN Xiao-shuai, XU Peng-fei and ZHAO Si-cheng
Computer Science. 2015, 42 (9): 56-60.  doi:10.11896/j.issn.1002-137X.2015.09.012
Deep learning frameworks and unsupervised learning methods have become increasingly popular and have attracted the attention of many researchers in machine learning and artificial intelligence. This paper started from the "building blocks" of deep learning methods and focused on the representation ability of auto-encoders, especially their ability to reduce dimensionality and the stability of their representations. We expected that starting from the basics of deep learning would help us understand it better. Firstly, the auto-encoder and the restricted Boltzmann machine are two building blocks of deep learning methods; both can be used to transform representations and can be seen as relatively new nonlinear dimensionality reduction methods. Secondly, we investigated whether the auto-encoder is a good representation transformation method in the context of understanding visual features, including evaluating a single-layer auto-encoder's representation ability against the classic method of principal component analysis. Experiments on raw pixels and local descriptors demonstrate the auto-encoder's ability to reduce dimensionality, the stability of its representation ability, and the effectiveness of the proposed AE-based transformation strategy. Finally, future research directions were discussed.
Face Recognition with Multiple Variations Using Deep Networks
WANG Ying, FAN Xin, LI Hao-jie and LIN Miao-zhen
Computer Science. 2015, 42 (9): 61-65.  doi:10.11896/j.issn.1002-137X.2015.09.013
In automatic face recognition (AFR) applications, input images typically present multiple types of variation in expression, resolution and pose. Existing approaches attempt to seek a common feature space shared by these variations through linear or locally linear mappings. We used deep networks stacked from restricted Boltzmann machines to discover intrinsic non-linear representations of these variations. Deep learning provides insight into how high-dimensional data are organized in a lower-dimensional feature space and also improves classification and recognition performance. Meanwhile, we placed a supervised regression layer on top of the network so that both feature extraction and recognition are achieved in a unified deep framework. In the pre-training phase, the whole network is initialized with a training set covering different poses and various expressions at high resolution (HR) and low resolution (LR). In the fine-tuning phase, the parameters are adjusted via standard back propagation using the errors between the network output and the labels. In the test phase, a profile face image is chosen randomly from the Probe set and its feature vector in the subspace is obtained; comparing it with all vectors in the Gallery set, we determine the identity by the nearest neighbor. We performed extensive experiments on the CMU-PIE face database, which presents rich expressions and a wide range of pose variations. The experiments show the superior recognition rate of our approach over state-of-the-art linear (or locally linear) methods.
Research on Field Influence of Digu Users
LI Min, XIAO Sheng, LIU Zheng-jie and ZHANG Jun
Computer Science. 2015, 42 (9): 66-69.  doi:10.11896/j.issn.1002-137X.2015.09.014
Social media is developing rapidly, which leads people to pay more attention to the behaviors of influential social media users and their effects on others. Some studies have dealt with measuring the social influence of social media users. However, they usually chose global metrics, such as the number of posts and the number of fans, rather than metrics that consider how social influence varies across fields, so the measurements are general and unspecific. This research took online data of Digu users as its object, studied the classification of users' posts, and proposed the concept of field influence together with a measurement method. Finally, the method was verified in a sample study. The results show that it can be used effectively to measure users' social influence within different fields. It was also found that metrics such as the number of fans have no positive correlation with a user's field influence.
Intelligent Selection Algorithm of Measurement Nodes in Distributed Network Measurement
ZHANG Rong, JIN Yue-hui, YANG Tan and RONG Zi-zhan
Computer Science. 2015, 42 (9): 70-77.  doi:10.11896/j.issn.1002-137X.2015.09.015
The complexity of large-scale networks calls for monitoring techniques designed with special care. Automatic selection of measurement nodes must strike a balance between cost and coverage. With an appropriate selection of measurement nodes, not only can the performance status of the overall network be obtained, but the impact of monitoring on the monitored network, in terms of bandwidth and consumption of software/hardware resources, can also be effectively reduced. By targeting a minimum number of measurement nodes, applying ant colony optimization as the basic algorithm, and making improvements and innovations on that foundation, an intelligent selection algorithm for measurement nodes was formed and proposed.
Improving Resolution Ability of Spectral Estimator by Weighted Subspace Projection
BAO Jian-dong, XU Wei-li, HU Wei-wei and XIE Xiao-min
Computer Science. 2015, 42 (9): 78-82.  doi:10.11896/j.issn.1002-137X.2015.09.016
In order to counteract the loss of resolution under conditions such as low signal-to-noise ratio and small numbers of snapshots, two weighted projection methods were proposed, for the signal subspace and the noise subspace respectively. The signal-subspace weights are the reciprocals of the margins between the principal eigenvalues and the noise power, and the corresponding eigenvectors are weighted by them. For the noise subspace, the elements of the orthonormal basis are weighted by the projections obtained by projecting the integral of the steering vector over the field of view onto each basis element. Simulation results show that the proposed methods lower the signal-to-noise-ratio threshold and the snapshot threshold, so they achieve better resolution and higher precision in snapshot-deficient and low-SNR scenarios.
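The signal-subspace weighting can be sketched as follows, assuming the noise power is estimated as the mean of the minor eigenvalues; array shapes and this estimate are simplifying assumptions, not the paper's exact derivation.

```python
# Hedged sketch: weight principal eigenvectors by the reciprocal of the
# margin between their eigenvalues and the estimated noise power.
import numpy as np

def weighted_signal_projector(R, n_sources):
    """R: (M, M) sample covariance matrix; returns a weighted projector."""
    vals, vecs = np.linalg.eigh(R)              # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]      # descending order
    noise_power = vals[n_sources:].mean()       # noise floor estimate
    w = 1.0 / (vals[:n_sources] - noise_power)  # reciprocal margins
    Es = vecs[:, :n_sources]                    # signal subspace basis
    return Es @ np.diag(w) @ Es.conj().T
```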
Adaptive Cepstral Distance-based Voice Endpoint Detection of Strong Noise
ZHAO Xin-yan, WANG Lian-hong and PENG Lin-zhe
Computer Science. 2015, 42 (9): 83-85.  doi:10.11896/j.issn.1002-137X.2015.09.017
In the presence of noise interference, the accuracy of traditional speech endpoint detection declines dramatically. In order to effectively distinguish speech from non-speech in a strong background noise environment, this paper presented a voice endpoint detection method for strong noise based on adaptive cepstral distance. The method introduces a cepstral distance multiplier and a threshold increment coefficient: different cepstral distance multipliers are used for different SNRs, and an adaptive decision threshold is used for voice activity detection. MATLAB simulation results show that, under different background noises and different SNRs, the method attains high detection accuracy. It outperforms the traditional endpoint detection method and is suitable for endpoint detection under strong background noise.
Study on Quality Assessment Model for Mobile Videos over 3G Network
CHEN Xi-hong, JIN Yue-hui and YANG Tan
Computer Science. 2015, 42 (9): 86-93.  doi:10.11896/j.issn.1002-137X.2015.09.018
With the rapid development of 3G network technology, mobile video services have attracted more attention from mobile users than ever. Compared to traditional wired network video services, mobile video is more sensitive to transport conditions, and bit errors are more likely. Moreover, mobile video performance is limited by the hardware configuration of mobile devices, so it is necessary to choose an appropriate video encoding for mobile videos. Other factors affect the quality of user experience for mobile video as well, such as video content and user interest. All these factors make it difficult for mobile video service providers to evaluate mobile video quality and hence to improve the mobile video experience for users. At present, most research on video quality assessment depends on quality of service, but this is not an effective method because it does not take user-experience factors into consideration. Therefore we proposed a new subjective quality assessment for mobile video, adopting a user-intervention approach when collecting data and focusing on user experience, in order to achieve scientific and effective strategies for evaluating the quality of mobile video services.
Dynamic Linear Relevance Based Access Control Model
WU Chun-qiong and HUANG Rong-ning
Computer Science. 2015, 42 (9): 94-96.  doi:10.11896/j.issn.1002-137X.2015.09.019
Access control is an important issue in network-based resource sharing. In order to improve the ability of access control to make predictions about unknown nodes, this paper proposed an access control model based on dynamic linear relevance. Firstly, we proposed an access control architecture containing the service requester, service recommendation node and service provider. Then, we proposed a dynamic linear approach to computing trust scores: the global trust score of a node is represented as a linear combination of its direct trust score and its recommendation trust score, and the weights of the two are adjusted according to the history of interactions between nodes. Finally, simulation experiments show that, compared with related access control models, the proposed model has a higher prediction success rate and can resist malicious attacks more effectively.
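The linear trust combination can be sketched as follows; the weight-update rule, which shifts toward direct experience as the interaction history grows, is an illustrative assumption rather than the paper's exact formula.

```python
# Hedged sketch: global trust as a history-weighted linear blend of direct
# and recommendation trust scores.
def global_trust(direct, recommended, n_history):
    """direct, recommended in [0, 1]; n_history: past interactions."""
    w = n_history / (n_history + 10.0)   # more history -> rely on direct trust
    return w * direct + (1.0 - w) * recommended
```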
Multi-slice Multi-cover Routing Protocol in Wireless Multimedia Sensor Networks
LI Rui-yao, BAI Guang-wei, SHEN Hang, DI Hai-yang and ZHAO Yun-hua
Computer Science. 2015, 42 (9): 97-101.  doi:10.11896/j.issn.1002-137X.2015.09.020
This paper proposed a multi-slice multi-cover routing (MKCR) protocol for wireless multimedia sensor networks. We used video sensors with fixed FoVs and fixed locations during monitoring. The purpose is to enhance surveillance quality while providing multi-dimensional coverage for key objectives. With the cooperation of multiple sensors, the protocol computes multi-slice, k-cover paths and allocates each sensor's active time, in order to prolong the network lifetime. We also designed two algorithms, MKCR-K and MKCR-T. The former addresses the situation where the k requests are determined by the system before computing the k-cover paths. The latter determines the coverage requirement of each target according to the sensor deployment, with the objective of achieving multi-cover and energy efficiency. Our simulation results show that the proposed MKCR can coordinate the working sensors and meet the multi-cover requirement for key objectives while prolonging the network lifetime.
On Degrees of Freedom of Cognitive Networks from Point-to-Point Channel to Two-user Interference Channel
WANG Yuan-yuan, LIU Feng, ZENG Lian-sun and ZHANG You-jun
Computer Science. 2015, 42 (9): 102-106.  doi:10.11896/j.issn.1002-137X.2015.09.021
This paper investigated the degrees of freedom (DoF) of two cognitive networks in which the primary network is a two-user interference channel (IC), the secondary network is a point-to-point channel (PTP), and the secondary network has cognition of the messages or signals of the primary network. Three cognition scenarios were considered: (1) the PTP transmitter is cognitive of the IC transmitters' messages; (2) the PTP receiver is cognitive of the IC receivers' signals; (3) the PTP receiver is cognitive of the IC transmitters' messages. An inner bound on the DoF was obtained using interference alignment and interference neutralization, and an outer bound was proven. We found that the inner and outer bounds are tight and that transmitter cognition achieves higher DoF than receiver cognition.
Performance Bottlenecks Location Scheme Based on Structural Features of Service Composition Model
SHEN Hua, HE Yan-xiang and ZHANG Ming-wu
Computer Science. 2015, 42 (9): 107-117.  doi:10.11896/j.issn.1002-137X.2015.09.022
The key to achieving customer preference is the performance of the Web service composition in satisfying capability requirements, and how to identify and eliminate the performance bottlenecks of Web service compositions is still a challenging research issue. Facing this challenge, this paper proposed a performance bottleneck analysis scheme based on the structural features of the service composition model. To ensure the viability and effectiveness of the scheme, the viability of its technical route was proved; the key underlying problem to be solved is finding the smallest structurally complete set of the service composition model. Based on stochastic Petri nets, this paper proposed a performance analysis model for Web service composition and introduced methods for solving and proving the smallest structurally complete set. Finally, the effectiveness of the scheme was illustrated with an application example.
Game Logic Formal Model of Rational Secure Protocol
LIU Hai, PENG Chang-gen, ZHANG Hong and REN Zhi-jing
Computer Science. 2015, 42 (9): 118-126.  doi:10.11896/j.issn.1002-137X.2015.09.023
The fairness and security properties of traditional secure protocols can be analyzed and verified with game logics such as alternating-time temporal logic (ATL) and alternating-time temporal epistemic logic (ATEL). However, once players' selfishness about knowledge is taken into consideration, ATL and ATEL cannot formally analyze and verify rational secure protocols. By introducing a utility function and a preference relation into the concurrent epistemic game structure, a new concurrent epistemic game structure, rCEGS, is obtained. By introducing an action parameter ACT into the cooperation modality operator 《Γ》, a novel alternating-time temporal epistemic logic, rATEL-A, is obtained that can formally analyze rational secure protocols. rATEL-A was then used to construct a formal model of a two-party rational secure protocol. Based on an extensive-form game equivalent to the rCEGS, a two-party rational exchange protocol was formally analyzed, which shows that the formal model can effectively analyze the correctness, rational security and rational fairness of rational secure protocols.
Research on QoS Quantitative Evaluation Method of Cloud Service Composition Based on Markov Process
JIAO Yang, CHEN Zhe, LIANG Yuan-ning and LI Dong-xing
Computer Science. 2015, 42 (9): 127-133.  doi:10.11896/j.issn.1002-137X.2015.09.024
This paper studied a QoS quantitative evaluation method for cloud service composition. Based on the virtual, dynamic and stochastic characteristics of the composed cloud service environment, this paper put forward a realization framework for cloud service composition based on BPEL workflow, established the composed cloud service process net (CCSPNet) based on stochastic Petri net theory, evaluated performance with a Markov process, and proposed a six-dimensional QoS evaluation system and a QoS quantitative evaluation method for cloud service composition combined with the CCSPNet model. An application example shows that the new method has good dynamic adaptability and flexibility and can effectively satisfy the QoS evaluation requirements of cloud service application environments.
Risk Assessment of Software Vulnerability Based on GA-FAHP
TANG Cheng-hua, TIAN Ji-long, TANG Shen-sheng, ZHANG Xin and WANG Lu
Computer Science. 2015, 42 (9): 134-138.  doi:10.11896/j.issn.1002-137X.2015.09.025
Aiming at the problem of determining vulnerability risk levels in software systems, a genetic fuzzy analytic hierarchy process (GA-FAHP) approach was proposed to evaluate the risk of software vulnerabilities. Firstly, the improved FAHP is used to calculate the weight of each risk factor, and the fuzzy judgment matrix is established. Secondly, the consistency checking and correction of the fuzzy judgment matrix are transformed into a nonlinearly constrained optimization problem, which is solved with a genetic algorithm. Finally, the risk degree of the vulnerability is calculated by the GA-FAHP algorithm. Experimental results show that this method has good accuracy and validity, and it provides a feasible way to assess software vulnerability risk.
Research on Resource Deployment Model Based on Active Prediction in Cloud Computing
MA Zi-tang, CHEN Peng and LI Zhao-xing
Computer Science. 2015, 42 (9): 139-143.  doi:10.11896/j.issn.1002-137X.2015.09.026
With the growing popularity of cloud computing, more and more users choose to migrate their business to cloud computing systems. Users' usage habits and routine social working patterns flow into the cloud computing system along with the influx of large numbers of users, such as intensive requests to the system for resource nodes around 8:00, which leads to a kind of predictable resource conflict. In view of these problems, a resource deployment model based on active prediction was proposed. The task request volume of the next cycle is predicted with the Holt-Winters seasonal exponential smoothing model over the algorithm's cycle length, and the decision of whether to respond to the current task request volume, together with the specific amount, location and other parameters, is made by the designed active prediction algorithm, achieving active response to users' usage patterns. A simulation experiment was conducted using CloudSim to judge the performance of the proposed model systematically. Experimental results show that the AF-HW model can effectively enhance the single-point and overall response rates when responding to predictable massive concurrent task requests, so that users get a better experience.
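For reference, here is a minimal sketch of the prediction step: additive Holt-Winters (triple exponential smoothing) forecasting the next cycle's request volume. The smoothing constants and the additive seasonal form are illustrative choices; the paper's AF-HW model may differ in detail.

```python
# Hedged sketch: one-step-ahead additive Holt-Winters forecast of request
# volume; y is the history, season_len the cycle length in samples.
def holt_winters_next(y, season_len, alpha=0.5, beta=0.1, gamma=0.1):
    level, trend = y[0], y[1] - y[0]
    seasonal = [y[i] - level for i in range(season_len)]
    for t, obs in enumerate(y):
        s = seasonal[t % season_len]
        last_level = level
        level = alpha * (obs - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        seasonal[t % season_len] = gamma * (obs - level) + (1 - gamma) * s
    return level + trend + seasonal[len(y) % season_len]  # next-cycle forecast
```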
Distribution of a Family of Five-valued Cross Correlation Function
XU Li-ping and HU Bin
Computer Science. 2015, 42 (9): 144-146.  doi:10.11896/j.issn.1002-137X.2015.09.027
Fewvalued cross correlation functions of msequences always interest the researchers.Multivariateequation of higher degree over finite field becomes the key to determine this problem.Most studies ofcross correlation functions when decimated factor is the form of d=(pl+1)/(pk+1) are based on binary msequences(p=2).The paper took pary msequences into account when l=2k.Using the theory of quadratic form,we proved that their cross correlation function is fivevalued.Taking association scheme into consideration,we transformed the problem of cross correlation distribution to the study of the ranks of quadratic form.Finally the complete fivevalued cross correlation distribution of pary msequences was determined.
Collision Attack on Reduced-round SNAKE(2)
QIU Feng-pin, WEI Hong-ru and PAN Jin-hang
Computer Science. 2015, 42 (9): 147-150.  doi:10.11896/j.issn.1002-137X.2015.09.028
In order to study the resistance of the SNAKE(2) algorithm to collision attacks, a 6-round distinguisher of SNAKE(2) based on an equivalent structure of the algorithm was proposed. Attacks on 7/8/9 rounds of SNAKE(2) were performed by adding appropriate rounds before or after the 6-round distinguisher. The data complexities are O(2^6), O(2^6.52) and O(2^15), and the time complexities are O(2^9.05), O(2^18.32) and O(2^26.42). These results are better than those of the Square attack.
Method of PUE Attack User Detection by Weighted Fractal Dimension
TIAN Hong-yuan, YAO Yin-di, ZHENG Wen-xiu and WANG Hong-wei
Computer Science. 2015, 42 (9): 151-153.  doi:10.11896/j.issn.1002-137X.2015.09.029
PUE attack users illegally occupy the primary user's channel, reducing the spectrum resources available to cognitive users. Based on the fingerprint characteristics of the radiation source, this paper defined a weighted fractal dimension and used it to depict the intra-pulse fluctuation characteristics of the communication signal envelope. We proposed a new PUE attack detection method based on this radiation-source feature extraction. Theoretical analysis and experimental results show that the method can effectively distinguish primary users from PUE attack users, and it can play a significant role in the field of information security.
Self-adaptive Test Case Prioritization Based on History Information
CHANG Long-hui, MIAO Huai-kou and XIAO Lei
Computer Science. 2015, 42 (9): 154-158.  doi:10.11896/j.issn.1002-137X.2015.09.030
Test case prioritization (TCP), which can effectively improve testing efficiency and reduce testing time and labor costs in iterative software development, has attracted widespread attention from researchers, and many optimization methods have been proposed. However, most methods lean toward requirement- and coverage-based TCP techniques and keep a static order. This paper presented a TCP technique based on history information that dynamically adjusts the prioritization of test cases during their execution. The method helps find defects as early as possible and achieves the goal of early bug detection. Finally, we applied the method to a project developed by our research group to verify its effectiveness.
Data Driven Feature Extraction for Mining Software Repositories
LI Xiao-chen, JIANG He and REN Zhi-lei
Computer Science. 2015, 42 (9): 159-164.  doi:10.11896/j.issn.1002-137X.2015.09.031
In mining software repositories (MSR), software tasks are usually transformed into data mining problems. Domain-specific features heavily impact the solving of software tasks; however, no systematic investigation has been conducted on extracting features for specific software tasks. In this study, data driven feature extraction (DDFE) was proposed as a new feature extraction approach. For a software task, DDFE extracts a set of software data (e.g., source code, bug reports) and employs volunteers to accomplish the task manually. During the process, the volunteers are asked to submit the reasons behind their decisions, from which DDFE extracts domain-specific features for the software task. Experimental results on the task of bug report summarization demonstrate that DDFE can find effective features and achieve better predictive results than the state-of-the-art algorithm in the literature.
Measurement Component Transfer Model-based Conformance Testing Approach of Reconfigurable Measurement Component
WANG Jing, WANG Bin-qiang and SHEN Juan
Computer Science. 2015, 42 (9): 165-170.  doi:10.11896/j.issn.1002-137X.2015.09.032
In reconfigurable network measurement systems, whether the state transfer process of a measurement component conforms to the specification is the key content of measurement component conformance testing. This paper proposed a workflow-based MCTM (measurement component transfer model) for multi-measurement-component systems and specified the related definitions of the MCTM model. A conformance test-case generation algorithm based on MCTM (CTBMCTM) was then proposed. The experimental results show that the approach can discover abnormal measurement components exactly, while requiring shorter testing stubs and less running time than T&GS.
Fine-grained Variable Entity Identification Algorithm Based on Memory Access Model
JING Jing, JIANG Lie-hui, HE Hong-qi and ZHANG Yuan-yuan
Computer Science. 2015, 42 (9): 171-176.  doi:10.11896/j.issn.1002-137X.2015.09.033
There are two popular methods for variable identification. One is based on specific compiler habits and matching memory address access patterns; the other is based on a memory model and abstract interpretation. The former is applicable only to specific compilers; the latter often yields coarse-grained variables and a higher misidentification rate, because it has to balance accuracy against time cost. In this paper, a fine-grained memory access model was defined that can simulate fine-grained memory operations. An abstract-state generation algorithm was given based on this model, which tracks and records fine-grained data information for the advanced intermediate language HBRIL. Then a novel variable entity identification algorithm over memory regions was designed according to this data information. Finally, the variables' refinement proportion and recognition rate were measured. The test results show that our approach achieves a higher identification rate for dynamically allocated variables.
Improvement of Recovery Mechanism for Lustre Metadata Service
QIAN Ying-jin, LI Yong-gang, WANG Yi and ZHOU Lin-qi
Computer Science. 2015, 42 (9): 177-182.  doi:10.11896/j.issn.1002-137X.2015.09.034
Lustre's reboot recovery algorithm requires all clients to reconnect to the server within a special recovery time window; clients then resend uncommitted transactional requests and the server replays them strictly in transaction-number order. These recovery conditions are too strict. To improve Lustre's recoverability and availability, this paper proposed version-based recovery and commit-on-share algorithms. They extend Lustre's metadata update algorithm and recovery algorithm respectively, and allow clients to rejoin the cluster through recovery under more relaxed conditions, according to the dependences between transactions. Finally, the performance of the improved recovery algorithms was evaluated in a series of experiments.
Research on Big Data Retrieval Filter Model for Batch Processing
LI Zhao-xing and MA Zi-tang
Computer Science. 2015, 42 (9): 183-190.  doi:10.11896/j.issn.1002-137X.2015.09.035
As a new strategic resource, big data plays an important role in the information field. Big data retrieval often operates at the scale of billions or even tens of billions of records, so the low efficiency of traditional query mechanisms has become the norm. Therefore, improving the efficiency of big data queries and reducing their burden has become an important aspect of big data research. To speed up big data queries and reduce their load, we proposed IMFM, a big data retrieval filter model for batch processing, demonstrated its support for multi-dimensional queries, and gave its deployment strategy. By deploying the model at appropriate positions in the index structure, IMFM can quickly filter the search requests passing through a node so that lower nodes need not be searched, reducing the retrieval cost. Experiments show that, in a batch-oriented big data environment, IMFM can effectively reduce the query path length for both single- and multi-dimensional queries, improve retrieval efficiency, and significantly reduce the workload of the big data storage and processing platform.
Personalized Friends Recommendation System Based on Game Theory in Social Network
YANG A-tiao, TANG Yong, WANG Jiang-bin and LI Jian-guo
Computer Science. 2015, 42 (9): 191-194.  doi:10.11896/j.issn.1002-137X.2015.09.036
With the expansion of online social networks, information overload has become one of the most critical problems in network analysis, and the complexity of the entities and structure of a social network makes personalized recommendation challenging. In this paper, a game-theoretical approach to link prediction was proposed; the simplest way to formalize friend recommendation is to cast it as a link prediction problem. Finally, we compared our approach with standard local measures and demonstrated a significant performance benefit in terms of mean average precision and mean reciprocal rank.
Efficient Algorithm for Large-scale Support Vector Machine
FENG Chang, LI Zi-da and LIAO Shi-zhong
Computer Science. 2015, 42 (9): 195-198.  doi:10.11896/j.issn.1002-137X.2015.09.037
Solving a large-scale support vector machine (SVM) requires large amounts of memory and computation time, so large-scale SVMs are usually trained on computer clusters or supercomputers. An efficient algorithm for large-scale SVMs was presented that can run on an ordinary PC. First, the large-scale training set is subsampled to reduce the data size. Then, a random Fourier mapping is explicitly applied to the subsample to generate a random feature space, making it possible for a linear SVM to uniformly approximate the Gaussian-kernel SVM. Finally, a parallelized linear SVM algorithm is used to speed up training further. Experimental results on benchmark datasets demonstrate the feasibility and efficiency of the proposed algorithm.
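The pipeline can be sketched with off-the-shelf components: subsample, map through an explicit random Fourier feature approximation of the Gaussian kernel, then train a linear SVM. The component count, gamma, and subsample size are illustrative, and scikit-learn's LinearSVC stands in for the paper's parallel linear-SVM solver.

```python
# Hedged sketch: random Fourier features + linear SVM as an approximation
# of a Gaussian-kernel SVM on subsampled data.
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.svm import LinearSVC

def train_approx_rbf_svm(X, y, n_sub=100_000, n_features=2048, gamma=0.1):
    idx = np.random.choice(len(X), size=min(n_sub, len(X)), replace=False)
    rff = RBFSampler(gamma=gamma, n_components=n_features).fit(X[idx])
    clf = LinearSVC(C=1.0).fit(rff.transform(X[idx]), y[idx])
    return rff, clf   # predict with: clf.predict(rff.transform(X_new))
```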
Empirical Research Based on Web Data:An Analysis on Spatio-temporal Effect of City Rail Transit on Residential House Prices
LIU Kang, LI Zhou-jun and ZHANG Xiao-ming
Computer Science. 2015, 42 (9): 199-203.  doi:10.11896/j.issn.1002-137X.2015.09.038
Based on real Web data collected by Web crawlers, this paper discussed the impact of a rail transit system on surrounding residential house prices, taking Changsha subway line 2 as a case study. First, by analyzing the features of house prices and their influencing factors, a hedonic model of the formation and fluctuation of residential house prices in Changsha was established, with 13 feature variables covering location, neighborhood and structural features. Our study confirmed, via significance testing, the significant influence of subway features on residential house prices, with further analysis of the range over which subway stations exert significant influence. Second, based on visualization of the price distribution before and after the opening of Changsha subway line 2, we proposed the hypothesis that house prices decline around subway stations in the downtown area and rise around subway stations in suburban areas; the hypothesis was validated using hypothesis testing.
Collaborative Filtering Recommendation Algorithm Based on Ontology Semantic Similarity
WU Zheng-yang, TANG Yong, FANG Jia-xuan and DONG Hao-ye
Computer Science. 2015, 42 (9): 204-207.  doi:10.11896/j.issn.1002-137X.2015.09.039
Collaborative filtering is a personalized recommendation method based on users' preferences. It includes two steps: first, according to the information marked by users or items, the similarity of users or items is calculated and a neighbor set is determined; second, the similarities are sorted and users or items are recommended accordingly. In this process, similarity calculation is the core problem. In recent years, methods that use users' social network information to calculate similarity have received wide attention: users' registration information, item rating information and social information can all serve as a basis for comparison. Building on this, we constructed user ontologies, calculated the semantic similarity between the ontologies, and then found a set of similar users, thereby achieving personalized service. This method offers a way to combine ontology technology with recommender systems. Experiments show that it can improve recommendation accuracy.
New Word Detection and Emotional Tendency Judgment Based on Deep Structured Model
SUN Xiao, SUN Chong-yuan and REN Fu-ji
Computer Science. 2015, 42 (9): 208-213.  doi:10.11896/j.issn.1002-137X.2015.09.040
With the development of social networks, new words appear ceaselessly, and a new word often characterizes a social hot spot or expresses a certain public mood, so new word detection and emotional tendency judgment provide a new way to forecast public mood. We constructed a deep conditional random fields model for sequence labeling, introduced part of speech, character position and word-formation ability as features, and combined it with a crowd-sourced network dictionary and other third-party dictionaries. Traditional methods based on an emotion dictionary find it difficult to judge the emotional tendency of new words, so we represented each word as a K-dimensional vector using a neural network language model in order to find the words nearest to the new word in the vector space; the sentiment of the new word is then judged according to the emotional tendency of these words and their distances from the new word. Experiments on the Peking University corpus demonstrate the feasibility of the proposed model and method: the new word detection F-value is 0.991, and the emotion recognition accuracy is 70%.
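The sentiment-transfer step can be sketched as a distance-weighted vote over the nearest known words in the embedding space; the embedding source, k, and the weighting are assumptions for illustration, not the paper's exact rule.

```python
# Hedged sketch: judge a new word's polarity from its k nearest neighbors
# in a word-vector space with known polarities.
import numpy as np

def new_word_sentiment(vec, lexicon_vecs, lexicon_polarity, k=10):
    """vec and rows of lexicon_vecs unit-normalized; polarity is +1/-1."""
    sims = lexicon_vecs @ vec                 # cosine similarity to known words
    top = np.argsort(sims)[-k:]               # k nearest known words
    score = np.sum(sims[top] * lexicon_polarity[top]) / np.sum(np.abs(sims[top]))
    return "positive" if score > 0 else "negative"
```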
Behavior Modeling and its Application in Multi-agent System
FENG Xiang and ZHANG Jin-wen
Computer Science. 2015, 42 (9): 214-219.  doi:10.11896/j.issn.1002-137X.2015.09.041
Five elements theory carries an informational dynamics of its own, but it has not been used in modern networks. This paper proposed a novel five elements particle model based on five elements theory that can effectively solve distribution problems in multi-agent systems. The model can describe and handle random, parallel and multi-type coordination among the agents of a multi-agent system. According to the generation and control relations and the inner stability and balance in five elements theory, we built a connection between the multi-agent system and the five elements particle model. Meanwhile, the behaviors among agents were modeled, which is the prominent part of our five elements particle model algorithm. Finally, we validated the effectiveness of the model through experiments.
Hybrid Multi-objective Algorithm for Solving Flexible Job Shop Scheduling Problem
ZUO Yi, GONG Mao-guo, ZENG Jiu-lin and JIAO Li-cheng
Computer Science. 2015, 42 (9): 220-225.  doi:10.11896/j.issn.1002-137X.2015.09.042
The flexible job shop scheduling problem (FJSP) is one of the most important optimization problems in production scheduling, owing to its complexity and its practical applications. Most studies focus on a single objective, the makespan, i.e., the total time required to complete all jobs; however, a single objective may be insufficient for real applications. Therefore, a new hybrid multi-objective algorithm was proposed for solving the FJSP with three objectives: makespan, total workload and critical workload. An effective chromosome representation and genetic operators are introduced, and the nondominated neighbor immune algorithm is used to search for Pareto-optimal solutions. To improve search performance, three different local search strategies were designed and combined within the multi-objective algorithm. Computational results on several data sets show that the proposed algorithm generally outperforms other representative algorithms, and the experiments validate the effectiveness of the local search strategies.
Optimization for Smoothing Parameter in Process of Data Fitting
WANG Li, WANG Wen-jian and JIANG Gao-xia
Computer Science. 2015, 42 (9): 226-229.  doi:10.11896/j.issn.1002-137X.2015.09.043
Functionalizing data is the basis of functional data analysis (FDA) and the key step that distinguishes it from other analysis methods. As the main approach to functionalizing data, data fitting can usually be converted into an optimization problem comprising a loss function and a regularization term, in which the smoothing parameter plays a compromising role, weighing loss against the risk of over-fitting. Generalized cross-validation (GCV) is a general and effective parameter selection method, but because GCV is evaluated on discrete values, massive computation may be needed to obtain an accurate smoothing parameter. Aiming at this problem, a fitting-optimization strategy and a finite-difference strategy were proposed to improve the efficiency of selecting the optimal smoothing parameter, and their precision and efficiency were compared and analyzed. Experimental results on simulated and real data sets demonstrate that both strategies are far more efficient than the conventional grid method at almost the same precision; the finite-difference strategy is more precise, while the fitting-optimization strategy is more efficient.
Recommender Algorithm Based on Dynamical Trust Relationship between Users
ZHENG Jiong and SHI Gang
Computer Science. 2015, 42 (9): 230-234.  doi:10.11896/j.issn.1002-137X.2015.09.044
In e-commerce, a user's selection of an item largely depends on trust relationships between users. Traditional recommender algorithms usually consider only a static relationship between users, that is, the relationship relied upon for decisions never changes. To capture the dynamics that a static trust relationship misses, this paper proposed a recommender algorithm based on dynamical trust relationships. First, we proposed a generative model that takes both static user interest and static trust relationships into consideration. Then we added a temporal factor to user interest and trust relationships and proposed a corresponding dynamical generative model. Experiments show that the proposed algorithm can describe the dynamical trust relationships between users and achieves better prediction accuracy than related algorithms.
CGDNA:An Ensemble De Novo Genome Assembly Algorithm Based on Clustering Graph
XU Kui, CHEN Ke, XU Jun, TIAN Jia-lin, LIU Hao and WANG Yu-fan
Computer Science. 2015, 42 (9): 235-239.  doi:10.11896/j.issn.1002-137X.2015.09.045
The ultimate goal of genome sequencing is to determine the complete DNA sequence of an organism, which is the basis of genetic research and disease diagnosis. In general, genome sequencing has two steps: first, DNA fragments are generated and determined experimentally; second, the fragments are assembled into the full genome by computational methods. Although Sanger technology successfully resolved the human genome, it has been replaced by next-generation sequencing because of its high cost. Next-generation sequencing offers high throughput, high coverage and low cost, but produces short reads with more errors as a byproduct, which poses greater challenges for assembly algorithms. Since the assembly results of different algorithms are reported to be complementary, with no single algorithm consistently outperforming the rest, this study aimed at integrating the results produced by multiple assembly algorithms. We proposed an algorithm based on a clustering graph: through index building, read mapping, contig clustering and clustering-graph construction, the proposed algorithm outperforms any single algorithm. The experimental results demonstrate that the CGDNA algorithm increases two standard metrics (largest scaffold and scaffold N50) by 50% compared with state-of-the-art algorithms, i.e., Velvet, ABySS and SOAPdenovo. Moreover, the performance of CGDNA should improve further as more base algorithms are added. The proposed algorithm largely improves assembly quality, reduces the difficulty of genetic analysis and accelerates genome research.
Repulsion Force Based Gravitational Search Algorithm
WANG Qi-qi, SUN Gen-yun, WANG Zhen-jie, ZHANG Ai-zhu, CHEN Xiao-lin and HUANG Bing-hu
Computer Science. 2015, 42 (9): 240-245.  doi:10.11896/j.issn.1002-137X.2015.09.046
To overcome shortcomings of the gravitational search algorithm (GSA) such as overly fast convergence and premature convergence, this paper presented a repulsion-force-based GSA (RFGSA). In RFGSA, repulsion is introduced into GSA by converting part of the attractive force into repulsive force, which increases the diversity of the population and thus improves the search ability of GSA. To demonstrate the validity of RFGSA, 10 benchmark functions were tested, and the comparison results indicate the significant superiority of the proposed algorithm.
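One way to read the modified force rule is sketched below: a fraction of an agent's nearest neighbors repel it while the rest attract, preserving diversity. The repulsion criterion and scaling here are illustrative assumptions, not the paper's exact rule.

```python
# Hedged sketch: per-agent force in a repulsion-augmented GSA step.
import numpy as np

def rfgsa_force(pos, masses, i, repel_frac=0.2, g=1.0, eps=1e-9):
    """pos: (N, d) agent positions; masses: (N,) agent masses."""
    force = np.zeros_like(pos[i])
    d = np.linalg.norm(pos - pos[i], axis=1)
    near = np.argsort(d)[1:]                       # other agents, nearest first
    n_repel = int(repel_frac * len(near))
    for rank, j in enumerate(near):
        direction = (pos[j] - pos[i]) / (d[j] + eps)
        sign = -1.0 if rank < n_repel else 1.0     # nearest agents repel
        force += sign * g * masses[j] * direction / (d[j] + eps)
    return force
```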
Almost Sure Convergence of Artificial Bee Colony Algorithm:A Martingale Method
KONG Xiang-yu, LIU San-yang and WANG Zhen
Computer Science. 2015, 42 (9): 246-248.  doi:10.11896/j.issn.1002-137X.2015.09.047
Most convergence analyses of the artificial bee colony (ABC) algorithm are based on ergodicity and are conducted in the sense of convergence in probability; such analyses cannot show, in general, that the ABC algorithm converges to a global optimum in a finite number of evolution steps. In this paper, a martingale method was proposed to study the almost sure convergence of the ABC algorithm. It is shown that the ABC algorithm converges to a global optimum with probability 1 in a finite number of evolution steps. The results obtained underlie applications of the ABC algorithm, and the suggested martingale method provides a new technique for its convergence analysis.
Classification Method of Imbalanced Data Based on RSBoost
LI Ke-wen, YANG Lei, LIU Wen-ying, LIU Lu and LIU Hong-tai
Computer Science. 2015, 42 (9): 249-252.  doi:10.11896/j.issn.1002-137X.2015.09.048
Abstract PDF(663KB) ( 933 )   
References | Related Articles | Metrics
The problem of class imbalance, which is common in many application domains, has become a research hotspot in data mining and machine learning. We presented a new classification method for imbalanced data, called RSBoost, to increase the recognition rate of the minority class and the classification efficiency. The approach uses SMOTE (synthetic minority over-sampling technique) and random under-sampling to balance the data sets, and then uses boosting to optimize the classification performance. We conducted experiments on several public data sets to evaluate the performance of RSBoost against four other methods. The experimental results show that the proposed approach improves both the classification performance and the efficiency on imbalanced data sets.
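A minimal sketch of an RSBoost-style pipeline, combining SMOTE, random under-sampling and a boosted classifier; the sampler settings here are library defaults, not the paper's configuration.

    from imblearn.pipeline import Pipeline
    from imblearn.over_sampling import SMOTE
    from imblearn.under_sampling import RandomUnderSampler
    from sklearn.ensemble import AdaBoostClassifier

    def make_rsboost(random_state=0):
        return Pipeline(steps=[
            # Over-sample the minority class with synthetic neighbours...
            ("smote", SMOTE(random_state=random_state)),
            # ...then thin the majority class to rebalance the set...
            ("under", RandomUnderSampler(random_state=random_state)),
            # ...and boost weak learners on the balanced data.
            ("boost", AdaBoostClassifier(random_state=random_state)),
        ])

    # Usage: make_rsboost().fit(X_train, y_train).predict(X_test)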
Improved ITO Algorithm for Solving VRP
WANG Hao-guang and YU Shi-ming
Computer Science. 2015, 42 (9): 253-256.  doi:10.11896/j.issn.1002-137X.2015.09.049
Abstract PDF(336KB) ( 497 )   
References | Related Articles | Metrics
To avoid local optima when selecting client nodes in the VRP, this paper introduced a savings method combined with the path weight value and a distance heuristic factor to improve the decision rule for selecting client nodes. According to the characteristics of actual particle motion and the gradual convergence of the ITO algorithm during iteration, and by combining the drifting and fluctuation operators, this paper proposed a path-weight update rule to raise the convergence rate of the algorithm. By increasing the fluctuation coefficient and raising the ambient temperature, local optima are escaped and search stagnation is avoided. The 2-opt local optimization algorithm was introduced to further refine the current best solution. Experimental results show that the improved ITO algorithm effectively improves the convergence rate and the ability to find the global optimum.
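A minimal sketch of the 2-opt refinement applied to the current best route; the distance matrix D is assumed symmetric, and route is a sequence of node indices whose endpoints stay fixed.

    def two_opt(route, D):
        improved = True
        while improved:
            improved = False
            for i in range(1, len(route) - 2):
                for j in range(i + 1, len(route) - 1):
                    a, b = route[i - 1], route[i]
                    c, d = route[j], route[j + 1]
                    # Reversing route[i:j+1] replaces edges (a,b),(c,d)
                    # with (a,c),(b,d); accept if strictly shorter.
                    if D[a][c] + D[b][d] < D[a][b] + D[c][d]:
                        route[i:j + 1] = reversed(route[i:j + 1])
                        improved = True
        return route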
Multi-objective Artificial Bee Colony Algorithm
GE Yu and LIANG Jing
Computer Science. 2015, 42 (9): 257-262.  doi:10.11896/j.issn.1002-137X.2015.09.050
Abstract PDF(817KB) ( 451 )   
References | Related Articles | Metrics
This paper designed a multi-objective artificial bee colony algorithm that applies effectively to multi-objective optimization problems. The evolutionary strategy uses elite solutions to guide the search and combines a sine-function search operation to balance exploration and exploitation of the solution space. In addition, the algorithm records and maintains the Pareto-optimal solutions found during evolution in an external archive. Theoretical analysis shows that the proposed algorithm can converge to the theoretical optimal solution archive of a multi-objective problem. Simulation results indicate that the proposed algorithm effectively approaches the theoretical optimal archive, exhibits good convergence and uniformity on typical multi-objective optimization problems, and performs well compared with algorithms of the same type in the literature.
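A minimal sketch of the external-archive maintenance step, assuming minimization on all objectives; the paper's pruning rule for bounding the archive size is not reproduced.

    def dominates(f, g):
        # f dominates g if it is no worse everywhere and better somewhere.
        return all(a <= b for a, b in zip(f, g)) and any(a < b for a, b in zip(f, g))

    def update_archive(archive, candidate, objective):
        fc = objective(candidate)
        kept = []
        for sol in archive:
            fs = objective(sol)
            if dominates(fs, fc):
                return archive          # candidate is dominated: discard it
            if not dominates(fc, fs):
                kept.append(sol)        # mutually non-dominated: keep both
        kept.append(candidate)          # candidate enters; dominated ones dropped
        return kept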
Combinatorial Optimization Model of Multi-modal Transit Scheduling
MING Jie, ZHANG Gui-jun and LIU Yu-dong
Computer Science. 2015, 42 (9): 263-267.  doi:10.11896/j.issn.1002-137X.2015.09.051
Abstract PDF(394KB) ( 647 )   
References | Related Articles | Metrics
To satisfy the traffic demand at each station at different times, the connection between passenger travel time and the operation management of the bus company was systematically probed; meanwhile, the combination of three different departure modes (normal bus, zone bus and express bus) and the departure interval were also investigated in depth. Aiming at an optimal time cost, a combined bus-scheduling model was established based on the principle of selecting different decision-making models over the same decision interval. Because bus scheduling is a typical NP-hard problem, a differential evolution algorithm was used to solve the model. The experimental results indicate that with a 4 min decision-making interval, there are three possible departure intervals at the origin station: 4 min, 8 min and 12 min. Taking the overtaking of zone and express buses into account, the waiting time at different stations ranges from 0.8 min to 12 min. Compared with traditional bus scheduling, the multi-modal bus combination requires fewer departures, so the time cost of the whole system is lower.
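A minimal sketch of one differential evolution step (the classic DE/rand/1/bin scheme) of the kind used to search the schedule; the encoding of departure modes/intervals and all parameters are assumptions.

    import numpy as np

    def de_step(pop, fitness, F=0.5, CR=0.9, rng=None):
        # pop: (n, d) candidate schedules; fitness: vector -> time cost.
        rng = np.random.default_rng() if rng is None else rng
        n, d = pop.shape
        new_pop = pop.copy()
        for i in range(n):
            a, b, c = rng.choice([k for k in range(n) if k != i], 3, replace=False)
            mutant = pop[a] + F * (pop[b] - pop[c])      # differential mutation
            cross = rng.random(d) < CR                   # binomial crossover mask
            cross[rng.integers(d)] = True                # guarantee one mutated gene
            trial = np.where(cross, mutant, pop[i])
            if fitness(trial) < fitness(pop[i]):         # greedy selection
                new_pop[i] = trial
        return new_pop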
Research on Generalized Lorenz Kernel Function in Fuzzy C Means Clustering
WANG Jian-hua, LI Xiao-feng and GAO Wei-wei
Computer Science. 2015, 42 (9): 268-271.  doi:10.11896/j.issn.1002-137X.2015.09.052
Abstract PDF(299KB) ( 455 )   
References | Related Articles | Metrics
The fuzzy C-means (FCM) algorithm is the main algorithm for data clustering analysis. However, in a noisy environment, for clusters of different sampling sizes, its accuracy is low when the number of clusters is large. These disadvantages can be alleviated through the Gaussian kernel mapping of alternative FCM (AFCM). To address the remaining deficiency of AFCM, this paper introduced a generalized Lorentz kernel function into fuzzy C-means clustering. The algorithm was used to cluster the Iris database into the three classes Iris setosa, Iris versicolour and Iris virginica. Experimental results show that the generalized Lorentzian fuzzy C-means (GLFCM) can correctly classify data with outliers and unequal-sized clusters. GLFCM yields better clusters than K-means (KM), FCM, alternative fuzzy C-means (AFCM), Gustafson-Kessel (GK) and Gath-Geva (GG). It takes fewer iterations than AFCM to converge, and its partition index (SC) is better than those of the other methods.
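A minimal sketch of a Lorentzian (Cauchy-type) kernel distance as it could replace the Euclidean distance in kernel FCM; sigma and this exact "generalized" form are assumptions, not the paper's definition.

    import numpy as np

    def lorentz_kernel(x, v, sigma=1.0):
        # Heavy-tailed kernel: down-weights far-away (outlier) points.
        return 1.0 / (1.0 + np.sum((x - v) ** 2) / sigma ** 2)

    def kernel_distance_sq(x, v, sigma=1.0):
        # For any kernel with K(x, x) = 1, the induced feature-space
        # squared distance is 2 * (1 - K(x, v)).
        return 2.0 * (1.0 - lorentz_kernel(x, v, sigma))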
SD-OCT Image Layer Segmentation Using Multi-scale 3-D Graph Search Method
NIU Si-jie, CHEN Qiang, LU Sheng-tao and SHEN Hong-lie
Computer Science. 2015, 42 (9): 272-277.  doi:10.11896/j.issn.1002-137X.2015.09.053
Abstract PDF(1525KB) ( 439 )   
References | Related Articles | Metrics
Spectral-domain optical coherence tomography (SD-OCT) imaging is widely used in ophthalmology. The segmentation of retinal tissue layers plays a vital role in the diagnosis of retinal disease. The traditional 3-D graph search method can segment k surfaces simultaneously, but its high time complexity and weak robustness when segmenting pathological retinal images limit its utility. This paper introduced multi-scale theory into the traditional 3-D graph search model and proposed a multi-scale 3-D graph search algorithm for segmenting retinal images. Firstly, reasonable cost functions are constructed for each surface according to the characteristics of the layer. Then the minimum and maximum height differences of adjacent columns are used to construct inter-column constraints that improve the smoothness constraints of the surfaces. Finally, the 3-D graph search method coarsely segments the lower-scale images, and a search region of the higher-scale images is then redefined to obtain a more accurate result. The improved algorithm was evaluated on 3 groups of normal eyes and 1 group of eyes with age-related macular degeneration, and the results were compared with manual segmentation and the traditional 3-D graph search method. The results demonstrate that the improved method detects 3 layer surfaces (mean absolute boundary positioning difference 3.86±2.50 μm) closer to manual segmentation (3.78±2.76 μm) than the traditional 3-D graph search method (7.92±3.31 μm).
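A minimal sketch of the coarse-to-fine idea: segment the surface on a downsampled volume, then restrict the fine-scale graph search to a band around the upsampled coarse result. The band width and scale factor are assumptions.

    import numpy as np

    def refine_search_band(coarse_surface, scale=2, band=5):
        # coarse_surface[y, x] holds the layer depth found at low resolution.
        # Upsample by block replication and rescale depths to the fine grid.
        fine = np.kron(coarse_surface * scale, np.ones((scale, scale), dtype=int))
        lower = np.maximum(fine - band, 0)
        upper = fine + band
        return lower, upper   # per-column depth range for the fine graph search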
Visual Object Tracking Algorithm Based on Region Covariance Matrix and 2DPCA Learning
ZHANG Huan-long, ZHENG Wei-dong, SHU Yun-xing and JIANG Bin
Computer Science. 2015, 42 (9): 278-281.  doi:10.11896/j.issn.1002-137X.2015.09.054
Abstract PDF(606KB) ( 397 )   
References | Related Articles | Metrics
To address the loss of spatial information when PCA vectorizes images in visual tracking, a new adaptive object tracking method based on 2DPCA learning was proposed. It treats the tracked object as a matrix, which preserves the spatial structure information of the target. Within the particle filter framework, the algorithm adopts an affine model to describe object motion. Meanwhile, to enhance the learning ability, the algorithm uses covariance feature fusion to estimate the motion state of the tracked object and obtain robust tracking results. Experimental results indicate that the proposed method performs favorably when the object undergoes illumination changes, pose changes and partial occlusion between consecutive frames.
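A minimal sketch of 2DPCA on a set of image matrices: the image covariance (scatter) matrix is built directly from the matrices, without vectorizing them, which is what preserves the spatial structure.

    import numpy as np

    def two_dpca(images, n_components=5):
        # images: array of shape (N, h, w), one matrix per image patch.
        mean = np.mean(images, axis=0)
        G = np.zeros((images.shape[2], images.shape[2]))
        for A in images:
            D = A - mean
            G += D.T @ D                      # w x w image scatter matrix
        G /= len(images)
        vals, vecs = np.linalg.eigh(G)        # eigenvalues in ascending order
        W = vecs[:, -n_components:]           # top projection axes
        return [A @ W for A in images], W     # h x n_components feature matrices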
No-reference Stereoscopic Image Quality Assessment Based on Wavelet Transform
XIONG Run-sheng, LI Chao-feng and ZHANG Wei
Computer Science. 2015, 42 (9): 282-284.  doi:10.11896/j.issn.1002-137X.2015.09.055
Abstract PDF(847KB) ( 459 )   
References | Related Articles | Metrics
Stereoscopic image quality assessment is an important field of image processing, and existing no-reference quality assessment methods for 2D images cannot be applied directly to stereoscopic images. To evaluate stereoscopic image quality, we presented a no-reference assessment method based on the wavelet transform. Firstly, the “Cyclopean” image is obtained from the left and right views using the Gabor and SSIM algorithms. Then the sub-band energies of the left view, right view and “Cyclopean” image are calculated by wavelet decomposition. Finally, the relationship model between the perceptual features of the 3D image and the subjective scores is built by support vector regression (SVR). Experimental results show that our method is more consistent with subjective assessment and better accords with the human visual system.
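A minimal sketch of the feature step: per-sub-band wavelet energies feeding an SVR quality model. The wavelet family and decomposition level are assumptions, not the paper's settings.

    import numpy as np
    import pywt
    from sklearn.svm import SVR

    def subband_energies(img, wavelet="db2", level=3):
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        feats = [np.mean(coeffs[0] ** 2)]              # approximation energy
        for cH, cV, cD in coeffs[1:]:                  # detail energies per level
            feats += [np.mean(cH ** 2), np.mean(cV ** 2), np.mean(cD ** 2)]
        return np.array(feats)

    # Usage: model = SVR(kernel="rbf").fit(feature_rows, subjective_scores)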
Multi-projector Displays System Research Based on GPU Real-time Video Processing
LI Xiao-guang, LIU Hong-zhe and YUAN Jia-zheng
Computer Science. 2015, 42 (9): 285-288.  doi:10.11896/j.issn.1002-137X.2015.09.056
Abstract PDF(413KB) ( 720 )   
References | Related Articles | Metrics
This paper introduced GPU high-speed parallel computing, which plays an important role in digital image and video processing. For a multi-channel surround-screen projection system, we used a heterogeneous CPU-GPU computing structure and proposed a real-time video processing solution. By using the DirectShow filter-link model, the program ensures the flexibility of video processing. The geometric correction and edge blending algorithms are designed for parallel computing to enhance the efficiency of video processing. The framework supports single-channel, high-quality 4K video display, effectively reduces building costs, and improves the economic practicality of the system.
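A minimal sketch of a standard edge-blending ramp for the overlap region between adjacent projectors; the smoothstep shape and gamma value are generic assumptions, not the paper's calibration.

    import numpy as np

    def blend_weights(width, gamma=2.2):
        # One multiplier per pixel column across the projector overlap.
        t = np.linspace(0.0, 1.0, width)
        s = 3 * t ** 2 - 2 * t ** 3        # smoothstep ramp from 0 to 1
        return s ** (1.0 / gamma)          # pre-compensate projector gamma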
Image Resizing Based on Seam Carving and Warping
LIN Xiao, ZHANG Xiao-yu and MA Li-zhuang
Computer Science. 2015, 42 (9): 289-292.  doi:10.11896/j.issn.1002-137X.2015.09.057
Abstract PDF(1233KB) ( 537 )   
References | Related Articles | Metrics
This paper presented an image resizing method that preserves the content and the shape of salient objects. Images are resized using traditional seam carving and warping techniques. First, the proposed method produces a significance map with clear shape and structure by combining saliency detection via graph-based manifold ranking with gradient energy information. Then an appropriate resizing method is chosen according to the resizing scale, using the significance map. Finally, the image is resized by either the classic seam carving method or a deformation-based resizing method driven by energy optimization, according to the comparison results. Extensive comparisons show that the method preserves both the important content and the shape and structure of salient objects.
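A minimal sketch of the dynamic-programming core of seam carving: finding the lowest-energy vertical seam given a per-pixel energy map (here assumed to come from the significance map above).

    import numpy as np

    def min_vertical_seam(energy):
        h, w = energy.shape
        cost = energy.astype(float).copy()
        for y in range(1, h):
            left = np.r_[np.inf, cost[y - 1, :-1]]
            right = np.r_[cost[y - 1, 1:], np.inf]
            cost[y] += np.minimum(np.minimum(left, cost[y - 1]), right)
        seam = np.zeros(h, dtype=int)
        seam[-1] = int(np.argmin(cost[-1]))
        for y in range(h - 2, -1, -1):          # backtrack the cheapest path
            x = seam[y + 1]
            lo, hi = max(x - 1, 0), min(x + 2, w)
            seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
        return seam   # one column index per row; remove these pixels to shrink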
Symmetrical 8 Chain Code Encoding Algorithm to Describe Outer Contour Information of Phalaenopsis Amabilis Images
XU Huan-liang, WANG Yi-jun, XIONG Ying-jun, REN Shou-gang and WANG Hao-yun
Computer Science. 2015, 42 (9): 293-298.  doi:10.11896/j.issn.1002-137X.2015.09.058
Abstract PDF(1795KB) ( 462 )   
References | Related Articles | Metrics
One important feature for judging the growing situation of Phalaenopsis amabilis is the outer contour information, which is obtained by contour extraction and chain code encoding. Mathematical morphology is well suited to extracting the edge contour of Phalaenopsis amabilis; however, the extracted contour is not of single-pixel width, so the traditional 8-direction chain code algorithm will describe the outer contour incorrectly. Combining the contour direction feature, we defined the starting chain code direction and proposed a symmetrical 8-direction chain code algorithm. During encoding, the algorithm judges the current contour direction through change points and then selects the starting chain code direction adaptively. Verification experiments show that the algorithm describes the outer contour information well with a low misjudgement rate, and further experiments show that it is also suitable for other enclosed images whose targets have been extracted well.
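A minimal sketch of classic 8-direction chain coding along an ordered contour, for context; the paper's symmetry rule for adaptively picking the starting direction is not reproduced here.

    # Codes 0..7 for the 8 neighbours, counter-clockwise from east
    # (image coordinates: y grows downward, so (1, -1) is north-east).
    DIRECTIONS = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
                  (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

    def chain_code(points):
        # points: ordered (x, y) pixels of a single-pixel-wide contour.
        codes = []
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            codes.append(DIRECTIONS[(x1 - x0, y1 - y0)])
        return codes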
Depth Image Based Gesture Recognition for Multiple Learners
ZHANG Hong-yu, LIU Wei, XU Wei and WANG Hui
Computer Science. 2015, 42 (9): 299-302.  doi:10.11896/j.issn.1002-137X.2015.09.059
Abstract PDF(1107KB) ( 458 )   
References | Related Articles | Metrics
Gesture recognition of the learner’s body is helpful for analysing and evaluating the learner’s status in an e-learning system. In this paper, a depth-image-based gesture recognition method was proposed to recognize multiple learners. After obtaining the depth image from a Kinect sensor, the human body is separated from the background and contour features described by Hu moments are extracted. The learner’s gesture is then recognized by an SVM classifier. Test results show that the method can efficiently recognize the hand-raising, sitting and head-lowering gestures of multiple learners.
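A minimal sketch of the feature step: Hu moments of a binary silhouette as a 7-dimensional descriptor for an SVM; the log-scaling is a common normalization and an assumption here.

    import cv2
    import numpy as np
    from sklearn.svm import SVC

    def hu_features(mask):
        # mask: binary foreground image of one learner (from the depth map).
        m = cv2.moments(mask, binaryImage=True)
        hu = cv2.HuMoments(m).flatten()
        # Log-scale so the seven moments have comparable magnitudes.
        return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

    # Usage: clf = SVC(kernel="rbf").fit(np.vstack(feature_rows), labels)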
Objectness Proposal Based on Prior Distribution of Geometric Characteristics of Object Regions
LIU Zhi-bin and ZHAO Qi-yang
Computer Science. 2015, 42 (9): 303-308.  doi:10.11896/j.issn.1002-137X.2015.09.060
Abstract PDF(1019KB) ( 450 )   
References | Related Articles | Metrics
Objectness proposal is an emerging problem that aims to improve the efficiency of object detection by reducing the number of candidate windows. The problem was analyzed from the perspective of combinatorial geometry, and a method was proposed to construct full cover sets that cover all possible object rectangles with a rather small number of windows. For images no larger than 512×512, assuming all object rectangles are at least 16×16, nearly 19000 windows suffice to form a full cover set. By exploiting the prior distribution of the locations and sizes of object rectangles, this number can be reduced further in a greedy manner. To address the diversity of low-probability samples across image sets, a hybrid scheme mixing the greedy and random methods was presented, which has good generality. The new scheme recalls 94.52% of object rectangles with 1000 proposal windows, and its detection rates (DRs) on the first ten hot proposal windows are on average 13.99%~40.29% higher than those of existing methods.
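A minimal sketch of prior-guided greedy selection: at each step, pick the window that covers the most remaining probability mass of object rectangles. The cover predicate and prior representation are assumptions, not the paper's construction.

    def greedy_cover(windows, rect_prior, covers, budget=1000):
        # rect_prior: {rect: probability}; covers(window, rect) -> bool.
        chosen, remaining = [], dict(rect_prior)
        for _ in range(budget):
            if not remaining:
                break
            best = max(windows,
                       key=lambda w: sum(p for r, p in remaining.items()
                                         if covers(w, r)))
            chosen.append(best)
            remaining = {r: p for r, p in remaining.items()
                         if not covers(best, r)}
        return chosen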
Clustering Based Object Shadow Recognition Algorithm
ZHANG Xiao-dan, LI Chun-lai and JIN Zhao-yan
Computer Science. 2015, 42 (9): 309-312.  doi:10.11896/j.issn.1002-137X.2015.09.061
Abstract PDF(866KB) ( 460 )   
References | Related Articles | Metrics
Object shadow computation is critical for object rendering, and thus it is an important research issue in the image processing field. To further improve the efficiency of shadow rendering, this paper proposed a clustering-based object shadow recognition algorithm. We represented each light as a sphere according to its attenuation range, assigned lights to different classes when the distance between them exceeded a predefined minimum, and applied a top-down hierarchical clustering method to the lights. During clustering, the attenuation range of a light increases linearly with its distance from the light source. After clustering the lights into classes, we rendered the lights of a class with the same texture, so the efficiency of rendering shadows in an image is improved substantially. Finally, we validated the efficiency of our algorithm with experiments.
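A minimal sketch of grouping point lights by position so that each group can share one shadow texture; the complete-linkage criterion and distance threshold are assumptions.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    def cluster_lights(positions, max_dist=5.0):
        # positions: (n, 3) array of light positions in world space.
        Z = linkage(positions, method="complete")   # hierarchical merge tree
        # Cut the tree so no cluster spans more than max_dist.
        return fcluster(Z, t=max_dist, criterion="distance")  # label per light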
Locality-sensitive Discriminant Sparse Representation for Video Semantic Analysis
WANG Min-chao, ZHAN Yong-zhao, GOU Jian-ping and MAO Qi-rong
Computer Science. 2015, 42 (9): 313-319.  doi:10.11896/j.issn.1002-137X.2015.09.062
Abstract PDF(831KB) ( 408 )   
References | Related Articles | Metrics
Video semantic analysis has become a research hotspot. Traditional sparse representation methods cannot produce similar coding results when the input video features are close to each other. We assumed that, in sparse-representation-based video semantic analysis, similar video features should be encoded as similar sparse codes, i.e., the distance between their sparse codes should be small. To improve the accuracy of video semantic analysis, a locality-sensitive discriminant sparse representation (LSDSR) based on this hypothesis was developed. In the proposed method, a discriminant loss function based on the sparse coefficients is introduced into locality-sensitive sparse representation, and an optimized dictionary is learned under this constraint. In the process, the sparse coding coefficients attain small within-class scatter and large between-class scatter under the Fisher criterion, which builds the discriminant sparse model of LSDSR. The proposed method was extensively evaluated on related video databases against existing sparse representation methods. The experimental results show that the method significantly enhances the discriminative power of sparse representation features and consequently improves the accuracy of video semantic analysis.
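A schematic form of such a locality-sensitive discriminant objective (the notation and weighting are assumptions, not the paper's exact formulation): with feature matrix $X$, dictionary $D$ and sparse codes $S = [s_1,\dots,s_n]$,

    $$\min_{D,S}\ \|X - DS\|_F^2 + \lambda\|S\|_1 + \alpha\sum_{i,j} w_{ij}\,\|s_i - s_j\|_2^2 + \beta\bigl(\mathrm{tr}\,S_W(S) - \mathrm{tr}\,S_B(S)\bigr),$$

where $w_{ij}$ is large for nearby video features, so the locality term pulls their codes together, and $S_W$, $S_B$ are the within-class and between-class scatter matrices of the codes, implementing the Fisher criterion.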