Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 45 Issue 8, 20 August 2018
ChinaMM 2017
Deblurring for Imaging through Simple Lens Combining Adaptive Gradient Sparsity and Interchannel Correlation
WANG Xin-ling, FU Ying, HUANG Hua
Computer Science. 2018, 45 (8): 1-6.  doi:10.11896/j.issn.1002-137X.2018.08.001
Abstract PDF(2621KB) ( 534 )   
Due to optical aberrations in imaging optics, images taken with simple lenses suffer from severe artifacts and blurring. Aiming at this kind of blurring problem, this paper proposed a deblurring method combining adaptive gradient sparsity and interchannel correlation. The method restores each color channel of the blurred image separately by imposing different sparse priors on points in smooth areas and at edges, and by using an interchannel correlation constraint that exploits edge information preserved in one channel to restore another channel. Simulation results show that the proposed method achieves better restoration, in terms of image resolution and visual effect, for images blurred by a simple lens.
Study on Wi-Fi Fingerprint Anonymization for Users in Wireless Networks
HAN Xiu-ping, WANG Zhi, PEI Dan
Computer Science. 2018, 45 (8): 7-12.  doi:10.11896/j.issn.1002-137X.2018.08.002
Abstract PDF(1846KB) ( 391 )   
Billions of Wi-Fi access points (APs) have been deployed to provide wireless connections to people with different kinds of mobile devices. To accelerate Wi-Fi connection, mobile devices send probe requests to discover nearby Wi-Fi APs and maintain previously connected network lists (PNLs) of APs. Previous studies show that Wi-Fi fingerprints consisting of probed SSIDs can individually leak users' private information. This paper investigated the privacy risks caused by Wi-Fi fingerprints in the wild, and provided a data-driven solution to protect privacy. First, measurement studies were carried out based on 27 million users associating with 4 million Wi-Fi APs in 4 cities, revealing that Wi-Fi fingerprints can be used to identify users in a wide range of Wi-Fi scenarios. Based on semantic mining and analysis of SSIDs in Wi-Fi fingerprints, this paper further inferred demographic information of identified users (e.g., people's jobs), telling "who they are". Second, this paper proposed a collaborative filtering (CF) based heuristic protection method, which "blurs" a user's PNL by adding faked SSIDs so that nearby users' PNLs and Wi-Fi fingerprints become similar to each other. Finally, the effectiveness of the design was verified using real-world Wi-Fi connection traces. The experiments show that the refined PNLs protect users' privacy while still providing fast Wi-Fi reconnection.
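The CF-based "blurring" step can be illustrated with a toy sketch. This is not the paper's exact method: it simply assumes Jaccard similarity between PNLs and borrows a few SSIDs from the most similar nearby user's list as fakes, so the two fingerprints converge. The names (`blur_pnl`, the sample SSIDs) are hypothetical.

```python
# Illustrative sketch of PNL blurring: add fake SSIDs taken from the most
# similar neighbor's PNL so that nearby fingerprints become hard to tell apart.

def jaccard(a, b):
    """Jaccard similarity between two SSID lists."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def blur_pnl(target_pnl, nearby_pnls, k=2):
    """Append up to k SSIDs from the most similar neighbor's PNL."""
    best = max(nearby_pnls, key=lambda p: jaccard(target_pnl, p))
    fakes = [s for s in best if s not in target_pnl][:k]
    return list(target_pnl) + fakes

alice = ["HomeWiFi", "OfficeAP"]
others = [["HomeWiFi", "CafeAP", "GymAP"], ["AirportFree"]]
blurred = blur_pnl(alice, others)
```

After blurring, `blurred` contains Alice's real SSIDs plus fakes drawn from the closest neighbor, raising her fingerprint's similarity to that neighbor's.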
Accuracy Assessment Method of PnP Algorithm in Visual Geo-localization
GUI Yi-nan, LAO Song-yang, KANG Lai, BAI Liang
Computer Science. 2018, 45 (8): 13-16.  doi:10.11896/j.issn.1002-137X.2018.08.003
Abstract PDF(3775KB) ( 1246 )   
In recent years, the rapid growth of demand for location-based services has driven the development of positioning technology. Vision-based approaches utilize multiple images to recover more accurate camera pose parameters, but there is no uniform quantitative assessment of their performance. The mainstream camera pose assessment method compares the estimate with GPS data. However, since photo GPS tags are noisy and conversions between different coordinate systems introduce errors, using GPS tags as ground truth to evaluate the accuracy of the estimated camera pose is not objective. In this paper, an objective accuracy evaluation method was proposed: a reference plane is established from the calculated pose, and the camera pose obtained by the PnP algorithm is projected onto the reference plane in the same way.
Crowd Counting Method via Scalable Modularized Convolutional Neural Network
LI Yun-bo, TANG Si-qi, ZHOU Xing-yu, PAN Zhi-song
Computer Science. 2018, 45 (8): 17-21.  doi:10.11896/j.issn.1002-137X.2018.08.004
Abstract PDF(2601KB) ( 479 )   
The purpose of this paper is to accurately estimate the crowd density in real scenes based on image information from arbitrary perspectives and arbitrary crowd densities. However, crowd counting on static images is a challenging problem. Due to the perspective distortion and crowd congestion caused by the projection from 3D space into 2D space, it is difficult to distinguish between individuals, and between individuals and the background. To this end, this paper proposed a flexible and efficient scalable modularized convolutional neural network (CNN) architecture. The network directly accepts input images of arbitrary size and resolution and requires no additional computation for view information. Each module of the architecture employs a multi-column structure with different convolution kernels, which can fit individual information at different distances. The proposed module also combines the feature information of the front and rear layers, reducing the loss of accuracy caused by vanishing gradients. Experiments show that the accuracy of the proposed method is increased by 14.58% and 40.53%, and the root mean square error is reduced by 23.89% and 33.90%, respectively, on the ShanghaiTech PartA and PartB datasets compared with the state-of-the-art MCNN method.
Human Action Recognition Framework with RGB-D Features Fusion
MAO Xia, WANG Lan, LI Jian-jun
Computer Science. 2018, 45 (8): 22-27.  doi:10.11896/j.issn.1002-137X.2018.08.005
Abstract PDF(2786KB) ( 576 )   
Human action recognition is an important research direction in the fields of computer vision and pattern recognition. The complexity of human behavior and the variety of ways actions are performed keep behavior recognition a challenging subject. With the new generation of sensing technology, RGB-D cameras can simultaneously record RGB images and depth images, and extract skeleton information from depth images in real time. How to take advantage of this information has become a new hotspot and breakthrough point of behavior recognition research. This paper presented a new feature extraction method for RGB images based on Gaussian-weighted pyramid histograms of oriented gradients, and built an action recognition framework fusing multiple features. The feature extraction method and the framework were evaluated on three databases: UTKinect-Action3D, MSR-Action3D and Florence 3D Actions. The results indicate that the proposed action recognition framework achieves accuracies of 97.5%, 93.1% and 91.7%, respectively, demonstrating its effectiveness.
Image Co-segmentation Algorithm via Consistency of Center Sensitive Histogram
LI Yang, CHEN Zhi-hua, SHENG Bin
Computer Science. 2018, 45 (8): 28-35.  doi:10.11896/j.issn.1002-137X.2018.08.006
Abstract PDF(4250KB) ( 412 )   
Image co-segmentation is an active research area in computer vision. The ability to utilize the information of similar objects during segmentation is an advantage of co-segmentation that distinguishes it from other segmentation methods. Meanwhile, establishing the similarity of corresponding objects remains a challenging task. This paper presented a novel consistency of center sensitive histogram for image co-segmentation. Unlike the traditional image histogram, which counts the frequency of each intensity value by adding one to the corresponding bin, a consistency of center sensitive histogram is computed at each pixel, and a floating-point value is added to the corresponding bin for each occurrence of the intensity value. The floating-point value is the spatial consistency between the pixel where the intensity occurs and the pixel where the histogram is computed. Therefore, the histogram takes into account not only the distribution of each pixel's intensity value but also the relative spatial positions. A robust co-segmentation framework was proposed; its robustness means that similar objects under different illumination and deformation conditions can all be segmented well. The proposed technique was verified on various test image datasets. The experimental results demonstrate that the proposed method outperforms the state-of-the-art methods by 3% on average, especially when the test images are under different illumination conditions and have different shapes.
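The abstract does not define the spatial-consistency weight; the sketch below assumes a Gaussian falloff with distance from the histogram's center pixel, which captures the idea of adding a floating-point weight per occurrence instead of a hard count of one. Function name and parameters are illustrative.

```python
import math

def center_sensitive_histogram(img, cx, cy, bins=8, vmax=256, sigma=4.0):
    """Histogram centered at pixel (cx, cy): each pixel's intensity contributes
    a floating-point weight that decays with its distance from the center
    (assumed Gaussian here), rather than a hard count of 1."""
    hist = [0.0] * bins
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            b = v * bins // vmax                       # intensity -> bin index
            w = math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
            hist[b] += w                               # spatially weighted vote
    return hist
```

Pixels near the center contribute close to 1.0 to their bin; distant pixels contribute progressively less, so two histograms computed at corresponding object centers stay comparable under background changes.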
Motion Blur Parameters Estimation Based on Bottom-hat Using Spectrum
FANG Zheng, CAO Tie-yong, FU Tie-lian
Computer Science. 2018, 45 (8): 36-40.  doi:10.11896/j.issn.1002-137X.2018.08.007
Abstract PDF(3653KB) ( 668 )   
Motion blur is caused by relative motion between the object and the imaging system, and precise motion blur parameters are needed to recover a uniform-linear-motion-blurred image. It can be proved that the motion blur parameters are related to the zero points of the Fourier spectrum: the number of dark lines is related to the blur length, and the dark lines of the spectrum are perpendicular to the blur direction. When detecting the dark lines of the spectrum, the image structure or noise makes it difficult to locate them accurately, and the spectrum structure is also affected by the image's aspect ratio. To deal with these problems, this paper applied the morphological bottom-hat transform to the spectrum of the blurred image, and then used the Hough transform to find the blur angle. The blur angle and the distance between two central zero points were then used to find the blur length. Experimental results show that the errors of the blur length detected by the proposed method are lower than 0.25 pixels, and the errors of the angle are lower than 0.6°. The proposed method is robust in that it can estimate the blur parameters of blurred images of different scales and contents.
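The bottom-hat transform itself (morphological closing minus the original) can be sketched in a few lines; the Hough step and spectrum computation are omitted. This is a generic 3×3 structuring-element sketch, not the paper's configuration.

```python
def _minmax(img, op):
    """Apply min (erosion) or max (dilation) over a 3x3 neighborhood."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if 0 <= y + dy < h and 0 <= x + dx < w]
            out[y][x] = op(vals)
    return out

def bottom_hat(img):
    """Bottom-hat = closing(img) - img; responds to dark structures
    (e.g. the dark lines of a blurred image's spectrum)."""
    closed = _minmax(_minmax(img, max), min)   # closing = erosion of dilation
    return [[c - o for c, o in zip(cr, orow)]
            for cr, orow in zip(closed, img)]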
Research on Regional Age Estimation Model
SUN Jin-guang, RONG Wen-zhao
Computer Science. 2018, 45 (8): 41-49.  doi:10.11896/j.issn.1002-137X.2018.08.008
Abstract PDF(1947KB) ( 653 )   
With further research on age feature extraction and age feature classification, and in order to meet the application demands of human-computer interaction systems based on age information in real life, constructing an effective machine learning algorithm has become a research focus in age estimation from face images. Firstly, this paper analyzed how the features of multiple regions change with age, and divided the face into the prefrontal region, eye region, central region and integrated region. Then, it constructed a deep convolutional neural network feature extraction model for each region to extract its age features. Thirdly, taking the Morph face database as the sample set, this paper divided it into 6 stages (ages 10~19, 20~29, 30~39, 40~49, 50~59, and 60 or older) to train and test the age feature extraction network models of the multiple regions. Finally, according to the accuracy of the age feature classification models, this paper defined a region-based dynamic-weight age estimation model. The experiments show that the accuracy of age estimation on the Morph face database is 72.6%, and the number of age categories is raised from 4 to 6.
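One plausible reading of "dynamic-weight" fusion is to weight each region's class probabilities by that region's classification accuracy; the sketch below assumes exactly that (the fusion rule, function name and inputs are illustrative, not taken from the paper).

```python
def fuse_regions(region_probs, region_acc):
    """Combine per-region class-probability vectors, weighting each region by
    its (e.g. validation) accuracy; return the index of the winning age stage.
    Assumed fusion rule -- the paper's exact weighting may differ."""
    total = sum(region_acc.values())
    n = len(next(iter(region_probs.values())))
    fused = [0.0] * n
    for region, probs in region_probs.items():
        w = region_acc[region] / total        # normalized region weight
        for i, p in enumerate(probs):
            fused[i] += w * p
    return fused.index(max(fused))
```

A region whose classifier is more accurate (say the eye region) then dominates the vote, which matches the idea of weights derived from per-region classification accuracy.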
Two-stage Method for Video Caption Detection and Extraction
WANG Zhi-hui, LI Jia-tong, XIE Si-yan, ZHOU Jia, LI Hao-jie, FAN Xin
Computer Science. 2018, 45 (8): 50-53.  doi:10.11896/j.issn.1002-137X.2018.08.009
Abstract PDF(3427KB) ( 777 )   
Video caption detection and extraction is one of the key technologies for video understanding. This paper proposed a two-stage approach that divides the process into caption frame detection and caption area detection, improving caption detection efficiency and accuracy. In the first stage, caption frames are detected and extracted. Firstly, motion detection is performed according to the gray-correlation frame difference, captions are judged initially, and a new binary image sequence is obtained. Then, according to the dynamic characteristics of ordinary captions and scrolling captions, the new sequence is screened twice to obtain the caption frames. In the second stage, caption areas are detected and extracted. Firstly, the Sobel edge detection algorithm is used to detect the caption region, and the background is eliminated according to a height constraint. Then, according to the aspect ratio, vertical and horizontal captions are distinguished, and all captions in the caption frame can be obtained, including static captions, ordinary captions and scrolling captions. This method reduces the number of frames that need to be detected and improves caption detection efficiency by 11%. The experimental results show that the proposed method improves the F score by approximately 9% compared with methods that separately use the gray-correlation frame difference and edge detection.
Improved Shuffled Frog Leaping Algorithm and Its Application in Multi-threshold Image Segmentation
ZHANG Xin-ming, CHENG Jin-feng, KANG Qiang, WANG Xia
Computer Science. 2018, 45 (8): 54-62.  doi:10.11896/j.issn.1002-137X.2018.08.010
Abstract PDF(2463KB) ( 488 )   
Aiming at the disadvantages of the shuffled frog leaping algorithm (SFLA), such as high computational complexity and poor optimization efficiency, an improved shuffled frog leaping algorithm (ISFLA) was proposed in this paper. The following improvements were made on the basis of SFLA. Firstly, updating only the worst frog in each group, as in SFLA, is replaced by updating all frogs in each group. This replacement increases the probability of obtaining high-quality solutions, omits the step of setting the number of iterations within a group, and thus improves optimization efficiency and operability. Secondly, the updating method based on the local optimum and the one based on the global optimum are combined into a hybrid disturbance updating method, which avoids tedious condition selection steps and further improves optimization efficiency. Finally, the random updating method is removed, avoiding the destruction of superior solutions and further enhancing overall optimization performance. ISFLA was tested on the benchmark functions from CEC2005 and CEC2015, and applied to multi-threshold gray and color image segmentation based on Renyi entropy. The experimental results show that ISFLA achieves higher optimization efficiency and is more suitable for threshold selection in multi-threshold image segmentation than SFLA and the state-of-the-art LSFLA.
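A minimal sketch of the first two ideas, assuming a standard SFLA-style position update: every frog in the group (not just the worst) moves toward either the group best or the global best, chosen at random as a stand-in for the "hybrid disturbance". The authors' exact update equations may differ.

```python
import random

def isfla_group_update(group, fitness, gbest):
    """Update every frog in a group (lists of floats, minimization).
    Each frog steps toward the group best or the global best; the move is
    kept only if it improves fitness -- an illustrative ISFLA-style pass."""
    best = min(group, key=fitness)
    updated = []
    for frog in group:
        guide = best if random.random() < 0.5 else gbest   # hybrid disturbance
        cand = [x + random.random() * (g - x) for x, g in zip(frog, guide)]
        updated.append(cand if fitness(cand) < fitness(frog) else frog)
    return updated
```

Because a move is kept only when it improves fitness, every frog's quality is monotonically non-decreasing across passes, which is the point of updating the whole group rather than the single worst frog.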
2D-to-3D Conversion Algorithm for Badminton Video
LIU Yang, QI Chun, YANG Jing-yi
Computer Science. 2018, 45 (8): 63-69.  doi:10.11896/j.issn.1002-137X.2018.08.011
Abstract PDF(4334KB) ( 510 )   
This paper proposed a 2D-to-3D conversion algorithm for badminton video. The most attractive part of badminton video is the foreground, and the core of depth map extraction is to separate the foreground objects accurately from the background. An improved GrabCut segmentation algorithm is used to extract foreground regions. A model of the background depth is constructed based on the scene structure. Depth values are assigned to the foreground based on the distance of scene objects from the viewpoint and on the background depth map, and then the foreground and background depths are merged. Finally, the synthesized stereo image pairs for 3D display are obtained by the DIBR formula. The experimental results show that the generated stereo images have good 3D perception performance.
Quality Evaluation of Color Image Based on Discrete Quaternion Fourier Transform
CHEN Li-li, ZHU Feng, SHENG Bin, CHEN Zhi-hua
Computer Science. 2018, 45 (8): 70-74.  doi:10.11896/j.issn.1002-137X.2018.08.012
Abstract PDF(2039KB) ( 640 )   
Quality evaluation of color images is of great significance in image acquisition, compression, storage, transmission and so on. However, traditional objective evaluation methods often lose some color information or ignore the integrity of the color image, so their results cannot be well consistent with subjective scores. This paper proposed an objective quality evaluation method for color images based on the discrete quaternion Fourier transform (DQFT). A color image is expressed as a quaternion matrix and the discrete quaternion Fourier transform is applied. Then the spectrum is divided into non-uniform bins, and a reduced-space representation of the spectrum is obtained by considering that the human visual system is sensitive to distortion in lower-frequency components and insensitive to higher-frequency ones. Next, the amplitude similarity and phase similarity between the distorted image and the reference image are computed. Taking into account the influence of both similarities on image quality, they are integrated by the entropy method to obtain the quality index of the distorted image. Finally, image databases were used to analyze the correlation between the objective scores and the subjective scores. The experimental results demonstrate the feasibility and effectiveness of the proposed method.
Network & Communication
Optimal Energy Allocation Algorithm with Energy Harvesting and Hybrid Energy Storage for Microscale Wireless Networks
YAO Xin-wei, ZHANG Meng-na, WANG Wan-liang, YANG Shuang-hua
Computer Science. 2018, 45 (8): 75-79.  doi:10.11896/j.issn.1002-137X.2018.08.013
Abstract PDF(1486KB) ( 497 )   
With the rapid development of nanotechnology and wireless networking technologies, small node size and constrained node energy greatly limit the applications of microscale wireless networks. Therefore, aiming at the problems that the storage structure of a traditional macro-network node is single and energy harvesting is unstable, a hybrid energy storage structure with a super-capacitor and a battery was proposed to overcome the limitation of battery-only energy storage in traditional wireless networks. Based on the proposed hybrid energy storage structure, a network throughput model with energy harvesting was presented by integrating a point-to-point duplex channel model and an energy loss coefficient. To maximize throughput, an analytical energy allocation model was presented by considering the transmission cost, and an optimal energy allocation algorithm was then proposed based on the model analysis. Because the harvested energy is distributed unequally across epochs, the algorithm allocates different amounts of energy to the capacitor and the battery, and uses the optimal transmission power and transmission time for data transmission. Experimental results demonstrate that the proposed algorithm can effectively maximize the total network throughput.
Gradient Descent Bit-flipping Decoding Algorithm Based on Updating of Variable Nodes
ZHANG Xuan, JIANG Chao, LI Xiao-qiang, YAN Sha
Computer Science. 2018, 45 (8): 80-83.  doi:10.11896/j.issn.1002-137X.2018.08.014
Abstract PDF(1426KB) ( 445 )   
The reliability metric of a variable node does not change as bits are flipped during iterative decoding, so the calculation of the flipping function is inaccurate, which degrades the decoding performance of the gradient descent bit-flipping (GDBF) algorithm. Based on an analysis of the GDBF decoding algorithm, a weighted GDBF algorithm based on the updating of variable nodes was proposed. The algorithm introduces extrinsic reliability information weights of the check nodes and update rules of the variable nodes into the flipping function, which makes its calculation more accurate. Simulation results show that the BER performance of the proposed algorithm is better than that of the GDBF decoding algorithm over the additive white Gaussian noise channel.
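For context, the standard GDBF inversion (flipping) function for bit k is E_k = x_k·y_k + Σ over adjacent checks of the check parity; the sketch below adds a single extrinsic weight w on the check term as a stand-in for the paper's weighting (the paper's variable-node update rules are more elaborate).

```python
def gdbf_metric(k, x, y, checks, w=0.8):
    """GDBF inversion function for bit k.
    x: hard decisions in {-1,+1}; y: real channel outputs (BPSK over AWGN);
    checks: iterable of tuples of bit indices per parity check.
    w weights the extrinsic (check-node) term -- illustrative weighting."""
    s = x[k] * y[k]                    # correlation with the channel output
    for c in checks:
        if k in c:
            prod = 1
            for j in c:
                prod *= x[j]           # +1 if the check is satisfied, else -1
            s += w * prod
    return s
```

A decoder flips the bit(s) with the smallest metric: a bit that disagrees with its channel value and sits on unsatisfied checks scores lowest and is flipped first.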
Sandpile Model Based Load Balancing Algorithm in Wireless Mesh Networks
ZHANG Yun-chun, LI Long-bao, YAO Shao-wen, HU Jian-tao, ZHANG Chen-bin
Computer Science. 2018, 45 (8): 84-87.  doi:10.11896/j.issn.1002-137X.2018.08.015
Abstract PDF(3072KB) ( 533 )   
The shortest-path-routing-based load balancing mechanisms widely used in wireless networks result in congestion on some overloaded nodes, which seriously degrades network transmission performance. Meanwhile, with the wide deployment of wireless networks and increasing application requirements, optimization and improvement of existing load balancing mechanisms are urgently needed. Consequently, on the basis of the "collapse" mechanism of the sandpile model and its improvement, a load balancing algorithm suitable for wireless mesh networks was proposed. It focuses on designing the triggering condition under which load balancing is started, the computation of the candidate node set, and the traffic load distribution mechanism. The experimental results show that the sandpile-model-based load balancing algorithm outperforms other similar algorithms in packet drop ratio and throughput by 10.4% and 7%, respectively.
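The sandpile "collapse" idea can be sketched as a single toppling pass: any node whose load exceeds a threshold sheds the excess evenly to its neighbors. This toy version fixes a simple trigger and uniform redistribution; the paper's trigger condition, candidate set and distribution mechanism are more refined.

```python
def sandpile_balance(load, neighbors, threshold):
    """One toppling pass over a load map {node: traffic}.
    Nodes above threshold shed their excess evenly to their neighbors
    (illustrative; total load is conserved)."""
    new = dict(load)
    for node, l in load.items():
        if l > threshold and neighbors[node]:
            excess = l - threshold
            share = excess / len(neighbors[node])
            new[node] -= excess
            for m in neighbors[node]:
                new[m] += share
    return new
```

Repeating the pass until no node exceeds the threshold mimics an avalanche spreading load away from congested relays.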
Multicarrier Time Division Multiple Access and System Implementation Based on Fast Convolution Scheme
WANG Lei, LIANG Yan, SUN Shang-yong, WANG Guang-yu
Computer Science. 2018, 45 (8): 88-93.  doi:10.11896/j.issn.1002-137X.2018.08.016
Abstract PDF(3087KB) ( 460 )   
To solve the problems that the peak-to-average power ratio (PAPR) of orthogonal frequency division multiplexing (OFDM) is too high and that OFDM is sensitive to frequency offset, a multicarrier time division multiple access (MC-TDMA) scheme was put forward in this paper. Interleaved mapping and modified discrete Fourier transform (MDFT) filter bank technology effectively reduce the PAPR and enhance the system's robustness against frequency offset. MC-TDMA can be used in both uplink and downlink communication, and it was implemented here with interleaved mapping and MDFT filter banks. To enhance the flexibility of the system, a fast convolution scheme was used to realize MC-TDMA so that it can better handle complex 5G application scenarios. The fast-convolution MC-TDMA system was designed in terms of system structure and frequency-domain sampling filters. Its performance was simulated and compared with MC-TDMA. The results show that MC-TDMA can be realized by the fast convolution scheme, and that the resulting system outperforms the plain MC-TDMA system when parameters such as the overlap factor, the decimation factor and the roll-off factor are adjusted flexibly.
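The PAPR metric that motivates the scheme is simple to state: the peak instantaneous power of a sample block over its average power, in dB. The sketch below computes it and contrasts a single carrier (constant envelope, 0 dB) with a sum of subcarriers, whose aligned peaks drive the PAPR up; the demo signals are illustrative.

```python
import math
import cmath

def papr_db(samples):
    """Peak-to-average power ratio of a (possibly complex) sample block, in dB."""
    powers = [abs(s) ** 2 for s in samples]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

N = 16
# A single complex carrier has a constant envelope -> PAPR = 0 dB.
single = [cmath.exp(2j * math.pi * 3 * n / N) for n in range(N)]
# Eight subcarriers add coherently at n = 0 -> a large peak, high PAPR.
multi = [sum(cmath.exp(2j * math.pi * k * n / N) for k in range(8))
         for n in range(N)]
```

For the 8-subcarrier example the peak power is 64 against an average of 8, i.e. about 9 dB of PAPR, which is exactly the effect interleaved mapping and MDFT filter banks aim to reduce.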
Research on Underwater Acoustic Channel Model and Its Calculation Method Based on SOC
WU Peng, ZHOU Jie, CHENG Jiang-gao-lu
Computer Science. 2018, 45 (8): 94-99.  doi:10.11896/j.issn.1002-137X.2018.08.017
Abstract PDF(3011KB) ( 745 )   
This paper studied the line-of-sight and non-line-of-sight environments of the wireless communication channel in underwater acoustic communication. A geometric reference model was introduced and related models were designed. Assuming that an unlimited number of scatterers are uniformly distributed on a two-dimensional vertical cross-section of the 3D underwater space, this paper derived the probability density function of the signal arrival time, the time autocorrelation function and the expression of the Doppler power spectral density, and analyzed the influence of several main parameters on the channel statistics. Based on the assumed reference model, a Sum of Cisoids (SOC) underwater acoustic channel model and two effective methods for calculating the required parameters were proposed, and their performance was compared. This research broadens the research direction of underwater wireless channel modeling, greatly reduces the computational cost, and reduces the complexity of model design and simulation.
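A Sum-of-Cisoids simulator realizes the channel gain as a finite sum of complex exponentials, h(t) = Σ c_n·exp(j(2π f_n t + θ_n)); the paper's contribution is how to compute the gains, Doppler frequencies and phases, which are simply taken as inputs in this sketch.

```python
import math
import cmath

def soc_channel(t, gains, freqs, phases):
    """Sum-of-Cisoids channel gain at time t:
    h(t) = sum_n c_n * exp(j * (2*pi*f_n*t + theta_n)).
    gains c_n, Doppler frequencies f_n (Hz) and phases theta_n are the
    model parameters the paper's calculation methods would supply."""
    return sum(c * cmath.exp(1j * (2 * math.pi * f * t + th))
               for c, f, th in zip(gains, freqs, phases))
```

Sampling h(t) over time yields a fading envelope whose autocorrelation and Doppler spectrum approach the reference model's as the number of cisoids grows, which is why SOC simulators are cheap to run.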
Time-aware Minimum Area Task Scheduling Algorithm Based on Backfilling Algorithm
YUAN Jia-xin, CHEN Jian-xin, XIAO Jun, WU Dao-liang
Computer Science. 2018, 45 (8): 100-104.  doi:10.11896/j.issn.1002-137X.2018.08.018
Abstract PDF(2339KB) ( 672 )   
In cloud computing, the task scheduling algorithm directly affects the performance of the cloud computing system, so a good task scheduling algorithm can not only reduce the pressure on the cloud data center and handle users' large volumes of data requests faster and better, but also give users a better experience. The existing backfilling algorithm considers a single index and its backfilling performance is poor, resulting in longer final completion times and longer task delays. To overcome these limitations, an MRA algorithm based on the backfilling algorithm was proposed, in which the backfilling operation is performed on the basis of the relationship between the number of processor cores a task requests and its execution time. The backfilling operation also considers the load distribution of the virtual machines to achieve a degree of load balancing. Experimental results show that the MRA algorithm performs excellently in terms of maximum task completion time, task queue waiting delay and virtual machine load distribution.
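Reading "minimum area" as cores × runtime, the candidate-selection step of backfilling can be sketched as: among waiting tasks that fit in the current idle window without delaying the reserved head job, pick the one with the smallest area. The job representation and tie-breaking are assumptions, not the paper's exact rule.

```python
def pick_backfill(waiting, free_cores, window):
    """Choose a backfill candidate from waiting jobs (dicts with 'cores' and
    'time'). A job qualifies if it needs no more than the idle cores and
    finishes within the idle window; among those, take the minimum-area job
    (cores * time). Returns None if nothing fits."""
    fits = [j for j in waiting
            if j["cores"] <= free_cores and j["time"] <= window]
    return min(fits, key=lambda j: j["cores"] * j["time"]) if fits else None
```

Preferring the smallest area keeps the idle "hole" as available as possible for later arrivals, which is the intuition behind a minimum-area backfilling rule.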
Coevolutionary Genetic Algorithm of Cloud Workflow Scheduling Based on Adaptive Penalty Function
XU Jian-rui, ZHU Hui-juan
Computer Science. 2018, 45 (8): 105-112.  doi:10.11896/j.issn.1002-137X.2018.08.019
Abstract PDF(2711KB) ( 529 )   
Cloud computing provides a more efficient operating environment for the execution of large-scale scientific workflow applications. To solve the cost optimization problem of scientific workflow scheduling in the cloud environment, a coevolutionary workflow scheduling genetic algorithm (CGAA) was proposed. The algorithm introduces an adaptive penalty function into a GA with strict constraints. Through coevolution, it adaptively adjusts the crossover and mutation probabilities of population individuals to accelerate convergence and prevent premature convergence of the population. Simulation experiments on four kinds of real-world scientific workflows show that, compared with similar algorithms, the scheduling schemes obtained by CGAA perform better in satisfying workflow deadline constraints and reducing the total execution cost of tasks.
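One common shape for an adaptive penalty is to let the penalty weight grow with the generation count, so early generations may explore deadline-violating schedules while later ones are forced feasible. The sketch below assumes that shape; the paper's actual penalty function may differ.

```python
def adaptive_penalty(cost, violation, gen, max_gen, base=1.0):
    """Penalized fitness for a candidate schedule (minimization).
    cost: total execution cost; violation: amount by which the schedule
    exceeds its deadline (0 if feasible). The penalty weight ramps up
    linearly with the generation -- an assumed, illustrative schedule."""
    return cost + base * (gen / max_gen) * violation
```

At generation 0 an infeasible schedule is judged on cost alone; by the final generation the full violation is charged, steering the population toward deadline-satisfying, low-cost schedules.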
Node Position Prediction Method for Mobile Wireless Sensor Networks
XIA Yang-bo, YANG Wen-zhong, ZHANG Zhen-yu, WANG Qing-peng, SHI Yan
Computer Science. 2018, 45 (8): 113-118.  doi:10.11896/j.issn.1002-137X.2018.08.020
Abstract PDF(2159KB) ( 436 )   
In view of the defects that existing position prediction methods for mobile wireless sensor networks have low accuracy and rely on a large amount of historical movement path data, this paper proposed an A-USVC position prediction method based on uncertain support vector machines. The method uses the node membership vectors collected by nodes to construct a classification prediction model. From the constructed prediction model and the computed deflection direction of the mobile node, the location of an unknown node is determined, and thus the position of the unknown mobile node can be predicted. Simulations show that the proposed method improves accuracy by 35% compared with the traditional Markov prediction model, and by 19% compared with a neural network prediction method. The A-USVC position prediction method effectively improves position prediction accuracy with low computational complexity, and maintains good prediction ability even with small samples.
High-throughput and Load-balanced Node Access Scheme for RF-energy Harvesting Wireless Sensor Networks
CHI Kai-kai, WEI Xin-chen, LIN Yi-min
Computer Science. 2018, 45 (8): 119-124.  doi:10.11896/j.issn.1002-137X.2018.08.021
Abstract PDF(1462KB) ( 411 )   
For traditional wireless sensor networks (WSNs), practical applications are greatly restricted by inconvenient or even impossible battery replacement. This paper considered RF-energy harvesting WSNs in which the positions of energy sources, nodes and base stations (i.e., sinks) are given, and studied how to assign an access base station to each node so as to maximize the total throughput of the network nodes while satisfying the load balancing constraints of all base stations. Firstly, the energy harvesting model and the information transmission model were built. Then, the node access problem was modeled as a 0-1 integer programming problem. Next, a low-complexity algorithm and a greedy algorithm were proposed to solve this problem. Simulation results demonstrate that the node access scheme obtained by the greedy algorithm achieves higher total network throughput than the low-complexity algorithm. Owing to its relatively high complexity, the greedy scheme suits scenarios where the number of nodes is not very large, whereas the low-complexity scheme suits scenarios with a large number of nodes.
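A greedy pass for this kind of 0-1 assignment can be sketched as: sort all (node, base station) pairs by achievable throughput and assign in decreasing order while a simple per-station load cap holds. The cap here counts attached nodes, a simplified stand-in for the paper's load-balancing constraints.

```python
def greedy_access(throughput, cap):
    """throughput[n][b]: rate if node n attaches to base station b.
    Greedily commit the best remaining (node, BS) pair, respecting a
    per-BS cap on attached nodes (simplified load-balance constraint).
    Nodes whose stations are all full remain unassigned."""
    pairs = sorted(((r, n, b)
                    for n, row in throughput.items()
                    for b, r in row.items()), reverse=True)
    load, assign = {}, {}
    for r, n, b in pairs:
        if n not in assign and load.get(b, 0) < cap:
            assign[n] = b
            load[b] = load.get(b, 0) + 1
    return assign
```

With the cap binding, a node may be pushed to its second-best station, trading a little individual rate for balanced station loads, which is the trade-off the 0-1 program formalizes.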
Characteristic Analysis of Urban Public Transport Networks Based on Space-P Complex Network Model
KONG Fan-yu, ZHOU Yu-feng, LI Xian-zhong
Computer Science. 2018, 45 (8): 125-130.  doi:10.11896/j.issn.1002-137X.2018.08.022
Abstract PDF(3904KB) ( 528 )   
To analyze the overall performance of the transfer network in urban public transport networks, an analysis method based on complex network theory was proposed. Firstly, the public transport network is modeled as a topology represented by the Space-P method based on graph theory. Then, the degree distribution, average shortest path length, clustering coefficient, closeness centrality and betweenness centrality of the network are analyzed statistically. Taking the public bus network in Beijing as an example, the analysis shows that the Beijing public transport network has small-world characteristics: the probability of needing a transfer is high, but transfers are convenient. The specific geographical information of the relevant stations was also given, which can provide a reference for public transportation planning departments to optimize the public transportation network.
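In the Space-P representation, two stops are linked whenever at least one route serves both, so a shortest path of k hops corresponds to k−1 transfers. A minimal sketch of the construction and of the transfer count (route data hypothetical):

```python
from collections import deque
from itertools import combinations

def space_p_graph(routes):
    """Space-P model: connect every pair of stops that share a route."""
    adj = {}
    for route in routes:
        for a, b in combinations(set(route), 2):
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
    return adj

def transfers(adj, src, dst):
    """Transfers needed = BFS shortest-path hops in Space-P minus one."""
    seen, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            return seen[u] - 1
        for v in adj.get(u, ()):
            if v not in seen:
                seen[v] = seen[u] + 1
                q.append(v)
    return None  # unreachable
```

On such a graph, the average shortest path length directly measures average transfer counts, which is why Space-P is the natural model for transfer analysis.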
Routing Optimization Algorithm of Wireless Sensor Network Based on Improved SVM
HAN Ye-fei, BAI Guang-wei, ZHANG Gong-xuan
Computer Science. 2018, 45 (8): 131-133.  doi:10.11896/j.issn.1002-137X.2018.08.023
Abstract PDF(3352KB) ( 363 )   
To reduce the large energy consumption of current wireless sensor network routing algorithms, a novel wireless sensor network routing algorithm based on an improved SVM (PSO-LSSVM) was designed. Firstly, a mathematical model of routing energy consumption for the wireless sensor network is established. Secondly, the residual energy of the nodes is estimated online with the combined model, and the route with minimum energy consumption is selected for data transfer. Finally, the performance of the algorithm is tested on the Matlab platform. The results show that the proposed algorithm improves the reliability of data transmission and reduces the average transmission delay, and its overall performance is better than that of other wireless sensor network routing algorithms.
Information Security
Attribute-based Revocation Scheme in Cloud Computing Environment
ZHANG Guang-hua, LIU Hui-meng, CHEN Zhen-guo
Computer Science. 2018, 45 (8): 134-140.  doi:10.11896/j.issn.1002-137X.2018.08.024
Aiming at the problem of revoking access rights to data shared under ciphertext-policy attribute-based encryption in the cloud environment, an attribute-based revocation scheme was proposed. In the scheme, a trusted third party searches the globally identified user attribute set for the subset satisfying the ciphertext access structure, generates a key component with the same global identity for each attribute in this intersection, and combines the key components into the user's private key. When a revocation occurs, the scheme updates the key component of the revoked user attribute and distributes it to the other users holding the same attribute; at the same time, a corresponding re-encryption key is generated and the ciphertext is re-encrypted in the cloud. Security analysis and experiments show that the scheme is secure against chosen-plaintext attacks, effectively realizes real-time attribute revocation, and solves the key distribution synchronization problem of multi-authority structures. A hash function keeps the ciphertext length constant, reducing resource cost and meeting the security requirements of real cloud environments.
Web Server Fingerprint Identification Technology Based on KNN and GBDT
NAN Shi-hui, WEI Wei, WU Hua-qing, ZOU Jing-rong, ZHAO Zhi-wen
Computer Science. 2018, 45 (8): 141-145.  doi:10.11896/j.issn.1002-137X.2018.08.025
Conventional Web server fingerprinting is easily defeated by modified response headers, making its results inaccurate, while existing machine-learning-based recognition methods need to send a large number of requests for identification. To solve these problems, by analyzing the feature relations of response headers, a Web server fingerprint recognition algorithm based on KNN and GBDT was proposed. Only two different types of exception requests need to be sent to identify the server's fingerprint type and version range. Compared with existing Web server fingerprint recognition algorithms, the proposed algorithm improves both recognition speed and recognition accuracy.
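The paper combines KNN with GBDT; the KNN voting half of that idea can be sketched on hypothetical numeric encodings of response-header features (the feature values and labels below are invented for illustration, not the paper's dataset):

```python
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    samples; `train` is a list of (feature_vector, label) pairs."""
    nearest = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    votes = {}
    for _, label in nearest:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# hypothetical encodings of header features (header order, error-page
# length, status-line quirks, ...) for two server families
train = [
    ((1.0, 0.2), 'Apache'),
    ((0.9, 0.3), 'Apache'),
    ((0.1, 0.9), 'nginx'),
    ((0.2, 0.8), 'nginx'),
]
```

In the paper's setting, a GBDT classifier would then refine the coarse KNN decision into a version range; that stage is omitted here.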
Trust Network Based Collaborative Filtering Recommendation Algorithm
ZHANG Hong-bo, WANG Jia-lei, ZHANG Li-juan, LIU Zhi-hong
Computer Science. 2018, 45 (8): 146-150.  doi:10.11896/j.issn.1002-137X.2018.08.026
The problems of data sparsity and cold start cannot be solved by classical collaborative filtering recommendation schemes. Although they can be alleviated by exploiting users' trust networks, the performance of such schemes still needs improvement. Based on the ubiquitous observation that "if a trusts b, then the similarity between a and b is relatively high", this paper proposed a collaborative filtering recommendation algorithm that exploits a penalty-and-reward mechanism to further improve performance. It was compared with classical collaborative filtering algorithms and existing trust-based recommendation algorithms in terms of coverage and accuracy. The results show that the performance of the proposed algorithm is improved.
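The penalty-and-reward idea can be illustrated with a small sketch (the specific rule and the strength `alpha` are illustrative assumptions, not the paper's exact mechanism): a raw user-user similarity is boosted when a trust link exists and damped when it does not.

```python
def adjusted_similarity(sim, trusts, alpha=0.2):
    """Apply a simple penalty-and-reward rule to a raw similarity in [0, 1].
    `trusts` is True when a trust link exists between the two users;
    `alpha` is a hypothetical adjustment strength."""
    if trusts:
        return min(1.0, sim + alpha * (1.0 - sim))  # reward: move toward 1
    return sim * (1.0 - alpha)                      # penalty: shrink toward 0
```

The adjusted similarities would then replace the raw ones as weights in the usual neighborhood-based rating prediction.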
Adaptive Pixel Block Reference Value Based Reversible Data Hiding in Encrypted Domain
LIU Yu, YANG Bai-long, ZHAO Wen-qiang, YUAN Zhi-hua
Computer Science. 2018, 45 (8): 151-155.  doi:10.11896/j.issn.1002-137X.2018.08.027
To address the limited capacity of reversible data hiding in the encrypted domain, and the irreversibility, inefficiency and complexity of existing algorithms, a simple and efficient technique based on pixel block reference values was proposed. The scheme divides the image adaptively with a quadtree; some image block mean values are computed and retained, the image is encrypted with a pseudo-random sequence, and the secret information is embedded through an addition operation. The secret information can be extracted independently, the image can be decrypted independently, and the carrier image can be completely restored. Experiments show that the method is simple, efficient and easy to compute, with good hiding capacity, reversibility and separability.
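The addition-based embedding step can be sketched in heavily simplified form (hypothetical: the paper's scheme works on quadtree blocks and retains only block means, whereas this toy version retains the full reference values so extraction and exact recovery are obvious):

```python
def embed(pixels, bits):
    """Additively embed one bit per pixel, modulo 256."""
    return [(p + b) % 256 for p, b in zip(pixels, bits)]

def extract_and_restore(marked, reference):
    """Recover the embedded bits and the original pixels from the
    retained reference values (here, the original pixels themselves)."""
    bits = [(m - r) % 256 for m, r in zip(marked, reference)]
    return bits, reference
```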
Optimistic Certified Email for Line Topology
GUO Li-juan, LV Xiao-lin
Computer Science. 2018, 45 (8): 156-159.  doi:10.11896/j.issn.1002-137X.2018.08.028
Most optimistic certified email protocols use ring, star or mesh topologies, or hybrids of the three. In practice, however, certified emails are often collected in a fixed order. At present, only the fair exchange protocol for mesh topology proposed by Asokan can be applied to certified email over a line topology. Motivated by this, this paper proposed a new multi-party certified email protocol for line topology, using an efficient signcryption scheme for signature and message authentication. The scheme needs only 4(n-1) passes when all parties are honest and 8n-4 passes in the worst case, which is much more efficient than applying Asokan's protocol to line topology (4n(n-1) passes when all parties are honest and 8n^2-n-10 passes in the worst case). Besides, the freshness of messages can be verified by timestamps. The analysis shows that the protocol is fair and non-repudiable.
Software & Database Technology
Software Defect Prediction Based on Improved Deep Forest Algorithm
XUE Can-guan, YAN Xue-feng
Computer Science. 2018, 45 (8): 160-165.  doi:10.11896/j.issn.1002-137X.2018.08.029
Software defect prediction is an important way to rationally allocate software testing resources and improve software quality. To overcome the inability of shallow machine learning algorithms to deeply mine the characteristics of software data, an improved deep forest algorithm named deep stacking forest (DSF) was proposed. The algorithm first transforms the original features by random sampling to enhance their expressive power, and then uses a stacking structure to perform layer-by-layer representation learning on the transformed features. Applied to defect prediction on the Eclipse dataset, the deep stacking forest shows significant improvement over the deep forest in both predictive performance and time efficiency.
Analysis of Java Open Source System Evolution Based on Complex Network Theory
TANG Qian-wen, CHEN Liang-yu
Computer Science. 2018, 45 (8): 166-173.  doi:10.11896/j.issn.1002-137X.2018.08.030
With the fast iteration of software versions and the rapid growth of software scale, software design and quality issues have aroused widespread concern in the IT field. Using complex network theory to study the characteristics of software systems has become an important way to address these problems. By modeling source code dependencies as a network, the macroscopic structure of the code can be understood in depth and its overall evolution trend grasped, helping developers optimize the software architecture. As an open-source mainstream Java EE application server, Tomcat is widely used in industry. Applying complex network methods, this paper shows that the class dependency networks of all historical Tomcat versions satisfy the small-world and scale-free properties. A deeper analysis of the evolution across nine versions shows that Tomcat exhibits preferential attachment, which helps it maintain robustness.
Approach for Path-oriented Test Cases Generation Based on Improved Genetic Algorithm
BAO Xiao-an, XIONG Zi-jian, ZHANG Wei, WU Biao, ZHANG Na
Computer Science. 2018, 45 (8): 174-178.  doi:10.11896/j.issn.1002-137X.2018.08.031
Using genetic algorithms to generate test cases for path coverage is a hot topic in software testing automation. To address the premature convergence and low search efficiency of the traditional standard genetic algorithm, this paper designed adaptive crossover and mutation operators, enhancing the algorithm's global search capability. Meanwhile, within a dynamic generation framework, a new fitness function was introduced that combines approach level and branch distance and takes the nesting degree of branches into account when computing the fitness of test data. Experimental results confirm that the improved method generates test cases for path coverage more efficiently than the traditional method.
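The combined fitness of approach level plus normalized branch distance, as commonly used in search-based test generation, can be sketched as follows (the normalization base 1.001 is a conventional choice assumed here, not stated in the abstract):

```python
def branch_distance(lhs, rhs):
    """Distance to satisfying a hypothetical `lhs == rhs` branch predicate;
    zero exactly when the branch would be taken."""
    return abs(lhs - rhs)

def normalize(d):
    """Squash a non-negative branch distance into [0, 1)."""
    return 1.0 - 1.001 ** (-d)

def fitness(approach_level, d):
    """Fitness to minimize: number of missed nesting levels (approach
    level) plus the normalized branch distance at the diverging branch."""
    return approach_level + normalize(d)
```

Lower fitness means the individual is closer to covering the target path; a fitness of zero means the path is covered.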
Dynamic Semantics of Aspect-oriented Programming
XIE Gang, JIANG Qiang, SHI Lei
Computer Science. 2018, 45 (8): 179-185.  doi:10.11896/j.issn.1002-137X.2018.08.032
At present, many researchers have developed various formal semantics for aspect-oriented programs. However, none of these semantics is easily understood by software designers and developers. Building on existing research, this paper defined a dynamic semantics for aspect-oriented programs using the definition of design in Unifying Theories of Programming, and illustrated its usage with a case study.
Artificial Intelligence
Supervised Neighborhood Rough Set
WANG Lin-na, YANG Xin, YANG Xi-bei
Computer Science. 2018, 45 (8): 186-190.  doi:10.11896/j.issn.1002-137X.2018.08.033
The uncertainty of information cannot be efficiently reduced by the traditional neighborhood rough set with a single threshold. By considering objects' existing or predicted category labels, this paper introduced two kinds of thresholds, intra-class and inter-class, and proposed a novel neighborhood granulation method to construct a supervised neighborhood rough set model. This model is a generalized form of the conventional neighborhood rough set. Moreover, theorems on the monotonic variation of approximation quality and conditional entropy were presented by analyzing how neighborhood granules change under the double thresholds. Finally, the performance of the model was demonstrated on four UCI data sets. The results show that adjusting the supervised threshold parameters improves the neighborhood granulation and reduces the uncertainty of information.
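The single-threshold construction that the supervised model generalizes can be sketched as follows (toy data; the paper's model would use separate intra-class and inter-class thresholds instead of the single `delta` here):

```python
import math

def neighborhood(xs, i, delta):
    """Indices of samples within Euclidean distance delta of sample i."""
    return {j for j, x in enumerate(xs) if math.dist(xs[i], x) <= delta}

def lower_approximation(xs, labels, target, delta):
    """Samples whose entire delta-neighborhood carries the target label;
    the classical neighborhood rough set lower approximation."""
    return {i for i in range(len(xs))
            if all(labels[j] == target for j in neighborhood(xs, i, delta))}

# hypothetical two-class sample set
xs = [(0, 0), (0.1, 0), (5, 5)]
labels = ['a', 'a', 'b']
```

Approximation quality is then the fraction of samples falling in some class's lower approximation, which is the quantity the paper's monotonicity theorem tracks.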
Emotion Classification for Readers Based on Multi-view Multi-label Learning
WEN Wen, CHEN Ying, CAI Rui-chu, HAO Zhi-feng, WANG Li-juan
Computer Science. 2018, 45 (8): 191-197.  doi:10.11896/j.issn.1002-137X.2018.08.034
Traditional reader emotion classification mainly focuses on the emotional polarity of readers' comments, from the perspective of sentiment analysis. However, readers' comments sometimes cannot be collected, which reduces the effectiveness and timeliness of emotion classification. How to integrate multi-view information, including news texts and readers' comments, to judge readers' emotions more accurately is therefore a challenging problem. This paper proposed a multi-view multi-label latent semantic indexing (MV-MLSI) model, which maps multi-view text features from different perspectives to a low-dimensional semantic space. A mapping function between features and labels is established, and the model is solved by minimizing the reconstruction error. An optimization algorithm is also presented to make effective predictions of readers' emotions. Compared with traditional models, the proposed model not only takes full advantage of multi-view information, but also accounts for the correlation among labels. Experiments on a multi-view news text dataset demonstrate that the method achieves higher accuracy and stability.
Optimal Model of Multi-objective Supply Chain Based on Improved IWD Algorithm
FANG Qing, SHAO Yuan
Computer Science. 2018, 45 (8): 198-202.  doi:10.11896/j.issn.1002-137X.2018.08.035
To minimize the selling cost and delivery time of a manufacturing supply chain, a multi-objective supply chain optimization model based on an improved intelligent water drop algorithm was proposed. The model improves supply chain efficiency by considering both cost and time during option selection, minimizing sales cost and lead time simultaneously. Using the Pareto optimality criterion, the traditional intelligent water drop algorithm is modified to obtain a Pareto set that minimizes the two objectives. The algorithm was tested on three examples and compared with ant colony optimization using the generational distance and hyper-area ratio indices. The results show that the proposed method performs better: the generated set is closer to the true Pareto set and covers a larger area of the solution region, with high computational efficiency.
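The Pareto dominance filter at the heart of such a modification can be sketched as follows (minimizing both objectives, e.g. cost and lead time; the candidate points are illustrative):

```python
def dominates(a, b):
    """a dominates b when it is no worse in every objective and strictly
    better in at least one (both objectives minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# hypothetical (cost, lead time) pairs for candidate supply chain options
pts = [(1, 5), (2, 2), (5, 1), (3, 3)]
```

During the water drop search, only non-dominated solutions would be kept in the archive that approximates the Pareto set.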
Improved Bayesian Algorithm Based Automatic Classification Method for Bibliography
YANG Xiao-hua, GAO Hai-yun
Computer Science. 2018, 45 (8): 203-207.  doi:10.11896/j.issn.1002-137X.2018.08.036
The Bayesian algorithm is widely used in automatic bibliography classification, usually adopting differential evolution to estimate its probability terms. However, traditional differential evolution easily falls into local optima during this estimation, which reduces the accuracy of Bayesian classification. To address this problem, this paper proposed an automatic bibliography classification method based on an improved Bayesian algorithm. The optimal solution of the probability terms is estimated through multi-parent mutation and crossover operations, improving classification accuracy. In the classification process, the ICTCLAS system preprocesses the text, term frequency-inverse document frequency features are extracted, and the improved Bayesian estimation method trains on and classifies these features, achieving automatic bibliography classification. Simulation results show that the method has high classification accuracy.
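The TF-IDF feature extraction step mentioned above can be sketched with the standard weighting (a minimal version; real systems add smoothing and normalization, and the tokenization here is assumed to have been done by a segmenter such as ICTCLAS):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Per-document TF-IDF weights; `docs` is a list of token lists.
    TF is the within-document frequency, IDF is log(N / document frequency)."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))
    out = []
    for d in docs:
        tf = Counter(d)
        out.append({t: (tf[t] / len(d)) * math.log(n / df[t]) for t in tf})
    return out

# toy corpus of two already-tokenized documents
docs = [['a', 'b'], ['a', 'c']]
```

Terms appearing in every document (like `'a'` here) get weight zero, while distinctive terms get positive weight, which is what makes the features useful for the Bayesian classifier.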
Intelligent Classification of Massive Information Based on Conflict Game Algorithm
ZENG Jin-song, RAO Yun-bo
Computer Science. 2018, 45 (8): 208-212.  doi:10.11896/j.issn.1002-137X.2018.08.037
In mass information classification, text models and similarity measures are commonly used, but they cannot fully represent information attributes, which leads to conflicts during classification. An intelligent classification method for massive information based on conflict game theory was proposed. After extracting information features, the classification strategy is determined according to the orthogonality of the massive information. Nash equilibrium and Pareto optimal strategies are introduced to seek the optimal solution of the classification problem and improve the strategy. A conflict detection method determines whether a classification conflict exists; if so, the conflict is transformed into a constraint satisfaction problem. By analyzing the constraint variables of the classification problem, the content of operational conflicts is determined, and an expression for conflict discrimination in mass information classification is established, realizing intelligent classification of massive information. Experimental results show that the proposed method obtains better classification results with a relatively simple process and little impact on network operation, providing practical reference for applying conflict game algorithms to massive information classification.
Deeply Hierarchical Bi-directional LSTM for Sentiment Classification
ZENG Zheng, LI Li, CHEN Jing
Computer Science. 2018, 45 (8): 213-217.  doi:10.11896/j.issn.1002-137X.2018.08.038
Comments on goods, films and other items help assess people's preferences, providing a reference for prospective buyers and helping businesses adjust their offerings to maximize profits. In recent years, the powerful representation and learning ability of deep learning, especially the long short-term memory (LSTM) model, has provided good support for understanding text semantics and capturing the emotional tendency of texts. A comment is a form of sequential data that expresses semantic information through the forward arrangement of words. LSTM is a sequential model that reads a comment forward and encodes it into a real-valued vector capturing the comment's latent semantics, which can be stored and processed by a computer. In this paper, two LSTMs read comments in the forward and backward directions respectively, obtaining bidirectional semantic information, and deep features are obtained by stacking multiple bidirectional LSTM layers. Finally, a sentiment classification layer performs the classification. Experimental results show that the proposed method outperforms the baseline LSTM, indicating that the deeply hierarchical bidirectional LSTM (DHBL) captures more accurate text information; it also achieves better results than a convolutional neural network (CNN) model.
Graphics, Image & Pattern Recognition
Intelligent Geometry Size Measurement System for Logistics Industry
LI Juan, ZHOU Fu-qiang, LI Zuo-xin, LI Xiao-jie
Computer Science. 2018, 45 (8): 218-222.  doi:10.11896/j.issn.1002-137X.2018.08.039
The rapid development of the Internet and e-commerce has brought disruptive change to the logistics industry. However, problems remain, such as high cost, low-tech equipment and low efficiency in the distribution system. The use of goods' geometric information, which can greatly help packing, classification and transport, has long been a weak link in the industry. Focusing on these problems, this paper built an intelligent system to measure the geometric size of logistics goods. Based on a stereo vision system, and combining a disparity algorithm for 3D reconstruction with a feature extraction algorithm, the system computes the geometric size of general logistics goods against complex backgrounds and is little affected by lighting. Experimental results show that the system computes geometric sizes quickly, with a mean measurement error below 2% and a maximum error below 3%, meeting the basic requirements of the logistics industry.
Eye-movement Analysis of Visual Similarity Perception on Synthesized Texture Images
GUO Xiao-ying, LI Liang, GENG Hai-jun
Computer Science. 2018, 45 (8): 223-228.  doi:10.11896/j.issn.1002-137X.2018.08.040
Global and local features are very important for texture visual perception. This paper investigated how the global and local features of texture influence eye movement patterns, and the relationship between eye movement patterns and visual similarity selection. First, texture images were synthesized by separately controlling global and local textural features with the primitive, grain, and point configuration (PGPC) texture model, a mathematical-morphology-based texture model. Three scenes were used, each containing three textures (A, B and S). Second, a visual similarity selection experiment was conducted to acquire eye movement data while subjects judged the similarity of texture scenes under the task "Which texture is more similar to texture S, texture A or texture B?". Data were collected with a Tobii T60 eye tracker in two tests on 89 subjects, and analyzed in terms of fixation point variance within each ROI and fixation transfer counts between ROIs. The results indicate that the global and local features of texture influence the eye movement pattern: a texture image that is globally similar to the compared texture attracts dispersed fixation points, while one that is locally similar attracts concentrated fixation points. Moreover, the final visual similarity selection is related to the visual search between different ROIs.
People Counting Method Based on Adaptive Overlapping Segmentation and Deep Neural Network
GUO Wen-sheng, BAO Ling, QIAN Zhi-cheng, CAO Wan-li
Computer Science. 2018, 45 (8): 229-235.  doi:10.11896/j.issn.1002-137X.2018.08.041
People counting based on surveillance cameras is fundamental for crowd behavior analysis, resource optimization and allocation, modern security, commercial information collection and intelligent management, so it has significant research meaning and application value. Recently, digital image processing technology and deep learning theory have been improving constantly, greatly promoting the study of camera-based people counting. However, problems remain, such as low counting accuracy and the heavy computational cost of high-definition video. When object scales vary widely, the accuracy of detection-based people counting decreases significantly. To address this problem, this paper proposed a people counting method based on adaptive overlapping segmentation and a deep neural network. The idea comes from the attention mechanism and makes full use of the scales and numbers of head objects in overlapping segments. Experimental results show that the adaptive overlapping segmentation algorithm can be combined with existing neural-network-based object detection models, and that, compared with applying such a detection model directly, the combined algorithm greatly improves counting accuracy.
Unbalanced Crowd Density Estimation Based on Convolutional Features
QU Jia, SHI Zeng-lin, YE Yang-dong
Computer Science. 2018, 45 (8): 236-241.  doi:10.11896/j.issn.1002-137X.2018.08.042
Crowd density estimation plays a central role in intelligent monitoring. Deep neural networks usually outperform conventional approaches based on hand-crafted features owing to their data-driven nature, but they remain far from optimal because large-scale datasets are scarce. To address this problem, this paper investigated the feasibility of several solutions: training a shallow neural network from scratch, using the fully connected layer features of a pretrained deep neural network, and aggregating convolutional features with Fisher vectors (FV). For the problem of unbalanced distribution, several classification evaluation criteria were further proposed. Comprehensive experiments were carried out on the benchmark PETS2009 dataset. The results show that convolutional features outperform existing hand-crafted ones; deep convolutional features used via transfer learning usually outperform models trained from scratch; and the lower-layer features of simpler pretrained models such as AlexNet transfer better than those of more complicated ones such as VGGNet.
Stereo Matching Algorithm Based on Adaptive Support Weight Optimization
JIANG Ze-tao, WANG Qi, ZHAO Yan
Computer Science. 2018, 45 (8): 242-246.  doi:10.11896/j.issn.1002-137X.2018.08.043
Stereo matching is a classic problem and hot topic in image processing. To address the long running time of the original adaptive support weight (ASW) stereo matching algorithm and its high mismatching rate in occluded areas, an improved optimization method was proposed. Building on the adaptive support weight method, the Rank transform is used to improve it in terms of both parameter selection and matching performance, and the final disparity is obtained through effective disparity refinement. Simulation experiments produce image sequence disparity maps with high matching accuracy, showing that the method is feasible.
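The Rank transform used here is standard: each pixel is replaced by the count of pixels in its local window with lower intensity, making the subsequent matching cost robust to radiometric differences. A minimal sketch on a nested-list image:

```python
def rank_transform(img, r=1):
    """Rank transform: each pixel becomes the number of pixels in its
    (2r+1) x (2r+1) window whose intensity is strictly lower than the
    center pixel. Border windows are simply clipped to the image."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            center = img[y][x]
            count = 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and img[ny][nx] < center:
                        count += 1
            out[y][x] = count
    return out

# tiny example image
img = [[1, 2], [3, 4]]
```

Matching costs are then computed between the rank-transformed left and right images instead of the raw intensities.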
Improved Three-dimensional Otsu Image Segmentation Algorithm
QIU Guo-qing, XIONG Geng-yun, ZHAO Wen-ming
Computer Science. 2018, 45 (8): 247-252.  doi:10.11896/j.issn.1002-137X.2018.08.044
Aiming at the heavy computation and long running time of the three-dimensional Otsu image segmentation algorithm, this paper proposed an algorithm that reduces the iteration space and search space by combining one-dimensional Otsu with the cuckoo search algorithm. Simulation shows that the algorithm effectively reduces computing time. Meanwhile, to correct the erroneous segmentation caused by the traditional three-dimensional Otsu algorithm neglecting regions 2 to 7, a processing method was proposed that divides the pixels in regions 2 to 7 into noise and non-noise points and assigns all the pixels accordingly. Simulation results show that the segmentation of this method is superior to that of the traditional three-dimensional Otsu algorithm.
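The one-dimensional Otsu step used to shrink the search space selects the gray level maximizing between-class variance over a 256-bin histogram. A minimal sketch:

```python
def otsu_threshold(hist):
    """1D Otsu: return the gray level (last bin of the background class)
    maximizing between-class variance. `hist` is a 256-bin histogram."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0      # cumulative background weight
    sum0 = 0    # cumulative background intensity sum
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue  # one class empty: variance undefined
        w1 = total - w0
        m0 = sum0 / w0
        m1 = (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

The 3D version extends this with local mean and median histograms; the cuckoo search then only has to explore the region around the 1D result.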
Multi-object Tracking Algorithm Based on Kalman Filter
ZHAO Guang-hui, ZHUO Song, XU Xiao-long
Computer Science. 2018, 45 (8): 253-257.  doi:10.11896/j.issn.1002-137X.2018.08.045
Aiming at tracking failures caused by occlusion between objects, interleaving, or target drift in multi-object tracking, this paper proposed an occlusion-prediction tracking algorithm based on the Kalman filter and spatiograms. By combining the color histogram with the spatial distribution of color, spatiograms can distinguish different objects, so an object can still be tracked when interleaving or occlusion occurs. The state of each object is predicted by the Kalman filter. An occlusion mark is applied to objects that overlap with others, so that an occluded object missed by the detector can still be tracked in the next frame. Experiments on the 2D MOT 2015 dataset achieve an average tracking accuracy of 34.1%, showing that the algorithm improves multi-object tracking performance.
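The Kalman predict/update cycle that carries a track through occluded frames can be shown in its simplest scalar form (a constant-position model with assumed noise variances `q` and `r`; the paper's tracker would use a full state vector with position and velocity):

```python
def kalman_1d(z_seq, q=1e-3, r=0.5):
    """Scalar Kalman filter over a measurement sequence z_seq.
    q: process noise variance, r: measurement noise variance."""
    x, p = z_seq[0], 1.0          # initial state estimate and variance
    out = [x]
    for z in z_seq[1:]:
        p = p + q                  # predict: uncertainty grows
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update: blend prediction and measurement
        p = (1 - k) * p
        out.append(x)
    return out
```

During occlusion, only the predict step runs (no measurement), which is what lets a marked, undetected object be re-associated in the next frame.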
Salient Object Detection Algorithm Based on Sparse Recovery and Optimization
WANG Jun, WU Ze-min, YANG Wei, HU Lei, ZHANG Zhao-feng, JIANG Qing-zhu
Computer Science. 2018, 45 (8): 258-263.  doi:10.11896/j.issn.1002-137X.2018.08.046
To address the boundary ambiguity and low detection accuracy of current saliency detection algorithms that employ sparse representation, this paper proposed a new saliency detection algorithm based on sparse recovery and optimization. First, the RG filter is used to smooth the image. Then the SLIC algorithm segments the image; reliable background seeds are selected from the boundary and interior superpixel blocks to construct a dictionary, against which sparse recovery of the whole image is computed, and an initial saliency map is generated from the sparse recovery error. The initial saliency map is then refined with a modified optimization model, and the final saliency map is obtained through multiscale fusion. Experimental results on three public benchmark datasets show that the proposed algorithm outperforms current state-of-the-art methods, handles boundary saliency well and is highly robust.
Extraction of Palm Vein ROI Based on Maximal Inscribed Circle Algorithm
LIU Gang, ZHANG Jing, LI Yue-long
Computer Science. 2018, 45 (8): 264-267.  doi:10.11896/j.issn.1002-137X.2018.08.047
Aiming at the low clarity and information richness of extracted palm vein images, an algorithm based on maximal inscribed circle extraction of the region of interest was proposed. The original palm vein image is preprocessed and a regional grid is added. First, the grid lines serve as a reference to narrow the search area for the circle center, simplifying its determination. Then an initial radius is set and the grid width is used as the increment to grow the radius until the maximal inscribed circle is determined. The results show that the improved algorithm raises clarity and information richness by an average of 0.0102 and 0.0121 respectively, reduces the iterations needed to extract ROI images by 200 in iterative training, and, on four sets of images, reduces execution time by 10.7 ms, 10.2 ms, 11.3 ms and 10.8 ms respectively.
Research on Face Information Enhancement and Recognition Based on Convolutional Neural Network
WANG Yan, WANG Shuang-yin
Computer Science. 2018, 45 (8): 268-271.  doi:10.11896/j.issn.1002-137X.2018.08.048
When face images are collected, they are often blurry and subjects' poses vary greatly, so face recognition accuracy is low. To improve it, this paper proposed a new face recognition algorithm based on information enhancement with a convolutional neural network. Wavelet denoising is applied to the collected blurry face images; adaptive template matching is applied to the denoised output, and the face is segmented with an image segmentation method. The geometric invariance of the Radon scale transform is used to enhance the information of key facial feature points, and a convolutional neural network classifier classifies the enhanced feature points, realizing feature point optimization and accurate recognition. Experiments show that the proposed method is more accurate and can meet the application requirements of rapid recognition for large-scale face samples.
Self-adaptive Group Sparse Representation Method for Image Inpainting
GAN Ling, ZHAO Fu-chao, YANG Meng
Computer Science. 2018, 45 (8): 272-276.  doi:10.11896/j.issn.1002-137X.2018.08.049
Abstract PDF(2299KB) ( 533 )   
References | Related Articles | Metrics
This paper proposed an image inpainting algorithm based on self-adaptive group sparse representation to solve the problem of poor texture and structure clarity in repair results.Because texture and structure information differs across natural images,and to improve on the fixed image block size of the group structure in the original algorithm,firstly,a method is proposed to adaptively select the size of the sample image patch to construct an adaptive group structure.Secondly,singular value decomposition is conducted on each group to obtain an adaptive learned dictionary for the image patch group,and the Split Bregman iteration algorithm is used to solve the objective cost function.Finally,the adaptive dictionary and the sparse coding coefficients of each group are updated by adjusting the number of image patches in a group and the number of iterations to obtain a better restoration effect.The experimental results show that this method not only improves the peak signal-to-noise ratio and feature similarity index of the image,but also improves repair efficiency.
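The core idea — each patch group learns its own dictionary from its SVD, and sparse coding becomes shrinkage of the singular values — can be sketched as below; the Split Bregman solver and the adaptive patch sizing are simplified away, and the function name and `lam` parameter are assumptions:

```python
import numpy as np

def group_sparse_code(group, lam):
    """Sparse-code one patch group with its own SVD dictionary.

    group: 2-D array whose columns are similar patches.  The adaptive
    dictionary is the group's left singular vectors; soft-thresholding
    the singular values plays the role of the sparse coding step.
    """
    U, s, Vt = np.linalg.svd(group, full_matrices=False)
    s_shrunk = np.maximum(s - lam, 0.0)       # soft threshold on singular values
    return U @ np.diag(s_shrunk) @ Vt         # reconstructed (inpainted) group
```

Because similar patches are nearly low-rank as a group, a few large singular values carry the structure and the shrinkage suppresses the rest, which is the intuition behind group-based dictionaries.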
GLCM-based Adaptive Block Compressed Sensing Method for Image
DU Xiu-li, ZHANG Wei, GU Bin-bin, CHEN Bo, QIU Shao-ming
Computer Science. 2018, 45 (8): 277-282.  doi:10.11896/j.issn.1002-137X.2018.08.050
Abstract PDF(5724KB) ( 439 )   
References | Related Articles | Metrics
Block compressed sensing largely makes up for the resources and time consumed in reconstructing large images.However,there is an obvious block effect in the reconstructed image.Aiming at the inaccurate analysis of texture complexity that hinders reduction of the block effect in adaptive-sampling compressed sensing methods,this paper proposed an adaptive block compressed sensing method based on the gray-level co-occurrence matrix (GLCM).The texture feature of the image is analyzed by the co-occurrence matrix,and the sampling rate is then adaptively allocated according to the texture feature.Under the premise that the total sampling rate is unchanged,blocks with complex texture obtain a higher sampling rate,and blocks with simple texture obtain a lower one.Finally,SAMP (Sparsity Adaptive Matching Pursuit) is used for reconstruction.The simulation results show that the proposed method can effectively eliminate the block effect,especially for the partial blocks,and the performance of the reconstructed blocks is obviously improved.
Vein Recognition Algorithm Based on Supervised NMF with Two Regularization Terms
JIA Xu, SUN Fu-ming, LI Hao-jie, CAO Yu-dong
Computer Science. 2018, 45 (8): 283-287.  doi:10.11896/j.issn.1002-137X.2018.08.051
Abstract PDF(2216KB) ( 350 )   
References | Related Articles | Metrics
In order to make the extracted vein features have good clustering performance and thus be more conducive to correct identification,this paper proposed a recognition algorithm based on supervised Nonnegative Matrix Factorization (NMF).Firstly,the vein image is divided into blocks,and the original vein feature is acquired by fusing all sub-image features.Secondly,the sparsity and clustering property of the feature vectors are introduced as two regularization terms to improve the original NMF model.Then,the gradient descent method is used to solve the improved NMF model,achieving feature optimization and dimension reduction.Finally,new vein features are matched with the nearest neighbor algorithm to obtain the recognition results.Experimental results show that the false accept rate (FAR) and false reject rate (FRR) of the proposed recognition algorithm reach 0.02 and 0.03 respectively on three vein databases;in addition,the recognition time of 2.89 seconds meets the real-time requirement.
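The unregularized core of the model, V ≈ WH with W, H ≥ 0, can be sketched with the standard multiplicative updates; the paper's sparsity and clustering regularizers and its gradient-descent solver are omitted here, so this shows only the baseline the paper builds on:

```python
import numpy as np

def nmf(V, k, iters=200, eps=1e-9):
    """Plain multiplicative-update NMF: factor V (m x n) as W (m x k) H (k x n).

    Multiplicative updates keep W and H nonnegative by construction and
    monotonically decrease the Frobenius reconstruction error.
    """
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], k)) + eps
    H = rng.random((k, V.shape[1])) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update codes
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis
    return W, H
```

For recognition, the columns of H are the reduced-dimension features that would then be matched by nearest neighbor.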
Interdiscipline & Frontier
Multi-objective Verification of Web Service Composition Based on Probabilistic Model Checking
ZHOU Nv-qi, ZHOU Yu
Computer Science. 2018, 45 (8): 288-294.  doi:10.11896/j.issn.1002-137X.2018.08.052
Abstract PDF(1405KB) ( 537 )   
References | Related Articles | Metrics
Web service composition has become an important research topic in the service computing field.Users’ non-functional requirements are the most frequently used criteria for Web service composition.However,in open environments these requirements have inherent uncertainty and multi-objective characteristics.This paper proposed a multi-objective verification method to tackle this problem.Firstly,the Web service composition process is modeled as a quantitative multi-objective Markov decision process and then translated into the PRISM language.Meanwhile,different user requirements are expressed as multi-objective temporal logic formulas.With these two models as input,the optimal solution is found via model checking.Finally,an example illustrates the method,and the experimental results indicate that the proposed approach can be used effectively for Web service composition.
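To make the multi-objective idea concrete, here is a deliberately tiny stand-in: a sequential composition of candidate services, each with a success probability and a cost, checked against a reliability objective and a cost objective simultaneously. The service names, numbers, and the exhaustive check are all hypothetical; the paper's actual machinery is a PRISM multi-objective query over an MDP:

```python
# Hypothetical candidate services: name -> (success probability, cost)
candidates = {"svcA": (0.95, 120), "svcB": (0.99, 300)}

def verify(composition, min_reliability, max_cost):
    """Check a sequential composition against two objectives at once."""
    reliability, cost = 1.0, 0
    for name in composition:
        p, c = candidates[name]
        reliability *= p   # in sequence, every service must succeed
        cost += c          # costs accumulate additively
    return reliability >= min_reliability and cost <= max_cost
```

A model checker generalizes this to nondeterministic choices among candidates and probabilistic failures, returning the Pareto-optimal strategies rather than a single yes/no.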
Measuring Point Selection Method of Board-level Circuit Based on Multi-signal Model and Genetic Algorithm
SHI Wei-wen, WANG Xue-qi, FAN Kai-yin, WANG Ming-jun
Computer Science. 2018, 45 (8): 295-299.  doi:10.11896/j.issn.1002-137X.2018.08.053
Abstract PDF(1597KB) ( 318 )   
References | Related Articles | Metrics
This paper proposed an optimization method combining a multi-signal model and a genetic algorithm to address the problems of large input data,low efficiency,tedious work,and difficulty in obtaining a globally optimal solution that exist in traditional circuit-board measuring point selection methods.First,a multi-signal flow model of the board-level circuit is established to obtain the correlation matrix between measuring points and the corresponding board-level circuit components.The correlation matrix is then analyzed further to obtain the test-ability parameters of measuring point combinations.Second,with the number of selected measuring points constrained not to exceed a given value,the test capability parameters are used as the fitness function of the genetic algorithm,and the search determines the optimal selection of measuring points.Third,a fault simulation experiment on a circuit system with an active low-pass filter is carried out with the Multisim simulation software.The simulation results show that board-level measuring point selection based on the multi-signal model and genetic algorithm has good detection and isolation capabilities for most faults in active low-pass filter circuits and achieves good results.Besides,this method is applicable to a variety of other circuits.
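The selection loop can be sketched as a simple GA over a fault–point dependency matrix: a chromosome is a set of k measuring points, and fitness counts how many distinct fault signatures those points produce. The encoding, mutation scheme and fitness are assumptions standing in for the paper's test-ability parameters:

```python
import random

def fitness(D, chosen):
    """Fault-isolation fitness: number of distinct response signatures
    over the chosen points (D[f][p] = 1 if fault f is seen at point p)."""
    return len({tuple(row[i] for i in chosen) for row in D})

def ga_select(D, k, pop=20, gens=60, seed=0):
    """Pick k measuring points by a simple genetic algorithm (sketch)."""
    rng = random.Random(seed)
    n = len(D[0])
    popl = [rng.sample(range(n), k) for _ in range(pop)]
    for _ in range(gens):
        popl.sort(key=lambda c: -fitness(D, c))
        keep = popl[: pop // 2]              # elitist truncation selection
        children = []
        for c in keep:
            child = c[:]
            child[rng.randrange(k)] = rng.randrange(n)   # point mutation
            children.append(child if len(set(child)) == k else c)
        popl = keep + children
    popl.sort(key=lambda c: -fitness(D, c))
    return sorted(popl[0])
```

The point-count constraint is enforced structurally: every chromosome has exactly k points, so no penalty term is needed.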
Node Invariants by Imitating High-order Moments and Their Graph Invariants
JIANG Shun-liang, GE Yun, TANG Yi-ling, XU Shao-ping, YE Fa-mao
Computer Science. 2018, 45 (8): 300-305.  doi:10.11896/j.issn.1002-137X.2018.08.054
Abstract PDF(1352KB) ( 381 )   
References | Related Articles | Metrics
Using high-order moments and a level-order computing framework,twenty kinds of node invariants were defined from the connection distance and level-order information of nodes.These node invariants reflect overall bias of the distribution,non-uniformity and smoothness,while the sum of squares of nodal degrees reflects the distribution of nodal degrees within a level.By comparing the number of distinguishable nodes,it is found that the sum of squares of nodal degrees obviously improves the refining ability of node invariants.The node invariants are ordered to form a vector used as a graph invariant.The calculation results show that nine graph invariants can distinguish all non-isomorphic trees with N<25 and all non-isomorphic irreducible trees (no node of degree 2) with N<34,with no instance found so far of non-isomorphic trees with more nodes sharing the same graph invariants.The ability of the nine graph invariants to distinguish non-isomorphic graphs (N<10) exceeds 19 of the 22 graph invariants’ results in reference [8],and the degeneracy of the nine graph invariants is small,thus improving the performance of random graph isomorphism testing.
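A minimal instance of the level-order framework: BFS levels from a root, then first- and second-moment style sums of nodal degrees per level (the latter being the sum-of-squares statistic the abstract highlights). The function name and output layout are assumptions:

```python
from collections import deque

def level_degree_moments(adj, root=0):
    """Per-BFS-level (sum of degrees, sum of squared degrees).

    adj: dict mapping node -> list of neighbours.  The squared-degree
    sum per level is the refining statistic discussed in the abstract.
    """
    dist = {root: 0}
    q = deque([root])
    while q:                                 # breadth-first levels
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    levels = {}
    for u, d in dist.items():
        deg = len(adj[u])
        s1, s2 = levels.get(d, (0, 0))
        levels[d] = (s1 + deg, s2 + deg * deg)
    return [levels[d] for d in sorted(levels)]
```

Concatenating such per-node (or per-root) vectors in sorted order yields a graph invariant: isomorphic graphs always agree on it, and the question studied is how rarely non-isomorphic graphs do.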
IK-medoids Based Aircraft Fuel Consumption Clustering Algorithm
CHEN Jing-jie, CHE Jie
Computer Science. 2018, 45 (8): 306-309.  doi:10.11896/j.issn.1002-137X.2018.08.055
Abstract PDF(2737KB) ( 458 )   
References | Related Articles | Metrics
To analyze aircraft fuel consumption in a given external environment,this paper proposed a neighborhood-search K-medoids clustering algorithm (IK-medoids) based on the maximum distance method.Following the idea that the two farthest sample points cannot fall into the same cluster,the maximum distance method is used to select the initial centers.The center neighborhoods are then determined by the standardized Euclidean distance between the initial centers and the remaining samples.Moreover,the regeneration of initial centers is conducted by the proposed nearest-neighbor searching strategy,efficiently reducing iteration time.Contrast experiments were conducted on datasets of different sizes for the same aircraft model and flight segment,classifying the fuel flow data according to gross weight,cruise altitude,flight distance and flight environment.The results demonstrate that the proposed IK-medoids algorithm outperforms common K-medoids algorithms and provides a new angle for further analysis of fuel consumption during flight.
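A sketch of the core idea — maximum-distance initialization followed by standard medoid updates; the neighborhood-search refinement and the standardized distance are simplified to plain Euclidean distance here, so this is not the full IK-medoids:

```python
import numpy as np

def farthest_init(X, k):
    """Maximum-distance initial medoids: start from the two farthest
    samples, then repeatedly add the point farthest from all chosen."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    i, j = np.unravel_index(D.argmax(), D.shape)
    medoids = [int(i), int(j)]
    while len(medoids) < k:
        medoids.append(int(D[medoids].min(axis=0).argmax()))
    return medoids[:k], D

def k_medoids(X, k, iters=20):
    """K-medoids with maximum-distance initialization (sketch)."""
    med, D = farthest_init(X, k)
    for _ in range(iters):
        labels = D[:, med].argmin(axis=1)          # assign to nearest medoid
        new = []
        for c in range(k):
            idx = np.where(labels == c)[0]
            sub = D[np.ix_(idx, idx)]              # within-cluster distances
            new.append(int(idx[sub.sum(axis=1).argmin()]))
        if new == med:                             # converged
            break
        med = new
    return med, D[:, med].argmin(axis=1)
```

Choosing the farthest pair as seeds guarantees the two most dissimilar flights land in different clusters from the first iteration, which is the stated motivation for the initialization.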
Method of Mining Conditional Infrequent Behavior Based on Communication Behavior Profile
CAO Rui, FANG Xian-wen, WANG Li-li
Computer Science. 2018, 45 (8): 310-314.  doi:10.11896/j.issn.1002-137X.2018.08.056
Abstract PDF(1537KB) ( 359 )   
References | Related Articles | Metrics
Conditional infrequent behavior refers to behavior recorded by infrequent event traces carrying attribute values.Mining conditional infrequent behavior from the event log is one of the main tasks of business process optimization.Existing methods remove low-frequency behavior,but rarely consider conditional infrequent behavior from the perspective of data flow between different module nets.On this basis,the paper presented a method of mining conditional infrequent behavior based on communication behavior profiles.Grounded in the theory of communication behavior profiles between module nets,firstly,given a business process source model,its executable event log is searched and the infrequent event traces are found;relevant attributes and attribute values are added to these traces to obtain the conditional infrequent traces.Secondly,by calculating the condition-dependent values of the communication features of different module nets,it is determined whether each conditional infrequent trace is deleted or retained.The optimized event log is then produced,and the optimized business process communication model is mined.Finally,the feasibility of the method is verified by a simulation.
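The keep-or-delete decision can be sketched as below: frequent traces always survive, while infrequent ones survive only if their condition-dependency value is high enough. The thresholds and the `conditions` map (standing in for the condition-dependent values computed from communication features) are hypothetical:

```python
from collections import Counter

def filter_infrequent(log, conditions, min_freq=0.2, dep_threshold=0.5):
    """Filter an event log, retaining condition-dependent infrequent traces.

    log: list of traces (tuples of activity names).
    conditions: trace -> condition-dependency value in [0, 1] (assumed
    precomputed from the module nets' communication features).
    """
    freq = Counter(log)
    total = len(log)
    kept = []
    for trace in log:
        if freq[trace] / total >= min_freq:
            kept.append(trace)                       # frequent: always keep
        elif conditions.get(trace, 0.0) >= dep_threshold:
            kept.append(trace)                       # infrequent but condition-dependent
    return kept
```

The optimized log is then what a discovery algorithm would mine the improved communication model from.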