Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 46 Issue 9, 15 September 2019
  
Surveys
Survey of Compressed Deep Neural Network
LI Qing-hua, LI Cui-ping, ZHANG Jing, CHEN Hong, WANG Shao-qing
Computer Science. 2019, 46 (9): 1-14.  doi:10.11896/j.issn.1002-137X.2019.09.001
In recent years, deep neural networks have achieved significant breakthroughs in target recognition, image classification and other tasks. However, training and testing these deep neural networks face several limitations. Firstly, they require a large amount of computation (training and testing consume a lot of time), which calls for high-performance computing devices (such as GPUs) to raise the training and testing speed and shorten the time needed. Secondly, a deep neural network model usually contains a large number of parameters that require high-capacity, high-speed memory to store. These limitations hinder the widespread use of deep neural networks: at present, their training and testing usually run on high-performance servers or clusters, and applications on mobile devices with strict real-time requirements, such as mobile phones, are limited. This paper reviewed the progress of deep neural network compression algorithms in recent years, and introduced the main compression methods, such as the pruning method, sparse regularization method, decomposition method, parameter sharing method, mask acceleration method and discrete cosine transform method. Finally, future research directions of compressed deep neural networks were prospected.
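To make the pruning idea concrete, the following is a minimal sketch (illustrative only, not taken from any surveyed paper) of magnitude-based weight pruning in NumPy; the sparsity level is an assumed hyperparameter:

    import numpy as np

    def magnitude_prune(weights, sparsity=0.9):
        # Keep only the largest-magnitude weights; zero out the smallest
        # `sparsity` fraction, as in typical pruning-based compression.
        flat = np.abs(weights).ravel()
        k = int(flat.size * sparsity)
        if k == 0:
            return weights.copy()
        threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
        return weights * (np.abs(weights) > threshold)

    w = np.random.randn(256, 128).astype(np.float32)
    w_pruned = magnitude_prune(w, sparsity=0.9)
    print("nonzero fraction:", np.count_nonzero(w_pruned) / w_pruned.size)

In practice such pruning is usually followed by fine-tuning to recover accuracy.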
Survey of Semi-supervised Clustering
QIN Yue, DING Shi-fei
Computer Science. 2019, 46 (9): 15-21.  doi:10.11896/j.issn.1002-137X.2019.09.002
Semi-supervised clustering is a learning method combining semi-supervised learning and clustering analysis, and it has been used widely in machine learning. Traditional unsupervised clustering algorithms partition data without any supervised information, but in practical applications a small amount of supervised information, such as independent class labels or pairwise constraints, is often available, so scholars have committed to applying this limited supervision to clustering to obtain better results, which gave rise to semi-supervised clustering. This paper mainly introduced the theoretical basis and algorithmic ideas of semi-supervised clustering, and summarized its latest progress. Firstly, the current situation and classification of semi-supervised learning were reviewed, and generative semi-supervised learning, semi-supervised SVM, graph-based semi-supervised learning and co-training were compared. Secondly, semi-supervised clustering was described in detail: four typical semi-supervised clustering algorithms (the Cop-Kmeans, LCop-Kmeans, Seeded-Kmeans and SC-Kmeans algorithms) were analyzed and summarized, and their advantages and disadvantages were evaluated. Then, the research status of semi-supervised clustering was expounded for the two settings of constraint-based and distance-based semi-supervised clustering. Finally, the applications of semi-supervised clustering in bioinformatics, image segmentation and other fields of computing, as well as future research directions, were discussed. This paper aims to enable beginners to quickly grasp the progress of semi-supervised clustering and understand the typical algorithmic ideas, and to play a guiding role in subsequent practical applications.
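For concreteness, below is a compact sketch of the constraint-respecting assignment step at the heart of Cop-Kmeans (a simplified reading of the algorithm, not the authors' code): each point is assigned to the nearest centroid that violates no must-link or cannot-link constraint.

    import numpy as np

    def violates(i, cluster, labels, must_link, cannot_link):
        # An assignment is invalid if it breaks a constraint involving
        # a point that has already been assigned in this pass.
        for a, b in must_link:
            other = b if a == i else a if b == i else None
            if other is not None and labels[other] not in (-1, cluster):
                return True
        for a, b in cannot_link:
            other = b if a == i else a if b == i else None
            if other is not None and labels[other] == cluster:
                return True
        return False

    def cop_kmeans(X, k, must_link, cannot_link, iters=20, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)]
        labels = np.full(len(X), -1)
        for _ in range(iters):
            labels[:] = -1
            for i, x in enumerate(X):
                # Try centroids from nearest to farthest; fail if none is feasible.
                for c in np.argsort(((centers - x) ** 2).sum(axis=1)):
                    if not violates(i, c, labels, must_link, cannot_link):
                        labels[i] = c
                        break
                if labels[i] == -1:
                    raise ValueError("no constraint-respecting assignment exists")
            for c in range(k):
                if (labels == c).any():
                    centers[c] = X[labels == c].mean(axis=0)
        return labels, centers

    X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
    print(cop_kmeans(X, 2, must_link=[(0, 1)], cannot_link=[(1, 2)])[0])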
Survey on Character Motion Synthesis Based on Neural Network
WANG Xin, MENG Hao-hao, JIANG Xiao-tao, CHEN Sheng-yong, SUN Ling-yun
Computer Science. 2019, 46 (9): 22-27.  doi:10.11896/j.issn.1002-137X.2019.09.003
The application of neural network technology to character motion synthesis on human motion datasets is an important research topic in computer graphics. This line of work aims to generate naturally realistic character motion with neural networks through data-driven techniques. Based on the analysis and summary of related research, this paper introduced the research progress in motion model construction, motion interaction, motion stylization and related areas. Built on motion capture data, data-driven techniques, interactive control methods and network models such as ERD, CAE and MAR are used to dynamically model, synthesize and interactively control character motion, and motion animation and other content are stylized to generate higher-quality character motion. Taking neural network technology as the focal point, this paper connected the various strands of work on character motion synthesis. Combined with the practical applications and difficulties faced in current research, it suggested several problems that merit further study.
Task Scheduling on Storm: Current Situations and Research Prospects
ZHANG Zhou, HUANG Guo-rui, JIN Pei-quan
Computer Science. 2019, 46 (9): 28-35.  doi:10.11896/j.issn.1002-137X.2019.09.004
Distributed streaming data processing systems represented by Apache Storm provide low-latency processing in complex big data environments, and have therefore attracted wide attention in both academia and industry. In a distributed streaming data processing system, task scheduling is a critical factor determining system performance: a good task scheduler yields higher throughput, lower processing latency and better resource utilization. However, the original Storm task scheduler requires users to set the parallelism manually, and it assigns tasks with a simple round-robin method, which leads to poor performance in practice. To handle this problem, researchers have proposed many optimization strategies for the Storm task scheduling mechanism. This paper reviewed related work on Storm task scheduling. Firstly, the Storm system and its original task scheduling mechanism were introduced, and current optimization techniques for Storm task scheduling were categorized. Then the advantages and disadvantages of the scheduling strategies were summarized and analyzed. Finally, some future directions for Storm task scheduling optimization were discussed, to provide references for further optimization and follow-up research on the Storm scheduling mechanism.
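As context for the weakness noted above, here is a minimal sketch of a round-robin assignment policy of the kind Storm's default scheduler applies (a simplified illustration, not Storm's actual code); it spreads executors evenly over worker slots while ignoring load and inter-task traffic:

    def round_robin_assign(executors, slots):
        # Cycle through the worker slots, placing one executor at a time.
        assignment = {slot: [] for slot in slots}
        for i, executor in enumerate(executors):
            assignment[slots[i % len(slots)]].append(executor)
        return assignment

    print(round_robin_assign(["spout-1", "bolt-1", "bolt-2", "bolt-3", "bolt-4"],
                             ["node1:6700", "node1:6701", "node2:6700"]))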
Research on Image Semantic Segmentation for Complex Environments
WANG Yan-ran, CHEN Qing-liang, WU Jun-jun
Computer Science. 2019, 46 (9): 36-46.  doi:10.11896/j.issn.1002-137X.2019.09.005
Image semantic segmentation is one of the most important fundamental technologies for visual intelligence. Semantic segmentation enables intelligent systems to understand their surrounding scenarios, so it has enormous value in application domains such as unmanned vehicles, robot cognition and navigation, video surveillance and drone landing systems. Great challenges also exist in image semantic segmentation, due to the various interfering factors affecting targets in complex environments, such as unstructured targets, diversity of objectives, irregular shapes, illumination changes, different viewing angles, scale variation and object occlusion. In recent years, benefiting from the great advances in deep learning, a large number of practically significant research approaches have emerged in image semantic segmentation. To provide a comprehensive survey and inspire academic research, this paper extensively discussed the existing state-of-the-art image semantic segmentation methods, classifying them into traditional methods, methods combining traditional and deep learning techniques, and methods based purely on deep learning. To address the problems posed by complex environments, the semantic segmentation methods for complex environments that have emerged in recent years were analyzed and compared in detail, covering their models, algorithms and performance, and categorized into strongly supervised, weakly supervised and unsupervised methods. Furthermore, the main datasets containing various complex environments, such as PASCAL VOC, Cityscapes and SUN RGB-D, together with three evaluation metrics (PA, mPA and mIoU), were summarized. Finally, the existing research on image semantic segmentation for complex environments was summarized, and future trends such as optimization for real-time video, 3D scene reconstruction and unsupervised semantic segmentation were prospected.
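For reference, the three metrics mentioned above can all be computed from a single confusion matrix; a minimal sketch with toy data and an assumed class count:

    import numpy as np

    def segmentation_metrics(gt, pred, num_classes):
        # PA: overall pixel accuracy; mPA: mean per-class accuracy;
        # mIoU: mean intersection-over-union across classes.
        cm = np.bincount(num_classes * gt.ravel() + pred.ravel(),
                         minlength=num_classes ** 2).reshape(num_classes, num_classes)
        pa = np.diag(cm).sum() / cm.sum()
        mpa = np.nanmean(np.diag(cm) / cm.sum(axis=1))
        iou = np.diag(cm) / (cm.sum(axis=1) + cm.sum(axis=0) - np.diag(cm))
        return pa, mpa, np.nanmean(iou)

    gt = np.random.randint(0, 3, (64, 64))
    pred = np.random.randint(0, 3, (64, 64))
    print(segmentation_metrics(gt, pred, num_classes=3))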
3D Shape Feature Extraction Method Based on Deep Learning
ZHOU Yan, ZENG Fan-zhi, WU Chen, LUO Yue, LIU Zi-qin
Computer Science. 2019, 46 (9): 47-58.  doi:10.11896/j.issn.1002-137X.2019.09.006
Research on extracting 3D shape features with low dimensionality and high discriminative ability can support tasks such as classification and retrieval of 3D shape data. With the continuous development of deep learning, 3D shape feature extraction combined with deep learning has become a research hotspot. Combining deep learning with traditional 3D shape feature extraction methods can not only break through the bottleneck of non-deep-learning methods, but also improve the accuracy of 3D shape classification, retrieval and other tasks, especially when the 3D shape is a non-rigid body. However, deep learning is still developing, and problems such as the need for a large number of training samples remain. Therefore, how to effectively extract 3D shape features with deep learning methods has become a research focus and difficulty in computer vision. At present, most researchers focus on improving the feature extraction ability of neural networks by improving the network structure, training methods and other aspects. First, the relevant deep learning models are introduced, together with some new ideas about network improvement and training methods. Second, the deep-learning-based feature extraction methods for rigid and non-rigid bodies are comprehensively expounded in combination with the development of deep learning and of 3D shape feature extraction, and the current deep learning methods for 3D shape feature extraction are described. Then, the current state of existing 3D shape retrieval systems and similarity calculation methods is described. Finally, the current problems of 3D shape feature extraction methods are introduced, and future development trends are explored.
NDBC 2018
RAISE: Efficient Influence Cost Minimizing Algorithm in Social Network
SUN Yong-yue, LI Hong-yan, ZHANG Jin-bo
Computer Science. 2019, 46 (9): 59-65.  doi:10.11896/j.issn.1002-137X.2019.09.007
In many scenarios, such as viral marketing and political campaigns, persuading individuals to accept new products or ideas requires a certain cost. The influence cost minimization problem is defined as choosing an influential set of individuals so that influence spreads to a given number of individuals while the total cost is minimized. Existing methods face bottlenecks in both solution quality and time efficiency when solving this problem. To tackle the issue, this paper proposed an efficient algorithm, RAISE. In theory, when the expected influence is comparable to the network size, the proposed algorithm has a constant approximation ratio and linear time complexity. In practice, it is significantly superior to existing methods in terms of solution quality and time efficiency.
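The abstract does not spell out RAISE's internals, so for orientation only, here is a generic greedy baseline for influence cost minimization under an assumed influence oracle: repeatedly seed the candidate with the best marginal influence per unit cost until the coverage target is met.

    def greedy_min_cost_seed_set(candidates, cost, influence, target):
        # candidates: set of node ids; cost[v]: cost of seeding v;
        # influence(S): expected number of influenced users for seed set S
        # (assumed given, e.g. estimated by Monte Carlo simulation).
        seeds = set()
        while influence(seeds) < target:
            best, best_ratio = None, 0.0
            for v in candidates - seeds:
                gain = influence(seeds | {v}) - influence(seeds)
                if gain / cost[v] > best_ratio:
                    best, best_ratio = v, gain / cost[v]
            if best is None:
                break  # the target cannot be reached
            seeds.add(best)
        return seeds

    cost = {"a": 2.0, "b": 1.0, "c": 3.0}
    reach = {"a": {1, 2}, "b": {2, 3}, "c": {1, 2, 3, 4}}
    influence = lambda S: len(set().union(*(reach[v] for v in S))) if S else 0
    print(greedy_min_cost_seed_set(set(cost), cost, influence, target=4))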
Logless Hash Table Based on NVM
WANG Tao, LIANG Xiao, WU Qian-qian, WANG Peng, CAO Wei, SUN Jian-ling
Computer Science. 2019, 46 (9): 66-72.  doi:10.11896/j.issn.1002-137X.2019.09.008
Emerging non-volatile memory (NVM) is attracting wide attention. Thanks to its low latency, persistence, large capacity and byte addressability, a database system can run on an NVM-only storage architecture. In this configuration, novel logless indexing structures have come into being, expected to recover indexing capability immediately after a system failure. However, under the current computer architecture, these structures need a large number of synchronizations to ensure data consistency, which leads to a severe performance penalty. The NVM-based logless hash table leverages atomic updates of pointer data to ensure consistency. An optimized rehash procedure was proposed that not only reduces synchronizations during normal execution, but also ensures instant recovery after system failures. Performance evaluation shows that, compared with existing persistent indexing structures, the logless hash table performs well under most workloads, and has significant advantages in terms of recovery time, NVM footprint and write wear.
Dynamic Skyline Query for Multiple Mobile Users Based on Road Network
ZHOU Jian-gang, QIN Xiao-lin, ZHANG Ke-heng, XU Jian-qiu
Computer Science. 2019, 46 (9): 73-78.  doi:10.11896/j.issn.1002-137X.2019.09.009
With the development of wireless communication and positioning technology, road network Skyline queries have become increasingly important in location-based services. However, the spatial attributes in existing road network Skyline research only consider distance, ignoring the influence that changes in the positions and speeds of multiple mobile users have on travel time. When a user's movement state changes, the Skyline results need to be dynamically adjusted and re-planned. This paper analyzed the correlation between users' motion states and the query, proposed the query processing algorithm EI, and divided query processing into two steps. Firstly, the initial Skyline result set is determined by a collaborative filtering extension method according to time, and the dataset is pruned. Secondly, the users' movement status is monitored: as soon as a user's speed changes, the Skyline set is quickly adjusted according to the entry point. Finally, the algorithm is tested on a real road network and compared with the existing algorithms N3S and EDC. The results show that the EI algorithm can efficiently solve the dynamic Skyline query problem of multiple mobile users on road networks.
Mining User Interests on Twitter Using Wikipedia Category Graph
LIU Xiao-jie, LV Xiao-qiang, WANG Xiao-ling, ZHANG Wei, ZHAO An
Computer Science. 2019, 46 (9): 79-84.  doi:10.11896/j.issn.1002-137X.2019.09.010
Social networks such as Twitter play an important role in daily life, and their huge numbers of users make social network data mining valuable. User interest modeling on social networks has been studied widely and is used to provide personalized recommendations. This paper proposed a novel user interest mining and representation approach based on the Wikipedia Category Graph, in which a user interest profile is represented as a vector of Wikipedia categories. First, according to the user's degree of activeness, an interest mining method based on tweets is proposed for active users, and another based on the names and descriptions of followees is proposed for passive users. Then, user interest is extended and generalized over the Wikipedia Category Graph by a personalized PageRank algorithm, and the user interest profile is represented by Wikipedia categories. The proposed interest modeling strategy was evaluated in the context of a tweet recommendation system. The results show that the proposed approach improves the quality of recommendation significantly compared with state-of-the-art Twitter user interest modeling approaches, which means it provides a more effective user interest profile.
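As an illustration of the interest-generalization step, a minimal personalized PageRank sketch over a toy category graph (simplified; the category names and damping factor are assumptions, not the paper's data):

    import numpy as np

    def personalized_pagerank(adj, seeds, alpha=0.85, iters=50):
        # adj: {category: [linked categories]}; seeds: initial interest
        # weights mined from tweets or followee profiles.
        nodes = list(adj)
        idx = {n: i for i, n in enumerate(nodes)}
        p = np.zeros(len(nodes))
        for n, w in seeds.items():
            p[idx[n]] = w
        p /= p.sum()
        r = p.copy()
        for _ in range(iters):
            spread = np.zeros_like(r)
            for n, outs in adj.items():
                for o in outs:
                    spread[idx[o]] += r[idx[n]] / len(outs)
            r = (1 - alpha) * p + alpha * spread  # restart to the seed profile
        return dict(zip(nodes, r.round(4)))

    graph = {"Machine_learning": ["Artificial_intelligence"],
             "Artificial_intelligence": ["Computer_science"],
             "Computer_science": []}
    print(personalized_pagerank(graph, {"Machine_learning": 1.0}))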
Performance Prediction and Configuration Optimization of Virtual Machines Based on Random Forest
ZHANG Bin-bin, WANG Juan, YUE Kun, WU Hao, HAO Jia
Computer Science. 2019, 46 (9): 85-92.  doi:10.11896/j.issn.1002-137X.2019.09.011
In IaaS cloud computing, users rent one or more virtual machines with different resource configurations. However, it is difficult for users to accurately estimate the performance of a virtual machine from the resources allocated to it, and thus hard to select an appropriate virtual machine for the performance requirements of their applications. Therefore, this paper proposed to predict virtual machine performance from resources and configurations with a random forest model, and further to search for the optimal virtual machine configuration meeting a performance requirement with a genetic algorithm, using the difference between the predicted and target performance as the fitness function. The experimental results show that the random forest model can accurately predict virtual machine performance, that the actual performance of a virtual machine configured according to the result of the genetic algorithm is very close to the performance requirement, and that convergence is achieved in a short time.
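A minimal sketch of the predict-then-search pipeline described above, with synthetic stand-in data (the real model would be trained on benchmarked VM configurations); only the fitness evaluation of a candidate configuration is shown:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Synthetic training data: (vCPUs, memory GB, disk GB) -> throughput.
    rng = np.random.default_rng(0)
    X = rng.uniform([1, 1, 10], [16, 64, 500], size=(200, 3))
    y = (50 * np.log1p(X[:, 0]) + 2 * np.sqrt(X[:, 1])
         + 0.01 * X[:, 2] + rng.normal(0, 2, 200))
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    def fitness(config, target):
        # GA fitness: distance between predicted and target performance
        # (smaller is better), as the abstract describes.
        return abs(model.predict(np.asarray(config).reshape(1, -1))[0] - target)

    print(fitness([4, 16, 100], target=150.0))

A genetic algorithm would then evolve `config` vectors to minimize this fitness.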
Identification of Same User in Social Networks
ZHANG Zheng, WANG Hong-zhi, DING Xiao-ou, LI Jian-zhong, GAO Hong
Computer Science. 2019, 46 (9): 93-98.  doi:10.11896/j.issn.1002-137X.2019.09.012
This paper carried out research on identifying the same user across different social networks. Each social network was modeled as a network with attribute values and a central node, namely an ego-network, and algorithms were designed for the identification problem in this setting. To mine node pairs belonging to the same user, the similarity of user attributes and of friend relationships is modeled, so as to comprehensively evaluate the similarity between nodes in different social networks, yielding a user match score used in node matching. Then, a globally optimal matching is obtained through an improved RCM algorithm, and finally the matched user pairs with lower match scores are cut off to achieve better results. On real datasets, the performance of the algorithm is compared with several related algorithms, the effect of different parameters on the experimental results is analyzed, and the rationality of the proposed algorithm is verified.
Adaptive Parameter Optimization for Real-time Differential Privacy Streaming Data Publication
WU Ying-jie, HUANG Xin, GE Chen, SUN Lan
Computer Science. 2019, 46 (9): 99-105.  doi:10.11896/j.issn.1002-137X.2019.09.013
Recently, many practical applications need to continuously answer range queries over streaming data in real time, and adopt differential privacy to prevent the disclosure of sensitive data during publication. Existing research adopts the Fenwick tree as the data structure for organizing and storing data items in the stream, so as to satisfy the real-time requirement of the publication process. However, the parameters of the previous method are predefined and cannot adapt well to dynamic changes in the queries. To solve this problem, based on the framework of real-time differentially private streaming data publication, this paper proposed a method that uses historical query records to achieve adaptive parameter optimization. Firstly, based on the moving average method, historical queries are analyzed to predict subsequent queries. Then, according to the prediction results, the optimal height of the tree, which minimizes the expected error, is calculated theoretically. Finally, adaptive parameter optimization is achieved in real-time differentially private streaming data publication. The experimental results show that this method can significantly improve query accuracy while guaranteeing time efficiency.
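A minimal sketch of the underlying idea: a Fenwick (binary indexed) tree answers prefix/range counts over the stream in logarithmic time, and Laplace noise makes the released sums differentially private. This simplified version adds noise at query time; a rigorous mechanism would instead perturb each stored node once with calibrated noise.

    import numpy as np

    class NoisyFenwick:
        def __init__(self, n, epsilon=1.0):
            self.n, self.eps = n, epsilon
            self.tree = np.zeros(n + 1)

        def add(self, i, value):
            # Standard Fenwick update at position i (0-based).
            i += 1
            while i <= self.n:
                self.tree[i] += value
                i += i & (-i)

        def noisy_prefix_sum(self, i):
            # Sum of items 0..i, released with Laplace perturbation.
            i += 1
            s = 0.0
            while i > 0:
                s += self.tree[i] + np.random.laplace(0, 1.0 / self.eps)
                i -= i & (-i)
            return s

    f = NoisyFenwick(16)
    for t, v in enumerate([3, 1, 4, 1, 5]):
        f.add(t, v)
    print(f.noisy_prefix_sum(4))  # noisy count over the first five updates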
Fall Action Recognition Based on Deep Learning
MA Lu, PEI Wei, ZHU Yong-ying, WANG Chun-li, WANG Peng-qian
Computer Science. 2019, 46 (9): 106-112.  doi:10.11896/j.issn.1002-137X.2019.09.014
With the rapid growth of the aging population, fall detection has become a key issue in the medical and health field. Accurately detecting fall events in surveillance video and giving real-time feedback can effectively reduce the injuries and even deaths caused by falls in the elderly. In view of the complex scenes and multiple similar human behaviors in surveillance video, this paper proposed an improved FSSD (Feature Fusion Single Shot Multibox Detector) fall detection method. Firstly, video frames are extracted from different fall video sequences to form the dataset. Then, the training sample set is fed into the improved convolutional neural network until the network converges. Finally, the target category and location in the video are detected with the optimized network model. The experimental results show that the improved FSSD algorithm can effectively detect falls and ADL (activities of daily living) in each frame and provide real-time feedback, with a detection speed of 24 fps (GTX 1050Ti), meeting real-time requirements while ensuring detection accuracy. Compared with state-of-the-art fall detection methods, the improved FSSD performs better than the other algorithms. The detection of fall behavior in video further validates the feasibility and efficiency of deep-learning-based recognition.
Network & Communication
Acoustic Signal Propagation Model and Its Performance in Cave Environment
HE Ming-xing, ZHOU Jie, WU Peng, LIU Yang
Computer Science. 2019, 46 (9): 113-119.  doi:10.11896/j.issn.1002-137X.2019.09.015
For the cave environment, this paper presented a geometric model of this new setting, in which the channel walls on both sides gradually widen (or narrow) from the entrance to the interior. Based on the geometric model and ray theory, and assuming that both channel surfaces are approximately smooth, a stochastic single-transmit, single-receive channel model for acoustic signal communication in caves was proposed. According to the geometric model, the influence of the channel opening angle on the channel distribution, instantaneous channel capacity, temporal autocorrelation function, frequency correlation function, Doppler power spectral density and power delay profile is studied. The theoretical and simulation results show that, compared with the case where the two sides of the channel are parallel (i.e., the opening angle is 0), even a small change in the opening angle significantly affects the statistical characteristics of the acoustic wireless communication system, and the parallel case is a special case of the proposed model.
RF Energy Source Deployment Schemes Maximizing Total Energy Harvesting Power
CHI Kai-kai, XU Xing-yuan, HU Ping
Computer Science. 2019, 46 (9): 120-124.  doi:10.11896/j.issn.1002-137X.2019.09.016
Radio frequency (RF) energy harvesting is one of the effective ways to deal with the energy limitation of wireless network nodes. The placement of RF energy sources (ESs) determines the energy harvesting power of each node. However, so far almost no work has studied how to select appropriate deployment locations among the candidate locations of ESs. Given the node locations, the number of ESs and the candidate deployment locations, this paper designed ES deployment schemes that maximize the total energy harvesting power of the nodes. Firstly, the problem is modeled as a 0-1 integer programming problem. Then a low-complexity approximation scheme with approximation ratio (1-1/e) and a genetic-algorithm-based deployment scheme achieving higher total energy harvesting power are proposed. Simulation results show that the proposed schemes improve the total energy harvesting power by about 50% compared with randomly selecting deployment locations, and the genetic scheme can achieve about 15% higher total harvesting power than the approximation scheme. Therefore, the genetic scheme suits small and medium-sized ES deployment scenarios, while the approximation scheme suits large-scale ES deployment scenarios.
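A (1-1/e) guarantee for greedy selection is characteristic of submodular objectives, e.g. when per-node harvesting power saturates; a toy sketch under that saturation assumption (the cap value and power table are illustrative, not from the paper):

    def greedy_es_deployment(candidates, nodes, power, m, cap=1.0):
        # power[c][v]: harvesting power node v receives from an ES at
        # candidate location c; per-node harvest saturates at `cap`.
        chosen = []

        def total(sel):
            return sum(min(cap, sum(power[c][v] for c in sel)) for v in nodes)

        for _ in range(m):
            best = max((c for c in candidates if c not in chosen),
                       key=lambda c: total(chosen + [c]))
            chosen.append(best)
        return chosen

    power = {"A": {1: 0.4, 2: 0.1}, "B": {1: 0.3, 2: 0.5}, "C": {1: 0.1, 2: 0.2}}
    print(greedy_es_deployment(["A", "B", "C"], [1, 2], power, m=2))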
RSSI-based Centroid Localization Algorithm Optimized by Hybrid Swarm Intelligence Algorithm
WANG Gai-yun, WANG Lei-yang, LU Hao-xiang
Computer Science. 2019, 46 (9): 125-129.  doi:10.11896/j.issn.1002-137X.2019.09.017
Sensor node self-positioning is one of the most critical technologies in wireless sensor networks. Aiming at the localization problem in wireless sensor networks, this paper proposed an RSSI-based centroid localization algorithm optimized with particle swarm optimization and simulated annealing (PSO-SA). Firstly, the distances between nodes are calculated with the RSSI ranging model. Secondly, a mathematical model with the unknown node's coordinates as parameters is established by selecting the three reference nodes closest to the unknown node, together with the nodes already located, and PSO-SA is used to solve it. To evaluate the proposed method, comparison experiments were carried out against the traditional centroid localization algorithm, the RSSI-based weighted centroid localization algorithm and the PSO-based centroid localization algorithm. The results indicate that the PSO-SA-based RSSI centroid localization algorithm has higher localization accuracy and stronger generalization performance than the others.
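For reference, a minimal sketch of the RSSI ranging model plus a weighted centroid estimate, which could serve as the starting point that PSO-SA then refines (the path-loss constants are assumed values, not the paper's calibration):

    import numpy as np

    def rssi_to_distance(rssi, a=-45.0, n=2.5):
        # Log-distance path-loss model: RSSI = A - 10 * n * log10(d),
        # where A is the RSSI at 1 m and n is the path-loss exponent.
        return 10 ** ((a - rssi) / (10 * n))

    def weighted_centroid(anchors, rssi_values):
        # Closer anchors (stronger RSSI, smaller distance) get larger weights.
        d = np.array([rssi_to_distance(r) for r in rssi_values])
        w = 1.0 / d
        return (anchors * w[:, None]).sum(axis=0) / w.sum()

    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
    print(weighted_centroid(anchors, [-60.0, -70.0, -65.0]))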
Service Function Load Balancing Based on SDN-SFC
ZHANG Zhao, LI Hai-long, HU Lei, DONG Si-qi
Computer Science. 2019, 46 (9): 130-136.  doi:10.11896/j.issn.1002-137X.2019.09.018
With the rapid development of Internet technology, network terminal devices are becoming smaller and more convenient, and the popularity of mobile terminals has raised their frequency of use. People's demand for network bandwidth is increasing sharply, while their requirements for network data transmission time are becoming more and more stringent. To meet this demand, this paper proposed a load balancing mechanism based on SDN-SFC. It considers and categorizes the types and priorities of the services required by each terminal, then uses a heuristic algorithm to plan the transmission paths between SFCs so as to reduce the load on each SF and improve overall network performance. Simulation results show that the proposed method can shorten data transmission time and achieve load balancing.
ICN Energy Efficiency Optimization Strategy Based on Content Field of Complex Networks
ZHAO Lei, ZHOU Jin-he
Computer Science. 2019, 46 (9): 137-142.  doi:10.11896/j.issn.1002-137X.2019.09.019
The current network architecture still adopts location-based end-to-end communication. With the rapid growth of network data and load, the traditional TCP/IP architecture suffers from many problems, such as low transmission efficiency and weak real-time data processing ability, mainly reflected in the lack of guaranteed quality of service for users and large network energy consumption. Information-Centric Networking (ICN) has become a research hotspot for the next-generation Internet architecture. This paper modeled ICN with complex networks and proposed a content-field-based energy efficiency optimization strategy (CFS), which finds the best path according to the content field strength of neighbor nodes and decides whether to cache content on the request path with a proposed caching strategy based on content popularity. The caching strategy takes into account both content popularity and the distance to users. The simulation results show that, compared with existing ICN strategies, CFS has advantages in network throughput, average request delay, average network energy consumption and data packet distribution; its performance is especially outstanding when the network carries a large amount of data, because the algorithm preferentially chooses paths that are close to the content and lightly congested.
Cognitive Spectrum Allocation Mechanism in Internet of Vehicles Based on Clustering Structure
XUE Ling-ling, FAN Xiu-mei
Computer Science. 2019, 46 (9): 143-149.  doi:10.11896/j.issn.1002-137X.2019.09.020
Nowadays, spectrum allocation adopts a fixed allocation mode. With the rapid development of wireless networks, limited spectrum resources can hardly meet communication requirements, so using cognitive radio technology to address the shortage of spectrum resources is an effective solution, and cognitive spectrum allocation is a key technology for improving spectrum utilization. For the specific application of the Internet of Vehicles, this paper studied the cognitive spectrum allocation mechanism and proposed a three-step cognitive spectrum allocation mechanism based on a clustering structure, in which the idle spectrum owner is the primary user, the fixed unit at an intersection is the cluster head node, and the cognitive vehicles are ordinary intra-cluster nodes. The first step is to judge the current load status of the network; the cognitive spectrum mechanism is activated only when the network is overloaded or heavily loaded. In the second step, a spectrum allocation algorithm based on traffic congestion priority pricing is adopted for allocation between the primary user and the cluster head node, ensuring that the total spectrum utility of the cluster head is maximized while the primary user obtains a certain income. In the third step, an equilibrium-price spectrum allocation algorithm based on message priority is used for the nodes in the cluster: the utility functions of the cluster head and the intra-cluster nodes are used to derive the supply and demand functions within the cluster, and the market equilibrium principle is used to find the optimal unit price of spectrum within the cluster. The simulation results, analyzed in terms of allocated spectrum amount and spectrum benefit, demonstrate that the message-priority-based spectrum allocation algorithm outperforms non-prioritized allocation, and that the traffic-congestion-priority-pricing algorithm between clusters outperforms average allocation. The results also show that the proposed mechanism basically meets the spectrum demands of actual users, improves spectrum revenue and utilization, and ensures priority transmission of safety messages.
Information Security
Covert Communication Method Based on Closed Source Streaming Media
GUO Qi, CUI Jing-song
Computer Science. 2019, 46 (9): 150-155.  doi:10.11896/j.issn.1002-137X.2019.09.021
A covert channel is an unforeseen method of communication that uses authorized public communication as the carrier medium for covert messages, and can be a safe and efficient way to transmit confidential information hidden in overt traffic. Existing streaming-media-based covert channels are often easily detected because they establish new communication links. For this reason, this paper conducted targeted tests and research on data packets passing through streaming media servers, and found that existing closed source streaming media does not strictly check the packets passing through the server: packets can still reach the terminal after some of their data are modified. Based on this fact, this paper established a covert channel over closed source streaming media by exploring the rules governing which data bits of a packet can be modified and still pass through the server. To raise the entropy of the packets, the efficient and compact Speck algorithm is used to encrypt the packet content. To monitor existing links and traffic in real time, firewalls were connected in series in the network structure, monitoring network connections and communication quality. Experimental data show that this method does not increase the number of network connections, does not affect communication quality, and is compatible with a variety of streaming media devices, so it is practical and not easily detected. Moreover, since the covert channel is mounted on closed source streaming media, the transmission efficiency of the covert information is high. These results show that building a covert channel on the communication flow of existing closed source streaming media software is feasible, and that encrypting the packet content gives it strong concealment.
Systemic Multi-factors Based Verification Method for Safety-critical Software
LV Xiao-hu, HAN Xiao-dong, GONG Jiang-lei, WANG Zhi-jie, LIU Xiao-kun
Computer Science. 2019, 46 (9): 156-161.  doi:10.11896/j.issn.1002-137X.2019.09.022
Software-intensive systems are an inexorable development trend. The proportion of functions realized by safety-critical software keeps growing, and software safety problems are increasingly prominent, with influence factors that are complex, multidimensional, dynamic and insidious. It is therefore urgent to seek a reasonable verification method for safety-critical software, and how to verify it effectively has become a difficult issue in software safety work. Based on the research and development of safety-critical software, this paper proposed a verification method based on systemic multi-factors: the multiple factors affecting software safety are modeled from a system point of view, and detailed verification methods and steps are given through the construction of requirement constraint sets and verification sets. Practical application results show that, compared with traditional verification methods limited to software logic, the proposed method can effectively identify potential and systemic problems in safety-critical software.
Information Sharing and Secure Multi-party Computing Model Based on Blockchain
WANG Tong, MA Wen-ping, LUO Wei
Computer Science. 2019, 46 (9): 162-168.  doi:10.11896/j.issn.1002-137X.2019.09.023
Under the background of big data, the control and privacy of data have become major concerns. However, existing computation models mostly rely on third-party institutions, and the third party's possible non-compliance and control over the information mean that security cannot be guaranteed, leading to more privacy problems. To solve this, this paper combined blockchain with secure multi-party computation to construct an information sharing and secure multi-party computing model with high performance and security, which enables users to control their data autonomously while ensuring that the data are computed and shared securely. The scheme first combines on-chain storage with off-chain storage; under this storage arrangement, proxy re-encryption is used for data sharing and an improved consensus algorithm is used to ensure the correctness of nodes. Then, based on the MapReduce parallel computing framework, an improved homomorphic encryption algorithm is put forward for processing and securely computing on ciphertext without decrypting the private data. Finally, the correctness and security of the scheme were analyzed and experimental simulations were carried out. The analysis and experimental results show that the scheme performs well on big data and greatly improves operational efficiency.
Bilateral Authentication Protocol for WSN and Certification by Strand Space Model
LIU Jing, LAI Ying-xu, YANG Sheng-zhi, Lina XU
Computer Science. 2019, 46 (9): 169-175.  doi:10.11896/j.issn.1002-137X.2019.09.024
With the development of the industrial Internet, smart agriculture, smart home and other fields, wireless sensor networks (WSN) have been used ever more widely, but their security issues have become prominent. Aiming at sensor nodes' vulnerability to failure and their limited energy, computation and storage capacity, this paper constructed a two-way identity authentication protocol between base station and sensor nodes based on state information, which ensures safety while meeting WSN requirements of light weight and low cost. First, in the node access phase, the protocol authenticates the trusted state of the platform based on trusted network connection, verifies the trusted condition of the node and performs its encrypted registration. Then, during the operation phase, the transmission of important data is protected by a two-way data authentication process, and the status and reliability of sensor nodes are confirmed by periodic update authentication. Meanwhile, the protocol allows the base station to periodically collect the running state information of nodes, which is used for authentication to further enhance protocol security and to detect physical damage to nodes in time. The proposed protocol reduces the communication rounds of the authentication process, the introduced alarm messages enhance troubleshooting capability, and a formal analysis with the strand space model proves the protocol's security. Finally, the experimental results show that, under reasonable safety conditions, the designed protocol has good network scalability and the added delay in sending data is within an acceptable range. The solution can enhance network access security and effectively defend against attacks from inside node systems, and has good application value.
Web Log Analysis Method Based on Storm Real-time Streaming Computing Framework
YANG Li-peng, ZHANG Yang-sen, ZHANG Wen, WANG Jian, ZENG Jian-rong
Computer Science. 2019, 46 (9): 176-183.  doi:10.11896/j.issn.1002-137X.2019.09.025
With the rapid development of the Internet, network log data are growing explosively, and network logs contain a wealth of network security information. Through network log analysis, this paper proposed an attack IP recognition model based on access behavior and network relationships, and an IP real-person attribute judgment model based on a sliding time window. The proposed models were implemented on the Storm real-time stream computing framework to build a distributed real-time network log computing and analysis platform, and solutions to the technical problems encountered during implementation were given. Analysis of real data with the constructed models shows that the accuracy of the attack IP recognition model is 98%, the accuracy of the IP real-person attribute judgment model reaches 96%, and the platform can effectively monitor network security in time and identify potential security risks in the network.
Artificial Intelligence
STransH: A Revised Translation-based Model for Knowledge Representation
CHEN Xiao-jun, XIANG Yang
Computer Science. 2019, 46 (9): 184-189.  doi:10.11896/j.issn.1002-137X.2019.09.026
Recently, representation learning technology represented by deep learning has attracted much attention in natural language processing, computer vision and speech recognition. Representation learning aims to project the objects of interest into a low-dimensional, dense, real-valued semantic space, and a number of models and methods have been proposed for knowledge embedding. Among them, TransE is a classic translation-based method with low model complexity, high computational efficiency and favorable knowledge representation ability. However, it has limitations in dealing with complex relations, including reflexive, one-to-many, many-to-one and many-to-many relations. In light of this, this paper proposed a revised translation-based method for knowledge graph representation, STransH. In this method, entity and relation embeddings are built in separate entity and relation spaces, and a non-linear single-layer network operation is adopted to enhance the semantic connection between entities and relations. Inspired by TransH, the relation-oriented hyperplane model is introduced, projecting head and tail entities onto the hyperplane of a given relation for distinction. Besides, a simple trick to improve the quality of negative triplets is proposed. Extensive experiments on link prediction and triplet classification were conducted on benchmark datasets such as WordNet and Freebase. Experimental results show that STransH achieves significant improvements over TransE and TransH, with Hits@10 and triplet classification accuracy each increased by nearly 10%.
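To place STransH relative to its predecessors, here are the scoring functions of TransE and TransH in a minimal sketch (standard formulations from the literature; STransH's additional single-layer non-linearity is not reproduced here):

    import numpy as np

    def transe_score(h, r, t):
        # TransE: a valid triplet should satisfy h + r ≈ t.
        return np.linalg.norm(h + r - t)

    def transh_score(h, r, t, w):
        # TransH: project entities onto the relation-specific hyperplane
        # (unit normal w) before translating, to handle 1-N/N-1/N-N relations.
        h_p = h - (w @ h) * w
        t_p = t - (w @ t) * w
        return np.linalg.norm(h_p + r - t_p)

    rng = np.random.default_rng(0)
    h, r, t = rng.normal(size=(3, 50))
    w = rng.normal(size=50)
    w /= np.linalg.norm(w)
    print(transe_score(h, r, t), transh_score(h, r, t, w))

Lower scores indicate more plausible triplets; training pushes valid triplets below corrupted (negative) ones.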
Prediction Model of User Purchase Behavior Based on Deep Forest
GE Shao-lin, YE Jian, HE Ming-xiang
Computer Science. 2019, 46 (9): 190-194.  doi:10.11896/j.issn.1002-137X.2019.09.027
In recent years, online retail has kept growing at high speed, and websites accumulate abundant user behavior data. User behavior on an e-commerce platform embodies user preferences, so how to mine user preferences from behavior has become a focus of attention in academia and industry, producing a number of research results. Existing user behavior prediction methods target only a single type of behavior and cannot reflect the overall characteristics of user behavior. Therefore, this paper proposed a deep forest based prediction model of purchase behavior: feature engineering over user behavior is used to build a complete user behavior feature model, and a deep forest based prediction method is put forward to achieve efficient training. The training time of this method is 43 s, and its F1 value is 9.73%; compared with other models, it achieves good results on both metrics. The experiments show that the model can reduce time overhead while improving prediction accuracy.
Semi-supervised Support Tensor Machine Based on Tucker Decomposition
WU Zhen-yu, LI Yun-lei, WU Fan
Computer Science. 2019, 46 (9): 195-200.  doi:10.11896/j.issn.1002-137X.2019.09.028
Most data used by traditional machine learning methods live in vector spaces. As an important machine learning method, the support vector machine (SVM) performs well on small-sample, nonlinear and high-dimensional problems. In practical applications, however, data such as images and videos are naturally stored in tensor form; converting tensor data into vectors loses part of the original structure and correlation information, and can cause the curse of dimensionality and small-sample problems. Therefore, to retain as much tensor structure information as possible, a support tensor machine (STM) based on Tucker decomposition was proposed, and experiments show that it significantly improves classification performance. Meanwhile, as a supervised learning method, the support vector machine cannot use unlabeled data and often suffers from insufficient training data. Therefore, a semi-supervised support tensor machine based on Tucker decomposition was also proposed, which not only retains more tensor structure information but also makes full use of unlabeled data. Experiments show a prediction accuracy of 90.26%, validating the effectiveness of the proposed method.
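For illustration, a minimal Tucker decomposition sketch with the tensorly library (toy random tensor; in the paper the inputs would be image or video tensors, whose core tensors feed the tensor classifier):

    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import tucker

    # Decompose a 3rd-order tensor into a small core tensor and one
    # factor matrix per mode, preserving multi-way structure.
    X = tl.tensor(np.random.rand(8, 8, 4))
    core, factors = tucker(X, rank=[3, 3, 2])
    print(core.shape, [f.shape for f in factors])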
Event Coreference Resolution Method Based on Attention Mechanism
CHENG Hao-yi, LI Pei-feng, ZHU Qiao-ming
Computer Science. 2019, 46 (9): 201-205.  doi:10.11896/j.issn.1002-137X.2019.09.029
Event coreference resolution is a challenging NLP task that plays an important role in event extraction, QA systems and reading comprehension. This paper introduced DANGL, a decomposable attention neural network model with a global inference mechanism based on remote and local information, for document-level event coreference resolution. DANGL differs considerably from most traditional methods based on probabilistic models and graph models. It first uses a Bi-LSTM and a CNN to capture the remote and local information of each event mention, then applies a decomposable attention network to capture the relatively important information in event mentions, and finally employs a document-level global inference mechanism to further optimize the coreference chains. Experimental results on TAC-KBP show that DANGL uses only a few features and outperforms the state-of-the-art baseline.
Bayesian Structure Learning Based on Physarum Polycephalum
LIN Lang, ZHANG Zi-li
Computer Science. 2019, 46 (9): 206-210.  doi:10.11896/j.issn.1002-137X.2019.09.030
A Bayesian network is a graphical model combining probability statistics and graph theory, and has been successfully applied in many fields. However, it is very difficult to build a Bayesian network from expert domain knowledge alone, so learning Bayesian network structure from data has become a key issue in this field. Because the search space is huge, Bayesian network structure learning is an NP-hard problem. Exploiting the way Physarum polycephalum retains important feeding pipelines during foraging, the original search space is reduced by combining the mathematical model of Physarum polycephalum with conditional mutual information theory. The resulting undirected graph is used as the basic framework of the network, the hill-climbing method is then used to orient the skeleton and obtain the corresponding topological ordering, and finally the node ordering is used as the input of the K2 algorithm to obtain the final network. The network topology and score are chosen as evaluation indexes, and comparative experiments are carried out on multiple datasets. The experiments show that the proposed algorithm reconstructs networks that match the original data with higher accuracy.
Law Article Prediction Method for Legal Judgment Documents
ZHANG Hu, WANG Xin, WANG Chong, CHENG Hao, TAN Hong-ye, LI Ru
Computer Science. 2019, 46 (9): 211-215.  doi:10.11896/j.issn.1002-137X.2019.09.031
In recent years, the analysis of legal judgment documents and the prediction of outcomes from case facts have become hot research topics in AI and law. The law article prediction task predicts the applicable law articles of a case from its factual description, and has become an important part of intelligent justice research. After analyzing the factual descriptions of legal documents and the specific judicial interpretations of the law, and mining the characteristics of the factual description part of judicial documents, a law article recommendation method based on multi-model fusion was proposed. Based on the public dataset of the "CAIL2018" Judicial Artificial Intelligence Challenge, three datasets were constructed from different angles, and multiple sets of experiments were performed on each. The experimental results show that, compared with single-model law article prediction, the proposed method effectively improves the accuracy of the task, and better solves the problem of recommending multiple applicable law articles for a single case fact description.
Collaborative Filtering Recommendation Algorithm Mixing LDA Model and List-wise Model
WANG Han, XIA Hong-bin
Computer Science. 2019, 46 (9): 216-222.  doi:10.11896/j.issn.1002-137X.2019.09.032
Ranking-oriented collaborative filtering is affected by data sparsity, which makes recommendations inaccurate. This paper proposed a hybrid ranking-oriented collaborative filtering algorithm based on the LDA topic model and a list-wise model. The algorithm uses the LDA topic model to model the user-item rating matrix and obtain each user's low-dimensional latent topic vector, then measures the similarity between users with these topic vectors. Next, a list-wise learning function is used to directly predict a total order of items that satisfies the user's preferences. The experimental results on the two real datasets Movielens and EachMovie show that the algorithm avoids the inaccurate user similarity caused by too little common rating information, while exhibiting the advantages of learning to rank; it can effectively alleviate the effect of data sparsity and improve recommendation accuracy.
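A minimal sketch of the first stage, assuming ratings can be treated as counts for LDA (toy data; the real algorithm works on the Movielens/EachMovie rating matrices):

    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation

    # Rows are users, columns are items; LDA compresses each user's row
    # into a low-dimensional topic vector.
    ratings = np.random.randint(0, 6, size=(20, 50))
    lda = LatentDirichletAllocation(n_components=5, random_state=0)
    topics = lda.fit_transform(ratings)

    def user_similarity(u, v):
        # Cosine similarity between users' latent topic vectors.
        return topics[u] @ topics[v] / (
            np.linalg.norm(topics[u]) * np.linalg.norm(topics[v]))

    print(user_similarity(0, 1))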
RCNN-BGRU-HN Network Model for Aspect-based Sentiment Analysis
SUN Zhong-feng, WANG Jing
Computer Science. 2019, 46 (9): 223-228.  doi:10.11896/j.issn.1002-137X.2019.09.033
In aspect-based sentiment analysis, general neural network models capture little connectivity between sentences and cannot capture enough semantic information between words. To address these problems, this paper proposed a deep learning network model with a novel structure. The model preserves the sequential relationship of sentences in the review text through a regional convolutional neural network (RCNN), while combining bi-directional gated recurrent units (BGRU) greatly reduces the time cost of model training. In addition, introducing a highway network (HN) enables the model to capture more semantic information between words, and an attention mechanism is used to assign weights to the concerned aspect in the network structure, effectively capturing the long-distance dependencies of the concerned aspect across the whole review. The model is trained end-to-end, and experiments on different datasets show better performance than existing network models.
Double Cycle Graph Based Fraud Review Detection Algorithm
CHEN Jin-yin, HUANG Guo-han, WU Yang-yang, JIA Cheng-yu
Computer Science. 2019, 46 (9): 229-236.  doi:10.11896/j.issn.1002-137X.2019.09.034
Because online reviews of stores provide customers with much valuable information and greatly affect the credibility of stores, a large number of spam reviews have emerged to disturb the market order for profit: many stores or individuals deliberately flatter or denigrate certain stores through fake reviews for their own gain. An efficient fraud review detection algorithm is therefore crucial. This paper built a graph filter based on the relationships among users, comments and stores, and obtained the reliability of users, comments and stores through iterative calculation, so as to find fake reviews. Three key questions are addressed: how to obtain more reliable reliability scores for users, comments and stores; how to identify real reviews effectively; and how to detect fake reviews and spammers effectively. To improve the reliability scores, a double-cycle graph based detection algorithm was proposed to obtain reliable users, comments and stores; to find fake reviews and spammers effectively, a novel weighted graph filter combining the reliability scores was designed, and a double-cycle filtering detection algorithm was put forward. The proposed algorithm is applied to Yelp datasets and proves effective in detecting spammers and identifying real reviews.
Chinese Named Entity Recognition Method Based on BGRU-CRF
SHI Chun-dan, QIN Lin
Computer Science. 2019, 46 (9): 237-242.  doi:10.11896/j.issn.1002-137X.2019.09.035
Aiming at the problems that traditional named entity recognition methods rely heavily on hand-crafted features, domain knowledge and word segmentation quality, and do not make full use of word order information, a named entity recognition model based on BGRU (bidirectional gated recurrent units) was proposed. The model utilizes external data by pre-training a word dictionary on large automatically segmented texts and integrating the latent word information into a character-based BGRU-CRF, making full use of latent words, extracting comprehensive context information, and more effectively avoiding entity ambiguity. In addition, an attention mechanism is used to allocate the weights of specific information in the BGRU network structure, which selects the most relevant characters and words from the sentence, effectively captures long-distance dependencies of specific words in the text, and identifies named entities. The model explicitly uses the sequence information between words and is not affected by word segmentation errors. Compared with traditional sequence labeling models and neural network models, experimental results on MSRA and OntoNotes show that the proposed model improves the overall F1 value by 3.08% and 0.16% respectively over the state-of-the-art baseline models.
Multi-layer Perceptron Deep Convolutional Generative Adversarial Network
WANG Ge-ge, GUO Tao, LI Gui-yang
Computer Science. 2019, 46 (9): 243-249.  doi:10.11896/j.issn.1002-137X.2019.09.036
The generative adversarial network (GAN) is currently a new and effective method for training generative models for image generation. As an extension of GAN, the deep convolutional generative adversarial network (DCGAN) introduces convolutional neural networks into the generative model for unsupervised learning. However, the linear convolutional layer of DCGAN is a generalized linear model over the underlying data patches; its level of abstraction is low and the quality of the generated images is not high. Moreover, in terms of performance measurement, image quality is often judged only by subjective visual perception. Aiming at these problems, the multi-layer perceptron deep convolutional generative adversarial network (MPDCGAN) was proposed, in which multi-layer perceptron convolutional layers replace the generalized linear model when convolving the input data, capturing deeper features of the image. To evaluate the quality of the generated images quantitatively, the Frechet Inception Distance (FID) was used. The experimental results on four benchmark datasets show that the FID value of the images generated by MPDCGAN is negatively correlated with image quality: image quality improves further as the FID value decreases.
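For reference, FID compares the Gaussian statistics of Inception features of real and generated images; a minimal sketch with random stand-in features (a real evaluation would extract features with an Inception network):

    import numpy as np
    from scipy.linalg import sqrtm

    def fid(feats_real, feats_fake):
        # FID = ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2 (C_r C_f)^(1/2)).
        mu1, mu2 = feats_real.mean(0), feats_fake.mean(0)
        c1 = np.cov(feats_real, rowvar=False)
        c2 = np.cov(feats_fake, rowvar=False)
        covmean = sqrtm(c1 @ c2)
        if np.iscomplexobj(covmean):
            covmean = covmean.real  # drop tiny imaginary parts from sqrtm
        return float(((mu1 - mu2) ** 2).sum()
                     + np.trace(c1 + c2 - 2 * covmean))

    rng = np.random.default_rng(0)
    print(fid(rng.normal(0, 1, (500, 64)), rng.normal(0.3, 1, (500, 64))))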
Graphics,Image & Pattern Recognition
Automatic Recognition Algorithm for Unconstrained Multi-pose Face Key Features under Unqualified Conditions
ZHAO Zhi-wei, NI Gui-qiang
Computer Science. 2019, 46 (9): 250-253.  doi:10.11896/j.issn.1002-137X.2019.09.037
Abstract PDF(1419KB) ( 592 )   
References | Related Articles | Metrics
Automatic recognition of the key features of multi-pose faces is of great significance for processing images in face databases. To recognize face key features accurately, they must first be extracted. When traditional algorithms are used to automatically recognize multi-pose face key features, the recognition rate and the recognition efficiency are both low. This paper presented an automatic multi-pose face key feature recognition algorithm based on a support vector machine. The 3D coordinates of the key features in a face image are expressed through the camera focal length, and the 3D information of the multi-pose face key features is calculated. A filter is used to process the multi-pose face key features. Finally, according to the weights of the support vector machine, this paper analyzed the objective function and the noise of the face key features, calculated the conditional probability and the number of iterations of automatic face recognition, and realized the automatic recognition of the key features of unconstrained multi-pose faces under unqualified conditions. Experimental results show that the proposed algorithm can automatically identify multi-pose face key features with a high recognition rate and high recognition efficiency.
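The 3D computation from the focal length presumably rests on the standard pinhole-camera relation; the snippet below shows that relation only, with generic parameter names, and is not the paper's full procedure.

```python
# Pinhole-camera back-projection of an image landmark (u, v) at depth Z:
# X = (u - cx) * Z / f, Y = (v - cy) * Z / f. Names are generic.
def backproject(u, v, depth, f, cx, cy):
    x = (u - cx) * depth / f
    y = (v - cy) * depth / f
    return (x, y, depth)
```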
Multi-focus Image Fusion Based on Latent Sparse Representation and Neighborhood Information
ZHANG Bing, XIE Cong-hua, LIU Zhe
Computer Science. 2019, 46 (9): 254-258.  doi:10.11896/j.issn.1002-137X.2019.09.038
Abstract PDF(6228KB) ( 604 )   
References | Related Articles | Metrics
This paper presented a novel fusion method based on a latent sparse representation model to address edge blurring and ghosting in multi-focus image fusion. Firstly, the method decomposes the images into common features, unique features and detail information by latent sparse representation. Secondly, it combines the unique features and the detail information to determine the focused and defocused regions. Finally, it fuses the multi-focus images from the source information together with neighborhood (context) information. Extensive experimental results show that the proposed method can fuse multi-focus images effectively. Compared with state-of-the-art methods, the images produced by this algorithm retain more information from the source images; at the same time, the ghosting of unregistered images is reduced and the fusion quality is greatly improved.
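For intuition, the following simplified fragment decides focused versus defocused regions from local detail energy in a neighborhood; the paper's method derives this decision from the latent sparse representation rather than the raw Laplacian used here.

```python
# Simplified focus-map fusion for two grayscale images of equal size.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def fuse(img_a, img_b, win=7):
    # Local energy of the Laplacian approximates "amount of detail".
    ea = uniform_filter(laplace(img_a.astype(float)) ** 2, win)
    eb = uniform_filter(laplace(img_b.astype(float)) ** 2, win)
    mask = ea >= eb               # True where img_a is better focused
    return np.where(mask, img_a, img_b)
```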
Image Localized Style Transfer Based on Convolutional Neural Network
MIAO Yong-wei, LI Gao-yi, BAO Chen, ZHANG Xu-dong, PENG Si-long
Computer Science. 2019, 46 (9): 259-264.  doi:10.11896/j.issn.1002-137X.2019.09.039
Abstract PDF(3509KB) ( 1674 )   
References | Related Articles | Metrics
Image style transfer is a hot research topic in computer graphics and computer vision. Aiming at the difficulty that existing image style transfer methods have in transferring style to a local area of the content image, this paper proposed a localized image style transfer framework based on convolutional neural networks. First, according to the input content image and style image, an image style transfer network generates a globally style-transferred image. Then, the foreground and background areas of the image are determined by a mask generated through automatic semantic segmentation. Finally, given the style transfer result for the foreground or background region, an image fusion algorithm based on Manhattan distance is proposed to achieve a smooth transition between the stylized object and the original area. The framework comprehensively considers the pixel values and positions of the target area and the boundary band. Experiments on three public image datasets demonstrate that the method can implement local style transfer of input content images efficiently, quickly and naturally, producing visual effects that are both artistic and authentic.
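A plausible rendering of the Manhattan-distance fusion step is sketched below: pixels near the mask boundary fade from the stylized result to the original. The band width and the use of SciPy's taxicab distance transform are assumptions, not the paper's exact algorithm.

```python
# Distance-weighted blending along the mask boundary (illustrative).
import numpy as np
from scipy.ndimage import distance_transform_cdt

def blend(stylized, original, mask, band=10):
    # Manhattan (taxicab) distance from each foreground pixel to the edge.
    d = distance_transform_cdt(mask, metric='taxicab')
    w = np.clip(d / band, 0.0, 1.0)[..., None]  # 0 at boundary, 1 inside
    return w * stylized + (1.0 - w) * original
```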
Multi-spectral Scene Recognition Method Based on Multi-way Convolution Neural Network
JIANG Ze-tao, QIN Jia-qi, HU Shuo
Computer Science. 2019, 46 (9): 265-270.  doi:10.11896/j.issn.1002-137X.2019.09.040
Abstract PDF(2489KB) ( 1123 )   
References | Related Articles | Metrics
Existing scene recognition algorithms based on convolutional neural networks cannot handle multi-spectral images of the target scene and cannot achieve ideal accuracy when data are insufficient. In view of these problems, this paper proposed a multi-spectral scene recognition method based on a multi-way convolutional neural network. The network accepts four channels in total: the three channels of a visible-light color image (RGB image) and a single-channel near-infrared image (NIR image). The proposed method can effectively extract the features of the visible-light image, the infrared image and the correlation between them, and combines these features in the fully connected layer, reasonably utilizing the correlation information among spectral images. Pre-training is employed to further improve the accuracy. Experimental results on the NIR_RGB dataset show that the average accuracy of the network is higher than that of AlexNet, InceptionNet, ResNet and hand-crafted feature descriptors. Moreover, the network can be extended to other multi-spectral image classification tasks with slight modification.
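The multi-way structure can be sketched as two branches fused at the fully connected layer; channel counts and depths below are placeholders rather than the paper's exact architecture.

```python
# Two-branch RGB + NIR classifier fused at the fully connected layer.
import torch
import torch.nn as nn

class TwoWayNet(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        def branch(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4))
        self.rgb, self.nir = branch(3), branch(1)
        self.fc = nn.Linear(2 * 64 * 4 * 4, num_classes)

    def forward(self, rgb, nir):
        a = self.rgb(rgb).flatten(1)
        b = self.nir(nir).flatten(1)
        return self.fc(torch.cat([a, b], dim=1))  # fuse at FC layer
```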
Long-term Object Tracking Based on Kernelized Correlation Filter and Hierarchical Convolution Features
CHEN Wei, LI Jue-long, XING Jian-chun, YANG Qi-liang, ZHOU Qi-zhen
Computer Science. 2019, 46 (9): 271-276.  doi:10.11896/j.issn.1002-137X.2019.09.041
Abstract PDF(3363KB) ( 645 )   
References | Related Articles | Metrics
Aiming at problems such as deformation, scale variation, target occlusion and the target leaving the field of view during long-term object tracking, this paper proposed a long-term object tracking algorithm based on kernelized correlation filters and hierarchical convolution features. Firstly, a pre-trained convolutional neural network is applied to extract hierarchical convolution features, which are used to train the correlation filter and estimate the target location. Then a target scale pyramid is constructed to estimate the scale. To prevent tracking failure caused by occlusion or by the target leaving the field of view, an online support vector machine is trained for target re-detection, achieving long-term tracking. Experimental results on a long-term object tracking dataset show that the accuracy of the proposed algorithm is 7%, 15%, 17%, 21% and 50% higher than that of HCF, LCT, DSST, KCF and TLD, respectively.
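The correlation-filter core can be illustrated with the simpler linear (MOSSE-style) formulation below; the paper's kernelized, hierarchical-feature version builds on the same Fourier-domain idea.

```python
# Linear correlation filter learned and applied in the Fourier domain.
import numpy as np

def train_filter(x, y, lam=1e-4):
    # x: training patch, y: desired Gaussian response, lam: regularizer.
    X, Y = np.fft.fft2(x), np.fft.fft2(y)
    return (np.conj(X) * Y) / (np.conj(X) * X + lam)

def respond(h_hat, z):
    # Response map over a new patch z; its peak gives the target shift.
    return np.real(np.fft.ifft2(h_hat * np.fft.fft2(z)))
```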
Edge-preserving Filtering Method Based on Convolutional Neural Networks
SHI Xiao-hong, HUANG Qin-kai, MIAO Jia-xin, SU Zhuo
Computer Science. 2019, 46 (9): 277-283.  doi:10.11896/j.issn.1002-137X.2019.09.042
Abstract PDF(3305KB) ( 1363 )   
References | Related Articles | Metrics
Edge-preserving filtering is an important piece of basic theoretical research in computer vision and image processing. As a pre-processing step, edge-preserving filtering has a great influence on the final results of image processing. Unlike traditional filtering, edge-preserving filtering focuses not only on smoothing but also on preserving image edge details. Convolutional neural networks (CNNs) have been applied to a variety of research fields with great success. In this paper, CNNs were introduced into edge-preserving filtering. Taking advantage of CNNs' excellent extensibility and flexibility, this paper constructed a deep convolutional neural network (DCNN). With three types of cascaded network layers, the DCNN iteratively updates its parameters by back propagation, produces a residual image and realizes DCNN-based edge-preserving filtering. Besides, a gradient CNN model (GCNN) was constructed: the gradients of color images are learnt, edge-preserving smoothing is applied to the gradient images through three convolution layers, and edge-preserving filtered gradient images are obtained. Subsequently, the input image is used to guide the filtered gradient image to reconstruct the color filtered image. Finally, the proposed methods were evaluated experimentally and compared with popular edge-preserving filtering methods both subjectively and objectively. The DCNN not only achieves the same visual effects as other methods, but also has clear advantages in processing time, which demonstrates that it can effectively and efficiently imitate various filtering methods when trained on large amounts of data. The output of the GCNN conforms globally to the color style of the input, and it also outperforms other methods in image similarity evaluation, which verifies that the GCNN can address the problems of color shift and gradient inversion while improving filtering efficiency.
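A minimal residual formulation in the spirit of the DCNN described above might look as follows; the three convolution layers and their widths are placeholders, not the paper's exact configuration.

```python
# Residual filtering CNN: the network predicts a residual that, added to
# the input, yields the edge-preserving filtered image.
import torch.nn as nn

class ResidualFilterCNN(nn.Module):
    def __init__(self, ch=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, 64, 9, padding=4), nn.ReLU(),  # feature extraction
            nn.Conv2d(64, 32, 1), nn.ReLU(),             # nonlinear mapping
            nn.Conv2d(32, ch, 5, padding=2))             # reconstruction

    def forward(self, x):
        return x + self.body(x)  # input + predicted residual image
```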
Iris Center Localization Method Based on 3D Eyeball Model and Snakuscule
ZHOU Xiao-long, JIANG Jia-qi, LIN Jia-ning, CHEN Sheng-yong
Computer Science. 2019, 46 (9): 284-290.  doi:10.11896/j.issn.1002-137X.2019.09.043
Abstract PDF(3128KB) ( 925 )   
References | Related Articles | Metrics
To improve the accuracy of iris center localization in gaze estimation, this paper proposed a novel iris center localization method based on a 3D eyeball model and the Snakuscule. Firstly, a facial alignment method is employed to obtain facial feature points, from which an initial iris center is derived. Then, the eye status is judged to reduce the error caused by low-quality images. To further obtain an accurate iris center location, the energy model of the Snakuscule is improved and the iris contour is updated iteratively by a Snakuscule of fixed size. The energy value is obtained by combining the Snakuscule model with the 3D eyeball model, which reflects the geometric relationship between the iris center, the eyeball center and the iris contours. According to the energy value, the final iris center is obtained by updating the iris contours iteratively. Finally, experiments conducted on the BioID face database validate the effectiveness and superiority of the proposed method: its localization accuracy reaches 85.0%, 97.8% and 99.8% when the normalized error e≤0.05, e≤0.1 and e≤0.25, respectively.
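The Snakuscule energy is essentially a contrast measure between an inner disk and a surrounding annulus; the fragment below computes such a contrast for a candidate center. The radii and the coupling with the 3D eyeball model are assumptions made for illustration.

```python
# Snakuscule-style contrast energy for a candidate iris center (cx, cy):
# a dark iris disk against the brighter sclera minimizes this value.
import numpy as np

def snakuscule_energy(gray, cx, cy, r, ratio=1.5):
    yy, xx = np.indices(gray.shape)
    d2 = (xx - cx) ** 2 + (yy - cy) ** 2
    inner = gray[d2 <= r ** 2]
    outer = gray[(d2 > r ** 2) & (d2 <= (ratio * r) ** 2)]
    return inner.mean() - outer.mean()  # lower = darker disk, brighter ring
```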
Interdiscipline & Frontier
Virtual Machine Placement Strategy with Energy Consumption Optimization under Reinforcement Learning
LU Hai-feng, GU Chun-hua, LUO Fei, DING Wei-chao, YUAN Ye, REN Qiang
Computer Science. 2019, 46 (9): 291-297.  doi:10.11896/j.issn.1002-137X.2019.09.044
Abstract PDF(1937KB) ( 1158 )   
References | Related Articles | Metrics
Although the rapid development of cloud data centers has brought very powerful computing capability, their energy consumption has become an increasingly serious problem. To reduce the energy consumption of physical servers in cloud data centers, the virtual machine placement problem is first modeled with reinforcement learning. Then, the Q-Learning(λ) algorithm is optimized in two respects: state aggregation and time reliability. Finally, the virtual machine placement problem is simulated on the cloud simulation platform CloudSim with real data. The simulation results show that, compared with the Greedy algorithm, the PSO algorithm and the Q-Learning algorithm, the optimized Q-Learning(λ) algorithm can effectively reduce the energy consumption of the cloud data center and maintains good results for different numbers of virtual machine placement requests. The proposed algorithm thus has strong practical value.
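The underlying learning rule is the textbook Q-Learning(λ) update with eligibility traces, sketched below; the state aggregation and time-reliability optimizations described in the abstract are omitted.

```python
# One Q-Learning(lambda) step with accumulating eligibility traces.
import numpy as np

def q_lambda_step(Q, E, s, a, r, s_next, alpha=0.1, gamma=0.9, lam=0.8):
    a_star = np.argmax(Q[s_next])
    delta = r + gamma * Q[s_next, a_star] - Q[s, a]  # TD error
    E[s, a] += 1.0                 # accumulate trace for the visited pair
    Q += alpha * delta * E         # propagate the TD error along traces
    E *= gamma * lam               # decay traces (Watkins' variant resets
                                   # them after exploratory actions)
    return Q, E
```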
Effectiveness Evaluation of Command and Control System Based on Improved Projection Pursuit and Grey Correlation Method
ZHANG Zhuang, LI Lin-lin, YU Hong-feng, FAN Bao-qing
Computer Science. 2019, 46 (9): 298-302.  doi:10.11896/j.issn.1002-137X.2019.09.045
Abstract PDF(1638KB) ( 601 )   
References | Related Articles | Metrics
Aiming at the problem in effectiveness evaluation of command and control systems that index weights are susceptible to subjective factors, which affects decision analysis, an effectiveness evaluation method based on improved projection pursuit and grey correlation was proposed. First, the projection pursuit method is improved: a new projection index function is defined by the degree of sample aggregation within classes and the degree of dispersion between classes, and the index weights are obtained by maximizing this projection index function. Second, based on the grey relational projection algorithm, the effectiveness of the command and control system is evaluated by using the projection of each evaluation object onto the reference series as the measure of comprehensive performance. Finally, five command and control systems are taken as examples to verify the method. The results show that the fifth object has the highest comprehensive effectiveness, which is consistent with the conclusion of the combination weighting method, and that when the number of indicators exceeds eighteen, the proposed method is more efficient than AHP and the combination weighting method.
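The grey relational part relies on the standard relational coefficient; a compact version is given below, computing the extrema per sequence for brevity (the classical form takes them over all compared sequences), with rho as the usual distinguishing coefficient.

```python
# Grey relational grade of an evaluation sequence x_i against the
# reference sequence x_0, with index weights w (summing to 1).
import numpy as np

def grey_relational_grade(x_i, x_0, w, rho=0.5):
    delta = np.abs(x_i - x_0)
    coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return np.dot(w, coeff)  # weight-averaged relational grade
```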
Analysis of Interactive Process Change Propagation Based on Configuration
ZHAN Yue, FANG Xian-wen, WANG Li-li
Computer Science. 2019, 46 (9): 303-309.  doi:10.11896/j.issn.1002-137X.2019.09.046
Abstract PDF(2096KB) ( 460 )   
References | Related Articles | Metrics
Change propagation is one of the core problems of business process management systems, and aims at adapting flexibly to changing business requirements. However, existing change propagation work mainly deals with the change region between similar processes derived from a single business process, and it is difficult to apply to interactive processes with information transfer. A method was proposed to analyze interactive process change propagation based on configuration. Based on the location of the change region, configuration techniques are used to refine the behavioral relationships within the region. Once a given change demand locks the corresponding source change region, choreography and conditional abstraction are used to find the target change regions affected in the other interacting sub-processes. Configuration is then used to handle the changed behavior of constraints within the region, under the principle of keeping the structure outside the region consistent. Then, the soundness of the configured interactive process model is checked via compatibility. Finally, the feasibility of the method was verified by a specific case.
Model and Algorithm for Identifying Driver Pathways in Cancer by Integrating Multi-omics Data
CAI Qi-rong, WU Jing-li
Computer Science. 2019, 46 (9): 310-314.  doi:10.11896/j.issn.1002-137X.2019.09.047
Abstract PDF(1477KB) ( 1419 )   
References | Related Articles | Metrics
This paper proposed an improved maximum weight submatrix model for identifying driver pathways in cancer by integrating somatic mutations, copy number variations and gene expression data. The model adjusts coverage and mutual exclusivity with the average weight of the genes in a pathway, enhancing the coverage of gene sets with large weight while relaxing their mutual exclusivity constraint. By introducing a greedy recombination operator, a parthenogenetic algorithm named PGA-MWS was presented to solve the model. Experimental comparisons between PGA-MWS and GA were performed on glioblastoma and ovarian cancer datasets. The results show that, compared with the GA algorithm, the PGA-MWS algorithm based on the improved model can identify gene sets with higher coverage and less overlap. Many of the identified gene sets are involved in known signaling pathways and have been confirmed to be closely related to cancer cells; several potential driver pathways can also be discovered. Therefore, the proposed approach may serve as a useful complement for identifying driver pathways.
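The classical maximum weight submatrix objective balances coverage against mutual exclusivity; the fragment below computes that baseline weight, while the improved model's reweighting by the genes' average weight is not reproduced here.

```python
# Baseline maximum-weight-submatrix score of a gene set: coverage
# (patients with >= 1 mutated gene) rewarded, overlap penalized.
import numpy as np

def submatrix_weight(A, genes):
    sub = A[:, genes]               # A: binary patient x gene matrix
    coverage = np.count_nonzero(sub.sum(axis=1) > 0)
    total = int(sub.sum())          # total mutation events in the set
    return 2 * coverage - total     # high coverage, low overlap wins
```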
Heuristic One-dimensional Bin Packing Algorithm Based on Minimum Slack
LUO Fei, REN Qiang, DING Wei-chao, LU Hai-feng
Computer Science. 2019, 46 (9): 315-320.  doi:10.11896/j.issn.1002-137X.2019.09.048
Abstract PDF(1374KB) ( 1563 )   
References | Related Articles | Metrics
The one-dimensional bin packing problem is an NP-hard problem in combinatorial optimization, and it is extremely difficult to obtain an exact solution in limited time. Heuristic algorithms and genetic algorithms are the two main approaches to the bin packing problem. However, the results obtained by classical heuristic packing algorithms are very poor in extreme cases, while genetic algorithms are prone to generating invalid solutions in the solving process, resulting in a large amount of data to process. To obtain an approximately optimal solution, this paper analyzed current packing algorithms and proposed a new heuristic packing algorithm, IAMBS. The IAMBS algorithm uses randomization to search for local optima by allowing a certain amount of slack in each bin, and then derives a good global solution; the allowed slack prevents the algorithm from getting stuck in local optima and gives it a strong ability to discover globally optimal solutions. 1410 benchmark instances from two sources were used in the experiments, and the IAMBS algorithm obtained the optimal solution for 1152 of them. The experimental data demonstrate that IAMBS can effectively obtain approximately optimal solutions and is more advantageous than traditional classical packing algorithms.
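A minimum-bin-slack style step with an allowed slack threshold can be sketched as follows; the randomized restarts stand in for the IAMBS search strategy, whose exact rules the abstract does not spell out.

```python
# Pack one bin, searching for an item subset whose slack (capacity minus
# packed size) stays within an allowed threshold. Illustrative only.
import random

def pack_one_bin(items, capacity, max_slack, tries=200):
    best, best_load = [], 0
    for _ in range(tries):
        order = items[:]
        random.shuffle(order)          # randomized restart
        chosen, load = [], 0
        for it in order:
            if load + it <= capacity:
                chosen.append(it)
                load += it
        if capacity - load <= max_slack:
            return chosen              # slack within bound: accept bin
        if load > best_load:
            best, best_load = chosen, load
    return best                        # fall back to the fullest bin found
```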
QoS Satisfaction Prediction of Cloud Service Based on Second Order Hidden Markov Model
JIA Zhi-chun, LI Xiang, YU Zhan-lin, LU Yuan, XING Xing
Computer Science. 2019, 46 (9): 321-324.  doi:10.11896/j.issn.1002-137X.2019.09.049
Abstract PDF(1581KB) ( 576 )   
References | Related Articles | Metrics
With the rapid development of cloud computing technology, QoS prediction for cloud service components has become an important research issue in cloud computing, and accurately predicting QoS values is a major difficulty in this field. QoS is often used to measure the performance of different cloud service components: based on the QoS values of the candidate components, it is easy to choose the best one. For the same cloud service component, the QoS values observed by different users are not necessarily the same, so personalized component QoS values are needed for accurate selection. If a single cloud service component cannot satisfy the user's QoS requirements, component composition should be considered, in which case the QoS capability of the composition must be predicted to meet the user's needs. This paper presented a QoS satisfaction prediction model for cloud service components that uses a second-order hidden Markov model. By considering the influence of the previous two states on the current state, the proposed method can effectively improve prediction accuracy. Finally, in a Matlab simulation environment, the effectiveness of the proposed method was verified with a prototype system and the QWS dataset containing 2507 real web services.
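In a second-order hidden Markov model the transition distribution conditions on the two preceding states; the prediction step below illustrates this with a transition tensor A2[i, j, k] = P(s_t = k | s_{t-2} = i, s_{t-1} = j). The data structures are illustrative, not the paper's implementation.

```python
# Second-order Markov prediction of the next QoS satisfaction state.
import numpy as np

def predict_next(A2, s_prev2, s_prev1):
    probs = A2[s_prev2, s_prev1]       # distribution over next states
    return int(np.argmax(probs)), probs
```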
IIVMM:An Improved Interactive Voting-based Map Matching Algorithm for Low-sampling-rate GPS Trajectories
YAN Sheng-long, YU Juan, ZHOU Hou-pan
Computer Science. 2019, 46 (9): 325-332.  doi:10.11896/j.issn.1002-137X.2019.09.050
Abstract PDF(2693KB) ( 761 )   
References | Related Articles | Metrics
Map matching is the process of recovering the movement track of moving objects (mainly vehicles and pedestrians) on the road network from discretely sampled location data (GPS coordinates). It is a necessary processing step for many applications such as GPS trajectory data analysis and location analysis. This paper proposed an improved interactive voting-based map matching algorithm to address the low accuracy and efficiency of existing map matching algorithms on low-sampling-rate trajectory data. The main contributions of the proposed algorithm are as follows. Firstly, it considers not only the spatial distances between sampling points, the road topology and road segment speed limits, but also the real-time moving direction and speed at each GPS point, improving matching accuracy. Secondly, a filter based on driving direction and speed limits is introduced to filter out noisy candidates, improving the efficiency of the algorithm. To evaluate the performance of the proposed algorithm, two real-world datasets were used to compare it with the existing IVMM and AIVMM algorithms. Experimental results show that the proposed algorithm outperforms both in matching accuracy and efficiency.
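The direction-and-speed candidate filter might look like the sketch below, where a candidate road segment is dropped when its bearing disagrees with the vehicle's instantaneous heading or the implied speed exceeds the segment's limit; the thresholds and segment attributes are assumptions for illustration.

```python
# Filter candidate road segments by heading agreement and speed limit.
def filter_candidates(cands, heading, speed, max_angle=45.0, tol=1.2):
    kept = []
    for seg in cands:  # seg: object with .bearing and .speed_limit (assumed)
        # Smallest angular difference between bearings, in [0, 180].
        angle = abs((seg.bearing - heading + 180.0) % 360.0 - 180.0)
        if angle <= max_angle and speed <= tol * seg.speed_limit:
            kept.append(seg)
    return kept
```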