Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 47 Issue 10, 15 October 2020
  
Mobile Crowd Sensing and Computing
Review of Human Activity Recognition Based on Mobile Phone Sensors
ZHANG Chun-xiang, ZHAO Chun-lei, CHEN Chao, LUO Hui
Computer Science. 2020, 47 (10): 1-8.  doi:10.11896/jsjkx.200400092
Abstract PDF(1650KB) ( 2910 )   
References | Related Articles | Metrics
Human activities shape all walks of life, so human activity recognition (HAR) has a wide range of applications and has attracted extensive attention. As smartphones have evolved, embedded sensors have made them more intelligent and enabled more flexible human-machine interaction. Since most people carry smartphones with them, the signals of mobile phone sensors contain rich information about human activities, and users' activities can be identified by extracting features from these signals. Compared with methods based on computer vision, HAR based on mobile phone sensors better reflects the essence of human movement and offers low cost, flexibility and strong portability. This paper surveys the state of the art of HAR based on mobile phone sensors, describes and summarizes the system structure and basic principles of the main techniques in detail, and finally analyzes the open problems and future research directions of HAR based on mobile phone sensors.
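As a concrete illustration of the sensor-signal pipeline such surveys cover, the sketch below extracts simple time-domain features from a raw 1-D accelerometer stream. The window length, step and feature set are illustrative assumptions, not any specific method from the surveyed literature.

```python
import math

def window_features(signal, win=50, step=25):
    """Slide a window over a 1-D accelerometer stream and extract simple
    time-domain statistics (mean, standard deviation, energy) per window,
    the kind of features commonly fed to HAR classifiers."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        mean = sum(w) / win
        var = sum((x - mean) ** 2 for x in w) / win
        energy = sum(x * x for x in w) / win
        feats.append((mean, math.sqrt(var), energy))
    return feats
```

A real HAR system would feed such per-window feature vectors to a classifier (e.g. a decision tree or neural network) trained on labeled activities.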
Review of IoT Sonar Perception
CHEN Chao, ZHAO Chun-lei, ZHANG Chun-xiang, LUO Hui
Computer Science. 2020, 47 (10): 9-18.  doi:10.11896/jsjkx.200300138
Abstract PDF(2014KB) ( 2358 )   
In recent years, with the rapid development of technology, smart mobile devices have become part of people's lives, and their popularity provides sufficient physical support for realizing sonar perception. As a sonar signal propagates, it is modulated by the surrounding space and by human activities, so it carries rich information about living state and spatial layout. The popularity of smartphones, the maturity of communication technology and the innovative use of acoustic signals enable sonar sensing devices to achieve low-cost, fine-grained sensing, collection and computation. Sonar sensing with acoustic signals requires no special hardware; thanks to their inherent concealment and typically high accuracy, acoustic signals can be used to infer surrounding spatial information. This article reviews the research history of acoustic signals in spatial positioning and sensing, summarizes the basic principles of the main techniques, and finally analyzes the open problems and future development trends of acoustic signals in mobile sensing applications.
Crowdsourcing Collaboration Process Recovery Method
WANG Kuo, WANG Zhong-jie
Computer Science. 2020, 47 (10): 19-25.  doi:10.11896/jsjkx.191200164
Abstract PDF(1809KB) ( 1155 )   
Crowdsourcing is a distributed problem-solving mechanism that uses group intelligence. It is widely used in Internet application scenarios built around artificial intelligence activities, where large groups of users on the Internet work together to solve complex problems that no single person can solve. Taking the development and maintenance of open source software as an example, participants jointly complete key tasks such as code writing and bug fixing through specific platforms. Unlike traditional business process management (BPM), collaborative processes in crowdsourcing scenarios face challenges such as undetermined process structure and unpredictable timing and results, which greatly complicate efficiency and quality control of crowdsourcing collaboration. Aiming at the time-ordered sequence of collaborative behaviors produced by multiple participants (embodied as natural-language text), this paper applies natural language processing and artificial intelligence techniques and proposes a recovery algorithm for the crowdsourcing collaboration process. An empirical study is carried out on personnel cooperation during bug fixing in open source software development. The recovered collaborative process is visualized, and the accuracy of the process recovery algorithm is compared quantitatively. This research can help coordinators of crowdsourcing processes (such as open source project managers) understand the problem-solving process more intuitively and find typical patterns of collaboration, so as to accurately predict the nature of the collaborative process of a new crowdsourcing task.
Truth Inference Based on Confidence Interval of Small Samples in Crowdsourcing
ZHANG Guang-yuan, WANG Ning
Computer Science. 2020, 47 (10): 26-31.  doi:10.11896/jsjkx.191100086
Abstract PDF(1867KB) ( 1056 )   
Crowdsourcing is an increasingly important area of computer applications because it can address problems that are difficult for computers to handle alone. Owing to the openness of crowdsourcing, quality control is one of its key challenges. To ensure effective truth inference, current research generally evaluates worker quality and leverages the answers of trustworthy workers to infer truths. However, most existing methods ignore the long-tail phenomenon in crowdsourcing, and there is little research on truth inference when the number of tasks completed by each worker is generally small. Considering the characteristics of different task types, the long-tail phenomenon and worker answers, this paper constructs confidence intervals for small samples to solve truth inference in this setting. Firstly, worker quality is pre-estimated according to a gold-standard answer strategy, and different truth initialization methods are adopted according to the pre-estimation result. Then, small-sample confidence intervals are constructed to evaluate worker quality accurately. Finally, task truths are inferred and worker quality is updated iteratively. To verify the effectiveness of the proposed method, experiments are conducted on 5 real datasets. Compared with existing methods, the proposed method handles the long-tail phenomenon effectively, especially when the number of tasks completed by each worker is generally small. Its average accuracy on single-choice tasks reaches 93%, 16% higher than the best performance of the existing methods, and its MAE and RMSE on numerical tasks are lower than those of the existing methods.
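The abstract does not specify how the small-sample confidence intervals are built; one standard choice for bounding a worker's accuracy from only a few answers is the Wilson score interval, sketched below. Using z = 1.96 for a 95% interval is an assumption, not the paper's stated configuration.

```python
import math

def wilson_interval(correct, n, z=1.96):
    """Wilson score interval for a worker's accuracy. It stays well
    behaved when the number of answered tasks n is small (the long-tail
    case), unlike the plain normal approximation."""
    if n == 0:
        return (0.0, 1.0)
    p = correct / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (centre - half, centre + half)
```

The interval tightens as a worker answers more tasks, so an iterative truth-inference loop can weight workers by the interval's lower bound rather than the raw accuracy.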
Task Recommendation Model Based on Crowd Worker’s Movement Trajectory
HU Ying, WANG Ying-jie, TONG Xiang-rong
Computer Science. 2020, 47 (10): 32-40.  doi:10.11896/jsjkx.200600180
Abstract PDF(3006KB) ( 1287 )   
With the development of mobile crowdsourcing, more and more tasks are published on crowdsourcing platforms. Because a mobile crowdsourcing system contains a large number of tasks, crowd workers spend a lot of time selecting tasks that match their interests. Moreover, since workers do not know all the tasks in the system, it is difficult for them to select the tasks best suited for their own execution. Tasks in mobile crowdsourcing have spatio-temporal characteristics: a worker must move to a specified region and complete the task within a specified time interval. However, crowd workers have their own work and life; to fit their daily movement, a mobility prediction model is proposed to predict their movement behavior. Based on the prediction results and workers' needs, a task recommendation model based on the movement trajectories of crowd workers is proposed to recommend tasks to them. Finally, extensive simulations on two real datasets show that the proposed model achieves high accuracy and good adaptability.
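A minimal stand-in for the mobility-prediction step the abstract mentions is a first-order Markov predictor over a worker's historical location sequence; the paper's actual model is not specified here, so this is only an illustrative baseline.

```python
from collections import Counter, defaultdict

def next_location(trajectory):
    """First-order Markov mobility predictor: count location-to-location
    transitions in the history, then predict the most frequent successor
    of the current (last) location. Returns None if the current location
    has never been left before."""
    trans = defaultdict(Counter)
    for a, b in zip(trajectory, trajectory[1:]):
        trans[a][b] += 1
    successors = trans[trajectory[-1]]
    return successors.most_common(1)[0][0] if successors else None
```

A recommender could then rank tasks by their distance to the predicted next location rather than the worker's current one.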
Reinforcement Learning Based Win-Win Game for Mobile Crowdsensing
CAI Wei, BAI Guang-wei, SHEN Hang, CHENG Zhao-wei, ZHANG Hui-li
Computer Science. 2020, 47 (10): 41-47.  doi:10.11896/jsjkx.200700070
Abstract PDF(2327KB) ( 1059 )   
A mobile crowdsensing system should offer personalized privacy protection of users' locations to attract more users to participate in tasks. However, because of malicious attackers, stronger privacy protection degrades location availability and reduces the efficiency of task allocation. To solve this problem, this paper proposes a win-win game based on reinforcement learning. Firstly, two virtual entities within a trusted third party simulate the interaction between users and the platform: one simulates a user choosing a privacy budget to add noise to its location, and the other simulates the platform allocating tasks with the perturbed locations. Then, the interaction is modeled as a game between these two virtual entities, and its equilibrium point is derived. Finally, reinforcement learning is used to explore different location perturbation strategies and output an optimal perturbation scheme. Experimental results show that the mechanism optimizes task-allocation utility while improving users' overall utility as much as possible, so that the users and the platform achieve a win-win situation.
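The privacy budget in the abstract plays the role of epsilon in differential privacy. As a simplified illustration (per-coordinate Laplace mechanism, not the paper's exact scheme), location noise could be added like this, with a smaller epsilon giving stronger privacy but noisier reported locations, which is exactly the trade-off the game negotiates:

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) noise via the inverse-CDF transform."""
    u = rng.random() - 0.5
    sgn = 1.0 if u >= 0 else -1.0
    return -scale * sgn * math.log(1.0 - 2.0 * abs(u))

def perturb_location(x, y, epsilon, sensitivity=1.0, rng=None):
    """Report a perturbed 2-D location under privacy budget epsilon.
    The sensitivity value is an assumption for illustration."""
    rng = rng or random.Random(0)
    scale = sensitivity / epsilon
    return x + laplace_noise(scale, rng), y + laplace_noise(scale, rng)
```

The reinforcement learner in the paper would, in effect, search over epsilon values to balance task-allocation utility against location privacy.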
Vital Signs Monitoring Method Based on Channel State Phase Information
DAI Huan, JIANG Jing-jing, SHU Qin-dong, SHI Peng-zhan, SHI Wen-hua
Computer Science. 2020, 47 (10): 48-54.  doi:10.11896/jsjkx.200500057
Abstract PDF(3732KB) ( 1750 )   
With the development of wireless communication technology, wireless sensing has been widely studied. This paper proposes a vital signs monitoring method based on CSI phase. The method uses commodity WiFi to obtain CSI phase information. A linear transformation is applied to reduce the phase shift and delay interference caused by the lack of synchronization between transmitter and receiver. A Hampel filter removes the DC component and the high-frequency noise introduced by signal fading and multipath effects. Discrete wavelet transform is then used to extract vital signs. According to the characteristics of breathing and heartbeat frequencies, multi-subcarrier fusion and Fast Fourier Transform algorithms are employed to estimate breathing and heart rates, respectively. Experimental results show that the method can effectively capture vital signs in multiple scenarios.
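Of the pipeline steps above, the Hampel filter is the easiest to sketch: a sample further than a few scaled median absolute deviations from its local median is treated as noise and replaced. The window half-width k and threshold t below are illustrative, not the paper's parameters.

```python
def hampel_filter(x, k=3, t=3.0):
    """Replace outliers in a 1-D signal with the local median: a point
    further than t * 1.4826 * MAD from the median of its (2k+1)-sample
    neighbourhood is considered noise. Edge samples are left untouched."""
    y = list(x)
    for i in range(k, len(x) - k):
        window = sorted(x[i - k:i + k + 1])
        med = window[k]
        mad = sorted(abs(v - med) for v in window)[k]
        if abs(x[i] - med) > t * 1.4826 * mad:
            y[i] = med
    return y
```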
Fog Computing and Self-assessment Based Clustering and Cooperative Perception for VANET
LIU Dan
Computer Science. 2020, 47 (10): 55-62.  doi:10.11896/jsjkx.200500154
Abstract PDF(2724KB) ( 1049 )   
Clustering is an effective way to improve the perception quality of Vehicular Crowd Sensing (VCS) and reduce its cost. However, maximizing cluster stability while accounting for the high mobility of vehicles remains challenging. Based on the communication characteristics of VANET, a clustering algorithm based on Fog Computing and Self-Assessment (FCSAC) is proposed. It divides the VANET into clusters, each of which elects a Master Cluster Head (MCH) for data dissemination; the MCH passes the results of cooperative perception within the cluster to the fog nodes. The vehicle mobility rate (VMR), computed from mobility metrics, is introduced to improve the MCH election so as to cope with the high mobility of VANET. The impact of a joining vehicle on cluster stability is then evaluated through scaling functions and weighting mechanisms, and cluster stability is further strengthened by electing a Slave Cluster Head (SCH) in addition to the MCH. To improve the accuracy, timeliness and effectiveness of traffic information, an accurate and comprehensive view of local traffic is formed on the basis of fog computing via chained cooperative traffic perception among MCHs. Finally, performance is evaluated on the Veins simulation platform. The results show that, compared with the CBRSDN and SACBR algorithms, the proposed algorithm achieves better cluster stability and effectively improves VANET throughput; compared with the Fuzzy C-Means (FCM) algorithm, it has better traffic diversion capability and lower network communication consumption.
Group Perception Analysis Method Based on WiFi Dissimilarity
JIA Yu-fu, LI Ming-lei, LIU Wen-ping, HU Sheng-hong, JIANG Hong-bo
Computer Science. 2020, 47 (10): 63-68.  doi:10.11896/jsjkx.200600014
Abstract PDF(2617KB) ( 886 )   
Tracking and analyzing the dynamic change of group structure in a WiFi environment with smartphones is a new idea in non-intrusive perception. Based on the relationship between differences in WiFi information and the distance between users, a method for computing WiFi dissimilarity is designed. From the WiFi dissimilarity between users, a dissimilarity distance is statistically derived, and the GSGA-RSS algorithm is then used to iteratively compute node coordinates. Finally, the hierarchical group structure is analyzed by DBSCAN. A mass-center-based method for computing the location mean deviation (LMD) is proposed, and experiments are conducted on queue and ring group structures under different between-user distances. The results show that the proposed approach identifies 85% of the groups with 94% precision when the minimum intergroup distance is 5 m and the maximum intragroup distance is 3 m. The LMD is about 0.5 for queues with a between-user distance of 0.5 m, and about 1 for ring structures with a between-user distance of 1 m.
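One simple form a WiFi dissimilarity between two users' scans can take is the mean absolute RSS difference over the union of visible access points, with an assumed floor value for APs unseen by one side. This is an illustrative sketch, not the paper's exact GSGA-RSS formulation.

```python
def wifi_dissimilarity(scan_a, scan_b, missing=-100.0):
    """Dissimilarity between two WiFi scans, each a dict mapping AP
    identifier to received signal strength in dBm. APs seen by only one
    user are assigned an assumed floor RSS of `missing` dBm."""
    aps = set(scan_a) | set(scan_b)
    return sum(abs(scan_a.get(ap, missing) - scan_b.get(ap, missing))
               for ap in aps) / len(aps)
```

Users standing close together see similar AP sets with similar strengths, so their dissimilarity is small, which is what lets DBSCAN recover groups downstream.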
Database & Big Data & Data Science
Helpfulness Degree Prediction Model of Online Reviews Fusing Information Gain and Gradient Decline Algorithms
FENG Jin-zhan, CAI Shu-qin
Computer Science. 2020, 47 (10): 69-74.  doi:10.11896/jsjkx.190700034
Abstract PDF(1573KB) ( 838 )   
Because it is impossible to predict whether the text of an online product review will be helpful to readers, many reviewers write large numbers of unhelpful reviews, which increases the information-search cost for potential consumers and may even reduce their likelihood of buying the product. To raise the rate of helpful online reviews on e-commerce platforms and provide a testing function for reviewers, a model for predicting the helpfulness of online reviews is established. According to the textual characteristics of online reviews, the model uses three features: the number of words, the helpful value of words, and the number of product features, where the helpful value of a word is its information gain for distinguishing helpful from unhelpful reviews. The model parameters are then solved over a large corpus of online reviews by gradient descent. The experimental results show that review helpfulness increases with the number of words, the helpful value of words and the number of product features. Reviews are divided into three levels: general, helpful and very helpful, with predicted accuracies of 92.96%, 94.83% and 67.63%, respectively. The average accuracy, recall and F1 of the model are 85.05%, 82.81% and 83.72%, respectively, verifying the feasibility of the model for predicting review helpfulness.
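The "helpful value" of a word, defined above as its information gain, can be computed as follows for a binary word-occurrence feature against helpful/unhelpful labels; the feature encoding is an illustrative assumption.

```python
import math

def entropy(labels):
    """Shannon entropy (in bits) of a list of binary labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return -sum(q * math.log2(q) for q in (p, 1 - p) if q > 0)

def information_gain(labels, feature):
    """Information gain of a binary word-occurrence feature with respect
    to the helpful (1) / unhelpful (0) review label: how much knowing
    whether the word appears reduces label uncertainty."""
    n = len(labels)
    with_f = [l for l, f in zip(labels, feature) if f]
    without = [l for l, f in zip(labels, feature) if not f]
    return (entropy(labels)
            - len(with_f) / n * entropy(with_f)
            - len(without) / n * entropy(without))
```

A word that perfectly separates helpful from unhelpful reviews scores 1 bit; a word unrelated to helpfulness scores 0.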
Overlapping Community Detection Method Based on Rough Sets and Distance Dynamic Model
ZHANG Qin, CHEN Hong-mei, FENG Yun-fei
Computer Science. 2020, 47 (10): 75-82.  doi:10.11896/jsjkx.190800002
Abstract PDF(2420KB) ( 749 )   
The real world can be viewed as a collection of complex systems. To model and analyze the hidden rules and functions among individuals in such systems, a complex system can be abstracted as a complex network composed of nodes and edges. Mining community structure in complex networks has important theoretical significance and practical value in content recommendation, behavior prediction and disease spread. As individuals in complex systems change continuously, overlapping nodes appear among multiple communities, and mining these overlapping nodes effectively and accurately is challenging. To detect overlapping nodes effectively, an overlapping community detection method based on rough sets and a distance dynamic model (OCDRDD) is proposed in this paper. First, according to the network topology, K core nodes are selected by combining node degree centrality and distance, and the approximation sets and the boundary region of each community are initialized according to a distance-ratio relationship. Combined with the distance dynamic model, the distances between boundary-region nodes and lower-approximation nodes are changed iteratively: in each iteration, boundary-region nodes that conform to the defined distance-ratio relationship are moved into the lower approximation of the community, shrinking the boundary region until the optimal overlapping community structure is found. Finally, "pseudo" overlapping nodes are processed according to two rules defined in this paper. With NMI and the overlapping modularity EQ as evaluation indexes, OCDRDD is compared with other typical community detection methods of recent years on real network datasets and LFR Benchmark artificial network datasets. The experimental results show that OCDRDD outperforms the other community detection algorithms on the whole, demonstrating that the proposed algorithm is effective and feasible.
Study of Triangle Counting Algorithm with Sliding Windows Based on FLINK
WANG Xu, YANG Xiao-chun
Computer Science. 2020, 47 (10): 83-90.  doi:10.11896/jsjkx.190900014
Abstract PDF(3239KB) ( 1113 )   
Triangle counting, i.e., computing global and local triangle counts, is an important task in data mining; triangle counts are widely used in important-role identification, recommendation systems, community discovery, and spam and fraud detection. In graphs presented as a stream of edges, edges are temporal, and real-world graphs contain many duplicate edges. To make full use of the time information in the graph and mine network knowledge, this work studies the estimation of global and local triangle counts on a multigraph stream with sliding windows, using a window mechanism to examine multiple windows simultaneously and obtain more information from implicit temporal relationships. A triangle counting algorithm based on FLINK window operations and an incremental triangle counting algorithm based on sliding windows are proposed. As in existing edge-sampling work, an edge set stores window history data so that global and local triangle counts over a sliding window of a multigraph stream can be computed exactly in one pass. The FLINK-based algorithm uses the window mechanism provided by FLINK, while the incremental algorithm realizes window counting by processing only the data sliding into and out of the window, which avoids a large number of repeated computations on edges shared by adjacent windows, processes multiple time windows seamlessly, and applies a deduplication mechanism to the slide-in and slide-out data to further reduce computation. It is proven theoretically that both algorithms count the triangles in the sliding window exactly, and the effects of window size, sliding distance, data distribution and stream rate on window processing time are analyzed experimentally. Compared with the TRIEST algorithm, both proposed algorithms are faster when the window is small, and they guarantee the accuracy of the result when the window is large.
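The window-free core of exact streaming triangle counting can be sketched as follows: when edge (u, v) arrives, every common neighbour of u and v closes a triangle. The sliding-window maintenance and FLINK integration described above are omitted; duplicate edges are simply skipped, as a stand-in for the paper's deduplication mechanism.

```python
from collections import defaultdict

def count_triangles_stream(edges):
    """Exact incremental triangle counting over a stream of undirected
    edges. Returns the global count and a per-node local count."""
    adj = defaultdict(set)
    total = 0
    local = defaultdict(int)
    for u, v in edges:
        if v in adj[u]:          # duplicate edge in the multigraph: skip
            continue
        common = adj[u] & adj[v]  # each common neighbour closes a triangle
        if common:
            total += len(common)
            local[u] += len(common)
            local[v] += len(common)
            for w in common:
                local[w] += 1
        adj[u].add(v)
        adj[v].add(u)
    return total, dict(local)
```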
Mobility Pattern Mining for People Flow Based on Spatio-Temporal Data
SUN Tian-xu, ZHAO Yun-long, LIAN Zuo-wei, SUN Yi, CAI Yue-xiao
Computer Science. 2020, 47 (10): 91-96.  doi:10.11896/jsjkx.200100001
Abstract PDF(3260KB) ( 1777 )   
With the accelerating urbanization of many countries, managing people flow and mining mobility patterns are becoming more and more important. Meanwhile, with the development of information technology, especially mobile crowd sensing, many scholars have proposed the concept of the smart city, whose sensing data makes analysis of people flow possible. Spatio-temporal data is the most common data in a smart city. Based on spatio-temporal data, this paper proposes a modeling method that represents different kinds of spatio-temporal data as a people-flow model. Based on the idea of clustering, it then mines mobility patterns from people flow with an improved density-based clustering algorithm, designs a transportation application for the smart city, and proposes a method for evaluating the effectiveness of mobility patterns. Finally, experiments on a real dataset from a city in China show that the mined mobility patterns can reduce costs by 25% in the smart-city transportation application, verifying their effectiveness.
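The baseline that the paper's improved density-based algorithm builds on is DBSCAN; a minimal version on 2-D points is sketched below (the eps and min_pts parameters are illustrative, and the paper's improvements are not reproduced).

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal density-based clustering on 2-D points. Returns one label
    per point; -1 marks noise, clusters are numbered from 0."""
    n = len(points)
    labels = [None] * n

    def neighbours(i):
        return [j for j in range(n) if math.dist(points[i], points[j]) <= eps]

    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        nb = neighbours(i)
        if len(nb) < min_pts:
            labels[i] = -1            # tentative noise
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in nb if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:       # noise reached from a core point
                labels[j] = cluster   # becomes a border point
                continue
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb_j = neighbours(j)
            if len(nb_j) >= min_pts:  # core point: keep expanding
                queue.extend(nb_j)
    return labels
```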
Personalized Microblog Recommendation Model Integrating Content Similarity and Multi-feature Computing
LIU Yu-dong, SUN Hao, JIANG Yun-cheng
Computer Science. 2020, 47 (10): 97-101.  doi:10.11896/jsjkx.190700073
Abstract PDF(1693KB) ( 1220 )   
With the popularity of microblogs, problems such as information overload are increasingly prominent, and helping users find the microblogs they need quickly and accurately has become an urgent problem. Although microblog recommendation based on collaborative filtering and LDA achieves a certain accuracy, it cannot overcome the overly coarse classification of content or the weaknesses of the LDA model on short texts. Therefore, this paper proposes a personalized microblog recommendation model integrating content similarity and multi-feature computing. Firstly, the content similarity between a user and a microblog is calculated based on word2vec. Then, from features such as time and the numbers of likes, comments and reposts, the freshness and popularity of the microblog are calculated. Finally, content similarity, freshness and popularity are combined into a ranking score to realize personalized microblog recommendation. Approaching recommendation from the perspective of content similarity avoids the problems above and makes the recommendation results semantically more accurate. Experimental results show that the proposed model performs well in accuracy, recall and F-measure: in particular, accuracy improves by about 10% and F-measure by about 5%, proving the validity of the model.
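One plausible shape for the combined ranking score is a weighted sum of content similarity, exponential-decay freshness and log-scaled popularity. The weights, the 24-hour decay constant and the normalization below are assumptions for illustration, not the paper's formula.

```python
import math

def rank_score(similarity, age_hours, likes, comments, reposts,
               w=(0.6, 0.2, 0.2)):
    """Combine word2vec-style content similarity (in [0, 1]),
    freshness (exponential decay over age) and popularity (log-scaled
    engagement, squashed into (0, 1)) into a single ranking score."""
    freshness = math.exp(-age_hours / 24.0)
    popularity = math.log1p(likes + comments + reposts)
    popularity = popularity / (1.0 + popularity)
    return w[0] * similarity + w[1] * freshness + w[2] * popularity
```

Weighting similarity most heavily keeps the ranking semantically driven, with freshness and popularity acting as tie-breakers.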
Trajectory Compression Algorithm Based on Recurrent Neural Network
LI Yi-tao, SUN Wei-wei
Computer Science. 2020, 47 (10): 102-107.  doi:10.11896/jsjkx.191000194
Abstract PDF(1798KB) ( 1331 )   
With the development of positioning and storage technology, massive trajectories are being recorded. How to effectively compress the spatial path information of interest in a trajectory, and how to restore the original information, has attracted extensive research. Trajectory compression algorithms are mainly divided into line-simplification compression and road-network-based compression; existing algorithms suffer from shortcomings such as unreasonable assumptions and poor compression capability. Exploiting the distribution characteristics of trajectories in the road network and the ability of recurrent neural networks to probabilistically model variable-length sequences, a trajectory compression algorithm based on recurrent neural networks is proposed. The algorithm efficiently summarizes the trajectory distribution, and the compression space is further reduced by the road network structure. The influence of different inputs on the compression ratio of the algorithm is also analyzed quantitatively. Finally, experiments show that the proposed algorithm not only achieves a higher compression ratio than existing algorithms but also supports compressing untrained trajectory data, and that using time information further improves the compression ratio.
Sparse Non-negative Matrix Factorization Algorithm Based on Cosine Similarity
ZHOU Chang, LI Xiang-li, LI Qiao-lin, ZHU Dan-dan, CHEN Shi-lian, JIANG Li-rong
Computer Science. 2020, 47 (10): 108-113.  doi:10.11896/jsjkx.190700112
Abstract PDF(2015KB) ( 1196 )   
When basic non-negative matrix factorization is applied to image clustering, it is not robust to outliers and yields poor sparsity. To improve the sparsity of the factor matrices, the L2,1 norm is introduced into basic non-negative matrix factorization, improving the model's sparsity and overall performance. At the same time, to reduce the correlation between features and enhance their independence, cosine similarity is introduced, yielding a sparse non-negative matrix factorization algorithm based on cosine similarity. The algorithm has significant advantages in high-dimensional data processing and feature extraction, and improves discrimination accuracy in image clustering. Experimental results show that the proposed algorithm outperforms traditional non-negative matrix factorization algorithms on a series of evaluation indicators.
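The baseline being extended is basic NMF with Lee-Seung multiplicative updates (V ≈ WH with non-negative factors); the paper's L2,1 sparsity term and cosine-similarity decorrelation term are not reproduced in this pure-Python sketch for small matrices.

```python
import random

def nmf(V, k, iters=300, eps=1e-9, seed=1):
    """Basic NMF via Lee-Seung multiplicative updates, minimizing the
    Frobenius reconstruction error of V (n x m) by W (n x k) and H (k x m)."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() for _ in range(k)] for _ in range(n)]
    H = [[rng.random() for _ in range(m)] for _ in range(k)]

    def matmul(A, B):
        return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    def T(A):
        return [list(row) for row in zip(*A)]

    for _ in range(iters):
        WH = matmul(W, H)
        WtV, WtWH = matmul(T(W), V), matmul(T(W), WH)
        H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps) for j in range(m)]
             for i in range(k)]                 # H <- H * (W^T V) / (W^T W H)
        WH = matmul(W, H)
        VHt, WHHt = matmul(V, T(H)), matmul(WH, T(H))
        W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps) for j in range(k)]
             for i in range(n)]                 # W <- W * (V H^T) / (W H H^T)
    return W, H
```

Sparsity-regularized variants such as the paper's modify these update rules with extra penalty terms while keeping the same multiplicative structure.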
Neural Collaborative Filtering Based on Enhanced-attention Mechanism
KANG Yan, BU Rong-jing, LI Hao, YANG Bing, ZHANG Ya-chuan, CHEN Tie
Computer Science. 2020, 47 (10): 114-120.  doi:10.11896/jsjkx.190900038
Abstract PDF(2578KB) ( 1074 )   
Recommendation systems are central to solving information overload. Existing recommendation frameworks face many problems, such as sparse explicit feedback data and difficult data preprocessing; in particular, recommendation performance for new users and new items needs further improvement. With advances in deep learning, recommendation based on deep learning has become a research hotspot, and many experiments have demonstrated its effectiveness in recommendation systems. This paper presents EANCF (Neural Collaborative Filtering based on an Enhanced-attention Mechanism) on the basis of NCF. It studies the recommendation framework from the perspective of implicit feedback data and extracts data features through max-pooling, local inference modeling and several different ways of data fusion. Meanwhile, an attention mechanism is introduced to allocate weights within the network reasonably, reducing information loss and improving recommendation performance. Finally, on two large real datasets, Movielens-1m and Pinterest-20, EANCF is compared with NCF and several classical algorithms, and the training process of the EANCF framework is described in detail. The experimental results show that EANCF indeed achieves good recommendation performance: compared with NCF, both HR@10 and NDCG@10 are significantly improved, by up to 3.53% for HR@10 and 2.47% for NDCG@10.
Community Detection Algorithm Combing Community Embedding and Node Embedding
ZHAO Xia, LI Xian, ZHANG Ze-hua, ZHANG Chen-wei
Computer Science. 2020, 47 (10): 121-125.  doi:10.11896/jsjkx.191000099
Abstract PDF(1614KB) ( 896 )   
As an important property of social networks,community plays an important role in understanding network functions and predicting evolution.It is a research hotspot in recent years to transform network nodes into low-dimensional dense feature vectors through network embedding and apply them to machine learning tasks such as community detection.The traditional network embedding method only focuses on node embedding and ignores the importance of community embedding.Aiming at such a problem,CNE,a method combining Community embedding and improved Node Embedding,is proposed to obtain node representation combining structure information and attribute information.Node embedding represents nodes as low-dimensional vectors.Similarly,community embedding represents communities as Gaussian distributions in low-dimensional spaces.They combine multiple node similarities to promote more accurate community detection results.The experimental results show that,compared with the traditional community detection algorithm and network embedding method on public datasets,the proposed CNE method has higher precision.
K-medoids Cluster Mining and Parallel Optimization Based on Shuffled Frog Leaping Algorithm
WEI Lin-jing, NING Lu-lu, GUO Bin, HOU Zhen-xing, GAN Shi-run
Computer Science. 2020, 47 (10): 126-129.  doi:10.11896/jsjkx.190900113
Abstract PDF(1416KB) ( 746 )   
In order to reduce the error of the K-medoids clustering algorithm and improve parallel optimization performance, the shuffled frog leaping algorithm is applied to the clustering and parallel optimization process. In the clustering stage, K-medoids is combined with the shuffled frog leaping algorithm, which is run for each cluster, improving the efficiency of clustering large-scale data samples. When multiple execution nodes complete large-scale K-medoids clustering in parallel, the shuffled frog leaping algorithm effectively improves the speedup. Experiments show that K-medoids clustering based on the shuffled frog leaping algorithm has clear advantages over ordinary K-medoids clustering, and achieves better speedup performance when processing large-scale samples.
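For reference, the plain K-medoids baseline the paper accelerates looks like this: assign each point to its nearest medoid, then move each medoid to the cluster member minimizing total in-cluster distance. The exhaustive medoid search below is exactly the step the shuffled frog leaping algorithm is meant to speed up; the deterministic initialization is an illustrative simplification.

```python
import math

def k_medoids(points, k, iters=20):
    """Plain K-medoids on 2-D points with a simple deterministic init
    (first k points as medoids). Returns the final medoids."""
    medoids = list(points[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda i: math.dist(p, medoids[i]))
            clusters[i].append(p)
        # exhaustive medoid update: the costly step a metaheuristic replaces
        new = [min(c, key=lambda m: sum(math.dist(m, q) for q in c)) if c
               else medoids[i] for i, c in enumerate(clusters)]
        if new == medoids:
            break
        medoids = new
    return medoids
```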
Access Pattern-oriented Cache Replacement Strategy for Hybrid Memory Architecture
LIU Wei, SUN Tong-xin, DU Wei
Computer Science. 2020, 47 (10): 130-135.  doi:10.11896/jsjkx.190800115
Abstract PDF(1691KB) ( 1062 )   
With increasing demands on memory capacity and energy consumption, current DRAM-based memory systems face scalability challenges in storage density and power. Hybrid memory architecture, which combines emerging Non-Volatile Memory (NVM) with DRAM to build large-capacity, energy-efficient main memory, has received extensive attention. The cache plays an important role and strongly affects the numbers of writes and reads to NVM and DRAM blocks. However, existing LRU-based cache policies fail to fully address the significant asymmetry between NVM and DRAM operations under different types of workloads, and cache thrashing and scan problems can still seriously affect system performance. By analyzing the characteristics of different workload types and the competition between DRAM and NVM data under different access patterns, this paper proposes a dynamically adjusted level cache replacement strategy (DLRP). Experimental results show that the proposed strategy improves performance by 16.5% on average over a state-of-the-art cache policy (WBAR), and reduces energy consumption and NVM writes by 5.1% and 5.2%, respectively, compared with WBAR.
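The LRU baseline that such policies improve on can be sketched in a few lines: on a hit, a block moves to the most-recently-used end; on a miss with a full cache, the least-recently-used block is evicted. DLRP's workload-aware, NVM/DRAM-asymmetry-aware adjustments are not modeled here.

```python
from collections import OrderedDict

class LRUCache:
    """Plain LRU replacement. The OrderedDict keeps blocks in recency
    order: front = least recently used, back = most recently used."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def access(self, block):
        """Touch a block; returns True on a cache hit."""
        hit = block in self.store
        if hit:
            self.store.move_to_end(block)          # refresh recency
        else:
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)     # evict the LRU victim
            self.store[block] = True
        return hit
```

Note how a one-pass scan of cold blocks evicts the entire working set under LRU; that is the scan problem the abstract refers to.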
Computer Graphics & Multimedia
Survey of Data Association Technology in Multi-target Tracking
GONG Xuan, LE Zi-chun, WANG Hui, WU Yu-kun
Computer Science. 2020, 47 (10): 136-144.  doi:10.11896/jsjkx.200200041
Target tracking has always been one of the hot topics in the field of computer vision. As a fundamental problem of computer vision, it is applied in various fields, including intelligent monitoring, intelligent human-computer interaction, driverless driving and the military. From the perspective of the number of tracked objects, target tracking can be divided into single-target tracking and multi-target tracking. Single-target tracking is relatively simple: besides the problems it shares with multi-target tracking (e.g., occlusion and deformation), it does not need to consider data association between targets. In a multi-target tracking system, the scenes are more complex and the number and categories of targets are often uncertain, so data association is particularly important. Data association is an important stage in the process of multi-target tracking; many scholars at home and abroad even regard multi-target tracking as a data association problem, and seek multi-target tracking methods from the perspective of data association. In this paper, the data association technology of multi-target tracking is reviewed and introduced systematically. Firstly, this paper gives an overview of target tracking, especially multi-target tracking, and describes the status of data association research. Secondly, the concept of data association and the problems to be solved are described in detail. Then, all kinds of data association techniques are analyzed and summarized, including the traditional NNDA algorithm, the JPDA algorithm, data association based on the Tracking-By-Detection framework, and data association based on MTMCT (Multi-Target Multi-Camera Tracking). Finally, future research directions of data association technology for multi-target tracking are prospected.
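The traditional NNDA algorithm mentioned above can be sketched as a greedy, gated nearest-neighbour assignment (an illustrative Python sketch under our own simplifying assumptions, not any specific paper's implementation):

```python
import numpy as np

def nearest_neighbor_association(tracks, detections, gate=5.0):
    """Greedy nearest-neighbour data association (NNDA) sketch.

    tracks, detections: (m, 2) and (n, 2) position arrays.
    Returns a list of (track_i, detection_j) pairs; a detection is used
    at most once, and pairs beyond the gating distance are rejected.
    """
    cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)
    pairs = []
    # Repeatedly take the globally cheapest remaining track/detection pair.
    for _ in range(min(len(tracks), len(detections))):
        i, j = np.unravel_index(cost.argmin(), cost.shape)
        if cost[i, j] > gate:
            break
        pairs.append((int(i), int(j)))
        cost[i, :] = np.inf   # track i is matched
        cost[:, j] = np.inf   # detection j is consumed
    return pairs
```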
3D Object Detection Algorithm Based on Two-stage Network
SHEN Qi, CHEN Yi-lun, LIU Shu, LIU Li-gang
Computer Science. 2020, 47 (10): 145-150.  doi:10.11896/jsjkx.190900172
This paper proposes a 3D object detection algorithm, named VoxelRCNN, based on LIDAR point clouds. The algorithm builds on the VoxelNet 3D object detection network, and applies the idea of the RCNN algorithm from 2D object detection to 3D object detection. The VoxelRCNN algorithm is composed of two stages. Stage-1 extracts candidate region boxes with a region proposal network, and stage-2 refines the detection boxes extracted in stage-1 to obtain more accurate detection results. The stage-1 network voxelizes the point cloud of the whole scene, extracts the features of each voxel block as the input of a convolutional neural network, and obtains the final feature map through convolution; the bounding box information is then learned by regression from the feature map. In stage-2, based on the candidate region information and feature information extracted in stage-1, equivalent feature information is obtained by pooling, and the bounding box information is regressed again. Experimental results on the KITTI dataset show that the proposed network structure performs well.
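The voxelization step of stage-1 can be sketched as follows (a minimal Python sketch assuming axis-aligned cubic voxels; the per-voxel feature encoding and the convolutional stages are omitted):

```python
import numpy as np

def voxelize(points, voxel_size=1.0):
    """Group a point cloud into voxel cells, as in stage-1 of a
    VoxelNet-style pipeline (simplified sketch).

    points: (n, 3) array. Returns {voxel index tuple: list of point rows}.
    """
    idx = np.floor(points / voxel_size).astype(int)
    voxels = {}
    for row, key in enumerate(map(tuple, idx)):
        voxels.setdefault(key, []).append(row)
    return voxels
```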
Remote Sensing Image Processing Technology and Its Application Based on Mask R-CNN Algorithms
LING Chen, ZHANG Xin-tong, MA Lei
Computer Science. 2020, 47 (10): 151-160.  doi:10.11896/jsjkx.190900119
With the development of remote sensing, remote sensing images are used in many fields, such as agriculture and the military. At the same time, deep learning is now widely applied in computer vision and image processing, and has been successful in object detection, classification and semantic segmentation. Unlike ship detection in natural scenes, fighting ships in remote sensing images are viewed from overhead, densely packed, and easy to confuse with ports. Most existing work on fighting ship detection outputs only a bounding box and lacks a mask of the ship, which makes it difficult to analyze the weaknesses of the model. Meanwhile, because fighting ships berth tightly in remote sensing images, missed detections occur easily. To solve these problems, this paper uses Mask R-CNN to detect fighting ships, analyzing the training situation and the results of both mask and bounding box. By learning the edges of objects and adjusting parameters, the model is made more suitable for fighting ships. Experiments show that appropriate parameters can effectively reduce the false positives and false negatives caused by the compact berthing of fighting ships.
No-reference Color Noise Images Quality Assessment Without Learning
YANG Yun-shuo, SANG Qing-bing
Computer Science. 2020, 47 (10): 161-168.  doi:10.11896/jsjkx.190900051
Noise is one of the most common and varied types of distortion, but there are few studies on noise types other than Gaussian noise. This paper proposes a no-reference color noise image quality assessment method that can evaluate five kinds of noise without learning. The method is based on quaternion singular value decomposition, and uses the relationship between the area enclosed by the reciprocal singular value curve of the image and the degree of image distortion to derive a quality index. The method requires very little prior knowledge of the image or distortion, and no training process. Experimental results on four simulated databases show that the proposed algorithm delivers quality predictions that correlate highly with human subjective judgments, and achieves better performance than relevant state-of-the-art full-reference and no-reference quality metrics.
End-to-End Speaker Recognition Based on Frame-level Features
HUA Ming, LI Dong-dong, WANG Zhe, GAO Da-qi
Computer Science. 2020, 47 (10): 169-173.  doi:10.11896/jsjkx.190800054
There are still many shortcomings in existing speaker recognition methods. End-to-end methods based on utterance-level features require the inputs to be processed to the same size because speech lengths are inconsistent, while the two-stage method of feature training followed by posterior classification makes the recognition system overly complex. These factors affect the performance of the model. This paper proposes an end-to-end speaker recognition method based on frame-level features. The model uses frame-level speech as input; frame-level features of the same size effectively solve the problem of inconsistent utterance-level input lengths, and frame-level features retain more speaker information. Compared with the mainstream two-stage identification systems, the end-to-end identification method integrates feature training and classification, which simplifies the model. During the training phase, each utterance is segmented into multiple frame-level inputs to a Convolutional Neural Network (CNN) for training. In the evaluation phase, the trained CNN classifies the frame-level speech, and the predicted category of each utterance is computed from the prediction scores of its frames: either the most frequent per-frame predicted category or the average of the per-frame prediction scores is adopted. To verify the validity of this work, speech data from the Mandarin Emotional Speech Corpus (MASC) were used for training and testing. The experimental results show that the end-to-end recognition method based on frame-level features achieves better performance than existing methods.
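The two frame-score aggregation rules described for the evaluation phase can be sketched as follows (an illustrative Python sketch; the CNN itself is omitted and the function name is ours):

```python
import numpy as np

def utterance_prediction(frame_scores, mode="mean"):
    """Aggregate per-frame classifier scores into one utterance label.

    frame_scores: (n_frames, n_classes) prediction scores.
    mode="mean" averages frame scores before taking the argmax;
    mode="vote" takes the most frequent per-frame argmax,
    mirroring the two rules described in the abstract.
    """
    if mode == "mean":
        return int(frame_scores.mean(axis=0).argmax())
    votes = frame_scores.argmax(axis=1)
    return int(np.bincount(votes).argmax())
```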
Method for Traffic Video Background Modeling Based on Inter-frame Difference and Statistical Histogram
WANG Qia, QI Yong
Computer Science. 2020, 47 (10): 174-179.  doi:10.11896/jsjkx.190800014
Aiming at the problem of inaccurate foreground object detection caused by the difficulty of extracting the traffic background directly from urban road traffic video, a traffic video background modeling method based on the combination of inter-frame difference and statistical histogram is proposed. A good background modeling method is conducive to subsequent object detection and tracking tasks. Firstly, the inter-frame difference method is used to extract the approximate motion region of each frame in the video as the foreground moving object. Then, a statistical histogram is used to obtain the gray value distribution of the image and estimate the background image, so that a clean background image with few noise points is extracted. Experimental results show that, compared with existing background modeling methods, the proposed method can extract a background image that matches the real background more closely, both in ordinary traffic scenes and in the typical traffic scene where vehicles move slowly.
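The statistical-histogram stage can be sketched as a per-pixel modal grey value (an illustrative Python sketch; the paper additionally masks motion regions via inter-frame differencing, which is omitted here):

```python
import numpy as np

def background_from_histogram(frames):
    """Estimate a static background as the per-pixel modal grey value.

    frames: (t, h, w) uint8 stack. For every pixel, the grey level that
    occurs most often across time is taken as background, so transient
    foreground objects are suppressed.
    """
    t, h, w = frames.shape
    flat = frames.reshape(t, -1)
    bg = np.empty(h * w, dtype=np.uint8)
    for p in range(h * w):
        # Histogram over 256 grey levels; pick the most frequent one.
        bg[p] = np.bincount(flat[:, p], minlength=256).argmax()
    return bg.reshape(h, w)
```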
Cross-modality Person Re-identification Framework Based on Improved Hard Triplet Loss
LI Hao, TANG Min, LIN Jian-wu, ZHAO Yun-bo
Computer Science. 2020, 47 (10): 180-186.  doi:10.11896/jsjkx.191100061
In order to improve the recognition accuracy of cross-modality person re-identification, a feature learning framework based on an improved hard triplet loss is proposed. Firstly, the traditional hard triplet loss is converted into a global one. Secondly, intra-modality and cross-modality triplet losses are designed to complement the global one during model training, based on the intra-modality and cross-modality variations. On top of the improved hard triplet loss, attribute features are designed, for the first time in a cross-modality person re-identification model, to increase the model's ability to extract features. Finally, for the class imbalance problem, Focal Loss is used to replace the traditional cross entropy loss for model training. Compared with existing algorithms, the proposed approach performs best on the publicly available RegDB dataset, with an increase of 1.9%~6.4% in all evaluation indicators. In addition, ablation experiments also show that all three methods improve the feature extraction ability of the model.
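The hard triplet loss that the framework builds on can be sketched in its usual batch-hard form (an illustrative numpy sketch without autograd; the paper's global and cross-modality variants differ in how triplets are mined):

```python
import numpy as np

def hard_triplet_loss(feats, labels, margin=0.3):
    """Batch-hard triplet loss sketch.

    For each anchor, take its hardest (farthest) positive and hardest
    (closest) negative in the batch and apply a hinge with `margin`.
    """
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=2)
    same = labels[:, None] == labels[None, :]
    loss = 0.0
    for a in range(len(feats)):
        pos = d[a][same[a] & (np.arange(len(feats)) != a)]
        neg = d[a][~same[a]]
        loss += max(0.0, pos.max() - neg.min() + margin)
    return loss / len(feats)
```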
Digital Instrument Identification Method Based on Deformable Convolutional Neural Network
GUO Lan-ying, HAN Rui-zhi, CHENG Xin
Computer Science. 2020, 47 (10): 187-193.  doi:10.11896/jsjkx.191000035
At present, traditional image processing methods and machine learning methods are adopted for the identification of digital display instruments; they suffer from low recognition accuracy for characters and numbers in complicated scenarios, and have difficulty meeting real-time application requirements. Aiming at these problems, and combining traditional image processing technology with deep learning methods, a method for segmentation and recognition of digital display instruments based on a deformable convolutional neural network is proposed. The method includes image preprocessing, character segmentation and image recognition. Firstly, the GrayWorld algorithm is applied to equalize the brightness of the image to be recognized, so that color segmentation can extract the screen area. Secondly, after morphological operations on the image, the projection histogram method is used to segment characters together with their corresponding decimal points. Finally, a deformable convolutional neural network, which relaxes the inherent geometric restriction of the receptive field in convolutional neural networks, is proposed and trained for character recognition. The experimental results indicate that the addition of deformable convolution effectively improves the accuracy of image recognition and the convergence speed of the network; the overall recognition method reaches an accuracy of 99.45% at a detection speed of 10 FPS, which can meet the requirements of practical applications.
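The projection-histogram character segmentation can be sketched as follows (an illustrative Python sketch on a binarised image; the zero-column separation criterion and `min_width` are our own simplifying assumptions):

```python
import numpy as np

def segment_characters(binary_img, min_width=1):
    """Split a binarised screen image into character column ranges
    using a vertical projection histogram.

    binary_img: (h, w) array with 1 for ink pixels. Columns whose sum
    is zero separate characters; contiguous non-empty column runs of
    at least `min_width` are returned as (start, end) ranges.
    """
    profile = binary_img.sum(axis=0)
    segments, start = [], None
    for x, v in enumerate(profile):
        if v > 0 and start is None:
            start = x                      # run of ink columns begins
        elif v == 0 and start is not None:
            if x - start >= min_width:
                segments.append((start, x))
            start = None
    if start is not None and len(profile) - start >= min_width:
        segments.append((start, len(profile)))
    return segments
```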
Brain CT and MRI Image Fusion Based on Morphological Image Enhancement and PCNN
LI Chang-xing, LEI Liu, ZHANG Xiao-lu
Computer Science. 2020, 47 (10): 194-199.  doi:10.11896/jsjkx.190700185
To compensate for the problems that arise during brain CT and MRI image fusion, such as the false Gibbs phenomenon, lack of detailed information and ringing, a fusion method for brain CT and MRI images based on morphological image enhancement and the PCNN (Pulse Coupled Neural Network) is proposed. Firstly, the source image is enhanced by morphological opening and closing operations. The enhanced image is fed into the PCNN fusion model as the input stimulus of the PCNN receiving domain, to determine the final weight map of the model output. Finally, a clear and easily processed image is formed. Experimental results show that the proposed method is superior to other methods in maintaining edge clearness, preserving effective information and balancing redundancy. Compared with the unenhanced PCNN method, the average gradient and spatial frequency of the image after morphological enhancement and PCNN fusion increase by 24.59% and 42.56% respectively. Compared with Laplacian-based image fusion, the standard deviation increases by 16.67%.
Lung Cancer Subtype Recognition with Unsupervised Learning Combining Paired Learning and Image Clustering
REN Xue-ting, ZHAO Juan-juan, QIANG Yan, Saad Abdul RAUF, LIU Ji-hua
Computer Science. 2020, 47 (10): 200-206.  doi:10.11896/jsjkx.190900073
In recent years, gene diagnosis has been one of the new and effective methods to improve the cure rate of lung cancer, but it is time-consuming and costly, and its invasive sampling causes serious damage. In this paper, an unsupervised learning method for lung cancer subtype recognition combining paired learning and image clustering is proposed. Firstly, an unsupervised convolutional feature fusion network is used to learn deep representations of lung cancer CT images and effectively capture important feature information that would otherwise be ignored; the final fused features, containing different levels of abstract information, are used to represent lung cancer subtypes. Then, a classification learning framework combining paired learning and image clustering is used for modeling, and the learned feature representation is fully utilized to ensure effective clustering and achieve higher classification accuracy. Finally, survival analysis and gene analysis are used to verify the lung cancer subtypes from multiple perspectives. Experiments on datasets from a cooperative hospital and TCGA-LUAD show that, through reliable, non-invasive image analysis and radiological imaging technology, three subtypes of lung cancer with different molecular characteristics are found by this method. It can effectively assist doctors in accurate diagnosis and personalized treatment while reducing the problems of gene detection, so as to improve the survival rate of lung cancer patients.
Artificial Intelligence
Open Domain Event Vector Algorithm Based on Zipf's Co-occurrence Matrix Factorization
GAO Li-zheng, ZHOU Gang, HUANG Yong-zhong, LUO Jun-yong, WANG Shu-wei
Computer Science. 2020, 47 (10): 207-214.  doi:10.11896/jsjkx.191200183
Event extraction is one of the hot topics in natural language processing (NLP). Existing event extraction models are mostly trained on small-scale corpora and cannot be applied to open domain event extraction. To alleviate the difficulty of event representation in large-scale open domain event extraction, we propose an event embedding method based on Zipf's co-occurrence matrix factorization. We first extract event tuples from large-scale open domain corpora and then perform tuple abstraction, pruning and disambiguation. We use a Zipf's co-occurrence matrix to represent the context distribution of events. The co-occurrence matrix is then factorized by principal component analysis (PCA) to generate event vectors. Finally, we construct an autoencoder to transform the vectors nonlinearly. We test the generated vectors on nearest neighbor and event identification tasks. The experimental results show that our method captures event similarity and relatedness information globally, and avoids the semantic deviation caused by an overly fine encoding granularity.
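The PCA factorization step can be sketched as follows (an illustrative Python sketch; the Zipf weighting of the co-occurrence counts and the autoencoder stage are not reproduced, and the function name is ours):

```python
import numpy as np

def pca_event_vectors(cooc, dim=2):
    """Factorise an event co-occurrence matrix into low-dimensional
    event vectors with PCA.

    cooc: (n_events, n_contexts) matrix. Returns (n_events, dim) vectors.
    """
    centered = cooc - cooc.mean(axis=0)          # remove per-context mean
    u, s, _ = np.linalg.svd(centered, full_matrices=False)
    return u[:, :dim] * s[:dim]                  # principal-component scores
```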
Study on Transportation Problem Using Monte Carlo Similarity Based Genetic Algorithm
LI Yuan-feng, LI Zhang-wei, QIN Zi-hao, HU Jun, ZHANG Gui-jun
Computer Science. 2020, 47 (10): 215-221.  doi:10.11896/jsjkx.190600101
Aiming at the balanced transportation problem, this paper proposes a Monte Carlo similarity based genetic algorithm. Firstly, the matrix elements are used to initialize the population, which increases population diversity. Secondly, a dynamic mutation rate operator and a random mutation strategy are designed to enhance the search ability of the algorithm and accelerate convergence. Finally, Monte Carlo similarity is adopted to avoid falling into local optima. The effectiveness of the algorithm is verified by comparing the convergence rate, optimal solution deviation rate and relative standard deviation with the basic genetic algorithm (GA) and the improved genetic algorithm (IGA). Based on geographic data of Hangzhou, a transportation and distribution system on the ArcGIS platform is designed and developed to solve the balanced transportation problem. The test results show the effectiveness of the proposed algorithm.
Comment Sentiment Classification Using Cross-attention Mechanism and News Content
WANG Qi-fa, WANG Zhong-qing, LI Shou-shan, ZHOU Guo-dong
Computer Science. 2020, 47 (10): 222-227.  doi:10.11896/jsjkx.190900173
At present, news comments have become important news-derived data. News comments express commentators' views, positions and personal feelings on news events, and analyzing their sentiment orientation helps in understanding social public opinion and its trend. Therefore, sentiment research on news comments is favored by many scholars. Conventional news comment sentiment analysis only considers the information of the comment text itself; however, news comment text is often closely related to the news content. Based on this, this paper proposes a comment sentiment classification method using a cross-attention mechanism combined with news content. Firstly, bi-directional long short-term memory networks are used to represent the news content and the comment text respectively. Then, the cross-attention mechanism is used to further capture important information and obtain updated vector representations of the news content text and the comment text. The semantic representation obtained by concatenating them is input into a fully connected layer, and a sigmoid activation function is used for classification prediction, so as to realize sentiment classification of news comments. The results show that the proposed model effectively improves the accuracy of news comment sentiment classification, improving F1 by 1.72%, 3.24% and 6.21% respectively compared with three benchmark models.
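The cross-attention step can be sketched as plain scaled dot-product attention (an illustrative numpy sketch; the BiLSTM encoders and any learned projection matrices are omitted):

```python
import numpy as np

def cross_attention(query_seq, key_seq):
    """Scaled dot-product cross-attention sketch: comment tokens
    (queries) attend over news-content tokens (keys, also used as
    values), yielding an updated comment representation.
    """
    d = query_seq.shape[-1]
    scores = query_seq @ key_seq.T / np.sqrt(d)
    # Numerically stable row-wise softmax over the key positions.
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ key_seq
```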
Loop Closure Detection Method Based on Unsupervised Deep Learning
WANG Dan, SHI Chao-xia, WANG Yan-qing
Computer Science. 2020, 47 (10): 228-232.  doi:10.11896/jsjkx.190900034
Loop closure detection is one of the most critical parts of simultaneous localization and mapping (SLAM) systems. It can reduce the accumulated error in a SLAM system, and if tracking is lost during localization and mapping, loop closure detection can also be used for relocation. Image features learned by neural networks have better environmental invariance and semantic recognition capabilities than traditional hand-crafted features. Considering that landmark-based convolutional features can overcome the sensitivity of whole-image features to viewpoint changes, this paper proposes a new loop closure detection algorithm. Firstly, it identifies salient regions of the image directly through the convolutional layers of a convolutional neural network to generate landmarks. Then, it extracts ConvNet features from the landmarks to generate the final image representations. To verify the effectiveness of the algorithm, comparative experiments were performed on several typical datasets. The results show that the proposed algorithm has superior performance and is highly robust even under drastic viewpoint and appearance changes.
Path Optimization in CNC Cutting Machine Based on Modified Variable Neighborhood Search
LIAO Yi-hui, YANG En-jun, LIU An-dong, YU Li
Computer Science. 2020, 47 (10): 233-239.  doi:10.11896/jsjkx.190800035
To solve the non-cutting path optimization problem for multi-contour segments, a modified variable neighborhood search (MVNS) based metaheuristic algorithm is proposed for computer numerical control (CNC) processing systems. Firstly, the optimization problem is transformed into a generalized traveling salesman problem (GTSP). Secondly, for the sequential ordering in GTSP, the local search and shaking procedures of traditional variable neighborhood search are modified: a 2-opt with insertion operator neighborhood structure and an incremental calculation method are proposed for local search, which improve solution quality and search efficiency; combining ideas from genetic algorithms, operators such as partition and recombination are designed for the shaking procedure, which avoids premature convergence to local optima. Furthermore, a Tabu search with dynamic programming (TS-DP) algorithm is used to eliminate duplicate cutting sequences and determine the starting point of each segment. Finally, through application examples and comparative experiments, the effectiveness of the proposed algorithm is tested in terms of solution accuracy and running time. In the test on cloth segments, the proposed algorithm improves the results of garment CAD by more than 51%, with an average running time of 9.3 s. In the TSP test, the proposed algorithm reaches or exceeds the accuracy of the comparison algorithms in most instances. In the GTSP test, although the proposed algorithm reaches or exceeds the accuracy of the comparison algorithm only in some instances, the difference in average error is within 1%, and its average running time is 73.7% shorter than that of the comparison algorithm. Thus, the test results demonstrate that the proposed algorithm balances solution accuracy and running time, and has practical application value.
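The 2-opt move used in the modified local search can be sketched as follows (an illustrative Python sketch of plain 2-opt on a closed tour; the insertion operator and incremental length updates of the paper are not reproduced):

```python
def two_opt(tour, dist):
    """One pass of 2-opt local search: reverse the tour segment between
    positions i and j whenever doing so shortens the closed route.

    tour: list of city indices; dist: square distance matrix (list of lists).
    """
    n = len(tour)
    best = tour[:]
    for i in range(1, n - 1):
        for j in range(i + 1, n):
            a, b = best[i - 1], best[i]
            c, d = best[j], best[(j + 1) % n]
            # Delta of swapping edges (a,b) and (c,d) for (a,c) and (b,d).
            if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d] - 1e-12:
                best[i:j + 1] = reversed(best[i:j + 1])
    return best
```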
Computer Network
Task Migration Node Selection with Reliable Service Quality in Edge Computing Environment
WANG Yan, HAN Xiao, ZENG Hui, LIU Jing-xin, XIA Chang-qing
Computer Science. 2020, 47 (10): 240-246.  doi:10.11896/jsjkx.190900054
With the rapid development and wide application of the Internet of Things, big data and 5G networks, the traditional cloud computing mode is no longer able to efficiently handle the massive computing tasks generated by network edge devices, so edge computing came into being. Computing tasks in edge computing environments are migrated to computing devices close to the data sources for execution, providing new solutions for expanding terminal node resources and alleviating cloud center load. Existing task migration decisions are made on the premise that the task migration node is already determined, without considering the situation where multiple task migration nodes are available. The selection of the task migration node in edge computing directly affects the service quality of task migration, so this paper constructs a service quality trust model to evaluate task migration nodes along three dimensions: time trust, behavior trust and resource trust. To avoid the low selection efficiency caused by a large number of task migration nodes, a skyline query algorithm based on cluster coding is adopted to screen the candidate nodes, and grey relational analysis is used for the final selection. The experimental results show that the proposed task migration node selection strategy based on reliable service quality can increase the success rate of task migration by 36% and the throughput of completed tasks by 18% on average.
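The grey relational analysis used for the final node selection can be sketched as follows (an illustrative Python sketch with the conventional distinguishing coefficient of 0.5; the three trust dimensions and the skyline screening are assumed to have been applied already, and all names are ours):

```python
import numpy as np

def grey_relational_grades(candidates, reference):
    """Grey relational analysis sketch for ranking migration nodes.

    candidates: (n, k) score matrix (rows = nodes, columns = normalised
    quality dimensions); reference: (k,) ideal vector.
    Returns one grade per node; higher means closer to the ideal.
    """
    diff = np.abs(candidates - reference)
    dmin, dmax = diff.min(), diff.max()
    rho = 0.5                                    # distinguishing coefficient
    coeff = (dmin + rho * dmax) / (diff + rho * dmax)
    return coeff.mean(axis=1)                    # grade = mean coefficient
```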
Computation Offloading Scheduling Technology for DNN Applications in Edge Environment
HU Jun-qin, ZHANG Jia-jun, HUANG Yin-hao, CHEN Xing, LIN Bing
Computer Science. 2020, 47 (10): 247-255.  doi:10.11896/jsjkx.190900106
Deep neural network (DNN) applications demand high performance from the equipment running them, and cannot run directly on mobile devices with limited computing resources. Offloading some computationally complex neural network layers to resource-rich edges or remote clouds for execution via computation offloading is an effective solution. However, computation offloading incurs additional time overhead, and if the offloading process lasts too long, the user experience is seriously affected. Therefore, to obtain the minimum average response time for multi-task parallel scheduling in an edge environment, this paper first formulates the computation offloading scheduling problem for DNN applications in the edge environment and designs an evaluation algorithm for candidate solutions. Then two scheduling algorithms, a greedy algorithm and a genetic algorithm, are designed to solve the problem. Finally, an evaluation experiment compares the performance of the two algorithms in five different edge environments. The experimental data show that the solutions obtained by the proposed algorithms are very close to the optimal solution. Compared with traditional offloading schemes, the greedy algorithm obtains a scheduling scheme with shorter average response time; the genetic algorithm yields an even shorter average response time than the greedy algorithm, but its running time is significantly longer. The experimental results show that the two proposed scheduling algorithms can effectively reduce the average response time of computation offloading scheduling for DNN applications in the edge environment and improve user experience.
Inference Task Offloading Strategy Based on Differential Evolution
WANG Xuan, MAO Ying-chi, XIE Zai-peng, HUANG Qian
Computer Science. 2020, 47 (10): 256-262.  doi:10.11896/jsjkx.190800159
As an important deep learning technique, the Convolutional Neural Network (CNN) has been widely used in intelligent applications. Because CNN inference tasks demand large memory and heavy computation, most existing solutions offload tasks to the cloud for execution, which hardly suits delay-sensitive mobile applications. To solve this problem, this paper proposes a CNN inference task offloading strategy based on an improved differential evolution algorithm, which can efficiently deploy computing tasks between cloud and edge devices in an end-cloud collaboration mode. The strategy seeks the task offloading scheme that minimizes delay under a cost constraint: it transforms the CNN inference process into a task graph, formulates it as a 0-1 integer programming problem, and finally uses the improved binary differential evolution algorithm to solve the problem and obtain the optimal offloading policy. The experimental results show that, compared with mobile inference and cloud inference schemes, the proposed strategy reduces the task response time by 33.60% and 6.06% respectively on average under cost constraints.
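A binary differential evolution step can be sketched with a common sigmoid binarisation scheme (an illustrative Python sketch; the paper's improved variant differs in details we do not reproduce, and all names and parameter values are ours):

```python
import numpy as np

def binary_de_trial(pop, i, F=0.5, CR=0.9, rng=None):
    """One trial-vector construction of binary differential evolution.

    A standard DE/rand/1 mutant is built in real space, squashed through
    a sigmoid, and sampled back to {0,1}; binomial crossover then mixes
    it with the target vector.

    pop: (n, d) 0/1 matrix of offloading decisions; i: target row index.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    r1, r2, r3 = rng.choice([k for k in range(len(pop)) if k != i], 3, replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])           # DE/rand/1
    prob = 1.0 / (1.0 + np.exp(-mutant))                 # sigmoid to [0,1]
    binary = (rng.random(pop.shape[1]) < prob).astype(int)
    cross = rng.random(pop.shape[1]) < CR                # binomial crossover
    return np.where(cross, binary, pop[i])
```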
Simulation and Analysis on Improved NC-OFDM Algorithm
ZHOU Hui-ting, ZHOU Jie
Computer Science. 2020, 47 (10): 263-268.  doi:10.11896/jsjkx.190800043
Orthogonal Frequency Division Multiplexing (OFDM) is among the most promising modulation technologies to date and has been adopted by most wireless and wired communication standards. N-continuous OFDM combines OFDM with cognitive radio technology, but its sidelobe suppression has remained a problem to be solved. In order to reduce the complexity of the algorithm and the receiver without affecting sidelobe suppression performance, a symbol-filled OFDM time-domain algorithm is proposed: it inserts N-continuous OFDM correction symbols only into the guard interval and seamlessly connects each OFDM symbol, thus suppressing sidelobes. Simulations show that the proposed algorithm does not degrade sidelobe suppression performance, is easy to implement, and has significantly lower complexity than the traditional N-continuous OFDM system; in different channels with K=72, the signal BER performance is improved by 5 dB. The Simulink toolbox of MATLAB is used to simulate N-continuous OFDM systems for FPGA implementation. FPGAs offer great flexibility, higher computing speed and smaller area in digital signal processing, together with low cost, low risk and time advantages.
Millimeter-wave Beamforming Scheme Based on Location Fairness Guarantee for HSR Communications
JIANG Rui, YIN Hui, XU You-yun
Computer Science. 2020, 47 (10): 269-274.  doi:10.11896/jsjkx.190800029
In this paper, a millimeter-wave beamforming scheme based on location fairness guarantee is proposed to improve the stability and reliability of high-speed railway (HSR) communications. In the proposed scheme, an interleaved redundant coverage architecture is adopted to enhance the reliability of information transmission. Furthermore, serving beams with different beam-widths are formed by the base station to transmit signals. The beam-width is determined by the distance from the base station to the mobile relay mounted on top of the train, which keeps the data transmission rate stable while the train is running. Additionally, an adaptive searching algorithm is developed to calculate the optimal transmitting beam-width and beam boundary points. Theoretical analysis and simulation results suggest that the proposed scheme not only improves the stability of HSR networks, but also achieves a lower communication outage probability.
Brittleness Control Model and Strategy for Networked Operational Equipment System
LI Hui, ZHOU Liang-ping, YANG Jun, ZHAO Shu-ping
Computer Science. 2020, 47 (10): 275-281.  doi:10.11896/jsjkx.190800087
Networked operational equipment system is a typical complex system with multiple elements,close correlation and dynamic evolution,and brittleness is an inherent property that directly affects its safety and operational stability.Aiming at the networked operational equipment system's characteristics of multiple constitution and complex correlation,firstly,the concepts of equipment nodes,correlation relationships and networked operational equipment system are defined,and the structure of networked operational equipment system is abstracted.The brittleness transmission mechanism is analyzed and a brittleness control causal loop diagram is designed.Then,a differential dynamic model of brittleness control is built.Secondly,an immune control strategy,an isolation control strategy and an integrated control strategy are put forward separately,and a measurement method of the brittleness control effect is given.Finally,taking a networked air defense operational equipment system as an example,the dynamic effects of the brittleness risk control threshold,the pulse control coverage number and the composite control strategy parameters on the overall brittleness risk degree are simulated and analyzed.According to the simulation results,when the brittleness risk control threshold is raised by 25%,the high brittleness risk durations of the immune,isolation and integrated control strategies are reduced by 53.2%,44.9% and 42.2% respectively,and the brittleness risks are improved by 24.5%,1.5% and 20.4%.When the pulse control coverage number is doubled,there is no significant difference in the duration of high brittleness risk,and the brittleness risks are reduced by 9.3%,1.5% and 10%.When the ratio of the integrated control strategy parameters is increased about 1.3 times,the duration of high brittleness risk and the brittleness risk are reduced by 5.9% and 8.3% respectively.The research results verify the feasibility and effectiveness of the model and strategies,and provide a new idea and method for exploring the brittleness control process and law of networked combat equipment systems.
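The abstract does not give the paper's differential model; as an illustrative sketch only, the interplay of brittleness transmission with immune and isolation control can be mimicked by a discrete-time spread process on an equipment graph (the parameters `beta`, `recover` and the node sets below are assumptions for illustration, not the authors' model):

```python
import random

def simulate(adj, risk, steps=50, beta=0.3, recover=0.1,
             immune=(), isolated=()):
    """Toy brittleness-transmission simulation on an equipment network:
    each step, a brittle node excites each neighbor with probability
    beta and recovers with probability recover.  Immune nodes never
    become brittle; isolated nodes lose their correlation links."""
    risk = set(risk) - set(immune)
    rng = random.Random(0)                  # fixed seed: repeatable runs
    for _ in range(steps):
        nxt = set()
        for u in risk:
            if rng.random() >= recover:
                nxt.add(u)                  # node stays brittle
            if u in isolated:
                continue                    # isolation cuts outgoing links
            for v in adj.get(u, ()):
                if v in isolated or v in immune:
                    continue
                if rng.random() < beta:
                    nxt.add(v)              # brittleness transmits
        risk = nxt
    return risk
```

Comparing the final `risk` set under different `immune`/`isolated` choices gives a crude analogue of measuring a control strategy's effect.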
Information Security
Survey of New Blockchain Techniques:DAG Based Blockchain and Sharding Based Blockchain
ZHANG Chang-gui, ZHANG Yan-feng, LI Xiao-hua, NIE Tie-zheng, YU Ge
Computer Science. 2020, 47 (10): 282-289.  doi:10.11896/jsjkx.191000057
Abstract PDF(1948KB) ( 4038 )   
References | Related Articles | Metrics
Blockchain is an innovative distributed ledger technology with wide application prospects in many important fields such as finance,credit reporting and auditing.However,existing bitcoin-style distributed ledger systems have already encountered bottlenecks in terms of scalability,throughput and transaction confirmation latency.To address these problems,researchers have proposed two new blockchain techniques.One is based on the Directed Acyclic Graph (DAG) structure,and the other is based on sharding.They employ new data structures and storage structures to overcome the native limitations and achieve better scalability and higher throughput.This paper reviews the state-of-the-art DAG-based blockchain systems (e.g.,NXT,Byteball) and sharding-based blockchain systems (e.g.,Elastico,RapidChain).It analyzes the key components of these systems,including system storage structures,data structures and consensus protocols.It also compares these blockchain techniques,and summarizes the challenges and future research directions.
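The core idea behind the DAG-based designs surveyed here is that each new transaction directly approves earlier ones instead of waiting for a global block.A minimal sketch (not any specific system's protocol; the tip-selection and weight rules below are simplified assumptions):

```python
import hashlib

class DagLedger:
    """Toy DAG ledger: each transaction approves up to two earlier
    unapproved transactions ("tips"), in the spirit of Byteball-style
    designs, so confirmation accrues without global blocks."""
    def __init__(self):
        self.parents = {}                    # tx_id -> approved tx_ids
        self.genesis = self._hash("genesis")
        self.parents[self.genesis] = ()

    @staticmethod
    def _hash(data):
        return hashlib.sha256(data.encode()).hexdigest()[:12]

    def tips(self):
        """Transactions not yet approved by any other transaction."""
        approved = {p for ps in self.parents.values() for p in ps}
        return [t for t in self.parents if t not in approved]

    def add(self, payload):
        """Attach a transaction approving at most two current tips."""
        chosen = tuple(sorted(self.tips())[:2])
        tx = self._hash(payload + "".join(chosen))
        self.parents[tx] = chosen
        return tx

    def ancestors(self, tx, seen=None):
        seen = set() if seen is None else seen
        for p in self.parents[tx]:
            if p not in seen:
                seen.add(p)
                self.ancestors(p, seen)
        return seen

    def weight(self, tx):
        """Cumulative weight: how many transactions directly or
        indirectly approve tx (a simple confirmation measure)."""
        return sum(1 for t in self.parents if tx in self.ancestors(t))
```

Because approvals are made locally by each new transaction, many transactions can be appended concurrently, which is the source of the throughput gains discussed in the survey.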
Research and Development of Data Storage Security Audit in Cloud
BAI Li-fang, ZHU Yue-fei, LU Bin
Computer Science. 2020, 47 (10): 290-300.  doi:10.11896/jsjkx.191000111
Abstract PDF(1870KB) ( 1159 )   
References | Related Articles | Metrics
Compared with traditional storage,cloud storage avoids the repeated construction and maintenance of storage platforms.Its storage capacity,performance scalability,location independence and pay-on-demand service mode effectively optimize the allocation of storage and social resources.However,because data ownership and management rights are separated in cloud storage services,users pay more and more attention to the security and controllability of cloud data,and researchers at home and abroad have conducted extensive studies on this problem.This paper discusses the security risks and security audit requirements of cloud data in each stage of its life cycle,constructs a framework for cloud data storage security audit mechanisms,and proposes the main evaluation indexes of such audit mechanisms.It then reviews the existing mechanisms for cloud data storage security audit,including the provable data possession mechanism,the provable data retrievability mechanism,the outsourced storage regularity audit mechanism and the storage location audit mechanism.Finally,the shortcomings of existing research on cloud data storage security audit are pointed out from different perspectives,together with directions for further research.
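At the heart of provable data possession is a challenge-response protocol:the auditor challenges randomly chosen blocks with a fresh nonce and checks the server's proof.A greatly simplified sketch (real PDP schemes keep only small homomorphic tags on the auditor side,not the data itself,as noted in the comments):

```python
import hashlib
import hmac
import os
import random

class StorageServer:
    """Toy cloud server holding the outsourced file blocks."""
    def __init__(self, blocks):
        self.blocks = blocks

    def prove(self, indices, nonce):
        # Proof covers only the challenged blocks, bound to the nonce
        # so old proofs cannot be replayed.
        h = hashlib.sha256(nonce)
        for i in indices:
            h.update(self.blocks[i])
        return h.digest()

class Auditor:
    """Toy auditor: keeps a copy of the blocks so it can recompute the
    expected proof (a real scheme stores compact verification tags)."""
    def __init__(self, blocks):
        self.blocks = list(blocks)

    def challenge(self, server, sample=3):
        indices = random.sample(range(len(self.blocks)), sample)
        nonce = os.urandom(16)               # fresh randomness per audit
        expected = hashlib.sha256(nonce)
        for i in indices:
            expected.update(self.blocks[i])
        return hmac.compare_digest(expected.digest(),
                                   server.prove(indices, nonce))
```

Sampling a few blocks per challenge keeps audit cost low while still detecting large-scale data loss with high probability, which is the efficiency argument these mechanisms rely on.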
Queryability Optimization of Blockchain System for Hybrid Index
ZHENG Hao-han, SHEN De-rong, NIE Tie-zheng, KOU Yue
Computer Science. 2020, 47 (10): 301-308.  doi:10.11896/jsjkx.190800148
Abstract PDF(2229KB) ( 1324 )   
References | Related Articles | Metrics
Blockchain technology has the characteristics of decentralization and immutability,and is considered to be the next generation of disruptive core technology.However,existing blockchain systems are weak in data management and can only query related transactions by hash value.Current research on queries mostly either synchronizes data into an external database and queries there,or focuses on how to ensure the reliability of full nodes,so the low query efficiency of blockchain remains unsolved in a practical sense.A new solution is proposed in this paper.First,blockchain data is divided according to different attributes.Next,based on these attributes and combining the Merkle tree of the blockchain with multiple index structures,a new index,the MHerkle tree,is proposed to enhance the query performance of the blockchain while preserving its immutability.Then,the index construction algorithm of the MHerkle tree is designed,and an attribute-based query algorithm and a range query algorithm are proposed on top of the index.Finally,experiments show the feasibility and effectiveness of the proposed index.
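The two ingredients being combined here are a Merkle tree (which gives immutability) and an attribute index (which gives queryability).A minimal sketch of each,not the MHerkle tree itself (the odd-node handling and the dict-based index are illustrative assumptions):

```python
import hashlib

def _h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(items):
    """Merkle root over transaction payloads; an odd node on a level
    is promoted unchanged to the next level.  Any change to any item
    or to their order changes the root."""
    level = [_h(i) for i in items]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            nxt.append(_h(level[i] + level[i + 1]))
        if len(level) % 2:
            nxt.append(level[-1])
        level = nxt
    return level[0]

def attribute_index(txs, attr):
    """Secondary index: attribute value -> positions of matching
    transactions, so attribute queries avoid scanning every block."""
    idx = {}
    for pos, tx in enumerate(txs):
        idx.setdefault(tx[attr], []).append(pos)
    return idx
```

An MHerkle-style design would store such index information alongside the hash tree so that attribute and range queries can be answered without breaking the tamper-evidence of the root.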
Approximate Safety Properties in Metric Linear Temporal Logic
CAI Yong, QIAN Jun-yan, PAN Hai-yu
Computer Science. 2020, 47 (10): 309-314.  doi:10.11896/jsjkx.191000175
Abstract PDF(1488KB) ( 696 )   
References | Related Articles | Metrics
In recent years,quantitative verification of computer systems has attracted much attention from the academic and industrial communities,and the study of system specifications over metric spaces has offered a new research line for the development of quantitative verification.In system verification,linear-time properties are often used to describe the behavior of a system,and safety,one of the most important classes of linear-time properties,asserts that nothing “bad” happens during the execution of the system.Hence the extension of safety properties should also be studied in the context of metrics.This paper investigates safety properties over pseudo-ultrametric spaces.First,metric linear temporal logic (MLTL) is used to characterize linear-time properties in the context of metrics.Then,the notion of safety properties is lifted to pseudo-ultrametric spaces by introducing a distance threshold α,yielding α-safety properties.Finally,the relationship between MLTL and α-safety properties is discussed.These results provide a theoretical basis for the verification of safety properties in the context of metrics.
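The abstract does not state the formal definition.As a hedged sketch of how such a threshold relaxation could look (the pseudo-ultrametric $d$, its lifting to sets,and the prefix notation below are assumptions for illustration,not necessarily the paper's definitions):

```latex
% Classical safety: every trace violating P has a finite "bad prefix"
% none of whose infinite extensions satisfies P.  A candidate
% \alpha-relaxation over a pseudo-ultrametric d on traces, lifted to
% sets by d(\sigma, P) = \inf_{\pi \in P} d(\sigma, \pi):
P \subseteq \Sigma^{\omega} \text{ is } \alpha\text{-safe} \iff
\forall \sigma \in \Sigma^{\omega}:\;
d(\sigma, P) > \alpha \implies
\exists n \in \mathbb{N}\;\forall \sigma' \in \Sigma^{\omega}:\;
d\big(\sigma[0..n]\,\sigma',\, P\big) > \alpha
```

Setting α = 0 over the discrete metric would collapse this back to the classical notion of a safety property,which is the sanity check one would expect such a lifting to satisfy.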
Dynamic Hybrid Data Race Detection Algorithm Based on Sampling Technique
LI Meng-ke, ZHENG Qiu-sheng, WANG Lei
Computer Science. 2020, 47 (10): 315-321.  doi:10.11896/jsjkx.190700079
Abstract PDF(1859KB) ( 844 )   
References | Related Articles | Metrics
Data race is a major source of concurrency bugs.Numerous static and dynamic program analysis techniques have been proposed to detect data races.However,some detectors incur a large detection overhead and others miss many true races.In this paper,a dynamic hybrid data race detection algorithm,AsampleLock,is proposed,based on an optimized FastTrack algorithm and lock modes.It uses a sampling technique to monitor function pairs from threads running concurrently,and obtains the memory access pairs that are really involved in data races through a preliminary detection pass,thereby reducing the analysis overhead of race detection.In order to reduce the influence of the algorithm on thread scheduling,AsampleLock adopts the nolock-hb relation to judge the concurrency relationship of access events,uses a map to record the read and write information of shared variables,and applies locking patterns to perform dynamic data race detection,thereby reducing false positives and false negatives.On the basis of the above methods,a prototype system named AsampleLock is implemented and evaluated on the Parsec benchmark suite against the FastTrack,LiteRace and Multilock-HB algorithms.The results show that the time overhead of AsampleLock is reduced by 8% compared with FastTrack,and its data race detection rate is increased by 39% and 27% compared with LiteRace and FastTrack,respectively.
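The happens-before core that FastTrack optimizes can be shown in miniature:flag a race when two writes to the same variable by different threads are unordered by vector clocks.A simplified sketch (FastTrack's epoch optimization,read tracking and the paper's sampling/lock-mode machinery are omitted):

```python
class VectorClock(dict):
    """Vector clock keyed by thread id; missing entries count as 0."""
    def happens_before(self, other):
        return all(v <= other.get(t, 0) for t, v in self.items())

class RaceDetector:
    """Minimal happens-before write-write race detector in the spirit
    of FastTrack: a race is two accesses to the same variable from
    different threads that neither clock orders."""
    def __init__(self):
        self.clocks = {}       # thread -> its VectorClock
        self.last_write = {}   # var -> (thread, clock snapshot)
        self.races = []

    def _clock(self, t):
        return self.clocks.setdefault(t, VectorClock())

    def write(self, t, var):
        c = self._clock(t)
        c[t] = c.get(t, 0) + 1                 # tick local component
        prev = self.last_write.get(var)
        if prev and prev[0] != t and not prev[1].happens_before(c):
            self.races.append((var, prev[0], t))
        self.last_write[var] = (t, VectorClock(c))

    def sync(self, src, dst):
        """Model a release/acquire edge (e.g. a lock hand-off):
        dst's clock absorbs src's."""
        s, d = self._clock(src), self._clock(dst)
        for t, v in s.items():
            d[t] = max(d.get(t, 0), v)
```

Keeping full vector clocks per access is exactly the overhead that epoch-based and sampling-based detectors such as AsampleLock aim to cut.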
Operational Visual Multi-secret Sharing Scheme for Threshold Structure
DONG Chen, JI Shu-ting, ZHANG Hao-yu, LI Lei
Computer Science. 2020, 47 (10): 322-326.  doi:10.11896/jsjkx.190800069
Abstract PDF(2795KB) ( 658 )   
References | Related Articles | Metrics
Visual secret sharing (VSS) combines digital image processing with secret sharing.It encodes a secret image into multiple shares,and the secret information can be decoded directly by the human eye when qualified shares are superimposed.It has merits such as low decoding complexity and large information capacity.In particular,visual multi-secret sharing (VMSS) can be used to share multiple secret images and can be applied to scenarios of group participation and control.However,current research on operational sharing schemes is limited to the (2,2,n) access structure,that is,when any 2 of the n participants take out their shares for recovery,at most 2 secret images can be recovered through rotation and superposition operations.Aiming at the problem that existing multi-secret sharing schemes are limited to 2 participants,new rules for secret sharing and share rotation are designed in this paper.By partitioning the secret image longitudinally and encrypting the pixels region by region with an XOR basic matrix,an operational visual multi-secret sharing scheme (OVMSS) oriented to threshold structures is designed,and the security and validity of the scheme are proved theoretically.The experimental results show that,compared with existing schemes,the proposed scheme achieves the equality of all shares by dividing secret images and marking the vertical regions during secret sharing,and increases the number of secrets that can be shared:up to t secret images can be shared into k ring shares simultaneously.On the premise of meeting the security conditions,the proposed scheme enhances the relative difference and improves the recovery quality of the secret image.
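The XOR operation that underlies such schemes is easiest to see in the basic (2,2) case:one share is uniform random,the other is the secret XOR that randomness,so either share alone reveals nothing.A minimal sketch (the paper's multi-secret,region-marked,rotatable ring shares are far richer than this):

```python
import os

def share_image(secret: bytes):
    """XOR-based (2,2) secret sharing over raw pixel bytes: share1 is
    uniformly random, share2 = secret XOR share1.  Each share alone is
    statistically independent of the secret."""
    share1 = os.urandom(len(secret))
    share2 = bytes(a ^ b for a, b in zip(secret, share1))
    return share1, share2

def recover(share1: bytes, share2: bytes) -> bytes:
    """"Superimposing" the two shares (XOR) restores the secret."""
    return bytes(a ^ b for a, b in zip(share1, share2))
```

A multi-secret scheme generalizes this by splitting the image into vertical regions and choosing the XOR basic matrices region by region, so that different share rotations reveal different secrets.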
Image Encryption Algorithm Based on Cyclic Shift and Multiple Chaotic Maps
TIAN Jun-feng, PENG Jing-jing, ZUO Xian-yu, GE Qiang, FAN Ming-hu
Computer Science. 2020, 47 (10): 327-331.  doi:10.11896/jsjkx.190800003
Abstract PDF(2539KB) ( 939 )   
References | Related Articles | Metrics
An encryption algorithm implemented with a single chaotic system has a simple structure and is easy to attack;using multiple chaotic systems is an effective measure to improve the security of an encryption system.A new image encryption algorithm based on cyclic shift and multiple chaotic maps is proposed,in which the cyclic shift operation can change the values of the pixels efficiently.First,a piece-wise linear chaotic map (PWLCM) and a Logistic map are used to generate different chaotic sequences,from which an index matrix and cyclic shift numbers are generated.Then,the plaintext image is permuted on the basis of the index matrix,and left cyclic shift operations are performed on the permuted image in turn according to the cyclic shift numbers.Finally,the image after cyclic shifting is scrambled and diffused by the Logistic chaotic sequence and the PWLCM chaotic sequence to obtain the encrypted image.Tests and analyses of image histogram,information entropy,differential attack and correlation are carried out.Theoretical analysis and simulation results show that this algorithm has high security,a desirable ability to resist different kinds of attacks,and can be used to implement an image encryption system.
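The building blocks combined here are a chaotic keystream and a chaos-derived cyclic shift.A greatly simplified one-dimensional sketch using only the Logistic map (the paper additionally uses PWLCM,an index-matrix permutation,and per-row shifts; the parameter mu=3.99 and the byte-extraction rule are illustrative assumptions):

```python
def logistic_sequence(x0, n, mu=3.99):
    """Logistic map x_{k+1} = mu * x_k * (1 - x_k); chaotic for mu
    near 4 and highly sensitive to the seed x0 (the key)."""
    seq, x = [], x0
    for _ in range(n):
        x = mu * x * (1 - x)
        seq.append(x)
    return seq

def encrypt(pixels, x0):
    """Toy diffusion + cyclic shift: XOR each pixel with a keystream
    byte derived from the map, then left-rotate the row by a
    chaos-derived amount."""
    ks = [int(v * 256) % 256 for v in logistic_sequence(x0, len(pixels))]
    mixed = [p ^ k for p, k in zip(pixels, ks)]
    shift = ks[0] % len(mixed)
    return mixed[shift:] + mixed[:shift], shift

def decrypt(cipher, shift, x0):
    """Invert the rotation, then XOR with the same keystream."""
    ks = [int(v * 256) % 256 for v in logistic_sequence(x0, len(cipher))]
    mixed = cipher[-shift:] + cipher[:-shift] if shift else list(cipher)
    return [c ^ k for c, k in zip(mixed, ks)]
```

Sensitivity to x0 is what makes the seed act as a key: even a slightly wrong seed yields a diverging keystream and garbage output, which is the property the histogram and differential-attack analyses quantify.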
Complex Attack Based Fragile Watermarking for Image Integrity Authentication Algorithm
ZHENG Qiu-mei, LIU Nan, WANG Feng-hua
Computer Science. 2020, 47 (10): 332-338.  doi:10.11896/jsjkx.191000060
Abstract PDF(2373KB) ( 1018 )   
References | Related Articles | Metrics
When images are used in judicial,medical and other important fields,it is often necessary to authenticate their integrity to determine whether an image has been tampered with maliciously.Image integrity authentication based on fragile watermarking can be used to detect and locate image tampering.In order to solve the problem that the localization accuracy and the ability to resist complex attacks of fragile watermarking cannot be satisfied simultaneously in image tamper detection,a complex-attack-oriented fragile watermarking algorithm for image integrity authentication is proposed in this paper.The fragile watermark is embedded into the R,G and B channels of the color image so that tampering in any channel can be detected.To improve the localization accuracy,the image is divided into 2×2 blocks.Block authentication and group authentication are used to detect image tampering under complex attacks,and a non-equilateral image scrambling transformation is used to improve the universality and the anti-complex-attack ability of the algorithm.The simulation results show that the proposed algorithm has better invisibility and higher localization accuracy for image tampering under common attacks,complex attacks and multiple attacks.
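The fragile-watermark principle can be sketched per block:hash the non-LSB content of the block and hide the hash bits in the pixels' least significant bits,so any later modification breaks the check.A toy sketch only (it uses a 16-pixel block for illustration rather than the paper's 2×2 blocks,and omits channel embedding,group authentication and scrambling):

```python
import hashlib

def _hash_bits(carriers, n):
    """First n bits of SHA-256 over the LSB-cleared pixel values."""
    digest = hashlib.sha256(bytes(carriers)).digest()
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(n)]

def embed(block):
    """Zero each pixel's LSB, hash the result, and spread the hash
    bits into the LSBs; the change is visually negligible."""
    carriers = [p & ~1 for p in block]
    bits = _hash_bits(carriers, len(block))
    return [c | b for c, b in zip(carriers, bits)]

def verify(block):
    """A block is authentic iff its LSBs match the hash of its
    non-LSB content."""
    carriers = [p & ~1 for p in block]
    return [p & 1 for p in block] == _hash_bits(carriers, len(block))
```

Running `verify` block by block both detects and localizes tampering, which is why smaller blocks give finer localization at the cost of fewer authentication bits per block.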