Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 47 Issue 4, 15 April 2020
Contents
Computer Science. 2020, 47 (4): 0-0.
Discipline Construction
Practical Exploration of Discipline Construction of Artificial Intelligence+
WANG Guo-yin, QU Zhong, ZHAO Xian-lian
Computer Science. 2020, 47 (4): 1-5.  doi:10.11896/jsjkx.200300144
Society has now entered the era of intelligence, and artificial intelligence is deeply integrated with economic and social development, so training the various kinds of talents needed in this era is essential. To meet these urgent needs, colleges and universities must strengthen the construction of intelligent disciplines, explore the construction of "artificial intelligence+" disciplines, improve the disciplines in the field of artificial intelligence, and innovate and expand the development of other discipline directions. This paper proposes a cross-integration mode of "artificial intelligence+" discipline construction. On the one hand, starting from the connotation of artificial intelligence, a solid foundation is laid for the field by building basic artificial intelligence discipline directions relying on computer science and technology. On the other hand, according to the key areas of social and economic development, new intelligent discipline directions in various industries are built by relying on the advantages of related fields. In this way, the coordinated development, symbiosis and mutual assistance of artificial intelligence and other disciplines are realized. This "artificial intelligence+" discipline construction mode has been applied successfully at Chongqing University of Posts and Telecommunications and other universities in Chongqing. Chongqing implemented the "Action Plan of 'Artificial Intelligence+' Discipline Construction in Chongqing Higher Education Institutes" and carried out practical project exploration in 40 related disciplines at 14 universities. This work can provide a reference scheme for discipline construction in the field of artificial intelligence.
Computer Architecture
Efficient Implementation of Generalized Dense Symmetric Eigenproblem Standardization Algorithm on GPU Cluster
LIU Shi-fang, ZHAO Yong-hua, YU Tian-yu, HUANG Rong-feng
Computer Science. 2020, 47 (4): 6-12.  doi:10.11896/jsjkx.191000009
The solution of the generalized dense symmetric eigenproblem is a core task in many applied sciences and engineering fields, and is an important part of computations in electromagnetics, electronic structure, finite element models and quantum chemistry. Transforming a generalized symmetric eigenproblem into a standard symmetric eigenproblem is an important computational step in its solution. This paper presents a blocked standardization algorithm for the generalized dense symmetric eigenproblem based on MPI+CUDA for GPU clusters. To adapt to the architecture of the GPU cluster, the proposed algorithm combines the Cholesky decomposition of the positive definite matrix with the traditional blocked standardization algorithm, which reduces unnecessary communication overhead and increases the parallelism of the algorithm. Moreover, the data transfer between the GPU and the CPU is used to mask the data copy operations within the GPU, eliminating the time spent on copying and thereby improving the performance of the program. In addition, a fully parallel point-to-point transposition algorithm between the row and column communication domains of the two-dimensional communication grid is presented, as well as a parallel blocked algorithm based on MPI+CUDA for solving the triangular matrix equation BX=A with multiple right-hand sides. Tests were run on the supercomputer "Era" of the Computer Network Information Center of the Chinese Academy of Sciences, where each compute node is configured with 2 Nvidia Tesla K20 GPGPU cards and 2 Intel E5-2680 V2 processors, using matrices of different scales on up to 32 GPUs. The MPI+CUDA based standardization algorithm achieves good acceleration and scalability; when tested on a 50000×50000 matrix using 32 GPUs, the peak performance reaches approximately 9.21 Tflops.
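As a concrete illustration of the standardization step described above, the following single-node NumPy/SciPy sketch reduces Ax = λBx to a standard eigenproblem via the Cholesky factor of B. It shows only the mathematics; none of the paper's blocked MPI+CUDA multi-GPU machinery is reproduced, and the matrix size is illustrative.

```python
# Standardization of A x = lambda B x via Cholesky: B = L L^T,
# C = L^{-1} A L^{-T}, so that C y = lambda y with x = L^{-T} y.
import numpy as np
from scipy.linalg import cholesky, solve_triangular, eigh

def standardize(A, B):
    L = cholesky(B, lower=True)                 # B = L L^T (B must be SPD)
    Y = solve_triangular(L, A, lower=True)      # Y = L^{-1} A
    C = solve_triangular(L, Y.T, lower=True).T  # C = L^{-1} A L^{-T}
    return C, L

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)); A = (A + A.T) / 2            # symmetric A
M = rng.standard_normal((n, n)); B = M @ M.T + n * np.eye(n)  # SPD B
C, L = standardize(A, B)
w, _ = np.linalg.eigh(C)                         # standard eigenproblem
w_ref = eigh(A, B, eigvals_only=True)            # reference generalized solver
assert np.allclose(np.sort(w), np.sort(w_ref), atol=1e-6)
```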
Extreme-scale LBM-based Computational Fluid Dynamics Simulations
LV Xiao-jing, LIU Zhao, CHU Xue-sen, SHI Shu-peng, MENG Hong-song, HUANG Zhen-chun
Computer Science. 2020, 47 (4): 13-17.  doi:10.11896/jsjkx.191000010
The Lattice Boltzmann Method (LBM) is a computational fluid dynamics method based on mesoscopic simulation scales and has been widely used in theoretical research and engineering problems. Improving the parallel simulation capability of LBM computational fluid software is an important topic for high performance computing and applications. This research aims to design and implement a highly efficient and scalable LBM computational fluid dynamics software, SWLBM, for the "Sunway TaihuLight" supercomputing system. According to the architecture of the domestic many-core processor SW26010, several multi-level parallel optimization techniques are designed to boost the simulation speed and improve the scalability of SWLBM, including data reuse for the 19-point stencil, vectorization of the collision process, and communication-computation overlapping. Based on these parallel optimization schemes, numerical simulations with over 10 million cores and up to 5.6 trillion grid points were tested; the SWLBM software brings up to a 172x speedup and achieves a sustained floating-point performance of 4.7 PFlops. Compared with the million-core 10000×10000×5000-grid wind field simulation, SWLBM achieves a parallel efficiency of 87%. Test results show that SWLBM can provide practical large-scale parallel simulation solutions for industrial applications.
Efficient MILP Model for HW/SW Partitioning of Dynamic Partial Reconfigurable SoC
ZHU Li-hua, WANG Ling, TANG Qi, WEI Ji-bo
Computer Science. 2020, 47 (4): 18-24.  doi:10.11896/jsjkx.190300001
A heterogeneous System-on-Chip (SoC) integrates multiple types of processors on the same chip. It has great advantages in processing capacity, size, weight and power consumption, which makes it widely used in many fields. The SoC with dynamic partial reconfigurability (DPR-SoC) is an important type of heterogeneous SoC, which combines software flexibility with hardware efficiency. The design of such systems usually involves hardware/software co-design, and how to partition an application between software and hardware is the key technology for ensuring real-time system performance. The HW/SW partitioning problem for DPR-SoC can be formulated as a combinatorial optimization problem whose goal is to obtain the schedule with the shortest length, including task mapping, ordering and timing. Mixed integer linear programming (MILP) is an effective method for solving combinatorial optimization problems; however, building a proper model for a specific problem is the key to solving it and has a great impact on solving time. Existing MILP models for HW/SW partitioning of DPR-SoC have a large number of variables and constraint equations, and these redundant variables and constraints increase the solving time. Besides, the solutions of existing methods do not match actual applications because they make too many assumptions. To address these problems, this paper proposes a novel model that reduces model complexity and improves suitability to the application. The application is modeled as a directed acyclic graph (DAG) and solved with an integer linear programming solver. Extensive results show that the proposed model reduces model complexity and shortens solving time; moreover, as the problem scale grows, the reduction in solving time becomes more significant.
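For flavour, the toy MILP below (PuLP) assigns each task either to the CPU or to a reconfigurable region and minimizes the schedule length. It is a drastically simplified sketch of this kind of formulation, not the paper's model: the task latencies are made-up numbers, and precedence, timing and reconfiguration-overhead constraints are omitted.

```python
# Toy HW/SW partitioning MILP: binary placement variables + makespan bound.
import pulp

sw_time = {"t1": 8, "t2": 5, "t3": 9, "t4": 4}   # hypothetical software latencies
hw_time = {"t1": 3, "t2": 4, "t3": 2, "t4": 6}   # hypothetical hardware latencies
tasks = list(sw_time)

prob = pulp.LpProblem("hw_sw_partitioning", pulp.LpMinimize)
on_hw = pulp.LpVariable.dicts("on_hw", tasks, cat="Binary")  # 1 -> hardware
makespan = pulp.LpVariable("makespan", lowBound=0)

prob += makespan                                  # objective: schedule length
# With one CPU and one reconfigurable region, each resource runs its tasks
# back to back, so its total busy time lower-bounds the makespan.
prob += pulp.lpSum(sw_time[t] * (1 - on_hw[t]) for t in tasks) <= makespan
prob += pulp.lpSum(hw_time[t] * on_hw[t] for t in tasks) <= makespan

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({t: ("HW" if on_hw[t].value() == 1 else "SW") for t in tasks},
      "makespan =", makespan.value())
```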
Extraction Algorithm of NDVI Based on GPU Multi-stream Parallel Model
ZUO Xian-yu, ZHANG Zhe, SU Yue-han, LIU Yang, GE Qiang, TIAN Jun-feng
Computer Science. 2020, 47 (4): 25-29.  doi:10.11896/jsjkx.190500029
Normalized Difference Vegetation Index (NDVI) extraction algorithms optimized by GPU usually adopt the GPU multi-thread parallel model, in which data transmission between CPU and GPU and weakly correlated computations take considerable time, limiting further performance improvement. Aiming at these problems and the characteristics of NDVI, an NDVI extraction algorithm based on the GPU multi-stream parallel model is proposed. Through the features of CUDA streams and Hyper-Q, the GPU multi-stream parallel model can overlap not only data transmission with kernel execution but also kernel execution with other kernel execution, further improving the parallelism and resource utilization of the GPU. Firstly, the NDVI algorithm is optimized by the GPU multi-thread parallel model, and the optimized procedure is decomposed to find the parts with data transmission or weakly correlated computation. Secondly, these parts are reconstructed and optimized with the GPU multi-stream parallel model to achieve overlapping between weakly correlated computations, or between weakly correlated computation and data transmission. Finally, experiments were carried out for the NDVI algorithm based on both GPU parallel models, using remote sensing images taken by the GF-1 satellite as experimental data. The experimental results show that, when the image is larger than 12000×13400 pixels, the proposed algorithm achieves about 1.5 times acceleration compared with the traditional parallel NDVI algorithm based on the GPU multi-thread parallel model, and about 260 times acceleration compared with the sequential NDVI extraction algorithm, showing better performance and parallelism.
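The chunk-per-stream pattern the abstract describes can be sketched as follows (Numba CUDA). This is a minimal sketch, not the paper's implementation: the chunk count and block size are arbitrary choices, float32 band arrays are assumed, and true copy/compute overlap additionally requires pinned host buffers (cuda.pinned_array).

```python
# NDVI = (NIR - RED) / (NIR + RED), computed chunk by chunk on CUDA streams
# so host-device transfers of one chunk overlap kernels of another.
import numpy as np
from numba import cuda

@cuda.jit
def ndvi_kernel(nir, red, out):
    i = cuda.grid(1)
    if i < out.size:
        s = nir[i] + red[i]
        out[i] = (nir[i] - red[i]) / s if s != 0 else 0.0

def ndvi_multistream(nir, red, n_streams=4):
    out = np.empty_like(nir)
    chunks = np.array_split(np.arange(nir.size), n_streams)
    streams = [cuda.stream() for _ in range(n_streams)]
    for idx, st in zip(chunks, streams):
        lo, hi = idx[0], idx[-1] + 1
        d_nir = cuda.to_device(nir[lo:hi], stream=st)      # H2D on this stream
        d_red = cuda.to_device(red[lo:hi], stream=st)
        d_out = cuda.device_array(hi - lo, dtype=nir.dtype, stream=st)
        tpb = 256
        blocks = (hi - lo + tpb - 1) // tpb
        ndvi_kernel[blocks, tpb, st](d_nir, d_red, d_out)  # kernel, same stream
        d_out.copy_to_host(out[lo:hi], stream=st)          # D2H on this stream
    cuda.synchronize()
    return out
```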
Application of Atomistic Kinetic Monte Carlo Program MISA-KMC in Study of Irradiation Damage of Reactor Pressure Vessel Steel
WANG Dong, SHANG Hong-hui, ZHANG Yun-quan, LI Kun, HE Xin-fu, JIA Li-xia
Computer Science. 2020, 47 (4): 30-35.  doi:10.11896/jsjkx.191100045
The microstructure of nuclear materials such as reactor pressure vessel (RPV) steel is subject to irradiation damage during long-term service, and the behavior of solute precipitation in RPV steel can be simulated by the kinetic Monte Carlo method. In order to provide a theoretical basis for studying the microstructure evolution and performance changes of nuclear materials after long-term service, this paper introduces the parallel strategy and large-scale test results of the self-developed MISA-KMC program. After verifying the correctness of the program, the precipitation process of solute atoms in RPV steel was studied with MISA-KMC. The results show that, after a long period of evolution, solute atoms aggregate to form Cu-rich clusters, which are one of the main microstructural causes of the embrittlement of RPV steel. The accuracy of the simulation results of MISA-KMC, the scale of the simulations it can support, and the diversity of simulated elements provide a foundation for subsequent research on material performance changes.
Streaming Parallel Text Proofreading Based on Spark Streaming
YANG Zong-lin, LI Tian-rui, LIU Sheng-jiu, YIN Cheng-feng, JIA Zhen, ZHU Jie
Computer Science. 2020, 47 (4): 36-41.  doi:10.11896/jsjkx.190300070
The rapid development of the Internet has prompted the generation of massive amounts of network text, which poses new performance challenges for traditional serial text proofreading algorithms. Although automatic text proofreading has received more and more attention in recent years, related work mostly focuses on serial algorithms and rarely involves parallelization of proofreading. Firstly, the serial proofreading algorithm is generalized and a general framework of serial proofreading is given. Then, in view of the shortcomings of serial proofreading for large-scale texts, three general parallelization methods for text proofreading are proposed: 1) a multi-thread based parallel proofreading method, which parallelizes paragraphs and proofreading functions simultaneously based on a thread pool; 2) a batch parallel proofreading method based on Spark MapReduce, which proofreads paragraphs in parallel by means of RDD parallel computing; 3) a Spark Streaming-based parallel proofreading approach, which converts the real-time computation of text streams into a series of small time-sliced batch jobs, effectively avoiding fixed overhead and significantly reducing proofreading latency. Because streaming computing has the advantages of low latency and high throughput, the streaming-computing based method is finally chosen to build the parallel proofreading system. Performance comparison experiments demonstrate that thread parallelism is suitable for proofreading small-scale text, batch processing is suitable for off-line proofreading of large-scale text, and streaming parallel proofreading effectively reduces the fixed delay of about 110 seconds. Compared with batch proofreading, streaming proofreading using a real-time computing framework achieves a great performance improvement.
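A skeleton of the Spark Streaming design described above is sketched below (PySpark). The batch duration, the socket source and the proofread_paragraph stand-in are all assumptions for illustration; any serial proofreading function could be plugged in.

```python
# Text stream -> small time-sliced batch jobs; paragraphs inside a batch
# are proofread in parallel across the cluster.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

def proofread_paragraph(paragraph):
    # placeholder: run the serial proofreading algorithm on one paragraph
    return paragraph, []                       # (text, detected errors)

sc = SparkContext(appName="streaming-proofreading")
ssc = StreamingContext(sc, batchDuration=2)    # 2-second micro-batches

# one paragraph per line; feed with e.g. `nc -lk 9999`
paragraphs = ssc.socketTextStream("localhost", 9999)
results = paragraphs.map(proofread_paragraph)  # paragraph-level parallelism
results.pprint()

ssc.start()
ssc.awaitTermination()
```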
Design of Fault-tolerant L1 Cache Architecture at Near-threshold Voltage
CHENG Yu, LIU Wei, SUN Tong-xin, WEI Zhi-gang, DU Wei
Computer Science. 2020, 47 (4): 42-49.  doi:10.11896/jsjkx.190300088
With aggressive silicon integration and increasing clock frequencies, power consumption and heat dissipation have become key challenges in the design of high-performance processors. Near-threshold computing (NTC) is emerging as a promising solution to achieve an order-of-magnitude reduction in energy consumption in future processors. However, reducing the supply voltage to the near-threshold level significantly increases SRAM bit-cell failures, leading to a high error rate in the L1 cache. Researchers have proposed techniques that either sacrifice capacity or incur additional latency to correct the errors in the L1 cache, but most schemes can only adapt to a low SRAM bit-cell error rate and perform poorly in high error rate environments. This paper proposes a fault-tolerant first-level cache design (FTFLC) based on conventional 6T SRAM cells to solve reliability challenges in high error rate environments. FTFLC adopts a two-level mapping mechanism, which uses a block mapping mechanism and a bit correction mechanism to protect the faulty bits in the cache line. In addition, an FTFLC initialization algorithm is proposed to improve the available cache capacity by combining the two mapping mechanisms. Experimental results show that, compared with three existing schemes, FTFLC improves performance by 3.86% and increases L1 cache capacity by 12.5% while maintaining low area and energy consumption.
Database & Big Data & Data Science
Personalized Recommendation Algorithm Based on User Preference Feature Mining
LIU Xiao-fei, ZHU Fei, FU Yu-chen, LIU Quan
Computer Science. 2020, 47 (4): 50-53.  doi:10.11896/jsjkx.190700175
To improve the personalized recommendation ability of social networks, this paper proposes a personalized recommendation algorithm based on user behavior feature mining, according to the distribution of user behavior. A feature mining model for social network user behavior information is constructed; a big data fusion scheduling method is used to fuse the behavior information of social network user characteristics, and semantic information features that reflect user preferences are extracted. According to the user's behavior feature groups in terms of emotion, keywords and structure, combined with a fuzzy information perception method, information scheduling in the process of personalized social network recommendation is carried out. Under the constraints of association rules, a hybrid recommendation model of user preference features is constructed to realize user preference feature mining, and personalized information recommendation is realized according to semantic distribution and user behavior preference. The simulation results show that the proposed method has good feature resolution and accurate recognition of user behavior features, improving the confidence level of social network recommendation output.
Collaborative Attention Network Model for Cross-modal Retrieval
DENG Yi-jiao, ZHANG Feng-li, CHEN Xue-qin, AI Qing, YU Su-zhe
Computer Science. 2020, 47 (4): 54-59.  doi:10.11896/jsjkx.190600181
With the rapid growth of image, text, sound, video and other multi-modal network data, the demand for diversified retrieval is increasingly strong, and cross-modal retrieval has received wide attention. However, there are heterogeneity differences among different modalities, and finding the content similarity of heterogeneous data remains challenging. Most existing methods project heterogeneous data into a common subspace by a mapping matrix or a deep model, mining pairwise correlations and obtaining the global correspondence between image and text. However, these methods ignore local context information and the fine-grained interactions between the data, so cross-modal correlation cannot be fully mined. Therefore, a text-image collaborative attention network model (CoAN) is proposed. To enhance the measurement of content similarity, CoAN selectively focuses on the key information parts of multi-modal data. A pre-trained VGGNet model and an LSTM model are used to extract the fine-grained features of images and texts, and the CoAN model captures the subtle interactions between text and image with a text-image attention mechanism. At the same time, the model learns hash representations of text and image respectively, and retrieval speed is improved by the low storage and high efficiency of hashing. Experiments show that, on two widely used cross-modal datasets, the mean average precision (mAP) of CoAN is higher than that of all comparison methods, reaching 0.807 for text-retrieving-image and 0.769 for image-retrieving-text. These results indicate that CoAN helps to detect the key information and fine-grained interactive information of multi-modal data, and that retrieval accuracy is improved by fully mining the content similarity of cross-modal data.
Study on Multimodal Image Genetic Data Based on Deep Principal Correlated Auto-encoders
LI Gang, WANG Chao, HAN De-peng, LIU Qiang-wei, LI Ying
Computer Science. 2020, 47 (4): 60-66.  doi:10.11896/jsjkx.190300073
Brain imaging phenotypes and genetic mutations have become important factors affecting complex diseases such as schizophrenia. Building on previous in-depth research into the pathogenesis, researchers have proposed many models based on deep neural networks or regularization, typically involving either some form of norm or auto-encoders with a reconstruction objective, but the multi-modal data for those models tend to have more feature dimensions than samples. In order to address the difficulties of high-dimensional data analysis and overcome the limitations of deep canonical correlation analysis, a competent optimization algorithm is exploited that combines deep canonical correlation analysis (DCCA) with principal component analysis (PCA) for multi-modal linear feature learning, and a multi-layer belief network based on restricted Boltzmann machines (RBM) for multi-modal nonlinear feature learning. The model, together with previous advanced models, was applied to test and analyze real multi-modal data. Experiments show that the deep principal component correlation auto-encoder model has higher correlation and better classification performance than previous models. In terms of classification accuracy, the accuracy on the two types of modal data exceeds 90%; compared with CCA-based models with an average accuracy of about 65% and DNN-based models with an average accuracy of about 80%, the classification effect of this model is significantly improved. In the clustering performance evaluation, the model further verified its significant classification effect with an average normalized mutual information of 93.75% and an average classification error rate of 3.8%. In terms of maximum correlation analysis, on the premise that the output dimensions of the top-level nodes are consistent, this model outperforms other advanced models with a maximum correlation of 0.926, showing excellent performance in high-dimensional data analysis.
Collaborative Filtering Algorithm Based on Rating Preference and Item Attributes
ZHU Lei, HU Qin-han, ZHAO Lei, YANG Ji-wen
Computer Science. 2020, 47 (4): 67-73.  doi:10.11896/jsjkx.190300056
Aiming at the inaccurate item similarity caused by the data sparsity of traditional collaborative filtering algorithms, this paper proposes an improved collaborative filtering algorithm based on a user rating preference model, incorporating a time factor and item attributes. The algorithm improves accuracy by modifying the item similarity formula. Firstly, a preference model is introduced to account for differences in users' rating habits: the user-item rating matrix is rebuilt by replacing a user's rating of an item with the user's preference for that rating class. Then a time weight function is designed and incorporated into the rating similarity to reflect time effects. Furthermore, item similarity is calculated by combining item attribute similarity with rating similarity. Finally, top-N recommendation is completed after calculating user preference for items via the user preference formula. The experimental results suggest that the precision and recall of the proposed algorithm are increased by 9%~27% on the MovieLens-100K dataset and by 16%~28% on the MovieLens-Latest-Small dataset compared with classical approaches. Therefore, the improved algorithm can improve recommendation accuracy and effectively mitigate the problem of data sparsity.
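The two ingredients of the modified similarity can be sketched as follows. The exponential decay rate lam, the mixing weight alpha and the Jaccard attribute similarity are illustrative assumptions, not the paper's exact formulas.

```python
# Time-decayed co-rating weight + mixture of rating and attribute similarity.
import numpy as np

def time_weight(t_i, t_j, lam=0.005):
    """Down-weight a user's rating pair by the gap (in days) between ratings."""
    return np.exp(-lam * abs(t_i - t_j))

def attribute_similarity(attrs_i, attrs_j):
    """Jaccard similarity of the items' attribute sets (e.g. movie genres)."""
    union = len(attrs_i | attrs_j)
    return len(attrs_i & attrs_j) / union if union else 0.0

def combined_similarity(rating_sim, attrs_i, attrs_j, alpha=0.6):
    return alpha * rating_sim + (1 - alpha) * attribute_similarity(attrs_i, attrs_j)

print(time_weight(0, 30))                                   # ~0.86 after 30 days
print(combined_similarity(0.72, {"action", "sci-fi"}, {"action", "drama"}))
```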
Contextual Preference Collaborative Measure Framework Based on Belief System
YU Hang, WEI Wei, TAN Zheng, LIU Jing-lei
Computer Science. 2020, 47 (4): 74-84.  doi:10.11896/jsjkx.190600152
To reduce human intervention in the preference measure process, this article proposes a preference collaborative measure framework based on an updated belief system, which also improves the accuracy and efficiency of preference measure algorithms. Firstly, the distance between rules and the average internal distance of rule sets are defined to specify the relationship between rules. To discover the most representative preferences common to all users, namely common preferences, an algorithm based on the average internal distance of a rule set, the PRA algorithm, is proposed, which aims to finish the discovery process with a minimum information loss rate. Furthermore, the concept of common belief is proposed to update the belief system, with the common preferences serving as the evidence of the updated belief system. Then, under this belief system, the proposed belief degree and deviation degree are used to determine whether a rule confirms the belief system, to classify preference rules into two kinds (generalized or personalized), and eventually to filter out the Top-K interesting rules based on belief degree and deviation degree. On this basis, a scalable interestingness calculation framework that can apply various formulas is proposed for accurately calculating interestingness in different conditions. At last, the IMCos and IMCov algorithms, which use weighted cosine similarity and correlation coefficients as belief degrees, are proposed as exemplars to verify the accuracy and efficiency of the framework. In experiments, the proposed algorithms are compared with two state-of-the-art algorithms, and the results show that IMCos and IMCov outperform the other two in most aspects.
Computer Graphics & Multimedia
Survey on Human Action Recognition Based on Deep Learning
CAI Qiang, DENG Yi-biao, LI Hai-sheng, YU Le, MING Shao-feng
Computer Science. 2020, 47 (4): 85-93.  doi:10.11896/jsjkx.190300005
As an important research hotspot in the computer vision community, human action recognition has important research significance and broad application prospects in many fields such as intelligent surveillance, smart homes and virtual reality, and it has attracted the attention of scholars at home and abroad. Methods based on traditional handcrafted features find it difficult to deal with human action recognition in complex scenarios. With the great success of deep learning in image classification, applying deep learning to human action recognition has gradually become a development trend, but some difficulties and challenges remain. In this paper, firstly, according to the differences in feature extraction approaches, the early methods for human action recognition based on traditional handcrafted representations are briefly reviewed. Then, from the perspective of network architecture, deep learning-based approaches for human action recognition, including two-stream networks and 3D convolutional networks, are discussed and analyzed. Besides, this paper introduces the datasets currently used to evaluate human action recognition methods, and summarizes the performance of some typical methods on the two well-known public datasets UCF-101 and HMDB-51. Finally, future trends of deep learning-based methods are discussed from the two aspects of performance and application, and their shortcomings are also pointed out.
Advances in 3D Object Detection:A Brief Survey
ZHANG Peng, SONG Yi-fan, ZONG Li-bo, LIU Li-bo
Computer Science. 2020, 47 (4): 94-102.  doi:10.11896/jsjkx.190400142
Object detection is useful in many application scenarios and is one of the most important research topics in computer vision. In recent years, with the development of deep learning, 3D object detection has achieved significant breakthroughs. Compared with 2D object detection, 3D object detection can provide spatial scene information such as the location, orientation and size of the object of interest, which plays an important role in autonomous driving and robotics research. This paper first summarizes deep learning-based 2D object detection, then reviews recent 3D object detection algorithms based on different data types (image, point cloud and multi-sensor), and analyzes the performance, advantages and limitations of typical 3D object detection algorithms in autonomous driving scenarios. Finally, this paper summarizes the application directions, research topics and challenges of 3D object detection.
Video Recommendation Algorithm for Multidimensional Feature Analysis and Filtering
ZHAO Nan, PI Wen-chao, XU Chang-qiao
Computer Science. 2020, 47 (4): 103-107.  doi:10.11896/jsjkx.190700177
In recent years, short video apps such as TikTok, Kwai and WeiShi have achieved great success, and the number of videos shot by users and uploaded to these platforms has skyrocketed. In this environment of information overload, mining and recommending videos of interest to users has become a problem faced by video publishing platforms, so designing efficient video recommendation algorithms for these platforms is particularly important. Aiming at the high sparsity and huge scale of datasets in media big data mining and recommendation, a video recommendation algorithm with multidimensional feature analysis and filtering is proposed. First, video features are extracted from multiple dimensions such as user behavior and video tags. Then, similarity analysis is performed: the video similarities are computed by weighting to obtain a candidate set of similar videos, the candidate set is filtered, and several of the highest-rated videos are recommended to users. Finally, based on the public MovieLens dataset, the proposed video recommendation algorithm is implemented in Python 3. Extensive experiments on the dataset show that, compared with the traditional collaborative filtering algorithm, the proposed algorithm improves the accuracy of the recommendation results by 6%, the recall rate by 4%, and the coverage rate by 18%. The experimental data fully demonstrate that considering the similarity between videos from multiple dimensions, combined with large-scale matrix factorization, alleviates the high sparsity and huge volume of the dataset to some extent, thereby effectively improving the accuracy, recall and coverage of the recommendation results.
Light-weight Object Detection Network Based on Grouping Heterogeneous Convolution
YAN Xiao-tian, HUANG Shan
Computer Science. 2020, 47 (4): 108-111.  doi:10.11896/jsjkx.190600067
Current object detection models have disadvantages such as a large number of parameters, large size and slow detection speed, and cannot be applied in real-time scenarios. For example, automatic driving requires not only accurate detection to ensure safety, but also rapid detection to ensure real-time decision-making of the vehicle. To address these issues, this paper presents an end-to-end light-weight object detection network, FGHDet. Firstly, grouping heterogeneous convolution (GHConv) is proposed to solve the low efficiency of HetConv's channel-by-channel convolution. Secondly, the basic module FGH Module is built by combining GHConv and the Fire Module. Finally, the end-to-end light-weight object detection network FGHDet is built based on the FGH Module. FGHDet reduces the number of parameters mainly in two ways: one is to reduce the number of input channels of the 3×3 filters, the other is to replace the traditional convolution kernel with GHConv. This paper takes the KITTI dataset as experimental data and completes the training and evaluation of the model on the deep learning framework Keras. The experimental results indicate that the mAP of FGHDet on the KITTI dataset reaches 74.4%, higher than the 70.8% of Faster R-CNN, and the detection speed of the model is 28.7 FPS, better than SqueezeDet, the fastest model among those compared. Moreover, the size of the proposed model is only 2.6 MB, about 1/200 of the size of the Faster R-CNN model.
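One plausible reading of a grouping-heterogeneous convolution is sketched below in Keras (the framework the paper uses): the input channels are split into groups, and inside each group part of the output is computed with 3×3 filters and the rest with cheap 1×1 filters, then concatenated. This is our hedged interpretation of combining HetConv with grouping, not the authors' exact layer; the group count and 3×3/1×1 ratio are assumed, and filters and channels are assumed divisible by the group count.

```python
# Hypothetical GHConv-style block: per-group mixture of 3x3 and 1x1 filters.
import tensorflow as tf
from tensorflow.keras import layers

def gh_conv(x, filters, groups=4, ratio=0.5):
    in_splits = tf.split(x, groups, axis=-1)           # channel groups
    f3 = int(filters // groups * ratio)                # 3x3 share per group
    f1 = filters // groups - f3                        # 1x1 share per group
    outs = []
    for g in in_splits:
        outs.append(layers.Conv2D(f3, 3, padding="same", activation="relu")(g))
        outs.append(layers.Conv2D(f1, 1, padding="same", activation="relu")(g))
    return layers.Concatenate(axis=-1)(outs)

inputs = tf.keras.Input((128, 128, 32))
model = tf.keras.Model(inputs, gh_conv(inputs, filters=64))
model.summary()   # parameter count sits between pure 1x1 and pure 3x3 layers
```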
Approach to Classification of Eye Movement Directions Based on EEG Signal
CHENG Shi-wei, CHEN Yi-jian, XU Jing-ru, ZHANG Liu-xin, WU Jian-feng, SUN Ling-yun
Computer Science. 2020, 47 (4): 112-118.  doi:10.11896/jsjkx.190200342
In order to improve the accuracy of eye movement direction identification based on electro-oculogram (EOG) signals, this paper utilizes electroencephalogram (EEG) signals containing EOG artifacts and proposes a new approach to classify eye movement directions. Firstly, EEG signals from 8 channels over the frontal lobe are collected, and the EEG data are pre-processed, including normalization and least-squares based denoising. Then a support vector machine based method is applied to perform multiple binary classifications, and finally a voting strategy is used to solve the four-class problem, thus achieving eye movement direction identification. The experimental results show that, when using the proposed approach, the classification accuracies in the up, down, left and right directions are 78.47%, 72.22%, 84.03% and 79.86% respectively, and the average classification accuracy reaches 78.65%. In addition, compared with existing classification methods, the classification accuracy of this approach is higher and the classification algorithm is simpler, which validates the feasibility and effectiveness of using EEG signals to identify eye movement directions.
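The pairwise-SVM-plus-voting stage can be sketched as follows. The feature extraction and denoising of the paper are replaced by random placeholder data, and the kernel choice is an assumption.

```python
# One-vs-one SVMs over 8-channel features; majority vote picks one of the
# four eye-movement directions.
from itertools import combinations
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 8))        # placeholder 8-channel features
y = rng.integers(0, 4, 400)              # 0: up, 1: down, 2: left, 3: right

clfs = {}
for a, b in combinations(range(4), 2):   # six binary problems
    mask = (y == a) | (y == b)
    clfs[(a, b)] = SVC(kernel="rbf").fit(X[mask], y[mask])

def predict(x):
    votes = np.zeros(4, dtype=int)
    for clf in clfs.values():
        votes[int(clf.predict(x.reshape(1, -1))[0])] += 1
    return int(votes.argmax())           # majority vote -> direction

print(predict(X[0]))
```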
Motion Feature Descriptor for Abnormal Behavior Detection
WANG Kun-lun, LIU Wen-can, HE Xiao-hai, QING Lin-bo, WU Xiao-hong
Computer Science. 2020, 47 (4): 119-124.  doi:10.11896/jsjkx.190300392
Modern motion description techniques for crowd motion in videos are mostly velocity descriptors based on optical flow. However, acceleration contains a wealth of motion information: it can provide information that velocity descriptors miss when describing complex motion patterns, and can better characterize them. This paper studies a motion descriptor that uses an energy-based restricted Boltzmann machine model to perform abnormal behavior detection. Firstly, the optical flow information in the video is extracted, and the acceleration information is calculated from the optical flow of two consecutive frames. Then, acceleration histogram features are computed over spatio-temporal blocks, and all the spatio-temporal block histogram features of adjacent frames are concatenated to obtain an acceleration descriptor. The restricted Boltzmann machine learns the normal motion patterns from a training set of normal videos, and abnormality is detected in the testing phase in terms of the reconstruction errors. The results show that the average area under the curve (AUC) reaches 0.984 on the UMN dataset and 0.958 on UCF-Web. Compared with other state-of-the-art algorithms, the proposed descriptor has superior performance on anomaly detection.
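The descriptor construction can be sketched as follows: acceleration is approximated as the difference of two consecutive optical-flow fields, and a magnitude-weighted orientation histogram is built per spatio-temporal block. The block size and bin count are arbitrary choices for illustration.

```python
# Acceleration histogram descriptor from two consecutive flow fields.
import numpy as np

def acceleration_histograms(flow_t, flow_t1, block=16, bins=8):
    acc = flow_t1 - flow_t                        # per-pixel acceleration (H, W, 2)
    mag = np.linalg.norm(acc, axis=-1)
    ang = np.arctan2(acc[..., 1], acc[..., 0]) % (2 * np.pi)
    h, w = mag.shape
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            hist, _ = np.histogram(ang[y:y+block, x:x+block],
                                   bins=bins, range=(0, 2 * np.pi),
                                   weights=mag[y:y+block, x:x+block])
            feats.append(hist / (hist.sum() + 1e-8))
    return np.concatenate(feats)                  # descriptor for one frame pair

f0 = np.random.default_rng(0).standard_normal((64, 64, 2))
f1 = np.random.default_rng(1).standard_normal((64, 64, 2))
print(acceleration_histograms(f0, f1).shape)      # 16 blocks * 8 bins -> (128,)
```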
3D Shape Recognition Based on Multi-task Learning with Limited Multi-view Data
ZHOU Zi-qin, YAN Hua
Computer Science. 2020, 47 (4): 125-130.  doi:10.11896/jsjkx.190700163
With the rapid development of 3D scanning technology, 3D shape analysis has received wide attention from researchers. Especially with the significant success of deep learning in computer vision, multi-view based approaches have become the dominant methods for 3D shape recognition. In previous work, we notice that the amount of 3D shape data is essential for recognition accuracy. However, due to the limitations of professional 3D scanning equipment, 3D shape data are hard to collect, and the scale of existing benchmark datasets is far smaller than that of 2D datasets, which impedes the development of 3D shape analysis. To solve this problem, we develop an optimized strategy for 3D shape recognition with limited data. Inspired by multi-task learning, we develop a novel network with multiple branches and construct an auxiliary comparison module based on metric learning to exploit the similarity and discrepancy between samples within and across classes. The proposed network mainly includes a primary branch and an auxiliary branch, which use different loss functions for different training tasks, with a hyper-parameter to balance the loss terms. The primary branch aims to obtain the classification prediction and is trained with the cross entropy loss, while the similarity scores of different samples are calculated by the auxiliary module and the mean square error is used to update that branch. The two branches share the same feature extractor, projecting all samples into the same representation space, and the structure is trained jointly in the training phase, while only the primary branch is used in the testing phase to calculate accuracy. Extensive experimental results on two public 3D shape benchmark datasets demonstrate the effectiveness of the proposed architecture in enhancing discriminative power, achieving better performance than traditional methods, especially when only limited multi-view data are available.
Adaptive Image Inpainting Based on Structural Correlation
ZHOU Xian-chun, XU Yan
Computer Science. 2020, 47 (4): 131-135.  doi:10.11896/jsjkx.190300149
This paper proposes an adaptive image inpainting algorithm based on structural correlation to solve the problems of the inaccurate priority function and degraded image quality in the Criminisi inpainting algorithm. First, structural correlation is introduced to improve the priority calculation and increase its reliability. Then, the sample block size is selected adaptively to make the repair more accurate and efficient. Finally, the HSV color space is introduced, and the optimal matching block is searched for according to the chromaticity and brightness of the sample, reducing the repair error and completing the image restoration. Experimental results show that, compared with the traditional Criminisi algorithm, the proposed algorithm achieves an obvious improvement in subjective visual quality, its peak signal-to-noise ratio (PSNR) is improved by 1~3 dB, and its structural similarity (SSIM) is closer to 1. By using structural correlation to adaptively select the sample block size when repairing damaged color images, the algorithm makes the priority calculation more reasonable and accurate and the repair effect better, which is helpful for practical applications.
Scene Graph Generation Model Combining Multi-scale Feature Map and Ring-type Relationship Reasoning
ZHUANG Zhi-gang, XU Qing-lin
Computer Science. 2020, 47 (4): 136-141.  doi:10.11896/jsjkx.190300002
A scene graph is a graph describing image content. Two problems arise in its generation: first, useful information is lost in two-step scene graph generation methods, which increases the difficulty of the task; second, the long-tail distribution of visual relationships leads to model overfitting, which increases the error rate of relationship reasoning. To solve these two problems, a scene graph generation model SGiF (Scene Graph in Features) based on multi-scale feature maps and ring-type relationship reasoning was proposed. Firstly, the possibility of a visual relationship is calculated for each feature point on the multi-scale feature map, and the features with high possibility are extracted. Then, subject-object combinations are decoded from the extracted features; according to the differences of the decoding result categories, the results are deduplicated and the scene graph structure is obtained. Finally, rings that include the targeted relationship edge are detected from the graph structure, the other edges of such a ring are used as inputs for computing an adjustment factor, and the original relationship reasoning result is adjusted by this factor, completing the scene graph generation. SGGen and PredCls were used as verification tasks. The experimental results on the scene graph subset of the large dataset Visual Genome (VG) show that, by using multi-scale feature maps, SGiF improves the hit rate of visual relationship detection by 7.1% compared with the two-step baseline, and by using ring-type relationship reasoning, SGiF improves the accuracy of relationship reasoning by 2.18% compared with the baseline with non-ring relational reasoning, thus proving the effectiveness of SGiF.
Lane Detection Algorithm Based on Improved Enet Network
LIU Bin, LIU Hong-zhe
Computer Science. 2020, 47 (4): 142-149.  doi:10.11896/jsjkx.190500021
Aiming at the complex diversity of road scenes and lane lines in the actual driving environment, a lane line detection algorithm based on an improved Enet network was proposed. Firstly, the Enet network is pruned and its convolutions are optimized; the improved Enet network then performs semantic segmentation of the lane line image, separating the lane lines from the background. Then, the DBSCAN algorithm is used to cluster the segmentation results so that adjacent lane lines are distinguished from each other. Finally, the lane line clusters are adaptively fitted to obtain the final lane line detection results. The proposed algorithm was trained and tested on the CULane dataset of the Chinese University of Hong Kong. The accuracy of standard road detection is 96.3%, the accuracy of comprehensive road detection is 78.9%, and the image frame processing speed is 71.4 fps, which can satisfy the complex road conditions and real-time requirements of the actual driving environment. In addition, the proposed algorithm has been trained and tested on TuSimple's dataset and on our own collected dataset LD-Data, achieving real-time detection results on both.
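The clustering-and-fitting post-processing can be sketched as follows: pixels the segmentation network labels as "lane" are clustered with DBSCAN so that each cluster is one lane line, and each cluster is then fitted with a polynomial. The DBSCAN parameters and the polynomial degree are illustrative assumptions.

```python
# Segmentation mask -> DBSCAN clusters -> one polynomial per lane line.
import numpy as np
from sklearn.cluster import DBSCAN

def fit_lanes(mask, eps=5, min_samples=20, degree=2):
    ys, xs = np.nonzero(mask)                     # coordinates of lane pixels
    pts = np.stack([xs, ys], axis=1)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    lanes = []
    for lab in set(labels) - {-1}:                # -1 = DBSCAN noise
        c = pts[labels == lab]
        lanes.append(np.polyfit(c[:, 1], c[:, 0], degree))  # x as function of y
    return lanes

mask = np.zeros((100, 100), dtype=bool)
ys = np.arange(100)
mask[ys, (0.002 * ys**2 + 20).astype(int)] = True  # one synthetic "lane"
print(fit_lanes(mask, eps=3, min_samples=10))      # ~[0.002, 0, 20]
```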
Crowd Counting Based on Single-column Multi-scale Convolutional Neural Network
PENG Xian, PENG Yu-xu, TANG Qiang, SONG Yan-qi
Computer Science. 2020, 47 (4): 150-156.  doi:10.11896/jsjkx.190400034
The problem of crowd counting in single images and surveillance videos has received increasing attention in recent years. Due to scale change and crowd occlusion, crowd counting is a very challenging problem, but deep convolutional neural networks have proved effective in solving it. In this paper, a single-column multi-scale convolutional neural network is proposed, which provides a data-driven deep learning method that can understand various scenes and perform accurate count estimation. The proposed network model is mainly composed of a front end and a middle end for two-dimensional feature extraction, and a back end used to restore the density map. Stacked pooling is used to replace the max pooling layers, increasing the scale invariance of the model without introducing additional parameters. The front end adopts a partial VGG-16 structure, and an FME (feature aggregation module) is adopted in the middle to break the independence between different columns and better extract multi-scale feature information. At the back end, three columns of five-layer dilated convolutions with different dilation rates are adopted to enlarge the receptive field while keeping the resolution unchanged, generating a higher-quality crowd density map. A relative count loss is introduced to improve the model performance in the case of sparse crowd density. The model works well on two of the most challenging crowd counting datasets: on the two subsets of ShanghaiTech and on UCF_CC_50, the mean absolute error (MAE) and mean squared error (MSE) of the proposed method are 66.2 and 103.0, 8.7 and 13.4, and 251.0 and 329.5, respectively, achieving better performance than traditional crowd counting methods. Compared with other models, the proposed model has higher accuracy, better robustness and better counting performance on images with sparse crowds.
Artificial Intelligence
Survey of Implicit Discourse Relation Recognition Based on Deep Learning
HU Chao-wen, YANG Ya-lian, WU Chang-xing
Computer Science. 2020, 47 (4): 157-163.  doi:10.11896/jsjkx.190300115
Implicit discourse relation recognition is still a challenging task in natural language processing. It aims to discover the semantic relations (such as contrast) between two arguments (clauses or sentences) where discourse connectives are absent. In recent years, with the extensive application of deep learning in natural language processing, various deep learning based methods have achieved promising results on implicit discourse relation recognition, performing much better than previous methods based on manual features. This paper discusses recent implicit discourse relation recognition methods in three categories: argument encoding based methods, argument interaction based methods, and semi-supervised methods using explicit discourse data. Results on the PDTB dataset show that, by explicitly modeling the semantic relations between words or text spans in the two arguments, argument interaction based methods perform significantly better than argument encoding based methods, and that, by incorporating explicit discourse data, semi-supervised methods can effectively alleviate the problem of data sparsity and further improve recognition performance. Lastly, this paper analyzes the major problems faced at present and points out possible research directions.
R-Calculi for L3-Valued Propositional Logic
CAO Cun-gen, HU Lan-xi, SUI Yue-fei
Computer Science. 2020, 47 (4): 164-168.  doi:10.11896/jsjkx.190600171
In L3-valued propositional logic, the Gentzen deduction system G for sequents is monotonic, while the corresponding deduction system for co-sequents is nonmonotonic. Based on these two systems, an R-calculus S is given such that any reduction Δ|A ⇒ Δ,C is valid if and only if it is provable in S. Therefore, S is monotonic in restraining A from entering Δ, and nonmonotonic in adding A into Δ.
Emotional Robot Collaborative Task Assignment Auction Algorithm Based on Positive Group Affective Tone
LI Hu, FANG Bao-fu
Computer Science. 2020, 47 (4): 169-177.  doi:10.11896/jsjkx.190900188
A multi-robot system (MRS) can effectively improve individual robots' autonomous cooperation ability, decision-making ability and the overall intelligence level of the system by introducing individual emotional factors. However, previous research mainly focuses on individual emotional states (emotion, personality, etc.), lacking exploration of the influence of group emotional state on group cooperation ability and group effectiveness from the perspective of positive group affective tone (PGAT). In order to exploit the positive effects of PGAT in task allocation and reduce the risk of group dissolution caused by the decay of group members' emotions, as well as to increase group cooperation ability and group effectiveness, this paper proposes a collaborative task allocation auction algorithm based on PGAT. The simulation results show that, compared with the modified contract network protocol multi-robot task allocation algorithm based on an anxiety model and the distributed task allocation method based on self-awareness of autonomous robots, the proposed algorithm improves the pursuit success rate by 269.3% and 6.5%, increases the task allocation success rate by 138.7% and 5%, and reduces the average pursuit time by 14.5% and 26.3%, respectively. Besides, in 150 episodes of pursuit comparison experiments, the proportions of episodes whose pursuit time is less than that of the comparison algorithms are 87.3% and 90.7%, respectively.
Truncated Gaussian Distance-based Self-attention Mechanism for Natural Language Inference
ZHANG Peng-fei, LI Guan-yu, JIA Cai-yan
Computer Science. 2020, 47 (4): 178-183.  doi:10.11896/jsjkx.190600149
In natural language inference tasks, attention mechanisms have attracted much interest because they can effectively capture the importance of words in context and improve task performance. Transformer, a deep feedforward network model based solely on attention mechanisms, not only achieves state-of-the-art performance on machine translation with far fewer parameters and much less training time, but also achieves remarkable results in tasks such as natural language inference (Gaussian-Transformer) and word representation learning (BERT); Gaussian-Transformer has become one of the best methods for natural language inference. However, in the Gaussian prior distribution that this Transformer uses to weight the positional importance of words, although the importance of adjacent words is greatly increased, the Gaussian weight of non-neighborhood words quickly approaches 0, so the influence of non-neighborhood words that play an important role in the current word's representation vanishes as the distance grows. Therefore, this paper proposes a position weighting method for self-attention based on a clipped (truncated) Gaussian distance distribution for natural language inference. This method not only highlights the importance of neighboring words, but also preserves the non-neighborhood words that are important to the current word's representation. The experimental results on the natural language inference benchmark datasets SNLI and MultiNLI confirm the validity of the clipped Gaussian distance distribution used in the self-attention mechanism for extracting the relative position information of the words in a sentence.
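The clipped-Gaussian position prior can be sketched as follows: a Gaussian weight over the distance |i - j| is floored at a small constant, so that neighboring words are emphasized but distant words never vanish, and the prior is folded into the usual scaled dot-product scores. The sigma and eps values are hypothetical hyper-parameters, and the log-space combination is one simple design choice.

```python
# Self-attention with a clipped Gaussian position prior (NumPy sketch).
import numpy as np

def clipped_gaussian_prior(n, sigma=2.0, eps=0.05):
    idx = np.arange(n)
    dist = np.abs(idx[:, None] - idx[None, :])
    g = np.exp(-dist ** 2 / (2 * sigma ** 2))
    return np.maximum(g, eps)                 # clipping keeps far words alive

def self_attention(X, sigma=2.0, eps=0.05):
    n, d = X.shape
    scores = X @ X.T / np.sqrt(d)
    scores = scores + np.log(clipped_gaussian_prior(n, sigma, eps))
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)        # row-wise softmax
    return w @ X

X = np.random.default_rng(0).standard_normal((10, 16))
print(self_attention(X).shape)                # (10, 16)
```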
Study of Crowd Counting Algorithm of "Weak Supervision" Dense Scene Based on Deep Neural Network
LIU Yan, LEI Yin-jie, NING Qian
Computer Science. 2020, 47 (4): 184-188.  doi:10.11896/jsjkx.190700212
At present, in crowd counting tasks for dense scenes, the ground truth is annotated by marking the central position of each pedestrian's head, and Gaussian convolution is used to generate a ground-truth density map as the supervision information. However, for dense scenes, such an annotation method is time-consuming and laborious, and the images contain many "uncontrolled" factors, such as low resolution, background noise, heavy occlusion and scale change. To solve this problem, we propose a new annotation method: only the number of persons in the picture needs to be known, and the total pedestrian count of the picture is used as the supervision information. In contrast to the traditional ground-truth density map, the proposed labeling method uses the true count value as "weak supervision" information. The experimental results show that a neural network trained with this weak supervision information can accurately regress the number of targets in an image for the crowd counting task, indicating the effectiveness of this method.
Knowledge Graph Representation Based on Improved Vector Projection Distance
LI Xin-chao, LI Pei-feng, ZHU Qiao-ming
Computer Science. 2020, 47 (4): 189-193.  doi:10.11896/jsjkx.190300024
Representation learning is of great value in knowledge graph reasoning; it makes knowledge computable by embedding entities and relations into a low-dimensional space. Representation learning models based on vector projection distance have good knowledge representation ability for complex relations. However, such a model is easily affected by irrelevant information, especially when dealing with one-to-one relations, and there is still room to improve its performance in representing one-to-many, many-to-one and many-to-many relations. In this paper, we propose an improved representation learning model, SProjE, which introduces an adaptive metric method to reduce the weight of noisy information and optimizes the loss function to increase the loss weight of complex relation triples. The proposed model is suitable for large-scale knowledge graph representation learning. The experimental results on the WN18 and FB15k datasets show that SProjE achieves significant and consistent improvements compared with existing models and methods.
Vertical Structure Community System Optimization Algorithm
HUANG Guang-qiu, LU Qiu-qin
Computer Science. 2020, 47 (4): 194-203.  doi:10.11896/jsjkx.190200273
To find the global optimal solutions of a class of complex non-linear optimization problems, a new vertical structure community system optimization algorithm, the VS-CSO algorithm, is proposed based on the theory of vertical structure community dynamics. In this algorithm, the search space of an optimization problem is regarded as an ecosystem with several vertically structured, bifurcated trophic levels, at which different biological populations live; within each population there live a number of individuals. Individuals cannot migrate across populations, but there are interactions within a population, and populations are linked by cyclic predator-prey or resource-consumer relationships. Using the vertical structure community dynamics model, the all-eating operator, food-selecting operator, interference operator, infection operator, newborn operator and death operator are developed. Among them, the all-eating and food-selecting operators exchange information among individuals across populations, while the interference and infection operators exchange information among individuals within a population, thus ensuring full information exchange among individuals. The newborn operator supplements the population with new individuals in time, and the death operator eliminates weak individuals from the population in time, greatly improving the algorithm's ability to escape local traps. In the solving process, the VS-CSO algorithm deals with only a very few variables at a time, so it can solve high-dimensional optimization problems. The test results show that the VS-CSO algorithm can solve a class of very complex single-peak, multi-peak and composite function optimization problems, possesses excellent exploitation ability, exploration ability and coordination of the two, and exhibits global convergence. The algorithm provides a way to find global optimal solutions for some complex function optimization problems.
Sentiment Classification Method for Sentences via Self-attention
YU Shan-shan, SU Jin-dian, LI Peng-fei
Computer Science. 2020, 47 (4): 204-210.  doi:10.11896/jsjkx.190100097
Although attention mechanisms are widely used in many natural language processing tasks, there is still a lack of related work on their application to sentence-level sentiment classification. Taking advantage of the self-attention mechanism's ability to learn important local features of sentences, a multi-layer attentional neural network based on long short-term memory (LSTM) and attention mechanisms, named AttLSTM, is proposed and applied to sentence-level sentiment classification. AttLSTM first uses an LSTM network to capture the context of a sentence, then uses a self-attention function to learn the position information of the words in the sentence and builds the corresponding position weight matrix, yielding the final semantic representation of the sentence by weighted averaging. Finally, the result is classified and output via a multi-layer perceptron. The experimental results show that AttLSTM outperforms related works and achieves accuracies of 82.8%, 88.3% and 91.3% respectively on the open two-class sentiment classification corpora Movie Reviews (MR), Stanford Sentiment Treebank (SSTb2) and the Internet Movie Database (IMDB), as well as 50.6% on the multi-class corpus SSTb5.
Bin Packing Algorithm Based on Adaptive Optimization of Slack
YANG Ting, LUO Fei, DING Wei-chao, LU Hai-feng
Computer Science. 2020, 47 (4): 211-216.  doi:10.11896/jsjkx.190500132
Abstract PDF(1435KB) ( 1148 )   
References | Related Articles | Metrics
The bin packing problem is a classical and important mathematical optimization problem in logistics and production systems: a series of items is put into bins with fixed capacity in a certain order, and the number of bins used is minimized so as to approximate the optimal packing as closely as possible. However, existing bin packing algorithms have obvious defects. Genetic algorithms require too much computation and may even fail to find the required solution; heuristic algorithms cannot handle extreme cases; and existing improved algorithms easily fall into local minima even when slack is introduced. The proposed Adaptive-MBS algorithm improves the original method with adaptive weights. Specifically, the method allows a certain amount of slack and captures how the sample space of remaining items changes over time, so that a better slack strategy can be used for packing. The Adaptive-MBS algorithm takes the current bin as the center and uses the Adaptive_Search algorithm to iteratively find, among all remaining items, a subset that fits the bin capacity. In Adaptive_Search, the bin is not required to be completely filled but is allowed a certain amount of slack; during the search the slack is automatically adjusted according to the current state, and once a subset that fills the bin is found, the algorithm moves on to the next round of search until all items are packed. This method does not easily fall into local optima and has a strong ability to find the global optimum. In this paper, the BINDATA and SCH_WAE data sets of the bin packing problem are used for experiments. The results show that the Adaptive-MBS algorithm finds the optimal solution for 991 instances in the data sets; for instances where the optimal solution is not found, the proposed algorithm has the lowest relative deviation percentage among all compared algorithms. Numerical experiments show that, compared with other classical bin packing algorithms, the Adaptive-MBS algorithm performs better and its convergence speed is significantly faster.
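A minimal sketch of the bin-centered subset search with slack that the abstract describes, under stated assumptions: the paper's Adaptive_Search adjusts the slack automatically, while this sketch uses a fixed slack tolerance for clarity, closing a bin as soon as its unused capacity falls within the tolerated slack. It assumes every item fits in one bin.

```python
def best_subset(items, capacity, slack_tol):
    """Depth-first search for the subset with minimal slack; stop early once
    the residual capacity is within the tolerated slack."""
    best = ([], capacity)
    def dfs(i, chosen, free):
        nonlocal best
        if free < best[1]:
            best = (chosen[:], free)
        if best[1] <= slack_tol or i == len(items):
            return
        if items[i] <= free:                       # include items[i]
            chosen.append(items[i])
            dfs(i + 1, chosen, free - items[i])
            chosen.pop()
        dfs(i + 1, chosen, free)                   # exclude items[i]
    dfs(0, [], capacity)
    return best[0]

def mbs_pack(items, capacity, slack_ratio=0.02):
    remaining = sorted(items, reverse=True)
    bins = []
    while remaining:                               # fill one bin at a time
        subset = best_subset(remaining, capacity, slack_ratio * capacity)
        for it in subset:
            remaining.remove(it)
        bins.append(subset)
    return bins

print(mbs_pack([7, 5, 4, 4, 3, 2, 2, 1], capacity=10))
# -> [[7, 3], [5, 4, 1], [4, 2, 2]]  (3 bins, the optimum for this instance)
```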
Computer Network
Survey on Internet of Things Based on Named Data Networking Facing 5G
XIE Ying-ying, SHI Jian, HUANG Shuo-kang, LEI Kai
Computer Science. 2020, 47 (4): 217-225.  doi:10.11896/jsjkx.191000157
Abstract PDF(1608KB) ( 1491 )   
References | Related Articles | Metrics
Large scale Internet of Things (IoT) applications in the 5G era pose severe challenges to the network architecture in terms of heterogeneity, scalability, mobility and security. Due to the identification and location overloading problem of IP, the TCP/IP-based network architecture is inefficient in addressing these challenges. Named Data Networking (NDN) takes named content as its primary semantic and keeps the logical topologies of the network layer and the application layer consistent. The advantages of NDN in addressing the four challenges are that naming shields the underlying heterogeneity, end-to-end decoupling and network-layer caching provide native support for many-to-many communication and multicast, consumer mobility is supported natively by the consumer-driven communication pattern, and content-based security is more lightweight. In this paper, future research directions of NDN-based IoT were summarized. In particular, combining NDN with edge computing, blockchain and Software Defined Networking (SDN) to construct an edge storage and computing model, a centralized-plus-distributed control model, and a distributed security model was proposed.
DVB-S2 Signal Receiving and Analysis Based on Cognitive Radio
TIAN Miao-miao, WANG Zu-lin, XU Mai
Computer Science. 2020, 47 (4): 226-232.  doi:10.11896/jsjkx.190700210
Abstract PDF(2982KB) ( 712 )   
References | Related Articles | Metrics
The DVB-S2 protocol is the second-generation digital television satellite broadcasting protocol, which has been widely adopted all over the world because of its outstanding signal transmission performance. Existing DVB-S2 signal reception mostly relies on standard commercial equipment, which is not convenient for analyzing each module of signal reception. Therefore, this paper used the cognitive radio device USRP X310 together with MATLAB digital algorithms to realize signal reception and to analyze how the relevant algorithm parameters affect the final transmission quality. This work can provide reliable design guidance for deeper research on the signal protocol, subsequent DVB-S2 signal generation and communication countermeasures. Based on the DVB-S2 protocol, this paper designed a complete simulation and implementation of a reception and analysis system for digital satellite television signals. To achieve maximum transparency of the communication protocol, only the cognitive radio equipment is used to amplify and sample the original analog signals, and the rest is completed by digital signal processing algorithms on the software platform. The hardware device that receives and samples the DVB-S2 analog signals is the cognitive radio equipment USRP X310, and the software platform for all digital signal processing is MATLAB. This paper discussed the details of the hardware configuration, the digital signal processing framework, and the theories and implementations of the key modules: symbol synchronization, physical-layer frame header detection and analysis, and carrier synchronization. In the experimental part, this paper took a specific television program on the Ku band of Asian Satellite 5 as an example to illustrate the whole DVB-S2 signal processing chain. Finally, the original data transport stream is obtained, and the video and audio of the program can be successfully played.
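As an illustration of the frame header detection step mentioned above (not the paper's implementation), the following Python sketch slides a known header over the received samples and flags peaks in the normalized complex correlation. The 90-symbol length matches the DVB-S2 PLHEADER; the header values here are placeholders, not the real SOF/PLSC sequence.

```python
import numpy as np

def detect_frame_heads(samples, header, threshold=0.8):
    """Return sample indices where the known header correlates strongly."""
    h = header / np.linalg.norm(header)
    peaks = []
    for i in range(len(samples) - len(h) + 1):
        window = samples[i:i + len(h)]
        norm = np.linalg.norm(window)
        if norm == 0:
            continue
        corr = np.abs(np.vdot(h, window)) / norm   # normalized complex correlation
        if corr > threshold:
            peaks.append(i)
    return peaks

rng = np.random.default_rng(0)
hdr = np.exp(1j * np.pi / 4 * rng.integers(0, 8, 90))   # placeholder 90-symbol header
rx = np.concatenate([rng.normal(size=40) + 0j, hdr, rng.normal(size=40) + 0j])
print(detect_frame_heads(rx, hdr))                       # expect a peak at index 40
```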
Improved SDNE in Weighted Directed Network
MA Yang, CHENG Guang-quan, LIANG Xing-xing, LI Yan, YANG Yu-ling, LIU Zhong
Computer Science. 2020, 47 (4): 233-237.  doi:10.11896/jsjkx.190600151
Abstract PDF(2393KB) ( 1152 )   
References | Related Articles | Metrics
A network expresses entities and the relations between them, and network structures are common in the real world, so it is of great significance to study the relations between nodes and edges in networks. Network representation learning transforms the structural information of a network into node vectors, which reduces the complexity of graph representation and can be effectively applied to tasks such as classification, network reconstruction and link prediction. The SDNE (structural deep network embedding) algorithm proposed in recent years has made outstanding achievements in the field of graph auto-encoders. In view of the limitations of SDNE on weighted and directed networks, this paper proposed a new network representation model based on graph auto-encoders from the perspectives of network structure and measurement index. The concepts of receiving vector and sending vector are introduced to optimize the decoding part of the neural network, which reduces the parameters of the network and speeds up convergence. This paper also proposed a measurement index based on node degree, so that the weighted characteristics of the network are reflected in the representation results. Experiments on three directed weighted datasets show that the proposed method achieves better results than the traditional methods and the original SDNE method in network reconstruction and link prediction tasks.
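A minimal sketch of how separate sending and receiving vectors can decode a weighted directed edge, which is our reading of the idea (the paper's exact decoder is not given in the abstract): node i's sending embedding paired with node j's receiving embedding predicts the weight of edge i -> j, so edge direction is preserved.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 8
S = rng.normal(size=(n, d))   # sending vectors, one per node
R = rng.normal(size=(n, d))   # receiving vectors, one per node

def predicted_weight(i, j):
    # w(i -> j) modeled as the inner product of send(i) and receive(j);
    # predicted_weight(i, j) != predicted_weight(j, i) in general, so the
    # decoder is naturally asymmetric, matching a directed network
    return S[i] @ R[j]

print(predicted_weight(0, 1), predicted_weight(1, 0))
```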
Intra-domain Energy Efficient Routing Algorithm Based on Algebraic Connectivity
GENG Hai-jun, ZHANG Wen-xiang, YIN Xia
Computer Science. 2020, 47 (4): 238-242.  doi:10.11896/jsjkx.190600064
Abstract PDF(2547KB) ( 489 )   
References | Related Articles | Metrics
Reducing network energy consumption through energy-efficient routing is a key scientific problem in networking. However, the existing energy-efficient routing algorithms all assume a known traffic matrix, whereas real-time traffic is difficult to obtain in practice. Therefore, this paper proposed an intra-domain energy-efficient routing scheme based on algebraic connectivity (EERSBAC). EERSBAC does not need the real-time traffic matrix; it relies only on the topological structure of the network to achieve energy saving. Firstly, a link criticality model is proposed to calculate the importance of all links in the network. Then, an algebraic connectivity model is proposed to quantitatively measure the connectivity performance of the network. The experimental results show that EERSBAC not only reduces network energy consumption but also yields a smaller path stretch.
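For concreteness, algebraic connectivity is the second-smallest eigenvalue of the graph Laplacian L = D - A; it is zero exactly when the graph is disconnected, which is why it can quantify how safely links may be put to sleep. A minimal sketch (illustrative, not the paper's code):

```python
import numpy as np

def algebraic_connectivity(adj):
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                        # graph Laplacian L = D - A
    eig = np.sort(np.linalg.eigvalsh(lap))
    return eig[1]                          # lambda_2: 0 iff the graph is disconnected

# 4-node ring: lambda_2 = 2 > 0, so the topology remains connected
ring = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]], dtype=float)
print(algebraic_connectivity(ring))
```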
Non-orthogonal Random Access Resource Allocation Scheme Based on Terminal Grouping
ZHANG Ji-rong, JIA Chen-qing
Computer Science. 2020, 47 (4): 243-248.  doi:10.11896/jsjkx.190300410
Abstract PDF(2487KB) ( 506 )   
References | Related Articles | Metrics
In order to solve collisions, resource shortages and other problems in Machine-to-Machine (M2M) communication, a non-orthogonal random access and data transmission scheme based on terminal grouping, the TG-NORA-DT scheme, was proposed. Firstly, the machine type communication devices (MTCDs) are grouped according to their speed of energy consumption, and group priorities are set. Secondly, differences in arrival time are used to distinguish multiple MTCDs that chose the same preamble, and power reuse among the conflicting MTCDs is realized in the subsequent access process. Finally, based on the TG-NORA-DT scheme, a resource allocation method is proposed to reasonably allocate resources between the physical random access channel (PRACH) and the physical uplink shared channel (PUSCH). Simulation results show that, compared with the orthogonal random access and data transmission protocol (ORADTP) and the Non-Orthogonal Random Access-Data Transmission (NORA-DT) scheme, the TG-NORA-DT scheme improves system throughput and resource efficiency and decreases the probability of preamble collision, with resource efficiency increased by more than 20%.
Group Stratification Opportunistic Routing Algorithm Based on Kinship in MSN
XUE Mao-jie, WU Jun, JIN Xiao-jun, BAI Guang-wei
Computer Science. 2020, 47 (4): 249-255.  doi:10.11896/jsjkx.190200358
Abstract PDF(1964KB) ( 483 )   
References | Related Articles | Metrics
Mobile Social Networks (MSN) have the characteristics of social networks, and mobile intelligent terminal devices often exhibit node selfishness due to their own resource limitations. Existing research mainly focuses on the selfishness of individual nodes, neglecting the discrimination and utilization of nodes' social selfishness. Therefore, this paper proposed a group stratification opportunistic routing algorithm based on kinship. First, in communities and clusters built on a kinship index, self-recommended nodes generate family nodes and relay nodes by comparing recommendation values. Then, the transition probability predicted from the kinship relationship is used as the forwarding basis, and family nodes and relay nodes are used to optimize blind forwarding: effective and reliable path links are predicted while the number of message replicas is effectively controlled, and an encounter-based delivery strategy based on node affinity is realized. Simulation results show that the proposed mechanism can effectively improve the message delivery rate, reduce network delay, and improve the effective communication traffic of the network while protecting and utilizing social selfishness.
Load Balancing Technology of Segment Routing Based on CKSP
ZHOU Jian-xin, ZHANG Zhi-peng, ZHOU Ning
Computer Science. 2020, 47 (4): 256-261.  doi:10.11896/jsjkx.190500122
Abstract PDF(1990KB) ( 726 )   
References | Related Articles | Metrics
In view of the emerging business demands represented by cloud computing and big data, existing MPLS networks suffer from complex protocols, poor scalability, and difficulty of operation and maintenance. Therefore, this paper adopted segment routing (SR) forwarding technology and, exploiting the centralized control and open programmability of Software-Defined Networking (SDN), proposed a segment routing load balancing scheme based on the CKSP algorithm. First, the controller exchanges information with each network node via the OpenFlow protocol to monitor the topology and link rates of the entire network. Then, the segment routing application implements forwarding table construction and segment list calculation by means of a two-stage flow table and multi-node relay, using the northbound interface provided by the controller. Finally, a Constrained K-Shortest Paths (CKSP) algorithm with non-uniform weighting of link utilization and hop count was designed. The experimental results show that the proposed scheme can increase network throughput, smooth the traffic distribution, and reduce the average delay of data flows and the packet loss rate of the whole network.
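A minimal sketch of the weighting idea only (the paper's CKSP algorithm itself is not given in the abstract): each candidate path among the K shortest is costed with a composite edge weight mixing link utilization and hop count, with `alpha` as an illustrative mixing knob and the `util` attribute an assumed per-link measurement.

```python
from itertools import islice
import networkx as nx

def cksp_sketch(graph, src, dst, k=3, alpha=0.7):
    # composite edge cost = alpha * utilization + (1 - alpha) * 1 (one hop)
    for u, v, data in graph.edges(data=True):
        data["cost"] = alpha * data.get("util", 0.0) + (1 - alpha)
    paths = nx.shortest_simple_paths(graph, src, dst, weight="cost")
    return list(islice(paths, k))      # up to K cheapest simple paths

g = nx.Graph()
g.add_edge("a", "b", util=0.9)         # hot link
g.add_edge("a", "c", util=0.1)
g.add_edge("c", "b", util=0.2)
print(cksp_sketch(g, "a", "b", k=2))   # the lightly loaded detour ranks first
```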
Task Intelligent Identification Method for Spatial Information Network
YANG Li, LI Xin-yu, SHI Huai-feng, PAN Cheng-sheng
Computer Science. 2020, 47 (4): 262-269.  doi:10.11896/jsjkx.190300111
Abstract PDF(3154KB) ( 705 )   
References | Related Articles | Metrics
With the continuous development of inter-satellite link technology and the maturing of on-board processing technology, the types of tasks transmitted by spatial information networks are growing and diversifying, which presents a new challenge to the coordinated transmission of multiple services and the global scheduling of network resources in spatial information networks. However, traditional spatial information network resource scheduling is mostly driven by a single service, ignoring the one-to-many relationship between tasks and services. This allows some low-priority tasks to preempt the network resources of high-priority tasks, lowering the quality of service of spatial information network tasks composed of multiple services. In addition, the transmission environment of spatial information networks features highly dynamic topology changes and limited node resources, so traditional task identification methods cannot identify spatial information network tasks with high efficiency and low cost. Therefore, this paper designed a task service support station deployed at the edge of the spatial information network. The support station consists of a recognition tag module, a route control module and a data communication module, and completes task type identification, routing and data transmission according to the task's quality of service requirements. Existing identification methods identify only a single sub-service of a task, ignoring the one-to-many relationship between tasks and services, and cannot meet the tasks' quality of service requirements. By introducing a feature space mapping based on the Gaussian kernel function, a service recognition algorithm based on the support vector machine was designed. Further, by introducing environmental feature items and combining service type, quantity and environmental features, a task identification algorithm based on the gradient descent method was designed. The simulation results show that the proposed task identification algorithm has good recognition accuracy and recall, with low recognition time and overhead. The average accuracy of task identification for the spatial information network reaches 95%, an improvement of 1%, and the average training time with feature dimension reduction is reduced by 15%.
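The service-recognition step pairs naturally with a short example: an SVM with a Gaussian (RBF) kernel maps flow features into a higher-dimensional space and separates service types. The sketch below uses synthetic flow features as stand-ins; the feature choice and class labels are assumptions, not the paper's data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# synthetic flow features: [mean packet size (bytes), mean inter-arrival time (ms)]
X = np.vstack([rng.normal([800, 5], 50, (50, 2)),    # e.g. "video" flows
               rng.normal([120, 50], 20, (50, 2))])  # e.g. "telemetry" flows
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)     # Gaussian-kernel feature mapping
print(clf.predict([[780, 8], [130, 45]]))            # expect [0, 1]
```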
Information Security
Privacy Metric Model of Differential Privacy via Graph Theory and Mutual Information
WANG Mao-ni, PENG Chang-gen, HE Wen-zhu, DING Xing, DING Hong-fa
Computer Science. 2020, 47 (4): 270-277.  doi:10.11896/jsjkx.190400098
Abstract PDF(2410KB) ( 1281 )   
References | Related Articles | Metrics
Differential privacy is an important tool for privacy preservation in many fields, such as data publishing and data mining. However, the strength and effectiveness of differential privacy cannot be evaluated in advance and rely heavily on the empirical selection of the privacy budget. To this end, a privacy metric model and a privacy leakage measurement method based on graph theory and mutual information were proposed. This work models differential privacy as an information-theoretic communication channel and constructs an information channel and privacy metric model for differential privacy. Then, a mutual-information-based privacy metric method is proposed by exploiting the distance-regularity and vertex-transitivity of graphs; the upper bound of this metric is proved, and an explicit formula for the bound is given. Detailed analysis and comparison show that the proposed upper bound is a function of the original dataset's attributes, attribute values and the privacy budget, subject to fewer computational constraints. This work improves on related works and provides a theoretical foundation for algorithm design, algorithm evaluation, and privacy assessment.
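To make the channel view concrete, the sketch below measures leakage as the mutual information of the channel a DP mechanism induces, using randomized response (flip probability p = 1 / (1 + e^eps), which satisfies eps-DP) as the mechanism; this is a standard illustrative instance, not the paper's graph-theoretic bound. Smaller eps yields lower mutual information, i.e. less leakage.

```python
import numpy as np

def mutual_information(eps):
    p = 1.0 / (1.0 + np.exp(eps))          # probability of flipping the secret bit
    chan = np.array([[1 - p, p],           # channel matrix P(output | input)
                     [p, 1 - p]])
    px = np.array([0.5, 0.5])              # uniform prior on the secret input
    joint = px[:, None] * chan             # joint distribution P(input, output)
    py = joint.sum(axis=0)
    return float(np.sum(joint * np.log2(joint / (px[:, None] * py[None, :]))))

for eps in (0.1, 1.0, 3.0):
    print(eps, round(mutual_information(eps), 4))   # leakage grows with eps
```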
Efficient Image Encryption Algorithm Based on 1D Chaotic Map
BAN Duo-han, LV Xin, WANG Xin-yuan
Computer Science. 2020, 47 (4): 278-284.  doi:10.11896/jsjkx.190600059
Abstract PDF(3309KB) ( 1289 )   
References | Related Articles | Metrics
With the development of multimedia technologies, applications based on digital images have become more and more popular, and the security of images themselves and the privacy of image owners face increasingly severe threats. Different from text, digital image data is two-dimensional, with large volume, high redundancy and strong correlation between pixels, so traditional encryption methods applied directly to images hardly achieve the desired effect. At present, encryption based on chaos theory is one of the mainstream approaches in image encryption: it uses the classical scrambling-diffusion structure and the high randomness of chaotic sequences to ensure the security of the encryption results, while maintaining high encryption efficiency. To further improve the security of chaotic encryption algorithms, a large number of complex chaotic maps, such as hyper-chaotic and multi-level chaotic maps, have been proposed; however, their computational complexity is much higher than that of one-dimensional chaotic maps. To solve this problem, an efficient chaotic map called SPM is designed by combining the Sine map and PWLCM, which expands the chaotic range, enhances ergodicity and speeds up the generation of chaotic sequences without reducing security. On this basis, an image encryption algorithm with a novel structure is designed: only one round of scrambling-diffusion-scrambling is needed to complete the encryption, which reduces the number of encryption rounds compared with traditional methods and further improves efficiency. Extensive experiments show that the proposed method can effectively resist chosen plain-image/cipher-image attacks and improves encryption efficiency by about 58% on average, indicating high practicability.
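The abstract does not give the SPM formula, so the following is only a hedged sketch of the general construction: a standard piecewise linear chaotic map (PWLCM) perturbed by a Sine-map term, iterated to produce a key stream. The mixing rule and all constants are assumptions for illustration, not the paper's SPM.

```python
import numpy as np

def pwlcm(x, p=0.3):
    # standard piecewise linear chaotic map on [0, 1)
    if x < p:
        return x / p
    elif x <= 0.5:
        return (x - p) / (0.5 - p)
    else:
        return pwlcm(1.0 - x, p)           # symmetric upper half

def spm_sequence(x0, n, p=0.3):
    seq, x = [], x0
    for _ in range(n):
        # illustrative mix: PWLCM output perturbed by a Sine-map term, kept in [0, 1)
        x = (pwlcm(x, p) + 0.5 * np.sin(np.pi * x)) % 1.0
        seq.append(x)
    return np.array(seq)

ks = (spm_sequence(0.42, 16) * 256).astype(np.uint8)   # key-stream bytes to XOR with pixels
print(ks)
```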
Medical Data Storage Mechanism Integrating Blockchain Technology
WANG Hui, LIU Yu-xiang, CAO Shun-xiang, ZHOU Ming-ming
Computer Science. 2020, 47 (4): 285-291.  doi:10.11896/jsjkx.190400001
Abstract PDF(2042KB) ( 880 )   
References | Related Articles | Metrics
The singularity and centrality of medical institutions' existing database storage make it impossible to guarantee the security, integrity and traceability of electronic medical data, so the medical privacy of patients is threatened. Although existing research has proposed secure data storage schemes based on cloud storage, they rely on a fully trusted third party to ensure reliable interaction. Therefore, this paper proposed a decentralized blockchain information management scheme to achieve the safe storage of medical data. The scheme adopts an improved PBFT consensus algorithm and an optimized hash encryption algorithm to store medical data safely and effectively in a distributed database, ensuring the integrity and traceability of medical data. At the same time, it proposes and designs a new data interaction system that prevents direct interaction between third parties and the database, preventing an untrustworthy third party from maliciously destroying medical data and ensuring data security. Finally, access control and a Lucene search mechanism are used to protect patient privacy and achieve rapid retrieval of medical data. Experiments show that the improved PBFT consensus algorithm provides better stability and throughput than proof of work (POW) and delegated proof of stake (DPOS). Compared with common database interaction, the proposed data interaction system effectively prevents direct operation of the database and offers better security and tamper resistance. The experimental data show that the decentralized medical data storage system, the improved PBFT consensus algorithm and the data interaction system architecture realize the security, traceability and tamper-proofing of medical data, solve the difficulties of centralized storage, poor traceability and vulnerability of medical data, and lay a foundation for further promoting the application of blockchain technology in the medical information industry.
Online/Offline Attribute-based Encryption with User and Attribute Authority Accountability
SHI Yu-qing, LING Jie
Computer Science. 2020, 47 (4): 292-297.  doi:10.11896/jsjkx.190300144
Abstract PDF(1399KB) ( 438 )   
References | Related Articles | Metrics
As a one-to-many encryption mechanism, attribute-based encryption can provide good plaintext security and fine-grained access control for cloud storage. However, in ciphertext-policy attribute-based encryption, one decryption private key may correspond to multiple users, so users may illegally share their private keys for improper benefits, and a semi-trusted attribute authority may issue decryption private keys to illegal users. In addition, the exponentiation cost of encrypting messages grows with the complexity of the access policy, and the resulting computational overhead poses a significant challenge to users who encrypt on mobile devices. Aiming at these problems, this paper proposed an online/offline ciphertext-policy attribute-based encryption scheme with user and attribute authority accountability that supports a large attribute universe; the scheme is constructed on prime-order bilinear groups. It achieves accountability by embedding the user's identity information into the user's private key, and uses online/offline encryption technology to move most of the encryption overhead to the offline phase. Lastly, proofs of the selective security and accountability of the scheme in the standard model were given. The analysis shows that the encryption overhead of the scheme falls mainly in the offline phase and the storage cost for tracing is extremely low, making the scheme suitable for users who encrypt on resource-limited mobile devices.
Android Malware Detection Method Based on Deep Autoencoder Network
SUN Zhi-qiang, WAN Liang, DING Hong-wei
Computer Science. 2020, 47 (4): 298-304.  doi:10.11896/jsjkx.190700132
Abstract PDF(2136KB) ( 789 )   
References | Related Articles | Metrics
To solve the problem of the low detection rate of traditional Android malware detection methods, an Android malware detection method based on a deep contractive denoising autoencoder network (DCDAN) was proposed. Firstly, the APK file is reverse-analyzed to obtain seven kinds of information, such as the permissions and sensitive APIs in the file, which are taken as feature attributes. Then, the feature attributes are fed into the deep contractive denoising autoencoder network; each contractive denoising autoencoder is trained layer by layer from bottom to top with a greedy algorithm, and the trained deep network is used to extract from the original features an optimal low-dimensional representation. Finally, the back-propagation algorithm is used to train and classify the obtained low-dimensional representations to detect Android malware. Adding noise to the input data of the deep autoencoder network makes the reconstructed data more robust, and adding a Jacobian-based penalty term enhances the network's resistance to perturbation. The experimental results verify the feasibility and efficiency of the method: compared with traditional detection methods, it improves the accuracy of malware detection and effectively reduces the false alarm rate.
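A minimal PyTorch sketch of one contractive denoising autoencoder layer as described above (illustrative, not the paper's network): noise corrupts the input, and the squared Frobenius norm of the encoder Jacobian is added to the reconstruction loss as the contractive penalty. Layer sizes and weights are assumptions.

```python
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(64, 32), nn.Sigmoid())
dec = nn.Linear(32, 64)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

def cdae_loss(x, noise_std=0.1, lam=1e-3):
    x_noisy = x + noise_std * torch.randn_like(x)     # denoising: corrupt the input
    h = enc(x_noisy)
    recon = nn.functional.mse_loss(dec(h), x)         # reconstruct the clean input
    # contractive penalty: for sigmoid units, J = diag(h(1-h)) @ W, so
    # ||J||_F^2 = sum_j (h_j(1-h_j))^2 * ||W_j||^2
    W = enc[0].weight                                 # shape (32, 64)
    jac = torch.sum((h * (1 - h)) ** 2 @ (W ** 2).sum(dim=1, keepdim=True))
    return recon + lam * jac

x = torch.rand(8, 64)                                 # a batch of feature vectors
loss = cdae_loss(x)
loss.backward()
opt.step()
print(float(loss))
```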
Medical Health Data Security Model Based on Alliance Blockchain
FENG Tao, JIAO Ying, FANG Jun-li, TIAN Ye
Computer Science. 2020, 47 (4): 305-311.  doi:10.11896/jsjkx.190300087
Abstract PDF(1596KB) ( 1050 )   
References | Related Articles | Metrics
In traditional medical information systems, the secure storage and sharing of medical health data has become a challenging task. There are many restrictions on accessing and sharing health data for people with different identities, and much time and many resources are spent on identity verification and data authentication. Aiming at problems such as highly centralized storage, unreliable data-sharing security and the difficulty of reaching agreement, this paper proposed an alliance-blockchain-based medical health data security model. According to the distribution of medical resources in reality, the medical institutions are ranked in the security model, and DPOS is combined with PBFT to ensure that the medical institutions can reach agreement rapidly without a central node and share medical data within the alliance. The security model has the advantages of decentralization, high security and tamper resistance, so data records and other important information are stored on the blockchain, while the original medical data is stored in a distributed database. The user's medical health data is stored securely, and at the same time the sharing efficiency among medical institutions is improved. Security analysis shows that the proposed model can protect medical health data within the fault-tolerance bound and prevent data tampering and collusion. Consistency analysis shows that the model enables the medical institutions to reach consensus and share medical data within the alliance with a probability of 99%.
Design and Implementation of Rule Processor Based on Heterogeneous Computing Platform
CHEN Meng-dong, GUO Dong-sheng, XIE Xiang-hui, WU Dong
Computer Science. 2020, 47 (4): 312-317.  doi:10.11896/jsjkx.190300104
Abstract PDF(1867KB) ( 645 )   
References | Related Articles | Metrics
Using dictionaries and their transformation rules is a common method for recovering the secure string in an identity authentication mechanism. By processing the transformation rules, a large number of targeted new strings can be generated quickly for verification. Rule processing is complex and places high demands on processing performance and system power consumption. Existing tools and research process rules in software, which can hardly meet the needs of a practical recovery system. To this end, a rule processor technology based on a heterogeneous computing platform was proposed in this paper. For the first time, reconfigurable FPGA hardware is used to accelerate rule processing, while an ARM general-purpose computing core configures, manages and monitors the process. The design is implemented on a Xilinx Zynq XC7Z030 chip. The experimental results show that the performance of the rule processor based on the hybrid architecture is 214 times that of a rule processor based on ARM alone, and typically better than that of an Intel i7-6700 CPU. Its performance-power ratio is 1.4 to 2.1 times that of an NVIDIA GeForce GTX 1080 Ti GPU and 70 times that of the CPU, which effectively improves the speed and efficiency of rule processing. The experimental data fully show that a hardware-accelerated rule processor based on a heterogeneous computing platform can effectively solve the speed and efficiency problems of rule processing, meet practical engineering requirements, and provide a basis for the design of a complete secure-string recovery system.
Active Safety Prediction Method for Automobile Collision Warning
TANG Min, WANG Dong-qiang, ZENG Xin-yu
Computer Science. 2020, 47 (4): 318-322.  doi:10.11896/jsjkx.190700137
Abstract PDF(2488KB) ( 1330 )   
References | Related Articles | Metrics
Research on automotive active collision avoidance systems mainly aims at early warning and automatic handling of imminent collisions, so as to effectively suppress traffic accidents. This paper studied the key technologies of vehicle anti-collision warning based on cameras, laser radar and inter-vehicle communication, and proposed an active safety prediction algorithm based on TTC and collision probability estimation for the overtaking and lane-changing phases of intelligent vehicles. Simulation tests were carried out on a 1:10-scale simulation platform; in 200 trials of simultaneous and reverse-approach emergency warnings for four intelligent vehicles the warning accuracy was 100%, which verifies the effectiveness of the proposed method.
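For reference, the TTC measure the warning algorithm builds on is simply the current gap divided by the closing speed, with a warning fired when TTC drops below a threshold. The sketch below is illustrative; the threshold values are assumptions, not the paper's calibration.

```python
def time_to_collision(gap_m, closing_speed_mps):
    """TTC = gap / closing speed; infinite if the gap is not closing."""
    if closing_speed_mps <= 0:          # opening or constant gap: no collision course
        return float("inf")
    return gap_m / closing_speed_mps

def warning_level(ttc_s, warn_s=3.0, brake_s=1.5):
    if ttc_s < brake_s:
        return "EMERGENCY"
    if ttc_s < warn_s:
        return "WARN"
    return "OK"

ttc = time_to_collision(gap_m=25.0, closing_speed_mps=10.0)   # 2.5 s
print(ttc, warning_level(ttc))                                # 2.5 WARN
```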