Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 48 Issue 9, 15 September 2021
Intelligent Data Governance Technologies and Systems
AI Governance and System: Current Situation and Trend
CHAO Le-men, YIN Xian-long
Computer Science. 2021, 48 (9): 1-8.  doi:10.11896/jsjkx.210600034
The main purpose of AI governance is to exploit the advantages of AI while reducing its risks. AI governance also aims to build responsible AI by embracing influencing factors such as technology, law, policy, standards, ethics, morality, safety, economy and society. AI governance has three aspects: individual intelligence governance, group intelligence governance, and governance of human-computer cooperation and symbiotic systems; it can be divided into three levels: the technical level, the ethical level, and the social and legal level. There are four key technologies for AI governance: intelligible AI, defense against adversarial attacks, modeling and simulation, and real-time audit. Industry is mostly concerned with developing responsible AI, as shown by the actual practice of AI governance at leading companies such as Google, IBM and Microsoft; tools for interpretability, privacy protection and fairness checking of AI systems are already in use. At present, the main research topics in AI governance include software-defined AI governance, key technologies of AI governance, AI governance evaluation in large-scale machine learning, AI governance based on federated learning, standardization of AI governance, enhancement of artificial intelligence, and human-in-the-loop AI training.
AI Governance Oriented Legal to Technology Bridging Framework for Cross-modal Privacy Protection
LEI Yu-xiao, DUAN Yu-cong
Computer Science. 2021, 48 (9): 9-20.  doi:10.11896/jsjkx.201000011
With the popularity of virtual communities among network users, virtual community groups have become a small society from which user-related privacy resources can be extracted through the "virtual traces" left by users' browsing and through the user-generated content they publish. Privacy resources can be classified by their characteristics into data resources, information resources and knowledge resources, which constitute the data, information, knowledge and wisdom graph (DIKW graph). Privacy resources in virtual communities go through four circulation processes, namely sensing, storage, transfer and processing, which are completed by three participants, the user, the AI system and the visitor, individually or in cooperation. The right to privacy includes the right to know, the right to participate, the right to forget and the right to supervise. By clarifying the scope of the privacy rights of the three participants in the four circulation processes, and combining this with the protection of privacy values, an anonymity protection mechanism, a risk assessment mechanism and a supervision mechanism are designed to build an AI governance legal framework for privacy protection in virtual communities.
Survey on Privacy Protection Solutions for Recommended Applications
DONG Xiao-mei, WANG Rui, ZOU Xin-kai
Computer Science. 2021, 48 (9): 21-35.  doi:10.11896/jsjkx.201100083
In the era of big data, industries of all kinds want to train recommendation models on user behavior data to provide users with accurate recommendations. The data used share common characteristics: they are huge in volume, carry sensitive information, and are easy to obtain. While delivering accurate recommendations and market profit, recommendation systems are effectively sharing users' private data in real time. Differential privacy, as a privacy protection technology, can elegantly mitigate the privacy leakage problem in recommendation applications. Differential privacy strictly defines privacy protection regardless of the background knowledge an attacker may possess, and provides quantitative evaluation methods so that the level of privacy protection offered on a dataset is comparable. First, the concept of differential privacy and research on mainstream recommendation algorithms are briefly described. Second, combined applications of differential privacy and recommendation algorithms, such as matrix factorization, deep learning recommendation and collaborative filtering, are analyzed, and extensive comparative experiments on recommendation algorithms based on differential privacy are reported. Then the application scenarios of combining differential privacy with each recommendation algorithm and the remaining open problems are discussed. Finally, suggestions are put forward for future directions of recommendation algorithms based on differential privacy.
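As a concrete illustration of the differential privacy mechanism discussed in this survey, the sketch below perturbs aggregate rating counts with the Laplace mechanism (noise scale = sensitivity/ε) before they feed a recommender; the function name, counts and sensitivity value are illustrative assumptions, not code from any surveyed system.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng):
    """Return value plus Laplace noise calibrated to sensitivity / epsilon."""
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
true_counts = np.array([120.0, 45.0, 7.0])      # hypothetical per-item rating counts
epsilon = 0.5                                   # privacy budget: smaller = more private
noisy_counts = np.array([laplace_mechanism(c, 1.0, epsilon, rng)
                         for c in true_counts]) # safe to release / train on
```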
Research on Big Data Governance for Science and Technology Forecast
WANG Jun, WANG Xiu-lai, PANG Wei, ZHAO Hong-fei
Computer Science. 2021, 48 (9): 36-42.  doi:10.11896/jsjkx.210500207
Moving from imitation to innovation, from following to leading, is not only a major change in the development of science and technology in China at this stage, but also a major strategic demand for national development. In recent years, scholars at home and abroad have studied trend analysis and hot-spot tracking for science and technology, but for lack of a systematic big data collection and governance system, the scope of data analysis and mining has often been limited to the single data sample of scientific and technological literature. Aiming at forward-looking prediction of scientific and technological development, this paper comprehensively analyzes the massive heterogeneous data that affect this development, such as scientific and technological literature of all kinds, scholar activity, forum hot spots and social comments. By building a data-driven big data governance system, it solves the data remediation problems arising in detection and discovery, accurate collection, cleaning and aggregation, fusion processing, model construction, and prediction and calculation. On the basis of this remediation, an LDA topic model is used to predict and analyze technology trends. The research results provide technical support for solving hidden-information discovery and relationship reasoning in massive scientific and technological big data.
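The LDA step mentioned above can be sketched with scikit-learn as follows; the three-document corpus and the topic count are placeholders, assuming the governed corpus has already been cleaned and aggregated.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["federated learning for privacy",          # placeholder corpus standing in
        "graph neural networks for drug design",   # for the governed sci-tech data
        "privacy preserving federated optimization"]
X = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topics = lda.transform(X)   # per-document topic mixtures; tracking how these
                                # mixtures shift over time yields the trend signal
```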
Time Aware Point-of-interest Recommendation
WANG Ying-li, JIANG Cong-cong, FENG Xiao-nian, QIAN Tie-yun
Computer Science. 2021, 48 (9): 43-49.  doi:10.11896/jsjkx.210400130
In location-based social networks (LBSN), users share their location and location-related content. Point-of-interest (POI) recommendation is an important application in LBSN that recommends locations likely to interest users. However, compared with other recommendation problems (such as product and movie recommendation), users' preference for POIs is particularly determined by the time feature. In this paper, the influence of the time feature on the POI recommendation task is explored, and a time-aware POI recommendation method called TAPR (Time Aware POI Recommendation) is proposed. The method constructs relation matrices at different time scales and applies tensor decomposition to the constructed matrices to obtain representations of users and POIs. Finally, it uses cosine similarity to compute similarity scores between users and unvisited POIs, and combines these with a user preference modeling algorithm to obtain the final recommendation score. Experimental results on two public datasets show that the proposed TAPR outperforms other POI recommendation methods.
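The final scoring step described in the abstract, cosine similarity between user and POI representations blended with a preference score, might look like the sketch below; the embeddings, the preference scores and the blending weight alpha are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def recommend(user_vec, poi_vecs, preference, alpha=0.5, top_n=5):
    """Blend cosine similarity with a user-preference score; return top-N POI indices."""
    sims = poi_vecs @ user_vec / (
        np.linalg.norm(poi_vecs, axis=1) * np.linalg.norm(user_vec) + 1e-12)
    score = alpha * sims + (1 - alpha) * preference   # final recommendation score
    return np.argsort(score)[::-1][:top_n]

rng = np.random.default_rng(0)
top = recommend(rng.random(16), rng.random((100, 16)), rng.random(100))
```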
Research on Urban Function Recognition Based on Multi-modal and Multi-level Data Fusion Method
ZHOU Xin-min, HU Yi-gui, LIU Wen-jie, SUN Rong-jun
Computer Science. 2021, 48 (9): 50-58.  doi:10.11896/jsjkx.210500220
The division and identification of urban functional areas is of great significance for analyzing their distribution and understanding the internal spatial structure of cities. This has stimulated demand for fusing multi-source geospatial data, especially urban remote sensing data and social sensing data. However, how to fuse these two kinds of data effectively remains a technical problem. To fuse urban remote sensing and social sensing data and improve the accuracy of urban function recognition, this paper takes remote sensing images and social sensing data as examples, introduces a multi-modal data fusion mechanism, and proposes a joint deep learning and ensemble learning model to infer urban regional functions. The model uses DenseNet and DPN networks to extract urban remote sensing image features and social sensing features from multi-source geospatial data, and carries out multi-level data fusion (feature fusion, decision fusion and hybrid fusion) to identify urban functions. The proposed model is verified on the URFC dataset, where hybrid fusion achieves an overall classification accuracy of 74.29%, a Kappa coefficient of 0.67 and an average F1 of 71.92%. Compared with the best classification method on single-modality data, these three evaluation indexes improve by 18.83%, 0.24 and 35.46% respectively. The experimental results show that the data fusion model has better classification performance, can effectively fuse remote sensing image data and social sensing data, and enables accurate identification of urban regional functions.
On Aircraft Trajectory Type Recognition Based on Frequent Route Patterns
SONG Jia-geng, ZHANG Fu-sang, JIN Bei-hong, DOU Zhu-mei
Computer Science. 2021, 48 (9): 59-67.  doi:10.11896/jsjkx.210100014
With the development of global positioning and radar technology, more and more trajectory data can be collected. In particular, trajectories generated by aircraft, ships and migratory birds are complicated and varied, free from any constraints on the ground. Recognizing the type of aircraft trajectories has important value for identifying the behaviors and intentions of flying objects. On the basis of identifying frequent route patterns, the paper proposes a new method consisting of a frequent-route-pattern extraction algorithm and a convolutional neural network model. The extraction algorithm first obtains key points from the compressed trajectory, next finds closed routes through the self-intersecting points of the trajectory, then discovers frequent patterns in the closed routes and treats them as the basis of classification. The model then recognizes the trajectory type via image analysis. This paper conducts extensive experiments on real aircraft trajectory data disclosed on the FlightRadar24 website as well as on simulated data. The experimental results show that the method can effectively identify complex trajectory types. Compared with LeNet-5 CNN classification without trajectory extraction, the method has superior performance, achieving an average trajectory classification accuracy of more than 95%.
Heterogeneous Information Network Embedding with Incomplete Multi-view Fusion
ZHENG Su-su, GUAN Dong-hai, YUAN Wei-wei
Computer Science. 2021, 48 (9): 68-76.  doi:10.11896/jsjkx.210500203
Heterogeneous information network (HIN) embedding maps complex heterogeneous information to a low-dimensional dense vector space, which benefits the computation and storage of network data. Most existing multi-view HIN embedding methods consider multiple semantic relationships between nodes but ignore the incompleteness of views: most views are incomplete, and directly fusing multiple incomplete views degrades the embedding model. To address this problem, we propose a novel HIN embedding model with incomplete multi-view fusion, named IMHE. The key idea of IMHE is to aggregate neighbors from other views to reconstruct the incomplete ones: since different views describe the same HIN, neighbors in other views can restore the structural information of missing nodes. IMHE first generates node sequences in different views and leverages multi-head self-attention to obtain single-view embeddings. For each incomplete view, it finds the k-order neighbors of the missing nodes in other views, then aggregates the embeddings of these neighbors in the incomplete view to generate new embeddings for the missing nodes. Finally, IMHE uses multi-view canonical correlation analysis to obtain a joint embedding of nodes, thereby extracting the hidden semantic relationships of multiple views simultaneously. Experimental results on three real-world datasets show that the proposed method is superior to state-of-the-art methods.
Cost-sensitive Convolutional Neural Network Based Hybrid Method for Imbalanced Data Classification
HUANG Ying-qi, CHEN Hong-mei
Computer Science. 2021, 48 (9): 77-85.  doi:10.11896/jsjkx.200900013
Imbalanced classification is a common problem in data mining. In general, the skewed distribution of data makes the classification performance of classifiers unsatisfactory. As an efficient data mining tool, the convolutional neural network is widely used in classification tasks, but if its training is adversely affected by data imbalance, classification accuracy on minority classes decreases. Aiming at the classification of two-class imbalanced data, this paper proposes a hybrid method based on a cost-sensitive convolutional neural network. The method first combines the density peak clustering algorithm with SMOTE and preprocesses the data through oversampling to reduce the imbalance of the original dataset. Cost sensitivity is then used to give different weights to the different classes in the imbalanced data, additionally taking into account the Euclidean distance between predicted and label values; different cost losses are assigned to the majority and minority classes to construct a cost-sensitive convolutional neural network model that improves the recognition rate on minority classes. Six different datasets are used to verify the effectiveness of the proposed method. The experimental results show that it improves the classification performance of the convolutional neural network model on imbalanced data.
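The cost-sensitive weighting idea, charging more for misclassifying the minority class, is commonly realized as a class-weighted cross-entropy; a minimal PyTorch sketch follows, in which the inverse-frequency weights are an assumption rather than the paper's exact cost assignment.

```python
import torch
import torch.nn as nn

counts = torch.tensor([900.0, 100.0])        # hypothetical majority/minority sizes
weights = counts.sum() / (2 * counts)        # minority class gets the larger weight
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 2)                   # CNN outputs for one mini-batch
labels = torch.randint(0, 2, (8,))
loss = criterion(logits, labels)             # minority errors now cost more
```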
Historical Driving Track Set Based Visual Vehicle Behavior Analytic Method
LUO Yue-tong, WANG Tao, YANG Meng-nan, ZHANG Yan-kong
Computer Science. 2021, 48 (9): 86-94.  doi:10.11896/jsjkx.200900040
With the continuous development of smart cities, vehicle tracks can be acquired automatically from traffic checkpoints, which lays a foundation for track-based vehicle behavior analysis. Since checkpoint positions are fixed, a vehicle trajectory is expressed as a checkpoint sequence. Therefore, checkpoints and trajectories are first mapped to words and sentences respectively, and a semantic similarity method is used to calculate trajectory similarity. Then, based on trajectory similarity, track entropy is proposed to measure the regularity of all tracks of a vehicle. Finally, trajectory entropy is used to analyze vehicle behavior characteristics; for example, a vehicle with low trajectory entropy drives particularly regularly and is likely to be a commuter vehicle, whereas a taxi typically has very high trajectory entropy. To facilitate in-depth analysis, this paper further provides a visual analysis system with multiple linked views, which allows users to compare vehicle trajectory entropies and, combined with clustering analysis and related interactions, helps users find meaningful vehicle behaviors. By analyzing the checkpoint dataset of Kunming for February 2019, vehicle travel behaviors and their characteristics in different trajectory entropy intervals can be found effectively, which proves the effectiveness of the proposed method.
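Track entropy as described, a regularity measure over all trajectories of one vehicle, can be sketched as the Shannon entropy of the vehicle's trajectory distribution over similarity clusters; the cluster labels below are placeholders for the output of the semantic-similarity clustering step.

```python
import math
from collections import Counter

def trajectory_entropy(cluster_labels):
    """Shannon entropy of one vehicle's trajectories over similarity clusters.
    Low entropy = regular travel (likely commuter); high = irregular (taxi-like)."""
    total = len(cluster_labels)
    probs = [c / total for c in Counter(cluster_labels).values()]
    return -sum(p * math.log2(p) for p in probs)

print(trajectory_entropy(["home-work"] * 18 + ["mall", "airport"]))  # ~0.57, regular
```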
Railway Passenger Co-travel Prediction Based on Association Analysis
LI Si-ying, XU Yang, WANG Xin, ZHAO Ruo-cheng
Computer Science. 2021, 48 (9): 95-102.  doi:10.11896/jsjkx.200700097
With the fast development of transportation technology, the railway has become one of the main choices for business travel, vacations and visits, and co-travel has become more and more common. Based on the co-travel relationship, one can construct a co-travel network, in which each node represents a passenger and an edge indicates the co-travel frequency between the two passengers it connects, and perform link prediction on this network so that personalized services and products can be provided. In light of this, this paper proposes a novel approach to predicting potential co-travel relationships. Specifically, we first propose two types of co-travel graph pattern association rules, extended from their traditional counterparts, which can be used to predict new co-travel relationships and co-travel frequencies, respectively. We then decompose the mining problem into three sub-problems, i.e., frequent co-travel pattern mining, rule generation and association analysis, and develop parallel and centralized algorithms for these sub-problems. Extensive experimental studies on large real-life datasets show that our approach can predict potential co-travel relationships efficiently and accurately, with accuracies higher than 50% for the two types of rules, substantially superior to the traditional method (e.g., Jaccard, with an accuracy of 24%).
Biased Deep Distance Factorization Algorithm for Top-N Recommendation
QIAN Meng-wei, GUO Yi
Computer Science. 2021, 48 (9): 103-109.  doi:10.11896/jsjkx.200800129
Since traditional matrix factorization algorithms are mostly based on shallow linear models, it is difficult for them to learn latent factors of users and items at a deep level, and they are prone to overfitting when the dataset is sparse. To deal with this problem, this paper proposes a biased deep distance factorization algorithm, which not only alleviates data sparsity but also learns distance feature vectors with stronger representation ability. Firstly, an interaction matrix is constructed from the explicit and implicit data of users and items, and converted into a corresponding distance matrix. Secondly, the distance matrix is fed, by row and by column respectively, into a deep neural network with a bias layer, which learns distance feature vectors of users and items with non-linear features. Finally, the distance between a user and an item is calculated from the distance feature vectors, and a Top-N item recommendation list is generated according to the distance values. The experimental results show that the Precision, Recall, MAP, MRR and NDCG of this algorithm improve significantly over other mainstream recommendation algorithms on four different datasets.
Smart Interactive Guide System for Big Data Analytics
YU Yue-zhang, XIA Tian-yu, JING Yi-nan, HE Zhen-ying, WANG Xiao-yang
Computer Science. 2021, 48 (9): 110-117.  doi:10.11896/jsjkx.200900083
Traditional big data tools are generally built for professional data analysts and tend to be hard to get started with, poor in interaction, and not intelligent enough. The smart interactive guide system is a set of big data analysis auxiliary tools developed to address these problems of interactive big data analysis systems. The system not only implements core key technologies such as user intention understanding, data sampling and column recommendation, visualization recommendation, and analysis method recommendation, but also offers a good graphical interface and a humanized, intelligent interactive experience. While meeting users' diverse interactive analysis needs, it also responds very quickly. Users can go back to any step of the analysis process at any time to reselect the method to execute, and the system can be quickly integrated with various analysis applications through its interface and deployed to different scenarios. In experimental tests, the average interaction time of the system is within 3 seconds, and interaction execution is about 3 times faster than with traditional analysis methods. Use-case testing also shows higher satisfaction than with traditional tools. Through its pursuit of ease of use, timeliness, interactivity and intelligence, the smart interactive guide system allows users of different backgrounds to complete their big data analysis goals.
Public Opinion Sentiment Big Data Analysis Ensemble Method Based on Spark
DAI Hong-liang, ZHONG Guo-jin, YOU Zhi-ming, DAI Hong-ming
Computer Science. 2021, 48 (9): 118-124.  doi:10.11896/jsjkx.210400280
With the development of mobile Internet technology, social media has become the main channel for the public to share views and express emotions. Sentiment analysis of social media texts during major social events can effectively monitor public opinion. To address the low accuracy and efficiency of existing Chinese social media sentiment analysis algorithms, an ensemble sentiment analysis big data method (S-FWS) based on the Spark distributed system is proposed. Firstly, new words are discovered by calculating PMI association degrees after pre-segmentation with the Jieba library. Then, text features are extracted taking word importance into account, and feature selection is performed with Lasso. Finally, to remedy the traditional stacking framework's neglect of feature importance, the accuracy of the primary learners is used to weight the probabilistic features, and polynomial features are constructed to train the secondary learner. A variety of algorithms are compared in stand-alone mode and on the Spark platform respectively. Results show that the proposed S-FWS method has clear advantages in accuracy and time consumption, that the distributed system greatly improves the running efficiency of the algorithms, and that time consumption gradually decreases as working nodes are added.
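The PMI-based new-word discovery in the first step scores a candidate word by how much more often its parts co-occur than independence would predict; a minimal sketch follows (the toy character stream and acceptance threshold are assumptions).

```python
import math
from collections import Counter

def pmi(bigram_count, w1_count, w2_count, total):
    """Pointwise mutual information of a candidate two-character word."""
    return math.log2((bigram_count / total) /
                     ((w1_count / total) * (w2_count / total)))

tokens = list("微博热搜微博评论微博转发")                  # toy character stream
unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
score = pmi(bigrams[("微", "博")], unigrams["微"], unigrams["博"], len(tokens))
# score = 2.0 here; accept "微博" as a new word if score exceeds a chosen threshold
```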
Computer Graphics & Multimedia
Deep Learning for Abnormal Crowd Behavior Detection: A Review
XU Tao, TIAN Chong-yang, LIU Cai-hua
Computer Science. 2021, 48 (9): 125-134.  doi:10.11896/jsjkx.201100015
With the increasing demands of the security industry, abnormal crowd behavior detection has become a hot research issue in computer vision. It aims to model and analyze pedestrian behavior in surveillance videos, distinguish normal from abnormal behaviors in crowds, and discover disasters and accidents in time. This paper summarizes a large number of deep learning algorithms for abnormal crowd behavior detection. First, the detection task and its current research status are briefly introduced. Second, the research progress of convolutional neural networks, auto-encoders and generative adversarial networks on abnormal crowd behavior detection is discussed separately. Then, commonly used datasets are listed, and the performance of deep learning methods on the UCSD pedestrian datasets is compared and analyzed. Finally, the difficulties of abnormal crowd behavior detection are summarized, and future research directions are discussed.
Vehicle Speed Measurement Method Based on Binocular Vision
CHANG Zi-ting, SHI Yu-qing, WANG Jun, YU Ming-he, YAO Lan, ZHAO Zhi-bin
Computer Science. 2021, 48 (9): 135-139.  doi:10.11896/jsjkx.201000047
Real-time speed measurement is vital for assisting truck weighing at expressway entrances as a truck passes over a scale. Binocular vision technology has the advantages of low cost, easy deployment and high stability, making it promising for this application. The key to binocular-vision-based speed measurement is measuring the displacement of a target, which in turn depends on accurate target matching across multiple frames. This paper presents a region-matching alignment algorithm based on spatial location and a spatial displacement calculation method based on template matching. Specifically, the relative spatial location of a wheel is introduced to constrain its matching area, which effectively reduces mismatching between similar wheels; template matching is used to track the key points of a wheel and obtain its spatial displacement between frames. Experiments are conducted on practical traffic video data recorded at an expressway entrance. The results show that, compared with other binocular-vision-based speed measurement methods, our method reduces the RMSE of the speed measurements by 20%~40% and is more suitable for the real scene in which vehicles pass the measurement point at the expressway entrance at relatively high speed (10~20 km/h).
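The template-matching step used to track wheel key points across frames can be sketched with OpenCV as below; the image file names and confidence threshold are placeholders, and the conversion from pixel displacement to metric speed via the stereo geometry is omitted.

```python
import cv2

frame = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)    # placeholder frame at t1
wheel = cv2.imread("wheel_patch.png", cv2.IMREAD_GRAYSCALE) # template cut from frame t0

result = cv2.matchTemplate(frame, wheel, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)  # best-match score and top-left corner
if max_val > 0.8:                               # assumed confidence threshold
    x, y = max_loc                              # wheel position at t1; displacement
                                                # between frames over Δt gives speed
```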
High-resolution Image Building Target Detection Based on Edge Feature Fusion
HE Xiao-hui, QIU Fang-bing, CHENG Xi-jie, TIAN Zhi-hui, ZHOU Guang-sheng
Computer Science. 2021, 48 (9): 140-145.  doi:10.11896/jsjkx.200800002
Building target detection in high-resolution remote sensing images has wide application value in territorial planning, geographic monitoring, smart cities and other fields. However, because the background of remote sensing images is complex, some detailed features of building targets are hard to distinguish from the background, and the detection task is prone to problems such as distortion and missing parts of building outlines. Aiming at this problem, an adaptive weighted edge feature fusion network (VAF-Net) is designed. For remote sensing building detection, it extends the classic encoder-decoder network U-Net and makes up for the detailed features missing from the basic network by fusing RGB feature maps and edge feature maps. At the same time, the fusion weights are updated automatically through network learning, achieving adaptive weighted fusion that makes full use of the complementary information of different features. The method is tested on the Massachusetts Buildings dataset, and its accuracy, recall and F1-score reach 82.1%, 82.5% and 82.3% respectively; the comprehensive index F1-score increases by about 6% compared with the basic network. VAF-Net effectively improves the performance of encoder-decoder networks on high-resolution building target detection and has good practical value.
A Person Re-identification Method Based on Improved Triple Loss and Feature Fusion
ZHANG Xin-feng, SONG Bo
Computer Science. 2021, 48 (9): 146-152.  doi:10.11896/jsjkx.200800200
Person re-identification aims to retrieve specific pedestrian targets from a target database across cameras, and has important application value in video surveillance. The current research difficulty is that sample images show large intra-class differences and small inter-class differences, so the key is how to design and train a deep neural network to extract more discriminative features from pedestrian images. In this paper, we propose a network structure that learns global and local features jointly and extracts both simultaneously. Since each part of the local features contributes differently to the description of a pedestrian, this paper proposes a local feature fusion method that adaptively generates the weight of each local feature; the local and global features are then combined into a more comprehensive pedestrian representation. In addition, since the optimization objective of the previous triplet loss based on hard sample mining is fuzzy, this paper proposes an improved triplet loss function based on hard sample mining. The effectiveness of the proposed method is verified on the mainstream person re-identification datasets Market-1501 and DukeMTMC-reID, where mAP reaches 82.16% and 74.02%, and Rank-1 reaches 92.75% and 86.8%, respectively.
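For reference, the standard batch-hard triplet loss that the paper's improved loss builds on (this sketch is the conventional formulation, not the authors' modified objective) can be written in PyTorch as:

```python
import torch

def batch_hard_triplet_loss(embeddings, labels, margin=0.3):
    """For each anchor, mine the hardest positive (farthest same identity)
    and hardest negative (closest different identity) in the batch."""
    dist = torch.cdist(embeddings, embeddings)            # pairwise L2 distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    hardest_pos = (dist * same.float()).max(dim=1).values
    inf = torch.full_like(dist, float("inf"))
    hardest_neg = torch.where(same, inf, dist).min(dim=1).values
    return torch.relu(hardest_pos - hardest_neg + margin).mean()

loss = batch_hard_triplet_loss(torch.randn(32, 256), torch.randint(0, 8, (32,)))
```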
Non-negative Matrix Factorization Based on Spectral Reconstruction Constraint for Hyperspectral and Panchromatic Image Fusion
GUAN Zheng, DENG Yang-lin, NIE Ren-can
Computer Science. 2021, 48 (9): 153-159.  doi:10.11896/jsjkx.200900054
An effective non-negative matrix factorization algorithm based on a spectral reconstruction constraint is proposed for unmixing hyperspectral and panchromatic images. Firstly, the algorithm adds a regularization term that minimizes the spectral reconstruction error to the non-negative matrix factorization of the hyperspectral image, and searches for the optimal regularization parameter through multi-objective optimization so that the spectral signature matrix contains more real spectral features. Then, the panchromatic image is factorized by non-negative matrix factorization to obtain an abundance matrix carrying the image details. Finally, the fusion result is reconstructed from the spectral signature matrix and the abundance matrix. The experimental results show that the fusion results of the proposed algorithm retain more details of the panchromatic image while effectively reducing spectral distortion, and outperform traditional algorithms in both visual effect and objective evaluation.
Hyperspectral Image Denoising Based on Non-local Similarity and Weighted-truncated Nuclear Norm
ZHENG Jian-wei, HUANG Juan-juan, QIN Meng-jie, XU Hong-hui, LIU Zhi
Computer Science. 2021, 48 (9): 160-167.  doi:10.11896/jsjkx.200600135
Due to instrumental noise, hyperspectral images (HSI) are often corrupted to some extent by Gaussian noise, which seriously affects subsequent image processing, so denoising is an important pre-processing step. Moreover, because hyperspectral data are high-dimensional, running efficiency matters alongside visual quality. To improve both efficiency and efficacy, we first project the high-dimensional hyperspectral image into a spectral subspace and learn an orthogonal basis matrix. On that basis, spatial non-local similarity and the global spectral low-rank property are jointly exploited to denoise the low-dimensional subspace images, and the restored low-dimensional images are finally combined with the orthogonal basis to recover the original HSI data. In the non-local denoising step, tensor cubes are formed by non-local similarity and grouped into tensor groups by block matching; owing to the explicit neighborhood similarity, these groups are strongly low-rank. To better reveal the low-rank property of each tensor group, we propose a weighted and truncated nuclear norm that combines the advantages of the weighted nuclear norm and the truncated nuclear norm, together with an improved optimization scheme based on the accelerated proximal gradient for a fast solution. Extensive simulation results show that our denoising scheme outperforms state-of-the-art methods in objective metrics and better preserves visually salient structural features.
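A weighted, truncated singular-value shrinkage of the kind described, which leaves the largest r singular values untouched and soft-thresholds the remainder, can be sketched as below; the inverse-magnitude weighting rule is an illustrative assumption, not the paper's exact operator.

```python
import numpy as np

def weighted_truncated_svt(M, r=2, tau=1.0, eps=1e-6):
    """Keep the top-r singular values intact; shrink the rest with weights
    inversely proportional to magnitude, so larger values shrink less."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    w = np.zeros_like(s)
    w[r:] = tau / (s[r:] + eps)          # assumed reweighting rule
    s_new = np.maximum(s - w, 0.0)       # soft-threshold the tail
    s_new[:r] = s[:r]                    # truncation: top-r left untouched
    return (U * s_new) @ Vt

denoised_group = weighted_truncated_svt(np.random.rand(64, 20))
```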
Improved YOLOv3 Remote Sensing Target Detection Based on Improved Dense Connection and Distributional Ranking Loss
YUAN Lei, LIU Zi-yan, ZHU Ming-cheng, MA Shan-shan, CHEN Lin-zhou-ting
Computer Science. 2021, 48 (9): 168-173.  doi:10.11896/jsjkx.200800001
Aiming at the problems of small object size, uneven sample distribution and unclear features in remote sensing images, an improved YOLOv3 object detection algorithm is proposed. The Stitcher data augmentation method is used to address the uneven distribution of small object samples. VOVDarkNet-53 is proposed: the residual modules of the fourth downsampling stage in DarkNet-53 are reduced from eight to four, and the dense connection mode of VOVNet is adopted to extract lower-level features of small objects and increase the network's receptive field. The distributional ranking loss replaces the classification loss in YOLOv3 to address the imbalance between positive and negative samples in single-stage object detectors. Comparative experiments with YOLOv3 and the improved algorithm on the HRRSD remote sensing dataset demonstrate that the improved YOLOv3 achieves better performance: detection accuracy for small and medium objects improves by 7.2% and 2.1% respectively, and although detection accuracy for large objects drops by 1%, the mean average precision (mAP) improves by 4.1%, with recall and precision also improved.
Face Image Inpainting with Generative Adversarial Network
LIN Zhen-xian, ZHANG Meng-kai, WU Cheng-mao, ZHENG Xing-ning
Computer Science. 2021, 48 (9): 174-180.  doi:10.11896/jsjkx.200800014
Face image inpainting has been a hot topic in image processing research in recent years. Because too much semantic information is lost, inpainting large missing areas of face images is a difficult problem. Aiming at this problem, a step-by-step image inpainting algorithm based on a generative adversarial network is proposed, which divides the task into two steps. Firstly, face images are completed by a pre-completion network, and the pre-completed images are feature-enhanced by an enhancement network; the discriminator judges the differences between the pre-completed images, the enhanced images and the ideal image respectively, and a long-term memory unit connects the information flow of the two parts. Secondly, the adversarial loss, content loss and total variation loss are combined to improve inpainting effectiveness. Experiments on the CelebA dataset show that the algorithm improves PSNR by 16.84%~22.85% and SSIM by 10%~12.82% compared with other typical image inpainting algorithms.
Multi-focus Image Fusion Method Based on PCANet in NSST Domain
HUANG Xiao-sheng, XU Jing
Computer Science. 2021, 48 (9): 181-186.  doi:10.11896/jsjkx.200800064
Image fusion methods based on deep learning models have attracted much attention in recent years, but traditional deep learning models usually require a time-consuming, complex training process and difficult parameter tuning on large datasets. To overcome these problems, a multi-focus image fusion method based on the simple deep learning model PCANet in the NSST domain is proposed. Firstly, multi-focus images are used to train a two-stage PCANet to extract image features. Then, the input source images are decomposed by NSST to obtain multi-scale, multi-directional representations. For the low-frequency subbands, the trained PCANet extracts image features and the nuclear norm is used to construct an effective feature space for fusion; the high-frequency subbands are fused with a regional-energy-maximization rule. Finally, the coefficients fused under the different rules are reconstructed by inverse NSST to obtain a clear target image. The experimental results show that the training and fusion speed of the algorithm is 43% higher than that of the CNN-based method, and its average gradient, spatial frequency and entropy are 5.744, 15.560 and 7.059 respectively, comparable or superior to existing fusion methods.
Glioma Segmentation Network Based on 3D U-Net++ with Fusion Loss Function
ZHANG Xiao-yu, WANG Bin, AN Wei-chao, YAN Ting, XIANG Jie
Computer Science. 2021, 48 (9): 187-193.  doi:10.11896/jsjkx.200800099
Glioma is the most common primary brain tumor, caused by cancerous glial cells in the brain and spinal cord. Reliable segmentation of glioma tissue from multi-modal MRI is of great clinical value, but it is difficult to automate because of the complexity of the glioma and surrounding tissues and the boundary blurring caused by invasion. In this paper, a 3D U-Net++ network with a fusion loss function is constructed to segment the different regions of gliomas. The network densely nests U-Net models of different depths and uses the outputs of its four branches for deep supervision, so that combined deep and shallow features are better used for segmentation; the Dice loss and the cross-entropy loss are combined into a fusion loss function to improve segmentation accuracy for small regions. On an independent test set split from the public dataset of the 2019 Multimodal Brain Tumor Segmentation Challenge (BraTS), the method is evaluated with the Dice coefficient, 95% Hausdorff distance, mIoU (mean intersection over union) and PPV (precision). For the whole tumor, tumor core and enhancing tumor regions, the Dice coefficients are 0.873, 0.814 and 0.709; the 95% Hausdorff distances are 15.455, 12.475 and 12.309; the mIoU values are 0.789, 0.720 and 0.601; and the PPV values are 0.898, 0.846 and 0.735, respectively. Compared with the baseline 3D U-Net and 3D U-Net with deep supervision, the method makes better use of deep and shallow multi-modal information and spatial information, and the fusion loss combining the Dice coefficient and cross-entropy effectively improves segmentation accuracy in each region, especially small regions such as the enhancing tumor.
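The fusion loss combining Dice and cross-entropy can be sketched in PyTorch as follows (binary case for brevity); the equal weighting of the two terms is an assumption, not the paper's stated coefficients.

```python
import torch
import torch.nn.functional as F

def fusion_loss(logits, target, smooth=1.0, alpha=0.5):
    """alpha * cross-entropy + (1 - alpha) * soft Dice loss; target is a float mask."""
    ce = F.binary_cross_entropy_with_logits(logits, target)
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum()
    dice = (2 * inter + smooth) / (probs.sum() + target.sum() + smooth)
    return alpha * ce + (1 - alpha) * (1 - dice)   # Dice term favors small regions

loss = fusion_loss(torch.randn(1, 1, 8, 8, 8), torch.ones(1, 1, 8, 8, 8))
```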
Algal Bloom Discrimination Method Using SAR Image Based on Feature Optimization Algorithm
WU Lin, BAI Lan, SUN Meng-wei, GOU Zheng-wei
Computer Science. 2021, 48 (9): 194-199.  doi:10.11896/jsjkx.200800142
Frequent outbreaks of algal blooms in inland lakes have seriously affected the safety of the surface water environment and greatly hindered the construction of ecological civilization in China. Taking full advantage of SAR (Synthetic Aperture Radar) remote sensing technologies, large-scale, periodic algal bloom discrimination and monitoring can be realized, which is of great practical significance for protecting and supervising the water environment. Based on research into SAR remote sensing target recognition, this paper proposes an algal bloom discrimination method with feature optimization. After in-depth analysis and extraction of algal bloom image features, the ReliefF algorithm is used to obtain an optimal feature set of 10 features out of the full 22. A BP (Back Propagation) neural network then serves as the classifier of this discrimination method in a number of comparative experiments. The overall accuracy of the proposed method is 81.39%, 19.38% higher than before optimization. The experimental results show that the optimal feature set not only greatly reduces algorithm complexity but also effectively improves the discrimination accuracy of algal blooms, and the method has practical value for further deployment.
Cross-modal Retrieval Combining Deep Canonical Correlation Analysis and Adversarial Learning
LIU Li-bo, GOU Ting-ting
Computer Science. 2021, 48 (9): 200-207.  doi:10.11896/jsjkx.200600119
This paper proposes a cross-modal retrieval method (DCCA-ACMR) that integrates deep canonical correlation analysis and adversarial learning. The method improves the utilization of unlabeled samples, learns more powerful feature projection models, and improves cross-modal retrieval accuracy. Specifically, under the DCGAN framework: 1) a deep canonical correlation analysis constraint is added between the image and text single-modality representation layers to construct an image-text feature projection model that fully exploits the semantic relevance of sample pairs; 2) the feature projection model serves as the generator and a modality classification model serves as the discriminator, together forming the image-text cross-modal retrieval model; 3) the common-subspace representation of samples is learned from both labeled and unlabeled samples through the contest between generator and discriminator. We evaluate the proposed method by mean average precision (mAP) on two public datasets, Wikipedia and NUSWIDE-10k; the average mAP values of image-to-text and text-to-image retrieval are 0.556 and 0.563 respectively on the two datasets. Experimental results show that DCCA-ACMR is superior to existing representative methods.
Fast Local Collaborative Representation Based Classifier and Its Applications in Face Recognition
CHEN Chang-wei, ZHOU Xiao-feng
Computer Science. 2021, 48 (9): 208-215.  doi:10.11896/jsjkx.200800155
To reduce the high computational time complexity of the collaborative representation based classification (CRC) method, this paper proposes a fast local collaborative representation based classifier for face recognition that exploits the positive correlation between reconstruction coefficients and sample labels. Firstly, the least squares method solves the linear regression problem with an L2-norm constraint; then the negative reconstruction coefficients, which are unsuitable for classification, are discarded; finally, a maximum similarity criterion, instead of the reconstruction criterion in CRC, determines the label of the test sample. The proposed method performs better by taking local similarity into account, and consumes much less time than CRC because it avoids sample reconstruction. Experimental results on the AR and CMU PIE datasets demonstrate that it is much faster than CRC and achieves better recognition accuracy than several state-of-the-art methods under varying illuminations, expressions and angles of facial images.
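The core pipeline, ridge-regularized least squares, discarding negative coefficients, then a maximum-similarity decision, can be sketched as below; reading "similarity" as the summed positive coefficients per class is our interpretation for illustration, not necessarily the paper's exact criterion.

```python
import numpy as np

def fast_local_crc(X, labels, y, lam=0.01):
    """X: (d, n) training faces as columns; labels: (n,) class ids; y: (d,) test face.
    Solve min ||y - Xc||^2 + lam*||c||^2, zero out negative coefficients,
    and assign the class whose coefficients sum highest (no reconstruction step)."""
    n = X.shape[1]
    c = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
    c = np.where(c > 0, c, 0.0)                       # keep positive correlations only
    classes = np.unique(labels)
    scores = [c[labels == k].sum() for k in classes]  # maximum-similarity criterion
    return classes[int(np.argmax(scores))]
```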
Real-time Binocular Depth Estimation Algorithm Based on Semantic Edge Drive
ZHANG Peng, WANG Xin-qing, XIAO Yi, DUAN Bao-guo, XU Hong-hui
Computer Science. 2021, 48 (9): 216-222.  doi:10.11896/jsjkx.200800203
Aiming at the ill-posed regions in stereo matching with blurred disparity edges, unsmooth disparity, discontinuous disparity on a single object, and holes, a lightweight real-time binocular depth estimation algorithm is proposed. It uses semantic tags obtained by semantic segmentation of the scene and edge detail images obtained by edge detection as auxiliary losses, with the ground-truth image as the main loss, to construct a joint loss function that better supervises disparity map generation. In addition, a lightweight feature extraction module is constructed to reduce redundancy in the feature extraction stage, simplifying the extraction steps and improving the real-time performance and lightness of the network. Finally, a coarse-to-fine scheme gradually refines the disparity map by fusing deformed low-resolution disparity maps with high-resolution feature maps, generating disparity maps of different scales in stages while progressively enriching details, thus producing the final accurate disparity map. The algorithm achieves a 3px error rate of 1.72% on the KITTI 2012 dataset; on the Middlebury 2014 dataset, the Vintage, Playroom and Recycle error rates are 1.23%, 2.23% and 1.65% respectively. On the Scene Flow dataset the computation time is 0.76 s with 2.4 GB of memory occupied, which significantly improves the accuracy and computational efficiency of stereo matching in ill-posed regions, meets real-time requirements in engineering practice, and offers important guidance for real-time 3D reconstruction tasks.
Artificial Intelligence
Overview of SLAM Algorithms for Mobile Robots
TIAN Ye, CHEN Hong-wei, WANG Fa-sheng, CHEN Xing-wen
Computer Science. 2021, 48 (9): 223-234.  doi:10.11896/jsjkx.200700152
As a localization and map construction method, SLAM (Simultaneous Localization and Mapping) is widely used in robotics. SLAM enables a robot in an unfamiliar environment to perceive environmental information, build an environment map through its on-board sensors, and estimate its own pose, so that it can move in an unknown environment. With in-depth study, research results in the SLAM field have become very rich, but discussions of indoor SLAM remain incomplete. This paper gives a comprehensive statement by summarizing and comparing existing SLAM methods. It first introduces the technical status of SLAM and the classification of SLAM under different sensors in indoor scenes; second, it presents the classic SLAM framework; third, it describes the principles of SLAM algorithms for different sensor types; fourth, it discusses the limitations of traditional indoor SLAM algorithms and identifies two research directions, SLAM based on multi-sensor fusion and SLAM based on deep learning; finally, it suggests future development trends and application fields of SLAM.
Action Constrained Deep Reinforcement Learning Based Safe Automatic Driving Method
DAI Shan-shan, LIU Quan
Computer Science. 2021, 48 (9): 235-243.  doi:10.11896/jsjkx.201000084
With the development of artificial intelligence, the field of autonomous driving is also growing, and deep reinforcement learning (DRL) is one of its main research methods. DRL algorithms have been reported to achieve excellent performance in many control tasks. However, the unconstrained exploration in DRL learning usually restricts its application to automatic driving: in common reinforcement learning (RL) algorithms, an agent must select an action to execute in each state even though this action may cause a crash, deteriorate performance, or fail the task. To solve this problem, this paper proposes an action-constrained soft actor-critic method (CSAC) in which a 'NO-OP' (NO-Option) mechanism identifies and replaces inappropriate actions, and tests the algorithm on lane-keeping tasks. The method first limits the environmental reward reasonably: when the rotation angle of the driverless car is too large it shakes, so a penalty term is added to the reward function to keep the car out of dangerous states as far as possible. The contributions of this paper are as follows: first, we incorporate an action constraint function into the SAC algorithm, achieving faster learning and higher stability; second, we propose a reward-setting framework that overcomes the shaking and instability of driverless cars, achieving better performance; finally, we train the model in the Unity virtual environment to evaluate its performance and successfully transplant it to a Donkey driverless car.
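The reward shaping described, penalizing overly large rotation angles to suppress shaking, might look like the sketch below; the base speed reward, threshold and penalty coefficient are illustrative assumptions, not the paper's exact reward function.

```python
def lane_keeping_reward(speed, steering_angle, max_angle=0.35, penalty=2.0):
    """Reward forward progress, minus a penalty when the steering angle
    exceeds a comfort threshold (discourages shaking and unsafe states)."""
    reward = speed
    if abs(steering_angle) > max_angle:
        reward -= penalty * (abs(steering_angle) - max_angle)
    return reward

print(lane_keeping_reward(speed=1.0, steering_angle=0.5))  # 1.0 - 2.0*0.15 = 0.7
```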
Overlapping Community Detection Algorithm Based on Subgraph Structure
CHEN Xiang-tao, ZHAO Mei-jie, YANG Mei
Computer Science. 2021, 48 (9): 244-250.  doi:10.11896/jsjkx.201100010
Local community detection algorithms usually select seed nodes for community detection. To improve the effectiveness of seed node selection, we propose an overlapping community detection algorithm based on subgraph structure (SUSBOCD). The algorithm introduces a new measure of node importance that considers not only the number of a node's neighbors but also the density of connections among them. First, SUSBOCD selects the most important unvisited node and its most similar neighbor, and merges the two nodes and their common neighbors to form an initial seed subgraph; this runs iteratively until all nodes are visited. Second, seed subgraphs are judged for similarity according to their neighborhood information and merged if similar, forming the initial community structure; this also runs iteratively until all seed subgraphs are visited. Finally, the communities are optimized: nodes without an assigned community are added to their most similar community, and community structures with high overlap are merged. Experiments on real and artificial networks show that SUSBOCD effectively improves the quality of overlapping community partitions on the three evaluation indexes ONMI, EQ and Omega.
Predicting Drug Molecular Properties Based on Ensembling Neural Networks Models
XIE Liang-xu, LI Feng, XIE Jian-ping, XU Xiao-jun
Computer Science. 2021, 48 (9): 251-256.  doi:10.11896/jsjkx.200700066
Artificial intelligence (AI) methods have achieved great success in predicting the chemical properties and bioactivity of drug molecules in bioinformatics, and neural networks have gained wide application in drug discovery. However, shallow neural networks (SNN) give lower accuracy, while deep neural networks (DNN) easily overfit. In traditional machine learning, model ensembling is expected to improve the predictive performance of weak learners; this work applies the model ensembling strategy, for the first time, to predicting the properties of drug molecules. By encoding molecular structures, the combination strategies of averaging and stacking are adopted to increase the accuracy of predicting the pKa of drug molecules. Compared with a DNN, the stacking strategy gives the best predictive accuracy, with a Pearson coefficient reaching 0.86. Ensembling weak neural-network learners reproduces the accuracy of a DNN while keeping satisfactory generalization ability. The results show that ensembling increases predictive accuracy and reliability.
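A stacking ensemble of shallow neural-network regressors of the kind described can be sketched with scikit-learn; the random descriptors, layer sizes and ridge meta-learner are placeholder choices, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPRegressor

X = np.random.rand(200, 16)     # placeholder molecular descriptors
y = np.random.rand(200) * 10.0  # placeholder pKa values

base = [(f"nn{i}", MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                                random_state=i)) for i in range(3)]
stack = StackingRegressor(estimators=base, final_estimator=Ridge())
stack.fit(X, y)                 # meta-learner combines the weak NN predictions
pred = stack.predict(X[:5])
```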
Meta-inverse Reinforcement Learning Method Based on Relative Entropy
WU Shao-bo, FU Qi-ming, CHEN Jian-ping, WU Hong-jie, LU You
Computer Science. 2021, 48 (9): 257-263.  doi:10.11896/jsjkx.200700044
Traditional inverse reinforcement learning algorithms are slow, imprecise or even unsolvable when solving for the reward function because expert demonstration samples are insufficient and state transition probabilities are unknown. To address this, a meta-inverse reinforcement learning method based on relative entropy is proposed. Using meta-learning, a prior for the target task is constructed by integrating a set of meta-training sets that follow the same distribution as the target task. In the model-free reinforcement learning setting, the reward function is modeled with the relative entropy probability model and combined with the prior, so that the reward function of the target task can be solved quickly with a small number of target-task samples. The proposed algorithm and the relative entropy IRL (RE IRL) algorithm are applied to the classic Gridworld and Object World problems. Experiments show that the proposed algorithm can still solve the reward function well when the target task lacks sufficient expert demonstration samples and state transition probability information.
Computer Network
Wireless Downlink Scheduling with Deadline Constraint for Realistic Channel Observation Environment
ZHANG Fan, GONG Ao-yu, DENG Lei, LIU Fang, LIN Yan, ZHANG Yi-jin
Computer Science. 2021, 48 (9): 264-270.  doi:10.11896/jsjkx.210100143
Deadline-constrained wireless downlink transmissions, widely used for real-time communication services that matter to the national economy and people's livelihood, require each packet to be delivered ultra-reliably within a strict delivery deadline. However, the base station (BS) cannot fully observe the channel state between itself and each device; it learns the channel state of a device only when it receives feedback from that device. This realistic channel observation environment makes the design of deadline-constrained downlink scheduling more challenging. This paper deals with the issue by letting the BS determine transmission priority from packet information and partially observable channel states. It models the downlink transmission as an infinite-horizon partially observable Markov decision process (POMDP) considering only head-of-line packets, but finding an optimal or near-optimal strategy for this model is computationally infeasible. The paper therefore proposes a low-complexity suboptimal strategy using the Q-function Markov decision process (QMDP) for finite-horizon problems, and further proposes a simpler heuristic strategy. Simulation results demonstrate the performance advantage of the proposed strategies over baselines in various network scenarios, and indicate that partial observability of the channel states indeed has a significant impact on throughput performance.
Deep Reinforcement Learning Based UAV Assisted SVC Video Multicast
CHENG Zhao-wei, SHEN Hang, WANG Yue, WANG Min, BAI Guang-wei
Computer Science. 2021, 48 (9): 271-277.  doi:10.11896/jsjkx.201000078
In this paper, a flexible video multicast mechanism assisted by a UAV base station is proposed. Combined with SVC encoding, the dynamic deployment and resource allocation of the UAV are considered jointly to maximize the overall number of enhancement layers received by users. Since user movement within the range of the macro station changes the network topology, traditional heuristic algorithms struggle with this complexity. To this end, the DDPG deep reinforcement learning algorithm is used to train a neural network that decides the optimal UAV location and bandwidth allocation proportion. After the model converges, the learning agent can find the optimal UAV deployment and bandwidth allocation strategy in a short time. The simulation results show that the proposed scheme achieves the expected goal and is superior to an existing Q-learning-based scheme.
Hierarchical Management Mechanism of P2P Video Surveillance Network Based on CHBL
XIA Zhong, XIANG Min, HUANG Chun-mei
Computer Science. 2021, 48 (9): 278-285.  doi:10.11896/jsjkx.201200056
Abstract PDF(3338KB) ( 565 )   
References | Related Articles | Metrics
Load balancing and response time are the key issues in video surveillance peer-to-peer networks.A hierarchical management mechanism of a P2P video surveillance network based on consistent hashing with bounded loads (CHBL) is proposed.According to the geographical location of the nodes,the P2P video surveillance network is divided into different autonomous regions,each of which consists of one layer of super nodes and multiple layers of ordinary nodes.The ratio of a node's upstream bandwidth to the bandwidth required by each video transmission channel is used as the upper limit of the node load,and a new layer is created when the total load of the upper-layer nodes reaches this limit.The CHBL algorithm is used to control the load balance of the nodes in each layer,which are mapped to different hash rings.The weight of each node index is calculated by the independent information data fluctuation weighting method,and the comprehensive value of the node is then obtained by linear weighting.The node with the highest comprehensive value in the child node layer is selected as the replacement for a departed node,and as the P2P network runs over time,nodes with the highest comprehensive values converge to the upper layers.Simulation results show that,compared with a DHT-based P2P network,the proposed management mechanism can effectively improve load balance and reduce the overall response time of the network.
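The following sketch shows the core of consistent hashing with bounded loads as the abstract describes it: a channel lands on the first node clockwise on the ring whose load is below its cap. The node names and per-node caps (upstream bandwidth divided by per-channel bandwidth) are illustrative assumptions.

    import bisect
    import hashlib

    # CHBL sketch: bounded-load consistent hashing on a single ring.

    def h(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    class Ring:
        def __init__(self, caps):                  # caps: node -> max channels
            self.caps, self.load = caps, {n: 0 for n in caps}
            self.points = sorted((h(n), n) for n in caps)

        def assign(self, channel):
            keys = [p for p, _ in self.points]
            i = bisect.bisect(keys, h(channel)) % len(self.points)
            for k in range(len(self.points)):      # walk clockwise past full nodes
                node = self.points[(i + k) % len(self.points)][1]
                if self.load[node] < self.caps[node]:
                    self.load[node] += 1
                    return node
            raise RuntimeError("all nodes at capacity")

    ring = Ring({"nodeA": 2, "nodeB": 3, "nodeC": 1})
    print([ring.assign(f"cam{j}") for j in range(5)])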
Extraction Method of Wireless Frame Interval Feature
LI Shuang-qiu, YU Zhi-bin, YANG Ling, ZHANG Yi-fang, LIU Li-ping
Computer Science. 2021, 48 (9): 286-291.  doi:10.11896/jsjkx.201100130
Abstract PDF(2038KB) ( 539 )   
References | Related Articles | Metrics
Aiming at the drawbacks of traditional individual identification algorithms for wireless network devices,namely low accuracy,long processing time and reliance on protocol analysis,this paper proposes a wireless frame interval feature extraction algorithm from the perspective of wireless frame behavior.Based on the generation mechanism of the frame interval feature,frame interval feature extraction algorithms for single-target and multi-target wireless devices are studied,and the effectiveness of the algorithms is verified by taking wireless routers as an example.The experimental results show that when only a single device is turned on at a time,on an experimental platform composed of wireless devices of the same model but different types,the average recognition rate of the proposed method is 94%,which is nearly 10% higher than that of traditional methods.When multiple wireless devices are turned on at the same time,the recognition rate of the proposed method still reaches 90%.Theoretical analysis and experimental results show that the beacon frame interval can be used to identify wireless routing devices and to distinguish different types of wireless routing devices effectively.The proposed method does not require high-precision sampling to obtain transient signals,is not susceptible to the modulation mode,and does not require protocol analysis,so it is well suited to individual identification of wireless network equipment in communication countermeasures and network security.
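A sketch of one plausible beacon frame-interval feature follows: simple statistics over the inter-arrival times of captured beacon frames. The timestamps are synthetic; in practice they would come from per-frame capture timestamps of a sniffer, and the nominal 102.4 ms beacon period with device-specific jitter is an assumption for the demo.

    import numpy as np

    # Frame-interval feature sketch: distribution statistics of beacon
    # inter-arrival times for one device.

    def frame_interval_features(timestamps):
        """timestamps: arrival times (seconds) of beacon frames from one
        device; returns simple distribution features of the intervals."""
        iat = np.diff(np.sort(np.asarray(timestamps)))
        return {"mean": iat.mean(), "std": iat.std(),
                "median": np.median(iat), "p95": np.percentile(iat, 95)}

    # A router nominally beacons every 102.4 ms with device-specific jitter.
    rng = np.random.default_rng(1)
    ts = np.cumsum(rng.normal(0.1024, 0.0005, size=200))
    print(frame_interval_features(ts))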
Optimized Deployment of RFID Reader Antenna Based on Improved Multi-objective Salp Swarm Algorithm
LUO Wen-cong, ZHENG Jia-li, QUAN Yi-xuan, XIE Xiao-de, LIN Zi-han
Computer Science. 2021, 48 (9): 292-297.  doi:10.11896/jsjkx.200700167
Abstract PDF(1953KB) ( 556 )   
References | Related Articles | Metrics
With the rapid development of radio frequency identification (RFID) technology,the demand for optimal deployment of RFID reader antennas in a variety of special environments (such as factories,warehouses and prisons) has attracted extensive attention.In order to solve the problems in RFID reader antenna deployment,such as difficult deployment,many constraints and the difficulty of finding the optimal solution and the Pareto front,this paper proposes an optimized deployment method for RFID reader antennas based on an improved multi-objective salp swarm algorithm (MSSA).The multi-objective deployment optimization model of the RFID reader antenna is constructed in advance and the optimization targets are set.The multi-objective salp swarm algorithm is used to solve the optimal deployment model of the RFID reader antenna.A separation operator is introduced to improve the search ability,non-dominated solutions satisfying the conditions are searched continuously through iteration,and the Pareto solution set satisfying the conditions is constructed as the optimization result.The results show that the proposed algorithm converges faster than the BA-OM,PSO and MC-BFO algorithms without prior knowledge,with coverage rates higher by 33%,28% and 20% respectively.Compared with the hybrid multi-objective firefly algorithm (HMOFA) on the Pareto solution set,load balancing is improved by 7.14%,economic benefit is increased by 59.74%,and reader interference is reduced by 34.04%.
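For reference, the core salp-swarm position update is sketched below in its standard single-objective form; the paper's MSSA adds an archive, the separation operator and Pareto ranking, which are omitted here, and the bounds, swarm size and the fixed food source are illustrative assumptions (in a full run the food source would be updated from the best evaluated salp).

    import numpy as np

    # Salp swarm step: leader tracks the food source, followers chain behind.

    def ssa_step(X, food, lb, ub, l, L, rng):
        """X: (n, d) salp positions; food: (d,) best solution so far;
        l: current iteration, L: max iterations."""
        c1 = 2 * np.exp(-(4 * l / L) ** 2)           # exploration decay
        for j in range(X.shape[1]):                  # leader update per dimension
            c2, c3 = rng.random(), rng.random()
            step = c1 * ((ub[j] - lb[j]) * c2 + lb[j])
            X[0, j] = food[j] + step if c3 >= 0.5 else food[j] - step
        X[1:] = 0.5 * (X[1:] + X[:-1])               # followers average forward
        return np.clip(X, lb, ub)

    rng = np.random.default_rng(2)
    lb, ub = np.zeros(2), np.ones(2) * 50            # e.g. antenna (x, y) in a room
    X = rng.uniform(lb, ub, size=(20, 2))
    food = X[0].copy()                               # fixed here; updated in practice
    for l in range(1, 101):
        X = ssa_step(X, food, lb, ub, l, 100, rng)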
Information Security
High Capacity Reversible Data Hiding Algorithm for Audio Files Based on Code Division Multiplexing
MA Bin, HOU Jin-cheng, WANG Chun-peng, LI Jian, SHI Yun-qing
Computer Science. 2021, 48 (9): 298-305.  doi:10.11896/jsjkx.200800199
Abstract PDF(8535KB) ( 592 )   
References | Related Articles | Metrics
Aiming at the problem of the small embedding capacity and low security of reversible data hiding algorithms for audio files,a reversible data hiding (RDH) algorithm for audio files based on code division multiplexing (CDM) is proposed in this paper.Orthogonal spreading sequences are employed to carry the secret message,enabling the original audio to be recovered completely after the secret data has been extracted accurately.At the same time,owing to the orthogonality of the embedding vectors,the secret data can be embedded into the audio files in an overlapping manner,and most elements of the sequences cancel each other out in the process of data embedding,so higher audio fidelity is obtained even at a large data embedding capacity.Moreover,only a receiver who holds the same embedding vectors as the sender can restore the embedded information and the original audio file losslessly,which effectively improves the security of the algorithm.Experimental results show that,compared with other audio reversible data hiding algorithms,the CDM-based RDH algorithm for audio files achieves higher data embedding capacity at the same audio distortion.
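A toy sketch of CDM-style embedding follows: secret bits modulate mutually orthogonal spreading sequences added onto a block of audio samples, orthogonality lets the overlapped bits be separated again, and subtracting the recovered sequences restores the host exactly. The sequences, amplitude and block size are assumptions; this toy only stays reversible while the spreading gain exceeds the host's projection on each sequence, whereas the paper's scheme guarantees reversibility more carefully.

    import numpy as np

    # CDM data-hiding sketch with orthogonal spreading sequences.

    SEQ = np.array([[1, -1, 1, -1],
                    [1, 1, -1, -1],
                    [1, -1, -1, 1]])                # orthogonal spreading sequences

    def embed(block, bits, alpha=4):
        b = 2 * np.asarray(bits) - 1                # {0,1} -> {-1,+1}
        return block + alpha * (b @ SEQ)            # overlapped embedding

    def extract(marked, alpha=4):
        corr = marked @ SEQ.T                       # orthogonality isolates each bit
        bits = (corr > 0).astype(int)
        host = marked - alpha * ((2 * bits - 1) @ SEQ)
        return bits, host                           # lossless host recovery

    audio = np.array([10, 12, 9, 11])
    marked = embed(audio, [1, 0, 1])
    bits, restored = extract(marked)
    print(bits.tolist(), np.array_equal(restored, audio))   # [1, 0, 1] True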
Blockchain-based Role-Delegation Access Control for Industrial Control System
GUO Xian, WANG Yu-yue, FENG Tao, CAO Lai-cheng, JIANG Yong-bo, ZHANG Di
Computer Science. 2021, 48 (9): 306-316.  doi:10.11896/jsjkx.210300235
Abstract PDF(4408KB) ( 1040 )   
References | Related Articles | Metrics
The concept of a “network perimeter” in industrial control systems is becoming vague due to the integration of IT and OT technology.A fine-grained access control strategy that protects each network connection can ensure the network security of an industrial control system.A role-delegation-based access control scheme can delegate an access right of a user in one domain to a user in another domain or to a company partner,so that these users can remotely access the network resources of the industrial enterprise.However,the benefits resulting from delegation may also increase the attack surface of the industrial control system.Blockchain technology,with its decentralization,tamper resistance,auditability and other characteristics,can serve as a basic framework for role-delegation access control over network resources in industrial control systems.This paper proposes a blockchain-based role-delegation access control scheme,DRBAC.DRBAC includes several important components:user role management and delegation,access control,and a monitoring mechanism.The DRBAC solution is implemented with smart contracts and ensures that each network connection is protected by fine-grained access control strategies.Finally,the correctness,feasibility and overhead of DRBAC are tested and analyzed in a private blockchain network.
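To make the role-delegation check concrete, the sketch below expresses the delegation logic in plain Python; the paper implements DRBAC as smart contracts on a blockchain, so this class, its method names, and the roles and permissions are all hypothetical stand-ins for illustration only.

    # Role-delegation access control sketch (Python stand-in for the
    # on-chain smart-contract logic; names are illustrative).

    class DRBAC:
        def __init__(self):
            self.role_perms = {}        # role -> set of permissions
            self.user_roles = {}        # user -> set of roles
            self.delegations = {}       # (delegator, delegatee) -> role

        def delegate(self, delegator, delegatee, role):
            if role not in self.user_roles.get(delegator, set()):
                raise PermissionError("cannot delegate a role you do not hold")
            self.delegations[(delegator, delegatee)] = role

        def revoke(self, delegator, delegatee):
            self.delegations.pop((delegator, delegatee), None)

        def can_access(self, user, permission):
            roles = set(self.user_roles.get(user, set()))
            roles |= {r for (_, d), r in self.delegations.items() if d == user}
            return any(permission in self.role_perms.get(r, set()) for r in roles)

    acl = DRBAC()
    acl.role_perms["engineer"] = {"read_plc"}
    acl.user_roles["alice"] = {"engineer"}
    acl.delegate("alice", "bob", "engineer")     # cross-domain partner access
    print(acl.can_access("bob", "read_plc"))     # True; revocable and auditable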
Non-byzantine Fault Tolerance Consensus Algorithm for Consortium Blockchain
WANG Ri-hong, ZHOU Hang, XU Quan-qing, ZHANG Li-feng
Computer Science. 2021, 48 (9): 317-323.  doi:10.11896/jsjkx.200600051
Abstract PDF(2512KB) ( 1021 )   
References | Related Articles | Metrics
Combining the multi-center characteristics of public blockchains with the high-performance advantages of private blockchains,the consortium blockchain has become a focus of blockchain development in China.Given the node-trust characteristics of the consortium blockchain,a non-byzantine fault tolerance consensus algorithm can provide better performance support for it.Taking the Raft consensus algorithm as the research object and focusing on its leader election and log replication processes,this paper proposes a non-byzantine fault tolerance consensus algorithm for consortium blockchain—KRaft (Kademlia-Raft).It improves the Leader election and log replication processes of the Raft consensus algorithm by incorporating a two-layer Kademlia routing protocol.First,in view of the voting-efficiency problem caused by the growing numbers of Candidate and Follower nodes in the Raft consensus algorithm,the KRaft consensus algorithm uses the K-buckets established by the two-layer Kademlia protocol to realize stable election within the Candidate node set.Secondly,in view of the low efficiency of the Leader's single-node log replication process in the Raft consensus algorithm and the load balancing problem on the nodes,a parallel log replication scheme with multiple Candidate nodes is proposed to equalize the load on the Leader node,thereby improving the data throughput and the scalability of the algorithm at the same time.Finally,local multi-node simulation experiments show that the data throughput of the KRaft consensus algorithm is 34.5% higher than that of the Raft consensus algorithm,and the Leader voting speed is increased by 55.6%.
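The K-bucket indexing that underlies Kademlia routing is sketched below: peers are grouped by XOR distance from the local node, so a Candidate set can be located and contacted deterministically. The node IDs and the bucket capacity K are illustrative assumptions, not KRaft's parameters.

    # Kademlia K-bucket sketch: group peers by XOR distance.

    K = 3                                        # bucket capacity (Kademlia's k)

    def bucket_index(self_id, node_id):
        d = self_id ^ node_id                    # XOR distance metric
        return d.bit_length() - 1 if d else None # None: the node itself

    def build_buckets(self_id, node_ids):
        buckets = {}
        for nid in node_ids:
            i = bucket_index(self_id, nid)
            if i is not None and len(buckets.setdefault(i, [])) < K:
                buckets[i].append(nid)
        return buckets

    print(build_buckets(0b0001, [0b0010, 0b0011, 0b0100, 0b1000, 0b1001]))
    # -> {1: [2, 3], 2: [4], 3: [8, 9]}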
Secret Verification Method of Blockchain Transaction Amount Based on Digital Commitment
ZHANG Xiao-yan, LI Qin-wei, FU Fu-jie
Computer Science. 2021, 48 (9): 324-329.  doi:10.11896/jsjkx.200800123
Abstract PDF(1561KB) ( 970 )   
References | Related Articles | Metrics
In traditional blockchain transactions,privacy protection means encrypting users' sensitive information under an anonymity mechanism,and a trusted third party is involved to verify the transaction plaintext.However,once the third party is attacked,the users' transaction information will be divulged;furthermore,no truly trusted third party exists in a rational setting.To better solve the privacy problems in blockchain transactions,and in view of the problem of confidentially verifying traders' transaction amounts in a non-anonymous setting,the PVC digital commitment protocol is adopted to hide the transaction amount in a commitment,and a publicly verifiable zero-knowledge proof scheme is established,so that verifiers can confidentially verify the legitimacy of a transaction without obtaining the traders' sensitive information.At the same time,the homomorphic property of elliptic curve encryption is used to encrypt the amount,thereby solving the problem of updating the traders' ciphertext ledgers.The correctness of the proposed privacy protection scheme is verified and analyzed,and the results show that,compared with existing schemes,the proposed scheme has relatively low computational complexity,strong security and high efficiency.
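The sketch below shows a Pedersen-style commitment, which illustrates how an amount can be hidden yet still support public verification of additive relations; it is shown over a small prime group for readability, whereas the paper works on elliptic curves with its own PVC protocol, and the bases g, h here are toy values (in practice the discrete log of h with respect to g must be unknown).

    import secrets

    # Pedersen-style commitment sketch: C = g^m * h^r hides amount m.

    p = 2**127 - 1                             # a Mersenne prime, demo only
    g, h = 2, 3                                # toy bases, not secure choices

    def commit(m, r):
        return pow(g, m, p) * pow(h, r, p) % p

    m, r = 42, secrets.randbelow(p - 1)        # amount and blinding factor
    C = commit(m, r)                           # published; reveals nothing about m
    # Homomorphism: commitments multiply to a commitment of the summed
    # amounts, letting a verifier check balance relations without plaintexts.
    assert commit(5, 7) * commit(6, 8) % p == commit(11, 15)
    print(hex(C)[:18])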
Data Integrity Verification Scheme of Shared EMR Based on Red Black Tree
ZHOU Yi-hua, JIA Yu-xin, JIA Li-yuan, FANG Jia-bo, SHI Wei-min
Computer Science. 2021, 48 (9): 330-336.  doi:10.11896/jsjkx.200600139
Abstract PDF(2044KB) ( 654 )   
References | Related Articles | Metrics
In order to solve the privacy and data integrity problems of shared electronic medical records,this paper proposes a red-black tree-based data integrity verification scheme for shared electronic medical records on a parallel blockchain architecture.First,the doctor-patient integrity verification information is stored on the patient chain and the doctor chain under different attribute-based encryption,and the specific doctor-patient data are stored on an off-chain CSP server.Then,a red-black tree-based data integrity verification scheme and a dynamic data update scheme are constructed.Security analysis shows that the proposed scheme not only provides public verifiability and effectively resists cloud server forgery attacks,but also protects the privacy of user and patient information,with high efficiency in integrity verification and data updating.
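One way to see why a tree structure helps integrity verification is sketched below: each node stores a digest over its record and its children's digests, so a single root hash authenticates the whole record set and any update only recomputes one root-to-leaf path. For brevity a plain binary search tree stands in for the paper's red-black tree (rebalancing omitted), and the keys and records are invented.

    import hashlib

    # Authenticated-tree digest sketch (plain BST; red-black rebalancing omitted).

    def H(*parts):
        return hashlib.sha256(b"|".join(parts)).hexdigest()

    class Node:
        def __init__(self, key, record):
            self.key, self.record = key, record
            self.left = self.right = None

    def digest(node):
        if node is None:
            return H(b"nil")
        return H(str(node.key).encode(), node.record,
                 digest(node.left).encode(), digest(node.right).encode())

    def insert(node, key, record):
        if node is None:
            return Node(key, record)
        if key < node.key:
            node.left = insert(node.left, key, record)
        else:
            node.right = insert(node.right, key, record)
        return node

    root = None
    for k, rec in [(2, b"emr-a"), (1, b"emr-b"), (3, b"emr-c")]:
        root = insert(root, k, rec)
    print(digest(root))        # root digest published for public verification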
Kernel Density Estimation-based Lightweight IoT Anomaly Traffic Detection Method
ZHANG Ye, LI Zhi-hua, WANG Chang-jie
Computer Science. 2021, 48 (9): 337-344.  doi:10.11896/jsjkx.200600108
Abstract PDF(2213KB) ( 725 )   
References | Related Articles | Metrics
In order to effectively deal with the security threats of home and personal Internet of Things (IoT) botnets,especially the objective problem of insufficient resources for anomaly detection in the home environment,a kernel density estimation-based lightweight IoT anomaly traffic detection (KDE-LIATD) method is proposed.Firstly,the KDE-LIATD method uses Gaussian kernel density estimation to estimate the probability density function,and the corresponding probability density,of each dimensional feature value of the normal samples in the training set.Then,a kernel density estimation-based feature selection algorithm (KDE-FS) is proposed to obtain the features that contribute significantly to anomaly detection,thereby reducing the feature dimension while improving the accuracy of anomaly detection.Finally,the cubic spline interpolation method is used to compute the anomaly evaluation value of a test sample and perform anomaly detection;this strategy greatly reduces the computational and storage overhead that computing the anomaly evaluation value directly with kernel density estimation would require.Simulation results show that the KDE-LIATD method is robust and highly compatible for anomaly traffic detection across heterogeneous IoT devices,and can effectively detect abnormal traffic in home and personal IoT botnets.
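The KDE-plus-spline idea is sketched below for a single feature: fit a Gaussian KDE on normal training traffic, tabulate the density once on a grid, then score test samples through a cheap cubic-spline lookup instead of re-running the KDE. The data are synthetic and the grid size and score definition are illustrative assumptions.

    import numpy as np
    from scipy.stats import gaussian_kde
    from scipy.interpolate import CubicSpline

    # KDE fitted once on normal traffic; spline makes scoring lightweight.

    rng = np.random.default_rng(3)
    train = rng.normal(0.0, 1.0, size=1000)        # one feature of normal traffic

    kde = gaussian_kde(train)
    grid = np.linspace(train.min() - 3, train.max() + 3, 256)
    spline = CubicSpline(grid, kde(grid))          # precomputed density curve

    def anomaly_score(x):
        # Low estimated density -> high anomaly score.
        return -np.log(np.maximum(spline(x), 1e-12))

    print(anomaly_score(0.1), anomaly_score(6.0))  # normal vs. anomalous value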
Intrusion Detection Method Based on Denoising Autoencoder and Three-way Decisions
ZHANG Shi-peng, LI Yong-zhong
Computer Science. 2021, 48 (9): 345-351.  doi:10.11896/jsjkx.200500059
Abstract PDF(2012KB) ( 799 )   
References | Related Articles | Metrics
Intrusion detection plays a vital role in computer network security.It is one of the key technologies of network security and requires constant attention.As the network environment becomes more and more complex,network intrusion behaviors gradually show diversified and intelligent characteristics and become harder to detect,so research in network security is an ongoing endeavor.Current intrusion detection methods face questions of feasibility and sustainability:it is difficult for them to fully abstract the features contained in intrusion behaviors,and most of them perform poorly on unknown attacks.In response to these problems,we propose DAE-3WD,an intrusion detection method based on denoising autoencoder and three-way decisions,which we hope can effectively promote research on intrusion detection.The proposed method extracts features from high-dimensional data through a denoising autoencoder.Through multiple feature extractions,a multi-granularity feature space is constructed;based on the three-way decisions,an immediate decision is made on clearly intrusive or normal behavior,while suspected intrusion or normal behavior requires further analysis.Deep learning has superior hierarchical feature learning ability,and three-way decisions avoid the risk of blind classification due to insufficient information;the method exploits both characteristics to improve intrusion detection performance.The NSL-KDD dataset is used in our experiments.The experiments prove that the proposed method can extract meaningful features and effectively improve the performance of intrusion detection.
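A minimal sketch of this combination is given below: a denoising autoencoder learns features of traffic records, and its reconstruction error drives a three-way decision with two thresholds, accepting as normal, rejecting as intrusion, or deferring for further analysis. The network dimensions, noise level and thresholds are illustrative assumptions, not the paper's configuration (training loop omitted).

    import torch
    import torch.nn as nn

    # Denoising autoencoder + three-way decision sketch.

    class DAE(nn.Module):
        def __init__(self, d_in=41, d_hid=16):       # 41 ~ NSL-KDD feature count
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU())
            self.dec = nn.Linear(d_hid, d_in)

        def forward(self, x):
            noisy = x + 0.1 * torch.randn_like(x)    # denoising: corrupt the input
            return self.dec(self.enc(noisy))

    def three_way(err, alpha=0.5, beta=2.0):
        if err < alpha:
            return "normal"                          # positive region: accept
        if err > beta:
            return "intrusion"                       # negative region: reject
        return "defer"                               # boundary: analyze further

    model = DAE()
    x = torch.rand(1, 41)
    err = nn.functional.mse_loss(model(x), x).item()
    print(three_way(err))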