Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 48 Issue 6A, 16 June 2021
  
Image Processing & Multimedia Technology
Recent Advances for Object Contour Detection Technology
FENG Fu-rong, ZHANG Zhao-gong
Computer Science. 2021, 48 (6A): 1-9.  doi:10.11896/jsjkx.201000044
Object contour detection is one of the most fundamental, significant and challenging problems in computer vision research. With the development of deep learning in recent years, breakthroughs have been made in other research directions in vision, such as object detection and instance segmentation, which gradually reveal the close relationship between contour detection and these directions, so contour detection has received more and more attention. This paper discusses several main aspects: a detailed review of existing contour detection algorithms, organized into three stages (low-level, middle-level and high-level) according to the features used for contour detection and extraction; a detailed analysis of the datasets used, performance evaluation indicators, model structures and model details; and the applications of contour detection and of its results, so as to provide a deep understanding of the development of contour detection. Finally, the challenges and future trends of contour detection are analyzed and predicted. This paper provides new ideas and references for follow-up research in this field.
Review on Methods of Reducing Acoustic Reflection Artifact in Biological Photoacoustic Imaging
SUN Zheng, ZHANG Xiao-xue
Computer Science. 2021, 48 (6A): 10-14.  doi:10.11896/jsjkx.200800147
Biological photoacoustic imaging (PAI) is a newly emerged noninvasive hybrid functional imaging modality. Acoustic inhomogeneity of the imaged tissue may cause the photoacoustically generated ultrasound to be reflected at tissue interfaces, resulting in reduced image quality and limited penetration depth. In this paper, the main methods for reducing acoustic reflection artifacts in photoacoustic images are reviewed, including delay subtraction, clutter decorrelation, short-lag spatial coherence (SLSC), deep-learning methods, photoacoustic-guided focused ultrasound (PAFUSion), the plane-wave ultrasound model and multi-wavelength excitation. The advantages and limitations of these methods are analyzed. The paper concludes with future directions for acoustic reflection artifact reduction in photoacoustic images.
Vehicle Color Recognition in Natural Traffic Scene
ZHOU Xin, LIU Shuo-di, PAN Wei, CHEN Yuan-yuan
Computer Science. 2021, 48 (6A): 15-20.  doi:10.11896/jsjkx.200800078
Vehicle color is one of the significant vehicle details, and recognizing it provides more precise and richer information for vehicle identification in intelligent transportation systems. In natural traffic scenes, the vehicle images captured by cameras are strongly affected by illumination changes, so vehicle color cannot be determined directly from the RGB values of the image. Traditional machine learning methods for vehicle color recognition require experience-dependent feature selection, which may limit classification performance; in practical applications they can also be computationally expensive and difficult to run in real time. Aiming at the difficulty of obtaining and describing vehicle color information in natural scenes, a novel deep neural network model based on multiple color spaces, MultiColor-Net, is proposed to identify vehicle color in natural traffic scenes. In MultiColor-Net, several filters of different sizes extract features from the input image in the RGB and HSV color spaces, respectively; the features from the two color spaces are then combined and passed through a fully connected network to obtain the vehicle color classification. Compared with ResNet, Inception v3 and other deep neural network models on a real intelligent transportation dataset, MultiColor-Net improves accuracy by about 2.45% over using HSV images alone and by about 0.8% over using RGB images alone. Consequently, the proposed MultiColor-Net achieves a high recognition accuracy on real traffic image data while maintaining low computational complexity.
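A minimal Keras sketch of the dual color-space idea described above, not the authors' exact MultiColor-Net: two branches with filters of different sizes process RGB and HSV inputs, and their features are concatenated before a fully connected classifier. Layer widths, input resolution and the 10-class output are assumptions.

```python
# Sketch of a dual color-space classifier; sizes and class count are illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, Model

def branch(inp, name):
    # Parallel convolutions with different kernel sizes, as described in the abstract
    feats = [layers.Conv2D(32, k, padding='same', activation='relu')(inp) for k in (3, 5, 7)]
    x = layers.Concatenate(name=f'{name}_concat')(feats)
    x = layers.MaxPooling2D()(x)
    return layers.GlobalAveragePooling2D()(x)

rgb_in = layers.Input((64, 64, 3), name='rgb')
hsv_in = layers.Input((64, 64, 3), name='hsv')
x = layers.Concatenate()([branch(rgb_in, 'rgb'), branch(hsv_in, 'hsv')])
x = layers.Dense(128, activation='relu')(x)
out = layers.Dense(10, activation='softmax')(x)   # e.g. 10 vehicle color classes
model = Model([rgb_in, hsv_in], out)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```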
MR Image Enhancement Based on Adaptive Weighted Duplicate Filtering and Homomorphic Filtering
HUANG Xue-bing, WEI Jia-yi, SHEN Wen-yu, LING Li
Computer Science. 2021, 48 (6A): 21-27.  doi:10.11896/jsjkx.200800183
Magnetic resonance (MR) images are usually affected by salt-and-pepper noise (SPN) and low contrast. In this paper, we enhance MR images by filtering them in the spatial and frequency domains respectively. Since most existing filtering algorithms perform poorly when removing high-level SPN, we propose the adaptive weighted duplicate filter (AWDF). The window size is adapted by enlarging the window until the maximum and minimum values of two successive windows are respectively equal, and the noise pixel is then replaced with the mean of the most frequently duplicated noise-free pixels in the window. We apply the algorithm to the pre-processing of MR images with different SPN levels and then apply homomorphic filtering in the frequency domain. Simulation results show that combining AWDF with an optimized Gaussian homomorphic filter improves the contrast and details of the images while removing high-level SPN; the PSNR and SSIM of the images are greatly improved, and the enhancement is remarkable.
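A rough NumPy sketch of the adaptive-window replacement rule described above, written only to illustrate the idea; the growth stopping rule and the maximum window size are assumptions, and the homomorphic-filtering stage is omitted.

```python
# Illustrative sketch of an adaptive weighted duplicate filter for salt-and-pepper noise.
import numpy as np

def awdf(img, max_win=15):
    out = img.astype(np.float64).copy()
    noisy = (img == 0) | (img == 255)              # SPN pixels take extreme values
    pad = max_win // 2
    padded = np.pad(img, pad, mode='reflect')
    for y, x in zip(*np.where(noisy)):
        prev_max = prev_min = None
        for r in range(1, pad + 1):                # grow the window 3x3, 5x5, ...
            win = padded[y + pad - r:y + pad + r + 1, x + pad - r:x + pad + r + 1]
            cur_max, cur_min = win.max(), win.min()
            clean = win[(win != 0) & (win != 255)]
            if (cur_max == prev_max and cur_min == prev_min) or r == pad:
                if clean.size:
                    vals, counts = np.unique(clean, return_counts=True)
                    # mean of the most frequently duplicated noise-free values
                    out[y, x] = vals[counts == counts.max()].mean()
                break
            prev_max, prev_min = cur_max, cur_min
    return out.astype(np.uint8)
```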
Lung Tissue Segmentation Algorithm: Fractional Order Sparrow Search Optimization for OTSU
JIANG Yan, MA Yu, LIANG Yuan-zhe, WANG Yuan, LI Guang-hao, MA Ding
Computer Science. 2021, 48 (6A): 28-32.  doi:10.11896/jsjkx.200900176
Aiming at the slow convergence of traditional particle swarm optimization for lung tissue segmentation and its tendency to fall into local optima, a lung tissue segmentation algorithm based on fractional-order sparrow search optimization for OTSU thresholding is proposed. Fractional calculus is used to optimize the sparrow search algorithm: according to the sparrow's position information, an adaptive fractional order is introduced to adjust the order adaptively and accelerate convergence. A grayscale-gradient 2D histogram is used to reduce the computation of the 2D histogram and the sparrows' search range. During implementation, a hole-filling algorithm removes the CT image background, and morphological operations remove noise and repair holes in the lesion area. Experiments show that the number of iterations needed for stable convergence of the proposed algorithm is 22.75%, 13.75% and 2.25% lower than that of the particle swarm optimization OTSU algorithm, the fractional-order particle swarm optimization OTSU algorithm and the sparrow search optimization OTSU algorithm, respectively. Therefore, the proposed algorithm guarantees segmentation accuracy while improving convergence speed.
Speech Endpoint Detection Based on Bayesian Decision of Logarithmic Power Spectrum Ratio in High and Low Frequency Band
ZHANG Zi-cheng, TAN Zhi-wei, ZHANG Chen-rui, WANG Xuan, LIU Xiao-xuan, YU Yi-biao
Computer Science. 2021, 48 (6A): 33-37.  doi:10.11896/jsjkx.200700135
Based on an analysis of the power spectra of speech and noise in high and low frequency bands, a speech endpoint detection method for low SNR conditions is proposed, using a Bayesian decision on the logarithmic power spectrum ratio of the high and low frequency bands. Firstly, the logarithmic power spectrum ratio of the two frequency bands is calculated for speech and for background noise, the statistical distributions are obtained by maximum likelihood estimation, and the optimal decision threshold is derived from the Bayesian decision criterion. When a signal is input, the log power spectrum ratio of the high and low frequency bands is calculated frame by frame and compared with the decision threshold to classify speech and background noise, thus realizing endpoint detection of the speech signal. Experimental results show that, compared with the traditional double-threshold detection method and the spectral entropy detection method, the proposed method detects speech endpoints more accurately under low SNR conditions and significantly improves the accuracy and speed of endpoint detection.
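A small sketch of the per-frame decision statistic only: the log ratio of low-band to high-band power compared against a threshold. The band split at 1 kHz and the fixed threshold are placeholders for the Bayesian threshold the paper derives from maximum-likelihood estimates.

```python
# Sketch of frame-wise band log-power-ratio thresholding; parameters are assumptions.
import numpy as np

def band_log_ratio(x, fs, frame_len=400, hop=160, split_hz=1000):
    ratios = []
    for start in range(0, len(x) - frame_len, hop):
        frame = x[start:start + frame_len] * np.hamming(frame_len)
        spec = np.abs(np.fft.rfft(frame)) ** 2
        freqs = np.fft.rfftfreq(frame_len, d=1.0 / fs)
        low = spec[freqs < split_hz].sum() + 1e-12
        high = spec[freqs >= split_hz].sum() + 1e-12
        ratios.append(np.log(low / high))
    return np.array(ratios)

def detect_speech(x, fs, threshold=1.0):
    # Frames whose statistic exceeds the decision threshold are marked as speech
    return band_log_ratio(x, fs) > threshold
```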
Combining MCycleGAN and RFCNN to Realize High Resolution Reconstruction of Solar Speckle Image
CUI Wen-hao, JIANG Mu-rong, YANG Lei, FU Peng-ming, ZHU Ling-xiao
Computer Science. 2021, 48 (6A): 38-42.  doi:10.11896/jsjkx.201000160
High-resolution reconstruction of solar speckle images is one of the important research topics in astronomical image processing. Deep-learning-based high-resolution image reconstruction learns an end-to-end mapping from low-resolution to high-resolution images through a neural network model, which can recover the high-frequency information of the image. However, when reconstructing solar speckle images with monotonous features, heavy noise and blurred local details, such methods suffer from over-smoothed edges and easy loss of high-frequency information. In this paper, the structural features of the input and reconstructed images are added to the CycleGAN network to obtain MCycleGAN: the generator network extracts high-frequency information from the structural features, and the feature difference is computed to strengthen the network's ability to reconstruct high-frequency information. Residual blocks and a fusion layer are added to the DeepFuse network to construct RFCNN, and multi-frame reconstruction is carried out using the similar information between image frames, so the edges of the reconstructed image are clearer. Compared with the Level1+ speckle-masking method used by Yunnan Observatory, the proposed algorithm achieves smaller error and higher definition in the reconstructed images.
Image Seam Carving Tampering Detection by Discrete Tchebichef Transform
TIAN Yang, BI Xiu-li, XIAO Bin, LI Wei-sheng, MA Jian-feng
Computer Science. 2021, 48 (6A): 43-50.  doi:10.11896/jsjkx.200800020
Seam carving, one of the most popular image scaling technologies in recent years, is often used for malicious tampering. Current research on seam carving tamper detection has two shortcomings. First, existing detection methods are basically aimed at tampered JPEG images and have low accuracy on tampered TIFF images. Second, when the seam carving ratio is small, classification accuracy is relatively low. To solve these problems, this paper proposes a seam carving tamper detection method based on the discrete Tchebichef transform, which is no longer limited to a particular image format, can effectively detect tampered images in various formats, does not fail when the tampering ratio is small, and maintains high classification accuracy. The method exploits the distribution characteristics of the coefficients in the discrete Tchebichef transform coefficient matrix: after block-wise transformation, the coefficient in the upper left corner of each block is very large while the values at other positions are very small, so the traces of seam carving can be extracted from the differences between coefficients. Because of this property, the classification accuracy is not affected by the compression quality factor and does not change with it. The detection steps are as follows. Firstly, the image is divided into 8×8 non-overlapping blocks, and each block is transformed by the discrete Tchebichef transform to obtain the coefficient matrix. Then the differences between coefficients are calculated within each block, and a histogram of these differences is obtained. Finally, a statistical matrix is computed from the histogram, features are extracted from it, and the extracted features are fed into an SVM to train a classification model that separates original images from tampered images. Experimental results show that the proposed method achieves high classification accuracy on both tampered JPEG and tampered TIFF images, as well as high detection accuracy for small-ratio tampering.
Image Recognition for Building Components Based on Convolutional Neural Network
XIONG Zhao-yang, WANG Ting
Computer Science. 2021, 48 (6A): 51-56.  doi:10.11896/jsjkx.200500122
When using point cloud data obtained by a 3D laser scanner to generate BIM models for large numbers of existing buildings, it is necessary to convert the point cloud data into RGB-D images of the building and classify these images. In this paper, based on deep learning and transfer learning theory, an image recognition method for building components using a convolutional neural network is proposed to deal with the classification of interior building component images such as doors and windows. First of all, VGG16 with weights pre-trained on ImageNet is used as the image recognition network. In addition, the network is optimized by adding a Dropout layer, L2 regularization and a fine-tuning operation to improve recognition accuracy. Experimental results show that the average recognition accuracy of the fine-tuned model is 95.4%, about 5.1% higher than that of the model without optimization.
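A hedged Keras sketch of the transfer-learning setup described above: an ImageNet-pretrained VGG16 backbone, a new head with Dropout and L2 regularization, and a fine-tuning stage on the top convolutional block. Layer sizes, learning rates and the 5-class output are assumptions, not the paper's exact configuration.

```python
# Sketch of VGG16 transfer learning with Dropout, L2 regularization and fine-tuning.
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

base = tf.keras.applications.VGG16(weights='imagenet', include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False                      # stage 1: train only the new classifier head

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation='relu',
                 kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dropout(0.5),
    layers.Dense(5, activation='softmax'),  # e.g. door / window / wall / column / other
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss='categorical_crossentropy', metrics=['accuracy'])

# Stage 2 (fine-tune): unfreeze the last convolutional block with a small learning rate
base.trainable = True
for layer in base.layers[:-4]:
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss='categorical_crossentropy', metrics=['accuracy'])
```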
Research on Iris Recognition Algorithm Based on Wavelet Packet Decomposition
ZHOU Jun, WANG Shuai, LIU Fan-yi
Computer Science. 2021, 48 (6A): 57-62.  doi:10.11896/jsjkx.200900218
Iris feature extraction is the key step in iris recognition. The wavelet method does not further decompose the high-frequency subspace when decomposing an iris image, yet iris features are mostly contained in the high-frequency subspace, so the extracted features have insufficient expressive power. Aiming at this problem, an iris recognition method based on multi-scale wavelet packet decomposition is proposed in this paper: the diagonal high-frequency subband of the second decomposition layer is modulated into an iris feature code, and features are matched using the Hamming distance. In the experiment, the sym2 wavelet is used as the decomposition wavelet function and 5 350 feature matchings are carried out. The results show a correct recognition rate of 98.5%, which is superior to the wavelet zero-crossing method of Boles and the two-dimensional Haar wavelet transform method of Lim, and second only to the two-dimensional Gabor method of Daugman.
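A minimal PyWavelets sketch of the feature-coding step as described: a two-level sym2 wavelet packet decomposition, binarization of the level-2 diagonal ('dd') subband into an iris code, and matching by Hamming distance. Iris localization and normalization are omitted, and the mean-threshold binarization is an assumption.

```python
# Sketch of wavelet-packet iris coding and Hamming-distance matching.
import numpy as np
import pywt

def iris_code(normalized_iris):
    # normalized_iris: 2D array of the unwrapped iris region
    wp = pywt.WaveletPacket2D(data=normalized_iris, wavelet='sym2', maxlevel=2)
    band = wp['dd'].data                     # diagonal high-frequency subband, level 2
    return (band > band.mean()).astype(np.uint8)

def hamming_distance(code_a, code_b):
    return np.count_nonzero(code_a != code_b) / code_a.size
```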
Automatic Classification of Aviation Fastener Products Based on Image Classification
HU Jing-hui, XU Peng
Computer Science. 2021, 48 (6A): 63-66.  doi:10.11896/jsjkx.200900163
With the rapid development of aviation fastener manufacturing, the fastener manufacturing process on production workshop assembly lines is becoming more and more complicated. At present, transferring different fastener products along the production line is still done manually, which is not only tedious and tiring but also difficult to meet real-time classification requirements. In this paper, an automatic classification method for aviation fasteners based on image classification algorithms is proposed. A scheme for fastener image acquisition and automatic classification is designed, and evaluation experiments are performed on real industrial data, measuring the accuracy, recall, precision and F1-score of a convolutional neural network (CNN) and of the Inception-v3 model. The experimental results show that Inception-v3 is superior to the CNN on all evaluation indicators, and the classification accuracy of the Inception-v3 model reaches more than 98%, which can effectively realize automatic classification of aviation fastener products.
Research on Classification of Breast Cancer Pathological Tissues with Adaptive Small Data Set
HE Qing-fang, WANG Hui, CHENG Guang
Computer Science. 2021, 48 (6A): 67-73.  doi:10.11896/jsjkx.201000188
Aiming at the problems of small datasets, uneven distribution of benign and malignant samples, and low automatic recognition accuracy for breast cancer pathological tissue images, a lightweight pathological tissue image classification model with reasonable depth and width, suitable for small datasets, is designed. On the basis of traditional data augmentation methods such as image rotation and distortion, a random non-repeated cropping method is used to balance the numbers of benign and malignant samples and expand the dataset. For samples in the training set that are difficult to cluster, the concept of a "weak feature" is proposed, together with a "weak feature" sample extraction algorithm and an adaptive adjustment and secondary training algorithm, to improve model training. Under the same parameter settings and running environment, eight groups of comparative experiments are carried out, and the accuracy, sensitivity and specificity of the model all reach more than 97%. The experimental results show that the model designed in this paper performs stably and has good tolerance and adaptability to small and unbalanced datasets.
Research on Shui Characters Extraction and Recognition Based on Adaptive Image Enhancement Technology
YANG Xiu-zhang, WU Shuai, XIA Huan, YU Xiao-min
Computer Science. 2021, 48 (6A): 74-79.  doi:10.11896/jsjkx.200900070
Shui characters are inherited through oral transmission, paper handwriting, embroidery, stele inscriptions, woodcuts and ancient books, and such traditional minority scripts lack support from digital image processing technology: the text is often not clear enough and difficult to read digitally, which cannot meet the new requirements of the information age for rescuing the endangered Shui script. In this paper, an algorithm for Shui character extraction and segmentation based on image enhancement and region detection is proposed. The illumination of the image is processed by logarithmic and gamma transforms, and noise is reduced by median filtering. Then the text edge details of the gray-scale Shui character image are extracted with the Sobel operator, and the text contours are obtained through thresholding, dilation and erosion. Finally, a region detection and text location algorithm extracts and segments the ancient Shui characters. The algorithm is implemented and simulated in Python. Experimental results show that it can effectively reduce image noise and extract the Shui characters, and the separated character information is more complete, which reduces the workload of ethnic researchers and archaeologists to a certain extent. The algorithm can be applied to Shui character recognition, cultural relic protection, the inheritance of Shui culture and other fields, and has application prospects and practical value.
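A rough OpenCV sketch of the pipeline as described: gamma correction for illumination, median filtering, Sobel edges, thresholding with dilation/erosion, and contour-based region detection. The gamma value, kernel sizes, area threshold and the input path are all assumptions for illustration.

```python
# Illustrative extraction pipeline: enhancement -> edges -> morphology -> regions.
import cv2
import numpy as np

def extract_characters(path, gamma=0.8):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)            # 'path' is a placeholder
    corrected = np.uint8(255 * (gray / 255.0) ** gamma)      # gamma transform
    denoised = cv2.medianBlur(corrected, 3)                  # median filtering
    gx = cv2.Sobel(denoised, cv2.CV_16S, 1, 0)
    gy = cv2.Sobel(denoised, cv2.CV_16S, 0, 1)
    edges = cv2.addWeighted(cv2.convertScaleAbs(gx), 0.5,
                            cv2.convertScaleAbs(gy), 0.5, 0)
    _, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    kernel = np.ones((3, 3), np.uint8)
    binary = cv2.erode(cv2.dilate(binary, kernel, iterations=2), kernel, iterations=1)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Each sufficiently large bounding box is treated as one candidate character region
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 50]
```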
Augmentation Technology of Remote Sensing Dataset Based on Improved DCGAN Algorithm
ZHANG Man, LI Jie, ZHU Xin-zhong, SHEN Ji, CHENG Hao-tian
Computer Science. 2021, 48 (6A): 80-84.  doi:10.11896/jsjkx.200700185
The scale of a remote sensing dataset is a crucial factor in the performance of deep-learning-based object detection algorithms, and how to generate a large number of labeled images from a small amount of data has become a hot research topic. To solve this problem, we propose an augmentation method for remote sensing datasets based on an improved DCGAN algorithm using a secondary mask technique. The proposed algorithm can amplify both images and labels by determining the number and locations of the targets to be generated, which solves the problem that GAN-based image augmentation algorithms do not generate corresponding labels. Moreover, a multi-scale feature fusion technique is proposed to optimize the DCGAN algorithm and address the poor quality of the images it generates. Experiments show that the improved DCGAN algorithm is superior to the original DCGAN on both the MNIST and PlANE datasets in terms of image quality and image diversity. In an experiment with the Tiny-YOLOv2 algorithm, the AP value on the dataset expanded by the proposed method reaches 84.45%; compared with the unaugmented dataset and the traditional augmentation method, the AP value is increased by 16.05% and 2.88% respectively, which fully verifies the effectiveness of the technical scheme designed in this paper.
Dynamic Face Recognition Based on Improved Pulse Coupled Neural Network
WEN He, LUO Pin-jie
Computer Science. 2021, 48 (6A): 85-88.  doi:10.11896/jsjkx.200600172
Dynamic face recognition has wide application prospects in real-time monitoring and tracking, and is one of the hot spots in face recognition research. In view of the problem that traditional face recognition technology performs poorly in dynamic face recognition applications, a new method based on the background difference method is proposed. Using the spatio-temporal summation property of the pulse coupled neural network, its neurons are matched with face image pixels, so that different face images produce different firing (ignition) sequences; by analyzing the pixel firing sequences, different faces can be distinguished. Experiments on 500 randomly selected groups of dynamic face images show that the improved pulse coupled neural network can distinguish different persons in real dynamic scenes with robust stability.
Comparative Study on Classification and Recognition of Medical Images Using Deep Learning Network
LIU Han-qing, KANG Xiao-dong, LI Bo, ZHANG Hua-li, FENG Ji-chao, HAN Jun-ling
Computer Science. 2021, 48 (6A): 89-94.  doi:10.11896/jsjkx.201000116
Computer-aided diagnosis technology has practical significance in clinical medicine. Images of lung nodules and hip (articulatio coxae) fractures are used as typical region-feature and boundary-feature images to discuss their applicability to different networks. First, the lung nodule CT images and the hip fracture X-ray images are labeled, pre-trained with CNN, ResNet, DBN and SGAN and then fine-tuned, and classification and recognition are completed via the Softmax classifier. Secondly, image spatial resolution and noise are used as comparative characteristics of the different deep learning networks, and the recognition rate is analyzed from the perspective of network structure. The simulation results show that ResNet performs best on all datasets and has striking generalization ability and robustness.
Mediastinal Lymph Node Segmentation Algorithm Based on Multi-level Features and Global Context
XU Shao-wei, QIN Pin-le, ZENG Jian-chao, ZHAO Zhi-kai, GAO Yuan, WANG Li-fang
Computer Science. 2021, 48 (6A): 95-100.  doi:10.11896/jsjkx.200700067
Aiming at the problems of large scale differences among mediastinal lymph nodes, unbalanced positive and negative samples, and easy confusion with soft tissue and lung tumors, a novel multi-level feature and global context segmentation network for mediastinal lymph nodes is proposed. To deal with the imbalance of positive and negative samples and the similarity of lymph nodes to mediastinal organs and soft tissue, the mediastinal region is extracted using medical prior knowledge to explicitly focus attention on the locations of mediastinal lymph nodes. To handle the similarity between enlarged mediastinal lymph nodes and lung tumors and the regional dispersion of lymph nodes, a global context module is designed; by computing global context dependencies, the network's ability to distinguish lymph nodes from background is greatly enhanced. To cope with the large scale differences, a feature fusion module is designed, which greatly improves the segmentation accuracy for small lymph nodes. Experiments show that the proposed method achieves an accuracy of 76.92%, a recall of 79.65% and a Dice score of 76.08% in the mediastinal lymph node segmentation task, all significantly better than other algorithms currently used for mediastinal lymph node segmentation.
Medical Image Deblur Using Generative Adversarial Networks with Channel Attention
WANG Jian-ming, LI Xiang-feng, YE Lei, ZUO Dun-wen, ZHANG Li-ping
Computer Science. 2021, 48 (6A): 101-106.  doi:10.11896/jsjkx.200600144
Clear medical images can effectively help doctors make pathological analyses and diagnoses. Aiming at the image blur caused by an unfocused camera during medical image acquisition, this paper proposes a new image deblurring network based on the deblurring generative adversarial network (DeblurGAN). The network uses a channel attention structure in the generator to extract details effectively. During image up-sampling, bilinear interpolation followed by a convolution layer is used instead of transposed convolution, which removes checkerboard artifacts. The model is trained with a combination of adversarial loss and content loss to obtain clear images. Experimental results show that the network achieves better performance in both PSNR and SSIM than DeblurGAN.
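A PyTorch sketch of the two generator modifications mentioned above, not the paper's exact architecture: a squeeze-and-excitation style channel attention block, and bilinear upsampling followed by a convolution in place of transposed convolution to avoid checkerboard artifacts. Channel counts and the reduction ratio are assumptions.

```python
# Sketch of channel attention and upsample-then-convolve building blocks.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)           # reweight feature channels

class UpsampleConv(nn.Module):
    """Bilinear interpolation + 3x3 convolution instead of ConvTranspose2d."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(self.up(x))
```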
Saliency Detection Based on Eye Fixation Prediction and Boundary Optimization
LIU Xiang-yu, JIAN Mu-wei, LU Xiang-wei, HE Wei-kai, LI Xiao-feng, YIN Yi-long
Computer Science. 2021, 48 (6A): 107-112.  doi:10.11896/jsjkx.201100116
Saliency detection is one of the most fundamental challenges in computer vision. Although the rapid development of deep learning has greatly improved the accuracy of saliency detection, the extraction of details of salient objects is still unsatisfactory. Therefore, this paper proposes an edge refinement network for salient object detection based on an eye-fixation prediction prior. Firstly, eye fixations are extracted from the original image and the extracted feature map is used as a visual prior for subsequent saliency detection. Secondly, the multi-attention mechanism of the VGG16 network is used for feature extraction, and finally the feature map is refined to improve the quality of the saliency map. Experimental results show that, compared with 6 other state-of-the-art methods, the proposed method achieves better results on 3 publicly available datasets (DUTS, ECSSD and HKU-IS).
Plant Leaf Image Recognition Based on Multi-feature Integration and Convolutional Neural Network
HAN Bin, ZENG Song-wei
Computer Science. 2021, 48 (6A): 113-117.  doi:10.11896/jsjkx.201100119
Plant leaf recognition is an important branch and hotspot of automatic plant classification and recognition. To improve the accuracy of plant leaf recognition, a method combining multi-feature fusion with a convolutional neural network is proposed. In the experiment, LBP features and Gabor features are first extracted. The leaf LBP coding map is evenly divided into 7×7 blocks with different weights assigned; the LBP histogram of each sub-block is calculated and normalized, and the normalized histograms of all sub-blocks are concatenated to obtain the whole histogram feature map. A four-direction Gabor filter bank is used to filter the leaf image to obtain four sub-images; each sub-image is divided into 4×4 sub-blocks, and the mean and variance of the filtered energy values of each sub-block are calculated, yielding 128-dimensional Gabor features. The plant leaf images are then preprocessed to 227×227 pixels and labeled, the local binary pattern and Gabor features are extracted, and the multiple features are added and fused through a feature fusion layer. The convolutional neural network (AlexNet) framework is used as the classifier, and the fully connected layer identifies the plant leaves. To avoid over-fitting, the "dropout" method is used when training the convolutional neural network, and the training model is optimized by adjusting the learning rate and dropout value. Experimental results show that the proposed multi-feature fusion convolutional neural network method classifies the 32 kinds of leaves in the Flavia leaf database and the 189 kinds of leaves in the MEW2014 leaf database with average correct recognition rates of 93.25% and 96.37%, respectively. This shows that, compared with a general convolutional neural network recognition method, this method improves the recognition accuracy and robustness of plant leaf recognition.
Tiny YOLOv3 Target Detection Algorithm Based on Region Activation Strategy
YU Han-qing, YANG Zhen, YIN Zhi-jian
Computer Science. 2021, 48 (6A): 118-121.  doi:10.11896/jsjkx.200700122
Aiming at the low detection accuracy of the Tiny YOLOv3 model, a method that introduces segmentation information into the deep convolutional neural network structure is proposed. During model training, the real position information of the targets is added to the network layer, and these target areas are manually activated; the magnitude of this excitation gradually decreases as training proceeds until it drops to zero. Test results show that on the VOC2007 dataset, the average accuracy of the improved Tiny YOLOv3 model increases to 58.9%, while the detection speed remains consistent with the original model, meeting the needs of real-time detection.
Cross Media Retrieval Method Based on Residual Attention Network
FENG Jiao, LU Chang-yu
Computer Science. 2021, 48 (6A): 122-126.  doi:10.11896/jsjkx.201100026
With the rapid development of multimedia technology, cross-media retrieval has gradually replaced traditional single-media retrieval as the mainstream information retrieval method. Existing cross-media retrieval methods are highly complex and cannot fully mine the detailed characteristics of the data, which causes deviations in the mapping process and makes it difficult to learn accurate data associations. To solve these problems, this paper proposes a cross-media retrieval method based on a residual attention network (CR-RAN). First of all, to better extract the key features of different media data and simplify the cross-media retrieval model, a residual neural network incorporating an attention mechanism is proposed. Then a cross-media retrieval joint loss function is proposed, which enhances the semantic discrimination ability of the network and improves retrieval accuracy by constraining the network's mapping process. Experimental results show that, compared with some existing methods, the proposed method can better learn the associations between different media data and effectively improve the accuracy of cross-media retrieval.
Three-dimensional Target Recognition Method Based on Point Pair Feature and Hierarchical Complete-linkage Clustering
YUAN Xiao-lei, YUE Xiao-feng, FANG Bo, MA Guo-yuan
Computer Science. 2021, 48 (6A): 127-131.  doi:10.11896/jsjkx.200800035
Aiming at the low efficiency and susceptibility to interference of 3D target recognition algorithms based on the original point pair features, a hierarchical complete-linkage clustering algorithm is proposed for 3D target recognition. A global model description is constructed using all point pair features on the model. In the two-dimensional space of local coordinates, candidate poses are screened by a voting scheme and the hierarchical complete-linkage clustering algorithm to obtain the optimal pose. Experimental results on the UWA dataset show that, compared with the original point pair feature algorithm, the proposed method improves both recognition rate and efficiency, and is practical and effective.
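A SciPy sketch of the pose-clustering step only: candidate poses (simplified here to translation vectors) are grouped by complete-linkage hierarchical clustering and the largest cluster is averaged into the final pose. The distance cutoff is an assumption, and the full method additionally handles rotations and vote weights.

```python
# Sketch of complete-linkage clustering of candidate poses.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_poses(translations, cutoff=5.0):
    Z = linkage(translations, method='complete')          # complete-linkage hierarchy
    labels = fcluster(Z, t=cutoff, criterion='distance')
    best = np.bincount(labels).argmax()                   # most populated cluster
    return translations[labels == best].mean(axis=0)      # averaged pose of that cluster

poses = np.random.rand(50, 3) * 100                       # synthetic candidate translations
print(cluster_poses(poses))
```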
Optimization Algorithm of Ship Detection Based on Multi-feature in SAR Images
YAN Jun, FENG Su-yun, LU Lin-lin, WANG Qing, CAI Ming-xiang
Computer Science. 2021, 48 (6A): 132-136.  doi:10.11896/jsjkx.200700180
Traditional ship detection algorithms mostly consider only the gray-level contrast between the ship and the background, cannot effectively avoid the influence of the sidelobe effect on the results, and do not fully utilize the geometric characteristics of targets in SAR images, so their detection accuracy is low. Therefore, a target detection algorithm based on multiple ship features is proposed. The azimuth estimation method and the stepwise approximation method are used to eliminate the influence of the sidelobe effect on the geometric characteristics (area, aspect ratio and rectangularity) and gray contrast, and the variance coefficient method is then used to assign different weights to the four features to calculate a confidence value. The best confidence threshold is determined to remove non-target objects from the candidate targets and optimize the detection results. The algorithm is verified on Sentinel-1 images, with the two-parameter CFAR algorithm and the KSW double-threshold algorithm as comparative experiments. The experimental results show that for three images with different background complexities, the quality factor of the proposed algorithm exceeds 0.7 with the minimum computation time, and it maintains optimal detection performance on images with complex backgrounds.
Big Data & Data Science
Application Status and Future Trends of Photo Analysis in E-commerce: A Survey of Research Based on Photo Visual and Content Features
LIU Rong, ZHANG Ning
Computer Science. 2021, 48 (6A): 137-142.  doi:10.11896/jsjkx.210100017
The development of deep learning and big data mining technology makes it possible to effectively extract the visual and content features of massive numbers of photos, and photo analysis has been widely used in e-commerce research. By combing the related literature on photo analysis, this paper reviews the methods and applications of photo feature extraction, puts forward an analysis framework based on research and applications of photo visual and content features, and systematically expounds the application status of photo analysis in the field of e-commerce. The analysis shows that existing research mainly focuses on the influence of either the visual or the content features of photos on individual preference and consumption behavior, and the effect of their combination remains to be further explored; most research also focuses on general analysis of photos posted by users on social networking sites and lacks further research on consumption behavior. Finally, future research and development directions of photo analysis in e-commerce are summarized, providing a reference for future research.
Review of Research on Investor Sentiment Index in Stock Market
ZHANG Tong-ming, ZHANG Ning
Computer Science. 2021, 48 (6A): 143-150.  doi:10.11896/jsjkx.201000016
Investor sentiment is widely used in stock market research. This paper collates the domestic and foreign literature on investor sentiment indices and finds that their measurement and construction methods fall into three categories. The first category uses market survey indicators to directly represent investor sentiment; the second selects a single economic variable or a combination of variables related to the stock market as proxy variables to measure an investor sentiment index; the third obtains valuable information from social media to construct investor sentiment, covering data source selection and text sentiment classification, and this paper summarizes the machine learning methods used in sentiment classification. Based on the bullishness index of investor sentiment, this paper classifies different degrees of investor sentiment in detail and further proposes a topic-sentiment index (TSI) using the topic extraction capability of the latent Dirichlet allocation (LDA) model; this index overcomes the limitation of current research that considers only the sentiment characteristics of textual information. The conclusion points out the deficiencies and challenges of current research and aims to provide a reference for future research.
Stock Forecast Based on Optimized LSTM Model
HU Yu-wen
Computer Science. 2021, 48 (6A): 151-157.  doi:10.11896/jsjkx.200400011
Stock forecasting has always been a problem that plagues investors. In the past, investors used traditional analysis methods such as K-line charts and Yin-Yang lines to predict stock trends. However, with the advancement of science and technology, the development of economic markets and changes in economic policies, stock price trends are disturbed by many factors, and traditional analysis methods are far from able to capture the information contained in a stock's volatility, so prediction accuracy is greatly reduced. To improve the accuracy of stock price prediction, this paper proposes a stock price prediction model based on PCA, LASSO and LSTM neural networks. Using data on the five major categories of technical indicators of Ping An Bank (000001) from 2015 to 2019, the indicators are reduced and screened using the PCA and LASSO methods, and an LSTM model is used to predict the closing price of Ping An Bank's stock; the stability and accuracy are compared with those of the two reduction methods and of using LSTM alone. The experimental results show that the PCA-LSTM model significantly reduces data redundancy and obtains better prediction accuracy than the LASSO-LSTM model and the plain LSTM model.
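A hedged sketch of the two dimensionality-reduction routes described above (PCA and LASSO) feeding a small LSTM that predicts the next closing price. The window length, layer sizes, indicator matrix and targets are synthetic placeholders, not the paper's configuration or data.

```python
# Sketch: reduce technical indicators with PCA / LASSO, then fit an LSTM on windows.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LassoCV
from tensorflow.keras import layers, models

def make_windows(features, target, lookback=20):
    X, y = [], []
    for i in range(lookback, len(features)):
        X.append(features[i - lookback:i])
        y.append(target[i])
    return np.array(X), np.array(y)

def build_lstm(n_features, lookback=20):
    model = models.Sequential([
        layers.LSTM(64, input_shape=(lookback, n_features)),
        layers.Dense(1),
    ])
    model.compile(optimizer='adam', loss='mse')
    return model

# X: (n_days, n_indicators) technical indicators; y: next-day closing price (synthetic here)
X = np.random.rand(500, 30); y = np.random.rand(500)

pca_feats = PCA(n_components=8).fit_transform(X)          # PCA route
lasso = LassoCV(cv=5).fit(X, y)
lasso_feats = X[:, lasso.coef_ != 0]                      # LASSO route (either route feeds the LSTM)

Xw, yw = make_windows(pca_feats, y)
build_lstm(Xw.shape[2]).fit(Xw, yw, epochs=5, batch_size=32, verbose=0)
```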
Microblog Short Text Mining Considering Context:A Method of Sentiment Analysis
SHI Wei, FU Yue
Computer Science. 2021, 48 (6A): 158-164.  doi:10.11896/jsjkx.210200089
In traditional dictionary-based sentiment analysis, the polarity and intensity of sentiment words are fixed and static, without considering how they change with the semantic environment. This paper proposes a sentiment analysis method for microblog short texts based on a sentiment ontology and sentiment circles that takes context semantics into account. To capture word semantics and update the polarity and intensity of sentiment words, the sentiment circle method is used to consider the co-occurrence patterns of words in different contexts. Combined with the constructed sentiment ontology and semantic quantification rules, a microblog short text mining method that considers the semantic environment is established. Experimental results show that the proposed method is superior to the baseline methods in terms of precision, recall, F value and accuracy at both the entity level and the microblog level.
Research on Factors Affecting Stock Inflection Point Based on Machine Learning Algorithms
YUAN Yu-kun, LI Gang, ZHAO Zhi-xiang, XU Li
Computer Science. 2021, 48 (6A): 165-168.  doi:10.11896/jsjkx.200900168
The transaction situation in the stock market fully reflects investors' behavioral characteristics and affects the trend of the entire market. As the bottom-level transaction data of the stock market, detailed stock transaction data comprehensively reflect the state of trading and serve as a vital reference for judging market trends; they can also provide regulators in the capital market with effective information for decision-making in risk monitoring. In this paper, we propose a method that quickly extracts investor transaction characteristics from detailed stock transaction data and, based on machine learning algorithms such as logistic regression, decision trees and random forests, finds the main influencing factors of large inflection points and predicts the time range over which a larger inflection point occurs. Experimental results on the Shanghai and Shenzhen stock indexes show that, compared with a traditional model, the proposed method improves the prediction accuracy of large inflection points in the stock market by approximately 10%, and the accuracy in a six-month backtesting experiment remains at a level of 70%, which demonstrates the validity of the model in this paper.
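A scikit-learn sketch of the model comparison described above: logistic regression, a decision tree and a random forest trained on per-window transaction features to flag windows containing a large inflection point. The features and labels below are synthetic placeholders, and the feature importances illustrate how main influencing factors could be read off.

```python
# Sketch of comparing three classifiers and inspecting feature importances.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X = np.random.rand(2000, 12)          # per-window investor-behavior features (synthetic)
y = np.random.randint(0, 2, 2000)     # 1 = window contains a large inflection point
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    'logistic regression': LogisticRegression(max_iter=1000),
    'decision tree': DecisionTreeClassifier(max_depth=6),
    'random forest': RandomForestClassifier(n_estimators=200),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))

# Feature importances from the random forest hint at the main influencing factors
print(models['random forest'].feature_importances_)
```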
Comparison of Temperature Forecasting Model Using in Weather Derivatives Designing
ZHANG Xue, LUO Zhi-hong, JIANG Jing
Computer Science. 2021, 48 (6A): 169-177.  doi:10.11896/jsjkx.200900159
Temperature derivatives are among the most actively traded weather derivatives contracts, so building an appropriate temperature forecasting model is the basis for designing temperature derivatives. Considering that temperature time series are always accompanied by trend, seasonality and cycles, this paper uses three models, the continuous-time autoregressive (CAR) model based on the Ornstein-Uhlenbeck process, the seasonal autoregressive integrated moving average (SARIMA) model and a wavelet neural network, to fit the temperature of Mohe, Beijing, Urumqi, Wuhu, Kunming and Haikou, which are regionally representative cities across China. The unbiased absolute percentage error, the standard absolute percentage error and the mean absolute scaled error are used to test the forecasting accuracy of the three models. The results show that, compared with the CAR and SARIMA models, the wavelet neural network has the smallest values of all three error measures and thus the best forecasting performance; it can fit the temperature process well and is useful for temperature derivatives pricing.
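A short statsmodels sketch of one of the three compared models: fitting a SARIMA model to a monthly temperature series and scoring the forecast with a mean absolute percentage error. The (p,d,q)(P,D,Q,s) orders and the synthetic series are placeholders, not the paper's fitted specification.

```python
# Sketch: SARIMA fit on a synthetic seasonal temperature series, scored by MAPE.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
months = pd.date_range('2010-01', periods=120, freq='MS')
temps = pd.Series(15 + 10 * np.sin(2 * np.pi * months.month / 12) + rng.normal(0, 1, 120),
                  index=months)

train, test = temps[:-12], temps[-12:]
fit = SARIMAX(train, order=(1, 0, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)
forecast = fit.forecast(steps=12)

mape = np.mean(np.abs((test - forecast) / test)) * 100   # mean absolute percentage error
print(f'MAPE: {mape:.2f}%')
```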
Automobile Sales Forecasting Model Based on Convolutional Neural Network
LIU Ji-hua, ZHANG Meng-di, PENG Hong-xia, JIA Xing-ping
Computer Science. 2021, 48 (6A): 178-183.  doi:10.11896/jsjkx.200600104
Traditionally, the selection of keywords for web search data is a manual task, and it is difficult to take all keywords into consideration. Therefore, a deep learning method is introduced into the field of automobile sales prediction, and a deep learning model is used to extract features from web search data. First, car-related keywords and online search volumes are collected through web crawlers; then a convolutional neural network model for car sales prediction is designed based on the characteristics of the web search data and the sales data. The model is used to predict Volkswagen sales in the first half of 2019. The results show that the convolutional neural network can effectively predict car sales, with a prediction accuracy of 89.51%, compared with the RBF, ARIMA and ARIMA+RBF models. Owing to the impact of the Spring Festival and the implementation of new policies, the forecast error is largest in February, while the prediction accuracy is highest in March as the market recovers.
A Kind of High-precision LSTM-FC Atmospheric Contaminant Concentrations Forecasting Model
LIU Meng-yang, WU Li-juan, LIANG Hui, DUAN Xu-lei, LIU Shang-qing, GAO Yi-bo
Computer Science. 2021, 48 (6A): 184-189.  doi:10.11896/jsjkx.200600090
Atmospheric contamination poses a severe threat to people's health and causes various diseases, so forecasting the concentrations of atmospheric contaminants is of great significance for guiding air pollution control. To address this issue, we propose a hybrid forecasting model based on an LSTM and a fully connected neural network, and introduce a data-bucket training strategy that addresses the long interval between training data and forecasting samples. The model performs well in both versatility and precision: by fully combining the advantages of the LSTM and the fully connected network, it achieves high-precision forecasting for a variety of contaminants. Finally, we take forecasting for Tianjin as an example to validate its strength, and the results show that the model achieves R² > 0.90 and MSE < 0.15 for all six kinds of pollutants, demonstrating the strength of the LSTM-FC model for atmospheric contaminant concentration forecasting.
Network Public Opinion Trend Prediction of Emergencies Based on Variable Weight Combination
CHENG Tie-jun, WANG Man
Computer Science. 2021, 48 (6A): 190-195.  doi:10.11896/jsjkx.200600094
Analyzing and predicting the development trend of network public opinion on emergencies and discovering potential crises in the process of spreading public opinion are of great significance to social stability. On the basis of the Logistic curve model and a BP neural network, this paper constructs a variable weight combination prediction model from the perspective of nonlinear programming, based on the principle of minimizing the sum of squared errors. Experimental results on three events show that the variable weight combination forecasting model constructed in this paper solves the problem well and has higher accuracy, verifying the validity and feasibility of the model.
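A small SciPy sketch of the weight-selection idea described above: given predictions from the two single models (a Logistic curve and a BP neural network), choose nonnegative weights summing to 1 that minimize the sum of squared combination errors. A variable-weight scheme can rerun the same program on a sliding window; the example series are illustrative only.

```python
# Sketch: nonlinear-programming choice of combination weights (SLSQP via constraints).
import numpy as np
from scipy.optimize import minimize

def combine(pred_logistic, pred_bp, actual):
    preds = np.vstack([pred_logistic, pred_bp])            # (2, T) single-model predictions

    def sse(w):
        return np.sum((actual - w @ preds) ** 2)           # sum of squared combination errors

    res = minimize(sse, x0=np.array([0.5, 0.5]),
                   bounds=[(0, 1), (0, 1)],
                   constraints=[{'type': 'eq', 'fun': lambda w: w.sum() - 1}])
    return res.x, res.x @ preds                             # weights, combined series

w, combined = combine(np.array([10.0, 12.0, 15.0]),
                      np.array([11.0, 13.0, 14.0]),
                      np.array([10.5, 12.8, 14.6]))
print(w)
```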
Research on Propagation of COVID-19 Based on Multiple Models
LIU Han-qing, KANG Xiao-dong, GAO Wan-chun, LI Bo, WANG Ya-ge, ZHANG Hua-li, BAI Fang
Computer Science. 2021, 48 (6A): 196-202.  doi:10.11896/jsjkx.201100086
The spread of COVID-19 to all provinces and cities across the country in a short period of time not only severely affected people's normal life and the social economy, but also threatened people's lives, so multi-model research on COVID-19 transmission has clear theoretical and practical significance. This study is based on public data. First, small-world and scale-free network models are used to study node propagation control. Secondly, an improved SEIR model, combined with the Wuhan epidemic trend, divides the infected into symptomatic and asymptomatic infections, adds hospitalization and death states, and simulates three conditions: normal social behavior, distancing behavior, and isolation measures. Finally, the level and periodicity of COVID-19 infection are analyzed based on a chaos model. The data simulation results verify that the above models have good applicability.
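A minimal ODE sketch in the spirit of the improved model described above: an SEIR-type system extended with an asymptomatic compartment. All rate parameters are illustrative assumptions rather than fitted Wuhan values, and the hospitalization and death compartments are omitted for brevity.

```python
# Sketch of an SEIAR model; reducing beta emulates distancing / isolation scenarios.
import numpy as np
from scipy.integrate import odeint

def seiar(y, t, beta, sigma, p, gamma_i, gamma_a, N):
    S, E, I, A, R = y
    lam = beta * (I + 0.5 * A) / N          # asymptomatic assumed half as infectious
    dS = -lam * S
    dE = lam * S - sigma * E
    dI = p * sigma * E - gamma_i * I        # symptomatic infections
    dA = (1 - p) * sigma * E - gamma_a * A  # asymptomatic infections
    dR = gamma_i * I + gamma_a * A
    return dS, dE, dI, dA, dR

N = 1e7
y0 = (N - 10, 10, 0, 0, 0)
t = np.linspace(0, 160, 161)
sol = odeint(seiar, y0, t, args=(0.6, 1 / 5.2, 0.7, 1 / 10, 1 / 7, N))
print(sol[-1])                              # final compartment sizes
```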
TAN-based Service Pricing Strategy
HAN Li-xia, ZHANG Zhan-ying
Computer Science. 2021, 48 (6A): 203.  doi:10.11896/jsjkx.200900024
Aiming at the pricing problem of labor crowdsourcing platforms in the mobile Internet, this paper uses multiple linear regression to fit the main factors influencing price. Based on the idea of divide and conquer, the geographic information is divided into five regions using a tree-augmented naive Bayesian network (TAN). Scattered points are grouped through cluster analysis, a regional member reputation calculation method is proposed, and each region is calculated separately. According to reputation and geographical location, prices for the different regions are derived. The solution proposed in this paper has reference significance for pricing problems that are greatly affected by geographic information.
Query Suggestion Method Based on Autoencoder and Reinforcement Learning
HU Xiao-wei, CHEN Yu-zhong
Computer Science. 2021, 48 (6A): 206-212.  doi:10.11896/jsjkx.200900196
The purpose of query suggestion is to explore the query intent of search engine users and provide relevant query suggestions. Traditional query suggestion methods mainly rely on manually extracted query features, such as query frequency, query time, user clicks and dwell time, and use statistical learning or ranking algorithms to give suggestions. In recent years, deep learning methods have been widely applied to query suggestion. Existing deep learning methods are mostly based on recurrent neural networks, which predict the user's next query by modeling the semantic features of all queries in the query log. However, these methods have poor context awareness, find it difficult to accurately capture user query intent, do not fully consider the influence of time on query suggestion, and lack timeliness and diversity. In response to these problems, this paper proposes a query suggestion model combining an autoencoder and reinforcement learning (Latent Variable Hierarchical Recurrent Encoder-Decoder with Time Information of Query and Reinforcement Learning, VHREDT-RL). VHREDT-RL introduces reinforcement learning to jointly train the generator and discriminator, thereby enhancing the context awareness of the generated query suggestions, and uses a latent-variable hierarchical recurrent autoencoder that integrates query time information as the generator, giving query suggestions better timeliness and diversity. Experimental results on the AOL dataset show that the proposed VHREDT-RL model achieves better accuracy, robustness and stability than the benchmark methods.
Anomaly Detection Based on Spatial-temporal Trajectory Data
GUO Yi-shan, LIU Man-dan
Computer Science. 2021, 48 (6A): 213-219.  doi:10.11896/jsjkx.201100193
With the popularization of smart devices and the development of wireless communication technology, wireless networks record large amounts of users' spatial-temporal trajectory data while users use them to meet various needs. Anomaly detection on spatial-temporal trajectory data has become a new research hotspot in data mining. To better attend to students' healthy development and promote campus informatization, a spectral clustering algorithm based on the combination of multi-scale thresholds and density (MSTD-SC) is proposed, taking real campus Internet usage data as an example. Firstly, an affinity distance function based on the shortest time distance and shortest-time-distance subsequences (STD-STDSS) is used to construct the initial adjacency matrix. Then thresholded covariance-scale and spatial-scale eigenvector spaces are introduced to apply 0-1 processing to the adjacency matrix, yielding more accurate sample similarities. Next, eigenvalue decomposition is performed on the adjacency matrix. Finally, the DBSCAN clustering algorithm is used to avoid manually determining the number of clusters. Evaluating the results of multiple algorithms with the Silhouette index, the MSTD-SC algorithm shows better clustering performance. Applying it to individual user anomaly detection, the resulting list of abnormal users is verified to be effective and credible.
Landmark-based Spectral Clustering by Joint Spectral Embedding and Spectral Rotation
LI Peng, LIU Li-jun, HUANG Yong-dong
Computer Science. 2021, 48 (6A): 220-225.  doi:10.11896/jsjkx.210100167
Classical spectral clustering algorithms consist of two separate stages: spectral embedding, which computes the eigenvalue decomposition of a Laplacian matrix to obtain a relaxed continuous indicator matrix, and post-processing, which applies k-means or spectral rotation to round the real matrix into a binary cluster indicator matrix. Such a separate scheme is not guaranteed to achieve a jointly optimal result because useful information is lost. Meanwhile, there are difficulties of low clustering precision, high storage cost for the similarity matrix and high computational complexity of the eigenvalue decomposition of the Laplacian matrix. The existing joint model adopts an orthonormal real matrix to approximate the orthogonal but non-orthonormal cluster indicator matrix, and the error of approximating a non-orthonormal matrix is inevitably large. To overcome this drawback, we propose replacing the non-orthonormal cluster indicator matrix with an improved orthonormal cluster indicator matrix; the proposed method obtains better performance because it is easier to minimize the difference between two orthonormal matrices. Furthermore, a novel landmark-based joint spectral embedding and spectral rotation algorithm is proposed based on sparse representation by landmark points, which makes spectral clustering efficiently computable for large-scale datasets. Experimental results on benchmark datasets demonstrate the effectiveness of the proposed method.
Intelligent Travel Route Recommendation Method Integrating User Emotion and Similarity
SUN Zhen-qiang, LUO Yong-long, ZHENG Xiao-yao, ZHANG Hai-yan
Computer Science. 2021, 48 (6A): 226-230.  doi:10.11896/jsjkx.200900119
Abstract PDF(1627KB) ( 842 )   
References | Related Articles | Metrics
In recent years,with the development of social networks,how to design a route recommendation method that meets users' individual needs has become an important research topic.This paper considers the relevant characteristics of POIs (points of interest),integrates user emotion and product similarity into the heuristic function of the ant colony algorithm,and adopts the improvement strategies of EMAS and MMAS.The particle swarm algorithm is used to improve the initial pheromone distribution of the ant colony algorithm.Combined with the ratings of 593 tourists in the dataset and text comment data,this paper proposes the PS-AC (Particle Swarm-Ant Colony algorithm for user emotion and similarity) algorithm and uses the improved ant colony algorithm to recommend travel routes covering the most popular spots within a scenic area.Tests on real data sets show that the PS-AC algorithm performs well in precision,recall and F-measure.
Research on Elderly Population Prediction Based on GM-LSTM Model in Nanjing City
CHEN Hui-qin, GUO Guan-cheng, QIN Chao-xuan, LI Zhao-bi
Computer Science. 2021, 48 (6A): 231-234.  doi:10.11896/jsjkx.200900142
Abstract PDF(4313KB) ( 1296 )   
References | Related Articles | Metrics
At present,population aging in China is becoming increasingly prominent.Accurately predicting the size of the future elderly population is fundamental work for assessing the situation and supporting policy research,and has important reference value for the formulation of relevant policies and for social development.This paper proposes a GM-LSTM model,which combines the grey system dynamic model with the advantages of the LSTM deep learning neural network to build a composite model:the LSTM neural network is used to correct the residuals between the sequence estimated by the GM prediction model and the original sequence.Model verification shows that the GM-LSTM model has good prediction accuracy and generalization ability.Using the GM-LSTM model,data from 2008 to 2017 are selected to predict the number and density of the elderly population in all administrative areas of Nanjing from 2021 to 2035.The results show that over the next 15 years,the elderly population in each administrative area of Nanjing exhibits a trend of high base and high growth,and the differences in elderly population density among administrative areas are significant.The elderly population density in the central urban area is relatively high,making it a densely populated area,and the density gradually decreases toward the suburbs.
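The grey half of the GM-LSTM idea can be sketched as follows: fit a GM(1,1) model, compute its residuals on the training data, and leave a hook where an LSTM (not shown) would learn to predict those residuals and correct the grey forecast. The variable names and the toy series below are illustrative, not data from the paper.

```python
import numpy as np

def gm11_fit(x0):
    x1 = np.cumsum(x0)                               # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                    # mean generating sequence
    B = np.column_stack([-z1, np.ones(len(z1))])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]      # develop / grey input coefficients
    def predict(k):                                  # fitted/forecast values for steps 0..k
        t = np.arange(k + 1)
        x1_hat = (x0[0] - b / a) * np.exp(-a * t) + b / a
        return np.diff(x1_hat, prepend=0.0)          # restore the original-scale series
    return predict

series = np.array([58.2, 59.1, 60.4, 61.8, 63.0, 64.5])   # toy elderly-population series
predict = gm11_fit(series)
fitted = predict(len(series) - 1)
residuals = series - fitted          # an LSTM would be trained to predict these residuals
forecast = predict(len(series) + 2)[-3:]   # grey forecast, before LSTM correction
```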
Prediction Method of International Natural Gas Price Trends Based on News
PEI Ying, LI Tian-xiang, WANG Ao-qing, FU Jia-sheng, HAN Xiao-song
Computer Science. 2021, 48 (6A): 235-239.  doi:10.11896/jsjkx.201000056
Abstract PDF(2644KB) ( 900 )   
References | Related Articles | Metrics
As a new type of clean and important energy,natural gas is one of the bulk commodities traded in futures markets.As an important component of national economies and international transactions,it has great economic significance.However,because the price of natural gas is influenced by economic,political,natural and even human factors,it is very difficult to predict accurately.Therefore,a news-based prediction model of natural gas price trends is proposed in this paper.In this model,text embedding and sentiment analysis are conducted on natural-gas-related news.The Granger causality test is employed to establish the causal relationship between the price of natural gas and the sentiment tendency of relevant news.The news sentiment is used as a weight that multiplies the news vector,and the weighted vectors are the input of a fused CNN and LSTM model:the CNN extracts news features,and the LSTM captures the time series information of news and natural gas price trends.Finally,the network achieves an accuracy of 62%,which is still better than most traditional machine learning algorithms.
Collaborative Filtering Recommendation Algorithm Based on User Preference Under Trust Relationship
SHAO Chao, SONG Shu-mi
Computer Science. 2021, 48 (6A): 240-245.  doi:10.11896/jsjkx.200700113
Abstract PDF(2226KB) ( 714 )   
References | Related Articles | Metrics
With the massive increase of information,recommendation systems have effectively alleviated the problems caused by information explosion.Collaborative filtering,as one of the mainstream technologies of recommendation systems,has received wide attention.In research on users' interest preferences,supervised data sets based on commodity labels are mainly studied while unsupervised data sets are ignored;at the same time,the influence of trusted users on a user's interests is not considered when computing interest preferences.To solve these problems,a collaborative filtering recommendation algorithm based on user preference under trust relationships is proposed in this paper.Firstly,the latent feature information of the items is obtained using a matrix factorization (MF) model and then clustered to obtain item type information.Secondly,the user trust relationships and user-item rating information are considered to construct the user preference matrix.Finally,the users are clustered based on the user preference matrix,and the similarities between users within each cluster are calculated to produce recommendations.Experimental results on open datasets show that the algorithm can effectively improve the accuracy of recommendation results and the quality of recommendations.
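A minimal sketch of this pipeline, under simplifying assumptions: NMF stands in for the MF model, k-means stands in for the item and user clustering, and the trust relationship is reduced to a simple blending of trusted neighbours' preferences; all matrices below are random toy data.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

R = np.random.randint(0, 6, size=(50, 40)).astype(float)   # toy user-item ratings
trust = (np.random.rand(50, 50) > 0.9).astype(float)        # toy user trust links

# 1) latent item features via matrix factorization, then item-type clustering
item_factors = NMF(n_components=8, init="nndsvda", max_iter=500).fit(R).components_.T
item_type = KMeans(n_clusters=5, n_init=10).fit_predict(item_factors)

# 2) user preference matrix: average rating per item type, blended with the
#    average preference of trusted users
pref = np.stack([R[:, item_type == t].mean(axis=1) for t in range(5)], axis=1)
trust_pref = trust @ pref / np.maximum(trust.sum(axis=1, keepdims=True), 1)
pref = 0.7 * pref + 0.3 * trust_pref

# 3) cluster users on the preference matrix; recommend from similar users
#    inside the same cluster
user_cluster = KMeans(n_clusters=4, n_init=10).fit_predict(pref)
cluster0 = np.where(user_cluster == 0)[0]
sims = cosine_similarity(pref[cluster0])   # user-user similarities within one cluster
```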
Research on Comprehensive Evaluation of Network Quality of Service Based on Multidimensional Data
SUN Ming-wei, SI Wei-chao, DONG Qi
Computer Science. 2021, 48 (6A): 246-249.  doi:10.11896/jsjkx.200900131
Abstract PDF(3611KB) ( 585 )   
References | Related Articles | Metrics
With the rapid development of modern society and economy,computer networks have been widely used in all walks of life and play an irreplaceable role.At the same time,more specific requirements are put forward for computer network service quality,and how to guarantee network service quality has always been a hot research topic in the Internet field.This paper analyzes the defects of current comprehensive evaluation research on network service quality and considers that the shortcomings of traditional data processing methods are greatly magnified when facing huge data volumes and various data types.A sparse autoencoder network model is used to perform data reduction and feature extraction on the multidimensional data.Then,the feature data set is taken as experimental data,and an improved grey relational analysis combined with the technique for order preference by similarity to ideal solution (TOPSIS) is used to comprehensively evaluate network service quality.This provides a new approach for multilevel,multicriteria comprehensive evaluation.
Research on Ensemble Learning Method Based on Feature Selection for High-dimensional Data
ZHOU Gang, GUO Fu-liang
Computer Science. 2021, 48 (6A): 250-254.  doi:10.11896/jsjkx.200700102
Abstract PDF(1709KB) ( 934 )   
References | Related Articles | Metrics
From the prediction error analysis and bias-variance decomposition of ensemble learning,it can be found that using a limited number of accurate and diverse base learners yields better generalization accuracy.A two-stage feature selection ensemble learning method is constructed using information entropy.In the first stage,a basic feature set B with accuracy higher than 0.5 is constructed according to the relative classification information entropy.In the second stage,independent feature subsets are constructed on the basis of B by a greedy algorithm under a mutual information entropy criterion.The Jaccard coefficient is then used to evaluate the diversity among feature subsets,and diverse independent feature subsets are selected to construct the base learners.Data experiments show that the efficiency and accuracy of the proposed method are better than those of the general Bagging method,especially on multi-class high-dimensional datasets,where the optimization effect is good;however,it is not suitable for binary classification problems.
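A minimal sketch of the two-stage idea: keep features whose individual predictive accuracy exceeds 0.5, form several candidate subsets from a mutual-information ranking (a simplified stand-in for the paper's greedy construction), and retain only subsets that are diverse in the Jaccard sense. The dataset, thresholds and subset sizes are illustrative choices.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Stage 1: basic feature set B = features that alone classify better than 0.5
acc = [cross_val_score(DecisionTreeClassifier(max_depth=3), X[:, [j]], y, cv=3).mean()
       for j in range(X.shape[1])]
B = [j for j, a in enumerate(acc) if a > 0.5]

# Stage 2: candidate subsets built from a mutual-information ranking of B
mi = mutual_info_classif(X[:, B], y, random_state=0)
order = np.argsort(mi)[::-1]
subsets = [set(np.array(B)[order[start::3]][:5]) for start in range(3)]

# keep subsets that are mutually diverse (small Jaccard overlap)
def jaccard(a, b):
    return len(a & b) / len(a | b)

diverse = [s for i, s in enumerate(subsets)
           if all(jaccard(s, t) < 0.5 for t in subsets[:i])]
# each retained subset would then train one base learner of the ensemble
```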
Footprint Image Clustering Method Based on Automatic Feature Extraction
CHEN Yang, WANG Jin-liang, XIA Wei, YANG Hao, ZHU Run, XI Xue-feng
Computer Science. 2021, 48 (6A): 255-259.  doi:10.11896/jsjkx.200900033
Abstract PDF(2164KB) ( 876 )   
References | Related Articles | Metrics
Footprint images are among the most important clues in the investigation of public security cases.Every year,public security agencies collect a large number of crime scene footprints,and how to automatically organize and categorize these footprint images has become a difficulty for public security informatization.To meet the actual needs of public security,this paper combines a convolutional neural network with the DBSCAN algorithm and proposes a method for clustering footprint images.First,the footprint images are preprocessed to meet the model training requirements.Then,through model pre-training,the ResNet50 and DenseNet121 convolutional neural network structures are improved to extract footprint image features and establish a feature vector library.Based on a DBSCAN-like algorithm,the feature vector library is used to organize and classify the footprint images.Experimental results show that the method has good practicability and effectiveness.
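A minimal sketch of the feature-extraction-plus-clustering idea: a pretrained ResNet50 with its classification layer removed produces one feature vector per footprint image, and DBSCAN groups the vectors. The image folder path and the DBSCAN parameters are placeholders, and the off-the-shelf backbone stands in for the paper's modified networks.

```python
import glob
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image
from sklearn.cluster import DBSCAN

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
extractor = nn.Sequential(*list(backbone.children())[:-1]).eval()  # drop the fc layer

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

features = []
for path in glob.glob("footprints/*.png"):          # placeholder image folder
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        features.append(extractor(img).flatten().numpy())

labels = DBSCAN(eps=12.0, min_samples=3).fit_predict(features)  # -1 = unclustered
```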
Intelligent Computing
Improved Crow Search Algorithm Based on Parameter Adaptive Strategy
LIN Zhong-fu, YAN Li, HUANG Wei, LI Jie
Computer Science. 2021, 48 (6A): 260-263.  doi:10.11896/jsjkx.201100158
Abstract PDF(3074KB) ( 920 )   
References | Related Articles | Metrics
Crow search algorithm (CSA) is a new intelligent optimization algorithm developed in recent years.It has the advantages of high optimization accuracy and fast convergence speed.However,its search performance strongly depends on its parameters,whose selection is critical to the global search ability and convergence speed of the algorithm.In order to determine the optimal parameters,a method for characterizing the convergence process of a population-based optimization algorithm is first proposed,so that the optimization process can be divided into early,middle and late stages.On this basis,an adaptive-parameter improved crow search algorithm (APICSA) based on the optimization process is proposed.Test results on the Levy No.5 function and a gear system design problem show that APICSA better balances reliability and convergence speed,improving both to a certain extent.Compared with other intelligent optimization algorithms such as the artificial bee colony algorithm (ABC),the standard deviation of the proposed method over 50 runs is reduced by 55%,and the error between the average value and the optimal solution is reduced by 67.7%,which shows that APICSA performs better in reliability and accuracy.
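The crow search update with stage-dependent parameters can be sketched as follows: the awareness probability (AP) and flight length (fl) are switched across the early, middle and late thirds of the run. The schedule, parameter values and the sphere objective are illustrative, not the paper's adaptive rule.

```python
import numpy as np

def sphere(x):                      # toy objective
    return float(np.sum(x**2))

def apicsa_like(f, dim=5, n=20, iters=300, lb=-5.0, ub=5.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))        # crow positions
    M = X.copy()                             # memories (best position per crow)
    fit = np.array([f(x) for x in M])
    for t in range(iters):
        stage = t / iters
        ap, fl = ((0.3, 2.5) if stage < 1/3 else   # explore early
                  (0.2, 2.0) if stage < 2/3 else   # balance in the middle
                  (0.1, 1.5))                      # exploit late
        for i in range(n):
            j = rng.integers(n)                    # crow i follows crow j
            if rng.random() >= ap:
                X[i] = X[i] + rng.random() * fl * (M[j] - X[i])
            else:
                X[i] = rng.uniform(lb, ub, dim)    # random relocation
            X[i] = np.clip(X[i], lb, ub)
            fi = f(X[i])
            if fi < fit[i]:                        # update memory
                M[i], fit[i] = X[i].copy(), fi
    return M[np.argmin(fit)], fit.min()

best_x, best_val = apicsa_like(sphere)
```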
Aspect Sentiment Analysis of Chinese Online Course Review Based on Efficient Transformer
PAN Fang, ZHANG Hui-bing, DONG Jun-chao, SHOU Zhao-yu
Computer Science. 2021, 48 (6A): 264-269.  doi:10.11896/jsjkx.200800116
Abstract PDF(1922KB) ( 1319 )   
References | Related Articles | Metrics
Accurately mining the emotional information contained in online course reviews is of great value for the healthy development of online courses.Most existing research on sentiment analysis of Chinese online course reviews uses coarse-grained models,which cannot accurately express fine-grained sentiment towards each aspect of a review sentence.This paper proposes an efficient Transformer-based aspect sentiment analysis model for Chinese online course reviews.Firstly,dynamic word vector encodings of the review's aspect and context are obtained with the ALBERT pre-training model.Then,the semantic representations of the aspect and context are produced by an efficient Transformer that takes the word vectors in parallel.Finally,an interactive attention mechanism learns the important parts of the context and aspects in the course review,and the final representation is fed into the sentiment classification layer to predict the sentiment polarity.Experimental results on real datasets of Chinese MOOCs show that the accuracy of the proposed model exceeds 80% at a lower time cost compared with the baseline models.
Ensemble Learning Algorithm Based on Intuitionistic Fuzzy Sets
DAI Zong-ming, HU Kai, XIE Jie, GUO Ya
Computer Science. 2021, 48 (6A): 270-274.  doi:10.11896/jsjkx.200700036
Abstract PDF(2754KB) ( 709 )   
References | Related Articles | Metrics
In order to improve the classification accuracy and generalization ability of traditional machine learning algorithms,this paper proposes an ensemble learning algorithm based on intuitionistic fuzzy sets (IFS-EL).The algorithm constructs an intuitionistic fuzzy preference relation (IFPR) matrix according to the classification accuracy of the traditional classifiers.The matrix is used to determine the weights of the classifiers,and multi-criteria group decision making (MCGDM) is used to determine the sample classification result.The experiments use 7 classification data sets from UCI,with training and test sets split 7:3.The classification results are compared with popular traditional and ensemble classification algorithms,namely SVM,LR,NB,Boosting and Bagging;the average accuracy of the proposed algorithm is improved by 1.91%,3.89%,7.80%,3.66% and 4.72%,respectively.The experimental results show that IFS-EL can improve classification accuracy and generalization ability.
Vehicle Flow Measuring of UAV Based on Deep Learning
NIU Kang-li, CHEN Yu-zhang, ZHANG Gong-ping, TAN Qian-cheng, WANG Yi-chong, LUO Mei-qi
Computer Science. 2021, 48 (6A): 275-280.  doi:10.11896/jsjkx.200900149
Abstract PDF(3368KB) ( 1339 )   
References | Related Articles | Metrics
With the popularization of the smart city concept,intelligent traffic management has become a focus for researchers.In order to solve the problem of road traffic flow statistics,this paper proposes a residual-network-based algorithm for measuring traffic flow from UAV aerial video.A fully connected multi-scale residual learning block (FMRB) is introduced into the network to alleviate gradient dispersion and allow image features to be better extracted and learned.At present,existing vehicle detection algorithms have low accuracy,and most of them can only detect vehicles without counting traffic flow.In this paper,combined with a video frame estimation method,real-time monitoring and statistics of traffic flow are realized.Compared with the SSD,YOLOv2 and YOLOv3 algorithms in vehicle detection performance,the results show that,trained on a self-built data set,the proposed method,which introduces the multi-scale residual learning block (FMRB) for vehicle recognition in remote sensing images,achieves higher recognition accuracy.In field traffic flow monitoring,the detection error rate is less than 1%,showing strong practical value.
Underwater Robot Visual Simulation Based on UNITY3D
CHENG Yu, LIU Tie-jun, TANG Yuan-gui, WANG Jian, JIANG Zhi-bin, QI Sheng
Computer Science. 2021, 48 (6A): 281-284.  doi:10.11896/jsjkx.200700131
Abstract PDF(2848KB) ( 1203 )   
References | Related Articles | Metrics
Visual simulation plays a very important role in the development of underwater robots.On the one hand,it can be used in underwater robot navigation for real-time monitoring and display of robot attitude information;together with seabed information,it provides important auxiliary information for the operator.On the other hand,it is also used in the simulation of the test stage,where it can feed back seabed obstacle information and provide elevation and depth data.Aiming at the needs of underwater robot visual simulation,this paper designs a visual simulation method for underwater robots,develops it using UNITY3D technology,creates scenes according to actual map data,displays the underwater navigation attitude of the robot,and improves the rendering quality and realism of the sea view.This method has been applied to the full-ocean-depth autonomous and remotely-operated vehicle "Haidou-1" and an AUV in the national key research and development plan of the 13th Five-Year Plan.In the demonstration,testing and practical stages,this method is of great significance and provides strong support for further research on underwater robots.
Construction and Application of Knowledge Graph for Industrial Assembly
XU Jin
Computer Science. 2021, 48 (6A): 285-288.  doi:10.11896/jsjkx.200600116
Abstract PDF(2150KB) ( 1949 )   
References | Related Articles | Metrics
In the context of smart manufacturing in the new era,traditional industrial assembly design methods can no longer meet modern users' needs for intelligence,efficiency and precision.Promoting the intelligence of industrial design has become a research hotspot and a top priority in the industrial field.Building on existing industrial assembly design methods,this paper develops an assembly-design-oriented knowledge graph:it constructs an assembly design ontology model from assembly design specifications,acquires part data from three-dimensional drawing files,identifies part entities and the relationships between parts,extracts and fuses part knowledge,and stores the acquired assembly data in a graph database to construct an industrial assembly knowledge graph,taking the automobile engine field as an example.The experimental results verify the feasibility of applying knowledge graphs to assembly.
ADCSM:A Fine-grained Driving Cycle Model Construction Method
LUO Jing-jie, WANG Yong-li
Computer Science. 2021, 48 (6A): 289-294.  doi:10.11896/jsjkx.200600019
Abstract PDF(3858KB) ( 783 )   
References | Related Articles | Metrics
The driving cycle of a car reflects the kinematic characteristics of the car driving on the road.Existing methods of constructing driving cycles often suffer from coarse granularity and low accuracy.In order to solve these problems,a fine-grained method for constructing vehicle driving cycle models is proposed,called the construction method of automobile driving cycles based on SOM and a Markov model (ADCSM).First,the data are cleaned by the Daubechies-4 wavelet.The cleaned data are divided into many short trips,and 10 feature parameters are extracted from each short trip.The 10 feature parameters are clustered using a SOM network with a (1*3) output layer to obtain a cluster-label sequence,from which a Markov model is established.Finally,the driving cycle is constructed through the ADCSM algorithm.The obtained driving cycles are compared with the results of the traditional K-means clustering construction method.The experimental data show that the final error of ADCSM is 4.07%,while that of traditional K-means is 8.77%.ADCSM's SOM neural network clustering achieves higher clustering accuracy than the traditional K-means method and has the ability to learn the working conditions.ADCSM uses the Markov model to reflect the transition relationships of urban driving conditions.Compared with the traditional K-means construction method,the granularity is finer,so the synthesized driving cycles are more effective than traditional ones and better reflect the driving characteristics of the city.
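The Markov step of this kind of construction can be sketched as follows: given a sequence of short-trip cluster labels (here random stand-ins for the SOM output), estimate the transition matrix and sample a synthetic label sequence, from which a driving cycle would then be stitched together out of representative short trips.

```python
import numpy as np

rng = np.random.default_rng(1)
labels = rng.integers(0, 3, size=200)            # stand-in for SOM cluster labels

# maximum-likelihood transition matrix of the 3-state Markov chain
P = np.zeros((3, 3))
for a, b in zip(labels[:-1], labels[1:]):
    P[a, b] += 1
P = (P + 1e-9) / (P + 1e-9).sum(axis=1, keepdims=True)   # tiny smoothing for safety

# sample a synthetic sequence of short-trip classes
state, synthetic = labels[0], [int(labels[0])]
for _ in range(50):
    state = rng.choice(3, p=P[state])
    synthetic.append(int(state))
# each synthetic label would then be replaced by a representative short trip
```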
Relation Classification of Chinese Causal Compound Sentences Based on Transformer Model and Relational Word Feature
YANG Jin-cai, CAO Yuan, HU Quan, SHEN Xian-jun
Computer Science. 2021, 48 (6A): 295-298.  doi:10.11896/jsjkx.200500019
Abstract PDF(2805KB) ( 996 )   
References | Related Articles | Metrics
Chinese compound sentences have rich and complicated semantic relations.Recognizing the relation category of a Chinese compound sentence means judging the semantic relations of the sentence,which is very important for analyzing the meaning of compound sentences.Causal compound sentences are the most frequently used compound sentences in Chinese articles.In this paper,a corpus of causal compound sentences with two clauses is taken as the research object.A deep learning method is used to mine the hidden features of compound sentences automatically.At the same time,this paper integrates significant linguistic knowledge of relational words into the proposed model.Combining word2vec word vectors and one-hot-encoded relational word features as input,a Transformer model using a convolutional neural network as the feedforward layer is exploited to identify the relation category of causal compound sentences.Using our model,the F1 value of the experiment reaches 92.13%,which is better than the comparative models.
Recognition and Transformation for Complex Noun Phrases Based on Boundary Perception
LIU Xiao-die
Computer Science. 2021, 48 (6A): 299-305.  doi:10.11896/jsjkx.200500157
Abstract PDF(1698KB) ( 1056 )   
References | Related Articles | Metrics
This paper proposes a rule-based method for recognizing and transforming complex noun phrases to improve their translation quality in patent machine translation.By analyzing the semantic chunks and structural units of Chinese and English complex noun phrases,guided by boundary perception,this paper extracts the feature words,builds 57 recognition rules,designs combination strategies and formalizes Chinese complex noun phrases.By comparing Chinese and English complex noun phrases,this paper summarizes the differences between them and determines transformation strategies accordingly.Finally,the method is applied to an existing machine translation system to test our work.Experimental results show that our rules and strategies are efficient and improve translation quality in patent machine translation.
Machine Learning Process Composition Based on Hierarchical Label
CHEN Yan, CHEN Jia-qing, CHEN Xing
Computer Science. 2021, 48 (6A): 306-312.  doi:10.11896/jsjkx.200500077
Abstract PDF(2325KB) ( 514 )   
References | Related Articles | Metrics
With the rise of machine learning,the number of operators increases rapidly,the solution space to be searched for composing operators grows,and process composition time increases exponentially.How to reduce the search space,thereby reducing assembly time,and realize machine learning process composition that meets users' functional needs has become a current research hotspot.This paper proposes a process composition method based on hierarchical labels to support machine learning.Firstly,labels are extracted from operator semantics,and a hierarchical label model is determined according to the semantic scope of each label.Secondly,label relationships are discovered for the machine learning domain,a domain composition model is established,and the final domain label model is determined according to the functional requirements specified by users.Finally,the domain operators are bound with label semantics,the domain operator relationship model is determined,and the operators are composed according to the assembly rules to form all operator processes that meet the users' functional requirements.At the end of this paper,an example is given to show the feasibility of the method,and a result verification standard is proposed to show the correctness and completeness of the results.
Improvement of DV-Hop Location Algorithm Based on Hop Correction and Genetic Simulated Annealing Algorithm
WANG Guo-wu, CHEN Yuan-yan
Computer Science. 2021, 48 (6A): 313-316.  doi:10.11896/jsjkx.201000101
Abstract PDF(2187KB) ( 609 )   
References | Related Articles | Metrics
In order to solve the localization error caused by hop count and average hop distance in the traditional Distance Vector-Hop (DV-Hop) algorithm,an improved DV-Hop localization algorithm based on hop correction and genetic simulated annealing is proposed.The improvement mainly lies in the calculation of the exact hop count of known nodes:the deviation coefficient is calculated,a correction value is added for unknown nodes with a large number of hops,and then a genetic simulated annealing algorithm is used to optimize the average hop distance.The simulation results show that the improved algorithm can significantly improve node positioning accuracy.
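For reference, the classical DV-Hop quantities that the improvement targets can be sketched as follows: minimum hop counts via shortest paths, each anchor's average hop distance, and the resulting distance estimates. The hop-correction term and the genetic simulated annealing refinement are not shown, and the node layout and radio range are illustrative.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

rng = np.random.default_rng(0)
pos = rng.uniform(0, 100, (50, 2))              # 50 nodes in a 100x100 area
anchors = [0, 1, 2, 3]                          # indices of known (anchor) nodes
radio = 35.0

# connectivity graph and minimum hop counts between all node pairs
D = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
adj = (D <= radio) & (D > 0)
hops = shortest_path(adj.astype(float), unweighted=True)

# each anchor's average hop distance, computed from the other anchors
hop_size = {}
for a in anchors:
    others = [b for b in anchors if b != a]
    hop_size[a] = sum(D[a, b] for b in others) / sum(hops[a, b] for b in others)

# an unknown node estimates its distance to every anchor
u = 10
est = {a: hops[a, u] * hop_size[a] for a in anchors}
# trilateration (or, in the paper, genetic simulated annealing) then locates node u
```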
Study on Method for Estimating Wrist Muscle Force Based on Surface EMG Signals
GUO Fu-min, ZHANG Hua, HU Rong-hua, SONG Yan
Computer Science. 2021, 48 (6A): 317-320.  doi:10.11896/jsjkx.200600021
Abstract PDF(3592KB) ( 1187 )   
References | Related Articles | Metrics
Human-machine interaction force control based on surface electromyography (sEMG) needs to detect muscle force,but it is very difficult to measure muscle force directly and accurately,so estimation methods are often used instead.A method for estimating wrist muscle force from sEMG signals is proposed.The method first builds a muscle force acquisition platform,then collects a series of wrist muscle force signals and sEMG signals at different force levels,filters and synchronously matches the two signals,and takes the root mean square,mean absolute value (MAV),mean frequency and spectral moments ratio (SMR) of the sEMG signal as the four features.Finally,a support vector machine (SVM) model is used to estimate muscle force and is compared with a BP neural network model.The root mean square errors of the muscle force estimates for two experimenters reach 9.1% MVC (maximum isometric contraction force) and 8.7% MVC,respectively.The results show that the proposed method is an effective and simple way to estimate wrist muscle force.
Analysis and Application of Global Aviation Network Structure Based on Complex Network
HU Jun, WANG Yu-tong, HE Xin-wei, WU Hui-dong, LI Hui-jia
Computer Science. 2021, 48 (6A): 321-325.  doi:10.11896/jsjkx.200900112
Abstract PDF(2627KB) ( 998 )   
References | Related Articles | Metrics
With the continuous expansion of domestic and international trade activities,the economic and social value of air transportation is constantly increasing.As aviation networks are the carrier of air transport,their empirical study and analysis is of great significance.Based on global flight information,this paper analyzes the global aviation network with the help of complex network theory and finds that the global aviation network is a scale-free small-world network whose degree distribution follows a power law.Through fitting,it is found that the relationship between node betweenness and degree is mainly exponential for small degrees but becomes mainly linear as degree increases,and that the clustering coefficient tends to be stable as degree increases.In addition,through a community partition algorithm,this paper finds that the global aviation network has an obvious regional clustering effect.
Music Style Transfer Method with Human Voice Based on CQT and Mel-spectrum
YE Hong-liang, ZHU Wan-ning, HONG Lei
Computer Science. 2021, 48 (6A): 326-330.  doi:10.11896/jsjkx.200900104
Abstract PDF(3666KB) ( 1105 )   
References | Related Articles | Metrics
In recent years,generative adversarial networks have performed well in image style transfer,but their performance in the music domain has been mediocre,and existing music style transfer methods perform poorly on music containing human voices.To address these problems,the CQT and Mel-spectrum features of the music are extracted,CycleGAN is used to transfer the style of the combined CQT and Mel-spectrum features,and a WaveNet vocoder is then used to decode the transferred spectrum,thereby realizing style transfer for music with vocals.The proposed model is evaluated on the public data set FMA,and the average style transfer rate of music that meets the requirements reaches 94.07%.Compared with other algorithms,the style transfer rate and audio quality of the music produced by this method are better.
Collision Detection Algorithm of AABB Bounding Box Based on B+ Tree
YANG Fan
Computer Science. 2021, 48 (6A): 331-333.  doi:10.11896/jsjkx.200600113
Abstract PDF(3401KB) ( 1160 )   
References | Related Articles | Metrics
For collision detection algorithms that use traditional AABB bounding boxes to construct a bounding box hierarchy tree,the number of hierarchy trees,the number of leaf nodes,and the number of bytes stored in each node are the main factors affecting detection efficiency.In order to reduce the impact of node storage on collision detection efficiency and improve detection performance,this paper adopts a B+ tree storage structure to store bounding box information.Before the bounding box intersection test,the storage indexes of the nodes are already ordered,so no additional sorting is required,which reduces memory overhead and avoids unnecessary bounding box tests.In addition,the non-leaf nodes of the B+ tree do not store concrete data,thereby reducing the storage space of the whole tree.Experiments show that,under the same detection environment and detection objects,the detection time of the AABB collision detection algorithm using B+ tree storage is significantly shorter than that of the traditional AABB algorithm.
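Whatever storage structure holds the hierarchy, the per-pair test it ultimately performs is the standard axis-aligned overlap check, sketched below: two boxes collide only if their intervals overlap on every axis. The class and field names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class AABB:
    min_pt: tuple   # (x, y, z)
    max_pt: tuple

def aabb_overlap(a: AABB, b: AABB) -> bool:
    # separated on any axis means no collision
    return all(a.min_pt[k] <= b.max_pt[k] and b.min_pt[k] <= a.max_pt[k]
               for k in range(3))

box1 = AABB((0, 0, 0), (2, 2, 2))
box2 = AABB((1, 1, 1), (3, 3, 3))
print(aabb_overlap(box1, box2))   # True: the boxes intersect
```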
Application of Spatial-Temporal Graph Attention Networks in Trajectory Prediction for Vehicles at Intersections
ZENG Wei-liang, CHEN Yi-hao, YAO Ruo-yu, LIAO Rui-xiang, SUN Wei-jun
Computer Science. 2021, 48 (6A): 334-341.  doi:10.11896/jsjkx.200800066
Abstract PDF(4200KB) ( 1175 )   
References | Related Articles | Metrics
With the rapid development of artificial intelligence and big data technology,applying deep learning to trajectory prediction for autonomous driving has become a hot topic in recent years.Accurate trajectory prediction of motor and non-motor vehicles,especially in mixed traffic scenes,is the premise of safe navigation and efficient path planning in autonomous driving.Aiming at the path planning problem when interactions occur among different road users at an intersection,a modelling scheme based on graph attention networks is proposed.The model combines spatial and temporal interactions among traffic agents to improve the accuracy of trajectory prediction for motor and non-motor vehicles.In addition,the proposed model can be applied to path planning for autonomous driving,ensuring that motor and non-motor vehicles can pass through the interactions safely and efficiently in complex traffic scenes.In the case of simple interactions,the average displacement error and final displacement error of the predicted trajectories are relatively small,and in the case of complex interactions,the future paths provided by the model are more reasonable than the recorded ground truths.
Attribute Reduction Method Based on k-prototypes Clustering and Rough Sets
LI Yan, FAN Bin, GUO Jie, LIN Zi-yuan, ZHAO Zhao
Computer Science. 2021, 48 (6A): 342-348.  doi:10.11896/jsjkx.201000053
Abstract PDF(1928KB) ( 576 )   
References | Related Articles | Metrics
For target information systems containing both continuous and symbolic values,a novel attribute reduction method suitable for hybrid data is proposed based on k-prototypes clustering and rough set theory under equivalence relations.Firstly,k-prototypes clustering is applied to the information system by defining a distance for hybrid data,forming a partition of the universe.The obtained clusters are then used in place of equivalence classes in rough set theory,and the concepts of cluster-based approximation sets,positive region and attribute reduction are correspondingly proposed.An attribute importance measure is also defined based on information entropy and the clusters.Finally,a variable-precision positive-region reduction method is established,which can process both numerical and symbolic data,remove redundant attributes,reduce storage and running time,and improve the performance of classification algorithms.Besides,partitions of the universe at different granularities can be obtained by adjusting the clustering parameter k,and thus the attribute reduction can be optimized.A large number of experiments are carried out on 11 UCI data sets using four common classification algorithms;the classification accuracy before and after reduction is compared,and the influence of the parameters on the results is analyzed in detail,verifying the effectiveness of the reduction method.
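A minimal sketch of the mixed distance that k-prototypes-style clustering of hybrid data typically uses: squared Euclidean distance on the numerical attributes plus a weighted count of mismatches on the symbolic attributes. The weight gamma and the example attributes are illustrative, not the paper's exact definition.

```python
import numpy as np

def mixed_distance(x_num, x_cat, proto_num, proto_cat, gamma=1.0):
    # numerical part: squared Euclidean distance
    num_part = float(np.sum((np.asarray(x_num) - np.asarray(proto_num)) ** 2))
    # symbolic part: number of mismatched categorical attributes
    cat_part = sum(1 for a, b in zip(x_cat, proto_cat) if a != b)
    return num_part + gamma * cat_part

# one object and one cluster prototype with 2 numerical and 2 symbolic attributes
d = mixed_distance([1.2, 3.4], ["red", "high"], [1.0, 3.0], ["red", "low"])
# objects are assigned to the nearest prototype; the resulting clusters then
# play the role of equivalence classes in the rough-set reduction step
```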
Research on Sentiment Analysis Based on Transformer and Multi-channel Convolutional Neural Network
HUO Shuai, PANG Chun-jiang
Computer Science. 2021, 48 (6A): 349-356.  doi:10.11896/jsjkx.200800004
Abstract PDF(3276KB) ( 1318 )   
References | Related Articles | Metrics
Text sentiment analysis is one of the classic tasks of natural language processing.This paper proposes a text sentiment analysis model that combines a Transformer feature extractor with a multi-channel convolutional neural network.On the basis of static word vectors trained by traditional Word2vec,GloVe,etc.,the model uses the Transformer feature extractor to produce layered,dynamic word representations and fine-tunes them on specific data sets,which effectively improves the representation ability of the word vectors.The multi-channel convolutional neural network considers the dependencies between word sequences at different scales,effectively extracts features while reducing dimensionality,captures the contextual semantic information of sentences,enables the model to capture more semantic and emotional information,improves the semantic expression of the text,and finally performs sentiment classification through the Softmax activation function.The model is tested on the IMDb and SST-2 movie review datasets,reaching accuracies of 90.4% and 90.2% on the test sets,indicating that the proposed model has better classification accuracy than traditional word embeddings combined with CNN or RNN.
Unsupervised Domain Adaptive Method Based on Optimal Selection of Self-supervised Tasks
WU Lan, WANG Han, LI Bin-quan
Computer Science. 2021, 48 (6A): 357-363.  doi:10.11896/jsjkx.201000030
Abstract PDF(2446KB) ( 847 )   
References | Related Articles | Metrics
Unsupervised domain adaptation methods use knowledge learned from labeled source-domain data to classify unlabeled target-domain data,and have become the mainstream approach to feature alignment between two domains in transfer learning.Considering that the amount of labeled data is small,its quality is not high,and the extracted features are therefore incomplete,this paper proposes an unsupervised domain adaptation method based on the optimal selection of self-supervised tasks.To give the features stronger semantic information,multiple self-supervised tasks are applied to the unlabeled data in both domains.In addition,a new intelligent combinatorial optimization strategy is proposed to adaptively select effective features for the self-supervised tasks.Finally,the two domains are brought closer along task-related directions so that the classifier trained on labeled source-domain data generalizes better to the target domain.The simulation experiments compare the method on six public benchmark datasets from three aspects:classification accuracy,amount of training data,and the effect of using self-supervised tasks.Experimental results show that the proposed method outperforms existing advanced methods in all three aspects:classification accuracy is improved by 8% on the same datasets,the amount of data used is reduced by 12% under the same accuracy requirement,and accuracy is improved by 11% compared with a single self-supervised task.
Network & Communication
Overview of Onboard Satellite Communicating System
DU Chen-hui, XIANG Si-si, HUANG De-gang, CHEN Lang
Computer Science. 2021, 48 (6A): 364-368.  doi:10.11896/jsjkx.210100154
Abstract PDF(2531KB) ( 820 )   
References | Related Articles | Metrics
With the development of the terrestrial Internet and the popularization of air travel,passengers increasingly want to access the Internet during flight.Against the background of in-flight connectivity,this paper discusses its necessity from three aspects:passenger demand,the in-flight connectivity market,and the current status of applications.Focusing on the development of satellite communications,it is found that satellite broadband has evolved from the Ku band to the Ka band and the HTS (High Throughput Satellite) Ku band.The key technologies of in-flight connectivity are analyzed,and the results show that satellite communication technologies,especially in the Ka and HTS Ku bands in the USA and Europe,are more mature.In order to realize domestic production of in-flight connectivity,it is necessary to increase investment in these two key technologies.The industry chain of in-flight connectivity is complex and long,which is both an opportunity and a challenge for the companies along it.Accordingly,the requirements of passengers,airlines,third-party service providers and airworthiness are summarized in this paper.
Review of Low Power Architecture for Wireless Network Cameras
HE Quan-qi, YU Fei-hong
Computer Science. 2021, 48 (6A): 369-373.  doi:10.11896/jsjkx.201100099
Abstract PDF(2810KB) ( 840 )   
References | Related Articles | Metrics
At present,wireless network cameras play an increasingly important role in environmental monitoring,military monitoring and urban monitoring.When used in remote or closed environments,a wireless network camera is powered by batteries,and it is inconvenient to replace them,so the camera must meet long battery life requirements.The battery life of the camera is determined by its power consumption and the battery capacity.Since there has been no breakthrough in battery technology,low-power architecture design for wireless network cameras has become an important research direction.Firstly,the hardware solutions and power consumption of wireless network cameras are surveyed and analyzed.Then,the power performance of different coding algorithms is compared.For dynamic power management,a dynamic power model of the wireless network camera is proposed and analyzed,which provides a theoretical basis for dynamic power management;the power model of camera state switching is also analyzed,and the threshold time for state switching in time-out mode is determined.Finally,the overall design process of a low-power architecture for wireless network cameras is presented.
Energy-aware Fault-tolerant Collaborative Task Execution Algorithm in Edge Computing
XUE Yan-fen, GAO Ji-mei, FAN Gui-sheng, YU Hui-qun, XU Ya-jie
Computer Science. 2021, 48 (6A): 374-382.  doi:10.11896/jsjkx.200900027
Abstract PDF(5275KB) ( 879 )   
References | Related Articles | Metrics
Edge computing has been envisioned as an effective solution to enhance the computing capabilities of resource-constrained mobile devices.It allows users to satisfy resource requirements by offloading heavy computing tasks to the edge cloud.However,the issues of energy consumption and reliability still need to be solved.This paper first proposes an energy-aware collaborative task execution scheduling model,which combines a computation offloading model and a fault-tolerant model to reduce energy consumption while improving the reliability of edge computing within task time constraints.Then,an energy-aware fault-tolerant collaborative task execution scheduling algorithm,comprising collaborative task execution,initial scheduling and online scheduling,is proposed to improve reliability while reducing energy consumption.Collaborative task execution determines the execution decision of tasks by partial critical path analysis and the one-climb policy.Initial scheduling determines the fault-tolerant strategy (replication or resubmission) for tasks executed on the edge cloud,ensuring that tasks are processed successfully.Online scheduling adjusts the fault-tolerant strategy in real time when a fault occurs.Finally,through extensive simulation experiments with three representative task topologies,the performance differences among three scenarios are evaluated in terms of task completion rate and energy consumption ratio.Results show that,as the deadline,data transmission rate and fault tolerance rate vary,the proposed method is more reliable than collaborative task execution alone and more energy-aware than local execution.
Research on Mobile Edge Computing in Expressway
SONG Hai-ning, JIAO Jian, LIU Yong
Computer Science. 2021, 48 (6A): 383-386.  doi:10.11896/jsjkx.200900212
Abstract PDF(3056KB) ( 724 )   
References | Related Articles | Metrics
With the improvement of expressway construction,a large number of computing devices have appeared on both sides of expressways,so mobile edge computing technology can be used in expressway scenarios.Mobile edge computing can provide vehicles on expressways with low-latency,high-bandwidth and reliable computing services,and it is also an important means to realize intelligent transportation systems.Considering the special environment of expressways,this paper focuses on the problems of task offloading and resource allocation.Combined with the 5G mobile network,a mobile edge computing model for expressway driving tasks is established,an efficient and reasonable resource scheduling strategy for different kinds of computing tasks is designed,and a dynamic scheduling strategy combining the genetic algorithm and the ant colony algorithm is put forward.To verify the effectiveness of the model,load balancing is used as the performance index in simulation experiments.The experimental results show that the proposed algorithm can effectively reduce load gaps and computing cost compared with similar algorithms.
Low-complexity Subcarrier Allocation Algorithm for Underwater OFDM Acoustic Communication Systems
YOU Ling, GUAN Zhang-jun
Computer Science. 2021, 48 (6A): 387-391.  doi:10.11896/jsjkx.201100064
Abstract PDF(2558KB) ( 587 )   
References | Related Articles | Metrics
In recent years,underwater acoustic communication technology based on OFDM modulation has developed rapidly with the advancement of the national Smart Ocean strategy and the demand for marine resource development.One of the key issues is subcarrier allocation to optimize system performance.In this paper,a low-complexity subcarrier allocation algorithm for underwater OFDM acoustic communication systems is proposed:in each round,candidate nodes are selected according to a given criterion,and the node with the worst comprehensive channel state is chosen as the allocation target.The algorithm improves the overall transmission performance of the system while also taking into account the transmission performance of the worst sensor node.Besides,if a node obtains no subcarrier resource over several allocation rounds,the idle node in the last round is assigned the subcarrier with the best channel condition ahead of other nodes.Simulation results show that this improvement solves the problem while hardly reducing the performance of the original algorithm.The proposed algorithm has reference value for resource allocation in underwater multi-sensor networks.
SDN Traffic Prediction Based on Graph Convolutional Network
SONG Yuan-long, LYU Guang-hong, WANG Gui-zhi, JIA Wu-cai
Computer Science. 2021, 48 (6A): 392-397.  doi:10.11896/jsjkx.200800090
Abstract PDF(2970KB) ( 1628 )   
References | Related Articles | Metrics
Accurate and real-time traffic forecasting plays an important role in SDN and is of great significance for network traffic engineering and network planning.Because of the constraints of network topology and dynamic changes over time,that is,spatial and temporal features,network traffic prediction is a challenging problem.In order to capture the spatial and temporal dependencies simultaneously,the Graph Convolutional Gated Recurrent Unit network model (GCGRU) is proposed,a neural-network-based traffic forecasting method that combines the graph convolutional network (GCN) and the gated recurrent unit (GRU).Specifically,GCN is used to learn complex topological structures to capture spatial dependence,and the gated recurrent unit is used to learn the dynamic changes of traffic data to capture temporal dependence.GCGRU is compared with classic methods using MSE,RMSE and MAE as evaluation metrics.The experimental results show that GCGRU performs better in traffic prediction.
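A minimal PyTorch sketch of this kind of GCN-plus-GRU combination: at every time step a graph convolution mixes traffic features over the normalized topology, and a GRU then models the resulting sequence per node. The layer sizes, toy topology and random inputs are illustrative and do not reproduce the paper's exact architecture.

```python
import torch
import torch.nn as nn

class GCGRULike(nn.Module):
    def __init__(self, adj, in_dim, hid_dim):
        super().__init__()
        A = adj + torch.eye(adj.size(0))                 # add self-loops
        d = A.sum(dim=1)
        self.A_hat = A / torch.sqrt(d).unsqueeze(1) / torch.sqrt(d).unsqueeze(0)
        self.gcn = nn.Linear(in_dim, hid_dim)            # shared GCN weights
        self.gru = nn.GRU(hid_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, 1)                 # next-step traffic volume

    def forward(self, x):                                # x: (T, N, in_dim)
        spatial = torch.relu(self.gcn(self.A_hat @ x))   # graph convolution per step
        seq = spatial.permute(1, 0, 2)                   # (N, T, hid_dim)
        _, h = self.gru(seq)                             # temporal dependence
        return self.out(h[-1])                           # (N, 1) prediction per node

adj = (torch.rand(8, 8) > 0.6).float()                   # toy 8-node topology
adj = ((adj + adj.T) > 0).float()
model = GCGRULike(adj, in_dim=2, hid_dim=16)
pred = model(torch.rand(12, 8, 2))                       # 12 past steps, 8 nodes
```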
Energy Efficient Power Allocation for MIMO-NOMA Communication Systems
CHEN Yong, XU Qi, WANG Xiao-ming, GAO Jin-yu, SHEN Rui-juan
Computer Science. 2021, 48 (6A): 398-403.  doi:10.11896/jsjkx.200900175
Abstract PDF(2462KB) ( 847 )   
References | Related Articles | Metrics
In this paper,a power allocation algorithm is proposed for non-orthogonal multiple access (NOMA) communication systems with multiple-input multiple-output (MIMO).The base station (BS) divides the connected users into several clusters,and handles inter-cluster interference by zero-forcing reception and intra-cluster interference by successive interference cancellation.To improve the energy efficiency of the MIMO-NOMA system,the transmit power allocation is optimized at the BS.We formulate the power allocation problem as a non-convex optimization problem in fractional form,which is difficult to solve directly.First,an auxiliary variable is introduced to transform the original objective function into a non-fractional form,and it is searched by the bisection method.Since the transformed optimization problem is still non-convex,it is then solved by a quadratic transform and iterative processing.Theoretical analysis and simulation results show that the proposed method can effectively improve the energy efficiency and sum-rate performance of the communication system compared with the traditional orthogonal access method.In addition,the proposed method can ensure fairness between users by adjusting the weight factor.
Outdoor Fingerprint Positioning Based on LTE Networks
LI Da, LEI Ying-ke, ZHANG Hai-chuan
Computer Science. 2021, 48 (6A): 404-409.  doi:10.11896/jsjkx.200700170
Abstract PDF(3400KB) ( 951 )   
References | Related Articles | Metrics
Owing to its satisfactory positioning accuracy in complex environments,fingerprint-based positioning has always been a hot research topic.By leveraging long term evolution (LTE) signals,a deep-learning-based outdoor fingerprint positioning method is proposed to construct a positioning system.Inspired by computer vision techniques,the geo-tagged signals are converted into grayscale images for positioning,and positioning accuracy is expressed as the classification accuracy on the constructed grayscale image dataset.In this paper,a two-level training architecture is developed to realize the classification of a deep neural network (DNN).First,a deep residual network (ResNet) is used to pre-train the fingerprint database and obtain a coarse positioning model.Then,a transfer learning algorithm based on a back propagation neural network (BPNN) is used to further extract signal features and obtain an accurate positioning model.The experiment is conducted in a real outdoor environment,and the results show that the proposed positioning system achieves satisfactory positioning accuracy in complex environments.
Reliable Transmission Strategy for Underwater Wireless Sensor Networks
HONG Chang-jian, GAO Yang, ZHANG Fan, ZHANG Lei
Computer Science. 2021, 48 (6A): 410-413.  doi:10.11896/jsjkx.201100048
Abstract PDF(2156KB) ( 638 )   
References | Related Articles | Metrics
Aiming at the limitation that,in the Layered-DBR algorithm,network nodes need to perceive the residual energy of all nodes in the network and the distances to neighbor nodes,a reliable transmission strategy (RTS) for underwater sensor networks is proposed,which calculates the energy and distance factors from the current node's depth information,residual energy and the network layer spacing.This paper develops a network performance evaluation method that balances network lifetime and packet loss rate,determines the proportions of the energy factor and distance factor through simulation experiments,and finally gives the calculation method for the message forwarding probability.Simulation comparison shows that,compared with DBR,DMBR and Layered-DBR,the RTS algorithm can effectively control network redundancy,reduce packet loss,and prolong the network lifetime.
Effect of Cross-polarization for Dual-polarized MIMO Channel in Satellite Communications
LENG Yue, XIE Ya-qin, LI Peng
Computer Science. 2021, 48 (6A): 414-419.  doi:10.11896/jsjkx.200900173
Abstract PDF(3022KB) ( 781 )   
References | Related Articles | Metrics
The combination of MIMO (multiple-input multiple-output) technology and satellite communications (SATCOM) can make full use of space diversity and improve gain without additional power and bandwidth.In a mobile satellite system,the limited space on the satellite is not conducive to obtaining spatial diversity and multiplexing gains,so a multi-antenna environment is generally constructed with differently polarized antennas to obtain the corresponding gains.This paper presents a method to analyze the impact of cross-polar discrimination (XPD) on a single-satellite dual-polarized MIMO communication system.In a polarization-diversity MIMO satellite system model,different cross-polarization interference coefficients are evaluated through simulation,in terms of bit error rate and channel capacity,in three scenarios:open area,suburban and urban.The results show that the smaller the cross-polarization interference coefficient,the better the BER (bit error rate) performance and the larger the channel capacity.Moreover,when a signal is transmitted in urban areas,the channel capacity is higher than in open and suburban areas.
Cloud Task Scheduling Algorithm Based on Three-way Decisions
WANG Zheng, JIANG Chun-mao
Computer Science. 2021, 48 (6A): 420-426.  doi:10.11896/jsjkx.201000023
Abstract PDF(3268KB) ( 637 )   
References | Related Articles | Metrics
As an essential component of a cloud computing system,task scheduling directly affects resource utilization and service quality.To solve the problems of the Min-Min and Max-Min algorithms on current cloud platforms,such as load imbalance,low overall resource utilization,and large overall task completion time caused by task distribution,a task scheduling optimization algorithm based on three-way decisions (CTSA-3WD) is proposed.First,the algorithm divides tasks into light-load and heavy-load tasks according to their execution time and computational resource requirements.Secondly,according to the proportion of the two types of tasks in the task set,the algorithm divides task sets into three categories and develops a scheduling strategy for each:the Max-Min algorithm is used for task sets with a high percentage of light-load tasks,the Min-Min algorithm is used for task sets with a high proportion of heavy-load tasks,and an improved task scheduling algorithm based on Min-Min and Max-Min is used for task sets with comparable numbers of light-load and heavy-load tasks.Third,the critical resources among the allocated nodes are rescheduled:the algorithm selects the best matching tasks to move to light-load resources,subject to reducing the overall completion time.Experiments based on CloudSim reveal that,compared with Min-Min,Max-Min and selective scheduling algorithms,the CTSA-3WD algorithm can effectively improve overall resource utilization and the quality of service delivered to users,and it also brings the resources in the whole system to a better load-balancing level.
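The two classical heuristics that the three-way strategy chooses between can be sketched as follows: both work on a matrix of estimated completion times (tasks by resources); Min-Min schedules the task with the smallest best completion time first, while Max-Min schedules the task with the largest best completion time first. The random time matrix is illustrative.

```python
import numpy as np

def min_min_max_min(exec_time, use_max=False):
    # exec_time[i, j]: execution time of task i on resource j
    n_tasks, n_res = exec_time.shape
    ready = np.zeros(n_res)                 # time at which each resource becomes free
    unscheduled, order = set(range(n_tasks)), []
    while unscheduled:
        tasks = sorted(unscheduled)
        ct = exec_time[tasks] + ready       # completion time of each pending task
        best_res = ct.argmin(axis=1)        # best resource for each pending task
        best_ct = ct[np.arange(len(tasks)), best_res]
        pick = best_ct.argmax() if use_max else best_ct.argmin()
        task, res = tasks[pick], best_res[pick]
        ready[res] = best_ct[pick]
        unscheduled.remove(task)
        order.append((task, res))
    return order, ready.max()               # schedule and overall makespan

times = np.random.default_rng(0).uniform(1, 10, (12, 3))
print(min_min_max_min(times)[1])                 # Min-Min makespan
print(min_min_max_min(times, use_max=True)[1])   # Max-Min makespan
```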
Dual-NIC Mutual Backup Scheme for Access Point Handoff in Software Defined Wireless Networks
PENG Da-chuan, YANG Xi-min, TANG Wan, ZHANG Xiao, FAN Lei
Computer Science. 2021, 48 (6A): 427-431.  doi:10.11896/jsjkx.201000022
Abstract PDF(3826KB) ( 675 )   
References | Related Articles | Metrics
In software defined wireless networks (SDWN),a mobile terminal is likely to suffer network interruption and improper selection of the target wireless access point (AP) during handoff,which can significantly degrade the quality of real-time application services.To address this issue,this paper proposes an AP handoff scheme based on dual network interface controllers (NICs) in mutual backup.The scheme uses the dual-NIC mutual backup mechanism to realize soft handoff at the link layer:the mobile terminal first associates with the new AP and then disassociates from the old AP.Further,the network address translation (NAT) mechanism is adopted in SDWN to realize seamless handoff at the network layer.Inspired by the technique for order preference by similarity to an ideal solution (TOPSIS),the proposed AP selection algorithm constructs the candidate objects to be evaluated according to the signal strength,load,transmission bandwidth and current number of connections of candidate APs,and selects an AP based on the calculated closeness between each candidate and the ideal object.Finally,experiments are conducted on NS-3.The results show that,compared with the RSSI-based handoff scheme,the transmission delay and packet loss rate of the proposed scheme are reduced by 85.29% and 14.11% respectively,and throughput is increased by 8.94%.The proposed dual-NIC mutual backup handoff scheme can significantly alleviate the network interruption caused by AP handoff in single-NIC schemes and better guarantee the service quality of real-time applications.
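A minimal sketch of the TOPSIS-style closeness computation behind this kind of AP selection: normalize the candidate-AP attribute matrix, weight it, measure distances to the ideal and anti-ideal solutions, and rank by relative closeness. The example attribute values, weights and benefit/cost flags are illustrative, not those of the paper.

```python
import numpy as np

# rows = candidate APs; columns = signal strength (dBm), load, bandwidth, connections
M = np.array([[-55.0, 0.4, 300.0, 12.0],
              [-62.0, 0.2, 450.0,  5.0],
              [-70.0, 0.1, 150.0,  3.0]])
weights = np.array([0.35, 0.25, 0.25, 0.15])
benefit = np.array([True, False, True, False])    # higher-is-better vs cost attributes

V = M / np.linalg.norm(M, axis=0) * weights        # vector-normalize and weight
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_plus  = np.linalg.norm(V - ideal, axis=1)        # distance to the ideal AP
d_minus = np.linalg.norm(V - anti,  axis=1)        # distance to the anti-ideal AP
closeness = d_minus / (d_plus + d_minus)
best_ap = int(np.argmax(closeness))                # AP chosen for the handoff
```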
Design of Low-latency Remote Serial Communication System
YU Xin-yi, WANG Xu-yan, YING Hao-zhe, OU Lin-lin
Computer Science. 2021, 48 (6A): 432-437.  doi:10.11896/jsjkx.200500123
Abstract PDF(2291KB) ( 751 )   
References | Related Articles | Metrics
In order to solve the problems of high upgrade cost,high delay and weak scalability of industrial equipment using serial communication,a low-latency remote serial communication system is designed at the software level in this paper.An embedded control system is used as the gateway for the original serial port data.It uses the multi-coroutine feature of Golang to encapsulate,isolate and convert data from multiple serial ports into Ethernet data,which is transmitted to the client through the KCP algorithm and a P2P transmission channel.The client uses a structure in which the data interface is separated from the driver,and forwards data to the application in different communication modes according to the production scenario.Comparing the designed system with a serial communication system based on the TCP protocol shows that the system has significant advantages in transmission delay.The designed system provides an effective solution to the low-delay problem in remote serial communication.
Information Security
Blockchain Based Audio Copyright Deposit Model
LIU Jia-qi, LIU Bei-li, PENG Tao, DUAN Jiang, KANG Li, CHEN Zhi
Computer Science. 2021, 48 (6A): 438-442.  doi:10.11896/jsjkx.200600148
Abstract PDF(2037KB) ( 994 )   
References | Related Articles | Metrics
This paper proposes an audio copyright certificate model based on blockchain storage, which makes each audio work uniquely identifiable and stores it on a tamper-proof blockchain without the need for an additional copyright agency. Although an audio fingerprint that uniquely identifies a piece of audio can partially solve problems such as piracy in copyright certification, a centralized platform is still needed to eliminate piracy, so copyright protection still suffers from inefficiency, opacity and lack of trust. Here an "audio fingerprint + blockchain" solution is proposed to prevent copyright theft at the copyright registration portal and to provide a technical basis for building a digital copyright trading platform and resolving opaque copyright information. The model builds on the inherent advantages of the blockchain, namely tamper resistance and traceability. The generated audio fingerprint is compressed and split, a timestamp is added, and the result is signed with the user's private key to construct a basic certificate of publication for the work, which is sent to each node of the blockchain. Each node verifies the signature, packages the record into a block and completes the deposit.
Face Anti-spoofing Algorithm for Noisy Environment
ZHUO Ya-qian, OU Bo
Computer Science. 2021, 48 (6A): 443-447.  doi:10.11896/jsjkx.200900207
Abstract PDF(2398KB) ( 708 )   
References | Related Articles | Metrics
In the era of intelligence, face recognition is one of the key technologies for smart identity authentication and is widely used in access control, mobile phone unlocking and financial payment. Face anti-spoofing is used to identify real faces and resist fake-face attacks. Among the existing methods, the local binary pattern (LBP) provides good anti-spoofing performance in practice, but its recognition performance degrades in noisy scenes. For this reason, we propose a pairwise local binary pattern (PLBP) based on adjacent pixel pairs, which improves performance in noisy environments by exploiting the correlation between pixel pairs. Compared with LBP-based methods, the proposed algorithm compares the mean value of an adjacent pixel pair with the neighboring pixels to generate a binary pattern, so that the spatial correlation between pixel pairs can be used to obtain richer facial features. Experimental results show that the performance of the proposed method is better than that of current mainstream LBP-based methods: the accuracy rate reaches 95.05% under noise-free conditions. The method also reduces the performance loss in Gaussian-noise environments and provides stronger robustness.
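The sketch below shows one possible reading of the pairwise comparison: the mean of a horizontally adjacent pixel pair is thresholded against the pixels surrounding the pair to form an 8-bit code, and the histogram of codes is the feature. It is an illustrative interpretation only, not the paper's exact PLBP definition.

```python
# Rough pairwise-LBP-style descriptor (illustrative interpretation, not the exact PLBP).
import numpy as np

def pairwise_lbp(img):
    img = img.astype(np.float64)
    h, w = img.shape
    codes = np.zeros((h - 2, w - 3), dtype=np.uint8)
    for y in range(1, h - 1):
        for x in range(1, w - 2):
            center = (img[y, x] + img[y, x + 1]) / 2.0          # mean of the adjacent pixel pair
            ring = [img[y - 1, x - 1], img[y - 1, x], img[y - 1, x + 1], img[y - 1, x + 2],
                    img[y,     x + 2], img[y + 1, x + 2], img[y + 1, x + 1],
                    img[y + 1, x], img[y + 1, x - 1], img[y, x - 1]][:8]  # keep 8 of 10 neighbours
            bits = [1 if p >= center else 0 for p in ring]
            codes[y - 1, x - 1] = sum(b << i for i, b in enumerate(bits))
    return np.bincount(codes.ravel(), minlength=256)             # histogram feature vector

rng = np.random.default_rng(1)
feat = pairwise_lbp(rng.integers(0, 256, size=(32, 32)))
print(feat.shape, feat.sum())
```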
Generating Malicious Code Attack Graph Using Semantic Analysis
YANG Ping, SHU Hui, KANG Fei, BU Wen-juan, HUANG Yu-yao
Computer Science. 2021, 48 (6A): 448-458.  doi:10.11896/jsjkx.201100074
Abstract PDF(4539KB) ( 1142 )   
References | Related Articles | Metrics
To analyze the logical relationship between the high-level behaviors of malicious code in depth and to reveal its working mechanism, this paper takes behavior events as the research object and proposes a method for generating malicious code attack graphs based on semantic analysis. First, with the help of the MITRE ATT&CK model, an m-ATT&CK (Malware-Adversarial Tactics, Techniques, and Common Knowledge) model that is better suited to malicious code behavior analysis is established; it consists of malware, behavior events, attack tactics and the relationships between them. Then, an approximate pattern-matching behavior mapping algorithm based on F-MWTO (Fuzzy Method of Window Then Occurrence) is proposed to map malicious code behavior information onto the m-ATT&CK model, and a hidden Markov model is constructed to mine the sequences of attack tactics. A semantic-level malicious code attack graph is defined, and a semantic-level attack graph generation algorithm is designed, which combines the identified behavior events to restore the contextual semantic information of the high-level behaviors of the malicious code and to generate the semantic-level attack graph. Experimental results show that the semantic-level attack graph obtained with the proposed method clearly reveals the working mechanism and attack intention of malicious code.
Research on Intrusion Detection Classification Based on Random Forest
CAO Yang-chen, ZHU Guo-sheng, QI Xiao-yun, ZOU Jie
Computer Science. 2021, 48 (6A): 459-463.  doi:10.11896/jsjkx.200600161
Abstract PDF(1902KB) ( 1167 )   
References | Related Articles | Metrics
To detect network attack behavior effectively, machine learning methods are widely used to classify different types of network intrusions. Traditional decision tree methods usually train a single model on the data, which is prone to generalization error and over-fitting. To solve this problem, this paper introduces the idea of parallel ensemble learning and proposes an intrusion detection model based on random forest. Since every decision tree in the random forest has decision-making power, the ensemble can improve classification accuracy considerably. The intrusion detection model is trained and tested on the NSL-KDD data set, and the experimental results show that the accuracy rate reaches 99.91%, indicating that the model has a very good intrusion detection classification effect.
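The overall pipeline (parallel ensemble of trees, train/test split, accuracy) can be illustrated with a short scikit-learn sketch; synthetic data stands in for NSL-KDD here, and the parameters are placeholders rather than the paper's settings.

```python
# Hedged sketch of a random-forest intrusion classifier; synthetic data replaces NSL-KDD.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=5000, n_features=41, n_informative=12,
                           n_classes=2, random_state=42)        # 41 features, KDD-style records
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=42)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```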
DDoS Attack Random Forest Detection Method Based on Secondary Screening of Feature Importance
LI Na-na, WANG Yong, ZHOU Lin, ZOU Chun-ming, TIAN Ying-jie, GUO Nai-wang
Computer Science. 2021, 48 (6A): 464-467.  doi:10.11896/jsjkx.200900101
Abstract PDF(3975KB) ( 810 )   
References | Related Articles | Metrics
Feature selection is an important step in attack detection algorithms. It mostly relies on recursive feature elimination with cross-validation (RFECV), usually combined with machine learning algorithms. However, RFECV is mostly used to select features for a single model, its performance fluctuates noticeably with the number of features and the choice of learner, and, because of its large amount of computation, its classification accuracy still needs to be improved. In response to these problems, this paper proposes a random forest detection method for DDoS attacks based on secondary screening of feature importance. First, the algorithm preprocesses the original data set and extracts features. Second, to select the most relevant variables for the model, it ranks the variables using the random forest variable importance criterion and importance scores. On the basis of this ranking, the cumulative importance of the variables is calculated and the most important variables are obtained. These variables are then used for training again to generate a classification model, and a new set of important variables is defined as the current variable set. Finally, the optimal variables are obtained by applying the importance criterion and cumulative importance once more, which effectively removes abnormal points and avoids local optima, thereby realizing accurate classification and detection of DDoS attacks. Experimental results show that this method has high accuracy and precision, can correctly separate normal traffic from various kinds of DDoS attack traffic, and is suitable for detecting DDoS attacks under big data.
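The two-pass importance screening can be sketched as follows: rank features by random-forest importance, keep the smallest prefix covering a cumulative-importance threshold, retrain on that subset and screen once more. The thresholds and synthetic data are illustrative assumptions.

```python
# Sketch of secondary screening by cumulative random-forest feature importance.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def screen_by_importance(X, y, threshold=0.90):
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    order = np.argsort(rf.feature_importances_)[::-1]            # rank features
    cum = np.cumsum(rf.feature_importances_[order])
    keep = order[: int(np.searchsorted(cum, threshold) + 1)]     # smallest prefix >= threshold
    return np.sort(keep)

X, y = make_classification(n_samples=3000, n_features=60, n_informative=10, random_state=0)
first = screen_by_importance(X, y, threshold=0.90)               # first pass
second = first[screen_by_importance(X[:, first], y, 0.95)]       # second pass on the reduced set
print(len(first), "->", len(second), "features")
```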
Anti-target Attack Tree Model for Threat Detection
DU Jin-lian, SUN Peng-fei, JIN Xue-yun
Computer Science. 2021, 48 (6A): 468-476.  doi:10.11896/jsjkx.200900205
Abstract PDF(2273KB) ( 1119 )   
References | Related Articles | Metrics
In recent years, the increasing number of vulnerabilities, together with the continuous evolution of network intrusion methods and hacker techniques, has led to complex and diverse network attacks. However, the traditional attack tree model is difficult to construct automatically, its quality depends heavily on the analyst's expertise, and it has shortcomings in expressing the relationship between attack intention and attack operation. In order to detect potential security threats to system assets with high quality and to support automatic detection, this paper proposes an anti-target attack tree model and a construction method based on the attacker's intention. Starting from that intention, the model describes the attacker's attack process and target by iteratively decomposing the anti-target elements and expresses the result in the form of an attack tree, so that the security problems of the system can be found efficiently. Based on the Datalog language, a formal description of the decomposition strategy of the anti-target attack tree model is given and inference rules are defined, providing support for the automatic construction of the model and the automatic detection of attack risks. The proposed method is applied to real attack case scenarios, and the actual attack scenarios and potential security risks of the attacked system are successfully detected, which demonstrates the effectiveness of the method.
Formal Verification of Otway-Rees Protocol Based on Process Algebra
CAI Yu-tong, WANG Yong, WANG Ran-ran, JIANG Zheng-tao, DAI Gui-ping
Computer Science. 2021, 48 (6A): 477-480.  doi:10.11896/jsjkx.200500072
Abstract PDF(1599KB) ( 639 )   
References | Related Articles | Metrics
The Otway-Rees protocol performs two-way authentication between the initiator and the responder and distributes the session key generated by the server. The protocol is simple and practical: it uses neither a complicated synchronous clock mechanism nor double encryption, and it provides good timeliness with only a small amount of exchanged information. The protocol allows individual communications to be authenticated across a network, while also preventing replay attacks and eavesdropping and detecting modification. The analysis of security protocols is a key issue that cannot be avoided in the information age. Formal methods, which are based on rigorous mathematics and mechanized reasoning, are an important way to improve and ensure the quality of computing systems; their models, techniques and tools have become an important carrier of computational thinking. Formal methods can accurately reveal all kinds of logical rules, turn them into corresponding formal rules, and make theoretical systems more rigorous. A formal method gives a mathematical description of what a program does: a description of the function of a program written in a formal language with precise semantics. It is not only the starting point of design and programming, but also the basis for verifying whether a program is correct, thereby improving the reliability and robustness of the design. By abstracting the Otway-Rees protocol, an abstract model is obtained; on this basis, a formal description based on process algebra is given and formal verification is carried out. The verification results show that the parallel system formed by this protocol exhibits the expected external behavior.
Formal Verification of Yahalom Protocol Based on Process Algebra
WANG Ran-ran, WANG Yong, CAI Yu-tong, JIANG Zheng-tao, DAI Gui-ping
Computer Science. 2021, 48 (6A): 481-484.  doi:10.11896/jsjkx.200500074
Abstract PDF(1621KB) ( 721 )   
References | Related Articles | Metrics
In the communication process, in order to keep the conversation between the two parties secure, the Yahalom protocol uses a trusted third party to distribute a "good" session key to the two communicating parties and uses a shared key to encrypt the conversation content, thereby ensuring the security of the conversation. The formal verification of the Yahalom protocol is therefore of great significance. In order to ensure that the trusted third party distributes the session key safely between the two communicating parties, this paper verifies the communication process theoretically. The process of random session key distribution based on a trusted platform is abstracted, the operational semantics of each entity's states and state transitions in the abstract model are given, and a concurrent computing model of the Yahalom protocol based on structural operational semantics is established. The formal verification of the state transition system of the Yahalom protocol is carried out mainly through the ACP axiom system. The results show the expected external behavior and theoretically prove that the process-algebra-based modeling of the Yahalom protocol is feasible.
Application and Simulation of Ant Colony Algorithm in Continuous Path Prediction of Dynamic Network
YANG Lin, WANG Yong-jie
Computer Science. 2021, 48 (6A): 485-490.  doi:10.11896/jsjkx.200800132
Abstract PDF(2758KB) ( 693 )   
References | Related Articles | Metrics
With the widespread use of active defense methods, dynamic variability has become a prominent feature of network systems. When discussing network system security, it is therefore necessary to assume a dynamic network environment. Path prediction, as a common method of network security assessment, also needs to adapt to dynamic network environments and to be both continuous and efficient. To this end, this paper proposes applying the ant colony optimization algorithm to continuous path prediction in networks, and designs a simulation experiment to compare it with a completely random algorithm and a greedy algorithm in terms of optimization accuracy and optimization speed. The simulation results show that the optimization accuracy of the original ant colony algorithm is not as good as that of the completely random algorithm, but, thanks to the guidance of heuristic information, its optimization speed is much better. To balance the advantages of the original ant colony algorithm and the completely random algorithm, a new ant colony pheromone update strategy is proposed, and a simulation experiment is designed to verify its efficiency. The final experimental results show that the improved ant colony optimization algorithm better integrates the advantages of the original ant colony algorithm and the completely random algorithm and achieves a balance between optimization accuracy and optimization speed. Further work is still needed to optimize the algorithm so that it inherits these advantages more completely and reaches a high level in both accuracy and speed.
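A compact ant-colony sketch for repeated path search on a weighted graph is shown below. Its pheromone update mixes evaporation with a lower bound so that exploration never dies out, which is only a stand-in for the balanced update strategy discussed above; all parameters and the toy graph are illustrative.

```python
# Compact ACO path search with evaporation, deposit and an exploration floor on pheromone.
import random

def aco_shortest_path(graph, src, dst, ants=30, iters=50, alpha=1.0, beta=2.0,
                      rho=0.3, q=1.0, floor=0.05):
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}
    best, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(ants):
            node, path, seen = src, [src], {src}
            while node != dst:
                cand = [v for v in graph[node] if v not in seen]
                if not cand:
                    path = None
                    break
                w = [tau[(node, v)] ** alpha * (1.0 / graph[node][v]) ** beta for v in cand]
                node = random.choices(cand, weights=w)[0]
                path.append(node); seen.add(node)
            if path:
                length = sum(graph[a][b] for a, b in zip(path, path[1:]))
                tours.append((path, length))
                if length < best_len:
                    best, best_len = path, length
        for key in tau:                                   # evaporation + exploration floor
            tau[key] = max(floor, (1 - rho) * tau[key])
        for path, length in tours:                        # deposit proportional to tour quality
            for a, b in zip(path, path[1:]):
                tau[(a, b)] += q / length
    return best, best_len

g = {"A": {"B": 2, "C": 5}, "B": {"C": 1, "D": 4}, "C": {"D": 1}, "D": {}}
print(aco_shortest_path(g, "A", "D"))
```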
Trust Evaluation Protocol for Cross-layer Routing Based on Smart Grid
CHEN Hai-biao, HUANG Sheng-yong, CAI Jie-rui
Computer Science. 2021, 48 (6A): 491-497.  doi:10.11896/jsjkx.201000169
Abstract PDF(4196KB) ( 551 )   
References | Related Articles | Metrics
Network security is the main issue to be considered in the design of a smart grid communication network. However, owing to the openness and unpredictability of wireless networks, they are vulnerable to attacks, especially cross-layer attacks that exploit vulnerabilities during data transmission. To solve this problem, a new trust-based routing framework is proposed, which uses Bayesian inference to calculate direct trust and D-S evidence theory combined with evidence from reliable neighbors to calculate indirect trust. It then uses AHP to calculate the credibility of a node based on cross-layer metrics such as transmission rate, buffer capacity and received signal strength. In the simulation experiments, the performance of the proposed algorithm is evaluated by simulating malicious nodes launching different attacks. Simulation results show that the trust evaluation algorithm can effectively resist malicious attacks and ensure the security of routing.
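A minimal sketch of the trust idea follows: direct trust from Bayesian (Beta-distribution) updating of observed good/bad forwarding behaviour, blended with neighbour recommendations and weighted cross-layer metrics. The blending coefficients and weights are illustrative, not AHP-derived or D-S-combined values from the paper.

```python
# Minimal trust-score sketch: Beta-based direct trust + averaged recommendations + weighted metrics.
def direct_trust(successes, failures):
    # expected value of Beta(successes + 1, failures + 1)
    return (successes + 1) / (successes + failures + 2)

def node_credibility(successes, failures, neighbour_trusts, metrics, weights):
    d = direct_trust(successes, failures)
    indirect = sum(neighbour_trusts) / len(neighbour_trusts) if neighbour_trusts else 0.5
    behaviour = 0.7 * d + 0.3 * indirect                  # illustrative blend of direct/indirect
    cross_layer = sum(w * m for w, m in zip(weights, metrics))
    return 0.6 * behaviour + 0.4 * cross_layer

# metrics normalised to [0, 1]: transmission rate, buffer headroom, received signal strength
score = node_credibility(successes=45, failures=5,
                         neighbour_trusts=[0.8, 0.9, 0.7],
                         metrics=[0.85, 0.60, 0.75],
                         weights=[0.4, 0.3, 0.3])
print(round(score, 3))
```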
Electronic Prescription Sharing Scheme Based on Blockchain and Proxy Re-encryption
TANG Fei, CHEN Yun-long, FENG Zhuo
Computer Science. 2021, 48 (6A): 498-503.  doi:10.11896/jsjkx.201000143
Abstract PDF(1755KB) ( 1673 )   
References | Related Articles | Metrics
Electronic prescription data is generally stored in a centralized manner, and such a centralized mechanism is exposed to the risk of insider tampering. In addition, prescription data is important private information for users and therefore needs to be encrypted during storage and transmission. However, common encryption schemes are often difficult to share. To solve the problems of centralized storage, difficult sharing and high security requirements for storage and transmission of electronic prescriptions, this work proposes an electronic prescription sharing scheme based on blockchain and conditional proxy re-encryption. The conditional proxy re-encryption scheme provides an efficient ciphertext forwarding mechanism for prescription sharing and realizes a fine-grained division of decryption authority. Traditional identity-based conditional proxy re-encryption schemes require a trusted key generation center (KGC) to generate user keys, which contradicts the decentralized nature of the blockchain. We use distributed key generation technology to solve this key escrow problem and construct a conditional proxy re-encryption scheme with multiple authorities, making it suitable for blockchain scenarios. Finally, we analyze the proposed scheme in terms of correctness and security.
Routing Directory Server Defined by Smart Contract
WANG Xiang-yu, YANG Ting
Computer Science. 2021, 48 (6A): 504-508.  doi:10.11896/jsjkx.200700210
Abstract PDF(1736KB) ( 966 )   
References | Related Articles | Metrics
The routing directory server plays an important role in anonymous networks but is restricted by its centralization. The main problems lie in the scalability of the system, the security of the data and the flexibility of the network. According to the characteristics of the directory server, the following three functions are implemented with smart contracts: user registration and authorization, routing information auction, and routing information encryption and decryption. Experiments prove that the smart contract proposed in this paper can functionally replace the interconnected routing directory server. The smart contract can not only complete the transaction process of routing information, but also offers good security with acceptable performance. The introduction of smart contracts implements the functions of the directory server on a decentralized blockchain, which improves the scalability of the network, data security and network flexibility, and makes the solution more dynamic.
Defense Method of Adversarial Training Based on Gaussian Enhancement and Iterative Attack
WANG Dan-ni, CHEN Wei, YANG Yang, SONG Shuang
Computer Science. 2021, 48 (6A): 509-513.  doi:10.11896/jsjkx.200800081
Abstract PDF(2991KB) ( 1285 )   
References | Related Articles | Metrics
In recent years, existing deep learning network models have achieved high accuracy on various classification tasks, yet they remain extremely vulnerable to adversarial examples. At present, adversarial training is one of the best methods to defend against adversarial attacks. However, single-step adversarial training methods defend well only against single-step attacks and perform poorly against iterative attacks, while iterative adversarial training methods improve the defense against iterative attacks but give unsatisfactory defense against single-step attacks. In order to improve the robustness of a deep learning model against single-step and iterative attacks at the same time, this paper proposes GILLC, an adversarial training defense method that combines Gaussian enhancement and ILLC iterative attacks. First, Gaussian perturbations are added to clean samples to improve the generalization ability of the model. Then, adversarial examples generated by ILLC are used for adversarial training, which approximately solves the inner maximization problem of adversarial training. A white-box attack experiment is conducted on the CIFAR10 data set. The results show that, compared with the baseline, single-step adversarial training and iterative adversarial training methods, GILLC effectively improves the robustness of the model against both single-step and iterative attacks without significantly reducing classification performance on clean samples.
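The two ingredients can be sketched in a framework-agnostic way: Gaussian augmentation of clean samples, and an ILLC-style iterative perturbation that steps toward the least-likely class inside an L-infinity ball. The `loss_grad` callable is a placeholder for a real model's input gradient, and the step sizes are common conventions rather than the paper's settings.

```python
# Skeleton of Gaussian augmentation + ILLC-style iterative perturbation (model is a placeholder).
import numpy as np

def gaussian_augment(x, sigma=0.05, rng=np.random.default_rng(0)):
    return np.clip(x + rng.normal(0.0, sigma, size=x.shape), 0.0, 1.0)

def illc_perturb(x, least_likely_label, loss_grad, eps=8/255, alpha=2/255, steps=10):
    """Iteratively descend the loss of the least-likely class within an L-inf ball of radius eps."""
    x_adv = x.copy()
    for _ in range(steps):
        g = loss_grad(x_adv, least_likely_label)     # d loss(x_adv, y_LL) / d x_adv
        x_adv = x_adv - alpha * np.sign(g)           # move toward the least-likely class
        x_adv = np.clip(x_adv, x - eps, x + eps)     # stay inside the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv

# toy stand-in: a "model" whose loss gradient is just (x - 0.5); real use needs autograd
x = np.random.default_rng(1).random((3, 32, 32)).astype(np.float32)
x_aug = gaussian_augment(x)                                  # Gaussian-enhanced clean sample
adv = illc_perturb(x, least_likely_label=0, loss_grad=lambda xa, y: xa - 0.5)
print(x_aug.shape, bool(np.abs(adv - x).max() <= 8/255 + 1e-6))
```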
Research on Forecasting Model of Internet of Vehicles Security Situation Based on Decision Tree
TANG Liang, LI Fei
Computer Science. 2021, 48 (6A): 514-517.  doi:10.11896/jsjkx.200700158
Abstract PDF(1589KB) ( 1214 )   
References | Related Articles | Metrics
With the development of vehicle intelligence, the combination of networks and vehicles has become inevitable and brings great convenience to people. At the same time, hackers can exploit technical loopholes to attack vehicles, causing serious traffic accidents and even vehicle crashes. Against this background, vehicle information security technology has gradually become a focus of attention. Facing endless network attacks on the Internet of Vehicles, situation awareness is needed to protect it. To improve the accuracy of IoV security situation awareness, this paper proposes a decision-tree-based IoV security situation prediction model. Because network attacks usually manifest as abnormal changes of certain specific attributes, the process of attribute change characterizes an attack method. Attacks are therefore classified according to these attributes, the information gain ratio is used to build a decision tree, and decision rules are derived. Experiments verify the feasibility of the proposed algorithm for security situation awareness of the Internet of Vehicles and the accuracy of its prediction results.
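The gain-ratio criterion that drives attribute selection in a C4.5-style tree can be shown with a small helper; the toy records and attribute values below are placeholders, not IoV data.

```python
# Gain-ratio computation for attribute selection in a C4.5-style decision tree (toy data).
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gain_ratio(rows, labels, attr_index):
    n = len(rows)
    groups = {}
    for row, y in zip(rows, labels):
        groups.setdefault(row[attr_index], []).append(y)
    cond_entropy = sum(len(g) / n * entropy(g) for g in groups.values())
    info_gain = entropy(labels) - cond_entropy
    split_info = entropy([row[attr_index] for row in rows])      # intrinsic information
    return info_gain / split_info if split_info > 0 else 0.0

rows   = [("tcp", "high"), ("udp", "low"), ("tcp", "low"), ("icmp", "high")]
labels = ["attack", "normal", "normal", "attack"]
print([round(gain_ratio(rows, labels, i), 3) for i in range(2)])
```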
Distributed Combination Deep Learning Intrusion Detection Method for Internet of Vehicles Based on Spark
YU Jian-ye, QI Yong, WANG Bao-zhuo
Computer Science. 2021, 48 (6A): 518-523.  doi:10.11896/jsjkx.200700129
Abstract PDF(2869KB) ( 1080 )   
References | Related Articles | Metrics
With the application of 5G and other technologies in the field of the Internet of Vehicles, intrusion detection plays an increasingly important role as a key tool for IoV information security. Because the structure of the Internet of Vehicles changes rapidly, its data flow is large, and intrusions take complex and diverse forms, traditional detection methods cannot ensure accuracy and real-time performance and cannot be applied to the Internet of Vehicles directly. To solve these problems, this paper proposes a distributed combined deep learning intrusion detection method for the Internet of Vehicles based on the Apache Spark framework. By building a Spark cluster, the deep learning models CNN and LSTM are combined to extract intrusion features and detect data, finding abnormal behaviors in large-scale IoV data traffic. Experimental results show that, compared with other existing models, the proposed method achieves a detection time of 20.1 s and an accuracy of 99.7%.
Research on DoS Intrusion Detection Technology of IPv6 Network Based on GR-AD-KNN Algorithm
ZHAO Zhi-qiang, YI Xiu-shuang, LI Jie, WANG Xing-wei
Computer Science. 2021, 48 (6A): 524-528.  doi:10.11896/jsjkx.200500001
Abstract PDF(1978KB) ( 598 )   
References | Related Articles | Metrics
With IPv6 network traffic increasing rapidly, traditional intrusion detection systems such as Snort, which detect DoS attacks with specific rules, show poor performance and adaptability. To solve the problem of detecting DoS attacks in IPv6, the KNN algorithm is improved in this paper. First, to reduce the number of low-influence sub-features of discrete features, sub-features are selected and clustered using the information gain ratio, which decreases the number of features and improves the efficiency of detecting DoS attacks in IPv6. Second, the improved algorithm GR-AD-KNN, which uses the information gain ratio as feature weights in the Euclidean distance, is proposed to detect DoS attacks. Based on a metric of inverse distance influence, the classification decision method of the KNN algorithm is optimized, further improving the detection accuracy. Experiments show that, compared with the TAD-KNN algorithm, which classifies attacks based on average distances, and the GR-KNN algorithm, which only optimizes the Euclidean distance definition, the GR-AD-KNN algorithm not only improves the overall detection performance on IPv6 network traffic features, but also achieves better detection results on attack classes with few samples.
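The two modifications described above, gain-ratio weights inside the Euclidean distance and inverse-distance voting in the decision step, can be sketched as follows. The weights here are made up; in practice they would come from the information-gain-ratio computation on the traffic features.

```python
# Sketch of a weighted-distance KNN with inverse-distance voting (weights are hypothetical).
import numpy as np
from collections import defaultdict

def gr_ad_knn_predict(X_train, y_train, x, weights, k=5):
    d = np.sqrt(((X_train - x) ** 2 * weights).sum(axis=1))     # gain-ratio-weighted distance
    idx = np.argsort(d)[:k]
    votes = defaultdict(float)
    for i in idx:
        votes[y_train[i]] += 1.0 / (d[i] + 1e-9)                # closer neighbours weigh more
    return max(votes, key=votes.get)

rng = np.random.default_rng(0)
X = rng.random((200, 6)); y = (X[:, 0] + 0.3 * X[:, 1] > 0.6).astype(int)
w = np.array([0.5, 0.2, 0.1, 0.1, 0.05, 0.05])                  # hypothetical gain-ratio weights
print(gr_ad_knn_predict(X, y, np.array([0.9, 0.8, 0.1, 0.2, 0.3, 0.4]), w))
```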
Improved Certificateless Proxy Blind Signature Scheme with Forward Security
JIANG Hao-kun, DONG Xue-dong, ZHANG Cheng
Computer Science. 2021, 48 (6A): 529-532.  doi:10.11896/jsjkx.200700049
Abstract PDF(2139KB) ( 608 )   
References | Related Articles | Metrics
Through a security analysis of the certificateless forward-secure proxy blind signature scheme proposed in reference [8], this paper points out that the scheme cannot resist malicious and passive KGC public key replacement attacks and does not satisfy non-repudiation. To address these problems, an improved scheme is proposed that changes the user key generation method. A one-way hash function is used to embed the user's public key into the partial private key, thereby binding the partial private key generated by the KGC, so that an adversary cannot forge a legal signing key to impersonate the original signer and issue authorizations. In the proxy blind signature phase, the secret value of the message owner replaces a blinding factor, which not only reduces the amount of computation but also prevents the message owner from denying that the message was provided. Security analysis shows that the improved scheme can resist malicious and passive KGC public key replacement attacks and satisfies non-repudiation. Efficiency analysis shows that the improved scheme is more efficient than the original scheme.
Resisting Power Analysis Algorithm of Scalar Multiplication Based on Signed Sliding Window
GONG Jian-feng
Computer Science. 2021, 48 (6A): 533-537.  doi:10.11896/jsjkx.191200097
Abstract PDF(1780KB) ( 545 )   
References | Related Articles | Metrics
To address the problem that the efficiency of scalar multiplication drops after countermeasures against power analysis attacks are applied, a power-analysis-resistant scalar multiplication algorithm based on a signed sliding window is presented. The algorithm recodes the scalar with the signed sliding window and resists power analysis attacks by combining precomputation, point masking and field operations; the scalar multiplication is finally completed in a mixed coordinate system. Performance analysis indicates that the algorithm can effectively resist simple power analysis, differential power analysis, zero-value power analysis and refined power analysis, and that, compared with power-analysis-resistant schemes based on binary expansion and key assignment, it significantly improves operating efficiency. The presented scheme therefore takes both security and efficiency into account and can be applied to various resource-constrained cryptographic systems.
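As a stand-in for the signed sliding-window recoding step, the following sketch shows the standard width-w NAF recoding of a scalar (the point arithmetic, masking and coordinate system are out of scope here, and the paper's exact recoding may differ).

```python
# Width-w NAF recoding of a scalar (illustrative recoding step only).
def wnaf(k, w=4):
    """Return signed digits of k, least-significant first; non-zero digits are odd
    and lie strictly between -2**(w-1) and 2**(w-1)."""
    digits = []
    while k > 0:
        if k & 1:
            d = k % (1 << w)
            if d >= (1 << (w - 1)):
                d -= (1 << w)          # take the negative residue
            k -= d
        else:
            d = 0
        digits.append(d)
        k >>= 1
    return digits

k = 0b110101101011
digits = wnaf(k)
assert sum(d << i for i, d in enumerate(digits)) == k      # recoding is value-preserving
print(digits)
```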
Interdiscipline & Application
Survey of Research on Asymmetric Embedded System Based on Multi-core Processor
QU Wei, YU Fei-hong
Computer Science. 2021, 48 (6A): 538-542.  doi:10.11896/jsjkx.200900204
Abstract PDF(1687KB) ( 1154 )   
References | Related Articles | Metrics
With the development and continuous differentiation of embedded systems,many fields such as industrial control,robotics,video and image systems,etc.have higher and higher requirements for embedded systems,which not only require good functional scalability and maintainability,but also need to ensure the real-time performance.Asymmetric embedded system based on multi-core processors is an important development direction to solve these problems.According to whether there is a primary and secondary distinction between processor cores,multi-core processors can be divided into two structures:homogeneous and heterogeneous.Asymmetric embedded systems can be realized based on both homogeneous and heterogeneous multi-core processors.The cores of multi-core processors are divided from the hardware or software level to run different tasks,so that the embedded system can balance good functional scalability and real-time performance.This paper summarizes and compares the research status of asymmetric embedded systems based on multi-core processors,and summarizes its applications in the fields of scientific research and engineering,and finally this paper studies the possible future development directions of asymmetric embedded systems based on multi-core processors.
Algorithms Based on Lattice Thought for Graph Structure Similarity
WANG Xiao-min, SU Jing, YAO Bing
Computer Science. 2021, 48 (6A): 543-551.  doi:10.11896/jsjkx.201100167
Abstract PDF(2929KB) ( 629 )   
References | Related Articles | Metrics
This paper first gives the definitions of the vertex-splitting operation and the vertex-coinciding operation, introduces a new kind of connectivity, vertex-splitting connectivity, based on the vertex-splitting operation, proves that vertex-splitting connectivity is equivalent to the connectivity of a connected graph, and gives the definition of W-similarity. Second, it presents the definitions of graph-splitting groups and isomorphic subgraphs, and introduces a method for constructing and matching special graph-splitting groups. Third, it describes operations and algorithms on graphs and graph-splitting groups, including a deterministic graph-splitting group algorithm, graph-splitting contracting algorithms, and a vertex expanding and contracting algorithm for a graph. It then discusses the basic similarities of isomorphic subgraphs. Finally, it gives a brief conclusion and puts forward several issues for further study.
Visual Analysis System of Climatic Regionalization Based on Meteorological Factors
YAO Lin, WANG Xiang-kun, JIA Yu-pei, GENG Shi-hong, ZHU Min
Computer Science. 2021, 48 (6A): 552-557.  doi:10.11896/jsjkx.200900127
Abstract PDF(5297KB) ( 652 )   
References | Related Articles | Metrics
Researchers in hydrographic investigation, environmental monitoring and agricultural production need to divide geographical areas into several sub-areas according to meteorological factors for subsequent analysis and research, such as sampling and comparison. At present, climatic regionalization based on meteorological factors suffers from a lack of interactive means and from single forms of output and results. In this paper, a visual analysis system for climatic regionalization based on meteorological factors is designed and implemented. The system provides views such as stacked histograms, radar charts and parallel coordinates, as well as rich interactions such as clicking and hovering. Experts can determine a climatic regionalization scheme through clustering quality metrics, the geographical distribution of climate, point-cluster relationships and domain knowledge. The system can also show the temporal evolution of climatic regionalization and represent how the attributes of a station match the area to which it belongs, improving the interpretability of regionalization schemes. Finally, based on meteorological data of five provinces in southwest China over the past 50 years, the effectiveness of the system is verified by exploring climatic regionalization schemes and deducing their temporal changes.
Research and Implementation of Data Authority Control Model Based on Organization
CHENG Xue-lin, YANG Xiao-hu, ZHUO Chong-kui
Computer Science. 2021, 48 (6A): 558-562.  doi:10.11896/jsjkx.200700127
Abstract PDF(2296KB) ( 1581 )   
References | Related Articles | Metrics
Data permission control is an important aspect of software system security and quality, and an important part of permission management and authorized access in SaaS multi-tenant software systems. The core requirement of data permission management is that users are assigned different roles, each with a corresponding data access scope. Designing a general set of data permission control methods that reduces the complexity of authorization management and improves software system security therefore has practical significance. Common SaaS systems basically use RBAC-based permission control components to meet the needs of user data permission control. However, RBAC is still relatively complicated to configure, and controlling data permissions by organization can simplify the configuration. Based on the theory of the RBAC authorization model, an organization-based data authority control model (ODAC) is proposed. In the ODAC model, the various services provided by the SaaS multi-tenant software system are collectively called resources, which are divided into data-controlled resources and data-uncontrolled resources. When a data-controlled resource is assigned to a role, the organizational structure that may access the resource is specified. When users under a SaaS tenant organization access data, the system uses the organization corresponding to the user's role for that resource and tenant to achieve data access control. On this basis, the ODAC model is implemented with the Spring MVC, Spring Security and MyBatis frameworks. Built on these mature frameworks, the data authority management system based on the ODAC model shows good performance, guarantees the realization of the data permission system, and reduces the difficulty of implementing the logic. The model has been used in a variety of production systems, which verifies its versatility and feasibility.
Research and Analysis of Blockchain Internet of Things Based on Knowledge Graph
LI Jia-ming, ZHAO Kuo, QU Ting, LIU Xiao-xiang
Computer Science. 2021, 48 (6A): 563-567.  doi:10.11896/jsjkx.200600071
Abstract PDF(2206KB) ( 1016 )   
References | Related Articles | Metrics
The rapid development of the blockchain Internet of Things (IoT) has attracted great attention from academia and industry. A systematic understanding of the research status and progress in this field is of great reference value for researchers and industry departments carrying out related work. This paper takes 970 articles in the field of blockchain IoT indexed in Web of Science from 2015 to 2019 as the research object. Based on bibliometric theory, they are visualized and analyzed with the information visualization software CiteSpace. The paper first examines the countries with greater influence in the field of blockchain IoT, then summarizes the hot keywords of recent years, and then, with the help of CiteSpace, summarizes the knowledge base of the field, lists six papers with significant influence, and uses the timezone view to study the development trend of the field over the past five years. Finally, it summarizes the current status of the blockchain IoT field and proposes an outlook on future development.
Research on Automatic Testing Technology of Model Driven Development Tools
HUANG Shuang-qin, LIU Ying-bo, HUANG Xiang-sheng
Computer Science. 2021, 48 (6A): 568-571.  doi:10.11896/jsjkx.201000139
Abstract PDF(2337KB) ( 650 )   
References | Related Articles | Metrics
A model-driven low-code platform can produce a large number of application systems with little or no coding, which places higher requirements on the reliability, stability and ease of use of these rapidly customized systems. Testing is an important means to ensure their quality and reliability. Traditional automated testing has two shortcomings: obtaining the location information of page elements by manually inspecting the source code is very inefficient, and when pages change frequently, page elements cannot be located, causing tests to fail. A low-code platform rapidly produces many application systems whose page data is huge and changes often, so the traditional automated testing approach is not applicable. By reading the page source code from the back-end database, this paper analyzes it with a depth-first search to obtain the location expressions and element types of all page elements, and then performs automated form testing by combining the test data with the URL of the form. For application systems with different interfaces and functions, an automated test management system is built to test them; it has been used successfully in practical projects and greatly improves testing efficiency.
Fault Localization Technology Based on Program Mutation and Gaussian Mixture Model
ZHANG Hui
Computer Science. 2021, 48 (6A): 572-574.  doi:10.11896/jsjkx.200500121
Abstract PDF(2704KB) ( 542 )   
References | Related Articles | Metrics
The efficiency of fault localization relies on the quality of regression test cases, and identical or similar test cases reduce that efficiency. To solve this problem, this paper uses program mutation based on an improved artificial immune technique to generate multiple mutants, and then reduces the mutants used for fault localization with a Gaussian mixture model. The experimental results show that the proposed method improves the efficiency of fault localization compared with other methods.
Research on Construction Method of Defect Prediction Dataset for Spacecraft Software
ZHENG Xiao-meng, GAO Meng, TENG Jun-yuan
Computer Science. 2021, 48 (6A): 575-580.  doi:10.11896/jsjkx.200900133
Abstract PDF(3092KB) ( 890 )   
References | Related Articles | Metrics
As the infrastructure for constructing and applying prediction models, software defect prediction datasets face two problems. On the one hand, because data collection from data sources is difficult, few datasets are available. On the other hand, because data differs across fields and software metric standards are not always applicable, the published datasets are rarely used in engineering. In this paper, combined with real software testing data from the domestic space field, a method for designing spacecraft software metrics and a process for constructing spacecraft software defect prediction datasets are systematically expounded. According to the characteristics of spacecraft software, a hybrid method combining code-based and quality-based metrics is proposed to ensure that the relevant characteristics of spacecraft software can be described and measured comprehensively from different angles. At the same time, to reduce the high labor and storage costs of large-scale data collection, processing and analysis, a standardized dataset construction method is proposed that combines data cleaning under version division with module-level hierarchical preprocessing. The dataset SPACE constructed with this method is demonstrated, which proves that the method can be effectively applied to the construction of high-quality, domain-specific software defect prediction datasets and that a good prediction effect can be obtained with the AutoWeka model.
Double-cycle Consistent Insulator Defect Sample Generation Method Based on Local Fine-grained Information Guidance
ZHAO Xiao, LI Shi-lin, LI Fan, YU Zheng-tao, ZHANG Lin-hua, YANG Yong
Computer Science. 2021, 48 (6A): 581-586.  doi:10.11896/jsjkx.200500026
Abstract PDF(6897KB) ( 672 )   
References | Related Articles | Metrics
In view of the shortage of insulator defect samples, existing generation methods require a large number of training samples, and the details of insulator defects are often lost or distorted during generation. This paper presents a double-cycle consistent insulator defect sample generation method based on local fine-grained information guidance (LFGI-DCC). The approach takes a coarse insulator image as the network input and learns from fine defect insulator samples through cycle-consistent generative adversarial training to generate more realistic defect samples. At the same time, the defect region of the generated image is fed to the discriminator, and adversarial constraints guide the generator to focus on the fine-grained information of the defect, further improving the authenticity and diversity of the insulator defect samples. Compared with existing methods, the insulator defect dataset constructed by the proposed method is both realistic and diverse, providing an important foundation for improving the accuracy of automatic insulator defect identification.
Research on DSP Register Pairs Allocation Algorithm with Weak Assigning Constraints
TANG Zhen, HU Yong-hua, LU Hao-song, WANG Shu-ying
Computer Science. 2021, 48 (6A): 587-595.  doi:10.11896/jsjkx.200600061
Abstract PDF(1899KB) ( 654 )   
References | Related Articles | Metrics
In modern high-performance digital signal processors (DSPs), many instructions take register pairs as operands. To optimize register pair usage, this paper presents a register pair allocation algorithm for DSPs based on weak constraint assignment for the rules governing register pairs. In the assignment process, the algorithm gives priority to assigning idle register pairs to symbolic register pairs. If a register pair cannot be assigned to a symbolic register pair, two registers that do not form a register pair are assigned instead. To ensure that the register pairs in the target code are consistent with the register pair rules, an instruction operand correction method is provided. Six classical algorithms are used as test cases, and the experimental results show that the proposed algorithm is effective.
Task Collaborative Process Network Model and Time Analysis of Mine Accident Emergency Rescue Digital Plan
LAI Xiang-wei, ZHENG Wan-bo, WU Yan-qing, XIA Yun-ni, RAN Qi-hua, DONG Yin-huan
Computer Science. 2021, 48 (6A): 596-602.  doi:10.11896/jsjkx.200500041
Abstract PDF(2416KB) ( 538 )   
References | Related Articles | Metrics
The mining environment of mines is complex, and once an accident occurs, emergency rescue is difficult. Studying the emergency rescue process of mine accidents helps guide relevant personnel scientifically and improve the efficiency of emergency response. This paper focuses on the task coordination of digital emergency management plans for mines. The emergency plan system and the connection relationships between the provincial digital plan system and conventional production enterprises are described. Second, a Petri net model of the emergency rescue command workflow for a typical mine accident disaster is established. Third, stochastic Petri nets and stochastic process analysis are used to obtain multiple transient quantities on the basis of preliminary calculation results, and a time model of all rescue tasks is established for predicting the command information scheduling workflow. Finally, a typical gas explosion case is modeled and the model is used for performance analysis. The results show that the empirical conclusions of this model are reasonable and general for mine emergency rescue, and that it can optimize emergency rescue deployment and improve rescue efficiency.
Application of Edge Computing in Flight Training
QIAN Ji-de, XIONG Ren-he, WANG Qian-lei, DU Dong, WANG Zai-jun, QIAN Ji-ye
Computer Science. 2021, 48 (6A): 603-607.  doi:10.11896/jsjkx.201000035
Abstract PDF(3815KB) ( 848 )   
References | Related Articles | Metrics
The eye is an important external manifestation of human psychological activity and thought. This paper analyzes the psychological behavior of pilots by using a high-speed image acquisition system to track their eye movements, in order to study pilots' attention during training. With the gradual maturity of low-power embedded devices and high-speed 5G networks, we have entered a new era of the "Internet of Everything". Based on this, this paper proposes a solution that uses edge computing devices to evaluate flight training effects. It introduces a real-time eye-tracking system based on an edge computing architecture, which uses a high-speed CMOS image sensor to capture eye images, proposes a lightweight network structure based on MobileNet to quickly locate the pupil position, and then uses an NVIDIA Jetson Nano board to locate pupil coordinates in continuous video frames and calculate the gaze point, thereby obtaining the visual focus track of eye movements. The experimental results show that the edge computing system is simple in structure and can meet the requirements of real-time eye tracking. It provides a new and effective method for real-time psychological behavior analysis and a reference for improving the effect of flight training.
LDPC Adaptive Minimum Sum Decoding Algorithm and Its FPGA Implementation
WANG Deng-tian, ZHOU Hua, QIAN He-yue
Computer Science. 2021, 48 (6A): 608-612.  doi:10.11896/jsjkx.200800134
Abstract PDF(2863KB) ( 959 )   
References | Related Articles | Metrics
The belief propagation (BP) decoding algorithm for low-density parity-check (LDPC) codes has been shown to approach the Shannon limit, but it requires extremely complex logarithmic and trigonometric functions, which limits its practical interest. The minimum sum (MS) algorithm improves the convergence speed and simplifies the calculation at the expense of a loss in decoding performance. In order to reduce the loss in bit error rate (BER), this paper introduces an adaptive multiplicative factor that considers the relationship between the absolute values of the incoming variable-node messages, the second smallest value and the hyperbolic tangent function. As a result, the performance of the proposed adaptive MS algorithm is 0.2 dB better than that of the traditional LLR (log-likelihood ratio) BP algorithm. An LDPC code of length 155 is also implemented on the Xilinx FPGA platform.
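The check-node update of a (normalized) min-sum decoder is sketched below: the output magnitude is the minimum of the other incoming message magnitudes, scaled by a correction factor. Making the factor depend on the gap between the smallest and second-smallest magnitudes gives a simple adaptive variant in the spirit of the scheme above; the exact mapping here is only a guess, not the paper's formula.

```python
# Check-node update for a min-sum LDPC decoder with an ad-hoc adaptive scaling factor.
import numpy as np

def check_node_update(msgs, adaptive=True):
    msgs = np.asarray(msgs, dtype=float)
    sign_prod = np.prod(np.sign(msgs))
    mags = np.abs(msgs)
    out = np.empty_like(mags)
    for i in range(len(msgs)):
        others = np.delete(mags, i)
        m1, m2 = np.partition(others, 1)[:2] if len(others) > 1 else (others[0], others[0])
        factor = min(1.0, 0.75 + 0.25 * m1 / m2) if (adaptive and m2 > 0) else 0.75
        # sign of the output edge = product of the signs of the other incoming messages
        out[i] = factor * m1 * sign_prod * np.sign(msgs[i])
    return out

print(check_node_update([1.2, -0.4, 2.3, -3.1]).round(3))
```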
Design and Implementation of Emergency Command System
NING Yu-hui, YAO Xi
Computer Science. 2021, 48 (6A): 613-618.  doi:10.11896/jsjkx.201000136
Abstract PDF(4017KB) ( 1010 )   
References | Related Articles | Metrics
At present, emergencies occur from time to time in society, and emergency command is significant for handling them and stabilizing social order. However, the type, time and severity of an emergency are not known in advance, which makes emergency command decision-making and implementation correspondingly more difficult. How to improve the intelligence of emergency command systems has therefore become a research topic. In view of this, this paper presents a design and implementation method for an emergency command system. The system application architecture and system functions are designed, and the system database structure and the command and communication technology based on instant messaging are elaborated in detail. Petri nets are introduced into the system modeling and adaptively modified to realize intelligent accumulation of emergency plans and generation of emergency resource scheduling strategies. Finally, an experimental environment verifies the effectiveness and superiority of the proposed method.
Application of Multi-model Ensemble Learning in Prediction of Mechanical Drilling Rate
XU Ming-ze, WEI Ming-hui, DENG Shuang, CAI Wei
Computer Science. 2021, 48 (6A): 619-622.  doi:10.11896/jsjkx.201000070
Abstract PDF(3957KB) ( 899 )   
References | Related Articles | Metrics
The drilling rate is related to drilling operation parameters, drilling fluid performance and drilling tool assembly. Accurate prediction of the rate of penetration (ROP) makes it possible to estimate drilling costs and drilling time, thereby guiding the design and optimization of drilling process parameters, rationally arranging drilling rigs and crews, and providing a basis for drilling designers. Combining current machine learning and big data processing, a drilling rate prediction model based on ensemble learning is established using historical drilling data from the Tuha oilfield in western China. The ensemble members include k-nearest neighbors (KNN), support vector machine (SVM), decision tree (DT) and random forest (RF). Seven influencing features are used as input: well depth, bit pressure, pump pressure, density, viscosity, pump flow rate and rotary speed. The goodness of fit is used to evaluate the ROP prediction, and the results show that the prediction of the ensemble learning model is better than that of any single model; taking well 7-13 as an example, the goodness of fit exceeds 0.93. This study also explores combinations of different ensemble members. Considering both time cost and goodness of fit, the optimal combination is found to be KNN+SVR+RF, whose goodness of fit in wells 7-13, 8-17 and 4-10 reaches 0.9378, 0.9187 and 0.9124, respectively. Finally, taking SVR as an example, the fitting accuracy of an optimized single model is still lower than that of any of the combined models. Further investigation reveals that both diversity and high accuracy of the ensemble members are required to obtain an effective ensemble. These observations demonstrate that the proposed model offers a promising solution for ROP prediction.
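A hedged sketch of a simple averaging ensemble over KNN, SVR and random-forest regressors, scored with R-squared (goodness of fit), follows; synthetic data stands in for the seven well-log features, and the hyperparameters are placeholders.

```python
# Averaging ensemble of KNN, SVR and random forest, evaluated with R^2 (synthetic data).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

X, y = make_regression(n_samples=800, n_features=7, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

members = [
    make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=7)),
    make_pipeline(StandardScaler(), SVR(C=10.0)),
    RandomForestRegressor(n_estimators=200, random_state=0),
]
preds = np.column_stack([m.fit(X_tr, y_tr).predict(X_te) for m in members])
print("single-model R2:", [round(r2_score(y_te, preds[:, i]), 3) for i in range(3)])
print("ensemble R2:   ", round(r2_score(y_te, preds.mean(axis=1)), 3))
```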
Research of ATC Simulator Training Values Independence Based on Pearson Correlation Coefficient and Study of Data Visualization Based on Factor Analysis
LUO Jing-jing, TANG Wei-zhen, DING Ji-ting
Computer Science. 2021, 48 (6A): 623-628.  doi:10.11896/jsjkx.210200021
Abstract PDF(2308KB) ( 590 )   
References | Related Articles | Metrics
To solve the problem of repeated scoring of ATC simulation training indicators, the relationships between indicators are studied based on the Pearson correlation coefficient and tested at a given significance level, and the scoring algorithm is modified accordingly. This paper extracts 4,162 records from the scoring database of a cloud mini-program, takes factor analysis as the research method, uses principal component analysis to solve the factor loading matrix, and uses orthogonal rotation to spread the loading values so that the common factors can be interpreted reasonably; a quality and ability model of controller trainees is then established and the data are visualized. The results show that the indicator scores after de-correlation show a downward trend and fluctuate essentially in the same way as the original evaluation scores. Some indicator descriptions can be changed according to the correlation and independence values. The ability values obtained from factor analysis clearly reflect trainees' abilities through radar charts, showing their quality and competence, and also facilitate scientific control of post allocation, which is an effective application of data visualization.
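The de-correlation step can be sketched as follows: compute pairwise Pearson coefficients between scoring indicators and flag pairs whose correlation is statistically significant and strong, so that one of the two can be down-weighted or merged. The indicator names, data and thresholds are invented for illustration.

```python
# Pearson-correlation screening between scoring indicators (invented data and names).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
base = rng.normal(80, 5, 200)
scores = {
    "phraseology": base,
    "readback":    base * 0.8 + rng.normal(0, 2, 200),   # built to correlate with phraseology
    "separation":  rng.normal(75, 8, 200),
}

names = list(scores)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        r, p = stats.pearsonr(scores[names[i]], scores[names[j]])
        flag = "redundant?" if p < 0.05 and abs(r) > 0.7 else ""
        print(f"{names[i]:12s} vs {names[j]:12s} r={r:+.2f} p={p:.3f} {flag}")
```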
Research on Integrated Electronic Time Synchronization Technology
LU Yong-chao, WANG Bin-yi, HU Jiang-feng, MU Yang, REN Jun-long
Computer Science. 2021, 48 (6A): 629-632.  doi:10.11896/jsjkx.201100114
Abstract PDF(1994KB) ( 547 )   
References | Related Articles | Metrics
In order to realize time synchronization in a hardware-redundant embedded integrated electronic information system, a stable time synchronization model and a multi-node time synchronization framework are proposed in this paper. The master-node time server is selected through competition on the master side, and a mode control method is used to dynamically detect the state of each working node. The server and each node complete a reliable synchronization process after two handshakes, supporting both manual and automatic synchronization. The software-based implementation can be deployed at any node in the system without a separate time server and supports multiple bus data transmission interfaces. The test results verify that the synchronization accuracy of the system in a LAN is better than 1 ms, the synchronization reliability is 10% higher than that of NTP, and the amount of synchronized data per unit time is halved. The method adapts to data transmission over CAN, Ethernet and serial ports, can synchronize multiple time types, and achieves reliable time synchronization of the integrated electronic information system.
Full Traversal Path Planning and System Design of Intelligent Lawn Mower Based on Hybrid Algorithm
CHEN Jing-yu, GUO Zhi-jun, YIN Ya-kun
Computer Science. 2021, 48 (6A): 633-637.  doi:10.11896/jsjkx.201100002
Abstract PDF(2781KB) ( 1640 )   
References | Related Articles | Metrics
With the acceleration of urban planning and construction and residents' growing awareness of environmental protection, green areas have increased steadily, and pruning them consumes considerable manpower, material and financial resources. An intelligent lawn-mowing robot with a hybrid logic algorithm can alleviate this problem. With the STM32F407ZGT6 Explorer microprocessor as the main control chip, and after completing the functional design and prototype of the mowing robot, this paper mainly studies the path planning algorithm of the intelligent mowing robot. Specifically, the internal spiral algorithm and the A-star pathfinding algorithm are combined to determine the robot's motion trajectory: the internal spiral algorithm is used until a dead point is reached, the A-star algorithm then finds the nearest point in the unmowed area, and from that point the internal spiral algorithm continues until the entire mowing area has been traversed. The experimental results show that the designed intelligent mowing robot achieves high coverage, a low repetition rate and precise obstacle avoidance. It also improves mowing efficiency, meets the needs of energy saving and environmental protection, and achieves the goal of reducing costs.
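The A-star step used to jump from a spiral dead point to the nearest unmowed cell can be illustrated with a standard grid implementation using the Manhattan heuristic; the grid and coordinates below are toy values.

```python
# Standard A* on a 4-connected grid with the Manhattan heuristic (toy grid).
import heapq

def a_star(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap, g, parent = [(h(start), start)], {start: 0}, {}
    while open_heap:
        _, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = [cur]
            while cur in parent:
                cur = parent[cur]; path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g[cur] + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt], parent[nxt] = ng, cur
                    heapq.heappush(open_heap, (ng + h(nxt), nxt))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]          # 1 = obstacle / already-mowed border
print(a_star(grid, (0, 0), (2, 0)))
```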
Emotion Recognition System Based on Distributed Edge Computing
QIAN Tian-tian, ZHANG Fan
Computer Science. 2021, 48 (6A): 638-643.  doi:10.11896/jsjkx.201000010
Abstract PDF(4052KB) ( 1051 )   
References | Related Articles | Metrics
In recent years, the combination of edge computing and artificial intelligence has become increasingly popular. Facial action unit (AU) detection recognizes facial expressions by analyzing cues about the movement of certain atomic muscles in local facial areas. From the detected facial feature points, the AU values can be calculated and then fed to classification algorithms for emotion recognition. However, in actual production, the large network overhead of transferring facial action unit features poses new challenges when such a system is deployed in a distributed manner. Therefore, we design a lightweight, edge-computing-based distributed system using Raspberry Pi tailored to this need and optimize the data transfer and component deployment. Near the data source, front-end and back-end processing are separated to reduce round-trip delay, thereby completing complex computing tasks and providing highly reliable, large-scale connection services.
Research on Cognitive Diagnosis Model Based on Knowledge Graph and Its Application in Teaching Assistant
HUANG Mei-gen, LIU Chuan, DU Huan, LIU Jia-le
Computer Science. 2021, 48 (6A): 644-648.  doi:10.11896/jsjkx.200700163
Abstract PDF(2330KB) ( 1348 )   
References | Related Articles | Metrics
With the continuing evolution of the Internet industry, online learning and online classes have become an indispensable part of most families. The development of computer-assisted learning systems has led to increased research on knowledge diagnosis, in which students' performance on coursework can be predicted over time. Given the urgent need for educational applications of knowledge graphs, this paper develops a system called KGIRT with the following features. First, it constructs a knowledge graph for junior and senior high school mathematics courses. Compared with traditional knowledge graphs for the education domain, the mathematical subject knowledge graph created in this paper focuses on the knowledge itself rather than on grades and textbooks: it links the mathematics knowledge points of junior and senior high school in the same knowledge graph according to their logical relationships, and students can use the system to judge their mastery of the relevant knowledge points. Second, the difficulty of each topic is set in the system's diagnosis model, and introducing an expert method into the diagnosis model makes the judgment of topic difficulty more accurate, objective and systematic. Third, the knowledge graph is combined with the cognitive diagnosis model, and charts and matrices based on this model indicate the current state of a student's knowledge. Finally, based on the joint application of the Neo4j graph database and the cognitive diagnosis model, an online learning WeChat mini-program, KGIRT, is developed, realizing the transformation from theory to application.
Research on Method of Reducing A0 to Upper Hessenberg Type with Elementary Stability Matrix
SU Er
Computer Science. 2021, 48 (6A): 649-657.  doi:10.11896/jsjkx.200800063
Abstract PDF(1827KB) ( 661 )   
References | Related Articles | Metrics
This paper discusses how to reduce A0 to an upper Hessenberg matrix by Gaussian elimination with partial pivoting,implemented with elementary matrix techniques.To ensure numerical stability,the essential question of how to perform the exchanges is emphasized.The first part briefly summarizes the matrix formula of the reduction method.The second part further clarifies the basis for deriving the recursive form of the reduction operation rule.The third part focuses on the complete steps of the recursive algorithm and the logical implementation of the reduction method,and shows that the final reduction result is consistent with the exact result computed from the matrix formula.The fourth part gives a concrete example to verify the conclusion that the reduction method rests on a sound computational basis and is compact and feasible in practice.
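As a numerical companion to the reduction summarized above,the following sketch implements the textbook version of Gaussian-elimination similarity reduction to upper Hessenberg form with partial pivoting;it follows the standard algorithm rather than the paper's specific recursive formulas,and the example matrix is arbitrary.
```python
# Minimal numerical sketch: reduce A0 to upper Hessenberg form by stabilized
# elementary (Gauss) transforms applied as similarity transformations.
import numpy as np

def hessenberg_gauss(A0: np.ndarray) -> np.ndarray:
    """Return an upper Hessenberg matrix similar to A0."""
    H = np.array(A0, dtype=float, copy=True)
    n = H.shape[0]
    for k in range(n - 2):
        # partial pivoting: bring the largest subdiagonal entry of column k
        # to position (k+1, k) by a symmetric row/column swap
        p = k + 1 + int(np.argmax(np.abs(H[k + 1:, k])))
        if p != k + 1:
            H[[k + 1, p], :] = H[[p, k + 1], :]
            H[:, [k + 1, p]] = H[:, [p, k + 1]]
        if H[k + 1, k] == 0.0:          # column already reduced
            continue
        for i in range(k + 2, n):
            m = H[i, k] / H[k + 1, k]   # stabilized multiplier, |m| <= 1
            H[i, :] -= m * H[k + 1, :]  # left-multiply by the Gauss transform
            H[:, k + 1] += m * H[:, i]  # right-multiply by its inverse
    return H

A0 = np.array([[4.0, 1.0, 2.0, 3.0],
               [1.0, 3.0, 0.0, 1.0],
               [2.0, 0.0, 5.0, 2.0],
               [3.0, 1.0, 2.0, 6.0]])
H = hessenberg_gauss(A0)
# eigenvalues are preserved by the similarity transform (imaginary parts ~ 0 here)
print(np.allclose(np.sort(np.linalg.eigvals(A0).real),
                  np.sort(np.linalg.eigvals(H).real)))
```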
Design and Implementation of Scientific Experiment Management System Based on jBPM
DOU Shuai, LI Zi-yang, ZHU Jia-jia, LI Xiao-hui, LI Xue-song, MI Lin, YANG Guang, LI Chuan-rong
Computer Science. 2021, 48 (6A): 658-663.  doi:10.11896/jsjkx.200600158
Abstract PDF(2395KB) ( 682 )   
References | Related Articles | Metrics
The scientific experiment process for near-space exploration is complex,involving many management and technical links from planning to implementation,and the execution process of each experiment task also differs.It is therefore necessary to introduce workflow technology and establish a scientific experiment management system to control the experiment process scientifically and effectively.This paper addresses the design and implementation of such a system for large-scale scientific experiment tasks.Based on the jBPM workflow engine,it applies rational planning,multiple scheduling,customizable workflows and nodal components to support the management program.Specifically,the system abstracts the workflow into several independent operational nodes,builds the workflow according to the logical relationships among the nodes,and at the same time integrates the corresponding program components of each operational node to realize the management functions.The system not only meets large-scale scientific business requirements,but also effectively improves scalability,reduces maintenance difficulty,and serves as an effective technical tool to support large-scale scientific experiments.
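The node abstraction described above can be illustrated in a purely conceptual way by the following Python sketch;it is not jBPM code,none of the node names come from the paper,and it only shows independent operational nodes wired together by logical dependencies and executed in order.
```python
# Conceptual sketch only: a workflow abstracted into independent operational nodes,
# each bound to a program component and executed once its dependencies are done.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Node:
    name: str
    action: Callable[[dict], object]         # the program component bound to the node
    depends_on: List[str] = field(default_factory=list)

def run_workflow(nodes: Dict[str, Node], context: dict) -> List[str]:
    """Execute nodes whose dependencies are finished until all nodes are done."""
    done, order = set(), []
    while len(done) < len(nodes):
        ready = [n for n in nodes.values()
                 if n.name not in done and all(d in done for d in n.depends_on)]
        if not ready:
            raise RuntimeError("cyclic or unsatisfiable dependencies")
        for node in ready:
            node.action(context)
            done.add(node.name)
            order.append(node.name)
    return order

# hypothetical experiment nodes from planning to implementation
nodes = {
    "plan": Node("plan", lambda ctx: ctx.setdefault("plan", "approved")),
    "schedule": Node("schedule", lambda ctx: ctx.update(slot="slot-1"), ["plan"]),
    "execute": Node("execute", lambda ctx: ctx.update(result="ok"), ["schedule"]),
    "archive": Node("archive", lambda ctx: None, ["execute"]),
}
print(run_workflow(nodes, {}))
```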
Optimization of GHTSOM Model by Data Corrosion
SHI Jian, MO Jun
Computer Science. 2021, 48 (6A): 664-667.  doi:10.11896/jsjkx.200500129
Abstract PDF(3485KB) ( 515 )   
References | Related Articles | Metrics
Clustering algorithms are widely used in pattern recognition,information retrieval,image processing and natural language processing.Two common neural-network-based clustering methods are GCS and SOM,and many scholars have proposed improved algorithms based on them,GHTSOM (Growing Hierarchical Tree SOM) being one of them.GHTSOM works well when the data have a clear class structure,but it is not suitable for applications with a lot of noise or disturbing data.In this paper,the corrosion (erosion) operation from image processing is used to optimize the GHTSOM algorithm:before the GHTSOM procedure is invoked,the data are processed by the corrosion algorithm to remove the interference or noise data at the junctions between different classes,making the distinction between categories more obvious.To make the presentation more intuitive,two-dimensional data are used.The results show that the optimized GHTSOM model can effectively avoid the unclassifiable problems caused by local connections between classes and the misclassification problems caused by too many neurons.
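The preprocessing idea can be sketched as follows,assuming the corrosion step is ordinary binary erosion applied to a rasterized occupancy grid of the 2D data;the grid cell size,structuring element and toy data are assumptions rather than the paper's settings.
```python
# Minimal sketch: erode a rasterized point cloud so that points in thin "bridges"
# between clusters are dropped before clustering, separating the classes more clearly.
import numpy as np
from scipy.ndimage import binary_erosion

def erode_points(points: np.ndarray, cell: float = 0.5, iterations: int = 1):
    """Keep only points whose occupancy-grid cell survives binary erosion."""
    mins = points.min(axis=0)
    idx = np.floor((points - mins) / cell).astype(int)   # cell index per point
    grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    grid[idx[:, 0], idx[:, 1]] = True                    # occupancy grid
    eroded = binary_erosion(grid, iterations=iterations)
    keep = eroded[idx[:, 0], idx[:, 1]]                  # points whose cell survived
    return points[keep]

# two dense 2D blobs joined by a sparse bridge of noise points
rng = np.random.default_rng(1)
blob_a = rng.normal([0, 0], 0.6, size=(300, 2))
blob_b = rng.normal([8, 0], 0.6, size=(300, 2))
bridge = np.column_stack([rng.uniform(1, 7, 40), rng.normal(0, 0.2, 40)])
data = np.vstack([blob_a, blob_b, bridge])
cleaned = erode_points(data)
print(len(data), "->", len(cleaned), "points after erosion")
```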
Research on Intelligent Production Line Scheduling Problem Based on LGSO Algorithm
ZHANG Ju, LI Xue-yun
Computer Science. 2021, 48 (6A): 668-672.  doi:10.11896/jsjkx.210300118
Abstract PDF(2091KB) ( 700 )   
References | Related Articles | Metrics
Aiming at the problems of starvation and congestion in the scheduling process of an intelligent production line,the objective function and constraint conditions of the scheduling problem are established by analyzing the scheduling process.Then a new glowworm swarm optimization algorithm based on Levy flight (LGSO) is proposed.The Levy distribution is used to enlarge the search range and improve the effectiveness of the population.The maximum and minimum fluorescein values are taken as boundary constraints to optimize the iterative formula of fluorescein and improve the rationality of the fluorescein carried by individuals,and a cubic map is introduced to optimize the population so as to improve its overall search ability.The algorithm test results show that LGSO achieves better solution accuracy,convergence and stability than GSO,SGSO and CGSO.The LGSO algorithm is then used to solve the scheduling problems of four typical intelligent production lines and is compared with GSO and SGSO.The results show that LGSO is basically better than the other two algorithms in terms of the worst value,the best value,the average value and the standard deviation,and for complex paths LGSO has better solution accuracy,convergence speed and stability.These tests further verify the accuracy of the mathematical model and the feasibility of using LGSO to solve the scheduling problem.
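The three ingredients named above (Levy flight,the clamped fluorescein update and the cubic map) can be sketched generically as follows;the exact iterative formulas of LGSO are not given in this abstract,so the snippet uses the standard GSO luciferin rule,Mantegna's Levy-step algorithm and a common cubic chaotic map,with all parameter values chosen arbitrarily.
```python
# Hedged sketch of the LGSO ingredients: cubic-map chaotic initialization,
# a Levy-flight step (Mantegna's algorithm) and a clamped fluorescein update.
from math import gamma, sin, pi
import numpy as np

def cubic_map_init(n_agents, dim, low, high):
    """Chaotic initialization with the cubic map x <- 4x^3 - 3x on [-1, 1]."""
    x = np.linspace(0.15, 0.85, dim)                  # arbitrary non-fixed-point seeds
    pop = np.empty((n_agents, dim))
    for i in range(n_agents):
        x = 4 * x ** 3 - 3 * x
        pop[i] = low + (x + 1) / 2 * (high - low)     # rescale to the search box
    return pop

def levy_step(dim, beta=1.5, rng=np.random.default_rng()):
    """Mantegna's algorithm for a Levy-distributed step."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def update_fluorescein(l, fitness, rho=0.4, gamma_=0.6, l_min=0.0, l_max=5.0):
    """Standard GSO decay/reward rule, clamped to [l_min, l_max]."""
    return np.clip((1 - rho) * l + gamma_ * fitness, l_min, l_max)

pop = cubic_map_init(5, 2, low=-10.0, high=10.0)
l = update_fluorescein(np.full(5, 5.0), -np.sum(pop ** 2, axis=1))
pop += 0.1 * levy_step(2)                             # Levy perturbation of all agents
print(pop.shape, l)
```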