Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
Current Issue
Volume 50 Issue 11A, 16 November 2023
Artificial Intelligence
Autonomous Control Algorithm for Quadrotor Based on Deep Reinforcement Learning
LIANG Ji, WANG Lisong, HUANG Yuzhou, QIN Xiaolin
Computer Science. 2023, 50 (11A): 220900257-7.  doi:10.11896/jsjkx.220900257
With the wide application of UAVs, the design of UAV controllers has become a hot research topic in recent years. Control algorithms widely used in UAVs, such as PID and MPC, are restricted by factors such as difficult parameter tuning, complex model construction, and heavy computation. Aiming at these problems, a UAV autonomous control method based on deep reinforcement learning is proposed. The method fits the UAV controller with a neural network, directly mapping the state of the UAV to actuator outputs to control its movement, and can obtain a general UAV controller through continuous interactive training with the environment. This effectively avoids complex operations such as parameter tuning and model building. At the same time, to further improve the convergence speed and accuracy of the model, an ESAC algorithm is proposed on the basis of the traditional reinforcement learning algorithm soft actor-critic (SAC): expert information is introduced to guide the UAV's exploration of the environment and make the control strategy easier to learn. Finally, in UAV position control and trajectory tracking tasks, the controller built by ESAC is compared with a traditional PID controller and with controllers constructed by reinforcement learning algorithms such as SAC and DDPG. Experimental results show that the ESAC controller reaches the same level as the PID controller and outperforms the controllers built by SAC and DDPG in stability and accuracy.
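The abstract describes ESAC only at a high level. As a minimal sketch of the general idea, assuming the expert information enters as an imitation penalty added to the standard SAC actor objective (the function name, weights `alpha`/`beta`, and the MSE form are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def esac_actor_loss(actions, log_probs, q_values, expert_actions,
                    alpha=0.2, beta=0.5):
    """Toy ESAC-style actor objective: the standard SAC term
    (entropy-regularized Q) plus an imitation term pulling the policy
    toward expert actions. All names and weights are illustrative."""
    sac_term = np.mean(alpha * log_probs - q_values)        # SAC actor loss
    expert_term = np.mean((actions - expert_actions) ** 2)  # imitation (MSE)
    return sac_term + beta * expert_term

# toy batch of 4 one-dimensional actions
a  = np.array([0.1, -0.2, 0.3, 0.0])
lp = np.array([-1.0, -1.2, -0.9, -1.1])
q  = np.array([1.0, 0.8, 1.1, 0.9])
loss_no_expert   = esac_actor_loss(a, lp, q, expert_actions=a)      # imitation term is 0
loss_with_expert = esac_actor_loss(a, lp, q, expert_actions=a + 1)  # off-expert penalty
```

Deviating from the expert raises the loss, which is the intended guidance effect during early exploration.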
Antigenicity Prediction of Influenza A/H3N2 Based on Graph Convolutional Networks
HE Minglong, ZHAO Kun, LI Weihua, LI Chuan
Computer Science. 2023, 50 (11A): 230100113-6.  doi:10.11896/jsjkx.230100113
Continual, accumulated mutations in the hemagglutinin (HA) protein of influenza A virus generate novel antigenic strains that can evade human immunity and cause seasonal influenza or influenza pandemics. Timely identification of new antigenic variants is crucial for vaccine selection and influenza prevention. Graph embedding models can effectively model interactions even when some data is missing. For influenza A/H3N2, this paper proposes an antigenicity prediction method based on graph convolutional networks that obtains a low-dimensional dense embedding vector for each influenza strain. It then encodes the sequence information as supplementary features. Furthermore, a deep neural network is adopted to fuse these features and learn the dominant features for antigenicity prediction. Experimental results on two datasets show that, compared with existing methods, the proposed method significantly improves the performance of antigenic similarity prediction and has good robustness and scalability. In addition, the experiments show that graph convolutional networks can effectively capture antigenic features from the antigenic similarity relationships.
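For readers unfamiliar with the propagation step behind graph convolutional networks, the standard GCN layer the abstract refers to can be sketched as follows (the toy adjacency matrix and feature/weight shapes are illustrative, not the paper's data):

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution step: symmetrically normalized adjacency
    (with self-loops) times node features times weights, then ReLU.
    This is the generic GCN propagation rule, not the paper's full model."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^{-1/2}
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

# toy strain-similarity graph: 3 strains, 2 input features, 2 output dims
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.eye(3)[:, :2]
W = np.ones((2, 2))
H = gcn_layer(A, X, W)   # low-dimensional strain embeddings
```

Stacking such layers lets each strain's embedding absorb information from its antigenic-similarity neighbors.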
Incremental Class Learning Approach Based on Prototype Replay and Dynamic Update
ZHANG Yu, CAO Xiqing, NIU Saisai, XU Xinlei, ZHANG Qian, WANG Zhe
Computer Science. 2023, 50 (11A): 230300012-7.  doi:10.11896/jsjkx.230300012
Catastrophic forgetting is prevalent in incremental learning scenarios, and forgetting of old knowledge can severely affect the average performance of a model over the entire task sequence. Therefore, a class-incremental learning approach based on prototype replay and dynamic update is proposed to address the forgetting of old knowledge caused by prototype offset during incremental learning. After retaining the prototypes of the new classes in the prototype update phase, the method further updates the prototypes of the old classes in real time using a dynamic update strategy. Specifically, after learning a new task, the strategy approximately estimates the unknown bias of the old-class prototypes from the known bias of the currently accessible data, and then updates the old-class prototypes, thereby alleviating the mismatch between the original old-class prototypes and the current feature mapping. Experimental results on the CIFAR-100 and Tiny-ImageNet datasets show that the proposed approach effectively reduces catastrophic forgetting of old knowledge and thus improves the classification performance of the model in class-incremental learning scenarios.
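The drift-estimation step can be sketched in a few lines, assuming (as an illustrative simplification of the paper's estimator) that the unknown prototype drift is approximated by the mean feature shift of the currently accessible data before and after learning the new task:

```python
import numpy as np

def update_old_prototypes(old_protos, feats_before, feats_after):
    """Approximate the unknown drift of stored old-class prototypes by the
    known mean feature shift of currently accessible data, then shift the
    prototypes accordingly. A sketch, not the paper's exact estimator."""
    drift = feats_after.mean(axis=0) - feats_before.mean(axis=0)
    return old_protos + drift

old    = np.array([[1.0, 2.0], [3.0, 4.0]])   # stored old-class prototypes
before = np.array([[0.0, 0.0], [2.0, 2.0]])   # accessible features, old backbone
after  = before + 0.5                          # same data, updated backbone
new = update_old_prototypes(old, before, after)
```

Because the accessible data passes through both the old and the new feature extractor, its shift is observable and serves as a proxy for the unobservable shift of the old prototypes.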
Improved Metaheuristics for Single Container Loading Problem with Complex Constraints
LIU Rixin, QIN Wei, XU Hongwei
Computer Science. 2023, 50 (11A): 221200091-10.  doi:10.11896/jsjkx.221200091
The three-dimensional single container loading problem (3D-SCLP) has become one of the classic engineering problems in the field of optimization because of its wide application in manufacturing and logistics. However, current optimization schemes mainly consider algorithmic improvements and local constraints, failing to fully account for real-world complex constraints such as weight limits, load balance, cargo stability, stacking constraints, and human factors; as a result, existing methods achieve high theoretical loading rates but low practicality. To solve this problem, this paper proposes an improved metaheuristic algorithm based on the Aquila optimizer that fully considers multiple realistic complex constraints. Built on a population-based optimization strategy, it combines differential mutation and Gaussian disturbance with a potential-point strategy to achieve rapid convergence under complex constraints, and it is verified on data from a medium-scale industrial example. Compared with traditional heuristic optimization methods, the proposed method solves the three-dimensional packing optimization problem under complex constraints and is superior to existing solutions in actual space utilization and generation efficiency, thereby enabling standardized, intelligent packing and reducing manual involvement.
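The two variation operators the abstract names are standard and can be sketched together; this is a generic DE/rand/1-style mutation with an added Gaussian disturbance, under the assumption (ours, not the paper's) that candidates are encoded as real vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

def mutate(pop, F=0.5, sigma=0.01):
    """One variation step mixing differential mutation (DE/rand/1 style)
    with a small Gaussian disturbance around each trial vector; a generic
    sketch of the operators the paper adds to the Aquila optimizer."""
    n, dim = pop.shape
    out = np.empty_like(pop)
    for i in range(n):
        r1, r2, r3 = rng.choice(n, size=3, replace=False)
        trial = pop[r1] + F * (pop[r2] - pop[r3])     # differential mutation
        out[i] = trial + rng.normal(0.0, sigma, dim)  # Gaussian disturbance
    return out

pop = rng.uniform(0, 1, size=(6, 3))   # 6 candidate loading plans, 3 variables each
new_pop = mutate(pop)
```

The differential term drives exploration along directions suggested by the population itself, while the Gaussian term keeps a small amount of local jitter to escape flat regions.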
Context-rich Sarcasm Recognition Based on DPCNN and Multiple Learning Modes Loss
LIU Chang, ZHU Yan
Computer Science. 2023, 50 (11A): 230200067-5.  doi:10.11896/jsjkx.230200067
As a richly layered and complex linguistic expression, sarcasm is widely observed in people's daily speech and on social platforms, and correctly detecting whether a comment has ironic intent is crucial in e-commerce, event topic analysis, and other settings for determining a commenter's emotional tendency and attitude toward the comment subject. Three types of context, namely conversation context, user context, and topic context, are covered to build a context-rich sarcasm detection model. To address the difficulty traditional shallow CNNs have in capturing long-term dependencies in sentences, the proposed model introduces the DPCNN architecture to capture long-range association information in utterances and incorporates a bidirectional attention mechanism to learn incongruity information in the conversation context. Considering the small number of sarcastic samples and the unbalanced levels of sarcastic expression in realistic data, an asymmetric loss function with multiple learning modes is also proposed. Experiments on three public, real-world sarcasm datasets demonstrate that the proposed method outperforms the benchmark model in ACC, F1, and AUC by up to 2.5%, and ablation experiments demonstrate the effectiveness of each module and of the multiple-learning-mode loss function in improving sarcasm detection performance.
Scene Text Recognition Based on Feature Fusion in Space Domain and Frequency Domain
HUO Huaqi, LU Lu
Computer Science. 2023, 50 (11A): 230300101-8.  doi:10.11896/jsjkx.230300101
Existing scene text recognition methods often suffer from low robustness and poor generalization in few-shot, language-independent scenarios. To solve this problem, on the one hand, a dual-stream network structure based on the fusion of spatial-domain and frequency-domain features is proposed for the feature extraction stage. It consists of a deep residual convolutional branch that extracts spatial-domain features and a shallow branch with a one-dimensional fast Fourier transform (FFT) that extracts frequency-domain features; a channel attention mechanism is then applied to fuse the two. On the other hand, in the sequence modeling stage, a multi-scale one-dimensional convolution module is proposed to replace the bidirectional long short-term memory (BiLSTM) network, matching the characteristics of language-independent scenarios. Finally, a complete model is built by combining the existing TPS rectification module and a CTC decoder. Transfer learning is adopted in training: the model is pre-trained on large English datasets and then fine-tuned on the target datasets. Experimental results on two few-shot, language-independent datasets compiled for this paper show that the method surpasses existing methods in accuracy, verifying its robustness and generalization ability in this scenario. Moreover, with the described feature extraction module, the method outperforms the baseline on five benchmark datasets of language-dependent scenes (without fine-tuning), verifying the effectiveness and versatility of the proposed dual-stream feature fusion network.
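The dual-stream idea reduces to a small recipe: one branch supplies spatial features, the other supplies FFT magnitudes, and an attention weight fuses them per channel. A minimal sketch, in which the attention weights are computed from a simple channel statistic rather than learned (an assumption of ours for illustration):

```python
import numpy as np

def fuse_space_freq(spatial_feat, signal):
    """Sketch of the dual-stream fusion: a frequency branch built from the
    magnitude of a 1-D FFT, fused with a spatial feature via a softmax
    channel-attention weight (weights here are illustrative, not learned)."""
    freq_feat = np.abs(np.fft.rfft(signal))[:spatial_feat.size]  # 1-D FFT magnitudes
    stack = np.stack([spatial_feat, freq_feat])                  # 2 "channels"
    scores = stack.mean(axis=1)                                  # per-channel statistic
    w = np.exp(scores) / np.exp(scores).sum()                    # softmax attention
    return w[0] * spatial_feat + w[1] * freq_feat

sig = np.sin(np.linspace(0, 8 * np.pi, 64))   # toy image row
spatial = np.ones(16)                          # toy spatial-branch features
fused = fuse_space_freq(spatial, sig)
```

In the actual model the two statistics would come from the residual CNN and the shallow FFT branch, and the attention weights would be trained end to end.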
Automatic Post-editing Ensemble Model of Patent Translation Based on Weighted Distribution of Translation Errors
ZHAO Sanyuan, WANG Peiyan, YE Na, ZHAO Xinyu, CAI Dongfeng, ZHANG Guiping
Computer Science. 2023, 50 (11A): 230300072-8.  doi:10.11896/jsjkx.230300072
Automatic post-editing (APE) is a method of automatically correcting errors in machine translation output, which can improve the quality of a machine translation system. Current APE research mainly focuses on general domains; there is little research on APE for patent translation, which demands high translation quality because of its strong professionalism. This paper proposes an ensemble APE model for patent translation based on the weighted distribution of translation errors. First, a term-weighted translation edit rate (WTER) calculation method is proposed, which introduces a term probability factor into the translation edit rate (TER) and raises the WTER value of samples with more term errors. Then, the proposed WTER model is used to select subsets of mistranslation, missing-translation, additional-translation, and shift-error samples from the training data constructed from three machine translation systems, to build error-correction-biased APE sub-models. Finally, the biased APE sub-models are combined according to the weighted distribution of translation errors. The proposed method accounts for the strong professionalism and numerous technical terms of patent translation and, on the basis of error-correction bias, integrates multiple sub-models to balance the diversity of translation errors. Experimental results on an English-Chinese patent abstract dataset show that, compared with the three baseline systems, the proposed method improves BLEU by an average of 2.52, 2.28, and 2.27, respectively.
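To make the WTER idea concrete, here is a toy term-weighted edit rate: a word-level edit distance in which any edit touching a technical term costs more than an ordinary edit. This is our illustrative reading of "term-weighted", not the paper's exact formula, and shift operations are not modeled:

```python
import numpy as np

def wter(hyp, ref, term_weight=2.0, terms=()):
    """Toy term-weighted edit rate: standard word-level edit distance where
    edits involving a term cost `term_weight` instead of 1, so term-heavy
    errors raise the score. Illustrative sketch only; shifts are omitted."""
    cost = lambda w: term_weight if w in terms else 1.0
    m, n = len(hyp), len(ref)
    d = np.zeros((m + 1, n + 1))
    for i in range(1, m + 1):
        d[i][0] = d[i - 1][0] + cost(hyp[i - 1])
    for j in range(1, n + 1):
        d[0][j] = d[0][j - 1] + cost(ref[j - 1])
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0.0 if hyp[i - 1] == ref[j - 1] else max(cost(hyp[i - 1]), cost(ref[j - 1]))
            d[i][j] = min(d[i - 1][j - 1] + sub,
                          d[i - 1][j] + cost(hyp[i - 1]),
                          d[i][j - 1] + cost(ref[j - 1]))
    return d[m][n] / n

plain = wter("the valve is open".split(), "the valve is closed".split())
termy = wter("the valve is open".split(), "the valve is closed".split(),
             terms={"open", "closed"})
```

The same single-word error scores 0.25 normally but 0.5 when the word is a term, which is exactly the bias used to pick term-error-rich samples for each sub-model.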
Text Stance Detection Based on Topic Attention and Syntactic Information
KANG Shuming, ZHU Yan
Computer Science. 2023, 50 (11A): 230200068-5.  doi:10.11896/jsjkx.230200068
Text stance detection aims to infer users' opinions on specific topics, such as supportive, opposing, or neutral attitudes, from their published texts. Traditional stance detection studies often use deep learning models such as convolutional neural networks or long short-term memory networks to learn the basic semantic information of the text, ignoring the syntactic structure information embedded in it. To address this problem, this paper designs and implements AT-BiLSTM-GAT, a text stance detection model based on topic attention and dependency syntax: on top of the contextual information extracted by BiLSTM, a GAT further learns dependency syntactic information at the linguistic level of the text. Meanwhile, a topic attention mechanism incorporating contextual semantic information is designed and implemented, employing scaled dot-product attention to learn topic-related important content in stance texts; comparative experiments on public datasets prove the effectiveness of the AT-BiLSTM-GAT model. Finally, to address the small size of stance detection research datasets, WWDA, a synonym-replacement data augmentation scheme based on the WordNet synonym database and the WebVectors word embedding model, is proposed; it ensures lexical correctness and semantic similarity during synonym replacement, and experiments prove that it can generate more high-quality samples and improve the detection performance of the model.
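The scaled dot-product attention the topic-attention layer relies on is a standard operation; a minimal sketch with a toy topic query over three token vectors (the vectors are illustrative, not the model's learned embeddings):

```python
import numpy as np

def scaled_dot_attention(Q, K, V):
    """Plain scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
    This is the operation the topic-attention layer uses to weight
    stance-relevant tokens by a topic query."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

topic_q = np.array([[1.0, 0.0]])                            # toy topic query
tokens  = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])    # toy token vectors
ctx, w = scaled_dot_attention(topic_q, tokens, tokens)
```

The token most aligned with the topic query receives the largest weight, which is how topic-relevant content is emphasized in the stance representation.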
Relation Recognition of Unmarked Complex Sentences Based on Feature Fusion
YANG Jincai, MA Chen, XIAO Ming
Computer Science. 2023, 50 (11A): 221100065-6.  doi:10.11896/jsjkx.221100065
Unlike marked complex sentences, unmarked complex sentences lack the assistance of relation words, which makes their relation identification a difficult task in natural language processing. Part-of-speech features are integrated into word vectors, and a word-vector representation containing external features is obtained through training. The BERT model and the BiLSTM model are combined so that the word vectors and part-of-speech vectors are trained together, and the polarity feature information captured by the BiLSTM model and the dependency syntactic feature information captured by a CNN model are added at the feature fusion layer. Experimental results show that adding features and combining multiple deep learning models achieves better results in the relation classification of Chinese complex sentences. Compared with the benchmark model, both the macro-F1 and micro-F1 values improve. The best classification effect achieves a micro-F1 of 83.67% on top-layer classification and 68.28% on second-layer classification.
Construction and Research of Chinese Word Segmentation Corpus of Process Specification Text
WANG Peiyan, ZHANG Yingxin, FU Xiaoqiang, CHEN Jiaxin, XU Nan, CAI Dongfeng
Computer Science. 2023, 50 (11A): 221200070-6.  doi:10.11896/jsjkx.221200070
Chinese word segmentation is a basic task for process specification text processing and has a critical impact on downstream tasks such as process knowledge graphs and intelligent Q&A systems. One of the challenges in segmenting process specification texts is the lack of high-quality annotated corpora, especially word segmentation specifications for special language phenomena such as terms, noun phrases, process parameters, and quantifiers. This paper formulates a dedicated word segmentation specification for process specification text and collects and annotates a word segmentation corpus for Chinese process specification text (WS-MPST) containing 11,900 sentences and 255,160 words; the segmentation consistency among the 4 annotators reaches 95.25%. On the WS-MPST corpus, the well-known BiLSTM-CRF and BERT-CRF models achieve F1 values of 92.61% and 93.69%, respectively. Experimental results show that it is necessary to construct a dedicated word segmentation corpus for process specification text. In-depth analysis of the results reveals that out-of-vocabulary words and words mixing Chinese and non-Chinese characters are difficult to segment in process specification texts, which provides guidance for future word segmentation research in process specification texts and related fields.
Entity Alignment Method Combining Iterative Relationship Graph Matching and Attribute Semantic Embedding
CHI Tang, CHE Chao
Computer Science. 2023, 50 (11A): 230200041-6.  doi:10.11896/jsjkx.230200041
Entity alignment is a key step in knowledge fusion, used to resolve entity redundancy and unknown references across multi-source knowledge graphs. At present, most entity alignment methods rely mainly on the neighborhood network but ignore the connectivity between relations and the attribute information; as a result, the model cannot capture complex relationships, and the additional information is underused. To solve these problems, an entity alignment method based on iterative relationship-graph matching and attribute semantic embedding is proposed. Each 〈head, relation, tail〉 triple is transposed to generate 〈tail, relation, head〉, so that a relationship graph can be constructed alongside the entity graph; an attention mechanism then encodes the entity and relation representations, and the two refine each other through iteration to represent entities better. Finally, the attribute semantic embeddings are fused to make the final determination of whether two entities are aligned. Experimental results show that the model is significantly superior to the other six methods on the three cross-lingual datasets of DBP15K, with Hit@1 increasing by 4% over the best of them, which proves the effectiveness of relational reasoning and attribute semantics.
Joint Method for Spoken Language Understanding Based on BERT and Multiple Feature Gate Mechanism
WANG Zhiming, ZHENG Kai
Computer Science. 2023, 50 (11A): 230300002-6.  doi:10.11896/jsjkx.230300002
Intent classification and slot filling are two subtasks of spoken language understanding: in a dialogue system, they identify the intent of a text sequence and obtain slot information from it that can be used to further infer the exact substance of the intent. Recent research has revealed that these two tasks are connected and can reinforce one another. However, the majority of current joint techniques use only one feature and establish the relationship between the two merely by sharing parameters, which frequently results in issues such as poor model generalization and low feature utilization. To solve these problems, a novel joint model is proposed that adds an intent feature extraction layer and a slot feature extraction layer for additional text feature extraction on top of BERT to improve the text vector representation. It also fuses features from the two sides through a gate mechanism, making full use of the semantic relationship between the two tasks to predict labels. Experimental findings on the openly accessible ATIS and SNIPS datasets demonstrate the effectiveness of the proposed model in improving intent classification and slot filling performance, outperforming current approaches.
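A gate mechanism of the kind described typically computes a sigmoid gate from the concatenated features and interpolates between the two streams. A minimal sketch, where `W` and `b` stand in for learned parameters (zeros here for illustration):

```python
import numpy as np

def gate_fuse(feat_a, feat_b, W, b):
    """Gated fusion of two task features: a sigmoid gate computed from the
    concatenated features decides, per dimension, how much of each stream
    to keep. W and b stand in for learned parameters."""
    g = 1.0 / (1.0 + np.exp(-(np.concatenate([feat_a, feat_b]) @ W + b)))
    return g * feat_a + (1.0 - g) * feat_b

intent_feat = np.array([1.0, -1.0])
slot_feat   = np.array([0.0,  2.0])
W = np.zeros((4, 2)); b = np.zeros(2)   # zero gate params -> g = 0.5 everywhere
fused = gate_fuse(intent_feat, slot_feat, W, b)
```

With trained parameters, the gate learns per-dimension trade-offs between intent-side and slot-side evidence rather than the fixed 50/50 mix shown here.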
Audit Text Named Entity Recognition Based on MacBERT and Adversarial Training
QIAN Taiyu, CHEN Yifei, PANG Bowen
Computer Science. 2023, 50 (11A): 230200083-6.  doi:10.11896/jsjkx.230200083
To automatically identify effective entity information in audit texts and improve the efficiency of policy tracking audits, an audit-text named entity recognition (NER) model (Audit-MBCA) based on MacBERT (MLM as correction BERT) and adversarial training is proposed. Deep learning has been maturely applied to the NER task and has achieved significant results; however, audit texts suffer from problems such as a lack of corpora and unclear entity boundaries. To address these problems, an audit text dataset named Audit2022 is constructed in this paper, and its vector representation is obtained with the MacBERT Chinese pre-trained language model. At the same time, adversarial training is introduced, and the word boundary information shared between the Chinese word segmentation (CWS) task and the NER task is used to help identify entity boundaries. Experimental results show that the Audit-MBCA model achieves an F1 value of 91.05% on the Audit2022 dataset, 4.53% higher than the mainstream model, and 93.70% on the SIGHAN2006 dataset, 0.33%~3.25% higher than other models, verifying the effectiveness and generalization ability of the proposed model.
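Adversarial training for NER is commonly done FGM-style: perturb the embedding matrix along the gradient direction, normalized to a fixed norm, and train on the perturbed input as well. The abstract does not specify the exact scheme, so this is a sketch of the standard recipe:

```python
import numpy as np

def fgm_perturb(grad, epsilon=1.0):
    """FGM-style adversarial perturbation on embeddings: a step in the
    gradient direction, scaled to L2 norm `epsilon`. A standard recipe for
    the adversarial training the paper adds; details may differ."""
    norm = np.linalg.norm(grad)
    if norm == 0:
        return np.zeros_like(grad)
    return epsilon * grad / norm

g = np.array([3.0, 4.0])        # toy gradient w.r.t. an embedding
r = fgm_perturb(g, epsilon=0.5)  # perturbation added to the embedding
```

Training on `embedding + r` in addition to the clean embedding regularizes the model against small input shifts, which tends to sharpen decisions near entity boundaries.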
Multi-feature Fusion Based New Personalized Sentiment Classification Method for Comment Texts
WANG Youwei, LIU Ao, FENG Lizhou
Computer Science. 2023, 50 (11A): 221000217-7.  doi:10.11896/jsjkx.221000217
Existing research on sentiment classification fails to fully consider the influence of the personality characteristics contained in a user's personal historical comments on the classification results, and fails to comprehensively consider the combined effects of factors such as the user's social relations, personal attributes, historical comments, and current comments. To this end, a new personalized sentiment classification method for comment texts based on multi-feature fusion is proposed. First, the user's personality expressions are mined from a large number of unlabeled historical comments, and the user feature vector is extracted by combining the historical comments with attribute information. Then, the node2vec algorithm's strength in obtaining graph node representations is used to learn users' social relationship networks and obtain social relationship vectors, and a pre-trained word2vec model is used to obtain the vector of the user's current comment. Finally, the user feature vector, the social relationship vector, and the labeled current comment vector are fed into a fully connected classifier for training to obtain the final classification model. Experimental results on a real dataset crawled from Chinese stock pages show that, compared with typical methods such as support vector machine, naive Bayes, TextCNN, and BERT, the proposed method effectively improves the accuracy and F1 value of sentiment classification, verifying its effectiveness in improving sentiment classification performance.
Aspect-based Sentiment Analysis Based on Local Context Focus Mechanism and Talking-Head Attention
LIN Zhengchao, LI Bicheng
Computer Science. 2023, 50 (11A): 220900266-6.  doi:10.11896/jsjkx.220900266
Aspect-based sentiment analysis is an important research direction in natural language processing; its purpose is to predict the sentiment polarity of different aspects in a sentence. Existing aspect-based sentiment analysis usually ignores the relationship between sentiment polarity and local context, and the attention heads in the multi-head attention it uses operate independently of one another. To this end, an aspect-based sentiment analysis model based on the local context focus mechanism and talking-head attention is proposed. First, preliminary features of the local and global context are captured by a BERT pre-trained model. Then, in the feature extraction layer, the local context focus mechanism is used: local contextual features are further extracted through a contextual feature dynamic mask layer combined with talking-head attention, and talking-head attention is also used to further extract global context features. Finally, the local and global information is fused and fed into a nonlinear layer to obtain the sentiment analysis results. Experiments on three public datasets show that, compared with multiple existing baseline models, the MF1 value and accuracy of the new model are improved.
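The distinguishing trick of talking-head attention, as opposed to ordinary multi-head attention, is that raw attention logits are mixed across heads by a learned matrix before the softmax. A minimal sketch with a fixed toy mixing matrix standing in for learned weights:

```python
import numpy as np

def talking_heads(logits, P):
    """Talking-heads attention core: mix raw attention logits ACROSS heads
    with matrix P before the softmax, so heads are no longer independent.
    P is a fixed toy matrix standing in for learned weights."""
    mixed = np.einsum('hqk,hg->gqk', logits, P)   # mix along the head axis
    e = np.exp(mixed - mixed.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

h, q, k = 2, 1, 3
logits = np.arange(h * q * k, dtype=float).reshape(h, q, k)
P = np.array([[0.8, 0.2],
              [0.2, 0.8]])                        # toy cross-head mixing
attn = talking_heads(logits, P)
```

A second mixing matrix is usually applied after the softmax as well; it is omitted here for brevity.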
Inductive Interactive Network Model for Customs Import and Export Commodity Tax Rate Detection
WU Anqi, CHE Chao, ZHANG Qiang, ZHOU Dongsheng
Computer Science. 2023, 50 (11A): 230200086-7.  doi:10.11896/jsjkx.230200086
The traditional manual approach to examining commodity tax rates in China Customs suffers from low efficiency, inconsistent judgment criteria, and low precision. Using text classification to automatically determine the tax-rate category of commodities can effectively reduce customs tax-rate risk. However, customs commodity data are hierarchically categorized, and many subcategories under the same category have highly similar commodity descriptions, which poses a great challenge to commodity classification. Therefore, an inductive interactive network model is proposed, adding induction and interactive guidance modules on top of BERT and CNN. In the induction module, a dynamic routing algorithm iterates over the features extracted by the CNN, which effectively resolves the fusion and redundancy of adjacent features. At the same time, to address the feature similarity between different subcategories and improve classification performance, an interactive guidance module is introduced that lets the feature information extracted by the induction module interact with the [CLS] classification vector. Experiments on a real customs dataset show that the method achieves good results, with accuracy up to 92.98%, clearly outperforming each baseline model.
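Dynamic routing of the kind used in the induction module is a routing-by-agreement loop over feature vectors; a capsule-style sketch (the squash nonlinearity and agreement update follow the standard formulation, which we assume here since the abstract does not give details):

```python
import numpy as np

def squash(v):
    """Capsule-style squash: keeps direction, maps the norm into [0, 1)."""
    n2 = (v ** 2).sum()
    return (n2 / (1.0 + n2)) * v / np.sqrt(n2 + 1e-12)

def dynamic_routing(u, iters=3):
    """Routing-by-agreement over CNN feature vectors u (num_features x dim):
    iteratively raise the weight of features that agree with the induced
    summary vector. An illustrative sketch of the induction module."""
    b = np.zeros(len(u))
    for _ in range(iters):
        c = np.exp(b) / np.exp(b).sum()   # routing weights
        s = (c[:, None] * u).sum(axis=0)  # weighted sum of features
        v = squash(s)                     # induced summary vector
        b = b + u @ v                     # agreement update
    return v

u = np.array([[1.0, 0.0], [1.0, 0.1], [-1.0, 0.0]])   # two agreeing features, one outlier
v = dynamic_routing(u)
```

The loop progressively discounts the outlier feature, which is how redundant or conflicting adjacent features get suppressed before interaction with the [CLS] vector.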
Time Optimal Trajectory Planning of Manipulator Based on Improved Butterfly Algorithm
ZHOU Mingyue, ZHOU Mingwei, LIU Guiqi, CHEN Chao
Computer Science. 2023, 50 (11A): 220900284-8.  doi:10.11896/jsjkx.220900284
When planning manipulator trajectories, common practice chooses relatively conservative joint velocities and accelerations so that the drive devices meet actual load requirements. As a result, a set of actions takes too long to complete, and the continuity and smoothness the manipulator could achieve at higher speeds and accelerations cannot be fully exploited. To optimally select the velocity and acceleration of each joint, a time-optimal trajectory planning method for manipulators based on an improved butterfly algorithm is proposed. First, a 3-5-3 polynomial interpolation algorithm is used to construct the motion trajectory of the AUBO-i5 six-degree-of-freedom manipulator. Then, a butterfly optimization algorithm augmented with Lévy flight and the sine cosine algorithm is used to optimize the trajectory time, reducing the running time of the robotic arm while meeting work requirements. Simulation results show that, compared with similar algorithms, the improved butterfly algorithm is less prone to local optima and has higher optimization accuracy. Applied to trajectory planning, it significantly reduces the running time of the robotic arm, ensuring that tasks can be completed smoothly and efficiently in actual production.
Construction of Badminton Knowledge Graph Completion Model Based on Deep Learning
CHEN Yujue, HU He, LI Qiang
Computer Science. 2023, 50 (11A): 220900205-6.  doi:10.11896/jsjkx.220900205
To enhance the application value of knowledge graphs in the badminton field, this research first analyzes the state of research on completion models, then combines deep learning and attention mechanisms to build a knowledge graph completion model based on a graph convolutional neural network with subgraph-structure decoupling, and finally evaluates the improved performance of the model. The results show that the proposed model achieves good results on all sub-datasets, comparable to the best baseline model. On the three datasets selected in the experiment, the two test indicators decrease to varying degrees, indicating the effectiveness of entity feature decoupling, and only 3 to 8 bases are sufficient to express the characteristics of the different relations in the model. A knowledge graph completion model with good improvement is thus obtained, laying a foundation for popularizing knowledge graphs in badminton.
Constraint-based Verification Method for Neural Networks
GAO Yuzhao, XING Yunhan, LIU Jiaxiang
Computer Science. 2023, 50 (11A): 221000045-5.  doi:10.11896/jsjkx.221000045
The verification of neural networks has always been one of the major challenges in artificial intelligence. Based on the DeepZ method, this paper proposes a method that uses constraints to improve the accuracy of verifying the local robustness of deep neural networks. During propagation, constraints are added to shrink the abstract domain; a tighter neural network output range is then solved by linear programming, deducing new bounds for the network's output nodes. With the new bounds, more accurate verification results can be obtained. Based on this method, the DeepZero tool is implemented and comprehensively evaluated on the MNIST dataset. Experimental results show that the method effectively improves the verification success rate of DeepZ: specifically, the success rate increases by 49% on average, indicating the effectiveness of the proposed method.
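DeepZ-style analyzers represent each layer's possible values as a zonotope (a center plus error-term generators). The affine-layer propagation, which the constraint-tightening builds on, can be sketched as follows (the ReLU abstraction and the paper's linear-programming step are not shown):

```python
import numpy as np

def zonotope_affine(center, gens, W, b):
    """Propagate a zonotope (center + generators, one per noise symbol in
    [-1, 1]) through an affine layer, as in DeepZ-style analyzers, and read
    off interval bounds. The paper's constraint tightening is omitted."""
    c = W @ center + b
    G = gens @ W.T                   # each generator maps linearly
    radius = np.abs(G).sum(axis=0)   # worst-case deviation per output
    return c, G, c - radius, c + radius

center = np.array([0.0, 0.0])
gens = np.array([[0.1, 0.0],
                 [0.0, 0.1]])        # two independent noise terms
W = np.array([[1.0, -1.0]]); b = np.array([0.5])
c, G, lo, hi = zonotope_affine(center, gens, W, b)
```

Because generators track correlations between inputs, the resulting interval [0.3, 0.7] here is exact for the affine layer; precision is lost only at nonlinearities, which is where added constraints help.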
Modeling Gene Regulatory Networks with Global Coupling Parameters
MA Mengyu, SUN Jiaxiang, HU Chunling
Computer Science. 2023, 50 (11A): 221100088-7.  doi:10.11896/jsjkx.221100088
In systems biology, the hidden Markov model non-homogeneous dynamic Bayesian network (HMM-DBN) can reasonably infer regulatory relationships in periodic gene expression data and is one of the important methods for reconstructing gene regulatory networks. However, it usually assumes complete independence of its regulatory parameters (the parameters of each time period are inferred independently); this assumption ignores the continuity of biological evolutionary processes in nature, which affects the accuracy of network reconstruction. Addressing this problem and combining multiple changepoint processes, a hidden Markov model non-homogeneous dynamic Bayesian network with globally coupled parameters (GCHMM-DBN) is proposed. Building on HMM-DBN, the GCHMM-DBN model achieves global coupling of the regression parameters by adding global coupling hyperparameters and by sharing the noise-variance and signal-to-noise-ratio hyperparameters of all time periods in a similarity Gaussian distribution, finally improving the reconstruction accuracy of the gene regulatory network. Experimental results on Saccharomyces cerevisiae (yeast) and synthetic RAF datasets show that the GCHMM-DBN model reconstructs gene regulatory networks more accurately than the classical HMM-DBN model.
Global Task Assignment Model for Crowdsourcing with Mixed-quality Worker Context
JIANG Jiuchuan, WEI Jinpeng, ZHANG Jinwei
Computer Science. 2023, 50 (11A): 230200079-9.  doi:10.11896/jsjkx.230200079
Abstract PDF(5885KB) ( 146 )   
Most existing crowdsourcing research uses top workers to complete tasks, i.e., crowdsourcing platforms always assign the workers with the highest skill levels and reputation to tasks. In reality, however, most workers have relatively low skill levels and reputation, resulting in a large number of unassignable tasks and idle workers on crowdsourcing platforms. The main reasons are as follows: (1) complex tasks have high skill-level requirements and professional workers are few, so workers with insufficient skill levels cannot complete complex tasks, causing many tasks to go unassigned; (2) tasks are preferentially assigned to workers with high skill levels and high reputation, while workers with relatively low skill levels and reputation cannot be assigned tasks because they do not meet the task requirements. On real crowdsourcing platforms, we find that many complex tasks have sufficient budgets and that workers can raise their effective skill levels through collaboration. Based on these practical observations, this paper designs a worker collaboration model. When the platform lacks workers who meet a task's requirements, the model allows workers with inadequate skill levels to join a team and collaborate so that the team meets the task's skill-level requirements before the task is assigned. Finally, experiments on a real dataset show that the proposed model can improve the success rate of task assignment, reduce the budget cost of requesters, and increase the income of workers.
Method for Identifying Active Module Based on Gene Prioritization
Computer Science. 2023, 50 (11A): 221200113-8.  doi:10.11896/jsjkx.221200113
Abstract PDF(3428KB) ( 107 )   
With the rapid development of high-throughput sequencing, a vast amount of multi-omics data has contributed to investigating the pathogenesis of cancer at the molecular level. In recent years, the identification of active modules has become a major direction in bioinformatics. However, many existing approaches cannot identify a dense module that is strongly associated with cancer. A method called IdeMod is proposed that integrates a protein-protein interaction (PPI) network and gene expression data. More concretely, a gene scoring function is devised using a regression model with a p-step random walk kernel. By introducing the dominance relationship of the POC method, a gene prioritization list is produced. A simulated annealing algorithm, SA-PROX, is introduced to find an active module with high gene prioritization and strong connectivity. Experiments are performed on real biological datasets, including breast cancer and cervical cancer. Compared with the previous methods SigMod, LEAN, RegMod and ModFinder, IdeMod can successfully identify a well-connected module that contains a large proportion of cancer-related genes. Therefore, the proposed approach may become a useful complementary tool for identifying active modules.
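SA-PROX itself is not specified in the abstract; as a generic illustration of the simulated annealing search it builds on, here is a minimal, self-contained Python sketch (the scoring and neighbor functions in the usage below are toy stand-ins, not the paper's module-scoring model):

```python
import math
import random

def simulated_annealing(score, neighbor, init, T0=1.0, cooling=0.97, steps=500, seed=0):
    # Generic simulated annealing (maximization): always accept uphill moves,
    # accept downhill moves with probability exp(delta / T), and cool T geometrically.
    rng = random.Random(seed)
    cur = best = init
    T = T0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        delta = score(cand) - score(cur)
        if delta >= 0 or rng.random() < math.exp(delta / T):
            cur = cand
        if score(cur) > score(best):
            best = cur
        T *= cooling
    return best
```

For example, maximizing `score = lambda x: -(x - 3) ** 2` over the integers with a ±1 neighbor move drives the search toward x = 3.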
Image Processing & Multimedia Technology
2D Human Pose Estimation Based on Adaptive Estimation
ZHENG Quanshi, JIN Cheng
Computer Science. 2023, 50 (11A): 221000048-7.  doi:10.11896/jsjkx.221000048
Abstract PDF(3876KB) ( 154 )   
Regression-based 2D human pose estimation methods directly predict the coordinates of human keypoints. The Transformer can effectively model the relationships between human body parts, and its application significantly improves the accuracy of regression-based methods. However, related methods have two problems: 1) in the cross-attention module, a fixed query cannot properly focus on different keypoint regions across different images, which leads to distraction; 2) they directly learn the labeled keypoint coordinates and overfit the annotations. This paper proposes a pose estimation model based on adaptive prediction to solve these two problems. For the first problem, the model adaptively predicts the region the query should attend to and directs attention to that region. For the second problem, the model adaptively predicts the probability distribution of each keypoint appearing at every position, alleviating overfitting to annotations through soft prediction. Experiments on the MS-COCO dataset show that the model improves the accuracy of the baseline method by 2.8% and exceeds the highest accuracy of related methods by 0.2%.
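"Soft prediction" of a keypoint can be illustrated by taking the expectation over a predicted probability distribution instead of a hard argmax. The following is only a minimal 1-D Python sketch of this idea (our simplification; the paper's model predicts 2-D distributions over image positions):

```python
import math

def soft_argmax(logits):
    # softmax over positions, then the expected (soft) coordinate
    m = max(logits)                         # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    z = sum(exps)
    return sum(i * e / z for i, e in enumerate(exps))
```

Because the output is an expectation, it varies smoothly with the logits and is differentiable, unlike a hard argmax over discrete positions.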
Coupling Local Features and Global Representations for 2D Human Pose Estimation
CHEN Qiaosong, WU Jiliang, JIANG Bo, TAN Chongchong, SUN Kaiwei, DEN Xin, WANG Jin
Computer Science. 2023, 50 (11A): 221100007-5.  doi:10.11896/jsjkx.221100007
Abstract PDF(2286KB) ( 146 )   
In recent years, both convolutional neural networks and the Transformer have made progress in the field of human pose estimation. Convolutional neural networks (CNNs) are good at extracting local features, while the Transformer excels at capturing global representations. However, there are few studies that combine the two for human pose estimation, and the existing results are not good. To address this problem, this paper proposes CNPose (CNN-Nest Pose), a model that couples local features and global representations. The local-global feature coupling module of this framework uses multi-head attention and a residual structure to deeply couple local features and global representations. This paper also proposes a local-global information exchange module to solve the problem that the range of data sources of local features and global representations is inconsistent in the local-global feature coupling module during computation. The CNPose framework is verified on the COCO-val2017 and COCO-test-dev2017 datasets. Experimental results show that the CNPose model, which couples local features and global representations, achieves superior performance compared with similar methods.
FMCW Radar Human Behavior Recognition Based on Residual Network
LUO Jinyan, CHANG Jun, WU Peng, XU Yan, LU Zhongkui
Computer Science. 2023, 50 (11A): 220800247-6.  doi:10.11896/jsjkx.220800247
Abstract PDF(3732KB) ( 134 )   
Existing FMCW radar human behavior recognition methods mostly rely on deep convolutional neural networks; however, as the network deepens, problems arise such as increasingly difficult training or insufficient feature extraction. A method for FMCW radar human behavior recognition based on a residual network is proposed. The micro-Doppler time-domain spectrogram of each behavior is obtained by analyzing and processing the radar echo data and is used as the classification feature of the recognition model. The convolutional block attention module (CBAM) is introduced into the residual blocks of the residual network to build the recognition model. CBAM attends to the color changes of the spectrogram and the position of each color in the spectrogram, while introducing adaptive matching normalization and changing the convolutional structure of the network's input part improves the feature extraction ability of the model. Experimental verification shows that the average recognition accuracy of the model reaches 98.17%, and for behaviors with similar micro-Doppler features the recognition accuracy reaches 95%, which proves that the model has good recognition performance.
Visual Object Tracking Based on Adaptive Search Range Adjustment
Computer Science. 2023, 50 (11A): 221000172-6.  doi:10.11896/jsjkx.221000172
Abstract PDF(2998KB) ( 136 )   
Mainstream visual object tracking algorithms generally set the position of the object tracked in the last frame as the center of a search range, which is used to detect the object in the current frame. However, the tracked object may deviate from the center of the search range due to its motion, so its detection response in the current frame can easily be inhibited by the cosine window penalty mechanism, leading to tracking failure. To solve this problem, an adaptive search range adjustment (ASRA) method is proposed. In this method, a motion prediction model based on a recurrent neural network (RNN) is used to predict the object position in the current frame, and this prediction is combined with the correlation filtering response to adjust the center of the search range. The size of the search range is further adjusted according to the motion vector of the tracked object. The proposed ASRA method is applied to current state-of-the-art object tracking algorithms based on Siamese networks. Experiments on the OTB2015 and VOT2018 datasets show that ASRA can improve the accuracy and robustness of these algorithms.
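The cosine window penalty the abstract refers to is commonly a Hanning window blended with the response map, which biases detection toward the search-range center; ASRA compensates by re-centering the range itself. A minimal 1-D Python sketch of the penalty (the blending weight is illustrative, not taken from the paper):

```python
import math

def cosine_window(n):
    # Hanning window: 1.0 at the centre, near zero at the edges
    return [0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

def penalized_response(response, influence=0.3):
    # blend the raw response with the window, biasing detection toward the centre
    w = cosine_window(len(response))
    return [(1 - influence) * r + influence * wi for r, wi in zip(response, w)]
```

With equal raw peaks at the edges and the center, the penalized maximum lands at the center, which is exactly the behavior that suppresses an off-center true target.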
Study on Scale Adaptive Target Detection Algorithm Based on Improved D2Det
WANG Ling, HUANG Guan, WANG Peng, BAI Yane, QIU Tianheng
Computer Science. 2023, 50 (11A): 221100247-9.  doi:10.11896/jsjkx.221100247
Abstract PDF(5034KB) ( 148 )   
Aiming at the problems that D2Det (Towards High Quality Object Detection and Instance Segmentation) has a poor detection effect on scale-varying and small targets and a large number of parameters, this paper proposes a scale-adaptive target detection model, G-SAD2Det, based on D2Det. Firstly, in the data preprocessing stage, the data augmentation algorithms CutOut and Mosaic are introduced, giving the model good robustness in complex scenes. Secondly, the feature extraction network ResNet is improved: a multi-scale feature extraction structure is built into each residual block so that target features are better extracted at a fine-grained level. At the same time, a switchable global context semantic feature extraction module is added to the network structure, and salient features and global context semantic information are enhanced through different pooling layers. Then, the candidate box generation module is improved, and the self-located center area of the target is used to guide candidate box generation, enhancing the algorithm's adaptability to scale-varying targets. Finally, ordinary convolution is replaced with Ghost convolution to reduce the number of network parameters and the amount of computation. The VOC dataset and a COCO sub-dataset are used to verify the effectiveness of the algorithm: compared with D2Det, the mAP@0.5 of G-SAD2Det increases by 3.6% and 4.9% respectively on the two datasets, while the number of model parameters decreases by 27.42% and the amount of computation decreases by 35.96%. This proves that the improved algorithm not only improves accuracy but also reduces computation.
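Of the components above, the CutOut augmentation is the simplest to sketch: zero out a random square patch so the detector cannot rely on any single local region. A minimal single-channel Python version (nested lists stand in for an image array; the real augmentation operates on RGB images):

```python
import random

def cutout(img, size, seed=None):
    # img: H x W grid of pixel values; zero out a random size x size square (CutOut)
    rng = random.Random(seed)
    h, w = len(img), len(img[0])
    top = rng.randrange(h - size + 1)
    left = rng.randrange(w - size + 1)
    out = [row[:] for row in img]          # copy so the original image is untouched
    for i in range(top, top + size):
        for j in range(left, left + size):
            out[i][j] = 0
    return out
```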
Object Region Guided 3D Target Detection in RGB-D Scenes
MIAO Yongwei, SHAN Feng, DU Sicheng, WANG Jinrong, ZHANG Xudong
Computer Science. 2023, 50 (11A): 221200152-8.  doi:10.11896/jsjkx.221200152
Abstract PDF(2946KB) ( 130 )   
3D object detection in RGB-D scenes is an important issue in computer graphics and 3D vision. To overcome the poor adaptability of existing methods to complex RGB-D scene backgrounds and the difficulty of effectively combining object region information with the intrinsic features of sampling points, a novel object-region-guided 3D detection framework is proposed, which combines the global and local features of sampling points and eliminates background interference. The framework takes the RGB-D data of a 3D scene as input. First, the 2D regions of different objects in the underlying RGB image are extracted and roughly classified. The 2D bounding boxes of these objects are then lifted to their corresponding 3D oblique cone (frustum) regions, and the RGB-D data located in these regions are converted to point cloud data. Furthermore, guided by the object region information, features are extracted for the sampling points located in each oblique cone, and the global and local features of the sampling points are effectively fused by feature transformation and max-pooling aggregation. These fused features are then used to predict a probability score reflecting whether each sampling point belongs to the foreground or the background. According to this score, foreground and background sampling points are segmented, and a masked point cloud is generated by removing the background sampling points from the underlying 3D scene. Finally, object center points are generated by voting in the masked point cloud, and proposals and 3D target predictions are made with the aid of the object region information. In addition, a corner loss is added to optimize the accuracy of the bounding box. Experimental results on the public SUN RGB-D dataset show that the proposed framework is effective for 3D object detection: the accuracy of point cloud target detection reaches 59.1% under the same evaluation index as traditional methods, and the bounding boxes of 3D objects can be accurately estimated even in regions with strong occlusion or sparse sampling points.
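Lifting a 2D detection box to a 3D frustum amounts to keeping exactly those points whose pinhole projection falls inside the box. A minimal Python sketch with an assumed simple intrinsic model (focal length `f`, principal point `(cx, cy)`, all illustrative); the paper's pipeline operates on full RGB-D frames rather than point lists:

```python
def in_frustum(points, box, f, cx, cy):
    # keep 3-D points (X, Y, Z) whose pinhole projection lands inside the 2-D box
    x1, y1, x2, y2 = box
    kept = []
    for X, Y, Z in points:
        if Z <= 0:                          # behind the camera: never visible
            continue
        u, v = f * X / Z + cx, f * Y / Z + cy
        if x1 <= u <= x2 and y1 <= v <= y2:
            kept.append((X, Y, Z))
    return kept
```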
Multi-feature-aware Spatiotemporal Adaptive Correlation Filtering Target Tracking
MENG Qingjiao, JIANG Wentao
Computer Science. 2023, 50 (11A): 230200096-9.  doi:10.11896/jsjkx.230200096
Abstract PDF(3984KB) ( 144 )   
Aiming at the disadvantage that regularization filters define the regularization term in advance and therefore cannot suppress learning from non-target regions in real time, a new multi-feature-aware spatiotemporal adaptive correlation filtering method for target tracking is proposed. Firstly, spatial local response variation is introduced into the objective function to realize spatial regularization, so that the filter focuses on the trustworthy parts of the learning target, from which the response model is obtained. Secondly, the update rate of the filter is determined according to the change of the global response. Finally, non-convolutional feature-level fusion is realized by cascading color histograms (CN) and dimensionally reduced gradient histograms (fHOG). The conv1 and conv5 layers of imagenet-vgg-2048 are used to extract the spatial contour and semantic information of the target, and the ReLU function is used in fitting and training the data to improve speed while retaining the main information. The proposed method is compared with eight algorithms of the same type: the baseline STRCF (2018), whose objective function is adopted here; KCF (2014), which introduces Gaussian kernels to increase computational speed and samples circularly using a cyclic matrix; MOSSE_CA (2021), which links context and scale filters; DCF_CA (2017), which increases the number of samples but reduces the search area; Staple (2016), with temporal regularization; ARCF (2019), which uses region constraints to reduce anomalies; HSTDCF_CA (2021), a correlation filter with hierarchical spatiotemporal map regularization; and SAME_CA (2020), which segments the target into four blocks and computes the scale factor by using the kernel correlation filter to find the maximum response position of each block. Compared with the precision (0.737) and success rate (0.760) of the STRCF algorithm, the precision (0.747) and success rate (0.789) on DTB70 increase by 1% and 2.9% respectively. After multi-layer feature fusion, the learned image information is updated to obtain the overall contour, so that the target is tracked adaptively. Extensive experiments show that the algorithm basically meets real-time requirements in scenarios with complex backgrounds, object occlusion, fast motion and other challenges.
Improved YOLOv5 Small Drones Target Detection Algorithm
LU Qi, YU Yuanqiang, XU Daoming, ZHANG Qi
Computer Science. 2023, 50 (11A): 220900050-8.  doi:10.11896/jsjkx.220900050
Abstract PDF(4131KB) ( 197 )   
The detection of low-altitude, slow, small targets has always been a focus and difficulty in the field of early-warning detection. Current mainstream neural-network-based target detection algorithms are mainly designed for the VOC or COCO datasets, and their detection accuracy in specific scenarios is not ideal. Aiming at the specific scenario of detecting small drones against complex backgrounds, a small-drone target detection algorithm based on an improved YOLOv5 is proposed. First, a small-target detection layer is added to obtain a large-sized shallow feature map, improving the algorithm's ability to detect small targets. Secondly, to handle the varying sizes of small drones, the K-means++ clustering algorithm is used to optimize the sizes of the prior (anchor) boxes so that they match each feature layer. Finally, the Mosaic-SOD data augmentation method and an improved loss function are used to enhance the algorithm's perception of small targets and improve the efficiency of network training. The improved algorithm is applied to the detection of small drones against complex backgrounds. Experimental results show that, compared with the original YOLOv5 algorithm, the proposed algorithm has higher detection accuracy and stronger feature extraction capability for small rotor UAV targets; although the detection speed decreases to a certain extent, it still meets real-time requirements when detecting visible-light video streams.
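The K-means++ step chooses initial anchor shapes that spread over the distribution of box sizes; for anchors, a common distance is 1 − IoU between (w, h) boxes aligned at the origin. A minimal Python sketch of the seeding step only (our simplification; a full run would continue with standard k-means iterations):

```python
import random

def iou_dist(a, b):
    # 1 - IoU for two (w, h) boxes anchored at the origin
    inter = min(a[0], b[0]) * min(a[1], b[1])
    union = a[0] * a[1] + b[0] * b[1] - inter
    return 1.0 - inter / union

def kmeanspp_seeds(boxes, k, seed=0):
    # k-means++ seeding: each new seed is drawn with probability
    # proportional to its squared distance from the nearest chosen seed
    rng = random.Random(seed)
    centers = [rng.choice(boxes)]
    while len(centers) < k:
        d2 = [min(iou_dist(b, c) for c in centers) ** 2 for b in boxes]
        r = rng.random() * sum(d2)
        acc = 0.0
        for b, w in zip(boxes, d2):
            acc += w
            if acc >= r:
                centers.append(b)
                break
    return centers
```

Because already-chosen boxes have zero distance, a duplicate seed can never be drawn, which is exactly what makes ++ seeding converge faster than random initialization.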
YOLOv3 Vehicle Recognition Algorithm for Adaptive Dehazing in Complex Environments
YANG Xiuzhang, WU Shuai, LI Na, YANG Wenwen, LIAO Wenjing, ZHOU Jisong
Computer Science. 2023, 50 (11A): 220700147-8.  doi:10.11896/jsjkx.220700147
Abstract PDF(3584KB) ( 177 )   
Complex environmental factors seriously affect the performance of road vehicle target detection algorithms; traditional methods have low recognition accuracy and slow perception, which seriously threatens traffic safety. This paper proposes a YOLOv3 vehicle recognition algorithm based on adaptive image dehazing. First, an adaptive image dehazing algorithm is constructed in the image preprocessing stage: the ACE dehazing algorithm and the dark channel dehazing algorithm are combined to effectively reduce the noise of rain and fog images. Second, the improved YOLOv3 algorithm is used to identify and locate vehicle positions. Finally, the effectiveness of the method is demonstrated through detailed comparative experiments, and vehicles driving in complex weather are accurately identified. Experimental results show that the proposed method can effectively reduce noise in rain and fog conditions and can effectively locate moving vehicles. Its precision, recall and F1 value are 0.944, 0.934 and 0.939 respectively, higher than those of the traditional SSD, YOLO and YOLOv3 algorithms. It has good robustness and speed, provides a theoretical basis for the development of intelligent transportation, and has practical significance.
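The dark channel prior used in the dehazing stage rests on a simple statistic: in a haze-free patch, at least one color channel is near zero, while haze lifts all channels. A minimal Python sketch of the dark channel computation (nested lists of RGB tuples stand in for an image array; the full dehazing algorithm then estimates atmospheric light and transmission from this map):

```python
def dark_channel(img, patch=3):
    # img: H x W grid of (r, g, b) tuples with values in [0, 1]
    # dark channel: per-pixel minimum over colour channels and a local patch
    h, w = len(img), len(img[0])
    r = patch // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            m = 1.0
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    y, x = i + di, j + dj
                    if 0 <= y < h and 0 <= x < w:
                        m = min(m, min(img[y][x]))
            out[i][j] = m
    return out
```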
Efficient Video Super-Resolution with Latent Attention
WANG Yuji, DONG Haocheng, GONG Xueluan, CHEN Yanjiao
Computer Science. 2023, 50 (11A): 221100156-10.  doi:10.11896/jsjkx.221100156
Abstract PDF(3278KB) ( 164 )   
To solve the video super-resolution problem, the spatio-temporal correlation information in videos can be exploited, which is an effective way to reconstruct low-resolution videos into high-resolution ones. Prior works mainly focus on motion compensation to capture temporal dependency in video generation, leading to inefficient stage-wise modeling strategies. Compared with motion compensation, an attention model is more efficient in searching for spatio-temporal correlations. In this paper, we formulate a latent attention model for attention estimation with amortized variational inference and instantiate two effective attention modules for video super-resolution. Based on this model, a novel deep network is presented that captures spatio-temporal correlations efficiently for video super-resolution and admits end-to-end learning. Extensive experiments on public video datasets demonstrate the superior performance of our approach over several state-of-the-art methods such as SPMC and DUF-16L.
Multi-dimensional Feature Excitation Network for Video Action Recognition
LUO Huilan, YU Yawei, WANG Chanjuan
Computer Science. 2023, 50 (11A): 230300115-8.  doi:10.11896/jsjkx.230300115
Abstract PDF(3703KB) ( 122 )   
Due to the diversity of video content and the complexity of video backgrounds, effectively extracting spatio-temporal features is the main challenge of video action recognition. To learn spatio-temporal features with deep networks, researchers usually use two-stream networks or 3D convolutional networks. Two-stream networks use optical flow as input to learn temporal features, but optical flow cannot express long-distance temporal relationships and its computation requires a lot of memory and time. On the other hand, 3D convolutional networks increase the computational cost by an order of magnitude compared with 2D convolutional networks, which easily leads to overfitting and slow convergence. To solve these problems, an attention-based multi-dimensional feature excitation residual network (MFARs) is proposed for video action recognition. A motion supplement excitation module is proposed to model temporal information and stimulate motion information, and a united information excitation module is proposed to use temporal features to stimulate channel and spatial information so as to learn better spatio-temporal features. Combining these two modules, MFARs is constructed for video action recognition. The proposed method obtains accuracies of 96.5% and 73.6% on the UCF101 and HMDB51 datasets respectively. Compared with current mainstream action recognition models, the proposed multi-dimensional feature excitation method can effectively express spatial and temporal characteristics and achieves a better balance between computational complexity and classification accuracy.
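The excitation modules follow the general squeeze-and-excitation pattern: pool each channel to a scalar, squash it to a gate in (0, 1), and rescale the channel by that gate. A deliberately simplified Python sketch of the pattern (real SE-style blocks insert two small fully connected layers between the pooling and the sigmoid; this is not the paper's exact module):

```python
import math

def channel_excitation(feat):
    # feat: C x N list (channels x flattened spatial positions)
    # squeeze: global average pool per channel; excite: sigmoid gate; rescale
    gates = [1.0 / (1.0 + math.exp(-sum(ch) / len(ch))) for ch in feat]
    return [[g * v for v in ch] for g, ch in zip(gates, feat)]
```

Channels whose pooled response is strongly positive are passed through almost unchanged, while weak channels are attenuated, which is the recalibration effect the excitation modules rely on.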
Remote Sensing Image Fusion with Dual-branch Attention Network
LI He, NIE Rencan, YANG Xiaofei, ZHANG Gucheng
Computer Science. 2023, 50 (11A): 230200072-7.  doi:10.11896/jsjkx.230200072
Abstract PDF(5586KB) ( 150 )   
In remote sensing imaging, PAN images have higher spatial resolution, while MS images contain more spectral information; fusing them to obtain high-resolution multispectral images is therefore an important technique. The spatial details produced by panchromatic sharpening are limited because CNNs often fail to accurately capture long-range spatial features. To fully extract the spatial information of panchromatic images and the spectral information of multispectral images, this paper proposes a dual-branch attention network for remote sensing image fusion. Unlike previous methods that use pure convolutional neural networks to extract spatial and spectral information, this method introduces a spatial attention module and a channel attention module into the convolutional blocks to focus on spatial and spectral information respectively, and performs information interaction between layers to fully extract both. At the same time, based on the Transformer architecture, this paper builds a global Transformer branch to fully learn the spatial and spectral features of the image, and finally obtains a multispectral image with high spatial resolution after decoding. Full-resolution and reduced-resolution experiments are carried out on the IKONOS and WorldView-2 datasets. Experimental results show that the proposed method achieves better results than other methods in terms of both objective indicators and subjective vision.
Generative Industrial Image Abnormal Location Model Based on Fuzzy Masking and Dynamic Inference
WU Tianyue, ZHANG Hui, ZHANG Zouquan, TANG Junkun
Computer Science. 2023, 50 (11A): 230100073-7.  doi:10.11896/jsjkx.230100073
Abstract PDF(4656KB) ( 145 )   
The mechanization of industrial production puts forward new requirements for inspecting industrial product quality: a high-precision, easy-to-transplant anomaly detection algorithm is required to adapt to changing production methods. Aiming at the inherent problems of the low probability of abnormal samples in industrial production and incomplete predictions, a generative industrial anomaly localization model based on fuzzy masking and dynamic inference is proposed. Firstly, a contrastive sample generation module based on random fuzzy occlusion is designed to obtain high-quality simulated anomalous images. At the same time, a shallow feature fusion path is used to retain more edge information, loss-function weighting is used to make the model pay more attention to structural similarity, and contrastive learning is used to give the network better representation ability. Secondly, to alleviate the blurred outputs of generative models, a multi-branch anomaly dynamic inference method is designed, in which the two branches of iterative generation and accurate repair cooperate to widen the distance between background noise and real anomalies. Experimental results show that the proposed method achieves an average localization accuracy of 91.42% on the MVTec dataset and ranks in the top three for anomaly localization accuracy in 12 classes, locating anomalies more completely. For images with complex textures and large backgrounds, it still maintains high sensitivity, and its average anomaly localization performance is the best among generative detection models published in recent years.
Improved ICP Fast Point Cloud Registration Method Based on Feature Transformation Combined with KD Tree
TANG Jialin, LIN Shounan, ZHOU Zhuang, SI Wei, WANG Tenghui, ZHENG Zexin
Computer Science. 2023, 50 (11A): 230100028-5.  doi:10.11896/jsjkx.230100028
Abstract PDF(2737KB) ( 181 )   
Point cloud registration is a key technology of 3D reconstruction. Aiming at the slow convergence, low registration efficiency and long registration time of the iterative closest point (ICP) algorithm, a fast point cloud registration method based on feature transformation combined with a KD tree is proposed to improve ICP. First, three-dimensional SIFT keypoints are obtained on a difference-of-Gaussian model after down-sampling with the voxel grid method. Secondly, fast point feature histograms (FPFH) are established. Then the sample consensus initial alignment (SAC-IA) algorithm is used for coarse registration. Finally, using the obtained initial transformation matrix and an improved KD-tree-based ICP algorithm, accurate registration is realized. Experimental results on the Stanford registration data show that, compared with the ICP algorithm, the proposed algorithm has higher registration accuracy and better time efficiency, and can select a better initial pose for accurate registration. To some extent, this study avoids the local optimum problem in point cloud registration and provides an efficient method for subsequent target recognition, matching and 3D reconstruction.
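At the core of the fine registration stage, each ICP iteration matches every source point to its nearest destination point and re-estimates the transform; the KD tree only accelerates this nearest-neighbour search. A minimal translation-only 2-D Python sketch (a brute-force search stands in for the KD tree, and the full algorithm also estimates rotation):

```python
def icp_translation(src, dst, iters=20, tol=1e-9):
    # translation-only ICP: match each source point to its nearest destination
    # point, shift by the mean residual, repeat until the shift vanishes
    tx = ty = 0.0
    for _ in range(iters):
        moved = [(x + tx, y + ty) for x, y in src]
        dx = dy = 0.0
        for p in moved:
            q = min(dst, key=lambda d: (d[0] - p[0]) ** 2 + (d[1] - p[1]) ** 2)
            dx += q[0] - p[0]
            dy += q[1] - p[1]
        dx /= len(src)
        dy /= len(src)
        tx += dx
        ty += dy
        if abs(dx) < tol and abs(dy) < tol:
            break
    return tx, ty
```

With well-separated points and a good initial pose (which is what the SAC-IA coarse registration provides), the correspondences are correct from the first iteration and the estimate converges immediately; a poor initial pose is exactly where plain ICP falls into local optima.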
Electrolyzer Equipment and Sample Detection Method Based on Multi-scale Improved YOLOv5
WU Jiaojiao, LIU Zheng
Computer Science. 2023, 50 (11A): 230200163-6.  doi:10.11896/jsjkx.230200163
Abstract PDF(3396KB) ( 130 )   
Aiming at the real-time recognition problem of the electrolyzer transfer robot in an electrolytic aluminum workshop, the target detection of electrolyzer equipment and aluminum ingot samples suffers from excessively large differences in object size. Moreover, target detection algorithms generally have many parameters, making it difficult to meet real-time detection requirements when deployed on electrolyzer transfer robots. Therefore, a lightweight multi-scale YOLOv5 network model is proposed to handle the excessive differences in target size. The backbone feature extraction network is replaced with the lightweight ShuffleNet V2 network, and an SE attention mechanism is added to improve the accuracy of small-target recognition. In the enhanced feature extraction network, a shallow detection layer is added for smaller targets, achieving accurate recognition across multiple scales and large size changes. Experimental results show that the average detection accuracy of the improved YOLOv5 algorithm on electrolyzer equipment and sample identification for the electrolyzer transfer robot is 93.5%, which is 1.5% higher than that of the original YOLOv5 algorithm; the number of model parameters decreases by about 39.4%, and the average detection speed per image improves by 2.5 milliseconds, which is conducive to deployment on the electrolyzer transfer robot.
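ShuffleNet V2, used here as the lightweight backbone, relies on a channel shuffle operation to mix information between channel groups: reshape the channel dimension to (groups, channels/groups), transpose, and flatten. A minimal Python sketch over a list of channel indices (in the real network this reorders feature-map channels):

```python
def channel_shuffle(channels, groups):
    # reshape to (groups, c // groups), transpose, flatten:
    # afterwards each group contains one channel from every original group
    per = len(channels) // groups
    return [channels[g * per + i] for i in range(per) for g in range(groups)]
```

This is what lets grouped (cheap) convolutions still exchange information across groups without any extra parameters.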
Improved YOLOv5s Lightweight Steel Surface Defect Detection Model
JIANG Bo, WAN Yi, XIE Xianzhong
Computer Science. 2023, 50 (11A): 230900113-7.  doi:10.11896/jsjkx.230900113
Abstract PDF(3033KB) ( 175 )   
Aiming at the complex structure, large number of parameters, and poor detection accuracy and real-time performance of existing steel surface defect detection models, this paper proposes an improved YOLOv5s lightweight steel surface defect detection model. Firstly, MobileNetv3-Small replaces the YOLOv5s backbone extraction network, making the model lightweight and improving detection speed. Secondly, in the feature fusion stage, a weighted bidirectional feature pyramid network (BiFPN) is used to enhance feature extraction; by fusing features of different scales, the accuracy and robustness of detection are improved. At the same time, the convolutional block attention module (CBAM) is introduced to enhance the model's ability to detect small-scale targets. Finally, the K-means++ algorithm is used to cluster prior boxes, improving the accuracy and convergence speed of prior box clustering. The average precision of the improved YOLOv5s on the NEU-DET dataset (mAP@0.5) reaches 77.2%, with a detection speed of 102 FPS on an NVIDIA 1080Ti. Compared with the original YOLOv5s, the mAP increases by 3.90%, the parameter quantity decreases by 58.6%, the model size decreases by 34%, and the detection speed increases by 29.7%. Experimental results demonstrate that the improved lightweight YOLOv5s effectively improves both the accuracy and speed of steel surface defect detection; moreover, it is easy to deploy and meets the requirements of actual production in the steel strip industry.
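BiFPN's weighted fusion is commonly the "fast normalized fusion" form: each input feature gets a learnable scalar weight, passed through ReLU and normalized by the sum of all weights. A minimal Python sketch over 1-D feature vectors (our simplification; in the real network the inputs are feature maps at a common resolution, and the epsilon guards against a zero denominator):

```python
def fast_normalized_fusion(features, weights, eps=1e-4):
    # ReLU keeps the learnable weights non-negative; normalize by their sum
    w = [max(0.0, wi) for wi in weights]
    s = sum(w) + eps
    n = len(features[0])
    return [sum(w[k] * f[i] for k, f in enumerate(features)) / s for i in range(n)]
```

Unlike a softmax-weighted fusion, this form avoids the exponential and is therefore cheaper, which matters for a lightweight deployment target.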
Reconstruction of Solar Speckle Image Combined with Gated Fusion Network and Residual Fourier Transform
HUANG Yaqun, ZHENG Peiyu, JIANG Murong, YANG Lei, LUO Jun
Computer Science. 2023, 50 (11A): 220800229-7.  doi:10.11896/jsjkx.220800229
Abstract PDF(3877KB) ( 119 )   
When existing deep learning algorithms are used to reconstruct the highly blurred solar speckle images taken by the Yunnan Observatory, problems such as loss of high-frequency information, blurred edges and difficulty of reconstruction arise. This paper proposes a solar speckle image reconstruction algorithm combining a gated fusion network and residual Fourier transform. The gated fusion network consists of a generator and two discriminators. The generator contains a deblurring module, a high-dimensional feature extraction module, a gating module and a reconstruction module. The deblurring module adopts a U-shaped network based on a dual attention mechanism to obtain the deblurred features of the low-resolution image; the high-dimensional feature extraction module uses residual Fourier transform convolution blocks to extract high-dimensional features containing the image's spatial details; the gating module fuses these two kinds of features to obtain a weight map, weights the deblurred features with this map, and then fuses them with the high-dimensional features to obtain the fused features; the reconstruction module uses residual Fourier transform convolution blocks and a pixel-shuffle layer to reconstruct the fused feature map from the gating module into a high-resolution image. The two discriminators judge the authenticity of the deblurred image produced by the deblurring module and the high-resolution image produced by the reconstruction module, respectively. Finally, a combined training loss including pixel content loss, perceptual loss and adversarial loss is designed to guide model training. Experimental results show that, compared with existing deep learning reconstruction methods, the proposed method recovers high-frequency information better, produces clearer edge contours, and achieves higher structural similarity and peak signal-to-noise ratio.
Image Aesthetics-enhanced Visual Perception Recommendation System
ZHANG Kaixuan, CAI Guoyong, ZHU Kunri
Computer Science. 2023, 50 (11A): 221100083-8.  doi:10.11896/jsjkx.221100083
Abstract PDF(3536KB) ( 161 )   
The visual perception recommendation system aims to enhance the behavioral features of user-item interaction by extracting the visual features of item images from the perspective of visual cognition, and to model the user's visual and behavior-related preferences, so as to make better recommendations. In existing visual perception recommendation research, a pre-trained convolutional neural network (CNN) is usually used to extract the semantic features of visual objects, and the hidden aesthetic style features inside the appearance image of an item are rarely considered. In addition, the embedded information of the user-item interaction behavior structure is usually ignored in visual perception recommendation. To address these issues, an aesthetic feature-aware visual recommendation system is proposed that fuses image aesthetics and behavioral interaction structure embeddings (ABVR). ABVR uses the pre-trained ViT model to extract the high-level visual features of the image (semantic category features), uses an aesthetic extraction network to mine the mid-level aesthetic visual features in the image (the color, shape and other features of the items), and uses a graph convolutional neural network (GCN) module to learn the multi-layer graph structure embedding features of user-item interaction graph nodes, finally associating and fusing the three types of features to achieve aesthetically enhanced visual recommendation. Extensive experiments on two real datasets verify the effectiveness of the ABVR model in improving visual recommendation performance.
Image Classification for Unsupervised Domain Adaptation Based on Task Relevant Feature Separation Network
TANG Junkun, ZHANG Hui, ZHANG Zhouquan and WU Tianyue
Computer Science. 2023, 50 (11A): 230100068-8.  doi:10.11896/jsjkx.230100068
Abstract PDF(3173KB) ( 186 )   
Unsupervised domain adaptation (UDA) aims to assist a model in transferring information learned from a labeled source domain to an unlabeled target domain in the presence of cross-domain distribution discrepancies. Current advanced domain adaptation techniques rely mostly on aligning the distributions of the source and target domains. Among them, the features are frequently treated as a single global object when performing inter-domain adaptation, disregarding the coupling of task-relevant information (inter-domain invariant and intra-domain specific information) and task-irrelevant information (color contrast, image style) in the features. This coupling makes it difficult for the model to identify the important information in the features, resulting in sub-optimal solutions. In view of these issues, we propose an unsupervised domain adaptive classification method based on a task relevant feature separation network (TRFS), which helps the network extract the weights of downstream task-relevant features by learning the attention consistency between features perturbed by inter-domain style mixing and the original features. Further, weight subtraction is used to obtain the task-irrelevant feature weights, and the task-relevant and task-irrelevant features are then pushed apart by orthogonality constraints to achieve feature separation. A task feature refinement separation layer is designed to reduce confusion between alignment features and domain-specific features, as well as to improve the model's classification and discrimination accuracy. Comprehensive experimental results demonstrate that the designed separation module is plug-and-play and can enhance the performance of other UDA methods, and that TRFS has obvious advantages over other advanced UDA methods, achieving a classification accuracy of 73.6% on the Office-Home benchmark.
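One common form of the orthogonality constraint mentioned above penalizes the cross-correlation between the two feature groups; the sketch below is an illustrative assumption (a Frobenius-norm penalty), not necessarily the exact constraint used in TRFS.

```python
import numpy as np

def orthogonality_loss(f_task, f_irr):
    """Frobenius-norm penalty on the cross-correlation between task-relevant
    and task-irrelevant feature matrices (rows = samples): ||F_t^T F_i||_F^2."""
    cross = f_task.T @ f_irr
    return float(np.sum(cross ** 2))

rng = np.random.default_rng(1)
f_task = rng.normal(size=(16, 4))   # stand-in for task-relevant features
f_irr = rng.normal(size=(16, 4))    # stand-in for task-irrelevant features

# Projecting the irrelevant features off the task-relevant subspace gives a
# perfectly separated pair with (numerically) zero loss.
q, _ = np.linalg.qr(f_task)
f_irr_sep = f_irr - q @ (q.T @ f_irr)

assert orthogonality_loss(f_task, f_irr) > 1.0
assert orthogonality_loss(f_task, f_irr_sep) < 1e-12
```

Minimizing such a term during training pushes the two feature groups toward orthogonal subspaces, which is the separation effect the abstract describes.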
Rail Light Band Detection Algorithm Based on Deep Learning
ZHANG Xinfeng, BIAN Haonan, ZHANG Bo, ZHANG Jiaming, LIANG Yuqing
Computer Science. 2023, 50 (11A): 230200146-6.  doi:10.11896/jsjkx.230200146
Abstract PDF(4157KB) ( 173 )   
When a train runs on the track, the wheel rim presses against the rail surface and forms a light band. The shape of the light band reflects the positional relationship between the rail and the wheel, so capturing abnormal light band shapes can effectively prevent safety problems in train operation and improve ride comfort. The traditional light band detection method relies on manual inspection, which suffers from low efficiency and demands strong expertise. Early computer vision techniques used the edge and gray-level information of the image to locate the rail region and then segmented the light band region on this basis, which was not satisfactory in efficiency or robustness. Therefore, it is necessary to segment the rail and the light band with high efficiency and high precision. This paper first uses a ResNet classification network to separate images without turnouts from images with turnouts. Then, for the two kinds of images, the DeeplabV3+ segmentation network is used to segment the light band and rail areas respectively. Finally, to address the problem that the rail edge is prone to unclear segmentation, this paper proposes a post-processing algorithm based on the Douglas-Peucker algorithm to fit the rail edge. The results show that, compared with directly applying the semantic segmentation network to both types of images, segmentation accuracy is steadily improved by the classification step and the post-processing of the segmentation results. The intersection over union (IoU) of the overall, rail and light band segmentation is 95.45%, 87.48% and 92.60% respectively for non-turnout images, and 90.20%, 76.56% and 85.42% respectively for turnout images. The proposed algorithm segments the rail and light band regions with high precision and can efficiently process images in batches, which gives it high engineering value.
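The Douglas-Peucker algorithm underlying the post-processing step simplifies a polyline by recursively keeping only points that deviate from the current chord by more than a tolerance. A minimal self-contained implementation (the sample edge points are invented for illustration):

```python
import math

def perpendicular_distance(p, a, b):
    """Distance from point p to the line through a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def douglas_peucker(points, epsilon):
    """Recursively drop points closer than epsilon to the chord between the
    first and last point, splitting at the farthest point otherwise."""
    if len(points) < 3:
        return list(points)
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax > epsilon:
        left = douglas_peucker(points[:index + 1], epsilon)
        right = douglas_peucker(points[index:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]

# A nearly straight rail edge collapses to its two endpoints...
flat = [(x, 0.01 if x % 2 else -0.01) for x in range(6)]
assert douglas_peucker(flat, 0.1) == [flat[0], flat[-1]]

# ...while a genuine corner survives the simplification.
edge = [(0, 0), (1, 0.05), (2, -0.04), (3, 3), (4, 0.02), (5, 0)]
assert (3, 3) in douglas_peucker(edge, 0.5)
```

Applied to a jagged segmentation boundary, this yields the straight-line rail edge fit the paper describes; the tolerance `epsilon` controls how aggressively small segmentation noise is smoothed away.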
Railway Track Detection Method Based on Improved YOLOv5s
JIANG Ke, SHI Jianqiang, CHEN Guangwu
Computer Science. 2023, 50 (11A): 230200101-6.  doi:10.11896/jsjkx.230200101
Abstract PDF(3686KB) ( 191 )   
Track line detection helps to improve the running safety of trains, but the detection effect is easily affected by the train's operating environment. This paper proposes a track line detection method based on image pre-processing and an improved YOLOv5s network. Firstly, image pre-processing, which uses HSV conversion to remove redundant information from the image and then applies Otsu thresholding, improves the saliency of the detection target and reduces the complexity of target recognition. Secondly, considering the lightweight requirements of the train on-board system, the YOLOv5s target recognition network is improved: a CBAM attention mechanism module is added to the backbone network to enhance effective feature information, which improves detection speed while preserving detection quality and makes the detection model easy to deploy on mobile devices. Experimental results show that the proposed detection algorithm achieves 94.1% mAP on the test dataset, with good real-time performance and robustness.
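Otsu thresholding, used in the pre-processing step above, picks the gray-level cut that maximizes between-class variance of the histogram. A compact numpy version (the synthetic bimodal image is invented for the demo):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing between-class variance for a uint8 image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                  # class-0 mass for each candidate cut
    mu = np.cumsum(prob * np.arange(256))    # cumulative intensity mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b, nan=0.0, posinf=0.0)  # empty classes score 0
    return int(np.argmax(sigma_b))

# Synthetic bimodal "track" image: dark background, bright target region.
rng = np.random.default_rng(0)
img = np.where(rng.random((64, 64)) < 0.5, 50, 200).astype(np.uint8)
t = otsu_threshold(img)
assert 50 <= t < 200   # threshold falls between the two intensity modes
```

On a real frame one would threshold a single HSV channel this way before feeding the binarized, simplified image to the detector.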
Study on Prediction Modeling and Compensation of Circular Target Center Positioning Error Based on GA-BP
CHEN Haiyan, ZHU Junlin, WANG Ping
Computer Science. 2023, 50 (11A): 221100170-5.  doi:10.11896/jsjkx.221100170
Abstract PDF(3720KB) ( 140 )   
When a circular target is used for camera calibration, the target images as an ellipse that varies with the camera's shooting position, so the circle center coordinates obtained with conventional circle center positioning methods are not the true imaged position of the circle center, and calibration accuracy is low when these image coordinates are used directly for camera calibration. To address this problem, a method is proposed that models the circle center positioning error of circular target images and then compensates for the error to improve circle center positioning accuracy. Firstly, a simulated image set of circular targets is established. Secondly, the images are pre-processed and an ellipse fitting method is used to locate the circle center coordinates in the image. Thirdly, a GA-BP neural network is constructed and trained to establish the relationship model between the circle center positioning error and the camera lens position. Finally, an error compensation strategy is used to correct the located circle center coordinates. Experimental results show that the error prediction accuracy of the constructed GA-BP neural network model for the horizontal and vertical coordinates of the circle center is significantly better than that of the BP or E-R models, with MAPE, RMSE and R2 of 5.51%, 0.0048, 0.9996 and 6.14%, 0.0964, 0.9998, respectively. Circle center positioning after error compensation is more accurate, which verifies the feasibility of using error prediction modeling and error compensation to improve circle center positioning accuracy, and provides method support for high-precision camera calibration tasks.
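In a GA-BP scheme, a genetic algorithm searches the weight space that gradient-based BP training would otherwise start from at random. The toy below shrinks this idea to a one-neuron linear model so the whole selection/crossover/mutation loop fits in a few lines; the population sizes, mutation scale and target function are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(-1.0, 1.0, size=64)
y = 2.5 * x - 0.8                      # the mapping the tiny "network" should learn

def fitness(w):
    """Negative MSE of a one-neuron linear model, w = (weight, bias)."""
    return -float(np.mean((w[0] * x + w[1] - y) ** 2))

pop = rng.normal(0.0, 1.0, size=(30, 2))               # population of weight vectors
for _ in range(150):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-10:]]              # selection: keep the fittest
    pairs = elite[rng.integers(0, 10, size=(20, 2))]
    children = pairs.mean(axis=1)                      # crossover: average two parents
    children += rng.normal(0.0, 0.1, size=children.shape)   # mutation
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(w) for w in pop])]
assert abs(best[0] - 2.5) < 0.2 and abs(best[1] + 0.8) < 0.2
```

In the paper's setting the chromosome would encode all initial weights of a multi-layer BP network and the GA's best individual would seed subsequent gradient training.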
Image Retargeting Method Based on Grids and Superpixels
CHEN Meiying, BI Xiuli, LIU Bo
Computer Science. 2023, 50 (11A): 221100153-8.  doi:10.11896/jsjkx.221100153
Abstract PDF(5834KB) ( 131 )   
Images are an important medium for communication between people. With today's rapid development of information technology, it is of great significance to use image retargeting technology to make images adapt to a variety of device sizes. Grid-based image retargeting algorithms first generate a regular rectangular grid corresponding to the input image, and then determine the deformation degree of the grid by evaluating the weight of image pixels according to the image content in each cell; the image is iterated globally until the termination condition of the retargeting is reached. However, such algorithms still evaluate image content incompletely, which distorts the structure of the output image and makes it difficult to maintain the diagonal features and overall structure of the result. To solve these problems, this paper proposes an image retargeting method based on superpixels, gradients and saliency. Firstly, the input image is pre-processed by a superpixel method, and the superpixel blocks are used as the subsequent processing units; an image pixel weight evaluation method based on gradient and saliency is used to measure the weight of each superpixel, and an image retargeting weight heat map is output. Finally, the grid is iteratively optimized according to the retargeting weight heat map to realize the retargeting of the image. Experimental results show that the proposed method has advantages in six no-reference image quality assessment indicators as well as in semantic rationality, information accuracy and visual naturalness, and has great application value in the field of image retargeting.
Image Denoising Network Model Combined with Multi-head Attention Mechanism
LI Yueyue, LIU Wanping, HUANG Dong
Computer Science. 2023, 50 (11A): 230100091-8.  doi:10.11896/jsjkx.230100091
Abstract PDF(4610KB) ( 204 )   
Due to the rapid development of GPU computing, deep learning has recently been applied to image denoising. Most deep learning methods require noise-free images as training labels, but these are usually difficult or even impossible to obtain. Therefore, some scholars have begun to study training denoising networks with noisy images, but the restored images tend to lose details. Inspired by the idea of Noise2Noise (N2N), this paper uses pairs of noisy images to train a neural network to learn the distribution relationship between the same type of noise in the same range, realizing a novel image denoising network model. The newly developed model (MA-UNet) is based on the classic UNet architecture and combines a multi-head attention mechanism with simple residual networks. It can capture the key information of the image and grasp the global information of the features, so as to recover clearer images. Compared with the traditional algorithm CBM3D and other methods such as DnCNN and B2U, MA-UNet performs well in terms of parameters. Visual comparison shows that our model restores much clearer image details. Compared with the model designed by N2N, under different noise magnitudes, the mean peak signal-to-noise ratio and structural similarity index of the proposed model on four classical datasets improve significantly.
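The Noise2Noise principle the abstract relies on is that, for zero-mean noise, the MSE-optimal prediction against noisy targets coincides (in expectation) with the clean signal, so clean labels are unnecessary. A one-pixel numerical demonstration of that averaging argument (values invented for the demo):

```python
import numpy as np

rng = np.random.default_rng(42)
clean = 0.7                                        # the unknown clean pixel value
noisy_targets = clean + rng.normal(0.0, 0.3, size=50000)

# The MSE-optimal constant prediction against the noisy targets is their mean,
# which converges to the clean value because the noise is zero-mean.
estimate = float(noisy_targets.mean())
assert abs(estimate - clean) < 0.01
```

A denoising network trained on noisy input/noisy target pairs exploits the same cancellation per pixel, which is why paired noisy images suffice as supervision.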
Hue Augmentation Method for Industrial Product Surface Defect Images
LUO Yuetong, LI Chao, DUAN Chang, ZHOU Bo
Computer Science. 2023, 50 (11A): 230200089-6.  doi:10.11896/jsjkx.230200089
Abstract PDF(3124KB) ( 127 )   
The hue distribution and the spatial distribution of defects in industrial sampling data often differ from those of test data, which often leads to poor performance of deep learning-based defect detection models. Data augmentation based on generative adversarial networks (GAN) is therefore a common solution. Two GANs (HC-GAN and T-GAN) are designed to perform hue augmentation and defect location augmentation respectively. By constructing a content consistency module and a hue control module, HC-GAN can achieve hue augmentation based on reference data without changing defect characteristics. By pairing the input and output data, T-GAN realizes defect location transfer. In addition, the two GANs can be used in tandem to achieve both hue augmentation and position transfer. Finally, hue distribution statistics and object detection tests are carried out on the generated data. The results show that the data generated by the proposed method achieves hue and position augmentation and improves the accuracy of surface defect detection for industrial products.
Real-time Image Semantic Segmentation Algorithm Based on Hybrid Attention
WANG Yan, XIA Chuangshuai, WANG Na, NAN Peiqi
Computer Science. 2023, 50 (11A): 230200010-6.  doi:10.11896/jsjkx.230200010
Abstract PDF(3335KB) ( 148 )   
Existing semantic segmentation algorithms are difficult to deploy on mobile devices due to complex models and a large amount of computation. A new semantic segmentation algorithm based on hybrid attention is proposed. The algorithm adopts an asymmetric encoder-decoder structure. The encoder combines depth-wise separable convolution and dilated convolution in an efficient residual module to extract image features at different levels of the network; it pays more attention to spatial position information in the shallow layers and enhances semantic information extraction in the deep layers. In the decoder, a hybrid attention feature fusion module is designed, which uses spatial attention to strengthen spatial location information in the shallow layers and channel attention to enhance the expression of key information in the deep feature maps. It can effectively integrate the spatial and context information in feature maps of different levels, strengthen the expression of semantic information, and reduce the loss of image information during fusion. Finally, the segmentation results are predicted by a classifier. Extensive experiments show that the proposed algorithm achieves 93.2% PA and 73.2% mIoU on Cityscapes, and reaches 38 FPS with 1.62×10^6 parameters on a Tesla V100 GPU. On the Pascal VOC 2012 dataset, PA and mIoU reach 92.4% and 74.8% respectively. Experimental results show that this algorithm can effectively and quickly complete city scene image segmentation tasks.
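The parameter savings that make depth-wise separable convolution attractive for mobile deployment can be checked with simple counting (bias terms ignored; the 128-channel example is invented for illustration):

```python
def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k (one filter per input channel) followed by a 1 x 1 pointwise conv."""
    return c_in * k * k + c_in * c_out

c_in, c_out, k = 128, 128, 3
standard = conv_params(c_in, c_out, k)                   # 128 * 128 * 9 = 147456
separable = depthwise_separable_params(c_in, c_out, k)   # 1152 + 16384 = 17536
assert standard == 147456 and separable == 17536
assert standard / separable > 8                          # roughly 8.4x fewer parameters
```

The ratio approaches 1/c_out + 1/k^2 of the standard cost, which is why stacking such layers keeps the whole model near the 1.62×10^6-parameter scale reported above.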
UFormer: An End-to-End Feature Point Scene Matching Algorithm Based on Transformer and U-Net
XIN Rui, ZHANG Xiaoli, PENG Xiafu, CHEN Jinwen
Computer Science. 2023, 50 (11A): 230300045-6.  doi:10.11896/jsjkx.230300045
Abstract PDF(3526KB) ( 133 )   
At present, most scene matching algorithms use traditional feature point matching, whose pipeline consists of feature detection and feature matching; for weak-texture scenes, both accuracy and matching success rate are low. UFormer is an end-to-end solution that performs Transformer-based feature extraction and matching, using an attention mechanism to improve the algorithm's ability to handle weak-texture scenes. Inspired by the U-Net architecture, UFormer constructs a coarse-to-fine sub-pixel-level mapping between images based on an encoder-decoder structure. The encoder uses overlapping self- and cross-attention to detect and extract the relevant features of each image scale, establish feature connections, and perform down-sampling for coarse-grained matching that provides initial positions. The decoder up-samples to restore image resolution, fuses the attention feature maps at each scale, achieves fine-grained matching, and refines the matching results to sub-pixel precision. The ground-truth homography matrix is introduced to compute a Euclidean distance loss on the coordinates of the coarse- and fine-grained matching point pairs, supervising the learning of the network. UFormer integrates feature detection and feature matching in a simpler structure, which improves real-time performance while ensuring accuracy, and can handle weak-texture scenes to a certain extent. On a collected drone trajectory dataset, compared with SIFT, the coordinate accuracy improves by 0.416 pixel, the matching time decreases to 0.106 s, and the matching success rate for weak-texture scene images is higher.
Controlled Facial Gender Forgery Combining Wavelet Transform High Frequency Information
CHEN Wanze, CHEN Jiazhen, HUANG Liqing, YE Feng, HUANG Tianqiang, LUO Haifeng
Computer Science. 2023, 50 (11A): 221000241-10.  doi:10.11896/jsjkx.221000241
Abstract PDF(5820KB) ( 140 )   
Image-to-image translation (I2I) technology based on generative adversarial networks has made a series of breakthroughs in various fields and is widely used in image synthesis, image colorization and image super-resolution, especially in face attribute manipulation. To address the disparity in the quality of generated images across translation directions caused by model architecture and data imbalance, a high-frequency injection GAN (HFIGAN) model is proposed to achieve controlled facial gender forgery by transmitting high-frequency information. Firstly, in the wavelet module for transmitting high-frequency information, the features of the encoding stage are decomposed at the feature level by discrete wavelet transform, and the obtained high-frequency information is injected reciprocally in the decoding stage, so that the information composition between the source and target domains always stays in a favorable ratio. Secondly, a dynamic consistency loss for images addresses the inconsistent translation difficulty across directions in multi-domain I2I conversion tasks. By redesigning the loss function, we scale the losses of difficult and easy samples, strengthen the feedback of difficult samples to the model, and make the model focus more on training difficult samples to improve performance. Finally, a diversity regularization term based on style features is proposed, which adds the distance metric of style vectors in different spaces to the traditional diversity loss for supervision, enabling the model to maintain the diversity of generated images while improving generation quality. Experiments on the CelebA-HQ and FFHQ datasets verify the effectiveness of the proposed method, and the generalization of the loss function is verified by combining it with mainstream I2I models. Experimental results show that HFIGAN outperforms previous advanced methods in facial gender forgery and that the proposed loss function has some generality.
Speech Enhancement Based on Generative Adversarial Networks with Gated Recurrent Units and Self-attention Mechanisms
ZHANG Dehui, DONG Anming, YU Jiguo, ZHAO Kai and ZHOU You
Computer Science. 2023, 50 (11A): 230200203-9.  doi:10.11896/jsjkx.230200203
Abstract PDF(3655KB) ( 132 )   
Generative adversarial networks (GAN) have strong noise reduction ability and have been applied to speech enhancement in recent years, owing to the adversarial training of two networks that continuously improves the mapping ability of the network. In view of the shortcoming that existing GAN-based speech enhancement methods do not make full use of the temporal and global dependencies in speech feature sequences, this paper proposes a speech enhancement GAN that integrates gated recurrent units and a self-attention mechanism. The network builds serial and parallel temporal modeling modules to capture the time dependence and context information of speech feature sequences. Compared with the baseline algorithm, the perceptual evaluation of speech quality (PESQ) score of the proposed GAN improves by 4%, and it performs better on several objective evaluation indexes such as segmental signal-to-noise ratio (SSNR) and short-time objective intelligibility (STOI). The results show that integrating temporal and global correlations in speech feature sequences helps improve the speech enhancement performance of GAN networks.
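The global dependencies mentioned above are what scaled dot-product self-attention captures: every frame of the feature sequence attends to every other frame. A minimal numpy sketch (shapes and random projections are invented stand-ins for the learned layers):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a feature sequence x of shape (T, d)."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])   # (T, T) pairwise frame similarities
    weights = softmax(scores, axis=-1)        # each frame attends to all frames
    return weights @ v

rng = np.random.default_rng(3)
T, d = 10, 16   # e.g. 10 time frames of a speech feature sequence
x = rng.normal(size=(T, d))
wq, wk, wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
out = self_attention(x, wq, wk, wv)
assert out.shape == (T, d)
```

In the paper's network this global module complements the GRUs, which model only sequential (step-by-step) dependence.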
Remote Sensing Image Fusion Method Combining Edge Detection and Parameter-adaptive PCNN
SHI Ying, HE Xinguang, LIU Binrui
Computer Science. 2023, 50 (11A): 220900264-6.  doi:10.11896/jsjkx.220900264
Abstract PDF(3122KB) ( 112 )   
In order to improve the fusion quality of panchromatic (PAN) and multispectral (MS) images, and to solve the problems of difficult parameter adjustment in pulse coupled neural networks (PCNN) and incomplete preservation of edge features in fused images, this paper proposes a remote sensing image fusion method combining the Canny operator and a parameter-adaptive PCNN. Firstly, the MS image is converted into the HSV color space to obtain the value (V) component, and the edge information of the PAN image is distinguished from non-edge regions by the Canny operator; the edges of the PAN image are enhanced by fusing the PAN image and the V component of the MS image according to the characteristics of the edge distribution. Then, the new PAN image and the V component of the MS image are each decomposed into their corresponding high-frequency and low-frequency coefficient bands by the nonsubsampled shearlet transform (NSST). The high-frequency bands are fused by a parameter-adaptive PCNN model, in which all PCNN parameters are estimated adaptively from the input frequency bands to obtain a PCNN model with optimal parameters. The low-frequency bands are fused by selective weighted summation. Finally, the new V component is obtained by the inverse NSST, and the final fused image is obtained by the inverse HSV transform. The proposed method is compared with other recent methods, and seven objective evaluation indicators are selected to evaluate the spatial details and spectral information of the fused image. Experimental results show that the proposed method achieves better fusion performance, with advantages in both visual quality and objective evaluation.
Lightweight Graph Convolution Action Recognition Algorithm Based on Multi-stream Fusion
LI Hua, ZHAO Lingdi, CHEN Yujie, YANG Yang, DU Xinzhao
Computer Science. 2023, 50 (11A): 220800147-6.  doi:10.11896/jsjkx.220800147
Abstract PDF(1918KB) ( 146 )   
Traditional RGB-based action recognition is easily affected by problems such as light intensity and viewing angle. Skeleton-based action recognition is less affected by these problems and has become one of the mainstream methods. However, current skeleton-based action recognition methods have large numbers of parameters and slow running speed. To solve these problems, a multi-stream fusion lightweight graph convolution action recognition framework is proposed. Firstly, data fusing joint, bone, joint motion and bone motion information is input into the spatial graph convolution module. Secondly, a spatial attention mechanism is added to the spatial graph convolution module to better extract the relationships between joints. Finally, in the temporal convolution module, depthwise convolution and pointwise convolution are used to reduce the number of parameters. Compared with the baseline network SGN on the NTU-RGB+D 120 dataset, the proposed network improves accuracy by 2.3% under cross-subject evaluation and by 1.9% under cross-setup evaluation, while the number of parameters is reduced by 0.12×10^6, verifying the validity of the proposed network.
Study on Ultrasonic Phased Array Defect Detection Based on Machine Vision
ZOU Chenwei, YAO Rao
Computer Science. 2023, 50 (11A): 230200150-6.  doi:10.11896/jsjkx.230200150
Abstract PDF(3447KB) ( 116 )   
Ultrasonic phased array inspection is a commonly used non-destructive testing (NDT) technique for workpiece defect detection and evaluation. In order to support modern industrial big data and automation, to solve the problems of missing information and scattering noise in the ultrasonic phased array images generated during defect detection, and to accurately recognize various types of defects, a defect recognition method based on machine vision is proposed: after the images are denoised with improved PM differential equations, image features are extracted as the experimental data of a BP neural network optimized by particle swarm optimization. Experimental results show that the proposed method achieves an accuracy of 99.43% on the training set, 1.833% higher than the traditional BP network model, and can accurately recognize defects while maintaining good model performance.
Magnetic Tile Defect Detection Algorithm Based on Improved YOLOv4
ZHANG Xiaoxiao, DENG Chengzhi, WU Zhaoming, CAO Chunyang, HU Cheng
Computer Science. 2023, 50 (11A): 230100100-7.  doi:10.11896/jsjkx.230100100
Abstract PDF(3975KB) ( 139 )   
Various defects occur in the manufacturing process of magnetic tiles due to process problems, and traditional detection algorithms have slow detection speed and low accuracy. In order to detect surface defects of magnetic tiles quickly and effectively, this paper proposes a defect detection method based on an improved YOLOv4 algorithm. Firstly, the scSE attention module is embedded in the residual units of CSPnet in the feature extraction backbone to enhance the spatial and channel features of small targets. Secondly, an atrous spatial pyramid pooling (ASPP) module is used instead of the original SPP module to increase the receptive field of the convolution kernels, retain more image details and enhance information relevance. Finally, the traditional convolutions in the five convolution blocks of the neck are replaced by depthwise separable convolutions to better extract feature information and reduce the number of model parameters. Experimental results show that the improved YOLOv4 algorithm achieves a mean average precision of 96.67%, a detection time of 44 ms, and a model size of 249 MB; it is significantly better than the original algorithm, with higher detection accuracy and practicality.
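The receptive-field gain from replacing SPP with ASPP comes from dilation: a k x k kernel with dilation rate d spans k + (k - 1)(d - 1) pixels per axis at unchanged parameter cost. The dilation rates below are illustrative ASPP-style values, not taken from the paper:

```python
def effective_kernel(k, d):
    """Span covered per axis by a k x k kernel with dilation rate d."""
    return k + (k - 1) * (d - 1)

# Parallel branches with the same 3x3 kernel at growing dilation rates cover
# ever larger contexts while the weight count stays constant.
spans = {d: effective_kernel(3, d) for d in (1, 6, 12, 18)}
assert spans == {1: 3, 6: 13, 12: 25, 18: 37}
```

Concatenating such branches gives the multi-scale context that helps the detector keep both small-defect detail and global information.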
Asphalt Pavement Crack Detection in Wetting Conditions Based on YOLOv5
ZHANG Enhua, WANG Weijie, DUAN Nan, KANG Nan
Computer Science. 2023, 50 (11A): 220900155-5.  doi:10.11896/jsjkx.220900155
Abstract PDF(2914KB) ( 133 )   
To investigate the influence of a wet environment on automatic crack detection for asphalt pavement, an asphalt pavement crack detection model is established with the YOLOv5 target detection algorithm, which is based on deep learning. Using this model, a comparison experiment of crack detection under wet and dry conditions is set up, and the accuracy and confidence of the crack detection results under the two conditions are compared. The results show that a wet environment amplifies the identifying features of pavement cracks in the deep learning network and improves the effect of pavement crack detection. The accuracy of crack identification is 80.70% on dry pavement and 89.47% on wet pavement, so the accuracy of the crack detection model under wet conditions is 8.77% higher. At the same time, the average confidence is found to be 0.72 in the dry environment and 0.78 in the wet environment, showing a significant positive correlation between wetting and crack detection confidence. These results provide a new idea for improving automatic crack detection of asphalt pavement and an effective tool for pavement maintenance management.
Cross-modal Hash Retrieval Based on Text-guided Image Semantic Fusion
GU Baocheng, LIU Li
Computer Science. 2023, 50 (11A): 221100191-6.  doi:10.11896/jsjkx.221100191
Abstract PDF(2454KB) ( 144 )   
Hash-based cross-modal retrieval algorithms are characterized by low storage consumption and high search efficiency, and cross-modal hash retrieval of multimedia data has become a current research hotspot. At present, mainstream cross-modal hash retrieval methods study the learning of inter-modal hash codes while neglecting feature learning and semantic fusion between the modalities. This paper transforms the image-text matching problem in CLIP into a pixel-text matching problem: the text features query the image features through a Transformer decoder, encouraging the text features to learn the most relevant pixel-level image information; the pixel-text matching score then guides image modality feature learning, mining deeper related semantic information between the modalities; and a binary cross-entropy loss function is introduced to improve the semantic fusion ability between modalities. High-quality binary hash codes can thus be obtained when the high-dimensional features are mapped to a low-dimensional Hamming space. Comparative experiments on the MIRFLICKR-25K and NUS-WIDE datasets show that the proposed model performs better than current mainstream algorithms under hash codes of different lengths.
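The final step above, mapping real-valued features to binary codes ranked by Hamming distance, can be sketched with sign thresholding. This is only the retrieval mechanics, not the paper's learned hashing; the random embeddings are invented stand-ins for the trained image and text features.

```python
import numpy as np

def to_hash(features):
    """Binarize a real-valued embedding by sign thresholding into a {0, 1} code."""
    return (features > 0).astype(np.uint8)

def hamming_distance(a, b):
    """Number of differing bits; cheap to compute, hence the retrieval speed."""
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(5)
dim = 64                                      # hash code length
img = rng.normal(size=dim)                    # stand-in image embedding
txt_match = img + rng.normal(0, 0.1, dim)     # well-aligned text embedding
txt_other = rng.normal(size=dim)              # unrelated text embedding

h_img, h_match, h_other = map(to_hash, (img, txt_match, txt_other))
assert hamming_distance(h_img, h_match) < hamming_distance(h_img, h_other)
```

The training objectives described in the abstract serve precisely to make semantically matched image-text pairs land near each other in this Hamming space.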
Cross-view Geo-visual Localization
LIU Xudong, YU Ping
Computer Science. 2023, 50 (11A): 221100066-7.  doi:10.11896/jsjkx.221100066
Abstract PDF(3648KB) ( 150 )   
With the explosive growth of smart terminal equipment and the rapid rise of the mobile Internet, the demand for location-based services has become more and more prominent in many scenarios, such as indoor environments and sparsely populated remote mountainous areas. However, because GPS signals in these areas are blocked or signal base stations are difficult to deploy, GPS localization cannot work properly. Image-based geo-localization refers to determining the location of an image based only on visual information. Without any prior knowledge, predicting the geographic location of a photo is a very difficult task, because images taken on the ground vary greatly with weather, objects and camera settings. This paper explores a cross-view geo-localization method. First, the inverse polar coordinate transformation is used to convert the street-view perspective into an aerial-perspective image, so as to reduce the domain gap between the two. Then deep learning is used to encode images from different perspectives into more robust global descriptor vectors. Finally, image matching is performed on this basis. For image feature extraction, the VGG16 model is adopted; its smaller convolution kernels in deeper layers increase the receptive field of the network while saving parameters. For feature encoding, a multi-scale attention mechanism is integrated into the NetVLAD model, and the features extracted by the backbone are encoded into a more robust global feature descriptor vector. Experimental results show that this method achieves higher accuracy than existing methods, and that good matching accuracy can be obtained from street views captured by ordinary smartphones rather than high-definition street views captured by professional equipment.
Big Data & Data Science
Survey of Community Discovery in Complex Networks
CAO Jinxin, XU Weizhong, JIN Di, DING Weiping
Computer Science. 2023, 50 (11A): 230100130-11.  doi:10.11896/jsjkx.230100130
Abstract PDF(3625KB) ( 189 )   
Many complex systems in the real world can be modeled as complex networks, such as social networks and scientist collaboration networks. The study of complex networks has attracted the attention of many researchers in different fields. Mining community structure, i.e., dividing a network into communities of nodes with dense intra-community links and sparse inter-community links, is one of the main problems in the study of complex networks. Research on community detection in complex networks is of vital importance to the analysis of the potential structure, laws and formation of communities, and has broad application prospects. Since complex networks contain both network topology and node content, community detection that combines node content is becoming one of the new trends in this field. This paper introduces the research background and significance of community detection in complex networks. From three perspectives, methods based on network topology, on node content, and on the integration of the two, we comprehensively survey the research status of community detection and analyze the problems it faces. We select 10 representative algorithms from these three types of community detection methods, compare their performance in identifying communities, and analyze their time complexity, hoping to draw a clear outline of the new trends in community discovery.
Survey of Medical Data Visualization Based on EHR
YE Xianyi, CHAI Yanmei, GUO Fengying
Computer Science. 2023, 50 (11A): 221100265-11.  doi:10.11896/jsjkx.221100265
Abstract PDF(7070KB) ( 196 )   
With the development of medical information technology, the effective utilization of electronic health record (EHR) data is playing an increasingly important role in the field of assisted medical care. This paper reviews EHR-based data visualization methods and technologies of the past ten years. Firstly, a knowledge-map method is used to show the research hotspots and development trends of EHR data visualization over this period. Then the general process and four tasks of visualization technology are extracted from the literature, including comparative analysis, anomaly detection, pattern discovery and decision support. Next, representative studies are further summarized, classified and evaluated. Finally, 5 kinds of visualization models and 3 visual dimensions of EHR are summarized, and the applicable scenarios of various methods are discussed within the above research framework. It is found that EHR-based visualization technology can not only help doctors and nurses understand patients' status more intuitively in clinical diagnosis, but also help researchers analyze and mine the value of EHR data. It is also of positive significance for the development of Internet medicine and intelligent medicine. However, there are still problems in this field, such as the lack of an authoritative Chinese medical dictionary and knowledge base, the difficulty of processing massive time-varying EHR data, and the absence of a unified, quantitative evaluation of visualization methods.
Microservice Splitting Approach Based on Database Table
HUANG Zhicheng, LIU Xianhui
Computer Science. 2023, 50 (11A): 230200102-7.  doi:10.11896/jsjkx.230200102
Abstract PDF(1875KB) ( 157 )   
Microservice architecture and containerized cloud deployment are hot topics in current software engineering practice. Many research reports show that more and more software developers are migrating from monolithic architectures to microservice architectures. In the process of splitting a monolithic application into microservices, implementers face an important challenge: the lack of a clear method to split the application effectively and accurately. To solve this problem, a microservice splitting method based on database tables is proposed and a splitting tool is implemented. The method generates a table association matrix by collecting all the SQL statements in the project and combining them with the primary-key/foreign-key relationships between database tables. Based on this matrix, an initial set of microservices is divided. Then all transaction links are collected from the test cases, and the relationships between transaction links, tables and microservices are analyzed to compute an association matrix between independent tables and microservices. According to this matrix, the division of database tables among the final microservices is completed, and finally the microservice code is split according to the proposed rules. Experiments show that this method can help software developers split microservices effectively and accurately.
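The table association matrix step can be sketched as follows (the SQL log, table names and the simple co-occurrence weighting are illustrative assumptions, not the paper's exact construction):

```python
import re
from itertools import combinations

# Hypothetical SQL statements collected from a project.
sql_log = [
    "SELECT * FROM orders JOIN order_item ON orders.id = order_item.oid",
    "SELECT * FROM users WHERE users.id = 1",
    "UPDATE orders SET status = 2 WHERE id = 3",
]

def tables_in(stmt):
    # Crude extraction: identifiers following FROM/JOIN/UPDATE/INTO.
    return set(re.findall(r"(?:FROM|JOIN|UPDATE|INTO)\s+(\w+)", stmt, re.I))

assoc = {}   # (table_a, table_b) -> co-occurrence count
for stmt in sql_log:
    for a, b in combinations(sorted(tables_in(stmt)), 2):
        assoc[(a, b)] = assoc.get((a, b), 0) + 1
```

Tables that frequently co-occur in statements (or share key relationships) then become candidates for the same microservice.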
Node Ranking Algorithm Based on Subgraph Features
CHEN Duanbing, YANG Zhijie, ZENG Zhuo, FU Yan, ZHOU Junlin, ZHAO Junyan
Computer Science. 2023, 50 (11A): 230100122-7.  doi:10.11896/jsjkx.230100122
Abstract PDF(2519KB) ( 159 )   
Complex network theory has been widely applied in various fields, and node ranking is an important branch of complex network research. Node ranking and critical node mining are significant for analyzing and understanding the structure and function of complex networks. Many scholars have conducted in-depth research on critical node identification and ranking in complex networks, with great success. However, with the development of artificial intelligence and the rapid growth of data, the size of complex networks grows exponentially, and the accuracy and generalization of traditional algorithms can no longer meet real demands. A machine learning model for node ranking (subgraph feature extraction rank), based on subgraph features of the second-order neighborhood of each node, is proposed in this paper. A weighted adjacency matrix of the local subgraph is first established using second-order neighborhood information. Then, vector representations that effectively reflect the local features of nodes are extracted through matrix eigendecomposition. Finally, a machine learning model is trained on the correlation between a node's subgraph feature vector and its influence. Experimental results on nine real networks show that the proposed method has better performance and generalization than benchmark node ranking methods.
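The subgraph-feature idea can be sketched as follows (extracting the two-hop neighbourhood and using the leading eigenvalues of its adjacency submatrix as a fixed-length feature is an illustrative simplification of the paper's construction):

```python
import numpy as np

def subgraph_feature(adj, node, k=3):
    """Collect a node's two-hop neighbourhood, form the local adjacency
    submatrix, and return its k leading eigenvalues as a feature vector."""
    one_hop = set(np.nonzero(adj[node])[0])
    two_hop = set()
    for u in one_hop:
        two_hop |= set(np.nonzero(adj[u])[0])
    nodes = sorted({node} | one_hop | two_hop)
    sub = adj[np.ix_(nodes, nodes)]
    eig = np.sort(np.linalg.eigvalsh(sub))[::-1]  # symmetric -> real eigenvalues
    feat = np.zeros(k)
    feat[:min(k, len(eig))] = eig[:k]
    return feat

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
f = subgraph_feature(A, 0)
```

A regressor trained on such vectors against a ground-truth influence measure would then play the role of the paper's ranking model.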
Spatial-Temporal Attention Mechanism Based Anomaly Detection for Multivariate Time Series
LIANG Lifang, GUAN Donghai, ZHANG Ji, YUAN Weiwei
Computer Science. 2023, 50 (11A): 230300022-8.  doi:10.11896/jsjkx.230300022
Abstract PDF(2783KB) ( 160 )   
Internet of Things systems are widely used in a variety of infrastructure and involve many interconnected sensors that generate large amounts of multivariate time series data. Since IoT systems are vulnerable to network attacks, multivariate time series anomaly detection methods are used to monitor anomalies occurring in the system in time, which is crucial for securing it. However, due to the complex relationships in high-dimensional sensor data, most existing anomaly detection methods have difficulty learning the correlations of multivariate time series explicitly, resulting in low detection accuracy. Therefore, a multivariate time series anomaly detection method (STA) based on a spatial and temporal attention mechanism is proposed. STA first learns the relationships between sensors in the form of a graph structure, then uses a multi-hop graph attention network to assign different attention weights to the multi-hop neighbor nodes of each sensor node in the graph, capturing the spatial correlation of the sequences. Secondly, STA uses a long short-term memory network with a temporal attention mechanism to adaptively select the corresponding time steps and learn the temporal correlation of the sequences. Experimental results on four real-world sensor datasets show that STA detects anomalies in time series more accurately than the baseline approaches, with its F1 score outperforming the best baseline by 31.03%, 14.29%, 15.91% and 21.74%, respectively. In addition, ablation experiments and sensitivity analysis validate the effectiveness of the model's key components. In general, STA can effectively capture the spatial and temporal correlations in multivariate time series and improve anomaly detection performance.
Soil Moisture Data Reconstruction Based on Low Rank Matrix Completion Method
Computer Science. 2023, 50 (11A): 230300073-6.  doi:10.11896/jsjkx.230300073
Abstract PDF(3995KB) ( 135 )   
Soil moisture plays an important role in meteorology, climatology and other disciplines. However, current observational soil moisture data lacks high precision and high spatial resolution, which greatly limits its applicability. Matrix completion (MC) is the application of compressed sensing to matrices: given large-scale data that is partially missing, contaminated or damaged, it aims to recover all entries of a low-rank incomplete matrix by exploiting the correlation between its elements. It is well suited to data with high spatial and temporal correlation but many missing values, such as soil moisture. However, MC requires the matrix to be low-rank or approximately low-rank, while the rank of soil moisture data is unstable. Therefore, we pre-set the rank of the matrix and introduce principal component analysis (PCA) to reduce the matrix dimension while retaining most of the information, and on this basis perform matrix completion on soil moisture data with missing values. The experiment uses 2022 ERA-Interim satellite soil moisture data for selected areas, and the results show that, compared with traditional MC algorithms, low-rank matrix completion with principal component analysis (PCA-MC) reduces the experimental error by 28.6%, the root mean square error by 5.78% and the maximum error by 14.8%, while also shortening the reconstruction time. This indicates that, compared with plain MC, the PCA-MC method can effectively reconstruct large-scale matrices with missing values.
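The low-rank completion idea can be sketched with a simple iterate-and-project scheme on synthetic data (the preset rank, iteration count and hard-impute-style SVD projection are illustrative; the paper's PCA-MC procedure and the ERA-Interim data are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic rank-3 "soil moisture" field with ~30% of entries missing.
U, V = rng.normal(size=(50, 3)), rng.normal(size=(3, 40))
M = U @ V
mask = rng.random(M.shape) > 0.3          # True = observed

def low_rank_complete(M, mask, rank=3, iters=50):
    """Project the current estimate onto its top-`rank` singular
    components, then restore the observed entries, and repeat."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        u, s, vt = np.linalg.svd(X, full_matrices=False)
        X = (u[:, :rank] * s[:rank]) @ vt[:rank]   # rank-r projection
        X[mask] = M[mask]                          # keep observations
    return X

X = low_rank_complete(M, mask)
err = np.abs(X - M)[~mask].mean()          # error on the missing entries
```

With an exactly low-rank matrix and a correctly preset rank, the missing entries are recovered to a small fraction of the signal magnitude; the paper's contribution is handling the unstable rank of real soil moisture data via PCA before this step.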
Knowledge Graph Recommendation Algorithm Combined with Graph Attention Mechanism
ZHANG Xiaowan, DENG Qiujun, LIU Xianhui
Computer Science. 2023, 50 (11A): 230100057-7.  doi:10.11896/jsjkx.230100057
Abstract PDF(2146KB) ( 142 )   
Traditional recommendation algorithms suffer from data sparsity and cold start, and treat each item as an independent individual, ignoring the relationships between items. To solve these problems, recommender systems have started to introduce auxiliary information. However, existing path-based and embedding-based knowledge graph recommendation algorithms do not consider the importance of different entities to users, so entities of lower importance can have an outsized impact on the recommendation results. To address this limitation, this paper proposes a knowledge graph recommendation model combined with a graph attention mechanism. It first uses a graph embedding method to generate initial representations of users and items, then employs an attention mechanism to distinguish the importance of different neighbor entities during representation propagation and aggregates them by weighted summation. The prediction layer generates the final representations of users and items and predicts the probability of user-item interaction from them. Compared with other algorithms on two public datasets, Amazon-book and Last-fm, experimental results show that the model improves on recall, NDCG, precision and HR, indicating that it can effectively improve recommendation accuracy.
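The attention step in representation propagation can be sketched as follows (dot-product scoring and the toy 2-dimensional embeddings are illustrative assumptions, not the paper's scoring function):

```python
import numpy as np

def attentive_aggregate(user, neighbors):
    """Weight each neighbour entity by softmax(user . neighbour) and
    return the weights and the weighted sum of neighbour embeddings."""
    scores = neighbors @ user
    w = np.exp(scores - scores.max())     # numerically stable softmax
    w /= w.sum()
    return w, w @ neighbors

user = np.array([1.0, 0.0])
neighbors = np.array([[1.0, 0.0],        # entity aligned with the user
                      [0.0, 1.0]])       # orthogonal (less relevant) entity
w, agg = attentive_aggregate(user, neighbors)
```

The aligned entity receives the larger weight, which is exactly the behaviour the paper relies on to keep low-importance entities from dominating the recommendation.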
Global Feature Enhanced for Session-based Recommendation
JIN Bowen, WANG Qingmei, HU Chengzuo, WEI Jiacheng
Computer Science. 2023, 50 (11A): 220800205-8.  doi:10.11896/jsjkx.220800205
Abstract PDF(2128KB) ( 124 )   
Most session-based recommendation research models the current session user's preferences by aggregating the K-hop neighborhoods of nodes with shallow neural networks. However, such methods face the problem of over-smoothing. This paper proposes global feature enhanced session-based recommendation (GFE-SR). Firstly, the method uses a graph neural network and an attention mechanism to obtain session-level item representations. Then, in the feature propagation stage on the global graph, the nearest neighbors of each node are proportionally weighted to limit over-smoothing and obtain feature-enhanced global-level item representations. The two item representations are aggregated through an attention mechanism to model the current session user's preferences, and the final output is the probability of each candidate item. Experiments on three public datasets show that this method outperforms state-of-the-art methods such as GCE-GNN, with a maximum improvement of up to 5.2%, which proves its effectiveness.
Recommendation Method Based on Knowledge Graph Residual Attention Networks
FAN Hongyu, ZHANG Yongku, MENG Xiangfu
Computer Science. 2023, 50 (11A): 220900180-7.  doi:10.11896/jsjkx.220900180
Abstract PDF(2241KB) ( 131 )   
With the rapid development of the Internet, recommender systems have become an important means of alleviating information overload. Current recommendation methods mainly use deep learning models to mine users' interest in items. However, current methods based on graph neural networks cannot effectively represent the interaction behaviors between users and items, and increasing the number of network layers causes the gradient vanishing problem. Therefore, this paper proposes a model that combines the GC-OTE knowledge graph embedding approach with residual networks and attention mechanisms. First, the interaction information of users and items is represented by embedding the neighbor attributes of nodes; then user-item interactions are analyzed by graph neural and residual networks; finally, attention mechanisms are used to distinguish different neighborhoods. Experiments on two real-world datasets, Alibaba-fashion and Last-FM, demonstrate that the proposed method can significantly improve recommendation performance.
Next-basket Recommendation Algorithm Based on Correlation Between Items Collaborative Filtering
JIANG Binze, DENG Xin, DU Yulu, ZHANG Heng
Computer Science. 2023, 50 (11A): 221000076-6.  doi:10.11896/jsjkx.221000076
Abstract PDF(2315KB) ( 125 )   
Next-basket recommendation aims to recommend the items likely to appear in a user's next basket, based on the sequence of the user's historical baskets. However, existing methods treat each item in the basket as an independent part of the recommendation and ignore the relationships between items in the basket, which hurts recommendation accuracy. To solve this problem, a next-basket recommendation algorithm based on correlation-between-items collaborative filtering (CBICF) is proposed. Firstly, the user's historical basket sequence is modeled to generate personalized item frequency information, which is used for nearest-neighbor clustering of users. Then, an item correlation matrix is generated by an item-correlation measurement method, and information about items associated with the user's preferred items is obtained by weighted fusion to improve recommendation accuracy. Experimental comparison and analysis on two real datasets reveal that the proposed algorithm is superior to the comparison algorithms on the evaluated metrics. Especially when exploring new items, its recommendation accuracy improves significantly over other collaborative filtering based methods.
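A minimal sketch of an item correlation matrix built from basket co-occurrence (the toy baskets and the specific co-occurrence ratio are illustrative stand-ins for the paper's correlation measure):

```python
from itertools import combinations
from collections import Counter

# Hypothetical basket history for one or more users.
baskets = [["milk", "bread"], ["milk", "bread", "eggs"], ["eggs", "jam"]]

pair_count = Counter()
item_count = Counter()
for b in baskets:
    item_count.update(b)
    pair_count.update(frozenset(p) for p in combinations(sorted(b), 2))

def correlation(a, b):
    # Symmetric co-occurrence ratio in [0, 1]: how often the pair
    # appears together, relative to the rarer item's frequency.
    return pair_count[frozenset((a, b))] / min(item_count[a], item_count[b])

c = correlation("milk", "bread")
```

Scores of candidate items can then be boosted by their correlation with items the user already prefers, which is the weighted-fusion step the abstract describes.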
Graph Neural Network Recommendation Algorithm Based on Item Relations
LIAO Dong, YU Haizheng
Computer Science. 2023, 50 (11A): 230100019-9.  doi:10.11896/jsjkx.230100019
Abstract PDF(3750KB) ( 145 )   
Typical social recommendation methods are limited to modeling user behavior, such as social behavior between users and interaction behavior between users and items, while the potential correlations between the items a user is interested in are ignored, leading to information loss. In recommendation scenarios with sparse data, the sparsity of user behavior leaves the system with insufficient information, so it is necessary to introduce item relationships, with their rich connotations, as auxiliary information. This paper aims to integrate user behavior and auxiliary information to jointly model user preferences and thereby improve recommendation accuracy. Most of the data in a recommender system can be expressed as graph structures: users' social behavior, users' interaction behavior and item relationships can be converted into a user-user graph, a user-item graph and an item-item graph, respectively. Graph neural networks (GNN) are effective at processing large-scale graph data, but building an item-relation-based GNN framework for social recommendation faces considerable challenges: 1) the item-item relationship is implicit; 2) the user-user graph, user-item graph and item-item graph are three different types of graphs; 3) the relationships between user and user, user and item, and item and item are heterogeneous. To solve these problems, this paper proposes a new social recommendation method based on graph neural networks, PEVGraphRec, which introduces a mathematical way to explicitly construct connections between items. The model inherently combines the three different graphs to better learn user preferences. Finally, an attention mechanism is proposed to comprehensively weight the different sources of information. Comprehensive experiments on three real-world datasets verify the effectiveness of the proposed framework.
Dynamic Negative Sampling for Graph Convolution Network Based Collaborative Filtering Recommendation Model
MA Handa, FANG Yuqing
Computer Science. 2023, 50 (11A): 230200149-7.  doi:10.11896/jsjkx.230200149
Abstract PDF(2947KB) ( 188 )   
Negative sampling has a great impact on the accuracy of collaborative filtering algorithms. To address the lack of exploration of negative sampling strategies in existing graph convolutional networks, a dynamic negative sampling-based graph convolution collaborative filtering recommendation model (DGCCF) is proposed. Firstly, to adapt more flexibly to the needs of different graph data, a normalization parameter is introduced into the graph convolutional network to adjust the influence of the neighborhood. Secondly, a dynamic negative sampling strategy is proposed: a set of negative samples is selected from the item nodes the user has not interacted with, their scores are obtained after graph convolution, and the negative sample with the highest score is selected as the hard negative sample. Finally, the obtained hard negative samples and positive samples are used as the sample set and fed into the Bayesian personalized ranking function to optimize the model. Comparison experiments with baseline models on three public datasets, Gowalla, Yelp2018 and Amazon-Book, show that DGCCF is superior to existing baselines on multiple evaluation metrics; for example, compared with the best baseline, its recall increases by 0.3%, 9.4% and 10.6% respectively on the three datasets.
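The dynamic hard-negative selection can be sketched as follows (dot-product scoring over toy embeddings stands in for the scores produced after graph convolution; candidate-set size and embeddings are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def hard_negative(user_emb, item_embs, interacted, n_cand=3):
    """Draw a candidate set from items the user has NOT interacted with,
    score the candidates against the user embedding, and keep the
    highest-scoring one as the hard negative sample."""
    pool = np.setdiff1d(np.arange(len(item_embs)), interacted)
    cand = rng.choice(pool, size=min(n_cand, len(pool)), replace=False)
    scores = item_embs[cand] @ user_emb
    return cand[np.argmax(scores)]

user_emb = np.array([1.0, 0.0])
item_embs = np.array([[0.9, 0.1],   # item 0: interacted (positive)
                      [0.8, 0.2],   # item 1: similar -> hard negative
                      [0.1, 0.9],   # item 2: easy negative
                      [0.2, 0.8]])  # item 3: easy negative
neg = hard_negative(user_emb, item_embs, interacted=[0])
```

The (positive, hard negative) pair is then what gets fed to the Bayesian personalized ranking loss.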
Personalized Learning Path Recommendation and Verification Method Based on Similar Learners Determination
FENG Shu, ZHU Yi, SONG Mei, JU Chengcheng
Computer Science. 2023, 50 (11A): 220900067-10.  doi:10.11896/jsjkx.220900067
Abstract PDF(3419KB) ( 126 )   
Similarity-based learner determination is widely used in personalized recommendation due to its lightweight nature. At present, machine learning methods such as collaborative filtering are generally used, but such methods cannot guarantee the interpretability of the determination process or the reliability of the determination results. To solve this problem, a personalized learning path recommendation and verification method based on similar-learner determination is proposed, which uses process bisimulation to study how similar learners are determined. Firstly, the behavioral features of the Calculus of Communicating Systems (CCS) are extended, and the learning resources Calculus of Communicating Systems (LR-CCS) is used to model learners' learning behavior sequences. Secondly, the bisimulation theory of process algebra is used to determine the similarity of learners' learning behavior sequences, and algorithms for determining strong (weak) bisimulation relations between learning behavior sequences are proposed. Thirdly, the bisimulation verification tool Mobility Workbench (MWB) is used to verify the similarity of learners' learning behavior sequences, and candidate recommended paths that satisfy the bisimulation relation are obtained to ensure the correctness of the judgment result. Finally, a case study of a recommender system based on similar learners verifies the effectiveness of the method.
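The strong-bisimulation determination can be sketched as a fixpoint computation over labelled transition systems (the learner traces below are toy examples; LR-CCS terms and the MWB tool itself are not modelled here):

```python
# Minimal strong-bisimilarity check: repeatedly discard pairs (a, b)
# where one state has a labelled transition the other cannot match
# into a still-related pair, until the relation stabilises.
def bisimilar(states, trans, p, q):
    """trans: dict state -> set of (label, successor) transitions."""
    rel = {(a, b) for a in states for b in states}
    changed = True
    while changed:
        changed = False
        for (a, b) in list(rel):
            ok = all(any(l2 == l and (s2, t2) in rel
                         for (l2, t2) in trans.get(b, set()))
                     for (l, s2) in trans.get(a, set()))
            ok = ok and all(any(l2 == l and (s2, t2) in rel
                                for (l2, s2) in trans.get(a, set()))
                            for (l, t2) in trans.get(b, set()))
            if not ok:
                rel.discard((a, b))
                changed = True
    return (p, q) in rel

# Learner A watches a video (v) then takes a quiz (q); learner B quizzes first.
trans = {"A": {("v", "A1")}, "A1": {("q", "A2")},
         "B": {("q", "B1")}, "B1": {("v", "B2")}}
same = bisimilar(["A", "A1", "A2", "B", "B1", "B2"], trans, "A", "B")
```

Here the two behavior sequences are not bisimilar because their first observable actions differ, which is the kind of verdict the paper validates with MWB.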
Time-effective Nearest Neighbor Trusted Selection Strategy Based Collaborative Filtering Recommendation Method
HAN Zhigeng, FAN Yuanzhe, CHEN Geng, ZHOU Ting
Computer Science. 2023, 50 (11A): 220800199-11.  doi:10.11896/jsjkx.220800199
Abstract PDF(2950KB) ( 113 )   
Traditional collaborative filtering (CF) recommendation is usually based on the assumption that the data is static, and when the data is sparse it usually yields low recommendation accuracy. With this in mind, some studies add supplementary information, such as changes in user interest and the trustworthiness of recommendation ability, to their strategies. However, most of them lack consideration of abnormal situations that mislead or interfere with recommendation, such as malicious changes in user interests and fluctuations in recommendation ability, and it is difficult for them to ensure the attack resistance, stability and reliability of the recommender system. By introducing a time-effective interest similarity and a re-evaluation of the recommendation trust degree, this paper proposes a time-effective nearest neighbor trusted selection strategy based collaborative filtering recommendation method. It takes into account two key factors that affect the quality of a target user's neighbor filtering, namely abnormal changes of user interest and fluctuations of user recommendation ability, and constructs a recommendation process with three strategies: time-effective nearest neighbor selection, trusted nearest neighbor selection, and rating prediction. On the MovieLens and Amazon video game datasets, using metrics such as mean absolute error (MAE), average prediction shift (APS), and the precision, recall and F1 of attacker identification, the performance of the proposed strategy is compared with six baselines. The results show that our strategy outperforms the baselines in recommendation accuracy, attack resistance and attacker identification.
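A time-effective similarity can be sketched as a rating similarity whose per-item contribution decays with rating age (the exponential decay form, half-life and 1–5 rating scale are illustrative assumptions, not the paper's formula):

```python
import math

def time_effective_similarity(ratings_u, ratings_v, now, half_life=30.0):
    """Similarity over common items, each weighted by exponential decay
    of the rating age (half_life in days). ratings: item -> (rating, day)."""
    common = ratings_u.keys() & ratings_v.keys()
    if not common:
        return 0.0
    num = den = 0.0
    for item in common:
        r_u, t_u = ratings_u[item]
        r_v, t_v = ratings_v[item]
        w = math.exp(-math.log(2) * (now - min(t_u, t_v)) / half_life)
        num += w * (1 - abs(r_u - r_v) / 4)   # agreement on a 1..5 scale
        den += w
    return num / den

u = {"m1": (5, 90), "m2": (4, 10)}   # item: (rating, day rated)
v = {"m1": (5, 95), "m2": (1, 12)}
sim = time_effective_similarity(u, v, now=100)
```

The recent agreement on "m1" dominates the old disagreement on "m2", so a neighbor whose current interests match the target user scores high even if their histories diverge.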
Collaborative Recommendation Based on Curriculum Learning and Graph Embedding
HUANG Feihu, SHUAI Jianbo, PENG Jian
Computer Science. 2023, 50 (11A): 221100030-8.  doi:10.11896/jsjkx.221100030
Abstract PDF(3025KB) ( 126 )   
Recommender systems mainly provide personalized services based on user information. However, users are widely concerned about data privacy, which poses new challenges to current recommendation algorithms. Existing works mainly address this problem from the perspectives of differential privacy, anonymization, cryptography and federated learning, whose main shortcomings are data disturbance and computational complexity. Different from existing work, this paper proposes a collaborative filtering model based on curriculum learning and graph neural networks (CLG-CF), which makes full use of rating information to learn user and item embeddings in implicit feedback scenarios. CLG-CF uses a bipartite graph to model the rating table, realizes the representation learning of users and items with graph convolutional networks, and finally completes the prediction of (user, item) pairs through a multi-layer neural network. During training of the CLG-CF model, negative sampling is used to augment the training samples; to address the authenticity of these samples, curriculum learning is innovatively introduced to guide model learning. Extensive experiments are conducted on three real large-scale datasets, and the results show that the CLG-CF model can achieve good recommendation results without using user and item information.
Algebraic Properties of Rough Set Model Based on Neighborhood System
LIU Yinshan, WANG Hao, QIN Keyun
Computer Science. 2023, 50 (11A): 220800051-6.  doi:10.11896/jsjkx.220800051
Abstract PDF(1694KB) ( 127 )   
The rough set model based on neighborhood systems is an extension of both the generalized rough set model based on general binary relations and the covering rough set model. In general, different neighborhood systems may generate the same approximation operator. This paper gives the conditions under which different neighborhood systems generate the same approximation operator, and then proposes a method to classify neighborhood systems based on their approximation operators. In addition, an axiomatic characterization of the rough approximation operators based on neighborhood systems is given.
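For reference, the lower and upper approximations induced by a neighborhood system are commonly defined as follows (this is a standard textbook form; the paper's own operators may differ in detail). For a neighborhood system $NS(x)$ assigning each point $x \in U$ a family of neighborhoods, and $A \subseteq U$:

```latex
\underline{apr}(A) = \{\, x \in U \mid \exists N \in NS(x),\; N \subseteq A \,\}, \qquad
\overline{apr}(A) = \{\, x \in U \mid \forall N \in NS(x),\; N \cap A \neq \varnothing \,\}.
```

Different neighborhood systems can induce the same pair $(\underline{apr}, \overline{apr})$, which is exactly the equivalence the paper uses to classify them.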
Inter-cluster Optimization for Cluster Federated Learning
LI Renjie, YAN Qiao
Computer Science. 2023, 50 (11A): 221000243-5.  doi:10.11896/jsjkx.221000243
Abstract PDF(2844KB) ( 157 )   
Clustered federated learning is often used to solve the accuracy degradation caused by data heterogeneity in federated learning. The idea is to group clients with similar data distributions into the same cluster using clustering algorithms, and then train a cluster model specialized for that distribution. In practice, however, it is challenging to achieve ideal results, because the training and test datasets used by a local client may not match the data distribution of its cluster model, leading to a significant drop in inter-cluster accuracy. To improve the accuracy of the cluster model in clustered federated learning, this paper proposes two solutions. The first is adaptive weighted clustered federated learning (AWCFL), which incorporates models from other clusters during intra-cluster aggregation, enabling the cluster model to learn from other distributions and effectively improving inter-cluster accuracy. The second is multi-distribution clustered federated learning (MCFL), which synchronizes the cluster models with each client, allowing clients to choose the appropriate model to use. Compared with the first solution, intra-cluster accuracy remains unaffected in MCFL, while inter-cluster accuracy is significantly improved. The proposed solutions are evaluated on the MNIST and EMNIST datasets; compared with IFCA, clustered federated learning (CFL) and FedAvg, inter-cluster accuracy is significantly improved.
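The AWCFL-style aggregation can be sketched on flattened weight vectors (the convex mixing rule and the value of alpha are illustrative assumptions, not the paper's weighting scheme):

```python
import numpy as np

def aggregate_with_other_clusters(own, others, alpha=0.8):
    """New cluster model = convex mix of this cluster's own aggregate
    and the mean of the other clusters' models, letting the cluster
    learn a little from other data distributions."""
    others_mean = np.mean(others, axis=0)
    return alpha * own + (1 - alpha) * others_mean

own = np.array([1.0, 1.0])                         # this cluster's average
others = [np.array([3.0, 0.0]), np.array([1.0, 2.0])]  # other clusters
mixed = aggregate_with_other_clusters(own, others)
```

With alpha close to 1 the cluster keeps its specialization (intra-cluster accuracy) while the small cross-cluster term improves inter-cluster accuracy.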
δ-sober Spaces and Their Properties
Computer Science. 2023, 50 (11A): 220900008-4.  doi:10.11896/jsjkx.220900008
Abstract PDF(1688KB) ( 118 )   
This paper discusses some basic properties of δ-sober spaces, introduces the concept of s2-weakly convergent spaces, and discusses the relationship between the two. The main conclusions are as follows: 1) subspaces of δ-sober spaces are δ-sober spaces; 2) if (X,τ) is an IDC space, then it is an s2-weakly convergent space if and only if it is a δ-sober space; 3) the topology on an s2-weakly convergent IDC space coincides with the σ2-topology, and O(X)=Oσ2(X)=OSI2(X); 4) if (X,τ) is an SI2-quasicontinuous space, then it is a δ-sober space; 5) if (X,τ) is a locally hypercompact δ-sober space, then it is an s2-quasicontinuous poset.
Orthogonal Locality Preserving Projection Unsupervised Feature Selection Based on Graph Embedding
ZHU Jianyong, LI Zhaoxiang, XU Bin, YANG Hui, NIE Feiping
Computer Science. 2023, 50 (11A): 220900003-9.  doi:10.11896/jsjkx.220900003
Abstract PDF(4238KB) ( 131 )   
Traditional unsupervised feature selection algorithms based on graph learning often adopt sparse regularization. However, this approach relies too heavily on the efficiency of graph learning, and the regularization parameters are not easy to tune. To solve this problem, an unsupervised feature selection algorithm based on graph embedding learning with orthogonal locality preserving projection is proposed. Firstly, the locality preserving projection method is used to enhance the linear mapping's ability to maintain the local geometric manifold structure of the data, and the orthogonal projection makes data reconstruction convenient. Moreover, a graph embedding learning method is used to quickly learn the similarity matrix of the data. Then, an $\ell_{2,0}$-norm constrained projection matrix is used to select discriminative features. Finally, since optimization under the $\ell_{2,0}$-norm constraint is NP-hard, a new parameter-free algorithm is used to solve the model iteratively and efficiently. Experimental results demonstrate the effectiveness and superiority of the proposed algorithm.
CMIHC Algorithm for Bayesian Network Structure Learning
LI Xiaoqing, YU Haizheng
Computer Science. 2023, 50 (11A): 220800046-7.  doi:10.11896/jsjkx.220800046
Abstract PDF(2430KB) ( 125 )   
Bayesian networks originate from research on uncertain problems in the field of artificial intelligence and are an important tool for reasoning and data analysis under uncertainty. Since the birth of Bayesian network structure learning, many mature structure learning algorithms have emerged, including methods based on dependency analysis, score-based search methods and hybrid search methods. Among them, structure pruning via information theory has become a common technique, but there is no unified standard for selecting the condition set in conditional mutual information, resulting in inconsistent pruning of the network structure. The hill-climbing algorithm uses three search operators to update the network structure locally and obtains the optimal structure through a scoring function. Combining ideas from information theory with the hill-climbing algorithm, a new structure learning algorithm, conditional mutual information hill climbing (CMIHC), is proposed. The algorithm prunes the initial connected graph using mutual information and the constructed condition sets, and orients it to obtain an initial network structure. Combined with the scoring function and the greedy search strategy of hill climbing, the optimal network structure is obtained. Experimental analysis shows that the CMIHC algorithm is superior to the comparison algorithms in accuracy and efficiency.
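The conditional mutual information test that drives the pruning can be sketched from empirical counts (the toy samples are illustrative; the paper's contribution is how the condition set Z is constructed, which is not reproduced here):

```python
from collections import Counter
from math import log2

def cmi(samples):
    """Empirical I(X;Y|Z) from (x, y, z) samples:
    sum over cells of p(x,y,z) * log2(p(x,y,z) p(z) / (p(x,z) p(y,z)))."""
    n = len(samples)
    pxyz = Counter(samples)
    pxz = Counter((x, z) for x, _, z in samples)
    pyz = Counter((y, z) for _, y, z in samples)
    pz = Counter(z for _, _, z in samples)
    return sum(c / n * log2(c * pz[z] / (pxz[(x, z)] * pyz[(y, z)]))
               for (x, y, z), c in pxyz.items())

# X == Y always: dependence remains after conditioning on a constant Z.
dep = cmi([(0, 0, 0), (1, 1, 0), (0, 0, 0), (1, 1, 0)])
# X and Y independent given Z: CMI is zero.
ind = cmi([(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 0)])
```

Edges whose CMI falls below a threshold given the chosen condition set are the ones pruned from the initial connected graph.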
Track Segment Association Based on Deep Temporal Contrasting
HOU Hailun, LEI Yi, WEI Bo, FAN Yuqi
Computer Science. 2023, 50 (11A): 220900164-9.  doi:10.11896/jsjkx.220900164
Abstract PDF(2861KB) ( 120 )   
References | Related Articles | Metrics
The radar’s tracking of a flying target is often interrupted, which seriously affects the perception of the airfield situation. Deep learning has powerful learning capabilities and has gradually been used to solve the problem of interrupted track association. However, existing deep learning-based interrupted track association methods fail to fully consider the similarity between the old and new track features, so the association performance needs to be improved. Therefore, this paper proposes a track segment association algorithm based on deep temporal contrasting (TSADTC), which includes a track feature extraction module, a time comparison module, a track feature comparison module and a classifier module. The track feature extraction module uses bidirectional LSTM (Bi-LSTM) and an encoder-decoder to extract the features of the new and old tracks, respectively. In the time comparison module, the features of one track are used to predict the other track, so that the features of the two tracks of the same target have high similarity. The track feature comparison module calculates the feature difference between the two tracks, which is fed into the classifier to decide the association probability of the two tracks. The track pair with the largest association probability is set as the associated tracks. Experimental results show that the proposed TSADTC algorithm can effectively improve the correct association rate, false association rate and missing track association rate of interrupted track association.
Mining and Application of Frequent Patterns with Counting Quantifiers
SHA Yuji, WANG Xin, HE Yanxiao, ZHONG Xueyan, FANG Yu
Computer Science. 2023, 50 (11A): 230100041-12.  doi:10.11896/jsjkx.230100041
Abstract PDF(4235KB) ( 146 )   
References | Related Articles | Metrics
Frequent pattern mining (FPM) is a classical problem in graph theory, and much attention has been paid to FPM on single large graphs, which is defined as discovering, in a single large graph G, all pattern graphs Q with occurrence frequencies above a user-defined threshold. In recent years, FPM has seen wide application in areas such as social network analysis and fraud detection. However, emerging applications keep calling for more expressive pattern graphs, along with mining techniques for them, to capture more complex structures in a large graph. In light of this, we incorporate counting quantifiers into pattern graphs and introduce quantified pattern graphs (QGPs), which are able to express richer semantics. We then develop a distributed algorithm to mine QGPs in parallel. Furthermore, we introduce quantified graph pattern association rules (QGPARs) for link prediction on large graphs. We conduct experimental studies to validate the computational efficiency of the QGP mining algorithm using real-world and synthetic graph data. By comparing with prior link prediction methods, we find that prediction with QGPARs achieves even higher accuracy. Finally, by comparing with the link prediction results of traditional graph pattern association rules (GPARs), we verify that there is a significant difference between QGPARs and GPARs in terms of link prediction results, and further verify the effectiveness of QGPARs in link prediction.
Novelty Detection Method Based on Knowledge Distillation and Efficient Channel Attention
ZHOU Shijin, XING Hongjie
Computer Science. 2023, 50 (11A): 220900034-10.  doi:10.11896/jsjkx.220900034
Abstract PDF(4277KB) ( 217 )   
References | Related Articles | Metrics
Knowledge distillation based novelty detection methods usually utilize a pre-trained network as the teacher network, and a network with the same model structure and size as the teacher is used as the student network. For test data, the difference between the teacher network and the student network is used to discriminate the data as normal or novel. However, because the teacher network and the student network have the same structure and size, on the one hand, the method may produce only a small difference on novel data; on the other hand, because the pre-training data set of the teacher network is much larger in scale than the training set of the student network, the student network may obtain lots of redundant information. To solve this problem, the efficient channel attention (ECA) module is introduced into the knowledge distillation based novelty detection method. Utilizing the cross-channel interaction strategy, a student network with a simpler structure and smaller size than the teacher network is designed. Hence, the features of normal data can be efficiently obtained, redundant information can be removed, the difference between the teacher network and the student network can be enlarged, and the novelty detection performance may thus be improved. In comparison with 5 related methods, experimental results on 6 image data sets demonstrate that the proposed method obtains better detection performance.
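The ECA module itself is well documented in the literature: a per-channel global average pool, a size-k 1-D convolution across neighbouring channels (the cross-channel interaction), and a sigmoid gate that reweights the feature map. A dependency-light numpy sketch, with a fixed toy kernel standing in for the learned convolution weights:

```python
import numpy as np

def eca_attention(feat, k=3):
    """Efficient Channel Attention sketch for a (C, H, W) feature map:
    pool each channel to a scalar, run a 1-D conv of size k across
    channels, squash with a sigmoid, and rescale the channels.
    The conv kernel here is a fixed toy average, not learned."""
    C = feat.shape[0]
    y = feat.mean(axis=(1, 2))                 # (C,) channel descriptor
    w = np.full(k, 1.0 / k)                    # toy shared conv kernel
    pad = k // 2
    yp = np.pad(y, pad, mode="edge")
    conv = np.array([np.dot(yp[i:i + k], w) for i in range(C)])
    gate = 1.0 / (1.0 + np.exp(-conv))         # sigmoid gate in (0, 1)
    return feat * gate[:, None, None]          # reweighted channels

x = np.random.default_rng(1).random((4, 8, 8))
out = eca_attention(x)
print(out.shape)   # (4, 8, 8): same shape, channel-wise rescaled
```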
Diagnosis Prediction Based on Graph Convolutional Network and Attention Mechanism
YANG Xianming, ZHAN Xianchun, CHEN Hengliang, DING Haiyan
Computer Science. 2023, 50 (11A): 221100232-6.  doi:10.11896/jsjkx.221100232
Abstract PDF(2297KB) ( 112 )   
References | Related Articles | Metrics
Diagnosis prediction is an important prediction task in healthcare, which aims to predict the future diagnoses of patients based on their historical health records. Predictive models based on attention mechanisms and recurrent neural networks are widely used for this task, but they are easily affected by insufficient data. In addition, medical domain knowledge plays an important role in improving diagnosis prediction performance, but existing methods still cannot make full use of this knowledge. Therefore, a diagnosis prediction model based on graph convolutional networks and an attention mechanism is designed. Firstly, a medical ontology is used to model the correlation between medical concepts, and the patient visit information is modeled as a graph. Secondly, a graph convolution module is used to obtain the spatial features between the medical codes in each visit of the patient. Finally, a multi-head attention mechanism is used to model the interrelationship between visit features and multi-level medical knowledge to predict the future health status of patients. Experimental results on two publicly available medical datasets show that the diagnosis prediction performance of the model is better than that of existing diagnosis prediction models, and the potential information in the medical knowledge graph can be used more effectively.
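The multi-head attention step reduces to scaled dot-product attention between visit features (queries) and knowledge vectors (keys/values). A single-head numpy sketch with invented toy inputs:

```python
import numpy as np

def scaled_dot_attention(Q, K, V):
    """Scaled dot-product attention, the core of the multi-head
    mechanism: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V, weights

Q = np.array([[1.0, 0.0]])   # one query, e.g. a visit embedding
K = V = np.eye(2)            # two toy knowledge vectors
out, w = scaled_dot_attention(Q, K, V)
print(w.round(3))            # the query attends more to the first key
```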
Stock Market Trend Reasoning Algorithm Based on Game Dynamic Influence Diagram
YAO Hongliang, YIN Zhiyuan, YANG Jing, YU Kui
Computer Science. 2023, 50 (11A): 221100039-7.  doi:10.11896/jsjkx.221100039
Abstract PDF(2469KB) ( 132 )   
References | Related Articles | Metrics
The stock market is a complex nonlinear dynamic system with high uncertainty and variability, and stock market trend prediction is a research hotspot in the field of data mining. Aiming at the problems that purely data-driven models have poor robustness and that well-trained models do not meet actual needs, a multi-agent game dynamic influence diagram (MAGDID) model is proposed. First of all, from the perspective of the game, the long side and the short side are introduced as the behavioral agents of the stock market, and the relevant characteristics of these agents are extracted. Next, the power of the game agents is represented by energy, and the characteristics of the agents are quantified and integrated. Then, game strategies are introduced to build the multi-agent game dynamic influence diagram model and to model the game process of the stock market actors. Finally, the automatic reasoning technology of the junction tree is used to predict the stock market trend. Experiments are carried out on actual data, and the results show that the long-short game trend prediction algorithm has good performance.
Disease Diagnosis Based on Projection Correlation and Random Forest Fusion Model
HAN Yimei, LI Dongxi
Computer Science. 2023, 50 (11A): 230200172-6.  doi:10.11896/jsjkx.230200172
Abstract PDF(2333KB) ( 113 )   
References | Related Articles | Metrics
Processing methods for high-dimensional data have become one of the hot issues in the study of big data. In this paper, a two-stage random forest algorithm based on projection correlation is proposed, which integrates projection correlation, used to measure the dependence between random variables, with the random forest algorithm, and shows better prediction performance. Three kinds of gene data are used for experimental analysis. In the experiments on the Leukemia and Colon datasets, the accuracy of the proposed model improves by 2.4%~6.5% compared with existing algorithms. In the experiment on the Breast data set, the accuracy of the proposed algorithm increases by 3.55%~9.26% compared with the traditional random forest model, and it also performs stably and well on various evaluation indexes for high-dimensional data of different scales. The application of the model in the field of disease diagnosis based on microarray data will provide more scientific and effective decision support for disease prevention, diagnosis and treatment.
Improved Feature Interaction Algorithm Based on Meta-learning
BAI Jing, GENG Xinyu, YI Liu, MU Yukun, CHEN Qin, SONG Jie
Computer Science. 2023, 50 (11A): 230100087-8.  doi:10.11896/jsjkx.230100087
Abstract PDF(3403KB) ( 131 )   
References | Related Articles | Metrics
Feature interaction is crucial in the field of advertising click-through rate (CTR) prediction in recommender systems. However, current industry practices for feature interaction often rely on matrix transformations such as inner and outer products, which do not introduce additional information and can only serve as a means of measuring the similarity between two vectors. Therefore, such methods may not reliably represent feature interaction and may not effectively improve CTR prediction performance. To address this issue, this paper first introduces additional parameters to learn a mapping, from the perspective of improving feature interaction, assuming that this mapping can map the representations of two vectors to the representation of their interaction. The mapping is learned through meta-learning, which constructs a learner to represent feature interactions in a functional manner. Additionally, different features may not adopt the same interaction method, and it is impossible to obtain all feature pairs through a single interaction method. Therefore, a set of meta-learners is designed to learn the mapping function, and a GateNet is introduced to learn the distribution of meta-learners in the model, so that the set of meta-learners can represent different feature embeddings. Based on these two points, a feature interaction algorithm combining multiple meta-learners with GateNet (gate-MML) is proposed, which improves the quality of each feature interaction by learning the connections and differences between different features. To verify the performance of the proposed algorithm, gate-MML is used for further feature interaction in the xDeepFM model, and experiments are conducted on two real advertising CTR prediction datasets using Logloss as the loss function and AUC as the evaluation metric. Experimental results show that, compared with traditional CTR prediction models, the improved algorithm enhances the prediction performance of advertising click-through rate prediction tasks.
Anomaly Detection Algorithm for Network Device Configuration Based on Configuration Statement Tree
SHEN Yuancheng, BAN Rui, CHEN Xin, HUA Runduo, WANG Yunhai
Computer Science. 2023, 50 (11A): 230200128-10.  doi:10.11896/jsjkx.230200128
Abstract PDF(3342KB) ( 169 )   
References | Related Articles | Metrics
The problem of device configuration anomalies is becoming increasingly significant with the development of network communication equipment. Traditional detection tools usually only detect spelling, formatting and other surface issues, and cannot identify logic problems, so engineers’ experience plays a critical role in detecting such anomalies. To improve network service quality, reduce repetitive work, and address issues such as the slow detection speed, weak detection capability and poor versatility of traditional tools, this paper draws on the design concept of abstract syntax trees and proposes an innovative unsupervised anomaly detection algorithm based on “configuration statement trees”. It can identify seven types of detectable anomalies and provides recommendations for anomaly localization and modification plans. The paper evaluates and compares the algorithm on indicators such as detectable types, runtime, accuracy and recall, using configurations from an operator’s live network. The results demonstrate that the algorithm has good robustness and can effectively address network communication issues resulting from configuration anomalies in network communication equipment.
Bio-inspired Frequent Itemset Mining Strategy Based on Genetic Algorithm
ZHAO Xuejian, ZHAO Ke
Computer Science. 2023, 50 (11A): 220700200-8.  doi:10.11896/jsjkx.220700200
Abstract PDF(2005KB) ( 141 )   
References | Related Articles | Metrics
Exact frequent itemset mining algorithms usually have low time efficiency, particularly when processing large-scale data sets. To solve this problem, a frequent itemset mining algorithm, genetic algorithm combining apriori property based frequent itemset mining (GAA-FIM), is proposed, which combines the genetic algorithm with the downward closure property used by exact frequent itemset mining algorithms. The operation rules of the coding, crossover, mutation and selection operations are described in detail. In the GAA-FIM algorithm, individuals with good genes are preferentially added to the latest generation of the candidate population through an asexual crossover process, and the scale of the new candidate generation can be expanded through the mutation operation. Therefore, the time efficiency of the proposed algorithm is improved greatly and frequent itemsets of better quality can be obtained. The performance of the GAA-FIM algorithm is validated on both synthetic and real data sets. Experimental results show that the proposed GAA-FIM algorithm has better time efficiency than the GAFIM and GA-Apriori algorithms, and the quality of the mined frequent itemsets is further improved.
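The downward-closure (apriori) property that GAA-FIM exploits is easy to state in code: a candidate itemset produced by crossover or mutation can only be frequent if every one of its subsets is frequent. A stdlib-only sketch, with an invented toy transaction database `T`:

```python
from itertools import combinations

def support(itemset, transactions):
    """Fraction of transactions containing every item in itemset."""
    s = set(itemset)
    return sum(s <= t for t in transactions) / len(transactions)

def apriori_prune(candidate, transactions, min_sup):
    """Downward-closure check usable to filter GA offspring: keep a
    candidate only if all of its proper subsets are frequent."""
    return all(support(sub, transactions) >= min_sup
               for r in range(1, len(candidate))
               for sub in combinations(candidate, r))

T = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
print(support(("a", "b"), T))                  # 0.5
print(apriori_prune(("a", "b", "c"), T, 0.5))  # True: all subsets frequent
```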
Compliance Check Method for Data Flow Process Based on Extended Reachability Graph with Labeled Timing Constraint Petri Net
LIU Zhenyu, DONG Hui, LI Hua, WANG Lu
Computer Science. 2023, 50 (11A): 221000118-12.  doi:10.11896/jsjkx.221000118
Abstract PDF(4004KB) ( 118 )   
References | Related Articles | Metrics
With the continuous improvement of social systems, laws and regulations, the business management processes of enterprises face more and more compliance check requirements. The labeled timing constraint Petri net (LTCPN) model is used to describe the laws, regulations and industry rules followed in the data flow process. In order to support rule expression in more dimensions, it is first necessary to construct an extended reachability graph GNR based on the LTCPN reachability graph, and then automatically generate the actual data flow model GNP according to the timestamped event log trace. By checking whether GNP |= GNR, it is determined whether the data flow process based on the timestamped event log conforms to the rule specification described by the LTCPN. For compliance checking of process models with unknown semantic information, the connection structure of nodes and edges can be used to detect the functional attribute compliance of semantically independent events. For process models with explicit semantic information, the semantic information of nodes or edges can effectively reduce the number of states explored during checking, and further enrich the non-functional attribute part of the compliance check. The feasibility of the method for compliance checking is verified by experiments.
Bayesian Time-series Model Based on Spike-and-slab Prior
GUO Chenlei, LI Dongxi
Computer Science. 2023, 50 (11A): 221200131-6.  doi:10.11896/jsjkx.221200131
Abstract PDF(3096KB) ( 158 )   
References | Related Articles | Metrics
The Bayesian method makes estimation and prediction more accurate by introducing prior information and combining it with the likelihood for parameter estimation and variable selection. A Bayesian hierarchical time-series model based on a spike-and-slab prior with partial autocorrelation coefficients (SS-PAC) is proposed under the Bayesian framework. Considering the correlation between time series and fusing the partial autocorrelation coefficients with prior information, the SS-PAC model uses the spike-and-slab prior and partial autocorrelation coefficients to realize lag-order selection, parameter estimation and prediction for time series. Empirical research on simulated and real data shows that the model performs better than previous models in variable selection and prediction.
Multivariate Time Series Forecasting Method Based on FRA
WANG Hao, ZHOU Jiantao, HAO Xinyu, WANG Feiyu
Computer Science. 2023, 50 (11A): 221100144-8.  doi:10.11896/jsjkx.221100144
Abstract PDF(2229KB) ( 149 )   
References | Related Articles | Metrics
Derivative industries in the field of science and technology have accumulated a large amount of high-dimensional time series data due to the general existence of strong time constraints. Severe data pressure leaves traditional data modeling and prediction methods limited by data scale and attribute dimensionality, and supporting high-quality services puts forward higher requirements for big data intelligent prediction technology. How to improve prediction performance at the data level is a main problem that urgently needs to be solved at this stage. To address these problems, a feature re-abstraction (FRA) algorithm for multivariate time series data is proposed. First, the RobustSTL decomposition algorithm is used to extract trend and seasonality features (TSFs), realizing a second-order abstraction of the features of multivariate data and replacing the traditional “labels as features” extraction strategy with “abstractions as features”. Then, the correlation strength between the TSFs captured by the re-abstraction technique and the target parameters is evaluated by the Pearson correlation coefficient, which confirms the data value of the TSFs. On the basis of the FRA algorithm, combined with a deep learning model, a data-driven multivariate time series prediction algorithm is constructed, and the effectiveness of the FRA algorithm is verified by its prediction effect. Experimental results show that introducing TSFs as the training vectors of the data-driven model preserves dimensionality reduction, noise reduction and strong correlation in the data, so as to avoid model overfitting, alleviate underfitting, and improve the accuracy and robustness of time series prediction algorithms.
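The TSF screening step is plain Pearson correlation against the target series. A stdlib-only sketch, where the trend/noise features and the 0.8 threshold are invented for illustration:

```python
import math

def pearson(x, y):
    """Plain Pearson correlation coefficient between two sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# keep only re-abstracted features strongly correlated with the target
target = [1, 2, 3, 4, 5]
features = {"trend": [2, 4, 6, 8, 10],   # perfectly correlated trend TSF
            "noise": [5, 1, 4, 2, 3]}    # weakly related component
kept = [k for k, v in features.items() if abs(pearson(v, target)) > 0.8]
print(kept)   # ['trend']
```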
Study on Value Calculation of Big Data Based on Granular Tree and Usage Relationship
MA Wensheng, HOU Xilin, WANG Hongbo, LIU Sen
Computer Science. 2023, 50 (11A): 230300109-8.  doi:10.11896/jsjkx.230300109
Abstract PDF(2278KB) ( 114 )   
References | Related Articles | Metrics
This paper studies the core “data results of value” of big data. Firstly, the rough set method, cluster-based method, quotient space method, fuzzy information method and cloud model method for granulating big data are described. According to their common characteristic, “division”, the big data is “granulated”, and a “granularity tree” is established over the big data according to the size of the divisions; a “granular space” is defined in the granularity tree. Then the usage relationship between a granular space and a representative project is defined, together with the conditions that the usage relationships of different granular spaces satisfy. Finally, according to the usage of each granule and each granule set in the usage relationship of the granular space, usage is divided into three types: “regular use”, “inevitable use” and “related use”. The average values of their attributes and objects, rounded to 0~100, are taken as the “regular value”, “inevitable value” and “relevant value” of the big data. An effective calculation method for the core “data results of value” of big data is given, and application examples of this calculation in telemedicine, urban management, universities and other fields are also given.
Prediction Method of Long Series Time Series Based on Improved Informer Model with Kernel Technique
PAN Liqun, WU Zhonghua, HONG Biao
Computer Science. 2023, 50 (11A): 221100186-6.  doi:10.11896/jsjkx.221100186
Abstract PDF(2437KB) ( 188 )   
References | Related Articles | Metrics
Nowadays, the prediction of long sequence time series mainly relies on RNN-like models, and the loss function used is mostly the mean square error (MSE). However, the MSE loss function cannot capture the nonlinearity commonly present in long time series data; moreover, it is sensitive to outliers and has low robustness. Therefore, this paper proposes to replace the traditional MSE loss function in the Informer model with an improved kernel MSE loss function based on the kernel technique, which handles the nonlinearity in the data by mapping the error from the original feature space to a higher-dimensional space. Moreover, the first and second derivatives of the new loss function ensure robustness to outliers. In the setting of multivariate prediction, this paper compares prediction accuracy with the classical Informer model, the LSTM model and the GRU model, taking eight data sets of three types as examples. The results show that the improved Informer model has higher prediction accuracy, and the relative improvement in accuracy increases with the amount of original data, making it suitable for the prediction of long sequence time series.
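The abstract does not give the exact kernel loss, but a standard correntropy-style choice maps each error through a Gaussian kernel, bounding each sample's contribution by 1 so that a gross outlier cannot dominate the way it does under plain MSE. A stdlib sketch, with an assumed bandwidth `sigma`:

```python
import math

def kernel_mse(errors, sigma=1.0):
    """Correntropy-style kernel loss: 1 - exp(-e^2 / (2 sigma^2))
    per error, so each sample contributes at most 1 regardless of
    how large the error is (a sketch, not the paper's exact form)."""
    return sum(1.0 - math.exp(-e * e / (2 * sigma ** 2))
               for e in errors) / len(errors)

def mse(errors):
    return sum(e * e for e in errors) / len(errors)

clean = [0.1, -0.2, 0.15]
with_outlier = clean + [50.0]   # one gross outlier
print(mse(with_outlier) / mse(clean))                # MSE explodes
print(kernel_mse(with_outlier) / kernel_mse(clean))  # kernel loss grows mildly
```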
Study on Credit Anti-fraud Based on Heterogeneous Information Network
LIU Hualing, ZHANG Guoxiang, WANG Liuyue, LIANG Huabi
Computer Science. 2023, 50 (11A): 221100173-9.  doi:10.11896/jsjkx.221100173
Abstract PDF(3981KB) ( 169 )   
References | Related Articles | Metrics
In recent years, the digitization of mobile terminal equipment has risen sharply, and fraudulent behavior in the credit industry has shown new characteristics such as dynamic development, concealment of behavior and professional camouflage. The cross-order growth of massive data has brought considerable challenges to the effectiveness and computational efficiency of traditional anti-fraud algorithms. Therefore, this paper aims to fully learn the interaction information between different entities in the credit scene and to reduce the computational consumption of the algorithm so that it suits large-scale graph data tasks, and proposes a specific group mining algorithm, BKH-II (Bron-Kerbosch-H-II), based on heterogeneous information networks. First, the credit entities and the relationships between them in the source data are defined and classified, and the similarity between different entities is used as the relationship weight to build a credit heterogeneous information network. A two-stage H-graph-based maximal clique enumeration algorithm is applied to the network to mine distinctive groups. Finally, potential fraud groups are obtained through local feature engineering correction and division. Experiments show that BKH-II achieves NMI = 0.983, NRI = 0.96, F-score = 0.943 and Omega = 0.95 on the four evaluation indicators, and exhibits good generalization and low computational complexity.
Unbiased Deep Learning to Rank Algorithm for Suggestion Auto-completion
ZHOU Mingxing, YAN Xiangzhou, YU Jing, GAO Changju, CHEN Yunwen, JI Daqi, JIN Ke
Computer Science. 2023, 50 (11A): 220800179-5.  doi:10.11896/jsjkx.220800179
Abstract PDF(2119KB) ( 126 )   
References | Related Articles | Metrics
Suggestion auto-completion is one of the key means of influencing users’ input before search submission, and it is one of the indispensable core functions of commercial search engines. How to provide better suggestion words is also a ranking problem. In the field of learning to rank, it has been a common perception that collected training data has position bias [1-8], which can affect the ranking effect of a trained model. To address this problem of biased training data, this paper combines improved context-based semantic features to design an unbiased deep learning to rank algorithm for suggestion auto-completion (UDLTR-SAc), which learns position bias and suggestion relevance simultaneously. According to offline experiments and online A/B tests, UDLTR-SAc can automatically learn the training data bias introduced by position and thus obtain a more accurate model for calculating relevance, compared with a similar algorithm that does not consider the bias problem and with a classical completion ranking algorithm, respectively. Moreover, it achieves a 0.1% (p < 0.1) increase in GMV in the online A/B tests.
Ship Traffic Flow Prediction Algorithm Based on Attention Mechanism and ConvLSTM
LI Gang, SONG Wen, CHEN Zhiyuan
Computer Science. 2023, 50 (11A): 230800067-7.  doi:10.11896/jsjkx.230800067
Abstract PDF(3241KB) ( 205 )   
References | Related Articles | Metrics
Ship traffic flow prediction is one of the key technologies of port intelligent transportation systems, and plays a vital role in the efficiency and safety of port transportation. Aiming at the problem that existing prediction methods find it difficult to effectively extract the spatio-temporal features from ship traffic flow data, a prediction method based on an attention mechanism and ConvLSTM (ACLN) is proposed. ACLN first constructs an encoding network from deep ConvLSTM layers to effectively extract the spatio-temporal features from the ship traffic flow data. Secondly, the attention mechanism is used to weigh the importance of the extracted spatio-temporal features, so that the model automatically focuses on the more important features during prediction. Finally, a prediction network is constructed from multiple ConvLSTM and CNN layers to parse the extracted spatio-temporal features and output the prediction result. The effectiveness of the proposed method is verified on real port ship traffic flow data. Experimental results show that the prediction performance of the proposed method is significantly better than that of state-of-the-art prediction methods, that it can perform long- and short-term prediction effectively in a given area, and that it has practical value.
Network & Communication
Modulation Signal Recognition Based on Multimodal Fusion and Deep Learning
YANG Xiaomeng, ZHANG Tao, ZHUANG Jianjun, QIAO Xiaoqiang, DU Yihang
Computer Science. 2023, 50 (11A): 220900007-7.  doi:10.11896/jsjkx.220900007
Abstract PDF(3636KB) ( 160 )   
References | Related Articles | Metrics
Aiming at the problem that most existing modulation classification algorithms ignore the complementarity between different features and feature fusion, this paper proposes a feature fusion method using a deep learning model. The method fuses the temporal and spatial features of modulated signals to obtain more distinct recognition features. First, the A/P (amplitude/phase) signal and I/Q (in-phase/quadrature) signal of the modulated signal are obtained. Then, a convolutional long short-term memory module and a complex dense residual convolution module are built to extract the temporal features of the A/P signal and the spatial features of the I/Q signal respectively, and these are fused to obtain complementary recognition features. Finally, the recognition features are input into a classification network to obtain the recognition results. Experimental results on an open-source data set show that when the signal-to-noise ratio is greater than 5 dB, the recognition rate reaches 93.25%, and the recognition accuracy is 3%~11% higher than that of single-feature recognition. Classification and recognition on actually collected data further prove the effectiveness of the proposed feature extraction model and fusion strategy.
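The two input representations are related by a fixed transform: the A/P signal is just the polar form of the I/Q samples, so both branches see the same underlying data in different coordinates. A minimal sketch:

```python
import math

def iq_to_ap(i_samples, q_samples):
    """Convert I/Q samples to amplitude/phase (A/P) form:
    amplitude = sqrt(I^2 + Q^2), phase = atan2(Q, I)."""
    amp = [math.hypot(i, q) for i, q in zip(i_samples, q_samples)]
    phase = [math.atan2(q, i) for i, q in zip(i_samples, q_samples)]
    return amp, phase

# two unit samples: one on the I axis, one on the Q axis
a, p = iq_to_ap([1.0, 0.0], [0.0, 1.0])
print(a)   # [1.0, 1.0]
print(p)   # [0.0, 1.5707963267948966] i.e. 0 and pi/2
```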
Dependency-aware Task Scheduling in Cloud-Edge Collaborative Computing Based on Reinforcement Learning
HU Shengxi, SONG Rirong, CHEN Xing, CHEN Zheyi
Computer Science. 2023, 50 (11A): 220900076-8.  doi:10.11896/jsjkx.220900076
Abstract PDF(3509KB) ( 133 )   
References | Related Articles | Metrics
In cloud-edge collaborative computing, computing resources are scattered among mobile devices, edge servers and cloud servers. Offloading computation-intensive tasks from mobile devices to remote servers for execution expands local computing capability by utilizing powerful remote resources, and is an effective way to solve the resource-constrained problem of mobile devices. Aiming at the scheduling decision problem for tasks with dependencies in cloud-edge collaborative computing, this paper proposes a model-free approach based on reinforcement learning. First, the mobile application is modeled as a directed acyclic graph, and a task scheduling problem model in cloud-edge collaborative computing is built. Second, the task scheduling process is modeled as a Markov decision process, using Q-learning to learn reasonable scheduling decisions by interacting with the network environment. Experimental results show that the proposed dependency-aware task scheduling method based on Q-learning outperforms the benchmark algorithms in different scenarios and effectively reduces the execution time of the application.
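The Q-learning core of such a scheduler is the standard one-step update; the states, actions and reward below are invented placeholders for whatever encoding of task/placement the scheduler actually uses:

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One Q-learning step: move Q(s, a) toward the bootstrapped
    target r + gamma * max_a' Q(s', a')."""
    best_next = max(Q[next_state].values())
    target = reward + gamma * best_next
    Q[state][action] += alpha * (target - Q[state][action])
    return Q

# toy table: actions = where to run the next DAG task
Q = {"s0": {"local": 0.0, "edge": 0.0, "cloud": 0.0},
     "s1": {"local": 1.0, "edge": 0.0, "cloud": 0.0}}
q_update(Q, "s0", "edge", reward=-0.2, next_state="s1")
print(round(Q["s0"]["edge"], 3))   # 0.1 * (-0.2 + 0.9 * 1.0) = 0.07
```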
Study on Relay Decision in Wireless Heterogeneous Networks Based on Deep Reinforcement Learning
ZHOU Tianyu, GUAN Zheng
Computer Science. 2023, 50 (11A): 221000088-5.  doi:10.11896/jsjkx.221000088
Abstract PDF(2340KB) ( 127 )   
References | Related Articles | Metrics
In large-scale multi-user scenarios of the Internet of Things, remote nodes need to access the network through relays. In order to solve the adaptive access control problem of relays in a heterogeneous access technology environment, an intelligent relay access control strategy based on deep reinforcement learning is proposed, which regards the relay’s transmission and reception of remote user data as a partially observable Markov decision process and dynamically decides the relay’s working state to maximize total system throughput and node fairness. Firstly, the uplink model of a wireless heterogeneous network with relays is established, and, with the goal of improving total system throughput, a dynamic decision optimization model for the relay is established. Secondly, a deep Q network (DQN) with an LSTM hidden layer is constructed as the action-value function to optimize total system throughput. Test results show that DRL-RAP can provide network access for remote users on the premise of ensuring the original users’ quality of service. The total throughput of the system is significantly improved over the original network, and the maximum throughput can be increased by 30%.
Design of Adaptive Hybrid Precoder in mmWave MU-MIMO Systems
XUE Jianbin, WANG Jiahao
Computer Science. 2023, 50 (11A): 221200047-5.  doi:10.11896/jsjkx.221200047
Abstract PDF(2795KB) ( 134 )   
References | Related Articles | Metrics
Based on millimeter-wave (mmWave) communication and massive multiple-input multiple-output (MIMO) technology, a massive MIMO system for multi-user, multi-data-stream scenarios such as cellular vehicle-to-everything (C-V2X) is constructed to reduce the total power consumption, hardware complexity and computational complexity of the system. To this end, a bitstream-based adaptive-connected massive MIMO architecture is designed. Compared with other adaptive-connected architectures, the proposed architecture uses fewer phase shifters and switches with smaller arrays. As the array size increases, the power consumption of the architecture in mmWave multi-user MIMO (MU-MIMO) systems decreases gradually. Simulation results show that in mmWave MU-MIMO-OFDM systems using this architecture, some existing hybrid precoding schemes can achieve higher data transmission rates as the total number of data streams increases.
Dynamic Unloading Strategy of Vehicle Edge Computing Tasks Based on Traffic Density
ZHAO Hongwei, YOU Jingyue, WANG Yangyang, ZHAO Xike
Computer Science. 2023, 50 (11A): 220900199-7.  doi:10.11896/jsjkx.220900199
Abstract PDF(2899KB) ( 168 )   
References | Related Articles | Metrics
To address the problems and challenges of vehicle edge computing, a scenario model of vehicle-road-edge collaboration is proposed. Taking vehicle density as the entry point, this paper defines the problem of minimizing the communication link outage probability and establishes a communication rate model with respect to vehicle density. Combining the three strategies of vehicle offloading, pricing and resource allocation, the system optimization objective is formulated as minimizing the vehicle-side cost while maximizing the RSU-side utility. Problem decomposition is introduced to reduce coupling, and the original objective is transformed into a balance problem between offloading and pricing plus a resource allocation problem. The existence of a Nash equilibrium of the offloading-pricing game is verified, and a distributed algorithm (SDA) based on the Stackelberg game is proposed to solve the optimization problem. Finally, simulation experiments verify the impact of traffic density on transmission rate; SDA reduces the offloading cost of vehicles by 24% and increases the revenue of RSUs by 11%.
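The leader-follower structure behind a Stackelberg offloading game can be sketched in a few lines: the RSU (leader) posts a unit price, each vehicle (follower) best-responds with an offloading fraction, and the leader picks the price that maximizes its revenue by backward induction. All cost parameters below are illustrative assumptions, not values from the paper.

```python
# Toy Stackelberg pricing/offloading sketch (hypothetical cost model).
def vehicle_best_response(price, local_cost, transmit_cost):
    """Follower: choose offload fraction x in [0, 1] minimizing its cost."""
    best_x, best_cost = 0.0, float("inf")
    for i in range(101):                       # coarse grid over [0, 1]
        x = i / 100
        cost = (1 - x) * local_cost + x * (price + transmit_cost)
        if cost < best_cost:
            best_x, best_cost = x, cost
    return best_x

def rsu_optimal_price(vehicles, prices):
    """Leader: anticipate followers' best responses and maximize revenue."""
    return max(prices, key=lambda p: p * sum(
        vehicle_best_response(p, lc, tc) for lc, tc in vehicles))

vehicles = [(10.0, 2.0), (6.0, 1.0), (8.0, 3.0)]   # (local, transmit) costs
price = rsu_optimal_price(vehicles, [p / 2 for p in range(1, 20)])
```

With these toy costs the leader settles just below the followers' indifference points, which is the equilibrium intuition the abstract describes.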
Study on Dynamic Task Offloading Scheme Based on MAB in Vehicular Edge Computing Network
XUE Jianbin, WANG Hainiu, GUAN Xiangrui, YU Bowen
Computer Science. 2023, 50 (11A): 230200186-9.  doi:10.11896/jsjkx.230200186
Abstract PDF(2884KB) ( 142 )   
References | Related Articles | Metrics
Applying mobile edge computing technology to the Internet of Vehicles forms a mobile edge computing system that can provide computing services for other mobile devices through task offloading. However, due to the mobility of vehicle equipment, the vehicular task offloading environment is dynamic and uncertain, with rapidly changing network topology, wireless channel state and computing load; these uncertainties make the offloading process non-ideal. In view of this, the computing resources of the MEC server are sunk into the vehicle equipment to study task offloading between vehicles, and a solution is proposed that lets vehicles learn the service performance of surrounding vehicles and offload tasks without knowing their state information. Based on the multi-armed bandit (MAB) framework, a second-order exploration reinforcement learning algorithm is designed to maximize the average offloading return of users, and a service-set update method applied after each offloading phase is proposed to ensure the quality of service for users. Simulation results show that, compared with the existing algorithm based on the upper confidence bound, the offloading return under the proposed scheme improves by about 34%.
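The multi-armed-bandit view described above can be illustrated with the classic UCB1 baseline the paper compares against: each "arm" is a neighboring service vehicle with an unknown mean offloading return, and the requester balances exploring new vehicles against exploiting the best one seen so far. The reward distributions are illustrative assumptions.

```python
import math
import random

# Minimal UCB1 sketch: pick the vehicle (arm) with the highest
# mean-estimate plus exploration bonus, observe a noisy return, update.
def ucb1(true_means, rounds, seed=0):
    rng = random.Random(seed)
    n = len(true_means)
    counts, sums = [0] * n, [0.0] * n
    for t in range(1, rounds + 1):
        if t <= n:                       # play each arm once first
            arm = t - 1
        else:
            arm = max(range(n), key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = rng.gauss(true_means[arm], 0.1)   # noisy offloading return
        counts[arm] += 1
        sums[arm] += reward
    return counts

counts = ucb1([0.3, 0.5, 0.8], rounds=2000)
# the vehicle with the highest mean return ends up chosen most often
```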
Fairness-aware Service Caching and Task Offloading with Cooperative Mobile Edge Computing
WU Chun, CHEN Long, SUN Yifei, WU Jigang
Computer Science. 2023, 50 (11A): 230200095-8.  doi:10.11896/jsjkx.230200095
Abstract PDF(2843KB) ( 136 )   
References | Related Articles | Metrics
Caching services on edge servers can reduce the response time of user requests and improve user experience. Most existing works focus on optimizing overall system performance, i.e., maximizing system throughput, which cannot guarantee user fairness in requesting heterogeneous services. To fill this gap, this paper investigates a fairness-aware joint service caching and task offloading strategy with cooperative edge computing. A minimum service completion rate maximization problem is formulated based on the max-min fairness principle and proved to be NP-hard. A randomized rounding algorithm with MS/N(S-2 ln S)-approximation ratio is proposed by relaxing the original 0-1 integer program into a linear program, where S, N and M are the numbers of edge servers, services and end devices, respectively. Moreover, a fast and efficient greedy algorithm is proposed that preferentially caches the service with the minimum completion rate and offloads its corresponding tasks. Extensive simulation results demonstrate that the two algorithms improve the minimum service completion rate by at least 44.1% and 90.6%, respectively, while the extra loss of system throughput is no more than 22.4% and 27.0%, compared with existing algorithms that maximize system throughput.
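The greedy idea above can be sketched in a simplified form: repeatedly pick the service with the lowest completion rate and cache one more copy of it on the edge server with the most remaining capacity, until capacity runs out. The capacity/demand model is an illustrative assumption, not the paper's formulation.

```python
# Simplified max-min greedy caching sketch (hypothetical capacity model).
def greedy_maxmin_cache(demand, server_slots, served_per_copy):
    copies = [0] * len(demand)
    rate = lambda s: min(1.0, copies[s] * served_per_copy / demand[s])
    while any(slots > 0 for slots in server_slots):
        s = min(range(len(demand)), key=rate)        # worst-served service
        if rate(s) >= 1.0:                           # everyone fully served
            break
        k = max(range(len(server_slots)), key=lambda j: server_slots[j])
        server_slots[k] -= 1                         # use one cache slot
        copies[s] += 1
    return copies, min(rate(s) for s in range(len(demand)))

copies, min_rate = greedy_maxmin_cache(
    demand=[100, 40, 60], server_slots=[2, 2], served_per_copy=20)
```

Each iteration raises the worst-off service's rate first, which is exactly the max-min fairness objective the abstract formulates.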
Multi-edge Server Load Balancing Strategy Based on Game Theory
WENG Jie, LIN Bing, CHEN Xing
Computer Science. 2023, 50 (11A): 221200150-8.  doi:10.11896/jsjkx.221200150
Abstract PDF(2686KB) ( 135 )   
References | Related Articles | Metrics
As an emerging computing paradigm,mobile edge computing aims to make up for the shortage of computing,storage and bandwidth of mobile devices in the Internet of Things.Due to geographical and time factors,the load of edge servers varies greatly,so the load balancing of multi-edge servers is very important.This paper proposes a multi-edge server load balancing strategy based on game theory to meet the load balancing requirements among edge servers.Firstly,the MEC server load balancing problem is modeled as a non-cooperative game,and the unique Nash equilibrium solution is obtained by introducing the regularization method based on proximal decomposition algorithm.Then,according to the established game model,a distributed load balancing algorithm(DLBA) is proposed to optimize the system response time and energy consumption.Experimental results show that DLBA can quickly reach Nash equilibrium with fewer iterations.Compared with the local computing strategy and computing power allocation strategy,the average response delay of DLBA strategy is reduced by 18.39% and 9.91%,and the average energy consumption is reduced by 2.42% and 7.33%.The gap between DLBA and the optimal strategy obtained by particle swarm optimization genetic algorithm is small,but the computation time is only 1.81% of that of particle swarm optimization genetic algorithm.Therefore,the proposed strategy can effectively reduce the system response time and energy consumption,and the execution speed is fast,which is applicable to real scenarios.
Optimal Edge Server Placement Method Based on Delay and Load
YUAN Peiyan, MA Yiwen
Computer Science. 2023, 50 (11A): 220900260-8.  doi:10.11896/jsjkx.220900260
Abstract PDF(2713KB) ( 110 )   
References | Related Articles | Metrics
At present, edge server placement has become a key step in the development of edge computing. Existing placement methods jointly optimize placement cost, network latency and system energy consumption, but most ignore load balancing among edge servers. This paper aims to minimize service delay while balancing the load of edge servers, and establishes an optimization model for edge server placement. According to the model, the optimal placement locations are selected, and a placement scheme based on an improved meta-heuristic algorithm, MIWOA-ESP, is proposed. It performs the multi-objective optimization, determines the assignment relationship between base stations and edge servers, and outputs the optimal placement and assignment scheme. Finally, experiments are carried out on the Shanghai Telecom base station dataset. The results show that, compared with other benchmark schemes, the MIWOA-ESP placement strategy performs better in terms of network latency and server load balancing.
LN-ERCL Lightning Network Optimization Scheme
SUN Min, XU Senwei, SHAN Tong
Computer Science. 2023, 50 (11A): 230200115-5.  doi:10.11896/jsjkx.230200115
Abstract PDF(2351KB) ( 134 )   
References | Related Articles | Metrics
In recent years, blockchain has developed rapidly, and its low transaction throughput has become an obstacle to further development. The lightning network, one of the best solutions to the blockchain transaction throughput problem, has the advantages of short confirmation time and low cost, but it also suffers from low channel capacity, high routing cost and channel congestion. Most existing optimization schemes use third-party custody to extend the transaction waiting time, which cannot solve channel congestion at the root. To address these problems, this paper proposes a new lightning network optimization scheme. First, super nodes are set up in the lightning network and given tokens for building channels with each other; users can convert bitcoin into tokens through the Ethereum Request for Comment protocol to enter the lightning network. Second, the concept of escape value is proposed: a user node chooses which super node to join by calculating the escape value. Finally, an improved landmark algorithm prunes the network channels, which improves network scalability and alleviates channel congestion. Simulation results show that this scheme performs well on network congestion and long path optimization time when transaction volume in the lightning network is large.
Noise Tolerant Algorithm for Network Traffic Classification Method
MA Jiye, ZHU Guosheng, WEI Cao, ZENG Yuxuan
Computer Science. 2023, 50 (11A): 220800120-7.  doi:10.11896/jsjkx.220800120
Abstract PDF(1903KB) ( 129 )   
References | Related Articles | Metrics
Aiming at the problem that the correctness of sample labels directly affects the accuracy of traditional machine-learning-based network traffic classification, a noise-tolerant network traffic classification method based on deep residual networks is proposed. After normalization and data enhancement, the data is mapped into grayscale images, and different degrees of noise are added to the sample labels. Then, based on the Res2Net deep residual neural network, a dimensional module suited to the interference of network traffic noise is designed, and a deep neural network model tolerant to traffic label noise is constructed. Experimental results on public datasets show that, compared with traditional noise-tolerant classification algorithms, the improved deep residual neural network improves classification accuracy under different noise rates, with the improvement more significant at high noise rates.
Study on NOMA-MEC System Based on JTORATPAIA in Emergency Communication Scenarios
XUE Jianbin, AN Na, WANG Qi, ZHANG Han
Computer Science. 2023, 50 (11A): 221000240-8.  doi:10.11896/jsjkx.221000240
Abstract PDF(2684KB) ( 121 )   
References | Related Articles | Metrics
In the emergency communication scenario combining mobile edge computing (MEC) and non-orthogonal multiple access (NOMA) technology, aiming at the problem that users' limited battery power cannot meet their business needs, a UAV-assisted NOMA-MEC emergency communication system is proposed with the goal of minimizing the users' total energy consumption. A low-complexity iterative algorithm for joint task offloading proportion and transmission power allocation (JTORATPAIA) is designed. Simulation results show that, compared with other benchmark schemes, this scheme reduces the energy consumption of all users more effectively. In particular, when the input data size is 7.5 Mbits, the energy consumption of the proposed scheme is about 40% lower than that of literature [30]. The scheme is thus well suited to reducing users' energy consumption in emergency communication scenarios.
Design of QPSK Intelligent Receiver Based on LSTM Neural Network
ZHU Li, HAN Huimei, ZHAI Wenchao
Computer Science. 2023, 50 (11A): 230200219-5.  doi:10.11896/jsjkx.230200219
Abstract PDF(2805KB) ( 131 )   
References | Related Articles | Metrics
To solve the problems of low detection accuracy and high complexity of quadrature phase shift keying (QPSK) receivers, this paper proposes a QPSK intelligent receiver based on a long short-term memory (LSTM) neural network. The proposed receiver consists of LSTM and fully connected layers, employing the LSTM to capture the temporal correlation of the received signal, and has low complexity. Simulation results show that, compared with existing QPSK receivers, the proposed intelligent receiver significantly improves detection performance under additive white Gaussian noise, in-phase/quadrature imbalance and frequency deviation channels.
Computer Software & Architecture
IC3 Hardware Verification Algorithm Based on Variable Hiding Abstraction
YANG Liu, FAN Hongyu, LI Dongfang, HE Fei
Computer Science. 2023, 50 (11A): 230200112-6.  doi:10.11896/jsjkx.230200112
Abstract PDF(1769KB) ( 149 )   
References | Related Articles | Metrics
As the complexity and scale of hardware designs increase significantly, hardware verification becomes more challenging. Model checking, an automated verification technique that can automatically construct counterexample paths, has become one of the most important research directions in hardware verification. The IC3 algorithm is the most successful bit-level hardware verification algorithm of recent years. To improve the scale and efficiency of verification, the design of hardware verification algorithms is gradually shifting from the bit level to higher abstraction levels. The research goal is to design a new, effective word-level IC3 algorithm. To this end, a word-level IC3 algorithm combining variable hiding abstraction and implicit abstraction, called IC3VA, is proposed. The approach integrates variable hiding abstraction with the IC3 algorithm and designs a corresponding generalization and refinement scheme. It is compared with a predicate-abstraction-based approach on a test set collected from the open-source community and a hardware verification competition. Experimental results show the effectiveness of the IC3 algorithm based on variable hiding abstraction.
Automated Testing Method of Android Applications Based on SA-UCB Algorithm
WANG Xi, ZHAO Chunlei, BU Zhiliang, YANG Yi
Computer Science. 2023, 50 (11A): 221200145-7.  doi:10.11896/jsjkx.221200145
Abstract PDF(2557KB) ( 112 )   
References | Related Articles | Metrics
Aiming at the problem that traditional reinforcement learning algorithms must learn behavior rules from scratch, which leads to low testing efficiency, a model-based automated testing method for Android applications, SA-UCB, is proposed. The Sarsa algorithm is used to guide the test process, with the Q table as the reference for action selection. Because the ε-greedy strategy used by the classical Sarsa algorithm is too random, the upper confidence bound (UCB) algorithm is introduced to balance the exploration-exploitation dilemma and make action decisions more dispersed; applied to the Android automated testing process, this improves testing efficiency. SA-UCB is compared with five other test methods in terms of test coverage, test efficiency and fault detection. The results show that, under the same experimental conditions, the SA-UCB strategy has certain advantages in test coverage and test efficiency.
Design of Ship Mission Reliability Simulation System Based on Agent
WEN Haolin, DI Peng, CHEN Tong
Computer Science. 2023, 50 (11A): 220800272-7.  doi:10.11896/jsjkx.220800272
Abstract PDF(5228KB) ( 145 )   
References | Related Articles | Metrics
In view of the complex effect of support resources on mission reliability during ship missions, the autonomy, reactivity and sociality of agent technology are used to solve the modeling and calculation problems posed by the many complex influence relationships in mission reliability modeling; the ship task flow, equipment reliability structure, faults and maintenance support resources are simulated. A multi-element, modular, flexibly configurable and easy-to-use ship mission reliability simulation system is established, which can calculate mission reliability, the configurable number of support resources and other indicators under diversified mission conditions. It provides technical support for the optimal allocation of ship support resources in the use phase and for the supportability design of the ship's equipment in the development phase.
CFD Mesh Density Optimization Method Based on Characteristic Flow Distributions
LIU Jiang, ZENG Zhiyong
Computer Science. 2023, 50 (11A): 230200019-8.  doi:10.11896/jsjkx.230200019
Abstract PDF(2964KB) ( 129 )   
References | Related Articles | Metrics
The generation and optimization of the CFD mesh is a key technology in CFD numerical computation, which significantly determines the final accuracy and efficiency of numerical simulation. The meshes used in CFD simulations of practical engineering problems can reach tens of millions of cells, so obtaining higher numerical accuracy within a given computation time is a key capability that CFD urgently needs. It has been shown that numerical simulation errors are positively correlated with the gradients of the characteristic physical quantities. This paper proposes a mesh density optimization method based on the gradients of characteristic flow distributions. Using this method together with the OpenFOAM and cfMesh tools, an incompressible fluid, a combustion flow and a multiphase flow are simulated. Simulation results show that, for cases with different characteristics, the mesh optimized by the proposed method significantly improves calculation accuracy within almost the same calculation time.
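The core idea, concentrating mesh density where characteristic quantities vary fastest, can be sketched in one dimension: estimate the gradient of a flow quantity on each cell and mark the cells with the steepest gradients for refinement. Thresholding at a fixed fraction of the maximum gradient is an illustrative assumption, not the paper's criterion.

```python
# 1D sketch of gradient-driven refinement marking (hypothetical threshold).
def refine_flags(values, frac=0.5):
    """Mark cell faces whose gradient magnitude exceeds frac * max."""
    grads = [abs(values[i + 1] - values[i]) for i in range(len(values) - 1)]
    g_max = max(grads)
    return [g >= frac * g_max for g in grads]

# a smooth region followed by a sharp front (e.g. a shock-like profile)
field = [0.0, 0.01, 0.02, 0.05, 0.9, 1.0, 1.01]
flags = refine_flags(field)
# only the cell containing the steep front gets flagged for refinement
```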
Large-scale Efficient Hybrid Parallel Computing for DSMC/PIC Coupled Simulation
WANG Qingsong, QIU Haozhong, LIN Yongzhen, YANG Fuxiang, LI Jie, WANG Zhenghua, XU Chuanfu
Computer Science. 2023, 50 (11A): 230300146-9.  doi:10.11896/jsjkx.230300146
Abstract PDF(4209KB) ( 127 )   
References | Related Articles | Metrics
DSMC/PIC coupled simulation is an important class of high-performance computing applications. Due to dynamic particle injection and migration, pure MPI parallelization of DSMC/PIC coupled simulation usually suffers from huge communication costs and load imbalance. This paper presents approaches for large-scale, efficient MPI+OpenMP hybrid parallelization and dynamic load balancing for a self-developed DSMC/PIC coupled simulation software. Firstly, we propose an MPI parallel algorithm based on nested dual unstructured grids with two communication strategies, centralized and distributed, to support dynamic particle migration between any parallel processes. Then, we present a weighted load performance model, and design a dynamic load balancing algorithm and an efficient grid remapping mechanism, which greatly improve the parallel efficiency of the coupled simulation. Furthermore, we design a hybrid MPI+OpenMP parallel algorithm for the coupled simulation, which effectively reduces the grid re-decomposition and communication overheads of pure MPI parallelization under dynamic load balancing. On the BSCC HPC system, DSMC/PIC coupled parallel simulations on thousands of processor cores are carried out for a billion-particle pulsed vacuum arc plasma plume, verifying the effectiveness of the parallel algorithm and dynamic load balancing.
Lightweight Network Hardware Acceleration Design for Edge Computing
YU Yunjun, ZHANG Pengfei, GONG Hancheng, CHEN Min
Computer Science. 2023, 50 (11A): 220800045-7.  doi:10.11896/jsjkx.220800045
Abstract PDF(2885KB) ( 135 )   
References | Related Articles | Metrics
With the growth of edge device data and the continuous application of neural networks, the rise of edge computing has shared the pressure on big data technologies with cloud computing at the core. Field programmable gate arrays (FPGAs) have shown excellent properties in edge computing and in building neural network accelerators due to their flexible architecture and low power consumption, but traditional FPGA solutions based on conventional convolution algorithms are often limited by the number of on-chip computing units. In this paper, Zynq is used as the hardware acceleration platform, parameters are quantized to fixed point, and array partitioning is used to improve pipeline speed. The Winograd fast convolution algorithm is used to improve traditional convolution, converting multiplications in the convolution operation into additions, which reduces the computational complexity of the model and greatly improves the accelerator's computational performance. Experiments show that the XC7Z035 achieves 43.5 GOP/s at a 150 MHz clock, with energy efficiency 129 times that of a Xeon(R) Silver 4214R and 159 times that of a dual-core ARM. The proposed solution provides high performance under limited resources and power consumption and is suitable for deploying lightweight neural networks at the network edge.
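The multiply-saving trick the abstract refers to can be shown with the smallest Winograd instance, F(2,3): two outputs of a 3-tap convolution computed with 4 multiplications instead of 6, trading multiplies for adds, which is what lets an FPGA accelerator economize on DSP slices. This is the standard textbook transform, shown here only as an illustration of the principle.

```python
# Winograd F(2,3): 2 outputs of a 3-tap valid correlation in 4 multiplies.
def winograd_f23(d, g):
    """d: 4 inputs, g: 3 filter taps -> 2 outputs."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return [m1 + m2 + m3, m2 - m3 - m4]

def direct(d, g):
    """Reference: the same 2 outputs with 6 multiplies."""
    return [sum(d[i + k] * g[k] for k in range(3)) for i in range(2)]

d, g = [1.0, 2.0, 3.0, 4.0], [0.5, -1.0, 2.0]
# winograd_f23(d, g) matches direct(d, g) exactly
```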
Transplantation and Optimization of Row-vector-matrix Multiplication in Complex Domain Based on FT-M7002
MO Shangfeng, ZHOU Zhenfen, HU Yonghua, XU Minmin, MAO Chunxian, YUAN Yudi
Computer Science. 2023, 50 (11A): 220900277-6.  doi:10.11896/jsjkx.220900277
Abstract PDF(3016KB) ( 124 )   
References | Related Articles | Metrics
FT-M7002 is a high-performance DSP independently developed in China with powerful vector processing capability. To give full play to its performance advantages, it is urgent to port and optimize the efficient VSIP function library for FT-M7002. Row-vector-matrix multiplication in the complex domain is a frequently used routine in the VSIP library, widely applied in digital communication, image processing and other fields. This paper studies the optimization of complex-domain row-vector-matrix multiplication on the FT-M7002 DSP, improving performance by switching from column-wise to row-wise traversal of the matrix, together with vectorization, loop unrolling and software pipelining. Test results show that the optimized vector C algorithm achieves a speedup of 6.2~20.6 over the VSIP library function, and the assembly-optimized algorithm achieves a further speedup of 3.4~14.3 over the vector C algorithm. The speedup effect is obvious.
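The access-pattern change credited above can be illustrated in plain Python: a 1xN complex row vector times an NxM complex matrix, accumulating over matrix rows (contiguous, vectorizable access) rather than gathering down each column. Dimensions and values are illustrative.

```python
# Row-wise complex row-vector x matrix sketch (illustrative data).
def rowvec_matmul(v, m):
    """v: list of N complex; m: N rows of M complex -> list of M complex."""
    cols = len(m[0])
    out = [0j] * cols
    for i, vi in enumerate(v):          # walk each matrix row once,
        row = m[i]                      # accumulating into the output:
        for j in range(cols):           # contiguous access per row
            out[j] += vi * row[j]
    return out

v = [1 + 1j, 2 - 1j]
m = [[1 + 0j, 0 + 1j],
     [0 - 1j, 2 + 0j]]
result = rowvec_matmul(v, m)
```

On a vector DSP the inner loop over `row[j]` maps to wide vector loads and fused multiply-accumulates, which is why the row-wise form pays off.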
Application of Air-Sea Coupled Mode in High-speed Interconnection Environment
HAN Qiqi, LIU Xin
Computer Science. 2023, 50 (11A): 221000136-5.  doi:10.11896/jsjkx.221000136
Abstract PDF(3023KB) ( 133 )   
References | Related Articles | Metrics
With the development of supercomputers, large-scale numerical computing and big data analysis place increasing demands on high-performance computing capacity. Limited by cost and power consumption, a single supercomputing center cannot expand indefinitely, so interconnecting supercomputers in different places is a good solution. Based on the 10 Gbps DWDM fiber optic network from Jinan to Qingdao, a long-distance geographically interconnected computing cluster consisting of nodes in the Jinan cluster and some nodes in the Qingdao cluster is built, realizing unified scheduling of computing resources across the two clusters. The ROMS and WRF models in the air-sea coupled model COAWST are used to conduct multiple sets of comparative experiments with nodes of varying sizes and locations. Experimental results show that it is feasible to perform coupled numerical simulation on a long-distance geographically interconnected computing cluster without a substantial drop in performance. Simulation results of WRF and ROMS running the same case in the Jinan cluster and the Jinan-Qingdao cluster are the same. When WRF runs in the Jinan cluster and ROMS runs in the Qingdao cluster, the running time is 5% longer than when both run in the Jinan cluster. When WRF and ROMS are split across the Jinan-Qingdao cluster, communication takes up a lot of time. The high-speed interconnection environment is therefore more suitable for coupled models with low communication requirements.
Information Security
Review of Relationship Between Side-channel Attacks and Fault Attacks
WU Tong, ZHOU Dawei, OU Qingyu, CHU Weiyu
Computer Science. 2023, 50 (11A): 220700223-7.  doi:10.11896/jsjkx.220700223
Abstract PDF(1731KB) ( 129 )   
References | Related Articles | Metrics
Side-channel attacks and fault attacks are widely used at present. This paper analyzes and compares the leakage models of the two attack methods and expounds their inherent consistency at both the algorithm and physical levels. Finally, current research hotspots, such as building a unified physical leakage function model, proposing a unified physical security evaluation standard, and designing a general protection strategy, are analyzed, which is of great significance for further research from the perspective of the relationship between the two.
Implementation and Verification of Reinforcement Learning Strategy in Automated Red Teaming Testing
CHEN Yufei, LI Saifei, ZHANG Lijie, ZHAO Yue
Computer Science. 2023, 50 (11A): 230200162-6.  doi:10.11896/jsjkx.230200162
Abstract PDF(2399KB) ( 151 )   
References | Related Articles | Metrics
Red teaming testing is a method to evaluate the security of a network system by simulating real hacker attack behavior. However, manual testing currently suffers from high cost and poor adaptability. Intelligent, automated red teaming testing is a hot research topic, aiming to reduce the cost of red teaming testing and improve the performance and efficiency of cybersecurity assessments. The automated attack strategy is the core of automated red teaming testing, designed to replace security experts in attack technique decision-making. In this paper, red teaming attack techniques are mapped to reinforcement learning, the red teaming testing process is modeled as a Markov decision process, and fixed and reinforcement learning strategies are implemented through a finite state machine. The reinforcement learning strategies are trained and tested in a real network environment to verify their convergence and feasibility. Experimental results show that the SARSA(λ) algorithm outperforms the other reinforcement learning algorithms with the fastest convergence speed, and the three reinforcement learning strategies achieve the test objective stably, performing much better than the fixed strategy.
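The SARSA(λ) mechanics behind the winning strategy can be sketched on a toy Markov decision process: states stand in for stages of the modeled attack process, and eligibility traces propagate the final reward back along the visited path, which is what speeds convergence relative to one-step methods. The 5-state chain and all hyperparameters below are illustrative assumptions, not the paper's environment.

```python
import random

# SARSA(lambda) on a toy chain: states 0..n, action 1 moves forward,
# action 0 moves back; reaching state n yields reward 1, ending the episode.
def sarsa_lambda(episodes=200, n=5, alpha=0.2, gamma=0.9, lam=0.8, eps=0.1,
                 seed=1):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n + 1)]

    def policy(s):
        if rng.random() < eps:                  # epsilon-greedy exploration
            return rng.randrange(2)
        return max((1, 0), key=lambda a: q[s][a])

    for _ in range(episodes):
        e = [[0.0, 0.0] for _ in range(n + 1)]  # eligibility traces
        s, a = 0, policy(0)
        while s < n:
            s2 = min(s + 1, n) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n else 0.0
            a2 = policy(s2)
            delta = r + gamma * q[s2][a2] * (s2 < n) - q[s][a]
            e[s][a] += 1.0                      # accumulate trace
            for st in range(n + 1):             # TD error flows back along
                for ac in (0, 1):               # the decaying trace
                    q[st][ac] += alpha * delta * e[st][ac]
                    e[st][ac] *= gamma * lam
            s, a = s2, a2
    return q

q = sarsa_lambda()
```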
Batch Zeroth Order Gradient Symbol Method Based on Substitution Model
LI Yanda, FAN Chunlong, TENG Yiping, YU Kaibo
Computer Science. 2023, 50 (11A): 230100036-6.  doi:10.11896/jsjkx.230100036
Abstract PDF(2690KB) ( 122 )   
References | Related Articles | Metrics
In the field of adversarial attacks on neural networks, for universal attacks on black-box models, how to generate a universal perturbation that causes most samples to be misclassified is an urgent problem. Existing black-box universal perturbation generation methods have poor attack effects, and the generated perturbations are easy to detect with the naked eye. To solve this problem, this paper takes typical convolutional neural networks as the research object and proposes a batch zeroth-order gradient sign method based on substitution models. The method initializes the universal perturbation with white-box attacks on a set of substitute models, then updates it stably and efficiently by querying the target model under black-box conditions. Experimental results on two image classification datasets (CIFAR-10 and SVHN) show that the attack capability of this method is significantly improved, and the performance of generating universal perturbations is increased by 3 times.
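The black-box update step described above can be illustrated with the zeroth-order analogue of the gradient sign trick: since only a scalar loss can be queried, each coordinate's gradient is estimated by symmetric finite differences, and only its sign drives the step. The quadratic "loss" below stands in for a model's output and is purely illustrative.

```python
# Toy zeroth-order gradient-sign step (illustrative quadratic loss).
def zo_sign_step(loss, x, eps=0.1, h=1e-4):
    g_sign = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        diff = loss(xp) - loss(xm)          # two queries per coordinate
        g_sign.append(1.0 if diff > 0 else -1.0)
    return [xi - eps * s for xi, s in zip(x, g_sign)]   # descend the sign

target = [0.5, -0.3, 0.8]
loss = lambda x: sum((a - b) ** 2 for a, b in zip(x, target))
x = [0.0, 0.0, 0.0]
for _ in range(5):
    x = zo_sign_step(loss, x)
# after a few steps the loss has dropped substantially
```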
Hybrid Encryption Algorithm Based on I-SM4 and SM2
SUN Min, SHAN Tong, XU Senwei
Computer Science. 2023, 50 (11A): 221100116-4.  doi:10.11896/jsjkx.221100116
Abstract PDF(2191KB) ( 142 )   
References | Related Articles | Metrics
In recent years, data leakage incidents have occurred frequently, and information security issues have become increasingly prominent. Since a single encryption algorithm cannot meet the security requirements of information in transmission, data is generally encrypted with a hybrid encryption algorithm. Existing hybrid encryption algorithms are mainly built on encryption algorithms designed abroad, which does not meet the autonomous and controllable requirements of cyberspace security. Aiming at this problem, a new hybrid encryption algorithm is designed by combining an improved SM4 algorithm (I-SM4) with SM2. It improves the key expansion part of SM4, using a linear congruence sequence instead of the original key expansion method to derive the round keys, which reduces the correlation between round keys and improves key security. In addition, combining I-SM4 with SM2 strengthens the management of I-SM4 keys and improves security on the one hand, and reduces the time required compared with using SM2 alone on the other. Experiments and analysis show that the proposed hybrid encryption algorithm can effectively improve the confidentiality, integrity and non-repudiation of information during network transmission.
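The linear-congruence idea can be sketched schematically: every round key is produced from a linear congruential sequence seeded by the master key, rather than chained from previous round keys as in SM4's own schedule. The 32-round/32-bit layout mirrors SM4's shape, but the LCG constants and the key-folding step below are hypothetical illustrations, not the paper's construction.

```python
# Schematic LCG-based round-key derivation (hypothetical constants).
MASK32 = 0xFFFFFFFF
A, C = 1664525, 1013904223          # classic Numerical Recipes LCG constants

def lcg_round_keys(master_key, rounds=32):
    """Derive `rounds` 32-bit round keys from a 128-bit master key."""
    state = master_key & ((1 << 128) - 1)
    seed = (state ^ (state >> 64)) & MASK32   # fold the key into 32 bits
    keys = []
    for _ in range(rounds):
        seed = (A * seed + C) & MASK32        # x_{n+1} = (a*x_n + c) mod 2^32
        keys.append(seed)
    return keys

rks = lcg_round_keys(0x0123456789ABCDEFFEDCBA9876543210)
```

Because each key depends only on the LCG state, consecutive round keys carry no algebraic relation from the original SM4 schedule, which is the decorrelation the abstract claims.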
Grey Evaluation Method of Network Security Grade Based on Comprehensive Weighting
QIN Futong, YUAN Xuejun, ZHOU Chao, FAN Yongwen
Computer Science. 2023, 50 (11A): 230300144-6.  doi:10.11896/jsjkx.230300144
Abstract PDF(1773KB) ( 117 )   
References | Related Articles | Metrics
Network security grade evaluation is the key to graded protection of information systems. To evaluate the network security grade, it is necessary to establish an index system according to national or industrial standards for network security grade protection, set index weights, and select an appropriate model for comprehensive evaluation. Based on the analytic hierarchy process and rough set theory, the indexes are comprehensively weighted, which overcomes the subjectivity of index weight setting and the limitations of the sample data. The grey correlation degree measures the correlation between the comparison sequences and the target sequence, better reflecting the degree of agreement between the actual network security level and the evaluation standard. An example shows that the proposed method can effectively evaluate the network security grade.
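The grey correlation degree used above has a standard closed form: each comparison sequence is scored against the reference (target) sequence through the grey relational coefficient with resolution ρ = 0.5, and the sequence with the highest degree matches the target grade best. The data below are illustrative.

```python
# Standard grey relational degree (unweighted; illustrative data).
def grey_relational_degree(reference, sequences, rho=0.5):
    deltas = [[abs(r - x) for r, x in zip(reference, seq)]
              for seq in sequences]
    d_min = min(min(row) for row in deltas)   # global min of |ref - seq|
    d_max = max(max(row) for row in deltas)   # global max of |ref - seq|
    coeff = lambda d: (d_min + rho * d_max) / (d + rho * d_max)
    return [sum(coeff(d) for d in row) / len(row) for row in deltas]

ref = [1.0, 0.9, 0.8, 1.0]                   # target (ideal) sequence
degrees = grey_relational_degree(ref, [[0.95, 0.88, 0.82, 0.97],
                                       [0.60, 0.50, 0.40, 0.70]])
# the first sequence tracks the target closely and scores higher
```

In the comprehensive-weighting setting the per-point coefficients would be combined with the AHP/rough-set weights instead of a plain mean.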
Dummy Location Generation Algorithm Against Side Information Inference Attack
ZHANG Xuejun, YANG Yixing, LI Jiale, TIAN Feng, HUANG Haiyan, HUANG Shan
Computer Science. 2023, 50 (11A): 221000036-9.  doi:10.11896/jsjkx.221000036
Abstract PDF(3622KB) ( 126 )   
References | Related Articles | Metrics
To test the security of existing dummy location generation algorithms, a multiple query request attack algorithm (MQRA) is designed. To effectively protect users' location privacy, a dummy location generation algorithm against side-information inference attacks (DLG_SIA) is then proposed. It comprehensively considers side information such as query probability, time distribution, location semantics and physical dispersion to generate an effective dummy location set that resists probability distribution attacks, location semantics attacks and location homogeneity attacks, preventing attackers from filtering out dummy locations with side information. When a user requests for the first time, DLG_SIA first uses location entropy and time entropy to select location points with query probability similar to that of the current request time to form a dummy location set; it then uses adjusted cosine similarity to select location points that satisfy semantic differences. Next, distance entropy is used to ensure that the selected location points cover a larger anonymity range, and the best dummy location set for the current request location is cached. Security analysis and simulation results show that MQRA can identify a user's real location in dummy location sets generated by existing algorithms with high probability, while DLG_SIA can effectively resist side-information inference attacks and protect users' location privacy.
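The query-probability side channel motivating this design can be shown in a few lines: the entropy of an anonymity set is maximized when all members are equally likely to issue a query, so dummies are drawn from cells whose historical query probability is closest to the real location's. The probability grid below is an illustrative assumption.

```python
import math

# Entropy-guided dummy selection sketch (illustrative query probabilities).
def set_entropy(probs):
    """Shannon entropy of the normalized query probabilities in the set."""
    total = sum(probs)
    return -sum(p / total * math.log2(p / total) for p in probs)

def pick_dummies(cells, real, k):
    """cells: {cell_id: query_prob}; return real + k-1 most similar cells."""
    p_real = cells[real]
    others = sorted((c for c in cells if c != real),
                    key=lambda c: abs(cells[c] - p_real))
    return [real] + others[:k - 1]

cells = {"a": 0.30, "b": 0.29, "c": 0.28, "d": 0.05, "e": 0.01}
chosen = pick_dummies(cells, "a", k=3)
h_good = set_entropy([cells[c] for c in chosen])
h_bad = set_entropy([cells[c] for c in ["a", "d", "e"]])
# the similar-probability set has higher entropy, so the attacker's
# probability-distribution filter gains far less information from it
```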
Lightweight Group Key Agreement for Industrial Internet of Things
WANG Zichen, YUAN Chengsheng, WANG Yili, GUO Ping, FU Zhangjie
Computer Science. 2023, 50 (11A): 230700075-10.  doi:10.11896/jsjkx.230700075
Abstract PDF(2852KB) ( 185 )   
References | Related Articles | Metrics
In recent years, the industrial Internet of Things based on group information sharing has been widely used in industrial manufacturing, financial trade and other fields due to its real-time, security and information exchange characteristics. However, this technology relies on group key agreement protocols, which suffer from defects such as high overhead, weak security and poor scalability. How to design a secure and efficient group key agreement protocol has therefore become a scientific problem that urgently needs to be solved. In this paper, using the mathematical structure of balanced incomplete block designs and the elliptic curve Qu-Vanstone (ECQV) authentication protocol, a new structured group key agreement protocol is proposed. First, to reduce the computational overhead of the protocol, the ECQV authentication protocol is used to avoid pairing operations. Then, the security of the proposed protocol is proved under the ECDDH assumption. Finally, to reduce the communication overhead and improve scalability, the existing group key agreement protocol is extended using asymmetric balanced incomplete block designs, and the number of supported members is extended from p^2 to both p^2 and p^2+p+1. Experimental results show that the proposed protocol reduces the computational overhead to O(n√(nm)) and the communication overhead to O(n√n). While ensuring security against chosen plaintext attacks, the protocol can flexibly and adaptively expand the number of participants in group key agreement, further improving the security and efficiency of the group key agreement protocol.
Search and Optimization of GIFT Integral Distinguisher Based on MILP
ZU Jinyuan, LIU Jie, SHI Yipeng, ZHANG Tao, ZHANG Guoqun
Computer Science. 2023, 50 (11A): 220900231-8.  doi:10.11896/jsjkx.220900231
Abstract PDF(1758KB) ( 132 )   
References | Related Articles | Metrics
The lightweight block cipher GIFT, proposed by Banik et al., has been selected for the final round of the NIST lightweight cryptography standardization competition. Linear and differential cryptanalysis of GIFT have been studied, but its integral cryptanalysis still needs further research. Aiming at the redundancy in expressing division trails during integral cryptanalysis of GIFT, an integral distinguisher solving and search optimization algorithm based on mixed integer linear programming (MILP) is proposed. First, the linear layer and the nonlinear layer of GIFT are each described by their bit-based division property: the linear layer is expressed by its propagation rule, and for the nonlinear S-box a greedy algorithm is used to simplify the expression based on the propagation rule, yielding 15 inequalities as constraints; solving the MILP model finds 64 9-round integral distinguishers. On this basis, to address the insufficient accuracy of the greedy-algorithm-based MILP model, a MILP model is introduced to reconstruct the bit-based division property of the S-box, and a MILP-based reduction algorithm is designed to optimize the search for GIFT integral distinguishers; re-solving the MILP model yields two 13-round integral distinguishers. The new MILP-based S-box reduction algorithm thus optimizes the expression of the S-box division property, effectively increases the number of rounds covered by integral distinguishers for GIFT, and improves the integral attack.
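For reference, the standard MILP constraints for bit-based division property propagation through the basic operations can be generated as below (following the widely used modeling of COPY, XOR and AND; the textual constraint format is illustrative — a real model would feed these rows to a solver):

```python
def copy_constraints(a, b1, b2):
    """COPY a -> (b1, b2): the input division bit splits over the outputs."""
    return [f"{a} - {b1} - {b2} = 0"]

def xor_constraints(a1, a2, b):
    """XOR (a1, a2) -> b: the output division bit is the sum of the inputs."""
    return [f"{a1} + {a2} - {b} = 0"]

def and_constraints(a1, a2, b):
    """AND (a1, a2) -> b: b dominates each input but cannot exceed their sum."""
    return [f"{b} - {a1} >= 0", f"{b} - {a2} >= 0",
            f"{b} - {a1} - {a2} <= 0"]

# Constraint rows for one toy AND gate in the model:
model_rows = and_constraints("a0", "a1", "b0")
```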
Safe Efficient and Decentralized Model for Mobile Crowdsensing Incentive
ZHOU Yuying, MA Miao, SHEN Qiqi, REN Jie, ZHANG Mingrui, YANG Bo
Computer Science. 2023, 50 (11A): 221000184-10.  doi:10.11896/jsjkx.221000184
Abstract PDF(3998KB) ( 133 )   
References | Related Articles | Metrics
To solve the problems of trust safety and inefficient sensing task execution in existing mobile crowdsensing incentive models, this paper proposes a safe, efficient and decentralized model for mobile crowdsensing incentive. Employing blockchain to decentralize user management, the model completes the interaction and on-chain transactions among task publishers, participants and miners, and realizes task publishing, participant selection, data quality evaluation and payment through a participant control smart contract (PCSC) and a task control smart contract (TCSC). For participant selection, this paper proposes a "task-participants" matching mechanism based on a BP neural network, which uses the time and location attributes of participants in historical data to find the most suitable participants for the current task. An adaptive reputation updating mechanism is then suggested: give reward and reputation incentives to winners, give reputation compensation to non-winners who are willing to participate, and give reputation punishment to continual non-participants who are suitable for the current task. Security analysis and experimental results show that the proposed incentive model is safe, efficient and decentralized: it not only significantly improves the task completion rate, sensed data quality, participants' benefits and user participation on the international open benchmark Brightkite dataset, but also works on blockchain thanks to the efficiency of PCSC and TCSC implemented in Solidity.
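The adaptive reputation rule quoted above can be sketched as follows; the increment sizes and the flag-based interface are illustrative assumptions, not the paper's parameters.

```python
def update_reputation(rep, is_winner, was_willing, was_suitable,
                      reward=5, compensation=2, penalty=3):
    """Apply the three-branch adaptive reputation update to a participant."""
    if is_winner:
        return rep + reward        # reward and reputation incentive to the winner
    if was_willing:
        return rep + compensation  # compensation for a willing non-winner
    if was_suitable:
        return rep - penalty       # punishment for a suitable non-participant
    return rep                     # unsuitable non-participants are unaffected
```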
Deepfake Images Detection Based on Quantitative Data Features Statistics
XIE Fei, GAO Shuhui
Computer Science. 2023, 50 (11A): 230300013-9.  doi:10.11896/jsjkx.230300013
Abstract PDF(5232KB) ( 133 )   
References | Related Articles | Metrics
Due to its "low threshold, high efficiency and high simulation" characteristics, deepfake technology is abused to forge identities, and the resulting personal information security problems pose serious challenges to public security governance. Current mainstream deepfake image detection relies mainly on convolutional features, while quantitative features, which have the advantages of small storage and low computational cost, are rarely used. This paper explores the degree of correlation between the texture and color features of images and image authenticity, selects effective features for automatic detection of deepfake images, and studies the application value of quantitative features in deepfake image identification. 40 000 images from the ForgeryNet dataset are used as experimental samples and divided into four groups. Texture features and color features in the Gray, YCrCb, Lab, HSV and RGB color spaces are extracted from each group, and features that are both significantly different and correlated are screened by the Mann-Whitney U test and point-biserial correlation analysis. XGBoost, a logistic regression classifier, linear SVM, a multilayer perceptron and TabNet are then used to verify the selected features, and finally compared with mainstream convolutional neural networks. Among the five algorithms, MLP and LR are less effective, XGBoost and LSVM perform better, and TabNet is unstable and greatly affected by the classification type, with accuracy ranging from 52% to 89%. The accuracy of features selected based on mathematical statistics is improved: for example, in the true-versus-fake image group, the screened features and texture features verified with XGBoost are 1.10% and 1.43% more accurate than all features, respectively, and the accuracy of texture features verified by LSVM and MLP improves by 0.12% and 0.10%, respectively. The structured feature algorithm based on screening achieves higher accuracy than mainstream convolutional neural networks, texture features outperform color features, and deepfake images with identity replacement are easier to recognize.
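The Mann-Whitney U screening step can be illustrated for a single feature: given that feature's values over the real-image group and the fake-image group, a pure-Python version of the U statistic (ties receive average ranks) might look like this.

```python
def mann_whitney_u(x, y):
    """U statistic of sample x versus sample y (rank-sum form, average
    ranks for ties); a large or small U signals a distribution shift."""
    values = x + y
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                           # extend over a tie group
        avg_rank = (i + j) / 2 + 1           # average of the 1-based positions
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    r1 = sum(ranks[:len(x)])                 # rank sum of the first sample
    return r1 - len(x) * (len(x) + 1) / 2
```

Features whose U statistic deviates significantly from n1*n2/2 discriminate real from fake and are kept for the classifiers.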
Enhanced Federated Learning Frameworks Based on CutMix
WANG Chundong, DU Yingqi, MO Xiuliang, FU Haoran
Computer Science. 2023, 50 (11A): 220800021-8.  doi:10.11896/jsjkx.220800021
Abstract PDF(3430KB) ( 125 )   
References | Related Articles | Metrics
The emergence of federated learning solves the "data silos" problem of traditional machine learning: it enables collective model training while protecting the privacy of each client's local data. When client datasets are independent and identically distributed (IID), federated learning can achieve accuracy similar to centralized machine learning. In real scenarios, however, due to differences in client devices and geographic locations, client datasets often contain noisy data and are non-independent and identically distributed (Non-IID). This paper therefore proposes a CutMix-based federated learning framework, CutMix enhanced federated learning (CEFL), which first filters out noisy data through a data cleaning algorithm and then trains with CutMix-based data augmentation. Compared with traditional federated learning algorithms, CutMix enhanced federated learning improves model accuracy by 23% and 19% on Non-IID datasets.
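As a sketch of the CutMix augmentation step (following the published CutMix formulation; the image size and mixing ratio below are illustrative): the cut width and height scale with sqrt(1 - lambda), and lambda is re-adjusted to the actual area ratio after the box is clipped to the image.

```python
import math
import random

def cutmix_bbox(width, height, lam):
    """Return the (x1, y1, x2, y2) region to paste from the second image
    and the adjusted mixing ratio after clipping to the image borders."""
    cut_w = int(width * math.sqrt(1.0 - lam))
    cut_h = int(height * math.sqrt(1.0 - lam))
    cx, cy = random.randrange(width), random.randrange(height)   # box center
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, width)
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, height)
    lam_adj = 1.0 - (x2 - x1) * (y2 - y1) / (width * height)
    return (x1, y1, x2, y2), lam_adj
```

The training label is then mixed as lam_adj * label_a + (1 - lam_adj) * label_b.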
Federated Learning Privacy-preserving Approach for Multimodal Medical Data
ZHANG Lianfu, TAN Zuowen
Computer Science. 2023, 50 (11A): 230800021-8.  doi:10.11896/jsjkx.230800021
Abstract PDF(2965KB) ( 210 )   
References | Related Articles | Metrics
Electronic health records (EHRs) have become a valuable resource for biomedical research. By learning multi-dimensional features hidden in EHR data that are difficult for humans to distinguish, machine learning methods can achieve better results. However, some existing studies only consider part of the privacy leakage that may occur during or after model training, resulting in single privacy preservation measures that cannot cover the whole life cycle of machine learning. In addition, most existing schemes focus on federated learning privacy preservation methods for single-modal data. Therefore, a federated learning privacy preservation approach for multimodal data is proposed. To prevent an adversary from stealing original data information through inversion attacks, differential privacy perturbation is applied to the model parameters uploaded by each participant. To prevent leakage of each participant's local model information during training, the Paillier cryptosystem is used for homomorphic encryption of local model parameters. The security of the method is analyzed theoretically: a security model is defined and the security of the subprotocols is proved. Experimental results show that this method can preserve the privacy of training data and models with almost no loss of performance.
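As an illustration of the parameter-perturbation step, here is a Laplace-mechanism sketch (the abstract does not specify the noise distribution, and the sensitivity and epsilon values are placeholders):

```python
import math
import random

def perturb_parameters(params, sensitivity, epsilon, rng=random):
    """Add Laplace(0, sensitivity/epsilon) noise to each model parameter
    before uploading, so individual training records are harder to invert."""
    scale = sensitivity / epsilon
    noisy = []
    for w in params:
        u = rng.random() - 0.5                 # uniform in [-0.5, 0.5)
        sign = 1.0 if u >= 0 else -1.0
        # inverse-CDF sampling of the Laplace distribution
        noise = -scale * sign * math.log(max(1.0 - 2.0 * abs(u), 1e-300))
        noisy.append(w + noise)
    return noisy
```

A smaller epsilon gives stronger privacy but larger perturbation of the uploaded model.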
Study on Intrusion Detection Algorithm Based on TCN-BiLSTM
BAI Wanrong, WEI Feng, ZHENG Guangyuan, WANG Baohui
Computer Science. 2023, 50 (11A): 230300142-8.  doi:10.11896/jsjkx.230300142
Abstract PDF(2547KB) ( 141 )   
References | Related Articles | Metrics
Network security is directly related to national security, so accurately and efficiently detecting network threats in the power grid is very important. Aiming at the small receptive field of traditional CNNs and their neglect of the temporal characteristics of data, and combining the spatial and temporal characteristics of network traffic, an attention-based intrusion detection algorithm built on a temporal convolutional network (TCN) and BiLSTM is proposed. First, network traffic features are encoded. The forest optimization feature selection algorithm is then used to reduce data redundancy, and resampling is carried out to address data imbalance. Finally, the data is fed into the deep neural network, where the TCN and BiLSTM networks extract features for learning, a self-attention mechanism performs weight allocation, and classification is carried out to realize intrusion detection. Experiments on the NSL-KDD dataset show that the algorithm can detect network intrusions effectively.
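The self-attention weight-allocation step can be sketched in plain Python (a single-head, scaled dot-product variant in which queries, keys and values all equal the input features; the 2-D toy vectors are illustrative):

```python
import math

def self_attention(seq):
    """Scaled dot-product self-attention over a list of feature vectors:
    each output is a softmax-weighted mixture of all positions."""
    d = len(seq[0])
    scale = math.sqrt(d)
    out = []
    for q in seq:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / scale for k in seq]
        m = max(scores)                        # stabilize the softmax
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        out.append([sum(w * v[j] for w, v in zip(weights, seq))
                    for j in range(d)])
    return out

mixed = self_attention([[1.0, 0.0], [0.0, 1.0]])
```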
Interdiscipline & Application
Study on Scheduling Algorithm of Intelligent Order Dispatching
JIA Jingdong, ZHANG Minnan, ZHAO Xiang, HUANG Jian
Computer Science. 2023, 50 (11A): 230300029-7.  doi:10.11896/jsjkx.230300029
Abstract PDF(2113KB) ( 141 )   
References | Related Articles | Metrics
With the development of national digital construction, intelligent and specialized social governance has become a basic requirement for the progress of urban science and technology. All government systems need to handle people's demands efficiently and accurately. However, the public appeal information collected through the appeal channels of major government portals is manually judged by responsible departments and then manually assigned to relevant departments for follow-up verification and processing, which greatly limits the efficiency and accuracy of appeal handling. Using artificial intelligence and deep learning methods, an intelligent dispatching algorithm trained on real public demand data can dispatch demands to the relevant departments accurately and efficiently, accelerating government processing and greatly reducing unnecessary labor costs; research on such a scheduling algorithm is therefore of great significance. First, the data is denoised and desensitized, and hierarchical stitching is used to build data labels and a standard process library for label alignment. Then, a baseline model for address recognition is trained on publicly available datasets, and a label fusion method based on category-proportion sampling is proposed to solve the problem of imbalanced data in work order classification. Experimental results show that the method improves on the baseline model to varying degrees. Finally, combining the classification model and the address recognition model, an intelligent response template is constructed to complete the entire intelligent dispatching process for complaint handling.
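The category-proportion sampling idea can be sketched as a simple class-balancing resampler (a hedged illustration only; the paper's label fusion procedure is more involved):

```python
import random

def proportional_sample(records, labels, target_per_class, rng=random):
    """Resample so that every class contributes target_per_class records;
    minority classes are oversampled with replacement."""
    by_class = {}
    for rec, lab in zip(records, labels):
        by_class.setdefault(lab, []).append(rec)
    balanced = []
    for lab, recs in by_class.items():
        if len(recs) >= target_per_class:
            balanced.extend(rng.sample(recs, target_per_class))
        else:
            balanced.extend(rng.choices(recs, k=target_per_class))
    return balanced
```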
Novel Method for Trash Classification Based on Causal Inference
YUAN Zhen, LIU Jinfeng
Computer Science. 2023, 50 (11A): 220800218-6.  doi:10.11896/jsjkx.220800218
Abstract PDF(2700KB) ( 123 )   
References | Related Articles | Metrics
Trash classification is an effective measure for protecting the environment and improving resource utilization. In recent years, deep learning has succeeded in various fields thanks to its powerful modeling capability, making deep-learning-based trash classification an emerging direction. Most trash classification datasets suffer from an uneven number of images per category (a long-tail distribution). This paper proposes a new framework based on causal inference for TrashNet, a long-tail trash classification dataset. The framework mitigates the long-tail problem of TrashNet by finding the direct causal effects of the input samples through a causal inference approach. The model is trained with transfer learning, which reduces the number of trainable parameters, and is de-confounded using causal intervention and counterfactual inference. The proposed method is validated with class activation maps (CAM), and experimental results show that the proposed model has better feature extraction capability, recognizes the difficult classes in TrashNet better, and achieves an accuracy of 94.23% on the TrashNet dataset.
Study on Decomposition of Two-dimensional Polygonal Objects
JIN Jianguo
Computer Science. 2023, 50 (11A): 230300237-5.  doi:10.11896/jsjkx.230300237
Abstract PDF(2376KB) ( 121 )   
References | Related Articles | Metrics
This paper studies how to decompose two-dimensional polygonal objects into meaningful parts. Psychologists have found that meaningful decomposition of objects is an important process in human object recognition. In particular, in image recognition, after the edge of an object in an image has been detected, the edge can be expressed as a closed polygon, so decomposing that polygon is a key step in recognizing the object. In this paper, we first separate the vertices of the polygon into several clusters by spectral analysis combined with K-means, and then, by computing the cut-line fitness proposed in the paper, the algorithm recursively chooses the best cut line between clusters and within clusters. Experimental results show the effectiveness of this method: quantitative analysis and comparison against a well-known manually decomposed dataset show that the algorithm's decompositions are in line with human thinking and achieve good results.
Verification Algorithm for Weak Prognosability of Discrete Event Systems
CAO Weihua, LIU Fuchun
Computer Science. 2023, 50 (11A): 220800224-6.  doi:10.11896/jsjkx.220800224
Abstract PDF(2040KB) ( 113 )   
References | Related Articles | Metrics
This paper proposes the concept of weak prognosability. For fault detection, prognosis can reduce the loss that faults cause to the system more than diagnosis can. However, even if most fault strings are prognosable, as long as one fault string is unprognosable and can only be diagnosed, the whole system is unprognosable and can only be handled by diagnosis, which is unfavorable to the majority of fault strings. Weak prognosability avoids this situation: it predicts whether the system will be in a fault state in the future but, unlike prognosability, does not require all fault event strings to be prognosable. Under weak prognosability, prognosable fault strings trigger an alarm before the fault occurs, while unprognosable but diagnosable fault strings trigger an alarm after the fault occurs. A verifier is constructed to test the weak prognosability of a system, a polynomial algorithm for weak prognosability based on the verifier is given, and necessary and sufficient conditions for weak prognosability are provided.
Study on Programmatic Trading Investors Recognition Based on Model Fusion
YUAN Yukun, XU Gang, WU Wei, XU Li
Computer Science. 2023, 50 (11A): 230300131-6.  doi:10.11896/jsjkx.230300131
Abstract PDF(1980KB) ( 158 )   
References | Related Articles | Metrics
Programmatic trading has recently gained popularity among financial institutions due to advances in information and electronic technology in the financial market. It has a significant impact on futures markets and draws the attention of regulators and investors. This paper develops recognition models for programmatic trading investors based on model fusion, combining rule-based models and machine learning models, and validates the model on investor data from China's A-share market. The proposed model achieves over 90% accuracy and recall in recognizing programmatic trading accounts, outperforming the state of the art. Our experiments show that the proposed model can support the technical regulation of programmatic trading.
Simulation of Equipment Procurement Model Based on Dynamic Evolutionary Game
LI Yunzhe, DONG Peng, YE Weimin, WEN Haolin
Computer Science. 2023, 50 (11A): 220900051-10.  doi:10.11896/jsjkx.220900051
Abstract PDF(4156KB) ( 185 )   
References | Related Articles | Metrics
Aiming at the equipment procurement problem, a two-party procurement model with the purchaser and the contractor as the main bodies, and a three-party procurement model with the purchaser, the main contractor and the subcontractor as the main bodies, are established. Based on dynamic evolutionary game theory, the strategy selection and game equilibrium of each subject under the two procurement models are analyzed, and the evolution of the game is dynamically simulated using AnyLogic. In the two-party game model, the contractor's evolutionary equilibrium strategy is to not actively perform the contract. The three-party game model, through proper design, can make the main contractor actively perform the contract and thus better guarantee quality. Therefore, when selecting a procurement mode, the purchaser-main contractor-subcontractor mode is more advantageous for the purchaser, and terms such as the supervision accuracy rate, the contract penalty and the spot check rate need to be set reasonably.
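The dynamic evolution of a strategy share can be illustrated with the replicator equation; in this sketch the payoffs to "perform" and "default" are held constant, whereas in the paper they depend on the other parties' strategies and the contract terms.

```python
def replicator_step(x, payoff_perform, payoff_default, dt=0.01):
    """One Euler step of dx/dt = x * (f_perform - f_avg) for the share x of
    contractors playing 'actively perform the contract'."""
    f_avg = x * payoff_perform + (1 - x) * payoff_default
    return x + dt * x * (payoff_perform - f_avg)

def evolve(x0, payoff_perform, payoff_default, steps=2000, dt=0.01):
    """Iterate the replicator dynamics from the initial share x0."""
    x = x0
    for _ in range(steps):
        x = replicator_step(x, payoff_perform, payoff_default, dt)
    return x

# When performing pays more than defaulting, the population converges to
# everyone performing (x -> 1); otherwise it collapses to defaulting.
share = evolve(0.5, payoff_perform=3.0, payoff_default=1.0)
```

Contract terms such as penalties and spot-check rates enter by changing the payoff gap, which is what shifts the evolutionary equilibrium in the paper's models.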
Study on Decision-making for a Low-carbon Supply Chain with Capital Constraint on Both Supply and Demand Sides
WANG Min, LI Liying, ZHOU Jun
Computer Science. 2023, 50 (11A): 221200130-9.  doi:10.11896/jsjkx.221200130
Abstract PDF(1853KB) ( 138 )   
References | Related Articles | Metrics
To alleviate the financing difficulties of supply and demand sides under capital constraints in a low-carbon environment, a bi-level Stackelberg game model, with the bank as the leader, the supplier as the sub-leader and the carbon-dependent manufacturer as the follower, is formulated under the government's carbon cap-and-trade mechanism. In a stochastic demand scenario, the supplier's optimal wholesale price decision, the manufacturer's optimal ordering and emission reduction decisions, and the bank's optimal interest rate decision are investigated. Theoretical and numerical analyses show that when the emission cap allocated by the government is low, the manufacturer's bankruptcy risk increases, and the manufacturer with limited liability adopts a more aggressive ordering strategy. To reduce losses from borrowers' bankruptcy risk, the bank strengthens its regulation of the whole supply chain operation. The more free funds the supplier has, the more beneficial it is for the manufacturer and for overall supply chain performance.
Sound Source Arrival Direction Estimation Based on GRU and Self-attentive Network
HE Ruhan, CHEN Yifan, YU Yongsheng and JIANG Aisen
Computer Science. 2023, 50 (11A): 220900135-7.  doi:10.11896/jsjkx.220900135
Abstract PDF(2103KB) ( 145 )   
References | Related Articles | Metrics
Neural-network-based sound source localization has received wide attention in recent years, but mitigating problems such as the loss of implicit DOA location information and small sample data remains challenging. Therefore, a sound source direction-of-arrival estimation method based on GRU and a self-attention network is proposed. The method uses GRU, which works well on small datasets, as the backbone network to compensate for the difficulty of collecting pure sound data, and builds its training set from multichannel recordings. After short-time Fourier transform feature extraction, the Mel spectrogram and acoustic intensity vector are obtained, and the input features are formed by stacking the multichannel speech spectrograms with the normalized main feature vector. This avoids the corruption of implicit DOA information caused by combining speech spectrograms with GCC-PHAT features, effectively mitigating the loss of implicit DOA location information. The features are fed into a convolutional recurrent neural network for supervised learning to obtain the model parameters. The model output uses 3D Cartesian coordinate regression to obtain DOA location estimates, and a self-attention network is added for parameter back-propagation during training, enabling the network to compute the loss and predict the correlation matrix while training, so as to solve the optimal assignment between predicted and reference localizations. Experimental results show that the network has high localization accuracy and robustness under different reverberation conditions and signal-to-noise ratios.
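The 3D Cartesian regression target can be sketched as the usual azimuth/elevation-to-unit-vector conversion (the angle convention here is an assumption, not taken from the paper):

```python
import math

def doa_to_cartesian(azimuth_deg, elevation_deg):
    """Map a DOA (azimuth, elevation) in degrees to a unit vector on the
    sphere, the regression target used for localization."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (math.cos(el) * math.cos(az),
            math.cos(el) * math.sin(az),
            math.sin(el))
```

Predicted and reference vectors can then be compared by their angular error, acos of the dot product.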
Detection of Farmland Change Based on Unified Attention Fusion Network
LI Tao, WANG Hairui, ZHU Guifu
Computer Science. 2023, 50 (11A): 221100060-6.  doi:10.11896/jsjkx.221100060
Abstract PDF(2764KB) ( 121 )   
References | Related Articles | Metrics
To quickly determine the number of houses illegally built on arable land and realize the detection of houses built on encroached farmland, a unified attention fusion network is proposed. To avoid mutual interference between remote sensing image features from different phases, the network first uses a siamese network instead of VGG16 for feature extraction. Second, to reduce the size of the network model while enlarging the receptive field and obtaining more multi-scale information, a simple pyramid pooling module is used at the bottom layer of the encoding stage. To improve segmentation accuracy and highlight useful features, a unified attention fusion module replaces the original upsampling part for decoding to obtain the change detection results. The network is trained and tested on a dataset of houses built on encroached farmland. Experimental results show that the unified attention fusion network achieves an accuracy of 98.82%, precision of 89.69%, recall of 82.14% and an F1 score of 85.74% on the test set. It can quickly identify suspected illegal houses occupying farmland at different scales, providing a technical detection method for monitoring house construction in rural areas.
Early Screening Method for Depression Based on EEG Signal
REN Shuyao, SONG Jiangling, ZHANG Rui
Computer Science. 2023, 50 (11A): 221100139-6.  doi:10.11896/jsjkx.221100139
Abstract PDF(3890KB) ( 198 )   
References | Related Articles | Metrics
Depression is a common and curable psychiatric disorder. If a prompt diagnosis is made at the early stage of depression (early screening), appropriate treatment can effectively control its progression or even cure it. Traditionally, depression is diagnosed through a doctor's comprehensive judgment based on clinical manifestations and clinical examinations (diagnostic scales, etc.), but the accuracy relies heavily on the physician's clinical experience and the patient's willingness to cooperate. In addition, early-stage symptoms of depression are difficult to observe, making traditional diagnostic methods susceptible to underdiagnosis. Research indicates that the electroencephalogram (EEG) reflects the mental state of subjects effectively from a physiological perspective, which provides an effective means of early screening for depression. On this basis, this paper proposes an EEG-based method combined with deep learning models for early screening of depression. First, the temporal-spectral-spatial sequences of EEG signals are extracted through segmentation, frequency-domain transformation, etc. Second, a hybrid deep neural network is constructed on the extracted sequences to identify the EEG signals of patients with mild depression. Finally, the feasibility and effectiveness of the proposed method are verified by numerical experiments on the public MODMA dataset: its accuracy, recall and sensitivity are 82.64%, 78.42% and 75.37%, respectively.
Medical Image Super-resolution Method Based on Semantic Attention
LIN Yi, ZHOU Peng, CHEN Yanming
Computer Science. 2023, 50 (11A): 221200107-6.  doi:10.11896/jsjkx.221200107
Abstract PDF(3326KB) ( 145 )   
References | Related Articles | Metrics
In medical image processing, clear medical images help doctors diagnose diseases better. However, due to the limitations of imaging equipment, the generated medical images are often of low resolution and thus may be inappropriate for diagnosis, so it is very important to use super-resolution methods to improve image resolution. In recent years, with the development of deep learning, natural image super-resolution methods based on deep learning have been widely studied and have achieved promising performance. Unlike natural image super-resolution, however, medical image super-resolution often serves downstream medical tasks. Downstream tasks such as disease diagnosis and semantic segmentation tend to focus on certain regions of interest, yet traditional image super-resolution methods treat all regions of the image equally, without considering the importance of the regions of interest for those tasks. To tackle this problem, this paper proposes a medical image super-resolution method based on semantic attention. The semantic attention module pays extra attention to the regions of interest through weighting, so that the super-resolved image is more helpful for downstream medical tasks. Experimental results show that the proposed method outperforms other mainstream super-resolution methods on a COVID-19 dataset and the gastrointestinal polyp dataset Kvasir-SEG.
Medical Microscopic Image Segmentation Model Based on CNN Structure and Swin Transformer
SUN Kaixin, LIU Bin, SU Shuguang
Computer Science. 2023, 50 (11A): 230200119-8.  doi:10.11896/jsjkx.230200119
Abstract PDF(4868KB) ( 205 )   
References | Related Articles | Metrics
Medical microscopic image segmentation has important application value in clinical diagnosis and pathological analysis. However, due to complex visual features such as the shape, texture and size of microscopic images, accurately segmenting them is a challenging task. In this paper, we propose a new segmentation model called UMSTC, which is based on a U-shaped structure and combines the U-Net and Swin Transformer models to balance the fine details and macro features of images while maintaining modeling integrity. Specifically, the down-sampling part of UMSTC uses the Swin Transformer network, optimizing its inherent attention mechanism to extract micro and macro features, while the up-sampling part is based on CNN deconvolution and uses a residual mechanism to receive and fuse feature maps from the down-sampling stage, reducing the loss of image synthesis accuracy. Experimental results show that the proposed UMSTC model segments better than current mainstream medical image semantic segmentation models, with mPA and mIoU increasing by approximately 3%~5% and 3%~8%, respectively, and its segmentation results have higher subjective visual quality with fewer artifacts. The UMSTC model therefore has broad application prospects in medical microscopic image segmentation.
Cascade Dynamic Attention U-Net Based Brain Tumor Segmentation
CHEN Bonian, HAN Yutong, HE Tao, LIU Bin, ZHANG Jianxin
Computer Science. 2023, 50 (11A): 221100180-7.  doi:10.11896/jsjkx.221100180
Abstract PDF(3128KB) ( 148 )   
References | Related Articles | Metrics
Brain tumors are common brain diseases that seriously threaten human health, so accurate brain tumor segmentation is vital for the clinical diagnosis and treatment of patients. Due to the varying shapes and sizes, unstable positions and fuzzy boundaries of brain tumors, high-precision automatic segmentation is a challenging task. Recently, U-Net has become the mainstream model for medical image segmentation thanks to its concise architecture and excellent performance, but it still suffers from a limited local receptive field, loss of spatial information and insufficient use of context information. We therefore propose CDAU-Net, a new cascade U-Net model based on dynamic convolution and a non-local attention mechanism. First, a two-stage cascade 3D U-Net architecture is designed to reconstruct more detailed, higher-resolution spatial information of brain tumors. Second, expectation-maximization attention is added to the skip connections of CDAU-Net, making better use of tumor context information by improving the network's ability to capture long-distance dependencies. Finally, ordinary convolutions in CDAU-Net are replaced with locally adaptive dynamic convolutions, further enhancing the network's ability to capture local features. Extensive experiments on the public BraTS 2019/2020 datasets, compared with other representative methods, show that the proposed method is effective: on the BraTS 2019/2020 validation sets, CDAU-Net obtains Dice values of 0.897/0.903, 0.826/0.828 and 0.781/0.786 for whole tumor, tumor core and enhancing tumor segmentation, respectively, achieving good brain tumor segmentation performance.
Medical Image Segmentation Based on Multi-scale Edge Guidance
JIANG Haotian, WANG Qizhi, HUANG Yanglin, ZHANG Yaqin and HU Kai
Computer Science. 2023, 50 (11A): 220900059-7.  doi:10.11896/jsjkx.220900059
Abstract PDF(3460KB) ( 140 )   
References | Related Articles | Metrics
Medical images have small gray-scale variation, and segmentation targets are hard to distinguish from the background, which makes image segmentation challenging. Most existing models learn the high-frequency edges and the low-frequency body of the segmentation jointly, ignoring the differences between high-frequency and low-frequency information and their different proportions in the image. To address this problem, the edge guided V-shape network (EGV-Net), a multi-scale convolutional neural network based on edge guidance, is proposed to learn from two feature perspectives: the low-frequency segmentation body and the high-frequency segmentation edges. The low-frequency features are passed through encoder-decoder connections to learn the main part of the segmentation target. For the high-frequency features, edges are first extracted from the segmentation map by an edge extraction method, and the segmentation edges are then filtered and separated from them. The high-frequency segmentation edges are guided by an edge guidance module to segment the low-frequency edges accurately and recover edge detail. Experimental results on liver images and ISIC2016 show that the proposed algorithm controls the overall segmentation better and segments edge details better than other models.
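The high-frequency edge-extraction step can be sketched as a morphological gradient on a binary segmentation mask (a toy pure-Python version; the abstract does not specify the paper's exact edge extraction method):

```python
def extract_edges(mask):
    """High-frequency edge map of a binary mask: pixels set in the mask
    whose 4-neighbourhood leaves the mask (mask minus its erosion)."""
    h, w = len(mask), len(mask[0])
    edges = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if not mask[i][j]:
                continue
            neighbours = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
            # a foreground pixel is an edge if any neighbour is background
            # or lies outside the image
            if any(not (0 <= a < h and 0 <= b < w) or not mask[a][b]
                   for a, b in neighbours):
                edges[i][j] = 1
    return edges
```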