Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Editors
    Content of Intelligent Computing in our journal
    Survey of the Application of Natural Language Processing for Resume Analysis
    LI Xiao-wei, SHU Hui, GUANG Yan, ZHAI Yi, YANG Zi-ji
    Computer Science    2022, 49 (6A): 66-73.   DOI: 10.11896/jsjkx.210600134
    With the rapid development of information technology and the dramatic growth of digital resources, an enormous number of resumes are generated on the Internet. Analyzing the resumes of job seekers to obtain candidate information, industry categories and job recommendations has long been a concern of scholars. The inefficiency of manual resume analysis has promoted the wide application of natural language processing (NLP) technology in resume analysis. NLP can realize automated analysis of resumes by using artificial intelligence and computer technology to analyze, understand and process natural language. This paper systematically reviews the relevant literature of the past ten years. Firstly, natural language processing is introduced. Then, along the principal line of resume analysis with NLP, recent works in three aspects, resume information extraction, resume classification and resume recommendation, are summarized. Finally, the future development trends of this research area are discussed and the paper is concluded.
    Review of Reasoning on Knowledge Graph
    MA Rui-xin, LI Ze-yang, CHEN Zhi-kui, ZHAO Liang
    Computer Science    2022, 49 (6A): 74-85.   DOI: 10.11896/jsjkx.210100122
    In recent years, the rapid development of Internet technology and reference models has led to exponential growth in the scale of data in the computing world, which contains a great deal of valuable information. How to select knowledge from it and organize and express that knowledge effectively has attracted wide attention, and knowledge graphs were born from this need. Knowledge reasoning over knowledge graphs is one of the hotspots of knowledge graph research, and important achievements have been obtained in fields such as semantic search and intelligent question answering. However, sample data suffer from various defects, such as missing head and tail entities, long query paths and erroneous samples. In the face of these characteristics, knowledge graph reasoning under zero-shot, one-shot, few-shot and multi-shot settings has received growing attention. Based on the basic concepts and background of knowledge graphs, this paper introduces the latest research progress of knowledge graph reasoning methods in recent years. Specifically, according to the size of the sample data, knowledge graph reasoning methods are divided into multi-shot reasoning, few-shot reasoning, and zero-shot and one-shot reasoning: models that use more than five instances for reasoning belong to multi-shot reasoning, models that use two to five instances belong to few-shot reasoning, and those that use zero or one instance belong to zero-shot and one-shot reasoning. Multi-shot knowledge graph reasoning is subdivided into rule-based reasoning, distributed-representation-based reasoning, neural-network-based reasoning and other reasoning. Few-shot knowledge graph reasoning is subdivided into meta-learning-based reasoning and neighboring-entity-information-based reasoning. These methods are analyzed and summarized. In addition, this paper further describes typical applications of knowledge graph reasoning, and discusses the existing problems, future research directions and prospects of knowledge graph reasoning.
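The rule-based branch of multi-shot reasoning surveyed above can be made concrete with a minimal forward-chaining sketch in Python. The entities, relations and the single Horn rule below are invented purely for illustration and come from no system cited here:

```python
# Toy forward-chaining step for rule-based knowledge graph reasoning.
# A Horn rule r1(x, y) AND r2(y, z) => h(x, z) is applied to a triple set.

triples = {
    ("alice", "born_in", "lyon"),
    ("lyon", "located_in", "france"),
    ("bob", "born_in", "kyoto"),
    ("kyoto", "located_in", "japan"),
}

# one composition rule: born_in(x, y) AND located_in(y, z) => citizen_of(x, z)
rules = [(("born_in", "located_in"), "citizen_of")]

def apply_rules(triples, rules):
    inferred = set()
    for (r1, r2), head in rules:
        for (x, ra, y) in triples:
            if ra != r1:
                continue
            for (y2, rb, z) in triples:
                if rb == r2 and y2 == y:
                    inferred.add((x, head, z))
    return inferred

new_facts = apply_rules(triples, rules)
```

Learning such rules from data, rather than writing them by hand, is what distinguishes the rule-based reasoning systems the survey covers.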
    Survey on Bayesian Optimization Methods for Hyper-parameter Tuning
    LI Ya-ru, ZHANG Yu-lai, WANG Jia-chen
    Computer Science    2022, 49 (6A): 86-92.   DOI: 10.11896/jsjkx.210300208
    For most machine learning models, hyper-parameter selection plays an important role in obtaining high-quality models. In current practice, most hyper-parameters are given manually, so the selection or estimation of hyper-parameters is a key issue in machine learning. The mapping from a hyper-parameter setting to the model's generalization performance can be regarded as a complex black-box function, to which general optimization methods are difficult to apply. Bayesian optimization is a very effective global optimization algorithm, suitable for optimization problems whose objective functions cannot be expressed explicitly, or are non-convex and computationally expensive; an acceptable solution can often be obtained with only a few function evaluations. This paper summarizes the basics of Bayesian optimization for hyper-parameter estimation, and surveys the research hot spots and latest developments of recent years, including research on the surrogate model, the acquisition function, algorithm implementation and so on. The open problems in existing research are also summarized. It is expected to help beginners quickly understand Bayesian optimization algorithms, grasp typical algorithmic ideas, and serve as guidance for future research.
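The loop the survey describes (fit a surrogate, maximize an acquisition function, evaluate, repeat) can be sketched in plain Python for a one-dimensional search space. Everything below — the RBF kernel and its length scale, the grid-based acquisition maximization, and the toy objective standing in for a validation loss — is an illustrative assumption, not the setup of any cited work:

```python
import math

# --- tiny Gaussian-process surrogate (pure Python, 1-D inputs) ---

def rbf(a, b, ls=0.3):
    # squared-exponential kernel; the length scale is an arbitrary choice
    return math.exp(-0.5 * ((a - b) / ls) ** 2)

def cholesky(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(A[i][i] - s) if i == j else (A[i][j] - s) / L[j][j]
    return L

def solve_spd(L, b):
    # solve (L L^T) x = b by forward then backward substitution
    n = len(L)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

def gp_posterior(X, Y, xq, noise=1e-6):
    n = len(X)
    K = [[rbf(X[i], X[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    L = cholesky(K)
    kq = [rbf(x, xq) for x in X]
    mu = sum(a * b for a, b in zip(kq, solve_spd(L, Y)))
    var = rbf(xq, xq) - sum(a * b for a, b in zip(kq, solve_spd(L, kq)))
    return mu, max(var, 1e-12)

# --- expected-improvement acquisition, for minimization ---

def expected_improvement(mu, var, best):
    s = math.sqrt(var)
    z = (best - mu) / s
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (best - mu) * cdf + s * pdf

def bayes_opt(f, iters=10):
    X = [0.0, 0.5, 1.0]                      # initial design
    Y = [f(x) for x in X]
    grid = [i / 200.0 for i in range(201)]   # acquisition maximized on a grid
    for _ in range(iters):
        best = min(Y)
        scores = [expected_improvement(*gp_posterior(X, Y, xq), best)
                  for xq in grid]
        xn = grid[max(range(len(grid)), key=scores.__getitem__)]
        X.append(xn)
        Y.append(f(xn))
    i = min(range(len(Y)), key=Y.__getitem__)
    return X[i], Y[i]

# hypothetical "validation loss" with its minimum at 0.3
x_best, y_best = bayes_opt(lambda x: (x - 0.3) ** 2)
```

Real implementations use mature libraries and optimize the acquisition function properly; the grid and the fixed kernel hyper-parameters here are only for readability.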
    Review of Multi-instance Learning Algorithms
    ZHAO Lu, YUAN Li-ming, HAO Kun
    Computer Science    2022, 49 (6A): 93-99.   DOI: 10.11896/jsjkx.210500047
    Multi-instance learning (MIL) is a typical weakly supervised learning framework, in which every training example, called a bag, is a set of instances. Since the learning process of an MIL algorithm depends only on the labels of bags rather than those of individual instances, MIL fits well with applications in which instance labels are difficult to obtain. Recently, deep multi-instance learning methods have attracted widespread attention, and deep MIL has become a major research focus. This paper reviews the research progress of MIL. Firstly, MIL algorithms are divided into shallow and deep models according to their hierarchical structure. Secondly, various algorithms in these two categories are reviewed and summarized, and the different pooling methods of deep MIL models are analyzed. Moreover, the fundamental theorem of symmetric functions for models with set-type training samples and its application in deep MIL are expounded. Finally, the performance of different algorithms is compared and analyzed through experiments, their interpretability is analyzed thoroughly, and problems for further investigation are discussed.
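The bag/instance relationship described above can be made concrete in a few lines of Python. The instance scorer here is a hand-set linear function, purely a hypothetical stand-in for a learned model:

```python
# Standard multi-instance assumption: a bag is positive iff at least one
# of its instances is positive.  Max pooling over instance scores
# implements exactly this assumption.

def instance_score(x):
    # hypothetical hand-set linear scorer; score > 0 means "positive instance"
    w, b = (1.0, -1.0), -0.5
    return w[0] * x[0] + w[1] * x[1] + b

def bag_label(bag):
    return 1 if max(instance_score(x) for x in bag) > 0 else 0

positive_bag = [(0.1, 0.9), (2.0, 0.2)]  # second instance pushes the bag positive
negative_bag = [(0.1, 0.9), (0.2, 0.8)]
```

Mean pooling or attention pooling replaces the `max` when a smoother, learnable aggregation is wanted, which is exactly where the deep MIL pooling methods surveyed above differ.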
    Fast and Transmissible Domain Knowledge Graph Construction Method
    DENG Kai, YANG Pin, LI Yi-zhou, YANG Xing, ZENG Fan-rui, ZHANG Zhen-yu
    Computer Science    2022, 49 (6A): 100-108.   DOI: 10.11896/jsjkx.210900018
    A domain knowledge graph can clearly and visually represent domain entity relations and supports efficient and accurate knowledge acquisition. Constructing a domain knowledge graph helps promote the development of information technology in related fields, but its construction requires huge expert manpower and time costs, and the result is difficult to migrate to other fields. In order to reduce the manpower cost and improve the versatility of knowledge graph construction methods, this paper proposes a general construction method for domain knowledge graphs that does not rely on large amounts of manual ontology construction and data annotation. The domain knowledge graph is constructed through four steps: domain dictionary construction, data acquisition and cleaning, entity linking and maintenance, and graph updating and visualization. This paper takes the network security domain as an example to construct a knowledge graph and details the build process. At the same time, in order to improve the domain relevance of entities in the knowledge graph, a fusion model based on BERT (Bidirectional Encoder Representations from Transformers) and an attention mechanism is proposed. The F-score of this model in text classification is 87.14%, and the accuracy is 93.51%.
    Redundant Literals of Ternary Clause Sets in Propositional Logic
    LI Jie, ZHONG Xiao-mei
    Computer Science    2022, 49 (6A): 109-112.   DOI: 10.11896/jsjkx.210700036
    Automatic reasoning is one of the core issues in the field of artificial intelligence. Since a large number of redundant literals and redundant clauses are generated in the process of resolution-based automatic reasoning, resolution efficiency is affected, so eliminating redundant literals and redundant clauses from the clause set is of great significance. In propositional logic, according to the related concepts and properties of necessary literals, useful literals and useless literals, this paper classifies redundant literals in some ternary clause sets, gives judgment methods for them, and explains these judgment methods through concrete examples.
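The notion of a redundant literal can be checked mechanically for small clause sets by comparing the models before and after deletion. The encoding below (positive integer = positive literal, negative integer = negated literal) is a generic brute-force sketch, not the paper's syntactic judgment method:

```python
from itertools import product

def models(clauses, nvars):
    # enumerate all truth assignments satisfying every clause
    sat = set()
    for assign in product([False, True], repeat=nvars):
        if all(any((lit > 0) == assign[abs(lit) - 1] for lit in clause)
               for clause in clauses):
            sat.add(assign)
    return sat

def literal_redundant(clauses, i, lit, nvars):
    # lit is redundant in clause i iff deleting it preserves equivalence
    reduced = [set(c) for c in clauses]
    reduced[i].discard(lit)
    return models(clauses, nvars) == models(reduced, nvars)

# clause set {p OR q, NOT p OR q} over variables 1 = p, 2 = q
clauses = [{1, 2}, {-1, 2}]
```

For larger clause sets this brute-force check is exponential, which is exactly why syntactic judgment methods like those in the paper are valuable.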
    Active Metric Learning Based on Support Vector Machines
    HOU Xia-ye, CHEN Hai-yan, ZHANG Bing, YUAN Li-gang, JIA Yi-zhen
    Computer Science    2022, 49 (6A): 113-118.   DOI: 10.11896/jsjkx.210500034
    Metric learning is an important issue in machine learning, and the measuring results significantly affect the performance of machine learning algorithms. Current research on metric learning mainly focuses on supervised learning problems. However, in real-world applications, a large amount of data has no labels, or labels can only be obtained at a high price. To handle this problem, this paper proposes an active metric learning algorithm based on support vector machines (ASVM2L), which can be used for semi-supervised learning. Firstly, a small set of samples randomly selected from the unlabeled dataset is labeled by oracles, and these samples are used to train the support vector machine metric learner (SVM2L). According to the output metric, the remaining unlabeled samples are classified by K-NN classifiers with different values of K, and the sample with the largest voting difference is selected and submitted to the oracle for a label. This sample is then added to the training set to retrain the ASVM2L model. The above steps are repeated until the termination condition is met, so that the best metric matrix can be obtained from the limited labeled samples. Comparative experiments on standard datasets verify that the proposed ASVM2L algorithm can obtain more information with the fewest labeled samples without affecting classification accuracy, and therefore has better measuring performance.
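The query step — a committee of K-NN classifiers with different K values votes on each unlabeled sample, and the one they disagree on most goes to the oracle — can be sketched directly. The data points and K values below are invented for illustration, and plain Euclidean distance stands in for the learned metric:

```python
def knn_predict(labeled, x, k):
    # labeled: list of (point, label) pairs; squared Euclidean distance
    nearest = sorted(labeled,
                     key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

def most_ambiguous(labeled, pool, ks):
    # pick the pool sample with the largest number of distinct committee votes
    def disagreement(x):
        return len({knn_predict(labeled, x, k) for k in ks})
    return max(pool, key=disagreement)

labeled = [((0.0,), 0), ((0.2,), 0), ((0.4,), 0), ((2.0,), 1), ((2.2,), 1)]
pool = [(0.1,), (1.9,)]          # (1.9,) sits near the class boundary
query = most_ambiguous(labeled, pool, ks=(1, 5))
```

In the full ASVM2L loop the queried label would be added to `labeled` and the metric retrained before the next query.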
    Relation Classification Method Based on Cross-sentence Contextual Information for Neural Network
    HUANG Shao-bin, SUN Xue-wei, LI Rong-sheng
    Computer Science    2022, 49 (6A): 119-124.   DOI: 10.11896/jsjkx.210600150
    Information extraction is a technique for extracting specific information from textual data. It has been widely used in knowledge graphs, information retrieval, question answering systems, sentiment analysis and text mining. As the core task and an important part of information extraction, relation classification recognizes the semantic relations between entities. In recent years, deep learning has made remarkable achievements in relation extraction tasks. So far, researchers have focused their efforts on improving neural network models, but there is still a lack of effective methods for obtaining cross-sentence semantic information from paragraph- or discourse-level texts in which different sentences are closely related semantically. Such inter-sentence semantic relationships are, however, of great use for relation extraction tasks. For paragraph- or discourse-level relation extraction datasets, this paper proposes a method that combines each sentence with its cross-sentence contextual information as the input of the neural network model, so that the model can learn more semantic information from paragraph- or discourse-level texts. Cross-sentence contextual information is introduced into different neural network models, and experiments are carried out on two relation classification datasets in different fields, the San Wen dataset and the Policy dataset, to compare the effect of cross-sentence contextual information on model accuracy. The experiments show that the proposed method can effectively improve the performance of relation classification models including the convolutional neural network, the bidirectional long short-term memory network, the attention-based bidirectional long short-term memory network and the convolutional recurrent neural network. In addition, this paper proposes a relation classification dataset named Policy, built from the texts of policies and regulations in the field of the four social insurances and one housing fund, which is used to verify the necessity of introducing cross-sentence contextual information into relation classification tasks in practical fields.
    Hybrid Improved Flower Pollination Algorithm and Gray Wolf Algorithm for Feature Selection
    KANG Yan, WANG Hai-ning, TAO Liu, YANG Hai-xiao, YANG Xue-kun, WANG Fei, LI Hao
    Computer Science    2022, 49 (6A): 125-132.   DOI: 10.11896/jsjkx.210600135
    Feature selection is very important in the data preprocessing stage; its quality affects not only the training time of a neural network but also its performance. The grey wolf improved flower pollination algorithm (GIFPA) is a hybrid algorithm that fuses the framework of the flower pollination algorithm (FPA) with the grey wolf optimization algorithm. Applied to feature selection, it can both retain the connotative information of the original features and maximize classification accuracy. The GIFPA algorithm adds worst-individual information to the FPA, uses the cross-pollination stage of the FPA for global search, uses the hunting process of the grey wolf optimization algorithm for local search, and adjusts between the two search processes through a conversion coefficient. At the same time, to overcome the tendency of swarm intelligence algorithms to fall into local optima, this paper uses the ReliefF algorithm from the field of data mining to alleviate this problem, filtering out high-weight features to improve the best-individual information. To verify the performance of the algorithm, 21 classical datasets from the UCI database are selected for testing, the K-nearest neighbor (KNN) classifier is used for classification and evaluation, fitness value and accuracy are used as evaluation criteria, and K-fold cross-validation is used to overcome over-fitting. In the experiments, a variety of classical and state-of-the-art algorithms, including the FPA algorithm, are compared. The experimental results show that the GIFPA algorithm is highly competitive in feature selection.
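ReliefF assigns each feature a weight by contrasting nearest hits and nearest misses. The sketch below is the simpler two-class Relief variant with a single nearest hit and miss per sample; it conveys the idea the paper exploits (keeping high-weight features), but it is not the exact ReliefF used there, and the data are invented:

```python
def relief_weights(X, y):
    # X: list of feature vectors, y: binary labels.
    # A feature gains weight when it separates the nearest miss and loses
    # weight when it separates the nearest hit.
    m, n = len(X), len(X[0])
    w = [0.0] * n
    for i in range(m):
        def sqdist(j):
            return sum((X[i][k] - X[j][k]) ** 2 for k in range(n))
        hit = min((j for j in range(m) if j != i and y[j] == y[i]), key=sqdist)
        miss = min((j for j in range(m) if y[j] != y[i]), key=sqdist)
        for k in range(n):
            w[k] += abs(X[i][k] - X[miss][k]) - abs(X[i][k] - X[hit][k])
    return w

# feature 0 separates the classes, feature 1 is constant noise
X = [[0.0, 5.0], [0.1, 5.0], [1.0, 5.0], [1.1, 5.0]]
y = [0, 0, 1, 1]
weights = relief_weights(X, y)
```

In a GIFPA-style pipeline, features with the largest weights would be injected into the best individual's selection mask.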
    Construction of Named Entity Recognition Corpus in Field of Military Command and Control Support
    DU Xiao-ming, YUAN Qing-bo, YANG Fan, YAO Yi, JIANG Xiang
    Computer Science    2022, 49 (6A): 133-139.   DOI: 10.11896/jsjkx.210400132
    The construction of a knowledge graph in the field of military command and control support is an important research direction in the informatization of military equipment support. Aiming at the current lack of a basic training corpus for named entity recognition models in the construction of knowledge graphs in this support domain, and based on an analysis of the relevant research status, this paper designs and implements a GUI-based named entity recognition corpus construction system built on the basic framework of the PyQt5 application. First, it briefly describes the overall system architecture and the corpus processing pipeline. Secondly, it introduces the system's five major functional modules: data preprocessing, the labeling system, automatic labeling, labeling analysis and encoding conversion. Among them, the automatic labeling module, covering the implementation of automatic labeling and the automatic de-duplication algorithm, is the most important and difficult part, and also the core of the entire system. Finally, the graphical user interface of each functional module is implemented through the basic framework of the PyQt5 application and various functional components. The system can automatically process various original equipment manuals on military computers and quickly generate the corpus required for training named entity recognition models, so as to provide effective technical support for the subsequent construction of the corresponding domain knowledge graph.
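The core idea of such an automatic labeling module — matching a domain dictionary against raw text to produce training labels — can be sketched as longest-match BIO tagging. The tokens, entity type and gazetteer entries below are invented examples, not the system's actual dictionary:

```python
def auto_label(tokens, gazetteer):
    # gazetteer: dict mapping an entity phrase (tuple of tokens) to its type.
    # Longest-match-first dictionary labeling producing BIO tags.
    tags = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        match = None
        for phrase, etype in gazetteer.items():
            n = len(phrase)
            if tuple(tokens[i:i + n]) == phrase and (match is None or n > len(match[0])):
                match = (phrase, etype)
        if match:
            n, etype = len(match[0]), match[1]
            tags[i] = "B-" + etype
            for j in range(i + 1, i + n):
                tags[j] = "I-" + etype
            i += n
        else:
            i += 1
    return tags

tokens = ["the", "radar", "control", "unit", "failed"]
gazetteer = {("radar",): "EQUIP", ("radar", "control", "unit"): "EQUIP"}
tags = auto_label(tokens, gazetteer)
```

A real system would add de-duplication of generated sentences and a review step, since dictionary labeling inevitably produces noise.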
    Topological Properties of Fuzzy Rough Sets Based on Residuated Lattices
    XU Si-yu, QIN Ke-yun
    Computer Science    2022, 49 (6A): 140-143.   DOI: 10.11896/jsjkx.210200123
    This paper is devoted to the study of the topological structure of L-fuzzy rough sets based on residuated lattices. The L-fuzzy topologies induced by the lower approximation operators determined by fuzzy implication operators are presented, and their basic properties are discussed. The knowledge of the L-fuzzy approximation space is a general L-fuzzy relation, with no assumption of reflexivity or strong seriality. Based on the transitive closures of L-fuzzy relations, the interior operators and closure operators of the corresponding L-fuzzy topologies are constructed. The relationships among the L-fuzzy topologies induced by lower approximation operators corresponding to different L-fuzzy relations are investigated, and a classification method for L-fuzzy relations based on the related topologies is presented.
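For reference, the lower and upper approximation operators that induce these topologies are standardly defined, for an L-fuzzy relation $R$ on a universe $U$, a t-norm $\otimes$ and its residual implication $\rightarrow$, as follows (a standard formulation; the paper's exact notation may differ):

```latex
\underline{R}(A)(x) = \bigwedge_{y \in U} \big( R(x,y) \rightarrow A(y) \big),
\qquad
\overline{R}(A)(x) = \bigvee_{y \in U} \big( R(x,y) \otimes A(y) \big).
```

When $R$ is reflexive and transitive, $\underline{R}$ behaves as an interior operator, which is why transitive closures appear in the construction above.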
    Aspect-level Sentiment Classification Based on Imbalanced Data and Ensemble Learning
    LIN Xi, CHEN Zi-zhuo, WANG Zhong-qing
    Computer Science    2022, 49 (6A): 144-149.   DOI: 10.11896/jsjkx.210500205
    Sentiment classification remains an important part of the field of natural language processing. The general task is to classify emotional data into two categories, positive and negative. Many models assume that the positive and negative data are balanced; in reality, however, the two classes are usually imbalanced. This paper proposes an ensemble learning model based on an aspect-level LSTM to handle aspect-level problems. Firstly, the dataset is under-sampled and divided into multiple groups. Secondly, a classification algorithm is trained on each group of data. Finally, the classification result is produced by combining all the models. The experimental results show that the ensemble learning model based on the aspect-level LSTM significantly improves classification accuracy, and its performance is better than that of the traditional LSTM model.
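The under-sample/train/vote pipeline can be sketched with a trivial stand-in learner: a one-dimensional threshold rule takes the place of the aspect-level LSTM, and the data and group count are invented for illustration:

```python
import random

def train_member(pos_x, neg_x):
    # hypothetical stand-in for one LSTM member: a midpoint threshold rule
    t = (sum(pos_x) / len(pos_x) + sum(neg_x) / len(neg_x)) / 2
    return lambda x: 1 if x > t else 0   # assumes the positive class lies above

def train_ensemble(pos_x, neg_x, n_groups=3, seed=0):
    rng = random.Random(seed)
    members = []
    for _ in range(n_groups):
        sub = rng.sample(neg_x, len(pos_x))  # under-sample the majority class
        members.append(train_member(pos_x, sub))
    return members

def predict(members, x):
    # majority vote over the per-group classifiers
    votes = [m(x) for m in members]
    return max(set(votes), key=votes.count)

pos_x = [2.0, 2.4]                                     # minority class
neg_x = [0.1, 0.2, 0.3, 0.15, 0.25, 0.05, 0.35, 0.1]   # majority class
members = train_ensemble(pos_x, neg_x)
```

Each member sees a balanced subset, so no single classifier is swamped by the majority class, while the vote recovers the information spread across the groups.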
    Deep Integrated Learning Software Requirement Classification Fusing Bert and Graph Convolution
    KANG Yan, WU Zhi-wei, KOU Yong-qi, ZHANG Lan, XIE Si-yu, LI Hao
    Computer Science    2022, 49 (6A): 150-158.   DOI: 10.11896/jsjkx.210500065
    With the rapid growth in the quantity and variety of software, effectively mining the textual features of software requirements and classifying the textual features of software functional requirements has become a major challenge in the field of software engineering. The classification of software functional requirements provides a reliable guarantee for the whole software development process and reduces potential risks and negative effects in the requirements analysis stage. However, the validity of software requirement analysis is limited by the high dispersion, high noise and sparsity of software requirement text. In this paper, a two-layer lexical graph convolutional network model (TVGCCN) is proposed to innovatively model software requirement text as graphs, build a graph neural network of software requirements, and effectively capture the knowledge of words and the relationships between words and texts. A deep ensemble learning model is proposed, which integrates several deep learning classification models to classify software requirement text. In experiments on the datasets Wiodows_A and Wiodows_B, the accuracy of the deep ensemble learning model fusing BERT and graph convolution reaches 96.73% and 95.60% respectively, obviously better than that of other text classification models. This fully proves that the deep ensemble learning model fusing BERT and graph convolution can effectively distinguish the functional characteristics of software requirement text and improve the accuracy of software requirement text classification.
    Solve Data Envelopment Analysis Problems with Particle Filter
    HUANG Guo-xing, YANG Ze-ming, LU Wei-dang, PENG Hong, WANG Jing-wen
    Computer Science    2022, 49 (6A): 159-164.   DOI: 10.11896/jsjkx.210600110
    Data envelopment analysis (DEA) is a method for evaluating the production efficiency of decision making units with multiple inputs and outputs, and is widely used to solve efficiency analysis problems in various fields. However, current approaches to data envelopment analysis problems mainly rely on specialized software, and the entire process requires a high degree of specialization. In order to solve data envelopment analysis problems conveniently, optimization ideas are applied to them, and this paper proposes an optimization method based on particle filter. Firstly, the basic principles of the particle filter method are systematically explained. Then the optimization problem of data envelopment analysis is transformed into the minimum variance estimation problem of particle filter, so that the basic principles of particle filter can be used to solve the optimization problem of data envelopment analysis and obtain a global optimal solution. Finally, several simulation examples are conducted to verify the effectiveness of the proposed method. The simulation results show that the optimization method based on particle filter can accurately and effectively solve data envelopment analysis problems.
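The reduction the paper makes — treating an optimization problem as a filtering/estimation problem — can be illustrated with a generic particle-filter-style minimizer: particles are weighted by a likelihood built from the objective, resampled, then diffused. The temperature, particle count and toy objective below are illustrative assumptions, not the paper's DEA formulation:

```python
import math, random

def pf_minimize(f, lo, hi, n=200, iters=30, temp=0.05, seed=1):
    # particle-filter-style optimizer: weight particles by exp(-f/temp),
    # resample proportionally, then jitter (a generic sketch)
    rng = random.Random(seed)
    particles = [rng.uniform(lo, hi) for _ in range(n)]
    spread = (hi - lo) / 10
    for _ in range(iters):
        weights = [math.exp(-f(p) / temp) for p in particles]
        if sum(weights) == 0:            # numerical underflow guard
            weights = [1.0] * n
        # multinomial resampling proportional to weights
        particles = rng.choices(particles, weights=weights, k=n)
        # diffusion keeps diversity; shrink it over time
        particles = [min(hi, max(lo, p + rng.gauss(0, spread)))
                     for p in particles]
        spread *= 0.9
    return min(particles, key=f)

# arbitrary 1-D objective standing in for a DEA efficiency program
best = pf_minimize(lambda x: (x - 0.7) ** 2, 0.0, 1.0)
```

The paper's contribution is the mapping of the DEA program into this estimation framework; here a simple quadratic stands in for it.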
    TS-AC-EWM Online Product Ranking Method Based on Multi-level Emotion and Topic Information
    YU Ben-gong, ZHANG Zi-wei, WANG Hui-ling
    Computer Science    2022, 49 (6A): 165-171.   DOI: 10.11896/jsjkx.210400238
    The information on e-commerce platforms has a significant impact on consumers' purchase decisions. Integrating large-scale store information, commodity information and online review information to obtain an online commodity ranking that assists purchase decisions is of great research value. To this end, this paper proposes an online product ranking method, TS-AC-EWM, which integrates multi-level emotion and topic information and makes full use of both scoring information and review content. Firstly, an online commodity ranking evaluation system is designed along the two dimensions of measurement and content, including four measurement indexes and three content indexes. Secondly, the measurement indexes and online review content of each candidate commodity are crawled. Thirdly, the three content indexes are calculated with the TS method, which combines topic and emotion information, and the AC method, which is based on appended comments. Finally, the entropy weight method is used to calculate the index weights, and the commodities are scored and ranked. Experiments on a Jingdong microwave oven dataset demonstrate the feasibility and effectiveness of the proposed method, so the ranking method has practical significance.
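The entropy weight step at the end can be sketched directly: an index whose values barely vary across candidates carries little information and receives a small weight. This sketch assumes all indexes are positive-valued and oriented so that larger is better (real uses normally apply a min-max normalization first); the score matrix is invented:

```python
import math

def entropy_weights(matrix):
    # rows = candidate products, columns = evaluation indexes
    m, n = len(matrix), len(matrix[0])
    entropies = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        probs = [v / total for v in col]
        # normalized Shannon entropy of the column
        e = -sum(p * math.log(p) for p in probs if p > 0) / math.log(m)
        entropies.append(e)
    d = [1 - e for e in entropies]   # degree of divergence per index
    s = sum(d)                       # assumes at least one non-constant column
    return [x / s for x in d]

# three candidates, two indexes; index 0 is identical for every candidate
scores = [[1.0, 2.0], [1.0, 4.0], [1.0, 6.0]]
w = entropy_weights(scores)
```

The constant column gets weight 0, so the final ranking is driven entirely by the index that actually discriminates between candidates.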
    Automatic Generation of Patent Summarization Based on Graph Convolution Network
    LI Jian-zhi, WANG Hong-ling, WANG Zhong-qing
    Computer Science    2022, 49 (6A): 172-177.   DOI: 10.11896/jsjkx.210400117
    A patent specification contains much useful information, but owing to its length it is difficult to obtain effective information from it quickly. A patent summarization is a summary of a complete patent specification. The claims document determines the scope of protection of the patent application documents, and it is found that the claims document has a special structure. Therefore, this paper proposes a method for automatic generation of patent summarizations based on a graph convolution network, in which the summarization is generated from the patent claims document and its structural information. The model first obtains the patent structural information, and a graph convolutional neural network is introduced in the encoder to fuse serialization information and structural information, improving the quality of the summarization. Experimental results show that this method achieves a significant improvement in ROUGE evaluation compared with the current mainstream extractive summarization methods and the traditional encoder-decoder abstractive summarization.
    Projected Gradient Descent Algorithm with Momentum
    WU Zi-bin, YAN Qiao
    Computer Science    2022, 49 (6A): 178-183.   DOI: 10.11896/jsjkx.210500039
    In recent years, deep learning has been widely used in the field of computer vision and has achieved outstanding success. However, researchers have found that neural networks are easily disturbed by subtle perturbations added to the input, which can cause a model to give incorrect outputs; such inputs are called adversarial examples. A series of algorithms for generating adversarial examples has emerged. Based on the existing projected gradient descent (PGD) adversarial example generation algorithm, this paper proposes an improved method, the MPGDCW algorithm, which incorporates momentum and adopts a new loss function to keep the update direction stable and avoid poor local maxima. At the same time, replacing the cross-entropy loss avoids vanishing gradients. Experiments on four robust models covering three architectures confirm that the proposed MPGDCW algorithm has a better attack effect and stronger transfer attack capability.
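The update rule — a momentum-accumulated normalized gradient, a signed step, then projection back onto the L∞ ball — can be shown on a one-dimensional toy loss. This is a generic MI-FGSM-style sketch of momentum PGD, not the exact MPGDCW algorithm or its CW-style loss:

```python
def mpgd_attack(grad, x0, eps=0.3, alpha=0.05, mu=0.9, steps=20):
    # momentum PGD on a scalar toy "loss": ascend the loss, then project the
    # perturbation back onto the L-infinity ball of radius eps around x0
    x, g_acc = x0, 0.0
    for _ in range(steps):
        g = grad(x)
        # momentum: accumulate normalized gradients to stabilize the direction
        g_acc = mu * g_acc + g / max(abs(g), 1e-12)
        x = x + alpha * (1 if g_acc > 0 else -1)   # signed ascent step
        x = min(x0 + eps, max(x0 - eps, x))        # projection step
    return x

# toy loss (x - 1)^2 to maximize; within the eps ball around x0 = 0.9
# the maximum sits on the lower boundary x = 0.6
adv = mpgd_attack(lambda x: 2 * (x - 1.0), x0=0.9)
```

In the image setting `x` is a tensor, the sign is taken coordinate-wise, and the projection clips each pixel into `[x0 - eps, x0 + eps]`.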
    Data Debiasing Method Based on Constrained Optimized Generative Adversarial Networks
    XU Guo-ning, CHEN Yi-peng, CHEN Yi-ming, CHEN Jin-yin, WEN Hao
    Computer Science    2022, 49 (6A): 184-190.   DOI: 10.11896/jsjkx.210400234
    With the wide application of deep learning technology in image recognition, natural language processing and financial prediction, bias in analysis results can negatively impact both individuals and groups, so it is vital to enhance the fairness of a deep learning model without affecting its performance. Biased information in data is not limited to sensitive attributes: non-sensitive attributes also contain bias due to correlations among attributes, so debiasing algorithms that only consider sensitive attributes cannot eliminate the bias. In order to eliminate the bias in the classification results of deep learning models caused by correlated sensitive attributes in the data, this paper proposes a data debiasing method based on generative adversarial networks. The loss function of the model combines fairness constraints with accuracy loss, and the model uses adversarial encoding to generate a debiased dataset; the alternating adversarial training of the generator and the discriminator reduces the loss of unbiased information in the dataset, so that classification accuracy is preserved while the bias in the data is eliminated, improving the fairness of subsequent classification tasks. Finally, data debiasing experiments on several real-world datasets verify the effectiveness of the proposed algorithm. The results show that the proposed method can effectively reduce the bias information in datasets and generate datasets with less bias.
    Vehicle Routing Problem with Time Window of Takeaway Food Considering One-order-multi-product Order Delivery
    YANG Hao-xiong, GAO Jing, SHAO En-lu
    Computer Science    2022, 49 (6A): 191-198.   DOI: 10.11896/jsjkx.210400005
    The rapid growth of take-out food transactions has made take-out food develop quickly and become a new kind of demand in the consumer market. As take-out order volumes grow, consumers demand more than the basic take-out delivery service, and their demands are becoming increasingly varied; in particular, one take-out order may be composed of different kinds of food provided by two or more different food merchants. Against this one-order-multi-product background, and aiming at the problem of take-out order delivery with time windows, this paper studies vehicle routing planning for delivery, which can improve merchant service levels and the efficiency of delivery vehicles. Food merchants accept orders from consumers via the online food-selling platform and then prepare the food; the delivery vehicle picks up the food within a specific time window and delivers it to consumers. This paper constructs a mathematical model whose objective function minimizes the total delivery cost over the whole delivery process, and sets time window limits for both merchants and consumers. A genetic algorithm is used to solve the take-out order delivery problem, and the validity and feasibility of the mathematical model are verified by an example experiment. Finally, suggestions on practical management and insights into vehicle routing planning are given from the perspective of practice.
    Cutting Edge Method for Traveling Salesman Problem Based on the Shortest Paths in Optimal Cycles of Quadrilaterals
    WANG Yong, CUI Yuan
    Computer Science    2022, 49 (6A): 199-205.   DOI: 10.11896/jsjkx.210400065
    With the growing size of traveling salesman problem instances, the search space for the optimal solution on the complete graph increases exponentially. A cutting edge algorithm is proposed to reduce the search space of the optimal solution for the traveling salesman problem. It is proven that the probability that an optimal Hamiltonian cycle edge is contained in the shortest paths in the optimal cycles of quadrilaterals differs from that of a common edge. A number of shortest paths in the optimal cycles of quadrilaterals are selected to compute the edge frequency of each edge. As edges are cut according to the average edge frequency of all edges, the retention probability of an optimal Hamiltonian cycle edge is derived based on the constructed binomial distributions. Given the complete graph Kn of a traveling salesman problem, edges are eliminated in four steps. Firstly, a finite number of quadrilaterals containing each edge are chosen. Secondly, the shortest path in the optimal cycle of every selected quadrilateral is used to compute the edge frequencies. Thirdly, the 5/6 of the edges having the smallest edge frequencies are cut. In the last step, some edges are added back for vertices of degree below 2 according to edge frequency. Experiments illustrate that the preserved edges occupy 1/6 of the total edges. Moreover, the computation time of exact algorithms for the traveling salesman problem on the remaining graphs is reduced to some extent.
    TI-FastText Automatic Goods Classification Algorithm
    SHAO Xin-xin
    Computer Science    2022, 49 (6A): 206-210.   DOI: 10.11896/jsjkx.210500089
    Abstract496)      PDF(pc) (2547KB)(750)       Save
    In order to classify goods automatically according to their title information,a Chinese goods classification algorithm based on TF-IDF(term frequency-inverse document frequency) and FastText is proposed.In this algorithm,the lexicon is represented as a prefix tree by FastText,and TF-IDF filtering is performed on the dictionary produced by the n-gram model.Thus,entries with high discriminative power are weighted more heavily when computing the mean of the input word sequence vectors,making the model better suited to Chinese short-text classification.This paper uses the Anaconda platform to implement and optimize the FastText-based product classification algorithm.Evaluation shows that the algorithm achieves a high accuracy rate and can meet the needs of goods classification on e-commerce platforms.
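The core modification, replacing FastText's uniform averaging of word vectors with a TF-IDF-weighted average, might look like the following sketch; the toy titles and the deterministic stand-in word vectors are illustrative, not trained embeddings:

```python
import math
from collections import Counter

# Toy corpus of tokenized product titles.
DOCS = [
    ["red", "cotton", "shirt"],
    ["blue", "cotton", "dress"],
    ["usb", "cable", "red"],
]

def idf(term):
    """Smoothed inverse document frequency over the toy corpus."""
    df = sum(term in doc for doc in DOCS)
    return math.log(len(DOCS) / (1 + df)) + 1.0

# Deterministic stand-in 3-d "embeddings" derived from character codes.
VEC = {w: [((sum(map(ord, w)) * (k + 3)) % 7) / 7 for k in range(3)]
       for doc in DOCS for w in doc}

def tfidf_sentence_vector(tokens):
    """TF-IDF-weighted mean of word vectors, replacing the uniform average."""
    tf = Counter(tokens)
    weights = {w: tf[w] / len(tokens) * idf(w) for w in tf}
    total = sum(weights.values())
    vec = [0.0, 0.0, 0.0]
    for w, wt in weights.items():
        for k in range(3):
            vec[k] += wt / total * VEC[w][k]
    return vec
```

In the full system this weighted sentence vector would feed FastText's linear classifier instead of the plain average.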
    Fishing Type Identification of Marine Fishing Vessels Based on Support Vector Machine Optimized by Improved Sparrow Search Algorithm
    SHAN Xiao-ying, REN Ying-chun
    Computer Science    2022, 49 (6A): 211-216.   DOI: 10.11896/jsjkx.220300216
    Abstract350)      PDF(pc) (3833KB)(454)       Save
    Identifying fishing types is significant for monitoring the fishing activities of motorized vessels and maintaining the marine ecological balance.To protect the marine environment and improve the efficiency of fishing vessel supervision,a fishing type identification algorithm based on a support vector machine optimized by an improved sparrow search algorithm(ISSA-SVM) is proposed.First,a t-distribution mutation operator is introduced to optimize population selection,which improves the global search ability and local exploitation ability of the original SSA.Second,the position update formula of the followers in SSA is modified to further improve the convergence speed of the algorithm.Finally,the ISSA-SVM fishing type identification model is constructed by using ISSA to optimize the parameters of the SVM.Experimental results on 3 546 fishing vessels show that,compared with SVM,PSO-SVM,GWO-SVM and SSA-SVM,the proposed ISSA-SVM model achieves higher accuracy and faster convergence.
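A minimal sketch of the ISSA loop follows, with the t-distribution mutation built from a normal and a chi-square draw. The surrogate objective stands in for SVM cross-validation error over two parameters (think log C and log gamma), and the producer/follower update rules are simplified assumptions, not the paper's exact formulas:

```python
import math
import random

random.seed(1)

def t_sample(df):
    """Student-t sample: normal draw over sqrt(chi-square(df)/df)."""
    return random.gauss(0, 1) / math.sqrt(random.gammavariate(df / 2, 2) / df)

def objective(p):
    # Stand-in for SVM cross-validation error; a real ISSA-SVM trains an SVM here.
    c, g = p
    return (c - 1.5) ** 2 + (g + 0.5) ** 2

def issa(n=20, iters=100, lo=-5.0, hi=5.0):
    pop = [[random.uniform(lo, hi), random.uniform(lo, hi)] for _ in range(n)]
    best = min(pop, key=objective)
    for it in range(iters):
        pop.sort(key=objective)
        for i, x in enumerate(pop):
            if i < n // 5:                  # producers: contract their positions
                pop[i] = [v * math.exp(-i / (n * (it + 1))) for v in x]
            else:                           # followers: move toward the best
                pop[i] = [b + random.gauss(0, 1) * abs(v - b)
                          for v, b in zip(x, best)]
            if random.random() < 0.3:       # t-distribution mutation: heavy tails
                df = it + 1                 # tails tighten as the search converges
                pop[i] = [min(hi, max(lo, v + t_sample(df))) for v in pop[i]]
        cand = min(pop, key=objective)
        if objective(cand) < objective(best):
            best = list(cand)               # elitist best-so-far tracking
    return best

best = issa()
```

The heavy tails of the early (low degree-of-freedom) t-distribution give occasional large jumps, which is what improves escape from local optima relative to Gaussian mutation.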
    Improved Sparrow Search Algorithm Based on A Variety of Improved Strategies
    LI Dan-dan, WU Yu-xiang, ZHU Cong-cong, LI Zhong-kang
    Computer Science    2022, 49 (6A): 217-222.   DOI: 10.11896/jsjkx.210700032
    Abstract800)      PDF(pc) (4076KB)(797)       Save
    To address the shortcomings of the sparrow search algorithm,such as slow convergence,a tendency to fall into local optima and low optimization precision,an improved sparrow search algorithm(IM-SSA) based on various improvement strategies is proposed.Firstly,the initial population is enriched by a Tent chaotic sequence,which expands the search area.Then,adaptive crossover and mutation operators are introduced into the discoverers to enrich the diversity of the producer population and balance the global and local search abilities of the algorithm.Next,t-distribution perturbation or differential mutation is applied to the population after each iteration according to individual characteristics,which avoids population singularity in the later stage of the algorithm and enhances its ability to jump out of local optima.Finally,the proposed IM-SSA,the grey wolf optimizer,particle swarm optimization,the whale optimization algorithm and the classical sparrow search algorithm are each used to optimize eight test functions.Comparative analysis of the simulation results shows that IM-SSA has faster convergence,a stronger ability to escape local optima and higher optimization precision than the other four algorithms.Comparing the results of IM-SSA with those of existing improved sparrow search algorithms further shows that the strategies proposed in this paper perform better.
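The Tent chaotic initialization mentioned above can be sketched as follows. The map parameter is set slightly below 2 here to avoid the floating-point degeneration of the exact tent map; that choice and the seed value are implementation assumptions, not from the paper:

```python
def tent_sequence(x0, n, mu=1.99):
    """Tent map iterates: x_{k+1} = mu*x_k if x_k < 0.5 else mu*(1 - x_k)."""
    xs, x = [], x0
    for _ in range(n):
        x = mu * x if x < 0.5 else mu * (1.0 - x)
        xs.append(x)
    return xs

def tent_init(pop_size, dim, lo, hi, x0=0.37):
    """Map one chaotic sequence onto [lo, hi] coordinates for the whole population,
    spreading initial individuals more evenly than uniform random draws."""
    seq = tent_sequence(x0, pop_size * dim)
    return [[lo + (hi - lo) * seq[i * dim + j] for j in range(dim)]
            for i in range(pop_size)]

pop = tent_init(10, 3, -5.0, 5.0)
```

The chaotic sequence is deterministic but non-repeating over short horizons, which is why it covers the search area more uniformly than independent random sampling of the same size.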
    Study on Computing Capacity of Novel Numerical Spiking Neural P Systems with Multiple Synaptic Channels
    YIN Xiu, LIU Xi-lin, LIU Xi-yu
    Computer Science    2022, 49 (6A): 223-231.   DOI: 10.11896/jsjkx.210200171
    Abstract189)      PDF(pc) (3646KB)(361)       Save
    Membrane systems,also known as P systems,are distributed parallel computing models.P systems can be roughly divided into three types:cell-like,tissue-like and neural-like.Numerical spiking neural P systems(NSN P systems) gain the ability to process numerical information by introducing numerical variables and production functions into numerical P systems(NP systems).Based on NSN P systems,this paper proposes novel numerical spiking neural P systems with multiple synaptic channels(MNSN P systems).In MNSN P systems,each production function is assigned a threshold to control firing,and each neuron has one or more synaptic channels to transmit the production value.This paper mainly studies the computing power of MNSN P systems:through the simulation of register machines,it is proved that MNSN P systems are Turing universal as number generating/accepting devices,and a universal MNSN P system containing 70 neurons is constructed to compute functions.
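A toy simulation of the firing rule described above, one numerical variable per neuron, a thresholded production function, and multiple synaptic channels each receiving the produced value. This is a sketch of the mechanism only, not the paper's formal definition:

```python
class Neuron:
    """One neuron of a sketched MNSN P system."""
    def __init__(self, x, f, threshold, channels):
        self.x = x                  # numerical variable
        self.f = f                  # production function
        self.threshold = threshold  # firing threshold for the production value
        self.channels = channels    # indices of downstream neurons
        self.inbox = 0.0            # values arriving this step

def step(neurons):
    # Phase 1: every neuron whose production value meets its threshold fires,
    # sending the produced value down each of its synaptic channels.
    for n in neurons:
        value = n.f(n.x)
        if value >= n.threshold:
            for target in n.channels:
                neurons[target].inbox += value
            n.x = 0.0               # the variable is consumed by firing
    # Phase 2: delivered values are added to the target variables.
    for n in neurons:
        n.x += n.inbox
        n.inbox = 0.0

# Tiny example: neuron 0 doubles its value and feeds neurons 1 and 2.
net = [Neuron(3.0, lambda x: 2 * x, 1.0, [1, 2]),
       Neuron(0.0, lambda x: x, 10.0, []),
       Neuron(0.0, lambda x: x, 10.0, [])]
step(net)
```

After one step neuron 0 has fired (its variable resets to 0) and neurons 1 and 2 each hold the produced value 6.0; their own thresholds keep them silent.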