Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 47 Issue 12, 15 December 2020
  
Research on Refactoring of Complex Software Systems: Present Problems and Prospects
MENG Fan-yi, WANG Ying, YU Hai, ZHU Zhi-liang
Computer Science. 2020, 47 (12): 1-10.  doi:10.11896/jsjkx.200800067
Software refactoring is the process of improving the design of existing code by changing its internal structure without affecting its external behavior, with the main aim of improving the quality of software products. It is therefore widely believed that refactoring improves quality factors such as understandability, maintainability, and extensibility. With the rapid development of open source software, the size and complexity of software are continuously increasing, and refactoring results on large-scale, complex software systems are often less than satisfactory. Improving the scalability of refactoring technology has therefore long been a hot topic in the software engineering field. From the perspective of technical debt, this paper explores refactoring opportunities and considers the impact of refactoring technology on software quality; refactoring technology should provide automated approaches that reduce maintenance costs and improve code quality. Based on the analysis of engineering examples and a literature review, this paper investigates 96 domestic and foreign publications in related fields since 2010. It first compares these studies from the perspective of complex systems and summarizes the research directions and technical methods in the field of software refactoring. It then explores the characteristics and difficulties of the field and considers the problems and shortcomings in current refactoring research. Finally, research trends in software refactoring technology are discussed.
Development of Complex Service Software in Microservice Era
WU Wen-jun, YU Xin, PU Yan-jun, WANG Qun-bo, YU Xiao-ming
Computer Science. 2020, 47 (12): 11-17.  doi:10.11896/jsjkx.200700181
Due to the increasing complexity of software systems in the era of microservices, traditional software development methods and techniques are no longer applicable. While the microservice architecture offers strong scalability and high flexibility for the development of complex service software, it also places higher demands on service operation, maintenance, and management capabilities. To tackle these challenges, this paper draws on research approaches from collective intelligence to explore new paradigms for building complex software systems. Guided by the methodology of complex systems and collective intelligence, it proposes a new technical approach for developing complex service software systems based on the microservice architecture. It elaborates the major ideas of this approach, including the adaptive software architecture, the modeling framework, development technologies, and typical supporting tools. Moreover, it presents a case study showing how to apply the approach in the realm of ride sharing.
What Do Users Think About Predictive Analytics? A Domestic Survey on NFRs
YANG Jing-wei, WEI Zi-qi, LIU Lin
Computer Science. 2020, 47 (12): 18-24.  doi:10.11896/jsjkx.201200055
With recent advances in data science, predictive analytics (PA) functions have been built into many commercial products, affecting several "non-functional" goals, including usability, performance, and transparency of the software, as well as the privacy and well-being of users. The direct and indirect consequences need to be better understood before service providers take further action in response. In this work, a survey is conducted on a sample of 565 domestic respondents from China regarding their acceptance of applications with PA. The results show that many consumers recognize the benefits of PA features, but they are not without concerns about transparency, privacy, and personal well-being. When users are highly concerned, they may choose not to use these features or even abandon the products altogether. Based on the survey results, this paper discusses how requirements engineering (RE) can help stakeholders make better decisions related to PA adoption and design, and how RE tools can help address user concerns related to PA.
Analysis of Focuses of Requirements Engineering in Industry
JIA Jing-dong, ZHANG Xiao-man, HAO Lu, TAN Huo-bin
Computer Science. 2020, 47 (12): 25-34.  doi:10.11896/jsjkx.201200048
In order to effectively guide theory into practice and further improve the quality of requirements engineering (RE), it is necessary to understand the focuses of RE in industry. To this end, this paper proposes a four-step research scheme based on data mining. First, suitable data sources are selected, including blogs and Q&A websites. Second, suitable keywords are determined, and data related to RE are crawled and cleaned. Then, according to the characteristics of the different data, text similarity analysis and data labeling are performed. Finally, the data are analyzed. The results show that the focuses of RE in domestic and foreign industry have both similarities and differences. Both focus on agile requirements, and both are concerned with the difference between user stories and use cases, which potentially reflects the requirements issues of hybrid development combining traditional with agile approaches in practice. The application of RE tools concerns both; although many types of RE tools are used in domestic practice, tools developed by domestic companies are relatively few. The concepts and methods of RE and the career development of requirements engineers are focuses of domestic industry but not of foreign industry. In addition, domestic industry pays more attention to requirements analysis than to requirements change, and two fields related to RE (testing and project management) also receive attention in domestic industry. These results can effectively guide the application of RE theory to the focuses of industry, help solve the difficulties in RE practice, and suggest possible research and development directions for academia and industry.
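The text similarity step in the scheme above is not detailed in the abstract; as a minimal, purely illustrative sketch (not the authors' implementation), it could be cosine similarity over raw term-frequency vectors, with the example documents below invented for the demonstration:

```python
from collections import Counter
import math

def cosine_similarity(doc_a: str, doc_b: str) -> float:
    """Cosine similarity between two documents using raw term-frequency vectors."""
    ta, tb = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(ta[w] * tb[w] for w in set(ta) & set(tb))
    norm_a = math.sqrt(sum(v * v for v in ta.values()))
    norm_b = math.sqrt(sum(v * v for v in tb.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)
```

A real pipeline would add tokenization suited to the source language, stop-word removal, and TF-IDF weighting, but the geometry of the comparison is the same.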
Classification and Analysis of Ubuntu Bug Reports Based on Topic Model
ZHOU Kai, REN Yi, WANG Zhe, GUAN Jian-bo, ZHANG Fang, ZHAO Yan-kang
Computer Science. 2020, 47 (12): 35-41.  doi:10.11896/jsjkx.200100022
Software bugs are the main cause of system failures, and a better understanding of bug characteristics is needed for software development and failure repair. Ubuntu is one of the most successful distributions of the Linux operating system and a popular open-source software platform worldwide. Using bug reports to discover bug characteristics, and to reasonably analyze and classify common operating-system bugs, has important guiding value for bug analysis during the development, testing, and maintenance of domestic mixed-source operating systems based on Ubuntu. First, 32 805 bug reports are downloaded from Launchpad with a crawler. By analyzing common Ubuntu bugs with a topic model, and based on the composition of the Ubuntu operating system and practical experience, the bugs are divided into five categories: kernel-related, desktop environment, network, hardware-driver-related anomalies, and system management anomalies. Next, the classification results are evaluated with the F1 measure. Finally, the general distribution rules and characteristics of recent bugs in the Ubuntu operating system are obtained by analyzing the statistics of the bug reports. At the same time, further analysis of the classification results yields findings and conclusions that help to further understand the bugs of the Ubuntu operating system.
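The F1 evaluation mentioned above is standard; a minimal sketch of per-class F1 for a flat multi-class labelling follows, with the category names and toy labels invented for the example:

```python
def f1_per_class(gold, pred, labels):
    """Per-class precision, recall and F1 for a flat multi-class labelling."""
    scores = {}
    for label in labels:
        tp = sum(1 for g, p in zip(gold, pred) if g == label and p == label)
        fp = sum(1 for g, p in zip(gold, pred) if g != label and p == label)
        fn = sum(1 for g, p in zip(gold, pred) if g == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores[label] = (2 * precision * recall / (precision + recall)
                         if precision + recall else 0.0)
    return scores

# Toy example: four bug reports, one misclassified as "kernel".
scores = f1_per_class(
    gold=["kernel", "network", "kernel", "desktop"],
    pred=["kernel", "kernel", "kernel", "desktop"],
    labels=["kernel", "network", "desktop"])
```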
Requirements Modeling and Decision-making for Machine Learning Systems
YANG Li, MA Jia-jia, JIANG Hua-xi, MA Xiao-xiao, LIANG Geng, ZUO Chun
Computer Science. 2020, 47 (12): 42-49.  doi:10.11896/jsjkx.201200021
Systems supported by machine learning are becoming more and more common. However, because the requirements of such systems are often difficult to express completely and may contain conflicts that are hard to detect, these systems usually cannot fully meet the comprehensive needs of users in real application environments. In addition, for machine learning systems (MLS) used in actual scenarios, user trust usually depends on the satisfaction of comprehensive requirements, including non-functional requirements such as interpretability and fairness; moreover, applications of machine learning in different fields usually have specific needs, which challenges the quality of requirement descriptions and of decision-making during implementation. To solve these problems, this paper presents a requirements and decision-making framework for machine learning systems, which includes a conceptual MLS requirements model and a meta-model of the MLS pipeline process, as well as decision-making methods for selecting training datasets and algorithms. The purpose is to standardize the design, development, and evaluation of requirements for machine learning used in actual scenarios. A case study shows that the proposed MLS requirement description and implementation method is feasible and effective.
System Usage Analysis and Failure Analysis for Cloud Computing
TIAN Yu-li, LI Ning
Computer Science. 2020, 47 (12): 50-55.  doi:10.11896/jsjkx.200700145
From the perspective of software system usage, analyzing system usage patterns and faults can help software providers grasp user demand more accurately, evaluate system quality, guide system operation, and improve system maintenance. Cloud computing systems (CCS) provide configurable, online-accessible computational solutions to end users from an integrated resource pool, and have received great attention from both academia and industry. Understanding CCS usage workloads and failure patterns is important for improving system resource utilization efficiency as well as system service reliability. This paper performs a deep analysis of the Google cluster dataset to characterize system operation in terms of both usage workloads and failure patterns. The results reveal potential vulnerabilities of the system and provide a basis for follow-up quality assurance activities.
Software Requirement Mining Method for Chinese APP User Review Data
WANG Ying, ZHENG Li-wei, ZHANG Yu-yao, ZHANG Xiao-yun
Computer Science. 2020, 47 (12): 56-64.  doi:10.11896/jsjkx.201200031
Mining requirements from APP user review data is an important way to obtain requirements, because users publish reviews on different dimensions of an APP in the application market, and these reviews contain many requirements. The APP user review data on the 360 Mobile Assistant is chosen for our experiments, with the aim of discovering the software requirements contained in these reviews. First, the software requirements contained in APP user reviews are divided into five categories: functions to be added, functions to be improved, performance, availability, and reliability. Second, data collection, labeling of user comments, and construction of an APP review requirements-mining dataset are carried out. Finally, the constructed dataset is used for model training and testing to compare the performance of deep learning methods with statistical machine learning models on this task. The results show that the deep learning models used in this paper (TextCNN, TextRNN, and Transformer) have clear advantages over traditional statistical machine learning models on this task.
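As a stand-in for the statistical machine learning baselines the abstract compares against (not the authors' models), a minimal multinomial Naive Bayes review classifier can be sketched as follows; the class names and toy reviews are invented for the example:

```python
import math
from collections import Counter

class NaiveBayesReviewClassifier:
    """Multinomial Naive Bayes with add-one smoothing over whitespace tokens."""

    def fit(self, texts, labels):
        self.classes = sorted(set(labels))
        self.prior = {c: math.log(labels.count(c) / len(labels)) for c in self.classes}
        self.counts = {c: Counter() for c in self.classes}
        for text, label in zip(texts, labels):
            self.counts[label].update(text.split())
        self.vocab = {w for c in self.classes for w in self.counts[c]}
        return self

    def predict(self, text):
        def log_prob(c):
            total = sum(self.counts[c].values()) + len(self.vocab)
            return self.prior[c] + sum(
                math.log((self.counts[c][w] + 1) / total) for w in text.split())
        return max(self.classes, key=log_prob)

# Toy training data: two requirement categories from the five listed above.
clf = NaiveBayesReviewClassifier().fit(
    texts=["add dark mode please", "add export feature",
           "app crashes on startup", "crashes and freezes often"],
    labels=["feature-to-add", "feature-to-add", "reliability", "reliability"])
```

A real Chinese-review pipeline would of course need word segmentation rather than whitespace splitting.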
Transformational Approach from Problem Models of Cyber-Physical Systems to Use Case Diagrams in UML
LI Zhi, DENG Jie, YANG Yi-long, WEI Shang-feng
Computer Science. 2020, 47 (12): 65-72.  doi:10.11896/jsjkx.201200044
Problem Frames (PF) have attracted extensive attention and research from the requirements engineering community, particularly for the environment-based modeling of cyber-physical systems (CPS). However, effectively transforming the problem models of PF (i.e., problem diagrams with associated descriptions) into software design and implementation is still an open problem. This paper proposes an approach for automatically transforming problem diagrams into UML (Unified Modeling Language) conceptual class diagrams and use case diagrams, which can directly guide downstream design and implementation. A method combining PF with model-driven technology is proposed. This method, together with the developed tool support, improves the quality of requirements through collaborative modeling by stakeholders and software designers, thus allowing a smooth transition from modeling in the problem space to software design in the solution space. The method is applied to the package router control problem, a benchmark case study in the PF literature, to illustrate its feasibility and its use in a practical setting, which plays an important role in promoting the PF approach from theory further into practice.
Transformation Method for AltaRica3.0 Model to NuSMV Model
CHEN Shuo, HU Jun, TANG Hong-ying, SHI Meng-ye
Computer Science. 2020, 47 (12): 73-86.  doi:10.11896/jsjkx.190400035
AltaRica 3.0 is a safety modeling and analysis language for safety-critical systems. AltaRica 3.0 lacks model checking techniques for temporal properties and does not support exhaustive state-space examination, whereas NuSMV supports exhaustive model checking. This paper extends AltaRica 3.0 and proposes transformation rules and algorithms from the AltaRica 3.0 model to the NuSMV model, based on the parser generator ANTLR (Another Tool for Language Recognition). First, ANTLR is used to construct the AST (Abstract Syntax Tree) of the flattened AltaRica 3.0 GTS model. Second, language-structure transformation rules are designed to capture the behavioral semantic correspondence between AltaRica 3.0 and NuSMV. Then the transformation algorithm G2N is designed: while traversing the AST, G2N retrieves and transforms the GTS model information stored in each node, and obtains the transformed NuSMV file through a continuous traversal-and-transformation process that preserves semantics. Finally, four typical cases from requirements engineering are used for experimental analysis to verify the effectiveness of G2N and the safety of the requirement models. Experiments show that the G2N algorithm can complete the conversion from AltaRica 3.0 models to NuSMV models at the lexical and grammatical levels.
Growth Framework of Autonomous Unmanned Systems Based on AADL
DING Rong, YU Qian-hui
Computer Science. 2020, 47 (12): 87-92.  doi:10.11896/jsjkx.201100173
In recent years, the development cost of autonomous unmanned systems has increased with the improvement of hardware performance, and how to develop such systems efficiently and intelligently is a hot research topic. The growth framework of autonomous unmanned systems based on AADL improves the software adaptability of unmanned systems (drones, unmanned vehicles, etc.) through the system architecture, a configuration-item-based working mode, and a prototype system. It realizes the growth and evolution of unmanned system software as resources, tasks, and environments change. The framework is based on model-driven thinking, and an AADL (Architecture Analysis and Design Language) model base is used to represent the intermediate components of the system; this not only retains the inheritance relationships between components but also helps developers observe the system structure more intuitively. System modularization is the basis for growth. Through a unified, standardized interface, the AADL model base encapsulates replaceable algorithms in intermediate components, and the iteration and evolution of the algorithms maps onto the sustainable evolution of the system. An ever-expanding library of system components is established by crawlers; in addition to adaptive extension functions, the component library also supports custom model-based functions. The growth characteristic of the framework is manifested not only in the expansion of system file content but also in the diversity of system configuration options. The optimal configuration scheme may change under different environments, tasks, and resource conditions, so the idea of evolutionary algorithms is adopted to find the optimal configuration options under adaptive conditions and let the system evolve autonomously. Finally, automatic code generation technology is used to realize the conversion from AADL models to system files. The feasibility of the growth framework is verified through the operation and testing of the growth software management platform.
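The evolutionary search over configuration options described above can be sketched with a minimal genetic algorithm over binary configuration vectors; this is an illustration only, with the fitness function `fit` entirely hypothetical:

```python
import random

def evolve_config(fitness, n_options, pop_size=20, generations=40, seed=0):
    """Minimal elitist genetic search over binary configuration vectors."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_options)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitism: keep the top half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_options)   # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n_options)] ^= 1  # point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

def fit(cfg):
    # Hypothetical fitness: reward enabling options 0, 2 and 4,
    # with a small cost per enabled option (e.g. resource usage).
    return cfg[0] + cfg[2] + cfg[4] - 0.1 * sum(cfg)

best = evolve_config(fit, n_options=6)
```

In the framework itself, fitness would instead score a candidate configuration against the current environment, task, and resource conditions.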
Study on DCGAN Model Improvement and SAR Images Generation
XU Yong-shi, BEN Ke-rong, WANG Tian-yu, LIU Si-jie
Computer Science. 2020, 47 (12): 93-99.  doi:10.11896/jsjkx.200700109
This paper proposes a method for generating SAR images based on an improved DCGAN. The method improves DCGAN by adopting a multi-generator versus single-discriminator model structure and using an algorithm to control the average quality of the images produced by each generator. To test and verify multiple similar image-recognition software products and select the best one, testers need images different from those used in training; this method can provide a fair set of benchmarks for such selective testing. In the experiments, target images are generated with both the original DCGAN model and the improved DCGAN model, and a shared discriminator is used to verify the quality of the new images generated by the two models. The experimental results show that the improved DCGAN model generates better images than the original DCGAN model; the new SAR images match the quality of the original SAR images with better diversity, and they can meet the needs of selective software testing.
Analysis of Key Developer Type and Robustness of Collaboration Network in Open Source Software
LU Dong-dong, WU Jie, LIU Peng, SHENG Yong-xiang
Computer Science. 2020, 47 (12): 100-105.  doi:10.11896/jsjkx.200300147
Taking the open source project AngularJS as an example, this paper studies the types of key developers and the robustness of the collaboration network in open source software. The network is constructed from the project's code-collaboration relationships to analyze its structure and function. The nodes in the network are classified into different types according to their structural and functional features. Then the impact of the turnover of different developers on the structural and functional robustness of the network is explored to identify the key developer types. Furthermore, a strategy to improve the robustness of the network is proposed by simulating the joining mechanism for new developers. The study shows that developers' structural and functional features are asymmetrical, which is why the structural and functional robustness of the network are inconsistent. Compared with traditional methods, the type division of developers identifies key developer types more effectively. Central core developers, who are active in the community, have connections with other communities, and also contribute heavily, have the greatest impact on the robustness of the network. New developers with a higher initial degree who choose a preferential connection mechanism can effectively improve the robustness of the collaboration network.
Software Crowdsourcing Task Recommendation Algorithm Based on Learning to Rank
YU Dun-hui, CHENG Tao, YUAN Xu
Computer Science. 2020, 47 (12): 106-113.  doi:10.11896/jsjkx.200300107
In order to recommend software crowdsourcing tasks more effectively, improve the quality of software development, recommend suitable tasks for workers, reduce the risk of damage to workers' interests, and achieve a win-win result for workers and crowdsourcing platforms, a software crowdsourcing task recommendation method based on learning to rank is designed. First, the hidden features between workers and tasks are extracted based on an improved latent factor model. Then the learning-to-rank model is improved by combining implicit information, and the extracted hidden features are ranked and trained to obtain the optimal ranking model. The ranking model sorts the test-set tasks into a task recommendation list for workers, and relevant evaluation indicators are used to verify the recommendation results. Experiments show that the proposed method can effectively improve the accuracy of software crowdsourcing task recommendation: the NDCG, MAP, and Recall values reach 0.722, 0.326, and 0.169, respectively. Compared with the user-based collaborative filtering algorithm, the recommendation accuracy is improved by 18.6%; compared with a rank-learning algorithm based on RankNet alone, the accuracy is improved by 10.2%. The method can effectively guide software crowdsourcing task recommendation.
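The latent-factor step above (before the ranking stage) can be sketched with plain SGD matrix factorization over worker-task scores; this is a generic illustration, not the paper's improved model, and the toy ratings are invented:

```python
import random

def train_latent_factors(ratings, n_workers, n_tasks,
                         k=4, lr=0.05, reg=0.02, epochs=300, seed=0):
    """SGD matrix factorization: score(w, t) is approximated by P[w] . Q[t]."""
    rng = random.Random(seed)
    P = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_workers)]
    Q = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_tasks)]
    for _ in range(epochs):
        for w, t, r in ratings:
            err = r - sum(P[w][f] * Q[t][f] for f in range(k))
            for f in range(k):
                pw, qt = P[w][f], Q[t][f]
                P[w][f] += lr * (err * qt - reg * pw)   # gradient step on P
                Q[t][f] += lr * (err * pw - reg * qt)   # gradient step on Q
    return P, Q

def recommend(P, Q, worker, top_n=3):
    """Rank tasks for one worker by predicted score."""
    scores = [(sum(P[worker][f] * q[f] for f in range(len(q))), t)
              for t, q in enumerate(Q)]
    return [t for _, t in sorted(scores, reverse=True)[:top_n]]

# Toy data: workers 0 and 1 both did well on tasks 0 and 2, poorly on task 1.
ratings = [(0, 0, 5), (0, 1, 1), (1, 0, 5), (1, 1, 1), (0, 2, 5), (1, 2, 5)]
P, Q = train_latent_factors(ratings, n_workers=2, n_tasks=3)
```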
Open-source Course and Open-sourcing Intro to Data Science
CHAO Le-men
Computer Science. 2020, 47 (12): 114-118.  doi:10.11896/jsjkx.200900028
A novel initiative called Open Source Courses is proposed as an alternative solution for improving the poor efficiency of teachers' lesson preparation, extending the limited incentive coverage of education management departments, and protecting the interests of teachers in MOOC courses. Furthermore, the definition and characteristics of open source courses are described, and the process, principles, and key technologies of open source course construction are proposed based on lessons from building the first such course, Introduction to Data Science. The Open Source Course initiative is a novel teaching-reform idea after MOOCs, with high reference value for further promoting teaching reform, especially for reusing teaching resources, improving the quality of lesson preparation, sharing teaching experience, and making up for the deficiencies of MOOC courses.
Network Representation Learning Method on Fusing Node Structure and Content
ZHANG Hu, ZHOU Jing-jing, GAO Hai-hui, WANG Xin
Computer Science. 2020, 47 (12): 119-124.  doi:10.11896/jsjkx.190900027
With the rapid development of neural network technology, network representation learning for complex networks has received more and more attention. It aims to learn low-dimensional latent representations of the nodes in a network and to apply the learned representations effectively to various analysis tasks on graph data. Typical shallow random-walk network representation models are mainly based on two kinds of representation: node structure similarity and node content similarity. However, these methods cannot effectively capture structural and content similarity at the same time, and they perform poorly on network data where structure and content matter equally. To this end, this paper explores the fused characteristics of node structure and node content, and proposes a representation method called SN2vec based on the joint learning of unsupervised shallow neural networks. To validate the effectiveness of the proposed model, multi-label classification and low-dimensional visualization tasks are conducted on the Brazilian air-traffic, American air-traffic, and Wikipedia datasets. The results show that the Micro-F1 of SN2vec on the multi-label classification task is better than that of existing shallow random-walk network representation methods, and that SN2vec also learns better latent structural representations of consistent nodes.
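The shallow random-walk family mentioned above shares a first stage: generating truncated random walks that serve as node "sentences" for a skip-gram learner. A minimal sketch of that stage (a generic DeepWalk-style step, not SN2vec itself) with a toy adjacency dict:

```python
import random

def random_walks(adj, num_walks=10, walk_len=6, seed=0):
    """Truncated random walks over an adjacency dict {node: [neighbors]}."""
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for start in adj:                 # every node starts num_walks walks
            walk = [start]
            while len(walk) < walk_len:
                neighbors = adj[walk[-1]]
                if not neighbors:
                    break
                walk.append(rng.choice(neighbors))
            walks.append(walk)
    return walks

# Toy path graph a - b - c.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
walks = random_walks(adj)
```

The resulting walks would then be fed to a skip-gram model; SN2vec additionally brings node content into the joint objective.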
Financial Data Prediction Method Based on Deep LSTM and Attention Mechanism
LIU Chong, DU Jun-ping
Computer Science. 2020, 47 (12): 125-130.  doi:10.11896/jsjkx.200700050
With the rapid development of the Internet, financial markets generate a large amount of online data every day, such as the number of daily transactions and the total transaction amount. The dynamic prediction of financial market data has become a research hotspot in recent years. However, financial markets produce large volumes of data with many input sequences that change over time. To address these problems, this paper proposes a financial data prediction model based on deep LSTM and an attention mechanism. First, the model can handle complex financial market data, which are mainly multi-sequence data. Second, the model uses deep LSTM networks to model financial data, solving the problem of long-range dependence between data and learning more complex market dynamics. Finally, the model introduces an attention mechanism, which gives data from different times different importance for the prediction and makes the prediction more accurate. Experiments on real large datasets show that the proposed model achieves high accuracy and good stability in dynamic prediction.
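The core of the attention step, giving different time steps different importance, is a softmax-weighted pooling of the LSTM hidden states. A minimal sketch (generic attention pooling, not the paper's exact formulation; the toy hidden states and scores are invented):

```python
import math

def attention_pool(hidden_states, scores):
    """Softmax-weight the per-time-step hidden states and sum them."""
    m = max(scores)
    exp = [math.exp(s - m) for s in scores]   # numerically stable softmax
    z = sum(exp)
    weights = [e / z for e in exp]
    dim = len(hidden_states[0])
    context = [sum(w * h[d] for w, h in zip(weights, hidden_states))
               for d in range(dim)]
    return context, weights

# Two time steps; the second gets a much higher attention score.
context, weights = attention_pool([[1.0, 0.0], [0.0, 1.0]], scores=[0.0, 10.0])
```

In the full model the scores themselves would be learned from the hidden states, and the context vector would feed the prediction layer.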
Extraction of Water Conservancy Spatial Relationship Words Based on Bootstrapping
XIANG Ying, FENG Jun, XIA Pei-pei, LU Jia-min
Computer Science. 2020, 47 (12): 131-138.  doi:10.11896/jsjkx.191000161
At present, the following problems arise when extracting water conservancy spatial relation words while constructing a knowledge graph from a water conservancy domain database. First, there are few spatial relation words for water conservancy objects in the database, which makes it difficult to meet query needs. Second, the relationships between water conservancy objects are complex, and relying on manual construction is too laborious. To solve these problems, this paper first extracts spatial relation words from professional, high-quality official water conservancy documents to form a seed set. It then expands the spatial relation words through external dictionaries and, combined with the corpus, extracts syntactic patterns for water-related spatial relation words. Finally, through the generalized syntactic patterns, spatial relation words are extracted from large-scale water conservancy text data, spatial relationship triples are generated, and these are used as the new seed set. Repeating the above steps gradually expands the construction. This method can obtain a large number of spatial semantic syntactic patterns and spatial relationship tuples from the corpus with a small amount of manual work, eventually forming a dictionary of water conservancy spatial relation words. The dictionary plays an important role in expanding the knowledge graph of water conservancy objects and improving the accuracy of intelligent retrieval.
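The seed-pattern-harvest loop above is classic bootstrapping. A deliberately tiny sketch (toy English sentences stand in for the water conservancy corpus; a real system would generalize patterns far more carefully and score them to limit drift):

```python
import re

def bootstrap_relation_words(corpus, seeds, rounds=3):
    """Seed relation words induce patterns with a relation slot; the patterns
    then harvest new relation words from the corpus, and the loop repeats."""
    words = set(seeds)
    for _ in range(rounds):
        patterns = set()
        for sentence in corpus:
            for w in words:
                if w in sentence:
                    # Turn the sentence into a pattern with a relation slot.
                    patterns.add(re.escape(sentence).replace(re.escape(w), r"(\w+)"))
        new_words = set()
        for sentence in corpus:
            for p in patterns:
                new_words.update(re.findall(p, sentence))
        if new_words <= words:   # fixed point: nothing new harvested
            break
        words |= new_words
    return words

corpus = ["the dam lies upstream of the gate",
          "the dam lies downstream of the gate"]
found = bootstrap_relation_words(corpus, seeds={"upstream"})
```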
Parallel FP_growth Association Rule Mining Method on the Spark Platform
ZHU An-qing, LI Shuai, TANG Xiao-dong
Computer Science. 2020, 47 (12): 139-143.  doi:10.11896/jsjkx.191000110
In order to improve the efficiency of association rule mining, a parallel FP_growth association rule mining method for the Spark platform is proposed. First, the Spark platform is used to perform the traversal scan in the in-memory RDDs of all nodes of the distributed system to obtain frequent sets, generate the FP_Table, and update the FP_Tree. Then time series prediction is introduced to forecast the itemsets to be mined, so that all nodes in the distributed system share the mining tasks in a balanced manner, making full use of each node's FP_Tree traversal capacity to obtain the FP_growth association rule mining results. The experimental results show that, compared with the single-machine case, parallelized FP_growth association rule mining improves efficiency by about 60%. After load balancing, the mining efficiency of FP_growth association rules increases by about a further 14%, which indicates that the traversal tasks are allocated to the nodes more evenly and the degree of parallelism is higher.
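The frequent-set counting that seeds the FP_Table can be illustrated with a compact level-wise (Apriori-style) counter; this is a single-machine stand-in for intuition, not the paper's Spark FP_growth, and the toy transactions are invented:

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Level-wise frequent-itemset counting over a list of transactions."""
    n = 1
    current = {frozenset([item]) for t in transactions for item in t}
    frequent = {}
    while current:
        counts = Counter()
        for t in transactions:
            tset = set(t)
            for cand in current:
                if cand <= tset:
                    counts[cand] += 1
        kept = {c: s for c, s in counts.items() if s >= min_support}
        frequent.update(kept)
        n += 1
        items = set().union(*kept) if kept else set()
        current = {frozenset(c) for c in combinations(sorted(items), n)}
    return frequent

transactions = [["milk", "bread"], ["milk", "bread", "egg"],
                ["bread", "egg"], ["milk", "egg"]]
fs = frequent_itemsets(transactions, min_support=2)
```

FP_growth reaches the same result without candidate generation by compressing the transactions into an FP_Tree; the Spark version additionally partitions that mining work across nodes.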
Personalized Recommendation of Social Network Users' Interest Points Based on Probability Matrix Decomposition Algorithm
ZHANG Min-jun, HUA Qing-yi
Computer Science. 2020, 47 (12): 144-148.  doi:10.11896/jsjkx.191000064
In the social network environment, traditional personalized recommendation methods for social network users' points of interest suffer from low prediction accuracy of users' interest behavior and low coverage of users' social data, and cannot fully mine the temporal and spatial sequence characteristics of users' points of interest. Therefore, a personalized point-of-interest recommendation method for social network users based on a probability matrix decomposition algorithm is proposed. In the model-training pseudocode group, the numerical results related to the matrix probability mutation operator are calculated to achieve a physical segmentation of the social network, and node modeling of the social network based on the probability matrix decomposition algorithm is completed. On this basis, the framework of the personalized social network is built, results are mined according to the characteristics of users' interest behavior, and personalized recommendation nodes are selected, completing the personalized point-of-interest recommendation method for social network users. Practical tests show that, compared with traditional methods, the new method predicts the interest behavior of network users with accuracy of up to 100% and covers about 75% of users' social data, improving prediction accuracy and social data coverage and fully mining the temporal and spatial sequence characteristics of users' points of interest.
Survey of Image Captioning Methods
MIAO Yi, ZHAO Zeng-shun, YANG Yu-lu, XU Ning, YANG Hao-ran, SUN Qian
Computer Science. 2020, 47 (12): 149-160.  doi:10.11896/jsjkx.200500039
Image captioning is a task that takes an image as input and generates a natural language description of the image through modeling and computation, so that computers gain the ability to "talk about pictures". It is a new type of computer vision task after image recognition, image segmentation, and target tracking. This paper reviews the development of image captioning and gives a detailed survey of image captioning methods based on templates, retrieval, and deep learning. It focuses in particular on deep learning-based methods and discusses the experimental results of the various approaches. Evaluation metrics and the common datasets used in this field are introduced in detail. Finally, the paper points out open problems and future research directions.
Fine-grained Facial Makeup Image Ordering via Language
YAO Lin-li, CHEN Shi-zhe, JIN Qin
Computer Science. 2020, 47 (12): 161-168.  doi:10.11896/jsjkx.200800209
Abstract PDF(2150KB) ( 973 )   
References | Related Articles | Metrics
This paper studies text-based fine-grained visual reasoning in the makeup domain and explores a novel multi-modal task: sorting a set of facial images from a makeup video into the correct order according to given ordered step descriptions. For this task, the paper first processes and analyzes the data to learn the characteristics of the makeup dataset, and then proposes two baseline models for the image ordering task. The first baseline uses image information only and ignores the guiding role of the text descriptions, i.e., a single-modal approach. The second model utilizes text semantics to guide image ordering; it establishes the relationship between step descriptions and images and can reason about the visual appearance changes each step brings about. Extensive experiments on the YouMakeup VQA dataset show that the two models are complementary and achieve good performance on the image ordering task, with selection accuracies on the test set of 70% and 58.93%, respectively.
Robust Long-term Adaptive Object Tracking Based on Multi-correlation Filtering Strategy
TAN Jian-hao, YIN Wang, LIU Li-ming, WANG Yao-nan
Computer Science. 2020, 47 (12): 169-176.  doi:10.11896/jsjkx.191000021
Abstract PDF(5328KB) ( 1046 )   
References | Related Articles | Metrics
Traditional correlation filtering methods have achieved excellent performance and shown great robustness to motion blur and illumination changes. However, tracking remains difficult when the object undergoes interference such as deformation, color change or heavy occlusion, and these methods show poor robustness once the object is lost and cannot be recovered for long-term tracking. Therefore, this paper proposes a robust long-term object tracking algorithm. First, a feature complementation strategy is proposed, which linearly weights the feature responses of the histogram of oriented gradients and the global color histogram and learns a correlation filtering model robust to color changes and deformations to estimate the target displacement. Then, the object features are used to learn a discriminative correlation filter that maintains long-term memory of the object's appearance; the output responses of this model determine whether tracking failure has occurred. An online SVM classifier re-detects and re-tracks the lost object, which effectively recovers the tracker from failure and enables long-term tracking. In addition, a correlation filter is learned over a feature pyramid centered at the estimated object position to predict scale changes and further enhance robustness and accuracy. Finally, the proposed algorithm is compared with state-of-the-art tracking algorithms on the online object tracking benchmark, and the results show great robustness and accuracy.
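As an illustration of the feature complementation step described above, the sketch below linearly weights two toy response maps (one standing in for the HOG-filter response, one for the color-histogram response) and reads the displacement off the fused peak; the merge weight `gamma` is a hypothetical value, not one from the paper.

```python
import numpy as np

def fuse_responses(resp_hog, resp_color, gamma=0.7):
    """Linearly weight the two per-pixel response maps
    (gamma is an illustrative merge weight)."""
    return gamma * resp_hog + (1.0 - gamma) * resp_color

def estimate_displacement(resp):
    """Target displacement = offset of the response peak from the centre."""
    h, w = resp.shape
    y, x = np.unravel_index(np.argmax(resp), resp.shape)
    return y - h // 2, x - w // 2

# toy maps: both cues agree on a peak slightly off-centre
resp_hog = np.zeros((31, 31)); resp_hog[17, 14] = 1.0
resp_color = np.zeros((31, 31)); resp_color[17, 14] = 0.8
dy, dx = estimate_displacement(fuse_responses(resp_hog, resp_color))
```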
Social Image Tag and Group Joint Recommendation Based on Deep Multi-task Learning
GENG Lei-lei, CUI Chao-ran, SHI Cheng, SHEN Zhen, YIN Yi-long, FENG Shi-hong
Computer Science. 2020, 47 (12): 177-182.  doi:10.11896/jsjkx.191000141
Abstract PDF(2998KB) ( 1060 )   
References | Related Articles | Metrics
With the rapid development of multimedia sharing websites, social image recommendation has recently become a hot research topic. Social images are much easier to manage by tagging and grouping them. Traditional image recommendation methods tend to focus on a single task, such as tag or group recommendation, and ignore the correlation between the two tasks. By fusing image features extracted from multiple tasks, multi-task learning exploits shared or correlated representations among tasks to improve the accuracy of each single task. Therefore, this paper proposes a social image tag and group joint recommendation model based on deep multi-task learning. For each single task, tag and group recommendation are solved with a comparison-based partial-order deep learning method to alleviate data sparsity, and the features extracted from the intermediate layers are saved for multi-task learning. In the convolutional neural network that processes the visual features of social images, the features of the two recommendation tasks are concatenated; dimension reduction and automatic fusion of the features are then realized by convolution, so that image features extracted for the different recommendation tasks are shared. Moreover, the size of the processed features fits the next layer of the convolutional neural network, so the network architecture of each single recommendation task can be maintained. Experimental results on a real Flickr dataset show that, compared with traditional methods, the accuracy and recall of the proposed algorithm are greatly improved, which proves its effectiveness and feasibility.
Study on Joint Generation of Bilingual Image Captions
ZHANG Kai, LI Jun-hui, ZHOU Guo-dong
Computer Science. 2020, 47 (12): 183-189.  doi:10.11896/jsjkx.190900181
Abstract PDF(1960KB) ( 940 )   
References | Related Articles | Metrics
Most research on image captioning generates a caption in a single language from an image, but against the background of increasing exchange among languages, it is often necessary to generate captions in two or even more languages for one image, so that native speakers of each language can understand what others say about it. This paper therefore proposes an approach to the joint generation of bilingual image captions, i.e., generating two captions in two languages for one image. The architecture consists of an encoder and two decoders: the encoder uses a convolutional neural network to extract image features, while the decoders adopt Long Short-Term Memory networks. Motivated by the fact that the two captions of an image are semantically equivalent, a joint model is proposed in which the two decoders generate the captions in an alternating way, so that the decoding history of both languages is available when predicting the next word. Experimental results on the MSCOCO2014 dataset show that joint generation of bilingual captions improves performance in both languages simultaneously. Compared with single-language English captioning, BLEU_4 increases by 1.0 and CIDEr by 0.98; compared with single-language Japanese captioning, BLEU_4 increases by 1.0 and CIDEr by 0.31.
Cross Subset-guided Adaptive Measurement for Block Compressive Sensing
TIAN Wei, LIU Hao, CHEN Gen-long, GONG Xiao-hui
Computer Science. 2020, 47 (12): 190-196.  doi:10.11896/jsjkx.200800197
Abstract PDF(2940KB) ( 962 )   
References | Related Articles | Metrics
Compared with traditional image processing methods, block compressive sensing performs acquisition and compression concurrently at very low complexity, making it an ideal choice for wireless sensors with limited power. For block compressive sensing of any image, this paper proposes a cross subset-guided adaptive measurement method. The proposed method adaptively allocates the sampling subrate to different regions and introduces spatial prediction of measurement blocks, which effectively improves both the quality of image reconstruction and the coding efficiency of measurement blocks. Specifically, starting from the center block in spiral scanning order, all blocks of an image are divided into three regions: inner, middle and outer. A few blocks from each region are put into a cross subset. Firstly, the blocks of each cross subset are measured with the same measurement matrix at a basic sampling subrate. Secondly, according to the cross-subset measurement values of the three regions, weights are computed to assign different sampling subrates to the remaining subset. Thirdly, the remaining-subset blocks of the three regions are measured at sampling subrates proportional to their weights. For each measurement block, the optimal predictive block is found in its surrounding area, and the difference between them is quantized by scalar quantization. Experimental results show that, compared with existing measurement methods, the proposed method not only improves the subjective quality of the reconstructed image but also improves the average rate-distortion performance of image reconstruction by at least 3.2%.
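The weight-proportional subrate assignment in the second and third steps can be sketched as follows; the base subrate, budget and region weights here are illustrative numbers, not values from the paper.

```python
def allocate_subrates(weights, base, budget):
    """Assign remaining-subset sampling subrates proportional to the
    cross-subset weights of the regions (illustrative scheme)."""
    total = sum(weights)
    return [base + budget * w / total for w in weights]

# hypothetical weights for the inner, middle and outer regions,
# derived from the cross-subset measurement values
rates = allocate_subrates([0.5, 0.3, 0.2], base=0.1, budget=0.3)
```

Regions whose cross subsets carry more measurement energy receive a higher subrate on top of the shared base, while the total sampling budget stays fixed.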
Blurred Image Recognition Based on LoG Edge Detection and Enhanced Local Phase Quantization
CHEN Xiao-wen, LIU Guang-shuai, LIU Wang-hua, LI Xu-rui
Computer Science. 2020, 47 (12): 197-204.  doi:10.11896/jsjkx.191000054
Abstract PDF(4315KB) ( 1201 )   
References | Related Articles | Metrics
As a blur-insensitive texture descriptor, the local phase quantization (LPQ) algorithm describes blur-invariant phase features insufficiently accurately and lacks the ability to describe important image details. To address these issues, an enhanced local phase quantization (ELPQ) combined with Laplacian of Gaussian (LoG) edge detection is proposed in this paper, named MrELPQ&MsLoG (Multi-resolution ELPQ and Multi-scale LoG). Firstly, the real and imaginary parts obtained by applying the short-term Fourier transform to the image are subjected to positive-negative quantization and amplitude quantization, yielding the complementary sign feature ELPQ_S and amplitude feature ELPQ_M. Secondly, edge features in the spatial domain are obtained by convolving the image with multi-scale Laplacian of Gaussian filters. Finally, the frequency-domain sign feature ELPQ_S and amplitude feature ELPQ_M are combined with the spatial-domain edge features, and the recognition result is obtained with an SVM. On the Brodatz and KTH-TIPS texture databases with blur interference, the ELPQ algorithm improves greatly over the original LPQ algorithm, and the MrELPQ&MsLoG algorithm further improves the recognition rate. On the AR, Extended Yale B and railway fastener databases with blur interference, compared with current blur-robust algorithms, MrELPQ&MsLoG consistently maintains a high recognition rate. The experimental results show that MrELPQ&MsLoG is robust to blur and requires less time for feature extraction.
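For context, classic LPQ binary-codes the signs of the real and imaginary parts of a few low-frequency STFT coefficients at each pixel; the paper's ELPQ additionally quantizes amplitudes. The sketch below shows only the standard sign-quantization step on four hypothetical coefficients.

```python
def lpq_sign_code(coeffs):
    """8-bit LPQ code from the signs of the real and imaginary parts of
    four low-frequency STFT coefficients at one pixel (classic LPQ step;
    the paper's ELPQ also quantizes amplitudes)."""
    bits = []
    for c in coeffs:
        bits.append(1 if c.real >= 0 else 0)
        bits.append(1 if c.imag >= 0 else 0)
    return sum(b << i for i, b in enumerate(bits))

# hypothetical STFT coefficients at one pixel
code = lpq_sign_code([1 + 1j, -2 + 0.5j, 3 - 1j, -0.1 - 0.2j])
```

Because blur mostly attenuates magnitudes while preserving low-frequency phase signs, histograms of these codes stay stable under blur.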
Cross-modal Retrieval Method for Special Vehicles Based on Deep Learning
SHAO Yang-xue, MENG Wei, KONG Deng-zhen, HAN Lin-xuan, LIU Yang
Computer Science. 2020, 47 (12): 205-209.  doi:10.11896/jsjkx.191000132
Abstract PDF(2297KB) ( 1164 )   
References | Related Articles | Metrics
Ensuring the right of way of special vehicles is a premise for the rational allocation of urban traffic resources and for implementing and guaranteeing emergency rescue. Cross-modal identification of special vehicles is a core technology for realizing intelligent transportation, especially while the Internet of Vehicles is not yet mature and unmanned and manned vehicles will share the road for a long time to come; making way appropriately for special vehicles performing a mission is particularly important. Aiming at the need of driverless vehicles to identify special vehicles, this paper constructs a cross-modal retrieval and recognition network (CMR2Net) and proposes a deep learning-based method for cross-modal recognition and retrieval of special vehicles. CMR2Net consists of two convolutional sub-networks and one feature fusion network. The convolutional sub-networks extract features from the image and the audio of a special vehicle; a similarity measure in the high-level semantic space then performs feature matching to achieve cross-modal retrieval and recognition. Cross-modal identification experiments on a special-vehicle cross-modal dataset show that the method achieves a high recognition rate on cross-modal retrieval and recognition tasks, and special vehicles can be accurately identified even when one modality is absent. This research has theoretical guiding significance for improving the performance of the "urban brain" and can also be used in engineering to design, realize and improve future smart transportation.
Double Weighted Learning Algorithm Based on Least Squares
LI Bin, LIU Quan
Computer Science. 2020, 47 (12): 210-217.  doi:10.11896/jsjkx.191100084
Abstract PDF(2633KB) ( 1100 )   
References | Related Articles | Metrics
Reinforcement learning is one of the most challenging topics in the field of artificial intelligence. The least-squares method is an advanced function approximation technique for reinforcement learning, with the advantages of fast convergence and efficient use of sample data. After studying and analyzing the least-squares temporal difference algorithm (LSTD), this paper proposes a double-weight least-squares Sarsa algorithm (DWLS-Sarsa) based on LSTD. DWLS-Sarsa combines two weight vectors in a certain way and controls the temporal difference error with the Sarsa method. During training, the two weights take different values because they are updated on different samples, and they gradually narrow the gap between them until they converge to the same optimal value due to the distribution of the sample data, which ensures both the exploration performance and the convergence of the algorithm. Finally, DWLS-Sarsa is compared experimentally with other reinforcement learning algorithms. The results show that DWLS-Sarsa deals with local optima effectively, achieves a more precise convergence value, and has better learning performance and robustness.
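The abstract only says the two weights are combined "in a certain way". The sketch below is a hypothetical double-learning-style reading (average the two linear estimates for the bootstrap target, update one randomly chosen weight vector per step with the Sarsa TD error), not the paper's actual rule; it illustrates how two weight vectors updated on different samples can still converge toward the same value.

```python
import numpy as np

rng = np.random.default_rng(1)

def q_value(w, phi):
    """Linear value estimate under one weight vector."""
    return float(w @ phi)

def combined_q(w1, w2, phi):
    """One plausible merge of the two approximators: their average."""
    return 0.5 * (q_value(w1, phi) + q_value(w2, phi))

def dw_sarsa_update(w1, w2, phi, r, phi_next, alpha=0.1, gamma=0.9):
    """Update one randomly chosen weight vector with the Sarsa TD error,
    bootstrapping from the combined estimate."""
    target = r + gamma * combined_q(w1, w2, phi_next)
    if rng.random() < 0.5:
        w1 += alpha * (target - q_value(w1, phi)) * phi
    else:
        w2 += alpha * (target - q_value(w2, phi)) * phi
    return w1, w2

# toy one-transition task: feature phi, reward 1, successor phi_next
w1, w2 = np.zeros(3), np.zeros(3)
phi, phi_next = np.array([1.0, 0, 0]), np.array([0, 1.0, 0])
for _ in range(500):
    w1, w2 = dw_sarsa_update(w1, w2, phi, r=1.0, phi_next=phi_next)
gap = abs(q_value(w1, phi) - q_value(w2, phi))
```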
Crow Search Algorithm with Cauchy Mutation and Adaptive Step Size
HUO Lin, GUO Ya-rong, QIN Zhi-jian
Computer Science. 2020, 47 (12): 218-225.  doi:10.11896/jsjkx.191100207
Abstract PDF(2000KB) ( 1505 )   
References | Related Articles | Metrics
Aiming at the slow convergence and local-optimum problems of the crow search algorithm, this paper proposes a Cauchy mutation crow search algorithm with adaptive step size (CMCSA), which improves the position updating strategy of the two situations in the standard crow search algorithm. In each iteration, Cauchy mutation is applied to the global best (gbest) to enhance the global search capability and increase the variation range, thereby improving population diversity and avoiding local optima. A discriminant probability is introduced to optimize the position updating strategy of the current individual when the leader finds itself followed. The step length is adapted according to the distance between the current position and the leader's position, so that the algorithm converges smoothly and quickly to the global optimum; this controls search speed and accuracy and effectively compensates for the blindness and slow convergence of the standard CSA. To evaluate its effectiveness, CMCSA is applied to ten basic test functions and compared with eight other well-known and recent intelligent optimization algorithms. The experimental results show that the proposed algorithm is superior to the others in average convergence and robustness; it ranks first on average in both mean value and standard deviation and therefore has better overall performance.
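The two ingredients named above can be sketched as follows; the mutation scale and the step-size schedule are hypothetical choices for illustration, since the abstract does not give the paper's formulas.

```python
import numpy as np

rng = np.random.default_rng(42)

def cauchy_mutate(gbest, scale=0.1):
    """Perturb the global best with heavy-tailed Cauchy noise to widen
    the search range (scale is an illustrative parameter)."""
    return gbest + scale * rng.standard_cauchy(gbest.shape)

def adaptive_step(pos, leader, fl_max=2.0):
    """Shrink the flight length as a crow closes in on the leader,
    moving from exploration to fine local search (hypothetical schedule)."""
    dist = np.linalg.norm(leader - pos)
    return fl_max * dist / (1.0 + dist)

pos, leader = np.zeros(2), np.array([3.0, 4.0])
step_far = adaptive_step(pos, leader)                  # far away: large step
step_near = adaptive_step(np.array([2.9, 3.9]), leader)  # close: small step
```

The Cauchy distribution's heavy tails occasionally throw gbest far from its current basin, which is what helps the population escape local optima.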
Signal Control of Single Intersection Based on Improved Deep Reinforcement Learning Method
LIU Zhi, CAO Shi-peng, SHEN Yang, YANG Xi
Computer Science. 2020, 47 (12): 226-232.  doi:10.11896/jsjkx.200300021
Abstract PDF(3420KB) ( 1717 )   
References | Related Articles | Metrics
Using deep reinforcement learning to achieve signal control is a research hotspot in the field of intelligent transportation. Existing research mainly focuses on comprehensively describing traffic conditions in a reinforcement learning formulation and on designing effective reinforcement learning algorithms to solve the signal timing problem. However, the influence of the signal state on action selection and the sampling efficiency of the experience pool are rarely considered, which may result in an unstable training process and slow convergence. This paper incorporates the signal state into the state design of the agent model and introduces action reward and punishment coefficients to adjust the agent's action selection so as to satisfy the minimum and maximum green-time constraints. Meanwhile, considering the temporal correlation of short-term traffic flow, the PSER (Priority Sequence Experience Replay) method is used to update the priorities of sequence samples in the experience pool, which helps the agent obtain preceding correlated samples that better match the current traffic conditions. Double deep Q-networks and dueling deep Q-networks are then used to improve the performance of the DQN (Deep Q-Network) algorithm. Finally, taking the single intersection of Shixinzhong Road and Shanyin Road, Xiaoshan District, Hangzhou as an example, the algorithm is verified on the simulation platform SUMO (Simulation of Urban Mobility). Experimental results show that the proposed agent model outperforms unconstrained single-state agent models for traffic signal control, and the proposed algorithm effectively reduces the average waiting time of vehicles and the total queue length at the intersection; its overall control performance is better than the actual signal timing strategy and the traditional DQN algorithm.
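The reward/punishment coefficient idea can be illustrated as a simple reward-shaping rule: actions that would violate the green-time constraints are penalized so the agent learns to avoid them. The threshold and penalty values below are illustrative, not taken from the paper.

```python
def constrained_reward(base_reward, green_time, g_min=10, g_max=60,
                       penalty=5.0):
    """Shape the reward: subtract a punishment term when the chosen
    action would break the minimum/maximum green-time constraint
    (illustrative values, not from the paper)."""
    if green_time < g_min or green_time > g_max:
        return base_reward - penalty
    return base_reward

r_ok = constrained_reward(1.0, green_time=30)    # within [10, 60]
r_short = constrained_reward(1.0, green_time=5)  # below the minimum
```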
Chinese Event Detection Based on Document Information and Bi-GRU
ZHU Pei-pei, WANG Zhong-qing, LI Shou-shan, WANG Hong-ling
Computer Science. 2020, 47 (12): 233-238.  doi:10.11896/jsjkx.191100031
Abstract PDF(1841KB) ( 1111 )   
References | Related Articles | Metrics
Event extraction is an important task in information extraction, and event detection is the key to event extraction. Existing Chinese neural event detection methods are sentence-based, and the local context they capture is not enough to resolve the semantic ambiguity of event triggers. To solve this problem, this paper studies the effect of document-level information. Firstly, based on bidirectional gated recurrent units (Bi-GRU), three windows are defined to learn sentence features. Then the sentence-level representations are concatenated and document features are learned with another bidirectional gated recurrent unit network. Finally, to enrich the semantic information of sentences and reduce the semantic ambiguity of event triggers, the sentence-level and document-level representations are merged and event triggers are classified through a Softmax function. Experimental results on the ACE2005 dataset show that the sentence-context representation improves Chinese event detection, and the proposed method outperforms the state-of-the-art by 1.5% in F1.
Solving Multi-flexible Job-shop Scheduling by Multi-objective Algorithm
DONG Hai, XU Xiao-peng, XIE Xie
Computer Science. 2020, 47 (12): 239-244.  doi:10.11896/jsjkx.191100042
Abstract PDF(2591KB) ( 1336 )   
References | Related Articles | Metrics
In view of machine flexibility, worker flexibility and parallel operation flexibility in job-shop scheduling, this paper models parallel operation flexibility by replacing sequence constraints between individual operations with sequence constraints between priorities, and proposes a multi-flexible job-shop scheduling model whose objectives are minimizing the maximum completion time, the total energy consumption and the average completion time. A four-chromosome coding method and corresponding crossover and mutation operators are designed, in which two chromosomes encode the processing sequence. A multi-objective optimization algorithm is proposed that combines the structure of invasive tumor growth optimization with the screening mechanism of NSGA-II. The algorithm uses fast non-dominated sorting and a feature-based selection method to classify and transform cells, and a mechanism is designed to replace duplicate cells. Finally, the proposed algorithm is compared with several intelligent algorithms on benchmark examples in terms of hypervolume, distribution and extensibility, which proves its effectiveness and feasibility.
TNTlink Prediction Model Based on Feature Learning
WANG Hui, LE Zi-chun, GONG Xuan, ZUO Hao, WU Yu-kun
Computer Science. 2020, 47 (12): 245-251.  doi:10.11896/jsjkx.190700020
Abstract PDF(2793KB) ( 971 )   
References | Related Articles | Metrics
In a co-author network, link prediction predicts missing links in the current network as well as newly formed or dissolved links. Inferring whether two authors will cooperate in the near future from the observed information in the network is of great significance for mining and analyzing the evolution of the network and for rebuilding the network model. As an important research direction of computer science and physics, link prediction has been studied in depth, mainly along the lines of Markov chains, machine learning and unsupervised learning. However, most of this work uses only a single kind of feature, namely network topology features or attribute features; few works consider interdisciplinary features, and papers that combine multiple disciplines for link prediction are rarer still. This paper designs and develops the TNTlink model, which combines network topology features, basic features and additional features, draws on domain knowledge from both physics and computer science, and uses deep neural networks to integrate these features into a deep learning framework for link prediction, achieving good results. Five datasets (ca-astroph, ca-condmat, ca-grqc, ca-hepph and ca-hepth) containing 69032 nodes and 450617 edges are used. Binary similarity and fuzzy cosine similarity are used to compute and identify features from the captured information: if two nodes are more similar in these features (for example, similar nodes, the same keywords, or a close relationship between them), they are more likely to form a link. Besides node features, the influence of node importance on link formation is also considered, and a new link prediction index, MI, is proposed to distinguish strong effects from weak ones and to model the important effects of nodes. The proposed model is compared with mainstream classifiers on the five datasets, and the results show that MI and TNTlink effectively improve the link prediction AUC value.
Point Cloud Coarse Alignment Algorithm Based on Feature Detection and Depth Feature Description
SHI Wen-kai, ZHANG Zhao-chen, YU Meng-juan, WU Rui, NIE Jian-hui
Computer Science. 2020, 47 (12): 252-257.  doi:10.11896/jsjkx.191000069
Abstract PDF(3067KB) ( 1330 )   
References | Related Articles | Metrics
Point cloud alignment is one of the important steps in point cloud data processing, and coarse alignment is the hard part. In recent years, great progress has been made in point cloud alignment based on deep learning. In particular, the 3DMatch method achieves a good alignment effect under noise, low resolution and missing data. However, 3DMatch uses random sampling to generate alignment points; when the number of sampling points is small, the matching rate is low and the alignment effect is poor. Therefore, ISS feature point detection is used instead of random sampling, 3DMatch then generates descriptors for the feature points, and finally alignment is achieved by matching the feature descriptors. Since ISS feature point detection is highly repeatable and 3DMatch provides highly discriminative descriptors, this method greatly improves the robustness and accuracy of matching. Experiments show that, compared with random sampling, feature point sampling gives better alignment and robustness whether the initial point cloud contains no noise, weak noise or strong noise. Moreover, for a similar coarse alignment effect, the number of feature points is only about 10% of the number of random points, which greatly improves alignment efficiency.
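The descriptor-matching stage can be sketched as nearest-neighbour search in descriptor space with a Lowe-style ratio test to drop ambiguous pairs; the 2-D toy descriptors and the ratio threshold below are illustrative (3DMatch descriptors are much higher-dimensional).

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match feature-point descriptors by nearest neighbour, keeping a
    pair only if its best match is clearly better than the second best
    (the ratio threshold is an illustrative choice)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((i, int(j)))
    return matches

# toy descriptors of feature points from two scans of the same scene
desc_a = np.array([[1.0, 0.0], [0.0, 1.0]])
desc_b = np.array([[0.0, 0.95], [0.9, 0.1], [5.0, 5.0]])
pairs = match_descriptors(desc_a, desc_b)
```

The matched pairs then feed a rigid-transform estimator (e.g. with RANSAC) to produce the coarse alignment.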
Elevator Boot Fault Diagnosis Method Based on Gabor Wavelet Transform and Multi-core Support Vector Machine
ZHU Xiao-ling, LI Kun, ZHANG Chang-sheng, DU Fu-xin
Computer Science. 2020, 47 (12): 258-261.  doi:10.11896/jsjkx.200700039
Abstract PDF(2040KB) ( 873 )   
References | Related Articles | Metrics
As an important part of the elevator car, the elevator boot has a direct impact on elevator safety. To diagnose elevator boot failures more accurately and comprehensively, a diagnosis method based on the Gabor wavelet transform and a multi-core support vector machine is proposed. First, the vibration signal of the main body of the device is collected by an acceleration sensor, and intrinsic mode function components are obtained by empirical mode decomposition. Then a Gabor filter is used to filter and denoise the low-frequency components, enhancing the features extracted at low frequencies. Finally, local and global kernel functions are linearly combined with weights to form a multi-core support vector machine that classifies the data. Experimental results verify the effectiveness of the proposed method: compared with a fault diagnosis method based on the wavelet transform and a least-squares support vector machine, its diagnosis accuracy is improved by about 5%, reaching 87.6%.
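The multi-kernel construction in the last step can be sketched as a weighted sum of a local kernel (Gaussian RBF) and a global kernel (polynomial); since a convex combination of positive semi-definite kernels is itself a valid kernel, the result can be plugged into any standard SVM. The weight and kernel parameters below are illustrative.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Local kernel: Gaussian RBF."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly_kernel(X, Y, degree=2):
    """Global kernel: polynomial."""
    return (X @ Y.T + 1.0) ** degree

def multi_kernel(X, Y, w=0.6):
    """Weighted linear combination of the local and global kernels
    (w is an illustrative weight, not the paper's value)."""
    return w * rbf_kernel(X, Y) + (1.0 - w) * poly_kernel(X, Y)

X = np.array([[0.0, 0.0], [1.0, 1.0]])
K = multi_kernel(X, X)
```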
Novel Semi-supervised Extreme Learning Machine and its Application in Anti-vibration Hammer Corrosion Detection
WANG Hong-xing, CHEN Yu-quan, SHEN Jie, ZHANG Xin, HUANG Xiang, YU Bin
Computer Science. 2020, 47 (12): 262-266.  doi:10.11896/jsjkx.200500085
Abstract PDF(1882KB) ( 954 )   
References | Related Articles | Metrics
Visual inspection based on machine learning has been widely used in industrial fields, including rust detection. In view of the high complexity of existing methods and their reliance on large amounts of manual annotation, a new semi-supervised extreme learning machine named HyLap-S3ELM is proposed in this paper and applied to the detection of corrosion defects on anti-vibration hammers. The model parameters have a closed-form solution, so they can be computed directly with little demand on computing resources. A hypergraph Laplacian matrix is introduced to better describe the smoothness of the data and thus improve the accuracy of semi-supervised classification, and a risk regularization term is introduced to improve the stability of the semi-supervised classifier when the data smoothness assumption is inaccurate or the labeled samples are biased. Finally, the effectiveness and superiority of the proposed method are demonstrated by extensive experiments.
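To make the closed-form idea concrete, the sketch below fits a Laplacian-regularized ELM on a toy two-cluster problem where only one point per cluster is labelled. An ordinary graph Laplacian stands in for the paper's hypergraph Laplacian, the risk regularization term is omitted, and all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplacian(W):
    """Graph Laplacian L = D - W (stand-in for the paper's hypergraph
    Laplacian; building W from data is omitted here)."""
    return np.diag(W.sum(1)) - W

def s3elm_fit(X, Y, C, W, n_hidden=20, lam=0.01):
    """Closed-form output weights of a Laplacian-regularized ELM:
    beta = (I + H'CH + lam*H'LH)^(-1) H'CY, where C weights labelled
    rows and L enforces smoothness over labelled + unlabelled points."""
    A = rng.normal(size=(X.shape[1], n_hidden))   # random hidden layer
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ A + b)
    Z = np.eye(n_hidden) + H.T @ C @ H + lam * H.T @ laplacian(W) @ H
    return A, b, np.linalg.solve(Z, H.T @ C @ Y)

# two clusters; only the first point of each cluster is labelled
X = np.array([[0.0, 0], [0.1, 0], [2.0, 2], [2.1, 2]])
Y = np.array([[1.0], [0], [-1.0], [0]])       # zero rows = unlabelled
C = np.diag([1.0, 0, 1.0, 0])                 # weight only labelled rows
W = np.array([[0, 1, 0, 0], [1, 0, 0, 0],
              [0, 0, 0, 1], [0, 0, 1, 0.0]])  # within-cluster edges
A, b, beta = s3elm_fit(X, Y, C, W)
pred = np.tanh(X @ A + b) @ beta
```

Because everything reduces to one linear solve, no iterative training is needed, which matches the abstract's point about low demand on computing resources.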
Wireless Network Authentication Method Based on Physical Layer Channel Characteristics
LI Zhao-bin, CUI Zhao, WEI Zhan-zhen, ZHAO Hong, GUO Chao
Computer Science. 2020, 47 (12): 267-272.  doi:10.11896/jsjkx.190900095
Abstract PDF(2091KB) ( 1332 )   
References | Related Articles | Metrics
In the lightweight Internet of Things (IoT), traditional authentication methods suffer from high energy consumption and high delay. Therefore, this paper proposes a wireless network authentication mechanism based on physical layer channel characteristics. The channel impulse response (CIR) is used for identity authentication and serves as the initial message authentication code (MAC) for message authentication. A hash chain is used to generate tag signals, realizing MAC updating and improving sensitivity to attacks such as packet exchange and tampering. The method combines identity authentication with message authentication, and the tag signal with the communication information, making it suitable for communication environments with high security requirements and limited device resources, such as the industrial Internet of Things and smart homes. Security analysis and simulation results show that, compared with HMAC, EIA3 and other algorithms, the authentication delay of this scheme is small and the scheme has practical value.
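One simple forward-chaining reading of the hash-chain tag update is sketched below (the paper's exact construction is not given in the abstract; the seed name is hypothetical): each new tag is the hash of the previous one, so a receiver holding the current tag can verify the next one, while an attacker who tampers with a tag fails verification.

```python
import hashlib

def next_tag(tag: bytes) -> bytes:
    """Advance the hash chain: the next tag is the hash of the previous
    one, enabling cheap MAC updating without re-keying."""
    return hashlib.sha256(tag).digest()

def verify(claimed: bytes, prev_tag: bytes) -> bool:
    """Receiver holds prev_tag; a legitimate sender presents H(prev_tag)."""
    return claimed == next_tag(prev_tag)

seed = b"shared-secret-seed"   # hypothetical shared initial value
t1 = next_tag(seed)
t2 = next_tag(t1)
ok = verify(t2, t1)            # genuine tag passes
bad = verify(b"\x00" * 32, t1) # tampered tag fails
```

Only hashing is required per update, which suits the energy and delay constraints of lightweight IoT devices.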
Spectrum Allocation Scheme of Vehicular Ad Hoc Networks Based on Improved Crow Search Algorithm
FAN Ying, ZHANG Da-min, CHEN Zhong-yun, WANG Yi-rou, XU Hang, WANG Li-qiao
Computer Science. 2020, 47 (12): 273-278.  doi:10.11896/jsjkx.190900199
Abstract PDF(2398KB) ( 851 )   
References | Related Articles | Metrics
The vehicular ad hoc network is a new type of intelligent network. By intelligently accessing the network, it realizes communication between people and vehicles, between vehicles, and between vehicles and roadside infrastructure; it enhances safety prediction and warning while driving and satisfies users' needs for in-vehicle multimedia access, thereby improving the vehicle user experience. Aiming at the low efficiency of spectrum allocation in cognitive vehicular ad hoc networks (CR-VANET), a spectrum allocation scheme based on an improved crow search algorithm is proposed. Firstly, the two position update parameters of the crow search algorithm are improved with curve-adaptive parameters to better balance intensification and diversification. Secondly, a convergence factor strategy is adopted to address the slow and unstable convergence of the crow search algorithm. Thirdly, a chaotic map is used to generate random numbers, improving the ergodicity and convergence speed of the search. Finally, taking the throughput of the vehicular network and the access fairness among cognitive vehicle users as evaluation indexes, the improved crow search algorithm is applied to spectrum allocation in the cognitive vehicular network, and the improved scheme is compared separately with allocation schemes based on the genetic algorithm (GA) and particle swarm optimization (PSO). Simulation results show that the improved allocation scheme performs better.
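The abstract does not name the chaotic map; the logistic map is a common choice for replacing uniform random draws in such schemes, and a minimal sketch of it (with illustrative seed and length) is:

```python
def logistic_map(x0=0.37, r=4.0, n=5):
    """Generate chaotic numbers in (0,1) with the logistic map
    x_{k+1} = r * x_k * (1 - x_k); r = 4 is the fully chaotic regime
    (x0 and n are illustrative)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

seq = logistic_map()
```

Because the orbit is deterministic yet densely covers (0,1), substituting it for pseudo-random draws improves the ergodicity of the search, which is the property the third improvement relies on.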
Study on Co-citation Enhancing Directed Network Embedding
WU Yong, WANG Bin-jun, ZHAI Yi-ming, TONG Xin
Computer Science. 2020, 47 (12): 279-284.  doi:10.11896/jsjkx.191000199
Abstract PDF(2083KB) ( 871 )   
References | Related Articles | Metrics
Network embedding algorithms embed a network into a low-dimensional vector space in which the structure and inherent properties of the graph are preserved to the greatest possible extent. Compared with undirected networks, directed networks have a special asymmetric transitivity, which is reflected in high-order similarity measures between nodes; preserving this feature well is a hot and difficult topic in current directed network embedding research. Aiming at this problem, this paper introduces the co-citation network of a directed network and designs a metric function for the co-citation information. At the same time, a unified framework is created for fusing the co-citation information with the high-order similarity metrics of directed networks. On this basis, the paper proposes a co-citation enhanced high-order proximity preserving embedding method, called CCE-HOPE, which preserves asymmetric transitivity well. In experiments, the proposed model is evaluated on link prediction using four real datasets. The results show that, under different high-order similarity metrics, the performance across different proportions of co-citation information follows a general regularity, so the optimal range of the proportion can be determined. Compared with other state-of-the-art methods, the method effectively improves link prediction accuracy when the proportion of co-citation information is within this optimal range.
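For background, a standard high-order, asymmetry-preserving proximity used by HOPE-style embeddings is the Katz index; the sketch below computes it on a tiny directed chain (the decay parameter is illustrative) and shows the asymmetry that such embeddings aim to preserve.

```python
import numpy as np

def katz_proximity(A, beta=0.1):
    """Katz index S = sum_k (beta*A)^k = (I - beta*A)^(-1) - I,
    a common high-order similarity for directed graphs
    (beta is an illustrative decay parameter)."""
    n = A.shape[0]
    return np.linalg.inv(np.eye(n) - beta * A) - np.eye(n)

# directed chain 0 -> 1 -> 2: node 0 reaches 2 transitively,
# but not the other way round
A = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0.0]])
S = katz_proximity(A)
```

`S[0, 2] > 0` while `S[2, 0] = 0`: the matrix is asymmetric, which is exactly the transitivity structure the embedding must keep.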
New Routing Methods of LEO Satellite Networks
DONG Chao-ying, XU Xin, LIU Ai-jun, CHANG Jing-hui
Computer Science. 2020, 47 (12): 285-290.  doi:10.11896/jsjkx.191000067
Abstract PDF(1755KB) ( 1877 )   
References | Related Articles | Metrics
The low-orbit satellite constellation has the characteristics of low transmission delay, small propagation loss and wide coverage area, and it is one of the hot research directions in the field of satellite communications. Constellation routing technology, as one of the core technologies of low-orbit satellite constellation networks, has attracted widespread attention and research. In recent years, in order to further meet the emerging transmission needs of satellite communication services with large capacity, high efficiency and high quality of service, many research results have been proposed, including artificial intelligence-based QoS constellation routing algorithms, multilayer satellite constellation routing algorithms and low-orbit constellation multipath routing algorithms. This article summarizes the recent advances in routing algorithms for low-orbit satellite constellations at home and abroad, compares and analyzes the optimization of these algorithms in terms of computational complexity, meeting service QoS and congestion control, and discusses future research directions according to the business needs of future constellation networks.
Anycast Routing Algorithm for Wireless Sensor Networks Based on Energy Optimization
ZHOU Wen-xiang, QIAO Xue-gong
Computer Science. 2020, 47 (12): 291-295.  doi:10.11896/jsjkx.190900069
Abstract PDF(1981KB) ( 832 )   
References | Related Articles | Metrics
Routing algorithms are one of the key technologies in wireless sensor networks. Anycast is one of the three major communication modes of IPv6 and has broad application prospects in balancing network and server load. In order to extend the life of the network, this paper proposes a new routing algorithm based on energy optimization. Firstly, the model divides the area where the network is located. Then it calculates the weights of the paths from the sending nodes to the base stations. Finally, the sending nodes split and transmit data according to the path weights. In the weight calculation, the residual energy of the node is added as a condition, and a low energy threshold and the current network lifetime are added to prevent excessive loss on certain paths and to adjust the proportion of the energy weight before and after network operation. Meanwhile, grey wolf optimization (GWO) is introduced to optimize the path weights and find the optimal weight adjustment parameters so as to extend the lifetime of the network. Simulation results show that GWO can find better weight adjustment parameters and extend the network lifetime. Compared with existing WSN routing algorithms, the proposed algorithm achieves a longer lifetime and more uniform node energy consumption.
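The GWO step above can be sketched in miniature. GWO tracks the three best wolves (alpha, beta, delta) and moves the pack toward them with a coefficient that decays over iterations, shifting from exploration to exploitation. The one-dimensional quadratic objective below is a stand-in for the paper's network-lifetime objective (an assumption made purely for illustration).

```python
# Minimal 1-D grey wolf optimizer sketch; the toy objective stands in for
# the lifetime-based path-weight objective described in the abstract.
import random

def gwo_minimize(f, lo, hi, n_wolves=20, iters=200, seed=1):
    rng = random.Random(seed)
    wolves = [rng.uniform(lo, hi) for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=f)
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2.0 * (1 - t / iters)   # exploration coefficient, decays 2 -> 0
        new = []
        for x in wolves:
            candidates = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(), rng.random()
                A = 2 * a * r1 - a
                C = 2 * r2
                d = abs(C * leader - x)
                candidates.append(leader - A * d)
            x_new = sum(candidates) / 3.0       # average pull toward the leaders
            new.append(min(max(x_new, lo), hi))  # clamp to the search interval
        wolves = new
    return min(wolves, key=f)

# Hypothetical weight-adjustment parameter with a known best value of 0.35
best = gwo_minimize(lambda w: (w - 0.35) ** 2, 0.0, 1.0)
```

As the coefficient decays, each wolf's update collapses toward the mean of the three leaders, which is what drives convergence.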
Research Advance on Efficiency Optimization of Blockchain Consensus Algorithms
ZHANG Peng-yi, SONG Jie
Computer Science. 2020, 47 (12): 296-303.  doi:10.11896/jsjkx.200700020
Abstract PDF(1727KB) ( 3047 )   
References | Related Articles | Metrics
Blockchain and its related technologies have developed rapidly in recent years, and blockchain has quickly become a hot research field. However, blockchain consensus algorithms have been criticized in terms of resource consumption, energy consumption and performance. Therefore, an indicator that measures their execution efficiency is needed, so as to evaluate the design quality of consensus algorithms. However, the correlation between the resource consumption, energy consumption and performance of a consensus algorithm is complicated, so it is necessary to analyze existing blockchain consensus algorithms from the aspect of efficiency and summarize the research ideas. This paper summarizes the progress of efficiency optimization of blockchain consensus algorithms. First of all, we define the efficiency of a blockchain consensus algorithm as "the performance of the consensus algorithm, and the resources and energy consumption it requires, calculated under the premise of correctness and effectiveness", and analyze the correlation of the three factors. Then the efficiency optimizations of consensus algorithms are collated and summarized from the two aspects of public chains and alliance chains. Finally, the resource sharing problems of consensus algorithms are put forward from the three aspects of multi-chain blockchain, multiple blockchains and BaaS for the reference of researchers.
Self-adaptive Deception Defense Mechanism Against Network Reconnaissance
ZHAO Jin-long, ZHANG Guo-min, XING Chang-you, SONG Li-hua, ZONG Yi-ben
Computer Science. 2020, 47 (12): 304-310.  doi:10.11896/jsjkx.200900126
Abstract PDF(2917KB) ( 1050 )   
References | Related Articles | Metrics
Statically configured network host information is easily exposed in the face of network reconnaissance, which brings serious security risks. Deception methods such as host address mutation and the deployment of fake nodes can disrupt the attacker's awareness of the network and increase the difficulty of reconnaissance. However, there are still many challenges in using these methods to counter the attacker's reconnaissance behavior effectively. For this reason, by modeling the behaviors of both attacker and defender, an efficient self-adaptive deception defense mechanism, SADM (Self-adaptive Deception Method), is proposed. SADM considers the characteristics of the multi-stage continuous confrontation between attacker and defender in the network reconnaissance process, models it with the goal of maximizing the defender's accumulative payoffs under cost constraints, and then makes adaptive defense decisions through heuristic methods to respond quickly to the attacker's diverse scanning behaviors. Simulation results show that SADM can effectively delay the attacker's detection and reduce the cost of deploying deception scenarios while ensuring the defense effect.
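The decision step described above, choosing deception actions to maximize payoff under a cost budget, can be illustrated with a simple greedy heuristic. To be clear, SADM's actual model is a multi-stage attacker-defender game; the action names, payoffs and costs below are invented for illustration, and the payoff-per-cost greedy rule is only one possible heuristic, not the paper's.

```python
# Illustrative greedy heuristic: pick deception actions by payoff/cost
# ratio until the cost budget is exhausted. All numbers are hypothetical.

def greedy_deploy(actions, budget):
    """actions: list of (name, payoff, cost) tuples; returns (plan, total payoff)."""
    chosen, total_cost, total_payoff = [], 0.0, 0.0
    for name, payoff, cost in sorted(actions, key=lambda a: a[1] / a[2], reverse=True):
        if total_cost + cost <= budget:
            chosen.append(name)
            total_cost += cost
            total_payoff += payoff
    return chosen, total_payoff

# Hypothetical deception actions: (name, defender payoff, deployment cost)
acts = [("mutate_addr", 8.0, 4.0),
        ("fake_node", 5.0, 3.0),
        ("decoy_service", 3.0, 2.0)]
plan, payoff = greedy_deploy(acts, budget=6.0)
```

A heuristic like this trades optimality for speed, which matches the abstract's requirement of responding quickly to diverse scanning behaviors.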
Formalization of Finite Field GF(2^n) Based on COQ
FAN Yong-qian, CHEN Gang, CUI Min
Computer Science. 2020, 47 (12): 311-318.  doi:10.11896/jsjkx.190900197
Abstract PDF(1418KB) ( 1056 )   
References | Related Articles | Metrics
The finite field GF(2^n) is the basis of many security-critical algorithms, including the AES encryption algorithm, elliptic curve cryptography, infection function masks, and so on. Operations on finite fields are prone to errors due to their complexity, which causes problems in the systems built on them. Verification methods based on testing and model checking can only be applied to a specific finite field with fixed n, and the computation time for the verification often exceeds the capability of the computer. Formal verification using interactive theorem provers makes generic verification of finite field properties possible, but working in this direction is fairly challenging. Existing research has mainly focused on the formal verification of abstract properties of finite fields; however, solving practical problems requires constructive definitions of finite fields and the verification of their properties. In response to this requirement, this paper uses the theorem prover COQ to develop a constructive and generic definition of the finite field GF(2^n), and formally verifies a large number of basic properties of finite fields, including the basic properties of finite field addition and the basic properties of polynomial multiplication, which is the building block of finite field multiplication, as well as other related properties. This work lays a foundation for the complete formalization of finite fields, which will support formal verifications of various algorithms that use finite fields.
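The arithmetic being formalized can be shown concretely for the AES case, GF(2^8) with modulus x^8 + x^4 + x^3 + x + 1 (0x11B). Elements are polynomials over GF(2) encoded as bitmasks, addition is XOR, and multiplication is shift-and-XOR with reduction. This Python sketch illustrates the operations the COQ development verifies generically; it is not the paper's formal definition.

```python
# GF(2^n) multiplication: polynomials over GF(2) as bitmasks, reduced
# modulo an irreducible polynomial. Defaults give the AES field GF(2^8).

def gf_mul(a, b, mod=0x11B, n=8):
    """Multiply two elements of GF(2^n) (shift-and-XOR with reduction)."""
    result = 0
    while b:
        if b & 1:
            result ^= a        # addition in GF(2) is XOR
        b >>= 1
        a <<= 1                # multiply a by x
        if a & (1 << n):
            a ^= mod           # reduce modulo the field polynomial
    return result
```

The classic FIPS-197 worked example {57} * {83} = {c1} holds here, as does the fact that {53} and {ca} are multiplicative inverses in the AES field.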
Message Format Inference Method Based on Rough Set Clustering
LI Yi-hao, HONG Zheng, LIN Pei-hong, FENG Wen-bo
Computer Science. 2020, 47 (12): 319-326.  doi:10.11896/jsjkx.191000193
Abstract PDF(2202KB) ( 850 )   
References | Related Articles | Metrics
Message clustering is an important procedure in message format inference. Most existing message clustering methods take global message similarity as the clustering criterion. However, the accuracy of such clustering methods is often not high enough, which affects the accuracy of subsequent message format extraction. To solve this problem, this paper proposes a message format inference method based on rough set clustering, which consists of a preprocessing phase, a rough-set-based clustering phase, a feature word extraction phase and a message format extraction phase. Firstly, messages are separated into business messages and control messages. Secondly, messages are clustered on the basis of position attributes according to rough set theory; the clustering method considers local features of message sequences, which ensures high clustering accuracy. Thirdly, protocol feature words are extracted according to length, frequency and position characteristics. Finally, protocol feature words are classified into mandatory fields and optional fields and are used to represent message formats. Experimental results show that the proposed method can extract message formats precisely.
MeTCa:Multi-entity Trusted Confirmation Algorithm Based on Edit Distance
SUN Guo-zi, LYU Jian-wei, LI Hua-kang
Computer Science. 2020, 47 (12): 327-331.  doi:10.11896/jsjkx.191100176
Abstract PDF(1913KB) ( 890 )   
References | Related Articles | Metrics
With the development of We-media, every individual can publish and forward information on the Internet at will. The information may be a real record, but it may also be hearsay or even content that has been intentionally tampered with. Data on the Internet suffers from serious redundancy and weak credibility, resulting in low availability of existing network media data. Although the Bi-LSTM-CRF network can solve the problem of named entity recognition accuracy in such data, it cannot guarantee that the identified entities are credible. In this paper, a multi-parameter fusion trusted confirmation algorithm based on multi-source weakly trusted data is proposed and verified by identifying instances of person named entities. This paper uses distributed spiders to crawl the top N pages containing the same mailbox address on multiple search engines. Afterwards, a Bi-LSTM-CRF model trained on a bilingual corpus is adopted to extract person named entities from each page. Finally, the person named entities corresponding to the mailbox are determined by the multi-parameter entity fusion trusted confirmation algorithm. Experimental results show that the proposed algorithm improves the mean reciprocal rank (MRR) of matching a mailbox address to its real owner to 91.32%, which is 23.08% higher than the traditional algorithm using only a term frequency model. The experimental data demonstrates that the multi-parameter fusion trusted confirmation algorithm can obtain highly credible entities from weakly trusted data and mitigate the low-quality characteristics of massive data, thus effectively enhancing the credibility of entity data sources.
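The edit distance named in the title is the standard Levenshtein distance between candidate entity strings: the minimum number of single-character insertions, deletions and substitutions turning one string into the other. A textbook dynamic-programming implementation, shown as one signal such a fusion algorithm could use (how the paper weights it against the other parameters is not reproduced here):

```python
# Standard Levenshtein (edit) distance via dynamic programming.

def edit_distance(s, t):
    """Minimum number of insertions, deletions, substitutions from s to t."""
    m, n = len(s), len(t)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                    # delete all of s[:i]
    for j in range(n + 1):
        dp[0][j] = j                    # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution / match
    return dp[m][n]
```

A low edit distance between entity strings extracted from different pages suggests they refer to the same person, which is why the distance is a natural confirmation signal.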
Chain Merging Method for Unknown Text Protocol Candidate Keywords Stored in Multi-level Dictionary
CHEN Qing-chao, WANG Tao, YIN Shi-zhuang, FENG Wen-bo
Computer Science. 2020, 47 (12): 332-335.  doi:10.11896/jsjkx.190900116
Abstract PDF(1382KB) ( 810 )   
References | Related Articles | Metrics
Keyword extraction is a key step in the reverse engineering of unknown network protocols. Existing keyword extraction methods have problems such as low accuracy, complex operation and the need for considerable prior knowledge. Therefore, an automatic keyword extraction algorithm based on location information is proposed. First, candidate keywords are obtained by Trigram word segmentation. After the location information is added, these keywords are organized into a multi-level dictionary. On this basis, the traditional tree merging of candidate keywords is improved to chain merging according to the location information, so as to obtain more precise and longer candidate keywords. Experimental results show that, when the frequency threshold is set to 0.6, this method can accurately extract the keywords of text protocols. At the same time, the influence of the frequency threshold setting on the experimental results is analyzed, and the limitations of related frequent-sequence-based keyword mining algorithms are discussed.
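The first two steps above, Trigram segmentation and a position-keyed dictionary, can be sketched as follows. The dictionary layout and the sample messages are illustrative assumptions; the sketch shows only how position-tagged trigrams and the 0.6 frequency threshold yield candidate keywords, not the chain-merging step itself.

```python
# Sketch: Trigram segmentation into a position-keyed candidate table,
# then filtering by a frequency threshold (0.6 as in the abstract).
from collections import defaultdict

def trigram_candidates(messages):
    """Build a position -> trigram -> count table over all messages."""
    table = defaultdict(lambda: defaultdict(int))
    for msg in messages:
        for pos in range(len(msg) - 2):
            table[pos][msg[pos:pos + 3]] += 1
    return table

def keywords(table, n_msgs, threshold=0.6):
    """Keep (position, trigram) pairs whose relative frequency clears the threshold."""
    return {(pos, tg) for pos, d in table.items()
            for tg, c in d.items() if c / n_msgs >= threshold}

# Two hypothetical text-protocol messages sharing a fixed header
msgs = ["GET /a HTTP/1.1", "GET /b HTTP/1.1"]
table = trigram_candidates(msgs)
kws = keywords(table, len(msgs))
```

Overlapping surviving trigrams at adjacent positions are exactly what the chain-merging step would then join into longer keywords.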