Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Editors
Current Issue
Volume 48 Issue 1, 15 January 2021
Intelligent Edge Computing
Edge Computing Enabling Industrial Internet: Architecture, Applications and Challenges
LI Hui, LI Xiu-hua, XIONG Qing-yu, WEN Jun-hao, CHENG Lu-xi, XING Bin
Computer Science. 2021, 48 (1): 1-10.  doi:10.11896/jsjkx.200900150
The Industrial Internet integrates advanced technologies such as 5G communication and artificial intelligence, and embeds various sensors and controllers with perception and control capabilities into the industrial production process to optimize production processes, reduce costs and increase productivity. Due to the centralized deployment of the traditional cloud computing model, computing nodes are usually located far away from smart terminals, making it difficult to meet the industrial field's requirements for high real-time performance and low latency. By sinking computing, storage and network resources to the edge of the industrial network, edge computing can respond to device requests more conveniently, meet key requirements such as intelligent access, real-time communication and privacy protection in the Industrial Internet environment, and realize intelligent green communication. This paper first introduces the development status of the Industrial Internet and the related concepts of edge computing, then systematically discusses the Industrial Internet edge computing architecture and the core technologies that promote its development. Finally, it lists some successful application cases of edge computing and elaborates the current status of and challenges in applying edge computing technology to the Industrial Internet.
Survey of Task Offloading in Edge Computing
LIU Tong, FANG Lu, GAO Hong-hao
Computer Science. 2021, 48 (1): 11-15.  doi:10.11896/jsjkx.200900217
Recently, with the popularization of mobile smart devices and the development of wireless communication technologies such as 5G, edge computing has been proposed as a novel and promising computing mode, regarded as an extension of traditional cloud computing. The basic idea of edge computing is to shift the computing tasks generated on mobile devices from being offloaded to remote clouds to being offloaded to the edge of the network, to meet the low-latency requirements of computation-intensive applications such as real-time online gaming and augmented reality. The offloading problem of computing tasks in edge computing is an important issue that studies whether computing tasks should be performed locally or offloaded to edge nodes or remote clouds, since the decision has a big impact on task completion delay and the energy consumption of devices. This paper first explains the basic concepts of edge computing and introduces several system architectures of edge computing. Then, it expounds the task offloading problem in edge computing. Considering the research necessity and difficulty of task offloading in edge computing, it comprehensively reviews the existing related works and discusses future research directions.
Survey on Task Offloading Techniques for Mobile Edge Computing with Multi-devices and Multi-servers in Internet of Things
LIANG Jun-bin, TIAN Feng-sen, JIANG Chan, WANG Tian-shu
Computer Science. 2021, 48 (1): 16-25.  doi:10.11896/jsjkx.200500095
With the rapid development of Internet of Things (IoT) technology, a large number of devices with different functions (such as various smart home appliances, mobile intelligent transportation devices, and intelligent logistics or warehouse management equipment, each carrying different sensors) are connected to each other and widely used in smart cities, smart factories and other fields. However, the limited processing power of these IoT devices makes it difficult to meet the demands of delay-sensitive, computation-intensive applications. The emergence of mobile edge computing (MEC) effectively solves this problem: IoT devices can offload tasks to edge servers and use them to perform computing tasks. These servers are usually deployed by the network operator at the edge of the network, that is, the network access layer close to the client, which is used to aggregate the user network. At a given time, IoT devices may be in the coverage area of multiple edge servers, and they share the servers' limited computing and communication resources. In this complex environment, it is an NP-hard problem to formulate a task offloading and resource allocation scheme that optimizes task completion delay or the energy consumption of IoT devices. At present, much work has been done on this issue and some progress has been made, but problems still exist in practical applications. In order to further promote research in this field, this paper analyzes and summarizes the latest achievements of recent years, compares their advantages and disadvantages, and looks forward to future work.
Survey on Service Resource Availability Forecast Based on Queuing Theory
ZHANG Kai-qi, TU Zhi-ying, CHU Dian-hui, LI Chun-shan
Computer Science. 2021, 48 (1): 26-33.  doi:10.11896/jsjkx.200900211
Queuing theory solves various complex queuing problems in many fields. This paper first introduces the general model representation and common models of queuing theory. Secondly, it briefly summarizes the various problems solved by queuing theory, and focuses on the recent literature on service availability prediction with queuing theory, including its application in different fields such as daily life, cloud computing, and network resources. To find the relationship between services and user needs, it classifies and summarizes the purposes of predicting service availability, including predicting resources, reasonably planning and allocating resources, meeting user needs, reducing user waiting time, and improving system reliability. Through this summary of the literature, it identifies existing problems and puts forward improvement methods and suggestions. Finally, the application of queuing-theory-based service resource availability prediction in recommendation is forecast, and future research directions and challenges are briefly explained.
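For readers new to the notation such surveys assume, the standard M/M/1 results (textbook material, not specific to the surveyed papers) relate arrival rate λ, service rate μ and utilization ρ:

```latex
% M/M/1 queue with Poisson arrivals (rate \lambda) and exponential service (rate \mu), \rho < 1:
\rho = \frac{\lambda}{\mu}, \qquad
L = \frac{\rho}{1-\rho}, \qquad
W = \frac{1}{\mu - \lambda}, \qquad
W_q = \frac{\rho}{\mu - \lambda}, \qquad
L = \lambda W \ \text{(Little's law)}
```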
Mobile Edge Computing Based In-vehicle CAN Network Intrusion Detection Method
YU Tian-qi, HU Jian-ling, JIN Jiong, YANG Jian-feng
Computer Science. 2021, 48 (1): 34-39.  doi:10.11896/jsjkx.200900181
With the rapid development and pervasive deployment of the Internet of Vehicles (IoV), it provides Internet and big data analytics services to intelligent and connected vehicles, while incurring security and privacy issues. The closure of traditional in-vehicle networks leaves the communication protocols, particularly the most commonly applied controller area network (CAN) bus protocol, lacking security and privacy protection mechanisms. Thus, to detect network intrusions and protect vehicles from being attacked, a support vector data description (SVDD) based intrusion detection method is proposed in this paper. Specifically, the weighted self-information of message IDs and the normalized values of IDs are selected as features for SVDD modeling, and the SVDD models are trained at the mobile edge computing (MEC) servers. The vehicles use the trained SVDD models to identify abnormal values of the selected features and thereby detect network intrusions. Simulations are conducted on the CAN network dataset published by the HCR Lab of Korea University, where three conventional information-entropy based in-vehicle network intrusion detection methods are adopted as benchmarks. Compared to the benchmarks, the proposed method dramatically improves intrusion detection accuracy, especially when the number of intruded messages is small.
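SVDD learns a minimal hypersphere around normal data; the closely related RBF one-class SVM from scikit-learn can stand in for it in a sketch. The feature extraction below (windowed self-information of CAN IDs plus normalized ID values) is one reading of the abstract, not the authors' code.

```python
import numpy as np
from sklearn.svm import OneClassSVM  # RBF one-class SVM, closely related to SVDD

def can_features(id_sequence, window=64):
    """Hypothetical feature extraction: self-information of message IDs and
    normalized ID values over sliding windows, as the abstract describes."""
    feats = []
    for i in range(0, len(id_sequence) - window, window):
        win = id_sequence[i:i + window]
        ids, counts = np.unique(win, return_counts=True)
        p = counts / counts.sum()
        self_info = -(p * np.log2(p)).sum()      # weighted self-information
        norm_id = np.mean(ids) / 0x7FF           # CAN 2.0A IDs are 11-bit
        feats.append([self_info, norm_id])
    return np.array(feats)

# Train on attack-free traffic (e.g., at an MEC server), then flag outliers on-vehicle.
normal_ids = np.random.randint(0, 0x7FF, size=10_000)  # placeholder for real CAN logs
model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(can_features(normal_ids))
is_intrusion = model.predict(can_features(normal_ids)) == -1
```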
Multi-workflow Offloading Method Based on Deep Reinforcement Learning and Probabilistic Performance-aware in Edge Computing Environment
MA Yu-yin, ZHENG Wan-bo, MA Yong, LIU Hang, XIA Yun-ni, GUO Kun-yin, CHEN Peng, LIU Cheng-wu
Computer Science. 2021, 48 (1): 40-48.  doi:10.11896/jsjkx.200900195
Mobile edge computing is a new distributed and ubiquitous computing model. By transferring computation-intensive and delay-sensitive tasks to nearby edge servers, it effectively alleviates the resource shortage of mobile terminals and the communication overhead between users and computing nodes. However, if multiple users request computation-intensive tasks simultaneously, especially process-based workflow task requests, edge computing often finds it difficult to respond effectively, causing task congestion. In addition, the performance of edge servers is affected by detrimental factors such as task overload, power supply and real-time changes in communication capability, so their performance fluctuates, which makes it challenging to guarantee task execution and user-perceived service efficiency. To solve the above problems, a Deep Q-Network (DQN) and probabilistic performance-aware based multi-workflow scheduling approach in the edge computing environment is proposed. Firstly, the historical performance data of edge cloud servers are analyzed probabilistically; then the DQN model is driven by the performance probability distribution data and iteratively optimized to generate the multi-workflow offloading strategy. For experimental validation, simulation experiments are conducted in multiple scenarios reflecting different levels of system load, based on an edge server location dataset, performance test data and multiple scientific workflow templates. The results show that the proposed method is superior to traditional methods in multi-workflow execution efficiency.
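The abstract names a DQN driven by server-performance probability distributions. A toy sketch of the decision core follows; the state and action encodings, reward, dimensions, and the omission of replay buffers and target networks are all simplifying assumptions.

```python
import torch
import torch.nn as nn

# Toy DQN core for offloading decisions: state = (queue load, estimated server
# performance sampled from its fitted distribution, ...), action = which edge
# server (or local execution) gets the next workflow task. Dimensions are hypothetical.
N_ACTIONS, STATE_DIM, GAMMA = 4, 3, 0.9
q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def td_step(s, a, r, s_next):
    """One temporal-difference update on a single transition (tensors in, loss out)."""
    q = q_net(s)[a]
    with torch.no_grad():
        target = r + GAMMA * q_net(s_next).max()
    loss = (q - target) ** 2
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# The reward would be, e.g., the negative execution time of the scheduled task,
# with server performance drawn from the fitted probability distribution each episode.
```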
Multi-user Task Offloading Based on Delayed Acceptance
MAO Ying-chi, ZHOU Tong, LIU Peng-fei
Computer Science. 2021, 48 (1): 49-57.  doi:10.11896/jsjkx.200600129
With the application of artificial intelligence, the demand for computing resources keeps growing. Due to their limited computing power and energy storage, mobile devices cannot handle such computation-intensive applications with real-time requirements. Mobile edge computing (MEC) can provide computation offloading services at the edge of the wireless network, so as to reduce delay and save energy. Aiming at the problem of multi-user dependent task offloading, a user dependent task model is established based on comprehensive consideration of delay and energy consumption. The multi-user task offloading strategy based on delayed acceptance (MUTODA) is proposed to solve the problem of minimizing energy consumption under delay constraints. MUTODA solves the multi-user task offloading problem through two steps: a non-dominated single-user optimal offloading strategy and an adjustment strategy to resolve resource competition. The experimental results show that, compared with the benchmark strategy and a heuristic strategy, MUTODA improves user satisfaction by about 8% and saves 30%~50% of the energy consumption of mobile terminals.
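The "delayed acceptance" in MUTODA echoes the classic deferred-acceptance matching idea. A generic capacity-constrained deferred-acceptance sketch is shown below; the preference lists and ranking functions are placeholders, not the paper's utility definitions.

```python
# Sketch of the deferred-acceptance core: each user proposes its currently best
# offloading target; an over-subscribed server keeps only the requests it ranks
# highest and rejects the rest, which then propose their next choice.
def deferred_acceptance(prefs, capacity, server_rank):
    """prefs[u]       : user u's servers ordered best-first (hypothetical utilities)
       capacity[s]    : max users server s can host
       server_rank[s] : key function over users; lower is better for server s"""
    next_choice = {u: 0 for u in prefs}
    held = {s: [] for s in capacity}
    free = list(prefs)
    while free:
        u = free.pop()
        if next_choice[u] >= len(prefs[u]):
            continue                        # u exhausted its list: compute locally
        s = prefs[u][next_choice[u]]
        next_choice[u] += 1
        held[s].append(u)
        held[s].sort(key=server_rank[s])
        while len(held[s]) > capacity[s]:   # reject the worst-ranked proposals
            free.append(held[s].pop())
    return held
```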
User Allocation Approach in Dynamic Mobile Edge Computing
TANG Wen-jun, LIU Yue, CHEN Rong
Computer Science. 2021, 48 (1): 58-64.  doi:10.11896/jsjkx.200900079
In the edge computing environment, matching suitable servers for users is a key issue, which can effectively improve the quality of service. In this paper, the edge user allocation (EUA) problem is converted into a bipartite graph matching problem constrained by distance and server resources, and modeled as a 0-1 integer programming problem for an optimal assignment solution. In the offline setting, the optimization model based on an exact algorithm can obtain the optimal assignment strategy, but its solution time is too long and it cannot process large-scale data, so it is not suitable for a real service environment. Therefore, an online user allocation method based on a heuristic strategy is proposed to optimize the user-server assignment under limited time. The experimental results show that the competitive ratio obtained by the Proximal Heuristic online method (PH) can reach close to 100%, obtaining a good assignment solution within an acceptable time. Meanwhile, the online PH method performs better than other basic heuristic methods.
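As a rough illustration of the online heuristic flavor (not the paper's exact PH method), a proximity-first assignment with coverage and capacity checks might look like this:

```python
import math

# Each arriving user is assigned to the nearest covering server that still has
# capacity; otherwise the user stays unassigned. All fields are assumed shapes.
def assign(user, servers, load, demand):
    candidates = [s for s in servers
                  if math.dist(user["pos"], s["pos"]) <= s["radius"]
                  and load[s["id"]] + demand <= s["capacity"]]
    if not candidates:
        return None                                   # no feasible server
    best = min(candidates, key=lambda s: math.dist(user["pos"], s["pos"]))
    load[best["id"]] += demand
    return best["id"]
```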
Privacy Protection Offloading Algorithm Based on Virtual Mapping in Edge Computing Scene
YU Xue-yong, CHEN Tao
Computer Science. 2021, 48 (1): 65-71.  doi:10.11896/jsjkx.200500098
With the development of mobile edge computing (MEC) and wireless power transfer (WPT), more and more computing tasks are offloaded to the MEC server for processing, while terminal equipment is powered by WPT technology to alleviate its limited computing power and high energy consumption. However, since offloaded tasks and data often carry information such as users' personal usage habits, offloading tasks to the MEC server for processing introduces new privacy leakage issues. A privacy-aware computation offloading method based on virtual mapping is proposed in this paper. Firstly, the privacy of the computing task is defined, and a virtual task mapping mechanism that can reduce the amount of privacy accumulated by users on the MEC server is designed. Secondly, an online privacy-aware computation offloading algorithm is proposed by jointly considering the optimization of the mapping mechanism and privacy constraints. Finally, simulation results validate that the proposed offloading method can keep the cumulative privacy of users below the threshold, while increasing the system computation rate and reducing users' computation delay.
Multi-edge Collaborative Computing Unloading Scheme Based on Genetic Algorithm
GAO Ji-xu, WANG Jun
Computer Science. 2021, 48 (1): 72-80.  doi:10.11896/jsjkx.200800088
As a supplement to cloud computing, edge computing can ensure that computation delay meets system requirements when processing computing tasks generated by IoT equipment. Aiming at the problem of insufficient utilization of the remote edge cloud caused by the idle window periods of computing tasks in the traditional offloading scenario, a genetic algorithm-based multi-edge and cloud collaborative computing offloading model (Genetic Algorithm-based Multi-edge Collaborative Computing Offloading Model, GAMCCOM) is proposed. This offloading solution combines local edges and the remote edge to perform task offloading, and uses a genetic algorithm to obtain the minimum system cost considering both delay and energy consumption at the same time. Simulation results show that when both delay and energy consumption of the offloading system are considered, the overall cost of this scheme is 23% lower than that of the basic three-layer offloading scheme. When delay and energy consumption are considered separately, the system cost is still reduced by 17% and 15% respectively. Therefore, the GAMCCOM offloading method can effectively reduce the system cost for different offloading targets of edge computing.
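A generic sketch of the GA loop the abstract describes, with an assumed chromosome encoding (one gene per task choosing an execution site) and a placeholder cost function standing in for the paper's weighted delay-plus-energy objective:

```python
import random

def evolve(cost, n_tasks, n_sites, pop=50, gens=200, pm=0.05):
    """cost(chromosome) -> weighted delay+energy (assumed); lower is better.
       Gene g in {0..n_sites-1}: 0 = local edge, others = collaborating/remote edges."""
    population = [[random.randrange(n_sites) for _ in range(n_tasks)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=cost)
        parents = population[:pop // 2]               # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_tasks)        # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([random.randrange(n_sites) if random.random() < pm
                             else g for g in child])  # per-gene mutation
        population = parents + children
    return min(population, key=cost)
```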
Computational Task Offloading Scheme Based on Load Balance for Cooperative VEC Servers
YANG Zi-qi, CAI Ying, ZHANG Hao-chen, FAN Yan-fang
Computer Science. 2021, 48 (1): 81-88.  doi:10.11896/jsjkx.200800220
In the Vehicular Edge Computing (VEC) network, a large number of computational tasks cannot be processed due to vehicles' limited computation resources. Therefore, computational tasks generated by on-board applications need to be offloaded to VEC servers for processing. However, the mobility of vehicles and differences in regional deployment lead to load imbalance among VEC servers, resulting in low computation offloading efficiency and resource utilization. To solve this problem, a scheme of computation offloading and resource allocation is proposed to maximize user utility. The user utility maximization problem is decoupled into two subproblems, and a matching-based VEC server selection decision algorithm and an Adam-based joint optimization algorithm for offloading ratio and computation resource allocation are proposed to solve the subproblems respectively. The two algorithms are then iterated together until convergence, and an approximately optimal solution is obtained that achieves load balance. The simulation results show that, compared to the nearest offloading scheme and a predictive offloading scheme, the proposed scheme can effectively decrease the processing delay of computational tasks, save vehicles' energy, enhance vehicle utility, and perform well on load balance.
L-YOLO: Real-time Traffic Sign Detection Model for Vehicle Edge Computing
SHAN Mei-jing, QIN Long-fei, ZHANG Hui-bing
Computer Science. 2021, 48 (1): 89-95.  doi:10.11896/jsjkx.200800034
In the vehicle edge computing unit, hardware resources are limited, so developing a lightweight and efficient traffic sign detection model for vehicle edge computing is increasingly urgent. This paper proposes a lightweight traffic sign detection model based on Tiny YOLO, called L-YOLO. Firstly, L-YOLO uses partial residual connections to enhance the learning ability of the lightweight network. Secondly, in order to reduce false and missed detections of traffic signs, L-YOLO uses a Gaussian loss function as the location loss of the bounding box. On the traffic sign detection dataset TAD16K, L-YOLO has 18.8M parameters, requires 8.211 BFlops of computation, achieves a detection speed of 83.3 FPS, and reaches 86% mAP. Experimental results show that the algorithm not only guarantees real-time performance, but also improves detection accuracy.
Cache Management Method in Mobile Edge Computing Based on Approximate Matching
LI Rui-xiang, MAO Ying-chi, HAO Shuai
Computer Science. 2021, 48 (1): 96-102.  doi:10.11896/jsjkx.200800215
For the case of massive identical or similar computing requests from end users, a search for similar data in the cache space of the edge server by approximate matching can be applied to select computing results that can be reused. Most existing algorithms do not consider uneven data distribution, resulting in large computation and time overhead. In this paper, a cache selection strategy based on a dynamic locality-sensitive hashing (LSH) algorithm and a weighted k-nearest neighbor (KNN) algorithm, called CSS-DLWK, is proposed. The dynamic-LSH algorithm deals with uneven data distribution by dynamically adjusting the hash bucket size, thereby selecting data sets similar to the input data from the cache space. Then, taking distance and sample size as weights, the weighted-KNN algorithm re-selects the data from the similar data sets acquired by the dynamic-LSH algorithm. In this way, the data most similar to the input data are obtained, and the corresponding computing result is acquired for reuse. As demonstrated by simulation experiments on the CIFAR-10 dataset, CSS-DLWK increases the average selection accuracy by 4.1% compared to the cache selection strategy based on the A-LSH and H-KNN algorithms, and by 16.8% compared to the traditional LSH algorithm. Overall, with acceptable time costs in data selection, the proposed strategy can effectively improve the selection accuracy of reusable data, thereby reducing repetitive computation on the edge server.
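A minimal sketch of the two-stage idea: random-projection LSH buckets the cache, then a distance-weighted re-selection among the k nearest bucket entries. The dynamic bucket-size adjustment and the sample-size weighting of CSS-DLWK are omitted here.

```python
import numpy as np

class ApproxCache:
    """Toy approximate-match cache: LSH bucket lookup + weighted re-selection."""
    def __init__(self, dim, n_bits=8, k=3, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.normal(size=(n_bits, dim))   # random projection hash
        self.buckets, self.k = {}, k

    def _key(self, x):
        return tuple((self.planes @ x > 0).astype(int))

    def put(self, x, result):
        self.buckets.setdefault(self._key(x), []).append((np.asarray(x), result))

    def get(self, x):
        x = np.asarray(x)
        cand = self.buckets.get(self._key(x), [])
        if not cand:
            return None
        nearest = sorted(cand, key=lambda c: np.linalg.norm(c[0] - x))[:self.k]
        # distance-weighted choice among the k nearest (sample-size weights omitted)
        best = max(nearest, key=lambda c: 1.0 / (np.linalg.norm(c[0] - x) + 1e-9))
        return best[1]                                  # reuse the cached result
```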
Mobile Edge Server Placement Method Based on User Latency-aware
GUO Fei-yan, TANG Bing
Computer Science. 2021, 48 (1): 103-110.  doi:10.11896/jsjkx.200900146
The rapid development of the Internet of Things and 5G networks generates a large amount of data. By offloading computing tasks from mobile devices to edge servers with sufficient computing resources, network congestion and data propagation delays can be effectively reduced. The placement of edge servers is the core of task offloading, and an efficient placement method can effectively satisfy mobile users' need to access services with low latency and high bandwidth. To this end, an optimization model of edge server placement is established with minimizing both access delay and load difference as the optimization goal. Then, based on a heuristic algorithm, a mobile edge server placement method called ESPHA (Edge Server Placement Method Based on Heuristic Algorithm) is proposed to achieve multi-objective optimization. Firstly, the K-means algorithm is combined with the ant colony algorithm: a pheromone feedback mechanism is introduced into the placement method by emulating the way an ant colony shares pheromones while foraging, and the ant colony algorithm is improved by adding a tabu list to improve convergence speed. Finally, the improved heuristic algorithm is used to solve for the optimal placement. Experiments using Shanghai Telecom's real dataset show that the proposed method achieves an optimal balance between low latency and load balancing under the premise of guaranteeing quality of service, and outperforms several existing representative methods.
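Only the K-means seeding stage is easy to sketch compactly; the ant-colony refinement with pheromone feedback and a tabu list is omitted. The data and k below are placeholders, not the Shanghai Telecom dataset.

```python
import numpy as np
from sklearn.cluster import KMeans

# Cluster user/base-station coordinates so candidate server sites start near
# user hot spots; a heuristic search would then refine this seeding.
users = np.random.rand(1000, 2)      # placeholder for real user coordinates
k = 20                               # number of edge servers to place (assumed)
sites = KMeans(n_clusters=k, n_init=10, random_state=0).fit(users).cluster_centers_
```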
Database & Big Data & Data Science
Survey on Fake Review Recognition
YUAN Lu, ZHU Zheng-zhou, REN Ting-yu
Computer Science. 2021, 48 (1): 111-118.  doi:10.11896/jsjkx.200500101
In the Web 2.0 era, consumers mostly rely on online reviews from previous consumers when they shop, learn and entertain themselves on the Internet. Fake reviews can mislead users' consumption decisions and affect the real reputation of stores, so recognizing fake reviews effectively is necessary and meaningful. This paper starts from the definition of fake reviews and introduces the research content of fake reviews from four directions: fake review recognition, motivation, influence on consumers, and prevention of fake reviews. It then puts forward a research framework for fake reviews and the workflow of general recognition methods. Next, it sums up current perspectives of relevant research on the text of fake reviews and on fake reviewers, introduces common datasets and evaluation indicators, and statistically analyzes effective recognition methods for fake reviews on open datasets. Specifically, it draws conclusions about the feature selection, fake review recognition models, training datasets and evaluation indicators of current research works, and compares different detection models. Finally, future research directions of fake review recognition, such as the limited availability of large-scale labeled datasets, are discussed.
Two-step Authorization Pattern of Data Product Circulation
YE Ya-zhen, LIU Guo-hua, ZHU Yang-yong
Computer Science. 2021, 48 (1): 119-124.  doi:10.11896/jsjkx.191100217
A data product is an electronified, non-material product that can be replicated and transferred at almost zero cost. This feature poses a challenge to the design of production systems for data products. At present, data products reach consumers mainly through supporting platforms. At the same time, critical issues such as the final shape, authorization mechanism and corresponding pricing methods of data products have not been effectively addressed. In this paper, based on the shareability and easy replication of data products, it is concluded that the nature of data product circulation is a series of authorization processes; therefore, data product pricing is a form of authorization pricing. This article proposes principles of data product authorization and its pricing mechanism, a two-step authorization pattern of data product circulation, and a corresponding structure for a data product operation platform.
Method of Concept Reduction Based on Concept Discernibility Matrix
WANG Xia, PENG Zhi-hua, LI Jun-yu, WU Wei-zhi
Computer Science. 2021, 48 (1): 125-130.  doi:10.11896/jsjkx.200800013
The concept reduction of a formal context based on Boolean factor analysis preserves all binary relations of the formal context; that is, the relations between objects and attributes contained in a concept reduction based on Boolean factor analysis are consistent with the binary relations represented by the formal context. Inspired by the idea of using a discernibility matrix to solve attribute reduction in a concept lattice, a concept discernibility matrix is defined for a formal context, and a method based on it is proposed to find all concept reducts. Firstly, a new discernibility matrix, called the concept discernibility matrix of the formal context, is defined; both its rows and columns are indexed by formal concepts, and each element of the matrix is the set of all object-attribute pairs that belong to the formal concept in the corresponding row but not to the formal concept in the corresponding column. Secondly, the relationship between the concept discernibility matrix and concept consistent sets is studied, and a method for judging concept consistent sets using the concept discernibility matrix is given. Then, all formal concepts of a formal context are divided into three categories according to their relationship to concept reducts: core concepts, relatively necessary concepts and unnecessary concepts, and the characteristics of each category are discussed in detail. Moreover, methods for judging these three kinds of formal concepts using the concept discernibility matrix are developed. The detailed process of solving all concept reducts of a formal context based on the concept discernibility matrix is illustrated by an example. Finally, solution steps to find all concept reducts using the concept discernibility matrix are given, and the complexity of each step is briefly analyzed.
Dynamic Updating Method of Concepts and Reduction in Formal Context
ZENG Hui-kun, MI Ju-sheng, LI Zhong-ling
Computer Science. 2021, 48 (1): 131-135.  doi:10.11896/jsjkx.200800018
The concept lattice is widely used as a knowledge structure in many real-life applications, and the updating of formal concepts is inevitable in dynamic cases. The updating of concepts is not only a supplement of knowledge but also a fusion of information. This paper mainly studies the method of concept updating when a single attribute or a subset of attributes is added to the formal context. The changes of reduction and the minimum vertex covering are discussed. Finally, redundancy rule extraction and optimization problems are discussed when a dynamic attribute is added to a decision formal context. Under the condition of keeping the antecedents of rules, the changes of non-redundant rules are studied when a decision attribute is added dynamically.
Three-way Filtering Algorithm of Basic Clustering Based on Differential Measurement
LIANG Wei, DUAN Xiao-dong, XU Jian-feng
Computer Science. 2021, 48 (1): 136-144.  doi:10.11896/jsjkx.200700213
The pre-processing of basic clustering members is an important research step in ensemble clustering algorithms. Numerous studies have shown that the differences within the set of basic clustering members affect the performance of ensemble clustering. Current ensemble clustering research revolves around the generation of basic clusterings and their integration, while the differential measurement and optimization of basic clustering members are not well developed. Based on Jaccard similarity, this study proposes a measurement for the differential of basic clustering members and constructs a differential three-way filtering method for basic clustering members by introducing the idea of three-way decisions. This method first sets the initial thresholds α(0) and β(0) of the three-way decisions and then calculates the differential of each basic clustering member to implement the three-way decisions. Its decision strategy is: when the differential metric of a basic clustering member is less than the threshold α(0), the member is deleted; when it is greater than the threshold β(0), the member is retained; and when it lies between α(0) and β(0), the member is added to the boundary domain of the three-way decisions, to be judged further by three-way decisions with new thresholds. After completing one round of three-way decisions, the algorithm recalculates the thresholds and re-applies the three-way decisions to the boundary domain remaining from the last round, until no basic clustering member is added to the boundary domain or the specified number of iterations is reached. Comparative experiments show that this differential-measurement based three-way filtering method for basic clusterings can effectively improve the performance of ensemble clustering.
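One filtering round, transcribing the decision strategy stated above; diff(), α and β are supplied by the caller (diff could be, e.g., an averaged Jaccard dissimilarity to the other base clusterings, per the abstract).

```python
def three_way_filter(members, diff, alpha, beta):
    """Return (retained, boundary) per the stated rule; deleted members are dropped."""
    keep, boundary = [], []
    for m in members:
        d = diff(m)
        if d < alpha:
            continue             # delete: too similar, adds no diversity
        elif d > beta:
            keep.append(m)       # retain: sufficiently different
        else:
            boundary.append(m)   # defer: re-decide with recalculated thresholds
    return keep, boundary
```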
Weighted Hesitant Fuzzy Clustering Based on Density Peaks
ZHANG Yu, LU Yi-hong, HUANG De-cai
Computer Science. 2021, 48 (1): 145-151.  doi:10.11896/jsjkx.200400043
Due to cognitive limitations and information uncertainty, traditional fuzzy clustering cannot effectively solve real-life decision-making problems when cluster analysis is carried out on them; therefore, clustering algorithms for hesitant fuzzy sets (HFSs) were proposed. The concept of hesitant fuzzy sets evolved from fuzzy sets, which are applied in the fuzzy linguistic approach. The distance function of the hierarchical hesitant fuzzy K-means clustering algorithm uses equal weights since dataset information is seldom considered, and the computational complexity of computing the cluster centers is exponential, which is infeasible in a big data environment. In order to solve the above problems, this paper presents a novel clustering algorithm for hesitant fuzzy sets based on density peaks, called WHFDP. Firstly, a new method for extending the shorter hesitant fuzzy element sets is proposed to calculate the distance between two HFSs, and a new formula for calculating the weights of the distance function, combined with the coefficient of variation, is given. In addition, the computational complexity of computing cluster centers is reduced by using the density peaks clustering method to select cluster centers, and the adaptability to data sets of different sizes and arbitrary shapes is also improved. The time and space complexity of the algorithm are reduced to polynomial level. Finally, typical data sets are used in simulation experiments, which prove the effectiveness of the new algorithm.
Similarity Construction Method for Pythagorean Fuzzy Set Based on Fuzzy Equivalence
HU Ping, QIN Ke-yun
Computer Science. 2021, 48 (1): 152-156.  doi:10.11896/jsjkx.191100102
The notion of a Pythagorean fuzzy set is a generalization of Zadeh's fuzzy sets. The study of similarity measures between Pythagorean fuzzy sets is an important topic in Pythagorean fuzzy set theory. Most existing similarity measures have been presented for specific practical problems. This paper focuses on general methods for constructing similarity measures between Pythagorean fuzzy sets by using fuzzy equivalences. The notion of fuzzy equivalence is extended to Pythagorean fuzzy numbers, the notion of PFN fuzzy equivalence is proposed, and methods for constructing PFN fuzzy equivalences are presented. Furthermore, by using aggregation operators, some general methods for constructing similarity measures between Pythagorean fuzzy sets are proposed. It is shown that some existing similarity measures are special cases of the similarity measures proposed in this study.
Variable Three-way Decision Model of Multi-granulation Decision Rough Sets Under Set-pair Dominance Relation
XUE Zhan-ao, ZHANG Min, ZHAO Li-ping, LI Yong-xiang
Computer Science. 2021, 48 (1): 157-166.  doi:10.11896/jsjkx.191200175
Multi-granulation decision rough sets are important models for handling decision-making under uncertain data and risk from multiple perspectives. Addressing decision analysis in incomplete information systems, this paper first introduces the set-pair dominance relation into multi-granulation decision rough sets and improves the set-pair dominance degree in that relation, making the results more reasonable. Then, the multi-granulation approximation space is expanded, and five kinds of multi-granulation decision rough set models under the set-pair dominance relation are proposed: optimistic, pessimistic, mean, optimistic-pessimistic and pessimistic-optimistic. Meanwhile, the related properties and the relations among these models are discussed. Furthermore, combined with the theory of three-way decisions, the loss function is represented by interval values in the incomplete information system, different thresholds are obtained, five corresponding variable three-way decision models are established, and the decision rules are derived. Finally, a case of employee evaluation shows that the proposed models are more flexible in practical application, neither too loose nor too strict, and the final decision is more reasonable. This provides a novel method for decision-making on uncertainty problems in incomplete information systems.
Computer Graphics & Multimedia
Multi-label Video Classification Assisted by Danmaku
CHEN Jie-ting, WANG Wei-ying, JIN Qin
Computer Science. 2021, 48 (1): 167-174.  doi:10.11896/jsjkx.200800198
This work explores the multi-label video classification task assisted by danmaku (overlaid user comments). Multi-label video classification can associate multiple tags with a video from different aspects, which benefits video understanding tasks such as video recommendation. There are two challenges in this task: the high annotation cost of datasets, and how to understand video from multi-aspect and multimodal perspectives. Danmaku is a new trend in online commenting, and danmaku videos carry many manual annotations added by website users thanks to high user engagement, which can be used directly as classification data. This work collects a multi-label danmaku video dataset and builds a hierarchical label correlation structure on danmaku video data for the first time; the dataset will be released in the future. Danmaku contains informative and fine-grained interaction data with the video content, so this paper introduces the danmaku modality to assist classification, whereas most previous works combine only the visual and audio modalities. The cluster-based model NeXtVLAD, attention Dbof, and temporal GRU models are chosen as baselines. Experiments show that danmaku data are helpful, improving GAP by 0.23. This paper also explores the use of label correlation, updating the video labels by a relationship matrix to integrate semantic information into training. Experiments show that leveraging label correlation improves Hit@1 by 0.15; besides, MAP can be improved by 0.04 on fine-grained labels, which indicates that label semantic information benefits the prediction of small classes and is valuable to explore.
Domain Alignment Based Object Detection of X-ray Images
HE Yan-hui, WU Gui-xing, WU Zhi-qiang
Computer Science. 2021, 48 (1): 175-181.  doi:10.11896/jsjkx.200200023
Significant progress has been made towards building accurate automatic object detection systems for a variety of parcel security check applications using convolutional neural networks. However, the performance of these systems often degrades when they are applied to new data that differ from the training data, for example due to variations in X-ray imaging. In this paper, we propose a context-based and transmittance-adaptive domain alignment method to address this performance degradation. Firstly, using the color information in X-ray images, we design an attention mechanism that processes each color channel of an X-ray image separately, to address color differences among X-ray machines. Next, we develop a feature alignment method to reduce the statistical differences among X-ray images generated by various manufacturers. Finally, we propose using a context vector as a regularizer that improves adversarial training and thereby precision. The method proposed in this paper addresses the accuracy degradation of object detection in a test domain that differs from the training domain.
Anime Character Portrait Generation Algorithm Based on Improved Generative Adversarial Networks
ZHANG Yang, MA Xiao-hu
Computer Science. 2021, 48 (1): 182-189.  doi:10.11896/jsjkx.191100092
In order to solve the problems of poor diversity, generation by class and detail control in existing methods, we present an improved model named LMV-ACGAN, which is based on ACGAN and incorporates mutual information and multiscale discrimination. Our model includes a feature-combined generator, a multiscale discriminator and three fully connected nets for real-fake judgment, classification and latent label restoration. As a semi-supervised generative model, in addition to the class label, we also use a group of continuous latent labels to strengthen the constraint on the generator. Moreover, in our algorithm, the pooling layers in VGG-NET are replaced by strided convolutions, and the discriminator uses the multiscale information of the image for feature fusion. Finally, we improve the tail-end structure of the model and the parameter update rules so as to reduce the mutual interference among classification, real-fake judgment and latent label restoration as far as possible. Our experiments show that the proposed method effectively solves the mode collapse problem on our dataset; meanwhile, compared with the original ACGAN, our method increases the success rate and accuracy of generating images of a specified class, and succeeds on images that ACGAN generates poorly or classifies incorrectly. In addition, our model enables users to modify the continuous latent labels to realize image editing, such as changing the face orientation.
Remote Sensing Image Description Generation Method Based on Attention and Multi-scale Feature Enhancement
ZHAO Jia-qi, WANG Han-zheng, ZHOU Yong, ZHANG Di, ZHOU Zi-yuan
Computer Science. 2021, 48 (1): 190-196.  doi:10.11896/jsjkx.200600076
Remote sensing image description generation is a hot research topic involving both computer vision and natural language processing; its task is to automatically generate a description sentence for a given image. This paper proposes a remote sensing image description generation method based on multi-scale and attention feature enhancement. The alignment between generated words and image features is realized through a soft attention mechanism, which improves the interpretability of the model. In addition, in view of the high resolution of remote sensing images and the large variation in target scale, this paper proposes a feature extraction network based on pyramid pooling and a channel attention mechanism (Pyramid Pooling and Channel Attention Network, PCAN) to capture multi-scale features of remote sensing images and local cross-channel interaction information. The image features extracted by the model serve as input to the soft attention mechanism of the description generation stage, which computes the context information; the context information is then fed into an LSTM network to obtain the final output sequence. Experiments on the RSICD and MSCOCO datasets prove that adding PCAN and the soft attention mechanism can improve the quality of the generated sentences and realize the alignment between words and image features. Visualization analysis of the soft attention mechanism improves the credibility of the model's results. In addition, experiments on a semantic segmentation dataset prove that the proposed PCAN is also effective for semantic segmentation tasks.
Fine-grained Image Recognition Method Combining with Non-local and Multi-region Attention Mechanism
LIU Yang, JIN Zhong
Computer Science. 2021, 48 (1): 197-203.  doi:10.11896/jsjkx.191000135
The goal of fine-grained image recognition is to classify object subclasses at a fine-grained level. Because the differences between subclasses are very subtle, fine-grained image recognition is very challenging. The difficulty for this kind of algorithm lies in locating the discriminative parts of fine-grained targets and extracting features at the fine-grained level. To this end, a fine-grained recognition method combining Non-local and multi-region attention mechanisms is proposed. The Navigator locates discriminative regions using only image labels, and achieves good classification results by fusing global features and discriminative regional features. However, the Navigator is still flawed. Firstly, it does not consider the relationships between different locations, so the proposed algorithm combines the Non-local module with the Navigator to enhance the model's global information perception. Secondly, since the Non-local module does not establish relationships between feature channels, a feature extraction network based on a channel attention mechanism is constructed, which makes the network pay more attention to important feature channels. The proposed algorithm achieves recognition accuracies of 88.1%, 94.3% and 91.8% on three open fine-grained image databases, CUB-200-2011, Stanford Cars and FGVC Aircraft respectively, a significant improvement over the Navigator.
Efficient Semi-global Binocular Stereo Matching Algorithm Based on PatchMatch
SANG Miao-miao, PENG Jin-xian, DA Tong-hang, ZHANG Xu-feng
Computer Science. 2021, 48 (1): 204-208.  doi:10.11896/jsjkx.191000205
In recent years, binocular stereo matching has developed rapidly, and applications demanding high accuracy, high resolution and large disparity put forward higher requirements for computational efficiency. Since the computational complexity inherent in traditional stereo matching algorithms is proportional to the disparity range, they have had difficulty meeting high-resolution, large-disparity applications. Weighing the pros and cons of several types of stereo matching algorithms in terms of computational complexity, an efficient semi-global stereo matching algorithm based on PatchMatch is proposed through an effective combination of the two algorithms. It significantly reduces the computational complexity of the original SGM algorithm, since it restricts the candidate disparities to a group of the best t candidates (t is much smaller than the disparity range), obtained by means of the PatchMatch spatial propagation scheme, instead of the whole disparity range. Evaluation on the KITTI2015 dataset demonstrates that the proposed algorithm achieves a significant improvement in accuracy and run time, with a 5.81% error matching rate and a matching time of 20.2 seconds. Therefore, as an improvement on traditional stereo matching, this design provides an efficient solution for large-disparity binocular stereo matching systems.
Artificial Intelligence
Survey on Target Site Prediction of Human miRNA Based on Deep Learning
LI Ya-nan, HU Yu-jia, GAN Wei, ZHU Min
Computer Science. 2021, 48 (1): 209-216.  doi:10.11896/jsjkx.191200111
MicroRNAs (miRNAs) are 22~23 nt small non-coding RNAs that play an important role in biological evolution. A mature miRNA pairs completely or incompletely, through its seed region, with target sites in the 3'UTR region of messenger RNAs (mRNAs), to achieve functions such as cleavage and translational repression. As the mechanism by which miRNAs bind to mRNA target sites is still unclear, the prediction of miRNA target sites has been a major challenge in miRNA research. Experimental methods are accurate but time-consuming and expensive; in bioinformatics, calculation methods based on rule matching can predict target sites but suffer from low accuracy. With the development of deep learning and the abundance of experimental data, methods based on deep learning have become a research hotspot in miRNA target prediction. This paper first introduces the commonly used datasets, prediction types and common features of miRNA prediction, then explains the deep learning models commonly used in prediction research. Next, conventional prediction methods and prediction methods based on deep learning are introduced, classified and summarized. Finally, the current problems and future development of using deep learning to predict miRNA targets are discussed.
Survey on Multi-winner Voting Theory
LI Li
Computer Science. 2021, 48 (1): 217-225.  doi:10.11896/jsjkx.200600013
With the advent of the intelligent age, the way of collective decision-making is also changing. People are no longer satisfied with a single-winner decision result, but need a committee composed of multiple winners as the winner set; such committees are applied in recommendation systems and search engines, policy voting, corporate decision-making, and so on. The biggest advantage of multi-winner voting theory is that decision cost is low and decision efficiency is high, making it an excellent collective decision method. The research core of multi-winner voting theory lies in finding multi-winner voting rules suitable for different application scenarios. This paper introduces two categories of multi-winner decision-making methods: committee voting rules and multi-winner voting rules based on approval voting; the two categories represent the research directions of two different types of multi-winner voting theory. This paper explains representative multi-winner voting rules under the two categories based on the establishment of a logic model, and discusses the development trend of multi-winner voting theory by sorting out the current influential literature. It is expected to help more researchers solve practical problems with this theory.
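As one concrete example of an approval-based committee rule (not necessarily one analyzed in this survey), sequential proportional approval voting (seq-PAV) greedily adds the candidate with the largest marginal PAV score: a voter whose ballot already contains j winners contributes 1/(j+1) for each newly approved winner.

```python
def seq_pav(ballots, candidates, k):
    """ballots: list of approval sets; candidates: set; k: committee size."""
    committee = set()
    for _ in range(k):
        def gain(c):
            # marginal PAV score of adding c, given the current committee
            return sum(1.0 / (len(b & committee) + 1)
                       for b in ballots if c in b)
        committee.add(max(candidates - committee, key=gain))
    return committee

# Example: 4 voters with approval ballots, committee of size 2.
ballots = [{"a", "b"}, {"a", "b"}, {"a", "c"}, {"d"}]
print(seq_pav(ballots, {"a", "b", "c", "d"}, 2))
```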
Deep Interest Factorization Machine Network Based on DeepFM
WANG Rui-ping, JIA Zhen, LIU Chang, CHEN Ze-wei, LI Tian-rui
Computer Science. 2021, 48 (1): 226-232.  doi:10.11896/jsjkx.191200098
A recommendation system can sort out and display information that may be of interest from the mass of information according to users' preferences. As deep learning has achieved good results in multiple research fields, it has also begun to be applied to recommendation systems. However, current recommendation ranking algorithms based on deep learning often use the Embedding & MLP pattern and can only obtain high-order feature interactions. In order to solve this problem, DeepFM adds FM to the above pattern, which can learn low-order and high-order feature interactions end-to-end. But DeepFM cannot express the diversity of user interests. In view of this, this paper proposes a Deep Interest Factorization Machine Network (DIFMN) by introducing the multi-head attention mechanism into DeepFM. DIFMN can adaptively learn the user representation according to the different items to be recommended, reflecting the diversity of user interests. In addition, the model adds preference representations according to the type of the user's historical behaviors, so that it can be applied not only to tasks that record only historical behaviors the user likes, but also to tasks that record both liked and disliked historical behaviors. This paper implements the algorithm with tensorflow-gpu and performs comparative tests on two public datasets, Amazon (Electronics) and MovieLens-20M. Experimental results show that RelaImpr improves by 17.70% and 35.24% respectively compared to DeepFM, which validates the feasibility and effectiveness of the proposed method.
Multi-view Dictionary-pair Learning Based on Block-diagonal Representation
ZHANG Fan, HE Wen-qi, JI Hong-bing, LI Dan-ping, WANG Lei
Computer Science. 2021, 48 (1): 233-240.  doi:10.11896/jsjkx.200800211
Dictionary learning is widely used in multi-view classification as an efficient feature learning technique. Most multi-view dictionary learning methods either use only part of the information in multi-view data, or learn only one type of dictionary in their frameworks. However, in practice, the diversity information of multi-view data and the correlation among views are equally important, and neither a single synthesis dictionary learning scheme nor a single analysis dictionary learning scheme can meet the requirements of processing speed, interpretability and application feasibility at the same time. To address these issues, a novel block-diagonal representation based multi-view dictionary-pair learning framework (BDR-MVDPL) is proposed in this paper. The algorithm obtains representation coefficients that contain more information useful for classification by introducing a dictionary-pair learning model. Firstly, to ensure the discriminative ability of the coding coefficient matrix, the method directly enforces a block-diagonal constraint on the coding coefficients with an explicit formulation. Then, it adopts a feature fusion strategy that concatenates the coding coefficients of different views and regresses the concatenated coding coefficients to the corresponding label vectors, so that both the diversity information and the correlation of multi-view data are considered. Finally, it integrates dictionary learning and classifier learning into a unified framework, so that the dictionary pair and classifier are updated alternately in an iterative manner and the whole classification task is carried out automatically. Experiments on several multi-feature datasets show that, compared to other multi-view dictionary learning algorithms, the proposed method achieves competitive classification accuracy while enjoying low computational complexity.
Conditional Generative Adversarial Network Based on Self-attention Mechanism
YU Wen-jia, DING Shi-fei
Computer Science. 2021, 48 (1): 241-246.  doi:10.11896/jsjkx.200700187
In recent years, more and more generative adversarial networks have appeared in various fields of deep learning. Conditional generative adversarial networks (cGAN) were the first to introduce supervised learning into unsupervised GANs, making it possible for adversarial networks to generate labeled data. Traditional GANs generate images through multiple convolution operations that simulate dependencies among different regions. However, cGAN only improves the objective function of GAN and does not change its network structure. Therefore, cGAN inherits the problem that when features in the generated image are far apart, they interact relatively little, resulting in unclear details in the generated image. In order to solve this problem, this paper introduces a self-attention mechanism into cGAN and proposes a new model named SA-cGAN. The model generates consistent objects or scenes by using features at long range in the image, thereby improving the generative ability of conditional GANs. SA-cGAN is evaluated on the CelebA and MNIST handwritten-digit datasets and compared with several commonly used generative models such as DCGAN and cGAN; the results prove that the proposed model makes some progress in the field of image generation.
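A minimal SAGAN-style self-attention layer of the kind SA-cGAN presumably inserts into the cGAN generator or discriminator; the channel reduction factor and placement are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Every spatial position attends to every other, so long-range features interact."""
    def __init__(self, ch):
        super().__init__()
        self.q = nn.Conv2d(ch, ch // 8, 1)
        self.k = nn.Conv2d(ch, ch // 8, 1)
        self.v = nn.Conv2d(ch, ch, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # starts as an identity mapping

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)   # B x HW x C'
        k = self.k(x).flatten(2)                   # B x C' x HW
        attn = torch.softmax(q @ k, dim=-1)        # B x HW x HW attention map
        v = self.v(x).flatten(2)                   # B x C x HW
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                # residual, gated by gamma
```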
Semantic Slot Filling Based on BERT and BiLSTM
ZHANG Yu-shuai, ZHAO Huan, LI Bo
Computer Science. 2021, 48 (1): 247-252.  doi:10.11896/jsjkx.191200088
Semantic slot filling is an important task in dialogue systems, which aims to label each word of an input sentence correctly; slot filling performance has a marked impact on the subsequent dialogue management module. At present, random word vectors or pre-trained word vectors are usually used to initialize the deep learning models used for the slot filling task. However, random word vectors carry no semantic or grammatical information, and a pre-trained word vector represents only one meaning; neither can provide context-dependent word vectors for the model. We propose an end-to-end neural network model based on the pre-trained model BERT and the Long Short-Term Memory network (LSTM). First, BERT encodes the input sentence as context-dependent word embeddings; then the word embeddings serve as input to a subsequent bidirectional LSTM (BiLSTM); finally, a Softmax function and a conditional random field decode the predicted labels. The pre-trained BERT and the BiLSTM are trained as a whole in order to improve the performance of the semantic slot filling task. The model achieves F1 scores of 78.74%, 87.60% and 71.54% on three datasets (MIT Restaurant Corpus, MIT Movie Corpus and MIT Movie trivial Corpus) respectively. The experimental results show that our model significantly improves the F1 value of the semantic slot filling task.
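A sketch of the described encoder stack using the Hugging Face transformers API; the CRF decoding layer is omitted, and the checkpoint name and hidden size are stand-ins, not the paper's configuration.

```python
import torch.nn as nn
from transformers import BertModel

class BertBiLstmTagger(nn.Module):
    """BERT supplies context-dependent subword embeddings, a BiLSTM re-encodes
    the sequence, and a linear layer emits per-token slot-label scores."""
    def __init__(self, n_labels, hidden=256):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_labels)

    def forward(self, input_ids, attention_mask):
        h = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        h, _ = self.lstm(h)
        return self.out(h)   # logits per token; a CRF/Softmax layer decodes labels
```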
Improved Grey Wolf Optimizer for RFID Network Planning
QUAN Yi-xuan, ZHENG Jia-li, LUO Wen-cong, LIN Zi-han, XIE Xiao-de
Computer Science. 2021, 48 (1): 253-257.  doi:10.11896/jsjkx.200200095
With the rapid development of Internet of Things technology, radio frequency identification (RFID) systems, with their advantages of contactless and rapid identification, have become a first choice for IoT deployments. RFID network planning must consider multiple objectives and has been proved to be a multi-objective optimization problem. In this paper, an improved grey wolf optimizer (IGWO) is proposed, which uses a Gaussian mutation operator and an inertia constant strategy for RFID network planning. Through the established optimization model, on the basis of satisfying the four objectives of 100% tag coverage, deploying fewer readers, avoiding signal interference and consuming less power, the method is compared with particle swarm optimization (PSO), genetic algorithm (GA) and the monarch butterfly algorithm (MMBO). The experimental results show that the grey wolf algorithm performs better in RFID network planning: in the same experimental environment, the fitness value of IGWO is 20.2% higher than that of GA, 13.5% higher than that of PSO, and 9.66% higher than that of MMBO, while covering more tags, so the optimization scheme can be found more effectively.
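A sketch of the grey wolf update with the two tweaks the abstract names, a Gaussian mutation operator and an inertia constant; how the authors combine them exactly is not specified, so the placement of w, the mutation rate and all bounds here are assumptions.

```python
import numpy as np

def igwo(fitness, dim, n=30, iters=200, w=0.7, sigma=0.1, seed=0):
    """Minimize fitness over [0,1]^dim with an inertia-weighted GWO + Gaussian mutation."""
    rng = np.random.default_rng(seed)
    X = rng.random((n, dim))
    for t in range(iters):
        order = np.argsort([fitness(x) for x in X])
        alpha, beta, delta = X[order[:3]]            # three leading wolves
        a = 2 - 2 * t / iters                        # a decreases linearly 2 -> 0
        for i in range(n):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):      # standard GWO position update
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                new += leader - A * np.abs(C * leader - X[i])
            X[i] = w * X[i] + (1 - w) * new / 3      # inertia-weighted move (assumed)
            if rng.random() < 0.1:                   # Gaussian mutation (rate assumed)
                X[i] += rng.normal(0, sigma, dim)
        X = np.clip(X, 0, 1)
    return X[np.argmin([fitness(x) for x in X])]
```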
Information Security
Survey on Adversarial Sample of Deep Learning Towards Natural Language Processing
TONG Xin, WANG Bin-jun, WANG Run-zheng, PAN Xiao-qin
Computer Science. 2021, 48 (1): 258-267.  doi:10.11896/jsjkx.200500078
Abstract PDF(1650KB) ( 3408 )   
References | Related Articles | Metrics
Deep learning models have been proven to be vulnerable and easy to attack with adversarial examples,but current research on adversarial examples mainly focuses on computer vision and ignores the security of natural language processing models.In response to the same adversarial-example risk faced in the field of natural language processing(NLP),this paper clarifies the concepts related to adversarial examples as the basis for further research.Firstly,it analyzes the causes of vulnerability,including the complex structure of deep-learning-based NLP models,the training process that is difficult to inspect and the naive underlying principles;it further elaborates the characteristics,classification and evaluation metrics of text adversarial examples,and introduces the typical tasks and classical datasets involved in adversarial-example research in the field of natural language processing.Secondly,according to the perturbation level,it sorts out the mainstream char-level,word-level,sentence-level and multi-level techniques for generating text adversarial examples.It then summarizes defense methods related to data,models and inference,and compares their advantages and disadvantages.Finally,the pain points of both the attack and defense sides in current NLP adversarial-example research are discussed and future directions are anticipated.
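As a toy illustration of the char-level perturbations surveyed here(not a specific published attack;the function and its heuristics are assumptions),a single adjacent-character swap often survives human reading yet can flip a brittle text classifier:

# Toy char-level perturbation:swap two interior characters of one word.
import random

def char_swap(text: str, rng=None) -> str:
    rng = rng or random.Random()
    words = text.split()
    candidates = [i for i, w in enumerate(words) if len(w) > 3]
    if not candidates:
        return text
    i = rng.choice(candidates)
    w = list(words[i])
    j = rng.randrange(1, len(w) - 2)   # keep first/last characters intact
    w[j], w[j + 1] = w[j + 1], w[j]    # swap interior neighbours
    words[i] = "".join(w)
    return " ".join(words)

print(char_swap("the service was absolutely wonderful"))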
Study on Correctness of Memory Security Dynamic Detection Algorithm Based on Theorem Proving
SUN Xiao-xiang, CHEN Zhe
Computer Science. 2021, 48 (1): 268-272.  doi:10.11896/jsjkx.200100097
Abstract PDF(1400KB) ( 716 )   
References | Related Articles | Metrics
With the development and improvement of software runtime verification technology,many runtime memory-safety verification tools for C have appeared.Most of these tools rely on source-code or intermediate-code instrumentation to achieve runtime detection of memory safety.However,verification tools that have not been rigorously proven often suffer from two problems:first,the added instrumentation may change the behavior and semantics of the source program;second,the instrumentation may fail to actually guarantee memory safety.To solve these two problems,this paper proposes a formal method that uses the Coq theorem prover to determine whether the algorithm of a memory-safety verification tool is correct,and applies this method to prove the correctness of the dynamic detection algorithm of the C verification tool Movec.The proof of the properties of the safety specification shows that Movec's dynamic detection algorithm for memory safety is correct.
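In the same spirit(though the paper's development is in Coq and concerns Movec's full instrumentation semantics,which are not reproduced here),a toy theorem-prover example of certifying a dynamic check can be stated in Lean 4;everything below is illustrative:

-- Toy analogue of certifying a dynamic memory-safety check:if the
-- instrumented bounds check returns true,the access really is in bounds.
def inBounds (len i : Nat) : Bool := i < len

theorem inBounds_sound (len i : Nat) (h : inBounds len i = true) :
    i < len := by
  simpa [inBounds] using h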
Deep Neural Network Based Ponzi Scheme Contract Detection Method
ZHANG Yan-mei, LOU Yin-cheng
Computer Science. 2021, 48 (1): 273-279.  doi:10.11896/jsjkx.191100020
Abstract PDF(3614KB) ( 2258 )   
References | Related Articles | Metrics
The development of blockchain technology has attracted the attention of global investors.Currently,tens of thousands of smart contracts are deployed on Ethereum.Although they bring disruptive innovation to finance,traceability and many other industries,some smart contracts on Ethereum contain fraudulent schemes such as Ponzi schemes,causing millions of dollars of losses to global investors.At present,however,there are few quantitative methods for identifying Ponzi schemes in the context of Internet finance,research on detecting Ponzi scheme contracts on Ethereum is scarce,and detection accuracy still needs to be improved.Therefore,a Ponzi scheme contract detection method based on a deep neural network is proposed.It extracts features of smart contracts that help to identify Ponzi schemes,such as opcode features and account features,to form a dataset.The model is then trained on this dataset and its performance is evaluated on a test set.The experimental results show that the proposed method achieves a precision of 99.6% and a recall of 96.3%,both better than those of existing methods.
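A minimal sketch of the kind of classifier described,assuming PyTorch;the feature-group sizes and layer widths are illustrative assumptions,not the paper's exact design:

# Opcode-frequency and account features concatenated into a small dense
# network for Ponzi/non-Ponzi classification. Illustrative sketch only.
import torch
import torch.nn as nn

class PonziDetector(nn.Module):
    def __init__(self, n_opcode_feats: int = 76, n_account_feats: int = 7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_opcode_feats + n_account_feats, 128), nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 2),   # Ponzi vs. non-Ponzi logits
        )

    def forward(self, opcode_freq, account_feats):
        # One feature vector per contract,built from both feature groups.
        return self.net(torch.cat([opcode_freq, account_feats], dim=-1))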
Malicious Code Family Detection Method Based on Knowledge Distillation
WANG Run-zheng, GAO Jian, HUANG Shu-hua, TONG Xin
Computer Science. 2021, 48 (1): 280-286.  doi:10.11896/jsjkx.200900099
Abstract PDF(2984KB) ( 1109 )   
References | Related Articles | Metrics
In recent years,malicious code variants have emerged in an endless stream,and malware has become more covert and persistent;rapid and effective detection methods for identifying malicious samples are urgently needed.To address this situation,a malicious code family detection method based on knowledge distillation is proposed.The method disassembles malicious samples and transforms their binaries into images through malicious code visualization technology,thereby avoiding dependence on traditional feature engineering.In the teacher network,a residual network is used to extract deep features of the image texture,and a channel-domain attention mechanism is introduced to extract key information from the image according to the channel weights.To speed up the identification of samples under test,and to address the large parameter counts and heavy computing-resource consumption of deep-neural-network detection models,the teacher network is used to guide the training of the student network.The results show that the student network maintains the detection performance on malicious code families while reducing model complexity,which facilitates batch sample detection and deployment on mobile terminals.
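The distillation objective underlying such teacher-student training,in one standard formulation(temperature and weighting here are illustrative,not the paper's values),can be sketched as:

# Soft targets from the teacher at temperature T,mixed with the usual
# hard-label cross-entropy;T*T rescales the gradient of the soft term.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T: float = 4.0, alpha: float = 0.7):
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard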
Integrated Emergency-Defense System Based on Blockchain
SHAO Wei-hui, WANG Ning, HAN Chuan-feng, XU Wei-sheng
Computer Science. 2021, 48 (1): 287-294.  doi:10.11896/jsjkx.191200124
Abstract PDF(4842KB) ( 832 )   
References | Related Articles | Metrics
The emergency-defense system of China is decentralized and lacks information communication channels and coordination mechanisms.This results in low mobilization efficiency,resource scheduling conflicts and poor coordination,which highlights the urgency of upgrading the capacity of an integrated emergency-defense system.In response,this paper studies an integrated emergency-defense mechanism with an expert system as the supervisory decision-making layer and a P2P(Peer-to-Peer) network as the autonomous decision-making layer.Based on the location,time and category of emergency response or national defense events,a three-dimensional blockchain model is established to describe resource scheduling problems in different situations.Considering the difference between the integrated emergency-defense scenario and conventional peer-to-peer transaction scenarios,a consensus mechanism based on credit certificates is designed.Finally,an integrated emergency-defense blockchain prototype system in which a limited number of nodes participate is built on Ethereum.Simulation results of the prototype system show that the blockchain-based integrated emergency-defense mechanism is reasonable and feasible.
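One plausible reading of a credit-certificate consensus is that nodes with higher accumulated credit are more likely to propose blocks;the toy sketch below shows only that selection step(node names,the weighting rule and the function itself are assumptions,not the paper's protocol):

# Toy credit-weighted proposer selection for a permissioned chain.
import random

def select_proposer(credits, rng=None):
    """Pick the next block proposer with probability proportional to credit."""
    rng = rng or random.Random()
    nodes = list(credits)
    weights = [max(credits[n], 0.0) for n in nodes]
    return rng.choices(nodes, weights=weights, k=1)[0]

print(select_proposer({"fire_dept": 5.0, "hospital": 3.0, "militia": 2.0}))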
Anomaly Judgment of Directly Associated Nodes Under Cloud Computing Environment
LEI Yang, JIANG Ying
Computer Science. 2021, 48 (1): 295-300.  doi:10.11896/jsjkx.191200186
Abstract PDF(1819KB) ( 644 )   
References | Related Articles | Metrics
Currently,more and more users deploy their services in cloud computing environments.Due to the diversity of services and their dynamic deployment,anomalies may occur on nodes in a cloud computing environment.Traditional node anomaly detection methods usually neglect the impact of anomalous nodes on associated nodes,which can result in anomaly propagation and node failures.In this paper,a method for judging anomalies of directly associated nodes in a cloud computing environment is proposed.First,an Agent is deployed on each node,the running data of the nodes are collected through the Agent at specific time intervals,and a node relationship graph is established based on the relationships between nodes.Second,an anomaly detection model is trained on the running data,and the weights and comprehensive scores of the running data are calculated.The anomaly of a single node is judged by a sliding-time-window-based method.Finally,when a single node is anomalous,the other nodes affected by it are found through normalized mutual information.Experiments are carried out on a cloud computing platform;to simulate various anomaly situations,anomalies are injected during the experiments and the states of the nodes under injection are observed.The experiments verify the validity of the anomaly judgment method for single nodes and directly associated nodes.The results show that the accuracy and specificity of the proposed method are better than those of other methods for single-node anomaly judgment,and that under a multi-node structure the method can find directly associated anomalous nodes with higher accuracy and stability.
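The normalized-mutual-information step might look like the sketch below,which discretizes two nodes' metric series over the same window and compares them(assumes scikit-learn;the bin count and threshold are illustrative assumptions):

# Flag a node as directly associated with an anomalous node when their
# discretized metric series share high normalized mutual information.
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def associated_anomaly(anomalous, other, bins=10, threshold=0.6):
    a = np.digitize(anomalous, np.histogram_bin_edges(anomalous, bins))
    b = np.digitize(other, np.histogram_bin_edges(other, bins))
    return normalized_mutual_info_score(a, b) >= threshold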
Mutation Based Fault Localization Technique for BPEL Programs
SUN Chang-ai, ZHANG Shou-feng, ZHU Wei-zhong
Computer Science. 2021, 48 (1): 301-307.  doi:10.11896/jsjkx.200900051
Abstract PDF(3359KB) ( 760 )   
References | Related Articles | Metrics
Unlike traditional C,C++ or Java programs,BPEL(Business Process Execution Language) programs are composed of a set of activities and their interactions,and have new features such as concurrency,synchronization and an XML-based representation.These features make it difficult to locate faults in BPEL programs effectively.To address the limited effectiveness of existing fault localization techniques,we propose a mutation-based fault localization technique for BPEL programs,design a set of optimization strategies based on the characteristics of BPEL programs and their mutation operators,and develop a supporting tool.Experiments on six real-life BPEL programs are conducted to evaluate the feasibility and fault localization effectiveness of the proposed technique,and its effectiveness is compared with that of a set of benchmark techniques.Experimental results show that the proposed technique achieves a higher recall rate at a cost comparable to that of the benchmark techniques,demonstrating that the proposed optimization strategies reduce its mutation cost.
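For orientation,mutation-based fault localization commonly scores a statement by how its mutants change test outcomes;the sketch below uses a Metallaxis-style Ochiai formula as one common formulation(the paper's exact formula and its BPEL-specific mutation operators are not reproduced here):

# Score each statement by the best Ochiai score over its mutants,where a
# test "kills" a mutant when its pass/fail outcome changes on the mutant.
import math

def suspiciousness(mutant_results, total_failing):
    """mutant_results: iterable of (statement_id, killed_by_failing,
    killed_by_passing) triples,one per mutant."""
    scores = {}
    for stmt, kf, kp in mutant_results:
        denom = math.sqrt(total_failing * (kf + kp))
        score = kf / denom if denom else 0.0
        scores[stmt] = max(scores.get(stmt, 0.0), score)
    # Rank statements from most to least suspicious.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)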
Interdiscipline & Frontier
Overview of Application of Positioning Technology in Virtual Reality
ZHANG Yu-xiang, REN Shuang
Computer Science. 2021, 48 (1): 308-318.  doi:10.11896/jsjkx.200800010
Abstract PDF(2066KB) ( 1949 )   
References | Related Articles | Metrics
In recent years,virtual reality technology in China has developed rapidly alongside 5G technology,sensor technology and consumer graphics processors.The demand for virtual reality in education,transportation,commerce,entertainment,industry and other fields is increasing rapidly.Virtual reality is a comprehensive new information technology in which positioning technology determines the user's immersion and interaction and is an important support for virtual reality as a whole.It is therefore necessary to survey positioning technology,one of the core technologies of virtual reality.This paper first introduces virtual reality and positioning technology,then analyzes and compares in detail the typical positioning technologies currently used in virtual reality systems,covering their principles,related research results and usage scenarios in virtual reality.After that,it introduces the mainstream virtual reality positioning equipment on the market,discusses the positioning algorithms used in virtual reality positioning technology,and finally presents current problems and future development directions of virtual reality positioning technology.
Application Research on Container Technology in Scientific Computing
XU Yun-qi, HUANG He, JIN Zhong
Computer Science. 2021, 48 (1): 319-325.  doi:10.11896/jsjkx.191100111
Abstract PDF(2015KB) ( 1767 )   
References | Related Articles | Metrics
Container is a new virtualization technology that has emerged in recent years.Because it provides isolated environments for running applications and services with minimal resource overhead,it has quickly exploded in popularity among enterprises and has seen wide application in business scenarios such as continuous integration,continuous deployment,automated testing and micro-services.Although not yet as fully utilized as in industry,the packaging ability of containers also holds promise for improving productivity and code portability in the domain of scientific computing.In this paper,we discuss how container and related technologies can be used in scientific computing by surveying existing application examples.The different application patterns represented by these examples suggest that the scientific computing community may benefit from container technology and its evolving ecosystem in many different ways.
Highly Available Elastic Computing Platform for Metagenomics
HE Zhi-peng, LI Rui-lin, NIU Bei-fang
Computer Science. 2021, 48 (1): 326-332.  doi:10.11896/jsjkx.191200030
Abstract PDF(3670KB) ( 646 )   
References | Related Articles | Metrics
Next-generation sequencing(NGS) has significantly promoted the development of metagenomics due to its low cost and ultra-high throughput.At the same time,it has brought great challenges to researchers,since processing large-scale,highly complex sequencing data is a tough task.On the one hand,the analysis of large-scale sequencing data consumes substantial resources,including hardware and time.On the other hand,computational analysis requires deploying,debugging and maintaining a large number of metagenomics analysis tools,which is difficult for ordinary users.For these reasons,this paper compares the mainstream metagenomics computing platforms in the field and comprehensively analyzes the main advantages and disadvantages of each platform.Furthermore,a highly available and elastic metagenomics computing platform,MWS-MGA(More than a Web Service for Metagenomic Analysis),focused on metagenomics computational analysis is constructed by combining current effective computing service technologies.MWS-MGA provides not only multiple interactive access methods but also rich and flexible computing tools,greatly lowering the threshold for researchers to conduct metagenomics analysis.