Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
    Transplantation and Optimization of Graph Matching Algorithm Based on Domestic DCU Heterogeneous Platform
    HAO Meng, TIAN Xueyang, LU Gangzhao, LIU Yi, ZHANG Weizhe, HE Hui
    Computer Science    2024, 51 (4): 67-77.   DOI: 10.11896/jsjkx.230800193
    Subgraph matching is a fundamental graph algorithm widely used in fields such as social networks and graph neural networks. As the scale of graph data grows, efficient subgraph matching algorithms are increasingly needed. GENEVA is a GPU-based parallel subgraph matching algorithm. It uses an interval-indexed graph storage structure and parallel matching optimizations to greatly reduce storage overhead and improve matching performance. However, because of differences in the underlying hardware architecture and compilation environment, GENEVA cannot be applied directly to the domestic DCU platform. To solve this problem, this paper proposes a scheme for porting GENEVA to the domestic DCU and optimizing it there. IO time is the main performance bottleneck of the GENEVA algorithm, and this paper proposes three optimization strategies to alleviate it: page-locked memory, preloading, and a scheduler. Page-locked memory avoids the extra data transfer from pageable memory to temporary page-locked memory, greatly reducing IO transfer time on the DCU platform. Preloading overlaps IO data transmission with DCU kernel computation to mask IO time. The scheduler reduces redundant data transfer while satisfying the preloading requirements. Experiments on three real-world datasets of different sizes show that the optimization strategies significantly improve performance. On 92.6% of the test cases, the execution time of the optimized GENEVA-HIP on the Sugon DCU platform is lower than that of the unported GENEVA on a GPU server. On the largest dataset, the execution time of the optimized GENEVA-HIP algorithm on the DCU platform is 52.73% lower than that of the pre-port GENEVA algorithm on the GPU server.
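The preloading idea in the abstract (overlap IO transfer of the next chunk with kernel computation on the current one) can be sketched with a one-slot producer/consumer buffer. This is a hedged illustration only: Python threads stand in for HIP streams and DMA engines, and `transfer`/`compute` are hypothetical placeholders, not GENEVA's actual API.

```python
import threading
import queue

def process(chunks, transfer, compute):
    """Overlap 'IO' (transfer) of chunk i+1 with compute on chunk i."""
    q = queue.Queue(maxsize=1)  # one-slot buffer = classic double buffering

    def loader():
        for c in chunks:
            q.put(transfer(c))   # simulated host-to-device copy (blocks when buffer full)
        q.put(None)              # sentinel: no more chunks

    threading.Thread(target=loader, daemon=True).start()
    results = []
    while (buf := q.get()) is not None:
        results.append(compute(buf))  # "kernel" runs while the loader stages the next chunk
    return results

# toy usage: "transfer" doubles each element, "compute" sums a chunk
out = process([[1, 2], [3, 4]], lambda c: [x * 2 for x in c], sum)
```

The `maxsize=1` queue is what makes the overlap bounded: at most one chunk is staged ahead, mirroring a double-buffered pinned-memory region.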
    Sequential Recommendation Based on Multi-space Attribute Information Fusion
    WANG Zihong, SHAO Yingxia, HE Jiyuan, LIU Jinbao
    Computer Science    2024, 51 (3): 102-108.   DOI: 10.11896/jsjkx.230600078
    The goal of sequential recommendation is to model users' dynamic interests from their historical behaviors and hence to make recommendations related to those interests. Recently, attribute information has been shown to improve the performance of sequential recommendation. Many efforts fuse attribute information to this end and have achieved success, but two deficiencies remain. First, existing methods either do not explicitly model user preferences for attribute information or model only a single attribute-preference vector, which cannot fully express user preferences. Second, their attribute-fusion process does not consider the influence of personalized user information. Aiming at these deficiencies, this paper proposes sequential recommendation based on multi-space attribute information fusion (MAIF-SR), with a multi-space attribute information fusion framework that fuses attribute information sequences in different attribute information spaces and models user preferences for each attribute, fully expressing user preferences through multi-dimensional interests. A personalized attribute attention mechanism is designed to introduce personalized user information during fusion and enhance the personalized effect of the fused information. Experimental results on two public datasets and one industrial private dataset show that MAIF-SR is superior to other comparative sequential recommendation models based on attribute information fusion.
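The "personalized attribute attention" idea (attention weights over attribute embeddings conditioned on a user vector) can be sketched in a few lines of NumPy. This is a generic illustration, not the MAIF-SR architecture; the embedding sizes and names are hypothetical.

```python
import numpy as np

def personalized_attribute_attention(user_vec, attr_embs):
    """Fuse attribute embeddings with softmax weights conditioned on the user vector."""
    scores = attr_embs @ user_vec            # (n_attrs,) user-conditioned relevance
    w = np.exp(scores - scores.max())
    w /= w.sum()                             # softmax attention weights
    return w @ attr_embs                     # weighted fusion, shape (d,)

rng = np.random.default_rng(0)
user = rng.normal(size=4)                    # hypothetical user representation
attrs = rng.normal(size=(3, 4))              # e.g., category / brand / price embeddings
fused = personalized_attribute_attention(user, attrs)
```

In the paper's multi-space setting one such fusion would run per attribute space, yielding one preference vector per attribute type rather than a single shared one.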
    Review of Public Opinion Dynamics Models
    LIU Shuxian, XU Huan, WANG Wei, DENG Le
    Computer Science    2024, 51 (2): 15-26.   DOI: 10.11896/jsjkx.230100072
    Social networks provide a medium for information dissemination, leading to the rapid development of public opinion. Controlling the direction in which public opinion develops is one of the core issues of public opinion dynamics, and public opinion dynamics models mainly study how agents update their opinions in order to deduce the laws of opinion evolution. This paper classifies current public opinion dynamics models, analyzes their advantages, disadvantages, and applications in different fields, and summarizes future research directions for public opinion dynamics. It thereby helps readers understand the laws of opinion evolution and provides better guidance for governments and other institutions in steering the direction of public opinion.
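A concrete example of the "opinion update rule" such models study is the classic DeGroot model, where each agent repeatedly replaces its opinion with a weighted average of its neighbors' opinions. A minimal sketch (the weight matrix here is an arbitrary toy example):

```python
import numpy as np

def degroot_step(opinions, W):
    """One DeGroot update: each agent averages neighbors' opinions
    using row-stochastic trust weights W (each row sums to 1)."""
    return W @ opinions

W = np.array([[0.6, 0.4, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.5, 0.5]])
x = np.array([1.0, 0.0, -1.0])   # initial opinions of three agents
for _ in range(200):
    x = degroot_step(x, W)
# with a strongly connected, aperiodic trust graph, opinions reach consensus
```

Bounded-confidence models (e.g., Hegselmann-Krause) differ only in making W depend on the current opinions, which is what produces fragmentation instead of consensus.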
    Sparse Adversarial Examples Attacking on Video Captioning Model
    QIU Jiangxing, TANG Xueming, WANG Tianmei, WANG Chen, CUI Yongquan, LUO Ting
    Computer Science    2023, 50 (12): 330-336.   DOI: 10.11896/jsjkx.221100068
    Although multi-modal deep learning models such as image captioning models have been proved vulnerable to adversarial examples, adversarial susceptibility in video caption generation remains under-examined, for two main reasons. On the one hand, in contrast to image captioning systems, the input of a video captioning model is a stream of images rather than a single picture, so perturbing every frame of a video would be computationally enormous. On the other hand, compared with video recognition models, the output is not a single word but a more complex semantic description. To address these problems and study the robustness of video captioning models, this paper proposes a sparse adversarial attack method. A method based on the idea of saliency maps from image object recognition models is proposed to verify the contribution of different frames to the captioning model's output, and an L2-norm-based optimization objective function suited to video captioning models is designed. With a success rate of 96.4% for targeted attacks and a reduction of more than 45% in queries compared with randomly selecting video frames, evaluation on the MSR-VTT dataset demonstrates the effectiveness of the strategy and reveals the vulnerability of video captioning models.
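The sparsity idea (perturb only the frames that contribute most) can be illustrated with a cheap stand-in for the paper's saliency maps: score each frame by how much it changes from its predecessor and keep the top-k. This is only a hedged sketch of frame selection, not the paper's model-based saliency computation.

```python
import numpy as np

def select_key_frames(video, k):
    """Pick the k frames with the largest change from the previous frame
    (a cheap saliency proxy; the paper derives scores from the model itself)."""
    # per-frame total absolute change vs. the previous frame
    diffs = np.abs(np.diff(video, axis=0)).reshape(len(video) - 1, -1).sum(axis=1)
    scores = np.concatenate([[diffs[0]], diffs])   # frame i scored by change into it
    return np.sort(np.argsort(scores)[-k:])        # indices of the k "salient" frames

video = np.zeros((8, 4, 4))      # 8 toy frames of 4x4 pixels
video[3:] = 1.0                  # scene change at frame 3
video[6:] = 2.0                  # scene change at frame 6
frames = select_key_frames(video, 2)
```

An attack would then spend its perturbation budget only on the returned frame indices, which is where the >45% query reduction over random selection comes from.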
    ZUC High Performance Data Encryption Scheme Based on FPGA
    ZHANG Bolin, LI Bin, YAN Yunfei, WEI Yuanxin, ZHOU Qinglei
    Computer Science    2023, 50 (11): 374-382.   DOI: 10.11896/jsjkx.221100070
    The ZUC algorithm is a stream cipher independently developed by China and adopted by 3GPP LTE as a fourth-generation mobile communication encryption standard. To meet the big data era's demanding performance requirements for domestic cryptography, a high-performance data encryption scheme with the ZUC algorithm at its core is designed. The scheme includes two encryption cores with different structures, targeting the two application scenarios of short and long messages. Based on an FPGA platform, semi-pipelined and fully pipelined ZUC stream cipher circuits are designed using carry-lookahead (CLA) and carry-save (CSA) adders. With an improved ZUC encryption mode, combined with high-speed memory communication and multi-IV parallel encryption, the scheme greatly improves encryption and decryption efficiency, and the encryption algorithm can be configured through a control module at run time. Experimental results show that, compared with other schemes, the working frequencies of the two circuits increase by 40.8%~209.5% and 62.1%~445.4% respectively, and data throughput reaches 25.728 Gb/s and 46.08 Gb/s, meeting high-performance encryption scenarios such as edge devices and Internet of Vehicles data encryption.
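The properties the pipelined/multi-IV design exploits are generic to stream ciphers: keystream material depends only on (key, IV, counter), so blocks can be generated independently (hence in parallel or pipelined), and decryption is the same XOR as encryption. The sketch below is emphatically not ZUC (whose core is an LFSR plus a nonlinear function); it uses a SHA-256 counter construction purely to illustrate those properties.

```python
import hashlib

def keystream(key, iv, nblocks):
    """Toy keystream: each 32-byte block depends only on (key, iv, counter),
    so blocks are independent -- the property a pipelined FPGA design exploits."""
    for ctr in range(nblocks):
        yield hashlib.sha256(key + iv + ctr.to_bytes(4, "big")).digest()

def xor_encrypt(key, iv, data):
    """Stream-cipher encryption: ciphertext = plaintext XOR keystream."""
    ks = b"".join(keystream(key, iv, (len(data) + 31) // 32))
    return bytes(b ^ k for b, k in zip(data, ks))

msg = b"confidential payload"
ct = xor_encrypt(b"k" * 16, b"iv0", msg)
pt = xor_encrypt(b"k" * 16, b"iv0", ct)   # XOR stream cipher: decrypt = encrypt
```

Multi-IV parallel encryption, as in the paper, amounts to running several such independent keystreams (one IV per lane) against different message segments at once.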
    Tiny Person Detection for Intelligent Video Surveillance
    YANG Yi, SHEN Sheng, DOU Zhiyang, LI Yuan, HAN Zhenjun
    Computer Science    2023, 50 (9): 75-81.   DOI: 10.11896/jsjkx.230400204
    Person detection has significant practical implications for social governance and urban security, and surveillance data is an important data source for it. Tiny object detection, which focuses on objects smaller than 20 pixels in large-scale images, is a challenging task. One of the main challenges is the scale mismatch between the dataset used for pre-training/co-training the detectors, such as COCO, and the dataset used for fine-tuning them, such as TinyPerson, which degrades detector performance on tiny objects. To address this challenge, this paper proposes an optimization strategy called scale distribution searching (SDS) that matches the scales of different datasets for tiny object detection while balancing information gain and loss. A Gaussian model is used to model the scale distribution of targets in the dataset, and the optimal distribution parameters are found iteratively; the feature distribution and detector performance are compared to find the best scale distribution. Through the SDS strategy, mainstream object detection methods achieve better performance on TinyPerson, demonstrating the effectiveness of SDS in improving pre-training/co-training efficiency.
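The core of scale matching can be shown with a toy calculation: model each dataset's object scales with a Gaussian over log-scale and compute the resize factor that aligns the pre-training mean scale with the tiny-object mean scale. The numbers below are hypothetical, and SDS itself searches distribution parameters more carefully than this single-moment alignment.

```python
import numpy as np

def scale_shift(pretrain_scales, target_scales):
    """Fit a Gaussian to log-scales of each dataset and return the resize
    factor aligning the pre-training mean scale with the target mean scale."""
    mu_pre = np.log(pretrain_scales).mean()
    mu_tgt = np.log(target_scales).mean()
    return float(np.exp(mu_tgt - mu_pre))   # multiply pre-training images by this

coco_like = np.array([96.0, 128.0, 64.0, 112.0])   # hypothetical COCO-ish box sizes (px)
tiny_like = np.array([12.0, 16.0, 8.0, 14.0])      # hypothetical TinyPerson-like sizes (px)
factor = scale_shift(coco_like, tiny_like)          # < 1: shrink pre-training data
```

Matching the variance as well (not just the mean) is the kind of refinement the searched distribution parameters provide.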
    Edge Offloading Framework for D2D-MEC Networks Based on Deep Reinforcement Learning and Wireless Charging Technology
    ZHANG Naixin, CHEN Xiaorui, LI An, YANG Leyao, WU Huaming
    Computer Science    2023, 50 (8): 233-242.   DOI: 10.11896/jsjkx.220900181
    IoT devices hold a large amount of underutilized computing resources of exactly the kind mobile edge computing requires. An edge offloading framework based on device-to-device (D2D) communication and wireless charging technology can maximize the utilization of idle IoT devices' computing resources and improve user experience, and a D2D-MEC network model of IoT devices can be established on this basis. In this model, a device offloads multiple tasks to multiple edge devices according to the current environment information and the estimated device state, and wireless charging is applied to increase the transmission success rate and computation stability. A reinforcement learning method is used to solve the joint optimization and allocation problem, which aims to minimize computation delay, energy consumption, and task-dropping loss while maximizing the utilization of edge devices and the proportion of offloaded tasks. In addition, to adapt to larger state spaces and improve learning speed, an offloading scheme based on deep reinforcement learning is proposed. Based on this model, the optimal solution and the upper performance limit of the D2D-MEC system are derived mathematically. Simulation results show that the D2D-MEC offloading model and its offloading strategy have better all-around performance and can make full use of the computing resources of IoT devices.
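The reinforcement-learning view of offloading can be made concrete with a deliberately tiny stand-in: tabular Q-learning over a one-step decision (compute locally vs. offload), with reward equal to negative cost. The state names and cost numbers are invented for illustration; the paper uses deep RL over a far richer state.

```python
import random

def train_offload_policy(costs, episodes=2000, eps=0.1, lr=0.5):
    """Tabular Q-learning: state = task type, action = 0 (local) or 1 (offload),
    reward = -cost. A toy stand-in for the paper's deep RL offloading scheme."""
    random.seed(0)
    Q = {s: [0.0, 0.0] for s in costs}
    for _ in range(episodes):
        s = random.choice(list(costs))
        # epsilon-greedy action selection
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda x: Q[s][x])
        r = -costs[s][a]
        Q[s][a] += lr * (r - Q[s][a])   # one-step episode: no bootstrap term
    # greedy policy: offload exactly when offloading is the cheaper action
    return {s: max((0, 1), key=lambda a: Q[s][a]) for s in Q}

# costs[state] = (local_cost, offload_cost)
policy = train_offload_policy({"small": (1.0, 3.0), "large": (9.0, 2.0)})
```

The learned policy keeps small tasks local and offloads large ones, which is the qualitative behavior the joint-optimization objective in the abstract encodes via delay and energy terms.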
    Water Resources Governance Mode in Watersheds Oriented to “RNAO-Ecology” Hypernetwork Complex Structure
    SUO Liming, LI Jun
    Computer Science    2023, 50 (7): 355-367.   DOI: 10.11896/jsjkx.220900134
    For a long time, the integrity of watersheds combined with the fragmentation of authority has made water resources governance in China's watersheds costly. The transition to network governance has become a hot topic of watershed research in recent years and has reached broad consensus. Compared with the three traditional schemes proposed by Western scholars, the RNAO (Restricted Network Administration Organization) network structure, which focuses on coordination strategies, is more in line with the practical needs of localized water resources governance in watersheds, and a more general governance theory for RNAO is the direction of its theoretical development. This paper first sorts out the threefold progression of watershed water resources governance research through the "traditional-network-hypernetwork" stages. Secondly, it integrates RNAO with "social-ecological" system theory, proposes an "RNAO-ecology" scale-matched watershed hypernetwork governance mode, and analyzes in detail the complex interaction mechanism of the three-layer sub-networks of "organization network, behavior network, and ecological network", thus initially forming a general theoretical construction of RNAO. Finally, based on the two cases of the "united river chief system" and "water resources governance of the Heihe watershed", it analyzes the application of RNAO and the "RNAO-ecology" system in China's watershed governance practice and gives relevant policy suggestions and possible academic topics for the future network transformation of watershed governance.
    Many-core Optimization Method for the Calculation of Ab initio Polarizability
    LUO Haiwen, WU Yangjun, SHANG Honghui
    Computer Science    2023, 50 (6): 1-9.   DOI: 10.11896/jsjkx.220700162
    Density functional perturbation theory (DFPT), based on quantum mechanics, can be used to calculate a variety of physicochemical properties of molecules and materials and is now widely used in research on new materials. Meanwhile, heterogeneous many-core processor architectures are becoming the mainstream of supercomputing. Redesigning and optimizing DFPT programs for heterogeneous many-core processors to improve their computational efficiency is therefore of great importance for computing physicochemical properties and for their scientific applications. In this work, the computation of the first-order response density and the first-order response Hamiltonian matrix in DFPT is optimized for many-core architectures and verified on the new-generation Sunway processors. The optimization techniques include loop tiling, discrete memory access processing, and collaborative reduction: loop tiling divides tasks so that they can be executed by many cores in parallel; discrete memory access processing converts discrete accesses into more efficient contiguous accesses; collaborative reduction solves the write-conflict problem. Experimental results show that the optimized program performs 8.2 to 74.4 times better than the pre-optimization program on one core group and has good strong and weak scalability.
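Loop tiling, the first technique listed, can be shown generically: traverse a matrix in small blocks so that the working set of each block fits in fast local memory (on Sunway, the compute cores' scratchpad). The sketch below illustrates the transformation on a transpose; it is not the DFPT kernel itself.

```python
def tiled_transpose(a, tile=32):
    """Blocked transpose: visit the matrix in tile x tile blocks so each block's
    reads and writes stay within a small working set (the loop-tiling idea)."""
    n, m = len(a), len(a[0])
    out = [[0] * n for _ in range(m)]
    for ii in range(0, n, tile):              # outer loops walk over tiles
        for jj in range(0, m, tile):
            for i in range(ii, min(ii + tile, n)):   # inner loops stay inside one tile
                for j in range(jj, min(jj + tile, m)):
                    out[j][i] = a[i][j]
    return out

a = [[i * 4 + j for j in range(4)] for i in range(3)]   # 3x4 test matrix
t = tiled_transpose(a, tile=2)
```

On a many-core chip, each tile also becomes a natural unit of work to hand to one core, which is the task-division role the abstract describes.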
    Survey on Knowledge Transfer Method in Deep Reinforcement Learning
    ZHANG Qiyang, CHEN Xiliang, CAO Lei, LAI Jun, SHENG Lei
    Computer Science    2023, 50 (5): 201-216.   DOI: 10.11896/jsjkx.220400235
    Deep reinforcement learning is a hot topic in artificial intelligence research. As research deepens, shortcomings are gradually being exposed, such as low data utilization, weak generalization ability, difficult exploration, and lack of reasoning and representation abilities. These problems greatly restrict the application of deep reinforcement learning methods to practical problems, and knowledge transfer is a very effective way to address them. This survey discusses how knowledge transfer can accelerate agent training and cross-domain transfer from the perspective of deep reinforcement learning, analyzes the forms in which knowledge exists and the modes in which it acts in deep reinforcement learning, and classifies and summarizes knowledge transfer methods according to the basic elements of reinforcement learning. Finally, open problems and cutting-edge directions for knowledge transfer in deep reinforcement learning, in terms of algorithms, theory, and applications, are reported.
    Deep Learning-based Visual Multiple Object Tracking:A Review
    WU Han, NIE Jiahao, ZHANG Zhaowei, HE Zhiwei, GAO Mingyu
    Computer Science    2023, 50 (4): 77-87.   DOI: 10.11896/jsjkx.220300173
    Multiple object tracking (MOT) aims to predict the trajectories of all targets and maintain their identities across a given video sequence. In recent years, MOT has gained significant attention and become a hot topic in computer vision due to its huge potential in academic research and practical applications. Benefiting from advances in object detection and re-identification, current approaches mainly split the MOT task into three subtasks: object detection, re-identification feature extraction, and data association. This idea has achieved remarkable success. However, maintaining robust tracking remains challenging due to factors such as occlusion and interference from similar objects, so further research on and improvement of MOT algorithms are needed to meet the requirements of accurate, robust, real-time tracking in complex scenarios. Some reviews of MOT algorithms have been published, but the existing literature neither summarizes tracking approaches comprehensively nor covers the latest research. This paper first introduces the principles of MOT and the challenges in the tracking process. The latest research achievements are then summarized and analyzed: according to the tracking paradigm used to complete the three subtasks, algorithms are divided into separate detection and embedding, joint detection and embedding, and joint detection and tracking, and the main characteristics of each class are described. Afterward, the existing mainstream models are compared and analyzed on the MOT challenge datasets. Finally, future research directions are discussed in light of the advantages, disadvantages, and development trends of current algorithms.
    Survey of Medical Knowledge Graph Research and Application
    JIANG Chuanyu, HAN Xiangyu, YANG Wenrui, LYU Bohan, HUANG Xiaoou, XIE Xia, GU Yang
    Computer Science    2023, 50 (3): 83-93.   DOI: 10.11896/jsjkx.220700241
    As medical data are digitized, choosing the right technology for efficient processing and accurate analysis of medical data is a common problem faced by the medical field. Using knowledge graph technology, with its excellent association and reasoning capabilities, to process and analyze medical data can better enable applications such as wise information technology of medicine and aided diagnosis. The complete process of constructing a medical knowledge graph includes knowledge extraction, knowledge fusion, and knowledge reasoning; knowledge extraction can be subdivided into entity extraction, relationship extraction, and attribute extraction, while knowledge fusion mainly includes entity alignment and entity disambiguation. This paper first summarizes the construction technologies and practical applications of medical knowledge graphs, clarifying the development of the techniques at each construction step; on this basis, the relevant techniques are introduced and their advantages and limitations explained. It then introduces several medical knowledge graphs that have been successfully applied. Finally, based on the current state of knowledge graph technology and applications in the medical field, future research directions for both are given.
    Survey of Container Technology for High-performance Computing System
    CHEN Yiyang, WANG Xiaoning, LU Shasha, XIAO Haili
    Computer Science    2023, 50 (2): 353-363.   DOI: 10.11896/jsjkx.220100163
    Container technology has been widely used in the cloud computing industry, mainly for rapid migration and automated deployment of service software environments. With the deep integration of high-performance computing, big data, and artificial intelligence, the application software dependencies and configurations of high-performance computing systems are becoming increasingly complex, and the demand for user-defined software stacks in supercomputing centers is growing stronger. A variety of container implementations have therefore been developed for the application environments of high-performance computing systems to meet practical needs such as user-defined software stacks. This paper summarizes the development history of container technology, explains the technical principles of containers on the Linux platform, analyzes and evaluates container implementations for high-performance computing systems, and finally discusses future research directions for container technology in high-performance computing.
    Survey of Learned Index
    WANG Yitan, WANG Yishu, YUAN Ye
    Computer Science    2023, 50 (1): 1-8.   DOI: 10.11896/jsjkx.211000149
    Due to the explosive growth of data in the big data era, traditional index structures struggle to handle such huge and complex data. To solve this problem, the learned index has emerged and become one of the most popular research topics in databases. Learned indexes employ machine learning models for index construction: by training on the relationship between data and physical location, a model masters the distribution characteristics linking the two, improving on and optimizing traditional indexes. Extensive experiments show that learned indexes can adapt to large-scale datasets and provide better search performance with lower memory requirements than traditional indexes. This paper introduces the applications of learned indexes and reviews existing learned index models. According to data type, learned indexes are divided into two categories, one-dimensional and multi-dimensional, and the advantages, disadvantages, and supported searches of the models in each category are introduced and analyzed in detail. Finally, future research directions for learned indexes are discussed as references for related research.
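The core mechanism of a one-dimensional learned index can be sketched in a few lines: a model (here, a single linear fit) predicts a key's position in the sorted array, and a search bounded by the model's maximum training error corrects the prediction. This is a minimal illustration; real systems use staged or piecewise models.

```python
import bisect
import numpy as np

class LearnedIndex:
    """Minimal 1-D learned index: a linear model predicts the position of a key
    in a sorted array; a bounded local search fixes the model's residual error."""
    def __init__(self, keys):
        self.keys = np.asarray(keys)                 # must be sorted, unique
        pos = np.arange(len(keys))
        self.slope, self.intercept = np.polyfit(self.keys, pos, 1)
        pred = np.rint(self.slope * self.keys + self.intercept).astype(int)
        self.err = int(np.max(np.abs(pred - pos)))   # worst-case model error

    def lookup(self, key):
        guess = int(round(self.slope * key + self.intercept))
        lo = max(0, guess - self.err - 1)            # error-bounded search window
        hi = min(len(self.keys), guess + self.err + 2)
        i = lo + bisect.bisect_left(self.keys[lo:hi].tolist(), key)
        return i if i < len(self.keys) and self.keys[i] == key else -1

keys = sorted({int(x) for x in np.random.default_rng(2).integers(0, 10_000, 500)})
idx = LearnedIndex(keys)
```

The memory advantage over a B-tree comes from storing just `(slope, intercept, err)` instead of inner nodes; lookup cost depends on how well the data distribution fits the model.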
    Survey of Incentive Mechanism for Federated Learning
    LIANG Wen-ya, LIU Bo, LIN Wei-wei, YAN Yuan-chao
    Computer Science    2022, 49 (12): 46-52.   DOI: 10.11896/jsjkx.220500272
    Federated learning (FL) is driven by multi-party data participation: participants and a central server continuously exchange model parameters rather than uploading raw data directly, achieving data sharing together with privacy protection. In practical applications, the accuracy of the FL global model relies on the participation of multiple stable, high-quality clients, but the data quality of participating clients is imbalanced, which can leave a client in an unfair position in the training process or keep it from participating at all. How to motivate clients to participate in federated learning actively and reliably is therefore key to ensuring that FL is widely promoted and applied. This paper introduces the necessity of incentive mechanisms in FL and, according to the sub-problems of incentive mechanisms in the FL training process, divides existing research into incentive mechanisms based on contribution measurement, client selection, payment allocation, and multi-sub-problem optimization. It analyzes and compares existing incentive schemes, summarizes the challenges in the development of incentive mechanisms, and explores future research directions for FL incentive mechanisms.
    Cooperation and Confrontation in Crowd Intelligence
    ZHU Di-di, WU Chao
    Computer Science    2022, 49 (11A): 210900249-7.   DOI: 10.11896/jsjkx.210900249
    Crowd intelligence has rich connotations and denotations. Its algorithms include both early algorithms based on the characteristics of biological groups (particle swarm optimization, ant colony algorithms, etc.) and later large-scale crowd algorithms based on network interconnection (multi-agent systems, crowdsensing, federated learning, etc.). The core idea of these algorithms is cooperation or confrontation. Collaboration can combine the limited intelligence of individuals into the powerful intelligence of the group, but it has certain limitations, possibly leading to over-dependence between individuals and unfairness in the system. Confrontation can overcome these limitations; its basic idea is that individuals seek their maximum interests through games. Cooperation and confrontation are therefore both indispensable: promoting cooperation through confrontation and building a crowd intelligence ecology in which the two coexist is the inevitable development trend of crowd intelligence. This paper focuses on the cooperation and confrontation methods of crowd intelligence algorithms, expounds the classical crowd intelligence algorithms, and looks ahead to the next development directions of emerging crowd intelligence algorithms.
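Particle swarm optimization, cited here as a classic cooperation-based algorithm, shows the cooperative mechanism directly: each particle blends its own best position (cognitive term) with the swarm's best (social term). A minimal sketch with standard textbook parameters:

```python
import random

def pso(f, dim=2, n=20, iters=200, seed=0):
    """Minimal particle swarm optimization minimizing f over [-5, 5]^dim."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in X]                 # each particle's own best (cognition)
    gbest = min(pbest, key=f)[:]              # swarm's shared best (cooperation)
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (0.7 * V[i][d]                                   # inertia
                           + 1.5 * rng.random() * (pbest[i][d] - X[i][d])  # cognitive pull
                           + 1.5 * rng.random() * (gbest[d] - X[i][d]))    # social pull
                X[i][d] += V[i][d]
            if f(X[i]) < f(pbest[i]):
                pbest[i] = X[i][:]
                if f(X[i]) < f(gbest):
                    gbest = X[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = pso(sphere)
```

The shared `gbest` is exactly the cooperative coupling the survey contrasts with game-theoretic (confrontational) schemes, where agents would instead optimize individual payoffs.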
    Review of Mobile Air-Ground Crowdsensing
    CHENG Wen-hui, ZHANG Qian-yuan, CHENG Liang-hua, XIANG Chao-can, YANG Zhen-dong, SHEN Xin, ZHANG Nai-fan
    Computer Science    2022, 49 (11): 242-249.   DOI: 10.11896/jsjkx.220400264
    As an emerging sensing mode, mobile crowdsensing can realize low-cost, large-scale urban sensing by reusing the large number of existing mobile sensing resources in the air and on the ground. Jointly utilizing air and ground mobile sensing resources to realize air-ground cooperative mobile crowdsensing is thus of great significance for improving the utilization of mobile sensing resources and promoting the development of smart cities. To this end, this paper reviews recent research on air-ground cooperative mobile crowdsensing. It first introduces the background and development status of air-ground cooperative mobile crowdsensing, then analyzes existing research on mobile crowdsensing along the two dimensions of ground-based and air-based mobile devices and summarizes current problems. Finally, three important future research directions are proposed: cross-platform user information learning, cross-air-ground mobile device scheduling, and cross-task sensing resource allocation, providing a valuable reference for researchers.
    Distributed Privacy Protection Data Search Scheme
    LIU Ming-da, SHI Yi-juan, RAO Xiang, FAN Lei
    Computer Science    2022, 49 (10): 291-296.   DOI: 10.11896/jsjkx.210900233
    High-sensitivity data in the cloud create data islands in which data cannot be searched, discovered, or shared. Aiming at this problem, a distributed privacy-preserving data search scheme is proposed that realizes two-way confidentiality of data and search conditions in distributed scenarios and establishes a trusted search certificate. First, the data model and the scheme's protection objectives and application scenarios are defined. Next, the design framework and protocol flow are proposed, focusing on the overall flow of three parts: a blockchain-based trusted data interaction channel, a trusted key-sharing module, and a ciphertext search engine. Then tantivy-SGX, a full-text search engine over ciphertext based on a trusted execution environment, is proposed, and its principle and implementation are analyzed in detail. Finally, the overall process and core methods are implemented and verified. Experiments show that the scheme is efficient and feasible and can effectively enhance the security of data discovery and search in distributed environments.
    Overview of Natural Language Video Localization
    NIE Xiu-shan, PAN Jia-nan, TAN Zhi-fang, LIU Xin-fang, GUO Jie, YIN Yi-long
    Computer Science    2022, 49 (9): 111-122.   DOI: 10.11896/jsjkx.220500130
    Natural language video localization (NLVL), which aims to locate in a video the moment that semantically corresponds to a text query, is a novel and challenging task. Unlike temporal action localization, NLVL is more flexible, being unrestricted by predefined action categories; it is also more challenging, since it requires aligning semantic information across the visual and textual modalities, and obtaining the final timestamp from the alignment relationship is itself difficult. This paper first presents the NLVL pipeline and then categorizes methods into supervised and weakly supervised according to whether supervision information is available, analyzing the strengths and weaknesses of each kind of method. Subsequently, the datasets, evaluation protocols, and a general performance analysis are presented. Finally, possible future perspectives are obtained by summarizing the existing methods.
    Accelerating Persistent Memory-based Indices Based on Hotspot Data
    LIU Gao-cong, LUO Yong-ping, JIN Pei-quan
    Computer Science    2022, 49 (8): 26-32.   DOI: 10.11896/jsjkx.210700176
    Non-volatile memory (NVM), also known as persistent memory (PM), features byte addressability, durability, high storage density, and low latency. Although NVM's latency is much lower than that of solid-state drives, it is higher than DRAM's, and NVM also suffers from unbalanced read/write performance and limited write endurance. NVM therefore cannot completely replace DRAM at present; a more reasonable approach is to build a hybrid DRAM+NVM memory architecture. Observing that many data accesses in database applications are skewed, this paper focuses on the hybrid memory architecture composed of NVM and DRAM and proposes a hotspot-data-based speedup method for persistent memory indices. In particular, exploiting the low latency of DRAM and the durability and high storage density of NVM, it adds a DRAM-based hotspot-data cache to persistent memory indices and presents a query-adaptive indexing method that automatically adjusts the cache as the hotspot data change. The proposed method is applied to several persistent memory indices, including wBtree, FPTree, and Fast&Fair, in comparative experiments. The results show that when hotspot accesses account for 80% of all accesses, the proposed method accelerates the query performance of the three indices by 52%, 33%, and 37%, respectively.
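The hotspot-caching idea can be sketched with an LRU cache standing in for the DRAM layer and a plain dict standing in for the slower NVM-resident index. The 80/20 skew in the toy workload mirrors the access pattern the paper targets; the class and names are illustrative, not the paper's design.

```python
from collections import OrderedDict

class HotspotCachedIndex:
    """A DRAM-style LRU cache in front of a slower 'persistent memory' index
    (here just a dict), illustrating hotspot-data caching for PM indices."""
    def __init__(self, pm_index, capacity):
        self.pm = pm_index
        self.cap = capacity
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def get(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)       # refresh recency: key stays "hot"
            self.hits += 1
            return self.cache[key]
        self.misses += 1
        val = self.pm[key]                    # slow NVM-side lookup
        self.cache[key] = val
        if len(self.cache) > self.cap:
            self.cache.popitem(last=False)    # evict the coldest entry
        return val

pm = {k: k * 10 for k in range(100)}
idx = HotspotCachedIndex(pm, capacity=8)
# skewed workload: 80% of lookups go to 5 hot keys
for i in range(1000):
    idx.get(i % 5 if i % 10 < 8 else i % 100)
```

LRU recency is one simple form of the "query-adaptive" adjustment in the abstract: when the hot set shifts, the cache contents follow automatically.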
    Reference | Related Articles | Metrics
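    The core idea, a fast DRAM cache of hot keys in front of a slower persistent index, can be sketched as follows. The latency constants, the `PMIndex` stand-in and the plain LRU policy are illustrative assumptions, not the paper's wBtree/FPTree implementations or its query-adaptive adjustment method:

```python
import random
from collections import OrderedDict

# Hypothetical latency units: DRAM reads are cheaper than NVM reads.
NVM_READ, DRAM_READ = 300, 100

class PMIndex:
    """Stand-in for a persistent-memory index such as wBtree or FPTree."""
    def __init__(self, items):
        self.store = dict(items)

    def get(self, key):
        return self.store.get(key)

class HotspotCachedIndex:
    """DRAM-resident cache for hot keys in front of a PM index.

    A plain LRU policy stands in for the paper's query-adaptive
    cache-adjustment method.
    """
    def __init__(self, pm_index, capacity):
        self.pm = pm_index
        self.capacity = capacity
        self.cache = OrderedDict()   # models the DRAM hotspot cache
        self.cost = 0                # accumulated simulated latency

    def get(self, key):
        if key in self.cache:                 # hit: DRAM latency only
            self.cache.move_to_end(key)
            self.cost += DRAM_READ
            return self.cache[key]
        self.cost += NVM_READ                 # miss: fall through to NVM
        value = self.pm.get(key)
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict the coldest entry
        return value

# Skewed workload: 80% of lookups go to 10% of the keys.
random.seed(7)
pm = PMIndex((k, k * k) for k in range(1000))
idx = HotspotCachedIndex(pm, capacity=100)
hot = range(100)
for _ in range(10_000):
    key = random.choice(hot) if random.random() < 0.8 else random.randrange(1000)
    idx.get(key)

baseline = NVM_READ * 10_000   # every lookup paying NVM latency
print(f"simulated cost with cache: {idx.cost} vs. uncached: {baseline}")
```

Under the 80/20 skew the simulated cost falls well below the uncached baseline, which mirrors the observation the paper's method builds on.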
    Survey on Action Quality Assessment Methods in Video Understanding
    ZHANG Hong-bo, DONG Li-jia, PAN Yu-biao, HSIAO Tsung-chih, ZHANG Hui-zhen, DU Ji-xiang
    Computer Science    2022, 49 (7): 79-88.   DOI: 10.11896/jsjkx.210600028
    Action quality assessment aims to evaluate the quality of an action performed by a person in a video, for example by computing a quality score, assigning a level, or comparing the performances of different people. It is an important direction in video understanding and computer vision research. This paper summarizes the main methods for action quality assessment, including quality-score prediction methods and level classification and ranking methods, analyzes the performance of these methods on public datasets, and finally discusses the challenges for future research.
    PPO Based Task Offloading Scheme in Aerial Reconfigurable Intelligent Surface-assisted Edge Computing
    XIE Wan-cheng, LI Bin, DAI Yue-yue
    Computer Science    2022, 49 (6): 3-11.   DOI: 10.11896/jsjkx.220100249
    To compensate for the performance loss caused by obstacle blockage in mobile edge computing (MEC) systems in the 6G-enabled intelligent Internet of Things, this paper proposes a partial task offloading scheme assisted by an aerial reconfigurable intelligent surface (RIS). First, we jointly design the RIS phase-shift vector, the proportion of offloaded tasks, the time-slot allocation, the users' transmit power and the UAV position, formulating a non-convex problem that minimizes the total energy consumption of the users. Then, alternating optimization (AO) is used to decouple the original non-convex problem into four subproblems: the time-slot allocation, solved with the proximal policy optimization (PPO) method from deep reinforcement learning (DRL); the RIS phase-shift design; the convex optimization of transmit power and offloaded task amount; and the UAV altitude optimization. Simulation results show that the proposed PPO model can be trained quickly, and that the total energy consumption of the users is reduced by about 23% and 5.3% compared with the full-offloading strategy and the fixed-UAV-height strategy, respectively.
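    The alternating-optimization structure behind the scheme can be illustrated on a toy convex objective. The objective, the two blocks and their closed-form updates are invented for illustration; the paper's actual blocks are RIS phases, power, offloading amount and UAV altitude, with PPO handling the time-slot block:

```python
# Toy coupled objective: f(x, y) = (x - 1)^2 + (y + 2)^2 + 0.5*x*y.
# AO fixes one block and minimizes exactly over the other, alternating
# until the objective stops improving.

def f(x, y):
    return (x - 1) ** 2 + (y + 2) ** 2 + 0.5 * x * y

x, y = 0.0, 0.0
history = [f(x, y)]
for _ in range(50):
    x = 1.0 - 0.25 * y    # argmin over x:  2(x - 1) + 0.5y = 0
    y = -2.0 - 0.25 * x   # argmin over y:  2(y + 2) + 0.5x = 0
    history.append(f(x, y))

print(f"converged to x = {x:.4f}, y = {y:.4f}, f = {history[-1]:.4f}")
```

Each exact block update can never increase the objective, so the recorded history is monotonically non-increasing; for this convex toy problem the sweeps converge to the joint minimizer (x, y) = (1.6, -2.4).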
    Android Malware Detection Method Based on Heterogeneous Model Fusion
    YAO Ye, ZHU Yi-an, QIAN Liang, JIA Yao, ZHANG Li-xiang, LIU Rui-liang
    Computer Science    2022, 49 (6A): 508-515.   DOI: 10.11896/jsjkx.210700103
    To address the limited detection accuracy of a single classification model, this paper proposes an Android malware detection method based on heterogeneous model fusion. Firstly, mixed feature information of the malware is identified and collected, and two ensemble learners are constructed: a random forest based on CART decision trees and an AdaBoost ensemble based on MLPs. The two classifiers are then fused with the Blending algorithm to obtain a heterogeneous fused classifier, on which malware detection for mobile terminals is implemented. Experimental results show that the proposed method effectively overcomes the insufficient accuracy of a single classification model.
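    The Blending step can be sketched in a minimal, self-contained form. The toy dataset, the decision stumps (standing in for the paper's CART random forest and MLP-based AdaBoost) and the accuracy-weighted vote (a simplified stand-in for a trained meta-classifier) are all assumptions for illustration:

```python
import random

random.seed(42)

def make_data(n):
    """Toy binary dataset: feature 0 is informative, feature 1 is noisy."""
    rows = []
    for _ in range(n):
        y = random.randint(0, 1)
        rows.append(((y + random.gauss(0, 0.6), y + random.gauss(0, 1.5)), y))
    return rows

train, holdout, test = make_data(200), make_data(100), make_data(100)

def fit_stump(rows, feat):
    """One-feature threshold classifier: the stand-in base learner."""
    best_t, best_acc = 0.0, 0.0
    for t in (x[feat] for x, _ in rows):
        acc = sum((x[feat] > t) == bool(y) for x, y in rows) / len(rows)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return lambda x, t=best_t, f=feat: 1 if x[f] > t else 0

base = [fit_stump(train, 0), fit_stump(train, 1)]

# Blending: score each base learner on a held-out split, then fuse by
# accuracy-weighted vote instead of feeding the base predictions as
# features to a trained meta-model.
weights = [sum(m(x) == y for x, y in holdout) / len(holdout) for m in base]

def fused(x):
    score = sum(w * (2 * m(x) - 1) for w, m in zip(weights, base))
    return 1 if score > 0 else 0

accuracy = sum(fused(x) == y for x, y in test) / len(test)
print(f"fused test accuracy: {accuracy:.2f}")
```

The design point is that the held-out split, untouched by the base learners, is what lets the fusion stage judge and combine them without overfitting to their training errors.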
    Review of Privacy-preserving Mechanisms in Crowdsensing
    LI Li, HE Xin, HAN Zhi-jie
    Computer Science    2022, 49 (5): 303-310.   DOI: 10.11896/jsjkx.210400077
    In recent years, the rapid popularization of intelligent terminals has greatly promoted the development of the crowdsensing service paradigm, which integrates data collection, analysis and processing. As a necessary basis for ensuring the safe operation of services and encouraging users to participate in sensing, privacy preservation has become the primary issue to be solved. This paper presents the state of the art in privacy-preserving mechanisms for crowdsensing services. After describing the main components of crowdsensing, it discusses the definition and metrics of privacy preservation from the perspective of crowdsensing's whole life cycle. The privacy-preserving mechanisms designed in the literature are analyzed and discussed according to the different stages of this life cycle, and the experimental datasets used in the literature are given. Finally, future research challenges are proposed based on the development of crowdsensing and global regulatory requirements for privacy preservation.
    Develop Social Computing and Social Intelligence Through Cross-disciplinary Fusion
    MENG Xiao-feng, YU Yan
    Computer Science    2022, 49 (4): 3-8.   DOI: 10.11896/jsjkx.yg20220402
    The era of digital intelligence offers new opportunities for the development of social computing and social intelligence, and cross-disciplinary fusion shall be a critical approach for their deep development. This paper elaborates the connotation and denotation of social computing, discusses the paradigm shift of social computing research, and reviews the general development of social computing and social intelligence. Next, it looks forward to social computing and social intelligence in the era of digital intelligence, and proposes three pillars for constructing a social intelligence system based on the new infrastructure: the construction of large-scale high-velocity data intelligence, the integration of multi-scale flexible spatial intelligence, and the formation of complex adaptive social intelligence. There is a level-by-level progression from data intelligence to social intelligence, in which data, computing and society are entangled. As such, computing science, data science, spatial science, complexity science and social science are required to interact from both theoretical and methodological perspectives. With the rapid update of digital-intelligent technologies and their penetration into the whole social-economic system, social computing and social intelligence are bound to achieve breakthroughs and deep development through interdisciplinary cross-integration.
    GSO: A GNN-based Deep Learning Computation Graph Substitutions Optimization Framework
    MIAO Xu-peng, ZHOU Yue, SHAO Ying-xia, CUI Bin
    Computer Science    2022, 49 (3): 86-91.   DOI: 10.11896/jsjkx.210700199
    Deep learning has achieved great success in various practical applications, and how to improve model execution efficiency is one of the important research issues in this field. Existing deep learning frameworks usually model deep learning computation as computational graphs, optimize the graphs through subgraph substitution rules designed by experts, and mainly use heuristic algorithms to search for substitution sequences. Their shortcomings are threefold: 1) the existing subgraph substitution rules yield a large search space, and the heuristic algorithms are inefficient; 2) these algorithms do not scale to large computation graphs; 3) they cannot reuse historical optimization results. To solve these problems, we propose GSO, a graph neural network-based deep learning computation graph optimization framework. We recast the graph substitution optimization problem as a subgraph matching problem and, based on operator features and the topology of the computation graph, use a graph neural network to predict the feasibility and positions of subgraph matches. The framework is implemented in Python and is compatible with mainstream deep learning systems. Experimental results show that: 1) compared with the full set of graph substitution rules, the proposed rules reduce the search space by up to 92%; 2) compared with existing heuristic algorithms, GSO completes the subgraph replacement process up to 2 times faster, and the optimized computation graph runs up to 34% faster than the original graph.
    Survey on Video Super-resolution Based on Deep Learning
    LENG Jia-xu, WANG Jia, MO Meng-jing-cheng, CHEN Tai-yue, GAO Xin-bo
    Computer Science    2022, 49 (2): 123-133.   DOI: 10.11896/jsjkx.211000007
    Video super-resolution (VSR) aims to reconstruct a high-resolution video from its corresponding low-resolution version. Recently, VSR has made great progress driven by deep learning. To further promote VSR, this survey makes a comprehensive summary of VSR and provides a taxonomy, analysis and comparison of existing algorithms. Firstly, since the framework is crucial for VSR, we group VSR approaches into two categories, iterative-network-based and recurrent-network-based approaches, and compare and analyze the advantages and disadvantages of the different networks. Secondly, we comprehensively introduce the VSR datasets, summarize existing algorithms, and compare these algorithms on several benchmark datasets. Finally, the key challenges and applications of VSR methods are analyzed and prospected.
    New Cryptographic Primitive: Definition, Model and Construction of Ratcheted Key Exchange
    FENG Deng-guo
    Computer Science    2022, 49 (1): 1-6.   DOI: 10.11896/jsjkx.yg20220101
    In traditional applications of cryptography, people assume that the endpoints are secure and that the adversary sits on the communication channel. However, the prevalence of malware and system vulnerabilities makes endpoint compromise a serious and immediate threat: memory contents may be corrupted by viruses, randomness generators may be subverted, and so on. Worse, protocol sessions usually have long lifetimes, so session-related secrets must be stored for a long time. In this setting it becomes essential to design high-strength security protocols even when memory contents and intermediate values of computation (including randomness) can be exposed. Ratcheted key exchange is a basic tool for solving this problem. This paper overviews the definition, model and constructions of ratcheted key exchange, including unidirectional, sesquidirectional and bidirectional ratcheted key exchange, and looks ahead to its future development.
    Survey on Retrieval-based Chatbots
    WU Yu, LI Zhou-jun
    Computer Science    2021, 48 (12): 278-285.   DOI: 10.11896/jsjkx.210900250
    With the rapid progress of natural language processing techniques and the massive conversational data accessible on the Internet, non-task-oriented dialogue systems, also referred to as chatbots, have achieved great success and drawn attention from both academia and industry. There are currently two lines of chatbot research: retrieval-based chatbots and generation-based chatbots. Owing to their fluent responses and low latency, retrieval-based chatbots are the common choice in practice. This paper first briefly introduces the research background, basic structure and component modules of retrieval-based chatbots, and then illustrates the constraints of the response selection module and the related datasets in detail. Subsequently, we summarize recent popular techniques for the response selection problem, including statistical methods, representation-based neural network methods, interaction-based neural network methods, and pre-training-based methods. Finally, we pose the challenges of chatbots and outline promising directions for future work.
    Theoretical Research and Efficient Algorithm of Container Terminal Quay Crane Optimal Scheduling
    GAO Xi, SUN Wei-wei
    Computer Science    2021, 48 (11A): 22-29.   DOI: 10.11896/jsjkx.201200167
    The quay crane scheduling problem is one of the most important scheduling problems in container terminals. Existing research results cannot compute optimal schedules for large-scale instances within feasible time, so existing quay crane scheduling algorithms generally adopt heuristic strategies to ensure that a schedule can be computed in feasible time. In this paper, firstly, the correctness of the lower bound on completion time is proved theoretically, an optimal schedule construction method is designed, and the theoretical framework of the quay crane scheduling problem is completed. Secondly, based on this theoretical work, an algorithm with linear time complexity is designed to find the optimal schedule. Finally, experiments show that the proposed method is significantly better than existing methods in both solution quality and efficiency.
    Research Progress on Blockchain-based Cloud Storage Security Mechanism
    XU Kun, FU Yin-jin, CHEN Wei-wei, ZHANG Ya-nan
    Computer Science    2021, 48 (11): 102-115.   DOI: 10.11896/jsjkx.210600015
    Cloud storage enables users to obtain cheap online storage services on demand through network connections anytime and anywhere. However, because cloud service providers, third-party institutions and users cannot be fully trusted, and malicious attacks are inevitable, cloud storage suffers from many security vulnerabilities. Blockchain has the potential to build a trusted platform thanks to its decentralization, persistence, anonymity and auditability, so research on blockchain-based cloud storage security mechanisms has become a trend. On this basis, the security architecture of cloud storage systems and the security of blockchain technology are first outlined, then the literature is reviewed and comparatively analyzed from four aspects: access control, integrity verification, data deduplication and data provenance. Finally, the technical challenges of blockchain-based cloud storage security mechanisms are analyzed, summarized and prospected.
    Microservices User Requests Allocation Strategy Based on Improved Multi-objective Evolutionary Algorithms
    ZHU Han-qing, MA Wu-bin, ZHOU Hao-hao, WU Ya-hui, HUANG Hong-bin
    Computer Science    2021, 48 (10): 343-350.   DOI: 10.11896/jsjkx.201100009
    How to allocate concurrent user requests in a system based on a microservices architecture so as to optimize objectives such as time, cost and load balance is one of the important issues that microservices-based application systems need to address. Existing allocation strategies based on fixed rules focus only on load balancing and can hardly balance multiple objectives. A microservices user request allocation model with three objectives, namely total request processing time, load-balancing rate and total communication transmission distance, is proposed to study the allocation of user requests among multiple microservice instances deployed in different resource centers. Multi-objective evolutionary algorithms with improved initial-solution generation, crossover and mutation operators are used to solve the problem. Extensive experiments on datasets of different scales show that, compared with commonly used multi-objective evolutionary algorithms and traditional fixed-rule methods, the proposed method better balances the multiple objectives and has better solving performance.
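    The "balance between multiple objectives" rests on Pareto dominance, the comparison multi-objective evolutionary algorithms are built on. A minimal sketch over the three objectives named in the abstract; the allocation scores are invented numbers, not the paper's data:

```python
def dominates(a, b):
    """a dominates b: no worse in every objective and strictly better in
    at least one (all three objectives are minimized here)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep the non-dominated candidates; evolutionary algorithms such as
    NSGA-II repeatedly apply sorts of this kind to rank a population."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# (total processing time, load-balancing rate, total transmission distance)
allocations = [
    (10.0, 0.30, 120.0),
    (12.0, 0.20, 150.0),
    (11.0, 0.35, 130.0),   # dominated by the first allocation
    (9.0, 0.40, 200.0),
]
front = pareto_front(allocations)
print(front)
```

No single allocation on the front is "best": each trades one objective against another, which is why a final schedule must be picked from the front by a separate preference.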
    AI Governance Oriented Legal to Technology Bridging Framework for Cross-modal Privacy Protection
    LEI Yu-xiao, DUAN Yu-cong
    Computer Science    2021, 48 (9): 9-20.   DOI: 10.11896/jsjkx.201000011
    With the popularity of virtual communities among network users, virtual community groups have become small societies from which user-related privacy resources can be extracted through the "virtual traces" left by users' browsing and the user-generated content they publish. Privacy resources can be classified by their characteristics into data resources, information resources and knowledge resources, which constitute the data, information, knowledge and wisdom graph (DIKW graph). Privacy resources in virtual communities go through four circulation processes, namely sensing, storage, transfer and processing, which are carried out individually or cooperatively by three participants: the user, the AI system and the visitor. The right to privacy includes the right to know, the right to participate, the right to be forgotten and the right to supervise. By clarifying the scope of the privacy rights of the three participants in the four circulation processes, and combining this with the protection of privacy values, an anonymity protection mechanism, a risk assessment mechanism and a supervision mechanism are designed to build an AI governance legal framework for privacy protection in virtual communities.
    Data Science Platform:Features,Technologies and Trends
    CHAO Le-men, WANG Rui
    Computer Science    2021, 48 (8): 1-12.   DOI: 10.11896/jsjkx.210600033
    The concept and types of the data science platform are proposed based upon in-depth studies of more than 35 data science platforms from the annual Magic Quadrant for Data Science Platforms reports since 2015. The main scientific issues in academic research on data science platforms involve their design, their scalability, their research and development on top of data lakes, their support for team cooperation, their openness strategies, and their engineering methodology. The main features of data science platforms include modular development and integration capability, DevOps, and an emphasis on scalability, user experience, citizen data scientists, and human-machine collaboration scenarios. The key technologies for realizing a data science platform are machine learning, stream processing, tidy data, containerization and data visualization. The future development trends of data science platforms are mainly reflected in integration with artificial intelligence, support for open-source technology, emphasis on citizen data scientists, integration of data governance, introduction of data lakes, exploration of advanced analysis and applications, transformation toward the whole data science pipeline, and diversification of application fields. The research and development of data science platforms should follow the design principles of centering on activating data value, human-in-the-loop, DevOps, balancing usability and explainability, cultivating a data science product ecosystem, emphasizing user experience and ease of use, and integrating with other business systems. At present, the research and development of data science platforms needs theoretical breakthroughs in data bias and fairness, robustness and stability, privacy protection, causal analysis, and trusted/responsible data science platforms.
    Survey on Artificial Intelligence Model Watermarking
    XIE Chen-qi, ZHANG Bao-wen, YI Ping
    Computer Science    2021, 48 (7): 9-16.   DOI: 10.11896/jsjkx.201200204
    In recent years, with the rapid development of artificial intelligence, AI models have been applied to speech, image and other fields with remarkable results. However, trained AI models are very easy to copy and spread, so a series of algorithms and techniques for model copyright protection have emerged, one of which is model watermarking. Once a model is stolen, its copyright can be proved through verification of the watermark, safeguarding the owner's intellectual property. Model watermarking has become a research hotspot in recent years, but no unified framework has yet formed. For a better understanding, this paper summarizes the current research on model watermarking, discusses the mainstream model watermarking algorithms, analyzes the research progress in this direction, reproduces and compares several typical algorithms, and finally puts forward suggestions for future research.
    Geographic Local Differential Privacy in Crowdsensing:Current States and Future Opportunities
    WANG Le-ye
    Computer Science    2021, 48 (6): 301-305.   DOI: 10.11896/jsjkx.201200223
    Geographic privacy protection is one of the key design issues in crowdsensing. Traditional protection mechanisms need to make assumptions about adversaries' prior knowledge to ensure the protection effect. Recently, a breakthrough from the privacy research community, local differential privacy (LDP), has been introduced into crowdsensing for location protection; it provides theoretically guaranteed protection regardless of adversaries' prior knowledge and without requiring trusted third parties. This paper gives a concise review of the works applying this new privacy-preserving technique in crowdsensing. For the diverse existing Geo-LDP (geographic LDP) mechanisms serving different crowdsensing tasks, it analyzes their characteristics, extracts common design considerations in practice, and points out potential research opportunities for future study.
    Terminology Recommendation and Requirement Classification Method for Safety-critical Software
    YANG Zhi-bin, YANG Yong-qiang, YUAN Sheng-hao, ZHOU Yong, XUE Lei, CHENG Gao-hui
    Computer Science    2021, 48 (5): 32-44.   DOI: 10.11896/jsjkx.210100105
    Most of the knowledge in the requirements of safety-critical software has to be extracted manually, which is time-consuming and laborious. Recently, artificial intelligence technology has gradually been used in the design and development of safety-critical software to reduce engineers' workload and shorten the software development life cycle. This paper proposes a terminology recommendation and requirement classification method for safety-critical software. Firstly, the terminology recommendation method extracts candidate terms based on part-of-speech rules and dependency rules and clusters them through term-similarity calculation; the clustering results are recommended to engineers. Secondly, the requirement classification method automatically classifies safety-critical software requirements as functional, safety, reliability, etc., based on feature extraction. Finally, the prototype tool TRRC4SCSTool is implemented in the AADL open-source modeling environment OSATE, and experimental analysis on a dataset collected from industrial requirements and safety certification standards shows the effectiveness of the method.
    Survey of Constrained Evolutionary Algorithms and Their Applications
    LI Li, LI Guang-peng, CHANG Liang, GU Tian-long
    Computer Science    2021, 48 (4): 1-13.   DOI: 10.11896/jsjkx.200600151
    Constrained optimization problems exist widely in scientific research and engineering practice, and the corresponding constrained evolutionary algorithms have become an important research direction in the field of evolutionary computation. The essential problem for a constrained evolutionary algorithm is how to effectively use the information of both infeasible and feasible solutions and balance the objective function against the constraints to make the algorithm more efficient. This paper first defines the constrained optimization problem. It then analyzes the current mainstream constrained evolutionary algorithms in detail and, based on their constraint-handling mechanisms, divides them into methods separating constraints and objectives, penalty function methods, multi-objective optimization methods, hybrid methods, and so on, analyzing and summarizing each comprehensively. Next, it points out urgent open problems and research directions. Finally, applications of constrained evolutionary algorithms in engineering optimization, electronic and communication engineering, mechanical design, environmental resource allocation, scientific research and management are introduced.
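    Among the surveyed constraint-handling mechanisms, the penalty-function method is the easiest to sketch: the constrained problem is folded into an unconstrained one whose penalty weight grows across rounds. The 1-D toy problem, step sizes and schedule below are assumptions for illustration, not any specific surveyed algorithm:

```python
# minimize f(x) = (x - 3)^2   subject to   g(x) = x - 1 <= 0.
# Penalty method: minimize F(x) = f(x) + mu * max(0, g(x))^2 for
# increasing mu; the inner solver is plain gradient descent.

def g(x):
    return x - 1.0

def minimize_penalized(mu, x0, steps=200):
    x, lr = x0, 1.0 / (2.0 + 2.0 * mu)   # safe step for this quadratic
    for _ in range(steps):
        grad = 2.0 * (x - 3.0)           # gradient of f
        if g(x) > 0:
            grad += 2.0 * mu * g(x)      # gradient of the penalty term
        x -= lr * grad
    return x

x = 0.0
for mu in (1.0, 10.0, 100.0, 1000.0):    # tighten the penalty each round
    x = minimize_penalized(mu, x)        # warm-start from the last round
print(f"solution ~ {x:.4f} (true constrained optimum is x = 1)")
```

Each round's minimizer is (3 + mu) / (1 + mu), so as mu grows the iterate is pushed from the unconstrained optimum x = 3 toward the constraint boundary x = 1, which is exactly the trade-off between objective and constraints that the survey discusses.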
    Review of Sign Language Recognition, Translation and Generation
    GUO Dan, TANG Shen-geng, HONG Ri-chang, WANG Meng
    Computer Science    2021, 48 (3): 60-70.   DOI: 10.11896/jsjkx.210100227
    Sign language research is a typical cross-disciplinary topic involving computer vision, natural language processing, cross-media computing and human-computer interaction. It mainly includes isolated sign language recognition, continuous sign language translation and sign language video generation. Sign language recognition and translation aim to convert sign language videos into textual words or sentences, while sign language generation synthesizes sign videos from spoken or textual sentences; in other words, translation and generation are inverse processes. This paper reviews the latest progress in sign language research, introduces its background and challenges, and surveys typical methods and cutting-edge research on the recognition, translation and generation tasks. Finally, in light of the problems in current methods, future research directions for sign language are discussed.
    Knowledge Graph Construction Techniques:Taxonomy,Survey and Future Directions
    HANG Ting-ting, FENG Jun, LU Jia-min
    Computer Science    2021, 48 (2): 175-189.   DOI: 10.11896/jsjkx.200700010
    Since Google proposed the concept of the knowledge graph in 2012, it has gradually become a research hotspot in the field of artificial intelligence and has played a role in applications such as information retrieval, question answering and decision analysis. While the knowledge graph shows its potential in various fields, no mature knowledge graph construction platform yet exists, so research on knowledge graph construction systems is essential to meet the application needs of different industries. This paper focuses on the construction of the knowledge graph. Firstly, it introduces the current mainstream general and domain knowledge graphs and describes the differences between the two in the construction process. Then it discusses, by type, the problems and challenges in knowledge graph construction. To address these issues and challenges, it describes solutions and strategies at the five levels of the construction process: knowledge extraction, knowledge representation, knowledge fusion, knowledge reasoning and knowledge storage. Finally, it discusses possible directions for future research on the knowledge graph and its applications.