
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
CODEN JKIEBK


-
Survey of Interprocedural Flow-sensitive Pointer Analysis Technology
帅东昕, 葛丽丽, 谢金言, 张迎周, 薛渝川, 杨嘉毅, 密杰, 卢跃. 过程间流敏感的指针分析技术研究[J]. 计算机科学, 2023, 50(12): 1-13.
SHUAI Dongxin, GE Lili, XIE Jinyan, ZHANG Yingzhou, XUE Yuchuan, YANG Jiayi, MI Jie, LU Yue. Survey of Interprocedural Flow-sensitive Pointer Analysis Technology[J]. Computer Science, 2023, 50(12): 1-13.
- Computer Science. 2023, 50 (12): 1-13. doi:10.11896/jsjkx.221000195
Abstract
Pointer analysis is a fundamental static program analysis technique and has long been one of the research hotspots in software security. It plays an important role in software defect detection, malware analysis, program verification, compiler optimization, and other application scenarios, where its precision is crucial. Flow-sensitive analysis and interprocedural analysis are the two most effective techniques for improving the precision of pointer analysis. This paper surveys existing techniques for improving the precision of interprocedural flow-sensitive pointer analysis, classifying them by the kind of information they eliminate. One category eliminates spurious information during the analysis, preventing points-to information from propagating along infeasible return paths or spurious call relations. The other eliminates conservative points-to relations, determining the unique location assigned to a pointer at each program point rather than computing all the locations the pointer may point to. On this basis, the paper compares the interprocedural flow-sensitive pointer analysis techniques in detail and outlines future research directions for pointer analysis.
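The flow-sensitivity the survey discusses can be illustrated with a tiny points-to sketch (a hypothetical stdlib-Python illustration, not from any surveyed tool): a flow-sensitive analysis may perform a strong update at each assignment, while a flow-insensitive one conservatively accumulates every target.

```python
# Illustrative sketch (not the paper's method): flow-sensitive vs.
# flow-insensitive points-to sets over a straight-line assignment sequence.

def flow_sensitive(stmts):
    """Strong update: each assignment overwrites the pointer's targets."""
    pts = {}
    for var, target in stmts:
        pts[var] = {target}          # kill the old targets, gen the new one
    return pts

def flow_insensitive(stmts):
    """Weak update: targets of all assignments are accumulated."""
    pts = {}
    for var, target in stmts:
        pts.setdefault(var, set()).add(target)
    return pts

stmts = [("p", "a"), ("p", "b")]     # p = &a; p = &b;
print(flow_sensitive(stmts)["p"])    # only the last target survives
print(flow_insensitive(stmts)["p"])  # both targets are kept
```

The flow-insensitive result is what the surveyed techniques try to sharpen: conservative sets that a more precise analysis can shrink to a single location per program point.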
-
Examining the Quality of Bug Report Titles:An Empirical Study
续永, 孙龙飞, 张汤浩然, 毛新军. 软件缺陷标题质量的实证研究[J]. 计算机科学, 2023, 50(12): 14-23.
XU Yong, SUN Longfei, ZHANGTANG Haoran, MAO Xinjun. Examining the Quality of Bug Report Titles:An Empirical Study[J]. Computer Science, 2023, 50(12): 14-23.
- Computer Science. 2023, 50 (12): 14-23. doi:10.11896/jsjkx.230300211
Abstract
The title of a bug report serves as a concise summary of the bug, helping developers comprehend it quickly and manage bugs effectively. Current software development practice shows that the quality of bug titles varies considerably, with issues such as verbosity, obscurity, and missing crucial information, which ultimately harms the efficiency of bug management. To this end, this study attempts to understand which factors influence bug title quality and to assess its current state. We examine 190 online documents to elicit developers' quality requirements for bug titles, construct a bug title quality measurement model using the GQM paradigm, and analyze the prevalence of quality issues in 1 804 bug titles from five open-source projects on GitHub. The findings indicate that developers primarily focus on four quality aspects of bug titles: conciseness (110, 58%), clarity (65, 34%), description of the core idea of the bug (157, 83%), and accuracy of description (67, 35%). Approximately 70% of the bug titles exhibit quality issues of varying degrees, with missing crucial information and inaccurate descriptions being the most common. Specifically, 42% of the titles lack information developers expect, and 24% need to be rewritten for accuracy. The study offers guidance for reporters seeking to submit high-quality bug titles.
-
Aggregation Model for Software Defect Prediction Based on Data Enhancement by GAN
徐金鹏, 郭新峰, 王瑞波, 李济洪. 基于GAN数据增强的软件缺陷预测聚合模型[J]. 计算机科学, 2023, 50(12): 24-31.
XU Jinpeng, GUO Xinfeng, WANG Ruibo, LI Jihong. Aggregation Model for Software Defect Prediction Based on Data Enhancement by GAN[J]. Computer Science, 2023, 50(12): 24-31.
- Computer Science. 2023, 50 (12): 24-31. doi:10.11896/jsjkx.221100171
Abstract
In software defect prediction (SDP), a machine-learning classifier is usually trained on datasets of static software features such as C&K metrics. However, most such datasets contain few defective samples, and the resulting severe class imbalance lowers the model's predictive performance. Based on a generative adversarial network (GAN), this paper generates positive samples screened by FID score to enlarge the positive class, aggregates the learned models by majority voting, and builds the SDP model with block-regularized m×2 cross-validation (m×2 BCV). Twenty datasets from the PROMISE repository serve as experimental data, and random forest is used as the base learner. Experimental results show that, compared with traditional random over-sampling, SMOTE, and random under-sampling, the average F1 value of the SDP aggregation model across the 20 datasets increases by 10.2%, 5.7%, and 3.4% respectively, and the stability of F1 also improves. On 17 of the 20 datasets, the SDP aggregation model attains the highest F1 value. In terms of AUC, there is no significant difference between the proposed method and the traditional sampling methods.
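The majority-voting aggregation step can be sketched as follows (a minimal illustration with assumed inputs; the GAN-based sample generation and the m×2 BCV training themselves are outside this fragment):

```python
# Minimal sketch (assumptions, not the paper's code): aggregate the per-module
# predictions of the models learned on each m x 2 cross-validation split by
# majority voting.
from collections import Counter

def majority_vote(predictions):
    """predictions: list of per-model label lists, all of the same length."""
    n = len(predictions[0])
    voted = []
    for i in range(n):
        votes = Counter(model_preds[i] for model_preds in predictions)
        voted.append(votes.most_common(1)[0][0])   # most frequent label wins
    return voted

# Three models (e.g. trained on different splits) vote on four modules,
# label 1 = defective, 0 = clean:
preds = [
    [1, 0, 1, 0],   # model 1
    [1, 1, 1, 0],   # model 2
    [0, 0, 1, 0],   # model 3
]
print(majority_vote(preds))  # [1, 0, 1, 0]
```

With an odd number of voters and binary labels, ties cannot occur, which is one practical reason to aggregate an odd number of cross-validation models.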
-
Mobile Application Accessibility Enhancement Method Based on Recording and Playback
李向民, 沈立炜, 董震. 基于录制回放的移动应用可访问性增强方法[J]. 计算机科学, 2023, 50(12): 32-48.
LI Xiangmin, SHEN Liwei, DONG Zhen. Mobile Application Accessibility Enhancement Method Based on Recording and Playback[J]. Computer Science, 2023, 50(12): 32-48.
- Computer Science. 2023, 50 (12): 32-48. doi:10.11896/jsjkx.230300164
Abstract
Accessibility of mobile applications refers to the ability to use them conveniently regardless of physical or cognitive impairments, which is of great significance to elderly and disabled users. Shortening the interaction path (reducing the number of steps) needed to use an application is an important way to enhance its accessibility. Record-and-replay technology automatically executes fixed operations from recorded scripts, reducing the interactive operations required. However, existing record-and-replay tools still have limitations: they rely on ROOT permission or intrusive means for recording and script migration, and the scripts they record do not support parameterized operations. In response, this paper proposes a mobile application accessibility enhancement method based on record and replay. During recording, the accessibility service is used as the medium, avoiding ROOT permission and intrusive instrumentation. A path index algorithm ensures the portability of scripts, and a script parameterization algorithm records parameterized operations, so that the generated execution scripts are both portable across devices and generalizable over operation data. Based on the proposed method, a prototype record-and-replay tool named RRA is developed, and 50 common execution scripts for 10 popular applications are constructed. With these scripts, the playback success rate on the same device is 80%, comparable to the baseline method SARA. Of the 40 scripts that RRA replays successfully, 5 are additionally recorded with parameterization and reach a 100% playback success rate on the same device. When the 29 scripts and 5 parameterized scripts recordable by both methods are migrated to another device and executed, RRA achieves a playback success rate of 94%, higher than that of SARA.
-
Category-directed Fuzzing Test Method for Error Reporting Mechanism in JavaScript Engines
卢凌, 周志德, 任志磊, 江贺. 面向JavaScript引擎报错机制的类别导向模糊测试方法[J]. 计算机科学, 2023, 50(12): 49-57.
LU Ling, ZHOU Zhide, REN Zhilei, JIANG He. Category-directed Fuzzing Test Method for Error Reporting Mechanism in JavaScript Engines[J]. Computer Science, 2023, 50(12): 49-57.
- Computer Science. 2023, 50 (12): 49-57. doi:10.11896/jsjkx.221200166
Abstract
The error reporting mechanism is an indispensable part of a JavaScript engine. For erroneous programs, it should output a reasonable error message that points out the location and cause of the error and helps developers repair the program. However, defects in the error reporting mechanism itself can prevent developers from repairing errors. This paper proposes CAFJER, the first category-directed fuzzing method for the error reporting mechanism of JavaScript engines. For a given seed program, CAFJER first selects an error-message category as the target and dynamically analyzes the seed to obtain its context information. Second, it generates test cases that can trigger error messages of the target category according to this context information. Third, it feeds the generated test cases into different JavaScript engines for differential testing: if the error messages thrown by the engines differ, a defect may be present. Finally, CAFJER automatically filters out repeated and invalid test cases, effectively reducing manual effort. To verify its effectiveness, CAFJER is compared with the state-of-the-art methods JEST and DIPROM. Experimental results show that the number of unique defects CAFJER finds in the error reporting mechanism is 2.17 times that of JEST and 26.00 times that of DIPROM. During a three-month experiment, CAFJER submitted 17 defect reports to developers, 7 of which have been confirmed.
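The differential-testing core of such an approach can be illustrated as follows (hypothetical engine stubs, not CAFJER's implementation): the same test case is run through several engines, and any disagreement in the reported error category is flagged for inspection.

```python
# Illustrative differential-testing core (hypothetical engines, stdlib only):
# run one test case through every engine and flag disagreement in the
# error category they report.

def differential_test(engines, testcase):
    """engines: dict name -> callable returning an error-category string."""
    results = {name: run(testcase) for name, run in engines.items()}
    categories = set(results.values())
    # A discrepancy between engines hints at a defect in some engine's
    # error-reporting mechanism (duplicate/invalid-case filtering would follow).
    return results if len(categories) > 1 else None

engines = {
    "engine_a": lambda src: "TypeError" if "()" in src else "SyntaxError",
    "engine_b": lambda src: "TypeError" if "()" in src else "ReferenceError",
}
print(differential_test(engines, "x()"))   # engines agree -> None
print(differential_test(engines, "x +"))   # engines disagree -> both results
```

Real engines would be invoked as subprocesses and their stderr parsed; the stubs above only stand in for that step.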
-
Protocol Fuzzing Based on Testcases Automated Generation
徐威, 武泽慧, 王子木, 陆丽. 基于测试用例自动化生成的协议模糊测试方法[J]. 计算机科学, 2023, 50(12): 58-65.
XU Wei, WU Zehui, WANG Zimu, LU Li. Protocol Fuzzing Based on Testcases Automated Generation[J]. Computer Science, 2023, 50(12): 58-65.
- Computer Science. 2023, 50 (12): 58-65. doi:10.11896/jsjkx.221000225
Abstract
As specifications for interaction between devices, network protocols play an important role in computer networks. Vulnerabilities in protocol implementations can expose devices to remote attacks and pose serious security risks. Fuzzing is an important method for discovering security vulnerabilities in programs. Before fuzzing a protocol, reverse analysis is usually performed so that high-quality test cases can be generated under the guidance of the protocol format and its state-machine model. In that process, however, test cases must be constructed manually, and the constructed test cases rarely cover deep protocol states. To solve these problems, this paper proposes an automated test case generation technique: generation rules are defined in a template, complete test paths are built with a state-transition path generation algorithm, and fuzzing is then performed effectively on protocol programs. Experimental results show that, compared with the advanced protocol fuzzer boofuzz, the number of effective test cases generated by the proposed method increases by 51.8%. Tests on four real software systems reproduce three publicly known vulnerabilities, and a new flaw is found and confirmed by the developers.
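The state-transition path construction can be sketched like this (an assumed breadth-first formulation over a toy FTP-like state machine, not the paper's exact algorithm): each path is a complete message sequence that drives the target from the initial state into a deep state before mutation begins.

```python
# Sketch under assumptions (not the paper's algorithm): enumerate complete
# test paths from the initial protocol state to every reachable state with a
# breadth-first walk over the state-transition graph.
from collections import deque

def transition_paths(transitions, start):
    """transitions: dict state -> {message: next_state}. Returns the shortest
    message sequence reaching each state."""
    paths = {start: []}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for message, nxt in transitions.get(state, {}).items():
            if nxt not in paths:
                paths[nxt] = paths[state] + [message]
                queue.append(nxt)
    return paths

# Toy FTP-like state machine (hypothetical, for illustration only):
ftp_like = {
    "INIT":      {"USER": "NEED_PASS"},
    "NEED_PASS": {"PASS": "LOGGED_IN"},
    "LOGGED_IN": {"RETR": "TRANSFER", "QUIT": "CLOSED"},
}
print(transition_paths(ftp_like, "INIT")["TRANSFER"])  # ['USER', 'PASS', 'RETR']
```

A fuzzer would replay the prefix of such a path verbatim and mutate only the final message, so that deep states are exercised with valid context.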
-
RVTDS:A Trace Debugging System for Microprocessor
高轩, 何港兴, 车文博, 扈啸. RVTDS:面向微处理器的追踪调试系统[J]. 计算机科学, 2023, 50(12): 66-74.
GAO Xuan, HE Gangxing, CHE Wenbo, HU Xiao. RVTDS:A Trace Debugging System for Microprocessor[J]. Computer Science, 2023, 50(12): 66-74.
- Computer Science. 2023, 50 (12): 66-74. doi:10.11896/jsjkx.230100030
Abstract
Software debugging is one of the most challenging parts of embedded system development. When debugging highly complex, real-time systems, single-step breakpoint debugging carries a high time overhead and tends to distort program execution behavior, while the serially connected JTAG interface cannot provide parallel access to a complex multicore processor in operation. On-chip trace debugging solves these problems by obtaining program execution state non-intrusively through dedicated hardware. Existing on-chip trace technologies, however, mainly trace complete information and generate a large amount of meaningless data, which easily causes path blocking or data loss, especially during concurrent debugging; the transmission of compressed data over narrow buses is also not considered. This paper designs and implements RVTDS, a non-intrusive trace debugging system for RISC-V multicore microprocessors, which solves the data-loss problem during high-speed parallel debugging by reusing the platform-level interrupt controller within the RISC-V cores. A data-flow tracing scheme for the on-chip bus and a control-flow filtering mechanism based on instruction bit-field matching are proposed to realize signal filtering and provide bus bandwidth statistics. A data compression method based on differential coding achieves an average compression rate of more than 82%, and a data packing scheme solves data transmission over a narrow bus, carrying on average about 1.5 pieces of path information per effective data beat. System verification shows that, compared with traditional debugging methods, RVTDS produces a small amount of trace data and accomplishes the acquisition, transmission, and storage of multiple kinds of on-chip runtime information of complex multicore microprocessors flexibly and efficiently.
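The differential-coding idea behind the compression method can be illustrated as follows (an assumed minimal form, not RVTDS's exact encoder): consecutive program-counter samples are highly correlated, so storing deltas instead of absolute values yields many small, highly compressible values while remaining lossless.

```python
# Minimal sketch of differential coding for trace data (assumed form, not
# RVTDS's exact scheme): encode each sample as its difference from the
# previous one; decoding is a running sum, so the round trip is lossless.

def delta_encode(samples):
    prev = 0
    out = []
    for s in samples:
        out.append(s - prev)
        prev = s
    return out

def delta_decode(deltas):
    total, out = 0, []
    for d in deltas:
        total += d
        out.append(total)
    return out

pc_trace = [0x8000, 0x8004, 0x8008, 0x800C, 0x9000]   # sequential code + a jump
deltas = delta_encode(pc_trace)
print(deltas)                            # [32768, 4, 4, 4, 4084]
assert delta_decode(deltas) == pc_trace  # lossless round trip
```

The small repeated deltas (here, the instruction width 4) are exactly what a downstream entropy coder exploits.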
-
CodeBERT-based Language Model for Design Patterns
陈时非, 刘东, 江贺. 基于CodeBERT的设计模式语言模型[J]. 计算机科学, 2023, 50(12): 75-81.
CHEN Shifei, LIU Dong, JIANG He. CodeBERT-based Language Model for Design Patterns[J]. Computer Science, 2023, 50(12): 75-81.
- Computer Science. 2023, 50 (12): 75-81. doi:10.11896/jsjkx.230100115
Abstract
As summaries of practical software design experience, design patterns are regarded as an effective means of software design assistance. Most current research on design pattern mining aims at recognizing design pattern instances in source code, while modelling design patterns with natural-language corpora remains largely unexplored. To improve the performance of language models that recommend design patterns from code, class diagrams, or object collaborations, this paper proposes dpCodeBERT, a design pattern classification and mining model based on CodeBERT that achieves contrastive understanding of design patterns in natural language and programming language. First, a multi-classification dataset and a code search dataset are generated by random combination and used as model inputs, and dpCodeBERT yields the attention weights of each Transformer layer for every token and statement. Second, the input dataset is further improved by analyzing these attention weights and discovering the most important input category. Finally, dpCodeBERT is applied to downstream software engineering tasks such as design pattern selection and design pattern code search, mapping the distributed features to the sample space through fully connected layers and outputting multiple values. On 80 software design problems in the design pattern selection task, the ratio of correct detection of design patterns (RCDDP) and mean reciprocal rank (MRR) of dpCodeBERT improve by 10%~20% on average over the baseline models, making design pattern selection more accurate. Through in-depth study of the model's data requirements, dpCodeBERT improves CodeBERT's understanding of class code and demonstrates CodeBERT's applicability to design pattern mining, with accurate prediction and good scalability.
-
Standardization Definition and Design of Robotic Process Automation
赖琪, 蔡宇辉, 夏斯琼, 谢晓全, 刘沛, 李肯立. RPA流程标准化定义与设计[J]. 计算机科学, 2023, 50(12): 82-88.
LAI Qi, CAI Yuhui, XIA Siqiong, XIE Xiaoquan, LIU Pei, LI Kenli. Standardization Definition and Design of Robotic Process Automation[J]. Computer Science, 2023, 50(12): 82-88.
- Computer Science. 2023, 50 (12): 82-88. doi:10.11896/jsjkx.230100020
Abstract
To address the lack of a standardized method for describing processes in the field of robotic process automation (RPA), this paper proposes a specification for defining RPA processes, covering the various objects they contain; the specification can be used to analyze complex RPA application scenarios. Additionally, to better define and describe workflow systems and to overcome the incompatibility of process scripts caused by the absence of process modeling standards in the RPA field, a set of RPA process modeling symbols and a labeling system are defined based on the Business Process Model and Notation (BPMN) standard. Finally, a typical business process in a banking system is defined and described using the specification, and its correctness is verified with Petri nets.
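Petri-net verification of this kind rests on the standard transition firing rule, which can be sketched as follows (a toy banking-workflow marking; the place and transition names are hypothetical, not taken from the paper):

```python
# Illustrative Petri-net firing rule (stdlib only): a transition may fire only
# when every input place holds a token; firing consumes one token per input
# place and produces one token per output place.

def fire(marking, inputs, outputs):
    """marking: dict place -> token count. Returns the new marking, or None
    if the transition is not enabled under the given marking."""
    if any(marking.get(p, 0) < 1 for p in inputs):
        return None                          # not enabled
    new = dict(marking)
    for p in inputs:
        new[p] -= 1
    for p in outputs:
        new[p] = new.get(p, 0) + 1
    return new

# Toy approval step of a banking workflow: both a pending request and a free
# clerk are required before "approve" can fire; the clerk is released again.
m0 = {"request": 1, "clerk_free": 1}
m1 = fire(m0, inputs=["request", "clerk_free"], outputs=["approved", "clerk_free"])
print(m1)   # {'request': 0, 'clerk_free': 1, 'approved': 1}
```

Checking process correctness then amounts to exploring the markings reachable through such firings (e.g. confirming the absence of deadlocks).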
-
Stripe Matching and Merging Algorithm-based Redundancy Transition for Locally Repairable Codes
杜清鹏, 许胤龙, 吴思. 基于条带配对合并算法的局部可修复码冗余度转换机制[J]. 计算机科学, 2023, 50(12): 89-96.
DU Qingpeng, XU Yinlong, WU Si. Stripe Matching and Merging Algorithm-based Redundancy Transition for Locally Repairable Codes[J]. Computer Science, 2023, 50(12): 89-96.
- Computer Science. 2023, 50 (12): 89-96. doi:10.11896/jsjkx.221100257
Abstract
Compared with traditional replication, erasure coding is a data redundancy mechanism with lower space overhead at the cost of higher repair cost. Locally repairable codes are a special kind of erasure code with low repair cost that is widely deployed in big-data storage systems. To adapt to dynamic workloads and the varying failure rates of storage media, modern storage systems trade off access performance against reliability for erasure-coded data by means of redundancy transitions. This paper proposes a redundancy transition method that selectively matches and merges stripes of specific layouts, decoupling data placement from redundancy transition, and further reduces cross-rack network traffic by designing a cost quantification and optimization model. In contrast to algorithms that design data placement, the proposed algorithm achieves almost the same performance while eliminating constraints on data layout, and it can be run iteratively. Experimental results show that under two common transition setups, compared with the naive approach of random layout, the proposed algorithm approaches the theoretical optimum, reducing network traffic by 27.74% and 27.47% and shortening transition time by 39.10% and 22.32% respectively.
-
Transformer Feature Fusion Network for Time Series Classification
段梦梦, 金城. 基于Transformer特征融合的时间序列分类网络[J]. 计算机科学, 2023, 50(12): 97-103.
DUAN Mengmeng, JIN Cheng. Transformer Feature Fusion Network for Time Series Classification[J]. Computer Science, 2023, 50(12): 97-103.
- Computer Science. 2023, 50 (12): 97-103. doi:10.11896/jsjkx.221100112
Abstract
Model ensemble methods for time series classification train multiple base models and aggregate their outputs with a certain rule. However, existing work mainly focuses on two aspects: which model to choose as the base model, and how to increase the difference and diversity among base models; the aggregation rule itself is left unexplored. To address this problem, a Transformer feature fusion network for time series classification (TFFN) is proposed. TFFN has two key components: a dual Transformer encoder-decoder (Dual TED) and a Transformer encoder head (TEH). Dual TED leverages attention modules to fuse basic features into more discriminative fusion features, and TEH, a sample-distribution-aware classifier, classifies time series more accurately. Experiments show that TFFN achieves state-of-the-art results on multiple mainstream time series classification datasets.
-
Self-optimized Single Cell Clustering Using ZINB Model and Graph Attention Autoencoder
孔凤玲, 吴昊, 董庆庆. 联合ZINB模型与图注意力自编码器的自优化单细胞聚类[J]. 计算机科学, 2023, 50(12): 104-112.
KONG Fengling, WU Hao, DONG Qingqing. Self-optimized Single Cell Clustering Using ZINB Model and Graph Attention Autoencoder[J]. Computer Science, 2023, 50(12): 104-112.
- Computer Science. 2023, 50 (12): 104-112. doi:10.11896/jsjkx.221000167
Abstract
Clustering individual cells into subpopulations is one of the most important steps of single-cell data analysis. However, owing to limitations of sequencing principles and platforms, single-cell datasets generally exhibit high-dimensional sparsity, high-variance noise, and extensive dropout, which pose many challenges to cluster analysis and downstream applications. Single-cell clustering methods proposed in recent years mainly model the relationship between cells and gene expression, neglecting both the full mining of latent relationships among cells and the removal of noise, so the clustering results are unsatisfactory and hinder later analysis. In view of these problems, a self-optimized single-cell clustering algorithm (scZDGAC) combining the zero-inflated negative binomial (ZINB) model with a graph attention autoencoder is proposed. The algorithm first uses the ZINB model together with the scalable DCA denoising algorithm; by better fitting the data distribution through the ZINB distribution, it improves the denoising performance of the autoencoder and reduces the impact of noise and dropout on the output of the KNN algorithm. It then uses the graph attention autoencoder to propagate information between cells with different weights, better capturing the latent features between cells for clustering. Finally, scZDGAC uses self-optimization to let the originally independent clustering and feature modules benefit from each other, iteratively updating the cluster centers to further improve clustering performance. Adjusted Rand index (ARI) and normalized mutual information (NMI) are used as evaluation metrics. Experiments on six single-cell datasets of different scales show that the proposed algorithm substantially improves clustering performance.
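The ZINB distribution fitted here mixes a point mass at zero with a negative binomial component; in the standard form used by DCA-style autoencoders, with dropout probability π, mean μ, and dispersion θ:

```latex
% Negative binomial component with mean \mu and dispersion \theta:
\mathrm{NB}(x \mid \mu, \theta)
  = \frac{\Gamma(x+\theta)}{x!\,\Gamma(\theta)}
    \left(\frac{\theta}{\theta+\mu}\right)^{\theta}
    \left(\frac{\mu}{\theta+\mu}\right)^{x}

% Zero-inflated mixture with dropout (zero-inflation) probability \pi:
\mathrm{ZINB}(x \mid \pi, \mu, \theta)
  = \pi\,\delta_{0}(x) + (1-\pi)\,\mathrm{NB}(x \mid \mu, \theta)
```

The extra mass π at zero is what lets the model separate true zero expression from dropout events when denoising.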
-
Adaptive Location Recommendation Based on Time Slots Clustering and User Dynamic Similarity
朱俊, 韩立新, 宗平, 刘红英, 谢玲, 李景仙. 基于时间聚类和用户动态相似度的自适应位置推荐算法[J]. 计算机科学, 2023, 50(12): 113-122.
ZHU Jun, HAN Lixin, ZONG Ping, LIU Hongying, XIE Ling, LI Jingxian. Adaptive Location Recommendation Based on Time Slots Clustering and User Dynamic Similarity[J]. Computer Science, 2023, 50(12): 113-122.
- Computer Science. 2023, 50 (12): 113-122. doi:10.11896/jsjkx.230200105
Abstract
Location recommendation is an important service for businesses and users in location-based social networks, and the recommended results are strongly influenced by user preferences and spatial-temporal contexts. Most existing research ignores the variation of user similarity over time, lacks adaptability when making recommendations, and suffers from serious data sparsity. To address these issues, this paper proposes an adaptive location recommendation algorithm (ALRTU) based on time-slot clustering and dynamic user similarity. First, time-slot clustering based on fuzzy c-means is devised according to statistics of historical check-in data; the time similarity within each cluster is calculated, and the original ratings are updated with a smoothing technique to alleviate data sparsity. Dynamic user similarities are calculated by hour, and different rating subsets are selected adaptively according to the time-slot clusters, realizing the mining of user preferences and temporal influences. Second, users are classified by check-in frequency, and either kernel density estimation or a power-law distribution algorithm is selected adaptively to mine geographical features. Finally, user preferences and spatial-temporal contexts are fused to produce location recommendations. Extensive offline experiments on two real-world datasets (Brightkite and Gowalla) verify the accuracy: compared with the best-performing benchmark method, ALRTU improves recommendation accuracy on Brightkite and Gowalla by 3.74% and 1.42% on average respectively.
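The power-law geographical model commonly used in this line of work can be sketched as follows (an assumed form, not ALRTU's exact estimator): the probability of a user visiting a place at distance d from a previous check-in is modeled as P(d) = a·d^b with b < 0, fitted by least squares in log-log space.

```python
# Hedged sketch (assumed form): fit P(d) = a * d**b by linear regression on
# log-transformed distances and probabilities.
import math

def fit_power_law(distances, probs):
    xs = [math.log(d) for d in distances]
    ys = [math.log(p) for p in probs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Ordinary least-squares slope and intercept in log-log space:
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic data generated from P(d) = 0.5 * d**-1.2 recovers a and b:
dist = [1.0, 2.0, 4.0, 8.0]
prob = [0.5 * d ** -1.2 for d in dist]
a, b = fit_power_law(dist, prob)
print(round(a, 3), round(b, 3))   # 0.5 -1.2
```

Such a fit suits frequent users with many check-ins; for sparse users, a kernel density estimate over observed distances is the usual alternative, which matches the adaptive choice described above.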
-
High Speed Data Compression Method of Merge Unit Based on SCD File
陈星田, 熊小伏, 白勇, 胡海洋. 一种基于SCD文件的合并单元高速数据压缩方法[J]. 计算机科学, 2023, 50(12): 123-129.
CHEN Xingtian, XIONG Xiaofu, BAI Yong, HU Haiyang. High Speed Data Compression Method of Merge Unit Based on SCD File[J]. Computer Science, 2023, 50(12): 123-129.
- Computer Science. 2023, 50 (12): 123-129. doi:10.11896/jsjkx.230700230
Abstract
In a modern smart grid, many merging units are installed in smart substations to release transient data of current and voltage transformers synchronously. These transient data must be kept for several years to cover the equipment life cycle and to provide original information for condition-based maintenance and reliability analysis, but such long-term, high-frequency massive data is a difficult problem for storage equipment. In this paper, the high-frequency transient data are preprocessed into three parts: fixed, state-changing, and periodically changing. The fixed part is replaced by the merging unit's APPID in the SCD file, the state-changing part is replaced by an event record file, and the periodically changing part is represented by two-channel differences and periodic differences based on the SCD file; final compression coding is completed with 16-bit Huffman coding. Tests show that this method achieves both a larger compression ratio and a faster compression speed than common hardware compression cards.
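The periodic-difference idea can be illustrated as follows (an assumed minimal form, not the paper's exact coding): sampled values of a merging unit repeat with the power cycle, so subtracting the sample one period earlier leaves near-zero residuals that a Huffman coder can then compress tightly.

```python
# Sketch of periodic differencing (assumed form): keep the first period
# verbatim and store every later sample as its difference from the sample one
# period earlier.
import math

def periodic_difference(samples, period):
    return [s if i < period else s - samples[i - period]
            for i, s in enumerate(samples)]

# Two cycles of a power-frequency-like sine, 8 samples per period:
wave = [round(1000 * math.sin(2 * math.pi * i / 8)) for i in range(16)]
res = periodic_difference(wave, period=8)
print(res[8:])   # residuals of the second period collapse to zero
```

On a steady-state waveform the residual stream is dominated by a single symbol (zero), which is the best case for the 16-bit Huffman stage described above.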
-
Review of Transformer in Computer Vision
陈洛轩, 林成创, 郑招良, 莫泽枫, 黄心怡, 赵淦森. Transformer在计算机视觉场景下的研究综述[J]. 计算机科学, 2023, 50(12): 130-147.
CHEN Luoxuan, LIN Chengchuang, ZHENG Zhaoliang, MO Zefeng, HUANG Xinyi, ZHAO Gansen. Review of Transformer in Computer Vision[J]. Computer Science, 2023, 50(12): 130-147.
- Computer Science. 2023, 50 (12): 130-147. doi:10.11896/jsjkx.221100076
Abstract
Transformer is an attention-based encoder-decoder architecture. Owing to its long-range sequence modeling and parallel computing capability, Transformer has made significant breakthroughs in natural language processing and is gradually expanding into computer vision (CV), becoming an important research direction for CV tasks. This paper focuses on three sorts of visual Transformer-based CV tasks, classification, object detection, and segmentation, and summarizes their applications and modifications. Starting from image classification, it first analyses the existing issues of vision Transformers, including data size, structure, and computational efficiency, and then sorts out the corresponding solutions. It further reviews the literature on object detection and segmentation, organizing the methods according to their structures and motivations and summarizing their pros and cons. Finally, the challenges and future development trends of Transformer in computer vision are summarized and discussed.
-
Prior-guided Blind Iris Image Restoration Algorithm
王甲, 项刘宇, 黄昱博, 夏玉峰, 田青, 何召锋. 先验引导的虹膜图像盲修复算法[J]. 计算机科学, 2023, 50(12): 148-155.
WANG Jia, XIANG Liuyu, HUANG Yubo, XIA Yufeng, TIAN Qing, HE Zhaofeng. Prior-guided Blind Iris Image Restoration Algorithm[J]. Computer Science, 2023, 50(12): 148-155.
- Computer Science. 2023, 50 (12): 148-155. doi:10.11896/jsjkx.230500217
Abstract
As one of the most promising biometric technologies, iris recognition has been widely used in various industries. However, existing iris recognition systems are easily disturbed by external factors during image acquisition, and the acquired iris images inevitably suffer from insufficient resolution and blurring. To address these challenges, a prior-guided blind iris image restoration method is proposed, which uses a generative adversarial network and iris priors to recover iris images with unknown, mixed degradations such as low resolution, motion blur, and out-of-focus blur. The network comprises a degradation removal sub-network, a prior estimation sub-network, and a prior fusion sub-network. The prior estimation sub-network models the distribution of the input's style information as prior knowledge to guide the generative network, and the prior fusion sub-network uses an attentive fusion mechanism to integrate multi-level style features, improving information utilization. Experimental results show that the proposed method outperforms other methods in both qualitative and quantitative terms, achieves blind restoration of degraded irises, and improves the robustness of iris recognition.
-
Improved Fast Image Translation Model Based on Spatial Correlation and Feature Level Interpolation
李玉强, 李欢, 刘春. 基于空间相关性与特征级插值改进的快速图像翻译模型[J]. 计算机科学, 2023, 50(12): 156-165.
LI Yuqiang, LI Huan, LIU Chun. Improved Fast Image Translation Model Based on Spatial Correlation and Feature Level Interpolation[J]. Computer Science, 2023, 50(12): 156-165.
- Computer Science. 2023, 50 (12): 156-165. doi:10.11896/jsjkx.221100027
Abstract
In recent years, with the popularity of deep learning algorithms, image translation tasks have achieved remarkable results. Much research is devoted to reducing model running time while maintaining image generation quality, of which the ASAPNet model is a typical representative. However, the feature-level loss function of ASAPNet cannot completely decouple image features from appearance, and most of its computation is performed at extremely low resolution, resulting in poor image quality. In response, this paper proposes SRFIT, an improved ASAPNet model based on spatial correlation and feature-level interpolation. Specifically, following the principle of self-similarity, a spatially-correlative loss replaces the feature matching loss of the original model to alleviate scene structure differences during translation and improve translation accuracy. In addition, inspired by the data augmentation method in ReMix, the amount of data is increased at the image feature level through linear interpolation, which alleviates generator overfitting. Comparative experiments on two public datasets, facades and cityscapes, show that the proposed method outperforms current mainstream models: it effectively improves the quality of generated images while maintaining a fast running speed.
-
Feature Fusion and Boundary Correction Network for Salient Object Detection
陈慧, 彭力. 基于特征融合与边界修正显著性目标检测[J]. 计算机科学, 2023, 50(12): 166-174.
CHEN Hui, PENG Li. Feature Fusion and Boundary Correction Network for Salient Object Detection[J]. Computer Science, 2023, 50(12): 166-174.
- Computer Science. 2023, 50 (12): 166-174. doi:10.11896/jsjkx.221100203
-
Abstract
-
Salient object detection aims to find visually significant areas in an image. Existing salient object detection methods have shown strong advantages, but they are still limited in scale perception and boundary prediction. First, salient objects appear at many scales across scenes, which makes it difficult for an algorithm to adapt to different scale changes. Second, salient objects often have complex contours, which makes the detection of boundary pixels more difficult. To solve these problems, this paper proposes a feature fusion and boundary correction network for salient object detection, which extracts salient features at different levels of a feature pyramid. First, to handle the scale diversity of objects, a feature fusion decoder composed of multi-scale feature decoding modules is designed; by fusing the features of adjacent layers layer by layer, the network's ability to perceive scale is improved. At the same time, a boundary correction module is designed to learn the contour features of salient objects and generate high-quality saliency maps with clear boundaries. Experimental results on five commonly used salient object detection datasets show that the proposed algorithm achieves better results on mean absolute error, the F index, and the S index.
-
Multi-temporal Hyperspectral Anomaly Change Detection Based on Dual Space Conjugate Autoencoder
李沙沙, 邢红杰, 李刚. 基于双空间共轭自编码器的多时相高光谱异常变化检测[J]. 计算机科学, 2023, 50(12): 175-184.
LI Shasha, XING Hongjie, LI Gang. Multi-temporal Hyperspectral Anomaly Change Detection Based on Dual Space Conjugate Autoencoder[J]. Computer Science, 2023, 50(12): 175-184.
- Computer Science. 2023, 50 (12): 175-184. doi:10.11896/jsjkx.221100092
-
Abstract
-
Hyperspectral anomaly change detection finds anomalous changes in multi-temporal hyperspectral remote sensing images. These anomalous changes are rare, deviate from the overall background change trend, and are difficult to find, yet they are of great interest. Owing to small data sets, noise disturbance, and the limitations of linear prediction models, the detection performance of conventional hyperspectral anomaly change detection methods is greatly degraded. Autoencoders have been successfully applied to hyperspectral anomaly change detection. However, when processing multi-temporal hyperspectral images, a single autoencoder only focuses on the reconstruction quality of the images and usually ignores the complex spectral changes in them while obtaining bottleneck features. To tackle this problem, a multi-temporal hyperspectral anomaly change detection method based on a dual-space conjugate autoencoder (DSCAE) is proposed. The proposed method contains two conjugate autoencoders that construct their latent features from different directions. In training, first, the two hyperspectral images from different times are mapped by their encoders to feature representations in the latent space, and the predicted image for the other time is obtained by each decoder. Second, different constraints are imposed in the sample space and the latent space, respectively, and the corresponding loss functions are minimized in the two spaces. Finally, an anomaly loss map is obtained by each conjugate autoencoder for its image, and a pixel-wise minimization over the two anomaly loss maps yields the final anomaly change intensity map, which simultaneously decreases the background spectral difference between the two input images and highlights anomalous changes. Experimental results on benchmark data sets for hyperspectral anomaly change detection demonstrate that DSCAE achieves better detection performance than 10 pertinent methods.
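The final fusion step described above, taking a pixel-wise minimum over the two anomaly loss maps, can be sketched as follows (a toy illustration on nested lists; the actual method operates on full hyperspectral loss maps):

```python
def anomaly_intensity(map_x, map_y):
    """Pixel-wise minimum of two anomaly loss maps.

    Keeping the smaller loss at each pixel suppresses locations where only
    one direction of prediction fails (likely background spectral
    difference), while true anomalies score high in both maps.
    """
    return [[min(a, b) for a, b in zip(row_x, row_y)]
            for row_x, row_y in zip(map_x, map_y)]
```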
-
Stereo Visual Localization and Mapping for Mobile Robot in Agricultural Environments
余涛, 熊盛武. 农业场景下移动机器人的双目视觉定位与地图构建方法[J]. 计算机科学, 2023, 50(12): 185-191.
YU Tao, XIONG Shengwu. Stereo Visual Localization and Mapping for Mobile Robot in Agricultural Environments[J]. Computer Science, 2023, 50(12): 185-191.
- Computer Science. 2023, 50 (12): 185-191. doi:10.11896/jsjkx.230300116
-
Abstract
-
Visual localization and mapping is a key technology for autonomous robots. In agricultural environments it faces additional challenges, including few distinguishable landmarks for tracking, large-scale scenes, and unstable movements. To address these problems, a stereo visual localization and mapping method is proposed. Static stereo matching points are used to increase the number and coverage of map points, which improves the accuracy of depth calculation. A point selection method further improves accuracy and efficiency by sampling the dense map points and removing outliers. Scale estimation is then proposed to reduce the scale error of localization and mapping in large-scale agricultural scenes. The keyframe criteria are adapted to avoid the impact of large far-away objects that could cause abnormal keyframe distribution. Finally, a new motion assumption is proposed to recover the system from tracking failure, which improves the system's robustness in the case of unstable movements. Experimental results show that the proposed method achieves better performance than other state-of-the-art visual localization and mapping systems. By addressing these challenges individually, the proposed system is more accurate and robust in agricultural environments.
-
Following Method of Mobile Robot Based on Fusion of Stereo Camera and UWB
付勇, 吴炜, 万泽青. 基于立体相机和UWB融合的移动机器人跟随方法[J]. 计算机科学, 2023, 50(12): 192-202.
FU Yong, WU Wei, WAN Zeqing. Following Method of Mobile Robot Based on Fusion of Stereo Camera and UWB[J]. Computer Science, 2023, 50(12): 192-202.
- Computer Science. 2023, 50 (12): 192-202. doi:10.11896/jsjkx.221000188
-
Abstract
-
This paper studies autonomous target following by mobile robots in an environment where humans and robots mix. In particular, a stable and effective method is presented for the robot to determine the desired following target and to re-identify the target after it is lost, that is, to achieve visual tracking and positioning of pedestrians based on stereo camera images and point cloud data. The location information from UWB is introduced to determine the target pedestrian, and a filter algorithm fuses the sensor data to obtain coordinates in the camera coordinate system, which are then converted into the robot coordinate system by coordinate transformation. An improved dynamic window algorithm (MDWA) is also proposed to improve the following task performed by the robot. In addition, based on sensor data, a behaviour decision module comprising following, recovery, and transition behaviours is proposed. Through switching between behaviours, the robot can retrieve the target when it is lost because the target turns or because changing ambient lighting conditions invalidate the camera. Experimental results show that the proposed following system can automatically determine the desired following target at start-up, and the robot achieves good obstacle-avoiding following in scenes with static obstacles and in dynamic scenes with non-target pedestrian disturbances in view. In particular, the robot can independently retrieve the following target in a turning scene or under varying lighting conditions, with a success rate of 81% in the turning scene.
-
Hierarchical Graph Convolutional Network for Image Sentiment Analysis
谈钱辉, 温佳璇, 唐继辉, 孙玉宝. 图像情感分析的层次图卷积网络模型[J]. 计算机科学, 2023, 50(12): 203-211.
TAN Qianhui, WEN Jiaxuan, TANG Jihui, SUN Yubao. Hierarchical Graph Convolutional Network for Image Sentiment Analysis[J]. Computer Science, 2023, 50(12): 203-211.
- Computer Science. 2023, 50 (12): 203-211. doi:10.11896/jsjkx.221100177
-
Abstract
-
The image sentiment analysis task aims to use machine learning models to automatically predict an observer's emotional response to images. Sentiment analysis methods based on deep networks have attracted wide attention, mainly learning deep image features automatically through convolutional neural networks. However, image emotion is a comprehensive reflection of the global contextual features of an image. Due to the limited receptive field of the convolution kernel, such networks cannot effectively capture dependencies between long-distance emotional features, and the emotional features at different levels of the network are not effectively fused and utilized, which affects the accuracy of image sentiment analysis. To solve these problems, this paper proposes a hierarchical graph convolutional network model that constructs a spatial context graph convolution (SCGCN) and a dynamic fusion graph convolution (DFGCN). The spatial and channel dimensions are mapped respectively to learn the global context association within each level of emotional features and the dependencies between features at different levels, which improves sentiment classification accuracy. The network is composed of four hierarchical prediction branches and one fusion prediction branch. The hierarchical prediction branches use SCGCN to learn the emotional context of single-level features, and the fusion prediction branch uses DFGCN to adaptively aggregate the contextual emotion features of different semantic levels for fused reasoning and classification. Experiment results on four emotion datasets show that the proposed method outperforms existing image emotion classification models in both emotion polarity classification and fine-grained emotion classification.
-
Continuous Dense Normalized Flow Model for Anomaly Detection in Industrial Images
张邹铨, 张辉, 吴天月, 陈天才. 面向工业图像异常检测的连续密集标准化流模型[J]. 计算机科学, 2023, 50(12): 212-220.
ZHANG Zouquan, ZHANG Hui, WU Tianyue, CHEN Tiancai. Continuous Dense Normalized Flow Model for Anomaly Detection in Industrial Images[J]. Computer Science, 2023, 50(12): 212-220.
- Computer Science. 2023, 50 (12): 212-220. doi:10.11896/jsjkx.221000183
-
Abstract
-
Anomaly detection on the surface of industrial products is an indispensable link in manufacturing. In actual industrial production, abnormal samples are scarce and unknown anomalies are complex and changeable, which causes negative effects such as overfitting and poor generalization on few-shot datasets. In recent years, normalizing flows have brought a new approach to deep-learning-based industrial image anomaly detection, but the inherent architecture of normalizing flows easily leads to insufficient model expressiveness. Aiming at these difficulties, a continuous dense normalized flow model for industrial image anomaly detection is proposed. First, a feature extraction network pre-training strategy based on contrastive learning is designed, which involves simulated abnormal data and a small amount of real abnormal data in the contrastive learning task and trains the feature backbone network AlexNet to narrow or widen the distance between specific samples. Second, a continuous dense normalized flow model is designed, using a composite architecture of reversible transformations to construct a dense flow module that enhances the generative model's ability to fit the distribution. The experimental datasets include MVTec AD, Magnetic Tile Defects, and a self-made industrial cloth dataset. Compared with other anomaly detection models, the proposed method achieves optimal or near-optimal detection performance on all three datasets.
-
Low-dose CT Reconstruction Algorithm Based on Iterative Asymmetric Blind Spot Network
郭广行, 阴桂梅, 刘晨旭, 段永红, 强彦, 王艳飞, 王涛. 基于迭代非对称盲点网络的低剂量CT重建算法[J]. 计算机科学, 2023, 50(12): 221-228.
GUO Guangxing, YIN Guimei, LIU Chenxu, DUAN Yonghong, QIANG Yan, WANG Yanfei, WANG Tao. Low-dose CT Reconstruction Algorithm Based on Iterative Asymmetric Blind Spot Network[J]. Computer Science, 2023, 50(12): 221-228.
- Computer Science. 2023, 50 (12): 221-228. doi:10.11896/jsjkx.230300014
-
Abstract
-
Aiming at the problem that machine-learning-based low-dose CT reconstruction relies too heavily on paired training images, a low-dose CT reconstruction algorithm based on an iterative asymmetric blind-spot network is proposed. First, low-dose CT images are reconstructed in a self-supervised manner by a blind-spot network with pixel-shuffle downsampling, yielding preliminarily reconstructed CT images. Second, an iterative model is established: the result image produced by the previous network is used as the low-dose input for the next round of training to obtain the final network model. Finally, an asymmetric scheme adjusts the stride of pixel-shuffle downsampling to minimize aliasing artifacts and obtain the final usable model. Theoretical analysis and experimental results show that, compared with traditional low-dose CT reconstruction algorithms, the iterative asymmetric blind-spot network greatly reduces the dependence of low-dose CT reconstruction on paired training images and can generate images similar to or even better than those of traditional methods in image quality, texture features, and structure.
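Assuming the sampling step refers to the pixel-shuffle downsampling commonly used with blind-spot denoisers, the operation splits an image into s×s mosaic sub-images by taking every s-th pixel at each phase offset; varying the stride s is what the asymmetric scheme adjusts. A minimal sketch under that assumption:

```python
def pixel_shuffle_downsample(img, s):
    """Split an HxW image (list of rows) into s*s mosaic sub-images.

    Sub-image (i, j) keeps the pixels at positions (i + s*r, j + s*c),
    i.e. every s-th pixel with phase offset (i, j). Blind-spot networks
    denoise each sub-image so that spatially adjacent noise is decorrelated.
    """
    return [[row[j::s] for row in img[i::s]]
            for i in range(s) for j in range(s)]
```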
-
Method of Document Level Relation Extraction Based on Fusion of Relational Transfer Information Using Double Graph
寇嘉颖, 赵卫东, 柳先辉. 融合关系传递信息的双图文档级关系抽取方法[J]. 计算机科学, 2023, 50(12): 229-235.
KOU Jiaying, ZHAO Weidong, LIU Xianhui. Method of Document Level Relation Extraction Based on Fusion of Relational Transfer Information Using Double Graph[J]. Computer Science, 2023, 50(12): 229-235.
- Computer Science. 2023, 50 (12): 229-235. doi:10.11896/jsjkx.230500010
-
Abstract
-
Document-level relation extraction refers to extracting entities and their relations from long passages of unstructured text. Compared with traditional sentence-level relation extraction, it requires integrating contextual information across multiple sentences and logical reasoning to extract relation triples. In response to the limitations of current document-level relation extraction methods, such as incomplete modeling of document semantics and limited extraction performance, a double-graph document-level relation extraction method that integrates relational transfer information is proposed. Interactions between mentions in different sentences are introduced into path construction through the transitivity of relational information, and interaction information between mentions in the same sentence, as well as coreference information between mentions, is used to construct the path set between mention nodes, improving the completeness of document modeling. A mention-level graph aggregation network is constructed from the path set and the mention nodes to model the document's semantic information. After information iteration with a graph convolutional network (GCN), the information of the different mention nodes of the same entity is fused into an entity node, forming an entity-level graph reasoning network. Finally, logical inference over the path information between entity nodes extracts the relations between entities. The proposed model is evaluated on the public dataset DocRED (document-level relation extraction dataset), and the experimental results show an improvement of 1.2 F1 over the baseline model, which proves the effectiveness of the proposed method.
-
Multi-level Semantic Structure Enhanced Emotional Cause Span Extraction in Conversations
秦鸣飞, 付国宏. 多层面语义结构增强的对话情感诱因片段抽取[J]. 计算机科学, 2023, 50(12): 236-245.
QIN Mingfei, FU Guohong. Multi-level Semantic Structure Enhanced Emotional Cause Span Extraction in Conversations[J]. Computer Science, 2023, 50(12): 236-245.
- Computer Science. 2023, 50 (12): 236-245. doi:10.11896/jsjkx.221100189
-
Abstract
-
Emotional cause span extraction in conversations aims to extract, from the conversational history, the causal spans that induce a target emotion expression, and it plays a pivotal role in emotional conversation systems. However, the causal spans extracted by existing methods still suffer from problems such as utterance position errors and boundary recognition errors. To this end, this paper proposes a multi-level semantic structure enhanced method for emotional cause span extraction in conversations. The discourse-level coreference structure is used to better locate the utterances where causal spans lie, and the sentence-level syntactic structure is used to improve the recognition of causal span boundaries. First, according to the preprocessed semantic structures and the feature representations of the conversational content, a graph attention network is used to construct comprehensive graphs and model conversations at the token level and the utterance level, respectively. Meanwhile, a biaffine mechanism promotes interaction and integration between the two-level graphs, yielding structure-enhanced comprehensive semantic representations. Then, a linear layer extracts the causal spans. Experimental results on two public datasets show that, compared with the benchmark model, the F1 and EMpos values are improved by 2.42% and 2.26%, respectively. The proposed model also outperforms other baseline models in both F1pos and EMpos metrics, and is effectively compatible with utterance-level emotion cause entailment.
-
Aspect-based Multimodal Sentiment Analysis Based on Trusted Fine-grained Alignment
范东旭, 过弋. 基于可信细粒度对齐的多模态方面级情感分析[J]. 计算机科学, 2023, 50(12): 246-254.
FAN Dongxu, GUO Yi. Aspect-based Multimodal Sentiment Analysis Based on Trusted Fine-grained Alignment[J]. Computer Science, 2023, 50(12): 246-254.
- Computer Science. 2023, 50 (12): 246-254. doi:10.11896/jsjkx.221100038
-
Abstract
-
The aspect-based multimodal sentiment analysis (MABSA) task aims to identify the sentiment polarity of a specific aspect word in a text based on both text and image information. However, current mainstream models do not make full use of the fine-grained semantic alignment between modalities. Instead, they fuse the features of the entire image with each word in the text, ignoring the strong correspondence between local image regions and aspect words, which allows noise in the image to be integrated into the final multimodal representation. Therefore, this paper proposes TFGA (MABSA based on trusted fine-grained alignment). Specifically, Faster R-CNN captures the visual objects contained in the image, and the correlation between each object and the aspect words is calculated. To avoid inconsistency between the local semantic similarity of a visual object and aspect words and the global perspective of the image-text pair, a confidence score weights the local semantic similarity and filters out unreliable matching pairs, so the model can focus on the most reliable local visual information related to the aspect words and reduce the impact of redundant noise in the image. A fine-grained feature fusion mechanism then fully fuses the attended local image information with the text information to obtain the final sentiment classification result. Experiments on Twitter datasets show that fine-grained alignment of text and vision benefits aspect-based sentiment analysis.
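The confidence-weighted filtering step can be illustrated as follows: each local aspect-object similarity is scaled by a global image-text confidence score, and unreliable pairs are dropped (a hypothetical sketch; the cosine scoring and threshold are assumptions, not the paper's implementation):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def filter_objects(aspect_vec, object_vecs, confidence, threshold=0.5):
    """Keep indices of visual objects whose confidence-weighted similarity
    to the aspect embedding clears the threshold (illustrative only)."""
    scores = [confidence * cosine(aspect_vec, v) for v in object_vecs]
    return [i for i, s in enumerate(scores) if s >= threshold]
```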
-
Chinese Implicit Sentiment Classification Combining Multiple Linguistic Features
陆靓倩, 王中卿, 周国栋. 结合多种语言学特征的中文隐式情感分类[J]. 计算机科学, 2023, 50(12): 255-261.
LU Liangqian, WANG Zhongqing, ZHOU Guodong. Chinese Implicit Sentiment Classification Combining Multiple Linguistic Features[J]. Computer Science, 2023, 50(12): 255-261.
- Computer Science. 2023, 50 (12): 255-261. doi:10.11896/jsjkx.221000214
-
Abstract
-
Sentiment analysis has always been a hot research direction in natural language processing. Implicit sentiment classification refers to sentiment classification in the absence of explicit sentiment words. Implicit sentiment analysis is still in its infancy and faces problems such as the lack of explicit sentiment words, euphemistic expression, and semantics that are difficult to understand. Traditional sentiment analysis methods, such as sentiment dictionaries and bag-of-words models, are largely ineffective here, making implicit sentiment classification more difficult. To solve these problems, this paper proposes a graph neural network model that combines text, part-of-speech tags, and dependency relations for implicit sentiment classification. Specifically, the model first extracts the part-of-speech and dependency features of the text, then uses the pre-trained language model BERT to extract text vector features, and builds a graph attention network over these multiple linguistic features. The model is evaluated repeatedly on the SMP2021 implicit sentiment recognition public dataset. Experimental results show that the proposed model achieves the best results compared with multiple baseline models, demonstrating that the proposed implicit sentiment classification method is feasible and effective.
-
Aspect-level Sentiment Analysis Integrating Syntactic Distance and Aspect-attention
张隆基, 赵晖. 融合句法距离与方面注意力的方面级情感分析[J]. 计算机科学, 2023, 50(12): 262-269.
ZHANG Longji, ZHAO Hui. Aspect-level Sentiment Analysis Integrating Syntactic Distance and Aspect-attention[J]. Computer Science, 2023, 50(12): 262-269.
- Computer Science. 2023, 50 (12): 262-269. doi:10.11896/jsjkx.221000090
-
Abstract
-
Currently, an over-smoothing problem arises from deep convolution in graph convolutional networks based on syntactic dependency trees, which prevents the network from extracting the global node information of the dependency tree. Although sequential models can extract contextual information from a sentence, their timing-dependent nature means the graph convolutional network cannot effectively distinguish the contribution of context features to aspect terms. This paper proposes a novel graph convolutional network model based on syntactic distance and an aspect-focused attention mechanism to address these problems. First, the model learns the contextual information of sentences and aspect terms separately using a bidirectional long short-term memory network and uses a graph convolutional network to learn the syntactic dependency information of sentences. Second, the model calculates the syntactic dependency distance among all nodes on the syntactic dependency tree and sets a threshold to weaken the weight of long-distance features, improving the model's ability to distinguish context features. Finally, an attention mechanism with residual connections is designed to automatically guide the aspect terms to focus on the critical information in the sentence. Experimental results demonstrate better analytical performance than the baseline approaches on several publicly available datasets, with sentiment classification accuracy up to 75.94% and 78.59% on the Twitter and Laptop datasets, demonstrating the effectiveness of the proposed approach.
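The syntactic-distance weighting can be sketched as a BFS over the dependency tree followed by thresholded attenuation (the threshold and decay values here are illustrative assumptions, not the paper's hyperparameters):

```python
from collections import deque

def syntactic_distances(edges, n, src):
    """BFS hop distances from token `src` over an undirected dependency tree
    with n tokens and edges given as (head, dependent) pairs."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    dist = [None] * n
    dist[src] = 0
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] is None:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def distance_weights(dist, threshold=2, decay=0.5):
    """Full weight for tokens within the syntactic threshold,
    attenuated weight for long-distance (or unreachable) tokens."""
    return [1.0 if d is not None and d <= threshold else decay for d in dist]
```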
-
SemFA:Extreme Multi-label Text Classification Model Based on Semantic Features and Association Attention
王振东, 董开坤, 黄俊恒, 王佰玲. SemFA:基于语义特征与关联注意力的大规模多标签文本分类模型[J]. 计算机科学, 2023, 50(12): 270-278.
WANG Zhendong, DONG Kaikun, HUANG Junheng, WANG Bailing. SemFA:Extreme Multi-label Text Classification Model Based on Semantic Features and Association Attention[J]. Computer Science, 2023, 50(12): 270-278.
- Computer Science. 2023, 50 (12): 270-278. doi:10.11896/jsjkx.230300239
-
Abstract
-
Extreme multi-label text classification (XMTC) is a challenging task that involves finding the most relevant labels for a given text sample from a large and complex label set. Deep learning methods based on the Transformer model have achieved great success in XMTC. However, existing methods have not fully exploited the Transformer's advantages: they ignore the subtle local semantic information of texts at different granularities and fail to robustly establish and utilize the potential associations between labels and texts. To address this, this paper proposes SemFA, an extreme multi-label text classification model based on semantic features and association attention. In SemFA, the top-level outputs of multiple encoders are first concatenated as global features. Then, a convolutional neural network extracts local features from the shallow vectors of the multiple encoders. By combining rich global information with subtle local information at different granularities, more accurate and comprehensive semantic features are obtained. Finally, an association-attention mechanism establishes the potential association between label features and text features, and an association loss is introduced to continuously optimize the model. Experimental results on the Eurlex-4K and Wiki10-31K public datasets show that SemFA outperforms most existing XMTC models, effectively integrating semantic features and association attention to improve overall classification performance.
-
Semi-supervised Semantic Segmentation Method Based on Multiple Teacher Network Model
许华杰, 肖毅烽. 基于多教师网络模型的半监督语义分割方法[J]. 计算机科学, 2023, 50(12): 279-284.
XU Huajie, XIAO Yifeng. Semi-supervised Semantic Segmentation Method Based on Multiple Teacher Network Model[J]. Computer Science, 2023, 50(12): 279-284.
- Computer Science. 2023, 50 (12): 279-284. doi:10.11896/jsjkx.221000245
-
Abstract
-
Methods based on consistency regularization show better performance in semi-supervised semantic segmentation. Such methods usually involve two roles, an explicit or implicit teacher network and a student network, where the student is trained by minimizing the consistency loss between the two networks' predictions for differently perturbed samples. But unreliable predictions from a single teacher network may cause the student network to learn wrong information. By extending the mean teacher (MT) model to multiple teacher networks, the multiple mean teacher network (MMTNet) is proposed, which makes the student network learn from the averaged predictions of multiple teachers and effectively reduces the impact of a single teacher's prediction errors. In addition, MMTNet perturbs unlabeled data by applying both strong and weak data augmentation, which increases the diversity of the unlabeled data, alleviates the coupling between the student and teacher networks to a certain extent, avoids the student overfitting to the teachers, and thereby further reduces the impact of pseudo-label prediction errors in the teacher networks. Experimental results on the VOC 2012 augmented dataset show that MMTNet achieves higher mean intersection over union than other mainstream semi-supervised semantic segmentation methods, and its actual segmentation performance is better.
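The core idea, letting the student match the averaged prediction of several teachers, can be sketched as follows (class-probability vectors as plain lists; the MSE consistency loss is the usual choice in mean-teacher methods, though the paper's exact loss may differ):

```python
def average_predictions(teacher_preds):
    """Element-wise mean of several teachers' class-probability vectors,
    giving a smoother target than any single teacher."""
    k = len(teacher_preds)
    return [sum(p[i] for p in teacher_preds) / k
            for i in range(len(teacher_preds[0]))]

def consistency_loss(student_pred, mean_teacher_pred):
    """Mean squared error between the student output and the averaged
    teacher output; minimized on unlabeled, perturbed samples."""
    n = len(student_pred)
    return sum((s - t) ** 2
               for s, t in zip(student_pred, mean_teacher_pred)) / n
```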
-
Intelligent Networked Electric Vehicles Scheduling Method for Green Energy Saving
陈瑞, 沈鑫, 万得胜, 周恩亦. 面向绿色节能的智能网联电动车调度方法[J]. 计算机科学, 2023, 50(12): 285-293.
CHEN Rui, SHEN Xin, WAN Desheng, ZHOU Enyi. Intelligent Networked Electric Vehicles Scheduling Method for Green Energy Saving[J]. Computer Science, 2023, 50(12): 285-293.
- Computer Science. 2023, 50 (12): 285-293. doi:10.11896/jsjkx.230100099
-
Abstract
-
With the rapid development of new-energy electric vehicles, intelligent networked electric vehicles, featuring intelligence, networking, and energy saving, not only offer the group-intelligence advantages suited to performing large-scale urban tasks but are also widely used in building social services in smart cities. This paper therefore focuses on the urban task dispatching problem for groups of intelligent networked electric vehicles, which faces the following challenge: since the dispatching strategy is closely related to each vehicle's ability to perform its task, the regional benefit generated along each vehicle's driving trajectory must be considered when developing a group dispatching strategy, so that vehicles complete their tasks and return under the constraint of limited battery power. The group dispatching strategy and the individual vehicle path planning scheme thus interact as a tightly coupled NP-hard problem combining weighted bipartite graph matching with the traveling salesman problem. To address this, a vehicle dispatching algorithm based on maximum-weight matching is proposed. It first selects task road segments for individual vehicles within sub-regions using a greedy strategy, then derives the optimal assignment of vehicles to sub-regions from the regional benefits generated by the vehicle travel trajectories, maximizing the total regional benefit. Finally, the proposed algorithm is evaluated on a 30-day operation dataset of 238 intelligent sanitation vehicles in Chengdu, Sichuan Province. Experimental results show that the proposed algorithm improves the urban road sweeping rate by 11.2% on average compared with the source-data method, the randomized algorithm, and the non-updated-map algorithm.
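The assignment of vehicles to sub-regions by maximum-weight matching can be illustrated with a brute-force toy solver (exhaustive search is only viable for tiny instances; a Hungarian/Kuhn-Munkres solver would be used at scale, and the benefit matrix here is hypothetical):

```python
from itertools import permutations

def max_weight_matching(benefit):
    """Exhaustive maximum-weight assignment of n vehicles to n sub-regions.

    benefit[i][j] is the regional benefit of dispatching vehicle i to
    sub-region j. Returns (best total benefit, region index per vehicle).
    """
    n = len(benefit)
    best, best_assign = float("-inf"), None
    for perm in permutations(range(n)):
        total = sum(benefit[i][perm[i]] for i in range(n))
        if total > best:
            best, best_assign = total, perm
    return best, list(best_assign)
```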
-
Semantic Matching Method Integrating Multi-head Attention Mechanism and Siamese Network
臧洁, 周万林, 王妍. 融合多头注意力机制和孪生网络的语义匹配方法[J]. 计算机科学, 2023, 50(12): 294-301.
ZANG Jie, ZHOU Wanlin, WANG Yan. Semantic Matching Method Integrating Multi-head Attention Mechanism and Siamese Network[J]. Computer Science, 2023, 50(12): 294-301. - ZANG Jie, ZHOU Wanlin, WANG Yan
- Computer Science. 2023, 50 (12): 294-301. doi:10.11896/jsjkx.221000083
-
Abstract
-
Considering the matching problem between enterprise resources and customer requirements,existing methods suffer from two problems:the encapsulation of resources and requirements is not accurate enough,and the matching results cannot satisfy users' requirements.To address the diversity and ambiguity of enterprise resource and requirement descriptions,this paper proposes dynamic user-defined template encapsulation.Since most of the encapsulated requirements and resources are Chinese short texts,an interactive text matching model integrating a multi-head attention mechanism and a Siamese network is proposed.The model considers both the semantic differences and the similarities between sentences.It uses word mixing vectors as input to enhance the semantic information of the text,combines the Siamese network with the multi-head attention mechanism,and extracts the contextual semantic features as an independent unit so that they interact fully with the semantic features.To verify the effectiveness of the model,experiments are conducted on the classical LCQMC dataset and the self-constructed CSMD dataset.The results show that the accuracy and overall performance of the model are improved to different degrees,providing a more accurate matching method for enterprise resources and requirements.
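As a rough illustration of the interaction step, the sketch below runs multi-head scaled dot-product attention between two toy sentence representations. The dimensions, the head-splitting without projection matrices, and the pooled similarity are all invented for illustration; the paper's model additionally uses a shared Siamese encoder and word mixing vectors.

```python
import numpy as np

def multi_head_attention(Q, K, V, heads):
    """Split the feature dimension into heads and run scaled dot-product
    attention per head (learned projection matrices omitted for brevity)."""
    d = Q.shape[-1] // heads
    out = []
    for h in range(heads):
        q, k, v = (m[:, h * d:(h + 1) * d] for m in (Q, K, V))
        scores = q @ k.T / np.sqrt(d)
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)     # softmax over the other sentence
        out.append(w @ v)
    return np.concatenate(out, axis=-1)

rng = np.random.default_rng(0)
sent_a = rng.normal(size=(7, 8))   # 7 token vectors of sentence A, dim 8
sent_b = rng.normal(size=(9, 8))   # 9 token vectors of sentence B
# interaction step: each sentence attends over the other's token features
a_ctx = multi_head_attention(sent_a, sent_b, sent_b, heads=2)
b_ctx = multi_head_attention(sent_b, sent_a, sent_a, heads=2)
sim = float(a_ctx.mean(axis=0) @ b_ctx.mean(axis=0))  # crude pooled similarity
print(a_ctx.shape, b_ctx.shape)
```

A trained model would replace the random vectors with encoder outputs and the dot-product pooling with a learned classification layer.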
-
DL+:An Enhanced Double-layer Framework for Knowledge Graph Reasoning
武月佳, 周建涛. DL+:一种增强型双层知识图谱推理框架[J]. 计算机科学, 2023, 50(12): 302-313.
WU Yuejia, ZHOU Jiantao. DL+:An Enhanced Double-layer Framework for Knowledge Graph Reasoning[J]. Computer Science, 2023, 50(12): 302-313.
- Computer Science. 2023, 50 (12): 302-313. doi:10.11896/jsjkx.221000170
-
Abstract
-
As an important research field of graph databases,the knowledge graph(KG) can formally describe things and their relationships in the real world.However,its incompleteness and sparsity hinder its application in many fields.Knowledge graph reasoning(KGR) technology aims to complete the knowledge graph by inferring new knowledge or identifying wrong knowledge according to the knowledge already in the graph.Although existing reasoning methods can obtain partially effective knowledge paths,problems remain such as incomplete path acquisition,neglect of local information and the introduction of noise.On this basis,this paper identifies and explicitly formulates the problem of poor path connectivity,proves that reasoning validity is positively correlated with the path connectivity ratio between entities,and further proposes a double-layer framework,DL+,to enhance the performance of existing reasoning methods.The first layer is a knowledge augmenter,which uses a community discovery algorithm to extract entity neighborhood information from the initial KG and construct new knowledge to expand the knowledge scale,and then applies a community pruning optimization method to remove the noise introduced during construction.Finally,the augmented KG is restored to the same structure as the initial KG representation and output to the second layer,ensuring the “plug-and-play” property of the model.The second layer is a knowledge reasoner,which enhances an existing KGR model by learning and reasoning on the knowledge-augmented KG,so that the model obtains better reasoning results when the graph's path connectivity ratio is high.Extensive experimental results on four standard KG datasets show that DL+ effectively alleviates the problem of poor path connectivity between entities and improves the average prediction accuracy by 4.798% compared with nine types of benchmark methods.
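The "path connectivity ratio" that the framework targets can be illustrated with a toy computation: the fraction of ordered entity pairs linked by some directed path, before and after knowledge augmentation. The triples and this exact ratio definition are assumptions for illustration, not the paper's formal definition.

```python
from collections import deque

def connectivity_ratio(edges, entities):
    """Fraction of ordered entity pairs linked by a directed path --
    a toy stand-in for the path connectivity ratio discussed above."""
    adj = {e: [] for e in entities}
    for head, _relation, tail in edges:
        adj[head].append(tail)
    def reachable(start):
        seen, queue = {start}, deque([start])
        while queue:
            for nxt in adj[queue.popleft()]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen - {start}
    pairs = sum(len(reachable(e)) for e in entities)
    return pairs / (len(entities) * (len(entities) - 1))

kg = [("a", "r1", "b"), ("b", "r2", "c")]              # sparse initial KG
augmented = kg + [("a", "r3", "c"), ("c", "r4", "a")]  # after augmentation
print(connectivity_ratio(kg, "abc"), connectivity_ratio(augmented, "abc"))
```

The augmenter's goal, in these terms, is to raise this ratio while the pruning step keeps the added triples from being noise.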
-
Hierarchical Reinforcement Learning Method Based on Trajectory Information
徐亚鹏, 刘全, 栗军伟. 基于轨迹信息量的分层强化学习方法[J]. 计算机科学, 2023, 50(12): 314-321.
XU Yapeng, LIU Quan, LI Junwei. Hierarchical Reinforcement Learning Method Based on Trajectory Information[J]. Computer Science, 2023, 50(12): 314-321.
- Computer Science. 2023, 50 (12): 314-321. doi:10.11896/jsjkx.221100096
-
Abstract
-
The option-based hierarchical reinforcement learning(O-HRL) algorithm has the characteristic of temporal abstraction,which allows it to effectively deal with problems such as long-horizon tasks and sparse rewards that are difficult to solve in reinforcement learning.Existing studies of O-HRL methods mainly focus on improving data efficiency by increasing the sampling efficiency and the exploration ability of the agent,so as to maximize its probability of obtaining excellent experiences.However,in terms of policy stability,the high-level policy guides low-level actions by considering only the state,so option information is underutilized,which leads to instability of the low-level policy.To address this problem,a hierarchical reinforcement learning method based on trajectory information(THRL) is proposed.THRL uses different types of information from option trajectories to guide the selection of low-level actions,and generates inferred options from the obtained extended trajectory information.A discriminator is introduced that takes the inferred options and the original options as inputs to produce internal rewards,which makes the selection of low-level actions more consistent with the current option policy and thus solves the instability problem of low-level policies.The effectiveness of THRL is verified by applying it to the MuJoCo environment alongside state-of-the-art deep reinforcement learning algorithms,and experimental results show that THRL achieves better stability and performance.
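A toy rendering of the discriminator-based internal reward: the discriminator infers which option likely produced the extended trajectory, and low-level behavior consistent with the current option is rewarded. The log-probability form and the probabilities below are assumptions, not THRL's exact reward definition.

```python
import math

def internal_reward(option, inferred_probs):
    """Toy internal reward in the spirit of THRL (exact form assumed):
    higher when the discriminator believes the trajectory matches the
    current option; the epsilon guards against log(0)."""
    return math.log(inferred_probs[option] + 1e-8)

# hypothetical discriminator output over 3 options for one trajectory
inferred = [0.1, 0.7, 0.2]
# acting consistently with option 1 yields the larger internal reward
print(internal_reward(1, inferred) > internal_reward(0, inferred))
```

This internal reward would be added to the environment reward when training the low-level policy, pulling its actions toward the current option's behavior.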
-
Mixed Path HMC Sampling Methods for Molecular Tree Spaces
李晓鹏, 凌诚, 高敬阳. 基于混合路径HMC的分子树空间采样方法[J]. 计算机科学, 2023, 50(12): 322-329.
LI Xiaopeng, LING Cheng, GAO Jingyang. Mixed Path HMC Sampling Methods for Molecular Tree Spaces[J]. Computer Science, 2023, 50(12): 322-329.
- Computer Science. 2023, 50 (12): 322-329. doi:10.11896/jsjkx.221100057
-
Abstract
-
With the increasing abundance of modern molecular sequence data and the dramatic expansion of the tree-like topological space describing historical relationships between species,reliable inference of phylogenetic trees continues to face enormous challenges.In recent years,Hamiltonian Monte Carlo(HMC),one of the most advanced algorithms in the Markov chain Monte Carlo(MCMC) family,has been shown to be applicable to phylogenetic analysis;it avoids the large amount of random-walk behavior present in traditional MCMC algorithms and speeds up the mixing of Markov chains.However,in the more complex multimodal phylogenetic tree space,the HMC algorithm cannot escape from a local high-probability region by obtaining proposals from other modes.To improve the robustness of the algorithm,a mixed path Hamiltonian Monte Carlo(MPHMC) optimization strategy is proposed in this paper.Without adding extra computational cost,the algorithm alternates sampling paths that use a non-HMC update component for discrete parameters with deterministic HMC updates,and introduces a branch rearrangement strategy with greater topological variation in the tree space,enabling freer traversal of the tree space of the entire posterior distribution.Experiments on five empirical datasets demonstrate that MPHMC samples better from the correct posterior distribution,and that the single-path HMC sampling algorithm may fail on larger datasets that are more difficult to sample,while MPHMC achieves a sampling efficiency gain of more than 14% over the widely used phylogenetic analysis tool MrBayes(MCMC).
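The deterministic HMC update relies on a standard leapfrog integrator for the continuous parameters (branch lengths, say); the discrete topology moves would be handled by the separate non-HMC component described above. A minimal 1-D sketch with a toy Gaussian target:

```python
def leapfrog(q, p, grad_logp, step, n_steps):
    """Standard leapfrog integrator: half-step on momentum, alternating
    full steps on position and momentum, closing half-step on momentum."""
    p = p + 0.5 * step * grad_logp(q)
    for _ in range(n_steps - 1):
        q = q + step * p
        p = p + step * grad_logp(q)
    q = q + step * p
    p = p + 0.5 * step * grad_logp(q)
    return q, p

# toy 1-D target: standard normal log-density, so grad log p(q) = -q
q0, p0 = 1.0, 0.5
q1, p1 = leapfrog(q0, p0, lambda x: -x, step=0.1, n_steps=10)
H = lambda q, p: 0.5 * q * q + 0.5 * p * p   # Hamiltonian (potential + kinetic)
print(H(q0, p0), H(q1, p1))  # nearly equal: leapfrog almost conserves H
```

The near-conservation of the Hamiltonian is what gives HMC its high acceptance rates; in the multimodal tree space the issue is not this integrator but the lack of cross-mode proposals, which is what the mixed-path alternation addresses.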
-
Sparse Adversarial Examples Attacking on Video Captioning Model
邱江兴, 汤学明, 王天美, 王成, 崔永泉, 骆婷. 针对视频语义描述模型的稀疏对抗样本攻击[J]. 计算机科学, 2023, 50(12): 330-336.
QIU Jiangxing, TANG Xueming, WANG Tianmei, WANG Chen, CUI Yongquan, LUO Ting. Sparse Adversarial Examples Attacking on Video Captioning Model[J]. Computer Science, 2023, 50(12): 330-336.
- Computer Science. 2023, 50 (12): 330-336. doi:10.11896/jsjkx.221100068
-
Abstract
-
Although multi-modal deep learning models such as image captioning models have been proved vulnerable to adversarial examples,the adversarial susceptibility of video caption generation remains under-examined.There are two main reasons for this.On the one hand,in contrast to image captioning systems,the input of a video captioning model is a stream of images rather than a single picture,and the computation would be enormous if every frame of a video were perturbed.On the other hand,compared with video recognition models,the output of the model is not a single word but a more complex semantic description.To solve the above problems and study the robustness of video captioning models,this paper proposes a sparse adversarial attack method.First,a method based on the idea of saliency maps in image object recognition models is proposed to measure the contribution of different frames to the output of the video captioning model,and an L2-norm-based optimization objective function suited to video captioning models is designed.With a success rate of 96.4% for the targeted attack and a reduction in queries of more than 45% compared with randomly selecting video frames,the evaluation on the MSR-VTT dataset demonstrates the effectiveness of the strategy and reveals the vulnerability of the video captioning model.
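The frame-selection idea can be sketched as ranking frames by a contribution score and perturbing only the top few. The score below (sum of absolute values) is a stand-in; the paper derives frame contributions from a saliency-map-style analysis, and all shapes here are invented.

```python
import numpy as np

def select_key_frames(video, score_fn, k):
    """Rank frames by a saliency-style contribution score and keep the
    top-k for perturbation -- the sparsity idea of the attack above."""
    scores = np.array([score_fn(frame) for frame in video])
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(1)
video = rng.normal(size=(16, 8, 8, 3))   # 16 frames of a toy 8x8 RGB clip
# stand-in for a gradient/saliency-based contribution score
saliency = lambda frame: float(np.abs(frame).sum())
keys = select_key_frames(video, saliency, k=4)
print(sorted(keys.tolist()))  # indices of the 4 highest-scoring frames
```

Perturbing only these frames is what keeps the query count and computation far below attacking all frames of the clip.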
-
Domain-Flux Botnet Detection Method with Fusion of Character and Word Dual-channel
李晓冬, 宋元凤, 李育强. 一种融合字词双通道的Domain-Flux僵尸网络检测方法[J]. 计算机科学, 2023, 50(12): 337-342.
LI Xiaodong, SONG Yuanfeng, LI Yuqiang. Domain-Flux Botnet Detection Method with Fusion of Character and Word Dual-channel[J]. Computer Science, 2023, 50(12): 337-342.
- Computer Science. 2023, 50 (12): 337-342. doi:10.11896/jsjkx.221000179
-
Abstract
-
Domain-Flux is a technique for keeping a malicious botnet in operation by constantly changing the domain name of the botnet owner's command and control(C&C) server,which can effectively evade detection by network security devices.Aiming at the problems that existing detection methods do not extract information from Domain-Flux domain names comprehensively and cannot effectively capture the key classification features,this paper proposes a detection model that fuses a character channel and a word channel.It extracts local features and global features on the two channels by using a convolutional neural network(CNN) and a bidirectional long short-term memory network(BiLSTM) respectively,which enriches the feature information of the input domain names and improves classification performance.In the character vector channel,local spatial features are extracted from random-character domain names.In the root vector channel,an intra-class factor is introduced on the basis of the TF-IDF algorithm to weight root importance into the word vectors,and then the contextual temporal features of the domain names' root combination sequences are extracted.Experimental results show that the detection accuracy of the fused character-word dual-channel model is improved by 7.12% and 5.86% compared with the single TextCNN and BiLSTM models,respectively.It also achieves higher precision for dictionary-based Domain-Flux detection.
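One possible reading of the intra-class-weighted TF-IDF step is sketched below. The exact factor definition is an assumption: here it is the share of a root's document frequency that falls inside the sample's own class, so roots concentrated in one class (e.g. dictionary roots reused by a DGA family) get boosted. The tokenized "roots" and labels are invented.

```python
import math
from collections import Counter

def tfidf_with_class_factor(docs, labels):
    """Toy TF-IDF with an intra-class factor (assumed form): weight =
    term frequency * inverse document frequency * class concentration."""
    n = len(docs)
    df = Counter(word for doc in docs for word in set(doc))
    weights = []
    for doc, y in zip(docs, labels):
        tf = Counter(doc)
        w = {}
        for word, count in tf.items():
            idf = math.log(n / df[word])
            # how many documents of this sample's class contain the root
            in_class = sum(word in d for d, l in zip(docs, labels) if l == y)
            factor = in_class / df[word]
            w[word] = (count / len(doc)) * idf * factor
        weights.append(w)
    return weights

docs = [["mail", "login", "x7f3a"], ["mail", "shop"], ["q9z2k", "x7f3a"]]
labels = ["benign", "benign", "dga"]
w = tfidf_with_class_factor(docs, labels)
```

In the model these weights would scale the root embeddings before the BiLSTM channel; the character CNN channel is unaffected.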
-
Contribution-based Federated Learning Approach for Global Imbalanced Problem
吴飞, 宋一波, 季一木, 胥熙, 王木森, 荆晓远. 面向全局不平衡问题的基于贡献度的联邦学习方法[J]. 计算机科学, 2023, 50(12): 343-348.
WU Fei, SONG Yibo, JI Yimu, XU Xi, WANG Musen, JING Xiaoyuan. Contribution-based Federated Learning Approach for Global Imbalanced Problem[J]. Computer Science, 2023, 50(12): 343-348.
- Computer Science. 2023, 50 (12): 343-348. doi:10.11896/jsjkx.221100111
-
Abstract
-
Under the premise of protecting data privacy,federated learning unites multiple parties to train a model together so as to improve the accuracy of the global model.Class imbalance of data is a challenging problem in the federated learning paradigm.Data imbalance in federated learning can be divided into local data imbalance and global data imbalance,and there is currently little research on the global case.This paper proposes a contribution-based federated learning approach for the global imbalance problem(CGIFL).First,a contribution-based global discriminant loss is designed to adjust the model optimization direction during local training,making models pay more attention to the global minority classes so as to improve their generalization ability.A contribution-based dynamic federated aggregation algorithm is also designed to optimize the participation weight of each node and better balance the updating direction of the global model.Experimental results on the MNIST,CIFAR10 and CIFAR100 datasets demonstrate the effectiveness of CGIFL in solving the global data imbalance problem.
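The aggregation side can be sketched as contribution-weighted averaging: unlike plain FedAvg, which weights only by sample count, each node's update is weighted by a normalized contribution score. How CGIFL actually computes those scores is not reproduced here; the one-parameter models and scores below are invented.

```python
def aggregate(global_model, client_models, contributions):
    """Sketch of contribution-weighted federated averaging: each
    parameter of the new global model is the contribution-normalized
    mean of the clients' parameters (illustrative form, not CGIFL's)."""
    total = sum(contributions)
    weights = [c / total for c in contributions]
    return {
        key: sum(w * m[key] for w, m in zip(weights, client_models))
        for key in global_model
    }

global_model = {"w": 0.0}                    # toy one-parameter model
clients = [{"w": 1.0}, {"w": 3.0}]           # two local updates
agg = aggregate(global_model, clients, contributions=[1.0, 3.0])
print(agg)  # the higher-contribution client pulls the average toward itself
```

Making the contribution scores dynamic across rounds is what lets the server keep steering the global update direction as class distributions reveal themselves.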
-
Network Asset Security Assessment Model Based on Bayesian Attack Graph
曾昆仑, 张尼, 李维皓, 秦媛媛. 基于贝叶斯攻击图的网络资产安全评估模型[J]. 计算机科学, 2023, 50(12): 349-358.
ZENG Kunlun, ZHANG Ni, LI Weihao, QIN Yuanyuan. Network Asset Security Assessment Model Based on Bayesian Attack Graph[J]. Computer Science, 2023, 50(12): 349-358.
- Computer Science. 2023, 50 (12): 349-358. doi:10.11896/jsjkx.221000019
-
Abstract
-
Current attack graph models do not consider the reuse of vulnerabilities,and their calculation of risk probability is neither comprehensive nor accurate.To overcome these difficulties and accurately evaluate the security of the network asset environment,a network asset security assessment model based on a Bayesian attack graph is proposed.First,the success probabilities of atomic attacks are calculated according to vulnerability exploitability,host protection strength,time-dependent vulnerability exploitability and vulnerability source,and the attack graph is quantified with a Bayesian network.Second,the success probabilities of some atomic attacks and the corresponding prior reachable probabilities are modified according to the reuse of vulnerabilities,so as to evaluate the static security risk of network assets.Third,the reachable probabilities of related nodes are updated dynamically according to real-time attack events,realizing dynamic assessment of network asset security risk.Finally,the effectiveness of the proposed model is verified through simulation experiments and comparison with existing works.
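For an acyclic attack graph with OR-nodes, prior reachable probabilities can be computed by noisy-OR-style propagation in topological order: a node is reached if any parent attack succeeds. The node names and probabilities below are invented, and this sketch omits the paper's reuse corrections and dynamic updates.

```python
def prior_reachability(graph, p_atomic, roots):
    """Prior reachable probabilities on an acyclic attack graph.
    For an OR-node: P(node) = 1 - prod(1 - P(parent) * p_atomic)."""
    prob = {r: 1.0 for r in roots}       # attacker's entry points
    for node, parents in graph:          # nodes in topological order
        fail_all = 1.0
        for parent in parents:
            fail_all *= 1.0 - prob[parent] * p_atomic[(parent, node)]
        prob[node] = 1.0 - fail_all
    return prob

# toy network: attacker -> web server -> database, plus a direct path
graph = [("web", ["attacker"]), ("db", ["web", "attacker"])]
p_atomic = {("attacker", "web"): 0.8,    # atomic attack success probabilities
            ("web", "db"): 0.5,
            ("attacker", "db"): 0.2}
probs = prior_reachability(graph, p_atomic, roots=["attacker"])
print(probs)
```

The paper's static assessment then revises some of these atomic probabilities when a vulnerability is reused across hosts, and the dynamic assessment replaces priors with posteriors as real attack evidence arrives.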
-
Generate Transferable Adversarial Network Traffic Using Reversible Adversarial Padding
杨有欢, 孙磊, 戴乐育, 郭松, 毛秀青, 汪小芹. 使用RAP生成可传输的对抗网络流量[J]. 计算机科学, 2023, 50(12): 359-367.
YANG Youhuan, SUN Lei, DAI Leyu, GUO Song, MAO Xiuqing, WANG Xiaoqin. Generate Transferable Adversarial Network Traffic Using Reversible Adversarial Padding[J]. Computer Science, 2023, 50(12): 359-367.
- Computer Science. 2023, 50 (12): 359-367. doi:10.11896/jsjkx.221000155
-
Abstract
-
More and more deep learning methods are used for network traffic classification,which at the same time brings the threat of adversarial network traffic(ANT).ANT makes deep-learning-based network traffic classifiers predict incorrectly,causing the security protection system to make wrong decisions.Although adversarial algorithms from the vision field can be used to generate ANT,the perturbations generated by these algorithms change the header information of the network traffic,causing the traffic to lose its attributes and information.In this paper,the differences between adversarial examples for network traffic tasks and for vision tasks are analyzed,and an attack algorithm suitable for generating ANT,reversible adversarial padding(RAP),is proposed.RAP exploits the difference between the length of a network traffic packet and the input length of the network traffic classifier to fill the tail padding area with perturbations that are not restricted to a norm ball.In addition,to solve the problem that the effects of perturbations of different lengths are difficult to compare,this paper proposes a gain-based evaluation metric that comprehensively considers both the length of the perturbations and the strength of the adversarial attack algorithm.Experimental results show that RAP not only retains the transferability property of adversarial network traffic but also achieves a higher attack gain than traditional algorithms.
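The padding idea itself is simple to sketch: the packet's real bytes, header included, are left untouched, and the adversarial bytes live only in the tail padding up to the classifier's fixed input length. The byte values and lengths below are illustrative; the real attack optimizes the padding bytes against the classifier rather than using a constant fill.

```python
import numpy as np

def rap_pad(packet, model_input_len, perturbation):
    """Sketch of reversible adversarial padding: concatenate the
    unmodified packet bytes with adversarial bytes that occupy only
    the padding region the classifier would see anyway."""
    pad_len = model_input_len - len(packet)
    assert len(perturbation) == pad_len, "perturbation must fit the padding area"
    return np.concatenate([packet, perturbation])

packet = np.arange(6, dtype=np.uint8)   # toy 6-byte packet (header intact)
adv = rap_pad(packet, model_input_len=10,
              perturbation=np.full(4, 255, dtype=np.uint8))
print(adv[:6].tolist())  # the original bytes survive unchanged
```

Because the original bytes are untouched, stripping the padding recovers the packet exactly, which is what makes the perturbation "reversible" and keeps the traffic's attributes intact.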
-
CASESC:A Cloud Auditing Scheme Based on Ethereum Smart Contracts
郭彩彩, 金瑜. CASESC:基于以太坊智能合约的云审计方案[J]. 计算机科学, 2023, 50(12): 368-376.
GUO Caicai, JIN Yu. CASESC:A Cloud Auditing Scheme Based on Ethereum Smart Contracts[J]. Computer Science, 2023, 50(12): 368-376.
- Computer Science. 2023, 50 (12): 368-376. doi:10.11896/jsjkx.221000185
-
Abstract
-
People prefer cloud storage for its high scalability and low cost,but ensuring the integrity of cloud data has become a security challenge that needs to be solved urgently.Since blockchain's decentralization and tamper resistance can largely solve problems such as the single point of failure and the security threats existing in cloud auditing schemes based on a third-party auditor(TPA),some scholars have proposed blockchain-based cloud auditing schemes.However,these schemes need the data owner(DO),or a delegate of the DO,to validate the auditing proof,which not only requires the DO to stay online but also increases its auditing burden.Moreover,most of them are implemented only in simulated blockchain environments.Therefore,this paper proposes a cloud auditing scheme based on Ethereum smart contracts,CASESC.CASESC uses the Solidity language to write Ethereum smart contract code that sends auditing requests and validates the auditing proof returned by the cloud service provider(CSP),and stores auditing results and related information in Ethereum for the DO to consult.Without delegating others or staying online,the DO can thus have CASESC work on its behalf,reducing its auditing overhead.In addition,CASESC is evaluated on the Ethereum public test network Goerli and on a private blockchain built with Ganache to prove its availability.Theoretical analysis and experimental evaluation show that CASESC significantly reduces the auditing overhead of the DO without increasing the overall auditing overhead.
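The contract's challenge-response flow can be caricatured in a few lines: tags are recorded when the data is outsourced, the contract challenges a block, and the proof returned by the CSP is checked against the stored tag. A plain SHA-256 hash stands in for the scheme's actual cryptographic proof machinery, and all names here are invented.

```python
import hashlib

def tag(block: bytes) -> bytes:
    """Integrity tag for one data block (a bare hash, for illustration)."""
    return hashlib.sha256(block).digest()

def audit(stored_tags, challenge_index, proof_block):
    """Stand-in for the contract's verification step: check the block
    returned for the challenged index against the tag recorded at upload.
    CASESC's real proofs are cryptographic, not plain hash comparisons."""
    return tag(proof_block) == stored_tags[challenge_index]

blocks = [b"chunk-0", b"chunk-1", b"chunk-2"]
tags = [tag(b) for b in blocks]          # kept on-chain at outsourcing time
print(audit(tags, 1, b"chunk-1"), audit(tags, 1, b"tampered"))
```

In the real scheme this check runs inside the Solidity contract, so the verification result itself is recorded on-chain and the DO never has to be online for the audit.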