Hybrid MPI+OpenMP Parallel Method on Polyhedral Grid Generation in OpenFoam
刘江, 刘文博, 张矩. OpenFoam中多面体网格生成的MPI+OpenMP混合并行方法[J]. 计算机科学, 2022, 49(3): 3-10.
LIU Jiang, LIU Wen-bo, ZHANG Ju. Hybrid MPI+OpenMP Parallel Method on Polyhedral Grid Generation in OpenFoam[J]. Computer Science, 2022, 49(3): 3-10. doi:10.11896/jsjkx.210700060
-
Grid generation is an important step in computational fluid dynamics. In large-scale numerical simulation, the time spent on grid generation increases with the number of grid cells, which in turn grows with the required simulation accuracy. Based on the grid generation algorithm in the open-source software OpenFoam, this paper proposes a hybrid OpenMP and MPI parallel method for polyhedral grid generation. Theoretical analysis shows that, when the hybrid parallel method is used to generate grids of the same quality, increasing the number of threads and grid cells reduces the time consumption of grid generation. Three numerical simulations using different solvers show that the grids generated by the hybrid parallel method and the original method have comparable quality, and the simulation results are almost indistinguishable from those of the original method. Furthermore, the time needed by this method to generate grids of the same quality and quantity can be reduced to less than a quarter of that without OpenMP parallelism.
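As a rough illustration of the theoretical claim (a generic Amdahl-style work-partition model assumed here for this listing, not the paper's own derivation), the grid generation time for $N$ cells on $p$ MPI processes, each running $t$ OpenMP threads, can be sketched as

$$T(p,t) \approx T_{\mathrm{serial}} + \frac{c\,N}{p\,t} + T_{\mathrm{comm}}(p),$$

where the middle term is the thread-parallel cell-generation work and $T_{\mathrm{comm}}$ grows with the number of MPI processes. Increasing $t$ shrinks only the middle term, which dominates as $N$ grows, consistent with the reported reduction to less than a quarter of the original time.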
-
Reducing Head-of-Line Blocking on Network in Hadoop Clusters
田冰川, 田臣, 周宇航, 陈贵海, 窦万春. 减少Hadoop集群中网络队头阻塞的调度算法[J]. 计算机科学, 2022, 49(3): 11-22.
TIAN Bing-chuan, TIAN Chen, ZHOU Yu-hang, CHEN Gui-hai, DOU Wan-chun. Reducing Head-of-Line Blocking on Network in Hadoop Clusters[J]. Computer Science, 2022, 49(3): 11-22. doi:10.11896/jsjkx.210900117
-
Users of big data analytics systems want task execution time to be as short as possible. However, during task execution, both network and computing resources may become bottlenecks that hinder task execution. Through observation and analysis of big data analysis systems, the following conclusions are drawn: 1) the data-parallel framework should switch between multiple working modes depending on the current resource bottleneck; 2) the scheduling of subtasks should fully consider new tasks that may arrive in the future, not only the currently submitted tasks. Based on these observations, a new task scheduling system, Duopoly, is designed and implemented, which consists of two parts: cans, a network scheduler that is aware of computational resources, and nats, a sub-task scheduler that is aware of network resources. The effectiveness of Duopoly is evaluated on small-scale physical clusters and through large-scale simulation experiments, and the results show that Duopoly can reduce the average task completion time by 37.30%~76.16% compared with existing work.
-
Incentive Mechanism for Hierarchical Federated Learning Based on Online Double Auction
杜辉, 李卓, 陈昕. 基于在线双边拍卖的分层联邦学习激励机制[J]. 计算机科学, 2022, 49(3): 23-30.
DU Hui, LI Zhuo, CHEN Xin. Incentive Mechanism for Hierarchical Federated Learning Based on Online Double Auction[J]. Computer Science, 2022, 49(3): 23-30. doi:10.11896/jsjkx.210800051
-
In hierarchical federated learning, energy-constrained mobile devices consume their own resources to participate in model training. In order to reduce the energy consumption of mobile devices, this paper formulates the problem of minimizing the total energy consumption of mobile devices without exceeding the maximum tolerance time of hierarchical federated learning. Different training rounds of an edge server can select different mobile devices, and a mobile device can also train models under different edge servers concurrently. Therefore, this paper proposes the ODAM-DS algorithm based on an online double auction mechanism. Based on optimal stopping theory, the edge server is supported in selecting mobile devices at the best time, so as to minimize the average energy consumption of the mobile devices. Theoretical analysis of the proposed online double auction mechanism proves that it satisfies incentive compatibility, individual rationality and weak budget balance constraints. Simulation results show that the energy consumption of the ODAM-DS algorithm is 19.04% lower than that of the existing HFEL algorithm.
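The optimal-stopping component can be illustrated with a minimal, hypothetical sketch (function names and the classical 1/e rule are illustrative only; the actual ODAM-DS mechanism is an online double auction whose matching and payment rules are not modeled here):

import math

def select_device(bids):
    """Classic 1/e optimal-stopping rule over a sequence of (device_id, energy_bid).

    Purely illustrative of the stopping idea: observe an initial fraction of bids,
    then commit to the first device whose bid beats the best bid seen so far.
    """
    n = len(bids)
    observe = max(1, round(n / math.e))            # observation phase
    benchmark = min(bid for _, bid in bids[:observe])
    for device_id, bid in bids[observe:]:
        if bid <= benchmark:                       # first bid beating the benchmark
            return device_id
    return bids[-1][0]                             # otherwise fall back to the last device

# Example: five devices reporting energy costs in arrival order
print(select_device([("d1", 5.2), ("d2", 3.9), ("d3", 4.4), ("d4", 2.8), ("d5", 6.0)]))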
-
Reliable Incentive Mechanism for Federated Learning of Electric Metering Data
王鑫, 周泽宝, 余芸, 陈禹旭, 任昊文, 蒋一波, 孙凌云. 一种面向电能量数据的联邦学习可靠性激励机制[J]. 计算机科学, 2022, 49(3): 31-38.
WANG Xin, ZHOU Ze-bao, YU Yun, CHEN Yu-xu, REN Hao-wen, JIANG Yi-bo, SUN Ling-yun. Reliable Incentive Mechanism for Federated Learning of Electric Metering Data[J]. Computer Science, 2022, 49(3): 31-38. doi:10.11896/jsjkx.210700195
-
Federated learning solves the problem of data interoperability under the premise of user privacy protection and data security. However, traditional federated learning lacks an incentive mechanism to encourage and attract data owners to participate, and the lack of an audit mechanism gives malicious nodes the opportunity to conduct sabotage attacks. In response to these problems, this paper proposes a reliable federated learning incentive mechanism for electric metering data based on blockchain technology. The method works from two aspects: rewarding data participants for taking part in training, and evaluating the reliability of all participants. We design an algorithm to evaluate the training effect of data participants; the contribution of each data participant is determined from the perspective of training effect and training cost, and participants are rewarded according to their contribution. At the same time, a reputation model is established for the reliability of the data participants, and their reputation is updated according to the training effect, so as to achieve a reliability assessment of data participants. A case study based on an open-source federated learning framework and real electric metering data verifies the effectiveness of the method.
-
Study on Communication Optimization of Federated Learning in Multi-layer Wireless Edge Environment
赵罗成, 屈志昊, 谢在鹏. 面向多层无线边缘环境下的联邦学习通信优化的研究[J]. 计算机科学, 2022, 49(3): 39-45.
ZHAO Luo-cheng, QU Zhi-hao, XIE Zai-peng. Study on Communication Optimization of Federated Learning in Multi-layer Wireless Edge Environment[J]. Computer Science, 2022, 49(3): 39-45. doi:10.11896/jsjkx.210800054
-
Existing model synchronization mechanisms of federated learning (FL) are mostly based on a single-layer parameter server architecture, which is difficult to adapt to current heterogeneous wireless network scenarios and suffers from problems such as excessive communication load on a single point and poor scalability. In response to these problems, this paper proposes an efficient model synchronization scheme for FL in hybrid wireless edge networks. In a hybrid wireless edge network, edge devices transmit local models to nearby small base stations; after receiving the local models, small base stations execute the aggregation algorithm and send the aggregated models to the macro base station to update the global model. Considering the heterogeneity of channel performance and the competition for data transmission on wireless channels, this paper proposes a new grouped asynchronous model synchronization scheme and designs a transmission-rate-aware channel allocation algorithm. Experiments on real data sets show that the proposed transmission-rate-aware channel allocation algorithm within the grouped asynchronous model synchronization scheme can reduce communication time by 25%~60% and greatly improve the training efficiency of FL.
-
DRL-based Vehicle Control Strategy for Signal-free Intersections
欧阳卓, 周思源, 吕勇, 谭国平, 张悦, 项亮亮. 基于深度强化学习的无信号灯交叉路口车辆控制[J]. 计算机科学, 2022, 49(3): 46-51.
OUYANG Zhuo, ZHOU Si-yuan, LYU Yong, TAN Guo-ping, ZHANG Yue, XIANG Liang-liang. DRL-based Vehicle Control Strategy for Signal-free Intersections[J]. Computer Science, 2022, 49(3): 46-51. doi:10.11896/jsjkx.210700010
-
Using deep learning technology to control vehicles at intersections is a research hotspot in the field of intelligent transportation. Previous studies suffer from the inability to adapt to dynamic changes in the number of self-driving vehicles, slow convergence of training, and locally optimal training results. This work focuses on how autonomous vehicles can use distributed deep reinforcement learning to improve traffic efficiency at unsignalized intersections. First, an efficient reward function is proposed to apply the distributed reinforcement learning algorithm to the unsignalized intersection scenario, which can effectively improve traffic efficiency by relying only on local information, even when a vehicle cannot obtain the state of the whole intersection. Then, to address the inefficient training of reinforcement learning methods in open intersection scenarios, a transfer learning approach is used to improve training efficiency: the strategy trained in a closed figure-of-eight scenario serves as a warm start, and training continues in the unsignalized intersection scenario. Finally, this paper proposes a strategy that can adapt to any proportion of autonomous vehicles and improves intersection throughput under all such proportions. The algorithm is validated on the simulation platform Flow, and the experimental results show that the proposed agent model converges quickly in training, can adapt to dynamic changes in the proportion of self-driving vehicles, and can effectively improve the efficiency of intersections.
-
Overview of Vulnerability Detection Methods for Ethereum Solidity Smart Contracts
张潆藜, 马佳利, 刘子昂, 刘新, 周睿. 以太坊Solidity智能合约漏洞检测方法综述[J]. 计算机科学, 2022, 49(3): 52-61.
ZHANG Ying-li, MA Jia-li, LIU Zi-ang, LIU Xin, ZHOU Rui. Overview of Vulnerability Detection Methods for Ethereum Solidity Smart Contracts[J]. Computer Science, 2022, 49(3): 52-61. doi:10.11896/jsjkx.210700004
-
Based on blockchain technology, the Ethereum Solidity smart contract is a computer protocol designed to propagate, verify, or execute contracts in an informational way, and it provides a foundation for various distributed application services. Although it has been in use for less than six years, its security problems have broken out frequently and caused substantial financial losses, which has attracted increasing attention in security inspection research. This paper first introduces the specific mechanisms and operating principles of smart contracts based on Ethereum-related techniques, and analyzes frequently occurring smart contract vulnerabilities that derive from the characteristics of smart contracts. Then, it reviews the traditional mainstream smart contract vulnerability detection tools in terms of symbolic execution, fuzzing, formal verification, and taint analysis. In addition, to cope with endless new vulnerabilities and the need to improve detection efficiency, vulnerability detection based on machine learning in recent years is classified and summarized according to the way the problem is transformed, from three perspectives: text processing, non-Euclidean graphs and standard images. Finally, this paper proposes formulating a more extensive and accurate standardized information database and measurement indicators to address the insufficiency of the detection methods in these two directions.
-
Dynamic Network Security Analysis Based on Bayesian Attack Graphs
李嘉睿, 凌晓波, 李晨曦, 李子木, 杨家海, 张蕾, 吴程楠. 基于贝叶斯攻击图的动态网络安全分析[J]. 计算机科学, 2022, 49(3): 62-69.
LI Jia-rui, LING Xiao-bo, LI Chen-xi, LI Zi-mu, YANG Jia-hai, ZHANG Lei, WU Cheng-nan. Dynamic Network Security Analysis Based on Bayesian Attack Graphs[J]. Computer Science, 2022, 49(3): 62-69. doi:10.11896/jsjkx.210800107
-
To overcome the difficulty that current attack graph models cannot reflect real-time network attack events, a method including a forward risk probability update algorithm and a combined forward-backward risk probability update algorithm is proposed, which meets the needs of real-time network security analysis. It first performs quantitative analysis of the uncertainty of each node in the graph and uses Bayesian networks to calculate their static probabilities. After that, it updates the dynamic probability of each node along the forward and backward paths according to real-time network security events, instantly reflecting changes in external conditions and assessing real-time risk levels across the network. Experimental results show that the method can calibrate and adjust the risk probability of each node according to the actual situation, which helps network operators correctly understand the risk level of the network and make better decisions for defending against and preventing the next attack.
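A minimal sketch of the forward propagation step (assuming an acyclic attack graph with AND/OR nodes, per-node exploit probabilities, and nodes listed in topological order; the CVSS-based quantification and the backward pass of the paper are omitted):

def forward_update(graph, exploit_prob, node_type, evidence):
    """Propagate compromise probabilities through a topologically ordered attack graph.

    graph:        dict node -> list of parent nodes (empty list for entry points)
    exploit_prob: dict node -> probability that the local exploit succeeds
    node_type:    dict node -> "AND" or "OR" (how parent conditions combine)
    evidence:     dict node -> 1.0 for nodes confirmed compromised by real-time alerts
    """
    prob = {}
    for node, parents in graph.items():            # insertion order assumed topological
        if node in evidence:                       # observed security event overrides
            prob[node] = evidence[node]
            continue
        if not parents:
            parent_term = 1.0
        elif node_type.get(node, "OR") == "AND":   # all parent conditions required
            parent_term = 1.0
            for p in parents:
                parent_term *= prob[p]
        else:                                      # OR: at least one parent compromised
            miss = 1.0
            for p in parents:
                miss *= (1.0 - prob[p])
            parent_term = 1.0 - miss
        prob[node] = exploit_prob.get(node, 1.0) * parent_term
    return prob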
-
Homomorphic and Commutative Fragile Zero-watermarking Based on SVD
任花, 牛少彰, 王茂森, 岳桢, 任如勇. 基于奇异值分解的同态可交换脆弱零水印研究[J]. 计算机科学, 2022, 49(3): 70-76.
REN Hua, NIU Shao-zhang, WANG Mao-sen, YUE Zhen, REN Ru-yong. Homomorphic and Commutative Fragile Zero-watermarking Based on SVD[J]. Computer Science, 2022, 49(3): 70-76. doi:10.11896/jsjkx.210800015
-
Most existing watermarking and encryption schemes find it difficult to ensure both the commutativity of watermarking and encryption and the visual quality of the protected image. These schemes complete watermark embedding and image encryption in a fixed order, and they modify the protected image content to some extent; few of them achieve commutativity of the watermarking and encryption processes without affecting the quality of the protected image content. Therefore, a homomorphic and commutative fragile zero-watermarking scheme based on SVD (singular value decomposition) is proposed. At the sender side, the content owner adopts homomorphic modular encryption to encrypt the original image content, and the two stages of image encryption and watermark generation do not affect each other; the zero-watermarking information can be generated from the encrypted image and from the original host image, respectively. At the receiver side, the legitimate receiver first decrypts the image and then performs watermark detection on the decrypted content, and the extracted watermarking information can detect and locate deliberately tampered areas of the watermarked image. Experimental results confirm that the use of zero-watermarking does not alter the gray-level values of the image content, and deliberately tampered areas can be located accurately while commutativity is ensured.
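A minimal NumPy sketch of the kind of SVD feature that zero-watermarking schemes are typically built on (block-wise largest singular values binarized, later XOR-ed with a binary logo); the homomorphic modular encryption and the commutativity machinery of the actual scheme are not shown, and the median thresholding used here is an assumption:

import numpy as np

def svd_feature_map(image, block=8):
    """Binary feature map from the largest singular value of each block of a grayscale image.

    Because the feature is only read from the image, the host content is never modified,
    which is the defining property of zero-watermarking.
    """
    h, w = image.shape
    h, w = h - h % block, w - w % block
    sigmas = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            s = np.linalg.svd(image[i:i + block, j:j + block].astype(float),
                              compute_uv=False)
            sigmas.append(s[0])                    # largest singular value of the block
    sigmas = np.array(sigmas).reshape(h // block, w // block)
    return (sigmas >= np.median(sigmas)).astype(np.uint8)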
-
Lightweight Medical Data Sharing Scheme with Access Policy Hiding and Key Tracking
王梦宇, 殷新春, 宁建廷. 支持访问策略隐藏和密钥追踪的轻量级医疗数据共享方案[J]. 计算机科学, 2022, 49(3): 77-85.
WANG Meng-yu, YIN Xin-chun, NING Jian-ting. Lightweight Medical Data Sharing Scheme with Access Policy Hiding and Key Tracking[J]. Computer Science, 2022, 49(3): 77-85. doi:10.11896/jsjkx.210800001
-
In the traditional ciphertext-policy attribute-based encryption (CP-ABE) scheme, the access policy is stored together with the ciphertext. This may leak the privacy of the data owner and bring potential security risks in medical scenarios. Therefore, solutions supporting access policy hiding have been proposed. However, most solutions need to generate redundant ciphertexts or key components to implement the decryption test, which increases the computing overhead of data owners and the storage overhead of data users. At the same time, malicious users may be motivated by their own interests to reveal their decryption keys. In order to solve these problems, a lightweight medical data sharing scheme with access policy hiding and key tracking is proposed. Firstly, part of the master key is stored in the Enclave in advance using software guard extensions (SGX) technology, so that the test results can be calculated accurately and quickly while avoiding the generation of redundant ciphertexts and key components. Then, verifiable outsourcing technology is employed to reduce the user's computing overhead while ensuring the correctness and completeness of the decryption result. Finally, key tracking is realized by embedding an identity identifier in the decryption key of the data user. Performance analysis shows that the proposed scheme has certain advantages in terms of functionality and computation, and the security analysis proves that it is secure under chosen-plaintext attack.
-
GSO: A GNN-based Deep Learning Computation Graph Substitutions Optimization Framework
苗旭鹏, 周跃, 邵蓥侠, 崔斌. GSO:基于图神经网络的深度学习计算图子图替换优化框架[J]. 计算机科学, 2022, 49(3): 86-91.
MIAO Xu-peng, ZHOU Yue, SHAO Ying-xia, CUI Bin. GSO: A GNN-based Deep Learning Computation Graph Substitutions Optimization Framework[J]. Computer Science, 2022, 49(3): 86-91. doi:10.11896/jsjkx.210700199
-
Deep learning has achieved great success in various practical applications, and how to effectively improve model execution efficiency is one of the important research issues in this field. Existing deep learning frameworks usually model deep learning in the form of computation graphs and try to optimize them through subgraph substitution rules designed by experts, mainly using heuristic algorithms to search for substitution sequences. Their shortcomings include: 1) the existing subgraph substitution rules result in a large search space and the heuristic algorithms are not efficient; 2) these algorithms are not scalable to large computation graphs; 3) they cannot utilize historical optimization results. In order to solve these problems, we propose GSO, a graph neural network-based deep learning computation graph optimization framework. We reformulate the graph substitution optimization problem as a subgraph matching problem. Based on the feature information of the operators and the topology of the computation graph, we use a graph neural network to predict the feasibility and positions of subgraph matches. We implement the framework in Python, and it is compatible with mainstream deep learning systems. The experimental results show that: 1) compared to the full set of graph substitution rules, the proposed rules can reduce the search space by up to 92%; 2) compared to existing heuristic algorithms, GSO completes the subgraph replacement process of the computation graph up to 2 times faster, and the optimized computation graph is up to 34% faster than the original graph.
-
Data Stream Ensemble Classification Algorithm Based on Information Entropy Updating Weight
夏源, 赵蕴龙, 范其林. 基于信息熵更新权重的数据流集成分类算法[J]. 计算机科学, 2022, 49(3): 92-98.
XIA Yuan, ZHAO Yun-long, FAN Qi-lin. Data Stream Ensemble Classification Algorithm Based on Information Entropy Updating Weight[J]. Computer Science, 2022, 49(3): 92-98. doi:10.11896/jsjkx.210200047
-
In dynamic data streams, due to their instability and the existence of concept drift, an ensemble classification model needs the ability to adapt to new environments in time. At present, the weight of each base classifier is usually updated using supervision information, so as to give higher weight to the base classifiers suited to the current environment. However, supervision information cannot be obtained immediately in a real data stream environment. In order to solve this problem, this paper presents a data stream ensemble classification algorithm that updates the weights of the base classifiers through information entropy. Firstly, random feature subspaces are used to initialize each base classifier to construct the ensemble classifier. Secondly, a new base classifier is constructed for each new data block to replace the base classifier with the lowest weight in the ensemble. Then, the weight update strategy based on information entropy updates the weights of the base classifiers in real time. Finally, the base classifiers that meet the requirements participate in weighted voting to obtain the classification result. Comparing the proposed algorithm with several classic learning algorithms, the experimental results show that the proposed method has obvious advantages in classification accuracy and is suitable for various types of concept drift environments.
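The label-free weight update can be sketched as follows (interface names are illustrative): the mean prediction entropy of each base classifier on a new data block is computed without supervision, and more confident, lower-entropy members receive higher voting weight.

import numpy as np

def entropy_weights(ensemble, X_block, eps=1e-12):
    """Weight each base classifier by the inverse of its mean prediction entropy."""
    weights = []
    for clf in ensemble:                           # each member must expose predict_proba
        proba = clf.predict_proba(X_block)
        mean_entropy = -np.sum(proba * np.log(proba + eps), axis=1).mean()
        weights.append(1.0 / (mean_entropy + eps))
    weights = np.array(weights)
    return weights / weights.sum()                 # normalized voting weights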
-
Deep Learning Recommendation Algorithm Based on Reviews and Item Descriptions
王美玲, 刘晓楠, 尹美娟, 乔猛, 荆丽娜. 基于评论和物品描述的深度学习推荐算法[J]. 计算机科学, 2022, 49(3): 99-104.
WANG Mei-ling, LIU Xiao-nan, YIN Mei-juan, QIAO Meng, JING Li-na. Deep Learning Recommendation Algorithm Based on Reviews and Item Descriptions[J]. Computer Science, 2022, 49(3): 99-104. doi:10.11896/jsjkx.210200170
-
Reviews contain rich user and item information, which helps to alleviate the problem of data sparsity. However, existing review-based recommendation models do not mine the review texts sufficiently and effectively, and most of them ignore both the drift of user interest over time and the item description documents that contain item attributes, which makes the recommendation results insufficiently accurate. In this paper, a deep semantic mining based recommendation model (DSMR) is proposed. By deeply mining the semantic information of review texts and item description documents, user characteristics and item attributes can be extracted more accurately, so as to achieve more accurate recommendation. Firstly, the BERT pre-trained model is used to process the review texts and item description documents and to deeply mine user characteristics and item attributes, which effectively alleviates the problems of data sparsity and item cold start. Then, a forward LSTM is used to attend to the change of user preferences over time, yielding more accurate recommendations. Finally, in the model training stage, the experimental data are randomly sampled from ratings of 1 to 5 at a ratio of 1∶1∶1∶1∶1 to ensure the same amount of data for each rating value, making the results more accurate and the model more robust. Experiments on four commonly used Amazon open datasets show that the root mean square error (RMSE) of DSMR is at least 11.95% lower than that of two classical recommendation models based only on rating data, and that DSMR outperforms three recent recommendation models based only on review texts, with an RMSE 5.1% lower than that of the best of them.
-
Node Label Classification Algorithm Based on Structural Depth Network Embedding Model
陈世聪, 袁得嵛, 黄淑华, 杨明. 基于结构深度网络嵌入模型的节点标签分类算法[J]. 计算机科学, 2022, 49(3): 105-112.
CHEN Shi-cong, YUAN De-yu, HUANG Shu-hua, YANG Ming. Node Label Classification Algorithm Based on Structural Depth Network Embedding Model[J]. Computer Science, 2022, 49(3): 105-112. doi:10.11896/jsjkx.201000177
-
In the Internet era, with massive data growing explosively, traditional algorithms can no longer meet the needs of processing large-scale, multi-type data. In recent years, graph embedding algorithms have achieved excellent results in link prediction, network reconstruction and node classification by learning graph network characteristics. Based on the traditional auto-encoder model, a new algorithm combining the SDNE algorithm with a link prediction similarity matrix is proposed. By introducing a high-order loss function in the back-propagation process, the performance is adjusted according to the new characteristics of the auto-encoder, which remedies the disadvantage of traditional algorithms that determine node similarity in a single way. A simple model is established to analyze and prove the rationality of the optimization. Compared with the most effective SDNE algorithm in recent research, the improvement of the proposed algorithm on the Micro-F1 and Macro-F1 evaluation indicators is close to 1%, and the visual classification effect is good. At the same time, it is found that the optimal value of the hyperparameter of the higher-order loss function lies approximately in the range of 1~10, and changes of this value basically maintain the robustness of the whole network.
-
Friend Closeness Based User Matching
郭磊, 马廷淮. 基于好友亲密度的用户匹配[J]. 计算机科学, 2022, 49(3): 113-120.
GUO Lei, MA Ting-huai. Friend Closeness Based User Matching[J]. Computer Science, 2022, 49(3): 113-120. doi:10.11896/jsjkx.210200137
-
The typical aim of user matching is to detect the same individuals across different social networks. Existing efforts in this field usually focus on users' attributes and network embedding, but these methods often ignore the closeness between users and their friends. To this end, we present a friend closeness based user matching algorithm (FCUM), a semi-supervised, end-to-end cross-social-network user matching algorithm. An attention mechanism is used to quantify the closeness between users and their friends, and this quantification of close friends improves the generalization ability of FCUM. We consider both individual similarity and close-friend similarity by jointly optimizing them in a single objective function. Because labeling new matched users for training FCUM is expensive, we also design a bi-directional matching strategy. Experiments on real datasets illustrate that FCUM outperforms other state-of-the-art methods that only consider individual similarity. As user privacy protection becomes increasingly strict and complete user attribute information becomes difficult to obtain, the algorithm remains practical and easy to extend.
-
Complex Network Community Detection Algorithm Based on Node Similarity and Network Embedding
杨旭华, 王磊, 叶蕾, 张端, 周艳波, 龙海霞. 基于节点相似性和网络嵌入的复杂网络社区发现算法[J]. 计算机科学, 2022, 49(3): 121-128.
YANG Xu-hua, WANG Lei, YE Lei, ZHANG Duan, ZHOU Yan-bo, LONG Hai-xia. Complex Network Community Detection Algorithm Based on Node Similarity and Network Embedding[J]. Computer Science, 2022, 49(3): 121-128. doi:10.11896/jsjkx.210200009
-
Community detection algorithms are very important for analyzing the topology and hierarchical structure of complex networks and predicting their evolution trends. Traditional community detection algorithms do not achieve high accuracy and ignore the importance of network embedding. Aiming at these problems, a parameter-free community detection algorithm based on node similarity and the Node2Vec network embedding method is proposed. First, the Node2Vec method maps network nodes into data points represented by low-dimensional vectors in Euclidean space; the cosine similarity between these data points is calculated, a preference network is constructed according to the maximum similarity between the corresponding nodes, an initial community division is obtained, and the node with the maximum degree in each initial community is taken as a candidate node. Then the central nodes among the candidates are found according to the average degree of the network and the average shortest path. Finally, the data points corresponding to the central nodes and their number are used as the initial centroids and the number of clusters, the low-dimensional vectors are clustered by the K-Means algorithm, and the corresponding network nodes are divided into communities. The algorithm is parameter-free: it extracts the needed quantities from the network itself without setting different hyper-parameters for different networks, so it can automatically and quickly identify the community structure of complex networks. On 8 real and artificial networks, numerical simulation experiments comparing the proposed algorithm with 5 other well-known community detection algorithms show that it achieves a good community detection effect.
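Assuming the Node2Vec embeddings and the central nodes have already been obtained as described above, the final clustering step can be sketched with scikit-learn (function and variable names are illustrative):

from sklearn.cluster import KMeans

def cluster_nodes(embeddings, node_ids, center_indices):
    """Cluster Node2Vec embeddings using the selected central nodes as initial centroids.

    embeddings:     (n_nodes, dim) array of low-dimensional node vectors
    center_indices: indices of the central nodes found from the preference network
    """
    km = KMeans(n_clusters=len(center_indices),
                init=embeddings[center_indices],   # centroids fixed by the central nodes
                n_init=1, random_state=0)
    labels = km.fit_predict(embeddings)
    return dict(zip(node_ids, labels))             # node id -> community label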
-
Multi-site Hyper-graph Convolutional Neural Networks and Application
周海榆, 张道强. 面向多中心数据的超图卷积神经网络及应用[J]. 计算机科学, 2022, 49(3): 129-133.
ZHOU Hai-yu, ZHANG Dao-qiang. Multi-site Hyper-graph Convolutional Neural Networks and Application[J]. Computer Science, 2022, 49(3): 129-133. doi:10.11896/jsjkx.201100152
-
Recently, the exploitation of graph neural networks for diagnosing neurological brain disorders has attracted much attention. However, the graphs used in existing studies are usually based on pairwise connections between nodes and thus cannot reflect the complex correlation of three or more subjects, especially in multi-site datasets, i.e., datasets collected from different medical institutions that exhibit data heterogeneity caused by various scanning parameters or subject populations. To address this issue, a multi-site hypergraph data structure is proposed to describe the relationship between multi-site data. This hypergraph consists of two types of hyper-edges: intra-site hyper-edges that describe relationships within a site, and inter-site hyper-edges that describe relationships between different sites. A hypergraph convolutional network is also proposed to learn the feature representation of each node; the hypergraph convolution consists of two parts, hypergraph node convolution and hyper-edge convolution. Experimental results on two multi-site datasets validate the effectiveness of the proposed method.
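For reference, a commonly used hypergraph convolution layer (in the style of HGNN; the paper's intra-/inter-site hyper-edge construction and its separate node and hyper-edge convolutions are variations on this form) propagates node features as

$$X^{(l+1)} = \sigma\!\left(D_v^{-1/2}\, H\, W\, D_e^{-1}\, H^{\top}\, D_v^{-1/2}\, X^{(l)}\, \Theta^{(l)}\right),$$

where $H$ is the node-hyperedge incidence matrix, $W$ the diagonal matrix of hyper-edge weights, $D_v$ and $D_e$ the node and hyper-edge degree matrices, and $\Theta^{(l)}$ the learnable parameters of layer $l$.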
-
Self-supervised Deep Clustering Algorithm Based on Self-attention
韩洁, 陈俊芬, 李艳, 湛泽聪. 基于自注意力的自监督深度聚类算法[J]. 计算机科学, 2022, 49(3): 134-143.
HAN Jie, CHEN Jun-fen, LI Yan, ZHAN Ze-cong. Self-supervised Deep Clustering Algorithm Based on Self-attention[J]. Computer Science, 2022, 49(3): 134-143. doi:10.11896/jsjkx.210100001
-
In recent years, deep clustering methods using a joint optimization strategy, such as the DEC (deep embedding clustering) and DDC (deep denoising clustering) algorithms, have made great progress in image clustering, which depends heavily on the feature representation ability of deep networks, and have brought breakthroughs in clustering performance. The quality of feature extraction directly affects the subsequent clustering task. However, the generalization ability of these methods is not satisfactory, as different network structures must be used on different datasets to guarantee the clustering performance. In addition, there is still considerable room to improve clustering performance compared with classification performance. To this end, a self-supervised deep clustering (SADC) method based on self-attention is proposed. Firstly, a deep convolutional auto-encoder is designed to extract features, and noisy images are employed to enhance the robustness of the network. Secondly, a self-attention mechanism is combined with the proposed network to capture features useful for clustering. Finally, the trained encoder is combined with the K-means algorithm to form a deep clustering model for feature representation and cluster assignment, and parameters are updated iteratively to improve the clustering accuracy and the generalization ability of the network. The proposed clustering method is verified on 6 traditional image datasets and compared with the deep clustering algorithms DEC and DDC. Experimental results show that SADC provides better clustering results and is comparable to state-of-the-art clustering algorithms. Overall, the unified network structure ensures clustering accuracy while reducing the computational complexity of deep clustering algorithms.
-
Anomaly Detection Model Based on One-class Support Vector Machine Fused Deep Auto-encoder
武玉坤, 李伟, 倪敏雅, 许志骋. 单类支持向量机融合深度自编码器的异常检测模型[J]. 计算机科学, 2022, 49(3): 144-151.
WU Yu-kun, LI Wei, NI Min-ya, XU Zhi-cheng. Anomaly Detection Model Based on One-class Support Vector Machine Fused Deep Auto-encoder[J]. Computer Science, 2022, 49(3): 144-151. doi:10.11896/jsjkx.210100142
-
Handling large-scale, high-dimensional, unbalanced data is a major challenge in anomaly detection. The one-class support vector machine (OCSVM) is very efficient at handling unbalanced data, but it is not suitable for large-scale, high-dimensional datasets; meanwhile, the kernel function of OCSVM also has an important influence on detection performance. An anomaly detection model combining a deep auto-encoder and a one-class support vector machine is proposed. The deep auto-encoder is not only responsible for feature extraction and dimensionality reduction, but also maps an adaptive kernel function. The model as a whole adopts gradient descent for joint, end-to-end training. Experiments are conducted on four public datasets and compared with other anomaly detection methods. Experimental results show that the proposed model outperforms single-kernel and multi-kernel one-class support vector machines and other models in terms of AUC and recall, is robust at different anomaly rates, and has great advantages in time complexity.
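A minimal two-stage sketch of the idea with scikit-learn (the paper instead trains the auto-encoder and the OCSVM jointly end-to-end with an adaptive kernel; here the encoder is assumed to have been trained separately and only its bottleneck features are used):

from sklearn.svm import OneClassSVM

def fit_detector(latent_train, nu=0.05):
    """latent_train: bottleneck features of normal samples from a deep auto-encoder."""
    return OneClassSVM(kernel="rbf", gamma="scale", nu=nu).fit(latent_train)

def detect(detector, latent_test):
    return detector.predict(latent_test)           # +1 = normal, -1 = anomaly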
-
Object Initialization in Multiple Object Tracking: A Review
文成宇, 房卫东, 陈伟. 多目标跟踪的对象初始化综述[J]. 计算机科学, 2022, 49(3): 152-162.
WEN Cheng-yu, FANG Wei-dong, CHEN Wei. Object Initialization in Multiple Object Tracking: A Review[J]. Computer Science, 2022, 49(3): 152-162. doi:10.11896/jsjkx.210200048
-
The object initialization method determines how the multi-object tracking problem is treated and is directly related to the subsequent tracking results. Different object initialization methods define different multi-object tracking frameworks, and each framework provides a way to solve the problem, which gives object initialization for multi-object tracking a broad research prospect. Currently there is little literature on object initialization methods for multi-object tracking, and a systematic overview of object initialization is lacking. Therefore, we analyze object initialization methods from four aspects: multi-hypothesis tracking, network flow, deep learning and topic discovery. We systematically expound the task conversion and object mapping problems under different multi-object tracking frameworks, and summarize the object initialization methods for multi-object tracking.
-
Cross-attention Guided Siamese Network Object Tracking Algorithm
赵越, 余志斌, 李永春. 基于互注意力指导的孪生跟踪算法[J]. 计算机科学, 2022, 49(3): 163-169.
ZHAO Yue, YU Zhi-bin, LI Yong-chun. Cross-attention Guided Siamese Network Object Tracking Algorithm[J]. Computer Science, 2022, 49(3): 163-169. doi:10.11896/jsjkx.210300066
-
Most traditional Siamese trackers cannot perform robustly when facing similar objects, deformation, background clutter and other challenges. Accordingly, a cross-attention guided Siamese network (SiamCAN) is proposed to solve these problems. Firstly, different layers of ResNet50 are used to obtain object features at different levels, and a cross-attention module is designed to bridge the information flow between the search branch and the template branch. After that, the features from the different backbone layers are fed to the classification and regression networks to update parameters and are combined with each other. Finally, the predicted location and target size are calculated from the maximum response on the response map. Experimental results on the UAV123 tracking dataset show that, compared with the mainstream algorithm SiamBAN, the tracking precision is improved by 1.7% and the tracking accuracy by 0.7%. Moreover, on the VOT2018 benchmark, the EAO of our method exceeds that of the mainstream algorithm SiamRPN++ by 2.5, while the tracking speed remains at 35 FPS.
-
Person Re-identification Based on Feature Location and Fusion
杨晓宇, 殷康宁, 候少麒, 杜文仪, 殷光强. 基于特征定位与融合的行人重识别算法[J]. 计算机科学, 2022, 49(3): 170-178.
YANG Xiao-yu, YIN Kang-ning, HOU Shao-qi, DU Wen-yi, YIN Guang-qiang. Person Re-identification Based on Feature Location and Fusion[J]. Computer Science, 2022, 49(3): 170-178. doi:10.11896/jsjkx.210100132
-
Pedestrian appearance attributes are important semantic information for distinguishing pedestrians. Pedestrian attribute recognition plays a vital role in intelligent video surveillance and can help us quickly screen and retrieve target pedestrians. In the task of person re-identification, attribute information can be used to obtain fine-grained feature expressions, thereby improving the re-identification effect. This paper combines pedestrian attribute recognition with person re-identification to improve re-identification performance, and proposes a person re-identification framework based on feature location and fusion. Firstly, multi-task learning is used to combine person re-identification with attribute recognition, and the performance of the network model is improved by modifying the convolution stride and using double pooling. Secondly, to improve the expressive ability of attribute features, a parallel spatial-channel attention module based on the attention mechanism is designed; it can not only locate the spatial position of an attribute on the feature map but also effectively mine the channels most correlated with the attribute features, and it uses multiple groups of parallel branch structures to reduce errors and further improve the performance of the network model. Finally, a convolutional neural network is used to design the feature fusion module, which effectively integrates attribute features and pedestrian identity features to obtain more robust and expressive pedestrian features. Experiments are conducted on two commonly used person re-identification datasets, DukeMTMC-reID and Market-1501. The results show that this method is at the leading level among existing person re-identification methods.
-
Multiple Fundamental Frequency Estimation Algorithm Based on Generative Adversarial Networks for Image Removal
黎思泉, 万永菁, 蒋翠玲. 基于生成对抗网络去影像的多基频估计算法[J]. 计算机科学, 2022, 49(3): 179-184.
LI Si-quan, WAN Yong-jing, JIANG Cui-ling. Multiple Fundamental Frequency Estimation Algorithm Based on Generative Adversarial Networks for Image Removal[J]. Computer Science, 2022, 49(3): 179-184. doi:10.11896/jsjkx.201200081
-
Multiple fundamental frequency estimation is widely used in music structure analysis, music-aided education, information retrieval and other fields. In order to meet the requirement of accurately identifying random chords in music, a multiple fundamental frequency estimation algorithm based on generative adversarial networks is proposed. Firstly, the complete audio is divided into note segments, and a homophonic fingerprint is proposed to extract the spectrum characteristics of each note segment. Then, the current dominant fundamental frequency of the homophonic fingerprint is identified by a convolutional neural network, and the identified dominant fundamental frequency is regarded as the image that interferes with the recognition of the next fundamental frequency. Next, the interference image is removed by the generative adversarial network, and a new round of processing is applied to the homophonic fingerprint image that was affected by the interference. Finally, multiple fundamental frequency estimation of complete chords is realized step by step through iterative image-removal operations. Experiments on a piano audio database composed of random two-note and random three-note chords show that, compared with the classical iterative spectrum deletion algorithm and a large-vocabulary chord recognition algorithm, the proposed algorithm can adapt to the recognition of random chords, has high robustness over different ranges, and improves the overall accuracy significantly.
-
Super Resolution Reconstruction Method of Solar Panel Defect Images Based on Meta-transfer
周颖, 常明新, 叶红, 张燕. 基于元迁移的太阳能电池板缺陷图像超分辨率重建方法[J]. 计算机科学, 2022, 49(3): 185-191.
ZHOU Ying, CHANG Ming-xin, YE Hong, ZHANG Yan. Super Resolution Reconstruction Method of Solar Panel Defect Images Based on Meta-transfer[J]. Computer Science, 2022, 49(3): 185-191. doi:10.11896/jsjkx.210100234
-
Solar panel crack defects are difficult to detect due to low resolution and contrast, and the small number of samples leads to inadequate training. To solve these problems, this paper puts forward a super-resolution reconstruction method for solar panel images based on meta-transfer, adopting a joint training scheme in which internal image information and external large-scale image information are used as training data at different stages. First, a large amount of data is used to pre-train the model so that it learns the common external characteristics of large-scale data. Then, the meta-learning model MAML is used for multi-task training to find initial parameters suitable for unsupervised tasks with few samples, improving the generalization ability of the model. Finally, the pre-trained parameters are put into an improved ZSSR to improve self-supervised learning. Experiments on DIV2K, Set5, BSD100 and a solar panel electroluminescence imaging dataset show that, compared with the traditional CARN, RCAN, IKC and ZSSR, this method achieves a higher peak signal-to-noise ratio, up to 36.66, with fewer parameters; compared with ZSSR, the number of parameters decreases by 70 000 with a shorter computation time, and compared with CARN, the computation time decreases by 0.51 s. The method thus has better reconstruction quality and higher reconstruction efficiency.
-
Concrete Pavement Crack Detection Based on Dilated Convolution and Multi-features Fusion
瞿中, 陈雯. 基于空洞卷积和多特征融合的混凝土路面裂缝检测[J]. 计算机科学, 2022, 49(3): 192-196.
QU Zhong, CHEN Wen. Concrete Pavement Crack Detection Based on Dilated Convolution and Multi-features Fusion[J]. Computer Science, 2022, 49(3): 192-196. doi:10.11896/jsjkx.210100164
-
Crack detection for concrete pavement is an important fundamental task for ensuring road safety. Due to the complicated background of concrete pavement and the diversity of cracks, a novel crack detection network based on dilated convolution and multi-feature fusion is proposed. The network is based on the encoder-decoder structure of U-Net. In the encoding stage, the improved residual network Res2Net is used to improve the feature extraction ability. A cascaded and parallel dilated convolution module serves as the center part; it enlarges the receptive field of feature points without reducing the resolution of the feature maps. The decoder aggregates multi-scale and multi-level features from the low convolutional layers to the high-level convolutional layers, which improves the accuracy of crack detection. The F-score is used to evaluate the network performance. To demonstrate the validity and accuracy of the proposed method, it is compared with existing methods. Experimental results on multiple crack datasets reveal that the proposed method is superior to these methods, improves the accuracy of crack detection and has good robustness.
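A minimal PyTorch sketch of a cascaded-plus-parallel dilated-convolution center block of the kind described above (channel counts, dilation rates and the fusion rule are illustrative; the Res2Net encoder and the multi-feature decoder are omitted):

import torch
import torch.nn as nn

class DilatedCenter(nn.Module):
    """Dilated branches applied in cascade, with all intermediate outputs fused in parallel."""
    def __init__(self, channels, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=r, dilation=r)
            for r in rates                          # padding = dilation keeps the map size
        ])

    def forward(self, x):
        out, feats = x, []
        for conv in self.branches:                  # cascade: each branch sees the previous output
            out = torch.relu(conv(out))
            feats.append(out)
        return sum(feats)                           # parallel fusion of all dilated responses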
-
Outdoor Image Weather Recognition Based on Image Blocks and Feature Fusion
左杰格, 柳晓鸣, 蔡兵. 基于图像分块与特征融合的户外图像天气识别[J]. 计算机科学, 2022, 49(3): 197-203.
ZUO Jie-ge, LIU Xiao-ming, CAI Bing. Outdoor Image Weather Recognition Based on Image Blocks and Feature Fusion[J]. Computer Science, 2022, 49(3): 197-203. doi:10.11896/jsjkx.201200263
-
In video surveillance and intelligent transportation, bad weather such as fog, rain and snow can seriously affect the visibility of video images. Therefore, it is very important to quickly identify the current weather conditions and adaptively sharpen surveillance videos. Aiming at the poor performance of traditional weather recognition methods and the lack of weather image datasets, a multi-class weather image block dataset is constructed, and a weather recognition algorithm based on image blocks and feature fusion is proposed. The algorithm uses traditional methods to extract four features, namely average gradient, contrast, saturation and dark channel, which are taken as the shallow features of weather images. It then uses transfer learning to fine-tune the VGG16 pre-trained model and extracts the fully connected layer features of the fine-tuned model as the deep features of the weather images. The shallow and deep features are fused and used as the final features to train a Softmax classifier, which recognizes foggy, rainy, snowy and sunny weather images. Experimental results show that the recognition accuracy of the proposed algorithm reaches 99.26%, and the algorithm can be used as the weather recognition module in an adaptive video image sharpening system.
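The four hand-crafted shallow features can be computed roughly as follows (an OpenCV/NumPy sketch; window sizes and the exact feature definitions are assumptions and may differ from the paper):

import cv2
import numpy as np

def shallow_weather_features(bgr):
    """Average gradient, contrast, saturation and dark channel of one 8-bit BGR image block."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    avg_gradient = float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))
    contrast = float(gray.std())                               # RMS contrast of the block
    saturation = float(cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[..., 1].mean())
    dark = cv2.erode(bgr.min(axis=2), np.ones((15, 15), np.uint8))  # dark channel prior
    return np.array([avg_gradient, contrast, saturation, float(dark.mean())])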
-
Classification Algorithm of Nuclear Cataract Based on Anterior Segment Coherence Tomography Image
章晓庆, 方建生, 肖尊杰, 陈浜, RisaHIGASHITA, 陈婉, 袁进, 刘江. 基于眼前节相干光断层扫描成像的核性白内障分类算法[J]. 计算机科学, 2022, 49(3): 204-210.
ZHANG Xiao-qing, FANG Jian-sheng, XIAO Zun-jie, CHEN Bang, Risa HIGASHITA, CHEN Wan, YUAN Jin, LIU Jiang. Classification Algorithm of Nuclear Cataract Based on Anterior Segment Coherence Tomography Image[J]. Computer Science, 2022, 49(3): 204-210. doi:10.11896/jsjkx.201100085
-
Cataract is a major ocular disease causing visual impairment and blindness. The anterior segment optical coherence tomography (AS-OCT) technique has the characteristics of non-invasiveness, high resolution, rapid inspection and objective quantitative measurement, and AS-OCT images have been widely used for the diagnosis of ocular diseases in clinical ophthalmology. At present, however, there is a lack of research on classification methods for nuclear cataract based on AS-OCT images. To this end, this paper proposes a nuclear cataract classification method based on AS-OCT images. First, the nucleus region of the lens is extracted from AS-OCT images using a combination of an adaptive threshold method, the Canny edge detection algorithm and a manual correction mode. Then, eighteen pixel features are extracted based on image intensity and histogram feature statistics, and the Pearson correlation coefficient is used to analyze the correlation between the extracted pixel features and the severity of nuclear cataract. Finally, the random forest algorithm is used to build a classification model that produces the nuclear cataract classification results. Experimental results on an AS-OCT image dataset show that the proposed method achieves an accuracy of 75.53% and a recall of 74.04%. Therefore, the proposed method has potential as a quantitative reference tool for the clinical diagnosis of nuclear cataract.
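A compact sketch of the feature-screening and classification stages with SciPy/scikit-learn (the AS-OCT nucleus segmentation, the 18 concrete intensity and histogram features, and the number of retained features are assumptions here):

import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestClassifier

def rank_and_classify(features, severity, labels, train_idx, test_idx, n_keep=10):
    """features: (n_samples, 18); severity: continuous nuclear grade; labels: class ids."""
    corrs = [abs(pearsonr(features[:, j], severity)[0]) for j in range(features.shape[1])]
    keep = np.argsort(corrs)[::-1][:n_keep]        # keep the most correlated features
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(features[train_idx][:, keep], labels[train_idx])
    return clf.predict(features[test_idx][:, keep])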
-
SSD Network Based on Improved Convolutional Attention Module and Residual Structure
张侣, 周博文, 吴亮红. 基于改进卷积注意力模块与残差结构的SSD网络[J]. 计算机科学, 2022, 49(3): 211-217.
ZHANG Lyu, ZHOU Bo-wen, WU Liang-hong. SSD Network Based on Improved Convolutional Attention Module and Residual Structure[J]. Computer Science, 2022, 49(3): 211-217. doi:10.11896/jsjkx.201200019
-
SSD (single shot multibox detector) is a one-stage detection algorithm based on convolutional neural networks. Compared with two-stage detection algorithms, its accuracy cannot meet the requirements of many practical applications, especially in small target detection tasks. In order to solve this problem, this paper proposes a feature extraction network, Res-Am CNN, based on an improved residual structure and a convolutional attention module, which greatly improves the feature extraction ability of the network, and introduces additive fusion with upsampling (AFU) into the original SSD pyramid structure for feature fusion to enhance the representation ability of shallow features. Experimental results on the PASCAL VOC dataset show that, compared with the original SSD network and mainstream detection networks, the mean average precision (mAP) of the Res-Am&AFU SSD (SSD with Res-Am CNN and AFU) network on the VOC test set is 69.1%, which is ahead of one-stage networks in accuracy, close to two-stage networks, and far ahead of two-stage networks in speed. Experimental results on a small target test set show that the mAP of the Res-Am&AFU SSD network is 67.2%, which is 9.4% higher than that of the original SSD, and the method is more flexible and needs no pre-training.
-
Transferable Emotion Analysis Method for Cross-domain Text
张舒萌, 余增, 李天瑞. 跨领域文本的可迁移情绪分析方法[J]. 计算机科学, 2022, 49(3): 218-224.
ZHANG Shu-meng, YU Zeng, LI Tian-rui. Transferable Emotion Analysis Method for Cross-domain Text[J]. Computer Science, 2022, 49(3): 218-224. doi:10.11896/jsjkx.210400034
-
With the rapid development of the mobile internet, social network platforms are full of text data with emotional color. Mining the emotion in such text not only helps to understand the attitudes and emotions of internet users, but also plays an important role in helping scientific research institutions and the government grasp the emotional changes and trends of society. Traditional sentiment analysis mainly focuses on sentiment polarity and cannot describe the emotion of a text accurately and multi-dimensionally. In order to solve this problem, this paper studies emotion analysis of text. Aiming at the lack of fine-grained emotion tags in text datasets from different fields, a deep learning based emotion analysis model, FMRo-BLA, is proposed. The model pre-trains on general-domain text and then applies the pre-trained model to downstream specific-domain tasks through parameter-based transfer learning, feature fusion and FGM adversarial training. Compared with the best-performing RoBERTa pre-trained language model, the F1 value of the proposed method improves by 5.93% on the target domain dataset, and by 12.38% when transfer learning is further added.
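FGM adversarial training mentioned above perturbs the embedding matrix along the gradient direction during fine-tuning; a standard PyTorch implementation pattern looks like the following (the embedding parameter name depends on the backbone and is an assumption here):

import torch

class FGM:
    """Fast Gradient Method adversarial training applied to the embedding layer."""
    def __init__(self, model, emb_name="word_embeddings", epsilon=1.0):
        self.model, self.emb_name, self.epsilon = model, emb_name, epsilon
        self.backup = {}

    def attack(self):
        for name, param in self.model.named_parameters():
            if param.requires_grad and self.emb_name in name and param.grad is not None:
                self.backup[name] = param.data.clone()
                norm = torch.norm(param.grad)
                if norm != 0:
                    param.data.add_(self.epsilon * param.grad / norm)  # move along the gradient

    def restore(self):
        for name, param in self.model.named_parameters():
            if name in self.backup:
                param.data = self.backup[name]
        self.backup = {}

# Typical step: loss.backward(); fgm.attack(); adv_loss.backward(); fgm.restore(); optimizer.step()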
-
Fiber Bundle Meta-learning Algorithm Based on Variational Bayes
刘洋, 李凡长. 基于变分贝叶斯的纤维丛元学习算法[J]. 计算机科学, 2022, 49(3): 225-231.
LIU Yang, LI Fan-zhang. Fiber Bundle Meta-learning Algorithm Based on Variational Bayes[J]. Computer Science, 2022, 49(3): 225-231. doi:10.11896/jsjkx.201100111
-
Deep learning based on neural networks has achieved excellent results in a large number of fields, but it has difficulty dealing with similar or untrained tasks and in learning and adapting to new tasks quickly. Moreover, it requires a large number of training samples, which limits its generalization and extension. Meta-learning is a new learning framework that aims to solve the problem that traditional learning methods cannot learn quickly and adapt to new tasks. Aiming at the meta-learning problem of image classification, a novel fiber bundle meta-learning algorithm based on Bayesian theory is proposed. Firstly, a convolutional neural network is used to extract the information of the support set images to obtain image representations. Then the manifold structure of the data features and the fiber bundle of the data features are constructed. The input query set selects the manifold section of the current new task to obtain the fiber suitable for that task, so as to obtain the correct label of the image. Experimental results show that the model based on the proposed algorithm (FBBML) achieves the best accuracy compared with the standard four-layer convolutional neural network model on the common dataset mini-ImageNet. At the same time, introducing fiber bundle theory into meta-learning makes the algorithm more interpretable.
-
Conversational Comprehension Model for Question Generation
时雨涛, 孙晓. 一种会话理解模型的问题生成方法[J]. 计算机科学, 2022, 49(3): 232-238.
SHI Yu-tao, SUN Xiao. Conversational Comprehension Model for Question Generation[J]. Computer Science, 2022, 49(3): 232-238. doi:10.11896/jsjkx.210200153
-
Conversational question generation (CQG) differs from question generation tasks that generate single-round questions from paragraphs and answers: CQG additionally considers the conversational information composed of historical question-answer pairs, and the generated questions should inherit the history of the conversation and maintain high consistency with it. In response to this characteristic, this paper proposes word-level and sentence-level attention mechanism modules to enhance the ability to extract conversation history information, ensuring that the current round of questions integrates the characteristics of each word and sentence in the conversation history, thereby generating coherent, high-quality questions. The accuracy of the question word is particularly important, as the generated question needs to match the answer type corresponding to the original question in the dataset; therefore an additional loss function is constructed in the question word prediction module as a constraint on the question word type. The conversational comprehension network (CCNet) model is obtained by combining these modules. Experiments show that this model exceeds the baseline models on most evaluation indicators; on the CoQA dataset, Bleu1 and Bleu2 reach 39.70 and 23.76, respectively, and the quality of the generated questions is higher. The model is also shown to be effective in ablation experiments and cross-dataset experiments, indicating that CCNet has strong general capabilities.
-
Double Speedy Q-Learning Based on Successive Over Relaxation
周琴, 罗飞, 丁炜超, 顾春华, 郑帅. 基于逐次超松弛技术的Double Speedy Q-Learning算法[J]. 计算机科学, 2022, 49(3): 239-245.
ZHOU Qin, LUO Fei, DING Wei-chao, GU Chun-hua, ZHENG Shuai. Double Speedy Q-Learning Based on Successive Over Relaxation[J]. Computer Science, 2022, 49(3): 239-245. - ZHOU Qin, LUO Fei, DING Wei-chao, GU Chun-hua, ZHENG Shuai
- Computer Science. 2022, 49 (3): 239-245. doi:10.11896/jsjkx.201200173
- Abstract PDF(1770KB) ( 537 )
- References | Related Articles | Metrics
-
Q-Learning is currently a mainstream reinforcement learning algorithm, but its convergence is slow in stochastic environments. Previous studies have alleviated the overestimation problem of Speedy Q-Learning and proposed the Double Speedy Q-Learning algorithm. However, Double Speedy Q-Learning does not consider the self-loop structure that exists in stochastic environments, that is, the probability that an action returns the agent to the current state, which hinders the agent's learning in such environments and thus affects the convergence speed of the algorithm. Aiming at this self-loop structure, the Bellman operator of the Double Speedy Q-Learning algorithm is improved by successive over-relaxation, and the Double Speedy Q-Learning algorithm based on successive over-relaxation (DSQL-SOR) is proposed to further improve the convergence speed. Numerical experiments compare the error between the actual and expected rewards of DSQL-SOR and other algorithms; the results show that the error of the proposed algorithm is 0.6 lower than that of the mainstream algorithm SQL and 0.5 lower than that of the successive over-relaxation algorithm GSQL, indicating that DSQL-SOR outperforms the other algorithms. The experiments also test the scalability of DSQL-SOR: when the state space is increased from 10 to 1 000, the average time per iteration grows slowly, remaining on the order of 10^-4, indicating that DSQL-SOR has strong scalability.
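As a rough illustration of how successive over-relaxation enters the Bellman target, the sketch below shows a single tabular Q-learning step with an SOR-style target. The function name, the relaxation factor w, and all parameter values are illustrative assumptions; this is not the paper's DSQL-SOR update, which additionally combines two estimators in the Double Speedy style.

```python
import numpy as np

def sor_q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9, w=1.2):
    """One tabular Q-learning step with a successive over-relaxation (SOR) target.
    With w = 1 this is the ordinary Q-learning target; w > 1 over-relaxes the
    update, mixing in the current state's own value to counteract self-loops."""
    sor_target = w * (r + gamma * np.max(Q[s_next])) + (1.0 - w) * np.max(Q[s])
    Q[s, a] += alpha * (sor_target - Q[s, a])
    return Q

Q = np.zeros((3, 2))                               # 3 states, 2 actions
Q = sor_q_update(Q, s=0, a=1, r=1.0, s_next=0)     # a self-looping transition
```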
-
Fine-grained Sentiment Classification of Chinese Microblogs Combining Dual Weight Mechanism and Graph Convolutional Neural Network
李浩, 张兰, 杨兵, 杨海潇, 寇勇奇, 王飞, 康雁. 融合双重权重机制和图卷积神经网络的微博细粒度情感分类[J]. 计算机科学, 2022, 49(3): 246-254.
LI Hao, ZHANG Lan, YANG Bing, YANG Hai-xiao, KOU Yong-qi, WANG Fei, KANG Yan. Fine-grained Sentiment Classification of Chinese Microblogs Combining Dual Weight Mechanism and Graph Convolutional Neural Network[J]. Computer Science, 2022, 49(3): 246-254. - LI Hao, ZHANG Lan, YANG Bing, YANG Hai-xiao, KOU Yong-qi, WANG Fei, KANG Yan
- Computer Science. 2022, 49 (3): 246-254. doi:10.11896/jsjkx.201200073
- Abstract PDF(2706KB) ( 606 )
- References | Related Articles | Metrics
-
Using deep learning models and attention mechanisms to classify the fine-grained emotions of Chinese microblogs has become a research hotspot. However, existing attention mechanisms only consider the influence of words on words and lack an effective integration of the various dimensional characteristics of the words themselves (such as word meaning, part of speech, and semantics). To solve this problem, this paper proposes a dual weight mechanism, WDWM (word and dimension weight mechanism), and combines it with a GCN model built on the dependency parse tree, so that the model not only selects the words carrying key information in each microblog but also extracts the important dimensional characteristics of each word and effectively integrates multiple dimensional characteristics, thereby capturing richer feature information. The F-measure of the resulting fine-grained sentiment classifier for Chinese microblogs combining the dual weight mechanism and a graph convolutional neural network (WDWM-GCN) reaches 84.02%, 1.7% higher than the latest algorithm presented at WWW 2020, which further demonstrates that WDWM-GCN can effectively integrate the multi-dimensional characteristics of words and capture rich feature information. In the experiment on the Sogou news classification dataset, adding the WDWM mechanism to the BERT model further improves the classification result, which fully shows that WDWM brings a significant improvement to text classification models.
-
Label-based Approach for Dynamic Updating Approximations in Incomplete Fuzzy Probabilistic Rough Sets over Two Universes
薛占熬, 侯昊东, 孙冰心, 姚守倩. 带标记的不完备双论域模糊概率粗糙集中近似集动态更新方法[J]. 计算机科学, 2022, 49(3): 255-262.
XUE Zhan-ao, HOU Hao-dong, SUN Bing-xin, YAO Shou-qian. Label-based Approach for Dynamic Updating Approximations in Incomplete Fuzzy Probabilistic Rough Sets over Two Universes[J]. Computer Science, 2022, 49(3): 255-262. - XUE Zhan-ao, HOU Hao-dong, SUN Bing-xin, YAO Shou-qian
- Computer Science. 2022, 49 (3): 255-262. doi:10.11896/jsjkx.201200042
- Abstract PDF(2198KB) ( 440 )
- References | Related Articles | Metrics
-
When missing values are obtained in incomplete fuzzy probabilistic rough sets over two universes, the time efficiency of the traditional static algorithm for updating the approximations is too low. To solve this problem, a label-based approach for dynamically updating approximations in incomplete fuzzy probabilistic rough sets over two universes is studied. Firstly, some definitions of incomplete fuzzy probabilistic rough sets over two universes are given; then, based on the matrix method, a label-based model of incomplete fuzzy probabilistic rough sets over two universes is proposed and the related theorems are proved. After that, a label-based method for calculating approximations in this model is proposed and analyzed. Then, for the case where missing values are obtained, the theorem for dynamically updating the approximations is proved, and a label-based algorithm for dynamically updating the approximations is designed and analyzed. Finally, simulation experiments are conducted on six datasets from UCI and three man-made datasets. The experimental results show that the proposed dynamic updating algorithm improves the time efficiency of updating approximations, and an example shows that the dynamic algorithm does not affect the correctness of the updated approximations, which proves the validity of the proposed algorithm.
-
On Topological Properties of Generalized Rough Approximation Operators
李妍妍, 秦克云. 广义粗糙近似算子的拓扑性质[J]. 计算机科学, 2022, 49(3): 263-268.
LI Yan-yan, QIN Ke-yun. On Topological Properties of Generalized Rough Approximation Operators[J]. Computer Science, 2022, 49(3): 263-268. - LI Yan-yan, QIN Ke-yun
- Computer Science. 2022, 49 (3): 263-268. doi:10.11896/jsjkx.210100204
- Abstract PDF(1388KB) ( 574 )
- References | Related Articles | Metrics
-
Rough set theory is a mathematical tool for dealing with uncertain information. The core notions of rough set theory are the approximation operators of approximation spaces. Pawlak approximation operators are established with equivalence relations on the universe; they are extended to generalized rough approximation operators based on arbitrary binary relations. The topological structures of approximation operators are an important topic in rough set theory. This paper studies the topological properties of generalized rough approximation operators induced by arbitrary binary relations. Four types of topologies induced by granule-based and subsystem-based generalized approximation operators are presented, and the relationships among these four types of topologies are discussed. The bases of the topologies induced by granule-based generalized approximation operators are given by the right-neighborhood systems of objects, and the normality and regularity of the related topologies are investigated. By analyzing the properties of the subsystem-based generalized upper approximation operators, it is proved that the topologies they induce can be transformed into the topologies induced by the object-based generalized lower approximation operators.
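For reference, the granule-based lower and upper approximation operators induced by an arbitrary binary relation R are usually defined through the right (successor) neighborhoods, roughly as follows; the notation is the common one and may differ from the paper's.

```latex
\[
  R_s(x) = \{\, y \in U : (x,y) \in R \,\}, \qquad
  \underline{R}(X) = \{\, x \in U : R_s(x) \subseteq X \,\}, \qquad
  \overline{R}(X)  = \{\, x \in U : R_s(x) \cap X \neq \varnothing \,\}.
\]
```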
-
Predicting Electric Energy Consumption Using Sandwich Structure of Attention in Double-LSTM
高堰泸, 徐圆, 朱群雄. 基于A-DLSTM夹层网络结构的电能消耗预测方法[J]. 计算机科学, 2022, 49(3): 269-275.
GAO Yan-lu, XU Yuan, ZHU Qun-xiong. Predicting Electric Energy Consumption Using Sandwich Structure of Attention in Double-LSTM[J]. Computer Science, 2022, 49(3): 269-275. - GAO Yan-lu, XU Yuan, ZHU Qun-xiong
- Computer Science. 2022, 49 (3): 269-275. doi:10.11896/jsjkx.210100006
- Abstract PDF(2825KB) ( 556 )
- References | Related Articles | Metrics
-
The rapid growth of the global population and technological progress have significantly increased the world's total power generation. Electric energy consumption forecasts play an essential role in power system dispatch and power generation management. Aiming at the complex time-series characteristics of energy consumption data, and to improve the prediction accuracy of power consumption, a novel sandwich structure, A-DLSTM, is proposed, in which an attention mechanism is placed between two layers of long short-term memory (LSTM) networks. The attention mechanism in the middle layer adaptively focuses on different features at each individual time step, while the two LSTM layers capture the temporal information in the sequence to predict the sequence data. The experimental data come from the UCI machine learning repository and consist of the electricity consumption of a single household over the past five years. The hyperparameters of the experiment are tuned by grid search. The experiments compare the prediction performance of A-DLSTM with existing models on the energy consumption data; the proposed network reaches the state of the art in terms of mean square error, root mean square error, mean absolute error, and mean absolute percentage error. By analyzing the heat map of the attention layer, the factor with the greatest impact on electricity consumption forecasting is identified.
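A minimal PyTorch sketch of such an LSTM-attention-LSTM sandwich is given below. The class name ADLSTM, the layer sizes, and the way the attention weights re-scale the hidden states are assumptions made for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ADLSTM(nn.Module):
    """Illustrative sandwich network: LSTM -> attention over time steps -> LSTM -> linear head."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm1 = nn.LSTM(n_features, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)            # one attention score per time step
        self.lstm2 = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)              # one-step-ahead consumption forecast

    def forward(self, x):                              # x: (batch, time, n_features)
        h1, _ = self.lstm1(x)                          # (batch, time, hidden)
        attn = torch.softmax(self.score(h1), dim=1)    # normalize scores over time
        h2, _ = self.lstm2(h1 * attn)                  # re-weighted steps into the second LSTM
        return self.head(h2[:, -1, :])                 # last hidden state -> prediction

pred = ADLSTM(n_features=7)(torch.randn(8, 24, 7))     # e.g. 24 hourly readings of 7 signals
```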
-
Implicit Causality Extraction Method Based on Event Action Direction
缪峰, 王萍, 李太勇. 基于事件动作方向的隐式因果关系抽取方法[J]. 计算机科学, 2022, 49(3): 276-280.
MIU Feng, WANG Ping, LI Tai-yong. Implicit Causality Extraction Method Based on Event Action Direction[J]. Computer Science, 2022, 49(3): 276-280. - MIU Feng, WANG Ping, LI Tai-yong
- Computer Science. 2022, 49 (3): 276-280. doi:10.11896/jsjkx.211100249
- Abstract PDF(722KB) ( 793 )
- References | Related Articles | Metrics
-
Extracting the causality between events can be applied to automatic question answering, knowledge extraction, common-sense reasoning, and so on. Because of the lack of obvious lexical features and the complex syntactic structure of Chinese, extracting implicit causality is very difficult and has become a bottleneck of current research. In contrast, explicit causality can be extracted easily and with high accuracy, and the logical causal relationship between events is stable. Therefore, an original method is proposed in this paper. Firstly, the extracted explicit causal event pairs are normalized to form event directions, and then the event subjects are generalized to form a standard set of matched causal event pairs; this set is then used to extract implicit causal event pairs according to event similarity. In order to identify more implicit causality, a new causal connective discovery algorithm is also proposed. Experiments on data crawled from NetEase Finance, Tencent Finance, and Sina Finance show that the extraction precision is improved by 1.02% compared with the traditional method.
-
FMNN:Text Classification Model Fused with Multiple Neural Networks
邓维斌, 朱坤, 李云波, 胡峰. FMNN:融合多神经网络的文本分类模型[J]. 计算机科学, 2022, 49(3): 281-287.
DENG Wei-bin, ZHU Kun, LI Yun-bo, HU Feng. FMNN:Text Classification Model Fused with Multiple Neural Networks[J]. Computer Science, 2022, 49(3): 281-287. - DENG Wei-bin, ZHU Kun, LI Yun-bo, HU Feng
- Computer Science. 2022, 49 (3): 281-287. doi:10.11896/jsjkx.210200090
- Abstract PDF(2193KB) ( 899 )
- References | Related Articles | Metrics
-
Text classification is a basic and important task in natural language processing. Most deep-learning-based text classification methods focus on a single model structure, which cannot simultaneously capture and exploit both global and local semantic features, and deepening the network loses more semantic information. To overcome these problems, a text classification model fused with multiple neural networks, FMNN, is proposed. The model combines the strengths of BERT, RNN, CNN, and attention while keeping the network shallow: BERT is used as the embedding layer to obtain the matrix representation of the text, BiLSTM and attention jointly extract the global semantic features of the text, and CNN extracts the local semantic features of the text at multiple granularities. The global and local semantic features are each fed to a softmax classifier, and the results are finally fused by arithmetic averaging. Experimental results on three public datasets and one judicial dataset show that the proposed FMNN model achieves higher accuracy, reaching 90.31% on the judicial dataset, which demonstrates its practical value.
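The fusion step described above (a separate softmax per branch, then arithmetic averaging of the probabilities) can be sketched as follows; the function name and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fuse_predictions(global_logits, local_logits):
    """Arithmetic-mean fusion of the two branch classifiers: each branch is passed
    through softmax separately, then the class probabilities are averaged."""
    p_global = F.softmax(global_logits, dim=-1)   # BiLSTM + attention branch
    p_local = F.softmax(local_logits, dim=-1)     # multi-granularity CNN branch
    return (p_global + p_local) / 2

probs = fuse_predictions(torch.randn(4, 10), torch.randn(4, 10))  # 4 texts, 10 classes
```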
-
Semi-supervised Learning Method Based on Automated Mixed Sample Data Augmentation Techniques
许华杰, 陈育, 杨洋, 秦远卓. 基于混合样本自动数据增强技术的半监督学习方法[J]. 计算机科学, 2022, 49(3): 288-293.
XU Hua-jie, CHEN Yu, YANG Yang, QIN Yuan-zhuo. Semi-supervised Learning Method Based on Automated Mixed Sample Data Augmentation Techniques[J]. Computer Science, 2022, 49(3): 288-293. - XU Hua-jie, CHEN Yu, YANG Yang, QIN Yuan-zhuo
- Computer Science. 2022, 49 (3): 288-293. doi:10.11896/jsjkx.210100156
- Abstract PDF(1659KB) ( 1059 )
- References | Related Articles | Metrics
-
Consistency-based semi-supervised learning methods typically use simple data augmentation to achieve consistent predictions for original and perturbed inputs, but their effectiveness is difficult to guarantee when the proportion of labeled data is low. Extending advanced data augmentation methods from supervised learning to the semi-supervised setting is one way to address this problem. Based on the consistency-based semi-supervised learning method MixMatch, a semi-supervised learning method based on automated mixed-sample data augmentation, AutoMixMatch, is proposed: a modified automatic data augmentation technique is used in the data augmentation phase, and a mixed-sample algorithm is proposed for the sample mixing phase to improve the utilization of unlabeled samples. The performance of the proposed method is evaluated through image classification experiments. On image classification benchmark datasets, the proposed method outperforms several mainstream semi-supervised classification methods under three labeled-sample proportions, which validates its effectiveness. In addition, the proposed method performs particularly well when the proportion of labeled data is extremely low (only 0.05% of the training data); its classification error rate on the SVHN dataset is 30.17% lower than that of MixMatch.
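A minimal sketch of MixUp-style sample mixing, the kind of mixed-sample augmentation that MixMatch-family methods build on, is shown below. The exact mixing rule used by AutoMixMatch is not specified in the abstract, so the parameter values and the bias towards the first sample are assumptions.

```python
import numpy as np

def mix_samples(x1, y1, x2, y2, alpha=0.75):
    """MixUp-style mixing: lam ~ Beta(alpha, alpha), biased towards the first sample
    (as MixMatch does), applied to both the inputs and the label distributions."""
    lam = np.random.beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)                 # keep the first sample dominant
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2           # mix one-hot / guessed label distributions
    return x, y

x, y = mix_samples(np.random.rand(32, 32, 3), np.array([1.0, 0.0]),
                   np.random.rand(32, 32, 3), np.array([0.0, 1.0]))
```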
-
Interactive Attention Graph Convolutional Networks for Aspect-based Sentiment Classification
潘志豪, 曾碧, 廖文雄, 魏鹏飞, 文松. 基于交互注意力图卷积网络的方面情感分类[J]. 计算机科学, 2022, 49(3): 294-300.
PAN Zhi-hao, ZENG Bi, LIAO Wen-xiong, WEI Peng-fei, WEN Song. Interactive Attention Graph Convolutional Networks for Aspect-based Sentiment Classification[J]. Computer Science, 2022, 49(3): 294-300. - PAN Zhi-hao, ZENG Bi, LIAO Wen-xiong, WEI Peng-fei, WEN Song
- Computer Science. 2022, 49 (3): 294-300. doi:10.11896/jsjkx.210100180
- Abstract PDF(2149KB) ( 810 )
- References | Related Articles | Metrics
-
Aspect-based sentiment classification aims to identify the sentiment polarity of a given aspect in a sentence. Most previous methods are based on long short-term memory networks (LSTM) and attention mechanisms, which largely rely on the semantic correlation between the aspect and the contextual words of the sentence but ignore its syntactic information. To tackle this problem, an interactive attention graph convolutional network (IAGCN) is proposed to model both the semantic and the syntactic correlations of the words in a sentence. Firstly, IAGCN uses a bi-directional long short-term memory network (BiLSTM) to capture contextual semantic information with respect to word order. Then, position information is introduced and fed into a graph convolutional network to learn the syntactic information, after which the aspect representation is obtained through a mask mechanism. Finally, an interactive attention mechanism computes the aspect-specific contextual representation used as the final classification feature. Through this complementary design, the model obtains a contextual representation that aggregates the aspect target information and benefits sentiment classification. Experimental results show that the model performs well on multiple datasets. Compared with the Bi-IAN model, which does not consider syntactic information, the proposed model is superior on all datasets, improving F1 scores by 4.17%, 7.98%, and 8.03% on the restaurant-domain datasets REST14, REST15, and REST16, respectively. Compared with the ASGCN model, which also considers both semantic and syntactic information, the proposed model achieves higher F1 scores on all datasets except LAP14, with gains of 2.05%, 1.66%, and 2.77% on REST14, REST15, and REST16, respectively.
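For orientation only, a graph convolutional layer over the dependency-tree adjacency matrix is commonly written as below (Kipf-Welling normalization); whether IAGCN uses exactly this normalization is not stated in the abstract.

```latex
\[
  H^{(l+1)} = \mathrm{ReLU}\!\left( \tilde{D}^{-\frac{1}{2}} \tilde{A}\, \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)} \right),
  \qquad \tilde{A} = A + I, \quad \tilde{D}_{ii} = \sum_{j} \tilde{A}_{ij},
\]
where $A$ is the adjacency matrix of the sentence's dependency tree and $H^{(0)}$ roughly corresponds to the BiLSTM hidden states.
```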
-
Industrial Serial Protocol State Detection Algorithm Based on DTMC
刘凯祥, 谢永芳, 陈新, 吕飞, 刘俊矫. 基于DTMC的工业串行协议状态检测算法[J]. 计算机科学, 2022, 49(3): 301-307.
LIU Kai-xiang, XIE Yong-fang, CHEN Xin, LYU Fei, LIU Jun-jiao. Industrial Serial Protocol State Detection Algorithm Based on DTMC[J]. Computer Science, 2022, 49(3): 301-307. - LIU Kai-xiang, XIE Yong-fang, CHEN Xin, LYU Fei, LIU Jun-jiao
- Computer Science. 2022, 49 (3): 301-307. doi:10.11896/jsjkx.210200078
- Abstract PDF(1968KB) ( 488 )
- References | Related Articles | Metrics
-
Aiming at the problem that existing research on industrial security mainly focuses on industrial Ethernet and lacks protection for serial-link protocols, an industrial serial protocol state detection algorithm based on the discrete-time Markov chain (DTMC) is proposed. Exploiting the limited behaviors and states of an industrial control system (ICS), the method automatically builds a normal behavior model of the ICS, a DTMC, from historical traffic of the serial-link protocol. The model contains behavior information such as state events, state transitions, state transition probabilities, and state transition time intervals, and this information is used as the state detection rule set. When state information generated in the detection phase differs from the rule set or the deviation exceeds a threshold, actions such as alarming or rejection are triggered. At the same time, combined with comprehensive packet inspection (CPI) technology, the detectable range of protocol payload data is increased. Finally, the experimental results show that the proposed algorithm can effectively detect semantic attacks and protect the security of serial links, with a false positive rate of 5.3% and a false negative rate of 0.6%.
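The core of such a detector, estimating a DTMC from historical traffic and flagging unseen or low-probability transitions, might look like the sketch below. The state names, threshold, and helper functions are illustrative assumptions; the paper's rule set additionally covers state transition time intervals.

```python
from collections import defaultdict

def build_dtmc(event_sequence):
    """Estimate a discrete-time Markov chain from an observed sequence of protocol
    states/events: transition counts are normalized into transition probabilities."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, curr in zip(event_sequence, event_sequence[1:]):
        counts[prev][curr] += 1
    return {s: {t: c / sum(nxt.values()) for t, c in nxt.items()}
            for s, nxt in counts.items()}

def is_anomalous(dtmc, prev, curr, threshold=0.01):
    """Flag a transition that was never seen in training or whose estimated
    probability falls below the threshold."""
    return dtmc.get(prev, {}).get(curr, 0.0) < threshold

dtmc = build_dtmc(["idle", "read", "idle", "read", "write", "idle"])
print(is_anomalous(dtmc, "read", "reset"))   # True: unseen transition
```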
-
User Trajectory Identification Model via Attention Mechanism
李昊, 曹书瑜, 陈亚青, 张敏. 基于注意力机制的用户轨迹识别模型[J]. 计算机科学, 2022, 49(3): 308-312.
LI Hao, CAO Shu-yu, CHEN Ya-qing, ZHANG Min. User Trajectory Identification Model via Attention Mechanism[J]. Computer Science, 2022, 49(3): 308-312. - LI Hao, CAO Shu-yu, CHEN Ya-qing, ZHANG Min
- Computer Science. 2022, 49 (3): 308-312. doi:10.11896/jsjkx.210300231
- Abstract PDF(1809KB) ( 919 )
- References | Related Articles | Metrics
-
Recently, location-based services have become popular; they bring convenience to daily life but also pose a great threat to personal privacy. Existing research shows that, with a large amount of historical trajectory data, an attacker can identify the user who generated a trajectory in an anonymous trajectory dataset. Such studies face two problems: data sparsity and poor data quality. Data sparsity means that trajectories are often distributed over only a few local areas and, unlike in natural language processing, there is no large corpus to draw on; poor data quality refers to the low sampling rate of, and noise in, the location points of a trajectory. To address these two problems, this paper proposes a user trajectory identification model based on the attention mechanism, consisting of a location embedding module, an attention-based transitional feature encoder, and a trajectory-user identification module. The location embedding module embeds the spatial relations of the trajectory points into location vectors; the attention-based transitional feature encoder extracts the sequential dependencies within a single trajectory; and the trajectory-user identification module predicts the user identity of the trajectory from the encoder outputs. Finally, experiments on the Gowalla and Geolife datasets show that the proposed model effectively alleviates the problems of data sparsity and poor data quality and achieves better accuracy than existing methods.
-
Expressive Attribute-based Searchable Encryption Scheme in Cloud Computing
高诗尧, 陈燕俐, 许玉岚. 云环境下基于属性的多关键字可搜索加密方案[J]. 计算机科学, 2022, 49(3): 313-321.
GAO Shi-yao, CHEN Yan-li, XU Yu-lan. Expressive Attribute-based Searchable Encryption Scheme in Cloud Computing[J]. Computer Science, 2022, 49(3): 313-321. - GAO Shi-yao, CHEN Yan-li, XU Yu-lan
- Computer Science. 2022, 49 (3): 313-321. doi:10.11896/jsjkx.201100214
- Abstract PDF(2038KB) ( 1106 )
- References | Related Articles | Metrics
-
Searchable encryption technology enables keyword search without decrypting the data and thus protects users' private information well. Aiming at the problem that most current searchable encryption schemes cannot support user-defined search policies, this paper proposes an attribute-based searchable encryption scheme that is secure, efficient, and supports arbitrarily expressive search policies. Firstly, based on the LSSS access structure, the scheme allows the keyword search policy to be expressed as a conjunction, disjunction, or any monotone Boolean expression; the user generates a trapdoor for the LSSS search policy with the private key, and the cloud server uses the trapdoor to search ciphertexts that satisfy the keyword search policy. Secondly, fine-grained access control of the encrypted data in the cloud is achieved by combining the scheme with attribute-based encryption. In addition, by splitting keywords into keyword names and keyword values through the "linear splitting" technique, attackers cannot infer sensitive keyword values from the ciphertext or the trapdoor. Finally, the computational burden on users is reduced because part of the decryption work is transferred to the cloud server. The security of the proposed scheme is proved under the BDHE and (q-2) assumptions, and theoretical analysis and experimental results show that the scheme is effective.
-
Secure Data Link of Unmanned Aerial Vehicle Based on Chaotic Sub-carrier Modulation
赵耿, 宋鑫宇, 马英杰. 混沌子载波调制的无人机安全数据链路[J]. 计算机科学, 2022, 49(3): 322-328.
ZHAO Geng, SONG Xin-yu, MA Ying-jie. Secure Data Link of Unmanned Aerial Vehicle Based on Chaotic Sub-carrier Modulation[J]. Computer Science, 2022, 49(3): 322-328. - ZHAO Geng, SONG Xin-yu, MA Ying-jie
- Computer Science. 2022, 49 (3): 322-328. doi:10.11896/jsjkx.210200022
- Abstract PDF(4572KB) ( 518 )
- References | Related Articles | Metrics
-
With the rapid development of electronics and communication technology, unmanned aerial vehicles (UAVs) supported by these technologies have attracted extensive attention from academia and industry. To adapt to increasingly complex tasks and application fields, a secure UAV data link has become an important factor in promoting the development of UAVs. Focusing on the security of the UAV data link, this paper discusses the advantages of orthogonal frequency division multiplexing (OFDM), such as high spectrum utilization and resistance to multipath fading, and explains its application in the UAV data link. Combining the long-term unpredictability of chaotic systems, a chaotic sub-carrier modulation (CSCM) scheme is proposed, which uses a quantized, grouped chaotic sequence to perform chaotic serial-to-parallel conversion and chaotic disturbance of the data. The scheme realizes encrypted transmission of UAV data and allows the reliability of the data source to be judged. Finally, MATLAB simulations verify the feasibility and effectiveness of the proposed scheme.
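As a loose illustration of the chaotic ingredient only, the sketch below generates a logistic-map sequence and quantizes it to a bit stream that could drive scrambling or disturbance. The actual quantization, grouping, and sub-carrier mapping of the CSCM scheme are not reproduced here, and all parameter values are assumptions.

```python
import numpy as np

def chaotic_bits(x0=0.3141, r=3.99, n=1024, burn_in=100):
    """Logistic-map sequence x_{k+1} = r * x_k * (1 - x_k), quantized to bits."""
    x = x0
    bits = []
    for k in range(n + burn_in):
        x = r * x * (1.0 - x)
        if k >= burn_in:                 # discard transient samples
            bits.append(1 if x >= 0.5 else 0)
    return np.array(bits, dtype=np.uint8)

data = np.random.randint(0, 2, 1024, dtype=np.uint8)   # placeholder payload bits
scrambled = np.bitwise_xor(data, chaotic_bits())        # chaotic disturbance of the bit stream
```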
-
New Certificateless Generalized Signcryption Scheme for Internet of Things Environment
张振超, 刘亚丽, 殷新春. 适用于物联网环境的无证书广义签密方案[J]. 计算机科学, 2022, 49(3): 329-337.
ZHANG Zhen-chao, LIU Ya-li, YIN Xin-chun. New Certificateless Generalized Signcryption Scheme for Internet of Things Environment[J]. Computer Science, 2022, 49(3): 329-337. - ZHANG Zhen-chao, LIU Ya-li, YIN Xin-chun
- Computer Science. 2022, 49 (3): 329-337. doi:10.11896/jsjkx.201200256
- Abstract PDF(2039KB) ( 493 )
- References | Related Articles | Metrics
-
Certificateless generalized signcryption (CLGSC) schemes have been widely applied in resource-limited IoT environments because they not only solve the problems of certificate management and key escrow but can also act as an encryption, signature, or signcryption scheme according to the security requirements of the network. Firstly, concrete attacks are given to show that Karati's scheme cannot resist forgery attacks, and the essential reason why adversaries can forge a valid signature or signcryption in CLGSC schemes is analyzed. Then, an efficient certificateless generalized signcryption scheme without bilinear pairing is proposed; the scheme is proved secure in the random oracle model based on the computational Diffie-Hellman problem and the discrete logarithm problem. Finally, performance evaluation and comparison show that the proposed scheme outperforms other CLGSC schemes in computation cost, communication overhead, and security functionality, so it can provide secure data transmission among resource-limited IoT devices.
-
Linear System Solving Scheme Based on Homomorphic Encryption
吕由, 吴文渊. 基于同态加密的线性系统求解方案[J]. 计算机科学, 2022, 49(3): 338-345.
LYU You, WU Wen-yuan. Linear System Solving Scheme Based on Homomorphic Encryption[J]. Computer Science, 2022, 49(3): 338-345. - LYU You, WU Wen-yuan
- Computer Science. 2022, 49 (3): 338-345. doi:10.11896/jsjkx.201200124
- Abstract PDF(1546KB) ( 801 )
- References | Related Articles | Metrics
-
In scientific computing, statistical analysis, and machine learning, many practical problems reduce to solving a linear system Ax=b, such as least-squares estimation and regression analysis. In practice, the data used for the computation often belong to different users and contain their sensitive information; when different data owners want to solve a model collaboratively, homomorphic encryption is one way to prevent privacy leakage during the computation. In a two-party scenario, based on the HEAAN scheme proposed by Cheon et al., we propose a new two-party scheme that securely solves the linear system through Gram-Schmidt orthogonalization, and design an interactive secure multiplicative-inverse protocol to work around the fact that efficient division cannot be performed homomorphically. We analyze the security, communication cost, and computational complexity, and implement the scheme in C++ on the HEAAN library. Extensive experiments show that the scheme can safely and efficiently solve linear systems of dimension up to 17, with a relative error of no more than 0.0001 compared with the results on unencrypted data. With the proposed parallel encoding method, the scheme can process multiple linear systems simultaneously in SIMD mode, which broadens its applicability. The scheme can be applied in specific practical scenarios and can further be used in the design of privacy-preserving data mining algorithms.
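The plaintext arithmetic that such a scheme evaluates homomorphically, solving Ax=b via Gram-Schmidt QR factorization followed by back substitution, can be sketched as follows. This is ordinary floating-point code for illustration only; it omits the HEAAN encryption, encoding, and the interactive inverse protocol entirely.

```python
import numpy as np

def solve_via_gram_schmidt(A, b):
    """Solve Ax = b (A with full column rank) by classical Gram-Schmidt QR:
    A = QR with orthonormal Q, so Rx = Q^T b is solved by back substitution."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = A.shape[1]
    Q = np.zeros_like(A)
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    y = Q.T @ b
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):        # back substitution on the upper-triangular R
        x[i] = (y[i] - R[i, i + 1:] @ x[i + 1:]) / R[i, i]
    return x

x = solve_via_gram_schmidt([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```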
-
Composite Blockchain Associated Event Tracing Method for Financial Activities
李素, 宋宝燕, 李冬, 王俊陆. 面向金融活动的复合区块链关联事件溯源方法[J]. 计算机科学, 2022, 49(3): 346-353.
LI Su, SONG Bao-yan, LI Dong, WANG Jun-lu. Composite Blockchain Associated Event Tracing Method for Financial Activities[J]. Computer Science, 2022, 49(3): 346-353. - LI Su, SONG Bao-yan, LI Dong, WANG Jun-lu
- Computer Science. 2022, 49 (3): 346-353. doi:10.11896/jsjkx.210700068
- Abstract PDF(2320KB) ( 612 )
- References | Related Articles | Metrics
-
Existing blockchain systems mostly adopt an egalitarian mining mode: all bookkeepers (entities) record the ledger on a single main chain, and data are stored randomly. In complex or classified financial scenarios, the data on the main chain are difficult to store in an associated or regular way, leading to low storage and query efficiency. At the same time, event traceability in most existing blockchain systems can only query the source block and cannot identify the implicit associations between entities, so queries are limited. To solve these problems, a composite blockchain associated event tracing method is proposed. Firstly, a composite-chain storage structure model of the blockchain is constructed, and the concepts of private chain and alliance chain are introduced to realize adaptive associated storage of data in complex or classified scenarios. Then, in traceability queries, on the basis of obtaining the source entity block of an event, an auxiliary storage space is set up to hold the relevant data, and a tracing method for associated entity blocks based on the Apriori algorithm is proposed; the retrieved traceability entity blocks are then organized into a source event correlation graph that describes the correlations between event entities. Finally, a risk assessment system based on reinforcement learning is proposed to evaluate the risk of traceability entities. Experiments show that the composite blockchain associated event tracing method reduces storage overhead by 60%, improves query accuracy by 90%, and improves security by 50%.
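A minimal Apriori routine of the kind the tracing step relies on is sketched below; the transaction format, the support threshold, and the function name are assumptions, and the traceability-specific adaptations (auxiliary storage space, correlation graph construction, risk assessment) are not reproduced.

```python
from itertools import combinations

def apriori(transactions, min_support=0.5):
    """Minimal Apriori: returns every itemset whose support (fraction of transactions
    containing it) is at least min_support, e.g. to mine entity blocks that co-occur
    in the same financial events."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)
    support = lambda items: sum(items <= t for t in transactions) / n
    items = {i for t in transactions for i in t}
    frequent = {frozenset([i]) for i in items if support(frozenset([i])) >= min_support}
    result, k = {s: support(s) for s in frequent}, 2
    while frequent:
        # candidate k-itemsets are unions of frequent (k-1)-itemsets
        candidates = {a | b for a, b in combinations(frequent, 2) if len(a | b) == k}
        frequent = {c for c in candidates if support(c) >= min_support}
        result.update({s: support(s) for s in frequent})
        k += 1
    return result

print(apriori([{"A", "B", "C"}, {"A", "B"}, {"A", "C"}, {"B", "C"}], min_support=0.5))
```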
-
Trust Evaluation Model of Cloud Manufacturing Services for Personalized Needs
杨玉丽, 李宇航, 邓岸华. 面向个性化需求的云制造服务可信评价模型[J]. 计算机科学, 2022, 49(3): 354-359.
YANG Yu-li, LI Yu-hang, DENG An-hua. Trust Evaluation Model of Cloud Manufacturing Services for Personalized Needs[J]. Computer Science, 2022, 49(3): 354-359. - YANG Yu-li, LI Yu-hang, DENG An-hua
- Computer Science. 2022, 49 (3): 354-359. doi:10.11896/jsjkx.210200116
- Abstract PDF(1540KB) ( 670 )
- References | Related Articles | Metrics
-
Aiming at the weak extensibility of traditional trust evaluation models for cloud manufacturing services and their difficulty in meeting personalized requirements, a trust evaluation model of cloud manufacturing services for personalized needs is proposed. Firstly, a multi-level, multi-granularity trust evaluation framework for cloud manufacturing services is designed. Then, based on this framework, a trust evaluation method for cloud manufacturing services based on the cloud model is proposed, in which cloud model theory is used both to uniformly characterize different types of evaluation indexes and to describe the personalized needs, and standard deviations are used to compute the weight coefficients of the evaluation indexes. Finally, the effectiveness and feasibility of the proposed model are verified through a case analysis and a comparative experiment on time overhead, respectively. Compared with traditional methods, the experimental results show that, within a reasonable amount of time, the proposed model can make more accurate trust evaluations of different cloud manufacturing service providers according to users' personalized requirements, and thus help users choose the cloud manufacturing service with higher satisfaction.
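The standard-deviation weighting of the evaluation indexes mentioned above can be illustrated as follows; the index matrix, the normalization, and the final weighted aggregation are assumptions made for illustration, since the paper combines the weights with a cloud-model characterization of the indexes.

```python
import numpy as np

def std_dev_weights(index_matrix):
    """Weight each evaluation index by the standard deviation of its column,
    normalized to sum to 1: indexes that discriminate more between candidate
    services receive larger weights."""
    sigma = np.std(np.asarray(index_matrix, dtype=float), axis=0)
    return sigma / sigma.sum()

# rows = candidate cloud manufacturing services, columns = evaluation indexes
scores = np.array([[0.9, 0.7, 0.8],
                   [0.6, 0.7, 0.5],
                   [0.8, 0.7, 0.9]])
w = std_dev_weights(scores)
ranking = scores @ w          # simple weighted aggregation per service
```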
-
Rational PBFT Consensus Algorithm with Evolutionary Game
杨昕宇, 彭长根, 杨辉, 丁红发. 基于演化博弈的理性拜占庭容错共识算法[J]. 计算机科学, 2022, 49(3): 360-370.
YANG Xin-yu, PENG Chang-gen, YANG Hui, DING Hong-fa. Rational PBFT Consensus Algorithm with Evolutionary Game[J]. Computer Science, 2022, 49(3): 360-370. - YANG Xin-yu, PENG Chang-gen, YANG Hui, DING Hong-fa
- Computer Science. 2022, 49 (3): 360-370. doi:10.11896/jsjkx.210900110
- Abstract PDF(2602KB) ( 878 )
- References | Related Articles | Metrics
-
Byzantine fault-tolerant algorithms are vital for distributed systems such as blockchains to reach consistency, and their performance affects the security and stability of the system. In view of the low efficiency and lack of incentive mechanisms of existing consensus algorithms, a rational practical Byzantine fault-tolerant consensus algorithm with an evolutionary game is proposed. Firstly, the trustworthiness of nodes in the consensus process is determined by node trust evaluation; the reputation value serves as the basis for the consensus enthusiasm of rational nodes, consensus nodes are partitioned according to their reputation values, and consensus is carried out over network shards of nodes to improve efficiency. Secondly, an evolutionary game model is established for the impact of the link dynamics between nodes on reputation values during consensus, and, based on the existence of a reputation-stable strategy, an incentive mechanism based on reputation rewards is designed to increase the enthusiasm of consensus nodes to participate in consensus. Simulation results show that the consensus algorithm increases throughput by 40% and that the reputation evolution game model designed for the nodes converges rapidly during consensus.
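Evolutionary-game analyses of this kind typically rest on replicator dynamics of the form below, where x_i is the share of nodes playing strategy i (for example, honest participation in consensus); the concrete payoff functions of the paper's reputation game are not reproduced here.

```latex
\[
  \dot{x}_i = x_i \left( f_i(x) - \bar{f}(x) \right), \qquad
  \bar{f}(x) = \sum_{j} x_j\, f_j(x),
\]
so a strategy expands exactly when its expected payoff $f_i$ exceeds the population average $\bar{f}$.
```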