Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 47 Issue 6, 15 June 2020
Intelligent Software Engineering
Survey on Runtime Input Validation for Context-aware Adaptive Software
WANG Hui-yan, XU Jing-wei, XU Chang
Computer Science. 2020, 47 (6): 1-7.  doi:10.11896/jsjkx.200400081
With the spread of intelligence and big data, context-aware adaptive software, a representative form of intelligent software, has gained increasing popularity. It has two key characteristics: 1) “context-aware”, referring to the ability to become aware of environments through ubiquitous sensors; 2) “adaptive”, referring to the ability to make adaptations based on collected contexts. As such, context-aware adaptive software can sense its surrounding environment at runtime and adapt smartly. Besides, with the growing development of artificial intelligence (AI) technologies, more AI models have been applied in context-aware adaptive software for smarter adaptations. Therefore, on one hand, due to the complexity of environments at runtime, the software suffers from severe reliability issues during its deployment, which are difficult to avoid by testing alone due to the lack of practically controllable environments, thus posing great challenges for runtime reliability assurance. On the other hand, the application of AI models in context-aware adaptive software further aggravates its reliability issues. How to maintain the runtime reliability of context-aware adaptive software has thus become a wide-open research problem in intelligent software engineering, while input validation has shown promise in this field by identifying and isolating unexpected inputs before they are fed into the software, so as to avoid possibly uncontrollable consequences at runtime. In this article, we survey techniques on runtime input validation for context-aware adaptive software concerning its two key characteristics, “context-aware” and “adaptive”. Meanwhile, we also examine the cost-effectiveness of solving the reliability problem, and overview the concerned research framework. Finally, we discuss some of the latest concerns of context-aware adaptive software at present and in the future, and present how context-aware adaptive software supports the envisioned emergence of self-growing software. In summary, we survey existing efforts on runtime input validation for context-aware adaptive software, aiming to form a structured framework for potential solutions to its reliability issues. We hope this may shed some light for researchers in related fields.
Experiment on Formal Verification Process of Parser of CompCert Compiler in Trusted Compiler Design
LI Ling, LI Huang-hua, WANG Sheng-yuan
Computer Science. 2020, 47 (6): 8-15.  doi:10.11896/jsjkx.191000173
Jourdan et al. presented a method to formally verify a parser in their paper Validating LR(1) Parsers, published in 2012, and successfully applied it to the parser verification of the CompCert compiler (version 2.3 and above). With this method, formal validation of the Lustre* parser, which is a part of the Open L2C project, is completed, and one of the two options of the front-end parser of the Open L2C compiler is implemented. Firstly, this paper discusses the implementation of the parser, including some valuable technical details. Then it analyzes the running performance and correctness of the parser. Finally, it summarizes how to apply this method to more general parsers.
Multi-objective Optimization Methods for Software Upgradeability Problem
ZHAO Song-hui, REN Zhi-lei, JIANG He
Computer Science. 2020, 47 (6): 16-23.  doi:10.11896/jsjkx.200400027
Open-source package management, as a means of reusing software artifacts, has become extremely popular, most notably in Linux distributions. The software upgradeability problem is a significant challenge that package management systems must resolve. It aims to find the most suitable upgrade scheme satisfying upgrade requests from users, where an upgrade scheme comprises a sequence of operations, including installing, removing, and/or upgrading packages. Existing approaches handle multiple upgrade requests in aggregate ways, so a potential risk is that the relationships between different upgrade objectives may not be considered properly. This paper introduces a novel approach, SATMOEA, which formulates the software upgradeability problem as a SAT plus multi-objective optimization problem and addresses it by combining constraint solving with multi-objective search-based optimization algorithms. We evaluate it on real instances provided by MISC (Mancoosi International Solver Competitions) and obtain promising results: it can find Pareto optimal solutions for a complex instance with myriad constraints in a single run. In comparison with other solvers, it provides more solutions with better diversity to satisfy requirements in different scenarios.
Model of Embedded Software for Solving Concurrent Defects
CUI Kai, ZHAO Guo-liang, ZHOU Kuan-jiu, LI Ming-chu
Computer Science. 2020, 47 (6): 24-31.  doi:10.11896/jsjkx.191100187
Randomness and nondeterminism in programs, such as interrupt nesting or thread interleaving in embedded software, can lead to concurrent defects such as data races and atomicity violations. Meanwhile, these insidious programming errors are difficult to restore and reproduce. Aiming at data races and atomicity violations in embedded software, a thin interrupt service routine (thin ISR) method is proposed in this paper. Using state transition matrices (STM) for modeling, the program segments related to accessing shared variables in the interrupt handler are migrated to the main program. The interrupt handler is only responsible for storing external interrupt request data in a buffer, and the interrupt service function is executed in the main program. The corresponding C code is then generated from the STM model. This method can effectively avoid concurrent defects such as data races and atomicity violations. Finally, queuing theory is used to simulate the arrival time and the departure time of interrupts. Experimental results verify that this approach is feasible and effective in solving data races and atomicity violations.
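A minimal sketch of the general "thin ISR" pattern described above (not the authors' STM-generated C code; names and the counter update are illustrative): the interrupt handler only buffers the raw request, and all work that touches shared state runs sequentially in the main loop, so interleaving between the ISR and the main program cannot corrupt shared variables.

```python
from collections import deque

irq_buffer = deque()   # filled from the interrupt context only
shared_counter = 0     # shared state, now touched only by the main loop

def thin_isr(request_data):
    """Interrupt context: store the request and return immediately."""
    irq_buffer.append(request_data)

def main_loop_step():
    """Main program: drain the buffer and do the real interrupt service work."""
    global shared_counter
    while irq_buffer:
        data = irq_buffer.popleft()
        shared_counter += data  # formerly racy update, now serialized
```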
Modeling and Simulation of Q&A Community and Its Incentive Mechanism
XU Zi-xi, MAO Xin-jun, YANG Yi, LU Yao
Computer Science. 2020, 47 (6): 32-37.  doi:10.11896/jsjkx.191000088
Question and Answer (Q&A) communities have become an important platform for knowledge sharing over the Internet. They provide a series of incentive mechanisms (such as reputation, badges, and privileges) to encourage users to participate, contribute, and increase the activity of the community. How to analyze the effectiveness of these incentive mechanisms and guide their improvement is an important challenge for the research and practice of Q&A communities. This paper proposes a modeling and simulation analysis method based on multi-agent systems. The community with its large number of users is modeled as a multi-agent system consisting of autonomous agents, and the contributions and interactions among community users are modeled as cooperative behaviors of agents driven by the incentive mechanism. This paper specifies the incentive mechanism as the beliefs of agents, and examines the generation of agent desires and agent behaviors based on self-determination theory. It collects data of the Stack Overflow community from 2016 to 2018, and conducts simulation experiments on the development and evolution of the community based on NetLogo. The results show that the proposed model and mechanism abstractions can effectively explain and reveal the evolution process of a Q&A community under the influence of incentive mechanisms.
Test Case Prioritization Based on Multi-objective Optimization
XIA Chun-yan, WANG Xing-ya, ZHANG Yan
Computer Science. 2020, 47 (6): 38-43.  doi:10.11896/jsjkx.191100113
Regression testing is the most frequently used and most expensive testing method in software testing. Test case prioritization is an effective way to reduce the cost of regression testing. Its purpose is to improve the fault detection ability of software testing by prioritizing the execution of high-value test cases. In this paper, a test case prioritization method based on multi-objective optimization is proposed. The method integrates a choice function into the individual evaluation mechanism of a genetic algorithm. By designing a reasonable coding method and appropriate selection, crossover and mutation strategies, and taking fault detection rate, statement coverage rate and effective execution time as optimization objectives, a non-dominated sorting genetic algorithm is used to optimize the test case ordering. The experimental results based on four benchmark programs and four industrial programs show that the proposed method can improve the effectiveness of software testing compared with other methods.
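A hedged sketch of the multi-objective core (the objective values and orderings below are invented examples, and the paper's exact encoding and NSGA operators are not reproduced): each candidate test-case ordering is scored on fault detection rate and statement coverage (to maximize) and effective execution time (to minimize), then ranked by Pareto dominance as non-dominated sorting genetic algorithms do.

```python
def dominates(a, b):
    """a, b = (fault_rate, coverage, exec_time); higher, higher, lower is better."""
    no_worse = a[0] >= b[0] and a[1] >= b[1] and a[2] <= b[2]
    strictly_better = a[0] > b[0] or a[1] > b[1] or a[2] < b[2]
    return no_worse and strictly_better

def non_dominated(candidates):
    """Return the Pareto front among (ordering, objectives) pairs."""
    return [c for c in candidates
            if not any(dominates(o[1], c[1]) for o in candidates)]

candidates = [("order-A", (0.9, 0.8, 120)), ("order-B", (0.7, 0.9, 100)),
              ("order-C", (0.6, 0.7, 150))]
print(non_dominated(candidates))  # order-C is dominated by order-A
```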
Defect Recognition of APP Software Based on User Feedback
DUAN Wen-jing, JIANG Ying
Computer Science. 2020, 47 (6): 44-50.  doi:10.11896/jsjkx.191100133
At present, APP software is widely used and its quality receives wide concern. High-quality software should have few defects, but software testing cannot find all of them; some defects are not found until users use the software. This paper puts forward a method of software defect recognition based on user feedback. By defining APP software defect extraction rules, software defects mentioned in user feedback are mined, and during mining the extraction rules are dynamically updated. Then the classification and severity of the extracted defects are analyzed. The experimental results show that the proposed method is effective: the accuracy of extracting user comments that report APP software defects is 85.19%, and the accuracy of defect classification is 83.23%.
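A toy illustration of rule-based defect mining from user feedback; the regex patterns below are invented examples for the sketch, not the paper's actual extraction rules.

```python
import re

DEFECT_RULES = [
    re.compile(r"(crash(es|ed)?|force[- ]clos\w*)", re.I),
    re.compile(r"(can(not|'t)|fail(s|ed)? to) (open|load|log ?in|save)", re.I),
    re.compile(r"(freez\w+|lag(s|gy)?|drain(s|ing)? (the )?battery)", re.I),
]

def extract_defect_reviews(reviews):
    """Return reviews matching at least one defect-extraction rule."""
    return [r for r in reviews if any(p.search(r) for p in DEFECT_RULES)]

print(extract_defect_reviews([
    "App crashes every time I open the camera",
    "Love the new design!",
    "It fails to load my playlist after update",
]))
```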
Analysis of Open Source Software Cliff Walls for Group Collaborative Development
HE Peng, YU Lv-jun
Computer Science. 2020, 47 (6): 51-58.  doi:10.11896/jsjkx.190300140
Due to its low threshold and high freedom, open source software often encounters slow progress, low efficiency and low quality in the development process. A software cliff wall, as a criterion of project robustness, indicates an unexpected acceleration in otherwise incremental development activities over a short period of time, which is a potential threat to sustainable development in software evolution. Therefore, analyzing the causes of software cliff walls is an effective way to deeply understand the development process of open source projects, to more accurately describe the evolution of software, and to improve the efficiency of software development. The experiment first constructs a series of developer collaboration networks (DCNs) over more than 150 thousand commits from 9 GitHub projects, by month and by quarter respectively. This paper considers a single commit of more than 10 000 lines of code as a software cliff wall. It then introduces 9 metrics, such as the number of nodes, the number of edges, the node update rate, modularity, the average path length, the average degree, the node penetration index, the node out-degree mean, and diversity, to analyze the relationship between DCNs and cliff walls from the perspectives of network scale, network structure and network quality. The results show that: 1) smaller development teams and greater member turnover tend to cause cliff walls; 2) ‘small world’ features among developers help to avoid the emergence of software cliff walls; 3) a quarterly cycle is more appropriate for studying the relationship between DCNs and software cliff walls in the software development process, and the diversity of the development team also affects the creation of cliff walls.
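A small sketch of building one developer collaboration network and computing a few of the structural metrics named above; the commit pairs and the 10 000-line cliff-wall threshold are illustrative assumptions, not the paper's data.

```python
import networkx as nx

# edge = two developers who collaborated on the same files in a given period
commits = [("alice", "bob"), ("alice", "carol"), ("bob", "carol"), ("dave", "alice")]

G = nx.Graph()
G.add_edges_from(commits)

print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
print("average degree:", 2 * G.number_of_edges() / G.number_of_nodes())
print("average path length:", nx.average_shortest_path_length(G))
print("clustering ('small world' hint):", nx.average_clustering(G))

CLIFF_WALL_LOC = 10_000  # a single commit above this many lines counts as a cliff wall
```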
Database & Big Data & Data Science
Application Mode and Challenges of Vehicular Big Data
GE Yu-ming, HAN Qing-wen, WANG Miao-qiong, ZENG Ling-qiu, LI Lu
Computer Science. 2020, 47 (6): 59-65.  doi:10.11896/jsjkx.191200165
With the technical evolution of connected vehicles, people, vehicles, roads and the cloud are all connected, and a large number of application services emerge, covering many areas such as manufacturing, connected vehicle products, the vehicle service market and intelligent travel services. The core of these applications is vehicular big data, and its effective utilization may be an important breakthrough in the transformation and upgrading of the automotive industry in the future. To promote the application of vehicular big data in connected vehicles, related works are reviewed in this paper. According to application demands, this paper starts from the connotation and architecture of vehicular big data, and analyzes the characteristics of data sources and corresponding applications, such as manufacturing, connected vehicle products and the vehicle service market. Then the key technologies of vehicular big data are discussed from four aspects: data collection, data processing and analysis, computing resources and privacy protection. Based on a comprehensive analysis of its development status in policy and technology, this paper anticipates the future application trends of vehicular big data.
Big Data Decomposition-Fusion and Its Intelligent Acquisition
LIU Ji-qin, SHI Kai-quan
Computer Science. 2020, 47 (6): 66-73.  doi:10.11896/jsjkx.191000072
The concepts of big data decomposition-fusion and of the big data distance generated by decomposition and fusion are given. Using these concepts, a union-intersection decomposition theorem of big data, an intersection-union decomposition theorem of big data and their attribute conjunction relations are given. Intelligent generation theorems and the distance relationship of big data fusion are given. A recognition criterion of big data decomposition-fusion, an intelligent algorithm and the algorithm process of big data decomposition-fusion acquisition are given. The application of these theoretical results in intelligent big data decomposition-fusion acquisition is presented. Finally, the new characteristics of ∧-type big data, which is obtained by using the P-sets model, are given.
Chinese Short Text Summarization Generation Model Based on Semantic-aware
NI Hai-qing, LIU Dan, SHI Meng-yu
Computer Science. 2020, 47 (6): 74-78.  doi:10.11896/jsjkx.190600006
Text summarization technology can distill key information from massive data and effectively alleviate the problem of information overload. At present, the sequence-to-sequence model is widely used for English text summarization, but it has not been studied in depth for Chinese text summarization. In the conventional sequence-to-sequence model, the decoder uses, through the attention mechanism, the hidden state of each word output by the encoder as the overall semantic information; however, the hidden state of each word output by the encoder only considers the words before and after the current word, which causes the generated summary to miss the core information of the source text. To solve this problem, a semantic-aware Chinese short text summarization model called SA-Seq2Seq is proposed, based on the sequence-to-sequence model with attention. SA-Seq2Seq applies the pre-trained model BERT to encode the source text so that each word carries overall semantic information, and uses the gold summary as the target semantic information in the decoder to calculate a semantic inconsistency loss, thus ensuring the semantic integrity of the generated summary. Experiments are carried out on the Chinese short text summarization dataset LCSTS. The results show that SA-Seq2Seq improves significantly over the benchmark model on the ROUGE metric: its ROUGE-1, ROUGE-2 and ROUGE-L scores increase by 3.4%, 7.1% and 6.1% respectively on the character-based dataset, and by 2.7%, 5.4% and 11.7% respectively on the word-based dataset. Thus the SA-Seq2Seq model can effectively integrate Chinese short texts and ensure the fluency and consistency of the generated summary, and it can be applied to Chinese short text summarization tasks.
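A guess at the general form such a "semantic inconsistency loss" could take, for illustration only: a distance between sentence-level semantic vectors of the generated and gold summaries (here mean-pooled contextual embeddings), added to the usual sequence loss. The pooling choice and the 0.5 weighting factor are assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def semantic_inconsistency_loss(gen_emb, gold_emb):
    """gen_emb, gold_emb: (batch, seq_len, hidden) contextual embeddings."""
    gen_vec = gen_emb.mean(dim=1)    # sentence-level semantic vector
    gold_vec = gold_emb.mean(dim=1)
    return (1.0 - F.cosine_similarity(gen_vec, gold_vec, dim=-1)).mean()

gen = torch.randn(8, 30, 768)   # stand-in for BERT outputs of generated summary
gold = torch.randn(8, 20, 768)  # stand-in for BERT outputs of gold summary
cross_entropy = torch.tensor(2.3)  # placeholder seq2seq loss
total_loss = cross_entropy + 0.5 * semantic_inconsistency_loss(gen, gold)
```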
Noisy Label Classification Learning Based on Relabeling Method
YU Meng-chi, MU Jia-peng, CAI Jian, XU Jian
Computer Science. 2020, 47 (6): 79-84.  doi:10.11896/jsjkx.190600041
The integrity of sample labels has a significant impact on the accuracy of supervised learning algorithms. However, in real data, due to unprofessional and random labeling processes, the labels of a dataset are inevitably polluted by noise, i.e., the assigned label of a sample differs from its real label. In order to reduce the negative impact of noisy labels on classification accuracy, this paper proposes a noisy label correction approach. It first identifies noisy-label data by applying a base classifier to classify the samples and estimating the noise rate, and then uses the base classifier to relabel the noisy samples, obtaining a dataset in which the noisy samples are corrected. Experiments on synthetic and real datasets show that the relabeling algorithm improves classification results under different base classifiers and different types and rates of noise. Compared with the base classifier, the accuracy of the relabeling algorithm is improved by about 5% on the synthetic dataset, while in high-noise settings on the CIFAR and MNIST datasets, the F1 score of the proposed algorithm is 7% higher than that of Elk08 and Nat13 on average, and 53% higher than that of the base classifier.
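A minimal sketch of the relabeling idea under simplified assumptions (the confidence threshold and the use of out-of-fold predictions as the noise detector are choices made for this example, not the paper's exact procedure): a base classifier flags samples whose given label disagrees with a confident prediction, and those samples are relabeled.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def relabel(X, y_noisy, threshold=0.9):
    base = LogisticRegression(max_iter=1000)
    proba = cross_val_predict(base, X, y_noisy, cv=5, method="predict_proba")
    pred = proba.argmax(axis=1)
    confident = proba.max(axis=1) >= threshold
    suspect = confident & (pred != y_noisy)   # confident disagreement => likely noise
    y_fixed = y_noisy.copy()
    y_fixed[suspect] = pred[suspect]          # relabel the suspected samples
    return y_fixed, suspect.mean()            # corrected labels, crude noise-rate estimate

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] > 0).astype(int)
y_noisy = np.where(rng.random(500) < 0.2, 1 - y, y)  # 20% flipped labels
y_fixed, est_rate = relabel(X, y_noisy)
```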
Data Composition View Positioning Update Approach with Incremental Logs
ZHANG Yuan-ming, LI Meng-ni, HUANG Lang-you, LU Jia-wei, XIAO Gang
Computer Science. 2020, 47 (6): 85-91.  doi:10.11896/jsjkx.190500085
Data resources stored in different units and departments in cloud environments are cross-domain, heterogeneous and complex. As a unified data model for cross-origin and heterogeneous data sources, a data service can publish data sources in the form of services, and a data composition view is generated by composing several data services according to users’ data requirements. Since the data sources are autonomous, updating the data composition view in real time with minimal cost becomes a key issue. This paper proposes a data composition view positioning update approach based on incremental logs. The latest data changes of data sources are captured from incremental logs, and the attributes and tuples in the data composition view are indexed. The index numbers of different tuples can be calculated from positioning attributes, and the corresponding tuple update operations are performed according to the type of data change. A log-based update data acquisition algorithm and a data composition view positioning update algorithm are presented. The proposed approach has been evaluated in a cross-origin heterogeneous elevator data service system using datasets from multiple departments. When the proportion of changed tuples is much smaller than the total number of tuples, or when the data composition view has many attributes, the update efficiency of the positioning update approach is much higher than that of existing methods.
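A toy illustration of the positioning-update idea (the field names and log format are invented for this sketch): each captured change carries an operation type and the positioning-attribute value, so the affected tuple in the materialized view is located directly by index and updated in place instead of recomputing the whole view.

```python
# composition view keyed by the positioning attribute (elevator_id)
view = {101: {"elevator_id": 101, "status": "ok"},
        102: {"elevator_id": 102, "status": "ok"}}

def apply_log_entry(view, entry):
    """Apply one captured incremental-log change to the materialized view."""
    op, key, row = entry["op"], entry["key"], entry.get("row")
    if op == "insert":
        view[key] = row
    elif op == "update":
        view[key].update(row)   # positioned update, no full recomputation
    elif op == "delete":
        view.pop(key, None)

apply_log_entry(view, {"op": "update", "key": 102, "row": {"status": "fault"}})
```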
Robust Low Rank Subspace Clustering Algorithm Based on Projection
XING Yu-hua, LI Ming-xing
Computer Science. 2020, 47 (6): 92-97.  doi:10.11896/jsjkx.190500074
With the advent of the era of big data, how to effectively cluster, analyze and use massive high-dimensional data has become a hot research topic. When traditional clustering algorithms are used to process high-dimensional data, the accuracy and stability of the clustering results are low. Subspace clustering algorithms reduce the feature space of the original data to form different feature subsets, reducing the influence of uncorrelated features on clustering results; they can mine information that is hard to observe in high-dimensional data and have significant advantages in processing it. Aiming at the limitations of existing graph-based subspace clustering algorithms in dealing with noise of unknown type and in solving complex convex problems, this paper combines subspace clustering with spatial projection theory and proposes a projection-based robust low-rank subspace clustering algorithm. Firstly, the original data is projected, the noise in the projection space is eliminated by coding, and missing data is compensated. Then a new mapping is used to construct a sparse l2 similarity graph, and finally subspace clustering is performed on the basis of the l2 graph. The algorithm requires no prior knowledge of the noise type, and the l2 graph can well describe the sparsity and spatial dispersion of high-dimensional data. Three face recognition datasets are selected as experimental datasets. Firstly, the optimal parameters affecting the clustering effect are determined, and then the algorithm is verified in terms of accuracy, robustness and time complexity. The experimental results show that the algorithm achieves high accuracy, low time complexity and good robustness when noise of unknown type is mixed into the face recognition datasets.
Application Research of Improved XGBoost in Imbalanced Data Processing
SONG Ling-ling, WANG Shi-hui, YANG Chao, SHENG Xiao
Computer Science. 2020, 47 (6): 98-103.  doi:10.11896/jsjkx.191200138
When dealing with imbalanced data, traditional classifiers tend to guarantee the accuracy of the majority class while sacrificing that of the minority class, resulting in a high error rate for the minority class. Aiming at this problem, an improved XGBoost method for binary imbalanced data is proposed. The main idea is to address data imbalance at three levels: data, features, and algorithm. Firstly, at the data level, Conditional Generative Adversarial Nets (CGAN) learn the distribution of minority samples, and the trained generator generates supplementary minority samples to adjust the imbalance of the data. Secondly, at the feature level, XGBoost is used for feature combination to generate new features, and the minimal Redundancy-Maximal Relevance (mRMR) algorithm is then used to screen out a feature subset more suitable for imbalanced data classification. Finally, at the algorithm level, a focal loss function for imbalanced data classification is introduced to improve XGBoost. The improved XGBoost is trained on the new dataset to obtain the final model. In the experiments, G-mean and AUC are selected as evaluation indicators, and results on 6 KEEL datasets verify the feasibility of the proposed method. Compared with four existing imbalanced classification models, the proposed method achieves a better classification effect.
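A sketch of plugging a binary focal loss into XGBoost as a custom objective; the alpha and gamma values are illustrative, and to keep the example short and safely correct the gradient and Hessian are taken by finite differences rather than the closed-form derivatives the authors may use.

```python
import numpy as np

def focal_loss(z, y, alpha=0.25, gamma=2.0):
    """Binary focal loss on raw scores z with 0/1 labels y."""
    p = np.clip(1.0 / (1.0 + np.exp(-z)), 1e-7, 1 - 1e-7)
    return -(alpha * y * (1 - p) ** gamma * np.log(p)
             + (1 - alpha) * (1 - y) * p ** gamma * np.log(1 - p))

def focal_objective(z, dtrain, eps=1e-4):
    """XGBoost custom objective: per-sample gradient and Hessian by finite differences."""
    y = dtrain.get_label()
    f0, fp, fm = focal_loss(z, y), focal_loss(z + eps, y), focal_loss(z - eps, y)
    grad = (fp - fm) / (2 * eps)
    hess = np.maximum((fp - 2 * f0 + fm) / eps ** 2, 1e-6)  # keep Hessian positive
    return grad, hess

# usage (assuming an xgboost DMatrix named dtrain):
# import xgboost as xgb
# model = xgb.train({"max_depth": 4}, dtrain, num_boost_round=100, obj=focal_objective)
```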
Computer Graphics & Multimedia
Novel Image Classification Based on Test Sample Error Reconstruction Collaborative Representation
WANG Jun-qian, ZHENG Wen-xian, XU Yong
Computer Science. 2020, 47 (6): 104-113.  doi:10.11896/jsjkx.200200135
Collaborative representation-based classification (CRC) has shown noticeable results on image classification tasks such as face recognition and object recognition. It solves a norm-regularized linear problem for the test sample to obtain a more stable numerical solution. Previous studies have shown that the choice of the regularization parameter plays a very important role in the numerical stability of the collaborative representation. This paper proposes a novel image classification method based on test sample error reconstruction collaborative representation-based classification, called TSER-CRC. The first phase uses a smaller regularization parameter to compute a collaborative representation coefficient and reconstructs the test sample with the obtained coefficient, weakening the error in the original test sample and reducing the inconsistency between the test sample and the training samples. The second phase uses a larger regularization parameter and the test sample reconstructed in the first phase to solve the collaborative representation coefficients, obtaining a numerically stable relationship between the test sample and the training samples of each class. Finally, the test sample is classified by the conventional CRC classification strategy. The proposed method can effectively reduce the errors and outliers in test samples represented in the collaborative subspace spanned by all training samples, thereby increasing the stability of the coding coefficients and the robustness of image classification. Experimental results on five standard datasets show that the proposed method achieves more satisfactory image classification accuracy than traditional CRC and several other classical image classification methods.
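A compact two-phase sketch of this idea using the standard closed-form ridge solution of CRC; the regularization values and the residual-based decision rule follow conventional CRC, while the specific lambdas are illustrative assumptions.

```python
import numpy as np

def crc_code(X, y, lam):
    """Collaborative coding: argmin ||y - Xc||^2 + lam*||c||^2 (closed form)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def tser_crc(X, labels, y, lam_small=1e-3, lam_large=1e-1):
    """X: (dim, n) training samples as columns; labels: (n,) class ids; y: (dim,) test sample."""
    c1 = crc_code(X, y, lam_small)
    y_rec = X @ c1                        # phase 1: error-reconstructed test sample
    c2 = crc_code(X, y_rec, lam_large)    # phase 2: stable coding of the reconstruction
    classes = np.unique(labels)
    residuals = [np.linalg.norm(y_rec - X[:, labels == k] @ c2[labels == k])
                 for k in classes]
    return classes[int(np.argmin(residuals))]   # class with smallest residual
```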
Human Keypoint Matching Network Based on Encoding and Decoding Residuals
YANG Lian-ping, SUN Yu-bo, ZHANG Hong-liang, LI Feng, ZHANG Xiang-de
Computer Science. 2020, 47 (6): 114-120.  doi:10.11896/jsjkx.200300079
Human pose estimation, especially multi-person pose estimation, is gradually penetrating various fields such as education and sports. High-precision and lightweight multi-person pose estimation is a current research hotspot. Bottom-up multi-person pose estimation methods generally have strong real-time performance, but their accuracy is not high and their network structures are huge. For the keypoint association problem, this paper proposes an efficient pose estimation matching network with few parameters. The network improves the basic ResNet module in the encoding stage to obtain its layer structure; extracting features with these structures greatly reduces the model’s parameter count. In the decoding stage, a specially designed deconvolution structure is used, and residual connections are added to the entire network, which greatly improves its accuracy. The whole algorithm can correctly match the keypoint heat maps to each person and obtain the final human keypoint estimates. The proposed model is a portable and efficient human keypoint matching network: its mAP on the ground truth of the COCO dataset reaches 89.7, with only 8.01M parameters. Compared with the current best bottom-up multi-person pose estimation method, the proposed model improves mAP by 0.5 while reducing the parameters to 1/10 of the original. The model is trained and verified on the COCO 2017 and COCO 2014 datasets and achieves high accuracy, showing that it handles heat maps of keypoints of various human bodies and obtains good results. In addition, this paper designs a variety of ablation experiments for different layer structures of the network model; the lightest structure has only 1.28M parameters while still reaching an mAP of 81.8.
Face Recognition in Non-ideal Environment Based on Sparse Representation and Support Vector Machine
WU Qing-hong, GAO Xiao-dong
Computer Science. 2020, 47 (6): 121-125.  doi:10.11896/jsjkx.190500058
Current face recognition algorithms achieve high recognition accuracy and strong adaptability in ideal environments, but in non-ideal environments their accuracy declines sharply. In order to improve the stability of face recognition results, a face recognition algorithm for non-ideal environments based on the fusion of sparse representation and support vector machines is designed. Firstly, a feature dictionary for face recognition in non-ideal environments is constructed. Then the training and test samples are processed by the feature dictionary to construct the learning samples. Finally, a support vector machine classifier is established to perform face recognition in non-ideal environments. Several standard face databases are used to test the algorithm: its recognition accuracy in non-ideal environments is high, and its false recognition and rejection rates are low. Compared with other face recognition algorithms, it adapts better to environmental changes and achieves a better overall recognition effect in non-ideal environments, improving the efficiency of face recognition with obvious advantages.
Small Size Face Detection Based on Feature Map Fusion
YANG Shao-peng, LIU Hong-zhe, WANG Xue-qiao
Computer Science. 2020, 47 (6): 126-132.  doi:10.11896/jsjkx.19050002
Face detection means finding and locating all faces in input pictures or videos. In order to address the difficulties caused by the diversity of face sizes, especially small faces, a new single-shot small-scale face detection method based on feature map fusion is presented. The method first selects the feature maps to be used for detection reasonably, using different feature maps to detect faces of different sizes. Then, by combining deep and shallow feature maps, context information is introduced, thereby improving the detection precision for small faces. The proposed model is trained and tested on an NVIDIA GTX TITAN X using the WIDERFACE dataset. It achieves 88.9% (hard), 93.5% (medium) and 94.3% (easy) AP on the three WIDERFACE test subsets, at 39 fps, outperforming other strong detection methods in both detection accuracy and detection speed.
Scene Graph Generation Model Combining Attention Mechanism and Feature Fusion
HUANG Yong-tao, YAN Hua
Computer Science. 2020, 47 (6): 133-137.  doi:10.11896/jsjkx.190600110
Understanding a visual scene requires not only identifying single objects in isolation but also capturing the interactions between different objects. Scene graph generation obtains all the (subject-predicate-object) tuples and describes the object relationships inside an image, and is widely used in image understanding tasks. To address the problem that existing scene graph generation models use complicated structures with slow inference, a scene graph generation model combining an attention mechanism and feature fusion with the Factorizable Net structure is proposed. Firstly, an image is decomposed into subgraphs, each containing several objects and their relationships. Then, position and shape information is merged into the object features, and the attention mechanism realizes message passing between object features and subgraph features. Finally, object classes and the relationships between objects are inferred from the object and subgraph features. The experimental results show that the accuracy of visual relationship detection is 22.78% to 25.41%, and the accuracy of scene graph generation is 16.39% to 22.75%, which are 1.2% and 1.8% higher than Factorizable Net on multiple visual relationship detection datasets. Besides, the proposed model can perform object relationship detection in 0.6 seconds on a GTX 1080Ti graphics card. The results demonstrate that the subgraph structure significantly reduces the number of image regions to be inferred, and that the feature fusion method and the attention mechanism improve the quality of the deep features, so objects and their relationships can be predicted more quickly and accurately, overcoming the poor timeliness and low accuracy of traditional scene graph generation models.
Color Image Super-resolution Algorithm Based on Inter-channel Correlation and Nonlocal Self-similarity
MO Cai-wang, CHANG Kan, LI Heng-xin, LI Ming-hong, QIN Tuan-fa
Computer Science. 2020, 47 (6): 138-143.  doi:10.11896/jsjkx.190500047
Most existing single image super-resolution (SR) algorithms are designed to improve the resolution of a single channel. When dealing with color images, the inter-channel correlation is ignored, so the reconstructed high-resolution (HR) image is prone to distortion. To solve this problem, this paper proposes an SR algorithm for color images that jointly takes inter-channel correlation (ICC) and non-local self-similarity (NLSS) into consideration. First of all, to make full use of the inter-channel correlation of color images, the total variation (TV) norm of the residual signals and the TV norm of the average signal of the three color channels are respectively computed. Secondly, to further improve the SR results, the reconstructed HR images are updated based on the non-local self-similarity of natural images. Finally, a split Bregman based iteration is proposed to solve the established optimization problem. The proposed algorithm is compared with several state-of-the-art methods. At a scale factor of 3, the average peak signal-to-noise ratio (PSNR) improvement achieved by the proposed algorithm reaches 0.5 dB on Set5 and 0.36 dB on Set14. The experimental results demonstrate that jointly utilizing ICC and NLSS effectively improves the quality of the reconstructed HR color images.
Object-level Edge Detection Algorithm Based on Multi-scale Residual Network
ZHU Wei, WANG Tu-qiang, CHEN Yue-feng, HE De-feng
Computer Science. 2020, 47 (6): 144-150.  doi:10.11896/jsjkx.190700121
Object-level edge detection is a key basic technology in the field of intelligent vision processing. However, edge detection results based on convolutional neural networks suffer from problems such as low resolution and high noise. Therefore, an object-level edge detection algorithm based on a multi-scale residual network is proposed. Firstly, a hybrid dilated convolution residual block is designed to replace the ordinary convolution kernels in the original residual network, enlarging the receptive field of the network. Secondly, a multi-scale feature enhancement module is designed to extract multi-scale edge features, enlarging the information receiving domain of the network. Finally, a pyramid multi-scale feature fusion module combining top-level semantic features is designed to fuse the feature information at different scales and output the edge-detected image. To verify the effectiveness of the proposed algorithm, experiments are performed on the public dataset BSDS500. The experimental results show that, compared with existing algorithms, the proposed algorithm achieves a better edge detection effect: the objective indicators ODS, OIS and AP increase to 0.819, 0.838 and 0.849 respectively, while the subjective detection results are closer to the ground truth with less noise.
Local Gabor Convolutional Neural Network for Hyperspectral Image Classification
WANG Yan, WANG Li
Computer Science. 2020, 47 (6): 151-156.  doi:10.11896/jsjkx.190500147
In order to solve the problem of insufficient utilization of hyperspectral image features, a new classification method based on spatial-spectral features is proposed. Firstly, principal component analysis (PCA) and linear discriminant analysis (LDA) are used to reduce the dimensionality of hyperspectral images. Secondly, the Gabor kernel is introduced to design a Local Gabor Convolution (LGC) layer based on the local Gabor kernel. Finally, a new convolutional neural network (LGCNN) based on the LGC layer is designed for classification. The proposed method is validated on the Indian Pines and Salinas scene datasets and compared with other classical classification methods. Experimental results show that this method not only greatly reduces the number of learned parameters and the complexity of the model, but also shows good classification performance: its overall accuracy reaches 99%, the average classification accuracy reaches more than 98%, and the Kappa coefficient reaches more than 98%.
Correlation Filter Object Tracking Algorithm Based on Global and Local Block Cooperation
YU Lu, HU Jian-feng, YAO Lei-yue
Computer Science. 2020, 47 (6): 157-163.  doi:10.11896/jsjkx.190500078
Traditional correlation filter trackers are not effective in dealing with target scale changes and partial occlusion. Aiming at this problem, a block tracking algorithm based on KCF is proposed in this paper. First, the tracking target is divided horizontally or vertically according to its appearance. Then, in the tracking process, local filters track the local blocks, the center position of the global block is predicted from the tracked local blocks, and the final position of the target is determined by the global filter. The relevant update information and scale parameters are fed back to the local filters to update both the global and local filters. In addition, unlike KCF, which only uses HOG features, CN features are introduced in the proposed algorithm to enhance the tracking of target deformation and motion blur. Moreover, in order to solve the model drift problem caused by partial occlusion, a method based on effective local blocks is proposed to guide model updating, and criteria for evaluating effective local blocks are defined. Furthermore, the scale of the target can be effectively estimated by analyzing the distances between local blocks, which solves tracking failures caused by target scale changes. The algorithm is evaluated on the public dataset OTB-100, which contains 100 video samples. The results show that the proposed algorithm performs well under scale changes and partial occlusion. Compared with KCF, its accuracy is improved by 10%, its overall performance is better than four other KCF-based algorithms, and its processing speed reaches 32 fps.
Person Re-identification Fusing Viewpoint Mechanism and Pose Estimation
PEI Jia-zhen, XU Zeng-chun, HU Ping
Computer Science. 2020, 47 (6): 164-169.  doi:10.11896/jsjkx.190500013
Person re-identification is a very challenging task in video surveillance. A person’s appearance changes significantly due to occlusion and differences in illumination, posture and viewpoint, which greatly affects the accuracy of person re-identification. To overcome these difficulties, this paper proposes a person re-identification method based on a viewpoint mechanism and pose estimation. First, the pose estimation algorithm Openpose is used to locate the joint points of a person. Then, viewpoint discrimination is performed on the image to obtain viewpoint information, and local regions based on the viewpoint information and joint point locations are proposed to generate a partial image. Next, the global image and the partial image are input into the CNN simultaneously to extract features. Finally, in order to obtain a more robust feature representation, a feature fusion network fuses the global and local features. Experimental results show that the proposed method achieves higher person re-identification accuracy: on the CUHK03 dataset, rank-1 reaches 71.3%, and on the Market1501 and DukeMTMC-reID datasets, mAP reaches 63.2% and 60.5% respectively. Therefore, the proposed method copes well with changes in posture, viewpoint and other issues.
Block Integration Based Image Clustering Algorithm
LIU Shu-jun, WEI Lai
Computer Science. 2020, 47 (6): 170-175.  doi:10.11896/jsjkx.190400052
Spectral-based subspace clustering algorithms have shown good results, but traditional subspace clustering algorithms need to vectorize images, which loses the two-dimensional structural information carried by the images themselves. In order to reduce this loss, a block integration based image clustering (BI-CI) algorithm is proposed. First, the images are divided into several matrix blocks. Then, nuclear norm based matrix regression is used to obtain the coefficient matrix of each block, and a method is proposed to set the weight of each matrix block according to its rank information. Finally, the integral coefficient matrix is obtained from the per-block coefficient matrices and the ranks of the corresponding matrix blocks, and the final clustering results are obtained by performing spectral clustering on this coefficient matrix. Experimental results show that the proposed method is more robust than existing algorithms and achieves more accurate clustering results.
Flower Image Enhancement and Classification Based on Deep Convolution Generative Adversarial Network
YANG Wang-gong, HUAI Yong-jian
Computer Science. 2020, 47 (6): 176-179.  doi:10.11896/jsjkx.190600142
In order to improve the accuracy of flower image recognition and classification, an algorithm based on deep convolutional generative adversarial networks is used to identify and classify flower images. To preserve the feature integrity of flower images during convolution, real flower images of different sizes are partitioned into an equal number of blocks regardless of block size, and the blocked images are then deeply convolved and enhanced by max pooling, which also generates the noise input. The generated and real samples are then compared and discriminated, and a cross-entropy error is used as the evaluation function to obtain the flower image recognition and classification results. In this paper, flower image enhancement, recognition of similar flowers and classification of different flower images are simulated respectively. Experiments show that the algorithm has obvious advantages in flower image classification accuracy and good stability.
Gesture Recognition Algorithm Based on Improved Multiscale Deep Convolutional Neural Network
JING Yu, QI Rui-hua, LIU Jian-xin, LIU Zhao-xia
Computer Science. 2020, 47 (6): 180-183.  doi:10.11896/jsjkx.200200030
Since traditional shallow learning networks rely too much on the manual selection of gesture features, they cannot adapt in real time to complex and varied natural scenes. Based on the convolutional neural network architecture, this paper proposes an improved multi-scale deep network gesture recognition model, which overcomes the drawbacks of manually extracted features by using convolutional layers to learn gesture features automatically. In this method, adaptive multi-scale features are introduced so that convolution kernels of different sizes in the same convolutional layer generate features at different scales, and feature maps at different levels are fused by cascading shallow and deep features. In addition, in order to enhance the generalization ability of the model, this paper proposes a loss function based on regularization constraints. The experimental results show that the recognition accuracy of the proposed network model is higher than that of an ordinary single-scale convolutional neural network, that the shortcomings of imprecise and incomplete extraction as well as poor stability are overcome, and that the time required for network training does not increase greatly.
Artificial Intelligence
Application of Natural Language Processing in Social Communication:A Review and Future Perspectives
WU Xiao-kun, ZHAO Tian-fang
Computer Science. 2020, 47 (6): 184-193.  doi:10.11896/jsjkx.191200151
Natural language processing (NLP), as a branch of artificial intelligence, has accelerated the development of social communication studies in both theory and application. This paper introduces the historical development of NLP, and then reviews the application of NLP in social communication studies in five aspects: fake news detection, commonsense reasoning, automated journalism, offensive language identification, and affective computing. Some commonly used datasets are provided, and the advantages and deficiencies of existing research are discussed. Furthermore, to promote the deep integration of NLP techniques and social communication, this paper proposes four promising application fields after investigating communication theories: building group decision support systems, computer-mediated intimate relationship judgment, attribute analysis based on social judgment theory, and the generation of public agendas. Overall, this paper paves the way for intelligent social communication analysis.
Review of Comment-oriented Aspect-based Sentiment Analysis
ZHANG Yan, LI Tian-rui
Computer Science. 2020, 47 (6): 194-200.  doi:10.11896/jsjkx.200200127
Comment-oriented aspect-level sentiment analysis is one of the key issues in text analysis. With the rapid development of social media, the number of online comments has exploded. More and more people are willing to express their attitudes and emotions on the Internet, but the style and quality of online comments are uneven, and accurately extracting users’ opinions has become difficult. At the same time, users pay more attention to fine-grained information when browsing comments, and performing aspect-level sentiment analysis on comments can help users make better decisions. This paper first introduces the related concepts and problem descriptions of aspect-level sentiment analysis, and then introduces the recent research status of aspect-level sentiment analysis at home and abroad from the perspectives of aspect extraction and aspect-based sentiment analysis. The corpora and sentiment dictionary resources related to aspect-level sentiment analysis tasks are shared, and finally the challenges faced by aspect-level sentiment analysis and possible future research directions are analyzed.
Information Cascade Prediction Model Based on Hierarchical Attention
ZHANG Zhi-yang, ZHANG Feng-li, CHEN Xue-qin, WANG Rui-jin
Computer Science. 2020, 47 (6): 201-209.  doi:10.11896/jsjkx.200200117
Information cascade prediction is a research hotspot in the field of social network analysis. It learns the propagation pattern of information in online social media through the diffusion sequences and topology graphs of information cascades. Most current models for this task are based on recurrent neural networks and only consider the temporal structure information of an information cascade or the spatial structure information inside sequences, and cannot learn topological relationships between sequences. Moreover, existing cascade graph structure learning methods cannot assign different weights to the neighbors of a node, resulting in poor association learning between nodes. In response to these problems, this paper proposes an information cascade sampling method based on node representations, which models an information cascade as node representations rather than a sequence representation. This paper also proposes an information cascade prediction model based on a hierarchical attention network (ICPHA), which learns the temporal structure information of node sequences through a recurrent neural network layer with self-attention, and learns the spatial structure information between node representations through a multi-head attention mechanism. In this way, ICPHA jointly models the structural information of information cascades through a hierarchical attention network. ICPHA achieves leading prediction results on Twitter, Memes, and Digg, and has good generalization ability.
Knowledge-driven Method Towards Dynamic Partners Recommendation in Inter-enterprise Collaboration
WANG Tie-xin, LI Wen-xin, CAO Jing-wen, YANG Zhi-bin, HUANG Zhi-qiu, WANG Fei
Computer Science. 2020, 47 (6): 210-218.  doi:10.11896/jsjkx.190700194
The rapid development of information technology strongly promotes the process of market globalization. The trend of economic globalization has brought unprecedented opportunities and challenges to small and medium-sized enterprises (SMEs). Enterprises can no longer survive in isolation: in order to respond quickly to changing market demand, SMEs need to establish dynamic collaborative relationships with other enterprises while focusing on their core business. To solve the problem of how to construct a dynamic collaborative enterprise alliance efficiently, a method of dynamically recommending the best partners in inter-enterprise collaboration is proposed by building domain ontologies and using semantic detection technology. This method aims to overcome the defects of traditional enterprise collaboration, such as “fixed cooperative participants” and “single cooperative mode”, and can quickly and efficiently recommend competent participants by matching specific cooperative goals (and preferences) with capabilities and attributes. By studying enterprise modeling and enterprise collaboration management and summarizing the research status of model-driven enterprise collaboration construction methods, a meta-model describing the context of inter-enterprise collaboration is defined. Furthermore, corresponding domain ontologies and semantic detection methods are proposed to improve the efficiency of dynamic partner recommendation. Finally, the effectiveness of this recommendation method is demonstrated by a case study of disassembly and connection machine manufacturing, and its performance is evaluated.
Fuzzy C-means Clustering Based Partheno-genetic Algorithm for Solving MMTSP
HU Shi-juan, LU Hai-yan, XIANG Lei, SHEN Wan-qiang
Computer Science. 2020, 47 (6): 219-224.  doi:10.11896/jsjkx.190500137
Due to the rapid development of application fields such as the modern logistics industry, the multiple traveling salesman problem has attracted more and more attention. For the multiple traveling salesman problem with multiple depots and closed paths (MMTSP), this paper proposes a fuzzy C-means clustering based partheno-genetic algorithm (FCMPGA). The algorithm first uses fuzzy C-means clustering to divide all cities into several classes according to their membership degrees, then establishes a corresponding traveling salesman problem for each class and solves it using an improved partheno-genetic algorithm. Finally, the results for all classes are combined to form a solution of the MMTSP. This strategy of clustering prior to the genetic operations not only greatly reduces the search space of the algorithm, but also allows the reduced search space to be explored more adequately, thereby obtaining optimal solutions more quickly. Compared with several related algorithms, the experimental results on a number of test instances from the TSPLIB database show that FCMPGA exhibits good overall performance on test instances of all sizes, and on large-scale problems in particular its performance is better and its convergence is faster.
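A plain fuzzy C-means sketch of the clustering stage of this approach (the partheno-genetic TSP stage is omitted, and the fuzzifier m, iteration count, and random city coordinates are illustrative choices, not the paper's settings).

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Standard FCM: returns hard assignment (argmax membership) and cluster centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                               # memberships sum to 1 per city
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        W = U ** m
        centers = (W @ X) / W.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-10
        U = (1.0 / d ** p) / (1.0 / d ** p).sum(axis=0)   # membership update
    return U.argmax(axis=0), centers

cities = np.random.default_rng(1).random((60, 2)) * 100
assignment, depots = fuzzy_c_means(cities, c=3)
# each class then becomes one closed-path TSP, solved here by the partheno-GA
```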
Improved FMEA Method Based on Interval-Valued Hesitant Fuzzy TODIM
XIAO Cheng-xue, GUO Jian
Computer Science. 2020, 47 (6): 225-229.  doi:10.11896/jsjkx.200200082
Failure mode and effects analysis (FMEA) is a preventive risk analysis method that has many shortcomings in practical application. The application environment of the traditional FMEA method is highly uncertain, and the final result of the traditional analysis deviates greatly from the actual situation. Based on fuzzy theory and a multi-attribute decision model, a risk ranking method using interval-valued hesitant fuzzy TODIM is proposed. Firstly, interval-valued hesitant fuzzy elements are used to construct the expert evaluation matrix. Secondly, the objective weights of the risk factors are calculated by the maximum deviation method, the subjective weights are determined by expert evaluation, and the comprehensive weights are obtained by combining the two. Finally, the occurrence (O), severity (S) and detection (D) of the failure modes are evaluated by the TODIM method. Taking the risk assessment of subway doors as an example, the effectiveness of the proposed method is verified.
SIR Propagation Model Combing Incomplete Information Game
BAO Jun-bo, YAN Guang-hui, LI Jun-cheng
Computer Science. 2020, 47 (6): 230-235.  doi:10.11896/jsjkx.190400164
Social networks have become an important form of communication in modern society, and the mechanisms of information transmission and control in social networks have become a hot topic in current research. Taking into account the uncertainty of information authenticity in society, this paper introduces game theory and the social reinforcement effect to describe the diffusion probability of information accurately. It highlights the individual differences of nodes in the process of information propagation, considers from a game perspective the impact of different propagation probabilities on node propagation under true and false messages, uses an incomplete information game to describe the basic propagation probability, and then adjusts the basic propagation probability according to the social reinforcement effect, designing and studying an SIR propagation model based on the incomplete information game. Simulations are carried out on the small-world model, the scale-free model and a real network dataset, with experiments covering network model type, network size, propagation probability and other aspects. The results show that the proposed propagation model enriches the research techniques for message propagation control and immunization in social networks, and that the social reinforcement effect has a notable influence on propagation.
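A toy SIR spread on a small-world network where the per-contact spreading probability grows with the number of prior exposures, as a crude stand-in for the social reinforcement effect; the game-theoretic payoff term is abstracted into base_p, and all parameter values are illustrative.

```python
import random
import networkx as nx

def sir_spread(G, base_p=0.1, reinforce=0.05, recover_p=0.3, steps=50, seed=0):
    random.seed(seed)
    state = {v: "S" for v in G}
    exposures = {v: 0 for v in G}
    state[next(iter(G))] = "I"          # seed one infected node
    for _ in range(steps):
        for v in [v for v in G if state[v] == "I"]:
            for u in G.neighbors(v):
                if state[u] == "S":
                    exposures[u] += 1   # repeated exposure reinforces adoption
                    p = min(1.0, base_p + reinforce * (exposures[u] - 1))
                    if random.random() < p:
                        state[u] = "I"
            if random.random() < recover_p:
                state[v] = "R"
    return sum(s == "R" for s in state.values())

G = nx.watts_strogatz_graph(200, 6, 0.1)
print("final recovered:", sir_spread(G))
```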
Computer Network
Hybrid Software Defined Network Energy Efficient Routing Algorithm Based on Genetic Algorithm
ZHANG Ju, WANG Hao, LUO Shu-ting, GENG Hai-jun, YIN Xia
Computer Science. 2020, 47 (6): 236-241.  doi:10.11896/jsjkx.191000139
Abstract PDF(1565KB) ( 1122 )   
References | Related Articles | Metrics
With the rapid development of software defined network (SDN) technology, the Internet will for a long time remain a hybrid SDN network in which traditional network devices and SDN devices coexist. Studying energy-efficient routing in hybrid SDN networks is therefore a key scientific problem. This paper proposes a hybrid software defined network energy-efficient routing algorithm based on a genetic algorithm (EEHSDNGA) and is devoted to solving two problems: first, how to choose which traditional network devices to upgrade to SDN devices; second, how to decide which links to shut down. A genetic algorithm is employed to solve the first problem. For the second, this paper proposes a link criticality model, which closes the links in the network one by one according to their importance. The experimental results show that the energy saving ratio of EEHSDNGA reaches 36% in the Abilene network and 42.5% in the GEANT network, outperforming LF, HEATE and EEGAH.
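The link-criticality idea can be sketched as follows, with edge betweenness (computed once, for simplicity) as an assumed stand-in for the paper's criticality measure: the least critical links are powered off one by one as long as the network stays connected. The GA that selects which devices to upgrade is omitted here.

```python
import networkx as nx

def energy_saving_links(G):
    """Shut down links in ascending order of criticality, keeping connectivity."""
    crit = nx.edge_betweenness_centrality(G)             # stand-in criticality
    closed = []
    for e in sorted(crit, key=crit.get):                 # least critical first
        G.remove_edge(*e)
        if nx.is_connected(G):
            closed.append(e)                             # safe to power off
        else:
            G.add_edge(*e)                               # restore, keep link on
    return closed

G = nx.random_regular_graph(4, 20, seed=1)
off = energy_saving_links(G.copy())
print(f"links powered off: {len(off)} of {G.number_of_edges()}")
```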
Diffusion Maximum Correntropy Criterion Variable Step-size Affine Projection Sign Algorithm
LIN Yun, HUANG Zhen-hang, GAO Fan
Computer Science. 2020, 47 (6): 242-246.  doi:10.11896/jsjkx.190500080
Abstract PDF(1914KB) ( 561 )   
References | Related Articles | Metrics
At present, most distributed estimation algorithms minimize the mean square error as their cost function, which causes performance to deteriorate or even diverge under impulsive noise. The diffusion affine projection sign algorithm (DAPSA) uses the L1 norm as its cost function, which makes it robust to impulsive noise environments and gives it a fast convergence speed. However, under a fixed step size there is a trade-off between maintaining a fast initial convergence speed and achieving a low steady-state error. In order to reduce the steady-state misadjustment of DAPSA in non-Gaussian noise while maintaining a fast initial convergence speed, a diffusion maximum correntropy criterion variable step-size affine projection sign algorithm (DMCCVSS-APSA) is proposed. The algorithm uses an improved chi-square kernel instead of an improved Gaussian kernel as the kernel function; the adaptive step-size method effectively reduces the steady-state error while retaining a fast initial convergence speed, and an adaptive dynamic-range method based on a priori error estimation reduces the steady-state error further. The improved chi-square kernel is then compared with the improved Gaussian kernel, and DMCCVSS-APSA is compared with other distributed algorithms and with DAPSA under different impulsive noises. Simulation results show that DMCCVSS-APSA outperforms the comparison algorithms, reducing the steady-state error by more than 5dB at a similar initial convergence speed. The experimental data demonstrate that the variable step-size method and the adaptive dynamic-range method built on fixed step-size DAPSA effectively reduce the steady-state error and are strongly robust to impulsive noise, optimizing the distributed affine projection algorithm. The combination with the ATC mode and the choice of the optimal sensitivity factor remain for further research.
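The following single-node sketch shows the shape of a kernel-scaled variable step-size APSA update: a huge a priori error (an impulse) shrinks the kernel and hence the step. A plain Gaussian kernel is used here for brevity in place of the paper's improved chi-square kernel, and the diffusion combine step across nodes is omitted; all parameter values are illustrative.

```python
import numpy as np

def vss_apsa_step(w, X, d, mu_max=0.05, sigma=2.0, eps=1e-8):
    """One update: X is L x P (last P input regressors), d has length P."""
    e = d - X.T @ w                                   # a priori errors
    g = np.exp(-np.dot(e, e) / (2 * sigma ** 2))      # kernel of the error
    mu = mu_max * g                                   # impulse -> tiny step
    p = X @ np.sign(e)
    return w + mu * p / (np.sqrt(p @ p) + eps)        # normalized sign update

# identify a length-8 FIR system under occasional impulsive noise
rng = np.random.default_rng(0)
h = rng.standard_normal(8)
w = np.zeros(8)
P = 4                                                 # projection order
x = rng.standard_normal(5000)
for k in range(16, 5000):
    X = np.stack([x[k - i - 8:k - i][::-1] for i in range(P)], axis=1)
    d = X.T @ h + 0.01 * rng.standard_normal(P)
    if rng.random() < 0.01:                           # impulsive outburst
        d += 50 * rng.standard_normal(P)
    w = vss_apsa_step(w, X, d)
print("misalignment (dB):",
      10 * np.log10(np.sum((w - h) ** 2) / np.sum(h ** 2)))
```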
Energy Optimization Oriented Resource Management in Mobile Cloud Computing
JIN Xiao-min, HUA Wen-qiang
Computer Science. 2020, 47 (6): 247-251.  doi:10.11896/jsjkx.190400020
Abstract PDF(1933KB) ( 606 )   
References | Related Articles | Metrics
As an extension of traditional cloud computing, mobile cloud computing (MCC) breaks through the resource bottleneck of mobile devices and enhances their capabilities through computation offloading. However, alongside its advantages MCC faces many problems. Resource management is crucial to the healthy operation of MCC and is key to whether MCC can be scaled up. To address resource management in MCC, a resource management model that minimizes the energy consumption of the cloud resource operator is first established; it is a constrained combinatorial optimization problem. A solution algorithm based on a heuristic adaptive simulated annealing genetic algorithm is then proposed. The algorithm initializes the population with the first-fit algorithm and combines adaptive and simulated annealing mechanisms to optimize its genetic operations. Simulations show that the proposed algorithm obtains an approximately optimal resource management strategy, converges quickly, and does not easily fall into local optima. The simulations also compare the resource management performance of the traditional round-robin and first-fit algorithms, and the results show that neither is suitable for resource management in MCC.
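The first-fit seeding of the population can be sketched in a few lines; the demands, capacity and task-to-server encoding below are illustrative assumptions, and the adaptive crossover/mutation and simulated-annealing acceptance of the full algorithm are not shown.

```python
import random

def first_fit(demands, capacity):
    """Place each task on the first server with room; open a new one if none."""
    load, plan = [], []
    for d in demands:
        for i, l in enumerate(load):
            if l + d <= capacity:
                load[i] += d
                plan.append(i)
                break
        else:                                  # no existing server fits
            load.append(d)
            plan.append(len(load) - 1)
    return plan, len(load)                     # individual = task -> server map

demands = [random.uniform(0.1, 0.6) for _ in range(30)]
plan, n_servers = first_fit(demands, capacity=1.0)
print("servers used by the first-fit seed individual:", n_servers)
```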
Workflow Scheduling Strategy Based on HEDSM Under Cloud Environment
SUN Min, CHEN Zhong-xiong, YE Qiao-nan
Computer Science. 2020, 47 (6): 252-259.  doi:10.11896/jsjkx.190400047
Abstract PDF(1803KB) ( 669 )   
References | Related Articles | Metrics
Traditional algorithms perform poorly on task scheduling in the cloud environment, and their solutions cannot meet the diverse needs of users. Based on three optimization goals, task completion time, completion cost and resource idle rate, this paper simulates the stages of a heuristic algorithm (initialization, fitness assessment, task scheduling and selection) to construct a hierarchical evaluation and dynamic selection model (HEDSM). In the initialization phase, the workflow task model is preprocessed with the traditional list scheduling algorithm HEFT so that tasks carry priorities. In the fitness assessment phase, evaluation models are constructed at two levels, cloud users and cloud service providers, to meet both sides' needs. The task scheduling phase proceeds in two steps: first, a set of policies is defined and tasks are pre-scheduled so that the pre-scheduling scheme inherits the scheduling advantages of each strategy; second, a task migration policy refines the pre-scheduling scheme to enhance the performance of the algorithm. In the selection phase, the appropriate scheduling scheme is selected from the solution set according to the evaluation model. Experiments on the WorkflowSim simulation platform with scientific workflow instances compare the proposed approach against the traditional Min-Min, Max-Min and FCFS scheduling strategies and the existing IMax-Min and LWRound_Robin strategies, evaluating the algorithms from two aspects: the diversity of user requirements and IROS. The results show that the proposed algorithm improves completion time and cost, making it more suitable for complex task scheduling in the cloud environment.
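The HEFT preprocessing mentioned above assigns each task an upward rank, its mean cost plus the longest rank path through its successors, and tasks are prioritized in decreasing rank order. A minimal sketch on an assumed toy dict-based DAG:

```python
from functools import lru_cache

cost = {"A": 3, "B": 2, "C": 4, "D": 1}      # mean execution times
succ = {"A": {"B": 1, "C": 2},               # task -> {successor: transfer cost}
        "B": {"D": 2},
        "C": {"D": 1},
        "D": {}}

@lru_cache(maxsize=None)
def upward_rank(t):
    """rank_u(t) = cost(t) + max over successors of (edge cost + rank_u)."""
    return cost[t] + max((c + upward_rank(s) for s, c in succ[t].items()),
                         default=0)

order = sorted(cost, key=upward_rank, reverse=True)   # scheduling priority
print([(t, upward_rank(t)) for t in order])           # A first, D last
```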
Task Migration Strategy with Energy Optimization in Mobile Edge Computing
HU Jin-tian, WANG Gao-cai, XU Xiao-tong
Computer Science. 2020, 47 (6): 260-265.  doi:10.11896/jsjkx.190400074
Abstract PDF(1981KB) ( 990 )   
References | Related Articles | Metrics
With the advancement of communication technology, resource-constrained mobile terminal devices can no longer meet mobile users' rapidly increasing demand for data processing. On the one hand, mobile edge computing can migrate tasks from the mobile device to an edge computing server, which solves the problem of the device's insufficient computing power to some extent. On the other hand, how to maintain high service performance during task migration while reducing the energy consumption of mobile terminals is a concern for both researchers and mobile users. This paper studies the problem of minimizing the average energy consumption of data migration subject to a migration time benefit. Firstly, the migration rate threshold for the edge computing servers that the mobile terminal probes periodically is derived from the migration time revenue formula. Secondly, the problem is formulated as an optimal stopping problem of minimizing the average energy consumption of data migration under a time-return constraint; it is proved that an optimal stopping rule exists, and the optimal average energy consumption of data migration is obtained. Finally, the mobile terminal selects an edge computing server for task migration using the obtained migration rate threshold and the optimal average energy consumption, thereby implementing an energy-optimized task migration strategy. In simulation experiments, the proposed strategy is compared with other migration strategies on performance parameters such as average migrated data, average migration time and average data migration energy consumption. The results show that, compared with the two baseline strategies, the energy-optimized task migration strategy achieves shorter migration time and lower average data migration energy consumption, and improves the effective data mobility parameter by about 10% to 40%.
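The threshold rule at the heart of the strategy can be illustrated with a toy probe loop: the terminal stops at the first server whose observed rate clears the threshold, or settles for the best seen at the deadline. The rates and the threshold value below are synthetic; the paper derives the threshold from the time-benefit formula.

```python
import random

def migrate_with_threshold(probe, threshold, max_probes=20):
    """Optimal-stopping style rule: accept the first rate above the threshold."""
    best = 0.0
    for k in range(max_probes):
        r = probe()                      # observed migration rate this period
        best = max(best, r)
        if r >= threshold:
            return k, r                  # stop: migrate to this server now
    return max_probes, best              # deadline: settle for best observed

random.seed(1)
step, rate = migrate_with_threshold(lambda: random.uniform(5, 50),
                                    threshold=40.0)
print(f"migrated at probe {step} with rate {rate:.1f} Mb/s")
```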
Millimeter Wave Based Adaptive Multi-beamforming Scheme for High-speed Railway Vehicle-ground Communications
JIANG Rui, YIN Hui, XU You-yun
Computer Science. 2020, 47 (6): 266-270.  doi:10.11896/jsjkx.200100058
Abstract PDF(2213KB) ( 752 )   
References | Related Articles | Metrics
A millimeter wave based adaptive multi-beamforming scheme for high-speed railway (HSR) vehicle-ground communications is proposed in this paper. In the proposed scheme, multiple beams with different beam widths are formed by the base station simultaneously to improve system throughput, and the multi-beam transmission also decreases the outage probability. However, inter-beam interference (IBI) is introduced when multiple beams transmit signals simultaneously. Therefore, an adaptive algorithm that adjusts the activated beams in real time is presented to mitigate the IBI and maintain optimal throughput at each moment. Theoretical analysis and simulation results suggest that the proposed scheme improves system throughput with a lower outage probability in HSR vehicle-ground communications.
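The trade-off the adaptive algorithm navigates, more active beams versus more IBI, can be seen in a toy subset search over four beams. The SINR/IBI model and the small exhaustive search below are invented stand-ins for exposition, not the paper's real-time rule or channel model.

```python
import itertools
import math

def sum_rate(active, gain, ibi):
    """Sum throughput where every other active beam adds fixed interference."""
    rate = 0.0
    for b in active:
        interf = sum(ibi for o in active if o != b)
        rate += math.log2(1 + gain[b] / (1 + interf))
    return rate

gain = [8.0, 6.0, 5.0, 3.0]                    # per-beam SNR estimates (toy)
best = max((set(s) for r in range(1, 5)
            for s in itertools.combinations(range(4), r)),
           key=lambda s: sum_rate(s, gain, ibi=0.8))
print("activated beams:", sorted(best),
      "rate:", round(sum_rate(best, gain, 0.8), 2))
```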
MIMO Channels with Arbitrary AoA Power Spectrum for Various Wireless Environments
CHEN Qian, ZHOU Jie, SHAO Gen-fu
Computer Science. 2020, 47 (6): 271-275.  doi:10.11896/jsjkx.190500022
Abstract PDF(2770KB) ( 1473 )   
References | Related Articles | Metrics
For various wireless environments, this paper proposes an approximation algorithm for arbitrary AoA power spectra, which expands the large-angle AoA PDF using a small-angle approximation, calculates and derives the fading correlation of the wireless channel for various fitted and measured data, and then reconstructs the MIMO channel model for specific environments using the spatial fading correlation (SFC). Firstly, it investigates in depth the approximation algorithm and its complexity for the SFC of multi-antenna arrays with small AoA angles under Gaussian and Laplace distributions, which suit macrocell and microcell scenarios. Secondly, data from the common Von Mises channel distribution are used as a reference to obtain simplified SFC approximations for MIMO multi-antenna arrays in the angular domain. Calculations and simulation experiments show that the approximation fits well under certain conditions. The choice of the number of samples and the weighting coefficients in the large-angle expansion model, together with its fitting accuracy, are discussed in detail. Furthermore, the applicability and computational efficiency are quantified in the analysis of massive MIMO antenna arrays. The proposed method therefore provides a good approximation and greatly reduces computational complexity.
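The quantity the small-angle expansion approximates is the spatial fading correlation rho(d) = E[exp(j*2*pi*(d/lambda)*sin(theta))] under the AoA PDF. As a reference point, the sketch below evaluates it by direct numerical integration for a Gaussian spectrum; the mean angle and spread are illustrative parameters.

```python
import numpy as np

def sfc(d_over_lambda, mean_deg=30.0, sigma_deg=10.0, n=20001):
    """Spatial fading correlation for a Gaussian AoA power spectrum."""
    th = np.linspace(-np.pi, np.pi, n)
    mu, s = np.radians(mean_deg), np.radians(sigma_deg)
    pdf = np.exp(-0.5 * ((th - mu) / s) ** 2)
    pdf /= np.trapz(pdf, th)                  # normalize over [-pi, pi]
    phase = np.exp(1j * 2 * np.pi * d_over_lambda * np.sin(th))
    return np.trapz(pdf * phase, th)          # E[exp(j 2 pi d/lambda sin th)]

for d in (0.5, 1.0, 2.0):                     # antenna spacing in wavelengths
    print(f"d = {d} lambda  |rho| = {abs(sfc(d)):.3f}")
```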
Information Security
Research Progress of Social Sensor Cloud Security
LIANG Jun-bin, ZHANG Min, JIANG Chan
Computer Science. 2020, 47 (6): 276-283.  doi:10.11896/jsjkx.190400116
Abstract PDF(1599KB) ( 829 )   
References | Related Articles | Metrics
The social sensor cloud is a new type of sensor cloud system arising from social networks, wireless sensor networks and cloud computing; it combines the virtual world of social networks with the physical world and continuously provides new services and applications for social users. It collects external information through the powerful sensing ability of wireless sensor networks, and overcomes the limitations of traditional sensor networks in data processing and storage by using cloud computing technology. However, social sensors are deployed in untrusted social cloud environments, which raises many serious security issues for social sensor cloud services, such as malicious attacks when social sensors share data, reputation issues between service providers and users, leaks of social sensor data privacy, and service integrity issues. These security issues severely hinder the further development of social sensor cloud services. Surveying the related research progress, this paper introduces the background, system framework, application fields and new system characteristics of the social sensor cloud, and analyzes and compares typical security technology schemes. In addition, the key scientific issues to be solved in this field are discussed, and future research directions are suggested.
GDL: A Gadget Description Language for General Code Reuse Attack
JIANG Chu, WANG Yong-jie
Computer Science. 2020, 47 (6): 284-293.  doi:10.11896/jsjkx.190700109
Abstract PDF(1846KB) ( 929 )   
References | Related Articles | Metrics
Code reuse attacks come in various types, and the corresponding gadgets differ in structure, so there has been no general method for describing gadgets across multiple kinds of code reuse attack. Combining several common attack models of code reuse attacks with the Turing machine, this paper proposes a general model of code reuse attacks, and designs a gadget description language (GDL) that describes the gadgets of code reuse attacks structurally. Firstly, the development history of code reuse attacks is introduced, and their attack models and gadget characteristics are summarized. Secondly, GDL is designed, and the keywords and grammatical specifications of its various constraint types are given. Finally, on the basis of open-source projects such as ply and BARF, a GDL-based gadget searching prototype system named GDLgadget is implemented; its execution process is described, and its effectiveness is verified by experiments.
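To give a flavor of how a description language like GDL can be tooled with ply (one of the open-source projects the prototype builds on), the toy lexer below tokenizes one invented gadget-constraint line. The token set and the sample syntax are hypothetical and differ from GDL's actual keywords and grammar.

```python
import ply.lex as lex

tokens = ("REG", "NUMBER", "ASSIGN", "LBRACKET", "RBRACKET", "SEMI", "KEYWORD")

t_ASSIGN   = r'<-'
t_LBRACKET = r'\['
t_RBRACKET = r'\]'
t_SEMI     = r';'
t_ignore   = ' \t'

def t_KEYWORD(t):                  # defined first so 'ret' beats REG
    r'ret|jmp|call'
    return t

def t_REG(t):
    r'[re][a-ds-z][a-zx]'          # rough x86-64 register shape (toy)
    return t

def t_NUMBER(t):
    r'0x[0-9a-fA-F]+|\d+'
    t.value = int(t.value, 0)
    return t

def t_error(t):
    t.lexer.skip(1)                # skip anything unrecognized

lexer = lex.lex()
lexer.input("rax <- [rsp]; ret")   # hypothetical GDL-style constraint
for tok in lexer:
    print(tok.type, tok.value)
```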
BitXHub: Side-relay Chain Based Heterogeneous Blockchain Interoperable Platform
YE Shao-jie, WANG Xiao-yi, XU Cai-chao, SUN Jian-ling
Computer Science. 2020, 47 (6): 294-302.  doi:10.11896/jsjkx.191100055
Abstract PDF(3071KB) ( 3161 )   
References | Related Articles | Metrics
In order to allow information to flow between heterogeneous blockchains and realize blockchain interoperability, a general cross-chain message transfer protocol, IBTP, is proposed. Based on IBTP and a side-relay chain strategy, this paper builds BitXHub, a highly scalable, easily compatible, dynamically upgradeable, secure and highly available heterogeneous cross-chain system that realizes heterogeneous asset exchange, information interoperability and service complementarity. BitXHub consists of three roles: the relay chain, the application chains and the cross-chain gateway, called Pier. Its three core technologies, a universal cross-chain transmission protocol, a heterogeneous transaction verification engine and multi-layer routing, ensure the security and flexibility of cross-chain transactions. Compared with Polkadot and Cosmos, BitXHub provides a unified cross-chain contract template for homogeneous and heterogeneous application chains, and its relay chain contains a dynamically upgradeable verification engine, giving it good compatibility with heterogeneous blockchains. Based on a distributed hash table, the cross-chain gateways form an ad hoc network, which gives BitXHub high scalability, and the gateways can forward cross-chain messages statelessly. Experiments verify that BitXHub guarantees asynchronous distributed transactions between heterogeneous blockchains, achieving high throughput, low latency, high scalability and low overhead.
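As a rough illustration of what a cross-chain transfer message in the spirit of IBTP might carry, the sketch below defines a hypothetical message with a digest the relay chain could check before routing. The field names are guesses made for exposition, not the protocol's actual wire format.

```python
from dataclasses import dataclass
import hashlib
import json

@dataclass
class CrossChainMessage:
    src_chain: str                # source application chain id
    dst_chain: str                # destination application chain id
    seq: int                      # per-channel sequence for ordered delivery
    payload: bytes                # contract call or asset-transfer data
    proof: bytes = b""            # source-chain inclusion proof (opaque here)

    def digest(self) -> str:
        """Deterministic digest the relay chain could verify and route on."""
        body = json.dumps([self.src_chain, self.dst_chain, self.seq,
                           self.payload.hex()]).encode()
        return hashlib.sha256(body).hexdigest()

msg = CrossChainMessage("appchainA", "appchainB", 7, b"transfer 10 tokens")
print("message digest for relay verification:", msg.digest())
```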
Public Integrity Auditing for Shared Data in Cloud Supporting User Identity Tracking
ZHANG Xi, WANG Jian
Computer Science. 2020, 47 (6): 303-309.  doi:10.11896/jsjkx.190600079
Abstract PDF(1732KB) ( 626 )   
References | Related Articles | Metrics
Public integrity auditing for shared data in the cloud verifies the integrity of data shared by a group of users. Compared with integrity auditing for single-user data, auditing shared group data raises additional issues, such as efficient user revocation and identity privacy protection. When a dispute or similar situation arises over the data, the source of the data needs to be traced, and existing integrity auditing schemes for shared cloud data do not handle this problem well. In order to track the source of data while ensuring efficient user revocation and the protection of users' identity privacy, an integrity auditing scheme for shared cloud data based on a group signature algorithm is proposed. When the identity of the signer of a data block needs to be traced, the group manager can do so with his or her private key, while no one else can learn the signer's identity. The private key update mechanism in this scheme supports user revocation well and greatly reduces the computation and communication overhead of the revocation process. Security analysis and experimental results show that the scheme is secure and efficient.
Image Forgery Detection Based on DCT Coefficients Hashing
SHANG Jin-yue, BI Xiu-li, XIAO Bin, LI Wei-sheng
Computer Science. 2020, 47 (6): 310-315.  doi:10.11896/jsjkx.190600081
Abstract PDF(3926KB) ( 885 )   
References | Related Articles | Metrics
With the continuous improvement of digital image processing technology, tampered images flood the Internet and various media, seriously affecting people's daily lives. Digital image forensics technology, which can judge the authenticity and integrity of images, is therefore particularly important. An image forgery detection algorithm based on DCT coefficient hashing is proposed to handle splicing forgery detection in digital images. During JPEG compression, the DCT coefficient matrix of the Y channel is first extracted, the image hash is then constructed from the DCT coefficients, and finally the hash is embedded in the file header of the compressed code stream. At detection time, a hash is constructed from the compressed code stream of the suspect image and compared with the embedded original hash for an initial detection. To achieve pixel-level detection, a secondary detection method based on the preliminary detection results is proposed. The experimental results show that the proposed algorithm is robust, uses a shorter hash, and achieves about 10% higher detection accuracy.
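The block-DCT hashing idea can be sketched compactly: each 8x8 Y-channel block contributes one bit from the sign of a low-frequency coefficient, and a splice flips bits in the affected region. This is an assumed toy; the paper's hash construction, header embedding and secondary pixel-level detection are richer.

```python
import numpy as np
from scipy.fftpack import dct

def dct_hash(y, bsize=8):
    """One bit per 8x8 block: sign of a low-frequency AC coefficient."""
    h, w = (s - s % bsize for s in y.shape)         # crop to whole blocks
    bits = []
    for i in range(0, h, bsize):
        for j in range(0, w, bsize):
            block = y[i:i + bsize, j:j + bsize].astype(float)
            coeffs = dct(dct(block.T, norm="ortho").T, norm="ortho")
            bits.append(1 if coeffs[0, 1] >= 0 else 0)
    return np.array(bits, dtype=np.uint8)

y = np.random.randint(0, 256, (64, 64))             # stand-in Y channel
h1 = dct_hash(y)
y2 = y.copy()
y2[16:32, 16:32] = 255                              # simulated splice region
dist = int(np.count_nonzero(h1 != dct_hash(y2)))
print(f"hash length {h1.size} bits, Hamming distance after tampering: {dist}")
```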
Study on Secure Beamforming for Full-duplex Energy Harvesting Relaying System
CHEN Pei-pei, LI Tao-shen, FANG Xing, WANG Zhe
Computer Science. 2020, 47 (6): 316-321.  doi:10.11896/jsjkx.190500115
Abstract PDF(2027KB) ( 575 )   
References | Related Articles | Metrics
The optimization of the secrecy rate is studied for the full-duplex relay-eavesdropper channel in which the nodes harvest energy. To guarantee secure communication, an artificial-noise-aided secure beamforming design is proposed under the simultaneous wireless information and power transfer framework. The optimization problem maximizes the secrecy rate (SRM) of the system by jointly optimizing the beamforming matrix, the artificial noise covariance matrix and the power splitting ratio at the relay, subject to the relay's transmission power and energy harvesting requirements. Because this secrecy rate maximization problem is non-convex, it is decoupled into two subproblems. First, the SRM is recast as a two-level optimization problem over the beamforming matrix and the artificial noise covariance matrix: the outer problem is solved by a one-dimensional search, and the inner problem by the semidefinite relaxation (SDR) technique. Then, with the beamforming and artificial noise covariance matrices fixed, a one-dimensional search is again used to find the power splitting ratio. Theoretical derivation proves that the SDR problem always has a rank-one optimal solution, so the relaxation adopted is tight. The simulation results show that the proposed method improves the security performance of the system by a factor of 2 to 3.
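The outer one-dimensional search can be sketched as a grid search over the power splitting ratio, with the inner SDR solve replaced by a hypothetical placeholder that returns a secrecy rate; only the search structure is illustrated, and the placeholder's trade-off shape is invented.

```python
import numpy as np

def inner_srm(rho):
    """Stand-in for the inner SDR solve at power-splitting ratio rho.

    Toy trade-off: more power to information processing improves the
    legitimate rate but leaves less harvested energy for the relay.
    """
    if rho >= 0.95:                    # assumed infeasible: too little energy
        return 0.0
    return np.log2(1 + 10 * rho) - np.log2(1 + 3 * rho)

grid = np.linspace(0.01, 0.99, 99)     # outer 1-D search over rho
rates = [inner_srm(r) for r in grid]
best = grid[int(np.argmax(rates))]
print(f"best power-splitting ratio ~ {best:.2f}, "
      f"secrecy rate {max(rates):.3f} bit/s/Hz")
```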