Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
    Content of Surveys in our journal
    Survey on Smart Contract Based on Blockchain System
    FAN Ji-li, LI Xiao-hua, NIE Tie-zheng, YU Ge
    Computer Science    2019, 46 (11): 1-10.   DOI: 10.11896/jsjkx.190300013
    Blockchain is a decentralized, globally distributed database ledger. A smart contract is an event-driven, stateful program that runs on a blockchain system and can take custody of digital assets. Smart contracts running on a common platform can also implement parts of the functionality of traditional applications. The development of blockchain provides an appropriate platform for smart contracts, and smart contracts play an important role in blockchain systems. With the rapid development of blockchain platforms such as Bitcoin and Ethereum, smart contracts face a good development opportunity. However, smart contract applications are still at an early stage of development, related studies are relatively few, and the application scenarios explored in practice remain limited. This paper studied the programming languages and implementation technologies of smart contracts, and discussed their development status as well as challenges and future prospects. It described the characteristics of different development languages and compared them. Then, it classified blockchain systems according to the running environment of smart contracts, and studied the development, deployment and running mechanisms of smart contracts in various blockchain systems. It also explored the application scope of various smart contract platforms, and comprehensively compared different blockchain systems in terms of smart contract development, community support and corresponding ecosystems. It introduced the status and challenges of smart contract research, and analyzed security, scalability and maintainability. Finally, it analyzed the development trend of blockchain and smart contract technology, and discussed future application scenarios.
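    As a concrete illustration of the "event-driven program with states" characterization above, the following Python sketch models a toy escrow contract as a state machine. It is a conceptual aid only; the parties, events and escrow logic are invented, and real contracts run on-chain in languages such as Solidity.

```python
# Minimal sketch: a smart contract as an event-driven state machine that
# takes custody of a digital asset (conceptual illustration, not code for
# any real blockchain platform).

class EscrowContract:
    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.state = "AWAITING_DELIVERY"          # current contract state

    def on_event(self, event, sender):
        # Transitions are deterministic, so every node can verify them.
        if self.state == "AWAITING_DELIVERY":
            if event == "confirm_delivery" and sender == self.buyer:
                self.state = "COMPLETE"
                return f"release {self.amount} to {self.seller}"
            if event == "timeout":
                self.state = "REFUNDED"
                return f"refund {self.amount} to {self.buyer}"
        raise ValueError("invalid transition")

contract = EscrowContract("alice", "bob", 10)
print(contract.on_event("confirm_delivery", "alice"))   # release 10 to bob
```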
    Research Advances and Future Challenges of FPGA-based High Performance Computing
    JIA Xun, QIAN Lei, WU Gui-ming, WU Dong, XIE Xiang-hui
    Computer Science    2019, 46 (11): 11-19.   DOI: 10.11896/jsjkx.191100500C
    Improving energy efficiency and satisfying the performance needs of emerging applications are two important challenges faced by current supercomputing systems. Featuring low power consumption and flexible reconfigurability, FPGA is a promising computation platform for overcoming these challenges. To explore its feasibility, the performance of high-performance computing (HPC) kernels on FPGA has been analyzed in extensive research. However, the convolutional neural network kernel is not considered in those studies, and the analysis lacks a high-performance processor for reference. Aiming at the dominant kernels in today's HPC landscape, including breadth-first search, sparse matrix-vector multiplication, stencil, Smith-Waterman and convolutional neural network, this paper summarized the implementation and performance optimization of these kernels on FPGA. Meanwhile, a comparison between FPGA and the SW26010 many-core processor regarding performance and energy efficiency was conducted. Furthermore, major problems in adopting FPGA for constructing HPC systems were also discussed. For the kernels considered in this paper, FPGA can outperform the SW26010 processor by 63x in terms of energy efficiency. As for the performance of emerging applications like graph analytics and deep learning, FPGA can outperform SW26010 by 26x. Lower communication overhead, better programmability and more complete software libraries for scientific computing will make FPGA an amenable platform for future supercomputing systems.
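    To make one of the kernels above concrete, the sketch below implements sparse matrix-vector multiplication (SpMV) in CSR format in plain Python; the indirect, data-dependent accesses into x visible here are exactly what FPGA dataflow designs target. The matrix values are illustrative.

```python
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    # y[i] = sum of values[k] * x[col_idx[k]] over the nonzeros of row i
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]     # irregular access into x
    return y

# 2x3 example matrix [[1, 0, 2], [0, 3, 0]] in CSR form
values, col_idx, row_ptr = [1.0, 2.0, 3.0], [0, 2, 1], [0, 2, 3]
print(spmv_csr(values, col_idx, row_ptr, np.ones(3)))   # [3. 3.]
```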
    Survey of Concepts Related to Data Assets
    YE Ya-zhen, LIU Guo-hua, ZHU Yang-yong
    Computer Science    2019, 46 (11): 20-24.   DOI: 10.11896/jsjkx.190800019
    Under different technological, social and economic backgrounds, different terminologies such as Information Assets, Digital Assets and Data Assets were created due to people's different understandings of the contents of cyberspace. Because the term Asset is related to Resource, Capital and Economy, a series of concepts is derived from it, such as Information Resources (Capital, Economy), Digital Resources (Capital, Economy), Data Resources (Capital, Economy), etc. This paper reviewed these concepts. Based on the physical attributes, existence attributes and information attributes of data in the Big Data context, this paper proposed and advocated the standardization of these concepts into Data Resources, Data Assets, Data Capital and Data Economy, which will be helpful to the exploitation of data resources.
    Survey on DNA-computing Based Methods of Computation Tree Logic Model Checking
    HAN Ying-jie, ZHOU Qing-lei, ZHU Wei-jun
    Computer Science    2019, 46 (11): 25-31.   DOI: 10.11896/jsjkx.181102091
    Computation tree logic (CTL) model checking is an important approach to ensuring the correctness and reliability of systems. However, severe spatio-temporal complexity problems restrict the application of CTL model checking in industry. The large-scale parallelism of DNA computing and the huge storage density of DNA molecules provide new ideas for resolving these problems. The background and principle of DNA-computing-based methods of CTL model checking were introduced. The research progress was reviewed from three aspects: the improvement of power, the improvement of autonomy, and the resolution of related problems. Firstly, the research progress in terms of power was summarized: from checking only one basic CTL formula to general CTL formulas, from CTL formulas with only future operators to CTL formulas with past-time operators, and from CTL formulas to linear temporal logic, projection temporal logic and interval temporal logic formulas. Secondly, the research progress in terms of autonomy was reviewed, from non-autonomous methods based on manual operations of memory-less filtering models to autonomous methods based on the molecular autonomy of sticker automata, showing that the methods are highly autonomous. Then, relevant problems in improving the predictive efficiency of specific hybridization of DNA molecules and in constructing DNA molecules for CTL formulas were described. Finally, corresponding research directions concerning different methods, new models and new applications were discussed.
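    For orientation, the following Python sketch shows what a conventional electronic CTL model checker computes for the basic formula EF p: the least fixpoint Z = p OR pre(Z) over a Kripke structure. The structure here is a made-up example; the DNA-computing methods surveyed realize this kind of computation with molecular operations instead of loops.

```python
def check_EF(transitions, labeled):
    # States satisfying EF p: least fixpoint of Z = labeled OR pre(Z).
    sat = set(labeled)
    while True:
        pre = {s for (s, t) in transitions if t in sat}   # predecessors of sat
        if pre <= sat:
            return sat                                    # fixpoint reached
        sat |= pre

T = {(0, 1), (1, 2), (2, 2), (3, 3)}      # transition relation of a toy model
print(check_EF(T, {2}))                   # {0, 1, 2}: states that can reach p
```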
    Survey of Research on Computation Unloading Strategy in Mobile Edge Computing
    DONG Si-qi, LI Hai-long, QU Yu-ben, ZHANG Zhao, HU Lei
    Computer Science    2019, 46 (11): 32-40.   DOI: 10.11896/jsjkx.181001872
    Advances in technology have made smart mobile devices more and more popular, and mobile device traffic is growing rapidly. However, due to their limited resources and computing performance, smart mobile devices may lack sufficient capacity when dealing with compute-intensive and time-sensitive applications. Offloading the computations that the mobile terminal needs to process to computing nodes in the edge network is an effective way to solve this problem. This paper first introduced the existing computation offloading strategies, elaborating on them from the aspects of minimizing delay, minimizing energy consumption and maximizing benefits. Then, it compared the advantages and disadvantages of different offloading strategies. At last, it considered and prospected the future development of computation offloading strategies for mobile edge networks.
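    A minimal sketch of the trade-off these strategies formalize, assuming a simple single-user model with invented parameters (CPU cycles, input size, bandwidth, power draw): offload when the weighted delay-plus-energy cost of edge execution beats that of local execution.

```python
def should_offload(cycles, data_bits, f_local, f_edge, bandwidth,
                   p_compute, p_transmit, w=0.5):
    # Local execution: compute on the device.
    t_local = cycles / f_local
    e_local = p_compute * t_local
    # Offloading: transmit the input, then compute at the edge server.
    t_tx = data_bits / bandwidth
    t_edge = t_tx + cycles / f_edge
    e_edge = p_transmit * t_tx            # device pays only for transmission
    # Weighted sum of delay and device energy (one of many possible objectives).
    return w * t_edge + (1 - w) * e_edge < w * t_local + (1 - w) * e_local

print(should_offload(cycles=1e9, data_bits=8e6, f_local=1e9, f_edge=10e9,
                     bandwidth=20e6, p_compute=0.9, p_transmit=1.3))  # True
```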
    Research Progress on Data Query Technology in Dynamic Wireless Sensor Networks
    LIANG Jun-bin, MA Fang-qiang, JIANG Chan
    Computer Science    2019, 46 (11): 41-48.   DOI: 10.11896/jsjkx.181202258
    A wireless sensor network (WSN) is a self-organizing network composed of a large number of sensor nodes with limited communication, computing and storage capabilities, which can be deployed to perform long-term monitoring tasks in harsh environments. Data query processing is one of the most basic operations by which a WSN obtains monitoring data: a user distributes query requests into the network through a specific node, and the nodes that satisfy the requirements return data to the user. During a query, the dynamic nature of the network (e.g., destruction of nodes by external forces, node movement or sleep causing changes in network topology and connectivity, and unreliable communication links) leads to large data transmission delay, high energy consumption and even data loss, resulting in a low query success rate. Many scholars have studied this problem and made some progress, but many issues remain to be solved in practical applications. In order to further promote in-depth study of data query technology in dynamic wireless sensor networks, this paper analyzed and summarized typical work of recent years and compared its advantages and disadvantages. Then, this paper discussed the key issues that need to be solved in this field, and finally pointed out the next research directions.
    Weakly Supervised Learning-based Object Detection: A Survey
    ZHOU Xiao-long, CHEN Xiao-jia, CHEN Sheng-yong, LEI Bang-jun
    Computer Science    2019, 46 (11): 49-57.   DOI: 10.11896/jsjkx.181001899
    Object detection is one of the fundamental problems in the field of computer vision. Currently, supervised-learning-based object detection is one of the mainstream approaches. In existing research, high-precision image labels are the precondition for supervised object detection to achieve good performance. However, it becomes more and more difficult to obtain accurate labels due to the complexity of backgrounds and the variety of objects in real scenarios. With the development of deep learning, how to achieve good performance with low-cost image labels has become a key question in this field. This paper mainly introduced object detection algorithms based on weakly supervised learning with image-level labels. Firstly, it described the background of object detection and pointed out the shortcomings of training data. Then, it reviewed weakly supervised object detection algorithms based on image-level labels from three aspects: image segmentation, multi-instance learning and convolutional neural networks. Multi-instance learning and convolutional neural networks were illustrated comprehensively from several angles, such as saliency learning and collaborative learning. Finally, this paper compared mainstream weakly supervised algorithms horizontally and compared them with supervised object detection algorithms. The results show that weakly supervised object detection has made great progress; in particular, convolutional neural networks have greatly promoted its development and gradually replaced multi-instance learning. With a fusion algorithm, accuracy increases remarkably, reaching 79.3% on Pascal VOC 2007. However, it still performs worse than supervised object detection. To achieve better performance, fusion algorithms based on convolutional neural networks are becoming the mainstream in weakly supervised object detection.
    3D Shape Feature Extraction Method Based on Deep Learning
    ZHOU Yan, ZENG Fan-zhi, WU Chen, LUO Yue, LIU Zi-qin
    Computer Science    2019, 46 (9): 47-58.   DOI: 10.11896/j.issn.1002-137X.2019.09.006
    Research on extracting 3D shape features with low dimension and high discriminating ability can help solve problems such as classification and retrieval of 3D shape data. With the continuous development of deep learning, 3D shape feature extraction combined with deep learning has become a research hotspot. Combining deep learning with traditional 3D shape feature extraction methods can not only break through the bottleneck of non-deep-learning methods, but also improve the accuracy of 3D shape classification, retrieval and other tasks, especially when the 3D shape is a non-rigid body. However, deep learning is still developing, and problems such as the need for a large number of training samples remain. Therefore, how to effectively extract 3D shape features with deep learning methods has become a research focus and difficulty in the field of computer vision. At present, most researchers focus on improving the feature extraction ability of neural networks by improving network structures, training methods and other aspects. First, the relevant deep learning models are introduced, together with some new ideas about network improvements and training methods. Second, deep-learning-based feature extraction methods for rigid and non-rigid bodies are comprehensively expounded in connection with the development of deep learning and of 3D shape feature extraction, and the current deep learning methods for 3D shape feature extraction are described. Then, the current situation of existing 3D shape retrieval systems and similarity calculation methods is described. Finally, the current problems of 3D shape feature extraction methods are introduced, and future development trends are explored.
    Overview of Routing Availability in Intra-domain Routing Networks
    GENG Hai-jun, ZHANG Shuang, YIN Xia
    Computer Science    2019, 46 (7): 1-6.   DOI: 10.11896/j.issn.1002-137X.2019.07.001
    Routing availability refers to the probability that a user can obtain the requested service. With the development of the Internet, a large number of real-time services have emerged, the requirements on network timeliness are becoming higher and higher, and high demands have been placed on the “self-repairing ability” of the Internet. However, network faults occur frequently, and routing loops and long convergence times may occur while network failures are being repaired. The repair time is usually between several seconds and tens of seconds, which cannot meet the real-time requirements of the Internet. Therefore, improving routing availability has become an urgent problem. This paper summarized and analyzed the existing schemes for improving routing availability, and divided them into two major categories, namely passive protection schemes and route protection schemes. Research results at home and abroad were introduced in detail, the advantages and disadvantages of each scheme were compared, the main contributions and shortcomings of these schemes were summarized and analyzed, and directions for further research were proposed.
    Survey on Deep-learning-based Machine Reading Comprehension
    LI Zhou-jun, WANG Chang-bao
    Computer Science    2019, 46 (7): 7-12.   DOI: 10.11896/j.issn.1002-137X.2019.07.002
    Natural language processing is the key to achieving artificial intelligence. Machine reading comprehension, as the crown jewel of natural language processing, has always been a focus of research in the field. With the rapid development of deep learning and neural networks in recent years, machine reading comprehension has made great progress. Firstly, the research background and development history of machine reading comprehension were introduced. Then, by reviewing the important progress in the development of word vectors, attention mechanisms and answer prediction, the problems in recent research on machine reading comprehension were identified. Finally, the outlook for machine reading comprehension was discussed.
    Summary of Stylized Line Drawing Generation
    LIU Zi-qi, LIU Shi-guang
    Computer Science    2019, 46 (7): 13-21.   DOI: 10.11896/j.issn.1002-137X.2019.07.003
    Line drawing has a great advantage in the transmission of visual information. As a simple and effective means of visual communication, it stresses the main features of a scene so that people can grasp the main information quickly. At the same time, stylized line drawing, as an art form, enables people to appreciate and understand its artistic characteristics quickly. Line drawing generation technology can be divided into 2D-image-based methods and 3D-model-based methods. Line drawing generation based on 2D images includes deep learning methods and traditional methods, the latter comprising data-driven and non-data-driven methods. Line drawing generation based on 3D models comprises image-space methods, object-space methods and blends of the two. By introducing and analyzing the various methods and comparing their advantages and disadvantages, this paper summarized the existing problems of line drawing generation technology and their possible solutions. On this basis, the future development trend of line drawing was prospected.
    Review of Computer Aided Diagnosis for Parkinson’s Tremor and Essential Tremor
    ZHANG Yu-qian, GU Dong-yun
    Computer Science    2019, 46 (7): 22-29.   DOI: 10.11896/j.issn.1002-137X.2019.07.004
    The diagnosis of Parkinson’s tremor and essential tremor has long been a clinical problem, and proper diagnosis is of vital importance for the treatment and rehabilitation of patients. With the development of sensor technology and artificial intelligence (AI), more and more scholars have begun to use state-of-the-art technology to assist the diagnosis of the two diseases, and satisfactory results have been achieved. This paper summarized the wearable devices currently used for the diagnosis of the two diseases and the related AI classification algorithms, and discussed their advantages and limitations. Finally, this paper analyzed the main problems existing in related research and pointed out possible research directions in this field.
    Review of Shape Representation for Objects
    WU Gang, XU Li-min
    Computer Science    2019, 46 (7): 30-37.   DOI: 10.11896/j.issn.1002-137X.2019.07.005
    Shape retrieval and object recognition are widely applied in medical diagnostics, target recognition, image retrieval, computer vision, etc. Efficient shape retrieval and recognition depend entirely on an excellent shape representation algorithm. This paper proposed assessment criteria for shape representation. According to these criteria, the existing shape representations were categorized into linear combination representations, spatial association relationships, feature representations based on differential and integral properties of shapes, and deformation representations. Each of these methods was analyzed and assessed in terms of its mathematical principle, multiscale representation ability, variants, robustness, reconstruction of original shapes, discrimination of signal from noise, etc. Furthermore, the advantages and disadvantages of each algorithm were discussed, in particular from the standpoint of mathematical principles. Finally, suggestions for future research were given.
    Review on Click-through Rate Prediction Models for Display Advertising
    LIU Meng-juan, ZENG Gui-chuan, YUE Wei, QIU Li-zhou, WANG Jia-chang
    Computer Science    2019, 46 (7): 38-49.   DOI: 10.11896/j.issn.1002-137X.2019.07.006
    In recent years, the study of click-through rate (CTR) prediction models has attracted much attention from academia and industry. Regarding the existing CTR prediction models for display advertising, this paper studied the preprocessing techniques for sample features, the CTR prediction schemes based on traditional machine learning models and on the latest deep learning models, and the main performance evaluation indexes of CTR prediction models. In particular, typical CTR prediction schemes were evaluated on a public dataset, and quantitative analysis and performance comparisons were given. Finally, the open problems and research trends in CTR prediction were discussed.
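    As a concrete baseline of the kind such surveys evaluate, the sketch below trains a logistic regression CTR model by gradient descent on the log loss (one of the usual evaluation indexes). The data is synthetic and the hyperparameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                  # 8 features per impression
true_w = rng.normal(size=8)
y = (rng.random(1000) < 1 / (1 + np.exp(-X @ true_w))).astype(float)  # clicks

w = np.zeros(8)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))                # predicted CTR per impression
    w -= 0.1 * X.T @ (p - y) / len(y)           # gradient of the mean log loss

p = 1 / (1 + np.exp(-X @ w))
print(f"log loss: {-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)):.3f}")
```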
    Research on Task Scheduling in Cloud Computing
    MA Xiao-jin, RAO Guo-bin, XU Hua-hu
    Computer Science    2019, 46 (3): 1-8.   DOI: 10.11896/j.issn.1002-137X.2019.03.001
    In cloud computing, virtualization technology separates various kinds of computing resources from the underlying infrastructure and scales them dynamically, allowing users to pay on the basis of usage. A cloud platform is a heterogeneous system consisting of different hardware and huge data resources. With the increasing number of tasks, it is critical to schedule users’ tasks and allocate resources effectively through task scheduling algorithms. This paper gave a brief introduction to cloud computing, task scheduling algorithms and the core scheduling process, including evaluation metrics, with figures. Then, it presented an overview of the related literature and algorithms of recent years. Finally, this paper highlighted some key aspects of the research: in realistic applications, due to the varying situation of tasks and uncertainty in resources, it is crucial to select the scheduling strategy accordingly, and taking more performance indicators into consideration can enhance the efficiency and quality of service in cloud computing.
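    For illustration, here is a minimal greedy heuristic of the kind such surveys compare: each task is assigned to the virtual machine giving the earliest completion time, and the makespan serves as the evaluation metric. Task lengths and VM speeds are invented.

```python
def greedy_schedule(task_lengths, vm_speeds):
    finish = [0.0] * len(vm_speeds)            # current finish time per VM
    plan = []
    for t in task_lengths:
        # Pick the VM on which this task would complete earliest.
        best = min(range(len(vm_speeds)),
                   key=lambda v: finish[v] + t / vm_speeds[v])
        finish[best] += t / vm_speeds[best]
        plan.append(best)
    return plan, max(finish)                   # assignment and makespan

plan, makespan = greedy_schedule([4, 2, 8, 6], [1.0, 2.0])
print(plan, makespan)                          # [1, 0, 1, 0] 8.0
```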
    Survey of Distributed Machine Learning Platforms and Algorithms
    SHU Na, LIU Bo, LIN Wei-wei, LI Peng-fei
    Computer Science    2019, 46 (3): 9-18.   DOI: 10.11896/j.issn.1002-137X.2019.03.002
    Distributed machine learning deploys tasks with large-scale data and computation across multiple machines. To speed up large-scale computation and reduce overhead effectively, its core idea is “divide and conquer”. As one of the most important fields of machine learning, distributed machine learning has received wide attention from researchers in many fields. In view of its research significance and practical value, this paper summarized mainstream platforms such as Spark, MXNet, Petuum, TensorFlow and PyTorch, and analyzed their characteristics from different perspectives. Then, this paper explained in depth the implementation of machine learning algorithms in terms of data parallelism and model parallelism, and surveyed distributed computing models, covering the bulk synchronous parallel model, the asynchronous parallel model and the delayed asynchronous parallel model. Finally, this paper discussed future work on distributed machine learning from five aspects: platform improvement, algorithm optimization, network communication, scalability of large-scale data algorithms, and fault tolerance.
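    A minimal simulation of the data-parallel, bulk-synchronous pattern described above, assuming a least-squares model and four simulated workers: each worker computes a gradient on its own data shard, and the averaged gradient updates the shared parameters.

```python
import numpy as np

def worker_gradient(w, X, y):
    return 2 * X.T @ (X @ w - y) / len(y)        # least-squares gradient on a shard

rng = np.random.default_rng(1)
X, w_true = rng.normal(size=(400, 5)), np.arange(5.0)
y = X @ w_true
shards = [(X[i::4], y[i::4]) for i in range(4)]  # data split over 4 "workers"

w = np.zeros(5)
for _ in range(200):
    grads = [worker_gradient(w, Xs, ys) for Xs, ys in shards]  # run in parallel
    w -= 0.05 * np.mean(grads, axis=0)           # synchronous (BSP) aggregation
print(np.round(w, 2))                            # approaches [0. 1. 2. 3. 4.]
```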
    Survey on Adaptive Random Testing by Partitioning
    LI Zhi-bo, LI Qing-bao, YU Lei, HOU Xue-mei
    Computer Science    2019, 46 (3): 19-29.   DOI: 10.11896/j.issn.1002-137X.2019.03.003
    As a fundamental software testing technique, random testing (RT) has been widely used in practice. Adaptive random testing (ART), an enhancement of RT, performs better than original RT in terms of fault detection capability. Firstly, this paper analyzed the classical ART algorithms, which achieve high detection effectiveness at a large time overhead. Secondly, it summarized the ART algorithms that use partitioning to reduce time cost, and analyzed and compared their various partition strategies and test case generation algorithms. Meanwhile, this paper analyzed the key factors affecting the effectiveness of ART algorithms and the causes of their low efficiency in high-dimensional input domains. Finally, it discussed the open problems and challenges of ART.
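    For reference, here is a sketch of the classical distance-based ART algorithm (fixed-size candidate set) whose quadratically growing number of distance computations motivates the partitioning approaches surveyed: each new test case is the random candidate farthest from all previously executed tests.

```python
import math
import random

def fscs_art_next(executed, k=10, dim=2):
    # Generate k random candidates; keep the one farthest from executed tests.
    candidates = [[random.random() for _ in range(dim)] for _ in range(k)]
    return max(candidates,
               key=lambda c: min(math.dist(c, e) for e in executed))

random.seed(0)
tests = [[random.random(), random.random()]]     # first test is purely random
for _ in range(4):
    tests.append(fscs_art_next(tests))           # cost grows with len(tests)
print([[round(x, 2) for x in t] for t in tests])
```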
    Comprehensive Review of Grey Wolf Optimization Algorithm
    ZHANG Xiao-feng, WANG Xiu-ying
    Computer Science    2019, 46 (3): 30-38.   DOI: 10.11896/j.issn.1002-137X.2019.03.004
    Grey wolf optimization (GWO) is a new kind of swarm-intelligence-based algorithm, and significant developments have been made since its introduction in 2014. GWO has been successfully applied in a variety of fields due to its simplicity and efficiency. This paper provided a complete survey of GWO, including its search mechanism, implementation process, relative merits, improvements and applications. Studies on improving GWO, including improvements of population initialization, the search mechanism and the parameters, were discussed in particular. The application status of GWO in parameter optimization, combinatorial optimization and complex function optimization was summarized. Finally, some novel research directions for the future development of this powerful algorithm were given.
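    To make the search mechanism concrete, the sketch below implements the core GWO position update on a toy sphere function: wolves move toward the three current best solutions (alpha, beta, delta), and the coefficient a decays linearly from 2 to 0 to shift from exploration to exploitation. Population size and iteration count are arbitrary choices.

```python
import numpy as np

def gwo(f, dim=2, wolves=20, iters=200, lb=-5.0, ub=5.0):
    rng = np.random.default_rng(2)
    X = rng.uniform(lb, ub, (wolves, dim))
    for t in range(iters):
        fitness = np.array([f(x) for x in X])
        alpha, beta, delta = X[np.argsort(fitness)[:3]]   # three best wolves
        a = 2 - 2 * t / iters                             # decays from 2 to 0
        for i in range(wolves):
            guided = []
            for leader in (alpha, beta, delta):
                A = 2 * a * rng.random(dim) - a
                C = 2 * rng.random(dim)
                D = np.abs(C * leader - X[i])             # distance to leader
                guided.append(leader - A * D)
            X[i] = np.clip(np.mean(guided, axis=0), lb, ub)
    return min(X, key=f)

print(np.round(gwo(lambda x: np.sum(x ** 2)), 3))         # near [0. 0.]
```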
    Survey on Short-term Traffic Flow Forecasting Based on Deep Learning
    DAI Liang, MEI Yang, QIAO Chao, MENG Yun, LV Jin-ming
    Computer Science    2019, 46 (3): 39-47.   DOI: 10.11896/j.issn.1002-137X.2019.03.005
    Short-term traffic flow forecasting is a hot topic in the field of intelligent transportation and is of great significance for traffic control and management. Traditional traffic flow forecasting methods have difficulty describing the internal characteristics of traffic data accurately. Deep learning can learn the internal complex multivariate coupled structure of traffic flow data through its deep architecture and thus make more accurate forecasts, which makes it a hot topic in the current traffic flow forecasting field. Firstly, traditional traffic flow forecasting methods and the current research status of deep learning were briefly introduced. Then, deep-learning-based short-term traffic flow forecasting methods were classified according to generative and discriminative deep architectures. This paper also summarized the main deep learning methods in the field of traffic flow forecasting and compared their performance. Finally, the existing problems and development directions of deep learning in short-term traffic flow forecasting were discussed.
    Review of Bottom-up Salient Object Detection
    WU Jia-ying, YANG Sai, DU Jun, LIN Hong-da
    Computer Science    2019, 46 (3): 48-52.   DOI: 10.11896/j.issn.1002-137X.2019.03.006
    This paper reviewed the current state of development, at home and abroad, of salient object detection. Firstly, it introduced the research background and development process of salient object detection. Then, according to the differences in the features used by each saliency model, it summarized saliency calculation from two aspects: hand-crafted features and deep learning features. Saliency calculation based on hand-crafted features was further classified into three subcategories: saliency calculation based on contrast priors, on foreground priors, and on background priors. Meanwhile, this paper elaborated the basic ideas of saliency modeling in each subcategory. Finally, it discussed the problems to be solved and further research directions of salient object detection.
    Survey on Non-frontal Facial Expression Recognition Methods
    JIANG Bin, GAN Yong, ZHANG Huan-long, ZHANG Qiu-wen
    Computer Science    2019, 46 (3): 53-62.   DOI: 10.11896/j.issn.1002-137X.2019.03.007
    Facial expression recognition is an important part of biometric feature recognition and a key technology of human-machine interaction. However, most methods only focus on frontal or nearly frontal facial images and videos and restrict normal head movements, which hinders the intelligent development of facial expression recognition. To handle this problem, firstly, face detection, head pose estimation, and facial expression feature extraction and classification methods were introduced to explore the development of non-frontal facial expression recognition systems. Secondly, non-frontal facial expression feature extraction and classification methods were emphatically introduced, and a comparison and analysis of facial-key-point-based, appearance-feature-based and pose-dependent non-frontal facial expression recognition algorithms was carried out. Finally, current research on non-frontal facial expression recognition was summarized, and future research and development directions were prospected.
    Review on Development of Convolutional Neural Network and Its Application in Computer Vision
    CHEN Chao, QI Feng
    Computer Science    2019, 46 (3): 63-73.   DOI: 10.11896/j.issn.1002-137X.2019.03.008
    In recent years, deep learning has achieved a series of remarkable research results in fields such as computer vision, speech recognition, natural language processing and medical image processing. Among the different types of deep neural networks, the convolutional neural network has been studied most extensively, not only flourishing in academia, but also having a tremendous practical impact and commercial value in related industries. With the rapid growth of annotated sample datasets and the drastic improvement of GPU performance, research on convolutional neural networks has developed rapidly and achieved remarkable results on various tasks in the field of computer vision. This paper first reviewed the history of the convolutional neural network. Then it introduced the basic structure of the convolutional neural network and the function of each component. Next, it described in detail the improvements of the convolutional neural network in the convolution layer, the pooling layer and activation functions, and summarized typical network architectures since 1998 (such as AlexNet, ZF-Net, VGGNet, GoogLeNet, ResNet, DenseNet, DPN and SENet). In the field of computer vision, this paper emphatically introduced the latest research progress of convolutional neural networks in image classification/localization, object detection, object segmentation, object tracking, behavior recognition and image super-resolution reconstruction. Finally, it summarized the open problems and challenges of convolutional neural networks.
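    For readers new to the topic, the sketch below implements the convolution-plus-activation building block mentioned above in plain NumPy (single channel, no padding, unit stride by default); the kernel and input are invented to show an edge response.

```python
import numpy as np

def conv2d_relu(image, kernel, stride=1):
    kh, kw = kernel.shape
    oh = (image.shape[0] - kh) // stride + 1
    ow = (image.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)   # local weighted sum
    return np.maximum(out, 0)                    # ReLU activation

edge_kernel = np.array([[-1.0, 1.0]])            # responds to left-to-right steps
img = np.array([[0.0, 0, 1, 1], [0, 0, 1, 1]])
print(conv2d_relu(img, edge_kernel))             # fires only at the edge column
```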
    Review of Generative Adversarial Network
    CHENG Xian-yi, XIE Lu, ZHU Jian-xin, HU Bin, SHI Quan
    Computer Science    2019, 46 (3): 74-81.   DOI: 10.11896/j.issn.1002-137X.2019.03.009
    Humans can understand how things move, so they can predict the future development of things more accurately than machines. The GAN (Generative Adversarial Network) is a new neural network framework whose generated data are very lifelike; even people cannot always tell whether the data are real or generated. In a sense, GAN provides a brand-new approach for guiding artificial intelligence systems to accomplish complex tasks, and makes the machine a specialist. In this paper, first of all, the basic GAN model and some improved models were discussed. Then, some application achievements of GAN were shown, such as image super-resolution, image generation from text descriptions, artistic style transfer and short video generation. Finally, open problems of theory, architecture and application for future research were discussed.
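    For reference, the adversarial idea behind the basic model is the standard two-player minimax game from the original GAN formulation: the discriminator D learns to tell real samples from generated ones, while the generator G learns to fool it.

\[
\min_G \max_D V(D,G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]
\]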
    Research on Ship Detection Technology Based on Optical Remote Sensing Image
    YIN Ya, HUANG Hai, ZHANG Zhi-xiang
    Computer Science    2019, 46 (3): 82-87.   DOI: 10.11896/j.issn.1002-137X.2019.03.010
    The detection of ships in optical remote sensing images is a research hotspot with broad applications in the field of remote sensing image processing and analysis. Focusing on optical remote sensing images, this paper summarized the main processing methods used in each stage of the general ship detection pipeline and analyzed the advantages and disadvantages of each method. Then, the paper pointed out the bottleneck problems faced at each stage, expounded the limitations of applying natural image detection methods to ship target detection, and discussed the challenges faced by current research. Finally, relevant development trends were discussed.
    State-of-the-art Analysis and Perspectives of 2018 China HPC Development
    ZHANG Yun-quan
    Computer Science    2019, 46 (1): 1-5.   DOI: 10.11896/j.issn.1002-137X.2019.01.001
    Based on the data of China’s high performance computer TOP100 rankings published in November 2018, this paper made an in-depth analysis of the current development status of high performance computers in China from the aspects of overall performance, manufacturers, industries and others. The average Linpack performance of China’s TOP100 continues to be higher than that of the international TOP500, and the entry performance threshold of the TOP100 still exceeds that of the TOP500. Almost all supercomputing systems on China’s TOP100 are now domestic systems, and Shuguang and Lenovo have become the champions in the number of systems on the TOP100. The three-way dominance of Shuguang, Lenovo and Inspur continues to be maintained and strengthened. On this basis, according to the performance data of the seventeen editions of the ranking list published so far, this paper analyzed and predicted the future development trend of high-performance computers in mainland China. Based on the new data, we believe that machines with a peak of 1 exaops will appear between 2018 and 2019, machines with a peak of 10 exaops between 2022 and 2023, and machines with a peak of 100 exaops between 2024 and 2025.
    Research on Multi-keyword Ranked Search over Encrypted Cloud Data
    DAI Hua, LI Xiao, ZHU Xiang-yang, YANG Geng, YI Xun
    Computer Science    2019, 46 (1): 6-12.   DOI: 10.11896/j.issn.1002-137X.2019.01.002
    With the extensive development of cloud computing, storage and computing outsourcing services are becoming more and more widely accepted. To protect the privacy of outsourced data, privacy-preserving multi-keyword ranked search over encrypted cloud data has become a research hotspot. This paper introduced the system model and threat model of existing work, and described the main problems concerning privacy preservation, search efficiency and accuracy, search result completeness, etc. Typical works and extended research on multi-keyword ranked search were studied, and the main ideas of these methods were discussed in detail. Finally, conclusions about current work were drawn, and future research directions were proposed.
    Survey of Content Centric Network Based on SDN
    YANG Ren-yu, HAN Yi-gang, ZHANG Fan, FENG Fei
    Computer Science    2019, 46 (1): 13-20.   DOI: 10.11896/j.issn.1002-137X.2019.01.003
    The practical deployment of Content Centric Network (CCN) is confronted with numerous challenges. Meanwhile, Software Defined Networking (SDN) has been developing rapidly, and its open, programmable and centrally controlled nature points to a new direction for CCN. Accordingly, realizing CCN on the basis of SDN has gradually attracted attention. This paper summarized the background knowledge, identified the key issues in the deployment of SDN-based CCN, and analyzed the advantages and difficulties of their integration. Then, this paper introduced the current research status, divided the existing integration schemes into purely centralized and semi-centralized schemes, evaluated representative designs, and summed up the characteristics of each kind of scheme by comparison. Finally, this paper presented topics for future research.
    Review of Time Series Prediction Methods
    YANG Hai-min, PAN Zhi-song, BAI Wei
    Computer Science    2019, 46 (1): 21-28.   DOI: 10.11896/j.issn.1002-137X.2019.01.004
    A time series is a set of random variables ordered by timestamp. It is often the observation of an underlying process, in which values are collected at uniformly spaced time instants according to a given sampling rate. Time series data essentially reflect the trends with which one or more random variables change over time. The core of time series prediction is to mine the rules in the data and use them to estimate future values. This paper presented a summary of time series prediction methods in three categories: traditional time series prediction methods, machine-learning-based time series prediction methods, and online time series prediction methods based on parametric models, and further prospected future research directions.
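    As a minimal instance of mining the rules in data to estimate future values, the sketch below fits an autoregressive AR(p) model, one of the traditional methods mentioned above, by least squares on a synthetic noisy sine wave and forecasts the next value. The order p and the data are arbitrary choices.

```python
import numpy as np

def fit_ar(series, p):
    # Row j of X holds the p values preceding series[p + j] (oldest first).
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    coef, *_ = np.linalg.lstsq(X, series[p:], rcond=None)
    return coef

t = np.arange(200)
series = np.sin(0.1 * t) + 0.05 * np.random.default_rng(3).normal(size=200)
coef = fit_ar(series, p=5)
forecast = series[-5:] @ coef                            # predict value at t = 200
print(round(forecast, 3), round(np.sin(0.1 * 200), 3))   # forecast vs. truth
```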
    Research on Time Series Classification Based on Shapelet
    YAN Wen-he, LI Gui-ling
    Computer Science    2019, 46 (1): 29-35.   DOI: 10.11896/j.issn.1002-137X.2019.01.005
    Time series are high-dimensional real-valued data ordered in time, and they appear extensively in fields such as medicine, finance and monitoring. Because the accuracy of conventional classification algorithms on time series is not ideal and they lack interpretability, and because a shapelet is a discriminative continuous time series subsequence, time series classification based on shapelets has become one of the hot spots in research on time series classification. First, by analyzing the existing shapelet discovery methods, this paper classified them into two categories, namely shapelet discovery from shapelet candidates and shapelet learning by optimizing an objective function, and introduced the applications of shapelets. Then, according to the classification object, this paper emphatically introduced shapelet-based univariate and multivariate time series classification algorithms. Finally, this paper pointed out further research directions for shapelet-based time series classification.
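    The primitive underlying all of these methods is the distance from a series to a shapelet: the minimum Euclidean distance over all subsequences of the shapelet's length. Thresholding this distance yields the interpretable split that shapelet classifiers build on. A toy sketch with an invented "spike" shapelet:

```python
import numpy as np

def shapelet_distance(series, shapelet):
    # Minimum distance between the shapelet and any same-length subsequence.
    m = len(shapelet)
    return min(np.linalg.norm(series[i:i + m] - shapelet)
               for i in range(len(series) - m + 1))

spike = np.array([0.0, 1.0, 0.0])                # candidate shapelet
has_spike = np.array([0.0, 0, 0, 1, 0, 0, 0])
flat = np.zeros(7)
print(shapelet_distance(has_spike, spike))       # 0.0 -> class "spike"
print(shapelet_distance(flat, spike))            # 1.0 -> class "no spike"
```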
    Application Status and Development Trends of Cardiac Magnetic Resonance Fast Imaging Based on Compressed Sensing Theory
    HENG Yang, CHEN Feng, XU Jian-feng, TANG Min
    Computer Science    2019, 46 (1): 36-44.   DOI: 10.11896/j.issn.1002-137X.2019.01.006
    Cardiac magnetic resonance (CMR) has several shortcomings in practical application, such as slow imaging speed and inevitable artifacts. Compressed sensing (CS) is applied to CMR to make full use of the redundancy of k-space information: images are reconstructed from partial k-space data, which reduces artifacts while preserving image accuracy. This paper presented a review based on the domestic and foreign literature published in the recent three years. Firstly, this paper described the current situation of CMR, the commonly used sequences, sampling masks and compressed sensing theory. Then, it provided the latest results and applications of CS-CMR, with an introduction to objective quantitative indexes and the authors’ own research progress in this field. Finally, it concluded with the shortcomings of current research and analyzed further research trends.
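    For reference, CS reconstruction from undersampled k-space is commonly posed as the constrained l1-minimization problem below (a standard formulation, where F_u denotes the undersampled Fourier operator, Psi a sparsifying transform, y the acquired k-space data and epsilon a noise tolerance):

\[
\hat{x} = \arg\min_{x} \|\Psi x\|_1 \quad \text{s.t.} \quad \|F_u x - y\|_2 \le \varepsilon
\]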