Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
    Content of the Interdiscipline & Frontier section in our journal
    Reconstructing the Right to Algorithm Explanation: Full Algorithm Development Flow Governance and Hierarchical Classification Interpretation Framework
    CONG Yingnan, WANG Zhaoyu, ZHU Jinqing
    Computer Science    2023, 50 (7): 347-354.   DOI: 10.11896/jsjkx.220900120
    Abstract views: 213 | PDF (1469 KB) downloads: 259
    With the rapid development of artificial intelligence, automated decision-making (ADM) algorithms have gradually entered the public domain and increasingly affect social welfare and individual interests. Meanwhile, emerging risks of ADM, such as algorithmic discrimination, algorithmic bias, and algorithm monopoly, have raised demands for algorithm governance. Faced with the information and technology asymmetry among the parties involved, traditional legal resources fall short in protecting the rights of users of ADM, which justifies the right to explanation. The right to algorithm explanation, serving as an important means of algorithm governance, is conducive to making the algorithmic black box moderately transparent, correcting information asymmetry, and balancing the risk burden between the deployer and the user. It has thus become a necessity in regulating ADM deployers and safeguarding the interests of their users, and has become a focus in both academic and practical realms at home and abroad. However, the right to algorithm explanation in China faces the problems of limited eligible parties, insufficient protection scope, and inexplicit content of rights. In this regard, this paper advocates deconstructing the right to explanation and then reconstructing it from the perspective of the machine learning workflow with a hierarchical classification framework. Introducing the concept of the machine learning workflow can reasonably extend the scope of the subject and object of the right, while establishing the hierarchical classification framework can clarify the content and boundary of the right, considering both the individuality and generality of algorithms and balancing the efficiency of explanation against the protection of users' rights. In this way, all parties in ADM can be fully protected, and the development of the digital economy can be empowered.
    Water Resources Governance Mode in Watersheds Oriented to “RNAO-Ecology” Hypernetwork Complex Structure
    SUO Liming, LI Jun
    Computer Science    2023, 50 (7): 355-367.   DOI: 10.11896/jsjkx.220900134
    Abstract views: 193 | PDF (2201 KB) downloads: 281
    For a long time, the integrity of watersheds and the fragmentation of authority have resulted in high costs of water resources governance in China's watersheds. The transition to network governance has become a hot topic in watershed research in recent years, and a broad consensus has formed. Compared with the three traditional schemes proposed by western scholars, the RNAO (Restricted Network Administration Organization) network structure, which focuses on coordination strategies, is more in line with the practical needs of localized water resources governance in watersheds. A more general governance theory for RNAO is the direction of its theoretical development. First, this paper sorts out the threefold advancing logic of the “traditional-network-hypernetwork” process in watershed water resources governance research. Second, it integrates RNAO with “social-ecological” system theory, innovatively proposes an “RNAO-ecology” scale-matched watershed hypernetwork governance mode, and analyzes in detail the complex interaction mechanism of the three-layer sub-networks of “organization network, behavior network, and ecological network”, thus initially forming a general theoretical construction of RNAO. Finally, based on the two cases of the “united river chief system” and “water resources governance of the Heihe watershed”, this paper analyzes the application of RNAO and the “RNAO-ecology” system in China's watershed governance practice and gives relevant policy suggestions and possible academic topics for the future network transformation of watershed governance.
    Decoupling Analysis of Network Structure Affecting Propagation Effect
    CUI Yunsong, WU Ye, XU Xiaoke
    Computer Science    2023, 50 (7): 368-375.   DOI: 10.11896/jsjkx.220900113
    Abstract views: 362 | PDF (3586 KB) downloads: 230
    As more and more people spread information through social networks, online social networks have gradually changed the way people exchange information, so the factors influencing the effect of information dissemination on online social networks have attracted the attention of many researchers, especially the influence of network structure. Most previous studies have emphasized the influence of a single network statistic on the dissemination effect, but coupling between network statistics objectively exists: a change in one network statistic may lead to synchronous changes in others, which may affect the final propagation effect. This study proposes a decoupled zero-model framework, which removes the coupling between different network statistics through the zero model, then uses SIR propagation model simulations to analyze the influence of each network statistic on the information propagation effect in the absence of coupling, and finally uses linear threshold model simulations to verify that the conclusions drawn from the SIR model also apply under social reinforcement. Propagation simulations on empirical Facebook and Twitter networks show that the average shortest path of the network is the main factor affecting the speed and scope of information dissemination, and the clustering coefficient is a secondary factor affecting the scope of information dissemination.
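The zero-model decoupling framework is specific to the paper, but the SIR simulation step it drives can be sketched in a few lines. The toy chain network and the `beta`/`gamma` values below are illustrative assumptions, not the paper's data:

```python
import random

def sir_spread(adj, seed, beta=0.5, gamma=1.0, rng=random.Random(42)):
    """Discrete-time SIR spreading on a network given as an adjacency
    dict {node: [neighbors]}.  Returns the final outbreak size."""
    susceptible = set(adj) - {seed}
    infected = {seed}
    recovered = set()
    while infected:
        newly_infected = set()
        for u in infected:
            for v in adj[u]:
                if v in susceptible and rng.random() < beta:
                    newly_infected.add(v)
        # each currently infected node recovers with probability gamma
        newly_recovered = {u for u in infected if rng.random() < gamma}
        susceptible -= newly_infected
        infected = (infected | newly_infected) - newly_recovered
        recovered |= newly_recovered
    return len(recovered)

# Toy chain network: the outbreak size depends on beta and the path length,
# which is exactly the kind of structural effect the paper decouples.
chain = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
size = sir_spread(chain, seed=0)
```

On a null-model ensemble, one would run this simulation over many rewired networks that hold one statistic fixed while randomizing the rest.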
    Modeling and Simulation of Point-to-Point Propagation of False Information Based on Information Risk Perception
    YU Kai, SU Tianrui
    Computer Science    2023, 50 (7): 376-385.   DOI: 10.11896/jsjkx.220900084
    Abstract views: 176 | PDF (4085 KB) downloads: 217
    In computational social science, propagation models inspired by infectious diseases are widely used to simulate the spread of false information. However, traditional infectious-disease models do not distinguish differences among individuals. In the real world, these differences help explain how false information spreads between individuals, which is of great significance for exploring the propagation laws of false information in social networks and for suppressing its spread. Based on information risk perception theory, this paper uses online social users' emotion, knowledge level, trust, and number of media contacts to distinguish spreading individuals, and builds a more realistic point-to-point propagation model of false information. In the propagation process, the differences between individuals are manifested as different spreading probabilities; individuals with high spreading probabilities are more likely to transition into the spreading state. A manually annotated Facebook dataset is used in simulations to study the propagation laws of false information. The results show that, compared with an average-probability propagation system, false information in the point-to-point propagation mode spreads over a longer time span and with wider coverage. In addition, by controlling the nodes with high propagation probability, the propagation of false information can be contained in advance, with better results than random control or controlling the nodes with high influence. However, increasing the proportion of controlled nodes cannot, as might be expected, achieve better control: a counterintuitive phenomenon appears.
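As a rough illustration of the point-to-point idea, each contacted individual can adopt the information with a probability derived from its own attributes. The attribute weighting and the toy network below are invented for this sketch and are not the paper's model:

```python
import random

def spread_probability(emotion, knowledge, trust, media_contacts):
    """Map individual attributes in [0, 1] to a per-node spreading
    probability.  The weights here are an invented illustration."""
    p = 0.4 * emotion + 0.3 * trust + 0.2 * media_contacts - 0.3 * knowledge
    return max(0.0, min(1.0, p))

def point_to_point_spread(adj, attrs, seed, rng=random.Random(1)):
    """Point-to-point spreading: a contacted node adopts the false
    information with its OWN probability, not a global average."""
    spreading, stifled = {seed}, set()
    while spreading:
        nxt = set()
        for u in spreading:
            for v in adj[u]:
                if v not in spreading and v not in stifled and v not in nxt:
                    if rng.random() < spread_probability(*attrs[v]):
                        nxt.add(v)
        stifled |= spreading
        spreading = nxt
    return stifled

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
attrs = {  # (emotion, knowledge, trust, media_contacts) per node
    0: (0.9, 0.1, 0.8, 0.7),
    1: (0.2, 0.9, 0.1, 0.2),
    2: (0.8, 0.2, 0.7, 0.6),
    3: (0.5, 0.5, 0.5, 0.5),
}
reached = point_to_point_spread(adj, attrs, seed=0)
```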
    Key Technologies of Intelligent Identification of Biomarkers: Review of Research on Association Prediction Between Circular RNA and Disease
    HU Xuegang, LI Yang, WANG Lei, LI Peipei, YOU Zhuhong
    Computer Science    2023, 50 (4): 369-387.   DOI: 10.11896/jsjkx.220500114
    Abstract views: 329 | PDF (4129 KB) downloads: 335
    Biomarker recognition is a major basis for achieving precision medicine and plays an important role in diagnosing complex diseases, judging disease stages, and evaluating the safety and effectiveness of new drugs or therapies in the target population. As a key technology for the intelligent identification of biomarkers, predicting the association between circular RNA (circRNA) and disease is essential for deeply evaluating and measuring subjects' biological processes, pathological processes, and responses to pathological intervention, and is one of the effective means of practicing “precision medicine”. This paper comprehensively reviews, and looks ahead at, circRNA-disease association prediction models in the era of biological big data. Specifically, it first discusses the relationship between circRNAs and disease in terms of the research background, physicochemical properties, and functions of circRNAs. It then surveys the public database resources on circRNAs and diseases, summarizes four classes of computational methods for circRNA-disease association prediction from the perspective of computational models, and analyzes their advantages and shortcomings. Finally, it discusses the current challenges and possible future research directions of the circRNA-disease association prediction problem.
    WiDoor: Close-range Contactless Human Identification Approach
    CAO Chenyang, YANG Xiaodong, DUAN Pengsong
    Computer Science    2023, 50 (4): 388-396.   DOI: 10.11896/jsjkx.220300278
    Abstract views: 220 | PDF (5846 KB) downloads: 270
    The rapid development of contactless identification technology based on Wi-Fi sensing has shown excellent application potential in the fields of intelligent human-computer interaction and intelligent security. However, in narrow indoor scenarios, the accuracy of existing lightweight identification models decreases as the transceiver distance shortens. To solve this problem, a close-range contactless identification method, WiDoor, is proposed. In the data acquisition stage, WiDoor optimizes the antenna deployment at the receiving end based on the Fresnel propagation model and reconstructs the gait information from multiple antennas to obtain a more complete gait description. In the identification stage, a lightweight convolutional model that combines a concatenated convolution module with a multi-scale convolution module is used to reduce computational complexity while ensuring high identification accuracy. Experimental results show that WiDoor achieves an identification accuracy of 99.1% on a 10-person dataset collected at a transceiver distance of 1 m, and the parameter count of its identification model is only 2% of that of models with the same accuracy, outperforming other similar methods.
    Batched Eigenvalue Decomposition Algorithms for Hermitian Matrices on GPU
    HUANG Rongfeng, LIU Shifang, ZHAO Yonghua
    Computer Science    2023, 50 (4): 397-403.   DOI: 10.11896/jsjkx.220100232
    Abstract views: 441 | PDF (2144 KB) downloads: 341
    Batched matrix computing problems exist widely in scientific computing and engineering applications. With rapid performance improvements, GPUs have become an important tool for solving such problems. Eigenvalue decomposition belongs to the class of two-sided decompositions and must be solved by iterative algorithms, and the number of iterations can vary from matrix to matrix. Therefore, designing batched eigenvalue decomposition algorithms on the GPU is more challenging than designing batched algorithms for one-sided decompositions such as LU decomposition. This paper proposes batched algorithms, based on the Jacobi method, for the eigenvalue decomposition of Hermitian matrices. For matrices that cannot reside wholly in shared memory, a blocking technique is used to raise the arithmetic intensity and thus improve the utilization of GPU resources. The presented algorithms run entirely on the GPU, avoiding communication between the CPU and GPU, and kernel fusion is adopted to decrease kernel-launch overhead and global memory accesses. Experimental results on a V100 GPU show that the proposed algorithms outperform existing work. Performance evaluation with the Roofline model indicates that the implementations are close to the upper bound, approaching 4.11 TFLOPS.
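The paper's contribution is the batched GPU implementation; the underlying classical Jacobi iteration it builds on can be sketched in NumPy for a single real symmetric matrix (the Hermitian case is analogous). This is a textbook sketch, not the paper's kernel:

```python
import numpy as np

def jacobi_eigh(A, tol=1e-10, max_rotations=10000):
    """Classical Jacobi method: repeatedly annihilate the largest
    off-diagonal entry of a real symmetric matrix with a Givens
    rotation.  Returns (eigenvalues, eigenvectors)."""
    A = A.astype(float).copy()
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(max_rotations):
        off = np.abs(A - np.diag(np.diag(A)))
        p, q = np.unravel_index(np.argmax(off), off.shape)
        if off[p, q] < tol:          # converged: A is (nearly) diagonal
            break
        # rotation angle chosen so that the (p, q) entry becomes zero
        theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
        c, s = np.cos(theta), np.sin(theta)
        J = np.eye(n)
        J[p, p] = J[q, q] = c
        J[p, q], J[q, p] = s, -s
        A = J.T @ A @ J              # two-sided update (hence "two-sided")
        V = V @ J                    # accumulate eigenvectors
    return np.diag(A), V

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
w, V = jacobi_eigh(A)
```

The batched GPU version applies many such independent rotations, one matrix per thread block, with blocking when a matrix exceeds shared memory.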
    Survey of Container Technology for High-performance Computing System
    CHEN Yiyang, WANG Xiaoning, LU Shasha, XIAO Haili
    Computer Science    2023, 50 (2): 353-363.   DOI: 10.11896/jsjkx.220100163
    Abstract views: 387 | PDF (3088 KB) downloads: 421
    Container technology has been widely used in the cloud computing industry, mainly for rapid migration and automated deployment of service software environments. With the deep integration of high-performance computing, big data, and artificial intelligence technologies, the application software dependencies and configurations of high-performance computing systems are becoming increasingly complex, and the demand for user-defined software stacks in supercomputing centers is growing stronger. Accordingly, a variety of container implementations have been developed for the application environments of high-performance computing systems to meet practical needs such as user-defined software stacks. This paper summarizes the development history of container technology, explains the technical principles of containers on the Linux platform, analyzes and evaluates container implementations for high-performance computing systems, and finally discusses future research directions for container technology in high-performance computing.
    Thoughts on Development and Research of Science, Technology and Engineering Application of Brain & Mind-inspired Computing
    LIU Yang, LIU Ruijia, ZHOU Liming, ZUO Xianyu, YANG Wei, ZHOU Yi
    Computer Science    2023, 50 (2): 364-373.   DOI: 10.11896/jsjkx.220500023
    Abstract views: 510 | PDF (2543 KB) downloads: 726
    To develop a new generation of brain-inspired intelligence, we need to comprehensively consider the structure, function, and behavior of natural intelligence; a bias in any one direction is not comprehensive and makes it difficult to fully touch the essence of intelligence. Based on structural simulation of the nervous system, functional emulation of the cognitive system, and behavioral imitation of natural intelligence, this paper defines the basic concept of brain & mind-inspired computing (BMC), puts forward the hypotheses, models, and framework of BMC, and studies its frontier theory. It then explores and analyzes the technical route, core algorithms, and key technologies of BMC research, and summarizes the current state of complex systems and engineering applications of BMC with respect to brain mechanisms, mental models, and behavior control. Combining the multidisciplinary and interdisciplinary characteristics of intelligence science, neuroscience, cognitive science, information science, and computational mathematics, it further discusses the research paradigm and transdisciplinary construction of BMC, brain-inspired computing, and brain-like computing. Research on BMC is expected to make major breakthroughs in the scientific theory, technological innovation, and engineering systems of the new generation of brain-inspired intelligence.
    Modified Social Force Model Considering Pedestrian Characteristics and Leaders
    LIN Jin-cheng, JI Qing-ge, ZHONG Zhen-wei
    Computer Science    2022, 49 (5): 347-354.   DOI: 10.11896/jsjkx.210500144
    Abstract views: 287 | PDF (2827 KB) downloads: 491
    The social force model is a classic model in crowd movement simulation. It expresses pedestrians' subjective wishes and the interactions between pedestrians in the form of “forces”, and is concise and easy to interpret. However, many factors affect pedestrian movement, and the calculation of the self-driving force and the social psychological force in the original social force model is insufficient. To enable the model to simulate real movement processes, many researchers have improved it. This paper focuses on the subjects of the crowd evacuation process: pedestrians. Pedestrians are modeled from two aspects, characteristics and roles. Pedestrian characteristics include the social relationships between pedestrians, personality, and individual emotions; the degree of interference between pedestrians differs with their level of intimacy, and emotions also affect pedestrians' judgment. Pedestrian roles distinguish leaders from ordinary pedestrians, and the impact of different roles on the evacuation process is analyzed; leaders can help ordinary pedestrians evacuate. Crowd self-organization simulation experiments verify that the improved model can simulate real crowd evacuation while retaining the advantages of the original model. The evacuation efficiency and exit utilization under four simulation models are also measured, and the mean and distribution of the experimental data are analyzed. Experimental results show that the main causes of long evacuation times are the time spent searching for exits and unbalanced exit utilization. In general, pedestrian characteristics and leaders have a positive impact on evacuation efficiency: pedestrian characteristics accelerate pedestrian aggregation and optimize the desired speed, while leaders, besides helping pedestrians find exits, balance the use of exits and keep the number of evacuees at each exit roughly equal.
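The “force” formulation mentioned above can be illustrated with a minimal Helbing-style update: a driving force that relaxes the velocity toward the desired speed, plus exponential repulsion from other pedestrians. All constants and positions below are illustrative, not the paper's calibrated values:

```python
import math

def social_force_step(pos, vel, goal, others, dt=0.1,
                      v0=1.3, tau=0.5, A=2.0, B=0.3):
    """One explicit-Euler step of a minimal social force model."""
    # self-driving force: relax towards the desired velocity v0 * e_goal
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    dist = math.hypot(gx, gy) or 1.0
    fx = (v0 * gx / dist - vel[0]) / tau
    fy = (v0 * gy / dist - vel[1]) / tau
    # social psychological force: exponential repulsion from each neighbour
    for ox, oy in others:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy) or 1e-6
        mag = A * math.exp(-d / B)
        fx += mag * dx / d
        fy += mag * dy / d
    vel = (vel[0] + fx * dt, vel[1] + fy * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos, vel

# A pedestrian walks towards (10, 0), skirting another pedestrian at (2, 0.5).
pos, vel = (0.0, 0.0), (0.0, 0.0)
for _ in range(50):
    pos, vel = social_force_step(pos, vel, goal=(10.0, 0.0),
                                 others=[(2.0, 0.5)])
```

The paper's improvements would enter here as modified driving terms (desired speed shaped by emotion and leaders) and intimacy-dependent repulsion coefficients.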
    Deep Neural Network Operator Acceleration Library Optimization Based on Domestic Many-core Processor
    GAO Jie, LIU Sha, HUANG Ze-qiang, ZHENG Tian-yu, LIU Xin, QI Feng-bin
    Computer Science    2022, 49 (5): 355-362.   DOI: 10.11896/jsjkx.210500226
    Abstract views: 449 | PDF (3325 KB) downloads: 660
    Operator acceleration libraries for different hardware devices have become an indispensable part of deep learning frameworks and can dramatically improve the performance of large-scale training and inference tasks. Current mainstream operator libraries are developed for GPU architectures and are not compatible with other heterogeneous designs. The SWDNN operator library, developed for the SW26010 processor, can neither exploit the full performance of the upgraded SW26010 pro processor nor meet the demands of current large neural network models such as GPT-3 for large memory capacity and high memory-access bandwidth. According to the architectural characteristics of the SW26010 pro processor and the training requirements of large neural network models, a three-level parallelization and neural network operator task scheduling scheme based on multiple core groups is proposed, which satisfies the memory requirements of large-model training and improves overall computing performance and parallel efficiency. A memory-access optimization method with triple asynchronous streams and overlap of computation and memory access is also proposed, which significantly alleviates the memory-access bottleneck of neural network operators. Based on these methods, the SWTensor many-core-group operator acceleration library is constructed for the SW26010 pro processor. Experimental results on the natural language processing model GPT-2 show that computation-intensive and memory-access-intensive operators in SWTensor reach up to 90.4% and 88.7% of the theoretical peaks of single-precision floating-point performance and memory-access bandwidth, respectively.
    Parallelization and Locality Optimization for Red-Black Gauss-Seidel Stencil
    JI Ying-rui, YUAN Liang, ZHANG Yun-quan
    Computer Science    2022, 49 (5): 363-370.   DOI: 10.11896/jsjkx.220100119
    Abstract views: 575 | PDF (2233 KB) downloads: 740
    Stencil computation is a common nested-loop computing pattern widely used in scientific and engineering simulation applications such as computational electromagnetics, weather simulation, geophysics, and ocean simulation. With the development of modern processor architectures, core counts have grown and memory hierarchies have deepened, and research on parallelism and locality is the main way to improve program performance. Blocking is one of the main techniques for exploiting data locality and program parallelism. A large number of blocking methods have been proposed for stencils, but most are limited to Jacobi stencils, which feature high parallelism and locality. The Gauss-Seidel stencil has a better convergence rate and is widely used in multigrid calculations, but its data dependences are more complicated. In this paper, a parallel blocking and vectorization algorithm is designed for the red-black ordered Gauss-Seidel stencil, which improves its data locality, medium-grained multi-core parallelism, and fine-grained intra-core parallelism. Experimental results demonstrate the effectiveness of the scheme.
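The red-black ordering that recovers parallelism can be illustrated on the 2-D Poisson problem: under the 5-point stencil, a point's neighbours all have the opposite colour, so every point of one colour can be updated simultaneously in each half-sweep. The grid size and sweep count below are arbitrary choices for the sketch:

```python
import numpy as np

def red_black_gauss_seidel(u, f, h, sweeps=200):
    """Red-black Gauss-Seidel for -laplace(u) = f on a square grid with
    Dirichlet boundary values held in the border of u.  Each coloured
    half-sweep is dependence-free, so it is vectorised here in one shot."""
    n = u.shape[0]
    i, j = np.meshgrid(range(1, n - 1), range(1, n - 1), indexing="ij")
    red, black = ((i + j) % 2 == 0), ((i + j) % 2 == 1)
    for _ in range(sweeps):
        for mask in (red, black):
            # 5-point stencil update for all interior points at once;
            # only same-coloured points are written back
            u_new = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:] +
                            h * h * f[1:-1, 1:-1])
            u[1:-1, 1:-1][mask] = u_new[mask]
    return u

# Laplace problem (f = 0) with u = 1 on the boundary: the interior
# relaxes towards 1 everywhere.
n = 17
u = np.ones((n, n))
u[1:-1, 1:-1] = 0.0
u = red_black_gauss_seidel(u, np.zeros((n, n)), h=1.0 / (n - 1))
```

On real hardware the same colouring lets each half-sweep be tiled across cores and vectorized within a core, which is the setting the paper optimizes.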
    Participant Selection Strategies Based on Crowd Sensing for River Environmental Monitoring
    LI Xiao-dong, YU Zhi-yong, HUANG Fang-wan, ZHU Wei-ping, TU Chun-yu, ZHENG Wei-nan
    Computer Science    2022, 49 (5): 371-379.   DOI: 10.11896/jsjkx.210200005
    Abstract views: 285 | PDF (3567 KB) downloads: 635
    The environment surrounding urban rivers is often damaged and polluted, and how to monitor rivers effectively has gradually attracted the attention of the public, governments, and academia. Traditional monitoring methods suffer from high cost, insufficient coverage, and other defects. With the increasing popularity of smart mobile devices, this paper proposes the new idea of using crowd sensing to monitor the river environment efficiently. The problem can be described as follows: assuming each river reach contains c monitoring points, select r users, according to the movement tracks of a large number of candidate users, to jointly complete the monitoring of all river reaches over s periods; the smaller the number of users r, the lower the monitoring cost. A stepwise-greedy strategy, a global-greedy strategy, and an integer-programming strategy are designed to solve this problem, that is, to select the fewest participants that achieve the monitoring goal of “s durations, c ranges, r users”. These strategies are applied to the environmental monitoring of several rivers in Taijiang, Fuzhou. Experimental results show that all of them obtain better solutions than a random strategy, with the integer-programming strategy performing best. However, as the problem scale grows, the implicit enumeration algorithm used to solve small-scale integer programs becomes unable to cope. Motivated by this, a discrete particle swarm optimization algorithm with greedy initialization (GI-DPSO) is designed. Although this algorithm can solve large-scale integer programs, it is time-consuming. Considering both monitoring cost and computational cost, it is suggested to adopt the integer-programming strategy for small-scale problems and the global-greedy strategy for large-scale problems.
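The global-greedy strategy described above is a set-cover-style heuristic: repeatedly pick the user whose track covers the most still-uncovered (period, monitoring-point) pairs. A minimal sketch, with invented user names, tracks, and targets:

```python
def greedy_select(user_tracks, targets):
    """Greedy participant selection for 's durations - c ranges - r users':
    user_tracks maps user -> set of (period, point) pairs that user covers;
    targets is the set of pairs that must all be covered."""
    uncovered = set(targets)
    chosen = []
    while uncovered:
        # pick the user with the largest marginal coverage
        best = max(user_tracks, key=lambda u: len(user_tracks[u] & uncovered))
        gain = user_tracks[best] & uncovered
        if not gain:
            break                    # remaining targets are unreachable
        chosen.append(best)
        uncovered -= gain
    return chosen, uncovered

# Toy instance: 3 periods x 2 monitoring points, 4 candidate users.
tracks = {
    "u1": {(1, "A"), (1, "B"), (2, "A")},
    "u2": {(2, "B"), (3, "A")},
    "u3": {(3, "B")},
    "u4": {(1, "A")},
}
targets = {(s, c) for s in (1, 2, 3) for c in ("A", "B")}
chosen, left = greedy_select(tracks, targets)
```

The integer-programming strategy solves the same covering problem exactly, which is why it wins on small instances but fails to scale.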
    Survey on Improvement and Application of Grover Algorithm
    LIU Xiao-nan, SONG Hui-chao, WANG Hong, JIANG Duo, AN Jia-le
    Computer Science    2021, 48 (10): 315-323.   DOI: 10.11896/jsjkx.201100141
    Abstract views: 542 | PDF (1491 KB) downloads: 1272
    Quantum information science is a new interdisciplinary subject with unique capabilities in the field of information: it can break through the limits of existing classical information systems in improving computing speed, ensuring information security, increasing information capacity, and improving detection accuracy. Grover's algorithm is a typical quantum algorithm that achieves quadratic acceleration for any classical brute-force exhaustive search problem, further promoting the development of quantum computing. How to effectively improve and apply Grover's algorithm has become an important research area of quantum computing. This paper surveys the optimization, improvement, and application of Grover's algorithm, summarizes its improvements and applications in different fields, and discusses directions for future algorithm improvements and related applications.
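Grover's quadratic speedup is easy to see in a plain statevector simulation: the oracle flips the phase of the marked amplitude, the diffusion operator inverts all amplitudes about their mean, and roughly (pi/4)*sqrt(N) repetitions concentrate the probability on the target. The 4-qubit instance below is illustrative:

```python
import math

def grover_search(n_qubits, marked, iterations=None):
    """Statevector simulation of Grover's algorithm over N = 2**n_qubits
    items with a single marked index."""
    N = 2 ** n_qubits
    if iterations is None:
        # optimal iteration count is about (pi/4) * sqrt(N)
        iterations = round(math.pi / 4 * math.sqrt(N))
    amp = [1 / math.sqrt(N)] * N           # uniform superposition
    for _ in range(iterations):
        amp[marked] = -amp[marked]         # oracle: phase-flip the target
        mean = sum(amp) / N                # diffusion: inversion about mean
        amp = [2 * mean - a for a in amp]
    return amp

amp = grover_search(n_qubits=4, marked=5)  # N = 16, 3 iterations
prob = amp[5] ** 2                         # probability of measuring 5
```

A classical exhaustive search needs on the order of N oracle queries; here 3 queries already yield a success probability above 95% for N = 16.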
    Mechanism and Path of Optimizing Institution of Legislative Evaluation by Applying “Big Data+Blockchain”
    ZHANG Guang-jun, ZHANG Xiang
    Computer Science    2021, 48 (10): 324-333.   DOI: 10.11896/jsjkx.201200105
    Abstract views: 443 | PDF (2003 KB) downloads: 493
    The application of big data in legislative evaluation helps to remedy chronic problems such as a single evaluation subject, data omission, data distortion, and formalized conclusions. However, defects remain: a lack of public participation under the legislature-dominated mode, improper cleaning methods leading to poor data quality, information asymmetry, a trust crisis caused by the “algorithm black box”, and biased evaluation conclusions caused by algorithmic bias. Based on the theory of technology-institution co-evolution, this paper applies blockchain, which is consistent with the nature of big data, to innovate the underlying architecture and make up for these defects, and then constructs an integrated application model of “big data + blockchain”. It optimizes the legislative evaluation institution with a public participation incentive mechanism, a data cleaning and review mechanism, a consensus-formation negotiation platform, and a procedure-justification circulation platform, so as to fully release the institutional function of legislative evaluation in gathering public opinion and wisdom, strengthening democratic supervision, and improving the quality of legislation.
    Failure-resilient DAG Task Rescheduling in Edge Computing
    CAI Ling-feng, WEI Xiang-lin, XING Chang-you, ZOU Xia, ZHANG Guo-min
    Computer Science    2021, 48 (10): 334-342.   DOI: 10.11896/jsjkx.210300304
    Abstract views: 429 | PDF (3486 KB) downloads: 739
    By deploying computation and storage resources at the network edge, close to the data source, and efficiently scheduling the tasks offloaded by users, edge computing can greatly improve users' quality of experience (QoE). However, for lack of reliable infrastructure support, the failure of edge servers or communication links can easily break the edge computing service. To handle this problem, we establish failure models of the computing nodes and communication links in edge computing, and propose DaGTR (Dependency-aware Greedy Task Rescheduling), a rescheduling algorithm for dependent user tasks under resource failures. DaGTR includes two sub-algorithms, DaGTR-N and DaGTR-L, which handle node and link failure events respectively. DaGTR is aware of the data dependencies among tasks and greedily reschedules the tasks affected by failure events to ensure the successful execution of each task. Simulation results show that the algorithm effectively avoids task failures caused by failure events and improves the success rate of tasks under resource failures.
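DaGTR's exact policy is defined in the paper; a generic dependency-aware greedy rescheduling step can nevertheless be sketched as follows. The task costs, server names, and the least-loaded placement rule are assumptions of this sketch, not the paper's algorithm:

```python
from collections import deque

def greedy_reschedule(tasks, deps, servers, failed):
    """Walk the task DAG in topological order and greedily place each
    task on the currently least-loaded healthy server.
    tasks: {task: estimated cost}; deps: {task: set of prerequisites}."""
    alive = sorted(s for s in servers if s not in failed)
    load = {s: 0.0 for s in alive}
    indeg = {t: len(deps.get(t, ())) for t in tasks}
    ready = deque(t for t in tasks if indeg[t] == 0)
    placement = {}
    while ready:
        t = ready.popleft()
        s = min(alive, key=lambda x: load[x])   # greedy: least-loaded server
        placement[t] = s
        load[s] += tasks[t]
        for u in tasks:                          # release tasks that waited on t
            if t in deps.get(u, ()):
                indeg[u] -= 1
                if indeg[u] == 0:
                    ready.append(u)
    return placement

# Server s3 has failed; tasks b and c both depend on a.
tasks = {"a": 1.0, "b": 2.0, "c": 1.0}
deps = {"b": {"a"}, "c": {"a"}}
placement = greedy_reschedule(tasks, deps, ["s1", "s2", "s3"], failed={"s3"})
```

Processing in topological order guarantees every task is placed only after its prerequisites, which is the "dependency-aware" part of the idea.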
    Microservices User Requests Allocation Strategy Based on Improved Multi-objective Evolutionary Algorithms
    ZHU Han-qing, MA Wu-bin, ZHOU Hao-hao, WU Ya-hui, HUANG Hong-bin
    Computer Science    2021, 48 (10): 343-350.   DOI: 10.11896/jsjkx.201100009
    Abstract views: 588 | PDF (2643 KB) downloads: 859
    How to allocate concurrent user requests in a system based on a microservices architecture so as to optimize objectives such as time, cost, and load balance is one of the important issues that microservices-based application systems need to address. Existing user-request allocation strategies based on fixed rules focus only on load balancing and struggle to balance multiple objective requirements. A microservices user-request allocation model is proposed with the multiple objectives of total request processing time, load balancing rate, and total communication transmission distance, to study the allocation of user requests among multiple microservice instances deployed in different resource centers. Multi-objective evolutionary algorithms with improved initial-solution generation, crossover, and mutation operators are used to solve the problem. Experiments on datasets of different scales show that, compared with commonly used multi-objective evolutionary algorithms and traditional fixed-rule methods, the proposed method better balances multiple objectives and has better solving performance.
    Study on Co-evolution of Underload Failure and Overload Cascading Failure in Multi-layer Supply Chain Network
    LI Shu, YANG Hua, SONG Bo
    Computer Science    2021, 48 (10): 351-358.   DOI: 10.11896/jsjkx.200900144
    Abstract views: 283 | PDF (4447 KB) downloads: 802
    The supply chain network is closely related to our lives, and cascading failure in supply chain networks has long been a hot research topic. This paper proposes a mixed failure model for multi-layer supply chain networks that better simulates the collapse of real supply chain networks and provides a reference for preventing such collapse. By modeling overload cascading failure in the upper-layer supplier network and underload failure in the lower-layer retailer network, the vulnerability of the supply chain network is studied when the two layers are attacked under different attack strategies. For a given initial attack ratio, the upper-layer supplier network is more robust than the lower-layer retailer network. Under the same attack ratio, deliberate attacks on network nodes cause larger-scale crashes than random attacks. When upper-layer supplier nodes are attacked initially, the threshold of network collapse is lower, making collapse more likely. This paper verifies the validity of the model and provides a new research model for preventing the collapse of supply chain networks.
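An overload cascade of the kind studied here can be sketched with a simple load-redistribution model. The degree-based initial load and the uniform capacity margin are common modeling conventions, not the paper's exact formulation:

```python
def overload_cascade(adj, capacity_margin=0.2, attacked=()):
    """Load-redistribution cascade: each node starts with load equal to
    its degree and capacity (1 + margin) * load.  When a node fails, its
    load is split evenly among live neighbours; any neighbour pushed
    over capacity fails in the next round."""
    load = {u: float(len(adj[u])) for u in adj}
    cap = {u: (1 + capacity_margin) * load[u] for u in adj}
    failed = set(attacked)
    frontier = set(attacked)
    while frontier:
        nxt = set()
        for u in frontier:
            alive = [v for v in adj[u] if v not in failed]
            if not alive:
                continue
            share = load[u] / len(alive)     # redistribute the lost load
            for v in alive:
                load[v] += share
                if load[v] > cap[v] and v not in nxt:
                    nxt.add(v)
        failed |= nxt
        frontier = nxt
    return failed

# Attacking the hub of a star overloads every low-capacity leaf.
adj = {"hub": [1, 2, 3, 4], 1: ["hub"], 2: ["hub"], 3: ["hub"], 4: ["hub"]}
failed = overload_cascade(adj, capacity_margin=0.2, attacked=["hub"])
```

An underload rule for the retailer layer would be the mirror image: a node fails when its supplied load drops below a lower threshold.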
    Reference | Related Articles | Metrics
    Overview of Application of Positioning Technology in Virtual Reality
    ZHANG Yu-xiang, REN Shuang
    Computer Science    2021, 48 (1): 308-318.   DOI: 10.11896/jsjkx.200800010
    Abstract411)      PDF(pc) (2066KB)(1949)       Save
    In recent years,virtual reality technology in China has developed rapidly alongside 5G technology,sensor technology and consumer graphics processors.The demand for virtual reality in education,transportation,commerce,entertainment,industry and other fields is increasing rapidly.Virtual reality is a comprehensive information technology in which positioning is the key technology determining the user's immersion and interaction,and an important support for virtual reality as a whole.Therefore,it is necessary to survey positioning technology,one of the core technologies of virtual reality.This paper first introduces virtual reality and positioning technology,then analyzes and compares in detail the typical positioning technologies currently used in virtual reality systems,covering their principles,related research results and usage scenarios in virtual reality.After that,it introduces the mainstream virtual reality positioning equipment on the market,discusses the positioning algorithms used in virtual reality positioning technology,and finally presents current problems and future development directions of virtual reality positioning technology.
    Reference | Related Articles | Metrics
    Application Research on Container Technology in Scientific Computing
    XU Yun-qi, HUANG He, JIN Zhong
    Computer Science    2021, 48 (1): 319-325.   DOI: 10.11896/jsjkx.191100111
    Abstract457)      PDF(pc) (2015KB)(1767)       Save
    Container is a new virtualization technology that has emerged in recent years.Due to its ability to provide isolated environments for running applications and services with minimal resource overhead,it has quickly gained popularity among enterprises and has seen wide application in business scenarios such as continuous integration,continuous deployment,automated testing and microservices.Although not as fully utilized as in industry,the packaging ability of containers also holds promise for improving productivity and code portability in the domain of scientific computing.In this paper,we discuss how containers and related technologies can be used in scientific computing by surveying existing application examples.The different application patterns represented by these examples suggest that the scientific computing community may benefit from container technology and its evolving ecosystem in many different ways.
    Reference | Related Articles | Metrics
    Highly Available Elastic Computing Platform for Metagenomics
    HE Zhi-peng, LI Rui-lin, NIU Bei-fang
    Computer Science    2021, 48 (1): 326-332.   DOI: 10.11896/jsjkx.191200030
    Abstract275)      PDF(pc) (3670KB)(646)       Save
    Next-generation sequencing(NGS) has significantly promoted the development of metagenomics due to its low cost and ultra-high throughput.However,it has brought great challenges to researchers at the same time,since processing large-scale,highly complex sequencing data is a tough task.On the one hand,the analysis of large-scale sequencing data consumes substantial resources such as hardware and time.On the other hand,in the process of computational analysis,a large number of metagenomics analysis tools inevitably need to be deployed,debugged and maintained,which is difficult for ordinary users.For these reasons,this paper compares the mainstream metagenomics computing platforms in the field and comprehensively analyzes the main advantages and disadvantages of each platform.Furthermore,a highly available and flexible metagenomics computing platform,MWS-MGA(More than a Web Service for Metagenomic Analysis),focusing on metagenomics computational analysis has been constructed,combining current effective computing service technologies.MWS-MGA provides not only multiple interactive access methods but also rich and flexible computing tools.Thus,the threshold for researchers to conduct metagenomics analysis has been greatly reduced.
    Reference | Related Articles | Metrics
    Semi-supervised Scene Recognition Method Based on Multi-mode Fusion
    SHEN Hong, LIU Jun-fa, CHEN Yi-qiang, JIANG Xin-long, HUANG Zheng-yu
    Computer Science    2019, 46 (12): 306-312.   DOI: 10.11896/jsjkx.191200500C
    Abstract480)      PDF(pc) (1913KB)(1113)       Save
    Scene recognition is an important part of pervasive computing.It aims to provide users with accurate personalized services and improve service quality by identifying the location of smartphone users.In real environments,accurate scene recognition faces two problems.Firstly,classification based on single-mode sensor data or wireless signal data is not accurate enough and does not generalize well.Secondly,scene recognition accuracy depends on a large amount of labeled data,resulting in high cost.In view of these problems,a semi-supervised scene recognition method based on multi-mode fusion is proposed.The method makes full use of the complementary information of Wi-Fi,Bluetooth and sensors to improve recognition accuracy.Compared with recognition based on single-mode data,the fused features increase static scene classification accuracy by 10%.A semi-supervised learning method is constructed to solve the problem of high data acquisition cost in dynamic scenes,and classification accuracy remains over 90% with half of the labeled data removed.The results show that introducing semi-supervised learning based on the complementary advantages of Wi-Fi,Bluetooth and sensor information can reduce data collection cost while improving scene recognition accuracy and universality.
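    The two ideas in the abstract, concatenating Wi-Fi, Bluetooth and sensor features and then pseudo-labeling unlabeled samples, can be sketched as follows. The nearest-centroid base classifier, the acceptance threshold and all feature values are assumptions for illustration, not the paper's model.

```python
# Illustrative multi-mode fusion + self-training sketch. The base classifier
# (nearest centroid) and all data are assumptions, not the paper's method.

def fuse(wifi, bt, sensor):
    return wifi + bt + sensor            # simple feature concatenation

def centroid(samples):
    return [sum(v) / len(samples) for v in zip(*samples)]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def self_train(labeled, unlabeled, accept_dist=25.0, rounds=3):
    """labeled: {scene: [fused vectors]}; pseudo-labels unlabeled samples."""
    for _ in range(rounds):
        cents = {c: centroid(v) for c, v in labeled.items()}
        remaining = []
        for x in unlabeled:
            c = min(cents, key=lambda k: dist(x, cents[k]))
            if dist(x, cents[c]) <= accept_dist:
                labeled[c].append(x)     # confident: accept the pseudo-label
            else:
                remaining.append(x)      # defer to a later round
        unlabeled = remaining
    return labeled

data = {"office": [fuse([-40.0], [-60.0], [0.1])],
        "corridor": [fuse([-70.0], [-80.0], [0.9])]}
out = self_train(data, [fuse([-42.0], [-62.0], [0.2])])
print(len(out["office"]))  # -> 2 (the unlabeled sample joins the closer scene)
```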
    Reference | Related Articles | Metrics
    PPI Network Inference Algorithm for PCP-MS Data
    CHEN Zheng, TIAN Bo, HE Zeng-you
    Computer Science    2019, 46 (12): 313-321.   DOI: 10.11896/jsjkx.181102215
    Abstract470)      PDF(pc) (3391KB)(1053)       Save
    With the development of proteomics,scholars have begun to pay more attention to the construction of protein-protein interaction(PPI) networks.Mass spectrometry(MS) has become a representative approach to PPI inference and is one of the main experimental methods for constructing PPI networks.Mass spectrometry generates a large amount of experimental protein MS data,such as affinity purification-mass spectrometry(AP-MS) data and protein correlation profiling-mass spectrometry(PCP-MS) data,which provide important data support for the construction of PPI networks;constructing PPI networks by hand,however,is impracticable and time-consuming.Thus,PPI network inference algorithms for PCP-MS data have become a research hotspot in bioinformatics.This paper focuses on PPI network inference for the two main types of mass spectral data(AP-MS and PCP-MS) and designs effective methods to address current bottlenecks,achieving the construction of high-quality PPI networks.Existing algorithms for PPI network inference from PCP-MS data are still in their infancy,and only a few related algorithms exist.They have several problems.Specifically:1)the results produced by different algorithms contain many erroneous interactions,while correct interactions are omitted;2)different algorithms may produce very different results on the same data set;3)the performance of the same algorithm varies considerably across data sets.For the problem of PPI network inference from PCP-MS data,this paper proposes a PPI scoring method based on correlation analysis and rank aggregation.The method is based on unsupervised learning and includes two steps.Firstly,correlation coefficients between protein pairs are computed under multiple measures,yielding multiple sets of PPI scores.Secondly,the multiple results for each pair of proteins are combined via rank aggregation into a single PPI score.The experimental results show that this method is comparable with supervised learning methods that use a standard reference set.
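    The two-step scoring described above can be sketched directly: score every protein pair under several similarity measures over their elution profiles, then combine the per-measure rankings with a Borda-style rank sum. The profiles and the particular measures below are illustrative assumptions, not the paper's choices.

```python
# Sketch of correlation scoring + rank aggregation for PPI inference.
# Profiles and the two similarity measures are illustrative assumptions.

from itertools import combinations

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def manhattan_sim(x, y):                      # a second, simpler measure
    return -sum(abs(a - b) for a, b in zip(x, y))

def aggregate(profiles, measures):
    pairs = list(combinations(sorted(profiles), 2))
    total_rank = {p: 0 for p in pairs}
    for m in measures:
        scored = sorted(pairs, key=lambda p: m(profiles[p[0]], profiles[p[1]]),
                        reverse=True)
        for rank, p in enumerate(scored):
            total_rank[p] += rank             # lower rank sum = stronger PPI
    return sorted(pairs, key=lambda p: total_rank[p])

# A and B co-elute (similar profiles); C peaks elsewhere.
profiles = {"A": [1, 5, 9, 5, 1], "B": [1, 6, 9, 4, 1], "C": [9, 1, 0, 1, 9]}
print(aggregate(profiles, [pearson, manhattan_sim])[0])  # -> ('A', 'B')
```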
    Reference | Related Articles | Metrics
    Following-degree Tree Algorithm to Detect Overlapping Communities in Complex Networks
    FU Li-dong, LI Dan, LI Zhan-li
    Computer Science    2019, 46 (12): 322-326.   DOI: 10.11896/jsjkx.190200293
    Abstract308)      PDF(pc) (1523KB)(626)       Save
    Overlapping community detection is a key and difficult issue in the field of complex network research.Due to the widespread hierarchical structures in real networks,hierarchical overlapping community detection methods are well suited to studying and analyzing real-world complex networks.However,research on this kind of method is still limited.This paper proposes a new overlapping community detection algorithm,named following-degree tree,based on definitions of leadership and subordination among complex network nodes.Exploiting the hierarchical characteristic,the algorithm constructs the following-degree tree by calculating the following degree of every node,and finally finds overlapping nodes and overlapping communities by dividing the tree.The feasibility of the algorithm is demonstrated on an artificial network,and its effectiveness is verified by experiments on the Dolphin and Karate networks.The proposed algorithm achieves high extended modularity and more reasonable divisions,and can find overlapping nodes that other algorithms cannot.
    Reference | Related Articles | Metrics
    Analysis of SIR Model Based on Individual Heterogeneous Infectivity and State Transition
    QU Qian-qian, HAN Hua
    Computer Science    2019, 46 (12): 327-333.   DOI: 10.11896/jsjkx.181001974
    Abstract698)      PDF(pc) (2309KB)(1094)       Save
    To model infected individuals with different infection rates,this paper proposes an epidemic model with two types of infected individuals and a state transition probability,based on the basic SIR epidemic model in complex networks.Based on the existence of the endemic equilibrium point,the basic reproduction number R0 is obtained.Two common immunization strategies,random immunization and targeted immunization,are analyzed.Simulation experiments show that,under the same conditions,diseases spread faster and wider in heterogeneous networks than in homogeneous networks when R0>1,and network structure has little influence on the spread of diseases when R0<1.Further research shows that the greater the degree of the initially infected nodes,the faster the disease spreads and the higher its peak;the greater the closeness centrality of the initially infected nodes,the faster and wider the disease spreads;the clustering coefficient has little effect on the transmission process;the basic reproduction number decreases as the transition probability increases,so increasing the transition probability can effectively reduce the spread of disease;and with the same average immunization rate,targeted immunization is more effective than random immunization.
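    A mean-field, discrete-time sketch of such a model can make the transition-probability effect concrete: two infected classes with different infectivity, with individuals moving from the strongly to the weakly infectious class at a given probability. All parameter values are illustrative assumptions, not the paper's.

```python
# Discrete-time sketch of an SIR-type model with two infected classes and a
# transition probability between them. Parameters are illustrative only.

def simulate(beta1, beta2, gamma, transfer, days, s0=0.99, i1_0=0.01):
    s, i1, i2, r = s0, i1_0, 0.0, 0.0
    for _ in range(days):
        new_inf = min((beta1 * i1 + beta2 * i2) * s, s)  # force of infection
        moved = transfer * i1        # class-1 individuals move to class 2
        rec1, rec2 = gamma * i1, gamma * i2
        s -= new_inf
        i1 += new_inf - moved - rec1
        i2 += moved - rec2
        r += rec1 + rec2
    return r                         # final epidemic size

# A larger transition probability into the weakly infectious class shrinks
# the outbreak, matching the abstract's conclusion about transfer probability.
r_no_transfer = simulate(beta1=0.6, beta2=0.1, gamma=0.2, transfer=0.0, days=300)
r_transfer = simulate(beta1=0.6, beta2=0.1, gamma=0.2, transfer=0.5, days=300)
print(r_no_transfer > r_transfer)    # -> True
```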
    Reference | Related Articles | Metrics
    Method of Mining Hidden Transition of Business Process Based on Behavior Profiles
    SONG Jian, FANG Xian-wen, WANG Li-li, LIU Xiang-wei
    Computer Science    2019, 46 (12): 334-340.   DOI: 10.11896/jsjkx.180901654
    Abstract294)      PDF(pc) (2243KB)(638)       Save
    In business process optimization,mining hidden transitions from infrequent behaviors is one of the important tasks.Mining hidden transitions from infrequent behaviors can better restore the process model and improve process efficiency.Based on behavioral profile theory,this paper mines relatively high-frequency behaviors from logs to obtain an initial model.Firstly,the event log is filtered by a reasonableness threshold to obtain valid low-frequency sequence logs.Then,the low-frequency sequence logs are used to optimize the initial model:the behavioral profile relation between each pair of activities is compared with the source model to find the changed region,and possible hidden transitions are mined.Next,the hidden transitions are further verified through optimization indicators,yielding a complete and accurate process model with implicit transitions.Finally,the model is analyzed through concrete examples and simulations to verify the effectiveness of the method.
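    The filtering step above can be sketched simply: count trace variants in the event log and keep only those whose relative frequency passes a "reasonableness" threshold, separating useful infrequent behavior from noise. The log and the threshold value are illustrative assumptions.

```python
# Sketch of threshold-based event-log filtering. Log and threshold are
# illustrative assumptions, not from the paper's case study.

from collections import Counter

def filter_log(traces, threshold=0.1):
    """Keep trace variants whose relative frequency >= threshold."""
    total = len(traces)
    freq = Counter(tuple(t) for t in traces)
    kept = [list(t) for t, c in freq.items() if c / total >= threshold]
    return sorted(kept)

# 12 + 7 + 1 = 20 traces; the single-occurrence variant is treated as noise.
log = [["a", "b", "c"]] * 12 + [["a", "c", "b"]] * 7 + [["a", "x", "c"]] * 1
print(filter_log(log, threshold=0.1))  # -> [['a', 'b', 'c'], ['a', 'c', 'b']]
```

    Here the low-frequency but still "reasonable" variant a-c-b survives the filter and would feed the model-optimization step, while the one-off variant is discarded.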
    Reference | Related Articles | Metrics
    WiCount:A Crowd Counting Method Based on WiFi Channel State Information
    DING Ya-san, GUO Bin, XIN Tong, WANG Pei, WANG Zhu, YU Zhi-wen
    Computer Science    2019, 46 (11): 297-303.   DOI: 10.11896/jsjkx.191100506C
    Abstract770)      PDF(pc) (2584KB)(1542)       Save
    Crowd counting is the process of monitoring the number of people in a certain area,which is crucial in traffic supervision and similar applications.For example,counting people waiting in lines at airports or retail stores could be used to improve service.At present,methods based on videos(or images) and wearable devices have been proposed,but these schemes have shortcomings:a camera can only monitor within line-of-sight range,and wearable devices require people to wear them consciously.Some scholars have used radar-related technology to count people,but its cost is very high.This paper proposes WiCount,an indoor crowd counting scheme based on WiFi signals.WiCount aims at fine-grained indoor people counting and can accurately identify the number of people at different positions.According to the relationship between the number of indoor people and CSI amplitude fluctuations,features are extracted that mitigate the differences in CSI data produced by the same number of people at distinct positions,and then three classifiers(SVM,KNN and BP neural network) are trained to identify the number of people in the monitoring area.Prototype systems are implemented in a laboratory and a meeting room,and recognition performs well when the number of people is small.In the laboratory,accuracy is up to 90% with no more than 4 persons.In the meeting room,no matter where people move,accuracy reaches 89.58% with no more than 2 persons.
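    The pipeline in the abstract, amplitude-fluctuation features followed by a trained classifier, can be sketched with a k-nearest-neighbour vote. The CSI windows, feature choice and labels below are synthetic assumptions, not WiCount's actual features or data.

```python
# Illustrative CSI-amplitude feature extraction + kNN counting sketch.
# Windows, features and labels are synthetic assumptions.

def features(window):
    """Fluctuation features of one CSI amplitude window."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    return (var, max(window) - min(window))   # fluctuation strength + range

def knn_count(train, query, k=3):
    """train: list of (feature_vector, people_count) pairs; majority vote."""
    ranked = sorted(train, key=lambda t: sum((a - b) ** 2
                                             for a, b in zip(t[0], query)))
    votes = [count for _, count in ranked[:k]]
    return max(set(votes), key=votes.count)

train = [(features([10, 10.1, 9.9, 10.0]), 0),   # still channel: 0 people
         (features([10, 10.2, 9.8, 10.1]), 0),
         (features([10, 12.5, 8.0, 11.5]), 2),   # strong fluctuation: 2 people
         (features([10, 13.0, 7.5, 12.0]), 2)]
print(knn_count(train, features([10, 12.0, 8.2, 11.0])))  # -> 2
```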
    Reference | Related Articles | Metrics
    Training-free Human Respiration Sensing Based on Wi-Fi Signal
    YU Yi-ran, CHANG Jun, WU Liu-fan, ZHANG Yong-hong
    Computer Science    2019, 46 (11): 304-308.   DOI: 10.11896/jsjkx.190600143
    Abstract444)      PDF(pc) (1962KB)(1024)       Save
    With the rapid development of wireless communication technology,Wi-Fi has been widely used in public and private settings.Non-invasive breath detection based on wireless technology has broad application prospects in the field of smart home.Considering that existing solutions struggle to explain the huge performance differences across scenarios,this paper introduces the free-space Fresnel knife-edge diffraction model and designs a training-free breathing detection scheme based on Wi-Fi signals.Firstly,the Fresnel zone knife-edge diffraction model in free space is introduced,and the diffraction propagation characteristics of Wi-Fi signals in indoor environments are verified.Secondly,the relationship between diffraction gain and the minute chest displacement in human respiration is accurately quantified,which not only explains why Wi-Fi devices can be used to detect human breathing,but also shows where detection is easier.Finally,the respiratory rate is estimated from RSS by fast Fourier transform(FFT).The proposed algorithm clearly identifies the distribution of good and bad positions for breath detection,and at good positions the accuracy of breath estimation reaches 93.8%.Experimental results show that a single pair of transceivers makes centimeter-scale breathing perception possible and is expected to provide a ubiquitous respiratory detection solution over commodity Wi-Fi infrastructure.
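    The final estimation step can be sketched directly: find the dominant frequency of an RSS window inside the human-breathing band and convert it to breaths per minute. A plain O(N²) DFT stands in for the paper's FFT, and the synthetic signal (breathing at 0.25 Hz) is an assumption for illustration.

```python
# Sketch of respiratory-rate estimation from RSS via frequency analysis.
# A naive DFT replaces the FFT; the RSS trace is synthetic.

import math

def breathing_rate(rss, fs, band=(0.1, 0.5)):
    """Dominant frequency in `band` (Hz), returned as breaths per minute."""
    n = len(rss)
    mean = sum(rss) / n
    x = [v - mean for v in rss]                 # remove the DC component
    best_f, best_mag = 0.0, -1.0
    for k in range(1, n // 2):
        f = k * fs / n
        if not band[0] <= f <= band[1]:
            continue                            # keep only plausible breathing rates
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = re * re + im * im
        if mag > best_mag:
            best_f, best_mag = f, mag
    return best_f * 60                          # breaths per minute

fs = 10.0                                       # 10 Hz sampling rate
rss = [-50 + 2 * math.sin(2 * math.pi * 0.25 * t / fs) for t in range(200)]
print(breathing_rate(rss, fs))                  # -> 15.0
```

    Restricting the search to the 0.1 to 0.5 Hz band (6 to 30 breaths per minute) is what makes the estimate robust to slower drift and faster noise in the RSS trace.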
    Reference | Related Articles | Metrics
    Cloud Computing Resource Scheduling Strategy Based on Improved Bacterial Foraging Algorithm
    ZHAO Hong-wei, TIAN Li-wei
    Computer Science    2019, 46 (11): 309-314.   DOI: 10.11896/jsjkx.181002000
    Abstract323)      PDF(pc) (1996KB)(1182)       Save
    As one of the core problems of cloud computing,the efficiency of the scheduling algorithm has a direct impact on the operating capacity of the system.The bacterial foraging algorithm,with good coordination and overall stability,is a swarm intelligence algorithm that imitates the foraging behavior of bacterial colonies.This paper presents a bacterial foraging approach to cloud computing resource scheduling,in which the replication and elimination of bacteria control the node allocation of cloud computing resources.To address the overly large resource variation intervals caused by the random chemotaxis direction in the traditional bacterial foraging algorithm,a CBFO optimization algorithm based on the quorum sensing mechanism and an MPSOBS optimization algorithm introducing bacterial chemotaxis into group collaboration are proposed.According to the environment around the nodes and the state of the whole colony,the chemotaxis factor is selected to make the chemotaxis process more accurate,and the method is implemented on a cloud computing platform.The simulation results show that the proposed algorithm outperforms the BFO algorithm in terms of task execution time,system load balancing and resource service quality,and can improve the service quality of cloud applications while improving resource utilization.
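    The basic bacterial-foraging loop underlying such schedulers can be sketched on a toy makespan problem: each bacterium encodes a task-to-node assignment, chemotaxis tries a local move and keeps it only if it helps, and reproduction copies the healthier half of the colony over the worse half. Elimination-dispersal is omitted for brevity; sizes, parameters and the fitness function are illustrative assumptions, not the paper's CBFO/MPSOBS variants.

```python
# Toy bacterial-foraging sketch for task-to-node scheduling (illustrative
# parameters; elimination-dispersal omitted).

import random

def makespan(assign, task_len, n_nodes):
    loads = [0.0] * n_nodes
    for task, node in enumerate(assign):
        loads[node] += task_len[task]
    return max(loads)

def bfo_schedule(task_len, n_nodes, pop=10, chemo_steps=30, seed=1):
    rng = random.Random(seed)
    bacteria = [[rng.randrange(n_nodes) for _ in task_len] for _ in range(pop)]
    for _ in range(chemo_steps):
        for b in bacteria:                 # chemotaxis: tumble to a nearby assignment
            t = rng.randrange(len(task_len))
            old = b[t]
            b[t] = rng.randrange(n_nodes)
            before = makespan([old if i == t else v for i, v in enumerate(b)],
                              task_len, n_nodes)
            if makespan(b, task_len, n_nodes) > before:
                b[t] = old                 # revert a worsening move
        # Reproduction: the healthier half replaces the worse half.
        bacteria.sort(key=lambda b: makespan(b, task_len, n_nodes))
        bacteria = bacteria[:pop // 2] + [list(b) for b in bacteria[:pop // 2]]
    return bacteria[0], makespan(bacteria[0], task_len, n_nodes)

best, ms = bfo_schedule([4, 3, 3, 2, 2, 2], n_nodes=2)
print(ms)  # optimal split of these 16 units over 2 nodes has makespan 8
```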
    Reference | Related Articles | Metrics
    Load Balancing Scheduling Optimization of Cloud Workflow Using Improved Shuffled Frog Leaping Algorithm
    XU Jun, XIANG Qian-hong, XIAO Gang
    Computer Science    2019, 46 (11): 315-322.   DOI: 10.11896/jsjkx.181001866
    Abstract530)      PDF(pc) (1834KB)(915)       Save
    In instance-intensive and open cloud environments,workflow scheduling suffers from frequent contention for cheap,high-quality resources,resulting in poor scheduling efficiency and disrupted stability.In addition,unlike general task scheduling,workflow tasks usually have dependency relations,which greatly increase the complexity of task assignment.Aiming at the load imbalance between cloud virtual machines,a hierarchical workflow scheduling model is proposed,which divides tasks into levels according to their priorities so as to alleviate virtual machine load pressure.An improved shuffled frog leaping algorithm(ISFLA) is then presented:a time-greedy strategy is applied to initialize the population,improving search efficiency,and a reconstruction strategy that locally perturbs the positions of the best solutions is put forward to escape local optima.Finally,experimental results on cloud workflow scheduling show that the improved shuffled frog leaping algorithm optimizes the load balance degree and is more effective in task processing and searching than the traditional shuffled frog leaping algorithm and particle swarm optimization.
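    The baseline shuffled-frog-leaping loop that ISFLA improves on can be sketched on a toy objective: frogs (candidate solutions) are globally sorted by fitness, dealt into memeplexes, and each memeplex's worst frog leaps toward its best frog, accepting only improving leaps. The sphere function stands in for a real workflow cost model; all sizes and parameters are illustrative assumptions.

```python
# Minimal shuffled-frog-leaping (SFLA) sketch on a toy minimization problem.
# The sphere objective stands in for a workflow scheduling cost model.

import random

def sfla_min(fitness, dim, pop=12, n_memeplex=3, iters=40, seed=7):
    rng = random.Random(seed)
    frogs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    for _ in range(iters):
        frogs.sort(key=fitness)                    # global ranking
        # Deal frogs round-robin into memeplexes (the standard SFLA partition).
        for i in range(n_memeplex):
            group = frogs[i::n_memeplex]
            best, worst = group[0], group[-1]
            # The worst frog leaps toward the memeplex's best frog.
            cand = [w + rng.random() * (b - w) for b, w in zip(best, worst)]
            if fitness(cand) < fitness(worst):
                worst[:] = cand                    # accept improving leaps only
        # Shuffling happens implicitly: the next iteration re-sorts all frogs.
    return min(frogs, key=fitness)

sphere = lambda x: sum(v * v for v in x)
best = sfla_min(sphere, dim=3)
print(round(sphere(best), 4))
```

    ISFLA as described above would replace the uniform random initialization with a time-greedy one and add a reconstruction step for stagnant best solutions; this sketch shows only the shared memeplex/leap skeleton.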
    Reference | Related Articles | Metrics
    Page 1 of 4, 95 records