Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
    Contents of the Interdiscipline & Application section of our journal
    Scalable Parallel Computing Method for Conditional Likelihood Probability of Nucleotide Molecular Phylogenetic Tree Based on GPU
    HUANG Jia-wei, LI Xiao-peng, LING Cheng
    Computer Science    2022, 49 (11A): 210800189-7.   DOI: 10.11896/jsjkx.210800189
    The efficient implementation of Bayesian and Metropolis-Hastings algorithms makes MrBayes a widely used tool for molecular sequence phylogenetic analysis. However, the increase in molecular sequences and evolutionary parameters leads to rapid expansion of the sample space of candidate molecular trees, so that the reconstruction of phylogenetic trees faces great computational challenges. To reduce the calculation time of the conditional likelihood probability of molecular trees in MrBayes phylogenetic analysis and to improve analysis efficiency, a number of parallel acceleration methods based on graphics processors (GPU) have emerged in recent years. To improve the scalability of these parallel methods, an optimized multithreaded parallel computing method for likelihood probability is proposed in this paper. Because the calculation of molecular state likelihood probabilities under an among-site variable evolutionary rate model must use different transition probability matrices, this method further decomposes the multithreaded parallel calculation of likelihood probabilities over different sites into the calculation of conditional likelihood probabilities under different transition probability matrices across multiple sites. This strategy optimizes the parallel overlap between threads and improves parallel efficiency by increasing the number of threads without changing the computation-to-transfer ratio of a single thread. In addition, because each thread warp only calculates likelihood probabilities under the same transition probability matrix, the synchronization overhead between different warps when using shared memory is avoided, which further improves the computing efficiency of the kernel. Calculation results on 4 groups of real data and 30 groups of simulated data show that the computational performance of this method is 1.78 and 2.04 times higher than that of tgMC3 (version 2.0) and nMC3 (version 2.1.1), respectively, in the acceleration of the core likelihood function.
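    For readers unfamiliar with the computation being parallelized, the sketch below is an editorial illustration (not the paper's CUDA kernel) of the per-node conditional-likelihood recursion, with sites grouped by transition probability matrix in the spirit of the warp-per-matrix decomposition described above; the array shapes and toy data are assumptions.

```python
import numpy as np

def node_conditional_likelihood(P_by_rate, site_rate, cl_left, cl_right):
    """Conditional likelihoods of one internal node.

    P_by_rate : (R, S, S) transition probability matrix per rate category
    site_rate : (N,) rate-category index of each alignment site
    cl_left, cl_right : (N, S) conditional likelihoods of the two children
    Returns an (N, S) array for the parent node.
    """
    cl_parent = np.empty_like(cl_left)
    # Group sites by rate category so every group of work items applies a
    # single transition matrix, mirroring the warp-per-matrix decomposition.
    for r, P in enumerate(P_by_rate):
        idx = np.flatnonzero(site_rate == r)
        if idx.size == 0:
            continue
        left = cl_left[idx] @ P.T     # sum_t P[s, t] * L_child(t) per site
        right = cl_right[idx] @ P.T
        cl_parent[idx] = left * right
    return cl_parent

# Toy usage: 4 nucleotide states, 2 rate categories, 6 sites.
rng = np.random.default_rng(0)
P_by_rate = rng.dirichlet(np.ones(4), size=(2, 4))     # each row sums to 1
site_rate = np.array([0, 0, 1, 1, 0, 1])
leaves = rng.random((2, 6, 4))
print(node_conditional_likelihood(P_by_rate, site_rate, leaves[0], leaves[1]))
```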
    Fault Diagnosis of Shipboard Zonal Distribution Power System Based on FWA-PSO-MSVM
    GAO Ji-hang, ZHANG Yan
    Computer Science    2022, 49 (11A): 210800209-5.   DOI: 10.11896/jsjkx.210800209
    The occurrence of faults greatly affects the safety of the shipboard zonal distribution power system. To ensure the safe operation of ships, this paper focuses on 10 types of short-circuit faults in the shipboard zonal distribution power system. MATLAB/Simulink is used to establish a simulation model of the shipboard zonal distribution power system, and SMOTE oversampling is adopted to preprocess the fault data. The feature vectors extracted from the fault data by principal component analysis (PCA) are taken as the input of a multi-class support vector machine (MSVM) for fault diagnosis. To optimize the diagnosis results, a fireworks particle swarm optimization algorithm is presented to optimize the penalty factor C and the kernel function parameter γ of the MSVM, and it is compared with MSVM fault classification optimized only by the particle swarm optimization algorithm. Simulation results show that the proposed algorithm achieves higher fault classification accuracy and precision.
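    As an editorial illustration of the kind of hyper-parameter search described above, the sketch below tunes the penalty factor C and the RBF kernel parameter γ of an SVM with a plain particle swarm (not the fireworks-PSO hybrid of the paper), over synthetic data standing in for the PCA fault features; all settings are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic multi-class data standing in for PCA-reduced fault features.
X, y = make_classification(n_samples=300, n_features=8, n_informative=6,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

def fitness(c_log, g_log):
    # Mean cross-validated accuracy of an RBF SVM with the candidate C, gamma.
    clf = SVC(C=10 ** c_log, gamma=10 ** g_log)
    return cross_val_score(clf, X, y, cv=3).mean()

rng = np.random.default_rng(1)
n_particles, n_iter = 12, 20
pos = rng.uniform([-1, -3], [3, 1], size=(n_particles, 2))   # log10(C), log10(gamma)
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([fitness(*p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, [-1, -3], [3, 1])
    vals = np.array([fitness(*p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("best C=%.3g, gamma=%.3g, CV accuracy=%.3f"
      % (10 ** gbest[0], 10 ** gbest[1], pbest_val.max()))
```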
    Study on Decision-making for Low-carbon Supply Chain with Capital Constraint and Risk Aversion
    LI Li-ying, LIU Guang-an, LI Xiao-bing, WANG Bo
    Computer Science    2022, 49 (11A): 210900104-6.   DOI: 10.11896/jsjkx.210900104
    To alleviate the financing difficulties of capital-constrained manufacturers in a low-carbon environment, a Stackelberg game model is established under the government's cap-and-trade system, led by a risk-neutral supplier and followed by a loss-averse manufacturer. Under the assumption that the manufacturer borrows two loans to execute the ordering decision and make the emission reduction investment, the optimal ordering and emission reduction decisions of the risk-neutral and loss-averse manufacturer and the optimal wholesale pricing decision of the supplier are obtained, respectively. Theoretical and numerical analysis shows that when the manufacturer is loss-averse, it makes more conservative order decisions and the supplier sets a higher wholesale price; as a result, the manufacturer reduces the emission reduction level. The impact of loss aversion on the expected utility of the manufacturer is related to the carbon cap allocated by the government. When the carbon cap is large, a manufacturer with higher loss aversion can earn additional gains by selling more of its remaining emission permits in the carbon permit trading market.
    Summary and Analysis of Research on ManyCore Processor Technologies
    SONG Li-guo, HU Cheng-xiu, WANG Liang
    Computer Science    2022, 49 (11A): 211000012-7.   DOI: 10.11896/jsjkx.211000012
    Processors have been developing from single-core to manycore. The latest research results abroad on manycore processors are comprehensively analyzed. The development status of manycore processors is first introduced, and then recent related papers are retrieved and summarized from three aspects: architecture, on-chip storage and software. The main contributions and basic ideas of these papers are analyzed from the perspectives of energy efficiency, performance and reliability. Finally, combined with the development trend of integrated circuits in the post-Moore era, two main technical directions are expounded: the emerging adaptive architecture technology and the three-dimensional integration technology of manycore processors.
    Testing System of Target Recognition Method of Array Screen
    PAN Deng, CAI Meng-yun, WANG Zhen-yu, LV Jia-liang
    Computer Science    2022, 49 (11A): 211000109-4.   DOI: 10.11896/jsjkx.211000109
    To solve the problem that the existing array screen test system cannot judge and recognize multiple continuous target signals and that each unit detection screen is susceptible to external interference, a similarity coefficient method is proposed to distinguish the real target signal from the interference signal in the output of each unit detection screen. Based on D-S evidence theory, a method for identifying real target signals and eliminating false targets in a test system with multiple photoelectric detection sensors is established. The characteristics of the target signal output by the array screen test system are studied, the distribution of the reliability function of the output signal target type under the evidence bodies of the four unit detection screens in the array screen test system is given, and the target recognition result is then obtained through fusion processing. In this way, the goals of eliminating false targets and recognizing signals are achieved.
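    Dempster's rule of combination, on which the fusion step above rests, can be sketched as follows; the frame of discernment {target, interference} and the two mass assignments are illustrative assumptions, not the paper's actual reliability functions.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) by Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict, evidence cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two detection screens reporting belief over {target, interference}.
T, I = frozenset({"target"}), frozenset({"interference"})
TI = T | I                                        # ignorance (either hypothesis)
screen1 = {T: 0.7, I: 0.1, TI: 0.2}
screen2 = {T: 0.6, I: 0.2, TI: 0.2}
fused = dempster_combine(screen1, screen2)
print({tuple(sorted(k)): round(v, 3) for k, v in fused.items()})
```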
    Fault Diagnosis Based on Channel Splitting CLAHE and Adaptive Threshold Residual Network Under Variable Operating Conditions
    HUANG Xiao-ling, ZHANG De-ping
    Computer Science    2022, 49 (11A): 211100122-7.   DOI: 10.11896/jsjkx.211100122
    Driven by the development of big data, fault diagnosis methods based on deep learning have gradually become a research hotspot in the field of fault diagnosis in recent years. However, in real industrial settings, deep learning fault diagnosis still has two limitations: 1) early fault features are weak and fault information extraction is insufficient; 2) the distributions of fault data collected under variable operating conditions are inconsistent. These two points lead to low fault recognition rates and poor domain adaptability in deep learning fault diagnosis. To solve the above problems, a fault diagnosis method based on channel-splitting CLAHE and an adaptive threshold residual network (FEResNet) under variable operating conditions is proposed, which starts from the two perspectives of enhancing important features and removing redundant features. Firstly, the Morlet wavelet transform is employed to excavate the discriminative time-frequency information hidden in vibration signals under variable operating conditions. Then, CLAHE with channel splitting is designed to improve the contrast and clarity of the time-frequency diagram and enhance the fault information. Finally, the feature-enhanced time-frequency diagram is input into the designed adaptive threshold residual network to remove redundant features. Experimental results on the CWRU dataset show that the prediction accuracy of the proposed method under the same working condition reaches 100%, the average prediction accuracy across different working conditions reaches 99.03%, and the domain adaptability is strong.
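    The channel-splitting CLAHE step can be roughly illustrated as below, assuming OpenCV (cv2) and an 8-bit RGB rendering of the time-frequency diagram; this is a minimal sketch, not the authors' exact preprocessing pipeline.

```python
import cv2
import numpy as np

def channel_split_clahe(rgb_image, clip_limit=2.0, grid=(8, 8)):
    """Apply CLAHE to each colour channel of a time-frequency image separately.

    rgb_image : uint8 array of shape (H, W, 3), e.g. a wavelet scalogram
                rendered with a colour map.
    """
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=grid)
    channels = cv2.split(rgb_image)                  # split into separate planes
    enhanced = [clahe.apply(ch) for ch in channels]  # equalise each plane on its own
    return cv2.merge(enhanced)

# Toy usage with a random "scalogram"; replace with a real time-frequency image.
img = (np.random.default_rng(0).random((64, 64, 3)) * 255).astype(np.uint8)
out = channel_split_clahe(img)
print(out.shape, out.dtype)
```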
    Empirical Research on Remaining Useful Life Prediction Based on Machine Learning
    WANG Jia-chang, ZHENG Dai-wei, TANG Lei, ZHENG Dan-chen, LIU Meng-juan
    Computer Science    2022, 49 (11A): 211100285-9.   DOI: 10.11896/jsjkx.211100285
    Remaining useful life (RUL) prediction is an essential task of a predictive maintenance system. This paper investigates the latest RUL prediction methods, focusing on direct RUL prediction based on machine learning. Firstly, we describe four representative machine learning models adopted by RUL prediction methods: support vector regression (SVR), the multilayer perceptron (MLP), the convolutional neural network (CNN) and the recurrent neural network (RNN). We then present the three primary benchmark datasets and two performance evaluation metrics widely used in RUL prediction. The contribution of this paper is to demonstrate the steps and key technical details of building RUL prediction models on the benchmark dataset (C-MAPSS) provided by NASA. We also compare the performance of these representative prediction models in detail and visually analyze the experimental results. Experimental results show that the performance of SVR, with its shallow structure, is significantly weaker than that of the models based on deep neural networks, and that CNN- and RNN-based models have a strong ability to mine complex feature interactions and temporal feature interactions. Finally, we provide an outlook on the future of predictive maintenance technology and discuss the main challenges.
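    A key technical detail in building such models is how run-to-failure records are turned into supervised samples; the sketch below shows the sliding-window construction with the piecewise-linear (capped) RUL label commonly used for C-MAPSS. The window length, cap value and toy data are assumptions.

```python
import numpy as np

def make_windows(sensor_matrix, window=30, max_rul=125):
    """Turn one engine's run-to-failure record into (window, features) samples.

    sensor_matrix : (T, F) array, one row per cycle, ordered in time.
    Returns X of shape (T - window + 1, window, F) and the piecewise-linear
    RUL labels commonly used for C-MAPSS (capped at max_rul).
    """
    T = sensor_matrix.shape[0]
    rul = np.minimum(np.arange(T - 1, -1, -1), max_rul)   # cycles left, capped
    X = np.stack([sensor_matrix[i:i + window] for i in range(T - window + 1)])
    y = rul[window - 1:]                                   # label = RUL at window end
    return X, y

# Toy usage: one simulated engine with 200 cycles and 14 sensor channels.
engine = np.random.default_rng(0).random((200, 14))
X, y = make_windows(engine)
print(X.shape, y.shape, y[:5], y[-5:])
```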
    Classical Simulation Realization of HHL Algorithm Based on OpenMP Parallel Model
    XIE Hao-shan, LIU Xiao-nan, ZHAO Chen-yan, HE Ming, SONG Hui-chao
    Computer Science    2022, 49 (11A): 211200028-5.   DOI: 10.11896/jsjkx.211200028
    Due to its natural superposition and entanglement, quantum computing has a parallel computing capability that classical computing technologies cannot match. Based on this powerful parallel capability, some known quantum algorithms are faster than classical algorithms in solving certain problems. However, at this stage, because quantum computers are still under development, the demand for algorithm experiments on quantum computers cannot be met, so quantum algorithms can be simulated classically on classical computers. The HHL algorithm solves systems of linear equations and is widely used in data processing, numerical calculation, optimization and other fields. On a classical computer platform, the HHL algorithm is simulated in C++, and the OpenMP parallel programming model is used to accelerate it. The simulation solves linear systems with 4×4, 8×8 and 16×16 matrices and achieves acceleration of the algorithm.
    Acceleration Method for Multidimensional Function Optimization Based on Artificial Bee Colony Algorithm
    LI Hui, HAN Lin, YU Zhe, WANG Wei
    Computer Science    2022, 49 (11A): 211200075-6.   DOI: 10.11896/jsjkx.211200075
    The artificial bee colony algorithm is widely used in the development of agricultural and rural big data applications, but the serial artificial bee colony algorithm has high time complexity and is not suitable for quickly solving multi-dimensional problems. Starting from the serial artificial bee colony algorithm, the cause of its low execution efficiency in multi-dimensional function optimization is analyzed, and a parallel multi-dimensional function optimization method based on the artificial bee colony algorithm is proposed after analyzing the multi-dimensional functions and determining their dependency relationships. The method consists of task allocation, data distribution, synchronization operations and task parallelism. To demonstrate the efficacy of the proposed method, the Haiguang processor is used as the hardware test platform to compare and test four multi-dimensional functions. Experimental results show that the proposed method significantly outperforms the serial artificial bee colony algorithm in solving the four multi-dimensional functions.
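    For reference, a minimal serial artificial bee colony loop (the baseline that a parallel method would accelerate) might look as follows; the test function, colony size and limit value are illustrative assumptions.

```python
import numpy as np

def abc_minimize(f, dim=10, bounds=(-5.0, 5.0), food_sources=20,
                 limit=50, max_iter=200, seed=0):
    """Minimal serial artificial bee colony: employed, onlooker and scout phases."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    foods = rng.uniform(lo, hi, (food_sources, dim))
    fit = np.array([f(x) for x in foods])
    trials = np.zeros(food_sources, dtype=int)

    def try_neighbour(i):
        k = rng.choice([j for j in range(food_sources) if j != i])
        d = rng.integers(dim)
        cand = foods[i].copy()
        cand[d] += rng.uniform(-1, 1) * (foods[i, d] - foods[k, d])
        cand[d] = np.clip(cand[d], lo, hi)
        val = f(cand)
        if val < fit[i]:
            foods[i], fit[i], trials[i] = cand, val, 0
        else:
            trials[i] += 1

    for _ in range(max_iter):
        for i in range(food_sources):                 # employed bee phase
            try_neighbour(i)
        probs = 1.0 / (1.0 + fit)                     # selection weights (f >= 0 here)
        probs /= probs.sum()
        for i in rng.choice(food_sources, food_sources, p=probs):  # onlooker phase
            try_neighbour(i)
        for i in np.flatnonzero(trials > limit):      # scout phase: abandon stale sources
            foods[i] = rng.uniform(lo, hi, dim)
            fit[i], trials[i] = f(foods[i]), 0
    best = fit.argmin()
    return foods[best], fit[best]

sphere = lambda x: float(np.sum(x ** 2))              # a standard multi-dimensional test function
x_best, f_best = abc_minimize(sphere)
print("best value:", f_best)
```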
    Secondary Modeling of Pollutant Concentration Prediction Based on Deep Neural Networks with Federated Learning
    QIAN Dong-wei, CUI Yang-guang, WEI Tong-quan
    Computer Science    2022, 49 (11A): 211200084-5.   DOI: 10.11896/jsjkx.211200084
    In the new century, along with the rapid development of the Chinese economy, air pollution in many areas of China has become relatively serious, while the government is paying more and more attention to air pollution and its efforts to control it are increasing. Currently, the six pollutants that have the greatest impact on China's air quality are O3, SO2, NO2, CO, PM10 and PM2.5. Predicting and forecasting the concentrations of these six pollutants and making corresponding control adjustments in time have therefore become urgent needs for protecting residents' health and building a beautiful China. At present, the mainstream solution for pollutant prediction is the WRF-CMAQ prediction system, which is based on two parts: the physical and chemical reactions of pollutants and meteorological simulation. However, because research on the generation mechanism of pollutants such as ozone is still ongoing, the predictions of the WRF-CMAQ model have large errors. This paper therefore adopts a deep neural network for secondary modeling of pollutant concentrations to reduce the prediction error. At the same time, this paper adopts federated learning, training on data from multiple monitoring stations to improve the generalization ability of the model. Experimental results show that the deep neural network scheme reduces the mean squared error to at most 3.93% of that of the primary WRF-CMAQ prediction, and that the scheme with federated learning improves performance by up to 68.89% compared with training on a single monitoring site in extensive tests.
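    The federated training idea can be sketched with a FedAvg-style loop in which each monitoring station fits a local model and a server averages the weights; the linear model, learning rate and synthetic station data below are assumptions, not the paper's deep network.

```python
import numpy as np

def local_update(weights, X, y, lr=0.05, epochs=5):
    """A few epochs of plain linear-regression gradient descent on one station's data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg(stations, dim, rounds=30):
    """Federated averaging: each station trains locally, the server averages weights."""
    global_w = np.zeros(dim)
    sizes = np.array([len(y) for _, y in stations], dtype=float)
    for _ in range(rounds):
        local_ws = [local_update(global_w, X, y) for X, y in stations]
        global_w = np.average(local_ws, axis=0, weights=sizes)   # size-weighted mean
    return global_w

# Toy usage: three monitoring stations sharing one underlying relation between
# (hypothetical) WRF-CMAQ features and the observed concentration.
rng = np.random.default_rng(0)
true_w = np.array([0.8, -0.3, 0.5])
stations = []
for n in (120, 80, 150):
    X = rng.normal(size=(n, 3))
    y = X @ true_w + 0.05 * rng.normal(size=n)
    stations.append((X, y))
print(np.round(fedavg(stations, dim=3), 3))
```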
    New SLAM Method of Multi-layer Lidar Assisted by Rotational Strapdown Inertial Navigation System
    LYU Run, LI Guan-yu, QI Pei, QIAN Wei-xing, WANG Lan-ze, FENG Tai-ping
    Computer Science    2022, 49 (11A): 211200088-5.   DOI: 10.11896/jsjkx.211200088
    Focusing on the influence of low-accuracy inertial sensors on the performance of lidar/inertial SLAM, an optimized SLAM method that fuses information from a multi-layer lidar and a rotational strapdown inertial navigation system is studied. In this scheme, a rotating strapdown inertial navigation alignment method based on fuzzy adaptive Kalman filtering is discussed, and real-time correction of the carrier attitude and inertial sensor errors is completed during carrier motion. Furthermore, the corrected inertial sensor data and the lidar point cloud data are fused in a tightly coupled mode to improve the accuracy and real-time performance of positioning and mapping when the carrier moves in complex scenes. Experimental results show that the SLAM scheme based on the fusion of rotating inertial navigation and multi-layer lidar information not only ensures real-time operation but also effectively improves the positioning performance of the lidar/inertial odometry and the accuracy of the point cloud map.
    Application of Early Quantum Algorithms in Quantum Communication,Error Correction and Other Fields
    Renata WONG
    Computer Science    2022, 49 (6A): 645-648.   DOI: 10.11896/jsjkx.210400214
    At present, one direction of development in quantum algorithms is to rethink the early quantum algorithms, each of which involves an important, groundbreaking concept in quantum computing. They are generally considered to belong only to the theoretical category because the problems they solve are of little practical value. However, they are still important, as they can solve a problem exponentially faster than a classical algorithm. This paper elaborates on some recent developments in repurposing the early quantum algorithms for quantum key distribution and other fields, focusing in particular on the Deutsch-Jozsa algorithm, the Bernstein-Vazirani algorithm and Simon's algorithm. The Deutsch-Jozsa algorithm determines whether a multi-argument function is balanced or constant; as recent research shows, it can be extended to applications in quantum communication and formal languages. The Bernstein-Vazirani algorithm finds a string encoded in a function; its applications can be extended to quantum key distribution and error correction. Simon's algorithm tackles the problem of identifying a string with a particular property; its modern applications include quantum communication and error correction.
    Optimization for Shor's Integer Factorization Algorithm Circuit
    LIU Jian-mei, WANG Hong, MA Zhi
    Computer Science    2022, 49 (6A): 649-653.   DOI: 10.11896/jsjkx.210600149
    With the help of techniques such as windowed arithmetic and the coset representation of modular integers, an overall optimization and resource estimation of the quantum circuit of Shor's algorithm is presented, and a simulation experiment of the designed quantum circuit is carried out. The Toffoli count and the depth of the overall circuit can be reduced by techniques such as windowed arithmetic and the coset representation of modular integers. The Toffoli count is 0.18n³ + 0.000465n³ log n and the measurement depth is 0.3n³ + 0.000465n³ log n. Owing to the windowed semiclassical Fourier transform, the space usage is 3n + O(log n) logical qubits. A trade-off between time and space consumption is made at the cost of introducing some approximation errors.
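    As a quick worked example of the estimates quoted above, the snippet below evaluates the stated Toffoli count, measurement depth and leading qubit term for two key sizes; taking the logarithm as base 2 is an assumption.

```python
import math

def shor_resources(n):
    """Evaluate the resource estimates quoted in the abstract (log taken as base 2 here)."""
    lg = math.log2(n)
    toffoli_count = 0.18 * n ** 3 + 0.000465 * n ** 3 * lg
    measurement_depth = 0.3 * n ** 3 + 0.000465 * n ** 3 * lg
    logical_qubits = 3 * n                    # plus the O(log n) ancilla term
    return toffoli_count, measurement_depth, logical_qubits

for n in (1024, 2048):
    t, d, q = shor_resources(n)
    print(f"n={n}: ~{t:.2e} Toffolis, depth ~{d:.2e}, >= {q} logical qubits")
```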
    Study on Improved BP Wavelet Neural Network for Supply Chain Risk Assessment
    XU Jia-nan, ZHANG Tian-rui, ZHAO Wei-bo, JIA Ze-xuan
    Computer Science    2022, 49 (6A): 654-660.   DOI: 10.11896/jsjkx.210800049
    In view of the impact of supply chain risks on upstream and downstream enterprises in the manufacturing industry, it is important to research methods for identifying and evaluating supply chain risks. Firstly, based on the supply chain operations reference model (SCOR) and taking automobile manufacturing enterprises as the research background, the identification process of supply chain risk indicators is studied by analyzing automobile supply chain risks in combination with field survey results, and an evaluation index system covering five risk categories is established: strategic planning risk, procurement risk, manufacturing risk, distribution risk and return risk. Secondly, considering that the BP neural network model is prone to local optima and other problems during optimization, it is improved by adding a momentum term, and the sigmoid (S-type) function in the basic evaluation model is replaced by the Morlet wavelet function to reconstruct the supply chain risk evaluation model. Finally, risk identification and assessment are studied on an actual automobile enterprise case, using MATLAB simulation to compare the improved BP wavelet neural network with fuzzy comprehensive evaluation, the BP neural network, and the BP neural network with momentum. The results show that the improved BP wavelet neural network model has better practicability and reliability.
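    The substitution of a Morlet wavelet for the sigmoid activation can be sketched as below; the specific Morlet form ψ(x) = cos(1.75x)·exp(−x²/2), the layer sizes and the toy inputs are common wavelet-neural-network assumptions, not the paper's exact model.

```python
import numpy as np

def morlet(x):
    """Real-valued Morlet wavelet, a common hidden-layer activation in
    wavelet neural networks: psi(x) = cos(1.75 x) * exp(-x^2 / 2)."""
    return np.cos(1.75 * x) * np.exp(-0.5 * x ** 2)

def wnn_forward(X, W1, b1, W2, b2):
    """One hidden layer whose sigmoid has been swapped for the Morlet wavelet."""
    hidden = morlet(X @ W1 + b1)
    return hidden @ W2 + b2            # linear output giving a risk score

# Toy forward pass: 5 risk-indicator inputs, 8 wavelet neurons, 1 output score.
rng = np.random.default_rng(0)
X = rng.random((4, 5))                 # 4 samples of normalised risk indicators
W1, b1 = rng.normal(size=(5, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
print(wnn_forward(X, W1, b1, W2, b2).ravel())
```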
    Construction of Ontology Library for Machining Process of Mechanical Parts
    WANG Yu-jue, LIANG Yu-hao, WANG Su-qin, ZHU Deng-ming, SHI Min
    Computer Science    2022, 49 (6A): 661-666.   DOI: 10.11896/jsjkx.210800013
    In view of the common problem that enterprises cannot make deep use of the typical-part machining data resources scattered across various application systems, this paper proposes to achieve comprehensive integration of typical mechanical part machining process data by establishing a machining process ontology library for typical parts. The machining process knowledge of typical parts is categorized into two types, theoretical knowledge and process knowledge, and a complete machining process ontology library for typical parts is established using the ontology modeling language OWL. The research results of this paper provide strong support for building intelligent manufacturing that covers all elements of typical part machining.
    Model Based on Spirally Evolution Glowworm Swarm Optimization and Back Propagation Neural Network and Its Application in PPP Financing Risk Prediction
    ZHU Xu-hui, SHEN Guo-jiao, XIA Ping-fan, NI Zhi-wei
    Computer Science    2022, 49 (6A): 667-674.   DOI: 10.11896/jsjkx.210800088
    Public-private partnership (PPP) projects can improve infrastructure, ensure people's livelihood and promote economic development, but there may be huge capital losses and serious waste of resources among the parties involved because of the characteristics of these projects: difficulty in withdrawing funds, long construction cycles and large numbers of participants. It is therefore important to predict the risks of PPP projects scientifically and accurately. A risk prediction model based on spirally evolving glowworm swarm optimization (SEGSO) and the back-propagation neural network (BPNN) is proposed in this paper and applied to risk prediction in PPP infrastructure projects. Firstly, several strategies, such as good point sets, communication behavior, elite groups and spiral evolution, are introduced into the basic GSO to form SEGSO. Secondly, SEGSO is used to find better initial weights and thresholds for the BPNN, yielding a SEGSO-BPNN prediction model. Finally, the search performance of the SEGSO algorithm is verified on five test functions, and the significance and validity of the SEGSO-BPNN model are verified on seven UCI standard datasets. The model is applied to the risk prediction of Chinese PPP projects and achieves good results, providing a novel technique for PPP financing risk prediction.
    Study on Cloud Classification Method of Satellite Cloud Images Based on CNN-LSTM
    WANG Shan, XU Chu-yi, SHI Chun-xiang, ZHANG Ying
    Computer Science    2022, 49 (6A): 675-679.   DOI: 10.11896/jsjkx.210300177
    The classification of satellite cloud images has always been one of the research hotspots in the field of meteorology, but several problems remain: the same cloud type may have different spectral features, different cloud types may have the same spectral features, and existing methods mainly use spectral features while ignoring spatial features. To solve these problems, this paper proposes a cloud classification method for satellite cloud images based on CNN-LSTM, which makes full use of spectral and spatial information to improve the accuracy of cloud classification. Firstly, the spectral features are screened based on the physical characteristics of clouds, and the square neighborhood of each point is used as the spatial information. Then, a convolutional neural network (CNN) is used to automatically extract spatial features, which solves the problem that classification with spectral features alone is difficult. Finally, on this basis, the spatial local difference features extracted by a long short-term memory (LSTM) network are combined to provide multi-view features for the classification of satellite cloud images and to resolve the misjudgment caused by the similarity of cloud spatial structures. Experimental results show that the overall classification accuracy of the proposed method on satellite cloud images reaches 93.4%, which is 2.7% higher than that of the single-CNN method.
    Complex Network Analysis on Curriculum System of Data Science and Big Data Technology
    YANG Bo, LI Yuan-biao
    Computer Science    2022, 49 (6A): 680-685.   DOI: 10.11896/jsjkx.210800123
    In recent years, more and more universities have begun to offer majors in data science and big data technology. As an emerging and popular broad multi-disciplinary major, its curriculum system is still being further improved. In this paper, we use complex network methods to analyze and visualize a course dataset of 106 universities collected from the Internet. A course co-occurrence network and a college relationship network are constructed respectively. For the highly coupled course co-occurrence network, a shell decomposition algorithm based on edge weights is proposed, and its results are compared with word frequency statistics and the frequent itemsets obtained by the Apriori algorithm. Considering that this major can award a degree in either science or engineering, the dataset is divided into science and engineering sections for analysis and visualization. This research can provide a reference for universities that are establishing or have established a data science and big data technology major, and also provides an effective algorithm for the analysis of highly coupled networks.
    Pop-up Obstacles Avoidance for UAV Formation Based on Improved Artificial Potential Field
    CHEN Bo-chen, TANG Wen-bing, HUANG Hong-yun, DING Zuo-hua
    Computer Science    2022, 49 (6A): 686-693.   DOI: 10.11896/jsjkx.210500194
    With the maturity of UAV-related technologies, the development prospects and potential application scenarios of UAVs are being recognized by more and more people. UAV formations can overcome the restrictions of a single UAV in terms of load, endurance, mission type and other aspects, so formation flying is an important development direction for the future. During flight, a UAV formation may be constrained by unknown obstacles such as newly built high-rise buildings and temporary no-fly zones. Current obstacle avoidance methods mainly focus on generating, before departure and with the obstacle information known, a reference flight path in a two-dimensional scene that does not intersect the obstacles. However, such methods are not flexible enough to avoid unknown obstacles that appear while advancing through a real three-dimensional environment. A formation collision avoidance system (FCAS) for collision risk perception is proposed. By analyzing the movement trends of the UAVs, those UAVs within the formation that are most likely to collide are screened out, and an improved artificial potential field is used to avoid the unknown obstacles. This can effectively avoid collisions between UAVs within the formation during the obstacle avoidance process, effectively reduce the number of communication links within the formation, and minimize the impact of obstacles on the formation. After obstacle avoidance is completed, all UAVs resume their original formation and return to the reference paths. Simulation results show that the system enables the UAV formation to deal with static unknown obstacles during flight along the reference path and finally reach the destination without collision, which verifies the feasibility of the strategy.
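    A basic artificial potential field step (the unimproved form that such methods build on) can be sketched as follows; the gains, influence radius and toy scenario are assumptions.

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=50.0, rho0=5.0, step=0.1):
    """One step of a basic artificial potential field planner in 3-D.

    The attractive force pulls toward the goal; each obstacle closer than rho0
    pushes the UAV away with a force that grows as the distance shrinks.
    """
    force = k_att * (goal - pos)                     # attractive component
    for obs in obstacles:
        diff = pos - obs
        rho = np.linalg.norm(diff)
        if 1e-6 < rho < rho0:                        # inside the obstacle's influence
            force += k_rep * (1.0 / rho - 1.0 / rho0) / rho ** 3 * diff
    return pos + step * force / (np.linalg.norm(force) + 1e-9)

# Toy usage: fly from the origin toward (20, 0, 10) past one pop-up obstacle.
pos, goal = np.zeros(3), np.array([20.0, 0.0, 10.0])
obstacles = [np.array([10.0, 0.5, 5.0])]
for _ in range(400):
    pos = apf_step(pos, goal, obstacles)
print(np.round(pos, 2))
```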
    Evolutionary Game Analysis of WeChat Health Information Quality Optimization Based on Prospect Theory
    WANG Xian-fang, ZHANG Liang, ZHANG Ning
    Computer Science    2022, 49 (6A): 694-704.   DOI: 10.11896/jsjkx.210900186
    The quality of health information on WeChat varies from good to bad. Studying the evolutionary process of the behavioral decision-making of the platform, official accounts and users, and exploring the key factors that prevent official accounts from publishing false information, provides a useful reference for optimizing the health information ecosystem. By constructing a three-party game model, the system equilibrium points and their constraints are solved, and the influencing factors and the optimal stable state of the system evolution are simulated and analyzed. Prospect theory is introduced to explore the influence of the subjects' risk attitudes and loss aversion on the optimal outcome. Simulation experiments show that the platform and users are more sensitive to risks than to losses. Compared with the high cost of supervision, the platform pays more attention to improving its reputation; compared with being misled by false information, users focus on satisfying their subjective needs. Official accounts are more sensitive to losses, and their sensitivity to platform penalties is greater than their sensitivity to losing fans. When an official account's initial willingness to release real information is low, although external factors such as platform punishment and media exposure can curb the spread of fake health information, the optimal state of the system is difficult to reach quickly.
    Dynamic Customization Model of Business Processes Supporting Multi-tenant
    ZHANG Ji-lin, SHAO Yu-cao, REN Yong-jian, YUAN Jun-feng, WAN Jian, ZHOU Li
    Computer Science    2022, 49 (6A): 705-713.   DOI: 10.11896/jsjkx.210200104
    Process customization is an essential means of realizing personalized business process services. It provides differentiated business services by adjusting the internal structure of the business process model while using a single software system. However, with the increasing scale and complexity of business processes, existing process customization technology needs to reconstruct the process model when dealing with complex and changeable business processes, which affects the development efficiency of process customization. Providing an efficient process customization method has therefore always been a research hotspot in the field of business processes. From the perspective of multi-tenant applications, this paper proposes a dynamic customization model of business processes supporting multi-tenancy. Firstly, business sub-processes are constructed by assembling variable task nodes, and tenant identity identification and process instance derivation are realized by a tenant sensor. Secondly, a dynamic process customization method is provided for the varying requirements of tenants. Finally, combined with a case analysis, the validity of the model is verified.
    Teleoperation Method for Hexapod Robot Based on Acceleration Fuzzy Control
    YIN Hong-jun, DENG Nan, CHENG Ya-di
    Computer Science    2022, 49 (6A): 714-722.   DOI: 10.11896/jsjkx.210300076
    To solve the problem that a conventional speed-level controller can hardly guarantee the speed-tracking capability of a hexapod robot, this paper proposes a bilateral teleoperation method based on acceleration fuzzy control. Firstly, a semi-autonomous mapping scheme between the master's position and the slave's velocity is established, and the relationship between the acceleration of the body and the drive values of the leg joints is determined. Secondly, a fuzzy PD control algorithm is used to design the control law of the teleoperation system. On this basis, to improve the operating performance of the system, velocity or force information is fed back to the operator in the form of haptic force after the stability range of the control law parameters is analyzed with the Llewellyn criterion. Finally, a semi-physical simulation platform is developed for experiments. Experimental results show that the proposed method is feasible and that the speed tracking and force transparency of the teleoperation system are obviously improved.
    Application Research of PBFT Optimization Algorithm for Food Traceability Scenarios
    LI Bo, XIANG Hai-yun, ZHANG Yu-xiang, LIAO Hao-de
    Computer Science    2022, 49 (6A): 723-728.   DOI: 10.11896/jsjkx.210800018
    The characteristics of blockchain, such as immutability and traceability, can well support food traceability systems, but applying blockchain technology to food traceability suffers from problems such as long delays, many nodes and high system overhead. To address these problems, an optimized PBFT algorithm for the food traceability scenario, trace-PBFT (t-PBFT), is proposed on the basis of the practical Byzantine fault tolerance (PBFT) algorithm. Firstly, the nodes in the supply chain are divided into three classes, and node status is dynamically updated according to the actual communication volume of each node during consensus, which is used to evaluate node reliability as the basis for electing the master node. Secondly, the consistency protocol of the original algorithm is optimized to reduce the number of node communications by exploiting the characteristics of the food supply chain. Experimental results show that the t-PBFT algorithm outperforms the PBFT algorithm in terms of communication overhead, request delay and throughput. Finally, based on the t-PBFT algorithm and combined with a consortium chain, an architectural model that meets the demands of food traceability is proposed. It can record the data of each link in the food supply chain and ensure data traceability and the safety of the food circulation process.
    Diagnosis Strategy Optimization Method Based on Improved Quasi Depth Algorithm
    ZHANG Zhi-long, SHI Xian-jun, QIN Yu-feng
    Computer Science    2022, 49 (6A): 729-732.   DOI: 10.11896/jsjkx.210700076
    Among existing diagnostic strategy optimization methods, there is little research on unreliable tests in multi-valued systems, and it is difficult to fully consider the dual effects of multi-valued tests and unreliable tests on the optimization of diagnostic strategies. A quasi-depth algorithm based on tabu search is proposed. Firstly, the uncertain correlation matrix between faults and multi-valued tests and the multi-valued unreliable diagnosis strategy are described. Then, the steps of the improved quasi-depth algorithm with tabu search are described for this problem. Finally, an example is given to verify the proposed algorithm. Experimental results show that the algorithm reduces algorithmic complexity while ensuring the fault detection and isolation effect, making the optimization process of the diagnosis strategy more accurate and efficient.
    Hybrid Housing Resource Recommendation Based on Combined User and Location Characteristics
    PIAO Yong, ZHU Si-yuan, LI Yang
    Computer Science    2022, 49 (6A): 733-737.   DOI: 10.11896/jsjkx.210800062
    With the development of the times, users' thinking about purchasing houses has also changed, and they pay more attention to location resources in their decision-making process. This paper proposes a hybrid recommendation method based on user and location resource characteristics to provide more accurate purchasing suggestions, in which a content-based recommendation algorithm and a user-based collaborative filtering algorithm are combined in a cascade. By integrating more than 170 000 housing transactions with more than 1 200 location resource records, experimental results show that the proposed hybrid model has a better recommendation effect than the traditional ones.
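    A cascade hybrid of the kind described above, content-based shortlisting followed by user-based collaborative filtering re-ranking, might be sketched as follows; the similarity measures, shortlist size and random stand-in data are assumptions.

```python
import numpy as np

def cascade_recommend(user_vec, user_item, item_features, top_k=5, shortlist=20):
    """Cascade hybrid: content-based filtering shortlists candidates, then
    user-based collaborative filtering re-ranks them.

    user_vec      : (F,) preference vector over listing/location features
    user_item     : (U, I) rating matrix, row 0 is the target user
    item_features : (I, F) feature matrix of the listings
    """
    norm = np.linalg.norm
    # Stage 1: content-based score = cosine similarity of preferences to features.
    content = item_features @ user_vec / (norm(item_features, axis=1) * norm(user_vec) + 1e-9)
    candidates = np.argsort(content)[::-1][:shortlist]

    # Stage 2: user-based CF score computed over the shortlist only.
    target = user_item[0]
    sims = np.array([np.dot(target, other) / (norm(target) * norm(other) + 1e-9)
                     for other in user_item[1:]])
    cf = sims @ user_item[1:, candidates] / (sims.sum() + 1e-9)
    return candidates[np.argsort(cf)[::-1][:top_k]]

# Toy usage with random data standing in for listings and user ratings.
rng = np.random.default_rng(0)
items, users, feats = 100, 30, 6
print(cascade_recommend(rng.random(feats), rng.random((users, items)), rng.random((items, feats))))
```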
    Dynamic Model and Analysis of Spreading of Botnet Viruses over Internet of Things
    ZHANG Xi-ran, LIU Wan-ping, LONG Hua
    Computer Science    2022, 49 (6A): 738-743.   DOI: 10.11896/jsjkx.210300212
    With the innovation and progress of information technology, Internet of Things (IoT) technology has grown explosively in various fields, but the devices on these networks face the threat of hackers. The rapid growth of IoT botnets in recent years has led to many security incidents, including large-scale DDoS attacks, which cause severe damage to IoT users. It is therefore significant to study the spread of the group of botnets represented by the Mirai virus across IoT networks. In order to describe the formation process of an IoT botnet precisely, this paper classifies IoT device nodes into transmission devices and function devices and then proposes SDIV-FB, a novel IoT virus dynamics model, through analysis of the Mirai virus propagation mechanism. The spreading threshold and the equilibria of the model system are calculated, and the stability of the equilibria is proved and analyzed. Moreover, the rationality of the derived theory is demonstrated through numerical simulation experiments, and the effectiveness of the model parameters is verified as well. Finally, decreasing the infection rate and increasing the recovery rate are proposed as two effective strategies for controlling IoT botnets.
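    Since the paper's SDIV-FB equations are not reproduced in this abstract, the sketch below integrates a generic SIR-style compartmental model merely to illustrate how the spreading threshold and equilibria of such systems are explored numerically; the compartments and rates are assumptions, not the SDIV-FB model itself.

```python
import numpy as np
from scipy.integrate import odeint

def sir_iot(y, t, beta, gamma):
    """Generic susceptible-infected-recovered dynamics for a population of
    IoT devices (a stand-in for the paper's SDIV-FB compartments)."""
    s, i, r = y
    ds = -beta * s * i
    di = beta * s * i - gamma * i
    dr = gamma * i
    return [ds, di, dr]

t = np.linspace(0, 60, 300)
beta, gamma = 0.4, 0.1                  # infection and recovery rates (assumed)
y0 = [0.99, 0.01, 0.0]                  # fractions of devices in each state
sol = odeint(sir_iot, y0, t, args=(beta, gamma))
print("basic reproduction number R0 =", beta / gamma)
print("peak infected fraction = %.3f" % sol[:, 1].max())
```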
    Study on Information Sharing and Channel Strategy of Platform in Consideration of Information Leakage and Information Investing Cost
    XU Ming-yue
    Computer Science    2022, 49 (6A): 744-752.   DOI: 10.11896/jsjkx.211000055
    A model is constructed in which a manufacturer sells products through an e-commerce platform and a traditional offline retailer; the e-commerce platform selects its online selling format and establishes its information sharing strategy. This paper compares and analyzes four situations in which the online selling format is either reselling or agency selling, with or without information sharing. Based on a Bayesian game and the information leakage effect, it studies the e-commerce platform's choice between agency selling and reselling under the interaction of information investment cost and sharing strategy in a dual-channel supply chain. The research shows that, firstly, unless demand uncertainty is low and the platform revenue sharing ratio is high, the e-commerce platform selects the reselling format. Secondly, the e-commerce platform's incentive to share information strongly depends on its online selling format and the degree of demand uncertainty: under reselling, the platform does not share information voluntarily, while under agency selling, it shares information with the manufacturer voluntarily when demand uncertainty is high. Finally, under agency selling, information sharing benefits all members of the supply chain, whereas under reselling, information sharing is not always beneficial for the entire dual-channel supply chain; only when demand uncertainty is high and cross-channel substitutability is small can information sharing improve the performance of the entire supply chain.
    Development of Electric Vehicle Charging Station Distribution Model Based on Fuzzy Bi-objective Programming
    QUE Hua-kun, FENG Xiao-feng, GUO Wen-chong, LI Jian, ZENG Wei-liang, FAN Jing-min
    Computer Science    2022, 49 (6A): 753-758.   DOI: 10.11896/jsjkx.210700225
    With the popularization of electric vehicles, the number of public charging stations in cities cannot meet the growing demand for charging. Charging station construction usually requires multi-cycle, multi-level strategic planning and is affected by policies, the economic environment and other factors, so there is great uncertainty in the charging demand and in the construction and operation costs of each construction cycle. Considering the limited service capacity of charging stations and service radius constraints, this paper develops a bi-objective fuzzy programming model that maximizes the charging satisfaction of electric vehicle users over the full cycle and minimizes the total cost of the charging stations. Furthermore, a modified genetic algorithm based on adaptive and reverse search mechanisms is proposed to solve the problem. The results of the improved genetic algorithm and the standard genetic algorithm are compared in a case study, and the effect of different confidence levels and charging station service radii on the objective function is also verified.
    Pedestrian Navigation Method Based on Virtual Inertial Measurement Unit Assisted by Gait Classification
    YANG Han, WAN You, CAI Jie-xuan, FANG Ming-yu, WU Zhuo-chao, JIN Yang, QIAN Wei-xing
    Computer Science    2022, 49 (6A): 759-763.   DOI: 10.11896/jsjkx.211200148
    Because the performance of a pedestrian navigation system degrades when the foot-mounted IMU goes out of range during vigorous activities or collisions, a novel pedestrian navigation method is proposed based on the construction of a virtual inertial measurement unit (VIMU) assisted by gait classification. An attention-based convolutional neural network (CNN) is introduced to classify the common gaits of a pedestrian, and inertial data from the pedestrian's thigh and foot are collected synchronously by real IMUs as training and testing samples. For the different gaits, corresponding ResNet-gated recurrent unit (ResNet-GRU) hybrid neural network models are built, and according to these models a virtual foot-mounted IMU is constructed for positioning when the real foot-mounted IMU goes out of range. Experiments show that the proposed method improves the performance of a pedestrian navigation system based on zero-velocity updates when the pedestrian's foot motion is violent, making the navigation system more adaptable to complex and unknown terrains. The positioning error under comprehensive gaits is about 1.43% of the total walking distance, which satisfies the accuracy requirements of military and civilian applications.
    Model Medial Axis Generation Method Based on Normal Iteration
    ZONG Di-di, XIE Yi-wu
    Computer Science    2022, 49 (6A): 764-770.   DOI: 10.11896/jsjkx.210400050
    As a dimensionality-reduced representation of a model, the medial axis has been widely used in many engineering fields because of its good properties. At present, methods for generating the medial axis of a model are mainly based on approximating the medial axis, and either the quality of the resulting medial axis is not high or the computation time is costly. Consequently, a method for generating the model medial axis based on normal iteration is proposed. The normal iteration method first discretizes the model into a triangular mesh and then performs GPU-parallel tracking calculations on the sample points and triangular faces based on the definition of the medial axis. After several normal iterations, the medial axis points corresponding to all sample points are obtained. Finally, the corresponding medial axis points are connected according to the topological connectivity of the sample points to obtain the medial axis of the model. Experimental results show that the method can generate the model medial axis relatively quickly and accurately for different models, which verifies that the method improves the time efficiency and accuracy of medial axis generation.