Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
Current Issue
Volume 44 Issue 4, 13 November 2018
Survey on Blockchain Technology and Its Application Prospect
HE Pu, YU Ge, ZHANG Yan-feng and BAO Yu-bin
Computer Science. 2017, 44 (4): 1-7.  doi:10.11896/j.issn.1002-137X.2017.04.001
Blockchain is a decentralized, trustless distributed database technology. The database is collectively maintained by all nodes participating in the system, and features decentralization, tamper-resistance, transparency and security. Blockchain technology emerged with the Bitcoin application: blockchain is the underlying technical support and foundation of the Bitcoin system. Since blockchain is a promising technology, its key technologies, components, principles, limitations, applications and prospects were introduced, and related research issues were discussed.
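The tamper-resistance surveyed above comes from chaining each block to the hash of its predecessor, so altering any block invalidates every later link. A minimal illustrative sketch (the block layout, a plain JSON dict, is a deliberate simplification of real block headers, which hash a Merkle root, timestamp and nonce):

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's canonical JSON form.
    data = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(data).hexdigest()

def append_block(chain, transactions):
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev_hash": prev, "transactions": transactions}
    block = dict(body, hash=block_hash(body))
    chain.append(block)

def verify(chain):
    # Tamper-resistance: a modified block breaks its own hash
    # and every link that follows it.
    prev = "0" * 64
    for block in chain:
        body = {"prev_hash": block["prev_hash"],
                "transactions": block["transactions"]}
        if block["prev_hash"] != prev or block_hash(body) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = []
append_block(chain, ["alice->bob:5"])
append_block(chain, ["bob->carol:2"])
assert verify(chain)
chain[0]["transactions"] = ["alice->bob:500"]  # tamper with history
assert not verify(chain)
```

In a real decentralized deployment the chain is replicated on every node and a consensus mechanism (e.g. proof of work) decides which extension is accepted; none of that is modeled here.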
POP:Micro-service Based Online Programming System
HU Xing, WANG Ze-rui, LI Shuo, YANG Nan, ZHANG Zhi-fan, WANG Qiao and WANG Qian-xiang
Computer Science. 2017, 44 (4): 8-11.  doi:10.11896/j.issn.1002-137X.2017.04.002
With the development of cloud computing, more and more developers prefer programming in the cloud. Combined with a PaaS platform, an online programming system greatly simplifies application development and provides great convenience for developers. The emergence of Docker has promoted the rapid development of PaaS, and Docker's features make it well suited to installing and configuring a complete online IDE. This paper introduced POP (Public Online Programming), a micro-service based system built on Docker. The monolithic application was broken into several services, each running in an independent Docker container. In the micro-service architecture each component evolves on its own, which reduces evolution risks. Through Docker management and scheduling, POP can allocate containers for different types of applications to deploy, debug and run them as quickly as possible while minimizing resource usage.
Efficient Clone Detection Technique for Functionally Similar Programs
DONG Jia-xing and XU Chang
Computer Science. 2017, 44 (4): 12-15.  doi:10.11896/j.issn.1002-137X.2017.04.003
Clone detection techniques are widely used for, but not limited to, detecting maliciously modified software, code recognition and reconstruction, etc. However, it is difficult to precisely detect code clones in functionally similar programs, since such programs have their own unique features, and existing clone detection techniques suffer high false positive rates when applied to them. In this paper, we proposed a novel clone detection approach. After analyzing the features of functionally similar programs, we improved the clone detection technique's performance. The experimental results show that our technique can effectively control the false positive rate when conducting clone detection.
CCodeExtractor:Automatic Approach of Function Extraction for C Programs
ZHANG Qi-liang, ZHANG Yu and ZHOU Kun
Computer Science. 2017, 44 (4): 16-20.  doi:10.11896/j.issn.1002-137X.2017.04.004
As program complexity increases, code refactoring plays an important role in improving the quality and performance of software, and is also essential for improving the maintainability and extensibility of programs. The current code extraction approach for C code in Eclipse can only deal with simple code and cannot refactor the code automatically. In this paper, we proposed a code extraction approach for C code, CCodeExtractor. It extracts C code fragments that meet specified conditions into new functions and automatically replaces the fragments with calls to these new functions; the refactored code preserves the semantics of the original code. To verify the validity of CCodeExtractor, we implemented it in LLVM to extract certain for statements in C programs into new functions, since loop analysis and optimization have been widely explored in recent years. Our experiments evaluated CCodeExtractor on six real-world applications of different sizes and compared the outputs of the original and transformed programs. Experimental results show that CCodeExtractor provides correct source-level transformation for programs of different sizes.
Study on Bug-fixed Traceability of Mozilla Project
ZHANG Yu-xia
Computer Science. 2017, 44 (4): 21-23.  doi:10.11896/j.issn.1002-137X.2017.04.005
Software traceability provides important support for many software engineering activities, such as change impact analysis, regression analysis, version control and so on. In open source software projects, the bug-fixed relationship between bugs and commits is an important form of traceability. In this paper, we studied the relationship between bugs and version products in open source software projects, choosing the large open source project Mozilla as the research subject. After gaining an in-depth understanding of the overall distribution of Mozilla's bug-related data, we used the Fellegi-Sunter model to mine the associations between bug data and commit data, and then built and analyzed the bug-fixed traceability of the Firefox browser. The result of this study provides a reference for traceability research on open source software.
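The Fellegi-Sunter record-linkage model used above scores a candidate bug-commit pair by summing, over comparison fields, log2(m/u) when a field agrees and log2((1-m)/(1-u)) when it disagrees, where m and u are the agreement probabilities among true matches and non-matches. A small sketch; the field names and probabilities below are illustrative, not the paper's estimates:

```python
import math

FIELDS = {
    # field: (m, u) = P(agree | match), P(agree | non-match)
    "bug_id_in_message": (0.90, 0.01),
    "same_author":       (0.70, 0.10),
    "close_in_time":     (0.80, 0.20),
}

def match_weight(agreements):
    # Sum per-field log-likelihood ratios; positive weights
    # favor "this commit fixes this bug".
    w = 0.0
    for field, (m, u) in FIELDS.items():
        if agreements[field]:
            w += math.log2(m / u)
        else:
            w += math.log2((1 - m) / (1 - u))
    return w

# A commit that cites the bug id, by the same author, near the fix
# time scores far above one with no agreeing evidence.
strong = match_weight({"bug_id_in_message": True,
                       "same_author": True, "close_in_time": True})
weak = match_weight({"bug_id_in_message": False,
                     "same_author": False, "close_in_time": False})
assert strong > 0 > weak
```

In practice m and u are estimated from the data (e.g. by EM) and two thresholds on the weight separate matches, possible matches and non-matches.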
Meta-modeling Approach of Message Interaction in Service
ZHOU Wen-bo, LIU Hong-jia, LIU Lei, ZHANG Peng and LV Shuai
Computer Science. 2017, 44 (4): 24-29.  doi:10.11896/j.issn.1002-137X.2017.04.006
To improve the normalization of message interfaces and the correctness of interactive behaviors in services, a meta-modeling approach for service message interaction was proposed. Firstly, a workflow model is used to model the service. Then, by analyzing the message operation pattern, a method for interface representation and compatibility checking is given. Finally, inference rules and recursive functions are used to describe the semantics of message transmission, and the change of environments is discussed at the same time. Case analysis shows that the proposed method can normalize service interface patterns, model message interaction effectively, and ensure the reliability of service modeling.
Traceability Model Oriented to Software Safety Requirement Analysis Process
ZHENG Pei-zhen, YUAN Chun-chun, LIU Chao, WU Ji, YANG Hai-yan and HU Ning
Computer Science. 2017, 44 (4): 30-34.  doi:10.11896/j.issn.1002-137X.2017.04.007
Traceability is the mechanism, or the ability, to relate artefacts and their attached factors. Safety-critical system development, beyond general system development, involves additional independent safety analysis which generates and verifies system safety requirements. At present there is little traceability research oriented to the safety analysis process, which is extremely challenging. Safety-related standards, such as ARP-4761 and DO-178C, provide guidelines for conducting safety analysis; however, because they involve many concepts and methods, some information may be neglected. Besides, software safety requirement analysis should include both system-to-software and software-to-system safety analysis. Establishing bi-directional traceability of safety-related information oriented to the software safety requirement analysis process helps simplify verification and impact analysis. In this paper, we established a traceability model oriented to the software safety requirement analysis process.
Summary Extraction Method for Code Topic Based on LDA
LI Wen-peng, ZHAO Jun-feng and XIE Bing
Computer Science. 2017, 44 (4): 35-38.  doi:10.11896/j.issn.1002-137X.2017.04.008
Understanding the function of software code is important in software reuse. Topic modeling techniques can mine latent topics from software code, which represent the software's function, but these topics lack an unambiguous explanation, making them hard for developers to understand. Latent Dirichlet allocation (LDA) is one of the most popular topic modeling techniques; existing studies have used LDA to mine software code with good results, but problems remain in topic description. In this paper, in addition to using topic modeling to generate topics from source code, explanatory text descriptions were generated for code topics from software resources such as documents, question-and-answer pairs, mailing lists and so on, helping users understand the function of software code. Experiments show that the proposed approach is effective.
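One simple way to attach an explanatory description to a code topic is to pick, from the candidate texts (documents, Q&A pairs, mailing lists), the sentence most similar to the topic's top words. The sketch below uses plain bag-of-words cosine similarity as a stand-in for the paper's actual matching step; the topic words and candidate texts are made up:

```python
import math
from collections import Counter

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    num = sum(a[t] * b[t] for t in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def describe_topic(topic_words, candidate_texts):
    # Label a code topic (its top LDA words) with the most
    # similar candidate sentence.
    topic = Counter(topic_words)
    scored = [(cosine(topic, Counter(t.lower().split())), t)
              for t in candidate_texts]
    return max(scored)[1]

# Top words of a hypothetical LDA topic mined from source code:
topic = ["socket", "connect", "send", "timeout"]
docs = [
    "How to connect a socket and send data with a timeout",
    "Parsing command line arguments into options",
]
assert describe_topic(topic, docs) == docs[0]
```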
Research on Linux Device Driver Reuse
WANG Huan, MAO Jun-jie, WANG Dan and CHEN Yu
Computer Science. 2017, 44 (4): 39-42.  doi:10.11896/j.issn.1002-137X.2017.04.009
Device drivers are an important factor affecting the applicability of an operating system. Considering the large cost of developing a device driver from scratch, reusing existing device drivers has become the preferred method for improving the applicability of an operating system. Reusing a device driver is essentially the process of setting up the driver's target environment; it does not require implementing all kernel services. Code dependency analysis can identify the dependencies between a device driver and kernel services, so the device driver environment can be built automatically by code dependency analysis. The feasibility of this method was demonstrated by reusing the e1000 network adapter driver.
Time Series Based Killer Task Online Recognition Approach
TANG Hong-yan, LI Ying, JIA Tong and YUAN Xiao-yong
Computer Science. 2017, 44 (4): 43-46.  doi:10.11896/j.issn.1002-137X.2017.04.010
By analyzing failure frequency and failure patterns in the Google cluster dataset, this paper found what we call killer tasks, tasks that suffer frequent and continuous failures. Killer tasks are a big concern for cloud systems as they cause unnecessary resource waste and a significant increase in scheduling overhead. In this paper, an online recognition approach was proposed that uses resource usage time series to recognize killer tasks precisely at the very early stage of their occurrence, so that proactive actions can be taken to avoid rescheduling and resource waste. The experimental results show that the proposed approach achieves 98.5% precision in recognizing killer tasks at 3% of the failure duration, saving 96.75% of resources for the cloud system on average.
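A task that repeatedly fails and is rescheduled leaves a characteristic signature in its resource-usage time series: usage collapsing to near zero and then restarting, over and over. The toy detector below counts such collapses in a CPU-usage series; the signal choice, 0.05 cutoff and restart threshold are illustrative assumptions, not the paper's actual features:

```python
def is_killer_task(cpu_usage, restarts_threshold=3):
    # Count transitions where usage collapses from active (>5%)
    # to near zero -- a proxy for a failure-and-restart event.
    restarts = 0
    for prev, cur in zip(cpu_usage, cpu_usage[1:]):
        if prev > 0.05 and cur <= 0.05:
            restarts += 1
    return restarts >= restarts_threshold

healthy = [0.4, 0.5, 0.45, 0.5, 0.48, 0.5]
killer = [0.4, 0.0, 0.4, 0.0, 0.4, 0.0, 0.4, 0.0]
assert not is_killer_task(healthy)
assert is_killer_task(killer)
```

The point of an online detector is that such a check runs on the series seen so far, so a killer task can be flagged after a few collapses rather than after its full failure history.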
Model-based Runtime Configuration Framework for Cloud-based Applications
LIANG Chao-chao, CHEN Wei, WEI Jun and XU Shu-ren
Computer Science. 2017, 44 (4): 47-55.  doi:10.11896/j.issn.1002-137X.2017.04.011
Cloud-based applications are one of the most important paradigms of cloud computing at the application layer. These applications usually consist of a set of distributed components. Due to the dependencies between the components, there are correlations between configuration parameters, which makes it difficult to configure the components correctly and to ensure parameter consistency at runtime, leading to system failures or service unavailability. To address this issue, we proposed a model-based self-configuration method for cloud-based applications, which realizes automatic choreography of component configuration, ensures configuration consistency and improves the reliability of runtime scaling. Firstly, the method proposes a model that specifies the application topology in a declarative way; the model describes application components as services and separates components from their relations. Then the method designs a self-configuration protocol to support component configuration at runtime; based on service registration and service discovery, this protocol decouples component dependencies and ensures consistency when configuration values change. We also implemented a prototype based on this method and evaluated its effectiveness with a typical application, BookStore-TPCW.
Feature Location Method Based on Sub-graph Searching
FU Kun, WU Yi-jian, PENG Xin and ZHAO Wen-yun
Computer Science. 2017, 44 (4): 56-59.  doi:10.11896/j.issn.1002-137X.2017.04.012
The process of identifying the program elements relevant to a given feature is called feature location. However, existing feature location methods, mainly based on feature descriptions and source code structure, only produce source code elements as the result, which usually lack structural information and make it difficult for developers to understand the code structure quickly. To solve this problem, a feature location method based on sub-graph search was proposed. The method finds code elements related to the feature and displays the results in a call graph. The method was implemented as a tool and tested for its performance: the average precision is 40.41% and the average recall is 50.28%.
Cloud-based Notes Plugin for Video on Google Chrome
XIN Chao, QIAO Zi-jian and SUN Yan-chun
Computer Science. 2017, 44 (4): 60-65.  doi:10.11896/j.issn.1002-137X.2017.04.013
With the development of the Internet and the promotion of massive open online courses, online teaching has been widely adopted. In this environment, most students acquire knowledge from the videos in online classes. However, most current online learning platforms have an obvious shortcoming: students cannot conveniently take and share notes or ask for help while learning from videos, so it may be difficult for them to gain a deep understanding of the video lectures. This paper therefore designed and implemented a cloud-based notes plugin for video on Google Chrome, using HTML5, Node.js, MongoDB and other key technologies. Students can take notes on video lectures and share them with others. At the same time, the plugin can analyze the lecture content and, to some degree, find the key points according to students' usage data, supporting both interactive discussion and knowledge understanding. Finally, this paper gave case studies to verify the feasibility and effectiveness of the solution.
Modeling and Verifying Device Automatic Polling Control Logic Using Hierarchical Timed Automata
SUN Cheng, XING Jian-chun, YANG Qi-liang and HAN De-shuai
Computer Science. 2017, 44 (4): 66-71.  doi:10.11896/j.issn.1002-137X.2017.04.014
The device systems in underground construction are often dormant. To ensure they can operate securely and reliably when needed, it is necessary to poll the devices automatically at regular intervals, and in this automatic polling process the device automatic polling control logic is of great importance. In previous work, a formal model of hierarchical finite automata (HFA) was set up and used to model the device automatic polling control logic, but without time constraints and without verifying its accuracy and reliability. This paper proposed a formal model of hierarchical timed automata and used it to model the device automatic polling control logic. We then analyzed the model and carried out formal verification with UPPAAL, checking safety, reachability, liveness and time-constraint properties respectively to ensure timeliness, accuracy and reliability. This modeling and formal verification method makes up for the lack of time constraints in the earlier work and effectively ensures the accuracy and reliability of the device automatic polling control logic. The model passed simulation and formal verification, which shows that the device automatic polling control logic is correct and reliable.
Correctness Verification of Rules for Unmanned Vehicles’ Decision System
LIU Bin-bin, LIU Wan-wei, MAO Xiao-guang and DONG Wei
Computer Science. 2017, 44 (4): 72-74.  doi:10.11896/j.issn.1002-137X.2017.04.015
Unmanned vehicle technology is one of the key areas of current scientific research. However, the development of decision systems faces the problem of insufficient security assurance. Aiming at this problem, this paper presented a verification-driven framework for unmanned vehicles' decision systems based on automatic code generation. We introduced model checking to model the environment of unmanned vehicles, so the framework can find deeply hidden defects during the design process. With this method, the problem of insufficient security can be addressed, and maintenance cost can be reduced by synchronizing software development with security checking. A tool named UNMANNED_RULE_EDIT (URE) was also implemented based on this framework, and it has been tentatively used on an unmanned vehicle to support its work.
Hierarchical Clustering Based Software Architecture Recovery Approach
LI Han, TONG Ning and CHEN Feng
Computer Science. 2017, 44 (4): 75-78.  doi:10.11896/j.issn.1002-137X.2017.04.016
To address the lack of consideration of the characteristics of entities and features, a hierarchical clustering based software architecture recovery approach (HCSAR) was proposed. Targeted entities and features were selected, a weighting scheme was proposed to generate feature vectors, information loss was adopted as the similarity metric, and objective and subjective evaluation measures were respectively chosen. Compared with leading agglomerative software clustering approaches, HCSAR is more cohesive, requires fewer arbitrary decisions, is flexible in adjusting the focus of software clustering, and helps generate more accurate system partitions.
Android Vulnerability Detection and Assessment System Based on OVAL
WAN Yan, ZHAO Xi and WANG Guo-lin
Computer Science. 2017, 44 (4): 79-81.  doi:10.11896/j.issn.1002-137X.2017.04.017
Traditional detection tools have difficulty dealing with increasingly complex security vulnerabilities: they take a long time, occupy a large amount of system resources and need to simulate attacks. This paper presented a C/S-architecture Android vulnerability detection and assessment system based on the Open Vulnerability and Assessment Language (OVAL). The architecture shifts most of the evaluation work to the central controller, reducing the impact on the performance of the Android system. Using OVAL as the vulnerability assessment standard guarantees high evaluation accuracy as well as good openness and scalability.
Research on Mixed Source Software Quality Model and Measurement Method
LIU Qi-lin, DONG Wei, YIN Liang-ze, QI Xuan and YANG Sha-zhou
Computer Science. 2017, 44 (4): 82-84.  doi:10.11896/j.issn.1002-137X.2017.04.018
Because mixed source software (MSS) contains independently developed code, open source code and other kinds of code, it has higher diversity and complexity, and its quality measurement differs greatly from that of traditional software. To measure MSS, it is essential to establish a measurement model and method. Based on MSS quality properties, this paper put forward a mixed source software quality model, and then used the analytic hierarchy process, the power method and the linear method to build a measurement method system. Finally, we performed an experimental measurement on the UbuntuKylin operating system. The experimental results show the effectiveness and feasibility of the model and method.
Scheduling Strategy of Hierarchical Storage about Replication in Cloud Storage
YANG Dong-ju and LI Qing
Computer Science. 2017, 44 (4): 85-89.  doi:10.11896/j.issn.1002-137X.2017.04.019
HDFS adopts a random storage strategy; if a cluster contains some cheap nodes, high-frequency data may be stored on nodes with low processing performance, causing long access times and poor efficiency. To solve these problems, an improved hierarchical storage scheduling strategy for replicas was proposed. To reduce the number of replication scheduling operations, the CPU load, memory load, network load, storage load and network distance of each data node are first used to evaluate node availability, and the optimal node is selected. Access frequency and hardware configuration are then used to carry out replication scheduling. The response rate of the cluster is improved by storing high-frequency data on nodes with high processing performance and high configuration. The experimental results show that the strategy can improve replica access efficiency and local load balancing of data storage in heterogeneous clusters.
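The node-selection step above combines five per-node factors into a single availability score. A minimal sketch, where the weights and the linear scoring form are illustrative assumptions rather than the paper's actual formula:

```python
def node_availability(cpu_load, mem_load, net_load, storage_load, distance,
                      weights=(0.3, 0.2, 0.2, 0.2, 0.1)):
    # All factors normalized to [0, 1]; lower load and shorter
    # network distance mean higher availability.
    factors = (cpu_load, mem_load, net_load, storage_load, distance)
    return 1.0 - sum(w * f for w, f in zip(weights, factors))

def pick_replica_node(nodes):
    # nodes: {name: (cpu, mem, net, storage, distance)}.
    # The replica of high-frequency data goes to the best node.
    return max(nodes, key=lambda n: node_availability(*nodes[n]))

nodes = {
    "fast":  (0.2, 0.3, 0.1, 0.2, 0.1),
    "cheap": (0.8, 0.7, 0.6, 0.9, 0.4),
}
assert pick_replica_node(nodes) == "fast"
```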
Personalized Defect Prediction for Individual Source Files
CHEN Heng, LIU Wen-guang, GAO Dong-jing, PENG Xin and ZHAO Wen-yun
Computer Science. 2017, 44 (4): 90-95.  doi:10.11896/j.issn.1002-137X.2017.04.020
Most defect prediction methods are project-oriented or developer-oriented. These methods either do not distinguish the differences between source files and between developers, or only distinguish the differences between developers. However, both kinds of differences exist in the development process, and both may affect the defect prediction model and its results. Therefore, if either or both kinds of differences are ignored and defect prediction models are built on the development history of the whole project or of a single developer, prediction accuracy may suffer. To solve this problem, this paper proposed a personalized defect prediction method for individual source files: we regarded the historical data of each file modified by each developer as an independent dataset, built a corresponding defect prediction model, and used it to predict the defect possibility of the corresponding file modified by the corresponding developer. Experiments show that the proposed method can improve prediction accuracy when sufficient personal defect data is available.
Traffic Flow Forecast Based on Residual Modification GM(1,1) Model
ZHAO Zhuo-feng and YANG Zong-run
Computer Science. 2017, 44 (4): 96-99.  doi:10.11896/j.issn.1002-137X.2017.04.021
Traffic flow prediction is one of the core problems in urban transport systems. Accurate and efficient traffic prediction can effectively support urban transportation planning and reduce traffic congestion, resource waste and emissions; the accuracy of traffic forecasts is also closely related to residents' travel quality. However, traffic is affected by many uncertain factors and exhibits uncertainty, randomness and greyness, and the data from city intersection traffic monitors also suffers some loss and deviation, so predicting traffic simply, accurately and efficiently is a challenge. Using monitor data, the residual is computed by comparing the GM(1,1) model's prediction with the real value, and the residual is then used to modify the model, yielding a residual-modified GM(1,1) model; multiple iterations are applied to the same data set, and the final results are stable and superior to the classic GM(1,1) model. Experiments show that compared with the classic GM(1,1) model, the predicted values of the improved model are closer to the real values; with complexity comparable to the classical algorithm, prediction accuracy is improved to some extent.
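The classic GM(1,1) model underlying this approach fits the grey differential equation dx1/dt + a*x1 = b on the accumulated series x1, using the mean-generated sequence z1 and least squares. A sketch of the base model only; the residual-modification step (fitting a second model to the residual series and adding its forecast back) is omitted, and the test series below is illustrative:

```python
import math

def gm11_fit(x0):
    # Accumulate x0 into x1, build the mean-generated sequence z,
    # and solve the 2x2 normal equations of y = -a*z + b for (a, b).
    n = len(x0)
    x1, s = [], 0.0
    for v in x0:
        s += v
        x1.append(s)
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    y = x0[1:]
    m = len(z)
    szz = sum(v * v for v in z)
    sz = sum(z)
    sy = sum(y)
    szy = sum(zi * yi for zi, yi in zip(z, y))
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    return a, b

def gm11_predict(x0, steps):
    # x1_hat(k) = (x0[0] - b/a) * exp(-a*k) + b/a, differenced
    # back to the original scale to forecast x0.
    a, b = gm11_fit(x0)
    def x1_hat(k):
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    return [x1_hat(k) - x1_hat(k - 1)
            for k in range(len(x0), len(x0) + steps)]

# On a near-geometric series (ratio 0.9) the model tracks the
# continuation closely.
pred = gm11_predict([10, 9, 8.1, 7.29], 1)[0]
assert abs(pred - 6.561) < 0.1
```

The residual-modified variant of the paper would compute e(k) = x0(k) - x0_hat(k) on the fitted range and correct the forecast with a GM(1,1) model of the residual series.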
Goal Oriented Approach for Analyzing Event Model of Cyber-physical Systems
LIU Chun, HUANG Ran-ran and HAN Dao-jun
Computer Science. 2017, 44 (4): 100-103.  doi:10.11896/j.issn.1002-137X.2017.04.022
Event models of software systems can model system behaviors effectively. However, since the system structure must be known and the spatial-temporal attributes of events must be considered due to the hybrid and distributed nature of cyber-physical systems (CPS), new challenges arise when building a CPS event model. From the angle of requirements analysis, this paper proposed a goal-oriented approach to deal with these challenges. The approach defines users' goals on a CPS as causal relationships between events; that is, a goal expresses that the occurrence of one event causes the occurrence of another. It then analyzes and develops the event model from users' requirements on the CPS by means of "And/Or" decomposition. An adaptive cruise control system is used to illustrate the proposed approach.
Usage Habit-oriented Self-adaptive Method for Android Applications
SHEN Li-wei, NING Ke-cheng and ZHAO Wen-yun
Computer Science. 2017, 44 (4): 104-108.  doi:10.11896/j.issn.1002-137X.2017.04.023
Endowing Android applications with the capability of adapting to users' personalized usage habits can improve their user experience. Aiming at this objective, this paper proposed a usage habit-oriented self-adaptive method for Android applications. Based on the conventional self-adaptive mechanism, the method takes usage habits as the context factor, and its adaptation implementation focuses on managing the transition sequence of the activities in an application. The paper further summarized a set of design requirements to help developers construct this kind of application. In addition, two usage scenarios are illustrated to explain the implementation details of the design requirements in the application development phase.
Detection Approach for Security Vulnerability Based on Pattern Matching
MIAO Xu-dong, WANG Yong-chun, CAO Xing-chen and FANG Feng
Computer Science. 2017, 44 (4): 109-113.  doi:10.11896/j.issn.1002-137X.2017.04.024
Since most existing static vulnerability detection tools cannot detect the vulnerabilities that users care about, this paper proposed a vulnerability detection method based on pattern matching. First, the source code to be tested is parsed and transformed into an intermediate representation stored in a user-defined data structure. Then, vulnerabilities are described and the safety rules are parsed using a safety rule language, and they are converted into corresponding automaton models that can be stored in memory. Finally, pattern matching is performed between the intermediate representation of the source code and the safety rules, driving the automaton state transitions, and a report based on the automaton states is submitted to users. The experimental results show that this method has a low false negative rate and good extensibility.
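Compiling a safety rule into an automaton and driving it with the event trace extracted from the intermediate representation can be sketched as follows. The rule here is the classic use-after-free / double-free pattern; the state and event names are illustrative, and the paper's rule language is far richer:

```python
# Transition table of a small safety-rule automaton.
RULE = {
    ("live",  "free"): "freed",
    ("freed", "use"):  "error",   # use after free  -> vulnerability
    ("freed", "free"): "error",   # double free     -> vulnerability
}

def scan(events, start="live"):
    # Feed the event trace through the automaton; record the index
    # of every event that drives it into the error state.
    state = start
    findings = []
    for i, ev in enumerate(events):
        state = RULE.get((state, ev), state)  # unmatched events keep state
        if state == "error":
            findings.append(i)
            state = start  # reset and keep scanning
    return findings

assert scan(["alloc", "use", "free"]) == []
assert scan(["alloc", "free", "use"]) == [2]
```

A real detector would run one automaton instance per tracked object along each program path, but the matching core is this table lookup.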
Automatic Generation of Basis Path Set Based on Model Algebra
ZHAO Hui-qun and LU Fei
Computer Science. 2017, 44 (4): 114-117.  doi:10.11896/j.issn.1002-137X.2017.04.025
Path testing is a white-box method for designing test cases, and basis path testing is one of the most widely used path testing methods. Basis path testing collects the basic executable paths from the control flow graph of the program, so the control flow graph is the key to automatic generation of the basis path set. In this paper, an automatic basis-path-set generation method based on model algebra was proposed: the basis path set is generated from a model algebra expression that is automatically derived by analyzing the program. Finally, classical cases show the efficiency of the method.
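For reference, the textbook way to obtain a basis path set from a control flow graph is the baseline method: the number of basis paths equals the cyclomatic complexity V(G) = E - N + 2, and the paths are a baseline walk plus one variant per decision, flipping one branch at a time. A sketch under those standard definitions (the paper replaces this with a model-algebra derivation); the two-diamond CFG below is a made-up example:

```python
def cyclomatic(cfg):
    # V(G) = E - N + 2 for a CFG given as {node: [successors]}.
    n = len(cfg)
    e = sum(len(v) for v in cfg.values())
    return e - n + 2

def basis_paths(cfg, entry, exit_):
    # Baseline method: first walk always taking branch 0, then
    # flip each decision on the baseline one at a time.
    def walk(flip_at=None):
        path, node = [entry], entry
        while node != exit_:
            succs = cfg[node]
            choice = 1 if (node == flip_at and len(succs) > 1) else 0
            node = succs[choice]
            path.append(node)
        return path

    baseline = walk()
    paths = [baseline]
    for node in baseline:
        if node != exit_ and len(cfg[node]) > 1:
            paths.append(walk(flip_at=node))
    return paths

# CFG of two consecutive if/else diamonds: entry->(A|B)->join->(C|D)->exit.
cfg = {"entry": ["A", "B"], "A": ["join"], "B": ["join"],
       "join": ["C", "D"], "C": ["exit"], "D": ["exit"], "exit": []}
assert cyclomatic(cfg) == 3
assert len(basis_paths(cfg, "entry", "exit")) == 3
```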
Rule-based Verification of Use Case Specification
ZHANG Ying, WU Ji, LIU Chao, YANG Hai-yan and HU Ning
Computer Science. 2017, 44 (4): 118-123.  doi:10.11896/j.issn.1002-137X.2017.04.026
Use cases capture the functional requirements of a system and are important in software development; any error in use cases may affect the system's development. This paper described a rule-based method to verify use case specifications. First, we analyzed the error types of use case specifications, then proposed a set of rules to help find these errors. Formalizing the rules supports automatic verification, and the verification work focuses on the meta-model of a use case modeling method called RUCM (Restricted Use Case Modeling).
Probabilistic Diagnosis Approach to Diagnosing Multiple-fault Programs with Fault Correlation
XU Jun-jie and CHEN Rong
Computer Science. 2017, 44 (4): 124-130.  doi:10.11896/j.issn.1002-137X.2017.04.027
Diagnosing multiple-fault software is necessary because almost all real-world software contains more than one fault. Unlike the single-fault case, the propagation and correlation of multiple faults in software lead to more complexity and great uncertainty, so probabilistic reasoning is applied to accommodate this uniqueness. This paper proposed a new probabilistic reasoning method to diagnose multiple-fault software using variant probabilistic graphs, FCGs, and inference over them. Distinguished from the BARINEL method and classical Bayesian networks, an FCG features Bayesian and noisy-or inference over an undirected graph consisting of candidate faults and their correlations, which can be set up from statement-level control and data dependencies. Experiments were conducted on programs ranging from the Siemens suite to larger ones such as space and grep. The experimental results validate the effectiveness of the approach in handling programs with either single or multiple faults, and in particular it is more accurate than competitors such as LOUPE, Tarantula, Ochiai and even BARINEL.
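The noisy-or component mentioned above has a simple closed form: a test fails unless every active fault independently fails to propagate, so P(fail) = 1 - prod(1 - p_f) over the active faults. A sketch with illustrative fault names and probabilities:

```python
def noisy_or(active_faults, cause_prob):
    # cause_prob[f] is the chance that fault f alone makes the
    # test fail; faults propagate independently.
    p_pass = 1.0
    for f in active_faults:
        p_pass *= 1.0 - cause_prob[f]
    return 1.0 - p_pass

cause_prob = {"f1": 0.8, "f2": 0.5}
assert abs(noisy_or(["f1"], cause_prob) - 0.8) < 1e-9
# Two active faults: 1 - (1 - 0.8) * (1 - 0.5) = 0.9
assert abs(noisy_or(["f1", "f2"], cause_prob) - 0.9) < 1e-9
```

In the diagnosis setting, such terms score how well a candidate set of faults explains the observed pass/fail spectrum; the FCG additionally weights candidates by their correlations.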
Evolution Method for TTCN-3 Codec
SUN Jing and MA Yuan-yuan
Computer Science. 2017, 44 (4): 131-134.  doi:10.11896/j.issn.1002-137X.2017.04.028
Frequent changes in software requirements lead to software evolution. When using TTCN-3 to test software that has evolved, we usually need to redevelop the codec for most of the test suite, so it is necessary to automate the evolution of the original codec in order to improve test efficiency. In this paper, we analyzed the state of the art of TTCN-3 codecs and proposed a new method of evolving a codec by combining it with the test suite. Experiments demonstrate that this method is feasible and convenient for TTCN-3 test systems.
Detecting Malware by Combining API and Permission Features
SHAO Shu-di, YU Hui-qun and FAN Gui-sheng
Computer Science. 2017, 44 (4): 135-139.  doi:10.11896/j.issn.1002-137X.2017.04.029
Abstract PDF(1227KB) ( 821 )   
References | Related Articles | Metrics
With the popularity of Android, the number of Android applications keeps growing, so detecting malware is essential for protecting mobile phone security. In this paper, API features and permission features are extracted by reverse-engineering apk files, then combined into a single feature set, and malware is detected with different classification algorithms. Compared with using API or permission features alone, the combined features yield higher detection accuracy, showing that combining permissions and APIs is more effective for detecting malicious Android applications.
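The feature-combination step can be sketched as below. This is a toy illustration under assumptions of my own (the vocabularies, samples and 1-nearest-neighbour classifier are invented; the paper does not specify its classifier here):

```python
def combine_features(api_calls, permissions, api_vocab, perm_vocab):
    """Concatenate binary API-call and permission indicator vectors
    (as extracted from a reverse-engineered apk) into one feature vector."""
    return [1 if a in api_calls else 0 for a in api_vocab] + \
           [1 if p in permissions else 0 for p in perm_vocab]

# Hypothetical vocabularies and labelled samples (1 = malware, 0 = benign).
API_VOCAB = ["sendTextMessage", "getDeviceId", "exec"]
PERM_VOCAB = ["SEND_SMS", "READ_CONTACTS", "INTERNET"]
train = [
    ({"sendTextMessage", "getDeviceId"}, {"SEND_SMS"}, 1),
    ({"exec"}, {"INTERNET"}, 0),
]

def classify(api_calls, permissions):
    """1-nearest-neighbour on Hamming distance over the combined vector."""
    x = combine_features(api_calls, permissions, API_VOCAB, PERM_VOCAB)
    def dist(sample):
        v = combine_features(sample[0], sample[1], API_VOCAB, PERM_VOCAB)
        return sum(a != b for a, b in zip(x, v))
    return min(train, key=dist)[2]
```

Any standard classifier (SVM, random forest, etc.) can be trained on the same concatenated vectors; the point is only that both feature families enter one vector.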
CEStream:A Complex Event Stream Processing Language
WANG Yi-xiong, LIAO Hu-sheng, KONG Xiang-xuan, GAO Hong-yu and SU Hang
Computer Science. 2017, 44 (4): 140-143.  doi:10.11896/j.issn.1002-137X.2017.04.030
Abstract PDF(432KB) ( 375 )   
References | Related Articles | Metrics
Complex event processing is one of the core technologies of stream processing platforms that support big data processing. CEStream is a new event stream processing language that supports complex event processing in distributed environments. It takes hierarchical data such as XML as its data model and provides regular tree pattern matching for complex event detection, supporting structural joins and regular expression matching. For distributed multi-source event streams, CEStream additionally re-matches each pattern matching result against a regular expression describing the time sequence of results, which satisfies the demand for detecting composite complex events across multiple sources. To implement the language, a processing engine was developed based on a stream data processing cluster and remote query agents: the agents detect events by regular tree pattern matching, while the cluster processes multi-source composite complex events. Experiments demonstrate that the system implements CEStream, effectively constrains the communication between cluster nodes, exploits the cluster's computing capacity, and meets the overall performance requirements.
Research on FCA Based Dependence Cluster Detection
SHANG Ying, CHENG Ke and LI Zheng
Computer Science. 2017, 44 (4): 144-147.  doi:10.11896/j.issn.1002-137X.2017.04.031
Abstract PDF(422KB) ( 385 )   
References | Related Articles | Metrics
A dependence cluster is a maximal set of program components that all depend upon each other. The current view is that large dependence clusters are extremely common, existing widely in all kinds of program source code. A large dependence cluster can lead to a significant ripple effect: a change at one point of the cluster may have potentially adverse effects on the rest of the cluster, and thus on the whole system. This negatively affects many software engineering activities, including comprehension, testing and maintenance, so dependence cluster detection is a prerequisite for mitigating these effects. Previous work proposed the monotonic slice-size graph (MSG) to detect dependence clusters of the same slice size, but that method is a conservative approximation that produces both false positives and false negatives. This paper proposes a technique that detects dependence clusters using formal concept analysis (FCA), in which concept inclusiveness is defined to select large concepts; furthermore, a light-weight computing strategy for FCA is proposed that significantly decreases the cost of computing large concepts. An empirical comparison on 12 open-source subjects shows that the proposed technique increases the detection accuracy of large dependence clusters with higher efficiency.
Formal Framework of Architecture-based Model Transformation
HOU Jin-kui and WANG Lei
Computer Science. 2017, 44 (4): 148-152.  doi:10.11896/j.issn.1002-137X.2017.04.032
Abstract PDF(512KB) ( 380 )   
References | Related Articles | Metrics
To address the problems of describing and computing semantic features in model-driven software development, a semantic description framework for model transformation was proposed based on an extension of typed category theory. The framework can formally describe component-based software models, model mappings, and the semantic verification of model transformations. Category diagrams depict the structural semantics of architecture models, with typed morphisms describing the relations between components, and the mapping between models before and after a transformation is formally described by typed functors. Application studies show that the framework follows the essence and requirements of model-driven development and provides new guidance for understanding, learning and advancing model-driven software development.
Research on Distributed Embedded Computer Performance Evaluation Model
LI Hong-jun, CUI Xi-ning, MU Ming and HAN Wei
Computer Science. 2017, 44 (4): 153-156.  doi:10.11896/j.issn.1002-137X.2017.04.033
Abstract PDF(321KB) ( 463 )   
References | Related Articles | Metrics
Performance evaluation of embedded computers differs greatly across application domains, and distributed embedded computers have characteristics that must be taken into account when they are evaluated. This paper analyzes the general characteristics of distributed embedded computers and provides a performance evaluation model covering both their hardware and software. Starting from the common technical characteristics of distributed embedded computers, the model combines metrics and guidelines into an integrated evaluation model with explicit definitions and convenient rules. Several common synthetic evaluation processes are summarized and, on this basis, a complete synthetic evaluation process is provided specifically for distributed embedded computers. An experiment shows that the method is feasible. The study is of great importance for the performance evaluation of embedded devices, especially distributed systems, and for related design work.
Design of High Concurrent Communication Server of Elevator Remote Monitoring System
WANG Xi-bo, GE Hong-shuai, WANG Rui-quan and LIN Hai
Computer Science. 2017, 44 (4): 157-160.  doi:10.11896/j.issn.1002-137X.2017.04.034
Abstract PDF(390KB) ( 332 )   
References | Related Articles | Metrics
Since a remote communication server must handle a large number of datagrams with different priority levels in parallel, a high-concurrency UDP communication server model was proposed using Java NIO, batch data processing, a database connection pool and Java synchronization. The receiving and sending of datagrams and the acquisition and storage of data are described in detail, and a multi-queue thread pool with dynamic priority was designed. Simulation results show that the multi-queue thread pool outperforms a traditional thread pool by 15% to 21.58%. Practical deployment of the system at the Shenyang Bluelight Group shows that the server model works stably for large-scale low-level communication with multi-priority tasks and generalizes well.
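The dynamic-priority idea behind such a multi-queue thread pool can be sketched as follows. This is a deterministic toy simulation, not the authors' Java design; the aging policy and its parameter are assumptions used to show how waiting tasks avoid starvation:

```python
import itertools

class MultiQueueScheduler:
    """Tasks arrive with a static priority level (lower = more urgent).
    A task's effective priority improves the longer it waits (aging),
    so low-priority datagrams are eventually served too."""
    def __init__(self, aging=0.1):
        self.aging = aging          # priority boost per clock tick of waiting
        self.clock = itertools.count()
        self.pending = []           # (level, arrival_time, task)

    def submit(self, level, task):
        self.pending.append((level, next(self.clock), task))

    def next_task(self):
        now = next(self.clock)
        # effective priority = level - aging * wait_time (lower runs first)
        best = min(self.pending, key=lambda e: e[0] - self.aging * (now - e[1]))
        self.pending.remove(best)
        return best[2]
```

With a small aging factor, recent high-priority work preempts old low-priority work; with a large one, long-waiting tasks win, which is the starvation-avoidance trade-off a real thread pool would tune.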
Research on Parallel Management Model Based on Transaction Cost Theory for Crowd-sourcing
XING Ying, GUO Ji-feng and GOU Xi-mei
Computer Science. 2017, 44 (4): 161-164.  doi:10.11896/j.issn.1002-137X.2017.04.035
Abstract PDF(347KB) ( 407 )   
References | Related Articles | Metrics
Crowd-sourcing is a new software development model with economic characteristics. Aiming at the problems of the online trading market, this paper introduces transaction cost theory and designs a parallel management mode for the crowd-sourcing model, giving its symbol definitions, cost analysis and equilibrium point. Finally, the application of the model is illustrated with a "bug fixing" case. The model emphasizes market-based competition in resource management and allocation; it has practical significance, contributing to the management and optimal allocation of resources on both the supply and demand sides and improving development efficiency.
Prediction of Code Clone Quality Based on Bayesian Network
LIU Dong-rui, LIU Dong-sheng, ZHANG Li-ping, HOU Min and WANG Chun-hui
Computer Science. 2017, 44 (4): 165-168.  doi:10.11896/j.issn.1002-137X.2017.04.036
Abstract PDF(328KB) ( 400 )   
References | Related Articles | Metrics
This paper studies the quality of code clones in software and evaluates the clone quality of current versions. On this basis, a Bayesian network is trained on existing sample data to obtain a model that predicts code clone quality. The predictions help developers judge whether a code clone should be refactored or can be efficiently reused. Experiments show that the method predicts the quality of code clones in software more accurately.
Research and Application on Software Trustworthiness Evaluation Model Based on Classification
JIA Xiao-hui, ZHANG Wen-ning and LIU An-zhan
Computer Science. 2017, 44 (4): 169-172.  doi:10.11896/j.issn.1002-137X.2017.04.037
Abstract PDF(341KB) ( 547 )   
References | Related Articles | Metrics
Aiming at the problem of software trustworthiness, we studied software quality models and evaluation models, and put forward a quality model and an evaluation model for software trustworthiness. The degree of software trust is divided into six levels: existing, trustless, usable, corroborated, recommended and application. A decision tree is used to express the evaluation process. Applied to a trusted component platform, the system runs stably after testing, and the model can guide the development and reuse of highly trusted software.
Test Case Prioritization Based on DU Chains
PAN Li-li, WANG Tian-e, QIN Jiao-hua and XIANG Xu-yu
Computer Science. 2017, 44 (4): 173-176.  doi:10.11896/j.issn.1002-137X.2017.04.038
Abstract PDF(318KB) ( 424 )   
References | Related Articles | Metrics
Test case prioritization is an effective and practical regression testing technique, yet existing approaches are quite limited in that they prioritize test cases based on test-requirement coverage only, ignoring many other testing factors. To improve performance, this paper presents a new test case prioritization algorithm based on DU chains. The algorithm combines DU-chain coverage and fault detection rate as quantitative factors for each test case. Compared with existing algorithms, it makes use of information from executed tests and module coupling, and dynamically calculates a priority value for every test case. Experimental results show that the new algorithm helps detect more faults in a shorter time.
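A greedy version of such a combined-factor prioritization can be sketched as below. The weighting and scoring formula are illustrative assumptions, not the paper's exact quantitative model; the "dynamic" part is that uncovered DU chains are recomputed after every pick:

```python
def prioritize(test_cases, weight=0.5):
    """Order test cases by a weighted mix of the DU chains each still adds
    and its historical fault-detection rate, recomputing after each pick.
    test_cases: dict name -> (set of DU chains covered, fault detection rate)"""
    order, covered = [], set()
    remaining = dict(test_cases)
    while remaining:
        def score(name):
            chains, fdr = remaining[name]
            return weight * len(chains - covered) + (1 - weight) * fdr
        best = max(remaining, key=score)
        covered |= remaining[best][0]
        order.append(best)
        del remaining[best]
    return order
```

Once a test's DU chains are already covered, only its fault-detection history keeps it competitive, which is how the second factor breaks coverage ties.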
Research and Development of Computer-aided Requirements Engineering Tool Based on Multi-modal Interaction Technologies
LIU Zhe and LI Zhi
Computer Science. 2017, 44 (4): 177-181.  doi:10.11896/j.issn.1002-137X.2017.04.039
Abstract PDF(1691KB) ( 396 )   
References | Related Articles | Metrics
Human-computer interaction is developing towards cognitive systems in order to provide natural and effective interaction experiences. Relying on data processing and data fusion algorithms over various sensors, cognitive systems can dynamically adapt to the external environment to improve the interaction experience. This paper proposes a natural, gesture-based mode of human-computer interaction for particular usage scenarios of a computer-aided requirements engineering (CARE) tool, further improving an existing problem-oriented CARE tool under the principles and theory of human-computer interaction. A tracing method assisted by multi-modal techniques is applied in the gesture recognition process, significantly enhancing the speed and accuracy of tracing and recognition and thus providing a better user experience.
Performance Model of Sparse Matrix Vector Multiplication on GPU
YIN Meng-jia, XU Xian-bin, HE Shui-bing, HU Jing, YE Cong-huan and ZHANG Tao
Computer Science. 2017, 44 (4): 182-187.  doi:10.11896/j.issn.1002-137X.2017.04.040
Abstract PDF(564KB) ( 436 )   
References | Related Articles | Metrics
Sparse matrix-vector multiplication is widely used in large-scale linear systems and matrix eigenvalue problems; it often becomes the bottleneck of iterative methods and dominates overall algorithm performance. Choosing different storage formats for matrices of different shapes can have a large impact on the performance of the corresponding algorithms. Through experimental analysis, we characterized how performance varies across storage structures and built an effective performance model that estimates the cost of sparse matrix computations, guiding the selection of a reasonable storage format. On 14 test cases in CSR, COO and HYB formats and 8 test cases in ELL format, the difference between the model's prediction and the measured performance is less than 9%.
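For reference, the CSR format named above stores the nonzeros row by row; the kernel a GPU thread block would execute per row looks like this Python sketch:

```python
def spmv_csr(values, col_idx, row_ptr, x):
    """y = A @ x with A in CSR: values/col_idx hold the nonzeros row by row,
    and row_ptr[i]:row_ptr[i+1] delimits the nonzeros of row i."""
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(y)):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# A = [[1, 0, 2],
#      [0, 3, 0]]   stored as CSR:
y = spmv_csr([1.0, 2.0, 3.0], [0, 2, 1], [0, 2, 3], [1.0, 1.0, 1.0])  # [3.0, 3.0]
```

The performance model's job is to predict, from the nonzero distribution, whether this row-oriented layout (CSR), coordinate lists (COO), padded rows (ELL) or a hybrid (HYB) will run fastest on the GPU.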
Research on Social Network Structural Holes Discovery Algorithm under Large-scale Data
WANG Zhen, HAN Zhong-ming and LI Jin
Computer Science. 2017, 44 (4): 188-192.  doi:10.11896/j.issn.1002-137X.2017.04.041
Abstract PDF(432KB) ( 543 )   
References | Related Articles | Metrics
As the scale of social network data grows, the amount of computation for structural holes increases exponentially, and constructing efficient parallel algorithms to shorten running time has become a difficulty of current research. Addressing the shortcomings of existing structural hole discovery algorithms, this paper uses parallel design to build a structural hole discovery algorithm based on MapReduce. The algorithm was tested on Hadoop with three data sets of different sizes: DBLP, YouTube and the California road network. The results show that increasing the number of DataNode machines shortens running time and increases operational efficiency, and that the algorithm has good parallel speedup and scalability.
Speculative Execution Optimization Algorithm with MapReduce
HUANG Zhong-ping, BAI Guang-wei, SHEN Hang, CHENG Xiao and HUA Zhi-xiang
Computer Science. 2017, 44 (4): 193-196.  doi:10.11896/j.issn.1002-137X.2017.04.042
Abstract PDF(437KB) ( 338 )   
References | Related Articles | Metrics
In data center frameworks for large-scale data processing, MapReduce runs over thousands of nodes. Speculative execution improves the efficiency of parallel computing by effectively handling straggling tasks. In this paper, we propose a speculative execution optimization algorithm for MapReduce, focusing on jobs with strict real-time demands and small amounts of computation; the goal is to minimize execution time while meeting the real-time demand. To this end, we establish a task model and a time model and, from their analysis, formulate a 0-1 integer linear program that minimizes the total finishing time. In addition, a heuristic algorithm of polynomial complexity was put forward to approach the optimal value. Finally, simulation results show that the proposed algorithm achieves a remarkable effect.
Speedup of GMRES Based on MIC Heterogeneous Cluster Platform
WANG Ming-qing, LI Ming, ZHANG Qing, ZHANG Guang-yong and WU Shao-hua
Computer Science. 2017, 44 (4): 197-201.  doi:10.11896/j.issn.1002-137X.2017.04.043
Abstract PDF(501KB) ( 958 )   
References | Related Articles | Metrics
The generalized minimal residual method (GMRES) is the most commonly used method for solving large asymmetric systems of linear algebraic equations, with fast convergence and stable behavior. The Intel Many Integrated Core (MIC) co-processor offers strong computing power and is easy to program. In this paper, the MPI+OpenMP+offload hybrid programming paradigm is used to port the GMRES algorithm to a MIC heterogeneous cluster platform. The performance of the parallel GMRES algorithm is greatly improved by several optimizations: hiding collective communication with an asynchronous execution model, vectorization, data transfer optimization, MIC thread scalability tuning, etc. Finally, the parallel algorithm is used to accelerate the solution of high-dimensional PDEs by localized radial basis function (RBF) collocation methods. Test results indicate that the parallel efficiency reaches 71.74% with 32 processes on the cluster, and the maximum speedup of 4 MICs over 1 CPU reaches 7.
Survey on Security Protocol of Space Information Networks
LIAO Yong, FAN Zhuo-chen and ZHAO Ming
Computer Science. 2017, 44 (4): 202-206.  doi:10.11896/j.issn.1002-137X.2017.04.044
Abstract PDF(448KB) ( 863 )   
References | Related Articles | Metrics
This paper surveys security protocols for space information networks. Firstly, we introduce the significance of building space information network security. Secondly, we discuss the state of research, at home and abroad, on space information network security protocols. We then analyze in detail the authentication and encryption algorithms that are the key technologies of the Space Communications Protocol Specification - Security Protocol (SCPS-SP). On this basis, we point out the challenges SCPS-SP faces in practice and give some ideas and suggestions for solutions.
Blind Estimation Method for OFDM Parameter Based on Symbol Kurtosis
ZHANG Hai-chuan and LEI Ying-ke
Computer Science. 2017, 44 (4): 207-212.  doi:10.11896/j.issn.1002-137X.2017.04.045
Abstract PDF(487KB) ( 465 )   
References | Related Articles | Metrics
Conventional estimation methods for OFDM signal parameters perform poorly and require many OFDM symbols when the cyclic prefix is short. A method based on symbol kurtosis was therefore proposed to estimate the parameters of an OFDM signal. Firstly, the received signal is demodulated using trial values of the relevant parameters over the sample spacing. Secondly, the symbol kurtosis is constructed from the demodulated baseband symbols. Finally, we prove that the cyclic prefix length and the useful symbol length can be jointly estimated by detecting the minimum of the symbol kurtosis. Simulation results show that the proposed method overcomes the difficulty conventional methods have when the cyclic prefix is short, and accurately estimates the OFDM parameters using fewer symbols than conventional methods.
Research on Incentive Strategy of Nodes in Selfish Opportunistic Network
LI Xiang-li and XUAN Mao-yi
Computer Science. 2017, 44 (4): 213-217.  doi:10.11896/j.issn.1002-137X.2017.04.046
Abstract PDF(500KB) ( 387 )   
References | Related Articles | Metrics
Nodes in an opportunistic network behave selfishly when transmitting messages because their resources are limited. To encourage cooperation, incentive strategies have been proposed that use virtual currency to motivate selfish nodes to forward messages, but they suffer from virtual currency shortages and mendacious offers. To solve these problems, this paper proposes a node incentive strategy based on an overdraft virtual currency mechanism, which uses a node's message-forwarding capability as collateral for a monetary overdraft, solving the currency shortage. In the strategy's trading mechanism, resource and wealth status are public; if a message transaction cannot be completed at once, the two sides bargain amicably based on resource and wealth status, which suppresses mendacious offers. Simulation results show that the strategy and its virtual currency mechanism are effective and can suppress mendacious offers.
WSN Routing Detection Algorithm Based on Energy Consumption Quantization
XU Yin-xia, JIN Guo-xiang and TAO Yi-wei
Computer Science. 2017, 44 (4): 218-222.  doi:10.11896/j.issn.1002-137X.2017.04.047
Abstract PDF(409KB) ( 418 )   
References | Related Articles | Metrics
To reduce the energy consumption of routing nodes in wireless sensor networks (WSNs) and extend network lifetime, the distribution of routing nodes must be optimized. Traditional methods achieve energy balance through a channel allocation model based on CSMA/CA limited contention for WSN route detection, but as networks grow and interference increases, saving energy becomes more costly. This paper proposes a WSN route detection algorithm based on quantized energy consumption. Firstly, a cluster energy-consumption scheduling model of the WSN is established, and the route detection control objective function is constructed with energy control overhead, packet loss rate and transmission delay as constraint parameters. Then a routing-conflict coordination mechanism realizes the quantitative distribution of energy consumption, and WSN route detection and node deployment are optimized through an energy equilibrium model of the WSN transmission channel. Simulation results show that the proposed method is superior to traditional methods, with higher network energy efficiency and lower transmission delay and bit error rate.
Finding Type Mismatch Defects of JavaScript Based on Static Analysis
WEI Miao, WU Yi-jian, SHEN Li-wei, PENG Xin and ZHAO Wen-yun
Computer Science. 2017, 44 (4): 223-228.  doi:10.11896/j.issn.1002-137X.2017.04.048
Abstract PDF(517KB) ( 993 )   
References | Related Articles | Metrics
Because of the nature of the JavaScript language and the growth of JavaScript code as software evolves, a JavaScript program may contain many defects related to runtime variable types. Such defects are hard to detect, surfacing only as runtime errors, and programmers spend a lot of time locating them by manual debugging. The proposed inspection method checks for possible runtime type-mismatch defects. First, the JavaScript files in a project are grouped based on the HTML and JSP pages that reference them. Second, each group is analyzed and variable types are inferred. Then the group is checked for multi-type attributes, and each use of a multi-type attribute is examined. Finally, the results are reported together with repair advice. A tool that automatically detects multi-type attribute defects in JavaScript was implemented; experiments on real JavaScript projects illustrate the feasibility of the method, and a comparison with existing JavaScript analysis methods shows its effectiveness, improving the efficiency of finding JavaScript defects.
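The core "multi-type attribute" check can be sketched as below. For simplicity this operates on pre-extracted (file, attribute, inferred type) triples rather than a real JavaScript AST; the triples and the grouping step are assumed to come from the earlier analysis phases:

```python
def multi_type_attributes(assignments):
    """Flag attributes that are assigned more than one runtime type across
    a group of JavaScript files -- candidates for type-mismatch defects.
    assignments: iterable of (file, attribute, inferred_type) triples."""
    seen = {}
    for _file, attr, typ in assignments:
        seen.setdefault(attr, set()).add(typ)
    return {attr: types for attr, types in seen.items() if len(types) > 1}

# Hypothetical inference output for one HTML page's script group:
report = multi_type_attributes([
    ("a.js", "user.id", "number"),
    ("b.js", "user.id", "string"),   # same attribute, different type
    ("a.js", "user.name", "string"),
])
```

Each flagged attribute would then have its use sites checked, since a `number` consumer reading a `string` value is exactly the runtime failure being hunted.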
Deep Belief Network Software Defect Prediction Model
GAN Lu, ZANG Lie and LI Hang
Computer Science. 2017, 44 (4): 229-233.  doi:10.11896/j.issn.1002-137X.2017.04.049
Abstract PDF(406KB) ( 470 )   
References | Related Articles | Metrics
Software defect prediction plays an important role in detecting defects and ensuring software quality, and neural network classifiers have been widely used to build defect prediction models. However, training a neural network classifier on historical data is only shallow learning and cannot extract deep data features. To solve this problem, a deep belief network software defect prediction model (DBNSDPM) was constructed from a deep belief net composed of multiple layers of restricted Boltzmann machines. The model first performs feature integration and iteration, then learns the feature data deeply. Simulation experiments show that the prediction accuracy of DBNSDPM improves significantly over traditional neural network prediction models.
Black-box Test Case Generation Method Based on PEFSM Behavioral Model
LIANG Hao-ran, ZHOU Kuan-jiu, CUI Kai, PAN Jie and HOU Gang
Computer Science. 2017, 44 (4): 234-240.  doi:10.11896/j.issn.1002-137X.2017.04.050
Abstract PDF(586KB) ( 400 )   
References | Related Articles | Metrics
With the wide application of computer software in medicine, spaceflight, finance and other fields, stricter requirements are imposed on the reliability of software systems. Software testing is an effective way to ensure security and reliability, and the quality of test cases directly influences testing efficiency and cost. To address black-box testing of embedded systems, a test case generation method based on a PEFSM (probabilistic extended finite state machine) behavioral model was proposed. Its applicable scenarios are delimited by two assumptions, algorithms for converting and unrolling REs (regular expressions) are designed, and the method is applied to black-box testing of a smart television based on the Android operating system. The method has two features: 1) by exploiting the different frequencies of operations, users' common operations on the IUT (implementation under test) are tested preferentially, which can reduce the number and length of test cases; 2) testers can specify the initial and final states of test cases, as well as the cycle counts of closures and the waiting time between transitions, making the testing flexible and widely applicable. Experimental results show that the method not only reduces software testing cost but also improves the error-detecting efficiency of test cases.
Prototype Selection Algorithm Based on Natural Neighbor and MST
ZHU Qing-sheng, DUAN Lang-jun and YANG Li-jun
Computer Science. 2017, 44 (4): 241-245.  doi:10.11896/j.issn.1002-137X.2017.04.051
Abstract PDF(505KB) ( 425 )   
References | Related Articles | Metrics
K-nearest neighbor (KNN) is one of the most popular supervised classification algorithms. However, traditional KNN classification has two limitations: the choice of the parameter K, and prohibitive computation and storage demands on large datasets. To overcome these limitations, a new prototype selection algorithm was proposed that retains key prototypes contributing strongly to classification and removes most prototypes that contribute little. Differing from other prototype selection algorithms, the proposal uses a novel neighbor concept, the natural neighbor, to preprocess the dataset, and builds a minimum spanning tree under specific termination conditions. According to the MST, prototypes close to class boundaries and some internal prototypes are preserved. Experimental results show that the proposed algorithm effectively reduces the number of prototypes while maintaining the classification accuracy of traditional KNN, and is slightly superior to other prototype selection algorithms in classification accuracy and retention ratio.
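The MST-based boundary-keeping idea can be sketched as follows. This is a simplified illustration, not the paper's algorithm: the natural-neighbor preprocessing and internal-prototype rules are omitted, and only endpoints of MST edges that cross class labels are kept:

```python
def mst_edges(points):
    """Prim's algorithm over the complete Euclidean graph on `points`;
    returns the list of MST edges as index pairs."""
    n = len(points)
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(points[a], points[b]))
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        u, v = min(((u, v) for u in in_tree for v in range(n) if v not in in_tree),
                   key=lambda e: dist(*e))
        in_tree.add(v)
        edges.append((u, v))
    return edges

def select_prototypes(points, labels):
    """Keep the points incident to an MST edge whose endpoints carry
    different labels, i.e. prototypes near the class boundary."""
    keep = set()
    for u, v in mst_edges(points):
        if labels[u] != labels[v]:
            keep.update((u, v))
    return sorted(keep)
```

On well-separated classes the cross-label MST edges are exactly the bridges between clusters, so only the facing boundary points survive, which is the storage reduction the abstract describes.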
Multi-plant Assembly Sequence Planning Based on Fruit Fly Optimization Algorithm
YUAN Wen-bing, CHANG Liang, XU Zhou-bo and GU Tian-long
Computer Science. 2017, 44 (4): 246-251.  doi:10.11896/j.issn.1002-137X.2017.04.052
Abstract PDF(478KB) ( 425 )   
References | Related Articles | Metrics
An improved fruit fly optimization algorithm (FOA) was proposed to solve the combined problem of assembly sequence planning and plant assignment. Firstly, a coding scheme for the FOA is given within the multi-plant assembly sequence planning model. Secondly, multiple fruit fly groups are used to enhance the parallel search ability of the FOA: a smell-based search and a vision-based search are designed for the groups to strengthen exploitation, and a cooperative search process is developed to strengthen exploration. Thirdly, a fitness function is proposed that comprehensively considers assembly operation cost, tool change cost, clamping change cost and general transportation cost; feasible assembly sequences and station allocations are guided by the assembly precedence matrix (APM) and the fitness function, and optimized by the improved FOA. Finally, a product example is tested; the results show that the presented method is feasible and efficient for solving the multi-plant assembly sequence planning problem.
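The basic FOA loop that the paper improves upon can be sketched as follows. This is the plain single-group, continuous form (smell search = random scatter around the best position, vision search = pick the best candidate); the multi-group, assembly-cost and discrete-coding machinery of the paper is omitted, and all parameters are illustrative:

```python
import random

def foa_minimize(f, dim, iters=200, swarm=20, radius=1.0, seed=1):
    """Minimize f over R^dim with a plain fruit fly optimization loop:
    each iteration, `swarm` flies scatter randomly around the current best
    position (smell-based search); the best candidate found, if better,
    becomes the new swarm position (vision-based search)."""
    rng = random.Random(seed)
    best = [rng.uniform(-5, 5) for _ in range(dim)]
    best_val = f(best)
    for _ in range(iters):
        for _ in range(swarm):
            cand = [b + rng.uniform(-radius, radius) for b in best]
            v = f(cand)
            if v < best_val:
                best, best_val = cand, v
    return best, best_val

pos, val = foa_minimize(lambda x: sum(xi * xi for xi in x), dim=2)
```

The paper's variant runs several such groups in parallel with a cooperative exchange step, and searches over discrete assembly sequences scored by the composite cost fitness instead of a continuous test function.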
Supervised WSD Method Based on Context Translation
YANG Zhi-zhuo
Computer Science. 2017, 44 (4): 252-255.  doi:10.11896/j.issn.1002-137X.2017.04.053
Abstract PDF(408KB) ( 744 )   
References | Related Articles | Metrics
To overcome the data sparseness problem of supervised WSD methods, this paper presents a WSD method based on context translation. The method assumes that a translated context containing the ambiguous word has a meaning similar to the original context. Under this assumption, a large number of pseudo training examples are first generated from the contexts of the target text; a Bayesian model is then trained on both authentic and pseudo training data; finally, the model is used to perform word sense disambiguation. Experimental results show that the proposed method significantly improves traditional WSD accuracy, by 4.35%, and outperforms the best participating system in the SemEval-2007 evaluation.
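The Bayesian disambiguation step over pooled authentic and pseudo examples can be sketched as a unigram naive-Bayes classifier. The senses, contexts and add-one smoothing below are illustrative assumptions; the paper's translation step that produces the pseudo examples is not reproduced here:

```python
import math
from collections import Counter

def train_nb(labelled):
    """labelled: list of (context word list, sense) pairs -- the pooled
    authentic + pseudo training data. Returns per-sense word counts and
    sense frequencies for a unigram naive-Bayes disambiguator."""
    word_counts, sense_counts = {}, Counter()
    for words, sense in labelled:
        sense_counts[sense] += 1
        word_counts.setdefault(sense, Counter()).update(words)
    return word_counts, sense_counts

def disambiguate(context, word_counts, sense_counts):
    """Pick the sense maximizing log P(sense) + sum log P(word|sense),
    with add-one smoothing over the shared vocabulary."""
    vocab = len({w for c in word_counts.values() for w in c})
    total_senses = sum(sense_counts.values())
    def log_score(sense):
        wc = word_counts[sense]
        total = sum(wc.values())
        s = math.log(sense_counts[sense] / total_senses)
        for w in context:
            s += math.log((wc[w] + 1) / (total + vocab))
        return s
    return max(word_counts, key=log_score)
```

In the paper's setting, the pseudo pairs generated from translated contexts simply extend `labelled`, which is how the sparse authentic data gets densified before training.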
Argument Game Model for Gradual Argumentation Semantic
Computer Science. 2017, 44 (4): 256-262.  doi:10.11896/j.issn.1002-137X.2017.04.054
Abstract PDF(672KB) ( 718 )   
References | Related Articles | Metrics
In formal argumentation theory, the proof theory of an argumentation semantics solves the problem of determining the justification status of an argument under that semantics, which usually requires a corresponding argument game model. An argument game is a process of argument interaction between a proponent and an opponent, who attack and defend their arguments by putting forward attacking arguments; only if the proponent wins the game does its initial argument obtain an explicit justification status. This paper defines a gradual argumentation semantics, the BRD-argumentation semantics, which differs from Dung's abstract argumentation semantics by embedding a circular semantics for calculating the strengths and degrees of justification of arguments into the structured argumentation framework ASPIC+. To give the proof theory of this semantics, a corresponding argument game model is constructed.
Fuzzy Modal Propositional Logic with Three Kinds of Negation
CHEN Cheng, PAN Zheng-hua and LV Yong-xi
Computer Science. 2017, 44 (4): 263-268.  doi:10.11896/j.issn.1002-137X.2017.04.055
Abstract PDF(415KB) ( 526 )   
References | Related Articles | Metrics
In fuzzy knowledge processing, distinguishing, expressing, reasoning about and calculating with different kinds of negative knowledge is a basic theoretical problem. The fuzzy propositional logic system FLCOM, with contradictory negation, opposite negation and medium negation, is a theory that fully describes these different negations together with their relations and laws in fuzzy knowledge. In this paper, a fuzzy modal propositional logic system MKCOM with three kinds of negation was proposed based on FLCOM and the medium modal propositional logic MK, together with the extensions MTCOM, MS4COM and MS5COM of MKCOM. The semantic and syntactic interpretations of MKCOM are then discussed, and its soundness and completeness theorems are proved.
Application of Bacteria Foraging Algorithm in High-dimensional Optimization Problems
LI Jun and DANG Jian-wu
Computer Science. 2017, 44 (4): 269-274.  doi:10.11896/j.issn.1002-137X.2017.04.056
Abstract PDF(549KB) ( 642 )   
The excessive number of empirical parameters in earlier self-adaptive step-length formulas prevents the bacteria foraging optimization algorithm from achieving true self-adaptation. Therefore, a revised step-length formula was proposed, which makes the step length depend on the current evolution generation of each bacterium and on the search range of the problem to be solved, so that the step length adapts automatically. In addition, combining chaotic ideas and differential evolution ideas with the bacteria foraging algorithm improves both the initialization and the search process: group diversity is increased, preventing the algorithm from falling into local optima through premature convergence. For high-dimensional problems, a dimension-by-dimension optimization strategy replaces the original update: the information of each dimension is updated one at a time, and the new position in a dimension is kept only if it improves the fitness value. Handling the dimensions separately noticeably boosts the precision and efficiency of the algorithm. Experiments on multiple standard test functions in high-dimensional spaces show that the revised algorithm converges quickly, achieves high precision with a simple solving process, and improves markedly in accuracy compared with other modified schemes.
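As a rough illustration of the dimension-by-dimension update combined with a self-adaptive step length, the following Python sketch perturbs one coordinate at a time and keeps only improving moves on a sphere test function. The step formula and all names here are illustrative assumptions, not the paper's exact scheme:

```python
import random

def adaptive_step(gen, max_gen, search_range, s_min=1e-3):
    # Hypothetical self-adaptive step: shrinks from a fraction of the
    # search range toward s_min as the generation counter grows.
    return s_min + (0.1 * search_range - s_min) * (1.0 - gen / max_gen)

def dimwise_chemotaxis(x, fitness, gen, max_gen, bounds):
    # Dimension-by-dimension update: perturb one coordinate at a time
    # and keep the move only if it improves the fitness (minimization).
    lo, hi = bounds
    step = adaptive_step(gen, max_gen, hi - lo)
    best = fitness(x)
    for d in range(len(x)):
        trial = list(x)
        trial[d] = min(hi, max(lo, x[d] + step * random.uniform(-1, 1)))
        f = fitness(trial)
        if f < best:  # greedy per-dimension acceptance
            x, best = trial, f
    return x, best

random.seed(0)
sphere = lambda v: sum(c * c for c in v)  # standard test function
x = [random.uniform(-5, 5) for _ in range(10)]
val0 = sphere(x)
val = val0
for g in range(200):
    x, val = dimwise_chemotaxis(x, sphere, g, 200, (-5.0, 5.0))
```

Because each dimension is accepted or rejected on its own, an improving move in one coordinate is never undone by a worsening move in another, which is the intuition behind the reported precision gain in high dimensions.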
Improved Weighted Extreme Learning Machine
XING Sheng, WANG Xiao-lan, ZHAO Shi-xin and ZHAO Yan-xia
Computer Science. 2017, 44 (4): 275-280.  doi:10.11896/j.issn.1002-137X.2017.04.057
Abstract PDF(1074KB) ( 371 )   
When the weights of the original weighted extreme learning machine are fixed manually, a better weight may be missed. To address this problem, an improved weighted extreme learning machine was proposed. The new method uses the ratio of the sample numbers of the different classes as the initial weight, and then adjusts the weight with a weight adjustment factor in two directions, reducing and enlarging the weight ratio. Finally, the optimal weight is selected according to the experimental results. Experiments were carried out on transformed UCI data sets using the original weighted extreme learning machine, weighted extreme learning machines with other weights, and the new method. The experimental results indicate that the improved weighted extreme learning machine achieves better classification performance.
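The weight-selection idea can be sketched as follows. This is a minimal illustration under assumed names (`class_ratio_weights`, `search_weights`, the factor grid and the scoring callback are not from the paper), not the actual weighted-ELM training:

```python
def class_ratio_weights(labels):
    # Initial per-class weight: inverse of class frequency, so the
    # minority class receives the larger weight.
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return {y: 1.0 / n for y, n in counts.items()}

def search_weights(labels, score, factors=(0.5, 0.75, 1.0, 1.25, 1.5)):
    # Scale the minority-class weight down and up by each adjustment
    # factor and keep the best-scoring setting (e.g. by validation G-mean).
    base = class_ratio_weights(labels)
    best_w, best_s = None, float("-inf")
    for f in factors:
        w = {y: v * (f if v == max(base.values()) else 1.0)
             for y, v in base.items()}
        s = score(w)
        if s > best_s:
            best_w, best_s = w, s
    return best_w, best_s

# Toy usage: imbalanced labels; the toy score prefers a weight ratio near 4:1
labels = [0] * 80 + [1] * 20
score = lambda w: -abs(w[1] / w[0] - 4.0)
w, s = search_weights(labels, score)
```

In practice the `score` callback would train the weighted ELM with each candidate weight and return a class-imbalance-aware metric on held-out data.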
New Improved Artificial Fish Swarm Algorithm
LIU Dong-lin and LI Le-le
Computer Science. 2017, 44 (4): 281-287.  doi:10.11896/j.issn.1002-137X.2017.04.058
Abstract PDF(510KB) ( 468 )   
Aiming at the problems of the artificial fish swarm algorithm (AFSA), which tends to fall into local optima, converges slowly in the later stage and has low solution accuracy, a new improved artificial fish swarm algorithm (IAFSA) was proposed. Firstly, the new algorithm uses a chaotic transform to initialize the positions of the individual fish, so that the fish are distributed more evenly within the specified region; this preserves population diversity and is conducive to global convergence. Secondly, artificial fish with different fitness values use different visual ranges in the foraging behavior, which not only improves the search speed but also reduces the possibility of the artificial fish falling into local optima. Finally, based on the relationship between physical strength and activity, a physical-strength transformation model is constructed: as the strength of the artificial fish weakens in the late stage of the algorithm, the step length is reduced in time, which improves the convergence speed and accuracy of the algorithm. The algorithm was verified on standard test functions and a 14-city TSP, and the experimental results show that the improved algorithm converges faster and achieves higher precision than the basic artificial fish swarm algorithm.
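Chaotic initialization of the fish positions can be sketched with a logistic map, a common choice for chaos-based initialization (the paper's exact chaotic transform may differ, and the constants here are assumptions):

```python
def logistic_chaos_init(n_fish, dim, bounds, x0=0.7, mu=4.0):
    # Chaotic initialization via the logistic map x <- mu * x * (1 - x),
    # mapped into [lo, hi]; mu = 4 gives the fully chaotic regime, so
    # successive iterates cover the interval more evenly than one
    # pseudo-random draw per fish.
    lo, hi = bounds
    x = x0
    swarm = []
    for _ in range(n_fish):
        fish = []
        for _ in range(dim):
            x = mu * x * (1.0 - x)      # next chaotic iterate in (0, 1)
            fish.append(lo + (hi - lo) * x)
        swarm.append(fish)
    return swarm

swarm = logistic_chaos_init(30, 2, (-10.0, 10.0))
```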
Using Social Trust Relationship and Helpfulness Ratings for Recommendation Based on Matrix Factorization
ZENG An and XU Xiao-qiang
Computer Science. 2017, 44 (4): 288-294.  doi:10.11896/j.issn.1002-137X.2017.04.059
Abstract PDF(577KB) ( 375 )   
So far, the cold-start and data-sparsity issues remain two major challenges in recommender systems. Most traditional recommender systems based on the matrix factorization model assume that users are isolated and ignore the relationships among users, which degrades the recommendation quality. Thus, a novel approach incorporating social trust relationships and helpfulness ratings was proposed. Based on probabilistic matrix factorization, the approach combines the social trust relationships among users with helpfulness ratings, and alternating least squares is employed to train the model parameters. The experimental results on the Epinions and Ciao data sets suggest that the proposed approach is superior to other advanced approaches in accuracy and reliability, especially when cold-start issues are involved.
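For intuition, here is a toy sketch of trust-regularized matrix factorization in plain Python. Note that it uses stochastic gradient descent rather than the paper's alternating least squares, omits helpfulness ratings, and all names, the symmetric trust-regularization term and the toy data are illustrative assumptions:

```python
import random

def train_trust_mf(ratings, trust, n_users, n_items, k=2, lr=0.02,
                   reg=0.05, t_reg=0.1, epochs=200):
    # Minimal SGD sketch of trust-regularized matrix factorization.
    random.seed(1)
    U = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    V = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(U[u][f] * V[i][f] for f in range(k))
            e = r - pred
            for f in range(k):
                uf, vf = U[u][f], V[i][f]
                U[u][f] += lr * (e * vf - reg * uf)
                V[i][f] += lr * (e * uf - reg * vf)
        for u, v in trust:
            # Pull the latent factors of connected users together, so a
            # cold-start user inherits preferences from trusted users.
            for f in range(k):
                d = U[u][f] - U[v][f]
                U[u][f] -= lr * t_reg * d
                U[v][f] += lr * t_reg * d
    return U, V

def predict(U, V, u, i):
    return sum(a * b for a, b in zip(U[u], V[i]))

# Toy data: 3 users, 3 items; user 2 has few ratings but trusts user 0
ratings = [(0, 0, 5), (0, 1, 1), (1, 0, 4), (1, 1, 1), (2, 2, 3), (0, 2, 3)]
trust = [(2, 0)]
U, V = train_trust_mf(ratings, trust, 3, 3)
```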
Research of Time Weighted Collaborative Filtering Algorithm in Movie Recommendation
LAN Yan and CAO Fang-fang
Computer Science. 2017, 44 (4): 295-301.  doi:10.11896/j.issn.1002-137X.2017.04.060
Abstract PDF(639KB) ( 610 )   
In order to deal with the outdated-information problem of collaborative filtering, a new time-weighted collaborative filtering algorithm (NTWCF) was proposed. Considering the influence of changing user interests, it introduces the concept of an information retention period, inspired by the information half-life. At the nearest-neighbor search and rating prediction stages, a novel time-weighting function is applied to the users' ratings. The experimental results on movie data sets show that this greatly improves the accuracy of the predicted ratings. Then, to improve the real-time performance of the algorithm, a time-weighted item clustering algorithm was used to optimize NTWCF, yielding a collaborative filtering algorithm based on time weighting and item clustering (TWICCF). It applies K-means to cluster the items based on time-weighted ratings, and then searches for the nearest neighbors of the target item within a small set of clusters. The TWICCF algorithm achieves significant improvements over NTWCF in both recommendation accuracy and real-time performance.
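A half-life-style time weight, the kind of decay the retention-period idea suggests, can be sketched as follows. The exact weighting function used in the paper is not reproduced here; this is one natural choice, with all names and the 30-day default as assumptions:

```python
def time_weight(t_rating, t_now, retention):
    # Exponential decay inspired by the information half-life: a rating
    # that is `retention` days old keeps exactly half of its weight.
    age = t_now - t_rating
    return 0.5 ** (age / retention)

def weighted_rating(rating, t_rating, t_now, retention=30.0):
    # Time-weighted score used in neighbor search and prediction.
    return rating * time_weight(t_rating, t_now, retention)

w = time_weight(70, 100, 30.0)   # a rating from day 70, evaluated on day 100
```

Older ratings thus contribute smoothly less, rather than being cut off at a hard expiry date.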
Generation of Three-dimensional Vehicle Panorama
LIU Dong, QIN Rui, CHEN Xi and LI Qing
Computer Science. 2017, 44 (4): 302-305.  doi:10.11896/j.issn.1002-137X.2017.04.061
Abstract PDF(1730KB) ( 1977 )   
Information loss, image blur and object deformation exist in bird's-eye panoramic images generated via a homography matrix. When the 3D points are confined to a two-dimensional space, the image in one view can be transformed into another view uniquely. We selected a curved surface wrapping the vehicle as this two-dimensional space, so that multi-view panorama images can be obtained from different perspectives. A special process is also applied to the overlapping regions. The final experiments demonstrate that the multi-view panorama images make use of all the information in the source images, while image blur and object deformation decrease notably and the overlapping regions appear more natural and smooth.
Mean Shift Tracking Algorithm Based on Improved LTP Feature Extraction
ZOU Qing-zhi and HUANG Shan
Computer Science. 2017, 44 (4): 306-311.  doi:10.11896/j.issn.1002-137X.2017.04.062
Abstract PDF(2484KB) ( 441 )   
A mean shift target tracking algorithm based on the fusion of an improved LTP feature and a color feature was proposed, which can solve the difficulty of tracking under scenes with varying light intensity. Rotation invariance is introduced to address a shortcoming of the LTP model, and a dynamic threshold method is put forward; the improved LTP feature and the color feature are then fused and embedded into the mean shift algorithm through an adaptive weighting function. Compared with traditional target tracking algorithms in scenarios with changing light intensity, the tracking results of this algorithm are superior, and it shows good robustness.
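The core LTP coding for a 3x3 neighbourhood can be sketched as below. The `dynamic_threshold` formula is an illustrative assumption (the abstract does not spell out the paper's dynamic threshold), and the rotation-invariant mapping is omitted:

```python
def ltp_codes(patch, t):
    # Local Ternary Pattern for a 3x3 patch: each neighbor is coded
    # +1 / 0 / -1 against the center value with tolerance t, then split
    # into the usual upper and lower binary patterns.
    c = patch[1][1]
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    upper = lower = 0
    for bit, p in enumerate(neighbors):
        if p >= c + t:
            upper |= 1 << bit
        elif p <= c - t:
            lower |= 1 << bit
    return upper, lower

def dynamic_threshold(patch, k=0.5):
    # Hypothetical dynamic threshold: a fraction of the mean absolute
    # deviation in the patch, so the tolerance grows with local contrast.
    vals = [v for row in patch for v in row]
    mean = sum(vals) / len(vals)
    return k * sum(abs(v - mean) for v in vals) / len(vals)

patch = [[90, 120, 80],
         [100, 100, 140],
         [60, 100, 95]]
up, lo = ltp_codes(patch, dynamic_threshold(patch))
```

Because the ternary code tolerates small intensity shifts around the center, the descriptor is less sensitive to illumination change than a plain LBP, which is why it suits varying-light tracking.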
Algorithm of Enhanced Visual Background Extraction for Eliminating Ghost
QU Zhong and HUANG Xiao-ling
Computer Science. 2017, 44 (4): 312-316.  doi:10.11896/j.issn.1002-137X.2017.04.063
Abstract PDF(2029KB) ( 773 )   
The visual background extractor (ViBe) model initializes the background model in the first frame by randomly selecting pixels from each pixel's spatial neighbourhood, which easily produces ghosts in the early stage of detection. Meanwhile, segmenting background and foreground with a global threshold cannot adapt to dynamic backgrounds, and the fixed update rate is inappropriate for rapidly changing scenes. To solve these problems, this paper proposed an enhanced algorithm that uses pixel information in the time domain to initialize the background model. First, the history pixels in the image sequence are used to complete the background model initialization. Then, the segmentation threshold is obtained adaptively from the background complexity computed over the spatial neighbourhood pixels. Finally, the background model is updated at a dynamic rate, so that it can adapt to scene changes faster and better. Experimental results show that the enhanced ViBe algorithm not only removes ghosts quickly and effectively, but also improves the accuracy of target detection.
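A minimal sketch of the adaptive-threshold decision is given below. The spread-based radius formula and all constants are assumptions for illustration; the real ViBe model stores per-pixel sample sets, works on full frames, and uses random sample replacement for updates:

```python
def adaptive_radius(samples, r_min=20.0, k=1.5):
    # Hypothetical adaptive matching radius: scale with the spread of
    # the stored background samples, so busy (dynamic) regions get a
    # looser threshold while flat regions keep the default r_min.
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    return max(r_min, k * var ** 0.5)

def is_background(pixel, samples, min_matches=2):
    # ViBe-style decision: the pixel is background if it lies within the
    # adaptive radius of at least `min_matches` stored samples.
    r = adaptive_radius(samples)
    return sum(abs(pixel - s) <= r for s in samples) >= min_matches

flat = [100, 101, 99, 100, 102, 98, 100, 101]   # static region samples
busy = [80, 140, 100, 60, 120, 150, 90, 110]    # dynamic region samples
```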
Image Segmentation Based on Pulse Coupled Neural Network
WANG Ai-wen and SONG Yu-jie
Computer Science. 2017, 44 (4): 317-322.  doi:10.11896/j.issn.1002-137X.2017.04.064
Abstract PDF(1731KB) ( 442 )   
The traditional pulse coupled neural network (PCNN) model requires many parameters to be set for image segmentation and cannot precisely segment images with low contrast. To solve these problems, an improved image segmentation algorithm based on a simplified PCNN model was proposed. The simplified model reduces the number of parameters required by the traditional PCNN model. In the improved algorithm, the model parameters are set adaptively according to the spatial and gray-level features of the image pixels, and the expected mean of the image gray levels, obtained from the gray histogram, is used as the segmentation threshold. Consequently, the improved algorithm needs no iteration stopping condition to be chosen: a single firing process completes the image segmentation effectively. The experimental results show that this method segments images accurately, especially in textured details, and its final results are better than those of methods such as manual adjustment of the PCNN parameters and the Otsu method.
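The histogram-based threshold can be sketched as follows; `threshold_segment` stands in for the single firing pass of the simplified PCNN (a plain-threshold illustration of the idea, not the full coupled model):

```python
def histogram_expected_mean(image):
    # Segmentation threshold as the expected mean of the gray levels,
    # computed from the gray histogram: sum over g of g * p(g).
    hist = [0] * 256
    n = 0
    for row in image:
        for g in row:
            hist[g] += 1
            n += 1
    return sum(g * h for g, h in enumerate(hist)) / n

def threshold_segment(image, t):
    # One-pass "firing": pixels at or above the threshold fire (1).
    return [[1 if g >= t else 0 for g in row] for row in image]

img = [[10, 10, 200],
       [10, 200, 200],
       [10, 10, 200]]
t = histogram_expected_mean(img)
seg = threshold_segment(img, t)
```

Because the threshold is derived directly from the histogram, no iteration count needs to be tuned: the expected mean separates the bright object from the dark background in one pass.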