Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Volume 42, Issue 2, 2015
Survey on Real-time Video and Audio Communication Based on WebRTC
ZHANG Xiang-hui, HUANG Jia-qing, WU Kang-heng and LEI Zhi-bin
Computer Science. 2015, 42 (2): 1-6.  doi:10.11896/j.issn.1002-137X.2015.02.001
WebRTC (Web Real-Time Communication) enables real-time video and audio communication in the browser without installing any plugin or client software. Owing to its low development cost and wide application range, it has been accepted as a W3C draft and is ushering in an era of innovation in multimedia communications. This paper first introduced the overall framework of WebRTC-based real-time video and audio communication, and then elaborated the key technologies of WebRTC, including the signaling mechanism, the Web app, and the low-level WebRTC layers of the browser; the latter comprise the video codec, audio codec and transmission modules. Next, implementation details of WebRTC real-time communication applications were compared for the two-user and multi-user cases. Finally, open issues worthy of further study were discussed.
Research and Progress of microRNA Prediction Methods Based on Machine Learning
WANG Ying, LI Jin, WANG Lei, XU Cheng-zhen and CAI Zhong-xi
Computer Science. 2015, 42 (2): 7-13.  doi:10.11896/j.issn.1002-137X.2015.02.002
Traditional cloning-based experimental approaches to microRNA discovery are affected by tissue and environmental conditions and are costly. The comparative (homology-based) computational method has low sensitivity for genes at large evolutionary distances and cannot predict non-homologous microRNAs. Machine learning methods remove the comparative method's dependence on homologous genes. This paper first summarized the biological knowledge about microRNAs on which machine learning methods rely. Second, it outlined the general workflow of machine-learning-based microRNA prediction and the latest software and algorithms. Third, for each essential element of that workflow (data selection, feature selection, classifier design, feature-subset selection, the class-imbalance problem, and performance evaluation), it summarized the methods and techniques involved, described the latest research progress, contrasted and analyzed the competing approaches, and weighed their respective advantages and disadvantages. Finally, a summary and outlook for machine-learning-based microRNA prediction were given.
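The class-imbalance issue the survey highlights (far more pseudo hairpins than true pre-miRNAs) is commonly mitigated by resampling before training. A minimal random-oversampling sketch in Python; the data fed in below is purely illustrative, not from any of the surveyed tools:

```python
import random

def oversample(samples, labels, seed=42):
    """Balance a dataset by randomly duplicating minority-class samples."""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    target = max(len(group) for group in by_class.values())
    out_x, out_y = [], []
    for y, group in by_class.items():
        # pad the minority class with random duplicates up to `target`
        picked = group + [rng.choice(group) for _ in range(target - len(group))]
        out_x.extend(picked)
        out_y.extend([y] * target)
    return out_x, out_y
```

After oversampling, every class contributes the same number of training examples, which keeps a classifier from simply predicting the majority class.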
Survey about Research on Information Extraction
GUO Xi-yue and HE Ting-ting
Computer Science. 2015, 42 (2): 14-17.  doi:10.11896/j.issn.1002-137X.2015.02.003
The task of information extraction (IE) is to obtain target information precisely and quickly from large-scale data. IE has become an important branch of NLP, its value is increasingly apparent, and industry and academia are placing more and more emphasis on it. This paper first reviewed the development of information extraction, then summarized recent research progress in four aspects: named-entity recognition, anaphora resolution, relation extraction and event extraction. It further analyzed the primary problems IE is facing, and finally predicted future research trends for the field.
K-threaded Low Energy-consuming Task Scheduling Optimization Algorithm Based on Multi-core Processors
WANG Ke-te, WANG Li-sheng and LIAO Xin-kao
Computer Science. 2015, 42 (2): 18-23.  doi:10.11896/j.issn.1002-137X.2015.02.004
For multi-core processor systems whose cores have independent DVFS modules, this paper proposed a K-threaded low-power scheduling algorithm for parallel tasks, built on an energy-effectiveness model (TO-EEM, Task Optimization based on Energy-Effectiveness Model). Compared with traditional energy-efficient scheduling of parallel tasks, it reduces processor power consumption not only by decreasing the instantaneous frequency of processors, but also by rationally allocating thread resources, shortening the synchronization time between threads and preserving parallel performance. For tasks with an acceptable speedup target, it improves resource utilization and reduces energy consumption, striking a compromise between power consumption and program performance. Extensive simulation experiments show that the proposed task-scheduling model is effective in reducing processor power consumption while maintaining a near-linear speedup.
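The energy/performance trade-off behind DVFS scheduling can be seen in the classic dynamic-power model. A hedged sketch, assuming the textbook simplification P = k·f³ (voltage scaled proportionally to frequency), which is a generic baseline rather than the paper's TO-EEM model:

```python
def dyn_energy(cycles, freq, k=1.0):
    """Dynamic energy for a task of `cycles` cycles at frequency `freq` (Hz).

    Textbook DVFS model: P = k * f**3, runtime t = cycles / f,
    hence E = P * t = k * f**2 * cycles.
    """
    return k * freq**2 * cycles

def runtime(cycles, freq):
    """Execution time of the same task at frequency `freq`."""
    return cycles / freq

# Halving the frequency quarters the energy but doubles the runtime,
# which is exactly the compromise a DVFS scheduler must arbitrate.
e_hi = dyn_energy(1e9, 2e9)
e_lo = dyn_energy(1e9, 1e9)
```

Under this model, slowing a core always saves energy per task; the scheduler's job is to spend that slack only where thread synchronization would otherwise leave cores idle.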
Measurement Study of 3G Mobile Networks Using Android Platform
ZHANG Cheng-wei, CHENG Wen-qing and HEI Xiao-jun
Computer Science. 2015, 42 (2): 24-28.  doi:10.11896/j.issn.1002-137X.2015.02.005
With the proliferation of mobile online social applications, access to the Internet from mobile devices keeps growing. 3G mobile networks let users access the Internet conveniently, anytime and anywhere; nevertheless, their performance may significantly impact users' quality of experience (QoE). This paper conducted an end-to-end active measurement study of the 3G mobile Internet in China. Because the Android operating system holds the largest market share, we designed and implemented a wireless network measurement tool on Android and used it to evaluate the three major 3G networks in China. Our study focused on delay measurements to popular websites accessed from smartphones. Based on the collected data, we compared the network performance of the three 3G networks; the results show that China Telecom's CDMA2000 network performs best in both delay and delay jitter. This study provides a feasible measurement approach and data support for improving end-user QoE and optimizing mobile wireless network performance.
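Delay and jitter summaries of the kind reported in such studies can be computed from raw RTT samples. A small illustrative helper; the jitter definition here (mean absolute difference between successive samples) is one common choice and is an assumption, not necessarily the paper's exact metric:

```python
def delay_stats(rtts_ms):
    """Return (mean delay, jitter) for a list of RTT samples in milliseconds.

    Jitter is taken as the mean absolute difference between successive
    samples, a common summary of delay variation.
    """
    mean = sum(rtts_ms) / len(rtts_ms)
    diffs = [abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:])]
    jitter = sum(diffs) / len(diffs) if diffs else 0.0
    return mean, jitter
```

Feeding in per-website RTT traces collected on the handset yields exactly the two quantities compared across operators above.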
Localization Algorithm of Underwater Acoustic Sensor Network Oriented to Marine Monitoring
CHEN Qiu-li, HE Ming, WANG Yan, CHEN Xi-liang and WANG Li-hui
Computer Science. 2015, 42 (2): 29-32.  doi:10.11896/j.issn.1002-137X.2015.02.006
To solve the node-localization problem of large-scale underwater acoustic sensor networks (UASNs) for marine monitoring, integer linear programming was first used to build a multi-objective constrained strategy for optimal deployment of water-surface gateways. Second, a predictive algorithm for UASN node localization based on layered positioning was put forward. Simulation proves the feasibility and effectiveness of the method. The experimental results show that the predictive localization algorithm significantly enlarges the range of localizable nodes, decreases communication energy consumption and reduces positioning error, and the results can provide technical guidance for large-scale ocean deployment of UASNs.
Design and Analysis of Virtual Network Mapping Competitive Algorithms
YU Jian-jun and WU Chun-ming
Computer Science. 2015, 42 (2): 33-38.  doi:10.11896/j.issn.1002-137X.2015.02.007
This paper reviewed the virtual network mapping problem in network virtualization and the current research progress on it. For the setting where the virtual node mapping is known and the physical network does not support path splitting, this paper proposed a competitive virtual network mapping algorithm that aims to maximize the profit of the physical network provider, together with a competitive analysis of the algorithm. The experimental results show that the proposed algorithm improves the load-balancing metric and the utilization of physical network resources, and hence can raise the acceptance ratio of virtual network construction requests and the profit of the physical network provider.
Effect of AS’s Geographic Locations on Internet’s Stability
MENG Ya-kun and SUN Jing-chang
Computer Science. 2015, 42 (2): 39-42.  doi:10.11896/j.issn.1002-137X.2015.02.008
This paper analyzed how ASes' geographic locations and transmission distances affect the Internet's stability. In contrast to traditional studies that extrapolate the future AS network from today's, it focused on how the AS network's performance changes under economic adjustment or technological innovation, so as to inform adjustment measures and guide technological trends. A two-layer model combining a geographic hypergraph with the AS network was presented to capture geographic locations, which traditional models do not reflect. The simulation results of the model show that the Internet's stability rises as transmission distances increase; however, if ASes' geographic locations are random, transmission distance has no obvious effect on stability.
Cooperative Spectrum Sensing Technology Based on BP Neural Network
CHEN Yi, ZHANG Hang and HU Hang
Computer Science. 2015, 42 (2): 43-45.  doi:10.11896/j.issn.1002-137X.2015.02.009
Cognitive radio can exploit untapped spectrum resources to increase spectrum efficiency dramatically. Spectrum sensing by a single cognitive user is simple but unreliable, whereas cooperative spectrum sensing improves sensing capability. Most existing studies of cooperative spectrum sensing assume that all cognitive users have similar signal-to-noise ratios; in reality, each user experiences a different environment, and hence a different signal-to-noise ratio and a different influence on the fusion center's decision. Another important issue is how to improve the performance of the CR system while enhancing energy efficiency. This paper proposed a cooperative spectrum sensing method based on a BP (back-propagation) neural network, which improves sensing performance by learning from historical spectrum information. Simulations show that the algorithm can reduce the number of participating cognitive users and the energy consumption while still guaranteeing cooperative sensing performance.
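A minimal back-propagation network of the kind the method relies on can be written in a few lines of NumPy. The toy inputs below merely stand in for cognitive users' sensing reports and are an assumption for illustration, not data or architecture from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)  # toy sensing reports
y = np.array([[0], [1], [1], [0]], float)              # toy fusion decisions

# one hidden layer of 8 sigmoid units, one sigmoid output
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sig(X @ W1 + b1)
    return h, sig(h @ W2 + b2)

losses = []
for _ in range(5000):
    h, out = forward(X)
    losses.append(float(np.mean((out - y) ** 2)))
    d_out = (out - y) * out * (1 - out)      # gradient at the output layer
    d_h = d_out @ W2.T * h * (1 - h)         # back-propagated to hidden layer
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)
```

Training the same structure on a history of (sensing report, ground truth) pairs is the core of the proposed fusion-side learning.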
Performance Analysis of Time-delay of Hybrid Authentication in Mobile Multihop Relay Networks
LU Wei-feng, YU Ke-yuan and CHEN Si-guang
Computer Science. 2015, 42 (2): 46-49.  doi:10.11896/j.issn.1002-137X.2015.02.010
This paper first analyzed the two existing authentication schemes used before a new relay node joins an MMR WiMAX network: centralized authentication and distributed authentication. It then proposed a new hybrid authentication scheme based on the two. Queueing theory was used to analyze the authentication delay of the centralized and hybrid schemes under different relay-node distributions. The numerical results show that the hybrid scheme has a clear advantage in authentication delay over the centralized scheme.
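As a toy instance of the queueing analysis, the M/M/1 formulas give mean delays in closed form. The paper's model of relay authentication is more elaborate; this is only the textbook single-queue baseline, included to make the delay quantities concrete:

```python
def mm1_delay(arrival_rate, service_rate):
    """Mean delays for an M/M/1 queue with Poisson arrivals (rate lambda)
    and exponential service (rate mu); requires lambda < mu for stability.

    Returns (W, Wq): mean time in system and mean waiting time in queue.
    """
    lam, mu = arrival_rate, service_rate
    assert lam < mu, "queue is unstable"
    W = 1.0 / (mu - lam)            # total sojourn time
    Wq = lam / (mu * (mu - lam))    # waiting time before service begins
    return W, Wq
```

Note the sanity identity W = Wq + 1/mu: total delay is queueing delay plus one mean service time, the same decomposition a hybrid authentication scheme tries to shrink.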
Architecture Design and Mode Research of High-dynamic Self-organizing UAV Network
CHEN Si-jing, ZHANG Ke and HE Ying
Computer Science. 2015, 42 (2): 50-54.  doi:10.11896/j.issn.1002-137X.2015.02.011
A highly dynamic, self-organized, mode-switchable network architecture for unmanned aerial vehicles (UAVs) was proposed for their special application environments and flexible networking requirements. The architecture offers a fully interconnected peer-to-peer mode and a hierarchical clustering mode to meet the demands of different tasks, and the network can switch between the two modes autonomously. Furthermore, a network management scheme was proposed for the situations the network may encounter in operation: it manages node joining and leaving, handoff and other procedures separately in the two modes, and specifically designs the mode-switching procedure. Analysis and demonstration show that, compared with previous studies, the architecture and its management scheme are systematic and complete; they can accommodate fast node changes and frequent topology changes, suit the particular application environment of UAV networks, and remain practical under highly dynamic conditions.
On-chip Network Router Arbitration Control with QoS Support
GUAN Zheng, QIAN Wen-hua and DU Chang-qing
Computer Science. 2015, 42 (2): 55-59.  doi:10.11896/j.issn.1002-137X.2015.02.012
In a network-on-chip (NoC) router, the arbiter plays an important role in the performance of packet switching between ports. The traditional round-robin (RR) algorithm is fair to all ports but can hardly guarantee QoS in terms of delay. For this reason, a priority-based parallel round-robin (PP-RR) algorithm was proposed, which provides differentiated service according to communication traffic load: high-priority ports obtain more transmission opportunities in arbitration. Furthermore, an analytic model based on two-level polling systems was built to evaluate the performance of PP-RR, and a closed-form expression for the mean waiting time was obtained.
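The idea of giving favored ports more arbitration slots can be illustrated with a generic weighted round-robin rotation. This is a simplification for illustration: the paper's PP-RR uses priority-based parallel polling, not this exact scheme:

```python
from itertools import cycle

class WeightedRR:
    """Round-robin arbiter where a port's weight buys extra grant slots.

    Each port appears `weight` times in the rotation, so high-priority
    (heavily loaded) ports win grants proportionally more often.
    """
    def __init__(self, weights):
        order = [port for port, w in weights.items() for _ in range(w)]
        self._rotation = cycle(order)

    def grant(self, requests):
        """Return the next port in rotation that is actually requesting."""
        for _ in range(10000):          # bounded scan; None if nobody requests
            port = next(self._rotation)
            if port in requests:
                return port
        return None
```

With weights {hi: 3, lo: 1} and both ports always requesting, the hi port receives exactly three grants out of every four, which is the differentiated-service effect PP-RR formalizes.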
Synthesis Evaluation Method for Node Importance in Complex Networks
QIN Li, YANG Zi-long and HUANG Shu-guang
Computer Science. 2015, 42 (2): 60-64.  doi:10.11896/j.issn.1002-137X.2015.02.013
In complex networks, ranking nodes by importance plays a key role in many fields. Because most existing single indices are one-sided and limited, and current synthesis evaluation methods are often inaccurate, we proposed a new synthesis evaluation method that combines principal component analysis with TOPSIS. We applied the method to the ARPA network and the USA airport network. The results suggest that the method is effective and accurate, and it lays a foundation for further evaluation of node importance.
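The TOPSIS half of the method ranks nodes by their closeness to an ideal point. A generic NumPy sketch, assuming all criteria are benefit-type; the centrality indices and PCA-derived weights one would feed in are assumptions, not reproduced from the paper:

```python
import numpy as np

def topsis(matrix, weights):
    """Rank alternatives (rows) over benefit criteria (columns) with TOPSIS.

    Returns the relative closeness to the ideal solution; higher is better.
    """
    M = np.asarray(matrix, float)
    M = M / np.linalg.norm(M, axis=0)            # vector-normalize each criterion
    V = M * np.asarray(weights, float)           # apply criterion weights
    best, worst = V.max(axis=0), V.min(axis=0)   # ideal / anti-ideal points
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)          # closeness in [0, 1]
```

Sorting nodes by the returned score gives the synthesized importance ranking; an alternative that dominates on every criterion scores exactly 1.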
Improved Energy Efficient Data Gathering Protocol in Wireless Sensor Network
MA Chen-ming, WANG Wan-liang and HONG Zhen
Computer Science. 2015, 42 (2): 65-69.  doi:10.11896/j.issn.1002-137X.2015.02.014
A virtual backbone based on a connected dominating set is a key technique for reducing the number of dominating nodes and constraining the routing search space, and it plays an important role in extending the lifetime of wireless sensor networks. The ViTAMin protocol saves energy by turning off unnecessary nodes when building the virtual backbone and by sending collected data along minimum-energy paths. However, ViTAMin may generate a disconnected network, and its dominating nodes consume energy unevenly. To address these problems, an energy-efficient virtual-backbone data gathering protocol (EEVB) was proposed. Theoretical analysis shows that EEVB constructs a connected dominating set with O(n) time and message complexity, and simulation experiments further confirm that EEVB builds a smaller connected dominating set with low energy overhead and effectively extends the network lifetime.
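The backbone idea rests on dominating sets. A plain greedy dominating-set sketch for intuition; it deliberately ignores the connectivity and energy-balancing constraints that EEVB actually enforces, and the graph below is made up:

```python
def greedy_dominating_set(adj):
    """Greedy dominating set: repeatedly pick the node that covers the most
    still-uncovered nodes (itself plus its neighbours).

    adj: dict mapping each node to the set of its neighbours.
    """
    uncovered = set(adj)
    dominators = set()
    while uncovered:
        # choose the node with the largest marginal coverage
        node = max(adj, key=lambda v: len(({v} | adj[v]) & uncovered))
        dominators.add(node)
        uncovered -= {node} | adj[node]
    return dominators
```

Every node is then either a dominator or adjacent to one, so routing only needs to search the (much smaller) backbone.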
Bounding End to End Delay for Self-similar Traffic in LTE-A Femtocell Networks
SI Yuan, CHEN Xin and LIU Zong-qi
Computer Science. 2015, 42 (2): 70-75.  doi:10.11896/j.issn.1002-137X.2015.02.015
LTE-A is a fourth-generation radio communication standard that provides high data rates to mobile users, and it adopts femtocell technology to enhance the quality of service (QoS) experienced by indoor users. To derive the end-to-end delay bound of LTE-A femtocell networks, a stochastic network calculus method was proposed. First, stochastic traffic and service curves are constructed to model, respectively, the self-similar mobile traffic and the MIMO channels of LTE-A femtocell networks. Then, effective-bandwidth and Chernoff-bound approaches are employed to obtain the theoretical end-to-end delay bound. NS-3 simulations show that the deviation between the theoretical and simulated upper bounds on delay is within 2 milliseconds. This paper provides fresh insight into QoS provisioning in LTE-A femtocell networks.
Distributed Network Mobility Management over Proxy Mobile IPv6 Network
CHEN Yuan, ZHANG Qi-zhi, RAO Liang and ZHAO Gan-sen
Computer Science. 2015, 42 (2): 76-80.  doi:10.11896/j.issn.1002-137X.2015.02.016
Applying network mobility (NEMO) over proxy mobile IPv6 (PMIPv6) provides seamless handover between a NEMO network and a PMIPv6 network. However, all packets addressed to mobile nodes are delivered via the local mobility anchor (LMA), leading to a large packet-delivery overhead. A scheme for distributed NEMO management over a PMIPv6 network was proposed to improve NEMO quality of service in vehicles such as buses: the data plane of the LMA is distributed, while the control plane is managed by a central mobility database. Numerical analysis shows that, in short bus-riding scenarios, the proposed scheme achieves better network performance at lower network cost.
Quantification and Conformance of Web Service Degraded Substitution
WU Xin-xing, HU Guo-sheng and CHEN Yi-xiang
Computer Science. 2015, 42 (2): 81-85.  doi:10.11896/j.issn.1002-137X.2015.02.017
Software trustworthiness faces greater challenges in open network environments, and degraded substitution is one approach to improving it. This paper studied Web service degraded substitution based on process algebra. To ensure the validity of a degraded substitution, a conformance condition was given over a process algebra extended with time-out and time-delay operators. Further, a metric of Web service degraded substitution was studied.
Research of Efficient Recognition Technique for Preferred Access Pattern of Flash Storage System
LEI Juan, ZHU Zhu, FU Yun-qing and SHI Liang
Computer Science. 2015, 42 (2): 86-89.  doi:10.11896/j.issn.1002-137X.2015.02.018
Flash memory is among the most widely used storage devices. It is well known that flash is highly sensitive to its access pattern, for example random versus sequential access, hot versus cold access, and focused versus partitioned sequential writes; many components of a flash device are likewise sensitive to access patterns. The technology for recognizing preferred access patterns therefore has great influence on the performance and design of flash storage systems. This paper first defined the preferred access patterns of a flash storage system, and then presented a recognition technique for them in detail. The experimental results show that the proposed recognition technique achieves very high accuracy.
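A toy version of access-pattern recognition can label a logical-address stream by how often successive requests are contiguous. The 0.8 threshold and `max_gap` below are illustrative assumptions, not parameters from the paper:

```python
def classify_pattern(addresses, max_gap=1):
    """Label a flash request stream 'sequential' or 'random' by the share of
    forward steps of at most `max_gap` between consecutive logical addresses.
    """
    if len(addresses) < 2:
        return "sequential"
    hits = sum(0 < b - a <= max_gap for a, b in zip(addresses, addresses[1:]))
    ratio = hits / (len(addresses) - 1)
    return "sequential" if ratio >= 0.8 else "random"
```

A flash translation layer that recognizes the stream as sequential can, for instance, switch to log-style allocation and avoid costly merges.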
Detection of Malicious PDF Based on Structural Path
CHEN Liang, CHEN Xing-yuan, SUN Yi and DU Xue-hui
Computer Science. 2015, 42 (2): 90-94.  doi:10.11896/j.issn.1002-137X.2015.02.019
Malicious PDF documents remain a network security threat and have caused a number of significant security incidents. Existing methods mainly rely on extracting malicious code or on emulated execution, and their detection efficiency is not high. After analyzing the structure of PDF documents, this paper defined the notion of a structural path and proposed a detection method based on the differences in structural-path features between malicious and benign documents. Results on a large experimental dataset show that the method performs well in both detection accuracy and detection speed.
Lightweight RFID Authentication Protocol Based on Digital Signature
LIU Ya-li, QIN Xiao-lin, ZHAO Xiang-jun, HAO Guo-sheng and DONG Yong-quan
Computer Science. 2015, 42 (2): 95-99.  doi:10.11896/j.issn.1002-137X.2015.02.020
Security and privacy issues in RFID systems have become a handicap to the development of RFID technology and a bottleneck to its further pervasive use. A lightweight RFID authentication protocol based on digital signatures was proposed, which combines digital signature and RFID authentication techniques to implement a lightweight authentication mechanism for RFID systems. Performance evaluation shows that the protocol not only provides the main security and privacy properties but also resists a variety of typical malicious attacks and threats. Its security rests on the hardness of the discrete logarithm problem in finite fields and on the security of the pseudo-random number generator. The protocol places the expensive public-key operations on the server side, keeping the tag side lightweight and promoting the further application of public-key cryptography in RFID systems.
Design and Proof of Bilateral Authentication Protocol for Wireless Sensor Network
GUO Ping, FU De-sheng, CHENG Ya-ping and ZHAN Xiang
Computer Science. 2015, 42 (2): 100-102.  doi:10.11896/j.issn.1002-137X.2015.02.021
A bilateral authentication protocol between users and sensor nodes was proposed for wireless sensor networks (WSNs). Analysis shows that the protocol avoids the private-key-escrow drawback of identity-based systems while retaining the advantage of avoiding the complication of generating and verifying public keys in traditional certificate-based systems. Moreover, the protocol needs no trusted third party (TTP), is efficient, and requires little communication. Finally, the integrity, correctness and security of the protocol were proved with the BAN logic formal method.
Edge Enhancement on Size Invariant Visual Cryptography
HU Hao, YU Bin and SHEN Gang
Computer Science. 2015, 42 (2): 103-107.  doi:10.11896/j.issn.1002-137X.2015.02.022
To address edge distortion in size-invariant visual cryptography, and based on an analysis of image edge characteristics, an LP operator was proposed to build an edge-enhancement algorithm with a variable edge-expansion ratio, and a method for designing edge-enhanced size-invariant visual cryptography schemes was given. The experimental results show that the quality of the recovered edges is improved effectively, and the whole recovered image also gains significantly better visual quality.
Mobile Location Privacy Protection Based on Untrusted Environment
LIU Xue-jun, CHEN Yu-feng and LI Bin
Computer Science. 2015, 42 (2): 108-113.  doi:10.11896/j.issn.1002-137X.2015.02.023
With the development of mobile computing and positioning technologies, location privacy protection has received extensive attention from academia in recent years, and many anonymity algorithms have been put forward to protect users' private information. However, existing methods are either unsuitable for mobile environments or do not consider untrusted environments. To address these problems, this paper put forward Dynamic_p, a dynamic-programming anonymity algorithm based on game theory, building on Privacy_1 [8], which solves privacy protection in untrusted environments. The algorithm first turns the anonymous group into an anonymous tree; then each child node cooperates with its parent to compute an anchor through a game, from the bottom of the tree to the top; finally it computes all anchors of the anonymous group by recursion over the layers. Users then issue location-based neighbor queries using the group's anchors instead of their actual locations, select candidate locations by probability statistics, and compute the final result locally. Simulation results show that the method performs well and can be applied to mobile untrusted environments.
Approach Based on Recurrence Plot to Detect Covert Timing Channels
LIU Biao, LAN Shao-hua, ZHANG Jing and LIU Guang-jie
Computer Science. 2015, 42 (2): 114-117.  doi:10.11896/j.issn.1002-137X.2015.02.024
Detecting covert timing channels is a focus of covert-channel research, and it is very difficult. The entropy-based approach is the most effective detection method and can detect almost all covert timing channels; however, the Liquid channel, proposed soon afterwards, can effectively evade entropy-based detection by smoothing the entropy. This paper introduced a detection approach based on recurrence plots that can detect various covert timing channels, including Liquid.
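A recurrence plot of an inter-packet-delay series is straightforward to compute. A minimal NumPy sketch; the threshold `eps` and any detection features later derived from the plot are left as assumptions:

```python
import numpy as np

def recurrence_plot(series, eps):
    """Binary recurrence matrix R[i, j] = 1 iff |x_i - x_j| <= eps.

    Covert timing channels impose structure on inter-packet delays, and that
    structure surfaces as visible patterns (diagonals, blocks) in R.
    """
    x = np.asarray(series, float)
    return (np.abs(x[:, None] - x[None, :]) <= eps).astype(int)
```

Legitimate traffic tends to produce diffuse recurrence plots, while an encoded channel produces regular texture, which is what a plot-based detector quantifies.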
Research on Focused Crawling Technology Based on SVM
LI Lu, ZHANG Guo-yin and LI Zheng-wen
Computer Science. 2015, 42 (2): 118-122.  doi:10.11896/j.issn.1002-137X.2015.02.025
With the rapid development of the Internet, network information has become massive and diverse, and providing the information users need quickly and exactly is the primary task of a search engine. Traditional general-purpose search engines serve broad areas well but cannot provide professional, in-depth information in specialized domains. This paper proposed a focused crawler based on the SVM classification algorithm as a solution to information retrieval in specialized domains; it combines a topic-relevance prediction algorithm based on page content and partial link information, the SVM classifier and the HITS algorithm. Experiments show that the SVM-based crawling strategy distinguishes topic-related pages from unrelated ones better, improves the harvest rate and recall rate, and thereby improves the retrieval efficiency of the search engine.
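Content-based topic relevance, one input to such a crawler, can be approximated by cosine similarity over term frequencies. A stdlib-only sketch; the paper itself trains an SVM on richer features, which this deliberately simplifies, and the example strings are made up:

```python
import math
from collections import Counter

def relevance(topic_terms, page_text):
    """Cosine similarity between a topic term vector and a page's term
    frequencies; a simple stand-in for a content-based relevance predictor.
    """
    t = Counter(topic_terms)
    p = Counter(page_text.lower().split())
    dot = sum(t[w] * p[w] for w in t)
    norm = (math.sqrt(sum(v * v for v in t.values()))
            * math.sqrt(sum(v * v for v in p.values())))
    return dot / norm if norm else 0.0
```

A focused crawler would enqueue a page's out-links only when this score (or the SVM's decision) says the page is on topic.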
ABS-OSBE Protocol Supporting Sensitive Attributes Protection
ZHANG Bin, LI Dong-hui, XIONG Hou-ren and FEI Xiao-fei
Computer Science. 2015, 42 (2): 123-126.  doi:10.11896/j.issn.1002-137X.2015.02.026
To protect sensitive attributes in attribute-based access control, an ABS-based OSBE protocol was put forward. The ABS-OSBE protocol integrates the ABS algorithm into OSBE and provides a computational method for both negotiating sides to exchange information, ensuring that only users satisfying the access-tree structure can obtain sensitive attributes. By extending the description of attributes, the access-tree structure can exactly express a "NOT" threshold, and a matching method for the "NOT" threshold was proposed. Finally, the security of the ABS-OSBE protocol was demonstrated.
Research on Hardware Optimization Implementation of TWINE
LI Lang, ZOU Yi, HE Wei-wei, LI Ren-fa and LIU Bo-tao
Computer Science. 2015, 42 (2): 127-130.  doi:10.11896/j.issn.1002-137X.2015.02.027
With the development of the Internet of Things, efficient implementation of lightweight cryptographic algorithms has become a hot topic in recent years. TWINE, proposed in 2011, is a lightweight block cipher suitable for securing IoT environments. In hardware, a single round module is implemented once and invoked repeatedly; TWINE has 36 rounds, of which the first 35 share the same structure and can reuse one module, while the 36th round differs (it applies less confusion), so the original design can reuse the module for at most 35 rounds. In this work the round module is invoked for all 36 rounds, and a relatively simple inverse-confusion block is appended to compensate for the 36th round, so the module is highly multiplexed. FPGA experimental results show that the design saves 2204 slices in area and runs 5 times faster than the original TWINE implementation, providing a reference for encryption researchers.
On Dendritic Cell Algorithm and its Theoretical Investigation
FANG Xian-jin, WANG Li, KANG Jia and LIU Jia
Computer Science. 2015, 42 (2): 131-133.  doi:10.11896/j.issn.1002-137X.2015.02.028
The dendritic cell algorithm (DCA) is inspired by the function of dendritic cells (DCs) in the innate immune system and has been successfully applied to numerous security-related problems. However, theoretical analysis of the DCA has barely been performed, and most theoretical aspects of the algorithm have not yet been revealed, whereas other immune-inspired algorithms, such as negative selection and clonal selection, have been analyzed theoretically in many papers. It is therefore important to conduct a similar theoretical analysis of the DCA and determine its runtime complexity and other algorithmic properties, in line with other artificial immune algorithms. The analysis was carried out by introducing three runtime variables corresponding to the three phases of the algorithm. The standard DCA has a lower bound of Ω(n) and a worst-case upper bound of O(n²) on runtime; moreover, the runtime can be improved to O(max(nN, nδ)) by using a segmentation approach for the online analysis component.
Intrusion Detection Method Based on Multi-label and Semi-supervised Learning
QIAN Yan-yan, LI Yong-zhong and YU Xi-ya
Computer Science. 2015, 42 (2): 134-136.  doi:10.11896/j.issn.1002-137X.2015.02.029
A central concern of machine learning is how a system automatically improves its classification performance with experience, which matches the needs of intrusion detection systems (IDS); applying machine learning theories and methods to IDS has therefore become an effective approach. In this paper, ML-KNN, a multi-label lazy learning approach, was applied to intrusion detection, and the KDD CUP 99 dataset was used to evaluate it. The simulation results show that the method achieves a higher detection rate and a lower false-positive rate than comparable algorithms.
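Plain k-nearest-neighbour voting underlies ML-KNN, which extends it with per-label prior and posterior estimates for the multi-label case. A minimal single-label sketch; the feature vectors and labels below are made up and stand in for KDD-style connection records:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training samples.

    train: list of (feature_tuple, label) pairs; distance is squared Euclidean.
    """
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda sample: dist(sample[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```

Being a lazy learner, it needs no training phase: new labeled traffic simply extends `train`, which is the "improves with experience" property the abstract appeals to.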
Generation and Extraction of Secret Keys Based on Properties of Wireless Channels
SUI Lei, GUO Yuan-bo, JIANG Wen-bo and YANG Kui-wu
Computer Science. 2015, 42 (2): 137-141.  doi:10.11896/j.issn.1002-137X.2015.02.030
Abstract PDF(497KB) ( 1355 )   
References | Related Articles | Metrics
Generating secret keys from the properties of wireless channels is a secure "one-time pad"-style solution at the physical layer with unconditional security, in which secret keys are extracted collaboratively. Exploiting the noise of fading wireless channels and the estimation of highly correlated channel parameters, the time-varying, reciprocal and unique nature of the channel shared by the two communicating parties is used. This work has become a hotspot in wireless network security, because it avoids the security vulnerabilities in the 4-way handshake of prevailing wireless networks and frees itself from the limits of pre-distributing secret keys in practical applications. The theoretical foundation of this field was analyzed. Related work on the two key issues, generation and extraction of secret keys based on properties of wireless channels, was summarized. The problems of existing schemes were discussed against performance criteria. Finally, directions for future work based on the existing problems were presented.
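A common extraction step in this literature is threshold quantization of reciprocal channel measurements. The sketch below is an illustrative simplification (the thresholds, guard band and parameter names are assumptions, not any specific scheme from the survey): each party quantizes its own measurement sequence, and reciprocity makes the resulting bit strings mostly agree, with remaining mismatches removed by information reconciliation.

```python
import statistics

def quantize_channel(samples, alpha=0.5):
    """Turn reciprocal channel measurements (e.g. RSSI) into key bits.
    Samples above mean + alpha*std map to 1, below mean - alpha*std to 0;
    samples inside the guard band are dropped to reduce disagreement."""
    mu = statistics.mean(samples)
    sigma = statistics.pstdev(samples)
    hi, lo = mu + alpha * sigma, mu - alpha * sigma
    bits = []
    for s in samples:
        if s > hi:
            bits.append(1)
        elif s < lo:
            bits.append(0)
    return bits
```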
Attribute-based Signatures with Auditability in the Standard Model
REN Yan
Computer Science. 2015, 42 (2): 142-146.  doi:10.11896/j.issn.1002-137X.2015.02.031
Abstract PDF(382KB) ( 379 )   
References | Related Articles | Metrics
In an ABS scheme, the attribute authority (A-authority) generates the private keys for all users, so it has to be completely trusted. The A-authority is free to engage in malicious activities without any risk of being confronted in a court of law. Motivated by this, we first proposed the notion of an auditable attribute-based signature scheme. It is not only a variant of ABS, but also a new approach to mitigating the key escrow problem. We then constructed an auditable attribute-based signature scheme and proved its security in the standard model.
Intrusion Detection Algorithm Based on Cluster and Cloud Model
LI Yong-zhong and ZHANG Jie
Computer Science. 2015, 42 (2): 147-149.  doi:10.11896/j.issn.1002-137X.2015.02.032
Abstract PDF(283KB) ( 372 )   
References | Related Articles | Metrics
A new intrusion detection algorithm based on clustering and the cloud model was proposed to solve the problems of low detection rate and high false alarm rate in network intrusion detection. Because the attributes contribute differently to classification, they were weighted based on the concept of "cloud approach degree". The cloud model was built based on the improved clustering presented in this paper. Dynamically weighting the attributes and updating the cloud model gradually strengthens the classifier to guide data classification. The KDD CUP99 data set was used to evaluate the proposed algorithm. Experimental results show that the method is feasible and effective.
UML-based C4ISR Capability Requirement Modeling Language
WANG Cong, WANG Zhi-xue and XU You-yun
Computer Science. 2015, 42 (2): 150-156.  doi:10.11896/j.issn.1002-137X.2015.02.033
Abstract PDF(616KB) ( 511 )   
References | Related Articles | Metrics
This paper discussed the domain conceptualization and suggested a meta-ontology for C4ISR capability conceptualization, under which a domain-specific modeling language can be defined to describe C4ISR capability concepts. The method builds on multi-level, domain-reusable object modeling technology and takes advantage of the UML extension mechanism. The abstract syntax, concrete syntax and formal semantics of the modeling language were discussed. Finally, a case study of C4ISR architectural simulation modeling was provided to demonstrate the availability and applicability of the method.
VEMBP:A Novel Labeling Method for Updating on XML Data
QIN Zun-yue, CAI Guo-ming, ZHANG Bin-lian and TANG Yong
Computer Science. 2015, 42 (2): 157-160.  doi:10.11896/j.issn.1002-137X.2015.02.034
Abstract PDF(413KB) ( 350 )   
References | Related Articles | Metrics
In order to improve the efficiency of XML management systems, several labeling schemes for ordered XML trees have been put forward, which allow XML data to be processed without accessing the original XML tree. Labeling schemes designed for querying offer high query performance but poor update performance, while labeling schemes designed for update performance sacrifice query efficiency and require larger labeling space. To achieve higher update efficiency and smaller labeling space without reducing query efficiency, a novel labeling scheme called VEMBP (Vector Encoding Method Based on Primes) was proposed, in which a vector indicates the order relation and a prime indicates the structural relation between nodes in the XML tree. An update algorithm was then designed that neither sacrifices query efficiency nor requires any re-encoding, while keeping the labeling space under control. The experimental results show that VEMBP performs better on updates without sacrificing query performance.
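The order-encoding half of this idea can be sketched with mediant vectors: a sibling's order label is a pair (a, b) ordered by the fraction a/b, and a label strictly between two neighbors is their component-wise sum, so insertions never force relabeling of existing nodes. This is only the general vector-encoding idea, not the paper's exact VEMBP scheme (which additionally uses primes for the structural relation).

```python
from fractions import Fraction

def between(u, v):
    """Vector label strictly between u and v: the mediant (a+c, b+d).
    Labels (a, b) are ordered by the value a/b, so inserting between
    two adjacent siblings never relabels existing nodes."""
    return (u[0] + v[0], u[1] + v[1])

def order_key(u):
    """Total order on vector labels."""
    return Fraction(u[0], u[1])
```

For example, inserting between labels (1, 1) and (2, 1) yields (3, 2), whose value 3/2 lies strictly between 1 and 2.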
Research on Developer Preferential Collaboration in Open-source Software Community
HE Peng, LI Bing, YANG Xi-hui and XIONG Wei
Computer Science. 2015, 42 (2): 161-166.  doi:10.11896/j.issn.1002-137X.2015.02.035
Abstract PDF(494KB) ( 508 )   
References | Related Articles | Metrics
This paper focused on the analysis of developer behavior in open-source communities. First, we analyzed the growth of the numbers of projects and developers in the SourceForge.net community to confirm its rapid development. Then we investigated the quantities of new developers and collaborations over two-month intervals, and divided the new collaborations into four categories to explore their differences and determine the order of cooperation among developers. Finally, for collaborations between new and old members, we analyzed the relationship between preferential behavior and centrality measures such as degree centrality, betweenness centrality and closeness centrality, as well as the number of projects developed and developers' roles. The results show that a new developer prefers to collaborate with those who have high betweenness centrality or degree centrality, because they develop more projects and play important roles. Our work can optimize the process of collaborative development and lay a solid foundation for improving the productivity and quality of software.
Model Checking of Software Product Line Based on Bilattices
SHI Yu-feng, WEI Ou and ZHOU Yu
Computer Science. 2015, 42 (2): 167-172.  doi:10.11896/j.issn.1002-137X.2015.02.036
Abstract PDF(522KB) ( 371 )   
References | Related Articles | Metrics
Software product line (SPL) engineering maximizes the commonality between similar software products to reduce production costs and improve productivity. Recently, feature-based state transition systems have been widely used in behavioral modeling and verification of SPLs. However, they cannot handle uncertain and inconsistent information well. Therefore, a formalism, bilattice-based featured transition systems (BFTS), was first proposed for model checking of SPLs with uncertain and inconsistent information, where products are defined via projection. Furthermore, action computation tree logic (ACTL) was used to describe temporal properties, and its semantics on BFTS was defined for model checking. Finally, based on the multi-valued model checker χChek, a case study was conducted to illustrate the effectiveness of our approach.
Method of Software Design Patterns Identification Based on Correlation and Feature Constraints
GU Hui, ZHANG Wei-xing, JIN Peng and GU Jie-jie
Computer Science. 2015, 42 (2): 173-176.  doi:10.11896/j.issn.1002-137X.2015.02.037
Abstract PDF(424KB) ( 377 )   
References | Related Articles | Metrics
In program comprehension and software reverse engineering research, it is crucial to find a method that can describe software design patterns and the source code to be identified accurately and quickly, in order to construct a reasonable design pattern identification framework and efficient recognition algorithms. Using the principles of adjacency lists and connected components of undirected graphs, we presented the concept of correlation between classes in source code and constructed association class collections from the source code to be identified, so as to reduce the search space of the design pattern recognition algorithm. According to the features of design patterns, identification algorithms based on correlation and feature constraints were proposed. The algorithms were applied to three open-source applications: JUnit, JHotDraw and JRefactory. The results demonstrate that the algorithms can accurately and efficiently recognize design patterns in source code.
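The search-space reduction step can be sketched as standard connected-component grouping over an adjacency list of class relations: each component is one candidate "association class collection" within which pattern matching runs. This is a generic illustration under assumed input format (pairs of related class names), not the paper's implementation.

```python
from collections import defaultdict

def association_collections(relations):
    """Group classes into connected components of the undirected
    relation graph; each component is one candidate search space
    for design-pattern matching."""
    adj = defaultdict(set)
    for a, b in relations:
        adj[a].add(b)
        adj[b].add(a)
    seen, components = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:            # iterative depth-first search
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node] - comp)
        seen |= comp
        components.append(comp)
    return components
```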
Partial Diagnosability Analysis of Discrete-event Systems
LU Wei, ZHANG Long-mei and ZHU Yi-an
Computer Science. 2015, 42 (2): 177-181.  doi:10.11896/j.issn.1002-137X.2015.02.038
Abstract PDF(1202KB) ( 476 )   
References | Related Articles | Metrics
A quantitative evaluation and analysis method was proposed for partially diagnosable discrete event systems. The proposed method is based on a fault model depicted as a tree structure. Two indicators, diagnosable degree and diagnosable depth, were introduced, which evaluate the diagnosability of systems in terms of coverage and precision respectively. The advantage of the method is that the quantitative evaluation results can be used to analyze and compare different systems that are all partially diagnosable. Furthermore, the impact of different fault model structures on diagnosable degree and diagnosable depth was discussed, and some general principles for constructing fault models were given. The analysis and discussion of an example show that the two indicators accurately reflect the diagnosable status of a partially diagnosable system. The proposed method is useful for designing and analyzing complex systems based on discrete event models, and can be helpful for designing and analyzing intelligent, self-adaptive and self-healing systems.
Sampling Model for Quality Inspection of Uncertain Ocean Data
WANG Zhen-hua, ZHOU Xue-nan and HUANG Dong-mei
Computer Science. 2015, 42 (2): 182-184.  doi:10.11896/j.issn.1002-137X.2015.02.039
Abstract PDF(317KB) ( 433 )   
References | Related Articles | Metrics
Ocean data are massive, multi-source, multi-type and multi-dimensional, so the quality characteristics of ocean data are mostly uncertain. Thus, the conventional theory of sampling inspection cannot satisfy the requirements of quality inspection for ocean data. In this paper, a fuzzy sampling model was proposed based on trapezoidal fuzzy numbers. The fuzzy sampling model is well suited to quality inspection of ocean data with uncertain quality characteristics, and improves on the conventional theory of sampling inspection.
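The building block of such a model is the membership function of a trapezoidal fuzzy number, which is standard and can be sketched directly (the parameter names are the usual (a, b, c, d) convention, not values from the paper):

```python
def trapezoid_membership(x, a, b, c, d):
    """Membership degree of x in the trapezoidal fuzzy number (a,b,c,d):
    0 outside (a, d), 1 on [b, c], linear on the two shoulders."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)   # rising shoulder
    return (d - x) / (d - c)       # falling shoulder
```

A measured quality characteristic is then judged by its membership degree rather than by a crisp pass/fail bound.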
Structural Difference Recognition and Dispelling in Schema Matching
DU Xiao-kun, LI Guo-hui and LI Yan-hong
Computer Science. 2015, 42 (2): 185-190.  doi:10.11896/j.issn.1002-137X.2015.02.040
Abstract PDF(531KB) ( 389 )   
References | Related Articles | Metrics
Schema matching is a fundamental problem in hot research fields such as dataspaces and the semantic Web. Existing methods extract each element's own information, structural information and data information, and then choose the pair of elements with the most similar semantics as matching elements. However, differences between elements in their own information and structural information hinder the extraction of semantics. By analyzing the causes of structural information differences, this paper summarized several kinds of structural difference and proposed corresponding detection and dispelling algorithms. Extensive simulation experiments were conducted, and the results show that the accuracy of matching is increased by dispelling structural information differences.
Research on Improved Energy-saving Technology for Data Access in High Performance Computing
DENG Ding-sheng
Computer Science. 2015, 42 (2): 191-197.  doi:10.11896/j.issn.1002-137X.2015.02.041
Abstract PDF(948KB) ( 457 )   
References | Related Articles | Metrics
Existing disk power management methods cannot handle the short idle periods of high-performance parallel applications, require extensive code modifications and waste energy. To address these problems, this paper proposed a compiler-directed data access scheduling technology for saving disk energy. The proposed technology has two phases: in the first phase, the compiler analyzes parallel application programs, extracts disk access patterns and generates scheduling tables; in the second phase, a "data access scheduler" performs the actual data accesses according to the scheduling tables. Compared with prior software-based efforts, our framework requires no code or data restructuring. Our experimental evaluation reveals that the proposed framework effectively increases the energy savings brought by the disk power-down mechanism for data-intensive workloads, raising the rate of energy savings from 5.5% to 11.8% and making disk spin-down a viable strategy in data-intensive high-performance computing. In addition, it increases the energy benefits of multi-speed disks from 12.7% to 27.6%.
User Classification Method in Online Social Network Using Random Walks
HE Chao-bo, YANG Zhen-xiong, HONG Shao-wen, TANG Yong, CHEN Guo-hua and ZHENG Kai
Computer Science. 2015, 42 (2): 198-203.  doi:10.11896/j.issn.1002-137X.2015.02.042
Abstract PDF(557KB) ( 437 )   
References | Related Articles | Metrics
Existing methods for user classification in online social networks (OSNs) do not effectively exploit both the attribute and linkage information of users to improve classification performance. We therefore designed a new multi-label classification method using random walks (MLCMRW) for user classification in OSNs. MLCMRW can utilize both user attribute and linkage information to improve classification performance. In particular, MLCMRW has two key parts: learning the initial label distribution, and iterative inference of the steady label distribution of every user. Experiments on real-world OSN datasets show that MLCMRW performs better than other representative methods and is well suited to classifying users in real-world OSNs.
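The flavor of the iterative inference step can be conveyed with a generic random-walk label propagation sketch: each user's label distribution is repeatedly mixed with the average distribution of its linked neighbors, starting from attribute-based initial distributions. This is a heavily simplified stand-in for illustration; the update rule, mixing weight `alpha` and all names are assumptions, not MLCMRW's actual formulas.

```python
def propagate_labels(adj, init, alpha=0.5, iters=50):
    """Random-walk style label propagation: the new distribution mixes
    the user's initial (attribute-based) distribution with the average
    distribution of linked users.
    adj: {user: [neighbors]}, init: {user: [label probabilities]}."""
    dist = {u: d[:] for u, d in init.items()}
    for _ in range(iters):
        new = {}
        for u, neigh in adj.items():
            if neigh:
                avg = [sum(dist[v][k] for v in neigh) / len(neigh)
                       for k in range(len(dist[u]))]
            else:
                avg = dist[u]
            new[u] = [alpha * init[u][k] + (1 - alpha) * avg[k]
                      for k in range(len(dist[u]))]
        dist = new
    return dist
```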
Intelligent Fusion of Information Law and its Inner Separating
TANG Ji-hua, ZHANG Ling and SHI Kai-quan
Computer Science. 2015, 42 (2): 204-209.  doi:10.11896/j.issn.1002-137X.2015.02.043
Abstract PDF(579KB) ( 406 )   
References | Related Articles | Metrics
The function packet set is a dynamic model of information laws, obtained by introducing functions into packet sets and improving them. A function packet set is a function set pair composed of a function internal packet set and a function outer packet set, i.e., (S, SF) is a function packet set. Packet reasoning is a dynamic reasoning generated from packet sets, composed of internal and outer packet reasoning. We improved packet reasoning by introducing functions into it, and put forward packet information law reasoning. Through the cross-penetration of function internal packet sets with internal packet information law reasoning, the intelligent fusion of internal packet information laws and its inner separation were studied. The generation of internal packet information law reasoning, its attribute conjunctive extension theorems, and its inner separation and reduction were proposed. Finally, an application of the inner separation of intelligently fused internal packet information laws to the discovery of unknown information laws was shown.
Self-adaptive Improved Artificial Fish Swarm Algorithm with Changing Step
ZHU Xu-hui, NI Zhi-wei and CHENG Mei-ying
Computer Science. 2015, 42 (2): 210-216.  doi:10.11896/j.issn.1002-137X.2015.02.044
Abstract PDF(587KB) ( 432 )   
References | Related Articles | Metrics
The artificial fish swarm algorithm has some defects in function optimization problems, such as falling into local optima, converging slowly in later stages and producing inaccurate solutions. To overcome these shortcomings, a new self-adaptive artificial fish swarm algorithm with changing step was proposed by improving the foraging behavior and adaptively adjusting the step of the artificial fish swarm algorithm. In addition, the theoretical basis of the algorithm was strengthened by proving its global convergence. Finally, experimental results on 10 typical functions show that the proposed algorithm is superior to the original artificial fish swarm algorithm and the artificial glowworm swarm optimization algorithm in escaping local optima, convergence efficiency, computational precision and stability. Furthermore, the method is superior to the methods of references [23], [24] and [9] in computational precision and stability.
Key Retrieval Technologies in Large-scale Chinese Corpus
YU Yi-jiao and LIU Qin
Computer Science. 2015, 42 (2): 217-223.  doi:10.11896/j.issn.1002-137X.2015.02.045
Abstract PDF(672KB) ( 839 )   
References | Related Articles | Metrics
The query requirements of large-scale Chinese corpora differ from those of general text retrieval systems. Cici v2.0 is a Chinese corpus search system that provides linguistic query services: part-of-speech search, reduplicated word search, wildcard search, and Chinese N-gram string occurrence search. The N-gram string occurrences are counted and indexed by Unicode and by frequency respectively. The search procedure has three steps. First, the Chinese N-gram occurrence statistics are searched and candidate N-gram strings are produced. Then, keywords are searched according to the user's linguistic need. Finally, the Chinese strings selected by the user are searched in the corpora and the final results are returned.
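The occurrence statistics underlying N-gram string search can be sketched with a standard character N-gram counter (a generic illustration, not Cici's index structure):

```python
from collections import Counter

def ngram_counts(text, n):
    """Count all character N-grams of length n in a text -- the raw
    occurrence statistics behind N-gram string occurrence search."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))
```

Such counts, indexed both by the string itself and by frequency, allow candidate N-gram strings to be produced before the corpus is touched.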
AP Twice Clustering Based Neural Network Ensemble Algorithm
LI Hui and DING Shi-fei
Computer Science. 2015, 42 (2): 224-227.  doi:10.11896/j.issn.1002-137X.2015.02.046
Abstract PDF(417KB) ( 403 )   
References | Related Articles | Metrics
In order to improve the precision and diversity of individual networks, and thereby the generalization performance of neural network ensembles (NNE), this paper proposed a method for generating individual neural networks for ensembling based on twice clustering. First, all samples are clustered to form first-level subclasses, and then a second clustering is performed within each subclass to form sample subsets of each subclass. Affinity propagation (AP) clustering maximizes the diversity criterion of "similarity within classes, diversity between classes", and the samples in each class reflect the real data distribution. Finally, by permutation and combination, one subset is selected from each second-level clustering of each subclass to construct a training set for an individual neural network. Training sets that are smaller yet reflect the real data distribution thus yield individual neural networks with greater diversity, and the ensemble of these individual networks achieves better performance. Simulation experiments show that the proposed method is effective.
Solving Minimal Cost Strong Planning Solution by Hierarchical Algorithm
WU Xiao-hui, WEN Zhong-hua, LI Yang and LAO Jia-qi
Computer Science. 2015, 42 (2): 228-232.  doi:10.11896/j.issn.1002-137X.2015.02.047
Abstract PDF(428KB) ( 343 )   
References | Related Articles | Metrics
In nondeterministic planning, previous studies of strong planning solutions have focused on the solution itself, with little regard for the cost the nondeterministic transition system requires to perform an action, and existing algorithms are not highly efficient. To address this problem, we introduced a strong planning layering method from model checking and designed an algorithm to quickly solve the minimal-cost strong planning solution. The algorithm first uses the layering method to obtain the layered states of the nondeterministic problem, and then searches backward for the minimal-cost strong planning solution using the layering information. During the search, the upper and lower bounds of the layers to be searched are updated in real time according to the algorithm's strategy, avoiding a large amount of useless search and improving search efficiency. Experimental results show that the algorithm not only solves the minimal-cost strong planning solution quickly and precisely, but also runs more efficiently than existing algorithms; the greater the number of layers and actions, the more obvious the advantage.
Temporal Weight in Dynamic Topic Tracking
WU Shu-fang and XU Jian-min
Computer Science. 2015, 42 (2): 233-236.  doi:10.11896/j.issn.1002-137X.2015.02.048
Abstract PDF(423KB) ( 413 )   
References | Related Articles | Metrics
A new dynamic topic tracking model based on a Bayesian belief network, used as the representation model in this paper, was proposed. We used time distance to quantify temporal information, which is then used to dynamically adjust feature weights. A weight decay function was given to handle features that have disappeared for a long time: if a feature's weight falls below a given threshold after decaying, the feature is treated as redundant information. The TDT4 corpora and DET curves were used for experiments. We first obtained the optimal time distance threshold α and the threshold β for deciding whether a feature is redundant. Experimental results show that the tracking performance of dynamic topic models can be effectively improved by using temporal weights.
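The decay-and-prune idea can be sketched as follows. The exponential form and the parameter names (`alpha` for the time-distance scale, `beta` for the redundancy threshold) are assumptions for illustration; the paper defines its own decay function and tunes its thresholds on TDT4.

```python
import math

def decayed_weight(w0, time_distance, alpha=10.0, beta=0.05):
    """Decay a feature weight by its time distance; a weight that
    falls below the redundancy threshold beta is pruned (set to 0),
    dropping the feature as redundant information."""
    w = w0 * math.exp(-time_distance / alpha)
    return w if w >= beta else 0.0
```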
Chinese Argument Extraction Approach Based on Semantics
HUANG Yuan, LI Pei-feng and ZHU Qiao-ming
Computer Science. 2015, 42 (2): 237-240.  doi:10.11896/j.issn.1002-137X.2015.02.049
Abstract   
References | Related Articles | Metrics
Chinese is a topic-based language and the expression of Chinese sentences is very flexible, which leads to loose connections between the arguments and the trigger of an event. Previous studies widely used syntactic features based on shallow semantics, which leads to low argument extraction performance. To address this issue, this paper put forward a novel argument extraction approach based on multiple semantics. The approach uses the semantics of roles, entities and triggers to supplement syntax-based approaches. Experimental results on the ACE 2005 Chinese corpus show that our approach outperforms the baseline significantly.
Constrainted E-CARGO Model Applying in CSP Problem
TENG Shao-hua, ZHANG Hong, LIU Dong-ning, ZHU Hai-bin, ZHANG Wei and LIANG Lu
Computer Science. 2015, 42 (2): 241-246.  doi:10.11896/j.issn.1002-137X.2015.02.050
Abstract PDF(496KB) ( 552 )   
References | Related Articles | Metrics
Role-based collaboration (RBC) is a set of methods, theories and technologies used to explore roles and the complex relationships among them. In RBC, group role assignment (GRA) is a key and difficult problem. Many researchers solve GRA problems based on a qualification matrix, but it is difficult to describe the complex constraint relationships of the problems with only one matrix. Therefore, after adding a constraint set to the E-CARGO model, we put forward the Ec-CARGO model with constraints. We also studied in depth the intrinsic relationships among RBC, GRA, SAT and CSP, and then derived the RBC-GRA-SAT-CSP transformation relation. We proposed a method of solving typical constraint satisfaction problems (CSPs) based on Ec-CARGO, and designed a common framework for solving CSPs with the constrained assignment of GRA. Finally, the N-queens problem was used to verify the validity of our method.
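For reference, the N-queens verification case in its plain CSP form can be sketched as standard backtracking search: place one queen per row, with the column choice constrained by all previously placed queens. This is the textbook formulation, not the paper's Ec-CARGO role-assignment solver.

```python
def solve_n_queens(n):
    """Backtracking CSP search for N-queens; returns one solution as a
    list of column indices (one per row), or None if unsatisfiable."""
    def safe(cols, col):
        row = len(cols)
        # no shared column, no shared diagonal with any placed queen
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def place(cols):
        if len(cols) == n:
            return cols
        for col in range(n):
            if safe(cols, col):
                result = place(cols + [col])
                if result:
                    return result
        return None

    return place([])
```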
Interval Iterative Algorithm for a Class of Convex Optimization
TANG Min and DENG Guo-qiang
Computer Science. 2015, 42 (2): 247-252.  doi:10.11896/j.issn.1002-137X.2015.02.051
Abstract PDF(444KB) ( 788 )   
References | Related Articles | Metrics
This paper studied solution methods for nonlinearly constrained convex optimization and showed how to use the Kuhn-Tucker conditions to transform a convex program equivalently into finding the roots of a multivariate nonlinear system. Our interval approach for finding optimal solutions combines the principles of interval arithmetic with the modified iteration given by R. Krawczyk. For a convex program with differentiable objective and constraint functions, the ability to optimize globally can be improved. We presented numerical experiments in contrast to classical algorithms such as the genetic algorithm, pattern search, simulated annealing and some built-in facilities of mathematical software packages. The implementation indicates that our method can provide a guaranteed bound on the error of the computed value and obtain more global optimal solutions.
Pattern Filtering and Conversion Methods for Semi-supervised Chinese Event Extraction
XU Xia, LI Pei-feng and ZHU Qiao-ming
Computer Science. 2015, 42 (2): 253-255.  doi:10.11896/j.issn.1002-137X.2015.02.052
Abstract PDF(367KB) ( 585 )   
References | Related Articles | Metrics
The accuracy of event patterns is very important in semi-supervised event extraction. Currently, semi-supervised Chinese event extraction systems based on pairwise patterns (e.g., Trigger-Argument) suffer much from the polysemy of triggers and sparse patterns. This paper put forward an argument-based mechanism for trigger sense disambiguation, and then applied it to pattern filtering to eliminate invalid patterns. In addition, for several special Chinese sentence structures, this paper proposed a pattern conversion method based on syntactic structure to enhance the applicability of the patterns. Experimental results on the ACE 2005 Chinese data show that our methods can effectively improve the performance of semi-supervised Chinese event extraction systems.
Dimensionality Reduction Algorithm Based on Neighborhood Rival Linear Embedding
LI Yan-yan and YAN De-qin
Computer Science. 2015, 42 (2): 256-259.  doi:10.11896/j.issn.1002-137X.2015.02.053
Abstract PDF(1205KB) ( 464 )   
References | Related Articles | Metrics
In order to improve the correctness of locally linear embedding on sparse data, a novel dimensionality reduction algorithm based on neighborhood rival linear embedding was proposed. Using statistical information, it determines the local linear dynamic range and adopts the cam distribution to find neighbors of data points, avoiding the lack of direction in neighbor selection. For sparse data sets, the algorithm can effectively obtain local and global information of the data. Experiments testing the improved algorithm show good dimensionality reduction results, and experimental results on image retrieval using the Corel database show the efficiency of the algorithm.
Finite Time Control of Memristor Chaotic Systems
CHENG Ping-guang, MA Yue, HUANG Jun-jian and LIU Ji
Computer Science. 2015, 42 (2): 260-262.  doi:10.11896/j.issn.1002-137X.2015.02.054
Abstract PDF(214KB) ( 406 )   
References | Related Articles | Metrics
The memristor, proposed by Professor Chua in 1971, is considered to be the missing fourth passive circuit element. It is nonlinear and shows many special properties, which can be described by a nonlinear constitutive relation. At present, research on memristor-based circuits is a focal research topic, and memristor-based chaotic systems have also attracted attention recently. Based on the finite-time stability theorem, this paper studied the global stability and finite-time stabilization problems of a class of fifth-order memristor chaotic systems, and designed a nonlinear state feedback controller to ensure the global exponential stability of memristor chaotic systems. Simulation results verify the correctness and validity of the method.
Recommended Method of Mashup Services Based on Information Entropy Multi-attribute Decision-making
WANG Shao-wei, LIU Jian-xun, CAO Bu-qing, TANG Ming-dong and WANG Xian
Computer Science. 2015, 42 (2): 263-266.  doi:10.11896/j.issn.1002-137X.2015.02.055
Abstract PDF(697KB) ( 374 )   
References | Related Articles | Metrics
With the continuous development of Mashup services, finding services that have high quality and interest users among massive services has become difficult. To solve this problem, this paper proposed a recommendation method for Mashup services based on information entropy multi-attribute decision making. In this approach, a user interest model and a Mashup quality of service (QoS) model are first created. Then, based on the information entropy multi-attribute decision making method, the comprehensive scores of the candidate Mashup services are predicted and the Top-K highest ones are recommended to the user. Finally, large-scale experiments on a Mashup service dataset show that the method can effectively recommend a Mashup service list with high comprehensive quality to the user, and has good scalability.
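The standard entropy-weight scoring scheme this abstract refers to can be sketched as follows: normalize each attribute column into a distribution, compute its information entropy, weight attributes by (1 - entropy) so that more discriminative attributes count more, then rank candidates by weighted score. This is the generic entropy weight method under the assumption that all attribute values are positive and larger-is-better; the paper's exact models may differ.

```python
import math

def entropy_weights(matrix):
    """matrix[i][j]: value of attribute j for candidate i (positive,
    larger is better). Attributes whose values vary more across the
    candidates (lower entropy) receive higher weight."""
    m, n = len(matrix), len(matrix[0])
    weights = []
    for j in range(n):
        col = [matrix[i][j] for i in range(m)]
        total = sum(col)
        p = [v / total for v in col]
        # entropy normalized to [0, 1] by log(m)
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        weights.append(1 - e)
    s = sum(weights)
    return [w / s for w in weights]

def rank_candidates(matrix, k):
    """Top-k candidate indices by entropy-weighted comprehensive score."""
    w = entropy_weights(matrix)
    scores = [sum(wj * row[j] for j, wj in enumerate(w)) for row in matrix]
    return sorted(range(len(matrix)), key=lambda i: -scores[i])[:k]
```

Note how an attribute that is identical across all candidates carries zero weight: it cannot help discriminate between them.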
Personalized Tag Recommendation Algorithm Based on Tensor Decomposition
LI Gui, WANG Shuang, LI Zheng-yu, HAN Zi-yang, SUN Ping and SUN Huan-liang
Computer Science. 2015, 42 (2): 267-273.  doi:10.11896/j.issn.1002-137X.2015.02.056
Abstract PDF(854KB) ( 837 )   
References | Related Articles | Metrics
Internet-based social tagging systems provide an information sharing platform that allows users to annotate the items they have browsed in the form of "tags". A tag not only describes an item's semantics but also reflects the user's preferences. The advantage of a tag recommendation system is that it can exploit swarm intelligence to obtain accurate keyword descriptions of items, and accurate tag information is an important resource for improving the performance of personalized recommendation systems. However, because interests differ across users, existing tag recommendation systems face the problems that different users may assign different tags to the same item, and that the same tag may carry different semantics for different users. How to effectively capture the latent semantic associations among users, items and tags has thus become a main problem to be solved. Therefore, we introduced a tensor model and used a third-order tensor to describe the three types of entities of a social tagging system: users, items and tags. After constructing the initial tensor from historical tagging data, we applied the higher-order singular value decomposition (HOSVD) method to reduce the dimensionality of the tensor and, at the same time, analyze the latent semantic associations among the three types of entities. We compared the method against two tag recommendation algorithms (FolkRank and PR) on two real data sets (Last.fm and MovieLens). Experimental results show significant improvements of the method in terms of recall and precision.
Adapting Suppressed Fuzzy C-regression Models Algorithm
GUO Hua-feng, ZHAO Jian-min and PAN Xiu-qiang
Computer Science. 2015, 42 (2): 274-276.  doi:10.11896/j.issn.1002-137X.2015.02.057
Abstract PDF(341KB) ( 456 )   
References | Related Articles | Metrics
The fuzzy C-regression models (FCRM) algorithm, proposed by Hathaway and Bezdek, has the advantages of strong stability and good convergence compared to the hard C-regression models algorithm, but it converges slowly. To solve this problem, the idea of membership suppression was introduced and the suppressed fuzzy C-regression models (S-FCRM) algorithm was proposed. Experiments show that S-FCRM speeds up convergence and provides a better convergence effect. However, S-FCRM also has the problem of selecting the inhibition factor parameter. To solve this problem, an adaptive method of inhibition factor selection was studied and the adaptive suppressed fuzzy C-regression (AS-FCRM) algorithm was proposed. Experiments show that AS-FCRM has a good adaptive effect, faster convergence and better robustness.
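The membership suppression step common to suppressed fuzzy clustering can be sketched in isolation: the winning model's membership is inflated and the rest are scaled down by the inhibition factor α, so α = 1 recovers the unsuppressed update and α = 0 gives hard assignment. This shows only the standard suppression update; the regression-model fitting around it, and AS-FCRM's adaptive choice of α, are omitted.

```python
def suppress_memberships(u, alpha):
    """Suppression step: for the winning cluster p (largest membership),
    set u_p <- 1 - alpha*(1 - u_p); scale all other memberships by alpha.
    The memberships still sum to 1 afterwards."""
    p = max(range(len(u)), key=lambda j: u[j])
    return [1 - alpha * (1 - u[j]) if j == p else alpha * u[j]
            for j in range(len(u))]
```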
3D Model’s Alignment Approach Combining Partial Symmetry
ZHU Xin-yi and GENG Guo-hua
Computer Science. 2015, 42 (2): 277-279.  doi:10.11896/j.issn.1002-137X.2015.02.058
Abstract PDF(868KB) ( 490 )   
References | Related Articles | Metrics
Symmetry is an important attribute of most natural objects, and alignment is a key preprocessing step for 3D model retrieval. An approach that aligns 3D models using partial symmetry was therefore proposed. First, the CPCA coordinate planes of a model are computed to establish its initial pose. Then a new measure, the partial symmetry length ratio (PSLR), is introduced to judge the model's partial symmetry planes. If the model has more than two such planes, the pose with maximal PSLR is taken as the estimated pose; otherwise, moment balance is used to estimate the final pose, computing area instead of mass. The algorithm accounts for both symmetric and asymmetric models, and experimental results demonstrate its validity.
Adaptive Selected Parameters of Decomposed Two-dimensional Renyi Entropy Thresholding Method
GONG Qu, RAN Qing-hua and WANG Hai-jun
Computer Science. 2015, 42 (2): 280-282.  doi:10.11896/j.issn.1002-137X.2015.02.059
Abstract PDF(850KB) ( 623 )   
References | Related Articles | Metrics
To select the parameter α in the decomposed 2D Renyi entropy image thresholding method, a new adaptive method based on particle swarm optimization was proposed, using the uniformity measure as the segmentation evaluation criterion. Experimental results show that the method not only finds a suitable parameter α and the desired segmentation result for each image, but also reduces the computational complexity from O(L^6) to O(L^2); its running time is only one ten-thousandth of that of the 2D Renyi entropy thresholding method with adaptive parameter selection.
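For readers unfamiliar with Renyi-entropy thresholding, the following is a deliberately simplified 1-D version of the criterion the paper builds on: it scans thresholds and keeps the one maximizing the sum of the background and foreground Renyi entropies of order α. The paper's decomposed 2-D method and its PSO-based selection of α are not reproduced; α = 0.7 below is an arbitrary illustrative value.

```python
import numpy as np

def renyi_threshold(hist, alpha=0.7):
    """1-D Renyi entropy threshold: for each candidate t, normalize the
    background (levels < t) and foreground (levels >= t) distributions
    and maximize H_b + H_f, where H = ln(sum p^alpha) / (1 - alpha)."""
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, len(p) - 1):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 <= 0 or w1 <= 0:
            continue
        h0 = np.log(np.sum((p[:t] / w0) ** alpha)) / (1 - alpha)
        h1 = np.log(np.sum((p[t:] / w1) ** alpha)) / (1 - alpha)
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t
```

The exhaustive scan above is what makes the naive 2-D extension expensive; the paper's decomposition and adaptive α selection attack exactly that cost.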
Hierarchical Subtraction Combining Phong Model for Foreground Detection in Sudden Illumination Changes Scenes
CAO Qian-xia, LUO Da-yong and WANG Zheng-wu
Computer Science. 2015, 42 (2): 283-286.  doi:10.11896/j.issn.1002-137X.2015.02.060
Abstract PDF(843KB) ( 422 )   
References | Related Articles | Metrics
To address sudden illumination changes and non-stationary background disturbance in foreground detection, a method combining block-level and pixel-level hierarchical background subtraction with the Phong model was presented. First, foreground areas are quickly detected and non-stationary background is effectively handled using a block-level Sigma-Delta background subtraction algorithm. Then, coarse targets are extracted from the foreground areas using the Phong model to cope with sudden illumination changes. Finally, pixel-level foreground refinement is applied to the coarse targets and the background is updated with a pixel-level Sigma-Delta algorithm. Experiments show that the method achieves robust foreground detection in scenes with sudden illumination changes and non-stationary background.
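The Sigma-Delta estimator named in the abstract is compact enough to sketch at the pixel level. The update constants and initialization below are conventional choices for this family of estimators, not values taken from the paper, and the block-level variant and Phong-model step are omitted.

```python
import numpy as np

def sigma_delta_step(frame, M, V, N=4):
    """One pixel-level Sigma-Delta update: the background estimate M
    tracks the frame by elementary +/-1 steps, the dispersion estimate V
    tracks N times the absolute difference, and a pixel is flagged as
    foreground when the difference exceeds V."""
    M = M + np.sign(frame - M)
    diff = np.abs(frame - M)
    V = np.clip(V + np.sign(N * diff - V), 1, 255)
    fg = diff > V
    return M, V, fg

# Typical use: initialize M from the first frame and V to ones, then
# call sigma_delta_step once per incoming frame.
```

Because every update is a bounded increment, the estimator is cheap and drifts slowly, which is why the paper pairs it with a Phong-model step to absorb sudden illumination changes that Sigma-Delta alone adapts to too slowly.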
D-Tile for Texture Synthesis Based on Artificial Bee Colony
SUN Jin-guang and LIU Shuang-jiu
Computer Science. 2015, 42 (2): 287-291.  doi:10.11896/j.issn.1002-137X.2015.02.061
Abstract PDF(1697KB) ( 286 )   
References | Related Articles | Metrics
Patch-based texture synthesis builds a texture by joining texture blocks taken from the original sample: a group of blocks is selected from the sample and joined in some way to produce the output texture. We proposed D-Tile, a texture synthesis method based on the artificial bee colony algorithm. First, the artificial bee colony algorithm is used to select four square texture blocks with small pixel differences at their boundaries; such blocks reduce the visibility of seams in the D-Tile, and choosing square blocks allows the corner information of the texture sample to be utilized. Second, the four blocks are tiled with some overlap, their diagonals are connected, and the diamond-shaped texture patch in the center is taken as the D-Tile framework; this choice of framework avoids the corner problem when tiling D-Tiles. Finally, the D-Tile is used to synthesize the texture according to the principle of border color matching. Experimental results show that the algorithm achieves a good visual effect on various types of texture.
Algorithm of Image Thinning Post-processing Based on Direction Chain Code Scanning and Tracking
QU Zhong, JIANG Yu-ping and WEN Qian-yun
Computer Science. 2015, 42 (2): 292-295.  doi:10.11896/j.issn.1002-137X.2015.02.062
Abstract PDF(845KB) ( 1331 )   
References | Related Articles | Metrics
Extracting the skeleton of a target image is an important part of intelligent analysis. Zhang's parallel thinning algorithm has the flaws that the resulting skeleton is not single-pixel wide and easily produces burrs. This article proposed a fast skeleton extraction algorithm that obtains a single-pixel skeleton and eliminates burrs. First, the binary target image is morphologically preprocessed to fill tiny holes and smooth the boundary. Second, 8-direction chain code scanning and a coding principle are used to obtain a single-pixel skeleton. Finally, the 8-direction chain code is used to remove burrs. Experiments show that the algorithm can rapidly and effectively obtain a single-pixel-wide skeleton and remove the burrs.
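The 8-direction (Freeman) chain code underlying the method can be sketched as a simple trace over a one-pixel-wide curve. The representation below (a set of pixel coordinates, codes 0-7 counter-clockwise from east) is a standard convention, not the paper's exact encoding; burr removal then amounts to discarding traced branches shorter than a length threshold.

```python
# Freeman 8-direction offsets, indexed 0..7 counter-clockwise from east.
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
        (0, -1), (1, -1), (1, 0), (1, 1)]

def chain_code(skel, start):
    """Trace a 1-pixel-wide curve from `start` through the pixel set
    `skel`, returning the sequence of Freeman direction codes; visited
    pixels are marked so the trace terminates at the curve's end."""
    seen = {start}
    codes, cur = [], start
    while True:
        for code, (dr, dc) in enumerate(DIRS):
            nxt = (cur[0] + dr, cur[1] + dc)
            if nxt in skel and nxt not in seen:
                codes.append(code)
                seen.add(nxt)
                cur = nxt
                break
        else:
            return codes
```

A branch whose chain code is shorter than, say, a few pixels can be treated as a burr and deleted, which is the spirit of the post-processing step the abstract describes.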
Research of Human Tracking Algorithm through Multi Feature Fusion Particle Filter Based on Direction Vector
ZHANG Lei, GONG Ning-sheng and LI Jin
Computer Science. 2015, 42 (2): 296-300.  doi:10.11896/j.issn.1002-137X.2015.02.063
Abstract PDF(678KB) ( 382 )   
References | Related Articles | Metrics
The traditional multi-feature fusion particle filter requires a huge amount of computation, which harms timeliness, and it often produces tracking and matching errors. To better address these problems, this paper adopted a multi-feature fusion particle filter based on direction vectors. First, the algorithm combines multiplicative and additive fusion of the body's color and contour features. In addition, based on the uncertainty product of the two features, their respective weights during tracking are adjusted according to each feature's actual contribution rate. Moreover, by incorporating the direction vector, the probable range of object motion can be predicted, reducing the computation of particle iteration. Finally, merged human bodies can be separated by adjusting the window automatically and precisely. Tests show that the method can track humans accurately in complex environments.
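The mixed additive/multiplicative fusion of two feature likelihoods can be illustrated in a few lines. This is only a hypothetical sketch: the abstract does not define the "contribution rate", so the variance of each feature's particle likelihoods is used here as an invented proxy for how discriminative that feature currently is, and the equal mix of the two fusion modes is likewise an assumption.

```python
import numpy as np

def fused_weights(p_color, p_contour, eps=1e-12):
    """Fuse color and contour likelihoods over a particle set.  Each
    feature's weight `a` is its share of the total likelihood variance
    (an illustrative stand-in for the paper's contribution rate); the
    fused score averages the additive and multiplicative combinations."""
    c1, c2 = np.var(p_color), np.var(p_contour)
    a = (c1 + eps) / (c1 + c2 + 2 * eps)
    additive = a * p_color + (1 - a) * p_contour
    multiplicative = p_color ** a * p_contour ** (1 - a)
    fused = 0.5 * (additive + multiplicative)
    return fused / fused.sum()   # normalized particle weights
```

A feature whose likelihoods barely vary across particles carries little information about the target's location, so down-weighting it, as above, captures the adaptive intent described in the abstract.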
Generation Algorithm of Digital Reconstruction Radiographs Based on CUDA
DU Xiao-gang, DANG Jian-wu and WANG Yang-ping
Computer Science. 2015, 42 (2): 301-305.  doi:10.11896/j.issn.1002-137X.2015.02.064
Abstract PDF(687KB) ( 995 )   
References | Related Articles | Metrics
Because the generation of a digitally reconstructed radiograph (DRR) is highly parallel, a DRR generation algorithm based on CUDA parallel computing was presented in this paper. First, an octree structure is used to organize the volume data on the CPU, and the data are then loaded into the GPU. A kernel function simulating the attenuation of X-rays penetrating the human body is designed according to the correspondence between rays and threads, and the kernel is executed in parallel by many threads to complete DRR generation. Experimental results show that the algorithm effectively exploits the parallel computing capability of the GPU while ensuring DRR quality, significantly improves the generation speed, and meets the real-time requirements of DRR in image-guided radiotherapy.
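The ray-per-thread parallelism the abstract relies on comes from each ray's line integral being independent of every other ray's. A drastically simplified NumPy sketch makes that visible for the special case of parallel rays along one volume axis (the paper's CUDA kernel, octree organization, and perspective geometry are not reproduced; the attenuation values are arbitrary):

```python
import numpy as np

def drr_parallel_axis(volume, spacing=1.0):
    """Simplified DRR: parallel rays along axis 0 with Beer-Lambert
    attenuation I = I0 * exp(-sum(mu * dz)).  Each output pixel depends
    only on its own ray, which is exactly the independence a CUDA
    implementation maps to one thread per ray."""
    line_integral = volume.sum(axis=0) * spacing   # one integral per ray
    return np.exp(-line_integral)                  # intensity, I0 = 1
```

In the CUDA version each thread computes one `line_integral` by stepping through the volume, so the reduction above corresponds to the body of the kernel rather than to any cross-thread communication.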
Wavelet Transform Image Retrieval Method Based on Content
LI Lan and LIU Yang
Computer Science. 2015, 42 (2): 306-310.  doi:10.11896/j.issn.1002-137X.2015.02.065
Abstract PDF(670KB) ( 507 )   
References | Related Articles | Metrics
Traditional image retrieval methods rely on local image characteristics and ignore global ones. Aiming at this problem, after in-depth analysis of the global characteristics of images, this paper proposed a hybrid method that extracts image content from both local and global features. First, the stationary wavelet transform is used to extract the horizontal, vertical and diagonal global information of the image. Second, the gray-level co-occurrence matrix of each sub-band matrix is used to extract local characteristics. Based on the joint local and global description, a multimode association-rule data mining method is used for retrieval, with Euclidean distance as the main decision parameter of the association rules. Experimental results show that the proposed content-based wavelet transform retrieval method with multimode association-rule mining achieves a great performance improvement over existing schemes.
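The gray-level co-occurrence matrix (GLCM) used for the local features can be sketched directly. The quantization to 8 levels, the single (0, 1) displacement, and the choice of contrast and energy as the derived descriptors are illustrative defaults, not the paper's settings.

```python
import numpy as np

def glcm(img, levels=8, offset=(0, 1)):
    """Gray-level co-occurrence matrix for one pixel displacement, plus
    two classic texture features (contrast and energy) computed from it."""
    q = (img.astype(np.float64) * levels / (img.max() + 1)).astype(int)
    dr, dc = offset
    P = np.zeros((levels, levels))
    rows, cols = q.shape
    # Count co-occurrences of gray levels at the given displacement.
    for r in range(max(0, -dr), min(rows, rows - dr)):
        for c in range(max(0, -dc), min(cols, cols - dc)):
            P[q[r, c], q[r + dr, c + dc]] += 1
    P /= P.sum()
    i, j = np.indices(P.shape)
    contrast = np.sum((i - j) ** 2 * P)   # local intensity variation
    energy = np.sum(P ** 2)               # textural uniformity
    return P, contrast, energy
```

In the paper's pipeline a GLCM descriptor of this kind would be computed on each stationary-wavelet sub-band, giving the local half of the joint local/global feature vector.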
Image Retrieval Algorithm Based on Improved Color Coherence Vectors and Contribution to Clustering
ZHANG Yong-ku, LI Yun-feng and SUN Jing-guang
Computer Science. 2015, 42 (2): 311-315.  doi:10.11896/j.issn.1002-137X.2015.02.066
Abstract PDF(939KB) ( 400 )   
References | Related Articles | Metrics
In order to improve the speed and accuracy of image retrieval, the drawbacks of image retrieval based on various clustering algorithms were analyzed, and a new partition clustering method for image retrieval was presented in this paper. First, based on asymmetrical quantization of color in the HSV model, color coherence vectors are introduced as the color feature. Second, qualified feature vectors are chosen as the initial cluster centers, clustering is performed based on dispersion and contribution, and an image feature index library is established. Finally, retrieval results are obtained and reordered by similarity to the query image. Comparison with other algorithms demonstrates that the precision and recall of the proposed algorithm are greatly improved.
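A color coherence vector (CCV) splits each color bin's pixel count into "coherent" pixels (belonging to a large connected region of that color) and "incoherent" ones. The sketch below shows the classic construction on an already-quantized image; the coherence threshold tau and 4-connectivity are conventional choices, and the paper's asymmetrical HSV quantization is assumed to have happened beforehand.

```python
import numpy as np
from collections import deque

def ccv(quantized, tau=4):
    """Color coherence vector: per color bin, count pixels lying in
    4-connected regions of size >= tau (coherent) versus smaller
    regions (incoherent)."""
    rows, cols = quantized.shape
    seen = np.zeros_like(quantized, dtype=bool)
    coherent, incoherent = {}, {}
    for r in range(rows):
        for c in range(cols):
            if seen[r, c]:
                continue
            color = quantized[r, c]
            # BFS over the 4-connected same-color region containing (r, c).
            comp, queue = 0, deque([(r, c)])
            seen[r, c] = True
            while queue:
                y, x = queue.popleft()
                comp += 1
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny, nx]
                            and quantized[ny, nx] == color):
                        seen[ny, nx] = True
                        queue.append((ny, nx))
            target = coherent if comp >= tau else incoherent
            target[color] = target.get(color, 0) + comp
    return coherent, incoherent
```

Keeping coherent and incoherent counts separate lets retrieval distinguish, say, a large red region from many scattered red pixels, which a plain color histogram cannot do.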
Particle Filter Object Tracking Based on Adaptive Feature Fusion
HUAN Er-yang and LI Rui
Computer Science. 2015, 42 (2): 316-319.  doi:10.11896/j.issn.1002-137X.2015.02.067
Abstract PDF(576KB) ( 450 )   
References | Related Articles | Metrics
In order to solve the problem that the traditional particle filter tracking method fails under complex backgrounds, a particle filter tracking algorithm based on adaptive feature fusion was proposed. The object is represented by color and edge features, which are fused within the particle filter, and the weighting coefficient of each feature is adjusted according to its reliability. When the scale of the object changes, the object template is updated during tracking. Experimental results show that the proposed algorithm tracks the object stably and more accurately when the object is under a complex background or is occluded.