Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 45 Issue 10, 20 October 2018
  
CGCKD 2018
Generalized Sequential Three-way Decisions Approach Based on Decision-theoretic Rough Sets
YANG Xin, LI Tian-rui, LIU Dun, FANG Yu, WANG Ning
Computer Science. 2018, 45 (10): 1-5.  doi:10.11896/j.issn.1002-137X.2018.10.001
The theory of three-way decisions is one of the effective approaches to solving dynamic uncertain problems. Compared with two-way decisions, sequential three-way decisions can effectively balance the cost of the decision result and the cost of the decision process when information is insufficient or evidence is inadequate. Based on the study of the multilevel granular structure, the processing of objects with multiple selections and the diversified cost structure, this paper proposed a generalized sequential three-way decisions model under decision-theoretic rough sets. The model considers seven different methods for processing objects at each level. Finally, experiments were conducted to analyze the efficiency and performance of the seven approaches in the proposed model.
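For readers unfamiliar with the (α,β) thresholds that decision-theoretic rough sets derive from a loss function, the following minimal Python sketch computes them and applies the standard three-way rule; the loss values are made up for illustration, and the sketch is not the paper's generalized sequential model.

```python
def dtrs_thresholds(l_pp, l_bp, l_np, l_pn, l_bn, l_nn):
    """Standard DTRS thresholds.  l_xy is the loss of taking action x
    (P=accept, B=defer, N=reject) when the object does (y=P) or does not
    (y=N) belong to the concept; assumes l_pp <= l_bp < l_np and l_nn <= l_bn < l_pn."""
    alpha = (l_pn - l_bn) / ((l_pn - l_bn) + (l_bp - l_pp))
    beta = (l_bn - l_nn) / ((l_bn - l_nn) + (l_np - l_bp))
    return alpha, beta

def three_way_decide(prob, alpha, beta):
    """Classify one object by its conditional probability P(X | [x])."""
    if prob >= alpha:
        return "accept"   # positive region
    if prob <= beta:
        return "reject"   # negative region
    return "defer"        # boundary region: pass the object to the next, finer level

# Hypothetical losses: alpha = 2/3, beta = 1/4, so P(X|[x]) = 0.6 is deferred.
alpha, beta = dtrs_thresholds(0, 2, 8, 6, 2, 0)
print(round(alpha, 3), round(beta, 3), three_way_decide(0.6, alpha, beta))
```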
Cost-sensitive Sequential Three-way Decision Making Method
XING Ying, LI De-yu, WANG Su-ge
Computer Science. 2018, 45 (10): 6-10.  doi:10.11896/j.issn.1002-137X.2018.10.002
In realistic decision-making, cost sensitivity is one of the important factors affecting human decisions, and many researchers are committed to reducing the cost of decision-making. At present, most studies in the rough set field are based on the DTRS model and consider only a single kind of cost, which is not comprehensive enough. The sequential three-way decision model, in contrast, is sensitive to two kinds of costs, and its multi-level granular structure can effectively reduce the total decision cost and better simulate the dynamic and gradual process of human decision-making. Based on the sequential three-way decision model, this paper constructed a multi-level granular structure. It relates the test cost of each attribute to its classification ability and sets the test cost from the perspective of information entropy. At the same time, combined with sequential three-way decisions, attribute reduction based on the minimum cost criterion is used to remove the influence of redundant and irrelevant attributes on the cost. The experimental results on seven UCI datasets show that, while high accuracy is ensured, the total cost of decision-making drops by an average of 26%, which fully validates the effectiveness of the proposed method.
Dynamic Parallel Updating Algorithm for Approximate Sets of Graded Multi-granulation Rough Set Based on Weighting Granulations and Dominance Relation
ZHAO Yi-lin, JIANG Lin, MI Yun-long, LI Jin-hai
Computer Science. 2018, 45 (10): 11-20.  doi:10.11896/j.issn.1002-137X.2018.10.003
With the continuous updating of large data sets, classical multi-granulation rough set theory is no longer practical. Therefore, this paper put forward the related theories of graded pessimistic and graded optimistic multi-granulation rough sets with weighting granulations and dominance relation. On the basis of this improved theory, this paper proposed a dynamic parallel updating algorithm for approximate sets of graded multi-granulation rough sets based on weighting granulations and dominance relation. Finally, experiments verify the effectiveness of the proposed algorithm, which is able to handle data with massive dynamic updates and improve running efficiency.
Rules Acquisition on Three-way Class Contexts
REN Rui-si, WEI Ling, QI Jian-jun
Computer Science. 2018, 45 (10): 21-26.  doi:10.11896/j.issn.1002-137X.2018.10.004
Rule acquisition is an important problem in three-way concept analysis. Based on attribute-induced three-way concepts, two kinds of three-way class contexts were defined, namely three-way condition class contexts and three-way decision class contexts. The class concepts in these two kinds of three-way class contexts were defined and their structures were studied. Moreover, the relationships between the class concepts in three-way decision class contexts and the attribute-induced three-way concepts in three-way weakly consistent formal decision contexts were discussed. Then the rules based on three-way decision class concepts were presented, and the way to acquire them was shown. Furthermore, compared with the rules acquired from three-way weakly consistent formal decision contexts, the class-context-based rules were proved to be superior: the number of class-context-based rules is smaller, but for each three-way weakly consistent context-based rule there exists a class-context-based rule containing more knowledge. Finally, considering the three-way condition class context, the reverse rules were defined, and considering both three-way condition class contexts and three-way decision class contexts, the double-directed rules were presented.
Fuzzy Rough Set Model Based on Three-way Decisions of Optimal Similar Degrees
YANG Ji-lin, ZHANG Xian-yong, TANG Xiao, FENG Lin
Computer Science. 2018, 45 (10): 27-32.  doi:10.11896/j.issn.1002-137X.2018.10.005
In fuzzy information systems, noise tends to affect the similarity of objects, and not all similarity degrees of objects need to be calculated with high accuracy in the model. Consequently, a fuzzy rough set model based on three-way decisions of similarity degrees was proposed in this paper by introducing thresholds (α,β). Then the error rates, decision costs and corresponding semantic explanations of the three-way decision about objects' similarity degrees were given through the three-way decision method of fuzzy set approximation. Furthermore, by taking the minimization of overall decision costs as the goal, a calculation method for the optimal thresholds (α,β) was given, and thus a fuzzy rough set model based on the three-way decision of optimal similarity degrees was established. Finally, an example was analyzed to show the feasibility and reasonability of the model. The proposed model keeps the uncertainty of the fuzzy information system while reducing noise effects to some extent, and the optimal (α,β) can be obtained by calculation. This study benefits the application of fuzzy information systems.
Property-oriented and Object-oriented Decision Rules in Decision Formal Contexts
JIANG Yu-ting, QIN Ke-yun
Computer Science. 2018, 45 (10): 33-36.  doi:10.11896/j.issn.1002-137X.2018.10.006
Decision formal contexts are an important topic of formal concept analysis. This paper proposed decision rules in decision formal contexts based on object-oriented and property-oriented concept lattices, and provided the semantic interpretation of these decision rules. Furthermore, the relationship between decision rules based on property-oriented concept lattices and decision rules based on Wille's concept lattices was described, and a method for determining the consistency of attribute sets was given.
Recommendation Algorithm Combining User’s Asymmetric Trust Relationships
ZHANG Zi-yin, ZHANG Heng-ru, XU Yuan-yuan, QIN Qin
Computer Science. 2018, 45 (10): 37-42.  doi:10.11896/j.issn.1002-137X.2018.10.007
Data sparsity is one of the major challenges faced by collaborative filtering. Trust relationships between users provide useful additional information for the recommender system. In existing studies, direct trust relationships are mainly used as additional information, while indirect trust relationships are less considered. This paper proposed a recommendation algorithm (ATRec) that combines direct and indirect asymmetric trust relationships. First, a trust transfer mechanism is constructed and used to obtain asymmetric indirect trust relationships between users. Second, each user's trust set is obtained from the direct and indirect asymmetric trust relationships. Finally, the popularity of an item is computed according to the rating information of the trust set or the k-nearest neighbors and the favorable threshold, and the user's top-N recommendation list is generated by the recommendation threshold. The experimental results show that this algorithm outperforms state-of-the-art recommendation algorithms in top-N recommendation.
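The abstract does not spell out the trust transfer mechanism, so the sketch below only illustrates one common assumption: indirect trust is propagated as a decayed product of direct trust values along a two-hop path and aggregated by taking the maximum. The decay factor and the aggregation rule are hypothetical, not ATRec's definition.

```python
import numpy as np

def propagate_trust(T, decay=0.8):
    """One-step trust transfer: u trusts w indirectly through any intermediary v.

    T is an asymmetric direct-trust matrix with T[u, v] in [0, 1]
    (in general T[u, v] != T[v, u]).  Product-with-decay and max aggregation
    are illustrative assumptions."""
    n = T.shape[0]
    indirect = T.copy()
    for u in range(n):
        for w in range(n):
            if u == w or T[u, w] > 0:
                continue  # keep existing direct trust unchanged
            via = [decay * T[u, v] * T[v, w]
                   for v in range(n) if T[u, v] > 0 and T[v, w] > 0]
            if via:
                indirect[u, w] = max(via)
    return indirect

T = np.array([[0.0, 0.9, 0.0],
              [0.0, 0.0, 0.7],
              [0.2, 0.0, 0.0]])
print(propagate_trust(T))  # user 0 now indirectly trusts user 2 (about 0.50)
```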
Feature Selection Algorithm Based on Segmentation Strategy
JIAO Na
Computer Science. 2018, 45 (10): 43-46.  doi:10.11896/j.issn.1002-137X.2018.10.008
Feature selection is a key issue in rough sets, and rough set theory is an efficient tool for reducing redundancy. At present, large data tables contain many items and features, but few methods achieve good performance on such big tables. The idea of segmentation was therefore introduced in this paper: a big data table is divided into several small tables, and the selection results are joined together to solve the feature selection problem of the original table. To evaluate the performance of the proposed method, this paper applied it to benchmark data sets. Experimental results illustrate that the proposed method is effective.
Three-way Granular Reduction for Decision Formal Context
LIN Hong, QIN Ke-yun
Computer Science. 2018, 45 (10): 47-50.  doi:10.11896/j.issn.1002-137X.2018.10.009
This paper studied three-way granular reduction in decision formal contexts. The concepts of three-way granular consistent formal decision context and three-way granular consistent set were put forward, and the judgment theorem for consistent sets was examined. Based on the discernibility matrix and discernibility function, the reduction method and an illustrative example were presented. Finally, the relationships among three-way granular reduction, granular reduction and classification reduction were examined.
Object-oriented Multigranulation Formal Concept Analysis
ZENG Wang-lin, SHE Yan-hong
Computer Science. 2018, 45 (10): 51-53.  doi:10.11896/j.issn.1002-137X.2018.10.010
To further introduce granular computing into the study of formal concept analysis, this paper studied formal concepts in multigranulation formal contexts and extended the existing study from single granulation to multigranulation. Firstly, the definition of formal concepts was given at different granulation levels. Secondly, the relationship between concepts at different granulation levels was examined. Thirdly, the necessary and sufficient condition for the extension sets to be equal at different granulation levels was proved. The obtained results provide a possible framework for data analysis by combining formal concept analysis and rough set theory at multiple granulation levels.
Generalized Dominance-based Attribute Reduction for Multigranulation Intuitionistic Fuzzy Rough Set
LIANG Mei-she, MI Ju-sheng, FENG Tao
Computer Science. 2018, 45 (10): 54-58.  doi:10.11896/j.issn.1002-137X.2018.10.011
The combination of evidence theory and multigranulation rough set models has become one of the hot issues, and the established models have been applied to various information systems, such as incomplete information systems, covering information systems and fuzzy information systems. However, intuitionistic fuzzy information systems have not been investigated yet. Firstly, three kinds of dominance relations and three kinds of dominance classes were defined by using triangular norms and triangular conorms in intuitionistic fuzzy decision information systems. Secondly, a generalized dominance-based multigranulation intuitionistic fuzzy rough set model was proposed, and the belief structure of this model was discussed under evidence theory. After that, attribute reduction was obtained according to the importance of granularity and attributes. Finally, an example was used to illustrate the effectiveness of the model.
Design and Application of Extreme Learning Machine Model Based on Granular Computing
CHEN Li-fang, DAI Qi, FU Qi-feng
Computer Science. 2018, 45 (10): 59-63.  doi:10.11896/j.issn.1002-137X.2018.10.012
In intelligent data processing, the importance of attributes not only differs from attribute to attribute but is also highly nonlinear, in which case it is difficult to obtain effective solutions by applying machine learning directly. In order to solve this problem, a granularity-based ranking method of attribute importance and the application of the ranking results to binary relations were explored to perform a granular partitioning algorithm. Then, this paper applied the extreme learning machine to the granular layer space, and the learning results in the layer space were compared and analyzed to obtain the optimal partition and granular layer. In addition, the granular extreme learning machine model proposed in this paper was applied to the air quality forecasting problem; it not only accelerates forecasting but is also consistent with the actual forecast, thus empirically proving the validity and reliability of the extreme learning machine model.
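As a reference point for the learner used inside each granular layer, the following sketch is a plain extreme learning machine (random hidden layer, output weights solved by a pseudo-inverse) on a toy regression task; the granular partitioning and air-quality data of the paper are not reproduced here.

```python
import numpy as np

def elm_train(X, y, n_hidden=30, rng=np.random.default_rng(0)):
    """Minimal ELM: random, untrained hidden layer plus analytic output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                  # Moore-Penrose pseudo-inverse solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: learn y = sin(x) on [0, 3].
X = np.linspace(0, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_train(X, y)
print(np.abs(elm_predict(X, W, b, beta) - y).mean())  # small training error
```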
Network & Communication
Modulation Protection Algorithm Based on Wireless OFDM Systems
GAO Bao-jian, WANG Shao-di, REN Yu-hui, WANG Yu-jie
Computer Science. 2018, 45 (10): 64-68.  doi:10.11896/j.issn.1002-137X.2018.10.013
With the broadband trend in wireless communication systems, traditional data encryption methods have high computational complexity and do not take the security of the physical-layer modulation modes into consideration. Aiming at this problem, a modulation protection algorithm based on wireless OFDM systems was proposed from the perspective of physical-layer encryption. The modulation protection effect of the proposed algorithm was analyzed in the single-carrier and multi-carrier cases respectively, and typical modulation identification methods from cognitive radio were adopted in the simulation to analyze and compare the recognition rate before and after encryption. Theoretical analysis and simulation results show that this algorithm does not change the performance of the original system and has a high capacity for modulation protection.
Incremental Indoor Localization for Device Diversity Issues
XIA Jun, LIU Jun-fa, JIANG Xin-long, CHEN Yi-qiang
Computer Science. 2018, 45 (10): 69-77.  doi:10.11896/j.issn.1002-137X.2018.10.014
With the rapid development of Wireless Local Area Networks (WLAN), Received Signal Strength (RSS) based indoor localization has become a hot area in research and applications. Among the various up-to-date indoor localization methods, fingerprint-based methods are most widely used because of their good performance, and their accuracy is determined by the consistency between the training and testing datasets. However, in practical applications, existing fingerprint-based methods and systems face three problems. Firstly, the localization error caused by device variance is severe. Secondly, the wireless data change as time passes, leading to a reduction in prediction accuracy. Thirdly, traditional fingerprint-based methods and systems cannot avoid depending on a large amount of labeled data to keep effective positioning performance, which usually involves high cost in labor and time. To solve these problems, this paper proposed an incremental indoor localization method for device diversity issues, which keeps the model updated in real time by training on the uncalibrated data collected during localization. Experimental results show that the proposed method can increase the precision of the overall indoor localization system, especially when the error distance is between 3 and 5 meters. Moreover, this method shows a clear advantage in timeliness compared with traditional indoor localization methods on a real BLE dataset.
Radio Resource Optimization Mechanism Based on Time-reversal in Device-to-Device Communication Network
LI Fang-wei, ZHANG Lin-lin, ZHU Jiang
Computer Science. 2018, 45 (10): 78-82.  doi:10.11896/j.issn.1002-137X.2018.10.015
Aiming at the interference between D2D users and cellular users in D2D heterogeneous wireless communication networks, this paper proposed a radio resource optimization mechanism based on time reversal. The mechanism includes two procedures: 1) the time-reversal mirror is combined to realize interference cancellation in the uplink transmission system, namely, channel signatures are applied for each user so as to extract useful signals and remove interference while obtaining the SINR (Signal to Interference plus Noise Ratio) of system users; 2) according to the SINR, a power control algorithm is adopted to adjust user transmission power in combination with convex optimization theory to maximize system throughput. Simulation results show that the proposed mechanism effectively suppresses mutual interference between cellular users and D2D users in the heterogeneous network, improves the system capacity, satisfies the reliability requirement of communication, and ensures that users obtain higher QoS (Quality of Service).
Cluster-based Real-time Routing Protocol for Cognitive Multimedia Sensor Networks
LI Ling-li, BAI Guang-wei, SHEN Hang, WANG Tian-jing
Computer Science. 2018, 45 (10): 83-88.  doi:10.11896/j.issn.1002-137X.2018.10.016
The variability of channels in cognitive radio sensor networks makes the transmission of multimedia data more difficult, and how to deliver data to the sink in real time is a problem faced by many researchers. This paper proposed a Cluster-Based Real-Time Routing (CBRTR) protocol for cognitive multimedia sensor networks. The expected available time of channels is estimated by forecasting the primary user's activity, and based on it the appropriate channel is chosen for data transmission. Meanwhile, reliability is considered to keep the data loss probability within a reasonable extent, so that data can be transmitted reliably to the sink within the required time. When choosing the next hop, this paper considered not only the distance but also the expected available time of channels, so that CBRTR reduces the waste of channel available time as much as possible. Simulation results show that the proposed CBRTR can balance nodes' energy, prolong network lifetime, and achieve real-time, reliable transmission of data.
Improved MAC Protocol in Radio-over-fiber Networks and Its Performance Analysis
GUAN Zheng, YANG Zhi-jun, QIAN Wen-hua
Computer Science. 2018, 45 (10): 89-93.  doi:10.11896/j.issn.1002-137X.2018.10.017
In simulcast radio-over-fiber based distributed antenna systems (RoF-DAS), a single set of base stations simultaneously broadcasts wireless signals to multiple remote antenna units (RAUs) in the downlink, and in the uplink, user stations covered by different RAUs are connected to a wireless local-area network access point over fiber links of different lengths. To provide contention-free media access control, this paper presented a parallel gated-service IEEE 802.11 point coordination function (PGPCF) with a gated service scheme and piggyback technology, improving the throughput of channel resources and reducing the delay introduced by the fiber. This paper also established a Markov chain analysis model to obtain closed-form expressions of the mean queue length and the throughput, which overcomes the shortcoming of analyzing the access mechanism of RoF networks by experiments only. Both the theoretical analysis and simulation results confirm the validity of the protocol in improving network throughput and shortening the mean queue length.
Subsection Model Based Error-resilient Decoding Algorithm for Source Coding
WANG Gang, PENG Hua, JIN Yan-qing, TANG Yong-wang
Computer Science. 2018, 45 (10): 94-98.  doi:10.11896/j.issn.1002-137X.2018.10.018
Aiming at the error diffusion problem of lossless source coding, a subsection decoding model for source sequences was constructed based on MAP (Maximum A Posteriori) estimation, and an error-resilient decoding algorithm based on a statistical model was then proposed. The algorithm makes full use of the residual redundancy of source-coded data, overcomes the sensitivity of losslessly compressed data to bit errors, and provides a new solution for error-resilient decoding of compressed text data. Experimental results show that this algorithm is able to correct errors in the source data and significantly reduces information loss.
AODV Routing Strategy Based on Joint Coding and Load Balancing
WANG Zhen-chao, SONG Bo-yao, BAI Li-sha
Computer Science. 2018, 45 (10): 99-103.  doi:10.11896/j.issn.1002-137X.2018.10.019
For wireless mesh networks, this paper proposed an optimized routing strategy, Coding-aware and Load-balanced AODV (CLAODV). The strategy not only enables the AODV routing protocol to support inter-stream coding, but also solves the load imbalance problem caused by inter-stream coding. The proposed strategy allows multiple downstream nodes of a coding node to jointly decode the same coded packet to increase the coding opportunities on the path. A new routing metric parameter, ECTXL, which simultaneously reflects the coding gain, the path packet loss rate and the path load level, was designed, and CLAODV performs routing based on this parameter. The simulation results show that, compared with other related routing strategies, the proposed CLAODV routing strategy can effectively increase path coding opportunities, improve network throughput, and significantly reduce routing delay and bandwidth resource overhead.
Adaptive Indoor Location Method for Multiple Terminals Based on Multidimensional Scaling
FU Xian-kai, JIANG Xin-long, LIU Jun-fa, ZHANG Shao-bo, CHEN Yi-qiang
Computer Science. 2018, 45 (10): 104-110.  doi:10.11896/j.issn.1002-137X.2018.10.020
Indoor localization is a hot research topic in the field of pervasive computing. At present, indoor localization methods are mainly divided into those based on signal propagation models and those based on wireless signal fingerprints. The fingerprint-based method is more widely used because it does not need to know the locations of the wireless signal APs, but it needs to collect a large amount of data at the offline stage to build a rich fingerprint database, which requires a lot of manual calibration. For this reason, this paper proposed a localization method based on the spatial relations of fingerprints. Compared with traditional fingerprint localization methods, this method does not need to build a fingerprint database. It uses Wi-Fi fingerprints from multiple terminals to extract the similarity of fingerprints and construct a dissimilarity matrix, and finally applies the multidimensional scaling (MDS) algorithm to construct a relative location map for all terminals; each terminal can then be positioned once the positions of more than 3 terminals are determined. In this paper, support vector regression (SVR) is used to estimate the distance between arbitrary terminals, and the distance matrix is used as the dissimilarity matrix. A shopping mall of about 2500 square meters was selected as the testing environment, and the average positioning error of the proposed method is about 7 meters.
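The relative-map construction step relies on classical multidimensional scaling, which can be sketched in a few lines of NumPy; the distance matrix below is made up, whereas the paper regresses it from Wi-Fi RSS with SVR.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical MDS: recover relative coordinates from a pairwise distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered squared distances
    w, V = np.linalg.eigh(B)                     # eigen-decomposition (ascending order)
    idx = np.argsort(w)[::-1][:dim]              # keep the largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Pairwise distances between 4 terminals (hypothetical values).
D = np.array([[0, 3, 4, 5],
              [3, 0, 5, 4],
              [4, 5, 0, 3],
              [5, 4, 3, 0]], dtype=float)
coords = classical_mds(D)        # relative map, defined up to rotation/translation
print(np.round(coords, 2))
```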
Wireless Sensor Routing Algorithm Based on Energy Balance
SU Sheng-chao, ZHAO Shu-guang
Computer Science. 2018, 45 (10): 111-114.  doi:10.11896/j.issn.1002-137X.2018.10.021
In order to improve the service life of wireless sensor networks and make up for the shortcomings of traditional routing algorithms, a wireless sensor routing algorithm based on energy balance was proposed. Firstly, the energy consumption process of wireless sensor nodes is analyzed, and the routing table from the source node to the destination node is built. Secondly, each node determines its adjacent nodes through single-hop messages, and its remaining energy information is delivered to those adjacent nodes. Finally, according to the pheromone concentration of the ant colony algorithm and the local energy, the next-hop node for transmitting data is selected. The experimental results show that the proposed algorithm has low energy consumption, ensures energy consumption balance, and extends the lifetime of wireless sensor nodes.
Base Station Selection Optimization Method Oriented at Indoor Positioning
CHEN Shi-jun, WANG Hui-qiang, WANG Yuan-yuan, HU Hai-jing
Computer Science. 2018, 45 (10): 115-119.  doi:10.11896/j.issn.1002-137X.2018.10.022
Indoor positioning based on the cellular network has become the preferred carrier-class method: thanks to the common infrastructure of the communication network, it has wide coverage and requires no reinvestment in infrastructure, and it has become one of the hot spots in the field of 5G communication. In the indoor cellular positioning scenario, the layout of base stations directly affects the number of first-arrival paths, the time of arrival (TOA), the measurement error and so on, and hence the positioning accuracy. A base station selection optimization algorithm for indoor positioning was proposed, which reduces the deviation caused by the base station layout. Firstly, an indoor three-dimensional positioning model with error suppression is proposed to suppress the inaccuracy of single-model localization: TOA information is used to suppress the error caused by virtual locating points in the TDOA model. Secondly, according to the results of different base station selections, isolated points are removed by the idea of secondary clustering, and the position of the positioning point is determined according to the class with the largest number of sample nodes in the clustering result. The experimental results show that the base station selection optimization algorithm reduces the average deviation of indoor positioning by 15.49% compared with other optimization algorithms.
Multi-user Network Analysis of BC Unicast and BC Multicast Coexistence
YU Zhen-chao, LIU Feng, ZENG Lian-sun
Computer Science. 2018, 45 (10): 120-123.  doi:10.11896/j.issn.1002-137X.2018.10.023
A new multi-user system in which BC unicast and BC multicast coexist was proposed, together with a new null-space interference suppression method. The main network of the model uses a "loop mode" to allocate the expected messages to the receivers, forming the multicast network; the secondary network uses a "one-over-one" mode to allocate the desired messages to the receivers, constituting a unicast network. The zero-forcing method first obtains the corresponding null space according to the interference messages at each receiver, and then takes the intersection of several null spaces. Finally, multiple interference messages are placed in the corresponding intersection space at the same time, so that more than one interference message is zero-forced at each receiver. For this multi-user system, generalized results for the optimized antenna configuration scheme and the reuse gain of the system are obtained. The system was simulated in Matlab, and the results show that the theoretical value of the system reuse gain is consistent with the simulation results.
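The null-space step of zero-forcing can be illustrated with a short SVD-based sketch: the right singular vectors associated with zero singular values span the subspace in which a transmitted signal causes no interference. The antenna counts are arbitrary, and intersecting several null spaces amounts to stacking the corresponding channel matrices.

```python
import numpy as np

def null_space(H, tol=1e-10):
    """Orthonormal basis of the null space of an interference channel matrix H,
    taken from the right singular vectors with (near-)zero singular values."""
    U, s, Vh = np.linalg.svd(H)
    rank = int((s > tol).sum())
    return Vh[rank:].conj().T        # columns span {x : H x = 0}

# A precoder built from this basis keeps the transmit signal in the null space of
# the interfered receiver's channel, so that receiver sees (almost) no interference.
H_interf = np.random.default_rng(1).standard_normal((2, 4))  # 2 rx antennas, 4 tx antennas
V = null_space(H_interf)
print(np.allclose(H_interf @ V, 0))  # True: the interference is zero-forced
```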
Sensor-based Adaptive Rate Control Method for Mobile Streaming
XIONG Li-rong, YOU Ri-jing, JIN Xin
Computer Science. 2018, 45 (10): 124-129.  doi:10.11896/j.issn.1002-137X.2018.10.024
Mobile video streaming services have received more and more attention, and rate adaptation mechanisms for intelligent terminals have become a research hotspot. Watching high-definition movies costs mobile users plenty of network traffic; when users are not interested in the movies, or the terminals are far away from the users, watching HD movies does not bring a good experience and wastes a lot of wireless network traffic. This paper designed a sensor-based rate-adaptive decision model that takes into account the user's viewing location, interests and equipment status. The model can optimize the traditional rate decision mechanism to form a sensor-based hybrid rate decision method. The experiments show that the proposed sensor-based rate adaptation model can effectively save wireless network bandwidth resources when network bandwidth is insufficient.
Study on Channel-aware Expected Energy Consumption Minimization Strategy in Wireless Networks
HUANG Rong-xi, WANG Nao, XIE Tian-xiao, WANG Gao-cai
Computer Science. 2018, 45 (10): 130-137.  doi:10.11896/j.issn.1002-137X.2018.10.025
With the rapid development of wireless network technology, saving energy has become a very important topic in building green wireless networks. Due to the time-varying characteristics of the channel, higher energy utilization can be obtained by transmitting when the channel is in a good state. From the perspective of the data transmission energy consumption of the whole wireless network, this paper proposed an expected energy consumption minimization strategy (E2CMS) for data transmission based on optimal stopping theory. E2CMS delays the transmission of data until the best expected channel state is found, taking into account the maximum transmission delay and the given receiver power. This paper first constructed an energy consumption minimization problem with quality-of-service constraints, then proved by optimal stopping theory that E2CMS is a pure threshold strategy, and obtained the power threshold by solving a fixed-point equation with backward induction. Finally, simulations were conducted in a typical small-scale fading channel model, and E2CMS was compared with a variety of transmission scheduling strategies. The results show that E2CMS has smaller average energy consumption per unit of data and significantly improves network performance.
Information Security
Evaluation of Network Node Invasion Risk Based on Fuzzy Game Rules
LIU Jian-feng, CHEN Jian
Computer Science. 2018, 45 (10): 138-141.  doi:10.11896/j.issn.1002-137X.2018.10.026
In order to evaluate the network security state in real time and make up for the low accuracy and poor practicability of traditional network node intrusion risk assessment methods, a new network node intrusion risk assessment method based on fuzzy game rules was proposed. A finite state set is used to describe the network, and the benefit matrix and fuzzy game elements are given to obtain the expected income of intruders and network nodes; on this basis, the fuzzy game rules are given. The risk assessment model is constructed from the assets, threats, weaknesses and risk factors through the fuzzy game rules. After quantifying the strategy cost and income, the fuzzy game tree of the network node is established and the Nash equilibrium is obtained. Combined with the income functions of intruders and network nodes, the expected risk of network nodes under the fuzzy game rules is obtained, and the intrusion risk value of a network node is determined; a threshold is used to judge whether an alarm is needed to prevent the network node from being invaded. Experimental results show that the proposed method has high accuracy, reliability and practicability.
Mobile Location Privacy Protection Based on Anonymous Routing
XIONG Wan-zhu, LI Xiao-yu
Computer Science. 2018, 45 (10): 142-149.  doi:10.11896/j.issn.1002-137X.2018.10.027
To protect the location privacy of mobile users in location-based services, a mobile location privacy protection model based on anonymous routing was presented. In this model every mobile node can act as a forwarder, and rerouting is used to select a route. A randomly selected mobile node serves as the first forwarder; a query issued by the sending node is first encrypted with the public key of the location information server and then encrypted with the public key of the first forwarder, after which the sending node sends it to the first forwarder. The first forwarder receives the message and decides the next hop, which is either the location information server or a second forwarder: it first decrypts the message with its own private key and then encrypts it with the public key of the next hop. If the next hop is the second forwarder, it repeats what the first forwarder did, until the location information server receives the message. Theoretical analysis and experimental results show that the model ensures that neither the location information server nor any forwarder can acquire a mobile node's location, and it realizes the location privacy protection of mobile nodes at a low cost. Moreover, since the forwarder can be any node in the mobile network, the model is robust and does not fail due to the faults of individual nodes.
Online Health Community Searching Method Based on Credible Evaluation
CAO Yan-rong, ZHANG Yun, LI Tao, LI Hua-kang
Computer Science. 2018, 45 (10): 150-154.  doi:10.11896/j.issn.1002-137X.2018.10.028
With the rapid development of mobile Internet technology and online health communities (OHC), more and more patients and caregivers search for health information and seek medical advice before going to hospital. However, patients face a flood of answers and may be misled by unrelated advertisements, inaccurate suggestions and unreliable regimens. In order to reduce the noise that unreliable data introduces into sorting algorithms, this paper proposed a new algorithm that optimizes the ranking of search results using credible information on OHC platforms. The method utilizes the information of every OHC answer provider, including professional knowledge level, focused fields, answer-acceptance rate and so on, to estimate a credibility score. For each new question search, a combined sorting function of the content similarity and the provider's credibility score is used to obtain the result ranking. To further improve accuracy, the category of the search question is matched against the interested areas of answer providers. The experiments compare several optimizing factors and their corresponding results, and show that the new algorithm can effectively select more accurate answers on OHC platforms.
Improved Data Anomaly Detection Method Based on Isolation Forest
XU Dong, WANG Yan-jun, MENG Yu-long, ZHANG Zi-ying
Computer Science. 2018, 45 (10): 155-159.  doi:10.11896/j.issn.1002-137X.2018.10.029
An improved data anomaly detection method, SA-iForest, was proposed to address the low accuracy, poor execution efficiency and weak generalization ability of existing anomaly detection algorithms based on isolation forest. Isolation trees with high precision and large differences are selected to optimize the forest by a simulated annealing algorithm; at the same time, redundant isolation trees are removed, improving the construction of the forest. The SA-iForest-based anomaly detection method was compared with the traditional isolation forest algorithm and the LOF algorithm on standard simulation data sets, and its accuracy, execution efficiency and stability are significantly improved.
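For comparison, the baseline that SA-iForest starts from is the ordinary isolation forest, available off the shelf in scikit-learn; the simulated-annealing tree selection of the paper is not shown, and the data below are synthetic.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                       # normal data
X_out = rng.uniform(low=-6, high=6, size=(20, 2))   # injected anomalies
X_all = np.vstack([X, X_out])

# Plain isolation forest: the improved method would additionally pick a
# high-precision, diverse subset of these trees.
clf = IsolationForest(n_estimators=100, random_state=0).fit(X_all)
scores = clf.decision_function(X_all)   # lower score -> more anomalous
labels = clf.predict(X_all)             # -1 = anomaly, 1 = normal
print((labels[-20:] == -1).mean())      # fraction of injected outliers caught
```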
Identification of User’s Role and Discovery Method of Its Malicious Access Behavior in Web Logs
WANG Jian, ZHANG Yang-sen, CHEN Ruo-yu, JIANG Yu-ru, YOU Jian-qing
Computer Science. 2018, 45 (10): 160-165.  doi:10.11896/j.issn.1002-137X.2018.10.030
With the rapid development of Internet technology, various malicious access behaviors endanger network information security, so identifying users' roles and discovering malicious access behaviors has both theoretical significance and practical value for network security. Based on Web logs, an IP-assisted database was constructed to build a daily role model of IP users. On this basis, the sliding time window technique was introduced, the dynamic change over time was integrated into user role identification, and a dynamic user-role identification model based on sliding time windows was established. Then, by analyzing the characteristics of users' malicious access traffic, the user access traffic and the user's information entropy characteristics were weighted to construct an identification model based on multiple characteristics of malicious access behavior. The model can identify not only explosive and highly persistent malicious access behaviors, but also malicious access behaviors that are small in volume yet widely distributed. Finally, the model was implemented using big data storage and Spark in-memory computing technology. The experimental results show that, when the network traffic is abnormal, the proposed model can find users with malicious access behavior and identify users' roles accurately and efficiently, thus verifying its validity.
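One of the weighted features mentioned above, the information entropy of a user's access behavior, can be computed as in the short sketch below; the request log and the "low entropy plus high volume" reading are illustrative assumptions rather than the paper's exact feature definition.

```python
import math
from collections import Counter

def access_entropy(requests):
    """Shannon entropy of the URLs requested by one source IP in a time window."""
    counts = Counter(requests)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

# Hypothetical window: hammering a single URL gives high volume and low entropy,
# the kind of combination a weighted multi-feature detector would flag.
window = ["/login"] * 95 + ["/index", "/search", "/item?id=3", "/cart", "/help"]
print(len(window), round(access_entropy(window), 3))
```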
Spectrum-aware Secure Opportunistic Routing Protocol in Cognitive Radio Networks
WANG Lu, BAI Guang-wei, SHEN Hang, WANG Tian-jing
Computer Science. 2018, 45 (10): 166-171.  doi:10.11896/j.issn.1002-137X.2018.10.031
This paper proposed a spectrum-aware secure opportunistic routing (S2OR) protocol to address the dynamic characteristics of the spectrum and potential selective forwarding attacks in cognitive radio networks. During the spectrum sensing phase, the availability probability of links between cognitive nodes is analyzed by modeling primary user activity. During the route selection phase, a trust management model is exploited to examine a node's reliability for packet forwarding, so as to select relay nodes with high trustability and ensure the integrity of data transmission. With local information, a comprehensive routing metric called the expected throughput, consisting of link availability probability, link quality and node trustability, is computed. On this basis, cognitive nodes can opportunistically select candidate forwarding nodes and the corresponding data channels. Simulation results demonstrate that S2OR adapts well to spectrum dynamics, achieves higher throughput and reduces the impact of malicious attacks at the same time.
Flexibly Accessed and Vaguely Searchable EHR Cloud Service System
YAN Ming, ZHANG Ying-hui, ZHENG Dong, LV Liu-di, SU Hao-nan
Computer Science. 2018, 45 (10): 172-177.  doi:10.11896/j.issn.1002-137X.2018.10.032
In e-healthcare record systems (EHRS), some schemes exploit key-policy ABE (KP-ABE) to protect privacy: an access policy is specified by the user, and ciphertexts can be decrypted only when they match the user's access policy. Existing KP-ABE requires the access policies to be confirmed before key generation, which is not always practicable in EHRS because the policies are sometimes confirmed after key generation. Based on KP-ABE, this paper proposed a flexibly accessed and vaguely searchable EHR cloud service system. The system not only fulfills cloud ciphertext search based on a keyword fault-tolerance technique, but also allows users to redefine their access policies and generates keys for the redefined ones, so a precise policy is no longer necessary at key generation time. Finally, the scheme was proved to be secure.
Adaptive Blind Watermarking Algorithm Based on Sub-block Segmentation
LIU Wei, CHENG Cong-cong, PEI Meng-li, SHE Wei
Computer Science. 2018, 45 (10): 178-182.  doi:10.11896/j.issn.1002-137X.2018.10.033
To improve the robustness of watermarking, this paper proposed an adaptive full-blind watermarking algorithm based on sub-block segmentation (ABWASS). In watermark embedding, the adaptive watermark is embedded according to the characteristic matrix of the host image, and the key information of the watermark is integrated into the LH2 sub-band by an improved two-phase DWT, so as to obtain a watermarked image with dual watermarks. In watermark extraction, the key information for blocking is first separated, and then the characteristic matrix is obtained by DCT to extract the watermark. The embedding of the key information for blocking makes the proposed algorithm fully blind. Experimental results show that the proposed algorithm performs well under most conventional and geometric attacks and performs better under all mixed attacks, with a maximum improvement of 2.7%, and it also has good invisibility and practicability.
Software & Database Technology
Requirement Defect Detection Based on Multi-view Card Model
SU Ruo, WU Ji, LIU Chao, YANG Hai-yan
Computer Science. 2018, 45 (10): 183-188.  doi:10.11896/j.issn.1002-137X.2018.10.034
Requirements stem from the understanding and expectations of different stakeholders of the real system. Requirement elicitation is crucial in the whole process of software product development, and it often decides the quality of the software product, even its success or failure. Due to the influence of various complex factors, elicited requirements often contain defects such as incompleteness, inaccuracy and conflict; the ambiguity of requirement expression and the incompleteness and inconsistency of requirement description are the most common requirement defects. This paper proposed a card model based on multi-view requirements together with requirement defect detection rules. In the process of requirement acquisition, especially in the early period, incomplete and inconsistent requirement defects from stakeholders can be found through the requirement defect detection rules. Finally, the validity of the method is verified by experiments on three project cases.
Judgement Method of Evolution Consistency of Component System
ZHENG Jiao-jiao, LI Tong, LIN Ying, XIE Zhong-wen, WANG Xiao-fang, CHENG Lei, LIU Miao
Computer Science. 2018, 45 (10): 189-195.  doi:10.11896/j.issn.1002-137X.2018.10.035
The evolution consistency of a component system is a necessary condition for ensuring the reliability of evolution operations; if this condition is not satisfied, the evolved system will miss its established functional target. In response to this problem, this paper proposed a judgement method for the evolution consistency of component systems based on interfaces, process structures and internal behaviors. Firstly, in the evolved system, each component is considered a judgement executor, so that all components can participate collaboratively in the consistency judgement; based on the interface and the process structure, the consistency between the judgement executors and the global system can be judged. Secondly, when interface and process structure consistency is satisfied, the internal behavior consistency of components before and after evolution is judged. Finally, the complete analysis of a component case is used to describe the judgement method in detail and verify its feasibility.
Artificial Intelligence
Social Recommendation Method Integrating Matrix Factorization and Distance Metric Learning
WEN Jun-hao, DAI Da-wen, YU Jun-liang, GAO Min, ZHANG Yi-hao
Computer Science. 2018, 45 (10): 196-201.  doi:10.11896/j.issn.1002-137X.2018.10.036
In order to alleviate the cold start dilemma in traditional recommender systems, a novel social recommendation method integrating matrix factorization and distance metric learning was proposed, based on the assumption that distance reflects likability. The algorithm trains the samples and the distance metric simultaneously: the distance metric and the coordinates of users and items are updated to meet the distance constraints. Finally, users and items are embedded into a unified low-dimensional space, and the distance between users and items is used to generate recommendation results. The experimental results on the Douban and Epinions datasets show that the proposed method can effectively improve both the interpretability and the accuracy of recommender systems and is superior to recommendation methods based on matrix factorization alone. The results indicate that the proposed method mitigates the cold start dilemma in traditional recommender systems and provides another research idea for recommender systems.
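The matrix factorization half of the hybrid model can be sketched as plain SGD-trained latent factors; the distance-metric constraints that the paper adds on top are omitted, and the rating triples below are toy data.

```python
import numpy as np

def mf_sgd(ratings, n_users, n_items, k=10, lr=0.01, reg=0.05, epochs=50, seed=0):
    """Plain matrix factorization trained with SGD on (user, item, rating) triples."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))   # user latent factors
    Q = 0.1 * rng.standard_normal((n_items, k))   # item latent factors
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]                 # prediction error on one rating
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

ratings = [(0, 0, 5), (0, 1, 3), (1, 0, 4), (1, 2, 1), (2, 1, 2), (2, 2, 5)]
P, Q = mf_sgd(ratings, n_users=3, n_items=3)
print(round(float(P[0] @ Q[2]), 2))   # predicted rating for an unseen user-item pair
```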
Mobile User Interface Pattern Recommendation Based on Conflict Degree and Collaborative Filtering
JIA Wei, HUA Qing-yi, ZHANG Min-jun, CHEN Rui, JI Xiang, WANG Bo
Computer Science. 2018, 45 (10): 202-206.  doi:10.11896/j.issn.1002-137X.2018.10.037
Mobile user interface patterns are an effective way to improve the efficiency and quality of mobile interface development. Focused on the issue that the retrieval results of existing interface pattern retrieval methods cannot meet the requirements of interface development, a mobile user interface pattern recommendation method based on conflict degree and collaborative filtering was proposed. Firstly, the fuzzy c-means clustering algorithm is used to narrow the search range of interface patterns according to the requirements of mobile interface development. Secondly, two tensor models are constructed from the historical ratings and the conflict degrees of interface patterns, and a tensor factorization method based on the Hamiltonian Monte Carlo algorithm is employed to reconstruct these two tensor models. Finally, the recommended interface patterns are obtained by a linear method. Experimental results show that the proposed method is superior to existing methods in helping developers find interface patterns.
Terminal Neural Network Algorithm for Solution of Time-varying Sylvester Matrix Equations
KONG Ying, SUN Ming-xuan
Computer Science. 2018, 45 (10): 207-211.  doi:10.11896/j.issn.1002-137X.2018.10.038
In order to improve the convergence rate and precision, new types of terminal neural networks (TNN) and an accelerated form (ATNN) were proposed. The method has terminal attractor characteristics and can obtain an effective solution for time-varying matrices in finite time. In contrast to ANN, it is proved that TNN accelerates convergence and achieves finite-time convergence, which not only improves the rate of convergence but also yields high computing precision. The dynamic equations of time-varying Sylvester matrices are solved by the ANN, TNN and ATNN models respectively. In addition, the terminal neural network models are applied to the Katana6M180 manipulator to demonstrate the effectiveness of the proposed computing models in performing repeatable motion planning tasks. The simulation results verify the validity of the terminal neural network method.
Convergence Analysis of Artificial Bee Colony Algorithm:Combination of Number and Shape
HUO Jiu-yuan, WANG Ye, HU Zhuo-ya
Computer Science. 2018, 45 (10): 212-216.  doi:10.11896/j.issn.1002-137X.2018.10.039
Existing convergence analyses of the artificial bee colony (ABC) algorithm are based on global convergence analysis methods, which cannot show how convergence changes during the convergence process of ABC. Firstly, the method of combining number and shape is adopted, and the objective function diagram is used to divide the convergence process of ABC into a global search stage and an optimal region search stage through stage analysis. Then, the convergence process and the changes of each stage are analyzed one by one, based on the transfer characteristic that the artificial bees follow an approximately uniform distribution. Finally, the convergence results and changes of ABC are obtained. This method can clearly show the convergence advantages and defects of the ABC algorithm and reveal how the convergence probability of the algorithm changes.
Attribute Reduction Algorithm Using Information Gain and Inconsistency to Fill
LI Hong-li, MENG Zu-qiang
Computer Science. 2018, 45 (10): 217-224.  doi:10.11896/j.issn.1002-137X.2018.10.040
The attribute reduction of incomplete and inconsistent data is a major topic of data mining. Combining information gain and the inconsistency degree of data, this paper proposed an attribute reduction algorithm for incomplete and inconsistent data. First, information gain is introduced, the concept and calculation formula of the inconsistency degree are defined, and a data filling method based on information gain and inconsistency degree is given. Then, based on this data filling method, an attribute reduction algorithm is provided that uses the information gain weighted by the maximum inconsistency degree, with the inconsistency degree as heuristic information. Finally, the experimental results demonstrate the effectiveness of the proposed algorithm.
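Information gain itself is standard and can be computed as in the sketch below, using Gain(D, a) = H(D) − Σ_v |D_v|/|D| · H(D_v); the weighting by inconsistency degree and the filling step of the paper are not reproduced.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Gain of attribute index `attr`: H(D) minus the weighted conditional entropy."""
    n = len(rows)
    by_value = {}
    for row, y in zip(rows, labels):
        by_value.setdefault(row[attr], []).append(y)
    cond = sum(len(part) / n * entropy(part) for part in by_value.values())
    return entropy(labels) - cond

# Toy decision table: the first attribute fully separates the classes, so gain = 1.0.
rows = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "cool")]
labels = ["no", "no", "yes", "yes"]
print(information_gain(rows, labels, attr=0))
```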
Approach for Granular Reduction in Formal Context Based on Objects-induced Three-way Concept Lattices
CHANG Xin-xin, QIN Ke-yun
Computer Science. 2018, 45 (10): 225-228.  doi:10.11896/j.issn.1002-137X.2018.10.041
Attribute reduction in formal contexts is an important topic of formal concept analysis, and researchers have put forward many kinds of attribute reduction criteria and methods for formal contexts. This paper studied reduction in formal contexts based on objects-induced three-way concept lattices. A new approach for granular reduction was proposed by using the discernibility attributes of the objects; in this approach, the objects-induced three-way concept lattices do not need to be constructed. Furthermore, it is proved that granular reduction based on objects-induced three-way concept lattices and classification reduction based on rough sets are equivalent.
Attribute Reduction Based on Concentration Boolean Matrix under Dominance Relations
LI Yan, GUO Na-na, WU Ting-ting, ZHAN Yan
Computer Science. 2018, 45 (10): 229-234.  doi:10.11896/j.issn.1002-137X.2018.10.042
Under the framework of the dominance relation-based rough set approach (DRSA), attribute reduction was studied for inconsistent target information systems. Methods based on the dominance matrix are the most commonly used, but not all elements in the matrix are valid. The concentration dominance matrix preserves only the smallest sets of attributes that are useful for attribute reduction, so the computational complexity can be significantly reduced; the concentration Boolean matrix further improves the generation efficiency of the dominance matrix through Boolean algebra. This paper extended the concentration Boolean matrix method from equivalence relations to dominance relations: the concept of a concentration Boolean matrix was proposed for the dominance matrix, and a corresponding efficient reduction method was established to improve the efficiency of the reduction algorithm. Finally, nine UCI data sets were used in the experiments, and the results show the feasibility and effectiveness of the proposed method.
Crowd Counting Method Based on Multilayer BP Neural Networks and Non-parameter Tuning
XU Yang, CHEN Yi, HUANG Lei, XIE Xiao-yao
Computer Science. 2018, 45 (10): 235-239.  doi:10.11896/j.issn.1002-137X.2018.10.043
Because the performance of most existing crowd counting methods decreases when they are applied to a new scene, a crowd counting method based on non-parameter tuning was proposed within the framework of multilayer BP neural networks. Firstly, image blocks are cropped from the training images to obtain pedestrians of similar scale as the input of the crowd BP neural network model. Then, the predictive density map is learned by the BP neural network model to obtain representative crowd blocks. Finally, to deal with a new scene, the target scene is adapted on the trained BP neural network model by retrieving samples with the same attributes, including candidate block retrieval and local block retrieval. The data sets include the PETS2009, UCSD and UCF_CC_50 data sets, and the effectiveness of the proposed method is verified by the experimental results on these scenes. Compared with global regression counting methods and density estimation counting methods, the proposed method has advantages in mean absolute error and mean squared error, and overcomes the influence of differences between scenes and of foreground segmentation.
Dynamic Strategy-based Differential Evolution for Flexible Job Shop Scheduling Optimization
ZHANG Gui-jun, WANG Wen, ZHOU Xiao-gen, WANG Liu-jing
Computer Science. 2018, 45 (10): 240-245.  doi:10.11896/j.issn.1002-137X.2018.10.044
To solve the flexible job shop scheduling problem, a differential evolution optimization method based on a dynamic strategy was proposed in this paper. Firstly, within the framework of the differential evolution algorithm and taking the distance between individuals into consideration, an indicator of population crowding degree was designed to measure the distribution of the current population, so that the stage of the algorithm can be determined adaptively. Then, in view of the characteristics of different stages, corresponding mutation strategy pools were designed to realize dynamic, stage-based selection of mutation strategies and improve the search efficiency of the algorithm. Finally, test results on 10 benchmark functions show that the proposed algorithm is feasible and efficient. Based on the double-layer encoding of operations and machines, the best scheduling scheme was obtained by minimizing the maximum completion time.
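The underlying search engine is canonical differential evolution (DE/rand/1/bin); the sketch below runs it on a continuous test function, whereas the paper adds a crowding-degree indicator, stage-dependent mutation-strategy pools and a two-layer operation/machine encoding on top of this loop.

```python
import numpy as np

def de_optimize(f, bounds, pop_size=30, F=0.5, CR=0.9, iters=200, seed=0):
    """Canonical DE/rand/1/bin minimization of f over box constraints `bounds`."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    fit = np.array([f(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)       # mutation
            cross = rng.random(len(bounds)) < CR
            cross[rng.integers(len(bounds))] = True         # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])         # binomial crossover
            ft = f(trial)
            if ft <= fit[i]:                                # greedy selection
                pop[i], fit[i] = trial, ft
    return pop[fit.argmin()], fit.min()

best, val = de_optimize(lambda x: np.sum(x ** 2), bounds=[(-5, 5)] * 5)
print(round(val, 6))   # close to 0 for the sphere function
```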
Graphics, Image & Pattern Recognition
Improved Image Enhancement Algorithm Based on Multi-scale Retinex with Chromaticity Preservation
ZHANG Xiang, WANG Wei, XIAO Di
Computer Science. 2018, 45 (10): 246-249.  doi:10.11896/j.issn.1002-137X.2018.10.045
An improved image enhancement algorithm based on MSRCP (multi-scale Retinex with chromaticity preservation) was proposed to solve the problems of halo artifacts and color distortion. Firstly, the intensity image of the original image is obtained. Then, the intensity image is smoothed by guided filtering to estimate the illumination component, and the reflection component is estimated according to the Retinex principle. Finally, in the color restoration function, an S-curve function is used to obtain the final enhanced image. The experimental results show that this algorithm can effectively suppress halo artifacts and improve details; the overall color of the enhanced image is consistent with the original image, and the overall visual effect is improved.
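The classical multi-scale Retinex decomposition that this algorithm improves on can be sketched as log(I) − log(smoothed I) averaged over several scales; the sketch uses a Gaussian surround and OpenCV, while the paper substitutes guided filtering and adds an S-curve color restoration, so this is only the baseline.

```python
import cv2
import numpy as np

def multi_scale_retinex(intensity, sigmas=(15, 80, 250)):
    """Classical MSR on an intensity image: reflectance = log(I) - log(blurred I),
    averaged over several surround scales."""
    img = intensity.astype(np.float64) + 1.0          # avoid log(0)
    msr = np.zeros_like(img)
    for sigma in sigmas:
        illum = cv2.GaussianBlur(img, (0, 0), sigma)  # estimated illumination component
        msr += np.log(img) - np.log(illum)            # reflectance at this scale
    msr /= len(sigmas)
    return cv2.normalize(msr, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# "input.jpg" stands for any low-light test image.
gray = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)
cv2.imwrite("enhanced.jpg", multi_scale_retinex(gray))
```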
Multi-directional Weighted Mean Denoising Algorithm Based on Two Stage Noise Restoration
MA Hong-jin, NIE Yu-feng
Computer Science. 2018, 45 (10): 250-254.  doi:10.11896/j.issn.1002-137X.2018.10.046
Abstract PDF(4731KB) ( 659 )   
References | Related Articles | Metrics
To address the problem that some existing algorithms cannot effectively remove salt-and-pepper noise while preserving edges and details under high noise density,a multi-directional weighted mean denoising algorithm based on two-stage noise restoration was proposed.In the noise detection stage,the algorithm first introduces a variance parameter to judge the gray-level difference between the current pixel and its neighborhood pixels,and then designs the noise detector by combining the variance parameter with the gray-level extremes.In the noise restoration stage,a two-stage restoration method is introduced to restore the gray values of noisy pixels.The first stage uses an improved adaptive median filter;the second stage divides all noisy pixels into two types and applies different restoration schemes:one type is further restored by the mean filter and the other by the multi-directional weighted mean filter.Experimental results show that the proposed algorithm outperforms many state-of-the-art filters in terms of denoising and edge preservation.
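A simplified sketch of the detection-then-directional-restoration idea follows (the thresholds and the single-pass restoration are assumptions; the paper's adaptive median stage and pixel typing are omitted).

```python
# Flag pixels that are gray-level extremes AND deviate strongly from their
# neighborhood, then restore them with a mean taken along four directions.
import numpy as np

def detect_noise(img, diff_thresh=30.0):
    h, w = img.shape
    noisy = np.zeros((h, w), dtype=bool)
    pad = np.pad(img.astype(np.float64), 1, mode="edge")
    for y in range(h):
        for x in range(w):
            if img[y, x] in (0, 255):                       # gray-level extreme
                nb = pad[y:y + 3, x:x + 3]                   # 3x3 neighborhood
                if abs(img[y, x] - np.median(nb)) > diff_thresh:
                    noisy[y, x] = True
    return noisy

def directional_mean(img, noisy):
    out = img.astype(np.float64).copy()
    pad = np.pad(out, 1, mode="edge")
    dirs = [(0, 1), (1, 0), (1, 1), (1, -1)]                 # E-W, N-S, two diagonals
    for y, x in zip(*np.nonzero(noisy)):
        vals = [(pad[y + 1 + dy, x + 1 + dx] + pad[y + 1 - dy, x + 1 - dx]) / 2
                for dy, dx in dirs]
        out[y, x] = np.mean(vals)
    return out.astype(np.uint8)

rng = np.random.default_rng(0)
clean = rng.integers(60, 200, (64, 64)).astype(np.uint8)
noisy_img = clean.copy()
mask = rng.random(clean.shape) < 0.2
noisy_img[mask] = rng.choice([0, 255], mask.sum())
restored = directional_mean(noisy_img, detect_noise(noisy_img))
print("MAE before/after:", np.abs(clean - noisy_img.astype(int)).mean(),
      np.abs(clean - restored.astype(int)).mean())
```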
Efficient Method of Lane Detection Based on Multi-frame Blending and Windows Searching
CHEN Han-shen, YAO Ming-hai, CHEN Zhi-hao, YANG Zhen
Computer Science. 2018, 45 (10): 255-260.  doi:10.11896/j.issn.1002-137X.2018.10.047
Abstract PDF(3427KB) ( 1447 )   
References | Related Articles | Metrics
Lane detection is one of the most important research areas in assisted and automated driving.Many efficient lane detection algorithms have been proposed recently,but most of them still struggle to balance computational efficiency and accuracy.This paper presented a real-time and robust approach for lane detection based on multi-frame blending and window searching.Firstly,the image is cropped and mapped to create a bird's-eye view of the road.Then,the RGB image is converted to a binary image using a threshold derived from multi-frame blending.Next,the starting point of the lane line is calculated from the pixel density distribution in the near field of view,and the whole lane is extracted by sliding-window search.Finally,according to the features of the candidate lane,different lane models are defined and chosen,and the model parameters are obtained by Least Squares Estimation (LSE).The proposed algorithm shows good performance when tested on real-world data containing various lane conditions.
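A minimal sketch of the windowed search step is shown below, assuming a binary bird's-eye-view image is already available; the window count, margin and the second-order polynomial model are illustrative choices.

```python
# Sliding-window lane search: the lane base comes from the column histogram of the
# lower half of the binary image, and a polynomial x = f(y) is fitted by least squares.
import numpy as np

def sliding_window_lane(binary, n_windows=9, margin=20, minpix=30):
    h, w = binary.shape
    histogram = binary[h // 2:, :].sum(axis=0)
    x_current = int(np.argmax(histogram))          # starting point of the lane line
    window_h = h // n_windows
    ys, xs = np.nonzero(binary)
    lane_idx = []
    for win in range(n_windows):
        y_low, y_high = h - (win + 1) * window_h, h - win * window_h
        x_low, x_high = x_current - margin, x_current + margin
        good = np.where((ys >= y_low) & (ys < y_high) &
                        (xs >= x_low) & (xs < x_high))[0]
        lane_idx.append(good)
        if len(good) > minpix:                      # re-center the next window
            x_current = int(xs[good].mean())
    lane_idx = np.concatenate(lane_idx)
    # Least-squares fit x = a*y^2 + b*y + c over the collected lane pixels.
    return np.polyfit(ys[lane_idx], xs[lane_idx], 2)

# Synthetic demo: a slightly curved lane drawn into a binary image.
img = np.zeros((90, 60), dtype=np.uint8)
for y in range(90):
    img[y, 30 + y // 30] = 1
print("fitted coefficients:", sliding_window_lane(img))
```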
Saliency Object Detection Algorithm Integrating Focusness Feature of Frequency Domain Information
YUAN Xiao-yan, WANG An-zhi, WANG Ming-hui
Computer Science. 2018, 45 (10): 261-266.  doi:10.11896/j.issn.1002-137X.2018.10.048
Abstract PDF(3035KB) ( 796 )   
References | Related Articles | Metrics
Since visual attention prediction can locate the salient area of an image quickly and accurately,this paper integrated the frequency domain information of visual attention into saliency object detection so as to detect salient objects effectively in complex scenes.Firstly,an improved frequency-domain detection method is used to predict the visual attention of the image,and the frequency domain information is blended into the focusness feature to compute a frequency-domain focusness feature,which is combined with the color feature to generate the foreground saliency map.Next,the RBD background detection is optimized to generate the background saliency map.Finally,the foreground and background saliency maps are fused to generate the final saliency map.Extensive experiments were carried out on two challenging datasets (ECSSD and DUT-OMRON),and the results were evaluated by the PR curve,F-measure and MAE.Experimental results show that the proposed method is better than HFT,PQFT,HDCT,UFO,DSR and RBD,and it can handle images with complex scenes.
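For intuition only, the sketch below uses a spectral-residual saliency map (Hou and Zhang style) to show how frequency-domain information can give a coarse visual-attention prediction; it is a substitute for, not a reproduction of, the paper's improved frequency-domain detector.

```python
# Compact spectral-residual saliency: the residual of the log-amplitude spectrum is
# combined with the original phase and transformed back to form a saliency map.
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def spectral_residual_saliency(gray):
    f = np.fft.fft2(gray.astype(np.float64))
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    residual = log_amp - uniform_filter(log_amp, size=3)     # spectral residual
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = gaussian_filter(sal, sigma=3)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)

demo = np.zeros((96, 96))
demo[40:60, 40:60] = 1.0                  # a salient square on a flat background
sal = spectral_residual_saliency(demo)
print("mean saliency inside vs. whole image:", sal[40:60, 40:60].mean(), sal.mean())
```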
Method of Face Recognition and Dimension Reduction Based on Curv-SAE Feature Fusion
ZHANG Zhi-yu, LIU Si-yuan
Computer Science. 2018, 45 (10): 267-271.  doi:10.11896/j.issn.1002-137X.2018.10.049
Abstract PDF(3733KB) ( 665 )   
References | Related Articles | Metrics
Compared with traditional dimension reduction algorithms,stacked autoencoders (SAE) in deep learning can effectively learn features and achieve efficient dimension reduction,but their performance depends on the input features.The second-generation discrete curvelet transform can extract facial information,including edge and overview features,ensuring that the input features of the SAE are sufficient and thus making up for this shortcoming of SAE.Therefore,a new recognition and dimension reduction algorithm based on Curv-SAE feature fusion was proposed.Firstly,the face images are processed by the second-generation discrete curvelet transform to generate Curv-faces,which are used as the input features to train the SAE;then the features of different layers are used for the final classification and identification.Experimental results on the ORL and FERET face databases show that the feature information of the curvelet transform is richer than that of the wavelet transform;compared with traditional dimension reduction algorithms,the feature representation of SAE is more complete and the recognition accuracy is higher.
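A hedged sketch of the SAE stage alone is given below (the curvelet feature extraction is assumed to be done elsewhere; the layer sizes and training schedule are illustrative).

```python
# A two-layer autoencoder compresses the input features; the bottleneck code is what
# would be fed to the final classifier for recognition.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.rand(200, 1024)                      # stand-in for curvelet feature vectors

encoder = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 64))
decoder = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 1024))
model = nn.Sequential(encoder, decoder)
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(50):                        # reconstruction training
    optim.zero_grad()
    loss = loss_fn(model(X), X)
    loss.backward()
    optim.step()

codes = encoder(X).detach()                    # low-dimensional features for recognition
print("reduced from", X.shape[1], "to", codes.shape[1], "dims; recon loss", float(loss))
```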
Saliency Detection Based on Surroundedness and Markov Model
CHEN Bing-cai, WANG Xi-bao, YU Chao, NIAN Mei, TAO Xin, PAN Wei-min, LU Zhi-mao
Computer Science. 2018, 45 (10): 272-275.  doi:10.11896/j.issn.1002-137X.2018.10.050
Abstract PDF(4753KB) ( 619 )   
References | Related Articles | Metrics
Aiming at the problem of saliency detection,this paper proposed a saliency detection algorithm based on surroundedness and a Markov model.Firstly,surroundedness is used to predict the approximate region of the salient object from eye fixations.Secondly,the simple linear iterative clustering (SLIC) algorithm is used to segment the original image,and a graph model of the image is established on the superpixels.Next,the superpixels of the two boundaries farthest from the approximate region of the salient object are taken as virtual background absorbing nodes,the saliency value of each superpixel is calculated by the absorbing Markov chain,and the initial saliency map S1 is obtained.Then,the superpixels in the approximate region of the salient object are used as virtual foreground absorbing nodes,and the initial saliency map S2 is obtained by the absorbing Markov chain.S1 and S2 are then fused to obtain the saliency map S.Finally,the guided filter is used to smooth the saliency map to obtain a better result.Experimental results on two public datasets demonstrate that the proposed algorithm outperforms many state-of-the-art methods.
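The core absorbing-Markov-chain computation can be summarized as follows; the affinity matrix here is random and stands in for the real superpixel graph, so the sketch only shows the linear-algebra step.

```python
# With transient superpixel nodes (transition block Q) and absorbing boundary nodes,
# the expected absorption time N·1, where N = (I - Q)^(-1), serves as the saliency value.
import numpy as np

rng = np.random.default_rng(0)
n_transient, n_absorbing = 8, 4

# Hypothetical affinities between transient nodes and between transient/absorbing nodes.
W_tt = rng.random((n_transient, n_transient)); W_tt = (W_tt + W_tt.T) / 2
W_ta = rng.random((n_transient, n_absorbing))
row_sum = W_tt.sum(1) + W_ta.sum(1)
Q = W_tt / row_sum[:, None]                    # transient -> transient transitions
# (the transient -> absorbing block carries the remaining probability mass)

N = np.linalg.inv(np.eye(n_transient) - Q)     # fundamental matrix
absorbed_time = N.sum(axis=1)                  # expected steps before absorption
saliency = (absorbed_time - absorbed_time.min()) / np.ptp(absorbed_time)
print("per-superpixel saliency:", np.round(saliency, 3))
```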
Remote Sensing Targets Detection Based on Adaptive Weighting Feature Dictionaries and Joint Sparse
WANG Wei, CHEN Jun-wu, WANG Xin
Computer Science. 2018, 45 (10): 276-280.  doi:10.11896/j.issn.1002-137X.2018.10.051
Abstract PDF(2572KB) ( 573 )   
References | Related Articles | Metrics
With the improvement of resolution,more and more useful information is contained in remote sensing images,which makes the processing of remote sensing data more complex and easily leads to the curse of dimensionality and poor recognition performance.In view of this situation,a remote sensing target detection approach (GJ-SRC) based on adaptive weighting feature dictionaries and joint sparse representation was proposed.Firstly,the Gabor transform is used to extract features from the training and testing images.Then,the contribution weight of each eigenvalue in the sparse representation is calculated,and the feature dictionary is constructed by an adaptive method,which makes the dictionary more discriminative.Finally,the common features of each category and the private features of a single image are extracted to form a joint dictionary,and the sparse representation of the test image over this dictionary is used for target recognition.To avoid the curse of dimensionality caused by the Gabor transform,the PCA method is used to reduce the dimension of the feature dictionary and thus the computational cost.Experiments show that this method achieves better detection performance than the existing SRC method and other remote sensing target detection methods.
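An SRC-style pipeline in miniature is sketched below; random vectors stand in for the Gabor features, and the adaptive weighting of the dictionary atoms is omitted, so this is only a generic sparse-representation classifier with PCA dimension reduction.

```python
# PCA reduces the dictionary dimension, each test sample is sparsely coded over the
# training dictionary (OMP), and the class with the smallest residual wins.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_classes, per_class, dim = 3, 20, 200
centers = rng.normal(0, 5, (n_classes, dim))
X = np.vstack([c + rng.normal(size=(per_class, dim)) for c in centers])
labels = np.repeat(np.arange(n_classes), per_class)

pca = PCA(n_components=30).fit(X)
D = pca.transform(X)                                   # reduced feature dictionary
D = D / np.linalg.norm(D, axis=1, keepdims=True)       # unit-norm atoms

def classify(sample):
    y = pca.transform(sample[None, :])[0]
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5, fit_intercept=False).fit(D.T, y)
    coef = omp.coef_
    residuals = [np.linalg.norm(y - D[labels == c].T @ coef[labels == c])
                 for c in range(n_classes)]
    return int(np.argmin(residuals))

test = centers[1] + rng.normal(size=dim)
print("predicted class:", classify(test))
```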
Object Contour Extraction Algorithm Based on Biological Visual Feature
WU Jing, YANG Wu-nian, SANG Qiang
Computer Science. 2018, 45 (10): 281-285.  doi:10.11896/j.issn.1002-137X.2018.10.052
Abstract PDF(3151KB) ( 599 )   
References | Related Articles | Metrics
Object contour extraction from natural scenes plays an important role in computer vision.However,it is difficult to preserve the integrity of the object contour in cluttered scenes because of non-meaningful edges produced by textured regions.Recently,the task has benefited from a biologically motivated mechanism called surround suppression (SS),which can preserve object boundaries while suppressing texture edges.Nevertheless,traditional models simply combine responses by intersection and union,and thus fail to process short edges with strong intensity response.This paper proposed an improved contour extraction algorithm for natural images based on biological visual features.Firstly,a candidate edge set is obtained by a multi-level suppression method.Secondly,an edge combination method based on biological visual features is used to combine the candidate edges into a complete contour.Experiments show that the proposed method improves accuracy and contour integrity compared with traditional surround suppression methods.
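The sketch below conveys only the basic surround-suppression idea (gradient energy suppressed by a ring-shaped average of the surrounding energy); the filter choices and weighting are assumptions and not the paper's multi-level model.

```python
# Surround suppression in miniature: texture edges, whose surroundings are also edge-rich,
# are suppressed more strongly than isolated object contours.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def surround_suppression(gray, sigma=2.0, alpha=1.0):
    gx, gy = sobel(gray, axis=1), sobel(gray, axis=0)
    energy = np.hypot(gx, gy)
    # Ring-shaped surround weighting: difference of two Gaussian smoothings.
    surround = np.clip(gaussian_filter(energy, 4 * sigma) - gaussian_filter(energy, sigma), 0, None)
    return np.clip(energy - alpha * surround, 0, None)

# Texture region (noise) on the left vs. a single strong step edge at column 40.
rng = np.random.default_rng(0)
img = rng.random((80, 80)) * 0.3
img[:, 40:] += 0.7
resp = surround_suppression(gaussian_filter(img, 1))
print("mean response near contour / in texture:",
      round(float(resp[:, 38:42].mean()), 3), round(float(resp[:, :30].mean()), 3))
```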
LDA Facial Expression Recognition Algorithm Combining Optical Flow Characteristics with Gaussian
LIU Tao, ZHOU Xian-chun, YAN Xi-jun
Computer Science. 2018, 45 (10): 286-290.  doi:10.11896/j.issn.1002-137X.2018.10.053
Abstract PDF(2782KB) ( 764 )   
References | Related Articles | Metrics
This paper presented a new method for facial expression recognition,which uses dynamic optical flow features to describe the differences between facial expressions and improve the recognition rate.Firstly,the optical flow features between a peak expression image and the neutral expression image are calculated.Then,the linear discriminant analysis (LDA) method is extended,and a Gaussian LDA method is used to map the optical flow features into the feature vectors of the facial expression image.Finally,a multi-class support vector machine classifier is designed to classify and recognize the facial expressions.The experimental results on the JAFFE and CK facial expression databases show that the average recognition rates of the proposed method are more than 2% higher than those of three benchmark methods.
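A pipeline sketch follows under stated assumptions: dense Farneback flow is used as the optical-flow step, standard LDA replaces the paper's Gaussian LDA extension, and random arrays stand in for the expression frames.

```python
# Optical-flow feature extraction, LDA projection, and multi-class SVM classification.
import numpy as np
import cv2
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

def flow_feature(neutral, peak):
    flow = cv2.calcOpticalFlowFarneback(neutral, peak, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return flow.reshape(-1)                     # (h*w*2,) motion descriptor

rng = np.random.default_rng(0)
n, h, w = 60, 48, 48
X = rng.integers(0, 256, (n, 2, h, w), dtype=np.uint8)     # (neutral, peak) image pairs
y = rng.integers(0, 3, n)                                   # 3 expression classes

feats = np.array([flow_feature(a, b) for a, b in X])
lda = LinearDiscriminantAnalysis(n_components=2).fit(feats, y)
clf = SVC(kernel="rbf").fit(lda.transform(feats), y)
print("training accuracy:", clf.score(lda.transform(feats), y))
```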
Interdiscipline & Frontier
Cell Verlet Algorithm of Molecular Dynamics Simulation Based on GPU and Its Parallel Performance Analysis
ZHANG Shuai, XU Shun, LIU Qian, JIN Zhong
Computer Science. 2018, 45 (10): 291-294.  doi:10.11896/j.issn.1002-137X.2018.10.054
Abstract PDF(2485KB) ( 1596 )   
References | Related Articles | Metrics
Molecular dynamics simulation is complex in both spatial and temporal scale,so it is critical to optimize the simulation process.Based on the data-parallel characteristics of the GPU hardware architecture,this paper combined the atomic partition and spatial partition of molecular dynamics simulation,optimized the Cell Verlet algorithm for short-range force calculation,and designed the core and basic algorithms of molecular dynamics on the GPU together with optimization and performance analysis.The implementation of the Cell Verlet algorithm starts from the atomic partition:each particle simulation task is mapped to a GPU thread,the simulated space is then divided into cells according to the spatial partition,a cell index table is established,and real-time location of the simulated particles is realized.Meanwhile,in the force calculation between particles,Hilbert's space-filling curve is introduced to keep the spatial locality of the particle layout consistent with the linear data storage,so as to cache and accelerate accesses to GPU global memory.This paper also used memory address alignment and block shared memory techniques to optimize the GPU molecular dynamics simulation process.Tests and comparative analysis of examples show that the current implementation achieves high parallelism and speedup.
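The following CPU reference sketch shows only the cell-list idea behind the Cell Verlet step (the GPU thread mapping, Hilbert-curve reordering and shared-memory details are omitted; box size and cutoff are illustrative).

```python
# Particles are binned into cells no smaller than the cutoff radius, so short-range
# pairs only need to be searched in the 27 neighbouring cells under periodic boundaries.
import numpy as np

def build_cell_index(pos, box, rc):
    n_cells = int(box // rc)
    cell_of = np.floor(pos / (box / n_cells)).astype(int) % n_cells
    return cell_of, n_cells

def neighbour_pairs(pos, box, rc):
    cell_of, n_cells = build_cell_index(pos, box, rc)
    buckets = {}
    for i, c in enumerate(map(tuple, cell_of)):
        buckets.setdefault(c, []).append(i)
    pairs = []
    for i, c in enumerate(cell_of):
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    nc = tuple((c + np.array([dx, dy, dz])) % n_cells)
                    for j in buckets.get(nc, []):
                        if j > i:
                            d = pos[i] - pos[j]
                            d -= box * np.round(d / box)        # periodic boundary
                            if (d ** 2).sum() < rc ** 2:
                                pairs.append((i, j))
    return pairs

rng = np.random.default_rng(0)
positions = rng.random((200, 3)) * 10.0
print("short-range pairs found:", len(neighbour_pairs(positions, box=10.0, rc=1.5)))
```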
OpenFlow Switch Packets Pipeline Processing Mechanism Based on SDN
WU Qi, WANG Xing-wei, HUANG Min
Computer Science. 2018, 45 (10): 295-299.  doi:10.11896/j.issn.1002-137X.2018.10.055
Abstract PDF(1312KB) ( 789 )   
References | Related Articles | Metrics
Currently,SDN (Software Defined Networking) has become the focus of research and development in the network field,but related work is mostly limited to campus networks and data center networks.Due to the limited processing efficiency of the control layer and data layer,research on ultra-large-scale networks such as the Internet is still largely blank.In order to improve the performance of SDN and make it suitable for large-scale networks,this paper explored the possibility of parallel acceleration in the SDN data layer,applying pipeline technology to the packet forwarding process of the OpenFlow switch.Combined with the SDN working specification provided by the southbound interface protocol OpenFlow,a 3-stage pipeline processing mechanism was designed for OpenFlow switch packet transmission.The design and simulation of this system show that introducing a pipeline into SDN can effectively speed up the packet forwarding of the OpenFlow switch.
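As a toy illustration only (the stage split and flow entries below are assumptions, not the OpenFlow specification), a 3-stage packet pipeline can be modeled with concurrent stages connected by queues, so successive packets are processed in parallel.

```python
# Parsing, flow-table matching and forwarding run as concurrent pipeline stages.
import queue
import threading

flow_table = {"10.0.0.1": "port1", "10.0.0.2": "port2"}   # hypothetical flow entries
STOP = object()

def stage(func, qin, qout):
    while True:
        item = qin.get()
        if item is STOP:
            qout.put(STOP)
            return
        qout.put(func(item))

parse = lambda raw: {"dst": raw.split("|")[1]}                       # stage 1: parse header
match = lambda pkt: (pkt, flow_table.get(pkt["dst"], "controller"))  # stage 2: table lookup
forward = lambda pair: f"packet to {pair[0]['dst']} -> {pair[1]}"    # stage 3: output action

q1, q2, q3, q_out = (queue.Queue() for _ in range(4))
threads = [threading.Thread(target=stage, args=s) for s in
           [(parse, q1, q2), (match, q2, q3), (forward, q3, q_out)]]
for t in threads:
    t.start()
for raw in ["eth|10.0.0.1", "eth|10.0.0.2", "eth|10.0.0.9"]:
    q1.put(raw)
q1.put(STOP)

while True:
    item = q_out.get()
    if item is STOP:
        break
    print(item)
for t in threads:
    t.join()
```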
Optimization Selection Strategy of Cloud Storage Replica
WANG Xin, WANG Ren-fu, QIN Qin, JIANG Hua
Computer Science. 2018, 45 (10): 300-305.  doi:10.11896/j.issn.1002-137X.2018.10.056
Abstract PDF(1472KB) ( 721 )   
References | Related Articles | Metrics
In order to improve the efficiency of overall data scheduling in the cloud computing environment and to study the replica selection problem in cloud storage systems,an optimal selection strategy for cloud storage replicas based on the ant colony foraging principle was proposed.Exploiting the advantages of the ant colony algorithm in solving optimization problems,this strategy maps the foraging process of ant colonies in nature onto the replica selection process in cloud storage.Furthermore,the dynamics of pheromone change and the Gaussian probability distribution are used to optimize the replica selection method,so as to obtain the optimal solution over a set of replica resources and then respond to the data request with an appropriate replica.The experimental results on the OptorSim simulation platform show that the algorithm performs well:for example,the average job time is improved by 18.7% compared with the original ant colony algorithm,and the time consumption of replica selection is reduced to a certain extent,thereby reducing the network load.
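A hedged sketch of the selection idea follows; the cost model, Gaussian heuristic and parameters are assumptions rather than the paper's formulation.

```python
# Each replica node keeps a pheromone value; ants choose replicas by a roulette over
# pheromone combined with a Gaussian-weighted heuristic of the node's response time,
# and pheromone is reinforced on chosen fast replicas while evaporating elsewhere.
import numpy as np

rng = np.random.default_rng(0)
response_time = np.array([120.0, 80.0, 200.0, 95.0, 150.0])    # ms, per replica node
pheromone = np.ones_like(response_time)
alpha, beta, rho, sigma = 1.0, 2.0, 0.1, 50.0

def gaussian_heuristic(t):
    return np.exp(-(t - t.min()) ** 2 / (2 * sigma ** 2))       # favours fast nodes

for iteration in range(30):
    prob = (pheromone ** alpha) * (gaussian_heuristic(response_time) ** beta)
    prob /= prob.sum()
    pheromone *= (1 - rho)                                       # evaporation
    for ant in range(10):
        choice = rng.choice(len(response_time), p=prob)
        pheromone[choice] += 1.0 / response_time[choice]         # reinforcement

print("selected replica:", int(np.argmax(pheromone)),
      "selection probabilities:", prob.round(3))
```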
Flexibility Measurement Model of Command and Control Information Chain for Networked Operations
NAN Ming-li, LI Jian-hua, CUI Qiong, RAN Hao-dan
Computer Science. 2018, 45 (10): 306-312.  doi:10.11896/j.issn.1002-137X.2018.10.057
Abstract PDF(1892KB) ( 734 )   
References | Related Articles | Metrics
Flexibility is the key ability of a command and control information chain to effectively respond to dynamic complexity and uncertainty,and it plays an important role in ensuring that command and control information flows efficiently.Aiming at the flexibility measurement problem of the command and control information chain for networked operations,this paper first defined the concepts of operational node,command and control information flow,command and control information chain for networked operations,and flexibility,built the abstract structure of the information chain,and analyzed the connotation of flexibility of command and control information and its action process.Secondly,nine flexibility measurement indexes were proposed for the design,implementation and control phases,and the corresponding computing methods were given.Thirdly,an index weight determination and aggregation method was given,and the flexibility measurement model of the command and control information chain was built;according to the measurement results,the degree of flexibility can be judged.Finally,taking regional joint air defense operations as an example,the feasibility and effectiveness of the model are validated.
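A minimal sketch of the aggregation step is shown below; the index scores and weights are illustrative placeholders, not the paper's measured values or weighting method.

```python
# Nine normalized flexibility indexes from the design, implementation and control
# phases are combined by a weighted sum into a single chain-level flexibility degree.
import numpy as np

index_scores = np.array([0.80, 0.70, 0.90,     # design-phase indexes
                         0.60, 0.75, 0.85,     # implementation-phase indexes
                         0.70, 0.65, 0.90])    # control-phase indexes
weights = np.array([0.15, 0.10, 0.10, 0.10, 0.10, 0.15, 0.10, 0.10, 0.10])
assert abs(weights.sum() - 1.0) < 1e-9          # weights are assumed to be normalized

flexibility = float(index_scores @ weights)
print("information chain flexibility degree:", round(flexibility, 3))
```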
Infinite-horizon Optimal Control of Genetic Regulatory Networks Based on Probabilistic Model Checking and Genetic Algorithm
LIU Shuang, WEI Ou, GUO Zong-hao
Computer Science. 2018, 45 (10): 313-319.  doi:10.11896/j.issn.1002-137X.2018.10.058
Abstract PDF(1464KB) ( 690 )   
References | Related Articles | Metrics
Genetic regulatory networks (GRNs) are fundamental and significant biological networks,and biological system functions can be regulated by controlling them.In systems biology,one of the significant research topics is to construct a control theory of genetic regulatory networks through external intervention.Currently,as an important network model,the context-sensitive probabilistic Boolean network with perturbation (CS-PBNp) has been widely used to study the optimal control problem of GRNs.For the infinite-horizon optimal control problem,this paper proposed an approximate optimal control approach based on probabilistic model checking and a genetic algorithm.Firstly,the total expected cost defined in infinite-horizon control is reduced to the steady-state reward of a discrete-time Markov chain.Then,a CS-PBNp model containing a stationary control policy is constructed,the cost of a fixed control strategy is expressed as a temporal logic reward property,and it is computed automatically by the probabilistic model checker PRISM.Next,a stationary control policy is encoded as an individual in the solution space of the genetic algorithm;the fitness of each individual is computed by PRISM,and the approximate optimal solution is obtained by iteratively executing genetic operations.The experimental results obtained by applying the proposed approach to the WNT5A network illustrate its correctness and effectiveness.
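The genetic-algorithm layer can be sketched as follows; the fitness shown is a stand-in cost function (in the paper it would be obtained by asking PRISM for the steady-state reward of the CS-PBNp under the encoded stationary policy), and the encoding and operators are illustrative.

```python
# GA over bit-vector stationary control policies with a placeholder fitness.
import numpy as np

rng = np.random.default_rng(0)
n_genes, pop_size, n_generations = 8, 20, 40    # policy: one control bit per state

def fitness(policy):
    # Hypothetical expected cost: penalizes control effort and policy switching;
    # in practice this value would come from PRISM's steady-state reward computation.
    return float(policy.sum() * 0.3 + np.abs(policy - np.roll(policy, 1)).sum() * 0.1)

pop = rng.integers(0, 2, (pop_size, n_genes))
for gen in range(n_generations):
    costs = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(costs)[:pop_size // 2]]         # selection
    children = []
    for _ in range(pop_size - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_genes)
        child = np.concatenate([a[:cut], b[cut:]])           # one-point crossover
        flip = rng.random(n_genes) < 0.05                    # mutation
        child[flip] ^= 1
        children.append(child)
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmin([fitness(ind) for ind in pop])]
print("approximate optimal stationary policy:", best, "cost:", fitness(best))
```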