Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 44 Issue 2, 13 November 2018
Spatial Skyline Queries: Applications, Research and Challenges
YU Wei, ZHENG Ji-ping, WANG Hai-xiang, WANG Yong-ge, CHEN Jia-liang and JIANG Shun-qing
Computer Science. 2017, 44 (2): 1-16.  doi:10.11896/j.issn.1002-137X.2017.02.001
Abstract PDF(7866KB) ( 83 )   
References | Related Articles | Metrics
Spatial Skyline queries combined with dynamic attributes have been applied in many areas. This survey first introduced the definitions of spatial Skylines as well as traditional Skylines, and reviewed the development of Skyline queries over the past 15 years. Then, the survey emphasized applications of spatial Skylines in various areas, and summarized the key methods and techniques on the basis of these applications. Finally, the survey put forward some challenges and gave general research directions for spatial Skyline processing in the future.
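The traditional Skyline query mentioned above can be sketched in a few lines. Below is a minimal illustration (not from any of the surveyed papers) that keeps exactly the points not dominated by any other point, minimizing every coordinate:

```python
def skyline(points):
    """Return the Skyline (non-dominated set) of points, minimizing all coordinates.

    A point o dominates p if o <= p in every dimension and o != p
    (for tuples of equal length, this implies o < p in at least one dimension).
    """
    result = []
    for p in points:
        dominated = any(
            all(o[i] <= p[i] for i in range(len(p))) and o != p
            for o in points
        )
        if not dominated:
            result.append(p)
    return result
```

For example, among the 2-D points (1,3), (2,2), (3,1) and (3,3), only (3,3) is dominated (by (2,2)), so the Skyline is the other three points.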
Survey on Formal Semantics of UML Sequence Diagram
GUO Yan-yan, ZHANG Nan and TONG Xiang-rong
Computer Science. 2017, 44 (2): 17-30, 64.  doi:10.11896/j.issn.1002-137X.2017.02.002
Formal semantics of UML sequence diagrams is critical to expressing the dynamic interactions of a software system accurately. Therefore, a well-formed sequence diagram is a prerequisite for the analysis and verification of UML models and an important guarantee for improving the reliability of software systems. In this paper, different methods used to formalize UML sequence diagram semantics were summarized and compared with respect to their working mechanisms and their pros and cons. Meanwhile, the special issues involved in defining the semantics of UML sequence diagrams were discussed as well. Finally, some specific research topics and directions in this area were suggested.
Survey of Clustering Algorithms for Wireless Sensor Networks
XU Jing-jing, ZHANG Xin-hui, XU Bi-xiao and SUN Zhi-xin
Computer Science. 2017, 44 (2): 31-37.  doi:10.11896/j.issn.1002-137X.2017.02.003
Clustering algorithms are very important in the area of wireless sensor networks. From the viewpoint of energy balance and network lifetime, and according to whether a single central control node is responsible for the whole network, clustering algorithms can be divided into three categories. Some typical clustering algorithms and the latest achievements of recent years were briefly reviewed, and their characteristics and application scopes were analyzed. Finally, based on the existing studies on WSNs, some problems worthy of attention were pointed out, and the developing trends and prospects of WSNs were discussed.
Survey on Network Security Event Correlation Analysis Methods and Tools
JU An-kang, GUO Yuan-bo, ZHU Tai-ming and WANG Tong
Computer Science. 2017, 44 (2): 38-45.  doi:10.11896/j.issn.1002-137X.2017.02.004
At present, new network security attack events, typified by APT, are occurring with increasing frequency and are increasingly harmful to enterprise information infrastructure. These new types of attack are customized, concealed and persistent, which makes it more difficult for traditional detection methods to detect or predict such deeply hidden attacks in time. However, with the development of big data technology, information about security events and the system's running environment can be correlated effectively, and this makes it possible to detect new types of attack and threat. In this paper, we expounded the importance of security event correlation analytics, and then discussed the existing correlation analysis techniques from the aspects of event attributes, logical reasoning, statistics and machine learning. Finally, we introduced several commonly used open-source correlation analysis tools, and compared them comprehensively in terms of application scenarios, programming language, user interface and the correlation method used.
Survey on Temporal Topic Model Methods and Application
GUI Xiao-qing, ZHANG Jun, ZHANG Xiao-min and YU Peng-fei
Computer Science. 2017, 44 (2): 46-55.  doi:10.11896/j.issn.1002-137X.2017.02.005
With the fast development of the Internet, data have reached an unprecedented scale, and it is becoming more and more difficult to obtain valuable information from such massive data. The topic model is a probabilistic model which has been widely applied in natural language processing, text mining, information retrieval and other fields in recent years. Topic detection and temporal analysis can help users focus on the information they are interested in, and the temporal topic model has gradually become a hot research topic in computer science. Therefore, temporal topic models and their applications were surveyed in detail in this paper. Firstly, the basic knowledge of topic models and temporal topic models was introduced. Secondly, temporal topic models were categorized into several types; representative models were discussed and their advantages and disadvantages were analyzed. Thirdly, the applications of temporal topic models in several fields were summarized. Finally, future development trends of temporal topic models were presented.
Living Lab Approach for Innovation in Domestic Applications
WEI Wei-jie and LIU Zheng-jie
Computer Science. 2017, 44 (2): 56-64.  doi:10.11896/j.issn.1002-137X.2017.02.006
Technologies such as the Internet of Things, pervasive computing and home robots are becoming ever more integral to our domestic lives. However, user acceptance, usability and user experience of these new technologies are becoming increasingly important issues, which call for User-Centred Design (UCD) methods and tools. The Living Lab approach to studying technologies in a domestic context has the advantages of involving users actively, going deep into users' real-life environments and collecting data longitudinally. This paper first introduced the concept and principles of the Living Lab, then focused on cases that adopt Living Lab methods and tools in home environments. At the end, future research topics were explored.
Research on Distributed Principal Component Analysis Algorithm Based on MapReduce
YI Xiu-shuang, LIU Yong, LI Jie and WANG Xing-wei
Computer Science. 2017, 44 (2): 65-69.  doi:10.11896/j.issn.1002-137X.2017.02.007
With the popularity of the MapReduce parallel framework, the parallelization of various data mining algorithms has become a hot research area, and the principal component analysis (PCA) algorithm is receiving more and more attention as well. Summarizing recent research on the parallelization of PCA, we found that existing parallel PCA algorithms are not fully parallelized, especially in computing the eigenvalues of the matrix. The whole PCA algorithm is divided into two stages: solving the correlation coefficient matrix, and the singular value decomposition (SVD) of that matrix. By combining the MapReduce parallel framework with the QR decomposition of matrices, a new way to parallelize the SVD was proposed in this paper. We analyzed the speed of the parallel algorithm through experiments on data sets consisting of randomly generated double-precision floating-point matrices of different dimensions, and compared the results with the traditional serial algorithm to show the efficiency improvement. We then integrated the SVD algorithm into the PCA algorithm and proposed a parallel computing process for the correlation coefficient matrix, which parallelizes both stages of the PCA algorithm. Subsequently, we compared the existing, not fully parallelized PCA algorithm and the normal PCA algorithm with the proposed algorithm on matrices of different dimensions. Analyzing the speed-up ratio, we found that our algorithm consumes less time when processing massive data sets.
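The serial core of the second stage can be sketched as follows: a plain QR iteration that drives a symmetric matrix toward diagonal form, whose diagonal then holds the eigenvalues. This is an illustrative, single-machine stand-in for the step the paper distributes over MapReduce; the Gram-Schmidt decomposition and function names are mine, not the paper's.

```python
def qr_decompose(a):
    """Classical Gram-Schmidt QR decomposition of a square matrix (list of rows)."""
    n = len(a)
    cols = [[a[r][c] for r in range(n)] for c in range(n)]  # column view
    q_cols, r = [], [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = cols[j][:]
        for i, qi in enumerate(q_cols):
            r[i][j] = sum(qi[k] * cols[j][k] for k in range(n))
            v = [v[k] - r[i][j] * qi[k] for k in range(n)]
        r[j][j] = sum(x * x for x in v) ** 0.5
        q_cols.append([x / r[j][j] for x in v])
    q = [[q_cols[c][row] for c in range(n)] for row in range(n)]
    return q, r

def eigenvalues_qr(a, iters=200):
    """QR iteration: A_{k+1} = R_k Q_k. For a symmetric matrix this converges
    to a diagonal matrix whose entries are the eigenvalues."""
    n = len(a)
    for _ in range(iters):
        q, r = qr_decompose(a)
        a = [[sum(r[i][k] * q[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return sorted(a[i][i] for i in range(n))
```

For the 2x2 correlation-like matrix [[2,1],[1,2]] the iteration recovers the eigenvalues 1 and 3; the MapReduce version would parallelize the matrix products inside each iteration.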
Method of Loop Distribution and Aggregation for Partial Vectorization
HAN Lin, XU Jin-long, LI Ying-ying and WANG Yang
Computer Science. 2017, 44 (2): 70-74, 81.  doi:10.11896/j.issn.1002-137X.2017.02.008
There are a large number of loops that contain a few unvectorizable statements and many vectorizable statements. Loop distribution separates these statements into different loops, after which partial vectorization can be achieved. Currently, mainstream optimizing compilers support only a simple and aggressive form of loop distribution, resulting in large loop overhead and poor reuse of registers and cache. To solve these problems, a method of loop distribution and aggregation for partial vectorization was proposed. Firstly, two key issues in loop distribution were analyzed: the grouping of statements and the execution order of the distributed loops. Secondly, a modified topological sorting method was presented to achieve better loop aggregation, which reduces loop overhead. Finally, we evaluated the proposed method experimentally. The results show that the proposed method produces correct SIMD code and significantly improves the efficiency of the generated program.
Review for Research of Control Plane in Software-defined Network
LIU Lin and ZHOU Jian-tao
Computer Science. 2017, 44 (2): 75-81.  doi:10.11896/j.issn.1002-137X.2017.02.009
Software-defined networking (SDN) has emerged as a new networking paradigm. By decoupling the control plane from the data forwarding plane, it centralizes the controller with a global view of the network, establishes an open interface between the control plane and the data plane, and enables programmability of the network by external applications, making up for the shortcomings and limitations of the current networking infrastructure. The controller, an important component of SDN, has become a research hotspot. This paper focused on research on the SDN control plane. Firstly, we summarized the current status and classification of SDN controller technology; then we analyzed a series of existing issues in software-defined networking, such as consistency, scalability and load balancing. Finally, we discussed future research directions and trends of SDN technology.
Cache Management Mechanism Study in HTML5 Hybrid Mobile Social Applications
ZHONG Yuan, WANG Jing, HAN Yan-bo and XING Qi-yuan
Computer Science. 2017, 44 (2): 82-87, 111.  doi:10.11896/j.issn.1002-137X.2017.02.010
With the development of the mobile Internet, hybrid development, as a way of rapid development, has become a trend. This approach not only retains the characteristic of Web applications of running on both iOS and Android platforms, but also solves the problem that Web applications cannot call the underlying resources of the operating system. Nowadays, social applications generally contain a large number of pictures; when duplicate pictures are fetched from the server multiple times, they consume client traffic and increase access latency. However, the cache mechanisms of today's hybrid application development frameworks cannot solve this problem. Therefore, this paper proposed a picture cache management mechanism suitable for hybrid mobile social applications: it provides a native picture cache management module for developers working in hybrid mode, and it can be used in social applications. Meanwhile, we put forward a picture cache replacement algorithm based on social relationships, which considers several influence factors, including the closeness of social relationships among users, the LRU algorithm, and image size. Practical application shows that the proposed algorithm can increase the cache hit ratio of hybrid mobile social applications.
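A replacement policy combining the three factors named above might score each cached picture and evict the lowest-scoring entry. The weights, field names and linear combination below are illustrative assumptions of mine, not the paper's algorithm:

```python
import time

def eviction_score(entry, now=None):
    """Score a cached picture for eviction: lower score = evicted first.

    Hypothetical weighting of the three factors from the abstract:
    social closeness of the picture's owner, LRU recency, and image size
    (larger images are penalized so evicting them frees more space).
    """
    now = time.time() if now is None else now
    recency = 1.0 / (1.0 + (now - entry["last_access"]))  # LRU component
    closeness = entry["closeness"]                         # social component in [0, 1]
    size_penalty = entry["size_kb"] / 1024.0               # big files evicted sooner
    return 0.5 * closeness + 0.4 * recency - 0.1 * size_penalty

def pick_victim(cache, now=None):
    """Return the key of the cache entry with the lowest eviction score."""
    return min(cache, key=lambda k: eviction_score(cache[k], now))
```

A recently used picture from a close friend thus outlives an old, large picture from a distant contact.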
One-class Personalized Collaborative Ranking Algorithm Incorporating Social Network
LI Gai, CHEN Qiang, LI Lei and PAN Jin-cai
Computer Science. 2017, 44 (2): 88-92, 116.  doi:10.11896/j.issn.1002-137X.2017.02.011
The key idea of one-class personalized collaborative ranking algorithms is to make use of the partial order of items. In early research on these problems, the training data were only implicit feedback data sets, which limits ranking accuracy. With the advent of online social networks, in order to improve the performance of one-class personalized collaborative ranking, we proposed a new one-class personalized collaborative ranking algorithm incorporating social networks. We conducted experiments on two large real-world data sets with social information. The results illustrate that our approach achieves better performance than several traditional OCCF methods. Experiments also show that social network information plays an important role in improving the performance of one-class personalized collaborative ranking algorithms.
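The pairwise, partial-order objective underlying one-class collaborative ranking, plus one simple way to blend in social information, can be sketched as below. The trust-weighted blend is an illustrative assumption of mine, not the paper's exact model:

```python
import math

def bpr_loss(pos_score, neg_score):
    """Pairwise ranking loss for one training triple (user, preferred item,
    other item): minimized when the preferred item scores much higher.
    The one-class setting supplies only such partial orders, never ratings."""
    return -math.log(1.0 / (1.0 + math.exp(neg_score - pos_score)))

def social_score(base_score, friend_scores, trust=0.3):
    """Blend a user's own predicted score with the mean score of the user's
    social-network neighbours -- one illustrative way to inject social
    information into the ranking model."""
    if not friend_scores:
        return base_score
    return (1.0 - trust) * base_score + trust * sum(friend_scores) / len(friend_scores)
```

Training then descends on the sum of `bpr_loss` over sampled item pairs, with `social_score` shaping the per-item scores.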
D-VSSP:Distributed Social Network Privacy Preserving Algorithm
ZHANG Xiao-lin, ZHANG Chen, ZHANG Wen-chao, ZHANG Huan-xiang and YU Fang-ming
Computer Science. 2017, 44 (2): 93-97.  doi:10.11896/j.issn.1002-137X.2017.02.012
The processing efficiency of traditional social network privacy preserving technology is low for large-scale social network data. To solve this problem, a distributed vertex-splitting social network privacy preserving (D-VSSP) algorithm was proposed. D-VSSP processes large-scale social network data in parallel with the MapReduce computing model and a Pregel-like model. Firstly, the MapReduce model processes the vertex labels through label trivialization, trivialized-label grouping and exact grouping. Then, distributed vertex-splitting anonymization is realized through splitting-vertex election, based on the message passing mechanism of the Pregel-like model. The experimental results show that the D-VSSP algorithm is superior to traditional algorithms in processing efficiency on large-scale social network data.
Missing Data Imputation Approach Based on Tuple Similarity
WANG Jun-lu, WANG Ling, WANG Yan and SONG Bao-yan
Computer Science. 2017, 44 (2): 98-102, 106.  doi:10.11896/j.issn.1002-137X.2017.02.013
With the development of the Internet and information technology, data loss, damage and other problems are becoming more and more common. In particular, as data collection moves from manual work to machines, unstable storage media, transmission omissions and other causes make missing data more serious. A large number of missing values in a database not only seriously affect query quality, but also affect the accuracy of data mining and data analysis results. At present, there is no general method to deal with missing data; most strategies target only a certain type of missing value. Therefore, in view of the complex situation in which different types of missing values appear in incomplete data at the same time, this paper put forward a missing data imputation approach based on tuple similarity (IATS). Weighted association rules are extracted from the incomplete data set by data mining, and normal missing data are imputed according to these rules. For abnormal missing data, this paper introduced a data recommendation algorithm in which candidates are screened by tuple similarity calculation and the corresponding values are filled in, which greatly improves the effective utilization rate of the data and the quality of user query results. The experimental results show that the IATS strategy achieves better accuracy under the premise of ensuring the filling ratio.
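The idea of filling a gap from the most similar tuple can be sketched as follows. The weighted attribute-matching similarity is an illustrative stand-in for the paper's tuple-similarity measure, and the function names are mine:

```python
def fill_missing(tuples, target, weights):
    """Fill the missing (None) fields of `target` from its most similar complete tuple.

    Similarity here is a weighted count of matching non-missing attributes --
    a simple stand-in for the paper's tuple-similarity calculation.
    """
    def similarity(t):
        return sum(
            weights[i]
            for i, v in enumerate(target)
            if v is not None and t[i] == v
        )
    complete = [t for t in tuples if None not in t]
    best = max(complete, key=similarity)
    return tuple(v if v is not None else b for v, b in zip(target, best))
```

A tuple ("a", 1, None) is thus completed from whichever complete tuple agrees with it on the most (weighted) known attributes.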
Collaborative Filtering Recommendation Algorithm Based on User Characteristics and Expert Opinions
GAO Fa-zhan, HUANG Meng-xing and ZHANG Ting-ting
Computer Science. 2017, 44 (2): 103-106.  doi:10.11896/j.issn.1002-137X.2017.02.014
Collaborative filtering is one of the most widely used algorithms in recommender systems. After analyzing the low-precision problem caused by sparse data in conventional collaborative filtering algorithms, this paper proposed a collaborative filtering algorithm which integrates user characteristics and expert opinions. The algorithm analyzes user characteristics, compares the similarity between users and experts, and then calculates the similarity matrix. It reduces the sparsity of the data set and improves prediction accuracy. Our experimental results on the MovieLens dataset show that, with our algorithm, performance on the cold start problem and the accuracy of recommendation are greatly improved.
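Comparing a user with experts rather than with all (sparse) users can be sketched with cosine similarity and a similarity-weighted prediction. This is an illustrative reading of the abstract's idea; the data layout and weighting are my assumptions:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature/rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def predict(user_vec, experts):
    """Predict an item's score for a user as the similarity-weighted mean of
    expert ratings. `experts` maps an expert's feature vector (tuple) to that
    expert's rating for the item."""
    sims = [(cosine(user_vec, e), r) for e, r in experts.items()]
    denom = sum(abs(s) for s, _ in sims)
    return sum(s * r for s, r in sims) / denom if denom else 0.0
```

Because the expert set is small and dense, the similarity matrix stays well populated even when the user-item matrix is sparse, which is the sparsity-reduction effect the abstract describes.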
Approach for Test Case Generation Based on Data Flow Criterion
CHEN Jie-qiong, JIANG Shu-juan and ZHANG Zheng-guang
Computer Science. 2017, 44 (2): 107-111.  doi:10.11896/j.issn.1002-137X.2017.02.015
A control flow criterion may easily miss state-dependent relations in object-oriented programs. This paper presented an approach for automatic test case generation based on a data flow criterion: data flow analysis is used to obtain the definition-use pairs that the test suite should cover, and a genetic algorithm generates the test suite automatically, evolving the test cases according to a fitness function. The experimental results indicate that the test cases generated by our approach can detect more mutants than approaches based on branch and statement criteria, and that the fitness function designed in our approach decreases the number of generations needed.
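The coverage notion behind the data flow criterion can be illustrated by extracting covered definition-use pairs from an execution trace. The trace format below is a simplifying assumption of mine, not the paper's implementation:

```python
def def_use_pairs(trace):
    """Extract the def-use pairs covered by an execution trace.

    `trace` is a list of (line, event, var) where event is 'def' or 'use'.
    A pair (d, u) for variable v is covered when a use of v at line u is
    reached by the definition of v at line d with no intervening redefinition.
    """
    last_def = {}
    pairs = set()
    for line, event, var in trace:
        if event == "def":
            last_def[var] = line
        elif event == "use" and var in last_def:
            pairs.add((last_def[var], line, var))
    return pairs
```

A genetic algorithm's fitness can then reward test cases whose traces cover def-use pairs that the current suite still misses.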
Time-aware Entity Integration Framework in Heterogeneous Information Spaces
YANG Dan, CHEN Mo and SHEN De-rong
Computer Science. 2017, 44 (2): 112-116.  doi:10.11896/j.issn.1002-137X.2017.02.016
In heterogeneous information spaces, entities and associations generally carry time information, and entities with multiple time versions coexist. However, traditional entity integration (EI) ignores time information and does not support integration along the time dimension. In this paper, a time-aware EI framework for heterogeneous information spaces, T-EI, was proposed, which can aggregate large collections of heterogeneous entities into a set of clean and complete entity profiles with time information to support time-aware entity search. T-EI adopts a time-aware entity resolution algorithm leveraging the time information of entities and associations, and a time-aware data fusion algorithm considering data currency. Experimental results on real data sets demonstrate the feasibility and effectiveness of T-EI.
Synchronization Calibration Method between Local Clock of Sink Node and RTC Clocks of Sensor Nodes in Wireless Sensor Networks
PEI Xu-ming, LI Wen-yan, ZHU Zheng-hang and KANG Kai
Computer Science. 2017, 44 (2): 117-122.  doi:10.11896/j.issn.1002-137X.2017.02.017
In order to reduce the power consumption of sensor nodes as far as possible, sensor nodes with no pending business need to enter the dormant state. While a sensor node is dormant, only its internal RTC clock module keeps working. As the crystal oscillator of the RTC module is greatly affected by temperature and other factors, the RTC clock has low precision; the sensor node may therefore miss the preset time and fail to communicate with the sink node when it wakes up automatically. Hence, a new synchronization calibration method between the local clock of the sink node and the RTC clocks of the sensor nodes in wireless sensor networks was proposed. The method abandons the previous practice of directly applying temperature compensation to the crystal oscillator of the RTC module in sensor nodes. Instead, the sensor nodes dynamically adjust their own RTC clocks according to the local clock of the sink node, keeping the clock times of the sensor nodes and the sink node consistent.
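One simple way for a node to adjust its RTC against the sink's clock is a least-squares skew/offset fit over a few synchronization samples; future RTC readings are then rescaled as sink ≈ skew * rtc + offset. This sketch illustrates the idea of dynamic adjustment, not the paper's actual protocol:

```python
def rtc_correction(sink_times, rtc_times):
    """Estimate the RTC clock's skew and offset relative to the sink's local
    clock from paired synchronization readings, by a least-squares line fit.
    Returns (skew, offset) such that sink_time ~= skew * rtc_time + offset."""
    n = len(sink_times)
    mean_r = sum(rtc_times) / n
    mean_s = sum(sink_times) / n
    cov = sum((r - mean_r) * (s - mean_s) for r, s in zip(rtc_times, sink_times))
    var = sum((r - mean_r) ** 2 for r in rtc_times)
    skew = cov / var
    offset = mean_s - skew * mean_r
    return skew, offset
```

An RTC that runs 1% fast, for instance, yields a skew slightly below 1, so the node shortens its computed sleep intervals accordingly.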
Robust Approach for Holes Recovery of Wireless Sensor Networks
YAN Luo-heng and HE Yu-yao
Computer Science. 2017, 44 (2): 123-128, 146.  doi:10.11896/j.issn.1002-137X.2017.02.018
In wireless sensor hybrid networks composed of stationary and mobile nodes, coverage holes are one of the key problems because they directly reduce network performance. To solve this problem, a robust approach based on an improved artificial fish swarm algorithm was presented for hole recovery. The movement of mobile nodes is modeled on the motions of artificial fish, such as prey, follow and swarm, with network coverage as the objective function. Two new fish motions, jump and rebirth, were also introduced to enhance the convergence of the algorithm. A self-adaptive visual distance and step size are applied when the status of an artificial fish is updated to recover the network holes. Simulation experiments show the robustness of the algorithm: holes can be recovered efficiently, without location information or hole probing, using the least number of mobile nodes, and network coverage is improved significantly.
Hybrid Rate Adaptation Algorithm for Adaptive HTTP Streaming
XIONG Li-rong, LEI Jing-zhi and JIN Xin
Computer Science. 2017, 44 (2): 129-134, 162.  doi:10.11896/j.issn.1002-137X.2017.02.019
The rate adaptation algorithm is a hotspot and a difficulty in adaptive HTTP streaming (AHS). In this paper, we proposed a hybrid rate adaptation algorithm (Combined with Bandwidth and Buffer, CBB) for adaptive HTTP streaming. It works at the application layer, using a "probe" principle to estimate the real-time bandwidth and avoid frequent switching of the video rate. An exponentially weighted moving average (EWMA) smoother is then applied to the bandwidth estimate; its smoothing factor varies with the client buffer status to reduce buffer overflows. The scheduling strategy is designed to keep the buffer level within a balanced range as far as possible. The whole algorithm comprises four steps, estimating, smoothing, quantizing and scheduling, which proceed in a closed loop. The algorithm has been verified on libdash, the MPEG-DASH standard reference platform, and the experiments show that it performs well under varying network conditions.
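The smoothing and quantizing steps can be sketched as follows. The buffer-dependent choice of the EWMA factor and the specific bounds are illustrative assumptions of mine, not the CBB parameters:

```python
def smooth_bandwidth(prev_est, sample, buffer_level, buffer_target=30.0):
    """EWMA bandwidth smoother whose factor varies with the client buffer:
    a full buffer trusts the old estimate (smooth, avoids rate flapping),
    a low buffer reacts quickly to bandwidth drops."""
    alpha = min(0.9, max(0.1, buffer_level / buffer_target))
    return alpha * prev_est + (1.0 - alpha) * sample

def quantize(est_bw, ladder):
    """Quantizing step: pick the highest representation bitrate in the
    ladder that does not exceed the smoothed estimate."""
    candidates = [r for r in ladder if r <= est_bw]
    return max(candidates) if candidates else min(ladder)
```

Estimating feeds `smooth_bandwidth`, whose output `quantize` maps onto the MPD's bitrate ladder; scheduling then decides when to request the next segment, closing the loop.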
Realization and Optimization of Multi-hop D2D Communications System Based on Android Platform
QIN Heng-jia, MI Zhi-chao, DONG Chao and PENG Fei
Computer Science. 2017, 44 (2): 135-139, 151.  doi:10.11896/j.issn.1002-137X.2017.02.020
Device-to-device (D2D) communication is a new communication technique in which mobile users communicate directly, without traversing the base station (BS) or core network, by sharing small-cell resources. Currently, most D2D communication studies focus on single-hop D2D. Compared to single-hop D2D, multi-hop D2D has advantages in system capacity, communication coverage, data offloading, energy efficiency, etc. This paper realized a multi-hop D2D communication system based on Android smartphones. The optimized link state routing (OLSR) protocol is responsible for network management, and an application was developed for users to control the D2D communication program, with which users can carry out multi-hop D2D communication. In addition, the paper increased the wireless signal power to optimize important aspects of system performance, such as communication coverage and network link quality.
Research on Network Performance Based on Nomadic Group Mobility Model
LIU Jian-ming and LIN Dao-wei
Computer Science. 2017, 44 (2): 140-146.  doi:10.11896/j.issn.1002-137X.2017.02.021
In view of the fact that mobility models with independent movement between nodes cannot reflect the features of mobile ad hoc networks in real application scenarios, the nomadic group mobility model, which is applicable to the military field and vehicular networks, was adopted. After constructing the corresponding system model, a multi-copy relaying algorithm between groups was put forward, and the upper and lower bounds of network capacity and delay under the relaying mode of the nomadic group mobility model were derived, from which the corresponding trade-off rate is obtained. Simulation of node mobility shows that the model possesses good mobility characteristics, and the function curves of the relevant parameters show that better network performance can be obtained under this mobility model.
Optimal Path Planning for Mobile Sink in Random Distributed Wireless Sensor Networks
CHANG Jie and ZHANG Ling
Computer Science. 2017, 44 (2): 147-151.  doi:10.11896/j.issn.1002-137X.2017.02.022
In wireless sensor networks with a large number of normally distributed nodes, an efficient path planning scheme for a mobile sink was proposed in this paper to improve the network lifetime. Firstly, the network is divided into several subregions according to the distribution of nodes. Then, the best turning points of the sink are found on this basis in order to maximize the network lifetime. Finally, an optimal path is obtained. Extensive simulation results under NS-2 show that, compared with existing similar schemes, this scheme can effectively balance network energy consumption, prolong the network lifetime and achieve better network performance.
Improved Algorithm for Uneven Clustering Routing
WANG Lei, XIE Wan-wan, LIU Zhi-zhong and QI Jun-yan
Computer Science. 2017, 44 (2): 152-156.  doi:10.11896/j.issn.1002-137X.2017.02.023
An improved algorithm is presented that focuses on the "hot zone" problem of uneven cluster-based routing protocols in wireless sensor networks. This paper explored the cluster head selection scheme and the clustering multi-hop routing algorithm. The specific solutions include setting a threshold value and calculating an uneven clustering competition radius in the cluster competition stage. Furthermore, two parameters are added to the network energy cost formula in the multi-hop routing phase: the number of candidate relay nodes that can serve as the cluster's forwarding node, and the number of cluster members. The improved algorithm was compared with the EEUC and LEACH protocols by simulation. The results show that the algorithm prolongs the lifetime of the network and balances network energy consumption, and that the proposed scheme is an effective solution to the hot zone problem in wireless sensor networks.
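The uneven competition radius referred to above is, in the EEUC family of protocols, a linear function of a node's distance to the base station: nodes near the base station compete with a smaller radius, form smaller clusters, and so keep energy in reserve for relaying traffic through the "hot zone". A sketch of that radius formula (parameter names are mine):

```python
def competition_radius(d_to_bs, d_max, d_min, r_max, c=0.5):
    """EEUC-style uneven competition radius:

        R = (1 - c * (d_max - d) / (d_max - d_min)) * r_max

    where d is the node's distance to the base station, d_max/d_min are the
    farthest/closest node distances, r_max the maximum radius, and c in (0, 1)
    controls how strongly the radius shrinks near the base station."""
    frac = (d_max - d_to_bs) / (d_max - d_min)
    return (1.0 - c * frac) * r_max
```

The farthest node thus gets the full radius r_max, while the closest gets (1 - c) * r_max.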
Multi-hop Routing Algorithm for Wireless Sensor Networks Based on Uneven Clustering
WU Biao, CUI Chen, YU Jian and YI Ren-jie
Computer Science. 2017, 44 (2): 157-162.  doi:10.11896/j.issn.1002-137X.2017.02.024
Aiming to solve the problem of highly efficient networking of wireless sensor networks (WSNs) in complex and irregular scenarios, a multi-hop routing algorithm based on uneven clustering (MRAUC) was proposed. Firstly, exploiting the characteristics that the scenario has an irregular shape and the sink node is far away from the detection area, the algorithm approximates the detection area as an annular sector with the sink node at its center. Based on this annular-sector scenario, uneven clustering is established: the detection area is divided into annular sectors with equal segmentation, and the cluster head number and the best proportion for each annular sector are determined by minimizing the energy consumption of the first annular sector. Uneven clustering is realized by adaptively adjusting the transmitting power of the cluster heads. At the same time, the best relaying cluster head is determined by the MTE principle, which effectively overcomes the routing relay problem between cluster heads. The simulation results show that, compared with traditional algorithms, the new algorithm has significant advantages in balancing energy consumption between nodes and prolonging the network life cycle, and it is therefore more suitable for engineering practice.
Performance Analysis of Beidou Receiver under Interference
LIU Chun-ling and ZHANG Zi-hao
Computer Science. 2017, 44 (2): 163-170.  doi:10.11896/j.issn.1002-137X.2017.02.025
Analyzing the performance of the Beidou receiver in a complex jamming environment provides a specialized theoretical basis for anti-jamming. Through analysis of the direct-sequence spread-spectrum communication system model of satellite navigation, this paper analyzed and compared the BER performance and the equivalent carrier-to-noise ratio of the Beidou satellite navigation system under two common kinds of interference, and carried out digital simulation with Matlab. The simulation results show that narrowband interference has a greater impact on the receiver than wideband interference, and that the performance of the navigation signal degrades with the interference signal's level, bandwidth and position. Therefore, improving the performance of the navigation signal needs to consider all of the factors mentioned above.
Traffic Estimation for Data Center Network Based on Traffic Characteristics
QIAO Yan, JIAO Jun and RAO Yuan
Computer Science. 2017, 44 (2): 171-175.  doi:10.11896/j.issn.1002-137X.2017.02.026
The data center network (DCN) is the infrastructure of cloud computing and other distributed computing services. Understanding the characteristics of end-to-end traffic flows in DCNs is essential to DCN design and operation. However, it is extremely difficult to measure the traffic flows directly, and due to the distinct structure of DCNs, traditional traffic estimation methods cannot be applied to them. To address this problem, we first extracted coarse-grained traffic characteristics based on user resource allocation and link utilization. Then an efficient traffic estimation algorithm for DCNs was proposed, based on the gravity traffic model and network tomography. We compared our proposal with two classical traffic inference algorithms, Tomogravity and ELIA, on DCNs of different scales. The results show that the new algorithm outperforms the other two in both speed and accuracy. With the new method, network managers can obtain the end-to-end traffic of a DCN in real time.
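The gravity traffic model used as a starting point can be sketched in one function: the flow from node i to node j is taken proportional to i's total egress times j's total ingress. This sketch assumes total ingress equals total egress and is only the prior that tomography then refines against link measurements:

```python
def gravity_estimate(out_traffic, in_traffic):
    """Gravity-model prior for the end-to-end traffic matrix:

        T[i][j] = out_traffic[i] * in_traffic[j] / total

    where `out_traffic[i]` is node i's total egress, `in_traffic[j]` is
    node j's total ingress, and `total` is the overall traffic volume."""
    total = sum(out_traffic)
    return [[o * i / total for i in in_traffic] for o in out_traffic]
```

Network tomography then adjusts this prior so the implied link loads match the observed link utilizations.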
MA-ABE Access Control Scheme in Cloud Storage
LI Xie-hua, ZHOU Mao-ren and LIU Ting
Computer Science. 2017, 44 (2): 176-181.  doi:10.11896/j.issn.1002-137X.2017.02.027
In order to improve the security and efficiency of cross-domain data access in cloud storage, this paper proposed a multi-authority attribute-based encryption (MA-ABE) access control scheme. The new scheme uses split keys to guarantee the security of users' secret keys. In addition, proxy re-encryption is used to offload most of the re-encryption work to the cloud server when revocation occurs, which minimizes the computation cost for the data owner (DO). The split secret key components are generated and distributed by the DO and the attribute authorities (AAs) respectively, without using users' global identifiers (GID), which prevents authority collusion attacks. Finally, theoretical analysis shows that the new scheme is secure and performs well on revocation.
Fault Tree Generation Based on Fault Configuration
HUANG Ming-yu, WEI Ou and HU Jun
Computer Science. 2017, 44 (2): 182-191.  doi:10.11896/j.issn.1002-137X.2017.02.029
Abstract PDF(866KB) ( 50 )   
References | Related Articles | Metrics
Fault tree analysis is an effective method to improve system safety and reliability. However, traditional manual fault tree generation is error-prone and cannot cope with large-scale, complex systems. In order to systematically support system fault modeling and formal analysis, a fault tree generation method based on fault configuration was proposed in this paper by introducing variability management from software product lines into system fault modeling. Firstly, we defined a fault feature diagram for describing the constraints among faults and proposed a fault labeled transition system based on the Kripke structure to describe system behavior. Secondly, a model checking procedure for generating fault trees was established based on the model semantics. Finally, using the model checker SNIP, safety properties specified in temporal logic were verified and the fault tree was generated from the result. A case study shows the effectiveness of the proposed approach.
VHF:A Lightweight Block Cipher Based on Dual Pseudo-random Transformation and Feistel Structure
DAI Xue-jun, HUANG Yu-hua and LIU Ning-zhong
Computer Science. 2017, 44 (2): 192-194, 201.  doi:10.11896/j.issn.1002-137X.2017.02.030
Abstract PDF(308KB) ( 87 )   
References | Related Articles | Metrics
A new lightweight block cipher based on dual pseudo-random transformation and the Feistel structure, called VHF, was proposed to meet the demand of resource-constrained mobile terminals for lightweight ciphers. Similar to many other lightweight block ciphers, the block size of VHF is 128 bits and the key size is 80 or 128 bits. Security evaluation shows that VHF achieves a sufficient security margin against known attacks, such as differential cryptanalysis, linear cryptanalysis and impossible differential cryptanalysis. Furthermore, VHF can be implemented efficiently not only in hardware environments but also on software platforms such as 8-bit microcontrollers. The implementation efficiency of VHF in both software and hardware is higher than that of the CLEFIA algorithm, the international standard also oriented to 8-bit platforms.
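The Feistel structure that VHF builds on has the attractive property that decryption is the same network run with the round keys reversed, regardless of the round function. The sketch below shows only that generic structure with a toy 16-bit round function; it is not VHF's actual round function or key schedule:

```python
def feistel_encrypt(block, round_keys, f):
    """Generic balanced Feistel network (structure only, not VHF itself).

    block: (L, R) pair of half-block integers; f: round function mixing the
    right half with a round key.
    """
    L, R = block
    for k in round_keys:
        L, R = R, L ^ f(R, k)
    return L, R

def feistel_decrypt(block, round_keys, f):
    """Inverse: same structure with the round keys applied in reverse."""
    L, R = block
    for k in reversed(round_keys):
        L, R = R ^ f(L, k), L
    return L, R

# Toy round function on 16-bit halves (illustrative only).
f = lambda x, k: ((x * 31 + k) ^ (x >> 3)) & 0xFFFF
keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]
ct = feistel_encrypt((0x1234, 0x5678), keys, f)
pt = feistel_decrypt(ct, keys, f)
```

Note that f need not be invertible, which is what makes the structure convenient for lightweight designs.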
Research on MapReduce-based Data Auditing Method for Cloud Storage
JIN Yu and YAN Dong
Computer Science. 2017, 44 (2): 195-201.  doi:10.11896/j.issn.1002-137X.2017.02.031
Abstract PDF(600KB) ( 49 )   
References | Related Articles | Metrics
Cloud storage is a new network storage technology and an important service provided by cloud computing. It is very popular among cloud users for its speed, low price and convenience. However, it also brings many security problems for users' outsourced data. One major problem is ensuring the integrity of data on semi-trusted cloud servers, so cloud users and cloud servers are both in urgent need of a stable, safe and credible data auditing method. With the arrival of the big data era, traditional batch auditing methods are not efficient enough for the huge amounts of data in the cloud. What's more, with the popularity of mobile clients, traditional auditing methods place too heavy an online burden on cloud users. Therefore, this paper proposed a data auditing method for cloud storage based on the MapReduce programming framework. It uses proxy signatures to sign data on behalf of the cloud user, and it completes the work of data signing and batch auditing in parallel. The experimental results show that the proposed method clearly improves the efficiency of batch auditing, enhances the availability of cloud storage services and reduces the online burden on cloud users.
Integral Zero-correlation Cryptanalysis on Zodiac
MA Meng, ZHAO Ya-qun and LIU Qing-cong
Computer Science. 2017, 44 (2): 202-205.  doi:10.11896/j.issn.1002-137X.2017.02.032
Abstract PDF(295KB) ( 109 )   
References | Related Articles | Metrics
The Zodiac algorithm, designed by a group of Korean scholars, is a 16-round Feistel-type block cipher. In this paper, the security of Zodiac was evaluated from the viewpoint of integral zero-correlation cryptanalysis for the first time. Two groups of 13-round zero-correlation linear approximations for Zodiac were constructed, and an 8-round integral zero-correlation distinguisher of Zodiac was given, based on which an integral zero-correlation attack was mounted on the full-round Zodiac algorithm, successfully recovering 144 bits of round subkey. The results show that the integral zero-correlation attack on full-round Zodiac-128/192/256 needs 2^120 chosen plaintext-ciphertext pairs and about 2^82 full-round Zodiac encryptions, and its time complexity is clearly better than the existing integral attack results.
New Mutual Authentication for Lightweight RFID Protocols
LIU Yi and GU Guo-sheng
Computer Science. 2017, 44 (2): 206-208, 227.  doi:10.11896/j.issn.1002-137X.2017.02.033
Abstract PDF(328KB) ( 38 )   
References | Related Articles | Metrics
Radio frequency identification (RFID) is an automated identification technology widely used to identify and track all kinds of objects, and it is well suited to many fields. However, designing an authentication protocol is a challenging task because of the limited resources of lightweight RFID tags. Recently, a lightweight RFID authentication protocol was presented by Kulseng et al. This protocol uses physically unclonable functions (PUFs) and linear feedback shift registers (LFSRs), which are well known as lightweight operations. Unfortunately, their protocol faces several serious security issues. In this paper, based on PUFs and LFSRs, we proposed a new mutual authentication protocol for low-cost RFID systems. Security analysis shows that our protocol provides better security and privacy.
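An LFSR, one of the two lightweight primitives the abstract mentions, is just a shift register whose feedback bit is the XOR of a few tapped positions. A minimal Fibonacci-style sketch (the 4-bit width and tap positions are a textbook example, not the protocol's actual parameters):

```python
def lfsr_stream(state, taps, width, n):
    """Fibonacci LFSR: XOR the tapped bits to form the feedback bit.

    state: non-zero initial register value; taps: bit positions (0 = LSB);
    returns n output bits (the LSB before each shift).
    """
    mask = (1 << width) - 1
    out = []
    for _ in range(n):
        out.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state >> 1) | (fb << (width - 1))) & mask
    return out

# Taps (3, 0) correspond to the primitive polynomial x^4 + x + 1,
# so the 4-bit register cycles through all 15 non-zero states.
bits = lfsr_stream(0b1001, taps=(3, 0), width=4, n=15)
```

Its linearity is exactly why LFSR-only protocols tend to be breakable, and why the paper pairs LFSRs with PUFs.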
State Merging for Symbolic Execution Engine with Shape Analysis
DENG Wei and LI Zhao-peng
Computer Science. 2017, 44 (2): 209-215.  doi:10.11896/j.issn.1002-137X.2017.02.034
Abstract PDF(577KB) ( 62 )   
References | Related Articles | Metrics
Symbolic execution is widely used in static code analysis and automatic test generation for its well-controlled precision and code coverage. When applied to a program, symbolic execution traverses all possible states by simulating the execution of the program, analyzing data-flow and control-flow information to produce results. High precision and coverage require a detailed and complete description of program states, which leads to the path-explosion problem in almost all implementations of symbolic execution. State merging is an effective way to mitigate path explosion. We first proposed an algorithm for merging states from different paths, then abstracted the states appropriately to expand the application scope of the algorithm. Finally, we discussed the actual effect of state merging and put forward an optimization scheme. The whole algorithm is deployed in ShapeChecker, our symbolic execution tool, and experiments show good performance results.
Scenario-oriented Location Method of Android Applications
LV Zhao-jin, SHEN Li-wei and ZHAO Wen-yun
Computer Science. 2017, 44 (2): 216-221, 256.  doi:10.11896/j.issn.1002-137X.2017.02.035
Abstract PDF(1407KB) ( 59 )   
References | Related Articles | Metrics
When implementing new requirements or maintaining existing project code, Android application developers often need to acquire code fragments related to specific themes and understand their logical structures. As this involves analysis at the code level, locating code fragments and sorting out the implementation logic costs developers much time due to the complexity of code structures and poor coding style. Therefore, it is important to find a method that can locate such fragments quickly. In this paper, we proposed a scenario-oriented location method for Android applications. The method combines static and dynamic analysis techniques over an execution scenario of a specific theme, and then locates the code fragments associated with that theme. It comprises the steps of collecting and analyzing theme execution traces, statically analyzing the Android source code, matching and synthesizing the static and dynamic information, and visualizing the resulting method information. We furthermore implemented a plug-in tool to facilitate the search for code information with respect to a specific theme and to support highlighting of code fragments specified by developers.
Research on Data Consistency for In-memory File Systems
SUN Zhi-long, Edwin H-M Sha, ZHUGE Qing-feng, CHEN Xian-zhang and WU Kai-jie
Computer Science. 2017, 44 (2): 222-227.  doi:10.11896/j.issn.1002-137X.2017.02.036
Abstract PDF(1257KB) ( 78 )   
References | Related Articles | Metrics
In recent years, many research works have proposed new in-memory file systems to manage storage class memory (SCM), such as BPFS, PMFS and SIMFS. Since in-memory data access differs from the traditional I/O path of block-based file systems, data consistency mechanisms are not yet well studied for in-memory file systems. Thus, a new consistency mechanism called direct copying was presented for in-memory file systems. The pros and cons of different consistency strategies in in-memory file systems were discussed. Then, different consistency mechanisms were implemented in SIMFS to test the effectiveness of the proposed method. Finally, experiments were conducted with standard benchmarks to measure the performance of the different consistency mechanisms. The experimental results show that the proposed direct copying method outperforms the other strategies.
Throughput Enhancement for Heterogeneous Solid-state Drives
YANG Liang-huai, WAN Kai-ming and FAN Yu-lei
Computer Science. 2017, 44 (2): 228-234.  doi:10.11896/j.issn.1002-137X.2017.02.037
Abstract PDF(568KB) ( 38 )   
References | Related Articles | Metrics
Solid-state drives (SSDs) are widely used nowadays for their low latency, shock resistance and internal parallelism, and how to enhance their performance is a hot research topic. We explored the external behavior of SSDs through experiments and found that a high ratio of read requests is favorable for SSD throughput. Based on these observations, a method called RODI (Read-Only Data Isolation) was put forward, which isolates the read-only load onto certain SSDs, separate from the rest, to increase the total throughput of heterogeneous SSDs. Experimental results show that RODI can effectively improve the throughput of heterogeneous solid-state drives.
Dynamic Analysis Method of Mobile User Preference Context Based on Multi-dimensional
LUO Xiao-dong
Computer Science. 2017, 44 (2): 235-238, 249.  doi:10.11896/j.issn.1002-137X.2017.02.038
Abstract PDF(392KB) ( 58 )   
References | Related Articles | Metrics
When contextual data is introduced into the dynamic analysis of mobile user preferences, the original two-dimensional user-item matrix is extended to a three-dimensional user-item-context matrix, and low-rank decomposition of this multi-dimensional matrix can simplify the analysis of the data. However, existing self-learning methods for the dynamic analysis of mobile user preferences do not take full advantage of the low-rank decomposition properties of the multi-dimensional matrix. To solve this problem, this paper presented a self-learning method that uses the low-rank decomposition of the multi-dimensional matrix, improving the convergence rate and reducing the complexity of the data analysis. The simulation results show the effectiveness of the proposed algorithm.
Online Detection of Incipient Fault Based on Large-scale Neural Networks
SI Wen-jie and YANG Fei-fei
Computer Science. 2017, 44 (2): 239-243, 266.  doi:10.11896/j.issn.1002-137X.2017.02.039
Abstract PDF(1788KB) ( 44 )   
References | Related Articles | Metrics
Neural networks have been widely used for system modeling and pattern recognition. However, in order to approximate unknown parameters or system dynamics, enough neurons are needed to achieve sufficiently accurate approximation, which increases the computational cost. This computation restricts the online application of large-scale neural networks. Because CPU processing cannot keep pace with online data capture, commonly available graphics processors are used for the bulk of data processing in online systems. First, the input of the system was analyzed via the persistent excitation characteristics of the RBF neural network, reducing the number of neurons, and the design of the optimization algorithm was refined to improve the approximation error. Secondly, LabVIEW and the LabVIEW GPU analysis toolkit were used to implement the algorithm with parallel computing. Finally, an online stall detection experiment was conducted on a low-speed axial compressor based on LabVIEW. Experimental results show that the proposed method can meet the requirements of online compressor stall detection.
Emotion Recognition of Chinese Microblogs with Syntactic Information
HUANG Lei, LI Shou-shan and ZHOU Guo-dong
Computer Science. 2017, 44 (2): 244-249.  doi:10.11896/j.issn.1002-137X.2017.02.040
Abstract PDF(512KB) ( 78 )   
References | Related Articles | Metrics
Emotion recognition aims to predict the emotion expressed in a piece of text, and automatic emotion recognition is a basic task in sentiment analysis. In this paper, an approach to emotion recognition for Chinese microblogs based on syntactic information was proposed. One distinguishing feature of the proposed method is that the microblog's syntactic information is employed. Specifically, we took advantage of POS (part-of-speech) sequences and syntactic trees to represent syntactic information, extracting POS sequence patterns, rewrite rules and bigrams of syntactic labels as features for text representation. Then, we utilized the maximum entropy algorithm to perform the classification. Experimental studies demonstrate that our approach is very effective for emotion recognition.
Group Search Optimizer Based on Differential Strategies
XIONG Cong-cong, HAO Lu-meng, WANG Dan and DENG Xue-chen
Computer Science. 2017, 44 (2): 250-256.  doi:10.11896/j.issn.1002-137X.2017.02.041
Abstract PDF(520KB) ( 46 )   
References | Related Articles | Metrics
The conventional group search optimizer (GSO) suffers from drawbacks such as easily falling into local optima, relatively long computing time and low convergence accuracy. In this study, we proposed a differential ranking-based group search optimizer (DRGSO) to alleviate these limitations. There are two main improvements in the design of DRGSO. First, the population is initialized according to the ranking of fitness values, so the population obtains heuristic information and premature convergence is alleviated to some extent. Second, four evolutionary operators based on differential strategies are constructed to improve the convergence of the algorithm and enhance population diversity. Eleven benchmark functions were used to evaluate the performance of DRGSO. Experimental results indicate that the proposed DRGSO outperforms GA, PSO and GSO in terms of accuracy and speed of convergence.
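A differential-strategy operator of the kind DRGSO layers onto GSO perturbs one member using the scaled difference of two others, in the style of differential evolution. A minimal sketch (the DE/rand/1 form, scale factor F and toy population are illustrative; the paper defines four such operators, which this does not reproduce):

```python
import random

def differential_mutation(pop, i, F=0.5):
    """DE/rand/1-style mutation: base member plus a scaled difference of two
    other randomly chosen members (all distinct from member i)."""
    r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
    return [x1 + F * (x2 - x3)
            for x1, x2, x3 in zip(pop[r1], pop[r2], pop[r3])]

random.seed(0)
# Six candidate solutions in a 3-dimensional search space in [-5, 5].
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(6)]
trial = differential_mutation(pop, 0)
```

The difference vector adapts the step size to the population's spread, which is the usual argument for mixing such operators into a swarm-style search.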
Micro-blog Topic Detection Method Integrating BTM Topic Model and K-means Clustering
LI Wei-jiang, WANG Zhen-zhen and YU Zheng-tao
Computer Science. 2017, 44 (2): 257-261, 274.  doi:10.11896/j.issn.1002-137X.2017.02.042
Abstract PDF(503KB) ( 74 )   
References | Related Articles | Metrics
Recently, the development of micro-blogging has provided people with convenient communication. Because every micro-blog post is limited to 140 characters, short texts appear at large scale, and discovering topics from short texts has become an intractable problem. Traditional topic models such as probabilistic latent semantic analysis (PLSA) and latent Dirichlet allocation (LDA) struggle to model short texts, suffering from severe data sparsity. Meanwhile, the K-means clustering algorithm can make topics discriminative when the dataset is dense and the differences between topic documents are distinct. In order to mitigate data sparsity, the BTM (Bi-term Topic Model) was employed in this paper to process short micro-blog texts. At the same time, we integrated the K-means clustering algorithm into BTM for further topic discovery. The results of experiments on Sina micro-blog short text collections demonstrate that our method can discover topics effectively.
Computing Research of User Similarity Based on Micro-blog
ZHENG Zhi-yun, JIA Chun-yuan, WANG Zhen-fei and LI Dun
Computer Science. 2017, 44 (2): 262-266.  doi:10.11896/j.issn.1002-137X.2017.02.043
Abstract PDF(417KB) ( 42 )   
References | Related Articles | Metrics
Because traditional similarity calculation methods and evaluation criteria cannot accurately and efficiently measure the similarity between micro-blog users, a new method to calculate micro-blog user similarity was proposed. Different calculation methods are used for different attribute data structures; based on statistics over each attribute, the method uses AHP (analytic hierarchy process) to determine the weight of each attribute, finally building an integrated similarity calculation model. Experimental results show that with the improved calculation method, the precision of user similarity measurement increases by 22.6%, the recall rate increases by 12.7%, and the F1 metric improves by 29.5%.
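The AHP step mentioned above derives attribute weights from a pairwise comparison matrix. A minimal sketch using the common normalized-column-mean approximation (the three attributes and their comparison values are hypothetical, not the paper's):

```python
import numpy as np

def ahp_weights(pairwise):
    """Approximate AHP weights: normalize each column of the pairwise
    comparison matrix to sum to 1, then average across columns."""
    A = np.asarray(pairwise, dtype=float)
    col_normalized = A / A.sum(axis=0)
    w = col_normalized.mean(axis=1)
    return w / w.sum()

# Hypothetical comparisons among three profile attributes, where A[i][j]
# says how much more important attribute i is than attribute j.
A = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
w = ahp_weights(A)
```

A full AHP workflow would also check the consistency ratio of the comparison matrix before trusting the weights.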
LDA-RR:A Recommendation Method Based on Ratings and Reviews
WANG Jian and HUANG Jia-jin
Computer Science. 2017, 44 (2): 267-269, 305.  doi:10.11896/j.issn.1002-137X.2017.02.044
Abstract PDF(279KB) ( 70 )   
References | Related Articles | Metrics
Recommender systems are one of the effective ways to solve the problem of information overload, and collaborative filtering is a typical recommendation method. The traditional collaborative filtering algorithm only takes rating information into account, while reviews contain more specific characteristic information about users and items. In this paper, we proposed an improved LDA algorithm which combines ratings with the review opinions of users. We assumed that each user has an implicit topic distribution, each topic has an implicit item distribution, and the distribution of words is determined jointly by the topic and the item; we then used the latent topic distribution to mine users' interests and make recommendations. The experiments show that our algorithm can effectively improve recommendation quality.
Computing Longest Common Subsequences Approximately Based on Lattice
SUN Tao and ZHU Xiao-ming
Computer Science. 2017, 44 (2): 270-274.  doi:10.11896/j.issn.1002-137X.2017.02.045
Abstract PDF(369KB) ( 46 )   
References | Related Articles | Metrics
The longest common subsequences of a sequence set represent its common information and have important applications in fields such as computational genomics and information retrieval. Computing longest common subsequences is a well-known NP-hard problem with multiple solutions. Some approximate algorithms have low time complexity, but their result set contains only one subsequence; for sequence sets with many longest common subsequences, the information loss is excessive. In this paper, we presented a new approximate algorithm for this problem. Our algorithm employs a specific mathematical structure called a lattice. Firstly, the common lattice of two sequences is computed through dynamic programming; then the common lattice of the current lattice and the current sequence is computed recursively, combined with a greedy algorithm. As the paths in a common lattice encode many common subsequences, the result set contains many longest common subsequences of the original sequence set. The validity of our algorithm is demonstrated through theory and experiment.
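The two-sequence dynamic program underlying the first step is the classical LCS recurrence. The sketch below computes the DP table and backtracks a single optimal subsequence; the paper's lattice instead keeps all optimal backtracking paths, which this simplification does not:

```python
def lcs(a, b):
    """Classical O(len(a) * len(b)) dynamic program for one longest common
    subsequence of two sequences."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    # Backtrack one optimal subsequence (the lattice would keep them all).
    out, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return ''.join(reversed(out))

s = lcs("ABCBDAB", "BDCABA")  # an LCS of length 4
```

For this classic pair there are several distinct LCSs of length 4, which is exactly the multiplicity the lattice representation is designed to preserve.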
Research on Temporal Perception-oriented Microblog Propagation Model
WANG Zhen-fei, ZHANG Li-ying, ZHANG Xing-jin and LI Lun
Computer Science. 2017, 44 (2): 275-278, 289.  doi:10.11896/j.issn.1002-137X.2017.02.046
Abstract PDF(455KB) ( 81 )   
References | Related Articles | Metrics
With the rapid development of online social networks, extracting information propagation characteristics and building propagation models have become hot research topics. Traditional propagation models of the microblog network do not simultaneously consider users' incomplete reading behavior, the incubation period and direct immunization, so they cannot accurately identify immune nodes. In view of these defects, by analyzing the behavior characteristics of users, this paper proposed MSLIR, a model which refines the classification of spreading individuals and improves the propagation path. With this model, users can obtain, spread and shield information in a timely manner according to the propagation characteristics of microblog information, and the functioning of the social network can be improved according to how the model responds to social relations and online social behaviors. Taking Sina microblog as an example, the paper analyzed the effect of the propagation mechanism and network parameters on the process of information spreading, derived the dynamic evolution equations and elaborated the temporal evolution rules of the spreading process. Based on real user datasets from Sina microblog, the MSLIR model and its dynamic evolution equations were used in computer simulations of microblog information propagation. Compared with several other representative algorithms, the simulation results show the effectiveness and feasibility of MSLIR.
Parameter Analysis and Optimization of Cardinality Estimation Algorithm
LIU Shao-ji, CAO Yang and CUI Meng-tian
Computer Science. 2017, 44 (2): 279-282, 301.  doi:10.11896/j.issn.1002-137X.2017.02.047
Abstract PDF(420KB) ( 57 )   
References | Related Articles | Metrics
A cardinality estimation algorithm is a statistics-based algorithm that estimates the cardinality of a given data set. In such algorithms, the hash function and certain parameters are the key factors in performance. Building on related research, an algorithm was proposed which selects the hash function and parameters according to the scale and type of the data. The experimental results show that both the accuracy and the stability of this algorithm improve significantly over traditional estimation algorithms.
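To make the hash-and-parameter dependence concrete, here is a minimal LogLog-family sketch: items are bucketed by a hash prefix and each bucket tracks the maximum rank (position of the lowest set bit) seen. The SHA-1 hash, 256 buckets and the standard LogLog bias constant are illustrative choices, exactly the kind of parameters such an algorithm would tune:

```python
import hashlib

def estimate_cardinality(items, num_buckets=256):
    """LogLog-style cardinality sketch over a 64-bit hash."""
    maxima = [0] * num_buckets
    for item in items:
        h = int.from_bytes(hashlib.sha1(str(item).encode()).digest()[:8], "big")
        bucket = h % num_buckets
        rest = h // num_buckets
        rank = 1                      # 1 + number of trailing zero bits
        while rest & 1 == 0 and rank < 64:
            rank += 1
            rest >>= 1
        maxima[bucket] = max(maxima[bucket], rank)
    alpha = 0.39701                   # LogLog bias correction for large m
    return alpha * num_buckets * 2 ** (sum(maxima) / num_buckets)

est = estimate_cardinality(range(100000))
```

With m buckets the relative error of LogLog is roughly 1.3/sqrt(m), so m trades memory against accuracy, and a poorly mixing hash degrades both.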
Study on Advertising Click-through Rate Prediction Based on User Similarity and Feature Differentiation
PAN Shu-min, YAN Na and XIE Jin-kui
Computer Science. 2017, 44 (2): 283-289.  doi:10.11896/j.issn.1002-137X.2017.02.048
Abstract PDF(595KB) ( 69 )   
References | Related Articles | Metrics
Accurately targeting Internet advertising is an eye-catching problem in the field of computational advertising. As an important evaluation criterion for online advertising, accurate prediction of the click-through rate (CTR) benefits publishers, advertisers and users. Mainstream approaches extract features and establish a click prediction model without considering feature differentiation, using a single weight to measure the effect of a feature on CTR. Following the idea of divide and conquer, a hybrid model based on user similarity and feature differentiation was proposed. The model divides users into several groups depending on user similarity evaluated by a Gaussian mixture distribution. A model is then built for each group, and the group models are combined to capture the different effects of a feature on different groups and to improve CTR prediction accuracy. Experiments on the advertising data sets of an Internet company, with detailed comparative analysis against the mainstream approaches, verify the effectiveness of the approach.
Low Complexity Scene Change Detection Algorithm for Supporting Resolution Dynamic Change
FANG Hong-jun, SONG Li and YANG Xiao-kang
Computer Science. 2017, 44 (2): 290-295.  doi:10.11896/j.issn.1002-137X.2017.02.049
Abstract PDF(3319KB) ( 54 )   
References | Related Articles | Metrics
In the digital video post-processing of TV systems, many video detection and enhancement IP blocks need to refer to temporal information between frames. Because a content scene change cuts off this temporal relationship, an optimized design of a scene change detector was proposed in this paper. Since network TV dynamically changes the video resolution based on the quality of network service, the design adopts a method based on histogram distribution detection combined with dynamic weighting. The simulation results show that the design improves detection accuracy compared to the common design based on average intensity level, and it improves the detector's reliability and stability for low-IRE scenes and for the resolution-change case of network TV programs.
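Comparing normalized histogram distributions, rather than average intensity, is what makes such a detector insensitive to a resolution change: scaling the frame scales every bin by the same factor. A minimal sketch (the 4-bin histograms and the 0.5 threshold are illustrative, not the paper's tuned design, which also adds dynamic weighting):

```python
def is_scene_change(hist_prev, hist_cur, threshold=0.5):
    """Declare a scene change when the half-L1 distance between the two
    normalized histograms exceeds the threshold (distance lies in [0, 1])."""
    n_prev, n_cur = sum(hist_prev), sum(hist_cur)
    diff = sum(abs(p / n_prev - c / n_cur)
               for p, c in zip(hist_prev, hist_cur))
    return diff / 2 > threshold

# Two 4-bin luma histograms; the second frame has 10x the pixels but the
# same distribution, so no change is flagged.
same = is_scene_change([40, 30, 20, 10], [400, 300, 200, 100])  # False
cut = is_scene_change([90, 5, 3, 2], [2, 3, 5, 90])             # True
```

An average-intensity detector would see nothing unusual in a cut between two scenes of similar brightness, which is the failure mode the histogram comparison avoids.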
Multithread and GPU Parallel Schema on Patch-based Multi-view Stereo Algorithm
LIU Jin-shuo, JIANG Zhuang-yi, XU Ya-bo, DENG Juan and ZHANG Lan-xin
Computer Science. 2017, 44 (2): 296-301.  doi:10.11896/j.issn.1002-137X.2017.02.050
Abstract PDF(1211KB) ( 147 )   
References | Related Articles | Metrics
PMVS (Patch-based Multi-view Stereo) has been widely used in 3D reconstruction with aerial photos from UAVs (unmanned aerial vehicles). To address the time complexity and computation load of PMVS, this paper proposed a two-level parallel schema combining CPU multi-threading and the GPU. The solution includes a GPU-based parallel design and optimization, and a task allocation mechanism that distributes the images between the GPU and CPU. The experiments were run on a high-performance server with a 24-core CPU and an NVIDIA Tesla K20 GPU, using 16 remote sensing images with a resolution of 4081×2993. Compared with traditional serial PMVS, the experimental results show that our model MGPS (the two-level parallel schema of CPU multi-threading and GPU for PMVS) is 13 times faster at feature extraction and 4 times faster at PMVS overall, with a calculation error of less than 10%. MGPS shortens the execution time of the PMVS algorithm, and PMVS based on MGPS can also be used in fields such as cultural relic protection, medical image processing and virtual reality.
Sparse Orthogonal Procrustes Problem Based Regression for Face Recognition with Pose Variations
ZHANG Juan
Computer Science. 2017, 44 (2): 302-305.  doi:10.11896/j.issn.1002-137X.2017.02.051
Abstract PDF(1884KB) ( 35 )   
References | Related Articles | Metrics
The orthogonal Procrustes problem (OPP) is a popular technique for matrix approximation problems. Recently, OPP was introduced into a regression model named orthogonal Procrustes problem based regression (OPPR) to handle facial pose variations, with interesting results. However, OPPR imposes an F-norm constraint on the error term, which makes the model sensitive to noise (e.g., illumination variations). To address this problem, in this paper the F-norm constraint was replaced by an L1-norm constraint, yielding the more robust sparse orthogonal Procrustes problem based regression (SOPPR) model. The proposed model is solved by an efficient alternating iterative algorithm. Experimental results on public face databases demonstrate the effectiveness of the proposed model in handling facial pose variations.
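The OPP core that both OPPR and SOPPR build on has a classical closed-form solution via the SVD. A minimal sketch of just that core (SOPPR's L1 error term and alternating solver are not reproduced here):

```python
import numpy as np

def orthogonal_procrustes(A, B):
    """Classical OPP: the orthogonal Q minimizing ||A @ Q - B||_F is
    Q = U @ V^T, where U, S, V^T is the SVD of A^T @ B."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 3))
# Rotate A by a known orthogonal matrix and recover that matrix.
Q_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))
Q_est = orthogonal_procrustes(A, A @ Q_true)
```

In the regression setting, A and B hold face samples under two poses, and Q plays the role of the pose-alignment transform; the L1 variant changes only how the residual A @ Q - B is penalized.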
Activity Recognition from Depth Image Sequences Based on L2,1-norm Sparse Feature Selection and Super Normal Vector
SONG Xiang-fa, ZHANG Yan-feng and ZHENG Feng-bin
Computer Science. 2017, 44 (2): 306-308, 323.  doi:10.11896/j.issn.1002-137X.2017.02.052
Abstract PDF(323KB) ( 58 )   
References | Related Articles | Metrics
This paper presented a novel method for activity recognition from depth image sequences based on L2,1-norm sparse feature selection and the super normal vector. First, the super normal vector feature is extracted from the depth image sequences. Then the most discriminative feature subset is selected from the whole super normal vector feature set via L2,1-norm sparse feature selection. Finally, classification is performed with the Liblinear classifier. Experimental results on the MSR Action3D dataset show that the proposed method achieves 94.55% recognition accuracy using only 2% of the whole super normal vector feature, and is superior to state-of-the-art methods.
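The L2,1 norm is the sum of the L2 norms of a matrix's rows; penalizing it in a learning objective drives entire rows of the weight matrix to zero, and the surviving rows mark the selected feature dimensions. A minimal sketch of that selection step (the toy weight matrix is illustrative; the paper's actual optimization is not reproduced):

```python
import numpy as np

def l21_norm(W):
    """L2,1 norm: sum of the L2 norms of the rows of W."""
    return np.linalg.norm(W, axis=1).sum()

def select_features(W, k):
    """Keep the k features (rows of W) with the largest row L2 norm."""
    scores = np.linalg.norm(W, axis=1)
    return np.argsort(scores)[::-1][:k]

# Hypothetical 4-feature, 2-class weight matrix: rows 0 and 2 are (nearly)
# zeroed out by the L2,1 penalty, so features 1 and 3 are selected.
W = np.array([[0.0, 0.0],
              [3.0, 4.0],
              [0.1, 0.0],
              [1.0, 1.0]])
top2 = select_features(W, 2)
```

This row-wise grouping is what distinguishes L2,1 from a plain L1 penalty, which would zero individual entries rather than whole features.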
Algorithm of Micro-motion Object Detection Based on ViBe and Multi-feature Extraction
YANG Chun-de and MENG Qi
Computer Science. 2017, 44 (2): 309-312, 316.  doi:10.11896/j.issn.1002-137X.2017.02.053
Abstract PDF(1654KB) ( 32 )   
References | Related Articles | Metrics
In order to extract micro-motion objects accurately and overcome problems such as the high false detection rate in the target extraction process, this paper established a background model over the CbCr components, RGB and the SILTP feature, and proposed an improved ViBe background modeling algorithm based on the fusion of multiple features. The algorithm improves the LBP-TOP texture operator's encoding method by introducing the LBSP operator, and uses the improved LBP-TOP texture feature to generate the spatial- and temporal-domain foreground probability of each pixel, from which a CbCr background model close to the true background is gradually established. It improves ViBe's foreground determination and background updating based on local pixel complexity and the changing conditions of the three types of features. It thereby obtains complete object detections and accomplishes accurate background replacement in video sequences. The experimental results show that the proposed algorithm can effectively segment the micro-motion objects of video sequences and realize background replacement.
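The ViBe mechanism being improved here keeps a small set of past values per pixel, classifies the pixel as background when enough stored samples lie close to its current value, and updates the set stochastically. A minimal single-channel sketch (the radius, match count and sub-sampling factor are ViBe's usual defaults in spirit, not the paper's fused multi-feature version):

```python
import random

def vibe_classify(pixel, samples, radius=20, min_matches=2):
    """Core ViBe test: background if at least min_matches stored samples
    lie within `radius` of the current pixel value."""
    matches = sum(1 for s in samples if abs(pixel - s) < radius)
    return matches >= min_matches  # True = background

def vibe_update(samples, pixel, subsampling=16):
    """Conservative stochastic update: with probability 1/subsampling,
    replace a random stored sample with the current background value."""
    if random.randrange(subsampling) == 0:
        samples[random.randrange(len(samples))] = pixel

# Eight stored intensity samples for one pixel.
samples = [118, 120, 123, 119, 121, 122, 117, 124]
bg = vibe_classify(121, samples)   # stable value -> background
fg = vibe_classify(200, samples)   # large deviation -> foreground
```

The paper's contribution is to run this decision over color and texture features jointly rather than over a single intensity value, which is what suppresses the false detections.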
Mobile User Behavior Recognition Based on Compressed Sensing
SONG Hui and WANG Zhong-min
Computer Science. 2017, 44 (2): 313-316.  doi:10.11896/j.issn.1002-137X.2017.02.054
Abstract PDF(335KB) ( 59 )   
References | Related Articles | Metrics
To increase the recognition accuracy of mobile users' behaviors, a compressed sensing based recognition method was proposed, which can recognize behaviors from raw or compressed acceleration data. Following the theory that an over-complete dictionary can reconstruct data, an over-complete dictionary is first constructed from raw three-axis accelerometer data; the sparse coefficients of the test samples are then calculated by solving a minimum l1-norm problem. Finally, residual values are calculated per behavior class, and the class with the minimum residual is selected as the classification result. Experimental results show that this method reaches a recognition accuracy of 82.64%, higher than that of traditional recognition algorithms, and the recognition accuracy on compressed acceleration data is also satisfactory.
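The code-then-classify-by-residual pipeline can be sketched compactly. Here a greedy orthogonal matching pursuit stands in for the paper's l1-norm minimization, and the dictionary, labels and test sample are synthetic; the shape of the decision rule (minimum class-wise reconstruction residual) is the point:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedy k-sparse code of y over D,
    used here as a stand-in for l1 minimization."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

def classify(D, labels, y, k=2):
    """Pick the class whose atoms alone best reconstruct y."""
    x = omp(D, y, k)
    residuals = {}
    for c in set(labels):
        xc = np.where(np.array(labels) == c, x, 0.0)
        residuals[c] = np.linalg.norm(y - D @ xc)
    return min(residuals, key=residuals.get)

rng = np.random.default_rng(1)
D = rng.standard_normal((20, 8))
D /= np.linalg.norm(D, axis=0)        # unit-norm dictionary atoms
labels = [0, 0, 0, 0, 1, 1, 1, 1]     # four training atoms per behavior class
y = 0.7 * D[:, 5] + 0.3 * D[:, 6]     # test sample built from class-1 atoms
pred = classify(D, labels, y)
```

In the paper, the atoms are training acceleration segments per behavior, so a test segment codes sparsely over atoms of its own class and leaves that class with the smallest residual.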
Real-time Lane Detection Algorithm Based on Inter-frame Correlation
LI Chao, LIU Hong-zhe, YUAN Jia-zheng and ZHENG Yong-rong
Computer Science. 2017, 44 (2): 317-323.  doi:10.11896/j.issn.1002-137X.2017.02.055
Abstract PDF(2544KB) ( 80 )   
References | Related Articles | Metrics
In order to meet the real-time and robustness requirements of lane detection algorithms, a real-time lane detection algorithm based on inter-frame correlation was proposed. According to the characteristics of road images, noise is first filtered out by a median filter and lane mark edges are extracted with an adaptive threshold. Then the processing region of the original image is restricted before extraction, lane candidate lines are obtained with an improved Hough transform, and a dynamic ROI is built. Finally, the lane line model is updated and constrained by inter-frame correlation. The results show that the amount of image data processed is reduced, the runtime of the algorithm decreases, and the robustness of the algorithm is greatly improved.
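The Hough transform at the heart of the candidate-line step lets every edge point vote for all (theta, rho) lines through it and reads lines off the accumulator peaks. A minimal sketch (the angle quantization, threshold and toy points are illustrative; the paper's improvement additionally restricts the voting to a dynamic ROI, which this does not model):

```python
import math

def hough_lines(points, angle_steps=180, threshold=3):
    """Minimal Hough transform over edge points.

    Each point (x, y) votes for every quantized angle t via the normal form
    rho = x*cos(theta) + y*sin(theta); accumulator cells with at least
    `threshold` votes are reported as detected lines (t, rho).
    """
    acc = {}
    for x, y in points:
        for t in range(angle_steps):
            theta = math.pi * t / angle_steps
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    return [(t, rho) for (t, rho), votes in acc.items() if votes >= threshold]

# Five collinear edge points on the line y = x; in normal form this line is
# theta = 135 degrees, rho = 0.
pts = [(i, i) for i in range(5)]
lines = hough_lines(pts, threshold=5)
```

Because every point votes across all angles, the transform's cost is proportional to points x angles, which is exactly why restricting it to an ROI pays off for real-time lane detection.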