Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Editors
Current Issue
Volume 41 Issue 10, 2014
  
Evaluation of Effects of Centripetal and Centrifugal Saccades on Human Performance in Gaze-based Interactions
ZHANG Xin-yong and ZHA Hong-bin
Computer Science. 2014, 41 (10): 1-6.  doi:10.11896/j.issn.1002-137X.2014.10.001
Abstract PDF(1198KB) ( 510 )   
References | Related Articles | Metrics
Fitts’ law is an effective model for predicting human performance in the field of human-computer interaction (HCI). Its effectiveness has been confirmed in many situations, and it is the theoretical basis for studying human performance in HCI. However, due to the different muscle control mechanisms of eyes and limbs, Fitts’ law cannot be applied to the pointing task in gaze-based interactions. Recently, Zhang et al. proposed a new index of difficulty (IDeye) that can effectively model human performance in dwell-based eye pointing. However, their model does not specifically take account of the differences between the two kinds of saccades (i.e. centripetal and centrifugal saccades) involved in eye pointing. This paper investigated performance under the two typical saccadic eye movements: saccades toward the primary position (centripetal) and away from the primary position (centrifugal). In an experiment, we confirmed that there is a significant difference in movement time between these two eye movements, and that the IDeye model can still accurately model performance even with purely centripetal or centrifugal saccades. This work is a necessary complement to the study of modeling dwell-based eye pointing, and it further confirms the suitability and effectiveness of the IDeye model for gaze-based interactions.
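For context, the Shannon formulation of Fitts’ law referenced in this abstract predicts movement time MT from target distance D and width W with empirically fitted constants a and b; the IDeye index plays the analogous role for dwell-based eye pointing, and its exact form is given in the cited work rather than reproduced here:

```latex
MT = a + b \cdot ID, \qquad ID = \log_2\!\left(\frac{D}{W} + 1\right)
```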
ARM-MuxOS:A System Architecture to Support Multiple Operating Systems on Single Mobile Device
YU Kuan-long,CHEN Yu,MAO Jun-jie and ZHANG Lei
Computer Science. 2014, 41 (10): 7-11.  doi:10.11896/j.issn.1002-137X.2014.10.002
Abstract PDF(2913KB) ( 475 )   
References | Related Articles | Metrics
Enabling concurrent execution of multiple operating systems on mobile devices greatly extends their usage model. Mobile virtualization provides such functionality but has poor performance. We first analyzed the challenges of allocating physical memory and sharing hardware devices among multiple general-purpose operating systems on a single mobile device, designed new methods to address these problems, and implemented a prototype of ARM-MuxOS on a Galaxy Nexus smartphone. The prototype supports multiple operating systems running concurrently, manages the limited memory across these operating systems, and avoids both the performance overhead of mobile virtualization and its high engineering effort. Our test results show that ARM-MuxOS supports Android and Firefox OS with nearly native performance, outperforming current paravirtualization-based methods.
Nature Multimodal Human-Computer-Interaction Dialog System
YANG Ming-hao,TAO Jian-hua,LI Hao and CHAO Lin-lin
Computer Science. 2014, 41 (10): 12-18.  doi:10.11896/j.issn.1002-137X.2014.10.003
Abstract PDF(2547KB) ( 1006 )   
References | Related Articles | Metrics
During a dialogue, people naturally use multimodal information, e.g. facial expressions and gestures, in addition to speech, to support the expression of content. This paper proposed a framework for efficiently fusing multimodal information with a human-computer dialog model and built a multimodal human-computer dialog system. The paper classified the fusion methods into three modes, complementary, mixed and independent, according to the relation between the speech channel and the other channels. For the dialog framework, the paper proposed a multimodal dialog management model that combines a finite state machine, slot filling and mixed initiative, so that the new module can flexibly process multimodal information during the dialogue. The paper also proposed a Multimodal Markup Language (MML) to control the actions of the virtual human in the dialog system; MML helps coordinate the complicated actions across the virtual human’s different channels. Finally, based on the above technologies, the paper built a multimodal dialog system and applied it to a weather information retrieval service.
High Fidelity Panchromatic and Multispectral Image Fusion Based on Ratio Transform
XU Qi-zhi and GAO Feng
Computer Science. 2014, 41 (10): 19-22.  doi:10.11896/j.issn.1002-137X.2014.10.004
Abstract PDF(1297KB) ( 538 )   
References | Related Articles | Metrics
With the rapid development of remote sensing technology, more and more satellites can simultaneously acquire both panchromatic (PAN) and multispectral (MS) imagery. In general, the spatial resolution of MS imagery is lower than that of PAN imagery, yet MS imagery with high spatial resolution is more desirable in applications. Although various image fusion algorithms have been developed to pansharpen MS imagery, spectral and spatial distortion remain problematic. In addition, due to the large size of remote sensing products, the running times of existing fusion methods often fail to meet users’ expectations. To solve these problems, a high fidelity fusion method based on a ratio transform was proposed to fuse PAN and MS imagery. In this method, a degraded image is generated by down-sampling and then up-sampling the PAN image, while the MS image is up-sampled by bilinear interpolation. The fused image is then obtained by multiplying the up-sampled MS image by the ratio between the PAN image and its degraded version. Experimental results show that the proposed method achieves better spatial and spectral fidelity than the compared methods.
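As a rough illustration of the ratio transform described above (a minimal sketch, not the authors’ code; the band ordering, resampling filter and scale factor r are assumptions), the fusion multiplies the up-sampled MS image by the ratio of the PAN image to its degraded version:

```python
import numpy as np
from scipy.ndimage import zoom  # order=1 gives bilinear interpolation

def ratio_fuse(pan, ms, r=4, eps=1e-6):
    """Fuse a PAN image (H, W) with an MS image (H/r, W/r, B).
    H and W are assumed to be multiples of r."""
    # Degraded PAN: down-sample, then up-sample back to the original size.
    degraded = zoom(zoom(pan, 1.0 / r, order=1), float(r), order=1)
    # Up-sample each MS band to PAN resolution with bilinear interpolation.
    ms_up = np.stack([zoom(ms[..., b], float(r), order=1)
                      for b in range(ms.shape[-1])], axis=-1)
    # Inject spatial detail through the PAN / degraded-PAN ratio.
    return ms_up * (pan / (degraded + eps))[..., None]
```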
Submap and Adaptive Covariance Based Method for 2D Localization
ZHANG He,LIU Guo-liang,LI Nan-jun and HOU Zi-feng
Computer Science. 2014, 41 (10): 23-26.  doi:10.11896/j.issn.1002-137X.2014.10.005
Abstract PDF(1390KB) ( 465 )   
References | Related Articles | Metrics
The proposed submap and adaptive covariance based 2D SLAM solution achieves not only efficient loop-closure detection but also accurate localization. First, loop closures are detected by efficiently matching 2D geometric features between local submaps. Unlike previous methods, which often use the number of measurement frames as the criterion for submap division, we employed the number of features as the main criterion. To achieve accurate localization, we proposed an adaptive Kalman filter to estimate the final pose, in which the prediction and observation covariances are adaptive and estimated by the scan-matching algorithm. Finally, if a loop closure is detected, the optimized transformation and covariance from the back end can be fused directly in the Kalman filter. In the first experiment, the comparison between the two submap division mechanisms verifies the validity of the proposed method. The second experiment shows that the proposed method can accurately localize the robot using only a single lidar.
LiveData—A Data Collecting System Based on Sensors in Smart Phones
WANG Zhong-wei and SUN Guang-zhong
Computer Science. 2014, 41 (10): 27-30.  doi:10.11896/j.issn.1002-137X.2014.10.006
Abstract PDF(943KB) ( 468 )   
References | Related Articles | Metrics
With the growing number of built-in sensors in smart phones, users can collect, analyze and mine more useful information. We introduced LiveData, an Android application for collecting sensor data. Using 280 thousand data records collected by LiveData, we distinguished user behaviors by extracting a set of attributes. We also analyzed the impact of different sensors and different data collection environments on the experimental results.
Intelligent SMS Classification Method Based on Improved Bayes Classification Algorithm
YANG Liu,YIN Zhao,TENG Jian-bin,WANG Heng and WANG Guo-ping
Computer Science. 2014, 41 (10): 31-35.  doi:10.11896/j.issn.1002-137X.2014.10.007
Abstract PDF(447KB) ( 450 )   
References | Related Articles | Metrics
With the development of mobile communication technology, the number of mobile phone users is increasing continuously. As a traditional mobile communication service, SMS occupies a very important position in people’s lives, and SMS messages record the track of one’s life to a certain extent. However, existing SMS management systems manage messages in an unintelligent way, classifying them by contact and showing them in the order of sending time. As a result, different kinds of messages are mixed together and are hard to manage. By studying the characteristics of SMS messages and analyzing the shortcomings of the traditional word-frequency-based algorithm and the mutual-information-based algorithm, we proposed a new feature selection algorithm for SMS messages based on both word frequency and mutual information, and improved the accuracy of the Bayes classification algorithm using additional features such as message length. Experiments show that the new algorithm achieves good recall and accuracy when processing SMS messages.
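As a sketch of the kind of feature scoring the abstract combines (word frequency plus mutual information), the snippet below computes, for assumed binary term-presence features and class labels, the term-presence part of the mutual information between a term and the class, weighted by document frequency; it is illustrative only and not the authors’ exact formula:

```python
import math
from collections import Counter

def term_scores(docs, labels):
    """docs: list of token lists; labels: parallel list of class labels.
    Returns {term: df-weighted presence MI}, a simple frequency-and-MI score."""
    n = len(docs)
    classes = Counter(labels)
    df = Counter(t for d in docs for t in set(d))                    # document frequency
    df_c = Counter((t, y) for d, y in zip(docs, labels) for t in set(d))
    scores = {}
    for t in df:
        mi = 0.0
        for y, n_y in classes.items():
            p_ty = df_c[(t, y)] / n                                  # P(term present, class)
            if p_ty > 0:
                mi += p_ty * math.log(p_ty / ((df[t] / n) * (n_y / n)), 2)
        scores[t] = (df[t] / n) * mi
    return scores
```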
CJPD:Coherent Junction Point Drift for Junction Points Set
LUO Ting-jin,ZHANG Jun,LIAN Lin,XU Shu-kui and LI Guo-hui
Computer Science. 2014, 41 (10): 36-41.  doi:10.11896/j.issn.1002-137X.2014.10.008
Abstract PDF(1122KB) ( 464 )   
References | Related Articles | Metrics
Junctions are important associated features among multi-sensor images, and junction point set matching plays a key role in multi-sensor image registration. In this paper, coherent junction point drift for affine transformation (CJPD) was proposed. Based on the inherent characteristics of junctions, we defined local structural consistency, which is used to measure the similarity between two junctions. Furthermore, we introduced the local structural consistency of junctions as a constraint on the posterior probabilities of the GMM components. The added structural information improves the robustness of CJPD to noise and outliers and speeds up its convergence. We tested the CJPD algorithm for affine transformation in the presence of noise and outliers, where CJPD gives more accurate results than CPD and outperforms current state-of-the-art methods.
Gait Data System and Joint Movement Recognition Model for Human-exoskeleton Interaction
GAO Zeng-gui,SUN Shou-qian,ZHANG Ke-jun,SHE Duo-chun and YANG Zhong-liang
Computer Science. 2014, 41 (10): 42-44.  doi:10.11896/j.issn.1002-137X.2014.10.009
Abstract PDF(358KB) ( 635 )   
References | Related Articles | Metrics
Human-machine interaction plays a great role in the control of exoskeletons, which usually requires relevant information about body motion as control signal sources. In order to collect human gait data and find the association between physiological signals and joint movement, we designed a Gait Data Acquisition System (GDS) consisting of eight thin-film pressure sensors and a joint angle sensor. In gait experiments, we obtained 15 groups of gait data from healthy male subjects walking naturally at three speeds: 3 km/h, 4 km/h and 5 km/h. We also established a recognition model of knee joint motion using GEP, and the gait data was used to train and validate the model. The results show that the model can effectively identify and predict knee joint motion and that the GDS is feasible as a human-machine interface for exoskeletons.
Application Research of Audio Feature Based on Particle Swarm Optimization Algorithm
WANG Zhi-qiang,GUO Ning and FU Xiang-hua
Computer Science. 2014, 41 (10): 45-49.  doi:10.11896/j.issn.1002-137X.2014.10.010
Abstract PDF(428KB) ( 529 )   
References | Related Articles | Metrics
Based on research into audio features, this paper extracted loudness and pitch features and selected their feature weights by PSO. We proposed an automatic evaluation method for singing segments, which has already been applied to a video song-on-demand scoring system. Experimental results show that the system characterizes, in real time, the similarity between the singer’s performance and the original recording, so that the scoring criteria are effective.
Real-time Fingertip Tracking and Gesture Recognition Using RGB-D Camera
LIU Xin-chen,FU Hui-yuan and MA Hua-dong
Computer Science. 2014, 41 (10): 50-52.  doi:10.11896/j.issn.1002-137X.2014.10.011
Abstract PDF(2191KB) ( 1075 )   
References | Related Articles | Metrics
In recent decades, visual interpretation of finger and hand gestures has been an attractive direction in both computer vision and human-computer interaction. Traditional methods use a monocular RGB camera or multiple RGB cameras to obtain hand information, but they are limited by cluttered backgrounds, lighting conditions, textures and other environmental factors, so their accuracy, robustness and efficiency cannot satisfy real-time interaction. With the arrival of consumer-level RGB-D cameras, these limitations can be overcome using depth data obtained from an RGB-D camera. We first defined a 3D interaction space with the RGB-D camera and segmented the hand region from the background with the help of depth information. Then we proposed a real-time finger recognition and tracking approach using a depth camera, mainly based on the contour of the hand. Finally, human-computer interaction was achieved using the positions and trajectories of the fingers obtained from the above method. We designed several experiments based on the proposed method, and the results validate the accuracy, effectiveness and robustness of our approach.
Detection and Location of Near-duplicate Video Clips
GUO Yan-ming,XIE Yu-xiang,LAO Song-yang and BAI Liang
Computer Science. 2014, 41 (10): 53-56.  doi:10.11896/j.issn.1002-137X.2014.10.012
Abstract PDF(430KB) ( 670 )   
References | Related Articles | Metrics
The paper aimed to detect and locate multiple near-duplicate video clips at random locations. First, video structure analysis was carried out to extract keyframes. Second, to ensure both accuracy and efficiency, a method combining the advantages of FAST and BRIEF was proposed to find the near-duplicate keyframes (NDKs) between videos. The paper then put forward an algorithm to calculate the distance between NDKs using the locations of the keyframes in the source video, yielding a distance matrix of NDKs. We kept only the close distances in the matrix by setting a distance threshold, and transformed the detection and location of multiple near-duplicate video clips into finding the connected subgraphs of the graph that the matrix corresponds to. The experimental results show that the method can effectively detect and locate near-duplicate video clips at random locations.
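The last step described above, thresholding the NDK distance matrix and extracting connected groups, can be sketched as follows (a hypothetical illustration; the threshold value and matrix layout are assumptions, not the paper’s settings):

```python
import numpy as np

def clip_groups(dist, threshold):
    """dist: symmetric NDK distance matrix (n, n). Returns lists of NDK indices,
    each list being one connected component of the thresholded graph,
    i.e. one candidate near-duplicate clip."""
    n = dist.shape[0]
    adj = (dist <= threshold) & ~np.eye(n, dtype=bool)   # keep only close pairs
    seen, groups = set(), []
    for start in range(n):
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:                                     # simple depth-first search
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.append(v)
            stack.extend(np.flatnonzero(adj[v]).tolist())
        groups.append(sorted(comp))
    return groups
```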
Scudware Mobile:Mobile Middleware for Collaboration of Data and Services between Wearable Devices
DING Yang,LI Shi-jian,YE Zhi-qiang and PAN Gang
Computer Science. 2014, 41 (10): 57-61.  doi:10.11896/j.issn.1002-137X.2014.10.013
Abstract PDF(1369KB) ( 460 )   
References | Related Articles | Metrics
With the development of ubiquitous computing, the integration of software and hardware has become an important trend in personal consumer electronics, and wearable devices have become a research hotspot in both academia and industry. Most of these devices have certain sensing, computing and service capabilities. However, the interfaces of wearable devices differ from one another and their capability for collaboration is weak, which wastes sensing and computing capability and prevents rich services. To solve this problem, this paper proposed Scudware Mobile, a mobile middleware for collaboration of data and services between wearable devices. Scudware Mobile aggregates the data and services of wearable devices and provides a unified access interface to the application layer; we also introduced a collaboration mechanism into it. Using Scudware Mobile, we implemented two applications: a portal of personal data and a shaking e-card.
Mobile Social Activity Recommendation System Based on Background Sound Recognition
YANG Yao,GUO Bin and YU Zhi-wen
Computer Science. 2014, 41 (10): 62-66.  doi:10.11896/j.issn.1002-137X.2014.10.014
Abstract PDF(1198KB) ( 423 )   
References | Related Articles | Metrics
With the rapid development of smart phones and the mobile Internet, people’s life style is changing. Current smart phones integrate various sensors, such as GPS, Wi-Fi, camera and microphone. By analyzing the information collected through mobile phone sensing, we can understand and identify users’ activities and provide personalized activity recommendation services. This paper focused on improving life in large communities (such as a university campus). We put forward a social activity recommendation system based on background sound recognition using mobile phones. The system, named Mobile Sound Sensing and Activity Recommender (MSSAR), gathers background sounds through the embedded microphone of mobile phones and recognizes ongoing user activities (such as being in a coffee shop or in a meeting). Furthermore, based on online interaction history, MSSAR can calculate the intimacy among friends and suggest activities accordingly. The system helps enhance social connections among users and promotes communication within the community.
Local Structure Preserved Shared-subspace Analysis
DU Lin-lin,ZHU Zhen-feng,DUAN Hong-shuai and ZHAO Yao
Computer Science. 2014, 41 (10): 67-71.  doi:10.11896/j.issn.1002-137X.2014.10.015
Abstract PDF(396KB) ( 456 )   
References | Related Articles | Metrics
With the rapid development of information technology,multi-view data has become increasingly common and how to obtain the shared information from the multi-view data has become one of the hottest research topics in the field of machine learning.As a shared subspace method for multi-view data,Multi-output regularized feature projection (MORP) has been proposed recently to build the correlation of multi-view data in the shared subspace by matrix factorization.Compared with the classical multi-view analysis method CCA,MORP has been proved to be more effective.On the basis of MORP,we proposed a local structure preserved shared-subspace analysis (LSPSA) method by imposing an extra graph constraint.While obtaining the shared information from multi-view data like MORP,the local geometrical structure of data in both shared subspace and original multi-view feature space can be well preserved.Thus,in the obtained shared subspace,the over-fitting problem of multi-view data can be avoided to some extent for MORP model.Meanwhile,we also proposed a graph approximating method to provide an online extension of LSPSA for the problem of out-of-sample.Without loss of performance,the computational complexity of online extension of LSPSA for seeking the representation of out-of-sample in the shared subspace can be reduced greatly,especially with the increasing size of dataset.The final experimental results on UCI multi-view hand-written digit dataset demonstrate that LSPSA achieves much better performance for classification and retrieval tasks.
Empirical Study of Direction Effect on Pen Pressure Performance in Pointing Tasks
XIN Yi-zhong,MA Yan-fei,LI Yan and ZHAO Heng-yue
Computer Science. 2014, 41 (10): 72-75.  doi:10.11896/j.issn.1002-137X.2014.10.016
Abstract PDF(352KB) ( 416 )   
References | Related Articles | Metrics
Direction may affect the performance of pen pressure in pointing tasks, but this factor is not considered in Fitts’ law. To investigate whether direction influences pen pressure performance in pointing operations, we designed an experiment involving two kinds of direction. One is the menu direction: four rectangular menus with different orientations (North-South, West-East, Northeast-Southwest and Northwest-Southeast). The other is the pressure direction: participants either increase or decrease pen pressure to select targets on the rectangular menu, which we refer to as PI pointing and PD pointing, respectively. Twelve participants were required to acquire and select targets with different distances (100, 200, 300 pixels) and different widths (20, 30, 40 pixels) on the rectangular menus. Experimental results show that the pressure direction affects pressure efficiency, while the menu direction has no significant effect. The findings of this study will be useful for human-oriented use of pen pressure in user interface design.
Acceleration-based Activity Recognition Independent of Device Orientation and Placement
HOU Cang-jian,CHEN Ling,LV Ming-qi and CHEN Gen-cai
Computer Science. 2014, 41 (10): 76-79.  doi:10.11896/j.issn.1002-137X.2014.10.017
Abstract PDF(433KB) ( 610 )   
References | Related Articles | Metrics
Traditional activity recognition methods based on acceleration sensors generally assume that the orientation and placement of the sensing devices are fixed, and recognition performance degrades greatly when this assumption fails. However, mobile phones, the most widely used sensing devices in pervasive computing environments, are usually carried with unfixed orientation and placement. In this paper, an activity recognition method independent of acceleration sensor orientation and placement was proposed to resolve this problem. First, the original 3D acceleration signals are processed into one-dimensional signals. Then, the concept of a ‘Motif’ is borrowed from bioinformatics to extract position-independent patterns from the one-dimensional signals. Finally, a Vector Space Model (VSM) based on the extracted patterns is built to conduct activity recognition. Experimental results show that the recognition rate of the method reaches 81.41% when the orientation and placement of the sensing devices are not fixed.
Saliency Detection Based on Global and Local Short-term Sparse Representation
FAN Qiang and QI Chun
Computer Science. 2014, 41 (10): 80-83.  doi:10.11896/j.issn.1002-137X.2014.10.018
Abstract PDF(1416KB) ( 611 )   
References | Related Articles | Metrics
Saliency detection is an important issue in many computer vision tasks. We proposed a novel bottom-up saliency detection method based on sparse representation. Saliency detection involves two elements, image representation and saliency measurement, and both elements in our method are biologically plausible and accurate. For an input image, we first used the ICA algorithm to learn a set of basis functions with which the image is represented. Next, we used a global and local saliency framework to measure saliency separately and combined the two results to obtain the final saliency map. The global saliency is obtained through Low-Rank Representation (LRR), and the local saliency is obtained through a sparse coding scheme. We compared our method with six state-of-the-art methods on two popular human eye fixation datasets. The experimental results indicate that the proposed method predicts human eye fixations more accurately.
Fast Sea-Land Segmentation Method Based on Maritime Boundary Tracking
LI Chao-peng and YANG Guang
Computer Science. 2014, 41 (10): 84-86.  doi:10.11896/j.issn.1002-137X.2014.10.019
Abstract PDF(1334KB) ( 794 )   
References | Related Articles | Metrics
Sea-land segmentation is a key issue for marine target detection and coastline extraction.Based on the traditional sea-land segmentation method which processes the image by pixels,this paper presented a method which processes the maritime boundary region efficiently by blocks.This method first investigates four edges of the image and extracts texture features based on edge information named Edge Based Texture (EBT) feature.Then maritime boundary seed block can be captured by EBT feature.From this seed block,maritime boundary is traversed and detected efficiently by EBT feature.Experimental results show that the proposed method is of high accuracy.Compared with the state-of-the-art methods,the computational burden is greatly reduced.
Contextual Dictionary Learning for Super Resolution
YU Wei,YAO Hong-xun,SUN Xiao-shuai,LIU Xian-ming and XU Peng-fei
Computer Science. 2014, 41 (10): 87-90.  doi:10.11896/j.issn.1002-137X.2014.10.020
Abstract PDF(1608KB) ( 427 )   
References | Related Articles | Metrics
This paper proposed a novel dictionary learning method for single-image super resolution based on sparse representation. We utilize patch-level clustering to enhance contextual information in the atom learning stage. Unlike previous dictionary learning work that relies on image-level classification, our training set is constructed from high-resolution and low-resolution patch pairs labeled by patch-level classes, which is more appropriate for image reconstruction. This approach promotes the transfer ability of a dictionary built on a limited training set and eliminates the atom redundancy introduced by multiple training subsets.
STMLRC:Sparse Topic Model with Low Rank Constraint
LIU Chao,ZHUANG Lian-sheng and YU Neng-hai
Computer Science. 2014, 41 (10): 91-94.  doi:10.11896/j.issn.1002-137X.2014.10.021
Abstract PDF(333KB) ( 441 )   
References | Related Articles | Metrics
The projection matrix learned by classic Latent Semantic Analysis is always dense, which leads to high storage cost and unclear semantics for each topic. To tackle this problem, a novel sparse topic model was proposed in this paper. By enforcing sparsity on the projection matrix, the new model selects only a small number of relevant words for each topic and hence leads to a clear semantic interpretation. Moreover, by enforcing low rankness on the encoding matrix, the data projected into the topic subspace exhibits better clustering structure. Experimental results show that the topic subspace learned by our new topic model favors classification and significantly reduces the storage cost of the projection matrix.
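Although the abstract does not give the objective function, models of this kind (sparse projection plus low-rank encoding) are typically written in a form such as the following, where X is the term-document matrix, P the projection (topic) matrix and Z the encoding matrix; this is a generic formulation for illustration, not necessarily the authors’ exact model:

```latex
\min_{P,\,Z} \; \|X - P Z\|_F^2 \;+\; \lambda \|P\|_1 \;+\; \mu \|Z\|_*
```

Here the l1 norm on P encourages each topic to use only a few words, while the nuclear norm on Z encourages a low-rank encoding that clusters well.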
Multiple Plane Extraction Based on Feature Point Tracking over Video Sequences
TAO Lei,WANG Ping and ZHANG Lei
Computer Science. 2014, 41 (10): 95-100.  doi:10.11896/j.issn.1002-137X.2014.10.022
Abstract PDF(2791KB) ( 722 )   
References | Related Articles | Metrics
Planar structures are abundant in both man-made and natural environments which enables the use of planes in various vision tasks.We introduced a new approach for the robust detection of multiple planes over a video sequence without camera calibration or 3D reconstruction.Given a video sequence,we first built a projective geometry model between adjacent frames based on epipolar constraint.Then a homography induced by a plane was computed,with which we could get the planar structure by doing video segmentation.Experimental results on a variety of real video sequences have verified the effectiveness and efficiency of our method.
Smoothing Algorithm with Edge-preserving by Extrema Constraints
JIANG Xiao-lei,YAO Hong-xun and ZHAO Si-cheng
Computer Science. 2014, 41 (10): 101-105.  doi:10.11896/j.issn.1002-137X.2014.10.023
Abstract PDF(1500KB) ( 545 )   
References | Related Articles | Metrics
Edge-aware smoothing is important for many image editing applications as well as image preprocessing.Edge preservation and detail suppression form a contradictory pair.An edge-preserving smoothing algorithm was proposed,in which a signal is asked to attain its extrema at some given points.By manipulations on extrema of the resulting image,significant edges are retained,and at the same time,side edges and small fluctuations are subdued.Our method first obtains a preliminary smoothing version,whose extrema are used as constraints on the resulting image.Among all functions that meet these constraints,we pursued the one that is most similar to the original signal as the smoothing result.This optimization problem is solved by the half-quadratic technique and alternating minimization.Experimental results show that applications such as detail enhancement can benefit from the better performance of our method for preserving edges.
Motion Pattern Analysis in Crowded Scenes Based on Feature Maps
WANG Chong-jing,ZHAO Xu and LIU Yun-cai
Computer Science. 2014, 41 (10): 106-109.  doi:10.11896/j.issn.1002-137X.2014.10.024
Abstract PDF(3458KB) ( 430 )   
References | Related Articles | Metrics
Crowded scene analysis is currently a hot and challenging topic in computer vision field.We proposed a novel approach to analyze motion patterns by clustering the hybrid generative-discriminative feature maps using unsupervised hierarchical clustering algorithm.The hybrid generative-discriminative feature maps are derived by posterior divergence based on the track-lets,which are captured by tracking dense points with three effective rules.The feature maps effectively associate low-level features with the semantically motion patterns by exploiting the hidden information in crowded scenes.Motion pattern analyzing is implemented in a completely unsupervised way and the feature maps are clustered automatically through hierarchical clustering algorithm building on the basis of graphic model.The experiment results precisely reveal the distributions of motion patterns in current crowded videos and demonstrate the effectiveness of our approach.
Content-based Adaptive Contrast Enhancement Using Overlapped Sub-block Processing
DOU Zhi,HAN Yu-bing,HU Jing,SHENG Wei-xing and MA Xiao-feng
Computer Science. 2014, 41 (10): 110-112.  doi:10.11896/j.issn.1002-137X.2014.10.025
Abstract PDF(1253KB) ( 775 )   
References | Related Articles | Metrics
In this paper, a content-based adaptive contrast enhancement method using overlapped sub-block processing was presented, which can flexibly process a wide range of images with various characteristics. It processes an image meticulously by analyzing local characteristics and operating on sub-blocks. We constructed an enhancement function with adjustable parameters that can process various images flexibly. Instead of adjusting these parameters manually, the proposed algorithm obtains reasonable enhancement parameters automatically by extracting the relevant characteristics from the local content of the image, so that various images can be processed adaptively without manual intervention. Experimental results demonstrate that the proposed method can flexibly enhance various images, such as underexposed, overexposed, back-lit and misted images, or even images mixing several of these characteristics, and produces well-enhanced results.
Classification of Hyperspectral Image Based on Sparse Representation and Bag of Words
REN Yue-mei,ZHANG Yan-ning,WEI Wei and ZHANG Xiu-wei
Computer Science. 2014, 41 (10): 113-116.  doi:10.11896/j.issn.1002-137X.2014.10.026
Abstract PDF(1230KB) ( 541 )   
References | Related Articles | Metrics
To enhance the representation ability of the sparse dictionary for hyperspectral image classification based on sparse representation, and to make full use of the spectral and spatial information of hyperspectral images, a novel hyperspectral image classification method based on sparse representation and bag of words was proposed. First, a professional dictionary for each class is generated by the bag-of-words algorithm from the hyperspectral remote sensing image dataset, and the sparse representation dictionary is obtained by merging these professional dictionaries. Then, the sparse coefficients of each pixel are calculated with respect to the sparse representation dictionary, and spatial continuity is enforced by constraining the coefficients with information from neighboring pixels. Finally, each pixel is assigned to the class whose professional dictionary yields the minimum reconstruction error. Experiments on hyperspectral remote sensing images indicate that the proposed method performs better, with a higher overall accuracy and Kappa coefficient than other sparse representation methods and spectral-information-based methods.
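The final classification step described above, assigning a pixel to the class whose sub-dictionary reconstructs it with the smallest error, can be sketched as below (a simplified stand-in: ordinary least squares replaces the sparse solver, and the spatial-neighborhood constraint is omitted):

```python
import numpy as np

def classify_pixel(x, dictionaries):
    """x: spectral vector (d,); dictionaries: {class_label: (d, k) sub-dictionary}.
    Returns the label whose sub-dictionary gives the minimum reconstruction error."""
    best_label, best_err = None, np.inf
    for label, D in dictionaries.items():
        coef, *_ = np.linalg.lstsq(D, x, rcond=None)   # sparse coding in the paper
        err = np.linalg.norm(x - D @ coef)             # reconstruction residual
        if err < best_err:
            best_label, best_err = label, err
    return best_label
```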
SIRS Model on Complex Network of Continuous Time Markov Chain Based on Analysis
CHEN Xu-hui,LI Chen,KE Ming and HAO Ze-long
Computer Science. 2014, 41 (10): 117-121.  doi:10.11896/j.issn.1002-137X.2014.10.027
Abstract PDF(398KB) ( 1626 )   
References | Related Articles | Metrics
To address the random fluctuations that generally characterize the propagation process, taking the SIRS model on uniform networks as the research object, this paper established a stochastic network model based on a continuous-time Markov chain and analyzed the steady-state threshold and critical conditions of the model. The conclusions of the stochastic network model agree with the results of the mean-field approach. In addition, because the propagation model is built on a continuous-time Markov chain, it describes and explains the random fluctuations in the propagation process better than the mean-field approach, which is its most obvious advantage over the mean-field method for such problems. The paper also provides an approach, based on probability and statistics, for analyzing the transmission dynamics of complex networks.
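For reference, the mean-field SIRS equations against which the Markov-chain model is compared take the standard form below, with infection rate \beta, recovery rate \gamma and immunity-loss rate \delta (the notation here is ours; the paper’s network version attaches these dynamics to node degrees):

```latex
\frac{dS}{dt} = \delta R - \beta S I, \qquad
\frac{dI}{dt} = \beta S I - \gamma I, \qquad
\frac{dR}{dt} = \gamma I - \delta R
```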
Sub-coding and Entire-coding Jointly Penalty Based Sparse Representation Dictionary Learning
DONG Jun-jian,MAO Qi-rong,HU Su-li and ZHAN Yong-zhao
Computer Science. 2014, 41 (10): 122-127.  doi:10.11896/j.issn.1002-137X.2014.10.028
Abstract PDF(524KB) ( 383 )   
References | Related Articles | Metrics
Currently, the penalty function of dictionary learning (DL) for sparse representation classification has many variants, each with its own advantages. This paper presented a new dictionary learning method called sub-coding and entire-coding jointly penalized dictionary learning, which adds both sub-coding and entire-coding penalty functions to the dictionary learning objective. The sub-coding penalty makes the learned dictionary usable for classification via its reconstruction error and sub-codings, while the entire-coding penalty makes the learned dictionary usable for classification directly via the whole coding. By combining these two penalties, good recognition results can be obtained. The proposed method was extensively evaluated on an emotional speech database and a face database in comparison with the well-known DL-based sparse representation classification methods DKSVD and FDDL, as well as the classic recognition methods SRC and SVM. The experimental results show that the proposed method has better recognition performance.
Approach to Ensure Safety of Elderly Indoor Based on Spatiotemporal Information
ZHENG Xiao-li,WANG Hai-peng,NI Hong-bo,GUO Bin,LIN Qiang,BAI Liang and WANG Tian-ben
Computer Science. 2014, 41 (10): 128-130.  doi:10.11896/j.issn.1002-137X.2014.10.029
Abstract PDF(329KB) ( 532 )   
References | Related Articles | Metrics
In recent years, the number of elderly people living alone has grown rapidly. Safety is a problem that elderly residents must face; safety issues not only reduce their quality of life but also threaten their lives. This paper proposed a real-time security monitoring solution based on indoor temporal and spatial information. The method models the spatial and temporal data of historical activities to determine whether the current activity is abnormal. We collected data by building an experimental platform and verified the effectiveness of the proposed method by analyzing the experimental results.
Dynamic Buffer Mechanism of P2P VOD on Embedded System
WANG Pan,HUANG Hao and XIE Chang-sheng
Computer Science. 2014, 41 (10): 131-133.  doi:10.11896/j.issn.1002-137X.2014.10.030
Abstract PDF(1740KB) ( 461 )   
References | Related Articles | Metrics
P2P networks are now widely used in IPTV systems. The essence of P2P streaming is to make use of the upload bandwidth of each peer to reduce the load on the streaming servers. For P2P VOD networks, most existing research focuses on peer selection strategies, such as peer topology, peer bandwidth contribution, and the balance between peer churn and stream quality; this research has significantly improved P2P live streaming and P2P VOD systems. In this paper, we designed a new content-oriented buffering mechanism for P2P VOD on embedded systems. It minimizes the weaknesses of embedded systems (little memory and storage) and builds an intelligent buffering mechanism on the basis of content popularity. Using this new mechanism, a P2P VOD system achieves a lower buffer build-up time and a higher data sharing rate.
Depth Map Coding Based on Wavelet Inter-subband Coefficients Prediction
LI Xing,ZHAO Yao,LIN Chun-yu and YAO Chao
Computer Science. 2014, 41 (10): 134-138.  doi:10.11896/j.issn.1002-137X.2014.10.031
Abstract PDF(1274KB) ( 599 )   
References | Related Articles | Metrics
With the re-emergence of 3D video technology, 3D video formats have come into wide use, and coding algorithms for the depth map have become a hot research topic in recent years. Different from image coding based on the wavelet transform, we proposed a novel scheme based on inter-subband correlations, operating on the quantized wavelet coefficients. Firstly, we predicted the coefficients in the horizontal and vertical subbands using the diagonal subband at the same level. Then, we used four coefficients to predict the corresponding coefficient in the adjacent level. Finally, we performed arithmetic coding on the remaining subband coefficients and the residual values after prediction. The experimental results show that the proposed scheme preserves the quality of the depth map while saving up to 18.4% of the bit rate.
Research on Power-saving Technology for Wireless Sensor Node Based on Improved Dynamic Power Management (DPM)
CHEN Gao-jie,CHEN Zhang-wei and YAO Xue-ting
Computer Science. 2014, 41 (10): 139-143.  doi:10.11896/j.issn.1002-137X.2014.10.032
Abstract PDF(407KB) ( 443 )   
References | Related Articles | Metrics
Based on the structural characteristics of wireless sensor nodes and dynamic power management technology, an improved dynamic power management scheme was presented for real working conditions. Energy consumption models were then established for different working conditions and analyzed quantitatively. The results show that the energy consumption of a wireless sensor node is effectively decreased with the improved dynamic power management, and the analysis lays a good foundation for further energy-saving design of real wireless sensor nodes.
Design and Implementation of Trust-based Identity Management Model for Cloud Computing
LI Bing-xu,WU Li-fa,ZHOU Zhen-ji and LI Hua-bo
Computer Science. 2014, 41 (10): 144-148.  doi:10.11896/j.issn.1002-137X.2014.10.033
Abstract PDF(408KB) ( 382 )   
References | Related Articles | Metrics
With the development of cloud computing,identity management issues of cloud computing have attracted great attention.Being widely used in cloud identity management,the identity authentication mechanism based on group signature guarantees that the cloud service provider cannot backtrack users’ identity information through outsourcing data,but it cannot prevent a malicious user from accessing cloud services.To solve the problem,the paper designed an identity management model by integrating trust management with group signature mechanism.The model calculates the user’s trustworthiness firstly,and then divides the users into groups according to the trustworthiness.At last,using the group signature mechanism,our model implements the authentication,which not only ensures user privacy in cloud but also helps the cloud providers to protect cloud services.Experiments show that the model can identify the malicious users effectively,and help the cloud service providers to prevent a malicious user from getting access to cloud services.
Research and Improved Design in IEEE802.15.4 MAC Protocol for Service Distinguishing
QIAO Guan-hua,MAO Jian-lin,GUO Ning,HU Yu-jie and WANG Le
Computer Science. 2014, 41 (10): 149-153.  doi:10.11896/j.issn.1002-137X.2014.10.034
Abstract PDF(391KB) ( 388 )   
References | Related Articles | Metrics
To handle multiple services in dynamic networks, this paper proposed a new backoff scheme, probability judgement based on network load and adaptive service distinguishing backoff (PJNL_ASDB), which judges the network status with a probability mechanism and uses an adaptive dynamic backoff scheme that distinguishes services by introducing a weighting parameter, thereby achieving reasonable backoff. The numerical results of a two-dimensional discrete-time Markov chain model and NS2 simulations show that the PJNL_ASDB scheme not only guarantees the transmission demands of high-priority traffic but also improves the network performance of low-priority traffic.
Minimum-cost Based Data Replication Strategy in Cloud Computing Environment
WU Xiu-guo
Computer Science. 2014, 41 (10): 154-159.  doi:10.11896/j.issn.1002-137X.2014.10.035
Abstract PDF(501KB) ( 430 )   
References | Related Articles | Metrics
Data replica management is an important component of cloud storage systems and is essential for improving system reliability and performance. In general, as the number of replicas increases, the transfer cost declines because data can be transferred more efficiently, but the storage cost grows because of the additional replicas. To reduce the cost of data management, this paper proposed a minimum-cost data replication strategy that balances storage cost and transfer cost, including a data management cost model, a criterion for when adding a replica is necessary, and an approximate algorithm that automatically decides the number of replicas and their storage places. Both theoretical analysis and simulations, conducted on general (random) data sets as well as specific real-world applications with Amazon’s cost model, show that the minimum-cost replica strategy is close to or even matches the minimum-cost benchmark and is efficient enough for practical runtime use in the cloud. This research can also encourage enterprises (users) to actively adopt cloud computing platforms and promote the healthy development of the cloud computing environment.
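The trade-off described above, where more replicas lower transfer cost but raise storage cost, can be illustrated with a toy cost model (the linear storage cost and the inverse-proportional transfer term are assumptions for illustration, not the paper’s model):

```python
def best_replica_count(max_replicas, storage_cost_per_replica, base_transfer_cost):
    """Pick the replica count minimizing storage + transfer cost, assuming transfer
    cost falls roughly as 1/k when data can be fetched from k replicas."""
    costs = {k: k * storage_cost_per_replica + base_transfer_cost / k
             for k in range(1, max_replicas + 1)}
    k_best = min(costs, key=costs.get)
    return k_best, costs[k_best]

# Example: cheap storage and expensive transfer make more replicas pay off.
print(best_replica_count(10, storage_cost_per_replica=2.0, base_transfer_cost=50.0))
```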
Frame-level Stereoscopic Video Transmission Distortion Model at Encoder
WANG Xiao-dong,WANG Teng-fei,HU Bin-bin,JIANG Gang-yi and ZHANG Lian-jun
Computer Science. 2014, 41 (10): 160-163.  doi:10.11896/j.issn.1002-137X.2014.10.036
Abstract PDF(295KB) ( 412 )   
References | Related Articles | Metrics
Taking into account the relationship between network transmission and stereoscopic video, the temporal and spatial correlations of stereo video sequences, channel parameters, and the error concealment technology at the terminal, we put forward a frame-level stereoscopic video transmission distortion model at the encoder. We conducted experiments on stereoscopic video sequences with different motion intensities and disparities under different network conditions. When MSE represents transmission distortion, the average estimation error is 4.16%; when PSNR represents transmission distortion, the average estimation error is 0.93%. The model can accurately estimate the transmission distortion of stereoscopic video at the encoder.
Feature-related Node Address and Replica Distribution over Structured P2P Networks
LAN Ming-Jing
Computer Science. 2014, 41 (10): 164-168.  doi:10.11896/j.issn.1002-137X.2014.10.037
Abstract PDF(457KB) ( 510 )   
References | Related Articles | Metrics
In traditional structured P2P networks, node identifiers are generated randomly or in order, so there is a lack of correlation between node identifiers and features such as node distribution, node location and security. The phenomenon of "wrong correlation" cannot be handled effectively, and there is a risk of data corruption. A new node addressing method was proposed in this paper, together with a correlated replica distribution algorithm. By encoding characteristic information such as node location and security into the node identifier, nodes can be distributed according to their features, and during replica distribution nodes can be avoided or preferred according to these features. On this basis, problems such as "wrong correlation" are solved and distribution efficiency is improved. Simulation experiments show the node distribution and backup node selection results after the improvement, demonstrating the effectiveness of the method.
Distributed Clustering Algorithm in Heterogeneous Wireless Sensor Network Based on Load Balance and Shortest Path
LIU Tang and SUN Yan-qing
Computer Science. 2014, 41 (10): 169-172.  doi:10.11896/j.issn.1002-137X.2014.10.038
Abstract PDF(351KB) ( 433 )   
References | Related Articles | Metrics
To address load balance and data transmission in wireless sensor networks (WSNs), a distributed unequal clustering algorithm based on load balance and shortest path (DUBP) was proposed. In DUBP, in each clustering round, the whole network is first divided into energy-balanced subareas according to an energy consumption factor; then, combining graph theory and a hybrid topology, the Floyd algorithm is used to calculate each node’s shortest distance to the other nodes in its subarea as a path factor. Cluster heads are elected according to these two factors, which prevents low-energy nodes from becoming cluster heads and saves transmission energy. Simulation results demonstrate that DUBP has good adaptability and efficiency and prolongs the lifetime of the WSN.
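The path factor mentioned above relies on all-pairs shortest distances inside a subarea; the standard Floyd(-Warshall) computation it refers to looks like this (the node indexing and distance-matrix encoding are assumptions):

```python
def floyd(dist):
    """dist: n x n list of lists, dist[i][j] = edge weight, or float("inf") if no edge.
    Returns the all-pairs shortest-distance matrix."""
    n = len(dist)
    d = [row[:] for row in dist]
    for k in range(n):                       # allow node k as an intermediate hop
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```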
Research on Cyber Attack Case Base Model Based on Ontology
LI Wen-xiong,WU Dong-ying,LIU Sheng-li and XIAO Da
Computer Science. 2014, 41 (10): 173-176.  doi:10.11896/j.issn.1002-137X.2014.10.039
Abstract PDF(385KB) ( 764 )   
References | Related Articles | Metrics
In the study of network security, cyber-attack cases play an important role in effectively analyzing and defending against illegal network intrusions. However, building an effective cyber-attack case base is difficult, and no complete cyber-attack case base exists. This paper therefore studied a cyber-attack case model based on ontology. The paper first defined a formal representation of cyber-attack cases and classified the domain knowledge of cyber-attack cases. On this basis, applying ontology as a knowledge sharing tool, it built a shared, reusable and scalable cyber-attack case model. Finally, using the proposed ontology-based cyber-attack case model, knowledge acquisition for a network attack event was carried out to verify the validity of the model.
Cluster-based Multipath Anonymous Routing Protocol in Wireless Ad Hoc Networks
ZHANG Zhong-ke and WANG Yun
Computer Science. 2014, 41 (10): 177-183.  doi:10.11896/j.issn.1002-137X.2014.10.040
Abstract PDF(1306KB) ( 381 )   
References | Related Articles | Metrics
Without disclosing the real identities of participating nodes,the shared session keys and secret link identifiers among neighbors are exchanged based on bilinear pairing,and further the local anonymous routing entries between the normal nodes and cluster head are built.With the help of local anonymous routing table,multi-path between source-destination can be constructed and data packets will be anonymously forwarded to destination along these paths.Simulation results show that CMAR achieves perfect anonymous communication with relatively low cost of communication and computation overhead.
Game Model of User’s Privacy-preserving in Social Networks
HUANG Qi-fa,ZHU Jian-ming,SONG Biao and ZHANG Ning
Computer Science. 2014, 41 (10): 184-190.  doi:10.11896/j.issn.1002-137X.2014.10.041
Abstract PDF(597KB) ( 446 )   
References | Related Articles | Metrics
Based on dynamic games with incomplete information, this paper analyzed three kinds of games between attackers and defenders in social networks: the offensive-defensive game, the mutual defense game and the joint attacking game, and further discussed the effects of relationship levels on the game process. The results show that incompletely selfish defenders can optimize their overall defense, with the degree of optimization depending on their privacy value and relationship levels, and that collusion between attackers can yield higher attack utility, although relationship levels affect different attackers differently. The results of this study provide guidance for privacy preservation of social network users.
Identity Authentication Scheme in Opportunistic Network Based on Fuzzy-IBE
CAO Xiao-mei and YIN Ying
Computer Science. 2014, 41 (10): 191-195.  doi:10.11896/j.issn.1002-137X.2014.10.042
Abstract PDF(360KB) ( 595 )   
References | Related Articles | Metrics
An identity authentication scheme in opportunistic network was proposed based on Fuzzy-IBE,which can conform to the characteristics of self-organized management,openness and intermittent connectivity in opportunistic networks.The scheme is committed to addressing the security issues such as privacy leaks in the existing social context-based routing protocols.Because of the intermittent connectivity,the traditional cryptography cannot be applied to the opportunistic networks.So in F-ONIAS,an off-line PKG is used to generate private keys for users.Meanwhile,in identity-based cryptography,identity information may be forged.To avoid such security risks,the biological information is used as a node’s identifier.Simulation results show that implementing our security scheme does not induce any negative impact on the average delay,and achieves higher delivery probability and lower routing overhead rate.
Remote User Experience Evaluation Review:Tools,Methods and Challenges
HAN Li,LIU Zheng-jie,ZHANG Jun and CHEN Yuan-yuan
Computer Science. 2014, 41 (10): 196-203.  doi:10.11896/j.issn.1002-137X.2014.10.043
Abstract PDF(648KB) ( 483 )   
References | Related Articles | Metrics
Having many advantages, such as low cost, convenience, speed, non-intrusiveness, and the ability to study many diverse users in the field, remote user experience evaluation is considered a potential solution to the problems of traditional user experience evaluation and has become a research hotspot in the field of human-computer interaction. This paper reviews the state of remote user experience evaluation and presents an architecture for it. According to this architecture, the issues are classified into three parts for discussion: data capture, automatic analysis and visualization. At the end of the paper, the challenges and future research topics are listed.
Design and Implementation of Requirements Capture Tools Based on MDA
ZENG Yi,HUANG Xing-yan,LI Han-yu and WANG Cui-qin
Computer Science. 2014, 41 (10): 204-209.  doi:10.11896/j.issn.1002-137X.2014.10.044
Abstract PDF(2213KB) ( 413 )   
References | Related Articles | Metrics
Until now, the traditional MDA development process has relied on manual capture and textual descriptions of requirements, which affects the accuracy and consistency of both the requirements model and the PIM model and reduces the degree of automation of MDA development. This article presented how to develop a visual requirements capture tool. Built on the MDA framework and GEF, the tool captures requirements in a goal-situation manner. It can export the requirements model as both a requirements document and XML, and provides sufficient information for the conversion from the requirements model to the PIM model. Finally, the examples given in this article demonstrate the effectiveness of the capture tool. With this tool, the lack of an independent requirements capture process can be remedied, and the MDA development process and the degree of automation of MDA software development can be improved to some extent.
Task Analysis and Task Modeling Method in Mobile Environment
LI Juan-ni,HUA Qing-yi and JI Xiang
Computer Science. 2014, 41 (10): 210-215.  doi:10.11896/j.issn.1002-137X.2014.10.045
Abstract PDF(1512KB) ( 507 )   
References | Related Articles | Metrics
With the emergence of mobile computing and wireless devices, accurately and comprehensively describing users’ dynamic tasks and developing interactive systems with a good user experience have recently become hot topics in the domain of human-computer interaction (HCI). Aiming at this problem, this paper first introduced related concepts of tasks, summarized the characteristics of tasks in the mobile environment, and analyzed the differences between task models in traditional static environments and in the mobile environment; it then expounded the key technologies used in the modeling process. Finally, prospects for future development and suggestions for possible extensions were discussed.
Web Database Security Index Based on Multi-layer Space Fuzzy Subtractive Clustering Algorithm
LIN Nan and SHI Wei-hang
Computer Science. 2014, 41 (10): 216-219.  doi:10.11896/j.issn.1002-137X.2014.10.046
Abstract PDF(345KB) ( 424 )   
References | Related Articles | Metrics
Current Web database indexes use a single text-feature clustering method for index queries. When the clustering features differ, illegal clustering and illegal output problems occur. A Web database security index method was therefore proposed based on multi-layer space fuzzy subtractive clustering. The database information vectors are organized as a multi-layer vector autoregressive space, the data flow information is concentrated on the multi-layer fuzzy clustering centers, and a fuzzy inference system based on subtractive clustering is used to establish the database index function. The clustering center vectors are adjusted at variable scales, so that illegal intrusion and clustering of neighboring data points are avoided and a secure Web database index is realized. Simulation results show that the new algorithm expands the database information flow in the multi-layer vector autoregressive space, greatly improves feature matching, and removes illegal data output, ensuring a secure index.
Goal-based Conceptual Business Process Modeling
WANG Nan,SUN Shan-wu and OUYANG Dan-tong
Computer Science. 2014, 41 (10): 220-224.  doi:10.11896/j.issn.1002-137X.2014.10.047
Abstract PDF(460KB) ( 370 )   
References | Related Articles | Metrics
Human’s activities are mainly driven by goals.The organization’s business process should contain those activities which can add value for users and the activities should provide service for the business goals.This paper proposed a goal-based conceptual business process model framework. Based on the refined goal-activities hierarchy,the research automatically generated reusable business process fragments bottom up,and then realized an automatic match between the user’s goals and the business process fragments.Furthermore,the evaluation analysis of the extent on which the activities support the business process fragments was given to provide guidance for users to build the business process conceptual models.
Research on Accurate Analysis of Internet Public Opinion:A Semantic Grammar-based Method
HOU Sheng-luan,LIU Lei and CAO Cun-gen
Computer Science. 2014, 41 (10): 225-231.  doi:10.11896/j.issn.1002-137X.2014.10.048
Abstract PDF(681KB) ( 542 )   
References | Related Articles | Metrics
The conventional methods of public opinion analysis based on keywords statistics are inaccurate due to lack of semantic processing which is necessary.A novel semantic grammar-based method for accurate analysis of Internet public opinion was presented.This method has two parts.One is an executable Internet public opinion accurate analysis Language (Eipoaal),which is a general-purpose program language that can be designed according to actual demand;and the other is an Internet public opinion accurate analysis system(Ipoaas),which provides a running platform for Eipoaal.This system has been implemented and tested in the analysis of Internet public opinion about corruption.Experimental results show the validity of the method.
Modeling and Multi-start Variable Neighborhood Descent Solution of Two-echelon Open Vehicle Routing Problem
ZENG Zheng-yang,XU Wei-sheng and XU Zhi-yu
Computer Science. 2014, 41 (10): 232-237.  doi:10.11896/j.issn.1002-137X.2014.10.049
Abstract PDF(500KB) ( 843 )   
References | Related Articles | Metrics
In this paper, a Two-Echelon Open Vehicle Routing Problem (2E-OVRP) model was constructed according to the open and two-echelon distribution of freight that is common in city logistics. In the 2E-OVRP, freight from a remote depot must be delivered through intermediate depots (called satellites): the first echelon runs from the depot to the satellites, while the second runs from the satellites to the customers. Vehicles in both echelons are not required to return to their starting points after finishing their respective deliveries, or they may do so by making the same trips in reverse order. We designed a multi-start variable neighborhood descent algorithm to solve this NP-hard problem effectively. Computational tests on several extended benchmark instances show that the designed algorithm balances solution quality and efficiency and can solve the proposed 2E-OVRP effectively.
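The solver described above combines random restarts with variable neighborhood descent; a generic skeleton of that scheme (the neighborhood operators and construction heuristic are placeholders, not the paper’s specific moves) is:

```python
import random

def multi_start_vnd(build_solution, neighborhoods, cost, starts=20, seed=0):
    """build_solution(): returns a random feasible solution.
    neighborhoods: list of functions sol -> best improving neighbor or None.
    cost(sol): objective value to minimize."""
    random.seed(seed)
    best = None
    for _ in range(starts):
        sol = build_solution()
        k = 0
        while k < len(neighborhoods):        # variable neighborhood descent
            cand = neighborhoods[k](sol)
            if cand is not None and cost(cand) < cost(sol):
                sol, k = cand, 0             # improvement: restart from first neighborhood
            else:
                k += 1                       # no improvement: move to the next neighborhood
        if best is None or cost(sol) < cost(best):
            best = sol
    return best
```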
Design and Implementation of Apriori on GPU
TANG Jia-wei and WANG Xiao-feng
Computer Science. 2014, 41 (10): 238-243.  doi:10.11896/j.issn.1002-137X.2014.10.050
Abstract PDF(451KB) ( 475 )   
References | Related Articles | Metrics
With the arrival of the big data and parallel computing era, converting serial data mining algorithms into parallel ones to exploit inexpensive hardware has become a clear trend. In this paper, the two main steps of the serial Apriori algorithm, support counting and candidate set generation, were re-implemented in parallel on the CUDA architecture, and different parallel implementations of Apriori were compared to find a better solution. Experiments indicate that the time spent on support counting and candidate set generation decreases by 16% and 25% respectively on a data set containing 10000 items.
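To show why these two steps parallelize well, here is a small CPU-side sketch of support counting over a transaction bitmap and of the Apriori join step; the per-candidate counts are independent of one another, which is what maps naturally onto GPU threads. The actual CUDA kernels, data layout and the subset-based pruning step of the paper are not reproduced here.

import numpy as np
from itertools import combinations

def support_counts(bitmap, candidates):
    """bitmap: (n_transactions, n_items) boolean matrix.
    candidates: list of tuples of item indices (the k-itemsets).
    Each candidate only reads the bitmap, so the loop body is
    embarrassingly parallel (one thread per candidate on a GPU)."""
    counts = []
    for itemset in candidates:
        # a transaction supports the itemset iff it contains every item
        counts.append(int(bitmap[:, list(itemset)].all(axis=1).sum()))
    return counts

def generate_candidates(frequent_k):
    """Join step of Apriori: merge frequent k-itemsets that share a
    (k-1)-prefix; pruning by the Apriori property is omitted for brevity."""
    frequent_k = sorted(frequent_k)
    k = len(frequent_k[0])
    cands = []
    for a, b in combinations(frequent_k, 2):
        if a[:k - 1] == b[:k - 1]:
            cands.append(tuple(sorted(set(a) | set(b))))
    return cands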
Clustering Stability Analysis for Non-numeric Data Based on Concept Lattice
ZHI Hui-lai
Computer Science. 2014, 41 (10): 244-248.  doi:10.11896/j.issn.1002-137X.2014.10.051
Abstract PDF(396KB) ( 419 )   
References | Related Articles | Metrics
Stable concepts usually correspond strongly to real-world entities, and the calculation of concept stability, which has been proven NP-complete, plays an important role in clustering analysis. To calculate concept stability precisely, the concept lattice was used as the analysis model. First, the kernel object set was defined and a way to find the kernel object set of a concept was given, after which concept stability was calculated from the kernel object set. Meanwhile, a method for calculating the kernel attribute set of a given concept was derived directly from the duality principle of concept lattices. Finally, an example was given to illustrate the application of concept stability.
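The stability of a formal concept (A, B) is usually defined as the fraction of subsets of the extent A whose derivation is still the intent B. The brute-force computation below only illustrates that definition (and why the problem is exponential); the kernel-object-set technique described above is precisely a way to avoid enumerating all 2^|A| subsets.

from itertools import chain, combinations

def derive(objects, context):
    """Attributes shared by all given objects (the ' operator of FCA).
    context maps each object to its set of attributes; the empty object
    set derives to the set of all attributes of the context."""
    if not objects:
        return set.union(*context.values()) if context else set()
    return set.intersection(*(context[o] for o in objects))

def stability(extent, intent, context):
    """sigma(A, B) = |{C subset of A : C' = B}| / 2^|A|  (brute force)."""
    extent = list(extent)
    subsets = chain.from_iterable(
        combinations(extent, r) for r in range(len(extent) + 1))
    hits = sum(1 for C in subsets if derive(set(C), context) == set(intent))
    return hits / 2 ** len(extent)

# toy context: objects -> attributes; the concept ({o1, o2}, {a, b}) has stability 0.75
ctx = {"o1": {"a", "b"}, "o2": {"a", "b"}, "o3": {"a", "c"}}
print(stability({"o1", "o2"}, {"a", "b"}, ctx))   # 0.75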
Attribute Reduction Based on Closure Operators
LIU Jing and MI Ju-sheng
Computer Science. 2014, 41 (10): 249-251.  doi:10.11896/j.issn.1002-137X.2014.10.052
Abstract PDF(272KB) ( 399 )   
References | Related Articles | Metrics
For a consistent information system, two closure operators C(R) and C(r) on the power set of the conditional attribute set were first defined, and the properties of the two families of closed sets Cr and CR were then discussed. The relationships among Cr, CR and the set Ω of all discernibility attribute sets were examined, yielding a simple method for attribute reduction in the consistent decision tables defined in reference [4]. Meanwhile, a sufficient and necessary condition for Cr = CR was proved. Finally, it was proved that under the condition Cr = CR, the proposed method is equivalent to those in references [4] and [7].
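For readers less familiar with the terminology (the specific constructions of C(R) and C(r) are the paper's own and are not reproduced here), a map C on the power set of the conditional attribute set A is a closure operator, with its family of closed sets given by its fixpoints, exactly when for all X, Y ⊆ A:

\[
X \subseteq C(X), \qquad
X \subseteq Y \;\Rightarrow\; C(X) \subseteq C(Y), \qquad
C(C(X)) = C(X), \qquad
\mathcal{C}_C = \{\, X \subseteq A : C(X) = X \,\}.
\]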
Research of Domain QoS-aware Logistics Web Services Optimized Combination
AN Ji-yu,WANG Zhen-zhen,LIU Zhi-zhong and XUE Xiao
Computer Science. 2014, 41 (10): 252-256.  doi:10.11896/j.issn.1002-137X.2014.10.053
Abstract PDF(389KB) ( 420 )   
References | Related Articles | Metrics
Optimized composition occupies an important position in modern logistics services. At present, research on logistics Web service composition focuses on general-purpose QoS indexes, which makes it difficult to meet the service-selection needs of specific domains, and it concentrates on economics and management while lacking support from service computing technology. To address these problems, a domain QoS evaluation model was built through domain QoS awareness to analyze service composition, so that logistics service provider selection, evolution and optimized composition can be studied in a domain QoS-aware way. The shortest path among the composition schemes, i.e. the composite service's QoS value, is then computed with Dijkstra's algorithm, which yields the domain QoS-aware optimized composition scheme for logistics Web services. The development of domain QoS-aware service composition in the modern logistics industry is also discussed, which has certain theoretical and practical significance.
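A minimal sketch of the shortest-path step, assuming the composition alternatives have already been turned into a weighted directed graph whose edge weights are the (domain) QoS costs of candidate services; the graph construction and the domain QoS evaluation model itself are not shown, and the node names in the toy example are purely illustrative.

import heapq

def dijkstra(graph, source, target):
    """graph: dict mapping node -> list of (neighbor, qos_cost) pairs,
    with non-negative costs. Returns (total_cost, path) or (inf, [])."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == target:
            # reconstruct the selected service chain
            path = [u]
            while path[-1] != source:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return float("inf"), []

# toy composition graph: start -> {carrierA, carrierB} -> warehouse -> end
g = {"start": [("carrierA", 2.0), ("carrierB", 1.5)],
     "carrierA": [("warehouse", 1.0)],
     "carrierB": [("warehouse", 2.0)],
     "warehouse": [("end", 0.5)]}
print(dijkstra(g, "start", "end"))   # (3.5, ['start', 'carrierA', 'warehouse', 'end'])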
Research on Algebraic Method of Interoperation of Heterogeneous Resource in Pervasive Computing Environment
JIANG Li-fen and ZHAO Zi-ping
Computer Science. 2014, 41 (10): 257-260.  doi:10.11896/j.issn.1002-137X.2014.10.054
Abstract PDF(388KB) ( 365 )   
References | Related Articles | Metrics
This paper analyzed the requirements for, and the existing problems of, the interoperation and integration of heterogeneous resources in pervasive computing environments. In view of the semantic problems present in heterogeneous resources, an algebraic method for ontology was proposed and a relatively complete algebraic theory for ontology was established. It improves and develops the existing algebraic theory for ontology, and provides theoretical guidance for the interoperation, integration, management and querying of heterogeneous resources in pervasive computing environments.
Clustering Algorithm Based on Quantum Game and Grid
HUANG De-cai and TANG Sheng-long
Computer Science. 2014, 41 (10): 261-265.  doi:10.11896/j.issn.1002-137X.2014.10.055
Abstract PDF(417KB) ( 428 )   
References | Related Articles | Metrics
A quantum game is an analogue of a classical game: through quantum entanglement, players interact with each other implicitly, and the game can end differently. Quantum games were applied to clustering, and a clustering algorithm based on quantum games and grids was proposed in which data points are regarded as players. By embedding a distance function into the payoff matrix, similar data points obtain more payoff, and clusters are formed accordingly. In addition, a grid-merging rule was designed to simplify the game. Simulations show that the clustering quality of this algorithm is superior to that of K-means and other algorithms. Finally, several parameters of the algorithm were discussed and recommendations for parameter selection were provided.
Method of Information Delivery for State Accessibility in Non-determinate System
LAO Jia-qi,WEN Zhong-hua,WU Xiao-hui and LI Yang
Computer Science. 2014, 41 (10): 266-269.  doi:10.11896/j.issn.1002-137X.2014.10.056
Abstract PDF(282KB) ( 391 )   
References | Related Articles | Metrics
In non-deterministic planning, the lack of guidance information causes the search for a plan to expand many unwanted states and actions, resulting in redundant computation. Before seeking a plan, it is therefore important to determine the reachability relations between states in a non-deterministic state transition system. Previous algorithms simulate reachability through multiplication of state transition matrices, but for large-scale systems the overhead of such algorithms is high. We therefore proposed an information-passing method for solving state reachability in a matrix-modeled non-deterministic system: each state records the reachability information of the states that can reach it, and by passing this information between states, the reachability relations of the system are obtained while a large number of matrix operations are avoided. Comparative experiments show that the algorithm outperforms the matrix multiplication algorithm.
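A small sketch of the information-passing idea, under the simplifying assumption that the non-deterministic transition relation is given as state -> set of possibly-next states: each state accumulates the set of states known to reach it, and updates are propagated along the edges until a fixpoint, so no matrix product is ever formed. The paper's actual data structures and pruning are not reproduced here.

from collections import defaultdict, deque

def reaches(transitions):
    """transitions: dict state -> set of possible successor states.
    Returns reached_by[t] = set of states that can reach t, computed by
    repeatedly passing each state's own reachability record along its
    outgoing edges until nothing changes."""
    reached_by = defaultdict(set)
    queue = deque(transitions.keys())
    in_queue = set(queue)
    while queue:
        t = queue.popleft()
        in_queue.discard(t)
        # information t passes on: t itself plus everything that reaches t
        info = reached_by[t] | {t}
        for u in transitions.get(t, ()):
            if not info <= reached_by[u]:
                reached_by[u] |= info
                if u not in in_queue:          # re-examine u later
                    queue.append(u)
                    in_queue.add(u)
    return reached_by

# toy non-deterministic system: the action at s0 may lead to s1 or s2
sys = {"s0": {"s1", "s2"}, "s1": {"s2"}, "s2": set()}
print(sorted(reaches(sys)["s2"]))   # ['s0', 's1']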
Lambek Calculus of Flexible Word Order of Chinese Based Statements
LIU Dong-ning,DENG Chun-guo,TENG Shao-hua and LIANG Lu
Computer Science. 2014, 41 (10): 270-275.  doi:10.11896/j.issn.1002-137X.2014.10.057
Abstract PDF(1181KB) ( 671 )   
References | Related Articles | Metrics
Natural language processing has shifted from the syntactic and lexical level to the lightweight semantic level. For the processing of Chinese narrative sentences, the traditional Lambek calculus cannot handle statements with a flexible word order, and existing remedies such as adding modal words or new conjunctions are unsuitable for computer processing because they increase the complexity of the already NP-hard Lambek calculus. In response, this paper used a Lambek calculus with marked verb matching to process flexible-word-order Chinese narrative sentences. The low time complexity of the marked-verb-matching algorithm enables computer programs to process flexible-word-order Chinese sentences effectively, and also makes lightweight semantic processing possible through the corresponding Curry-Howard theory and lambda calculus.
Mutual Information Distribution of Frequent N-gram Chinese Characters
YU Yi-jiao, YIN Yan-fei and LIU Qin
Computer Science. 2014, 41 (10): 276-282.  doi:10.11896/j.issn.1002-137X.2014.10.058
Abstract   
References | Related Articles | Metrics
Mutual information based Chinese word segmentation and new term extraction have been typical statistics-based Chinese information processing technologies over the past 20 years. This paper examined the mutual information distribution of frequent 2-gram, 3-gram and 4-gram Chinese character strings in a large corpus. The statistical results reveal two clear findings. First, there is no evident mutual information boundary between Chinese words and phrases, which means that words and phrases cannot be distinguished by either mutual information or frequency. Second, the mutual information values of words, phrases and illegal Chinese strings are mixed together, which dramatically affects the precision of statistics-based Chinese information processing. These two findings show that Chinese word extraction and segmentation based on statistical techniques alone still face great challenges.
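For reference, the mutual information score used in this kind of study is typically the pointwise mutual information of a character string; a minimal bigram version over a raw corpus string is sketched below, with PMI(xy) = log2( p(xy) / (p(x) p(y)) ) and probabilities estimated by relative frequency. The paper's corpus, any smoothing, and the 3-gram and 4-gram extensions are not reproduced, and min_count is an illustrative frequency threshold.

import math
from collections import Counter

def bigram_pmi(corpus, min_count=5):
    """Pointwise mutual information of each frequent adjacent character pair."""
    chars = Counter(corpus)
    bigrams = Counter(corpus[i:i + 2] for i in range(len(corpus) - 1))
    n_chars = sum(chars.values())
    n_bigrams = sum(bigrams.values())
    pmi = {}
    for bg, c in bigrams.items():
        if c < min_count:
            continue                      # keep only frequent bigrams
        p_xy = c / n_bigrams
        p_x = chars[bg[0]] / n_chars
        p_y = chars[bg[1]] / n_chars
        pmi[bg] = math.log2(p_xy / (p_x * p_y))
    return pmi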
Multi-relational Nave Bayesian Classifier Using Feature Weighting
XU Guang-mei,LIU Hong-zhe and ZHANG Jing-zun
Computer Science. 2014, 41 (10): 283-285.  doi:10.11896/j.issn.1002-137X.2014.10.059
Abstract PDF(260KB) ( 707 )   
References | Related Articles | Metrics
To improve the accuracy of multi-relational naïve Bayesian classifiers, this paper reviewed existing feature weighting methods and extended them to handle multi-relational data directly. Based on tuple ID propagation and tuple counting methods, a multi-relational naïve Bayesian classifier using feature weighting (MRNBC-W) was given. Experiments on the Financial database show that, with the help of feature weighting, the classifier achieves better accuracy without an increase in time complexity. Furthermore, a variant of MRNBC-W based on mutual information (MRNBC-W-MI) was implemented.
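To make the weighting idea concrete, a feature-weighted naïve Bayes score for discrete features multiplies each feature's log-likelihood by its weight, so score(c | x) = log P(c) + sum over f of w_f * log P(x_f | c); with all weights equal to 1 it reduces to the ordinary classifier. This is a generic single-table sketch, not the MRNBC-W propagation over relational tables via tuple IDs, and the weights (e.g. per-feature mutual information, as in MRNBC-W-MI) are assumed to be supplied by the caller.

import math
from collections import defaultdict

class WeightedNaiveBayes:
    """Naive Bayes for discrete features with per-feature weights w_f."""

    def fit(self, X, y, weights, alpha=1.0):
        self.weights = weights                    # e.g. mutual information per feature
        self.alpha = alpha                        # Laplace smoothing
        self.classes = sorted(set(y))
        self.prior = {c: y.count(c) / len(y) for c in self.classes}
        self.cond = {c: defaultdict(lambda: defaultdict(int)) for c in self.classes}
        self.values = defaultdict(set)
        for row, c in zip(X, y):
            for f, v in enumerate(row):
                self.cond[c][f][v] += 1           # count of value v for feature f in class c
                self.values[f].add(v)
        return self

    def _log_likelihood(self, c, f, v):
        counts = self.cond[c][f]
        total = sum(counts.values())
        return math.log((counts[v] + self.alpha) /
                        (total + self.alpha * len(self.values[f])))

    def predict(self, row):
        def score(c):
            return math.log(self.prior[c]) + sum(
                self.weights[f] * self._log_likelihood(c, f, v)
                for f, v in enumerate(row))
        return max(self.classes, key=score)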
New Applicable Condition of Dempster’s Combination Rule
CUI Jia-wei and LI Bi-cheng
Computer Science. 2014, 41 (10): 286-290.  doi:10.11896/j.issn.1002-137X.2014.10.060
Abstract PDF(388KB) ( 565 )   
References | Related Articles | Metrics
To address the open issues that the indicator is ambiguous and the threshold setting is overly subjective when the classical conflict coefficient is used to judge the applicability of Dempster's combination rule in D-S evidence theory, a new applicable condition for Dempster's combination rule was proposed. First, it is shown that evidence conflict alone cannot determine whether Dempster's combination rule applies. Second, the reasons why Dempster's combination rule can produce unreasonable results are analyzed. Third, the new applicable condition is proposed. Numerical examples and comparisons with similar methods demonstrate that the proposed condition is clear and simple, and provides a reasonable and applicable indicator.
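For context, Dempster's rule combines two basic probability assignments m1 and m2 over subsets of the frame of discernment, with the classical conflict coefficient K appearing in the normalization: m(A) = (1/(1-K)) * sum over B ∩ C = A of m1(B)m2(C), where K = sum over B ∩ C = ∅ of m1(B)m2(C). A direct implementation is below; the new applicable condition proposed in the paper is not part of this sketch.

from collections import defaultdict

def dempster_combine(m1, m2):
    """m1, m2: dicts mapping frozenset focal elements to masses (summing to 1).
    Returns (combined mass function, conflict coefficient K)."""
    combined = defaultdict(float)
    conflict = 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] += mb * mc
            else:
                conflict += mb * mc
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {a: m / (1.0 - conflict) for a, m in combined.items()}, conflict

# Zadeh-style example on the frame {A, B, C}: K is about 0.99 and all
# remaining mass ends up on {'C'}, the classic "unreasonable" outcome
m1 = {frozenset("A"): 0.9, frozenset("C"): 0.1}
m2 = {frozenset("B"): 0.9, frozenset("C"): 0.1}
print(dempster_combine(m1, m2))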
Hybrid Gene Selection Algorithm Based on Optimized Neighborhood Rough Set
CHEN Tao,HONG Zeng-lin and DENG Fang-an
Computer Science. 2014, 41 (10): 291-294.  doi:10.11896/j.issn.1002-137X.2014.10.061
Abstract PDF(411KB) ( 373 )   
References | Related Articles | Metrics
DNA microarray technology can measure the activity of tens of thousands of genes in cells and has been widely used in clinical diagnosis. However, microarray data are high-dimensional, have small sample sizes, and contain much noise and many redundant genes. To further improve classification performance, this paper proposed a hybrid gene selection algorithm. First, the ReliefF algorithm is used to eliminate a large number of irrelevant genes and obtain a candidate set of feature genes. Then an optimized neighborhood rough set model based on the differential evolution algorithm is used to select the feature genes. Finally, the validity of the algorithm is verified with a support vector machine as the classifier. Simulation results show that the algorithm achieves higher classification accuracy with fewer feature genes, enhancing both the generalization performance and the time efficiency.
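The filtering stage can be pictured with the basic two-class Relief scheme that ReliefF generalizes (ReliefF itself averages over k nearest hits and misses and handles multiple classes); the differential-evolution-optimized neighborhood rough set and the SVM classifier are not shown. The sketch assumes features scaled to [0, 1] and at least one same-class and one other-class sample for every drawn instance.

import numpy as np

def relief_scores(X, y, n_iterations=100, seed=0):
    """Basic two-class Relief: for a sampled instance, move each feature
    weight towards its distance to the nearest miss (other class) and away
    from its distance to the nearest hit (same class)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iterations):
        i = rng.integers(n)
        dists = np.abs(X - X[i]).sum(axis=1)
        dists[i] = np.inf                        # exclude the instance itself
        same = (y == y[i])
        same[i] = False
        hit = np.argmin(np.where(same, dists, np.inf))
        miss = np.argmin(np.where(~same, dists, np.inf))
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / n_iterations
    return w          # rank features by w and keep the top ones as the candidate gene set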
Detecting Product Review Spammers Based on Review Graphs
WANG Zhuo,LI Zhun,XU Ye and SONG Kai
Computer Science. 2014, 41 (10): 295-299.  doi:10.11896/j.issn.1002-137X.2014.10.062
Abstract PDF(471KB) ( 483 )   
References | Related Articles | Metrics
Online product reviews can significantly affect product sales, which has produced a large number of reviewers who promote and/or demote target products by writing untruthful reviews. Wang G et al. proposed review graphs, which capture the relationships among reviews, reviewers and stores and compute their reputations by a convergent iterative computation, thereby identifying fake reviewers. To handle store-less shopping environments, we proposed a new review graph structure that replaces stores with products, and designed a novel algorithm, ICE, that speeds up the iteration by eliminating a certain portion of reviewers and reviews in each iteration. Meanwhile, new scoring criteria for reviews, reviewers and products improve the precision of identifying fake reviewers. Experiments show that the proposed ICE algorithm is both faster and more accurate than the previous method.
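The convergent iteration can be pictured with the following schematic. It is not the paper's algorithm: the agreement and honesty functions below are placeholder choices, and dropping a fixed fraction of the lowest-scoring reviewers each round merely stands in for ICE's elimination step and scoring criteria.

def score_reviewers(reviews, iterations=20, prune_frac=0.1):
    """reviews: list of (reviewer_id, product_id, rating in [0, 1]).
    Alternately update product consensus ratings and reviewer honesty;
    each round the lowest-scoring reviewers are removed from further
    iterations. Returns the last honesty score seen for every reviewer;
    low values flag likely spammers."""
    honesty = {r: 1.0 for r, _, _ in reviews}
    final = dict(honesty)
    active = list(reviews)
    for _ in range(iterations):
        # product consensus: honesty-weighted mean rating
        tot, wgt = {}, {}
        for r, p, rating in active:
            tot[p] = tot.get(p, 0.0) + honesty[r] * rating
            wgt[p] = wgt.get(p, 0.0) + honesty[r]
        consensus = {p: tot[p] / wgt[p] for p in tot if wgt[p] > 0}
        # reviewer honesty: mean agreement of their reviews with the consensus
        s, n = {}, {}
        for r, p, rating in active:
            if p not in consensus:
                continue
            agree = 1.0 - abs(rating - consensus[p])
            s[r] = s.get(r, 0.0) + agree
            n[r] = n.get(r, 0) + 1
        honesty = {r: s[r] / n[r] for r in s}
        final.update(honesty)
        # eliminate the bottom prune_frac of currently active reviewers
        if honesty:
            cutoff = sorted(honesty.values())[int(prune_frac * len(honesty))]
            active = [t for t in active if honesty.get(t[0], 0.0) >= cutoff]
    return final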
Abnormal Event Detection Using Linear Dynamical System Combined with Sparse Coding
LIU Yang and LI Yi-bo
Computer Science. 2014, 41 (10): 300-305.  doi:10.11896/j.issn.1002-137X.2014.10.063
Abstract PDF(1859KB) ( 471 )   
References | Related Articles | Metrics
A linear dynamical system model combined with sparse coding is used for abnormal event detection. The Linear Dynamical System (LDS), as a descriptor of dynamic texture, can effectively capture the transitions of appearance and motion, and is applied here to describe spatio-temporal cuboids. Since LDSs do not lie in a Euclidean space, traditional sparse coding techniques cannot be applied directly; a similarity transformation combined with sparse coding, based on a principled convex optimization formulation, handles the sparse coding optimization. The results show that the proposed algorithm performs well and outperforms earlier approaches.
Improved TLD Algorithm Based on Region Marking of Double Bounding Boxes
ZHANG Wei-wei,TANG Guang-ming and SUN Yi-feng
Computer Science. 2014, 41 (10): 306-309.  doi:10.11896/j.issn.1002-137X.2014.10.064
Abstract PDF(1635KB) ( 426 )   
References | Related Articles | Metrics
Because the Tracking-Learning-Detection (TLD) framework marks regions with a single bounding box, it can hardly balance tracking of the whole object against tracking of the part of interest. This paper proposed an improved algorithm that introduces a region-marking method with double bounding boxes: one box labels the whole object, while the other is drawn over a stable area of the image to indicate the part of interest. When extracting trace points, a weighting approach places more points in the area of interest, which improves TLD's adaptability to local variation. Experimental results show that the improved algorithm tracks the object well when a fixed part remains stable while the rest varies; for objects without a stable local area, the improvement is less obvious.
Data Field-based Feature Extraction Method for Sparse Binary Image
WU Tao,CHEN Yi-xiang and YANG Jun-jie
Computer Science. 2014, 41 (10): 310-316.  doi:10.11896/j.issn.1002-137X.2014.10.065
Abstract PDF(571KB) ( 373 )   
References | Related Articles | Metrics
To extract image features automatically, a novel data field-based method for sparse binary images was proposed from the viewpoint of physics-like field theory. First, the method constructs a map from grayscale space to potential space by generating the data field of a given binary image. Next, it calculates the potential value and the principal direction of each non-zero pixel by scanning its 8-connected region, obtaining the potential matrix and the direction angle matrix. Finally, it generates the feature vectors and the corresponding visual curves after normalizing the potential values and principal directions. The proposed method solves the problem of image feature extraction with data fields and keeps a balance between the locality of the image grayscale space and the globality of the potential space of the data field. Quantitative and qualitative experiments on handwritten digit images indicate that the method yields accurate and robust feature extraction results and is reasonable and effective.
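The potential of a data field is commonly taken to be a Gaussian-style superposition of the contributions of all non-zero pixels, phi(p) = sum over i of exp(-(||p - p_i|| / sigma)^2). A minimal version over a binary image is sketched below; the paper additionally restricts the scan to 8-connected regions and derives a principal direction per pixel, which this sketch omits, and sigma, bins and the histogram descriptor are illustrative simplifications.

import numpy as np

def potential_matrix(binary_image, sigma=2.0):
    """Potential of the data field generated by the non-zero pixels,
    evaluated at every pixel of the image."""
    ys, xs = np.nonzero(binary_image)
    h, w = binary_image.shape
    gy, gx = np.mgrid[0:h, 0:w]
    phi = np.zeros((h, w))
    for y, x in zip(ys, xs):
        phi += np.exp(-((gy - y) ** 2 + (gx - x) ** 2) / sigma ** 2)
    return phi

def feature_vector(binary_image, sigma=2.0, bins=16):
    """Normalize the potentials at the non-zero pixels and histogram them,
    giving a fixed-length descriptor (a simplification of the paper's
    potential / principal-direction curves)."""
    phi = potential_matrix(binary_image, sigma)
    vals = phi[binary_image > 0]
    vals = vals / vals.max() if vals.size else vals
    hist, _ = np.histogram(vals, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)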
Planar Delaunay Triangulation Algorithm Based on 2D Convex Hull
BI Shuo-ben,CHEN Dong-qi,YAN Jian and GUO Yi
Computer Science. 2014, 41 (10): 317-320.  doi:10.11896/j.issn.1002-137X.2014.10.066
Abstract PDF(299KB) ( 625 )   
References | Related Articles | Metrics
This paper proposed a planar Delaunay triangulation algorithm based on the parallel 2D convex hull algorithm presented by Yan Jian et al. in reference [20]. The algorithm constructs an initial triangulation by recording the grown edges and the removed points during convex hull building, then constructs partial Delaunay triangulations by point-by-point insertion inside each triangle of the initial triangulation, and finally locally optimizes the boundary edges of the local Delaunay triangulations to obtain the Delaunay triangulation of the whole original point set. The correctness of the algorithm was discussed, and the experimental results show that the algorithm is efficient and stable.
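The local optimization step hinges on the empty-circumcircle criterion: an edge shared by triangles (a, b, c) and (a, b, d) should be flipped when d lies strictly inside the circumcircle of (a, b, c). The standard determinant test is sketched below; the convex-hull-driven construction from reference [20] and the point-by-point insertion are not reproduced.

def in_circumcircle(a, b, c, d):
    """True if point d lies strictly inside the circumcircle of triangle
    (a, b, c); the triangle must be given in counter-clockwise order.
    Points are (x, y) tuples."""
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    det = ((ax * ax + ay * ay) * (bx * cy - cx * by)
           - (bx * bx + by * by) * (ax * cy - cx * ay)
           + (cx * cx + cy * cy) * (ax * by - bx * ay))
    return det > 0.0

def should_flip(a, b, c, d):
    """Edge ab, shared by triangles abc and abd, violates the Delaunay
    criterion (and should be flipped to cd) if d is inside circumcircle(abc)."""
    return in_circumcircle(a, b, c, d)

# unit square: the four corners are cocircular, so the test returns False
print(in_circumcircle((0, 0), (1, 0), (1, 1), (0, 1)))   # False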
Real-time Simulation of Dynamic Clouds Based on Cellular Automata
FAN Xiao-lei,ZHANG Li-min,ZHANG Bing-qiang and ZHANG Yuan
Computer Science. 2014, 41 (10): 321-325.  doi:10.11896/j.issn.1002-137X.2014.10.067
Abstract PDF(1744KB) ( 474 )   
References | Related Articles | Metrics
This paper established a cloud model based on cellular automata and developed a method for handling boundary grid points. It describes some of the dynamic behavior of clouds by using improved transition rules and by introducing an ascending air current, techniques that enhance run-time efficiency. At the same time, a multiple forward scattering model is used in the illumination calculation, with the Henyey-Greenstein phase function embedded in the forward scattering term. The simulation results show that the proposed methods can render clouds realistically in real time.
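For reference, the Henyey-Greenstein phase function used in the forward-scattering term has the closed form p(theta) = (1 - g^2) / (4 pi (1 + g^2 - 2 g cos theta)^(3/2)), where g in (-1, 1) is the scattering asymmetry parameter. A direct implementation is below; the cellular-automata transition rules and the multiple forward scattering accumulation are not shown, and the g value is illustrative.

import math

def henyey_greenstein(cos_theta, g=0.85):
    """Henyey-Greenstein phase function: density of scattering by an angle
    theta (given as cos(theta)); positive g peaks the scattering forward."""
    denom = (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
    return (1.0 - g * g) / (4.0 * math.pi * denom)

# forward scattering (theta = 0) dominates backward scattering for positive g
print(henyey_greenstein(1.0), henyey_greenstein(-1.0))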