Computer Science ›› 2020, Vol. 47 ›› Issue (1): 87-95.doi: 10.11896/jsjkx.181202320

• Database & Big Data & Data Science •

Stepwise Optimized Feature Selection Algorithm Based on Discernibility Matrix and mRMR

FAN Xin1, CHEN Hong-mei2

  1. School of Information Science and Technology, Southwest Jiaotong University, Chengdu 611756, China
  2. Key Laboratory of Cloud Computing and Intelligent Technology, Southwest Jiaotong University, Chengdu 611756, China
  • Received: 2018-12-14; Published: 2020-01-19
  • About author: FAN Xin, born in 1994, postgraduate. His main research interests include data mining and pattern recognition. CHEN Hong-mei, born in 1971, Ph.D., professor, Ph.D. supervisor, is a member of the China Computer Federation (CCF). Her main research interests include rough sets, granular computing and intelligent information processing.
  • Supported by:
    This work was supported by the National Natural Science Foundation of China (61572406).

Abstract: Classification is a common problem in modern industrial production. Classification efficiency and accuracy can be improved effectively by using feature selection to filter out useful information before the classification task. By considering the maximum relevance between features and the class together with the minimum redundancy among features, the minimal-redundancy-maximal-relevance (mRMR) algorithm can effectively select a feature subset. However, mRMR has two problems: the importance of features is estimated with relatively large deviation in its middle and later stages, and the algorithm does not directly output a feature subset. To solve these problems, a novel algorithm based on the mRMR principle and the discernibility matrix of neighborhood rough sets was proposed. The significance of a feature is defined through neighborhood entropy and neighborhood mutual information under the minimal-redundancy-maximal-relevance principle, which handles mixed-type data better. A dynamic discernibility set is defined on the basis of the discernibility matrix, and its dynamic evolution is used as the policy for deleting redundant features and narrowing the search range. The optimized feature subset is output when iteration terminates under the stopping condition given by the discernibility matrix. SVM, J48, KNN and MLP were selected as classifiers to evaluate the performance of the feature selection algorithm. Experimental results on public datasets show that the average classification accuracy of the proposed algorithm is about 2% higher than that of previous algorithms, and that the proposed algorithm effectively shortens feature selection time on datasets with more features. Therefore, the proposed algorithm inherits the advantages of the discernibility matrix and mRMR, and can effectively handle feature selection problems.
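To make the mRMR criterion in the abstract concrete, the following is a minimal illustrative sketch of the classic greedy mRMR selection for discrete features, scoring each candidate by mutual-information relevance to the class minus its mean redundancy with already-selected features. This is a simplified sketch of the general criterion only, not the authors' neighborhood-entropy variant or their discernibility-matrix stopping condition; the function names are ours.

```python
import math
from collections import Counter

def mutual_information(x, y):
    """Empirical mutual information I(X;Y) in nats for two
    equal-length sequences of discrete values."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    mi = 0.0
    for (a, b), c in pxy.items():
        p_ab = c / n
        mi += p_ab * math.log(p_ab / ((px[a] / n) * (py[b] / n)))
    return mi

def mrmr_select(features, labels, k):
    """Greedy mRMR: repeatedly pick the feature index maximizing
    relevance I(f; labels) minus the mean redundancy I(f; g) over
    features g already selected. `features` is a list of columns."""
    candidates = list(range(len(features)))
    selected = []
    while len(selected) < k and candidates:
        def score(i):
            rel = mutual_information(features[i], labels)
            red = (sum(mutual_information(features[i], features[j])
                       for j in selected) / len(selected)) if selected else 0.0
            return rel - red
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected
```

As the abstract notes, on mixed numeric/nominal data the paper replaces these discrete mutual-information terms with neighborhood entropy and neighborhood mutual information, which avoid discretizing continuous features.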

Key words: Discernibility matrix, Feature selection, mRMR, Neighborhood rough set

CLC Number: TP301.6