Computer Science ›› 2014, Vol. 41 ›› Issue (6): 161-165. doi: 10.11896/j.issn.1002-137X.2014.06.031

• Artificial Intelligence •

Difference Linear Discriminant Analysis Based on Laplacian Orientations

LI Zhao-kui, DING Li-xin, WANG Yan, HE Jin-rong and ZHOU Ling-yun

  1. State Key Laboratory of Software Engineering and School of Computer Science, Wuhan University, Wuhan 430072, China; 2. School of Computer Science, Shenyang Aerospace University, Shenyang 110136, China
  • Online: 2018-11-14  Published: 2018-11-14
  • Supported by:
    National Natural Science Foundation of China (60975050, 60902053), Industry-University-Research Cooperation Special Project of Guangdong Province (2011B090400477), Industry-University-Research Cooperation Special Project of Zhuhai (2011A050101005, 2D0501990016), and Key Laboratory Science and Technology Project of Zhuhai (2012D0501990026)


CLC number: TP391  Document code: A

Abstract: The standard linear discriminant analysis (LDA) method typically suffers from three problems: 1) To ensure that the within-class scatter matrix is nonsingular, dimensionality must first be reduced by principal component analysis (PCA), which precludes the use of higher-dimensional spaces. 2) When only a single training sample per person is available, the within-class scatter matrix is necessarily singular, and LDA cannot work. 3) LDA does not consider the local correlation between pixels. To address these problems, this paper proposes a difference linear discriminant analysis based on Laplacian orientations. The Laplacian orientations yield a more robust dissimilarity measure between images, and the introduction of a difference scatter matrix avoids the singularity of the within-class scatter matrix. Experiments show that the proposed method is more robust to facial expressions, illumination changes and occlusions, and achieves higher recognition rates; the improvement is especially pronounced under illumination changes.
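The singularity described in problem 2 is easy to verify numerically. The following toy sketch (not the paper's code; it only assumes NumPy) builds the within-class scatter matrix for a dataset with one training sample per class and confirms that it is identically zero, hence singular:

```python
import numpy as np

# Toy illustration of problem 2: with a single training sample per class,
# every sample equals its own class mean, so the within-class scatter
# matrix S_w = sum_c sum_{x in class c} (x - mu_c)(x - mu_c)^T vanishes.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 10))      # 5 classes, one 10-dimensional sample each
labels = np.arange(5)             # each sample is its own class

d = X.shape[1]
Sw = np.zeros((d, d))
for c in np.unique(labels):
    Xc = X[labels == c]
    mu = Xc.mean(axis=0)
    Sw += (Xc - mu).T @ (Xc - mu)

print(np.linalg.matrix_rank(Sw))  # 0 -- S_w is the zero matrix, so the
                                  # generalized eigenproblem of LDA is unsolvable
```

Because S_w has rank 0, the Fisher criterion's inverse (or generalized eigendecomposition) is undefined, which is why the paper replaces S_w with a difference scatter matrix.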

Key words: Laplacian orientations, dimensionality reduction, linear discriminant analysis, robust dissimilarity measures
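As a rough illustration of the robust-dissimilarity idea, the sketch below compares the gradient-orientation fields of Laplacian-filtered images. It is a hypothetical reading of "Laplacian orientations", not the paper's exact definition, and assumes only NumPy; the point is that a bounded cosine penalty caps each pixel's contribution, so occluded or over-exposed regions cannot dominate the measure the way they do under squared error:

```python
import numpy as np

def laplacian_orientations(img):
    """Orientation field of a discrete Laplacian response -- one plausible
    reading of the paper's 'Laplacian orientations' (hypothetical sketch)."""
    img = np.asarray(img, dtype=float)
    # 5-point Laplacian via circular shifts (periodic boundary for brevity)
    lap = (np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1) - 4.0 * img)
    gy, gx = np.gradient(lap)      # per-pixel gradient of the Laplacian response
    return np.arctan2(gy, gx)      # orientation angle at each pixel

def orientation_dissimilarity(img1, img2):
    """Robust dissimilarity: mean of 1 - cos(orientation difference).
    Each pixel contributes at most 2, so a few corrupted pixels cannot
    dominate the measure."""
    diff = laplacian_orientations(img1) - laplacian_orientations(img2)
    return float(np.mean(1.0 - np.cos(diff)))
```

Identical images give dissimilarity 0, and the measure stays bounded in [0, 2] no matter how severely individual pixels are corrupted, which matches the intuition behind the robustness claims in the abstract.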

