Computer Science ›› 2022, Vol. 49 ›› Issue (11A): 211000131-6. doi: 10.11896/jsjkx.211000131

• Artificial Intelligence •


Multi-view Distance Metric Learning with Inter-class and Intra-class Density

REN Shuang-yan1, GUO Wei1, FAN Chang-qi2, WANG Zhe1, WU Song-yang3   

  1. School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
    2. Shanghai Mobile Internet Industry Promotion Center, Shanghai 200333, China
    3. The Third Research Institute of the Ministry of Public Security, Shanghai 201204, China
  • Online: 2022-11-10  Published: 2022-11-21
  • Corresponding author: WANG Zhe (wangzhe@ecust.edu.cn)
  • About author: REN Shuang-yan, born in 1997, postgraduate (shuangyan_ren99@163.com). Her main research interests include pattern recognition and machine learning.
    WANG Zhe, born in 1981, Ph.D, associate professor, is a member of China Computer Federation. His main research interests include pattern recognition and image processing.
  • Supported by:
    Shanghai Science and Technology Program (20511100600), National Natural Science Foundation of China (62076094) and Open Project of the Key Lab of Information Network Security of the Ministry of Public Security (The Third Research Institute of the Ministry of Public Security) (C20603).


Abstract: Geometric information can provide prior knowledge and an intuitive explanation for classification methods. Observing samples from a geometric perspective is a novel approach to sample learning, and density is a particularly intuitive form of geometric information. This paper proposes a multi-view distance metric learning method with inter-class and intra-class density that learns a metric space in which heterogeneous samples are more scattered and homogeneous samples lie closer together. First, the inter-class density is introduced under the large-margin framework, and the samples in the metric space are constrained by minimizing this density, so as to achieve inter-class dispersion and improve classification performance. Second, the intra-class density is introduced and maximized so that samples of the same class are pulled toward each other, achieving intra-class compactness. Finally, to better mine the complementary information of multi-view samples, the correlation between views in the metric space is maximized, allowing the views to learn from each other adaptively and to exploit the complementary information among them. Extensive experimental results on real-world datasets demonstrate the superiority of the proposed method.
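To make the structure of the objective described in the abstract concrete, a schematic sketch is given below. It is only an illustrative formulation assembled from the abstract's wording: the per-view Mahalanobis metrics M_v, the density terms ρ_inter and ρ_intra, the view-correlation term corr(·,·) and the trade-off weights λ, γ are assumed notation, not the paper's exact definitions.

\[
\min_{\{M_v \succeq 0\}_{v=1}^{V}}\;
\sum_{v=1}^{V}\Big(\rho_{\mathrm{inter}}(M_v)\;-\;\lambda\,\rho_{\mathrm{intra}}(M_v)\Big)
\;-\;\gamma \sum_{v \neq w}\operatorname{corr}(M_v, M_w),
\qquad
d_{M_v}\!\big(x_i^{v}, x_j^{v}\big)=\big(x_i^{v}-x_j^{v}\big)^{\top} M_v\,\big(x_i^{v}-x_j^{v}\big),
\]

where minimizing ρ_inter spreads samples of different classes apart under a large-margin constraint, maximizing ρ_intra (hence the minus sign) draws samples of the same class together, and the last term rewards agreement between the per-view metrics so that complementary information is shared across views.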

Key words: Geometric information, Inter-class density, Intra-class density, Complementary information, View correlation

CLC Number: TP181