Computer Science ›› 2014, Vol. 41 ›› Issue (12): 160-163.doi: 10.11896/j.issn.1002-137X.2014.12.034


Multi-label Learning Algorithm Based on Granular Computing

ZHAO Hai-feng,YU Qiang and CAO Yu-dan   

Online: 2018-11-14    Published: 2018-11-14

Abstract: Multi-label learning deals with the problem in which each instance is associated with multiple labels. The existing lazy-learning-based multi-label algorithm IMLLA does not fully consider the distribution of instances: when building the nearest-neighbor set of each instance, the number of neighbors is fixed at a constant k. This may exclude instances with higher similarity from the nearest-neighbor set, or include instances with lower similarity in it, which degrades the performance of the classifier. In this article, an improved multi-label lazy learning algorithm combined with the idea of granular computing was proposed. The nearest-neighbor set of each instance is built by controlling the granularity, so that the instances in each nearest-neighbor set exhibit high similarity. Experimental results show that the performance of the proposed algorithm is superior to that of IMLLA.
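The core idea of the abstract, replacing a fixed-k neighbor set with one whose size adapts to the local similarity structure, can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the granule radius derived from the median distance, the function names (`granular_neighbors`, `predict_labels`), and the majority-vote labeling rule are all assumptions made for the sketch, in the spirit of lazy multi-label methods such as ML-kNN/IMLLA.

```python
import numpy as np

def granular_neighbors(X, i, radius_scale=1.0):
    """Granularity-controlled neighbor set (hypothetical rule): instead of a
    fixed k, include every training instance whose distance to X[i] falls
    within a granule radius derived from the local distance distribution."""
    d = np.linalg.norm(X - X[i], axis=1)
    d[i] = np.inf  # exclude the instance itself
    radius = radius_scale * np.median(d[np.isfinite(d)])
    return np.where(d <= radius)[0]

def predict_labels(X, Y, x_new, radius_scale=1.0):
    """Lazy multi-label prediction sketch: a label is assigned if a majority
    of the (variable-size) granular neighbor set carries it."""
    d = np.linalg.norm(X - x_new, axis=1)
    radius = radius_scale * np.median(d)
    nbrs = np.where(d <= radius)[0]
    if len(nbrs) == 0:            # fall back to the single closest instance
        nbrs = np.array([np.argmin(d)])
    votes = Y[nbrs].mean(axis=0)  # fraction of neighbors carrying each label
    return (votes >= 0.5).astype(int)
```

Note that, unlike a fixed-k rule, two instances sitting in regions of different density here receive neighbor sets of different sizes, which is exactly the effect the granularity control is meant to achieve.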

Key words: K-nearest-neighbor, Multi-label learning, Lazy learning, IMLLA, Granular computing

[1] McCallum A. Multi-label text classification with a mixture model trained by EM[C]∥AAAI'99 Workshop on Text Learning, 1999:1-7
[2] Elisseeff A, Weston J. A kernel method for multi-labelled classification[C]∥Advances in Neural Information Processing Systems. Cambridge, MA: MIT Press, 2002:681-687
[3] Wang C, Yan S, Zhang L. Multi-label sparse coding for automatic image annotation[C]∥Computer Vision and Pattern Recognition, 2009:1643-1650
[4] Tsoumakas G. Multi-label classification[J]. International Journal of Data Warehousing & Mining, 2007, 3(3):1-13
[5] Streich A, Buhmann J. Classification of multi-labeled data: a generative approach[C]∥Proceedings of the 19th European Conference on Machine Learning and the 11th Principles and Practice of Knowledge Discovery in Databases, 2008:390-405
[6] Tang Jin, Huang Li-li, Zhao Hai-feng. Multi-label classification algorithm using adaptive linear regression[J]. Journal of South China University of Technology: Natural Science Edition, 2012, 0(9):69-74
[7] Schapire R E, Singer Y. BoosTexter: A boosting-based system for text categorization[J]. Machine Learning, 2000, 39(2/3):135-168
[8] Hotho A, Maedche A, Staab S. Ontologies improve text document clustering[C]∥Proc. of the IEEE International Conference on Data Mining. Melbourne, Australia, 2003:542-544
[9] Freund Y, Schapire R E. A decision-theoretic generalization of on-line learning and an application to boosting[C]∥Lecture Notes in Computer Science 904. Berlin: Springer, 1995:23-37
[10] Zhang Min-ling, Zhou Zhi-hua. Multi-label neural networks with applications to functional genomics and text categorization[J]. IEEE Transactions on Knowledge and Data Engineering, 2006, 18(10):1338-1351
[11] Zhang Min-ling, Zhou Zhi-hua. ML-kNN: A lazy learning approach to multi-label learning[J]. Pattern Recognition, 2007, 40(7):2038-2048
[12] Zhang Min-ling. An improved multi-label lazy learning approach[J]. Journal of Computer Research and Development, 2012, 49(11):2271-2282
[13] Zhang Ling, Zhang Bo. Theory and Applications of Problem Solving: Quotient Space Based Granular Computing Theory and Applications (2nd edition)[M]. Beijing: Tsinghua University Press, 2007
