Computer Science ›› 2016, Vol. 43 ›› Issue (1): 85-88. doi: 10.11896/j.issn.1002-137X.2016.01.020

• 5th National Conference on Intelligent Information Processing •


  1. School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044, China
  • Online: 2018-12-01 Published: 2018-12-01
  • Supported by:
    This work was supported by the Beijing Higher Education "Young Elite Teacher Project" (YETP0541, YETP0546), the Fundamental Research Funds for the Central Universities (2015JBM036), and the National Natural Science Foundation of China (61273364, 61473031, 61472029).

Classification Model of Visual Attention Based on Eye Movement Data

WANG Feng-jiao, TIAN Mei, HUANG Ya-ping and AI Li-hua   



Abstract: Visual attention is an important part of the human visual system. Most existing visual attention models emphasize bottom-up attention, give little consideration to top-down semantics, and are rarely tailored to particular categories of images. Eye-tracking technology can capture a subject's focus of attention objectively and accurately, but it is still seldom applied in visual attention modeling. We therefore propose a classification model of visual attention (CMVA) that combines bottom-up and top-down factors: for each category of images, it trains a category-specific model on eye movement data to predict visual saliency. Experimental results show that CMVA outperforms eight existing visual attention models.
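The abstract describes the pipeline only at a high level: a per-category classifier is trained on eye movement data to separate fixated from non-fixated image locations, and the classifier's output is read as a saliency map. The paper's actual features and classifier settings are not given here, so the following is a minimal sketch assuming a linear SVM over per-location features, with synthetic fixation labels standing in for real eye-tracking data; the feature set and all names are illustrative, not from the paper.

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_category_model(features, fixated):
    """Fit one saliency classifier for one image category:
    positives are fixated locations, negatives are non-fixated ones."""
    clf = LinearSVC(C=1.0)
    clf.fit(features, fixated)
    return clf

def predict_saliency(clf, features, shape):
    """Turn SVM decision values into a saliency map normalised to [0, 1]."""
    scores = clf.decision_function(features)
    scores = (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)
    return scores.reshape(shape)

# Toy stand-in for one image category: fixations cluster at the centre
# of a 16x16 grid (mimicking centre bias), and each location gets a
# 2-D feature vector (distance to centre + noise), illustrative only.
rng = np.random.default_rng(0)
h = w = 16
ys, xs = np.mgrid[0:h, 0:w]
dist = np.hypot(ys - h / 2, xs - w / 2)
fixated = (dist < 4).astype(int).ravel()
features = np.stack([dist.ravel(), rng.normal(size=h * w)], axis=1)

clf = train_category_model(features, fixated)
sal = predict_saliency(clf, features, (h, w))
assert sal[h // 2, w // 2] > sal[0, 0]  # centre predicted more salient
```

In the full model, one such classifier would be trained per image category, and a test image's category would select which classifier produces its saliency map.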

Key words: Visual attention, Visual saliency, Classification model, Bottom-up, Top-down

