Computer Science ›› 2018, Vol. 45 ›› Issue (1): 24-28.doi: 10.11896/j.issn.1002-137X.2018.01.003


Personalized Affective Video Content Analysis: A Review

ZHANG Li-gang and ZHANG Jiu-long   

Online: 2018-01-15    Published: 2018-11-13

Abstract: Personalized affective video content analysis is an emerging research field which aims to provide personalized video recommendations to an individual viewer, tailored to his/her personal preferences or interests. However, a review of recent progress on the development of approaches in this field is still lacking. This paper presented a review of state-of-the-art approaches towards building automatic systems for personalized affective video content analysis from three perspectives: audio-visual features in video content (e.g. light, color, and motion), physiological response signals from viewers (e.g. facial expression, body gesture, and pose), and personalized recommendation techniques. It discussed the advantages and disadvantages of existing approaches, and highlighted several challenges and issues that may need to be overcome in future work.

Key words: Affective video content analysis, Personalized recommendation, Viewer interest, Review
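As a concrete illustration of the first of the three perspectives above, low-level audio-visual cues such as lighting, color, and motion are typically summarized into simple per-clip statistics before being mapped to affect dimensions (e.g. arousal and valence). The sketch below is an illustrative example, not drawn from any surveyed paper; the function name is hypothetical, and frame differencing is used as a crude stand-in for the optical-flow motion features used in the literature. It computes mean brightness, Hasler-Süsstrunk colorfulness, and a motion score with NumPy:

```python
import numpy as np

def affective_features(frames):
    """Summarize a clip into simple affect-related statistics.

    `frames` is an array of shape (T, H, W, 3) holding RGB values in [0, 1].
    Returns mean brightness (lighting), Hasler-Suesstrunk colorfulness,
    and a crude motion score (mean absolute frame difference).
    """
    frames = np.asarray(frames, dtype=np.float64)
    brightness = frames.mean()  # overall lighting level in [0, 1]

    # Colorfulness on opponent channels rg = R - G, yb = (R + G)/2 - B
    r, g, b = frames[..., 0], frames[..., 1], frames[..., 2]
    rg, yb = r - g, 0.5 * (r + g) - b
    colorfulness = (np.sqrt(rg.std() ** 2 + yb.std() ** 2)
                    + 0.3 * np.sqrt(rg.mean() ** 2 + yb.mean() ** 2))

    # Motion: frame differencing instead of optical flow (illustrative only)
    motion = np.abs(np.diff(frames, axis=0)).mean() if len(frames) > 1 else 0.0

    return {"brightness": brightness, "colorfulness": colorfulness, "motion": motion}

# Toy clip: 8 slightly noisy frames that brighten over time
rng = np.random.default_rng(0)
ramp = 0.2 + 0.01 * np.arange(8)[:, None, None, None]
clip = np.clip(ramp + 0.05 * rng.standard_normal((8, 16, 16, 3)), 0.0, 1.0)
print(affective_features(clip))
```

Features of this kind form only the content side; the personalized approaches surveyed in the paper additionally condition on viewer responses or profiles when predicting per-viewer affect.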

[1] WANG S,JI Q.Video Affective Content Analysis:A Survey of State-of-the-Art Methods[J].IEEE Transactions on Affective Computing,2015,6:410-430.
[2] BAVEYE Y,CHAMARET C,DELLANDREA E,et al.Affective Video Content Analysis:A Multidisciplinary Insight[J].IEEE Transactions on Affective Computing,2017,PP(99):1.
[3] DARWIN C.The expression of the emotions in man and animals[M].Oxford England:Philosophical Library,1955.
[4] EKMAN P.Strong Evidence for Universals in Facial Expressions:A Reply to Russell's Mistaken Critique[J].Psychological Bulletin,1994,115:268-287.
[5] PICARD R W.Affective Computing[M].Cambridge:The MIT Press,1997.
[6] HANJALIC A,LIQUN X.Affective video content representation and modeling[J].IEEE Transactions on Multimedia,2005,7:143-154.
[7] ARAPAKIS I,ATHANASAKOS K,JOSE J M.A comparison of general vs personalised affective models for the prediction of topical relevance[C]∥The 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval.2010:371-378.
[8] ZHANG S,HUANG Q,TIAN Q,et al.Personalized MTV Affective Analysis Using User Profile[C]∥Advances in Multimedia Information Processing-PCM.2008:327-337.
[9] PANTIC M,PENTLAND A,NIJHOLT A,et al.Human Computing and Machine Understanding of Human Behavior:A Survey[C]∥Artificial Intelligence for Human Computing.2007:47-71.
[10] JOHO H,STAIANO J,SEBE N,et al.Looking at the viewer:analysing facial activity to detect personal highlights of multimedia contents[J].Multimedia Tools and Applications,2011,51:505-523.
[11] IRIE G,SATOU T,KOJIMA A,et al.Affective Audio-Visual Words and Latent Topic Driving Model for Realizing Movie Affective Scene Classification[J].IEEE Transactions on Multimedia,2010,12:523-535.
[12] JANA M,ALLAN H.Affective image classification using features inspired by psychology and art theory[C]∥Proceedings of the International Conference on Multimedia.2010:83-92.
[13] CANINI L,BENINI S,LEONARDI R.Affective Recommendation of Movies Based on Selected Connotative Features[J].IEEE Transactions on Circuits and Systems for Video Technology,2013,23:636-647.
[14] JIANCHAO W,BING L,WEIMING H,et al.Horror movie scene recognition based on emotional perception[C]∥17th IEEE International Conference on Image Processing (ICIP).2010:1489-1492.
[15] SHILIANG Z,QI T,SHUQIANG J,et al.Affective MTV analysis based on arousal and valence features[C]∥IEEE International Conference on Multimedia and Expo.2008:1369-1372.
[16] XU M,XU C,HE X,et al.Hierarchical affective content analysis in arousal and valence dimensions[J].Signal Processing,2013,93:2140-2150.
[17] NIU J,ZHAO X,ZHU L,et al.Affivir:An affect-based Internet video recommendation system[J].Neurocomputing,2013,120:422-433.
[18] EGGINK J.Evaluation of a mood-based graphical user interface for accessing TV archives[C]∥ IEEE International Conference on Multimedia and Expo Workshops (ICMEW).2013:1-6.
[19] BAVEYE Y,DELLANDREA E,CHAMARET C,et al.Deep learning vs. kernel methods:Performance for emotion prediction in videos[C]∥International Conference on Affective Computing and Intelligent Interaction (ACII).2015:77-83.
[20] SHILIANG Z,QINGMING H,SHUQIANG J,et al.Affective Visualization and Retrieval for Music Video[J].IEEE Transactions on Multimedia,2010,12:510-522.
[21] SOLEYMANI M,PANTIC M.Multimedia implicit tagging using EEG signals[C]∥IEEE International Conference on Multimedia and Expo (ICME).2013:1-6.
[22] MONEY A G,AGIUS H.Analysing user physiological responses for affective video summarisation[J].Displays,2009,30:59-70.
[23] KATTI H,YADATI K,KANKANHALLI M,et al.Affective Video Summarization and Story Board Generation Using Pupillary Dilation and Eye Gaze[C]∥IEEE International Symposium on Multimedia (ISM).2011:319-326.
[24] YAZDANI A,LEE J S,VESIN J M,et al.Affect recognition based on physiological changes during the watching of music videos[J].ACM Trans.Interact.Intell.Syst.,2012,2:1-26.
[25] NASOZ F,ALVAREZ K,LISETTI C,et al.Emotion recognition from physiological signals using wireless sensors for presence technologies[J].Cognition,Technology & Work,2004,6(1):4-14.
[26] MEHMOOD I,SAJJAD M,RHO S,et al(1)Divide-and-conquer based summarization framework for extracting affective video content[J].Neurocomputing,2016,174:393-403.
[27] SOLEYMANI M,PANTIC M.Human-centered implicit tagging:Overview and perspectives[C]∥IEEE International Conference on Systems,Man,and Cybernetics (SMC).2012:3304-3309.
[28] MEHRABIAN A.Communication without words[J].Psycho-logy Today,1968,2(9):52-55.
[29] JOHO H,JOSE J M,VALENTI R,et al.Exploiting facial expressions for affective video summarisation[C]∥Proceedings of the ACM International Conference on Image and Video Retrieval.2009:1-8.
[30] JIAO J,PANTIC M.Implicit image tagging via facial information[C]∥Proceedings of the 2nd International Workshop on Social Signal Processing.2010:59-64.
[31] TKALČIČ M,ODIĆ A,KOŠIR A,et al.Impact of Implicit and Explicit Affective Labeling on a Recommender System's Performance[C]∥International Conference on Advances in User Modeling.Springer-Verlag,2012:342-354.
[32] PENG W T,CHU W T,CHANG C H,et al.Editing by Viewing:Automatic Home Video Summarization by Viewing Behavior Analysis[J].IEEE Transactions on Multimedia,2011,13(3):539-550.
[33] ZHAO S,YAO H,SUN X.Video classification and recommendation based on affective analysis of viewers[J].Neurocomputing,2013,119:101-110.
[34] WANG S,LIU Z,ZHU Y,et al.Implicit video emotion tagging from audiences' facial expression[C]∥Multimedia Tools and Applications.2014:1-28.
[35] ARAPAKIS I,KONSTAS I,JOSE J M.Using facial expressions and peripheral physiological signals as implicit indicators of topical relevance[C]∥Proceedings of the 17th ACM International Conference on Multimedia.2009:461-470.
[36] SOLEYMANI M,PANTIC M,PUN T.Multimodal Emotion Recognition in Response to Videos[J].IEEE Transactions on Affective Computing,2012,3:211-223.
[37] SOLEYMANI M,CHANEL G,KIERKELS J J M,et al.Affective ranking of movie scenes using physiological signals and content analysis[C]∥Proceedings of the 2nd ACM Workshop on Multimedia Semantics.2008:32-39.
[38] WANG S,ZHU Y,WU G,et al.Hybrid video emotional tagging using users' EEG and video content[C]∥Multimedia Tools and Applications.2013:1-27.
[39] ARAPAKIS I,MOSHFEGHI Y,JOHO H,et al.Integrating facial expressions into user profiling for the improvement of a multimodal recommender system[C]∥IEEE International Conference on Multimedia and Expo.2009:1440-1443.
[40] BENINI S,CANINI L,LEONARDI R.A Connotative Space for Supporting Movie Affective Recommendation[J].IEEE Transactions on Multimedia,2011,13:1356-1370.
[41] SOLEYMANI M,LICHTENAUER J,PUN T,et al.A Multimodal Database for Affect Recognition and Implicit Tagging[J].IEEE Transactions on Affective Computing,2012,3:42-55.
[42] KOELSTRA S,MUHL C,SOLEYMANI M,et al.DEAP:A Database for Emotion Analysis Using Physiological Signals[J].IEEE Transactions on Affective Computing,2012,3:18-31.
[43] BAVEYE Y,DELLANDREA E,CHAMARET C,et al.LIRIS-ACCEDE:A Video Database for Affective Content Analysis[J].IEEE Transactions on Affective Computing,2015,6:43-55.
[44] ABADI M K,SUBRAMANIAN R,KIA S M,et al.DECAF:MEG-Based Multimodal Database for Decoding Affective Physiological Responses[J].IEEE Transactions on Affective Computing,2015,6:209-222.
[45] PORIA S,CAMBRIA E,BAJPAI R,et al.A review of affective computing:From unimodal analysis to multimodal fusion[J].Information Fusion,2017,37:98-125.
[46] ZHU Y,JIANG Z,PENG J,et al.Video Affective Content Analysis Based on Protagonist via Convolutional Neural Network[C]∥Pacific Rim Conference on Multimedia:Advances in Multimedia Information Processing.2016:170-180.
[47] YU G,LI X,SONG D,et al.Encoding physiological signals as images for affective state recognition using convolutional neural networks[C]∥38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC).2016:812-815.
