Computer Science ›› 2021, Vol. 48 ›› Issue (11): 312-318.doi: 10.11896/jsjkx.200900088

• Artificial Intelligence •

Construction and Application of Russian Multimodal Emotion Corpus

XU Lin-hong1, LIU Xin1, YUAN Wei2, QI Rui-hua1   

  1 Research Center for Language Intelligence, Dalian University of Foreign Languages, Dalian, Liaoning 116044, China
    2 Information Engineering University, Luoyang, Henan 471003, China
  • Received:2020-09-10 Revised:2021-03-26 Online:2021-11-15 Published:2021-11-10
  • About author: XU Lin-hong, born in 1979, associate professor. Her main research interests include natural language processing and sentiment analysis.
  • Supported by:
    Ministry of Education Humanities and Social Science Project (18YJCZH208) and National Natural Science Foundation of China (61806038, 61772103).

Abstract: Russian multimodal sentiment analysis, a research hotspot in the field of emotion analysis, can automatically analyze and identify emotions from rich information such as text, voice and image, which helps to track public-opinion hotspots in Russian-speaking countries and regions in a timely manner. However, only a few multimodal emotion corpora exist for Russian, which limits the further development of Russian emotion analysis technology. Based on an analysis of related research on multimodal emotion corpora and of emotion classification methods, this paper develops a scientific and complete tagging system that covers 11 items of utterance, spatio-temporal and emotion information. Throughout corpus construction and quality control, the paper follows the principles of emotional subject and emotional continuity, formulates a highly operational annotation specification, and builds a large-scale Russian emotion corpus. Finally, it discusses applications of the corpus in analyzing emotional expression characteristics, analyzing personality characteristics, and building emotion recognition models.
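The abstract mentions quality control during annotation; for corpora of this kind, inter-annotator agreement is commonly checked with Cohen's kappa, which corrects raw agreement for chance. The paper does not publish its quality-control procedure, so the following is only a minimal sketch; the emotion labels and annotator data are hypothetical.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa between two annotators, corrected for chance agreement."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labelled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each annotator's marginal label distribution.
    ca, cb = Counter(labels_a), Counter(labels_b)
    p_e = sum(ca[label] * cb[label] for label in ca) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical emotion labels from two annotators on six utterances.
a1 = ["joy", "anger", "neutral", "joy", "sadness", "neutral"]
a2 = ["joy", "anger", "neutral", "sadness", "sadness", "neutral"]
print(round(cohens_kappa(a1, a2), 3))  # prints 0.778
```

A kappa value above roughly 0.8 is conventionally read as strong agreement; values well below that would prompt revising the annotation specification before large-scale labelling.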

Key words: Corpus, Multimodal, Russian, Sentiment analysis

CLC Number: TP393