Computer Science ›› 2024, Vol. 51 ›› Issue (10): 112-118. doi: 10.11896/jsjkx.240400118
张立国, 徐鑫, 董宇欣
ZHANG Liguo, XU Xin, DONG Yuxin
Abstract: Through facial expression recognition and emotion analysis, an observer can infer a learner's learning outcomes from the observed state of the subject. For example, the emotional fluctuations that students display in the classroom can reveal how well they are absorbing new knowledge, giving the observer a more convenient and intuitive picture of where students are confused. In many situations, however, a student's face may be occluded by study materials, classmates seated in front, and the like, which lowers the accuracy of face-based emotion recognition. Compared with the whole face, the eye region is a core site of emotional expression: it usually attracts more of an observer's attention, and in the same classroom setting it is less likely to be occluded. The eyes are among the most important channels of emotional expression, and changes in eye expression during emotional shifts carry additional emotional information; in particular, when a person is under external pressure and must suppress facial expressions, the eyes are hard to fake. Recognizing and analyzing complex emotions from eye expressions therefore has significant research value and poses real challenges. To address this challenge, this paper first constructs a dataset for classifying complex emotions in eye expressions, covering five basic emotions plus five additionally defined complex emotions. Second, it proposes a novel model that accurately classifies emotions from the eye features extracted from input images in the dataset. Finally, it introduces a visualization method for eye-based emotion analysis that can track the fluctuations of both complex and basic emotions, offering a new solution for further eye-based emotion analysis.
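The pipeline summarized above begins by isolating the eye region from an input face image before extracting features for classification. The paper does not specify this step in the abstract, so the following is only a minimal sketch of the common approach: given facial landmark points around the eyes (from any landmark detector), compute a padded bounding box to crop the eye region. The landmark coordinates and the margin ratio here are illustrative assumptions, not the authors' method.

```python
def eye_region_bbox(landmarks, margin=0.25):
    """Compute a padded bounding box around eye landmark points.

    landmarks: iterable of (x, y) points covering both eyes
               (e.g., from a facial landmark detector).
    margin: fraction of the box width/height added on each side,
            so the crop keeps some context around the eyes.
    Returns (x0, y0, x1, y1) as integers, suitable for image cropping.
    """
    xs = [p[0] for p in landmarks]
    ys = [p[1] for p in landmarks]
    x0, x1 = min(xs), max(xs)
    y0, y1 = min(ys), max(ys)
    pad_x = (x1 - x0) * margin
    pad_y = (y1 - y0) * margin
    return (int(x0 - pad_x), int(y0 - pad_y),
            int(x1 + pad_x), int(y1 + pad_y))


# Example with six synthetic landmark points spanning both eyes
pts = [(100, 60), (130, 58), (160, 61), (200, 60), (230, 59), (260, 62)]
print(eye_region_bbox(pts))  # → (60, 57, 300, 63)
```

The resulting crop would then be resized and fed to the classification model; the margin keeps eyebrow and eye-corner context, which carries much of the expression signal the abstract refers to.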