Computer Science ›› 2024, Vol. 51 ›› Issue (10): 112-118.doi: 10.11896/jsjkx.240400118

• Technology and Application of Intelligent Education •

Eye Emotion Recognition and Visualization in Smart Classrooms Based on ConvNeXt

ZHANG Liguo, XU Xin, DONG Yuxin   

  1. College of Computer Science and Technology,Harbin Engineering University,Harbin 150001,China
  • Received: 2024-04-17  Revised: 2024-07-08  Online: 2024-10-15  Published: 2024-10-11
  • About author:ZHANG Liguo,born in 1981,Ph.D,professor,Ph.D supervisor,is a member of CCF(No.57424S).His main research interests include artificial intelligence,digital image processing and multimedia data processing.
    DONG Yuxin,born in 1974,Ph.D,professor,Ph.D supervisor,is a member of CCF(No.09158M).Her main research interests include machine learning,computer vision,big data and intelligent applications.
  • Supported by:
    Fundamental Research Funds for the Central Universities of Ministry of Education of China and Harbin Engineering University Education Reform Program:Design of Development Planning for the Scale and Structure of Multidisciplinary Collaborative Mentor Teams(3072024XX0601).

Abstract: Through facial expression recognition and emotion analysis, observers can infer learners' learning outcomes from their observable physical states. For instance, fluctuations in the emotions students display in the classroom can reveal how well they are accepting new knowledge, offering a more convenient and intuitive window into their confusion. In many cases, however, students' faces may be obstructed by learning materials, classmates in the front row, and so on, which undermines the accuracy of facial emotion recognition. Compared with the whole face, the eye region, as a core area of emotional expression, typically draws more of an observer's attention, and in the same classroom environment the eyes are less likely to be obstructed. The eyes are among the most important channels for displaying emotion, and changes in eye expression during emotional fluctuations carry additional emotional information. In particular, when a person is under external pressure and tends to suppress facial expressions, the gaze remains difficult to disguise. Recognizing and analyzing emotions from complex eye expressions therefore holds significant research value and poses real challenges. To address this challenge, a dataset for classifying complex emotions in eye expressions is first constructed, covering five basic emotions and additionally defining five complex emotions. Second, a novel model is proposed to accurately classify emotions from the eye features extracted from input images in the dataset. Finally, a visualization method for eye-based emotion analysis is introduced, which can track fluctuations in both complex and basic emotions. Together, these contributions provide a new solution for further eye-based emotion analysis.
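The abstract describes a visualization method that tracks fluctuations in basic and complex emotions over time; the paper's actual implementation is not given here. As a minimal, standard-library-only sketch of what such a temporal stage might look like, the hypothetical `smooth_scores` function below applies a moving average to per-frame emotion probabilities (as a classifier might emit for each video frame) so that short-lived spikes are damped and longer emotional trends become visible for plotting. The function name, the edge handling, and the window size are illustrative assumptions, not details from the paper.

```python
from collections import deque

# Five basic emotion labels, as an illustrative placeholder; the paper's
# dataset also defines five complex emotions not listed here.
EMOTIONS = ["happy", "sad", "angry", "surprised", "neutral"]

def smooth_scores(frame_scores, window=5):
    """Moving-average smoothing of per-frame probabilities for one emotion.

    frame_scores: iterable of floats in [0, 1], one per video frame.
    Returns a list of the same length. Early frames are averaged over the
    frames seen so far (an assumed edge-handling choice).
    """
    buf = deque(maxlen=window)
    smoothed = []
    for s in frame_scores:
        buf.append(s)
        smoothed.append(sum(buf) / len(buf))
    return smoothed

# Example: a one-frame spike is damped but still visible in the curve.
raw = [0.1, 0.1, 0.9, 0.1, 0.1]
curve = smooth_scores(raw, window=3)
```

A curve like `curve` could then be drawn per emotion to visualize how a student's state fluctuates across a lesson; a larger `window` trades responsiveness for stability.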

Key words: Smart classroom, ConvNeXt, Data visualization, Emotion recognition, Eye region

CLC Number: TP183
[1]ZENG H P,SHU X H,WANG Y B,et al.EmotionCues:Emotion-Oriented Visual Summarization of Classroom Videos[J].IEEE Transactions on Visualization and Computer Graphics,2021,27(7):3168-3181.
[2]PEKRUN R,LINNENBRINK-GARCIA L.International Handbook of Emotions in Education[M]//Educational Psychology Handbook Series.New York:Routledge,Taylor & Francis Group,2014.
[3]BOUHLAL M,AARIKA K,ABDELOUAHID R A,et al.Emotions recognition as innovative tool for improving students' performance and learning approaches[J].Procedia Computer Science,2020,175:597-602.
[4]TRACY J L,RANDLES D,STECKLER C M.The Nonverbal Communication of Emotions[J].Current Opinion in Behavioral Sciences,2015,3:25-30.
[5]DAVISON A K,LANSLEY C,COSTEN N,et al.SAMM:A Spontaneous Micro-Facial Movement Dataset[J].IEEE Transactions on Affective Computing,2018,9(1):116-129.
[6]SHU L,XIE J,YANG M,et al.A Review of Emotion Recognition Using Physiological Signals[J].Sensors,2018,18(7):2074.
[7]LEWIS M.Handbook of emotions[M].New York:GuilfordPress,2008.
[8]RANGANATHAN H,CHAKRABORTY S,PANCHANATHAN S.Multimodal Emotion Recognition Using Deep Learning Architectures[C]//2016 IEEE Winter Conference on Applications of Computer Vision(WACV).2016:1-9.
[9]ZHANG S,WANG X,ZHANG G,et al.Multimodal Emotion Recognition Integrating Affective Speech with Facial Expression[J].IEEE Transactions on Visualization and Computer Graphics,2014,10:526-537.
[10]ZHANG C,NI S F,FAN Z P.3D Talking Face With Persona-lized Pose Dynamics[J].IEEE Transactions on Visualization and Computer Graphics,2023,29(2):1438-1449.
[11]GOODFELLOW I J,ERHAN D,CARRIER P L.Challenges in Representation Learning:A Report on Three Machine Learning Contests[C]//International Conference on Neural Information Processing.Berlin,Heidelberg:Springer,2013:117-124.
[12]YANG D,ALSADOON A,PRASAD P W C,et al.An Emotion Recognition Model Based on Facial Recognition in Virtual Learning Environment[J].Procedia Computer Science,2018,125:2-10.
[13]TAUTKUTE I,TRZCINSKI T,BIELSKI A.I Know How You Feel:Emotion Recognition with Facial Landmarks [C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops(CVPRW).2018.
[14]JYOTI S,SHARMA G,DHALL A.Expression Empowered ResiDen Network for Facial Action Unit Detection[C]//IEEE International Conference on Automatic Face & Gesture Recognition(FG 2019).Lille,France:IEEE,2019:1-8.
[15]LEE J,KIM S,KIM S,et al.Context-Aware Emotion Recognition Networks[C]//2019 IEEE/CVF International Conference on Computer Vision(ICCV).IEEE,2020.
[16]KAMBLE K,SENGUPTA J.A Comprehensive Survey on Emotion Recognition Based on Electroencephalograph(EEG) Signals[J].Multimedia Tools and Applications,2023,82(18):27269-27304.
[17]SUN L,LIAN Z,LIU B,et al.MAE-DFER:Efficient Masked Autoencoder for Self-supervised Dynamic Facial Expression Recognition [C]//Proceedings of the 31st ACM International Conference on Multimedia.Ottawa,ON,Canada:ACM,2023:6110-6121.
[18]KING D E.Dlib-ml:A Machine Learning Toolkit[J].Journal of Machine Learning Research,2009,10(3):1755-1758.
[19]TERZIS V,MORIDIS C N,ECONOMIDES A A.Measuring Instant Emotions During a Self-assessment Test:The Use of Face-Reader[C]//Proceedings of the 7th International Conference on Methods and Techniques in Behavioral Research.2010.
[20]PAN T,YE Y,ZHANG Y,et al.Online Multi-hypergraph Fusion Learning for Cross-subject Emotion Recognition [J].Information Fusion,2024,108:102338.
[21]LUCEY P,COHN J F,KANADE T,et al.The Extended Cohn-Kanade Dataset(CK+):A Complete Dataset for Action Unit and Emotion-specified Expression [C]//2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition-Workshops.San Francisco,CA,USA:IEEE,2010:94-101.
[22]PANTIC M,VALSTAR M,RADEMAKER R,et al.Web-Based Database for Facial Expression Analysis[C]//2005 IEEE International Conference on Multimedia and Expo.Amsterdam,The Netherlands:IEEE,2005:317-321.
[23]LIU S,HAO J.Generating Talking Face With Controllable Eye Movements by Disentangled Blinking Feature[J].IEEE Transactions on Visualization and Computer Graphics,2023,29(12):5050-5061.
[24]RUSSELL J A,MEHRABIAN A.Evidence for a Three-Factor Theory of Emotions [J].Journal of Research in Personality,1977,11(3):273-294.
[25]SUN K,YU J,HUANG Y,et al.An Improved Valence-Arousal Emotion Space for Video Affective Content Representation and Recognition [C]//2009 IEEE International Conference on Multimedia and Expo.New York,NY,USA:IEEE,2009.
[26]ZHANG K,ZHANG Z,LI Z,et al.Joint Face Detection and Alignment Using Multitask Cascaded Convolutional Networks[J].IEEE Signal Processing Letters,2016,23(10):1499-1503.
[27]HE K,ZHANG X,REN S,et al.Deep Residual Learning for Image Recognition[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition(CVPR).Las Vegas,NV,USA:IEEE,2016.
[28]LIU Z,MAO H,WU C Y,et al.A ConvNet for the 2020s[J].arXiv:2201.03545,2022.
[29]KRIZHEVSKY A.Learning Multiple Layers of Features from Tiny Images[EB/OL].http://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf.
[30]SELVARAJU R R,COGSWELL M,DAS A,et al.Grad-CAM:Visual Explanations From Deep Networks via Gradient-Based Localization [J].International Journal of Computer Vision,2020,128(2):336-359.