Computer Science ›› 2019, Vol. 46 ›› Issue (3): 242-247. doi: 10.11896/j.issn.1002-137X.2019.03.036

• Artificial Intelligence •

End-to-End Single-channel Automatic Staging Model for Sleep EEG Signal

JIN Huan-huan1, YIN Hai-bo2, HE Ling-na1

  1. College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, China
  2. School of Astronautics, Harbin Institute of Technology, Harbin 150001, China

  Received: 2018-01-18; Revised: 2018-04-10; Online: 2019-03-15; Published: 2019-03-22

Abstract: The classification accuracy of current automatic sleep staging methods is limited by small, class-imbalanced datasets and hand-engineered features. To address this problem, this paper proposed an automatic sleep staging model based on a deep hybrid neural network. In the model's main structure, multi-scale convolutional neural networks automatically learn high-level time-invariant features, a recurrent neural network built from bidirectional gated recurrent units decodes temporal information from these time-invariant features, and a residual connection fully combines the time-invariant features with the temporal features. For model optimization, the experimental dataset reconstructed by MSMOTE (Modified Synthetic Minority Oversampling Technique) is used for pre-training in order to reduce the impact of class imbalance on the classification of minority classes, and the Swish activation function is used to accelerate training convergence. Experiments were conducted on the raw single-channel Fpz-Cz EEG signal from the Sleep-EDF Database. The 15-fold cross-validation results show an overall classification accuracy of 86.85% and a macro-averaged F1-score of 81.63%. This model effectively avoids the subjectivity of manual feature selection and the limitations that small, class-imbalanced datasets impose on deep learning.
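
To make the described architecture concrete, the following is a minimal PyTorch sketch of the hybrid model outlined in the abstract: two parallel convolutional branches with different kernel scales extract time-invariant features from a raw 30-s single-channel EEG epoch, a bidirectional GRU decodes temporal information, a residual connection merges the two feature streams, and Swish is used as the activation. All layer sizes, kernel widths, and the assumed 100 Hz sampling rate (3000 samples per epoch) are illustrative assumptions rather than the authors' exact hyper-parameters, and the MSMOTE pre-training step described above is omitted here.

```python
# Minimal sketch of a multi-scale CNN + bidirectional GRU sleep stager.
# Hyper-parameters below are illustrative assumptions, not the paper's values.
import torch
import torch.nn as nn

class Swish(nn.Module):
    """Swish activation: x * sigmoid(x)."""
    def forward(self, x):
        return x * torch.sigmoid(x)

class MultiScaleCNN(nn.Module):
    """Two parallel 1-D CNN branches with small/large kernels that capture
    time-invariant features at different temporal scales."""
    def __init__(self, out_per_branch=64):
        super().__init__()
        self.small = nn.Sequential(
            nn.Conv1d(1, out_per_branch, kernel_size=50, stride=6, padding=25),
            nn.BatchNorm1d(out_per_branch), Swish(),
            nn.MaxPool1d(8),
        )
        self.large = nn.Sequential(
            nn.Conv1d(1, out_per_branch, kernel_size=400, stride=50, padding=200),
            nn.BatchNorm1d(out_per_branch), Swish(),
            nn.MaxPool1d(4),
        )

    def forward(self, x):                      # x: (batch, 1, 3000)
        a = self.small(x).transpose(1, 2)      # (batch, T1, C)
        b = self.large(x).transpose(1, 2)      # (batch, T2, C)
        return torch.cat([a, b], dim=1)        # concatenate along the time axis

class SleepStager(nn.Module):
    """Multi-scale CNN -> bidirectional GRU, with a residual connection that
    merges the time-invariant features and the decoded temporal features."""
    def __init__(self, feat_dim=64, hidden=64, n_classes=5):
        super().__init__()
        self.cnn = MultiScaleCNN(feat_dim)
        self.bigru = nn.GRU(feat_dim, hidden, num_layers=1,
                            batch_first=True, bidirectional=True)
        self.shortcut = nn.Linear(feat_dim, 2 * hidden)    # match dimensions
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):
        feats = self.cnn(x)                        # (batch, T, feat_dim)
        temporal, _ = self.bigru(feats)            # (batch, T, 2*hidden)
        merged = temporal + self.shortcut(feats)   # residual combination
        return self.fc(merged.mean(dim=1))         # pool over time, classify 5 stages

if __name__ == "__main__":
    epochs = torch.randn(4, 1, 3000)               # four hypothetical 30-s Fpz-Cz epochs
    print(SleepStager()(epochs).shape)             # torch.Size([4, 5])
```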

Key words: Deep learning, End-to-end, Gated recurrent unit, Single-channel, Sleep staging, Swish

CLC Number: TP391
[1]DA S T,KOZAKEVICIUS A J,RODRIGUES C R.Single-channel EEG sleep stage classification based on a streamlined set of statistical features in wavelet domain[J].Medical & Biological Engineering & Computing,2017,55(2):1-10.
[2]TSINALIS O,MATTHEWS P M,GUO Y K.Automatic sleep stage scoring using time-frequency analysis and stacked sparse autoencoders[J].Annals of Biomedical Engineering,2016,44(5):1587-1597.
[3]MONIKA P,POLAK A G.Effect of Feature Extraction on Automatic Sleep Stage Classification by Artificial Neural Network[J].Metrology & Measurement Systems,2017,24(2):229-240.
[4]LI C,CHAI Y M,NAN X F,et al.Research on Problem Classification Method Based on Deep Learning[J].Computer Science,2016,43(12):115-119.(in Chinese)
李超,柴玉梅,南晓斐,等.基于深度学习的问题分类方法研究[J].计算机科学,2016,43(12):115-119.
[5]WANG Z X,TENG S H,LIU G D,et al.Hierarchical sparse representation with deep dictionary for multi-modal classification[J].Neurocomputing,2017,253(C):65-69.
[6]WANG Z G,ZHAO Z S,WENG S F,et al.Incremental multiple instance outlier detection[J].Neural Computing & Applications,2015,26(4):957-968.
[7]ZHAO Z S,FENG X,WEI F,et al.Learning Representative Features for Robot Topological Localization[J].International Journal of Advanced Robotic Systems,2013,10(4):1-12.
[8]ZHANG Q L,ZHAO D,CHI X B.Review for Deep Learning Based on Medical Imaging Diagnosis[J].Computer Science,2017,44(Z11):1-7.(in Chinese)
张巧丽,赵地,迟学斌.基于深度学习的医学影像诊断综述[J].计算机科学,2017,44(Z11):1-7.
[9]TSINALIS O,MATTHEWS P M,GUO Y K,et al.Automatic sleep stage scoring with single-channel EEG using convolutional neural networks[EB/OL].[2016-10-05].https://arxiv.org/abs/1610.01683.
[10]SUPRATAK A,DONG H,WU C,et al.DeepSleepNet:a model for automatic sleep stage scoring based on raw single-channel EEG[EB/OL].[2017-08-03].https://arxiv.org/abs/1703.04046v2.
[11]HE K M,ZHANG X Y,REN S Q,et al.Deep Residual Learning for Image Recognition[C]∥Computer Vision and Pattern Recognition.IEEE,2016:770-778.
[12]RAMACHANDRAN P,ZOPH B,LE Q V.Swish:a self-gated activation function[EB/OL].[2017-10-27].https://arxiv.org/abs/1710.05941v2.
[13]HU S G,LIANG Y F,MA L T,et al.MSMOTE:Improving Classification Performance When Training Data is Imbalanced[C]∥International Workshop on Computer Science & Engineering.IEEE Computer Society,2009:13-17.
[14]COHEN M X.Analyzing neural time series data:theory and practice[M].Massachusetts:MIT Press,2014.
[15]IOFFE S,SZEGEDY C.Batch normalization:accelerating deep network training by reducing internal covariate shift[C]∥International Conference on Machine Learning.2015:448-456.
[16]JOZEFOWICZ R,ZAREMBA W,SUTSKEVER I.An empirical exploration of recurrent network architectures[C]∥International Conference on Machine Learning.2015:2342-2350.