Computer Science ›› 2019, Vol. 46 ›› Issue (7): 206-210.doi: 10.11896/j.issn.1002-137X.2019.07.031

• Artificial Intelligence •

Bio-inspired Activation Function with Strong Anti-noise Ability

MAI Ying-chao, CHEN Yun-hua, ZHANG Ling

  1. (School of Computers, Guangdong University of Technology, Guangzhou 510006, China)
  • Received: 2018-05-22  Online: 2019-07-15  Published: 2019-07-15

Abstract: Although artificial neural networks are now almost comparable to the human brain in image recognition, activation functions such as ReLU and Softplus provide only a highly simplified simulation of the output response characteristics of biological neurons, and a large gap remains between artificial neural networks and the human brain in aspects such as noise resistance, processing of uncertain information and power consumption. In this paper, based on simulation experiments on biological neurons and their response characteristics, a strongly noise-resistant and biologically plausible activation function, Rand Softplus, was constructed by defining and computing a parameter η that reflects the randomness of each neuron. Finally, the activation function was applied to a deep residual network and verified on facial expression datasets. The results show that the recognition accuracy of the proposed activation function is almost equal to that of current mainstream activation functions when the input contains no or little noise, and that it exhibits good anti-noise performance when the input contains a large amount of noise.
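The abstract does not give the closed form of Rand Softplus, so the sketch below is illustrative only: it assumes a mixture in which a per-neuron randomness parameter η interpolates between the ReLU and Softplus responses. The function name rand_softplus, the range of eta and the mixture form are assumptions made for illustration, not the authors' definition.

import numpy as np

def softplus(x):
    # Numerically stable log(1 + exp(x))
    return np.logaddexp(0.0, x)

def rand_softplus(x, eta):
    # Hypothetical Rand-Softplus-style activation: eta in [0, 1] is a
    # per-neuron randomness parameter (the abstract only states that such a
    # parameter is defined and computed per neuron; this mixture is an
    # assumed form). Larger eta leans toward the smooth Softplus response,
    # smaller eta toward plain ReLU.
    return (1.0 - eta) * np.maximum(x, 0.0) + eta * softplus(x)

# Example: four neurons, each with its own randomness parameter.
x = np.array([-2.0, -0.5, 0.5, 2.0])
eta = np.random.uniform(0.0, 1.0, size=x.shape)
print(rand_softplus(x, eta))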

Key words: Activation function, Anti-noise, Leaky integrate-and-fire model, Neural networks

CLC Number: 

  • TP183