Computer Science ›› 2024, Vol. 51 ›› Issue (3): 244-250. doi: 10.11896/jsjkx.221200003

• Artificial Intelligence •

Self-calibrating First Spike Temporal Encoding Neuron Model

FENG Ren, CHEN Yunhua, XIONG Zhimin, CHEN Pinghua   

  1. School of Computer Science, Guangdong University of Technology, Guangzhou 510006, China
  • Received: 2022-12-01 Revised: 2023-04-05 Online: 2024-03-15 Published: 2024-03-13
  • Corresponding author: CHEN Yunhua (yhchen@gdut.edu.cn)
  • About author: FENG Ren (2359091834@qq.com), born in 1995, postgraduate. His main research interests include neuromorphic computing, computer vision and machine learning. CHEN Yunhua, born in 1984, Ph.D., professor, postgraduate supervisor, is a member of CCF (No.30983M). Her main research interests include neuromorphic computing, computer vision and machine learning.
  • Supported by: Natural Science Foundation of Guangdong Province, China (2021A1515012233).

Abstract: Training spiking neural networks (SNNs) has long been difficult, because spiking neurons have complex spatio-temporal dynamics and spike information is non-differentiable. Indirect training via ANN-to-SNN conversion sidesteps the difficulty of directly training deep SNNs, but the performance of the resulting SNN depends heavily on the spike encoding mechanism. Among the many encoding mechanisms, time-to-first-spike (TTFS) coding has a sound biological basis and high energy efficiency, but existing TTFS schemes use a single spike, which limits their information representation capability and requires a large encoding time window. To address this, a calibration spike is added on top of the single-spike TTFS code, yielding a self-calibrating time-to-first-spike (SC-TTFS) encoding mechanism, and the corresponding SC-TTFS neuron model is constructed. In SC-TTFS, the first spike is always emitted, while the calibration spike is emitted or withheld depending on the membrane potential remaining after the first spike; it compensates the quantization and truncation errors introduced by the encoding and shrinks the time window required for encoding. The advantages of the approach are verified by comparing the conversion errors of various encodings and by ANN-SNN conversion experiments on multiple network architectures. Experiments on the CIFAR10 and CIFAR100 datasets with VGG and ResNet architectures show that the proposed method achieves accuracy-lossless ANN-SNN conversion on both architectures and both datasets, and that the resulting SNNs have the shortest inference latency among state-of-the-art methods of the same kind. In addition, on the VGG architecture, the proposed method improves energy efficiency by about 80% compared with TTFS coding.
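
To make the encoding idea concrete, the following minimal Python sketch illustrates a two-spike, SC-TTFS-style encoder and decoder. The time window T, the linear value-to-time mapping, the firing criterion for the calibration spike, and all names are assumptions made for illustration only; the abstract does not specify the paper's actual neuron dynamics or calibration rule.

import numpy as np

T = 16  # assumed encoding time window (number of timesteps)

def sc_ttfs_encode(a):
    """Encode an activation into (t_first, t_calib); t_calib may be None."""
    a = float(np.clip(a, 0.0, 1.0))    # truncation: out-of-range activations are clipped
    step = 1.0 / (T - 1)               # value resolution of one timestep
    # Mandatory first spike: earlier spike <=> larger activation. The ceiling
    # keeps the represented value at or below a, so the residual is non-negative.
    t_first = int(np.ceil((1.0 - a) / step - 1e-9))
    a_hat = 1.0 - t_first * step       # value represented by the first spike
    residual = a - a_hat               # "leftover membrane potential", in [0, step)
    if residual <= 1e-9:
        return t_first, None           # first spike alone is exact: no calibration spike
    # Optional calibration spike: refines the value within one quantization step,
    # cutting the worst-case error from about step to about step / (T - 1).
    t_calib = int(round(residual / step * (T - 1)))
    return t_first, t_calib

def sc_ttfs_decode(t_first, t_calib):
    """Reconstruct the activation from the two spike times."""
    step = 1.0 / (T - 1)
    a_hat = 1.0 - t_first * step
    if t_calib is not None:
        a_hat += t_calib / (T - 1) * step
    return a_hat

for a in (0.0, 0.37, 0.8, 1.0):
    spikes = sc_ttfs_encode(a)
    print(a, spikes, round(sc_ttfs_decode(*spikes), 4))

Under these assumptions the two spikes behave like a two-digit base-(T-1) representation of the activation, which matches the abstract's claim that the calibration spike both compensates conversion error and permits a shorter time window than single-spike TTFS.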

Key words: Spiking neural network, Spike encoding mechanism, ANN-SNN conversion

CLC number: 

  • TP183