Computer Science ›› 2025, Vol. 52 ›› Issue (3): 260-267. doi: 10.11896/jsjkx.240100195

• Artificial Intelligence •

Learning Rule with Precise Spike Timing Based on Direct Feedback Alignment

NING Limiao1, WANG Ziming2, LIN Zhicheng1, PENG Jian1, TANG Huajin2   

  1 College of Computer Science,Sichuan University,Chengdu 610065,China
    2 School of Computer Science and Technology,Zhejiang University,Hangzhou 310027,China
  • Received:2024-01-29 Revised:2024-04-07 Online:2025-03-15 Published:2025-03-07
  • About author:NING Limiao,born in 1985,Ph.D,lecturer,is a member of CCF(No.A1425G).His main research interests include neuromorphic computing and IoT.
    TANG Huajin,Ph.D,professor,Ph.D supervisor.His main research interests include neuromorphic computing,neuromorphic hardware,cognitive systems and robotic cognition.

Abstract: Because of the complex spatiotemporal dynamics of spiking neurons and synapses, training spiking neural networks (SNNs) is challenging, and no core training algorithms and techniques have yet been widely accepted. In this paper, we propose a learning rule with precise spike timing based on direct feedback alignment (PREST-DFA). Inspired by the spike layer error reassignment (SLAYER) learning algorithm, PREST-DFA uses error signals based on spike convolution differences: the output layer iteratively computes the error values, direct feedback alignment (DFA) broadcasts the error to hidden-layer neurons, and the synaptic weights are then updated. We implement a time-driven version of PREST-DFA, and simulation experiments demonstrate that PREST-DFA learns precise spike timing and shows good biological plausibility. To the best of our knowledge from a literature search, this is the first work to verify that a DFA-based learning algorithm can control the precise firing times of spikes in deep networks, indicating that the DFA mechanism can be applied to algorithm design based on spike timing. We also compare learning performance and training speed. Experimental results show that PREST-DFA achieves good learning performance with lower inference latency and trains faster than the same learning rule trained with backpropagation.
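To make the update pipeline described above concrete, below is a minimal NumPy sketch of a DFA-style update driven by spike-convolution errors: the output-layer error is the difference between kernel-filtered output and target spike trains, a fixed random feedback matrix broadcasts that error directly to the hidden layer, and weights are updated from the layer errors and filtered presynaptic activity. The layer sizes, exponential kernel, surrogate derivative, and placeholder spike trains are illustrative assumptions for this sketch, not the exact PREST-DFA formulation.

# Minimal sketch of a DFA-style weight update with spike-convolution errors.
# Layer sizes, kernel constants, and the surrogate derivative are illustrative
# assumptions, not the exact PREST-DFA formulation from the paper.
import numpy as np

rng = np.random.default_rng(0)

T, n_in, n_hid, n_out = 100, 50, 40, 10        # time steps and layer sizes
lr, tau = 1e-3, 20.0                           # learning rate, kernel time constant

# forward weights and a fixed random feedback matrix (DFA path to the hidden layer)
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
B1 = rng.normal(0, 0.1, (n_hid, n_out))

def exp_kernel(t_len, tau):
    """Causal exponential kernel used to smooth spike-train differences."""
    t = np.arange(t_len)
    return np.exp(-t / tau)

def conv_time(spikes, kernel):
    """Convolve each neuron's spike train with the kernel along time (causal)."""
    out = np.zeros_like(spikes, dtype=float)
    for i in range(spikes.shape[0]):
        out[i] = np.convolve(spikes[i], kernel)[: spikes.shape[1]]
    return out

# toy data: input spikes, network output spikes and target spike trains (0/1)
s_in  = (rng.random((n_in, T))  < 0.1).astype(float)
s_out = (rng.random((n_out, T)) < 0.1).astype(float)
s_tgt = (rng.random((n_out, T)) < 0.1).astype(float)
u_hid = rng.normal(0, 1, (n_hid, T))           # hidden membrane potentials (placeholder)
s_hid = (u_hid > 0.5).astype(float)            # hidden spike trains (placeholder)

kernel = exp_kernel(T, tau)

# 1) output-layer error: difference of kernel-filtered output and target spike trains
e_out = conv_time(s_out, kernel) - conv_time(s_tgt, kernel)       # shape (n_out, T)

# 2) DFA: broadcast the output error to the hidden layer through fixed random B1,
#    gated by a surrogate derivative of the hidden membrane potential
surrogate = 1.0 / (1.0 + np.abs(u_hid - 0.5)) ** 2                # illustrative surrogate
delta_hid = (B1 @ e_out) * surrogate                              # shape (n_hid, T)

# 3) weight updates: correlate layer errors with filtered presynaptic activity over time
trace_in  = conv_time(s_in,  kernel)
trace_hid = conv_time(s_hid, kernel)
W2 -= lr * (e_out     @ trace_hid.T) / T
W1 -= lr * (delta_hid @ trace_in.T)  / T

Note that the hidden layer never needs the transpose of W2; replacing B1 with W2.T would recover the corresponding backpropagation-style update, which is the comparison baseline mentioned in the abstract.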

Key words: Spiking neural network, Direct feedback alignment, Learning rule, Precise spike timing, Online learning

CLC Number: TP181
[1]LIAN S,SHEN J,LIU Q,et al.Learnable Surrogate Gradient for Direct Training Spiking Neural Networks[C]//Proceedings of the IJCAI.2023.
[2]WU Y,DENG L,LI G,et al.Spatio-Temporal Backpropagation for Training High-Performance Spiking Neural Networks [J].Frontiers in Neuroscience,2018,12:331.
[3]HAN J,WANG Z,SHEN J,et al.Symmetric-threshold ReLU for Fast and Nearly Lossless ANN-SNN Conversion [J].Machine Intelligence Research,2023,20(3):435-446.
[4]BU T,DING J H,YU Z F,et al.Optimized Potential Initialization for Low-Latency Spiking Neural Networks [J].arXiv:2202.01440,2022.
[5]NING L,DONG J,XIAO R,et al.Event-driven spiking neural networks with spike-based learning [J].Memetic Computing,2023,15(2):205-217.
[6]LIU F,ZHAO W,CHEN Y,et al.SSTDP:Supervised SpikeTiming Dependent Plasticity for Efficient Spiking Neural Network Training [J].Frontiers in Neuroscience,2021,15:756876.
[7]SHRESTHA S B,ORCHARD G.SLAYER:Spike Layer Error Reassignment in Time[C]//Proceedings of the NeurIPS.2018.
[8]LILLICRAP T P,COWNDEN D,TWEED D B,et al.Random synaptic feedback weights support error backpropagation for deep learning [J].Nature Communications,2016,7:13276.
[9]LILLICRAP T P,SANTORO A,MARRIS L,et al.Backpropagation and the brain [J].Nature Reviews Neuroscience,2020,21(6):335-346.
[10]NØKLAND A.Direct Feedback Alignment Provides Learning in Deep Neural Networks[C]//Proceedings of the NIPS.2016.
[11]LAUNAY J,POLI I,BONIFACE F C,et al.Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures[C]//Proceedings of the NeurIPS.2020.
[12]NEFTCI E O,AUGUSTINE C,PAUL S,et al.Event-Driven Random Back-Propagation:Enabling Neuromorphic Deep Learning Machines [J].Frontiers in Neuroscience,2017,11:324.
[13]ZHAO D,ZENG Y,ZHANG T,et al.GLSNN:A Multi-Layer Spiking Neural Network Based on Global Feedback Alignment and Local STDP Plasticity [J].Frontiers in Computational Neuroscience,2020,14:576841.
[14]LEE J,ZHANG R,ZHANG W,et al.Spike-Train Level Direct Feedback Alignment:Sidestepping Backpropagation for On-Chip Training of Spiking Neural Nets [J].Frontiers in Neuroscience,2020,14:143.
[15]SHI C,WANG T,HE J,et al.DeepTempo:A Hardware-Friendly Direct Feedback Alignment Multi-Layer Tempotron Learning Rule for Deep Spiking Neural Networks [J].IEEE Transactions on Circuits and Systems II:Express Briefs,2021,68(5):1581-1585.
[16]KANG W M,KWON D,WOO S Y,et al.Hardware-Based Spiking Neural Network Using a TFT-Type AND Flash Memory Array Architecture Based on Direct Feedback Alignment [J].IEEE Access,2021,9:73121-73132.
[17]BANG S,LEW D,CHOI S,et al.An Energy-Efficient SNN Processor Design based on Sparse Direct Feedback and Spike Prediction[C]//Proceedings of the 2021 International Joint Conference on Neural Networks (IJCNN).2021.
[18]TAVANAEI A,MAIDA A.BP-STDP:Approximating back-propagation using spike timing dependent plasticity [J].Neurocomputing,2019,330:39-47.
[19]FANG W,YU Z,CHEN Y,et al.Deep Residual Learning in Spiking Neural Networks[C]//Proceedings of the Advances in Neural Information Processing Systems.2021.
[20]FANG W,YU Z,CHEN Y,et al.Incorporating Learnable Membrane Time Constant To Enhance Learning of Spiking Neural Networks[C]//Proceedings of the ICCV.2021.
[21]KAISER J,FRIEDRICH A,TIECK J C V,et al.Embodied Neuromorphic Vision with Event-Driven Random Backpropagation [J].arXiv:1904.04805,2019.
[22]XU Q,QI Y,YU H,et al.CSNN:An Augmented Spiking based Framework with Perceptron-Inception[C]//Proceedings of the IJCAI.2018.
[23]DING J,YU Z,TIAN Y,et al.Optimal ANN-SNN Conversion for Fast and Accurate Inference in Deep Spiking Neural Networks[C]//Proceedings of the IJCAI-21.2021.