Computer Science ›› 2014, Vol. 41 ›› Issue (2): 153-156.

• CCML 2013 •

Architecture Selection for Single-hidden-Layer Feed-forward Neural Networks Based on Node Sensitivity

ZHAI Jun-hai, HA Ming-guang, SHAO Qing-yan and WANG Xi-zhao

  1. College of Mathematics and Computer Science, Hebei University, Baoding 071002, China
  • Online: 2018-11-14  Published: 2018-11-14
  • Supported by:
    This work was supported by the National Natural Science Foundation of China (61170040), the Natural Science Foundation of Hebei Province (F2013201110, F2013201220), the Natural Science Foundation of Hebei University (2011-228043), and the Education and Teaching Reform Research Project of Hebei University (JX07-Y-27).

Abstract: This paper proposes an architecture selection method for single-hidden-layer feed-forward neural networks (SLFNNs) based on node sensitivity. Starting from a network with a relatively large number of hidden nodes, the method first uses a node-sensitivity measure to quantify the significance of each hidden node, then sorts the hidden nodes in descending order of significance, and finally prunes the unimportant hidden nodes one by one until a predefined stopping condition is met. A key feature of the algorithm is that it does not require repeatedly retraining the network; the resulting architecture is compact and generalizes well. Experimental results on real-world data sets and UCI data sets show that the proposed algorithm is effective and efficient.

Key words: Feed-forward neural network, Architecture selection, Sensitivity, Cross-entropy

CLC number: TP181    Document code: A
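
The abstract outlines the pruning procedure, but the page itself carries no implementation details, so the following plain-Python sketch is only an illustration of the loop it describes. The sensitivity measure used here (the increase in cross-entropy loss when a hidden node's output is clamped to zero), the greedy loss-tolerance stopping rule, and every function and parameter name (forward, prune_by_sensitivity, tol) are assumptions made for this sketch, not the authors' exact formulation.

    # Illustrative sketch of sensitivity-based pruning for a single-hidden-layer
    # feed-forward network (SLFNN). The sensitivity measure and stopping rule
    # below are assumptions; the paper's exact definitions may differ.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(X, W, b, V, c, keep):
        """SLFNN forward pass; `keep` is a 0/1 mask over the hidden nodes."""
        H = sigmoid(X @ W + b) * keep            # hidden outputs, pruned nodes zeroed
        logits = H @ V + c
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)  # softmax class probabilities

    def cross_entropy(P, y):
        """Mean cross-entropy of predicted probabilities P against integer labels y."""
        return -np.mean(np.log(P[np.arange(len(y)), y] + 1e-12))

    def prune_by_sensitivity(X, y, W, b, V, c, tol=0.01):
        """Prune hidden nodes, least sensitive first, without any retraining."""
        keep = np.ones(W.shape[1])
        base = cross_entropy(forward(X, W, b, V, c, keep), y)
        # Sensitivity of node j: loss increase when node j is removed.
        sens = np.empty(W.shape[1])
        for j in range(W.shape[1]):
            trial = keep.copy()
            trial[j] = 0.0
            sens[j] = cross_entropy(forward(X, W, b, V, c, trial), y) - base
        # Sorting nodes by descending significance and pruning from the bottom
        # is equivalent to visiting them in ascending order of sensitivity.
        for j in np.argsort(sens):
            trial = keep.copy()
            trial[j] = 0.0
            loss = cross_entropy(forward(X, W, b, V, c, trial), y)
            if loss - base > tol:                # predefined stopping condition
                break
            keep = trial
        return keep                              # mask of surviving hidden nodes

Consistent with the abstract's claim that no retraining is needed, the sketch only zeroes out hidden-node outputs, so every candidate architecture is evaluated with the weights already learned.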


