Computer Science ›› 2013, Vol. 40 ›› Issue (12): 116-121.


Convergence of Asynchronous Gradient Method with Momentum for Ridge Polynomial Neural Networks

YU Xin,TANG Li-xia and YU Yan   

  • Online:2018-11-16 Published:2018-11-16

Abstract: To improve convergence efficiency, a momentum term is introduced into the conventional error function of the asynchronous gradient method for training Ridge Polynomial neural networks. This paper studies the convergence of the asynchronous gradient method with momentum for training Ridge Polynomial neural networks: a monotonicity theorem and two convergence theorems are proved, which are important for choosing an appropriate learning rate and initial weights to perform effective training. A simulation experiment is presented to illustrate the theoretical findings.

Key words: Ridge polynomial neural networks, Asynchronous gradient algorithm, Momentum, Monotonicity, Convergence
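The training scheme described in the abstract — online (asynchronous) gradient updates with a momentum term for a Ridge Polynomial network, i.e. a sum of pi-sigma units passed through a sigmoid — can be sketched as below. This is a minimal illustration under assumed choices, not the paper's implementation: the network order, the sigmoid output unit, the squared-error loss, and the learning rate `eta` and momentum coefficient `alpha` are all assumptions made for the sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class RidgePolyNet:
    """Ridge Polynomial network of a given order:
    y = sigmoid( sum_i prod_{j<=i} (w_ij . x) ),
    where the i-th pi-sigma unit is a product of i linear terms.
    Inputs are assumed to carry a trailing bias component."""

    def __init__(self, dim, order, rng):
        # w[i][j]: j-th weight vector of the (i+1)-th pi-sigma unit
        self.w = [[rng.normal(scale=0.1, size=dim) for _ in range(i + 1)]
                  for i in range(order)]
        # one momentum buffer per weight vector
        self.v = [[np.zeros(dim) for _ in range(i + 1)] for i in range(order)]

    def forward(self, x):
        return sigmoid(sum(np.prod([wj @ x for wj in unit]) for unit in self.w))

    def step(self, x, t, eta=0.1, alpha=0.5):
        """One online gradient update with momentum on E = 0.5*(y - t)^2."""
        y = self.forward(x)
        delta = (y - t) * y * (1.0 - y)  # dE/d(net) through the sigmoid
        for unit, vunit in zip(self.w, self.v):
            acts = [wj @ x for wj in unit]  # the unit's linear terms
            for j, (wj, vj) in enumerate(zip(unit, vunit)):
                # d(prod)/d(w_j) = x * (product of the other linear terms)
                grad = delta * np.prod(acts[:j] + acts[j + 1:]) * x
                vj[:] = -eta * grad + alpha * vj  # momentum update
                wj += vj
        return 0.5 * (y - t) ** 2

# Toy usage: fit a smooth 1-D target in (0, 1), sample by sample
rng = np.random.default_rng(0)
net = RidgePolyNet(dim=2, order=2, rng=rng)
us = np.linspace(-1.0, 1.0, 20)
xs = [np.array([u, 1.0]) for u in us]          # append bias input
ts = [0.5 * (1.0 + np.tanh(2.0 * u)) for u in us]
errs = []
for epoch in range(300):
    errs.append(sum(net.step(x, t) for x, t in zip(xs, ts)))
```

With a small learning rate, the per-epoch total error in `errs` decreases, matching the monotonicity result the paper proves for suitably chosen learning rate and initial weights.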

