Computer Science ›› 2016, Vol. 43 ›› Issue (Z6): 46-50. doi: 10.11896/j.issn.1002-137X.2016.6A.010


Research on Image Classification Algorithm Based on Semi-supervised Deep Belief Network

ZHU Chang-bao, CHENG Yong and GAO Qiang   

Online: 2018-12-01    Published: 2018-12-01

Abstract: In recent years, deep learning has been applied with great success to images, speech, video and other unstructured data, and has become a hot topic in machine learning and data mining. As a supervised learning model, however, successful deep learning applications usually require a large set of high-quality labeled training data. Motivated by this, we studied deep belief networks built from multiple stacked restricted Boltzmann machines and, drawing on the idea of semi-supervised learning, used a smaller labeled training set to improve the classification accuracy of the deep network model. Three methods, KNN, SVM and pHash, were used to assign labels to the unlabeled data. The results show that the semi-supervised deep belief network improves image classification accuracy by about 3% compared with a traditional deep belief network stacked from restricted Boltzmann machines.

Key words: Semi-supervised, Deep belief networks, Restricted Boltzmann machine
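The abstract only outlines the pipeline at a high level. The following is a minimal sketch of that idea, not the authors' implementation: a simple classifier trained on the small labeled set pseudo-labels the unlabeled pool, and a DBN-style model (greedy layer-wise pretrained stacked RBMs plus a classifier) is then trained on the combined data. The digits dataset, layer sizes, scikit-learn's BernoulliRBM, and the choice of KNN for pseudo-labeling are illustrative assumptions; the paper also uses SVM and pHash for labeling, and a full DBN would additionally fine-tune the stacked layers with supervision.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy data standing in for the image set; pixel values scaled to [0, 1] for Bernoulli RBMs.
X, y = load_digits(return_X_y=True)
X = X / 16.0
# Keep only 10% of the labels; treat the rest as the unlabeled pool.
X_lab, X_unlab, y_lab, _ = train_test_split(X, y, train_size=0.1, random_state=0)

# Step 1: pseudo-label the unlabeled pool with KNN fit on the small labeled set
# (the paper also tries SVM and pHash for this step).
knn = KNeighborsClassifier(n_neighbors=5).fit(X_lab, y_lab)
pseudo_labels = knn.predict(X_unlab)

X_train = np.vstack([X_lab, X_unlab])
y_train = np.concatenate([y_lab, pseudo_labels])

# Step 2: a DBN-like stack of two RBM feature layers, pretrained greedily
# layer by layer by the Pipeline, followed by a logistic-regression classifier.
dbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
dbn.fit(X_train, y_train)
print("train accuracy:", dbn.score(X_train, y_train))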

