2015 | 128 | 2B | B-271-B-272

Article title

Comparison of Second Order Algorithms for Function Approximation with Neural Networks

Content

Title variants

Languages of publication

EN

Abstracts

EN
Neural networks are massively parallel, distributed processing systems representing a new computational technology built on an analogy to the human information-processing system. They are usually considered naturally parallel computing models. Combining wavelets with neural networks can remedy the weaknesses of each, yielding wavelet-based neural networks capable of approximating any function with arbitrary precision. A wavelet-based neural network is a nonlinear regression structure that represents nonlinear mappings as the superposition of dilated and translated versions of a single function that is localized in both the space and frequency domains. The desired task is usually obtained by a learning procedure which consists in adjusting the "synaptic weights", and many learning algorithms have been proposed to update these weights. The convergence of these learning algorithms is a crucial criterion for neural networks to be useful in different applications. In this paper, we use different training algorithms for feed-forward wavelet networks applied to function approximation. The training is based on the minimization of a least-squares cost function, performed by iterative first- and second-order gradient-based methods. We use the Levenberg-Marquardt algorithm to train the architecture of the chosen network; the training procedure then starts with a simple gradient method, followed by the BFGS (Broyden, Fletcher, Goldfarb and Shanno) algorithm, and finally the conjugate gradient method. The performances of the different algorithms are then compared. We find that the advantage of the last training algorithm, namely the conjugate gradient method, over many other optimization algorithms lies in its relative simplicity, efficiency and quick convergence.
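
As an illustration of the training setup described in the abstract, the following is a minimal sketch (not the authors' code) of a one-dimensional wavelet network whose least-squares cost is minimized with the Levenberg-Marquardt, BFGS and conjugate-gradient methods via SciPy. The Mexican-hat mother wavelet, the network size, the initialization and the target function are illustrative assumptions.

    # Hypothetical sketch: wavelet network for 1-D function approximation,
    # trained by minimizing a least-squares cost with the optimizers compared
    # in the paper (Levenberg-Marquardt, BFGS, conjugate gradient).
    import numpy as np
    from scipy.optimize import minimize, least_squares

    rng = np.random.default_rng(0)

    def mexican_hat(u):
        # Mother wavelet, localized in both the space and frequency domains.
        return (1.0 - u**2) * np.exp(-0.5 * u**2)

    def wavenet(params, x):
        # Output = sum_i w_i * psi((x - b_i) / a_i): a superposition of
        # dilated and translated versions of the mother wavelet.
        w, a, b = np.split(params, 3)
        u = (x[:, None] - b[None, :]) / a[None, :]
        return mexican_hat(u) @ w

    def residuals(params, x, y):
        return wavenet(params, x) - y

    def cost(params, x, y):
        # Least-squares cost function minimized during training.
        r = residuals(params, x, y)
        return 0.5 * (r @ r)

    # Target function to approximate (illustrative choice).
    x = np.linspace(-3.0, 3.0, 200)
    y = np.sin(2.0 * x) * np.exp(-0.1 * x**2)

    n_units = 8
    p0 = np.concatenate([rng.normal(0.0, 0.5, n_units),   # weights w_i
                         np.full(n_units, 1.0),           # dilations a_i
                         np.linspace(-3.0, 3.0, n_units)])  # translations b_i

    # Levenberg-Marquardt acts on the residual vector.
    lm = least_squares(residuals, p0, args=(x, y), method='lm')

    # Quasi-Newton (BFGS) and conjugate-gradient minimization of the scalar cost.
    bfgs = minimize(cost, p0, args=(x, y), method='BFGS')
    cg = minimize(cost, p0, args=(x, y), method='CG')

    for name, final_cost in [('LM', cost(lm.x, x, y)),
                             ('BFGS', bfgs.fun),
                             ('CG', cg.fun)]:
        print(f'{name}: final cost = {final_cost:.3e}')

The printed final costs give a rough convergence comparison under these assumed settings; in practice the outcome depends on the initialization, the number of wavelet units and the target function.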

Keywords

EN

Contributors

author
  • Laboratory of Particle Physics and Statistical Physics, École Normale Supérieure, BP 92, Vieux Kouba, Algeria
  • Laboratory of Theoretical Physics, Faculty of Sciences-Physics, USTHB, B.P. 32, El-Alia, Algeria
author
  • Laboratory of Theoretical Physics, Faculty of Sciences-Physics, USTHB, B.P. 32, El-Alia, Algeria

References

  • [1] J.J. Hopfield, D.W. Tank, Science 233, 625 (1986), doi: 10.1126/science.3755256
  • [2] Q. Zhang, A. Benveniste, IEEE Trans. Neural Networks 3, 889 (1992), doi: 10.1109/72.165591
  • [3] J. Zhang, G.G. Walter, Y. Miao, W.N.W. Lee, IEEE Trans. Signal Processing 43, 1485 (1995), doi: 10.1109/78.388860
  • [4] K. Hornik, M. Stinchcombe, H. White, Neural Networks 2, 359 (1989), doi: 10.1016/0893-6080(89)90020-8
  • [5] L. Ait Gougam, M. Tribeche, F. Mekideche, Neural Networks 21, 1311 (2008), doi: 10.1016/j.neunet.2008.06.015
  • [6] B. Giraud, A. Touzeau, Phys. Rev. E 65, 016109 (2001), doi: 10.1103/physreve.65.016109

Document Type

Publication order reference

YADDA identifier

bwmeta1.element.bwnjournal-article-appv128n2b078kz