Neural Network with Combined Approximation of the Response Surface
Background. Many neural network architectures exist, each with its own advantages and disadvantages: simple, fast, and easy-to-use single-layer perceptrons suit linear and linearized regression tasks, while more complex neural networks are expensive in training and prediction time. This raises the problem of developing fast and efficient algorithms for training artificial neural networks. An additional motivation for researching new training methods is achieving the smallest possible training and prediction errors.
Objective. The aim of the paper is to find and analyze the properties of the most effective method of training artificial neural networks using a combined approximation of the response surface, to perform computational experiments on the proposed artificial neural networks, and to compare the experimental results with known and newly developed methods.
Methods. Known methods of combined approximation of the response surface were analyzed. New algorithms for training neural networks, based on clustering the data with the k-means method, were developed. The algorithm with the smallest neural network training and data prediction errors was chosen.
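The abstract does not specify how clustering is combined with the approximation step, so the following is only a minimal illustrative sketch of one common realization of the idea: k-means partitions the input space, and a separate linear model approximates the response surface within each cluster. All names (`kmeans`, `ClusteredLinearModel`) and the per-cluster least-squares choice are the sketch's own assumptions, not the authors' algorithm.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means: returns cluster centers and a label per sample."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance).
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        # Recompute centers; keep the old center if a cluster went empty.
        new = np.array([X[labels == j].mean(0) if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels

class ClusteredLinearModel:
    """Piecewise-linear response-surface approximation: k-means splits the
    training inputs, then a linear least-squares model is fit per cluster."""

    def fit(self, X, y, k=3):
        self.centers, labels = kmeans(X, k)
        self.coefs = []
        for j in range(k):
            Xj, yj = X[labels == j], y[labels == j]
            A = np.hstack([Xj, np.ones((len(Xj), 1))])  # append bias column
            w, *_ = np.linalg.lstsq(A, yj, rcond=None)
            self.coefs.append(w)
        return self

    def predict(self, X):
        # Route each query point to its nearest cluster's linear model.
        labels = np.argmin(((X[:, None, :] - self.centers[None]) ** 2).sum(-1), axis=1)
        A = np.hstack([X, np.ones((len(X), 1))])
        return np.array([A[i] @ self.coefs[labels[i]] for i in range(len(X))])
```

In this reading, "combined approximation" means the global surface is assembled from simple local approximators, which keeps training fast (each fit is a small least-squares problem) while allowing the composite model to capture nonlinearity.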
Results. The results of studying different methods of training artificial neural networks are presented. The peculiarities of the methods of combined approximation of the response surface are analyzed. It is shown that both methods of combined approximation of the response surface for training artificial neural networks and for prediction confirm the effectiveness of the proposed approach. The combined approximation algorithm that provides the lowest training and prediction errors is selected.
Conclusions. The developed methods of combined approximation of the response surface allow training neural networks and predicting data with smaller error than an autoregressive moving-average model, a multilayer perceptron, or artificial neural networks of the geometric transformations model without additional data processing.
Copyright (c) 2018 Igor Sikorsky Kyiv Polytechnic Institute
This work is licensed under a Creative Commons Attribution 4.0 International License.