Method of Encoding the Output Signal of Neural Network Models

Authors

DOI:

https://doi.org/10.20535/1810-0546.2017.5.107206

Keywords:

Neural network, Standard, Encoding method, Output signal

Abstract

Background. A significant drawback of current technology for building neural network models based on the multilayer perceptron is that, when the parameters of the training examples are encoded, the expected output signal does not reflect the similarity between the standards of the classes to be recognized.

Objective. The aim of the paper is to develop a method for encoding the expected output of the training examples that reflects the similarity between the standards of the classes to be recognized.

Methods. The encoding method is based on a probabilistic neural network whose training examples specify the expected output not in numerical form but by the name of the class to be recognized. During recognition, the numerical output signal of the network can then express the similarity of the input image to each class presented to it during training.
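To illustrate the idea in a minimal sketch (a hypothetical construction, not the authors' implementation), similarity-aware expected outputs could be derived from the class standards with a Gaussian kernel, in the spirit of a probabilistic neural network: instead of one-hot targets, each class receives a target vector whose components reflect how similar the other class standards are to its own.

```python
import numpy as np

def soft_targets(standards, sigma=1.0):
    """Hypothetical illustration: build similarity-based expected outputs.

    For each class standard, compute its Gaussian-kernel similarity to every
    class standard (including itself) and normalize each row to sum to 1,
    yielding a soft target vector in place of a one-hot code.
    """
    s = np.asarray(standards, dtype=float)
    # Pairwise squared Euclidean distances between class standards.
    d2 = ((s[:, None, :] - s[None, :, :]) ** 2).sum(axis=-1)
    # Gaussian (PNN-style) kernel similarities; sigma is a smoothing width.
    k = np.exp(-d2 / (2.0 * sigma ** 2))
    # Normalize rows so each expected output vector sums to 1.
    return k / k.sum(axis=1, keepdims=True)

# Three illustrative class standards: classes 0 and 1 are close in feature
# space, class 2 is distant, so rows 0 and 1 share target mass.
standards = [[0.0, 0.0], [0.5, 0.0], [5.0, 5.0]]
T = soft_targets(standards)
```

Under this sketch, similar classes receive non-negligible components in each other's target vectors, which is the property the abstract attributes to the proposed encoding.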

Results. An encoding method has been developed that, through the use of a probabilistic neural network, makes it possible to account for the similarity of the class standards in the expected output signal of the training examples.

Conclusions. The proposed method reduces by a factor of 1.3–1.5 the number of training iterations needed to reach a tolerable learning error within 1 %.


Published

2017-10-31

Issue

Section

Art