2-Layer Perceptron Performance Improvement in Classifying 26 Turned Monochrome 60-by-80-Images via Training with Pixel-Distorted Turned Images

Vadim V. Romanyuk


A 2-layer perceptron is tested for classifying turn-distorted objects at an acceptable classification error percentage. The object model is a letter of the English alphabet rendered as a monochrome 60-by-80 image. Training the 2-layer perceptron on pixel-distorted images alone, or on turn-distorted images alone, does not make it classify satisfactorily. Therefore, its performance in classifying turn-distorted images may be improved by training under a modified distortion. The modified distorted images for the training set are formed as a mixture of turn distortion and pixel distortion: the training set consists of pixel-distorted turned images over the pattern of the 26 alphabet letters. A performance improvement is revealed when many more training samples are passed through the 2-layer perceptron. This certainly increases training time, but in return the 2-layer perceptron can classify both pixel-distorted images and pixel-distorted turned images. Moreover, the trained 2-layer perceptron is about 35 times faster than the neocognitron in classifying objects of the considered medium format.
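The mixed-distortion training samples described above can be sketched as follows. This is a minimal illustration, not the paper's actual generator: the function names, the rotation-angle range, and the pixel-flip probability are assumptions, and the images are binary arrays with background 0 and foreground 1.

```python
import numpy as np

def turn_distort(img, angle_deg):
    """Rotate a monochrome image about its center by angle_deg degrees
    (nearest-neighbor sampling; pixels mapped from outside stay background 0)."""
    h, w = img.shape
    theta = np.deg2rad(angle_deg)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse mapping: for each output pixel, find the source pixel it came from.
    src_y = cy + (ys - cy) * np.cos(theta) - (xs - cx) * np.sin(theta)
    src_x = cx + (ys - cy) * np.sin(theta) + (xs - cx) * np.cos(theta)
    sy = np.rint(src_y).astype(int)
    sx = np.rint(src_x).astype(int)
    inside = (sy >= 0) & (sy < h) & (sx >= 0) & (sx < w)
    out = np.zeros_like(img)
    out[inside] = img[sy[inside], sx[inside]]
    return out

def pixel_distort(img, flip_prob, rng):
    """Invert each pixel independently with probability flip_prob."""
    mask = rng.random(img.shape) < flip_prob
    return np.where(mask, 1 - img, img)

def make_training_sample(letter_img, max_angle, flip_prob, rng):
    """Pixel-distorted turned image: rotate by a random angle, then add pixel noise."""
    angle = rng.uniform(-max_angle, max_angle)
    return pixel_distort(turn_distort(letter_img, angle), flip_prob, rng)
```

A training set in the spirit of the abstract would then be built by calling `make_training_sample` many times for each of the 26 letter patterns, so that the perceptron sees both kinds of distortion combined in every sample.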


Automation; Object classification; Neocognitron; Perceptron; Monochrome images; Pixel distortion; Rotation; Turn distortion; Training set; Classification error percentage




S. Haykin, Neural Networks: A Comprehensive Foundation. New Jersey: Prentice Hall, Inc., 1999.

G. Arulampalam and A. Bouzerdoum, “A generalized feedforward neural network architecture for classification and regression”, Neural Networks, vol. 16, no. 5-6, pp. 561–568, 2003.

K. Fukushima, “Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position”, Biological Cybernetics, vol. 36, no. 4, pp. 193–202, 1980.

K. Fukushima, “Neocognitron: A hierarchical neural network capable of visual pattern recognition”, Neural Networks, vol. 1, no. 2, pp. 119–130, 1988.

K. Hagiwara et al., “Upper bound of the expected training error of neural network regression for a Gaussian noise sequence”, Neural Networks, vol. 14, no. 10, pp. 1419–1429, 2001.

G. Poli and J.H. Saito, “Parallel Face Recognition Processing using Neocognitron Neural Network and GPU with CUDA High Performance Architecture”, in Face Recognition, M. Oravec, Ed., InTech, 2010.

V.V. Romanyuk, “Dependence of the performance of a feedforward neural network with one hidden layer of neurons on the smoothness of its training on noised copies of a pattern alphabet”, Herald of Khmelnytskyi National University. Technical Sciences, no. 1, pp. 201–206, 2013 (in Russian).

M.Т. Hagan and M.B. Menhaj, “Training feedforward networks with the Marquardt algorithm”, IEEE Trans. Neural Networks, vol. 5, no. 6, pp. 989–993, 1994.

A. Nied et al., “On-line neural training algorithm with sliding mode control and adaptive learning rate”, Neurocomputing, vol. 70, no. 16-18, pp. 2687–2691, 2007.

K.-S. Oh and K. Jung, “GPU implementation of neural networks”, Pattern Recognition, vol. 37, no. 6, pp. 1311–1314, 2004.

D. Kangin et al., “Further Parameters Estimation of Neocognitron Neural Network Modification with FFT Convolution”, J. Telecomm., Electronic and Comp. Eng., vol. 4, no. 2, pp. 21–26, 2012.




DOI: http://dx.doi.org/10.20535/1810-0546.2014.5.35234

