AN ATTEMPT FOR 2-LAYER PERCEPTRON HIGH PERFORMANCE IN CLASSIFYING SHIFTED MONOCHROME 60-BY-80-IMAGES VIA TRAINING WITH PIXEL-DISTORTED SHIFTED IMAGES ON THE PATTERN OF 26 ALPHABET LETTERS
DOI: https://doi.org/10.15588/1607-3274-2013-2-18

Keywords: object classification, neocognitron, perceptron, pixel distortion, shift, monochrome images, shifted monochrome images.

Abstract
The object classification problem is considered, to which both the neocognitron and the multilayer perceptron may be applied. Since the neocognitron, although capable of solving almost any classification problem, performs too slowly and expensively, the 2-layer perceptron is attempted for recognizing shifted monochrome images, even though it is fast only on pixel-distorted monochrome images. Taking as the original image set the 26 monochrome 60-by-80 images of the English alphabet letters, the task is formulated to determine whether the 2-layer perceptron can ensure high performance in classifying shifted monochrome images. It is shown that the 2-layer perceptron performs as a good classifier of shifted monochrome images when, during training, its input is fed with samples of shifted images that are additionally pixel-distorted. This may require more passes of the training samples through the 2-layer perceptron, but the total training time is nevertheless shorter than when the 2-layer perceptron is trained only on pixel-distortion-free shifted monochrome images.
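The training scheme the abstract describes — a 2-layer perceptron fed with shifted images that are additionally pixel-distorted — can be sketched at a reduced scale. This is a hypothetical toy version, not the paper's setup: three synthetic "letter" patterns on an 8-by-8 grid stand in for the 26 monochrome 60-by-80 alphabet images, shifts are random rolls of up to one pixel, and pixel distortion flips roughly 5 % of pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three toy "letter" prototypes on an 8x8 grid (hypothetical stand-ins
# for the 26 monochrome 60-by-80 alphabet letter images).
protos = np.zeros((3, 8, 8))
protos[0, :, 3] = 1.0                          # vertical bar
protos[1, 3, :] = 1.0                          # horizontal bar
protos[2, np.arange(8), np.arange(8)] = 1.0    # diagonal

def sample(n, shift=1, flip=0.05):
    """Draw n images: random class, random shift, random pixel distortion."""
    y = rng.integers(0, 3, n)
    X = protos[y].copy()
    for i in range(n):
        X[i] = np.roll(X[i], tuple(rng.integers(-shift, shift + 1, 2)),
                       axis=(0, 1))           # the shift
        mask = rng.random((8, 8)) < flip      # pixel distortion: flip ~5 %
        X[i] = np.abs(X[i] - mask)
    return X.reshape(n, -1), y

# 2-layer perceptron: 64 inputs -> 20 hidden tanh units -> 3 softmax outputs.
W1 = rng.normal(0, 0.1, (64, 20)); b1 = np.zeros(20)
W2 = rng.normal(0, 0.1, (20, 3));  b2 = np.zeros(3)

def forward(X):
    H = np.tanh(X @ W1 + b1)
    Z = H @ W2 + b2
    P = np.exp(Z - Z.max(1, keepdims=True))   # stable softmax
    return H, P / P.sum(1, keepdims=True)

lr = 0.5
for step in range(400):                       # plain SGD on cross-entropy
    X, y = sample(32)
    H, P = forward(X)
    G = P.copy(); G[np.arange(32), y] -= 1; G /= 32
    GH = (G @ W2.T) * (1 - H ** 2)            # backprop through tanh
    W2 -= lr * H.T @ G;  b2 -= lr * G.sum(0)
    W1 -= lr * X.T @ GH; b1 -= lr * GH.sum(0)

Xt, yt = sample(500)
acc = (forward(Xt)[1].argmax(1) == yt).mean()
print(f"accuracy on shifted, pixel-distorted images: {acc:.2f}")
```

Because the training samples are both shifted and pixel-distorted, the perceptron is forced to tolerate small translations rather than memorize exact pixel positions, which is the effect the abstract attributes to this training regime.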
License
Copyright (c) 2014 V. V. Romanuke
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.