A FRAMEWORK FOR CLASSIFIER SINGLE TRAINING PARAMETER OPTIMIZATION ON TRAINING TWO-LAYER PERCEPTRON IN A PROBLEM OF TURNED 60-BY-80-IMAGES CLASSIFICATION
DOI: https://doi.org/10.15588/1607-3274-2014-2-13

Keywords: classifier training parameter optimization, statistical evaluation, optimization scenario, two-layer perceptron, classification error percentage, turned objects classification, monochrome image, pixel-to-turn standard deviations ratio, training set.

Abstract
A 13-item scenario framework for optimizing a single classifier training parameter is developed. Formally, the problem is to find the global extremum (usually the minimum) of a classifier output parameter regarded as a function of a single training parameter. To link the scenario theory to practice, a two-layer perceptron is chosen as the classifier type. Its input objects are medium-format monochrome images with a few thousand independent features. Within the framework, MATLAB is chosen as the programming environment owing to its powerful Neural Network Toolbox. Because the minimized function is stochastic, statistical ε-stability of its evaluation from a finite data set is defined; these data are obtained by batch testing of the trained classifier. To exemplify the scenario framework, the pixel-to-turn standard deviations ratio is optimized for training a two-layer perceptron to classify monochrome 60-by-80 images of the 26 enlarged English alphabet capital letters. The goal is to find the pixel-to-turn standard deviations ratio for the training process that minimizes the classification error percentage. The relative gain from the optimization is about one third. The developed framework can also be applied to multivariable classifier optimization, in which case its item operations are carried out with respect to the corresponding multiplicity of variables.
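As an illustration only (not code from the paper), the following minimal MATLAB sketch grid-searches a candidate set of pixel-to-turn standard deviations ratios, trains a two-layer perceptron with Neural Network Toolbox functions for each candidate, and averages the classification error percentage over several testing batches. The data generators makeTurnedLetterSet and makeTurnedTestSet, the candidate grid, the batch count, and the hidden layer size are all assumptions introduced for the example.

```matlab
% Minimal sketch of single training parameter optimization by grid search.
% Assumptions: makeTurnedLetterSet(r) returns turned, pixel-noised 60-by-80
% letter images as 4800-by-N columns with one-hot 26-class targets, where r
% is the pixel-to-turn standard deviations ratio; makeTurnedTestSet() returns
% a test batch with a fixed turn distortion; the hidden size 300 is arbitrary.
ratios = 0.2 : 0.2 : 2;              % candidate pixel-to-turn standard deviations ratios
numBatches = 8;                      % testing batches for a statistically stable estimate
errPct = zeros(numel(ratios), 1);
for i = 1 : numel(ratios)
    [Xtr, Ttr] = makeTurnedLetterSet(ratios(i));   % assumed training set generator
    net = patternnet(300);                         % two-layer perceptron (one hidden layer)
    net.trainParam.showWindow = false;             % suppress the training GUI
    net = train(net, Xtr, Ttr);
    e = zeros(numBatches, 1);
    for b = 1 : numBatches
        [Xts, Tts] = makeTurnedTestSet();          % assumed test batch of turned images
        e(b) = 100 * mean(vec2ind(net(Xts)) ~= vec2ind(Tts));  % error percentage
    end
    errPct(i) = mean(e);                           % averaged over the testing batches
end
[bestErr, iBest] = min(errPct);                    % minimum of the averaged error percentage
bestRatio = ratios(iBest);
```

In the paper itself the evaluation is additionally required to be statistically ε-stable, so in practice the number of batches would be chosen so that the averaged error estimate stabilizes rather than fixed in advance as here.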
License
Copyright (c) 2015 V. V. Romanuke
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Creative Commons Licensing Notifications in the Copyright Notices
The journal allows the authors to hold the copyright without restrictions and to retain publishing rights without restrictions.
The journal allows readers to read, download, copy, distribute, print, search, or link to the full texts of its articles.
The journal allows reuse and remixing of its content in accordance with a Creative Commons CC BY-SA license.
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution License CC BY-SA that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.