FEATURE VECTOR GENERATION FOR THE FACIAL EXPRESSION RECOGNITION USING NEO-FUZZY SYSTEM
Context. A facial expression recognition system based on a multidimensional extended neo-fuzzy neuron is considered. The choice of the feature vector's dimension and composition, and their influence on the system's learning rate, are discussed. The object of research is the method of multidimensional data clustering. The subject of research is the systematization of the geometric features of two-dimensional images.
Objective. The main goal of the work is to develop an approach to describing a person's facial expression using a fixed set of geometric features that can be obtained by processing the frames of a video sequence.
Method. To train the facial expression recognition system, it is proposed to form a feature vector consisting of the coordinates of characteristic points. Points were selected that describe the location and shape of the eyelids, eyebrows, eye pupils, lip contours, nose wings, and nasolabial folds. Such points can easily be found during automatic image processing using known contour detectors. The possibility of describing a human facial expression not by the coordinates of the characteristic points but by the distances between them was also investigated. From these distances a different feature vector was created, whose properties were compared with those of the coordinate-based vector.
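The two feature-vector variants described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the landmark coordinates are hypothetical placeholders for the output of a real contour detector, and the point set is far smaller than one covering all the facial regions listed above.

```python
import numpy as np

# Hypothetical facial landmarks (x, y) for one frame: in the real system
# these would describe eyelids, eyebrows, pupils, lip contours, nose wings,
# and nasolabial folds, located by a contour detector.
landmarks = np.array([
    [120.0,  95.0],   # left eyebrow, inner corner
    [155.0,  90.0],   # right eyebrow, inner corner
    [118.0, 110.0],   # left pupil
    [157.0, 110.0],   # right pupil
    [125.0, 160.0],   # left lip corner
    [150.0, 160.0],   # right lip corner
])

def coordinate_vector(points):
    """Variant 1: feature vector of flattened point coordinates (length 2n)."""
    return points.ravel()

def distance_vector(points):
    """Variant 2: feature vector of pairwise Euclidean distances
    between the points (length n*(n-1)/2)."""
    n = len(points)
    return np.array([np.linalg.norm(points[i] - points[j])
                     for i in range(n) for j in range(i + 1, n)])

coords = coordinate_vector(landmarks)   # dimension 2 * 6 = 12
dists = distance_vector(landmarks)      # dimension 6 * 5 / 2 = 15
```

Note the trade-off the comparison in the paper addresses: the distance vector is translation- and rotation-invariant by construction, but its dimension grows quadratically with the number of points, while the coordinate vector grows only linearly.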
Results. The developed recognition system based on a multidimensional extended neo-fuzzy neuron has been implemented in software and investigated on the problem of facial expression classification. Feature vectors differing in composition and dimension were compared. A feature-vector structure was chosen that provides a high system learning rate and does not require additional structural elements.
Conclusions. The experimental study fully confirms the effectiveness of the developed approach to human facial expression recognition using a multidimensional extended neo-fuzzy neuron.
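For orientation, the building block named in the title can be sketched as follows. This is a minimal sketch of a classic single-output neo-fuzzy neuron (a sum of nonlinear synapses, each a weighted combination of triangular membership functions), not the authors' multidimensional extended variant; all parameter values here are illustrative assumptions.

```python
import numpy as np

def triangular_memberships(x, centers):
    """Complementary triangular membership functions on a uniform grid:
    for x inside the grid, at most two neighboring functions fire and
    their degrees sum to 1."""
    step = centers[1] - centers[0]
    mu = np.zeros(len(centers))
    for j, c in enumerate(centers):
        d = abs(x - c)
        if d < step:
            mu[j] = 1.0 - d / step
    return mu

class NeoFuzzyNeuron:
    """Classic neo-fuzzy neuron: one nonlinear synapse per input feature."""
    def __init__(self, n_inputs, n_mf=5, lo=0.0, hi=1.0):
        self.centers = np.linspace(lo, hi, n_mf)   # membership centers
        self.w = np.zeros((n_inputs, n_mf))        # synaptic weights

    def forward(self, x):
        # output = sum over inputs of (weights . membership degrees)
        return sum(self.w[i] @ triangular_memberships(xi, self.centers)
                   for i, xi in enumerate(x))

    def train_step(self, x, target, lr=0.1):
        # LMS (gradient) step: the gradient w.r.t. each weight is simply
        # the corresponding membership degree, so learning is fast
        err = target - self.forward(x)
        for i, xi in enumerate(x):
            self.w[i] += lr * err * triangular_memberships(xi, self.centers)
        return err
```

Because the output is linear in the weights, the LMS update converges quickly, which is the property behind the high learning rate reported in the Results.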
Full Text: PDF (in Ukrainian)
Copyright (c) 2018 Ye. V. Bodyanskiy, N. Ye. Kulishova, V. Ph. Tkachenko
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Address of the journal editorial office:
Editorial office of the journal «Radio Electronics, Computer Science, Control»,
Zaporizhzhya National Technical University,
Zhukovskiy street, 64, Zaporizhzhya, 69063, Ukraine.
Telephone: +38-061-769-82-96 (Editing and Publishing Department).
Reference to the journal is obligatory in the case of complete or partial use of its materials.