http://ric.zntu.edu.ua/issue/feed Radio Electronics, Computer Science, Control 2021-07-15T05:48:19+00:00 Sergey A. Subbotin subbotin.csit@gmail.com Open Journal Systems <p dir="ltr" align="justify"><strong>Description:</strong> The scientific journal «Radio Electronics, Computer Science, Control» is an international academic peer-reviewed publication. It publishes scientific articles (works that extensively cover a specific topic, idea or question and contain elements of their analysis) and reviews (works containing an analysis and reasoned assessment of the author's original or published book), which receive objective reviews by leading specialists, who evaluate the substance without regard to the race, sex, religion, ethnic origin, nationality, or political philosophy of the author(s).<span id="result_box2"><br /></span><strong>Founder and </strong><strong>Publisher</strong><strong>:</strong> <a href="http://zntu.edu.ua/zaporozhye-national-technical-university" aria-invalid="true">National University "Zaporizhzhia Polytechnic"</a>. <strong>Country:</strong> Ukraine.<span id="result_box1"><br /></span><strong>ISSN</strong> 1607-3274 (print), ISSN 2313-688X (on-line).<span id="result_box3"><br /></span><strong>Certificate of State Registration:</strong> КВ №24220-14060ПР dated 19.11.2019. The journal is registered by the Ministry of Justice of Ukraine.<br /><span id="result_box4">By the Order of the Ministry of Education and Science of Ukraine of 17.03.2020 № 409 “On approval of the decision of the Certifying Collegium of the Ministry on the activities of the specialized scientific councils dated 06 March 2020”, the <strong>journal is included in the List of scientific specialized periodicals of Ukraine in category “А” (highest level), where the results of dissertations for the Doctor of Science and Doctor of Philosophy degrees may be published</strong>. 
<span id="result_box26">By the Order of the Ministry of Education and Science of Ukraine of 21.12.2015 № 1328 "On approval of the decision of the Certifying Collegium of the Ministry on the activities of the specialized scientific councils dated 15 December 2015", the journal is included in the <strong>List of scientific specialized periodicals of Ukraine</strong>, where the results of dissertations for the Doctor of Science and Doctor of Philosophy degrees in Mathematics and Technical Sciences may be published.</span><br />The <strong>journal is included in the Polish List of scientific journals</strong> and peer-reviewed materials from international conferences with an assigned number of points (Annex to the announcement of the Minister of Science and Higher Education of Poland of July 31, 2019: Lp. 16981). </span><span id="result_box27"><br /></span><strong>Year of Foundation:</strong> 1999. <strong>Frequency:</strong> 4 times per year (before 2015 - 2 times per year).<span id="result_box6"><br /></span><strong>Volume:</strong> up to 20 conventional printed sheets. <strong>Format:</strong> 60x84/8. 
<span id="result_box7"><br /></span><strong>Languages:</strong> English, Russian, Ukrainian.<span id="result_box8"><br /></span><strong>Fields of Science:</strong> Physics and Mathematics, Technical Sciences.<span id="result_box9"><br /></span><strong>Aim:</strong> to serve the academic community, principally by publishing topical articles resulting from original theoretical and applied research in various areas of academic endeavor.<strong><br /></strong><strong>Focus:</strong> fresh formulations of problems and new methods of investigation; helping professionals, graduate students, engineers, academics and researchers to disseminate information on state-of-the-art techniques within the journal's scope.<br /><strong>Scope:</strong> telecommunications and radio electronics, software engineering (including algorithm and programming theory), computer science (mathematical modeling and computer simulation, optimization and operations research, control in technical systems, machine-machine and man-machine interfacing, artificial intelligence, including data mining, pattern recognition, artificial neural and neuro-fuzzy networks, fuzzy logic, swarm intelligence and multiagent systems, hybrid systems), computer engineering (computer hardware, computer networks), information systems and technologies (data structures and bases, knowledge-based and expert systems, data and signal processing methods).<strong><br /></strong> <strong>Journal sections:</strong><span id="result_box10"><br /></span>- radio electronics and telecommunications;<span id="result_box12"><br /></span>- mathematical and computer modelling;<span id="result_box13"><br /></span>- neuroinformatics and intelligent systems;<span id="result_box14"><br /></span>- progressive information technologies;<span id="result_box15"><br /></span>- control in technical systems. 
<span id="result_box17"><br /></span><strong>Abstracting and Indexing:</strong> <strong>The journal is indexed in the <a href="https://mjl.clarivate.com/search-results" target="_blank" rel="noopener">Web of Science</a></strong> (WoS) scientometric database. The articles published in the journal are abstracted in leading international and national <strong>abstracting journals</strong> and <strong>scientometric databases</strong>, and are also placed in <strong>digital archives</strong> and <strong>libraries</strong> with free on-line access. <span id="result_box21"><br /></span><strong>Editorial board: </strong><em>Editor in chief</em> - S. A. Subbotin, D. Sc., Professor; <em>Deputy Editor in Chief</em> - D. M. Piza, D. Sc., Professor. The <em>members</em> of the Editorial Board are listed <a href="http://ric.zntu.edu.ua/about/editorialTeam" aria-invalid="true">here</a>.<span id="result_box19"><br /></span><strong>Publishing and processing fee:</strong> Articles are published and peer-reviewed <strong>free of charge</strong>.<span id="result_box20"><br /></span><strong>Authors' Copyright:</strong> The journal allows the authors to hold the copyright without restrictions and to retain publishing rights without restrictions. The journal allows readers to read, download, copy, distribute, print, search, or link to the full texts of its articles. The journal allows reuse and remixing of its content in accordance with the Creative Commons license CC BY-SA.<span id="result_box21"><br /></span><strong>Authors' Responsibility:</strong> By submitting an article to the journal, the authors assume full responsibility for compliance with the copyright of other individuals and organizations, the accuracy of citations, data and illustrations, and the nondisclosure of state and industrial secrets, and express their consent to transfer to the publisher, free of charge, the right to publish, to translate into foreign languages, to store and to distribute the article materials in any form. 
Authors who hold scientific degrees, by submitting an article to the journal, give their consent to act free of charge as reviewers of other authors' articles at the request of the journal editor within the established deadlines. The articles submitted to the journal must be original, new and interesting to the readership of the journal, have reasonable motivation and aim, be previously unpublished and not be under consideration for publication in other journals. Articles should not contain trivial or obvious results, draw unwarranted conclusions or repeat the conclusions of already published studies.<span id="result_box22"><br /></span><strong>Readership:</strong> scientists, university faculty, postgraduate and graduate students, practical specialists.<span id="result_box23"><br /></span><strong>Publicity and Access Method:</strong> <strong>Open Access</strong> on-line for full-text publications<span id="result_box24">.</span></p> <p dir="ltr" align="justify"><strong><span style="font-size: small;"> <img src="http://journals.uran.ua/public/site/images/grechko/1OA1.png" alt="" /> <img src="http://i.creativecommons.org/l/by-sa/4.0/88x31.png" alt="" /></span></strong></p> http://ric.zntu.edu.ua/article/view/236116 ONLINE PROBABILISTIC FUZZY CLUSTERING METHOD BASED ON EVOLUTIONARY OPTIMIZATION OF CAT SWARM 2021-06-30T14:50:10+00:00 Ye. V. Bodyanskiy rvv@zntu.edu.ua A. Yu. Shafronenko rvv@zntu.edu.ua I. N. Klymova rvv@zntu.edu.ua <p>Context. The clustering of big data is today a highly relevant area of artificial intelligence. This task arises in many applications related to data mining, deep learning, etc. To solve these problems, traditional approaches and methods require that the entire data sample be submitted in batch form.</p> <p>Objective. 
The aim of the work is to propose a method of fuzzy probabilistic data clustering using evolutionary cat swarm optimization that is devoid of the drawbacks of traditional data clustering approaches.</p> <p>Method. A procedure of fuzzy probabilistic data clustering using evolutionary algorithms is proposed for faster determination of sample extrema, cluster centroids and adaptive functions; it does not spend machine resources on storing intermediate calculations and requires no additional time to solve the data clustering problem, regardless of the dimension of the data and the way they are presented for processing.</p> <p>Results. The proposed data clustering algorithm based on evolutionary optimization is simple in numerical implementation, is devoid of the drawbacks inherent in traditional fuzzy clustering methods and can process large volumes of input information online in real time.</p> <p>Conclusions. The results of the experiment allow us to recommend the developed method for solving problems of automatic clustering and classification of big data, since it finds the extrema of the sample as quickly as possible regardless of the way the data are submitted for processing. The proposed method of online probabilistic fuzzy data clustering based on evolutionary cat swarm optimization is intended for use in hybrid computational intelligence systems, neuro-fuzzy systems, in the training of artificial neural networks, and in clustering and classification problems.</p> 2021-06-29T00:00:00+00:00 Copyright (c) 2021 Є. В. Бодянський , А. Ю. Шафроненко , І. М. Клімова http://ric.zntu.edu.ua/article/view/236439 EXPERIMENTAL ANALYSIS OF MULTINATIONAL GENETIC ALGORITHM AND ITS MODIFICATIONS 2021-07-03T19:02:48+00:00 N. M. Gulayeva rvv@zntu.edu.ua S. A. Yaremko rvv@zntu.edu.ua <p>Context. Niching genetic algorithms are one of the most popular approaches to solving multimodal optimization problems. 
When classifying niching genetic algorithms, it is possible to single out algorithms that explicitly analyze the topography of the fitness function landscape; the multinational genetic algorithm is one of the earliest examples of such algorithms.</p> <p>Objective. Development and analysis of the multinational genetic algorithm and its modifications to find all maxima of a multimodal function.</p> <p>Method. An experimental analysis of the algorithms is carried out. Numerous runs of the algorithms on well-known test problems are conducted and performance criteria are computed, namely the percentage of convergence and the real (global, local) and fake peak ratios; note that the peak ratios are computed only in case of algorithm convergence.</p> <p>Results. A software implementation of the multinational genetic algorithm has been developed and experimental tuning of its parameters has been carried out. Two modifications of the hill-valley function used for determining the relative position of individuals have been proposed. An experimental analysis of the multinational genetic algorithm with the classic hill-valley function and with its modifications has been carried out.</p> <p>Conclusions. The scientific novelty of the study is that modifications of the hill-valley function are proposed that produce fewer wrong identifications of basins of attraction than the classic hill-valley function. Using these modifications yields performance improvements of the multinational genetic algorithm for a number of test functions; for other test functions, improvement of the quality criteria is accompanied by a decrease in the convergence percentage. In general, the convergence percentage and the quality criterion values demonstrated by the algorithm studied are insufficient for practical use in comparison with other known algorithms. 
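The classic hill-valley test at the core of these modifications is compact enough to sketch. Below is a minimal illustration for a maximization problem; the sample count and the toy fitness function are illustrative choices, not taken from the paper:

```python
import numpy as np

def hill_valley(x, y, f, samples=5):
    """Hill-valley test: True if a sampled interior point on the segment
    between x and y falls below both endpoint fitness values (maximization),
    i.e. a valley appears to separate the two individuals."""
    threshold = min(f(x), f(y))
    for j in range(1, samples + 1):
        t = j / (samples + 1)
        if f(x + t * (y - x)) < threshold:
            return True          # different basins of attraction
    return False                 # apparently the same basin

# Bimodal fitness with maxima at v = -1 and v = +1 and a valley at v = 0.
f = lambda v: float(-(v[0] ** 2 - 1.0) ** 2)
a, b = np.array([-1.0]), np.array([1.0])
print(hill_valley(a, b, f))                  # True: separate peaks
print(hill_valley(a, np.array([-0.9]), f))   # False: same peak
```

A modification in the spirit of the paper would change how the interior points are sampled or how the threshold is applied; the classic version above is the baseline the authors compare against.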
At the same time, using the modified hill-valley functions as a post-processing step for other niching algorithms seems to be a promising way to improve the performance of those algorithms.</p> 2021-07-03T00:00:00+00:00 Copyright (c) 2021 Н. М. Гулаєва , С. А. Яремко http://ric.zntu.edu.ua/article/view/236442 THE AUTOMATIC SYNTHESIS OF PETRI NETS BASED ON THE FUNCTIONING OF ARTIFICIAL NEURAL NETWORK 2021-07-03T19:41:53+00:00 A. A. Gurskiy rvv@zntu.edu.ua A. V. Denisenko rvv@zntu.edu.ua S. M. Dubna rvv@zntu.edu.ua <p>Context. An important task related to the development of methods for the automatic synthesis of Petri nets when tuning coordinating automatic control systems was solved during this research. The importance of developing these methods is due to the evolution of intelligent systems. These systems provide the automation of labor-intensive processes; in this particular case, the tuning of a certain type of complex control systems.</p> <p>Objective. The purpose of the work is to minimize the time and to automate the process of tuning multilevel coordinating automatic control systems.</p> <p>Method. The principle of the automatic synthesis of Petri nets and the implementation of certain algorithms for tuning complex control systems based on the functioning of an artificial neural network are proposed. A mathematical description of the method for changing the coefficients in the neural connections of the network during the synthesis of Petri nets is presented.</p> <p>Results. The experiments were conducted in the Matlab/Simulink 2012a environment and concerned the joint functioning of an artificial neural network and Petri nets. The functioning of the Petri nets was presented in the Matlab/Simulink environment using Stateflow diagrams.</p> <p>As a result of the experiments we obtained the temporal characteristics of the functioning of the artificial neural network providing the composition of Petri nets. 
The fundamental suitability of using an artificial neural network to provide the automatic composition of Petri nets was determined on the basis of an analysis of these temporal characteristics.</p> <p>Conclusion. The problem of developing a system for the joint functioning of a neural network and Petri nets for the formation of algorithms and sequential calculations was solved in this work. Thus, the method of the automatic synthesis of Petri nets and the method of developing certain algorithms based on the functioning of a neural network were further developed.</p> 2021-07-03T00:00:00+00:00 Copyright (c) 2021 A. A. Gurskiy , A. V. Denisenko , S. M. Dubna http://ric.zntu.edu.ua/article/view/236443 SYNTHESIS AND USAGE OF NEURAL NETWORK MODELS WITH PROBABILISTIC STRUCTURE CODING 2021-07-03T20:21:29+00:00 S. D. Leoshchenko rvv@zntu.edu.ua A. O. Oliinyk rvv@zntu.edu.ua S. A. Subbotin rvv@zntu.edu.ua Ye. O. Gofman rvv@zntu.edu.ua M. B. Ilyashenko rvv@zntu.edu.ua <p>Context. The problem of encoding the information of models based on artificial neural networks for the further transmission and use of such models is considered. The object of research is the process of encoding artificial neural networks using probabilistic data structures.</p> <p>The objective of this work is to develop a method for encoding neural networks that reduces the resource intensity of neuroevolutionary model synthesis.</p> <p>Method. A method for encoding neural networks based on probabilistic data structures is proposed. At the beginning, the method uses the basic principles of the direct encoding approach and, based on sequencing, encodes the matrix of interneuronal connections in the form of biopolymers. Then, probabilistic data structures are used to represent the original matrix more compactly. For this purpose hash functions are used: the initial matrix goes through a hashing process, which significantly reduces the requirements for memory resources. 
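The abstract does not name the specific probabilistic structure, so the sketch below illustrates the general idea with a Bloom filter that stores only the present edges of a small interneuronal connection matrix; all names and sizes here are illustrative assumptions, not the paper's implementation:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: an m-bit array set by k salted SHA-256 hashes."""
    def __init__(self, m=256, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _positions(self, item):
        for salt in range(self.k):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        # No false negatives; rare false positives are the compression cost.
        return all(self.bits[pos] for pos in self._positions(item))

# Direct encoding lists the edges (i, j) of the interneuronal connection
# matrix; hashing them into the filter replaces the dense matrix.
matrix = [[0, 1, 0],
          [0, 0, 1],
          [1, 0, 0]]
bf = BloomFilter()
for i, row in enumerate(matrix):
    for j, weight in enumerate(row):
        if weight:
            bf.add((i, j))

print(all(edge in bf for edge in [(0, 1), (1, 2), (2, 0)]))  # True
```

Queries never miss a stored edge, while a small probability of false positives is the price of the compact bit array; this mirrors the trade-off the abstract describes, where memory shrinks but accuracy must not drop sharply.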
The method allows memory costs to be reduced when transmitting artificial neural networks, which significantly expands the practical use of such models while preventing a sharp decrease in the accuracy of their operation.</p> <p>Results. The developed method is implemented and investigated in solving the problem of classifying the state of South German creditors. The use of the developed method allowed the rate of neuromodel synthesis to be increased by 15–17.6%, depending on the computing resources used. The method also reduced the share of information transfers by 8%, which likewise indicates faster and more efficient use of resources.</p> <p>Conclusions. The conducted experiments confirmed the efficiency of the proposed mathematical software and allow us to recommend it for use in practice when encoding models based on artificial neural networks for the further solution of problems of diagnostics, forecasting, evaluation and pattern recognition. Prospects for further research may consist in pre-processing the data for stricter control of the encoding process in order to minimize the loss of quality of models based on neural networks.</p> 2021-07-04T00:00:00+00:00 Copyright (c) 2021 С. Д. Леощенко , А. О. Олійник , С. О. Субботін , Є. О. Гофман , М. Б. Ільяшенко http://ric.zntu.edu.ua/article/view/236740 TREE-BASED SEMANTIC ANALYSIS METHOD FOR NATURAL LANGUAGE PHRASE TO FORMAL QUERY CONVERSION 2021-07-07T06:53:58+00:00 A. A. Litvin rvv@zntu.edu.ua V. Yu. Velychko rvv@zntu.edu.ua V. V. Kaverynskyi rvv@zntu.edu.ua <p>Context. This work is devoted to the problem of constructing natural language interfaces for ontological graph databases. The focus here is on methods for converting natural language phrases into formal queries in the SPARQL and CYPHER query languages.</p> <p>Objective. 
The goals of the work are to create a semantic analysis method that determines the semantic type of an input natural language phrase and obtains meaningful entities from it for the initialization of query template variables, to construct flexible query templates for these types, and to develop a program implementation of the proposed technique.</p> <p>Method. A tree-based method was developed for the semantic determination of a user's phrase type and for obtaining a set of terms from it to put into certain places of the most suitable formal query template. The proposed technique solves the tasks of phrase type determination (which is the criterion for formal query template selection) and of obtaining the meaningful terms that initialize the variables of the chosen template. In the current work only interrogative and incentive user phrases are considered, i.e., ones that clearly ask the system to answer or to do something. It is assumed that the considered dialog or reference system uses a graph ontological database, which directly affects the formal query patterns: the resulting queries are intended to be in the SPARQL or Cypher query languages. The semantic analysis examples considered in this work are aimed primarily at inflective languages, especially Ukrainian and Russian, but the basic principles should be suitable for most other languages.</p> <p>Results. The developed method of converting a natural language phrase into a formal query in SPARQL and CYPHER has been implemented in software for the Ukrainian and Norwegian languages using narrow subject ontologies and tested against formal performance criteria.</p> <p>Conclusions. The proposed method allows the dialog system to select the most suitable query template quickly and in a minimum number of steps and to extract informative entities from a natural language phrase, given the huge phrase variability in inflective languages. 
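The template mechanism described in the Method can be sketched as follows; the phrase-type keys, template strings and ontology predicates below are hypothetical illustrations, not the paper's actual templates:

```python
# Each detected phrase type selects one formal query template; the terms
# extracted from the phrase fill the template's slots.
templates = {
    "definition": 'SELECT ?def WHERE {{ ?c rdfs:label "{term}" . ?c rdfs:comment ?def }}',
    "property": 'SELECT ?v WHERE {{ ?c rdfs:label "{term}" . ?c {prop} ?v }}',
}

def build_query(phrase_type, **slots):
    """Fill the SPARQL template chosen by the detected phrase type."""
    return templates[phrase_type].format(**slots)

q = build_query("definition", term="Petri net")
print(q)
```

In the paper's pipeline the phrase type comes from the tree-based semantic analysis and the slot values from the extracted meaningful terms; here they are supplied by hand.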
The experiments carried out have shown the high precision and reliability of the constructed system and its potential for practical usage and further development.</p> 2021-07-07T00:00:00+00:00 Copyright (c) 2021 А. А. Литвин, В. Ю. Величко, В. В. Каверінскій http://ric.zntu.edu.ua/article/view/236753 STOCHASTIC PSEUDOSPIN NEURAL NETWORK WITH TRIDIAGONAL SYNAPTIC CONNECTIONS 2021-07-07T07:55:12+00:00 R. М. Peleshchak rvv@zntu.edu.ua V. V. Lytvyn rvv@zntu.edu.ua О. І. Cherniak rvv@zntu.edu.ua І. R. Peleshchak rvv@zntu.edu.ua М. V. Doroshenko rvv@zntu.edu.ua <p>Context. To reduce the computational time in problems of diagnosing and recognizing distorted images with a fully connected stochastic pseudospin neural network, it becomes necessary to thin out the synaptic connections between neurons; this is achieved using a method of diagonalizing the matrix of synaptic connections without losing the interaction between all neurons in the network.</p> <p>Objective. To create an architecture of a stochastic pseudospin neural network with diagonal synaptic connections, without losing the interaction between all the neurons in the layer, in order to reduce its learning time.</p> <p>Method. The paper uses the Householder method, a method of compressing input images based on the diagonalization of the matrix of synaptic connections, and the computer mathematics system MATLAB for converting a fully connected neural network into a tridiagonal form with hidden synaptic connections between all neurons.</p> <p>Results. We developed a model of a stochastic neural network architecture with sparse renormalized synaptic connections that takes the deleted synaptic connections into account. Based on the transformation of the synaptic connection matrix of a fully connected neural network into a Hessenberg matrix with tridiagonal synaptic connections, we proposed a renormalized local Hebb rule. 
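The Householder reduction that underlies this transformation is standard and can be sketched with NumPy; the function below is a generic textbook version (names and matrix sizes are illustrative), not the paper's MATLAB implementation:

```python
import numpy as np

def householder_tridiagonalize(A):
    """Reduce a symmetric matrix A to tridiagonal form via Householder
    reflections; each step is a similarity transform, so the spectrum
    (and hence the dynamics it drives) is preserved."""
    T = A.astype(float).copy()
    n = T.shape[0]
    for k in range(n - 2):
        x = T[k + 1:, k]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])  # avoid cancellation
        norm = np.linalg.norm(v)
        if norm < 1e-12:
            continue                    # column already reduced
        v /= norm
        H = np.eye(n)
        H[k + 1:, k + 1:] -= 2.0 * np.outer(v, v)     # reflector block
        T = H @ T @ H                   # similarity transform
    return T

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 5))
W = (W + W.T) / 2                       # symmetric "fully connected" weights
T = householder_tridiagonalize(W)

idx = np.arange(5)
mask = np.abs(idx[:, None] - idx[None, :]) >= 2       # outside the band
print(bool(np.all(np.abs(T[mask]) < 1e-8)))           # True: tridiagonal
print(bool(np.allclose(np.linalg.eigvalsh(T),
                       np.linalg.eigvalsh(W))))       # True: same spectrum
```

Because the reduction is a similarity transform, all entries beyond the tridiagonal band vanish while the eigenvalues of the weight matrix are kept intact, which is the sense in which the interaction between all neurons is not lost.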
Using the computer mathematics system “Wolfram Mathematica 11.3”, we calculated, as a function of the number of neurons N, the relative tuning time of synaptic connections (per iteration) in a stochastic pseudospin neural network with a tridiagonal connection matrix with respect to the tuning time of synaptic connections (per iteration) in a fully connected synaptic neural network.</p> <p>Conclusions. We found that with an increase in the number of neurons, the tuning time of synaptic connections (per iteration) in a stochastic pseudospin neural network with a tridiagonal connection matrix, relative to the tuning time of synaptic connections (per iteration) in a fully connected synaptic neural network, decreases according to a hyperbolic law. Depending on the direction of the pseudospin neurons, we proposed a classification of renormalized neural networks into those with a ferromagnetic structure, an antiferromagnetic structure, and a dipole glass structure.</p> 2021-07-07T00:00:00+00:00 Copyright (c) 2021 Р. М. Пелещак, В. В. Литвин, О. І. Черняк, І. Р. Пелещак, М. В. Дорошенко http://ric.zntu.edu.ua/article/view/237057 GUIDED HYBRID GENETIC ALGORITHM FOR SOLVING GLOBAL OPTIMIZATION PROBLEMS 2021-07-10T08:00:18+00:00 S. E. Avramenko rvv@zntu.edu.ua T. A. Zheldak rvv@zntu.edu.ua L. S. Koriashkina rvv@zntu.edu.ua <p>Context. One of the leading problems in the world of artificial intelligence is the optimization of complex systems, which is often represented as a nonlinear function that needs to be minimized. Such functions can be multimodal, non-differentiable, and even given as a black box. Building effective methods for solving global optimization problems attracts great interest among scientists.</p> <p>Objective. Development of a new hybrid genetic algorithm for solving global optimization problems that is faster than existing analogues.</p> <p>Methods. 
One of the crucial challenges for hybrid methods in solving nonlinear global optimization problems is the rational use of local search, since its application incurs quite expensive computational costs. This paper proposes a new hybrid genetic algorithm, GBOHGA, that reproduces guided local search and combines two successful modifications of genetic algorithms. The first, BOHGA, establishes a qualitative balance between local and global search. The second, HGDN, prevents re-exploration of previously explored areas of the search space. In addition, a modified bump function and an adaptive scheme for determining one of its parameters, the radius of the “deflation” of the objective function in the vicinity of an already found local minimum, are presented to accelerate the algorithm.</p> <p>Results. GBOHGA's performance was compared to that of other known stochastic search heuristics on a set of 33 test functions in 5- and 25-dimensional spaces. The results of the computational experiments indicate the competitiveness of GBOHGA, especially in problems with multimodal functions and a large number of variables.</p> <p>Conclusions. The new GBOHGA hybrid algorithm, developed by integrating guided local search ideas with the BOHGA and HGDN algorithms, saves significant computing resources and speeds up the solution of the global optimization problem. It should be used for global optimization problems that arise in engineering design and in solving organizational and management problems, especially when the mathematical model of the problem is complex and multidimensional.</p> 2021-07-10T00:00:00+00:00 Copyright (c) 2021 S. E. Avramenko, T. A. Zheldak, L. S. Koriashkina http://ric.zntu.edu.ua/article/view/237058 CRITERIA FOR ESTIMATING THE SENSORIMOTOR REACTION TIME BY THE SMALL UAV OPERATOR 2021-07-10T08:47:07+00:00 T. A. Vakaliuk rvv@zntu.edu.ua I. A. Pilkevych rvv@zntu.edu.ua A. M. Tokar rvv@zntu.edu.ua R. I. 
Loboda rvv@zntu.edu.ua <p>Context. The rapid development of science and technology predetermines a significant expansion of the fields of application of UAVs for different purposes. The key to the effective use of UAVs is the high-quality training of operators, an important element of which is the PPS of candidates, in particular the assessment of their sensorimotor reactions. This can be achieved by selecting and justifying appropriate criteria.</p> <p>Objective. The goal of the work is the justification of criteria for estimating the time of sensorimotor reactions of a small UAV operator by analyzing the density distribution of statistical data.</p> <p>Method. A method has been developed to determine criteria for evaluating the sensorimotor reaction time of a small UAV operator based on the accumulation of statistical material and its mathematical processing from the results of a field experiment. The method makes it possible to estimate the numerical characteristics of the distribution of the average reaction time in three modes: skill development, under overload conditions, and under overtraining conditions, and to obtain a generalized estimate. By analyzing continuous random variables that take values within a certain range, it was possible to establish standards against which the obtained values of the sensorimotor reaction time of the small UAV operator are compared, so that a decision can be made on the candidate's suitability for training.</p> <p>Results. We obtained statistical series for the assessment modes: skill development, under obstacle conditions, and under skill restructuring conditions. For a visual representation of the series, the corresponding histograms of the distribution of the average reaction time were constructed. In order to eliminate the representativeness error, statistical series alignment was carried out by selecting a theoretical distribution curve for each series, which displays only the essential features of the statistical material. 
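The alignment step, replacing the raw histogram with a smooth theoretical curve (the paper fits a fourth-degree polynomial), can be sketched on synthetic reaction-time data; the sample parameters below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
reaction_times = rng.normal(loc=0.45, scale=0.07, size=500)  # seconds, synthetic

# Empirical density of the reaction-time series.
density, edges = np.histogram(reaction_times, bins=12, density=True)
centers = (edges[:-1] + edges[1:]) / 2

# Align the series: a degree-4 polynomial keeps only the essential shape
# of the statistical material, smoothing the sampling noise.
coeffs = np.polyfit(centers, density, deg=4)
smooth = np.polyval(coeffs, centers)

rmse = float(np.sqrt(np.mean((smooth - density) ** 2)))
print(len(coeffs), rmse < float(np.max(density)))
```

`np.polyfit` performs the least-squares fit; in the paper the fitted curve then defines the interval in which a reaction time is considered normal with reliability 0.95.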
For this purpose, we approximated the histogram of the distribution by a polynomial of the fourth degree. The interval of the theoretical distribution density within which the sensorimotor reaction time of an arbitrary person is considered normal has been established with a given reliability probability of 0.95 for this event. To verify the effectiveness of the proposed method, algorithms for estimating the sensorimotor reaction time of a small UAV operator in the three modes have been synthesized and the corresponding software implementing the proposed algorithms has been developed.</p> <p>Conclusions. The criteria for evaluating the sensorimotor reaction time of a UAV operator to a visual stimulus using specialized software were substantiated. This allows the preliminary PPS of candidates for training to take into account the requirements for the motor skills of the small UAV operator and the specificity of his movements. The conducted experiments confirmed the validity of the decisions made. Prospects for further research may include an expansion of the testing modes with justification of the appropriate evaluation criteria.</p> 2021-07-10T00:00:00+00:00 Copyright (c) 2021 T. A. Vakaliuk, I. A. Pilkevych, A. M. Tokar, R. I. Loboda http://ric.zntu.edu.ua/article/view/235476 PERIODICITY SEARCH ALGORITHMS IN DIGITAL SEQUENCES WITH BLOCK CODING BY THEIR CORRELATION PROPERTIES 2021-06-24T16:36:59+00:00 О. М. Romanov rvv@zntu.edu.ua V. Yu. Kotiubin rvv@zntu.edu.ua <p>Context. To improve the noise immunity of communication and data transmission systems, error-correcting coding is widely used. Because of their effectiveness, block coding methods are the most common. Under conditions of partial a priori uncertainty about the type and parameters of the encoding, a preliminary analysis is carried out before decoding the digital sequence in order to determine them. 
In block coding, a common approach to determining the period of a digital sequence, a period caused by the addition of a sync sequence and from which the type and parameters of the coding can be determined, is to use its correlation properties.</p> <p>Objective. The objective of the research is to present periodicity search algorithms for digital sequences with block error-correcting coding under conditions of partial a priori uncertainty about the type and parameters of the error-correcting code.</p> <p>Method. The article presents two periodicity search algorithms for digital sequences with block coding and describes the principle of their operation. One algorithm is based on the calculation of the autocorrelation function, the other on the calculation of the cross-correlation function. It is shown that the digital sequence should be twice as long as the maximum possible period. The operation of both algorithms is illustrated by examples.</p> <p>Results. Based on the proposed algorithms, special software has been developed. The results of determining the period of digital sequences with block error-correcting coding at different values of the period confirmed the efficiency of the proposed algorithms. Both proposed algorithms give approximately the same result. Experimental dependences of the calculation time of the auto- and cross-correlation functions on the length of the digital sequence and the maximum possible period are established. The period search algorithm that uses the cross-correlation function of the sequence's components is more efficient because it requires fewer calculations.</p> <p>Conclusions. For the first time, two periodicity search algorithms for digital sequences with block error-correcting coding, based on the determination of their correlation functions, are obtained. 
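The autocorrelation-based algorithm of the two can be sketched as follows; the sync pattern, block length and scoring below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def find_period(bits, max_period):
    """Score each candidate lag by correlating the (mean-removed) sequence
    with a copy of itself shifted by that lag; a repeating sync pattern
    makes multiples of the true period stand out."""
    x = np.asarray(bits, dtype=float)
    assert len(x) >= 2 * max_period   # sequence must cover the period twice
    x = x - x.mean()                  # remove DC so sync matches stand out
    scores = [float(np.dot(x[:-lag], x[lag:]))
              for lag in range(1, max_period + 1)]
    return int(np.argmax(scores)) + 1

# Blocks of 12 bits: a fixed 7-bit sync pattern plus 5 random payload bits.
rng = np.random.default_rng(2)
sync = [1, 1, 1, 0, 0, 1, 0]
seq = []
for _ in range(20):
    seq += sync + rng.integers(0, 2, size=5).tolist()

period = find_period(seq, max_period=30)
print(period)   # the true period 12, or a multiple of it
```

The assertion enforces the requirement quoted above that the sequence be twice as long as the maximum possible period; the correlation score peaks at the true period and at its multiples.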
The application of the developed algorithms in practice makes it possible, under partial a priori uncertainty about the type and parameters of the error-correcting code, to determine the period of digital sequences in real time even at large values of the period and, based on it, to identify the type and parameters of block error-correcting codes.</p> 2021-06-24T00:00:00+00:00 Copyright (c) 2021 О. М. Романов , В. Ю. Котюбін http://ric.zntu.edu.ua/article/view/235638 OPTIMIZATION OF PREVENTIVE THRESHOLD FOR CONDITION-BASED MAINTENANCE OF RADIO ELECTRONIC EQUIPMENT 2021-06-25T16:27:39+00:00 O. V. Solomentsev rvv@zntu.edu.ua M. Yu. Zaliskyi rvv@zntu.edu.ua O. A. Shcherbyna rvv@zntu.edu.ua M. M. Asanov rvv@zntu.edu.ua <p>Context. Operation costs throughout the life cycle of radio electronic equipment are very significant; their value far exceeds the initial cost of the equipment. Therefore, minimizing operation costs is a pressing scientific and technical problem. One way to solve this problem is the introduction of statistical data processing technologies into the operation systems of radio electronic equipment.</p> <p>Objective. The goal of the paper is to improve the efficiency of condition-based maintenance with monitoring of the determining parameters, which is widely used in civil aviation.</p> <p>Method. The solution of this problem is based on finding the functional dependence of the efficiency indicator, in the form of specific operation costs, on the basic parameters of the radio electronic equipment and its operation system. To determine this dependence, a probability-event model is used, as well as methods of probability theory and mathematical statistics, in particular methods of statistical classification of sample sets and functional transformations of random variables. To determine the optimal level of the preventive threshold by the criterion of minimizing operation costs, the Monte-Carlo method of statistical simulation is used.</p> <p>Results. 
The maintenance strategy with determining parameters monitoring, based on additional statistical data processing, and the technology of optimal preventive threshold calculation are improved.</p> <p>Conclusions. The obtained results can be used during the development and modernization of operation systems of radio electronic equipment in terms of the application of statistical data processing procedures. A comparative analysis of the two maintenance strategies showed that the use of additional statistical data processing might reduce specific operation costs. The proposed technology for determining the optimal preventive threshold can be extended for use during the operation of complex technical systems, in particular for those whose technical condition is associated with the values of the determining parameters.</p> 2021-07-15T00:00:00+00:00 Copyright (c) 2021 О. В. Соломенцев , М. Ю. Заліський , О. А. Щербина , М. М. Асанов http://ric.zntu.edu.ua/article/view/235655 PROCEDURE FOR EVALUATION OF THE SUPPORTING FREQUENCY SIGNAL OF THE SATELLITE COMMUNICATION SYSTEM IN CONTINUOUS MODE 2021-06-26T08:11:48+00:00 A. L. Turovsky rvv@zntu.edu.ua O. V. Drobik rvv@zntu.edu.ua <p>Context. One of the features of satellite communication systems is the advantageous use of phase modulation for signals intended for the transmission of useful information when the signal is received in continuous mode. The use of this type of modulation requires solving the problem of estimating the carrier frequency of the signal. The estimation itself reduces to the problem of estimating the frequency of the maximum in the spectrum of a fragment of a sinusoidal signal against the background of additive Gaussian noise. The article considers the process of estimating the carrier frequency of a signal by a satellite communication system in continuous mode according to the rule of maximum likelihood.</p> <p>Objective.
Development of a procedure for estimating the carrier frequency of a signal received by a satellite communication system in continuous mode according to the maximum likelihood rule.</p> <p>Method. The procedure proposed in the work and the algorithm developed on its basis allow estimating the carrier frequency according to the rule of maximum likelihood, taking into account the uncertainty of all signal parameters, for the satellite communication system in continuous mode.</p> <p>Results. For the purpose of practical introduction of the specified algorithm into operating schemes of satellite communication, schemes of its hardware realization are offered in the work. To illustrate the ratio of the limits of the minimum limiting variance of the carrier frequency estimate, the paper presents dependencies that allow comparing the minimum limiting variance defined by the lower Cramer-Rao bound and the minimum limiting variance determined taking into account all signal parameters.</p> <p>Conclusions. Analysis of these dependences showed that, in real conditions, the minimum variance of the maximum-likelihood estimate of the carrier frequency of a signal received by the satellite communication system in continuous mode under uncertainty of all signal parameters may differ significantly from the minimum variance obtained by applying the lower Cramer-Rao bound. Prospective research and the development of algorithms and techniques for estimating the carrier frequency at the minimum limiting variance under uncertainty of all parameters of the received signal should aim to bring this variance as close as possible to the lower Cramer-Rao bound for the carrier frequency estimate obtained under certainty of the other signal parameters.</p> 2021-06-26T00:00:00+00:00 Copyright (c) 2021 О. Л. Туровський , О. В.
Дробик http://ric.zntu.edu.ua/article/view/235657 ON THE KOLMOGOROV-WIENER FILTER FOR RANDOM PROCESSES WITH A POWER-LAW STRUCTURE FUNCTION BASED ON THE WALSH FUNCTIONS 2021-06-26T08:42:56+00:00 V. N. Gorev rvv@zntu.edu.ua A. Yu. Gusev rvv@zntu.edu.ua V. I. Korniienko rvv@zntu.edu.ua A. A. Safarov rvv@zntu.edu.ua <p>Context. We investigate the Kolmogorov-Wiener filter weight function for the prediction of a continuous stationary random process with a power-law structure function.</p> <p>Objective. The aim of the work is to develop an algorithm for obtaining an approximate solution for the weight function without recourse to numerical calculation of integrals.</p> <p>Method. The weight function under consideration obeys the Wiener-Hopf integral equation. A search for an exact analytical solution of the corresponding integral equation meets difficulties, so an approximate solution for the weight function is sought in the framework of the Galerkin method on the basis of a truncated Walsh function series expansion.</p> <p>Results. An algorithm for obtaining the weight function is developed. All the integrals are calculated analytically rather than numerically. Moreover, it is shown that the accuracy of the Walsh function approximations is significantly better than the accuracy of the polynomial approximations obtained in the authors’ previous papers. The Walsh function solutions are applicable in a wider range of parameters than the polynomial ones.</p> <p>Conclusions. An algorithm for obtaining the Kolmogorov-Wiener filter weight function for the prediction of a stationary continuous random process with a power-law structure function is developed. A truncated Walsh function expansion is the basis of the developed algorithm. In contrast to the polynomial solutions investigated in the previous papers, the developed algorithm has the following advantages. First of all, all the integrals are calculated analytically, and no numerical calculation of the integrals is needed.
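As an illustration of why a Walsh basis keeps the integrals analytic: Walsh functions take only the values ±1, so inner products in a Galerkin scheme reduce to signed sums. A minimal sketch in Hadamard (natural) ordering, which is one common ordering and not necessarily the one used by the authors:

```python
import numpy as np

def walsh_matrix(n):
    """First 2**n Walsh functions in Hadamard ordering, sampled on
    2**n points; each row takes only the values +/-1 and the rows
    are mutually orthogonal."""
    H = np.array([[1.0]])
    for _ in range(n):  # Sylvester construction: H -> [[H, H], [H, -H]]
        H = np.block([[H, H], [H, -H]])
    return H

W = walsh_matrix(3)            # 8 Walsh functions on 8 sample points
G = W @ W.T / W.shape[1]       # Gram matrix under the discrete inner product
print(np.allclose(G, np.eye(8)))  # -> True
```

Orthogonality of the ±1-valued basis is what lets every Galerkin integral be evaluated as an exact signed sum rather than by numerical quadrature.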
Secondly, the problem of the product of very small and very large numbers is absent in the framework of the developed algorithm. In our opinion, this is the reason why the accuracy of the Walsh function solutions is better than that of the polynomial solutions for many approximations and why the Walsh function solutions are applicable in a wider range of parameters than the polynomial ones. The results of the paper may be applied, for example, to practical traffic prediction in telecommunication systems with data packet transfer.</p> 2021-06-26T00:00:00+00:00 Copyright (c) 2021 В. М. Горєв , О. Ю. Гусєв , В. І. Корнієнко , О. О. Сафаров http://ric.zntu.edu.ua/article/view/235660 COMBINED NEWTON’S THIRD-ORDER CONVERGENCE METHOD FOR MINIMIZE ONE VARIABLE FUNCTIONS 2021-06-26T09:16:36+00:00 V. A. Kodnyanko rvv@zntu.edu.ua O. A. Grigorieva rvv@zntu.edu.ua L. V. Strok rvv@zntu.edu.ua <p>Context. The article deals with the topical problem of numerical optimization of slowly computed unimodal functions of one variable. An analysis of existing minimization methods of the first and second orders of convergence showed that these methods can quickly solve such problems for functions whose values can be obtained without difficulty. For slowly computed functions, these methods give slow algorithms; therefore, the problem of developing fast methods for minimizing such functions is urgent.</p> <p>Objective. Development of a combined Newtonian method of third-order convergence to minimize predominantly slowly computed unimodal functions, as well as the development of a database, including smooth, monotonic and partially constant functions, to test the method and compare its effectiveness with other known methods.</p> <p>Method. A technique and an algorithm for solving the problem of fast minimization of a unimodal function of one variable by a combined numerical Newtonian method of the third order of convergence are presented.
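For reference, the classical Newton iteration on the stationarity condition f′(x) = 0, on which such combined methods build (a minimal sketch with analytic derivatives; this is not the authors' combined third-order algorithm):

```python
def newton_minimize(df, d2f, x0, tol=1e-10, max_iter=50):
    """Classical Newton iteration for a 1D minimum: solve f'(x) = 0.

    df, d2f: first and second derivatives of the objective.
    Converges quadratically near a minimum with d2f > 0."""
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# minimize f(x) = (x - 3)**2 + 1: minimum at x = 3,
# reached in a single Newton step for a quadratic
print(newton_minimize(lambda x: 2 * (x - 3), lambda x: 2.0, x0=0.0))  # -> 3.0
```

Each Newton step costs one derivative and one second-derivative evaluation, which is exactly why evaluation count matters for slowly computed functions.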
The method is capable of recognizing strictly unimodal, monotonic and constant functions, as well as functions with partial or complete sections of a flat minimum.</p> <p>Results. The results of a comparison of the proposed method with other methods, including the fast Brent method, are presented. 6954 problems were solved using the combined Newtonian method; the method turned out to be faster than the other methods in 95.5% of the problems, while Brent’s method worked faster in only 4.5% of the problems. In general, the analysis of the calculation results showed that the combined method worked 1.64 times faster than the Brent method.</p> <p>Conclusions. A combined Newtonian method of third-order convergence is proposed for minimizing predominantly slowly computed unimodal functions of one variable. A database of problems, including smooth, monotone and partially constant functions, was developed to test the method and compare its effectiveness with other known methods. It is shown that the proposed method, in comparison with other methods, including the fast Brent method, has higher performance.</p> 2021-06-26T00:00:00+00:00 Copyright (c) 2021 В. А. Коднянко , О. А. Григорьева , Л. В. Строк http://ric.zntu.edu.ua/article/view/235664 DELAY MODELS BASED ON SYSTEMS WITH USUAL AND SHIFTED HYPEREXPONENTIAL AND HYPERERLANGIAN INPUT DISTRIBUTIONS 2021-06-26T10:06:13+00:00 V. N. Tarasov rvv@zntu.edu.ua N. F. Bakhareva rvv@zntu.edu.ua <p>Context. In queueing theory, the study of systems with arbitrary laws of the input flow distribution and service time is relevant because it is impossible to obtain solutions for the waiting time in final form for the general case. Therefore, the study of such systems for particular cases of input distributions is important.</p> <p>Objective.
Obtaining a closed-form solution for the average delay in the queue, in stationary mode, for queueing systems with ordinary hyperexponential and hypererlangian distributions and with those shifted to the right of the zero point.</p> <p>Method. To solve this problem, we used the classical method of spectral decomposition of the solution of the Lindley integral equation. This method allows obtaining a solution for the average delay for the two systems under consideration in closed form. The method of spectral decomposition of the solution of the Lindley integral equation plays an important role in the theory of G/G/1 systems. For the practical application of the results obtained, the well-known method of moments of probability theory is used.</p> <p>Results. For the first time, a spectral decomposition of the solution of the Lindley integral equation for systems with ordinary and with shifted hyperexponential and hypererlangian distributions is obtained, which is used to derive a formula for the average delay in a queue in closed form.</p> <p>Conclusions. It is proved that the spectral expansions of the solution of the Lindley integral equation for the systems under consideration coincide; therefore, the formulas for the mean delay will also coincide. It is shown that in systems with a delay, the average delay is less than in conventional systems. The obtained expression for the waiting time expands and complements the well-known incomplete formula of queueing theory for the average delay for systems with arbitrary laws of the input flow distribution and service time. This approach allows us to calculate the average delay for these systems in mathematical packages for a wide range of traffic parameters. In addition to the average waiting time, such an approach makes it possible to determine moments of higher orders of the waiting time as well.
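The Lindley equation underlying the spectral method can also be checked by direct simulation: successive waiting times satisfy the recursion W(k+1) = max(0, W(k) + S(k) − A(k+1)), where S is a service time and A an inter-arrival time. A minimal sketch with exponential inputs (M/M/1), for which the mean delay has the known closed form λ/(μ(μ − λ)); the function name and parameter values are illustrative:

```python
import random

def lindley_mean_wait(arr_rate, srv_rate, n=200_000, seed=1):
    """Monte-Carlo estimate of the mean queueing delay in a G/G/1
    system via the Lindley recursion W = max(0, W + S - A).

    Exponential inputs are used here (M/M/1), so the estimate can be
    checked against the known value lambda / (mu * (mu - lambda))."""
    rng = random.Random(seed)
    w, total = 0.0, 0.0
    for _ in range(n):
        s = rng.expovariate(srv_rate)   # service time S
        a = rng.expovariate(arr_rate)   # inter-arrival time A
        w = max(0.0, w + s - a)
        total += w
    return total / n

# lambda = 0.5, mu = 1.0 -> theoretical mean wait = 0.5 / (1.0 * 0.5) = 1.0
print(lindley_mean_wait(0.5, 1.0))  # close to 1.0
```

The same recursion works unchanged for hyperexponential or shifted inputs: only the two sampling lines change, which makes it a convenient cross-check for the closed-form results.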
Given the fact that the packet delay variation (jitter) in telecommunications is defined as the spread of the waiting time from its average value, the jitter can be determined through the variance of the waiting time.</p> 2021-06-26T00:00:00+00:00 Copyright (c) 2021 В. Н. Тарасов , Н. Ф. Бахарєва http://ric.zntu.edu.ua/article/view/236800 AUTOMATED PANSHARPENING INFORMATION TECHNOLOGY OF SATELLITE IMAGES 2021-07-07T10:51:01+00:00 V. V. Hnatushenko rvv@zntu.edu.ua V. Yu. Kashtan rvv@zntu.edu.ua <p>Context. Nowadays, information technologies are widely used in digital image processing. The task of joint processing of satellite images obtained by different space systems with different spatial resolutions is important. Although pansharpening methods that improve the quality of the resulting image are already known, new scientific problems arise, associated with increasing requirements for high-resolution image processing and with the development of automated technology for processing satellite data for further thematic analysis. Most spatial resolution enhancement techniques produce artifacts. Our work explores the major remote sensing data fusion techniques at the pixel level and reviews the concept, principles, limitations and advantages of each technique, with a program implementation of the research.</p> <p>Objective. The goal of the work is to analyze the effectiveness of traditional pansharpening methods such as the Brovey, wavelet-transform, GIHS and HCT methods, as well as the combined pansharpening method, for high-spatial-resolution satellite images.</p> <p>Method. In this paper we propose an information technology for pansharpening high-spatial-resolution images with automated choice of the best method based on the analysis of quantitative and qualitative evaluations.
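The RMSE and ERGAS metrics used in such quantitative comparisons can be computed as follows (a minimal sketch; the array layout and function names are assumptions, and `ratio` is the panchromatic-to-multispectral resolution ratio, e.g. 0.25 for 4× sharpening):

```python
import numpy as np

def rmse(ref, fused):
    """Root-mean-square error between a reference and a fused band."""
    return float(np.sqrt(np.mean((ref - fused) ** 2)))

def ergas(ref, fused, ratio):
    """ERGAS quality index for multiband images of shape (bands, H, W).

    ERGAS = 100 * ratio * sqrt(mean over bands of (RMSE_k / mean_k)^2);
    lower values indicate better spectral fidelity."""
    terms = [(rmse(r, f) / r.mean()) ** 2 for r, f in zip(ref, fused)]
    return 100.0 * ratio * np.sqrt(np.mean(terms))

# identical images give ERGAS = 0 (perfect fusion)
ref = np.random.rand(4, 32, 32) + 1.0
print(ergas(ref, ref.copy(), ratio=0.25))  # -> 0.0
```

Any spectral distortion introduced by a fusion method raises the per-band RMSE terms and hence the ERGAS value, which is what makes the index usable for automated method selection.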
The method involves scaling the multispectral image to the size of the panchromatic image; using histogram equalization to adjust the primary images by aligning the integral areas of the sections with different brightness; converting the primary images after spectral correction using traditional pansharpening methods; and analyzing the effectiveness of the obtained results through a detailed comparative visual and quantitative evaluation. The technology allows determining the best pansharpening method by analyzing quantitative metrics: the NDVI index, the RMSE and the ERGAS. The NDVI index for the Brovey and HPF methods indicates color distortion in comparison with the reference data. This is due to the fact that the Brovey and HPF methods are based on the fusion of three-channel images and do not include the information contained in the near-infrared range. The RMSE and the ERGAS show the superiority of the combined HSVHCT-Wavelet method over conventional and state-of-the-art resolution enhancement techniques for high-resolution satellite images. This is achieved, in particular, by preliminary processing of the primary images, data processing in localized spectral bases, optimized performance information, and use of the information contained in the infrared image.</p> <p>Results. Software implementing the proposed method has been developed. Experiments to study the properties of the proposed algorithm have been conducted. Experimental evaluation was performed on eight primary high-spatial-resolution satellite images obtained by the WorldView-2 satellite. The experimental results show that a synthesized high-spatial-resolution image with high information content is achieved with the complex use of fusion methods, which makes it possible to increase the spatial resolution of the original multichannel image without color distortions.</p> <p>Conclusions.
The experiments confirmed the effectiveness of the proposed automated information technology for pansharpening high-resolution satellite images, with the development of a graphical interface to obtain a new synthesized image. In addition, the proposed technology will make it possible to effectively carry out further recognition and real-time infrastructure monitoring.</p> 2021-07-07T00:00:00+00:00 Copyright (c) 2021 V. V. Hnatushenko, V. Yu. Kashtan http://ric.zntu.edu.ua/article/view/236810 AN APPROACH WEB SERVICE SELECTION BY QUALITY CRITERIA BASED ON SENSITIVITY ANALYSIS OF MCDM METHODS 2021-07-07T11:27:21+00:00 O. V. Polska rvv@zntu.edu.ua R. K. Kudermetov rvv@zntu.edu.ua V. V. Shkarupylo rvv@zntu.edu.ua <p>Context. The problem of QoS-based Web service selection from a list of Web services with equal or similar functionality was considered. This task is an essential part of the processes of finding, discovering, matching and using Web services on the Internet due to the numerous offerings of Web services with equal or similar functionality. The reasonable selection of a suitable Web service takes into account many of the user’s quality requirements, such as response time, throughput, reliability, cost, etc. Such a task is usually formulated as an MCDM problem, in which the parameters are the Web service quality factors and the degrees of importance of these factors. The object of this research is the process of selecting Web services using MCDM methods, taking into account the user’s preferences and requirements for the Web service quality characteristics. The subject of the research is the LSP method, which, in addition to the degrees of importance of the criteria used in all MCDM methods, simulates the user’s reasoning about quality, taking into account, in particular, such characteristics of the criteria as mandatoriness, sufficiency, desirability, simultaneity and substitutability.</p> <p>Objective.
The objective of the work is to develop an approach for comparing the result of using the LSP method with the results of using other MCDM methods.</p> <p>Method. A method for calculating the weights of input criteria, which are not always explicitly specified in the LSP method, was proposed. For this, the conjunctive coefficients of impact are used, which are calculated as a result of the sensitivity analysis of the Web service generalized quality criterion to changes in the partial quality criteria. This method underlies the proposed approach to comparing the efficiency of the LSP method with other MCDM methods, which consists of using the obtained weights as the weights of the input criteria for the MCDM methods.</p> <p>Results. The developed method and approach were verified experimentally. The Web service ranking produced by the LSP method was compared with those produced by the SAW, AHP, TOPSIS and VIKOR methods. This comparison confirmed the efficiency of the proposed method and approach.</p> <p>Conclusions. From the obtained results of comparing the LSP method and the MCDM methods considered in this study, it follows that the proposed method and approach provide input conditions for these methods equivalent to those of the LSP method, which is a necessary condition for the correct comparison of MCDM methods. The use of the proposed approach made it possible to study the sensitivities of the considered MCDM methods. In practical applications, this approach can be used to select a suitable MCDM method. The proposed method can be useful for creating professional evaluation systems in which it is necessary to assess the importance (weights) of tens or hundreds of quality criteria.</p> 2021-07-07T00:00:00+00:00 Copyright (c) 2021 O. V. Polska, R. K. Kudermetov, V. V. Shkarupylo http://ric.zntu.edu.ua/article/view/236823 FORMALIZATION CODING METHODS OF INFORMATION UNDER TOROIDAL COORDINATE SYSTEMS 2021-07-07T13:15:48+00:00 V. V. Riznyk rvv@zntu.edu.ua <p>Context.
Coding and processing of large information content makes urgent the problem of formalizing the interdependence between information parameters of vector data coding systems on a single mathematical platform.</p> <p>Objective. The formalization of relationships between information parameters of vector data coding systems in the optimized basis of toroidal coordinate systems with the achievement of a favorable compromise between contradictory goals.</p> <p>Method. The method involves establishing a harmonious mutual penetration of symmetry and asymmetry, a remarkable property of real space, which allows using decoded information to form the mathematical principle of the optimal placement of structural elements in spatially or temporally distributed systems, using novel designs based on the concept of Ideal Ring Bundles (IRBs). IRBs are cyclic sequences of positive integers which divide a symmetric sphere about the center of symmetry. The sums of connected sub-sequences of an IRB enumerate the set of partitions of a sphere exactly R times. Two- and multidimensional IRBs, namely the “Glory to Ukraine Stars”, are sets of t-dimensional vectors, each of which, as well as all their modular sums, enumerates the set of node points of the grid of the toroidal coordinate system with the corresponding sizes and dimensionality exactly R times. Moreover, we require that each indexed vector data item “category-attribute” correspond one-to-one to the point with the eponymous coordinate set of the coordinate system. Besides, a combination of binary code with vector weight digits of the database is allowed, and the set of all values of indexed vector data sets is the same as a set of numerical values.
The underlying mathematical principle relates to the optimal placement of structural elements in spatially and/or temporally distributed systems, using novel designs based on t-dimensional “star” combinatorial configurations, including the appropriate algebraic theory of cyclic groups, number theory, modular arithmetic, and IRB geometric transformations.</p> <p>Results. The relationship of vector code information parameters (capacity, code size, dimensionality, number of encoding vectors) with geometric parameters of the coordinate system (dimension, dimensionality, and grid sizes) and with vector data characteristics (number of attributes and number of categories, entity-attribute-value size list) has been formalized. A formula system is derived as a functional dependency between the above parameters, which allows achieving a favorable compromise between the contradictory goals (for example, the performance and reliability of the coding method). A theorem with corresponding corollaries about the maximum vector code size of conversion methods for t-dimensional indexed data sets “category-attribute” is proved. Theoretically, the existence of an infinitely large number of minimized bases, which give rise to numerous varieties of multidimensional “star” coordinate systems that can find practical application in modern and future multidimensional information technologies, is substantiated.</p> <p>Conclusions. The formalization provides, essentially, a new conceptual model of information systems for optimal coding and processing of big vector data, using novel designs based on the remarkable properties and structural perfection of the “Glory to Ukraine Stars” combinatorial configurations. Moreover, the optimization has been embedded in the underlying combinatorial models.
The favorable qualities of the combinatorial structures can be applied to vector data coded design of multidimensional signals, signal compression and reconstruction for communications and radar, and other areas in which the GUS model can be useful. There are many opportunities to apply them to numerous branches of science and advanced systems engineering, including information technologies based on toroidal coordinate systems. Perfection, harmony and beauty exist not only in abstract models but also in the real world.</p> 2021-07-07T00:00:00+00:00 Copyright (c) 2021 V. V. Riznyk http://ric.zntu.edu.ua/article/view/236969 INFORMATION TECHNOLOGY OF CLIMATE MONITORING 2021-07-08T20:12:39+00:00 M. V. Talakh rvv@zntu.edu.ua S. V. Holub rvv@zntu.edu.ua I. B. Turkin rvv@zntu.edu.ua <p>Context. Information monitoring technology is used to reduce information uncertainty about the regularity of air temperature changes during the management of work in hard-to-reach places [1]. The task was to create a method for modelling one of the climatic indicators, air temperature, in the given territories within the structure of the information monitoring technology. Climate models are the main tools for studying the response of the ecological system to external and internal influences. The problem of reducing information uncertainty in making managerial decisions is eliminated by predicting the consequences of planned control actions using climate modelling methods in information monitoring technology. The information technology of climate monitoring combines satellite observation methods and observations at climate stations, taking into account the spatial and temporal characteristics, to form an array of input data.
This was done using the methods for synthesizing models of monitoring information systems [1] and the methods of forming multilevel model structures of monitoring information systems [1] for converting observation results into knowledge, together with the rules for interpreting the obtained results to calculate the temperature value in the uncontrolled territories.</p> <p>Objective. The objective of the work is to solve the problem of identifying the functional dependence of the air temperature in a given uncontrolled territory on the results of observations of climate characteristics by meteorological stations, within the structure of the information technology of climate monitoring.</p> <p>Method. The methodology for creating information technologies for monitoring has been improved to expand its capabilities to perform new tasks of forecasting temperature using data from thermal imaging satellites and weather stations by using a new method of climate modelling. A systematic approach to the process of climate modelling and the group method of data handling were used for solving problems of functional dependence identification, and methods of mathematical statistics were used for evaluating models.</p> <p>Results. The deviation of the temperature values calculated with the synthesized monitoring information system models from the actual values obtained from observations by artificial Earth satellites does not, on average, exceed 2.5°С. Temperature traces obtained from satellite images and weather stations at similar points show similar dynamics.</p> <p>Conclusions. The problem of identifying the functional dependence of air temperature in uncontrolled territories on the results of observations at meteorological stations is solved. The obtained results were used in the process of creating a new method of climate modelling within the information technology of climate monitoring.
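Identification of such a functional dependence can be illustrated, in a much simplified form, by ordinary least squares on synthetic station data (the group method of data handling used in the paper builds more elaborate multilevel models; all data, weights and sizes below are hypothetical):

```python
import numpy as np

# Hypothetical readings: rows = observation times,
# columns = three weather stations; target = an unmonitored site.
rng = np.random.default_rng(0)
stations = rng.uniform(-5.0, 25.0, size=(100, 3))
true_w = np.array([0.5, 0.3, 0.2])               # assumed mixing weights
target = stations @ true_w + rng.normal(0.0, 0.1, 100)  # noisy "truth"

# Least-squares identification of the functional dependence
w, *_ = np.linalg.lstsq(stations, target, rcond=None)
pred = stations @ w

# mean error well below the 2.5 C reported for the real models
print(np.abs(pred - target).mean() < 0.2)  # -> True
```

The same fit-then-predict structure carries over when the linear model is replaced by the multilevel GMDH model structures described in the abstract.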
Experimental confirmation of the hypothesis about the possibility of using satellite images in regional models of temperature prediction has been obtained. The effectiveness of applying the methodology for creating monitoring information technologies to the tasks of reducing uncertainty in management decisions during work in uncontrolled territories has been proven.</p> 2021-07-08T00:00:00+00:00 Copyright (c) 2021 M. V. Talakh, S. V. Holub, I. B. Turkin http://ric.zntu.edu.ua/article/view/237050 IMPLEMENTATION OF A FINITE ELEMENT CLASS LIBRARY USING GENERALIZED PROGRAMMING 2021-07-10T06:35:06+00:00 S. V. Choporov rvv@zntu.edu.ua M. S. Ihnatchenko rvv@zntu.edu.ua O. V. Kudin rvv@zntu.edu.ua A. G. Kryvokhata rvv@zntu.edu.ua S. I. Homeniuk rvv@zntu.edu.ua <p>Context. For computer modeling of complex objects and phenomena of various nature, the numerical finite element method is often used in practice. Its software implementation (especially for the study of new classes of problems) is a rather laborious process. The high cost of software development makes the development of new approaches to improving the efficiency of programming and maintenance (including the addition of new functions) urgent.</p> <p>Objective. The aim of the work is to create a new effective architecture of programs for finite element analysis of problems in mathematical physics, which makes it easy to expand their functionality to solve new classes of problems.</p> <p>Method.
A method for developing programs for finite element analysis using generalized programming is proposed, which makes it possible to significantly simplify the architecture of the software and make it more convenient for maintenance and modification by separating algorithms and data structures.</p> <p>A new architecture of classes that implement the finite element calculation is proposed, which makes it possible to easily expand the functionality of programs by adding new types of finite elements, methods for solving systems of linear algebraic equations, parallel computations, etc.</p> <p>Results. The proposed approach was implemented in software as a class library in C++. A number of computational experiments have been carried out, which have confirmed its efficiency in solving practical problems.</p> <p>Conclusions. The developed approach can be used both to create general-purpose finite element analysis systems with an open architecture and to implement specialized software packages focused on solving specific classes of problems (fracture mechanics, elastomers, contact interaction, etc.).</p> 2021-07-10T00:00:00+00:00 Copyright (c) 2021 С. В. Чопоров, М. С. Игнатченко, А. В. Кудин, А. Г. Кривохата, С. И. Гоменюк
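The separation of algorithms from data structures described above can be illustrated, in Python rather than the authors' C++ templates, by a generic assembly routine that accepts any element type exposing a common interface (all names are illustrative; a 1D linear element for −u″ = f keeps the sketch short):

```python
import numpy as np

class LinearElement1D:
    """1D linear (two-node) element for the model problem -u'' = f."""
    nodes_per_element = 2

    def local_stiffness(self, h):
        # exact local stiffness matrix for a linear element of length h
        return np.array([[1.0, -1.0], [-1.0, 1.0]]) / h

def assemble(element, n_elems, length):
    """Generic global-stiffness assembly: the algorithm depends only on
    the element interface (nodes_per_element, local_stiffness), so new
    element types can be added without touching this routine."""
    n_nodes = n_elems + 1
    K = np.zeros((n_nodes, n_nodes))
    h = length / n_elems
    for e in range(n_elems):
        ke = element.local_stiffness(h)
        dofs = list(range(e, e + element.nodes_per_element))
        for i, gi in enumerate(dofs):
            for j, gj in enumerate(dofs):
                K[gi, gj] += ke[i, j]
    return K

K = assemble(LinearElement1D(), n_elems=4, length=1.0)
print(K.shape)  # -> (5, 5)
```

In C++ the same separation is achieved at compile time with templates; in either language, swapping in a quadratic element or a 2D element changes only the element class, not the assembly algorithm.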