Radio Electronics, Computer Science, Control 2022-04-26T10:49:47+03:00 Sergey A. Subbotin Open Journal Systems <p dir="ltr" align="justify"><strong>Description:</strong> The scientific journal «Radio Electronics, Computer Science, Control» is an international academic peer-reviewed publication. It publishes scientific articles (works that extensively cover a specific topic, idea, or question and contain elements of their analysis) and reviews (works containing an analysis and reasoned assessment of an original or published book), all of which receive objective reviews from leading specialists who evaluate content on its substance, without regard to race, sex, religion, ethnic origin, nationality, or political philosophy of the author(s).<span id="result_box2"><br /></span><strong>Founder and Publisher:</strong> <a href="" aria-invalid="true">National University "Zaporizhzhia Polytechnic"</a>. <strong>Country:</strong> Ukraine.<span id="result_box1"><br /></span><strong>ISSN</strong> 1607-3274 (print), ISSN 2313-688X (on-line).<span id="result_box3"><br /></span><strong>Certificate of State Registration:</strong> КВ №24220-14060ПР dated 19.11.2019. The journal is registered by the Ministry of Justice of Ukraine.<br /><span id="result_box4">By the Order of the Ministry of Education and Science of Ukraine from 17.03.2020 № 409 “On approval of the decision of the Certifying Collegium of the Ministry on the activities of the specialized scientific councils dated 06 March 2020”, the <strong>journal is included in the list of scientific specialized periodicals of Ukraine in category “A” (highest level), where the results of dissertations for Doctor of Science and Doctor of Philosophy may be published</strong>. 
<span id="result_box26">By the Order of the Ministry of Education and Science of Ukraine from 21.12.2015 № 1328 "On approval of the decision of the Certifying Collegium of the Ministry on the activities of the specialized scientific councils dated 15 December 2015", the journal is included in the <strong>List of scientific specialized periodicals of Ukraine</strong>, where the results of dissertations for Doctor of Science and Doctor of Philosophy in Mathematics and Technical Sciences may be published.</span><br />The <strong>journal is included in the Polish List of scientific journals</strong> and peer-reviewed materials from international conferences with an assigned number of points (Annex to the announcement of the Minister of Science and Higher Education of Poland from July 31, 2019: Lp. 16981). </span><span id="result_box27"><br /></span><strong> Year of Foundation:</strong> 1999. <strong>Frequency:</strong> 4 times per year (before 2015 - 2 times per year).<span id="result_box6"><br /></span><strong> Volume:</strong> up to 20 conventional printed sheets. <strong>Format:</strong> 60x84/8. 
<span id="result_box7"><br /></span><strong> Languages:</strong> English, Russian, Ukrainian.<span id="result_box8"><br /></span><strong> Fields of Science:</strong> Physics and Mathematics, Technical Sciences.<span id="result_box9"><br /></span><strong> Aim: </strong>to serve the academic community, principally by publishing topical articles resulting from original theoretical or applied research in various areas of academic endeavor.<strong><br /></strong><strong> Focus:</strong> fresh formulations of problems and new methods of investigation, helping professionals, graduates, engineers, academics and researchers disseminate information on state-of-the-art techniques within the journal scope.<br /><strong>Scope:</strong> telecommunications and radio electronics, software engineering (including algorithm and programming theory), computer science (mathematical modeling and computer simulation, optimization and operations research, control in technical systems, machine-machine and man-machine interfacing, artificial intelligence, including data mining, pattern recognition, artificial neural and neuro-fuzzy networks, fuzzy logic, swarm intelligence and multiagent systems, hybrid systems), computer engineering (computer hardware, computer networks), information systems and technologies (data structures and bases, knowledge-based and expert systems, data and signal processing methods).<strong><br /></strong> <strong> Journal sections:</strong><span id="result_box10"><br /></span>- radio electronics and telecommunications;<span id="result_box12"><br /></span>- mathematical and computer modelling;<span id="result_box13"><br /></span>- neuroinformatics and intelligent systems;<span id="result_box14"><br /></span>- progressive information technologies;<span id="result_box15"><br /></span>- control in technical systems. 
<span id="result_box17"><br /></span><strong>Abstracting and Indexing:</strong> <strong>The journal is indexed in the <a href="" target="_blank" rel="noopener">Web of Science</a></strong> (WoS) scientometric database. The articles published in the journal are abstracted in leading international and national <strong>abstracting journals</strong> and <strong>scientometric databases</strong>, and are also placed in <strong>digital archives</strong> and <strong>libraries</strong> with free on-line access. <span id="result_box21"><br /></span><strong>Editorial board: </strong><em>Editor in chief</em> - S. A. Subbotin, D. Sc., Professor; <em>Deputy Editor in Chief</em> - D. M. Piza, D. Sc., Professor. The <em>members</em> of the Editorial Board are listed <a href="" aria-invalid="true">here</a>.<span id="result_box19"><br /></span><strong>Publishing and processing fee:</strong> Articles are published and peer-reviewed <strong>free of charge</strong>.<span id="result_box20"><br /></span><strong> Authors Copyright: </strong>The journal allows the authors to hold the copyright and to retain publishing rights without restrictions. The journal allows readers to read, download, copy, distribute, print, search, or link to the full texts of its articles. The journal allows reuse and remixing of its content in accordance with the Creative Commons license CC BY-SA.<span id="result_box21"><br /></span><strong> Authors Responsibility:</strong> By submitting an article to the journal, authors assume full responsibility for compliance with the copyright of other individuals and organizations, the accuracy of citations, data and illustrations, and the nondisclosure of state and industrial secrets, and consent to transfer to the publisher, free of charge, the right to publish, to translate into foreign languages, to store and to distribute the article materials in any form. 
Authors who hold scientific degrees, by submitting an article to the journal, thereby consent to act free of charge as reviewers of other authors' articles at the request of the journal editor within the established deadlines. The articles submitted to the journal must be original, new and interesting to the reader audience of the journal, have reasonable motivation and aim, be previously unpublished and not be under consideration for publication in other journals. Articles should not contain trivial or obvious results, draw unwarranted conclusions, or repeat conclusions of already published studies.<span id="result_box22"><br /></span><strong> Readership: </strong>scientists, university faculties, postgraduate and graduate students, practical specialists.<span id="result_box23"><br /></span><strong> Publicity and Accessing Method:</strong> <strong>Open Access</strong> on-line for full-text publications<span id="result_box24">.</span></p> FAST FUZZY CREDIBILISTIC CLUSTERING BASED ON DENSITY PEAKS DISTRIBUTION OF DATA 2022-04-05T21:48:32+03:00 Ye. V. Bodyanskiy I. P. Pliss A. Yu. Shafronenko <p>Context. The problem of clustering (unsupervised classification) often occurs when processing data arrays of various natures and is an interesting and integral part of artificial intelligence. To solve this problem, there are many known methods and algorithms based on the principles of the distribution density of observations in the analyzed data. However, these methods are rather complicated in software implementation and are not without drawbacks, namely: the problem of determining significant clusters in datasets of different densities, multi-epoch self-learning, getting stuck in local extrema of goal functions, etc. 
It should be noted that the methods based on the analysis of the peaks of the data distribution density are crisp in nature; therefore, to expand the capabilities of these methods, it is advisable to introduce their fuzzy modification.</p> <p>Objective. The aim of the work is to introduce fast fuzzy data clustering using the density peaks distribution of the dataset, which can find the prototypes (centroids) of overlapping clusters regardless of the amount of incoming data.</p> <p>Method. A hybrid method for fuzzy clustering of data arrays is proposed, based on the simultaneous use of a credibilistic approach to fuzzy clustering and an algorithm for finding the peaks of the distribution density of the initial data. A feature of the proposed method is its computational simplicity and high speed, due to the fact that the entire array is processed only once, which eliminates the need for the multi-epoch self-learning implemented in traditional fuzzy clustering algorithms.</p> <p>Results. The proposed method of fast fuzzy credibilistic clustering using the density peaks distribution is characterized by computational simplicity and high speed, since the entire array is processed only once, eliminating the multi-epoch self-learning implemented in traditional fuzzy clustering algorithms. The results of the computational experiment confirm the effectiveness of the proposed approach in clustering problems when the clusters overlap.</p> <p>Conclusions. The experimental results allow us to recommend the developed method for solving problems of automatic clustering and data classification, finding cluster centroids as quickly as possible. 
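</p>
<p>As an illustration of the density-peaks idea referred to above, the following sketch picks centroids as points with high local density and high distance to any denser point, then assigns fuzzy-c-means-style memberships in a single pass. This is a didactic reconstruction under assumed parameter names (dc, m), not the authors' exact credibilistic algorithm:</p>

```python
from math import dist

def density_peaks_fuzzy(X, n_clusters=2, dc=1.0, m=2.0):
    """One-pass fuzzy clustering sketch: centroids are density peaks
    (high local density rho and large distance delta to any denser
    point); memberships follow the fuzzy-c-means formula with
    fuzzifier m. dc is the density cutoff radius (an assumption)."""
    n = len(X)
    d = [[dist(X[i], X[j]) for j in range(n)] for i in range(n)]
    # local density: number of points closer than the cutoff dc
    rho = [sum(1 for j in range(n) if j != i and d[i][j] < dc) for i in range(n)]
    # delta: distance to the nearest denser point (max distance for the densest)
    delta = []
    for i in range(n):
        denser = [d[i][j] for j in range(n) if rho[j] > rho[i]]
        delta.append(min(denser) if denser else max(d[i]))
    # peaks combine high density with high isolation from denser points
    centers = sorted(range(n), key=lambda i: rho[i] * delta[i])[-n_clusters:]
    p = 2.0 / (m - 1.0)
    U = []
    for i in range(n):
        inv = [1.0 / (d[i][c] + 1e-12) ** p for c in centers]
        s = sum(inv)
        U.append([v / s for v in inv])   # memberships sum to 1 per point
    return centers, U
```

<p>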
The proposed method of fast fuzzy credibilistic clustering using the density peaks distribution of a dataset is intended for use in computational intelligence systems, neuro-fuzzy systems, in training artificial neural networks and in clustering problems.</p> 2022-04-07T00:00:00+03:00 Copyright (c) 2022 Ye. V. Bodyanskiy, I. P. Pliss, A. Yu. Shafronenko FASTER OPTIMIZATION-BASED META-LEARNING ADAPTATION PHASE 2022-04-07T18:13:50+03:00 K. S. Khabarlak <p>Context. Neural networks require a large amount of annotated data to learn. Meta-learning algorithms propose a way to decrease the number of training samples to only a few. One of the most prominent optimization-based meta-learning algorithms is MAML. However, its adaptation to new tasks is quite slow. The object of study is the process of meta-learning and the adaptation phase as defined by the MAML algorithm.<br />Objective. The goal of this work is to create an approach that makes it possible to: 1) increase the execution speed of the MAML adaptation phase; 2) improve MAML accuracy in certain cases. The testing results are shown on the publicly available few-shot learning dataset CIFAR-FS.<br />Method. In this work an improvement to the MAML meta-learning algorithm is proposed. The meta-learning procedure is defined in terms of tasks. In the case of image classification, each task is to learn to classify images of new classes given only a few training examples. MAML defines 2 stages for the learning procedure: 1) adaptation to the new task; 2) meta-weights update. The whole training procedure requires Hessian computation, which makes the method computationally expensive. After being trained, the network will typically be used for adaptation to new tasks and the subsequent prediction on them. Thus, improving adaptation time is an important problem, which we focus on in this work. We introduce a lambda pattern, by which we restrict which weights are updated in the network during the adaptation phase. 
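</p>
<p>The restricted-update idea can be sketched as follows; the parameter grouping, the gradient interface and all names here are illustrative assumptions, not the paper's implementation:</p>

```python
def adapt_with_pattern(weights, grads_fn, pattern, lr=0.5, steps=1):
    """MAML-style inner-loop adaptation restricted by a lambda pattern:
    only parameter groups whose pattern flag is 1 are updated, so the
    gradients of frozen groups never have to be computed."""
    w = {k: list(v) for k, v in weights.items()}      # copy of meta-weights
    for _ in range(steps):
        active = [k for k in w if pattern.get(k, 0)]  # frozen groups are skipped
        grads = grads_fn(w, active)                   # gradients only where needed
        for k in active:
            w[k] = [wi - lr * gi for wi, gi in zip(w[k], grads[k])]
    return w
```

<p>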
This approach allows us to skip certain gradient computations. The pattern is selected given an allowed quality degradation threshold parameter. Among the patterns that fit the criterion, the fastest pattern is then selected. However, as discussed later, quality improvement is also possible in certain cases by careful pattern selection.<br />Results. The MAML algorithm with lambda pattern adaptation has been implemented, trained and tested on the open CIFAR-FS dataset. This makes our results easily reproducible.<br />Conclusions. The experiments conducted have shown that via lambda adaptation pattern selection, it is possible to significantly improve the MAML method in the following areas: adaptation time has been decreased by a factor of 3 with minimal accuracy loss. Interestingly, accuracy for one-step adaptation has been substantially improved by using lambda patterns as well. Prospects for further research are to investigate a more robust automatic pattern selection scheme.</p> 2022-04-07T00:00:00+03:00 Copyright (c) 2022 К. С. Хабарлак DEVELOPMENT OF MATHEMATICAL MODELS OF GROUP DECISION SYNTHESIS FOR STRUCTURING THE ROUGH DATA AND EXPERT KNOWLEDGE 2022-04-08T11:03:26+03:00 I. I. Kovalenko A. V. Shved Ye. O. Davydenko <p>Context. The problem of aggregating decision table attribute values formed from group expert assessments was solved as a classification problem in the context of rough set notation. The object of study is the process of synthesizing mathematical models for structuring and managing expert knowledge that is formed and processed under incompleteness and inaccuracy (roughness).</p> <p>Objective. The goal of the work is to develop a set of mathematical models for structuring group expert assessments for solving the classification problem under inaccuracy.</p> <p>Method. A set of mathematical models for structuring group expert assessments based on the methods of the theory of evidence has been proposed. 
These techniques allow correct manipulation of initial data formed under vagueness, imperfection, and inconsistency (conflict). The problems of synthesis of group decisions have been examined for two cases: taking into account only the existing decision table data, and involving additional information, i.e. subjective expert assessments, in the process of aggregating the experts’ judgments.</p> <p>Results. The outcomes gained can become a foundation for a methodology for classifying groups of expert assessments using rough set theory. This makes it possible to form structures modeling the relationship between the classification attributes of the evaluated objects, the values of which are formed from the individual expert assessments, and their belonging to certain classes.</p> <p>Conclusions. Models and methods of the synthesis of group decisions in the context of structuring decision table data have been further developed. Three main tasks of structuring decision table data gained through the expert survey have been considered: the aggregation of expert judgments of the values of the decision attributes in the context of modeling the relationship between a universe element and a certain class; the aggregation of expert judgments of the values of the condition attributes; the synthesis of a group decision regarding the belonging of an object to a certain class, provided that the values of the condition attributes are also formed through the expert survey. 
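</p>
<p>The evidence-theory aggregation referred to above can be illustrated with Dempster's rule for combining two experts' basic probability assignments; a minimal sketch, with the frame of discernment and mass values chosen purely for illustration:</p>

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (keys are frozensets
    of hypotheses, values are masses) by Dempster's rule: multiply
    masses of intersecting focal elements and renormalize by the
    total non-conflicting mass."""
    combined, conflict = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            combined[C] = combined.get(C, 0.0) + a * b
        else:
            conflict += a * b            # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {C: v / (1.0 - conflict) for C, v in combined.items()}
```

<p>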
The proposed techniques for structuring group expert assessments are the theoretical foundation for the synthesis of information technologies for solving problems of statistical and intellectual (classification, clustering, ranking and aggregation) data analysis in order to prepare information and make reasonable and effective decisions under incompleteness, uncertainty, inconsistency, inaccuracy and their possible combination.</p> 2022-04-08T00:00:00+03:00 Copyright (c) 2022 I. I. Kovalenko, A. V. Shved, Ye. O. Davydenko DEVELOPING A FUZZY RISK ASSESSMENT MODEL FOR ERP SYSTEMS 2022-04-08T11:50:18+03:00 A. D. Kozhukhivskyi O. A. Kozhukhivska <p>Context. Because assessing information security risks is a complex process under complete uncertainty, and uncertainties are a major factor influencing valuation performance, it is advisable to use fuzzy methods and models that are adaptive to non-calculated data. The formation of vague assessments of risk factors is subjective, and risk assessment depends on the practical results obtained in the process of processing the risks of threats that have already arisen during the functioning of the organization and on the experience of information security professionals. Therefore, it is advisable to use models that can adequately assess fuzzy factors and adjust their impact on the risk assessment. The best performance for solving such problems is shown by neuro-fuzzy models, which combine methods of fuzzy logic and artificial neural networks and systems, i.e. the “human-like” reasoning style of fuzzy systems with the learning and simulation of mental phenomena of neural networks. To build a model for calculating the risk assessment of information security, it is proposed to use a fuzzy product model. 
Fuzzy product models (rule-based fuzzy models/systems) are a common type of fuzzy model used to describe, analyze and simulate complex, poorly formalized systems and processes.</p> <p>Objective. Development of the structure of a fuzzy model for assessing the information security risk and protection quality of ERP systems through the use of fuzzy neural models.</p> <p>Method. To build a model for calculating the risk assessment of information security, it is proposed to use a fuzzy product model. Fuzzy product models are a common kind of fuzzy model used to describe, analyze and model complex systems and processes that are poorly formalized.</p> <p>Results. For the identified factors influencing risk assessment, the use of linguistic variables to describe them, fuzzy variables to assess their qualities, and a system of qualitative assessments is proposed. The choice of parameters is substantiated, and the structure of the fuzzy product model of risk assessment and the fuzzy inference rule base are developed. The use of fuzzy models for solving problems of information security risk assessment is considered, as well as the concept and construction of ERP systems and an analysis of their security problems and vulnerabilities.</p> <p>Conclusions. A fuzzy model for risk assessment of the ERP system has been developed. A list of factors affecting information security risk has been selected. Methods are proposed for risk assessment of information resources and ERP systems in general, for assessing financial losses from the implementation of threats, and for determining the type of risk according to its assessment in order to form recommendations on risk processing and maintain the level of protection of the ERP system. The list of linguistic variables of the model is defined. The structure of the database of fuzzy product rules – a MISO structure – is chosen. The structure of the fuzzy model was built. 
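</p>
<p>A toy MISO fuzzy rule base of the kind described above can be sketched as follows; the membership functions, the rule base and the min/weighted-average inference choices are illustrative assumptions, not the authors' calibrated model:</p>

```python
def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def risk_score(likelihood, impact):
    """Two inputs (threat likelihood and impact, both in [0, 1]),
    one risk output: min t-norm firing strengths, crisp consequents
    aggregated by a weighted average (zero-order Sugeno style)."""
    low  = lambda x: tri(x, -0.5, 0.0, 0.5)
    med  = lambda x: tri(x, 0.0, 0.5, 1.0)
    high = lambda x: tri(x, 0.5, 1.0, 1.5)
    # rule base: (antecedent memberships, crisp risk consequent)
    rules = [((low, low), 0.1), ((low, high), 0.5), ((med, med), 0.5),
             ((high, low), 0.5), ((high, high), 0.9)]
    num = den = 0.0
    for (ml, mi), out in rules:
        w = min(ml(likelihood), mi(impact))   # rule firing strength
        num += w * out
        den += w
    return num / den if den else 0.0
```

<p>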
Models of the fuzzy variables have been defined.</p> 2022-04-08T00:00:00+03:00 Copyright (c) 2022 A. D. Kozhukhivskyi, O. A. Kozhukhivska NEUROMODELING OF OPERATIONAL PROCESSES 2022-04-11T13:05:26+03:00 S. A. Subbotin H. V. Pukhalska S. D. Leoshchenko A. O. Oliinyk Ye. O. Gofman <p>Context. The problem of synthesizing a neural network model of operational processes with the determination of the optimal topology, characterized by a high level of logical transparency and acceptable accuracy, is considered. The object of the study is the process of neural network modeling of operational processes using an indicator system to simplify the selection of the topology of neuromodels. </p> <p>Objective of the work is to synthesize a neural network model of operational processes with a high level of logical transparency and acceptable accuracy based on the use of an indicator system. </p> <p>Method. It is proposed to use a system of indicators to determine the topological features of the ANN, which is the basis for modeling operational processes. The assessment of the level of complexity of the task, obtained from information about the input data and the values of the criteria for assessing the specificity of the task, makes it possible to assign the task to one of the complexity types in order to determine the approach to the synthesis of a neuromodel. The complexity category OS allows, based on analytical data about the selection of input data, obtaining the exact number of neurons in the hidden layer for the synthesis of a neuromodel with a high level of logical transparency, which significantly expands their practical use and reduces the cost of subsequent operational processes. </p> <p>Results. Neuromodels of operational processes were obtained based on historical data. The use of the indicator system made it possible to significantly increase the level of logical transparency of the models, while maintaining high accuracy. 
Synthesized neuromodels reduce the resource intensity of operational processes by increasing the level of preliminary modeling. </p> <p>Conclusions. The conducted experiments confirmed the operability of the proposed mathematical software and allow us to recommend it for use in practice when modeling operational processes. Prospects for further research may consist in the use of more complex methods of feature selection to fix the group relationships of informative features for the construction of more complex models.</p> 2022-04-11T00:00:00+03:00 Copyright (c) 2022 S. A. Subbotin, H. V. Pukhalska, S. D. Leoshchenko, A. O. Oliinyk, Ye. O. Gofman DOMAIN ONTOLOGY DEVELOPMENT FOR CONDITION MONITORING SYSTEM OF INDUSTRIAL CONTROL EQUIPMENT AND DEVICES 2022-04-14T10:32:25+03:00 L. O. Vlasenko N. M. Lutska N. A. Zaiets A. V. Shyshak O. V. Savchuk <p>Context. Modern intelligent systems for failure identification of control equipment and devices in the food industry are based on a combination of approaches implemented with various methods and algorithms. A feature of such systems is that they operate on a large amount of heterogeneous data and knowledge that are difficult to combine. The use of ontologies of different levels in the system development process solves this problem.</p> <p>Objective. Domain ontology development for an equipment condition monitoring system as a basis for designing an intelligent decision support system with an ontology knowledge base.</p> <p>Methods. There are different ontology development approaches. They may differ in the number of levels and types of ontologies or be a combination of subject and problem domain ontologies, depending on the complexity of the problem and the chosen ontology development method. This paper represents two levels of the three-level ontology being developed for an intelligent condition monitoring system of control equipment and devices. 
The upper level is represented by the top-level ontology Basic Formal Ontology (BFO), which provides systematization of the meta-level, including the temporal part. International standards and technical reports such as IEC 62890, ISO 55000, ISA 95, ISA 106, IEC 62264 and ISO 10303-242:2020 are considered in the development process of the second ontology level – the Domain ontology.</p> <p>Results. The article provides a Domain ontology for an equipment condition monitoring system in the food industry. The developed Domain ontology systematizes and structures engineering knowledge and uses BFO, which provides a set of basic elements at the meta-level. They set the values of the following entities: type of production, methods of failure identification, causes, failures, events, equipment, etc. The developed Domain ontology has semantic cross-links. A fragment of the Domain ontology relationships for the “Control equipment” subclass of the “Equipment” class is also presented in the paper.</p> <p>Conclusions. The developed ontology can be used to analyze the knowledge base on the causes, locations and types of failures and their identification methods. The developed ontology is a basis for application ontology development.</p> 2022-04-14T00:00:00+03:00 Copyright (c) 2022 L. O. Vlasenko, N. M. Lutska, N. A. Zaiets, A. V. Shyshak, O. V. Savchuk DECISION-MAKING AT EVOLUTIONARY SEARCH DURING LIMITED NUMBER OF FUZZY EXPERIMENTS WITH MULTIPLE CRITERIA 2022-04-17T18:59:36+03:00 V. F. Irodov M. V. Shaptala K. V. Dudkin D. E. Shaptala D. A. Chirin <p>Context. The mechanism of decision-making during a limited number of fuzzy experiments with multiple criteria is considered. The object of investigation is the process of decision-making for design or control in complex systems with multiple criteria.</p> <p>Objective. It is necessary to determine the optimal (most preferred) parameters of systems with multiple criteria. 
There is no mathematical model of the system; only a limited number of fuzzy experiments is available.</p> <p>Method. Experimental study of a process with several criteria (functions) depending on its parameters; the use of expert fuzzy evaluation to build a matrix of preferences for individual implementations; building a function for choosing preferred solutions based on the preference matrix by constructing a mathematical model of preference recognition; and formulating and solving the problem of generalized mathematical programming as the final step in building the selection mechanism. The decision-making mechanism depends on the expert assessment procedure when comparing a limited set of results with each other, as well as on the statement of conditions when solving the problem of generalized mathematical programming. Comparison of a finite number of fuzzy experiments is convenient for expert evaluation. Presentation of the final choice as the result of solving the problem of generalized mathematical programming is convenient for using such a mechanism in automatic control systems without human intervention. The proposed scheme of decision-making during a limited number of fuzzy experiments has been applied to decision-making in project management for a pellet burner.</p> <p>Results. Experimental fuzzy decision-making results are presented in the presence of several criteria for a pellet burner of a tubular heater, which confirm the acceptability of the developed decision-making mechanism. A new scheme was proposed for constructing a selection mechanism for decision-making in systems with several criteria where there is a sample of fuzzy experimental results.</p> <p>Conclusions. The decision-making scheme includes solving the generalized mathematical programming problem as the final step in building the selection mechanism. 
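</p>
<p>The preference-matrix step can be illustrated with a simple selection rule; the net-dominance score below is an assumed stand-in for the authors' generalized mathematical programming formulation:</p>

```python
def most_preferred(P):
    """Select the most preferred alternative from a fuzzy preference
    matrix P, where P[i][j] in [0, 1] is the expert's degree of
    preference for alternative i over alternative j. The score of i
    is its total preference over others minus theirs over it."""
    n = len(P)
    scores = [sum(P[i][j] - P[j][i] for j in range(n) if j != i)
              for i in range(n)]
    best = max(range(n), key=scores.__getitem__)
    return best, scores
```

<p>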
For the generalized mathematical programming problem, evolutionary search with a choice function in the form of a preference or in the form of a lock may be applied.</p> 2022-04-19T00:00:00+03:00 Copyright (c) 2022 V. F. Irodov, M. V. Shaptala, K. V. Dudkin, D. E. Shaptala, D. A. Chirin MODELLING GAME TASK OF ASSIGNING STAFF TO PERFORM IT-PROJECTS BASED ON ONTOLOGIES 2022-04-12T10:35:36+03:00 P. Kravets V. Lytvyn V. Vysotska <p>Context. This article describes how to solve the game problem of assigning staff to work on projects based on an ontological approach. The essence of the problem is as follows. There is a need to create teams to carry out several projects. Each project is defined by a set of necessary ontological knowledge. To implement projects, managers invite qualified specialists (agents), whose abilities are also defined by sets of ontologies. The composition of the teams should be such that the combined ontologies of their agents cover the set of ontologies of the respective projects. Each agent with a certain probability can take part in the implementation of several projects. Simultaneous work of an agent on different projects is not allowed. It is necessary to determine the order of project implementation and the corresponding order of personnel appointment.</p> <p>Objective of the study is to develop a mathematical model of a stochastic game, recurrent Markov methods for its solution, algorithmic and software support, a computer experiment, an analysis of results and the development of recommendations for their practical application.</p> <p>Method. A stochastic game algorithm for coloring an undirected random graph was used to plan project execution. To do this, the number of vertices of the graph is taken equal to the number of projects. The vertices corresponding to projects for which the same agent is invited are connected by edges. Due to the failures and recoveries of agents, the connections between the vertices of the graph change dynamically. 
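</p>
<p>The vertex-player scheme can be sketched with a simple reinforcement recurrence over mixed strategies; the linear reward-inaction update below is a didactic stand-in for the paper's Markov recurrent method, with all parameter values assumed:</p>

```python
import random

def game_coloring(adj, n_colors, steps=2000, lr=0.1, seed=0):
    """Each vertex-player keeps a mixed strategy over colors, plays a
    color, takes the share of identically colored neighbors as its
    loss, and shifts probability toward the played color in proportion
    to (1 - loss). Returns the final color choices."""
    rnd = random.Random(seed)
    n = len(adj)
    p = [[1.0 / n_colors] * n_colors for _ in range(n)]
    colors = [0] * n
    for _ in range(steps):
        for v in range(n):
            colors[v] = rnd.choices(range(n_colors), weights=p[v])[0]
        for v in range(n):
            same = sum(colors[u] == colors[v] for u in adj[v])
            loss = same / len(adj[v]) if adj[v] else 0.0
            for k in range(n_colors):   # reward-inaction update; rows keep sum 1
                target = 1.0 if k == colors[v] else 0.0
                p[v][k] += lr * (1.0 - loss) * (target - p[v][k])
    return colors
```

<p>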
It is necessary to achieve a correct coloring of the random graph. Then projects with identically colored vertices of the graph can be executed in parallel, and projects with differently colored vertices – in series.</p> <p>Results. The article builds a mathematical model of a stochastic game and a self-learning Markov method for its solution. Each vertex of the graph is controlled by a player. The player’s pure strategies are the elements of the color palette. After selecting the color of its own vertex, each player calculates the current loss as the relative number of identical colors in the local set of neighboring players. The goal of the players is to minimize the functions of average losses. The Markov recurrent method provides an adaptive choice of colors for the vertices of a random graph based on dynamic vectors of mixed strategies, the values of which depend on the current losses of players. The result of the stochastic game is an asymptotically correctly colored random graph, where each edge of the initial deterministic graph will on average correspond to different colors of vertices.</p> <p>Conclusions. A computer experiment was performed, which confirmed the convergence of the stochastic game for the problem of coloring a random graph. This made it possible to determine the procedure for appointing staff to implement projects.</p> 2022-04-12T00:00:00+03:00 Copyright (c) 2022 П. О. Кравець, В. В. Литвин, В. А. Висоцька IMPROVING THE RELIABILITY OF COMPUTER SYSTEM ELEMENTS USING MODULAR ENCODING 2022-04-13T10:13:43+03:00 V. I. Freyman <p>Context. Computing systems are implemented in many industries and economies of the modern world. The quality indicators of the systems in which they are used depend on the reliability of their work. The reliability of a computing system consists of the reliability of the construction and functioning of its elements. 
It is not always possible to ensure reliability in the design by choosing a high-quality element base, structural redundancy, or other well-known methods. Therefore, important and critical elements of computing systems are protected by built-in control schemes. They allow detecting errors that occur when performing basic data operations. An effective way of constructing such circuits is to use operations on the remainders of the division of the operands by a selected modulus or by several moduli (modular coding). The task of choosing the most accurate and least redundant means of control is especially relevant for a wide range of basic elements of modern computing systems.</p> <p>Objective. The aim of the work is the research and development of recommendations on the use of modular coding to improve the reliability of the functioning of elements of modern computing systems in various hardware and software bases.</p> <p>Methods. A method for numerical control of the correctness of performing basic arithmetic and logical operations by computing devices is selected and analyzed. On its basis, a schematic model of a computing system was built and verified in the MatLab Simulink environment, which uses modular coding as a means of ensuring the reliability of the functioning of elements. An analysis of the probabilistic characteristics of decision-making is carried out, and estimates of the probability of erroneous decision-making are given. A software implementation of the simulation algorithm in the Visual Basic for Applications environment has been created, which made it possible to plot the dependence of reliability indicators on coding parameters.</p> <p>Results. A schematic model of a computing system has been developed. It allows studying various combinations of faults in the functioning of elements and errors in their operations. 
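</p>
<p>The residue-based control idea can be shown in miniature: an operation's result is checked against the result predicted from the operands' residues modulo a check modulus. The single modulus m = 3 is an assumed illustrative choice, not a value prescribed in the work:</p>

```python
def residue_check_add(a, b, result, m=3):
    """Modular-coding check for an adder: the residue of the reported
    result must equal the residue computed from the operands' residues.
    Errors whose magnitude is a multiple of m go undetected, which is
    why practical schemes may combine several moduli."""
    return (a % m + b % m) % m == result % m

def faulty_add(a, b):
    """A deliberately faulty adder that drops the lowest bit."""
    return (a + b) & ~1
```

<p>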
An algorithm for simulating all kinds of malfunctions and errors in the functioning of elements of computing systems when they perform basic operations is implemented in software. The qualitative dependences of the probabilistic characteristics of reliability on the coding parameters are determined. Based on the analysis of the obtained characteristics, conclusions are drawn and practical recommendations are given on the use of modular coding in the elements of computing systems in order to achieve the specified reliability indicators.</p> <p>Conclusions. To improve the reliability of the functioning of the elements of computing systems, it is effective to use built-in control schemes based on modular coding. Taking into account the recommendations for choosing the code parameters will ensure the required reliability with minimal circuit redundancy and computational complexity of the calculation algorithms.</p> 2022-04-13T00:00:00+03:00 Copyright (c) 2022 V. I. Freyman FAST ALGORITHM FOR SOLVING A ONE-DIMENSIONAL UNCLOSED DESIRABLE NEIGHBORS PROBLEM 2022-04-03T05:55:50+03:00 V. A. Kodnyanko <p>Context. The paper formulates a general combinatorial problem of desired neighbors. Possible areas of practical application of the results of its development are listed. Within the framework of this problem, an analysis of the scientific literature on the optimization of combinatorial problems of practical importance that are close in subject matter is carried out, on the basis of which the novelty of the formulated problem accepted for scientific and algorithmic development is established.</p> <p>Objective. For a particular case of the problem, the article formulates a one-dimensional unclosed integer combinatorial problem of practical importance about desired neighbors, using the example of distributing buyers among land plots while taking into account their recommendations on the desired neighborhood.</p> <p>Method.
A method for solving the mentioned problem has been developed and a corresponding efficient algorithm has been created, which, for thousands of experimental sets of hundreds of distribution subjects, obtains the optimal result on an ordinary personal computer in less than a second of computing time. An extension of the optimization process is proposed that doubles the practical effect of optimization by cutting off unwanted neighbors without worsening the maximum value of the desirability criterion.</p> <p>Results. The results of the work include the formulation of a one-dimensional unclosed combinatorial problem about desired neighbors and an efficient algorithm for its solution, which makes it possible to find one, several, or, if necessary, all of the optimal distributions. The main results of the work also include the concept and formulation of a general optimization combinatorial problem of desirable neighbors, which may have theoretical and practical prospects.</p> <p>Conclusions. The method underlying the algorithm makes it easy, if necessary, to find all the best placement options, the number of which is, as a rule, very large. It is established that their number can usefully be reduced, down to a single option, by reducing the number of undesirable neighborhoods, which improves the quality of the filtered optimal distributions according to this criterion. The considered problem may evolve and develop in various subject areas of the economy, production, architecture, urban studies, and other spheres.</p> 2022-04-03T00:00:00+03:00 Copyright (c) 2022 В. А. Коднянко PROPERTIES OF GENERATORS OF PSEUDO-RANDOM SEQUENCES CONSTRUCTED USING FUZZY LOGIC AND TWO-DIMENSIONAL CHAOTIC SYSTEMS 2022-04-03T11:43:11+03:00 M. Ya. Kushnir Hr. V. Kosovan P. M. Kroyalo <p>Context.
The problem of generating pseudo-random bit sequences using the rules of fuzzy logic and two-dimensional chaotic systems is considered.</p> <p>Objective. The objects of study are pseudo-random sequence generators built using two-dimensional chaotic systems and fuzzy logic. The purpose of the work is to develop and implement pseudo-random bit sequence generators based on the rules of fuzzy logic and two-dimensional chaotic systems and to evaluate the statistical characteristics of the generated sequences using the statistical tests of the National Institute of Standards and Technology.</p> <p>Method. A method for generating pseudo-random bit sequences is proposed that makes it possible to form, on the basis of the rules of fuzzy logic and two-dimensional chaotic systems, bit sequences whose characteristics meet the requirements of secure communication systems and cryptographic information protection. In the process of studying the operation of the generators, histograms of the distribution of output values were constructed, which makes it clear whether the entire range of output values of the two-dimensional system can be used to generate a pseudo-random bit sequence or only part of it. A study of the statistical characteristics of the generated sequences using a set of statistical tests was also performed.</p> <p>Results. The proposed generators were implemented in software; histogram analysis and evaluation of compliance with the criteria of the National Institute of Standards and Technology statistical test suite were performed. Bit sequences formed using fuzzy logic rules and two-dimensional chaotic systems can be used to transmit information in secure communication systems.</p> <p>Conclusions.
The experiments confirmed the ability of the proposed generators to produce bit sequences with good statistical characteristics, which allows them to be recommended for practical use in solving problems of cryptographic information protection and secure transmission of information over open communication channels. Prospects for further research include creating cryptographic methods of information protection based on the proposed pseudo-random bit sequence generators and implementing secure communication systems.</p> 2022-04-03T00:00:00+03:00 Copyright (c) 2022 M. Ya. Kushnir, Hr. V. Kosovan SOLVING POISSON EQUATION WITH CONVOLUTIONAL NEURAL NETWORKS 2022-04-04T00:08:25+03:00 V. A. Kuzmych M. A. Novotarskyi O. B. Nesterenko <p>Context. The Poisson equation is one of the fundamental differential equations; it is used to simulate complex physical processes such as fluid motion, heat transfer, electrodynamics, etc. Existing methods for solving boundary value problems based on the Poisson equation require increased computational time to achieve high accuracy. The proposed method solves the boundary value problem with a significant speedup at the cost of an acceptable loss of accuracy.</p> <p>Objective. The aim of our work is to develop an artificial neural network architecture for solving a boundary value problem based on the Poisson equation with arbitrary Dirichlet and Neumann boundary conditions.</p> <p>Method. A method for solving boundary value problems based on the Poisson equation using a convolutional neural network is proposed. The network architecture and the structure of the input and output data are developed. In addition, the method of training dataset generation is described.</p> <p>Results. The performance of the developed artificial neural network is compared with that of the numerical finite difference method for solving the boundary value problem.
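The finite difference baseline against which such a network is compared can be sketched as a Jacobi iteration (a minimal illustration with zero Dirichlet boundaries; the grid size, source term, and iteration count are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def solve_poisson_jacobi(f: np.ndarray, h: float = 1.0, iters: int = 5000) -> np.ndarray:
    """Jacobi iteration for the discrete Poisson equation laplacian(u) = f
    on a uniform grid with zero Dirichlet boundary values."""
    u = np.zeros_like(f)
    for _ in range(iters):
        # Five-point stencil: u = (sum of 4 neighbors - h^2 * f) / 4
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:]
                                - h * h * f[1:-1, 1:-1])
    return u

# Point source in the middle of a 33x33 grid
f = np.zeros((33, 33))
f[16, 16] = -1.0
u = solve_poisson_jacobi(f)
```

The cost of driving such an iteration to high accuracy on fine grids is exactly what motivates replacing it with a single forward pass of a trained network.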
The results showed a speedup of 10–700 times, depending on the number of sampling nodes.</p> <p>Conclusions. The proposed method significantly accelerates the solution of a boundary value problem based on the Poisson equation in comparison with the numerical method. In addition, the developed approach to the design of the neural network architecture allows the proposed method to be improved to achieve higher accuracy in modeling the process of pressure distribution in areas of arbitrary size.</p> 2022-04-04T00:00:00+03:00 Copyright (c) 2022 V. A. Kuzmych, M. A. Novotarskyi, O. B. Nesterenko THE MODULAR EXPONENTIATION WITH PRECOMPUTATION OF REDUSED SET OF RESIDUES FOR FIXED-BASE 2022-04-04T11:17:59+03:00 I. Prots’ko O. Gryshchuk <p>Context. Modular exponentiation is an important operation in many applications and requires a large number of calculations. Fast computation of modular exponentiation is essential for efficient computations in number-theoretic transforms, for providing high cryptographic strength of information data, and in many other applications.</p> <p>Objective – the runtime analysis of the software functions for computing modular exponentiation in the developed program, which uses the precomputation of a reduced set of residues for a fixed base.</p> <p>Method. Modular exponentiation is implemented by developing the right-to-left binary exponentiation method for a fixed base with precomputation of a reduced set of residues. To efficiently compute modular exponentiation over big numbers, the periodicity of the sequence of residues of a fixed base with exponents equal to integer powers of two is used.</p> <p>Results. A comparison of the runtimes of five variants of functions for computing modular exponentiation is performed.
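The right-to-left fixed-base idea can be sketched as follows (a simplified illustration using a full table of binary-power residues; the authors' optimization further reduces this set by exploiting the periodicity of the residues, which is not shown here):

```python
def precompute_fixed_base(g: int, n: int, max_bits: int) -> list[int]:
    """Precompute g^(2^i) mod n for i = 0 .. max_bits-1.

    This is a one-time cost per fixed base g, amortized over
    many exponentiations with different exponents."""
    table = [g % n]
    for _ in range(max_bits - 1):
        table.append(table[-1] * table[-1] % n)
    return table

def fixed_base_pow(table: list[int], e: int, n: int) -> int:
    """Right-to-left binary exponentiation using the precomputed squarings:
    multiply in table[i] for every set bit i of the exponent."""
    acc, i = 1, 0
    while e:
        if e & 1:
            acc = acc * table[i] % n
        e >>= 1
        i += 1
    return acc

n, g = 1000003, 7
table = precompute_fixed_base(g, n, 64)
assert fixed_base_pow(table, 123456789, n) == pow(g, 123456789, n)
```

Because all squarings are done in advance, each exponentiation costs only one modular multiplication per set bit of the exponent.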
The algorithm with precomputation of a reduced set of residues for a fixed base provides faster computation of modular exponentiation for values larger than 1K binary digits compared to the modular exponentiation functions of the MPIR and Crypto++ libraries. The MPIR library, with an integer data type of 256 to 2048 binary digits, is used to develop the algorithm for computing modular exponentiation.</p> <p>Conclusions. The developed software implementation of the computation of modular exponentiation on general-purpose computer systems has been considered and analyzed. One way to speed up the computation of modular exponentiation is to develop algorithms that can use the precomputation of a reduced set of residues for a fixed base. As the number of binary digits of the exponent increases beyond 1K, the software implementation of modular exponentiation shows an improvement in computation time in comparison with the modular exponentiation functions of the MPIR and Crypto++ libraries.</p> 2022-04-04T00:00:00+03:00 Copyright (c) 2022 I. Prots’ko, O. Gryshchuk TWO PAIRS OF DUAL QUEUEING SYSTEMS WITH CONVENTIONAL AND SHIFTED DISTRIBUTION LAWS 2022-04-05T19:49:02+03:00 V. N. Tarasov N. F. Bakhareva <p>Context. The relevance of studies of G/G/1 systems is associated with the fact that they are in demand for modeling data transmission systems for various purposes, as well as with the fact that no final solution exists for them in the general case. We consider the problem of deriving a closed-form solution for the average delay of requests in a queue for ordinary systems with Erlang and exponential input distributions and for the same systems with distributions shifted to the right.</p> <p>Objective.
Obtaining a solution for the main characteristic of the system – the average delay of requests in the queue – for two pairs of queuing systems with ordinary and shifted Erlang and exponential input distributions, as well as comparing the results for systems with normalized Erlang distributions.</p> <p>Methods. To solve the posed problem, the method of spectral solution of the Lindley integral equation was used, which makes it possible to obtain a closed-form solution for the average delay in the systems under consideration. For the practical application of the obtained results, the method of moments of probability theory was used.</p> <p>Results. Spectral solutions of the Lindley integral equation for the two pairs of systems are obtained, with the help of which closed-form calculation formulas are derived for the average delay of requests in the queue. Comparison of the obtained results with the data for systems with normalized Erlang distributions confirms their identity.</p> <p>Conclusions. Introducing a time shift parameter into the distribution laws of the input flow and service time of the systems under consideration transforms them into delay systems with a shorter waiting time. This is because the time shift operation reduces the coefficients of variation of the intervals between arrivals and of the service time, and, as is known from queuing theory, the average delay of requests depends on these coefficients of variation quadratically. While a system with Erlang and exponential input distributions works only for one fixed pair of values of the coefficients of variation of the inter-arrival and service times, the same system with shifted distributions allows operating with interval values of the coefficients of variation, which expands the scope of these systems. The situation is similar with shifted exponential distributions.
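The effect of the shift on the coefficient of variation can be checked directly: for X = t0 + Exp(λ), the mean is t0 + 1/λ while the standard deviation remains 1/λ, so the coefficient of variation drops from 1 to 1/(1 + λ·t0) (a small numeric check; the parameter values are illustrative):

```python
def cv_shifted_exponential(rate: float, shift: float) -> float:
    """Coefficient of variation (std / mean) of X = shift + Exp(rate)."""
    mean = shift + 1.0 / rate   # the shift adds to the mean...
    std = 1.0 / rate            # ...but leaves the spread unchanged
    return std / mean

assert cv_shifted_exponential(1.0, 0.0) == 1.0   # ordinary exponential: cv = 1
assert cv_shifted_exponential(1.0, 1.0) == 0.5   # a shift t0 = 1 halves the cv
```

Since the average queueing delay grows roughly with the squares of these coefficients of variation, even a modest shift yields a noticeably shorter waiting time.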
In addition, the shifted exponential distribution contains two parameters and allows one to approximate arbitrary distribution laws using the first two moments. This approach makes it possible to calculate the average latency and higher-order moments for the specified systems in mathematical packages over a wide range of traffic parameters. The method of spectral solution of the Lindley integral equation for the systems under consideration has made it possible to obtain a closed-form solution, and these solutions are published for the first time.</p> 2022-04-05T00:00:00+03:00 Copyright (c) 2022 В. Н. Тарасов, Н. Ф. Бахарєва SPECTRAL ESTIMATION METHODS FOR A JOINT SYSTEM OF THE NON-NOISE-LIKE TARGETS DETECTION AND THE NOISE RADIATING SOURCES LOCALIZATION 2022-04-01T10:33:57+03:00 D. V. Atamanskyi V. P. Riabukha V. M. Kartashov A. V. Semeniaka L. V. Procopenco <p>Context. For many radars, the autonomous systems of non-noise-like aerial target (AT) detection and noise radiating source (NRS) localization (direction-of-arrival estimation) may be replaced with a single detection-localization system, which carries out the common operations of AT-detection and NRS-localization only once. For such a system, groups of non-eigenvalue and eigenvalue decomposition based “super-resolving” spectral estimation (SE) methods are considered in order to substantiate an efficient one for NRS-localization.</p> <p>Objective. A comparative analysis of the efficiency of the SE-methods of the different groups according to a set of criteria, with recommendations on their practical application.</p> <p>Method. The methods’ efficiency is analyzed analytically, by simulation, and by comparison with new results presented in the open literature. In the simulation, a well-grounded and practically proven software-algorithmic basis of adaptive lattice filters for the implementation of nonparametric SE-methods is used.</p> <p>The results.
It is shown that the SE-methods of both groups have no restrictions on the antenna array configuration (flat, ring, etc.), including when used in non-equally spaced “sparse” antenna arrays with inter-element distances of more than half the radar wavelength. A comparison is made of the resolution (determination of the number of NRS) and the NRS-localization (direction-of-arrival estimation) efficiency of methods of the different groups when using various antenna arrays. It is shown that the methods of the first group (non-eigenvalue based), in terms of the probability of correct resolution, are almost not inferior to the known and new methods of the second group (eigenvalue based). Based on the set of criteria and the practical application conditions for direction-of-arrival estimation of noise radiating sources, it is recommended to use Capon’s minimum variance method if there are limitations on the computational complexity of the method. In the absence of such restrictions, it is advisable to use the SE-bank of methods.</p> <p>Conclusions. For the practical implementation of a joint system of non-noise-like aerial target detection and noise radiating source localization, a structural-algorithmic basis of adaptive lattice filters is preferred. Using the latter, along with forming the weight vector for target detection, it is possible to implement not only Capon’s method but also an SE-bank of methods by combining the squares of the absolute values of the components of its original vectors.</p> 2022-04-01T00:00:00+03:00 Copyright (c) 2022 Д. В. Атаманський, В. П. Рябуха, В. М. Карташов, А. В. Семеняка, Л. В. Прокопенко SYSTEMATIZATION OF THE FORMULAS OF THE RESONANT FERRITE ISOLATOR LOSS 2022-04-02T10:35:08+03:00 O. B. Zaichenko N. Ya. Zaichenko <p>Context.
The problem is to systematize and improve the models of a resonance ferrite isolator in a rectangular waveguide for antenna-feeder devices and for generating, receiving, and measuring microwave equipment containing ferrite decoupling devices: ferrite isolators and circulators.</p> <p>Objective. The goal of the work is to verify the formula for the losses of the resonant ferrite isolator in the forward and reverse directions, as well as the isolation ratio.</p> <p>Method. The research method of the work is a critical analysis of literary sources, which was carried out but did not bring the desired results, since it did not allow verifying the correctness of the derivation of the formula in [17]. Therefore, a number of hypotheses were put forward as to what the formula might mean. The difficulty lay in the presence in the formula of a product of trigonometric functions that could be attributed to frequency properties; this was taken as the initial hypothesis, which was not subsequently confirmed. The verification included the transformation of formulas by means of mathematical physics in terms of microwave electrodynamics, trigonometry, and algebra. The starting point was the classical formula of [16], similar to the formula of [18], accepted without proof. As is known, for the dominant mode in a rectangular waveguide, the components of the magnetic field strength are obtained as a solution of the wave equation under the boundary conditions inherent in a rectangular waveguide: the component along the direction of wave propagation and the component in the transverse direction of the waveguide cross-section are proportional to the trigonometric functions cosine and sine with the same argument.
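For the dominant TE10 mode, these standard field expressions locate the plane of circular polarization where |Hx| = |Hz|, i.e. tan(πx/a) = π/(βa) (a numeric sketch using well-known waveguide formulas; the WR-90 dimensions and operating wavelength below are illustrative assumptions, not values from the paper):

```python
import math

def circular_polarization_plane(a: float, wavelength: float) -> float:
    """Distance x from the narrow wall of a rectangular waveguide at which
    the TE10 magnetic field is circularly polarized.

    Hz ~ cos(pi*x/a) and Hx ~ (beta*a/pi) * sin(pi*x/a), so equality of
    magnitudes gives tan(pi*x/a) = pi / (beta*a), with beta the
    propagation constant of the guided wave."""
    k = 2 * math.pi / wavelength          # free-space wavenumber
    kc = math.pi / a                      # TE10 cutoff wavenumber
    beta = math.sqrt(k * k - kc * kc)     # propagation constant (dispersion)
    return a / math.pi * math.atan(math.pi / (beta * a))

# WR-90 waveguide (a = 22.86 mm) at roughly 10 GHz (wavelength about 30 mm)
x = circular_polarization_plane(0.02286, 0.03)
assert 0 < x < 0.02286 / 2  # the plane lies between the narrow wall and the center
```

The square root in beta is exactly the dispersion-related radical that the derivation discussed above eliminates by switching to ratios of sine and cosine.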
The equality of these two field components is traditionally used to find the plane of circular polarization where the ferrite is placed. The authors use this proportionality to trigonometric functions in their derivation, namely the double-angle formulas and the basic trigonometric identity (sine squared plus cosine squared equals one), to replace the propagation constants with trigonometric functions; this eliminates the radicals in the formulas, which arise from the dispersion phenomenon in a rectangular waveguide. The rest of the manipulations with the formula amount to collecting like terms.</p> <p>Results. Analytical expressions for the losses of the resonant ferrite isolator in the forward and reverse directions, as well as the isolation ratio, were obtained by strict mathematical transformations. The following transformations were performed. The ratios of the longitudinal propagation constant to the transverse propagation constant are replaced by ratios of the trigonometric functions sine and cosine, since these are continuous, as opposed to tangents and cotangents. Such a transformation avoids the square roots in the formula for the losses of the ferrite isolator in the forward and reverse directions, which are associated with the presence of dispersion in the waveguide, as in the formula for the wavelength in the waveguide. The conversion is based on microwave electrodynamics; the formulas for the field distribution of the dominant mode in a rectangular waveguide are used. Further transformations consist in factoring out common terms and other arithmetic manipulations.</p> <p>Conclusions.
The obtained results partially coincide with the well-known ones of [17]; the derivation of the formula of [17] was obtained for the first time; and the studies carried out allowed us to reject the hypothesis that the product of cosines and sines in the loss formula of a ferrite isolator is a frequency characteristic: it appears as a result of arithmetic transformations. To take the frequency range into account, it is used that if there is circular polarization at the middle frequency of the range, there will also be circular polarization at the extreme frequency, but the plane of circular polarization will shift in comparison with its position at the middle frequency. That is, a peculiar system of two equations is obtained with respect to the two positions of the polarization plane relative to the wide wall of the rectangular waveguide cross-section. The scientific novelty consists in the systematization and generalization of the formulas for the loss of the resonance ferrite isolator; the connection between the formulas from different literature sources, both foreign and domestic, is proved, which saves researchers of ferrite isolators time in verifying the formula. The practical significance: the results may be useful for teaching purposes and in the optimization of ferrite isolator design.</p> 2022-04-02T00:00:00+03:00 Copyright (c) 2022 O. B. Zaichenko, N. Ya. Zaichenko