Radio Electronics, Computer Science, Control 2024-04-03T16:03:51+03:00 Sergey A. Subbotin Open Journal Systems <p dir="ltr" align="justify"><strong>Description:</strong> The scientific journal «Radio Electronics, Computer Science, Control» is an international academic peer-reviewed publication. It publishes scientific articles (works that extensively cover a specific topic, idea, or question and contain elements of their analysis) and reviews (works containing analysis and a reasoned assessment of the author's original or published book), which receive objective review by leading specialists who evaluate content on its merits without regard to the race, sex, religion, ethnic origin, nationality, or political philosophy of the author(s).<span id="result_box2"><br /></span><strong>Founder and Publisher:</strong> <a href="" aria-invalid="true">National University "Zaporizhzhia Polytechnic"</a>. <strong>Country:</strong> Ukraine.<span id="result_box1"><br /></span><strong>ISSN</strong> 1607-3274 (print), ISSN 2313-688X (on-line).<span id="result_box3"><br /></span><strong>Certificate of State Registration:</strong> КВ №24220-14060ПР dated 19.11.2019. The journal is registered by the Ministry of Justice of Ukraine.<br /><span id="result_box4">By the Order of the Ministry of Education and Science of Ukraine from 17.03.2020 № 409 “On approval of the decision of the Certifying Collegium of the Ministry on the activities of the specialized scientific councils dated 06 March 2020” the <strong>journal is included in the list of scientific specialized periodicals of Ukraine in category “А” (highest level), in which the results of dissertations for Doctor of Science and Doctor of Philosophy may be published</strong>. 
<span id="result_box26">By the Order of the Ministry of Education and Science of Ukraine from 21.12.2015 № 1328 "On approval of the decision of the Certifying Collegium of the Ministry on the activities of the specialized scientific councils dated 15 December 2015" the journal is included in the <strong>List of scientific specialized periodicals of Ukraine</strong>, in which the results of dissertations for Doctor of Science and Doctor of Philosophy in Mathematics and Technical Sciences may be published.</span><br />The <strong>journal is included in the Polish List of scientific journals</strong> and peer-reviewed materials from international conferences with an assigned number of points (Annex to the announcement of the Minister of Science and Higher Education of Poland from July 31, 2019: Lp. 16981). </span><span id="result_box27"><br /></span><strong>Year of Foundation:</strong> 1999. <strong>Frequency:</strong> 4 times per year (before 2015 - 2 times per year).<span id="result_box6"><br /></span><strong>Volume:</strong> up to 20 conventional printed sheets. <strong>Format:</strong> 60x84/8. <span id="result_box7"><br /></span><strong>Languages:</strong> English, Ukrainian. 
Before 2022, also Russian.<span id="result_box8"><br /></span><strong>Fields of Science:</strong> Physics and Mathematics, Technical Sciences.<span id="result_box9"><br /></span><strong>Aim:</strong> to serve the academic community, principally by publishing topical articles resulting from original theoretical or applied research in various areas of academic endeavor.<strong><br /></strong><strong>Focus:</strong> fresh formulations of problems and new methods of investigation, helping professionals, graduate students, engineers, academics, and researchers to disseminate information on state-of-the-art techniques within the journal scope.<br /><strong>Scope:</strong> telecommunications and radio electronics, software engineering (including algorithm and programming theory), computer science (mathematical modeling and computer simulation, optimization and operations research, control in technical systems, machine-machine and man-machine interfacing, artificial intelligence, including data mining, pattern recognition, artificial neural and neuro-fuzzy networks, fuzzy logic, swarm intelligence and multiagent systems, hybrid systems), computer engineering (computer hardware, computer networks), information systems and technologies (data structures and bases, knowledge-based and expert systems, data and signal processing methods).<strong><br /></strong> <strong>Journal sections:</strong><span id="result_box10"><br /></span>- radio electronics and telecommunications;<span id="result_box12"><br /></span>- mathematical and computer modelling;<span id="result_box13"><br /></span>- neuroinformatics and intelligent systems;<span id="result_box14"><br /></span>- progressive information technologies;<span id="result_box15"><br /></span>- control in technical systems. 
<span id="result_box17"><br /></span><strong>Abstracting and Indexing:</strong> <strong>The journal is indexed in the <a href="" target="_blank" rel="noopener">Web of Science</a></strong> (WoS) scientometric database. The articles published in the journal are abstracted in leading international and national <strong>abstracting journals</strong> and <strong>scientometric databases</strong>, and are also placed in <strong>digital archives</strong> and <strong>libraries</strong> with free on-line access. <span id="result_box21"><br /></span><strong>Editorial board:</strong> <em>Editor in chief</em> - S. A. Subbotin, D. Sc., Professor; <em>Deputy Editor in Chief</em> - D. M. Piza, D. Sc., Professor. The <em>members</em> of the Editorial Board are listed <a href="" aria-invalid="true">here</a>.<span id="result_box19"><br /></span><strong>Publishing and processing fee:</strong> Articles are published and peer-reviewed <strong>free of charge</strong>.<span id="result_box20"><br /></span><strong>Authors Copyright:</strong> The journal allows the authors to hold the copyright without restrictions and to retain publishing rights without restrictions. The journal allows readers to read, download, copy, distribute, print, search, or link to the full texts of its articles. The journal allows reuse and remixing of its content in accordance with the Creative Commons license CC BY-SA.<span id="result_box21"><br /></span><strong>Authors Responsibility:</strong> By submitting an article to the journal, authors assume full responsibility for copyright compliance with respect to other individuals and organizations, for the accuracy of citations, data, and illustrations, and for the nondisclosure of state and industrial secrets, and consent to transfer to the publisher, free of charge, the right to publish the article, to translate it into foreign languages, and to store and distribute its materials in any form. 
Authors who hold scientific degrees, by submitting an article to the journal, thereby consent to act free of charge as reviewers of other authors' articles at the request of the journal editor within the established deadlines. The articles submitted to the journal must be original, new, and interesting to the journal's readership, have reasonable motivation and aim, be previously unpublished, and not be under consideration for publication in other journals or conferences. Articles should not contain trivial or obvious results, draw unwarranted conclusions, or repeat the conclusions of already published studies.<span id="result_box22"><br /></span><strong>Readership:</strong> scientists, university faculty, postgraduate and graduate students, practical specialists.<span id="result_box23"><br /></span><strong>Publicity and Accessing Method:</strong> <strong>Open Access</strong> on-line for full-text publications<span id="result_box24">.</span></p> STEWART PLATFORM DYNAMICS MODEL IDENTIFICATION 2024-03-30T14:37:09+02:00 V. A. Zozulya S. І. Osadchy S. N. Nedilko <p>Context. At the present stage, given current demands on the accuracy of motion control for a moving object on a specified or programmable trajectory, it is necessary to synthesize the optimal structure and parameters of the object's stabilization system (controller), taking into account both real controlled and uncontrolled stochastic disturbing factors. In the process of synthesizing the optimal controller structure, it is also necessary to estimate and take into account multidimensional dynamic models, including those of the object itself, its basic components, and the controlled and uncontrolled disturbing factors that affect the object in its actual motion.</p> <p>Objective. 
The aim of the research presented in this article is to obtain, and to assess the accuracy of, a dynamic model of the Stewart platform using a justified algorithm for identifying the dynamics of a multidimensional moving object.</p> <p>Method. The article employs a frequency-domain identification method for multidimensional stochastic stabilization systems of moving objects with arbitrary dynamics. The proposed algorithm for identifying the dynamics model of a multidimensional moving object is constructed from operations of polynomial and fractional-rational matrix addition and multiplication, Wiener factorization, Wiener separation, and the evaluation of dispersion integrals.</p> <p>Results. As a result of the conducted research, the problem of identifying the dynamic model of a multidimensional moving object is formalized, illustrated by the example of a test stand based on the Stewart platform. The outcomes encompass the identification of the dynamic model of the Stewart platform, its transfer function, and the transfer function of the shaping filter. Verification of the identification results confirms the sufficient accuracy of the obtained models.</p> <p>Conclusions. The justified identification algorithm allows determining the order and parameters of the linearized system of ordinary differential equations for a multidimensional object, and the matrix of spectral densities of the disturbances acting on it, under operating conditions that approximate the real functioning mode of the object prototype. The analysis of the identification results for the dynamic models of the Stewart platform indicates that the primary influence on the displacement of the center of mass of the moving platform comes from variations in the control inputs. However, neglecting the impact of disturbances reduces the accuracy of platform positioning. 
Therefore, for the synthesis of the control system, methods should be applied that enable determining the structure and parameters of a multidimensional controller while taking such influences into account.</p> 2024-04-02T00:00:00+03:00 Copyright (c) 2024 V. A. Zozulya, S. І. Osadchy, S. N. Nedilko REFINEMENT AND ACCURACY CONTROL OF THE SOLUTION METHOD FOR THE DURABILITY PROBLEM OF A CORRODING STRUCTURE USING NEURAL NETWORK 2024-03-29T18:53:21+02:00 O. D. Brychkovskyi <p>Context. The prediction of the time until failure of corroding hinge-rod structures is a crucial component of risk management across various industrial sectors. An accurate solution to the durability problem of corroding structures allows for the prevention of the undesired consequences that may arise in the event of an emergency. This raises the question of how effective the existing methods for solving this problem are and how they can be enhanced.</p> <p>Objective. The objective is to refine the method of solving the durability problem of a corroding structure using an artificial neural network and to establish accuracy control.</p> <p>Method. To refine the original method, alternative sets of input data for the artificial neural network, which increase the information about the change in axial forces over time, are considered. For each set of input data, a set of models is trained. Based on the distribution of target metric values among the obtained sets, the set is selected for which the minimum value of the mathematical expectation of the target metric is achieved. For the set of models corresponding to the identified best set, accuracy control of the method is established by determining the relationship between the mathematical expectation of the target metric and the parameters of the numerical solution.</p> <p>Results. The conditions under which a lower value of the mathematical expectation of the target metric is obtained compared to the original method are determined. 
The results of numerical experiments show, depending on the considered case, average improvements of 43.54% and 9.67% for the refined method compared to the original. Additionally, the proposed refinement reduces the computational costs required to find a solution by omitting certain steps of the original method. An accuracy control rule for the method is established, which allows a given average error value to be obtained without performing extra computations.</p> <p>Conclusions. The obtained results indicate the feasibility of applying the proposed refinement. Higher accuracy in predicting the time until failure of corroding hinge-rod structures reduces the risks of an emergency. Additionally, accuracy control enables finding a balance between computational costs and the accuracy of solving the problem.</p> 2024-04-02T00:00:00+03:00 Copyright (c) 2024 O. D. Brychkovskyi METHOD OF GENERATIVE-ADVERSARIAL NETWORKS SEARCHING ARCHITECTURES FOR BIOMEDICAL IMAGES SYNTHESIS 2024-03-29T19:13:27+02:00 O. M. Berezsky P. B. Liashchynskyi <p>Context. The article examines the problem of automatic design of architectures of generative-adversarial networks. Generative-adversarial networks are used for image synthesis. This is especially true for the synthesis of biomedical images – cytological and histological, which are used to make a diagnosis in oncology. The synthesized images are used to train convolutional neural networks. Convolutional neural networks are currently among the most accurate classifiers of biomedical images.</p> <p>Objective. The aim of the work is to develop an automatic method for searching for architectures of generative-adversarial networks based on a genetic algorithm.</p> <p>Method. 
The developed method consists of a stage of searching for the generator architecture with a fixed discriminator and a stage of searching for the discriminator architecture with the best generator.</p> <p>At the first stage, a fixed discriminator architecture is defined and a generator is searched for. Accordingly, after the first stage, the architecture of the best generator is obtained, i.e. the model with the lowest FID value.</p> <p>At the second stage, the best generator architecture is used and a search for the discriminator architecture is carried out. At each cycle of the optimization algorithm, a population of discriminators is created. After the second stage, the architecture of the generative-adversarial network is obtained.</p> <p>Results. Cytological images of breast cancer from the Zenodo platform were used to conduct the experiments. As a result of the study, an automatic method for searching for architectures of generative-adversarial networks has been developed. On the basis of computer experiments, the architecture of a generative-adversarial network for the synthesis of cytological images was obtained. The total time of the experiment was ~39.5 GPU hours. As a result, 16,000 images were synthesized (4,000 for each class). To assess the quality of the synthesized images, the FID metric was used. The results of the experiments showed that the developed architecture is the best: the network's FID value is 3.39, which surpasses well-known generative-adversarial networks.</p> <p>Conclusions. The article develops a method for searching for architectures of generative-adversarial networks for problems of biomedical image synthesis. In addition, a software module for the synthesis of biomedical images has been developed, which can be used to train CNNs.</p> 2024-04-02T00:00:00+03:00 Copyright (c) 2024 O. M. Berezsky, P. B. 
Liashchynskyi MACHINE LEARNING FOR AUTOMATIC EXTRACTION OF WATER BODIES USING SENTINEL-2 IMAGERY 2024-03-29T19:43:15+02:00 V. Yu. Kashtan V. V. Hnatushenko <p>Context. Given the aggravation of environmental and water problems, there is a need to improve automated methods for extracting and monitoring water bodies in urban ecosystems. The problem of efficient and automated extraction of water bodies is becoming relevant given the large amount of data obtained from satellite systems. The object of study is water bodies that are automatically extracted from Sentinel-2 optical satellite images using machine learning methods.</p> <p>Objective. The goal of the work is to improve the efficiency of the process of extracting the boundaries of water bodies in digital optical satellite images by using machine learning methods.</p> <p>Method. The paper proposes an automated information technology for delineating the boundaries of water bodies in Sentinel-2 digital optical satellite images. The process includes eight stages, starting with data download and the use of topographic maps to obtain basic information about the study area. The data are then pre-processed, which includes calibrating the images, removing atmospheric noise, and enhancing contrast. Next, the EfficientNet-B0 architecture is applied to identify water features, facilitating optimal scaling of network width, depth, and image resolution. ResNet blocks compress and expand channels, which allows for optimal connectivity of large-scale and multi-channel links across layers. After that, the Region Proposal Network defines regions of interest (ROI), and ROI alignment ensures data homogeneity. The fully connected layer helps in segmenting the regions, and the fully connected network creates binary masks for accurate identification of water bodies. 
The final step of the method is to analyze spatial and temporal changes in the images to identify differences, changes, and trends that may indicate specific phenomena or events. This approach allows automating the accurate identification of water features in satellite images using machine learning.</p> <p>Results. The proposed technology was implemented in Python software. An assessment of the technology's accuracy, conducted through a comparative analysis with existing methods such as water indices and K-means, confirms a high level of accuracy over the period from 2017 to 2023 (up to 98%). The Kappa coefficient, which considers the degree of agreement between the actual and predicted classification, confirms the stability and reliability of our approach, reaching a value of 0.96.</p> <p>Conclusions. The experiments confirm the effectiveness of the proposed automated information technology and allow us to recommend it for use in studies of changes in coastal areas, decision-making in the field of coastal resource management, and land use. Prospects for further research may include new methods that account for seasonal changes and provide robustness in the selection and mapping of water surfaces.</p> 2024-04-02T00:00:00+03:00 Copyright (c) 2024 V. Yu. Kashtan, V. V. Hnatushenko APPROACH TO THE AUTOMATIC CREATION OF AN ANNOTATED DATASET FOR THE DETECTION, LOCALIZATION AND CLASSIFICATION OF BLOOD CELLS IN AN IMAGE 2024-03-29T20:54:31+02:00 S. M. Kovalenko O. S. Kutsenko S. V. Kovalenko A. S. Kovalenko <p>Context. The paper considers the problem of automating the creation of an annotated dataset for further use in a system for detecting, localizing and classifying blood cells in an image using deep learning. The subject of the research is the processes of digital image processing for object detection and localization.</p> <p>Objective. 
The aim of this study is to create a pipeline of digital image processing methods that can automatically generate an annotated set of blood smear images. This set will then be used to train and validate deep learning models, significantly reducing the time required of machine learning specialists.</p> <p>Method. The proposed approach for object detection and localization is based on digital image processing methods such as filtering, thresholding, binarization, contour detection, and filling. The pipeline for detection and localization includes the following steps: noise reduction; conversion to the HSV color model; defining a mask for white blood cells and platelets; detecting the contours of white blood cells and platelets; determining the coordinates of the upper left and lower right corners of white blood cells and platelets; calculating the area of the region inside the bounding box; saving the obtained data; determining the most common color in the image and filling the contours of leukocytes and platelets with that color; defining a mask for red blood cells; detecting the contours of red blood cells; determining the coordinates of the upper left and lower right corners of red blood cells; calculating the area of the region within the bounding box; entering data about the found objects into the dataframe; and saving to a .csv file for future use. With an unlabeled image dataset and the generated .csv file, any researcher using image processing libraries should be able to recreate the labeled dataset.</p> <p>Results. The developed approach was implemented in software for creating an annotated dataset of blood smear images.</p> <p>Conclusions. The study proposes and justifies an approach to automatically create a set of annotated data. 
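The thresholding-and-bounding-box portion of such a pipeline can be sketched as follows. This is a minimal illustration using a synthetic binary image and SciPy's connected-component labelling in place of the HSV masks and contour detection described above; the image, threshold, and record layout are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

# Synthetic grayscale image with two bright "cells" standing in for a
# real blood-smear image and its HSV mask (illustrative data only).
image = np.zeros((64, 64), dtype=float)
image[5:15, 8:20] = 1.0    # first object
image[30:45, 30:50] = 1.0  # second object

mask = image > 0.5                        # binary mask (thresholding step)
labels, count = ndimage.label(mask)       # connected components ~ contours
rows = []
for slc in ndimage.find_objects(labels):  # one slice pair per detected object
    ymin, ymax = slc[0].start, slc[0].stop - 1   # upper-left / lower-right rows
    xmin, xmax = slc[1].start, slc[1].stop - 1   # upper-left / lower-right cols
    area = (ymax - ymin + 1) * (xmax - xmin + 1) # area inside the bounding box
    rows.append({"xmin": xmin, "ymin": ymin, "xmax": xmax, "ymax": ymax, "area": area})
# rows can then be written to a .csv file, one record per object
```

Each record mirrors the bounding-box attributes the pipeline stores, so writing `rows` out with the `csv` module would reproduce the annotation file format described here.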
The pipeline is tested on a set of unlabelled data, and a set of labelled data is obtained, consisting of cell images and a .csv file with the attributes “file name”, “type”, “xmin”, “ymin”, “xmax”, “ymax”, “area”, where the last five give the coordinates of the bounding box and its area for each object. The numbers of correctly recognised, incorrectly recognised, and unrecognised objects are counted manually, and metrics are calculated to assess the accuracy and quality of object detection and localisation.</p> 2024-04-02T00:00:00+03:00 Copyright (c) 2024 S. M. Kovalenko, O. S. Kutsenko, S. V. Kovalenko, A. S. Kovalenko A RESEARCH OF THE LATEST APPROACHES TO VISUAL IMAGE RECOGNITION AND CLASSIFICATION 2024-03-29T22:35:54+02:00 V. P. Lysechko B. I. Sadovnykov O. M. Komar О. S. Zhuchenko <p>Context. The paper provides an overview of current methods for recognizing and classifying visual images in static images or a video stream. The paper discusses various approaches, including machine learning, the current problems of these methods, and possible improvements. The biggest challenges of the visual image retrieval and classification task are discussed. The main emphasis is placed on a review of such promising algorithms as SSD, YOLO, and R-CNN, covering the principles of these methods and their network architectures.</p> <p>Objective. The aim of the work is to analyze existing studies and find the best algorithm for recognizing and classifying visual images for further activities.</p> <p>Method. The primary method is to compare different characteristics of the algorithms in order to select the most promising one. Several criteria are compared, such as image processing speed and accuracy. There are a number of studies and publications that propose methods and algorithms for solving the problem of finding and classifying visual images in an image [3–6]. It should be noted that the most promising approaches are based on machine learning methods. 
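The selection-by-comparison idea can be sketched as a weighted score over factors such as speed and accuracy. All candidate names are from the survey, but the numbers and weights below are purely illustrative placeholders, not measurements or conclusions from the paper.

```python
# Hypothetical factor values on a common 0-1 scale (illustrative only).
candidates = {
    "Faster R-CNN": {"speed": 0.3, "accuracy": 0.9},
    "SSD":          {"speed": 0.8, "accuracy": 0.7},
    "YOLO":         {"speed": 0.9, "accuracy": 0.8},
}
# Relative importance of each factor (also illustrative).
weights = {"speed": 0.5, "accuracy": 0.5}

def score(factors):
    """Weighted sum of a candidate's factor values."""
    return sum(weights[name] * value for name, value in factors.items())

# The candidate with the highest weighted score is selected.
best = max(candidates, key=lambda name: score(candidates[name]))
```

With real benchmark numbers in place of the placeholders, the same ranking step would select the algorithm for further optimization.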
It is worth noting that the proposed methods have drawbacks due to the imperfect implementation of the Faster R-CNN, YOLO, and SSD algorithms for working with streaming video. The impact of these drawbacks can be significantly reduced by applying the following solutions: development of combined identification methods; processing of edge cases by tracking the position of identified objects; using the difference between video frames; and additional preliminary preparation of input data. Another major area for improvement is the optimization of methods for working with real-time video data, as most current methods focus on still images.</p> <p>Results. As an outcome of the current research, we have identified an optimal algorithm for further research and optimization.</p> <p>Conclusions. Analysis of existing papers and studies has identified the most promising algorithm for further optimization and experiments. Current approaches also still leave room for further improvement. The next step is to take the chosen algorithm and investigate possibilities to enhance it.</p> 2024-04-02T00:00:00+03:00 Copyright (c) 2024 V. P. Lysechko, B. I. Sadovnykov, O. M. Komar, О. S. Zhuchenko UA-LLM: ADVANCING CONTEXT-BASED QUESTION ANSWERING IN UKRAINIAN THROUGH LARGE LANGUAGE MODELS 2024-03-29T23:06:00+02:00 M. V. Syromiatnikov V. M. Ruvinskaya <p>Context. Context-based question answering, a fundamental task in natural language processing, demands a deep understanding of the language's nuances. While being a sophisticated task, it is an essential part of modern search systems, intelligent assistants, chatbots, and the whole Conversational AI field. While English, Chinese, and other widely spoken languages have gathered extensive collections of datasets, algorithms, and benchmarks, the Ukrainian language, with its rich linguistic heritage and intricate syntax, has remained among the low-resource languages in the NLP community, making the question answering problem even harder.</p> <p>Objective. 
The purpose of this work is to establish and benchmark a set of techniques, leveraging Large Language Models and combined in a single framework, for solving the low-resource problem of the context-based question answering task in Ukrainian.</p> <p>Method. A simple yet flexible framework for leveraging Large Language Models, developed as part of this research, highlights two key methods proposed and evaluated in this paper for dealing with a small amount of training data for context-based question answering. The first one utilizes Zero-shot and Few-shot learning – the two major subfields of N-shot learning, where N corresponds to the number of training samples – to build a bilingual instruction-based prompt strategy for language model inference in an extractive manner (finding an answer span in the context) instead of the models' natural generative behavior (summarizing the context according to the question). The second proposed method is based on the first one, but instead of just answering the question, the language model annotates the input context through the generation of question-answer pairs for the given paragraph. This synthetic data is used for extractive model training. This paper explores both augmentation-based training, when some annotated data already exists, and completely synthetic training, when no data is available. The key benefit of these two methods is the ability to obtain comparable prediction quality even without an expensive and long-term human annotation process.</p> <p>Results. The two proposed methods for solving the low-to-zero training data problem for context-based question-answering tasks in Ukrainian were implemented and combined into the flexible LLM experimentation framework.</p> <p>Conclusions. This research comprehensively studied the language understanding capabilities of OpenAI GPT-3.5, OpenAI GPT-4, Cohere Command, and Meta LLaMa-2 applied to context-based question answering in low-resource Ukrainian. 
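The prompt-construction step of the first method can be sketched roughly as follows. The instruction wording, field labels, and sample item are assumptions for illustration, not the authors' actual prompts, and the call to the language model itself is omitted.

```python
def build_prompt(context: str, question: str,
                 examples: list[tuple[str, str, str]]) -> str:
    """Assemble an instruction-based N-shot prompt for extractive QA.

    `examples` holds (context, question, answer) demonstrations; an empty
    list corresponds to Zero-shot, a non-empty one to Few-shot prompting.
    """
    parts = [
        # Instruction forcing extractive rather than generative behavior
        # (illustrative wording, not the paper's actual instruction).
        "Answer with an exact span copied from the context. "
        "Do not paraphrase or summarize."
    ]
    for ex_context, ex_question, ex_answer in examples:
        parts.append(f"Context: {ex_context}\nQuestion: {ex_question}\nAnswer: {ex_answer}")
    # The target item ends with an open "Answer:" slot for the model to fill.
    parts.append(f"Context: {context}\nQuestion: {question}\nAnswer:")
    return "\n\n".join(parts)

# Zero-shot usage with a hypothetical Ukrainian paragraph and question.
prompt = build_prompt("Київ є столицею України.",
                      "Яке місто є столицею України?", [])
```

A bilingual variant would simply render the instruction sentence in both English and Ukrainian; the extractive framing is what lets the returned span be scored against gold annotations.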
The thorough evaluation of the proposed methods on a diverse set of metrics proves their efficiency, unveiling the possibility of building components of search engines, chatbot applications, and standalone general-domain CBQA systems with Ukrainian language support while having almost zero annotated data. The prospect for further research is to extend the scope from the CBQA task evaluated in this paper to all major NLU tasks, with the final goal of establishing a complete benchmark for evaluating LLMs' capabilities in the Ukrainian language.</p> 2024-04-02T00:00:00+03:00 Copyright (c) 2024 M. V. Syromiatnikov, V. M. Ruvinskaya IN-MEMORY INTELLIGENT COMPUTING 2024-03-30T10:50:17+02:00 V. I. Hahanov V. H. Abdullayev S. V. Chumachenko E. I. Lytvynova I. V. Hahanova <p>Context. Processed big data has social significance for the development of society and industry. Intelligent processing of big data is a condition for creating the collective mind of a social group, a company, a state, and the planet as a whole. At the same time, the economy of big data (Data Economy) takes first place in the evaluation of processing mechanisms, since two parameters are very important: the speed of data processing and energy consumption. Therefore, mechanisms focused on the parallel processing of large data within the data storage center will always be in demand on the IT market.</p> <p>Objective. The goal of the investigation is to improve the economy of big data (Data Economy) through the analysis of data as truth table addresses for the identification of patterns of production functionalities based on the similarity-difference metric.</p> <p>Method. Intelligent computing architectures are proposed for managing cyber-social processes based on the monitoring and analysis of big data. It is proposed to process big data as truth table addresses to solve the problems of identification, clustering, and classification of patterns of social and production processes. 
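The data-as-addresses idea can be illustrated with a minimal sketch: each pattern is unitarily encoded as a binary vector over a universe of primitives, and that vector, read as an integer, addresses a memory cell, so equal patterns hit the same address and similarity-difference reduces to bitwise operations. The universe, patterns, and helper names are illustrative, not the authors' implementation.

```python
universe = ["sensor", "alarm", "valve", "pump"]  # universe of primitive data items

def encode(pattern):
    """Unitary encoding: one bit per primitive present in the pattern."""
    return sum(1 << i for i, item in enumerate(universe) if item in pattern)

# Memory cells addressed directly by the encoded vector: writing a pattern is a
# read-modify-write transaction at its own address, so identical patterns are
# equated at the same address and a counter stands in for their multiplicity.
memory = {}
for pattern in [{"sensor", "alarm"}, {"valve"}, {"sensor", "alarm"}]:
    addr = encode(pattern)
    memory[addr] = memory.get(addr, 0) + 1

def similarity_difference(a, b):
    """Shared bits (similarity) and differing bits (difference) of two codes."""
    return bin(a & b).count("1"), bin(a ^ b).count("1")
```

In a hardware realization the dictionary would be an exponentially large but processorless memory column, which matches the trade-off stated below: linear-time address transactions paid for with exponential memory cost.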
A family of automata is offered for the analysis of big data treated as addresses. The truth table is considered a reasonable form of explicit data structure that has a useful constant – a standard address routing order. The goal of processing big data is to make it structured using a truth table for further identification before making actuator decisions. The truth table is considered a mechanism for the parallel structuring and packing of large data in its columns to determine their similarity-difference and to equate data at the same addresses. The representation of data as addresses is associated with the unitary encoding of patterns by binary vectors on the found universe of primitive data. The mechanism is focused on processorless data processing based on read-write transactions using in-memory computing technology, with significant time and energy savings. The metrics of truth-table big data processing are parallelism, technological simplicity, and linear computational complexity. The price for such advantages is the exponential memory cost of storing explicit structured data.</p> <p>Results. Parallel in-memory computing algorithms are proposed as economical mechanisms for transforming large unstructured data, treated as addresses, into useful structured data. An in-memory computing architecture with global feedback and an algorithm for matrix parallel processing of large data treated as addresses are proposed. It includes a framework for matrix analysis of big data to determine the similarity between vectors that are input to the matrix sequencer. Vector data analysis is transformed into matrix computing for big data processing. The speed of the parallel algorithm for analyzing big data on the MDV matrix of deductive vectors depends linearly on the number of bits of the input vectors, i.e. the power of the universe of primitives. A method of identifying patterns using keywords has been developed. 
It is characterized by the use of unitary coded data components for the synthesis of the truth table of the business process. This makes it possible to use read-write transactions for the parallel processing of large data treated as addresses.</p> <p>Conclusions. The scientific novelty consists in the development of the following innovative solutions: 1) a new vector-matrix technology for the parallel processing of large data treated as addresses is proposed, characterized by the use of read-write transactions on matrix memory without the use of processor logic; 2) an in-memory computing architecture with global feedback and an algorithm for matrix parallel processing of large data treated as addresses are proposed; 3) a method of identifying patterns using keywords is proposed, characterized by the use of unitary coded data components for the synthesis of the truth table of the business process, which makes it possible to use read-write transactions for the parallel processing of large data treated as addresses. The practical significance of the study is that any task of artificial intelligence (similarity-difference, classification-clustering and recognition, pattern identification) can be solved technologically simply and efficiently with the help of a truth table (or its derivatives) and unitarily coded big data. Research prospects are related to the implementation of this digital modeling technology in devices on the EDA market.</p> 2024-04-02T00:00:00+03:00 Copyright (c) 2024 V. I. Hahanov, V. H. Abdullayev, S. V. Chumachenko, E. I. Lytvynova, I. V. Hahanova DECISION-MAKING MODELS AND THEIR APPLICATION IN TRANSPORT DELIVERY OF BUILDING MATERIALS 2024-03-29T12:17:49+02:00 A. M. Bashkatov O. A. Yuldashova <p>Context. The task is to determine a generalized parameter that provides a comprehensive assessment of the criteria affecting the sequence in which orders for the manufacture and delivery of products to the customer are executed.</p> <p>Objective. 
The purpose of the work is to develop an algorithm for calculating priorities when solving the problem of transport services under uncertainty of choice.</p> <p>Method. When considering the problem of order-fulfillment efficiency, the reasons are given that affect the efficiency of the tasks being solved for delivering paving slabs to the customer in the shortest possible time. To select a scheme reflecting the main stages of decision-making, a justification and a comparative analysis of existing models were carried out. The criteria for the requirements for describing such models have been determined. It is indicated that the objective function depends on a group of causes, i.e., it represents a composite indicator. The stochastic nature of these factors led to the use of statistical analysis methods for their assessment. The limits of variation of the parameters used in the calculations are established. The solution of the multicriteria problem consists in reducing the acting factors to one unconditional indicator, grouping, and subsequently ranking their values. The decision and the choice of the indicator depend on the set threshold and the priority level of the factor. The indices that form the priority of a factor are determined analytically or expertly. The sequence of actions performed is presented in the form of an algorithm, which makes it possible to automate model selection and the calculation of indicators. To assess the adequacy of the proposed solutions, tables of comparative results for selecting the priority of executed orders are given.</p> <p>Results.
The method provides a comprehensive approach to accounting for the heterogeneous factors that determine the sequence in which orders are selected when making managerial decisions, ensuring a useful effect (streamlining the schedule for delivering paving slabs to the customer) by ranking the values of priority indices.</p> <p>Conclusions. The proposed scheme for the transition to a complex unconditional indicator (priority index) makes it possible to quantitatively substantiate the procedure for choosing the next order when performing work. A special feature is that the list of acting factors can be changed (reduced or supplemented with new criteria). The values of these parameters will improve and become more reliable as the experimental design expands, depending on the retrospective of their receipt and the accuracy of the data. As a prospect, the proposed method can be extended by optimizing the selection of applications using queuing methods (for the type of the corresponding flow – homogeneous, without aftereffects, stationary, gamma flow, etc.).</p> 2024-04-02T00:00:00+03:00 Copyright (c) 2024 О. М. Башкатов, О. О. Юлдашова DESIGN MODELS OF BIT-STREAM ONLINE-COMPUTERS FOR SENSOR COMPONENTS 2024-03-29T12:58:10+02:00 L. V. Larchenko A. V. Parkhomenko B. D. Larchenko V. R. Korniienko <p>Context. Currently, distributed real-time control systems require devices that perform online computing operations close to the sensor. The proposed online-computers of elementary mathematical functions can be used as components for the functional conversion of signals in the form of pulse streams received from measuring sensors with frequency output.</p> <p>Objective.
The objective of the study is to develop mathematical, architectural, and automata models for the design of bit-stream online-computers of elementary mathematical functions, creating a unified approach to their design that increases the accuracy of function calculation, expands functional capabilities, reduces hardware costs, and improves design efficiency.</p> <p>Method. Mathematical models of the devices were developed using the method of forming increments of ascending step functions based on inverse functions with minimization of the calculation error. Automata models of the online-computers based on Moore finite state machines have been developed; their graph diagrams ensure the clarity of the function implementation algorithms and increase the visibility and invariance of implementation in formal programming and hardware-description languages.</p> <p>Results. The paper presents the results of research, development, and practical approbation of design models of bit-stream online-computers of power functions and the root extraction function. A generalized architecture of an online-computer is proposed.</p> <p>Conclusions. The considered functional online-computers are effective from the point of view of calculation accuracy, simplicity of technical implementation, and universality of architecture.</p> 2024-04-02T00:00:00+03:00 Copyright (c) 2024 Л. В. Ларченко, А. В. Пархоменко, Б. Д. Ларченко, В. Р. Корнієнко EVALUATION OF THE INFLUENCE OF ENVIRONMENTAL FACTORS AND COGNITIVE PARAMETERS ON THE DECISION-MAKING PROCESS IN HUMAN-MACHINE SYSTEMS OF CRITICAL APPLICATION 2024-03-29T16:26:26+02:00 V. I. Perederyi E. Y. Borchik V. V. Zosimov O. S. Bulgakova <p>Context. A feature of real-time human-machine systems of critical application is that they include as elements both technical systems and the people interacting with them.
At the same time, the main difficulties are associated not only with improving hardware and software, but also with the insufficient development of methods for reliably predicting the impact of the production environment on the human factor and, as a result, on the relevance of decisions made by decision makers. Consequently, the task of developing methods for determining the mutual influence of environmental factors and the cognitive parameters of decision makers on the decision-making process (DMP) becomes highly relevant.</p> <p>Objective. The aim of the work is to propose methodological foundations for the development and study of fuzzy hierarchical relational cognitive models (FHRCM) to determine the influence of environmental factors and the cognitive parameters of decision makers on the DMP.</p> <p>Method. When building the FHRCM, “soft computing” methods and the methodologies of cognitive and fuzzy cognitive modeling were used, providing an acceptable formalization of the uncertainty in the mutual influence of factors on the DMP.</p> <p>Results. A fuzzy cognitive model based on a fuzzy Bayesian belief network has been developed, which makes it possible to connect qualitative and quantitative assessments of the factors mutually influencing the DMP. The proposed model makes it possible to probabilistically predict the influence of factors and choose rational ways of their interaction in the DMP.</p> <p>Conclusions. The results of the experiments make it possible to recommend the developed model, which takes into account the mutual influence of factors of various natures, including cognitive ones, in the DMP in order to improve the efficiency of human-machine systems of critical application (HMSCA) management as a whole.</p> 2024-04-02T00:00:00+03:00 Copyright (c) 2024 V. I. Perederyi, E. Y. Borchik, V. V. Zosimov, O. S. Bulgakova A NONLINEAR REGRESSION MODEL FOR EARLY LOC ESTIMATION OF OPEN-SOURCE KOTLIN-BASED APPLICATIONS 2024-03-29T17:16:06+02:00 S. B. Prykhodko N. V. Prykhodko A. V. Koltsov <p>Context.
Early lines of code (LOC) estimation in software projects holds significant importance, as it directly influences the prediction of development effort across a spectrum of programming languages, and for open-source Kotlin-based applications in particular. The object of the study is the process of early LOC estimation of open-source Kotlin-based apps. The subject of the study is nonlinear regression models for early LOC estimation of open-source Kotlin-based apps.</p> <p>Objective. The goal of the work is to build a nonlinear regression model with three predictors for early LOC estimation of open-source Kotlin-based apps based on the Box-Cox four-variate normalizing transformation, in order to increase the confidence of early LOC estimation for these apps.</p> <p>Method. For early LOC estimation of open-source Kotlin-based apps, the model and the confidence and prediction intervals of nonlinear regression were constructed using the Box-Cox four-variate normalizing transformation and specialized techniques. These techniques, relying on multiple nonlinear regression analysis incorporating multivariate normalizing transformations, account for the dependencies between variables in non-Gaussian data scenarios. As a result, this method tends to reduce the mean magnitude of relative error (MMRE) and to narrow the confidence and prediction intervals compared to models utilizing univariate normalizing transformations.</p> <p>Results. An analysis has been carried out to compare the constructed model with nonlinear regression models employing the decimal logarithm and the Box-Cox univariate transformation.</p> <p>Conclusions. The nonlinear regression model with three predictors for early LOC estimation of open-source Kotlin-based apps is constructed using the Box-Cox four-variate transformation.
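As a simplified illustration of the modelling idea (a univariate Box-Cox transform with one predictor and toy data, not the authors' four-variate transformation with three predictors; the exponent 0.5 and all numbers are assumptions):

```python
# A simplified sketch of the modelling idea: normalize a skewed response with
# a univariate Box-Cox transform, fit linear regression in the transformed
# space, and back-transform predictions. The authors use a four-variate
# Box-Cox transform and three predictors; lambda = 0.5 and the toy data here
# are illustrative assumptions only.
import math

LAM = 0.5  # assumed Box-Cox exponent

def boxcox(y, lam=LAM):
    return (y**lam - 1.0) / lam if lam else math.log(y)

def inv_boxcox(z, lam=LAM):
    return (lam * z + 1.0) ** (1.0 / lam) if lam else math.exp(z)

def fit_ols(xs, ys):
    """Ordinary least squares for one predictor: returns (b0, b1)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
         sum((x - mx) ** 2 for x in xs)
    return my - b1 * mx, b1

# Toy data: one predictor (e.g. number of classes) vs. LOC of an app.
classes = [5, 10, 20, 40, 80]
loc = [400, 900, 2100, 4300, 8800]

zx = [boxcox(x) for x in classes]
zy = [boxcox(y) for y in loc]
b0, b1 = fit_ols(zx, zy)

def predict_loc(n_classes):
    return inv_boxcox(b0 + b1 * boxcox(n_classes))
```

Fitting in the transformed (approximately normal) space and back-transforming is what produces the nonlinear model and its asymmetric prediction intervals in the original LOC scale.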
Compared to the other nonlinear regression models, this model demonstrates a larger multiple coefficient of determination, a smaller MMRE value, and narrower confidence and prediction intervals. Prospects for further research include applying other data sets to construct nonlinear regression models for early LOC estimation of open-source Kotlin-based apps under other restrictions on the predictors.</p> 2024-04-02T00:00:00+03:00 Copyright (c) 2024 S. B. Prykhodko, N. V. Prykhodko, A. V. Koltsov METHOD OF EVALUATION THE EFFICIENCY OF FIBER-OPTIC CABLES MODELS WITH MULTI-MODULAR DESIGN BASED ON MASS AND DIMENSIONAL INDICATORS 2024-03-20T10:35:46+02:00 O. V. Bondarenko D. M. Stepanov O. O. Verbytskyi S. V. Siden <p>Context. Today, the leading cable production plants in many countries manufacture single- and multi-module designs of fiber-optic cables (FOC) with different protective covers and numbers of fibers. This creates a wide range of possible FOC models for different consumer (buyer) requirements. However, the lack of price transparency of FOC for the consumer, in particular for the project organization, and the manufacturer's desire to save on production create a need to develop and research a method for evaluating the effectiveness of multi-module FOC designs. In this work, it is proposed to do this by analyzing a number of optical cable models according to two criteria parameters – the compactness coefficient and the economic efficiency coefficient of the FOC by diameter.</p> <p>Objective. To develop a method for evaluating the efficiency of multi-module fiber-optic cable models based on mass and dimensional indicators, which will make it possible to quickly choose an appropriate FOC model for the given initial data.</p> <p>Method. A method for evaluating the efficiency of a modular FOC design has been developed and proposed.
The method is based on comparing cable models and selecting the most appropriate of them for given initial data. The paper proposes and introduces two criteria parameters for this – the compactness coefficient υ and the efficiency coefficient by cable diameter E0 – which relate the design characteristics of the FOC to a certain parameter of its structure. The most effective FOC model (design), compared with the basic models specified by the technical conditions, is the one with lower material costs for production while still meeting the specified requirements for the cable (first of all, the number of fibers and mechanical strength), from the point of view of both the manufacturer and the customer. This makes it possible, at the cable design stage, to choose an appropriate model for the given initial parameters and to develop an FOC design that minimizes the dimensions (and therefore the material consumption and cost) of the model without losing its quantitative and qualitative characteristics.</p> <p>Results. The paper presents the results of the development and study of the method for evaluating the efficiency of multi-module FOC based on mass and dimensional indicators. For example, using the developed method, it is shown that it is possible to choose an FOC model with a diameter 10.9% smaller and to save 15.5% of the cable cost for each kilometer of the fiber-optic communication line while meeting the initial requirements for the cable.</p> <p>Conclusions. The scientific novelty of the results is that a method for evaluating the efficiency of a modular FOC design has been developed for the first time; it allows, at the cable design stage, comparing a model with the cable design specified by the technical conditions (TC) and making an appropriate choice of this model for the given initial parameters.
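The ranking step can be sketched as follows; the abstract does not give the formulas for υ and E0, so the definitions and candidate models below are placeholder assumptions for illustration only:

```python
# A sketch of ranking candidate FOC models by simple mass-dimensional
# criteria. The exact formulas for the compactness coefficient (v) and the
# diameter-efficiency coefficient (E0) are not given in the abstract; the
# definitions and the candidate models below are placeholder assumptions.
import math

def compactness(n_fibers, outer_diameter_mm):
    """Assumed: fibers per unit of cable cross-section area, mm^-2."""
    area = math.pi * (outer_diameter_mm / 2.0) ** 2
    return n_fibers / area

def diameter_efficiency(n_fibers, outer_diameter_mm):
    """Assumed: fibers per millimetre of outer diameter."""
    return n_fibers / outer_diameter_mm

# Hypothetical candidate models: (name, fiber count, outer diameter in mm).
models = [
    ("FOC-A", 48, 9.2),
    ("FOC-B", 48, 10.4),
    ("FOC-C", 96, 12.8),
]

required_fibers = 48
feasible = [m for m in models if m[1] >= required_fibers]
best = max(feasible, key=lambda m: (compactness(m[1], m[2]),
                                    diameter_efficiency(m[1], m[2])))
```

Any model meeting the fiber-count requirement is feasible; among feasible models, the one packing the most fibers per unit of cross-section (and per millimetre of diameter) wins the comparison.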
The practical significance lies in the possibility of using this method for accelerated selection of a cable model at the design stage, while providing the necessary optical-fiber capacity of the FOC and minimizing its material cost and dimensions.</p> 2024-04-02T00:00:00+03:00 Copyright (c) 2024 O. V. Bondarenko, D. M. Stepanov, O. O. Verbytskyi, S. V. Siden TWO-FRAGMENT NON-LINEAR-FREQUENCY MODULATED SIGNALS WITH ROOTS OF QUADRATIC AND LINEAR LAWS FREQUENCY CHANGES 2024-03-28T19:23:21+02:00 O. O. Kostyria A. A. Нryzo H. V. Khudov O. M. Dodukh B. А. Lisohorskyi <p>Context. The rapid development of digital synthesis and processing of radar signals observed in recent decades has practically removed restrictions on implementing arbitrary laws of frequency modulation of radio oscillations. Along with the traditional use of linearly frequency-modulated signals, modern radar systems use probing signals with nonlinear frequency modulation, which provide a lower maximum side-lobe level and a higher rate of side-lobe descent. These factors, in turn, improve target detection under passive interference and increase the probability of detecting small targets against the background of targets with larger effective scattering surfaces. In this regard, many studies are aimed at further improving existing radar signals and synthesizing signals with new laws of frequency modulation. The use of multifragment nonlinear-frequency-modulated signals, which include fragments with both linear and nonlinear modulation, increases the number of possible versions of frequency-modulation laws and enables the synthesis of signals with predicted characteristics.
The synthesis of new multifragment signals with a reduced side-lobe level of the autocorrelation function and a higher rate of side-lobe descent is an important scientific and technical task, to which this article is devoted.</p> <p>Objective. The purpose of the work is to develop mathematical models, in current and shifted time, of two-fragment nonlinear-frequency-modulated signals for the case when the first fragment has root-quadratic and the second linear frequency modulation, and to determine the feasibility of using such a signal in radar applications.</p> <p>Method. The article theoretically confirms that, for the current-time mathematical model, jumps of instantaneous frequency and phase (or of phase only, for the shifted-time mathematical model) occur at the junction when moving from the first fragment to the second, and these can significantly distort the resulting signal. The values of the frequency-phase jumps are determined, for their subsequent elimination, by finding the difference between the initial phase of the second fragment and the final phase of the first fragment. A distinctive feature of the developed mathematical models is the use of a first fragment with root-quadratic and a second fragment with linear frequency modulation.</p> <p>Results. Comparison of the signal whose first fragment has root-quadratic frequency modulation with a signal composed of two linearly frequency-modulated fragments, given equal total duration and frequency deviation, shows that for the newly synthesized signal the maximum side-lobe level decreased by 1.5 dB and the side-lobe decay rate increased by 6.5 dB/dec.</p> <p>Conclusions. A new two-fragment signal was synthesized, the first fragment of which has root-quadratic and the second linear frequency modulation. Mathematical models in current time and with a time shift for calculating the instantaneous phase of such a signal have been developed.
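The phase-stitching idea can be sketched as follows (a generic square-root frequency law stands in for the paper's root-quadratic law, and all constants are assumptions):

```python
# A sketch of stitching a two-fragment FM signal with phase-jump compensation
# at the fragment junction, as the abstract describes. The paper's exact
# root-quadratic law is not reproduced here: f1(t) below is a simple
# square-root frequency law and f2(t) a linear one, as assumptions.
import math

F0, B1, B2 = 1.0e6, 0.5e6, 0.5e6   # Hz: start frequency and deviations
T1, T2 = 0.5e-3, 0.5e-3            # s: fragment durations

def f1(t):                         # assumed root-law frequency, 0 <= t <= T1
    return F0 + B1 * math.sqrt(t / T1)

def f2(tau):                       # linear frequency law, 0 <= tau <= T2
    return F0 + B1 + B2 * tau / T2

def phase(freq, t_end, steps=4000):
    """Instantaneous phase 2*pi*integral(f dt), trapezoidal rule."""
    dt = t_end / steps
    acc = 0.0
    for i in range(steps):
        acc += 0.5 * (freq(i * dt) + freq((i + 1) * dt)) * dt
    return 2.0 * math.pi * acc

def signal(t):
    """Phase-continuous two-fragment signal: fragment 2 starts from the
    final phase of fragment 1 (the compensation component)."""
    if t <= T1:
        return math.cos(phase(f1, t))
    return math.cos(phase(f1, T1) + phase(f2, t - T1))
```

Because fragment 2 starts from the accumulated phase of fragment 1, and f2(0) equals f1(T1) by construction, neither a phase nor a frequency jump appears at the junction.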
A distinctive feature of these models is the presence of components that compensate for frequency-phase distortions, taking into account the frequency-modulation law of the first fragment. The resulting oscillograms, spectra, and autocorrelation functions of the synthesized two-fragment signals do not contradict the known theoretical positions, which indicates the reliability and adequacy of the proposed mathematical models.</p> 2024-04-02T00:00:00+03:00 Copyright (c) 2024 O. O. Kostyria, A. A. Нryzo, H. V. Khudov, O. M. Dodukh, B. А. Lisohorskyi RESEARCH OF THE FEATURES OF DIGITAL SIGNAL FORMATION IN SATELLITE COMMUNICATION LINES 2024-03-28T20:13:30+02:00 V. І. Мagro O. G. Panfilov <p>Context. Remote sensing of the Earth is now widely used in various fields. One of the challenges of remote sensing is the creation of inexpensive satellite systems operating in polar circular orbits. These systems require the development of a reception-transmission system that allows tens of gigabits of video information to be transmitted to an earth receiving station within ten minutes. That is, there is a need for a communication system that provides high-speed data transmission from small satellites weighing up to 50 kg.</p> <p>Objective. The aim of the work is to study the features of digital signal formation in modern satellite communication lines and to develop a communication system with a high data transfer rate (typically 300 Mbit/s) that can be applied to small Earth-observation satellites.</p> <p>Method. A concept is proposed for building a high-speed data transmitter for a remote-sensing earth satellite using commercially available off-the-shelf technology. Calculations of the power flux density created at the input of the receiving earth station were performed to determine the possible power of the on-board transmitter.</p> <p>Results.
A diagram of a communication system based on the DVB-S standard using commercially available off-the-shelf products has been developed. The high-speed data transmitter is implemented on a Xilinx® Zynq Ultrascale+™ MPSoC FPGA, located on an Enclustra Mercury XU8 module with a high-performance dual 16-bit AD9174 DAC. The on-board transmitter, with a power of up to 2 W, meets the requirements of the ITU Radio Regulations for the power flux density on the surface of the Earth created by the radiation of an EESS space station in the 8025–8400 MHz range. It is shown that a 3 dB energy reserve of the communication line is achieved for various coding and modulation change commands as the elevation angle increases, which makes it possible to increase the information transmission rate.</p> <p>Conclusions. An original receiving-transmitting system was developed for use in small satellites for remote sensing of the Earth. It is shown that the adaptive coding and modulation (ACM) function of the DVB-S standard automatically changes the transmission parameters in real time depending on changing channel conditions, providing more flexible and effective data transmission in various conditions and increasing the volume of information transmitted per communication session. The proposed system operates in the X-band and is built using commercially available off-the-shelf products. An antenna with double circular polarization is used as the emitter. Two physical channels represent two polarization modes – right circular polarization and left circular polarization – each of which has three frequency channels.</p> 2024-04-02T00:00:00+03:00 Copyright (c) 2024 V. І. Мagro, O. G. Panfilov SCATTERING OF ELECTROMAGNETIC WAVES ON FLAT GRID TWO-PERIODIC STRUCTURES 2024-03-28T22:15:22+02:00 V. A. Vanin I. I. Pershyna <p>Context.
One of the scientific hypotheses for creating nonreciprocal optical metasurfaces is based on the use of a wave channel in which the rays of the direct and reverse diffraction scenarios are realized on two-periodic flat structures with nonlinear elements. Such processes in the nanometer wavelength range of electronic devices require precise calculations of the interaction of waves with the microstructures of devices. This is also important for describing the behavior of antenna devices in mobile communications. Expanding the wavelength range of stable communication is achieved by using prefractal structures in antenna devices in combination with periodic structuring. Similar modeling problems arise when electromagnetic waves penetrate materials with a crystalline structure (radio transparency).</p> <p>Objective. To test this hypothesis, it is necessary to carry out mathematical modeling of the scattering of electromagnetic waves by metasurfaces under conditions of excitation of several diffraction orders. It is known that among two-periodic flat lattices of different structures there are five types that fill the plane: the Bravais lattices. The problem considered is the scattering of an incident monochromatic TE-polarized wave on a metal screen with recesses arranged in two-periodic structures and filled with silicon.</p> <p>Method. The paper builds mathematical models for studying the spatial-amplitude spectra of metasurfaces on Bravais lattices and gives some results of their numerical study. A condition for determining the diffraction orders propagating over the grating is proposed. The scattered field amplitudes are obtained from the solution of the boundary value problem for the Helmholtz equation in the COMSOL Multiphysics 5.4 package. Similar problem formulations are possible when studying the penetration of an electromagnetic field into a crystalline substance.</p> <p>Results.
Relations for the diffraction orders of electromagnetic waves scattered by a diffraction grating are obtained. The existence of incident wavelengths for which a two-periodic lattice produces no reflected wave is shown for different shapes (rectangular, square, hexagonal) of periodic elements, in the center of which a recess filled with silicon was made. Distributions of the reflection coefficients for different geometric sizes of the colored elements and recesses are given. The characteristics of the electric field at resonant modes, shown as isolines of the modulus, reveal the nature of the interaction of the field over the periodic lattice and the scatterer-depressions. At the resonant wavelengths of the incident waves, standing waves appear in the scatterers.</p> <p>Conclusions. A mathematical model of the set of diffraction orders propagating from a square or hexagonal lattice into the half-space z &gt;= 0 is proposed. It has been shown that a flat periodic lattice in metal with square or hexagonal periodicity elements and resonant scatterers in the form of cylindrical recesses filled with silicon can produce a non-mirror scattered field. The response of the lattices to changes in the wavelength of the incident field, expressed in the structure of the diffraction orders of the scattered field, and their high sensitivity to rotation of the plane of incidence were revealed. Two-periodic lattices have prospects for creating anti-reflective surfaces for various devices, laser or sensor electronic devices, antennas in mobile communication elements, and radio-transparency elements. Their manufacturing technologies are more mature than those of spatial crystal structures.</p> 2024-04-02T00:00:00+03:00 Copyright (c) 2024 V. A. Vanin, I. I. Pershyna REALIZATION OF THE DECISION-MAKING SUPPORT SYSTEM FOR TWITTER USERS’ PUBLICATIONS ANALYSIS 2024-03-30T11:25:30+02:00 T. Batiuk D. Dosyn <p>Context.
The paper emphasizes the need for a decision-making system that can analyze users’ messages and determine their sentiment in order to understand how news and events impact people’s emotions. Such a system would employ advanced techniques to analyze users’ messages, delving into the sentiment expressed within the text. The primary goal is to gain insights into how news and various events reverberate through people’s emotions.</p> <p>Objective. The objective is to create a decision-making system that can analyze and determine the sentiment of user messages, understand the emotional response to news and events, and distribute the data into clusters to gain a broader understanding of users’ opinions. This multifaceted objective involves the integration of advanced techniques in natural language processing and machine learning to build a robust decision-making system. The primary goals are sentiment analysis, comprehension of emotional responses to news and events, and data clustering for a holistic view of user opinions.</p> <p>Method. The use of long short-term memory (LSTM) neural networks for sentiment analysis and the k-means algorithm for data clustering is proposed for processing large volumes of user data. This combination aims to tackle the challenges posed by processing large volumes of user-generated data in a more nuanced and insightful manner.</p> <p>Results. The study and conceptual design of the decision-making system have been completed, and the system was created. It incorporates sentiment analysis and data clustering to understand users’ opinions and the sentiment value of those opinions, dividing them into clusters and visualizing the findings.</p> <p>Conclusions. The conclusion is that the development of a decision-making system capable of analyzing user sentiment and clustering data can provide valuable insights into users’ reactions to news and events in social networks.
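The clustering stage can be sketched with a plain k-means implementation (the LSTM stage is not reproduced; the two-component sentiment vectors below are assumed outputs of it):

```python
# A minimal k-means sketch (pure Python, not the full LSTM pipeline): each
# message is assumed to be already reduced to a feature vector, e.g. a
# sentiment score pair produced by the LSTM stage.
import random

def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean).
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [
            tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return centers, clusters

# Toy sentiment vectors: (positivity, arousal) per message.
msgs = [(0.9, 0.8), (0.85, 0.7), (0.1, 0.2), (0.15, 0.3), (0.12, 0.25)]
centers, clusters = kmeans(msgs, k=2)
```

On this toy data, the two strongly positive messages and the three negative ones separate into distinct clusters, which is exactly the grouping-by-opinion behaviour the system relies on at scale.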
The proposed use of long short-term memory neural networks and the k-means algorithm is suitable for sentiment analysis and data clustering tasks. The importance of studying existing works and systems to understand available algorithms and their applications is emphasized. The article also describes the created and implemented decision-making system and demonstrates its functionality on a sample dataset.</p> 2024-04-02T00:00:00+03:00 Copyright (c) 2024 T. Batiuk, D. Dosyn METHOD OF CREATING A MINIMAL SPANNING TREE ON AN ARBITRARY SUBSET OF VERTICES OF A WEIGHTED UNDIRECTED GRAPH 2024-03-30T11:58:34+02:00 V. M. Batsamut S. O. Hodlevsky Yu. P. Babkov D. A. Morkvin <p>Context. The relevance of the article is determined by the need for further development of models for the optimal restoration of the connectivity of network objects that have undergone fragmentation due to emergency situations of various origins. The method proposed in this article resolves the problem of minimizing the amount of restoration work (total financial cost) when promptly restoring the connectivity of a selected subset of elements of a network object after its fragmentation.</p> <p>The purpose of the study is to develop a method for creating a minimal spanning tree on an arbitrary subset of vertices of a weighted undirected graph, in order to minimize the amount of restoration work and/or the total financial cost when promptly restoring the connectivity of elements that have a higher level of importance in the structure of a fragmented network object.</p> <p>Method. The developed method is based on the idea of searching for local minima in the structure of a model undirected graph using graph vertices that are not included in the list of base vertices to be united by a minimal spanning tree. The search for local minima uses the concept of an equilateral triangle and a radial structure in such a triangle.
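The subset-connectivity task itself can be sketched with the standard metric-closure approximation (Floyd-Warshall shortest paths plus Prim's MST); note this is a stand-in for the task, not the authors' triangle-based local-minima search:

```python
# A sketch of the underlying task: connect a chosen subset of "base" vertices
# of a weighted graph as cheaply as possible. It uses the standard
# metric-closure approximation (Floyd-Warshall + Prim's MST over the base
# vertices), not the authors' triangle-based local-minima search.
INF = float("inf")

def subset_mst_weight(n, edges, base):
    """n vertices 0..n-1, edges = [(u, v, w)], base = subset to connect."""
    # Metric closure: all-pairs shortest-path distances.
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    # Prim's MST over the base vertices using closure distances.
    base = list(base)
    in_tree = {base[0]}
    total = 0
    while len(in_tree) < len(base):
        w, v = min((d[u], x)[0][x], x) if False else min(
            (d[u][x], x) for u in in_tree for x in base if x not in in_tree)
        total += w
        in_tree.add(v)
    return total
```

The authors' contribution is precisely to improve on this kind of baseline: intermediate non-base vertices that realize local minima are promoted into the base set before the spanning tree is built, which can beat the pure metric-closure cost.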
Four types of substructures provide local minima: first, those with one common base vertex; second, those with two common base vertices; third, those with three common base vertices; fourth, those without common base vertices, located in different parts of the model graph. The vertices that are not included in the list of base vertices but through which local minima are achieved are added to the base set. The other (non-base) vertices, along with their incident edges, are removed from the structure of the model graph. Then, using one of the well-known methods of forming spanning trees, a minimal spanning tree uniting the set of base vertices is formed on the structure obtained in this way.</p> <p>Results. 1) A method for creating a minimal spanning tree on an arbitrary subset of vertices of a weighted undirected graph has been developed. 2) A set of criteria for determining local minima in the structure of the model graph is proposed. 3) The method has been verified on test problems.</p> <p>Conclusions. The theoretical studies and several experiments confirm the efficiency of the developed method. The solutions obtained using the method are accurate, which makes it possible to recommend it for practical use in determining strategies for restoring the connectivity of fragmented network objects.</p> 2024-04-02T00:00:00+03:00 Copyright (c) 2024 V. M. Batsamut, S. O. Hodlevsky, Yu. P. Babkov, D. A. Morkvin THE DESIGN OF THE PIPELINED RISC-V PROCESSOR WITH THE HARDWARE COPROCESSOR OF DIGITAL SIGNAL PROCESSING 2024-03-30T12:40:22+02:00 Y. Y. Vavruk V. V. Makhrov H. O. Hedeon <p>Context. Digital signal processing is applied in many fields of science, technology, and human activity.
One of the ways of implementing digital signal processing algorithms is the development of coprocessors as an integral part of well-known architectures.</p> <p>In the case of a pipelined device, the presented approach makes it possible to use the software and hardware tools of the corresponding architecture, provides faster execution of signal processing algorithms, and reduces the number of cycles and memory accesses.</p> <p>Objective. The objectives are the design and characterization of a pipelined RISC-V processor with a digital signal processing coprocessor that performs the fast Fourier transform.</p> <p>Method. Analysis of the technical literature and existing solutions makes it possible to assess the advantages and disadvantages of modern developments and, on that basis, to establish the relevance of the selected topic. Model design and simulation results make it possible to examine the model's efficiency, identify its weak components, and improve the model parameters.</p> <p>Results. A pipelined RISC-V processor executing a basic set of instructions has been designed. The execution time of an assembly program on the single-cycle and pipelined processors has been analyzed. According to the results, the test program executes in 29 cycles on the pipelined processor, versus 60 cycles on the single-cycle processor. The structure of the coprocessor for the fast Fourier transform algorithm and a set of processor instructions for working with the coprocessor have been developed. The cycle count of the coprocessor based on the Radix-2 fast Fourier transform algorithm is 2358 cycles for 512 points and 5180 cycles for 1024 points.</p> <p>Conclusions. The conducted research and calculations have shown that the developed hardware coprocessor reduces the execution time of the fast Fourier transform algorithm and the load on the pipelined processor during calculations.</p> 2024-04-02T00:00:00+03:00 Copyright (c) 2024 Y. Y. Vavruk, V. V. Makhrov, H. O.
Hedeon LAMA-WAVELET: IMAGE INPAINTING WITH HIGH QUALITY OF FINE DETAILS AND OBJECT EDGES 2024-03-30T13:11:33+02:00 D. O. Kolodochka M. V. Polyakova <p>Context. The problem of image inpainting in computer graphics and computer vision systems is considered. The subject of the research is deep learning convolutional neural networks for image inpainting.</p> <p>Objective. The objective of the research is to improve image inpainting performance in computer vision and computer graphics systems by applying the wavelet transform in the LaMa-Fourier network architecture.</p> <p>Method. The basic LaMa-Fourier network decomposes the image into global and local texture. It is proposed to improve the network block that processes the global context of the image, namely, the spectral transform block. To do so, the Simple Wavelet Convolution Block elaborated by the authors is used instead of the Fourier Unit Structure. In this block, a two-level 3D wavelet transform of the image is first performed using the Daubechies wavelet db4. The obtained 3D wavelet transform coefficients are split so that each subband represents a separate feature of the image. A convolutional layer, batch normalization, and a ReLU activation function are applied sequentially to the split coefficients at each level of the wavelet transform. The resulting subbands of wavelet coefficients are concatenated and the inverse wavelet transform is applied to them; its result is the output of the block. Note that the wavelet coefficients at different levels are processed separately. This reduces the computational complexity of calculating the network outputs while preserving the influence of the context of each level on image inpainting. The obtained neural network is named LaMa-Wavelet. The FID, PSNR, and SSIM indices and visual analysis were used to estimate the quality of images inpainted with the LaMa-Wavelet network.</p> <p>Results.
The proposed LaMa-Wavelet network has been implemented in software and studied for solving the problem of image inpainting. The PSNR of images inpainted using LaMa-Wavelet exceeds the results obtained using the LaMa-Fourier network on average by 4.5% for narrow and medium masks and by 6% for large masks. Applying LaMa-Wavelet can enhance SSIM by 2–4% depending on the mask size. However, inpainting one image takes 3 times longer with LaMa-Wavelet than with the LaMa-Fourier network. Analysis of specific images demonstrates that both networks show similar results when inpainting a homogeneous background. On complex backgrounds with repeating elements, LaMa-Wavelet is often more effective in restoring textures.</p> <p>Conclusions. The obtained LaMa-Wavelet network makes it possible to improve image inpainting with large masks by applying the wavelet transform in the LaMa network architecture. Namely, the quality of reconstruction of image edges and fine details is increased.</p> 2024-04-02T00:00:00+03:00 Copyright (c) 2024 D. O. Kolodochka, M. V. Polyakova PROACTIVE HORIZONTAL SCALING METHOD FOR KUBERNETES 2024-03-30T13:37:54+02:00 O. I. Rolik V. V. Omelchenko <p>Context. The problem of minimizing redundant resource reservation while maintaining QoS at an agreed level is crucial for modern information systems. Such systems can include a large number of applications, each of which uses computing resources and has its own unique features, and this requires a high level of automation to increase the efficiency of computing resource management.</p> <p>Objective. The purpose of this paper is to ensure the quality of IT services at an agreed level in the face of significant dynamics of user requests by developing and using a method of proactive automatic application scaling in Kubernetes.</p> <p>Method. This paper proposes a proactive horizontal scaling method based on the Prophet time series prediction algorithm.
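As a rough illustration of how such a forecast-driven scaler could derive a replica count, here is a minimal Python sketch. The per-replica capacity, the deployment-delay window, and the look-ahead rule are all illustrative assumptions; the forecast values stand in for the output of a trained prediction model.

```python
import math

def replicas_needed(forecast_cpu, per_replica_capacity,
                    deploy_delay_steps, min_replicas=1):
    """Derive a replica schedule from predicted CPU utilization.

    forecast_cpu: predicted total CPU usage (cores) per time step.
    per_replica_capacity and deploy_delay_steps are illustrative
    parameters, not the paper's exact ones. Each step scales for the
    peak expected within the deployment-delay window, so new replicas
    are ready before the load arrives.
    """
    schedule = []
    for i in range(len(forecast_cpu)):
        window = forecast_cpu[i:i + deploy_delay_steps + 1]
        peak = max(window)
        schedule.append(max(min_replicas,
                            math.ceil(peak / per_replica_capacity)))
    return schedule

# Predicted CPU (cores) for six intervals, 0.5 cores per replica,
# one interval of deployment delay:
print(replicas_needed([0.4, 0.9, 1.6, 2.1, 1.2, 0.5],
                      per_replica_capacity=0.5, deploy_delay_steps=1))
```

In a real deployment the schedule would be pushed to the cluster by patching the workload's replica count ahead of each interval.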
Prometheus metrics storage is used as a data source for training and validating forecasting models. Based on the historical metrics, a model is trained to predict the future utilization of computing resources using Prophet. The obtained time series is validated and used to calculate the required number of application replicas, considering deployment delays.</p> <p>Results. The experiments have shown the effectiveness of the proposed proactive automated application scaling method in comparison with existing solutions based on the reactive approach in the selected scenarios. This method made it possible to reduce the reservation of computing resources by 47% without loss of service quality compared to the configuration without scaling.</p> <p>Conclusions. A method for automating the horizontal scaling of applications in Kubernetes is proposed. Although the experiments have shown the effectiveness of this solution, the method can be significantly improved. In particular, it is necessary to consider the possibility of integrating a reactive component for atypical load patterns.</p> 2024-04-02T00:00:00+03:00 Copyright (c) 2024 O. I. Rolik, V. V. Omelchenko METHOD FOR DETERMINING THE BIT GRID OVERFLOW OF A COMPUTER SYSTEM OPERATING IN THE SYSTEM OF RESIDUAL CLASSES 2024-03-30T14:05:06+02:00 A. S. Yanko V. A. Krasnobayev S. B. Nikolsky O. O. Kruk <p>Context. The paper considers a set of examples of the practical application of the procedure for identifying overflow of the bit grid of a computer system operating in a non-positional number system in residual classes. The object of the study is the process of processing data represented in the residual class system.</p> <p>Objective.
The goal of the work is to consider and analyze examples of determining the bit grid overflow of a computer system when implementing the operation of adding two numbers in a system of residual classes, based on a method for determining bit grid overflow that uses the concept of the rank of a number.</p> <p>Method. The specific features of the functioning of a computer system in a system of residual classes require the implementation not only of modular operations but also of additional, so-called non-modular operations. Non-modular operations include the operation of determining the overflow of the bit grid of a computer system in the system of residual classes. In a non-positional number system in residual classes, detecting overflow of the bit grid of a computer system is a difficult task. The method considered in the work for determining the overflow of the bit grid is based on the use of positional features of the non-positional code of numbers in the system of residual classes, namely the true and calculated ranks of a number. The process of determining the overflow of the result of adding two numbers in the system of residual classes has been studied, since this is the basic arithmetic operation performed by a computer system.</p> <p>Results. The developed methods are justified theoretically and studied when performing arithmetic modular operations of addition, subtraction and multiplication using tabular procedures.</p> <p>Conclusions. The main advantage of the presented method is that the overflow of the bit grid can be determined in the dynamics of the computing process of the computer system, i.e. without stopping the solution of the problem. This makes it possible to reduce the unproductive overhead of the computer system in the system of residual classes.
In addition, this method can be used to check the operation of adding two numbers in the residual class system, which increases the reliability of obtaining the true result of the addition.</p> 2024-04-02T00:00:00+03:00 Copyright (c) 2024 A. S. Yanko, V. A. Krasnobayev, S. B. Nikolsky, O. O. Kruk
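The rank-based overflow idea described in the last abstract can be illustrated with a toy Python sketch. The moduli, the encoding, and the specific criterion (comparing the rank of the digit-wise unreduced sum with the operand ranks) are chosen for illustration and are not the authors' exact procedure:

```python
from math import prod

def orthogonal_bases(moduli):
    """Standard CRT orthogonal bases: B_i = M_i * (M_i^{-1} mod m_i),
    where M = prod(moduli) and M_i = M / m_i."""
    M = prod(moduli)
    return M, [(M // m) * pow(M // m, -1, m) for m in moduli]

def rank(res, bases, M):
    """Rank of a number: how many multiples of M the basis sum
    sum(r_i * B_i) exceeds the number's value by."""
    return sum(r * B for r, B in zip(res, bases)) // M

def add_detect_overflow(a_res, b_res, moduli):
    """Add two RNS numbers digit-wise and flag overflow (a + b >= M)
    using ranks only, without converting back to positional form.

    The rank of the unreduced digit-wise sum minus the operand ranks
    equals 1 exactly when the true sum wraps past the range M.
    """
    M, bases = orthogonal_bases(moduli)
    sum_res = [(x + y) % m for x, y, m in zip(a_res, b_res, moduli)]
    unreduced = [x + y for x, y in zip(a_res, b_res)]
    flag = (rank(unreduced, bases, M)
            - rank(a_res, bases, M) - rank(b_res, bases, M))
    return sum_res, flag == 1

# Moduli {3, 5, 7}: range M = 105. Encode a = 60, b = 50; 60 + 50 = 110
# exceeds 105, so overflow should be detected.
moduli = (3, 5, 7)
enc = lambda x: [x % m for m in moduli]
print(add_detect_overflow(enc(60), enc(50), moduli))
```

The returned residues always encode (a + b) mod M; the boolean tells whether that wrapped value differs from the true sum.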