Radio Electronics, Computer Science, Control https://ric.zp.edu.ua/ <p dir="ltr" align="justify"><strong>Description:</strong> The scientific journal «Radio Electronics, Computer Science, Control» is an international academic peer-reviewed publication. It publishes scientific articles (works that extensively cover a specific topic, idea, question and contain elements of their analysis) and reviews (works containing analysis and a reasoned assessment of the author's original or published book), which receive an objective review by leading specialists who evaluate them on their merits, without regard to the race, sex, religion, ethnic origin, nationality, or political philosophy of the author(s).<br /><strong>Founder and Publisher:</strong> <a href="http://zntu.edu.ua/zaporozhye-national-technical-university" aria-invalid="true">National University "Zaporizhzhia Polytechnic"</a>. <strong>Country:</strong> Ukraine.<br /><strong>ISSN</strong> 1607-3274 (print), ISSN 2313-688X (on-line).<br /><strong>Certificate of State Registration:</strong> КВ №24220-14060ПР dated 19.11.2019. The journal is registered by the Ministry of Justice of Ukraine.<br />By the Order of the Ministry of Education and Science of Ukraine of 17.03.2020 № 409 “On approval of the decision of the Certifying Collegium of the Ministry on the activities of the specialized scientific councils dated 06 March 2020”, <strong>the journal is included in the List of scientific specialized periodicals of Ukraine in category “А” (highest level), in which the results of dissertations for the Doctor of Science and Doctor of Philosophy degrees may be published</strong>. By the Order of the Ministry of Education and Science of Ukraine of 21.12.2015 № 1328 "On approval of the decision of the Certifying Collegium of the Ministry on the activities of the specialized scientific councils dated 15 December 2015", the journal is included in the <strong>List of scientific specialized periodicals of Ukraine</strong> in which the results of dissertations for the Doctor of Science and Doctor of Philosophy degrees in Mathematics and Technical Sciences may be published.<br />The <strong>journal is included in the Polish List of scientific journals</strong> and peer-reviewed materials from international conferences with an assigned number of points (Annex to the announcement of the Minister of Science and Higher Education of Poland of July 31, 2019: Lp. 16981).<br /><strong>Year of Foundation:</strong> 1999. <strong>Frequency:</strong> 4 times per year (before 2015 – 2 times per year).<br /><strong>Volume:</strong> up to 20 conventional printed sheets. <strong>Format:</strong> 60x84/8.<br /><strong>Languages:</strong> English, Ukrainian (before 2022 also Russian).
<br /><strong>Fields of Science:</strong> Physics and Mathematics, Technical Sciences.<br /><strong>Aim:</strong> to serve the academic community, principally by publishing topical articles resulting from original research, whether theoretical or applied, in various aspects of academic endeavor.<br /><strong>Focus:</strong> fresh formulations of problems and new methods of investigation, helping professionals, graduates, engineers, academics and researchers to disseminate information on state-of-the-art techniques within the journal scope.<br /><strong>Scope:</strong> telecommunications and radio electronics, software engineering (including algorithm and programming theory), computer science (mathematical modeling and computer simulation, optimization and operations research, control in technical systems, machine-machine and man-machine interfacing, artificial intelligence, including data mining, pattern recognition, artificial neural and neuro-fuzzy networks, fuzzy logic, swarm intelligence and multiagent systems, hybrid systems), computer engineering (computer hardware, computer networks), information systems and technologies (data structures and bases, knowledge-based and expert systems, data and signal processing methods).<br /><strong>Journal sections:</strong><br />- radio electronics and telecommunications;<br />- mathematical and computer modelling;<br />- neuroinformatics and intelligent systems;<br />- progressive information technologies;<br />- control in technical systems.<br /><strong>Abstracting and Indexing:</strong> <strong>The journal is indexed in the <a href="https://mjl.clarivate.com/search-results" target="_blank" rel="noopener">Web of Science</a></strong> (WoS) scientometric database. The articles published in the journal are abstracted in leading international and national <strong>abstracting journals</strong> and <strong>scientometric databases</strong>, and are also placed in <strong>digital archives</strong> and <strong>libraries</strong> with free on-line access.<br /><strong>Editorial board:</strong> <em>Editor in Chief</em> – S. A. Subbotin, D. Sc., Professor; <em>Deputy Editor in Chief</em> – D. M. Piza, D. Sc., Professor. The <em>members</em> of the Editorial Board are listed <a href="http://ric.zntu.edu.ua/about/editorialTeam" aria-invalid="true">here</a>.<br /><strong>Publishing and processing fee:</strong> Articles are published and peer-reviewed <strong>free of charge</strong>.<br /><strong>Authors Copyright:</strong> The journal allows the authors to hold the copyright without restrictions and to retain publishing rights without restrictions. The journal allows readers to read, download, copy, distribute, print, search, or link to the full texts of its articles.
The journal allows reuse and remixing of its content in accordance with the Creative Commons CC BY-SA license.<br /><strong>Authors' Responsibility:</strong> By submitting an article to the journal, authors assume full responsibility for compliance with the copyright of other individuals and organizations, for the accuracy of citations, data and illustrations, and for the non-disclosure of state and industrial secrets, and give their consent to transfer to the publisher, free of charge, the right to publish, to translate into foreign languages, to store and to distribute the article materials in any form. Authors who hold scientific degrees, by submitting an article to the journal, thereby give their consent to act, free of charge, as reviewers of other authors' articles at the request of the journal editor within the established deadlines. The articles submitted to the journal must be original, new and interesting to the readership of the journal, have reasonable motivation and aim, be previously unpublished and not be under consideration for publication in other journals or conferences. Articles should not contain trivial or obvious results, draw unwarranted conclusions, or repeat the conclusions of already published studies.<br /><strong>Readership:</strong> scientists, university faculty, postgraduate and graduate students, practitioners.<br /><strong>Publicity and Accessing Method:</strong> <strong>Open Access</strong> on-line for full-text publications.</p> <p dir="ltr" align="justify"><strong><span style="font-size: small;"> <img src="http://journals.uran.ua/public/site/images/grechko/1OA1.png" alt="" /> <img src="http://i.creativecommons.org/l/by-sa/4.0/88x31.png" alt="" /></span></strong></p> en-US <h3 id="CopyrightNotices" align="justify"><span style="font-size: small;">Creative Commons Licensing Notifications in the Copyright Notices</span></h3> <p>The journal allows the authors to hold the copyright without restrictions and to retain publishing rights without restrictions.</p> <p>The journal allows readers to read, download, copy, distribute, print, search, or link to the full texts of its articles.</p> <p>The journal allows reuse and remixing of its content in accordance with the Creative Commons CC BY-SA license.</p> <p align="justify"><span style="font-family: Verdana, Arial, Helvetica, sans-serif; font-size: small;">Authors who publish with this journal agree to the following terms:</span></p> <ul> <li> <p align="justify"><span style="font-family: Verdana, Arial, Helvetica, sans-serif; font-size: small;">Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a <a href="https://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution License CC BY-SA</a> that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.</span></p> </li> <li> <p align="justify"><span style="font-family: Verdana, Arial, Helvetica, sans-serif; font-size: small;">Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.</span></p> </li> <li> <p align="justify"><span style="font-family: Verdana, Arial, Helvetica, sans-serif; 
font-size: small;">Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.</span></p> </li> </ul> subbotin.csit@gmail.com (Sergey A. Subbotin) subbotin@zntu.edu.ua (Sergey A. Subbotin) Sun, 29 Jun 2025 13:59:11 +0300 OJS 3.2.1.2 http://blogs.law.harvard.edu/tech/rss 60 NATURAL LANGUAGE PROCESSING OF SOCIAL MEDIA TEXT DATA USING BERT AND XGBOOST https://ric.zp.edu.ua/article/view/333014 <p><strong>Context </strong>The growth of text data in social networks requires the development of effective methods for sentiment analysis that can take into account both lexical and contextual dependencies. Traditional approaches to text processing have limitations in understanding semantic relationships between words, which affects the accuracy of classification. The integration of deep neural networks for text vectorization with ensemble machine learning algorithms and methods for interpreting results allows improving the quality of sentiment analysis.<br /><strong>Objective. </strong>The aim of the study is to develop and evaluate a new approach to text message sentiment classification that combines Sentence-BERT for deep semantic vectorization, XGBoost for high-accuracy classification, SHAP for explaining the contribution of features, sentence embedding similarity for assessing semantic similarity, and λ-regularization to improve the generalization ability of the model. The study is aimed at analyzing the impact of these methods on the quality of classification, identifying the most significant features and optimizing parameters.<br /><strong>Method. </strong>The study uses Sentence-BERT to transform text data into a vector space with deep semantic connections. XGBoost is used for sentiment classification, which provides high accuracy and stability even on unevenly distributed datasets. The SHAP method is used to explain the contribution of features, which allows us to determine which factors have the greatest impact on the prediction. Additionally, sentence embedding similarity is used to compare texts.<br /><strong>Results. </strong>The proposed approach demonstrates high efficiency in mood classification tasks. The ROC-AUC value confirms the ability of the model to accurately distinguish between classes of emotional coloring of the text. The use of SHAP ensures the interpretability of the results, allowing us to explain the influence of each feature on the classification. Sentence embedding similarity confirms the efficiency of Sentence-BERT in detecting semantically<br />similar texts, and λ-regularization improves the generalization ability of the model.<br /><strong>Conclusions. </strong>The study demonstrates scientific novelty through a comprehensive combination of Sentence-BERT, XGBoost, SHAP, sentence embedding similarity, and λ-regularization to improve the accuracy and interpretability of sentiment analysis. The results obtained confirm the effectiveness of the proposed approach, which makes it promising for application in public opinion monitoring, automated content moderation, and personalized recommendation systems. Further research can be aimed at adapting the model to specific domains and improving interpretation methods.</p> T. Batiuk, D. Dosyn Copyright (c) 2025 Т. Batiuk, D. 
Dosyn https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/333014 Sun, 29 Jun 2025 00:00:00 +0300 COMBINED METRIC FOR EVALUATING THE QUALITY OF SYNTHESIZED BIOMEDICAL IMAGES https://ric.zp.edu.ua/article/view/333049 <p><strong>Context</strong>. This study addresses the problem of developing a new metric for evaluating the quality of synthesized images. The relevance of this problem is explained by the need for assessing the quality of artificially generated images. Additionally, the study highlights the potential of biomedical image synthesis based on diffusion models. The research results can be applied for biomedical image generation and quantitative quality assessment of synthesized images.<br><strong>Objective</strong>. The aim of this study is to develop a combined metric and an algorithm for biomedical image synthesis to assess the quality of synthesized images.<br><strong>Method</strong>. A combined metric MC for evaluating the quality of synthesized images is proposed. This metric is based on two existing metrics: MIS and MFID. Additionally, an algorithm for histopathological image synthesis using diffusion models has been developed.<br><strong>Results.</strong> To study the MIS, MFID, and MC metrics, histopathological images available on the Zenodo platform were used. This dataset contains three classes of histopathological images G1, G2, and G3, representing pathological conditions of breast tissue. Based on the developed image synthesis algorithm, three classes of artificial histopathological images were generated. Using the MIS, MFID, and MC metrics, quality assessments of the synthesized histopathological images were obtained. The developed metric will form the basis of a software module for image quality assessment using metrics. This software module will be integrated into CAD systems.<br><strong>Conclusions.</strong> A combined metric for evaluating the quality of synthesized images has been developed, along with a proposed algorithm for biomedical image synthesis. The software implementation of the combined metric and image synthesis algorithm has been integrated into an image quality assessment module.</p> O. M. Berezsky, M. O. Berezkyi, M. O. Dombrovskyi, P. B. Liashchynskyi, G. M. Melnyk Copyright (c) 2025 O. M. Berezsky, M. O. Berezkyi, M. O. Dombrovskyi, P. B. Liashchynskyi, G. M. Melnyk https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/333049 Sun, 29 Jun 2025 00:00:00 +0300 DEVELOPMENT OF INNOVATIVE APPROACHES FOR NETWORK OPTIMIZATION USING GEOSPATIAL MULTI-COMPONENT SYSTEMS https://ric.zp.edu.ua/article/view/333050 <p><strong>Context</strong>. Developing a geospatial multi-agent system for optimizing transportation networks is crucial for enhancing efficiency and reducing travel time. This involves employing optimization algorithms and simulating agent behavior within the network.<br><strong>Objective</strong>. The aim of this study is to develop a geospatial multi-agent system for optimizing transportation networks, focusing on improving network efficiency and minimizing travel time through the application of advanced optimization algorithms and agentbased modeling.<br><strong>Method</strong>. The proposed method for optimizing transportation networks combines foundational structure with advanced refinement in two stages: pre-processing and evolutionary strategy optimization. 
In the first stage, a Minimum Spanning Tree is constructed using Kruskal’s algorithm to establish the shortest, loop-free network that connects all key points, accounting for natural obstacles and existing routes. This provides a cost-effective and realistic baseline. The second stage refines the network through an evolutionary strategy, where agents representing MST variations are optimized using a fitness function balancing total path length, average node distances, and penalties for excessive edges. Optimization employs crossover to combine solutions and mutation to introduce diversity through edge modifications. Repeated over multiple epochs, this process incrementally improves the network, resulting in an optimized design that minimizes costs, enhances connectivity, and respects real-world constraints.<br><strong>Results.</strong> The results of applying the evolutionary strategy and minimum spanning tree methods were analyzed in detail. Comparing these methods to benchmarks like Tokyo’s railway network and the Slime Mold algorithm revealed the advantage of using the evolutionary approach in generating optimal paths. The findings emphasize the need for integrating advanced algorithms to further refine path optimization and network design.<br><strong>Conclusions</strong>. The research successfully developed a geospatial multi-agent system for optimizing transportation networks, achieving its objectives by addressing key challenges in transport network planning. A detailed analysis of existing solutions revealed the dynamic and complex nature of transportation systems and underscored the need for adaptability to environmental changes, such as new routes or obstacles. The proposed approach enhanced the minimum spanning tree with an evolutionary strategy, enabling flexibility and rapid adaptation. Results demonstrated the system’s effectiveness in planning optimal intercity transport networks. Future work could refine environmental assessments, improve route cost evaluations, expand metrics, define new performance criteria, and integrate neural network models to further enhance optimization capabilities, particularly for urban networks.</p> N. I. Boyko, T. O. Salanchii Copyright (c) 2025 N. I. Boyko, T. O. Salanchii https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/333050 Sun, 29 Jun 2025 00:00:00 +0300 OPTIMIZATION BASED ON FLOWER CUTTING HEURISTICS FOR SPACE ALLOCATION PROBLEM https://ric.zp.edu.ua/article/view/333069 <p><strong>Context.</strong> This research discusses the shelf space allocation problem with vertical and horizontal product categorization, which also includes products of the general and brand assortment, as well as products with different storage conditions stored on different shelves and incompatible products stored on the same shelf but not nearby.<br><strong>Objective.</strong> The goal is to maximize the profit, product movement, or sales after allocating products on store shelves, defining the shelf for the product and the number of stock-keeping units it has.<br><strong>Method</strong>. The research proposes two variants of heuristics with different internal sorting rules as an approach to solving the retail shelf space allocation problem with horizontal and vertical product categorization. It also covers the application of 13 developed steering parameters dedicated to instances of different sizes, which allows obtaining cost-effective solutions of high quality.<br><strong>Results</strong>. 
The results obtained by the heuristics were compared to the optimal solutions given by the commercial CPLEX solver. The effectiveness of the proposed heuristics and the suitability of the control settings were demonstrated by their ability to significantly reduce the number of possible solutions while still achieving the desired outcomes. Both heuristics consistently produced solutions with a quality surpassing 99.80% for heuristic H1 and 99.98% for heuristic H2. Heuristic H1 found 12 optimal solutions and heuristic H2 found 14 optimal solutions among the 15 test instances – highlighting their reliability and efficiency.<br><strong>Conclusions.</strong> The specifics of the investigated model can be used by supermarkets, apparel stores, and electronics retailers. By following the explained heuristic stages and the methods of parameter adjustment, the distributor can systematically develop, refine, and deploy a heuristic algorithm that effectively addresses the shelf space allocation problems at hand while being robust and scalable.</p> K. S. Czerniachowska, S. A. Subbotin Copyright (c) 2025 K. S. Czerniachowska, S. A. Subbotin https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/333069 Sun, 29 Jun 2025 00:00:00 +0300 OPTIMAL ALLOCATION OF LIMITED RESOURCES IN MULTIPROCESSOR SYSTEMS https://ric.zp.edu.ua/article/view/333072 <p><strong>Context</strong>. The paper considers multiprocessor systems consisting of many processors with a common RAM. The efficiency of such systems depends on the operating system. It must ensure a uniform loading of processors with tasks, in which the peak load on RAM will be minimal. This is a rather complex problem. In this paper, it is solved by building optimization models and developing effective heuristic algorithms. This problem is solved in two stages. The first stage is the optimal loading of processors with tasks, and the second is the minimization of the peak load on RAM. Several optimization models of this problem have been built, for the solution of which the exact quadratic regularization method is effective. Effective heuristic algorithms have also been developed. Comparative computational experiments have been conducted, which confirm the effectiveness of the proposed technology for solving this problem.<br /><strong>Objective</strong>. Development of mathematical optimization models, methods, and algorithms for optimal resource allocation in multiprocessor systems.<br /><strong>Method</strong>. A two-stage solution to this problem is effective. Several optimization models containing Boolean variables are proposed. Such models are quite complex for finding optimal solutions. To solve them, it is proposed to use the method of exact quadratic regularization. This optimization method is used for the first time for this class of problems, so it required the development of appropriate algorithmic support. Heuristic algorithms are usually implemented in operating systems. Therefore, effective heuristic algorithms are proposed that use the final principle, which significantly improves the solution of the problem.<br /><strong>Results.</strong> New optimization models for the allocation of limited resources in multiprocessor systems have been constructed. Effective heuristic algorithms have been developed, which are implemented in software using VBA in the Excel package. Software for entering initial data for optimization models has also been developed, which simplifies their solution. 
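For intuition only: the first stage described above (uniform loading of processors with tasks) can be approximated by a simple greedy rule. The Python sketch below is a generic illustrative baseline, not the authors' heuristics and not the exact quadratic regularization method.

```python
# Illustrative greedy (LPT-style) heuristic for the first stage only:
# distribute task run times across processors as evenly as possible.
# Generic baseline for illustration; not the algorithm from the paper.

def balance_tasks(run_times, n_processors):
    """Assign each task to the currently least-loaded processor."""
    loads = [0.0] * n_processors
    assignment = [[] for _ in range(n_processors)]
    # Placing the longest tasks first usually yields a more even final load.
    for task_id, t in sorted(enumerate(run_times), key=lambda x: -x[1]):
        p = min(range(n_processors), key=lambda i: loads[i])
        loads[p] += t
        assignment[p].append(task_id)
    return assignment, loads

if __name__ == "__main__":
    tasks = [7.0, 3.0, 5.5, 2.0, 8.0, 4.0, 1.5]   # hypothetical task run times
    plan, loads = balance_tasks(tasks, n_processors=3)
    print(plan, loads)
```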
The results of computational experiments are presented.<br /><strong>Conclusions</strong>. A new effective technology for optimal resource allocation in multiprocessor systems has been developed. Heuristic algorithms have been developed and implemented in software. Computational experiments have been conducted to confirm the effectiveness of the proposed technology for solving the problem.</p> <p> </p> A. I. Kosolap Copyright (c) 2025 A. I. Kosolap https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/333072 Sun, 29 Jun 2025 00:00:00 +0300 METHOD FOR DEVELOPMENT MODELS OF POLYSUBJECT MULTIFACTOR ENVIRONMENT OF SOFTWARE COMPLEX’S SUPPORT https://ric.zp.edu.ua/article/view/333182 <p><strong>Context</strong>. The task of development the models of a polysubject multifactor environment for software’s complex support is considered in this research, that ensures possibilities of taking into account the influence of various impact factors onto the supported software complexes themselves, onto their complex support’s processes, as well as onto the subjects (interacting with them) who provide and implement this complex support. <strong>The object of study </strong>are the processes of complex support of software products, the processes of automation of this complex support, the processes of influence of impact factors onto the object and subjects of the complex support of software products, as well as the processes of perception’s subjectivization of the supported object by relevant subjects of interaction with it. <strong>The subject of study </strong>are methods and means of artificial neural networks, in particular a multilayer perceptron, as well as a computer design and modeling. <strong>Objective </strong>is the development of the method for building models of a polysubject multifactor environment(s) of the complex support of software product(s).<br><strong>Method</strong>. The developed method for building models of a polysubject multifactor environment of software complexes’ support is proposed, which makes it possible (in an automated mode) to obtain appropriate models, based on which, later on – to investigate the strengths and weaknesses of a specific researched complex support’s environment(s) of a particular investigated software product(s), in order to ensure further improvement and automation of this complex support based on the study and analysis of impact factors, which form the subjective vision and perception of this complex support by those subjects who directly provide and perform it, that is, in fact, on whom this support itself depends, as well as its corresponding qualitative and quantitative characteristics and indicators.<br><strong>Results</strong>. The results of functioning of the developed method are corresponding models of investigated polysubject multifactor environments of the complex support of software products, which take into account the presence and the level of influence of relevant existing and present factors performing impact onto the subjects of interaction with supported software complexes, which (subjects) directly provide and perform the complex support for the studied software products, and also form relevant researched support environments. 
In addition, as an example of a practical application and approbation, the developed method was used, in particular, to solve the applied practical task of determining the dominant and the deficient factors of influence of a polysubject multifactor environment of the studied software complex’s support, with presenting and analyzing the obtained results of resolving the given task.<br><strong>Conclusions</strong>. The developed method resolves the problem of building models of a polysubject multifactor environment of the complex support of software products, and ensures taking into account the action of various impact factors performing influence onto the supported software complex itself, onto the processes of its support, as well as onto the subjects of interaction with it, which (subjects) provide and perform this complex support. In particular, the developed method provides possibilities for modeling and investigating a polysubject multifactor environments of the “in focus” software product’s complex support, which reflect the global (or the local, it depends on the specific tasks) impact of various existing factors making influence onto the object of support itself (the supported software complex, or the processes of its complex support), as well as onto the subjects which directly carry out and implement this complex support in all its possible and/or declared manifestations. The practical approbation of the developed method has been carried out by solving specific applied practical tasks, one of which is presented, as an example, in this paper, – which is the task of determining the dominant and the deficient factors of influence of a polysubject multifactor environment of the studied software complex’s support, and this approbation, actually, confirms its effectiveness in solving a stack of applied practical problems related to researching the impact of factors performing influence onto the complex support of software products, using the advantages of artificial intelligence technologies, machine learning, artificial neural networks, and multilayer perceptron in particular</p> A. I Pukach, V. M Teslyuk Copyright (c) 2025 A. I Pukach, V. M Teslyuk https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/333182 Sun, 29 Jun 2025 00:00:00 +0300 THE INTELLECTUAL ANALYSIS METHOD OF COLOR IMAGES https://ric.zp.edu.ua/article/view/332721 <p><span class="fontstyle0"><strong>Context</strong>. </span><span class="fontstyle2">Automatic and automated image analysis methods used in computer graphic design, biometric identification, and military target search are now widespread. The object of the research is the process of color image analysis.<br /></span><span class="fontstyle0"><strong>Objective</strong>. </span><span class="fontstyle2">The goal of the work is to create an intelligent method of image analysis based on quantization, binarization and clustering.<br /></span><span class="fontstyle0"><strong>Method</strong>. </span><span class="fontstyle2">The proposed method for intelligent color image analysis consists of the following techniques. The technique of reducing the number of colors based on the conversion of a color image into a gray-scale image and quantization of the resulting grayscale image improves the accuracy of image feature extraction by preventing the appearance of an excessive number of image clusters. 
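A minimal Python sketch of the color-to-gray-scale conversion, quantization and per-level binarization steps described in this abstract; the luminance weights, the number of levels and the uniform quantizer are illustrative assumptions, since the abstract does not fix them.

```python
import numpy as np

def to_grayscale(rgb):
    """Luminance-style gray-scale conversion (weights are illustrative)."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def quantize(gray, n_levels=8):
    """Uniformly quantize a gray-scale image (values 0..255) into n_levels levels."""
    return np.clip((gray / 256.0 * n_levels).astype(int), 0, n_levels - 1)

def binary_planes(levels, n_levels=8):
    """One binary image per quantization level, ready for independent (parallel) clustering."""
    return [(levels == q).astype(np.uint8) for q in range(n_levels)]

rgb = np.random.randint(0, 256, (64, 64, 3)).astype(float)  # stand-in for a real image
planes = binary_planes(quantize(to_grayscale(rgb)))
```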
The technique of creating a set of binary images based on binarization of a quantized gray-scale image allows increasing the speed of subsequent clustering by replacing sequential extraction of all elements of a quantized gray-scale image with parallel extraction of binary image elements, as well as separating clusters obtained during subsequent clustering by color due to image membership. The technique of determining the highest priority binary images based on the probability of occurrence of each color in the quantized gray-scale image improves the speed of image structure synthesis based on the analysis results by considering the most informative binary images. The technique of extracting binary image elements on the basis of its clustering allows to increase the accuracy of extracting binary image elements by improving the method of forming the neighborhoods of points (no radius of empirically determined neighborhood is needed), detecting random outliers and noise, extracting image elements of different shapes and sizes without specifying the number of extracted binary image elements, as well as increasing the speed of extracting binary image elements by forming the neighborhoods of white points only. The technique of determining the higher priority parts of the binary image based on the power of image clusters allows increasing the accuracy of image structure synthesis based on the analysis results by omitting noise and random outliers.<br /></span><span class="fontstyle0"><strong>Results</strong>. </span><span class="fontstyle2">The proposed method for intelligent analysis of color images was programmatically implemented using Parallel Computing Toolbox of Matlab package and investigated for the task of image feature extraction on the corresponding database. The results obtained allowed to compare the traditional and proposed methods.<br /></span><span class="fontstyle0"><strong>Conclusions</strong>. </span><span class="fontstyle2">The proposed method allows to expand the application area of color image analysis based on color-to-gray-scale image conversion, quantization, binarization, parallel clustering and contributes to the efficiency of computer systems for image classification and synthesis. Prospects for further research investigating the proposed method for a wide class of machine learning tasks</span></p> E. E. Fedorov, O. L. Khramova-Baranova, T. Y. Utkina, Ya. M. Kozhushko , I. O. Nesen Copyright (c) 2025 E. E. Fedorov, O. L. Khramova-Baranova, T. Y. Utkina, Ya. M. Kozhushko , I. O. Nesen https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/332721 Sun, 29 Jun 2025 00:00:00 +0300 PREDICTION THE ACCURACY OF IMAGE INPAINTING USING TEXTURE DESCRIPTORS https://ric.zp.edu.ua/article/view/332871 <p><strong><span class="fontstyle0">Context. </span></strong><span class="fontstyle2">The problem of filling missing image areas with realistic assumptions often arises in the processing of real scenes in computer vision and computer graphics. To inpaint the missing areas in an image, various approaches are applied such as diffusion models, self-attention mechanism, and generative adversarial networks. To restore the real scene images convolutional neural networks are used. Although convolutional neural networks recently achieved significant success in image inpainting, high efficiency is not always provided.<br /></span><span class="fontstyle0"><strong>Objective</strong>. 
</span><span class="fontstyle2">The paper aims to reduce the time consumption in computer vision and computer graphics systems by accuracy prediction of image inpainting with convolutional neural networks.<br /></span><span class="fontstyle0"><strong>Method</strong>. </span><span class="fontstyle2">The prediction of image inpainting accuracy can be done by an analysis of image statistics without the execution of inpainting itself. Then the time and computer resources on the image inpainting will not be consumed. We have used a peak signalto-noise ratio and a structural similarity index measure to evaluate an image inpainting accuracy.<br /></span><span class="fontstyle0"><strong>Results</strong>. </span><span class="fontstyle2">It is shown that a prediction can perform well for a wide range of mask sizes and real-scene images collected in the Places2 database. As an example, we concentrated on a particular case of the LaMa network versions although the proposed method can be generalized to other convolutional neural networks as well.<br /></span><span class="fontstyle0"><strong>Conclusions</strong>. </span><span class="fontstyle2">The results obtained by the proposed method show that this type of prediction can be performed with satisfactory accuracy if the dependencies of the SSIM or PSNR versus image homogeneity are used. It should be noted that the structural similarity of the original and inpainted images is better predicted than the error between the corresponding pixels in the original and inpainted images. To further reduce the prediction error, it is possible to apply the regression on several input parameters</span></p> D. O. Kolodochka, M. V. Polyakova, V. V. Rogachko Copyright (c) 2025 D. O. Kolodochka, M. V. Polyakova, V. V. Rogachko https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/332871 Sun, 29 Jun 2025 00:00:00 +0300 MATHEMATICAL FOUNDATIONS OF METHODS FOR SOLVING CONTINUOUS PROBLEMS OF OPTIMAL MULTIPLEX PARTITIONING OF SETS https://ric.zp.edu.ua/article/view/332875 <p><strong><span class="fontstyle0">Context</span></strong><span class="fontstyle1">. The research object is the process of placing service centers (e.g., social protection services, emergency supply storage) and allocating demand for services continuously distributed across a given area. Mathematical models and optimization methods for location-allocation problems are presented, considering the overlap of service zones to address cases when the nearest center cannot provide the required service. The relevance of the study stems from the need to solve problems related to territorial distribution of logistics system facilities, early planning of preventive measures in potential areas of technological disasters, organizing evacuation processes, or providing primary humanitarian assistance to populations in emergencies.<br /></span><strong><span class="fontstyle0">Objective</span></strong><span class="fontstyle1">. The rational organization of a network of service centers to ensure the provision of guaranteed service in the shortest possible time by assigning clients to multiple nearest centers and developing the corresponding mathematical and software support.<br /></span><span class="fontstyle0"><strong>Method</strong>. </span><span class="fontstyle1">The concept of a characteristic vector-function of a </span><span class="fontstyle3">k</span><span class="fontstyle1">-th order partition of a continuous set is introduced. 
Theoretical justification is provided for using the LP-relaxation procedure to solve the problem, formulated in terms of such characteristic functions. The mathematical framework is developed using elements of functional analysis, duality theory, and nonsmooth optimization.<br /></span><strong><span class="fontstyle0">Results</span></strong><span class="fontstyle1">. A mathematical model of optimal territorial zoning with center placement, subject to capacity constraints, is presented and studied as a continuous problem of optimal multiplex partitioning of sets. Unlike existing models, this approach describes distribution processes in logistics systems by minimizing the distance to several nearest centers while considering their capacities. Several propositions and theorems regarding the properties of the functional and the set of admissible solutions are proven. Necessary and sufficient optimality conditions are derived, forming the basis for methods of optimal multiplex partitioning of sets.<br /></span><span class="fontstyle0"><strong>Conclusions</strong>. </span><span class="fontstyle1">Theoretical findings and computational experiment results presented in the study confirm the validity of the developed mathematical framework, which can be readily applied to special cases of the problem. The proven propositions and theorems underpin computational methods for optimal territorial zoning with center placement. These methods are recommended for logistics systems to organize the distribution of material flows while assessing the capacity of centers and the fleet of transportation vehicles involved.</span></p> L. S. Koriashkina, D. E. Lubenets, O. S. Minieiev, M. S. Sazonova Copyright (c) 2025 L. S. Koriashkina, D. E. Lubenets, O. S. Minieiev, M. S. Sazonova https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/332875 Sun, 29 Jun 2025 00:00:00 +0300 TWO-LAYER GRAPH INVARIANT FOR PATTERN RECOGNITION https://ric.zp.edu.ua/article/view/332913 <p><strong><span class="fontstyle0">Context. </span></strong><span class="fontstyle2">The relevance of the article is driven by the need for further development of object recognition (classification) algorithms, reducing computational complexity, and increasing the functional capabilities of such algorithms. The graph invariant proposed in the article can be applied in machine vision systems for recognizing physical objects, which is essential during rescue and monitoring operations in crisis areas of various origins, as well as in delivering firepower to the enemy using swarms of unmanned aerial vehicles.<br /></span><strong><span class="fontstyle0">Objective </span></strong><span class="fontstyle2">is to develop a graph invariant with low computational complexity that enables the classification of physical objects with a certain level of confidence in the presence of external interference.<br /></span><span class="fontstyle0"><strong>Method</strong>. </span><span class="fontstyle2">The physical object to be recognized (identified) is modeled by a connected undirected weighted graph. To identify the constant characteristics of different model graphs, the idea of selecting the minimum and maximum weighted spanning trees in the structure of these graphs is applied. For this purpose, the classical and modified Boruvka-Sollin methods are used (modified – for constructing the maximum weighted spanning tree). 
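For illustration, a minimal Python sketch of this two-layer construction: NetworkX's spanning-tree routines (which include a Borůvka variant) stand in for the Borůvka-Sollin procedure named above, and the Randić index is computed as the usual sum of 1/sqrt(deg(u)·deg(v)) over tree edges, with degrees taken inside the tree, which is an assumption about the exact convention used in the paper.

```python
import math
import networkx as nx

def randic_index(tree):
    """Randic index: sum over edges of 1/sqrt(deg(u)*deg(v)), degrees taken in the tree."""
    return sum(1.0 / math.sqrt(tree.degree(u) * tree.degree(v)) for u, v in tree.edges())

def two_layer_invariant(graph):
    """Four numbers: weights and Randic indices of the min and max weighted spanning trees."""
    t_min = nx.minimum_spanning_tree(graph, weight="weight", algorithm="boruvka")
    t_max = nx.maximum_spanning_tree(graph, weight="weight", algorithm="boruvka")
    weight = lambda t: t.size(weight="weight")  # total edge weight of the spanning tree
    return (weight(t_min), randic_index(t_min), weight(t_max), randic_index(t_max))

g = nx.Graph()
g.add_weighted_edges_from([(0, 1, 2.0), (1, 2, 1.5), (0, 2, 3.0), (2, 3, 0.7)])
print(two_layer_invariant(g))
```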
Such a stratification of the structure of the initial graph into two layers provides a larger information base during image analysis regarding the belonging of a certain implementation to a certain class of objects. Next, for each of the resulting spanning trees, two numerical characteristics are calculated: the weight of the spanning tree and the Randić index. The first characteristic contains indirect information about the linear dimensions of the object, while the second conveys its structural features. These characteristics are independent of vertex labeling and the graphical representation of the graph, which is a necessary condition for graph isomorphism verification. From these four obtained characteristics, an invariant is formed, which describes the corresponding physical object present in a single scene. To fully describe one class or subclass of objects in four scenes (top view; front and rear hemispherical views; side view), the pattern recognition system must have four corresponding invariants.<br /></span><span class="fontstyle0"><strong>Results</strong>. </span><span class="fontstyle2">1) A two-layer invariant of a weighted undirected graph has been developed, enabling the recognition of physical objects with a certain level of confidence; 2) A method for recognizing physical objects has been formalized in graph theory terms, based on hashing the object structure using the weights of the minimum and maximum spanning trees of the model graph, as well as the Randić index of these trees; 3) The two-layer invariant of the weighted undirected graph has been verified on test tasks for graph isomorphism checking.<br /></span><span class="fontstyle0"><strong>Conclusions</strong>. </span><span class="fontstyle2">The conducted theoretical studies and a series of experiments confirm the feasibility of using the proposed graph invariant for real-time pattern recognition and classification tasks. The estimates obtained using the developed method are probabilistic, allowing the system operator to flexibly approach the classification of physical objects within the machine vision system’s field of view, depending on the technological process requirements or the operational situation in the system’s deployment area.</span></p> V. M. Batsamut, M. V. Batsamut, Y. H. Bashkatov, D. Yu. Tolstonosov Copyright (c) 2025 V. M. Batsamut, M. V. Batsamut, Y. H. Bashkatov, D. Yu. Tolstonosov https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/332913 Sun, 29 Jun 2025 00:00:00 +0300 SITUATION ANTICIPATION AND PLANNING FRAMEWORK FOR INTELLIGENT ENVIRONMENTS https://ric.zp.edu.ua/article/view/332917 <p><strong>Context. </strong>Situation anticipation, prediction and planning play an important role in intelligent environments, allowing such environments to learn and predict the behavior of their users and to anticipate maintenance and resource provision needs. The object of study is the process of modeling situation anticipation and planning in situation-aware systems.<br><strong>Objective. </strong>The goal of the work is to develop and analyze an ontology-based framework for modeling and predicting situation changes for intelligent agents, allowing for proactive agent behavior.<br><strong>Method. </strong>This article proposes a framework for anticipation and planning based on the GFO ontology. Each task or problem is considered a situoid, having a number of intermediate situations. 
The framework is focused on the analysis of changes between situations, coming from anticipated actions or events.<br>Contextually organized knowledge base of experiential knowledge is used to retrieve information about possible developments scenarios and is used for planning and evaluation. The framework allows to build and compare trajectories of configuration changes for specific objects, situations or situoids. The planning and anticipation process works in conditions of incomplete information and unpredicted external events, because the projections are constantly updated using feedback from sensor data and reconciliating this information with predicted model.<br><strong>Results. </strong>The framework for reasoning and planning situations based on GFO ontology, allowing to model spatial, temporal and structural data dependencies.<br><strong>Conclusions. </strong>The situation anticipation framework allows to represent, model and reason about situation dynamics in the intelligent environment, such as intelligent residential community. Prospects for further research include the elaboration of contextual knowledge storing and processing, reconciliation and learning procedures based on real-world feedback and the application of proposed framework in the real-world system, such as intelligent security systems</p> E. V. Burov, Y. I. Zhovnir, O. V Zakhariya, N. E. Kunanets Copyright (c) 2025 E. V. Burov, Y. I. Zhovnir, O. V Zakhariya, N. E. Kunanets https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/332917 Sun, 29 Jun 2025 00:00:00 +0300 A STUDY ON THE USE OF NORMALIZED L2-METRIC IN CLASSIFICATION TASKS https://ric.zp.edu.ua/article/view/332936 <p><strong>Context. </strong>In machine learning, similarity measures, and distance metrics are pivotal in tasks like classification, clustering, and dimensionality reduction. The effectiveness of traditional metrics, such as Euclidean distance, can be limited when applied to complex datasets. The object of the study is the processes of data classification and dimensionality reduction in machine learning tasks, in particular, the use of metric methods to assess the similarity between objects.<br><strong>Objective. </strong>The study aims to evaluate the feasibility and performance of a normalized <em>L</em>2-metric (Normalized Euclidean Distance, NED) for improving the accuracy of machine learning algorithms, specifically in classification and dimensionality reduction.<br><strong>Method. </strong>We prove mathematically that the normalized <em>L</em>2-metric satisfies the properties of boundedness, scale invariance, and monotonicity. It is shown that NED can be interpreted as a measure of dissimilarity of feature vectors. Its integration into <em>k</em>-nearest neighbors and <em>t</em>-SNE algorithms is investigated using a high-dimensional Alzheimer’s disease dataset. The study implemented four models combining different approaches to classification and dimensionality reduction. Model M1 utilized the <em>k</em>-nearest neighbors method with Euclidean distance without dimensionality reduction, serving as a baseline; Model M2 employed the normalized <em>L</em>2-metric in <em>k</em>NN; Model M3 integrated <em>t</em>-SNE for dimensionality reduction followed by kNN based on Euclidean distance; and Model M4 combined <em>t</em>-SNE and the normalized <em>L</em>2-metric for both reduction and classification stages. 
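The abstract does not give the exact normalization of the <em>L</em>2-metric; the Python sketch below assumes one common bounded, scale-invariant form, d(x, y) = ||x − y|| / (||x|| + ||y||), and plugs it into scikit-learn's k-nearest neighbors as a callable metric (which forces brute-force neighbor search). The paper's normalization may differ.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def ned(x, y, eps=1e-12):
    """A normalized Euclidean distance: bounded in [0, 1] and invariant to a common
    rescaling of both vectors. This particular form is an assumption for illustration."""
    return np.linalg.norm(x - y) / (np.linalg.norm(x) + np.linalg.norm(y) + eps)

# Hypothetical feature matrix X (n_samples x n_features) and labels y.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# A callable metric makes scikit-learn use brute-force neighbor search.
knn = KNeighborsClassifier(n_neighbors=5, metric=ned, algorithm="brute")
knn.fit(X[:150], y[:150])
print("accuracy:", knn.score(X[150:], y[150:]))
```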
A hyperparameter optimization procedure was implemented for all models, including the number of neighbors, voting type, and the perplexity parameter for <em>t</em>-SNE. Cross-validation was conducted on five folds to evaluate classification quality objectively. Additionally, the impact of data normalization on model accuracy was examined.<br><strong>Results. </strong>Models using NED consistently outperformed models based on Euclidean distance, with the highest classification accuracy of 91.4% achieved when it was used in <em>t</em>-SNE and the nearest neighbor method (Model M4). This emphasizes the adaptability of NED to complex data structures and its advantage in preserving key features in high and low-dimensional spaces.<br><strong>Conclusions. </strong>The normalized <em>L</em>2-metric shows potential as an effective measure of dissimilarity for machine learning tasks. It improves the performance of algorithms while maintaining scalability and robustness, which indicates its suitability for various applications in high-dimensional data contexts.</p> N. E Kondruk Copyright (c) 2025 N. E Kondruk https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/332936 Sun, 29 Jun 2025 00:00:00 +0300 BEARING FAULT DETECTION BY USING AUTOENCODER CONVOLUTIONAL NEURAL NETWORK https://ric.zp.edu.ua/article/view/332949 <p><strong>Context. </strong>Bearings are an important part of the functioning of various means of transportation. They are subject to wear and failure, which requires high-quality and timely detection of faults. Failures are not always easy to detect, so the use of traditional detection methods may not be effective enough. The use of machine learning methods well-suited to the task can effectively solve the problem of detecting bearing faults. The object of study is the process of non-destructive diagnosis of bearings. The subject of study is methods of selecting hyperparameters and other optimizations for building a diagnostic model based on a neural network according to observations.<br><strong>Objective. </strong>The goal of the work is to create a model based on a neural network for detecting bearing faults based on Zero-Shot Learning (ZSL).<br><strong>Method. </strong>A proposed filter smooths the data, preserving key characteristics such as peaks and slopes, and eliminates noise without significantly distorting the signal. A normalization method for vibration data is proposed, which consists of centering the data and distributing the amplitude within optimal limits, contributing to the correct processing of this data by the model architecture. A model based on a neural network is proposed to detect bearing faults by data processing and subsequent binary classification of their vibrations. The proposed model works by compressing the vibration data into a latent representation and subsequently recovering it, calculating the error between the recovered and original data, and determining the difference between the errors of healthy and faulty bearing vibration data. The Zero-Shot Learning method involves training and validating the model only on healthy vibration data, and testing the model only on faulty vibration data. Due to the proposed machine learning method, the model based on a neural network is able to detect faulty bearings present in the investigated fault class and, theoretically, new fault classes; that is, the model can detect classes of data that it did not see during training. 
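A minimal PyTorch sketch of the reconstruction-error idea described above: train on healthy vibration segments only and flag segments whose reconstruction error exceeds a threshold derived from healthy data. The layer sizes and channel counts here are illustrative placeholders, while the encoder/decoder layout mirrors the architecture described next in the abstract.

```python
import torch
import torch.nn as nn

class VibrationAutoencoder(nn.Module):
    """1-D convolutional autoencoder; layer sizes are illustrative, not the paper's."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, kernel_size=2, stride=2), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, kernel_size=2, stride=2),
        )

    def forward(self, x):          # x: (batch, 1, signal_length), length divisible by 4
        return self.decoder(self.encoder(x))

def reconstruction_error(model, x):
    """Per-sample mean squared reconstruction error."""
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=(1, 2))

# After training on healthy segments only, one simple decision rule could be:
#   threshold = healthy_errors.mean() + 3 * healthy_errors.std()
#   is_faulty = reconstruction_error(model, segment) > threshold
```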
The architecture of the model is built on convolutional and max-pooling layers in the encoder and transposed convolutional layers in the decoder. The best hyperparameters of the model are selected using a special method.<br><strong>Results. </strong>Using the PyTorch library, a model capable of binary classification of healthy and faulty bearings was obtained through training, validation, and testing in the Kaggle software environment.<br><strong>Conclusions</strong>. Testing of the constructed model architecture confirmed the model’s ability to perform binary classification of healthy and faulty bearings, allowing it to be recommended for use in practice to detect bearing faults. Prospects for further research may include testing the model through integration into predictive maintenance systems for timely fault detection</p> M. K. Kysarin Copyright (c) 2025 M. K. Kysarin https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/332949 Sun, 29 Jun 2025 00:00:00 +0300 SYNTHESIS OF NEURAL NETWORK MODELS FOR TECHNICAL DIAGNOSTICS OF NONLINEAR SYSTEMS https://ric.zp.edu.ua/article/view/332954 <p><strong>Context. </strong>The problem of synthesizing a diagnostic model of complex technical processes in nonlinear systems, which should be characterized by a high level of accuracy, is considered. The object of research is the process of synthesizing a neural network model for technical diagnostics of nonlinear systems.<br><strong>Objective </strong>of the work is to synthesize a high-precision neural network model based on previously accumulated historical data about the system.<br><strong>Method. </strong>It is proposed to use artificial neural networks for modeling nonlinear technical systems. First, an overall assessment of the complexity of the task is performed. Based on this assessment, a decision can be made on the best approach to organizing neuromodel synthesis. For the task at hand, the level of ‘random complexity’ was chosen because, despite the relative structure of the data, the total data array is quite large in volume and requires careful study in order to ensure a high-quality solution. It was therefore proposed to use a neuromodel based on recurrent networks of the GRU topology and to use swarm intelligence methods for neurosynthesis, in particular the A3C method. The results obtained showed a high quality of the obtained solution, but due to the high level of resource intensity, the proposed approach requires further modifications.<br><strong>Results. </strong>A diagnostic model of complex technical processes in nonlinear systems of optimal topology, characterized by a high level of accuracy, is obtained. The built neuromodel reduces the risks associated with ensuring human safety.<br><strong>Conclusions. </strong>The conducted experiments confirmed the operability of the proposed approach and allow us to recommend it for further refinement in order to implement technical, industrial and operational process control systems in practice in automation systems. Prospects for further research may lie in optimizing the resource intensity of the synthesis processes</p> S. D. Leoshchenko, A. O. Oliinyk, S. A. Subbotin, B. V. Morklyanyk Copyright (c) 2025 S. D. Leoshchenko, A. O. Oliinyk, S. A. Subbotin, B. V. 
Morklyanyk https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/332954 Sun, 29 Jun 2025 00:00:00 +0300 EVALUATION OF QUANTIZED LARGE LANGUAGE MODELS IN THE TEXT SUMMARIZATION PROBLEM https://ric.zp.edu.ua/article/view/332997 <p><strong>Context. </strong>The problem of increasing the efficiency of deep artificial neural networks in terms of memory and energy consumption, and the multi-criteria evaluation of the quality of the results of large language models (LLM) taking into account the judgments of users in the task of summarizing texts, are considered. The object of the study is the process of automated text summarization based on LLMs.<br><strong>Objective. </strong>The goal of the work is to find a compromise between the complexity of the LLM, its performance and operational efficiency in text summarization problem.<br><strong>Method. </strong>An LLM evaluation algorithm based on multiple criteria is proposed, which allows choosing the most appropriate LLM model for text summarization, finding an acceptable compromise between the complexity of the LLM model, its performance and the quality of text summarization. A significant improvement in the accuracy of results based on neural networks in natural language processing tasks is often achieved by using models that are too deep and over-parameterized, which significantly limits the ability of the models to be used in real-time inference tasks, where high accuracy is required under conditions of limited resources. The proposed algorithm selects an acceptable LLM model based on multiple criteria, such as accuracy metrics BLEU, Rouge-1, 2, Rouge-L, BERT-scores, speed of text generalization, or other criteria defined by the user in a specific practical task of intellectual analysis. The algorithm includes analysis and improvement of consistency of user judgments, evaluation of LLM models in terms of each criterion.<br><strong>Results. </strong>Software is developed for automatically extracting texts from online articles and summarizing these texts. Nineteen quantized and non-quantized LLM models of various sizes were evaluated, including LLaMa-3-8B-4bit, Gemma-2B-4bit, Gemma- 1.1-7B-4bit, Qwen-1.5-4B-4bit, Stable LM-2-1.6B-4bit, Phi-2-4bit, Mistal-7B-4bit, GPT-3.5 Turbo and other LLMs in terms of BLEU, Rouge-1, Rouge-2, Rouge-L and BERT-scores on two different datasets: XSum and CNN/ Daily Mail 3.0.0.<br><strong>Conclusions. </strong>The conducted experiments have confirmed the functionality of the proposed software, and allow to recommend it for practical use for solving the problems of text summarizing. Prospects for further research may include deeper analysis of metrics and criteria for evaluating quality of generated texts, experimental research of the proposed algorithm on a larger number of practical tasks of natural language processing</p> N. I. Nedashkovskaya, R. I. Yeremichuk Copyright (c) 2025 N. I. Nedashkovskaya, R. I. Yeremichuk https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/332997 Sun, 29 Jun 2025 00:00:00 +0300 METHOD FOR ANALYZING INPUT DATA FROM GEAR VIBRATIONS https://ric.zp.edu.ua/article/view/332999 <p><strong>Context. </strong>The paper considers the problem of analyzing large data vectors for analyzing helicopter engine performance. This issue is crucial for improving the reliability and efficiency of modern aviation technologies.<br><strong>Objective. 
METHOD FOR ANALYZING INPUT DATA FROM GEAR VIBRATIONS https://ric.zp.edu.ua/article/view/332999 <p><strong>Context. </strong>The paper considers the problem of analyzing large data vectors to assess helicopter engine performance. This issue is crucial for improving the reliability and efficiency of modern aviation technologies.<br><strong>Objective. </strong>To create a method for analyzing engine vibration data that achieves accurate classification of engine states based on vibration signals.<br><strong>Method. </strong>The input data is analyzed, and a decision is made to create a neural network that is trained to recognize the class of the input vector. The neural network can work immediately and be configured for further training on similar data. The program was implemented using a classical neural network approach. The optimal weights and biases are calculated using derivatives to minimize the loss function. The stochastic gradient descent (SGD) algorithm was used for optimization, and different activation functions were tested to find the best configuration. Choosing the right activation functions ensured maximum performance.<br><strong>Results. </strong>The graphs of the input vectors show that vectors from the first class had more peaks, which facilitated classification. After applying this method, the accuracy was about 70–75%, which was insufficient for the task. To improve this, we enhanced the model structure and reconfigured the activation functions. With the new method, the neural network can classify the input vector with 100% accuracy.<br><strong>Conclusions. </strong>This study presents an approach to analyzing engine vibration data for assessing performance. The scientific novelty lies in adapting a multilayer perceptron (MLP) for classifying vibration signals. The research shows that high accuracy can be achieved without deep architectures by optimizing the MLP. This method is universally applicable, eliminating additional model adaptation costs, which is crucial for industrial use. The practical significance is demonstrated through software and experiments, proving the effectiveness of the MLP for performance monitoring when the model parameters and activation functions are properly adjusted.</p> O. Y. Shalimov, O. O. Moskalchuk, O. M. Yevseienko Copyright (c) 2025 O. Y. Shalimov, O. O. Moskalchuk, O. M. Yevseienko https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/332999 Sun, 29 Jun 2025 00:00:00 +0300
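The abstract above adapts a multilayer perceptron trained with SGD to classify vibration-signal vectors. A minimal illustrative sketch of such a classifier is shown below; the layer sizes, activation functions and two-class labels are assumptions for the example, not the configuration reported by the authors.

```python
# Minimal sketch: an MLP classifier for fixed-length vibration vectors (illustrative only).
# Assumptions: each input vector has 256 samples and belongs to one of two engine-state classes.
import torch
import torch.nn as nn

mlp = nn.Sequential(
    nn.Linear(256, 64), nn.ReLU(),   # hidden layers; activation choices are assumptions
    nn.Linear(64, 32), nn.Tanh(),
    nn.Linear(32, 2),                # two output logits, one per engine-state class
)
optimizer = torch.optim.SGD(mlp.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

x = torch.randn(32, 256)                     # synthetic batch of vibration vectors
y = torch.randint(0, 2, (32,))               # synthetic class labels
for _ in range(10):                          # a few SGD steps
    optimizer.zero_grad()
    loss = criterion(mlp(x), y)
    loss.backward()
    optimizer.step()
```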
TERMINAL CONTROL OF QUADCOPTER SPATIAL MOTION https://ric.zp.edu.ua/article/view/333184 <p><strong>Context. </strong>Constructing quadcopter control algorithms is an area of keen interest because controlling them is fundamentally complex despite the quadcopter’s mechanical simplicity. The key problem of quadcopter control systems is to effectively couple three translational and three rotational degrees of freedom to perform unique target manoeuvres. In addition, these tasks are relevant due to the high demand for quadcopters in various human activities, such as cadastral aerial photography, monitoring hard-to-reach areas and delivering cargo over short distances. They are also widely used in military affairs.<br><strong>Objective. </strong>The objective of this work is to develop and substantiate novel methods for constructing algorithms for high-precision control of quadcopter spatial motion, allowing for its autonomous operation in all main flight modes: stabilization mode, position holding mode, automatic point-to-point flight mode, and automatic takeoff and landing mode.<br><strong>Method. </strong>The given objective determined the use of the following research methods. Pontryagin’s maximum principle was applied to develop algorithms for calculating program trajectories that transfer a quadcopter from its current state to a given one. Lyapunov functions and modal control methods were used to synthesise and analyse quadcopter angular position control algorithms. Numerical modelling methods were used to verify and confirm the obtained theoretical results.<br><strong>Results. </strong>An approach to constructing algorithms for controlling the spatial motion of a quadcopter is proposed. It consists of two parts. The first part solves the problem of transferring a quadcopter from its current position to a given one. The second part proposes an original method of constructing algorithms for quadcopter attitude control based on a dynamic equation for a quaternion.<br><strong>Conclusions. </strong>The proposed mathematical model of quadcopter motion and the methods for constructing control algorithms are verified by numerical modelling and can be applied to the development of quadcopter control systems.</p> M. V. Yefymenko, R. K. Kudermetov Copyright (c) 2025 M. V. Yefymenko, R. K. Kudermetov https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/333184 Sun, 29 Jun 2025 00:00:00 +0300
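The quadcopter abstract above builds attitude control on a dynamic equation for a quaternion. The snippet below only illustrates the standard kinematic relation dq/dt = 0.5·q⊗(0, ω) integrated with a simple Euler step; it is a generic textbook relation, not the control law derived in the paper, and the angular-rate and step values are arbitrary.

```python
# Illustrative quaternion attitude kinematics: q_dot = 0.5 * q (x) (0, omega).
# This is only the standard kinematic relation, not the paper's control algorithm.
import numpy as np

def quat_mult(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Hamilton product of two quaternions given as [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

q = np.array([1.0, 0.0, 0.0, 0.0])          # initial attitude (identity quaternion)
omega = np.array([0.1, 0.0, 0.05])          # body angular rate, rad/s (arbitrary example)
dt = 0.01
for _ in range(100):                         # 1 s of simple Euler integration
    q_dot = 0.5 * quat_mult(q, np.concatenate(([0.0], omega)))
    q = q + q_dot * dt
    q = q / np.linalg.norm(q)                # re-normalize to stay a unit quaternion
print(q)
```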
EVALUATING THE EFFICIENCY OF MECHANISMS FOR FRAME BLOCKS TRANSMISSION IN NOISY CHANNELS OF IEEE 802.11 NETWORKS https://ric.zp.edu.ua/article/view/331894 <p><strong>Context. </strong>Aggregating frames into blocks when transmitting information in wireless IEEE 802.11 networks helps to significantly reduce overhead and increase the transmission rate. However, noise reduces the efficiency of such transmission because longer messages are more likely to be distorted. We compared the efficiency of data transmission by variable-size and constant-size blocks formed from frames using the VBS and FBS mechanisms, respectively, under noise of varying intensity.<br /><strong>Objective. </strong>The purpose of this article is a comparative study of the VBS and FBS mechanisms used for the formation and transmission of frame blocks of different sizes under medium and high noise intensity.<br /><strong>Method</strong>. A simple model used in IEEE 802.11 networks to determine the DSF throughput for transmitting frames in infrastructure domains was modified by us to transmit frame blocks of different sizes under medium- and high-intensity noise affecting the transmission process. A discrete-time memoryless Gaussian channel is used for transmission. In such a channel, bit errors are independent and identically distributed over the bits of a frame. The scale factors of the model are determined for the number of frames in a block <em>k</em> = 6–40 at an average noise level corresponding to BER = 10<sup>–6</sup> and <em>k</em> = 4–15 for high-intensity noise at BER = 10<sup>–5</sup>. The algorithm for calculating the network throughput has been generalized. The investigation of the dependence of the throughput on the number of frames in VBS blocks showed local maxima located in the region of average frame numbers. These maxima are more pronounced at higher data transfer rates.<br /><strong>Results</strong>. It is shown that with a small number of frames in a block (<em>k</em> = 6–9) and high-intensity noise, the efficiency of the FBS mechanism exceeds that of the VBS block formation mechanism. However, at the same noise level, an increase in the number of frames in a block (<em>k</em> ≥ 10) makes the VBS mechanism preferable. This advantage is explained by the fact that at each subsequent transmission stage the VBS mechanism forms a block from the frames distorted at the previous stage, so the block size decreases from stage to stage, increasing the number of frames successfully delivered to the AP (because shorter blocks are more likely to be transmitted successfully). At the same time, the constant and small probability of successfully transmitting a constant-size block at each stage keeps the probability of delivering frames distorted at previous stages low. The situation changes for noise of medium intensity. Here, the transmission of each subsequent block of up to 25 frames using the VBS method requires two stages. The FBS method in these conditions shows that only the first set of frames requires two stages for its complete transmission; then, owing to the accumulation of frames at the previous stages, each subsequent transmission stage completes the formation of the corresponding set in the memory of the AP.<br />Thus, when the noise intensity decreases to BER = 10<sup>–6</sup> and below, the FBS mechanism becomes more effective. The obtained results are illustrated with specific examples characterizing the formation and transmission of various frame blocks.<br /><strong>Conclusions</strong>. In this article, using a mathematical model modified by us, a comparative study was conducted of the efficiency of different mechanisms for forming and transmitting frame blocks of different sizes under noise of different intensity affecting the transmission process. The algorithm for calculating the network throughput was generalized, and the throughput values were determined for the VBS and FBS network operation mechanisms.</p> V. S. Khandetskyi, N. V. Karpenko, V.V. Gerasimov Copyright (c) 2025 V. S. Khandetskyi, N. V. Karpenko, V.V. Gerasimov https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/331894 Sun, 29 Jun 2025 00:00:00 +0300
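The abstract above assumes a channel with independent, identically distributed bit errors, in which the probability of delivering a frame or block without error falls with its length. The sketch below only illustrates this generic relation, P = (1 − BER)^L for L transmitted bits; the frame length, block sizes and printed figures are arbitrary examples, not the paper's results.

```python
# Illustrative only: probability of error-free delivery under independent bit errors,
# P = (1 - BER) ** L, for a single frame and for blocks of k frames.
# Frame length, k values and BER levels below are arbitrary examples.
frame_bits = 12_000          # e.g. a 1500-byte frame payload
for ber in (1e-6, 1e-5):     # medium- and high-intensity noise levels named in the abstract
    p_frame = (1.0 - ber) ** frame_bits
    print(f"BER={ber:g}: P(frame ok) = {p_frame:.4f}")
    for k in (6, 10, 25):    # number of frames aggregated into one block
        p_block = (1.0 - ber) ** (k * frame_bits)
        print(f"  k={k:2d}: P(whole block ok) = {p_block:.4f}")
```

The rapidly falling block-success probability at higher BER is the effect that makes long constant-size blocks vulnerable and motivates comparing the VBS and FBS mechanisms.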
THE SELECTION OF INFORMATION-MEASURING MEANS FOR THE ROBOTOTECHNICAL COMPLEX AND THE RESEARCH OF THEIR WORKER CHARACTERISTICS https://ric.zp.edu.ua/article/view/331945 <p><strong>Context</strong>. The article is devoted to the selection of the means of the information-measurement system (IMS) for the automation of robototechnical complexes (RTC) in flexible production systems applied in various fields of industry, and to the research of their technological characteristics.<br /><strong>Objective</strong>. The goal is to use mathematical models to study the working characteristics of new-design transmitters for information measurement and automated control of robototechnical complexes in flexible production areas.<br /><strong>Method</strong>. In the article, the following tasks were set and solved: analysis of the application object; selection of the types of information-measurement and control elements for the RTC and of its structural scheme; research of the characteristics of the information-measuring transmitter for controlling the active elements of the RTC; and determination of the error of the analog-output transmitter of the information-measurement system of the RTC active elements. Based on the analysis of the application object, it was determined that the structural scheme of the RTC in a flexible production system includes complex, functionally connected technological production areas, modules and robotic complexes, their automated control IMS, regulation and execution devices, a microprocessor control system, and devices of the industrial network. The functional block diagrams of the IMS of the RTC of the flexible production system are given. Based on research, it was found that it is convenient to use a magnetoelastic transducer with a ring sensitive element to measure the mechanical force acting on the working elements of an industrial robot (IR). Unlike existing transmitters, the core of this transmitter is made of solid structural steel. The inductive coil of the proposed transmitter is included in the LC circuit of an autogenerator. The semiconductor part of the magnetoelastic transmitter is assembled on the basis of a transistor. The cross-section of its core is calculated for the mechanical stress allowable for the steel. A block diagram of the inductive transmitter is proposed. The proposed transmitters work on the principle of an autogenerator assembled on an operational amplifier. A mathematical expression is defined for determining the output frequency of the autogenerator. The autogenerator model includes a dependent source, and its transmission coefficient is determined.<br /><strong>Results.</strong> A new transmitter is proposed for measuring information from the manipulator so that special technological operations can be performed synchronously.<br /><strong>Conclusions</strong>. A mathematical model was developed to determine the error of the analog-output transmitter of the information-measurement system of the RTC active elements. The expression ehq is used to determine the error of the analog-output transmitter during the measurement of the current technological operation. It was determined that, in practice, the geometric dimensions of the transmitter and the number of windings remain unchanged during operation, while its parameters change under the influence of the environment. Taking this variation into account, a mathematical model was developed to determine the transmitter error.</p> J. F. Mammadov, T. A. Ahmadova, A. H. Huseynov, N. H. Talibov, H. M. Hashimova, A. A. Ahmadov Copyright (c) 2025 J. F. Mammadov, T. A. Ahmadova, A. H. Huseynov, N. H. Talibov, H. M. Hashimova, A. A. Ahmadov https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/331945 Sun, 29 Jun 2025 00:00:00 +0300
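The abstract above describes a transmitter whose inductive coil forms part of an autogenerator LC circuit, so the measured force is read out through a change in oscillation frequency. The snippet below only illustrates the generic resonance relation f = 1/(2π√(LC)); the component values, and the assumption that the applied force acts through a small inductance change, are placeholders for the example and not the expression derived in the article.

```python
# Illustrative only: output frequency of an ideal LC autogenerator, f = 1 / (2*pi*sqrt(L*C)).
# Component values and the force-to-inductance mapping are arbitrary placeholders.
import math

C = 10e-9                      # tank capacitance, farads (example value)
L0 = 1.0e-3                    # coil inductance with no applied force, henries (example value)

def frequency(L: float, C: float) -> float:
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

f0 = frequency(L0, C)
f_loaded = frequency(L0 * 1.02, C)   # assume the applied force raises the inductance by 2%
print(f"unloaded: {f0/1e3:.2f} kHz, loaded: {f_loaded/1e3:.2f} kHz, shift: {f0 - f_loaded:.1f} Hz")
```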
DEVELOPMENT OF A RANGE MEASUREMENT MODULE ON AN ULTRASONIC SENSOR WITH A GSM MODULE https://ric.zp.edu.ua/article/view/332508 <p><strong>Context</strong>. The development of a range measurement module based on an ultrasonic sensor with a Global System for Mobile Communications (GSM) module is extremely relevant in the field of telecommunications and radio electronics. In today’s world, an increasing number of devices are integrated into Internet of Things (IoT) systems, where long-distance data transmission is provided by telecommunication technologies. The use of the GSM module allows real-time transmission of information from the measuring device to remote servers or end users, which is critical for remote monitoring and control solutions. Ultrasonic sensors in combination with a GSM module can automate measurement processes in hard-to-reach or hazardous environments, which increases the efficiency and safety of systems. The use of radio electronic technologies for real-time transmission of measurement data can significantly expand the functionality of devices and facilitate their integration into existing telecommunications systems, particularly in the industrial, transportation, and infrastructure sectors. Thus, the development of such a module with precise measurements contributes to innovation in the field of telecommunications and radio electronics, providing fast and reliable data transmission, which is an important component of modern information systems.<br /><strong>Objective</strong>. Development of a range measurement module based on an ultrasonic sensor with a GSM module and improvement of the measurement accuracy by implementing the proposed mathematical model of ultrasonic sensor autocalibration.<br /><strong>Method</strong>. To achieve this goal, an integrated range measurement module was developed, which combines the HC-SR04 ultrasonic sensor with a GSM module. The method of improving accuracy is based on the proposed mathematical model of ultrasonic sensor autocalibration.<br /><strong>Results</strong>. The task was stated, and a range measurement module based on an ultrasonic sensor with an integrated GSM module was developed. In the course of the study, an electrical schematic diagram of the device was created using DipTrace software. A printed circuit board was created. A mathematical model of autocalibration of an ultrasonic sensor to improve measurement accuracy has been proposed. A series of experimental studies were carried out to assess accuracy. The results of the experiments confirmed the effectiveness of the developed module for measuring distances.<br /><strong>Conclusions</strong>. The developed range measurement module based on an ultrasonic sensor with a GSM module is an innovative solution that meets the modern requirements of telecommunication and radio engineering systems. The integration of accurate distance measurement based on the proposed mathematical model of autocalibration of an ultrasonic sensor with the possibility of remote data transmission opens up new prospects for remote monitoring and automation of processes. Experimental studies have confirmed the accuracy and reliability of the device, and comparative analysis with analogs has demonstrated its competitive advantages. The cost-effectiveness and energy efficiency of the developed module make it attractive to a wide range of users, from individual developers to industrial enterprises.
Further research can be aimed at improving data processing algorithms and expanding the functionality of the device, which will contribute to the development of innovative technologies in the field of radio electronics and telecommunications.</p> S. V. Sotnik Copyright (c) 2025 S. V. Sotnik https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/332508 Sun, 29 Jun 2025 00:00:00 +0300
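The ultrasonic range-finder abstract above improves accuracy with an autocalibration model for the sensor. The sketch below shows one common form of such compensation, a temperature-corrected speed of sound applied to an HC-SR04-style echo time; the correction formula and constants are a generic approximation, not the mathematical model proposed in the article.

```python
# Illustrative only: distance from an HC-SR04-style echo time with a temperature-corrected
# speed of sound. The linear approximation c(T) = 331.3 + 0.606*T (m/s, T in deg C) is a
# generic textbook relation, not the autocalibration model proposed in the article.
def speed_of_sound(temp_c: float) -> float:
    return 331.3 + 0.606 * temp_c

def distance_m(echo_time_s: float, temp_c: float) -> float:
    # The pulse travels to the target and back, so halve the round-trip path.
    return speed_of_sound(temp_c) * echo_time_s / 2.0

echo = 5.8e-3                       # measured echo time, seconds (example value)
for t in (0.0, 20.0, 35.0):
    print(f"T={t:4.1f} C -> {distance_m(echo, t):.3f} m")
```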