Radio Electronics, Computer Science, Control
http://ric.zntu.edu.ua/
<p dir="ltr" align="justify"><strong>Description:</strong> The scientific journal «Radio Electronics, Computer Science, Control» is an international academic peer-reviewed publication. It publishes scientific articles (works that extensively cover a specific topic, idea, or question and contain elements of their analysis) and reviews (works containing an analysis and reasoned assessment of an original or published book). Submissions receive objective reviews from leading specialists, who evaluate them on substance without regard to race, sex, religion, ethnic origin, nationality, or political philosophy of the author(s).<br /><strong>Founder and Publisher:</strong> <a href="http://zntu.edu.ua/zaporozhye-national-technical-university" aria-invalid="true">National University "Zaporizhzhia Polytechnic"</a>. <strong>Country:</strong> Ukraine.<br /><strong>ISSN</strong> 1607-3274 (print), ISSN 2313-688X (on-line).<br /><strong>Certificate of State Registration:</strong> КВ №24220-14060ПР dated 19.11.2019. The journal is registered by the Ministry of Justice of Ukraine.<br />By the Order of the Ministry of Education and Science of Ukraine of 17.03.2020 № 409 “On approval of the decision of the Certifying Collegium of the Ministry on the activities of the specialized scientific councils dated 06 March 2020”, the <strong>journal is included in the list of scientific specialized periodicals of Ukraine in category “А” (highest level), in which the results of dissertations for Doctor of Science and Doctor of Philosophy may be published</strong>.
By the Order of the Ministry of Education and Science of Ukraine of 21.12.2015 № 1328 "On approval of the decision of the Certifying Collegium of the Ministry on the activities of the specialized scientific councils dated 15 December 2015", the journal is included in the <strong>List of scientific specialized periodicals of Ukraine</strong> in which the results of dissertations for Doctor of Science and Doctor of Philosophy in Mathematics and Technical Sciences may be published.<br />The <strong>journal is included in the Polish List of scientific journals</strong> and peer-reviewed materials from international conferences with an assigned number of points (Annex to the announcement of the Minister of Science and Higher Education of Poland of July 31, 2019: Lp. 16981).<br /><strong>Year of Foundation:</strong> 1999. <strong>Frequency:</strong> 4 times per year (before 2015 - 2 times per year).<br /><strong>Volume:</strong> up to 20 conventional printed sheets. <strong>Format:</strong> 60x84/8.<br /><strong>Languages:</strong> English, Ukrainian. 
Before 2022 also Russian.<br /><strong>Fields of Science:</strong> Physics and Mathematics, Technical Sciences.<br /><strong>Aim:</strong> to serve the academic community, principally by publishing topical articles resulting from original theoretical and applied research in various areas of academic endeavor.<br /><strong>Focus:</strong> fresh formulations of problems and new methods of investigation; helping professionals, graduate students, engineers, academics, and researchers to disseminate information on state-of-the-art techniques within the journal scope.<br /><strong>Scope:</strong> telecommunications and radio electronics, software engineering (including algorithm and programming theory), computer science (mathematical modeling and computer simulation, optimization and operations research, control in technical systems, machine-machine and man-machine interfacing, artificial intelligence, including data mining, pattern recognition, artificial neural and neuro-fuzzy networks, fuzzy logic, swarm intelligence and multiagent systems, hybrid systems), computer engineering (computer hardware, computer networks), information systems and technologies (data structures and bases, knowledge-based and expert systems, data and signal processing methods).<br /><strong>Journal sections:</strong><br />- radio electronics and telecommunications;<br />- mathematical and computer modelling;<br />- neuroinformatics and intelligent systems;<br />- progressive information technologies;<br />- control in technical systems. 
<br /><strong>Abstracting and Indexing:</strong> <strong>The journal is indexed in the <a href="https://mjl.clarivate.com/search-results" target="_blank" rel="noopener">Web of Science</a></strong> (WoS) scientometric database. The articles published in the journal are abstracted in leading international and national <strong>abstracting journals</strong> and <strong>scientometric databases</strong>, and are also placed in <strong>digital archives</strong> and <strong>libraries</strong> with free on-line access.<br /><strong>Editorial board:</strong> <em>Editor in chief</em> - S. A. Subbotin, D. Sc., Professor; <em>Deputy Editor in Chief</em> - D. M. Piza, D. Sc., Professor. The <em>members</em> of the Editorial Board are listed <a href="http://ric.zntu.edu.ua/about/editorialTeam" aria-invalid="true">here</a>.<br /><strong>Publishing and processing fee:</strong> Articles are published and peer-reviewed <strong>free of charge</strong>.<br /><strong>Authors' Copyright:</strong> The journal allows the authors to hold the copyright without restrictions and to retain publishing rights without restrictions. The journal allows readers to read, download, copy, distribute, print, search, or link to the full texts of its articles. The journal allows reuse and remixing of its content in accordance with the Creative Commons license CC BY-SA.<br /><strong>Authors' Responsibility:</strong> By submitting an article to the journal, authors assume full responsibility for compliance with the copyright of other individuals and organizations, the accuracy of citations, data, and illustrations, and the nondisclosure of state and industrial secrets, and consent to transfer to the publisher, free of charge, the right to publish, to translate into foreign languages, and to store and distribute the article materials in any form. 
Authors who hold scientific degrees, by submitting an article to the journal, thereby consent to act free of charge as reviewers of other authors' articles at the request of the journal editor within the established deadlines. The articles submitted to the journal must be original, new, and interesting to the readership of the journal, have reasonable motivation and aim, be previously unpublished, and not be under consideration for publication in other journals or conferences. Articles should not contain trivial or obvious results, draw unwarranted conclusions, or repeat conclusions of already published studies.<br /><strong>Readership:</strong> scientists, university faculties, postgraduate and graduate students, practical specialists.<br /><strong>Publicity and Access Method:</strong> <strong>Open Access</strong> on-line for full-text publications.</p> <p dir="ltr" align="justify"><strong><span style="font-size: small;"> <img src="http://journals.uran.ua/public/site/images/grechko/1OA1.png" alt="" /> <img src="http://i.creativecommons.org/l/by-sa/4.0/88x31.png" alt="" /></span></strong></p>National University "Zaporizhzhia Polytechnic"en-USRadio Electronics, Computer Science, Control1607-3274<h3 id="CopyrightNotices" align="justify"><span style="font-size: small;">Creative Commons Licensing Notifications in the Copyright Notices</span></h3> <p>The journal allows the authors to hold the copyright without restrictions and to retain publishing rights without restrictions.</p> <p>The journal allows readers to read, download, copy, distribute, print, search, or link to the full texts of its articles.</p> <p>The journal allows reuse and remixing of its content in accordance with the Creative Commons license CC BY-SA.</p> <p align="justify"><span style="font-family: Verdana, Arial, Helvetica, sans-serif; font-size: small;">Authors who publish with this journal agree to the 
following terms:</span></p> <ul> <li> <p align="justify"><span style="font-family: Verdana, Arial, Helvetica, sans-serif; font-size: small;">Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a <a href="https://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution License CC BY-SA</a> that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.</span></p> </li> <li> <p align="justify"><span style="font-family: Verdana, Arial, Helvetica, sans-serif; font-size: small;">Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.</span></p> </li> <li> <p align="justify"><span style="font-family: Verdana, Arial, Helvetica, sans-serif; font-size: small;">Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.</span></p> </li> </ul>THE FREQUENCY METHOD FOR OPTIMAL IDENTIFICATION OF CLOSED-LOOP SYSTEM ELEMENTS
http://ric.zntu.edu.ua/article/view/296256
<p>Context. The article is devoted to overcoming the contradictions between the assumptions adopted in known methods of closed-loop control system identification and the design and operating conditions of such systems. The article presents a new method of identifying the transfer function matrix of an element of a two-level closed-loop control system that functions under multidimensional stationary centered random influences.</p> <p>Objective. The purpose of the study, the results of which are presented in this paper, is to extend the indirect identification method to the case of estimating the dynamics model of one of the elements of a two-level closed-loop control system based on passive experiment data.</p> <p>Method. To solve the optimal identification problem, a variational method for minimizing the quality functional on the class of fractional-rational transfer function matrices was used.</p> <p>Results. As a result of the research, the identification problem formulation was formalized, the rules for obtaining experimental information about the input and output signals were determined, the rules for identifying the transfer function matrix of an element of a two-level closed-loop control system that minimizes the sum of the variances of the identification errors in the frequency domain were established, and these rules were verified.</p> <p>Conclusions. The justified rules make it possible to correctly determine the transfer function matrices of a selected element of a closed-loop system when the defined list of conditions is fulfilled. Analysis of the signals in the control paths of closed-loop systems shows that the statistical means of these signals may change even when only centered stationary influences act on the system. Based on this, further research can be aimed at overcoming such effects.</p>S. І. OsadchyiV. A. ZozulyaV. M. KalichA. S. Timoshenko
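The Method paragraph names the quality functional only abstractly. As a hedged illustration (the symbols below are assumptions for exposition, not the paper's notation), a functional of the stated kind, the sum of the variances of the identification errors expressed in the frequency domain, can be written as:

```latex
% epsilon(t) = y(t) - \hat{y}(t): vector of identification errors (assumed notation)
J \;=\; \sum_{i} \sigma_{\varepsilon_i}^{2}
  \;=\; \frac{1}{2\pi} \int_{-\infty}^{\infty}
        \operatorname{tr}\!\bigl[ S_{\varepsilon\varepsilon}(j\omega) \bigr]\, \mathrm{d}\omega ,
```

where \(S_{\varepsilon\varepsilon}(j\omega)\) is the spectral density matrix of the error and the minimization runs over the class of fractional-rational transfer function matrices named in the Method paragraph.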
Copyright (c) 2024 S. І. Osadchyi, V. A. Zozulya, V. M. Kalich, A. S. Timoshenko
https://creativecommons.org/licenses/by-sa/4.0
2024-01-04 2024-01-04 4 195 195 10.15588/1607-3274-2023-4-18
SYNTHESIS OF VHDL-MODEL OF A FINITE STATE MACHINE WITH DATAPATH OF TRANSITIONS
http://ric.zntu.edu.ua/article/view/296233
<p>Context. The problem of building a program model of a finite state machine with datapath of transitions using the VHDL language is considered. The model synthesis process is identified with the synthesis of this type of finite state machine, since the built model can be used both for the analysis of the device’s behavior and for the synthesis of its logic circuit in the FPGA basis. The object of the research is the automated synthesis of the logic circuit of the finite state machine with datapath of transitions, from the results of which numerical characteristics of the hardware expenses for the implementation of the state machine circuit can be obtained. This makes it possible to evaluate the effectiveness of using this structure of the finite state machine when implementing a given control algorithm.</p> <p>Objective. Development and research of a VHDL model of a finite state machine with datapath of transitions for the analysis of the behavior of the state machine and the quantitative assessment of hardware expenses in its logic circuit.</p> <p>Method. The research is based on the structural diagram of a finite state machine with datapath of transitions. The synthesis of the individual blocks of the structure of the state machine is carried out according to a certain procedure from the given graph-scheme of the control algorithm. It is proposed to present the result of the synthesis in the form of a VHDL description based on fixed values of the state codes of the state machine. The process of synthesizing the datapath of transitions, the block forming the codes of transition operations, and the block forming microoperations is demonstrated. The VHDL description of these blocks is written in a synthesizable style, which allows synthesis of the logic circuit of the finite state machine on an FPGA with the help of modern CAD tools and obtaining numerical characteristics of the circuit, in particular, the value of hardware expenses. 
To analyze the correctness of the synthesized circuit, the process of developing the behavioral component of the VHDL model, whose function is the generation of the input signals of the finite state machine, is considered. The classical combination of the synthesizable and behavioral parts of the model allows presenting the results of the synthesis of a finite state machine with datapath of transitions as a separate project that can be used as a structural component of the designed digital system.</p> <p>Results. Using the example of an abstract graph-scheme of a control algorithm, a VHDL model of a finite state machine with datapath of transitions was developed. With the help of the CAD system AMD Vivado, the developed model was synthesized and behavioral modeling of the operation of the finite state machine circuit was carried out. The results of the circuit synthesis made it possible to obtain the value of hardware expenses when implementing the circuit in the FPGA basis. The time diagrams obtained from behavioral modeling testify to the correctness of the implementation of the transition and output functions of the synthesized state machine.</p> <p>Conclusions. In traditional VHDL models of finite state machines, the states do not contain specific codes and are identified using literals. This allows CAD tools to encode states at their own discretion. However, this approach is not suitable for describing a finite state machine with datapath of transitions. The transformation of state codes using a set of arithmetic and logic operations requires the use of fixed values of state codes, which determines the specifics of the VHDL model proposed in this paper. This and similar models can be used, in particular, in studying the effectiveness of a finite state machine according to the criterion of hardware expenses in the device circuit.</p>A. A. BarkalovL. A. TitarenkoR. M. Babakov
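The key idea, that the next state code is produced by applying an arithmetic or logic operation to the current fixed state code rather than by a table lookup, can be illustrated with a minimal Python sketch. The state codes and the operation set below are hypothetical, chosen only for the demo; they are not the paper's VHDL model.

```python
# Toy sketch of an FSM with a "datapath of transitions": the next state code
# is computed by one of a small set of arithmetic/logic operations applied to
# the current state code (hypothetical codes and operations for illustration).
OPS = {
    0: lambda s: s + 1,        # increment the state code
    1: lambda s: s << 1,       # shift the state code left
    2: lambda s: s ^ 0b0011,   # XOR with a constant
    3: lambda s: 0,            # reset to the initial state
}

def next_state(state: int, op_code: int, width: int = 4) -> int:
    """Apply the selected transition operation; keep the code within `width` bits."""
    return OPS[op_code](state) & ((1 << width) - 1)

# Run a fixed sequence of operation codes from the initial state 0.
state = 0
trace = [state]
for op in [0, 0, 1, 2, 3]:
    state = next_state(state, op)
    trace.append(state)
print(trace)  # [0, 1, 2, 4, 7, 0]
```

Because every transition is an operation on a fixed code, the state encoding cannot be left to the CAD tool, which is exactly the point made in the Conclusions.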
Copyright (c) 2024 A. A. Barkalov, L. A. Titarenko, R. M. Babakov
https://creativecommons.org/licenses/by-sa/4.0
2024-01-04 2024-01-04 4 135 135 10.15588/1607-3274-2023-4-13
TECHNOLOGY FOR AUTOMATED CONSTRUCTION OF DOMAIN DICTIONARIES WITH SPECIAL PROCESSING OF SHORT DOCUMENTS
http://ric.zntu.edu.ua/article/view/296240
<p>Context. The article addresses the task of automating the construction of domain dictionaries in the process of implementing software projects, based on the analysis of documents and taking into account their size and presentation form.</p> <p>Objective. The goal of the work is to improve the quality of the dictionary through a new technology that includes special processing of short documents.</p> <p>Method. A model of a short document is proposed, which presents it in the form of three parts: header, content, and final. The header and final parts usually contain information not related to the subject area. Therefore, a method for extracting the content based on the use of a set of keywords has been proposed. The size of a short document (its content) does not allow determining the frequency characteristics of words and, therefore, identifying multi-word terms, whose share reaches 50% of all terms. To make it possible to identify terms in short documents, a method for clustering them is proposed, based on the selection of nouns and the calculation of their frequency characteristics. The resulting clusters are treated as ordinary documents, since their size allows for the selection of multi-word terms. To highlight terms, it is proposed to select sequences of words containing nouns in the text. Analysis of the frequency of repetition of such sequences allows us to identify multi-word terms. To determine the interpretation of terms, a previously developed method of automated search for interpretations in dictionaries was used.</p> <p>Results. Based on the proposed model and methods, software was created to build a domain dictionary, and a number of experiments were conducted that confirm the effectiveness of the developed solutions.</p> <p>Conclusions. The experiments carried out confirmed the performance of the proposed software and allow us to recommend it for use in practice for creating subject-area dictionaries for various information systems. 
Prospects for further research may include the construction of corporate search systems based on dictionaries of terms and document clustering.</p>O. B. KungurtsevI. I. MileikoN. O. Novikova
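The frequency step described in the Method, counting repeated word sequences that contain nouns to find multi-word term candidates, can be sketched in a few lines of Python. The noun filter is stubbed here with a hypothetical word list, and the text is made up; the paper's actual pipeline (content extraction, clustering, dictionary lookup) is not reproduced.

```python
from collections import Counter

# Hypothetical noun list standing in for real POS tagging (demo only).
NOUNS = {"search", "system", "index", "query"}

def candidate_terms(text: str, min_count: int = 2):
    """Count 2- and 3-word sequences containing a noun; frequent ones are
    candidate multi-word terms."""
    words = text.lower().split()
    ngrams = Counter()
    for n in (2, 3):
        for i in range(len(words) - n + 1):
            seq = tuple(words[i:i + n])
            if any(w in NOUNS for w in seq):  # keep sequences containing a noun
                ngrams[seq] += 1
    return {" ".join(seq): c for seq, c in ngrams.items() if c >= min_count}

text = ("the search system builds an index and the search system "
        "answers a query using the index")
print(candidate_terms(text))  # repeated sequences such as "search system"
```

On a single short document such counts are too sparse, which is why the method first merges short documents into clusters and only then runs this frequency analysis.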
Copyright (c) 2024 O. B. Kungurtsev, I. I. Mileiko, N. O. Novikova
https://creativecommons.org/licenses/by-sa/4.0
2024-01-04 2024-01-04 4 148 148 10.15588/1607-3274-2023-4-14
SONGS CONTINUATION GENERATION TECHNOLOGY BASED ON TEXT GENERATION STRATEGIES, TEXT MINING AND LANGUAGE MODEL T5
http://ric.zntu.edu.ua/article/view/296243
<p>Context. Pre-trained large language models are currently the driving force behind the development not only of NLP, but of deep learning systems in general. Transformer models are able to solve virtually all existing problems, provided that certain requirements and training practices are met. In turn, words, sentences, and texts are the basic and most important means of communication between intellectually developed beings. Naturally, speech and texts are used to convey certain emotions, events, etc. One of the main ways of using language to describe experienced emotions is songs with lyrics. However, to preserve rhyme and rhythm, the length of verse lines, the song structure, etc., artists often have to repeat lines in the lyrics. In addition, the process of writing texts can be long.</p> <p>The objective of the study is to develop an information technology for generating the continuation of song texts based on the T5 machine learning model with (SA, specific author) and without (NSA, non-specific author) consideration of the author's style.</p> <p>Method. Choosing a decoding strategy is important for the generation process. However, instead of favoring a particular strategy, the system supports multiple strategies, namely the following 8: Contrastive search, Top-p sampling, Top-k sampling, Multinomial sampling, Beam search, Diverse beam search, Greedy search, and Beam-search multinomial sampling.</p> <p>Results. A machine learning model was developed to generate the continuation of song lyrics using large language models, in particular the T5 model, to accelerate, complement, and increase the flexibility of the songwriting process.</p> <p>Conclusions. The created model shows excellent results in generating song text continuations on test data. Analysis of the raw data showed that the NSA model exhibits less degradation in results, while the SA model requires balancing the amount of text for each author. 
Several text metrics, such as BLEU, RougeL, and RougeN, were calculated to quantitatively compare the results of the models and generation strategies. The value of the BLEU metric is the most variable, changing significantly depending on the strategy. The Rouge metrics show less variability and a smaller range of values. For comparison, 8 different decoding methods for text generation supported by the transformers library were used. The comparison shows that the metrically best method of song text generation is beam search and its variations, in particular beam sampling. Contrastive search usually outperformed the conventional greedy approach. The top-p and top-k methods are not clearly superior to each other and gave different results in different situations.</p>O. MediakovV. Vysotska
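Two of the listed decoding strategies, top-k and top-p (nucleus) sampling, differ only in how they truncate the next-token distribution before sampling. A minimal self-contained sketch over a made-up toy distribution (the probabilities are assumptions for the demo, not model outputs):

```python
import numpy as np

# Toy next-token distribution, sorted in descending order (demo values).
probs = np.array([0.45, 0.25, 0.15, 0.10, 0.05])

def top_k_filter(p, k):
    """Keep the k most probable tokens and renormalize."""
    out = np.zeros_like(p)
    idx = np.argsort(p)[::-1][:k]
    out[idx] = p[idx]
    return out / out.sum()

def top_p_filter(p, top_p):
    """Keep the smallest set of tokens whose cumulative mass reaches top_p."""
    order = np.argsort(p)[::-1]
    cum = np.cumsum(p[order])
    cutoff = np.searchsorted(cum, top_p) + 1  # include the token crossing top_p
    out = np.zeros_like(p)
    out[order[:cutoff]] = p[order[:cutoff]]
    return out / out.sum()

filtered_k = top_k_filter(probs, 2)
filtered_p = top_p_filter(probs, 0.8)
print(filtered_k)  # only the 2 best tokens survive
print(filtered_p)  # tokens kept until ~80% of the mass is covered
```

Top-k keeps a fixed number of candidates regardless of how peaked the distribution is, while top-p adapts the candidate count to the distribution's shape, which is one reason the abstract finds neither clearly superior.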
Copyright (c) 2024 O. O. Mediakov, V. A. Vysotska
https://creativecommons.org/licenses/by-sa/4.0
2024-01-04 2024-01-04 4 157 157 10.15588/1607-3274-2023-4-15
DETECTION OF A SEISMIC SIGNAL BY A THREE-COMPONENT SEISMIC STATION AND DETERMINATION OF THE SEISMIC EVENT CENTER
http://ric.zntu.edu.ua/article/view/296246
<p>Context. The work is devoted to the development of theoretical foundations aimed at automating the process of determining the location of a seismic event's center.</p> <p>Objective. The purpose of the work is to develop a method for determining the center of a seismic event based on the features of the angular characteristics of the constituent volume waves of a seismic signal obtained with the help of a three-component seismic station. The proposed method will reduce the time it takes to provide users with preliminary information about the fact of a seismic event and its parameters.</p> <p>Method. The method of automatically detecting the focal point is based on the orthogonality features of the angular characteristics of the volume waves registered in a sample of three-component seismic recordings from a certain direction. Implementation of the proposed approaches makes it possible to reduce the processing time of the seismic record, with appropriate reliability, compared to processing in manual mode. An example application of the proposed method (algorithm) for processing a seismic signal in the Vrancea zone on 27.10.2004 with magnitude M=5.7 is considered.</p> <p>Results. The proposed approach to processing the measured data of a separate three-component seismic station using a polarization analysis device allows detecting the arrival of a seismic signal, identifying the main components of the seismic signal, and estimating the location of the epicenter of a seismic event. Experimental research on the use of the proposed algorithm for determining the location of the epicenter of a seismic event showed that the time of establishing an emergency event within the borders of Ukraine was reduced five times (from 15 to 3 minutes), and the detection error was 37 km.</p> <p>Conclusions. 
The formed basis and the proposed approach to detecting a seismic signal, identifying its components, and determining a seismic event's focal point based on the results of processing a three-component seismic record are effective. The proposed method (algorithm) should be used to automate the process of seismic signal detection by a three-component seismic station and to determine the seismic event center.</p>T. A. VakaliukI. A. PilkevychY. O. HordiienkoV. V. LobodaA. O. Saliy
Copyright (c) 2024 T. A. Vakaliuk, I. A. Pilkevych, Y. O. Hordiienko, V. V. Loboda, A. O. Saliy
https://creativecommons.org/licenses/by-sa/4.0
2024-01-04 2024-01-04 4 175 175 10.15588/1607-3274-2023-4-16
DEVELOPMENT OF APPLIED ONTOLOGY FOR THE ANALYSIS OF DIGITAL CRIMINAL CRIME
http://ric.zntu.edu.ua/article/view/296250
<p>Context. A feature of the modern digital world is that crime is often committed thanks to the latest computer technologies, and the work of law enforcement agencies faces a number of complex challenges in the digital environment. The development of information technology and Internet communications creates new opportunities for criminals who use digital traces and evidence to commit crimes, which complicates the process of identifying and tracking them.</p> <p>Objective. Development of an applied ontology for a system for analyzing a digital criminal offense, which will effectively analyze, process and interpret a large amount of digital data. It will help to cope with the complex task of processing digital data, and will also help automate the process of discovering new knowledge.</p> <p>Methods. To build an ontological model as a means of reflecting knowledge about digital crime, information was collected on existing international and domestic classifications. The needs and requirements that must be satisfied by the developed ontology were also analyzed. The creation of an ontological model that reflects the basic concepts, relationships in the field of digital criminal offense was carried out in accordance with a recognized ontological analysis of a specialized subject area.</p> <p>Results. An applied ontology contains the definition of entities, properties, classes, subclasses, etc., as well as the creation of semantic relationships between them. At the center of the semantics is the Digital Crime class, the problem area of which is complemented by the interrelated classes Intruder, Digital evidence, Types of Crime, and Criminal liability. Such characteristics as motive, type of crime, method of commission, classification signs of digital traces and types of crime, as well as other individual information were assigned to the attributes of the corresponding classes. An ontological model was implemented in OWL using the Protégé software tool. 
A feature of the implementation of the applied ontology was the creation of SWRL rules for automatically filling in additional links between class instances. Manual and automatic verification of the ontology showed the integrity, consistency, and a high degree of correctness and adequacy of the model. The bugs found were usually related to technical aspects and semantic inconsistencies, and were corrected during further development iterations.</p> <p>Conclusions. The research confirmed the effectiveness of the developed applied ontology for the analysis of digital criminality, providing more accurate and faster results compared to traditional approaches.</p>L. O. VlasenkoN. M LutskaN. A. ZaietsT. V. SavchenkoA. A. Rudenskiy
Copyright (c) 2024 L. O. Vlasenko, N. M Lutska, N. A. Zaiets, T. V. Savchenko, A. A. Rudenskiy
https://creativecommons.org/licenses/by-sa/4.0
2024-01-04 2024-01-04 4 184 184 10.15588/1607-3274-2023-4-17
NEURAL ORDINARY DIFFERENTIAL EQUATIONS FOR TIME SERIES RECONSTRUCTION
http://ric.zntu.edu.ua/article/view/294369
<p>Context. Neural Ordinary Differential Equations are a family of deep neural networks that leverage numerical-methods approaches for solving the problem of time series reconstruction, given a small number of unevenly distributed samples.</p> <p>Objective. The goal of the following research is the synthesis of a deep neural network that is able to solve the input signal reconstruction and time series extrapolation task.</p> <p>Method. The proposed method exhibits the benefits of solving the time series extrapolation task over the forecasting one. A model that implements an encoder-decoder architecture with differential equation solving in the latent space is proposed. The latter approach was shown to demonstrate outstanding performance in solving the time series reconstruction task given a small percentage of noisy and unevenly distributed input signals. The proposed Latent Ordinary Differential Equations Variational Autoencoder (LODE-VAE) model was benchmarked on synthetic non-stationary data with added white noise, randomly sampled with random intervals between each signal.</p> <p>Results. The proposed method was implemented via a deep neural network to solve the time series extrapolation task.</p> <p>Conclusions. The conducted experiments have confirmed that the proposed model solves the given task effectively, and it is recommended for real-world problems that require reconstructing the dynamics of non-stationary processes. The prospects for further research may include the computational optimization of the proposed models, as well as conducting additional experiments involving different baselines, e.g., Generative Adversarial Networks or attention networks.</p>D. V. Androsov
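The reason a latent-ODE decoder handles unevenly sampled series is that the latent trajectory is obtained by numerically integrating dz/dt = f(z) and can be evaluated at arbitrary time points. A minimal sketch of that core idea, with a hand-chosen linear vector field standing in for the trained network (not the LODE-VAE itself):

```python
import numpy as np

# Rotation field as a stand-in for the learned dynamics f_theta(z, t):
# trajectories stay on a circle, so z[0](t) follows cos(t).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])

def f(z):
    return A @ z

def odeint_euler(z0, times, steps_per_unit=1000):
    """Fixed-step Euler integration, evaluated at the requested time points."""
    out, z, t = [], z0.astype(float), times[0]
    for t_next in times:
        n = max(1, int((t_next - t) * steps_per_unit))
        h = (t_next - t) / n
        for _ in range(n):
            z = z + h * f(z)
        t = t_next
        out.append(z.copy())
    return np.array(out)

# Unevenly spaced observation times, as in the paper's setting.
times = np.array([0.0, 0.3, 0.35, 1.1, 2.0])
traj = odeint_euler(np.array([1.0, 0.0]), times)
print(traj[:, 0])  # close to cos(t) at exactly the requested times
```

In the real model, f would be a neural network trained through the solver, and an encoder would infer the initial latent state z0 from the observed samples.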
Copyright (c) 2023 D. V. Androsov
https://creativecommons.org/licenses/by-sa/4.0
2023-12-24 2023-12-24 4 69 69 10.15588/1607-3274-2023-4-7
DEEP NETWORK-BASED METHOD AND SOFTWARE FOR SMALL SAMPLE BIOMEDICAL IMAGE GENERATION AND CLASSIFICATION
http://ric.zntu.edu.ua/article/view/296152
<p>Context. The authors of the article investigated the problem of generating and classifying breast cancer histological images. The widespread incidence of breast cancer explains the problem’s relevance. The automated diagnosing procedure saves time and eliminates the subjective aspect. The study’s findings can be applied to cancer CAD systems.</p> <p>Objective. The purpose of the study is to develop a deep neural network-based method and software tool for generating and classifying histological images in order to increase classification accuracy.</p> <p>Method. The method of histological image generation and classification was developed in the research study. This method employs CNN and GAN. To improve the classification accuracy, the initial image sample was expanded using GAN.</p> <p>Results. The computer research of the developed method of image generation and classification was conducted on the basis of the dataset located on the Zenodo platform. Light microscopy served as the basis for obtaining the image. The dataset contained three classes of G1, G2, and G3 breast cancer histological images. Based on the developed method, the accuracy of image classification was 96%. This is a higher classification accuracy compared to existing models such as AlexNet, LeNet5, and VGG16. The software module can be integrated into CAD.</p> <p>Conclusions. The developed method of generating and classifying images is the basis of the software module. The software module can be integrated into CAD.</p>O. M. BerezskyP. B. LiashchynskyiO. Y. PitsunG. M. Melnyk
Copyright (c) 2024 O. M. Berezsky, P. B. Liashchynskyi, O. Y. Pitsun, G. M. Melnyk
https://creativecommons.org/licenses/by-sa/4.0
2024-01-02 2024-01-02 4 76 76 10.15588/1607-3274-2023-4-8
ENSEMBLE OF ADAPTIVE PREDICTORS FOR MULTIVARIATE NONSTATIONARY SEQUENCES AND ITS ONLINE LEARNING
http://ric.zntu.edu.ua/article/view/296164
<p>Context. In this research, we explore an ensemble of metamodels that utilizes multivariate signals to generate forecasts. The ensemble includes various traditional forecasting models such as multivariate regression, exponential smoothing, and ARIMAX, as well as nonlinear structures based on artificial neural networks, ranging from simple feedforward networks to deep architectures like LSTM and transformers.</p> <p>Objective. The goal of this research is to develop an effective method for combining forecasts from the multiple models forming the metamodel to create a unified forecast that surpasses the accuracy of the individual models. We aim to investigate the effectiveness of the proposed ensemble in the context of forecasting tasks with nonstationary signals.</p> <p>Method. The proposed ensemble of metamodels employs the method of Lagrange multipliers to estimate the parameters of the metamodel. The Kuhn-Tucker system of equations is solved to obtain unbiased estimates using the least squares method. Additionally, we introduce a recurrent form of the least squares algorithm for adaptive processing of nonstationary signals.</p> <p>Results. The evaluation of the proposed ensemble method is conducted on a dataset of time series. Metamodels formed by combining various individual models demonstrate improved forecast accuracy compared to the individual models. The approach is effective in capturing nonstationary patterns and enhancing overall forecasting accuracy.</p> <p>Conclusions. The ensemble of metamodels, which utilizes multivariate signals for forecast generation, offers a promising approach to achieving better forecasting accuracy. By combining diverse models, the ensemble exhibits robustness to nonstationarity and improves the reliability of forecasts.</p>Ye. V. BodyanskiyKh. V. Lipianina-HoncharenkoA. O. Sachenko
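The combination step described in the Method can be sketched as a constrained least-squares problem: find weights for the base models' forecasts that minimize the squared error subject to the weights summing to one, by solving the KKT system that the Lagrange multiplier approach yields. The data and the three base "models" below are synthetic stand-ins, not the paper's ensemble.

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.sin(np.linspace(0, 6, 200))       # target series (synthetic)
F = np.column_stack([                    # forecasts of 3 hypothetical base models
    y + rng.normal(0, 0.05, y.size),
    y + rng.normal(0, 0.10, y.size),
    y + rng.normal(0, 0.20, y.size),
])

# Minimize ||y - F w||^2 subject to 1^T w = 1, via the KKT (Kuhn-Tucker) system:
#   [ 2 F^T F   1 ] [ w   ]   [ 2 F^T y ]
#   [ 1^T       0 ] [ lam ] = [ 1       ]
n = F.shape[1]
ones = np.ones(n)
kkt = np.block([[2 * F.T @ F, ones[:, None]],
                [ones[None, :], np.zeros((1, 1))]])
rhs = np.concatenate([2 * F.T @ y, [1.0]])
w = np.linalg.solve(kkt, rhs)[:n]

print(w, w.sum())  # weights combine the base forecasts; they sum to 1
```

A recurrent (recursive least squares) form of the same estimate is what the paper uses to track nonstationary signals online.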
Copyright (c) 2024 Ye. V. Bodyanskiy, Kh. V. Lipianina-Honcharenko, A. O. Sachenko
https://creativecommons.org/licenses/by-sa/4.0
2024-01-02 | Issue 4 (2023), p. 91 | DOI: 10.15588/1607-3274-2023-4-9
METHOD FOR AGENT-ORIENTED TRAFFIC PREDICTION UNDER DATA AND RESOURCE CONSTRAINTS
http://ric.zntu.edu.ua/article/view/296210
<p>Context. The problem of traffic prediction in a city is closely connected to the tasks of urban transportation and air pollution detection. Modern prediction models have redundant complexity when used for separate stations and require a large number of measuring stations and a long measurement period when predictions are made hourly. Therefore, a method to overcome these constraints is lacking. The object of the study is city traffic.</p> <p>Objective. The objective of the study is to develop a method for traffic prediction, providing models for traffic quantification at measuring stations in the future under data and resource constraints.</p> <p>Method. A method for agent-oriented traffic prediction under data and resource constraints is proposed in the paper. This method uses biLSTM models whose input features include traffic data obtained from an agent representing the target station and from other agents representing informative city stations. These agents are selected by ensembles of decision trees using the Random Forest method. The input time period length is set using autocorrelation data.</p> <p>Results. An experimental investigation was conducted on traffic data collected in Madrid from 59 measuring stations. Models created by the proposed method had higher prediction accuracy, with lower values of MSE, MAE, and RMSE, and higher informativeness compared to base LSTM models.</p> <p>Conclusions. The obtained models have an optimal number of input features compared to the known models and do not require a complete system of city stations for all roads. This enables the application of these models under city traffic data and resource constraints. The proposed solutions provide high informativeness of the obtained models with a practically applicable accuracy level.</p>V. M. Lovkin, S. A. Subbotin, A. O. Oliinyk
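The abstract's idea of setting the input window length from autocorrelation data can be sketched as choosing the first lag at which the sample autocorrelation falls below a threshold. The threshold and maximum lag below are illustrative assumptions, not the authors' values:

```python
import math

def autocorr(series, lag):
    """Sample autocorrelation of a series at a given lag."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[t] - mean) * (series[t + lag] - mean)
              for t in range(n - lag))
    return cov / var

def window_length(series, threshold=0.2, max_lag=48):
    """First lag where the autocorrelation drops below the threshold."""
    for lag in range(1, max_lag + 1):
        if autocorr(series, lag) < threshold:
            return lag
    return max_lag

# Hourly-like traffic with a 24-step cycle: correlation stays high
# only for a few lags, so a short input window already carries most
# of the usable signal.
traffic = [math.sin(2 * math.pi * t / 24) for t in range(240)]
length = window_length(traffic)
```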
Copyright (c) 2024 V. M. Lovkin, S. A. Subbotin, A. O. Oliinyk
https://creativecommons.org/licenses/by-sa/4.0
2024-01-03 | Issue 4 (2023), p. 99 | DOI: 10.15588/1607-3274-2023-4-10
PARALLEL AND DISTRIBUTED COMPUTING TECHNOLOGIES FOR AUTONOMOUS VEHICLE NAVIGATION
http://ric.zntu.edu.ua/article/view/296211
<p>Context. Autonomous vehicles are becoming increasingly popular, and one of the important modern challenges in their development is ensuring effective navigation in space and movement within designated lanes. This paper examines a method of spatial orientation for vehicles using computer vision and artificial neural networks. The research focused on the navigation system of an autonomous vehicle, which incorporates modern distributed and parallel computing technologies.</p> <p>Objective. The aim of this work is to enhance modern autonomous vehicle navigation algorithms through parallel training of artificial neural networks and to determine the optimal combination of technologies and device nodes to increase speed and enable real-time decision-making in spatial navigation for autonomous vehicles.</p> <p>Method. The research establishes that the use of computer vision and neural networks for road lane segmentation is an effective method for the spatial orientation of autonomous vehicles. For multi-core computing systems, applying the parallel programming technology OpenMP to neural network training on processors with varying numbers of parallel threads increases the algorithm's execution speed. However, using CUDA technology for neural network training on a graphics processing unit enhances prediction speeds significantly more than OpenMP. Additionally, the feasibility of employing PyTorch Distributed Data Parallel (DDP) technology for training the neural network across multiple graphics processing units (nodes) simultaneously was explored. This approach further improved prediction execution times compared to using a single graphics processing unit.</p> <p>Results. An algorithm for training and prediction of an artificial neural network was developed using two independent nodes, each equipped with a separate graphics processing unit, synchronized to exchange training results after each epoch using PyTorch Distributed Data Parallel (DDP) technology. This approach allows computations to scale across a larger number of resources, significantly expediting the model training process.</p> <p>Conclusions. The conducted experiments confirmed the effectiveness of the proposed algorithm, warranting the recommendation of this research for further advancement of autonomous vehicles and enhancement of their navigational capabilities. Notably, the research outcomes can find applications in various domains, including automotive manufacturing, logistics, and urban transportation infrastructure. The obtained results are expected to help future researchers understand the most efficient hardware and software resources for implementing AI-based navigation systems in autonomous vehicles. Prospects for future investigations include refining the accuracy of the proposed parallel algorithm without compromising its efficiency metrics. Furthermore, there is potential for experimental exploration of the proposed algorithm in more intricate practical scenarios of diverse nature and dimensions.</p>L. I. Mochurad, M. V. Mamchur
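DDP-style synchronous data parallelism averages gradients across replicas so that every node applies the identical update each step. A dependency-free sketch of that synchronization step for a one-parameter linear model (illustrative only; the paper uses PyTorch's DistributedDataParallel across GPUs, not this toy):

```python
def mse_grad(w, xs, ys):
    """Gradient of the mean squared error for the model y = w * x."""
    n = len(xs)
    return 2.0 / n * sum((w * x - y) * x for x, y in zip(xs, ys))

def ddp_step(w, shards, lr=0.01):
    """One synchronous step: each worker computes a local gradient on
    its shard, the gradients are averaged (the all-reduce), and every
    replica applies the same update."""
    grads = [mse_grad(w, xs, ys) for xs, ys in shards]
    avg = sum(grads) / len(grads)
    return w - lr * avg, avg

# Two equal-sized shards reproduce the full-batch gradient exactly,
# which is why the replicas stay in sync without sharing raw data.
shards = [([1.0, 2.0], [2.0, 4.0]), ([3.0, 4.0], [6.0, 8.0])]
w1, avg_grad = ddp_step(0.0, shards)
full_grad = mse_grad(0.0, [1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
```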
Copyright (c) 2024 L. I. Mochurad, M. V. Mamchur
https://creativecommons.org/licenses/by-sa/4.0
2024-01-03 | Issue 4 (2023), p. 111 | DOI: 10.15588/1607-3274-2023-4-11
RCF-ST: RICHER CONVOLUTIONAL FEATURES NETWORK WITH STRUCTURAL TUNING FOR EDGE DETECTION ON NATURAL IMAGES
http://ric.zntu.edu.ua/article/view/296231
<p>Context. The problem of automating edge detection on natural images in intelligent systems is considered. The subject of the research is deep learning convolutional neural networks for edge detection on natural images.</p> <p>Objective. The objective of the research is to improve the edge detection performance on natural images by structural tuning of the richer convolutional features network architecture.</p> <p>Method. In general, edge detection performance is influenced by the neural network architecture. To automate the design of the network structure, structural tuning of a neural network is applied in this paper. The computational costs of structural tuning are incomparably smaller than those of neural architecture search, but a higher qualification of the researcher is required, and the resulting solution will be suboptimal. In this research, a destructive approach and then a constructive approach to structural tuning are successively applied to the base architecture of the RCF neural network. The constructive approach starts with a simple architecture network; hidden layers, nodes, and connections are added to expand the network. The destructive approach starts with a complex architecture network; hidden layers, nodes, and connections are then deleted to contract the network. The structural tuning of the richer convolutional features network includes: (1) reducing the number of convolutional layers; (2) reducing the number of convolutions in convolutional layers; (3) removing at each stage the sigmoid activation function with subsequent calculation of the loss function; (4) adding batch normalization layers after convolutional layers; (5) including ReLU activation functions after the added batch normalization layers. The obtained neural network is named RCF-ST. The initial color images were scaled to the specified size and then input to the neural network. The advisability of each of the proposed stages of network structural tuning was researched by estimating the edge detection performance using the confusion matrix elements and the Figure of Merit. The advisability of structural tuning of the neural network as a whole was estimated by comparison with methods known from the literature using the Optimal Dataset Scale and Optimal Image Scale.</p> <p>Results. The proposed convolutional neural network has been implemented in software and researched for solving the problem of edge detection on natural images. The structural tuning technique may be used for the informed design of neural network architectures for other artificial intelligence problems.</p> <p>Conclusions. The obtained RCF-ST network improves the performance of edge detection on natural images. The RCF-ST network is characterized by significantly fewer parameters compared to the RCF network, which makes it possible to reduce the resource consumption of the network. Besides, the RCF-ST network enhances the robustness of edge detection on a texture background.</p>M. V. Polyakova
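Steps (4)-(5) of the tuning insert batch normalization followed by ReLU after convolutional layers. The per-feature arithmetic of that pair can be sketched in plain Python over a batch of scalars; the actual network of course applies it channel-wise to feature maps:

```python
import math

def batchnorm_relu(batch, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a batch of activations to zero mean and unit
    variance, scale/shift by learnable gamma/beta, then apply ReLU."""
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n
    normed = [gamma * (x - mean) / math.sqrt(var + eps) + beta
              for x in batch]
    return [max(0.0, x) for x in normed]

# Activations below the batch mean are normalized to negative values
# and zeroed by ReLU; the rest are rescaled to unit-variance range.
out = batchnorm_relu([1.0, 2.0, 3.0, 4.0])
```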
Copyright (c) 2024 M. V. Polyakova
https://creativecommons.org/licenses/by-sa/4.0
2024-01-04 | Issue 4 (2023), p. 122 | DOI: 10.15588/1607-3274-2023-4-12
DEVELOPMENT OF A TECHNIQUE FOR STRUCTURING GROUP EXPERT ASSESSMENTS UNDER UNCERTAINTY AND INCONSISTENCY
http://ric.zntu.edu.ua/article/view/294290
<p>Context. The issues of structuring group expert assessments are considered in order to determine a generalized assessment under inconsistency between expert assessments. The object of the study is the process of synthesizing mathematical models for the structuring (clustering, partitioning) of expert assessments that are formed within the framework of the Shafer model under uncertainty and inconsistency (conflict).</p> <p>Objective. The purpose of the article is to develop an approach based on the metrics of the theory of evidence that makes it possible to identify a number of homogeneous subgroups within the initial heterogeneous set of expert judgments formed within the framework of the Shafer model, or to identify experts whose judgments differ significantly from those of the rest of the group.</p> <p>Method. The research methodology is based on the mathematical apparatus of the theory of evidence and cluster analysis. The proposed approach uses the principles of hierarchical clustering to partition a heterogeneous (inconsistent) set of expert evidence into a number of subgroups (clusters) within which expert assessments are close to each other. Metrics of the theory of evidence are considered as a criterion for determining the similarity and dissimilarity of clusters. Experts' evidence is considered consistent within a formed cluster if the average or maximum (depending on certain initial conditions) level of conflict between them does not exceed a given threshold level.</p> <p>Results. The proposed approach to structuring expert information makes it possible to assess the degree of consistency of expert assessments within an expert group based on an analysis of the distance between expert evidence bodies. In case of a lack of consistency within the expert group, it is proposed to select from the heterogeneous set of assessments subgroups of experts whose assessments are close to each other for further aggregation in order to obtain a generalized assessment.</p> <p>Conclusions. Models and methods for analyzing and structuring group expert assessments formed within the notation of the theory of evidence under uncertainty, inconsistency, and conflict were further developed. An approach to clustering group expert assessments formed under uncertainty and inconsistency (conflict) within the framework of the Shafer model is proposed in order to identify subgroups within which expert assessments are considered consistent. In contrast to existing clustering methods, the proposed approach allows processing expert evidence of various structures and taking into account possible ways of their interaction (combination, intersection).</p>Ye. O. Davydenko, A. V. Shved, N. V. Honcharova
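As an assumed illustration of the conflict criterion, Dempster's conflict mass K (the total product mass that two bodies of evidence assign to disjoint focal elements) can serve as the pairwise dissimilarity; two experts would fall into one cluster when K stays below a threshold. The paper also considers other evidence-theory metrics, which this sketch does not cover:

```python
def conflict(m1, m2):
    """Dempster conflict K between two basic probability assignments,
    given as dicts mapping frozenset focal elements to masses."""
    return sum(v1 * v2
               for a, v1 in m1.items()
               for b, v2 in m2.items()
               if not (a & b))

def same_cluster(m1, m2, threshold=0.6):
    """Group two experts together when their conflict is low."""
    return conflict(m1, m2) <= threshold

expert1 = {frozenset({'a'}): 0.8, frozenset({'a', 'b'}): 0.2}
expert2 = {frozenset({'b'}): 0.7, frozenset({'a', 'b'}): 0.3}
k = conflict(expert1, expert2)   # only {'a'} vs {'b'} are disjoint
```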
Copyright (c) 2023 Ye. O. Davydenko, A. V. Shved, N. V. Honcharova
https://creativecommons.org/licenses/by-sa/4.0
2023-12-22 | Issue 4 (2023), p. 30 | DOI: 10.15588/1607-3274-2023-4-3
METHOD FOR MINIMIZING THE SIDELOBE LEVEL OF AUTOCORRELATION FUNCTIONS OF SIGNALS WITH NONLINEAR FREQUENCY MODULATION
http://ric.zntu.edu.ua/article/view/294294
<p>Context. At present, when creating new and upgrading existing radar systems, solid-state generator devices are widely used, which imposes certain restrictions on the peak power of probing signals. To overcome this limitation, longer-duration signals with intra-pulse modulation are used. The main efforts of researchers are focused on reducing the maximum sidelobe level of the autocorrelation function of such signals, which, without additional measures, is significant and complicates the work of detection and false-alarm rate stabilization systems. Attention is paid to signals with nonlinear frequency modulation that consist of two or three linearly frequency-modulated fragments. The maximum sidelobe level of such signals depends significantly on the frequency-time parameters of the fragments, and it is therefore very difficult to obtain a stable value of it. Searching for signals with minimal sidelobe levels by optimizing their time-frequency parameters is a difficult task, because changing the parameters of earlier signal fragments leads to changes in the parameters of subsequent fragments.</p> <p>Objective. The aim of the work is to develop a method that simplifies the search for local minima of the sidelobe level of two- and three-fragment signals with nonlinear frequency modulation by using a modified mathematical model with an integer number of periods of radio oscillations in the linearly frequency-modulated fragments.</p> <p>Method. The developed method is based on the proposed modification of the mathematical model, which corrects the frequency-time parameters of two- and three-fragment signals with nonlinear frequency modulation by modifying the frequency modulation rate so that each fragment contains an integer number of complete periods of radio-frequency oscillations, which simplifies the process of finding local minima of the sidelobe level.</p> <p>Results. Modification of the initial mathematical model leads to an expansion of the possible range of values of the frequency-time parameters, duration ratios, and frequency deviations of the linearly frequency-modulated fragments, and ensures the stability of the mathematical model with a decrease in the maximum sidelobe level of the autocorrelation function.</p> <p>Conclusions. It has been experimentally confirmed that the proposed method of modifying the input frequency-time parameters of signals with nonlinear frequency modulation in the vast majority of cases reduces the maximum sidelobe level and simplifies the process of finding its local minima. The optimal ratios of the durations and frequency deviations of the signal fragments are determined; when these are observed, stable operation of the models is ensured and, in most cases, a lower maximum sidelobe level is obtained.</p>O. O. Kostyria, A. A. Hryzo, O. M. Dodukh, B. A. Lisohorskyi, A. A. Lukianchykov
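The core of the modification, adjusting the frequency-modulation rate so each LFM fragment contains a whole number of RF cycles, can be sketched as follows. The form of the adjustment (rounding the cycle count of a linear chirp and back-solving for the rate) is an illustrative assumption; the paper derives the exact correction:

```python
def adjust_rate(f0, k, T):
    """Tweak the chirp rate k (Hz/s) of an LFM fragment of duration T
    starting at frequency f0 so that the fragment contains an integer
    number of complete cycles.  Cycle count of a linear chirp:
    N = f0*T + k*T**2/2."""
    cycles = f0 * T + k * T * T / 2.0
    target = round(cycles)                 # nearest whole cycle count
    return 2.0 * (target - f0 * T) / (T * T)

f0, T = 1.0e6, 10.0e-6                     # 1 MHz start, 10 us fragment
k_adj = adjust_rate(f0, 4.6e10, T)         # 12.3 cycles -> 12 cycles
cycles = f0 * T + k_adj * T * T / 2.0      # now an integer
```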
Copyright (c) 2023 O. O. Kostyria, A. A. Hryzo, O. M. Dodukh, B. A. Lisohorskyi, A. A. Lukianchykov
https://creativecommons.org/licenses/by-sa/4.0
2023-12-23 | Issue 4 (2023), p. 39 | DOI: 10.15588/1607-3274-2023-4-4
TEMPORAL EVENTS PROCESSING MODELS IN FINITE STATE MACHINES
http://ric.zntu.edu.ua/article/view/294297
<p>Context. The issue of synthesizing a finite state machine that processes temporal events using a hardware description language pattern is considered. The object of this study is external event processing in real-time systems.</p> <p>Objective. The goal of this work is to introduce methods for expressing external temporal events on finite state machine state diagrams and corresponding HDL patterns for processing such events in control systems.</p> <p>Method. The classification of external events in real-time systems is analyzed. A device class that changes its internal state depending on temporal external events is introduced. A method for expressing these events on the temporal state diagram is introduced. Possible model behavior scenarios based on the external event duration are analyzed. A Verilog HDL external event processing pattern is introduced. The efficiency of the proposed model is proved by developing, verifying, and synthesizing a power-saving module in Xilinx ISE. The results and testing showed the model's correctness.</p> <p>Results. External temporal event processing methods in real-time device models are proposed. The corresponding HDL pattern for implementing the proposed model is presented.</p> <p>Conclusions. The problem of automated synthesis of real-time systems with external temporal events has been solved. To solve this problem, a finite-state-machine-model-based device using the Verilog language was developed and tested. The scientific novelty lies in the introduction of a method for expressing temporal events on the state diagram of the finite state machine, as well as in an HDL pattern for implementing the proposed model on CPLD and FPGA.</p>M. A. Miroshnyk, S. I. Shmatkov, O. S. Shkil, A. M. Miroshnyk, K. Y. Pshenychnyi
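The kind of temporal event the paper models, a transition that fires only when an input has been held for a given number of clock cycles, can be mimicked in plain Python. The paper's artifact is a Verilog HDL pattern; this counter-based sketch only mirrors its behavior, with the hold length and state names as illustrative assumptions:

```python
class TemporalFSM:
    """Two-state machine: goes ACTIVE only after the input has been
    asserted for `hold` consecutive clock ticks; shorter pulses are
    ignored, as in a debounced power-saving trigger."""

    def __init__(self, hold=3):
        self.hold = hold
        self.count = 0
        self.state = "IDLE"

    def tick(self, signal):
        # The counter tracks how long the input has been held high.
        self.count = self.count + 1 if signal else 0
        if self.state == "IDLE" and self.count >= self.hold:
            self.state = "ACTIVE"
        return self.state

fsm = TemporalFSM(hold=3)
for s in [1, 1, 0, 1, 1, 1]:   # short pulse, then a long-enough one
    fsm.tick(s)

fsm2 = TemporalFSM(hold=3)
for s in [1, 1, 0, 1, 1, 0]:   # input never held for 3 ticks
    fsm2.tick(s)
```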
Copyright (c) 2023 M. A. Miroshnyk, S. I. Shmatkov, O. S. Shkil, A. M. Miroshnyk, K. Y. Pshenychnyi
https://creativecommons.org/licenses/by-sa/4.0
2023-12-23 | Issue 4 (2023), p. 49 | DOI: 10.15588/1607-3274-2023-4-5
THE METHOD OF HYDRODYNAMIC MODELING USING A CONVOLUTIONAL NEURAL NETWORK
http://ric.zntu.edu.ua/article/view/294368
<p>Context. Solving hydrodynamic problems is associated with high computational complexity and therefore requires considerable computing resources and time. The proposed approach makes it possible to significantly reduce the time for solving such problems by applying a combination of two improved modeling methods.</p> <p>Objective. The goal is to create a comprehensive hydrodynamic modeling method that requires significantly less time to determine the dynamics of the velocity field by using the modified lattice Boltzmann method and the pressure distribution by using a convolutional neural network.</p> <p>Method. A method of hydrodynamic modeling is proposed, which realizes the synergistic effect arising from the combination of the improved lattice Boltzmann method and a convolutional neural network with a specially adapted structure. The essence of the method consists of implementing a sequence of iterations, each of which simulates the process of changing parameters when moving to the next time layer. Each iteration includes a predictor step and a corrector step. At the predictor step, the lattice Boltzmann method works, which allows us to obtain the field of fluid velocities in the working area at the next time layer using the field of velocities at the previous layer. At the corrector step, we apply an improved convolutional neural network trained on a previously created data set. Using a neural network allows us to determine the pressure distribution on a new time layer with a predetermined accuracy. After adding the fluid compressibility correction on the new time layer, we get a refined value of the velocity field, which can be used as initial data for applying the lattice Boltzmann method at the next iteration. Calculations stop when the specified number of iterations is reached.</p> <p>Results. The operation of the proposed method was studied on the example of modeling fluid movement in a fragment of the human gastrointestinal tract. 
The simulation results showed that the time spent on the simulation process was reduced by 6–7 times while maintaining accuracy acceptable for practical tasks.</p> <p>Conclusions. The proposed hydrodynamic modeling method, combining a convolutional neural network with the lattice Boltzmann method, significantly reduces the time and computing resources required for the modeling process in domains with complex geometry. Further development of this method will make it possible to implement real-time hydrodynamic modeling in three-dimensional domains.</p>M. A. Novotarskyi, V. A. Kuzmych
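The predictor-corrector iteration pattern the method is built on (a cheap provisional step, then a refinement on the same time layer before advancing) has the same loop structure as classical Heun integration. The toy below illustrates only that structure, not the lattice Boltzmann or CNN components:

```python
import math

def heun(f, y0, h, steps):
    """Predictor-corrector time stepping: an explicit Euler predictor
    gives a provisional state, and the trapezoidal corrector refines
    it before moving on to the next time layer."""
    y = y0
    for _ in range(steps):
        predicted = y + h * f(y)                 # predictor step
        y = y + h / 2.0 * (f(y) + f(predicted))  # corrector step
    return y

# dy/dt = -y from y(0) = 1: the exact solution is exp(-t), and the
# corrector pass shrinks the error far below plain Euler's.
approx = heun(lambda y: -y, 1.0, 0.1, 10)
exact = math.exp(-1.0)
```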
Copyright (c) 2023 M. A. Novotarskyi, V. A. Kuzmych
https://creativecommons.org/licenses/by-sa/4.0
2023-12-24 | Issue 4 (2023), p. 58 | DOI: 10.15588/1607-3274-2023-4-6
TELETRAFFIC FORECASTING IN MEDIA SERVICE SYSTEMS
http://ric.zntu.edu.ua/article/view/294126
<p>Context. The development of information and communication technologies has led to an increase in the volume of information sent over the network. Media service platforms play an important role in the creation and processing of bitrate in the information network. Therefore, there is a need to develop a methodology for predicting bitrate in various media service platforms by creating an effective algorithm that minimizes the forecast error.</p> <p>Objective. The aim of the work is to synthesize in analytical form the state transition matrix of the Kalman filter for nonstationary self-similar processes when predicting the bitrate in telecommunication networks.</p> <p>Method. A methodology has been developed for predicting teletraffic in media service platforms, based on a modification of the Kalman filter for non-Gaussian processes. This methodology uses an original procedure for calculating statistics, which makes it possible to reduce the filtering and forecast error that arises due to the uncertainty of the analytical model of the process under study. The methodology does not require knowledge of the analytical model of the process, as well as strict restrictions on its stochastic characteristics.</p> <p>Results. A methodology for estimating and forecasting bitrate in telecommunication systems is proposed. This methodology was used to study teletraffic processes in the media service platforms Google Meet, Zoom, Microsoft Teams. The passage of real bitrate through the specified media service platforms was studied. A comparison of real teletraffic with predicted teletraffic was carried out. The influence of the order of the state transition matrix of the Kalman filter on the error of estimation and prediction has been studied. It has been established that even a low (second) order of the state transition matrix allows one to obtain satisfactory forecast results. 
It is shown that the use of the proposed methodology makes it possible to predict traffic with a relative error on the order of 3–4%.</p> <p>Conclusions. An original algorithm for estimating and forecasting the characteristics of media traffic has been developed. Recommendations for improving the technology for building media service platforms are formulated. It is shown that the bitrates generated by various media service platforms, when the proposed estimation and forecasting methodology is applied, are invariant with respect to the type of stochastic processes being processed.</p>O. Yu. Gusiev, V. I. Magro, O. I. Nikolska
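A minimal constant-velocity Kalman filter shows the predict/update cycle behind the forecasting methodology. This is illustrative only: the paper uses a modified filter with a synthesized higher-order state transition matrix for non-Gaussian self-similar traffic, which this sketch (with assumed noise parameters r and q) does not reproduce:

```python
def kalman_forecast(zs, r=0.01, q=1e-4):
    """Track a series with state [level, trend] (F = [[1,1],[0,1]],
    H = [1,0]) and return a one-step-ahead forecast after the last
    observation.  r: measurement noise, q: trend process noise."""
    x = [0.0, 0.0]                        # state estimate
    p = [[100.0, 0.0], [0.0, 100.0]]      # estimate covariance
    for z in zs:
        # predict: x <- F x, P <- F P F^T + Q
        x = [x[0] + x[1], x[1]]
        p = [[p[0][0] + p[0][1] + p[1][0] + p[1][1], p[0][1] + p[1][1]],
             [p[1][0] + p[1][1], p[1][1] + q]]
        # update with a scalar innovation (H = [1, 0])
        s = p[0][0] + r
        k = [p[0][0] / s, p[1][0] / s]
        resid = z - x[0]
        x = [x[0] + k[0] * resid, x[1] + k[1] * resid]
        p = [[(1 - k[0]) * p[0][0], (1 - k[0]) * p[0][1]],
             [p[1][0] - k[1] * p[0][0], p[1][1] - k[1] * p[0][1]]]
    return x[0] + x[1]                    # forecast for the next step

# Noiseless linear "bitrate" y = 2t + 1: the filter locks onto the
# trend and the one-step-ahead forecast approaches the true value 41.
forecast = kalman_forecast([2.0 * t + 1.0 for t in range(20)])
```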
Copyright (c) 2023 O. Yu. Gusiev, V. I. Magro, O. I. Nikolska
https://creativecommons.org/licenses/by-sa/4.0
2023-12-22 | Issue 4 (2023), p. 7 | DOI: 10.15588/1607-3274-2023-4-1
THE METHOD OF OPTIMIZING THE DISTRIBUTION OF RADIO SUPPRESSION MEANS AND DESTRUCTIVE SOFTWARE INFLUENCE ON COMPUTER NETWORKS
http://ric.zntu.edu.ua/article/view/294287
<p>Context. Generalized methodical approaches have been developed for constructing scenarios of complex radio suppression and electromagnetic influence on typical special telecommunication systems. However, during the development of possible cases for the complex application of radio suppression and destructive software influence, the problem of optimizing the resource of these means and distributing it among the targets of radio suppression and the objects of destructive computer influence arose, and it has not yet been fully resolved. In particular, in the literature known to the authors, there is no method for optimizing the resource distribution of radio and computer influence that could be used for the development and practical implementation of optimal scenarios of destructive influence on the computer networks of enemy military groups in military operations.</p> <p>Therefore, it is necessary to formulate the problem and develop a method of optimizing the distribution of the resource of radio suppression and destructive software influence for the development of possible scenarios of disrupting the enemy's information exchange in a typical telecommunication network.</p> <p>Objective. The purpose of the research is to develop a method for optimizing the distribution of the resource of radio suppression and destructive software influence for the development of scenarios of information exchange disruption in the enemy's telecommunications network.</p> <p>Method. To achieve the purpose of the research, methods of nonlinear optimization of heterogeneous resource distribution, queueing theory, and expert evaluation were comprehensively applied and developed in the field of information conflict modeling.</p> <p>To determine the coefficients of protection of objects from radio-electronic and destructive computer influence, expert evaluation methods are used, in particular, the method of preference frequencies of the decision-maker using the Thurstone method. This method requires only one expert (a decision-maker), minimal communication time with them, and minimal expert information (a full ordering of weighting factors), and it can be applied with a small number of evaluated weighting factors.</p> <p>To solve the problem of optimal distribution of a heterogeneous resource of means of destructive influence, ensuring that the value of a multiplicative objective function of arbitrary form is not less than a given one, the method of successive increments is applied.</p> <p>To determine the efficiency indicator of information exchange disruption, methods of queueing theory are applied, which allow formalizing special telecommunication systems as a set of queueing systems – subsystems of digital communication and computer networks.</p> <p>Results. The formulated problem and the introduced indicators made it possible to determine the minimum resource of means of destructive influence and its optimal distribution among the targets of radio suppression and the objects of destructive software influence in order to achieve the required level of disruption of the efficiency of information exchange in special telecommunication systems.</p> <p>Conclusions. A method for optimizing the distribution of the resource of radio suppression and destructive software influence has been developed for constructing possible scenarios of disruption of the enemy's information exchange in a typical telecommunications network. The proposed method was verified by comparing the theoretical results with the results of simulation modeling of scenarios of disruption of information exchange in the telecommunications network.</p>S. M. Sholokhov, P. M. Pavlenko, B. A. Nikolaienko, I. I. Samborsky, E. I. Samborsky
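The successive-increments idea, adding one resource unit at a time where it raises the multiplicative objective the most and stopping once the required level is reached, can be sketched as a greedy loop. The suppression-probability model p_i(r) = 1 - (1 - q_i)^r and the target names are illustrative assumptions, not the paper's objective function:

```python
def allocate(q, p_req):
    """Greedy successive increments: q maps target -> per-unit
    suppression probability; grow the allocation until the product of
    per-target suppression probabilities reaches p_req."""
    p = lambda qi, r: 1.0 - (1.0 - qi) ** r
    alloc = {t: 1 for t in q}              # every target gets one unit

    def objective():
        prod = 1.0
        for t, r in alloc.items():
            prod *= p(q[t], r)
        return prod

    while objective() < p_req:
        # add the unit that multiplies the objective by the most
        best = max(q, key=lambda t: p(q[t], alloc[t] + 1) / p(q[t], alloc[t]))
        alloc[best] += 1
    return alloc, objective()

# The weakly suppressed target receives the extra unit first.
alloc, level = allocate({"radio": 0.5, "network": 0.9}, 0.5)
```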
Copyright (c) 2023 S. M. Sholokhov, P. M. Pavlenko, B. A. Nikolaienko, I. I. Samborsky, E. I. Samborsky
https://creativecommons.org/licenses/by-sa/4.0
2023-12-22 | Issue 4 (2023), p. 16 | DOI: 10.15588/1607-3274-2023-4-2