http://ric.zntu.edu.ua/issue/feed Radio Electronics, Computer Science, Control 2024-06-27T13:27:46+03:00 Sergey A. Subbotin subbotin.csit@gmail.com Open Journal Systems

Description: The scientific journal "Radio Electronics, Computer Science, Control" is an international academic peer-reviewed publication. It publishes scientific articles (works that extensively cover a specific topic, idea or question and contain elements of their analysis) and reviews (works containing analysis and a reasoned assessment of an original or published book), which receive objective review by leading specialists who evaluate the content without regard to the race, sex, religion, ethnic origin, nationality, or political philosophy of the author(s).
Founder and Publisher: National University "Zaporizhzhia Polytechnic" (http://zntu.edu.ua/zaporozhye-national-technical-university). Country: Ukraine.
ISSN: 1607-3274 (print), 2313-688X (online).
Certificate of State Registration: КВ №24220-14060ПР dated 19.11.2019. The journal is registered by the Ministry of Justice of Ukraine.
By the Order of the Ministry of Education and Science of Ukraine of 17.03.2020 № 409 "On approval of the decision of the Certifying Collegium of the Ministry on the activities of the specialized scientific councils dated 6 March 2020", the journal is included in the list of scientific specialized periodicals of Ukraine in category "A" (highest level), in which the results of dissertations for Doctor of Science and Doctor of Philosophy may be published. By the Order of the Ministry of Education and Science of Ukraine of 21.12.2015 № 1328 "On approval of the decision of the Certifying Collegium of the Ministry on the activities of the specialized scientific councils dated 15 December 2015", the journal is included in the List of scientific specialized periodicals of Ukraine in which the results of dissertations for Doctor of Science and Doctor of Philosophy in Mathematics and Technical Sciences may be published.
The journal is included in the Polish List of scientific journals and peer-reviewed materials from international conferences with an assigned number of points (Annex to the announcement of the Minister of Science and Higher Education of Poland of July 31, 2019: Lp. 16981).
Year of Foundation: 1999. Frequency: 4 times per year (before 2015, 2 times per year).
Volume: up to 20 conventional printed sheets. Format: 60x84/8.
Languages: English, Ukrainian;
before 2022, also Russian.
Fields of Science: Physics and Mathematics, Technical Sciences.
Aim: to serve the academic community, principally by publishing topical articles resulting from original theoretical and applied research in various aspects of academic endeavor.
Focus: fresh formulations of problems and new methods of investigation; helping professionals, graduates, engineers, academics and researchers disseminate information on state-of-the-art techniques within the journal scope.
Scope: telecommunications and radio electronics; software engineering (including algorithm and programming theory); computer science (mathematical modeling and computer simulation, optimization and operations research, control in technical systems, machine-machine and man-machine interfacing, artificial intelligence, including data mining, pattern recognition, artificial neural and neuro-fuzzy networks, fuzzy logic, swarm intelligence and multiagent systems, hybrid systems); computer engineering (computer hardware, computer networks); information systems and technologies (data structures and bases, knowledge-based and expert systems, data and signal processing methods).
Journal sections:
- radio electronics and telecommunications;
- mathematical and computer modelling;
- neuroinformatics and intelligent systems;
- progressive information technologies;
- control in technical systems.
Abstracting and Indexing: The journal is indexed in the Web of Science (WoS) scientometric database (https://mjl.clarivate.com/search-results). The articles published in the journal are abstracted in leading international and national abstracting journals and scientometric databases, and are also placed in digital archives and libraries with free online access.
Editorial board: Editor-in-Chief - S. A. Subbotin, D. Sc., Professor; Deputy Editor-in-Chief - D. M. Piza, D. Sc., Professor. The members of the Editorial Board are listed at http://ric.zntu.edu.ua/about/editorialTeam.
Publishing and processing fee: Articles are published and peer-reviewed free of charge.
Authors Copyright: The journal allows the authors to hold the copyright without restrictions and to retain publishing rights without restrictions. The journal allows readers to read, download, copy, distribute, print, search, or link to the full texts of its articles.
The journal allows reuse and remixing of its content in accordance with a Creative Commons CC BY-SA license.
Authors Responsibility: By submitting an article to the journal, the authors assume full responsibility for compliance with the copyright of other individuals and organizations, the accuracy of citations, data and illustrations, and the nondisclosure of state and industrial secrets, and express their consent to transfer to the publisher, free of charge, the right to publish, translate into foreign languages, store and distribute the article materials in any form. Authors who hold scientific degrees, by submitting an article to the journal, thereby give their consent to act free of charge as reviewers of other authors' articles at the request of the journal editor within the established deadlines. Articles submitted to the journal must be original, new and interesting to the journal's readership, have reasonable motivation and aim, be previously unpublished, and not be under consideration for publication in other journals or conferences. Articles should not contain trivial or obvious results, draw unwarranted conclusions, or repeat conclusions of already published studies.
Readership: scientists, university faculties, postgraduate and graduate students, practical specialists.
Publicity and Accessing Method: Open Access online for full-text publications.

http://ric.zntu.edu.ua/article/view/305783 IMPROVED METHOD FOR ASSESSING THE RELIABILITY OF OBJECTS WITH A VARIABLE STRUCTURE 2024-06-07T10:19:47+03:00 O. S. Babii, L. M. Sakovych, O. O. Sliusarchuk, Y. M. Yelisov, Y. E. Kuryata

Context. The main idea is to take into account the possible influence of hidden defects on the reliability of multi-mode radioelectronic equipment with a variable structure, an influence that the known methods for calculating reliability indicators do not take into account. A quantitative assessment of the failure flow parameter of products is proposed, taking into account the impact of the accumulation of hidden defects on its value.

Objective. Improvement of the method for assessing the reliability of objects with a variable structure, taking into account the possibility of hidden defects when the objects are used for their intended purpose in particular operating modes.

Method. The methodology for assessing the values of reliability indicators of complex technical systems is used. The method being developed extends the algorithm for assessing the reliability indicators of multi-mode objects towards taking into account the possible appearance and accumulation of hidden defects in the subsets of the object's elements that are not used when it operates in particular modes.

Results. Functional dependencies of partial and complex reliability indicators of multi-mode objects on the accumulation of hidden defects, which manifest themselves only during maintenance or a change of operating mode, are obtained.
The solution is formalized in the form of an algorithm that uses the results of trial operation of products as initial data.

Conclusions. The scientific novelty lies in the following innovative solutions: 1) for the first time, it is proposed to take into account the presence of hidden defects when assessing the reliability of multi-mode objects with a variable structure; 2) for the first time, functional dependencies of the influence of hidden defects on the values of partial and complex reliability indicators were obtained and studied. The practical significance of the results is that, at the stage of trial operation of radioelectronic equipment with a variable structure, they allow the compliance of the calculated reliability indicators with the required values to be assessed objectively by taking into account the possibility of hidden defects.

2024-06-11 Copyright (c) 2024 O. S. Babii, L. M. Sakovych, O. O. Sliusarchuk, Y. M. Yelisov, Y. E. Kuryata

http://ric.zntu.edu.ua/article/view/305804 OPTIMAL SYNTHESIS OF STUB MICROWAVE FILTERS 2024-06-07T14:59:06+03:00 L. M. Karpukov, V. O. Voskoboynyk, Iu. V. Savchenko

Context. Microwave stub filters are widely used in radio engineering and telecommunication systems, as well as in technical information protection systems, due to the simplicity of their design, the possibility of realization in microstrip form and their manufacturability in mass production. For the synthesis of stub filters, traditional methods are currently used, based on transforming low-frequency prototype filters on LC elements into filtering structures on elements with distributed parameters. The transformations used are approximate and provide satisfactory results only for narrowband stub filters. Hence there is a need for direct synthesis methods for stub filters that exclude various approximations and provide amplitude-frequency characteristics of optimal shape for any bandwidth.

Objective. The purpose of the study is to develop a method for the direct synthesis of stub band-pass and low-pass filters with a Chebyshev amplitude-frequency response in the passband.

Method. The direct synthesis procedure includes the formulation of relations for the filter functions of stub structures, the selection of Chebyshev-type approximating functions for the filter functions, and the formation of a system of nonlinear equations for calculating the parameters of the filter elements.

Results. A method for the direct synthesis of stub bandpass and lowpass filters with a Chebyshev response is developed.

Conclusions. The scientific novelty of the work consists in the development of a new method for the direct synthesis of stub filters. The method, in contrast to approximate traditional methods for the synthesis of microwave filters, is exact, and the obtained solutions of the synthesis problems are optimal.

The experiments confirmed the performance of the proposed method and the optimality of the obtained solutions. Prospects for further research include adapting the method to the synthesis of filter structures with more complex resonators than stubs.

2024-06-11 Copyright (c) 2024 L. M. Karpukov, V. O. Voskoboynyk, Iu. V. Savchenko
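For reference, the equal-ripple (Chebyshev) passband characteristic that such a synthesis targets is usually written in the standard textbook form below; this is background notation, not a formula quoted from the article, and the frequency mapping x(ω) depends on the particular stub structure:

```latex
\left|S_{21}(\omega)\right|^{2}=\frac{1}{1+\varepsilon^{2}\,T_{N}^{2}\bigl(x(\omega)\bigr)},
\qquad T_{N}(x)=\cos\bigl(N\arccos x\bigr),\quad |x|\le 1,
```

where ε sets the passband ripple and N is the filter order; the direct synthesis described above amounts to choosing the stub parameters so that the filter function reproduces this characteristic exactly rather than approximately.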
http://ric.zntu.edu.ua/article/view/305806 MATHEMATICAL MODEL OF CURRENT TIME OF SIGNAL FROM SERIAL COMBINATION OF LINEAR-FREQUENCY AND QUADRATICALLY MODULATED FRAGMENTS 2024-06-07T15:19:36+03:00 O. O. Kostyria, A. A. Hryzo, H. V. Khudov, O. M. Dodukh, Y. S. Solomonenko

Context. One of the ways of solving the topical scientific and technical problem of reducing the maximum side-lobe level of the autocorrelation functions of radar signals is the use of nonlinear-frequency modulated signals. This rounds the signal spectrum, which is equivalent to weight (window) processing of the signal in the time domain and can be used in conjunction with it.

A number of studies of signals with nonlinear frequency modulation that include linearly frequency-modulated fragments indicate that distortions of their frequency-phase structure occur at the junction of the fragments. Depending on the type of mathematical model of the signal (current or shifted time), these distortions cause in the generated signal a jump of the instantaneous frequency and the instantaneous phase, or of the phase only. The paper shows that jumps occur at the moments when the value of the derivative of the instantaneous phase changes at the end of the linearly frequency-modulated fragment. The instantaneous signal frequency, which is the first derivative of the instantaneous phase, can be interpreted as the rotation speed of the signal vector on the complex plane. The second derivative of the instantaneous phase of the signal is understood as the frequency modulation rate.

Distortion of these components leads to the appearance of an additional component in the linear term of the instantaneous phase, starting with the second fragment. Disregarding these frequency-phase (or phase-only) distortions distorts the spectrum of the resulting signal and, as a rule, increases the maximum side-lobe level of its autocorrelation function. The features of using fragments whose frequency modulation laws have different numbers of derivatives of the instantaneous phase in composite signals have not been considered in known works, so this article is devoted to this issue.

Objective. The aim of the work is to develop a current-time mathematical model of two-fragment nonlinear-frequency modulated signals with a sequential combination of linearly and quadratically modulated fragments, which provides rounding of the signal spectrum in the high-frequency region, a reduction of the maximum side-lobe level of the autocorrelation function, and an increase in its rate of descent.

Method. Nonlinear-frequency modulated signals consisting of linearly and quadratically frequency-modulated fragments were studied. Using differential analysis, the degree of influence of the highest derivative of the instantaneous phase on the frequency-phase structure of the signal was determined. Its changes were evaluated using time and spectral correlation analysis methods. The evaluated parameters of the resulting signal are the phase and frequency jumps at the junction of the fragments, the shape of the spectrum, the maximum side-lobe level of the autocorrelation function and the rate of its descent.
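A minimal numerical sketch of the idea described above follows. It is illustrative only: the sampling rate, fragment durations and modulation parameters are assumed, and the phase of the second (quadratically modulated) fragment is simply written so that the instantaneous phase and frequency stay continuous at the junction, which is the kind of compensation the model formalizes.

```python
import numpy as np

fs = 1e6               # sampling rate, Hz (assumed)
T1, T2 = 50e-6, 50e-6  # fragment durations, s (assumed)
f0, B1 = 0.0, 200e3    # start frequency and sweep of the LFM fragment, Hz (assumed)
k1 = B1 / T1           # LFM rate, Hz/s
a2 = 2e12              # quadratic-FM "acceleration", Hz/s^2 (assumed)

t1 = np.arange(0, T1, 1 / fs)
phi1 = 2 * np.pi * (f0 * t1 + 0.5 * k1 * t1 ** 2)      # phase of the LFM fragment

f_junction = f0 + k1 * T1                               # frequency reached at the junction
t2 = np.arange(1 / fs, T2, 1 / fs)
# second fragment: starts from the junction phase and frequency; the cubic term
# corresponds to the quadratic frequency law f(t) = f_junction + a2 * t**2
phi2 = phi1[-1] + 2 * np.pi * (f_junction * t2 + a2 * t2 ** 3 / 3)

s = np.exp(1j * np.concatenate([phi1, phi2]))           # complex envelope of the composite signal
acf = np.abs(np.correlate(s, s, mode="full"))
acf_db = 20 * np.log10(acf / acf.max() + 1e-12)         # normalized ACF, dB, for side-lobe inspection
```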
Results. The article further develops the theory of synthesis of nonlinear-frequency modulated signals. The theoretical contribution is the determination of a new mechanism for the manifestation of frequency-phase distortion at the junction of fragments and its mathematical description. It was found that when switching from a linearly frequency-modulated fragment to a quadratically modulated one, the source of frequency-phase distortion of the resulting signal becomes the third derivative of the instantaneous phase, which, by analogy with the theory of motion of physical bodies, is the acceleration of the frequency modulation. The presence of this derivative leads to the appearance of new components in the expressions for the instantaneous frequency and phase of the signal. Compensation of these distortions provides a decrease in the maximum side-lobe level by 5 dB and an increase in its rate of descent by 8 dB/decade for the considered variant of the nonlinear-frequency modulated signal.

Conclusions. A new current-time mathematical model has been developed for calculating the values of the instantaneous phase of a nonlinear-frequency modulated signal whose first fragment has linear and whose second fragment has quadratic frequency modulation. The difference between this model and the known ones is the introduction of new components that compensate for frequency-phase distortions at the junction of the fragments and within the fragment with quadratic frequency modulation. The obtained oscillogram, spectrum and autocorrelation function of one of the synthesized two-fragment signals correspond to the theoretical form, which indicates the adequacy and reliability of the proposed mathematical model.

2024-06-11 Copyright (c) 2024 О. О. Костиря, А. А. Гризо, Г. В. Худов, О. М. Додух, Ю. С. Соломоненко

http://ric.zntu.edu.ua/article/view/305860 METHOD OF IMPERATIVE VARIABLES FOR SEARCH AUTOMATION OF TEXTUAL CONTENT IN UNSTRUCTURED DOCUMENTS 2024-06-08T15:18:34+03:00 V. O. Boiko

Context. Currently, there are many approaches used for textual search. Methods such as pattern matching and optical character recognition are widely used for retrieving the desired information from documents, with proven effectiveness. However, they work with a common or predictable document structure, while unstructured documents are neglected. The problem is automating textual search in documents with unstructured content. The object of the study is the development of a method and its implementation in an efficient model for searching content in unstructured textual information.

Objective. The goal of the work is the implementation of a rule-based textual search method and a model for seeking and retrieving information from documents with unstructured text content.

Method. To achieve the purpose of the research, a method of rule-based textual search in heterogeneous content was developed and applied in an appropriately designed model. It is based on natural language processing, which has improved in recent years as new generative artificial intelligence has become more widely available.

Results. The method has been implemented in a designed model that represents a pattern, or framework, of unstructured textual search for software engineers. An application programming interface has been implemented.

Conclusions. The conducted experiments have confirmed the operability of the proposed software and allow it to be recommended for use in practice for solving problems of textual search in unstructured documents.
Prospects for further research may include improving performance using multithreading or parallelization for large textual documents, along with optimization approaches to minimize the impact of the content-processing limitations of the OpenAI application programming interface. Furthermore, additional investigation might extend the area of imperative-variable usage in programming and software development.

2024-06-27 Copyright (c) 2024 В. О. Бойко

http://ric.zntu.edu.ua/article/view/305862 INFORMATION TECHNOLOGY FOR RECOGNIZING PROPAGANDA, FAKES AND DISINFORMATION IN TEXTUAL CONTENT BASED ON NLP AND MACHINE LEARNING METHODS 2024-06-08T16:03:21+03:00 V. Vysotska

Context. The research is aimed at applying artificial intelligence to the development and improvement of means of cyber warfare, in particular for combating disinformation, fakes and propaganda in the Internet space, and for identifying sources of disinformation and inauthentic behavior (bots) of coordinated groups. The implementation of the project will contribute to solving the important and currently relevant problem of information manipulation in the media, because in order to fight distortion and disinformation effectively, an effective tool for recognizing these phenomena in textual data is needed so that a further strategy to prevent the spread of such data can be developed.

Objective. The objective of the study is automatic recognition of political propaganda in textual data, built on the basis of supervised machine learning and implemented using natural language processing methods.

Method. Recognition of the presence of propaganda is performed at two levels: at the general level, that is, at the level of the document, and at the level of individual sentences. To implement the project, feature construction methods such as the TF-IDF statistical indicator, the "bag of words" vectorization model, part-of-speech tagging, the word2vec model for obtaining vector representations of words, and the recognition of trigger words (reinforcing words, absolute pronouns and "shiny" words) were used. Logistic regression was used as the main modeling algorithm.

Results. Machine learning models have been developed to recognize propaganda, fakes and disinformation at the document (article) level and at the sentence level. Both model scores are satisfactory, but the model for document-level propaganda recognition performed almost 1.2 times better (by 20%).

Conclusions. The created model shows excellent results in recognizing propaganda, fakes and disinformation in textual content based on NLP and machine learning methods. The analysis of the raw data showed that the propaganda recognition model at the document (article) level was able to correctly classify 6097 non-propaganda articles and 694 propaganda articles; 123 propaganda articles and 285 non-propaganda articles were misclassified. The obtained model score is 0.9433254618697041. The sentence-level propaganda recognition model successfully classified 205 propaganda articles and 1917 non-propaganda articles; its score is 0.7437784787942516 (731 articles were incorrectly classified).

2024-06-27 Copyright (c) 2024 В. А. Висоцька
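A hedged sketch of the document-level pipeline named above (TF-IDF features plus logistic regression, using scikit-learn); the toy corpus and hyperparameters are invented for illustration and are not the article's data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# toy placeholder corpus; in practice these would be the labeled articles
texts = [
    "our glorious leader always wins",
    "the committee met to discuss the budget",
    "the enemy is absolutely everywhere, trust no one",
    "rain is expected in the region tomorrow",
    "only traitors doubt our total victory",
    "the museum opens a new exhibition this week",
]
labels = [1, 0, 1, 0, 1, 0]  # 1 = propaganda, 0 = non-propaganda

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),  # TF-IDF over uni- and bigrams
    ("clf", LogisticRegression(max_iter=1000)),      # the main modeling algorithm named above
])
model.fit(texts, labels)
print(model.predict(["victory is certain, the leader is always right"]))
```

In the article the feature set is richer (bag-of-words, part-of-speech tags, word2vec vectors and trigger-word counts would be concatenated with the TF-IDF features), but the overall supervised training scheme is of this form.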
http://ric.zntu.edu.ua/article/view/305863 METHOD OF AUTOMATED CLASS CONVERSION FOR COMPOSITION IMPLEMENTATION 2024-06-08T16:15:09+03:00 O. B. Kungurtsev, V. R. Bondar, K. O. Gratilova, N. O. Novikova

Context. Using the composition relation is one of the most effective and commonly used ways to specialize classes in object-oriented programming.

Objective. Problems arise when "redundant" attributes are detected in an inner class, attributes that are not necessary for solving the tasks of the specialized class. To work with such attributes, the inner class has corresponding methods whose use not only does not solve the tasks of the specialized class, but can lead to errors in its operation. The purpose of this work is to remove "redundant" attributes from the inner class, as well as all methods of the class that use these attributes directly or indirectly (through other methods).

Method. A mathematical model of the inner class was developed, which makes it possible to identify "redundant" elements of the class. A method of inner-class transformation is proposed that, based on analysis of the class code, provides the developer with information to make a decision about "redundant" attributes and then, in automated mode, gradually removes and transforms the class elements.

Results. To validate the proposed solutions, a software product, Composition Converter, was developed. Experiments were carried out to compare class conversion in "manual" and automated modes. The results showed a multiple reduction of conversion time in the automated mode.

Conclusions. The proposed method of automated transformation of the inner class according to the tasks of the outer class when implementing composition significantly reduces the time and the number of errors when editing the code of the inner class. The method can be used for various object-oriented languages.

2024-06-27 Copyright (c) 2024 О. Б. Кунгурцев, В. Р. Бондар, К. О. Гратілова, Н. О. Новікова
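A small illustration of the removal step described above: once the developer marks attributes as "redundant", every method that uses them directly or indirectly (through calls to other methods) has to go. The dictionary-based class model below is an invented stand-in for the authors' mathematical model of the inner class:

```python
def methods_to_remove(uses_attrs, calls, redundant_attrs):
    """uses_attrs: method -> attributes it reads or writes;
    calls: method -> methods it calls;
    returns all methods that depend on redundant attributes, directly or transitively."""
    tainted = {m for m, attrs in uses_attrs.items() if attrs & redundant_attrs}
    changed = True
    while changed:                      # propagate through the call graph to a fixed point
        changed = False
        for method, callees in calls.items():
            if method not in tainted and callees & tainted:
                tainted.add(method)
                changed = True
    return tainted

# toy inner class: 'serialize' uses the redundant attribute 'cache'; 'export' calls 'serialize'
uses_attrs = {"get_name": {"name"}, "serialize": {"name", "cache"}, "export": set()}
calls = {"get_name": set(), "serialize": set(), "export": {"serialize"}}
print(methods_to_remove(uses_attrs, calls, redundant_attrs={"cache"}))  # {'serialize', 'export'}
```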
http://ric.zntu.edu.ua/article/view/305884 DEVELOPMENT OF A PLUG-IN FOR VISUALIZATION OF STRUCTURAL SCHEMES OF COMPUTERS BASED ON THE TEXTUAL DESCRIPTION OF ALGORITHMS OF HARMONIC TRANSFORMS 2024-06-08T23:08:30+03:00 I. Prots'ko, V. Teslyuk

Context. In many areas of science and technology, the numerical solution of problems is not enough for further development and implementation of the obtained results. Among existing information visualization approaches, the one chosen is that which effectively reveals unstructured actionable ideas and generalizes or simplifies the analysis of the received data. The results of visualization of generalized structural diagrams based on the textual description of an algorithm clearly reflect the interaction of its parts, which is important at the system-engineering stage of computer design.

Objective. The objective of the study is the analysis and software implementation of structure visualization, using the example of calculators of discrete harmonic transforms obtained as a result of the synthesis of an algorithm based on cyclic convolutions, with the possibility of extending the structure visualization to other computational algorithms.

Method. A generalized scheme of the synthesis of fast harmonic transform algorithms in the form of a set of cyclic convolution operations on the combined sequences of input data and the coefficients of the harmonic transform function, with their visualization in the form of a generalized structural diagram of the calculator.

Results. The result of the work is a software implementation of the visualization of generalized structural diagrams for the synthesized algorithms of cosine and Hartley transforms, which visually reflect the interaction of the main blocks of the calculator. The software implementation of the computer structure visualization is made in TypeScript using the Phaser 3 framework.

Conclusions. The work considers and analyzes the developed software implementation of visualization of the general structure of the calculator for fast algorithms of discrete harmonic transforms in the domain of real numbers, obtained as a result of the synthesis of the algorithm based on cyclic convolutions. The visualization of variants of structural schemes of calculators clearly reflects the interaction of their parts and allows one or another variant of the computing algorithm to be evaluated in the design process.

2024-06-27 Copyright (c) 2024 І. Процько, В. Теслюк

http://ric.zntu.edu.ua/article/view/305888 DETERMINING OBJECT-ORIENTED DESIGN COMPLEXITY DUE TO THE IDENTIFICATION OF CLASSES OF OPEN-SOURCE WEB APPLICATIONS CREATED USING PHP FRAMEWORKS 2024-06-08T23:30:14+03:00 A. S. Prykhodko, E. V. Malakhov

Context. The problem of determining the object-oriented design (OOD) complexity of open-source software, including Web apps created using PHP frameworks, is important because open-source software is growing in popularity and PHP frameworks make app development faster. The object of the study is the process of determining the OOD complexity of open-source Web apps created using PHP frameworks. The subject of the study is the mathematical models for determining the OOD complexity due to the identification of classes of such apps.

Objective. The goal of the work is to build a mathematical model for determining the OOD complexity due to the identification of classes of open-source Web apps created using PHP frameworks, based on the three-variate Box-Cox normalizing transformation, in order to increase confidence in determining the OOD complexity of these apps.

Method. The mathematical model for determining the OOD complexity due to the identification of classes of open-source Web apps created using PHP frameworks is constructed in the form of the prediction ellipsoid equation for the normalized metrics WMC, DIT, and NOC at the app level. The three-variate Box-Cox transformation is applied to normalize the above metrics. The maximum likelihood method is used to compute the parameter estimates of the three-variate Box-Cox transformation.

Results. A comparison of the constructed model based on the F distribution quantile with the prediction ellipsoid equation based on the Chi-Square distribution quantile has been performed.
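A hedged sketch of the prediction-ellipsoid construction described above. For simplicity it normalizes each metric with a separate univariate Box-Cox transform (the article uses a three-variate transform estimated by maximum likelihood) and uses a chi-square cut-off; the WMC/DIT/NOC values are invented:

```python
import numpy as np
from scipy import stats

metrics = np.array([            # rows: apps; columns: WMC, DIT, NOC (toy values)
    [120, 4, 15], [95, 3, 10], [210, 5, 22], [60, 2, 6],
    [150, 4, 18], [80, 3, 9], [175, 5, 20], [110, 3, 12],
], dtype=float)

# univariate Box-Cox normalization of each metric (data must be strictly positive)
normalized = np.column_stack([stats.boxcox(metrics[:, j])[0] for j in range(metrics.shape[1])])

mean = normalized.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normalized, rowvar=False))
# squared Mahalanobis distance of every app from the center of the ellipsoid
d2 = np.einsum("ij,jk,ik->i", normalized - mean, cov_inv, normalized - mean)

threshold = stats.chi2.ppf(0.95, df=metrics.shape[1])   # 0.95 prediction ellipsoid cut-off
print("apps outside the ellipsoid:", np.where(d2 > threshold)[0])
```

Apps whose normalized metric vector falls outside the ellipsoid are flagged as having anomalous OOD complexity; the article additionally compares this chi-square-based cut-off with an F-distribution-based one.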
Conclusions. The mathematical model in the form of the prediction ellipsoid equation for the normalized WMC, DIT, and NOC metrics at the app level, for determining the OOD complexity due to the identification of classes of open-source Web apps created using PHP frameworks, is built for the first time on the basis of the three-variate Box-Cox transformation. This model takes into account the correlation between the WMC, DIT, and NOC metrics at the app level. The prospects for further research may include the use of other data sets to confirm or revise the prediction ellipsoid equation for determining the OOD complexity due to the identification of classes of open-source Web apps created using PHP frameworks.

2024-06-27 Copyright (c) 2024 А. С. Приходько, Є. В. Малахов

http://ric.zntu.edu.ua/article/view/305809 CONVOLUTIONAL NEURAL NETWORK SCALING METHODS IN SEMANTIC SEGMENTATION 2024-06-07T17:03:42+03:00 I. O. Hmyria, N. S. Kravets

Context. Designing a new architecture is a difficult and time-consuming process that in some cases can be replaced by scaling an existing model. In this paper we examine convolutional neural network scaling methods, aiming at the development of a method that allows an original network solving a segmentation task to be scaled into a more accurate network.

Objective. The goal of the work is to develop a method of scaling a convolutional neural network that achieves or outperforms existing scaling methods, and to verify its effectiveness in solving a semantic segmentation task.

Method. The proposed asymmetric method combines the advantages of other methods and provides a network as accurate as the combined method, and even outperforms other methods. The method is developed to be applicable to convolutional neural networks that follow the encoder-decoder architecture designed to solve the semantic segmentation task. The method enhances the feature-extraction potential of the encoder part while preserving the decoder part of the architecture. Because of its asymmetric nature, the proposed method is more efficient, since it results in a smaller increase in the number of parameters.

Results. The proposed method was implemented on the U-net architecture applied to a semantic segmentation task. The evaluation of the method, as well as of the other methods, was performed on a semantic segmentation dataset. The asymmetric scaling method showed its efficiency, outperforming or matching the results of other scaling methods while having fewer parameters.

Conclusions. Scaling techniques can be beneficial in cases where some extra computational resources are available. The proposed method was evaluated on a semantic segmentation task, on which it showed its efficiency. Even though scaling methods improve the accuracy of the original network, they strongly increase the network's requirements, which the proposed asymmetric method is dedicated to decreasing. The prospects for further research may include the optimization process and investigation of the trade-off between accuracy gain and resource requirements, as well as conducting experiments that include several different architectures.

2024-06-18 Copyright (c) 2024 I. O. Hmyria, N. S. Kravets

http://ric.zntu.edu.ua/article/view/305850 FUZZY MODEL FOR INTELLECTUALIZING MEDICAL KNOWLEDGE 2024-06-08T13:03:54+03:00 M. M. Malyar, N. M. Malyar-Gazda, M. M. Sharkadi

Context. The research is devoted to the development of a flexible mathematical apparatus for the intellectualization of knowledge in the medical field. As a rule, human thinking is based on inexact, approximate data, the analysis of which allows clear decisions to be formulated. In cases where there is no exact mathematical model of an object, or the model is difficult to implement, it is advisable to use the apparatus of fuzzy logic.
The article is aimed at expanding the range of knowledge of researchers working in the field of medical diagnostics.

Objective. The aim of the study is to improve the quality of representation of the medical subject area by building type-2 fuzzy knowledge bases with interval membership functions.

Method. The article describes an approach to formalizing the knowledge of a medical specialist using second-order fuzzy sets, which makes it possible to take into account the uncertainty and vagueness inherent in medical data and to solve the problem of interpreting the obtained results.

Results. The developed approach is applied to a specific problem faced by an anaesthetist when admitting a patient to elective (planned) surgery.

Conclusions. Experimental studies have shown that the presented type-2 fuzzy model with interval membership functions adequately reflects qualitative input medical variables and takes into account both the knowledge of a specialist in medical practice and research medical and biological data. The obtained results hold substantial practical importance for medical practitioners, especially anaesthetists, as they lead to enhanced patient assessments, error reduction, and tailored recommendations. This research fosters the advancement of intelligent systems capable of positively influencing clinical practice and improving patient outcomes within the realm of medical diagnostics.

2024-06-27 Copyright (c) 2024 М. М. Маляр, Н. М. Маляр-Газда, М. М. Шаркаді

http://ric.zntu.edu.ua/article/view/305852 USING MODULAR NEURAL NETWORKS AND MACHINE LEARNING WITH REINFORCEMENT LEARNING TO SOLVE CLASSIFICATION PROBLEMS 2024-06-08T13:31:03+03:00 S. D. Leoshchenko, A. O. Oliinyk, S. A. Subbotin, T. O. Kolpakova

Context. The solution of the classification problem (including graphical data) based on the use of modular neural networks and modified reinforcement machine learning methods for the synthesis of neuromodels characterized by a high level of accuracy is considered. The object of the research is the process of synthesizing modular neural networks based on reinforcement machine learning methods.

Objective. The objective is to develop a method for synthesizing modular neural networks based on reinforcement machine learning methods, for constructing high-precision neuromodels for solving classification problems.

Method. A method for synthesizing modular neural networks based on a reinforcement machine learning approach is proposed. At the beginning, after initializing a system of modular neural networks built on the bottom-up principle, the input data are provided: a training set of data from the sample and a hyperparameter that selects the size of each module. The result of the method is a trained system of modular neural networks. The process starts with a single supergroup that contains all the categories of the data set. Then the network size is selected. The output matrix is softmax, as for a trained network. After that, the average softmax probability is used as a similarity indicator for grouping categories. If new child supergroups are formed, the module learns to classify between the new supergroups. The training cycle of the modular neural network modules is repeated until the training of the modules of all supergroups is completed. This method improves the accuracy of the resulting model.
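The supergroup-formation step described in the Method paragraph can be illustrated with a short sketch: per-category average softmax vectors act as the similarity indicator, and categories that the current module confuses above a threshold are placed in the same child supergroup. The grouping rule and threshold below are assumptions for illustration, not the exact procedure of the article:

```python
import numpy as np

def form_supergroups(softmax_outputs, labels, n_classes, threshold=0.15):
    """softmax_outputs: (n_samples, n_classes) softmax matrix of the trained module;
    labels: (n_samples,) true category indices; returns a list of supergroups (lists of classes)."""
    avg = np.zeros((n_classes, n_classes))
    for c in range(n_classes):
        avg[c] = softmax_outputs[labels == c].mean(axis=0)   # average softmax of category c

    groups, assigned = [], set()
    for c in range(n_classes):
        if c in assigned:
            continue
        # categories that receive a noticeable share of category c's probability mass
        members = sorted({c} | {d for d in range(n_classes)
                                if d not in assigned and avg[c, d] >= threshold})
        assigned.update(members)
        groups.append(members)
    return groups
```

Each resulting supergroup then gets its own module, trained to separate only its members, and the cycle repeats until all supergroups are trained, as described above.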
Results. The developed method is implemented and investigated on the example of neuromodel synthesis based on a modular neural network for image classification, which can later be used as a model for technical diagnostics. Using the developed method significantly reduces the resource intensity of tuning hyperparameters.

Conclusions. The conducted experiments confirmed the operability of the proposed method of neuromodel synthesis for image classification and allow us to recommend it for use in practice in the synthesis of modular neural networks as a basis for classification models for further automation of technical diagnostics and image recognition tasks using big data. Prospects for further research may lie in using the parallel capacities of GPU-based computing systems to organize modular neural networks directly on them.

2024-06-27 Copyright (c) 2024 S. D. Leoshchenko, A. O. Oliinyk, S. A. Subbotin, T. O. Kolpakova

http://ric.zntu.edu.ua/article/view/305853 FACE RECOGNITION USING THE TEN-VARIATE PREDICTION ELLIPSOID FOR NORMALIZED DATA BASED ON THE BOX-COX TRANSFORMATION 2024-06-08T14:18:18+03:00 S. B. Prykhodko, A. S. Trukhov

Context. Face recognition, which is one of the tasks of pattern recognition, plays an important role in the modern information world and is widely used in various fields, including security systems, access control, etc. This makes it an important tool for security and personalization. However, a low probability of identifying a person by face can have negative consequences, so there is a need for the development and improvement of face recognition methods. The object of the research is the face recognition process. The subject of the research is a mathematical model for face recognition.

One of the frequently used methods of pattern recognition is the construction of decision rules based on a prediction ellipsoid. An important limitation of its application is the need to fulfill the assumption of a multivariate normal distribution of the data. However, in many cases the multivariate distribution of real data may deviate from normality, which leads to a decrease in the probability of recognition. Therefore, there is a need to improve mathematical models so that they take this deviation into account.

The objective of the work is to increase the probability of face recognition by constructing a ten-variate prediction ellipsoid for data normalized by the Box-Cox transformation.

Method. Application of the Mardia test to check the deviation of the multivariate distribution of the data from normality. Building decision rules for face recognition using a ten-variate prediction ellipsoid for data normalized on the basis of the Box-Cox transformation. Obtaining estimates of the parameters of the univariate and ten-variate Box-Cox transformations using the maximum likelihood method.

Results. A comparison of the results of face recognition using decision rules built with a ten-variate prediction ellipsoid for data normalized by various transformations was carried out. In comparison with univariate normalizing transformations (the decimal logarithm and the univariate Box-Cox transformation) and with the absence of normalization, the use of the ten-variate Box-Cox transformation leads to an increase in the probability of face recognition.
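In its standard form (shown for orientation only; the exact parameterization used in the article may differ), a decision rule of this kind accepts a face whose normalized ten-dimensional feature vector falls inside the prediction ellipsoid:

```latex
\left(\mathbf{z}-\bar{\mathbf{z}}\right)^{\mathsf T}\mathbf{S}^{-1}\left(\mathbf{z}-\bar{\mathbf{z}}\right)\le\chi^{2}_{m,\alpha},
\qquad z_{j}=\frac{x_{j}^{\lambda_{j}}-1}{\lambda_{j}},\quad \lambda_{j}\ne 0,\ j=1,\dots,m,
```

where x_j are the m = 10 geometric features of the face, λ_j are the Box-Cox parameters estimated by maximum likelihood, z̄ and S are the mean vector and covariance matrix of the normalized training data, and the quantile on the right-hand side (or an F-based analogue) sets the confidence level.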
Conclusions. For face recognition, a mathematical model in the form of a ten-variate prediction ellipsoid for data normalized using the multivariate Box-Cox transformation has been improved, which increases the probability of recognition in comparison with corresponding models built either without normalization or with univariate normalizing transformations. It was found that a mathematical model built for data normalized using a multivariate Box-Cox transformation has a higher probability of recognition, since univariate transformations neglect the correlation between the geometric features of the face.

2024-06-27 Copyright (c) 2024 С. Б. Приходько, А. С. Трухов

http://ric.zntu.edu.ua/article/view/305855 BUILDING A SCALABLE DATASET FOR FRIDAY SERMONS OF AUDIO AND TEXT (SAT) 2024-06-08T14:35:01+03:00 A. A. Samah, H. A. Dimah, M. A. Hassanin

Context. Today, collecting and creating datasets in various sectors has become increasingly prevalent. Despite this widespread data production, a gap still exists in specialized domains, particularly in the Islamic Friday Sermons (IFS) domain. It is rich with theological, cultural, and linguistic material relevant to Arab and Muslim countries, not just religious discourse.

Objective. The goal of this research is to bridge this gap by introducing a comprehensive Sermon Audio and Text (SAT) dataset with its metadata. It seeks to provide an extensive resource for studies of religion, linguistics, and sociology. Moreover, it aims to support advancements in Artificial Intelligence (AI), such as Natural Language Processing and Speech Recognition technologies.

Method. The development of the SAT dataset was conducted through four distinct phases: planning, creation and processing, measurement, and deployment. The SAT dataset contains a collection of 21,253 audio and corresponding transcript files that were successfully created. Advanced audio processing techniques were used to enhance speech recognition and provide a dataset suitable for wide-ranging use.

Results. The fine-tuned SAT dataset achieved a 5.13% Word Error Rate (WER), indicating a significant improvement in accuracy compared to the baseline Microsoft Azure Speech model. This achievement indicates the dataset's quality and the effectiveness of the employed processing techniques. In light of this, a novel Closest Matching Phrase (CMP) algorithm was developed to enhance the confidence of equivalent speech-to-text by adjusting low-ratio phrases.

Conclusions. This research contributes significant impact and insight to different fields of study, such as religion, linguistics, and sociology, providing invaluable insights and resources. In addition, it demonstrates its potential in Artificial Intelligence (AI) and supports its applications. In future research, we will focus on enriching and expanding this dataset by adding a sign-language video corpus using advanced alignment techniques. It will support ongoing Machine Translation (MT) developments for a broader understanding of Islamic Friday Sermons across different languages and cultures.

2024-06-27 Copyright (c) 2024 A. A. Samah, H. A. Dimah, M. A. Hassanin
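Two of the quantities mentioned above are easy to make concrete. The sketch below shows the standard word error rate computation (word-level edit distance) and a naive closest-matching-phrase lookup built on difflib; the latter is only a stand-in for the article's CMP algorithm, whose exact rules are not reproduced here:

```python
import difflib

def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance divided by the reference length."""
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(r)][len(h)] / max(len(r), 1)

def closest_phrase(phrase: str, candidates: list[str], cutoff: float = 0.6) -> str:
    """Replace a low-confidence phrase with the closest candidate above the similarity cutoff."""
    matches = difflib.get_close_matches(phrase, candidates, n=1, cutoff=cutoff)
    return matches[0] if matches else phrase

print(wer("in the name of god", "in name of good"))   # 0.4
```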
http://ric.zntu.edu.ua/article/view/305857 DEVELOPMENT OF TECHNIQUE FOR DETERMINING THE MEMBERSHIP FUNCTION VALUES ON THE BASIS OF GROUP EXPERT ASSESSMENT IN FUZZY DECISION TREE METHOD 2024-06-08T14:56:29+03:00 A. V. Shved

Context. Recently, fuzzy decision trees have become widely used in solving the classification problem. In the absence of objective information for constructing the membership function that shows the degrees of belongingness of elements to tree nodes, the only way to obtain information is to involve experts. In the case of group decision making, the task of aggregating the experts' preferences in order to synthesize a group decision arises. The object of the study is group expert preferences regarding the degree of belonging (membership function) of an element to a given class or attribute, which require structuring and aggregation in the process of construction and analysis of a fuzzy decision tree.

Objective. The purpose of the article is to develop a methodology for determining the membership degree of elements to a given class (attribute) based on group expert assessment in the process of construction and analysis of fuzzy decision trees.

Method. The research methodology is based on the combined application of the mathematical apparatus of the theory of plausible and paradoxical reasoning and methods of fuzzy logic to solve the problem of aggregating fuzzy judgments on the values of classification attributes in the process of construction and analysis of a fuzzy decision tree. The proposed approach uses the mechanism of combination of expert evidence (judgments), formed within the framework of the Dezert-Smarandache hybrid model, based on the PCR5 proportional conflict redistribution rule to construct a group decision.

Results. The issues of structuring fuzzy expert judgments are considered, and a method for the synthesis of group expert judgments regarding the values of the membership degree of elements to classification attributes in the process of construction and analysis of fuzzy decision trees has been proposed.

Conclusions. The models and methods of structuring and synthesis of group decisions based on fuzzy expert information were further developed. In contrast to the existing expert methods for constructing the membership function in the context of group decision making, the proposed approach makes it possible to synthesize a group decision taking into account the varying degree of conflict mass in the process of combining the original expert evidence. This approach allows both agreed and contradictory (conflicting) expert judgments to be aggregated correctly.

2024-06-27 Copyright (c) 2024 A. V. Shved
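The combination step named above (PCR5) can be sketched for the simplest case of two experts whose basic belief masses are assigned only to singleton hypotheses; this is a simplified illustration of the proportional conflict redistribution rule, not the full Dezert-Smarandache hybrid-model machinery used in the article:

```python
def pcr5_two_sources(m1, m2):
    """PCR5 combination of two mass functions over singleton focal elements (dicts: hypothesis -> mass)."""
    hyps = set(m1) | set(m2)
    combined = {}
    for x in hyps:
        mass = m1.get(x, 0.0) * m2.get(x, 0.0)      # agreeing (conjunctive) part
        for y in hyps - {x}:                        # conflicting pairs involving x
            if m1.get(x, 0.0) + m2.get(y, 0.0) > 0:
                mass += m1.get(x, 0.0) ** 2 * m2.get(y, 0.0) / (m1.get(x, 0.0) + m2.get(y, 0.0))
            if m2.get(x, 0.0) + m1.get(y, 0.0) > 0:
                mass += m2.get(x, 0.0) ** 2 * m1.get(y, 0.0) / (m2.get(x, 0.0) + m1.get(y, 0.0))
        combined[x] = mass
    return combined

# two experts' masses for membership degrees "low", "medium", "high" of an attribute (toy numbers)
expert1 = {"low": 0.1, "medium": 0.6, "high": 0.3}
expert2 = {"low": 0.2, "medium": 0.5, "high": 0.3}
print(pcr5_two_sources(expert1, expert2))           # masses still sum to 1 after redistribution
```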
http://ric.zntu.edu.ua/article/view/305807 ANALYSIS OF THE RESULTS OF SIMULATION MODELING OF THE INFORMATION SECURITY SYSTEM AGAINST UNAUTHORIZED ACCESS IN SERVICE NETWORKS 2024-06-07T15:56:23+03:00 B. G. Ismailov

Context. An analysis of service networks shows that insufficient information security in service networks is the cause of huge losses incurred by corporations. Despite the appearance of a number of works and materials on standardization, there is currently no unified system for assessing information security in this field. Existing methods, as well as the accumulated experience in this area, do not completely overcome these difficulties. This circumstance confirms that the problem has not yet been sufficiently studied and therefore remains relevant. The presented work is one of the steps towards creating a unified system for assessing information security in service networks.

Objective. Development of an algorithm and a simulation model, and analysis of the simulation results, to determine the key characteristics of the Information Security System, providing the capability of completely closing, through the security system, all potential threat channels by ensuring control over the passage of all unauthorized access requests through the defense mechanisms.

Method. To solve the problem, a simulation method was applied using the principles of queuing system modeling. This method makes it possible to obtain the main characteristics of the Information Security System against unauthorized access with a limited amount of buffer memory.

Results. Algorithms, models, and a methodology have been developed for designing an Information Security System against unauthorized access, considered as a single-phase multi-channel queuing system with a limited volume of buffer memory. The model results were obtained in the General Purpose Simulation System World modelling system, and comparative assessments of the main characteristics of the Information Security System were carried out for various distribution laws of the input parameters; in this case, unauthorized access requests form the simplest (Poisson) flows, and the service time obeys exponential, constant, and Erlang distribution laws.

Conclusions. The conducted experiments based on the algorithm and model confirmed the expected results when analyzing the characteristics of the Information Security System against unauthorized access as a single-phase multi-channel queuing system with a limited waiting time for requests in the queue. These results can be used for the practical construction of new, or the modification of existing, Information Security Systems in service networks of objects of various purposes. This work is one of the approaches to generalizing the problems under consideration for systems with a limited volume of buffer memory. Prospects for further research include research and development of the principles of hardware and software implementation of Information Security Systems in service networks.

2024-06-12 Copyright (c) 2024 B. G. Ismailov
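As a rough analytic counterpart to the simulation described above, the stationary distribution of a single-phase multi-channel queue with a limited buffer (M/M/c/K in Kendall notation, i.e. Poisson arrivals and exponential service) can be computed directly; the arrival rate, service rate, number of channels and buffer size below are invented for illustration:

```python
from math import factorial

def mmck(lam, mu, c, K):
    """Stationary probabilities of an M/M/c/K queue; returns loss probability and mean number in system."""
    a = lam / mu                     # offered load, Erlangs
    rho = a / c
    p = [a ** k / factorial(k) for k in range(c + 1)]          # states 0..c (all requests in service)
    p += [p[c] * rho ** (k - c) for k in range(c + 1, K + 1)]  # states c+1..K (requests waiting)
    total = sum(p)
    p = [x / total for x in p]
    return {"loss_probability": p[K],
            "mean_in_system": sum(k * pk for k, pk in enumerate(p))}

print(mmck(lam=8.0, mu=1.0, c=10, K=15))   # e.g. 10 defense mechanisms, buffer for 5 waiting requests
```

The simulation in the article goes beyond this formula by also allowing constant and Erlang service-time laws and a limited waiting time, which is exactly where GPSS World modelling is useful.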
http://ric.zntu.edu.ua/article/view/305808 ANALYSIS OF DATA UNCERTAINTIES IN MODELING AND FORECASTING OF ACTUARIAL PROCESSES 2024-06-07T16:46:08+03:00 R. S. Panibratov

Context. Analysis of data uncertainties in the modeling and forecasting of actuarial processes is a very important issue because it allows actuaries to construct mathematical models efficiently and to minimize insurance risks in different situations.

Objective. The goal of this research is to develop an approach that allows future insurance payments to be predicted with prior minimization of possible statistical data uncertainty.

Method. The proposed method implements algorithms for estimating the parameters of generalized linear models with the preliminary application of the optimal Kalman filter to the data. The results demonstrated better forecasts and more adequate model structures. This approach was successfully applied to a simulated insurance data set. For generating the insurance dataset, the following client features were used: age; sex; body mass index (generated from a normal distribution); number of children (between 0 and 5); smoker status; region (north, east, south, west, center); charges. To create the last feature, a normal distribution with known variance and a logarithmic link function, an exponential distribution with the identity link function, and a Pareto distribution with a known scale parameter and a negative linear function were used.

Results. The proposed approach was implemented in the form of an information processing system for solving the problem of predicting insurance payments based on insurance data, taking the noise in the data into account.

Conclusions. The conducted experiments confirmed that the proposed approach allows more adequate model construction and accurate forecasting of insurance payments, which is an important point in the analysis of actuarial risks. The prospects for further research may include the use of the proposed approach in other fields of insurance where actuarial risk is present. A specialized intelligent decision support system should be designed and implemented to solve the problem using actual real-world insurance data in online mode, as well as modern information technologies and intelligent data analysis.

2024-06-14 Copyright (c) 2024 Р. С. Панібратов

http://ric.zntu.edu.ua/article/view/305891 FORMALIZATION OF THE MASTER PRODUCTION SCHEDULE FORMATION TASK IN THE MRP II PLANNING SYSTEM 2024-06-08T23:51:39+03:00 V. P. Novinskyi, V. D. Popenko

Context. The task of forming the Master Production Schedule in the process of production management based on the MRP II standard is considered. The object of the study is the algorithm for forming this plan for further planning of the supply of materials for production and the organization of production itself.

Objective. Improvement of the algorithm of Master Production Schedule formation so as to avoid unnecessary stages of the algorithm.

Method. An improvement of the algorithm of Master Production Schedule formation is proposed. It consists in simultaneously taking into account the requirements for timely delivery of products to customers, the limitations on the capacities of the company's work centers, and the limitations on the duration of procurement cycles in the process of supplying materials. The MRP II standard envisages first planning the terms and quantities of product releases, and only at the next step checking the formed plan for admissibility with respect to the required operating time of the equipment and the availability of the required quantity of materials. If the calculated plan violates the limitations, it is necessary either to plan and implement measures to overcome them, i.e. organize additional shifts for work centers, use additional capacities, or speed up the delivery of some materials, or to reduce the sales plan. All these measures are associated with additional costs. In the proposed version of the planning process, this should be done only if the algorithm does not find an acceptable solution. The task of forming the Master Production Schedule, which is central to the MRP standard, is formulated by the authors as a linear programming task, owing to the linear nature of the specified restrictions on production capacities and materials. In particular, in the case of sufficiently severe restrictions on work-center capacity, the plan for replenishing product stock from production is shifted to earlier planning intervals and only then runs up against the restrictions. Several strategies are proposed for planning the replenishment of product stock from production.
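A toy version of that linear-programming formulation is sketched below with scipy; the demand, capacity and opening stock are invented, the objective simply discourages producing earlier than necessary, and constraints on procurement cycles are omitted:

```python
import numpy as np
from scipy.optimize import linprog

demand   = np.array([40, 60, 80, 30])   # units per planning interval (toy data)
capacity = np.array([70, 70, 70, 70])   # work-center capacity per interval (toy data)
stock0   = 10                           # opening stock
T = len(demand)

# objective: penalize early production (each unit made in interval t is carried T - t intervals)
c = np.array([T - t for t in range(T)], dtype=float)

# timely delivery: cumulative production + opening stock >= cumulative demand
A_ub = -np.tril(np.ones((T, T)))
b_ub = -(np.cumsum(demand) - stock0)

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, cap) for cap in capacity], method="highs")
print(res.x if res.success else "no admissible master schedule under the given capacities")
# with these numbers the release needed in interval 3 exceeds capacity, so part of it
# is shifted to interval 2 - the behavior described in the Method paragraph above
```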
Results. The developed algorithms are implemented in the form of Microsoft Excel templates and are available for use in order to deepen understanding of the MRP II standard. They are also used in the educational process.

Conclusions. The authors' approbation of the solution confirmed its workability, as well as the expediency of implementing the developed modification of the MRP II planning process in the software of leading ERP-class system suppliers. Prospects for further research may consist in a comparative analysis of the proposed options for placing product-stock replenishment from production, through economic evaluation of these options, as well as through simulation modeling.

2024-06-27 Copyright (c) 2024 В. П. Новінський, В. Д. Попенко

http://ric.zntu.edu.ua/article/view/305893 DEVELOPMENT OF AUTOMATED CONTROL SYSTEM FOR CONTINUOUS CASTING 2024-06-09T00:08:34+03:00 S. V. Sotnik

Context. Today, automated continuous casting control systems are developing rapidly, as the process of manufacturing billets (products) of the same size from metal in a casting mold in mass production has long been outdated and the "continuous casting stage" is coming. This process is suitable for non-ferrous metals and steel. However, each development raises the task of improving the quality of the resulting billet, which depends directly on optimizing the efficiency and reliability of the automated systems themselves. Optimization is a key stage in the development process, as it is aimed at ensuring the accuracy and stability of the casting process; it includes the development of a parametric model and precise algorithms that ensure the optimal temperature, metal pouring rate, oscillation frequency, oscillation amplitude, metal level in the crystallizer, and position of the industrial ladle stopper for each casting stage. This problem has not yet been fully solved in the literature known to the authors, so it is necessary to formulate the problem and develop an algorithm of system operation for a specific safety casting unit.

Objective. The aim of the study is to develop an automated control system to ensure the accuracy and stability of the casting process.

Method. The developed control system for the continuous casting plant is based on the proposed parametric model, which is formalized on the basis of set theory. The model takes into account the key parameters of the particular casting process: metal pouring rate, oscillation frequency, oscillation amplitude, metal level in the crystallizer, and position of the industrial ladle stopper.

Results. The problem was formulated and the key parameters taken into account in the system's algorithm were determined, which made it possible to develop a control system for the continuous casting plant that solves the problem of improving the quality of the resulting billet.
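A minimal sketch of what such a parametric model can look like in code is given below; the parameter set follows the list above, but the admissible ranges and units are invented placeholders, not values from the article:

```python
from dataclasses import dataclass

@dataclass
class CastingParameters:
    pouring_rate: float       # m/min
    oscillation_freq: float   # Hz
    oscillation_amp: float    # mm
    metal_level: float        # mm, level in the crystallizer
    stopper_position: float   # %, opening of the industrial ladle stopper

LIMITS = {                    # illustrative admissible ranges only
    "pouring_rate": (0.4, 1.6),
    "oscillation_freq": (1.0, 5.0),
    "oscillation_amp": (2.0, 12.0),
    "metal_level": (80.0, 120.0),
    "stopper_position": (0.0, 100.0),
}

def violations(p: CastingParameters) -> list[str]:
    """Controlled parameters that are outside their admissible ranges (to be corrected by the controller)."""
    return [f"{name}={getattr(p, name)} outside [{lo}, {hi}]"
            for name, (lo, hi) in LIMITS.items()
            if not lo <= getattr(p, name) <= hi]

print(violations(CastingParameters(1.2, 3.0, 15.0, 100.0, 55.0)))   # flags the oscillation amplitude
```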
Conclusions. A parametric model and a generalized black-box model representation were created, which are needed both for new continuous casting projects and for existing units in order to optimize the metal casting process. To set up the continuous casting system, the controlled parameters were determined: pouring speed, oscillation frequency and amplitude, metal level in the crystallizer, and position of the industrial ladle stopper. The algorithm of the control system for the continuous casting plant was developed, on the basis of which a system was developed that allows monitoring, regulation and control of the process of obtaining steel or non-ferrous billets. The developed user interface of the control system is simple and easy to use.

2024-06-27 Copyright (c) 2024 С. В. Сотник