TY - JOUR
AU - Khabarlak, K. S.
AU - Koriashkina, L. S.
PY - 2019/05/28
Y2 - 2024/03/29
TI - SCOPING ADVERSARIAL ATTACK FOR IMPROVING ITS QUALITY
JF - Radio Electronics, Computer Science, Control
JA - RIC
VL - 0
IS - 2
SE - Progressive information technologies
DO - 10.15588/1607-3274-2019-2-12
UR - http://ric.zntu.edu.ua/article/view/178284
SP - 108
EP - 118
AB - Context. The subject of this paper is adversarial attacks, their types, and the reasons for their emergence. A simplified, fast, and effective logistic regression attack algorithm is presented. The work's relevance stems from the fact that a critical vulnerability of neural networks, the so-called adversarial examples, is yet to be deeply explored. By exploiting this vulnerability, it is possible to force a deliberate result from a network, breaking the defenses of neural-network-based security systems.
Objective. The purpose of the work is to develop algorithms for different kinds of attacks on a trained neural network based on a preliminary analysis of the network's weights, to estimate the quality loss of attacked images, and to compare the developed algorithms with other adversarial attacks of a similar type.
Method. A fast and fairly efficient attack algorithm that can target either the whole image or certain regions of it is presented. Using the SSIM structural similarity metric, the algorithm and its modifications were analyzed and compared with previous gradient-based attack methods.
Results. Simplified targeted and non-targeted attack algorithms have been built for a single-layer neural network trained to perform handwritten digit classification on the MNIST dataset. A visual and semantic interpretation of the weights as pixel "importance" for recognizing an image as one class or another is given. Based on the SSIM structural similarity index, an image quality loss analysis has been performed for images attacked by the proposed algorithms over the whole test dataset. The analysis has revealed the classes most vulnerable to an adversarial attack, as well as images whose class can be changed by adding noise imperceptible to a human. Adversarial examples built with the developed algorithm have been transferred to a 5-layer network of unknown architecture. In many cases, images that were difficult to attack on the original network showed higher transfer rates than those that needed only minor changes.
Conclusions. Adversarial examples built upon the attack scoping idea and the input data analysis methodology can easily be generalized to other image recognition problems, which makes the approach applicable to a wide range of practical tasks. Thus, another way of analyzing the safety of neural networks (logistic regression included) against input data attacks is presented.
ER -