Flying Adversarial Patches: Manipulating the Behavior of Deep Learning-based Autonomous Multirotors

Hanfeld, P.; Höhne, M. M.-C.; Bussmann, M.; Hönig, W.

Abstract

Autonomous flying robots, e.g., multirotors, often rely on a neural network that makes predictions based on a camera image. These deep learning (DL) models can produce surprising results if applied to input images outside the training domain. Adversarial attacks exploit this weakness, for example, by computing small images, so-called adversarial patches, that can be placed in the environment to manipulate the neural network's prediction. We introduce flying adversarial patches, where an image is mounted on another flying robot and can therefore be placed anywhere in the field of view of a victim multirotor. For an effective attack, we compare three methods that simultaneously optimize the adversarial patch and its position in the input image. We perform an empirical validation on a publicly available DL model and dataset for autonomous multirotors. Ultimately, our attacking multirotor would be able to gain full control over the motions of the victim multirotor.
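
The attack hinges on the joint optimization described above: the patch pixels and the patch's position in the victim's camera image are updated together by gradient descent. The following is a minimal PyTorch sketch of one way such an optimization can be set up; it is not the authors' implementation. The names victim, images, target_pose, and place_patch are hypothetical placeholders, and the differentiable placement via an affine grid is one common technique, chosen here for illustration.

    # A minimal sketch, assuming a differentiable victim network that maps camera
    # images to pose predictions. All names below are placeholders, not the paper's code.
    import torch
    import torch.nn.functional as F

    def place_patch(images, patch, tx, ty, scale):
        """Differentiably paste `patch` into `images` at normalized position
        (tx, ty) with size `scale`, so gradients reach both pixels and placement."""
        n, _, h, w = images.shape
        zero = torch.zeros((), device=images.device)
        # Inverse affine map from image coordinates to patch coordinates.
        theta = torch.stack([
            torch.stack([1.0 / scale, zero, -tx / scale]),
            torch.stack([zero, 1.0 / scale, -ty / scale]),
        ]).unsqueeze(0).expand(n, -1, -1)
        grid = F.affine_grid(theta, (n, patch.shape[1], h, w), align_corners=False)
        # Warp the patch and an all-ones mask into image coordinates; outside the
        # patch, grid_sample pads with zeros, so the original image is preserved.
        warped = F.grid_sample(patch.expand(n, -1, -1, -1), grid, align_corners=False)
        mask = F.grid_sample(torch.ones_like(patch).expand(n, -1, -1, -1), grid,
                             align_corners=False)
        return (1.0 - mask) * images + mask * warped

    # Stand-in victim network and data (assumptions, purely for a runnable example).
    victim = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.LazyLinear(4))
    images = torch.rand(8, 3, 96, 160)                        # batch of camera frames
    target_pose = torch.tensor([[1.0, 0.0, 0.0, 0.0]]).expand(8, -1)

    patch = torch.rand(1, 3, 64, 64, requires_grad=True)      # patch pixels
    pos = torch.zeros(3, requires_grad=True)                  # raw tx, ty, scale
    opt = torch.optim.Adam([patch, pos], lr=1e-2)             # victim weights stay fixed

    for step in range(1000):
        tx, ty = pos[0].tanh(), pos[1].tanh()                 # keep position in [-1, 1]
        scale = 0.5 * pos[2].sigmoid()                        # keep patch size plausible
        attacked = place_patch(images, patch.clamp(0.0, 1.0), tx, ty, scale)
        loss = F.mse_loss(victim(attacked), target_pose)      # pull prediction to target
        opt.zero_grad()
        loss.backward()
        opt.step()

Because the placement parameters receive gradients through grid_sample, the optimizer can move and rescale the patch as well as repaint it, mirroring the simultaneous optimization of patch and position that the abstract describes.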

Keywords: Adversarial Attacks; Multi-Robot Systems; Security

  • Contribution to proceedings (Open Access)
    Workshop on Multi-Robot Learning at the International Conference on Robotics and Automation (ICRA), 29.05.-02.06.2023, London, United Kingdom
    DOI: 10.48550/arXiv.2305.12859

Permalink: https://www.hzdr.de/publications/Publ-37015