Kidnapping Deep Learning-based Multirotors using Optimized Flying Adversarial Patches



Hanfeld, P.; Wahba, K.; Höhne, M. M.-C.; Bussmann, M.; Hönig, W.

Abstract

Autonomous flying robots, such as multirotors, often rely on deep learning models that make predictions based on a camera image, e.g., for pose estimation. These models can produce surprising results when applied to input images outside the training domain. This weakness can be exploited by adversarial attacks, for example by computing small images, so-called adversarial patches, that can be placed in the environment to manipulate the neural network's prediction. We introduce flying adversarial patches, where multiple images are mounted on at least one other flying robot and can therefore be placed anywhere in the field of view of a victim multirotor. By introducing the attacker robots, the system is extended to an adversarial multi-robot system. For an effective attack, we compare three methods that simultaneously optimize multiple adversarial patches and their positions in the input image. We show that our methods scale well with the number of adversarial patches. Moreover, we demonstrate physical flights with two robots, where we employ a novel attack policy that uses the computed adversarial patches to kidnap a robot that was supposed to follow a human.
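The abstract mentions simultaneously optimizing adversarial patch pixels and their position in the victim's input image. As a rough, hypothetical sketch of that general idea (not the paper's actual method or networks), the toy example below replaces the victim's pose-estimation network with a fixed linear map and alternates a discrete grid search over patch positions with gradient steps on the patch pixels, steering the predicted pose toward an attacker-chosen target:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the victim's pose-estimation network: a fixed linear
# map from a flattened 8x8 grayscale image to a 2D position prediction.
# (Hypothetical; the paper attacks a real deep neural network.)
W = rng.normal(size=(2, 64)) * 0.1

def predict(img):
    return W @ img.ravel()

def place(img, patch, pos):
    """Overwrite a k-by-k region of `img` with `patch` at `pos` = (row, col)."""
    out = img.copy()
    r, c = pos
    k = patch.shape[0]
    out[r:r + k, c:c + k] = patch
    return out

def attack(img, target, k=3, steps=200, lr=0.5):
    """Jointly optimize patch pixels (gradient descent) and patch position
    (exhaustive grid search), minimizing squared error to the target pose."""
    patch = rng.uniform(0, 1, size=(k, k))
    best_pos = (0, 0)
    for _ in range(steps):
        # Discrete search: best placement for the current patch.
        best_pos = min(
            ((r, c)
             for r in range(img.shape[0] - k + 1)
             for c in range(img.shape[1] - k + 1)),
            key=lambda p: np.sum((predict(place(img, patch, p)) - target) ** 2),
        )
        # Continuous step: gradient of the loss w.r.t. the patch pixels
        # at the chosen position (analytic for the linear toy model).
        adv = place(img, patch, best_pos)
        err = predict(adv) - target                 # shape (2,)
        grad_img = (W.T @ err).reshape(img.shape)   # d(loss)/d(pixel), up to a factor
        r, c = best_pos
        patch = np.clip(patch - lr * grad_img[r:r + k, c:c + k], 0.0, 1.0)
    return patch, best_pos

img = rng.uniform(0, 1, size=(8, 8))
target = np.array([1.0, -1.0])
patch, pos = attack(img, target)
adv_err = np.sum((predict(place(img, patch, pos)) - target) ** 2)
clean_err = np.sum((predict(img) - target) ** 2)
print(f"clean error: {clean_err:.3f}, adversarial error: {adv_err:.3f}")
```

For a real deep network, the analytic gradient above would be replaced by automatic differentiation, and the placement would be made differentiable (or optimized by a smarter search) so that multiple patches and positions can be optimized at once, as the paper's three compared methods do.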

Keywords: Multi-Robot Systems; Deep Learning; Adversarial Attacks; Security

  • Contribution to proceedings (Open Access)
    International Symposium on Multi-Robot & Multi-Agent Systems (MRS), 04.-05.12.2023, Boston, United States of America
    2023 International Symposium on Multi-Robot and Multi-Agent Systems (MRS), Boston, MA, USA: IEEE, ISBN 979-8-3503-7076-8, pp. 78-84
    DOI: 10.1109/MRS60187.2023.10416782

Permalink: https://www.hzdr.de/publications/Publ-37581