Low-Rank Adversarial PGD Attack

Dayana Savostianova, Emanuele Zangrando, Francesco Tudisco
preprint (2024)

Abstract

Adversarial attacks on deep neural network models have seen rapid development and are extensively used to study the stability of these networks. Among various adversarial strategies, Projected Gradient Descent (PGD) is a widely adopted method in computer vision due to its effectiveness and quick implementation, making it suitable for adversarial training. In this work, we observe that in many cases, the perturbations computed using PGD predominantly affect only a portion of the singular value spectrum of the original image, suggesting that these perturbations are approximately low-rank. Motivated by this observation, we propose a variation of PGD that efficiently computes a low-rank attack. We extensively validate our method on a range of standard models, as well as robust models that have undergone adversarial training. Our analysis indicates that the proposed low-rank PGD can be effectively used in adversarial training due to its straightforward and fast implementation, coupled with competitive performance. Notably, we find that low-rank PGD often performs comparably to, and sometimes even outperforms, the traditional full-rank PGD attack, while using significantly less memory.
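The abstract describes two ingredients: the empirical observation that PGD perturbations concentrate most of their energy on a small part of the singular value spectrum (which one can check by inspecting `torch.linalg.svdvals(x_adv - x)`), and a PGD variant constrained to produce low-rank perturbations. Below is a minimal PyTorch sketch of the second idea, not the paper's exact algorithm: it simply re-truncates the perturbation to rank r with a batched SVD after each signed-gradient step. All function and hyperparameter names (`low_rank_pgd`, `eps`, `alpha`, `rank`) are illustrative assumptions; the actual method may instead maintain low-rank factors directly, which is presumably where the memory savings mentioned in the abstract come from.

    import torch

    def low_rank_pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10, rank=8):
        """Hypothetical rank-constrained l_inf PGD on images of shape (B, C, H, W).

        Hyperparameter values are illustrative, not the paper's settings.
        """
        loss_fn = torch.nn.CrossEntropyLoss()
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(steps):
            loss = loss_fn(model(x + delta), y)
            grad = torch.autograd.grad(loss, delta)[0]
            with torch.no_grad():
                # Standard signed-gradient ascent step on the perturbation.
                delta += alpha * grad.sign()
                # Truncate each (H, W) channel slice of delta to rank r via a
                # batched SVD; this is the low-rank constraint of the sketch.
                U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
                S[..., rank:] = 0.0
                delta.copy_(U @ torch.diag_embed(S) @ Vh)
                # Project back into the epsilon-ball and keep pixels in [0, 1].
                # (Clamping can slightly break exact low-rankness; this is a
                # sketch-level simplification.)
                delta.clamp_(-eps, eps)
                delta.copy_(torch.clamp(x + delta, 0.0, 1.0) - x)
        return (x + delta).detach()

A call such as `low_rank_pgd(model, images, labels, rank=8)` would return adversarial examples whose per-channel perturbation is approximately rank 8, which mirrors the abstract's observation that full-rank PGD perturbations are close to low-rank in the first place.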

Please cite this paper as:

@article{savostianova2024lowrank,
  title={Low-Rank Adversarial PGD Attack},
  author={Savostianova, Dayana and Zangrando, Emanuele and Tudisco, Francesco},
  journal={arXiv preprint arXiv:2410.12607},
  year={2024}
}

Links: arXiv

Keywords: deep learning, neural networks, low-rank, adversarial robustness