Abstract
Deep learning models have proven successful in a wide range of machine
learning tasks. Yet, they are often highly sensitive to perturbations of
the input data, which can lead to incorrect decisions with high
confidence, hampering their deployment in practical use cases. Finding
architectures that are (more) robust against such perturbations has
therefore received much attention in recent years. Just like the search
for well-performing architectures in terms of clean accuracy, this
usually involves a tedious trial-and-error process, with one additional
challenge: evaluating a network's robustness is significantly more
expensive than evaluating its clean accuracy. The aim of this paper is
thus to facilitate more streamlined research on architectural design
choices with respect to their impact on robustness, as well as, for
example, the evaluation of surrogate measures for robustness. To this
end, we borrow one of the most commonly considered search spaces for
neural architecture search for image classification, NAS-Bench-201,
which contains a manageable set of 6466 non-isomorphic network designs.
We evaluate all of these networks on a range of common adversarial
attacks and corruption types and introduce a database of neural
architecture designs and robustness evaluations. We further present
three exemplary use cases of this dataset, in which we (i) benchmark
robustness measures based on Jacobian and Hessian matrices for their
ability to predict robustness, (ii) perform neural architecture search
on robust accuracies, and (iii) provide an initial analysis of how
architectural design choices affect robustness. We find that carefully
crafting the topology of a network can have a substantial impact on its
robustness: networks with the same parameter count range in mean
adversarial robust accuracy from 20% to 41%.
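The robustness evaluations described in the abstract cover a range of common adversarial attacks. As a rough illustration of what a single such evaluation involves, the sketch below computes accuracy under a one-step FGSM attack in PyTorch. The model, loss, and perturbation budget `eps` are illustrative assumptions, not the paper's exact evaluation protocol.

```python
import torch
import torch.nn as nn


def fgsm_robust_accuracy(model, images, labels, eps=8 / 255):
    """Accuracy under a single-step FGSM attack (a sketch, not the
    paper's exact protocol)."""
    model.eval()
    images = images.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    # Perturb each pixel in the direction that increases the loss,
    # then clamp back to the valid image range [0, 1].
    adv = (images + eps * images.grad.sign()).clamp(0.0, 1.0).detach()
    with torch.no_grad():
        preds = model(adv).argmax(dim=1)
    return (preds == labels).float().mean().item()
```

Stronger multi-step attacks such as PGD follow the same pattern, iterating the gradient step while projecting back into the epsilon ball.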
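Use case (i) benchmarks surrogate robustness measures based on Jacobian and Hessian matrices. A common Jacobian-based instantiation is the Frobenius norm of the input-output Jacobian, where smaller norms are often taken to indicate higher local robustness; the sketch below computes one such measure and may differ from the exact quantity benchmarked in the paper.

```python
import torch


def jacobian_frobenius_norm(model, x):
    """Frobenius norm of the Jacobian of the network output w.r.t. a
    single input x -- a sketch of a Jacobian-based robustness surrogate."""
    x = x.clone().detach().requires_grad_(True)
    out = model(x.unsqueeze(0)).squeeze(0)  # shape: (num_classes,)
    rows = []
    # Build the Jacobian row by row: one gradient per output class.
    for i in range(out.shape[0]):
        grad = torch.autograd.grad(out[i], x, retain_graph=True)[0]
        rows.append(grad.flatten())
    jac = torch.stack(rows)  # shape: (num_classes, num_input_elements)
    return jac.norm(p="fro").item()
```

Such measures are attractive as cheap proxies precisely because, as the abstract notes, directly evaluating robustness via attacks is far more expensive than evaluating clean accuracy.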
BibTeX
@inproceedings{Jung_ICLR23,
  TITLE = {Neural Architecture Design and Robustness: {A} Dataset},
  AUTHOR = {Jung, Steffen and Lukasik, Jovita and Keuper, Margret},
  LANGUAGE = {eng},
  PUBLISHER = {OpenReview.net},
  YEAR = {2023},
  PUBLREMARK = {Accepted},
  MARGINALMARK = {$\bullet$},
  ABSTRACT = {Deep learning models have proven successful in a wide range of machine learning tasks. Yet, they are often highly sensitive to perturbations of the input data, which can lead to incorrect decisions with high confidence, hampering their deployment in practical use cases. Finding architectures that are (more) robust against such perturbations has therefore received much attention in recent years. Just like the search for well-performing architectures in terms of clean accuracy, this usually involves a tedious trial-and-error process, with one additional challenge: evaluating a network's robustness is significantly more expensive than evaluating its clean accuracy. The aim of this paper is thus to facilitate more streamlined research on architectural design choices with respect to their impact on robustness, as well as, for example, the evaluation of surrogate measures for robustness. To this end, we borrow one of the most commonly considered search spaces for neural architecture search for image classification, NAS-Bench-201, which contains a manageable set of 6466 non-isomorphic network designs. We evaluate all of these networks on a range of common adversarial attacks and corruption types and introduce a database of neural architecture designs and robustness evaluations. We further present three exemplary use cases of this dataset, in which we (i) benchmark robustness measures based on Jacobian and Hessian matrices for their ability to predict robustness, (ii) perform neural architecture search on robust accuracies, and (iii) provide an initial analysis of how architectural design choices affect robustness. We find that carefully crafting the topology of a network can have a substantial impact on its robustness: networks with the same parameter count range in mean adversarial robust accuracy from 20\% to 41\%.},
  BOOKTITLE = {Eleventh International Conference on Learning Representations (ICLR 2023)},
  ADDRESS = {Kigali, Rwanda},
}
Endnote
%0 Conference Proceedings
%A Jung, Steffen
%A Lukasik, Jovita
%A Keuper, Margret
%+ Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society
%T Neural Architecture Design and Robustness: A Dataset
%G eng
%U http://hdl.handle.net/21.11116/0000-000C-738F-2
%D 2023
%B Eleventh International Conference on Learning Representations
%Z date of event: 2023-05-01 - 2023-05-05
%C Kigali, Rwanda
%X Deep learning models have proven successful in a wide range of machine learning tasks. Yet, they are often highly sensitive to perturbations of the input data, which can lead to incorrect decisions with high confidence, hampering their deployment in practical use cases. Finding architectures that are (more) robust against such perturbations has therefore received much attention in recent years. Just like the search for well-performing architectures in terms of clean accuracy, this usually involves a tedious trial-and-error process, with one additional challenge: evaluating a network's robustness is significantly more expensive than evaluating its clean accuracy. The aim of this paper is thus to facilitate more streamlined research on architectural design choices with respect to their impact on robustness, as well as, for example, the evaluation of surrogate measures for robustness. To this end, we borrow one of the most commonly considered search spaces for neural architecture search for image classification, NAS-Bench-201, which contains a manageable set of 6466 non-isomorphic network designs. We evaluate all of these networks on a range of common adversarial attacks and corruption types and introduce a database of neural architecture designs and robustness evaluations. We further present three exemplary use cases of this dataset, in which we (i) benchmark robustness measures based on Jacobian and Hessian matrices for their ability to predict robustness, (ii) perform neural architecture search on robust accuracies, and (iii) provide an initial analysis of how architectural design choices affect robustness. We find that carefully crafting the topology of a network can have a substantial impact on its robustness: networks with the same parameter count range in mean adversarial robust accuracy from 20% to 41%.
%B Eleventh International Conference on Learning Representations
%I OpenReview.net