Steffen Jung (PhD Student)

MSc Steffen Jung
- Address: Max-Planck-Institut für Informatik, Saarland Informatics Campus, Campus E1 4, 66123 Saarbrücken
- Office: E1 4 - 624
- Phone: +49 681 9325 2124
- Fax: +49 681 9325 2099
Publications
2023
Neural Architecture Design and Robustness: A Dataset
S. Jung, J. Lukasik and M. Keuper
Eleventh International Conference on Learning Representations (ICLR 2023), 2023 (Accepted/in press)
Abstract
Deep learning models have proven to be successful in a wide range of machine learning tasks. Yet, they are often highly sensitive to perturbations of the input data, which can lead to incorrect decisions with high confidence, hampering their deployment for practical use cases. Thus, finding architectures that are (more) robust against perturbations has received much attention in recent years. Just like the search for well-performing architectures in terms of clean accuracy, this usually involves a tedious trial-and-error process with one additional challenge: the evaluation of a network's robustness is significantly more expensive than its evaluation for clean accuracy. Thus, the aim of this paper is to facilitate better streamlined research on architectural design choices with respect to their impact on robustness as well as, for example, the evaluation of surrogate measures for robustness. We therefore borrow one of the most commonly considered search spaces for neural architecture search for image classification, NAS-Bench-201, which contains a manageable size of 6466 non-isomorphic network designs. We evaluate all these networks on a range of common adversarial attacks and corruption types and introduce a database on neural architecture design and robustness evaluations. We further present three exemplary use cases of this dataset, in which we (i) benchmark robustness measurements based on Jacobian and Hessian matrices for their robustness predictability, (ii) perform neural architecture search on robust accuracies, and (iii) provide an initial analysis of how architectural design choices affect robustness. We find that carefully crafting the topology of a network can have a substantial impact on its robustness, where networks with the same parameter count range in mean adversarial robust accuracy from 20% to 41%.
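The robust accuracy referred to in the abstract is simply the fraction of examples that a network still classifies correctly after an adversarial perturbation. A minimal sketch of that measurement, with `model` and `attack` as hypothetical callables (not part of the paper's released code):

```python
import numpy as np

def robust_accuracy(model, attack, inputs, labels):
    """Fraction of examples still classified correctly after perturbation.

    `model` maps a batch of inputs to predicted labels; `attack` maps
    (inputs, labels) to perturbed inputs. Both are placeholder callables
    standing in for a trained network and an adversarial attack.
    """
    perturbed = attack(inputs, labels)
    preds = model(perturbed)
    return float(np.mean(preds == labels))

# Sanity check with a trivial linear "classifier" and an identity "attack":
# an attack that changes nothing must leave accuracy at 1.0.
model = lambda x: (x.sum(axis=1) > 0).astype(int)
identity_attack = lambda x, y: x
x = np.array([[1.0, 2.0], [-1.0, -2.0]])
y = np.array([1, 0])
print(robust_accuracy(model, identity_attack, x, y))  # 1.0
```

Evaluating this quantity requires running the attack for every example, which is why, as the abstract notes, robustness evaluation is far more expensive than clean-accuracy evaluation.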
2022
FrequencyLowCut Pooling - Plug & Play against Catastrophic Overfitting
J. Grabinski, S. Jung, J. Keuper and M. Keuper
Computer Vision -- ECCV 2022, 2022
Learning Where To Look - Generative NAS is Surprisingly Efficient
J. Lukasik, S. Jung and M. Keuper
Computer Vision -- ECCV 2022, 2022
Learning to solve Minimum Cost Multicuts efficiently using Edge-Weighted Graph Convolutional Neural Networks
S. Jung and M. Keuper
Machine Learning and Knowledge Discovery in Databases (ECML PKDD 2022), 2022
Abstract
The minimum cost multicut problem is the NP-hard/APX-hard combinatorial optimization problem of partitioning a real-valued edge-weighted graph so as to minimize the total cost of the partition. While graph convolutional neural networks (GNN) have proven to be promising in the context of combinatorial optimization, most of them are only tailored to or tested on positive-valued edge weights, i.e. they do not comply with the nature of the multicut problem. We therefore adapt various GNN architectures, including Graph Convolutional Networks, Signed Graph Convolutional Networks and Graph Isomorphic Networks, to facilitate the efficient encoding of real-valued edge costs. Moreover, we employ a reformulation of the multicut ILP constraints to a polynomial program as loss function, which allows learning feasible multicut solutions in a scalable way. Thus, we provide the first approach towards end-to-end trainable multicuts. Our findings support that GNN approaches can produce good solutions in practice while providing lower computation times and largely improved scalability compared to LP solvers and optimized heuristics, especially when considering large instances.
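The objective described in the abstract can be made concrete on a toy graph: a partition is scored by summing the weights of all "cut" edges, i.e. edges whose endpoints land in different components. The sketch below is an illustration of the objective only, not of the paper's GNN approach; the graph and labels are made up for the example.

```python
def multicut_cost(edges, labels):
    """Cost of a node partition under the multicut objective: the sum of
    the weights of all edges whose endpoints lie in different components
    (the cut edges). `edges` is a list of (u, v, weight) triples and
    `labels` maps each node to a component id."""
    return sum(w for (u, v, w) in edges if labels[u] != labels[v])

# Toy signed graph: a triangle with one repulsive (negative-weight) edge.
edges = [("a", "b", 2.0), ("b", "c", 2.0), ("a", "c", -3.0)]

# Keeping all nodes together cuts nothing (cost 0); splitting off "c"
# cuts b-c and a-c, for a total cost of 2.0 + (-3.0) = -1.0, which is
# cheaper -- negative edges reward separating their endpoints.
print(multicut_cost(edges, {"a": 0, "b": 0, "c": 0}))  # 0
print(multicut_cost(edges, {"a": 0, "b": 0, "c": 1}))  # -1.0
```

Real-valued (signed) weights are exactly what makes the problem interesting, and why, as the abstract notes, GNNs designed for positive-only edge weights do not directly fit it.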
Optimizing Edge Detection for Image Segmentation with Multicut Penalties
S. Jung, S. Ziegler, A. Kardoost and M. Keuper
Pattern Recognition (DAGM GCPR 2022), 2022
Abstract
The Minimum Cost Multicut Problem (MP) is a popular way of obtaining a graph decomposition by optimizing binary edge labels over edge costs. While the formulation of an MP from independently estimated costs per edge is highly flexible and intuitive, solving the MP is NP-hard and computationally expensive. As a remedy, recent work proposed to predict edge probabilities with awareness of potential conflicts by incorporating cycle constraints in the prediction process. We argue that such a formulation, while providing a first step towards end-to-end learnable edge weights, is suboptimal, since it is built upon a loose relaxation of the MP. We therefore propose an adaptive CRF that allows progressively considering more violated constraints and, in consequence, issuing solutions with higher validity. Experiments on the BSDS500 benchmark for natural image segmentation as well as on electron microscopic recordings show that our approach yields more precise edge detection and image segmentation.
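The cycle constraints mentioned in the abstract rule out inconsistent edge labelings: a cycle with exactly one cut edge is contradictory, because its endpoints are joined around the rest of the cycle yet declared separated by the lone cut edge. A minimal check on a single cycle (an illustration of the constraint, not of the paper's CRF):

```python
def violates_cycle_constraint(cycle_labels):
    """Edge labels along one cycle (1 = cut, 0 = join) are infeasible for
    the multicut problem exactly when the cycle contains a single cut
    edge; zero or two-plus cut edges are consistent."""
    return sum(cycle_labels) == 1

# Triangle examples: cutting exactly one edge is inconsistent, while
# cutting two edges (isolating one node) is a valid decomposition.
print(violates_cycle_constraint([1, 0, 0]))  # True
print(violates_cycle_constraint([1, 1, 0]))  # False
```

A labeling is a valid multicut only if no cycle in the graph violates this condition; enforcing the constraints progressively, starting from the most violated ones, is the idea behind the adaptive scheme the abstract describes.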
2021
Internalized Biases in Fréchet Inception Distance
S. Jung and M. Keuper
NeurIPS 2021 Workshop on Distribution Shifts: Connecting Methods and Applications (NeurIPS 2021 Workshop DistShift), 2021