Ning Yu, MSc (PhD Student)

Max-Planck-Institut für Informatik
Saarland Informatics Campus
Campus E1 4
66123 Saarbrücken
E1 4 - 626
+49 681 9325 2026
+49 681 9325 2099

Publications

Inclusive GAN: Improving Data and Minority Coverage in Generative Models
N. Yu, K. Li, P. Zhou, J. Malik, L. Davis, and M. Fritz
Computer Vision -- ECCV 2020, 2020

Long-Tailed Recognition Using Class-Balanced Experts
S. Sharma, N. Yu, M. Fritz, and B. Schiele
Pattern Recognition (GCPR 2020), 2020 (accepted/in press)

Texture Mixer: A Network for Controllable Synthesis and Interpolation of Texture
N. Yu, C. Barnes, E. Shechtman, S. Amirghodsi, and M. Lukáč
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), 2019

Attributing Fake Images to GANs: Learning and Analyzing GAN Fingerprints
N. Yu, L. Davis, and M. Fritz
International Conference on Computer Vision (ICCV 2019), 2019

GAN-Leaks: A Taxonomy of Membership Inference Attacks against GANs
D. Chen, N. Yu, Y. Zhang, and M. Fritz
Technical Report, 2019 (arXiv: 1909.03935)
In recent years, the success of deep learning has carried over from discriminative models to generative models. In particular, generative adversarial networks (GANs) have enabled a new level of performance in applications ranging from media manipulation to dataset re-generation. Despite this success, the privacy risks posed by GANs remain less well explored. In this paper, we focus on membership inference attacks against GANs, which can reveal information about a victim model's training data. Specifically, we present the first taxonomy of membership inference attacks against GANs, encompassing both existing attacks and our novel ones. We also propose the first generic attack model, which can be instantiated in various settings according to the adversary's knowledge about the victim model. We complement our systematic analysis of attack vectors with a comprehensive experimental study that investigates the effectiveness of these attacks with respect to model type, training configuration, and attack type across three diverse application scenarios: images, medical data, and location data. We show consistent effectiveness across all setups, bridging the assumption and performance gaps left by previous studies with a complete spectrum of results across settings. We conclude by reminding users to think carefully before publicizing any part of their models.
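The core idea behind this kind of membership inference can be sketched with a reconstruction-distance score: in the fully black-box setting, an attacker scores a query point by how close the generator's samples come to it, on the intuition that training-set members are reconstructed better. The sketch below is a simplified illustration under stated assumptions, not the paper's exact method; the toy `generator`, the number of drawn samples, and the query points are all hypothetical placeholders for a victim GAN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained generator mapping latent codes to 2-D samples
# (an assumption for illustration; in practice this is the victim GAN's G,
# queried only through its outputs).
def generator(z):
    return np.tanh(z @ np.array([[1.0, 0.5], [-0.5, 1.0]]))

def membership_score(x, n_samples=10000, latent_dim=2):
    """Black-box membership score for a query point x.

    Draw n_samples outputs from the generator and take the negated
    nearest-neighbor distance to x: a higher score (smaller distance)
    suggests x is better covered by the generator, i.e. more likely
    to have been in its training set.
    """
    z = rng.standard_normal((n_samples, latent_dim))
    fakes = generator(z)
    dists = np.linalg.norm(fakes - x, axis=1)
    return -dists.min()

# A point on the generator's output manifold vs. one far from it:
# the on-manifold point should receive the higher membership score.
near = generator(rng.standard_normal((1, 2)))[0]
far = np.array([10.0, 10.0])
print(membership_score(near) > membership_score(far))
```

Thresholding this score turns it into a binary member/non-member decision; richer attack settings (e.g. access to the latent space or the discriminator) can replace the random sampling with an explicit search for the best reconstruction.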