Ning Yu (PhD Student)

MSc Ning Yu

Address
Max-Planck-Institut für Informatik
Saarland Informatics Campus
Campus E1 4
66123 Saarbrücken
Location
E1 4 - 626
Phone
+49 681 9325 2026
Fax
+49 681 9325 2099

Publications

2019
Texture Mixer: A Network for Controllable Synthesis and Interpolation of Texture
N. Yu, C. Barnes, E. Shechtman, S. Amirghodsi and M. Lukáč
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), 2019
Attributing Fake Images to GANs: Learning and Analyzing GAN Fingerprints
N. Yu, L. Davis and M. Fritz
International Conference on Computer Vision (ICCV 2019), 2019
GAN-Leaks: A Taxonomy of Membership Inference Attacks against GANs
D. Chen, N. Yu, Y. Zhang and M. Fritz
Technical Report, 2019
(arXiv: 1909.03935)
Abstract
In recent years, the success of deep learning has carried over from discriminative models to generative models. In particular, generative adversarial networks (GANs) have enabled a new level of performance in applications ranging from media manipulation to dataset re-generation. Despite this success, the potential privacy risks posed by GANs remain less well explored. In this paper, we focus on membership inference attacks against GANs, which have the potential to reveal information about a victim model's training data. Specifically, we present the first taxonomy of membership inference attacks, which encompasses not only existing attacks but also our novel ones. We also propose the first generic attack model that can be instantiated in various settings according to the adversary's knowledge about the victim model. We complement our systematic analysis of attack vectors with a comprehensive experimental study that investigates the effectiveness of these attacks with respect to model type, training configuration, and attack type across three diverse application scenarios covering image, medical, and location data. We show consistent effectiveness in all setups, which bridges the assumption gap and performance gap left by previous studies with a complete spectrum of performance across settings. We conclude by urging users to think carefully before publicizing any part of their models.
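
To give a rough sense of what a membership inference attack against a generator looks like in the full black-box setting, the sketch below scores a query point by its distance to the nearest sample drawn from the victim generator: training members tend to be reconstructed more closely than non-members. This is a minimal illustration, not the paper's implementation; the generate function, dimensions, and threshold are hypothetical and would need calibration on reference data in practice.

import numpy as np

def membership_score(x, generated):
    # Distance from the query point x to its nearest neighbor among
    # samples drawn from the victim generator. Smaller distances
    # suggest the generator can (approximately) reconstruct x, which
    # is evidence that x was a training member.
    return np.linalg.norm(generated - x, axis=1).min()

def infer_membership(x, generate, n_samples=10000, threshold=0.5):
    # generate(n) is an assumed black-box interface returning an
    # (n, d) array of generator samples; the threshold would be
    # calibrated on held-out (non-member) reference data.
    samples = generate(n_samples)
    return membership_score(x, samples) <= threshold

# Toy usage: a "generator" that has memorized a single training point.
rng = np.random.default_rng(0)
train_point = rng.normal(size=8)
generate = lambda n: train_point + 0.01 * rng.normal(size=(n, 8))
print(infer_membership(train_point, generate))        # likely True
print(infer_membership(rng.normal(size=8), generate)) # likely False

The same scoring idea extends to the paper's other settings, where more adversary knowledge (e.g., access to the latent space or the discriminator) allows tighter reconstruction of the query point.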