Yongqin Xian (Postdoc)

Max-Planck-Institut für Informatik
Saarland Informatics Campus
Campus E1 4
66123 Saarbrücken
Office: E1 4 - 618
Phone: +49 681 9325 2118
Fax: +49 681 9325 2099

Personal Information

Research Interests

  • Computer Vision
  • Machine Learning

Education and Positions

  • 2020-present, Postdoc, Max Planck Institute for Informatics, Germany
  • 2016-2020, Ph.D. (summa cum laude) in Computer Science, Max Planck Institute for Informatics, Germany
  • 2013-2015, M.Sc. (Honors Degree) in Computer Science, Saarland University, Germany
  • 2009-2013, B.E. in Computer Science, Beijing Institute of Technology, China 

Work Experience

  • Jan 2019-April 2019, Research Intern, Facebook AI Research, Cambridge, USA 

Student (Co-)Supervision

  • Bachelor's thesis of Tobias Lorenz (now a Master's student at ETH)
  • Master's thesis of Subhabrata Choudhury (now a PhD student at Oxford)
  • Master's thesis of Yue Fan (now a PhD student at MPII)

Awards

  • Finalist of Qualcomm Innovation Fellowship 2018
  • Chinese Government Award for Outstanding Students Abroad 2018

Teaching

  • Teaching Assistant, Machine Learning, Winter Semester 2015/2016 (taught by Matthias Hein)


Reviewing Activities

  • Journal reviewer: TPAMI (2017-2020), TIP (2018-2019)
  • Conference reviewer: CVPR (2018-2020), ICCV (2019), ECCV (2018-2020), NeurIPS (2018-2020), ICML (2019-2020), ICLR (2019-2020)

Publications

Semantic Projection Network for Zero- and Few-Label Semantic Segmentation
Y. Xian, S. Choudhury, Y. He, B. Schiele and Z. Akata
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019
f-VAEGAN-D2: A Feature Generating Framework for Any-Shot Learning
Y. Xian, S. Sharma, B. Schiele and Z. Akata
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019
When labeled training data is scarce, a promising data augmentation approach is to generate visual features of unknown classes from their attributes. To learn the class-conditional distribution of CNN features, these models rely on pairs of image features and class attributes, and hence they cannot exploit the abundance of unlabeled data samples. In this paper, we tackle any-shot learning problems, i.e. zero-shot and few-shot, in a unified feature-generating framework that operates in both inductive and transductive learning settings. We develop a conditional generative model that combines the strengths of VAEs and GANs and, in addition, learns the marginal feature distribution of unlabeled images via an unconditional discriminator. We empirically show that our model learns highly discriminative CNN features on four datasets, i.e. CUB, SUN, AWA and ImageNet, and establishes a new state of the art in any-shot learning, i.e. inductive and transductive (generalized) zero- and few-shot learning settings. We also demonstrate that the learned features are interpretable: we visualize them by inverting them back to pixel space, and we explain them by generating textual arguments for why they are associated with a certain label.
Zero-shot Learning - A Comprehensive Evaluation of the Good, the Bad and the Ugly
Y. Xian, C. H. Lampert, B. Schiele and Z. Akata
IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 41, Number 9, 2019
Due to the importance of zero-shot learning, i.e. classifying images for which there is a lack of labeled training data, the number of proposed approaches has recently increased steadily. We argue that it is time to take a step back and analyze the status quo of the area. The purpose of this paper is three-fold. First, since there is no agreed-upon zero-shot learning benchmark, we define a new benchmark by unifying both the evaluation protocols and the data splits of publicly available datasets used for this task. This is an important contribution, as published results are often not comparable and sometimes even flawed due to, e.g., pre-training on zero-shot test classes. Moreover, we propose a new zero-shot learning dataset, the Animals with Attributes 2 (AWA2) dataset, which we make publicly available both in terms of image features and the images themselves. Second, we compare and analyze a significant number of state-of-the-art methods in depth, both in the classic zero-shot setting and in the more realistic generalized zero-shot setting. Finally, we discuss in detail the limitations of the current state of the area, which can serve as a basis for advancing it.
Feature Generating Networks for Zero-Shot Learning
Y. Xian, T. Lorenz, B. Schiele and Z. Akata
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018
Zero-shot learning - The Good, the Bad and the Ugly
Y. Xian, B. Schiele and Z. Akata
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017
Latent Embeddings for Zero-shot Classification
Y. Xian, Z. Akata, G. Sharma, Q. Nguyen, M. Hein and B. Schiele
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016