Yongqin Xian (Postdoc)

Personal Information
Publications
2022
Attribute Prototype Network for Any-Shot Learning
W. Xu, Y. Xian, J. Wang, B. Schiele and Z. Akata
International Journal of Computer Vision, Volume 130, 2022
2021
Distilling Audio-Visual Knowledge by Compositional Contrastive Learning
Y. Chen, Y. Xian, A. S. Koepke and Z. Akata
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021), 2021
Open World Compositional Zero-Shot Learning
M. Mancini, M. F. Naeem, Y. Xian and Z. Akata
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021), 2021
Learning Graph Embeddings for Compositional Zero-shot Learning
M. F. Naeem, Y. Xian, F. Tombari and Z. Akata
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021), 2021
(SP)2Net for Generalized Zero-Label Semantic Segmentation
A. Das, Y. Xian, Y. He, B. Schiele and Z. Akata
Pattern Recognition (GCPR 2021), 2021
A Closer Look at Self-training for Zero-Label Semantic Segmentation
G. Pastore, F. Cermelli, Y. Xian, M. Mancini, Z. Akata and B. Caputo
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPR 2021), 2021
2020
Attribute Prototype Network for Zero-Shot Learning
W. Xu, Y. Xian, J. Wang, B. Schiele and Z. Akata
Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 2020
Analyzing the Dependency of ConvNets on Spatial Information
Y. Fan, Y. Xian, M. M. Losch and B. Schiele
Technical Report (arXiv: 2002.01827), 2020
Abstract
Intuitively, image classification should profit from using spatial information. Recent work, however, suggests that this might be overrated in standard CNNs. In this paper, we push the envelope and further investigate the reliance on spatial information. We propose spatial shuffling and GAP+FC to destroy spatial information during both the training and testing phases. Interestingly, we observe that spatial information can be deleted from later layers with small performance drops, which indicates that spatial information at later layers is not necessary for good performance. For example, the test accuracy of VGG-16 drops by only 0.03% and 2.66% with spatial information completely removed from the last 30% and 53% of layers on CIFAR100, respectively. Evaluation on several object recognition datasets (CIFAR100, Small-ImageNet, ImageNet) with a wide range of CNN architectures (VGG16, ResNet50, ResNet152) shows an overall consistent pattern.
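The spatial-shuffling operation described in the abstract can be illustrated with a minimal sketch (a hypothetical helper, not the authors' code): it randomly permutes the spatial positions of a feature map while leaving each channel's contents intact, thereby destroying spatial layout.

```python
import numpy as np

def spatial_shuffle(feature_map, rng=None):
    """Randomly permute the spatial positions of a (C, H, W) feature map.

    Every channel's multiset of activation values is preserved; only
    their spatial arrangement is destroyed. Illustrative sketch of the
    idea in the abstract, not the paper's implementation.
    """
    if rng is None:
        rng = np.random.default_rng()
    c, h, w = feature_map.shape
    flat = feature_map.reshape(c, h * w)
    perm = rng.permutation(h * w)      # one permutation shared by all channels
    return flat[:, perm].reshape(c, h, w)

# Example: shuffle a small synthetic feature map
fm = np.arange(24, dtype=float).reshape(2, 3, 4)
shuffled = spatial_shuffle(fm, rng=np.random.default_rng(0))
```

Applying such a shuffle to the inputs of a given layer (during training and testing) is one way to probe how much that layer actually relies on spatial arrangement.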
Learning from Limited Labeled Data - Zero-Shot and Few-Shot Learning
Y. Xian
PhD Thesis, Universität des Saarlandes, 2020
2019
f-VAEGAN-D2: A Feature Generating Framework for Any-Shot Learning
Y. Xian, S. Sharma, B. Schiele and Z. Akata
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), 2019
Abstract
When labeled training data is scarce, a promising data augmentation approach is to generate visual features of unknown classes using their attributes. To learn the class-conditional distribution of CNN features, these models rely on pairs of image features and class attributes, and hence cannot make use of the abundance of unlabeled data samples. In this paper, we tackle any-shot learning problems, i.e. zero-shot and few-shot, in a unified feature generating framework that operates in both inductive and transductive learning settings. We develop a conditional generative model that combines the strengths of VAEs and GANs and, in addition, learns the marginal feature distribution of unlabeled images via an unconditional discriminator. We empirically show that our model learns highly discriminative CNN features on CUB, SUN, AWA and ImageNet, and establish a new state of the art in any-shot learning, i.e. inductive and transductive (generalized) zero- and few-shot learning settings. We also demonstrate that our learned features are interpretable: we visualize them by inverting them back to the pixel space, and we explain them by generating textual arguments for why they are associated with a certain label.
Zero-shot Learning - A Comprehensive Evaluation of the Good, the Bad and the Ugly
Y. Xian, C. H. Lampert, B. Schiele and Z. Akata
IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 41, Number 9, 2019
Abstract
Due to the importance of zero-shot learning, i.e. classifying images for which there is a lack of labeled training data, the number of proposed approaches has recently increased steadily. We argue that it is time to take a step back and analyze the status quo of the area. The purpose of this paper is three-fold. First, given that there is no agreed-upon zero-shot learning benchmark, we define a new benchmark by unifying both the evaluation protocols and the data splits of publicly available datasets used for this task. This is an important contribution, as published results are often not comparable and sometimes even flawed due to, e.g., pre-training on zero-shot test classes. Moreover, we propose a new zero-shot learning dataset, the Animals with Attributes 2 (AWA2) dataset, which we make publicly available both in terms of image features and the images themselves. Second, we compare and analyze a significant number of state-of-the-art methods in depth, both in the classic zero-shot setting and in the more realistic generalized zero-shot setting. Finally, we discuss in detail the limitations of the current status of the area, which can serve as a basis for advancing it.