D2: Computer Vision and Machine Learning

Yue Fan, MSc (PhD Student)

Address: Max-Planck-Institut für Informatik, Saarland Informatics Campus, Campus E1 4, 66123 Saarbrücken
Location: E1 4 - 608
Phone: +49 681 9325 2138
Fax: +49 681 9325 2099


Publications

Fan, Y., Xian, Y., Losch, M. M., & Schiele, B. (2021). Analyzing the Dependency of ConvNets on Spatial Information. In Pattern Recognition (GCPR 2020). Tübingen, Germany: Springer. doi:10.1007/978-3-030-71278-5_8
BibTeX
@inproceedings{Fan_GCPR2020,
  TITLE     = {Analyzing the Dependency of {ConvNets} on Spatial Information},
  AUTHOR    = {Fan, Yue and Xian, Yongqin and Losch, Max Maria and Schiele, Bernt},
  LANGUAGE  = {eng},
  BOOKTITLE = {Pattern Recognition (GCPR 2020)},
  EDITOR    = {Akata, Zeynep and Geiger, Andreas and Sattler, Torsten},
  SERIES    = {Lecture Notes in Computer Science},
  VOLUME    = {12544},
  PAGES     = {101--115},
  PUBLISHER = {Springer},
  ADDRESS   = {T{\"u}bingen, Germany},
  YEAR      = {2020},
  DATE      = {2021},
  ISBN      = {978-3-030-71277-8},
  DOI       = {10.1007/978-3-030-71278-5_8},
}
Fan, Y., Dai, D., & Schiele, B. (in press). CoSSL: Co-Learning of Representation and Classifier for Imbalanced Semi-Supervised Learning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2022). New Orleans, LA, USA: IEEE.
(arXiv: 2112.04564, Accepted/in press)
Abstract
In this paper, we propose a novel co-learning framework (CoSSL) with decoupled representation learning and classifier learning for imbalanced SSL. To handle the data imbalance, we devise Tail-class Feature Enhancement (TFE) for classifier learning. Furthermore, the current evaluation protocol for imbalanced SSL focuses only on balanced test sets, which has limited practicality in real-world scenarios. Therefore, we further conduct a comprehensive evaluation under various shifted test distributions. In experiments, we show that our approach outperforms other methods over a large range of shifted distributions, achieving state-of-the-art performance on benchmark datasets ranging from CIFAR-10, CIFAR-100, ImageNet, to Food-101. Our code will be made publicly available.
BibTeX
@inproceedings{Fan_CVPR2022,
  TITLE      = {{CoSSL}: {C}o-Learning of Representation and Classifier for Imbalanced Semi-Supervised Learning},
  AUTHOR     = {Fan, Yue and Dai, Dengxin and Schiele, Bernt},
  LANGUAGE   = {eng},
  BOOKTITLE  = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2022)},
  PUBLISHER  = {IEEE},
  ADDRESS    = {New Orleans, LA, USA},
  YEAR       = {2022},
  PUBLREMARK = {Accepted},
  EPRINT     = {2112.04564},
  EPRINTTYPE = {arXiv},
}
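The abstract above introduces Tail-class Feature Enhancement (TFE) only as a feature-level remedy for class imbalance used during classifier learning. The short PyTorch sketch below shows one plausible reading of such a step, blending each tail-class feature with a feature drawn from the rest of the data while keeping the tail-class label; the function name, the Beta prior, and the exact blending rule are assumptions for illustration, not the procedure from the paper.

# Illustrative sketch only: enrich tail-class features before training the
# classifier head. The Beta prior and the max(lam, 1 - lam) trick are assumptions.
import torch

def tail_feature_enhancement(tail_feats, other_feats, alpha=0.75):
    """tail_feats:  (N, D) features of tail-class labeled samples.
    other_feats: (M, D) features drawn from the remaining (e.g. unlabeled) data.
    Returns (N, D) enhanced features that keep their tail-class labels."""
    n = tail_feats.size(0)
    # Pair every tail feature with a randomly drawn partner feature.
    idx = torch.randint(0, other_feats.size(0), (n,))
    lam = torch.distributions.Beta(alpha, alpha).sample((n, 1))
    # Bias the mix towards the tail feature so the original label stays plausible.
    lam = torch.max(lam, 1.0 - lam)
    return lam * tail_feats + (1.0 - lam) * other_feats[idx]

In this reading, the enhanced features would feed only the classifier branch, in line with the decoupled representation and classifier learning described in the abstract.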
Fan, Y., Xian, Y., Losch, M. M., & Schiele, B. (2020). Analyzing the Dependency of ConvNets on Spatial Information. Retrieved from https://arxiv.org/abs/2002.01827
(arXiv: 2002.01827)
Abstract
Intuitively, image classification should profit from using spatial information. Recent work, however, suggests that this might be overrated in standard CNNs. In this paper, we are pushing the envelope and aim to further investigate the reliance on spatial information. We propose spatial shuffling and GAP+FC to destroy spatial information during both training and testing phases. Interestingly, we observe that spatial information can be deleted from later layers with small performance drops, which indicates spatial information at later layers is not necessary for good performance. For example, test accuracy of VGG-16 only drops by 0.03% and 2.66% with spatial information completely removed from the last 30% and 53% layers on CIFAR100, respectively. Evaluation on several object recognition datasets (CIFAR100, Small-ImageNet, ImageNet) with a wide range of CNN architectures (VGG16, ResNet50, ResNet152) shows an overall consistent pattern.
BibTeX
@online{Fan_arXiv2002.01827,
  TITLE      = {Analyzing the Dependency of {ConvNets} on Spatial Information},
  AUTHOR     = {Fan, Yue and Xian, Yongqin and Losch, Max Maria and Schiele, Bernt},
  LANGUAGE   = {eng},
  URL        = {https://arxiv.org/abs/2002.01827},
  EPRINT     = {2002.01827},
  EPRINTTYPE = {arXiv},
  YEAR       = {2020},
}
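The abstract describes two interventions for removing spatial information at chosen layers: spatial shuffling and GAP+FC. The PyTorch sketch below illustrates one straightforward form such interventions could take, namely a random permutation of the H x W positions of an intermediate feature map and a head that replaces the remaining layers with global average pooling followed by fully connected layers. The class and function names are illustrative assumptions; the architectures used in the paper differ in detail.

# Illustrative sketch only: two ways to discard spatial layout at a given depth.
import torch
import torch.nn as nn

def spatial_shuffle(features):
    """Randomly permute the H*W positions of a (N, C, H, W) feature map,
    independently per sample, so later layers cannot exploit spatial layout."""
    n, c, h, w = features.shape
    flat = features.reshape(n, c, h * w)
    out = torch.empty_like(flat)
    for i in range(n):
        perm = torch.randperm(h * w, device=features.device)
        out[i] = flat[i][:, perm]
    return out.reshape(n, c, h, w)

class GapFcHead(nn.Module):
    """Replaces the remaining convolutional stages with global average pooling
    followed by fully connected layers (one assumed reading of 'GAP+FC')."""
    def __init__(self, in_channels, hidden, num_classes):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(in_channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, features):
        pooled = features.mean(dim=(2, 3))  # global average pooling over H and W
        return self.fc(pooled)

Because the abstract applies these operations during both training and testing, any accuracy the network retains cannot be attributed to spatial layout at that layer.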
Fan, Y. (2019). Analyzing the Dependency of ConvNets on Spatial Information (Master's thesis). Universität des Saarlandes, Saarbrücken.
BibTeX
@mastersthesis{FanMaster2019,
  TITLE    = {Analyzing the Dependency of {ConvNets} on Spatial Information},
  AUTHOR   = {Fan, Yue},
  LANGUAGE = {eng},
  SCHOOL   = {Universit{\"a}t des Saarlandes},
  ADDRESS  = {Saarbr{\"u}cken},
  YEAR     = {2019},
}