Wenbin Li (PhD Student)

Address
Max-Planck-Institut für Informatik
Saarland Informatics Campus
Campus E1 4
66123 Saarbrücken
Location
E1 4 - Room 615
Phone
+49 681 9325 2110
Fax
+49 681 9325 2099

Personal Information

Research Interests

  • Robotics
  • Activity Modeling
  • Material Recognition
  • Machine Learning

Education

  • 2013-present: PhD student at Max Planck Institute for Informatics and Saarland University, Germany
  • 2010-present: Graduate student at Graduate School for Computer Science, Saarland University, Germany
  • 2010-2013: M.Sc. in Computer Science, Saarland University, Germany
  • 2006-2010: B.Sc. in Science and Technology of Intelligence, Beijing University of Posts and Telecommunications, China

For more information, please visit my personal homepage.

Publications

2017
Visual Stability Prediction for Robotic Manipulation
W. Li, A. Leonardis and M. Fritz
IEEE International Conference on Robotics and Automation (ICRA 2017), 2017
(Accepted/in press)
2016
Recognition of Ongoing Complex Activities by Sequence Prediction Over a Hierarchical Label Space
W. Li and M. Fritz
2016 IEEE Winter Conference on Applications of Computer Vision (WACV 2016), 2016
Visual Stability Prediction and Its Application to Manipulation
W. Li, A. Leonardis and M. Fritz
Technical Report, 2016
(arXiv: 1609.04861)
Abstract
Understanding physical phenomena is a key competence that enables humans and animals to act and interact under uncertain perception in previously unseen environments containing novel objects and their configurations. Developmental psychology has shown that such skills are acquired by infants from observations at a very early stage. In this paper, we contrast a more traditional model-based approach, with explicit 3D representations and physical simulation, against an end-to-end approach that directly predicts stability from appearance. We ask if, and to what extent and quality, such a skill can be acquired directly in a data-driven way, bypassing the need for an explicit simulation at run-time. We present a learning-based approach, trained on simulated data, that predicts the stability of towers of wooden blocks under different conditions, as well as quantities related to the towers' potential fall. We first evaluate the approach on synthetic data and compare the results to human judgments on the same stimuli. Further, we extend this approach to reason about future states of such towers, which in turn enables successful stacking.
To Fall Or Not To Fall: A Visual Approach to Physical Stability Prediction
W. Li, S. Azimi, A. Leonardis and M. Fritz
Technical Report, 2016
(arXiv: 1604.00066)
Abstract
Understanding physical phenomena is a key competence that enables humans and animals to act and interact under uncertain perception in previously unseen environments containing novel objects and their configurations. Developmental psychology has shown that such skills are acquired by infants from observations at a very early stage. In this paper, we contrast a more traditional model-based approach, with explicit 3D representations and physical simulation, against an end-to-end approach that directly predicts stability and related quantities from appearance. We ask if, and to what extent and quality, such a skill can be acquired directly in a data-driven way, bypassing the need for an explicit simulation. We present a learning-based approach, trained on simulated data, that predicts the stability of towers of wooden blocks under different conditions, as well as quantities related to the towers' potential fall. The evaluation is carried out on synthetic data and compared to human judgments on the same stimuli.
2015
Teaching Robots the Use of Human Tools from Demonstration with Non-dexterous End-effectors
W. Li and M. Fritz
2015 IEEE-RAS International Conference on Humanoid Robots (HUMANOIDS 2015), 2015
2014
Learning Multi-scale Representations for Material Classification
W. Li
Pattern Recognition (GCPR 2014), 2014
Learning Multi-scale Representations for Material Classification
W. Li and M. Fritz
Technical Report, 2014
(arXiv: 1408.2938)
Abstract
Recent progress in sparse coding and deep learning has made unsupervised feature learning methods a strong competitor to hand-crafted descriptors. In computer vision, success stories of learned features have been reported predominantly for object recognition tasks. In this paper, we investigate if and how feature learning can be used for material recognition. We propose two strategies to incorporate scale information into the learning procedure, resulting in a novel multi-scale coding procedure. Our results show that our learned features for material recognition outperform hand-crafted descriptors on the FMD and the KTH-TIPS2 material classification benchmarks.
2012
Recognizing Materials from Virtual Examples
W. Li and M. Fritz
Computer Vision - ECCV 2012, 2012