2017
Gaze Embeddings for Zero-Shot Image Classification
N. Karessli, Z. Akata, B. Schiele and A. Bulling
30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 2017
(Accepted/in press)
Simple Does It: Weakly Supervised Instance and Semantic Segmentation
A. Khoreva, R. Benenson, J. Hosang, M. Hein and B. Schiele
30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 2017
(Accepted/in press)
Learning Video Object Segmentation from Static Images
A. Khoreva, F. Perazzi, R. Benenson, B. Schiele and A. Sorkine-Hornung
30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 2017
(Accepted/in press)
Joint Graph Decomposition and Node Labeling: Problem, Algorithms, Applications
E. Levinkov, J. Uhrig, S. Tang, M. Omran, E. Insafutdinov, A. Kirillov, C. Rother, T. Brox, B. Schiele and B. Andres
30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 2017
(Accepted/in press)
A Dataset and Exploration of Models for Understanding Video Data through Fill-in-the-blank Question-answering
T. Maharaj, N. Ballas, A. Rohrbach, A. Courville and C. Pal
30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 2017
(Accepted/in press)
Exploiting Saliency for Object Segmentation from Image Level Labels
S. J. Oh, R. Benenson, A. Khoreva, Z. Akata, M. Fritz and B. Schiele
30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 2017
(Accepted/in press)
Generating Descriptions with Grounded and Co-Referenced People
A. Rohrbach, M. Rohrbach, S. Tang, S. J. Oh and B. Schiele
30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 2017
(Accepted/in press)
Zero-Shot Learning - The Good, the Bad and the Ugly
Y. Xian, B. Schiele and Z. Akata
30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 2017
(Accepted/in press)
Noticeable or Distractive? A Design Space for Gaze-Contingent User Interface Notifications
M. Klauck, Y. Sugano and A. Bulling
CHI 2017 Extended Abstracts, 2017
(Accepted/in press)
Visual Stability Prediction for Robotic Manipulation
W. Li, A. Leonardis and M. Fritz
IEEE International Conference on Robotics and Automation (ICRA 2017), 2017
(Accepted/in press)
MARCOnI-ConvNet-Based MARker-Less Motion Capture in Outdoor and Indoor Scenes
A. Elhayek, E. de Aguiar, A. Jain, J. Tompson, L. Pishchulin, M. Andriluka, C. Bregler, B. Schiele and C. Theobalt
IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 39, Number 3, 2017
Expanded Parts Model for Semantic Description of Humans in Still Images
G. Sharma, F. Jurie and C. Schmid
IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 39, Number 1, 2017
Movie Description
A. Rohrbach, A. Torabi, M. Rohrbach, N. Tandon, C. Pal, H. Larochelle, A. Courville and B. Schiele
International Journal of Computer Vision, 2017 (First Online)
Abstract
Audio Description (AD) provides linguistic descriptions of movies and allows visually impaired people to follow a movie along with their peers. Such descriptions are by design mainly visual and thus naturally form an interesting data source for computer vision and computational linguistics. In this work we propose a novel dataset which contains transcribed ADs, which are temporally aligned to full length movies. In addition we also collected and aligned movie scripts used in prior work and compare the two sources of descriptions. In total the Large Scale Movie Description Challenge (LSMDC) contains a parallel corpus of 118,114 sentences and video clips from 202 movies. First we characterize the dataset by benchmarking different approaches for generating video descriptions. Comparing ADs to scripts, we find that ADs are indeed more visual and describe precisely what is shown rather than what should happen according to the scripts created prior to movie production. Furthermore, we present and compare the results of several teams who participated in a challenge organized in the context of the workshop "Describing and Understanding Video & The Large Scale Movie Description Challenge (LSMDC)", at ICCV 2015.
Online Growing Neural Gas for Anomaly Detection in Changing Surveillance Scenes
Q. Sun, H. Liu and T. Harada
Pattern Recognition, Volume 64, 2017
Look Together: Using Gaze for Assisting Co-located Collaborative Search
Y. Zhang, K. Pfeuffer, M. K. Chong, J. Alexander, A. Bulling and H. Gellersen
Personal and Ubiquitous Computing, Volume 21, Number 1, 2017
Efficiently Summarising Event Sequences with Rich Interleaving Patterns
A. Bhattacharyya and J. Vreeken
Proceedings of the Seventeenth SIAM International Conference on Data Mining (SDM 2017), 2017
(Accepted/in press)
Lucid Data Dreaming for Object Tracking
A. Khoreva, R. Benenson, E. Ilg, T. Brox and B. Schiele
Technical Report, 2017
(arXiv: 1703.09554)
Abstract
Convolutional networks reach top quality in pixel-level object tracking but require a large amount of training data (1k ~ 10k) to deliver such results. We propose a new training strategy which achieves state-of-the-art results across three evaluation datasets while using 20x ~ 100x less annotated data than competing methods. Instead of using large training sets hoping to generalize across domains, we generate in-domain training data using the provided annotation on the first frame of each video to synthesize ("lucid dream") plausible future video frames. In-domain per-video training data allows us to train high-quality appearance- and motion-based models, as well as tune the post-processing stage. This approach allows us to reach competitive results even when training from only a single annotated frame, without ImageNet pre-training. Our results indicate that using a larger training set is not automatically better, and that for the tracking task a smaller training set that is closer to the target domain is more effective. This changes the mindset regarding how many training samples and general "objectness" knowledge are required for the object tracking task.
Towards a Visual Privacy Advisor: Understanding and Predicting Privacy Risks in Images
T. Orekondy, B. Schiele and M. Fritz
Technical Report, 2017
(arXiv: 1703.10660)
Abstract
With an increasing number of users sharing information online, privacy implications entailing such actions are a major concern. For explicit content, such as user profile or GPS data, devices (e.g. mobile phones) as well as web services (e.g. Facebook) offer to set privacy settings in order to enforce the users' privacy preferences. We propose the first approach that extends this concept to image content in the spirit of a Visual Privacy Advisor. First, we categorize personal information in images into 68 image attributes and collect a dataset, which allows us to train models that predict such information directly from images. Second, we run a user study to understand the privacy preferences of different users w.r.t. such attributes. Third, we propose models that predict a user-specific privacy score from images in order to enforce the users' privacy preferences. Our model is trained to predict the user-specific privacy risk and even outperforms the judgment of the users, who often fail to follow their own privacy preferences on image data.
Efficient Algorithms for Moral Lineage Tracing
M. Rempfler, J.-H. Lange, F. Jug, C. Blasse, E. W. Myers, B. H. Menze and B. Andres
Technical Report, 2017
(arXiv: 1702.04111)
Abstract
Lineage tracing, the joint segmentation and tracking of living cells as they move and divide in a sequence of light microscopy images, is a challenging task. Jug et al. have proposed a mathematical abstraction of this task, the moral lineage tracing problem (MLTP) whose feasible solutions define a segmentation of every image and a lineage forest of cells. Their branch-and-cut algorithm, however, is prone to many cuts and slow convergences for large instances. To address this problem, we make three contributions: Firstly, we improve the branch-and-cut algorithm by separating tighter cutting planes. Secondly, we define two primal feasible local search algorithms for the MLTP. Thirdly, we show in experiments that our algorithms decrease the runtime on the problem instances of Jug et al. considerably and find solutions on larger instances in reasonable time.
2016
Multi-Cue Zero-Shot Learning with Strong Supervision
Z. Akata, M. Malinowski, M. Fritz and B. Schiele
29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), 2016
CP-mtML: Coupled Projection Multi-task Metric Learning for Large Scale Face Retrieval
B. Bhattarai, G. Sharma and F. Jurie
29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), 2016
The Cityscapes Dataset for Semantic Urban Scene Understanding
M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth and B. Schiele
29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), 2016
Moral Lineage Tracing
F. Jug, E. Levinkov, C. Blasse, E. W. Myers and B. Andres
29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), 2016
Weakly Supervised Object Boundaries
A. Khoreva, R. Benenson, M. Omran, M. Hein and B. Schiele
29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), 2016
Abstract
State-of-the-art learning-based boundary detection methods require extensive training data. Since labelling object boundaries is one of the most expensive types of annotations, there is a need to relax the requirement to carefully annotate images, both to make training more affordable and to extend the amount of training data. In this paper we propose a technique to generate weakly supervised annotations and show that bounding box annotations alone suffice to reach high-quality object boundaries without using any object-specific boundary annotations. With the proposed weak supervision techniques we achieve the top performance on the object boundary detection task, outperforming by a large margin the current fully supervised state-of-the-art methods.
Loss Functions for Top-k Error: Analysis and Insights
M. Lapin, M. Hein and B. Schiele
29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), 2016
DeepCut: Joint Subset Partition and Labeling for Multi Person Pose Estimation
L. Pishchulin, E. Insafutdinov, S. Tang, B. Andres, M. Andriluka, P. Gehler and B. Schiele
29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), 2016
Learning Deep Representations of Fine-Grained Visual Descriptions
S. Reed, Z. Akata, H. Lee and B. Schiele
29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), 2016
Deep Reflectance Maps
K. Rematas, T. Ritschel, M. Fritz, E. Gavves and T. Tuytelaars
29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), 2016
Abstract
Undoing the image formation process and therefore decomposing appearance into its intrinsic properties is a challenging task due to the under-constrained nature of this inverse problem. While significant progress has been made on inferring shape, materials and illumination from images only, progress in an unconstrained setting is still limited. We propose a convolutional neural architecture to estimate reflectance maps of specular materials in natural lighting conditions. We achieve this in an end-to-end learning formulation that directly predicts a reflectance map from the image itself. We show how to improve estimates by facilitating additional supervision in an indirect scheme that first predicts surface orientation and afterwards predicts the reflectance map by a learning-based sparse data interpolation. In order to analyze performance on this difficult task, we propose a new challenge of Specular MAterials on SHapes with complex IllumiNation (SMASHINg) using both synthetic and real images. Furthermore, we show the application of our method to a range of image-based editing tasks on real images.
Convexity Shape Constraints for Image Segmentation
L. A. Royer, D. L. Richmond, B. Andres and D. Kainmueller
29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), 2016
LOMo: Latent Ordinal Model for Facial Analysis in Videos
K. Sikka, G. Sharma and M. Bartlett
29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), 2016
End-to-end People Detection in Crowded Scenes
R. Stewart and M. Andriluka
29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), 2016
Latent Embeddings for Zero-shot Classification
Y. Xian, Z. Akata, G. Sharma, Q. Nguyen, M. Hein and B. Schiele
29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), 2016
How Far are We from Solving Pedestrian Detection?
S. Zhang, R. Benenson, M. Omran, J. Hosang and B. Schiele
29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), 2016
EgoCap: Egocentric Marker-less Motion Capture with Two Fisheye Cameras
H. Rhodin, C. Richardt, D. Casas, E. Insafutdinov, M. Shafiei, H.-P. Seidel, B. Schiele and C. Theobalt
ACM Transactions on Graphics (Proc. ACM SIGGRAPH Asia 2016), Volume 35, Number 6, 2016
Learning What and Where to Draw
S. Reed, Z. Akata, S. Mohan, S. Tenka, B. Schiele and H. Lee
Advances in Neural Information Processing Systems 29 (NIPS 2016), 2016
SkullConduct: Biometric User Identification on Eyewear Computers Using Bone Conduction Through the Skull
S. Schneegass, Y. Oualil and A. Bulling
CHI 2016, 34th Annual ACM Conference on Human Factors in Computing Systems, 2016
Spatio-Temporal Modeling and Prediction of Visual Attention in Graphical User Interfaces
P. Xu, Y. Sugano and A. Bulling
CHI 2016, 34th Annual ACM Conference on Human Factors in Computing Systems, 2016
GazeTouchPass: Multimodal Authentication Using Gaze and Touch on Mobile Devices
M. Khamis, F. Alt, M. Hassib, E. von Zezschwitz, R. Hasholzner and A. Bulling
CHI 2016 Extended Abstracts, 2016
On the Verge: Voluntary Convergences for Accurate and Precise Timing of Gaze Input
D. Kirst and A. Bulling
CHI 2016 Extended Abstracts, 2016
Pervasive Attentive User Interfaces
A. Bulling
Computer, Volume 49, Number 1, 2016
Towards Segmenting Consumer Stereo Videos: Benchmark, Baselines and Ensembles
W.-C. Chiu, F. Galasso and M. Fritz
Computer Vision - ACCV 2016, 2016
(Accepted/in press)
Local Higher-order Statistics (LHS) Describing Images with Statistics of Local Non-binarized Pixel Patterns
G. Sharma and F. Jurie
Computer Vision and Image Understanding, Volume 142, 2016
An Efficient Fusion Move Algorithm for the Minimum Cost Lifted Multicut Problem
T. Beier, B. Andres, U. Köthe and F. A. Hamprecht
Computer Vision - ECCV 2016, 2016
Generating Visual Explanations
L. A. Hendricks, Z. Akata, M. Rohrbach, J. Donahue, B. Schiele and T. Darrell
Computer Vision -- ECCV 2016, 2016
Abstract
Clearly explaining a rationale for a classification decision to an end-user can be as important as the decision itself. Existing approaches for deep visual recognition are generally opaque and do not output any justification text; contemporary vision-language models can describe image content but fail to take into account class-discriminative image aspects which justify visual predictions. We propose a new model that focuses on the discriminating properties of the visible object, jointly predicts a class label, and explains why the predicted label is appropriate for the image. We propose a novel loss function based on sampling and reinforcement learning that learns to generate sentences that realize a global sentence property, such as class specificity. Our results on a fine-grained bird species classification dataset show that our model is able to generate explanations which are not only consistent with an image but also more discriminative than descriptions produced by existing captioning methods.
DeeperCut: A Deeper, Stronger, and Faster Multi-Person Pose Estimation Model
E. Insafutdinov, L. Pishchulin, B. Andres, M. Andriluka and B. Schiele
Computer Vision -- ECCV 2016, 2016
Abstract
The goal of this paper is to advance the state-of-the-art of articulated pose estimation in scenes with multiple people. To that end we contribute on three fronts. We propose (1) improved body part detectors that generate effective bottom-up proposals for body parts; (2) novel image-conditioned pairwise terms that allow to assemble the proposals into a variable number of consistent body part configurations; and (3) an incremental optimization strategy that explores the search space more efficiently thus leading both to better performance and significant speed-up factors. We evaluate our approach on two single-person and two multi-person pose estimation benchmarks. The proposed approach significantly outperforms best known multi-person pose estimation results while demonstrating competitive performance on the task of single person pose estimation. Models and code available at http://pose.mpi-inf.mpg.de
Faceless Person Recognition: Privacy Implications in Social Media
S. J. Oh, R. Benenson, M. Fritz and B. Schiele
Computer Vision -- ECCV 2016, 2016
Grounding of Textual Phrases in Images by Reconstruction
A. Rohrbach, M. Rohrbach, R. Hu, T. Darrell and B. Schiele
Computer Vision -- ECCV 2016, 2016
A 3D Morphable Eye Region Model for Gaze Estimation
E. Wood, T. Baltrušaitis, L.-P. Morency, P. Robinson and A. Bulling
Computer Vision -- ECCV 2016, 2016
VConv-DAE: Deep Volumetric Shape Learning Without Object Labels
A. Sharma, O. Grau and M. Fritz
Computer Vision - ECCV 2016 Workshops, 2016
Abstract
With the advent of affordable depth sensors, 3D capture becomes more and more ubiquitous and has already made its way into commercial products. Yet, capturing the geometry or complete shapes of everyday objects using scanning devices (e.g. Kinect) still comes with several challenges that result in noise or even incomplete shapes. Recent success in deep learning has shown how to learn complex shape distributions in a data-driven way from large-scale 3D CAD model collections and to utilize them for 3D processing on volumetric representations, thereby circumventing problems of topology and tessellation. Prior work has shown encouraging results on problems ranging from shape completion to recognition. We provide an analysis of such approaches and discover that training as well as the resulting representation are strongly and unnecessarily tied to the notion of object labels. Furthermore, deep learning research argues (Vincent et al., 2008) that learning representations with over-complete models is more prone to overfitting compared to approaches that learn from noisy data. Thus, we investigate a fully convolutional volumetric denoising autoencoder that is trained in an unsupervised fashion. It outperforms prior work on recognition as well as more challenging tasks like denoising and shape completion. In addition, our approach is at least two orders of magnitude faster at test time and thus provides a path to scaling up 3D deep learning.
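To make the notion of a fully convolutional volumetric denoising autoencoder concrete, the following is a minimal PyTorch sketch of such a model trained without object labels on voxel occupancy grids; the layer sizes, grid resolution and corruption scheme are illustrative assumptions, not the architecture of the paper.

    import torch
    import torch.nn as nn

    class VoxelDenoisingAE(nn.Module):
        # Minimal 3D convolutional denoising autoencoder: encodes a corrupted voxel
        # grid and decodes a cleaned one. Layer sizes are illustrative only.
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
                nn.Conv3d(16, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose3d(32, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose3d(16, 1, kernel_size=4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, voxels):                 # voxels: (batch, 1, 32, 32, 32)
            return self.decoder(self.encoder(voxels))

    # Unsupervised training signal: reconstruct the clean grid from a corrupted copy.
    model = VoxelDenoisingAE()
    clean = (torch.rand(4, 1, 32, 32, 32) > 0.5).float()
    noisy = clean * (torch.rand_like(clean) > 0.1).float()   # randomly drop voxels
    loss = nn.functional.binary_cross_entropy(model(noisy), clean)

The only supervision used here is the (clean) shape itself, which is what makes the training unsupervised in the sense of the abstract.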
Multi-Person Tracking by Multicut and Deep Matching
S. Tang, B. Andres, M. Andriluka and B. Schiele
Computer Vision - ECCV 2016 Workshops, 2016
Improved Image Boundaries for Better Video Segmentation
A. Khoreva, R. Benenson, F. Galasso, M. Hein and B. Schiele
Computer Vision -- ECCV 2016 Workshops, 2016
Abstract
Graph-based video segmentation methods rely on superpixels as starting point. While most previous work has focused on the construction of the graph edges and weights as well as solving the graph partitioning problem, this paper focuses on better superpixels for video segmentation. We demonstrate by a comparative analysis that superpixels extracted from boundaries perform best, and show that boundary estimation can be significantly improved via image and time domain cues. With superpixels generated from our better boundaries we observe consistent improvement for two video segmentation methods in two different datasets.
Eyewear Computing -- Augmenting the Human with Head-mounted Wearable Assistants
A. Bulling, O. Cakmakci, K. Kunze and J. M. Rehg (Eds.)
Schloss Dagstuhl, 2016
Attention, please!: Comparing Features for Measuring Audience Attention Towards Pervasive Displays
F. Alt, A. Bulling, L. Mecke and D. Buschek
DIS 2016, 11th ACM SIGCHI Designing Interactive Systems Conference, 2016
Sensing and Controlling Human Gaze in Daily Living Space for Human-Harmonized Information Environments
Y. Sato, Y. Sugano, A. Sugimoto, Y. Kuno and H. Koike
Human-Harmonized Information Technology, 2016
Smooth Eye Movement Interaction Using EOG Glasses
M. Dhuliawala, J. Lee, J. Shimizu, A. Bulling, K. Kunze, T. Starner and W. Woo
ICMI’16, 18th ACM International Conference on Multimodal Interaction, 2016
Xplore-M-Ego: Contextual Media Retrieval Using Natural Language Queries
S. Nag Chowdhury, M. Malinowski, A. Bulling and M. Fritz
ICMR’16, ACM International Conference on Multimedia Retrieval, 2016
Ask Your Neurons Again: Analysis of Deep Methods with Global Image Representation
M. Malinowski, M. Rohrbach and M. Fritz
IEEE Conference on Computer Vision and Pattern Recognition Workshops (VQA 2016), 2016
(Accepted/in press)
Abstract
We are addressing an open-ended question answering task about real-world images. With the help of currently available methods developed in Computer Vision and Natural Language Processing, we would like to push an architecture with a global visual representation to its limits. In our contribution, we show how to achieve competitive performance on VQA with global visual features (Residual Net) together with a carefully designed architecture.
A Joint Learning Approach for Cross Domain Age Estimation
B. Bhattarai, G. Sharma, A. Lechervy and F. Jurie
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2016), 2016
Learning to Detect Visual Grasp Affordance
H. Oh Song, M. Fritz, D. Goehring and T. Darrell
IEEE Transactions on Automation Science and Engineering, Volume 13, Number 2, 2016
Label-Embedding for Image Classification
Z. Akata, F. Perronnin, Z. Harchaoui and C. Schmid
IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 38, Number 7, 2016
3D Pictorial Structures Revisited: Multiple Human Pose Estimation
V. Belagiannis, S. Amin, M. Andriluka, B. Schiele, N. Navab and S. Ilic
IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 38, Number 10, 2016
Leveraging the Wisdom of the Crowd for Fine-Grained Recognition
J. Deng, J. Krause, M. Stark and L. Fei-Fei
IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 38, Number 4, 2016
What Makes for Effective Detection Proposals?
J. Hosang, R. Benenson, P. Dollár and B. Schiele
IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 38, Number 4, 2016
Novel Views of Objects from a Single Image
K. Rematas, C. Nguyen, T. Ritschel, M. Fritz and T. Tuytelaars
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016
Reconstructing Curvilinear Networks using Path Classifiers and Integer Programming
E. T. Turetken, F. Benmansour, B. Andres, P. Głowacki and H. Pfister
IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 38, Number 12, 2016
Combining Eye Tracking with Optimizations for Lens Astigmatism in modern wide-angle HMDs
D. Pohl, X. Zhang and A. Bulling
2016 IEEE Virtual Reality Conference (VR), 2016
Recognition of Ongoing Complex Activities by Sequence Prediction Over a Hierarchical Label Space
W. Li and M. Fritz
2016 IEEE Winter Conference on Applications of Computer Vision (WACV 2016), 2016
Eyewear Computers for Human-Computer Interaction
A. Bulling and K. Kunze
Interactions, Volume 23, Number 3, 2016
Demo hour
H. Jeong, D. Saakes, U. Lee, A. Esteves, E. Velloso, A. Bulling, K. Masai, Y. Sugiura, M. Ogata, K. Kunze, M. Inami, M. Sugimoto, A. Rathnayake and T. Dias
Interactions, Volume 23, Number 1, 2016
Recognizing Fine-grained and Composite Activities Using Hand-centric Features and Script Data
M. Rohrbach, A. Rohrbach, M. Regneri, S. Amin, M. Andriluka, M. Pinkal and B. Schiele
International Journal of Computer Vision, Volume 119, Number 3, 2016
Pattern Recognition
B. Rosenhahn and B. Andres (Eds.)
Springer, 2016
Pupil Detection for Head-mounted Eye Tracking in the Wild: An Evaluation of the State of the Art
W. Fuhl, M. Tonsen, A. Bulling and E. Kasneci
Machine Vision and Applications, Volume 27, Number 8, 2016
The Minimum Cost Connected Subgraph Problem in Medical Image Analysis
M. Rempfler, B. Andres and B. H. Menze
Medical Image Computing and Computer-Assisted Intervention -- MICCAI 2016, 2016
Demo: I-Pic: A Platform for Privacy-Compliant Image Capture
P. Aditya, R. Sen, P. Druschel, S. J. Oh, R. Benenson, M. Fritz, B. Schiele, B. Bhattacharjee and T. T. Wu
MobiSys’16, 4th Annual International Conference on Mobile Systems, Applications, and Services, 2016
I-Pic: A Platform for Privacy-Compliant Image Capture
P. Aditya, R. Sen, P. Druschel, S. J. Oh, R. Benenson, M. Fritz, B. Schiele, B. Bhattacharjee and T. T. Wu
MobiSys’16, 4th Annual International Conference on Mobile Systems, Applications, and Services, 2016
Long Term Boundary Extrapolation for Deterministic Motion
A. Bhattacharyya, M. Malinowski and M. Fritz
NIPS Workshop on Intuitive Physics, 2016
A Convnet for Non-maximum Suppression
J. Hosang, R. Benenson and B. Schiele
Pattern Recognition (GCPR 2016), 2016
Abstract
Non-maximum suppression (NMS) is used in virtually all state-of-the-art object detection pipelines. While essential object detection ingredients such as features, classifiers, and proposal methods have been extensively researched, surprisingly little work has aimed to systematically address NMS. The de-facto standard for NMS is based on greedy clustering with a fixed distance threshold, which forces a trade-off between recall and precision. We propose a convnet designed to perform NMS of a given set of detections. We report experiments on a synthetic setup, and results on crowded pedestrian detection scenes. Our approach overcomes the intrinsic limitations of greedy NMS, obtaining better recall and precision.
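For reference, the greedy NMS baseline described above can be written in a few lines; the following is a minimal NumPy sketch (illustrative, not the authors' code), in which the fixed iou_threshold is exactly the parameter that forces the recall/precision trade-off mentioned in the abstract.

    import numpy as np

    def iou(box, boxes):
        # Intersection-over-union of one box [x1, y1, x2, y2] with each row of `boxes`.
        x1 = np.maximum(box[0], boxes[:, 0])
        y1 = np.maximum(box[1], boxes[:, 1])
        x2 = np.minimum(box[2], boxes[:, 2])
        y2 = np.minimum(box[3], boxes[:, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
        return inter / (area(box) + area(boxes) - inter)

    def greedy_nms(boxes, scores, iou_threshold=0.5):
        # Keep the highest-scoring box, drop every box that overlaps it by more
        # than the threshold, and repeat on the remaining detections.
        order = np.argsort(scores)[::-1]
        keep = []
        while order.size > 0:
            best = order[0]
            keep.append(int(best))
            rest = order[1:]
            order = rest[iou(boxes[best], boxes[rest]) <= iou_threshold]
        return keep

    boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], dtype=float)
    scores = np.array([0.9, 0.8, 0.7])
    print(greedy_nms(boxes, scores))  # [0, 2]: box 1 is suppressed by the overlapping box 0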
Learning to Select Long-Track Features for Structure-From-Motion and Visual SLAM
J. Scheer, M. Fritz and O. Grau
Pattern Recognition (GCPR 2016), 2016
Convexification of Learning from Constraints
I. Shcherbatyi and B. Andres
Pattern Recognition (GCPR 2016), 2016
Special Issue Introduction
D. J. Cook, A. Bulling and Z. Yu
Pervasive and Mobile Computing (Proc. PerCom 2015), Volume 26, 2016
Prediction of Gaze Estimation Error for Error-Aware Gaze-Based Interfaces
M. Barz, F. Daiber and A. Bulling
Proceedings ETRA 2016, 2016
3D Gaze Estimation from 2D Pupil Positions on Monocular Head-Mounted Eye Trackers
M. Mansouryar, J. Steil, Y. Sugano and A. Bulling
Proceedings ETRA 2016, 2016
Gaussian Processes as an Alternative to Polynomial Gaze Estimation Functions
L. Sesma-Sanchez, Y. Zhang, H. Gellersen and A. Bulling
Proceedings ETRA 2016, 2016
Labelled Pupils in the Wild: A Dataset for Studying Pupil Detection in Unconstrained Environments
M. Tonsen, X. Zhang, Y. Sugano and A. Bulling
Proceedings ETRA 2016, 2016
Learning an Appearance-based Gaze Estimator from One Million Synthesised Images
E. Wood, T. Baltrušaitis, L.-P. Morency, P. Robinson and A. Bulling
Proceedings ETRA 2016, 2016
Long-term Memorability of Cued-Recall Graphical Passwords with Saliency Masks
F. Alt, M. Mikusz, S. Schneegass and A. Bulling
Proceedings of the 15th International Conference on Mobile and Ubiquitous Multimedia (MUM 2016), 2016
EyeVote in the Wild: Do Users Bother Correcting System Errors on Public Displays?
M. Khamis, L. Trotter, M. Tessman, C. Dannhart, A. Bulling and F. Alt
Proceedings of the 15th International Conference on Mobile and Ubiquitous Multimedia (MUM 2016), 2016
Generative Adversarial Text to Image Synthesis
S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele and H. Lee
Proceedings of the 33rd International Conference on Machine Learning (ICML 2016), 2016
Mean Box Pooling: A Rich Image Representation and Output Embedding for the Visual Madlibs Task
A. Mokarian Forooshani, M. Malinowski and M. Fritz
Proceedings of the British Machine Vision Conference (BMVC 2016), 2016
Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding
A. Fukui, D. H. Park, D. Yang, A. Rohrbach, T. Darrell and M. Rohrbach
Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2016), 2016
Three-Point Interaction: Combining Bi-manual Direct Touch with Gaze
A. L. Simeone, A. Bulling, J. Alexander and H. Gellersen
Proceedings of the 2016 International Working Conference on Advanced Visual Interfaces (AVI 2016), 2016
Commonsense in Parts: Mining Part-Whole Relations from the Web and Image Tags
N. Tandon, C. D. Hariman, J. Urbani, A. Rohrbach, M. Rohrbach and G. Weikum
Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, 2016
Concept for Using Eye Tracking in a Head-mounted Display to Adapt Rendering to the User’s Current Visual Field
D. Pohl, X. Zhang, A. Bulling and O. Grau
Proceedings VRST 2016, 2016
Visual Object Class Recognition
M. Stark, B. Schiele and A. Leonardis
Springer Handbook of Robotics, 2016
Interactive Multicut Video Segmentation
E. Levinkov, J. Tompkin, N. Bonneel, S. Kirchhoff, B. Andres and H. Pfister
The 24th Pacific Conference on Computer Graphics and Applications Short Papers Proceedings (Pacific Graphics 2016), 2016
TextPursuits: Using Text for Pursuits-based Interaction and Calibration on Public Displays
M. Khamis, O. Saltuk, A. Hang, K. Stolz, A. Bulling and F. Alt
UbiComp’16, ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2016
EyeWear 2016: First Workshop on EyeWear Computing
A. Bulling, O. Cakmakci, K. Kunze and J. M. Rehg
UbiComp’16 Adjunct, 2016
Challenges and Design Space of Gaze-enabled Public Displays
M. Khamis, F. Alt and A. Bulling
UbiComp’16 Adjunct, 2016
Solar System: Smooth Pursuit Interactions Using EOG Glasses
J. Shimizu, J. Lee, M. Dhuliawala, A. Bulling, T. Starner, W. Woo and K. Kunze
UbiComp’16 Adjunct, 2016
AggreGaze: Collective Estimation of Audience Attention on Public Displays
Y. Sugano, X. Zhang and A. Bulling
UIST 2016, 29th Annual Symposium on User Interface Software and Technology, 2016
Lifting of Multicuts
B. Andres, A. Fuksova and J.-H. Lange
Technical Report, 2016
(arXiv: 1503.03791)
Abstract
For every simple, undirected graph $G = (V, E)$, a one-to-one relation exists between the decompositions and the multicuts of $G$. A decomposition of $G$ is a partition $\Pi$ of $V$ such that, for every $U \in \Pi$, the subgraph of $G$ induced by $U$ is connected. A multicut of $G$ is an $M \subseteq E$ such that, for every (chordless) cycle $C \subseteq E$ of $G$, $|M \cap C| \neq 1$. The multicut related to a decomposition is the set of edges that straddle distinct components. The characteristic function $x \in \{0, 1\}^E$ of a multicut $M = x^{-1}(1)$ of $G$ makes explicit, for every pair $\{v,w\} \in E$ of neighboring nodes, whether $v$ and $w$ are in distinct components. In order to make explicit also for non-neighboring nodes, specifically, for all $\{v,w\} \in E'$ with $E \subseteq E' \subseteq {V \choose 2}$, whether $v$ and $w$ are in distinct components, we define a lifting of the multicuts of $G$ to multicuts of $G' = (V, E')$. We show that, if $G$ is connected, the convex hull of the characteristic functions of those multicuts of $G'$ that are lifted from $G$ is an $|E'|$-dimensional polytope in $\mathbb{R}^{E'}$. We establish properties of trivial facets of this polytope.
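The cycle condition above is equivalent to requiring that every edge of a multicut connects two distinct components of the graph with the multicut removed. A small Python sketch of this check on a toy graph, added here purely for intuition (it is not part of the paper):

    def is_multicut(num_nodes, edges, cut):
        # An edge subset `cut` is a multicut iff no cut edge has both endpoints in the
        # same connected component of the graph with the cut removed (equivalently,
        # no cycle intersects the cut in exactly one edge).
        parent = list(range(num_nodes))

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v

        cut = set(frozenset(e) for e in cut)
        for u, v in edges:                      # union the non-cut edges
            if frozenset((u, v)) not in cut:
                parent[find(u)] = find(v)
        return all(find(u) != find(v) for u, v in edges if frozenset((u, v)) in cut)

    # Toy example: in a triangle, cutting one edge is not a multicut,
    # cutting two edges (isolating one node) is.
    edges = [(0, 1), (1, 2), (0, 2)]
    print(is_multicut(3, edges, [(1, 2)]))          # False
    print(is_multicut(3, edges, [(1, 2), (0, 2)]))  # True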
Long-Term Image Boundary Extrapolation
A. Bhattacharyya, M. Malinowski, B. Schiele and M. Fritz
Technical Report, 2016
(arXiv: 1611.08841)
Abstract
Boundary prediction in images and videos has been a very active topic of research, and organizing visual information into boundaries and segments is believed to be a cornerstone of visual perception. While prior work has focused on predicting boundaries for observed frames, our work aims at predicting boundaries of future unobserved frames. This requires our model to learn about the fate of boundaries and extrapolate motion patterns. We experiment on an established real-world video segmentation dataset, which provides a testbed for this new task. We show, for the first time, spatio-temporal boundary extrapolation that, in contrast to prior work on RGB extrapolation, maintains a crisp result. Furthermore, we show long-term prediction of boundaries in situations where the motion is governed by the laws of physics. We argue that our model has, with minimalistic model assumptions, derived a notion of "intuitive physics".
Spatio-Temporal Image Boundary Extrapolation
A. Bhattacharyya, M. Malinowski and M. Fritz
Technical Report, 2016
(arXiv: 1605.07363)
Abstract
Boundary prediction in images as well as video has been a very active topic of research, and organizing visual information into boundaries and segments is believed to be a cornerstone of visual perception. While prior work has focused on predicting boundaries for observed frames, our work aims at predicting boundaries of future unobserved frames. This requires our model to learn about the fate of boundaries and extrapolate motion patterns. We experiment on an established real-world video segmentation dataset, which provides a testbed for this new task. We show for the first time spatio-temporal boundary extrapolation in this challenging scenario. Furthermore, we show long-term prediction of boundaries in situations where the motion is governed by the laws of physics. We successfully predict boundaries in a billiard scenario without any assumptions of a strong parametric model or any object notion. We argue that our model has, with minimalistic model assumptions, derived a notion of 'intuitive physics' that can be applied to novel scenes.
Bayesian Non-Parametrics for Multi-Modal Segmentation
W.-C. Chiu
PhD Thesis, Universität des Saarlandes, 2016
Natural Illumination from Multiple Materials Using Deep Learning
S. Georgoulis, K. Rematas, T. Ritschel, M. Fritz, T. Tuytelaars and L. Van Gool
Technical Report, 2016
(arXiv: 1611.09325)
Abstract
Recovering natural illumination from a single Low-Dynamic Range (LDR) image is a challenging task. To remedy this situation we exploit two properties often found in everyday images. First, images rarely show a single material, but rather multiple ones that all reflect the same illumination. However, the appearance of each material is observed only for some surface orientations, not all. Second, parts of the illumination are often directly observed in the background, without being affected by reflection. Typically, this directly observed part of the illumination is even smaller. We propose a deep Convolutional Neural Network (CNN) that combines prior knowledge about the statistics of illumination and reflectance with an input that makes explicit use of these two observations. Our approach maps multiple partial LDR material observations represented as reflectance maps and a background image to a spherical High-Dynamic Range (HDR) illumination map. For training and testing we propose a new dataset comprising synthetic and real images with multiple materials observed under the same illumination. Qualitative and quantitative evidence shows that both using multiple materials and using the background are essential to improve illumination estimation.
DeLight-Net: Decomposing Reflectance Maps into Specular Materials and Natural Illumination
S. Georgoulis, K. Rematas, T. Ritschel, M. Fritz, L. Van Gool and T. Tuytelaars
Technical Report, 2016
(arXiv: 1603.08240)
Abstract
In this paper we are extracting surface reflectance and natural environmental illumination from a reflectance map, i.e. from a single 2D image of a sphere of one material under one illumination. This is a notoriously difficult problem, yet key to various re-rendering applications. With the recent advances in estimating reflectance maps from 2D images, their further decomposition has become increasingly relevant. To this end, we propose a Convolutional Neural Network (CNN) architecture to reconstruct both material parameters (i.e. Phong) and illumination (i.e. high-resolution spherical illumination maps), trained solely on synthetic data. We demonstrate the decomposition of both synthetic and real photographs of reflectance maps, in High Dynamic Range (HDR) and, for the first time, in Low Dynamic Range (LDR) as well. Results are compared to previous approaches quantitatively as well as qualitatively in terms of re-renderings where illumination, material, view or shape are changed.
RGBD Semantic Segmentation Using Spatio-Temporal Data-Driven Pooling
Y. He, W.-C. Chiu, M. Keuper and M. Fritz
Technical Report, 2016
(arXiv: 1604.02388)
Abstract
Beyond the success in classification, neural networks have recently shown strong results on pixel-wise prediction tasks like image semantic segmentation on RGBD data. However, the commonly used deconvolutional layers for upsampling intermediate representations to the full-resolution output still show different failure modes, like imprecise segmentation boundaries and label mistakes in particular on large, weakly textured objects (e.g. fridge, whiteboard, door). We attribute these errors in part to the rigid way in which current networks aggregate information, which can be either too local (missing context) or too global (inaccurate boundaries). Therefore we propose a data-driven pooling layer that integrates with fully convolutional architectures and utilizes boundary detection from RGBD image segmentation approaches. We extend our approach to leverage region-level correspondences across images with an additional temporal pooling stage. We evaluate our approach on the NYU-Depth-V2 dataset comprised of indoor RGBD video sequences and compare it to various state-of-the-art baselines. Besides a general improvement over the state-of-the-art, our approach shows particularly good results in terms of accuracy of the predicted boundaries and in segmenting previously problematic classes.
End-to-End Eye Movement Detection Using Convolutional Neural Networks
S. Hoppe and A. Bulling
Technical Report, 2016
(arXiv: 1609.02452)
Abstract
Common computational methods for automated eye movement detection - i.e. the task of detecting different types of eye movement in a continuous stream of gaze data - are limited in that they either involve thresholding on hand-crafted signal features, require individual detectors each only detecting a single movement, or require pre-segmented data. We propose a novel approach for eye movement detection that only involves learning a single detector end-to-end, i.e. directly from the continuous gaze data stream and simultaneously for different eye movements without any manual feature crafting or segmentation. Our method is based on convolutional neural networks (CNN) that recently demonstrated superior performance in a variety of tasks in computer vision, signal processing, and machine learning. We further introduce a novel multi-participant dataset that contains scripted and free-viewing sequences of ground-truth annotated saccades, fixations, and smooth pursuits. We show that our CNN-based method outperforms state-of-the-art baselines by a large margin on this challenging dataset, thereby underlining the significant potential of this approach for holistic, robust, and accurate eye movement protocol analysis.
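As a rough illustration of the kind of model the abstract describes, here is a minimal PyTorch sketch of a 1D CNN that maps a window of (x, y) gaze samples to one of three eye-movement classes; the window length, layer sizes and class set are assumptions rather than the authors' configuration.

    import torch
    import torch.nn as nn

    class GazeEventCNN(nn.Module):
        # Toy 1D CNN over a window of 2D gaze samples that outputs scores for
        # three movement classes (fixation, saccade, smooth pursuit).
        def __init__(self, n_classes=3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(2, 16, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            self.classifier = nn.Linear(32, n_classes)

        def forward(self, x):                      # x: (batch, 2, window_length)
            return self.classifier(self.features(x).squeeze(-1))

    model = GazeEventCNN()
    windows = torch.randn(8, 2, 30)                # 8 windows of 30 gaze samples
    print(model(windows).shape)                    # torch.Size([8, 3])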
Articulated Multi-person Tracking in the Wild
E. Insafutdinov, M. Andriluka, L. Pishchulin, S. Tang, E. Levinkov, B. Andres and B. Schiele
Technical Report, 2016
(arXiv: 1612.01465)
Abstract
In this paper we propose an approach for articulated tracking of multiple people in unconstrained videos. Our starting point is a model that resembles existing architectures for single-frame pose estimation but is several orders of magnitude faster. We achieve this in two ways: (1) by simplifying and sparsifying the body-part relationship graph and leveraging recent methods for faster inference, and (2) by offloading a substantial share of computation onto a feed-forward convolutional architecture that is able to detect and associate body joints of the same person even in clutter. We use this model to generate proposals for body joint locations and formulate articulated tracking as spatio-temporal grouping of such proposals. This allows us to jointly solve the association problem for all people in the scene by propagating evidence from strong detections through time and enforcing constraints that each proposal can be assigned to one person only. We report results on the public MPII Human Pose benchmark and on a new dataset of videos with multiple people. We demonstrate that our model achieves state-of-the-art results while using only a fraction of the time and is able to leverage temporal information to improve state-of-the-art results for crowded scenes.
A Multi-cut Formulation for Joint Segmentation and Tracking of Multiple Objects
M. Keuper, S. Tang, Z. Yu, B. Andres, T. Brox and B. Schiele
Technical Report, 2016
(arXiv: 1607.06317)
Abstract
Recently, Minimum Cost Multicut Formulations have been proposed and proven to be successful in both motion trajectory segmentation and multi-target tracking scenarios. Both tasks benefit from decomposing a graphical model into an optimal number of connected components based on attractive and repulsive pairwise terms. The two tasks are formulated on different levels of granularity and, accordingly, leverage mostly local information for motion segmentation and mostly high-level information for multi-target tracking. In this paper we argue that point trajectories and their local relationships can contribute to the high-level task of multi-target tracking and also argue that high-level cues from object detection and tracking are helpful to solve motion segmentation. We propose a joint graphical model for point trajectories and object detections whose Multicuts are solutions to motion segmentation and multi-target tracking problems at once. Results on the FBMS59 motion segmentation benchmark as well as on pedestrian tracking sequences from the 2D MOT 2015 benchmark demonstrate the promise of this joint approach.
InstanceCut: from Edges to Instances with MultiCut
A. Kirillov, E. Levinkov, B. Andres, B. Savchynskyy and C. Rother
Technical Report, 2016
(arXiv: 1611.08272)
Abstract
This work addresses the task of instance-aware semantic segmentation. Our key motivation is to design a simple method with a new modelling paradigm, which therefore has a different trade-off between advantages and disadvantages compared to known approaches. Our approach, which we term InstanceCut, represents the problem by two output modalities: (i) an instance-agnostic semantic segmentation and (ii) all instance boundaries. The former is computed from a standard convolutional neural network for semantic segmentation, and the latter is derived from a new instance-aware edge detection model. To reason globally about the optimal partitioning of an image into instances, we combine these two modalities into a novel MultiCut formulation. We evaluate our approach on the challenging Cityscapes dataset. Despite the conceptual simplicity of our approach, we achieve the best result among all published methods, and perform particularly well for rare object classes.
Analysis and Optimization of Loss Functions for Multiclass, Top-k, and Multilabel Classification
M. Lapin, M. Hein and B. Schiele
Technical Report, 2016
(arXiv: 1612.03663)
Abstract
Top-k error is currently a popular performance measure on large scale image classification benchmarks such as ImageNet and Places. Despite its wide acceptance, our understanding of this metric is limited as most of the previous research is focused on its special case, the top-1 error. In this work, we explore two directions that shed more light on the top-k error. First, we provide an in-depth analysis of established and recently proposed single-label multiclass methods along with a detailed account of efficient optimization algorithms for them. Our results indicate that the softmax loss and the smooth multiclass SVM are surprisingly competitive in top-k error uniformly across all k, which can be explained by our analysis of multiclass top-k calibration. Further improvements for a specific k are possible with a number of proposed top-k loss functions. Second, we use the top-k methods to explore the transition from multiclass to multilabel learning. In particular, we find that it is possible to obtain effective multilabel classifiers on Pascal VOC using a single label per image for training, while the gap between multiclass and multilabel methods on MS COCO is more significant. Finally, our contribution of efficient algorithms for training with the considered top-k and multilabel loss functions is of independent interest.
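For readers unfamiliar with the metric under analysis, a minimal NumPy sketch of the top-k error follows (illustrative only); with k = 1 it reduces to the ordinary classification error discussed in the abstract.

    import numpy as np

    def top_k_error(scores, labels, k=5):
        # Fraction of samples whose true label is not among the k highest-scoring
        # classes; `scores` is (n_samples, n_classes), `labels` is (n_samples,).
        top_k = np.argpartition(scores, -k, axis=1)[:, -k:]
        hits = (top_k == labels[:, None]).any(axis=1)
        return 1.0 - hits.mean()

    scores = np.array([[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]])
    labels = np.array([2, 0])
    print(top_k_error(scores, labels, k=1))  # 0.5 (standard top-1 error)
    print(top_k_error(scores, labels, k=2))  # 0.0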
Visual Stability Prediction and Its Application to Manipulation
W. Li, A. Leonardis and M. Fritz
Technical Report, 2016
(arXiv: 1609.04861)
Abstract
Understanding physical phenomena is a key competence that enables humans and animals to act and interact under uncertain perception in previously unseen environments containing novel objects and their configurations. Developmental psychology has shown that such skills are acquired by infants from observations at a very early stage. In this paper, we contrast a more traditional approach of taking a model-based route with explicit 3D representations and physical simulation by an end-to-end approach that directly predicts stability from appearance. We ask the question if and to what extent and quality such a skill can directly be acquired in a data-driven way, bypassing the need for an explicit simulation at run-time. We present a learning-based approach based on simulated data that predicts stability of towers comprised of wooden blocks under different conditions and quantities related to the potential fall of the towers. We first evaluate the approach on synthetic data and compare the results to human judgments on the same stimuli. Further, we extend this approach to reason about future states of such towers, which in turn enables successful stacking.
To Fall Or Not To Fall: A Visual Approach to Physical Stability Prediction
W. Li, S. Azimi, A. Leonardis and M. Fritz
Technical Report, 2016
(arXiv: 1604.00066)
Abstract
Understanding physical phenomena is a key competence that enables humans and animals to act and interact under uncertain perception in previously unseen environments containing novel objects and their configurations. Developmental psychology has shown that such skills are acquired by infants from observations at a very early stage. In this paper, we contrast a more traditional approach of taking a model-based route with explicit 3D representations and physical simulation by an end-to-end approach that directly predicts stability and related quantities from appearance. We ask the question if and to what extent and quality such a skill can directly be acquired in a data-driven way, bypassing the need for an explicit simulation. We present a learning-based approach based on simulated data that predicts stability of towers comprised of wooden blocks under different conditions and quantities related to the potential fall of the towers. The evaluation is carried out on synthetic data and compared to human judgments on the same stimuli.
Ask Your Neurons: A Deep Learning Approach to Visual Question Answering
M. Malinowski, M. Rohrbach and M. Fritz
Technical Report, 2016
(arXiv: 1605.02697)
Abstract
We address a question answering task on real-world images that is set up as a Visual Turing Test. By combining latest advances in image representation and natural language processing, we propose Ask Your Neurons, a scalable, jointly trained, end-to-end formulation to this problem. In contrast to previous efforts, we are facing a multi-modal problem where the language output (answer) is conditioned on visual and natural language inputs (image and question). We provide additional insights into the problem by analyzing how much information is contained only in the language part, for which we provide a new human baseline. To study human consensus, which is related to the ambiguities inherent in this challenging task, we propose two novel metrics and collect additional answers which extend the original DAQUAR dataset to DAQUAR-Consensus. Moreover, we also extend our analysis to VQA, a large-scale dataset for question answering about images, where we investigate some particular design choices and show the importance of stronger visual models. At the same time, we achieve strong performance of our model that still uses a global image representation. Finally, based on such analysis, we refine our Ask Your Neurons model on DAQUAR, which also leads to a better performance on this challenging task.
Tutorial on Answering Questions about Images with Deep Learning
M. Malinowski and M. Fritz
Technical Report, 2016
(arXiv: 1610.01076)
Abstract
Together with the development of more accurate methods in Computer Vision and Natural Language Understanding, holistic architectures that answer questions about the content of real-world images have emerged. In this tutorial, we build a neural-based approach to answer questions about images. We base our tutorial on two datasets: (mostly on) DAQUAR, and (a bit on) VQA. With small tweaks, the models that we present here can achieve competitive performance on both datasets; in fact, they are among the best methods that use a combination of an LSTM with a global, full-frame CNN representation of an image. We hope that after reading this tutorial, the reader will be able to use Deep Learning frameworks, such as Keras and the Kraino framework introduced here, to build various architectures that will lead to a further performance improvement on this challenging task.
Attentive Explanations: Justifying Decisions and Pointing to the Evidence
D. H. Park, L. A. Hendricks, Z. Akata, B. Schiele, T. Darrell and M. Rohrbach
Technical Report, 2016
(arXiv: 1612.04757)
Abstract
Deep models are the de facto standard in visual decision models due to their impressive performance on a wide array of visual tasks. However, they are frequently seen as opaque and are unable to explain their decisions. In contrast, humans can justify their decisions with natural language and point to the evidence in the visual world which led to their decisions. We postulate that deep models can do this as well and propose our Pointing and Justification (PJ-X) model which can justify its decision with a sentence and point to the evidence by introspecting its decision and explanation process using an attention mechanism. Unfortunately there is no dataset available with reference explanations for visual decision making. We thus collect two datasets in two domains where it is interesting and challenging to explain decisions. First, we extend the visual question answering task to not only provide an answer but also a natural language explanation for the answer. Second, we focus on explaining human activities which is traditionally more challenging than object classification. We extensively evaluate our PJ-X model, both on the justification and pointing tasks, by comparing it to prior models and ablations using both automatic and human evaluations.
Articulated People Detection and Pose Estimation in Challenging Real World Environments
L. Pishchulin
PhD Thesis, Universität des Saarlandes, 2016
Predicting the Category and Attributes of Mental Pictures Using Deep Gaze Pooling
H. Sattar, A. Bulling and M. Fritz
Technical Report, 2016
(arXiv: 1611.10162)
Abstract
Previous work focused on predicting visual search targets from human fixations but, in the real world, a specific target is often not known, e.g. when searching for a present for a friend. In this work we instead study the problem of predicting the mental picture, i.e. only an abstract idea instead of a specific target. This task is significantly more challenging given that mental pictures of the same target category can vary widely depending on personal biases, and given that characteristic target attributes can often not be verbalised explicitly. We instead propose to use gaze information as implicit information on users' mental picture and present a novel gaze pooling layer to seamlessly integrate semantic and localized fixation information into a deep image representation. We show that we can robustly predict both the mental picture's category and its attributes on a novel dataset containing fixation data of 14 users searching for targets on a subset of the DeepFashion dataset. Our results have important implications for future search interfaces and suggest deep gaze pooling as a general-purpose approach for gaze-supported computer vision systems.
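One simple way to picture a gaze pooling layer is as an aggregation of CNN features at fixated locations. The sketch below illustrates only that general idea; the function name, fall-back behaviour and plain averaging are assumptions, not the layer proposed in the paper.

    import torch

    def gaze_pool(feature_map, fixations):
        # Average CNN features at fixated spatial cells.
        # feature_map: (C, H, W); fixations: list of (row, col) grid indices.
        if not fixations:
            return feature_map.mean(dim=(1, 2))    # fall back to global average pooling
        vectors = [feature_map[:, r, c] for r, c in fixations]
        return torch.stack(vectors).mean(dim=0)    # (C,) gaze-conditioned descriptor

    fmap = torch.randn(256, 7, 7)
    print(gaze_pool(fmap, [(2, 3), (5, 5)]).shape)  # torch.Size([256])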
Tracking Hands in Action for Gesture-based Computer Input
S. Sridhar
PhD Thesis, Universität des Saarlandes, 2016
Seeing with Humans: Gaze-Assisted Neural Image Captioning
Y. Sugano and A. Bulling
Technical Report, 2016
(arXiv: 1608.05203)
Abstract
Gaze reflects how humans process visual scenes and is therefore increasingly used in computer vision systems. Previous works demonstrated the potential of gaze for object-centric tasks, such as object localization and recognition, but it remains unclear if gaze can also be beneficial for scene-centric tasks, such as image captioning. We present a new perspective on gaze-assisted image captioning by studying the interplay between human gaze and the attention mechanism of deep neural networks. Using a public large-scale gaze dataset, we first assess the relationship between state-of-the-art object and scene recognition models, bottom-up visual saliency, and human gaze. We then propose a novel split attention model for image captioning. Our model integrates human gaze information into an attention-based long short-term memory architecture, and allows the algorithm to allocate attention selectively to both fixated and non-fixated image regions. Through evaluation on the COCO/SALICON datasets we show that our method improves image captioning performance and that gaze can complement machine attention for semantic scene understanding tasks.
A Message Passing Algorithm for the Minimum Cost Multicut Problem
P. Swoboda and B. Andres
Technical Report, 2016
(arXiv: 1612.05441)
Abstract
We propose a dual decomposition and linear program relaxation of the NP-hard minimum cost multicut problem. Unlike other polyhedral relaxations of the multicut polytope, it is amenable to efficient optimization by message passing. Like other polyhedral relaxations, it can be tightened efficiently by cutting planes. We define an algorithm that alternates between message passing and efficient separation of cycle- and odd-wheel inequalities. This algorithm is more efficient than state-of-the-art algorithms based on linear programming, including algorithms written in the framework of leading commercial software, as we show in experiments with large instances of the problem from applications in computer vision, biomedical image analysis and data mining.
It’s Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation
X. Zhang, Y. Sugano, M. Fritz and A. Bulling
Technical Report, 2016
(arXiv: 1611.08860)
Abstract
While appearance-based gaze estimation methods have traditionally exploited information encoded solely from the eyes, recent results from a multi-region method indicated that using the full face image can benefit performance. Pushing this idea further, we propose an appearance-based method that, in contrast to a long-standing line of work in computer vision, only takes the full face image as input. Our method encodes the face image using a convolutional neural network with spatial weights applied on the feature maps to flexibly suppress or enhance information in different facial regions. Through evaluation on the recent MPIIGaze and EYEDIAP gaze estimation datasets, we show that our full-face method significantly outperforms the state of the art for both 2D and 3D gaze estimation, achieving improvements of up to 14.3% on MPIIGaze and 27.7% on EYEDIAP for person-independent 3D gaze estimation. We further show that this improvement is consistent across different illumination conditions and gaze directions and particularly pronounced for the most challenging extreme head poses.
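To illustrate the notion of spatial weights applied to feature maps, the toy PyTorch layer below predicts a single-channel weight map from the features and multiplies it back onto every channel, so that some facial regions can be enhanced and others suppressed; the module and its internals are assumptions rather than the architecture used in the paper.

    import torch
    import torch.nn as nn

    class SpatialWeighting(nn.Module):
        # Predicts a (batch, 1, H, W) weight map from the input features and
        # rescales all channels with it. Illustrative only.
        def __init__(self, channels):
            super().__init__()
            self.weight_head = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.ReLU())

        def forward(self, features):               # features: (batch, C, H, W)
            return features * self.weight_head(features)

    layer = SpatialWeighting(64)
    print(layer(torch.randn(2, 64, 14, 14)).shape)  # torch.Size([2, 64, 14, 14])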
2015
On the Interplay between Spontaneous Spoken Instructions and Human Visual Behaviour in an Indoor Guidance Task
N. Koleva, S. Hoppe, M. M. Moniri, M. Staudte and A. Bulling
37th Annual Meeting of the Cognitive Science Society (COGSCI 2015), 2015
Scene Viewing and Gaze Analysis during Phonetic Segmentation Tasks
A. Khan, I. Steiner, R. G. Macdonald, Y. Sugano and A. Bulling
Abstracts of the 18th European Conference on Eye Movements (ECEM 2015), 2015
The Feet in Human-Computer Interaction: A Survey of Foot-Based Interaction
E. Velloso, D. Schmidt, J. Alexander, H. Gellersen and A. Bulling
ACM Computing Surveys, Volume 48, Number 2, 2015
Introduction to the Special Issue on Activity Recognition for Interaction
A. Bulling, U. Blanke, D. Tan, J. Rekimoto and G. Abowd
ACM Transactions on Interactive Intelligent Systems, Volume 4, Number 4, 2015
Efficient Output Kernel Learning for Multiple Tasks
P. Jawanpuria, M. Lapin, M. Hein and B. Schiele
Advances in Neural Information Processing Systems 28 (NIPS 2015), 2015
Top-k Multiclass SVM
M. Lapin, M. Hein and B. Schiele
Advances in Neural Information Processing Systems 28 (NIPS 2015), 2015
Reconstruction of Cerebral Vessel Networks from In-vivo μMRA Using Physiological Prior Knowledge of Local Vessel Geometry
M. Rempfler, M. Schneider, G. D. Ielacqua, T. Sprenger, X. Xiao, S. R. Stock, J. Klohs, G. Székely, B. Andres and B. H. Menze
Bildverarbeitung für die Medizin 2015 (BVM 2015), 2015
A Study on the Natural History of Scanning Behaviour in Patients with Visual Field Defects after Stroke
T. Loetscher, C. Chen, S. Wignall, A. Bulling, S. Hoppe, O. Churches, N. A. Thomas, M. E. R. Nicholls and A. Lee
BMC Neurology, Volume 15, 2015
Gaze+RST: Integrating Gaze and Multitouch for Remote Rotate-scale-translate Tasks
J. Turner, J. Alexander, A. Bulling and H. Gellersen
CHI 2015, 33rd Annual ACM Conference on Human Factors in Computing Systems, 2015
The Royal Corgi: Exploring Social Gaze Interaction for Immersive Gameplay
M. Vidal, R. Bismuth, A. Bulling and H. Gellersen
CHI 2015, 33rd Annual ACM Conference on Human Factors in Computing Systems, 2015
Abstract
The eyes are a rich channel for non-verbal communication in our daily interactions. We propose social gaze interaction as a game mechanic to enhance user interactions with virtual characters. We develop a game from the ground up in which characters are designed to be reactive to the player’s gaze in social ways, such as getting annoyed when the player seems distracted or changing their dialogue depending on the player’s apparent focus of attention. Results from a qualitative user study provide insights about how social gaze interaction is intuitive for users, elicits deep feelings of immersion, and highlight the players’ self-consciousness of their own eye movements through their strong reactions to the characters.
Editorial of Special Issue on Shape Representations Meet Visual Recognition
S. Savarese, M. Sun and M. Stark
Computer Vision and Image Understanding, Volume 139, 2015
Computational Modelling and Prediction of Gaze Estimation Error for Head-mounted Eye Trackers
M. Barz, A. Bulling and F. Daiber
Technical Report, 2015
Abstract
Head-mounted eye tracking has significant potential for mobile gaze-based interaction with ambient displays but current interfaces lack information about the tracker's gaze estimation error. Consequently, current interfaces do not exploit the full potential of gaze input as the inherent estimation error cannot be dealt with. The error depends on the physical properties of the display and constantly varies with changes in position and distance of the user to the display. In this work we present a computational model of gaze estimation error for head-mounted eye trackers. Our model covers the full processing pipeline for mobile gaze estimation, namely mapping of pupil positions to scene camera coordinates, marker-based display detection, and display mapping. We build the model based on a series of controlled measurements of a sample state-of-the-art monocular head-mounted eye tracker. Results show that our model can predict gaze estimation error with a root mean squared error of 17.99 px (1.96°).
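As a hedged illustration of the error quantities reported above, the snippet below computes a root mean squared gaze error in pixels and converts it to degrees of visual angle; the pixel pitch and viewing distance are hypothetical placeholder values, not those of the measured tracker or display.

```python
# Illustrative sketch: RMSE of on-screen gaze error in pixels, converted to
# degrees of visual angle for an assumed flat display geometry.
import numpy as np

def gaze_rmse_px(estimated, ground_truth):
    """Both arguments are (N, 2) arrays of on-screen gaze points in pixels."""
    d = np.linalg.norm(estimated - ground_truth, axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

def px_to_degrees(error_px, pixel_size_mm=0.25, viewing_distance_mm=600.0):
    """Convert a pixel error to visual angle; pixel size and distance are placeholders."""
    return float(np.degrees(np.arctan2(error_px * pixel_size_mm, viewing_distance_mm)))

est = np.array([[100.0, 120.0], [310.0, 205.0]])
gt = np.array([[110.0, 118.0], [300.0, 210.0]])
rmse = gaze_rmse_px(est, gt)
print(rmse, px_to_degrees(rmse))
```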
GazeProjector: Location-independent Gaze Interaction on and Across Multiple Displays
C. Lander, S. Gehring, A. Krüger, S. Boring and A. Bulling
Technical Report, 2015
Abstract
Mobile gaze-based interaction with multiple displays may occur from arbitrary positions and orientations. However, maintaining high gaze estimation accuracy still represents a significant challenge. To address this, we present GazeProjector, a system that combines accurate point-of-gaze estimation with natural feature tracking on displays to determine the mobile eye tracker’s position relative to a display. The detected eye positions are transformed onto that display, allowing for gaze-based interaction. This allows for seamless gaze estimation and interaction (1) on multiple displays of arbitrary sizes and (2) independently of the user’s position and orientation relative to the display. In a user study with 12 participants we compared GazeProjector to existing well-established methods such as visual on-screen markers and a state-of-the-art motion capture system. Our results show that our approach is robust to varying head poses, orientations, and distances to the display, while still providing high gaze estimation accuracy across multiple displays without re-calibration. The system represents an important step towards the vision of pervasive gaze-based interfaces.
An Empirical Investigation of Gaze Selection in Mid-Air Gestural 3D Manipulation
E. Velloso, J. Turner, J. Alexander, A. Bulling and H. Gellersen
Human-Computer Interaction -- INTERACT 2015, 2015
Interactions Under the Desk: A Characterisation of Foot Movements for Input in a Seated Position
E. Velloso, J. Alexander, A. Bulling and H. Gellersen
Human-Computer Interaction -- INTERACT 2015, 2015
See the Difference: Direct Pre-Image Reconstruction and Pose Estimation by Differentiating HOG
W.-C. Chiu and M. Fritz
ICCV 2015, IEEE International Conference on Computer Vision, 2015
Efficient Decomposition of Image and Mesh Graphs by Lifted Multicuts
M. Keuper, E. Levinkov, N. Bonneel, G. Lavoué, T. Brox and B. Andres
ICCV 2015, IEEE International Conference on Computer Vision, 2015
Motion Trajectory Segmentation via Minimum Cost Multicuts
M. Keuper, B. Andres and T. Brox
ICCV 2015, IEEE International Conference on Computer Vision, 2015
Ask Your Neurons: A Neural-based Approach to Answering Questions About Images
M. Malinowski, M. Rohrbach and M. Fritz
ICCV 2015, IEEE International Conference on Computer Vision, 2015
Person Recognition in Personal Photo Collections
S. J. Oh, R. Benenson, M. Fritz and B. Schiele
ICCV 2015, IEEE International Conference on Computer Vision, 2015
Scalable Nonlinear Embeddings for Semantic Category-based Image Retrieval
G. Sharma and B. Schiele
ICCV 2015, IEEE International Conference on Computer Vision, 2015
Rendering of Eyes for Eye-Shape Registration and Gaze Estimation
E. Wood, T. Baltrusaitis, X. Zhang, Y. Sugano, P. Robinson and A. Bulling
ICCV 2015, IEEE International Conference on Computer Vision, 2015
Evaluation of Output Embeddings for Fine-grained Image Classification
Z. Akata, S. Reed, D. Walter, H. Lee and B. Schiele
IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), 2015
Enriching Object Detection with 2D-3D Registration and Continuous Viewpoint Estimation
C. Choy, M. Stark and S. Savarese
IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), 2015
Efficient ConvNet-based Marker-less Motion Capture in General Scenes with a Low Number of Cameras
A. Elhayek, E. de Aguiar, J. Tompson, A. Jain, L. Pishchulin, M. Andriluka, C. Bregler, B. Schiele and C. Theobalt
IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), 2015
Taking a Deeper Look at Pedestrians
J. Hosang, M. Omran, R. Benenson and B. Schiele
IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), 2015
Image Retrieval using Scene Graphs
J. Johnson, R. Krishna, M. Stark, J. Li, M. Bernstein and L. Fei-Fei
IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), 2015
Classifier Based Graph Construction for Video Segmentation
A. Khoreva, F. Galasso, M. Hein and B. Schiele
IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), 2015
A Flexible Tensor Block Coordinate Ascent Scheme for Hypergraph Matching
Q. N. Nguyen, A. Gautier and M. Hein
IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), 2015
A Dataset for Movie Description
A. Rohrbach, M. Rohrbach, N. Tandon and B. Schiele
IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), 2015
Prediction of Search Targets from Fixations in Open-world Settings
H. Sattar, S. Müller, M. Fritz and A. Bulling
IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), 2015
Subgraph Decomposition for Multi-target Tracking
S. Tang, B. Andres, M. Andriluka and B. Schiele
IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), 2015
Filtered Channel Features for Pedestrian Detection
S. Zhang, R. Benenson and B. Schiele
IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), 2015
Appearance-based Gaze Estimation in the Wild
X. Zhang, Y. Sugano, M. Fritz and A. Bulling
IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), 2015
3D Object Class Detection in the Wild
B. Pepik, M. Stark, P. Gehler, T. Ritschel and B. Schiele
IEEE Conference on Computer Vision and Pattern Recognition Workshops (3DSI 2015), 2015
Joint Segmentation and Activity Discovery using Semantic and Temporal Priors
J. Seiter, W.-C. Chiu, M. Fritz, O. Amft and G. Tröster
IEEE International Conference on Pervasive Computing and Communication (PERCOM 2015), 2015
Teaching Robots the Use of Human Tools from Demonstration with Non-dexterous End-effectors
W. Li and M. Fritz
2015 IEEE-RAS International Conference on Humanoid Robots (HUMANOIDS 2015), 2015
GyroPen: Gyroscopes for Pen-Input with Mobile Phones
T. Deselaers, D. Keysers, J. Hosang and H. Rowley
IEEE Transactions on Human-Machine Systems, Volume 45, Number 2, 2015
Appearance-based Gaze Estimation with Online Calibration from Mouse Operations
Y. Sugano, Y. Matsushita, Y. Sato and H. Koike
IEEE Transactions on Human-Machine Systems, Volume 45, Number 6, 2015
Gaze Estimation From Eye Appearance: A Head Pose-free Method via Eye Image Synthesis
F. Lu, Y. Sugano, T. Okabe and Y. Sato
IEEE Transactions on Image Processing, Volume 24, Number 11, 2015
Detecting Surgical Tools by Modelling Local Appearance and Global Shape
D. Bouget, R. Benenson, M. Omran, L. Riffaud, B. Schiele and P. Jannin
IEEE Transactions on Medical Imaging, Volume 34, Number 12, 2015
Multi-view and 3D Deformable Part Models
B. Pepik, M. Stark, P. Gehler and B. Schiele
IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 37, Number 11, 2015
Emotion Recognition from Embedded Bodily Expressions and Speech During Dyadic Interactions
P. Müller, S. Amin, P. Verma, M. Andriluka and A. Bulling
International Conference on Affective Computing and Intelligent Interaction (ACII 2015), 2015
A Comparative Study of Modern Inference Techniques for Structured Discrete Energy Minimization Problems
J. H. Kappes, B. Andres, F. A. Hamprecht, C. Schnörr, S. Nowozin, D. Batra, S. Kim, B. X. Kausler, T. Kröger, J. Lellmann, N. Komodakis, B. Savchynskyy and C. Rother
International Journal of Computer Vision, Volume 115, Number 2, 2015
Abstract
Szeliski et al. published an influential study in 2006 on energy minimization methods for Markov Random Fields (MRF). This study provided valuable insights in choosing the best optimization technique for certain classes of problems. While these insights remain generally useful today, the phenomenal success of random field models means that the kinds of inference problems that have to be solved changed significantly. Specifically, the models today often include higher order interactions, flexible connectivity structures, large label-spaces of different cardinalities, or learned energy tables. To reflect these changes, we provide a modernized and enlarged study. We present an empirical comparison of 32 state-of-the-art optimization techniques on a corpus of 2,453 energy minimization instances from diverse applications in computer vision. To ensure reproducibility, we evaluate all methods in the OpenGM 2 framework and report extensive results regarding runtime and solution quality. Key insights from our study agree with the results of Szeliski et al. for the types of models they studied. However, on new and challenging types of models our findings disagree and suggest that polyhedral methods and integer programming solvers are competitive in terms of runtime and solution quality over a large range of model types.
Towards Scene Understanding with Detailed 3D Object Representations
Z. Zia, M. Stark and K. Schindler
International Journal of Computer Vision, Volume 112, Number 2, 2015
Walking Reduces Spatial Neglect
T. Loetscher, C. Chen, S. Hoppe, A. Bulling, S. Wignall, C. Owen, N. Thomas and A. Lee
Journal of the International Neuropsychological Society, 2015
Bridging the Gap Between Synthetic and Real Data
M. Fritz
Machine Learning with Interdependent and Non-Identically Distributed Data, 2015
Reconstructing Cerebrovascular Networks under Local Physiological Constraints by Integer Programming
M. Rempfler, M. Schneider, G. D. Ielacqua, X. Xiao, S. R. Stock, J. Klohs, G. Székely, B. Andres and B. H. Menze
Medical Image Analysis, Volume 25, Number 1, 2015
Graphical Passwords in the Wild: Understanding How Users Choose Pictures and Passwords in Image-based Authentication Schemes
F. Alt, S. Schneegass, A. Shirazi, M. Hassib and A. Bulling
MobileHCI’15, 17th International Conference on Human-Computer Interaction with Mobile Devices and Services, 2015
What is Holding Back Convnets for Detection?
B. Pepik, R. Benenson, T. Ritschel and B. Schiele
Pattern Recognition (GCPR 2015), 2015
The Long-short Story of Movie Description
A. Rohrbach, M. Rohrbach and B. Schiele
Pattern Recognition (GCPR 2015), 2015
Eye Tracking for Public Displays in the Wild
Y. Zhang, M. K. Chong, A. Bulling and H. Gellersen
Personal and Ubiquitous Computing, Volume 19, Number 5, 2015
The Cityscapes Dataset
M. Cordts, M. Omran, S. Ramos, T. Scharwächter, M. Enzweiler, R. Benenson, U. Franke, S. Roth and B. Schiele
The Future of Datasets in Vision 2015 (CVPR 2015 Workshop), 2015
Latent Max-margin Metric Learning for Comparing Video Face Tubes
G. Sharma and P. Pérez
The IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2015), 2015
Hard to Cheat: A Turing Test based on Answering Questions about Images
M. Malinowski and M. Fritz
Twenty-Ninth AAAI Conference on Artificial Intelligence W6, Beyond the Turing Test (AAAI 2015 W6), 2015
(arXiv: 1501.03302, Accepted/in press)
Abstract
Progress in language and image understanding by machines has sparked the interest of the research community in more open-ended, holistic tasks, and refueled an old AI dream of building intelligent machines. We discuss a few prominent challenges that characterize such holistic tasks and argue for "question answering about images" as a particularly appealing instance of such a holistic task. In particular, we point out that it is a version of a Turing Test that is likely to be more robust to over-interpretations and contrast it with tasks like grounding and generation of descriptions. Finally, we discuss tools to measure progress in this field.
Discovery of Everyday Human Activities From Long-Term Visual Behaviour Using Topic Models
J. Steil and A. Bulling
UbiComp 2015, ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2015
Analyzing Visual Attention During Whole Body Interaction with Public Displays
R. Walter, A. Bulling, D. Lindbauer, M. Schuessler and J. Müller
UbiComp 2015, ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2015
Human Visual Behaviour for Collaborative Human-Machine Interaction
A. Bulling
UbiComp & ISWC’15, ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2015
Orbits: Enabling Gaze Interaction in Smart Watches Using Moving Targets
A. Esteves, E. Velloso, A. Bulling and H. Gellersen
UbiComp & ISWC’15, ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2015
Recognition of Curiosity Using Eye Movement Analysis
S. Hoppe, T. Loetscher, S. Morey and A. Bulling
UbiComp & ISWC’15, ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2015
Tackling Challenges of Interactive Public Displays Using Gaze
M. Khamis, A. Bulling and F. Alt
UbiComp & ISWC’15, ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2015
A Field Study on Spontaneous Gaze-based Interaction with a Public Display using Pursuits
M. Khamis, F. Alt and A. Bulling
UbiComp & ISWC’15, ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2015
GravitySpot: Guiding Users in Front of Public Displays Using On-Screen Visual Cues
F. Alt, A. Bulling, G. Gravanis and D. Buschek
UIST’15, 28th Annual ACM Symposium on User Interface Software and Technology, 2015
Orbits: Gaze Interaction for Smart Watches using Smooth Pursuit Eye Movements
A. Esteves, E. Velloso, A. Bulling and H. Gellersen
UIST’15, 28th Annual ACM Symposium on User Interface Software and Technology, 2015
GazeProjector: Accurate Gaze Estimation and Seamless Gaze Interaction Across Multiple Displays
C. Lander, S. Gehring, A. Krüger, S. Boring and A. Bulling
UIST’15, 28th Annual ACM Symposium on User Interface Software and Technology, 2015
Self-calibrating Head-mounted Eye Trackers Using Egocentric Visual Saliency
Y. Sugano and A. Bulling
UIST’15, 28th Annual ACM Symposium on User Interface Software and Technology, 2015
What Makes for Effective Detection Proposals?
J. Hosang, R. Benenson, P. Dollár and B. Schiele
Technical Report, 2015
(arXiv: 1502.05082)
Abstract
Current top performing object detectors employ detection proposals to guide the search for objects, thereby avoiding exhaustive sliding window search across images. Despite the popularity and widespread use of detection proposals, it is unclear which trade-offs are made when using them during object detection. We provide an in-depth analysis of twelve proposal methods along with four baselines regarding proposal repeatability, ground truth annotation recall on PASCAL and ImageNet, and impact on DPM and R-CNN detection performance. Our analysis shows that for object detection improving proposal localisation accuracy is as important as improving recall. We introduce a novel metric, the average recall (AR), which rewards both high recall and good localisation and correlates surprisingly well with detector performance. Our findings show common strengths and weaknesses of existing methods, and provide insights and metrics for selecting and tuning proposal methods.
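A minimal sketch of the average recall idea described above, assuming axis-aligned boxes given as (x1, y1, x2, y2); it is an illustrative re-implementation, not the authors' released evaluation code.

```python
# Illustrative sketch: average recall (AR) of detection proposals, averaged over
# IoU thresholds from 0.5 to 1.0, rewarding both high recall and good localisation.
import numpy as np

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def average_recall(proposals, gt_boxes, thresholds=np.linspace(0.5, 1.0, 11)):
    """Fraction of ground-truth boxes covered by some proposal, averaged over IoU thresholds."""
    best = [max((iou(p, g) for p in proposals), default=0.0) for g in gt_boxes]
    recalls = [np.mean([b >= t for b in best]) for t in thresholds]
    return float(np.mean(recalls))

props = [(0, 0, 10, 10), (20, 20, 40, 45)]
gts = [(1, 1, 11, 11), (100, 100, 120, 130)]
print(average_recall(props, gts))
```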
Richer Object Representations for Object Class Detection in Challenging Real World Images
B. Pepik
PhD Thesis, Universität des Saarlandes, 2015
Building Statistical Shape Spaces for 3D Human Modeling
L. Pishchulin, S. Wuhrer, T. Helten, C. Theobalt and B. Schiele
Technical Report, 2015
(arXiv: 1503.05860)
Abstract
Statistical models of 3D human shape and pose learned from scan databases have developed into valuable tools to solve a variety of vision and graphics problems. Unfortunately, most publicly available models are of limited expressiveness as they were learned on very small databases that hardly reflect the true variety in human body shapes. In this paper, we contribute by rebuilding a widely used statistical body representation from the largest commercially available scan database, and making the resulting model available to the community (visit http://humanshape.mpi-inf.mpg.de). As preprocessing several thousand scans for learning the model is a challenge in itself, we contribute by developing robust best practice solutions for scan alignment that quantitatively lead to the best learned models. We make implementations of these preprocessing steps also publicly available. We extensively evaluate the improved accuracy and generality of our new model, and show its improved performance for human body reconstruction from sparse input data.
GazeDPM: Early Integration of Gaze Information in Deformable Part Models
I. Shcherbatyi, A. Bulling and M. Fritz
Technical Report, 2015
(arXiv: 1505.05753)
Abstract
An increasing number of works explore collaborative human-computer systems in which human gaze is used to enhance computer vision systems. For object detection these efforts were so far restricted to late integration approaches that have inherent limitations, such as increased precision without increase in recall. We propose an early integration approach in a deformable part model, which constitutes a joint formulation over gaze and visual data. We show that our GazeDPM method improves over the state-of-the-art DPM baseline by 4% and a recent method for gaze-supported object detection by 3% on the public POET dataset. Our approach additionally provides introspection of the learnt models, can reveal salient image structures, and allows us to investigate the interplay between gaze attracting and repelling areas, the importance of view-specific models, as well as viewers' personal biases in gaze patterns. We finally study important practical aspects of our approach, such as the impact of using saliency maps instead of real fixations, the impact of the number of fixations, as well as robustness to gaze estimation error.
Labeled Pupils in the Wild: A Dataset for Studying Pupil Detection in Unconstrained Environments
M. Tonsen, X. Zhang, Y. Sugano and A. Bulling
Technical Report, 2015
(arXiv: 1511.05768)
Abstract
We present labelled pupils in the wild (LPW), a novel dataset of 66 high-quality, high-speed eye region videos for the development and evaluation of pupil detection algorithms. The videos in our dataset were recorded from 22 participants in everyday locations at about 95 FPS using a state-of-the-art dark-pupil head-mounted eye tracker. They cover people with different ethnicities, a diverse set of everyday indoor and outdoor illumination environments, as well as natural gaze direction distributions. The dataset also includes participants wearing glasses, contact lenses, as well as make-up. We benchmark five state-of-the-art pupil detection algorithms on our dataset with respect to robustness and accuracy. We further study the influence of image resolution, vision aids, as well as recording location (indoor, outdoor) on pupil detection performance. Our evaluations provide valuable insights into the general pupil detection problem and allow us to identify key challenges for robust pupil detection on head-mounted eye trackers.
2014
A Tutorial on Human Activity Recognition Using Body-worn Inertial Sensors
A. Bulling, U. Blanke and B. Schiele
ACM Computing Surveys, Volume 46, Number 3, 2014
Pursuits: Spontaneous Eye-based Interaction for Dynamic Interfaces
M. Vidal, A. Bulling and H. Gellersen
ACM SIGMOBILE Mobile Computing and Communications Review, Volume 18, Number 4, 2014
Abstract
Although gaze is an attractive modality for pervasive interaction, real-world implementation of eye-based interfaces poses significant challenges. In particular, user calibration is tedious and time consuming. Pursuits is an innovative interaction technique that enables truly spontaneous interaction with eye-based interfaces. A user can simply walk up to the screen and readily interact with moving targets. Instead of being based on gaze location, Pursuits correlates eye pursuit movements with objects dynamically moving on the interface.
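For illustration, a minimal sketch of a Pursuits-style matcher that correlates a recent gaze trajectory with the trajectories of moving targets; the correlation threshold and the per-axis matching rule are assumptions, not the published implementation.

```python
# Illustrative sketch: selecting the moving target whose trajectory best
# correlates with the recorded eye trajectory (assumes targets keep moving
# in both axes so the correlations are well defined).
import numpy as np

def pursuit_select(eye_xy, target_trajectories, threshold=0.8):
    """eye_xy: (T, 2) gaze samples; target_trajectories: list of (T, 2) target paths.
    Returns the index of the best-matching target, or None if no correlation is high enough."""
    scores = []
    for traj in target_trajectories:
        cx = np.corrcoef(eye_xy[:, 0], traj[:, 0])[0, 1]   # correlation of x coordinates
        cy = np.corrcoef(eye_xy[:, 1], traj[:, 1])[0, 1]   # correlation of y coordinates
        scores.append(min(cx, cy))                         # both axes must follow the target
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None

t = np.linspace(0, 2 * np.pi, 60)
eye = np.stack([np.cos(t), np.sin(t)], axis=1)
targets = [eye + 0.01, np.stack([t, t], axis=1)]
print(pursuit_select(eye, targets))   # picks the circular target
```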
A Multi-world Approach to Question Answering about Real-world Scenes based on Uncertain Input
M. Malinowski and M. Fritz
Advances in Neural Information Processing Systems 27 (NIPS 2014), 2014
Eye Tracking and Eye-based Human–computer Interaction
P. Majaranta and A. Bulling
Advances in Physiological Computing, 2014
Ubic: Bridging the Gap Between Digital Cryptography and the Physical World
M. Simkin, A. Bulling, M. Fritz and D. Schröder
Computer Security - ESORICS 2014, 2014
Estimation of Human Body Shape and Posture under Clothing
S. Wuhrer, L. Pishchulin, A. Brunton, C. Shu and J. Lang
Computer Vision and Image Understanding, Volume 127, 2014
Face Detection Without Bells and Whistles
M. Mathias, R. Benenson, M. Pedersoli and L. Van Gool
Computer Vision - ECCV 2014, 2014
Multiple Human Pose Estimation with Temporally Consistent 3D Pictorial Structures
X. Wang, B. Schiele, P. Fua, V. Belagiannis, S. Ilic and N. Navab
Computer Vision - ECCV 2014 Workshops, 2014
First International Workshop on Video Segmentation -- Panel Discussion
T. Brox, F. Galasso, F. Li, J. M. Rehg and B. Schiele
Computer Vision -- ECCV 2014 Workshops, 2014
Ten Years of Pedestrian Detection, What Have We Learned?
R. Benenson, M. Omran, J. Hosang and B. Schiele
Computer Vision - ECCV 2014 Workshops (ECCV 2014 Workshop CVRSUAD), 2014
2D Human Pose Estimation: New Benchmark and State of the Art Analysis
M. Andriluka, L. Pishchulin, P. Gehler and B. Schiele
2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2014), 2014
3D Pictorial Structures for Multiple Human Pose Estimation
V. Belagiannis, S. Amin, M. Andriluka, B. Schiele, N. Navab and S. Ilic
2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2014), 2014
Spectral Graph Reduction for Efficient Image and Streaming Video Segmentation
F. Galasso, M. Keuper, T. Brox and B. Schiele
2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2014), 2014
Anytime Recognition of Objects and Scenes
S. Karayev, M. Fritz and T. Darrell
2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2014), 2014
Scalable Multitask Representation Learning for Scene Classification
M. Lapin, B. Schiele and M. Hein
2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2014), 2014
Image-based Synthesis and Re-Synthesis of Viewpoints Guided by 3D Models
K. Rematas, T. Ritschel, M. Fritz and T. Tuytelaars
2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2014), 2014
Are Cars Just 3D Boxes? - Jointly Estimating the 3D Shape of Multiple Objects
M. Z. Zia, M. Stark and K. Schindler
2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2014), 2014
Cognition-aware Computing
A. Bulling and T. O. Zander
IEEE Pervasive Computing, Volume 13, Number 3, 2014
3D Traffic Scene Understanding from Movable Platforms
A. Geiger, M. Lauer, C. Wojek, C. Stiller and R. Urtasun
IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 36, Number 5, 2014
Learning Human Pose Estimation Features with Convolutional Networks
A. Jain, J. Tompson, M. Andriluka, G. W. Taylor and C. Bregler
International Conference on Learning Representations 2014 (ICLR 2014), 2014
(arXiv: 1312.7302)
Abstract
This paper introduces a new architecture for human pose estimation using a multi-layer convolutional network architecture and a modified learning technique that learns low-level features and higher-level weak spatial models. Unconstrained human pose estimation is one of the hardest problems in computer vision, and our new architecture and learning schema show significant improvement over the current state-of-the-art results. The main contribution of this paper is showing, for the first time, that a specific variation of deep learning is able to outperform all existing traditional architectures on this task. The paper also discusses several lessons learned while researching alternatives, most notably, that it is possible to learn strong low-level feature detectors on features that might even just cover a few pixels in the image. Higher-level spatial models somewhat improve the overall result, but to a much lesser extent than expected. Many researchers previously argued that the kinematic structure and top-down information are crucial for this domain, but with our purely bottom-up and weak spatial model, we could improve other more complicated architectures that currently produce the best results. This mirrors what many other researchers, like those in speech recognition, object recognition, and other domains, have experienced.
Multi-view Priors for Learning Detectors from Sparse Viewpoint Data
B. Pepik, M. Stark, P. Gehler and B. Schiele
International Conference on Learning Representations 2014 (ICLR 2014), 2014
(arXiv: 1312.6095)
Abstract
While the majority of today's object class models provide only 2D bounding boxes, far richer output hypotheses are desirable including viewpoint, fine-grained category, and 3D geometry estimate. However, models trained to provide richer output require larger amounts of training data, preferably well covering the relevant aspects such as viewpoint and fine-grained categories. In this paper, we address this issue from the perspective of transfer learning, and design an object class model that explicitly leverages correlations between visual features. Specifically, our model represents prior distributions over permissible multi-view detectors in a parametric way -- the priors are learned once from training data of a source object class, and can later be used to facilitate the learning of a detector for a target class. As we show in our experiments, this transfer is not only beneficial for detectors based on basic-level category representations, but also enables the robust learning of detectors that represent classes at finer levels of granularity, where training data is typically even scarcer and more unbalanced. As a result, we report largely improved performance in simultaneous 2D object localization and viewpoint estimation on a recent dataset of challenging street scenes.
Detection and Tracking of Occluded People
S. Tang, M. Andriluka and B. Schiele
International Journal of Computer Vision, Volume 110, Number 1, 2014
Introduction to the PETMEI Special Issue
A. Bulling and R. Bednarik
Journal of Eye Movement Research, Volume 7, Number 3, 2014
Computer Vision - ECCV 2014
D. Fleet, T. Pajdla, B. Schiele and T. Tuytelaars (Eds.)
Springer, 2014
Candidate Sampling for Neuron Reconstruction from Anisotropic Electron Microscopy Volumes
J. Funke, J. N. P. Martel, S. Gerhard, B. Andres, D. C. Ciresan, A. Giusti, L. M. Gambardella, J. Schmidhuber, H. Pfister, A. Cardona and M. Cook
Medical Image Computing and Computer-Assisted Intervention -- MICCAI 2014, 2014
Extracting Vascular Networks under Physiological Constraints via Integer Programming
M. Rempfler, M. Schneider, G. D. Ielacqua, X. Xiao, S. R. Stock, J. Klohs, G. Székely, B. Andres and B. H. Menze
Medical Image Computing and Computer-Assisted Intervention -- MICCAI 2014, 2014
Learning Using Privileged Information: SVM+ and Weighted SVM
M. Lapin, M. Hein and B. Schiele
Neural Networks, Volume 53, 2014
Towards a Visual Turing Challenge
M. Malinowski and M. Fritz
NIPS 2014 Workshop on Learning Semantics, 2014
(arXiv: 1410.8027)
Abstract
As language and visual understanding by machines progresses rapidly, we are observing an increasing interest in holistic architectures that tightly interlink both modalities in a joint learning and inference process. This trend has allowed the community to progress towards more challenging and open tasks and refueled the hope of achieving the old AI dream of building machines that could pass a Turing test in open domains. In order to steadily make progress towards this goal, we realize that quantifying performance becomes increasingly difficult. Therefore we ask how we can precisely define such challenges and how we can evaluate different algorithms on these open tasks. In this paper, we summarize and discuss such challenges as well as try to give answers where appropriate options are available in the literature. We exemplify some of the solutions on a recently presented question-answering dataset based on real-world indoor images that establishes a visual Turing challenge. Finally, we argue that, despite the success of unique ground-truth annotation, we likely have to step away from carefully curated datasets and rather rely on 'social consensus' as the main driving force to create suitable benchmarks. Providing coverage in this inherently ambiguous output space is an emerging challenge that we face in order to make quantifiable progress in this area.
Expressive Models and Comprehensive Benchmark for 2D Human Pose Estimation
L. Pishchulin, M. Andriluka, P. Gehler and B. Schiele
Parts and Attributes (ECCV 2014 Workshop PA), 2014
Test-time Adaptation for 3D Human Pose Estimation
S. Amin, P. Müller, A. Bulling and M. Andriluka
Pattern Recognition (GCPR 2014), 2014
Learning Must-Link Constraints for Video Segmentation Based on Spectral Clustering
A. Khoreva, F. Galasso, M. Hein and B. Schiele
Pattern Recognition (GCPR 2014), 2014
Learning Multi-scale Representations for Material Classification
W. Li
Pattern Recognition (GCPR 2014), 2014
Fine-grained Activity Recognition with Holistic and Pose Based Features
L. Pishchulin, M. Andriluka and B. Schiele
Pattern Recognition (GCPR 2014), 2014
Coherent Multi-sentence Video Description with Variable Level of Detail
A. Rohrbach, M. Rohrbach, W. Qiu, A. Friedrich, M. Pinkal and B. Schiele
Pattern Recognition (GCPR 2014), 2014
Cross-device Gaze-supported Point-to-point Content Transfer
J. Turner, A. Bulling, J. Alexander and H. Gellersen
Proceedings ETRA 2014, 2014
EyeTab: Model-based Gaze Estimation on Unmodified Tablet Computers
E. Wood and A. Bulling
Proceedings ETRA 2014, 2014
In the Blink of an Eye - Combining Head Motion and Eye Blink Frequency for Activity Recognition with Google Glass
S. Ishimaru, K. Kunze, K. Kise, J. Weppner, A. Dengel, P. Lukowicz and A. Bulling
Proceedings of the 5th Augmented Human International Conference (AH 2014), 2014
Object Disambiguation for Augmented Reality Applications
W.-C. Chiu, G. Johnson, D. McCulley, O. Grau and M. Fritz
Proceedings of the British Machine Vision Conference (BMVC 2014), 2014
How Good are Detection Proposals, really?
J. Hosang, R. Benenson and B. Schiele
Proceedings of the British Machine Vision Conference (BMVC 2014), 2014
Abstract
Current top performing Pascal VOC object detectors employ detection proposals to guide the search for objects, thereby avoiding exhaustive sliding window search across images. Despite the popularity of detection proposals, it is unclear which trade-offs are made when using them during object detection. We provide an in-depth analysis of ten object proposal methods along with four baselines regarding ground truth annotation recall (on Pascal VOC 2007 and ImageNet 2013), repeatability, and impact on DPM detector performance. Our findings show common weaknesses of existing methods, and provide insights to choose the most adequate method for different settings.
Pupil-Canthi-Ratio: A Calibration-free Method for Tracking Horizontal Gaze Direction
Y. Zhang, A. Bulling and H. Gellersen
Proceedings of the 2014 International Working Conference on Advanced Visual Interfaces (AVI 2014), 2014
Scalable Multitask Representation Learning for Scene Classification
M. Lapin, B. Schiele and M. Hein
Scene Understanding Workshop (SUNw 2014), 2014
Learning People Detectors for Tracking in Crowded Scenes
S. Tang, M. Andriluka, A. Milan, K. Schindler, S. Roth and B. Schiele
Scene Understanding Workshop (SUNw 2014), 2014
High-Resolution 3D Layout from a Single View
M. Z. Zia, M. Stark and K. Schindler
Scene Understanding Workshop (SUNw 2014), 2014
SmudgeSafe: Geometric Image Transformations for Smudge-resistant User Authentication
S. Schneegass, F. Steimle, A. Bulling, F. Alt and A. Schmidt
UbiComp’14, ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2014
GazeHorizon: Enabling Passers-by to Interact with Public Displays by Gaze
Y. Zhang, J. Müller, M. K. Chong, A. Bulling and H. Gellersen
UbiComp’14, ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2014
Pupil: An Open Source Platform for Pervasive Eye Tracking and Mobile Gaze-based Interaction
M. Kassner, W. Patera and A. Bulling
UbiComp’14 Adjunct, 2014
Physically Grounded 3D Scene Interpretation with Detailed Object Models
M. Z. Zia, M. Stark and K. Schindler
Vision Meets Cognition Workshop: Functionality, Physics, Intentionality, and Causality (CVPR 2014 Workshop FPIC), 2014
Zero-Shot Learning with Structured Embeddings
Z. Akata, H. Lee and B. Schiele
Technical Report, 2014
(arXiv: 1409.8403)
Abstract
Despite significant recent advances in image classification, fine-grained classification remains a challenge. In the present paper, we address the zero-shot and few-shot learning scenarios as obtaining labeled data is especially difficult for fine-grained classification tasks. First, we embed state-of-the-art image descriptors in a label embedding space using side information such as attributes. We argue that learning a joint embedding space that maximizes the compatibility between the input and output embeddings is highly effective for zero/few-shot learning. We show empirically that such embeddings significantly outperform the current state-of-the-art methods on two challenging datasets (Caltech-UCSD Birds and Animals with Attributes). Second, to reduce the amount of costly manual attribute annotations, we use alternate output embeddings based on the word-vector representations, obtained from large text-corpora without any supervision. We report that such unsupervised embeddings achieve encouraging results, and lead to further improvements when combined with the supervised ones.
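As a hedged illustration of the joint embedding idea, the sketch below scores unseen classes with a bilinear compatibility F(x, y) = θ(x)^T W φ(y) between image features and class attribute embeddings and predicts the argmax; the dimensions and the random W stand in for a learned model.

```python
# Illustrative sketch: zero-shot prediction via a bilinear compatibility between
# image features and per-class output (attribute) embeddings.
import numpy as np

def zero_shot_predict(image_feat, W, class_embeddings):
    """image_feat: (d,); W: (d, e) joint embedding; class_embeddings: (num_classes, e)."""
    scores = class_embeddings @ (W.T @ image_feat)   # compatibility score per unseen class
    return int(np.argmax(scores))

d, e, num_classes = 512, 85, 10
W = np.random.randn(d, e) * 0.01                     # placeholder for the learned embedding
pred = zero_shot_predict(np.random.randn(d), W, np.random.randn(num_classes, e))
```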
Data-driven Methods for Interactive Visual Content Creation and Manipulation
A. Jain
PhD Thesis, Universität des Saarlandes, 2014
Learning Multi-scale Representations for Material Classification
W. Li and M. Fritz
Technical Report, 2014
(arXiv: 1408.2938)
Abstract
The recent progress in sparse coding and deep learning has made unsupervised feature learning methods a strong competitor to hand-crafted descriptors. In computer vision, success stories of learned features have been predominantly reported for object recognition tasks. In this paper, we investigate if and how feature learning can be used for material recognition. We propose two strategies to incorporate scale information into the learning procedure resulting in a novel multi-scale coding procedure. Our results show that our learned features for material recognition outperform hand-crafted descriptors on the FMD and the KTH-TIPS2 material classification benchmarks.
A Pooling Approach to Modelling Spatial Relations for Image Retrieval and Annotation
M. Malinowski and M. Fritz
Technical Report, 2014
(arXiv: 1411.5190)
Abstract
Over the last two decades we have witnessed strong progress on modeling visual object classes, scenes and attributes that have significantly contributed to automated image understanding. On the other hand, surprisingly little progress has been made on incorporating a spatial representation and reasoning in the inference process. In this work, we propose a pooling interpretation of spatial relations and show how it improves image retrieval and annotation tasks involving spatial language. Due to the complexity of the spatial language, we argue for a learning-based approach that acquires a representation of spatial relations by learning parameters of the pooling operator. We show improvements on previous work on two datasets and two different tasks as well as provide additional insights on a new dataset with an explicit focus on spatial relations.
Estimating Maximally Probable Constrained Relations by Mathematical Programming
L. Qu and B. Andres
Technical Report, 2014
(arXiv: 1408.0838)
Abstract
Estimating a constrained relation is a fundamental problem in machine learning. Special cases are classification (the problem of estimating a map from a set of to-be-classified elements to a set of labels), clustering (the problem of estimating an equivalence relation on a set) and ranking (the problem of estimating a linear order on a set). We contribute a family of probability measures on the set of all relations between two finite, non-empty sets, which offers a joint abstraction of multi-label classification, correlation clustering and ranking by linear ordering. Estimating (learning) a maximally probable measure, given (a training set of) related and unrelated pairs, is a convex optimization problem. Estimating (inferring) a maximally probable relation, given a measure, is a 01-linear program. It is solved in linear time for maps. It is NP-hard for equivalence relations and linear orders. Practical solutions for all three cases are shown in experiments with real data. Finally, estimating a maximally probable measure and relation jointly is posed as a mixed-integer nonlinear program. This formulation suggests a mathematical programming approach to semi-supervised learning.
Combining Visual Recognition and Computational Linguistics : Linguistic Knowledge for Visual Recognition and Natural Language Descriptions of Visual Content
M. Rohrbach
PhD Thesis, Universität des Saarlandes, 2014
Coherent Multi-sentence Video Description with Variable Level of Detail
A. Senina, M. Rohrbach, W. Qiu, A. Friedrich, S. Amin, M. Andriluka, M. Pinkal and B. Schiele
Technical Report, 2014
(arXiv: 1403.6173)
Abstract
Humans can easily describe what they see in a coherent way and at varying level of detail. However, existing approaches for automatic video description are mainly focused on single sentence generation and produce descriptions at a fixed level of detail. In this paper, we address both of these limitations: for a variable level of detail we produce coherent multi-sentence descriptions of complex videos. We follow a two-step approach where we first learn to predict a semantic representation (SR) from video and then generate natural language descriptions from the SR. To produce consistent multi-sentence descriptions, we model across-sentence consistency at the level of the SR by enforcing a consistent topic. We also contribute both to the visual recognition of objects proposing a hand-centric approach as well as to the robust generation of sentences using a word lattice. Human judges rate our multi-sentence descriptions as more readable, correct, and relevant than related work. To understand the difference between more detailed and shorter descriptions, we collect and analyze a video description corpus of three levels of detail.
2013
Where Next in Object Recognition and how much Supervision Do We Need?
S. Ebert and B. Schiele
Advanced Topics in Computer Vision, 2013
Transfer Learning in a Transductive Setting
M. Rohrbach, S. Ebert and B. Schiele
Advances in Neural Information Processing Systems 26 (NIPS 2013), 2013
Abstract
Category models for objects or activities typically rely on supervised learning requiring sufficiently large training sets. Transferring knowledge from known categories to novel classes with no or only a few labels, however, is far less researched even though it is a common scenario. In this work, we extend transfer learning with semi-supervised learning to exploit unlabeled instances of (novel) categories with no or only a few labeled instances. Our proposed approach, Propagated Semantic Transfer, combines three main ingredients. First, we transfer information from known to novel categories by incorporating external knowledge, such as linguistic or expert-specified information, e.g., by a mid-level layer of semantic attributes. Second, we exploit the manifold structure of novel classes. More specifically, we adapt a graph-based learning algorithm - so far only used for semi-supervised learning - to zero-shot and few-shot learning. Third, we improve the local neighborhood in such graph structures by replacing the raw feature-based representation with a mid-level object- or attribute-based representation. We evaluate our approach on three challenging datasets in two different applications, namely on Animals with Attributes and ImageNet for image classification and on MPII Composites for activity recognition. Our approach consistently outperforms state-of-the-art transfer and semi-supervised approaches on all datasets.
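A minimal sketch, under assumptions, of the graph-based propagation step: transferred class scores are diffused over an affinity graph of unlabeled instances while staying anchored to the initial transfer. It is not the authors' Propagated Semantic Transfer code; the affinity matrix, damping factor, and iteration count are placeholders.

```python
# Illustrative sketch: label propagation of transferred class scores over an
# affinity graph of (novel-class) instances.
import numpy as np

def propagate_labels(W, initial_scores, alpha=0.8, iterations=50):
    """W: (N, N) symmetric affinity matrix; initial_scores: (N, C) transferred scores
    (e.g. from attribute-based prediction); returns smoothed per-class scores."""
    D = np.diag(1.0 / np.maximum(W.sum(axis=1), 1e-9))
    S = D @ W                                                # row-normalized transition matrix
    F = initial_scores.copy()
    for _ in range(iterations):
        F = alpha * (S @ F) + (1 - alpha) * initial_scores   # diffuse, stay anchored to priors
    return F

W = np.array([[0.0, 1.0, 0.2], [1.0, 0.0, 0.1], [0.2, 0.1, 0.0]])
scores = np.array([[0.9, 0.1], [0.5, 0.5], [0.2, 0.8]])
print(propagate_labels(W, scores))
```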
EyeContext: Recognition of High-level Contextual Cues from Human Visual Behaviour
A. Bulling, C. Weichel and H. Gellersen
CHI 2013, The 31st Annual CHI Conference on Human Factors in Computing Systems, 2013
Abstract
Automatic annotation of life logging data is challenging. In this work we present EyeContext, a system to infer high-level contextual cues from human visual behaviour. We conduct a user study to record eye movements of four participants over a full day of their daily life, totalling 42.5 hours of eye movement data. Participants were asked to self-annotate four non-mutually exclusive cues: social (interacting with somebody vs. no interaction), cognitive (concentrated work vs. leisure), physical (physically active vs. not active), and spatial (inside vs. outside a building). We evaluate a proof-of-concept EyeContext system that combines encoding of eye movements into strings and a spectrum string kernel support vector machine (SVM) classifier. Using person-dependent training, we obtain a top performance of 85.3% precision (98.0% recall) for recognising social interactions. Our results demonstrate the large information content available in long-term human visual behaviour and open up new avenues for research on eye-based behavioural monitoring and life logging.
MotionMA: Motion Modelling and Analysis by Demonstration
E. Velloso, A. Bulling and H. Gellersen
CHI 2013, The 31st Annual CHI Conference on Human Factors in Computing Systems, 2013
SideWays: A Gaze Interface for Spontaneous Interaction with Situated Displays
Y. Zhang, A. Bulling and H. Gellersen
CHI 2013, The 31st Annual CHI Conference on Human Factors in Computing Systems, 2013
Pursuits: Eye-based Interaction with Moving Targets
M. Vidal, K. Pfeuffer, A. Bulling and H. W. Gellersen
CHI 2013 Extended Abstracts, 2013
Abstract
Eye-based interaction has commonly been based on estimation of eye gaze direction, to locate objects for interaction. We introduce Pursuits, a novel and very different eye tracking method that instead is based on following the trajectory of eye movement and comparing this with trajectories of objects in the field of view. Because the eyes naturally follow the trajectory of moving objects of interest, our method is able to detect what the user is looking at, by matching eye movement and object movement. We illustrate Pursuits with three applications that demonstrate how the method facilitates natural interaction with moving targets.
A Category-level 3D Object Dataset: Putting the Kinect to Work
A. Janoch, S. Karayev, Y. Jia, J. T. Barron, M. Fritz, K. Saenko and T. Darrell
Consumer Depth Cameras for Computer Vision, 2013
Multi-view Pictorial Structures for 3D Human Pose Estimation
S. Amin, M. Andriluka, M. Rohrbach and B. Schiele
Electronic Proceedings of the British Machine Vision Conference 2013 (BMVC 2013), 2013
Learning Smooth Pooling Regions for Visual Recognition
M. Malinowski and M. Fritz
Electronic Proceedings of the British Machine Vision Conference 2013 (BMVC 2013), 2013
Abstract
From the early HMAX model to Spatial Pyramid Matching, spatial pooling has played an important role in visual recognition pipelines. By aggregating local statistics, it equips the recognition pipelines with a certain degree of robustness to translation and deformation yet preserving spatial information. Despite its predominance in current recognition systems, we have seen little progress to fully adapt the pooling strategy to the task at hand. In this paper, we propose a flexible parameterization of the spatial pooling step and learn the pooling regions together with the classifier. We investigate a smoothness regularization term that in conjunction with an efficient learning scheme makes learning scalable. Our framework can work with both popular pooling operators: sum-pooling and max-pooling. Finally, we show benefits of our approach for object recognition tasks based on visual words and higher level event recognition tasks based on object-bank features. In both cases, we improve over the hand-crafted spatial pooling step showing the importance of its adaptation to the task.
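For illustration only, a simplified parameterization of spatial pooling in which each pooling region is a soft per-location mask that could be learned jointly with the classifier; the einsum-based sum pooling and the max-pooling variant are assumptions, not the published formulation.

```python
# Illustrative sketch: spatial pooling with parameterized (soft) pooling regions.
import numpy as np

def parameterized_pooling(local_codes, region_weights, operator="sum"):
    """local_codes: (H, W, K) local feature codes; region_weights: (R, H, W) one
    soft mask per pooling region. Returns an (R, K) pooled representation."""
    if operator == "sum":
        return np.einsum("rhw,hwk->rk", region_weights, local_codes)
    # Max pooling over locations, weighted by the soft region masks.
    weighted = region_weights[:, :, :, None] * local_codes[None]
    return weighted.reshape(region_weights.shape[0], -1, local_codes.shape[-1]).max(axis=1)

codes = np.random.rand(8, 8, 100)          # local codes on an 8x8 grid
masks = np.random.rand(4, 8, 8)            # four learnable pooling regions
pooled = parameterized_pooling(codes, masks)
```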
Segmenting Planar Superpixel Adjacency Graphs w.r.t. Non-planar Superpixel Affinity Graphs
B. Andres, J. Yarkony, B. S. Manjunath, S. Kirchhoff, E. Turetken, C. C. Fowlkes and H. Pfister
Energy Minimization Methods in Computer Vision and Pattern Recognition (EMMCVPR 2013), 2013
AutoBAP: Automatic Coding of Body Action and Posture Units from Wearable Sensors
E. Velloso, A. Bulling and H. Gellersen
2013 Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII 2013), 2013
Eye Pull, Eye Push: Moving Objects between Large Screens and Personal Devices with Gaze & Touch
J. Turner, J. Alexander, A. Bulling, D. Schmidt and H. Gellersen
Human-Computer Interaction – INTERACT 2013, 2013
Abstract
Previous work has validated the eyes and mobile input as a viable approach for pointing at and selecting out-of-reach objects. This work presents Eye Pull, Eye Push, a novel interaction concept for content transfer between public and personal devices using gaze and touch. We present three techniques that enable this interaction: Eye Cut & Paste, Eye Drag & Drop, and Eye Summon & Cast. We outline and discuss several scenarios in which these techniques can be used. In a user study we found that participants responded well to the visual feedback provided by Eye Drag & Drop during object movement. In contrast, we found that although Eye Summon & Cast significantly improved performance, participants had difficulty coordinating their hands and eyes during interaction.
A Unified Video Segmentation Benchmark: Annotation, Metrics and Analysis
F. Galasso, N. S. Nagaraja, T. Jiménez Cárdenas, T. Brox and B. Schiele
ICCV 2013, IEEE International Conference on Computer Vision, 2013
Sequential Bayesian Model Update under Structured Scene Prior for Semantic Road Scenes Labeling
E. Levinkov and M. Fritz
ICCV 2013, IEEE International Conference on Computer Vision, 2013
Handling Occlusions with Franken-classifiers
M. Mathias, R. Benenson, R. Timofte and L. van Gool
ICCV 2013, IEEE International Conference on Computer Vision, 2013
Translating Video Content to Natural Language Descriptions
M. Rohrbach, W. Qiu, I. Titov, S. Thater, M. Pinkal and B. Schiele
ICCV 2013, IEEE International Conference on Computer Vision, 2013
Abstract
Humans use rich natural language to describe and communicate visual perceptions. In order to provide natural language descriptions for visual content, this paper combines two important ingredients. First, we generate a rich semantic representation of the visual content including e.g. object and activity labels. To predict the semantic representation we learn a CRF to model the relationships between different components of the visual input. And second, we propose to formulate the generation of natural language as a machine translation problem using the semantic representation as source language and the generated sentences as target language. For this we exploit the power of a parallel corpus of videos and textual descriptions and adapt statistical machine translation to translate between our two languages. We evaluate our video descriptions on the TACoS dataset, which contains video snippets aligned with sentence descriptions. Using automatic evaluation and human judgments we show significant improvements over several baseline approaches, motivated by prior work. Our translation approach also shows improvements over related work on an image description task.
Learning People Detectors for Tracking in Crowded Scenes
S. Tang, M. Andriluka, A. Milan, K. Schindler, S. Roth and B. Schiele
ICCV 2013, IEEE International Conference on Computer Vision, 2013
Abstract
People tracking in crowded real-world scenes is challenging due to frequent and long-term occlusions. Recent tracking methods obtain the image evidence from object (people) detectors, but typically use off-the-shelf detectors and treat them as black box components. In this paper we argue that for best performance one should explicitly train people detectors on failure cases of the overall tracker instead. To that end, we first propose a novel joint people detector that combines a state-of-the-art single person detector with a detector for pairs of people, which explicitly exploits common patterns of person-person occlusions across multiple viewpoints that are a common failure case for tracking in crowded scenes. To explicitly address remaining failure cases of the tracker we explore two methods. First, we analyze typical failure cases of trackers and train a detector explicitly on those failure cases. And second, we train the detector with the people tracker in the loop, focusing on the most common tracker failures. We show that our joint multi-person detector significantly improves both detection accuracy as well as tracker performance, improving the state-of-the-art on standard benchmarks.
Seeking the Strongest Rigid Detector
R. Benenson, M. Mathias, T. Tuytelaars and L. van Gool
2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2013), 2013
Multi-class Video Co-segmentation with a Generative Multi-video Model
W.-C. Chiu and M. Fritz
2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2013), 2013
A Comparative Study of Modern Inference Techniques for Discrete Energy Minimization Problems
J. H. Kappes, B. Andres, F. A. Hamprecht, C. Schnörr, S. Nowozin, D. Batra, S. Kim, B. X. Kausler, J. Lellmann, N. Komodakis and C. Rother
2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2013), 2013
Occlusion Patterns for Object Class Detection
B. Pepik, M. Stark, P. Gehler and B. Schiele
2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2013), 2013
Poselet Conditioned Pictorial Structures
L. Pishchulin, M. Andriluka, P. Gehler and B. Schiele
2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2013), 2013
Reconstructing Loopy Curvilinear Structures Using Integer Programming
E. Turetken, F. Benmansour, B. Andres, H. Pfister and P. Fua
2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2013), 2013
Explicit Occlusion Modeling for 3D Object Class Representations
Z. Zia, M. Stark and K. Schindler
2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2013), 2013
3D Object Representations for Fine-grained Categorization
J. Krause, M. Stark, J. Deng and L. Fei-Fei
2013 IEEE International Conference on Computer Vision Workshops (ICCVW 2013), 2013
Monocular Visual Scene Understanding: Understanding Multi-object Traffic Scenes
C. Wojek, S. Walk, S. Roth, K. Schindler and B. Schiele
IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 35, Number 4, 2013
Detailed 3D Representations for Object Recognition and Modeling
Z. Zia, M. Stark, B. Schiele and K. Schindler
IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 35, Number 11, 2013
Learnable Pooling Regions for Image Classification
M. Malinowski and M. Fritz
International Conference on Learning Representations Workshop Proceedings (ICLR 2013), 2013
(arXiv: 1301.3516)
Abstract
Biologically inspired, from the early HMAX model to Spatial Pyramid Matching, pooling has played an important role in visual recognition pipelines. Spatial pooling, by grouping of local codes, equips these methods with a certain degree of robustness to translation and deformation yet preserving important spatial information. Despite the predominance of this approach in current recognition systems, we have seen little progress to fully adapt the pooling strategy to the task at hand. This paper proposes a model for learning a task-dependent pooling scheme -- including previously proposed hand-crafted pooling schemes as a particular instantiation. In our work, we investigate the role of different regularization terms showing that the smooth regularization term is crucial to achieve strong performance using the presented architecture. Finally, we propose an efficient and parallel method to train the model. Our experiments show improved performance over hand-crafted pooling schemes on the CIFAR-10 and CIFAR-100 datasets -- in particular improving the state-of-the-art to 56.29% on the latter.
Traffic Sign Recognition - How far are we from the solution?
M. Mathias, R. Timofte, R. Benenson and L. Van Gool
2013 International Joint Conference on Neural Networks (IJCNN 2013), 2013
I Know What You Are Reading - Recognition of Document Types Using Mobile Eye Tracking
K. Kunze, Y. Utsumi, S. Yuki, K. Kise and A. Bulling
ISWC’13, ACM International Symposium on Wearable Computers, 2013
Pattern Recognition
J. Weickert, M. Hein and B. Schiele (Eds.)
Springer, 2013
Signal Processing Technologies for Activity-aware Smart Textiles
D. Roggen, G. Tröster and A. Bulling
Multidisciplinary Know-How for Smart-Textiles Developers, 2013
Abstract
Garments made of smart textiles have an enormous potential for embedding sensors in close proximity to the body in an unobtrusive and comfortable manner. Combined with signal processing and pattern recognition technologies, complex high-level information about human behaviors or situations can be inferred from the sensor data. The goal of this chapter is to introduce the reader to the design of activity-aware systems that use body-worn sensors, such as those that can be made available through smart textiles. We start this chapter by emphasizing recent trends towards 'wearable' sensing and computing and we present several examples of activity-aware applications. Then we outline the role that smart textiles can play in activity-aware applications, but also the challenges that they pose. We conclude by discussing the design process followed to devise activity-aware systems: the choice of sensors, the available data processing methods, and the evaluation techniques. We discuss recent data processing methods that address the challenges resulting from the use of smart textiles.
Monocular Pose Capture with a Depth Camera Using a Sums-of-Gaussians Body Model
D. Kurmankhojayev, N. Hasler and C. Theobalt
Pattern Recognition (GCPR 2013), 2013
Dynamic Feature Selection for Classification on a Budget
S. Karayev, M. Fritz and T. Darrell
Prediction with Sequential Models (ICML 2013 Workshop), 2013
Eye Drop: An Interaction Concept for Gaze-supported Point-to-point Content Transfer
J. Turner, A. Bulling, J. Alexander and H. Gellersen
Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia (MUM 2013), 2013
Qualitative Activity Recognition of Weight Lifting Exercises
E. Velloso, A. Bulling, H. Gellersen, W. Ugulino and H. Fuks
Proceedings of the 4th Augmented Human International Conference (AH 2013), 2013
Abstract
Research on human activity recognition has traditionally focused on discriminating between different activities, i.e. predicting "which" activity was performed at a specific point in time. The quality of executing an activity, the "how (well)", has received little attention so far, even though it potentially provides useful information for a large variety of applications, such as sports training. In this work we first define quality of execution and investigate three aspects that pertain to qualitative activity recognition: the problem of specifying correct execution, the automatic and robust detection of execution mistakes, and how to provide feedback on the quality of execution to the user. We illustrate our approach on the example problem of qualitatively assessing and providing feedback on weight lifting exercises. In two user studies we evaluate a sensor-based and a model-based approach to qualitative activity recognition. Our results underline the potential of model-based assessment and the positive impact of real-time user feedback on the quality of execution.
Towards Scene Understanding with Detailed 3D Object Representations
Z. Zia, M. Stark and K. Schindler
Scene Understanding Workshop (SUNw 2013), 2013
Collecting a Large-scale Dataset of Fine-grained Cars
J. Krause, J. Deng, M. Stark and L. Fei-Fei
Second Workshop on Fine-Grained Visual Categorization (FGVC2), 2013
Modeling Instance Appearance for Recognition - Can We Do Better Than EM?
A. Chou, H. Wang, M. Stark and D. Koller
Structured Prediction : Tractability, Learning, and Inference (CVPR 2013 Workshop SPTLI), 2013
Grounding Action Descriptions in Videos
M. Regneri, M. Rohrbach, D. Wetzel, S. Thater, B. Schiele and M. Pinkal
Transactions of the Association for Computational Linguistics, Volume 1, 2013
Pursuits: Spontaneous Interaction with Displays based on Smooth Pursuit Eye Movement and Moving Targets
M. Vidal, A. Bulling and H. Gellersen
UbiComp’13, ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2013
Pursuit Calibration: Making Gaze Calibration Less Tedious and More Flexible
K. Pfeuffer, M. Vidal, J. Turner, A. Bulling and H. Gellersen
UIST’13, ACM Symposium on User Interface Software and Technology, 2013
Abstract
Eye gaze is a compelling interaction modality but requires a user calibration before interaction can commence. State-of-the-art procedures require the user to fixate on a succession of calibration markers, a task that is often experienced as difficult and tedious. We present a novel approach, pursuit calibration, that instead uses moving targets for calibration. Users naturally perform smooth pursuit eye movements when they follow a moving target, and we use the correlation of eye and target movement to detect the user's attention and to sample data for calibration. Because the method knows when the user is attending to a target, the calibration can be performed implicitly, which enables more flexible design of the calibration task. We demonstrate this in application examples and user studies, and show that pursuit calibration is tolerant to interruption, can blend naturally with applications, and is able to calibrate users without their awareness.
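A minimal sketch of the attention test the abstract describes: correlate eye and target trajectories over a sliding window and keep only the frames where both axes correlate strongly (the window length, threshold, and the function name pursuit_samples are illustrative choices, not taken from the paper):

import numpy as np

def pursuit_samples(eye, target, win=30, thresh=0.8):
    """Return frame indices usable for calibration.

    eye, target: (N, 2) arrays of gaze and on-screen target positions,
    sampled at the same rate. A frame is kept when the Pearson correlation
    between eye and target movement over the preceding window is high on
    both axes, i.e. the user is likely following the moving target."""
    keep = []
    for t in range(win, len(eye)):
        e, g = eye[t - win:t], target[t - win:t]
        r_x = np.corrcoef(e[:, 0], g[:, 0])[0, 1]
        r_y = np.corrcoef(e[:, 1], g[:, 1])[0, 1]
        if min(r_x, r_y) > thresh:
            keep.append(t)
    return np.asarray(keep)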
3rd International Workshop on Pervasive Eye Tracking and Mobile Eye-based Interaction
A. Bulling and R. Bednarik (Eds.)
petmei.org, 2013
Proceedings of the 4th Augmented Human International Conference
A. Schmidt, A. Bulling and C. Holz (Eds.)
ACM, 2013
Abstract
We are very happy to present the proceedings of the 4th Augmented Human International Conference (Augmented Human 2013). Augmented Human 2013 focuses on augmenting human capabilities through technology for increased well-being and enjoyable human experience. The conference is in cooperation with ACM SIGCHI, with its proceedings to be archived in ACM's Digital Library. With technological advances, computing has progressively moved beyond the desktop into new physical and social contexts. As physical artifacts gain new computational behaviors, they become reprogrammable, customizable, repurposable, and interoperable in rich ecologies and diverse contexts. They also become more complex, and require intense design effort in order to be functional, usable, and enjoyable. Designing such systems requires interdisciplinary thinking. Their creation must not only encompass software, electronics, and mechanics, but also the system's physical form and behavior, its social and physical milieu, and beyond.
2012
Timely Object Recognition
S. Karayev, T. Baumgartner, M. Fritz and T. Darrell
Advances in Neural Information Processing Systems 25 (NIPS 2012), 2012
Human Context: Modeling Human-Human Interactions for Monocular 3D Pose Estimation
M. Andriluka and L. Sigal
Articulated Motion and Deformable Objects (AMDO 2012), 2012
Semi-supervised Learning on a Budget: Scaling Up to Large Datasets
S. Ebert, M. Fritz and B. Schiele
Computer Vision - ACCV 2012, 2012
Video Segmentation with Superpixels
F. Galasso, R. Cipolla and B. Schiele
Computer Vision - ACCV 2012, 2012
The Pooled NBNN Kernel: Beyond Image-to-Class and Image-to-Image
K. Rematas, M. Fritz and T. Tuytelaars
Computer Vision - ACCV 2012, 2012
What Makes a Good Detector? - Structured Priors for Learning from Few Examples
T. Gao, M. Stark and D. Koller
Computer Vision - ECCV 2012, 2012
A Discrete Chain Graph Model for 3d+t Cell Tracking with High Misdetection Robustness
B. X. Kausler, S. Martin, B. Andres, M. Lindner, U. Köthe, H. Leitte, H. Wittbrodt, L. Hufnagel and F. A. Hamprecht
Computer Vision - ECCV 2012, 2012
Recognizing Materials from Virtual Examples
W. Li and M. Fritz
Computer Vision - ECCV 2012, 2012
3D2PM - 3D Deformable Part Models
B. Pepik, P. Gehler, M. Stark and B. Schiele
Computer Vision - ECCV 2012, 2012
Script Data for Attribute-based Recognition of Composite Activities
M. Rohrbach, M. Regneri, M. Andriluka, S. Amin, M. Pinkal and B. Schiele
Computer Vision - ECCV 2012, 2012
Sparselet Models for Efficient Multiclass Object Detection
H. O. Song, S. Zickler, T. Althoff, R. B. Girshick, M. Fritz, C. Geyer, P. F. Felzenszwalb and T. Darrell
Computer Vision - ECCV 2012, 2012
3D Object Detection with Multiple Kinects
W. Susanto, M. Rohrbach and B. Schiele
Computer Vision - ECCV 2012, 2012
Fine-grained Categorization for 3D Scene Understanding
M. Stark, J. Krause, B. Pepik, D. Meger, J. J. Little, B. Schiele and D. Koller
Electronic Proceedings of the British Machine Vision Conference 2012 (BMVC 2012), 2012
Detection and Tracking of Occluded People
S. Tang, M. Andriluka and B. Schiele
Electronic Proceedings of the British Machine Vision Conference 2012 (BMVC 2012), 2012
RALF: A Reinforced Active Learning Formulation for Object Class Recognition
S. Ebert, M. Fritz and B. Schiele
2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2012), 2012
Teaching 3D Geometry to Deformable Part Models
B. Pepik, M. Stark, P. Gehler and B. Schiele
2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2012), 2012
Abstract
Current object class recognition systems typically target 2D bounding box localization, encouraged by benchmark data sets such as Pascal VOC. While this seems suitable for the detection of individual objects, higher-level applications such as 3D scene understanding or 3D object tracking would benefit from more fine-grained object hypotheses incorporating 3D geometric information, such as viewpoints or the locations of individual parts. In this paper, we help narrow the representational gap between the ideal input of a scene understanding system and object class detector output by designing a detector particularly tailored towards 3D geometric reasoning. In particular, we extend the successful discriminatively trained deformable part models to include both estimates of viewpoint and 3D parts that are consistent across viewpoints. We experimentally verify that adding 3D geometric information comes at a minimal loss in 2D bounding box localization performance, while outperforming prior work in 3D viewpoint estimation and ultra-wide baseline matching.
Articulated People Detection and Pose Estimation: Reshaping the Future
L. Pishchulin, A. Jain, M. Andriluka, T. Thormaehlen and B. Schiele
2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2012), 2012
Abstract
State-of-the-art methods for human detection and pose estimation require many training samples for best performance. While large, manually collected datasets exist, the captured variations w.r.t. appearance, shape and pose are often uncontrolled, thus limiting the overall performance. In order to overcome this limitation we propose a new technique to extend an existing training set that allows us to explicitly control pose and shape variations. For this we build on recent advances in computer graphics to generate samples with realistic appearance and background while modifying body shape and pose. We validate the effectiveness of our approach on the tasks of articulated human detection and articulated pose estimation. We report close to state-of-the-art results on the popular Image Parsing human pose estimation benchmark and demonstrate superior performance for articulated human detection. In addition we define a new challenge of combined articulated human detection and pose estimation in real-world scenes.
A Database for Fine Grained Activity Detection of Cooking Activities
M. Rohrbach, S. Amin, M. Andriluka and B. Schiele
2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2012), 2012
Pedestrian Detection: An Evaluation of the State of the Art
P. Dollár, C. Wojek and B. Schiele
IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 34, Number 4, 2012
Discriminative Appearance Models for Pictorial Structures
M. Andriluka, S. Roth and B. Schiele
International Journal of Computer Vision, Volume 99, Number 3, 2012
Abstract
In this paper we consider people detection and articulated pose estimation, two closely related and challenging problems in computer vision. Conceptually, both of these problems can be addressed within the pictorial structures framework (Felzenszwalb and Huttenlocher in Int. J. Comput. Vis. 61(1):55–79, 2005; Fischler and Elschlager in IEEE Trans. Comput. C-22(1):67–92, 1973), even though previous approaches have not shown such generality. A principal difficulty for such a general approach is to model the appearance of body parts. The model has to be discriminative enough to enable reliable detection in cluttered scenes and general enough to capture highly variable appearance. Therefore, as the first important component of our approach, we propose a discriminative appearance model based on densely sampled local descriptors and AdaBoost classifiers. Secondly, we interpret the normalized margin of each classifier as likelihood in a generative model and compute marginal posteriors for each part using belief propagation. Thirdly, non-Gaussian relationships between parts are represented as Gaussians in the coordinate system of the joint between the parts. Additionally, in order to cope with shortcomings of tree-based pictorial structures models, we augment our model with additional repulsive factors in order to discourage overcounting of image evidence. We demonstrate that the combination of these components within the pictorial structures framework results in a generic model that yields state-of-the-art performance for several datasets on a variety of tasks: people detection, upper body pose estimation, and full body pose estimation.
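The components listed in the abstract fit the standard pictorial-structures factorization cited there (Felzenszwalb and Huttenlocher), with the discriminative part classifiers supplying the unary terms. Writing L = (l_1, ..., l_N) for the part configuration and D for the image evidence:

\[
p(L \mid D) \;\propto\; p(D \mid L)\, p(L)
\;\approx\; \prod_{i=1}^{N} p(d_i \mid l_i) \prod_{(i,j)\in E} p(l_i \mid l_j),
\]

where the unary likelihoods \(p(d_i \mid l_i)\) are obtained from the normalized AdaBoost margins, the pairwise terms encode part relations along the tree edges \(E\), and marginal posteriors over part locations are computed with belief propagation (sum-product) on this tree.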
A Geometric Approach To Robotic Laundry Folding
S. Miller, J. van den Berg, M. Fritz, T. Darrell, K. Goldberg and P. Abbeel
International Journal of Robotics Research, Volume 31, Number 2, 2012
Kernel Density Topic Models: Visual Topics Without Visual Words
K. Rematas, M. Fritz and T. Tuytelaars
NIPS 2012 Workshop Modern Nonparametric Methods in Machine Learning, 2012
Active Metric Learning for Object Recognition
S. Ebert, M. Fritz and B. Schiele
Pattern Recognition (DAGM-OAGM 2012), 2012
Semi-supervised Learning for Image Classification
S. Ebert
PhD Thesis, Universität des Saarlandes, 2012
Abstract
Object class recognition is an active topic in computer vision that still presents many challenges. In most approaches, this task is addressed by supervised learning algorithms that need a large quantity of labels to perform well. This leads either to small datasets (< 10,000 images) that capture only a subset of the real-world class distribution (but with a controlled and verified labeling procedure), or to large datasets that are more representative but also add more label noise. Therefore, semi-supervised learning is a promising direction. It requires only a few labels while simultaneously making use of the vast amount of images available today. We address object class recognition with semi-supervised learning. These algorithms depend on the underlying structure given by the data, the image description, and the similarity measure, as well as on the quality of the labels. This insight leads to the main research questions of this thesis: Is the structure given by labeled and unlabeled data more important than the algorithm itself? Can we improve this neighborhood structure with a better similarity metric or with more representative unlabeled data? Is there a connection between the quality of the labels and the overall performance, and how can we obtain more representative labels? We answer all these questions: we provide an extensive evaluation, we propose several graph improvements, and we introduce a novel active learning framework to obtain more representative labels.
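A minimal, runnable illustration of the graph-based semi-supervised setting the thesis studies (scikit-learn's LabelSpreading on synthetic two-moons data; the dataset, neighborhood size, and number of labels are arbitrary choices for illustration, not the thesis's experimental setup):

import numpy as np
from sklearn.datasets import make_moons
from sklearn.semi_supervised import LabelSpreading

X, y = make_moons(n_samples=300, noise=0.1, random_state=0)

y_partial = np.full(len(y), -1)                      # -1 marks unlabeled points
rng = np.random.default_rng(0)
labeled = rng.choice(len(y), size=10, replace=False)
y_partial[labeled] = y[labeled]                      # only 10 labels are revealed

# propagate labels over a k-nearest-neighbor graph of the data
model = LabelSpreading(kernel='knn', n_neighbors=7)
model.fit(X, y_partial)
print("transductive accuracy:", (model.transduction_ == y).mean())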
2011
South by South-east or Sitting at the Desk: Can Orientation be a Place?
U. Blanke, R. Rehner and B. Schiele
15th Annual International Symposium on Wearable Computers (ISWC 2011), 2011
Abstract
Location is a key piece of information for context-aware systems. While coarse-grained indoor location estimates may be obtained quite easily (e.g. based on WiFi or GSM), finer-grained estimates typically require additional infrastructure (e.g. ultrasound). This work explores an approach to estimate significant places, e.g., at the fridge, with no additional setup or infrastructure. We use a pocket-based inertial measurement sensor, which can be found in many recent phones. We analyze how the spatial layout, such as the geographic orientation of buildings and the arrangement and type of furniture, can serve as the basis for estimating typical places in a daily scenario. Initial experiments reveal that our approach can detect fine-grained locations without relying on any infrastructure or additional devices.
Recovering Intrinsic Images with a Global Sparsity Prior on Reflectance
P. Gehler, C. Rother, M. Kiefel, L. Zhang and B. Schölkopf
Advances in Neural Information Processing Systems 24 (NIPS 2011), 2011
Abstract
We address the challenging task of decoupling material properties from lighting properties given a single image. In the last two decades virtually all works have concentrated on exploiting edge information to address this problem. We take a different route by introducing a new prior on reflectance, that models reflectance values as being drawn from a sparse set of basis colors. This results in a Random Field model with global, latent variables (basis colors) and pixel-accurate output reflectance values. We show that without edge information high-quality results can be achieved, that are on par with methods exploiting this source of information. Finally, we are able to improve on state-of-the-art results by integrating edge information into our model. We believe that our new approach is an excellent starting point for future developments in this field.
Joint 3D Estimation of Objects and Scene Layout
A. Geiger, C. Wojek and R. Urtasun
Advances in Neural Information Processing Systems 24 (NIPS 2011), 2011
Disparity Statistics for Pedestrian Detection: Combining Appearance, Motion and Stereo
S. Walk, K. Schindler and B. Schiele
Computer Vision - ECCV 2010, 2011
Monocular 3D Scene Modeling and Inference: Understanding Multi-Object Traffic Scenes
C. Wojek, S. Roth, K. Schindler and B. Schiele
Computer Vision - ECCV 2010, 2011
Practical 3-D Object Detection Using Category and Instance-level Appearance Models
K. Saenko, S. Karayev, Y. Jia, A. Shyr, A. Janoch, J. Long, M. Fritz and T. Darrell
2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2011), 2011
Perception for the Manipulation of Socks
P. C. Wang, S. Miller, M. Fritz, T. Darrell and P. Abbeel
2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2011), 2011
A Probabilistic Model for Recursive Factorized Image Features
S. Karayev, M. Fritz, S. Fidler and T. Darrell
IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2011), 2011
Learning People Detection Models from Few Training Samples
L. Pishchulin, A. Jain, C. Wojek, M. Andriluka, T. Thormaehlen and B. Schiele
IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2011), 2011
Evaluating Knowledge Transfer and Zero-shot Learning in a Large-scale Setting
M. Rohrbach, M. Stark and B. Schiele
IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2011), 2011
Monocular 3D Scene Understanding with Explicit Occlusion Reasoning
C. Wojek, S. Walk, S. Roth and B. Schiele
IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2011), 2011
Warp that Smile on your Face: Optimal and Smooth Deformations for Face Recognition
T. Gass, L. Pishchulin, P. Dreuw and H. Ney
IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG 2011), 2011
A Category-level 3-D Object Dataset: Putting the Kinect to Work
A. Janoch, S. Karayev, Y. Jia, J. T. Barron, M. Fritz, K. Saenko and T. Darrell
2011 IEEE International Conference on Computer Vision (ICCV 2011), 2011
The NBNN Kernel
T. Tuytelaars, M. Fritz, K. Saenko and T. Darrell
IEEE International Conference on Computer Vision (ICCV 2011), 2011
Revisiting 3D Geometric Models for Accurate Object Shape and Pose
M. Z. Zia, M. Stark, B. Schiele and K. Schindler
IEEE International Conference on Computer Vision (ICCV 3dRR 2011), 2011
Abstract
Geometric 3D reasoning has received renewed attention recently, in the context of visual scene understanding. The level of geometric detail, however, is typically limited to qualitative or coarse-grained quantitative representations. This is linked to the fact that today's object class detectors are tuned towards robust 2D matching rather than accurate 3D pose estimation, encouraged by 2D bounding box-based benchmarks such as Pascal VOC. In this paper, we therefore revisit ideas from the early days of computer vision, namely, 3D geometric object class representations for recognition. These representations can recover geometrically far more accurate object hypotheses than just 2D bounding boxes, including relative 3D positions of object parts. In combination with recent robust techniques for shape description and inference, our approach outperforms state-of-the-art results in 3D pose estimation, while at the same time improving 2D localization. In a series of experiments, we analyze our approach in detail, and demonstrate novel applications enabled by our geometric object class representation, such as fine-grained categorization of cars according to their 3D geometry and ultra-wide baseline matching.
Visual Grasp Affordances From Appearance-based Cues
H. O. Song, M. Fritz, C. Gu and T. Darrell
2011 IEEE International Conference on Computer Vision Workshops (ICCVW 2011), 2011
I Spy with my Little Eye: Learning Optimal Filters for Cross-Modal Stereo under Projected Patterns
W.-C. Chiu, U. Blanke and M. Fritz
2011 IEEE International Conference on Computer Vision Workshops (ICCVW 2011), 2011
The Benefits of Dense Stereo for Pedestrian Detection
C. G. Keller, M. Enzweiler, M. Rohrbach, D. F. Llorca, C. Schnörr and D. M. Gavrila
IEEE Transactions on Intelligent Transportation Systems, Volume 12, Number 4, 2011
Abstract
This paper presents a novel pedestrian detection system for intelligent vehicles. We propose the use of dense stereo for both the generation of regions of interest and pedestrian classification. Dense stereo allows the dynamic estimation of camera parameters and the road profile, which, in turn, provides strong scene constraints on possible pedestrian locations. For classification, we extract spatial features (gradient orientation histograms) directly from dense depth and intensity images. Both modalities are represented in terms of individual feature spaces, in which discriminative classifiers (linear support vector machines) are learned. We refrain from the construction of a joint feature space but instead employ a fusion of depth and intensity on the classifier level. Our experiments involve challenging image data captured in complex urban environments (i.e., undulating roads and speed bumps). Our results show a performance improvement by up to a factor of 7.5 at the classification level and up to a factor of 5 at the tracking level (reduction in false alarms at constant detection rates) over a system with static scene constraints and intensity-only classification.
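A toy, runnable illustration of the classifier-level fusion described above: two linear SVMs trained on separate "intensity" and "depth" feature views, fused by a second classifier on their decision values (purely synthetic features; window extraction, HOG computation, and the ROI generation from dense stereo are omitted):

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, size=n)                        # pedestrian vs. background
X_int = rng.normal(size=(n, 32)) + 0.8 * y[:, None]   # stand-in for intensity features
X_dep = rng.normal(size=(n, 32)) + 0.5 * y[:, None]   # stand-in for depth features

tr, te = slice(0, 700), slice(700, None)
svm_int = LinearSVC().fit(X_int[tr], y[tr])           # one classifier per modality
svm_dep = LinearSVC().fit(X_dep[tr], y[tr])

def fused_inputs(idx):
    # fusion happens on the classifier level: stack the two decision values
    return np.column_stack([svm_int.decision_function(X_int[idx]),
                            svm_dep.decision_function(X_dep[idx])])

fusion = LogisticRegression().fit(fused_inputs(tr), y[tr])
print("fused accuracy:", fusion.score(fused_inputs(te), y[te]))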
Weakly Supervised Recognition of Daily Life Activities with Wearable Sensors
M. Stikic, D. Larlus, S. Ebert and B. Schiele
IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 33, Number 12, 2011
Pick your Neighborhood - Improving Labels and Neighborhood Structure for Label Propagation
S. Ebert, M. Fritz and B. Schiele
Pattern Recognition (DAGM 2011), 2011
Image Warping for Face Recognition: From Local Optimality Towards Global Optimization
L. Pishchulin, T. Gass, P. Dreuw and H. Ney
Pattern Recognition (Proc. IbPRIA 2011), 2011
The Fast and the Flexible: Extended Pseudo Two-dimensional Warping for Face Recognition
L. Pishchulin, T. Gass, P. Dreuw and H. Ney
Pattern Recognition and Image Analysis (IbPRIA 2011), 2011
Recognition of Hearing Needs From Body and Eye Movements to Improve Hearing Instruments
B. Tessendorf, A. Bulling, D. Roggen, T. Stiefmeier, M. Feilner, P. Derleth and G. Tröster
Pervasive Computing, 2011
Abstract
Hearing instruments (HIs) have emerged as true pervasive computers as they continuously adapt the hearing program to the user's context. However, current HIs are not able to distinguish different hearing needs in the same acoustic environment. In this work, we explore how information derived from body and eye movements can be used to improve the recognition of such hearing needs. We conduct an experiment to provoke an acoustic environment in which different hearing needs arise: active conversation and working while colleagues are having a conversation in a noisy office environment. We record body movements on nine body locations, eye movements using electrooculography (EOG), and sound using commercial HIs for eleven participants. Using a support vector machine (SVM) classifier and person-independent training we improve the accuracy from 77% based on sound to 92% using body movements. With a view to a future implementation in an HI we then perform a detailed analysis of the sensors attached to the head. We achieve the best accuracy of 86% using eye movements compared to 84% for head movements. Our work demonstrates the potential of additional sensor modalities for future HIs and motivates investigating the wider applicability of this approach to further hearing situations and needs.
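The person-independent training mentioned above corresponds to leave-one-participant-out cross-validation; a runnable toy with scikit-learn (synthetic features and 11 simulated participants; the feature extraction from body and eye movement signals is not shown):

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_participants, n_windows = 11, 40
X = rng.normal(size=(n_participants * n_windows, 24))     # stand-in movement features
y = rng.integers(0, 2, size=len(X))                       # hearing-need class per window
groups = np.repeat(np.arange(n_participants), n_windows)  # participant id per window

# train on 10 participants, test on the held-out one, for every participant
scores = cross_val_score(SVC(kernel='rbf'), X, y, groups=groups, cv=LeaveOneGroupOut())
print("person-independent accuracy:", scores.mean())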
Learning Output Kernels with Block Coordinate Descent
F. Dinuzzo, C. S. Ong, P. Gehler and G. Pillonetto
Proceedings of the 28th International Conference on Machine Learning (ICML 2011), 2011
Abstract
We propose a method to learn simultaneously a vector-valued function and a kernel between its components. The obtained kernel can be used both to improve learning performance and to reveal structures in the output space which may be important in their own right. Our method is based on the solution of a suitable regularization problem over a reproducing kernel Hilbert space of vector-valued functions. Although the regularized risk functional is non-convex, we show that it is invex, implying that all local minimizers are global minimizers. We derive a block-wise coordinate descent method that efficiently exploits the structure of the objective functional. Then, we empirically demonstrate that the proposed method can improve classification accuracy. Finally, we provide a visual interpretation of the learned kernel matrix for some well known datasets.
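The abstract's objective (a regularization problem over vector-valued functions with a learned output kernel) is not reproduced here; the toy below only illustrates the block-wise coordinate descent idea on a much simpler bi-convex surrogate, alternating closed-form ridge updates for two blocks of variables:

import numpy as np

def block_coordinate_descent(Y, rank=3, lam=0.1, iters=50, seed=0):
    """Minimize ||Y - A B||_F^2 + lam (||A||_F^2 + ||B||_F^2) by alternating
    closed-form updates: each block (A or B) is a ridge problem when the
    other block is held fixed."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    A = rng.normal(size=(m, rank))
    B = rng.normal(size=(rank, n))
    I = np.eye(rank)
    for _ in range(iters):
        A = Y @ B.T @ np.linalg.inv(B @ B.T + lam * I)   # update block A, B fixed
        B = np.linalg.inv(A.T @ A + lam * I) @ A.T @ Y   # update block B, A fixed
    return A, B

Y = np.random.default_rng(1).normal(size=(20, 15))
A, B = block_coordinate_descent(Y)
print("residual:", np.linalg.norm(Y - A @ B))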
Improving the Kinect by Cross-modal Stereo
W.-C. Chiu, U. Blanke and M. Fritz
Proceedings of the British Machine Vision Conference 2011 (BMVC 2011), 2011
Branch&Rank: Non-linear Object Detection
A. Lehmann, P. Gehler and L. Van Gool
Proceedings of the British Machine Vision Conference 2011 (BMVC 2011), 2011
Abstract
Branch&rank is an object detection scheme that overcomes the inherent limitation of branch&bound: the method works with arbitrary (classifier) functions, whereas tight bounds exist only for simple functions. Objects are usually detected with fewer than 100 classifier evaluations, which paves the way for using strong (and thus costly) classifiers: we utilize non-linear SVMs with RBF-χ² kernels without a cascade-like approximation. Our approach features the following key components: a ranking function that operates on sets of hypotheses, and a grouping of these sets into different tasks. Detection efficiency results from adaptively sub-dividing the object search space into decreasingly smaller sets. This is inherited from branch&bound, while the ranking function supersedes a tight bound, which is often unavailable (except for overly simple function classes). The grouping makes the system effective: it separates image classification from object recognition, yet combines them in a single, structured SVM formulation. A novel aspect of branch&rank is that a better ranking function is expected to decrease the number of classifier calls during detection. We demonstrate the algorithmic properties using the VOC'07 dataset.
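A toy sketch of the adaptive subdivision driven by a ranking score (in the paper the ranking function is learned within a structured SVM; here it is a hand-written stand-in over 1-D intervals, and the hypothesis space, scoring function, and stopping tolerance are invented for illustration):

import heapq

def score(x):
    """Stand-in for an expensive classifier evaluated at a single hypothesis."""
    return -(x - 0.63) ** 2

def rank_set(lo, hi):
    """Stand-in ranking function for the hypothesis set [lo, hi):
    here simply the classifier score at the interval centre (no bound guarantee)."""
    return score((lo + hi) / 2.0)

def branch_and_rank(lo=0.0, hi=1.0, tol=1e-3):
    heap = [(-rank_set(lo, hi), lo, hi)]             # max-heap via negated rank
    while heap:
        _, a, b = heapq.heappop(heap)                # expand the best-ranked set first
        if b - a < tol:                              # set small enough: report hypothesis
            return (a + b) / 2.0
        mid = (a + b) / 2.0
        for c, d in ((a, mid), (mid, b)):            # branch: split the set in two
            heapq.heappush(heap, (-rank_set(c, d), c, d))

print(branch_and_rank())   # converges near 0.63, where the stand-in score peaks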
Explicit Occlusion Reasoning for 3D Object Detection
D. Meger, C. Wojek, B. Schiele and J. J. Little
Proceedings of the British Machine Vision Conference 2011 (BMVC 2011), 2011
In Good Shape: Robust People Detection Based on Appearance and Shape
L. Pishchulin, A. Jain, C. Wojek, T. Thormaehlen and B. Schiele
Proceedings of the British Machine Vision Conference 2011 (BMVC 2011), 2011
Benchmark Datasets for Pose Estimation and Tracking
M. Andriluka, L. Sigal and M. Black
Visual Analysis of Humans: Looking at People, 2011
2010
Back to the Future: Learning Shape Models from 3D CAD Data
M. Stark, M. Goesele and B. Schiele
21st British Machine Vision Conference (BMVC 2010), 2010
Abstract
Recognizing 3D objects from arbitrary view points is one of the most fundamental problems in computer vision. A major challenge lies in the transition between the 3D geometry of objects and 2D representations that can be robustly matched to natural images. Most approaches thus rely on 2D natural images either as the sole source of training data for building an implicit 3D representation, or by enriching 3D models with natural image features. In this paper, we go back to the ideas from the early days of computer vision, by using 3D object models as the only source of information for building a multi-view object class detector. In particular, we use these models for learning 2D shape that can be robustly matched to 2D natural images. Our experiments confirm the validity of our approach, which outperforms current state-of-the-art techniques on a multi-view detection data set.
All for one or one for all? – Combining Heterogeneous Features for Activity Spotting
U. Blanke, M. Kreil, B. Schiele, P. Lukowicz, B. Sick and T. Gruber
2010 8th IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops 2010): 7th IEEE International Workshop on Context Modeling and Reasoning (CoMoRea 2010), 2010
Size Matters: Metric Visual Search Constraints from Monocular Metadata
M. Fritz, K. Saenko and T. Darrell
Advances in Neural Information Processing Systems 23 (NIPS 2010), 2010
Multi-Modal Learning
D. Skocaj, M. Kristan, A. Vrecko, A. Leonardis, M. Fritz, M. Stark, B. Schiele, S. Hongeng and J. L. Wyatt
Cognitive Systems, 2010
Tutor-based Learning of Visual Categories Using Different Levels of Supervision
M. Fritz, G.-J. M. Kruijff and B. Schiele
Computer Vision and Image Understanding, Volume 114, Number 5, 2010
Extracting Structures in Image Collections for Object Recognition
S. Ebert, D. Larlus and B. Schiele
Computer Vision - ECCV 2010, 2010
Abstract
Many computer vision methods rely on annotated image databases without taking advantage of the increasing number of unlabeled images available. This paper explores an alternative approach involving unsupervised structure discovery and semi-supervised learning (SSL) in image collections. Focusing on object classes, the first part of the paper contributes an extensive evaluation of state-of-the-art image representations, underlining the decisive influence of the local neighborhood structure, its direct consequences on SSL results, and the importance of developing powerful object representations. In the second part, we propose and explore promising directions to improve results by looking at the local topology between images and at feature combination strategies.
Combining Language Sources and Robust Semantic Relatedness for Attribute-Based Knowledge Transfer
M. Rohrbach, M. Stark, G. Szarvas and B. Schiele
First International Workshop on Parts and Attributes in Conjunction with ECCV 2010, 2010
Abstract
Knowledge transfer between object classes has been identified as an important tool for scalable recognition. However, determining which knowledge to transfer where remains a key challenge. While most approaches employ varying levels of human supervision, we follow the idea of mining linguistic knowledge bases to automatically infer transferable knowledge. In contrast to previous work, we explicitly aim to design robust semantic relatedness measures and to combine different language sources for attribute-based knowledge transfer. On the challenging Animals with Attributes (AwA) data set, we report largely improved attribute-based zero-shot object class recognition performance that matches the performance of human supervision.
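To make the attribute-based transfer concrete, here is a toy direct-attribute-prediction style scoring rule (the class-attribute associations below are invented; in the paper such associations are mined automatically from linguistic knowledge bases rather than written by hand):

import numpy as np

# Invented class-attribute association matrix for two unseen classes and
# four attributes (1 = class has the attribute).
class_attr = np.array([[1, 0, 1, 1],
                       [0, 1, 1, 0]])

def zero_shot_log_scores(attr_prob, eps=1e-6):
    """attr_prob: per-image outputs of the attribute classifiers (probabilities).
    Score each unseen class by the log-probability of its attribute signature,
    assuming independent attributes."""
    return (class_attr * np.log(attr_prob + eps)
            + (1 - class_attr) * np.log(1 - attr_prob + eps)).sum(axis=1)

print(zero_shot_log_scores(np.array([0.9, 0.1, 0.8, 0.7])))   # favors class 0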
Vision Based Victim Detection from Unmanned Aerial Vehicles
M. Andriluka, P. Schnitzspan, J. Meyer, S. Kohlbrecher, K. Petersen, O. von Stryk, S. Roth and B. Schiele
2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2010
Abstract
Finding injured humans is one of the primary goals of any search and rescue operation. The aim of this paper is to address the task of automatically finding people lying on the ground in images taken from the on-board camera of an unmanned aerial vehicle (UAV). In this paper we evaluate various state-of-the-art visual people detection methods in the context of vision-based victim detection from a UAV. The top-performing approaches in this comparison are those that rely on flexible part-based representations and discriminatively trained part detectors. We discuss their strengths and weaknesses and demonstrate that by combining multiple models we can increase the reliability of the system. We also demonstrate that the detection performance can be substantially improved by integrating the height and pitch information provided by on-board sensors. Jointly these improvements allow us to significantly boost the detection performance over the current de-facto standard, which provides a substantial step towards making autonomous victim detection for UAVs practical.
Monocular 3D Pose Estimation and Tracking by Detection
M. Andriluka, S. Roth and B. Schiele
2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2010), 2010
Abstract
Automatic recovery of 3D human pose from monocular image sequences is a challenging and important research topic with numerous applications. Although current methods are able to recover 3D pose for a single person in controlled environments, they are severely challenged by real-world scenarios, such as crowded street scenes. To address this problem, we propose a three-stage process building on a number of recent advances. The first stage obtains an initial estimate of the 2D articulation and viewpoint of the person from single frames. The second stage allows early data association across frames based on tracking-by-detection. These two stages successfully accumulate the available 2D image evidence into robust estimates of 2D limb positions over short image sequences (= tracklets). The third and final stage uses those tracklet-based estimates as robust image observations to reliably recover 3D pose. We demonstrate state-of-the-art performance on the HumanEva II benchmark, and also show the applicability of our approach to articulated 3D tracking in realistic street conditions.
Multi-cue Pedestrian Classification with Partial Occlusion Handling
M. Enzweiler, A. Eigenstetter, B. Schiele and D. M. Gavrila
2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2010), 2010
What helps Where - and Why? Semantic Relatedness for Knowledge Transfer
M. Rohrbach, M. Stark, G. Szarvas, I. Gurevych and B. Schiele
2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2010), 2010
Abstract
Remarkable performance has been reported to recognize single object classes. Scalability to large numbers of classes however remains an important challenge for today's recognition methods. Several authors have promoted knowledge transfer between classes as a key ingredient to address this challenge. However, in previous work the decision which knowledge to transfer has required either manual supervision or at least a few training examples limiting the scalability of these approaches. In this work we explicitly address the question of how to automatically decide which information to transfer between classes without the need of any human intervention. For this we tap into linguistic knowledge bases to provide the semantic link between sources (what) and targets (where) of knowledge transfer. We provide a rigorous experimental evaluation of different knowledge bases and state-of-the-art techniques from Natural Language Processing which goes far beyond the limited use of language in related work. We also give insights into the applicability (why) of different knowledge sources and similarity measures for knowledge transfer.
Automatic Discovery of Meaningful Object Parts with Latent CRFs
P. Schnitzspan, S. Roth and B. Schiele
2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2010), 2010
Abstract
Object recognition is challenging due to high intra-class variability caused, e.g., by articulation, viewpoint changes, and partial occlusion. Successful methods need to strike a balance between being flexible enough to model such variation and discriminative enough to detect objects in cluttered, real world scenes. Motivated by these challenges we propose a latent conditional random field (CRF) based on a flexible assembly of parts. By modeling part labels as hidden nodes and developing an EM algorithm for learning from class labels alone, this new approach enables the automatic discovery of semantically meaningful object part representations. To increase the flexibility and expressiveness of the model, we learn the pairwise structure of the underlying graphical model at the level of object part interactions. Efficient gradient-based techniques are used to estimate the structure of the domain of interest and carried forward to the multi-label or object part case. Our experiments illustrate the meaningfulness of the discovered parts and demonstrate state-of-the-art performance of the approach.
New Features and Insights for Pedestrian Detection
S. Walk, N. Majer, K. Schindler and B. Schiele
2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2010), 2010
Dead Reckoning from the Pocket - An Experimental Study
U. Steinhoff and B. Schiele
IEEE 2010 International Conference on Pervasive Computing and Communications (PerCom 2010), 2010
Towards Human Motion Capturing using Gyroscopeless Orientation Estimation
U. Blanke and B. Schiele
International Symposium on Wearable Computers 2010 (ISWC 2010), 2010
Remember and Transfer what you have Learned - Recognizing Composite Activities based on Activity Spotting
U. Blanke and B. Schiele
International Symposium on Wearable Computers 2010 (ISWC 2010), 2010
A Semantic World Model for Urban Search and Rescue Based on Heterogeneous Sensors
J. Meyer, P. Schnitzspan, S. Kohlbrecher, K. Petersen, O. Schwahn, M. Andriluka, U. Klingauf, S. Roth, B. Schiele and O. von Stryk
RoboCup 2010, 14th International RoboCup Symposium, 2010
Combining Language Sources and Robust Semantic Relatedness for Attribute-based Knowledge Transfer
M. Rohrbach, M. Stark, G. Szarvas and B. Schiele
Trends and Topics in Computer Vision (ECCV 2010 Workshops), 2010
Real-time Full-body Visual Traits Recognition from Image Sequences
C. Jung, R. Tausch and C. Wojek
VMV 2010, 2010
2004
A Model for Human Interruptability: Experimental Evaluation and Automatic Estimation from Wearable Sensors
N. Kern, S. Antifakos, B. Schiele and A. Schwaninger
Eighth International Symposium on Wearable Computers (ISWC 2004), 2004
Less Contact: Heart-rate Detection Without Even Touching the User
F. Michahelles, R. Wicki and B. Schiele
Eighth International Symposium on Wearable Computers (ISWC 2004), 2004