Andreas Bulling (Senior Researcher)

Dr. Andreas Bulling

Address
Max-Planck-Institut für Informatik
Campus E1 4
66123 Saarbrücken
Location
E1 4 - Room 628
Phone
+49 681 9325 2128
Fax
+49 681 9325 2099

Research Interests

  • Human-Computer Interaction
  • Ubiquitous Computing
  • Eye Tracking
  • Machine Learning and Pattern Recognition
  • Egocentric Computer Vision

Education

  • PhD in Information Technology and Electrical Engineering (October 2006 - June 2010)
    Swiss Federal Institute of Technology (ETH) Zurich, Switzerland
  • MSc in Computer Science (October 2001 - June 2006)
    Technical University of Karlsruhe, Germany

Short-Bio

Andreas Bulling is an Independent Research Group Leader (W2) at the Max Planck Institute for Informatics and the Cluster of Excellence on Multimodal Computing and Interaction, where he leads the Perceptual User Interfaces Group. He received his MSc (Dipl.-Inform.) in Computer Science from the Technical University of Karlsruhe (TH), Germany, in 2006, focusing on embedded systems, robotics and biomedical engineering, and holds a PhD in Information Technology and Electrical Engineering from the Swiss Federal Institute of Technology (ETH) Zurich, Switzerland. He was previously a Feodor Lynen Research Fellow and a Marie Curie Research Fellow in the Computer Laboratory at the University of Cambridge, United Kingdom, a postdoctoral research associate in the School of Computing and Communications at Lancaster University, United Kingdom, and a Junior Research Fellow at Wolfson College, Cambridge. Dr. Bulling has served as a TPC member and reviewer for major conferences and journals, as TPC co-chair of Augmented Human 2013, and as associate chair for CHI 2013 and CHI 2014.

Publications

2016
SkullConduct: Biometric User Identification on Eyewear Computers Using Bone Conduction Through the Skull
S. Schneegass, Y. Oualil and A. Bulling
CHI 2016, 34th Annual ACM Conference on Human Factors in Computing Systems, 2016
Spatio-Temporal Modeling and Prediction of Visual Attention in Graphical User Interfaces
P. Xu, Y. Sugano and A. Bulling
CHI 2016, 34th Annual ACM Conference on Human Factors in Computing Systems, 2016
GazeTouchPass: Multimodal Authentication Using Gaze and Touch on Mobile Devices
M. Khamis, F. Alt, M. Hassib, E. von Zezschwitz, R. Hasholzner and A. Bulling
CHI 2016 Extended Abstracts, 2016
On the Verge: Voluntary Convergences for Accurate and Precise Timing of Gaze Input
D. Kirst and A. Bulling
CHI 2016 Extended Abstracts, 2016
Abstract
Rotations performed with the index finger and thumb involve some of the most complex motor action among common multi-touch gestures, yet little is known about the factors affecting performance and ergonomics. This note presents results from a study where the angle, direction, diameter, and position of rotations were systematically manipulated. Subjects were asked to perform the rotations as quickly as possible without losing contact with the display, and were allowed to skip rotations that were too uncomfortable. The data show surprising interaction effects among the variables, and help us identify whole categories of rotations that are slow and cumbersome for users.
Pervasive Attentive User Interfaces
A. Bulling
Computer, Volume 49, Number 1, 2016
Eyewear Computing -- Augmenting the Human with Head-mounted Wearable Assistants
A. Bulling, O. Cakmakci, K. Kunze and J. M. Rehg (Eds.)
Schloss Dagstuhl, 2016
Attention, please!: Comparing Features for Measuring Audience Attention Towards Pervasive Displays
F. Alt, A. Bulling, L. Mecke and D. Buschek
DIS 2016, 11th ACM SIGCHI Designing Interactive Systems Conference, 2016
Xplore-M-Ego: Contextual Media Retrieval Using Natural Language Queries
S. N. Chowdhury, M. Malinowski, A. Bulling and M. Fritz
ICMR’16, ACM International Conference on Multimedia Retrieval, 2016
Combining Eye Tracking with Optimizations for Lens Astigmatism in modern wide-angle HMDs
D. Pohl, X. Zhang and A. Bulling
2016 IEEE Virtual Reality Conference (VR), 2016
Eyewear Computers for Human-Computer Interaction
A. Bulling and K. Kunze
Interactions, Volume 23, Number 3, 2016
Demo hour
H. Jeong, D. Saakes, U. Lee, A. Esteves, E. Velloso, A. Bulling, K. Masai, Y. Sugiura, M. Ogata, K. Kunze, M. Inami, M. Sugimoto, A. Rathnayake and T. Dias
Interactions, Volume 23, Number 1, 2016
Pupil Detection for Head-mounted Eye Tracking in the Wild: An Evaluation of the State of the Art
W. Fuhl, M. Tonsen, A. Bulling and E. Kasneci
Machine Vision and Applications, Volume Online First, 2016
Special Issue Introduction
D. J. Cook, A. Bulling and Z. Yu
Pervasive and Mobile Computing (Proc. PerCom 2015), Volume 26, 2016
Prediction of Gaze Estimation Error for Error-Aware Gaze-Based Interfaces
M. Barz, F. Daiber and A. Bulling
Proceedings ETRA 2016, 2016
3D Gaze Estimation from 2D Pupil Positions on Monocular Head-Mounted Eye Trackers
M. Mansouryar, J. Steil, Y. Sugano and A. Bulling
Proceedings ETRA 2016, 2016
Gaussian Processes as an Alternative to Polynomial Gaze Estimation Functions
L. Sesma-Sanchez, Y. Zhang, H. Gellersen and A. Bulling
Proceedings ETRA 2016, 2016
Labelled Pupils in the Wild: A Dataset for Studying Pupil Detection in Unconstrained Environments
M. Tonsen, X. Zhang, Y. Sugano and A. Bulling
Proceedings ETRA 2016, 2016
Learning an Appearance-based Gaze Estimator from One Million Synthesised Images
E. Wood, T. Baltrušaitis, L.-P. Morency, P. Robinson and A. Bulling
Proceedings ETRA 2016, 2016
Three-Point Interaction: Combining Bi-manual Direct Touch with Gaze
A. L. Simeone, A. Bulling, J. Alexander and H. Gellersen
Proceedings of the 2016 International Working Conference on Advanced Visual Interfaces (AVI 2016), 2016
Contextual Media Retrieval Using Natural Language Queries
S. N. Chowdhury, M. Malinowski, A. Bulling and M. Fritz
Technical Report, 2016
(arXiv: 1602.04983)
Abstract
The widespread integration of cameras in hand-held and head-worn devices as well as the ability to share content online enables a large and diverse visual capture of the world that millions of users build up collectively every day. We envision these images as well as associated meta information, such as GPS coordinates and timestamps, to form a collective visual memory that can be queried while automatically taking the ever-changing context of mobile users into account. As a first step towards this vision, in this work we present Xplore-M-Ego: a novel media retrieval system that allows users to query a dynamic database of images and videos using spatio-temporal natural language queries. We evaluate our system using a new dataset of real user queries as well as through a usability study. One key finding is that there is a considerable amount of inter-user variability, for example in the resolution of spatial relations in natural language utterances. We show that our retrieval system can cope with this variability using personalisation through an online learning-based retrieval formulation.
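For illustration, the following minimal Python sketch (not the Xplore-M-Ego system; all names are hypothetical) shows how the spatio-temporal part of such a query could be resolved by filtering GPS- and timestamp-tagged media items against the user's current position and a time window:

```python
import math
from datetime import datetime, timedelta

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance between two WGS84 coordinates in metres.
    r = 6371000.0
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2)) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def spatio_temporal_filter(items, user_lat, user_lon, radius_m=500.0, days=7):
    # Keep media items captured within radius_m of the user during the last `days` days.
    # `items` is an iterable of dicts with 'lat', 'lon' and 'timestamp' (datetime) keys,
    # standing in for the collective visual memory described above.
    cutoff = datetime.now() - timedelta(days=days)
    return [it for it in items
            if it["timestamp"] >= cutoff
            and haversine_m(user_lat, user_lon, it["lat"], it["lon"]) <= radius_m]
```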
2015
On the Interplay between Spontaneous Spoken Instructions and Human Visual Behaviour in an Indoor Guidance Task
N. Koleva, S. Hoppe, M. M. Moniri, M. Staudte and A. Bulling
37th Annual Meeting of the Cognitive Science Society (COGSCI 2015), 2015
Scene Viewing and Gaze Analysis during Phonetic Segmentation Tasks
A. Khan, I. Steiner, R. G. Macdonald, Y. Sugano and A. Bulling
Abstracts of the 18th European Conference on Eye Movements (ECEM 2015), 2015
The Feet in Human-Computer Interaction: A Survey of Foot-Based Interaction
E. Velloso, D. Schmidt, J. Alexander, H. Gellersen and A. Bulling
ACM Computing Surveys, Volume 48, Number 2, 2015
Introduction to the Special Issue on Activity Recognition for Interaction
A. Bulling, U. Blanke, D. Tan, J. Rekimoto and G. Abowd
ACM Transactions on Interactive Intelligent Systems, Volume 4, Number 4, 2015
A Study on the Natural History of Scanning Behaviour in Patients with Visual Field Defects after Stroke
T. Loetscher, C. Chen, S. Wignall, A. Bulling, S. Hoppe, O. Churches, N. A. Thomas, M. E. R. Nicholls and A. Lee
BMC Neurology, Volume 15, 2015
Gaze+RST: Integrating Gaze and Multitouch for Remote Rotate-scale-translate Tasks
J. Turner, J. Alexander, A. Bulling and H. Gellersen
CHI 2015, 33rd Annual ACM Conference on Human Factors in Computing Systems, 2015
The Royal Corgi: Exploring Social Gaze Interaction for Immersive Gameplay
M. Vidal, R. Bismuth, A. Bulling and H. Gellersen
CHI 2015, 33rd Annual ACM Conference on Human Factors in Computing Systems, 2015
Abstract
The eyes are a rich channel for non-verbal communication in our daily interactions. We propose social gaze interaction as a game mechanic to enhance user interactions with virtual characters. We develop a game from the ground up in which characters are designed to be reactive to the player’s gaze in social ways, such as getting annoyed when the player seems distracted or changing their dialogue depending on the player’s apparent focus of attention. Results from a qualitative user study provide insights about how social gaze interaction is intuitive for users, elicits deep feelings of immersion, and highlight the players’ self-consciousness of their own eye movements through their strong reactions to the characters.
Computational Modelling and Prediction of Gaze Estimation Error for Head-mounted Eye Trackers
M. Barz, A. Bulling and F. Daiber
Technical Report, 2015
Abstract
Head-mounted eye tracking has significant potential for mobile gaze-based interaction with ambient displays, but current interfaces lack information about the tracker's gaze estimation error. Consequently, current interfaces do not exploit the full potential of gaze input as the inherent estimation error cannot be dealt with. The error depends on the physical properties of the display and constantly varies with changes in position and distance of the user to the display. In this work we present a computational model of gaze estimation error for head-mounted eye trackers. Our model covers the full processing pipeline for mobile gaze estimation, namely mapping of pupil positions to scene camera coordinates, marker-based display detection, and display mapping. We build the model based on a series of controlled measurements of a sample state-of-the-art monocular head-mounted eye tracker. Results show that our model can predict gaze estimation error with a root mean squared error of 17.99 px (1.96°).
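As a rough illustration of the first pipeline stage and of the reported error metric, the sketch below (not the authors' model; function names are made up for this example) fits a second-order polynomial mapping from pupil positions to scene camera coordinates and computes the root mean squared gaze error in pixels on held-out samples:

```python
import numpy as np

def poly_features(pupil_px):
    # Second-order polynomial features of 2D pupil positions, shape (n, 2) -> (n, 6).
    x, y = pupil_px[:, 0], pupil_px[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])

def fit_pupil_to_scene(pupil_px, scene_px):
    # Least-squares fit of the pupil-to-scene-camera mapping from calibration samples.
    coeffs, *_ = np.linalg.lstsq(poly_features(pupil_px), scene_px, rcond=None)
    return coeffs  # shape (6, 2)

def gaze_rmse_px(coeffs, pupil_px, scene_px_true):
    # Root mean squared Euclidean gaze error in scene camera pixels on held-out data.
    pred = poly_features(pupil_px) @ coeffs
    return float(np.sqrt(np.mean(np.sum((pred - scene_px_true) ** 2, axis=1))))
```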
GazeProjector: Location-independent Gaze Interaction on and Across Multiple Displays
C. Lander, S. Gehring, A. Krüger, S. Boring and A. Bulling
Technical Report, 2015
Abstract
Mobile gaze-based interaction with multiple displays may occur from arbitrary positions and orientations. However, maintaining high gaze estimation accuracy still represents a significant challenge. To address this, we present GazeProjector, a system that combines accurate point-of-gaze estimation with natural feature tracking on displays to determine the mobile eye tracker’s position relative to a display. The detected eye positions are transformed onto that display allowing for gaze-based interaction. This allows for seamless gaze estimation and interaction on (1) multiple displays of arbitrary sizes, (2) independently of the user’s position and orientation to the display. In a user study with 12 participants we compared GazeProjector to existing well-established methods such as visual on-screen markers and a state-of-the-art motion capture system. Our results show that our approach is robust to varying head poses, orientations, and distances to the display, while still providing high gaze estimation accuracy across multiple displays without re-calibration. The system represents an important step towards the vision of pervasive gaze-based interfaces.
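A minimal sketch of the display mapping step, assuming a 3x3 homography H from scene camera to display coordinates has already been estimated from natural feature matches (this is not the GazeProjector code; the function name is hypothetical):

```python
import numpy as np

def map_gaze_to_display(gaze_scene_px, H):
    # Project gaze points (n, 2) from scene camera pixels onto display pixels using
    # a homography H estimated from natural feature tracking of the display content.
    pts = np.column_stack([gaze_scene_px, np.ones(len(gaze_scene_px))])  # homogeneous
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # perspective divide
```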
An Empirical Investigation of Gaze Selection in Mid-Air Gestural 3D Manipulation
E. Velloso, J. Turner, J. Alexander, A. Bulling and H. Gellersen
Human-Computer Interaction -- INTERACT 2015, 2015
Interactions Under the Desk: A Characterisation of Foot Movements for Input in a Seated Position
E. Velloso, J. Alexander, A. Bulling and H. Gellersen
Human-Computer Interaction -- INTERACT 2015, 2015
Rendering of Eyes for Eye-Shape Registration and Gaze Estimation
E. Wood, T. Baltrusaitis, X. Zhang, Y. Sugano, P. Robinson and A. Bulling
ICCV 2015, IEEE International Conference on Computer Vision, 2015
Prediction of Search Targets from Fixations in Open-world Settings
H. Sattar, S. Müller, M. Fritz and A. Bulling
IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), 2015
Appearance-based Gaze Estimation in the Wild
X. Zhang, Y. Sugano, M. Fritz and A. Bulling
IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), 2015
Emotion Recognition from Embedded Bodily Expressions and Speech During Dyadic Interactions
P. Müller, S. Amin, P. Verma, M. Andriluka and A. Bulling
International Conference on Affective Computing and Intelligent Interaction (ACII 2015), 2015
Walking Reduces Spatial Neglect
T. Loetscher, C. Chen, S. Hoppe, A. Bulling, S. Wignall, C. Owen, N. Thomas and A. Lee
Journal of the International Neuropsychological Society, 2015
Graphical Passwords in the Wild: Understanding How Users Choose Pictures and Passwords in Image-based Authentication Schemes
F. Alt, S. Schneegass, A. Shirazi, M. Hassib and A. Bulling
MobileHCI’15, 17th International Conference on Human-Computer Interaction with Mobile Devices and Services, 2015
Eye Tracking for Public Displays in the Wild
Y. Zhang, M. K. Chong, A. Bulling and H. Gellersen
Personal and Ubiquitous Computing, Volume 19, Number 5, 2015
Discovery of Everyday Human Activities From Long-Term Visual Behaviour Using Topic Models
J. Steil and A. Bulling
UbiComp 2015, ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2015
Analyzing Visual Attention During Whole Body Interaction with Public Displays
R. Walter, A. Bulling, D. Lindbauer, M. Schuessler and J. Müller
UbiComp 2015, ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2015
Human Visual Behaviour for Collaborative Human-Machine Interaction
A. Bulling
UbiComp & ISWC’15, ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2015
Orbits: Enabling Gaze Interaction in Smart Watches Using Moving Targets
A. Esteves, E. Velloso, A. Bulling and H. Gellersen
UbiComp & ISWC’15, ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2015
Recognition of Curiosity Using Eye Movement Analysis
S. Hoppe, T. Loetscher, S. Morey and A. Bulling
UbiComp & ISWC’15, ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2015
Tackling Challenges of Interactive Public Displays Using Gaze
M. Khamis, A. Bulling and F. Alt
UbiComp & ISWC’15, ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2015
A Field Study on Spontaneous Gaze-based Interaction with a Public Display using Pursuits
M. Khamis, F. Alt and A. Bulling
UbiComp & ISWC’15, ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2015
GravitySpot: Guiding Users in Front of Public Displays Using On-Screen Visual Cues
F. Alt, A. Bulling, G. Gravanis and D. Buschek
UIST’15, 28th Annual ACM Symposium on User Interface Software and Technology, 2015
Orbits: Gaze Interaction for Smart Watches using Smooth Pursuit Eye Movements
A. Esteves, E. Velloso, A. Bulling and H. Gellersen
UIST’15, 28th Annual ACM Symposium on User Interface Software and Technology, 2015
GazeProjector: Accurate Gaze Estimation and Seamless Gaze Interaction Across Multiple Displays
C. Lander, S. Gehring, A. Krüger, S. Boring and A. Bulling
UIST’15, 28th Annual ACM Symposium on User Interface Software and Technology, 2015
Self-calibrating Head-mounted Eye Trackers Using Egocentric Visual Saliency
Y. Sugano and A. Bulling
UIST’15, 28th Annual ACM Symposium on User Interface Software and Technology, 2015
Prediction of Search Targets from Fixations in Open-world Settings
H. Sattar, S. Müller, M. Fritz and A. Bulling
Technical Report, 2015
(arXiv: 1502.05137)
Abstract
Previous work on predicting the target of visual search from human fixations only considered closed-world settings in which training labels are available and predictions are performed for a known set of potential targets. In this work we go beyond the state-of-the-art by studying search target prediction in an open-world setting. To this end, we present a dataset containing fixation data of 18 users searching for natural images from three image categories within image collages of about 80 images. In a closed-world baseline experiment we show that we can predict the correct mental image out of a candidate set of five images. In an open-world experiment we no longer assume potential search targets to be part of the training set and we also no longer assume that we have fixation data for these targets. We present a new problem formulation for search target recognition in the open-world setting, which is based on learning compatibilities between fixations and potential targets.
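The sketch below is a deliberately simplified stand-in for the compatibility idea (not the paper's formulation; a learned compatibility function would replace the plain similarity): candidate targets are ranked by the average similarity between descriptors of fixated image patches and the candidate's descriptor.

```python
import numpy as np

def predict_search_target(fixation_feats, candidate_feats):
    # Toy open-world prediction: rank candidates by mean cosine similarity between
    # features of fixated patches (n_fixations, d) and candidate target features
    # (n_candidates, d). Returns the index of the best candidate and all scores.
    f = fixation_feats / np.linalg.norm(fixation_feats, axis=1, keepdims=True)
    c = candidate_feats / np.linalg.norm(candidate_feats, axis=1, keepdims=True)
    scores = (f @ c.T).mean(axis=0)
    return int(np.argmax(scores)), scores
```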
GazeDPM: Early Integration of Gaze Information in Deformable Part Models
I. Shcherbatyi, A. Bulling and M. Fritz
Technical Report, 2015
(arXiv: 1505.05753)
Abstract
An increasing number of works explore collaborative human-computer systems in which human gaze is used to enhance computer vision systems. For object detection these efforts were so far restricted to late integration approaches that have inherent limitations, such as increased precision without increase in recall. We propose an early integration approach in a deformable part model, which constitutes a joint formulation over gaze and visual data. We show that our GazeDPM method improves over the state-of-the-art DPM baseline by 4% and a recent method for gaze-supported object detection by 3% on the public POET dataset. Our approach additionally provides introspection of the learnt models, can reveal salient image structures, and allows us to investigate the interplay between gaze attracting and repelling areas, the importance of view-specific models, as well as viewers' personal biases in gaze patterns. We finally study important practical aspects of our approach, such as the impact of using saliency maps instead of real fixations, the impact of the number of fixations, as well as robustness to gaze estimation error.
Labeled Pupils in the Wild: A Dataset for Studying Pupil Detection in Unconstrained Environments
M. Tonsen, X. Zhang, Y. Sugano and A. Bulling
Technical Report, 2015
(arXiv: 1511.05768)
Abstract
We present labelled pupils in the wild (LPW), a novel dataset of 66 high-quality, high-speed eye region videos for the development and evaluation of pupil detection algorithms. The videos in our dataset were recorded from 22 participants in everyday locations at about 95 FPS using a state-of-the-art dark-pupil head-mounted eye tracker. They cover people with different ethnicities, a diverse set of everyday indoor and outdoor illumination environments, as well as natural gaze direction distributions. The dataset also includes participants wearing glasses, contact lenses, as well as make-up. We benchmark five state-of-the-art pupil detection algorithms on our dataset with respect to robustness and accuracy. We further study the influence of image resolution, vision aids, as well as recording location (indoor, outdoor) on pupil detection performance. Our evaluations provide valuable insights into the general pupil detection problem and allow us to identify key challenges for robust pupil detection on head-mounted eye trackers.
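Pupil detectors on such datasets are typically compared by their detection rate as a function of the allowed pixel error; a small sketch of that metric (not the paper's evaluation code) is shown below.

```python
import numpy as np

def detection_rate_curve(pred_centers, true_centers, max_error_px=15):
    # Detection rate of a pupil detector as a function of the pixel error threshold.
    # Both arrays have shape (n_frames, 2); failed detections can be marked with NaN
    # predictions and then count as errors at every threshold.
    err = np.linalg.norm(pred_centers - true_centers, axis=1)
    err = np.where(np.isnan(err), np.inf, err)
    thresholds = np.arange(1, max_error_px + 1)
    return thresholds, np.array([(err <= t).mean() for t in thresholds])
```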
2014
A Tutorial on Human Activity Recognition Using Body-worn Inertial Sensors
A. Bulling, U. Blanke and B. Schiele
ACM Computing Surveys, Volume 46, Number 3, 2014
Pursuits: Spontaneous Eye-based Interaction for Dynamic Interfaces
M. Vidal, A. Bulling and H. Gellersen
ACM SIGMOBILE Mobile Computing and Communications Review, Volume 18, Number 4, 2014
Abstract
Although gaze is an attractive modality for pervasive interaction, real-world implementation of eye-based interfaces poses significant challenges. In particular, user calibration is tedious and time consuming. Pursuits is an innovative interaction technique that enables truly spontaneous interaction with eye-based interfaces. A user can simply walk up to the screen and readily interact with moving targets. Instead of being based on gaze location, Pursuits correlates eye pursuit movements with objects dynamically moving on the interface.
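The core idea can be sketched in a few lines of Python (an illustrative re-implementation, not the authors' code): correlate the eye trajectory with each target trajectory over a short window and select the best-matching target if its correlation exceeds a threshold.

```python
import numpy as np

def pursuits_select(eye_xy, targets_xy, threshold=0.8):
    # eye_xy: (n_samples, 2) eye positions over a time window.
    # targets_xy: (n_targets, n_samples, 2) on-screen target positions over the same window.
    # Returns the index of the target the eyes follow most closely, or None.
    scores = []
    for t in targets_xy:
        rx = np.corrcoef(eye_xy[:, 0], t[:, 0])[0, 1]  # Pearson correlation, x axis
        ry = np.corrcoef(eye_xy[:, 1], t[:, 1])[0, 1]  # Pearson correlation, y axis
        scores.append((rx + ry) / 2.0)
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None
```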
Eye Tracking and Eye-based Human–computer Interaction
P. Majaranta and A. Bulling
Advances in Physiological Computing, 2014
Ubic: Bridging the Gap Between Digital Cryptography and the Physical World
M. Simkin, A. Bulling, M. Fritz and D. Schröder
Computer Security - ESORICS 2014, 2014
Cognition-aware Computing
A. Bulling and T. O. Zander
IEEE Pervasive Computing, Volume 13, Number 3, 2014
Introduction to the PETMEI Special Issue
A. Bulling and R. Bednarik
Journal of Eye Movement Research, Volume 7, Number 3, 2014
Test-time Adaptation for 3D Human Pose Estimation
S. Amin, P. Müller, A. Bulling and M. Andriluka
Pattern Recognition (GCPR 2014), 2014
Cross-device Gaze-supported Point-to-point Content Transfer
J. Turner, A. Bulling, J. Alexander and H. Gellersen
Proceedings ETRA 2014, 2014
EyeTab: Model-based Gaze Estimation on Unmodified Tablet Computers
E. Wood and A. Bulling
Proceedings ETRA 2014, 2014
In the Blink of an Eye - Combining Head Motion and Eye Blink Frequency for Activity Recognition with Google Glass
S. Ishimaru, K. Kunze, K. Kise, J. Weppner, A. Dengel, P. Lukowicz and A. Bulling
Proceedings of the 5th Augmented Human International Conference (AH 2014), 2014
Pupil-Canthi-Ratio: A Calibration-free Method for Tracking Horizontal Gaze Direction
Y. Zhang, A. Bulling and H. Gellersen
Proceedings of the 2014 International Working Conference on Advanced Visual Interfaces (AVI 2014), 2014
SmudgeSafe: Geometric Image Transformations for Smudge-resistant User Authentication
S. Schneegass, F. Steimle, A. Bulling, F. Alt and A. Schmidt
UbiComp’14, ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2014
GazeHorizon: Enabling Passers-by to Interact with Public Displays by Gaze
Y. Zhang, J. Müller, M. K. Chong, A. Bulling and H. Gellersen
UbiComp’14, ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2014
Pupil: An Open Source Platform for Pervasive Eye Tracking and Mobile Gaze-based Interaction
M. Kassner, W. Patera and A. Bulling
UbiComp’14 Adjunct, 2014
2013
EyeContext: Recognition of High-level Contextual Cues from Human Visual Behaviour
A. Bulling, C. Weichel and H. Gellersen
CHI 2013, The 31st Annual CHI Conference on Human Factors in Computing Systems, 2013
Abstract
Automatic annotation of life logging data is challenging. In this work we present EyeContext, a system to infer high-level contextual cues from human visual behaviour. We conduct a user study to record eye movements of four participants over a full day of their daily life, totalling 42.5 hours of eye movement data. Participants were asked to self-annotate four non-mutually exclusive cues: social (interacting with somebody vs. no interaction), cognitive (concentrated work vs. leisure), physical (physically active vs. not active), and spatial (inside vs. outside a building). We evaluate a proof-of-concept EyeContext system that combines encoding of eye movements into strings and a spectrum string kernel support vector machine (SVM) classifier. Using person-dependent training, we obtain a top performance of 85.3% precision (98.0% recall) for recognising social interactions. Our results demonstrate the large information content available in long-term human visual behaviour and open up new avenues for research on eye-based behavioural monitoring and life logging.
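For readers unfamiliar with spectrum string kernels, the sketch below (not the EyeContext implementation; the character encoding of eye movements is omitted) computes the k-mer spectrum kernel between two eye movement strings, which can then be plugged into a kernel SVM.

```python
from collections import Counter

def spectrum_features(s, k=3):
    # Counts of all k-character substrings (k-mers) of an eye movement string.
    return Counter(s[i:i + k] for i in range(len(s) - k + 1))

def spectrum_kernel(s1, s2, k=3):
    # Spectrum string kernel: dot product of the two k-mer count vectors.
    f1, f2 = spectrum_features(s1, k), spectrum_features(s2, k)
    return sum(count * f2.get(kmer, 0) for kmer, count in f1.items())
```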
MotionMA: Motion Modelling and Analysis by Demonstration
E. Velloso, A. Bulling and H. Gellersen
CHI 2013, The 31st Annual CHI Conference on Human Factors in Computing Systems, 2013
SideWays: A Gaze Interface for Spontaneous Interaction with Situated Displays
Y. Zhang, A. Bulling and H. Gellersen
CHI 2013, The 31st Annual CHI Conference on Human Factors in Computing Systems, 2013
Pursuits: Eye-based Interaction with Moving Targets
M. Vidal, K. Pfeuffer, A. Bulling and H. W. Gellersen
CHI 2013 Extended Abstracts, 2013
Abstract
Eye-based interaction has commonly been based on estimation of eye gaze direction, to locate objects for interaction. We introduce Pursuits, a novel and very different eye tracking method that instead is based on following the trajectory of eye movement and comparing this with trajectories of objects in the field of view. Because the eyes naturally follow the trajectory of moving objects of interest, our method is able to detect what the user is looking at, by matching eye movement and object movement. We illustrate Pursuits with three applications that demonstrate how the method facilitates natural interaction with moving targets.
AutoBAP: Automatic Coding of Body Action and Posture Units from Wearable Sensors
E. Velloso, A. Bulling and H. Gellersen
2013 Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII 2013), 2013
Eye Pull, Eye Push: Moving Objects between Large Screens and Personal Devices with Gaze & Touch
J. Turner, J. Alexander, A. Bulling, D. Schmidt and H. Gellersen
Human-Computer Interaction – INTERACT 2013, 2013
Abstract
Previous work has validated the eyes and mobile input as a viable approach for pointing at, and selecting out of reach objects. This work presents Eye Pull, Eye Push, a novel interaction concept for content transfer between public and personal devices using gaze and touch. We present three techniques that enable this interaction: Eye Cut & Paste, Eye Drag & Drop, and Eye Summon & Cast. We outline and discuss several scenarios in which these techniques can be used. In a user study we found that participants responded well to the visual feedback provided by Eye Drag & Drop during object movement. In contrast, we found that although Eye Summon & Cast significantly improved performance, participants had difficulty coordinating their hands and eyes during interaction.
I Know What You Are Reading - Recognition of Document Types Using Mobile Eye Tracking
K. Kunze, Y. Utsumi, S. Yuki, K. Kise and A. Bulling
ISWC’13, ACM International Symposium on Wearable Computers, 2013
Signal Processing Technologies for Activity-aware Smart Textiles
D. Roggen, G. Tröster and A. Bulling
Multidisciplinary Know-How for Smart-Textiles Developers, 2013
Abstract
Garments made of smart textiles have an enormous potential for embedding sensors in close proximity to the body in an unobtrusive and comfortable manner. Combined with signal processing and pattern recognition technologies, complex high-level information about human behaviors or situations can be inferred from the sensor data. The goal of this chapter is to introduce the reader to the design of activity-aware systems that use body-worn sensors, such as those that can be made available through smart textiles. We start this chapter by emphasizing recent trends towards ‘wearable’ sensing and computing and we present several examples of activity-aware applications. Then we outline the role that smart textiles can play in activity-aware applications, but also the challenges that they pose. We conclude by discussing the design process followed to devise activity-aware systems: the choice of sensors, the available data processing methods, and the evaluation techniques. We discuss recent data processing methods that address the challenges resulting from the use of smart textiles.
Eye Drop: An Interaction Concept for Gaze-supported Point-to-point Content Transfer
J. Turner, A. Bulling, J. Alexander and H. Gellersen
Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia (MUM 2013), 2013
Qualitative Activity Recognition of Weight Lifting Exercises
E. Velloso, A. Bulling, H. Gellersen, W. Ugulino and H. Fuks
Proceedings of the 4th Augmented Human International Conference (AH 2013), 2013
Abstract
Research on human activity recognition has traditionally focused on discriminating between different activities, i.e. to predict “which” activity was performed at a specific point in time. The quality of executing an activity, the “how (well)”, has only received little attention so far, even though it potentially provides useful information for a large variety of applications, such as sports training. In this work we first define quality of execution and investigate three aspects that pertain to qualitative activity recognition: the problem of specifying correct execution, the automatic and robust detection of execution mistakes, and how to provide feedback on the quality of execution to the user. We illustrate our approach on the example problem of qualitatively assessing and providing feedback on weight lifting exercises. In two user studies we try out a sensor- and a model-based approach to qualitative activity recognition. Our results underline the potential of model-based assessment and the positive impact of real-time user feedback on the quality of execution.
Pursuits: Spontaneous Interaction with Displays based on Smooth Pursuit Eye Movement and Moving Targets
M. Vidal, A. Bulling and H. Gellersen
UbiComp’13, ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2013
Pursuit Calibration: Making Gaze Calibration Less Tedious and More Flexible
K. Pfeuffer, M. Vidal, J. Turner, A. Bulling and H. Gellersen
UIST’13, ACM Symposium on User Interface Software and Technology, 2013
Abstract
Eye gaze is a compelling interaction modality but requires a user calibration before interaction can commence. State-of-the-art procedures require the user to fixate on a succession of calibration markers, a task that is often experienced as difficult and tedious. We present a novel approach, pursuit calibration, that instead uses moving targets for calibration. Users naturally perform smooth pursuit eye movements when they follow a moving target, and we use correlation of eye and target movement to detect the user's attention and to sample data for calibration. Because the method knows when the user is attending to a target, the calibration can be performed implicitly, which enables more flexible design of the calibration task. We demonstrate this in application examples and user studies, and show that pursuit calibration is tolerant to interruption, can blend naturally with applications, and is able to calibrate users without their awareness.
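A minimal sketch of the idea, assuming synchronised eye and target samples (illustrative only, not the authors' implementation): keep windows in which eye and target movement correlate strongly, then fit an affine mapping from eye to screen coordinates on the retained samples.

```python
import numpy as np

def collect_pursuit_samples(eye_xy, target_xy, win=90, threshold=0.85):
    # Keep windows where the eye trajectory correlates strongly with the moving
    # calibration target, i.e. where the user is assumed to be following it.
    kept_eye, kept_tgt = [], []
    for start in range(0, len(eye_xy) - win + 1, win):
        e, t = eye_xy[start:start + win], target_xy[start:start + win]
        r = np.mean([np.corrcoef(e[:, d], t[:, d])[0, 1] for d in (0, 1)])
        if r >= threshold:
            kept_eye.append(e)
            kept_tgt.append(t)
    return (np.vstack(kept_eye), np.vstack(kept_tgt)) if kept_eye else (None, None)

def fit_affine_mapping(eye_xy, screen_xy):
    # Least-squares affine mapping from eye to screen coordinates on the kept samples.
    A = np.column_stack([eye_xy, np.ones(len(eye_xy))])
    M, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)
    return M  # (3, 2); apply with np.column_stack([eye, ones]) @ M
```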
3rd International Workshop on Pervasive Eye Tracking and Mobile Eye-based Interaction
A. Bulling and R. Bednarik (Eds.)
petmei.org, 2013
Proceedings of the 4th Augmented Human International Conference
A. Schmidt, A. Bulling and C. Holz (Eds.)
ACM, 2013
Abstract
We are very happy to present the proceedings of the 4th Augmented Human International Conference (Augmented Human 2013). Augmented Human 2013 focuses on augmenting human capabilities through technology for increased well-being and enjoyable human experience. The conference is in cooperation with ACM SIGCHI, with its proceedings to be archived in ACM’s Digital Library. With technological advances, computing has progressively moved beyond the desktop into new physical and social contexts. As physical artifacts gain new computational behaviors, they become reprogrammable, customizable, repurposable, and interoperable in rich ecologies and diverse contexts. They also become more complex, and require intense design effort in order to be functional, usable, and enjoyable. Designing such systems requires interdisciplinary thinking. Their creation must not only encompass software, electronics, and mechanics, but also the system’s physical form and behavior, its social and physical milieu, and beyond.
2011
Recognition of Hearing Needs From Body and Eye Movements to Improve Hearing Instruments
B. Tessendorf, A. Bulling, D. Roggen, T. Stiefmeier, M. Feilner, P. Derleth and G. Tröster
Pervasive Computing, 2011
Abstract
Hearing instruments (HIs) have emerged as true pervasive computers as they continuously adapt the hearing program to the user’s context. However, current HIs are not able to distinguish different hearing needs in the same acoustic environment. In this work, we explore how information derived from body and eye movements can be used to improve the recognition of such hearing needs. We conduct an experiment to provoke an acoustic environment in which different hearing needs arise: active conversation and working while colleagues are having a conversation in a noisy office environment. We record body movements on nine body locations, eye movements using electrooculography (EOG), and sound using commercial HIs for eleven participants. Using a support vector machine (SVM) classifier and person-independent training, we improve the accuracy from 77% based on sound alone to 92% using body movements. With a view to a future implementation into a HI, we then perform a detailed analysis of the sensors attached to the head. We achieve the best accuracy of 86% using eye movements compared to 84% for head movements. Our work demonstrates the potential of additional sensor modalities for future HIs and motivates investigating the wider applicability of this approach to further hearing situations and needs.
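As an illustration of the person-independent evaluation scheme, the sketch below (not the study's code; leave-one-participant-out cross-validation is an assumption) trains an SVM on all participants but one and tests on the held-out participant using scikit-learn:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def person_independent_accuracy(X, y, participant_ids):
    # X: (n_windows, n_features) features from body/eye movement windows.
    # y: (n_windows,) hearing-need labels; participant_ids: (n_windows,) group labels.
    # Train on all participants but one, test on the held-out one, average accuracies.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    scores = cross_val_score(clf, X, y, groups=participant_ids, cv=LeaveOneGroupOut())
    return float(np.mean(scores))
```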