Andreas Bulling (Senior Researcher)

Dr. Andreas Bulling

Address
Max-Planck-Institut für Informatik
Saarland Informatics Campus
Campus E1 4
66123 Saarbrücken
Location
E1 4 - Room 607
Phone
+49 681 9325 2128
Fax
+49 681 9325 2099

Research Interests

  • Human-Computer Interaction
  • Ubiquitous Computing
  • Eye Tracking
  • Machine Learning and Pattern Recognition
  • Egocentric Computer Vision

Education

  • PhD in Information Technology and Electrical Engineering (October 2006 - June 2010)
    Swiss Federal Institute of Technology (ETH) Zurich, Switzerland
  • MSc in Computer Science (October 2001 - June 2006)
    Technical University of Karlsruhe, Germany

Short Bio

Andreas Bulling is an Independent Research Group Leader (W2) at the Max Planck Institute for Informatics and the Cluster of Excellence on Multimodal Computing and Interaction, where he leads the Perceptual User Interfaces Group. He received his MSc (Dipl.-Inform.) in Computer Science from the Technical University of Karlsruhe (TH), Germany, in 2006, focusing on embedded systems, robotics, and biomedical engineering, and holds a PhD in Information Technology and Electrical Engineering from the Swiss Federal Institute of Technology (ETH) Zurich, Switzerland. He was previously a Feodor Lynen Research Fellow and a Marie Curie Research Fellow in the Computer Laboratory at the University of Cambridge, United Kingdom, a postdoctoral research associate in the School of Computing and Communications at Lancaster University, United Kingdom, and a Junior Research Fellow at Wolfson College, Cambridge. Dr. Bulling has served as co-chair, TPC member, and reviewer for major conferences and journals, including as TPC co-chair for Augmented Human 2013 and as associate chair for CHI 2013 and CHI 2014.

Publications

2017
Gaze Embeddings for Zero-Shot Image Classification
N. Karessli, Z. Akata, B. Schiele and A. Bulling
30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 2017
(Accepted/in press)
Noticeable or Distractive? A Design Space for Gaze-Contingent User Interface Notifications
M. Klauck, Y. Sugano and A. Bulling
CHI 2017 Extended Abstracts, 2017
(Accepted/in press)
Look Together: Using Gaze for Assisting Co-located Collaborative Search
Y. Zhang, K. Pfeuffer, M. K. Chong, J. Alexander, A. Bulling and H. Gellersen
Personal and Ubiquitous Computing, Volume 21, Number 1, 2017
2016
SkullConduct: Biometric User Identification on Eyewear Computers Using Bone Conduction Through the Skull
S. Schneegass, Y. Oualil and A. Bulling
CHI 2016, 34th Annual ACM Conference on Human Factors in Computing Systems, 2016
Spatio-Temporal Modeling and Prediction of Visual Attention in Graphical User Interfaces
P. Xu, Y. Sugano and A. Bulling
CHI 2016, 34th Annual ACM Conference on Human Factors in Computing Systems, 2016
GazeTouchPass: Multimodal Authentication Using Gaze and Touch on Mobile Devices
M. Khamis, F. Alt, M. Hassib, E. von Zezschwitz, R. Hasholzner and A. Bulling
CHI 2016 Extended Abstracts, 2016
On the Verge: Voluntary Convergences for Accurate and Precise Timing of Gaze Input
D. Kirst and A. Bulling
CHI 2016 Extended Abstracts, 2016
Pervasive Attentive User Interfaces
A. Bulling
Computer, Volume 49, Number 1, 2016
A 3D Morphable Eye Region Model for Gaze Estimation
E. Wood, T. Baltrušaitis, L.-P. Morency, P. Robinson and A. Bulling
Computer Vision -- ECCV 2016, 2016
Eyewear Computing -- Augmenting the Human with Head-mounted Wearable Assistants
A. Bulling, O. Cakmakci, K. Kunze and J. M. Rehg (Eds.)
Schloss Dagstuhl, 2016
Attention, please! Comparing Features for Measuring Audience Attention Towards Pervasive Displays
F. Alt, A. Bulling, L. Mecke and D. Buschek
DIS 2016, 11th ACM SIGCHI Designing Interactive Systems Conference, 2016
Smooth Eye Movement Interaction Using EOG Glasses
M. Dhuliawala, J. Lee, J. Shimizu, A. Bulling, K. Kunze, T. Starner and W. Woo
ICMI’16, 18th ACM International Conference on Multimodal Interaction, 2016
Xplore-M-Ego: Contextual Media Retrieval Using Natural Language Queries
S. Nag Chowdhury, M. Malinowski, A. Bulling and M. Fritz
ICMR’16, ACM International Conference on Multimedia Retrieval, 2016
Combining Eye Tracking with Optimizations for Lens Astigmatism in Modern Wide-Angle HMDs
D. Pohl, X. Zhang and A. Bulling
2016 IEEE Virtual Reality Conference (VR), 2016
Eyewear Computers for Human-Computer Interaction
A. Bulling and K. Kunze
Interactions, Volume 23, Number 3, 2016
Pupil Detection for Head-mounted Eye Tracking in the Wild: An Evaluation of the State of the Art
W. Fuhl, M. Tonsen, A. Bulling and E. Kasneci
Machine Vision and Applications, Volume 27, Number 8, 2016
Special Issue Introduction
D. J. Cook, A. Bulling and Z. Yu
Pervasive and Mobile Computing (Proc. PerCom 2015), Volume 26, 2016
Prediction of Gaze Estimation Error for Error-Aware Gaze-Based Interfaces
M. Barz, F. Daiber and A. Bulling
Proceedings ETRA 2016, 2016
3D Gaze Estimation from 2D Pupil Positions on Monocular Head-Mounted Eye Trackers
M. Mansouryar, J. Steil, Y. Sugano and A. Bulling
Proceedings ETRA 2016, 2016
Gaussian Processes as an Alternative to Polynomial Gaze Estimation Functions
L. Sesma-Sanchez, Y. Zhang, H. Gellersen and A. Bulling
Proceedings ETRA 2016, 2016
Labelled Pupils in the Wild: A Dataset for Studying Pupil Detection in Unconstrained Environments
M. Tonsen, X. Zhang, Y. Sugano and A. Bulling
Proceedings ETRA 2016, 2016
Learning an Appearance-based Gaze Estimator from One Million Synthesised Images
E. Wood, T. Baltrušaitis, L.-P. Morency, P. Robinson and A. Bulling
Proceedings ETRA 2016, 2016
Long-term Memorability of Cued-Recall Graphical Passwords with Saliency Masks
F. Alt, M. Mikusz, S. Schneegass and A. Bulling
Proceedings of the 15th International Conference on Mobile and Ubiquitous Multimedia (MUM 2016), 2016
EyeVote in the Wild: Do Users Bother Correcting System Errors on Public Displays?
M. Khamis, L. Trotter, M. Tessman, C. Dannhart, A. Bulling and F. Alt
Proceedings of the 15th International Conference on Mobile and Ubiquitous Multimedia (MUM 2016), 2016
Three-Point Interaction: Combining Bi-manual Direct Touch with Gaze
A. L. Simeone, A. Bulling, J. Alexander and H. Gellersen
Proceedings of the 2016 International Working Conference on Advanced Visual Interfaces (AVI 2016), 2016
Concept for Using Eye Tracking in a Head-mounted Display to Adapt Rendering to the User’s Current Visual Field
D. Pohl, X. Zhang, A. Bulling and O. Grau
Proceedings VRST 2016, 2016
TextPursuits: Using Text for Pursuits-based Interaction and Calibration on Public Displays
M. Khamis, O. Saltuk, A. Hang, K. Stolz, A. Bulling and F. Alt
UbiComp’16, ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2016
EyeWear 2016: First Workshop on EyeWear Computing
A. Bulling, O. Cakmakci, K. Kunze and J. M. Rehg
UbiComp’16 Adjunct, 2016
Challenges and Design Space of Gaze-enabled Public Displays
M. Khamis, F. Alt and A. Bulling
UbiComp’16 Adjunct, 2016
Solar System: Smooth Pursuit Interactions Using EOG Glasses
J. Shimizu, J. Lee, M. Dhuliawala, A. Bulling, T. Starner, W. Woo and K. Kunze
UbiComp’16 Adjunct, 2016
AggreGaze: Collective Estimation of Audience Attention on Public Displays
Y. Sugano, X. Zhang and A. Bulling
UIST 2016, 29th Annual Symposium on User Interface Software and Technology, 2016
End-to-End Eye Movement Detection Using Convolutional Neural Networks
S. Hoppe and A. Bulling
Technical Report, 2016
(arXiv: 1609.02452)
Abstract
Common computational methods for automated eye movement detection - i.e. the task of detecting different types of eye movement in a continuous stream of gaze data - are limited in that they either involve thresholding on hand-crafted signal features, require individual detectors each only detecting a single movement, or require pre-segmented data. We propose a novel approach for eye movement detection that only involves learning a single detector end-to-end, i.e. directly from the continuous gaze data stream and simultaneously for different eye movements without any manual feature crafting or segmentation. Our method is based on convolutional neural networks (CNN) that recently demonstrated superior performance in a variety of tasks in computer vision, signal processing, and machine learning. We further introduce a novel multi-participant dataset that contains scripted and free-viewing sequences of ground-truth annotated saccades, fixations, and smooth pursuits. We show that our CNN-based method outperforms state-of-the-art baselines by a large margin on this challenging dataset, thereby underlining the significant potential of this approach for holistic, robust, and accurate eye movement protocol analysis.
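The end-to-end idea lends itself to a compact illustration: a fully convolutional 1D network maps a raw (x, y) gaze stream to a per-sample event label, so no hand-crafted features or pre-segmentation are needed. The sketch below is illustrative only; the layer sizes, kernel widths, and class set are assumptions, not the architecture from the report.

# Minimal sketch of end-to-end eye movement detection with a 1D CNN.
# Architecture and hyperparameters are illustrative, not the paper's.
import torch
import torch.nn as nn

NUM_CLASSES = 3  # fixation, saccade, smooth pursuit

class GazeEventCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: (batch, 2, T) -- x/y gaze coordinates over T samples.
        self.net = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU(),
            # A 1x1 convolution yields a class score per sample, so the
            # continuous stream is labelled without pre-segmentation.
            nn.Conv1d(64, NUM_CLASSES, kernel_size=1),
        )

    def forward(self, gaze):   # gaze: (batch, 2, T)
        return self.net(gaze)  # (batch, NUM_CLASSES, T)

model = GazeEventCNN()
stream = torch.randn(1, 2, 500)       # 500 consecutive gaze samples
labels = model(stream).argmax(dim=1)  # per-sample event labels
print(labels.shape)                   # torch.Size([1, 500])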
Predicting the Category and Attributes of Mental Pictures Using Deep Gaze Pooling
H. Sattar, A. Bulling and M. Fritz
Technical Report, 2016
(arXiv: 1611.10162)
Abstract
Previous work focused on predicting visual search targets from human fixations but, in the real world, a specific target is often not known, e.g. when searching for a present for a friend. In this work we instead study the problem of predicting the mental picture, i.e. only an abstract idea instead of a specific target. This task is significantly more challenging given that mental pictures of the same target category can vary widely depending on personal biases, and given that characteristic target attributes can often not be verbalised explicitly. We instead propose to use gaze information as implicit information on users' mental picture and present a novel gaze pooling layer to seamlessly integrate semantic and localized fixation information into a deep image representation. We show that we can robustly predict both the mental picture's category as well as attributes on a novel dataset containing fixation data of 14 users searching for targets on a subset of the DeepFashion dataset. Our results have important implications for future search interfaces and suggest deep gaze pooling as a general-purpose approach for gaze-supported computer vision systems.
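At its core, a gaze pooling layer can be read as fixation-weighted spatial pooling of CNN feature maps. The plain-numpy sketch below shows that reading under assumed shapes; the function name and normalisation are illustrative, not the layer from the report.

# Sketch of a gaze pooling layer: CNN feature maps are pooled using a
# fixation-density map as spatial weights. Shapes are illustrative.
import numpy as np

def gaze_pooling(features, fixation_map):
    """features: (C, H, W) CNN feature maps;
    fixation_map: (H, W) non-negative fixation density."""
    w = fixation_map / (fixation_map.sum() + 1e-8)  # normalise to a weighting
    # Weighted spatial average: fixated regions dominate the descriptor.
    return (features * w[None, :, :]).sum(axis=(1, 2))  # (C,)

feats = np.random.rand(512, 7, 7)
fix = np.zeros((7, 7))
fix[3, 4] = 5.0
fix[2, 2] = 1.0                        # toy fixation counts
descriptor = gaze_pooling(feats, fix)
print(descriptor.shape)                # (512,)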
Seeing with Humans: Gaze-Assisted Neural Image Captioning
Y. Sugano and A. Bulling
Technical Report, 2016
(arXiv: 1608.05203)
Abstract
Gaze reflects how humans process visual scenes and is therefore increasingly used in computer vision systems. Previous works demonstrated the potential of gaze for object-centric tasks, such as object localization and recognition, but it remains unclear if gaze can also be beneficial for scene-centric tasks, such as image captioning. We present a new perspective on gaze-assisted image captioning by studying the interplay between human gaze and the attention mechanism of deep neural networks. Using a public large-scale gaze dataset, we first assess the relationship between state-of-the-art object and scene recognition models, bottom-up visual saliency, and human gaze. We then propose a novel split attention model for image captioning. Our model integrates human gaze information into an attention-based long short-term memory architecture, and allows the algorithm to allocate attention selectively to both fixated and non-fixated image regions. Through evaluation on the COCO/SALICON datasets we show that our method improves image captioning performance and that gaze can complement machine attention for semantic scene understanding tasks.
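The split attention mechanism can be condensed to a few lines: two softmax distributions, one restricted to fixated image regions and one to non-fixated regions, mixed by a gate. The gate value, region count, and function signature below are assumptions for illustration, not the model from the report.

# Sketch of a split attention step over fixated / non-fixated regions.
import torch
import torch.nn.functional as F

def split_attention(scores, fixated_mask, gate):
    """scores: (R,) attention logits over R image regions;
    fixated_mask: (R,) bool; gate: mixing weight in [0, 1]."""
    # Softmax restricted to each region subset via -inf masking.
    att_fix = F.softmax(scores.masked_fill(~fixated_mask, float('-inf')), dim=0)
    att_non = F.softmax(scores.masked_fill(fixated_mask, float('-inf')), dim=0)
    # The gate selectively allocates attention between the two subsets.
    return gate * att_fix + (1 - gate) * att_non

scores = torch.randn(49)                           # e.g. 7x7 image regions
mask = torch.zeros(49, dtype=torch.bool)
mask[10:14] = True                                 # toy fixated regions
att = split_attention(scores, mask, gate=0.7)
print(att.sum())                                   # tensor(1.0000)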
It’s Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation
X. Zhang, Y. Sugano, M. Fritz and A. Bulling
Technical Report, 2016
(arXiv: 1611.08860)
Abstract
While appearance-based gaze estimation methods have traditionally exploited information encoded solely from the eyes, recent results from a multi-region method indicated that using the full face image can benefit performance. Pushing this idea further, we propose an appearance-based method that, in contrast to a long-standing line of work in computer vision, only takes the full face image as input. Our method encodes the face image using a convolutional neural network with spatial weights applied on the feature maps to flexibly suppress or enhance information in different facial regions. Through evaluation on the recent MPIIGaze and EYEDIAP gaze estimation datasets, we show that our full-face method significantly outperforms the state of the art for both 2D and 3D gaze estimation, achieving improvements of up to 14.3% on MPIIGaze and 27.7% on EYEDIAP for person-independent 3D gaze estimation. We further show that this improvement is consistent across different illumination conditions and gaze directions and particularly pronounced for the most challenging extreme head poses.
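The spatial weighting can be sketched as a small 1x1-convolution branch that predicts one weight per feature-map location and rescales the maps accordingly. Channel counts and depths below are assumptions rather than the paper's exact configuration.

# Sketch of spatial weights on full-face feature maps: a weight map is
# predicted from the features themselves and applied multiplicatively.
import torch
import torch.nn as nn

class SpatialWeights(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # 1x1 convolutions map C channels to one weight per location.
        self.weight_net = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=1), nn.ReLU(),
            nn.Conv2d(channels // 2, 1, kernel_size=1), nn.ReLU(),
        )

    def forward(self, feats):       # feats: (batch, C, H, W)
        w = self.weight_net(feats)  # (batch, 1, H, W)
        return feats * w            # suppress/enhance facial regions

feats = torch.randn(1, 256, 13, 13)
weighted = SpatialWeights(256)(feats)
print(weighted.shape)               # torch.Size([1, 256, 13, 13])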
2015
On the Interplay between Spontaneous Spoken Instructions and Human Visual Behaviour in an Indoor Guidance Task
N. Koleva, S. Hoppe, M. M. Moniri, M. Staudte and A. Bulling
37th Annual Meeting of the Cognitive Science Society (COGSCI 2015), 2015
Scene Viewing and Gaze Analysis during Phonetic Segmentation Tasks
A. Khan, I. Steiner, R. G. Macdonald, Y. Sugano and A. Bulling
Abstracts of the 18th European Conference on Eye Movements (ECEM 2015), 2015
The Feet in Human-Computer Interaction: A Survey of Foot-Based Interaction
E. Velloso, D. Schmidt, J. Alexander, H. Gellersen and A. Bulling
ACM Computing Surveys, Volume 48, Number 2, 2015
Introduction to the Special Issue on Activity Recognition for Interaction
A. Bulling, U. Blanke, D. Tan, J. Rekimoto and G. Abowd
ACM Transactions on Interactive Intelligent Systems, Volume 4, Number 4, 2015
Gaze+RST: Integrating Gaze and Multitouch for Remote Rotate-scale-translate Tasks
J. Turner, J. Alexander, A. Bulling and H. Gellersen
CHI 2015, 33rd Annual ACM Conference on Human Factors in Computing Systems, 2015
The Royal Corgi: Exploring Social Gaze Interaction for Immersive Gameplay
M. Vidal, R. Bismuth, A. Bulling and H. Gellersen
CHI 2015, 33rd Annual ACM Conference on Human Factors in Computing Systems, 2015
Abstract
The eyes are a rich channel for non-verbal communication in our daily interactions. We propose social gaze interaction as a game mechanic to enhance user interactions with virtual characters. We develop a game from the ground-up in which characters are designed to be reactive to the player’s gaze in social ways, such as getting annoyed when the player seems distracted or changing their dialogue depending on the player’s apparent focus of attention. Results from a qualitative user study provide insights about how social gaze interaction is intuitive for users, elicits deep feelings of immersion, and highlight the players’ self-consciousness of their own eye movements through their strong reactions to the characters.
An Empirical Investigation of Gaze Selection in Mid-Air Gestural 3D Manipulation
E. Velloso, J. Turner, J. Alexander, A. Bulling and H. Gellersen
Human-Computer Interaction -- INTERACT 2015, 2015
Interactions Under the Desk: A Characterisation of Foot Movements for Input in a Seated Position
E. Velloso, J. Alexander, A. Bulling and H. Gellersen
Human-Computer Interaction -- INTERACT 2015, 2015
Rendering of Eyes for Eye-Shape Registration and Gaze Estimation
E. Wood, T. Baltrusaitis, X. Zhang, Y. Sugano, P. Robinson and A. Bulling
ICCV 2015, IEEE International Conference on Computer Vision, 2015
Prediction of Search Targets from Fixations in Open-world Settings
H. Sattar, S. Müller, M. Fritz and A. Bulling
IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), 2015
Appearance-based Gaze Estimation in the Wild
X. Zhang, Y. Sugano, M. Fritz and A. Bulling
IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), 2015
Emotion Recognition from Embedded Bodily Expressions and Speech During Dyadic Interactions
P. Müller, S. Amin, P. Verma, M. Andriluka and A. Bulling
International Conference on Affective Computing and Intelligent Interaction (ACII 2015), 2015
Walking Reduces Spatial Neglect
T. Loetscher, C. Chen, S. Hoppe, A. Bulling, S. Wignall, C. Owen, N. Thomas and A. Lee
Journal of the International Neuropsychological Society, 2015
Graphical Passwords in the Wild: Understanding How Users Choose Pictures and Passwords in Image-based Authentication Schemes
F. Alt, S. Schneegass, A. Shirazi, M. Hassib and A. Bulling
MobileHCI’15, 17th International Conference on Human-Computer Interaction with Mobile Devices and Services, 2015
Eye Tracking for Public Displays in the Wild
Y. Zhang, M. K. Chong, A. Bulling and H. Gellersen
Personal and Ubiquitous Computing, Volume 19, Number 5, 2015
Discovery of Everyday Human Activities From Long-Term Visual Behaviour Using Topic Models
J. Steil and A. Bulling
UbiComp 2015, ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2015
Analyzing Visual Attention During Whole Body Interaction with Public Displays
R. Walter, A. Bulling, D. Lindbauer, M. Schuessler and J. Müller
UbiComp 2015, ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2015
Human Visual Behaviour for Collaborative Human-Machine Interaction
A. Bulling
UbiComp & ISWC’15, ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2015
Orbits: Enabling Gaze Interaction in Smart Watches Using Moving Targets
A. Esteves, E. Velloso, A. Bulling and H. Gellersen
UbiComp & ISWC’15, ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2015
Recognition of Curiosity Using Eye Movement Analysis
S. Hoppe, T. Loetscher, S. Morey and A. Bulling
UbiComp & ISWC’15, ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2015
Tackling Challenges of Interactive Public Displays Using Gaze
M. Khamis, A. Bulling and F. Alt
UbiComp & ISWC’15, ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2015
A Field Study on Spontaneous Gaze-based Interaction with a Public Display using Pursuits
M. Khamis, F. Alt and A. Bulling
UbiComp & ISWC’15, ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2015
GravitySpot: Guiding Users in Front of Public Displays Using On-Screen Visual Cues
F. Alt, A. Bulling, G. Gravanis and D. Buschek
UIST’15, 28th Annual ACM Symposium on User Interface Software and Technology, 2015
Orbits: Gaze Interaction for Smart Watches using Smooth Pursuit Eye Movements
A. Esteves, E. Velloso, A. Bulling and H. Gellersen
UIST’15, 28th Annual ACM Symposium on User Interface Software and Technology, 2015
GazeProjector: Accurate Gaze Estimation and Seamless Gaze Interaction Across Multiple Displays
C. Lander, S. Gehring, A. Krüger, S. Boring and A. Bulling
UIST’15, 28th Annual ACM Symposium on User Interface Software and Technology, 2015
Self-calibrating Head-mounted Eye Trackers Using Egocentric Visual Saliency
Y. Sugano and A. Bulling
UIST’15, 28th Annual ACM Symposium on User Interface Software and Technology, 2015
GazeDPM: Early Integration of Gaze Information in Deformable Part Models
I. Shcherbatyi, A. Bulling and M. Fritz
Technical Report, 2015
(arXiv: 1505.05753)
Abstract
An increasing number of works explore collaborative human-computer systems in which human gaze is used to enhance computer vision systems. For object detection these efforts were so far restricted to late integration approaches that have inherent limitations, such as increased precision without increase in recall. We propose an early integration approach in a deformable part model, which constitutes a joint formulation over gaze and visual data. We show that our GazeDPM method improves over the state-of-the-art DPM baseline by 4% and a recent method for gaze-supported object detection by 3% on the public POET dataset. Our approach additionally provides introspection of the learnt models, can reveal salient image structures, and allows us to investigate the interplay between gaze attracting and repelling areas, the importance of view-specific models, as well as viewers' personal biases in gaze patterns. We finally study important practical aspects of our approach, such as the impact of using saliency maps instead of real fixations, the impact of the number of fixations, as well as robustness to gaze estimation error.
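In spirit, early integration means the gaze signal enters the feature representation before detection scoring, rather than re-ranking detector outputs afterwards. The numpy sketch below illustrates that with a gaze-density map stacked as an extra feature channel; the joint DPM formulation in the report is considerably richer than this.

# Sketch of "early integration": gaze joins the visual features before
# the detector is trained/scored. Feature layout is an assumption.
import numpy as np

def early_integration_features(hog, gaze_density):
    """hog: (H, W, C) visual features; gaze_density: (H, W) fixation map."""
    g = gaze_density / (gaze_density.max() + 1e-8)
    # Stack gaze as an extra channel so the detector is learnt jointly
    # over visual and gaze evidence.
    return np.concatenate([hog, g[:, :, None]], axis=2)

hog = np.random.rand(20, 20, 31)      # toy HOG-like feature grid
gaze = np.zeros((20, 20))
gaze[8:12, 8:12] = 1.0                # toy fixation density
feats = early_integration_features(hog, gaze)
print(feats.shape)                    # (20, 20, 32)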
Labelled Pupils in the Wild: A Dataset for Studying Pupil Detection in Unconstrained Environments
M. Tonsen, X. Zhang, Y. Sugano and A. Bulling
Technical Report, 2015
(arXiv: 1511.05768)
Abstract
We present labelled pupils in the wild (LPW), a novel dataset of 66 high-quality, high-speed eye region videos for the development and evaluation of pupil detection algorithms. The videos in our dataset were recorded from 22 participants in everyday locations at about 95 FPS using a state-of-the-art dark-pupil head-mounted eye tracker. They cover people with different ethnicities, a diverse set of everyday indoor and outdoor illumination environments, as well as natural gaze direction distributions. The dataset also includes participants wearing glasses, contact lenses, as well as make-up. We benchmark five state-of-the-art pupil detection algorithms on our dataset with respect to robustness and accuracy. We further study the influence of image resolution, vision aids, as well as recording location (indoor, outdoor) on pupil detection performance. Our evaluations provide valuable insights into the general pupil detection problem and allow us to identify key challenges for robust pupil detection on head-mounted eye trackers.
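A benchmark of this kind typically scores a detector by its detection rate: the fraction of frames whose predicted pupil centre falls within a pixel-error threshold of the ground-truth annotation. The sketch below shows that metric under an assumed data layout; names and thresholds are illustrative.

# Sketch of the evaluation protocol: detection rate at a pixel-error
# threshold between predicted and annotated pupil centres.
import numpy as np

def detection_rate(pred, gt, threshold_px=5.0):
    """pred, gt: (N, 2) arrays of pupil-centre estimates / annotations."""
    errors = np.linalg.norm(pred - gt, axis=1)   # Euclidean error per frame
    return (errors <= threshold_px).mean()

gt = np.random.rand(1000, 2) * 640               # toy annotations
pred = gt + np.random.randn(1000, 2) * 3.0       # toy detector, ~3px noise
for t in (1, 3, 5, 10):
    print(f"detection rate @ {t}px: {detection_rate(pred, gt, t):.2f}")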
2014
A Tutorial on Human Activity Recognition Using Body-worn Inertial Sensors
A. Bulling, U. Blanke and B. Schiele
ACM Computing Surveys, Volume 46, Number 3, 2014
Eye Tracking and Eye-based Human–computer Interaction
P. Majaranta and A. Bulling
Advances in Physiological Computing, 2014
Ubic: Bridging the Gap Between Digital Cryptography and the Physical World
M. Simkin, A. Bulling, M. Fritz and D. Schröder
Computer Security - ESORICS 2014, 2014
Cognition-aware Computing
A. Bulling and T. O. Zander
IEEE Pervasive Computing, Volume 13, Number 3, 2014
Introduction to the PETMEI Special Issue
A. Bulling and R. Bednarik
Journal of Eye Movement Research, Volume 7, Number 3, 2014
Test-time Adaptation for 3D Human Pose Estimation
S. Amin, P. Müller, A. Bulling and M. Andriluka
Pattern Recognition (GCPR 2014), 2014
Cross-device Gaze-supported Point-to-point Content Transfer
J. Turner, A. Bulling, J. Alexander and H. Gellersen
Proceedings ETRA 2014, 2014
EyeTab: Model-based Gaze Estimation on Unmodified Tablet Computers
E. Wood and A. Bulling
Proceedings ETRA 2014, 2014
In the Blink of an Eye - Combining Head Motion and Eye Blink Frequency for Activity Recognition with Google Glass
S. Ishimaru, K. Kunze, K. Kise, J. Weppner, A. Dengel, P. Lukowicz and A. Bulling
Proceedings of the 5th Augmented Human International Conference (AH 2014), 2014
Pupil-Canthi-Ratio: A Calibration-free Method for Tracking Horizontal Gaze Direction
Y. Zhang, A. Bulling and H. Gellersen
Proceedings of the 2014 International Working Conference on Advanced Visual Interfaces (AVI 2014), 2014
SmudgeSafe: Geometric Image Transformations for Smudge-resistant User Authentication
S. Schneegass, F. Steimle, A. Bulling, F. Alt and A. Schmidt
UbiComp’14, ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2014
GazeHorizon: Enabling Passers-by to Interact with Public Displays by Gaze
Y. Zhang, J. Müller, M. K. Chong, A. Bulling and H. Gellersen
UbiComp’14, ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2014
Pupil: An Open Source Platform for Pervasive Eye Tracking and Mobile Gaze-based Interaction
M. Kassner, W. Patera and A. Bulling
UbiComp’14 Adjunct, 2014
2013
Eye Drop: An Interaction Concept for Gaze-supported Point-to-point Content Transfer
J. Turner, A. Bulling, J. Alexander and H. Gellersen
Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia (MUM 2013), 2013
3rd International Workshop on Pervasive Eye Tracking and Mobile Eye-based Interaction
A. Bulling and R. Bednarik (Eds.)
petmei.org, 2013