MPIIEmo Dataset

This site hosts the MPIIEmo dataset [1]. If you have any questions about the dataset or how to use it, please contact Philipp Müller.

Dataset


License

The data is only to be used for scientific purposes and must not be republished other than by the Max Planck Institute for Informatics. Scientific use includes processing the data and showing it in publications and presentations. When using the data, please cite [1].

Structure of the Dataset

As a guideline for our actors' improvisation, we provided them with 7 scenarios, each of which could evolve in 4 different ways ("subscenarios"). A complete list of all scenarios and subscenarios can be found here.

Each of the 8 pairs of actors we recorded performed all (sub)scenarios, so the dataset consists of 224 sequences. There are 8 viewpoints for every sequence, resulting in 1792 video files. We provide the videos in archives separated by viewpoint. If you want to get an impression of the interactions, it is best to start with viewpoint 2, as it gives a good overview of the environment. The audio in the videos is a simple downmix of all 4 recorded audio channels to mono. We will release the raw audio files soon.
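For readers who want to reproduce a comparable mono track once the raw audio is available, the following is a minimal sketch of one simple downmix (averaging all 4 channels). It assumes the Python soundfile package and a hypothetical 4-channel WAV file name; the exact downmix used for the released videos may differ.

import numpy as np
import soundfile as sf

# Read a 4-channel recording; audio has shape (samples, 4).
# "recording_4ch.wav" is a placeholder file name for illustration.
audio, sample_rate = sf.read("recording_4ch.wav")

# Average the channels to obtain a single mono track.
mono = audio.mean(axis=1)

sf.write("recording_mono.wav", mono, sample_rate)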

The structure inside one archive is the following:

<id of scenario>_<id of subscenario>_A<id of actor starting in kitchen>_B<id of actor starting outside kitchen>.avi
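As a convenience, here is a minimal sketch of how one might parse this naming scheme in Python. The regular expression and the example filename are illustrative assumptions based on the pattern above (in particular, the width of the actor IDs is not specified), not part of the dataset release.

import re
from pathlib import Path

# Pattern derived from the naming scheme described above.
PATTERN = re.compile(
    r"(?P<scenario>\d+)_(?P<subscenario>\d+)_A(?P<actor_a>\d+)_B(?P<actor_b>\d+)\.avi$"
)

def parse_video_name(path):
    """Return scenario, subscenario, and actor IDs encoded in a video filename."""
    match = PATTERN.match(Path(path).name)
    if match is None:
        raise ValueError(f"Unexpected filename: {path}")
    return {key: int(value) for key, value in match.groupdict().items()}

# Hypothetical example filename:
print(parse_video_name("3_2_A05_B07.avi"))
# {'scenario': 3, 'subscenario': 2, 'actor_a': 5, 'actor_b': 7}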

Downloads

view1, view2, view3, view4, view5, view6, view7, view8 (~4.5GB each)

supplementary material with a list of all scenarios and subscenarios and a table of similar datasets

raw annotations from all 5 annotators

Acknowledgements

The authors would like to thank Johannes Tröger for working as a director in our recordings, as well as all involved actors and annotators.

Hardware

The dataset was recorded with a camera system from 4D View Solutions.

References

[1] Philipp M. Müller, Sikandar Amin, Prateek Verma, Mykhaylo Andriluka, Andreas Bulling. Emotion Recognition from Embedded Bodily Expressions and Speech during Dyadic Interactions. In Proc. 6th International Conference on Affective Computing and Intelligent Interaction (ACII 2015).