Video-based Reconstruction of Animatable Human Characters

Carsten Stoll1   Jürgen Gall2   Edilson de Aguiar3   Sebastian Thrun4   Christian Theobalt1

1MPI Informatik    2ETH Zürich    3Disney Research    4Stanford University


Abstract:

We present a new performance capture approach that incorporates a physically-based cloth model to reconstruct a rigged, fully animatable virtual double of a real person in loose apparel from multi-view video recordings. Our algorithm requires only a minimum of manual interaction. Without using optical markers in the scene, it first reconstructs the skeleton motion and detailed time-varying surface geometry of a real person from a reference video sequence. These captured reference performance data are then analyzed to automatically identify non-rigidly deforming pieces of apparel on the animated geometry. For each piece of apparel, the parameters of a physically-based real-time cloth simulation model are estimated, and the surface geometry of occluded body regions is approximated. The reconstructed character model comprises a skeleton-based representation for the actual body parts and a physically-based simulation model for the apparel. In contrast to previous performance capture methods, we can now also create new real-time animations of actors captured in general apparel.
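The paper's actual cloth model and estimation procedure are not detailed in this abstract, but the core idea of the parameter-estimation step can be illustrated with a toy example: simulate a piece of cloth under candidate simulation parameters and pick the parameters whose simulated geometry best matches the captured reference geometry. The following sketch uses a deliberately simplified hanging mass-spring strip and a grid search over stiffness; all function names, the integration scheme, and the error metric are illustrative assumptions, not the authors' method.

```python
import numpy as np

def simulate_cloth_strip(stiffness, damping, n=10, steps=100, dt=0.01):
    """Toy stand-in for a cloth simulator: a strip of n unit masses
    connected by springs, pinned at the top, settling under gravity
    via damped explicit Euler. Returns final positions, shape (n, 2)."""
    rest = 0.1  # rest length of each spring (illustrative value)
    pos = np.stack([np.zeros(n), -rest * np.arange(n)], axis=1)
    vel = np.zeros_like(pos)
    gravity = np.array([0.0, -9.81])
    for _ in range(steps):
        force = np.tile(gravity, (n, 1))
        d = pos[1:] - pos[:-1]                       # spring vectors
        length = np.linalg.norm(d, axis=1, keepdims=True)
        f = stiffness * (length - rest) * d / length  # Hooke's law
        force[:-1] += f                               # pull toward neighbor
        force[1:] -= f
        vel = (vel + dt * force) * (1.0 - damping)
        vel[0] = 0.0                                  # pin the top vertex
        pos = pos + dt * vel
    return pos

def fit_stiffness(captured, candidates, damping=0.05):
    """Grid-search the stiffness whose simulation best reproduces the
    captured geometry (sum of vertex position errors)."""
    errors = [np.linalg.norm(simulate_cloth_strip(k, damping) - captured)
              for k in candidates]
    return candidates[int(np.argmin(errors))]
```

In the same spirit, the real system would compare simulated apparel geometry against the captured time-varying surface over a whole sequence, rather than a single settled pose of a one-dimensional strip.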




Download Video: [mp4]

Download Preprint: [pdf]