This paper proposes a new marker-less approach to capturing human performances from multi-view video. Our algorithm jointly reconstructs spatio-temporally coherent geometry, motion, and textural surface appearance of actors performing complex and rapid moves. Furthermore, since the algorithm is purely mesh-based and makes as few prior assumptions as possible about the type of subject being tracked, it can even capture performances of people wearing wide apparel, such as a dancer wearing a skirt. To this end, our method efficiently and effectively combines the power of surface- and volume-based shape deformation techniques with a new mesh-based analysis-through-synthesis framework. This framework extracts motion constraints from video and makes the laser scan of the tracked subject mimic the recorded performance. Small-scale, time-varying shape detail is then recovered by applying model-guided multi-view stereo to refine the model surface. Our method delivers captured performance data at a high level of detail, is highly versatile, and is applicable to many complex types of scenes that could not be handled by alternative marker-based or marker-free recording techniques.
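The analysis-through-synthesis framework alternates between extracting motion constraints from the video streams and deforming the template mesh to satisfy them. The following is a minimal, hypothetical sketch of such a constrained deformation step, assuming a simple energy with a data term (constrained vertices pulled toward their targets) and a smoothness term (every vertex pulled toward the average of its mesh neighbors), solved by fixed-point iteration. The function name, parameters, and energy formulation are illustrative assumptions, not the paper's actual deformation scheme.

```python
def deform(vertices, targets, neighbors, steps=100, alpha=0.5, beta=0.5):
    """Iteratively deform a template mesh toward per-frame constraints.

    vertices:  list of (x, y, z) template vertex positions
    targets:   dict mapping a constrained vertex index to its (x, y, z) target
    neighbors: dict mapping a vertex index to the indices of adjacent vertices
    alpha:     weight of the data (constraint) term
    beta:      weight of the smoothness (neighbor-average) term
    """
    v = [list(p) for p in vertices]
    for _ in range(steps):
        new_v = []
        for i, p in enumerate(v):
            # Data term: unconstrained vertices use their own position as target,
            # so only constrained vertices feel a pull here.
            tx, ty, tz = targets.get(i, p)
            # Smoothness term: pull each vertex toward its neighbor centroid
            # to keep the deformation spatially coherent.
            ns = neighbors.get(i, [])
            if ns:
                ax = sum(v[j][0] for j in ns) / len(ns)
                ay = sum(v[j][1] for j in ns) / len(ns)
                az = sum(v[j][2] for j in ns) / len(ns)
            else:
                ax, ay, az = p
            new_v.append([
                p[0] + alpha * (tx - p[0]) + beta * (ax - p[0]),
                p[1] + alpha * (ty - p[1]) + beta * (ay - p[1]),
                p[2] + alpha * (tz - p[2]) + beta * (az - p[2]),
            ])
        v = new_v
    return v
```

In this toy formulation, constrained vertices settle at a compromise between their target and their neighbors rather than snapping exactly to the target; the real system additionally uses volumetric deformation and multi-view stereo refinement, which are omitted here.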
The New Scientist has published an article about our method. You can find the digital version here.
We will provide the input and output data of our algorithm for scientific purposes on request.
Important: We do not provide the source code of our algorithm as a download.
To obtain a copy, please write an email to firstname.lastname@example.org with further information about you and your work. Please understand that we can only provide the data to you if you are a senior project manager or senior researcher at your institution.
We also ask that you include the following sentence in the acknowledgements section of your paper if you use any of our data:
The captured performance data were provided courtesy of the research group 3D Video and Vision-based Graphics of the Max-Planck-Center for Visual Computing and Communication (MPI Informatik / Stanford)
Please note that you are not allowed to pass the data on to a third party without permission.