Thiemo Alldieck (Guest)
Personal Information
Visiting PhD student at the "Real Virtual Humans" group.
Research Interests
- Computer vision for graphics
- Analyzing people in monocular video
Recent Publications
Detailed Human Avatars from Monocular Video
T. Alldieck, M. Magnor, W. Xu, C. Theobalt, G. Pons-Moll
International Conference on 3D Vision, 2018
[project page] [arXiv]
Video Based Reconstruction of 3D People Models
T. Alldieck, M. Magnor, W. Xu, C. Theobalt, G. Pons-Moll
IEEE Conference on Computer Vision and Pattern Recognition, 2018 (Spotlight)
[project page] [arXiv]
See my homepage for a full list of publications.
Publications
2020
- “Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2020), Seattle, WA, USA (Virtual), 2020.
- “Learning to Transfer Texture from Clothing Images to 3D Humans,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2020), Seattle, WA, USA (Virtual), 2020.
2019
- “Learning to Reconstruct People in Clothing from a Single RGB Camera,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), Long Beach, CA, USA, 2019.
- “Tex2Shape: Detailed Full Human Body Geometry from a Single Image,” in International Conference on Computer Vision (ICCV 2019), Seoul, Korea, 2019.
Abstract
We present a simple yet effective method to infer detailed full human body
shape from only a single photograph. Our model can infer full-body shape,
including face, hair, and clothing with wrinkles, at interactive frame
rates. Results feature details even on parts that are occluded in the
input image. Our main idea is to turn shape regression into an aligned
image-to-image translation problem. The input to our method is a partial
texture map of the visible region obtained from off-the-shelf methods. From a
partial texture, we estimate detailed normal and vector displacement maps,
which can be applied to a low-resolution smooth body model to add detail and
clothing. Despite being trained purely with synthetic data, our model
generalizes well to real-world photographs. Numerous results demonstrate the
versatility and robustness of our method.
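The abstract's final step, applying a vector displacement map to a low-resolution smooth body model, can be illustrated with a toy sketch. This is not the paper's implementation; the function name, nearest-neighbour UV sampling, and array layout are all assumptions chosen for brevity.

```python
import numpy as np

def apply_vector_displacement(vertices, uvs, disp_map):
    """Offset each mesh vertex by the 3-vector sampled from a UV-space
    displacement map (nearest-neighbour lookup, for simplicity).

    vertices : (N, 3) array of vertex positions on the smooth body model
    uvs      : (N, 2) array of per-vertex UV coordinates in [0, 1]
    disp_map : (H, W, 3) array of per-texel 3D displacement vectors
    """
    h, w, _ = disp_map.shape
    # Map UV coordinates in [0, 1] to integer pixel indices.
    px = np.clip((uvs[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    py = np.clip((uvs[:, 1] * (h - 1)).round().astype(int), 0, h - 1)
    # Add the sampled displacement to each vertex position.
    return vertices + disp_map[py, px]

# Toy example: three vertices and a 4x4 map that pushes every
# texel 0.1 units along +z.
verts = np.zeros((3, 3))
uvs = np.array([[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]])
disp = np.zeros((4, 4, 3))
disp[..., 2] = 0.1
out = apply_vector_displacement(verts, uvs, disp)
```

In the actual method the displacement map is predicted by an image-to-image network from a partial texture; here it is simply a constant field so the geometric step stands alone.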
2018
- “Detailed Human Avatars from Monocular Video,” in 3DV 2018, International Conference on 3D Vision, Verona, Italy, 2018.