Computer Graphics
TU Braunschweig

Comprehensive Human Performance Capture from Monocular Video Footage

Abstract

Photo-realistic modeling and digital editing of image sequences with human actors are common tasks in the movie and games industry. These processes are, however, still laborious, since current tools only allow basic manipulations. In cooperation with the Institut für Informationsverarbeitung (TNT) of the University of Hannover (http://www.tnt.uni-hannover.de/), this project aims to resolve this dilemma by providing algorithms and tools for automatic and semi-automatic digital editing of actors in monocular footage. To enable visually convincing renderings, a digital model of the human actor, detailed spatial scene information, and the scene illumination need to be reconstructed. A plausible look and motion of the digital model are crucial here.

This research project is partially funded by the German Research Foundation (DFG).

Publications

Moritz Kappel, Vladislav Golyanik, Susana Castillo, Christian Theobalt, Marcus Magnor:
Fast Non-Rigid Radiance Fields from Monocularized Data
arXiv preprint, to appear.
url: https://arxiv.org/abs/2212.01368

Thiemo Alldieck, Moritz Kappel, Susana Castillo, Marcus Magnor:
Reconstructing 3D Human Avatars from Monocular Images
in Magnor M., Sorkine-Hornung A. (Eds.): Real VR – Immersive Digital Reality: How to Import the Real World into Head-Mounted Immersive Displays, Springer International Publishing, Cham, ISBN 978-3-030-41816-8, pp. 188-218, March 2020.

Thiemo Alldieck, Marcus Magnor, Weipeng Xu, Christian Theobalt, Gerard Pons-Moll:
Video Based Reconstruction of 3D People Models
in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, pp. 8387-8397, June 2018.
CVPR Spotlight Paper

Thiemo Alldieck, Marc Kassubeck, Bastian Wandt, Bodo Rosenhahn, Marcus Magnor:
Optical Flow-based 3D Human Motion Estimation from Monocular Video
in Proc. German Conference on Pattern Recognition (GCPR), Springer, pp. 347-360, September 2017.

Related Projects

Monocular Video Augmentation

The goal of this project is to augment video data with high-quality 3D geometry, using only a single camera as input. As one application, we want to dress a person in a video with artificial clothing. We reconstruct the 3D human pose from 2D input data. This information can then drive a cloth simulation that creates a plausible 3D garment for the observed pose. Compositing this animated garment into the original video creates the illusion of the person wearing different clothing. We aim for real-time frame rates, allowing for virtual mirror applications.
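The 2D-to-3D pose lifting step above can be illustrated with a classic single-view heuristic: under a scaled-orthographic camera, the foreshortening of a bone in the image determines its depth offset up to sign. The sketch below (the function `lift_bone`, the scale parameter, and the example values are illustrative assumptions, not the project's actual pipeline) shows the per-bone computation:

```python
import numpy as np

def lift_bone(p2d_parent, p2d_child, bone_length, scale):
    """Recover the relative depth of a child joint from its 2D offset,
    assuming scaled-orthographic projection and a known bone length.
    The sign of the depth offset remains ambiguous from a single view."""
    dx, dy = (np.asarray(p2d_child, float) - np.asarray(p2d_parent, float)) / scale
    dz_sq = bone_length ** 2 - (dx ** 2 + dy ** 2)
    # Clamp: noisy 2D detections can make the projected bone appear
    # longer than its true 3D length, which would make dz_sq negative.
    return float(np.sqrt(max(dz_sq, 0.0)))

# Example: an upper arm of length 0.5 m, foreshortened to 0.3 m in the image.
dz = lift_bone((0.0, 0.0), (0.3, 0.0), bone_length=0.5, scale=1.0)
print(dz)  # 0.4
```

Applied along the kinematic chain, such per-bone depths yield a rough 3D pose hypothesis that a cloth simulation can be driven with; resolving the depth-sign ambiguity and temporal jitter is what makes the actual monocular estimation problem hard.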