Comprehensive Human Performance Capture from Monocular Video Footage
Abstract
Photo-realistic modeling and digital editing of image sequences featuring human actors are common tasks in the movie and games industry. These processes remain laborious, however, since existing tools allow only basic manipulations. In cooperation with the Institut für Informationsverarbeitung (TNT) of the University of Hannover (http://www.tnt.uni-hannover.de/), this project aims to address this problem by providing algorithms and tools for automatic and semi-automatic digital editing of actors in monocular footage. To enable visually convincing renderings, a digital model of the human actor, detailed spatial scene information, and the scene illumination need to be reconstructed. A plausible appearance and motion of the digital model are crucial throughout.
This research project is partially funded by the German Research Foundation (DFG).
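As one concrete piece of such a pipeline, the sketch below shades a reconstructed actor model under an estimated scene illumination represented by second-order spherical harmonics, a representation commonly used in monocular reconstruction. This is a minimal illustrative sketch under that assumption, not the project's actual implementation; the function names and interfaces are hypothetical.

```python
import numpy as np

# Clamped-cosine convolution coefficients for Lambertian shading
# (Ramamoorthi & Hanrahan, "An Efficient Representation for Irradiance
# Environment Maps").
A = np.array([np.pi, 2.0 * np.pi / 3.0, np.pi / 4.0])

def sh_basis(normals: np.ndarray) -> np.ndarray:
    """Evaluate the 9 real SH basis functions (up to order 2) for unit normals (N, 3)."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),        # l=0
        0.488603 * y,                      # l=1, m=-1
        0.488603 * z,                      # l=1, m= 0
        0.488603 * x,                      # l=1, m=+1
        1.092548 * x * y,                  # l=2, m=-2
        1.092548 * y * z,                  # l=2, m=-1
        0.315392 * (3.0 * z * z - 1.0),    # l=2, m= 0
        1.092548 * x * z,                  # l=2, m=+1
        0.546274 * (x * x - y * y),        # l=2, m=+2
    ], axis=1)

def shade_lambertian(normals, albedo, sh_coeffs):
    """Diffuse shading of reconstructed surface points under estimated SH lighting.

    normals:   (N, 3) unit surface normals of the actor model
    albedo:    (N, 3) per-point RGB albedo
    sh_coeffs: (9, 3) estimated scene illumination as RGB SH coefficients
    """
    band = np.array([0, 1, 1, 1, 2, 2, 2, 2, 2])                  # SH band of each coefficient
    irradiance = sh_basis(normals) @ (A[band, None] * sh_coeffs)  # (N, 3) incident irradiance
    return albedo * irradiance / np.pi                            # Lambertian BRDF = albedo / pi
```

Re-rendering the edited model with the illumination recovered from the footage (rather than with arbitrary lighting) is what allows the composite to blend plausibly into the original frames.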
Publications
Fast Non-Rigid Radiance Fields from Monocularized Data
arXiv preprint, to appear.
URL: https://arxiv.org/abs/2212.01368
High-Fidelity Neural Human Motion Transfer from Monocular Video
in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1541-1550, June 2021.
Oral presentation
Reconstructing 3D Human Avatars from Monocular Images
PhD thesis, TU Braunschweig, April 2020.
Reconstructing 3D Human Avatars from Monocular Images
in Magnor M., Sorkine-Hornung A. (Eds.): Real VR – Immersive Digital Reality: How to Import the Real World into Head-Mounted Immersive Displays, Springer International Publishing, Cham, ISBN 978-3-030-41816-8, pp. 188-218, March 2020.
Tex2Shape: Detailed Full Human Body Geometry from a Single Image
in IEEE International Conference on Computer Vision (ICCV), IEEE, pp. 2293-2303, October 2019.
Learning to Reconstruct People in Clothing from a Single RGB Camera
in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, pp. 1175-1186, June 2019.
Detailed Human Avatars from Monocular Video
in International Conference on 3D Vision (3DV), IEEE, pp. 98-109, September 2018.
Video Based Reconstruction of 3D People Models
in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, pp. 8387-8397, June 2018.
CVPR Spotlight Paper
Optical Flow-based 3D Human Motion Estimation from Monocular Video
in Proc. German Conference on Pattern Recognition (GCPR), Springer, pp. 347-360, September 2017.
Related Projects
The goal of this project is to augment video data with high-quality 3D geometry, using only a single camera as input. As an application, we want to dress a person in a video with artificial clothing. We reconstruct the 3D human pose from the 2D input data; this information can then drive a cloth simulation that creates a plausible 3D garment for the observed pose. Compositing this animated garment into the original video creates the illusion of the person wearing different clothing. We aim for real-time frame rates, enabling virtual mirror applications; a sketch of the compositing step is given below.
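The sketch below illustrates one step of such a system: blending the rendered garment back into the original footage using its coverage mask and a per-pixel depth test against the estimated body depth, so that cloth hidden behind the person is not drawn. The interfaces and the depth-test formulation are assumptions for illustration, not the project's actual compositing code.

```python
import numpy as np

def composite_garment(frame, garment_rgb, garment_alpha,
                      garment_depth, body_depth):
    """Composite a simulated garment rendering into the original video frame.

    frame:         (H, W, 3) original camera image, float in [0, 1]
    garment_rgb:   (H, W, 3) rendered garment colour
    garment_alpha: (H, W)    rendered garment coverage mask in [0, 1]
    garment_depth: (H, W)    per-pixel depth of the rendered garment
    body_depth:    (H, W)    estimated depth of the observed person
                             (np.inf where no person is visible)
    """
    # Hide garment pixels that lie behind the person (e.g. cloth on the
    # far side of the body), then alpha-blend the visible cloth over the frame.
    visible = (garment_depth < body_depth).astype(frame.dtype)
    alpha = (garment_alpha * visible)[..., None]      # (H, W, 1) for broadcasting
    return alpha * garment_rgb + (1.0 - alpha) * frame
```

For a virtual mirror, this blend would run once per captured frame, after pose estimation and cloth simulation, which is why real-time frame rates for the full pipeline are a stated goal.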