Computer Graphics
TU Braunschweig

Neural Reconstruction and Rendering of Dynamic Real-World Scenes

Abstract

The photorealistic reconstruction and representation of real-world scenes has always been an integral field of research in computer graphics, and includes traditional rendering as well as interdisciplinary techniques from computer vision and machine learning. In addition to conventional applications in photogrammetry, detailed reconstructions from camera or smartphone images have recently also enabled the automated integration of real, photorealistic content in multimedia applications such as virtual reality.

A large number of current methods focus on the 3D representation of static content. In practice, however, many scenes are subject to temporal deformation and therefore require an additional reconstruction of the temporal dimension. At the ICG, we develop technologies for the reconstruction and visualization of dynamic scenes from monocular video recordings. The methods we have developed allow not only the real-time display of new, high-resolution camera views but also the manipulation of temporal sequences, such as the “bullet time” effect known from the film The Matrix. In the future, the resulting models will enable exciting new applications, such as the immersive reproduction of experiences in virtual reality.
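The core idea of rendering novel views of a dynamic scene at arbitrary times can be illustrated with a minimal sketch. This is not the D-NPC implementation; it merely assumes a dynamic point cloud stored as per-frame 3D point positions, interpolates point positions at a continuous query time, and projects them with a pinhole camera. All function names and parameters are illustrative.

```python
import numpy as np

def interpolate_points(frames: np.ndarray, t: float) -> np.ndarray:
    """Linearly interpolate point positions between the two nearest frames.

    frames: (T, N, 3) array of per-frame 3D point positions.
    t:      continuous query time in [0, T-1].
    """
    t0 = int(np.floor(t))
    t1 = min(t0 + 1, frames.shape[0] - 1)
    w = t - t0  # blend weight between frame t0 and frame t1
    return (1.0 - w) * frames[t0] + w * frames[t1]

def project(points: np.ndarray, K: np.ndarray,
            R: np.ndarray, cam_pos: np.ndarray) -> np.ndarray:
    """Project world-space points to pixel coordinates (pinhole camera).

    K: (3, 3) intrinsic matrix, R: (3, 3) world-to-camera rotation,
    cam_pos: (3,) camera center in world space.
    """
    cam = (points - cam_pos) @ R.T   # world -> camera space
    uvw = cam @ K.T                  # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide

# A "bullet time" effect corresponds to freezing t while moving the
# camera (K, R, cam_pos) along a novel trajectory.
```

The decoupling of the query time from the camera pose is what makes effects like "bullet time" possible: space and time become independent axes of the reconstructed representation.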

Publications

Moritz Kappel, Florian Hahlbohm, Timon Scholz, Susana Castillo, Christian Theobalt, Martin Eisemann, Vladislav Golyanik, Marcus Magnor:
D-NPC: Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video
arXiv preprint, pp. 1-16, June 2024.



Related Projects

Comprehensive Human Performance Capture from Monocular Video Footage

Photo-realistic modeling and digital editing of image sequences with human actors are common tasks in the movie and game industries. These processes are, however, still laborious, since current tools only allow basic manipulations. In cooperation with the Institut für Informationsverarbeitung (TNT) of the University of Hannover (http://www.tnt.uni-hannover.de/), this project aims to solve this dilemma by providing algorithms and tools for the automatic and semi-automatic digital editing of actors in monocular footage. To enable visually convincing renderings, a digital model of the human actor, detailed spatial scene information, and the scene illumination need to be reconstructed. Plausible appearance and motion of the digital model are crucial in this process.

This research project is partially funded by the German Research Foundation (DFG).