Computer Graphics
TU Braunschweig

Reality CG

Abstract

The scope of "Reality CG" is to pioneer a novel approach to modelling, editing, and rendering in computer graphics. Instead of manually creating digital models of virtual worlds, Reality CG will explore new ways to achieve visual realism from the kind of approximate models that can be derived from conventional real-world imagery as input.


Computer Graphics of the Real World - Realistic Rendering, Modelling, and Editing of Dynamic, Complex Natural Scenes

ERC Starting Grant No. 256941

2011–2016


Project Summary

The scope of Reality CG is to pioneer a novel approach to modelling, editing, and rendering in computer graphics. Instead of manually creating digital models of virtual worlds, Reality CG will explore new ways to achieve visual realism from the kind of approximate models that can be derived from conventional real-world imagery as input.

Today's state-of-the-art 3D reconstruction methods from computer vision are able to estimate digital models of a wide variety of real-world, dynamic scenes from multi-view video recordings. Inevitable inaccuracies in the reconstructed models, however, lead to rendering artefacts, and missing model editing capabilities have so far prevented the widespread use of real-world-based models in realistic computer graphics. Reality CG aims to overcome these limitations. The goal is to demonstrate that realistic rendering, modelling, and editing of dynamic scenes acquired from the real world is a viable and advantageous alternative to conventional 3D digital content creation.

The project is motivated by the continuously increasing demand for visual realism in many application areas of computer graphics. Recent advances in graphics hardware and algorithms have made it possible to achieve realistic rendering results in real time, provided the digital models to be rendered are realistically detailed. With today's 3D modelling tools, however, the degree of model detail scales roughly linearly with the amount of time invested in manual model design. As a result, the traditional, labour-intensive process of 3D digital content creation threatens to stall further progress in realistic computer graphics applications and new visual media.

Reality CG addresses this precarious modelling bottleneck. To find viable solutions, the project involves and interconnects three different areas of visual research: it makes use of the sophisticated mathematical methods developed in computer vision and combines them with knowledge from visual perception to develop new techniques for realistic modelling, editing, and rendering in computer graphics.

Over the course of the project, Reality CG will provide the enabling technology to open up the real world to computer graphics methodology and applications. By extending the scope of computer graphics beyond virtual content, the project will make a profound impact on the field of visual computing, pioneering new research directions as well as breaking ground for novel applications.

Researchers

Benjamin Hell

Felix Klose

Maryam Mustafa

Michael Stengel

Affiliated Researchers

Thomas Neumann

Matthias Überheide

 

Alumni

Martin Eisemann

Stefan Guthe

Anna Hilsmann

Stefan John

Lea Lindemann

Christian Linz

Christian Lipski

Lorenz Rogge

Kai Ruhl

Anita Sellent

 

Publications


Michael Stengel, Marcus Magnor:
Gaze-contingent Computational Displays: Boosting perceptual fidelity
in IEEE Signal Processing Magazine, vol. 33, no. 5, IEEE, pp. 139-148, September 2016.

Michael Stengel, Steve Grogorick, Martin Eisemann, Marcus Magnor:
Adaptive Image-Space Sampling for Gaze-Contingent Real-time Rendering
in Computer Graphics Forum (Proc. of Eurographics Symposium on Rendering EGSR), vol. 35, no. 4, pp. 129-139, July 2016.
EGSR'16 Best Paper Award



Matthias Überheide, Felix Klose, Tilak Varisetty, Markus Fidler, Marcus Magnor:
Web-based Interactive Free-Viewpoint Streaming
in Proc. ACM Multimedia, pp. 1031-1034, October 2015.
Poster Presentation

Michael Stengel, Steve Grogorick, Elmar Eisemann, Martin Eisemann, Marcus Magnor:
An Affordable Solution for Binocular Eye Tracking and Calibration in Head-mounted Displays
Poster @ ACM Multimedia 2015, October 2015.


Benjamin Hell, Marcus Magnor:
A Convex Clustering-based Regularizer for Image Segmentation
in Proc. Vision, Modeling and Visualization (VMV), Eurographics Association, pp. 87-94, October 2015.

Benjamin Hell, Marc Kassubeck, Pablo Bauszat, Martin Eisemann, Marcus Magnor:
An Approach Towards Fast Gradient-based Image Segmentation
in IEEE Transactions on Image Processing (TIP), vol. 24, no. 9, pp. 2633-2645, September 2015.

Felix Klose, Oliver Wang, Jean-Charles Bazin, Marcus Magnor, Alexander Sorkine-Hornung:
Sampling Based Scene-Space Video Processing
in ACM Transactions on Graphics (Proc. of Siggraph), vol. 34, no. 4, pp. 67:1-67:11, August 2015.
ACM Siggraph 2015 paper



Lorenz Rogge:
Augmenting People in Monocular Video Data
PhD thesis, TU Braunschweig, July 2015.

Martin Eisemann, Jan-Michael Frahm, Yannick Remion, Muhannad Ismael:
Reconstruction of Dense Correspondences
in Marcus A. Magnor, Oliver Grau, Olga Sorkine-Hornung, Christian Theobalt (Eds.): Digital Representations of the Real World: How to Capture, Model, and Render Visual Reality, CRC Press, ISBN 9781482243819, pp. 113-133, May 2015.

Christian Lipski, Anna Hilsmann, Carsten Dachsbacher, Martin Eisemann:
Image- and Video-based Rendering
in Marcus A. Magnor, Oliver Grau, Olga Sorkine-Hornung, Christian Theobalt (Eds.): Digital Representations of the Real World: How to Capture, Model, and Render Visual Reality, CRC Press, ISBN 9781482243819, pp. 261-280, May 2015.

Marcus Magnor, Oliver Grau, Olga Sorkine-Hornung, Christian Theobalt (Eds.):
Digital Representations of the Real World: How to Capture, Model, and Render Visual Reality
A K Peters/CRC Press, ISBN 9781482243819, May 2015.

Anna Hilsmann, Michael Stengel, Lorenz Rogge:
Cloth Modeling
in Marcus Magnor, Oliver Grau, Olga Sorkine-Hornung, Christian Theobalt (Eds.): Digital Representations of the Real World: How to Capture, Model, and Render Visual Reality, CRC Press, ISBN 9781482243819, pp. 229-243, May 2015.

Kai Ruhl:
Stereo 3D and Viewing Experience
in Marcus Magnor, Oliver Grau, Olga Sorkine-Hornung, Christian Theobalt (Eds.): Digital Representations of the Real World: How to Capture, Model, and Render Visual Reality, CRC Press, ISBN 9781482243819, pp. 281-295, May 2015.

Michael Stengel, Pablo Bauszat, Martin Eisemann, Elmar Eisemann, Marcus Magnor:
Temporal Video Filtering and Exposure Control for Perceptual Motion Blur
in IEEE Transactions on Visualization and Computer Graphics (TVCG), vol. 21, no. 5, pp. 663-671, May 2015.
DOI: 10.1109/TVCG.2014.2377753

Michael Stengel, Steve Grogorick, Martin Eisemann, Elmar Eisemann, Marcus Magnor:
A Nonobscuring Eye Tracking Solution for Wide Field-of-View Head-mounted Displays
Technical Demo @ IEEE VR 2015, March 2015.
Honorable Mention for Technical Demos.

Maryam Mustafa, Marcus Magnor:
ElectroEncephaloGraphics: Making Waves in Computer Graphics Research
in IEEE Computer Graphics and Applications, vol. 34, no. 6, pp. 46-56, November 2014.


Lorenz Rogge, Pablo Bauszat, Marcus Magnor:
Monocular Albedo Reconstruction
in Proc. IEEE International Conference on Image Processing (ICIP), IEEE, pp. 1046-1050, October 2014.

Thomas Neumann, Kiran Varanasi, Christian Theobalt, Marcus Magnor, Markus Wacker:
Compressed Manifold Modes for Mesh Processing
in Computer Graphics Forum (Proc. of Symposium on Geometry Processing SGP), vol. 33, no. 5, Eurographics Association, pp. 35-44, July 2014.

Christian Lipski, Felix Klose, Marcus Magnor:
Correspondence and Depth-Image Based Rendering: a Hybrid Approach for Free-Viewpoint Video
in IEEE Trans. Circuits and Systems for Video Technology (T-CSVT), vol. 24, no. 6, pp. 942-951, June 2014.


Thomas Neumann, Kiran Varanasi, Stephan Wenger, Markus Wacker, Marcus Magnor, Christian Theobalt:
Sparse Localized Deformation Components
in ACM Transactions on Graphics (Proc. of Siggraph Asia), vol. 32, no. 6, pp. 179:1-179:10, November 2013.

Kai Ruhl, Martin Eisemann, Marcus Magnor:
Cost Volume-based Interactive Depth Editing in Stereo Post-processing
in Proc. European Conference on Visual Media Production (CVMP), vol. 10, pp. 1-6, November 2013.

Rahul Nair, Kai Ruhl, Stephan Meister, Henrik Schäfer, Christoph S. Garbe, Martin Eisemann, Marcus Magnor, Daniel Kondermann:
A Survey on Time-of-Flight Stereo Fusion
in M. Grzegorzek, C. Theobalt, R. Koch, A. Kolb (Eds.): Time-of-Flight and Depth Imaging, Springer, pp. 105-127, September 2013.

Michael Stengel, Martin Eisemann, Stephan Wenger, Benjamin Hell, Marcus Magnor:
Optimizing Apparent Display Resolution Enhancement for Arbitrary Videos
in IEEE Transactions on Image Processing (TIP), vol. 22, no. 9, pp. 3604-3613, September 2013.
Patent number 10 2013 105 638.


Alexander Lerpe:
Detail Hallucinated Image Interpolation
Master's thesis, TU Braunschweig, May 2013.

Felix Klose, Christian Lipski, Marcus Magnor:
A Framework for Image-Based Stereoscopic View Synthesis from Asynchronous Multi-View Data
in Emerging Technologies for 3D Video: Creation, Coding, Transmission and Rendering, Wiley, ISBN 978-1-118-35511-4, pp. 249-270, May 2013.

Thomas Neumann, Kiran Varanasi, Nils Hasler, Markus Wacker, Marcus Magnor, Christian Theobalt:
Capture and Statistical Modeling of Arm-Muscle Deformations
in Computer Graphics Forum (Proc. of Eurographics EG), vol. 32, no. 2, pp. 285-294, May 2013.

Christian Lipski, Christian Linz, Thomas Neumann, Markus Wacker, Marcus Magnor:
High Resolution Image Correspondences for Video Post-Production
in Journal of Virtual Reality and Broadcasting (JVRB), vol. 9.2012, no. 8, pp. 1-12, December 2012.

Kai Ruhl, Felix Klose, Christian Lipski, Marcus Magnor:
Integrating Approximate Depth Data into Dense Image Correspondence Estimation
in Proc. European Conference on Visual Media Production (CVMP), vol. 9, pp. 1-6, December 2012.



Thomas Neumann, Markus Wacker, Kiran Varanasi, Christian Theobalt, Marcus Magnor:
High Detail Marker based 3D Reconstruction by Enforcing Multiview Constraints
Poster @ SIGGRAPH 2012 (ACM SIGGRAPH 2012 Posters), August 2012.

Maryam Mustafa, Stefan Guthe, Marcus Magnor:
Single Trial EEG Classification of Artifacts in Videos
in ACM Transactions on Applied Perception, vol. 9, no. 3, pp. 12:1-12:15, July 2012.

Anita Sellent, Kai Ruhl, Marcus Magnor:
A Loop-Consistency Measure for Dense Correspondences in Multi-View Video
in Journal of Image and Vision Computing, vol. 30, no. 9, pp. 641-654, June 2012.


Maryam Mustafa, Lea Lindemann, Marcus Magnor:
EEG Analysis of Implicit Human Visual Perception
in Proc. ACM Human Factors in Computing Systems (CHI), pp. 513-516, May 2012.

Yannic Schröder:
Super Resolution for Active Light Sensor Enhancement
Bachelor thesis, March 2012.


Christian Lipski, Felix Klose, Kai Ruhl, Marcus Magnor:
Making of "Who Cares?" HD Stereoscopic Free Viewpoint Video
in Proc. European Conference on Visual Media Production (CVMP), vol. 8, pp. 1-10, November 2011.

Martin Eisemann, Felix Klose, Marcus Magnor:
Towards Plenoptic Raumzeit Reconstruction
in D. Cremers, M. Magnor, M.R. Oswald, L. Zelnik-Manor (Eds.): Video Processing and Computational Video, Springer, ISBN 978-3-642-24869-6, pp. 1-24, October 2011.

Anita Sellent, Martin Eisemann, Marcus Magnor:
Two Algorithms for Motion Estimation from Alternate Exposure Images
in D. Cremers, M. Magnor, M.R. Oswald, L. Zelnik-Manor (Eds.): Video Processing and Computational Video, Springer, ISBN 978-3-642-24869-6, pp. 25-51, October 2011.

Marcus Magnor, Daniel Cremers, Lihi Zelnik-Manor, Martin Oswald (Eds.):
Video Processing and Computational Video
Springer, ISBN 978-3-642-24869-6, October 2011.

Martin Eisemann, Jan Kokemüller, Marcus Magnor:
Object-aware Gradient-Domain Image Compositing
in Proc. Vision, Modeling and Visualization (VMV), pp. 65-71, October 2011.

Kai Berger, Kai Ruhl, Christian Brümmer, Yannic Schröder, Alexander Scholz, Marcus Magnor:
Markerless Motion Capture using multiple Color-Depth Sensors
in Proc. Vision, Modeling and Visualization (VMV), pp. 317-324, October 2011.

Lorenz Rogge, Thomas Neumann, Markus Wacker, Marcus Magnor:
Monocular Pose Reconstruction for an Augmented Reality Clothing System
in Proc. Vision, Modeling and Visualization (VMV), pp. 339-346, September 2011.

Martin Eisemann:
Error-concealed Image-based Rendering
PhD thesis, TU Braunschweig, September 2011.

Lea Lindemann, Marcus Magnor:
Assessing the Quality of Compressed Images Using EEG
in Proc. IEEE International Conference on Image Processing (ICIP), pp. 3170-3173, September 2011.


Lea Lindemann, Stephan Wenger, Marcus Magnor:
Evaluation of Video Artifact Perception Using Event-Related Potentials
in Proc. ACM Applied Perception in Computer Graphics and Visualization (APGV), p. 5, August 2011.

Kai Ruhl, Kai Berger, Christian Lipski, Felix Klose, Yannic Schröder, Alexander Scholz, Marcus Magnor:
Integrating multiple depth sensors into the virtual video camera
in SIGGRAPH '11: ACM SIGGRAPH 2011 Posters, ACM, p. 1, August 2011.

Anita Sellent, Martin Eisemann, Bastian Goldlücke, Daniel Cremers, Marcus Magnor:
Motion Field Estimation from Alternate Exposure Images
in IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 33, no. 8, pp. 1577-1589, August 2011.

Anita Sellent:
Dense Correspondence Field Estimation from Multiple Images
PhD thesis, TU Braunschweig, June 2011.
Monsenstein und Vannerdat, ISBN 978-3-86991-339-1


Martin Eisemann, Daniel Gohlke, Marcus Magnor:
Edge-Constrained Image Compositing
in Proc. Graphics Interface (GI), pp. 191-198, May 2011.




Timo Stich, Christian Linz, Christian Wallraven, Douglas Cunningham, Marcus Magnor:
Perception-motivated interpolation of image sequences
in ACM Transactions on Applied Perception, vol. 8, no. 2, pp. 1-25, February 2011.

Related Projects

Digital Representations of the Real World

The book presents the state of the art in creating photo-realistic digital models of the real world. It is the result of work by experts from around the world, offering a comprehensive overview of the entire pipeline from acquisition, data processing, and modelling to content editing, photo-realistic rendering, and user interaction.

ElectroEncephaloGraphics

This project focuses on using electroencephalography (EEG) to analyze the human visual process. Human visual perception is becoming increasingly important in the analysis of rendering methods, animation results, interface design, and visualization techniques. Our work uses EEG data to provide concrete feedback on the perception of rendered videos and images, as opposed to user studies that capture only the user's conscious response. Our results so far are very promising: not only have we been able to detect a reaction to artifacts in the EEG data, but we have also been able to differentiate between artifacts based on the EEG response.

Eye-tracking Head-mounted Display

Immersion is the ultimate goal of head-mounted displays (HMDs) for Virtual Reality (VR) in order to produce a convincing user experience. Two important aspects in this context are motion sickness, often caused by imprecise calibration, and the integration of reliable eye tracking. We propose an affordable hardware and software solution for drift-free eye tracking and user-friendly lens calibration within an HMD. The use of dichroic mirrors leads to a lean design that provides the full field of view (FOV) while using commodity cameras for eye tracking.

Floating Textures

We present a novel multi-view projective texture mapping technique. While previous multi-view texturing approaches lead to blurring and ghosting artefacts if the 3D geometry and/or camera calibration are imprecise, we propose a texturing algorithm that warps ("floats") projected textures during run-time to preserve a crisp, detailed texture appearance. Our GPU implementation achieves interactive to real-time frame rates. The method is generally applicable and can be used in combination with many image-based rendering methods or projective texturing applications. By using Floating Textures in conjunction with, e.g., visual hull rendering, light field rendering, or free-viewpoint video, improved rendering results are obtained from fewer input images, less accurately calibrated cameras, and coarser 3D geometry proxies.

In a nutshell, the notion of Floating Textures is to correct for local texture misalignments by determining the optical flow between projected textures and warping the textures accordingly in the rendered image domain. Both steps, optical flow estimation and multi-texture warping, can be implemented efficiently on graphics hardware to achieve interactive to real-time performance.
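
To make the core idea concrete, the following is a minimal two-view Python/OpenCV sketch, assuming both textures have already been projected into the output view. The function name float_textures and the fixed 50/50 blend are illustrative only; unlike this CPU sketch, the actual method warps all projected textures on the GPU.

    import cv2
    import numpy as np

    def float_textures(tex_a, tex_b):
        """Align tex_b to tex_a via optical flow, then blend (2-view case)."""
        gray_a = cv2.cvtColor(tex_a, cv2.COLOR_BGR2GRAY)
        gray_b = cv2.cvtColor(tex_b, cv2.COLOR_BGR2GRAY)
        # Dense optical flow between the two projected textures,
        # estimated in the rendered image domain.
        flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = gray_a.shape
        grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
        # Pull tex_b back along the flow so that it lines up with tex_a.
        map_x = (grid_x + flow[..., 0]).astype(np.float32)
        map_y = (grid_y + flow[..., 1]).astype(np.float32)
        aligned_b = cv2.remap(tex_b, map_x, map_y, cv2.INTER_LINEAR)
        # Blending the aligned textures avoids the ghosting that a naive
        # average of misaligned projections would produce.
        return cv2.addWeighted(tex_a, 0.5, aligned_b, 0.5, 0.0)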

Image-space Editing of 3D Content

The goal of this project is to develop image-space algorithms that allow photo-realistic editing of dynamic 3D scenes. Traditional 2D editing tools cannot be applied directly to 3D video because, in addition to correspondences in time, spatial correspondences are needed for consistent editing. In this project we analyze how to exploit the redundancy in multi-view stereoscopic videos to compute robust and dense correspondence fields. These space-time correspondences can then be used to propagate changes applied to one frame consistently to all other frames of the video, as sketched below. Besides adapting classical video editing tools, we want to develop new tools specifically for 3D video content.
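
As a rough illustration, the following Python/OpenCV sketch propagates an edit from one frame to the next, assuming a precomputed dense backward correspondence field flow_t (frame t+1 to frame t). All names are hypothetical; the real pipeline works with space-time correspondences across many views and frames.

    import cv2
    import numpy as np

    def propagate_edit(frame_next, edited_t, mask_t, flow_t):
        """Carry an edit from frame t into frame t+1 along correspondences.

        frame_next: unedited frame t+1; edited_t: edited frame t;
        mask_t: float mask in [0, 1] marking edited pixels in frame t;
        flow_t: backward correspondence field (t+1 -> t).
        """
        h, w = mask_t.shape
        gx, gy = np.meshgrid(np.arange(w), np.arange(h))
        # For each pixel of frame t+1, look up where it came from in frame t.
        map_x = (gx + flow_t[..., 0]).astype(np.float32)
        map_y = (gy + flow_t[..., 1]).astype(np.float32)
        edit_warped = cv2.remap(edited_t, map_x, map_y, cv2.INTER_LINEAR)
        mask_warped = cv2.remap(mask_t.astype(np.float32), map_x, map_y,
                                cv2.INTER_LINEAR)[..., None]
        # Composite the warped edit over the untouched next frame.
        out = mask_warped * edit_warped + (1.0 - mask_warped) * frame_next
        return out.astype(frame_next.dtype)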

This project has been funded by ERC Grant #256941 "Reality CG" and the German Science Foundation, DFG MA2555/4-2.

Immersive Digital Reality

Motivated by the advent of mass-market head-mounted immersive displays, we set out to pioneer the technology needed to experience recordings of the real world with the sense of full immersion as provided by VR goggles.

Multiple Kinect Studies

This project investigates multi-camera setups using Microsoft Kinects. The Kinect's active structured light is used in several scenarios, including gas flow analysis, motion capture, and free-viewpoint video.

While the ability to capture depth alongside color data (RGB-D) is the starting point of these investigations, the structured light is also used more directly. In order to combine Kinects with passive recording approaches, common calibration with HD cameras is also a topic.

Perception-motivated Interpolation of Image Sequences

We present a method for image interpolation that is able to create high-quality, perceptually convincing transitions between recorded images. By implementing concepts derived from human vision, the problem of physically correct image interpolation is relaxed to an image interpolation that is perceived as physically correct by human observers. We find that it suffices to focus on exact edge correspondences, homogeneous regions, and coherent motion to compute such solutions. In our user study we confirm the visual quality of the proposed image interpolation approach. We show how each aspect of our approach increases the perceived quality of the interpolation results, compare our results with those obtained by other methods, and investigate the achieved quality for different types of scenes.
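
The sketch below illustrates the basic warp-and-blend principle in Python/OpenCV, under the simplifying assumption that plain optical flow stands in for our dense correspondences; the exact edge correspondences and motion coherence the method actually relies on are omitted here.

    import cv2
    import numpy as np

    def interpolate(img_a, img_b, alpha=0.5):
        """Compute an in-between image at time alpha in [0, 1]."""
        ga = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
        gb = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)
        flow_ab = cv2.calcOpticalFlowFarneback(ga, gb, None,
                                               0.5, 4, 21, 3, 5, 1.1, 0)
        flow_ba = cv2.calcOpticalFlowFarneback(gb, ga, None,
                                               0.5, 4, 21, 3, 5, 1.1, 0)
        h, w = ga.shape
        gx, gy = np.meshgrid(np.arange(w), np.arange(h))
        gx, gy = gx.astype(np.float32), gy.astype(np.float32)
        # Backward-warp both images toward time alpha. Sampling the flow at
        # the destination pixel is the usual small-motion approximation.
        warp_a = cv2.remap(img_a, gx - alpha * flow_ab[..., 0],
                           gy - alpha * flow_ab[..., 1], cv2.INTER_LINEAR)
        warp_b = cv2.remap(img_b, gx - (1 - alpha) * flow_ba[..., 0],
                           gy - (1 - alpha) * flow_ba[..., 1], cv2.INTER_LINEAR)
        # Cross-dissolve the two motion-aligned images.
        blend = ((1 - alpha) * warp_a.astype(np.float32)
                 + alpha * warp_b.astype(np.float32))
        return blend.astype(img_a.dtype)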

Scene-Space Video Processing

The high degree of redundancy in video footage makes it possible to compensate for noisy depth estimates and to achieve various high-quality processing effects such as denoising, deblurring, super-resolution, object removal, computational shutter functions, and scene-space camera effects.
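
A much-simplified sketch of the gather-and-aggregate idea behind scene-space denoising, assuming per-frame depth maps, world-to-camera poses (R, t), and a shared intrinsic matrix K are given; all names are illustrative, and occlusion handling and sub-pixel sampling are omitted.

    import numpy as np

    def reproject(px, depth, K, K_inv, R_rel, t_rel):
        """Map homogeneous pixel coords (3, N) of one frame into another."""
        rays = K_inv @ px                    # back-project pixels to rays
        pts = rays * depth                   # 3D points in source camera space
        pts = R_rel @ pts + t_rel[:, None]   # transform into target camera
        proj = K @ pts
        return proj[:2] / proj[2:3]          # perspective divide -> pixels

    def scene_space_denoise(frames, depths, poses, K, ref=0):
        """Median over scene-space samples of all frames, for frame `ref`."""
        h, w, _ = frames[ref].shape
        gy, gx = np.mgrid[0:h, 0:w]
        px = np.stack([gx.ravel(), gy.ravel(), np.ones(h * w)])
        K_inv = np.linalg.inv(K)
        samples = [frames[ref].reshape(-1, 3).astype(np.float32)]
        R_ref, t_ref = poses[ref]
        for i, (R_i, t_i) in enumerate(poses):
            if i == ref:
                continue
            # Relative world-to-camera transform: reference camera -> camera i.
            R_rel = R_i @ R_ref.T
            t_rel = t_i - R_rel @ t_ref
            uv = reproject(px, depths[ref].ravel(), K, K_inv, R_rel, t_rel)
            u = np.clip(np.round(uv[0]).astype(int), 0, w - 1)
            v = np.clip(np.round(uv[1]).astype(int), 0, h - 1)
            samples.append(frames[i][v, u].astype(np.float32))
        # Robust aggregation: the median suppresses outliers caused by
        # noisy depth, occlusions, or clipped reprojections.
        out = np.median(np.stack(samples), axis=0)
        return out.reshape(h, w, 3).astype(np.uint8)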

Video Quality Assessment

The goal of this project is to assess the quality of rendered videos and, in particular, to detect those frames that contain visible artifacts, e.g., ghosting, blurring, or popping.

Virtual Video Camera

The Virtual Video Camera research project aims to provide algorithms for rendering free-viewpoint video from asynchronous camcorder captures. We want to record our multi-video data without the need for specialized hardware or intrusive setup procedures (e.g., waving calibration patterns).

Visual Fidelity Optimization of Displays

The visual experience afforded by digital displays is not identical to our perception of the genuine real world. Display resolution, refresh rate, contrast, brightness, and color gamut match neither the physics of the real world nor the perceptual characteristics of the human visual system. With the aid of new algorithms, however, a number of perceptually noticeable degradations on screen can be diminished or even completely avoided.

Who Cares?

Official music video "Who Cares?" by Symbiz Sound; the first major production to use our Virtual Video Camera.

Dubstep, spray cans, brush, and paint join forces and unite with the latest digital production techniques. All imagery depicts live-action graffiti and performance. Camera motion was added in post-production using the Virtual Video Camera.