Computer Graphics
TU Braunschweig

Research Projects


Real-Action VR

Want to relive your latest bungee jump? Share your incredible skateboard stunts with your friends in 360°? Watch your last vacation adventures in full immersion and 3D? In this project we set out to pioneer the fully immersive experience of action camera recordings in VR headsets.



Neural Reconstruction and Rendering of Dynamic Real-World Scenes

The photorealistic reconstruction and representation of real-world scenes has always been an integral field of research in computer graphics, and includes traditional rendering as well as interdisciplinary techniques from computer vision and machine learning. In addition to conventional applications in photogrammetry, detailed reconstructions from camera or smartphone images have recently also enabled the automated integration of real, photorealistic content in multimedia applications such as virtual reality.

A large number of current methods focus on the 3D representation of static content. In practice, however, many scenes are subject to temporal deformation and therefore require an additional reconstruction of the temporal dimension. At the ICG, we develop technologies for the reconstruction and visualization of dynamic scenes from monocular video recordings. The methods we have developed allow not only the real-time display of new, high-resolution camera views but also the manipulation of temporal sequences, such as the "bullet time" effect known from the movie The Matrix. In the future, the resulting models will enable exciting new applications, such as the immersive reproduction of experiences in virtual reality.


Point-Based Neural Rendering

For decades, computer graphics researchers have endeavored to reconstruct our tangible world in order to facilitate the creation of novel virtual environments. Current research in this field develops methods that utilize neural networks to produce realistic three-dimensional reconstructions, generated through a computationally intensive optimization process.

While there are methods that employ, for instance, triangles, voxels, or implicit representations to model geometry, this project centers on the use of point clouds. Due to their flexibility, they are well suited to representing even complex scenes with a high level of detail, and neural networks can further enhance the rendering quality. Consequently, this line of research concentrates on two areas: first, optimizing the efficiency and quality of image generation from point clouds; second, improving current network architectures. The long-term objective is to use the reconstructed models in interactive real-time applications. For instance, smartphones could then be used to capture and view not only simple images but also detailed 3D reconstructions.
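To illustrate the underlying idea, the following sketch (a simplified assumption, not the project's actual pipeline) projects a colored point cloud into an image with a pinhole camera and a z-buffer; in neural point-based rendering, such a rasterized image of per-point features would subsequently be refined by a network that fills holes and adds detail. All function and variable names are placeholders.

```python
import numpy as np

def splat_points(points, colors, K, R, t, height, width):
    """Project a colored point cloud into an image using a pinhole camera
    and a z-buffer. This only illustrates the rasterization step; neural
    approaches refine such images afterwards."""
    cam = (R @ points.T + t[:, None]).T            # world -> camera coordinates
    in_front = cam[:, 2] > 1e-6                    # keep points in front of the camera
    cam, colors = cam[in_front], colors[in_front]

    proj = (K @ cam.T).T                           # pinhole projection
    px = (proj[:, :2] / proj[:, 2:3]).astype(int)  # integer pixel coordinates

    image = np.zeros((height, width, 3), dtype=np.float32)
    zbuf = np.full((height, width), np.inf, dtype=np.float32)

    for (x, y), z, c in zip(px, cam[:, 2], colors):
        if 0 <= x < width and 0 <= y < height and z < zbuf[y, x]:
            zbuf[y, x] = z                         # nearest point wins
            image[y, x] = c
    return image

# Toy usage with a random point cloud and an identity camera pose.
pts = np.random.rand(1000, 3) + np.array([0.0, 0.0, 2.0])
cols = np.random.rand(1000, 3)
K = np.array([[200.0, 0, 64], [0, 200.0, 64], [0, 0, 1]])
img = splat_points(pts, cols, K, np.eye(3), np.zeros(3), 128, 128)
```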



Eye-tracking Head-mounted Display

Immersion is the ultimate goal of head-mounted displays (HMD) for Virtual Reality (VR) in order to produce a convincing user experience. Two important aspects in this context are motion sickness, often due to imprecise calibration, and the integration of reliable eye tracking. We propose an affordable hardware and software solution for drift-free eye tracking and user-friendly lens calibration within an HMD. The use of dichroic mirrors leads to a lean design that provides the full field of view (FOV) while using commodity cameras for eye tracking.


ICG Dome

Featuring more than 10 million pixels at 120 Hertz refresh rate, full-body motion capture, as well as real-time gaze tracking, our 5-meter ICG Dome enables us to research peripheral visual perception, to devise comprehensive foveal-peripheral rendering strategies, and to explore multi-user immersive visualization and interaction.


Immersive Attention Guidance

Immersive Virtual Reality (VR) offers high flexibility to its users: viewers of 360° panoramic videos can freely choose their viewing direction while watching a movie that is playing all around them. But this raises the problem that viewers might miss important information because they are not looking in the right direction at the right time, which is an important problem for storytelling in VR.

This research project explores methods for unobtrusive visual attention guidance in 360° virtual environments.


Immersive Digital Reality

Motivated by the advent of mass-market head-mounted immersive displays, we set out to pioneer the technology needed to experience recordings of the real world with the sense of full immersion as provided by VR goggles.


Increasing Realism of Omnidirectional Videos in Virtual Reality

The goal of this DFG-funded research project (ID 491805996) is an overall more lifelike perception of omnidirectional video in VR display systems. For this purpose, visual perception effects are virtually recreated to increase the realism of the displayed content, and display parameters are adapted to our visual perception to increase the immersive effect of VR displays themselves.


Parallax Panorama Video

Recent advances in consumer-grade panorama capturing setups enable personalized 360° experiences. Along with the improvement in head-mounted displays (HMDs), these make it possible to bring back memories at an unprecedented level of immersion. However, the lack of explicit depth and scene geometry prohibits any form of head movement, which is needed for a fully immersive VR experience.

This research project explores methods for adding motion parallax to previously captured stereo panorama videos and enabling real-time playback of these enhanced videos in HMDs.


Preventing Motion Sickness in VR

Motion sickness, also referred to as simulator sickness, virtual sickness, or cybersickness, is a problem common to all types of visual simulators, consisting of motion-sickness-like symptoms that may be experienced during and after exposure to a dynamic, immersive visualization. It leads to ethical concerns and impaired validity of simulator-based research. Due to the popularity of virtual reality devices, the number of people exposed to this problem is increasing; it is therefore crucial not only to find reliable predictors of this condition before any symptoms appear, but also to find ways to fully prevent its occurrence while experiencing VR content.


Real VR - Immersive Digital Reality

With the advent of consumer-market Virtual Reality (VR) technology, the next revolution in visual entertainment is already on the horizon: real VR will enable us to experience live-action movies, sports broadcasts, concert videos, etc. in true visual (and aural) immersion. This book provides a comprehensive overview of the algorithms and methods that make it possible to immerse oneself in real-world recordings. It brings together the expertise of internationally renowned experts from academia and industry who present the state of the art in this fascinating, interdisciplinary new research field. Written by and for scientists, engineers, and practitioners, this book is the definitive reference for anyone interested in finding out how to import the real world into head-mounted displays.



Physical Parameter Estimation from Images

The goal of this project is to develop fast image-based measurement methods for optical properties, which would help to close feedback loops in adaptive manufacturing.

The introduction of novel production techniques for integrated optical components demands an increasing amount of quality control and inline feedback. Our focus in this project is the combination of fast optical measurement techniques and physics-based simulations to achieve fast and accurate feedback of physical parameters as close to the machine tool as possible.

The research on this topic is done in collaboration with the PhoenixD Cluster of Excellence. We work closely with expert researchers from other disciplines under the Task Group F2: Expert Systems for Quality Control.


Teach AR

Augmented reality (AR) offers the potential to integrate physical, digital and social learning experiences in hybrid learning environments and thereby to achieve learning gains, higher motivation, or improved interaction and collaboration. Moreover, by means of AR, theory- or calculus-based learning and experimental exploration in the lab can be brought closer together. Here we present a data-driven AR enhancement of experiments in an electricity and magnetism lab course, where measurement data, such as the actual current and voltage, are transmitted to a head-mounted semi-transparent display (HMD).


Wave Optics Rendering

Motivated by the great impact that automatically re-adjusting machine parameters in real time during production could have on manufacturing costs, in this project we propose the use of camera-based optical measurement systems to provide machines with real-time measurement feedback.

In such a scenario, the application of traditional real-time computer graphics approximations is inappropriate. Their core relies on the famous rendering equation proposed by Kajiya in 1986, which has proven sufficient to allow real-time photorealistic image synthesis in most situations. However, this simplified model of light transport and surface interaction oversimplifies the role that light's wave properties play in complex optical systems, where we need to accurately simulate diffraction effects in real time. To overcome this challenge, we strive to combine knowledge from real-time constrained image synthesis research with more detailed physically-based light transport models that incorporate wave characteristics.
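For reference, the rendering equation introduced by Kajiya describes the outgoing radiance at a surface point x as the emitted radiance plus the incident radiance weighted by the BRDF and the cosine of the angle of incidence:

```latex
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o)
  + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\,
    L_i(\mathbf{x}, \omega_i)\, (\omega_i \cdot \mathbf{n})\, \mathrm{d}\omega_i
```

Since radiance in this model carries no phase information, interference and diffraction cannot emerge from it, which is why wave-optical extensions are required here.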



Alternate Exposure Imaging

Traditional optical flow algorithms rely on consecutive short-exposure images. In contrast, long-exposure images contain integrated motion information directly in the form of motion blur. In this project, we use the additional information provided by a long-exposure image to improve the robustness and accuracy of motion field estimation. Furthermore, the long-exposure image can be used to determine the moment of occlusion for those pixels in any of the short-exposure images that are occluded or disoccluded.
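As a minimal sketch of the consistency idea, a candidate motion field can be scored by synthesizing a motion-blurred frame from one short exposure and comparing it with the recorded long exposure. This is only an illustration under simplified assumptions (grayscale images, linear per-pixel motion, nearest-neighbor sampling), not the project's actual algorithm; function names are hypothetical.

```python
import numpy as np

def synthesize_long_exposure(short, flow, samples=16):
    """Average a short-exposure frame sampled along the per-pixel motion path.
    'flow' holds the displacement (dy, dx) each pixel travels during the
    long exposure; nearest-neighbor sampling keeps the sketch simple."""
    h, w = short.shape
    ys, xs = np.mgrid[0:h, 0:w]
    blur = np.zeros_like(short, dtype=np.float64)
    for k in range(samples):
        a = k / (samples - 1)                        # position along the motion path
        sy = np.clip(np.round(ys + a * flow[..., 0]).astype(int), 0, h - 1)
        sx = np.clip(np.round(xs + a * flow[..., 1]).astype(int), 0, w - 1)
        blur += short[sy, sx]
    return blur / samples

def blur_data_term(short, long_exposure, flow):
    """Photometric cost between the synthesized and the recorded long exposure;
    a flow estimator could minimize this in addition to the usual terms."""
    return np.mean((synthesize_long_exposure(short, flow) - long_exposure) ** 2)
```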

This work has been funded by the German Science Foundation, DFG MA2555/4-1.


Comprehensive Human Performance Capture from Monocular Video Footage

Photo-realistic modeling and digital editing of image sequences with human actors are common tasks in the movie and games industry. However, the processes are still laborious, since existing tools only allow basic manipulations. In cooperation with the Institut für Informationsverarbeitung (TNT) of the University of Hannover (http://www.tnt.uni-hannover.de/), this project aims to solve this dilemma by providing algorithms and tools for automatic and semi-automatic digital editing of actors in monocular footage. To enable visually convincing renderings, a digital model of the human actor, detailed spatial scene information, as well as the scene illumination need to be reconstructed. A plausible look and motion of the digital model are crucial here.

This research project is partially funded by the German Science Foundation DFG.


Monocular Video Augmentation

The goal of this project is to augment video data with high-quality 3D geometry, while only using a single camera as input. As an application of this project, we want to dress a person in a video with artificial clothing. We reconstruct a 3D human pose from 2D input data. This information can be used to drive a cloth simulation creating a plausible 3D garment for the observed pose. Composing this animated garment into the original video creates the illusion of the person wearing different clothing. We aim at real-time frame rates for this system, allowing for virtual mirror applications.


Scene-Space Video Processing

The high degree of redundancy in video footage allows compensating for noisy depth estimates and achieving various high-quality processing effects such as denoising, deblurring, super-resolution, object removal, computational shutter functions, and scene-space camera effects.



ElectroEncephaloGraphics

This project focuses on using electroencephalography (EEG) to analyze the human visual process. Human visual perception is becoming increasingly important in the analysis of rendering methods, animation results, interface design, and visualization techniques. Our work uses EEG data to provide concrete feedback on the perception of rendered videos and images, as opposed to user studies that just capture the user's response. Our results so far are very promising: not only have we been able to detect a reaction to artifacts in the EEG data, but we have also been able to differentiate between artifacts based on the EEG response.


Floating Textures

We present a novel multi-view, projective texture mapping technique. While previous multi-view texturing approaches lead to blurring and ghosting artifacts if 3D geometry and/or camera calibration are imprecise, we propose a texturing algorithm that warps ("floats") projected textures during run-time to preserve crisp, detailed texture appearance. Our GPU implementation achieves interactive to real-time frame rates. The method is very generally applicable and can be used in combination with many image-based rendering methods or projective texturing applications. By using Floating Textures in conjunction with, e.g., visual hull rendering, light field rendering, or free-viewpoint video, improved rendering results are obtained from fewer input images, less accurately calibrated cameras, and coarser 3D geometry proxies. In a nutshell, the notion of Floating Textures is to correct for local texture misalignments by determining the optical flow between projected textures and warping the textures accordingly in the rendered image domain. Both steps, optical flow estimation and multi-texture warping, can be efficiently implemented on graphics hardware to achieve interactive to real-time performance.
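The core of the approach can be sketched in a few lines. This is a simplified CPU illustration under assumed inputs rather than the published GPU implementation, and OpenCV's Farneback flow merely stands in for a suitable optical flow estimator: each projected texture is registered to the blended reference image and warped before the final blend.

```python
import cv2
import numpy as np

def floating_texture_blend(projected, weights):
    """Blend multi-view projected textures after flow-based realignment.
    'projected' is a list of HxWx3 uint8 images (textures already projected
    into the target view); 'weights' are per-view blend weights summing to 1."""
    # Naive weighted blend; with imprecise geometry/calibration this ghosts.
    naive = sum(w * t.astype(np.float32) for w, t in zip(weights, projected))
    ref_gray = cv2.cvtColor(naive.astype(np.uint8), cv2.COLOR_BGR2GRAY)

    h, w = ref_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    result = np.zeros_like(naive)

    for wgt, tex in zip(weights, projected):
        gray = cv2.cvtColor(tex, cv2.COLOR_BGR2GRAY)
        # Flow from the blended reference to this texture (backward mapping),
        # so each output pixel knows where to sample in the projected texture.
        flow = cv2.calcOpticalFlowFarneback(ref_gray, gray, None,
                                            0.5, 3, 21, 3, 5, 1.2, 0)
        map_x = grid_x + flow[..., 0]
        map_y = grid_y + flow[..., 1]
        # Warp ("float") the texture so its details line up before blending.
        warped = cv2.remap(tex.astype(np.float32), map_x, map_y, cv2.INTER_LINEAR)
        result += wgt * warped
    return np.clip(result, 0, 255).astype(np.uint8)
```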


Perception of Video Manipulation

Recent advances in deep learning-based techniques enable highly realistic facial video manipulations. We investigate the response of human observers to these manipulated videos in order to assess the perceived realness of modified faces and their conveyed emotions.

Facial reenactment and face swapping offer great possibilities in creative fields like the post-processing of movie materials. However, they can also easily be abused to create defamatory video content in order to hurt the reputation of the target. As humans are highly specialized in processing and analyzing faces, we aim to investigate perception towards current facial manipulation techniques. Our insights can guide both the creation of virtual actors with a high perceived realness as well as the detection of manipulations based on explicit and implicit feedback of observers.


Perception-motivated Interpolation of Image Sequences

We present a method for image interpolation which is able to create high-quality, perceptually convincing transitions between recorded images. By implementing concepts derived from human vision, the problem of a physically correct image interpolation is relaxed to an image interpolation that is perceived as physically correct by human observers. We find that it suffices to focus on exact edge correspondences, homogeneous regions, and coherent motion to compute such solutions. In our user study we confirm the visual quality of the proposed image interpolation approach. We show how each aspect of our approach increases the perceived quality of the interpolation results, compare our results with those obtained by other methods, and investigate the achieved quality for different types of scenes.


Simulating Visual Perception

The aim of this work is to simulate glaring headlights on a conventional monitor by first measuring the time-dependent effect of glare on human contrast perception and then integrating the quantitative findings into a driving simulator by adjusting the displayed contrast according to human perception.


Video Quality Assessment

The goal of this project is to assess the quality of rendered videos and especially to detect those frames that contain visible artifacts, e.g. ghosting, blurring, or popping.


Visual Fidelity Optimization of Displays

The visual experience afforded by digital displays is not identical to our perception of the genuine real world. Display resolution, refresh rate, contrast, brightness, and color gamut neither match the physics of the real world nor the perceptual characteristics of our Human Visual System. With the aid of new algorithms, however, a number of perceptually noticeable degradations on screen can be diminished or even completely avoided.



Future Lab Water

As a fundamental basis for existence as well as an irreplaceable material for many natural and technical production processes, water is an elementary resource. As in many other industries, the need for application-oriented digital innovations has increased significantly in the water industry. These need to be researched, developed and transferred into practice. The guiding vision of the Future Lab Water (Zukunftslabor Wasser - ZLW) as a new element in the Zentrum für digitale Innovation Niedersachsen (ZDIN) is:

Water resources management, water management, and the landscape of water bodies have an elementary supply function and provide indispensable ecosystem services for our society. Climate change and the inherently heterogeneous and distributed structures of water management call for digitization in order to ensure the security of supply and the quality of water as a resource in the future and to significantly improve the handling of extreme situations. As a result, there is an acute need for intelligent systems and digital solutions along all levels of digitization maturity in this area.

Contact: Jannis Malte Möller or Prof. Martin Eisemann



Digital Representations of the Real World

The book presents the state-of-the-art of how to create photo-realistic digital models of the real world. It is the result of work by experts from around the world, offering a comprehensive overview of the entire pipeline from acquisition, data processing, and modelling to content editing, photo-realistic rendering, and user interaction.


Image-space Editing of 3D Content

The goal of this project is to develop algorithms in image space that allow photo-realistic editing of dynamic 3D scenes. Traditional 2D editing tools cannot be applied to 3D video because, in addition to temporal correspondences, spatial correspondences are needed for consistent editing. In this project we analyze how to make use of the redundancy in multi-stereoscopic videos to compute robust and dense correspondence fields. These space-time correspondences can then be used to propagate changes applied to one frame consistently to all other frames of the video. Besides transferring classical video editing tools, we want to develop new tools specifically for 3D video content.

This project has been funded by ERC Grant #256941 "Reality CG" and the German Science Foundation, DFG MA2555/4-2.


Multi-Image Correspondences

Multi-view video camera setups record many images that capture nearly the same scene at nearly the same instant in time. Neighboring images in a multi-video setup restrict the solution space between two images: correspondences between one pair of images must be in accordance with the correspondences to the neighboring images.

The concept of accordance or consistency for correspondences between three neighboring images can be employed in the estimation of dense optical flow and in the matching of sparse features between three images.
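As a simplified illustration of this consistency constraint (assumed notation, not the project's exact formulation): chaining the flow from image A to B and from B to C should end at the same location as the direct flow from A to C, and large deviations flag unreliable correspondences.

```python
import numpy as np

def triple_consistency_error(flow_ab, flow_bc, flow_ac):
    """Per-pixel inconsistency of three flow fields (HxWx2, (dx, dy) order).
    Chains A->B->C via nearest-neighbor lookup of flow_bc and compares the
    end point with the direct A->C correspondence."""
    h, w, _ = flow_ab.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))

    # Position in B reached from each pixel of A.
    bx = np.clip(np.round(xs + flow_ab[..., 0]).astype(int), 0, w - 1)
    by = np.clip(np.round(ys + flow_ab[..., 1]).astype(int), 0, h - 1)

    # Continue from B to C, then compare with going directly from A to C.
    cx_chained = bx + flow_bc[by, bx, 0]
    cy_chained = by + flow_bc[by, bx, 1]
    cx_direct = xs + flow_ac[..., 0]
    cy_direct = ys + flow_ac[..., 1]

    return np.sqrt((cx_chained - cx_direct) ** 2 + (cy_chained - cy_direct) ** 2)

# Pixels whose error exceeds a threshold (e.g. 1 px) can be rejected or
# down-weighted during optical flow estimation or sparse feature matching.
```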

This work has been funded in parts by the ERC Grant #256941 "Reality CG" and the German Science Foundation, DFG MA2555/4-2.


Reality CG

The scope of "Reality CG" is to pioneer a novel approach to modelling, editing, and rendering in computer graphics. Instead of manually creating digital models of virtual worlds, Reality CG will explore new ways to achieve visual realism from the kind of approximate models that can be derived from conventional, real-world imagery as input.


Virtual Video Camera

The Virtual Video Camera research project aims to provide algorithms for rendering free-viewpoint video from asynchronous camcorder captures. We want to record our multi-video data without the need for specialized hardware or intrusive setup procedures (e.g., waving calibration patterns).



Postdigital Participation - Digital learning support through immersive technologies for people with ADHD (ImmerTec)

Research has shown that digital technologies can be utilised to remove barriers to participation in urban life and in educational spaces. But participation to what end and in what (Kelty 2020)? Following the shock of COVID-19, which destabilised many accepted ways of living and increased attention to digital technologies, there is now a unique window for problem-oriented research strategies that are 'bold, activating innovation across sectors, across actors and across disciplines' (Mazzucato 2018: 2) and that explore the aims and values of participation in a (post)digital world.

Within the Leibniz ScienceCampus 'Postdigital Participation' we investigate 'Digital learning support through immersive technologies for people with ADHD (ImmerTec)'. We design, develop, and evaluate learning support systems for people with ADHD (Attention Deficit Hyperactivity Disorder) based on VR and AR technologies.



Astrophysical Modeling and Visualization

Humans have been fascinated by astrophysical phenomena since prehistoric times. But while measurement and image acquisition devices have evolved enormously, many restrictions still apply when capturing astronomical data. The most notable limitation is our confined vantage point in the solar system, which prevents us from observing distant objects from different points of view.

In an interdisciplinary German-Mexican research project partially funded by the German DFG (Deutsche Forschungsgemeinschaft, grants MA 2555/7-1 and 444 MEX-113/25/0-1) and the Mexican CONACyT (Consejo Nacional de Ciencia y Tecnología, grants 49447 and UNAM DGAPA-PAPIIT IN108506-2), we evaluate different approaches for the automatic reconstruction of plausible three-dimensional models of planetary nebulae. The team comprises astrophysicists working on planetary nebula morphology as well as computer scientists experienced in the field of reconstruction and visualization of astrophysical objects.


Computed Tomography

The ability to look inside an object is both fascinating and insightful, and has many applications in medicine as well as in metrology. Often, one of the challenges is that the object must not be destroyed, and information about its interior can only be acquired in the form of radiographic scans, such as X-ray images. These scans inherently represent integrals rather than absolute values. The objective of the methods in this project is to reconstruct information from integrated projections efficiently and accurately. The difficulties vary from time-variance and scarcity of data in medical applications to computational cost and memory footprint in metrology. We cooperate with industry to find new methods for solving these problems.
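As a minimal, assumed illustration of reconstructing from integral measurements (not the specific solvers developed in this project): each radiographic measurement is a weighted sum of voxel values, which yields a linear system that an algebraic technique such as the Kaczmarz method (ART) solves row by row.

```python
import numpy as np

def art_reconstruct(A, b, iterations=50, relaxation=0.5):
    """Algebraic Reconstruction Technique (Kaczmarz) for A x = b, where each
    row of A encodes one ray integral (weights of the voxels it passes) and
    b holds the measured projection values."""
    x = np.zeros(A.shape[1])
    row_norms = np.sum(A * A, axis=1)
    for _ in range(iterations):
        for i in range(A.shape[0]):
            if row_norms[i] > 0:
                residual = b[i] - A[i] @ x
                x += relaxation * residual / row_norms[i] * A[i]
    return x

# Toy example: 4 "voxels", 3 ray integrals (underdetermined, as in practice).
A = np.array([[1.0, 1.0, 0.0, 0.0],    # ray through voxels 0 and 1
              [0.0, 0.0, 1.0, 1.0],    # ray through voxels 2 and 3
              [1.0, 0.0, 1.0, 0.0]])   # ray through voxels 0 and 2
x_true = np.array([1.0, 2.0, 3.0, 4.0])
b = A @ x_true
print(art_reconstruct(A, b))            # converges to a solution consistent with b
```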


Radio Astronomy Synthesis Imaging

Radio interferometers sample an image of the sky in the spatial frequency domain. Reconstructing the image from a necessarily incomplete set of samples is an ill-posed inverse problem that we address with methods inspired by the theory of compressed sensing.
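A minimal sketch of this compressed-sensing viewpoint under strong simplifications (a regular Fourier sampling mask instead of actual uv-gridding, and plain iterative soft-thresholding rather than the algorithms developed for the VLA): the sky image is recovered from incomplete visibilities by enforcing sparsity.

```python
import numpy as np

def ista_reconstruct(visibilities, mask, lam=0.01, step=1.0, iterations=200):
    """Recover a sky image x from incomplete Fourier samples by approximately
    solving min ||M F x - y||^2 + lam * ||x||_1 with iterative soft-thresholding.
    'mask' is a boolean array of sampled (u, v) cells, 'visibilities' holds the
    measured values on those cells (zero elsewhere)."""
    x = np.zeros(mask.shape)
    for _ in range(iterations):
        residual = mask * (np.fft.fft2(x) - visibilities)        # data misfit in uv-space
        grad = np.real(np.fft.ifft2(residual))                   # gradient of the data term
        x = x - step * grad
        x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0) # soft threshold
    return x

# Toy usage: a sparse "sky" observed on roughly 20% of the uv-plane.
rng = np.random.default_rng(0)
sky = np.zeros((64, 64))
sky[rng.integers(0, 64, 5), rng.integers(0, 64, 5)] = 1.0
mask = rng.random((64, 64)) < 0.2
vis = mask * np.fft.fft2(sky)
estimate = ista_reconstruct(vis, mask)
```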

During two research visits to the National Radio Astronomy Observatory (NRAO) and the University of New Mexico, both gratefully funded by the Alexander von Humboldt Foundation, we had the unique opportunity to work together with world-leading experts in radio astronomy synthesis imaging to develop new algorithms for the Very Large Array (VLA) and other radio telescope arrays.



Accelerating Photo-realistic RT

The goal of this research project is to develop and evaluate new approaches to accelerate photo-realistic ray tracing. Our focus lies on novel acceleration and denoising strategies for fast and memory-efficient photo-realistic rendering. Our research covers various topics from basic research for fast intersection tests to advanced filtering techniques of Monte Carlo simulation-based rendering.


Computer Vision Algorithms for the DARPA Urban Challenge 2007

TU Braunschweig participated in the DARPA Urban Challenge 2007; its autonomous vehicle 'Caroline' was among the finalists. The Computer Graphics Lab provided the real-time vision algorithms for this task.

Caroline's computer vision system consists of two separate subsystems. The first is a monocular, color-segmentation-based system that classifies the ground in front of the car as drivable, undrivable, or unknown. It assists in situations where the drivable terrain and the surrounding area (e.g. grass, concrete, or shrubs) differ in color, and it deals with man-made artifacts such as lane markings as well as bad lighting and weather conditions. The second is a multi-view lane detection system that identifies the different kinds of lanes described by DARPA, such as broken and continuous as well as white and yellow lane markings. Using four high-resolution color cameras and state-of-the-art graphics hardware, it detects the car's own lane and the two adjacent lanes to the left and right within a field of view of 175 degrees at up to 35 meters. The output of the lane detection algorithm is directly processed by the artificial intelligence.
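A heavily simplified, assumed illustration of the color-segmentation idea (not Caroline's actual real-time system): a color model is estimated from an image patch assumed to show road directly in front of the vehicle, and every pixel is labeled by its Mahalanobis distance to that model. Thresholds and names are illustrative only.

```python
import numpy as np

# Illustrative thresholds; the real system used more elaborate models
# and ran on graphics hardware.
DRIVABLE_T, UNDRIVABLE_T = 3.0, 6.0

def classify_drivable(image, road_patch):
    """Label each pixel as drivable (1), unknown (0) or undrivable (-1) based
    on the Mahalanobis distance to a color model of 'road_patch', an HxWx3
    float region assumed to show road surface; 'image' is HxWx3 float."""
    samples = road_patch.reshape(-1, 3)
    mean = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False) + 1e-6 * np.eye(3)   # regularized covariance
    inv_cov = np.linalg.inv(cov)

    diff = image.reshape(-1, 3) - mean
    dist = np.sqrt(np.einsum('ij,jk,ik->i', diff, inv_cov, diff))

    labels = np.zeros(dist.shape, dtype=np.int8)
    labels[dist < DRIVABLE_T] = 1
    labels[dist > UNDRIVABLE_T] = -1
    return labels.reshape(image.shape[:2])
```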


Lunar Surface Relief Reconstruction

Our "Astrographics" research group works on various methods to overcome the difficulties associated with gaining knowledge about faraway astronomical objects using computer vision and computer graphics algorithms. In this project, we have computed plausible 3D surface data for the Moon from photographic imagery of the 1960s "Lunar Orbiter" mission.


Multiple Kinect Studies

This project investigates multi-camera setups using Microsoft Kinects. The active structured light of the Kinect is used in several scenarios, including gas flow description, motion capture, and free-viewpoint video.

While the ability to capture depth alongside color data (RGB-D) is the starting point of the investigations, the structured light is also used more directly. In order to combine Kinects with passive recording approaches, common calibration with HD cameras is also a topic.


Photo Zoom

We present a system to automatically construct high-resolution images from an unordered set of low-resolution photos. It consists of an automatic preprocessing step that establishes correspondences between any given photos. The user may then choose one image, and the algorithm automatically creates a higher-resolution result, several octaves larger, up to the desired resolution. Our recursive creation scheme allows transferring specific details at subpixel positions of the original image. It adds plausible details to regions not covered by any of the input images and eases the acquisition of large-scale panoramas spanning different resolution levels.
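A minimal sketch of the detail-transfer step under simplifying assumptions (a single, already registered higher-resolution source instead of the automatic correspondence search, one channel, and a Gaussian band split): the chosen image is upsampled and the missing high frequencies are copied from the aligned higher-resolution photo.

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def transfer_detail(low_res, high_res_aligned, factor=2, sigma=1.5):
    """Upsample 'low_res' by 'factor' and add the high-frequency band of
    'high_res_aligned', a registered photo that already has the target
    resolution. Both are 2D float arrays (one channel for simplicity)."""
    upsampled = zoom(low_res, factor, order=3)       # bicubic-like upsampling

    # Crop both images to a common size in case the shapes differ by a pixel.
    h = min(upsampled.shape[0], high_res_aligned.shape[0])
    w = min(upsampled.shape[1], high_res_aligned.shape[1])
    upsampled = upsampled[:h, :w]
    hr = high_res_aligned[:h, :w]

    detail = hr - gaussian_filter(hr, sigma)          # high-frequency band of the source
    return upsampled + detail                         # plausible added detail
```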


Physics-based Rendering

In this project, novel techniques to measure different light-matter interaction phenomena are developed in order to provide new models, or verify existing ones, for rendering physically correct images.


Scalable Visual Analytics

The goal of this research project is to develop and evaluate a fundamentally new approach to exhaustively search for, and interactively characterize, any non-random mutual relationship between attribute dimensions in general data sets. To be able to systematically consider all possible attribute combinations, we propose to apply image analysis to visualization results in order to automatically pre-select only those attribute combinations featuring non-random relationships. To characterize the found information and to build mathematical descriptions, we rely on interactive visual inspection and visualization-assisted interactive information modeling. This way, we intend to discover and explicitly characterize all information implicitly represented in unbiased sets of multi-dimensional data points.


Visual Computing Workshop, June 10—11, 2010

The Visual Computing Workshop is part of the Symposium on Visual Computing and Speech Processing, presented by TU Braunschweig's Center for Informatics and Information Technology (tubs.CITY). The workshop is supported by the Gesellschaft für Informatik, Fachbereich Graphische Datenverarbeitung (GI FB-GDV).


Who Cares?

Official music video "Who Cares" by Symbiz Sound; the first major production using our Virtual Video Camera.

Dubstep, spray cans, brush, and paint join forces and unite with the latest digital production techniques. All imagery depicts live-action graffiti and performance. Camera motion was added in post-production using the Virtual Video Camera.