Computer Graphics
TU Braunschweig


Conference Vision, Modeling, and Visualization

27.09.2023 13:00 - 29.09.2023 12:00
Braunschweig, Germany

Chair(s): Marcus Magnor, Martin Eisemann, Susana Castillo


Talk Learned Optics — Improving Computational Imaging Systems through Deep Learning and Optimization

22.05.2023 13:00
IZ G30

Speaker(s): Wolfgang Heidrich

Computational imaging systems are based on the joint design of optics and associated image reconstruction algorithms. Historically, many such systems have employed simple transform-based reconstruction methods. Modern optimization methods and priors can drastically improve the reconstruction quality in computational imaging systems. Furthermore, learning-based methods can be used to design the optics along with the reconstruction method, yielding truly end-to-end optimized imaging systems that outperform classical solutions.
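The core idea of end-to-end optimization can be illustrated with a toy, dependency-free sketch: a "camera" blurs a 1D signal with a 3-tap kernel controlled by an optics parameter, a reconstruction stage sharpens it, and both parameters are optimized jointly so that the final reconstruction, not the raw measurement, matches the scene. The forward model, the unsharp-masking reconstruction, and all names here are illustrative assumptions, not the speaker's actual systems.

```python
# Toy end-to-end imaging pipeline: jointly optimize an "optics" parameter a
# and a "reconstruction" parameter b via finite-difference gradient descent.
import random

def blur(signal, a):
    """Optical forward model: circular symmetric 3-tap kernel [a, 1-2a, a]."""
    n = len(signal)
    return [a * signal[(i - 1) % n] + (1 - 2 * a) * signal[i] + a * signal[(i + 1) % n]
            for i in range(n)]

def sharpen(meas, b):
    """Reconstruction: unsharp masking with strength b."""
    n = len(meas)
    return [(1 + 2 * b) * meas[i] - b * (meas[(i - 1) % n] + meas[(i + 1) % n])
            for i in range(n)]

def loss(params, scenes):
    """Mean squared error of the full camera-plus-reconstruction pipeline."""
    a, b = params
    total = 0.0
    for s in scenes:
        rec = sharpen(blur(s, a), b)
        total += sum((r - x) ** 2 for r, x in zip(rec, s)) / len(s)
    return total / len(scenes)

def optimize(scenes, steps=200, lr=0.05, eps=1e-4):
    params = [0.25, 0.0]          # initial optics and reconstruction settings
    for _ in range(steps):
        grads = []
        for i in range(2):        # central finite-difference gradient
            p_hi = list(params); p_hi[i] += eps
            p_lo = list(params); p_lo[i] -= eps
            grads.append((loss(p_hi, scenes) - loss(p_lo, scenes)) / (2 * eps))
        params = [p - lr * g for p, g in zip(params, grads)]
    return params

random.seed(0)
scenes = [[random.random() for _ in range(32)] for _ in range(4)]
before = loss([0.25, 0.0], scenes)
after = loss(optimize(scenes), scenes)   # end-to-end loss drops below `before`
```

The point of the sketch is that the gradient flows through both stages at once: the optimizer is free to trade a weaker blur against a stronger sharpening filter, which is the essence of co-designing optics and reconstruction.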

Wolfgang Heidrich is a Professor of Computer Science and Electrical and Computer Engineering in the KAUST Visual Computing Center, for which he also served as director from 2014 to 2021. Prof. Heidrich joined King Abdullah University of Science and Technology (KAUST) in 2014, after 13 years as a faculty member at the University of British Columbia. He received his PhD from the University of Erlangen in 1999 and then worked as a Research Associate in the Computer Graphics Group of the Max-Planck-Institute for Computer Science in Saarbrücken, Germany, before joining UBC in 2000. Prof. Heidrich's research interests lie at the intersection of imaging, optics, computer vision, computer graphics, and inverse problems. His more recent interest is in computational imaging, focusing on hardware-software co-design of the next generation of imaging systems, with applications such as high dynamic range imaging, compact computational cameras, and hyperspectral cameras, to name just a few. Prof. Heidrich's work on High Dynamic Range Displays served as the basis for the technology behind Brightside Technologies, which was acquired by Dolby in 2007. Prof. Heidrich is a Fellow of the IEEE and Eurographics, and the recipient of a Humboldt Research Award.

Talk Promotions-Vorvortrag: Fast and Efficient Artifact Correction for CT Reconstruction

17.03.2023 13:00 - 17.03.2023 14:00
IZ G30

Speaker(s): Markus Wedekind

CT reconstruction is a highly studied field in image processing that aims to reconstruct 3D images from radiographic projections. In industrial CT in particular, datasets are of substantially higher resolution than in conventional medical applications, and the metrological accuracy of computed results is of great importance. This places high demands on the computational performance of reconstruction algorithms. It also imposes a need to compensate for an abundance of artifacts that arise when physical effects during the acquisition of projections are not accounted for. In this dissertation, we present several techniques that combat such artifacts in CT reconstruction.

Firstly, we devise a method for reducing or eliminating stair artifacts that occur when polygonizing surface meshes from voxel grids that have been reconstructed using CT. We employ the ability of the commonly used filtered backprojection technique to reconstruct infinitesimal voxels at arbitrary positions of the volume, and use it to circumvent the interpolation of sub-voxel data that leads to the stair artifacts in the polygonization.

Additionally, we seek to reduce ring artifacts in reconstructed volumes. These artifacts stem from incorrect normalization of detector screen pixels and particularly affect voxels near the axis of rotation in circular scans. We reduce them by physically correct modelling of the flat-field errors that lead to the emergence of the artifacts. Simultaneously, we demonstrate a computationally efficient way to implement our method in an existing CT reconstruction pipeline.

Finally, the challenge of compensating for geometric calibration errors is addressed. In the case of truncated projections, we develop and evaluate methods for calibration correction with limited or no data redundancy. We consider and examine methods operating in both the projection domain and the image domain.
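For context, the flat-field normalization whose errors the abstract links to ring artifacts is usually written as I = (raw − dark) / (flat − dark). The sketch below shows this standard open-beam correction for one detector row; the function name and the toy values are illustrative, not the dissertation's actual implementation.

```python
# Hedged sketch of standard flat-field (open-beam) normalization for CT.
def flat_field_correct(raw, dark, flat):
    """Normalize one projection row: I = (raw - dark) / (flat - dark).

    raw  -- measured detector values with the object in the beam
    dark -- detector offset, measured with the X-ray source off
    flat -- open-beam reference, measured without the object

    A per-pixel error in `flat` rescales the same detector column in every
    projection; backprojection then smears that consistent bias into a ring
    around the rotation axis.
    """
    return [(r - d) / (f - d) for r, d, f in zip(raw, dark, flat)]

raw = [80.0, 60.0, 90.0]
dark = [10.0, 10.0, 10.0]
flat = [110.0, 110.0, 110.0]
corrected = flat_field_correct(raw, dark, flat)  # [0.7, 0.5, 0.8]
```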

Talk BA-Talk: Selection in Scatter Plots

15.03.2023 13:00

Speaker(s): Richard Neumann

Talk Disputation: Investigating the Perceived Authenticity and Communicative Abilities of Face-Swapped Portrait Videos

17.02.2023 13:00

Speaker(s): Leslie Wöhler

Modern deep learning approaches allow for the automatic creation of highly realistic face-swapped videos. In these videos, recordings of two people are combined in a way that the face of a source person is applied to the video of a target person. This way, the resulting video obtains the facial identity of the source while keeping the body appearance, movements, and facial expressions of the target person. Thanks to their high degree of realism and automation of the generation process, face swaps are a valuable tool for creative and communicative scenarios. However, they could also be abused for criminal activities as they allow the impersonation of others and the generation of manipulated video content.

While many works focus on improving algorithms for the creation and detection of face swaps, there is only limited research on the perception of these modern video manipulations. As humans are very sensitive to changes and imbalances in facial representations, in my thesis I set out to investigate the perception of face swaps. In doing so, I focus on two areas: the perceived authenticity and the communicative abilities of face swaps.
To assess the quality and detectable cues in face swap videos, I examine whether humans can detect face swaps and which artifacts and facial areas are most important to detect manipulations using self-reports and eye tracking data. Furthermore, I discuss the perception of the conveyed emotions and personalities of face swaps to evaluate their usefulness as digital avatars in communicative scenarios. In order to perform reliable experiments and evaluations, I additionally introduce a novel dataset of face swaps designed for perceptual experiments as well as an eye tracking framework which enables the automatic generation of areas of interest in portrait videos.

The results of the experiments performed in this thesis indicate that modern face swaps are generally convincing and often mistaken for genuine videos.
While participants were able to report visible artifacts, they usually attributed them to video quality and did not suspect face swapping. The eye tracking data, on the other hand, revealed significant differences in viewing behavior between genuine and manipulated videos. This may indicate that some differences are perceived, but only subconsciously. Furthermore, my experiments show that face swaps are able to convey emotions and personality, which makes them useful in communicative scenarios such as digital avatars.

Talk BA-Talk: Wavelet-based Foveated Rendering of Videos in Virtual Reality

28.11.2022 13:00 - 28.11.2022 13:30

Speaker(s): Christopher Graen

Talk MA-Talk: Point Cloud Scene Representations for Free-Viewpoint Synthesis

14.10.2022 13:00 - 14.10.2022 14:00
IZ G30

Speaker(s): Florian Hahlbohm

Talk Promotions-Vorvortrag: Investigating the Perceived Authenticity and Communicative Abilities of Face-Swapped Portrait Videos

07.10.2022 13:00

Speaker(s): Leslie Wöhler

Talk MA-Talk: Development of a Segmentation and Classification Method for Spatial and Temporal Ionization Tracks of a Timepix3 Detector

12.09.2022 13:00 - 12.09.2022 14:00
IZ G30 and online

Speaker(s): Felix Lehner

Talk MA-Talk: Implementation of Unbiased Estimators for Boundary Integrals in Differentiable Rendering

08.08.2022 13:00 - 08.08.2022 13:45

Speaker(s): Leon Schütze

Talk MA-Talk: Development of a Parametric Microfacet-Based BRDF Model that Approximates Arbitrary Explicit Heightfields

23.06.2022 14:00

Speaker(s): Jan-Frederick Musiol

Talk Promotions-Vorvortrag: Measuring by Simulation - Efficient Inverse Problem Solutions with Differentiable Rendering

17.06.2022 13:00

Speaker(s): Marc Kassubeck

Talk Promotions-Vorvortrag: Ego-Motion Aware Immersive Rendering from Real-World Recorded Panorama Videos

03.06.2022 13:00

Speaker(s): Moritz Mühlhausen

Moritz Mühlhausen is presenting his dissertation pre-talk Ego-Motion Aware Immersive Rendering from Real-World Recorded Panorama Videos on Friday, June 3, at 1 pm.

Talk BA-Talk: Automatic patch sampling for GANs through image retrieval

16.05.2022 10:00

Speaker(s): Reiko Lettmoden

Talk Invited Talks: Layered Weighted Blended Order-Independent Transparency and AR Carcassonne

05.05.2022 13:00

Speaker(s): Fabian Friederichs, Jannis Malte Möller

First talk: Layered Weighted Blended Order-Independent Transparency
Speaker: Fabian Friederichs
The presented approach improves the accuracy of weighted blended order-independent transparency while remaining efficient and easy to implement. The original algorithm is extended to a layer-based approach in which the content of each layer is blended independently before the layers are composited globally. We thereby achieve a partial ordering while avoiding explicit sorting of all elements. To ensure smooth transitions across layers, we introduce a new weighting function. Additionally, we propose several optimizations and demonstrate the method's effectiveness on various scenes that are challenging in terms of geometric and depth complexity. For our test scenes, we achieve an error reduction of more than an order of magnitude on average compared to weighted blended order-independent transparency.
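The baseline being improved here, McGuire and Bavoil's weighted blended OIT, can be sketched for a single pixel as below. The layered extension and the talk's new weighting function are not reproduced; the depth-based `weight` function is one common illustrative choice, not the authors' formulation, and colors are scalars for brevity.

```python
# Hedged single-pixel sketch of weighted blended order-independent
# transparency: accumulate weighted premultiplied colors in any order,
# then composite the weighted average over the background by revealage.
def weight(depth, alpha):
    # Illustrative weight, monotonically decreasing in depth so that
    # nearer fragments dominate the weighted average.
    return alpha * max(1e-2, 3e3 * (1.0 - depth) ** 3)

def weighted_blended_oit(fragments, background):
    """fragments: iterable of (color, alpha, depth), in any order."""
    accum = 0.0      # sum of w_i * a_i * c_i (premultiplied color)
    accum_a = 0.0    # sum of w_i * a_i
    revealage = 1.0  # product of (1 - a_i): how much background shows through
    for color, alpha, depth in fragments:
        w = weight(depth, alpha)
        accum += w * alpha * color
        accum_a += w * alpha
        revealage *= (1.0 - alpha)
    if accum_a == 0.0:
        return background
    avg = accum / accum_a
    return avg * (1.0 - revealage) + background * revealage

# Order independence: shuffling the fragments leaves the result unchanged,
# because only commutative sums and products are accumulated.
frags = [(1.0, 0.5, 0.2), (0.0, 0.5, 0.6)]
a = weighted_blended_oit(frags, background=0.0)
b = weighted_blended_oit(list(reversed(frags)), background=0.0)
```

Because the per-fragment terms commute, the method needs no sorting; the accuracy cost of that shortcut is exactly what the layered extension in the talk targets.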
Second Talk: AR Carcassonne - Extending a social game through projection mapping and traditional image processing
Speaker: Jannis Malte Möller
Projection mapping offers the possibility to extend the real world with virtual content and to experience it together as a group. In contrast to head-mounted displays, projection mapping is much less invasive, so the barrier to using it is lower. In this talk, I discuss an exemplary implementation of an extension of the popular board game Carcassonne through projection mapping, and the underlying technology considerations for robust recognition.


Talk Disputation: Computer Graphics from a Bio-Signal Perspective - Exploration of Autonomic Human Physiological Responses to Synthetic and Natural Imagery

29.04.2022 10:30
IZ 812

Speaker(s): Jan-Philipp Tauscher

Talk Teamprojekt-Abschluss: Special Effects with Video Matching

29.03.2022 13:00

Presentation of the results of the student team project.

Talk BA-Talk: Performance Analysis and Comparison of Differentiable Rendering Systems

28.03.2022 13:30 - 28.03.2022 14:00

Speaker(s): Domenik Jaspers

Talk BA-Talk: Recognition and Mapping of Emotions into the Semantic Space Using Deep Learning

28.03.2022 13:00 - 28.03.2022 13:30

Speaker(s): Bill Matthias Thang

Talk BA-Talk: Neural Radiance Fields: A Systematic Review and Outlook on Further Developments

16.03.2022 13:00

Speaker(s): Lars Christian Lund

Talk MA-Talk: Visualization of Scientific Data in Multi-User Augmented Reality

23.02.2022 15:30

Speaker(s): Jan Wulkop

Talk BA-Talk: Evaluation of Open-Source Experiment Management Systems for Supporting University Research

23.09.2021 17:00

Talk Promotions-Vorvortrag: Computer Graphics from a Bio-Signal Perspective - Exploration of Autonomic Human Physiological Responses to Synthetic and Natural Imagery

30.07.2021 13:00

JP Tauscher is presenting his dissertation pre-talk Computer Graphics from a Bio-Signal Perspective - Exploration of Autonomic Human Physiological Responses to Synthetic and Natural Imagery on Friday, July 30, at 1 pm.


The impact of graphics on our perception is usually measured by asking users to complete self-assessment questionnaires. These psycho-physical rating scales and questionnaires reflect a subjective opinion through conscious responses, but may be voluntarily or involuntarily biased and usually do not provide real-time feedback. Subjects may also have difficulty communicating their opinion, because a rating scale may not reflect their intrinsic perception or may be biased by external factors such as mood, expectation, past experience, or even problems of task definition and understanding.

In this thesis, we investigate how the human body reacts involuntarily to computer-generated as well as real-world image content. Here, we add a whole new range of modalities to our perception quantification apparatus to abstract from subjective ratings towards objective bodily measures. These include electroencephalography (EEG), eye tracking, galvanic skin response (GSR), and cardiac and respiratory data. We seek to explore the gap between what humans consciously see and what they implicitly perceive when consuming generated and natural content. We include different display technologies ranging from traditional monitors to virtual reality (VR) devices commonly used to present computer graphical content.

This thesis shows how the human brain and the autonomic nervous system react to visual stimuli, and how these bio-signals can be reliably measured to analyse and quantify the immediate physiological reactions to certain aspects of generated and natural graphical content. We advance the current frontiers of perceptual graphics towards novel measurement and analysis methods for immediate and involuntary physiological reactions.

Talk BA-Talk: Real-time high-resolution playback of 360° stereoscopic videos in virtual reality

16.07.2021 13:00

Speaker(s): Nikkel Heesen

Talk MA-Talk: Video Object Segmentation for Omnidirectional Stereo Panoramas

14.05.2021 13:00

Speaker(s): Fan Song