Computer Graphics
TU Braunschweig

Events


Talk Dissertation Pre-Talk: Computer graphics from a bio-signal perspective - Exploration of Autonomic Human Physiological Responses to Synthetic and Natural Imagery

30.07.2021 13:00
Online

JP Tauscher is presenting his dissertation pre-talk "Computer graphics from a bio-signal perspective - Exploration of Autonomic Human Physiological Responses to Synthetic and Natural Imagery" on Friday, July 30, at 1 pm.

http://webconf.tu-bs.de/mar-3vy-aef


The impact of graphics on our perception is usually measured by asking users to complete self-assessment questionnaires. These psycho-physical rating scales and questionnaires capture subjective opinions through conscious responses, but they may be voluntarily or involuntarily biased and usually do not provide real-time feedback. Subjects may also have difficulties communicating their opinion because a rating scale may not reflect their intrinsic perception, or may be biased by external factors such as mood, expectation, past experience, or even problems of task definition and understanding.


In this thesis, we investigate how the human body reacts involuntarily to computer-generated as well as real-world image content. We add a whole new range of modalities to our perception quantification apparatus, moving from subjective ratings towards objective bodily measures: electroencephalography (EEG), eye tracking, galvanic skin response (GSR), and cardiac and respiratory data. We seek to explore the gap between what humans consciously see and what they implicitly perceive when consuming generated and natural content. We include different display technologies, ranging from traditional monitors to the virtual reality (VR) devices commonly used to present computer graphics content.


This thesis shows how the human brain and the autonomic nervous system react to visual stimuli, and how these bio-signals can be reliably measured to analyse and quantify the immediate physiological reactions towards certain aspects of generated and natural graphical content. We advance the current frontiers of perceptual graphics towards novel measurement and analysis methods for immediate and involuntary physiological reactions.
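The abstract does not spell out concrete signal-processing pipelines, so the following is only a minimal Python sketch of the kind of objective features such modalities yield; function names, sampling rates, and thresholds are hypothetical, not values from the thesis.

import numpy as np
from scipy.signal import welch, find_peaks

def eeg_alpha_power(eeg, fs):
    """Mean spectral power of one EEG channel in the alpha band (8-13 Hz)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)  # 2-second analysis windows
    band = (freqs >= 8) & (freqs <= 13)
    return psd[band].mean()

def scr_count(gsr, fs, min_amplitude=0.05):
    """Count skin conductance responses (phasic peaks) in a GSR trace.
    The amplitude threshold here is a placeholder."""
    # Remove the slow tonic level with a coarse moving average, leaving
    # the fast phasic component in which SCRs appear as peaks.
    kernel = np.ones(fs * 4) / (fs * 4)
    phasic = gsr - np.convolve(gsr, kernel, mode="same")
    peaks, _ = find_peaks(phasic, height=min_amplitude, distance=fs)
    return len(peaks)

# Synthetic 60 s recordings: EEG at 256 Hz, GSR at 32 Hz.
rng = np.random.default_rng(0)
eeg = rng.standard_normal(256 * 60)
gsr = np.cumsum(rng.standard_normal(32 * 60)) * 0.001 + 5.0
print(eeg_alpha_power(eeg, 256), scr_count(gsr, 32))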

Talk BA-Talk: Real-time high-resolution playback of 360° stereoscopic videos in virtual reality

16.07.2021 13:00
Online: https://webconf.tu-bs.de/mar-3vy-aef

Speaker(s): Nikkel Heesen

Talk MA-Talk: Video Object Segmentation for Omnidirectional Stereo Panoramas

14.05.2021 13:00
Online: https://webconf.tu-bs.de/mar-3vy-aef

Speaker(s): Fan Song

Talk MA-Talk: Functional Volumetric Rendering for Industrial Applications

07.05.2021 13:00
Online: https://webconf.tu-bs.de/mar-3vy-aef

Speaker(s): Jan-Christopher Schmidt

Talk Team Project Presentation: Schoduvel in the Dome

31.03.2021 13:15
Dome (recording studio & visualization lab) / Online

Presentation of the results of the student team project.

Talk MA-Talk: Temporal Coherent Relighting in Portrait Videos from Neural Textures

29.03.2021 13:00
Online: https://webconf.tu-bs.de/mar-3vy-aef

Speaker(s): Jann-Ole Henningson

Talk BA-Talk: Combating Motion Sickness in VR through Dynamic Bipolar Galvanic Vestibular Stimulation

12.03.2021 13:00
Online: https://webconf.tu-bs.de/mar-3vy-aef

Speaker(s): Max Hattenbach

Talk BA-Talk: Neural Rendering - Perception-Based Evaluation of Depth Impression in VR

25.01.2021 13:00
Online: https://webconf.tu-bs.de/mar-3vy-aef

Speaker(s): Yannic Rühl

Talk PhD Defense

22.01.2021 10:00
Online

Speaker(s): Steve Grogorick

Guiding Visual Attention in Immersive Environments

Talk BA-Talk: Design of an Interactive Simulation for Learning Star Constellations in Public Planetariums

08.12.2020 13:00
Planetarium Wolfsburg

Speaker(s): Lars Richard

Talk Dissertation Pre-Talk: Guiding Visual Attention in Immersive Environments

30.10.2020 13:00
Online

The growing popularity of virtual reality (VR) technology, which presents content virtually all around the user, creates new challenges for digital content creators and presentation systems. In this dissertation, we investigate how to help viewers avoid missing important information when exploring unknown virtual environments. We examine different visual stimuli for guiding viewers' attention towards predetermined target regions of the surrounding environment. To preserve the original visual appearance of scenes as far as possible, we aim for subtle visual modifications that operate as close as possible to viewers' perception threshold while still providing effective guidance.


In a first approach, we identify issues that prevent existing visual guidance stimuli from being effective in VR environments. For use in large field-of-view (FOV) head-mounted displays (HMDs), we derive techniques to handle perspective distortions, the degradation of visual acuity in the peripheral visual field, and target regions outside the initial FOV. An existing visual stimulus, originally conceived for desktop environments, is adapted accordingly and successfully evaluated in a perceptual study.
Subsequently, the generalizability of these extension techniques is investigated with regard to different guidance methods and VR devices. For this, additional methods from related work are re-implemented and updated accordingly. Two comparable perceptual studies are conducted to evaluate their effectiveness within a consumer-grade HMD and in an immersive dome projection system covering almost the full human visual field. Regardless of the actual success rates, all of the tested methods show a measurable effect on participants' viewing behavior, indicating the general applicability of our modification techniques to various guidance methods and VR systems.
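The thesis's actual stimulus adaptations are not detailed in this abstract; as a rough sketch of one ingredient, the hypothetical helpers below scale stimulus contrast with the target's eccentricity from the current gaze direction, so the stimulus can stay near the eccentricity-dependent detection threshold. All parameter values are illustrative only.

import numpy as np

def stimulus_gain(eccentricity_deg, base_gain=0.02, e2=2.3):
    # Contrast grows linearly with eccentricity, loosely following
    # cortical magnification models; base_gain and e2 are placeholders.
    return base_gain * (1.0 + eccentricity_deg / e2)

def target_eccentricity(gaze_dir, target_dir):
    # Angle in degrees between gaze and target, given as unit vectors.
    cosang = np.clip(np.dot(gaze_dir, target_dir), -1.0, 1.0)
    return np.degrees(np.arccos(cosang))

gaze = np.array([0.0, 0.0, -1.0])
target = np.array([0.5, 0.0, -0.866])   # roughly 30 degrees off-axis
print(stimulus_gain(target_eccentricity(gaze, target)))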


Finally, a novel visual guidance method (SIBM) is created, specifically designed for immersive systems. It builds on opposing manipulations of the two stereoscopic frames in VR rendering systems, turning the inevitable overhead of double (per-eye) rendering into an advantage that is not available in monocular systems. Moreover, by exploiting our visual system's sensitivity to discrepancies in binocular input, it allows the required per-image contrast of the actual stimulus to be reduced noticeably below the previous state of the art.
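The exact SIBM formulation is not reproduced in this abstract; the sketch below only illustrates the underlying principle of opposing per-eye manipulations, with hypothetical names and an arbitrary amplitude.

import numpy as np

def opposing_modulation(left, right, mask, amplitude=0.03):
    # Add opposite-sign luminance offsets to the left- and right-eye
    # images inside a target region (mask and images in [0, 1]).  The
    # offsets largely cancel under binocular fusion, so each single image
    # needs only a small contrast change while the binocular discrepancy
    # remains detectable.
    delta = amplitude * mask[..., None]   # broadcast over RGB channels
    return np.clip(left + delta, 0, 1), np.clip(right - delta, 0, 1)

# Usage: a soft circular target region on a uniform grey stereo pair.
h, w = 512, 512
left = np.full((h, w, 3), 0.5)
right = np.full((h, w, 3), 0.5)
yy, xx = np.mgrid[0:h, 0:w]
mask = np.exp(-((xx - 380.0) ** 2 + (yy - 256.0) ** 2) / (2 * 40.0 ** 2))
left_mod, right_mod = opposing_modulation(left, right, mask)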

Talk SEP Presentation: Massively distributed collaborative crowd input system for dome environments

31.08.2020 13:00
Dome (recording studio & visualization lab)

Presentation of the results of the student software development lab course (SEP).

Talk BA-Talk: Eye Tracking Analysis Framework for Video Portraits

28.08.2020 13:00
Online

This final talk will be streamed online:

https://webconf.tu-bs.de/mar-3vy-aef

Talk BA-Talk: Implementing Dynamic Stimuli in VR Environments for Visual Perception Research

04.08.2020 15:00
Dome (recording studio & visualization lab)

Speaker(s): Mai Hellmann

Talk Lab Course Presentation: Creating an interactive VR adventure for the ICG Dome

05.06.2020 13:30
Dome (recording studio & visualization lab)

Presentation of the results of the student computer graphics lab course (MA).
(A follow-up to the computer graphics lab course (BA) project from SS '19.)

Talk Team Project Presentation: Our Little Planetarium

05.06.2020 13:00
Dome (recording studio & visualization lab)

Presentation of the results of the student team project.

Talk MA-Talk: Automatic Face Re-enactment in Real-World Portrait Videos to Manipulate Emotional Expression

24.04.2020 13:15
Online: https://webconf.tu-bs.de/jan-n7t-j7a

Speaker(s): Colin Groth

Talk PhD defense: Reconstructing 3D Human Avatars from Monocular Images

13.03.2020 10:00
Informatikzentrum IZ161

Speaker(s): Thiemo Alldieck

Talk VASC Seminar: Reconstructing 3D Human Avatars from Monocular Images

17.01.2020 16:00
Carnegie Mellon University, Pittsburgh, PA, USA

Speaker(s): Thiemo Alldieck

https://www.ri.cmu.edu/event/reconstructing-3d-human-avatars-from-monocular-images/

Statistical 3D human body models have helped us to better understand human shape and motion and have already enabled exciting new applications. However, if we want to learn detailed, personalized, and clothed models of human shape, motion, and dynamics, we require new approaches that learn from ubiquitous data such as plain RGB images and video. I will discuss recent advances in personalized body shape and clothing estimation from monocular video, from a few frames, and even from a single image. We developed effective methods to learn detailed avatars without the need for expensive scanning equipment. These methods are easy to use and enable personalized avatar creation, for example for VR and AR applications. I will conclude my talk by outlining the next challenges in human shape reconstruction.

Talk MA-Talk: Occlusion Aware Iterative Optical Flow Refinement for High Resolution Images

17.01.2020 11:00
Seminarraum G30

Speaker(s): Alexander Manegold

In the field of optical flow estimation, many different approaches exist. Most of the newest published methods use some kind of Convolutional Neural Network (CNN). These CNNs often have high graphics hardware requirements, which scale with the size of the input images. High-resolution images or panoramas can consequently often not be processed at full resolution. The PanoTiler offers an image tiling strategy that can be used to estimate the optical flow piecewise using arbitrary CNNs and then merge the individual flow tiles. Its advantage over simple tiling techniques is the use of multiple resolution levels, which allows better-matching tile pairs to be found between source and target images. Although the original PanoTiler yields good optical flow results for most images, errors are sometimes introduced at higher resolution levels. To solve this issue, I extended the PanoTiler approach with a regularization that incorporates the optical flow of all levels into the final result. Additionally, I introduce a new optical flow clustering method to the PanoTiler, which mends a vulnerability that produces errors at higher resolution levels.
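PanoTiler's multi-level merging, regularization, and clustering go beyond what fits here; the following single-level Python sketch (all names hypothetical, with estimate_flow standing in for an arbitrary flow CNN) shows only the basic tile-and-blend idea.

import numpy as np

def tile_starts(size, tile, step):
    # Start offsets of overlapping tiles covering one image axis.
    starts = list(range(0, max(size - tile, 0) + 1, step))
    if starts[-1] + tile < size:       # make sure the border is covered
        starts.append(size - tile)
    return starts

def tiled_flow(img0, img1, estimate_flow, tile=512, overlap=128):
    # Run a resolution-limited flow CNN per tile and blend the per-tile
    # flows with feathered weights in the overlaps.  Assumes the images
    # are at least one tile large.
    h, w = img0.shape[:2]
    flow = np.zeros((h, w, 2))
    weight = np.zeros((h, w, 1))
    win1d = np.hanning(tile) + 1e-3    # high in the centre, low at the borders
    feather = np.outer(win1d, win1d)[..., None]
    for y in tile_starts(h, tile, tile - overlap):
        for x in tile_starts(w, tile, tile - overlap):
            f = estimate_flow(img0[y:y+tile, x:x+tile],
                              img1[y:y+tile, x:x+tile])
            flow[y:y+tile, x:x+tile] += f * feather
            weight[y:y+tile, x:x+tile] += feather
    return flow / weight

# Stand-in estimator; a real CNN would be called here instead.
fake = lambda a, b: np.zeros(a.shape[:2] + (2,))
img = np.zeros((1024, 2048, 3))
print(tiled_flow(img, img, fake).shape)   # (1024, 2048, 2)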

To compare the results of optical flow estimation techniques, multiple benchmarks such as Middlebury, KITTI 2015, or MPI Sintel have been created. These benchmarks mostly contain ground truth optical flow for lower-resolution images, not for high-resolution images or panoramas. Because it is challenging to obtain ground truth optical flow for real-world images, I created a simple-to-follow protocol for creating panoramas and their ground truth optical flow with Unreal Engine 4. The optical flow is generated by a Python tool based on stereo vision and depth render passes.
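The announcement does not describe the tool's internals; one common way to derive ground truth flow from a depth render pass, sketched below for a static scene with known camera motion (a simplification of the stereo-based setup mentioned above, with hypothetical names), is to back-project, transform, and re-project every pixel.

import numpy as np

def flow_from_depth(depth, K, R, t):
    # Forward flow for a static scene: back-project each pixel of frame 1
    # to 3D using its depth, move it into camera 2's coordinate frame
    # (p2 = R @ p1 + t), re-project with the intrinsics K, and take the
    # pixel displacement.
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1)   # homogeneous pixels
    rays = pix @ np.linalg.inv(K).T                       # unit-depth rays
    pts = rays * depth[..., None]                         # 3D points in camera 1
    pts2 = pts @ R.T + t                                  # 3D points in camera 2
    proj = pts2 @ K.T
    xy2 = proj[..., :2] / proj[..., 2:3]
    return xy2 - np.stack([xs, ys], axis=-1)              # (h, w, 2) flow field

# Sanity check: identical cameras give (numerically) zero flow.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
depth = np.full((480, 640), 2.0)
print(np.abs(flow_from_depth(depth, K, np.eye(3), np.zeros(3))).max())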

Talk Fluid Simulation - From Research to Market

13.11.2019 16:45
IZ 161

Speaker(s): Matthias Teschner

Based on many years of research at the University of Freiburg, FIFTY2 Technology develops and markets PreonLab, a framework for Lagrangian fluid simulation with a particular focus on the automotive industry. This presentation discusses the evolution from a research project to a product. The first part introduces selected research results that contribute to the success of PreonLab. The second part discusses the technology transfer and aspects that affect the prosperity of a university spin-off.


Talk Physics in Graphics: Measuring from Images

07.11.2019 11:00
PhoenixD Retreat, Schneverdingen

Speaker(s): Marcus Magnor

Computer graphics is all about devising highly efficient, hardware-optimized algorithms to numerically evaluate the equations governing our physical world. Areas of physics that regularly fall prey to computer graphics range from classical and continuum mechanics to hydrodynamics, optics, and radiation transport. In my talk I will give a few examples and discuss how being able to efficiently solve the forward problem of simulating the physical behavior of real-world systems can be used to also tackle the inverse problem of estimating and measuring physical properties from images.
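As a toy instance of this forward/inverse pattern (the example and all numbers are illustrative, not from the talk): a physical parameter can be recovered by fitting a forward model to image-derived measurements with least squares.

import numpy as np
from scipy.optimize import least_squares

def simulate(g, t):
    # Forward model: fall distance of a dropped object under gravity g.
    return 0.5 * g * t ** 2

# "Measured" drop distances, e.g. extracted from video frames (synthetic here).
t = np.linspace(0.0, 1.0, 30)
observed = simulate(9.81, t) + np.random.default_rng(1).normal(0.0, 0.01, t.size)

# Inverse problem: recover g by minimizing the forward model's residuals.
fit = least_squares(lambda p: simulate(p[0], t) - observed, x0=[5.0])
print(fit.x[0])   # close to 9.81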


Talk What’s missing in Head-mounted VR Displays?

14.10.2019 14:00
DLR Braunschweig

Speaker(s): Marcus Magnor

Thanks to competitively priced HMDs geared towards the consumer market, research in immersive displays and Virtual Reality has seen tremendous progress. Still, a number of challenges remain to make immersive VR experiences truly realistic. In my talk I will showcase a number of research projects at TU Braunschweig that aim to enhance the immersive viewing experience by taking perceptual issues and real-world recordings into account.

Symposium Computer Vision Colloquium

08.10.2019 09:00 - 09.10.2019 15:00
Informatikzentrum IZ161

National and international experts present the latest research in Computer Vision

Talk Lab Course Presentation: HorrorAdventure: Creating an Immersive Wheelchair Experience

27.09.2019 13:00
Dome (recording studio & visualization lab)

Presentation of the results of the student computer graphics lab course.
(A follow-up to the team project from WS '18/19.)