Computer Graphics
TU Braunschweig

Events


Talk MA-Talk: Exploration and Analysis of Flow Data in Augmented Reality

08.11.2024 13:00
IZ G30

Speaker(s): Anna-Lena Ehmer

Talk BA-Talk: Extension of the Unreal Engine to Generate Image Datasets with a Physically Plausible Range of Light Intensity Values

02.10.2024 14:00
IZ G30

Speaker(s): Maximilian Giller

Talk MA-Talk: Voice in Focus: Debunking and Identifying Audio Deepfakes in Forensic Scenarios

27.09.2024 13:00
IZ G30

Speaker(s): Maurice Semren

In today's media-dominated world, the use of Voice Conversion systems and manipulated audio samples (deepfakes) is becoming increasingly widespread, and these methods are often used to spread misinformation and cause confusion. Although there are systems that can identify such fakes, there is as yet no technology that can reliably identify the source speaker. Developing such systems could greatly assist law enforcement and discourage the misuse of this technology. This work focuses on identifying the original speaker of a Voice Conversion deepfake from a given list of potential suspects. We examine various Voice Conversion systems, comparing their overall quality, how closely they resemble the target speaker, and how well they disguise the original speaker. Additionally, we compare results from a human perception experiment with machine-based metrics derived from Speaker Verification tools. The machine-based approach appears to yield more accurate identification results on average, even when the human participants are personally familiar with the speaker.
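As a rough illustration of the machine-based side of this comparison, the sketch below scores each suspect by cosine similarity between speaker embeddings, the kind of metric typical Speaker Verification back-ends compute. It assumes the embeddings have already been extracted by such a tool; the vectors and values here are made up for illustration.

```cpp
#include <cmath>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <vector>

// Cosine similarity between two fixed-length speaker embeddings,
// as used by typical Speaker Verification back-ends.
double cosine_similarity(const std::vector<double>& a, const std::vector<double>& b) {
    const double dot = std::inner_product(a.begin(), a.end(), b.begin(), 0.0);
    const double na  = std::sqrt(std::inner_product(a.begin(), a.end(), a.begin(), 0.0));
    const double nb  = std::sqrt(std::inner_product(b.begin(), b.end(), b.begin(), 0.0));
    return dot / (na * nb);
}

int main() {
    // Hypothetical embeddings: one from the deepfake, one per suspect.
    const std::vector<double> fake = {0.12, -0.80, 0.35, 0.44};
    const std::vector<std::vector<double>> suspects = {
        {0.10, -0.75, 0.40, 0.40},   // suspect 0
        {-0.60, 0.20, 0.10, -0.30},  // suspect 1
    };

    // The suspect whose embedding is most similar to the fake is the
    // most likely source speaker under this metric.
    std::size_t best = 0;
    double best_sim = -1.0;
    for (std::size_t i = 0; i < suspects.size(); ++i) {
        const double sim = cosine_similarity(fake, suspects[i]);
        if (sim > best_sim) { best_sim = sim; best = i; }
    }
    std::cout << "most similar suspect: " << best << " (cos = " << best_sim << ")\n";
}
```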

Talk BA-Talk: Learned Initialization of Neural Rendering Networks for Point-Based Novel View Synthesis

16.09.2024 13:00
G30

Speaker(s): Leon Overkämping

Talk MA-Talk: Investigating horizon mapping for real-time soft shadows on planetary datasets

30.08.2024 13:00
IZ G30

Speaker(s): Jonathan Fritsch

Talk MA-Talk: Research, optimization and evaluation of brightness estimation for panoramic images based on deep learning models

15.08.2024 11:00
IZ G30

Speaker(s): Jiankun Zhou

Talk Breaking the Limits of Display and Fabrication using Perception-aware Optimizations

26.07.2024 14:00 - 26.07.2024 15:00
Room G30

Speaker(s): Piotr Didyk

Novel display devices and fabrication techniques enable highly tangible ways of creating, experiencing, and interacting with digital content. The capabilities offered by these new output devices, such as virtual and augmented reality head-mounted displays and new multi-material 3D printers, make them real game-changers in many fields. At the same time, the new possibilities offered by these devices impose many challenges for content creation techniques regarding quality and computational efficiency. This talk will discuss the concept of perception-aware optimizations, which incorporate insights from human perception into computational methods to optimize content according to the capabilities of different output devices, e.g., displays, 3D printers, and requirements of the human sensory system. As demonstrated, the key advantage of such strategies is that tailoring computation to perceptually relevant aspects of the content often reduces the computational cost related to the content creation or overcomes certain limitations of output devices. Besides discussing the general concept, the talk will present several specific applications where perception-aware optimization has been proven beneficial. The examples include methods for optimizing visual content for novel display devices that focus on perceived quality and new computational fabrication techniques for manufacturing objects that look and feel like real ones.

Talk PhD Defense: Perception-Based Techniques to Enhance User Experience in Virtual Reality

26.07.2024 10:00 - 26.07.2024 12:00
PK 4.122 (Altgebäude, 1st floor)

Speaker(s): Colin Groth

Virtual Reality (VR) has ushered in a new era of immersive content viewing, with vast potential for entertainment, design, medicine, and other fields.
However, users' willingness to adopt the technology in practice is bound to the quality of the virtual experience. In this dissertation, we describe the development and investigation of novel techniques to reduce negative influences on the user experience in VR applications.
Our methods not only include substantial technical improvements but also exploit important characteristics of human perception to make the applications more effective and subtle. We focus mostly on visual perception, since we deal with visual stimuli, but we also consider the vestibular sense, a key component in the occurrence of the negative symptoms in VR referred to as cybersickness. Our techniques are designed for three groups of VR applications, characterized by the degree of freedom to apply adjustments.

The first group of techniques extends VR systems with stimulation hardware. By adapting established techniques from the medical field, we artificially induce human body signals to create immersive experiences that reduce common mismatches between perceptual information.

The second group focuses on applications that use common hardware and allow adjustments to the full render pipeline. Immersive video content is especially notable here, as frame rate and quality are often not in line with the high requirements VR systems must meet for a decent user experience. To address these display problems, we present a novel video codec based on wavelet compression and perceptual features of the visual system.

Finally, the third group of applications is the most restrictive and does not allow modifications of the rendering pipeline. Here, our techniques consist of post-processing manipulations in screen space after the image has been rendered, without knowledge of the 3D scene. To keep these techniques subtle, we exploit fundamental properties of human peripheral vision and apply spatial masking as well as gaze-contingent motion scaling.
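As a rough sketch of the wavelet-compression principle behind such a codec (not the codec developed in the dissertation), the example below applies a single-level Haar transform to a toy scanline and discards small detail coefficients; a perceptual codec would instead derive the threshold per band from a model of visual sensitivity.

```cpp
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

// One level of the orthonormal Haar wavelet transform:
// low-pass averages in the first half, details in the second.
std::vector<double> haar_forward(const std::vector<double>& x) {
    const std::size_t h = x.size() / 2;
    std::vector<double> y(x.size());
    for (std::size_t i = 0; i < h; ++i) {
        y[i]     = (x[2 * i] + x[2 * i + 1]) / std::sqrt(2.0); // average
        y[h + i] = (x[2 * i] - x[2 * i + 1]) / std::sqrt(2.0); // detail
    }
    return y;
}

std::vector<double> haar_inverse(const std::vector<double>& y) {
    const std::size_t h = y.size() / 2;
    std::vector<double> x(y.size());
    for (std::size_t i = 0; i < h; ++i) {
        x[2 * i]     = (y[i] + y[h + i]) / std::sqrt(2.0);
        x[2 * i + 1] = (y[i] - y[h + i]) / std::sqrt(2.0);
    }
    return x;
}

int main() {
    const std::vector<double> scanline = {8, 8, 9, 9, 100, 101, 9, 8}; // toy signal
    auto coeffs = haar_forward(scanline);

    // "Compression": discard small detail coefficients. A perceptual codec
    // would choose this threshold per band from a visual sensitivity model.
    const double threshold = 1.0;
    for (std::size_t i = coeffs.size() / 2; i < coeffs.size(); ++i)
        if (std::abs(coeffs[i]) < threshold) coeffs[i] = 0.0;

    for (double v : haar_inverse(coeffs)) std::cout << v << ' ';
    std::cout << '\n';
}
```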

Talk MA-Talk: Synthetic Data Set Generation for Autonomous Driving using Neural Rendering and Machine Learning

28.06.2024 13:00
IZ G30

Speaker(s): Jonas Penshorn

Talk Functional Programming in C++

18.06.2024 08:00
SN 19.1

Speaker(s): Jonathan Müller

On 18.06.2024 at 08:00 in SN 19.1

we welcome Jonathan Müller for a guest lecture on "Functional Programming in C++". Functional programming has proven to be a safe and advantageous style of programming in more and more areas, for example in parallel programming. John Carmack, pioneer of the first-person shooter and creator of games such as Doom, Quake, and Wolfenstein 3D, once said about functional programming: "No matter what language you work in, programming in a functional style provides benefits. You should do it whenever it is convenient, and you should think hard about the decision when it isn't convenient."
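As a small, self-contained illustration of the style in question (not material from the talk), the snippet below computes a sum of squares functionally: the data is transformed into new values and folded, with no hand-written loops or mutable accumulator logic.

```cpp
#include <algorithm>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    const std::vector<int> values = {1, 2, 3, 4, 5};

    // Functional style: build new data instead of mutating in place.
    std::vector<int> squares(values.size());
    std::transform(values.begin(), values.end(), squares.begin(),
                   [](int v) { return v * v; });

    // Fold without an explicit loop or mutable accumulator variable.
    const int sum = std::accumulate(squares.begin(), squares.end(), 0);

    std::cout << "sum of squares: " << sum << '\n'; // prints 55
}
```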

Jonathan is a C++ library developer at think-cell, gives talks at conferences, and is a member of the C++ standardization committee.

He is the author of open-source projects such as type_safe, a library of safety utilities; foonathan/memory, a memory allocation library; and cppast, a C++ reflection tool. Lately he has taken an interest in programming languages and compilers, releasing lexy, a C++ parser library, and lauf, a bytecode interpreter.
He also blogs at foonathan.net.

Despite the early hour, we look forward to seeing interested attendees.

Talk Global Visual Localization by Matching Point and Line Features in Images against Known, Highly Accurate Geodata

17.05.2024 13:00
IZ G30

Speaker(s): Junbo Li

Localization plays an important role in many fields today, such as autonomous flight and autonomous driving. The most common approach is satellite-based localization, which, when used in cities, suffers from a significant loss of accuracy because buildings obstruct the signals. Developing localization based on other information, as a complement to or replacement for satellite-based localization in urban scenarios, has therefore become a research focus. This thesis develops an end-to-end global visual localization pipeline based on matching point and line features in query images against a preprocessed database, built once in advance for the localization area from known, highly accurate geodata. In tests on data from a highly complex real urban environment, the pipeline achieves a median accuracy of about 1 meter.
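As a simplified sketch of the matching step at the core of such a pipeline (the thesis's actual feature extraction and pose estimation are more involved), the example below matches a query descriptor against a precomputed database using Lowe's ratio test to reject ambiguous matches; the tiny 2D descriptors are made up for illustration.

```cpp
#include <cstddef>
#include <iostream>
#include <limits>
#include <vector>

using Descriptor = std::vector<float>;

// Squared Euclidean distance between two descriptors of equal length.
float sq_dist(const Descriptor& a, const Descriptor& b) {
    float d = 0.f;
    for (std::size_t i = 0; i < a.size(); ++i) {
        const float diff = a[i] - b[i];
        d += diff * diff;
    }
    return d;
}

// Match a query descriptor against the pre-built database using Lowe's
// ratio test: accept only if the best match is clearly better than the
// second best, which suppresses ambiguous matches.
int match(const Descriptor& query, const std::vector<Descriptor>& database,
          float ratio = 0.8f) {
    int best = -1;
    float d1 = std::numeric_limits<float>::max(), d2 = d1;
    for (std::size_t i = 0; i < database.size(); ++i) {
        const float d = sq_dist(query, database[i]);
        if (d < d1)      { d2 = d1; d1 = d; best = static_cast<int>(i); }
        else if (d < d2) { d2 = d; }
    }
    // Compare squared distances, so the ratio is squared as well.
    return (d1 < ratio * ratio * d2) ? best : -1; // -1: rejected as ambiguous
}

int main() {
    const std::vector<Descriptor> database = {{1.f, 0.f}, {0.f, 1.f}, {-1.f, 0.f}};
    std::cout << match({0.95f, 0.05f}, database) << '\n'; // prints 0
}
```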

Talk BA-Talk: Evaluation of Methods for Learned Point Spread Functions through Camera-In-The-Loop Optimization

19.04.2024 11:00
IZ G30

Speaker(s): Karl Ritter

Talk PhD Pre-Defense Talk: Perception-Based Techniques to Enhance User Experience in Virtual Reality

15.03.2024 13:00
IZ G30

Speaker(s): Colin Groth

Talk MA-Talk: An Investigation on the Practicality of Neural Radiance Field Reconstruction from in-the-wild Multi-View Panorama Recordings

22.12.2023 13:00
IZ G30

Speaker(s): Yannic Rühl

Talk Colloquium on AI in Interactive Systems

07.12.2023 10:00 - 08.12.2023 22:00
IZ 161

Talk BA-Talk: Partial Face Swaps

09.10.2023 13:00
G30

Speaker(s): Carlotta Harms

Conference Vision, Modeling, and Visualization

27.09.2023 13:00 - 29.09.2023 12:00
Braunschweig, Germany

Chair(s): Marcus Magnor, Martin Eisemann, Susana Castillo


Talk BA-Talk: Low-Cost Integrated Control and Monitoring of FDM Printers Using Digital Twins

26.09.2023 13:00
G30

Speaker(s): Marc Majohr

In this thesis, an integrated control and monitoring system for consumer FDM printers was designed and developed.
One focus is universal applicability across different FDM printers (L1), as well as minimal interference with the printing process (L5).
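As a minimal sketch of the digital-twin idea (not the system built in the thesis), the example below mirrors the printer's reported state into a small struct by parsing a Marlin-style M105 temperature report; the exact report format varies between firmwares, which is part of what universal applicability has to deal with.

```cpp
#include <cstdio>
#include <iostream>
#include <string>

// Minimal "digital twin" state mirroring what the physical printer reports.
struct PrinterTwin {
    double hotend_actual = 0, hotend_target = 0;
    double bed_actual = 0, bed_target = 0;
};

// Parse a Marlin-style M105 temperature report, e.g.
//   "ok T:209.80 /210.00 B:59.90 /60.00"
// Returns false if the line does not match the expected shape.
bool update_from_m105(PrinterTwin& twin, const std::string& line) {
    return std::sscanf(line.c_str(), "ok T:%lf /%lf B:%lf /%lf",
                       &twin.hotend_actual, &twin.hotend_target,
                       &twin.bed_actual, &twin.bed_target) == 4;
}

int main() {
    PrinterTwin twin;
    if (update_from_m105(twin, "ok T:209.80 /210.00 B:59.90 /60.00"))
        std::cout << "hotend " << twin.hotend_actual << "/" << twin.hotend_target
                  << " C, bed " << twin.bed_actual << "/" << twin.bed_target << " C\n";
}
```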

Talk Computer Vision from the Perspective of Surveying

28.08.2023 13:00
IZ G30

Speaker(s): Anita Sellent

Talk Turning Natural Reality into Virtual Reality

18.08.2023 13:00
Stanford University, Packard 202

Speaker(s): Marcus Magnor

SCIEN Colloquium, Electrical Engineering, Stanford University

Talk Turning Natural Reality into Virtual Reality

14.08.2023 10:45
NVIDIA Inc., Santa Clara, CA

Speaker(s): Marcus Magnor

Current endeavors towards immersive visual entertainment are still almost entirely based on 3D graphics content, limiting application scenarios to digital, synthetic worlds only. The reason is that to provide stereo vision and ego-motion parallax, two essential ingredients for the perception of visual immersion, the scene must be rendered in real time from varying vantage points. While this is easily accomplished in 3D graphics via GPU rendering, it is not at all straightforward to do the same with conventional video footage of real-world events. In my talk I will outline different ideas and approaches for utilizing graphics hardware in conjunction with video in order to import the real world into VR.


Talk BA-Talk: Enhancing Perceived Acceleration using Galvanic Vestibular Stimulation in Virtual Reality

17.07.2023 13:00
G30

Speaker(s): Zandalee Roets

Talk BA-Talk: Perceptually Realistic Rendering of 360° Photos: a State-of-the-Art Overview

16.06.2023 13:00
G30

Speaker(s): Marcel Gädke

Talk BA-Talk: Move to the Music: Analyzing Dance Synchronicity using Smart Watch Motion Sensors

09.06.2023 13:00
IZ G41b Hardstyle Lab

Speaker(s): Maximilian Hein

In a dance performance, the dancers must be in sync with the music.
In this bachelor thesis, a system is presented which can determine whether a dancer has performed on the beat of the music.
For this purpose, a modern smartwatch is used to record the movement data of the dancer.
In parallel, the music is recorded to determine the synchronicity between the motion data and the music.

The evaluation of the system indicates that the proposed method is able to assess a dancer's beat accuracy, with certain limitations.
To the best of our knowledge, this work is the first to use motion data from a smartwatch to analyze dance performances.
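One plausible way to score such synchronicity (an assumption for illustration, not necessarily the method of the thesis) is to measure, for each movement accent detected from the watch's motion sensors, the offset to the nearest music beat:

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>
#include <iterator>
#include <limits>
#include <vector>

// Mean absolute offset (seconds) between each detected movement accent
// and the nearest music beat: a simple synchronicity score (lower = tighter).
double mean_beat_offset(const std::vector<double>& beats,
                        const std::vector<double>& accents) {
    double total = 0.0;
    for (double t : accents) {
        // beats is sorted, so the nearest beat is adjacent to lower_bound.
        const auto it = std::lower_bound(beats.begin(), beats.end(), t);
        double best = std::numeric_limits<double>::max();
        if (it != beats.end())   best = std::min(best, std::abs(*it - t));
        if (it != beats.begin()) best = std::min(best, std::abs(*std::prev(it) - t));
        total += best;
    }
    return total / accents.size();
}

int main() {
    const std::vector<double> beats   = {0.0, 0.5, 1.0, 1.5, 2.0}; // from the music
    const std::vector<double> accents = {0.04, 0.55, 0.97, 1.52};  // from the watch IMU
    std::cout << "mean offset: " << mean_beat_offset(beats, accents) << " s\n"; // 0.035 s
}
```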


Talk MA-Talk: Locally-Adaptive Video Recoloring

02.06.2023 13:00
G30

Speaker(s): Jan Malte Hilgefort

Video recoloring is an essential part of videography, yet there are only a limited number of approaches to locally-adaptive video recoloring, many of them based on mask propagation. In this master's thesis, an approach to locally-adaptive video recoloring is developed that is fast and produces realistic results. The approach is based on user-set constraints that influence pixels according to their color and spatial distances, which also allows it to perform global recoloring. These constraint influences are then used to apply the recoloring information the user sets for each constraint to the pixels. The new approach can, for example, simplify prototyping in video production by providing an interactive and intuitive way to apply an artistic vision to a video.
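A minimal sketch of this constraint-influence idea is shown below (the Gaussian falloff and all parameters are assumptions, not the thesis's exact formulation): a pixel's recoloring weight decays with both its spatial and its color distance to a user-set constraint. Choosing a very large spatial falloff makes the edit effectively global, matching the global-recoloring behavior mentioned above.

```cpp
#include <cmath>
#include <iostream>

struct Pixel { double x, y, r, g, b; };

// A user constraint: a picked location/color plus the desired color shift.
struct Constraint {
    Pixel anchor;
    double dr, dg, db;   // recoloring offset to apply
    double sigma_space;  // spatial falloff (pixels)
    double sigma_color;  // color falloff
};

// Gaussian influence that decays with both spatial and color distance,
// so the edit stays local to nearby, similar-looking regions.
double influence(const Pixel& p, const Constraint& c) {
    const double ds2 = (p.x - c.anchor.x) * (p.x - c.anchor.x)
                     + (p.y - c.anchor.y) * (p.y - c.anchor.y);
    const double dc2 = (p.r - c.anchor.r) * (p.r - c.anchor.r)
                     + (p.g - c.anchor.g) * (p.g - c.anchor.g)
                     + (p.b - c.anchor.b) * (p.b - c.anchor.b);
    return std::exp(-ds2 / (2 * c.sigma_space * c.sigma_space)
                    - dc2 / (2 * c.sigma_color * c.sigma_color));
}

int main() {
    const Constraint c = {{100, 100, 0.8, 0.2, 0.2}, -0.3, 0.1, 0.1, 50.0, 0.2};
    Pixel p = {110, 95, 0.75, 0.25, 0.2}; // nearby pixel with a similar red
    const double w = influence(p, c);
    p.r += w * c.dr; p.g += w * c.dg; p.b += w * c.db; // weighted recoloring
    std::cout << "weight " << w << ", new color (" << p.r << ", " << p.g
              << ", " << p.b << ")\n";
}
```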