Immersive Digital Reality
A DFG Reinhart Koselleck Project (2016-2023)
Project Summary
Motivated by the advent of mass-market head-mounted immersive displays, we set out to pioneer the technology needed to experience recordings of the real world with the sense of full immersion as provided by VR goggles. To achieve this goal, a number of interdisciplinary, tightly interrelated challenges from video processing, computer graphics, computer vision, and applied visual perception need to be addressed concertedly. By importing the real world into immersive displays, we want to lay the foundations for the way we may watch movies in the future, leaving fixed-viewpoint, limited field-of-view screens behind for a completely immersive, collective experience.
Researchers
Visiting Researchers
Tobias Bertel (University of Bath, UK)
Preeti Gopal (IIT Bombay, India)
Alumni
Job Openings
We are always looking for excellent researchers. Want to join the project?
Events
September 27-29, 2023 | International Symposium on Vision, Modeling, and Visualization (VMV) at TU Braunschweig (Organizer)
March 16, 2022 | Our work on Omnidirectional Galvanic Vestibular Stimulation in Virtual Reality, presented at IEEE VR, won an Honorable Mention for Best Journal Track Paper
August 2021 | Publication of our guest-edited CG&A special issue on Real VR
December 10, 2020 | CfP submission deadline for our CG&A special issue on Real VR (Guest Editor)
February 28, 2020 | Publication of our Springer book on Real VR (Editor)
June 30-July 3, 2019 | Real VR - Importing the Real World into Immersive VR and Optimizing the Perceptual Experience of Head-Mounted Displays, Dagstuhl Seminar 19272 (Organizer)
April 24-26, 2019 | Computational Visual Media Conference in Bath, UK (Program Co-Chair)
June 7-8, 2017 | Symposium on Visual Computing and Perception (SVCP) at TU Braunschweig (Organizer)
Invited Guest Lectures
September 18, 2019 | Tobias Bertel (Univ. Bath, UK): Creating Real VR Experiences
August 19, 2019 | Claudia Menzel (Universität Landau): How much “nature” is in the image? The role of lower-level processed image properties on the processing and evaluation of faces, artworks, and environments
April 12, 2019 | Sebastian Bosse (FhG-HHI Berlin): Data-driven estimation and neuro-physiological assessment of perceived visual quality
October 19, 2018 | Susana Castillo (BTU Cottbus): Digital Personality and the Emotional Onion
April 16, 2018 | Katharina Legde (BTU Cottbus): Human-ness of Virtual Agents
July 10, 2017 | Gerard Pons-Moll (MPI Informatik): Real Virtual Humans
June 7-8, 2017 | Symposium on Visual Computing and Perception (SVCP)
May 12, 2017 | Marcus Riemer (FH Wedel): Using the Unity Engine in VR Environments (talk in German)
Invited Talks
August 18, 2023 | Invited talk in the SCIEN Colloquium, Stanford University, USA
August 14, 2023 | Invited talk at NVIDIA Inc., Santa Clara, CA: "Turning Natural Reality into Virtual Reality"
January 17, 2020 | Invited talk by Thiemo Alldieck at Carnegie Mellon University, USA: "Reconstructing 3D Human Avatars from Monocular Images"
October 10, 2019 | Invited talk at DLR Braunschweig: "What’s missing in Head-mounted VR Displays?"
April 26, 2019 | Invited talk by Thiemo Alldieck at TU Tampere, Finland: "Tell Me How You Look and I'll Tell You How You Move"
November 30, 2018 | Invited talk at FhG Heinrich Hertz Institut Berlin: "Turning Reality into Virtual Reality"
January 12, 2018 | Keynote presentation at VR Walkthrough Technology Day, TU Tampere, Finland (presentation video)
April 20, 2017 | Invited talk at Stanford Computer Graphics Lab (GCafe), Stanford University, USA
January 23, 2017 | Invited talk at University of Konstanz/SFB TRR 161: "Visual Computing - Bridging Real and Digital Domain"
In the News
June 22, 2022 | Article in TU Magazine about our research on galvanic vestibular stimulation to counter cybersickness
May 16, 2022 | Our immersive 360° fulldome multiplayer game "Space Challenge", developed with our students Jonas Penshorn, Niklas Mainzer, and Johannes Weinert, has been released at the Planetarium Wolfsburg. Braunschweiger Zeitung, Wolfsburger Allgemeine Zeitung, Regional Heute
June 11, 2021 | Our ICG Dome makes a prominent appearance in this year's TU Nights - Lost Places (website)
January 1, 2021 | Our ICG Dome is picture of the month of our university's research magazine
November 24, 2020 | Article on our collaboration with Planetarium Wolfsburg (in German)
March 23, 2018 | Interview in the local newspaper Braunschweiger Zeitung (in German)
November 10, 2017 | Article in the local chamber of commerce magazine standort 38 (in German)
May 4, 2016 | Articles in the local newspaper Braunschweiger Zeitung, the TU Research Magazine, and news38.de (in German)
Publications
D-NPC: Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video
arXiv preprint, pp. 1-16, June 2024.
Fast Non-Rigid Radiance Fields from Monocularized Data
in IEEE Transactions on Visualization and Computer Graphics (TVCG), IEEE, pp. 1-12, February 2024.
PlenopticPoints: Rasterizing Neural Feature Points for High-Quality Novel View Synthesis
in Proc. Vision, Modeling and Visualization (VMV), The Eurographics Association, pp. 53-61, September 2023.
Instant Hand Redirection in Virtual Reality Through Electrical Muscle Stimulation-Triggered Eye Blinks
in ACM Symposium on Virtual Reality Software and Technology (VRST), no. 37, pp. 1-11, August 2023.
Immersive Free-Viewpoint Panorama Rendering from Omnidirectional Stereo Video
in Computer Graphics Forum, vol. 42, no. 6, John Wiley & Sons, Inc., pp. e14796 ff., April 2023.
Wavelet-Based Fast Decoding of 360° Videos
in IEEE Transactions on Visualization and Computer Graphics (TVCG, Proc. IEEE VR), IEEE, pp. 1-9, February 2023.
Fast Non-Rigid Radiance Fields from Monocularized Data
arXiv preprint, December 2022.
url: https://arxiv.org/abs/2212.01368
Personality Analysis of Face Swaps: Can They be Used as Avatars?
in ACM Proceedings of the International Conference on Intelligent Virtual Agents, no. 14, ACM, pp. 1-8, September 2022.
Omnidirectional Galvanic Vestibular Stimulation in Virtual Reality
in IEEE Transactions on Visualization and Computer Graphics (TVCG, Proc. IEEE VR), vol. 28, no. 5, pp. 2234-2244, March 2022.
Won "Honorable Mention for Best Journal Track Paper" at IEEE VR
Real VR - special issue of IEEE Computer Graphics and Applications
in IEEE Computer Graphics and Applications, vol. 41, no. 4, IEEE, August 2021.
High-Fidelity Neural Human Motion Transfer from Monocular Video
in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1541-1550, June 2021.
Oral presentation
Towards Understanding Perceptual Differences between Genuine and Face-Swapped Videos
in Proc. ACM Human Factors in Computing Systems (CHI), no. 240, Association for Computing Machinery, pp. 1-13, May 2021.
Visual Techniques to Reduce Cybersickness in Virtual Reality
in Proc. IEEE Conference on Virtual Reality and 3D User Interfaces (VR), IEEE, pp. 486-487, March 2021.
Mitigation of Cybersickness in Immersive 360° Videos
in IEEE Virtual Reality Workshop on Immersive Sickness Prevention (WISP), IEEE, pp. 169-177, March 2021.
Guiding Visual Attention in Immersive Environments
PhD thesis, TU Braunschweig, January 2021.
Altering the Conveyed Facial Emotion Through Automatic Reenactment of Video Portraits
in Proc. International Conference on Computer Animation and Social Agents (CASA), vol. 1300, Springer, Cham, pp. 128-135, November 2020.
PEFS: A Validated Dataset for Perceptual Experiments on Face Swap Portrait Videos
in Proc. International Conference on Computer Animation and Social Agents (CASA), vol. 1300, Springer, Cham, pp. 120-127, November 2020.
Temporal Consistent Motion Parallax for Omnidirectional Stereo Panorama Video
in ACM Symposium on Virtual Reality Software and Technology (VRST), no. 21, Association for Computing Machinery, pp. 1-9, November 2020.
Stereo Inverse Brightness Modulation for Guidance in Dynamic Panorama Videos in Virtual Reality
in Computer Graphics Forum, vol. 39, no. 6, August 2020.
Exploring Neural and Peripheral Physiological Correlates of Simulator Sickness
in Computer Animation and Virtual Worlds, vol. 31, no. 4-5, John Wiley & Sons, Inc., pp. e1953 ff., August 2020.
Depth Augmented Omnidirectional Stereo for 6-DoF VR Photography
in Proc. IEEE Virtual Reality (VR) Workshop, IEEE, pp. 660-661, May 2020.
Real VR – Immersive Digital Reality: How to Import the Real World into Head-Mounted Immersive Displays
Springer, ISBN 978-3-030-41815-1, pp. 1-355, March 2020.
Reconstructing 3D Human Avatars from Monocular Images
in Magnor M., Sorkine-Hornung A. (Eds.): Real VR – Immersive Digital Reality: How to Import the Real World into Head-Mounted Immersive Displays, Springer International Publishing, Cham, ISBN 978-3-030-41816-8, pp. 188-218, March 2020.
Multiview Panorama Alignment and Optical Flow Refinement
in Magnor M., Sorkine-Hornung A. (Eds.): Real VR – Immersive Digital Reality: How to Import the Real World into Head-Mounted Immersive Displays, Springer International Publishing, Cham, ISBN 978-3-030-41816-8, pp. 96-108, March 2020.
Subtle Visual Attention Guidance in VR
in Magnor M., Sorkine-Hornung A. (Eds.): Real VR – Immersive Digital Reality: How to Import the Real World into Head-Mounted Immersive Displays, Springer International Publishing, Cham, ISBN 978-3-030-41816-8, pp. 272-284, March 2020.
Real VR - Importing the Real World into Immersive VR and Optimizing the Perceptual Experience of Head-Mounted Displays
Schloss Dagstuhl - Leibniz-Zentrum für Informatik, ISSN 2192-5283, pp. 143-156, November 2019.
Dagstuhl Seminar 19272
From Reality to Immersive VR: What’s missing in VR?
Dagstuhl Reports @ Dagstuhl Seminar 2019, p. 151, November 2019.
Dagstuhl Seminar 19272
Tex2Shape: Detailed Full Human Body Geometry from a Single Image
in IEEE International Conference on Computer Vision (ICCV), IEEE, pp. 2293-2303, October 2019.
Iterative Optical Flow Refinement for High Resolution Images
in Proc. IEEE International Conference on Image Processing (ICIP), September 2019.
Towards VR Attention Guidance: Environment-dependent Perceptual Threshold for Stereo Inverse Brightness Modulation
in Proc. ACM Symposium on Applied Perception (SAP), September 2019.
Learning to Reconstruct People in Clothing from a Single RGB Camera
in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, pp. 1175-1186, June 2019.
Gaze and Motion-aware Real-Time Dome Projection System
in Proc. IEEE Virtual Reality (VR) Workshop, IEEE, pp. 1780-1783, March 2019.
PerGraVAR
Immersive EEG: Evaluating Electroencephalography in Virtual Reality
in Proc. IEEE Virtual Reality (VR) Workshop, IEEE, pp. 1794-1800, March 2019.
PerGraVAR
Comparing Unobtrusive Gaze Guiding Stimuli in Head-mounted Displays
in Proc. IEEE International Conference on Image Processing (ICIP), IEEE, October 2018.
Comparison of Unobtrusive Visual Guidance Methods in an Immersive Dome Environment
in ACM Transactions on Applied Perception, vol. 15, no. 4, ACM, pp. 27:1-27:11, October 2018.
Detailed Human Avatars from Monocular Video
in International Conference on 3D Vision, IEEE, pp. 98-109, September 2018.
Low Cost Setup for High Resolution Multiview Panorama Recording and Registration
in Proc. European Signal Processing Conference (EUSIPCO), September 2018.
Analysis of Neural Correlates of Saccadic Eye Movements
in Proc. ACM Symposium on Applied Perception (SAP), no. 17, ACM, pp. 17:1-17:9, August 2018.
On the Delay Performance of Browser-based Interactive TCP Free-viewpoint Streaming
in Proc. IFIP Networking 2018 Conference (NETWORKING 2018), IEEE, pp. 1-9, July 2018.
Video Based Reconstruction of 3D People Models
in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, pp. 8387-8397, June 2018.
CVPR Spotlight Paper
Gaze Guidance in Immersive Environments
Poster @ IEEE Virtual Reality 2018, March 2018.
Automatic Upright Alignment of Multi-View Spherical Panoramas
Poster @ European Conference on Visual Media Production 2017, December 2017.
Best Student Poster Award
Subtle Gaze Guidance for Immersive Environments
in Proc. ACM Symposium on Applied Perception (SAP), ACM, pp. 4:1-4:7, September 2017.
Comparative analysis of three different modalities for perception of artifacts in videos
in ACM Transactions on Applied Perception, vol. 14, no. 4, ACM, pp. 1-12, September 2017.
Optical Flow-based 3D Human Motion Estimation from Monocular Video
in Proc. German Conference on Pattern Recognition (GCPR), Springer, pp. 347-360, September 2017.
Perception-driven Accelerated Rendering
in Computer Graphics Forum (Proc. of Eurographics EG), vol. 36, no. 2, The Eurographics Association and John Wiley & Sons Ltd., pp. 611-643, April 2017.
Gaze Visualization for Immersive Video
in Burch M., Chuang L., Fisher B., Schmidt A., Weiskopf D. (Eds.): Eye Tracking and Visualization, Springer, ISBN 978-3319470238, pp. 57-71, March 2017.
Adaptive Image-Space Sampling for Gaze-Contingent Real-time Rendering
Poster @ German Conference on Pattern Recognition 2016, September 2016.
Gaze-contingent Computational Displays: Boosting perceptual fidelity
in IEEE Signal Processing Magazine, vol. 33, no. 5, IEEE, pp. 139-148, September 2016.
Adaptive Image-Space Sampling for Gaze-Contingent Real-time Rendering
in Computer Graphics Forum (Proc. of Eurographics Symposium on Rendering EGSR), vol. 35, no. 4, pp. 129-139, July 2016.
EGSR'16 Best Paper Award
Related Projects
Comprehensive Human Performance Capture from Monocular Video Footage
Photo-realistic modeling and digital editing of image sequences with human actors are common tasks in the movie and games industries. These processes are still laborious, however, since current tools allow only basic manipulations. In cooperation with the Institut für Informationsverarbeitung (TNT) of the University of Hannover (http://www.tnt.uni-hannover.de/), this project aims to solve this dilemma by providing algorithms and tools for automatic and semi-automatic digital editing of actors in monocular footage. To enable visually convincing renderings, a digital model of the human actor, detailed spatial scene information, and the scene illumination need to be reconstructed. A plausible look and plausible motion of the digital model are crucial.
This research project is partially funded by the German Research Foundation (DFG).
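One of the ingredients mentioned above is recovering the scene illumination. As an illustration only (not this project's actual pipeline), here is a minimal sketch of fitting low-order spherical-harmonic lighting by least squares, assuming a Lambertian surface with known per-pixel unit normals and constant albedo; all function names are ours.

```python
import numpy as np

def sh_basis(normals):
    # First-order (4-term) real spherical-harmonic basis, evaluated
    # at unit surface normals of shape (N, 3).
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        np.full_like(x, 0.282095),  # Y_0^0 (constant term)
        0.488603 * y,               # Y_1^-1
        0.488603 * z,               # Y_1^0
        0.488603 * x,               # Y_1^1
    ], axis=1)

def estimate_illumination(intensities, normals, albedo=1.0):
    # Least-squares fit of spherical-harmonic lighting coefficients
    # from observed pixel intensities (N,) and per-pixel unit normals
    # (N, 3), assuming a Lambertian surface with constant albedo.
    basis = albedo * sh_basis(normals)
    coeffs, *_ = np.linalg.lstsq(basis, intensities, rcond=None)
    return coeffs  # four coefficients describing low-frequency lighting
```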
Digital Representations of the Real World
The book presents the state of the art in creating photo-realistic digital models of the real world. It is the result of work by experts from around the world, offering a comprehensive overview of the entire pipeline from acquisition, data processing, and modeling to content editing, photo-realistic rendering, and user interaction.
Eye-tracking Head-mounted Display
Immersion is the ultimate goal of head-mounted displays (HMDs) for Virtual Reality (VR) in order to produce a convincing user experience. Two important aspects in this context are motion sickness, often caused by imprecise calibration, and the integration of reliable eye tracking. We propose an affordable hardware and software solution for drift-free eye tracking and user-friendly lens calibration within an HMD. The use of dichroic mirrors leads to a lean design that provides the full field of view (FOV) while using commodity cameras for eye tracking.
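For illustration, the core image-processing step of such a tracker, locating the pupil in each eye-camera frame, can be sketched as dark-blob detection. This is a generic OpenCV baseline under our own assumptions, not necessarily the pipeline used here:

```python
import cv2
import numpy as np

def pupil_center(eye_gray):
    # The pupil is the darkest large blob in the eye image:
    # blur, threshold the darkest pixels, clean up the mask,
    # and return the centroid of the largest remaining contour.
    blurred = cv2.GaussianBlur(eye_gray, (7, 7), 0)
    _, mask = cv2.threshold(blurred, 40, 255, cv2.THRESH_BINARY_INV)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # (x, y) in pixels
```

Mapping the per-frame pupil center to a gaze direction then requires a per-user calibration, e.g., fixating a small set of known targets.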
ICG Dome
Featuring more than 10 million pixels at 120 Hz refresh rate, full-body motion capture, as well as real-time gaze tracking, our 5-meter ICG Dome enables us to research peripheral visual perception, to devise comprehensive foveal-peripheral rendering strategies, and to explore multi-user immersive visualization and interaction.
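As a toy illustration of such a foveal-peripheral strategy (our own sketch, not the Dome's actual renderer), sampling density can be made to fall off with angular distance from the tracked gaze point; the foveal radius and falloff constants below are illustrative only:

```python
import numpy as np

def sampling_density(px, py, gaze, fovea_radius=5.0, falloff=0.05):
    # Full sampling density within a foveal radius (in degrees of
    # eccentricity) around the gaze point; smooth hyperbolic falloff
    # toward the periphery, loosely mimicking visual acuity.
    ecc = np.hypot(px - gaze[0], py - gaze[1])
    return np.where(ecc <= fovea_radius, 1.0,
                    1.0 / (1.0 + falloff * (ecc - fovea_radius)))

# Example: density map over a 100-degree field of view, gaze at (10, 0).
xs, ys = np.meshgrid(np.linspace(-50, 50, 512), np.linspace(-50, 50, 512))
density_map = sampling_density(xs, ys, gaze=(10.0, 0.0))
```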
Perception of Video Manipulation
Recent advances in deep learning-based techniques enable highly realistic facial video manipulations. We investigate the responses of human observers to these manipulated videos in order to assess the perceived realness of modified faces and the emotions they convey.
Facial reenactment and face swapping offer great possibilities in creative fields like the post-processing of movie material. However, they can also easily be abused to create defamatory video content that damages the reputation of the person targeted. As humans are highly specialized in processing and analyzing faces, we investigate how current facial manipulation techniques are perceived. Our insights can guide both the creation of virtual actors with high perceived realness and the detection of manipulations based on explicit and implicit observer feedback.
Preventing Motion Sickness in VR
Motion sickness, also referred to as simulator sickness, virtual sickness, or cybersickness, is a problem common to all types of visual simulators, consisting of motion-sickness-like symptoms that may be experienced during and after exposure to a dynamic, immersive visualization. It leads to ethical concerns and impairs the validity of simulator-based research. With the growing popularity of virtual reality devices, the number of people exposed to this problem is increasing. It is therefore crucial not only to find reliable predictors of this condition before any symptoms appear, but also to find ways to fully prevent its occurrence while experiencing VR content.
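Symptom severity in such studies is commonly quantified with Kennedy et al.'s Simulator Sickness Questionnaire (SSQ). For reference, a minimal sketch of its standard scoring, with symptom names abbreviated by us:

```python
# Each of 16 symptoms is rated 0-3; which symptoms load on which
# subscale follows the published SSQ key (Kennedy et al., 1993).
NAUSEA = ["general_discomfort", "increased_salivation", "sweating",
          "nausea", "difficulty_concentrating", "stomach_awareness",
          "burping"]
OCULOMOTOR = ["general_discomfort", "fatigue", "headache", "eyestrain",
              "difficulty_focusing", "difficulty_concentrating",
              "blurred_vision"]
DISORIENTATION = ["difficulty_focusing", "nausea", "fullness_of_head",
                  "blurred_vision", "dizziness_eyes_open",
                  "dizziness_eyes_closed", "vertigo"]

def ssq_scores(ratings):
    # ratings: dict mapping symptom name -> rating (0..3).
    # Raw subscale sums are scaled by the standard SSQ weights.
    n = sum(ratings.get(s, 0) for s in NAUSEA)
    o = sum(ratings.get(s, 0) for s in OCULOMOTOR)
    d = sum(ratings.get(s, 0) for s in DISORIENTATION)
    return {"nausea": n * 9.54,
            "oculomotor": o * 7.58,
            "disorientation": d * 13.92,
            "total": (n + o + d) * 3.74}
```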
Want to re-live your latest bungee jump? Share your incredible skateboard stunts with your friends in 360°? Watch your last vacation adventures in full immersion and 3D? In this project, we set out to pioneer the fully immersive experience of action-camera recordings in VR headsets.
Reality CG
The scope of "Reality CG" is to pioneer a novel approach to modeling, editing, and rendering in computer graphics. Instead of manually creating digital models of virtual worlds, Reality CG will explore new ways to achieve visual realism from the kind of approximate models that can be derived from conventional, real-world imagery as input.
Virtual Video Camera
The Virtual Video Camera research project aims to provide algorithms for rendering free-viewpoint video from asynchronously captured camcorder footage. We want to record our multi-video data without specialized hardware or intrusive setup procedures (e.g., waving calibration patterns).
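One common way to temporally align asynchronously started consumer cameras without extra hardware is to cross-correlate their audio tracks; the following sketch illustrates that generic approach (our example, not necessarily this project's actual method):

```python
import numpy as np

def audio_offset_seconds(sig_a, sig_b, sample_rate):
    # Normalize both mono audio signals, then locate the peak of their
    # full cross-correlation. A positive return value means the same
    # event occurs later in sig_b than in sig_a.
    a = (sig_a - sig_a.mean()) / (sig_a.std() + 1e-9)
    b = (sig_b - sig_b.mean()) / (sig_b.std() + 1e-9)
    corr = np.correlate(b, a, mode="full")  # O(N^2); use FFT for long clips
    lag = np.argmax(corr) - (len(a) - 1)
    return lag / sample_rate
```

The recovered offset is accurate to one audio sample, i.e., well below the duration of a video frame, which suffices as a starting point for frame-level alignment.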