Seminar Computergraphik WS'24/25
Seminar
Dr.-Ing. Susana Castillo
Hörerkreis: Bachelor & Master
Kontakt: seminar@cg.cs.tu-bs.de
Modul: INF-STD-66, INF-STD-68
Vst.Nr.: 4216012, 4216021
Topic: Current Research in Computer Graphics
Content
In the Computer Graphics Seminar we discuss current research results in the field of Computer Graphics. The tasks of the participants are to write a research report, to review another student's report in writing, and later to revise and improve their own report to incorporate the feedback gathered from the review. Finally, at the end of the semester and during a block seminar, each student will give an oral presentation on their research report. The presentation must also be rehearsed beforehand in front of the assigned individual supervisor, whose suggestions for improvement must be incorporated.
Participants
The course is aimed at Bachelor's and Master's students in Computer Science (Informatik), IST, and Business Informatics (Wirtschaftsinformatik), as well as students pursuing their Master in Data Science.
Registration takes place centrally via Stud.IP. The number of participants is limited to 6 students.
Important Dates
All dates listed here must be adhered to. Attendance at all events is mandatory.
- 04.07.2024 : Registration process via Stud.IP
- Until 09.10.2024: Submission of topic requests
- 16.10.2024, 13:15.: Kick-Off Meeting (G30, ICG) [Slides}
- 28.10.2024: End of the deregistration period
- 17.11.2024: Submission of the written paper
- 01.12.2024: Submission of the review report
- 17.12.2024: Submission of the revised paper
- Until 10.01.2025: Trial presentation
- 16.01.2025: Submission of the presentation slides
- 17.01.2025, 09:30 - 11:00: Presentations - Block Event
- 24.01.2025, 09:00 - 11:00: Presentations - Block Event Part 2
Registered students may deregister until two weeks after the start of the lectures (i.e., 28.10.24) at the latest. For a successful deregistration, it is necessary to notify the seminar supervisor via e-mail (seminar@cg.cs.tu-bs.de).
Registered students, as well as students on the waiting list, can send their top 3 topic requests in order of preference via email to seminar@cg.cs.tu-bs.de until 09.10.24, so that they will be considered in the topic assignment.
Once a topic has been assigned, all subsequent submissions must be sent by email to seminar@cg.cs.tu-bs.de and additionally to the respective advisor. Unless communicated otherwise, the deadline for all submissions is 23:59 on the due day.
If you have any questions about the course, please contact seminar@cg.cs.tu-bs.de.
Format
- The final assignation of topics will be communicated during the Kick-Off event.
- For each topic, the student needs to prepare a report in LaTeX using the ICG Template.
The content of the report is a short summary of the work in one's own words and an elaboration of its main points, with a minimum length of 8 pages. The report should clearly reflect that the topic has been understood and critically assessed.
- Each participant will later write a 1-2 page review of another student's report (assigned by the seminar supervisor). When writing the review, one should pay particular attention to the comprehensibility and linguistic style of the summary.
- After receiving the review on one's own paper, the student will need to revise and improve their manuscript according to the received feedback.
- For the final presentations, the students can either use their own laptops or one provided by the Institute. If a student needs to use the ICG laptop, they need to contact seminar@cg.cs.tu-bs.de in time, at least two weeks before the presentations.
- The topics will be presented in approximately 20-minute presentations, each followed by a discussion.
- The language for the presentations can be either German or English.
- The oral presentation, the written paper, and the preparation of the review report, are all mandatory requirements to pass the course successfully.
Files and Templates
- LaTeX-Template (mandatory usage)
- Slides-Template (recommended, but not mandatory)
- Review-Template (mandatory usage)
- Kick-Off Slides
Topics
- Saccade-Contingent Rendering
Yuna Kwak, Eric Penner, Xuan Wang, Mohammad R. Saeedpour-Parizi, Olivier Mercier, Xiuyun Wu, T. Scott Murdison, and Phillip Guan
SIGGRAPH 2024
Advisor: Susana Castillo
Attention: Supervision, preparation, and talk should all be done in English. (Achtung: Betreuung, Ausarbeitung und Talk auf Englisch.)
This paper presents a new method for gaze-dependent rendering in virtual reality headsets that only requires the detection of saccades and, thus, bypasses high-precision eye tracking. In several experiments, the authors show that the visual resolution can be reduced after a saccade without users noticing a difference from full resolution.
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections
Dor Verbin, Pratul P. Srinivasan, Peter Hedman, Ben Mildenhall, Benjamin Attal, Richard Szeliski, Jonathan T. Barron
SIGGRAPH Asia 2024 (To Appear)
Advisor: Florian Hahlbohm
This paper improves modeling of view-dependent effects inside neural radiance fields by tracing reflection rays from the expected surface.
- Transforming a Non-Differentiable Rasterizer into a Differentiable One with Stochastic Gradient Estimation
Thomas Deliot, Eric Heitz, and Laurent Belcour
I3D 2024
Advisor: Sascha Fricke
Differentiable renderers make it possible to extract a wealth of information from photos of the real world, such as material properties or geometry. However, making an existing render engine differentiable is usually not practical; instead, one typically relies on the well-known Python-based autodiff frameworks such as PyTorch. This work presents a practical method for making arbitrary renderers differentiable with minimal effort, which the authors demonstrate using Unity. As a result, the technique can even run effortlessly on smartphones.
- Fast View Synthesis of Casual Videos
ECCV 2024
Advisor: Moritz Kappel
The paper introduces a new method for dynamic scene reconstruction and novel view synthesis from monocular in-the-wild video.
A new scene representation based on explicit geometric proxies for static and dynamic regions enables competitive image quality with significant improvements in training and rendering speed.
- BoostTrack: boosting the similarity measure and detection confidence for improved multiple object tracking
Vukasin D. Stanojevic, Branimir T. Todorovic
Machine Vision and Applications 2024
Advisor: JP Tauscher
This paper presents a method for multiple object tracking (MOT) in video streams that improves detection confidence and similarity measures in order to reduce identity switches and to effectively handle unreliable detections. The approach uses Mahalanobis distance and shape similarity to improve the accuracy of associating tracklets with detections, achieving state-of-the-art performance on relevant benchmarks. BoostTrack+ combines these methods with visual embeddings, surpasses standard benchmarks in real-time processing, and ranks first in several metrics.
- TimeTunnel: Integrating Spatial and Temporal Motion Editing for Character Animation in Virtual Reality
Qian Zhou, David Ledo, George Fitzmaurice, and Fraser Anderson
CHI 2024
Advisor: Susana Castillo
Attention: Supervision, preparation, and talk should all be done in English. (Achtung: Betreuung, Ausarbeitung und Talk auf Englisch.)
This paper from Autodesk introduces an immersive motion editing interface that integrates spatial and temporal control for 3D character animation in VR. This integration is achieved by superimposing onto a 3D character a set of automatically computed representative poses, together with the corresponding animation curves, which depict the temporal 3D transformations of the positions of the character's joints in-between such poses. The authors empirically demonstrate that the interface reduces the time and effort required to edit motion.
- Hybrid-SORT: Weak Cues Matter for Online Multi-Object Tracking
Mingzhan Yang, Guangxin Han, Bin Yan, Wenhua Zhang, Jinqing Qi, Huchuan Lu, and Dong Wang
AAAI Conference on Artificial Intelligence 2024
Advisor: JP Tauscher
Multi-Object Tracking (MOT) aims to detect and associate all desired objects across frames. This becomes a challenging task when object occlusion and clustering occur, as the most commonly used cues (spatial and appearance information) become ambiguous simultaneously. This paper demonstrates the efficiency of incorporating weak cues (i.e., object confidence and object height) to alleviate this long-standing problem.
- TimeTunnel: Integrating Spatial and Temporal Motion Editing for Character Animation in Virtual Reality
Qian Zhou, David Ledo, George Fitzmaurice, and Fraser Anderson
CHI 2024
Advisor: Susana Castillo
Attention: Supervision, preparation, and talk should all be done in English. (Achtung: Betreuung, Ausarbeitung und Talk auf Englisch.)
With the overarching goal of translating the graphical perception literature into a knowledge base for visualization recommendation, the authors present a dataset that collates existing theoretical and experimental knowledge and summarizes key study outcomes in graphical perception.