Computer Graphics
TU Braunschweig

Alternate Exposure Imaging

Abstract

Traditional optical flow algorithms rely on consecutive short-exposure images. In contrast, long-exposure images contain integrated motion information directly, in the form of motion blur. In this project, we use the additional information provided by a long-exposure image to improve the robustness and accuracy of motion field estimation. Furthermore, the long-exposure image can be used to determine the moment of occlusion for pixels that become occluded or disoccluded between the short-exposure images.
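
The underlying constraint can be sketched as follows: under a candidate motion field, the long exposure should equal the time average of a short-exposure image warped along the flow. Below is a minimal numpy/scipy sketch of this consistency check, assuming a linear motion model over the exposure interval; the function names and the simple averaging are illustrative, not the variational formulation used in the publications below.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def synthesize_long_exposure(short_img, flow, n_steps=16):
        # short_img: (h, w) grayscale frame; flow: (h, w, 2) motion field, components (x, y)
        h, w = short_img.shape
        ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
        acc = np.zeros((h, w), dtype=np.float64)
        for t in np.linspace(0.0, 1.0, n_steps):
            # backward warp: sample the short exposure displaced by a fraction t of the flow
            coords = np.stack([ys - t * flow[..., 1], xs - t * flow[..., 0]])
            acc += map_coordinates(short_img, coords, order=1, mode='nearest')
        return acc / n_steps

    def blur_residual(short_img, long_img, flow):
        # per-pixel agreement between the recorded and the synthesized long exposure
        return (synthesize_long_exposure(short_img, flow) - long_img) ** 2

A motion field that explains both the short and the long exposure drives this residual toward zero; pixels where no linear motion can explain the recorded blur are candidates for occlusion events.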

This work has been funded by the German Research Foundation (DFG), grant MA2555/4-1.


Code and Resources

The code and the test sequences are provided for research purposes only; commercial use in any form is not permitted. If you use this code in your publications, please cite the corresponding papers.

Publications

Anita Sellent, Martin Eisemann, Bastian Goldlücke, Daniel Cremers, Marcus Magnor:
Motion Field Estimation from Alternate Exposure Images
in IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 33, no. 8, pp. 1577-1589, August 2011.

Anita Sellent:
Dense Correspondence Field Estimation from Multiple Images
PhD thesis, TU Braunschweig, June 2011.
Monsenstein und Vannerdat, ISBN 978-3-86991-339-1

Anita Sellent, Martin Eisemann, Bastian Goldlücke, Thomas Pock, Daniel Cremers, Marcus Magnor:
Variational Optical Flow from Alternate Exposure Images
in Proc. Vision, Modeling and Visualization (VMV), pp. 135-143, November 2009.

Anita Sellent, Martin Eisemann, Marcus Magnor:
Motion Field and Occlusion Time Estimation via Alternate Exposure Flow
in Proc. IEEE International Conference on Computational Photography (ICCP), pp. 1-8, April 2009.


Related Projects

Image-space Editing of 3D Content

The goal of this project is to develop image-space algorithms that allow photo-realistic editing of dynamic 3D scenes. Traditional 2D editing tools cannot be applied directly to 3D video because, in addition to correspondences in time, spatial correspondences are needed for consistent editing. In this project we analyze how to exploit the redundancy in multi-stereoscopic videos to compute robust and dense correspondence fields. These space-time correspondences can then be used to propagate changes applied to one frame consistently to all other frames in the video. Besides adapting classical video editing tools, we want to develop new tools specifically for 3D video content.
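
As an illustration of the propagation step, the sketch below forward-maps an edit layer from a reference frame to other frames using precomputed dense correspondence fields. All names are hypothetical, and the nearest-neighbor splatting stands in for the occlusion handling and blending a real editing system would need.

    import numpy as np

    def propagate_edit(edit_layer, flows_from_ref):
        # edit_layer: (h, w, 4) RGBA overlay painted on the reference frame
        # flows_from_ref: list of (h, w, 2) correspondence fields, reference -> frame k
        h, w = edit_layer.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]
        propagated = []
        for flow in flows_from_ref:
            out = np.zeros_like(edit_layer)
            # round target positions to the nearest pixel and clamp to the image bounds
            tx = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
            ty = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
            out[ty, tx] = edit_layer[ys, xs]  # simple forward splat, no occlusion reasoning
            propagated.append(out)
        return propagated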

This project has been funded by ERC Grant #256941 "Reality CG" and the German Research Foundation (DFG), grant MA2555/4-2.

Multi-Image Correspondences

Multi-view video camera setups record many images that capture nearly the same scene at nearly the same instant in time. Neighboring images in such a setup restrict the solution space between any two images: correspondences between one pair of images must be consistent with the correspondences to the neighboring images.

This notion of accordance, or consistency, for correspondences between three neighboring images can be employed both in the estimation of dense optical flow and in the matching of sparse features across three images.
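
For dense flow fields, the consistency requirement can be written as a composition constraint: the flow from image 1 to image 3 should agree with the flow from 1 to 2 followed by the flow from 2 to 3. A minimal sketch of the resulting residual, with hypothetical names and bilinear sampling assumed:

    import numpy as np
    from scipy.ndimage import map_coordinates

    def consistency_residual(u12, u23, u13):
        # u_ij: (h, w, 2) dense flow from image i to image j, components (x, y)
        h, w = u12.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
        # evaluate u23 at the positions x + u12(x) reached in image 2
        coords = np.stack([ys + u12[..., 1], xs + u12[..., 0]])
        u23_at = np.stack([map_coordinates(u23[..., c], coords, order=1, mode='nearest')
                           for c in range(2)], axis=-1)
        # residual of the composition constraint u13(x) = u12(x) + u23(x + u12(x))
        return np.linalg.norm(u12 + u23_at - u13, axis=-1)

Penalizing this residual couples the pairwise estimation problems, which is what makes the three-image setting more constrained than two-image optical flow.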

This work has been funded in part by ERC Grant #256941 "Reality CG" and the German Research Foundation (DFG), grant MA2555/4-2.

Perception-motivated Interpolation of Image Sequences

We present a method for image interpolation that creates high-quality, perceptually convincing transitions between recorded images. Using concepts derived from human vision, we relax the problem of physically correct image interpolation to that of an interpolation that is perceived as physically correct by human observers. We find that it suffices to focus on exact edge correspondences, homogeneous regions, and coherent motion to compute such solutions. In our user study we confirm the visual quality of the proposed image interpolation approach. We show how each aspect of our approach increases the perceived quality of the interpolation results, compare our results with those obtained by other methods, and investigate the achieved quality for different types of scenes.
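
For reference, the basic building block of any correspondence-based interpolation is warping both input frames to the intermediate time and blending them. The sketch below shows only this generic flow-based step, assuming a single dense flow field between the two frames; it deliberately omits the edge-correspondence, homogeneous-region, and motion-coherence handling that the perceptually motivated method adds on top.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def interpolate_frame(img0, img1, flow01, t=0.5):
        # img0, img1: (h, w) grayscale frames; flow01: (h, w, 2) flow from img0 to img1
        h, w = img0.shape
        ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
        # sample img0 a fraction t "back" and img1 a fraction (1 - t) "forward" along the flow
        c0 = np.stack([ys - t * flow01[..., 1], xs - t * flow01[..., 0]])
        c1 = np.stack([ys + (1 - t) * flow01[..., 1], xs + (1 - t) * flow01[..., 0]])
        warped0 = map_coordinates(img0, c0, order=1, mode='nearest')
        warped1 = map_coordinates(img1, c1, order=1, mode='nearest')
        return (1 - t) * warped0 + t * warped1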