Theory and Methods of Lightfield Photography

ECCV Tutorial, Zurich, Sept 7, 2014

Todor Georgiev and Andrew Lumsdaine

Course Syllabus (not final)

Introduction and Course Overview

We open the course by discussing some of the fundamental limitations of conventional photography and present motivating examples of how lightfield photography (radiance photography) can overcome these limitations.

Rays and Ray Transforms

The theory and practice of radiance photography require a precise mathematical model of the radiance function and of the basic transformations that can be applied to it. We begin the theoretical portion of the course by presenting basic ray optics and ray transformations, cast in the language of matrix operations in phase space. In this portion of the course we will cover two-plane and position-direction parameterizations, the Klein quadric and other non-traditional parameterizations, transport and lens transformations, phase space representations, composition of optical elements, and principal planes. With the machinery of ray transforms in hand, we can characterize how optical elements transform radiance (the lightfield). Based on physical properties, we develop mathematical techniques for representing radiance, optical transformations, and image rendering.
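As a minimal sketch of the matrix machinery above: in the position-direction parameterization a ray is a phase-space vector (x, theta), free-space transport and a thin lens are 2x2 matrices, and composing them by matrix multiplication models image formation. The focal length and distances below are illustrative values, not from the course material.

```python
import numpy as np

def transport(d):
    # Free-space transport over distance d: position shifts by d * angle.
    return np.array([[1.0, d], [0.0, 1.0]])

def lens(f):
    # Thin lens of focal length f: angle changes by -x / f.
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Image formation: transport from object to lens, refract, transport to sensor.
f, d_o = 50.0, 150.0                    # assumed focal length and object distance
d_i = 1.0 / (1.0 / f - 1.0 / d_o)       # thin-lens equation gives image distance 75.0
M = transport(d_i) @ lens(f) @ transport(d_o)

# At the imaging condition the B element (M[0, 1]) vanishes: image position
# depends only on object position, scaled by the magnification A = M[0, 0].
print(M)
```

With these numbers M[0, 1] is 0 and the magnification M[0, 0] is -d_i/d_o = -0.5, i.e. an inverted, half-size image, which is the phase-space statement of the imaging condition.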

Radiance Capture (Plenoptic / Lightfield Cameras)

Although radiance is a four-dimensional quantity, it must still be captured with two-dimensional sensors. In this portion of the tutorial we discuss how cameras can be constructed to multiplex 4-dimensional radiance data as a 2-dimensional image. Beginning with basic camera models, we will develop and analyze radiance sampling in phase space, pinhole and lens-based cameras, plenoptic cameras (from Lippmann through Lytro), camera arrays, and the focused plenoptic camera. We also derive an equivalence between types of cameras based on principal planes.
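To illustrate the multiplexing idea, here is a minimal sketch (with made-up dimensions) of how a traditional plenoptic camera's raw 2D image, with one N x N pixel block under each microlens, demultiplexes into a 4D lightfield array:

```python
import numpy as np

# Assumed layout: each microlens covers an N x N block of sensor pixels,
# so the raw 2D image interleaves spatial (x, y) and angular (u, v) samples.
N = 4                      # pixels per microlens (angular samples), illustrative
nx, ny = 6, 5              # microlens grid (spatial samples), illustrative
raw = np.arange(ny * N * nx * N, dtype=float).reshape(ny * N, nx * N)

# Demultiplex: split each microlens block out into angular coordinates (v, u).
lf = raw.reshape(ny, N, nx, N).transpose(0, 2, 1, 3)   # shape (y, x, v, u)

# Fixing (v, u) and varying (y, x) extracts one sub-aperture view,
# i.e. the image seen through one small region of the main lens aperture.
view = lf[:, :, 0, 0]      # shape (ny, nx)
```

This reshaping is only the bookkeeping step; real raw images additionally require demosaicing, microlens-center calibration, and resampling to a regular grid.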

Rendering and Computation

Radiance photography has been made practical by the availability of computational techniques that can perform 2D image rendering from the 4D radiance function. The following computational issues will be discussed during this portion of the tutorial: sensors, pixels, digital image representations, image rendering, space multiplexing, and algorithms for refocusing, changing the point of view, and depth of field. We will present efficient (real-time) implementations using GPU hardware.
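One of the rendering algorithms above, refocusing, can be sketched as shift-and-add: each sub-aperture view of the 4D lightfield is shifted in proportion to its angular coordinate and the shifted views are averaged. The array layout and the parameter name alpha below are assumptions for illustration; practical implementations use subpixel interpolation rather than the integer shifts shown here.

```python
import numpy as np

def refocus(lf, alpha):
    """Shift-and-add refocusing sketch for a lightfield lf[y, x, v, u].

    alpha controls the synthetic focal plane: 0 reproduces the
    all-views average; larger magnitudes focus nearer or farther.
    """
    ny, nx, nv, nu = lf.shape
    out = np.zeros((ny, nx))
    for v in range(nv):
        for u in range(nu):
            # Shift each view proportionally to its offset from the
            # aperture center (integer shifts for simplicity).
            dy = int(round(alpha * (v - nv // 2)))
            dx = int(round(alpha * (u - nu // 2)))
            out += np.roll(lf[:, :, v, u], (dy, dx), axis=(0, 1))
    return out / (nv * nu)

# A constant lightfield refocuses to a constant image at any alpha.
lf = np.ones((5, 6, 4, 4))
image = refocus(lf, 1.0)
```

Since each output pixel is an average over all angular samples, the same loop structure maps directly onto a per-pixel GPU kernel, which is what makes real-time refocusing feasible.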

Multimode Capture and Superresolution

This section describes plenoptic-camera-based methods for capturing additional modalities, such as polarized light, HDR, and multispectral data, and for manipulating the final image after the fact. We also cover approaches for superresolution rendering from captured plenoptic images.

The Lytro Camera

The Lytro approach to lightfield capture, rendering, and plenoptic camera design is explained in detail. We also cover practical aspects, including file formats and rendering approaches.

Please send any questions to Todor.