Theory and Methods of Light-Field Photography

Anchorage, June 28, 2008

Todor Georgiev and Andrew Lumsdaine

Course Outline

Compared to traditional (2D) photography, light-field (or integral) imaging captures the entire 4D radiance of a scene, i.e., the ``plenoptic function''. Since this function contains all the ray-optics information about the scene, it provides a substantially richer basis for computer vision tasks than a traditional 2D image.

This tutorial presents radiance analysis in a rigorous mathematical way, which often leads to surprisingly direct results. The mathematical foundations will be used to develop computational methods for radiance processing and image rendering, including digital refocusing and perspective viewing. Various computational techniques for digital refocusing and rendering will also be presented, including Ng's Fourier slice algorithm and the ``heterodyning'' approach with both mask-based and lens-based cameras.
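To make the refocusing idea concrete, here is a minimal sketch of spatial-domain shift-and-add refocusing (the simple baseline that Ng's Fourier slice algorithm accelerates, not the Fourier-domain method itself). The function name, the layout of the 4D array, and the synthetic light field are illustrative assumptions, not part of the course material.

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add digital refocusing sketch (illustrative).

    lightfield: 4D array L[u, v, s, t], with (u, v) the angular
    coordinates and (s, t) the spatial coordinates.
    alpha: refocus parameter; each angular sample is shifted in
    proportion to its offset from the central view, then all
    views are averaged to form the refocused image.
    """
    U, V, S, T = lightfield.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Integer-pixel shift proportional to the angular offset;
            # a real implementation would interpolate sub-pixel shifts.
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# A constant (Lambertian, textureless) scene is unchanged by refocusing:
lf = np.ones((3, 3, 8, 8))
img = refocus(lf, 1.0)
```

Varying `alpha` sweeps the synthetic focal plane through the scene; `alpha = 0` simply averages all views, reproducing the conventional photograph.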

Hands-on demonstrations: Real-world sensors are two-dimensional. To multiplex the 4D radiance onto a 2D sensor, light-field photography demands more sophisticated imaging technology. We will demonstrate state-of-the-art light-field cameras that implement different methods for radiance capture, including the microlens approach of Lippmann [1, 2] and the plenoptic camera; the MERL mask-enhanced camera; the Adobe lens-prism and multilens cameras; and a new Adobe camera using a ``mosquito net'' mask in front of the main camera lens.

New: For the first time we present our Plenoptic Camera 2.0, which increases the resolution of the traditional plenoptic camera by a factor of 500 or more! This is made possible by a method of sampling the 4D radiance that provides flexibility in the choice of the spatio-angular resolution tradeoff. Our sampling is adapted to the statistics of real-world objects, taking advantage of a well-known redundancy related to their Lambertian nature.