Plenoptic Camera and Integral Photography

Keynote, WSCG 2010

February 2010, Plzen, Czech Republic

Todor Georgiev, Adobe Systems Inc.

Integral Photography was proposed by Gabriel Lippmann in 1908, well before the invention of the computer, before holography, and even before color photography as we know it. The method has greatly improved over the past 100 years, but has yet to become commercially viable. Now, with advances in digital capture and parallel image processing on the GPU, Lippmann's methods are being revitalized. For the first time we have the computational power to assemble high-quality images from integral photographs, and to do it interactively, something that was impossible just a few years ago. This computational power, together with the availability of digital media to display dynamic and 3D content in real time, lays the groundwork for the Plenoptic camera and lightfield rendering techniques to become the 3D photography of the future.

In this presentation we will review the Plenoptic camera designs in versions 1.0 and 2.0. We will mathematically and visually demonstrate the difference in resolution. Our presentation also introduces a number of algorithms, such as Fourier Slice refocusing and the direct, advanced, and refocused rendering and editing of 3D views from Plenoptic 2.0 radiance data. New developments, including superresolution, HDR, and spectral color capture with the Plenoptic camera, will be demonstrated. As part of the presentation we will also take live pictures and render refocused 3D views -- in real time.
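To give a concrete sense of what "refocused rendering" from radiance data means, here is a minimal shift-and-add refocusing sketch. It is not the talk's own algorithm (the presentation covers Fourier Slice refocusing and Plenoptic 2.0 rendering); it is the textbook spatial-domain equivalent, with illustrative names: each sub-aperture view of a 4D light field `L[u, v, y, x]` is shifted in proportion to its offset from the aperture center, and the shifted views are averaged to form an image focused on a chosen synthetic plane.

```python
import numpy as np

def refocus_shift_and_add(lightfield, alpha):
    """Refocus a 4D light field L[u, v, y, x] by shift-and-add.

    `alpha` selects the synthetic focal plane: each sub-aperture view
    (u, v) is translated proportionally to its offset from the center
    of the aperture, then all views are averaged.  Names and the shift
    rule are illustrative, not taken from the presentation.
    """
    nu, nv, ny, nx = lightfield.shape
    cu, cv = (nu - 1) / 2.0, (nv - 1) / 2.0
    out = np.zeros((ny, nx), dtype=np.float64)
    for u in range(nu):
        for v in range(nv):
            # Integer-pixel shifts for simplicity; real renderers interpolate.
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
    return out / (nu * nv)
```

Varying `alpha` sweeps the focal plane through the scene: points at the matching depth add up sharply, while points at other depths are averaged over misaligned positions and blur out. Fourier Slice refocusing computes the same family of images as 2D slices of the 4D Fourier transform of the light field, which is much faster when many refocused images are needed.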

Background and Motivation (Shown are 3D pictures, not traditional 3D models.)

What Is a Light Field?


Focused Plenoptic Camera


Demo: Click on the flowers or on the horizon