Before you draw conclusions from my recent post that I am some bitter photography-hater, I want to set the record straight. I am not a photography-hater (although I reserve the right to be a stock-photography-stealing-good-illustration-opportunities hater), and to prove it to you, I want to introduce you to the future of photography. I recently got wind of a new technology that has the potential to overturn photography as we know it. It is still in the development stages, but if the images on their website are any indication of what they’ve accomplished so far, this technology is powerful. I daresay I’m a believer.
The camera is called Lytro, and it was born of Stanford graduate Ren Ng’s 2006 dissertation on light field imaging. Basically, Ng got sick of his photography skills and, rather than continue pumping out image after blurry image, he decided to do something about it. The result is a new type of photography, dubbed light field photography, that allows you to snap away, carefree as the day you were born, and focus later, in the lab.
Snap now, focus later? How the?!?
Traditional cameras function much like our eyes. Light enters through a small opening and is focused by a lens onto a plane. In an eye, this plane is our retina, which collects data about color and light and sends it through our optic nerve to the visual cortex in our brains. Our brains then do some fancypants hocus pocus and dish up an image that makes sense to us. In a camera, the plane is either film (film! Remember that stuff?) or, in the case of the millions of digital cameras sold every year, a sensor that turns light into electrical impulses. These electrical impulses are interpreted by a computer chip and turned into points of light, or pixels, to be displayed on a screen for your viewing pleasure. Each pixel, then, is the sum of all the light that hit one point on the sensing plane. The output image is a 2-dimensional grid summary of a 3-dimensional scene.
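That “sum of all the light that hit one point” idea is easy to see in a toy simulation. The numbers below are made up purely for illustration: a handful of rays arrive at one sensor point from different directions, and the pixel keeps only their total.

```python
import numpy as np

# Toy model: rays arriving at a single sensor point from a 3x3 grid
# of directions, each carrying some intensity (made-up values).
rays = np.array([[0.1, 0.2, 0.1],
                 [0.3, 0.9, 0.3],
                 [0.1, 0.2, 0.1]])

# A conventional pixel records only the total. The per-direction
# breakdown is gone the moment the exposure ends.
pixel_value = rays.sum()
```

Once the rays are summed into `pixel_value`, there is no way to recover which direction the bright 0.9 came from, and that lost direction is exactly what refocusing needs.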
By contrast, the Lytro camera has a sensor that more closely resembles an insect’s compound eye. Rather than recording only the sum of the light at each point, each point on the sensor acts as its own independent eye. The result is that the camera can compare information across these “eyes” to work out the direction the light is coming from as well as its brightness and color. Now, rather than a flat, 2-dimensional representation of a scene, it has a little more “ammo” in its arsenal. Given the right software and computing power, it can process the image after the fact, changing the focus or even the position of the viewer in a scene. Dang, that’s fly!
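For the curious, the after-the-fact refocusing boils down to a “shift and add” trick: slide each eye’s little image sideways by an amount proportional to that eye’s position, then average them all. This is my own toy sketch of the idea, not Lytro’s actual pipeline; the `refocus` helper and the array layout are invented for the example.

```python
import numpy as np

def refocus(lightfield, shift):
    """Synthetic-aperture refocusing by 'shift and add'.

    lightfield: array of shape (U, V, H, W) -- one small image per
    "eye" position (u, v), as if each eye took its own photo.
    shift: pixels to slide each eye's view per unit of eye offset;
    varying it moves the virtual focal plane.
    """
    U, V, H, W = lightfield.shape
    uc, vc = U // 2, V // 2  # treat the central eye as the reference
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Slide each view so rays from the chosen depth line up...
            dy = int(round(shift * (u - uc)))
            dx = int(round(shift * (v - vc)))
            # ...then stack all the views and average them.
            out += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

Objects at the depth matching `shift` come out sharp, because their copies in every view land on the same pixel; everything at other depths stays smeared. Pick a different `shift` and a different depth snaps into focus, which is the “snap now, focus later” party trick in a nutshell.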
More dynamic images at the Lytro Photo Gallery
For the technically inclined: Ren Ng's Dissertation