When a set of online teasers for a new camera called the Lytro appeared earlier this year, you could have been forgiven for seeing the invention as just another gimmick. The camera’s attention-grabbing feature is a kind of after-the-fact autofocus: with a click, any blurry portion in a picture can be snapped into sharpness—another step in the march of idiot-proof photography.
In fact, such image correction is merely a side effect of what is genuinely different about the technology. The Lytro, scheduled to reach buyers early next year, creates a wholly new kind of visual object, one that both exemplifies and exploits the way images are consumed in the digital era.
The underlying technique is called “light-field photography.” A traditional camera, of course, captures light reflected off its subject through a lens and onto a flat surface. Proper focus is important to ensure that the image you get is the precise slice of visual reality you want. But “computational photography,” pioneered by Marc Levoy of Stanford University and others, takes a different approach, essentially using hundreds of cameras to capture all the visual information in a scene and processing the results into a many-layered digital object. One of Levoy’s former students, Ren Ng, added the twist that resulted in the Lytro: instead of using multiple cameras, he integrated hundreds of microlenses into a single device.
The upshot is a photograph that’s less a slice of visual information than a cube, from which you can choose whichever layer would make the most pleasing two-dimensional image for printing and framing. But you can also leave the picture as it is—a three-dimensional capture suitable for digital display or distribution—and let others do the fiddling. Rather than a definitive, static image, a light-field visual object is intrinsically interactive.
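(For readers curious how that after-the-fact focusing works, here is a minimal sketch in Python of the shift-and-sum idea commonly used to refocus a light field. It assumes the raw capture has already been decoded into a grid of sub-aperture views, one small image per viewing angle; the array layout, the `alpha` parameter, and the `refocus` function are illustrative stand-ins, not Lytro’s actual pipeline.)

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(light_field, alpha):
    """Synthetic refocusing by shift-and-sum (illustrative sketch).

    light_field : ndarray shaped (U, V, H, W) or (U, V, H, W, C), holding one
                  sub-aperture image per angular sample (u, v).
    alpha       : refocus parameter; 0 keeps the captured focal plane, while
                  positive or negative values pull the plane nearer or farther.
    """
    U, V = light_field.shape[:2]
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros(light_field.shape[2:], dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the central view.
            dy, dx = alpha * (u - cu), alpha * (v - cv)
            view = light_field[u, v].astype(np.float64)
            offsets = (dy, dx) if view.ndim == 2 else (dy, dx, 0.0)
            out += nd_shift(view, offsets, order=1, mode="nearest")
    # Points at the chosen depth line up across views and sharpen;
    # everything else averages out into blur.
    return out / (U * V)
```

Sweeping `alpha` across a range of values yields the stack of differently focused images that a viewer can click through after the fact.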
In the pictorial examples the Lytro company has released online, this flexibility comes across as a fun novelty: you can focus on the Empire State Building in the distance, or the raindrop-splattered window in the foreground. But the implications are more profound. “It’s fair to say that this technology is a game changer,” says Richard Koci Hernandez, a photographer and assistant professor of new media at the University of California at Berkeley. The company gave Hernandez a Lytro prototype to beta test, and he argues that it represents as important a breakthrough as autofocus itself, or even the great shift to digital photography.
Imagine, he suggests, a photojournalist covering a presidential speech with a clutch of protesters in the audience. Using a traditional camera, he says, “I could easily set my controls so that what’s in focus is just the president, with the background blurred. Or I could do the opposite, and focus on the protesters.” A Lytro capture, by contrast, will include both focal points, and many others. Distribute that image, he continues, and “the viewer can choose—I don’t want to sound professorial—but can choose the truth.”
by Rob Walker, The Atlantic
Graphic: Bryan Christie