Seeing beyond the diffraction limit in 3-D


PITTSBURGH—At a meeting of the American Physical Society (APS) here this past week, physical chemist W. E. Moerner of Stanford University presented a clever new trick for looking inside living cells. The technique allows views in three dimensions and well beyond the so-called diffraction limit that ordinarily fuzzes up images at around half the wavelength of the light used. Moerner was this year's recipient of the APS's Irving Langmuir Prize in Physical Chemistry.

Techniques such as electron microscopy have long allowed exquisite imaging at the nanoscale, but they typically require careful preparation of the object to be imaged and are not practical for, say, looking inside living cells to see the processes taking place there. As physics students learn early on in optics, the best images usually obtainable using light can make out features no smaller than about half the light's wavelength, or about 200 nanometers using the shortest-wavelength visible light. (A nanometer is a billionth of a meter, or about 40 billionths of an inch.) Biochemical structures in cells are much smaller than that.

"Near-field" optical scanning pushes beyond the diffraction limit by placing a screen with a tiny window, or aperture, up against the object and scanning it across the object to build up an image. But this approach only succeeds at getting extra-high-resolution images of things very close behind the screen—things within the "near-field" range that is short enough that wave effects have not yet washed all of the finer details out of the light.

The first trick of the new imaging process is to image light from a single fluorescent molecule, or fluorophore. Such a light source has a size of around a single nanometer. The optical image of the fluorophore will still be a blob several hundred nanometers across, but with today's high-quality detection systems one can analyze the intensity of the blob and locate its central maximum with very high precision. Moerner compares it to looking down at a small, conical volcanic island. The island may be a few miles across, but one can analyze the topography and locate the mountain peak at the island's center down to tens of yards, say.
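The "volcanic island" idea can be made concrete with a toy simulation. This is my own minimal sketch, not the researchers' analysis code, and every number in it (pixel size, blob width, photon count) is an illustrative assumption: render a diffraction-limited blob from a single emitter on a pixel grid, add photon noise, and see how well an intensity-weighted centroid recovers the emitter's true position.

```python
# Toy illustration of single-fluorophore localization: the blob is hundreds
# of nanometers wide, but its center can be pinned down far more precisely.
# All parameters are illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

pixel_nm = 100.0                  # assumed camera pixel size
sigma_nm = 125.0                  # blob width, roughly diffraction-limited
true_x, true_y = 820.0, 1130.0    # assumed true fluorophore position (nm)
n_photons = 5000                  # photons collected before bleaching

# Render the blob on a 32x32 pixel grid and add photon (Poisson) noise.
xs = (np.arange(32) + 0.5) * pixel_nm
X, Y = np.meshgrid(xs, xs)
spot = np.exp(-((X - true_x)**2 + (Y - true_y)**2) / (2 * sigma_nm**2))
spot *= n_photons / spot.sum()
image = rng.poisson(spot).astype(float)

# Intensity-weighted centroid: locating the peak of the "volcanic island".
est_x = (image * X).sum() / image.sum()
est_y = (image * Y).sum() / image.sum()

err = np.hypot(est_x - true_x, est_y - true_y)
print(f"blob width ~{2 * sigma_nm:.0f} nm, localization error {err:.1f} nm")
```

With a few thousand photons the centroid lands within a few nanometers of the true position, even though the blob itself is more than two hundred nanometers wide.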

This trick is actually as old as Heisenberg, who in the 1920s noted that, given n photons, one can locate an electron with a precision of roughly the diffraction limit divided by the square root of n. Researchers later generalized the idea to other types of imaging.
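The square-root-of-n scaling is simple enough to check on the back of an envelope; the numbers below are purely illustrative.

```python
# Heisenberg's scaling: precision ~ (diffraction limit) / sqrt(n_photons).
# A 250 nm diffraction limit is an illustrative assumption.
diffraction_limit_nm = 250.0

for n_photons in (100, 10_000):
    precision = diffraction_limit_nm / n_photons**0.5
    print(f"{n_photons:>6} photons -> ~{precision:.1f} nm precision")
```

A hundred photons already improves the 250-nanometer limit tenfold, to about 25 nanometers; ten thousand photons brings it down to a few nanometers.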

That's all very well for getting single points very accurately—say, the location of a protein of interest, by tagging the protein with a fluorophore and then imaging that fluorophore—but it wouldn't be much use for looking at two nearby points, each labeled with a fluorophore. The two "mountaintops" would be close together and difficult to separate.

So the second trick of the imaging method makes use of "photoswitching" of certain organic fluorophores. Such molecules may be photoactivated by one wavelength of light, then made to fluoresce with a second wavelength, and ultimately become photobleached, or switched off again. The researchers cannot control which fluorophores in a cell will be activated, but by sending in the correct amount of light they ensure a good probability that, even if a lot of fluorophores are in a small region, only isolated fluorophores are activated. They can then image those fluorophores until photobleaching occurs, repeating the trick often enough to build up a picture of all the fluorophores in the cell. Three separate groups independently proposed this trick in 2006.
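The activate-image-bleach cycle can be sketched as a toy simulation. This is my own illustration of the general idea, not the published method; the fluorophore count and activation probability are assumptions chosen so that only a sparse subset lights up in any one round.

```python
# Toy simulation of stochastic photoswitching: each round, a small random
# subset of the dark fluorophores is activated, imaged, and photobleached,
# until every fluorophore has been recorded once.
import numpy as np

rng = np.random.default_rng(1)

n_fluorophores = 500       # assumed number of labels in the field of view
p_activate = 0.02          # assumed low activation probability per round,
                           # keeping active molecules isolated from each other
still_dark = np.ones(n_fluorophores, dtype=bool)

rounds = 0
while still_dark.any():
    rounds += 1
    activated = still_dark & (rng.random(n_fluorophores) < p_activate)
    # ...each activated molecule would be imaged and localized here,
    # after which it photobleaches and stays off:
    still_dark &= ~activated

print(f"all {n_fluorophores} fluorophores imaged in {rounds} rounds")
```

The price of keeping active molecules isolated is time: with a two-percent activation probability it takes a few hundred rounds to work through all five hundred labels, which is why the full picture is built up frame by frame.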

Thus one can get a very high-resolution picture of where the tagged proteins or other nanoscopic objects are inside the living cell, and follow them over time through processes such as cell division. (Moerner in particular mentioned studying how a certain kind of bacteria that divides into two dissimilar daughter cells sends different proteins to each of its progeny.)

But this high-resolution picture is still a flat, two-dimensional image. The third trick of the imaging technique that Moerner described adds the third dimension. Recall the "mountain" again—the blob of light with a central peak in intensity. In technical terms it is a so-called point spread function—how the image of light from a tiny point source becomes spread out as it goes through the researchers' imaging system. The point spread function need not be a simple symmetrical mound like the idealized volcanic island. Instead it can be arranged to be more of a dumbbell shape. Furthermore, the function can "twist" around depending on the depth of the point source. Moerner calls this twisting dumbbell form a double-helix point spread function, for obvious reasons.

So: The fluorophores are randomly photoswitched to image them individually. The midpoint of the "dumbbell" for each one provides the fluorophore's precise position in the 2-D plane. And the orientation of the dumbbell places it in the third dimension—how far it is along the line of sight into the cell.
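The depth readout from the dumbbell can be sketched in a few lines. This is a simplified illustration under an assumed linear calibration (in practice the angle-to-depth mapping is measured experimentally); the function name, lobe coordinates, and calibration constants are all hypothetical.

```python
# Sketch of 3-D localization with a rotating "dumbbell" point spread
# function: the midpoint of the two lobes gives the 2-D position, and the
# lobes' orientation angle gives the depth. Calibration values are assumed.
import math

DEPTH_RANGE_NM = 2000.0    # assumed usable depth range
ANGLE_RANGE_DEG = 180.0    # assumed lobe rotation over that range

def localize_3d(lobe1, lobe2):
    """Return (x, y, z) in nm from the two lobe positions (in nm)."""
    (x1, y1), (x2, y2) = lobe1, lobe2
    x, y = (x1 + x2) / 2, (y1 + y2) / 2              # dumbbell midpoint
    angle = math.degrees(math.atan2(y2 - y1, x2 - x1)) % ANGLE_RANGE_DEG
    z = angle / ANGLE_RANGE_DEG * DEPTH_RANGE_NM     # angle maps to depth
    return x, y, z

# Two lobes oriented at 45 degrees around the midpoint (500, 500):
print(localize_3d((400.0, 400.0), (600.0, 600.0)))   # -> (500.0, 500.0, 500.0)
```

Under this assumed calibration, a 45-degree lobe orientation corresponds to a quarter of the depth range, placing the fluorophore 500 nanometers along the line of sight.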

Imaging in three dimensions with a double-helix point spread function was demonstrated last year by Sri Rama Prasanna Pavani and Rafael Piestun of the University of Colorado at Boulder, but with fluorescent microspheres as the target—much larger and brighter than single molecules. Those two researchers, along with Moerner and collaborators, reported in the March 3 Proceedings of the National Academy of Sciences USA that they had extended the double-helix technique to image individual fluorophores in a thick polymer sample. They located the fluorophores to about 10 to 20 nanometers in all three dimensions over a depth of 2,000 nanometers. Now it falls to excited biologists to apply the technique to study actual cells.

Photo of W. E. Moerner: Stanford University

The views expressed are those of the author and are not necessarily those of Scientific American.
