At right is a picture of someone's brain as seen through functional magnetic resonance imaging, or fMRI. This particular subject is taxing his neurons with a working memory task—those sunny orange specks represent brain activity related to the task. fMRI images show the brain according to changes in blood oxygen level, a proxy for degree of mental activity. It's a pretty amazing tool; it has validated a lot of assumptions about brain regions and helped us make comparisons between groups of people, shedding light on addiction, development and disease. Some scientists believe it can help us read minds (more on that later) or even predict the future.

But fMRI doesn't actually provide detail at the level of a cell. The three-dimensional image it provides is built up in units called voxels. Each one represents a tidy cube of brain tissue—a 3-D image building block analogous to the 2-D pixel of computer screens, televisions or digital cameras. Each voxel can represent a million or so brain cells. Those orange blobs in the image above are actually clusters of voxels—perhaps tens or hundreds of them.
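For the programmatically inclined, a quick sketch may help make the voxel idea concrete. Analysis software typically stores a scan as a four-dimensional array: three spatial axes of voxels plus time. The dimensions below are made up for illustration, not taken from any real scanner.

```python
import numpy as np

# Hypothetical scan: a 64 x 64 x 30 grid of voxels sampled at 100 time points.
# Each voxel value is the blood-oxygen signal for one small cube of tissue,
# which may contain on the order of a million neurons.
scan = np.zeros((64, 64, 30, 100))

# Reading one voxel's signal over time is just an index into the grid:
voxel_timeseries = scan[32, 32, 15, :]
print(voxel_timeseries.shape)  # (100,)
```

Even this toy grid holds over 100,000 voxels, which hints at the mass of data a single scan produces.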

fMRI is also too slow to capture all of the changes in the brain. Each scan requires a second or two, enough time for a neuron to fire more than a hundred times. That means it can't provide a clear sense of precisely when things happen. Determining whether activity in one spot causes activity in another is impossible through fMRI alone. Furthermore, you have to be careful with your conclusions. Just because voxels corresponding to one region 'light up' when your subject sees a terrifying tiger doesn't mean that every time this region appears active, your subject is frightened. Many of the brain's regions are quite complex and involved in multiple processes.

When we read about fMRI studies, it sounds as though spotting an active brain region is obvious, but it isn't. Scientists have to sift through a mass of data and apply sophisticated statistical techniques to spot the voxels correlated with the activity. For example, say a scientist asks you to lie down in an fMRI machine and tap your finger, wait five minutes, then tap again. Mapping the movement of your tapping to active voxels is not as straightforward as it sounds. The raw fMRI data will likely show an eruption of signals. "You see this frothing cauldron with this little blip of extra activity," says computer scientist Francisco Pereira, a researcher at Princeton University. There may be an extra blip when you moved your finger, but it need not have been a radical difference—perhaps just a five percent change in blood oxygen levels. Depending on the task, even a fraction of a percent change could be a strong signal. Luckily, through careful statistics, it's possible to identify significant activity by comparing changes within each individual voxel when the finger was idle and when it was tapping. fMRI is a brilliant tool for generating these correlations, allowing you to observe associated activity in clusters of neighboring voxels—which correspond nicely to brain regions—as well as activity in more distant areas of the brain.
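The within-voxel comparison described above can be sketched as a simple two-sample t-test on a single voxel's signal, split into idle and tapping periods. This is a toy illustration with synthetic numbers, not a real analysis pipeline (a real one must also correct for running this test across tens of thousands of voxels at once).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic signal for one voxel: noisy baseline while the finger is idle,
# plus a roughly five percent bump in blood-oxygen signal while it taps.
idle = rng.normal(loc=100.0, scale=2.0, size=60)
tapping = rng.normal(loc=105.0, scale=2.0, size=60)

# Compare the two conditions within this one voxel.
t, p = stats.ttest_ind(tapping, idle)
print(f"t = {t:.2f}, p = {p:.2g}")
```

The blip is invisible to the eye in the frothing raw signal, but the statistics pull it out; repeated over every voxel in the grid, this is how the "active" clusters are found.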

Some researchers take an extra step with fMRI data, using sets of correlations to make predictions. Neuroscientist John Gabrieli of MIT hopes to find ways of using fMRI data to make diagnoses. Last year, he and his colleagues suggested that fMRI-based measures could predict which dyslexic students would improve in reading over the course of two and a half years more accurately than a battery of standard educational tests. The challenge, he says, is finding predictions that he can 'generalize' to students other than the ones in his study.
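'Generalizing' in this sense is usually checked by holding students out: fit the predictor on part of the data, then test it on people it has never seen. A minimal sketch with entirely made-up features follows; nothing here reflects Gabrieli's actual data or model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Pretend each row is one student's fMRI-derived features, and the label
# says whether that student's reading improved. Purely synthetic.
X = rng.normal(size=(40, 5))
improved = (X[:, 0] + 0.5 * rng.normal(size=40) > 0).astype(int)

# Cross-validation trains on some students and scores on the held-out rest,
# which is the "generalize beyond the students in the study" test.
scores = cross_val_score(LogisticRegression(), X, improved, cv=5)
print(scores.mean())
```

A predictor that only works on the students it was built from would score near chance on the held-out folds; generalization is what separates a diagnosis from a curve fit.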

The realm of prediction is also where we encounter mind-reading, untangling what someone is thinking based on brain activity. Already, researchers at the University of California, Berkeley and the University of California, San Francisco have had some success in reconstructing words being heard by a subject through another neuroscience tool, electroencephalography. Neuroscientist Jack Gallant and colleagues at the University of California, Berkeley have demonstrated the ability to decode brain activity and reconstruct what a subject is seeing using fMRI.

So how do we go from illuminated voxels to what Gallant calls brain-reading? First, you need to model how activity in the brain's voxels corresponds to what a subject is seeing. Gallant will ask a subject to watch hours of movies while lying in an fMRI scanner. The researchers then map the movies' visual stimuli onto the patterns of voxel activation. This process is part of model making, in this case building 'encoding models,' which combine information from the stimulus (the movies) and the cortex to explain how the brain represents the experience of watching the movie. For each voxel, researchers must actually create dozens of models, test them, and ultimately select those that build a clear and comprehensive description of what happens in the brain.
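In broad strokes, an encoding model is a regression from stimulus features to each voxel's response. The sketch below fits one linear model per voxel with ordinary least squares on synthetic data; Gallant's actual models are far richer (motion-energy features, regularized fits), so treat every number and name here as a stand-in.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stimulus: 200 movie frames, each described by 10 visual features.
features = rng.normal(size=(200, 10))

# Synthetic cortex: each of 50 voxels responds as a weighted sum of the
# features, plus measurement noise.
true_weights = rng.normal(size=(10, 50))
responses = features @ true_weights + 0.1 * rng.normal(size=(200, 50))

# Fit a linear encoding model for every voxel at once via least squares.
weights, *_ = np.linalg.lstsq(features, responses, rcond=None)

# The fitted weights predict how each voxel should respond to a new stimulus.
predicted = features @ weights
```

With enough movie-watching data, the fitted weights recover the true stimulus-to-voxel mapping, which is what makes the next step—running the model in reverse—possible.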

Next, you have to reconstruct what a viewer watched using only fMRI data. This puts the discoveries made during encoding to the test, reversing the process and decoding brain activity to recreate the video. What's more, Gallant believes his decoders incorporate enough data that they do generalize: the decoder works even when scanning a subject who's watching a movie he's never seen before—a movie that wasn't used to build the original encoding model. It's a demonstration that might be a first step toward visualizing another person's dreams. But to those concerned about protecting the privacy of their thoughts: don't panic. The visual cortex is relatively easy to read compared with other parts of the brain that work together to shape our private thoughts. Gallant believes that eventually, as neuroscience builds more thorough models of the brain, more complex forms of brain reading will become possible. But for now our mind reading, like voxel-level resolution, is limited.
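Decoding can be sketched as running the encoding model in reverse: given an observed voxel pattern, pick the candidate clip whose predicted response matches it best. The toy below assumes an already-fitted linear encoding model and a small clip library; Gallant's group actually uses a Bayesian framework over a vast library of natural movies, which this only gestures at.

```python
import numpy as np

rng = np.random.default_rng(3)

# A fitted linear encoding model (the weights here are made up):
# 10 stimulus features -> 50 voxel responses.
weights = rng.normal(size=(10, 50))

# A library of candidate movie clips, each described by its features.
library = rng.normal(size=(30, 10))

# The subject watches clip 7; we observe a noisy voxel pattern.
observed = library[7] @ weights + 0.1 * rng.normal(size=50)

# Decode: predict the voxel pattern for every clip in the library, then
# choose the clip whose prediction best matches what we observed.
predictions = library @ weights
best = int(np.argmin(np.linalg.norm(predictions - observed, axis=1)))
print(best)  # recovers clip 7
```

Note what limits this trick: it can only choose among stimuli the model knows how to encode, which is one reason the visual cortex is readable while free-floating private thought is not.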