Basic Space

Space and astrophysics research made simple

How Brain Scans Can Help Astronomers Understand Stars

The views expressed are those of the author and are not necessarily those of Scientific American.


A false-color image of Cassiopeia A combining observations from the Hubble and Spitzer space telescopes and the Chandra X-ray Observatory. Credit: NASA/JPL-Caltech

They may come from completely different fields of study, but brain scans and supernovae have more in common than you would think.

In a new TED talk, Michelle Borkin explains how software developed for use in a hospital was able to help astronomers study the structure of supernovae.

An astronomer colleague of Borkin’s at the Harvard-Smithsonian Center for Astrophysics had eight years’ worth of data on the supernova remnant Cassiopeia A. She wanted to use the data to understand the remnant’s structure so she could work out how the star exploded. But there was a problem: she had no good way to look at the data. Luckily, Borkin did, and suggested the astronomer try 3D Slicer, software originally developed at a hospital in Boston for looking at brain scans. It worked beautifully.

It is not just data analysis in these two fields that uses the same tools. The way data is collected from brain scans and radio telescopes is similar too. Even images in the fields of medicine and astronomy are alike: a confocal microscopy image of a human cornea looks much like a radio telescope image of the star-forming region NGC 1333, despite the difference in scale.
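To make the analogy concrete: a radio telescope mapping a gas cloud typically produces a three-dimensional data cube, two sky coordinates plus a spectral (velocity) axis, and software like 3D Slicer can step through it plane by plane, just as it steps through the slices of a brain scan. A minimal sketch in Python with NumPy (the cube and its dimensions are invented for illustration):

```python
import numpy as np

# Hypothetical data cube: 64 velocity channels, each a 128x128 sky image
# (axes: velocity, declination, right ascension).
cube = np.random.rand(64, 128, 128)

# An MRI viewer steps through axial slices; a radio astronomer steps
# through velocity channels -- the same operation on a 3D array.
channel_10 = cube[10]        # one 128x128 "slice" of the sky

# Summing along the velocity axis collapses the cube into a flat 2D map,
# much like a projection image.
moment0 = cube.sum(axis=0)

print(channel_10.shape)      # (128, 128)
print(moment0.shape)         # (128, 128)
```

Whether the slicing axis is millimetres of tissue or kilometres per second of gas velocity, the viewer code is identical, which is why a hospital tool transferred so cleanly to astronomy.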

This collaboration between astronomy and medicine is not the only example of an interdisciplinary connection in science – a lot of interesting science is now happening at the interface between two or more fields of study. Scientists working in all areas are looking outside their own lab in search of new ideas and methods, and more could benefit from joining them.

Video credit: TED

More about the Astronomical Medicine Project.

Kelly Oakes About the Author: Kelly Oakes has a master's in science communication and a physics degree, both from Imperial College London. Now she spends her days writing about science. Follow on Twitter @kahoakes.

The views expressed are those of the author and are not necessarily those of Scientific American.


Comments (3)

  1. jtdwyer 11:04 pm 01/9/2012

    I believe the explanation of how 3D telescope data can be generated from observations of EM emissions, especially its analogy to medical scanner data, is pure fantasy.

    While I am not an expert in the field, as I understand it, medical imaging scanners generate data by laterally ‘slicing’ through tissue: EM beams are directed laterally through the tissue under examination, then typically physically repositioned for the next slice.

    Obviously, telescope images cannot be generated from separate lateral observations, progressing from far to near the observer, for example. I suspect that depth information about an astronomical object might be simulated by analyzing the redshift of detected EM emissions (all observed from our single observation point). Alternatively, multiple images of an object collected from separated telescopes might capture some slight 3D depth information, but at best the separate observation points would still present only a slight separation of two very similar observational perspectives: unlike with medical scanners, much of a distant object’s ‘backside’ would be hidden from any simulated view.

    There is no hint in this article of the actual methods used to generate 3D data for astronomical objects, characterized as ‘just like CT scans’.

    Without more detailed information I can’t assess how accurate any 3D image of astronomical objects might be, but I think they cannot be anywhere near as precise as local scanning equipment…

  2. Kelly Oakes in reply to Kelly Oakes 12:12 pm 01/11/2012


    I don’t know about the specific case discussed in the video I posted (which was really the point of the post; the text was meant as an introduction rather than a complete article), but I have seen data generation before that could be said to be ‘just like a CT scan’. Here’s an article about a paper that uses such a technique: but tomography has been used in more than just that study. Obviously it doesn’t work in exactly the same way as a brain scan, but there are definite parallels to be made.

  3. Quinn the Eskimo 8:55 pm 01/15/2012

    Her presentation was pretty good. It makes a compelling argument for differentiating data by color-coding it.

    There is an inherent drawback to this technique, though: colorblind practitioners are precluded. So, how many of the population are colorblind? Figures show only 8% are severely affected. But when you consider that colorblindness is *not* binary, that figure can expand considerably.

    It is possible that many such practitioners, who otherwise have much to share, will be excluded from the discussion. Not because they are dramatically colorblind, but because they have some color deficiency in part of the visible spectrum.

    The percentage of the population with some limited colorblindness might be as high as 30%.

    Something to keep in mind, when a colleague doesn’t “get it.”

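On jtdwyer’s point about redshift: in practice, radio astronomers do use the Doppler shift of a spectral line to assign each position on the sky a line-of-sight velocity, and that velocity axis serves as the data cube’s third dimension, standing in for depth. A minimal sketch of the non-relativistic conversion (the rest wavelength is the real 21 cm hydrogen line; the observed value is invented for illustration):

```python
# Doppler shift of a spectral line -> line-of-sight velocity.
C = 299_792.458    # speed of light, km/s
REST = 21.106      # rest wavelength of the 21 cm hydrogen line, cm

observed = 21.110  # hypothetical observed wavelength, cm

# Non-relativistic Doppler formula: v = c * (lambda_obs - lambda_rest) / lambda_rest
velocity = C * (observed - REST) / REST
print(f"{velocity:.0f} km/s")   # positive = receding from the observer
```

Repeating this measurement at every pixel of the image turns a stack of 2D maps into a position-position-velocity cube, which is what slice-viewing software can then step through; velocity is a proxy for depth, not a direct distance measurement, so jtdwyer is right that it is not literally a CT scan.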
