Guest Blog

Commentary invited by editors of Scientific American

Authenticating Cells Out of Curiosity, Not Fear

The views expressed are those of the author and are not necessarily those of Scientific American.


Interior of an incubator showing cells growing in culture flasks, petri dishes, and microtiter plates.

Cell lines are standard tools in biomedical research, and yet when it comes to their genetic identity, they are remarkably unstable. That volatility comes with their defining trait—immortality. Over time, cells accumulate mutations that may ultimately change the structure of chromosomes and alter cellular functions.

A number of those genetic changes can be detected with cell line authentication, although despite the authority implied by its name, such testing is not yet performed routinely in many laboratories. In fact, for the better part of the 60 years since its inception, authentication has not featured prominently on scientists’ to-do lists. It was even actively dismissed for a time, because the manner in which attention had been called to scientists’ mistakes (in many cases committed unknowingly) was perceived as career-threatening.

By the 1960s, it was clear that authentication would be an effective way to catch obvious mix-ups and contamination between cell lines. But it would have been difficult to predict its usefulness for detecting the many diverse genetic variations that bear so critically on cell line identity over time, since those variations were unknown then. And even now, when scientists are equipped with relatively advanced technologies and understand that genetic variation can be both powerful in effect and subtle in form, such variations can still escape detection.

The elusiveness of variation is a fascinating aspect of cell biology, and its significance is emphasized particularly by the widespread use of immortalized cell lines in laboratory research and by the fact that genetic variation is now a cornerstone of biomedical science. Curiosity about variation in nature is not a new phenomenon, of course. Observations of plants, notably those recorded by Gregor Mendel and Dutch botanist Hugo De Vries in the 19th century, were critical to the realization that genetic variation is the basis for evolution by natural selection. Knowledge of induced mutation, or mutagenesis, introduced in the 1920s with the work of German geneticist Hermann Joseph Müller, piqued the interest of not only scientists but also writers and the general public, notably in the form of science fiction and comics, which are rife with mutant characters.

Types of chromosomal mutations.

Still today it is difficult not to be amazed by genetic variation and mutagenesis. The depth of variation that exists in the human genome, for instance, is astonishing. In 2006 scientists reported that copy number variations, which include relatively large deletions, duplications, and insertions of genetic material that alter the structure of DNA, affect from 6 to 19 percent of any given chromosome in the human genome. Prior to that study, it had been estimated that just 0.1 percent of the human genome was affected by genetic variation, much of which had been attributed to single nucleotide polymorphisms, which alter individual building blocks of DNA (changing an A to a T, for example).

The sheer diversity of variation in humans is illustrated further by cancer. Scientists have identified nearly 225,000 unique variants for this disease alone. Presumably many of those represent acquired mutations, or changes that have occurred as a result of time or exposure to cellular stressors, such as certain chemicals.

Which brings us back to cell lines.

The longer cells are kept in culture, and the more stressors they are exposed to, the more mutations they acquire. Eventually, they gain the mutations they need to make them immortal, giving rise to a cell line. A cell line, then, is an established lineage of continuously dividing cells, one in which the cells have effectively surpassed the Hayflick limit, or the finite number of cell divisions that normally would bring about replicative senescence (a state in which cells are metabolically active but not capable of division).

In the human body, the mutations that allow cells to overcome the Hayflick limit typically accumulate with aging, which contributes to age-related diseases, such as cancer. Most human cell lines used in biomedical research have in fact been developed with cells isolated from patient tumors. Such cells have already acquired immortality-conferring mutations that facilitate cell-line establishment. Non-tumor-derived cells (“normal” cells), on the other hand, may require exposure to a cancer-causing virus or some form of genetic engineering to become immortal. Once immortalized, they are only several steps removed from becoming tumorigenic (capable of forming tumors when injected into animals).

Immortalization is key for enabling researchers to work with fairly homogenous populations of cells, making for more robust experiments and data. But with each passage, in which a subset of cells is transferred to a new plate or flask to encourage the growth of still more cells, other mutations begin to work their way into the genetic material, producing populations of cells with chromosomal abnormalities or other alterations. If passaged excessively, those populations may become dominant, ultimately changing the cell line’s genetic identity. For some cell lines, such as certain lineages of embryonic stem cells, such changes occur relatively rapidly, after a dozen or so passages, which can have implications for their clinical use.
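The takeover described above can be sketched as a toy simulation (my illustration, not from the article): a tiny subpopulation of mutant cells with a modest growth advantage steadily displaces the original cells over serial passages, because each passage reseeds a fixed number of cells while the proportions keep shifting. All numbers here (growth rates, doublings per passage, reseed size) are invented for illustration.

```python
def passage_culture(normal, mutant, passages, growth_normal=1.0,
                    growth_mutant=1.2, doublings_per_passage=4,
                    reseed=1_000_000):
    """Track the mutant fraction of a culture across serial passages."""
    fractions = []
    for _ in range(passages):
        # Growth phase: each subpopulation doubles at its own relative rate.
        normal *= 2 ** (doublings_per_passage * growth_normal)
        mutant *= 2 ** (doublings_per_passage * growth_mutant)
        # Passage: reseed a fixed total number of cells, keeping proportions.
        total = normal + mutant
        normal = reseed * normal / total
        mutant = reseed * mutant / total
        fractions.append(mutant / reseed)
    return fractions

# Start with 0.1% mutant cells; after 15 passages they dominate the culture.
fractions = passage_culture(normal=999_000, mutant=1_000, passages=15)
print(f"Mutant fraction after 15 passages: {fractions[-1]:.2f}")
```

The point of the sketch is that nothing dramatic has to happen in any single passage; a small, consistent growth advantage compounds until the culture's genetic identity has effectively changed.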

An early advocate of implementing procedures to catch those identity crises before they resulted in the publication of inaccurate data was Walter Nelson-Rees, who specialized in the characterization of cells at the Naval Biosciences Laboratory in California. In the 1960s and ’70s, Nelson-Rees began validating cell lines using karyotype analysis (which had been used in the first authentication studies in the 1950s) combined with the identification of certain cell surface proteins and isoenzymes. He found that many cell lines were contaminated with genetic material from other cells lines and, perhaps most significantly, that some human cell lines had been overtaken by HeLa cells, the first established human line and the most widely distributed.

Many scientists were caught off guard by Nelson-Rees’s findings. His job essentially was to fact-check the genetic profiles of cancer cell lines. But it seems that many of the creators of the lines and other researchers who had used them became aware of their errors only after Nelson-Rees highlighted offending papers in a series of articles published in Science. His approach, while bringing attention to the problem, had a strong isolating effect on scientists, leading to the eventual condemnation of his work as unscientific.

Still, cell lines are not always what researchers think they are. While the most egregious errors—a human cell line that turns out to be hamster—seem to be relatively uncommon, at least 18 percent of cell lines continue to be misidentified. Methods of authentication have advanced significantly, owing mainly to the development of rapid and inexpensive tools for the analysis of very short, repetitive segments of DNA, known as short tandem repeats. So, it would seem that there is no reason not to have cells checked, although arguably authentication is not required for some types of cell research.
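As a hedged sketch of how STR-based authentication works in practice: laboratories type a panel of short tandem repeat loci and compare the resulting allele profile against a reference, often scoring the comparison with the Tanabe percent-match formula (twice the shared alleles divided by the total alleles in both profiles), with roughly 80 percent commonly cited as the threshold for a match. The loci names are real STR markers, but the profiles below are invented for illustration.

```python
def tanabe_match(profile_a, profile_b):
    """Percent match between two STR profiles (dicts: locus -> set of alleles)."""
    shared = sum(len(profile_a[locus] & profile_b[locus])
                 for locus in profile_a if locus in profile_b)
    total = (sum(len(alleles) for alleles in profile_a.values()) +
             sum(len(alleles) for alleles in profile_b.values()))
    return 100.0 * 2 * shared / total

# Invented example profiles at three loci (real panels use eight or more).
reference = {"D5S818": {11, 12}, "TH01": {6, 9.3}, "TPOX": {8, 11}}
sample    = {"D5S818": {11, 12}, "TH01": {6, 9.3}, "TPOX": {8, 9}}

score = tanabe_match(reference, sample)
print(f"Match: {score:.1f}%", "- consistent" if score >= 80 else "- misidentified?")
```

Here five of the sample's six alleles agree with the reference, giving a score above the threshold; a cross-contaminated or swapped line would typically score far lower.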

The responsibility of knowing when to verify cells, and actually following through on it, rests with scientists, since journals and funding agencies have been reluctant to require it. And maybe that is all right. The greatest motivating factor for authentication should not be fear of not being published or funded. Rather, it should be curiosity, the reason why scientists explore questions about the natural world in the first place.

Images: Cell culture; Chromosomal variations

About the Author: Kara Rogers is a freelance science writer and the senior editor of biomedical sciences at Encyclopaedia Britannica, Inc. She is the author of Out of Nature: Why Drugs From Plants Matter to the Future of Humanity (University of Arizona Press, 2012), which explores the human relationship with nature and its relevance to plant-based natural products drug discovery and the loss of biodiversity. She holds a Ph.D. in Pharmacology/Toxicology and enjoys reading and writing about all things science. Follow her on Twitter at @karaerogers, and visit her website.


Comments (6)

  1. ronyrao 1:41 am 11/27/2012

    Thanks for publishing this blog; I went through it.

  2. Bill_Crofut 6:21 pm 11/28/2012

    Re: “Most human cell lines used in biomedical research have in fact been developed with cells isolated from patient tumors. Such cells have already acquired immortality-conferring mutations that facilitate cell-line establishment.”

    One biologist recognized over 50 years ago that mutations are neutral at best, fatal at worst:

    “…[M]utations…must be eliminated in nature, which would otherwise present a spectacle entirely different from the reality. This is partly due to the fact that mutations are not adaptive. If we say that it is only by chance that they are useful, we are still speaking too leniently. In general, they are useless, detrimental, or lethal. Darwin himself did not think that the races of domesticated animals were capable of surviving in nature but the modern Darwinians are obliged to explain evolution as the result of mutations.”

    [Prof. W. R. Thompson. 1956. Introduction. In: Charles Darwin. Origin of Species. Everyman Library No. 811. London: J. M. Dent and Sons. Reprinted with permission. Evolution Protest Movement. 1967. NEW CHALLENGING ‘INTRODUCTION' TO THE ORIGIN OF SPECIES. Selsey, Sussex: Selsey Press Ltd., p. 10]

    Does the quote from the text of this page indicate otherwise?

  3. kerogers 9:09 am 11/30/2012

    Re: Bill_Crofut

    I’d like to be able to answer your question, but I’m not sure I understand what you’re asking.

  4. Bill_Crofut 10:44 am 11/30/2012

    Dr. Rogers,

    Please accept my apology for obfuscation and for the opportunity, hopefully, to clarify my rambling.

    The text quoted from the article seems to me to be giving mutations credit for conferring some advantage to cells. Prof. Thompson would seem to have challenged the notion that mutations can confer any advantage. Is my perception of the quoted text correct?

  5. kerogers 5:02 pm 12/4/2012

    Re: Bill_Crofut

    It definitely sounds strange to me to say “mutations are not adaptive.” In terms of an organism’s fitness, mutations may be deleterious, advantageous, or neutral. Many mutations appear to be neutral, having neither deleterious nor advantageous effects.

    Hope this helps!

  6. Bill_Crofut 9:49 am 12/5/2012

    Dr. Rogers,

    Adaptive mutations is a concept that is unclear to me. What is clear, from my limited research, is that mutations are deleterious at worst, neutral at best. Have you an example of a mutation that is beneficial? An evolutionist correspondent (c. 1984) wrote me of the sickle cell mutation as beneficial in preventing malaria. My response was to invite him to inform someone suffering from sickle cell anemia how fortunate he/she was to have an immunity to malaria. His response was to chide me for bringing a moral issue into a discussion of science.

