Guest Blog

Commentary invited by editors of Scientific American

Using Computers to Model the Heart… Why Bother?

The views expressed are those of the author and are not necessarily those of Scientific American.





It’s often said that these are exciting times to be a computational biologist, and indeed they are. But beyond the flashy, gee-whiz aspects of computational biology, I find myself excited for another reason: the tools of in silico biology offer us views of biological systems that we wouldn’t otherwise have.

Biological systems are complex and contain an extraordinary amount of information – the human genome contains over three billion letters, for example.  Furthermore, biological systems are composed of a large number of processes that occur simultaneously over a wide range of scales in both time and space.  Chemical processes within cells can operate on the nanosecond time scale (or even shorter), while at the other extreme the lifetimes of organisms are often measured in days or years.  The range of spatial scales is similarly wide – individual machines that operate within cells are nanometers in size, while individual organs are measured in centimeters, and larger organisms are meters in size.  Depending on the particular problem you’re interested in, you may find yourself trying to wrap your head around processes that differ in time and size by more than a millionfold.

What I’ve described above is the multiscale nature of biological systems.  Multiscale is a buzzword that increasingly pops up in discussions of modeling and understanding biological systems, and for good reason.  In medicine, the effects of a disease are often observed at larger scales of time and space – from the enlargement of hearts in some types of cardiomyopathy to the inappropriate proliferation of cells in cancer to the loss of entire limbs due to the consequences of advanced diabetes.  The aforementioned examples are all the result of cascading failures that progress over years in many cases.  However, the causes of disease, and our treatments for them, usually occupy the smaller end of the time-space spectrum – proteins that are defective because of mutated DNA (or environmental damage) and the drugs that target those proteins. 

But why is computational biology becoming increasingly popular?  And what does it get you?  To begin with, computational biology is becoming more popular because it is becoming more accessible.  Processing power has greatly increased over the past couple of decades, while at the same time becoming much cheaper.  This has enabled researchers to extend the bounds of the scales to be explored in both directions – allowing increasingly detailed (and computationally intensive) descriptions of individual cell components, or perhaps simulating very large numbers of cells at the same time.  Also, increased computing power has allowed a wide range of scales to be integrated into the same simulation.  In other words, not only can we model the very small and the very large, but we can do both at the same time.

As for what computational biology gets you, there are two significant benefits that I’ll touch on here.  The first relates to the fact that biological systems are messy.  Cells, organs, and whole organisms are all made up of a very large number of interconnected parts that interact in complex and often non-intuitive ways.  It’s practically impossible to perform an experiment that has only the exact effect that you intended.  If you administer a drug or mutate a gene, you’ll likely affect other targets along with the one you’re interested in studying, and there’s a good chance that you won’t even know what those off-target effects are.

When studying a mathematical proxy of a living system, the perturbations you make directly affect only the components you intend.  The results may still be unexpected, but you can be sure that there were no off-target effects.  As an example, if I want to study the consequences of inhibiting a particular sodium channel in a cell, I can do it by applying a drug that is known to inhibit that particular channel.  The problem is, many drugs are promiscuous, interacting with multiple different channels.  So the question becomes: how much of whatever effect I observe is due to blockade of my sodium channel of interest, and how much is due to the drug inhibiting some other channel that I’m not interested in?

With a mathematical model, I can tell the computer to reduce the activity of my sodium channel by some amount, and know that my in silico inhibition is working only on the target that I’m interested in.  What’s more, I can specify exactly how much inhibition I want to induce in the model.  Using computer models allows you to make surgically precise changes to your system, reducing confounding effects.
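
To make this concrete, here is a minimal sketch in Python of what “in silico inhibition” looks like.  It is only an illustration, not the model from any particular study: I have used the classic Hodgkin-Huxley squid-axon equations as a stand-in for a much more detailed cardiac cell model, and the `na_block` argument is a hypothetical knob that scales the sodium conductance by an exact, known fraction.

```python
# Minimal sketch: "in silico inhibition" of a sodium current by scaling its
# conductance. Uses the classic Hodgkin-Huxley model as a simple stand-in
# for a detailed cardiac cell model; `na_block` is an illustrative knob.
import numpy as np
from scipy.integrate import solve_ivp

# Standard Hodgkin-Huxley parameters (capacitance, conductances, reversal potentials)
C_M, G_NA, G_K, G_L = 1.0, 120.0, 36.0, 0.3          # uF/cm^2, mS/cm^2
E_NA, E_K, E_L = 50.0, -77.0, -54.387                # mV

def hh_rhs(t, y, na_block=0.0, i_stim=10.0):
    """Right-hand side of the HH ODEs with a fraction `na_block` of the
    sodium conductance removed (the 'perfectly selective drug')."""
    v, m, h, n = y
    # Voltage-dependent rate constants (standard HH formulation)
    a_m = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
    b_m = 4.0 * np.exp(-(v + 65.0) / 18.0)
    a_h = 0.07 * np.exp(-(v + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
    a_n = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
    b_n = 0.125 * np.exp(-(v + 65.0) / 80.0)

    # The "drug": reduce sodium conductance by a known, exact fraction.
    g_na_eff = (1.0 - na_block) * G_NA
    i_na = g_na_eff * m**3 * h * (v - E_NA)
    i_k = G_K * n**4 * (v - E_K)
    i_l = G_L * (v - E_L)

    dv = (i_stim - i_na - i_k - i_l) / C_M
    dm = a_m * (1.0 - m) - b_m * m
    dh = a_h * (1.0 - h) - b_h * h
    dn = a_n * (1.0 - n) - b_n * n
    return [dv, dm, dh, dn]

y0 = [-65.0, 0.05, 0.6, 0.32]                        # approximate resting state
for block in (0.0, 0.5):                             # 0% vs 50% channel block
    sol = solve_ivp(hh_rhs, (0.0, 50.0), y0, args=(block,), max_step=0.01)
    print(f"Na block {block:.0%}: peak voltage {sol.y[0].max():.1f} mV")
```

The same pattern, multiplying a single conductance by a chosen factor, is one common way channel block is represented in detailed cardiac models as well (more sophisticated, state-dependent drug models also exist).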

The second significant point pertains to the amount of information that can be gleaned from an experiment.  When doing studies on living tissue, you’re often limited in both resolution and the number of different types of measurements you can make at the same time.  Because of technical limitations, you might be able to take measurements once every hour, when really you’d like to see what’s going on every minute. 

Worse, you might only be able to take one measurement on a given specimen, because the measuring technique requires something that kills the cell or invalidates all subsequent observations.  Or, maybe you’re interested in being able to record five different ion concentrations as they change during an experiment, but current technology only allows you to measure two at the same time.  When performing an experiment in silico, you can measure as many different parameters as you want, as often as you want (as long as you have enough memory to store all that data)!
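
As a small illustration of that point, here is another toy sketch, this time using the generic FitzHugh-Nagumo excitable-cell model rather than any real cardiac model: every state variable is available at whatever sampling interval you choose, and “measuring” it costs nothing.

```python
# Sketch: in a simulation you can "measure" every state variable, at any
# temporal resolution, without disturbing the system. Illustrated with the
# simple FitzHugh-Nagumo excitable-cell model (not a real cardiac model).
import numpy as np
from scipy.integrate import solve_ivp

def fhn(t, y, a=0.7, b=0.8, tau=12.5, i_stim=0.5):
    """FitzHugh-Nagumo: v is a voltage-like variable, w a recovery variable."""
    v, w = y
    dv = v - v**3 / 3.0 - w + i_stim
    dw = (v + a - b * w) / tau
    return [dv, dw]

# Ask for every variable every 0.1 time units; the sampling rate is our
# choice, not a limitation of the measurement technique.
t_record = np.arange(0.0, 200.0, 0.1)
sol = solve_ivp(fhn, (0.0, 200.0), [-1.0, 1.0], t_eval=t_record, max_step=0.5)

v, w = sol.y                                   # both "measurements", same time base
np.savetxt("fhn_states.csv",
           np.column_stack([sol.t, v, w]),
           delimiter=",", header="t,v,w", comments="")
print(f"recorded {sol.t.size} samples of {sol.y.shape[0]} variables each")
```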

Now, I’d like to provide an example of recent computational biology research – more specifically, computational cardiology research – which will hopefully illustrate the promise of this type of science and how it can be applied to real problems in medicine.

Atrial Fibrillation: A Multiscale Phenomenon

One of the most common cardiac arrhythmias, atrial fibrillation (AF), is estimated to affect millions of patients.  Fibrillation is a condition in which the affected chambers of the heart are not beating in an organized manner, but rather quivering as separate regions beat chaotically.  If fibrillation occurs in the two bottom chambers of the heart (the ventricles), death will ensue within minutes unless the patient is defibrillated.  In AF, the two top chambers (the atria) quiver while the ventricles continue to beat well enough to pump blood to the rest of the body.  The electrocardiogram (ECG) shown below in Figure 1 provides an example of AF.

Figure 1: Normal and atrial fibrillation ECGs. Image Credit: Wikimedia Commons

The bottom tracing in Figure 1 shows a normal ECG.  Each normal heart beat begins with the two atria depolarizing – causing them to contract and force blood down into the ventricles – which shows up as what’s called a P wave (purple arrow) on the ECG.  Shortly afterward, the ventricles depolarize and contract, pumping blood out to the rest of the body.  This shows up as a larger deflection, called the QRS complex, right after the P wave.

In the top tracing, you can see that there are still QRS complexes (although they no longer appear at regular intervals), but there is a conspicuous absence of P waves.  The red arrow points to one location where a P wave should be; in its place we find a squiggly baseline.  The fact that a nice flat baseline interrupted with regular P waves has been replaced with a noisy, squiggly baseline signifies that the atria are not beating in a unified, organized fashion, but rather are dominated by many different regions, each depolarizing chaotically.  Occasionally, an electrical wave manages to propagate down from the atria to the ventricles, producing a QRS complex.

Fibrillation belongs to a class of arrhythmias called reentrant arrhythmias.  Normally a heart beat is the result of an electrical wave that begins in one location and spreads throughout the rest of the heart in a specific way.  However, under the right conditions, a “short circuit” can develop which allows an electrical wave to rapidly reenter the same region repeatedly.  If you use fluorescent dyes to image a heart while it’s fibrillating, you’ll often see patterns that are in fact rapidly spinning spiral waves.
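
You do not need a detailed heart model to see such spirals.  The sketch below uses the Barkley model, a deliberately minimal two-variable caricature of an excitable medium, together with the standard “cross-field” trick for initiating reentry: excite one half of the domain while a perpendicular half is still refractory, so the broken end of the wave curls into a rotating spiral.  The parameter values are common textbook choices and are not fitted to cardiac tissue.

```python
# Sketch: spiral-wave reentry in a generic two-variable excitable medium
# (Barkley model), started with a "cross-field" initial condition.
# This is a caricature of excitable tissue, not a cardiac model.
import numpy as np

a, b, eps, D = 0.75, 0.06, 0.02, 1.0      # typical Barkley-model parameters
nx = ny = 200
h, dt, n_steps = 0.25, 0.005, 8000        # grid spacing, time step, number of steps

u = np.zeros((ny, nx))                    # fast "excitation" variable
v = np.zeros((ny, nx))                    # slow "recovery" variable
u[:, : nx // 2] = 1.0                     # excite the left half of the sheet...
v[: ny // 2, :] = a / 2.0                 # ...while a perpendicular half is still
                                          # refractory, so the broken wave end curls

def laplacian(f):
    """Five-point Laplacian with zero-flux (sealed) boundaries."""
    fp = np.pad(f, 1, mode="edge")
    return (fp[:-2, 1:-1] + fp[2:, 1:-1]
            + fp[1:-1, :-2] + fp[1:-1, 2:] - 4.0 * f) / h**2

for step in range(n_steps):
    uth = (v + b) / a                     # local excitation threshold
    du = u * (1.0 - u) * (u - uth) / eps + D * laplacian(u)
    dv = u - v
    u += dt * du
    v += dt * dv

# `u` now holds a snapshot of a rotating spiral wave; for a quick look, try
# matplotlib: plt.imshow(u); plt.show()
```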

AF is not life threatening on its own, but it can still be a very serious condition.  AF can cause palpitations, lightheadedness, and shortness of breath.  It can also lead to the development of blood clots which can produce a stroke, and it can lead to a gradual weakening of the heart to the point that fluid begins to back up and congest the lungs.

Previously I mentioned that AF is a reentrant arrhythmia, and that reentry can occur under the right conditions.  One way to help set the stage for reentry is to change the electrical properties of individual cells.  Each cardiac cell contains a large number of ion channels of different types, and the proper function of a cardiac cell depends on a choreography involving all of these ion channels.  Not only must there be the proper number of each type of ion channel, but the individual channels must open and close at the proper times.  A process called ionic remodeling changes the number of ion channels of a given type, potentially creating the conditions necessary for reentry. 

Zooming out a bit in scale, the second way to help set the stage for reentry is to alter the ability of electrical waves to spread from cell to cell.  Even when cells have the correct densities of ion channels, if they are not properly coupled to each other, electrical propagation can become too slow or fail altogether.  Structural remodeling refers to a constellation of processes that interfere with the normal coupling between cardiac cells.
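
Here is a one-dimensional sketch of the coupling idea (again using the generic Barkley kinetics from the spiral sketch above, not a cardiac ionic model): a wave travels along a “cable” of cells, and reducing the coupling over a middle stretch slows conduction across it, or blocks it entirely if the reduction is severe enough.

```python
# Sketch: a 1-D "cable" of excitable cells in which cell-to-cell coupling is
# reduced over a middle segment. Generic Barkley kinetics, not a cardiac model.
import numpy as np

a, b, eps = 0.75, 0.06, 0.02
nx, h, dt, n_steps = 400, 0.25, 0.005, 10000

D = np.full(nx, 1.0)                  # normal coupling everywhere...
D[150:250] = 0.05                     # ...except a poorly coupled ("remodeled") stretch

u = np.zeros(nx)                      # excitation variable
v = np.zeros(nx)                      # recovery variable
u[:10] = 1.0                          # stimulate the left end of the cable

def diffusion(u, D):
    """Flux-form discretization of d/dx(D du/dx) with sealed (no-flux) ends."""
    up = np.pad(u, 1, mode="edge")
    Dp = np.pad(D, 1, mode="edge")
    d_right = 0.5 * (Dp[1:-1] + Dp[2:])       # coupling at right interfaces
    d_left = 0.5 * (Dp[1:-1] + Dp[:-2])       # coupling at left interfaces
    return (d_right * (up[2:] - up[1:-1]) - d_left * (up[1:-1] - up[:-2])) / h**2

arrival = np.full(nx, np.nan)         # first activation time of each cell
for step in range(n_steps):
    uth = (v + b) / a                 # local excitation threshold
    u = u + dt * (u * (1.0 - u) * (u - uth) / eps + diffusion(u, D))
    v = v + dt * (u - v)
    just_fired = (u > 0.5) & np.isnan(arrival)
    arrival[just_fired] = step * dt

# Conduction is markedly slower across the poorly coupled segment, and blocks
# entirely if coupling is reduced far enough (arrival then stays nan).
print("activation time at cell 100:", arrival[100])
print("activation time at cell 300:", arrival[300])
```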

AF is a difficult problem to study because it involves both ionic and structural remodeling, often at the same time.  Both types of remodeling can lead to a heart that’s afflicted with AF, and AF can in turn cause more remodeling.  The feedback loop just goes on and on.  These factors make AF a great candidate for computational cardiology.

Trine Krogh-Madsen at Cornell University’s Weill Medical College is using an anatomically detailed computer model of the atria to study how both types of remodeling can affect the frequency and persistence of AF episodes.  The model consists of about 2 million virtual cells, each with a full complement of ion channels.  It also represents the atria in 3D, including such anatomical features as the pulmonary veins, venae cavae, and tricuspid and bicuspid valve annuli.  The pulmonary veins and valve annuli are commonly implicated as starting points or anchors for spiral waves during episodes of AF.

The nice thing about this atrial model is that ionic and structural remodeling can be controlled independently.  The activity of any ion channel can be controlled precisely – mimicking ionic remodeling – and structural remodeling can be simulated by precisely varying the amount of coupling between cells in different regions of the virtual atria.  In some of her simulations, Trine examines different amounts of remodeling in three different ion channels (that have previously been implicated in AF), in others she simulates different degrees of structural remodeling (but no ionic remodeling), and in others she includes both.  Figure 2 below shows a snapshot from a simulation, providing four different views of the virtual atria.  The colors correspond to the membrane voltage (the hotter the color, the more “excited” the tissue is), and the formation of a spiral wave can be clearly seen in the top two views.

Figure 2: 3-D simulation of the atria with an anatomically detailed computer model. Image Credit: Trine Krogh-Madsen (Cornell University Weill Medical College)
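
The two remodeling “knobs” described above are easy to picture in code.  The sketch below is purely illustrative; the region assignments, conductance names, and scale factors are hypothetical and are not taken from Trine’s model.  The point is simply that ionic remodeling becomes a per-region scaling of channel conductances, structural remodeling becomes a per-region scaling of intercellular coupling, and the two can be dialed independently.

```python
# Illustrative sketch only: independently "dialing in" ionic and structural
# remodeling, per anatomical region, for a tissue model. Region names,
# fractions, and scale factors are hypothetical.
import numpy as np

n_cells = 200_000                        # scaled down from ~2 million for this toy

# Hypothetical assignment of each virtual cell to an anatomical region.
rng = np.random.default_rng(0)
region = rng.choice(["pulmonary_veins", "tricuspid_annulus", "bulk_atrium"],
                    size=n_cells, p=[0.05, 0.05, 0.90])

# Baseline per-cell properties (arbitrary units).
g_k1 = np.full(n_cells, 1.0)             # e.g. an inward-rectifier K+ conductance
g_cal = np.full(n_cells, 1.0)            # e.g. an L-type Ca2+ conductance
coupling = np.full(n_cells, 1.0)         # cell-to-cell electrical coupling

# Ionic remodeling: scale selected conductances in selected regions.
ionic_remodeling = {"pulmonary_veins": {"g_k1": 2.0, "g_cal": 0.3}}
for reg, scales in ionic_remodeling.items():
    mask = region == reg
    g_k1[mask] *= scales.get("g_k1", 1.0)
    g_cal[mask] *= scales.get("g_cal", 1.0)

# Structural remodeling: scale coupling in selected regions, independently.
structural_remodeling = {"tricuspid_annulus": 0.2}
for reg, factor in structural_remodeling.items():
    coupling[region == reg] *= factor

# These arrays would then feed the cell ODEs and the tissue coupling term of
# an actual simulation; here they simply demonstrate the two independent knobs.
```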

What these simulations have revealed is interesting: when the atria experience only ionic remodeling, AF episodes originate mostly around the pulmonary veins, but when the atria have only undergone structural remodeling, the majority of AF is located around the tricuspid annulus.  However, when both types of remodeling are introduced, there is no preference for where AF episodes originate. 

These types of experiments would be extremely difficult (if not impossible) to perform in real tissue, but they yield information that can be used to guide further experiments in living systems.

About the Author: Byron Roberts is currently a graduate student in the Tri-Institutional Training Program in Computational Biology and Medicine. For the past several years, at Weill Cornell Medical College, he has been developing a mathematical model and using it to further explore how cardiac ischemia-reperfusion injury can lead to arrhythmias. Prior to moving to New York, Byron completed his bachelor’s degree in genetics at the University of California, Davis, followed by two years of postgraduate research in brain tumor biology. His research interests are largely motivated by experiences during a past career as an EMT and paramedic. Byron is the author of the Emergent Phenomena blog and can be found on Twitter (@bnroberts).

Comments (13)

  1. jtdwyer 9:25 am 07/9/2011

    Unfortunately, unless statistically valid sampling methods can be used to determine the values represented in processing models and their results can be formally verified and validated, a given simulation is simply an abstract mechanism producing results that represent no physical process. Sort of like an animation of Pluto – draw any conclusion you want; there is no formal proof that your conclusions are meaningful.

  2. bnroberts 5:44 pm 07/9/2011

    Good point. However, these models ARE informed by experimental data. Parameters such as ion concentrations, conductances of ion channels, fiber orientation (in the case of 3D models), etc. are obtained from experiments. In addition, models are validated – shown to reproduce relevant behavior that has been observed in real living systems – before being used to make novel predictions. I can say from personal experience that the scientific community thoroughly scrutinizes models before they are published, and before they are deemed useful for yielding new insight. Computational biology is an iterative process: experimental data is used in developing the model, and the model in turn is used to guide additional experiments. Also, comparisons between models and well-established behavior in real systems are used to identify and improve model weaknesses.

    That being said, models are still abstract representations, and that should never be forgotten. Modeling studies are useful, but one must keep in mind that they do not, by their very nature, encompass all aspects of their real counterparts.

  3. tharter 7:36 pm 07/9/2011

    You utterly miss the point, jt. If I make a model of something, and I apply my understanding of the model to the world, make predictions, and those predictions provide utility, then it is no different from any other scientific theory. After all, the equations of Newtonian physics are merely a numerical model of nature, yet we can determine the future locations of heavenly bodies with them to great precision.

  4. bnroberts 10:15 pm 07/9/2011

    Very interesting point!

  5. denysYeo 12:02 am 07/10/2011

    Using a model of a process that is not affected by "external" influences that may be hard to control for makes a lot of sense. On the other hand, it is probably also important to know how these external factors do interact, particularly if treatment ideas are introduced as a result of the "pure" modeling. If these interactions are not understood, the overall effectiveness of the treatment may be far less than expected from results obtained from a "pure" model only.

  6. jtdwyer 6:25 am 07/10/2011

    I have no experience in biology, but I have had extensive experience with performance models of very large scale computer systems.

    The article’s remark about modeling allowing the isolation of component subsystems:
    "…biological systems are messy. Cells, organs, and whole organisms are all made up of a very large number of interconnected parts that interact in complex and often non-intuitive ways."
    IMO, the isolation from interacting complex subsystems is certainly convenient, but it increases the risk that critical factors will be ignored in the model evaluations.

    My remarks about sampling measurements of model parameters were responses to the article’s lament regarding the difficulty of monitoring measured results.

    It seems to me that a model of the human heart could not be fully evaluated without considering its interactions with many ‘external’ subsystems.

    In my experience, conveniently simplistic mathematical models of ‘messy’ complex systems often produce oversimplification of results and invalid conclusions. Also, in my experience, those trained in mathematical modeling are reluctant to abandon that approach even when results are not useful. Hopefully this is not the case in computational biology, but I suggest these issues must be assessed as carefully as test results.

  7. jtdwyer 7:15 am 07/10/2011

    Based on your repeated comments, it seems that I continuously utterly miss all points. Since you raise the issue of gravitational evaluation, I’ll remind you that Einstein’s justification for developing the gravitation of general relativity was that Newton’s equations, as they were specifically applied to Mercury, could not produce useful results.

    Adjustments to Newton’s imprecise results are also necessary when spacecraft employ the gravitation of planets to gain momentum in close-proximity ‘flybys’.

    The requirement for the existence of dark matter in galaxies was established by simply noting the discrepancy between observed galactic rotational characteristics and Keplerian rotational curves illustrating the laws of planetary motion, derived solely from empirical observations of the planets in our Solar system. Applying the empirically derived gravitational model of the Solar system to spiral galaxies has produced a whole new field of study for the past several decades.

    Please refer to:
    "Need for context-aware computing in astrophysics",
    http://arxiv.org/abs/0805.4163

    "Rotating thin-disk galaxies through the eyes of Newton",
    http://www.arxiv.org/abs/1007.3778

  8. jtdwyer 9:09 am 07/10/2011

    Also, to pick a nit (again), "in silico" is an inaccurate and inappropriate term, unless biologists are etching their mathematical models in CMOS circuits. Technically, I think ‘in programmo’ (or whatever the proper Latin term for programming is) would be more appropriate, assuming your models are specified by programming-language statements.

    Good article, though, despite my complaints!

  9. desai 2:18 am 07/11/2011

    Very nice article.

    Before there is computational biology, I suppose there would be a mathematical biology, with mathematical equations to describe the behavior of the organs or parts thereof. In essence, these would be the mathematical laws for that organ. What do these equations look like and what do these laws say? What kind of terms are involved in these laws and how do you measure them?

  10. jtdwyer 9:50 am 07/11/2011

    Excellent question! This is the crux of my concern: in my experience models are most often used on an ad hoc basis to produce the desired result, not necessarily based on any comprehensive understanding of the processes being represented. While a mathematical proxy may produce useful results under some conditions, when applied to new conditions invalid results are often produced.

    These brute force methods using computational power to derive some limited representation of complex processes can produce invalid results in extended circumstances.

    Unless a mathematical equation can be proven to correctly represent a physical process it is of limited value to physics. Similarly, an analytical model of a physical process cannot be considered reliable until it has been formally verified to produce previously observed results from prior conditions and formally validated to produce heretofore unobserved newly confirmed results from new conditions.

  11. bnroberts 3:51 pm 07/14/2011

    Desai-

    True, and really, computational biology IS mathematical biology, just on a larger scale. But at the end of the day, we supply systems of equations to the computer and ask them to be solved.

    As for the forms of the equations, the terms involved, what they represent, etc., that’s highly variable and depends on the biological problem being investigated. In cardiac electrophysiology, the models often contain ordinary differential equations that represent the behavior of different types of ion channels, which change as a function of time and/or voltage. That’s just one example. In models that couple cells in a 1-D cable, 2-D sheet or 3-D structure, partial differential equations are involved to represent the spread of electrical waves in space and time. Back to the single-cell level, regular old algebraic equations represent changes in ion concentrations, cell volume, etc.

  12. bnroberts 3:55 pm 07/14/2011

    As for jtdwyer’s concerns, I’ll elaborate one final time, since I appear to have been unclear or not credible in my earlier response. The parameters of these models ARE informed by experimental measurements from real systems. That being said, it’s also not uncommon to have to do some parameter fitting to fit a variety of data types. For example, when using binding or rate constants to simulate ion channel activity that produces a change in voltage and/or ion concentration, the starting values for those binding constants come from experiments. However, you also want your model to fit experimental behavior at larger scales, such as the aforementioned voltage and ion (multiple species) changes. Given that data come from a variety of experimental sources (all with different amounts of error), and more importantly, models by definition are abstract and simplified representations of living systems, this type of adjusting microscopic parameters to achieve accurate macroscopic behavior is expected. The question is, how much deviation from experimental values is okay? That’s the tough question in mathematical/computational biology, physics, etc. True, sometimes this concern is not taken seriously by modelers, but all modelers that I know, and those who do peer review when new models are published, take this very seriously. Also, as I stated before, new models are tested to ensure they reproduce established biological behavior before being put to work making novel predictions. And at the end of the day, those predictions must be tested (to the extent possible) in real systems if they are to have any significance.

    In the A-fib modeling work that I discuss at the end of the article, it was observed that A-fib initiated/was maintained sometimes around the pulmonary veins and sometimes in the right atrium. Both of these help demonstrate the validity of the model, as the pulmonary veins are often where the triggers of A-fib originate in patients, while other patients, especially those with atrial flutter (a more organized form of A-fib), show activity originating in the right atrium. What is not known yet, and what the model attempts to shed light on, is how the two different types of remodeling affect the origin and duration of A-fib episodes. So we have a model that is informed by a large amount of experimental data that I did not go into, produces clinically valid behavior, and makes predictions that can set the stage for further experiments in tissue.

  13. jtdwyer 11:25 am 07/16/2011

    Your explanation offers very encouraging descriptions of the methods currently employed in computational biology. Thanks very much.

    However, in the context of desai’s perceptive remarks:

    Are specifications of standard equations and methods rigorously defined?

    Are critical or new model specification details peer reviewed as part of the research publication process?

    Are model specifications modified on an ad hoc basis following the publication of results?

    Again, I have no knowledge of the practices and procedures of ‘computational biology’, but in general models are most useful when they can be precisely specified descriptions of discrete systems, for use in materials research, for example, rather than highly interpretive representations of complex, ‘messy’ systems.

