The Curious Wavefunction
Musings on chemistry and the history and philosophy of science

Are climate change models becoming more accurate and less reliable?

The views expressed are those of the author and are not necessarily those of Scientific American.





A sampling of the myriad factors typically included in a climate change model (Image: Maslin and Austin, Nature, 2012, 486, 183)

One of the perpetual challenges in my career as a modeler of biochemical systems has been the need to balance accuracy with reliability. This paradox is not as strange as it seems. Typically when you build a model you include a lot of approximations that are supposed to make the modeling process easier; ideally you want a model to be as simple as possible and to contain as few parameters as possible. But this strategy does not always work, since sometimes it turns out that in your drive for simplicity you have left a crucial factor out. So now you include this crucial factor, only to find that the uncertainties in your model go through the roof. What's happening in such unfortunate cases is that along with the signal from the previously excluded factor, you have also inevitably included a large amount of noise. This noise typically results from an incomplete knowledge of the factor, whether from calculation or from measurement. Modelers of every stripe thus have to tread a fine line between including as much of reality as possible and keeping the model accurate enough for quantitative explanation and prediction.
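To make the tradeoff concrete, here is a toy simulation (all numbers invented): the "true" system depends on two factors, but the second can only be measured with large error, so the model that includes it is more realistic and yet predicts worse.

```python
# Toy illustration of the accuracy/reliability tradeoff: a real but poorly
# measured second factor adds more noise than signal.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x1 = rng.normal(size=n)                  # well-measured factor
x2 = rng.normal(size=n)                  # true values of the second factor
y_true = 1.0 * x1 + 0.3 * x2             # the real world

x2_measured = x2 + rng.normal(scale=2.0, size=n)   # large measurement error

crude = 1.0 * x1                                   # crude model: omit x2
realistic = 1.0 * x1 + 0.3 * x2_measured           # "realistic" model

def rmse(pred):
    return np.sqrt(np.mean((y_true - pred) ** 2))

print(f"crude model RMSE:     {rmse(crude):.2f}")      # ~0.30
print(f"realistic model RMSE: {rmse(realistic):.2f}")  # ~0.60
```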

It seems that this is exactly the problem that has started bedeviling climate change models. A recent issue of Nature had a very interesting article on what seems to be a wholly paradoxical feature of the models used in climate science: as the models become increasingly realistic, they also become less accurate and predictive because of growing uncertainties. I can only imagine this to be an excruciatingly painful fact for climate modelers, who seem to be facing the equivalent of the Heisenberg uncertainty principle in their field. It's an especially worrisome time to deal with such issues, since the modelers need to include their predictions in the next IPCC report on climate change, which is due to be published this year.

A closer look at the models reveals that this behavior is not as paradoxical as it sounds, although it's still not clear how you would get around it. The article especially struck a chord with me since, as I mentioned earlier, similar problems often plague models used in chemical and biological research. In the case of climate change, the fact is that earlier models were crude and did not account for many fine-grained factors that are now being included (such as the rate at which ice falls through clouds). In principle and even in practice there are a bewildering number of such factors (partly exemplified by the picture on top). Fortuitously, the crudeness of the models also prevented the uncertainties associated with these factors from being included in the modeling; the uncertainty remained hidden. Now that more real-world factors are being included, the uncertainties endemic in these factors reveal themselves and get tacked on to the models. You thus face an ironic tradeoff: as your models strive to mirror the real world better, they also become more uncertain. It's like swimming in quicksand; the harder you try to get out of it, the deeper you get sucked in.

This dilemma is not unheard of in the world of computational chemistry and biology. A lot of the models we currently use for predicting protein-drug interactions, for instance, are remarkably simple and yet accurate enough to be useful. Several reasons account for this unexpected accuracy, among them cancellation of errors (the Fermi principle), similarities of training sets to test sets, and sometimes just plain luck. The similarity of training and test sets especially means that your models can be pretty good at explanation but can break down when it comes to prediction for even slightly dissimilar systems. In addition, error analysis is unfortunately not a priority in most of these studies, since the whole point is to publish correct results. Unless this culture changes, our road to accurate prediction will be painfully slow.
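The training/test trap is easy to see in a hedged sketch, with synthetic data standing in for real protein-drug measurements: a simple model fit on one region of chemical space explains beautifully but breaks down on a slightly shifted test set.

```python
# A simple model looks accurate while test compounds resemble the training
# set, and fails once they drift away from it. Synthetic data throughout.
import numpy as np

rng = np.random.default_rng(1)
x_train = rng.uniform(0, 1, 200)
y_train = x_train + 0.1 * x_train**3 + rng.normal(scale=0.05, size=200)
coeffs = np.polyfit(x_train, y_train, 1)       # deliberately simple model

x_test = rng.uniform(2, 3, 200)                # slightly dissimilar systems
y_test = x_test + 0.1 * x_test**3 + rng.normal(scale=0.05, size=200)

def rmse(x, y):
    return np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))

print(f"train RMSE: {rmse(x_train, y_train):.2f}")  # small: explains well
print(f"test RMSE:  {rmse(x_test, y_test):.2f}")    # large: prediction fails
```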

Here's an example from my own field of how "more can be worse". For the last few months I have been using a very simple model to try to predict the diffusion of druglike molecules through cell membranes. This is an important problem in drug development since even your most stellar test-tube candidate will be worthless until it makes its way into cells. Cell membranes are hydrophobic (water-hating) while the water surrounding them is hydrophilic (water-loving). The ease with which a potential drug transfers from the surrounding water into the membrane depends, among other factors, on its solvation energy, that is, on how readily the drug can shed water molecules; the smaller the solvation energy, the easier it is for the drug to get across. This simple model, which calculates the solvation energy, seems to do unusually well in predicting the diffusion of drugs across real cell membranes, a process that's much more complex than just solvation-desolvation.
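For illustration only, a one-descriptor model of this general kind might look like the sketch below; the functional form, slope and energies are invented stand-ins, not the actual model I use.

```python
# Hypothetical sketch: rank candidate drugs by how cheaply they shed water.
def predicted_log_permeability(desolvation_energy_kcal, slope=-0.5, intercept=0.0):
    """Lower desolvation cost -> higher predicted membrane permeability."""
    return intercept + slope * desolvation_energy_kcal

candidates = {"drug_A": 4.0, "drug_B": 7.5, "drug_C": 2.1}   # kcal/mol, invented
for name, dg in sorted(candidates.items(),
                       key=lambda kv: predicted_log_permeability(kv[1]),
                       reverse=True):
    print(name, predicted_log_permeability(dg))   # drug_C crosses most easily
```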

One of the fundamental assumptions in the model I am using is that the molecule exists in just one conformation in both water and the membrane. A conformation of a molecule is like a yoga position for a human being; typical organic molecules with many rotatable bonds usually have thousands of possible conformations. The assumption of a single conformation is fundamentally false, since in reality molecules are highly flexible creatures that interconvert between several conformations both in water and inside a cell membrane. To relax this assumption, a recent paper explicitly calculated the conformations of the molecule in water and included this factor in the diffusion predictions. This was certainly more realistic. To their surprise, the authors found that making the calculation more realistic made the predictions worse. While the exact mix of factors responsible for this failure can be complicated to tease apart, what's likely happening is that the more realistic factors also bring more noise and uncertainty with them. This uncertainty piles up, errors that were likely canceling before no longer cancel, and the whole prediction becomes fuzzier and less useful.
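A sketch of why the conformational correction can hurt: Boltzmann-weighting a property over several conformers whose energies each carry, say, 0.5 kcal/mol of error propagates that error straight into the ensemble average. All numbers below are hypothetical.

```python
# Ensemble averaging with noisy conformer energies: the "more realistic"
# answer comes with a substantial error bar that the single-conformation
# answer never had.
import numpy as np

kT = 0.593  # kcal/mol at ~298 K

def boltzmann_average(energies, prop):
    """Average a per-conformer property with Boltzmann weights."""
    w = np.exp(-(energies - energies.min()) / kT)
    return float(np.dot(w / w.sum(), prop))

rng = np.random.default_rng(2)
true_E = np.array([0.0, 0.4, 1.1, 1.8])   # conformer energies (invented)
prop = np.array([2.0, 3.5, 1.0, 4.0])     # per-conformer property (invented)

# Repeat the ensemble average with 0.5 kcal/mol random error on each energy
samples = [boltzmann_average(true_E + rng.normal(scale=0.5, size=4), prop)
           for _ in range(1000)]
print(f"single-conformation answer: {prop[0]:.2f}")
print(f"ensemble answer: {np.mean(samples):.2f} +/- {np.std(samples):.2f}")
```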

I believe that this is what is partly happening in climate models. Including more real-life factors in the models does not mean that all those factors are well understood or tightly measured. You are inevitably introducing some known unknowns. Ill-understood factors will introduce more uncertainty. Well-understood factors will introduce less uncertainty. Ultimately the accuracy of the models will depend on the interplay between these two kinds of factors, and currently it seems that the rate of inclusion of new factors is higher than the rate at which those factors can be accurately calculated or measured.
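The arithmetic behind this is ordinary error propagation: independent uncertainties add in quadrature, so one ill-understood factor can dominate the whole budget. A back-of-the-envelope sketch with invented numbers:

```python
# Independent errors add in quadrature; one poorly constrained factor swamps
# three tight ones.
import math

well_understood = [0.1, 0.15, 0.1]   # tight uncertainties (invented units)
ill_understood = [0.8]               # one poorly constrained new factor

before = math.sqrt(sum(s**2 for s in well_understood))
after = math.sqrt(sum(s**2 for s in well_understood + ill_understood))
print(f"uncertainty before adding the new factor: {before:.2f}")   # ~0.21
print(f"uncertainty after adding it:              {after:.2f}")    # ~0.83
```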

The article goes on to note that in spite of this growing uncertainty, the basic predictions of climate models are broadly consistent. However, it also acknowledges the difficulty of explaining the growing uncertainty to a public that has become more skeptical of climate change since 2007 (when the last IPCC report was published). As a chemical modeler I can sympathize with the climate modelers.

But the lesson to take away from this dilemma is that crude models sometimes work better than more realistic ones. My favorite quote about models comes from the statistician George Box who said that “all models are wrong, but some are useful”. It is a worthy endeavor to try to make models more realistic, but it is even more important to make them useful.

Note: As a passing thought, it's worth pointing out some of the common problems that can severely limit the usefulness of any kind of model, whether it's one used for predicting the stock market, the global climate, or the behavior of drugs, proteins and genes:

1. Overfitting: You fit the existing data so well that your model becomes a victim of its own success. It's stellar at explaining what's known, but it's so overly dependent on every single data point that a slightly different distribution of data destroys its predictive power (see the sketch after this list).

2. Outliers: On the other hand, if you fit only a few data points and ignore the outliers, your model again runs the risk of failing when facing a dataset "enriched" in the outliers.

3. Generality vs specificity: If you build a model that predicts average behavior, it may turn out to have little use in predicting what happens under specific circumstances. Call this the bane of statistics itself if you will, but it certainly makes prediction harder.

4. Approximations: This is probably the one limitation inherent to every model since every model is based on approximations without which it would simply be too complex to be of any use. The trick is to know which approximations to employ and which ones to leave out, and to run enough tests to make sure that the ones that are left out still allow the model to explain most of the data. Approximations are also often dictated by expediency since even if a model can include every single parameter in theory, it may turn out to be prohibitively expensive in terms of computer time or cost. There are many good reasons to approximate, as long as you always remember that you have done so.
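As a quick demonstration of point 1 above, here is a standard overfitting sketch with synthetic data: a degree-9 polynomial fits ten noisy training points exactly but collapses on a fresh draw from the same process, while a modest degree-3 fit holds up.

```python
# Overfitting demo on synthetic data: the flexible model memorizes the
# training points and loses its predictive power on new data.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 10)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=10)

fit_deg9 = np.polyfit(x, y, 9)   # interpolates every training point
fit_deg3 = np.polyfit(x, y, 3)   # a more modest model

x_new = np.linspace(0, 1, 10) + 0.03   # a slightly different distribution
y_new = np.sin(2 * np.pi * x_new) + rng.normal(scale=0.2, size=10)

def rmse(coeffs):
    return np.sqrt(np.mean((np.polyval(coeffs, x_new) - y_new) ** 2))

print(f"degree-9 RMSE on new data: {rmse(fit_deg9):.2f}")  # blows up
print(f"degree-3 RMSE on new data: {rmse(fit_deg3):.2f}")  # stays modest
```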

This is an updated and revised version of a post on The Curious Wavefunction blog.

About the Author: Ashutosh (Ash) Jogalekar is a chemist interested in the history and philosophy of science. He considers science to be a seamless and all-encompassing part of the human experience. Follow on Twitter @curiouswavefn.

Comments (24)

  1. cookchh 4:43 pm 02/27/2013

    The best way to deal with these modeling concerns is to run a sensitivity analysis, whereby you modulate your inputs to determine your potential range of outputs. This is somewhat common in engineering practice… but then we throw a factor of safety of 2 or 3 on our average and call it good.
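    For concreteness, here is a minimal one-at-a-time version of such a sensitivity analysis; the model and the input ranges are placeholders, not anyone's real system:

    ```python
    # One-at-a-time sensitivity analysis: perturb each input over its
    # plausible range and record the swing in the output.
    def model(a, b, c):                   # stand-in for any deterministic model
        return 3.0 * a + a * b - 0.5 * c

    nominal = {"a": 1.0, "b": 2.0, "c": 4.0}
    ranges = {"a": (0.8, 1.2), "b": (1.0, 3.0), "c": (3.5, 4.5)}

    base = model(**nominal)
    for name, (lo, hi) in ranges.items():
        outs = [model(**{**nominal, name: v}) for v in (lo, hi)]
        print(f"{name}: output spans {min(outs):.2f} to {max(outs):.2f} "
              f"(base {base:.2f})")
    ```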

  2. SteveO 4:47 pm 02/27/2013

    Thanks for the article Ashutosh. Might be teaching you about sucking eggs here, but in my experience even those who build big research models often don’t know about the following, so maybe it will do some good.

    This is a problem in any model, from multiple regression on up. When I teach these, we have a variety of things we do to try to prevent these occurrences. Let’s assume we have all continuous factors and are building a model from data. There are ways of handling general linear models (discrete, continuous, or whatever) and non-linear models, but that would go on for a while…

    First, you need to check for multicollinearity. What you have described above is what happens when you include factors that are themselves highly correlated with each other, so my first guess is that this check wasn't done. Say pressure and dew point and insolation are highly correlated – if I include them all I am not only making the model needlessly complex, I might also actually increase its volatility – how wildly it swings when variables change. Add to that uncertainties in measurement and you have a crazy model. We use the variance inflation factor (VIF) as a criterion to weed out multicollinear factors.
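    A from-scratch version of that check, with synthetic weather-like data standing in for a real design matrix: regress each factor on the others and take VIF = 1 / (1 - R^2).

    ```python
    # Variance inflation factors computed from first principles.
    import numpy as np

    def vif(X):
        """Variance inflation factor for each column of X."""
        factors = []
        for j in range(X.shape[1]):
            y = X[:, j]
            others = np.delete(X, j, axis=1)
            A = np.column_stack([others, np.ones(len(y))])  # add an intercept
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            r2 = 1.0 - ((y - A @ beta).var() / y.var())
            factors.append(1.0 / (1.0 - r2))
        return np.array(factors)

    rng = np.random.default_rng(4)
    pressure = rng.normal(size=500)
    dew_point = pressure + rng.normal(scale=0.1, size=500)  # nearly collinear
    insolation = rng.normal(size=500)                       # independent
    print(vif(np.column_stack([pressure, dew_point, insolation])))
    # roughly [100, 100, 1]: pressure and dew point should not both be included
    ```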

    Also, the adjusted R-sqr will help to determine if you have added unneeded factors. If you add in a crazy number of factors, your R-sqr will look great, but it will be useless for prediction. The adjusted R-sqr accounts for adding in extra factors. If you are using an automated process to add or subtract factors, there are other criteria that you can use (Cp, AIC or SBC come to mind, or even a simple F-test on the added or deleted factor). This protects against overfitting.
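    The adjustment is simple enough to state directly: with n observations and p predictors, adjusted R-sqr = 1 - (1 - R^2)(n - 1)/(n - p - 1). A quick sketch of how piling on factors erases an apparently better fit:

    ```python
    def adjusted_r2(r2, n, p):
        """n = observations, p = predictors (excluding the intercept)."""
        return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

    print(adjusted_r2(0.90, n=50, p=3))    # ~0.89: small penalty
    print(adjusted_r2(0.92, n=50, p=30))   # ~0.79: many factors, heavy penalty
    ```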

    If you are simply adding all potential factors to the model and not using any selection criteria, you are doing it wrong and the model will likely be useless.

    You have to assess for outliers, but there are criteria here. The objective is not to drop outliers (you only do that when you can show that they are outside of the process you are modelling) but to understand if you have influential outliers. Outliers (extreme values in the data set) are not necessarily bad as they might be representative of the process you are trying to model. But if it is an outlier and has an outsized effect on the final model, you want to know about it and possibly justify dropping it. Standardized deleted residuals, DFBETA, DFFITS and Cook's distance will help you there.
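    A short sketch of one of those diagnostics, assuming the statsmodels package is available: plant a single influential point in synthetic data and let Cook's distance flag it.

    ```python
    # Influence diagnostics: Cook's distance picks out the planted outlier.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    x = rng.uniform(0, 10, 40)
    y = 2.0 * x + rng.normal(scale=1.0, size=40)
    x[0], y[0] = 25.0, 10.0                  # plant one influential outlier

    fit = sm.OLS(y, sm.add_constant(x)).fit()
    cooks_d = fit.get_influence().cooks_distance[0]
    print("most influential point:", int(np.argmax(cooks_d)))   # index 0
    ```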

    The next thing that I would guess might be behind such occurrences is not testing the assumptions of the model, whatever flavor it is. This is a very common error in the literature, and could also manifest itself as a volatile model that is right on average, but with high variability.

    Anyway, I probably don’t have space here to cover half a semester’s material, but there are well-proven techniques for dealing with the problems you describe. The problem is that a lot (and by that I mean I work at a Tier I research university, and there are a LOT) of researchers have never heard of any of this, throw everything into a model, and publish with nary a peep from their peer-reviewers.

    Anyway, hope it helps someone…

  3. sault 6:34 pm 02/27/2013

    While climate modeling is useful for adaptation to a changing climate, the uncertainties in the models don’t absolve humanity from reducing our GHG emissions. The timing, mode, and geographic dispersion of climate change effects are uncertain. The fact that more GHGs = more warming and climate disruption is not. I don’t think we need to know how quickly a bear can maul somebody to confirm that we need to stop poking the bear in the first place.

  4. Vincentrj 6:52 pm 02/27/2013

    At last! The truth about Anthropogenic Climate Change.

    As someone who is skeptical about the accuracy of alarmist claims for the consequences of our CO2 emissions, I’ve always been puzzled as to how it is possible for climatologists to place a figure on the certainty of predictions based upon computer models.

    Whilst there is no doubt at all that the climate is changing (it is always changing; it cannot do otherwise; change is inherent in its nature and complexity), I think it is an impossible task to predict with plausible certainty the role that increases in relatively minuscule percentages of CO2 will have on our climate.

    To put the quantities in common parlance, we’re talking about increases in atmospheric CO2 levels from approximately ‘one quarter of one tenth of one percent’ to ‘two fifths of one tenth of one percent’ over the past couple of hundred years.

    To predict with any degree of reliable certainty the effect that such tiny increases in CO2 levels may have would surely require a perfect understanding of all the influences that affect our climate, and the precise magnitude of such influences.

    Does anyone really believe we have such near-perfect knowledge?

  5. curiouswavefunction 7:06 pm 02/27/2013

    SteveO: Thanks for your note. I am aware of some of these techniques and I am sure the list will be useful for model builders.

  6. Trafalgar 7:20 pm 02/27/2013

    Vincentrj: You could have just written “The world heats up, the world cools down, you can’t explain that!”

    Reacting to the potential species-and-civilization-ending danger (if something isn't done) by immediately going into "see no evil, hear no evil, speak no evil" mode isn't benefiting humanity at all.

  7. geojellyroll 7:33 pm 02/27/2013

    The issue with the concept is that there is no end game to compare the model to. No set of data that says ‘we got it right’. More accurate or less accurate according to what bottom line?

    Sure, incomplete models can be accurate, but so is a stopped clock twice a day. Climate is a big moving target. Fewer hurricanes than predicted but more snow… less rain but more people with flu… fewer monarch butterflies but an increase in polar bear numbers. Climate models are sort of right and sort of wrong. More variables added make them continue to be sort of right and sort of wrong.

  8. sault 9:19 pm 02/27/2013

    vincentrj,

    You need to read up on basic climate science as you are just parroting a bunch of long-debunked climate denier canards.

    While the climate has indeed changed in the past before humans came along, that doesn’t necessarily mean that humans can’t have an impact on the climate right now. Just like there were forest fires before humanity existed, it’s silly to think that this means people can’t start forest fires on their own, whether intentionally or unintentionally.

    And you are severely lacking in your knowledge of CO2's properties and the role it plays in the atmosphere. You try to obscure the effect of CO2 by claiming that it is a small part of the atmosphere. Well, did you know that 99% of the gas in Earth's atmosphere is TRANSPARENT to the longwave radiation that CO2 absorbs? In addition, 400ppm and COUNTING is nothing to laugh at! Try breathing in 400ppm of cyanide or drinking water with 400ppm arsenic before you make such ridiculous claims!

    Look, we know the radiative properties of CO2 rather well and we’ve OBSERVED the growing energy imbalance it is causing in the Earth’s climate. You can’t ignore the FACT that more CO2 = more warming. There is some uncertainty on what the Earth’s climate sensitivity really is and what the timing / distribution / magnitude of climate change’s impacts will be. But this is NO excuse to delay action since waiting for 100% certainty about these things GUARANTEES that we will be saddled with a climate that is mostly unrecognizable for generations.

    Are you and the other deniers ready to take that risk? Or can we put a reasonable price on carbon emissions and work as a species to reduce them? This prospect scares the bejeezus out of the people who profit massively from the use of fossil fuels, so they’ve started spending “one quarter of one tenth of one percent” of their multi-TRILLION-dollar revenues to fool people like you into thinking that the world’s climate scientists have overlooked something that an anonymous commenter on a discussion board can easily identify. Do you really think this is plausible?

  9. Sisko 9:49 pm 02/27/2013

    All of the conclusions used by the IPCC that humanity needs to take drastic steps to reduce CO2 emissions have been based on the outputs of models that forecasted that conditions important to the lives of humans would be negatively impacted as a result of higher CO2 concentrations. These same models have now been demonstrated to have done a poor job in accurately simulating the conditions.

    If you reach a conclusion based on the output of a model that has been proven to do a poor job, does it necessarily mean that your conclusions were wrong? Imo no, but it does mean that others are reasonable in being skeptical of your recommendations.

    http://judithcurry.com/2013/02/22/spinning-the-climate-model-observation-comparison/

  10. Sisko 10:00 pm 02/27/2013

    Sault

    Your comment to vincentrj was highly unjustified.

    vincentrj accurately points out that we do not have reliable information regarding how conditions that are important to the lives of humans will change in different places around the world many years from now, and yet you use the term "denier". As was written, you do not know the consequences of climate change for any place.

  11. Dr. Strangelove 10:36 pm 02/27/2013

    If you cannot accurately model clouds, no climate model can make accurate predictions. Present climate models are inaccurate because their prediction of sensitivity for doubling CO2 has a large range from 1C to 4C.

    Because of this uncertainty, people can play games. ‘Warmers’ will say 4C. ‘Deniers’ will say 1C. But if you extrapolate recent history, it’s about 2C. The catch is history will not repeat itself. It’s not that our models are poor but climate is inherently chaotic and unpredictable.

  12. sault 10:51 pm 02/27/2013

    Sisko,

    You might want to look at this:

    http://thecontributor.com/why-climate-deniers-have-no-scientific-credibility-one-pie-chart

    “…out of 13,950 peer-reviewed articles published on global warming since 1991, only 23, or 0.16 percent, clearly reject global warming or endorse a cause other than CO2 emissions for observed warming.

    A few deniers have become well known from newspaper interviews, Congressional hearings, conferences of climate change critics, books, lectures, websites and the like. Their names are conspicuously rare among the authors of the rejecting articles. Like those authors, the prominent deniers must have no evidence that falsifies global warming.”

    Sorry, but vincentrj is just denying basic science and TOTALLY earns that denier label. Whether it's repeating the "climate has changed before humans" canard that was easily taken down by my forest fire analogy or the "one quarter of one tenth of one percent" malarkey that takes only cursory knowledge of physics to see through, his (just guessing here) unscientific blathering about a subject he CLEARLY knows nothing about beyond the fossil fuel propaganda he's been gulping down is readily apparent.

    Anyway, you, me and ALL THE WORLD'S CLIMATE SCIENTISTS agree that there is some uncertainty in climate modeling and future projections. But let me say this for the umpteenth time: this is NOT an excuse for inaction! CO2 traps heat and we've increased its concentration in the atmosphere by 40% with NO signs of slowing down. If we don't act NOW, the projections of future impacts range from bad to utterly catastrophic.

    So what’s so bad about regulating fossil fuel pollution adequately so that humanity isn’t unnecessarily burdened with its effects? Just cleaning up coal power plants in the USA could lower the $100B – $500B in yearly damages that coal pollution causes, and for pennies on the dollar too as has been demonstrated time and time again by numerous pollution control measures. Going by historical compliance-to-benefit ratios, why should we cause society as a whole to endure $100B in pollution damages just so the few who profit from delivering coal-powered electricity can get to skip out on $10B in pollution prevention measures? I just can’t see how this makes sense.

    And you seem to agree that CO2 does cause climate change, but you and I both are just uncertain about how large those effects will be. Estimates for the “social cost of carbon” range from $20 per ton to around $900 per ton! The exact value is somewhere in the middle and all people who are worried about climate change want is a price on carbon that comes in at the LOWEST (and least probable) value for the damages a ton of CO2 will do over its time in the atmosphere. I’m even all for refunding 80% or more back to taxpayers by reducing payroll taxes with the revenues. Let’s tax something we all agree is bad (excess CO2 in the atmosphere) and reduce taxes on something we all agree is good (employment and productivity). Use the rest of the money to cut the 50% of energy the USA wastes via energy efficiency and fund R&D / deployment of renewables and nuclear power while you’re at it!

    I’m very interested in where you think this plan has shortcomings.

  13. HowardB 7:54 am 02/28/2013

    The ‘problem’ that hides behind the issues here is very simple.
    The problem is that the climate is enormously more complex than the models being used to try to predict its future behaviour. Not only is it more complex, but our understanding of what influences it, and by how much, is still in its infancy.
    Add to that the widespread inaccuracy and bias of the core data being used, and the result we have is a mish-mash of results that essentially ends up being more of a product of the bias of the modeller than a valuable scientific product.
    It is time that this issue was exposed and admitted by the Climate prediction community, and it is time that they tell the simple truth to the public – they ‘think’ they know how it works and they ‘think’ that human activity is seriously causing the current climate changes …. but they are in no way close to saying so for sure.
    The public would have a lot more respect for them if they did this. And when the science gets better, as it inevitably will, then the public will have more respect for their future assertions, in turn.

  14. rodestar99 8:23 am 02/28/2013

    To make matters worse, researchers can be influenced by their own biases to include or not include factors that make their models give the desired results. This makes the models highly questionable.

  15. Quantumburrito 8:26 am 02/28/2013

    I think the point about predicting only averages and global (as opposed to local) changes is important. For instance warming can be good for certain cold places but I don’t know if the models can predict this. I am sure there are at least some people in the world in cold regions who are happy about “global” warming. In fact I think that’s one of the problems with the whole argument; it assumes that all effects of climate change must necessarily be bad ones.

  16. Sundance 10:21 am 02/28/2013

    AJ – Do you have any thoughts on divergence over time as it relates to model integrity? Say for example the mean ensemble average for global temperature rise for IPCC models is found to exceed 2SD after 17 years, is it meaningful? At what point do you get concerned that the models are problematic? TIA.

  17. Sisko 11:08 am 02/28/2013

    You seem to be exhibiting a lack of reading comprehension as well as a tendency to show your prejudice. Neither I nor vincentrj wrote that additional CO2 would not warm the planet to some extent.

    We both accurately pointed out that there is no reliable information available today that can tell us the impacts to any specific place over time. There are no models that can tell us, for example, that it will rain 20% more in Dallas in 30 years due to AGW.

    You, Sault, actually seem to be a denier. You deny the truth about the ability of the current set of models. You deny the truth that virtually every peer reviewed paper describing the net negative impacts forecasted for the planet as a result of AGW was written based on the simulations of multiple models of unknown accuracy (at the time they were written), and now these same models have been demonstrated not to perform well in making accurate forecasts.

    Try to notice that I have not denied the physics of AGW. Try to notice that the issue is what the rate of warming will be in the actual system and what the net positive and negative impacts will be over the long term and where. Try to notice that you wish to make decisions based on what you believe and I wish to make those same decisions based upon reliable data.

    It is not an excuse for inaction; it is trying to encourage people to implement actions that don’t waste our limited resources.

    The projections of future impacts DO NOT range from bad to utterly catastrophic. They range from net positive impacts to catastrophic, and the catastrophic impact forecasts have been demonstrated to be wrong based on what is actually observed.

    What is wrong with regulating fossil fuel pollution? Nothing, but is CO2 actually a pollutant, and to what extent? That is subject to great debate in everyone's mind except for the zealots, and even zealots have to agree there is debate regarding at what levels it is considered pollution.

  18. curiouswavefunction 11:54 am 02/28/2013

    Sundance: That’s a very good question and while I am no climatologist I can only imagine that it’s a valid concern. In my own field where people simulate the motions of proteins using molecular dynamics, there tends to be a “drift” of the energy over time which can lead results to become less accurate over longer time scales. This is partly because the experimental data points used for parametrizing the fundamental equations may be valid only over shorter times. I am guessing that climate models also have to deal with these issues since not all the data that’s used for parametrization is going to be applicable over all time periods.
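    For what it's worth, the 2SD comparison Sundance mentions is easy to state numerically. A toy version with invented numbers (30 hypothetical model runs of a warming trend against one observed trend):

    ```python
    # Toy divergence check: does the observation sit outside 2 SD of the
    # ensemble? All values are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(6)
    ensemble = rng.normal(loc=0.20, scale=0.05, size=30)  # trends, C/decade
    observed = 0.08                                       # hypothetical

    mean, sd = ensemble.mean(), ensemble.std(ddof=1)
    z = (observed - mean) / sd
    print(f"observation sits {z:+.1f} SD from the ensemble mean")
    print("outside 2 SD" if abs(z) > 2 else "within 2 SD")
    ```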

  19. sault 12:07 pm 02/28/2013

    Sisko,

    Please share with me the projections of positive impacts. I have found a few, but they are relatively minor, limited to the extreme northern latitudes that are sparsely populated and are overwhelmed by the projected negative effects:

    • Improved agriculture in some high latitude regions (Mendelsohn 2006)
    • Increased growing season in Greenland (Nyegaard 2007)
    • Increased productivity of sour orange trees (Kimball 2007)
    • Increased cod fishing leading to improved Greenland economy (Nyegaard 2007)
    • An ice-free Northwest Passage, providing a shipping shortcut between the Pacific and Atlantic Oceans (Kerr 2002, Stroeve 2008)

    Like I’ve said, over 200,000 papers pop up when you search for “Negative Effects of Climate Change” in Google Scholar, and that’s just for publications since 2009! And in my previous post, I showed that only 0.16% of papers published since 1991 rejected the theory that GHG emissions are causing global warming.

    Anyway, looks like we can both agree that CO2 emissions cause some damage. If we tax those emissions, then energy markets have the price signal they need to find the most efficient solutions. What price on carbon emissions do you think is adequate?

  20. Dr. Strangelove 8:27 pm 02/28/2013

    Sundance: According to this study, the margin of error in a 5-year forecast of climate models is plus or minus 8.8C. This means the models are totally unreliable for prediction purposes.

    http://www.skeptic.com/wordpress/wp-content/uploads/v14n01resources/climate_of_belief.pdf

  21. kienhua68 8:38 pm 02/28/2013

    Like many times before, the majority won't react until the effects are felt in the literal sense, like the recent tragedies that were 'necessary' to stimulate serious discussion on solutions. Humans can act based on thought but prefer physical suffering as verification. After all, no one wants to get fooled!!

  22. MikeMacCracken 11:15 am 03/1/2013

    Speaking as a former climate modeler, early models included a number of empirically based representations of processes (e.g., cloud cover being simply a function of relative humidity) and parameters (e.g., specifying an average value of cloud reflectivity for various types of clouds). These had the benefit of being set at observed values, but the disadvantage of treating all situations the same–for example, cloud reflectivity and absorptivity did not take account of the level of small particles that cloud droplets could form on, which would determine the likely size distribution of the drops and so cloud reflectivity. Well, the particles can be from sea salt, air pollution, and other factors, so the model simulations would presumably be improved by accounting for such factors, and so some improved representations were made in the models. But then the model would need to be representing pollution, etc., and the process of improving the model representations of processes would continue.

    Pushing for better representations of processes is important in seeking to improve climate models because as the climate is changed by the increasing concentrations of greenhouse gases (and the physics on this are quite straight-forward–there will be even more uncharacteristic and disruptive warming than we have started to experience as emissions continue), a key question is whether the empirical relationships models are based upon will hold up–and the only way to get at this is to work toward more and more comprehensively representing the actual basic physics (and chemistry and biology to extent it can be done). There seems no way that such improvements will make the problem go away (after all, the representations we use today are tested for lots of different conditions over the Earth as we run the models now; and in addition, paleoclimatic information provides some guidance), but, on a global basis, the details can change, and that can turn out to be very significant at the local to regional scale (e.g., variations in the path, speed, and temperature of storms striking California as global warming occurs can make a big difference in where it rains more or less and whether precipitation falls as rain or snow, so improving processes that determine the water content of clouds is very important).

    Now, as the article suggested, as one process is upgraded (e.g., how convective clouds are represented), it can mean that other processes need to be upgraded, and so the performance of the leading models can shift back and forth a bit. Fortunately, there are a good number of models, each of which is pushing forward on its own path, and an established protocol for running and testing models, and the “wisdom of crowds” effect seems to kick in–that is, taken across the set of climate variables of interest, the net result of the leading models taken together seems to give a bit better representation of the weather situations that add up to make the climate than the results of any single model.
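    The "wisdom of crowds" effect Mike describes is easy to reproduce numerically; in this toy sketch (synthetic truth, ten models with independent errors), the multi-model mean beats the typical single model:

    ```python
    # Multi-model ensemble mean vs. single models, on synthetic data.
    import numpy as np

    rng = np.random.default_rng(7)
    truth = np.sin(np.linspace(0, 3, 100))                 # stand-in "climate"
    models = [truth + rng.normal(scale=0.3, size=100) for _ in range(10)]

    def rmse(m):
        return float(np.sqrt(np.mean((m - truth) ** 2)))

    print("typical single-model RMSE:", round(np.mean([rmse(m) for m in models]), 3))
    print("ensemble-mean RMSE:       ", round(rmse(np.mean(models, axis=0)), 3))
    ```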

    Yes, model simulations of the climate (i.e., set of weather statistics) of the present and recent past are not perfect, there are differences or uncertainties, but the models are giving quite good representations of the changes occurring from pole to equator and through the seasons and over the 20th century and longer, and efforts are underway to further improve them. In all of this effort, however, there is no basis for reconsidering the fundamental physics that underpins the climate change findings that were first put forth scientifically in 1896, reported on in 1965 to President Johnson and Congress by his scientific advisory panel, and elaborated on by four assessment reports of the Intergovernmental Panel on Climate Change accepted and adopted unanimously by the nations of the world and endorsed by all the major academies of science, etc. Yes, getting the details better worked out is important, but waiting for the details is like continuing to smoke while the particular role of smoking each cigarette is worked out–it is a huge risk to take, and for climate change, it is the health of the one planet we depend on for resources and safety, not just an individual's life (important as those can be).

    So, let’s get on with the energy transformation that is needed.

  23. Qiloff 12:09 pm 03/1/2013

    Just a casual technician's opinion: 50 years ago we called it suntan lotion, now we call it sunblock lotion. 50 years ago I could play all day at the beach; today an hour will burn me. Besides, I read a lot here about how climate change affects humans, who can just turn the A/C up or down; the fauna and flora have no such luxury.
    If humans have not caused this rapid change then what has? If it is not the consumption of the Earth's resources then what? Is it the CO2 that is causing this change, or is there something else being introduced to the atmosphere that is responsible? I just know it is going to be another scorcher this summer. To me it does not seem ordinary.

  24. Dr. Strangelove 10:48 pm 03/4/2013

    @MikeMacCracken

    Fundamental physics predicts that without feedbacks, doubling CO2 will increase global temp. by about 1C. The problem is in the real world, climate is dominated by feedbacks. The same fundamental physics predicts that feedbacks are chaotic and inherently unpredictable.
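    For reference, the rough arithmetic behind that no-feedback figure, using the standard simplified forcing expression of Myhre et al. (1998) and an approximate no-feedback (Planck) response of 0.3 K per W/m^2:

    ```python
    import math

    forcing = 5.35 * math.log(2)   # doubling CO2: ~3.7 W/m^2 (Myhre et al. 1998)
    planck_response = 0.3          # K per (W/m^2), approximate no-feedback value
    print(f"no-feedback warming for 2x CO2: {forcing * planck_response:.1f} C")
    # ~1.1 C; the 1C-4C range comes from feedbacks (water vapor, clouds, ice)
    ```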

    Retrospectively the climate is ‘predictable’ because you know the exact values of all the variables in your model. Prospectively it is unpredictable because you don’t know the exact values of the input variables. You can make guesses but the probability you will get all of them right is vanishingly small.
