May 15, 2013
When I was in graduate school I once came across a computer program that’s used to predict the activities of as yet unsynthesized drug molecules. The program is “trained” on a set of existing drug molecules with known activities (the “training set”) and is then used to predict those of an unknown set (the “test set”). In order to make learning the ropes of the program more interesting, my graduate advisor set up a friendly contest between me and a friend in the lab. We were each given a week to train the program on an existing set and find out how well we could do on the unknowns.
After a week we turned in our results. I actually did better than my friend on the existing set, but my friend did better on the test set. From a practical perspective his model had predictive value, a key property of any successful model. Mine, on the other hand, still needed work. Being able to “predict” already existing data is not prediction; it is explanation. Explanation is important, but a model like mine that merely explains what is already known is incomplete, since the value and purpose of a truly robust model is prediction. Worse, a model that merely explains can be made to fit the data simply by tweaking its parameters against the known experimental numbers.
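The distinction can be made concrete with a toy sketch (this is my own illustration, not the actual drug-activity program; the data here are made up). A flexible model with many tweakable parameters will always “explain” the training set at least as well as a simpler one, but that says nothing about how it predicts unseen data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "activity" data: a simple linear trend plus measurement
# noise, standing in for a training set and a held-out test set.
x_train = np.linspace(0.0, 1.0, 20)
y_train = 2.0 * x_train + rng.normal(0.0, 0.2, x_train.size)
x_test = np.linspace(0.025, 0.975, 20)
y_test = 2.0 * x_test + rng.normal(0.0, 0.2, x_test.size)

def mse(coeffs, x, y):
    """Mean squared error of a polynomial model on data (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

simple = np.polyfit(x_train, y_train, deg=1)    # two parameters
flexible = np.polyfit(x_train, y_train, deg=9)  # ten tweakable parameters

# The flexible model always fits the training set at least as well,
# because its parameter space contains the simple model as a special case.
print("train:", mse(simple, x_train, y_train), mse(flexible, x_train, y_train))
# On the test set it typically does worse: it has fit the noise.
print("test: ", mse(simple, x_test, y_test), mse(flexible, x_test, y_test))
```

The training-set inequality is guaranteed by least squares; the test-set comparison is where a model earns, or fails to earn, its predictive value.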
These were the thoughts that went through my mind as I read a recent paper in Nature Climate Change in which climate modelers “predicted” the last ten years of global temperature stagnation. The lack of global warming since about 2000 does not disprove everything we know about climate change; the discovery of global warming rests on much more than computer models (Weart, 2008). But models are still an integral tool for predicting future changes, and the fact that the current stagnation was not accurately encompassed by them posed an inconvenient truth for climate scientists. In the latest paper, scientists from Spain and France seem to have identified the reason for the failure: the models were underestimating the oceans’ contribution as a sink for heat. Heat absorption by the ocean is a long-established mechanism for slowing or halting atmospheric warming, but the models were apparently not accounting for this natural variability well enough. When human-induced warming and ocean absorption reinforce each other, you get a net warming signal; when they oppose each other, the ocean sink puts a brake on the warming, which is what we have seen in recent years. From what I can tell, once the authors bumped up the parameters governing ocean heat absorption, one particular model could reproduce the observed stagnation of temperatures.
That’s fair enough. This kind of retrospective calculation is a standard part of model building. But let’s not call it a “prediction”; it is a “postdiction”. The present study indicates that models used for predicting temperature changes need more work, especially when dealing with tightly coupled complex systems such as ocean sinks. In addition, you cannot simply make these models work by tweaking their parameters; that approach risks condemning the models to a narrow window of applicability beyond which they lack the flexibility to accommodate sudden changes. A robust model has a minimal number of parameters, does not need constant tweaking to explain what has already happened, and is as general as possible. Current climate models are not useless, but in my opinion the fact that they could not prospectively predict the temperature stagnation implies that they lack robustness. They should really be seen as works in progress.
I can also see how such a study will negatively affect the public image of global warming. People are seldom impressed by prediction after the fact, and there is little doubt that skeptics and deniers will seize on this study to play up the supposed futility of climate change models. But this is really a problem with any model designed to make predictions about complex systems. The right thing to do is to honestly own up to the failures of your models and suggest modifications; it is only through such constant feedback that the models can be improved. The next IPCC assessment should clearly state this discrepancy. Georgia Tech professor Judith Curry puts the issue in context:
“The flawed assumption behind the orthodoxy was that natural variability is merely ‘noise’ superimposed on the long term trend. The natural variability has been shown over the past two decades to have a magnitude that dominates the greenhouse warming signal. It is becoming increasingly apparent that our attribution of warming since 1980 and future projections of climate change needs to consider natural internal variability as a factor of fundamental importance. I sincerely hope that the (IPCC) AR5 provides an assessment of what we know and what we don’t know and areas of disagreement, rather than trying to manufacture a consensus.”
Unfortunately this standard process of introspection and improvement is subverted when a topic like climate change becomes highly politicized. Proponents are often wary of publicizing limitations as part of a healthy process of scientific give and take for fear of retribution by denialists. The politicization of science harms both proponents and honest skeptics and we are all worse off for it.