There’s a lot of fuss about climate modeling these days, with the validity of certain models (and of models in general) often being called into question. So how could you test whether today’s models are correct, other than by waiting? That’s still a debatable question, but if we look at past models, how correct did they turn out to be? The answer (even without the computing power we have today) is: surprisingly good.
In 1999, 14 years ago, Myles Allen and colleagues at Oxford University published a very significant prediction, which was met with both enthusiasm and skepticism. His team was one of the first ever to combine complex computer simulations of the climate system with adjustments based on historical observations. Their objective was to produce both an estimate of global mean warming and a range of uncertainty. He predicted that the decade ending in December 2012 would be a quarter of a degree warmer than the decade ending in August 1996.
Now, a recent paper published in Nature Geoscience compared this prediction to the data actually observed in the years since. The paper showed that Allen was spot on with his prediction – and if anything, he was slightly conservative. Compared to his forecast, the early years of the new millennium were somewhat warmer than expected, but after that, temperatures settled in and almost perfectly matched his model.
Allen said:
“I think it’s interesting because so many people think that recent years have been unexpectedly cool. In fact, what we found was that a few years around the turn of the millennium were slightly warmer than forecast, and that temperatures have now reverted to what we were predicting back in the 1990s.”
“Of course, we should expect fluctuations around the overall warming trend in global mean temperatures (and even more so in British weather!), but the success of these early forecasts suggests the basic understanding of human-induced climate change on which they were based is supported by subsequent observations,” he concluded.
The only thing left to discuss is whether analyzing just this one study is a case of survivorship bias – and one could make a good case for that idea. Survivorship bias is a statistical error; in this case, it would mean taking just one study that turned out to be correct while ignoring others that were not so spot-on. It would be a titanic task to take all the climate prediction models ever made, split them into peer-reviewed and non-peer-reviewed, then see which ones were right and which ones were wrong (or, better put, how close each came to reality). You’re probably going to see headlines in other outlets like “Global warming predictions are accurate” or “Climate change models are correct” – and that’s just bad journalism. The only thing I’d like to point out is that this wasn’t a single correct forecast cherry-picked to throw dust in your eyes. Several other models turned out to be spot-on (those from 1990 and 1981, just off the top of my head) – so this is another piece of evidence supporting the validity of scientific claims regarding climate forecasts, not proof that all of them are correct.