Coldest morning in 126 years

I like this way of comparing models and data a lot more than the old "mean of models plus shaded confidence intervals" approach. Why? A bunch of reasons:

- If the models give a good replication of the stochastic variability of the real world, you would *expect* roughly 1 year in 20 to fall outside the 95% confidence interval. But most people don't get that, and just assume that "outside the shaded area" means "the models are wrong".

- When you average all the models together, you get a smooth upward trend (as one would expect, since averaging smooths out all the random noise). This allows the unethical to fool the naive by saying, "See! Those stupid warmists didn't predict the 'pause'!" When all the model realizations are plotted together, anyone who cares to look can see that many of them include 10-15 year "pauses", as is inevitable when the range of stochastic noise is on the same scale as 15-20 years of the underlying trend. (There's a quick synthetic sketch of both points right after this list.)
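
To make both points concrete, here's a quick Python sketch. Everything in it is synthetic and invented by me: the 0.02 C/yr trend and the AR(1) noise numbers are just round figures chosen so the noise sits on a similar scale to 15-20 years of trend, and the "observations" are simply one more draw from the same process as the "models". It is not output from any real GCM.

```python
# A quick synthetic sketch of both points above. Nothing here is real model
# output: each "run" is the same assumed warming trend plus AR(1) noise, and
# the "observations" are one more draw from the same process.
import numpy as np

rng = np.random.default_rng(0)
n_years, n_runs = 60, 100
trend = 0.02            # assumed forced warming, C per year (made-up round number)
sigma, phi = 0.12, 0.6  # AR(1) noise: innovation std-dev and year-to-year persistence

def ar1(n):
    """One AR(1) noise series of length n."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0, sigma)
    return x

years = np.arange(n_years)
runs = np.array([trend * years + ar1(n_years) for _ in range(n_runs)])
obs = trend * years + ar1(n_years)   # "reality": one more realization

# (a) Even though the "models" here are exactly right, a few observed years
# still land outside the ensemble's 95% envelope (on average about 1 in 20).
ens_mean, spread = runs.mean(axis=0), runs.std(axis=0)
outside = np.mean(np.abs(obs - ens_mean) > 1.96 * spread)
print(f"fraction of observed years outside the 95% envelope: {outside:.2f}")

# (b) The ensemble mean is smooth, but plenty of individual runs contain a
# 15-year window with a flat or negative fitted trend, i.e. a "pause".
def slope(window):
    return np.polyfit(np.arange(len(window)), window, 1)[0]

paused = sum(any(slope(run[s:s + 15]) <= 0 for s in range(n_years - 14)) for run in runs)
print(f"runs containing at least one 15-year 'pause': {paused} of {n_runs}")
print(f"fitted trend of the ensemble mean: {slope(ens_mean):+.3f} C/yr")
```

Re-run it with different seeds and the envelope exceedance bounces around the 5% mark, while a decent fraction of runs contain at least one flat 15-year stretch, even though every single run has the same warming trend built in.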

Incidentally, a recent paper (which I linked in another thread) shows that the individual model runs that happened to match the timing of real-world ENSO events fairly well also did a good job of matching the "pause". Remember that the timing of ENSO events is inherently unpredictable; the models only match their statistical behaviour.

Talk to a climate scientist and they'll tell you that, given all the stochastic cooling influences that happened to coincide over the past decade, we should actually have seen some substantial cooling. Instead, we've seen a La Niña year hotter than the extreme El Niño year of 1998, which at the time was considered an extraordinary outlier. Talk to a real climate scientist and you'll find they're really quite worried about this.
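
Here's a second toy sketch of that phase-matching idea. To be clear, this is my own illustration of the mechanism, not the method used in the paper, and the ENSO-to-temperature coupling number is invented: every synthetic run gets an ENSO-like index with randomly timed swings, we keep the handful of runs whose recent ENSO trend happens to line up with the "observed" one, and then compare 15-year temperature trends.

```python
# A toy illustration of the phase-matching idea: my own sketch of the
# mechanism, not the method of the paper mentioned above. Each synthetic run
# has an "ENSO-like" index with unpredictable timing that feeds into its
# temperature; the coupling strength is an invented number.
import numpy as np

rng = np.random.default_rng(1)
n_years, n_runs = 60, 200
trend, coupling = 0.02, 0.1          # forced warming (C/yr), ENSO-to-temperature coupling
recent = slice(n_years - 15, n_years)
t15 = np.arange(15)

def ar1(n, sigma=1.0, phi=0.7):
    """One AR(1) series of length n."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0, sigma)
    return x

def make_run():
    """One run: forced trend + ENSO-driven variability + a little extra noise."""
    enso = ar1(n_years)                       # timing is unpredictable by construction
    temp = trend * np.arange(n_years) + coupling * enso + rng.normal(0, 0.03, n_years)
    return enso, temp

runs = [make_run() for _ in range(n_runs)]
obs_enso, obs_temp = make_run()               # "reality": one more draw

def slope(window):
    return np.polyfit(t15, window, 1)[0]

# Keep the runs whose recent ENSO trend happens to be closest to the observed
# one, i.e. the runs that got the ENSO phasing roughly right by chance.
obs_enso_trend = slope(obs_enso[recent])
distance = np.array([abs(slope(e[recent]) - obs_enso_trend) for e, _ in runs])
matched = [runs[i] for i in np.argsort(distance)[:10]]

print(f"observed 15-year temperature trend: {slope(obs_temp[recent]):+.3f} C/yr")
print(f"all runs, mean 15-year trend:       {np.mean([slope(t[recent]) for _, t in runs]):+.3f} C/yr")
print(f"ENSO-matched runs, mean trend:      {np.mean([slope(t[recent]) for _, t in matched]):+.3f} C/yr")
```

The full ensemble averages out to the built-in forced trend, while the small ENSO-matched subset lands close to the "observed" short-term trend, which is the same qualitative behaviour the paper reports with real model output.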
 