Politics : Formerly About Advanced Micro Devices

To: Brumar89 who wrote (920145)2/9/2016 7:07:22 PM
From: Wharf Rat  Read Replies (1) of 1570016
 
Perhaps you should have read the entire article, along with the comment by Zeke from Cabin Creek, who put the Hausfather in Hausfather et al. (2016). Does Goddard know they switched from mercury thermometers to electronic instruments? Maybe he should start a campaign to switch back.

Adjusting U.S. Temperature Data
Posted on February 9, 2016
One of the loudest and longest criticisms of U.S. (and other) temperature data is that the raw data are adjusted to compensate for non-climate factors, so we can better identify the changes due to climate factors (which is, after all, what we really want to know).

All the adjustment procedures are well documented, and the programs, raw data, and adjusted data are publicly available, but deniers continue to imply either total incompetence or, far too often, outright fraud — that adjustments only exist to exaggerate warming trends. In truth, adjustments exist to make the data better.

When conditions change at some station, the temperature record changes too, but not because of weather or climate, and this makes the data inhomogeneous. The instrument might be replaced, the “microsite” environment may be altered, even the time of day at which data are recorded can change, and all of these can make a noticeable difference. It’s far better to compensate for those kinds of changes, to make the data more like what they would have been if conditions hadn’t changed. Perhaps the process of allowing for them is better described not as “adjustment” but as “homogenization.”

Two chief methods are applied. One is a time-of-observation correction. The other is the “pairwise homogenization algorithm,” or “PHA,” which compares nearby stations to detect when one may have experienced some change in conditions. Each station is compared to many of its neighbors, and a correction is applied only if a station shows a significant change relative to many of them. The real question is: how well does that process work?
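As a rough illustration of the pairwise idea (this is a toy sketch, not NOAA's actual implementation; all names, thresholds, and data here are invented), one can build the difference series between a target station and each neighbor, then flag a candidate breakpoint only when a step change shows up against several neighbors at once:

```python
import numpy as np

def flag_breakpoints(target, neighbors, window=24, z_thresh=3.0, min_votes=3):
    """Toy pairwise breakpoint detector (illustration only, not NOAA's PHA).

    target:    1-D array of monthly anomalies for one station
    neighbors: list of equal-length arrays for nearby stations
    Returns indices where the target shows a large mean shift relative to
    at least `min_votes` neighbors.
    """
    n = len(target)
    votes = np.zeros(n, dtype=int)
    for nbr in neighbors:
        # The climate signal shared by both stations cancels in the
        # difference series, so a step that remains is likely a
        # non-climate change at one of them.
        diff = np.asarray(target) - np.asarray(nbr)
        for t in range(window, n - window):
            before = diff[t - window:t]
            after = diff[t:t + window]
            pooled = np.sqrt((before.std(ddof=1) ** 2 + after.std(ddof=1) ** 2) / 2)
            if pooled > 0 and abs(after.mean() - before.mean()) / pooled > z_thresh:
                votes[t] += 1  # this neighbor "sees" a step at month t
    return np.where(votes >= min_votes)[0]

# Synthetic check: a 2-degree station move at month 60 should be flagged.
rng = np.random.default_rng(0)
climate = rng.normal(0.0, 0.5, 120)            # signal shared by all stations
target = climate + rng.normal(0.0, 0.1, 120)
target[60:] += 2.0                             # artificial inhomogeneity
neighbors = [climate + rng.normal(0.0, 0.1, 120) for _ in range(5)]
flagged = flag_breakpoints(target, neighbors)
```

Requiring agreement among several neighbors is what keeps a genuine regional climate shift (seen by everyone) from being mistaken for a station problem (seen only in one station's differences).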

To find out, Hausfather et al. (2016) compared station records, before and after homogenization, to a more reliable network of stations that doesn't need adjustment (much more detail, including data and code, is available here).

In the early 2000s, NOAA introduced a new set of weather stations specifically designed to avoid such inhomogeneities: the U.S. Climate Reference Network, or USCRN. Of course it's not possible to go back in time and install such stations, so the USCRN can't check the accuracy of past temperature data directly. But, as it turns out, we now have enough USCRN data to investigate the accuracy of those past data indirectly, because it gives insight into how well the homogenization procedures bring all the other temperature data into line with the USCRN data that don't need adjustment.

Because the USCRN stations are part of the ensemble normally used by the pairwise homogenization algorithm (PHA), the authors began by running the algorithm without the USCRN stations, to ensure that the homogenization was independent of USCRN data. They then applied the homogenization procedures to create an adjusted version of the USHCN (U.S. Historical Climatology Network) data. Finally, they compared temperature trends, with and without homogenization, to temperature trends based on the no-need-for-adjustment USCRN.

The results were extremely encouraging, showing that the adjustment procedure for USHCN brought it much more closely into alignment with USCRN. This is strong evidence that the adjustments are doing exactly what they were intended to do: remove the influences that don’t really tell us about temperature change, so what remains really does tell us about temperature change, not irrelevant change.

Their figure 2 shows just how well the homogenization procedure is working:

[Figure 2 from Hausfather et al. (2016): trend differences between nearby station pairs, before and after homogenization]
It shows the trend difference between station pairs that are within 100 miles of each other. In the upper graphs, the blue dots are the differences before homogenization, the red dots after. Note how the homogenization procedure not only makes nearby stations show much more similar trends (the differences shrink), it eliminates the “outlier” cases where a station shows such a dramatically different trend from its neighbor that it's just not believable. In such cases, one of the stations simply must be under the thumb of some non-climate factor, and homogenizing — removing that misleading factor — will definitely improve the quality of the individual station data, and of the regional and national trends we estimate from them.
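The quantity behind that figure, the trend difference for each nearby station pair, is straightforward to compute. A minimal sketch with invented station names and made-up data (the real analysis uses actual USHCN station series), assuming evenly spaced monthly values:

```python
import numpy as np

def decadal_trend(series, per_year=12):
    """Least-squares trend of an evenly spaced monthly series, per decade."""
    t = np.arange(len(series)) / per_year          # time in years
    return np.polyfit(t, series, 1)[0] * 10.0

def pair_trend_diffs(records, pairs):
    """Trend differences for nearby station pairs.

    records: dict of station name -> 1-D anomaly array
    pairs:   list of (name_a, name_b) tuples
    """
    return np.array([decadal_trend(records[a]) - decadal_trend(records[b])
                     for a, b in pairs])

# A spurious step at one station inflates its trend relative to its
# neighbor; removing the step (homogenization) collapses the difference.
months = np.arange(144)                            # 12 years of monthly data
base = 0.002 * months                              # common warming signal
station_a_raw = base.copy()
station_a_raw[72:] += 0.5                          # non-climate step change
raw_diff = pair_trend_diffs({"A": station_a_raw, "B": base}, [("A", "B")])
adj_diff = pair_trend_diffs({"A": base, "B": base}, [("A", "B")])
```

Since genuinely nearby stations experience nearly the same climate, a large trend difference in such a pair is itself evidence of a non-climate artifact, which is exactly what the figure's outliers show.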

It’s also interesting to compare trends estimated from the USHCN, on which national climate reports are based, to trends from the USCRN, which is a better record but doesn't reach far enough back in time to give us the climate perspective we need. They show this in their figure 1, part of which looks like this:

[Figure 1 (excerpt) from Hausfather et al. (2016): USHCN-minus-USCRN trend differences for daily high and daily low temperatures]
The upper panel is for daily high temperature, the lower one for daily low temperature. In both cases, the blue line is the difference between raw USHCN and USCRN, and the red line is the difference between homogenized USHCN and USCRN. The interesting part is that for daily high temperature, USHCN shows a consistently lower trend than USCRN, whether homogenized or not. For daily low temperature (and for the daily mean, not shown), USHCN and USCRN are in much better agreement. In any case, the homogenization procedure brings the USHCN into closer agreement with the USCRN.
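The summary number behind such a comparison is the trend of the difference series, USHCN minus USCRN: zero means the two networks agree on the rate of change. A minimal sketch with made-up series (the real comparison uses national monthly anomalies over roughly 2004-2015):

```python
import numpy as np

def difference_trend(ushcn, uscrn, per_year=12):
    """Decadal trend of the USHCN-minus-USCRN difference series.

    Zero means the two networks agree on the warming rate; a negative
    value means USHCN warms more slowly than the reference network.
    """
    diff = np.asarray(ushcn, dtype=float) - np.asarray(uscrn, dtype=float)
    t = np.arange(len(diff)) / per_year            # time in years
    return np.polyfit(t, diff, 1)[0] * 10.0

# Made-up example: the reference network warms at 0.30 deg/decade, the
# other network at only 0.25 deg/decade, so the difference trend is
# -0.05 deg/decade.
t_years = np.arange(144) / 12
uscrn_series = 0.030 * t_years
ushcn_raw_series = 0.025 * t_years
bias = difference_trend(ushcn_raw_series, uscrn_series)
```

Working with the difference series, rather than the two trends separately, removes the weather variability common to both networks, which makes any systematic divergence much easier to see.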

The real upshot of their research is that the homogenization procedure is working, and it’s doing the right thing; it makes raw USHCN data significantly closer to the “reference” data from USCRN. That doesn’t mean it’s perfect, and much remains to be done, including better understanding of why the USCRN shows a higher trend in daily high temperature than USHCN, even after adjustment.

But it does show, unambiguously, that critics of the entire adjustment process have no scientific basis for their complaints. Their real reason for complaint is that when non-climate factors are corrected for, it reveals that warming in the U.S. is even greater than the raw data suggest. My opinion is that their complaints aren't rooted in science at all, but in ideology.

I also expect that Ted Cruz will either avoid this research entirely, or will make up some ridiculous story about how they’re just trying to destroy our economy while instituting world government. He might even join forces with Lamar Smith to demand all the personal e-mails of the authors … in the name of Freedom.

tamino.wordpress.com

Zeke Hausfather | February 9, 2016 at 6:22 pm | Reply
Thanks for the excellent writeup Tamino.

One thing folks should note is that while there is wide variation in trends in the raw data compared to nearby CRN stations (as shown in our Figure 2), they tend mostly to cancel each other out over the 2004-2015 period we are examining. The same is not true in the U.S. in the 1980s and 1990s, when both time-of-observation changes and the switch from mercury thermometers to electronic instruments introduced large cooling biases that are corrected through homogenization. (NOAA does a separate time-of-observation correction prior to homogenization, though it's not really needed, as pairwise homogenization is quite good at detecting breakpoints due to time-of-observation changes, as discussed in Williams et al. 2012.)

While we can't directly use the CRN to test the effectiveness of adjustments during a period with systemic trend biases, the fact that adjustments do a good job of controlling divergent trends over the recent period certainly increases our confidence that they also worked well in the past. Further important work by Williams et al. (2012) and Venema et al. (2012) used synthetic data to test homogenization, and showed that it performs quite well for trend-inducing biases in either direction.