Politics : Formerly About Advanced Micro Devices

To: Brumar89 who wrote (835057), 2/7/2015 7:34:53 PM
From: Wharf Rat
 
Another trilogy from the pen of Zeke Hausfather, of the Koch-funded BEST (Berkeley Earth) study

How not to calculate temperature
5 June, 2014 (14:04) | Data Comparisons Written by: Zeke

The blogger Steven Goddard has been on a tear recently, castigating NCDC for making up “97% of warming since 1990” by infilling missing data with “fake data”. The reality is much more mundane, and the dramatic findings are nothing other than an artifact of Goddard’s flawed methodology. Let’s look at what’s actually going on in more detail.

Goddard made two major errors in his analysis, which produced results showing a large bias due to infilling that doesn’t really exist. First, he is simply averaging absolute temperatures rather than using anomalies. Absolute temperatures work fine if and only if the composition of the station network remains unchanged over time. If the composition does change, stations dropping out will often introduce climatological biases into the network, due to differences in elevation and average temperature that don’t reflect any real information about month-to-month or year-to-year variability. Lucia covered this well a few years back with a toy model, so I’d suggest that people who are still confused about the subject consult her spherical cow.
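Lucia's spherical-cow post makes this concrete with a toy model; here is a minimal sketch along the same lines (the station values and trends below are invented for illustration, not real station data):

```python
# A toy model in the spirit of Lucia's spherical cow (hypothetical numbers):
# two stations share an identical 0.02 C/yr trend but sit at different
# elevations, and the cold one stops reporting halfway through the record.
valley = [15.0 + 0.02 * yr for yr in range(40)]    # warm, low elevation
mountain = [5.0 + 0.02 * yr for yr in range(20)]   # cold; drops out after year 19

# Method 1: average the absolute temperatures of whichever stations report.
absolute = [(valley[yr] + mountain[yr]) / 2 if yr < 20 else valley[yr]
            for yr in range(40)]

# Method 2: average anomalies relative to each station's own baseline
# (its mean over the common period, years 0-19).
base_v = sum(valley[:20]) / 20
base_m = sum(mountain) / 20
anomaly = [((valley[yr] - base_v) + (mountain[yr] - base_m)) / 2 if yr < 20
           else valley[yr] - base_v
           for yr in range(40)]

print(absolute[19], absolute[20])  # ~10.38 then ~15.40: a spurious ~5 C jump
print(anomaly[19], anomaly[20])    # ~0.19 then ~0.21: the true trend survives
```

Neither station ever warmed faster than 0.02 C/yr, yet the absolute average jumps about 5 C the year the cold station disappears. Anomalies remove each station's own climatology first, so changes in network composition add noise but not a systematic bias.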

His second error is to not use any form of spatial weighting (e.g. gridding) when combining station records. While the USHCN network is fairly well distributed across the U.S., it’s not perfectly so, and some areas of the country have considerably more stations than others. Not gridding can also exacerbate the effect of station drop-out when the stations that drop out are not randomly distributed.
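A minimal sketch of what gridding accomplishes, assuming 5x5-degree cells and invented station tuples (a real analysis would also area-weight each cell, roughly by the cosine of its latitude, which this sketch skips):

```python
import math

def gridded_mean(stations, cell_deg=5.0):
    """stations: list of (lat, lon, anomaly) tuples (hypothetical data).
    Averages stations within each grid cell first, then averages the cells,
    so a dense cluster of stations no longer dominates the mean."""
    cells = {}
    for lat, lon, val in stations:
        key = (math.floor(lat / cell_deg), math.floor(lon / cell_deg))
        cells.setdefault(key, []).append(val)
    cell_means = [sum(vals) / len(vals) for vals in cells.values()]
    return sum(cell_means) / len(cell_means)

# Ten clustered stations reading +1.0 and one distant station reading 0.0:
stations = [(40.1 + 0.1 * i, -100.0, 1.0) for i in range(10)] + [(30.0, -80.0, 0.0)]
naive = sum(s[2] for s in stations) / len(stations)
print(naive, gridded_mean(stations))  # ~0.91 (cluster-dominated) vs 0.5
```

The unweighted average is pulled almost entirely toward the dense cluster; gridding gives each region one vote regardless of how many stations happen to sit there.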

rankexploits.com

How not to calculate temperatures, part 2
24 June, 2014 (18:08) | Data Comparisons Written by: Zeke

Unfortunately some folks who really should know better paid attention to the pseudonymous Steven Goddard, which spawned a whole slew of incorrect articles in places like the Telegraph, Washington Times, and Investor’s Business Daily about how the U.S. has been cooling since the 1930s. It was even the top headline on the Drudge Report for a good portion of the day. This isn’t true even in the raw data, and certainly not in the time-of-observation-corrected or fully homogenized datasets.

As mentioned earlier, Goddard’s fundamental error is that he just averages absolute temperatures with no use of anomalies or spatial weighting. This is fine when station records are complete and well distributed; when the station network composition is changing over time or the stations are not well-distributed, however, it gives you a biased result as discussed at length earlier.

There is a very simple way to show that Goddard’s approach can produce bogus outcomes. Let’s apply it to the entire world’s land area instead of just the U.S., using GHCN monthly:

Egads! It appears that the world’s land has warmed 2 C over the past century! It’s worse than we thought!

rankexploits.com

How not to calculate temperatures, part 3
26 June, 2014 (14:21) | Data Comparisons Written by: Zeke

My disagreement with Steven Goddard has focused on his methodology. His approach is quite simple: he averages all the temperatures for each station by year, then averages those annual means across all stations reporting in each year.
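That two-step procedure can be sketched in a few lines, under an assumed data layout (a dict of station to year to monthly values; the station names and numbers below are hypothetical, not real USHCN records):

```python
def simple_average(data):
    """data: {station_id: {year: [monthly temps in C]}} (hypothetical layout).
    Per-station annual means, then a straight unweighted average across
    whichever stations reported that year."""
    years = sorted({y for rec in data.values() for y in rec})
    result = {}
    for y in years:
        annual_means = [sum(rec[y]) / len(rec[y])
                        for rec in data.values() if y in rec]
        result[y] = sum(annual_means) / len(annual_means)
    return result

data = {
    "warm_station": {2000: [20.0] * 12, 2001: [20.0] * 12},
    "cold_station": {2000: [0.0] * 12},  # stops reporting after 2000
}
print(simple_average(data))  # {2000: 10.0, 2001: 20.0}
```

With no anomalies and no spatial weighting, the series jumps 10 C between 2000 and 2001 even though neither station's readings changed at all; the "warming" is purely an artifact of the cold station leaving the network.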

I’ve been critical of this approach because I’ve argued that it can result in climatology-related biases when the composition of the station network changes. For example, if the decline in reporting stations post-1990 resulted in fewer stations from lower-latitude areas, it would introduce a cooling bias into the resulting temperature record that is unrelated to actual changing temperatures.

Let’s take a look at how big a difference the choice of methods makes. The figure above shows Goddard’s method using the raw data in red, the correct method (gridded anomalies) using the raw data in blue, and the fully homogenized data (e.g. NCDC’s official temperature record) in green. Goddard’s method serves to exaggerate the effect of adjustments, though significant adjustments remain even when using a method unbiased by changes in underlying climatology. The majority of these adjustments are for changing time of observation at measurement stations, while much of the remainder corrects a cooling bias in maximum temperatures due to the change from liquid-in-glass thermometers to MMTS thermometers. NCDC’s adjustments have been discussed at length elsewhere, so for the remainder of this post I’m going to focus on the factors that produce the difference between the red and blue lines. These differences may seem small, but they result in a non-negligible difference in the trend, and they drive incorrect claims like Goddard’s assertion that the U.S. has been cooling since the 1930s and that most of the warming post-2000 is due to “fabricated data” from infilling.

rankexploits.com