To: Brumar89 who wrote (833714), 2/1/2015 2:13:00 PM
From: Wharf Rat
 
LMAO
Deniers deny. It's what they do. Have you thought to deny the existence of temperature recording devices?

Temperature homogenisation

Posted on February 1, 2015 by ...and Then There's Physics

Amazing as it may seem, the whole tampering-with-temperature-data conspiracy has managed to rear its ugly head once again. James Delingpole has a rather silly article that even Bishop Hill calls interesting (although, to be fair, I have a suspicion that in “skeptic” land, interesting sometimes means “I know this is complete bollocks, but I can’t bring myself to actually say so”). All of Delingpole’s evidence seems to come from “skeptic” bloggers, whose lack of understanding of climate science seems – in my experience – to be surpassed only by their lack of understanding of the concept of censorship :D .

Since I haven’t written much about the adjustments to temperature data, I thought I might take this opportunity to do so. One reason I haven’t done so very often before is that it’s not something I’m particularly familiar with, so I’m happy to be corrected by those who know more than I do (Victor Venema and Steven Mosher, for example). Let’s start, though, with a little thought experiment.

Imagine we want to create a dataset showing global temperatures over as long a timescale as possible; what would we do? Well, we’d design a temperature sensor and we’d place as many sensors as possible around the globe, in as regular a distribution as we could. We’d then take a temperature measurement at every location at exactly the same time every day. We’d also ensure that we didn’t move the sensors, change them in any way, change the site in any way, or change the time at which we took the measurement. If a sensor did need to be replaced, we’d ensure that the new one was calibrated to be exactly the same as the old one, and we’d keep meticulous records of everything related to the site and the measurements.

Having done this, we could then generate a record of temperatures for every site, from which we could determine a monthly average for every site. From this record, we could determine a long-term average for each month for each site (I think it’s actually for a region, rather than a site, but that’s not important for this thought experiment) and could then determine how the average temperature at each site for each month differed from this long-term average. This is called the temperature anomaly. We could then average all these anomalies to determine the global temperature anomaly. One reason for averaging anomalies, rather than averaging actual temperatures, is that it is less sensitive to missing data. Anomalies also allow you to better compare climatic trends at different locations that may have very different absolute temperatures.
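To make that pipeline concrete, here is a minimal sketch in Python – my own illustration, not the production code of any of the dataset groups – of computing per-site monthly anomalies against a baseline period and then averaging them across sites. The function names, array shapes and toy data are all assumptions made just for this example.

```python
# A minimal sketch (my own illustration, not any group's production code)
# of the anomaly method described above: per-site monthly anomalies
# relative to a fixed baseline period, then a simple unweighted average
# across sites. NaNs stand in for missing data.
import numpy as np

def monthly_anomalies(monthly_temps, baseline):
    """monthly_temps: array of shape (n_years, 12) for one site.
    baseline: slice selecting the reference years (e.g. the first 30)."""
    climatology = np.nanmean(monthly_temps[baseline], axis=0)  # long-term mean for each calendar month
    return monthly_temps - climatology                         # anomaly = value minus that month's long-term mean

def global_anomaly(site_anomalies):
    """site_anomalies: array of shape (n_sites, n_years, 12).
    Averaging anomalies rather than absolute temperatures copes better with gaps."""
    return np.nanmean(site_anomalies, axis=0)

# Toy usage: 3 sites, 50 years of monthly data, baseline = first 30 years.
rng = np.random.default_rng(0)
temps = 15 + 10 * np.sin(np.linspace(0, 2 * np.pi, 12)) + rng.normal(0, 1, (3, 50, 12))
anoms = np.stack([monthly_anomalies(t, slice(0, 30)) for t in temps])
print(global_anomaly(anoms).shape)  # -> (50, 12): one anomaly per calendar month per year
```

Real products also grid the stations and area-weight the average rather than treating every site equally, but the anomaly logic is the same.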

So, it all sounds easy. The problem is, we didn’t do this and – since we don’t have a time machine – we can’t go back and do it again properly. What we have is data from different countries and regions, of different qualities, covering different time periods, and with different amounts of accompanying information. It’s all we have, and we can’t do anything about this. What one has to do is look at the data for each site and see if there’s anything that doesn’t look right. We don’t expect the typical/average temperature at a given location at a given time of day to suddenly change. There’s no climatic reason why this should happen. Therefore, we’d expect the temperature data for a particular site to be continuous. If there is some discontinuity, you need to consider what to do. Ideally you look through the records to see if something happened. Maybe the sensor was moved. Maybe it was changed. Maybe the time of observation changed. If so, you can be confident that this explains the discontinuity, and so you adjust the data to make it continuous.
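As a rough illustration of that adjustment step – again my own sketch, not NOAA’s or anyone else’s actual procedure – suppose the station metadata records a move at a known month. One simple way to restore continuity is to shift the earlier segment by the mean offset across the break. The function name and window length are assumptions; operational homogenisation also compares the station against its neighbours.

```python
# A hedged sketch of adjusting for a documented break (e.g. a recorded
# station move at month `break_idx`): shift the earlier segment so the
# mean level matches across the break.
import numpy as np

def adjust_documented_break(series, break_idx, window=60):
    """series: 1-D monthly temperature series for one site (may contain NaNs).
    break_idx: index of the documented change; window: months compared on each side."""
    before = np.nanmean(series[max(0, break_idx - window):break_idx])
    after = np.nanmean(series[break_idx:break_idx + window])
    adjusted = series.copy()
    adjusted[:break_idx] += after - before  # remove the step introduced by the move
    return adjusted
```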

What if there isn’t a full record, or you can’t find any reason why the data may have been influenced by something non-climatic? Do you just leave it as is? Well, no, that would be silly. We don’t know of any climatic influence that can cause typical temperatures at a given location to suddenly increase or decrease. It’s much more likely that something non-climatic has influenced the data and, hence, the sensible thing to do is to adjust it to make the data continuous.
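When there is no metadata, the break has to be found statistically. The following is a crude sketch of the idea only – not SNHT or the pairwise algorithm actually used – scan candidate breakpoints and flag the one where the shift in segment means is largest relative to the noise. Everything here (the names, the minimum segment length, the score) is an assumption for illustration; real methods compare a station against its neighbours precisely so that genuine climatic changes are not mistaken for breaks.

```python
# A crude toy for flagging an undocumented break: score every candidate
# breakpoint by the mean shift relative to the noise (a t-statistic-like
# measure) and return the strongest one.
import numpy as np

def find_breakpoint(series, min_seg=24):
    """Return (index, score) of the strongest candidate step change in `series`."""
    best_idx, best_score = None, 0.0
    for i in range(min_seg, len(series) - min_seg):
        left, right = series[:i], series[i:]
        shift = abs(np.mean(right) - np.mean(left))
        se = np.sqrt(np.var(left) / len(left) + np.var(right) / len(right))
        score = shift / se if se > 0 else 0.0
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx, best_score

# Toy check: 20 years of monthly noise with an artificial 1 °C jump at month 120.
rng = np.random.default_rng(1)
series = rng.normal(0, 0.5, 240)
series[120:] += 1.0
print(find_breakpoint(series))  # should report an index near 120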

So, once you have a continuous record for every site, you can determine your long-term averages, determine the temperature anomalies, and average these to get your global temperature record. There’s nothing suspicious about this, and it’s not some kind of major conspiracy; it’s basic data analysis. Also, for those who claim that these adjustments always increase the warming trend, as Richard Betts points out, one of the biggest corrections was the bucket correction, which actually reduced the trend.

Anyway, that’s my attempt to explain the reason for temperature adjustments. If anyone thinks I haven’t got something quite right, or wants to add to this, feel free to do so through the comments. To be fair to James Delingpole, he did link to a post by Zeke Hausfather that explains – in much more detail than I have – the need for temperature adjustments. If you want a more thorough description of why temperature adjustments are necessary, from someone who works for one of the teams that produces a global temperature dataset, it would be worth a read. You could also read some of Victor Venema’s posts, and you could look at the global temperature anomalies from four of the other groups that produce such datasets, and see if you can spot any major differences.

I’ll finish with Kevin Cowtan’s video that explains the need for temperature adjustments in the datasets from Paraguay, which is what started this whole kerfuffle all over again.

andthentheresphysics.wordpress.com