Politics : Formerly About Advanced Micro Devices


To: Brumar89 who wrote (831102)1/20/2015 8:44:05 AM
From: Brumar89  Read Replies (1) | Respond to of 1578933
 
NASA Facing The Heat For Hottest Year Lie

With great fanfare, NASA announced that 2014 was the hottest year on record. The part of the story they didn't mention is that they are only 38% sure it was the hottest on record. The other thing they didn't mention is that the figure came from surface temperatures rather than satellite data, and that part of the surface data consisted of estimated temperatures.

Per the UK Daily Mail, "the claim made headlines around the world, but yesterday it emerged that GISS’s [NASA's Goddard Institute for Space Studies] analysis – based on readings from more than 3,000 measuring stations worldwide – is subject to a margin of error. NASA admits this means it is far from certain that 2014 set a record at all.
Yet the Nasa press release failed to mention this, as well as the fact that the alleged ‘record’ amounted to an increase over 2010, the previous ‘warmest year’, of just two-hundredths of a degree – or 0.02C. The margin of error is said by scientists to be approximately 0.1C – several times as much.

As a result, GISS’s director Gavin Schmidt has now admitted NASA thinks the likelihood that 2014 was the warmest year since 1880 is just 38 per cent. However, when asked by this newspaper whether he regretted that the news release did not mention this, he did not respond."
To put it another way, it was almost twice as likely that 2014 was not the warmest year since 1880 as that it was, a fact that Mr. Schmidt did not want the public to know.
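The headline numbers are easy to sanity-check with a toy simulation. A minimal sketch, treating the quoted 0.02C gap over 2010 as the measured difference and the 0.1C margin of error as one standard deviation on that difference (an assumption for illustration; GISS's 38% figure compares 2014 against every earlier year, which pulls the probability lower than this pairwise check):

```python
import random

# Toy check: probability that 2014 really beat 2010, given a measured
# gap of 0.02 C and a 0.1 C error treated as one standard deviation.
# These modeling choices are illustrative, not GISS's actual method.
random.seed(0)
gap, sigma, trials = 0.02, 0.1, 100_000
wins = sum(1 for _ in range(trials) if random.gauss(gap, sigma) > 0)
print(f"P(2014 beat 2010) ~ {wins / trials:.2f}")
```

Even this generous pairwise comparison comes out close to a coin flip, which is the point of the 38% admission.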

@hockeyschtick1 I'm 97% sure that even the 38% is a bunch of BS https://t.co/q9aMG0C5bw pic.twitter.com/HyBZ92Ty0Q
— Steve Goddard (@SteveSGoddard) January 18, 2015

Climate scientist Dr. Roy Spencer of the University of Alabama was embarrassed by the claim:
I am embarrassed by the scientific community’s behavior on the subject. I went into science with the misguided belief that science provides answers. Too often, it doesn’t. Some physical problems are simply too difficult. Two scientists can examine the same data and come to exactly opposite conclusions about causation.

We still don’t understand what causes natural climate change to occur, so we simply assume it doesn’t exist. This despite abundant evidence that it was just as warm 1,000 and 2,000 years ago as it is today. Forty years ago, “climate change” necessarily implied natural causation; now it only implies human causation.

What changed? Not the science…our estimates of climate sensitivity are about the same as they were 40 years ago.

What changed is the politics. And not just among the politicians. At AMS or AGU scientific conferences, political correctness and advocacy are now just as pervasive as they have become in journalism school. Many (mostly older) scientists no longer participate, and many have even resigned in protest.

Science as a methodology for getting closer to the truth has been all but abandoned. It is now just one more tool to achieve political ends.

Reports that 2014 was the “hottest” year on record feed the insatiable appetite the public has for definitive, alarming headlines. It doesn’t matter that even in the thermometer record, 2014 wasn’t the warmest within the margin of error. Who wants to bother with “margin of error”? Journalists went into journalism so they wouldn’t have to deal with such technical mumbo-jumbo. I said this six weeks ago, as did others, but no one cares unless a mainstream news source stumbles upon it and is objective enough to report it.
The fudging may be even bigger: climate blogger Steven Goddard at Real Science examined the raw numbers and discovered:
Gavin quietly says that there is a 62% chance that 2014 was not the warmest year on record, but he had to give his boss a talking point for the State of the Union address this week.

So Gavin simply fabricated warm temperatures across huge areas like Greenland, where he had no actual thermometer data in December. Gavin showed much of western Greenland 1-2C above normal, when it was actually 2C below normal. It doesn't take a lot of that sort of cheating to get temperatures up 0.02C globally.

The claim also ignores the satellite temperature data, also from NASA, which show that the earth hasn't warmed in 18 years and 3 months.



Not only was the claim as made by Gavin Schmidt bogus, but the numbers and analysis behind it were wrong as well. Or, to quote physicist Luboš Motl, "Please laugh out loud when someone will be telling you that it was the warmest year."

http://yidwithlid.blogspot.com/2015/01/nasa-facing-heat-for-hottest-year-lie.html



To: Brumar89 who wrote (831102)1/20/2015 10:05:46 AM
From: Wharf Rat  Read Replies (1) | Respond to of 1578933
 
I predict that Obama won't call Goddard a liar, cuz he's never heard of him.

So 2014 may not have been warmest?

That has been the meme from people who don't like the thought. Bob Tisdale, at WUWT, gives a rundown. There is endless misinterpretation of a badly expressed section in the joint press release from NOAA and GISS announcing the record.

The naysayers' drift seems to be that there is uncertainty, so we can't say there is a record. But this is no different from any year/month in the past, warmest or coldest. 2005 was uncertain; 2010 also. Here they are, for example, proving that July 1936 was the hottest month in the US. The same uncertainties apply, but no, it was the hottest.

So what was badly expressed by NOAA/GISS? They quoted uncertainties without giving the basis for them. What do they mean, and how were they calculated? Just quoting the numbers without that explanation is asking for trouble.

The GISS numbers seem to be calculated as described by Hansen, 2010, paras 86, 87, and Table 1. It's based on the vagaries of spatial sampling. Temperature is a continuum - we measure it at points and try to infer the global integral. That is, we're sampling, and different samples will give different results. We're familiar with that; temperature indices do vary. UAH and RSS say no records, GISS says yes, just, and NOAA yes, verily. HADCRUT will be very close; Cowtan and Way say 2010 was top.

I think NOAA are using the same basis. GISS estimates the variability from GCMs, and I think NOAA mainly from subsetting.

Anyway, this lack of specificity about the meaning of CIs is a general problem that I want to write about. People seem to say there should be error bars, but when they see a number, enquire no further. CI's represent the variation of a population of which that number is a member, and you need to know what that population is.

In climate talk, there are at least three quite different types of CI:
  • Measurement uncertainty - variation if we could re-measure same times and places
  • Spatial sampling uncertainty - variation if we could re-measure same times, different places
  • Time sampling uncertainty - variation if we could re-measure at different times (see below), same places
I'll discuss each below the jump.

Update and digression (OT): a plot of the progress of the GISS record (not reproduced here) shows 2014 as a modest jump.
Measurement uncertainty

This is the least frequently quoted, mainly because it is small. But people very often assume it is what is meant. Measurement can have bias or random error. Bias is inescapable, even hard to define. For example, MMTS often reads lower than thermometers. It doesn't help to argue which is right; only to adjust when there is a change.

I speak of a random component, but the main aspect of it is that when you average a lot of readings, there will be cancellations. A global year average has over a million daily readings. In an average of N readings, cancellation should reduce noise by about sqrt(N); in this case by a factor of 1000.
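A minimal sketch of that cancellation, with an assumed per-reading error of 0.5C (an invented figure for illustration): average N noisy readings, repeat the experiment, and watch the spread of the averages shrink by roughly sqrt(N).

```python
import random, statistics

# Average N readings with random error sd = 0.5 C (illustrative),
# repeat 50 times, and measure the spread of those 50 averages.
# The spread should fall by about sqrt(N) as N grows.
random.seed(1)
sigma = 0.5
spreads = {}
for n in (1, 100, 10_000):
    means = [statistics.fmean(random.gauss(0, sigma) for _ in range(n))
             for _ in range(50)]
    spreads[n] = statistics.stdev(means)
    print(f"N = {n:>6}: spread of the average ~ {spreads[n]:.4f}")
```

Each factor of 100 in N cuts the spread by about 10, which is why a million daily readings knock the random component down by roughly a factor of 1000.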
Spatial sampling uncertainty

This is present in every regional average. As said above, we claim an average over all points in the region, but have only a sample. A different sample might give a different result. This is not necessarily due to randomness in the temperature field; when GISS gives an uncertainty, I presume it reflects some randomness in the choice of stations, quite possibly for the same field.

A reasonable analogy here is the stock exchange. We often hear of a low for the year, or a record high, etc. That reflects a Dow calculation on a sample of stocks. A different sample might well lead to a non-record. And indeed, there are many indices based on different samples. That doesn't seem to bother anyone.

What I find very annoying about the GISS/NOAA account is that in giving probabilities of 2014 being a record, they don't say if it is for the same sample. I suspect it includes sample variation. But in fact we have very little sample variation. In 2010 we measured in much the same places as in 2014. It makes a big difference.
Time sampling uncertainty

This is another often quoted, usually misunderstood error. It most frequently arises with trends of a temperature series. They are quoted with an uncertainty which reflects a model of variation within timesteps. I do those calculations on the trend page and have written a lot about what that uncertainty means. The important distinction is that it is not an error in the trend that was. It is an uncertainty in the trend that might have been, if the climate could be rerun with a new instance of random variation. That might sound silly, but it does have some relevance to extrapolating trends into the future. Maybe you think that is silly too.
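A minimal sketch of that kind of CI, using a toy 18-year series with an assumed trend of 0.01C/yr and 0.1C of year-to-year noise (both numbers invented for illustration). The +/- comes from the residual variance, i.e. the spread the slope would have if the same years were rerun with fresh noise:

```python
import math, random

# Fit an OLS trend to an invented 18-year series (trend 0.01 C/yr,
# noise sd 0.1 C, both illustrative assumptions).  The standard error
# of the slope is the usual time-sampling CI described in the text.
random.seed(2)
years = list(range(1997, 2015))
temps = [0.01 * (y - years[0]) + random.gauss(0, 0.1) for y in years]

n = len(years)
xbar = sum(years) / n
ybar = sum(temps) / n
sxx = sum((x - xbar) ** 2 for x in years)
slope = sum((x - xbar) * (t - ybar) for x, t in zip(years, temps)) / sxx
resid = [t - ybar - slope * (x - xbar) for x, t in zip(years, temps)]
se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
print(f"trend = {slope:.4f} +/- {2 * se:.4f} C/yr (2 se)")
```

The fitted slope is a fixed weighted sum of the data that actually happened; only the +/- refers to the hypothetical rerun population.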

Briggs has a muddled but interesting article, trenchantly deprecating this use of CI's. RealClimate cited a trend (actually just quoting Cowtan and Way) as 0.116 +/- 0.137. Said Briggs:
"Here’s where it becomes screwy. If that is the working definition of trend, then 0.116 (assuming no miscalculation) is the value. There is no need for that “+/- 0.137” business. Either the trend was 0.116 or it wasn’t. What could the plus or minus bounds mean? They have no physical meaning, just as the blue line has none. The data happened as we saw, so there can not be any uncertainty in what happened to the data. The error bounds are persiflage in this context."

I don't totally disagree. 0.116 is the trend that was. The interesting thing is, you can say the same about the commonly quoted standard error of the mean. Each is just a weighted sum, with the error calculated by adding the weighted variances.

I've used this analogy. If you have averaged the weights of 100 people, the CI you need depends on what you want to use the average for. If it is to estimate the average weight of the population of which they are a fair sample, then you need the se. But if you are loading a boat, and want to know if it can carry them, the se is of no use. You want average instrumental error, if anything.
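A minimal sketch of the boat-loading point, with invented weights: the standard error of the mean describes the population the passengers were sampled from, while the total load on board is a plain number with no sampling error at all.

```python
import math, random, statistics

# 100 invented passenger weights, drawn from N(80 kg, 12 kg) purely
# for illustration.  The se of the mean is a statement about the
# population; the total load is what the boat actually carries.
random.seed(3)
weights = [random.gauss(80, 12) for _ in range(100)]
mean = statistics.fmean(weights)
se = statistics.stdev(weights) / math.sqrt(len(weights))
total = sum(weights)
print(f"mean weight = {mean:.1f} kg, se of mean = {se:.2f} kg")
print(f"total load  = {total:.0f} kg")
```

Which error bar you want depends entirely on the question you are asking of the average.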

And the thing about trend is, you often are interested in a particular decade, not in its status as a sample. That is why I inveigh against people who want to say there was no warming over period x, because, well, there was, and maybe a lot, but it isn't statistically significant. Statistical significance is about whether it might, in some far-fetched circumstances, happen again, not about whether it actually happened.

Briggs is right on that. Of course, I couldn't resist noting that in his recent paper with Monckton, such CI's featured prominently, with all the usual misinterpretations. No response; there never is.

Statistical tie

OK, this is a pet peeve of mine. CI's are complicated, especially with such different bases, and people who can't cope often throw up their hands and call it a "statistical tie". But that is complete nonsense. And I was sorry to see it crop up in Hansen's 2014 summary (Appendix), where 2014, 2010 and 2005 were declared to be "statistically tied".

You often see this in political polling, where a journalist has been told to worry about sampling error, and so declares a race where A polls 52% and B 48%, within sampling error, to be a "statistical tie".

But of course it isn't. Any pol would rather be on 52%. And such a margin close to the election usually presages a win. Any Bayesian could sort that out.
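A minimal sketch of what any Bayesian could sort out, assuming a hypothetical poll of 1,000 voters with 520 for A and a flat prior, so that A's true support is Beta(521, 481) a posteriori:

```python
import random

# Posterior probability that the candidate polling 52% actually leads.
# The poll size (1,000), the split (520/480) and the flat prior are
# all invented assumptions for this illustration.
random.seed(4)
a_votes, n, draws = 520, 1000, 100_000
lead = sum(1 for _ in range(draws)
           if random.betavariate(a_votes + 1, n - a_votes + 1) > 0.5)
print(f"P(A really leads) ~ {lead / draws:.2f}")  # nowhere near 50/50
```

A result around nine chances in ten is not anyone's idea of a tie, which is exactly the point about calling 52-48 "statistically tied".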

2014 was the warmest year. It doesn't matter how you juggle probabilities. There is no year with a better claim.

moyhu.blogspot.com