From: Brumar89, 11/22/2009 10:38:33 PM
 
Maybe the leaked emails were just the tip of the iceberg. No wonder they were so desperate to hide their data - it's crap.

The Harry_Read_Me File
November 22, 2009

Love the ironic title. Got this from reader, Glenn. I’m out of my depth trying to read the code–and apparently so were several folks at CRU. If what he, and the techies at the links, say is true, it’s no wonder they had to spin this for 10 years–it’s all absolute bullshit.

Here’s Glenn’s take with links:

The hacked e-mails were damning, but the problems they had handling their own data at CRU are a dagger to the heart of the global warming “theory.” There is a large file of comments by a programmer at CRU called HARRY_READ_ME documenting that their data processing and modeling functions were completely out of control.

They fudged so much that NOTHING that came out of CRU can have ANY believability. If the word can be gotten out on this and understood, it is the end of the global warming myth. This is much bigger than the e-mails. For techie takes on this see:

tickerforum.org

neuralnetwriter.cylo42.com

To base a re-making of the global economy (i.e. cap-and-trade) on disastrously and hopelessly messed up data like this would be insanity.

Send it far and wide.

Steve has it up on his mirror site–see blogroll–with the first link. Bwuhahahaha.

cbullitt.wordpress.com

tickerforum.org

"Global Warming" SCAM - Hack/Leak FLASH in forum [Ticker]
Asimov

Ok.... Who is Tim Mitchell? Did he die or something? There's a very disturbing "HARRY_READ_ME.txt" file in the documents that APPEARS to be somebody trying to fit existing results to data, and much of it is about the code that's here. I think there's something very very wrong here...

This file is 15,000 lines of comments, much of it copy/pastes of code or output by somebody (who's Harry?) trying to make sense of it all....

Here's two particularly interesting bits, one from early in the file and one from way down:

Quote:
7. Removed 4-line header from a couple of .glo files and loaded them into Matlab. Reshaped to 360r x 720c and plotted; looks OK for global temp (anomalies) data. Deduce that .glo files, after the header, contain data taken row-by-row starting with the Northernmost, and presented as '8E12.4'.
The grid is from -180 to +180 rather than 0 to 360.
This should allow us to deduce the meaning of the co-ordinate pairs used to describe each cell in a .grim file (we know the first number is the lon or column, the second the lat or row - but which way up are the latitudes? And where do the longitudes break?
There is another problem: the values are anomalies, wheras the 'public' .grim files are actual values. So Tim's explanations (in _READ_ME.txt) are incorrect..

8. Had a hunt and found an identically-named temperature database file which did include normals lines at the start of every station. How handy - naming two different files with exactly the same name and relying on their location to differentiate! Aaarrgghh!! Re-ran anomdtb:
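
As an aside, the .glo layout described in point 7 above is simple enough to sketch. Here is a minimal Python reader assuming only what the quote states (a 4-line header, 360 x 720 values written row by row from the northernmost row, fixed-width '8E12.4' fields); the function and file names are illustrative, not CRU's actual code:

import numpy as np

NROWS, NCOLS = 360, 720      # 0.5-degree global grid
HEADER_LINES = 4             # per the quote, a 4-line header precedes the data
FIELD_WIDTH = 12             # '8E12.4': eight 12-character fields per line

def read_glo(path):
    # Read a .glo anomaly grid laid out as described in point 7.
    values = []
    with open(path) as f:
        for _ in range(HEADER_LINES):
            f.readline()                          # skip the header
        for line in f:
            line = line.rstrip("\n")
            # split on fixed widths, since Fortran E-format fields can touch
            for i in range(0, len(line), FIELD_WIDTH):
                field = line[i:i + FIELD_WIDTH].strip()
                if field:
                    values.append(float(field))
    # row 0 is the northernmost band; columns run -180 to +180 in longitude
    return np.array(values).reshape(NROWS, NCOLS)

# anomalies = read_glo("some_file.glo")   # hypothetical file name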

Uhm... So they don't even KNOW WHAT THE ****ING DATA MEANS?!?!?!?!

What dumbass names **** that way?!

Talk about cluster****.
This whole file is a HUGE ASS example of it. If they deal with data this way, there's no ****ing wonder they've lost **** along the way. This is just unbelievable.

And it's not just one instance of not knowing what the hell is going on either:

Quote:
The deduction so far is that the DTR-derived CLD is waaay off. The DTR looks OK, well OK in the sense that it doesn;t have prominent bands! So it's either the factors and offsets from the regression, or the way they've been applied in dtr2cld.

Well, dtr2cld is not the world's most complicated program. Wheras cloudreg is, and I immediately found a mistake! Scanning forward to 1951 was done with a loop that, for
completely unfathomable reasons, didn't include months! So we read 50 grids instead of 600!!! That may have had something to do with it. I also noticed, as I was correcting THAT, that I reopened the DTR and CLD data files when I should have been opening the bloody station files!! I can only assume that I was being interrupted continually when I was writing this thing. Running with those bits fixed improved matters somewhat, though now there's a problem in that one 5-degree band (10S to 5S) has no stations! This will be due to low station counts in that region, plus removal of duplicate values.
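
To spell out the bug being described (a sketch of the arithmetic only, not the actual cloudreg code): with monthly grids starting in 1901, scanning forward to 1951 means skipping 50 years x 12 = 600 grids, while a loop that counts only years skips just 50.

START_YEAR, TARGET_YEAR, MONTHS_PER_YEAR = 1901, 1951, 12

def grids_to_skip_buggy():
    # counts years only -> 50 grids skipped
    return TARGET_YEAR - START_YEAR

def grids_to_skip_fixed():
    # counts every month of every year -> 600 grids skipped
    return (TARGET_YEAR - START_YEAR) * MONTHS_PER_YEAR

assert grids_to_skip_buggy() == 50
assert grids_to_skip_fixed() == 600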

I've only actually read about 1000 lines of this, but started skipping through it to see if it was all like that when I found that second quote above somewhere way down in the file....

CLUSTER.... ****. This isn't science, it's grade school for people with big data sets.

......................

Christ. It gets better.

Quote:
So.. we don't have the coefficients files (just .eps plots of something). But what are all those monthly files? DON'T KNOW, UNDOCUMENTED. Wherever I look, there are data files, no info about what they are other than their names. And that's useless.. take the above example, the filenames in the _mon and _ann directories are identical, but the contents are not. And the only difference is that one directory is apparently 'monthly' and the other 'annual' - yet both contain monthly files.


Let's ignore the smoking gun in a legal sense, and think about the scientific method for just a moment....

I do believe this is more than one gun and there's some opaque mist coming from the "fun" end. I won't claim it's smoke, but holy ****, this is incredible.

..........

Quote:
The conclusion of a lot of investigation is that the synthetic cloud grids for 1901-1995 have now been discarded. This means that the cloud data prior to 1996 are static.

...
Eventually find fortran (f77) programs to convert sun to cloud:
...
There are also programs to convert sun parameters:
...
For 1901 to 1995 - stay with published data. No clear way to replicate process as undocumented.

For 1996 to 2002:
...
This should approximate the correction needed.

On we go.. firstly, examined the spc database.. seems to be in % x10.
Looked at published data.. cloud is in % x10, too.
First problem: there is no program to convert sun percentage to cloud percentage. I can do sun percentage to cloud oktas or sun hours to cloud percentage! So what the hell did Tim do?!! As I keep asking.

Examined the program that converts sun % to cloud oktas. It is
complicated! Have inserted a line to multiple the result by 12.5 (the result is in oktas*10 and ranges from 0 to 80, so the new result will range from 0 to 1000).

Next problem - which database to use? The one with the normals included is not appropriate (the conversion progs do not look for that line so obviously are not intended to be used on +norm databases). The non normals databases are either Jan 03 (in the '_ateam' directory) or Dec 03 (in the regular database directory). The newer database is
smaller! So more weeding than planting in 2003. Unfortunately both databases contain the 6190 normals line, just unpopulated. So I will go with the 'spc.0312221624.dtb' database, and modify the already-modified conversion program to process the 6190 line.

Then - comparing the two candidate spc databases:

spc.0312221624.dtb
spc.94-00.0312221624.dtb

I find that they are broadly similar, except the normals lines (which both start with '6190') are very different. I was expecting that maybe the latter contained 94-00 normals, what I wasn't expecting was that thet are in % x10 not %! Unbelievable - even here the conventions have not been followed. It's botch after botch after botch. Modified the
conversion program to process either kind of normals line.

Decided to go with the 'spc.94-00.0312221624.dtb' database, as it hopefully has some of the 94-00 normals in. I just wish I knew more.

Conversion was hampered by the discovery that some stations have a mix of % and % x10 values! So more mods to sp2cldp_m.for. Then conversion, producing cldfromspc.94000312221624.dtb. Copied the .dts file across
as is, not sure what it does unfortunately (or can't remember!).
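
The okta arithmetic buried in that quote is worth spelling out: cloud cover in oktas runs 0 to 8, the database stores oktas x10 (0 to 80), and the published cloud grids are in percent x10 (0 to 1000). Since 8 oktas is 100%, multiplying the stored value by 12.5 gives percent x10, which is the x12.5 line Harry says he inserted. A sketch of that scaling only, not CRU's actual sp2cldp_m code, and it does nothing about the stations that mix percent with percent x10:

def oktas_x10_to_percent_x10(oktas_x10):
    # 0..80 (oktas x10) -> 0..1000 (percent x10), because 8 oktas = 100%
    return oktas_x10 * 12.5

assert oktas_x10_to_percent_x10(0) == 0
assert oktas_x10_to_percent_x10(80) == 1000    # full cover: 8 oktas = 100.0%
assert oktas_x10_to_percent_x10(40) == 500     # half cover: 4 oktas = 50.0%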


Your tax dollars at work.

....................

I'm just absolutely STUNNED by this ****. **** the legal stuff. RIGHT HERE is the fraud.

Quote:
These are very promising. The vast majority in both cases are within 0.5 degrees of the published data. However, there are still plenty of values more than a degree out.

He's trying to fit the results of his programs and data to PREVIOUS results.


Quote:
TMP has a comforting 95%+ within half a degree, though one still wonders why it isn't 100% spot on..

Quote:
DTR fares perhaps even better, over half are spot-on, though about 7.5% are outside a half.

The percentages below show how accurate the results are:

Quote:
However, it's not such good news for precip (PRE):
...
Percentages: 13.93 25.65 11.23 49.20

21. A little experimentation goes a short way..

I tried using the 'stn' option of anomdtb.for. Not completely sure what it's supposed to do, but no matter as it didn't work:

Oh yeah, don't forget. He's getting 0.5 and 1 degree differences in results... while they are predicting temperatures to a supposed accuracy of tenths...
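
For what it's worth, the comparison being quoted can be sketched as binning each grid cell by how far the regenerated value sits from the published one. Reading the four percentages as bands (spot-on, within 0.5, within 1.0, worse) is an assumption made here for illustration; the file itself doesn't label them in this excerpt.

import numpy as np

def tolerance_bands(regenerated, published):
    diff = np.abs(np.asarray(regenerated, float) - np.asarray(published, float))
    bands = [
        (diff == 0).mean(),                     # spot-on
        ((diff > 0) & (diff <= 0.5)).mean(),    # out by up to half a unit
        ((diff > 0.5) & (diff <= 1.0)).mean(),  # out by up to one unit
        (diff > 1.0).mean(),                    # out by more than one unit
    ]
    return [round(100 * b, 2) for b in bands]   # four percentages, summing to ~100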

.........................

Ok, one last bit to finish that last one off:

Quote:
..knowing how long it takes to debug this suite - the experiment endeth here. The option (like all the anomdtb options) is totally undocumented so we'll never know what we lost.

22. Right, time to stop pussyfooting around the niceties of Tim's labyrinthine software suites - let's have a go at producing CRU TS 3.0! since failing to do that will be the
definitive failure of the entire project..

............................

Here's Harry, btw, just so you know who's writing this ****:

Mr. Ian (Harry) Harris
Dendroclimatology, climate scenario development, data manipulation and visualisation, programming

...............................

Ok, one thing still to say. After reading 4000 lines of this now, I actually feel sorry for the guy. He's trying his damnedest to straighten out somebody else's mess.

NOT something to crucify him for.

I sure would like to know what happened to Tim Mitchell and why he wasn't around to explain all his undocumented ****. And why he didn't document it.

And why such a ****ty programmer was running this.

And several other things, but still, my main point is:

Harry didn't make the mess, he's trying to clean it up. So don't think TOO bad of him. I really do feel sorry for him now, and there's a good chance that some of the things I've noted above have been fixed now....

I'm up to sometime after 2007 now in the file. He's pasting data from then right where I'm stopping. This isn't old. It started in 2006 or so.

.................................

More on this:

neuralnetwriter.cylo42.com

chiefio.wordpress.com