>> "They all get the same data, " They don't
Of course they do. There isn't that much data to be had. They pointed out in the article there are four major datasets (although there are a few more than that). That's it. There is no more. And frankly, we have a pretty good idea going forward about which is the best. But you can't arbitrarily compare data between datasets. They're different -- in some ways we know about, but in a ton of ways we don't know about.
For all intents and purposes, there just isn't that much data to work with. So, yeah, they "adjust" it and fuck around with it to try to transform it into something it isn't.
All these things you say there are "more of" are bullshit. I'm sure you believe it, but it is all bullshit. There aren't "more droughts." Even if there were more, you cannot just ASSUME it is because of GW. In CA and NV, it probably has a lot more to do with the fact that there have been massive population increases and little in the way of new reservoir capacity. In other places we have known water was going to be a major problem for many years.
It is apophenia, and nothing more. Just because you think you are seeing something substantive doesn't mean it is. When you are dealt four blackjacks in a row, it doesn't mean you're cheating. Sometimes, that just happens.
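A quick back-of-envelope on that blackjack anecdote (assuming a single 52-card deck, freshly shuffled each hand, and counting a "blackjack" as one ace plus one ten-valued card):

```python
# Probability of a natural blackjack off the top of a fresh single deck:
# 4 aces, 16 ten-valued cards, and the two cards can arrive in either order.
p_blackjack = (2 * 4 * 16) / (52 * 51)   # ~0.0483, about 1 in 21

# Four in a row from independent fresh decks:
p_four_in_a_row = p_blackjack ** 4        # ~5.4e-6, about 1 in 184,000
```

Rare for any one player, but with millions of hands dealt every day, somebody hits it routinely. That's the point: a low-probability event observed somewhere is not evidence of anything.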
One of the things I often write simulations for is video poker games. There are 2,598,960 possible five-card hands you can be dealt. Yet, when testing code for drawing poker hands, it is necessary to test hundreds of millions of hands to develop confidence in the fairness of the deal. We calculate the probability of there being a defect with each draw, and that probability does not get reduced to an acceptable level until you have a few hundred million draws. You just cannot tell by reference to anecdotes. They surprise you sometimes.
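A minimal sketch of why the sample sizes have to be that large (the dealing routine here is a stand-in, not anyone's actual game code): each card should land in a dealt hand about 5/52 of the time, and the observed deviation from that only shrinks like 1/sqrt(n), so a subtle bias stays buried in the noise until n is enormous.

```python
import math
import random

# Number of distinct 5-card hands from a 52-card deck: C(52, 5)
total_hands = math.comb(52, 5)  # 2,598,960

def deal_hand(rng, deck):
    # Hypothetical dealing routine: shuffle in place, take the top five.
    rng.shuffle(deck)
    return deck[:5]

# Crude fairness check: count how often each card appears in a hand.
rng = random.Random(1)
deck = list(range(52))
n = 200_000
counts = [0] * 52
for _ in range(n):
    for card in deal_hand(rng, deck):
        counts[card] += 1

# Each card's expected count is n * 5/52; the binomial standard deviation
# is ~sqrt(n * p * (1 - p)), so relative error falls off as 1/sqrt(n).
expected = n * 5 / 52
max_dev = max(abs(c - expected) / expected for c in counts)
```

Even at 200,000 deals the worst per-card deviation is still a couple of percent, which is why a defect small enough to matter in a casino game takes hundreds of millions of draws to rule out.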
Even if you showed there were more droughts, you have to then show they were caused by GW and not something else.
You're just not there, statistically.