Recent Research Shows Climate Models Are Mostly “Black Box” Fudging, Not Real Science
By P Gosselin on 24 March 2017
Climate models fail on the test stand
By Dr. Sebastian Lüning and Prof. Fritz Vahrenholt [German text translated/edited by P Gosselin]
Twenty years ago climate models were celebrated as a huge breakthrough. Finally reality could be reproduced on computers, which were becoming ever more powerful and faster. Everyone believed that only minor adjustments were still needed and the target would soon be reached. But when the computer-crunched results were finally compared to reality, huge unexplained discrepancies appeared.
In parallel, paleoclimatologists produced increasingly robust reconstructions of the actual climate history, which made the problems with the computer models even more glaring. Month after month, new papers appeared exposing the major problems facing the climate modelers. Model runs were preferably started in the middle of the Little Ice Age, around 1800, because the subsequent warming seemed to fit well with the rise in CO2 emissions.
But if one goes back 1000 years, the model technology falls apart.
[ That's why Michael Mann tried to get rid of the MWP: it didn't fit the models, so the claim was that it never happened. ]
In March 2016 Fabius Maximus pointed out the obvious: The models have to be more strictly tested and calibrated before they can be approved for modeling the future.
We can end the climate policy wars: demand a test of the models […] The policy debate turns on the reliability of the predictions of climate models. These can be tested to give “good enough” answers for policy decision-makers so that they can either proceed or require more research. I proposed one way to do this in Climate scientists can restart the climate change debate & win: test the models! — which includes a long list of cites (with links) to the literature about this topic. This post shows that such a test is in accord with both the norms of science and the work of climate scientists. […] Models should be tested vs. out-of-sample observations to prevent “tuning” the model to match known data (even inadvertently), for the same reason that scientists run double-blind experiments. The future is the ideal out-of-sample data, since model designers cannot tune their models to it. Unfortunately…
“…if we had observations of the future, we obviously would trust them more than models, but unfortunately observations of the future are not available at this time.” — Thomas R. Knutson and Robert E. Tuleya, note in Journal of Climate, December 2005.
There is a solution. The models from the first four IPCC assessment reports can be run with observations made after their design (from their future, our past) — a special kind of hindcast.”
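To make the proposed out-of-sample test concrete, here is a minimal sketch in Python of how an archived projection could be scored against observations recorded after the model was designed. The function name and all numbers are placeholder assumptions, not real IPCC or observational values; the idea is simply to report bias, RMSE, and how often the observations land inside the projection's stated uncertainty envelope.

```python
# Minimal sketch of the out-of-sample "hindcast" test described above:
# score a projection archived before the evaluation period against
# observations recorded after the model was designed. All data below
# are placeholder values, not real IPCC or observational numbers.
import numpy as np

def score_projection(projected, observed, stated_uncertainty):
    """Return bias, RMSE, and the fraction of observations that fall
    inside the projection's stated uncertainty envelope."""
    projected = np.asarray(projected, dtype=float)
    observed = np.asarray(observed, dtype=float)
    bias = float(np.mean(projected - observed))
    rmse = float(np.sqrt(np.mean((projected - observed) ** 2)))
    inside = np.abs(projected - observed) <= stated_uncertainty
    return bias, rmse, float(np.mean(inside))

# Hypothetical example: a projection published in 1995, scored on
# 1996-2015 global temperature anomalies (degrees C vs. some baseline).
years = np.arange(1996, 2016)
projected = 0.30 + 0.025 * (years - 1995)   # placeholder projected trend
observed = 0.30 + 0.016 * (years - 1995) + np.random.default_rng(0).normal(0.0, 0.08, years.size)

bias, rmse, coverage = score_projection(projected, observed, stated_uncertainty=0.15)
print(f"bias={bias:+.3f} C  rmse={rmse:.3f} C  within-envelope={coverage:.0%}")
```

The point of scoring only the post-publication years is that the modelers could not have tuned to them, which is exactly what makes the test “out of sample”.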
Another major point of criticism of climate models is the so-called “tuning”. Here climate models are adjusted until they produce more or less the desired result. This takes place mostly behind closed doors, with little transparency. Hourdin et al. 2016 described the problem in detail in an overview paper. Judith Curry sums it up best:
Two years ago, I did a post on Climate model tuning, excerpts: “Arguably the most poorly documented aspect of climate models is how they are calibrated, or ‘tuned.’ I have raised a number of concerns in my Uncertainty Monster paper and also in previous blog posts. The existence of this paper highlights the failure of climate modeling groups to adequately document their tuning/calibration and to adequately confront the issues of introducing subjective bias into the models through the tuning process.”
Think about it for a minute. Every climate model manages to accurately reproduce the 20th century global warming, in spite of the fact that the climate sensitivity to CO2 among these models varies by a factor of two. How is this accomplished? Does model tuning have anything to do with this?”
Read the entire post at Climate Etc.
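Curry's question about tuning and sensitivity can be illustrated with a toy calculation. The sketch below is purely illustrative and is not any modeling group's actual procedure: it uses a zero-dimensional equilibrium energy-balance relation, ΔT = S · F_net / F_2xCO2, with assumed round-number forcings, to show that two models whose climate sensitivities differ by a factor of two can both match the same observed 20th-century warming if a poorly constrained aerosol forcing term is adjusted to compensate.

```python
# Toy illustration (not any modeling group's actual procedure) of how a
# tunable aerosol forcing can let models with very different sensitivities
# reproduce the same observed 20th-century warming.
# Zero-dimensional equilibrium approximation: delta_T = S * F_net / F_2XCO2.
F_2XCO2 = 3.7           # forcing from doubled CO2, W/m^2 (standard value)
F_GHG = 2.6             # assumed greenhouse-gas forcing over the century, W/m^2
OBSERVED_WARMING = 0.8  # assumed observed warming, degrees C

def required_aerosol_forcing(sensitivity):
    """Aerosol forcing (W/m^2) needed so the model's equilibrium warming
    matches the observed value, given its climate sensitivity (C per 2xCO2)."""
    f_net_needed = OBSERVED_WARMING * F_2XCO2 / sensitivity
    return f_net_needed - F_GHG

for s in (2.0, 4.0):    # sensitivities differing by a factor of two
    aer = required_aerosol_forcing(s)
    print(f"sensitivity {s:.1f} C -> tune aerosol forcing to {aer:+.2f} W/m^2")
```

The more sensitive model simply needs a stronger assumed aerosol cooling; both end up reproducing the historical record equally well, which is why agreement with the 20th century by itself says little about which sensitivity is right.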
In November 2016, writing in the renowned journal Science, Paul Voosen described the need to end the secrecy and open up the black boxes to allow some public transparency:
Climate scientists open up their black boxes to scrutiny
“Climate models render as much as they can by applying the laws of physics to imaginary boxes tens of kilometers a side. But some processes, like cloud formation, are too fine-grained for that, and so modelers use “parameterizations”: equations meant to approximate their effects. For years, climate scientists have tuned their parameterizations so that the model overall matches climate records. But fearing criticism by climate skeptics, they have largely kept quiet about how they tune their models, and by how much. That is now changing. By writing up tuning strategies and making them publicly available for the first time, groups hope to learn how to make their predictions more reliable—and more transparent.”
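For readers unfamiliar with the term, the sketch below shows what a “parameterization” with a tunable constant looks like. It uses a textbook Sundqvist-type cloud-fraction formula as a stand-in; it is not taken from the Science article or from any particular model, and the constant rh_critical is an assumed name for the kind of adjustable knob that gets “tuned”.

```python
# Toy example of a "parameterization": a sub-grid process (here, cloud
# fraction) represented by a simple closed-form equation with a tunable
# constant. Textbook Sundqvist-type formula, not any particular model's scheme.
import math

def cloud_fraction(relative_humidity, rh_critical=0.8):
    """Diagnose grid-box cloud fraction from grid-mean relative humidity.
    rh_critical is the kind of adjustable constant that gets 'tuned'."""
    if relative_humidity <= rh_critical:
        return 0.0
    if relative_humidity >= 1.0:
        return 1.0
    return 1.0 - math.sqrt((1.0 - relative_humidity) / (1.0 - rh_critical))

# Shifting the tuning constant changes the simulated cloudiness, and hence
# the model's radiation balance, without any change to the underlying physics.
for rh_crit in (0.7, 0.8, 0.9):
    print(rh_crit, round(cloud_fraction(0.85, rh_crit), 3))
```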
See more at: notrickszone.com
Climate scientists are famous for their failed predictions: Children won't know what snow is, we'll have more Katrinas, the polar bears will die off, the bees will die off, the sea ice will be gone by 2000 or 2005 or 2010 or 2020 ...