Politics : The Exxon Free Environmental Thread

From: Wharf Rat, 9/6/2025 10:20:37 AM
 
Bad Science on Sea Level | Open Mind
Posted on September 6, 2025 |

In a recent report about climate change from the U.S. Department of Energy (DoE), the authors state that “U.S. tide gauge measurements reveal no obvious acceleration beyond the historical average rate of sea level rise.” This is false. Judith Curry has defended this statement by pointing to Voortman & De Vos (2025), who analyze tide gauge data and state that


“Statistical tests were run on all selected datasets, taking acceleration of sea level rise as a hypothesis. In both datasets, approximately 95% of the suitable locations show no statistically significant acceleration of the rate of sea level rise.”


This too is false, and I believe I know how they came to this mistaken conclusion.

Consider the tide gauge record from Cedar Key, Florida (yearly values from PSMSL, the Permanent Service for Mean Sea Level) from 1950 to the present:

[Figure: Cedar Key, Florida yearly mean sea level from PSMSL, 1950 to the present]

Two of the known influences on sea level which do not contribute to the trend are the lunar nodal cycle (period 18.61 years) and the perigean cycle (period 8.85 years). Their influence is relatively small, but like Voortman & De Vos I will account for them; adjusting for their impact gives yearly values which — if there’s no acceleration of sea level — should be indistinguishable (statistically) from a straight line plus stationary noise:

[Figure: Cedar Key yearly sea level after adjusting for the lunar nodal and perigean cycles]

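For readers who want to try this step themselves, here is a minimal sketch of one way to remove the two cycles by least squares regression. It assumes the yearly PSMSL values are already loaded into numpy arrays t (decimal years) and msl (millimetres); the function name and details are illustrative, not taken from the post.

```python
# Minimal sketch (assumptions noted above): fit sinusoids at the lunar nodal
# (18.61 yr) and perigean (8.85 yr) periods alongside an intercept and linear
# trend, then subtract only the fitted cycles from the yearly values.
import numpy as np

def remove_tidal_cycles(t, msl, periods=(18.61, 8.85)):
    cols = [np.ones_like(t), t]                  # intercept + linear trend
    for P in periods:
        cols.append(np.cos(2 * np.pi * t / P))   # cosine component of the cycle
        cols.append(np.sin(2 * np.pi * t / P))   # sine component of the cycle
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, msl, rcond=None)
    cycles = X[:, 2:] @ beta[2:]                 # fitted nodal + perigean signal only
    return msl - cycles                          # adjusted yearly values
```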
I tested that idea by fitting a continuous piecewise linear function (PLF), which allows for more than just a straight line because it includes a change in slope at a time chosen by changepoint analysis:

[Figure: continuous piecewise linear fit, with changepoint, to the adjusted Cedar Key data]

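A simplified sketch of such a fit is below: a continuous piecewise linear model with one changepoint, chosen by scanning candidate break times for the lowest residual sum of squares. This is a stand-in under my own assumptions, not necessarily the exact changepoint procedure used in the post.

```python
# Sketch: continuous piecewise linear fit y = a + b*t + c*max(t - tau, 0),
# scanning the changepoint time tau over interior data points.
import numpy as np

def fit_plf(t, y, margin=5):
    best = None
    for tau in t[margin:-margin]:                 # keep several points on each side
        hinge = np.maximum(t - tau, 0.0)
        X = np.column_stack([np.ones_like(t), t, hinge])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ beta) ** 2))
        if best is None or rss < best[0]:
            best = (rss, tau, beta)               # keep the best-fitting changepoint
    rss, tau, (a, b, c) = best
    return {"changepoint": tau, "rate_before": b, "rate_after": b + c, "rss": rss}
```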
Testing this model statistically presents some unique challenges because there is so much freedom to choose the changepoint time, and because the noise itself is inherently autocorrelated. But Monte Carlo simulation enables us to determine correct p-values, even with autocorrelated noise. For the data from Cedar Key I get a p-value of 0.0015, less than 0.05 (the critical value for 95% confidence), so acceleration is definitely statistically significant (at 99.85% confidence, no less!).

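One way to set up such a Monte Carlo test is sketched below (my own construction, reusing the fit_plf sketch above): fit the null straight-line model, treat the residuals as AR(1) noise (an assumption; the post only says the noise is autocorrelated), then simulate many null series and compare the observed improvement from adding a changepoint against the simulated distribution.

```python
# Sketch: Monte Carlo p-value for the changepoint test under a null of
# straight line plus AR(1) noise. Requires fit_plf from the sketch above.
import numpy as np

def mc_pvalue(t, y, n_sim=2000, seed=0):
    rng = np.random.default_rng(seed)
    X0 = np.column_stack([np.ones_like(t), t])          # null model: straight line
    b0, *_ = np.linalg.lstsq(X0, y, rcond=None)
    resid = y - X0 @ b0
    rss0 = np.sum(resid ** 2)
    phi = np.corrcoef(resid[:-1], resid[1:])[0, 1]      # lag-1 autocorrelation
    sigma = np.std(resid) * np.sqrt(1 - phi ** 2)       # AR(1) innovation std dev
    stat_obs = rss0 - fit_plf(t, y)["rss"]              # improvement from changepoint
    exceed = 0
    for _ in range(n_sim):
        noise = np.empty(len(y))
        noise[0] = rng.normal(0.0, np.std(resid))       # draw from stationary distribution
        for i in range(1, len(y)):
            noise[i] = phi * noise[i - 1] + rng.normal(0.0, sigma)
        y_sim = X0 @ b0 + noise                         # simulated null series
        b_sim, *_ = np.linalg.lstsq(X0, y_sim, rcond=None)
        rss0_sim = np.sum((y_sim - X0 @ b_sim) ** 2)
        if rss0_sim - fit_plf(t, y_sim)["rss"] >= stat_obs:
            exceed += 1
    return (exceed + 1) / (n_sim + 1)                   # Monte Carlo p-value
```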
Having detected acceleration (more correctly, having rejected the single-straight-line model), I can scrutinize in more detail how the rate of sea level rise at this location has changed over time. The PLF model from the changepoint analysis gives one estimate, with not one rate but two. I also applied a lowess smooth, and my program computes slopes (and their uncertainties) as well as values. Here is how the rate has changed over time, according to lowess (red line with pink shading for the 95% confidence interval) and the PLF model (blue line with light blue shading):

[Figure: rate of sea level rise at Cedar Key over time, from lowess (red line, pink 95% CI shading) and the PLF model (blue line, light blue shading)]

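A rough sketch of the lowess step, assuming the statsmodels implementation: smooth the adjusted series and differentiate the smooth numerically to get a local rate of rise. The uncertainties (the shading in the figure) are not reproduced here.

```python
# Sketch: lowess smooth plus a numerical estimate of its slope (rate of rise).
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def lowess_rate(t, y, frac=0.5):
    smoothed = lowess(y, t, frac=frac, return_sorted=True)  # columns: t, fitted value
    ts, ys = smoothed[:, 0], smoothed[:, 1]
    rate = np.gradient(ys, ts)                               # slope of the smooth (mm/yr)
    return ts, ys, rate
```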
Acceleration to a higher rate seems to have happened recently (around 2004 according to the changepoint analysis), and it isn’t just statistically significant, it’s sizeable. The current rate is estimated at 9 mm/year, about 3 feet per century.

I didn’t just apply this analysis to one tide gauge station. I surveyed all stations in the conterminous USA (the “lower 48 states”) with data available from PSMSL, and 44 of them had enough data covering a long enough time span to meet my requirements for inclusion. Of all these stations, the one with the lowest p-value (a mere 0.0015) was Cedar Key.

That means I didn’t just have one chance to find statistical significance, I had 44 chances. With so many tests, there’s a much better chance of getting an apparently “significant” result just by chance. Ordinarily, a test at 95% confidence requires a p-value below 0.05, but with 44 chances to hit the target we need to adjust that.

The “old” way is the Bonferroni correction. We take the critical p-value we’re using (0.05 for 95% confidence) and divide it by the number of tests (44) to get a modified critical value of 0.00114. The result for Cedar Key is not below that, so using this method we do not find statistically significant acceleration at Cedar Key. If we apply that critical value to all the stations in the conterminous USA, none of them reach significance. Does that mean we have not found acceleration after all?

No it doesn’t, because in this case the “old way” isn’t good enough. Much better is the Benjamini-Hochberg method (if you’re interested in the details you can read about it here), which confirms that not only is the result at Cedar Key definitely significant at 95% confidence, but so are at least 5 other tide gauge stations in my survey.

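To make the difference between the two corrections concrete, here is a small sketch on a hypothetical set of 44 p-values (only Cedar Key’s 0.0015 is from the post; the rest are made up for illustration). Bonferroni compares every p-value to 0.05/44, while Benjamini-Hochberg sorts the p-values and rejects down to the largest rank k whose p-value is at or below k/44 times 0.05, which is how Cedar Key can count as significant when several other stations also have small p-values.

```python
# Sketch: Bonferroni vs. Benjamini-Hochberg on m = 44 tests.
# Only Cedar Key's p-value (0.0015) is from the post; the rest are hypothetical.
import numpy as np

def bonferroni(pvals, alpha=0.05):
    p = np.asarray(pvals, dtype=float)
    return p <= alpha / len(p)                      # reject only if below alpha/m

def benjamini_hochberg(pvals, q=0.05):
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m        # rank-dependent thresholds k*q/m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])            # largest rank meeting its threshold
        reject[order[:k + 1]] = True                # reject everything up to that rank
    return reject

pvals = [0.0015, 0.002, 0.003, 0.004, 0.005, 0.006] + [0.5] * 38
print(bonferroni(pvals).sum())           # 0 rejections: 0.0015 > 0.05/44 = 0.00114
print(benjamini_hochberg(pvals).sum())   # 6 rejections at false discovery rate 0.05
```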


Voortman and De Vos applied their test (different from mine but with the same purpose) to all the stations in the data holdings of PSMSL which had enough data to meet their criteria for inclusion. When I apply my own criteria to the entire PSMSL (not just American stations), my list includes 211 stations; their criteria yield 204.

I believe they applied the Bonferroni correction to their tests of PSMSL data (because they say so) to get an “effective p-value” of 0.05/204 = 0.000245. This is not an appropriate procedure for this analysis — not even close — and it’s no wonder that they only find significance at a tiny number of stations, none in the USA. Their results are wrong.