Pastimes : G&K Investing for Curmudgeons


To: tekboy who wrote (6906)10/3/2000 12:20:41 PM
From: Mike Buckley
 
> Do you disagree with my basic point, or did you just not understand it?

I'll lay good odds that Apollo's answer is, "Yes."

--Mike Buckley



To: tekboy who wrote (6906)10/3/2000 12:26:58 PM
From: Dr. Id
 
I would guess that the probability that the two of you are boring the rest of the thread to tears is significant at the .01 level.

Dr.Id@ifIwantedtodealwithstatisticsIdbeatwork.com



To: tekboy who wrote (6906)10/3/2000 2:13:07 PM
From: Apollo
 
Tekster, that was a very brave reply on statistics. Nice job trying to deflect the truth with that Math/statistics hyperlink. True magic: the magician gets the audience to keep their eyes on one hand while the other hides something.

> Actually, I haven't really been discoursing on multivariate analysis

Neither have I.

> And the Bonferroni method for adjusting statistical significance has nothing to do with "multiple variables" for analysis. Perhaps you were thinking of ANOVA, or analysis of variance?

No, the Bonferroni method is used to adjust the determination of statistical significance when the investigators take multiple looks at, or comparisons of, the data. As you pointed out (sorta), if one looks at a question many times, or conducts the same test many times, any given finding may be just a random event rather than a dramatic new scientific truth. Normally, the hard sciences use P < 0.05, meaning the chances of seeing such a result by random chance alone are less than 5%. Being the expert on Bonferroni that I am sure you are, you know that making multiple comparisons requires that 0.05 be divided by the number of comparisons to derive a new, more rigorous threshold for statistical significance (i.e., 0.05/# of comparisons). So, for example, dredging the data in 10 different ways would require one to achieve a P value of 0.005 before declaring a dramatic new discovery.

> Do you disagree with my basic point, or did you just not understand it?

No, I understood what you were trying to get at; I just didn't think the answer was very clear. Besides, I was taking a break to be a little curmudgeonly.

> it would be interesting to explore whether there's a difference between the natural sciences and social sciences in this regard. It's so much more difficult to come up with real intellectual progress in the latter that I am extraordinarily skeptical of people in those fields who use mathematical correlations without theoretical analysis to make a point.

Aha! We agree. Seriously, my personal take is that the "social sciences" are important, but a lack of rigorous evaluation with proper statistics and experimental design can sometimes undermine investigative efforts. And I think this lack of rigor may be more common in the social sciences than in the physical sciences.

I remember that all-star flakcatcher, Sir Dancelot, once ventured on the Main Thread, around last December, that the "soft" sciences weren't as precise as the "hard" sciences. I agreed roughly with his point, but it was drowned out in all the flames that quickly ensued.
