It seems that many people start running into problems when they take statistical correlations and jump to "cause and effect" relationships that may or may not exist. Comments?
This is a big problem with statistics of any kind! Or to be more precise ... a big problem with the misuse of statistics, whether deliberate or not. Of the latter variety I believe it was the British Prime Minister Disraeli who said ... "There are three kinds of lies: LIES, DAMNED LIES, and STATISTICS!"
Of course at the other end of the spectrum there are statisticians themselves. I have never considered myself a statistician but I am a professional mathematician who frequently employs statistics and hence might conceivably be counted among them, at least for the purposes of the following quotation:
"There are three kinds of statisticians: those who can count, and those who can't!"
There is an excellent little book that I recommend to everybody. The author is one Darrell Huff and the title is "How To Lie With Statistics". It is a beautiful little book which demonstrates the various statistical fallacies into which one can fall. Here are just two examples from the book:
(1) A motorist stops by a roadside stand which offers fresh grilled rabbit burgers. He buys one and bites into it. He then says to the stand owner, "This is the most delicious rabbit burger I have ever had. How can you afford to sell them for only fifty cents per burger?"
The stand owner replies, "That's because they are not 100% rabbit. I mix in some elephant. But they are 50% rabbit ... one rabbit to one elephant!"
(2) On the front page of the book one finds the phrase "187 years of experience went into the publication of this book!"
This phrase points to a footnote which reads, "the combined ages of the author, editor, typesetter, and publisher."
Just because things happen to be correlated does not necessarily mean that there is a cause and effect relationship between them. It could be that they are both independent results of an unmentioned and possibly unknown cause. For example, I recall reading once about the close correlation between the increases in the salaries of ministers (of a certain protestant denomination whose members were reputed to abstain from alcohol) and the increases in the price of rum. Obviously both were the result of inflation, but one could present the statistic in such a way as to suggest that the ministers were secretly imbibing and hence driving up the price of rum! This would be a flagrant misuse of statistics but would seem to have some validity to those who were not well versed in the limitations of statistics.
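For anyone who wants to see this effect with their own eyes, here is a minimal sketch in Python. The numbers are made up for illustration (the salary and rum series are just a shared inflation trend plus independent noise), but it shows how two series with no causal link at all can look almost perfectly correlated, and how the "correlation" vanishes once the common cause is removed:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 120  # say, monthly observations over ten years (hypothetical data)

# The hidden common cause: a steady inflation trend.
inflation = np.linspace(0, 50, n)

# Two series that have nothing to do with each other,
# except that both ride the same inflation trend.
salaries = 100 + inflation + rng.normal(0, 2, n)  # ministers' salaries
rum = 10 + inflation + rng.normal(0, 2, n)        # rum prices

# Raw correlation looks like a "cause and effect" smoking gun.
raw_corr = np.corrcoef(salaries, rum)[0, 1]

# Remove the common cause and the relationship evaporates.
detrended_corr = np.corrcoef(salaries - inflation, rum - inflation)[0, 1]

print(f"raw correlation:       {raw_corr:.3f}")
print(f"detrended correlation: {detrended_corr:.3f}")
```

The raw correlation comes out near 1, while the detrended correlation hovers near zero ... exactly the ministers-and-rum situation.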
I think that many investors fall victim to the same kind of misinformation when they accept statistical information as evidence of something that may not in fact be true. I believe it was Will Rogers who said, "It's not what we don't know that hurts us. It's what we do know ... that ain't so!"
BETA for example is supposed to be, among other things, a measure of a stock's volatility. Thus all other things being equal a high BETA stock might be a good AIM candidate. However, as I mentioned in a previous post, volatility should be measured more by frequency and amplitude. Also BETA only measures volatility with respect to the overall market. A stock with a high BETA might be volatile with respect to the overall market, but if the overall market is not very volatile then the stock itself might not be very volatile in an absolute sense. Thus it might not be a very good AIM candidate.
The long and short of this is ... BETA is a very good screening tool in general but has no absolute meaning in and of itself especially as it pertains to stock selection under AIM.
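Since BETA is just the covariance of the stock's returns with the market's returns divided by the variance of the market's returns, it is easy to sketch the point in Python. The return series below are fabricated purely for illustration: stock A doubles every move of a very quiet market (BETA of 2), while stock B ignores the market entirely (BETA near 0) but swings three percent a day on its own:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 252  # one hypothetical year of daily returns

market = rng.normal(0, 0.005, n)   # a quiet market: 0.5% daily std
stock_a = 2.0 * market             # tracks the market exactly, doubled
stock_b = rng.normal(0, 0.03, n)   # independent of the market, 3% daily std

def beta(stock, mkt):
    # BETA = Cov(stock, market) / Var(market)
    return np.cov(stock, mkt, ddof=1)[0, 1] / np.var(mkt, ddof=1)

print(f"stock A: beta = {beta(stock_a, market):5.2f}, "
      f"absolute volatility = {stock_a.std():.3%}")
print(f"stock B: beta = {beta(stock_b, market):5.2f}, "
      f"absolute volatility = {stock_b.std():.3%}")
```

Stock A shows the high BETA, yet stock B is roughly three times as volatile in the absolute sense that matters for AIM ... which is exactly why BETA works as a screen but not as the final word.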
When you asked for comments ... you in effect said to me, "a penny for your thoughts." Well I have just given you my "two cents worth!" What happened to the other penny? I'm sure we could solve that one with a little statistical analysis! ;-)
Barry