If you are going to do technical trading, what you want is to find patterns that hold most of the time. What you don't want is to simply memorize the past. If you have too many degrees of freedom relative to the data, the data just get memorized. When you trade by hand, you look for simple patterns. You don't develop rules like "if GNP goes up .01 and CPI goes up .01 and the yen goes up .01 or .03 then buy, but if the yen stays even or goes up .02 then sell". Simple models are forced to adopt more general rules that don't exactly fit the data but may identify real patterns, while models with too many degrees of freedom can come up with ridiculous rules that exactly fit the past data yet have no predictive value whatsoever.
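Here is a quick sketch of the degrees-of-freedom point, using an ordinary polynomial fit as a stand-in for a trading model (the data are made up, and a line-plus-noise process is just a convenient toy):

```python
import numpy as np

np.random.seed(0)

# "Past" data: a simple linear trend plus noise, 10 observations.
x_train = np.linspace(0.0, 1.0, 10)
y_train = 0.5 * x_train + np.random.normal(0.0, 0.3, size=10)

# "Future" data from the same process, beyond the training range.
x_test = np.linspace(1.1, 1.5, 5)
y_test = 0.5 * x_test + np.random.normal(0.0, 0.3, size=5)

# Few degrees of freedom: a straight line.
simple = np.polyfit(x_train, y_train, deg=1)
# As many degrees of freedom as data points: memorizes the noise.
complex_ = np.polyfit(x_train, y_train, deg=9)

def rmse(coeffs, x, y):
    """Root-mean-square error of a polynomial fit on data (x, y)."""
    return float(np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2)))

print("train error  simple: %.4f  complex: %.4f"
      % (rmse(simple, x_train, y_train), rmse(complex_, x_train, y_train)))
print("test error   simple: %.4f  complex: %.4f"
      % (rmse(simple, x_test, y_test), rmse(complex_, x_test, y_test)))
```

The degree-9 fit scores nearly perfectly on the past and badly on the future; the straight line does the opposite, which is exactly the trade you want.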
I remember, years ago, someone using Fourier analysis to break stock behavior down into waves. The only problem is that there are an infinite number of different patterns that can exactly describe the past, each with a different projection for the future. No matter how many days of data you put in, you could exactly fit the data with multiple functions, some with up projections and some with down projections. Obviously this approach had no merit, and I haven't seen it around in years.
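The infinite-fits problem is easy to demonstrate: add any function that is zero at every sample point to a perfect fit and you get another perfect fit with a completely different future. A toy illustration (daily samples at the integers; the 5*sin term is an arbitrary choice):

```python
import numpy as np

# Two "models" of the same price history, sampled once per day.
def model_up(x):
    return x  # projects the trend continuing up

def model_wild(x):
    # Agrees with model_up at every integer sample, since sin(2*pi*k) = 0,
    # but behaves completely differently between and beyond the samples.
    return x + 5.0 * np.sin(2.0 * np.pi * x)

days = np.arange(10)
# Both fit the "past" exactly...
print(np.max(np.abs(model_up(days) - model_wild(days))))  # ~0
# ...but disagree badly a quarter-day into the "future".
print(model_up(10.25), model_wild(10.25))
```

Both curves reproduce every observed day exactly, yet one projects 10.25 and the other 15.25, and nothing in the past data can tell you which to believe.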
Let's put it this way. If I told you that in the last three years I had seen a situation <whatever> 20 times that was kind of like the current one, and that 18 of those times the stock had moved up over the next 10 days, would you go long? Now suppose I told you that I had seen a situation like <whatever> only once before, and the stock went up. Which gives you more confidence? I think this makes it clear why models that memorize data do not generalize well.
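One way to make the 18-of-20 versus 1-of-1 comparison concrete is to ask how likely each record would be by pure chance (using a fair-coin null, which is of course a simplification of real markets):

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """Probability of k or more successes in n independent trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 18 or more ups out of 20: very unlikely to be luck.
print(p_at_least(18, 20))  # ~0.0002
# 1 up out of 1: a coin flip tells you nothing.
print(p_at_least(1, 1))    # 0.5
```

A rule backed by 20 past cases has roughly a 1-in-5000 chance of looking that good by accident; a rule backed by one case is literally a coin flip. A model that memorizes is, in effect, building thousands of one-case rules.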
TDNN models are more complicated than BP models, and thus require more data before they are forced to generalize. CATNN models are more complicated still. With enough data these models would probably be fine, but I can tell you from my experience that 3 years of data is not enough. These models might also work better if you pared the inputs down to only 5 or 6. Remember that each additional input also increases the degrees of freedom.
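To see how fast extra inputs inflate the degrees of freedom, count the weights in a plain BP (feed-forward) net with one hidden layer; the layer sizes here are just for illustration:

```python
def bp_weight_count(n_inputs, n_hidden, n_outputs=1):
    # Each hidden unit has one weight per input plus a bias;
    # each output unit has one weight per hidden unit plus a bias.
    return (n_inputs + 1) * n_hidden + (n_hidden + 1) * n_outputs

# Paring inputs from 20 down to 6 cuts the free parameters sharply.
print(bp_weight_count(20, 10))  # 221
print(bp_weight_count(6, 10))   # 81
```

Every input you drop removes one weight per hidden unit, so trimming inputs is one of the cheapest ways to force the model to generalize.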
Temporal SOMs are also quite good at memorizing patterns, so I would recommend avoiding them. The GRNN models are relatively simple, so I don't know why they would be worse than BP.
Carl |