I saw something a while back where someone wrote, to the effect of: "There's a tendency with neural networks to throw everything available at the network, kitchen sink included, and let the net sort it out... but at some point the network quits learning and starts memorizing, which does away with its ability to generalize."
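To make that learning-vs-memorizing turn concrete, here's a minimal sketch (plain Python/numpy, made-up data, nothing to do with Neurostock's internals): hold some data out, and watch the held-out error start climbing while the training error keeps falling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 20 inputs, but only the first two carry any signal.
X = rng.normal(size=(200, 20))
y = ((X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=200)) > 0).astype(float)[:, None]
Xtr, ytr, Xva, yva = X[:150], y[:150], X[150:], y[150:]

# One small hidden layer, sigmoid activations.
W1 = rng.normal(scale=0.5, size=(20, 16))
W2 = rng.normal(scale=0.5, size=(16, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(p, t):
    p = np.clip(p, 1e-7, 1 - 1e-7)  # avoid log(0)
    return -np.mean(t * np.log(p) + (1 - t) * np.log(1 - p))

lr = 1.0
best_val = np.inf
for epoch in range(3000):
    # Forward pass on the training set.
    h = sigmoid(Xtr @ W1)
    p = sigmoid(h @ W2)
    # Backprop for sigmoid output + cross-entropy loss.
    dz2 = (p - ytr) / len(ytr)
    dW2 = h.T @ dz2
    dz1 = (dz2 @ W2.T) * h * (1 - h)
    dW1 = Xtr.T @ dz1
    W2 -= lr * dW2
    W1 -= lr * dW1
    if epoch % 100 == 0:
        tr = loss(p, ytr)
        val = loss(sigmoid(sigmoid(Xva @ W1) @ W2), yva)
        # Once validation loss rises while training loss falls,
        # the net is memorizing rather than generalizing.
        marker = "  <-- memorizing" if val > best_val else ""
        best_val = min(best_val, val)
        print(f"epoch {epoch:4d}  train {tr:.3f}  val {val:.3f}{marker}")
```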
Anyway, the point of the article was that if a net can be trained on a few reliable indicators, rather than a lot of inputs that may or may not have much effect on the outcome, then its predictions will often track real-world systems more accurately.
As best I remember, the article suggested monitoring the weight structure to see which inputs had the most influence on the output.
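Here's a rough sketch of that weight-inspection idea, again with made-up data and a deliberately simple single-layer model: after training, rank inputs by weight magnitude, and inputs whose weights stay near zero are candidates to drop.

```python
import numpy as np

rng = np.random.default_rng(1)
n_inputs = 10

# Hypothetical data: only inputs 0 and 3 actually drive the target.
X = rng.normal(size=(500, n_inputs))
y = (2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.2 * rng.normal(size=500) > 0).astype(float)

# Single-layer logistic model trained by plain gradient descent.
w = np.zeros(n_inputs)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)

# Rank inputs by |weight| -- a crude importance score.  Inputs 0 and 3
# should come out on top; the noise inputs should sit near zero.
for i in np.argsort(-np.abs(w)):
    print(f"input {i:2d}: |w| = {abs(w[i]):.3f}")
```

With a hidden layer, the analogous crude score would be the summed absolute weights fanning out of each input, but the idea is the same.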
The question with Neurostock, I suppose, is whether we have access to the weight structure at all, and whether we know exactly how the input training sets are fed to the training loop.
More food for thought.
Bill