Strategies & Market Trends : NeuroStock

To: Len Giammetta who wrote (419)11/27/1998 6:24:00 PM
From: Bill Scoggin  Read Replies (1) of 805
 
Jay,

From what I've read and studied, the goal of most neural net applications is to find a net that produces suitable results from a test sample of data that is taken out of the larger training set of data. Once a set of weights is found that does produce fairly accurate (it does not HAVE to be perfect) results when presented with the test data (which it has never seen before), training is stopped. This becomes a workable "model" that SHOULD be accurate as long as the input data stays within ranges that the training set covered.
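
A minimal sketch of that hold-out procedure in Python (the function names, interface, and the 0.05 error target here are my own illustration, not NeuroStock internals): carve out a test sample the net has never seen, then train only until the held-out error is acceptable rather than perfect.

```python
import random

def split_data(rows, test_fraction=0.2, seed=42):
    """Shuffle and hold out a test sample the net has never seen."""
    rng = random.Random(seed)
    rows = rows[:]                      # copy so the caller's list is untouched
    rng.shuffle(rows)
    cut = int(len(rows) * (1 - test_fraction))
    return rows[:cut], rows[cut:]       # (training set, held-out test set)

def train_until_acceptable(net, train_set, test_set,
                           max_epochs=1000, target_error=0.05):
    """Stop as soon as held-out error is 'fairly accurate'; it does not
    have to be perfect, and chasing perfection risks overfitting.
    (`net` is a hypothetical object with train_one_epoch/error_on.)"""
    for _ in range(max_epochs):
        net.train_one_epoch(train_set)
        if net.error_on(test_set) <= target_error:
            break                       # good enough: freeze the weights
    return net
```

The key point the paragraph makes is that the test sample stays out of training entirely, so passing it is evidence the weights generalize rather than memorize.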

After this, the network would be run in production mode (or Prediction Mode, in NeuroStock) using new real-time data, without continued training. More training would only be required when the net started making too many incorrect predictions.
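
One way to make "too many incorrect predictions" concrete is a rolling hit-rate check over recent predictions. This is just a sketch of the idea with thresholds I picked for illustration, not how NS actually decides to flag a net:

```python
from collections import deque

class DriftMonitor:
    """Track recent prediction outcomes; flag when accuracy decays."""

    def __init__(self, window=20, min_hit_rate=0.6):
        self.recent = deque(maxlen=window)   # rolling record of hits/misses
        self.min_hit_rate = min_hit_rate

    def record(self, was_correct):
        self.recent.append(bool(was_correct))

    def needs_retraining(self):
        if len(self.recent) < self.recent.maxlen:
            return False                     # not enough evidence yet
        return sum(self.recent) / len(self.recent) < self.min_hit_rate
```

In production you would record each prediction's outcome once the real price move is known, and only go back to training when the monitor trips.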

I assume that at some point, when a suitable verify period pattern is found, training could be stopped. As Len pointed out, once a net is trained accurately, you should be able to use it for quite a while without retraining it. If we had access to the total network dataset error value (at least for Backprop training), then we could determine the network's training level (i.e., the smaller the better, usually around 0.001 or less for each data set presented to the network). I can only guess that some of the confidence level NS displays might be based on this.
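
For backprop, that "total network dataset error" is conventionally a sum of squared errors over all patterns; a small sketch, using the post's 0.001 per-pattern rule of thumb as the threshold (how NS measures it internally is unknown):

```python
def sum_squared_error(targets, outputs):
    """Total squared error across all patterns presented to the network."""
    return sum((t - o) ** 2 for t, o in zip(targets, outputs))

def looks_well_trained(targets, outputs, per_pattern_threshold=0.001):
    """Treat the net as trained when the average error per pattern
    falls at or below the rule-of-thumb threshold."""
    avg = sum_squared_error(targets, outputs) / len(targets)
    return avg <= per_pattern_threshold
```

The smaller this number, the better trained the net; if a confidence figure were derived from it, lower dataset error would map to higher confidence.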

Something that might need consideration: one book I have on developing neural applications discusses the fact that once a data set is gathered, it might be a good idea to find the lowest and highest values in the inputs/outputs set, and add 10% or so to the highest values, and subtract 10% from the lowest, as part of the data pre-conditioning. This will allow the network to be able to predict trends going slightly higher and slightly lower than the data values the training set contains. That book points out that a prediction made using data that lies outside of the training set will NOT be reliable.
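
That pre-conditioning step can be sketched like this. I'm reading "10%" as 10% of the observed range on each side, which is one common convention; the book may instead mean 10% of the values themselves:

```python
def padded_bounds(values, pad=0.10):
    """Widen the observed min/max by 10% of the range on each side, so the
    net can represent trends slightly outside the training data."""
    lo, hi = min(values), max(values)
    span = hi - lo
    return lo - pad * span, hi + pad * span

def scale_to_unit(values, pad=0.10):
    """Min-max scale using the padded bounds; every training value then
    lands strictly inside (0, 1), leaving headroom at both ends."""
    lo, hi = padded_bounds(values, pad)
    return [(v - lo) / (hi - lo) for v in values]
```

Without the padding, the training minimum and maximum would sit exactly at 0 and 1, and any new price beyond them would fall outside the range the net was trained on, which is exactly the unreliable-prediction case the book warns about.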

What I'm starting to see is that if I get a good, clean diagonal on my training set, AND if my verification period (usually about one to two months, depending on volatility during that time) also produces a diagonal rather than the scatter that is most often the case, then I usually quit training it and devote my computer's time to new nets, or to nets that do not yet show a suitable verify period.
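
A "clean diagonal" on a predicted-vs-actual plot is essentially a high correlation between the two series, so one quick way to quantify what the eye is judging (my own check, not an NS feature) is the Pearson coefficient:

```python
import math

def pearson(actual, predicted):
    """Correlation of predicted vs. actual values: near 1.0 is a clean
    diagonal; near 0.0 is the scattered cloud of a poorly verifying net."""
    n = len(actual)
    ma = sum(actual) / n
    mp = sum(predicted) / n
    cov = sum((a - ma) * (p - mp) for a, p in zip(actual, predicted))
    sa = math.sqrt(sum((a - ma) ** 2 for a in actual))
    sp = math.sqrt(sum((p - mp) ** 2 for p in predicted))
    return cov / (sa * sp)
```

Running this over the verify period would let "is it a diagonal?" become a number you could compare across nets instead of eyeballing each chart.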

My note-taking leaves something to be desired, but I've got two or three nets that have done this, and they seem to be predicting fairly well. I've not yet bought on their signals, but probably will soon - I wanted to observe them for a while first. Two of them have been giving buy signals for the last week or two, and in the past few days both have gained 10-15%; NS is now recommending Hold, so I guess they were correct.

Somewhere in the NS help files, it says that the network will alert you when it has drifted to the point of needing more training. I have not yet had this warning come up, but I think it's because I've been letting them train a lot - probably more than is necessary.

Time will Tell...

This, of course, is just another set of lunatic ravings, "from a slightly different point of view" (to quote a song), but maybe some of it is relevant.

Have a good weekend.

Bill