Strategies & Market Trends : NeuroStock


To: Len Giammetta who wrote (419) | 11/27/1998 6:24:00 PM
From: Bill Scoggin
 
Jay,

From what I've read and studied, the goal of most neural net applications is to find a net that produces suitable results on a test sample of data held out of the larger training set. Once a set of weights is found that produces fairly accurate (it does not HAVE to be perfect) results when presented with the test data (which it has never seen before), training is stopped. This becomes a workable "model" that SHOULD stay accurate as long as the input data stays within the ranges that the training set covered.
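For anyone who wants to see the idea in code, here is a rough Python sketch of that hold-out approach. It is only an illustration of the general technique, not anything from NeuroStock: the train_step and test_error callables are placeholders for whatever backprop routine and error measure a given package provides.

import random

def split_data(samples, holdout_fraction=0.2):
    """Shuffle, then set aside a test sample the net never trains on."""
    shuffled = samples[:]
    random.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - holdout_fraction))
    return shuffled[:cut], shuffled[cut:]        # (training set, test set)

def train_until_acceptable(net, samples, train_step, test_error,
                           target_error=0.01, max_epochs=10000):
    train_set, test_set = split_data(samples)
    for _ in range(max_epochs):
        train_step(net, train_set)               # weights adjusted on training data only
        if test_error(net, test_set) <= target_error:
            break                                # good enough on unseen data: stop here
    return net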

After this, the network would be run in production mode (or Prediction Mode, for Neurostock) using new real-time data, without continued training. More training would only be required when the net started making incorrect predictions too often.
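In code, "prediction mode" amounts to freezing the weights and only flagging retraining once recent predictions miss too often. The sketch below is a generic way to track that; the predict callable and the 35% miss threshold are my own placeholders, not NeuroStock's documented trigger.

from collections import deque

class ProductionNet:
    def __init__(self, predict, window=20, max_miss_rate=0.35):
        self.predict = predict                       # inference only, no weight updates
        self.recent_misses = deque(maxlen=window)    # rolling hit/miss record
        self.max_miss_rate = max_miss_rate

    def predict_and_track(self, inputs, actual_direction):
        predicted = self.predict(inputs)
        self.recent_misses.append(predicted != actual_direction)
        return predicted

    def needs_retraining(self):
        """True once the window is full and too many recent predictions missed."""
        if len(self.recent_misses) < self.recent_misses.maxlen:
            return False
        return sum(self.recent_misses) / len(self.recent_misses) > self.max_miss_rate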

I assume that at some point, when a suitable verify period pattern is found, training could be stopped. As Len pointed out, once a net is trained accurately, you should be able to use it for quite a while without retraining it. If we had access to the total network dataset error value (at least for Backprop training), then we could determine the network's training level (i.e. the smaller the better, usually around 0.001 or less for each pattern presented to the network). I can only guess that the confidence level NS displays might be based on this???
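Here is a back-of-envelope version of that "total network dataset error" idea: squared error per pattern, with training considered adequate once every pattern falls under roughly 0.001. The forward callable stands in for the net's output function; none of this is NeuroStock's documented internals, just the usual Backprop bookkeeping.

def pattern_errors(forward, dataset):
    errors = []
    for inputs, target in dataset:
        output = forward(inputs)                 # the net's prediction for this pattern
        errors.append((output - target) ** 2)    # squared error for this one pattern
    return errors

def is_well_trained(forward, dataset, per_pattern_limit=0.001):
    return all(e <= per_pattern_limit for e in pattern_errors(forward, dataset))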

Something that might need consideration: one book I have on developing neural applications suggests that once a data set is gathered, it might be a good idea to find the lowest and highest values in the inputs/outputs set, then add 10% or so to the highest values and subtract 10% from the lowest, as part of the data pre-conditioning. This allows the network to predict trends going slightly higher or slightly lower than the values the training set contains. That book points out that a prediction made using data that lies outside of the training set will NOT be reliable.
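One reading of that advice in Python (I'm interpreting the 10% as a fraction of the observed span; the book may mean 10% of the values themselves): pad the range before scaling, so values slightly above or below anything in the training set still map inside the scaled range.

def scaling_range(values, margin=0.10):
    lo, hi = min(values), max(values)
    span = hi - lo
    return lo - margin * span, hi + margin * span    # padded (low, high)

def scale(value, lo, hi):
    """Map a raw value into 0..1 against the padded range."""
    return (value - lo) / (hi - lo)

# Example: closes observed between 20 and 30 are scaled against 19..31,
# so a later close of 30.5 still lands inside 0..1 instead of off the chart.
lo, hi = scaling_range([20.0, 24.5, 27.3, 30.0])
print(round(scale(30.5, lo, hi), 3))                 # 0.958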

What I'm starting to see is that if I get a good, clean diagonal on my training set, AND if my verification period (usually about a month to two months, depending on volatility during that time) also produces a diagonal (and is not scattered, as is most often the case), then I usually quit training it and devote my computer's time to new nets, or to nets that do not yet show a suitable verify period.
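A rough numeric stand-in for eyeballing that diagonal, as shown below: if predicted vs. actual values over the verify period line up, their correlation comes out near 1, while a scattered cloud gives a much lower number. Plain Python, no NeuroStock API, and the 0.9 cutoff is just my own guess at "clean".

import math

def correlation(predicted, actual):
    n = len(predicted)
    mp = sum(predicted) / n
    ma = sum(actual) / n
    cov = sum((p - mp) * (a - ma) for p, a in zip(predicted, actual))
    sp = math.sqrt(sum((p - mp) ** 2 for p in predicted))
    sa = math.sqrt(sum((a - ma) ** 2 for a in actual))
    if sp == 0 or sa == 0:
        return 0.0                                   # flat series: no usable diagonal
    return cov / (sp * sa)

def looks_like_a_diagonal(predicted, actual, threshold=0.9):
    return correlation(predicted, actual) >= threshold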

My note-taking leaves something to be desired, but I've got two or three nets that have done this, and they seem to be predicting fairly well. I've not yet bought on their signals, but probably will soon - I wanted to observe them for a while. These nets have been giving buy signals for the last week or two, and in the past few days they have gained 10-15%; NS is now recommending Hold, so I guess they were correct.

Somewhere in the NS help files, it says that the network will alert you when it has drifted to the point of needing more training. I have not yet had this warning come up, but I think it's because I've been letting them train a lot - probably more than is necessary.

Time will Tell...

This, of course, is just another set of lunatic ravings, "from a slightly different point of view" (to quote a song), but maybe some of it is relevant.

Have a good weekend.

Bill



To: Len Giammetta who wrote (419) | 11/28/1998 11:07:00 AM
From: Jay Hartzok
 
Len,

Thanks for challenging my findings on this subject. Because of your replies, I took a much closer look at my testing process and found a flaw in it. I'm a bit embarrassed to say what it was, because it's something that I should have noticed right away. I suppose I was looking for complexity when I should have been looking for simplicity. Anyway, it turns out that for some reason, when you open certain nets, the volatility bars in the verify periods are not restored to the point at which they were when the net was closed. Other nets seem to open perfectly. (I have no idea why this is, and at this point I'm not sure that I even want to know.) This does not seem to affect the final prediction; it never changed on any test net.

At any rate, I found that if I trained the net with the original training dates intact for 100 cycles, and then opened the other net, removed the sliding window to restore the original training dates, and trained that net for 100 cycles, the verify periods became identical or nearly identical, depending on which net comparison I was looking at.

So, apologies to all for making an issue of this. I generally don't type until I am absolutely certain, and although I thought I was certain at the time I posted, I should have double- and triple-checked all the results before I started typing.

I have deleted all of my test nets and am now returning to "simplicity". This lunatic is now officially back to normal. Thank God.

Jay