Strategies & Market Trends : NeuroStock


To: Jay Hartzok who wrote (406), 11/26/1998 9:11:00 PM
From: CatLady
 
Jay,

Interesting observation. I've formed a similar opinion myself: the "forget" button, at least at times, doesn't actually forget old training completely. Nor do "File" and "New" from the menu.

But shutting down and restarting NS, without rebooting, seems to be enough, maybe?

Pure speculation on my part, but maybe whatever random process is used to "re-seed" the training isn't truly random at all, and instead starts from the old weights?
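To make the speculation concrete, here's a toy Python sketch (nothing to do with NeuroStock's actual internals, which none of us can see) of the difference between a true random re-seed and a "reset" that secretly starts from the old weights:

```python
import random

def reseed_weights(weights, truly_random=True):
    """Return replacement weights for a net being 'forgotten'.
    A true re-seed ignores the old values entirely; a fake one
    just jitters them, so retraining tends to fall back into
    the old solution."""
    if truly_random:
        return [random.uniform(-0.5, 0.5) for _ in weights]
    # Pseudo-reset: small jitter around the old values.
    return [w + random.gauss(0.0, 0.05) for w in weights]

old = [0.8, -0.3, 1.2]
fresh = reseed_weights(old, truly_random=True)
jittered = reseed_weights(old, truly_random=False)
```

A net restarted from the jittered weights would sit right next to its old solution, which would explain "forget" not really forgetting.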

Anyway, I've upgraded from a Cyrix 200MX to an Intel 233MMX and because of the better FPU, nets are training about 50% faster. I'm letting my old nets do some additional training and seeing some positive results. I'm also starting to think that it actually is OK to stop annealing before it's completed, as long as it's always followed by some more backprop time. In fact, a little annealing may be better than too much?
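For what it's worth, the "stop annealing early, then finish with backprop" idea can be sketched on a toy one-parameter problem. This is plain Python with simple gradient descent standing in for backprop; none of it is NeuroStock code, and the step counts are made up:

```python
import random

def train(w, loss, anneal_steps=50, backprop_steps=200, lr=0.05):
    """Truncated annealing to jump around the loss surface,
    followed by gradient descent to fine-tune the result."""
    temp = 1.0
    cur, cur_loss = w, loss(w)
    for _ in range(anneal_steps):
        cand = cur + random.gauss(0.0, temp)
        # Accept improvements, and occasionally a worse move while hot.
        if loss(cand) < cur_loss or random.random() < 0.1 * temp:
            cur, cur_loss = cand, loss(cand)
        temp *= 0.95  # cool down
    w = cur
    for _ in range(backprop_steps):
        # Numerical gradient of the loss at w.
        grad = (loss(w + 1e-5) - loss(w - 1e-5)) / 2e-5
        w -= lr * grad
    return w

w_final = train(5.0, lambda w: (w - 2.0) ** 2)  # minimum at w = 2
```

The annealing phase can end anywhere near the basin; the backprop phase still pulls the weight into the minimum, which matches the "always follow annealing with more backprop" rule of thumb.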

Overall, I found several of my nets were overly pessimistic over the last 2 to 4 weeks and missed out on some very good gains. The overall market has been showing remarkable strength, so the nets seem to have been fooled by an uncommon situation. I am going to start keeping copies of the nets with short or no verify periods, in hopes of spotting such anomalies faster.

CL




To: Jay Hartzok who wrote (406), 11/26/1998 11:03:00 PM
From: Jay Hartzok
 
More observations,

I have noticed that supposedly identical nets will train differently and produce different results under certain situations.

Nets that are set up with individual inputs that have multiple boxes checkmarked will train differently than an otherwise identical net with a separate input for each checked box. I do not know for certain which is better, but I am leaning towards one box checked per input. Setting up nets this way may, of course, be a problem for people using shareware versions that are limited to seven inputs.
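I don't know how NeuroStock combines checked boxes internally, but if combining means the indicators get folded into one input value, this toy sketch shows why separate inputs are more flexible: the net gets one weight per indicator instead of one weight for the whole group (the names and numbers here are mine, purely illustrative):

```python
# One input carrying the whole group: a single weight scales them all.
def combined_output(indicators, w):
    return w * sum(indicators)

# One input per indicator: each gets its own weight, so the net can
# emphasize one indicator and ignore or invert another.
def separate_output(indicators, weights):
    return sum(w * x for w, x in zip(weights, indicators))

inds = [0.3, -0.1, 0.7]  # stand-ins for three technical indicators
grouped = combined_output(inds, 0.5)
individual = separate_output(inds, [0.5, -1.0, 0.2])
```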

Nets that are trained with the maximum available neurons from the start (or near the start) train differently than those that begin with a minimal number and let the program add more neurons as needed. Nets trained by adding more as you go seem to be more finely tuned, especially if you have both back prop and annealing checked. I attribute this to the fact that every time the net adds neurons, it trains through an entire cycle of back prop and all four phases of annealing. As a result of repeating all of those annealing passes each time new neurons are added, nets trained this way take quite a bit longer to train. Which is better? I am leaning towards starting with 6 to 14 neurons and letting the net add more as it goes.
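The grow-as-needed idea reads something like the sketch below (toy Python: `train_cycle` stands in for one full backprop-plus-annealing pass, and the 1/n error curve is invented just to drive the loop):

```python
def grow_and_train(train_cycle, start=6, max_neurons=32, target_err=0.05):
    """Start with a small hidden layer and add neurons only while the
    error is still too high. Every growth step re-runs the whole
    training cycle, which is why this route takes longer but tends
    to settle on a more finely tuned net."""
    n = start
    err = train_cycle(n)
    while err > target_err and n < max_neurons:
        n += 2                   # add a couple of neurons
        err = train_cycle(n)     # full retrain at the new size
    return n, err

# Toy stand-in: pretend error falls off as 1/n with more neurons.
neurons, err = grow_and_train(lambda n: 1.0 / n)
```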

One final observation: Not all, but most of the premature buy signals can be eliminated by using the Skittish strategy, which only buys when sure and sells at the first sign of a possible downtrend. Nets that I have trained from the beginning using this strategy rarely produce premature buy signals in the verify period.
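As I understand the Skittish strategy, it amounts to an asymmetric pair of thresholds on the net's output; a sketch (the threshold values are made-up numbers, not NeuroStock settings):

```python
def skittish_signal(prediction, buy_at=0.8, sell_at=0.4):
    """Only buy when the net is quite sure of an uptrend; sell at
    the first sign of weakness. The gap between the two thresholds
    is a 'hold' zone that suppresses premature buy signals."""
    if prediction >= buy_at:
        return "buy"
    if prediction <= sell_at:
        return "sell"
    return "hold"

signals = [skittish_signal(p) for p in (0.9, 0.6, 0.35)]
```

A lukewarm 0.6 prediction lands in the hold zone instead of triggering a buy, which is exactly the premature-signal filtering described above.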

There is an old saying about how to make money in the market: never buy at the bottom, and always sell too soon. If anyone has methods for setting up nets that produce buy and sell signals conforming to this saying, please post the settings.

Jay