Strategies & Market Trends : NeuroStock


To: Jay Hartzok who wrote (648)1/20/1999 10:21:00 PM
From: Bob Anderton
 
Jay, in your previous note you describe nets rebuilding themselves after a crash more quickly than the initial training took. That is consistent with the hypothesis that only one or a few variables in the model have been lost or corrupted. One might suppose that the slope of the goodness of fit would be steepest with respect to those variables, so the net would recognize which variables needed to be stepped, and in which direction.
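The hypothesis above can be illustrated with a toy sketch (hypothetical, not NeuroStock's actual code or model): corrupt one parameter of a fitted linear model, and the loss gradient is largest with respect to that parameter, so plain gradient descent pulls it back quickly.

```python
# Toy illustration (assumed model, not NeuroStock's): after one
# parameter of a fitted model is corrupted, the loss slope is
# steepest with respect to that parameter, and gradient descent
# restores it quickly.

def loss_grad(w, data):
    """Gradient of mean squared error for y = w0*x0 + w1*x1."""
    g = [0.0, 0.0]
    for x0, x1, y in data:
        err = w[0] * x0 + w[1] * x1 - y
        g[0] += 2 * err * x0 / len(data)
        g[1] += 2 * err * x1 / len(data)
    return g

# Synthetic data generated by true weights (2.0, -1.0).
data = [(x0, x1, 2.0 * x0 - 1.0 * x1)
        for x0 in (1.0, 2.0) for x1 in (1.0, 2.0)]

w = [2.0, -1.0]   # fitted model
w[1] = 5.0        # "crash": one parameter corrupted

g = loss_grad(w, data)
print(abs(g[0]) < abs(g[1]))   # → True: steepest slope is on the corrupted weight

# Gradient descent steps recover the lost parameter.
for _ in range(1000):
    g = loss_grad(w, data)
    w = [w[0] - 0.02 * g[0], w[1] - 0.02 * g[1]]
print(round(w[1], 1))          # → -1.0
```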

The fact remains, however, and I think this is a huge clue and the root of the mystery: supposing a net crashed today, copying today's values onto the front of yesterday's data file and overwriting today's data file with the result restores the net. Note that I am not overwriting or changing the model file, only the data file (for example, AMAT.CSV). The data file is not supposed to contain any information about the model, just the raw data. How could reverting to the previous data file, even with the newest data added manually, restore the crashed net?
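The workaround described above can be sketched roughly as follows. This is a hypothetical illustration only: the `restore_data_file` helper, the backup filename, and the newest-row-first CSV layout are my assumptions, not NeuroStock's documented format.

```python
# Hypothetical sketch of the workaround described above. Assumes the
# data file keeps its newest row first ("onto the front"); the real
# CSV layout may differ.

def restore_data_file(todays_row, backup_path, current_path):
    """Prepend today's (manually entered) values to yesterday's
    known-good data file, then overwrite today's data file with it."""
    with open(backup_path) as f:
        yesterdays_rows = f.read()
    with open(current_path, "w") as f:
        f.write(todays_row + "\n" + yesterdays_rows)

# Usage (filenames hypothetical):
# restore_data_file(todays_values, "AMAT_yesterday.CSV", "AMAT.CSV")
```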

Bob