Strategies & Market Trends : NeuroStock


To: Jay Hartzok who wrote (636), 1/15/1999 1:14:00 AM
From: Bill Scoggin
 
Jay, Len, etc.

Not to get too long-winded, but back to the missing-data question and how a NN will treat it...

Remember, a neural network, once trained, simply becomes one big matrix math problem...

A column of numbers (taken from the latest day's data, based on our pre-processing selections) is multiplied through the weight matrix and summed at the input to each hidden-layer neuron's formula (a sigmoid). No matter how big or small that sum is, the sigmoid formula (the one most commonly used in backprop training) will always produce a number between 0 and 1. The output of the hidden layer is multiplied through another matrix and summed at the output-layer formula. The next day's prediction is based on the answer(s) at the output. There could literally be hundreds or thousands of numbers multiplied and manipulated to get the final answer out of the network.
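NeuroStock's internals aren't public, so here's just a minimal sketch of that forward pass in Python/NumPy. The layer sizes, random weights, and input values are all made up for illustration; the point is that every hidden activation, being a sigmoid, lands between 0 and 1 no matter how large the weighted sum gets.

```python
import numpy as np

def sigmoid(x):
    # Squashes any real number into the (0, 1) range
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical sizes: 5 pre-processed inputs, 3 hidden neurons, 1 output
rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(3, 5))   # input-to-hidden weight matrix
W_output = rng.normal(size=(1, 3))   # hidden-to-output weight matrix

inputs = rng.normal(size=5)          # one day's pre-processed data column

# Multiply through the matrix, sum, and squash at each layer
hidden = sigmoid(W_hidden @ inputs)      # each value lies in (0, 1)
prediction = sigmoid(W_output @ hidden)  # the next day's predicted value
```

Once the weights are fixed, this is the entire computation: a day's data in, a couple of matrix multiplies, a prediction out.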

The hours and hours of training are spent searching for a set of numbers in these matrices that produces correct pattern matches between the training set's known outputs and the outputs calculated by the network. As each pattern, or set of patterns, is presented to the network, the matrix values change in a direction that tries to offset the error between the predicted and known outputs.
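To make that "change in a direction to offset the error" concrete, here is the classic delta-rule update for a single sigmoid unit; real backprop does the same thing layer by layer. The weights, input pattern, learning rate, and iteration count are all invented for the sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
w = rng.normal(size=3)            # the "matrix" for one neuron
x = np.array([0.5, -0.2, 0.8])    # one training pattern
target = 1.0                      # the known output for that pattern
lr = 0.5                          # learning rate

init_err = abs(target - sigmoid(w @ x))

for _ in range(500):
    out = sigmoid(w @ x)
    err = target - out
    # Nudge each weight against the error gradient:
    # delta for a sigmoid unit is err * out * (1 - out)
    w += lr * err * out * (1 - out) * x

final_err = abs(target - sigmoid(w @ x))
```

Each pass shrinks the gap between the known and calculated output a little; hours of training are just millions of these small nudges across every weight in the matrices.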

Once such a set is found, it can be assumed that those matrices should produce similar results when up-to-date data is sent through them.

Obviously this does NOT always happen - that's why some of the Verify graphs are totally wrong, even after many, many hours of training. Sometimes just starting the net over and retraining, even with the same influence period, will produce accurate results - because the training started with different random numbers in the matrices.
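The "different random numbers" point is easy to demonstrate. In this toy sketch (architecture and seeds made up), two networks with identical structure, built from different random starting matrices, give different outputs for the same input - which is why restarting training can land in a different, sometimes better, solution.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def init_net(seed, n_in=4, n_hidden=3):
    # "Starting the net over" means drawing fresh random matrices
    rng = np.random.default_rng(seed)
    return rng.normal(size=(n_hidden, n_in)), rng.normal(size=(1, n_hidden))

def predict(W_h, W_o, x):
    return sigmoid(W_o @ sigmoid(W_h @ x))

x = np.ones(4)                 # same input to both nets
net_a = init_net(seed=42)      # first training run's starting point
net_b = init_net(seed=99)      # a restarted run's starting point

out_a = predict(*net_a, x)
out_b = predict(*net_b, x)
```

Since training descends from wherever those matrices start, each restart explores a different path and can end up at a different final answer.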

Remember also, a NN is supposed to make good generalized decisions. So I think that as time goes on, even if this data is never corrected, the network would compensate with additional training. It might not be 100% accurate on any given 1 or 2 days, but it should still catch trends - assuming it was accurate to start with.

Anyway, just late night ponderings stimulated by one too many cups of coffee at the Waffle House.

Bill