Strategies & Market Trends : NeuroStock


To: Jay Hartzok who wrote (492)12/18/1998 10:22:00 AM
From: Bill Scoggin  Respond to of 805
 
Jay,
I think you are right. Backpropagation routines usually start by initializing the hidden-layer and output-layer weight matrices to random numbers, usually between -0.5 and +0.5, or somewhere in that ballpark. Sometimes, simply reinitializing the weight values will result in a network that will train to a suitable error, without changing anything else. Also, adding more neurons during training (with additional random weights, of course) sometimes aids training.
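NeuroStock itself is closed software, so the following is just a generic sketch of the ideas above: initializing both weight matrices uniformly in [-0.5, +0.5], and growing the hidden layer by giving new units small random incoming and outgoing weights. The function names and the single-hidden-layer shape are my own assumptions, not anything from the product.

```python
import numpy as np

def init_weights(n_in, n_hidden, n_out, scale=0.5, rng=None):
    """Initialize hidden- and output-layer weight matrices with
    small uniform random values in [-scale, +scale]."""
    rng = rng or np.random.default_rng()
    w_hidden = rng.uniform(-scale, scale, size=(n_in, n_hidden))
    w_out = rng.uniform(-scale, scale, size=(n_hidden, n_out))
    return w_hidden, w_out

def add_neurons(w_hidden, w_out, n_new, scale=0.5, rng=None):
    """Grow the hidden layer by n_new units; the new units get
    small random incoming and outgoing weights, while the weights
    already learned are left untouched."""
    rng = rng or np.random.default_rng()
    new_in = rng.uniform(-scale, scale, size=(w_hidden.shape[0], n_new))
    new_out = rng.uniform(-scale, scale, size=(n_new, w_out.shape[1]))
    return np.hstack([w_hidden, new_in]), np.vstack([w_out, new_out])

# "Reinitializing" is simply drawing a fresh set of random weights:
# a different random starting point can let a stuck network train
# down to a suitable error without changing anything else.
w_h, w_o = init_weights(10, 5, 1)
w_h, w_o = add_neurons(w_h, w_o, 2)  # hidden layer now has 7 units
```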

If the verify period is suitable, and the scattergraph is a diagonal, that's our only test with NeuroStock of the net's fit to the problem at hand. There is no way to tell from the start whether a given net will train suitably.
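The "scattergraph is a diagonal" check is, in effect, asking how well predicted values track actual values over the verify period. One common way to put a number on that is the Pearson correlation between the two series; this is my own stand-in for NeuroStock's visual test, not how the program actually computes it.

```python
import numpy as np

def diagonal_fit(actual, predicted):
    """Pearson correlation between actual and predicted values.
    A value near 1.0 means the predicted-vs-actual scattergraph
    hugs the diagonal; near 0 means no linear relationship."""
    return float(np.corrcoef(actual, predicted)[0, 1])
```

For example, a net whose verify-period predictions fall exactly on a straight line against the actuals would score 1.0 here, matching a perfectly diagonal scattergraph.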

At least that's what I read...

Bill