Strategies & Market Trends : NeuroStock

To: Vic Nyman who wrote (491)12/21/1998 8:47:00 AM
From: Jay Hartzok  Read Replies (1) of 805
 
Andrew's Comments:

*********************************************************

Training neural networks is not a deterministic process: two networks
trained with identical data WILL result in two differently trained
networks.
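A minimal sketch of this point (hypothetical NumPy code, not from the original post): two identical one-hidden-layer networks are trained on the same data with plain backprop. Only the random starting weights differ, yet the trained weights end up different.

```python
import numpy as np

def train(seed, X, y, lr=0.5, epochs=200):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(2, 3))   # random starting point
    W2 = rng.normal(scale=0.5, size=(3, 1))
    for _ in range(epochs):
        h = np.tanh(X @ W1)                   # forward pass
        pred = h @ W2
        err = pred - y
        gW2 = h.T @ err / len(y)              # backprop: output layer gradient
        gW1 = X.T @ ((err @ W2.T) * (1 - h**2)) / len(y)  # hidden layer gradient
        W1 -= lr * gW1
        W2 -= lr * gW2
    return W1, W2

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])        # XOR: a nonconvex problem
a = train(seed=1, X=X, y=y)
b = train(seed=2, X=X, y=y)
print(np.allclose(a[0], b[0]))                # False: different final weights
```

Because the loss surface is nonconvex, each starting point flows to its own solution; identical data is not enough to make the runs agree.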

Both BackProp and Simulated Annealing training use random starting
points and random search patterns during training; it would take months or even years to do an exhaustive search of the solution space.
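To illustrate the simulated-annealing side (a hypothetical sketch, not the poster's actual trainer): the search starts at a random point, takes random steps, and sometimes accepts a worse move with a probability that shrinks as the "temperature" cools. Two runs with different seeds take different random paths.

```python
import math
import random

def anneal(loss, seed, steps=2000, temp=1.0, cooling=0.995):
    rng = random.Random(seed)
    x = rng.uniform(-5, 5)                 # random starting point
    best = x
    for _ in range(steps):
        cand = x + rng.gauss(0, 0.5)       # random search move
        d = loss(cand) - loss(x)
        if d < 0 or rng.random() < math.exp(-d / temp):
            x = cand                       # accept improvement, or worsening with some probability
        if loss(x) < loss(best):
            best = x
        temp *= cooling                    # cool: accept fewer uphill moves over time
    return best

# A bumpy 1-D loss with many local minima.
loss = lambda x: math.sin(3 * x) + 0.1 * x * x
print(anneal(loss, seed=1), anneal(loss, seed=2))
```

The randomness is the point: it lets the search escape local minima without enumerating the space, which is why an exhaustive search is never attempted.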

The solution space for this application is incredibly huge. A typical
network may have over 30 dimensions of mostly uncorrelated data, hundreds of variables to balance, and hundreds of thousands of computations per training pass. It is not very probable that two networks would take different paths and arrive at the same solution. That would be like two little ants searching for the tallest mountain on earth, starting on different continents: most likely each will find the tallest mountain it can see, but not always the tallest mountain there is (and that's only three dimensions! Our problem is 10 to the 10th power more complex).

I hope this helps, and remember that we are not trying to find the
absolute best fit network, just one that gives us an edge.

Andrew.
