Strategies & Market Trends : NeuroStock


To: Vic Nyman who wrote (491) 12/18/1998 8:41:00 AM
From: Jay Hartzok

Vic,

A few thoughts:

One would think that two identical nets would train exactly the same. Obviously that's not the case. It seems that out of the effectively infinite number of neural patterns available, Neuro randomly selects a pattern to begin the training. If that is the case, then two identical nets would almost never train alike. As the training progresses and Neuro tries out and discards unsatisfactory patterns, I would think the two nets would eventually become the same, provided, of course, that each net finally found the same best pattern to make predictions from. To oversimplify, I suppose you could say that the two nets took different roads to end up at the same place. How much additional training it would take to get these two nets to become identical is anyone's guess.
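A toy illustration of the idea in Python with NumPy (my own sketch; NeuroStock's internals aren't published, so none of this is its actual code): two nets with the same architecture and the same training data, differing only in their random starting weights, can both learn the task yet end up with different weights.

import numpy as np

def train_net(seed, X, y, hidden=4, lr=0.5, epochs=2000):
    rng = np.random.default_rng(seed)
    # The random starting point: small random weights.
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    for _ in range(epochs):
        # Forward pass: tanh hidden layer, sigmoid output.
        h = np.tanh(X @ W1)
        out = 1.0 / (1.0 + np.exp(-(h @ W2)))
        # Backprop of the squared error.
        d_out = (out - y) * out * (1.0 - out)
        d_h = (d_out @ W2.T) * (1.0 - h ** 2)
        W2 -= lr * h.T @ d_out
        W1 -= lr * X.T @ d_h
    return W1, W2, float(np.mean((out - y) ** 2))

# Identical toy training data (XOR) for both nets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1a, W2a, loss_a = train_net(seed=1, X=X, y=y)
W1b, W2b, loss_b = train_net(seed=2, X=X, y=y)

# Often both losses are small, though one run can get stuck in a
# poorer solution, which only underlines the point.
print("final losses:", loss_a, loss_b)
print("identical weights?", np.allclose(W1a, W1b))  # False

Running this, np.allclose reports that the weight matrices differ even when both losses are near zero: different roads, same destination.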

These are just some of my thoughts and may not have any validity at all. I will post Andrew's comments as soon as he replies.

Jay



To: Vic Nyman who wrote (491) 12/21/1998 8:47:00 AM
From: Jay Hartzok

Andrew's Comments:

*********************************************************

Training neural networks is not a deterministic process: two networks
trained with identical data WILL result in two differently trained
networks.

Both BackProp and Simulated Annealing training use random starting
points and random search patterns during training; an exhaustive
search of the solution space would take months or even years.
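To see why exhaustive search is hopeless, here is a back-of-envelope sketch in Python (the grid size, dimension count, and evaluation rate are illustrative assumptions of mine, not Andrew's figures):

# Even a crude grid of 10 values per weight over 30 dimensions
# is 10**30 candidate networks to evaluate.
grid_points = 10
dimensions = 30
evals_per_second = 1e9          # a generous billion per second

total_evals = grid_points ** dimensions
seconds = total_evals / evals_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"{total_evals:.0e} evaluations, about {years:.1e} years")

Even with those generous assumptions the count dwarfs any practical budget, which is why the training relies on randomized search instead.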

The solution space for this application is incredibly huge: a typical
network may have over 30 dimensions of mostly uncorrelated data,
hundreds of variables to balance, and hundreds of thousands of
computations per training pass. It is not very probable that two
networks taking different paths would arrive at the same solution.
That would be like two little ants searching for the tallest mountain
on earth, starting on different continents: most likely each will find
the tallest mountain it can see, but not always the tallest mountain
there is (and that's only three dimensions! Our problem is 10 to the
10th power more complex).
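The ant analogy can be made concrete with a toy hill-climber (again a sketch of mine, on a made-up one-dimensional landscape): each run walks uphill in small steps from a random start and stops at the first peak it cannot improve on.

import math, random

def height(x):
    # A bumpy made-up landscape with several local peaks.
    return math.sin(5 * x) + math.sin(2 * x) - 0.1 * x * x

def climb(seed, step=0.01):
    rng = random.Random(seed)
    x = rng.uniform(-5, 5)          # ant dropped at a random spot
    while True:
        # Look one step left and right; move uphill if possible.
        best = max((x, x - step, x + step), key=height)
        if best == x:
            return x, height(x)     # a local peak: nothing higher in sight
        x = best

for seed in (1, 2, 3, 4):
    x, h = climb(seed)
    print(f"ant {seed}: peak at x = {x:+.2f}, height = {h:.3f}")

Different starts usually settle on different peaks, and only some of them are the global one; training in 30-odd dimensions behaves the same way, just on a vastly bigger landscape.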

I hope this helps, and remember that we are not trying to find the
absolute best-fit network, just one that gives us an edge.

Andrew.