Strategies & Market Trends : NeuroStock

To: Optim who wrote (381)11/14/1998 9:46:00 AM
From: Vic Nyman  Read Replies (2) of 805
 
Annealing versus Backprop - use both!

Hi Everyone,

I have been playing with NeuroStock for a couple months now and have been benefiting greatly from the accumulated wisdom of the group here... thanks for the extra profits!

On the subject of training methods, I have found that using both Annealing and Backprop is essential. They have different but complementary capabilities that were well described in one of the first NN FAQs I read (I'll post the URL as soon as I can find it again).

The FAQ basically described Backprop as a mountain climber who looks 1 step ahead. It causes the model to improve in the direction of the next step. If the "solution space" of an optimal stock prediction model looks like a mountain range, that Backprop mountain climber is going to start climbing the first mountain he encounters. He will climb to the top and believe that he is on the highest mountain peak in the world. The problem is that there may be other, higher mountains that he cannot see.
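To make the mountain-climber picture concrete, here is a toy sketch (purely illustrative, nothing to do with NeuroStock's internals): gradient ascent on a one-dimensional "mountain range" with a lower peak near x=1 and a higher peak near x=4. Started near the lower mountain, the climber tops out on it and never reaches the higher one.

```python
import math

def altitude(x):
    # Two peaks: a lower one near x=1 and a higher one (height ~2) near x=4.
    return math.exp(-(x - 1) ** 2) + 2 * math.exp(-(x - 4) ** 2)

def slope(x, h=1e-5):
    # Numerical gradient: the "one step ahead" the climber can see.
    return (altitude(x + h) - altitude(x - h)) / (2 * h)

def climb(x, step=0.1, iters=500):
    # Always move uphill; the climber settles where the slope vanishes.
    for _ in range(iters):
        x += step * slope(x)
    return x

# Starting at x=0, the climber stops near x=1 (altitude ~1),
# even though a higher summit exists near x=4 (altitude ~2).
peak = climb(0.0)
```

The point of the sketch is that nothing in the uphill-only rule can ever tell the climber that a better mountain exists out of sight.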

This is where Annealing comes in. Annealing is kind of like a helicopter. It looks to see if other, higher mountains exist in the range. If so, it picks up the mountain climber and places him on the other mountain... NOT NECESSARILY AT THE TOP OF THE MOUNTAIN. From there, the climber still needs to find his way to the summit of the new mountain... that's where Backprop comes back in.
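The helicopter can be sketched as toy simulated annealing on the same kind of two-peak landscape (again, an illustrative sketch, not NeuroStock's code). It tries long random hops and sometimes accepts a *worse* position while the "temperature" is high, which is what lets it escape the lower mountain's basin:

```python
import math, random

def altitude(x):
    # Same toy range: lower peak near x=1, higher peak near x=4.
    return math.exp(-(x - 1) ** 2) + 2 * math.exp(-(x - 4) ** 2)

def anneal(x, temp=2.0, cooling=0.98, steps=600, seed=0):
    rng = random.Random(seed)
    best = x
    for _ in range(steps):
        candidate = x + rng.uniform(-2.0, 2.0)      # a long-range hop
        gain = altitude(candidate) - altitude(x)
        # Uphill hops are always accepted; downhill hops with probability
        # exp(gain / temp), which shrinks as the temperature cools.
        if gain > 0 or rng.random() < math.exp(gain / temp):
            x = candidate
            if altitude(x) > altitude(best):
                best = x
        temp *= cooling
    return best

# Starting on the lower peak, the annealer finds its way into the
# higher mountain's basin, though not necessarily to the exact summit.
best = anneal(1.0)
```

Note that the annealer's answer is usually only *near* the higher summit, which is exactly why a final hill-climbing pass is still needed.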

OK, I know it sounds like an advertisement for Outward Bound, but it seems to work. Using Backprop first lets NeuroStock learn the basics of the stock's behavior; Annealing then makes sure you are on the "right" prediction-model solution of the several that may exist; and a final Backprop pass fine-tunes that solution into the most predictive model.

I don't claim to be an expert on the subject, but the above FAQ explanation has helped me understand NeuroStock better when it comes to training. Using this procedure, I have found only one stock that refused to train to at least some degree.

Thanks again to the group for all of the hints & tips. I hope the above info is useful.

Vic