Strategies & Market Trends : NeuroStock


To: Optim who wrote (381) | 11/11/1998 3:44:00 PM
From: Jay Hartzok
 
Optim,

I've had some nets that refused to train, too, and like yours, the dot pattern stayed in a horizontal line. What I found is that the program sometimes seems to "lock up" and stop responding to the data. Simply changing settings or relateds did not help. The only way I got the net to train was to delete the .neu file and set up the entire net again. If I tried to save over the "locked up" .neu file, the program wouldn't train the net. I have no idea whether this applies to what was happening with your net.

Jay



To: Optim who wrote (381) | 11/14/1998 9:46:00 AM
From: Vic Nyman
 
Annealing versus Backprop - use both!

Hi Everyone,

I have been playing with NeuroStock for a couple months now and have been benefiting greatly from the accumulated wisdom of the group here... thanks for the extra profits!

On the subject of training methods, I have found that using both Annealing and Backprop is essential. They have different but complementary capabilities that were well described in one of the first NN FAQs I read (I'll post the URL as soon as I can find it again).

The FAQ basically described Backprop as a mountain climber who looks 1 step ahead. It causes the model to improve in the direction of the next step. If the "solution space" of an optimal stock prediction model looks like a mountain range, that Backprop mountain climber is going to start climbing the first mountain he encounters. He will climb to the top and believe that he is on the highest mountain peak in the world. The problem is that there may be other, higher mountains that he cannot see.
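
To make the climber concrete, here is a toy sketch in Python (purely illustrative; height(), slope(), and climb() are names made up for this example and have nothing to do with NeuroStock's internals). Plain "look one step ahead" climbing walks uphill from wherever it starts and stops at the nearest peak, even though a higher peak exists:

    # Toy "mountain climber": gradient ascent on a bumpy curve.
    # height() stands in for model quality, x for the model's settings.
    import math

    def height(x):
        # Two peaks: a lower one near x = -1.5, the highest near x = 0.5.
        return math.sin(3 * x) - 0.1 * (x - 1) ** 2

    def slope(x, eps=1e-5):
        # Numerical derivative: which way is uphill from here?
        return (height(x + eps) - height(x - eps)) / (2 * eps)

    def climb(x, step=0.01, iters=2000):
        # Look one step ahead, move uphill, repeat until the slope flattens.
        for _ in range(iters):
            x += step * slope(x)
        return x

    start = -2.0                 # the climber begins near the lower mountain
    peak = climb(start)
    print(f"stopped at x={peak:.2f}, height={height(peak):.2f}")
    # Stops around x = -1.5, the top of the *lower* mountain,
    # while the higher peak near x = 0.5 goes unnoticed.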

This is where Annealing comes in. Annealing is kind of like a helicopter. It looks to see whether other, higher mountains exist in the range. If so, it picks up the mountain climber and places him on the other mountain... NOT NECESSARILY AT THE TOP OF THE MOUNTAIN. From there, the climber needs to find his way to the top of the new mountain... which is where Backprop comes back in.
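
A matching toy "helicopter", reusing height(), climb(), and start from the climber sketch above (again hypothetical, not NeuroStock's code): simulated annealing proposes random jumps and sometimes accepts a worse spot, which is what lets it hop between mountains. Note that it reports the best spot it has visited, not the summit, so the climber still has work to do:

    # Toy "helicopter": simulated annealing over the same landscape.
    import math, random

    def anneal(x, temp=2.0, cooling=0.995, iters=3000):
        best = x
        for _ in range(iters):
            trial = x + random.gauss(0, temp)   # jumps shrink as it cools
            delta = height(trial) - height(x)
            # Always accept a higher spot; sometimes accept a lower one.
            if delta > 0 or random.random() < math.exp(delta / temp):
                x = trial
            if height(x) > height(best):
                best = x
            temp *= cooling                     # cool down over time
        return best

    random.seed(1)
    dropped = anneal(start)
    print(f"helicopter dropped us near x={dropped:.2f}")
    print(f"after a final climb: height={height(climb(dropped)):.2f}")
    # Usually lands on the higher mountain, so the final climb
    # tops out near 0.98 instead of the 0.35 the climber alone found.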

OK, I know it sounds like an advertisement for Outward Bound, but it seems to work: using Backprop first allows NeuroStock to learn the basics of the stock's behavior, Annealing then makes sure you are on the "right" prediction-model solution of the several that may exist, and using Backprop again fine-tunes that solution into the most predictive model.
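
In the same toy terms, that three-stage recipe is just (one reading of the procedure, not an official NeuroStock training schedule):

    def train(x0):
        x = climb(x0)    # stage 1: Backprop learns the basics (nearest peak)
        x = anneal(x)    # stage 2: Annealing hunts for a better mountain
        return climb(x)  # stage 3: Backprop fine-tunes on the final mountain

    print(f"three-stage result: height={height(train(-2.0)):.2f}")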

I don't claim to be an expert on the subject, but the FAQ explanation above has helped me understand NeuroStock training better. Using this procedure, I have found only one stock that refused to train at least to some degree.

Thanks again to the group for all of the hints & tips. I hope the above info is useful.

Vic



To: Optim who wrote (381) | 11/18/1998 12:27:00 AM
From: CVDave
 
Annealing... how do you know when to stop the program when you use simulated annealing? I realize that if you are letting NS control everything, it will eventually go back to Backprop. But it has been stated several times that you should let SA continue through all four stages, letting pass 4 be "completed". My question is: if you are manually controlling the training, how do you know when you are done? I assume the program doesn't just stop, and every time I observe it, it's on pass #4, but it's not obvious when to quit.
Thanks,
Dave