Strategies & Market Trends : NeuroStock


To: Len Giammetta who wrote (242)10/16/1998 12:07:00 AM
From: CatLady
 
Len -

Somehow, though, I feel that using the maximum neurons will identify more patterns, but what do I know?

I've been starting my nets at 6 neurons and letting them add as needed. A few nets have bumped up the number very quickly, so I'm fairly well convinced that it's OK to let NS decide how many it needs.
Backprop seems to want more neurons than annealing, FWIW.

All -
I recently took a look at the sine wave data file and net from the NS site. Playing around with different influence periods on price and the ST filter yields some interesting results; I'm just not sure yet what conclusions to draw.

CL




To: Len Giammetta who wrote (242)10/16/1998 10:08:00 AM
From: Optim
 
Somehow, though, I feel that using the maximum neurons will identify more patterns

True. But that isn't necessarily a good thing. The more a net can remember, the more likely it is to 'memorize' the data. By reducing the number of neurons you force it to generalize and its predictions will apply better to newer data.
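A rough way to see the memorize-vs-generalize tradeoff is with polynomial curve fitting standing in for a neural net (more coefficients playing the role of more neurons). The data, degrees, and train/holdout split below are invented for illustration, not anything from NS:

```python
import numpy as np

rng = np.random.default_rng(0)

# 12 noisy samples of a simple underlying trend (stand-in for price data)
x = np.linspace(0, 1, 12)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, size=x.size)

# Hold out the last 4 points to play the role of "newer data"
x_train, y_train = x[:8], y[:8]
x_test, y_test = x[8:], y[8:]

# As many parameters as training points: the fit can 'memorize' the noise.
big = np.polyfit(x_train, y_train, 7)
# Fewer parameters: the fit is forced to generalize.
small = np.polyfit(x_train, y_train, 3)

def mse(coeffs, xs, ys):
    # mean squared error of the fitted polynomial on the given points
    return float(np.mean((np.polyval(coeffs, xs) - ys) ** 2))

print("train error:", mse(big, x_train, y_train), mse(small, x_train, y_train))
print("new-data error:", mse(big, x_test, y_test), mse(small, x_test, y_test))
```

The big model wins on the data it was fitted to (near-zero error, since it can thread through every point) but loses badly on the held-out points, which is exactly the 'memorizing' failure mode described above.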

I've found that you can run multiple instances of the program simultaneously, which of course increases the number of nets you can train in a given period of time.

True. But don't forget that the CPU is split between the two nets, so if you train for 12 hours, each net is really only trained for 6. Does this make sense? I think time is a poor criterion for training anyway. I guess as long as your net 'looks' fit, then you are okay.

I prefer to build my nets and then have a batch file train them individually while I am at work or sleeping (sometimes the same!). For example, train net 1 for 4 hours, then load net 2 and train it for 4 hours, and so on.

In very simple terms, could you explain some benefit of genetic training? Why would it be worth the additional investment?

I e-mailed Andrew Cilia a long while back about this. His response was that the GA process patches together various indicators (inputs) and then NS evaluates its fitness.

The way a GA works is fairly simple, but it is hard to explain. Basically, you construct a chromosome, a string of numbers that represents the items you want to optimize. In this case a chromosome could be built representing each of the influences (1 being on and 0 being off). So for a s/t, l/t, price and volume string, with price turned off, it would be 1101. The net is then trained and evaluated. If there is a good 'fitness', in this case profit, then the string is saved. In NS you can control the number of chromosomes built and evaluated (from 40 to 120, called the population). The larger the number, the more variations of inputs you try.

Once the best few have been found, they are saved, and the poorly performing chromosomes are replaced with copies of the better performing ones. Some of these copies have a 'crossover' performed. This means that if there are two good solutions that are different (say 0011 and 1100), then parts of them are exchanged (the halves 00|11 and 11|00 swap to become 1111 and 0000, hence 'crossover'). In addition, some of the copies of the better solutions are 'mutated' into slight variations at random (e.g. 0011 becomes 0010 when a random process flips the last digit). This introduces new combinations into the population, which might turn out to be alternative, more fit solutions.

Then the process is repeated, evaluating all these new members. Eventually the genetic process will 'converge', building a population of the most optimal solutions by recombining the most profitable ones and mutating them to introduce new candidates. The solutions found aren't always the absolute best available, but they are usually very close to optimal.
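The steps above can be sketched in a few lines of Python. The fitness function here is a toy stand-in (the real NS evaluation trains a net and measures profit); the "ideal" input combination it rewards is invented purely so the example has something to converge to:

```python
import random

random.seed(42)

GENES = ["s/t", "l/t", "price", "volume"]  # influences; 1 = on, 0 = off
POP_SIZE = 40                              # NS population range is 40..120

# Toy stand-in for the real evaluation (train the net, measure profit).
# Here we pretend the ideal input set is s/t + price + volume (1011).
TARGET = [1, 0, 1, 1]

def fitness(chrom):
    # count how many influence switches match the "ideal" set
    return sum(1 for g, t in zip(chrom, TARGET) if g == t)

def crossover(a, b):
    # exchange the first halves, as in the 0011/1100 -> 1111/0000 example
    cut = len(a) // 2
    return b[:cut] + a[cut:], a[:cut] + b[cut:]

def mutate(chrom, rate=0.1):
    # randomly flip a digit now and then (0011 -> 0010 style)
    return [1 - g if random.random() < rate else g for g in chrom]

# initial population of random chromosomes
pop = [[random.randint(0, 1) for _ in GENES] for _ in range(POP_SIZE)]

for generation in range(20):
    pop.sort(key=fitness, reverse=True)
    best = pop[: POP_SIZE // 2]            # save the fittest half
    children = []
    while len(children) < POP_SIZE - len(best):
        a, b = random.sample(best, 2)      # copy two good performers
        c1, c2 = crossover(a, b)
        children += [mutate(c1), mutate(c2)]
    pop = best + children[: POP_SIZE - len(best)]

winner = max(pop, key=fitness)
print(winner, fitness(winner))
```

This is only the skeleton of the idea; a real run replaces `fitness` with an expensive net-training step, which is exactly why the population bookkeeping is worth the trouble.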

The reason you go through this whole mess is to optimize something that would be too computationally intensive otherwise. For example, imagine you have 64 inputs and you want the neural net to pick a combination of only 5. Trying all the combinations in an iterative process would take forever, but a GA can usually come up with a good solution within a few generations (1 generation = 1 population = 40 chromosomes).
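The arithmetic behind that claim is easy to check. The 25-generation figure below is an assumption chosen just to make the comparison concrete:

```python
from math import comb

# Every way to pick 5 inputs out of 64 for exhaustive search
total = comb(64, 5)
print(total)  # 7,624,512 nets to train and evaluate

# A GA evaluating a population of 40 for, say, 25 generations
ga_evals = 40 * 25
print(ga_evals)

print(total // ga_evals)  # thousands of times fewer evaluations
```

Since each "evaluation" means training a whole net, the difference between millions of runs and a thousand is the whole argument for the GA.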

If someone is really interested, there are a number of FAQs on the 'net that detail this sort of stuff. It can be intimidating at first though! :)

Hope that helped more than it confused.

Optim