To: Jay Hartzok who wrote (703) 2/25/1999 4:00:00 AM From: Bill Scoggin
Jay, here is some more food for thought and questions... as if it's not late enough tonight already...

I read the posts on the Yahoo site. It appears some of the people there have the perception that a weight exists for every day's input data. That is simply not the case - that is not how backpropagation works. There are only the weight matrices that are set up and randomized at the start of training. There could, for example, be thousands of days of input data but only a handful of weights, depending on the number of neurons and the other check boxes selected for filters.

As a way of verifying this, I tried the following: I set up a network that had only 1 stock (the target) as an input. I selected 1 neuron and saved this to a file. When I hit Train, it automatically changed the 1 neuron to 2 neurons for some reason, but nonetheless it behaved as I expected. With a 1-input network with 2 hidden neurons and 2 output nodes (which can be seen in the .neu files as a parameter), you would have the following weights established:

Input Layer Weight 1 - goes to Neuron 1 from Input 1 (W1-1)
Input Layer Weight 2 - goes to Neuron 2 from Input 1 (W1-2)
Output Layer Weight 1 - goes from Neuron 1 to Output 1 (OW1-1)
Output Layer Weight 2 - goes from Neuron 2 to Output 1 (OW2-1)
Output Layer Weight 3 - goes from Neuron 1 to Output 2 (OW1-2)
Output Layer Weight 4 - goes from Neuron 2 to Output 2 (OW2-2)

There are also usually bias terms associated with each processing point, so there should be B1, B2, B3, and B4 terms in the starting .neu file. Well, when I looked at the .neu file I'd created, sure enough, there were 10 numbers stored at the end of the file (6 weights plus 4 biases).

As a check on this, I then changed the network so that it had 1 related stock with only Short term selected (i.e., just one more input), which I guessed would cause a second input node and two more input-layer weights (W2-1, W2-2) to be established. That should bring the total to 12 numbers stored at the bottom of the .neu file. Which it did. So at least it looks like the 3-layer network structure is established in a manner consistent with the textbook examples of backpropagation.

As another check, I wrote a few of these numbers down, then went back to Neurostock and hit Forget - which should simply re-randomize them. When I went back to the newly saved .neu file, the numbers had changed slightly. This indicates to me that the Forget button does indeed work. One reason it may appear not to is that the random numbers are so small to begin with that a slight change in them doesn't really mean much to the overall net output during initial training.

Anyway, as far as the other problems we've discussed, I've got no clue. But the notion that the sliding window causes certain coefficients to suddenly see the wrong values applied to them just doesn't hold up: the same data (i.e., closing price, etc.) is fed to those weights in exactly the same way every iteration.

Actually, I don't know that there is even a point to any of this, but it does make me feel somewhat better knowing that the architecture of the networks at least seems to match what I'm reading in books about the back-prop training algorithm - for what it's worth.

Bill
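P.S. In case anyone wants to double-check the arithmetic above, here is a minimal Python sketch of the weight count for a standard fully connected 3-layer back-prop net of the kind described. This is not Neurostock's actual code (I have no idea what it does internally) - just the textbook bookkeeping, assuming one weight per input-to-neuron and neuron-to-output connection plus one bias per hidden neuron and output node:

```python
import numpy as np

def param_count(n_inputs, n_hidden, n_outputs):
    """Count the stored numbers in a fully connected 3-layer back-prop net."""
    input_to_hidden = n_inputs * n_hidden    # W1-1, W1-2, (W2-1, W2-2), ...
    hidden_to_output = n_hidden * n_outputs  # OW1-1, OW2-1, OW1-2, OW2-2
    biases = n_hidden + n_outputs            # B1, B2, B3, B4
    return input_to_hidden + hidden_to_output + biases

# 1 input, 2 hidden neurons, 2 outputs -> 2 + 4 + 4 = 10 numbers
print(param_count(1, 2, 2))  # 10

# Adding one related input (2 inputs total) adds W2-1 and W2-2 -> 12 numbers
print(param_count(2, 2, 2))  # 12

# "Forget" presumably just re-randomizes that same small set of numbers,
# something along these lines (range is a guess, not Neurostock's):
rng = np.random.default_rng()
weights = rng.uniform(-0.1, 0.1, size=param_count(2, 2, 2))
```

The counts come out to 10 and 12 regardless of how many days of data you feed in, which is the whole point: the data volume never changes the number of weights.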