Strategies & Market Trends : NeuroStock


To: CatLady who wrote (708), 2/25/1999 9:37:00 PM
From: Bill Scoggin
 
CatLady,
I think you're right about the date error possibility. From the limited knowledge I have of NN technology, I understand that the historical data, along with any pre-processed or calculated indicators derived from it, is stored in a large array. I visualize this as looking like a spreadsheet, with the most recent data on the top row and as many columns as are needed to hold the historical and indicator data mentioned above.

Each row in the matrix corresponds to one day's data within the training period, whose start and end dates are set by the user.

The most recent rows at the top will be the verify set, also determined by the training dates entered by the user.
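
To make that concrete, here's a rough Python sketch of how I picture the spreadsheet-style matrix and the training/verify split. The column names, the number of days, and the 30-row verify window are all made up for illustration; this is just how I imagine it, not how NeuroStock actually stores things.

import numpy as np

# Hypothetical data matrix: one row per trading day, most recent day on top.
# Columns hold the price history plus any pre-processed indicators.
columns = ["close", "volume", "sma_10", "rsi_14", "macd"]   # illustrative names only
data = np.random.rand(500, len(columns))                    # 500 days of placeholder values

# The user picks the training window; the most recent rows are held out as the verify set.
verify_rows = data[:30]    # top 30 rows = most recent days
train_rows = data[30:]     # everything older is the training set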

There are two weight matrices: one from the input layer to the hidden layer, and one from the hidden layer to the output. There could also be bias values for each hidden and output node.

These weights are typically fixed in number and are set to random values at the start of training. I assume the Forget button simply re-randomizes them.
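
In code, I picture the Forget step as nothing more than re-drawing all of those matrices at random. A minimal sketch, where the layer sizes and the small Gaussian starting values are my own guesses rather than anything from the NeuroStock documentation:

import numpy as np

def forget(n_inputs=5, n_hidden=8, n_outputs=1, seed=None):
    """Re-initialize every weight and bias to small random values."""
    rng = np.random.default_rng(seed)
    w_hidden = rng.normal(0.0, 0.1, size=(n_inputs, n_hidden))    # input -> hidden weights
    w_output = rng.normal(0.0, 0.1, size=(n_hidden, n_outputs))   # hidden -> output weights
    b_hidden = np.zeros(n_hidden)                                 # hidden-layer biases
    b_output = np.zeros(n_outputs)                                # output-layer biases
    return w_hidden, w_output, b_hidden, b_output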

When the net trains, each row of the data set is applied to the input layer, so there must be as many input nodes as there are columns in the data set.

Row by row, each day's values are presented to the network, and the weights are adjusted by calculations that should reduce the total network error and bring it closer to zero on the next iteration.

When a set of weights is found that brings the total error across ALL of the training-set data (the rows of the spreadsheet or matrix) very close to zero, the net is considered trained.
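
Here's a bare-bones version of that training loop as I understand it: plain backpropagation on a one-hidden-layer net, stopping once the total (mean squared) error over all rows is close to zero. The learning rate, tanh activation, and stopping tolerance are my own assumptions, not NeuroStock's actual algorithm.

import numpy as np

def train(X, y, n_hidden=8, lr=0.01, epochs=5000, tol=1e-4, seed=0):
    """Toy backprop trainer: X holds one row per day, y the known correct output for each row."""
    rng = np.random.default_rng(seed)
    w1 = rng.normal(0.0, 0.1, (X.shape[1], n_hidden))   # input -> hidden weights
    b1 = np.zeros(n_hidden)
    w2 = rng.normal(0.0, 0.1, (n_hidden, 1))            # hidden -> output weights
    b2 = np.zeros(1)
    y = y.reshape(-1, 1)
    for _ in range(epochs):
        # Forward pass: every row (day) goes through the net at once.
        h = np.tanh(X @ w1 + b1)
        out = h @ w2 + b2
        err = out - y
        if np.mean(err ** 2) < tol:                     # "trained" once total error is near zero
            break
        # Backward pass: nudge the weights to reduce the error on the next iteration.
        d_out = 2.0 * err / len(X)
        d_h = (d_out @ w2.T) * (1.0 - h ** 2)           # back through the tanh hidden layer
        w2 -= lr * (h.T @ d_out)
        b2 -= lr * d_out.sum(axis=0)
        w1 -= lr * (X.T @ d_h)
        b1 -= lr * d_h.sum(axis=0)
    return w1, b1, w2, b2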

In other words, the weight values just happen to work out so that the calculated outputs very closely match the historically known correct values. Then the network is run with the test-set data in the verify period to see if the calculated values closely match the real-life values. If they do, we assume the network MIGHT be able to model that stock by feeding it current data and seeing what it calculates as an output. By offsetting the actual values of the training set by a few days and learning those patterns, the net can "predict" trends.
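
The "offsetting by a few days" part would just mean pairing each row's inputs with a value from a few days later, so the net learns to map today's indicators to a future price. A sketch of that pairing, assuming the most-recent-first row order from above and an arbitrary 5-day offset:

import numpy as np

def make_offset_targets(close, offset=5):
    """Pair each day's inputs with the close price 'offset' days in its future.
    Rows are most-recent-first, so a row's future value lies 'offset' rows above it."""
    usable = np.arange(offset, len(close))   # rows that actually have a known future value
    targets = close[usable - offset]         # the value 'offset' days after each usable row
    return usable, targets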

Anyway, that's how I understand it at this point. It may not be 100% correct, but most of the limited reading material I can find tends to support this concept.

What I don't understand is exactly what the Scattergraph is actually plotting. I don't know if it is the actual versus the predicted response, or something else. All we as users know (unless I'm in the dark even more than usual) is that when the data points fall on a good, clean diagonal line, the net is supposed to be trained well.
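
If it really is an actual-versus-predicted plot, it would amount to something like the sketch below, where a well-trained net puts every verify-set point near the 45-degree diagonal. That's only my guess at what the Scattergraph shows, not anything from the documentation.

import matplotlib.pyplot as plt

def scattergraph(actual, predicted):
    """Plot the predicted output against the known actual value for each verify-set day."""
    plt.scatter(actual, predicted, s=12)
    lo, hi = min(actual), max(actual)
    plt.plot([lo, hi], [lo, hi], linestyle="--")   # the ideal diagonal: predicted == actual
    plt.xlabel("Actual value")
    plt.ylabel("Predicted value")
    plt.title("Scattergraph (my guess at what it plots)")
    plt.show()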

I also don't see how the scattergraph could become so disorderly simply by adding a few days' data. Perhaps a stock near a new high, or something else the net has not seen before, might be a factor, but that doesn't seem consistent either.

I wish the documentation would explain the details a little more.

Anyway, sorry for the long post...I'm trying to take what I think is correct and use it to justify the results I'm seeing, and there is little consistency.

I hope that this new release will fix some of these problems...if not, maybe I can get a few lucky nets and make Neurostock pay for itself soon. Then, perhaps I'll buy Neuroshell Trader or Profit, etc. I think the technology works well...I'm just starting to have doubts about Neurostock itself.

What's your opinion, as someone with a Computer Science background? Actually, an elective class in Artificial Intelligence several years ago is how I got interested in this stuff in the first place...sometimes I wish I'd skipped that class!!

Bill