Strategies & Market Trends : TA-Quotes Plus


To: Bob Jagow who wrote (4406) 6/14/1998 6:36:00 PM
From: ftth
 
[off topic](edited) O.K., that changes things a little, but it would seem that if you knew how much data was on drive 1 vs. where the drive 2 data starts, you could construct some scans that MAY be faster because of the 2 drives (no one has yet convinced me of this, but let's assume it's the case for the sake of argument), and others that wouldn't be. If most of the data for the scan comes from drive 2, then for all practical purposes that's the only drive being accessed once the scan executable has been loaded.

Also, it seems we'd be reading sequential records from drive 1 until the recent data was used up, then sequential records from drive 2: no ping-ponging, just one switch in the middle, something like the sketch below.
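
Rough sketch of that one-switch read order (file names and the fixed record size are made up for illustration; this isn't how Quotes Plus actually lays out its files):

#include <stdio.h>

/* Read one file front to back, record by record, as the scan would. */
static void read_all(const char *path)
{
    char rec[128];                       /* assumed fixed-size record */
    FILE *fp = fopen(path, "rb");
    if (!fp) { perror(path); return; }
    while (fread(rec, sizeof rec, 1, fp) == 1)
        ;                                /* feed each record to the scan */
    fclose(fp);
}

int main(void)
{
    read_all("d:\\qp\\recent.dat");      /* drive 1: recent data, exhausted first */
    read_all("e:\\qp\\history.dat");     /* drive 2: older data -- the one switch */
    return 0;
}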

But back to one of my original points: if the data were loaded into block A of RAM while we used the data already in block B to feed the execution of the scan, then ping-ponged back and forth, toggling A and B between load and execute, we wouldn't even be having this 1- vs. 2-drive discussion, because that would blow any drive configuration away. All I originally wanted to know is why they weren't doing this, and whether they still could.
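
For what it's worth, here's a rough C sketch of that A/B arrangement, assuming POSIX threads, a made-up quotes.dat file, a 256 KB block size, and a stand-in do_scan() routine; the real Quotes Plus internals are obviously not this, it just shows the toggle:

#include <pthread.h>
#include <stdio.h>

#define BLOCK_SIZE (256 * 1024)

static char bufs[2][BLOCK_SIZE];   /* block A and block B */
static size_t lens[2];
static FILE *fp;

/* Reader thread: fill the requested buffer with the next chunk from disk. */
static void *fill(void *arg)
{
    int which = *(int *)arg;
    lens[which] = fread(bufs[which], 1, BLOCK_SIZE, fp);
    return NULL;
}

/* Stand-in for the scan pass over one block of quote records. */
static void do_scan(const char *buf, size_t len)
{
    (void)buf;
    printf("scanned %zu bytes\n", len);
}

int main(void)
{
    fp = fopen("quotes.dat", "rb");      /* hypothetical data file */
    if (!fp) { perror("quotes.dat"); return 1; }

    int cur = 0, next = 1;
    lens[cur] = fread(bufs[cur], 1, BLOCK_SIZE, fp);   /* prime block A */

    while (lens[cur] > 0) {
        pthread_t t;
        pthread_create(&t, NULL, fill, &next);   /* load the other block... */
        do_scan(bufs[cur], lens[cur]);            /* ...while executing from this one */
        pthread_join(t, NULL);

        int tmp = cur; cur = next; next = tmp;    /* toggle A and B */
    }

    fclose(fp);
    return 0;
}

With that going, the scan is chewing on one block while the drive (whichever drive) is delivering the next, so the drive layout mostly stops mattering as long as the disk can keep ahead of the scan.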

dh