To: rkral who wrote (48966) 5/13/2002 12:45:56 PM
From: rudedog

Ron - the disks I was referring to are in an array. The array controller considers a "block" to be a set of physical disk blocks drawn from more than one actual disk. The page size used by the application generally determines the best stripe size for optimal performance. Win2K, for example, has a 2K page, so four disks with 512-byte blocks can all contribute to a single page fetch. On a single disk that would probably still be a single seek, but it would pull more blocks.

But array controllers typically pull more than a single stripe - as long as they are moving the head, they pull the whole track, sometimes several tracks. The next requests are then filled from the array controller's RAM and don't require a disk operation at all. Thus a single "block" from the array might be 8K, contain four LRU or OS pages, and draw from 16 disks simultaneously to get the data.

Asynchronous disk drivers designed to work with array controllers take advantage of this by aggregating requests to build up larger blocks. The array controller can then pump the whole block to the application in a single DMA transaction. Drivers that use the VI architecture standard go a step further and set up the whole programmed transfer with a single driver call. The data is then delivered directly to the application using memory semantics, with no additional interrupt.

So the answer to your question is really that the "block" an application or driver sees is not a physical disk block. The array controller groups the disks so that you get the effective throughput of all the disks and, roughly, the latency of a single disk divided by the number of spindles. As Win has pointed out, it is not a perfect mathematical relation in practice, because the physics of the drives and the locality of the data can still cause problems. But it is a pretty good working rule of thumb.
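
To make the striping arithmetic concrete, here is a minimal sketch of the RAID-0-style mapping I'm describing: 512-byte blocks laid out round-robin across four spindles, so a 2K page read touches one block on each disk. The block size, disk count, and function names are just illustrative, not any particular controller's interface.

# Minimal sketch of how a striped array maps a logical block onto physical
# disks -- illustrative RAID-0-style math only, not any real controller's API.

BLOCK_SIZE = 512      # bytes per physical disk block (assumed for illustration)
N_DISKS = 4           # spindles in the stripe set (assumed for illustration)

def map_logical_block(lba):
    """Return (disk_index, physical_block) for a logical block address."""
    disk = lba % N_DISKS          # round-robin across spindles
    pblock = lba // N_DISKS       # block offset on that spindle
    return disk, pblock

def blocks_for_page(page_addr, page_size=2048):
    """Which (disk, block) pairs serve one page-sized read."""
    first_lba = page_addr * page_size // BLOCK_SIZE
    count = page_size // BLOCK_SIZE
    return [map_logical_block(first_lba + i) for i in range(count)]

# A 2K page read touches 4 logical blocks, one on each of the 4 disks,
# so all four spindles can contribute to a single page fetch in parallel.
print(blocks_for_page(10))   # -> [(0, 10), (1, 10), (2, 10), (3, 10)]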
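
And here is a toy version of the request-aggregation idea: the driver coalesces adjacent pending reads into one larger transfer, which the controller can then serve as a single "block" in one DMA. It is only a sketch of the coalescing logic, not a real driver and not the VI architecture interface.

# Toy sketch of request aggregation: merge adjacent pending reads into one
# larger transfer. Purely illustrative -- not real driver code.

def coalesce(requests):
    """requests: list of (offset, length) tuples in bytes, possibly unsorted.
    Returns a shorter list in which adjacent or overlapping ranges are merged."""
    merged = []
    for off, length in sorted(requests):
        if merged and off <= merged[-1][0] + merged[-1][1]:
            # extends (or overlaps) the previous range -- grow it
            prev_off, prev_len = merged[-1]
            merged[-1] = (prev_off, max(prev_len, off + length - prev_off))
        else:
            merged.append((off, length))
    return merged

# Four contiguous 2K page requests become one 8K transfer,
# which the array can serve as a single "block" from its cache.
pending = [(0, 2048), (2048, 2048), (4096, 2048), (6144, 2048)]
print(coalesce(pending))   # -> [(0, 8192)]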
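
Finally, the rule of thumb itself as back-of-the-envelope arithmetic. The per-disk numbers below are made up for illustration; the point is just how the aggregate throughput and the rough latency figure scale with spindle count, with the same caveat as above about real drives.

# Back-of-the-envelope version of the rule of thumb: aggregate throughput
# scales with spindle count, and effective latency is treated (roughly) as
# single-disk latency divided by the number of spindles. Example values only.

def array_estimate(n_spindles, disk_mb_per_s, disk_latency_ms):
    return {
        "throughput_MB_s": n_spindles * disk_mb_per_s,
        "latency_ms": disk_latency_ms / n_spindles,   # rule of thumb, not physics
    }

print(array_estimate(n_spindles=16, disk_mb_per_s=40, disk_latency_ms=8.0))
# -> {'throughput_MB_s': 640, 'latency_ms': 0.5}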