To: Frodo Baxter who wrote (136) 11/13/1998 1:41:00 AM From: Carl R.
Lawrence, I hope you won't mind me disagreeing, but you wrote:

"They never allow bandwidth to become a bottleneck."

At least they try not to. But they can't prevent it from being a bottleneck, so they are constantly trying to speed it up so that it doesn't have a significant impact. Otherwise we would still be using an ISA bus, but we aren't. Similarly, we went from FPM to EDO to SDRAM, and now we are headed to RDRAM. Why? Because we want better overall system performance by eliminating or reducing bottlenecks.

"As I've mentioned before, the UltraDMA/33 spec is over twice as fast as the theoretical maximum internal data transfer rate of current hard drives. There is no bottleneck."

No argument here, because the DMA interface runs at twice the drive's speed, unlike the RAM example below, where the RAM runs at 1/5 the CPU speed.

"System bandwidth operates on the same principle. Your system (it doesn't matter what particular system you own) has enough internal bandwidth such that the processor is not crippled. For example, your processor may run at 333 MHz, 5 times as much as the 66 MHz memory bus. This, however, is not prima facie evidence of insufficient bandwidth, because this design is internally arbitrated by L1 and L2 caches."

The L1 cache operates at CPU speed and, if I recall correctly, typically gets about 80% cache hits, thus eliminating the memory-bus bottleneck 80% of the time - but it is still a bottleneck the rest of the time. L2 may run at half the CPU speed and raises the overall hit rate to 90% or above, but again, you still hit the bottleneck sometimes. Thus roughly 80% of the time you run at full speed, 10% of the time at half speed, and 10% of the time at 1/5 speed, though each application is different. If you never hit the memory bottleneck, INTC wouldn't care about RDRAM. Of course some applications have tighter loops than others and are less bottlenecked than others, but some big programs are definitely constrained by the memory bus. I recall reading about a simulation program that had few loops and was larger than L2, and was thus constrained almost exclusively by memory access speed.

"Multitasking is an irrelevant issue. If you have ten tasks running instead of one, you don't get the same performance ten times over. Instead, the processor time is sliced up ten ways. Therefore, the bandwidth math remains the same."

Multitasking does affect bandwidth. If you are multitasking, you keep getting your L1 and L2 caches flushed for some other job, meaning you have to go to memory more often. Worse, if you have so many tasks running that you are using virtual memory (i.e. swapping to disk), you have a serious bandwidth constraint. But multitasking on a good, interrupt-driven operating system also reduces the penalty of limited bandwidth: whenever a job has to wait due to limited bandwidth, it can hand control to another job that isn't limited, giving you better overall use of system resources. Thus if a program requests a disk sector read, the OS can pass control to another job that is doing screen updates or internal calculations. Oops. We're talking Microsoft here. Never mind. <VBG>

"In other words, if you upgraded your system with Rambus memory and an UltraDMA/66 interface without the attendant upgrade in processor and hard drive, your system will not be demonstrably faster, not even if you multitask."

Again, with faster memory and the same CPU the system will be faster, no question about it. Not noticeably faster except for certain unusual applications, but faster nevertheless.
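To put some rough numbers on that cache picture (the hit rates and the 1x/2x/5x access costs below are just ballpark guesses taken from the discussion above, not measurements), here is the arithmetic spelled out as a little Python sketch:

    # Crude model of the cache hierarchy discussed above.
    # All hit rates and costs are illustrative guesses, not measurements.
    l1_hit, l2_hit, mem_hit = 0.80, 0.10, 0.10   # fractions of accesses
    cost_l1, cost_l2, cost_mem = 1.0, 2.0, 5.0   # relative cost in CPU clocks

    avg = l1_hit * cost_l1 + l2_hit * cost_l2 + mem_hit * cost_mem
    print("average cost per access: %.2f clocks" % avg)          # 1.50

    # Suppose a faster memory (RDRAM-style) halved the main-memory penalty.
    avg_fast = l1_hit * cost_l1 + l2_hit * cost_l2 + mem_hit * (cost_mem / 2)
    print("with faster main memory: %.2f clocks" % avg_fast)     # 1.25

    # Memory accesses get about 1.50/1.25 = 1.2x faster, but they are only
    # one slice of total run time (disk, video and pure computation are
    # untouched), so the visible whole-system gain is far smaller.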
I would assume that the difference would be about 2-5% or so. Try turning off the L1 and L2 caches, though, and the difference would be much larger. The fact of the matter is that if your CPU is twice as fast, overall system performance is not twice as fast, because the CPU is not the only limiting factor. Memory is a minor limitation, but much time is spent waiting for the disk drive or doing screen updates, neither of which scales linearly with CPU speed. And the incontrovertible fact that a faster CPU does not translate linearly into overall system performance was really Kevin's main point, I think. But of course, what does any of this have to do with SEG? <VBG>

Carl
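P.S. Kevin's point can be put in the same back-of-the-envelope form. The time split below is invented purely for illustration (and in reality screen updates do benefit somewhat from a faster CPU), but it shows why doubling the CPU doesn't come close to doubling the system:

    # Invented split of where wall-clock time goes on a desktop system.
    cpu_time, disk_time, video_time = 0.50, 0.30, 0.20

    before = cpu_time + disk_time + video_time        # 1.00
    after  = cpu_time / 2 + disk_time + video_time    # 0.75, CPU twice as fast

    print("overall speedup from a 2x faster CPU: %.2fx" % (before / after))
    # prints about 1.33x, not 2x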