If you have a display card that deviates far from this, give me a link. Until then, you haven't done anything as far as finding a counterexample to this rule.
I already did: the PS2 example, which you didn't respond to. They moved part of the graphics memory into a 2560-bit-wide eDRAM because it would not have been cost-effective to build a 48 GB/sec memory architecture out of conventional DRAM. If your rule were right, then .48 GB/sec would have been enough for them, right?
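For what it's worth, the 48 GB/sec figure falls straight out of the bus width and clock. A quick sanity check, assuming the commonly cited 150 MHz clock for the Graphics Synthesizer's eDRAM:

```python
# Back-of-the-envelope check of the PS2 Graphics Synthesizer eDRAM bandwidth.
# Assumption: the eDRAM is clocked at 150 MHz (the commonly cited GS clock).
bus_width_bits = 2560
clock_hz = 150_000_000

bytes_per_cycle = bus_width_bits // 8              # 320 bytes per clock
bandwidth_bytes_per_sec = bytes_per_cycle * clock_hz

print(bandwidth_bytes_per_sec / 1e9)               # -> 48.0 (GB/sec)
```

No conventional DRAM bus of the era came anywhere near a 2560-bit width, which is exactly why they put the memory on-die.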
when I did calculate the ratios for those products
Uh, Carl, that doesn't prove anything. As I already said, they only started hitting the limit with the TNT2, so a TNT-to-TNT2 comparison would fit your rule of thumb nicely.
But I appreciate your post, as I now understand your point better. Historically, it seems to be accurate and a nice rule. However, I still submit that going forward it breaks down, as the PS2 has already shown and the next generation of graphics accelerators will show.
(2) Rambus provided the highest bandwidth/device ratio, and, consequently, was a natural contender for those design wins.
(3) Despite this advantage, Rambus lost that market due to other costs of the technology.
Agreed. Rambus was too expensive to fit into the commodity graphics chip market. This is a market where chips with more transistors than a P3 sell for $35, so cost is a huge consideration.
Plaz