To: dumbmoney who wrote (29717 ) 9/16/1999 8:50:00 PM From: Tenchusatsu Read Replies (1) | Respond to of 93625
Dumbmoney, <First, VC is fully compatible with autoprecharge. Second, the VC hit rate is higher. So the benefit may be small, but it's unlikely to be zero.>

OK, you're right. A row in each row cache can stay around for a while, in case another access hits that row. When I say it's useless, I don't mean the benefit is absolutely zero; I mean the benefit is probably too small to make a difference in a server, even if the hit rate were higher.

The problem with implementing VC technology in RDRAM still remains, however. With SDRAM, each bank spreads across all the chips on a module, so each row in a bank also spreads across all the chips. So does each row cache, which is why each row cache can hold any row from any bank (i.e., it's fully associative). But with RDRAM, each bank resides completely within a single chip, so each row also resides completely within one chip. If row caches are implemented, each one must reside completely within a chip, and therefore it can only hold rows from that one chip. Sure, the row caches can still help even when confined to a chip, but they won't be fully associative across multiple chips, which is a shame because there could be up to 32 chips on one RDRAM channel.

This problem will still remain if Rambus comes up with a fewer-bank design, unless that design is meant for systems that will only use one or two (high-density) RDRAM chips on the channel. (Maybe the engineers at Rambus are thinking about this sort of stuff. If not, they really ought to.)

By the way, I've got a question. Would looking up the address tags on the row caches take an extra clock cycle? If so, and you don't get very many hits, your average latency may actually worsen because of that wasted cycle on every miss. The idea, of course, is to make sure the hit rate is high enough to more than overcome the one-cycle miss penalty.

Tenchusatsu
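To make the associativity argument concrete, here's a back-of-the-envelope sketch. The chip, bank, and row counts below are arbitrary example values (not real part specs); the point is just the difference in how many rows a single row cache can reach in each organization:

```python
# Illustrates the associativity argument above: with SDRAM, every row
# spans all the chips on a module, so a module-level row cache can hold
# a row from any bank.  With RDRAM, a row cache sits inside one chip
# and can only hold rows from that chip.
# NOTE: these counts are made-up examples, not real device parameters.

CHIPS_PER_CHANNEL = 32   # "up to 32 chips on one RDRAM channel"
BANKS_PER_CHIP    = 16
ROWS_PER_BANK     = 512

total_rows = CHIPS_PER_CHANNEL * BANKS_PER_CHIP * ROWS_PER_BANK

# SDRAM-style: rows span the module, so one row cache is fully
# associative over every row on the channel.
sdram_reachable = total_rows

# RDRAM-style: a per-chip row cache only sees that chip's rows.
rdram_reachable = BANKS_PER_CHIP * ROWS_PER_BANK

print(f"rows reachable by a module-wide (SDRAM-style) row cache: {sdram_reachable}")
print(f"rows reachable by a per-chip (RDRAM-style) row cache:    {rdram_reachable}")
print(f"coverage ratio: {sdram_reachable // rdram_reachable}x")
```

With a fully populated 32-chip channel, each per-chip cache covers only 1/32 of the rows a module-wide cache would, which is the "shame" described above.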
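The tag-lookup question at the end can be sketched numerically. All cycle counts here are illustrative placeholders, not real SDRAM or RDRAM timings; the model just shows where the break-even hit rate comes from:

```python
# Break-even model for a tagged row cache: assume a miss costs one
# extra cycle for the tag lookup, so the cache only pays off once the
# hit rate clears a threshold.  Cycle counts are hypothetical.

ROW_ACCESS = 8   # cycles for a normal (uncached) row access
HIT_COST   = 2   # cycles to serve a request out of the row cache
TAG_LOOKUP = 1   # extra cycle wasted checking the tags on a miss

def avg_latency(hit_rate: float) -> float:
    """Average latency with the row cache, as a function of hit rate."""
    miss_rate = 1.0 - hit_rate
    return hit_rate * HIT_COST + miss_rate * (ROW_ACCESS + TAG_LOOKUP)

def break_even_hit_rate() -> float:
    """Hit rate at which the cache exactly matches the uncached latency.

    Solve  h*HIT + (1 - h)*(ROW + TAG) = ROW  for h:
        h = TAG / (ROW + TAG - HIT)
    """
    return TAG_LOOKUP / (ROW_ACCESS + TAG_LOOKUP - HIT_COST)

if __name__ == "__main__":
    h0 = break_even_hit_rate()
    print(f"break-even hit rate: {h0:.3f}")
    for h in (0.0, h0, 0.5):
        print(f"hit rate {h:.3f} -> avg latency {avg_latency(h):.2f} cycles")
```

With these placeholder numbers the break-even hit rate is 1/7 (about 14%); below that, the tag-lookup cycle on every miss makes average latency worse than having no row cache at all, which is exactly the concern raised above.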