To: Tenchusatsu who wrote (128527)2/28/2001 2:08:42 PM
From: Rob Young  Read Replies (1) of 186894
 
Tench,

You've got the hops down, and using the slides you can
determine latency. It's very good, as you can assess.
i.e. if each hop takes 13 cycles (see slide 13), 14 hops is a very low number on the wall clock (compare the 400 ns latency of a true 64-CPU UMA machine, Sun's UE10000). AND the best thing is that's the *worst* case. When you do *averages* you see amazing numbers.
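The arithmetic above can be sketched out. A minimal back-of-envelope calculation, assuming a 2D torus layout and a 1 GHz clock (the 13-cycle hop cost is from the slides; the clock frequency and torus topology are my assumptions, not from the post):

```python
# Back-of-envelope interconnect latency for a 2D torus.
# 13 cycles/hop is from slide 13; the 1 GHz clock and the
# 16x16 torus for 256 CPUs are assumed figures.

CYCLES_PER_HOP = 13   # per-hop router latency (slide 13)
CLOCK_GHZ = 1.0       # assumed CPU clock

def hop_latency_ns(hops):
    """Wall-clock latency in ns for a given hop count."""
    return hops * CYCLES_PER_HOP / CLOCK_GHZ

def avg_hops_torus(k):
    """Average hop count between two random nodes of a k x k torus
    (shortest wraparound distance, averaged per dimension)."""
    per_dim = sum(min(d, k - d) for d in range(k)) / k
    return 2 * per_dim

worst = hop_latency_ns(14)                 # 14-hop worst case cited above
avg = hop_latency_ns(avg_hops_torus(16))   # 256 CPUs as a 16x16 torus
print(f"worst case: {worst:.0f} ns, average: {avg:.0f} ns")
```

Under these assumptions the worst case comes out to 182 ns and the average to roughly 104 ns, both comfortably under the 400 ns UMA figure, which is the point being made.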

Regarding infrastructure and cost, the key to keeping
that down is leaving out external cache (L2 or L3,
depending on whether L2 is on-chip, etc.). You can see
that L2 is shared CPU<->CPU via the router. Secondly, with
on-chip memory controllers and routers, your supporting-chipset
costs are much lower (as we noted earlier). Finally,
256 CPUs in a single box will be cheaper
than 64 CPUs in 4 boxes, plus you have a terabyte of memory
to play with. Note: cheaper is a relative term, hope
we can agree on that :-)

"[Won't happen] Period. It's a wild fantasy to think that 256 CPUs can scale UMA-style"

But then you go about the business of determining hops,
and from the technical data we know what that translates
to in wall-clock time... so what was so bad about it again?

Surely Compaq is winning all but one supercomputer bid
for some strange reason?

Rob