Technology Stocks : All About Sun Microsystems

To: rudedog who wrote (48967), 5/13/2002 5:59:32 PM
From: Ali Chen   of 64865
 
"So even if the raw cost to go to RAM is only twice as long as for flat memory"

??? There is no difference between the models - the
address calculations are done in flight in the hardware,
and both models use the same hardware. The total TSS
overhead should depend on the granularity of the time
sharing, not on the memory model.
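
For what it's worth, here is a minimal C sketch of what I
mean (my own illustration, not Intel's actual datapath; the
descriptor values are made up): the segmentation unit adds
the segment base to the offset on every access either way,
and a "flat" model simply programs every base to 0 and the
limit to 4 GB, so the per-access hardware cost is the same.

/* Minimal sketch (not the actual silicon) of how the
 * segmentation unit forms a linear address.  The same
 * base+offset addition happens on every access in both
 * models; the flat model just uses base 0, limit 4 GB. */
#include <stdint.h>
#include <stdio.h>

struct seg_desc {
    uint32_t base;   /* segment base address  */
    uint32_t limit;  /* segment limit (bytes) */
};

/* The "in-flight" address calculation: linear = base + offset. */
static uint32_t linear_address(const struct seg_desc *seg, uint32_t offset)
{
    /* The limit check and the add are wired into the address
     * path; they cost the same whether the base is 0 or not. */
    if (offset > seg->limit)
        fprintf(stderr, "GP fault: offset 0x%x beyond limit\n", offset);
    return seg->base + offset;
}

int main(void)
{
    struct seg_desc segmented = { 0x00400000u, 0x000FFFFFu }; /* made-up descriptor */
    struct seg_desc flat      = { 0x00000000u, 0xFFFFFFFFu }; /* flat model: base 0 */

    printf("segmented: 0x%08x\n", linear_address(&segmented, 0x1234));
    printf("flat:      0x%08x\n", linear_address(&flat,      0x1234));
    return 0;
}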

"..because of the size of the context, often flushes L2 as well, which increases the memory access times by a factor of 16 or more."

If the size of the context is bigger than L2, the off-chip
memory traffic will bear the same penalty regardless of the
memory model, flat or segmented.
But if some recent CS graduate decided to flush the entire
2 MB of the Xeon's L2 using WBINVD on every TSS switch, you
would of course be in big trouble (then again, there might
be other "features" of the Intel architecture that would
require that atrocity; I don't know).
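
To put a rough number on "big trouble", here is a
back-of-envelope C sketch. The line size, memory latency
and time slice are assumed values for illustration, not
measured Xeon figures; the point is only that writing back
and re-warming 2 MB around every switch could eat most of
a typical time slice.

/* Back-of-envelope cost of flushing a 2 MB L2 on every
 * task switch.  All timing parameters below are assumed,
 * illustrative numbers, not measured Xeon figures. */
#include <stdio.h>

int main(void)
{
    const double l2_bytes    = 2.0 * 1024 * 1024; /* 2 MB L2                         */
    const double line_bytes  = 64.0;              /* assumed cache line size         */
    const double ns_per_line = 150.0;             /* assumed DRAM round trip per line */
    const double quantum_ms  = 10.0;              /* assumed 10 ms time slice        */

    double lines     = l2_bytes / line_bytes;       /* ~32768 lines           */
    double flush_ms  = lines * ns_per_line * 1e-6;  /* write dirty lines back */
    double refill_ms = flush_ms;                    /* then re-warm the cache */

    printf("lines flushed per switch : %.0f\n", lines);
    printf("flush + refill per switch: ~%.1f ms\n", flush_ms + refill_ms);
    printf("fraction of a %.0f ms slice: ~%.0f%%\n",
           quantum_ms, 100.0 * (flush_ms + refill_ms) / quantum_ms);
    return 0;
}

With those assumptions it works out to roughly 10 ms of
flush-plus-refill per switch - on the order of an entire
time slice - which is why doing it on every TSS switch
would be an atrocity.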

Do not get me wrong - I am not a big fan of Intel's
segmented architecture; in fact, I hate it. However, what
I am trying to say is that it is highly irresponsible to
expect any significant advantage just from switching to a
flat memory model on essentially the same architecture.

- Ali