Technology Stocks : Intel Corporation (INTC)


To: Joe NYC who wrote (151390), 12/6/2001 6:33:31 PM
From: Tenchusatsu
 
Joe, <As far as the latency of the L2 being variable based on clock speed, I don't think that is the case. Ever since L2 caches were placed on the CPU die, the latency has been fixed in terms of processor cycles.>

Uh, no. I can almost guarantee that a 1.4 GHz P4 has a shorter L2 latency (in terms of the number of clock cycles) than a 2.0 GHz P4. The bandwidth per clock, however, is constant.

Only the L1 caches have fixed latency.
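A quick back-of-the-envelope sketch of what I mean (my numbers, all made up, not measured P4 values): if the L2 array's access time is roughly fixed in nanoseconds, then the latency counted in core clocks has to grow with frequency, since GHz is just cycles per nanosecond.

#include <stdio.h>

int main(void) {
    /* Hypothetical fixed L2 access time in nanoseconds */
    const double l2_ns = 5.0;
    /* Two hypothetical P4 speed bins */
    const double clock_ghz[] = { 1.4, 2.0 };

    for (int i = 0; i < 2; i++) {
        /* ns * (cycles/ns) = cycles, since 1 GHz = 1 cycle per ns */
        double cycles = l2_ns * clock_ghz[i];
        printf("%.1f GHz: %.1f ns L2 -> ~%.0f core cycles\n",
               clock_ghz[i], l2_ns, cycles);
    }
    return 0;
}

With those made-up numbers, the same 5 ns array costs ~7 cycles at 1.4 GHz but ~10 cycles at 2.0 GHz. That's the effect I'm describing.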

We're going off on a tangent here, so let me just say that most of the P4 really does run at its rated speed, including the x86 instruction decoders. Kap and the other AMDroids are coming up with bogus experiments in a futile attempt to accuse Intel of fraud. That should be plain and obvious to non-Droids.

Tenchusatsu

EDIT: Even if the decoders ran at half speed, I don't think it would make a difference in performance as long as they did double the work every other main clock. In theory, you should be able to fetch and decode more instructions per clock than you can execute. So if you're going to architect a split-clock processor, it makes sense to speed up just the execution side of the pipeline rather than the whole processor.
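To make the half-speed-decoder point concrete (a sketch with made-up widths and clocks, not the P4's actual decode width): average decode bandwidth is width times clock, so halving the decoder clock while doubling the instructions handled per decoder cycle sustains the same rate per core clock.

#include <stdio.h>

int main(void) {
    const double core_ghz = 2.0;  /* hypothetical core clock */

    /* full-speed decoder: 1 instruction per core clock */
    double full_rate = 1.0 * core_ghz;

    /* half-speed decoder: 2 instructions per decoder clock,
       where the decoder clock is half the core clock */
    double half_rate = 2.0 * (core_ghz / 2.0);

    printf("full speed, single width: %.1f Ginstr/s\n", full_rate);
    printf("half speed, double width: %.1f Ginstr/s\n", half_rate);
    return 0;
}

Both come out to the same instructions per second, which is why a half-speed front end doing double-width work wouldn't starve the execution core.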