Technology Stocks : Intel Corporation (INTC)

To: Mary Cluney who wrote (118989), 11/22/2000 8:03:08 PM
From: Dan3
 
Re: can you describe some of the markets you understand ...

I'm not an authoritative source. I'd defer to the guys at the web sites that actually tested the machines. But I can try to explain why I don't think the P4 bus is an issue for servers near term.

Here is what I do know. Data comes into and goes out of the server through 100-megabit network cards (now sometimes 1-gigabit cards), which usually works out to a peak of 6 to 10 megabytes/sec per port (or ten times that for gigabit cards). Data comes in from and goes out to disk arrays at rates that can be high enough to be limited by the PCI bus if you have multiple multichannel array controllers. That same bus is also hosting the network ports, so it's a zero-sum game between the two (disk and network I/O). PCI is usually 32 bits (4 bytes) wide at 33 MHz, or about 133 megabytes/second. Adaptec has been aggressive in pushing 64-bit (8-byte), 66 MHz PCI cards, and a few motherboards support them. This means that very high-end servers (which won't be using the P4 for another half year to a year) can theoretically handle up to 533 megabytes per second of data throughput.

The PC66 interface used by the Celeron is 64 bits (8 bytes) wide and runs at 66 MHz, giving a theoretical data rate of 533 megabytes per second - just like the very highest-capacity servers. While it's true that these theoretical data rates aren't usually achieved by the frontside memory bus, the same factor means that the demand (from peripherals on the PCI bus) is also usually well below its theoretical rate. A PC133 interface doubles that theoretical maximum, Athlon's 266 MHz DDR front end quadruples it, and the P4's 400 MHz front end is six times it - six times the fastest I/O bus found in current x86 systems (this may change in the future, but not the near future). The 400 MHz P4 bus is a great frontside bus, and the fastest out there right now, but it looks like it's going to be a very long time before that speed can be used for very much.
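
Just to make that arithmetic easy to check, here is a quick back-of-the-envelope sketch in Python (purely illustrative; the figures are the theoretical peaks quoted above, and real-world throughput is lower for every one of them, but the ratios are what matter):

# Theoretical peak = bus width (bytes) x clock (MHz) x transfers per clock, in MB/s.
def peak_mb_per_s(width_bytes, clock_mhz, transfers_per_clock=1):
    return width_bytes * clock_mhz * transfers_per_clock

buses = [
    ("PCI, 32-bit / 33 MHz",         peak_mb_per_s(4, 33)),      # ~133 MB/s
    ("PCI, 64-bit / 66 MHz",         peak_mb_per_s(8, 66)),      # ~533 MB/s
    ("PC66 FSB (Celeron)",           peak_mb_per_s(8, 66)),      # ~533 MB/s
    ("PC133 FSB",                    peak_mb_per_s(8, 133)),     # ~1066 MB/s
    ("Athlon 266 MHz DDR FSB",       peak_mb_per_s(8, 133, 2)),  # ~2133 MB/s
    ("P4 400 MHz FSB (quad-pumped)", peak_mb_per_s(8, 100, 4)),  # ~3200 MB/s
]

for name, mb in buses:
    print("%-30s %5d MB/s" % (name, mb))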

If the CPU itself is busy transferring data out across the PCI bus, then it is limited by that bus. I list these PCI loads, though, because bus-master DMA controllers on the PCI bus can use up some of the available memory bandwidth while the CPU is doing something else, and if those loads were large enough, they could leave the CPU waiting a significant amount of time for access to the memory bus.
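
To put rough numbers on that worry, here is a worst-case sketch (again just illustrative Python, assuming plain 32-bit/33 MHz PCI completely saturated by bus-master DMA):

# Worst case: PCI bus-master DMA running flat out while the CPU does other work.
# What fraction of the frontside/memory bus bandwidth could it consume?
pci_peak = 133                                                # MB/s, 32-bit/33 MHz PCI saturated
fsb_peak = {"PC100": 800, "PC133": 1066, "P4 400 MHz": 3200}  # MB/s, theoretical

for name in ("PC100", "PC133", "P4 400 MHz"):
    share = 100.0 * pci_peak / fsb_peak[name]
    print("%-12s saturated PCI DMA is at most %2.0f%% of the memory bus" % (name, share))

Even on a PC100 system, a fully saturated PCI bus would eat at most a sixth or so of the theoretical memory bandwidth, which is why I don't see it starving the CPU in practice.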

But the fact is that about the only things that can tax the bus of anything faster than a Celeron are operations from main memory to main memory. If the data is already in main memory in its final form, it only needs to be transferred out at a rate equal to that of the fastest peripheral (network or disk), and that rate is bounded by the PCI bus. If the data is not yet in main memory but must instead be calculated, then, with very few exceptions, the processor has to jump around in many regions of memory as it performs calculations, moves data around, and so on. These are the server applications that perform database indexing, run database queries, or provide other calculations through applets or ASP code. They are rarely streaming applications, and this is why I don't think the P4 is a particularly "server oriented" chip, certainly not in the sort of enterprise configurations in which large servers are usually found.

This lack of a CPU-bus constraint in servers, by the way, may be why Sun continues to do well in these markets. They have long fielded 64-bit "compact PCI" bus machines (why is 32-bit called "regular" and 64-bit "compact"? shouldn't it be the other way around?), and that I/O bus is the real constraint for large server systems. Anything over a PC133, or even PC100, frontside bus is overkill and will not have a major effect on overall performance for these machines.
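
As a toy illustration of that streaming-versus-jumping-around distinction, here is a small Python sketch (the sizes and timings are made up and the absolute numbers mean nothing; only the contrast between access patterns is the point):

import random, time

# Same data and the same amount of work; the only difference is the order in
# which memory is touched: sequential (streaming-style) versus scattered
# (index lookups, pointer chasing), where latency rather than peak bus
# bandwidth dominates.
N = 2000000
data = list(range(N))
seq_order = list(range(N))
rand_order = random.sample(range(N), N)

t0 = time.time()
total_seq = sum(data[i] for i in seq_order)
t1 = time.time()
total_rand = sum(data[i] for i in rand_order)
t2 = time.time()

print("sequential pass: %.2f s" % (t1 - t0))
print("scattered pass:  %.2f s  (same data, same work, worse locality)" % (t2 - t1))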

There has been good analysis of the P4 for applications such as MPEG encoding (not something midrange servers do) and voice recognition (also not something midrange servers do) at Ace's Hardware, Anandtech, Tom's Hardware, and elsewhere. Those guys have all done a much better job than I could ever do of explaining those issues.

Most midrange servers that I've seen look things up in a database, or simply as files on a hard drive, and send them out, or accept changes to some of the data in the form of transactions or just plain edits. This is mostly a matter of "streaming" data, but the limits on the rate at which that data can be streamed are set not by the frontside bus of the CPU (or CPUs) but by the I/O bus.

I hope some of this babbling made sense; hopefully Tenchusatsu (who does chipset design, I believe) will stop by and correct any errors I've made.

Regards,

Dan