To: Tenchusatsu who wrote (83256) 12/16/1999 10:18:00 PM From: Dan3
Re: Especially in servers, the biggest cause of latency is unsustainable throughput....

There are various definitions of throughput that apply to various situations. For servers, it usually means responding to a large number of concurrent requests. The server is usually a very fast machine with a fast connection to the network; its clients are often slower systems, and often have slower connections. Even if the clients are large and well connected, perceived performance is poor if the server "batches" the requests: the server cannot complete a large transfer to one machine without postponing the demands of every other client on the network. In such a configuration, the server would appear unresponsive to all but one machine at a time. So the server OS breaks these big transfers up into many small ones, sharing out access to each client in turn.

Regardless, most very large servers of the kind being described are doing transaction work, not streaming blocks of data, so only a small amount of data is sent to each client. In these configurations, "throughput," at least as I've seen the term used, describes a server sending a packet or a few packets at a time to each of dozens or hundreds of clients. There is no opportunity to blast through large blocks of memory. What matters instead is the ability to send and receive many small blocks of information per second, from many random locations in memory and on disk, to many different machines. A number of small accesses to disk and system RAM are also required to determine or calculate what data should be sent to each client. It is these many small transfers that make up the throughput most often discussed in regard to servers, which is quite different from the definition of throughput used when discussing raw memory performance. I think it is this other type of throughput that the article you referenced was discussing.
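The "break big transfers into many small ones" idea can be sketched in a few lines. This is a minimal illustration, not how any particular server OS actually implements it: the chunk size and the names (CHUNK, send_round_robin) are my own, and a real system would be slicing at the packet or buffer level inside the network stack.

```python
from collections import deque

CHUNK = 4  # bytes per time slice; real systems use packet/buffer-sized chunks

def send_round_robin(pending):
    """pending: dict mapping client -> bytes still to be sent.
    Returns the order in which (client, chunk) pairs go on the wire."""
    queue = deque(pending.items())
    wire = []
    while queue:
        client, data = queue.popleft()
        wire.append((client, data[:CHUNK]))      # send one small chunk
        if len(data) > CHUNK:
            queue.append((client, data[CHUNK:])) # re-queue the remainder
    return wire

order = send_round_robin({"A": b"12345678", "B": b"abcd", "C": b"xy"})
# Every client receives its first chunk before any client receives its
# second, so nobody waits behind one machine's large transfer.
```

The point of the sketch is the fairness property, not the mechanism: client A's large transfer is interleaved with B's and C's, which is exactly why the server appears responsive to all clients at once instead of to one machine at a time.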
The most important factor for this kind of throughput (in my experience, which is by no means universal) is disk subsystem performance, followed by total system RAM, then network performance, then CPU and memory performance. But any of these factors can become the bottleneck under the right (the wrong?) circumstances.

Tom's Hardware has an interesting review at www6.tomshardware.com with the first direct, documented DDR-to-Rambus comparison I've seen. He has an interesting qualifier regarding the alpha release of the DDR chipset, pointing out that the production 820/840 chipsets have actually been running slower than the pre-release parts after it was found necessary to modify them a bit to improve stability. He points out that the same could prove necessary for DDR. It's an interesting article, and DDR, so far at least, appears to be living up to its promise.

Dan