Technology Stocks : Intel Corporation (INTC)


To: Elmer who wrote (149136), 11/22/2001 12:43:56 AM
From: Paul Engel
 
Servers gas up with 4-Gbyte/s PCI-X 2.0 spec

By Rick Merritt, EE Times
Nov 21, 2001 (11:48 AM)
URL: eetimes.com

SAN JOSE, Calif. — The Peripheral Component Interconnect interface is about to get a hefty boost via PCI-X version 2.0. Some companies say they will support the revision with products by summer, on the heels of the arrival of first-generation PCI-X.

The new spec will use double- and quad-data-rate techniques to forge links with 2 to 4 Gbytes/second of data throughput, outstripping even the nascent Infiniband interconnect and driving Intel-based servers deeper into data center computing.

The interface appears poised to get broad support as the internal bus of choice for Intel-based servers. But some OEMs warn that the industry must carefully position what's becoming an embarrassment of fast-I/O riches if it hopes to avoid market confusion.

PCI-X is seen as an easy-to-implement enabler for supporting 10-Gigabit Ethernet and other fast interfaces in next-generation servers. And some backers believe the bus will dominate PC server design for the foreseeable future.

The interconnect is not expected to find much of a foothold in desktop or notebook PCs, however. Nor is it expected to supplant the role of Infiniband in linking multiple systems or subsystems in a data center.

"We're expecting the PCI road map to continue a number of years into the future with this technology," said Dwight Riley, PCI-X 2.0 chairman and a server architect at Compaq Computer Corp. (Houston). "The DDR [double-data-rate] version of PCI-X itself will take us out to 2004. And Compaq will use PCI-X across its entire server line."

"I think the committee did a good job of implementing DDR with minimal impact on the silicon," said Dave Pulling, vice president of sales and marketing at chip set maker Serverworks, a division of Broadcom Corp. (Irvine, Calif.). Serverworks plans to support PCI-X 2.0 in chip sets for two- and four-way Intel servers that it will launch at the end of next summer, Pulling said. The company's first PCI-X-based chip sets will roll in the first quarter, and it will begin support of Infiniband with a 4X version late next year.

Not everyone is convinced PCI-X 2.0 will become the dominant interconnect in PC servers in the long term. "Compaq has no plans to use 3GIO [a serial version of PCI slated to debut in late 2003] in its servers, but I have a different take," said Michael Krause, who heads up interconnect technology for Hewlett-Packard Co.'s server group. "I think 3GIO makes a lot of sense come 2004. The best chip sets I have seen targeted for 2004 and beyond are using 3GIO."

Significant savings

The serial nature of 3GIO will also result in lower pin counts on chip sets. That creates significant cost savings at the chip and board level for companies prepared to move to the new interconnect, Krause said. Compaq's PCI-X 2.0 drive is largely trying to forestall a change of I/O technology for its existing users, he said. HP plans to use 3GIO as a replacement for its existing proprietary serial I/O server technology, called Ropes.

The PCI-X 2.0 work group of the PCI Special Interest Group (SIG) has brought under its umbrella work on double-data-rate and quad-data-rate (DDR and QDR) versions of PCI-X as well as work on bringing error-correction code to PCI. It expects to put its work out for member review in the first quarter of next year. The SIG typically takes specs to a final vote 30 to 60 days after a successful review.

The double-data-rate version of PCI-X uses source-synchronous signaling to sample data on both the rising and falling edges of the clock. The mode depends on I/O signaling in the range of 750 to 800 millivolts, although PCI-X core silicon is still expected to run at existing voltages, generally at 3.3 V.

A typical PCI-X implementation using phase-locked loops with a flip-chip package would need no change in pinouts to support DDR. Implementations using ball-grid array packages might need a thorough analysis of I/O voltages.

"QDR is a no-brainer after that," Riley said. "But do systems need the QDR bandwidth today? Probably not."

Roger Tipley, chairman of the PCI SIG, said the new spec arrives just as designers are looking for ways to bring high-speed interfaces like 10-Gigabit Ethernet elegantly into servers.

"At some point we may want to move to dual 10-Gigabit Ethernet links, and that's where QDR comes in. Right now that might seem like overkill, but in a few years it won't seem so strange," Tipley said.

Cause for delay

PCI's fast-forward leap stands in marked contrast to the rather sedate pace of development of the popular bus technology just a few years back. A falling-out between the PCI SIG and a key Intel interconnect designer who spearheaded development on the Accelerated Graphics Port caused Intel to pull out of the initial PCI-X effort. "That's why PCI-X took two years to take off," said Cary Snyder, a senior analyst with the Microprocessor Report.

At the same time, a deep division over the route to a serial, message-passing interface split computer makers into competing camps, diverting attention from PCI-X until the groups reunited on the Infiniband spec.

"Today there are a number of interconnect technologies available — PCI, PCI-X, DDR PCI-X and Infiniband — and we want to position these clearly," said Tom Bradicich, director of PC server technology and architecture at IBM Corp. "I don't think you will see one win out over another. There will be overlapping bands, but adapter-card and bridge-chip makers share some concerns that there could be confusion."

Indeed, a broad group of computer makers drawn from the ranks of the PCI SIG and the Infiniband Trade Association is said to be hammering out usage scenarios for the various technologies. The group may also try to influence the still-fluid work on 3GIO, which aims to become the PCI 3.0 standard, so that further overlap among I/O specs is minimized.

Riley said 3GIO offers low pin counts for desktop and notebook designers needing a fast internal chip-to-chip interconnect. It could be useful as those systems migrate toward 1-Gigabit Ethernet connections.

Infiniband is also a serial interconnect, but it uses a message-passing approach aimed at letting it support links between systems equipped with their own processors and operating systems. By contrast, PCI and its follow-ons are memory-mapped buses geared for direct attachments between devices that share a common host and a single operating system.

Infiniband aims to replace a number of expensive, proprietary system interconnects like Giganet, Myrinet and ServerNet, which are used to cluster systems or link subsystems. "There are four or five versions and they can cost $1,000 per card," Tipley said.

The emergence of fast I/O links is helping systems designers drive PC architectures deeper into corporate and Internet data centers. At Comdex, IBM detailed its plans for its Enterprise X Architecture, a set of scalable Intel-based servers that can be configured with as many as 16 processors, 256 Gbytes of RAM and 48 PCI-X I/O slots.

The new IBM systems will come in versions for Intel 32-bit Foster and 64-bit McKinley processors, simply by swapping out a processor interface chip. The servers leverage what IBM describes as mainframe-class capabilities, including hot-swap memory, up to 64 Mbytes of Level-4 cache and a 3.2-Gbyte/s coherent scalability port for linking multiple processors, even across separate chassis.



To: Elmer who wrote (149136), 11/22/2001 12:57:48 AM
From: Bill Jackson
 
Elmer, AMD has been selling each chip at a profit and has gained share this quarter. It did lose share last quarter, but kicked Intel hard in the profit teeth as it declined.

Bill



To: Elmer who wrote (149136), 11/22/2001 7:57:08 AM
From: Dave
 
Elmer,

AMD ekes out a relatively small "gross profit" on their uP sales. Gross profit is defined as (Sales - Cost of Goods Sold). Therefore, if AMD sold each uP at cost, their gross profit would be zero, increasing their net loss every quarter.
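The point above is just arithmetic, and a tiny sketch makes it concrete. The numbers here are purely hypothetical, not AMD's actual financials:

```python
# Illustrative only: hypothetical numbers, not AMD's real financials.
def gross_profit(sales: float, cogs: float) -> float:
    """Gross profit = Sales - Cost of Goods Sold."""
    return sales - cogs

# If each chip sold for exactly its manufacturing cost,
# gross profit would be zero...
print(gross_profit(100.0, 100.0))  # 0.0

# ...while fixed operating expenses (R&D, SG&A) still accrue,
# so net income goes negative.
operating_expenses = 25.0
print(gross_profit(100.0, 100.0) - operating_expenses)  # -25.0
```

In other words, selling at cost does not mean breaking even: operating expenses below the gross-profit line guarantee a net loss.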

A couple of comments to make. The people who purchase AMD uP assert that the AMD uP is "faster" than an Intel equivalent. While AMD has few Tier 1 vendors for its uP line, AMD has established itself in the "white box" market. However, if the AMD uP is excellent for gaming, why didn't Microsoft choose AMD over Intel? Was it b/c of performance issues? Supply issues?

The reason AMD as a 2nd source will no longer work in the marketplace is b/c Intel has developed a "proprietary" architecture, thus leaving AMD to develop its own. Face it, computer companies, like IBM, HP, CPQ, etc., do not want to be holding inventory. By having two different uP suppliers, i.e. INTC and AMD, they are forced to have storage facilities (and infrastructure) to support both.

The question then becomes, can AMD survive as the "premier" supplier to white boxes?