HotRail 8-way Athlon chipset overview in latest issue of Microprocessor Report:
Unfortunately, MPR isn't something that every investor can read for themselves, so I'll give a short two-minute summary of what I've read. But before I go on, here's a quote regarding release dates:
The new core logic ... is still some months away from being taped out
In other words, we probably won't see eight-way Athlon systems based on HotRail technology for quite some time.
Now on to the technical stuff ...
Imagine a central switch chip which acts kind of like a big "Grand Central Station" for all communication between processors, memory, and I/O devices. This switch chip supports a number of ports, called HotRail Channel (HRC) interfaces. Each interface consists of two 10-bit uni-directional ports, each capable of sustaining 1.6 GB/sec of bandwidth. Adding up both directions gives us a total of 3.2 GB/sec of bandwidth per HRC. On the other side of each interface is a bridge chip, which connects that HRC to a processor bus, memory, or I/O. The architecture is designed to be flexible, so some ports can be connected to processors, some to memory, and some to I/O (PCI-X, NGIO, Future I/O, whatever), depending on the needs of the server.
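To make the port arithmetic a little more concrete, here's a rough toy model in Python. None of these names come from HotRail or the MPR article; it's just my own back-of-the-envelope sketch of an HRC as a pair of 1.6 GB/sec uni-directional links hanging off a central switch:

    # Toy model (my own illustration, not HotRail's design)
    PORT_BANDWIDTH_GBS = 1.6                      # one 10-bit uni-directional port
    HRC_BANDWIDTH_GBS = 2 * PORT_BANDWIDTH_GBS    # 3.2 GB/sec both ways

    class HRC:
        """One HotRail Channel interface on the switch chip."""
        def __init__(self, attached_to):
            # attached_to: what the bridge chip on the far side talks to,
            # e.g. "cpu", "memory", or "io"
            self.attached_to = attached_to
            self.bandwidth_gbs = HRC_BANDWIDTH_GBS

    class SwitchChip:
        """The central 'Grand Central Station' switch with its HRC ports."""
        def __init__(self, hrcs):
            self.hrcs = list(hrcs)

        def aggregate_bandwidth(self):
            # naive sum of the port bandwidths, just for illustration
            return sum(h.bandwidth_gbs for h in self.hrcs)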
For example, an eight-way Athlon system will require eight of those HRC's to be connected to processors. Then four HRC's will serve as the memory interface, and two HRC's will connect to I/O, for a total of 14 HRC's. All 14 HRC's will come together into one switch chip, and that switch will also handle everything from traffic routing to cache coherency.
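Using the toy classes from the sketch above, the eight-way configuration works out like this (the 8/4/2 split is straight from the article; everything else is my own illustration):

    # Eight-way Athlon: 8 CPU ports + 4 memory ports + 2 I/O ports = 14 HRCs
    eight_way = SwitchChip(
        [HRC("cpu") for _ in range(8)] +
        [HRC("memory") for _ in range(4)] +
        [HRC("io") for _ in range(2)]
    )
    print(len(eight_way.hrcs))               # 14 HRC ports on the switch
    print(eight_way.aggregate_bandwidth())   # roughly 44.8, i.e. 14 x 3.2 GB/sec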
The advantage of the HotRail platform is its flexibility. The architecture can easily be scaled up or down. For example, to create a 4-way Athlon chipset, HotRail can just scale down the switch chip from 14 to 10 HRC's, or convert the four unused HRC's into more memory channels or I/O interfaces. Also, the HotRail platform is processor-neutral. It's possible for them to support Intel processors (if Intel allowed them to) just by converting some of the bridge chips (mentioned above) from Athlon to P6 buses (or Merced, McKinley, Foster, etc.). Finally, each HRC supports huge bandwidth (3.2 GB/sec) over a relatively small number of pins, which allows one switch chip to support a large number of HRC's at once.
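The scaled-down four-way part would just be a different mix of the same ports, something like this (the 4/4/2 split below is my guess at how the 10 HRC's would be divided, not a figure from the article):

    # Hypothetical four-way variant: drop 4 CPU ports, 10 HRCs total
    four_way = SwitchChip(
        [HRC("cpu") for _ in range(4)] +
        [HRC("memory") for _ in range(4)] +
        [HRC("io") for _ in range(2)]
    )
    print(len(four_way.hrcs))   # 10 HRC ports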
The main disadvantage is the latency imposed by all of the bridge chips and the switch chip itself. Going from processor to memory involves jumping three chips in the HotRail chipset, and that's not even including the return trip (for read data). HotRail estimates this overhead will impose some 30 nsec of extra delay compared to 450NX. But in servers, total sustained throughput is more important than latency.
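To put that figure in perspective: the penalty is additive on every processor-to-memory trip, but it doesn't eat into bandwidth. A tiny sketch (the baseline is left as a parameter, since I don't have a latency number for the 450NX itself):

    HOTRAIL_EXTRA_LATENCY_NS = 30   # MPR's estimate of the extra delay vs. 450NX

    def hotrail_read_latency(baseline_450nx_ns):
        # the bridge -> switch -> bridge hops add roughly 30 nsec
        # on top of whatever a 450NX-class access would cost
        return baseline_450nx_ns + HOTRAIL_EXTRA_LATENCY_NS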
The other disadvantage is the complexity. For every HRC, there must be a bridge chip. Thus, for the eight-way Athlon system mentioned above, there must be at least 15 HotRail chips on the motherboard (14 bridge chips, plus the huge 14-port switch chip).
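The chip count falls straight out of the port count; continuing the same toy model:

    def chip_count(num_hrcs):
        # one bridge chip per HRC, plus the single central switch chip
        return num_hrcs + 1

    print(chip_count(14))   # 15 HotRail chips for the eight-way design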
According to the article, HotRail's main competition will come from Intel's standard high-volume (SHV) server designs based on the 4-way 450NX chipset and the 8-way Profusion chipset. I disagree, however, since by the time HotRail is released, the situation in the marketplace may have changed somewhat.
But in any case, HotRail's architecture is pretty interesting, to say the least.
Tenchusatsu