To: William Hunt who wrote (23770) 6/11/1999 9:58:00 AM From: Rob C.
Intel's GHz Chip Needs NUMA

RFG believes that the Intel architecture, as currently designed, will yield diminishing returns as processor speeds approach and exceed the gigahertz (GHz) barrier. The long-touted advantages of Intel's 64-bit gigahertz processors are still years away and will be realized only with features such as EPIC or NUMA. CIOs who are counting on larger, more powerful Intel processors and clusters to relieve the operational morass created by today's rapidly growing systems should be talking to their vendors of Intel-based hardware and software to understand both the impact on their business and the alternatives.

Business Imperatives:

· Intel is trying to satisfy two masters with the same solution: it is attempting to provide a common chip architecture and family of 64-bit products for both servers and desktops. To meet the cost constraints of entry-level servers and desktops, Intel has chosen uniprocessor chips as its design point. Hence the high-performance systems anticipated by CIOs are being treated as a secondary market.

· High-end servers must have a balanced system architecture, with equally good performance in the processor, memory, and input/output (I/O) subsystems. Memory speed, bus speed, and bandwidth are not keeping pace with processor speeds, and Intel intends to continue with its slow Peripheral Component Interconnect (PCI) interface. CIOs may need to seek NUMA-based hardware solutions to achieve the desired performance levels.

· New software will also be required to realize the performance objectives of 1 GHz processors. The operating system and Commercial Off-The-Shelf (COTS) software must be not only 64-bit enabled but also optimized for the new processors. CIOs should be prepared to redesign and/or recompile all existing programs with the new Explicitly Parallel Instruction Computing (EPIC) compilers or for Non-Uniform Memory Access (NUMA) optimization (a sketch of what NUMA-aware code looks like appears at the end of this post).

Moore's Law continues apace, with processor speeds doubling every 12 to 18 months. By the end of 2001, gigahertz (GHz) 64-bit processors will be available from Intel. The common view is that these high-speed engines will let CIOs scale up to enterprise-class servers; through consolidation or clustering, CIOs hope to reduce the administration costs associated with existing Intel server farms. Unfortunately, this goal may not be as easily attained as hoped.

CIOs need to understand that processor speed alone does not determine a server's performance. Bus structure and bandwidth, memory size and proximity, and I/O bus widths and speeds all affect a Central Processing Unit's (CPU) effectiveness. If the CPU cannot be fed fast enough, it is forced to waste cycles, idling, and thereby appears to run at a slower speed (the sketch below makes this concrete).

Chip designers have developed solutions to some of these problems, but not all. Intel is aware of the options and tradeoffs and has chosen the uniprocessor model as the design point for its 64-bit Merced and McKinley processors. This is good news for entry-level servers and client systems, since it keeps the price point low; it is not good news for CIOs hoping to scale up their existing Intel servers. To be effective, high-performance servers require a carefully balanced system architecture: memory, including all levels of cache, must be placed as close to the CPU as possible, and the internal bus and the I/O subsystem should have the broadest possible bandwidth.
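To make the starved-CPU point concrete, here is a small C sketch (my own illustration, not part of RFG's note; the array size and shuffle are arbitrary). It chases pointers through a large array twice with an identical instruction count: once in sequential order, where hardware prefetching keeps the processor fed, and once in random order, where nearly every load stalls on main memory. On typical hardware the random walk runs several times slower even though the "speed" of the CPU is unchanged.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1u << 24)   /* 16M slots (128 MB on a 64-bit build), past any cache */

static double chase(const size_t *next, size_t steps)
{
    clock_t t0 = clock();
    size_t p = 0;
    for (size_t i = 0; i < steps; i++)
        p = next[p];
    volatile size_t sink = p;   /* keep the loop from being optimized away */
    (void)sink;
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void)
{
    size_t *next = malloc(N * sizeof *next);
    if (!next) return 1;

    /* Sequential chain: the prefetcher keeps the pipeline fed. */
    for (size_t i = 0; i < N; i++)
        next[i] = (i + 1) % N;
    double seq = chase(next, N);

    /* Random single-cycle chain (Sattolo shuffle of the identity):
       nearly every load misses cache, so the same instruction count
       takes far longer while the CPU sits idle waiting on memory. */
    for (size_t i = 0; i < N; i++)
        next[i] = i;
    srand(1);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;   /* assumes RAND_MAX >= N, as on glibc */
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }
    double rnd = chase(next, N);

    printf("sequential chase: %6.2f s\nrandom chase:     %6.2f s\n", seq, rnd);
    free(next);
    return 0;
}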
In Symmetric Multiprocessing (SMP) systems and clusters, this balancing act becomes even more difficult to achieve. Since Intel chose not to optimize for this architectural target, it has opted to use an untested methodology called EPIC to achieve the desired results.
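As for the NUMA alternative named in the title, the sketch below shows what NUMA-aware code means in practice, using the Linux libnuma API. That API is my illustration and postdates the Merced-era systems discussed here; the underlying idea is simply that memory should be allocated on the same node as the thread that uses it, so loads stay local instead of crossing the slower inter-node interconnect.

/* Minimal NUMA-placement sketch using Linux libnuma (illustrative only).
 * Build: cc numa_local.c -lnuma */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "not a NUMA system\n");
        return 1;
    }

    int node = 0;                     /* pin work and data together    */
    size_t len = 64 * 1024 * 1024;    /* 64 MB working set (arbitrary) */

    numa_run_on_node(node);           /* run this thread on node 0     */
    char *buf = numa_alloc_onnode(len, node);  /* ...and its memory    */
    if (!buf) return 1;

    memset(buf, 0, len);              /* every touch stays node-local  */

    printf("allocated %zu bytes on node %d of %d\n",
           len, node, numa_max_node() + 1);
    numa_free(buf, len);
    return 0;
}

Without such placement, an SMP-style program running unchanged on a NUMA machine may find much of its working set on remote nodes, which is exactly the redesign-or-recompile cost the bullet points above warn about.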