Technology Stocks : Intel Corporation (INTC)


To: Elmer who wrote (150732), 12/2/2001 2:43:36 PM
From: Dan3
 
Re: how a DB product would even be aware of, much less designed for, an I/O subsystem?

<brief time-out>

Oracle Net runs its own protocol stack; some versions even run their own TCP/IP layers. That stack could be tuned for an InfiniBand transport and use appropriately sized packets and timing for client access and replication. The TNS listener service gets pretty specific about the network configuration:
download-east.oracle.com
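
(A concrete, if hedged, illustration of the packet-size point: Oracle Net connect descriptors let you set the session data unit (SDU), the size of the packets Oracle Net itself sends over the transport. The tnsnames.ora fragment below is only a sketch; the host, port, service name, and SDU value are placeholder examples, not a recommendation.)

SALES =
  (DESCRIPTION =
    # SDU = Oracle Net session data unit; sized to suit the underlying transport
    (SDU = 8192)
    (ADDRESS = (PROTOCOL = TCP)(HOST = db-host.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = sales))
  )

A fabric-aware transport would presumably expose its own PROTOCOL value and its own preferred SDU, which is exactly the kind of knob a DB vendor can tune once it owns the stack.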

I don't work with IBM's DB2, but I expect that it has a similar structure.

</brief time-out>
<resume diatribes>

AMD will bury Intel!

No, Intel will bury AMD!

NO! .....



To: Elmer who wrote (150732), 12/2/2001 3:01:53 PM
From: tcmay
 
InfiniBand--part of "the switch is the computer"

"Wanna, can you tell us how a DB product would even be aware of much less designed for a I/O subsystem?"

Though I'm not Wanna, here are some general points:

* Database applications are obviously very heavy users of I/O.

* This is practically the canonical "crosses cache boundaries" application, as data sets of tens of GB are accessed, rewritten, etc.

* A switching fabric like InfiniBand, which speeds the movement of data packets among disk subsystems, other processors, and so on, is much more useful for DB apps than for cache-resident number crunching or graphics applications (see the back-of-the-envelope sketch after this list).

* An example: One of the earliest uses of large clusters of processors for business applications was for the inventory control system for K-Mart. This was long before the Beowulf-style clusters. Lots of I/O amongst 256 processors and a similar number of disk drives....a switching fabric would have been wonderful for this kind of app.

* Likewise, Oracle and IBM have both been developing systems optimized for very large clusters of processors. Recall that Oracle bought nCube several years ago (it didn't go very far, but it shows their interest; it's just one of several such projects). This trend is good news for Intel, of course, and even better news if InfiniBand is adopted as the primary interconnect.
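
To put some rough numbers behind the switching-fabric point above, here is a back-of-the-envelope sketch in Python. Every figure in it is an assumption chosen for illustration (a 50 GB scan, a ~1 GB/s shared bus, ~1 GB/s of usable bandwidth per fabric link, 16 nodes); it is not a benchmark of any real system.

# Illustrative arithmetic only: why a switched fabric helps an I/O-bound scan.
# All numbers are assumptions for the sake of the sketch.

TABLE_GB = 50.0          # size of the table being scanned
SHARED_BUS_GBPS = 1.0    # aggregate bandwidth of a single shared bus (GB/s)
LINK_GBPS = 1.0          # usable per-link bandwidth of the switched fabric (GB/s)
NODES = 16               # nodes scanning partitions of the table in parallel

# Shared bus: every node contends for the same path, so the aggregate
# transfer rate is capped at the bus speed no matter how many nodes you add.
shared_bus_seconds = TABLE_GB / SHARED_BUS_GBPS

# Switched fabric: each node streams its own partition over its own link,
# so aggregate bandwidth scales (roughly) with the number of nodes.
fabric_seconds = (TABLE_GB / NODES) / LINK_GBPS

print(f"Shared bus:      {shared_bus_seconds:.1f} s to scan {TABLE_GB:.0f} GB")
print(f"Switched fabric: {fabric_seconds:.1f} s across {NODES} nodes")

The point is simply that for scan-heavy DB work the aggregate path to the data, not per-core arithmetic speed, sets the wall-clock time.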

Speculating, I expect that coming generations of video editors and streaming multimedia servers will be heavily I/O-bound, making I/O processors and switching fabrics more important than even the CPUs. Arguably, the switches _are_ the computer.

(Note that IBM is doing good work on radical new architectures like "PIM," standing for "processor in memory." In this approach, processors become an adjunct to RAM and are interconnected in very large hypercube topologies. Some of the plans for "Blue Gene," the next-generation ultracomputer, are using PIM. I see this as a more important threat to Intel's continued dominance than a gnat like AMD is.)
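
(For readers who have not run into hypercube topologies, a minimal sketch of the textbook addressing scheme follows; it is generic and not specific to Blue Gene or any IBM design. Nodes are binary addresses, neighbors differ in exactly one bit, and a message can be routed by flipping the differing bits one dimension at a time.)

# Textbook hypercube addressing; purely illustrative, not any particular machine.

def neighbors(node: int, dims: int) -> list[int]:
    """Nodes adjacent to `node` in a dims-dimensional hypercube differ in one bit."""
    return [node ^ (1 << d) for d in range(dims)]

def route(src: int, dst: int) -> list[int]:
    """Dimension-order routing: flip differing bits from low to high, one hop per bit."""
    path, cur, diff, d = [src], src, src ^ dst, 0
    while diff:
        if diff & 1:
            cur ^= (1 << d)
            path.append(cur)
        diff >>= 1
        d += 1
    return path

# A 4-dimensional hypercube has 16 nodes; the hop count between two nodes
# equals the Hamming distance between their addresses.
print(neighbors(0b0000, 4))   # [1, 2, 4, 8]
print(route(0b0000, 0b1011))  # [0, 1, 3, 11]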

And so the importance of InfiniBand is that it gets Intel into a leadership position in this "topology" area.

--Tim May