To: Sector Investor who wrote (500), 8/21/1999 9:56:00 PM
From: Frank A. Coluccio
 
Sector, are you finished? Wait 'til you see the charge-back invoice we send over to the MRVC thread.

"Transmitting 300GB of packetized DATA is one thing, but when the source is a single FILE, what kind of storage technology could read-write that size data file as fast as even the MRVC CWDM technology, let alone the claimed Silkroad speeds could transmit it? So other bottlenecks exist."

Indeed, other bottlenecks do exist.

The trick is to integrate, within the storage entity [the memory complex], a multi-gigabit switch-router fabric that is not only integral to the storage complex [an archive, say], but also directs traffic internal and external to it: between memory modules and application servers, and onto external transmission line media as well.
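To make the idea concrete, here is a toy model (all names are mine, purely illustrative, not any vendor's design) of a fabric that is part of the storage complex itself: every element, internal memory module or external line interface, is an addressable port on the same fabric, so traffic takes the same path whether it stays inside the archive or launches onto line media.

```python
# Toy sketch: a switch-router fabric integral to the storage complex.
# Hypothetical names throughout; this only illustrates the addressing idea.
class Fabric:
    def __init__(self):
        self.ports = {}  # address -> handler (memory module, server, or line)

    def attach(self, address, handler):
        # Every element becomes a network-aware, addressable network element.
        self.ports[address] = handler

    def route(self, dest, payload):
        # The same routing path serves internal and external destinations.
        return self.ports[dest](payload)

fabric = Fabric()
archive = {}
fabric.attach("module-0", lambda data: archive.update({"module-0": data}))
fabric.attach("oc48-line", lambda data: f"launched {len(data)} bytes")

fabric.route("module-0", b"block")                 # internal: memory module
result = fabric.route("oc48-line", b"\0" * 1024)   # external: line media
```

The point of the sketch is only that the high-speed component is addressed like any other network element, not bolted on as a later stage.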

The high-speed component cannot be an afterthought, or a subsequent stage bolted onto the storage entity. Rather, it should, more optimally, be a network-aware and addressable network element, a part of the storage itself, operating at speeds comparable to the designated external line media.

If we are talking about an actual server device that is not capable of this because of its limited bus speeds, clocks, or whatever, then a data staging scheme must be employed (store and forward): data is first loaded, or staged, into a holding space provisioned in the fashion outlined above, until it is ready for launch.
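A minimal sketch of that staging idea (hypothetical function names; Python used purely for illustration): the bus-limited server dribbles the file into the holding space at whatever rate it can sustain, and only once the complete file is staged does the launch proceed as a single high-speed pass.

```python
import io

def stage_and_forward(source, launch, chunk_size=64 * 1024):
    """Store and forward: fully stage the payload, then launch it in one pass.

    `source` is read at whatever rate the limited server can sustain;
    `launch` is invoked only once the whole file sits in the holding space.
    """
    holding_space = io.BytesIO()             # the provisioned staging area
    while chunk := source.read(chunk_size):  # slow, bus-limited reads
        holding_space.write(chunk)
    launch(holding_space.getvalue())         # single high-speed transmission

# Usage: collect what was "launched" onto the line.
sent = []
stage_and_forward(io.BytesIO(b"x" * 200_000), sent.append)
```

Note that `launch` fires exactly once, with the whole payload; the slow reads never appear on the line side.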

This form of store and forward is often used when sending very large bulk files, sometimes via a scheme known as "network data mover," or NDM, which can be deployed over both switched and contention (even IP) types of facilities, provided those facilities are rated high enough.

I obviously don't know of any interfaces that could move 300 GB [equivalent to 2.4 terabits] at anything like those rates. The highest device I/Os I am aware of [in archive complexes] are GbE and perhaps some OC-48c's (2.5 Gb/s), though the latter variants are, for the most part, not quite ready for prime time. In the future, we may see these and much higher speeds under a standard now being called "Future I/O." See: Message 10014974
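The back-of-the-envelope arithmetic behind those figures, using the line rates mentioned in this post (idealized, full line rate, no protocol overhead):

```python
# 300 GB in decimal units is 300e9 bytes, or 2.4e12 bits (2.4 terabits).
FILE_BITS = 300e9 * 8

def transfer_seconds(line_rate_bps):
    """Idealized time to move the whole file at full line rate."""
    return FILE_BITS / line_rate_bps

for name, rate in [("GbE", 1e9), ("OC-48c", 2.488e9), ("FC ~800 Mb/s", 0.8e9)]:
    print(f"{name}: {transfer_seconds(rate) / 60:.1f} minutes")
# GbE: 40.0 minutes; OC-48c: about 16 minutes; 800 Mb/s FC: 50 minutes.
```

Even at OC-48c rates, the single 300 GB file ties up the interface for roughly a quarter of an hour, which is exactly why the read-write side of the storage becomes the bottleneck long before the claimed transmission speeds do.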

Fibre Channel, too, plays a huge role in these situations, but in ways that are local to the storage rather than over the WAN (although some vendors are now bringing FC over optical DWDM to the fore), as does FDDI in less demanding roles. But these are not normally conducive to WAN-type communications unless true optical is used, and even then those speeds seldom exceed 800 Mb/s to 1.1 Gb/s at this time.

Regards, Frank Coluccio