<<You appear to be refusing to accept, or you regard it highly unlikely that, such an online storage environment can exist.>>
Not at all. Of course such environments exist, and I interact with them - but they are designed to maximize the throughput of MULTIPLE users and applications, not of any ONE.
<<I hope that's not the case, because if it is you have just eliminated one of the most influential drivers behind the future sales of all memory and dwdm that exists today. The fact is simply this, that I'm directly involved with several such beasts right now, and they are growing, as we type. They are real.>>
Yes, they are real; no, that is not the case. But you keep talking about aggregate capacities and throughput, while I am talking about a single sequential file transfer. Those are quite different things.
<<Another assumption which you incorrectly make, IMO, is that the ESCON attached memory unit needs necessarily to be a part of the mainframe. It can be, and most often is, attached to LPARs in the mainframe under the same director-ship as other mainframe channel resources, but it does not have to be the same ESCON director, nor does it necessarily even have to be attached to the same system. >>
Frank, an LPAR is just a logical partition - a carve-out of the mainframe's total processors and connections into separate, IPL-able entities.
<<It can be free standing, allowing all of the ESCON-like director activity to be devoted to memory transfer, only, eliminating the worries associated with the sharing you cited. (Actually, the storage companies and the channel extension companies have their own directors in many cases.) The storage complex is separate and distinct from the mainframe itself, in other words.>>
Of course multiple ESCON directors can be attached to the storage system or to multiple LPARs. So what? Try taking ONE (repeat: ONE) 300GB file off of it, driven by ONE (repeat: ONE) address space or application, and getting the data out at those speeds.
Being able to STORE the data is one thing; transferring it (a single sequential transfer, not an aggregate flow) is another.
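To put rough numbers on that, here's a back-of-envelope sketch in Python. The sustained rates are assumptions picked for illustration (a nominal ESCON effective rate, a FICON-class channel, an ideal GbE pipe), not measurements of any particular box:

# Back-of-envelope: ONE sequential transfer of a 300 GB file over ONE
# channel, at a few ASSUMED sustained rates (illustrative, not measured).

FILE_BYTES = 300e9  # 300 GB

for name, mb_per_s in [("ESCON-class (~17 MB/s effective)", 17),
                       ("FICON-class (~100 MB/s)", 100),
                       ("GbE, ideal (~125 MB/s)", 125)]:
    hours = FILE_BYTES / (mb_per_s * 1e6) / 3600
    print(f"{name:33s}: {hours:4.1f} hours")

Run it and you get roughly 4.9, 0.8, and 0.7 hours respectively - and that is before any of the software and interrupt overhead I get to below.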
<<I am working under the assumption that enough transfer can take place to accommodate such a file size as 300 GB or greater over very high speed links in an acceptable time frame, as dictated by the economics of the situation and what the company could afford. If I can stream out ten FC or GbE feeds at 1 Gb/s each, then that is 10 Gb/s, which would be suitable for an OC-192 line.>>
I think you need to check that assumption. Remember: ONE file, transferred SEQUENTIALLY (a la image copy), by ONE JOB, in any kind of normal (non-stand-alone) environment. Even if multiple 1 Gb/s lines exist, that one JOB won't use them all simultaneously; and even if it could, you could not (IMO) sustain a high enough data transfer rate from the storage device (over 300 GB) to feed them. Little things get in the way: operating system and application software constraints, a hardware interrupt after every block of 32K or less, and the fact that the EMC box may be EMULATING older disk devices (like 3390s), complete with their older data storage structures and data transfer limitations.
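To make the interrupt point concrete, here is a minimal sketch of what per-block overhead does to ONE sequential stream. The 32K block size and the per-I/O overhead values are assumptions chosen for illustration, not measured figures:

# Effect of per-block overhead on ONE sequential stream.
# LINK_MB_S and the overhead values are illustrative assumptions.

BLOCK_BYTES = 32 * 1024     # one 32K block per I/O
LINK_MB_S = 100.0           # assumed raw channel rate, MB/s

for overhead_ms in [0.0, 0.5, 1.0, 2.0]:  # per-block OS/interrupt cost
    wire_s = BLOCK_BYTES / (LINK_MB_S * 1e6)           # time on the wire
    effective = BLOCK_BYTES / (wire_s + overhead_ms / 1000) / 1e6
    print(f"{overhead_ms:.1f} ms/block -> {effective:5.1f} MB/s sustained")

Even half a millisecond of handling per 32K block drops a 100 MB/s channel to under 40 MB/s sustained; at 2 ms it's about 14 MB/s. That is the kind of erosion I mean.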
<<If I design my data storage complex correctly with these requirements in mind, then I am not concerned with all of the weak links within a server farm environment or within a data center. I am only concerned about transferring data from a stand-alone active archive or other disk-based storage entity to a distant like entity. The weak links you've mentioned go away in this case, if I have taken the steps I've mentioned, and avoid taking circuitous routes all over the SAN or LAN, and avoid the effects of the mainframe channel's own contention.>>
<<Your statements about the impossibility or improbability at this time of sustaining these kinds of rates even with SR speeds are largely irrelevant. I am aware of at least several situations where clients will be doing this routinely, in another couple of months, using multiple dynamically varying [bandwidth on demand, in effect] T3s and OC-n's, several times per day. And in these cases the carrier will obviously not be using SR at this time.>>
<<Like I stated, for the transmission time frames which have been deemed acceptable (hours, not seconds) these capabilities are already on the books, and they will be routine very soon. With, or without, SR speeds.>>
Aggregates again. I don't disagree about aggregates. But how fast can a single job or application transfer ONE 300GB file sequentially from storage, using the systems you are working with, in the intended working environment? I'll shut up until you get an approximate answer to that.
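And to show why aggregate numbers don't answer that question, one last sketch, assuming the ten 1 Gb/s feeds from your own example (whether one job can stripe across them at all is exactly what's in dispute):

# Aggregate vs. ONE sequential stream, for ONE 300 GB file.
# Link count and per-stream rate are the assumptions under debate.

FILE_BITS = 300e9 * 8        # 300 GB in bits
LINKS, LINK_GBPS = 10, 1.0   # ten 1 Gb/s feeds, per the quote above

aggregate_s = FILE_BITS / (LINKS * LINK_GBPS * 1e9)  # IF one job could stripe
single_s = FILE_BITS / (LINK_GBPS * 1e9)             # one job, one stream

print(f"striped across all ten links: {aggregate_s / 60:.1f} minutes")
print(f"one sequential stream       : {single_s / 3600:.1f} hours")

Four minutes if a single job could really drive all ten links; 0.7 hours on one stream. And that 0.7 hours still assumes the storage side can feed a full gigabit to one job, which is the part I doubt.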