Technology Stocks : Son of SAN - Storage Networking Technologies


To: J Fieb who wrote (3755), 8/4/2001 5:13:11 PM
From: Gus
 
I don't have much more than what is available out there, J. EMC is very cagey with its product announcements. For example, there were plenty of rumors about the stand-alone Celerra SE with the Linux-based Control Station (cluster management OS), but virtually no word about Celerra HighRoad, their pioneering multiple path file system (MPFS), before those products were formally introduced in December. Typically, EMC ships their new products for a few months before they issue the press release.

Anyway, this is what we have. This excerpt is from a March 2001 report from WitSoundview.

........In 3Q we anticipate that EMC will have three announcements. The first is a further enhancement to the Symmetrix line, which we dub Symm5.8. In 5.8 we believe EMC will upgrade the PowerPC processors that are used in the controller, improve the bandwidth of the backplane, and move to 15K RPM drives. Symm5.8 will not be a complete rework but rather part of the natural product evolution. We also believe that there will be additional software enhancements.

Also in 3Q we believe EMC will introduce a switch that will fan out ports to allow for attachment of additional servers in a SAN or large shared server environment. Today, Symmetrix is limited to 16 active ports while Hitachi boasts 64 ports. These switches, code named Spider, will allow Symmetrix to go to 60 ports of which 44 would be active. We believe that McDATA is the front-runner for supplying the Spider switches to EMC.

The final announcement we anticipate for 3Q is a new "Storage Virtualization" offering. Virtualization allows, among other things, the physical placement of data under the control of the storage subsystem, invisible to the application server. This is another step toward easing the issues of storage management. We understand that EMC’s approach to Virtualization will be to have a Virtualization Appliance, with agents in the Host Bus Adapters controlling where the data is stored physically. Most other vendors are approaching Virtualization to provide a vendor-agnostic view of storage devices. We do not know whether EMC will take this approach or take their historic approach, which has tended to be EMC-centric. In any event we anticipate that EMC will have an entry in this important technology before the end of the year.

EMC’s competitors are also developing virtualization technology. Compaq was supposed to introduce its version, called VersaStor, next month, but that has now been postponed until the end of the year, losing what would have been a time-to-market advantage for Compaq. IBM has announced a new storage architecture called StorageTank. StorageTank, which is being developed by Tivoli in collaboration with a number of IBM development organizations, will initially come out this summer with a Distributed File System, which will compete with Veritas. We expect that late in the year IBM will introduce its virtualization offering as part of StorageTank. Hitachi is also doing work in this area and we believe Hitachi may be collaborating with Veritas. If true, this would put pressure on the EMC-Veritas relationship, much as happened when Brocade brought Hitachi into sales situations competing against EMC.

The next set of major announcements is not expected until 2H02. At that time we believe EMC will introduce Symm6, which will be a major redo of the architecture. Symm6 moves from a bus architecture to a switch architecture, which is important to meet performance requirements. We understand that the switch is likely to be based on InfiniBand, but that it is likely to be bought from a vendor as opposed to being developed internally. It is too early to speculate on the performance of EMC’s switch, but we note that Hitachi’s switch has a bandwidth of 6.8 GB/sec and will move to nine by mid-year and 13 by year-end....

witsoundview.com



To: J Fieb who wrote (3755), 8/4/2001 5:29:59 PM
From: Gus
 
More broad strokes from Jim Rothnie, EMC's Chief Technology Officer.

.....The three main focuses of EMC's R&D investment have been exploiting the advances in storage density, driving connectivity and re-inventing storage management. EMC has made great advances in reducing the number of people required to manage information. Soon each manager will be able to manage hundreds of terabytes of information, Rothnie said. This will come about by widespread adoption of storage networking and by advances in EMC management software. The vision is automation.

Just as a long-distance phone call is incredibly complicated yet remarkably simple for the end user, storage management should have the same ease of use. "We expect our customers to think of having one seamless information storage system," he said.

Rothnie said automation is different than "storage virtualization." Virtualization is a new term for creating an abstract view of information – something that EMC has been doing for years with the Symmetrix. Automation means moving that information to where it needs to be with minimal human involvement.

"Automation leads to better performance, simplicity in making changes in configuration and cost effectiveness," Rothnie said.

The brains of automation will be the next generation of EMC ControlCenter software. "ControlCenter will watch traffic at all points and will make a decision that the data should be here rather than there, using I/O Redirectors (next generation of PowerPath) and Data Movers (next generations of SRDF and MirrorView) to make it happen," Rothnie said. The result is the coordination of all these actions to automatically move information objects.
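The watch-decide-move loop Rothnie describes can be sketched roughly as follows. All class and method names here are invented for illustration; they mimic the coordination pattern, not EMC ControlCenter's actual interfaces.

```python
# Hypothetical sketch of the watch-decide-move automation loop described
# above. Names are invented; this is not EMC ControlCenter's real API.

class AutomationLoop:
    def __init__(self, monitor, redirector, mover):
        self.monitor = monitor          # watches traffic at all points
        self.redirector = redirector    # host-side I/O redirection
        self.mover = mover              # array-side data movement

    def step(self):
        """One pass: ask the monitor where an object should live, copy the
        data there, repoint I/O at the new location, then free the old copy."""
        obj, src, dst = self.monitor.recommend_move()
        if dst is None or dst == src:
            return False                     # nothing worth moving
        self.mover.copy(obj, src, dst)       # the "Data Mover" role
        self.redirector.repoint(obj, dst)    # the "I/O Redirector" role
        self.mover.release(obj, src)         # retire the old copy
        return True
```

The point of the sketch is the coordination: monitoring, redirection, and movement are separate components, and the loop orchestrates them so the application server never participates in the move.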


EMC's information storage works with all server types, all forms of connectivity, has open APIs and cooperative support agreements with many other companies. Soon, EMC's software products will work with storage hardware from other vendors, he said....

emc.com

Now, I'm going to take these steps, these kind of layers in the layer cake one at a time, and talk about what are the key issues for each of them, where do things stand today, and in some cases, a glimpse into the future for where we see these layers of the cake headed.

And we'll start out with the physical layer. This is the foundation, the hardware platform, which lies beneath the entire stack of information storage resources in the information plant. This is where the foundations of reliability, performance and capacity all sit. Many suppliers of storage systems today focus all of their attention on those three characteristics, on what the hardware can do.

At EMC, we also focus a lot of attention on reliability, performance and capacity, and today achieve a higher level of each of those measurements than anybody else in the industry. But we also understand that this physical layer is the foundation of everything else, and what you need to have inside it is the capability to execute the software which will ultimately build all of the layers above.

Why is it necessary to do this? Back in the old days, when people organized storage this way and their needs were much, much simpler (that is, single servers and single applications, with storage organized as a peripheral device connected to the back end), it was unnecessary to worry about the classes of functionality we'll talk about here today.

But the reality is our customers are not dealing with that kind of simple environment anymore. They have hundreds, or often thousands of separate computer systems. If they did deploy their storage resources as peripherals behind each one, that would lead to the thousand islands phenomenon - a thousand islands of separate corners of information which are extremely difficult to manage and share and protect.

So customers are looking to storage technology to simplify this picture, to put that storage into a common resource which everything can connect to and use effectively.

In order to do that, it is essential that higher levels of functionality be delivered: things like replication and multipath file sharing, data protection, and data optimization, which is really data placement optimization. I want to focus, just for a second, on that particular function as an illustration of what's important and where things are headed for the future.

Data placement optimization is something that I think is not yet well understood in our industry. EMC introduced products in this space about 18 months ago, and there are now around a thousand customers around the world using this capability.

What it's about is similar to what customers used to do themselves a couple of years ago in terms of striping data: taking popular data objects and spreading them across multiple disk drives. But in an era of petabyte information plants, with much more dynamic information, it has become literally impossible for human beings to deal with this. And so the function of our optimization software is to do this automatically: to keep track of what's actually happening in the system and to reshuffle data as necessary to get it in the right place.
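The mechanism can be sketched in miniature as follows, assuming a simple I/O-count notion of "heat." All names and thresholds are invented; this illustrates the idea, not EMC's actual algorithm.

```python
# Toy data-placement optimizer: track per-object I/O heat, then move the
# hottest object off the busiest disk when the imbalance gets large.
# Everything here is an invented illustration, not EMC's implementation.
from collections import defaultdict

class PlacementOptimizer:
    def __init__(self, disks):
        self.disks = list(disks)
        self.disk_of = {}                 # object -> disk it lives on
        self.heat = defaultdict(int)      # object -> recent I/O count

    def record_io(self, obj, disk=None):
        if disk is not None:
            self.disk_of[obj] = disk
        self.heat[obj] += 1

    def disk_load(self):
        load = defaultdict(int)
        for obj, disk in self.disk_of.items():
            load[disk] += self.heat[obj]
        for d in self.disks:
            load.setdefault(d, 0)
        return load

    def rebalance(self):
        """One pass: move the hottest object on the busiest disk to the
        coolest disk, but only if the imbalance is worth acting on."""
        load = self.disk_load()
        busiest = max(self.disks, key=lambda d: load[d])
        coolest = min(self.disks, key=lambda d: load[d])
        if load[busiest] <= 2 * load[coolest]:
            return None                   # imbalance too small to act on
        hot = max((o for o, d in self.disk_of.items() if d == busiest),
                  key=lambda o: self.heat[o])
        self.disk_of[hot] = coolest
        return hot, busiest, coolest
```

A real implementation would decay the heat counters over time and rate-limit moves; the essential loop, watching actual traffic and reshuffling placement without human involvement, is the same.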

One of the facets of high functionality that I want to discuss with you here for a moment, is where does the intelligence belong in a modern storage deployment, the type that all of our customers are heading to today? The various resources involved with accessing information are organized, somewhat simplistically, in the three layers that are shown here.

1) Servers, which are running the applications,
2) a connectivity layer, which connects the servers to the information,
3) then the storage system itself, which is storing that data.

Functionality can be put in any of those three places, but for each particular function, it turns out that there is a best place to put it. And if you put that function somewhere else, it can cost you an order of magnitude in performance, and often really spells the difference between being able to do it at all, and being able to do it effectively.


Storage optimization is actually an excellent example of this, because optimization involves moving data from one part of the storage system to another. That really belongs in the storage system. If you don't put it there, for example, if you put that software up in the application server, it costs an enormously larger amount to perform these optimization functions. The data has to travel all the way up, across the connectivity layer, into the servers, and back down to accomplish that data moving function.
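The order-of-magnitude claim can be made concrete with back-of-the-envelope arithmetic. The figures below are illustrative assumptions (a roughly 1 Gbit/s FC link of that era, a 1.6 GB/s internal backplane), not EMC specifications.

```python
# Rough model of why data movement belongs in the storage layer.
# All bandwidth figures are invented for illustration, not EMC specs.

def host_mediated_seconds(data_gb, fc_gbit_per_s=1.0):
    """Server-mediated move: data crosses the Fibre Channel link twice
    (read up to the host, then written back down to the array)."""
    gbits = data_gb * 8 * 2
    return gbits / fc_gbit_per_s

def in_array_seconds(data_gb, internal_gbyte_per_s=1.6):
    """In-array move: data travels only across the internal backplane."""
    return data_gb / internal_gbyte_per_s

data_gb = 100
print(f"host-mediated: {host_mediated_seconds(data_gb):.0f} s")
print(f"in-array:      {in_array_seconds(data_gb):.1f} s")
```

With these assumed numbers a 100 GB reshuffle takes minutes inside the array but well over twenty minutes through the host, and that is before counting the host CPU and the SAN bandwidth stolen from applications.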

Data replication and remote mirroring are other examples of functions which involve data movement and are performed much more effectively in the storage layer than anywhere else.

And another place that functionality can be placed is in the server complex itself. Again, there are certain functions which belong there. This chart illustrates one, called path failover, which involves controlling the channels that come out from the servers and down into the storage complex. Controlling them when a failure occurs, or balancing the load across them, must be performed inside the server, so that's the right place to put it.
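A host-side path failover policy might look like the following sketch. The round-robin-with-failover behavior is assumed for illustration; this is not a description of PowerPath internals.

```python
# Minimal sketch of host-side path failover: spread I/O across healthy
# paths and skip any path that has failed. Illustrative only.

class MultipathDevice:
    def __init__(self, paths):
        self.paths = list(paths)
        self.healthy = set(paths)
        self._next = 0

    def mark_failed(self, path):
        self.healthy.discard(path)

    def mark_restored(self, path):
        self.healthy.add(path)

    def pick_path(self):
        """Return the next healthy path in round-robin order."""
        if not self.healthy:
            raise IOError("all paths to the storage system are down")
        for _ in range(len(self.paths)):
            path = self.paths[self._next % len(self.paths)]
            self._next += 1
            if path in self.healthy:
                return path
```

Only the server knows which of its HBAs and channels are alive at any instant, which is exactly why this function cannot live in the array or the switch.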

What you find in the storage market today is that vendors who have products in one particular layer try to put all the value-add there. In practice, I think of it as "commoditize thy neighbor": the effort to move value into the part of the complex where you have the strongest position.

Certainly we see storage software suppliers who try to put it all in the server layer. We see suppliers of things like SAN appliances try to put it all in the connectivity layer. And back in the old days, when EMC had a narrower scope of interest, we thought it all belonged in the storage system itself. But as our understanding of the whole storage environment matured, through discussions with customers and a clear understanding of what they required, we have come to understand that the right thing to do is to put functions where they belong.

As a rule of thumb, that means putting the intelligence close to the thing the intelligence is controlling. You then get implementations of advanced functionality which are far superior to any other approach. So distributed intelligence is really the answer to handling advanced functions effectively.

wwpi.com



To: J Fieb who wrote (3755), 8/4/2001 10:33:40 PM
From: Vikas Deolaliker
 
Let me take a stab at this.

The current Symmetrix has 16 channel directors (kind of like HBAs on the storage side), each with 2 Fibre Channel ports. When you connect a server to a channel director, you use both ports for redundancy, so you can connect up to 16 servers. However, if you are trying to consolidate storage, then many more servers are likely to want to connect to this storage box. I don't know the average size of a SAN today; if I were to guess, I would say around 80-100 ports. So with Spider, EMC is trying to kill the external (small switch) market. That's why the name Spider, as a pun on BRCD's SilkWorm. (Did they ever think about people who are arachnophobic? ;)
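The port arithmetic in this post, written out. The per-director figures and dual-attach assumption come from the post, and the Spider port counts from the WitSoundview excerpt above; nothing else is assumed.

```python
# Port arithmetic from the thread: 16 channel directors x 2 FC ports,
# dual-attached servers, versus the reported Spider fan-out figures.

channel_directors = 16
ports_per_director = 2
total_ports = channel_directors * ports_per_director   # 32 FC ports

# Each server attaches to both ports of a director for redundancy:
directly_attached_servers = total_ports // ports_per_director  # 16 servers

# Figures the WitSoundview report gives for the Spider fan-out:
spider_total_ports = 60
spider_active_ports = 44

print(f"direct attach: {directly_attached_servers} servers "
      f"({total_ports} ports)")
print(f"with Spider:   {spider_active_ports} active of "
      f"{spider_total_ports} ports")
```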

Now, EMC's Spider is a kludge and a competitive response to the new generation of storage systems out there from startups like Yotta Yotta, 3ParData, Cereva, etc. The storage folks are doing what was done in servers a while back. When you have devices hanging off a bus, the bus becomes the bottleneck. So you put a switch in place of the bus, and then all devices have full-bandwidth access to whatever they are talking to. The bus I am talking about here is the one inside the storage system; the disks are the devices. EMC's internal bus is still a bus, not a switched bus. The startups, on the other hand, came from the server background and said, hey, why not switch the bus? It will give increased throughput to the disks. EMC's Spider is a response to this, but the switch sits not on the internal bus but on the I/O bus, i.e. FC.
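The shared-bus versus switched-backplane argument can be modeled very crudely. All numbers below are made up, and a real fabric has blocking and protocol overheads this ignores; the sketch only shows why aggregate bandwidth scales so differently.

```python
# Toy model of aggregate throughput on a shared bus vs. a switched
# backplane. All figures are invented for illustration.

def aggregate_bandwidth(n_devices, link_gb_s, switched):
    """Shared bus: all devices contend for one link's worth of bandwidth.
    Switched fabric: each device gets a dedicated link (an idealized
    nonblocking fabric is assumed here for simplicity)."""
    if switched:
        return n_devices * link_gb_s
    return link_gb_s

disks = 32
link = 0.2  # GB/s per device link, illustrative
print(f"bus:    {aggregate_bandwidth(disks, link, switched=False):.1f} GB/s")
print(f"switch: {aggregate_bandwidth(disks, link, switched=True):.1f} GB/s")
```

With these made-up numbers the shared bus tops out at one link's worth of bandwidth no matter how many disks hang off it, while the switched design scales with the device count, which is the startups' whole argument.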

Hope I didn't confuse anybody more, but that is what Spider is. As you know, the spider eats the worm.



To: J Fieb who wrote (3755), 8/5/2001 4:52:53 PM
From: David A. Lethe
 
Does Spider have the single-point-of-failure common cache that Symmetrix has?

- David