....Intel has always shown a disdain for Fibre Channel because to them it is a low-volume chip market and thus has never piqued their interest.
Traditional data networkers, including Intel's LAN communications division, naturally want to accelerate the 5-year upgrade cycle of LAN/WAN gear and the 10-year upgrade cycle of corporate networking cables. Also, Intel naturally wants to accelerate the 1-3 year upgrade cycle for its PCs and servers in order to keep its growing portfolio of factories humming as it prepares to move deeper into the enterprise space.
.....I think there is a home for Fibre Channel, particularly at the high end. I think Fibre Channel will be a lot like we see with Token Ring. Ten years from now we may see a few percentage points of the SAN market still owned by Fibre Channel........."
Like the data networkers, Intel makes frequent comparisons between Fibre Channel (interconnect) and Token Ring (IBM LAN technology) without making a distinction between the architecture (SAN) and the interconnect (FC). As a result, they tend to completely ignore the invaluable lessons learned from the mainframe SANs that were deployed in the 1990s using ESCON - a half-duplex optical interconnect - lessons that are being applied today by heterogeneous SANs using Fibre Channel - a full-duplex interconnect.
The only similarity shared by ESCON SANs and Token Ring LANs is that both are deterministic networks. The two technologies, however, were developed to address different sets of problems: Token Ring LANs targeted the emerging market for distributed services, while ESCON SANs targeted the perennial I/O issues between transactional/analytical applications, mainframes, and mainframe storage. A deterministic network like Token Ring proved to be overkill in local area networks, particularly against the ethernuts, but a deterministic ESCON SAN became indispensable in the data center.
The two articles below, from EMC and BMC Software, show how the decoupling of server from storage - the virtualization of storage - is the necessary foundation for what EMC calls the virtualization of applications.
BMC Software competes with market-leader CA, HWP and Tivoli in a global systems management market that, according to IDC, is expected to grow from $13.8B in 2000 to $24.9B in 2005. While these vendors take varying approaches to enterprise-wide systems management, their basic frameworks consist of core system management, network management and storage management modules that serve as the foundation for more advanced modules related to applications and business processes.
By contrast, Gartner expects the storage management software market to grow from $5.3B in 2000 to $16.7B in 2005.
Industry Revenue Forecast By Product Type

Product Type               2000     2005
Data Management             44%      34%
Storage Infrastructure      37%      43%
Enterprise SRM              19%      23%
Total                      100%     100%
Industry                  $5.3B   $16.7B
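For context, the implied compound annual growth rates underline the gap. This is my own back-of-the-envelope arithmetic, using only the IDC and Gartner figures quoted above:

# Back-of-the-envelope CAGRs implied by the forecasts above (illustrative only)
def cagr(start, end, years=5):
    # compound annual growth rate between two revenue figures (in $B)
    return (end / start) ** (1.0 / years) - 1.0

print(f"Systems management (IDC, $13.8B -> $24.9B): {cagr(13.8, 24.9):.1%}")       # ~12.5%/yr
print(f"Storage mgmt software (Gartner, $5.3B -> $16.7B): {cagr(5.3, 16.7):.1%}")  # ~25.8%/yr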
EMC and Veritas are growing faster than the traditional storage management software vendors like Tivoli, HWP, CA and BMCS.
Designing an Enterprise Storage Network to Manage Growth and Change by Paul Ross
......It is important to note that ESN represents an inclusive strategy. It includes SAN, NAS, and direct attached connections because the needs of most practical situations cannot be met by a single connection topology. ESN encompasses both SANs and NAS in order to address the demands of realistic environments.....
.....Traditional information storage infrastructures are intimately coupled with the application infrastructure (including processors, software, etc.) they support.
What this means is that when you need to make a change in the application infrastructure, the information infrastructure is affected as well (and vice versa).
For example, consolidating a number of applications onto a single host platform would involve migration of applications as well as configuring and implementing the new (consolidated) host platform. In addition to this application infrastructure change, the storage infrastructure must be revisited to ensure that this new single host has access to the combined information of all the original applications. Ultimately, this coupling makes it more difficult for organizations to be flexible and to rapidly deploy, change, or remove applications to meet their evolving needs.......
...........Finally, the pace of change has accelerated to the point where any infrastructure put in place must either be flexible enough to address both current and anticipated future needs, or it must be rebuilt time after time as the needs of the organization change. Applications, databases, and the computers that host them are being re-engineered and redeployed. Ultimately, the content itself is the center around which the changing infrastructure revolves. Information storage architectures are tied to those applications, databases, and computers; and therefore, must also be flexible enough to enable the re-engineering and redeployment of an organization's assets.
What's Needed
Enterprise storage networks present the ability for organizations to de-couple their application and information infrastructures and thereby generate both the flexibility and speed demanded of information technology today.
Practically speaking, this means being able to move, add, and change application hosts and/or information storage systems without having to rebuild the infrastructure around the change. In order to achieve this goal, three design concepts should be considered. These are:
1) Functional (Enterprise) storage.
The information infrastructure must be functional enough to be capable of performing information-related tasks (such as data replication) independent of the application/host.
2) Virtualization of storage.
The information storage pool must appear (to the applications/hosts) as if it were contained in a single array, making information accessible from anywhere in the enterprise.
3) Virtualization of applications (hosts).
Application software must be host-agnostic; that is, it must be capable of being deployed on one processor today and on a different processor tomorrow.
While full implementation of these design concepts together would create the most flexible infrastructure, it is often neither workable nor necessary. More practically, organizations use the applicable portions of these design concepts to meet a balance of their current and anticipated future needs....
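To make the second and third concepts a bit more concrete, here is a rough sketch of my own (not EMC's design; the class and array names are hypothetical). The idea is simply that an application names a logical volume in a shared pool rather than a physical array, so the storage behind that volume can be changed without touching the application:

# Illustrative sketch only: logical volumes in a shared pool decouple the
# application/host from the physical arrays that actually hold the data.
class StoragePool:
    def __init__(self):
        self._map = {}                                   # logical volume -> physical array

    def provision(self, volume, array):
        self._map[volume] = array

    def migrate(self, volume, new_array):
        # data can be moved to another array without the application changing
        self._map[volume] = new_array

    def read(self, volume):
        return f"data for {volume} served from {self._map[volume]}"

class Application:
    # the application knows only a logical volume name, not a host or array
    def __init__(self, volume, pool):
        self.volume, self.pool = volume, pool

    def run(self):
        return self.pool.read(self.volume)

pool = StoragePool()
pool.provision("orders_db", array="array_01")            # hypothetical names
app = Application("orders_db", pool)
print(app.run())                                         # served from array_01

pool.migrate("orders_db", new_array="array_02")          # storage infrastructure change...
print(app.run())                                         # ...application code untouched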
ctrtab.com
Bad Transactions Happen To Good Databases
Historically, recovery was performed mostly because of disasters and hardware failures. However, this is simply not the case anymore. In fact, application-level recovery, not hardware recovery, needs to be performed the majority of the time. Industry analysts estimate that as much as 80% of application errors are due to application software failures and human error. Although hardware failures and operating system panics were common several years ago, today's operating systems are quite reliable, with a high mean time between failures.
In reality, except for disaster recovery tests, very few DBAs ever need to perform true disaster recovery. While media does fail, it's actually quite rare in this day and age. So, user errors and application failures are the most common causes of problems that require recovery. These types of errors are, therefore, also the primary cause of system unavailability. As databases grow in size and complexity, so, too, do the chances that bad transactions will corrupt the data on which your business depends.....
Transaction Recovery Defined
Simply stated, Transaction Recovery is the process of removing the undesired effects of specific transactions from the database. This statement, while simple on the surface, hides a bevy of complicated details.......
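As a rough illustration of what that involves - this is my own sketch, not BMC's implementation - one common approach is to walk the database log for the offending transaction in reverse and emit an inverse "undo" statement for each logged change (the log record format here is invented for the example):

# Illustrative sketch: generate "undo SQL" for one bad transaction by
# reversing each logged change; the log record format is simplified.
def undo_sql(log_records, bad_txn_id):
    undo = []
    for rec in reversed(log_records):                    # undo newest change first
        if rec["txn"] != bad_txn_id:
            continue
        table, op = rec["table"], rec["op"]
        if op == "INSERT":                               # inverse of INSERT is DELETE
            undo.append(f"DELETE FROM {table} WHERE id = {rec['after']['id']};")
        elif op == "DELETE":                             # inverse of DELETE is INSERT
            cols = ", ".join(rec["before"])
            vals = ", ".join(repr(v) for v in rec["before"].values())
            undo.append(f"INSERT INTO {table} ({cols}) VALUES ({vals});")
        elif op == "UPDATE":                             # inverse of UPDATE restores the before-image
            sets = ", ".join(f"{c} = {v!r}" for c, v in rec["before"].items() if c != "id")
            undo.append(f"UPDATE {table} SET {sets} WHERE id = {rec['before']['id']};")
    return undo

log = [
    {"txn": 17, "table": "accounts", "op": "UPDATE",
     "before": {"id": 5, "balance": 100}, "after": {"id": 5, "balance": 0}},
    {"txn": 17, "table": "audit", "op": "INSERT", "after": {"id": 9, "note": "drained"}},
]
for stmt in undo_sql(log, bad_txn_id=17):
    print(stmt)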