Technology Stocks : McData (MCDT)

To: Gus (who started this subject)
From: Gus
Date: 8/29/2000 5:46:30 PM
 
Here's a look at the future of storage networking, courtesy of McDATA and ... Network Engines.

There's a Linux connection here because NENG licenses its clustering technology to VA Linux, which is starting to see some promising revenue growth. More interesting, however, is that one of the venture capital firms that funded Network Engines is Egan-Managed Capital, whose principals include John Egan, an EMC board member who spearheaded the herculean sales effort that propelled EMC from $190M in annual sales in 1990 to $1.6B in 1995. In market share terms, EMC went from 0% in 1990 to 41% in 1995, while IBM fell from over 80% to 31% over the same period.

IBM is also a licensee of NENG's clustering technology. That is notable given IBM's declaration that it will participate in the NAS market, and the way it seems to be positioning its considerable manufacturing assets to support InfiniBand, which is where all these server-side clustering technologies will converge in a standards-driven process.

Given the massive push that Intel and Microsoft are making in the $25,000-and-below server segment, it looks like these ultra-thin 1U (1.75-inch) rack-mounted servers (up to 2 processors, up to 1 GB of memory, up to 2 half-height disk drives, an embedded OS, and an embedded memory roadmap) will dominate the front end of the intelligent storage node architecture (SAS, SAN, or NAS, depending on scale and application) now being adopted by small, medium, and large organizations. One can already see consolidation cycles forming: as processors become more powerful, disk drives increase in density, memory becomes larger and faster, and network bandwidth increases, the functions now performed by separate inexpensive thin load-balancing servers, thin caching servers, thin database servers, and thin storage servers can be regularly consolidated and integrated.

NENG's technology can theoretically cluster up to 256 servers, and each standard-sized rack can accommodate up to 40 1U ultra-thin servers. Server clusters are extremely bullish for SANs, so it's not surprising to see the quarter-to-quarter growth of, say, Intel's $25,000-and-below processor business (50%+), DSS's low-end NAS business (50%+), and EMC's ESN (60%). NENG increased revenues 142%, from $6M in 2Q2000 to $14.5M in 3Q2000, which is typical of fast-growing companies below $50M in quarterly sales.
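As a quick sanity check on those figures, here is a back-of-the-envelope sketch in Python; the node, rack, and revenue numbers are the ones quoted above, and nothing else is assumed:

import math

MAX_CLUSTER_NODES = 256   # NENG's stated clustering ceiling
SERVERS_PER_RACK = 40     # 1U ultra-thin servers per standard rack

# Racks needed to house a maximal cluster
racks = math.ceil(MAX_CLUSTER_NODES / SERVERS_PER_RACK)
print(f"Racks for a {MAX_CLUSTER_NODES}-node cluster: {racks}")  # 7

# Sequential quarterly revenue growth, 2Q2000 -> 3Q2000 ($M)
q2, q3 = 6.0, 14.5
growth = (q3 - q2) / q2 * 100
print(f"Quarter-over-quarter growth: {growth:.0f}%")  # ~142%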

From McDATA's Resource Center:

TECHNOLOGY OVERVIEW

Since the advent of mainframes, computer scientists have constantly wrestled with architectures to keep I/O performance in step with increasing processor performance. Earlier efforts to improve data access involved tightly coupling file systems and I/O with operating systems. The rise of networked distributed computing brought the challenge of sharing files among heterogeneous computers running different operating systems. This gave rise to network-attached storage: servers independent of application servers, dedicated solely to serving files to users while offloading data management tasks from the overburdened application servers.

Faced with the lack of a practical technology to interconnect these servers, the industry gave birth to high-speed Fibre Channel technology, which in turn provided the impetus for a third-generation storage architecture, the storage area network (SAN), to emerge.

SANs create a dedicated network focused on universal any-to-any connectivity between storage and server nodes. They combine the high speed and data integrity of the mainframe bus and channel with the distance benefits of networks, while freeing the main LAN from backup duties that consume valuable bandwidth. Additionally, SANs are scalable, allowing capacity to grow in increments without disruption while leveraging existing investments in legacy platforms and existing data. SANs provide centralized control as well as remote data vaulting for disaster recovery: a network that offloads storage management tasks from application servers and speeds up the entire network, giving users the benefit of fast data access. SANs will eventually be at the core of every enterprise's data center, allowing companies to design centrally managed data centers that embrace and interconnect far-flung global SANs and provide service to all of their servers, no matter how distant or what operating systems they run.

This new focus on data storage as a key asset to manage is understandable given the rise in dollars being spent on storage, to the tune of 40-50% of total IT budgets in 1998. The growth in storage requirements is being fueled by a constant stream of new Internet, data warehousing, and ERP applications, and further stoked by the lure of cheap disk drives, now about 5 cents per MB at the end-user level.

Server-Attached Storage

To achieve fast data access, the data paths (or channels) between storage and processor were widened and their speeds boosted, with the storage bus kept adjacent to the processor bus for data/signal integrity. Server-attached storage (SAS) architectures dominated the scene for years, from mainframe processor channels to PC server bus slots and adapters.

One of the handicaps of traditional server-attached storage comes from the tight coupling between storage and the operating system. A general-purpose SAS server performs a variety of tasks concurrently, from running applications, manipulating databases, file/print serving, and providing communications to checking data integrity and many housekeeping functions. This means that every data access request from a client must continuously compete with these tasks. As the number of users accessing the common centralized storage grows, file access takes a back seat to the other tasks, leading to slow query response times. For years, one of the major jobs of IT administrators was keeping storage performance fine-tuned to achieve some minimum level of user query response time.

Another limitation of the SAS architecture was the restricted distance imposed by the interface: the wide parallel connections in mainframes and the wide differential parallel SCSI connections in servers limited the distance between servers and storage to a few meters. This led to the creation of raised-floor data centers but posed a severe constraint on interconnectivity in multi-site operations. One benefit of Fibre Channel connectivity that is not fully emphasized is the removal of the SCSI wiring interconnecting storage to servers, and the associated improvement in reliability, over and above the advantages of high-speed connectivity and increased distance between centrally managed data repositories and dispersed LAN servers.

NAS: Network-Attached Storage

Network-attached storage, as compared to server-attached storage, is a dedicated file server optimized to do one function only and do it well: file serving. NAS is system-independent, shareable storage connected directly to the network and accessible directly by any number of heterogeneous clients or other servers. NAS file servers are essentially stripped-down servers designed specifically for file serving, offloading file management services from the more expensive application servers.

With NAS, you can add storage at will without disrupting the network. When storage lived on the server (as in SAS), the administrator had to take the system down, install or upgrade the drives, and bring the system back up, creating a lot of unacceptable downtime. NAS is increasingly being installed to mitigate the downtime associated with SAS, and it is making inroads into the marketplace at different price, performance, and size levels. As business operations become more global and around-the-clock, more and more applications become mission-critical, demanding 24x7 uptime. Feeding this frenzy is the ubiquitous Internet: email messaging, around-the-clock customer browsing that demands ever richer content (from text to images to audio/video clips), virtual private networks for e-commerce, and data warehousing and ERP applications on the intranet.

NAS architectures generally sport a light proprietary OS kernel and file system able to operate independently of other applications, and are thus free of the overhead from extraneous drivers prevalent in SAS architectures. The NAS operating system is fully compatible with server operating systems such as NT, Unix, NetWare, etc. NAS devices or appliances are relatively easy to set up, turning painful storage upgrades into simple plug-and-play additions requiring no server downtime: plug the NAS server into the network, assign an IP address, set up access control lists and user permissions, and voila, all is done. This is because NAS server boards integrate the Ethernet connection, the SCSI (or Fibre Channel) controller-to-disk connections, the operating system, and the boot software all on one card. Although NAS devices have built-in security features, administrators generally choose to rely on the existing, robust security features of their networks. One of the main benefits of NAS is that it allows clients to access data directly without burdening the application servers.
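To make that setup sequence concrete, here is a minimal sketch in Python of the kind of sanity check an administrator might script after plugging in an appliance. The address is hypothetical, and the port assumes an NFS-style appliance (NFS commonly listens on TCP 2049; an SMB/CIFS appliance would use 445 instead):

import socket

NAS_ADDR = "192.168.1.50"   # hypothetical IP assigned to the appliance
NFS_PORT = 2049             # standard NFS port (assumption: NFS appliance)

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(3.0)
    try:
        s.connect((NAS_ADDR, NFS_PORT))
        print("NAS is answering; proceed to set ACLs and user permissions.")
    except OSError as exc:
        print(f"No answer from {NAS_ADDR}:{NFS_PORT}: {exc}")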

While NAS works well for documents, file manipulation, and transaction-based applications, it is not necessarily the most advantageous choice for database applications, because it is file-oriented. For high-bandwidth video applications, NAS also slows down: the shared network quickly gets clogged with multiple large files and becomes a bottleneck.
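A rough illustration of that bottleneck, assuming a shared 100 Mb/s Fast Ethernet segment, a typical ~5 Mb/s MPEG-2 video stream, and roughly 60% usable bandwidth under contention (all three figures are assumptions for illustration):

LINK_MBPS = 100     # shared Fast Ethernet segment
STREAM_MBPS = 5     # assumed per-client video stream rate
USABLE = 0.6        # assumed usable fraction under contention

max_streams = int(LINK_MBPS * USABLE / STREAM_MBPS)
print(f"Concurrent video streams before the segment clogs: ~{max_streams}")
# -> ~12 streams; past that, every client on the shared LAN suffers.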

SAN: Storage Area Network

A SAN (storage area network) is a dedicated, high-performance network for moving data between heterogeneous servers and storage resources. Being a separate, dedicated network, it avoids traffic conflicts between clients and servers. A Fibre Channel-based SAN combines the high performance of an I/O channel (IOPS and bandwidth) with the connectivity (distance) of a network.

To interconnect distributed systems over distance, IT system administrators have been forced to use Fast Ethernet links, which are terribly inefficient because of the per-packet overhead and high latency associated with small 1500-byte transmission packets. In smaller computer-room environments, unwieldy SCSI wires, or the copper cables used in mainframe environments, commonly connect storage to servers.
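To put a number on that packet overhead, here is a small sketch using standard Ethernet and TCP/IP framing figures (1500-byte MTU, 40 bytes of TCP/IP headers, and 38 bytes of Ethernet header, FCS, preamble, and inter-frame gap). The byte overhead comes out around 5%; the bigger practical cost is the per-frame interrupt and protocol processing each of those tens of thousands of frames imposes on the hosts:

MTU = 1500                   # Ethernet maximum transmission unit (bytes)
TCP_IP_HEADERS = 40          # 20-byte IP + 20-byte TCP, no options
ETH_OVERHEAD = 18 + 20       # header/FCS plus preamble and inter-frame gap

payload = MTU - TCP_IP_HEADERS
file_bytes = 100 * 2**20     # move a 100 MB file
frames = (file_bytes + payload - 1) // payload
wire_bytes = frames * (MTU + ETH_OVERHEAD)
overhead_pct = (wire_bytes - file_bytes) / wire_bytes * 100
print(f"{frames} frames, ~{overhead_pct:.1f}% of wire traffic is overhead")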

Adopting SAN technology, with Fibre Channel hubs and switches, allows high-speed server-to-storage, storage-to-storage, or server-to-server connectivity over a separate network infrastructure, mitigating the problems associated with existing network connectivity. SANs also allow cable lengths of up to 500 meters today, and up to 10 km in the future, so servers in different buildings can share external storage devices. And because the emerging SAN/VIA (Virtual Interface Architecture) interconnects have lower latency and less overhead than traditional LAN/WAN networks, they are ideally suited to clustering and mirroring/replication applications. The ability to connect existing SCSI devices to a SAN through SCSI-to-Fibre Channel bridges also preserves investments in existing storage devices, which will help fuel the growth of SAN infrastructure.

Factors Motivating SANs

Performance

A SAN enables concurrent access to disk or tape arrays by two or more servers at high speed across Fibre Channel, providing much enhanced system performance.

Availability

A SAN has disaster tolerance built in, since data can be mirrored over Fibre Channel up to 10 km away.
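The latency price of that distance is small but real. A quick sketch, assuming light propagates in fiber at roughly 2x10^8 m/s (about 5 microseconds per km), for the 500 m and 10 km figures mentioned above:

SPEED_IN_FIBER = 2.0e8       # m/s, approximate for optical fiber

for km in (0.5, 10):
    one_way_us = km * 1000 / SPEED_IN_FIBER * 1e6
    print(f"{km:>4} km: {one_way_us:.1f} us one way, "
          f"{2 * one_way_us:.0f} us added per synchronous mirrored write")
# -> 0.5 km costs ~5 us round trip; 10 km costs ~100 us, still far
#    below a disk seek, which is why 10 km mirroring is workable.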

Cost

Since a SAN is an independent network, the initial cost of setting up the infrastructure is higher, but the potential exists for rapid cost erosion as the SAN installed base grows.

Scalability

Scalability is natural to the SAN architecture, depending on the SAN network management tools used for interoperability. Like a LAN/WAN, a SAN can use a variety of technologies such as serial SCSI, ESCON, FICON, SSA, ATM, SONET, etc. This allows easy relocation of backup and restore operations, file migration, and data replication between heterogeneous environments.

Manageability

A SAN is data-centric and can operate as part of a server cluster; its thin protocol keeps latency low, and DMA into server RAM gives direct communication with the data.

Future of SAN

-- embedded and distributed file systems
-- intelligent SANs: smart file systems where a portion of the file system lives in the SAN
-- data routing
-- storage network management
-- concurrent processing and manipulation of intelligent data streams
-- server-independent storage tasks (peer-to-peer copying, peer-to-peer backup, automatic backup over Fibre Channel)
-- data sharing and data formatting
-- security: authorization, authentication, and access control

In the future, SAN technology may also interconnect with other SAN intranet sites worldwide, providing instantaneous replication of corporate data to those remote sites and creating a global information system.

Challenges

The ability to manage SANs is as vital as their speed and distance benefits. Unless storage management features are built into the operating systems, customers end up buying them from server vendors or from third parties who license them to the server vendors. To simplify management, SAN vendors need to adopt SNMP- and WBEM-type standards to monitor, alert on, and manage data on SAN networks. There is also a need for dynamic logical partitioning of the different network operating systems managed from the centralized console. And since a SAN comprises a number of different devices from different vendors, the big challenge system administrators face is making sure those devices are interoperable and that there is one centralized management tool with which the other management software packages are compatible.
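For a flavor of what SNMP-based monitoring looks like in practice, here is a minimal sketch using the third-party pysnmp library (pip install pysnmp). The device address and community string are hypothetical; sysDescr is a standard MIB-II object that any SNMP-capable switch should answer:

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

# Poll one object from a (hypothetical) SAN switch's SNMP agent.
error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData('public'),                     # hypothetical community
    UdpTransportTarget(('192.168.1.60', 161)),   # hypothetical switch
    ContextData(),
    ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)),
))

if error_indication:
    print(f"Poll failed: {error_indication}")
elif error_status:
    print(f"SNMP error: {error_status.prettyPrint()}")
else:
    for var_bind in var_binds:
        print(' = '.join(x.prettyPrint() for x in var_bind))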

The lack of SAN-optimized applications, management utilities, fault-tolerance features, and full plug-and-play interoperability at this time is the caveat for administrators to weigh before plunging into SAN adoption.

mcdata.com

Network Engines' Internet Appliance Architecture (IAA) is a new approach to building high-performance, small-footprint, scalable appliances targeted at dot-coms, ISPs, ASPs, and large corporate sites. This architecture enables us to combine all the hardware and software needed to install, manage, optimize, and expand your mission-critical e-business processes.

It is based on the concept that, in an Internet-based business world, appliances tuned to a particular function are essential to success. This approach saves you time, space, and money by reducing management costs, infrastructure expenses, and capital equipment outlays.

Inherent in our architecture is the realization that simple, comprehensive management of your installation is as important as, if not more important than, ease of setup. All of our products can be managed from any location using any standard web browser or SNMP-compliant application, enabling network managers to operate more efficiently without sacrificing site performance or availability. Backed by our comprehensive service and support programs, the Internet Appliance Architecture combines best-of-breed technologies to deliver a high-density, plug-and-play, low-overhead solution that scales easily to meet your growing e-business requirements.

networkengines.com/webengine.htm