Technology Stocks : Son of SAN - Storage Networking Technologies


To: D. K. G. who wrote (2458)11/22/2000 10:08:03 PM
From: J Fieb  Read Replies (1) | Respond to of 4808
 
OT? Denis G., back not too long ago, a guy named Tom Clark was Vixel's most visible guy...

SNIA Interoperability Committee co-chairs Tom Clark of Vixel Corporation and Sheila Childs of Storage Provider said the SNW Interoperability Demo will be the culmination of months' worth of work by volunteers representing every segment of the storage networking industry.

That was Oct. 24...

Then in that storage SNIA article they said....

Message 14865619

SearchStorage sat down with Tom Clark, the newly elected member of the SNIA Board of Directors, to talk about the inner workings of the group, the reasons behind its success, and what users can expect to see in the near future. Clark also serves as co-chair of the SNIA Interoperability Committee and Technical Marketing Director for Nishan Systems, Inc., a storage company specializing in Storage over IP technology.

or this...

searchStorage.com: What do your day-to-day duties for the SNIA entail? For Nishan Systems?
Clark: As a board member, I am involved in setting SNIA's strategic objectives and ensuring that our resources and finances are effectively allocated. The board meets every other month for strategic planning and to discuss and approve initiatives that will help drive the market. When you’re on the board you have to wear several hats. You really have to be concerned with the global issues that rise above [any vendor issues] so that everyone has the opportunity to play together. As a chair of the SNIA Interoperability Committee, I work with my co-chair Sheila Childs on standards compliance test suite generation and organization of interoperability demos for events like Storage Networking World (SNW). Given the scope of our membership, trying to coordinate interoperability activities is always a challenge. SNIA Board and Interop activity takes at least a third of my time. On Nishan's behalf specifically, I'll be working within SNIA to promote standardization of issues related to Storage over IP. For my day job at Nishan, I'll be working with customers and partners to define applications for Storage over IP, help represent Nishan at industry conferences and events, continue SAN evangelizing around the world, and develop white papers on SoIP solutions. Nishan has assured me there will be plenty to do.

What happened to the Vixel job?

A month later. Is this the same guy, leaving a sinking ship for another shot at the big time? Interesting.



To: D. K. G. who wrote (2458)11/23/2000 1:27:24 AM
From: Douglas Nordgren  Read Replies (1) | Respond to of 4808
 
"Our business model is different from theirs in that we OEM our ASICs,"

trusan oems asics from qlogic. paladin is impressive reading, very similar to what hitachi did with the lightning at the drive end, but with parallel processing added at the server. tom isakovich was an early denizen of this thread. glad to see he has moved on to bigger and better things -g-. way to go tom.

speaking of lightning, the new hitachi array was the hit of the fctc, especially when it got around that the movers had dropped it off the trailer but the only thing that needed replacing was a side panel. now, if only hitachi could sell...

douglas



To: D. K. G. who wrote (2458)11/25/2000 9:48:17 AM
From: J Fieb  Read Replies (1) | Respond to of 4808
 
IBM keeps trying...

IBM Looks For Storage Room In Government Corridors
(11/22/00, 5:32 p.m. ET) By Kim Renay Anderson, TechWeb News
In an effort to carve itself a bigger portion of the computer storage pie, IBM Corp. is chipping away at the decided market advantage held by its major competitor, EMC Corp.

IBM (stock: IBM) recently won a $500,000 contract to set up computer storage for the city of Boston, the latest contract it has signed with a government entity.

The city is using IBM's Shark Enterprise Storage Server to handle payroll and human resources records. Previously, IBM had signed public-sector contracts with the Food and Drug Administration and the Department of Defense.

"The public sector is important because it's not only a base of influential people, but you can have an impact on a great deal of people in this sector," said Clint Roswell, an IBM spokesman.

Jack Malinksy, Boston's director of operational technology, said IBM won out over a bid from EMC (stock: EMC) because Shark was immediately available. EMC couldn't deliver its system until early next year.

Analysts say that while IBM's public push is worthwhile, the company has a lot of ground to cover in challenging EMC. Years ago, IBM controlled the storage segment, but EMC has since overtaken it.

"If IBM is going to make headway into EMC's domain, it must grow its storage business as well as take customers," said Jack Scott, analyst at the Evaluator Group in Denver.

Smart move by Compaq, I think...

Compaq Setting Up Home For SNIA
(11/22/00, 4:56 p.m. ET) TechWeb News
Compaq Computer Corp. (stock: CPQ) is building a home for the Storage Networking Industry Association. SNIA said Wednesday that construction has begun on its 14,000-square-foot building in Colorado Springs, Colo. Scheduled to open in January, the Storage Networking Industry Association Technology Center will showcase the development and testing of advanced network storage technologies that require interoperability of multi-vendor storage products. Currently in Mountain View, Calif., SNIA is a nonprofit organization of companies and individuals in the storage industry.

DAFS...

Section: REVIEWS
--------------------------------------------------------------------------------
DAFS: Just Daffy?
OLIVER RIST

Last year, SANs were the hottest thing going. Just last week, a local network designer scoffed that even thinking about NAS or SAN projects was folly, as both were dead issues. Years ago, I heard that the Internet was a fad, then I heard it would soon be dead due to too much traffic. And last year, e-commerce was a sure road to gold, while today they're all gasping their last breath.

I believe that if the idea is good, it'll survive, and network storage is not only a great idea, it's an evolutionary one. Without question, this is how internetworking needs to evolve.

There are only three basic problems that make network storage (both SAN and NAS) difficult right now. Most glaring is that it requires operating system support, anathema in heterogeneous networks. The proposed benefit of any network storage solution is that the storage acts as a generic network resource regardless of client platform or connectivity. Additionally, there is a performance requirement: typical network read/write requests add significant overhead to the same requests issued locally. That means that high-speed transaction systems (a big target for network storage) usually want better throughput than a NAS/SAN implementation can supply. And, lastly, there are storage management schemes. Depending on performance requirements and underlying hardware, this chore can fall to such a granular level as to be heavily burdensome in both time and talent.

Do these difficulties mean SAN and NAS initiatives are phantom rainbows? Heck, no! You just need to wait until technologists catch up to the concept, and this past summer, it looks like they did. The acronym is DAFS, for the Direct Access File System protocol. DAFS is based around the Virtual Interface (VI) technology originally developed by Compaq, Intel, and Microsoft to easily connect server clusters. What sets DAFS apart is that it addresses the performance issue of network storage as well as management issues.

By basing itself around VI, the DAFS protocol can handle direct memory-to-memory bulk data transfers that bypass the overhead associated with typical network read/write requests. It will also be offered to application developers as a file-access library, which means applications can be built from the ground up to be network storage-aware, essentially letting them send data transfers directly to the storage system regardless of any underlying OS requirements. And this isn't pie in the sky, either; the required SDK was made available by the DAFS Collaborative in late October at www.dafscollaborative.org.

Admittedly, DAFS is designed as a local area network storage protocol since the VI architecture requires an optimized local network like FE, FC or GbE. That makes it difficult to implement over the Web, although a clever front-end server load balancing and switching infrastructure should be able to compensate. Bottom line: DAFS may be the most forward-thinking enabling technology to network storage that I've seen to date. But even without it, I would never have dismissed network storage as a dead issue. Evolution takes time.
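The memory-to-memory idea the article describes can be sketched in a few lines. This is purely illustrative, not the DAFS SDK or the VI API; all names here are invented. It contrasts a conventional network read, which copies data through intermediate buffers, with a VI/DAFS-style transfer that lands directly in a buffer the application registered in advance.

```python
# Illustrative sketch only (not the DAFS SDK); all names are invented.

class Storage:
    """Stand-in for a network-attached storage device."""
    def __init__(self, blocks):
        self.blocks = blocks  # block id -> bytes

def conventional_read(storage, block_id):
    # Kernel-mediated path: data is copied into a transport buffer,
    # then copied again into the application's buffer (two copies).
    transport_buffer = bytes(storage.blocks[block_id])  # copy 1
    app_buffer = bytes(transport_buffer)                # copy 2
    return app_buffer

def direct_read(storage, block_id, registered_buffer):
    # VI-style path: the transport writes straight into memory the
    # application registered up front -- one copy, no kernel detour.
    data = storage.blocks[block_id]
    registered_buffer[:len(data)] = data                # single copy
    return bytes(registered_buffer[:len(data)])

storage = Storage({7: b"payroll-records"})
buf = bytearray(64)  # pre-registered application buffer
assert conventional_read(storage, 7) == direct_read(storage, 7, buf)
```

Both paths return the same data; the point of the direct path is skipping the intermediate copy and the per-request kernel overhead.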

Oliver Rist is contributing technical editor at InternetWeek and vice president of product development at RCash In The Realm. He can be reached at orist@cmp.com.



To: D. K. G. who wrote (2458)11/25/2000 10:18:45 AM
From: J Fieb  Respond to of 4808
 
I'll have to read that one a few more times before it all sinks in. Metadata is something we should continue to track...

Start your metadata engines!

In the lingua franca of storage area networking, server-less backup is the "killer app," but coordinated, concurrent data sharing is the "Holy Grail."


By John Webster


Data sharing was a popular discussion topic a few years ago, but it lost cachet when it became all-too-obvious that there were monumental barriers to overcome to accomplish data sharing among heterogeneous hosts, switches, and storage subsystems. However, the current maturation and acceptance of SAN technology gives us a previously missing framework for open, heterogeneous data sharing. SANs are here and now, and it's time to put the data-sharing goal back into circulation.


In addition to the SAN framework, getting to heterogeneous data sharing will require the creation of SAN file systems that are accessible by all applications, regardless of what host or operating system they run on. SAN file systems define how data is stored, and they contain the rules by which any host can access, retrieve, and manipulate the data stored on any SAN-attached device. As such, SAN file systems are critical enablers of heterogeneous data sharing.


In this first installment of a series of articles, we look at SAN file-system metadata and the engines that use metadata to control application data access.


Data about data

As a first step to sharing data among multiple hosts in a SAN environment, logical unit number (LUN) masking techniques have been used to create logical partitions within networked storage pools.


LUN masking allows applications sharing resources on the SAN to "see" only the disk volumes, file systems, and files assigned to them. This supports sharing a physical infrastructure and management among various hosts and applications, but does not allow for simultaneous multi-host access to files and the data within them.
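A minimal sketch of the masking table the paragraph describes: each host (identified by a world-wide name on the fabric) is mapped to the set of LUNs it is allowed to "see." The WWNs and pool layout here are invented for illustration.

```python
# Illustrative LUN-masking table; WWNs and assignments are invented.
masking_table = {
    "10:00:00:00:c9:2b:aa:01": {0, 1, 2},   # e.g. payroll host
    "10:00:00:00:c9:2b:aa:02": {3, 4},      # e.g. HR host
}

def visible_luns(host_wwn, all_luns):
    """Return only the LUNs this host is permitted to address;
    everything else in the shared pool stays invisible to it."""
    allowed = masking_table.get(host_wwn, set())
    return sorted(lun for lun in all_luns if lun in allowed)

print(visible_luns("10:00:00:00:c9:2b:aa:01", range(5)))  # -> [0, 1, 2]
```

Note that this partitions the pool per host; it says nothing about two hosts touching the same file at once, which is the gap the metadata engines below fill.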


To accomplish this, we need a more sophisticated facility for locking files, records, or data blocks when they are being accessed by an application, and afterwards unlocking them when they become free. Metadata engines answer the call, underpinning the creation of a SAN-wide file system.


Metadata engines are aware of two types of data: actual user or application data, and information that describes the structure and state of the file system at any given point. The information describing the file system (data about data, if you will) is called "metadata." While not all metadata engines are equal, they generally create and maintain:



An inventory of all stored objects (files, databases, digitized images, etc.) that are to be made visible to a heterogeneous set of users and applications.
A set of interrelationships between the hosts, users, applications, and stored objects. These relationships include both security information and concurrency control information (primarily locks).
A repository for policy information used to control placement of files under control of the metadata engine.
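The three stores in the list above can be sketched as plain data structures. This is a schematic of what the article says an engine maintains, not any vendor's schema; all field names, hosts, and file names are invented.

```python
# Schematic of a metadata engine's stores; all names are invented.
metadata = {
    # 1. Inventory of stored objects visible to heterogeneous clients
    "inventory": {
        "payroll.db":    {"type": "database",        "location": "lun-3"},
        "scan-0042.img": {"type": "digitized image", "location": "lun-7"},
    },
    # 2. Interrelationships: security plus concurrency (lock) state
    "relationships": {
        ("host-a", "payroll.db"): {"access": "rw", "lock": "held"},
        ("host-b", "payroll.db"): {"access": "ro", "lock": None},
    },
    # 3. Policy repository controlling where files get placed
    "policies": {
        "*.db":  {"place_on": "mirrored-pool"},
        "*.img": {"place_on": "streaming-pool"},
    },
}

# host-b can read payroll.db but must wait for host-a's lock to clear
assert metadata["relationships"][("host-a", "payroll.db")]["lock"] == "held"
```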



In current practice, metadata engines also make a distinction between the protocols used to transfer data (SCSI command set, usually running over a Fibre Channel physical layer) and metadata (usually running over a separate network, such as a TCP/IP LAN). Both data and control traffic could flow over the Fibre Channel SAN infrastructure, but most implementations split the data path from the metadata-based control path.


Four engine models

Conceptually, each of the four methods of implementing metadata engines is at a different stage of development.


Metadata appliances
To accomplish SAN data sharing, the metadata engine implemented in a metadata appliance receives I/O requests to open a file from an application, permits access to the file while temporarily locking out others, and returns the file to an unlocked status when the I/O completes. All communication between the host application and the SAN regarding file access is typically passed over the IP network, while the actual data requested by the applications moves over the SAN.


Appliance-based metadata engines function as active participants in the SAN. However, they do not actually service any I/Os. Rather, they manage the path of an I/O from application to storage devices and back again through the SAN infrastructure.
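The lock-then-release cycle described above can be modeled in a few lines. This is a behavioral sketch only: a real appliance coordinates hosts over an IP control network while data flows over the SAN, whereas this models the locking with threads in one process; the class and method names are invented.

```python
import threading

# Behavioral sketch of a metadata appliance's locking role; names invented.
class MetadataEngine:
    def __init__(self):
        self._locks = {}                       # file name -> lock
        self._table_lock = threading.Lock()    # guards the table itself

    def open_exclusive(self, filename):
        """Grant exclusive access; block while another host holds the file."""
        with self._table_lock:
            lock = self._locks.setdefault(filename, threading.Lock())
        lock.acquire()

    def close(self, filename):
        """I/O complete: return the file to unlocked status."""
        self._locks[filename].release()

engine = MetadataEngine()
engine.open_exclusive("/san/payroll.db")
# ... the application moves its data directly over the SAN data path ...
engine.close("/san/payroll.db")
```

The key point the sketch preserves is the split the article describes: the engine arbitrates who may touch a file, but the file's bytes never pass through it.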


Current incarnations include Tivoli's SANergy version of the metadata appliance model. SANergy was released originally by Mercury Computer Systems in 1998 for SAN-based data sharing in streaming environments.


Cluster file systems
Though most "clusters" historically found in Unix and Windows environments are really just availability boosters based on simple "failover" techniques, more sophisticated "concurrent sharing" implementations have emerged. Coupled with back-end SAN storage, cluster file systems (CFS) create a true SAN file system.


Examples include Compaq's TruCluster CFS and SGI's Irix CXFS. The key limitation is that because clusters are almost always homogeneous, the any-host-can-access-any-data generality associated with SAN technology is often unavailable. Veritas' emerging SANPoint product line will, however, extend the CFS approach on a heterogeneous basis.


Smart switches
Smart switches use the SAN fabric to transfer data and communicate metadata control messages. Locking and coordination mechanisms are implemented by the SAN fabric itself, typically under the control of a dedicated processor within, or attached directly to, the switch. In addition, the metadata engine may include a large local memory that can be used as an onboard cache, allowing the switch-attached processor to service some I/Os directly.


Gadzoox Networks (Axxess) and DataCore Software (SANsymphony) have teamed up to introduce the first intelligent switch.


Distributed option
The application of a truly distributed processing model to SAN-based metadata remains in its infancy. In this approach, metadata, including the awareness of locking mechanisms, is distributed across multiple SAN components. This can be as basic as spreading the metadata coordination across multiple cluster hosts in a CFS configuration, or as radical as distributing coordination responsibilities and intelligence across application hosts, controllers, switches, and storage subsystems.


The distributed CFS approach will likely be available in the next generation of CFS products, but the everything-distributed model is more distant. Many decisions have yet to be made about what protocols will be used or how cache could be implemented. The promise of the distributed model lies in its potential to be the most scalable and most recoverable of the various implementations. However, it is also by far the most complex model to implement.


Tricord Systems is developing an implementation of the distributed model, but hasn't made any product announcements yet.


Once around the track

Using metadata engines to manage I/O makes the LUN masking and zoning techniques common in most of today's SAN implementations appear quite primitive. Yet, it has made good sense to have some simple, practical results to show along the way to SAN's Holy Grail. Some metadata engines and SAN file systems have now inched their way into production, but most of the emerging players have yet to even make it to the test track.


The winners in SAN data sharing will be those who make the most efficient use of their metadata engines. The models they use must exhibit low latency, high performance, and high resilience. Ladies and gentlemen, start your engines!


John Webster is a senior analyst with Illuminata (www.illuminata.com), a research and consulting firm in Nashua, NH.




Interesting. Clearly, in the distributed model QLGC would seem to have some things to offer compared to the other players. Can QLGC make some serious progress in this arena, or do the OEMs have the responsibility to make progress here? Or is it partnerships? I vote for the last: prolonged partnerships to work on the big issues.
Forward thinking companies should give QLGC serious consideration.