Technology Stocks : Frank Coluccio Technology Forum - ASAP

To: ftth who wrote (35), 10/9/1999 11:01:00 PM
From: Frank A. Coluccio  Read Replies (1) of 1782
 
[Some risk factors associated with distributed ASP architectures]

Dave, from your last post,

"The things we could get away with as a start-up simply are not acceptable anymore. We can't go down, so we can't depend on just one location or [hosting] vendor."

Some hosting vendors and ASPs, maybe a growing number of them, are now distributing their assets, networking their databases and points of access, and backing up user data as a standard matter of course. This means that their overall presence, even when distributed and backed up, assumes some unifying quality in order to ensure that they can scale their users' reach and I/O, both to the main repositories and to the backups.
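To put a finer point on it, here's a minimal sketch, in Python, of the sort of write path that implies: every user write lands in a primary repository and is mirrored to a geographically separate backup. The site names and the in-memory stores are hypothetical stand-ins, nobody's actual system.

    # A toy sketch of primary-plus-backup replication. The site names and
    # the in-memory dictionaries are hypothetical stand-ins for real,
    # geographically separate data centers.

    class Repository:
        def __init__(self, name):
            self.name = name
            self.records = {}                 # user_id -> data

        def write(self, user_id, data):
            self.records[user_id] = data

        def read(self, user_id):
            return self.records.get(user_id)

    class ReplicatedStore:
        """Write to a primary site and mirror every write to the backups."""

        def __init__(self, primary, backups):
            self.primary = primary
            self.backups = backups

        def write(self, user_id, data):
            self.primary.write(user_id, data)
            for site in self.backups:
                site.write(user_id, data)     # synchronous mirror, for simplicity

        def read(self, user_id):
            # Fall back to a backup if the primary has lost the record.
            value = self.primary.read(user_id)
            if value is not None:
                return value
            for site in self.backups:
                value = site.read(user_id)
                if value is not None:
                    return value
            return None

    store = ReplicatedStore(Repository("east-coast"), [Repository("west-coast")])
    store.write("user42", {"plan": "hosted-app"})
    assert store.read("user42") == {"plan": "hosted-app"}

Notice that the replication logic is itself common to every site, which is precisely the kind of unifying thread I get into below.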

Let's examine some similar frameworks. Frame relay and ATM networks, like SS7 networks, share some of the monolithic qualities toward which ASPs are now heading. Whenever you can identify a single thread supporting an architecture, you have, in one way or another, a single point of failure. Fragment the architecture enough to avoid this, and you lose scalability and sacrifice profits. It's a dilemma the established carriers have grappled with for decades.

At some point the question gets asked: Does this mean that something that goes wrong in an East Coast data center can also affect operations by the same provider in their West Coast data center?

The answer can be deduced by looking at similar events in the common carrier model during recent history, specifically those affecting the frame relay networks of both AT&T and MCI. With or without Murphy's help, the answer is, "eventually, yes... bank on it." Of course, this isn't always the case, but no one can deny that it happens... perhaps too often.

There are some very good reasons, as it turns out, for splitting the load among different providers, beyond those that can be attributed to fire, backhoes, virus attacks, and natural disasters (not that I mean to minimize these, either).
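As a rough illustration, and with entirely made-up provider endpoints and health probes, splitting the load can be as unglamorous as routing each request to whichever vendor is currently answering:

    # A toy failover router across two hosting vendors. The provider
    # endpoints and the health-check URLs are hypothetical.

    import urllib.request

    PROVIDERS = [
        "https://app.vendor-a.example.com/health",
        "https://app.vendor-b.example.com/health",
    ]

    def is_healthy(url, timeout=2):
        """Return True if the provider's health endpoint answers with HTTP 200."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return response.status == 200
        except Exception:
            return False

    def pick_provider():
        """Send traffic to the first provider that passes its health check."""
        for url in PROVIDERS:
            if is_healthy(url):
                return url
        raise RuntimeError("no provider available; both vendors are down")

    # Usage: call pick_provider() before each batch of requests, or on a timer.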

For example, a weekend software upgrade that goes awry. It happens. A new revision of user-authentication software is distributed electronically across all data centers between midnight and 4 a.m., takes a strange twist in the night, and, for some inexplicable reason, erases all existing entitlements.
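The customary hedge against that kind of overnight surprise is to stage the distribution: push the new revision to one data center, spot-check that entitlements still resolve, and only then let it propagate. A minimal sketch along those lines, with invented center names and a stubbed-out entitlement lookup:

    # A toy staged rollout: deploy to one data center, spot-check that user
    # entitlements survived, and only then move on. The center names, the
    # sample users, and lookup_entitlements() are all invented placeholders.

    DATA_CENTERS = ["dc-east", "dc-central", "dc-west"]

    def deploy(center, release):
        print("pushing %s to %s" % (release, center))
        # ... the actual electronic distribution would happen here ...

    def lookup_entitlements(center, user_id):
        # Placeholder for a query against the center's authentication database.
        return ["hosted-app"]

    def entitlements_intact(center, sample_users):
        """Spot-check that a handful of known users still have their entitlements."""
        return all(lookup_entitlements(center, user) for user in sample_users)

    def staged_rollout(release, sample_users):
        for center in DATA_CENTERS:
            deploy(center, release)
            if not entitlements_intact(center, sample_users):
                raise RuntimeError(
                    "%s wiped entitlements at %s; halting rollout" % (release, center))
        print("rollout complete")

    staged_rollout("auth-software-rev-2", ["user42", "user77"])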

At what cost do providers keep multiple sets of data in entirely isolated and protected repositories, and still allow free interchange between the two (or more) of them when required by users? Could MCI be expected to maintain two or three different Frame Relay networks without sacrificing the scalability of each of them? No. If they could, they would.
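Part of that cost is bookkeeping that never goes away. Something along the lines of the sketch below, with a record layout invented purely for illustration, has to reconcile the isolated copies whenever users move between them, and it has to be right every time:

    # A toy reconciliation pass between two isolated repositories. Records
    # carry an "updated" counter and the newer copy wins; the record layout
    # is invented for illustration.

    def reconcile(site_a, site_b):
        """Bring two isolated copies into agreement; the newest record wins."""
        for user_id in set(site_a) | set(site_b):
            a, b = site_a.get(user_id), site_b.get(user_id)
            if a is None:
                site_a[user_id] = b
            elif b is None:
                site_b[user_id] = a
            elif a["updated"] >= b["updated"]:
                site_b[user_id] = a
            else:
                site_a[user_id] = b

    east = {"user42": {"plan": "hosted-app", "updated": 100}}
    west = {"user42": {"plan": "hosted-app-plus", "updated": 200},
            "user43": {"plan": "basic", "updated": 50}}
    reconcile(east, west)
    # Both sites now hold user43 and the newer copy of user42.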

[Begin Sidebar: Ironically, one of MCI's (now WCOM's) problems is, or was, that they did in fact have the remnants of several different Frame networks in place, inherited and patched together through multiple mergers. As each was acquired, they melded these disparate networks under one roof in order to achieve synergy. To do otherwise would have meant sacrificing scalability, relegating each set of users to networks with lower degrees of overall reach, while at the same time keeping overhead costs higher than they need be.

In the server hosting and ASP realms there are some analogs to this example, since they too must leverage single, albeit somewhat larger, common systems in order to avoid the pitfalls of duplication. In the extreme, one could envisage two or three or more primary centers per ASP, each with its own satellite centers downstream feeding its users. But to make this work with maximum efficiency, by taking advantage of economies of scale, there usually needs to be some unifying management scheme, and whenever you introduce one of those, you also find yourself potentially introducing a single point of failure to contend with, as well.

In the latter case, what I am talking about is the network management software, the software distribution mechanism, and the automated provisioning and application management software themselves. Beware the dates of new platform releases.
End Sidebar.]

Replicating data among multiple sites could prove too expensive a proposition for many of the startups and established providers alike, even if true separation from common threads [and threats] could be achieved. In due time we could easily begin seeing the pendulum swing the other way again. Some things now being contemplated for outsourcing might come to be seen as better served in house, under one's own management and control, where reliability is truly paramount, even when the physical space needed to house the boxes is leased from others. Proving the past, once again, to be prologue. Comments welcome.

Regards, Frank Coluccio