Technology Stocks : Frank Coluccio Technology Forum - ASAP


To: ftth who wrote (35) | 10/9/1999 9:35:00 PM
From: ftth
 
re:<<Other site executives expressed serious concerns about depending on the performance of a single hosting company or network. These executives believe that the best means of ensuring reliability is to spread the risk among multiple hosting and network providers.>>

So how do they do this, short of full, synchronized duplication? How do they synchronize their data across hosting companies so that if one goes down, the other doesn't miss a beat and has all the same up-to-date records? Or do they make nightly mirror copies, so that if one site goes down they just transfer the data from that day, and in a fairly short time the second site is up to date? One is an instantaneous switch; the other has a lag and is probably lower cost.
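To make those two options concrete, here's a rough sketch in Python (purely illustrative; the class and function names are mine, not anything a particular hosting company actually runs) of synchronous duplication versus a nightly mirror with a catch-up replay:

import datetime

class HostingSite:
    # Stand-in for one hosting provider's data store.
    def __init__(self, name):
        self.name = name
        self.records = {}        # key -> current value
        self.change_log = []     # (timestamp, key, value) since last mirror

    def write(self, key, value):
        self.records[key] = value
        self.change_log.append((datetime.datetime.utcnow(), key, value))

# Option 1: full, synchronized duplication.
# Every update is applied to both sites before it is acknowledged,
# so the backup never misses a beat if the primary goes down.
def synchronous_write(primary, backup, key, value):
    primary.write(key, value)
    backup.write(key, value)

# Option 2: nightly mirror, plus a catch-up replay on failure.
# Cheaper, but the backup lags by up to a day until the catch-up runs.
def nightly_mirror(primary, backup):
    backup.records = dict(primary.records)
    primary.change_log.clear()

def catch_up(primary, backup):
    # Replay only the changes made since the last mirror. This assumes
    # the primary's change log is still reachable after the failure,
    # which a severe outage may rule out.
    for _, key, value in primary.change_log:
        backup.records[key] = value

The catch-up path is where both the lag and the cost savings come from; the synchronous path is the instantaneous switch.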

The lagged method might not always work, depending on the nature of the failure. It may, however, be sufficient for certain types of sites.

Seems there's a good opportunity for hosting companies to have exclusive peering arrangements with other hosting companies. Each one backs the other up, and they both share the revenues. From a customer standpoint that's probably attractive too since the customers don't have to evaluate multiple companies for their backup.
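For what it's worth, the mechanics of such an arrangement don't have to be exotic. Here's a rough sketch (the host names, URLs, and handler functions are hypothetical, just to show the shape of it): each host watches its peer's health and serves the peer's customers while the peer is unreachable.

import urllib.request

# Hypothetical health-check endpoints published by each peered host.
PEERS = {
    "host-a": "https://status.host-a.example/health",
    "host-b": "https://status.host-b.example/health",
}

def peer_is_up(url, timeout=5):
    # Treat any connection error or non-200 reply as "down".
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def route_request(customer_home, serve_locally, hand_to_peer):
    # Serve from the customer's home host if it answers its health
    # check; otherwise the peering partner picks up the traffic, using
    # whatever data it received via the agreed replication scheme.
    if peer_is_up(PEERS[customer_home]):
        return serve_locally(customer_home)
    return hand_to_peer(customer_home)

The customer relationship stays with the home host; the peer steps in only while the health check is failing, which is the service the shared revenue would be paying for.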

The trouble is, of course, overcoming the greedy "I want it all" attitude. Seems the hosting companies should recognize the demand for uncorrelated backup and adopt this at some point. In fact, it may bring them more total customers than they could have had alone, given the convenience it provides over non-peered hosting companies.

Somewhere in this mess there may even be an opportunity for a satellite link handling strictly the backup traffic exchange between peered hosts. You can't get much more uncorrelated with ground-based problems than that. An enterprising satellite operator might just want to try to create such arrangements, rather than leaving it to the hosting companies to come to them.

dh



To: ftth who wrote (35) | 10/9/1999 11:01:00 PM
From: Frank A. Coluccio
 
[Some risk factors associated with distributed ASP architectures]

Dave, from your last post,

"The things we could get away with as a start-up simply are not acceptable anymore. We can't go down, so we can't depend on just one location or [hosting] vendor."

Some hosting vendors and ASPs, maybe a growing number of them, are now distributing their assets, networking their databases and points of access, and backing up user data as a standard matter of course. This means that their overall presence, even when distributed and backed up, takes on some unifying quality in order to ensure that they can scale the reach and I/O functionality for their users (both to the main repositories and to their backups).

Let's examine some similar frameworks. Frame relay and ATM networks, like SS7 networks, share some of the monolithic qualities of where ASPs are going. Whenever you can identify a single thread which supports an architecture, you have, in one way or another, a single point of failure. When the architecture is fragmented sufficiently to avoid this, you lose scalability and sacrifice profits. It's a dilemma that the established carriers have grappled with for decades.

At some point the question gets asked: Does this mean that something that goes wrong in an East Coast data center can also affect operations by the same provider in their West Coast data center?

The answer can be deduced from similar episodes in the common carrier model during recent history, specifically those affecting the frame relay networks of both AT&T and MCI. With or without Murphy's help, the answer is, "eventually, yes... bank on it." Of course, this isn't always the case, but no one can deny that it happens... perhaps too often.

There are some very good reasons for splitting the load among different providers, as it turns out, besides those which could be attributed to fire, backhoes, virus attacks, and natural disasters (although I don't mean to minimize these, either).

For example, a weekend software upgrade that goes awry. It happens. A new revision of user-authentication software is distributed electronically across all data centers between midnight and 4 a.m., takes a strange twist in the night, and erases all existing entitlements for some inexplicable reason.
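As a toy illustration of why that hurts (everything below is made up, just to show the mechanics): when one overnight job pushes the same revision to every data center, the distribution mechanism itself is the common thread, and a bad revision lands everywhere in the same maintenance window.

def push_revision(data_centers, new_revision):
    # One push, every center: if the revision is bad, each center
    # inherits the same defect at the same time.
    for dc in data_centers:
        dc["auth_software"] = new_revision["version"]
        if new_revision.get("erases_entitlements"):
            dc["entitlements"] = {}   # all existing entitlements gone

centers = [
    {"name": "east-coast", "entitlements": {"user-1": ["billing-app"]}},
    {"name": "west-coast", "entitlements": {"user-2": ["crm-app"]}},
]

# The 2.0 revision carries the "strange twist" described above.
push_revision(centers, {"version": "2.0", "erases_entitlements": True})

# Both coasts now hold empty entitlement tables; no independent copy
# survived, because nothing sat outside the common push.

Had the same users been split across two unaffiliated providers, that push could only have reached the half it was pointed at.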

At what cost do providers keep multiple sets of data in entirely isolated and protected repositories, and still allow free interchange between the two (or more) of them when required by users? Could MCI be expected to maintain two or three different Frame Relay networks without sacrificing the scalability of each of them? No. If they could, they would.

[Begin Sidebar: Ironically, one of MCI's (now WCOM's) problems is, or was, that they did in fact have the remnants of several different frame relay networks in place, inherited and patched together through multiple mergers. They melded these disparate networks under one roof in order to achieve synergy as each was acquired. To do otherwise would have meant sacrificing scalability, relegating each set of users to networks with lower degrees of overall reach, while keeping overhead costs higher than need be.

In the server hosting and ASP realms there are some analogs to this example, since they too must leverage single, albeit somewhat larger, common systems in order to avoid the pitfalls of duplication. In the extreme, one could envisage two or three or more primary centers per ASP, each with their own satellite centers downstream feeding their users. But to make this work with maximum efficiency, by taking advantage of economies of scale, there usually needs to be some unifying management scheme, and whenever you introduce one of these, you also find yourself potentially introducing a single point of failure to contend with, as well.

In the latter case what I am talking about is the network management software, the software distribution systems, and the automated provisioning and application-management software themselves. Beware the dates of new platform releases.
End Sidebar.]

Replicating data among multiple sites could prove too expensive a proposition for many of the startups and established providers alike, even if true separation from common threads [and threats] could be achieved. In due time we could easily begin seeing the pendulum swing the other way again. Some things which are being contemplated for outsourcing now might be seen as better served in house, or under one's own management and control, where reliability is truly paramount, even when the physical spaces needed to house the boxes are leased from others. Proving the past, once again, to be prologue. Comments welcome.

Regards, Frank Coluccio