Dino (et al),
Please allow me to offer some opinions on this topic, and answer at least one of your questions concerning Vienna.
To add to what others have stated re Vienna, and to partially corroborate your point concerning their targeting carriers and ISPs, I am aware of one implementation of notable size to date. Since it's only one situation that I'm aware of, this is not a comprehensive or statistically meaningful assessment by any measure, but it serves a useful purpose for this discussion nonetheless.
In this case, a long-distance carrier reseller is supporting end users using the Vienna product on the West Coast, let's say, using VoIP to connect them to a Class 5 switch and several PBXs on the East Coast. The end sections (loops) are being provisioned by CLECs, inclusive of a Class 5 switch on one end, and the long haul portion is supported by VoIP over the Internet.
These West Coast users are effectively pulling "local" dial tone, from three thousand miles away, from switches located on the East Coast, through an Internet backbone connection. The Internet connection is being provided by a commercial grade ISP, who happens to be employing ATM underneath the IP Layer. There is a reason why I'm being so descriptive. Give me a moment.
While the quality of these calls is reported to be toll, or commercial grade, in all fairness and in an attempt to be thorough, I don't know what the class of service or the QoS stipulation is on the ATM component, or what the percent utilization on the backbone or the PVC is, or what its rated capacity is, or what else the pipe is being used for. And I don't know what kind of service contract they have with the user, or what the packet-cell discard levels are, or, if, in fact, discards are even taking place. There's probably much more that I don't know about this connection arrangement. [[Now, the reason I mention these qualifiers is to make the point that there's more engineering involved with something like this than ordering a Swiss Cheese Sandwich. The same will hold true for the common carriers, to the power Z, as they re-engineer the public network to a VoIP model.]]
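To give a flavor of the kind of engineering I'm alluding to, here's a back-of-the-envelope sketch of what one voice call costs in bandwidth once it's wrapped in RTP/UDP/IP and then carried over ATM. The header sizes and the ATM "cell tax" are standard figures; the 20 ms packetization interval is a common default, not anything reported about this particular network.

```python
import math

# Per-call bandwidth for G.711 voice over RTP/UDP/IP, then over ATM/AAL5.
CODEC_BPS = 64_000   # G.711 PCM codec rate
PACKET_MS = 20       # voice payload per packet (a typical default, assumed here)

payload = CODEC_BPS // 8 * PACKET_MS // 1000   # 160 bytes of voice per packet
ip_packet = payload + 12 + 8 + 20              # + RTP(12) + UDP(8) + IP(20) headers
pps = 1000 // PACKET_MS                        # 50 packets per second

ip_bps = ip_packet * 8 * pps                   # bandwidth at the IP layer

# ATM cell tax: AAL5 adds an 8-byte trailer, pads to a 48-byte multiple,
# and every 48-byte chunk rides in a 53-byte cell.
cells = math.ceil((ip_packet + 8) / 48)
atm_bps = cells * 53 * 8 * pps                 # bandwidth on the ATM PVC

print(ip_bps, atm_bps)   # 80000 106000
```

In other words, a 64 kbps voice channel swells to 80 kbps at the IP layer and to roughly 106 kbps on the wire once the ATM overhead is counted, before you've said a word about PVC utilization or discard levels.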
Consequently, I don't have any idea of how this or similar products perform under congestion-induced stress. For all I know, it may recover very nicely. But it's a good starting point for conversational purposes, and it answers, hopefully, one of your questions. From the reports that I have heard [and please keep in mind that this is, in fact, second hand, albeit from a consultant who I respect and who has no ax to grind on this matter] it's performing quite nicely, on a par with traditional tie line services, and is allowing the user organization to save money. And that is what it's all about.
In a general way, this should be a good real-life indication of how VoIP gateways might serve as a niche solution, and how they can be implemented in a corporate network setting. I don't necessarily know (maybe due to my own ignorance of the facts in this case) that this vendor has any inherent or unique advantages over the others, per se, that couldn't be satisfied by, say, VocalTec's, or Franklin's or Netspeak's or others' gateways, as well. There may be more to the actual implementation that I am unaware of in this case, though, so I remain ignorant on this point.
What is important, though, is that applications like this are successful, and lend credence to this class of technology and the market segment as a whole. The rising tide, and all that.
While this is all well and good, it's also important to understand where the existing limitations are in VoIP, and the Internet in general, and what areas need attention and improvement. [What follows, as well as most of the above, is generic, and doesn't apply to any particular vendor.]
It appears that the VoIP gateway vendors will fare best in the early stages in the "virtual tie line" or "virtual inter-machine trunk" market, where, using Intranets and possibly VPNs, IP will be used to supersede static voice circuits which serve as the links between switching "machines," i.e., tie lines.
Secondly, they may also serve to satisfy PSTN-like applications, specifically as a means of connecting multiple groups of users in different cities using local PSTN connections and the Internet in a limited way, while improvements and standards are implemented and incorporated into their wares to allow them to be used in a more global fashion as the state of the art improves.
I say "in a limited way" because, presently, in order to offset the routing and enhanced-feature limitations inherent in most gateways, as they apply to the PSTN's routing and enhanced services database libraries, many more boxes would need to be deployed to take advantage of the "local calling rate" strategy (rates proximate to the location of each box) being used today. I'm not singling out any particular vendor here, since this applies to all of them currently.
This is in contrast to an any-to-any, end-to-end connection capability (read: direct substitution of any conceivable calling combination that can be achieved over the PSTN today) regardless of where the boxes are and where the call parties happen to be placed. Since I am aware of some progress in this area, I invite corrections or comments on this point.
In any event, it will be the evolution of the PSTN, I have to surmise, that will ultimately handle and correct this shortcoming over time.
Another possibility, aside from the PSTN "evolution" scenario, although less likely, would be a wholesale paradigm "substitution" of the existing call routing and enhanced features provisioning, deferring to IETF rules and RFCs and CTI-like code, exclusively. In other words, substituting a new set of rules for those of the established carriers (ANSI/BELLCORE/ITU). What's the likelihood of this happening?
More than likely, we will see a blending of these approaches take place over time, rendering a hybrid solution, heavily weighted with yesterday's rules, only shifted and elevated to a new Layer in the protocol stack. But it will still continue to use the existing subscriber- and advanced-information databases that are being used today, until these can likewise undergo gradual transformations and be absorbed into the IP rules domain.
Question: Does anyone here think that the wholesale substitution scenario will work within a short time horizon, say over the next two or three years? If not, how long will it take, then?
"Good Luck," is all I have to say to anyone who would like to attempt doing this overnight. I don't know when it'll happen. More to the point, IMO it probably WON'T happen the way we envisage it "today," because the drivers will keep changing, constantly, as this evolutionary process is under way. In two years' time there will be new attributes and newly conceptualized networking techniques that we haven't even heard of yet today. These new aspects would simply become part of the evolutionary process. This would actually be a truer characterization of evolution, as it would even take into account things we know nothing about today.
Back to the present, global standards need to be implemented and adhered to at the ITU level on down, so that interoperability can be achieved between now-disparate approaches to the same goals. Fortunately, to this end, there's already a highly cultivated draft being mulled over and reviewed at the ITU IMTC level as we type, but I'm not sure when the expected date for ratification is, or if or when further calls for comments are due, or how far along they actually are. I seem to recall the next meeting is in December. Anyone want to clarify this for us? Dino? Atin?
Until these standards are firm, and even beyond, gateways can continue to be used to effect the savings of large sums of cash in certain ways, and should be employed for that reason where technical conditions warrant and where the ROIs prove in. But users will still be left with certain service-level limitations for the time being. When this occurs, they can always default to the P$TN.
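The "ROIs prove in" test above is simple arithmetic in its crudest form. Here's a sketch with made-up illustrative numbers (the rates and costs are mine, not anything from the case described): per-minute toll charges avoided versus the flat monthly cost of the gateways and the IP pipe.

```python
# Back-of-the-envelope toll-bypass savings. All inputs are hypothetical
# illustrations; plug in your own traffic volumes and tariffs.
def monthly_savings(minutes, toll_rate_per_min, flat_ip_cost):
    """Toll charges avoided minus the flat monthly cost of gateways + bandwidth."""
    return minutes * toll_rate_per_min - flat_ip_cost

# e.g. 100,000 min/month at $0.10/min vs. $4,000/month of gateway + IP capacity
print(monthly_savings(100_000, 0.10, 4_000))   # 6000.0
```

The catch, of course, is that the flat-cost side quietly assumes the pipe and the gateways can actually carry that load at acceptable quality, which brings us right back to the service-level limitations noted above.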
Having said all that, I am inclined to agree with Passmore at Decisys about how this will all play out in a free-wheeling public Internet scenario: Not likely to be very promising for very long unless Tiering and QoS parameters are invoked. Once you do that, however, are you still using the all-you-can-eat variety? Not really. Not by today's conventions, at least. Maybe tomorrow it will be the norm. Can't say right now.
This "show down" is still a big question mark to me, and IMO will probably "never" be answered fully in a manner that we can conceive of today. The reason for this is elusive, by definition, but explainable: Abrupt changes and *advances* will take shape at the application level, unpredictably, that will only *aggravate* the congestion situation due to higher bandwidth requirements, while negating recently implemented congestion mitigation controls and processes. In turn, piecemeal solutions will be put in place that will alter the landscape by some measure. In the process, some of these changes and incremental measures will preclude or entirely replace the conditions that we know of today, and render a new playing field on a moving window basis. The "show down" as we think of it today, therefore, may never occur.
Let's look at it another way. If we froze the clock for a moment in time, I would tend to give it (public calling patterns) a better-than-even chance on the Public Internet "at this time," and for the next year or so, with certain reservations about the future, due to the unpredictability associated with the "all-you-can-eat" variety of Internet existence.
Bandwidth tiering and sloped QoS levels will offset this on a sliding-scale cost-performance basis, however; of that I'm fairly certain.
Why will things otherwise get worse in terms of congestion? For one thing, I don't know what the effects of broadcast storms will be as multimedia applications (videos, speeches, music, American Webstand, etc.) begin to proliferate and become commonplace on the "free" Internet (they haven't even started yet, so that you'd notice, or "You Ain't Seen Nuttin Yet!"), nor do I understand or pretend to understand what the potential effects of other less-structured network phenomena might be. And the levels of traffic double or triple... what is it, every eight months now? ...
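Just to put that doubling figure in perspective (and treat the eight-month period as the rough recollection it is, not a measured fact), the compounding works out as follows:

```python
# Traffic multiplier under a fixed doubling period. The eight-month figure
# is the one recalled above; it is an assumption, not a measurement.
def traffic_multiplier(months, doubling_period_months=8):
    """Factor by which traffic grows after `months`, doubling every period."""
    return 2 ** (months / doubling_period_months)

print(traffic_multiplier(24))   # 8.0  -- eightfold in two years
```

If that pace holds even approximately, any congestion remedy engineered against today's load is aiming at a target that will have moved severalfold by the time it's deployed.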
The answers to these questions would depend, in part, on the abundance and costs of DWDM implementations between now and then by the Fiber Barons, as George Gilder likes to call them, and how individual ISPs take up and shape up their bandwidth allocations.
But if there is one conspicuous and overriding principle that I have learned to believe in and trust with my paycheck, without qualifications or permission from the boss in over thirty years in this business, it is the principle of "The Turnpike Effect!!" Next comes The Corridor Effect. In spades, if the bandwidth is fast and free.
Bandwidth, however, carries more connotations today than as a simple bit-per-second metric. It also connotes massive amounts of administration, and additional processing, features, attributes, capabilities and security issues which are stacked one on top of the other. Think of it in terms of apps per sec, or MIPS even, and then it becomes some kind of hybrid metric resembling TBPS + GIPS = Bottleneck and Waiting, or B+W.
Come to think of it, the problem may not "necessarily" be in getting "more" bandwidth at some point. But it will center around the logistics involved in "adjusting" to more bandwidth, both on the level of individual utilization, and in the sense of global orchestration on the parts of those delivering it. At the end of the day, it will not simply be the supply of bandwidth in Tbps^n, per se, that will cause us to pay more [lest we come down to our knees at every yield sign]; but it may also be the "administration" of the network that may prove to be the ultimate bottleneck that needs constant management and flushing.
IMO, the levels of success that VoIP will enjoy over the next couple of years will reside on different strata and evolve gradually, at different paces, within each stratum, between (in vague terms):
1 - Public Internet at large, both workstation level using CTI and black tel sets through gateways on end-office and PBX switches [good, not great (more times than not) at the present time; sometimes shaky; but increasing levels of concern about dependability and quality as time goes on, depending on access line and ISP capacity, tuning employed by the chosen ISP, and the tier/grade of ISP service subscribed to];
2 - Hybrid configurations, scenarios wherein new local access techniques incorporating the hooks for VoIP (in remote access gear, authentication tools, etc.) will merge with both public and private avenues of traffic, as in some proposed variations of VPNs [mostly good voice quality; "mildly" cumbersome to administer; "reach" limitations inherent in any cloud that imposes security measures; probably used mostly by telecommuters and SOHOs]; and
3 - Private IP Backbones and Enterprise Intranets [robust; high-quality and secure 99.95% of the time if "tuned" properly; incremental cost for such quality; within a year or so]
Perhaps the public network will take on the attributes of the private IP backbone model (3 above) in due time. I don't see any apparent limiting factors that would preclude this from happening, other than the sheer scale of things at that level, if the dominant carriers actually want this to happen. But let's not forget the plant investments of the past several years which are still depreciating...
Regarding the insignificant contribution of voice to the overall traffic flows of the future: Of course, if all voice traffic in question is to be contained within an enterprise, or community of interest, then it may indeed amount to nothing more than a trickle, as some pundits have suggested, especially in those organizations with tsunami data streams. In those instances, in fact, the voice would be insignificant in comparison and get lost in the noise, where costs are the concern. But where voice is a discrete service that has to "reach out" (sorry about that) beyond the enterprise and on to public highways, or in the case of normal consumer calling patterns, then that will be an entirely different story for the foreseeable future.
These are my views, and thank you for hanging in there. Comments, corrections, questions are welcome.
Best Regards, and Good Luck to All,
Frank Coluccio |