The future of telco software
Embedded Systems
By Bernard Cole
The telecommunications and telephony industry is going through a revolution that is being felt at several levels. Having evolved in a highly regulated environment with an emphasis on universal access, robustness and quality of service, the industry now increasingly faces a deregulated environment driven by constant competition over product features and performance.
Telephone standards, though based upon proprietary equipment in many cases, have long been in place to ensure that users can traverse systems throughout the world and communicate with one another. But within that context, nothing is the same for the embedded-system designer.
Public networks
Within the public network segment of the market, the so-called switching cloud, telecom vendors large and small have begun the shift away from proprietary systems and subsystems built using either in-house-developed ASICs or a combination of older CPUs: an eclectic concoction of older-generation 68000s, 80186s, 80286s, Z80s and Z8000s, to name a few.
On the software side, the mix is also varied, reflecting the particular network and regional characteristics of the telephone operating companies. Operating systems and languages range from in-house concoctions, some built in Cobol, to attempts at generalized solutions such as the so-called universal assembly language.
With the opening up of the industry, new opportunities are emerging. Among them are video and multimedia services to the home, video and simultaneous voice/data conferencing and collaboration, and the shift to higher-bandwidth public networking technologies such as ISDN and ATM. Even the largest of public network vendors such as AT&T, Nynex, Pacific Telesis and U S West have seen that to get the most bang for their development dollar, it is best to focus their efforts on the end-user applications. This means a move from the original focus on proprietary technologies to an adoption of "open systems" standards. It is these standards that allow many computing platforms to successfully operate with one another, and software applications to move easily from one platform to another while protecting the hardware investment.
Increasingly, said Susan Mason, analyst and principal at market research firm The Information Architects (Los Altos, Calif.), the move has been to more advanced processors, such as the 68300, the PowerPC, the Pentium and a variety of alternative RISC architectures such as Sparc, within the context of an industry-accepted bus architecture such as VME64 or PCI. The move, she said, has been swiftest among the smaller telecom switch makers and vendors, which do not have a vested interest in existing hardware and have moved more quickly to the opportunities that lie in the higher-bandwidth domains of ATM.
"Although they have been moving at a snail's pace up to now," said Mason, "the larger telcos are on the verge of a major paradigm shift, creating enormous opportunities for companies with the right mix of embedded hardware and software expertise."
If anything is going to retard the growth of new embedded hardware architectures in the switching cloud, said Inder Singh, chief executive officer at Lynx Real-time Systems Inc. (San Jose, Calif.), it is the issue of legacy code. "Each of the major telcos has not only a tremendous vested interest in existing hardware, but millions upon millions of lines of code written for these archaic systems," he said. Although portions of this code have been rewritten in high-level languages such as C and C++, a large portion remains in assembly, as well as in a variety of in-house-developed languages and in older languages such as Cobol, Fortran and others.
Regardless of language, other issues retard the transference and conversion of the code, including the underlying processor architecture. Previous switching systems, for reasons of reliability, modularity, scalability and ease of maintenance, have been multiprocessor-based, using not only multiple processors in the same system, but a loosely coupled environment containing modules with widely divergent CPU designs: Z8000s, Sparcs, 68000s, 68020s and Coldfire CPUs. "There are even systems around that still use older bit-slice architectures that predate the x86 and 68k families," said Singh.
To address the legacy code issues, embedded systems developers have evolved a number of different strategies, from wholesale replacement to piecemeal module-by-module, processor-by-processor and system-by-system replacement and enhancement of code. According to Mitch Bunnell, chief technical officer at Lynx, the strategy taken varies widely, because the nature of the legacy code problem varies from vendor to vendor. "For example, suppose you are a telecom vendor with a four-CPU system built around bit-slice CPUs, and are looking to move to a standard PCI platform using a Pentium," said Bunnell. One of the first things you face is how to run code designed for a four-processor environment in a single-processor environment. "Of particular concern is how code designed and written to deal with very strict scheduling of code execution on four different processors is translated to a uniprocessor environment without any performance or latency problems." A large part of the difference between the two environments lies in scheduling. "The ways the tasks are scheduled are very different," said Bunnell, "and the only way to deal with that is to buy or build an RTOS [real-time operating system] that has scheduler extensions that allow the users to write their own scheduler emulation." What makes this kind of problem so difficult is that in a four-processor system there are a great many separate threads and processes, in the hundreds, active at the same time.
In a single-processor environment with a number of processes or threads, the accepted solution is to use a priority preemptive scheduling algorithm. "If you use this algorithm on code that was written for use in a four-processor environment, the most important tasks can be handled, but those lower on the priority list, which were able to get some time on at least one of the four processors in the older system, get little attention in the uniprocessor environment," said Bunnell. "Other than rewriting the code, which often runs into the millions of lines, the solution is to pick or design an RTOS that preserves the scheduling information and then be careful to preserve the original priority mechanisms." Because this requires that the OS employ time slicing between hundreds of processes or threads, rather than 10 or 20, he said, a vendor might be forced to go to a much more powerful CPU.
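The starvation problem Bunnell describes can be sketched in a few lines. The task names and priorities below are invented for illustration; the point is only that a strict priority-preemptive policy lets the highest-priority ready task monopolize a single CPU, while time slicing among all ready tasks, as effectively happened on the original four-processor system, keeps every task making progress.

```python
# Contrast strict priority-preemptive scheduling with time slicing.
# Hypothetical task set: blocking and I/O are ignored for simplicity,
# so every task is always ready to run.

from collections import deque

def run_strict_priority(tasks, ticks):
    """Always run the highest-priority ready task (lower number = higher)."""
    cpu_time = {name: 0 for name, _ in tasks}
    for _ in range(ticks):
        name, _ = min(tasks, key=lambda t: t[1])  # top priority wins every tick
        cpu_time[name] += 1
    return cpu_time

def run_time_sliced(tasks, ticks):
    """Give every ready task a slice in turn, regardless of priority."""
    cpu_time = {name: 0 for name, _ in tasks}
    queue = deque(name for name, _ in tasks)
    for _ in range(ticks):
        name = queue[0]
        queue.rotate(-1)          # move the task to the back of the queue
        cpu_time[name] += 1
    return cpu_time

tasks = [("call_setup", 0), ("billing", 5), ("diagnostics", 9)]
print(run_strict_priority(tasks, 90))  # low-priority tasks starve
print(run_time_sliced(tasks, 90))      # every task gets 30 ticks
```

Under the strict policy, "billing" and "diagnostics" never run at all; under time slicing the three tasks split the 90 ticks evenly, at the cost of slicing overhead across what in a real switch would be hundreds of threads.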
Switching over
If the telecom vendor has written previous applications in a proprietary internal language or in the CPU's particular assembly code, the millions of lines of code often rule out converting to a more standardized language such as C. "The solution here is to run an emulation of the original processor within the new CPU/OS environment, again requiring that the vendor go to a much higher performance processor," said Bunnell. The result, he said, is often a hodgepodge, quilt-like pattern: some portions of the code run in an emulation of the original hardware environment, with new applications written in C, and others written in assembly or machine language to gain back some of the performance lost when running the original code on an emulated CPU.
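The emulation approach Bunnell describes amounts to a fetch-decode-execute loop interpreting the legacy instruction stream on the new CPU. A minimal sketch, with an invented toy instruction set standing in for the real legacy ISA (a production emulator would model the original processor's registers, flags and timing exactly):

```python
# Toy instruction-set emulator: fetch, decode, execute.
# Opcodes and registers are invented for illustration only.

def emulate(program, max_steps=1000):
    regs = {"A": 0, "B": 0, "C": 0}
    pc = 0                              # program counter
    for _ in range(max_steps):
        op, *args = program[pc]         # fetch and decode
        pc += 1
        if op == "LOAD":                # LOAD reg, immediate
            regs[args[0]] = args[1]
        elif op == "ADD":               # ADD dst, src  ->  dst += src
            regs[args[0]] += regs[args[1]]
        elif op == "DEC":               # DEC reg
            regs[args[0]] -= 1
        elif op == "JNZ":               # jump to address if reg is non-zero
            if regs[args[0]] != 0:
                pc = args[1]
        elif op == "HALT":
            return regs
    raise RuntimeError("step limit exceeded")

# Compute 5 * 3 by repeated addition, as hand-written legacy assembly might
prog = [
    ("LOAD", "B", 5),    # loop counter
    ("LOAD", "C", 3),    # addend
    ("ADD", "A", "C"),   # accumulate           <- loop target (address 2)
    ("DEC", "B"),
    ("JNZ", "B", 2),
    ("HALT",),
]
print(emulate(prog)["A"])  # -> 15
```

Every legacy instruction costs several host instructions to interpret, which is exactly why Bunnell notes that emulation pushes vendors toward much faster processors.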
One segment of the telecommunications industry that has moved rapidly has been the customer-premises segment. The reason: companies, frustrated with the lack of product innovation and response to their needs by the major telecom vendors, have moved to computer-based telephony systems that complement, enhance and often replace the proprietary private branch exchanges and smaller switching systems. In essence, the computer simply controls the telephone connection for use as the company's communication link. As the computer has evolved into a standards-based platform, companies established standard application programming interfaces (APIs) that enabled the development of applications such as fax broadcast, call centers and interactive voice-response systems. According to L.J. Urbano, manager of computer telephony marketing at Dialogic Corp. (Parsippany, N.J.), computer telephony is now on the threshold of dramatic changes that will go beyond mere integration of legacy systems; instead, they will provide a new paradigm that totally converges computing and telephone functions.
"In much the same way that client-server computing has changed database access and other back-office-type functions, computer telephony will make a wide range of telephone functions available to all end users," he said, with the technology being used in larger and larger systems, from the customer-premises equipment (CPE) environment to the central office (CO) environment. System platforms such as VME64 and PCI, and the standards within these platforms, are acting as catalysts and in many cases leading the way for this newly born industry. They offer a wide range of applications such as voice mail, IVR (interactive voice response), fax servers, phonemic speech recognizers and text-to-speech capabilities in a timely and flexible way.
Opportunities
For embedded system developers, said Stephen Li, vice president of telecommunications and multimedia at Wind River Systems Inc. (Alameda, Calif.), the opportunities in computer telephony, or CT, lie primarily in three areas: network interface, resource or processing function, and station interface. The most basic CT function is that of network interface. Whether analog subscriber loop or multiple digital T1 or E1 trunks, boards are available to receive or make calls into the public telephone network. Once connected, they exchange telephone signals with other function cards. The central resource in a CT system is usually a "voice board," a term shortened from "voice store and forward boards," which provides the voice prompting to the caller. Some voice boards also provide 24 (T1) or 30 (E1) channels of simultaneous voice record and playback. Typically, voice boards also detect network tones during dialing or DTMF (dual-tone, multifrequency) digits entered from the caller's keypad.
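The DTMF detection such voice boards perform is conventionally done with the Goertzel algorithm, which measures signal energy at each of the eight standard DTMF row and column frequencies and picks the strongest pair. A sketch, using a synthetic tone pair rather than real line audio:

```python
# DTMF digit detection via the Goertzel algorithm.
# Frequencies and keypad layout follow the DTMF standard; the input
# signal here is synthesized rather than captured from a phone line.

import math

def goertzel_power(samples, sample_rate, freq):
    """Relative signal power near `freq` (standard Goertzel recurrence)."""
    n = len(samples)
    k = round(n * freq / sample_rate)       # nearest DFT bin
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s1 = s2 = 0.0
    for x in samples:
        s = x + coeff * s1 - s2
        s2, s1 = s1, s
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

RATE = 8000                       # telephony sample rate, Hz
ROWS = [697, 770, 852, 941]       # DTMF row frequencies
COLS = [1209, 1336, 1477, 1633]   # DTMF column frequencies
KEYPAD = [list("123A"), list("456B"), list("789C"), list("*0#D")]

def detect_digit(samples):
    """Pick the strongest row and column tone and map them to a key."""
    row = max(ROWS, key=lambda f: goertzel_power(samples, RATE, f))
    col = max(COLS, key=lambda f: goertzel_power(samples, RATE, f))
    return KEYPAD[ROWS.index(row)][COLS.index(col)]

# Synthesize the tone pair for '5' (770 Hz + 1336 Hz) and detect it
N = 205   # commonly used Goertzel block size for DTMF at 8 kHz
tone = [math.sin(2 * math.pi * 770 * i / RATE) +
        math.sin(2 * math.pi * 1336 * i / RATE) for i in range(N)]
print(detect_digit(tone))  # -> 5
```

On a real board this loop runs per channel on the DSP, with extra checks (tone duration, twist between row and column energy) before a digit is reported.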
Many CT systems used within the customer premises also provide a number of speech-processing functions as well, using a DSP board with specialized software to recognize human speech within a fixed vocabulary. This capability is usually called automated speech (or speaker) recognition (ASR). Another classic speech-recognition and voice-playback application is directory assistance automation (DAA), which locates and speaks phone numbers when callers dial 411. Some DSP recognition algorithms can even validate the identity of the caller.
Other functions for the office environment include the ability to send, receive and manage Group III (T.30) fax for later retrieval, or convert ASCII or text files into fax modem signals for transmission through the network. A common application for such boards is fax-back systems, which distribute documents selected by callers. Fax mail is also a useful application. CT systems also handle connections to local telephone sets used by human operators. These interface boards can provide a number of analog subscriber loops or digital ISDN basic rate interface (BRI) ports that, respectively, connect to either POTS (plain old telephone service) phones or elaborate ISDN feature phones with many buttons and a display area.
Software scramble
While the hardware environment is much more stable, based largely on x86 or Pentium processors with some Sparc-based systems at the high end, the software environment is less so, said Wayne Andrews, vice president and chief technology officer at Geotel Computer Corp. (Littleton, Mass.). Although Windows 3.1 and Windows 95 have been the operating environments of choice in smaller, low-end systems, larger systems with hundreds or thousands of users have required more sophisticated and sturdy operating systems. Among them are Windows NT for multitasking, multithreaded applications, and real-time operating systems such as Lynx OS and QNX for applications requiring immediate machine response to queries. Atop this environment has been overlaid a variety of application-development environments that allow end users as well as software developers to adapt quickly to the still rapidly changing computer telephony market. One approach has been to employ a client-server architecture in which the server architecture is somewhat separate from that of the client. The server is dependent on the requirements of the particular switching cloud or PBX to which it is interfaced, and the client is allowed to change as the requirements and demands of the user environment dictate, using high-level software packages loosely termed toolkits.
In addition to allowing the particular CT vendor or user to adapt to changing user requirements, such software toolkits, said Mary Ann Walsh, president of Aurora Systems Inc. (Acton, Mass.), are useful in debugging and testing the interface to the public telephone network. "Providers of media-processing systems prefer to trial a network service interface with a panel of human testers before they finally implement the service in deployable form," she said, because local RBOCs do not permit the use of their dial-up lines for these purposes.
One popular type of toolkit presents a Basic-like interface in which to program, using Visual Basic or any one of a number of Basic-language-based application generators. Another type of toolkit, said Walsh, presents a sophisticated graphical interpreter that allows a developer to visually model the actual services to be provided. These powerful languages allow simplified versions of a proposed telephone network service to be developed in hours or days; an equivalent application created completely in C could take months to develop. These packages also contain debug facilities that provide stepwise execution and that simulate telephone network responses. The debug utility allows the logic of an application to be fully debugged before connecting it to the actual telephone network.
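At bottom, the graphical or Basic-like toolkits Walsh describes interpret a call-flow table: play a prompt, collect digits, branch. A minimal sketch of such an interpreter, with an invented banking flow and a trace flag standing in for the stepwise debugger:

```python
# Tiny call-flow interpreter of the kind CT toolkits generate.
# The flow, step names and prompts are invented for illustration.

def run_flow(flow, caller_digits, trace=False):
    """Interpret a call flow; `caller_digits` simulates keypad input."""
    digits = iter(caller_digits)
    spoken, step = [], "start"
    while step != "hangup":
        action = flow[step]
        if trace:                        # stepwise "debugger" output
            print(f"[debug] at step {step!r}: {action['do']}")
        if action["do"] == "play":       # speak a prompt, then advance
            spoken.append(action["prompt"])
            step = action["next"]
        elif action["do"] == "menu":     # branch on the caller's keypress
            key = next(digits)
            step = action["choices"].get(key, action["next"])
    return spoken

banking_flow = {
    "start":     {"do": "play", "next": "main_menu",
                  "prompt": "Welcome. Press 1 for balance, 2 for an agent."},
    "main_menu": {"do": "menu", "choices": {"1": "balance", "2": "agent"},
                  "next": "start"},      # bad key: repeat the greeting
    "balance":   {"do": "play", "next": "goodbye",
                  "prompt": "Your balance is one hundred dollars."},
    "agent":     {"do": "play", "next": "goodbye",
                  "prompt": "Transferring you to an agent."},
    "goodbye":   {"do": "play", "next": "hangup", "prompt": "Goodbye."},
}

print(run_flow(banking_flow, "1"))
```

Because the service logic is data, the same engine can single-step it against simulated caller input before the application ever touches a live trunk, which is the debugging workflow Walsh describes.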
But this dependence on simple application generators and use of Visual Basic for primary programming is changing, especially as computer telephony size and functionality increases. "Such approaches, while they have the advantages of ease of learning, quickly run out of steam when the CT application moves much beyond several tens to a hundred or so users," said Glenn Smith, president of Cygnus Technology Ltd. (New Brunswick, Canada). "Some of the sophisticated 800-number types of CT systems employed by . . . banks, airlines and financial services have thousands and tens of thousands of users and hundreds of thousands of messages and transactions to be processed at any one time."
Another development that is forcing a shift in developers' thinking away from the likes of Visual Basic is the development and increasing popularity of the Internet, in the form of Internet telephony, which allows the delivery of voice messages at little or no cost. Although a number of the RBOCs and public network service providers are taking legal action to prevent use of the Internet for communicating via voice, many of the CT hardware and software vendors are looking to add Internet voice telephony to their repertoire of services.
Spanlink Telecommunications Inc. (Minneapolis, Minn.), a vendor of call-messaging and call-management systems to corporations that want to install their own 800-number customer-service networks, has now introduced WebCall, a software utility that links to any Web page and allows access via voice telephone to customer service. VocalTec Inc. (Northvale, N.J.) offers another Internet phone system, which allows real-time voice conversations over the Internet from one phone to another, bypassing the long-distance network providers.
A study by International Data Corp. (Framingham, Mass.) found that active Internet telephony users currently number about 500,000, accounting for $3.5 million in sales. But as telephony system vendors provide hardware and software links to the Net, that number could grow to about 16 million by 1999, with about a half billion dollars in annual sales. With a market that large, many telephony system and software providers are rethinking their dependence on Visual Basic as the main tool for developing applications.
"With its links to other elements in Microsoft's Windows environment through DirectX, such as Direct3D, DirectPlay, DirectSound and DirectDraw, Visual Basic is a powerful and easy development environment for many end users to work in," said Mark Kovalsky, president of Ottawa Telephony Group (Ottawa, Canada). "And with the addition of Internet extensions to Windows, such as ActiveX and ActiveVRML, Visual Basic looks like a very easy way for telephony system vendors and users to add Internet functionality."
Language alternatives
Balanced against this, however, are major efforts to develop scripting language alternatives more appropriate to telephony and telecommunications. AT&T Research Labs, now Lucent Technologies, has developed Inferno, while other telecom vendors are looking seriously at Sun Microsystems' Java. Northern Telecom, for one, has just announced that it will incorporate Sun's Java software and hardware into its PowerTouch telephones and wireless phones, turning them into a new class of inexpensive Internet appliances. In addition, Lucent may be rethinking its strategy regarding Inferno. It is now working with Sun Microsystems to develop specifications that permit integration of Internet and telephony operations using Java.
"Much depends on what Microsoft is going to do," said Brett Schokley, whose company developed WebCall using a combination of C and Lucent's object-oriented Inferno scripting language. "If Microsoft comes up with an equivalent to Visual Basic, say Visual Java, you will not see just a rush by telephony vendors to Java, but a stampede."