I did manage to take in the happenings at ESC West in San Jose, if only long enough to develop a sense of this year's conference. The following summarizes what I believe materially affects WIND.
First and foremost, the conference was a huge success. It seemed to me that all exhibitors were thrilled with the size and scope of this year's conference. Foot traffic was up as much as double last year's level. For example, the WIND people indicated that halfway through the show the number of contacts and leads had already surpassed the total from last year's conference.
I saw the INTS pRISM+ presentation, which consisted almost entirely of videotaped endorsements by customers and building-block diagrams - in which they subtly mentioned "open API" in the same phrase with "CORBA standard", thereby implying that the pRISM+ API is a standard along with CORBA. Maybe this is what misled the H&Q analyst, as I discussed in a previous post. I also read the I2O press release that not-so-subtly referred to IxWorks as a "rudimentary IRTOS".
To my mind, the only explanation for INTS' tacky twisting of words is desperation. All this screeching confirms not only that pSOS is losing market share, but that declining license revenues must be a primary concern. Consequently, the serious investor must look elsewhere for possible threats to the WIND hegemony.
Advances in the Rapid Application Development (RAD) space interested me. Visual Basic for developing embedded systems seems like an oxymoron, and no doubt contributed to lots of hype and confusion about the MSFT threat. Emultek displayed a graphical, code-generating state-machine tool along the lines of ObjectTime, which was also at the show. Both of these RAD vendors generate code that generally sits on top of an RTOS, in an attempt to provide the highest conceivable level of abstraction for applications.
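To give a feel for what these tools do, here is a rough sketch of the kind of code a graphical state-machine tool might emit - plain C that could run as a task atop VxWorks or any other RTOS. The lamp-controller states, events and table layout are my own illustration, not actual Emultek or ObjectTime output.

    /* Illustrative only: the flavor of table-driven code a graphical
     * RAD tool might generate. States and events are hypothetical. */
    #include <stdio.h>

    typedef enum { ST_OFF, ST_ON, ST_DIMMED, NUM_STATES } State;
    typedef enum { EV_BUTTON, EV_TIMEOUT, NUM_EVENTS } Event;

    /* Transition table: next state for each (state, event) pair. */
    static const State next_state[NUM_STATES][NUM_EVENTS] = {
        /*             EV_BUTTON  EV_TIMEOUT */
        /* ST_OFF    */ { ST_ON,     ST_OFF    },
        /* ST_ON     */ { ST_OFF,    ST_DIMMED },
        /* ST_DIMMED */ { ST_ON,     ST_OFF    },
    };

    static State current = ST_OFF;

    /* An RTOS task loop would call this for each incoming event. */
    void dispatch(Event ev)
    {
        current = next_state[current][ev];
        printf("state is now %d\n", current);  /* entry-action stub */
    }

    int main(void)
    {
        dispatch(EV_BUTTON);   /* OFF -> ON     */
        dispatch(EV_TIMEOUT);  /* ON -> DIMMED  */
        dispatch(EV_BUTTON);   /* DIMMED -> ON  */
        return 0;
    }

The designer draws the states and transitions; the tool emits the table and the dispatch loop, leaving the RTOS underneath to deliver the events.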
Thin servers, as demonstrated in the WIND booth, deserve a full presentation on this thread, because they need not be servers in the traditional sense at all. Thin servers will most often be clients as well as servers, requiring a completely new lexicon to be meaningful. I notice that WIND now refers to "applications" rather than clients or servers. Interesting thin servers perform as both clients and servers, and might best be thought of as bifurcated applications. As a client, they sense the real world and take on various responsibilities in response to inputs. As a server - a web server, to be precise - they support HTML and enable any authorized client to view a web page and accordingly engage the application. The application programmer can endow the application with the ability to respond to queries and change application parameter values, including what the application senses and does in response. The programmer can do this with the same HTML-generating software used to prepare any web page.
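To make the bifurcated-application idea concrete, here is a minimal sketch in plain C, with the HTTP exchange simulated on stdout. The URLs and the single "threshold" parameter are hypothetical - WIND's actual web-server API is not shown - but the division of labor is the point: a sensing (client) half updates the data, and a serving half answers browser queries and accepts parameter changes.

    /* Minimal thin-server sketch; HTTP exchange simulated on stdout.
     * URL names and the "threshold" parameter are hypothetical. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static int threshold = 70;     /* a parameter the owner may change  */
    static int last_reading = 68;  /* what the device last sensed       */

    /* Server half: answer an HTTP-style request with generated HTML. */
    void handle_request(const char *url)
    {
        if (strncmp(url, "/set?threshold=", 15) == 0) {
            threshold = atoi(url + 15);  /* change a parameter remotely */
            printf("<html>threshold set to %d</html>\n", threshold);
        } else {                         /* default: the status page    */
            printf("<html>reading=%d threshold=%d alarm=%s</html>\n",
                   last_reading, threshold,
                   last_reading > threshold ? "YES" : "no");
        }
    }

    int main(void)
    {
        /* Client half: in a real device a sensing task updates this. */
        last_reading = 75;

        handle_request("/status");           /* browser views the page */
        handle_request("/set?threshold=80"); /* browser changes a parameter */
        handle_request("/status");
        return 0;
    }

Notice that nothing here is a "server" in the traditional data-center sense; the web interface is just the device's face to any browser.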
WIND's example of a thin server was a digital camera networked to a PC. The Netscape browser on the PC could query the camera's web page just as it might access any web page on the internet. In particular, the latest digital picture could be downloaded on command to the PC. Now imagine yourself on your next trip to France. Concerned about goings-on in your home, you connect to the internet, then access one of your camera servers. You can quickly download a current picture of your living room, point the camera in a different direction and download a picture of the foyer. You might also want to check other sensors in and around your home. Certainly you would want to check whether any of your sensors sent out an alarm recently (which can also be done by checking your email, since all alarm actions automatically send a copy to your mailbox). Most important, you could do all this, and a myriad of similar probes, without reading a single page of an owner's manual.
The main point of the bifurcated application is that once the basic design is complete, enabling an intelligent device to behave as both client and server, final product development is simplified while owner maintenance and control are enhanced. Development is simplified because desktop HTML programmers, perhaps lacking any direct experience programming embedded systems, can endow the device with control and maintenance mechanisms. Owner maintenance and control are simplified because the general population is already acquainted with the generic internet browser, generally obviating any need for owner's manuals or special instructions.
Windows CE and the x86 architecture were clearly a source of concern and confusion at the conference, somewhat mirroring debates routinely played out on this thread. As much as Mark Brophy and David Stuart sometimes seem to delight in championing the mighty Wintel consortium, and as much as it might seem that defensive arguments favoring a new regime are becoming repetitive, it is nevertheless important to monitor and constantly re-examine the threat represented by both MSFT-based software and the x86 architecture. (These are no longer the same as "Wintel", because Windows CE is targeted at non-Intel architectures and x86 is not restricted to MSFT operating systems and application libraries.) Let's look closely at this threat.
MS-DOS in an x86 box has long held sway as probably the most popular and easiest way to piece together a functional embedded system, using readily available software libraries and minimal knowledge of hardware. What has changed is that Windows NT is a modern operating system more robust than MS-DOS, while Windows CE is lighter and being readied to take on more embedded applications. Both are programmed with inexpensive, shrink-wrapped IDEs familiar to generations of programmers, now including Visual Basic for Windows CE. Further, when applied to the x86 architecture, add-in enhancements are available off the shelf for both hardware and software - at so-called PC prices, i.e. at highly competitive prices based on high PC sales volumes. (By the way, Intel benefits less from the x86 applications than might be expected, since clone processors are often selected to hold costs to a bare minimum.)
The savvy embedded engineer who worries about the extensive resources required to support MSFT OSs, and about code inefficiencies, is rebuffed with the reminder that both memory and MIPS are cheap, and that the focus should be on total product cost, including development, not just unit production costs. To accommodate MSFT x86 product development, companies like Radisys and Phar Lap are serving up pre-packaged reference designs, including hardware modules and desktop software stripped to essentials and enhanced with extensions that cater to embedded designs. After all, a development team enamored of Visual Basic is unlikely to possess penetrating hardware or system software skills.
Is this the future of embedded systems? Will bulky, ill-fitting systems simply be patched together cheaply and pawned off, as befits a throw-away society? Before we answer this question, it is important to consider parallel developments ongoing in embedded systems. While the PC desktop advocates are pushing their wares, others are trying to squeeze every last ounce out of processors: integrating as much functionality as possible on a single chip, off-loading signal processing onto special chips, designing application-specific chips, and even trying to design hardware and software simultaneously using so-called co-design and co-verification. If the MSFT x86 arguments hold, why is all this necessary? Especially since no one has yet offered up a suggestion for how Windows CE fits on a DSP.
The reason is that, like it or not, resources remain critical for consumer products. Companies will spend large sums and incredible effort designing and re-designing products like digital PCS handsets, which contain every flavor of DSP and ASIC, and which now show an emerging need for more generic processing. Each new design generation attempts to integrate more onto single chips, reducing real estate and power consumption while adding significantly to functionality. Incidentally, as more and more consumer products contain 32-bit microprocessors, the economies of scale often associated with the desktop PC will be dwarfed by comparison. Thus, if economies of scale are the measure, then the desktop PC architecture in embedded systems will be torpedoed by reference designs growing out of consumer product development. The same company that supplies tens of millions of integrated handset processors can provide sophisticated, highly engineered reference designs for wireless devices at consumer prices - much less than PC prices, and totally devoid of monopolistic price structures. Just as the desktop PC advocates suggest that engineering can be minimized by relying on pre-packaged, scrunched assemblies, imagine what could be provided by highly engineered reference designs that brooked no compromise with hardware or software. And since most interesting embedded applications will have a wired or wireless communication component and sensors in need of DSPs, these reference designs could cover most applications envisioned.
Now refer back to the thin server capability discussed above, or a RAD tool from Emultek or ObjectTime, to see that the embedded systems industry can accommodate rapid development without resorting to re-assembling the desktop PC - just as the PC developed without downsizing the mainframe.
Perhaps the best analogy for how the embedded systems industry will mature comes from the database software sector. It was reasonable to believe, as late as the early 1990s, that the individual resourcefulness encouraged by the desktop PC would promote desktop databases like dBase, FoxPro, Paradox, Access and even SQL Server (built with the help of Sybase) from the desktop to department servers, and finally enable them to take over mission-critical business systems. Well, not only did that not happen, but Borland is not even thought of anymore as a database company. Oracle dominated its competition, but more importantly, it beat back the encroachment of the desktop.
Why did the desktop database fail to reign supreme? Because it was never considered adequate; it was never an acceptable, fail-safe solution for maintaining mission-critical business data. As unwieldy and difficult as industrial-strength databases were to install and maintain, they nevertheless grew in importance within large organizations. They became the standard and penetrated small and medium organizations, rather than the other way around.
The analogy fits because the desktop is not an adequate starting point for many embedded systems, particularly embedded consumer products. As with databases, the refined, industrial-strength solutions being developed by large consumer product companies will find their way into lower-volume applications through reference designs and high-level tools. The company that dominates the industrial, high-end embedded systems tools business will dominate the entire embedded systems space.
Allen