Mega Trend 1: Moore’s Law works both ways
Most desktop techies take it on faith that working with constrained resources is a waste of time. Moore’s Law all but guarantees that resources will become less constrained just around the corner, so it never matters if the current footprint of an OS is larger than today’s hardware can practically support. Bill Gates systematically turned Microsoft into a titan by developing software for the next generation of processors and storage – Windows itself being the best example. Apply the Gates approach to the embedded world and it leads to an embedded OS and tools just like Windows CE – essentially an embedded OS in desktop clothing. Even techies who would question the use of Microsoft software often hold the same faith when it comes to Unix or Linux. Thus, in many technical minds, sooner or later embedded computing will converge on the high-end, generic OS model. This is the heart of the network-processor debate Ning and I had a year ago, in which I resisted the notion that server-type operating systems would rule the roost.
The fact is that Moore’s Law applies to embedded computing exactly as it does on the desktop and in servers – and, simultaneously, in exactly the opposite way. Every 18 months Moore’s Law doubles what a dollar buys in storing and processing information, so it follows that a number of embedded devices become feasible with generic operating systems that earlier would have failed on a performance or cost basis. In this way, Moore’s Law affects embedded computing just as it did on the desktop.
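As a rough back-of-the-envelope illustration (my own arithmetic, not a figure from WIND or anyone else), the compounding looks like this:

    /* Sketch of Moore's Law compounding: how much more storage and
       processing a fixed dollar buys after a given number of months,
       assuming a clean doubling every 18 months (my assumption). */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        for (int months = 0; months <= 72; months += 18) {
            double multiplier = pow(2.0, months / 18.0);  /* 2^(months/18) */
            printf("%2d months out: %5.1fx per dollar\n", months, multiplier);
        }
        return 0;
    }

Four cycles of that doubling – six years – is a 16x improvement in what the same dollar buys, which is why designs that look marginal today become routine surprisingly quickly.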
But Moore’s Law probably works with greater force in the opposite direction. Semiconductor design teams seek to squeeze embedded computing into smaller and smaller places, requiring less and less power. Since both processor speed and memory consume power, techniques that maximize processing speed while minimizing overall size and cost are pursued, if anything, more relentlessly today than ever before. Look at what is actually happening inside semiconductor companies and it becomes clear that Moore’s Law helps them dig deeper down, not just pile things higher up.
Jerry Fiddler demonstrated one new technique by displaying a graphics program running in raw mode on a 500 Megahertz notebook computer. He then ran the same program compiled “directly” to an FPGA executing at a power-saving 20 Megahertz (see below for more on this technology). The surprising result: a two-orders-of-magnitude increase in performance on a chip nearly two orders of magnitude less expensive and less power hungry. This is music to the ears of embedded designers trying to meet the stringent constraints of portable devices. Eighteen months from now, thanks to Moore’s Law, the same engineers will be able to do as much with half the power and size.
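To see why those clock numbers are so striking, here is a back-of-the-envelope reading of the demo figures (my own arithmetic; the roughly 100x speedup and the two clock rates are simply the numbers from the demonstration as I understood them):

    /* Rough reading of the demo numbers: if the FPGA delivers ~100x the
       performance of the notebook while clocked at 20 MHz instead of
       500 MHz, each FPGA cycle must be doing an enormous amount of work. */
    #include <stdio.h>

    int main(void)
    {
        double cpu_clock_mhz  = 500.0;   /* notebook processor           */
        double fpga_clock_mhz = 20.0;    /* FPGA implementation          */
        double speedup        = 100.0;   /* ~two orders of magnitude     */

        /* Per-clock advantage: useful work per FPGA cycle vs. per CPU cycle. */
        double per_clock = speedup * (cpu_clock_mhz / fpga_clock_mhz);
        printf("Effective work per clock: ~%.0fx the processor\n", per_clock);
        return 0;
    }

In other words, each 20 MHz FPGA cycle is doing on the order of 2,500 times the useful work of a processor cycle, because the algorithm is laid out across the gates and runs in parallel rather than one instruction at a time.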
Meanwhile, semiconductor companies are expanding wildly on this theme. DSP cores are being combined with traditional and specialized microprocessor cores and memory to form multi-core chips and packages, rather than just adding more horsepower to traditional microprocessors. Multi-core chips mainly reduce chip count, save on power consumption, and enable more to be done with less.
Neither the trend to push the limits of making chips small and efficient nor the trend to develop faster and larger processors will ever end; both are propelled by Moore’s Law. In traditional computing, only the faster-and-larger implication of Moore’s Law manifested itself. In embedded computing, the complete continuum of computing becomes a manifestation of Moore’s Law, with no single point of convergence. Thus, unlike what produced Microsoft’s success on the desktop, the leading embedded software tools company of the future must reach down to the lowest denominator of computing (DSPs, FPGAs, multi-core exotics), deal comfortably with high-end, server-like generic operating systems, and handle everything in between, in every conceivable combination.
Now you know why the recent acquisitions of Eonic and BSD Unix are so meaningful to WIND. Eonic provides a leading OS for DSPs along with multi-core connectivity software, while BSD Unix is a first-rate high-end server OS ripe for high-end embedded applications. VxWorks and VxWorks AE sit comfortably in the sweet spot of traditional embedded computing, while Virtuoso (Eonic) and BSD Unix fill the void at either end of the spectrum. Imagine the synergy of accessing a high-end communications box with Tornado, simultaneously tracking BSD Unix on the main CPU and VxWorks and Virtuoso on multi-core line cards. Don’t be surprised to find special inter-process compatibility features built into future versions of all of WIND’s operating systems.
It was pretty clear that WindRiver Networks will initially be the primary beneficiary of BSD Unix. WIND can now challenge embedded Windows NT, Linux and Solaris in high-end communications devices. Expect Networks to introduce a slew of server-appliance reference designs that take the notion of a vertical solution to a new level (translation: very high ASPs). There is no expectation that BSD Unix will augment VxWorks on consumer devices, which says volumes about the so-called Linux threat in that space.
Allen
More on the FPGA demonstration: a Celoxica compiler was used to compile a C program running on the portable computer into a hardware description language compatible with Xilinx place-and-route algorithms. The latter produce a system-gate configuration file that sits in SRAM during execution of the program. The demonstration suggests a new programming concept whereby field-programmable gate arrays (FPGAs) become “dynamically reconfigured” (the SRAM configuration rewritten on the fly) under the control of a traditional microprocessor. A multi-core chip containing, say, a PowerPC core and an FPGA core points in this exciting new direction.
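To make the concept concrete, here is a minimal sketch of what dynamic reconfiguration might look like from the microprocessor’s side. Every function and bitstream name below is hypothetical – nothing here comes from the Celoxica or Xilinx tool chains – it simply illustrates the idea of swapping the FPGA’s SRAM configuration under software control, much the way an OS swaps tasks:

    /* Hypothetical sketch of dynamic FPGA reconfiguration from the host
       processor's point of view. The routines are stand-ins for board
       support code a real design would supply; bitstream contents are
       placeholders. */
    #include <stdio.h>
    #include <stddef.h>

    static int fpga_halt(void)                 { puts("fabric halted");  return 0; }
    static int fpga_load_bitstream(const void *cfg, size_t len)
    {
        (void)cfg;
        printf("loaded %zu-byte configuration\n", len);
        return 0;
    }
    static int fpga_start(void)                { puts("fabric running"); return 0; }

    /* Two pre-compiled configurations, e.g. the output of a C-to-gates
       compile followed by place and route (contents are placeholders). */
    static const unsigned char cfg_video_filter[]  = { 0xAA, 0xBB, 0xCC };
    static const unsigned char cfg_crypto_engine[] = { 0x11, 0x22, 0x33, 0x44 };

    /* Swap the hardware "program" the way an OS would swap a software task. */
    static int reconfigure(const unsigned char *cfg, size_t len)
    {
        if (fpga_halt() != 0)                   return -1;
        if (fpga_load_bitstream(cfg, len) != 0) return -1;
        return fpga_start();
    }

    int main(void)
    {
        reconfigure(cfg_video_filter,  sizeof cfg_video_filter);   /* image pipeline on the fabric */
        reconfigure(cfg_crypto_engine, sizeof cfg_crypto_engine);  /* later, crypto on the same gates */
        return 0;
    }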