The Digital Age and Floating Point Math: Pt II
So what is the place of floating point math in the digital revolution? The first given is that the early phases of the digital age were not terribly math intensive. Indeed, the PC era in its beginning was only partially computationally intensive. Computers were originally developed to help scientists and engineers do calculations that were very hard to do by hand, but this was a limited, very specialized use of computers. The 80's saw the Apple/IBM/Wintel revolution turn the computer into a consumer item. Yet consumer use of the PC was not particularly computation intensive: word processing, financial spreadsheets, databases, basic text/modem ASCII communications, etc. did not in general require much in the way of floating point math.
But the true coming of the digital age is very different. It requires the conversion of analog signals to the digital domain; the processing of that information for compression, enhancement, calculation, etc.; and then, perhaps, the reconversion of that information to some analog format that our bodily senses can comprehend, e.g. audio, video, or other modeling of natural phenomena. This is almost always computationally very intense. But that difficulty has to be balanced against the "smaller, better, faster, cheaper" requirements of the consumer space and its applications. Thus the industry, as good engineers do, found ways to compromise.
A nice summary of this sort of thing is found in a recent article in EETIMES: eetimes.com
"The major problem in specifying fixed-point DSP functions, said Herman Beke, Frontier's president, is that most of the high-level DSP algorithm-development tools assume that DSP mathematics operations can be implemented in floating-point math. While this gives the algorithm developer a wide latitude, it can create very inefficient use of custom silicon.
Designers attempting to make more efficient use of silicon will convert a floating-point design into a fixed-point implementation. But the conversion process — especially the construction of C codes in the processor — is ad hoc and error-prone."
And this is the state of the art: a tension-filled compromise between the need for intense math computation and small, efficient silicon to perform it. This tension can be seen everywhere in the industry, but it is the norm; everyone is used to it. The article above is actually about a company offering its product as a standard to help in the conversion from floating point to fixed point math.
But the drive toward higher resolution graphics and the like continues, requiring ever more difficult and intense computation, and it is matched by the drive for smaller, faster devices that draw less current. In other words, the tension between requirements keeps increasing, with no end in sight at present.
Examples of how this is currently playing out:
eetimes.com "Microprocessor Forum: ARM10 positioned for the consumer challenge By Peter Clarke EE Times (10/15/98, 5:47 p.m. EDT)
SAN JOSE, Calif. — ARM Ltd. announced details of its next generation core, the ARM10 Thumb, at the Embedded Processor Forum on Thursday (Oct. 15).
The ARM10 has been beefed up from previous ARM cores with instruction set enhancements, an optional floating-point coprocessor and 64-bit on-chip data paths. It will be a better-than 400-Mips 32-bit processor aimed at driving a range of applications, from third-generation mobile cellular terminals through cable-modems to consumer information appliances."
eetimes.com Motorola revamps PowerPC instruction set
By Anthony Cataldo AUSTIN, Texas — In the most profound modification to the PowerPC architecture since its debut in 1991, Motorola this year will provide its first copper-based PowerPC that will include a set of 162 new instructions, which promise to process up to 16 complex data streams in a single cycle. Motorola appears less concerned about positioning the revamped PowerPC architecture against Wintel systems than about driving it into networking, speech, video and image-processing applications where it would displace digital signal processors.
The additional instruction set, which has been under development since early 1996, signals a fundamental change in the very notion of RISC — an acronym which suggests an MPU that relies on processing fewer instructions more efficiently rather than on a bloated instruction set. Indeed, Motorola proposes 162 new instructions in its latest set, or more than double the number of new floating-point instructions that Intel will include in its next-generation Katmai processor, also due next year.
Motorola said the vector execution unit is needed in the same way floating-point units were needed several years ago, i.e., to provide better precision for scientific calculations. "We've added these new instructions for dealing with next-generation data types," said Sam Fuller, manager of system architecture and product planning for Motorola's engineering group (Austin, Texas). "The idea is that these are the right instructions and are defined in such a way that we do only what we need. We've moved beyond the name RISC."
...
Dubbed AltiVec, the new architecture represents the third leg in Motorola's strategy to extend the reach of PowerPC into consumer, automotive and now networking applications. AltiVec instructions are implemented as a new vector execution unit that works alongside the processor's integer and floating-point units, all of which take instructions from a main branch unit and send and receive data from memory. "
eetimes.com Hearing instruments become core-centric
By Christian Berg, Manager of Research and Development Phonak AG Stafa, Switzerland
Marc Van Canneyt, Business Development Manager Frontier Design BV Beluven, Belgium
The common perception of a system on a chip is that third-party "intellectual property" (IP) cores are modified as needed and then integrated on a single IC. However, this approach does not take full advantage of the special expertise of algorithmic IP that may not be well served by a general-purpose processor. Nor does it allow the exploration of architectural options that can be used to optimize the design for power, area or performance. Finally, some systems are so large that off-the-shelf IP would require too much silicon to achieve full functionality.
Phonak integrated the digital functions in its state-of-the-art hearing instruments in a 200,000-gate system-on-a-chip (SOC) that required performance of 130 Mips and power consumption of 1.5 mA or less. Since third-party processing cores could not meet the company's power, size or performance constraints, Phonak developed a proprietary digital-signal-processing core that implements the company's algorithmic IP and offers performance that exceeds that of five Motorola 56000 processors. This proprietary DSP core was designed by Frontier Design, a design house specializing in DSP implementations, and integrated with multiple third-party IP cores from Phonak's silicon vendor, Xemics.
Digital implementation of Phonak's advanced hearing instruments allows the use of advanced digital noise canceling, pattern recognition, adaptive filtering and other techniques to create "intelligent" hearing aids based on psycho-acoustical principles. The adaptive nature of the end product requires that several programs be loaded in response to real-time conditions. For example, the filtering techniques used to enhance audio perception in a quiet room are very different from those used in a crowded restaurant with a lot of background noise.
...
A prototype board was developed that included five Motorola 56000 processors, a microcontroller, RAM and ROM. Although the processors delivered 20 Mips each running at 40 MHz, they could not deliver enough throughput to handle the entire design. An off-the-shelf general-purpose DSP core was not an option, and multiple DSP cores would consume too much power and use too much silicon. So the only way to implement the digital portion of the design was to create a proprietary DSP core with the digital algorithms embedded in it.
Phonak chose to work with two outside consultants: Frontier Design for the creation of the DSP core and Xemics for the third-party IP and the integration of the SOC. Phonak used Matlab to develop the floating-point algorithms that would define system functionality. Phonak engineers created several test benches with test vectors, and the algorithms were tested in real time using the prototype board described earlier. But floating-point algorithms cannot efficiently be used for low power-minimal size hardware implementation; fixed-point data representations and arithmetic are required. Unfortunately, the M-language used within Matlab doesn't really support fixed-point arithmetic, and designers typically rely on C code, which again does not really support fixed-point, so a lot of ad hoc code must be written in a lengthy trial-and-error process. "