To: kash johal who wrote (615) 5/5/1999 5:43:00 PM
From: mnispel
"The idea that major semiconductor or DSP companies are going to embed LGPT's algorithms into silicon is Laughable at best. The reason is that general purpose CPU's and DSP's need to be precise. And the whole reason that LGPT's algorthms (sic) work is that they are imprecise."

This comment, combined with the immediate response it drew, seemed to assume the thread had discussed this point previously, although I've been unable to find it. From the FAQ on LGPT's homepage:

"3. What kind of precision can I expect using efp? Is this always going to be the same?

Answer: The Soft CoProcessor implementing high performance efp computation in extremely compact (3 to 5 kbytes) software on standard 8-bit microprocessors usually provide 16, 24 or 32 significant bits (equivalent to 5, 7+ or 9+ decimal digits) of precision. Software for the 16-, 32-, and 64-bit microprocessors presently provide 20, 22, or 24 significant bits of precision in a 32-bit data format, using 17 to 55 kbytes of memory. These levels of precision meet most embedded computational needs, but 16, 24, 32, 48, or 64 significant bits of precision at similar or even higher levels of performance can be provided by using substantially larger amounts of memory (1/8 to 1/2 Mbyte)."

The response by logpoint (presumably Dr. S) speaks for itself. One can probably assume that, since LGPT's technology is built in part on lookup tables, precision is also in (large) part a function of those tables: more precision requires more bits per table element, and perhaps larger tables (more rows and/or columns).

BTW, given the press releases we've seen, it also seems safe to assume that the exact memory figures given in the FAQ may be somewhat out of date.

Thus, it appears more probable that a lack of precision should be attributed to the comments made in the referenced post rather than to LGPT.

Mark N.
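LGPT's actual efp implementation isn't public in this thread, but the general point (that lookup-table precision scales with table size and entry width, rather than being fixed) can be illustrated with a simple sketch. The following is purely hypothetical code, not LGPT's method: it approximates log2(x) on [1, 2) with a quantized table, the kind of building block a log-based arithmetic scheme might use, and shows that worst-case error shrinks as you add index bits (more rows) or entry bits (wider elements).

```python
import math

def build_log_table(index_bits, entry_bits):
    """Hypothetical table: entry i approximates log2(1 + i/size),
    stored with entry_bits fractional bits of precision."""
    size = 1 << index_bits
    scale = 1 << entry_bits
    return [round(math.log2(1.0 + i / size) * scale) / scale
            for i in range(size)]

def table_log2(x, table, index_bits):
    """Approximate log2(x) for x in [1, 2) by direct table lookup."""
    size = 1 << index_bits
    i = int((x - 1.0) * size)  # truncate to the nearest lower table row
    return table[i]

def max_error(index_bits, entry_bits, samples=10000):
    """Worst-case absolute error of the table over [1, 2)."""
    table = build_log_table(index_bits, entry_bits)
    worst = 0.0
    for k in range(samples):
        x = 1.0 + k / samples
        worst = max(worst, abs(table_log2(x, table, index_bits) - math.log2(x)))
    return worst

# More rows (index bits) or wider entries => smaller worst-case error,
# i.e. precision is a tunable function of memory spent on the table.
print(max_error(6, 16))   # small table: coarse
print(max_error(12, 24))  # bigger table: much finer
```

This matches the FAQ's pattern of offering several precision levels at several memory sizes: the precision/memory trade-off is a design knob, not a hard ceiling.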