More on analog vs. digital, general microprocessors and the symbiosis of communications with computing - a visionary case study. Consider the evolution of hearing aids.
An analog hearing aid converts sound into electric current with a microphone, amplifies the current, and reconverts it to sound through a receiver. The problem is that the hearing impaired usually need certain frequencies amplified more than others, which requires numerous filters with steep slopes and high attenuation. These demand a large number of external capacitors, which take up space, generate noise and consume current. Noise, for example, can only be contained by increasing the signal current until the signal is dominant over the noise. Complex circuits with a large number of components are either noisy or use a lot of current - and they always take up a lot of space. Since hearing aids that must run on ordinary hearing aid batteries have reached the practical limits of physics, further significant advances in the analog hearing aid, if any, must await some sort of groundbreaking advance in science.
Instead, suppose one could develop a digital hearing aid: an analog-to-digital (A/D) converter connected to the microphone produces a digital signal, which is filtered and amplified by a programmed DSP, with the output reconverted to analog and piped to the receiver. Noise is generated only by calculation error, which can be minimized by increasing the precision of the calculations. For example, using 11-bit rather than 10-bit calculations increases the signal-to-noise ratio by 6 dB, while increasing current consumption by only 10%. A 20-bit signal representation raises the signal-to-noise ratio to about 120 dB, and 28-bit precision effectively eliminates calculation error, i.e. noise. (Unlike analog circuits, noise in the DSP circuits themselves has almost no effect on noise in the signal, since the circuit only needs to distinguish two conditions: full voltage or zero.) With digital technology, external capacitors can be done away with, noise can be effectively eliminated, and the signal can be processed logically in any way desired.
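The 6 dB-per-bit figure follows from the standard quantization-noise formula for an ideal converter, SNR ≈ 6.02·N + 1.76 dB for a full-scale sine wave. A quick sketch in Python, with that textbook formula as the only assumption:

```python
def quantization_snr_db(bits: int) -> float:
    """Ideal signal-to-noise ratio of an N-bit quantizer for a
    full-scale sine wave: SNR = 6.02 * N + 1.76 dB."""
    return 6.02 * bits + 1.76

# Each additional bit of precision buys about 6 dB of signal-to-noise ratio.
per_bit_gain = quantization_snr_db(11) - quantization_snr_db(10)   # ~6 dB
snr_20_bit = quantization_snr_db(20)                               # ~122 dB
```

This is why "just add a bit" is such a cheap way to buy dynamic range compared with fighting analog noise by raising signal current.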
Now imagine a digital hearing aid based on DSP technology performing at 40 MIPS, calibrated interactively to compensate precisely for the owner's hearing deficiency - and small enough to fit unnoticed, completely inside the ear canal. Its bits of precision and sound sampling rate are beyond anything distinguishable by humans. This would be an example of a smart device, one that probably would not require VxWorks, mainly because it would focus on doing one thing (processing 32,000 digitized sound samples per second) and not much else. The control logic is more complex than this implies, because a number of circumstances require special processing, such as an anti-feedback circuit to allow the high gain needed in quiet surroundings. However, it is not sufficiently complex to warrant an operating system.
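The per-sample character of that work can be sketched as a toy FIR (finite-impulse-response) filter. The real SENSO algorithms are proprietary, so the structure and coefficients below are purely illustrative:

```python
from collections import deque

def fir_filter(samples, coeffs):
    """Apply an FIR filter one sample at a time, the way a hearing-aid
    DSP grinds through its 32,000 samples per second."""
    history = deque([0.0] * len(coeffs), maxlen=len(coeffs))
    out = []
    for s in samples:
        history.appendleft(s)                      # newest sample first
        out.append(sum(c * h for c, h in zip(coeffs, history)))
    return out

# A 3-tap moving average: a crude low-pass that attenuates high frequencies.
smoothed = fir_filter([0.0, 3.0, 0.0, 3.0, 0.0], [1/3, 1/3, 1/3])
```

A real aid would run several such filter banks with per-band gains matched to the owner's audiogram, but the inner loop is this simple multiply-accumulate.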
I know, a 40 MIPS computer in your ear seems far-fetched, but if you think so you would be wrong. All the "imaginary" numbers I specified above were taken from the specs of the SENSO hearing aid made by Widex. So you don't actually have to imagine this device; you can buy one for $5,200, fully calibrated to your hearing.
Now think about the functionality of such a digital hearing aid. Every human has a unique voice print that can be statistically identified from spoken words. It would be possible to add functionality to the hearing aid to monitor the words it hears and identify the speaker. In particular, if the speaker is the owner, then coded words could cause the hearing aid to do something special - but only if spoken by the owner. For example, "Computer: one plus one equals" would cause a pleasant voice in your head to say, "Allen, one plus one equals two". "Computer answer only: integrate from 1 to 2 the function one over x squared." The voice says, "xxxx [whatever the answer is]".
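The owner-gated command idea reduces to a small control loop. A hypothetical sketch - the voice prints are plain strings here, standing in for the statistical speaker models a real device would use, and the dispatcher covers only the example above:

```python
def handle_utterance(words, speaker_print, owner_print):
    """Honor coded words only when the speaker's voice print is the owner's."""
    if speaker_print != owner_print:
        return None                    # not the owner: amplify as usual, no command
    if words.startswith("Computer:"):
        return dispatch(words[len("Computer:"):].strip())
    return None                        # ordinary speech, not a command

def dispatch(command):
    # Toy dispatcher covering only the "one plus one equals" example.
    if command == "one plus one equals":
        return "Allen, one plus one equals two"
    return "Command not recognized"
```

The important design point is the order of the checks: speaker verification gates everything, so a stranger saying "Computer: ..." is just more sound to amplify.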
Add memory and VCSI's voice recognition and translation algorithms, and we get: "Computer: translate to English". Afterwards, any foreign-language word spoken by someone other than the owner (remember the voice print) would be translated to English and spoken by the voice.
If city noises like airplanes disturb you, "Computer, quiet airplane noise" would cause the airplane noises to be filtered out (minimized), which is possible because the hearing aid plugs the ear, so amplification is required to hear anything at all. Similarly, "Computer, quiet everything and monitor" turns down all sounds, enabling comfortable sleep or concentration even in noisy surroundings. Of course, the computer would remain wide awake, so you might hear, "Allen, wake up, I heard glass breaking."
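The "quiet everything and monitor" behavior comes down to a simple rule: suppress the output but keep classifying sounds, alerting only on selected types. A sketch, assuming the sound-type classification (the hard part) has already happened upstream:

```python
def monitor(sound_events, alert_types):
    """Scan a stream of classified sound events while output is muted;
    return an alert phrase for the first event worth waking the owner for."""
    for event in sound_events:
        if event in alert_types:
            return f"Allen, wake up, I heard {event}."
    return None   # nothing alarming: let the owner sleep
```

A real device would run this continuously against the classifier's live output rather than over a finished list, but the gating logic is the same.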
Unlike the SENSO hearing aid that is on the market, this imaginary device would require an RTOS like VxWorks. The reason is that the 32,000-samples-per-second work must continue while voice print checks, voice recognition, speech synthesis, language translation, natural language processing, sound-type pattern recognition (neural networks) and fuzzy logic run in the background. Incidentally, adequate software technology for each of these functions exists today. The only thing that prevents Widex from making this latter product is fitting the general microprocessor and memory into the ear canal, too. Certainly that should be possible in a few years, especially if the DSP functions can be folded into the general microprocessor as well. By the way, the neural networks needed for pattern matching will work perfectly well on an ordinary microprocessor, obviating any need for the special parallel processing required by large-scale neural networks.
Of course anyone reading this realizes that if the microphone picks up frequencies just above human hearing, those frequencies would be ideal for data communication. So suppose there is a wireless antenna/transmitter/receiver close to the owner's ear (say, on a belt) that can communicate with the hearing aid. Any such relatively large device can also communicate over a Wireless Local Loop (WLL), PCS or LEO satellite system. In this fashion, the hearing aid could be in constant touch with any source of information in the world (a server) from any point in the world by means of the internet. Now, when the owner says, "Computer, identify people", voice prints picked up by the hearing aid that are not the owner's are transmitted to the wireless device with instructions, which in turn connects first to the owner's private voice print server seeking a hit. If none is found, a public voice print server is accessed, and publicly available information about the person is returned. Finally, the voice says, "John Doe". "Computer: when did I talk to him last?" The voice says, "May 15, 1997 at 10:00 am." (If this seems overly exotic, it shouldn't. This is the exact logic used today to guard against cellular phone theft. Each phone has a unique RF footprint, which is checked first against activity in the roaming area for a statistical hit. If none, then a public source is checked. Security-related information is associated with a hit.)
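The tiered lookup - private server first, public fallback second - is the same pattern in both the hearing aid and the cellular RF-footprint check. A sketch, with the servers modeled as plain dictionaries standing in for services behind the wireless link:

```python
def identify(voice_print, private_server, public_server):
    """Resolve a voice print: owner's private server first, then public."""
    if voice_print in private_server:
        return private_server[voice_print]
    if voice_print in public_server:
        return public_server[voice_print]
    return "Unknown speaker"

# Hypothetical data for illustration only.
private = {"vp-042": "John Doe"}
public = {"vp-099": "Jane Roe"}
```

Putting the private server first matters: it is both faster (fewer prints to match) and more trustworthy (the owner curated it), so the public database is consulted only on a miss.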
With communications, when the owner is sleeping in quiet mode, the hearing aid not only would awaken the owner if certain sounds occur, but also would notify the police or fire department if warranted. Sleep better with elevator music in the background? The communications link can serve up all music by request.
The number of useful applications using ubiquitous connectivity of the hearing aid to thin servers is uncountable - at the same time both exciting and frightening. For every useful function possible without communications, there are untold numbers of extensions of that function if instant world-wide communication over the internet is possible. In fact, so many useful things WILL be done with "Completely In the Canal" hearing aids that they will become attractive to everyone, whether or not hearing is impaired - of course then they would need to be called something else.
In summary, the basic digital computer-in-your-ear hearing aid exists, already with functionality that overwhelms analog versions. For this reason alone I can't see much of any future for analog except for sensors (at the front- and back-ends of embedded devices).
All it takes now to put everything described above into practice is the addition of a small but capable general-purpose microprocessor and memory - and of course VxWorks. I think you will agree that the advanced version of this device gives new meaning to the phrase "Embedded Internet Device".
Allen
PS - The astute reader no doubt noticed the potential for ambiguity when initiating control of the hearing aid by saying "computer". What if the word is used in a different context? There are three possible answers: (1) Use a nonsense word to initiate control, like "AAHBAA", pronounced "aahbaa". (2) Use natural language processing to discern the context and resolve potential ambiguities. (3) Bang yourself on the side of the head after saying "Computer" to confirm your intentions. The latter solution would have immediate credibility with generations of Star Trek fans.
PPS - Don't get hung up on the potential for invasion of privacy with advanced versions of these devices. The march of technology will not slow down because of abstract fears, so, like it or not, it is coming. The challenge is for individuals, companies and governments to safeguard the individual from invasion of privacy while protecting the freedoms we cherish.