Technology Stocks : General Magic


To: Straight Up who wrote (6749)7/22/1999 4:34:00 PM
From: scott bieda
 
2 7/8ths after earnings!! Unless Markman takes that donut out of his mouth!!



To: Straight Up who wrote (6749)7/23/1999 9:17:00 AM
From: cdtejuan
 
FYI:

-----------------

Calling to a phone near you: Advertising
By John Borland
Staff Writer, CNET News.com
July 22, 1999, 12:50 p.m. PT

Imagine a trip through a shopping mall. While passing Macy's department store, your cell phone beeps, and on the
screen flashes a message: "10 percent off at Macy's."

For committed coupon clippers, this might be the best invention since the mail-order catalog. For those leery of advertising, it
could spell trouble for the last medium still safe from advertisers.

A handful of service providers and Internet portal companies aiming at the wireless phone market are toying with the idea of
subsidizing portal-like cell phone services with highly targeted advertising.

In return for having advertisements beamed straight into a standard digital phone--or pager, PalmPilot, or other wireless
device--a consumer would get inexpensive or free access to information services like maps, calendars, personal messaging
systems, and even entertainment functions like horoscopes or chat lines.

"The idea is to deliver messages--or coupons--that pertain to what you're interested in, at a time when they're most valuable
to you," said Dave Weinstein, @Motion's vice president of marketing. @Motion, funded by Intel and Deutsche Telekom, creates
the infrastructure and software for these "voice portals," and is in the early stages of testing its service with wireless carriers
and Internet content companies.

Several start-up companies--including @Motion, AirFlash, and GeoWorks--as well as larger players like General Magic and
Phone.com are looking at creating this kind of mobile phone portal.

Direct marketing
As on the Internet, the content and advertising would be targeted at a user's demographic group and personal interests. When
signing up for a service, Weinstein said, a customer would likely give up enough information to allow customized targeting of
advertisements and personalized content.

Since cell phones can be used to pinpoint a user's location, stores will be able to push their advertisements when a customer
comes within shopping range. Location targeting is currently accurate only to within about a quarter mile on average, but that
range may tighten as the technology improves.

"The problem with the Internet is that typically normal stores can't advertise," said George Sollman, CEO of @Motion. "This
allows brick and mortar stores to start leveraging the Internet's advantages of targeting ads."

But the real question is whether people who use these services will accept advertisements over their cell phones, one of the last
media to remain largely commercial-free.

"I think for the overall delivery of content to mobile devices, the jury is still out on what the business model is," said Mark
Desautels, managing director of the Wireless Data Forum.

Proponents of the ad-based model are quick to point out that the ads aren't unsolicited and that individual users will be able to
choose a pricing plan that best suits their needs. A user who doesn't want to be targeted by ads could even opt to pay a small
fee with every use, similar to the way dial-up 411 directories work today.

But the industry still risks touching the same raw nerve so often inflamed by unsolicited Internet email, or spam, Desautels said.

"This has such potential as a direct marketing device, that I think it has a lot of people drooling," he said. "But it's tempered by
the realization that people have reacted very, very strongly and negatively to the spam they've received on the Internet."

Too early to worry?
Some inside the wireless industry itself say the advertising model is premature.

"Getting information on a mobile phone is very different from getting information on the Web," said Rama Aysola, CEO of
AirFlash, a mobile information service now in trials with Pacific Bell. "When people want information on a mobile phone, they're
not surfing. They want the information now, and they're willing to pay for it."

AirFlash sees the information services supported by a combination of e-commerce-like activities and fees for services like
driving directions or traffic updates, Aysola said.

Analysts say the idea has promise, but will have to be tested with consumers before it's rolled out to the mass market.

"I think the concept is wonderful. But its' a matter of the usual statement, the devil is in the details," said Dave Berndt,
associate director of the Yankee Group's wireless division. "In 12 months we'll have a better sense of whether this works or
not."

news.com



To: Straight Up who wrote (6749)7/23/1999 9:54:00 AM
From: cdtejuan
 
note also this article!
sciam.com
FYI:
----------------------Talking with Your Computer

Speech-based interfaces may soon allow computer users to retrieve data and issue instructions without
lifting a finger

by Victor Zue


For decades, science-fiction writers have envisioned a world in which speech is the most commonly used interface
between humans and machines. This is partly a result of our strong desire to make computers behave like human
beings. But it is more than that. Speech is natural--we know how to speak before we know how to read and write.
Speech is also efficient--most people can speak about five times faster than they can type and probably 10 times
faster than they can write. And speech is flexible--we do not have to touch or see anything to carry on a
conversation.

The first generation of speech-based interfaces is beginning to emerge, including high-performance systems that can
recognize tens of thousands of words. In fact, you can now go to various computer stores and buy speech-recognition
software for dictation. Products are offered by IBM, Dragon Systems, Lernout & Hauspie, and Philips. Other systems
can accept extemporaneously generated speech over the telephone. AT&T Bell Labs pioneered the use of
speech-recognition systems for telephone transactions, and now companies such as Nuance, Philips and
SpeechWorks have also entered the field. The current technology is employed in virtual-assistant services, such as
General Magic's Portico service, which allows users to request news and stock quotes and even listen to e-mail over
the telephone. But the Oxygen project will need far more advanced speech-recognition systems.

I believe the next generation of speech-based interfaces will enable people to communicate with computers in much
the same way that they communicate with other people. Therefore, the notion of conversation is very important. The
traditional technology of speech recognition--which converts audible signals to digital symbols--must be augmented
by language-understanding software so that the computer can grasp the meaning of spoken words.

On the output side, the machine must be able to verbalize; it has to take documents from the World Wide Web, find the
appropriate information and turn it into well-formed sentences. Throughout this process the machine must be able to
engage in a dialogue with the user so that it can clarify mistakes it might have made--for example, by asking
questions such as "Did you say Boston, Massachusetts, or Austin, Texas?"

Galaxy Speaks

We at the M.I.T. Laboratory for Computer Science have spent the past decade working on systems with this kind of
conversational interface. Unfortunately, the machines developed so far are not terribly intelligent; they can deal only
with limited domains of knowledge, such as weather forecasts and flight schedules. But the information is up-to-date,
and you can access it over the telephone. The machines are capable of communicating in several languages; the three
to which we pay the most attention are American English, Spanish and Mandarin Chinese. These systems can answer
queries almost in real-time--that is, just as quickly as in a normal conversation between two people--when the
delays in downloading data from the Web are discounted.

The speech-based applications we have produced are founded on an architecture called Galaxy, which our group
introduced five years ago. It is a distributed architecture, which means that all the computing takes place on remote
servers. Galaxy can retrieve data from several different domains of knowledge to answer a user's query. The system
can handle multiple users simultaneously, and last but not least, it is mobile. You can access Galaxy using only a phone,
but if you also have an Internet connection, you can tell the machine to download data to your computer.

Galaxy has five main functions: speech recognition, language understanding, information retrieval, language
generation and speech synthesis. When you ask Galaxy a question, a server called Summit matches your spoken
words to a stored library of phonemes--the irreducible units of sound that make up words in all languages. Then
Summit generates a ranked list of candidate sentences--the machine's guesses at what you actually said. To make
sense of the best-guess sentence, the Galaxy system uses another server called Tina, which applies basic
grammatical rules to parse the sentence into its parts: subject, verb, object and so forth. Tina then formats the question
in a semantic frame, a series of commands that the system can understand. For example, if you asked, "Where is the
M.I.T. Museum?" Tina would frame the question as the command "Locate the museum named M.I.T. Museum."

At this point, Galaxy is ready to search for answers. A third server called Genesis converts the semantic frame into a
query formatted for the database where the requested information lies. The system determines which database to
search by analyzing the user's question. Once the information is retrieved, Tina arranges the data into a new semantic
frame. Genesis then converts the frame into a sentence in the user's language: "The M.I.T. Museum is located at 265
Massachusetts Avenue in Cambridge." Finally, a commercial speech synthesizer on yet another server turns the
sentence into spoken words.
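The five functions described above form a sequential pipeline: recognition, understanding, retrieval, generation, synthesis. The flow can be sketched with stub stages; every function body here is a placeholder standing in for the real Summit, Tina, and Genesis servers, not their actual behavior:

```python
# Illustrative pipeline mirroring Galaxy's five stages; each stage is a stub.
def recognize(audio):
    # Summit: match audio against stored phonemes, return ranked candidates
    return ["where is the mit museum"]

def understand(candidates):
    # Tina: parse the best candidate into a semantic frame
    return {"action": "locate", "name": "M.I.T. Museum"}

def retrieve(frame, database):
    # Genesis: convert the frame into a query against the chosen database
    return database.get(frame["name"])

def generate(frame, result):
    # Genesis again: wrap the retrieved data in a well-formed sentence
    return f"The {frame['name']} is located at {result}."

def synthesize(sentence):
    # Commercial synthesizer: text to speech; here we just return the text
    return sentence

def galaxy(audio, database):
    # Chain the five stages end to end
    candidates = recognize(audio)
    frame = understand(candidates)
    result = retrieve(frame, database)
    return synthesize(generate(frame, result))
```

The distributed design means each of these stages can live on its own server, which is what lets the system serve many users at once.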

Our laboratory has so far created about half a dozen Galaxy-based applications that can be accessed by telephone.
Jupiter offers weather information for 500 cities worldwide. Pegasus provides the schedules of 4,000 commercial
airline flights in the U.S. every day, updated every two or three minutes. Voyager is a guide to navigation and traffic in
the greater Boston area. To move from one application to another, the user simply says, "I want to talk to Jupiter" or
"Connect me to Voyager." Since May 1997 Jupiter has fielded more than 30,000 calls, achieving correct
understanding of about 80 percent of the queries from first-time users. The calls are recorded and evaluated to
improve the system's performance [see sidebar].

Speech recognition would be an ideal interface for the handheld devices being developed as part of the Oxygen
project. Using speech to give commands would allow much greater mobility--there would be no need to incorporate
a bulky keyboard into the portable unit. And spoken language would enable users to communicate with their devices
more efficiently. A traveling executive could say to his or her computer, "Let me know when Microsoft stock is above
$160." The machine would act much like a human assistant, accomplishing a variety of tasks with minimum
instruction.
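Once parsed, a spoken command like the executive's stock alert reduces to a standing trigger rule checked against incoming quotes. A toy sketch (symbols, thresholds, and function names are all illustrative):

```python
def check_alerts(quotes, alerts):
    # Fire a notification for each alert whose price threshold has been crossed.
    # quotes: {symbol: latest price}; alerts: [(symbol, threshold), ...]
    return [f"{sym} is above ${thresh:g}"
            for sym, thresh in alerts
            if quotes.get(sym, 0.0) > thresh]

# "Let me know when Microsoft stock is above $160" becomes:
alerts = [("MSFT", 160.0)]
```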

Of course, several research problems still need to be addressed. We must create speech-recognition applications that
can handle many complex domains of information. The systems must be able to draw data from different
domains--the weather information domain, for example, and the flight information domain--without being
specifically instructed to do so. We must also increase the number of languages that the machines can understand.
And finally, to exploit the spoken-language interface fully, the systems must be able to do more than just what I
say--they must do what I mean. Ideally, tomorrow's speech-based interfaces will allow machines to grasp their
users' intentions and respond in context. Such advanced systems probably will not be available for at least a decade.
But once they are perfected, they will become an integral part of the Oxygen infrastructure.

Further Reading:

Publications from the Spoken Language Systems Group at LCS

The Author

VICTOR ZUE is an associate director of the M.I.T. Laboratory for Computer Science and head of the lab's Spoken
Language Systems Group. He is also a senior research scientist at M.I.T., where he received his Sc.D. in electrical
engineering in 1976.