Technology Stocks : PC Sector Round Table


To: Mark Oliver who wrote (1151)10/27/1998 4:35:00 PM
From: Pierre-X
 
Re: shopping "agents"

I'm afraid I'll have to disagree with you on this one. I've done some more thinking about the "agent" scene, and here's my current view:

Server-side processes will continue to handle the predominant share (over 95%) of "agent"-style queries. Ultimately, all conceivable uses of "agents" are simply DATABASE QUERIES AGAINST MULTIPLE DATABASES. Take the example of:
shopper.com

This is a website that tracks pricing on thousands of products submitted by hundreds of participating online vendors of computer gear.

There is simply no possible way for a CLIENT-side agent application to conduct the database queries involved against THIS many databases for any significant number of products and hope to stay up to date. There is also the problem of maintaining a "meta-database", i.e. a database that contains the locations of all the searchable databases. This is not a problem from a technical standpoint--quite the opposite--but from a funding standpoint: who would pay to construct and maintain such a metadatabase?
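To make the meta-database idea concrete, here's a minimal sketch of what such a registry would look like. All topic names and database locations below are invented placeholders; the point is that every entry has to be kept current by someone, which is exactly the funding problem:

```python
# Hypothetical sketch of a "meta-database": a table mapping topics to the
# locations of the databases an agent would need to query for that topic.
# Every name and endpoint here is an invented placeholder.
META_DB = {
    "computer hardware": ["db.vendor-a.example", "db.vendor-b.example"],
    "books": ["db.bookstore.example"],
}

def databases_for(topic):
    """Return the list of database locations an agent must query for a topic.
    Keeping this table accurate as vendors come and go is the ongoing
    maintenance cost nobody has an obvious incentive to pay for."""
    return META_DB.get(topic, [])

print(databases_for("computer hardware"))
```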

The shopper.com site solves the metadatabase problem elegantly. The business model allows the selling of advertising space (something you can't do as a metadatabase serving client side agents) and charging the "member" vendors a fee for the service of giving them exposure to shoppers who come to this site.
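The server-side model described above can be sketched in a few lines. This is not shopper.com's actual implementation, just an illustration of the core idea: the server merges all the vendor price lists ONCE, instead of every shopper's client hammering every vendor database itself. Vendor names, products, and prices are all invented:

```python
# Hypothetical sketch of a server-side price aggregator in the style of
# shopper.com. Each "vendor database" is modeled as a dict of product -> price;
# all vendors, products, and prices are invented for illustration.
VENDOR_FEEDS = {
    "vendor-a.example": {"17in monitor": 499.00, "4GB hard drive": 189.00},
    "vendor-b.example": {"17in monitor": 479.00, "56k modem": 89.00},
    "vendor-c.example": {"4GB hard drive": 199.00, "56k modem": 79.00},
}

def aggregate_prices(feeds):
    """Merge per-vendor price lists into one product -> [(vendor, price)]
    table, sorted cheapest first. The server precomputes this for all
    shoppers, rather than each client querying every vendor itself."""
    table = {}
    for vendor, products in feeds.items():
        for product, price in products.items():
            table.setdefault(product, []).append((vendor, price))
    for offers in table.values():
        offers.sort(key=lambda vp: vp[1])
    return table

table = aggregate_prices(VENDOR_FEEDS)
print(table["17in monitor"][0])  # cheapest offer for that product
```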

Now, clearly there will be agent tasks that are simply not demanded by enough people to justify building a site that caters to them. I believe the search engines (altavista, yahoo, dejanews, et al) will evolve more sophisticated search capabilities, and that will be the best solution to such tasks. Again, this is a SERVER-side solution, because, again, all agent tasks are inherently the result of multiple queries against multiple databases, and giant, incredibly fast servers with high-bandwidth net connections and massive internal reference tables--such as the search engines--are best suited to such tasks. The search engines will become the mainframes of tomorrow.

Finally, for the 5% of agent tasks where the search engines simply do not contain the requisite data within their internal reference tables, or where the query is too complex to be specified, THAT'S when we'll see client-side agents being employed.

The above may be a little incoherent, since this is the first time I've articulated this theory. Flame away.

NOTE: As a matter of fact, I told you gentlemen about a client-side "agent" at diskcon--the Silicon Investor spider that I'm developing, which allows offline thread browsing. It's fully functional now, but I still have to combine the download and reader modules and augment the UI. Beta testers, anyone?
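For anyone curious what a download-then-read-offline spider amounts to, here's a minimal sketch of the flow. This is NOT my actual spider's code; the URLs are placeholders and the fetch function is a stand-in you'd replace with a real HTTP fetch:

```python
# Hypothetical sketch of a client-side thread spider: download a thread's
# messages to local files, then browse them with no net connection.
# URLs are placeholders; `fetch` is injected so a real HTTP call (or a
# test stub) can be swapped in.
import os
import tempfile

def spider_thread(thread_id, msg_count, fetch, out_dir):
    """Download messages 1..msg_count of a thread, one file per message,
    so a separate reader module can browse them offline later."""
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for n in range(1, msg_count + 1):
        url = "http://example.invalid/thread/%s/msg/%d" % (thread_id, n)
        html = fetch(url)  # in real use: an HTTP GET of the message page
        path = os.path.join(out_dir, "msg%05d.html" % n)
        with open(path, "w") as f:
            f.write(html)
        paths.append(path)
    return paths

def read_offline(paths):
    """The 'reader module': load saved pages straight from disk."""
    return [open(p).read() for p in paths]

# Demo with a stand-in fetch function (no network needed):
fake_fetch = lambda url: "<html>" + url + "</html>"
out_dir = tempfile.mkdtemp()
saved = spider_thread("pc-roundtable", 3, fake_fetch, out_dir)
```

The two halves are deliberately separate, mirroring the post's "download and reader modules": the spider only needs the net while fetching, and the reader never needs it at all.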