Technology Stocks : Frank Coluccio Technology Forum - ASAP


To: Frank A. Coluccio who wrote (628)12/4/1999 2:51:00 PM
From: Jay Lowe  Respond to of 1782
 
re: improved, smarter browsers... where should they reside?

There's good math for this ... paging policies have been sifted through by 2 generations of grad students ... their application to distributed systems, where processors, stores, and connections have quantifiable characteristics, has also been studied to some degree. There are also developmental process implications.
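To make the "good math" concrete: the canonical paging policy the grad students started from is LRU. A minimal sketch (the class name and hit/miss counters are mine, not from any of the papers cited below):

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used paging policy: on overflow, evict the page
    that has gone unreferenced the longest."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # key -> value, least recent first
        self.hits = self.misses = 0

    def access(self, key, loader):
        if key in self.pages:
            self.hits += 1
            self.pages.move_to_end(key)  # mark as most recently used
        else:
            self.misses += 1
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False)  # evict the LRU page
            self.pages[key] = loader(key)
        return self.pages[key]

cache = LRUCache(capacity=2)
for k in ["a", "b", "a", "c", "b"]:
    cache.access(k, lambda k: k.upper())
# "a","b" miss; "a" hits; "c" misses and evicts "b"; "b" misses and evicts "a"
```

The quantifiable characteristics mentioned above show up as exactly these hit/miss counts: swap in a different eviction rule and the counters tell you whether it paid off for a given reference stream.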

E.g.: ATHM deploys smart adaptive caching as a software accessory for their client browsers ... tune, tune, experiment, grow ... move the caching into the head-end ... caching now a distributed intelligence shared by PC and head-end ... project this into a toaster, set-top box ... as a library for Windows CE, etc ... partner with Akamai to offer the capability to the non-ATHM heterogeneous web.
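The PC-plus-head-end split can be sketched in a few lines. Everything here is hypothetical (the `HeadEnd`/`Client` names and the popularity-based eviction rule are illustrative assumptions, not ATHM's actual design): the per-PC cache is plain LRU, while the shared head-end sees requests from many clients and adapts by keeping the objects that are popular across the whole population.

```python
from collections import Counter, OrderedDict

class HeadEnd:
    """Shared cache that keeps the objects most popular across all clients."""

    def __init__(self, size, origin):
        self.size, self.origin = size, origin
        self.store, self.popularity = {}, Counter()

    def fetch(self, url):
        self.popularity[url] += 1  # demand signal aggregated over clients
        if url not in self.store:
            self.store[url] = self.origin(url)
            if len(self.store) > self.size:
                # adapt: drop the least popular object, not the oldest
                coldest = min(self.store, key=lambda u: self.popularity[u])
                del self.store[coldest]
        return self.store[url]

class Client:
    """Small per-PC LRU cache that falls back to the head-end on a miss."""

    def __init__(self, size, headend):
        self.size, self.headend = size, headend
        self.local = OrderedDict()

    def get(self, url):
        if url in self.local:
            self.local.move_to_end(url)
            return self.local[url]
        obj = self.headend.fetch(url)
        self.local[url] = obj
        if len(self.local) > self.size:
            self.local.popitem(last=False)
        return obj

head = HeadEnd(size=2, origin=lambda url: "body of " + url)
pc1 = Client(size=1, headend=head)
pc2 = Client(size=1, headend=head)
pc1.get("news"); pc2.get("news"); pc1.get("mail")
```

The "distributed intelligence" is the division of labor: each PC optimizes for its own recency, the head-end optimizes for community-wide popularity, and neither policy would work as well in the other's position.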

ATHM is basically a private Abilene ... they should be using their control to develop and deploy new technologies which will be the infrastructure currency of the future.

ATHM should be cloistering little groups of weenies to develop stuff like this for the forthcoming phase where their HFC advantage starts to erode.

Sun, Microsoft, and Cisco have different kinds of gorilla leverage in this domain ... Sun especially in this context.

- Sun has the Java/SunOS claim to resource control
- Microsoft has the Java/NT/CE claim
- Cisco has the IOS claim
- etc, etc

The players own overlapping sets of the solution space. The end case is where all their offerings share an interoperating solution ... Java, SunOS, NT Client, NT Server, CE, IOS and whatever heterogeneous internet resource management system comes to rule the bone ... they all share an intelligence about adaptive resource management. In fact, they do today ... it's just that the policies are, by necessity, the most simplistic ones ... the ones that have no distributed intelligence.

But ATHM has more leverage in this game than it imagines, I think ... and much more experiment and deployment control ... they can widely deploy, test, and refine technologies completely in-house. I think they ought to go for owning some fundamental technologies because they have the entire solution space within their span of control.

Tutorial Mode
=========================================================
Adaptive algorithms for managing a distributed data processing workload
by J. Aman, C. K. Eilert, D. Emmes, P. Yocom, and D. Dillenberger
research.ibm.com

Many VERY cool tools exist in this area (scroll down in):
inrialpes.fr

A bibliography in ...
research.ibm.com
Predicting the performance of distributed virtual shared-memory applications
by E. W. Parsons, M. Brorsson, and K. C. Sevcik
research.ibm.com
This is actually about predicting performance of a homogeneous distributed virtual memory implemented across a network of workstations ... but the math extrapolates to heterogeneous networks (web-servers, PCs, cellphones, toasters).
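The core of that math is the textbook average-access-time model, and it extrapolates because it doesn't care what the "remote" tier is: another workstation's RAM over the network or a web cache upstream. A sketch (the microsecond figures are made-up illustrative values, not from the paper):

```python
def effective_access_time(hit_ratio, t_local_us, t_remote_us):
    """Expected access time given the fraction of references satisfied
    locally; the remainder pay the remote-access penalty."""
    return hit_ratio * t_local_us + (1 - hit_ratio) * t_remote_us

# When remote access is 1000x slower, a 1% drop in locality roughly
# doubles the effective access time:
fast = effective_access_time(0.99, t_local_us=0.1, t_remote_us=100.0)
slow = effective_access_time(0.98, t_local_us=0.1, t_remote_us=100.0)
```

This is why small improvements in adaptive placement policy translate into outsized end-to-end wins, whether the medium is shared memory or the web.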

I love this stuff! Used to be a main area of interest back when I was hacking OS internals on DEC machines.

Investment Mode
=========================================================
The question remains, harkening back to our meta-web discussion, how will comprehensive distributed resource management be deployed in the real world?

Reactively, as folks cherry-pick system-local performance gains by eliminating obvious bottlenecks, and

Proactively ... how? (Akamai, Sandpiper, ?) + (Sun, Microsoft, Cisco) ... some initiative at the net OS level? ... some Internet2 or IETF design to lead the way?

Because the technology is moving so fast, the reactive individualistic approach will rule ... creating solutions that are gradually subsumed within more proactive designs.

The meta-web paradigm is interesting to me because it suggests where to look for big wins.

Suppose Akamai/Sandpiper cast their frame as "distributed resource management and optimization for the web" ... from server farms to coffeepots and toasters ... and everything in between. This implies they'd have to get in bed with the likes of Sun or Cisco. Retro-reasoning suggests they must do the former to fulfill the vision, therefore they will do the latter in the process, somehow ... in some form or other ... merging, partnering, or being bypassed/extinguished.



To: Frank A. Coluccio who wrote (628)12/6/1999 12:40:00 PM
From: Stephen L  Respond to of 1782
 
Frank and thread,
Have there been any constructive dialogues on the use of "smart agents"? It was my understanding that lack of standardization on web pages (which may be partially remedied by XML) and on "handshakes" between agents were among the greatest obstacles to the proliferation of smart-bots. This appears to be changing, and I was wondering if anyone can suggest good references on the subject. We can make data, send it, receive it, store it, compress it, encrypt it ... but can we make sense out of it?

Thank you in advance for your thoughts and guidance,
Steve