

To: Neocon who wrote (12141)6/16/1999 1:54:00 AM
From: MNI
 
46K is quite okay for your modem. 56K is a theoretical upper bound, reachable only under IDEAL line-quality conditions. And even if everything else is ideal, the physical length of the copper cable between sender and receiver still imposes a limit.
I normally reach around 45.3K in the best case, which is early in the morning here.
Download capacity from this server is often an annoying 250 bytes/s, so even a very old modem would do.
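To put that 250 bytes/s figure in perspective, here's a quick back-of-the-envelope sketch (the ~10 line bits per byte is my assumption for async serial framing, i.e. 1 start + 8 data + 1 stop bit; the modem speeds are just common rates of the era):

```python
# Which side is the bottleneck: the modem, or the server's trickle?
server_rate_Bps = 250                     # observed server download rate, bytes/s
server_rate_bps = server_rate_Bps * 10    # ~10 line bits per byte (start + 8 data + stop)

for modem_bps in (2400, 14400, 28800, 45333):
    # effective throughput is capped by whichever link is slower
    effective_bps = min(modem_bps, server_rate_bps)
    print(f"{modem_bps:>6} bps modem -> effective {effective_bps} bps")
```

At 2500 line bits/s from the server, everything from a 14.4K modem up sits idle most of the time; only a 2400 bps antique would actually be the bottleneck.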

MNI



To: Neocon who wrote (12141)6/16/1999 8:28:00 PM
From: D. Long
 
The industry has been slightly misleading regarding 56k modems, which for one thing can never actually achieve 56k (FCC power limits cap connections at roughly 53k). As has already been pointed out, 56k is the theoretical limit, which would require lab-quality conditions. My company has customers that do regularly get 52k connections, but these are mostly in newer subdivisions with fresh copper and without the burden of legacy components in the local exchange. I'm quite lucky to be in an area that now has ADSL, which is exciting, though I have no use for an internet connection at home, since I get my fill at work.
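The "theoretical limit under lab conditions" point can be illustrated with the classic Shannon-Hartley bound for an analog channel (a simplification of how V.90 actually works, and the 3400 Hz usable voice-band bandwidth is my assumption):

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    """Shannon-Hartley limit: C = B * log2(1 + S/N)."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

B = 3400  # rough usable voice-band bandwidth in Hz (assumed)
for snr_db in (30, 35, 40, 45):
    c = shannon_capacity_bps(B, snr_db)
    print(f"SNR {snr_db} dB -> ~{c / 1000:.1f} kbps")
```

On a typical line with 35-40 dB of signal-to-noise, the bound lands right around the 40-45 kbps connects people actually see; you'd need an unrealistically clean line to push toward 56k.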

As to the reconfigurable computing, it's really an old field. A lot of work has been done in the area, and HP for one has attempted in the past to slap together a working reconfigurable machine. It has mostly been a matter of architecture and software that limits the practical implementation of a fully reconfigurable computer for general uses. The specific chips used have been exploited widely. I'm not a computer scientist, and I surely don't pretend to understand the math, but it seems to be a matter, in the architecture, of matching the ability to crunch data against the overhead of communication among the chips on the chipset. Since the algorithms are performed in hardware, the software is the deciding factor: getting an implementation that can manage the highly dynamic nature of the reconfigurable computer and chart the most efficient use of its power. That's what I understand from what I've read from MIT and other sources, at least.
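The crunch-versus-communication tradeoff described above can be caricatured with a toy Amdahl-style throughput model (entirely my own illustration, not anything from StarBridge or MIT; the numbers are made up):

```python
def effective_speedup(raw_speedup, overhead_fraction):
    """Amdahl-style model: time spent on reconfiguration and inter-chip
    communication can't be accelerated, so it caps the overall gain."""
    return 1 / (overhead_fraction + (1 - overhead_fraction) / raw_speedup)

# Even a 100x hardware speedup gets throttled hard if a fifth of the
# time goes to shuffling data and reloading the reconfigurable fabric:
for overhead in (0.01, 0.05, 0.20):
    print(f"overhead {overhead:.0%} -> effective {effective_speedup(100, overhead):.1f}x")
```

With 20% overhead the effective speedup collapses to under 5x, which is one way to see why the software that minimizes that overhead, rather than the raw chips, is the deciding factor.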

If these guys at StarBridge have made the leap, they've performed one hell of a feat. I for one would be eager to see what kind of breakthroughs this would engender in graphics. StarBridge also claims to have routers and switches built on their model, which would mean unbelievable performance for ISPs and telecoms. They've already sold one machine to a California-based company named Caveoi, I believe, for powering a next-gen intelligent search engine. The applications are limitless. Exciting stuff if true.

Derek