Technology Stocks : The *NEW* Frank Coluccio Technology Forum

To: Frank A. Coluccio who wrote (19018) | 1/20/2007 5:04:41 PM
From: Frank A. Coluccio | Read Replies (1) of 46821
 
If Bittorrent usage by 5% of users is not an anomaly, as Cringely asserts (and I agree), but merely a harbinger of what a majority will wind up doing, then how does placing three, five, or a hundred very large data centers in every state alleviate the problem of first-mile congestion? Yet he argues that carpet-bombing states with data centers will achieve exactly that, and that this is why Google is doing so.
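To make the scaling worry concrete, here's a back-of-envelope sketch. Every number in it (subscribers per access node, shared segment capacity, per-user rate) is my own illustrative assumption, not a figure from Cringely or from any carrier:

```python
# Back-of-envelope sketch of the first-mile scaling argument.
# All figures below are illustrative assumptions, not measured values.

SUBSCRIBERS_PER_NODE = 500     # homes sharing one access-network segment (assumed)
SEGMENT_CAPACITY_MBPS = 1000   # shared first-mile capacity of that segment (assumed)
TORRENT_RATE_MBPS = 2.0        # sustained per-user Bittorrent rate (assumed)

def utilization(active_fraction):
    """Fraction of the shared segment's capacity consumed by Bittorrent users."""
    active_users = SUBSCRIBERS_PER_NODE * active_fraction
    return active_users * TORRENT_RATE_MBPS / SEGMENT_CAPACITY_MBPS

print(f"5% of users active:  {utilization(0.05):.0%} of segment capacity")
print(f"50% of users active: {utilization(0.50):.0%} of segment capacity")
```

The point of the toy model: the congestion lives in the shared access segment, and adding data centers upstream of it changes none of these terms.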

At the very root of the argument, it is nothing short of antithetical to suggest that Bittorrent is fed from data centers to begin with. Bittorrent is peer-to-peer, demanding, at most, that ISPs' routers, switches, and colocation facilities possess sufficient capacity, along with the first- and second-mile bandwidth pipes connecting them to end users. Did he mean to suggest that there should be more wire centers, hubs, and colocation centers situated closer to end users? Or did he actually mean data centers, as his delving into the number of servers would suggest? If the latter, then any discussion of the number of servers (cheap PCs) gets shunted to ground. Yes, Bittorrent's use demands large amounts of transit and first-mile bandwidth, preferably situated somewhere between population centers of heavy users, but it doesn't demand more servers in data center cabinets.
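A toy model of where the bytes originate makes the same point: in a swarm, the origin's upload burden stays roughly flat while the growth lands on the peers' own access links. The one-seed simplification and the file size are assumptions for illustration only:

```python
# Sketch contrasting where distribution bytes originate in client-server
# vs. peer-to-peer delivery. Figures are illustrative assumptions only.

FILE_GB = 1.0  # size of the distributed file (assumed)

def client_server_origin_load(num_users):
    """Client-server: every copy is shipped from the data center."""
    return num_users * FILE_GB

def p2p_origin_load(num_users, seed_copies=1.0):
    """Peer-to-peer: an initial seed uploads roughly one copy; peers swap
    the rest among themselves over first- and second-mile links, so the
    origin's load does not grow with swarm size."""
    return seed_copies * FILE_GB

for n in (100, 10_000):
    print(f"{n:>6} users: origin ships {client_server_origin_load(n):>8.0f} GB "
          f"(client-server) vs. {p2p_origin_load(n):.0f} GB (p2p seed)")
```

Under this simplification, server-side demand is constant in swarm size, which is why more servers in cabinets don't help; more access and transit bandwidth does.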

Now, if he had stuck with an earlier rationale (one he used when introducing the 18-wheeler, container-sized data center concept) and argued that the proximity of user populations to the servers that perform their searches and run Google's growing number of Web-based applications bears directly on Google's overall processing and response-time performance, then that would have been easier to swallow, and another matter entirely.
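The proximity rationale does have arithmetic behind it. A rough sketch of round-trip propagation delay, using the common approximation of roughly 200 km per millisecond for light in fiber (about two-thirds of c); the distances are illustrative assumptions:

```python
# Rough round-trip propagation sketch behind the proximity rationale.
# ~200 km/ms (about 2/3 the speed of light in fiber) is a common
# approximation; the distances are illustrative assumptions.

SPEED_IN_FIBER_KM_PER_MS = 200.0

def propagation_rtt_ms(distance_km):
    """Round-trip propagation delay alone, ignoring queuing and
    serialization delays."""
    return 2.0 * distance_km / SPEED_IN_FIBER_KM_PER_MS

print(f"Cross-country (~4000 km): {propagation_rtt_ms(4000):.0f} ms round trip")
print(f"In-state      (~200 km):  {propagation_rtt_ms(200):.0f} ms round trip")
```

Tens of milliseconds per round trip matter for interactive search and Web applications, which is the one version of the many-data-centers argument that holds water.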

But, to suggest that a gazillion data centers would improve p2p performance (when a tenth as many DWDM huts would do a better job for p2p) is just plain technologically incorrect, imo. And hey, I could always be wrong, too. Any thoughts?

FAC