Technology Stocks : The *NEW* Frank Coluccio Technology Forum


To: axial who wrote (2018), 2/11/2001 3:37:45 PM
From: axial
 
An afterthought to the article: it makes no reference to the heat generated by these many interconnected devices.

I wonder how much additional energy HVAC systems consume trying to cool these devices. One could argue that the energy lost in summer is regained in winter (i.e., the HVAC plant has to produce that much less heat), but I'm not sure that holds. Are engineers actually calculating the effect of, say, 1,000 PCs and their associated equipment in an office building?
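To put a rough number on it, here is the sort of back-of-the-envelope arithmetic I have in mind; the per-PC wattage and the cooling efficiency are my own assumptions, not figures from the article:

# Back-of-envelope sketch: extra HVAC electricity needed to reject the heat from
# ~1,000 office PCs. The wattage and coefficient-of-performance figures are
# assumptions for illustration, not measurements.

PC_COUNT = 1000
WATTS_PER_PC = 150            # assumed average draw of a PC plus monitor
HVAC_COP = 3.0                # assumed coefficient of performance of the cooling plant
HOURS_PER_YEAR = 24 * 365     # machines (and cooling) left running around the clock

heat_load_kw = PC_COUNT * WATTS_PER_PC / 1000     # nearly all PC power ends up as heat
hvac_draw_kw = heat_load_kw / HVAC_COP            # electricity the chillers burn to remove it

print(f"Heat load from PCs:   {heat_load_kw:.0f} kW")
print(f"Extra HVAC draw:      {hvac_draw_kw:.0f} kW")
print(f"Annual PC kWh:        {heat_load_kw * HOURS_PER_YEAR:,.0f}")
print(f"Annual HVAC kWh:      {hvac_draw_kw * HOURS_PER_YEAR:,.0f}")

Under those assumptions, 1,000 PCs throw off roughly 150 kW of heat continuously, and the cooling plant burns on the order of another 50 kW just to carry it away.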



To: axial who wrote (2018), 2/11/2001 4:41:23 PM
From: Frank A. Coluccio
 
Good article, Jim. Thanks.

The author misses a couple of opportunities to expand on his point about the amount of electricity consumed by enterprise computing, especially in larger building structures and offsite data centers. I'll address only the commercial building space this time and save the data center for another occasion. This is a topic that is near and dear to me right now, btw.

Consider a forty-story commercial building with a floorplate large enough to warrant two to four LAN closets on each floor. Take the latter for the purposes of this discussion: four LAN closets per floor.

Next, consider that even if such a building were occupied by a single tenant, there would be a minimum of three combined (or discrete) communications centers and server farms.

Each of those three "main" rooms consumes electricity at a rate equivalent to ten to thirty of the LAN distribution closets that exist four to a floor, and requires a commensurate amount of air conditioning and uninterruptible power supply (UPS) provisioning.

Conceivably (in actuality, more often than not), you wind up with (40*4) + (3*40), counting each main room as roughly forty closets' worth of load, or the equivalent of 280 equipment rooms that must be conditioned for clean power, temperature and humidity while the machines in them continue to function unabated at all hours of the day and week.

And here we are not talking about a 9-to-5 workday; we're talking 24 * 365, nonstop. Run the numbers on this single, ordinary office building, then multiply by the 750,000 like it that are out there, not counting the larger offsite data centers. Lotta BTUs, I'd say, that need to be conserved.
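To make that concrete, here is a rough sketch. The 280 room-equivalents follow the (40*4) + (3*40) figure above; the per-closet load is an assumption for illustration only:

# Rough sketch of the building arithmetic. The per-closet-equivalent load figure
# is an assumption for illustration, not a measurement.

FLOORS = 40
CLOSETS_PER_FLOOR = 4
MAIN_ROOMS = 3
CLOSET_EQUIV_PER_MAIN_ROOM = 40     # each main room weighted as ~40 closets' worth of load

closets = FLOORS * CLOSETS_PER_FLOOR                                   # 160 LAN closets
room_equivalents = closets + MAIN_ROOMS * CLOSET_EQUIV_PER_MAIN_ROOM   # 280 total

HOURS_PER_YEAR = 24 * 365           # nonstop, not 9 to 5
KW_PER_CLOSET = 1.5                 # assumed draw per closet-equivalent, gear plus conditioning
BUILDINGS = 750_000

building_kwh = room_equivalents * KW_PER_CLOSET * HOURS_PER_YEAR
print(f"Room-equivalents per building: {room_equivalents}")
print(f"Annual kWh per building:       {building_kwh:,.0f}")
print(f"Annual kWh across {BUILDINGS:,} buildings: {building_kwh * BUILDINGS:,.0f}")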

One way of conserving those BTUs is to remove the four closets per floor, which exist only because copper cabling constraints demand that every desktop be within one hundred meters of a closet. Instead, bring all PC connections over relatively distance-insensitive fiber directly to the three primary rooms I referred to, namely the main communications rooms and server farms.

In so doing, more than half of the air-conditioning and UPS requirements are obviated, since the efficiency of each of the three remaining, larger rooms improves proportionately.
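Under some illustrative assumptions about the cooling and UPS overhead of a small closet versus a larger main room (none of these figures are measured; they're only meant to make the shape of the claim concrete), the saving looks roughly like this:

# Sketch of the consolidation argument: pull the desktop connections back over fiber
# to the three main rooms and retire the per-floor closets. All overhead figures here
# are illustrative assumptions, not measurements.

FLOORS, CLOSETS_PER_FLOOR, MAIN_ROOMS = 40, 4, 3

CLOSET_AIR_UPS_KW = 0.8      # assumed cooling + UPS overhead of one small LAN closet
MAIN_ROOM_AIR_UPS_KW = 12.0  # assumed cooling + UPS overhead of one main room/server farm
GROWTH_FACTOR = 1.5          # assume the main rooms grow to absorb the closet gear,
                             # but at the better efficiency of a large, centralized plant

rooms_before = FLOORS * CLOSETS_PER_FLOOR + MAIN_ROOMS        # 163 conditioned rooms
rooms_after = MAIN_ROOMS                                      # 3 conditioned rooms

overhead_before = (FLOORS * CLOSETS_PER_FLOOR * CLOSET_AIR_UPS_KW
                   + MAIN_ROOMS * MAIN_ROOM_AIR_UPS_KW)
overhead_after = MAIN_ROOMS * MAIN_ROOM_AIR_UPS_KW * GROWTH_FACTOR

saving = 1 - overhead_after / overhead_before
print(f"Conditioned rooms:     {rooms_before} -> {rooms_after}")
print(f"Air/UPS overhead (kW): {overhead_before:.0f} -> {overhead_after:.0f}  ({saving:.0%} saved)")

With those assumed figures, the conditioned-room count drops from 163 to 3 and the air/UPS overhead falls by roughly two-thirds, consistent with the "more than half" claim above.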

Try suggesting this to a Facilities Management Group within a large enterprise some day, and watch the collective looks on their faces when you tell them that half to three-quarters of their power and air budgets for new buildings are going to be cut. Fuhgeddaboudit. Likewise, in those very same communications closets where carriers house their massive SONET boxes and WDMs, try telling them that they will have to ignore the alarms twice a day and on weekends when power is shut off or cut back. No way.

So you don't get fiber to the desk right away. But what "do" you do about all of those hundreds of conditioned closets that churn away, nonstop, over three-day weekends and after 5 PM each day?

One would think that backhauling all computational needs to offsite data centers, via the transparency afforded by dark fiber, would alleviate some of this, and to some degree it does where it is being practiced, although the approach is not gaining as much momentum as it ought to, IMO. But there is no free lunch. Where turning off power inside office buildings is still discretionary to some extent, that is not the case at the offsite data center, where power and backup, as well as conditioned air, are a mandatory, nonstop proposition.

To make matters worse, larger data centers usually consume or reserve redundant amounts of power through contracts with the power company, sometimes from multiple power-grid segments, even though they "do" have backup generators on site. And so it goes.

FAC