Energy Consumption And The New Economy
Authorities hotly debate the Internet's effect on U.S. energy consumption, but everyone agrees data centers pose serious problems for the local grid.
Each time you order a book online or download an MP3 song, a lump of coal is burned somewhere in America. And that wireless Palm VII you swear by and put little batteries into? It eats up power like a refrigerator, up to 1,000 kilowatt-hours a year (kWh/y), when all the devices it connects to are taken into consideration.
At least, that's according to Mark Mills, coauthor of the Gilder Technology Group's Digital Power Report. Mills' claims first burst to prominence via an influential article, “Dig More Coal, the PCs Are Coming,” in the May 31, 1999 issue of Forbes. Among other things, the article stated that the Internet (client PCs, hubs, routers, repeaters, amplifiers, servers, and more) soaks up from 8 percent to 13 percent of America's total electricity consumption.
“This whole investigation began with a simple question,” says Mills. “Since all bits are electrons, when do they represent a significant amount of electricity? We are moving petabytes' worth of data, and a core law of physics is that there's no free lunch. When Peter Huber and I started our research, we were stunned at the magnitude of the demand that had been created.”
Received Wisdom
The Mills/Huber figures have now gained the status of received wisdom. They've been introduced into testimony before a House subcommittee in February 2000, cited in George W. Bush campaign speeches, and even referenced in a Doonesbury comic strip.
Nonetheless, they remain fiercely controversial. One reason is that in mid-1999, the most recent figures for commercial-sector energy use by PCs-and related equipment such as monitors and scanners-were four years old. They were based partly on a 1995 Commercial Buildings Energy Consumption Survey (CBECS) by the Energy Information Administration (EIA), part of the Department of Energy (DoE), and partly on measurements by the DoE's Lawrence Berkeley Labs (LBL) in California. (For more information on both sources, see Resources.)
The 1995 numbers showed 43 million PCs in use in the United States, consuming about 98 billion kWh/y. Using this as a baseline, Mills noted not only the vast increase in PC sales (tracked by other sources) but also the rise in the number of Web servers, routers, and other equipment. Such Internet infrastructure had flown beneath everyone's radar in 1995.
Mills concluded that the Internet, in mid-1999, had to be using about 290 billion kWh/y. Critics quickly assailed his report as “junk science,” charging that its numbers-which, among other things, said that a PC uses from 600W to 1,000W, while routers use from 500W to 2,000W-were overstated. Mills, it was charged, had also ascribed too much of the 1995 CBECS usage numbers to PCs, while ignoring systems that had been retired.
Another significant reason for controversy is that the Mills report, entitled “The Internet Begins with Coal,” was funded by the Greening Earth Society (GES). The society is an arm of the Western Fuels Association, a cooperative that represents 19 power companies. The GES opposes the 1997 Kyoto Protocol, which, if and when ratified by the Senate (to which it has not even been submitted), would call for emissions of six greenhouse gases to be cut below 1990 levels. Testifying before a Senate Committee in September 2000 (see Resources), GES head Fred Palmer stated that the United States depends on coal for its economic well-being, and that increased carbon dioxide in the air benefits agriculture.
“After making the observation that the Internet uses a lot of electricity, we discovered that there were some people who didn't want to believe it-for reasons that are inscrutable to me,” says Mills. “Some, even in the environmental community, think the Internet is free, has no energy cost, and is going to save the planet.”
“By the focus on conserving their way out, they have blown it,” adds Ned Leonard, the Western Fuels Association's assistant general manager. “If utilities cannot respond quickly to this demand, there's going to be a situation where people self-generate. You'll have micro-turbines sitting on the roofs of buildings in cities where air quality is already of concern. This thing has mushroomed on the sidelines while everyone was focused on coming up with efficient refrigerators.”
Mills now calls his critics “serial obfuscators...who have tried to dissect our analysis disingenuously. Where they really failed is by concentrating on the desktop. That's only about 20 percent of the power used by the Internet-the rest is in the network. They've asserted that they can attribute no energy consumption to that because the telecommunications network was already in existence. Try telling that to your readers!”
Not Everyone Agrees
One of the staunchest critics of the Mills/Huber analysis is Amory Lovins, cofounder and CEO of the Rocky Mountain Institute. In fact, the Institute's Web site hosts an increasingly acrimonious 35-page exchange of e-mail between Lovins and Mills.
“I view this as a deliberate campaign of misinformation,” Lovins says. “They're trying to create the urban myth that a prosperous digital economy requires lots more power plants.
“They started it as a matter of trying to tilt the climate debate,” he adds. “But now a lot of utilities, having read Mark's work, are radically revising their consumption estimates upwards, and planning multibillion-dollar investments to meet loads that won't exist. In other words, they're about to cause a lot of people to lose a lot of money.
“Mark's numbers are off by a factor of about eight on the Internet, and about four on total office equipment electric usage,” charges Lovins. “Jonathan Koomey [a staff scientist at LBL] has nailed this in the proper scientific fashion, from measurement, and Joe Romm [executive director of the Center for Energy and Climate Solutions] has aggregated statistics. We've told Mills the effect he's saying exists doesn't actually show up in the consumption data, but he refuses to acknowledge it.
“Koomey sent at least eight messages to Mills asking for documentation of his claim that the Palm VII uses as much electricity as a refrigerator, and he's had no reply to any of them. Well, it can't be documented because it isn't true.”
“I'd be the first one to say it, if the numbers that Mills used were accurate,” says Jonathan Koomey. “But I have to say that they are at odds with measurement.”
In June 2000, LBL completed its first comprehensive assessment of office equipment energy use since the 1995 report. This report now includes electricity used by network equipment, estimating total energy use for residential, commercial, and industrial sectors. (However, it does not focus on just the “Internet-related” portion of electricity use, as does Mills, so care must be used in comparing findings.)
Koomey and his colleagues conclude that electricity used for all office, telecommunications, and network equipment, including the energy used to manufacture the equipment, is about 3 percent of the country's electricity use. Admitting that commercial sector electricity use is about 15 percent higher than the 1995 report predicted for 2000, they write, “The difference is explainable by more people leaving their computers and printers on at night than we expected in 1995.”
Even Better News?
Joe Romm, a former Acting Assistant Secretary of Energy at the DoE, made headlines during 1999-and testified at both the House and Senate hearings-with an intriguing claim. Even more startling than any made by Mills and Huber, it's this: The Internet is actually saving energy, thereby making reduction of greenhouse gases ever more practical.
Romm accepts the figures set forth by Koomey and LBL for electricity consumption, but widens the debate to encompass consumption of fossil fuels. “In the immediate pre-Internet era [1992-1996],” he writes, “GDP growth averaged 3.2 percent a year, while total energy demand grew 2.4 percent a year. In the Internet era [1996-2000], GDP growth is averaging over 4 percent a year, while energy demand is growing only 1 percent a year.”
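To see what those growth rates imply, consider energy intensity, the energy used per dollar of GDP. Here is a minimal sketch in Python; the compounding arithmetic is our illustration rather than Romm's published method, and the 4 percent figure rounds his "over 4 percent":

def annual_intensity_change(gdp_growth, energy_growth):
    # Yearly change in energy used per dollar of GDP, as a fraction.
    return (1 + energy_growth) / (1 + gdp_growth) - 1

# Growth rates quoted by Romm; 4.0 percent rounds his "over 4 percent."
pre_internet = annual_intensity_change(0.032, 0.024)   # 1992-1996
internet_era = annual_intensity_change(0.040, 0.010)   # 1996-2000

print(f"Pre-Internet era: {pre_internet:+.1%} per year")   # about -0.8%
print(f"Internet era:     {internet_era:+.1%} per year")   # about -2.9%

Under those figures, the economy's energy intensity was falling by less than 1 percent a year before 1996 and by nearly 3 percent a year afterward, which is the improvement Romm credits to the Internet economy.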
According to Romm, the Internet economy has pulled this off by generating both structural and efficiency gains. Structural gains are realized when growth shifts to sectors of the economy that are not particularly energy-intensive-such as the IT industry-and away from sectors such as chemical or pulp manufacturing. Efficiency gains happen as businesses change their activities, reducing energy use relative to their output of goods and services.
E-commerce can save energy because a warehouse holds far more product per square foot, and uses less energy per square foot, than a retail store. Writes Romm, “We calculated the ratio of building energy per book sold in traditional bookstores vs. the online retailer Amazon.com to be 16-to-1.”
Contrary to what most people think, he adds, Internet shopping also uses less energy to deliver packages to people's homes. “Shipping ten pounds of packages by overnight air-the most energy-intensive delivery mode-still uses 40 percent less fuel than driving roundtrip to the mall.”
Ground shipping via the U.S. Postal Service would be even more efficient, since carriers normally pass most homes daily. Romm also points to factors such as more efficient manufacturing and procurement, remote-controlled energy management, outsourcing, and saving of paper previously used for newspapers and catalogs.
Faced with this spectrum of opinion, can we draw any conclusions about the national effect of Internet energy consumption? Frankly, it may be impossible, not least because it's so difficult to define where the “Internet” leaves off. Do we discuss only routers and other equipment that must be left on 24-by-7, or add factors such as a home PC that is used more because of a broadband connection; the mail carriers who tote fewer and fewer personal letters; or the car that less frequently leaves the garage?
“Based on what I've seen and what I've read,” says Steve Rosenstock, a manager of electric solutions at the Edison Electric Institute, “I think the LBL numbers are closer to the mark than the Mills numbers. But that doesn't change the fact that the Internet has gone in six years from around 0 percent to 1 or 2 or 3 percent. That's a very fast rate of growth.”
But, he adds, “you'd think that electric growth overall should be exploding, and it really isn't.” The EIA's Annual Energy Outlook 2000 says that, while retail electricity sales continue to grow, the rate of growth is steadily lessening (see Figure 1). On the other hand, electricity used for office equipment, PCs, and Internet infrastructure is projected to grow at 3 percent per year-more than twice the growth rate of total commercial sector electricity use, and more than 2.5 times the growth rate of commercial floor space.
Thinking Globally, Browning-Out Locally
While overall Internet energy usage remains a matter of debate, no one can deny that some parts of the United States are facing serious local grid issues-none more so than Silicon Valley.
During June 2000, demand soared as a result of abnormally high temperatures. Rolling blackouts, designed to keep the grid from going down totally, struck more than 100,000 San Francisco Bay Area customers-including Network Magazine-over a period of three days.
In November 2000, consumers were again warned that power could go out, this time due to cold weather. The Independent System Operator (ISO), which manages the flow of power over most of the state's network of high-voltage lines, issued a “stage-two emergency” because more than 95 percent of available electricity was in use.
Blackouts are likely when electricity consumption exceeds 98.5 percent of available supply, a condition the ISO calls a stage-three emergency. While that threshold was not reached in November, the power shortage was the 20th stage-two emergency in 2000, compared with one in 1999 and five in 1998.
According to the California Energy Commission, demand in the state has grown by 11 percent since 1995, while supply has increased by only 1 percent. One reason among many: Stringent environmental regulations and population density mean that new power plants have been few and far between. Nor do high-tech companies necessarily welcome generating facilities as neighbors. Cisco Systems, which wants to build a large new campus in a southern part of San Jose, has opposed a power plant also proposed for the region, citing health and safety concerns.
All this means that California is, to a large degree, dependent upon imported power. The state's Public Utilities Commission (CPUC) has issued a scathing report to Governor Gray Davis, stating flatly that California's electric system is unreliable and in trouble. The CPUC report cites booming demand, reduced interest in energy efficiency, aging power plants, and limited transmission facilities.
Yet the state is also an exceedingly popular place to locate server farms, also known as “data hotels.” Exodus Communications, for example, hosts Yahoo, eBay, and more than 500 of the world's other most frequently requested Web sites. In Santa Clara, just north of San Jose, it recently opened its fifth data hotel and is planning a sixth. That's a significant chunk of the 36 centers the company planned to have open or under construction by the end of 2000.
Crammed with Cisco routers, Foundry Networks BigIron switches, F5 Networks load balancers, and lots of servers, each new Exodus data center is 100,000 square feet or more in size. Each uses from 10 to 20 times the energy of an equivalent-size office building, or, according to one CPUC analyst, enough juice to power 100,000 homes.
As a result, Exodus is Santa Clara's third-largest utility customer. Since the city owns its own utility, Silicon Valley Power, the sales at least add to the city's coffers. However, it must hustle to meet the demand, since about 80 percent of the power is generated elsewhere.
Naturally, Exodus has adopted state-of-the-art protection against outages. Each data hotel has dual lines to utility substations, backup batteries, and a bevy of diesel generators capable of generating emergency power as long as necessary.
According to numerous sources, customers of hosting companies such as Exodus stand to lose more than $1 million for every minute of downtime. Meanwhile, according to Hewlett-Packard, a 20-minute outage at a chip fabrication plant would result in the loss of a whole day's production, a comparatively cheap disaster at “only” $30 million.
With so much money at stake, reliability is essential. “An important point that has gotten lost in the electricity debate,” says Mills, “is that it's not just about the absolute quantity of the demand, it's the change in its character-which we call 'nines'.”
“Outside hospitals and military bases, there weren't that many places you could sell high-nines electrons. Now, it's becoming the fastest-growing business in the American infrastructure, period. And for every dollar spent on wholesale electrons, it costs ten dollars to make them high nines.” (For a view of how nines equate to downtime, see Figure 2.)
In the inaugural issue of the Digital Power Report, Mills wrote that “when you buy an American Power Conversion UPS to keep a desktop PC isolated from line voltage sags, you are paying $20/kWh, roughly 200 times retail. True, you buy only one kilowatt-hour per year at that price-spread out over 20 five-second events, each using 0.05 kWh. But, with millions of customers, APC is a $1 billion-a-year power company.”
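The arithmetic behind that claim can be checked with a couple of assumptions Mills doesn't spell out. In the Python sketch below, the UPS purchase price and service life are illustrative guesses; only the 20 sag events per year and the 0.05 kWh per event come from the quote:

# Back-of-the-envelope check of the "$20/kWh" UPS figure.
ups_price_dollars = 100.0      # assumed purchase price (not from the article)
service_life_years = 5.0       # assumed useful life (not from the article)
events_per_year = 20           # from the quote
energy_per_event_kwh = 0.05    # from the quote

annual_cost = ups_price_dollars / service_life_years      # dollars per year
annual_energy = events_per_year * energy_per_event_kwh    # kWh per year
effective_rate = annual_cost / annual_energy               # dollars per kWh

retail_rate = 0.10             # assumed retail price, dollars per kWh
print(f"Effective cost: ${effective_rate:.0f}/kWh, "
      f"about {effective_rate / retail_rate:.0f} times retail")

A $100 unit amortized over five years works out to $20 per delivered kilowatt-hour, which is where the "roughly 200 times retail" comparison comes from.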
“We're now building data centers that have energy demands comparable to steel mills yet occupy the space of grocery stores,” says Mills. “I would argue that supplying power for these is more challenging than anything else in networking...an engineering task comparable to building an airplane.”
Other observers agree there is a problem. “If you get a pile of these data centers coming to specific regions,” says Koomey, “the power just isn't going to be there. This is where the old economy and the new economy collide: There's a two- to four-year time lag on building new power plants.”
However, he adds, administrators have made life harder for themselves and utilities alike by overestimating their power needs. “These guys have taken the power density of a rack and multiplied it over the total square footage of a building, including corridors and stairwells. They've also put in huge factors to account for cooling, such as taking the 65 server watts per square foot and multiplying by three.”
Actually, according to Koomey, “with compressor-based cooling, it typically takes one unit of electricity to move two units of heat. So, you'd expect 50 percent additional power consumption for cooling.” (For Koomey's November 2000 predictions about data hotel power consumption, based on an “upper bound” estimate of 100 watts per square foot, see Figure 3.)
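Here is a back-of-the-envelope comparison of the two sizing approaches, using the 65 server watts per square foot cited above. The calculation is an illustration, not Koomey's published analysis:

server_load_w_per_sqft = 65.0    # figure cited in the article

# Rule of thumb criticized by Koomey: triple the IT load outright.
naive_budget = server_load_w_per_sqft * 3

# COP-based estimate: compressor cooling moves roughly two units of heat
# per unit of electricity, so cooling adds about 50 percent to the IT load.
cooling_cop = 2.0
cop_budget = server_load_w_per_sqft * (1 + 1 / cooling_cop)

print(f"Multiply-by-three budget: {naive_budget:.0f} W per square foot")
print(f"COP-based budget:         {cop_budget:.0f} W per square foot")

The difference, roughly 195 versus 98 watts per square foot, is one reason utilities keep seeing requested capacity that never materializes.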
“If you actually go measure, you'll see the latest data centers are on the order of 50 to 70 watts a square foot,” says Lovins. “They actually have nowhere to go but down because they have terrible HVAC [heating, ventilation, and air conditioning] design. But, there's a cover-your-ass mentality-people know they'll get blamed if they let the lights go off, but not if they waste money.”
The Edison Electric Institute's Steve Rosenstock concurs: “The companies building server hotels have been working with developers and asking for 150 to 200 watts per square foot. But our members have been measuring what they're actually using after installation. It's closer to 25 to 40 watts per square foot.
“It's a risk for a transmission company to install infrastructure to support a dense load,” worries Rosenstock. “That Internet company might go away in two years, and someone would be stuck with the bill.”
As a result, utilities are looking at rate structures that would put Internet hosting companies, and other power-hungry operations, into a special class. They'd be asked to pay more of the true cost of service for power generation and transmission. Pricing will be tricky, however, as there's a danger of pushing some customers into generating their own four-nines electricity around the clock.
What's To Be Done?
Indeed, onsite generation is considered an increasingly practical option. Exodus has discussed adoption of natural-gas-powered microturbines for generating its own power on a more regular basis. These-and other ingenious solutions, such as flywheels-are regularly discussed by Mills and Huber in their Digital Power Report.
Lovins favors the notion, too. “The coming revolution in fuel cells and microturbines means you can get as many nines as you want,” he says. “It's not just a way to get reliable high-quality digital power, it's also a way to do co- and tri-generation, where you are heating and preferably also cooling by using waste heat.
“Then you can get system efficiencies around 90 or 92 percent.” Adds Lovins, HVAC costs could readily be slashed by two-thirds via the adoption of efficient centrifugal systems. “Large energy savings can actually cost less than small, or no, savings.”
Not that small savings aren't worth bothering with. Lovins, for example, praises the Rebel NetWinder, a Linux box running on a StrongARM processor. “It peaks at 15 watts, with no fan, and normally pokes along at two or three watts. The electricity savings are 98 or 99 percent, compared to the four Windows NT servers we replaced with it at RMI.”
Until recently, however, those building data centers have been interested mostly in speed. “We get the feeling that people who buy servers aren't willing to trade away performance for lower power consumption, or even lower cost,” says Shannon Poulin, a marketing manager with Intel's Enterprise Platform Group.
Steve Cumings, a product strategy manager for Compaq Computer's high-end server line (eight-way and above), concurs. “These systems go mostly to enterprise customers, but we're also selling them to the ASPs as back-ends. I have never heard a customer say they have made a server decision based on power consumption.”
But in the corridors and trenches of the data hotel, consciousness is being raised. Paul Miller, also with Compaq, is director of product marketing for mainstream servers, including the company's popular 1U ProLiant DL360 (see “Rack 'Em Up”, August 2000). “Power is a real hot topic with customers,” he says. “What's interesting is how the conversation has changed. Even before people talk about speeds and feeds, they're spending 30 percent of the conversation talking about the environmentals-power consumption per square foot, how they cool that, and how they cable and manage it.”
Miller touts product features such as the ProLiant's Lights-Out Management (LOM) capability, which allows all keyboard, video, and mouse operations to be done from a remote desktop. Monitors in every rack use significant power, even when they're flat-panel LCDs.
The DL servers can also be configured to spin down their hard drives, or even be turned off entirely. Their wake-on-LAN feature can then be used to turn on nodes in a cluster when required. Most customers don't use such options, however, because they want optimal Web-site performance. And, as Miller notes, “by powering up devices just as peak hits come in, you could cause a local brownout in your environment.
“One important technology that will help drive down power consumption is SANs [Storage Area Networks],” he says. “When these become ubiquitous and you can boot from them, they'll mean that servers don't need to have boot volumes within every rack. These suck up about a third of the power in an average configuration, and they generate a lot of heat.”
“We've begun initiatives to cut server power consumption, although some are not yet public,” says Intel's Poulin. “We're definitely cognizant of all the issues. The amount of power that you can put in a 1U dual-processor server is limited not just by the supply you can get but also by how much cooling you can put into the box to get the heat out. These boxes are thermally constrained.”
Therefore, he says, vendors will adopt laptop-style cooling technology such as heat pipes. Replacing a processor's normal, tall heat sink, these pipes move heat to multiple locations inside a machine, where it can then be removed by relatively small, quiet fans.
There's only so much that can be done to cut the power consumption of a high-clock-speed processor because power is largely a function of frequency. It can be determined by the formula P = CV²F, where P equals power, C is the capacitance of the overall system, V is voltage, and F is the operating frequency.
“The capacitance and voltage are a function of the process you use to make the chip,” notes Poulin. Intel's long-awaited 0.13-micron manufacturing process will, in the first half of 2001, make it possible to create smaller chips that use copper instead of aluminum to conduct power.
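For illustration, here is how the formula plays out in Python with made-up numbers. The switched capacitance and voltages below are assumptions chosen to land near the wattages mentioned in this article, not Intel data:

def dynamic_power(capacitance_farads, voltage_volts, frequency_hz):
    # Switching power of a CMOS circuit, P = C * V^2 * F, in watts.
    return capacitance_farads * voltage_volts ** 2 * frequency_hz

# Hypothetical 1GHz part with 12nF of switched capacitance at 1.7V.
baseline = dynamic_power(12e-9, 1.7, 1.0e9)    # about 35 W
# The same design with the core voltage dropped to 1.2V, as a finer
# manufacturing process allows.
lower_v = dynamic_power(12e-9, 1.2, 1.0e9)     # about 17 W

print(f"Baseline: {baseline:.1f} W, reduced voltage: {lower_v:.1f} W")

Because voltage enters the formula squared, trimming it pays off quadratically, while cutting the clock frequency saves only in proportion; that is why process shrinks matter so much for the half-watt part described next.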
At Comdex in November 2000, Intel announced plans to sell a 0.13-micron Pentium III that slashes CPU power consumption to less than half a watt. This compares with the 33 watts needed to juice an existing Pentium III, or the more than 50 needed for a Pentium 4.
Poulin wouldn't be drawn on when this CPU-targeted at laptops-might make it into server designs. Clearly, however, it's a future option if and when OEMs become sufficiently interested in saving power.
Meanwhile, competitor Transmeta already sells a 0.5-watt processor, the Crusoe, that can run Windows 2000. Found only in portable computers so far, it has underwhelmed reviewers by running software at Pentium II speeds.
However, the company says upgrades to its Code Morphing Software (CMS) will allow the Crusoe to run faster, save more power, and be custom-tuned for running specific applications. That could include, for example, acting as a Web server. The Crusoe can adjust its clock frequency on the fly, according to load, and might be well suited for this purpose. Again, however, Transmeta has announced no plans to sell the Crusoe to server vendors.
Software Solutions
Some wags say the data hotel “energy crisis”-if there is one-could be solved by having all of them run Linux instead of Windows NT or 2000. Why? Because, the argument runs, Linux supports an equal number of users and applications while requiring much less RAM.
The jury will remain out on that one, at least until someone has carefully instrumented a server and measured its power consumption with different amounts of RAM. However, there is growing awareness that software and hardware, tuned for one another, can save large amounts of electricity.
In August 2000, Northwestern University received a $2 million grant from the Defense Advanced Research Projects Agency (DARPA), which is concerned about minimizing power consumption for military applications and platforms. Called Power-Aware Architecture and Compilation Techniques (PACT), the Northwestern project will work to optimize a C compiler for the capabilities of a power-aware embedded processor, and vice versa.
Motorola and Cadence Design Systems are partners in PACT and will have access to its findings. The project's ultimate goal is to reduce power consumption by factors of 10 to 100.
The traditional American approach to energy and space consumption-perhaps best exemplified by a giant sport-utility vehicle carrying a single passenger-has been to use a lot of it, just because we can. The confines of a data center, however, resemble the streets of Manhattan or Hong Kong more than they do the prairie.
You'll have to read the sources, plus your own electricity bills, to decide whether power consumption is of concern to you today. Eventually, more efficient servers will be available as a matter of course. In the networking world-if not the automotive one-it's clear that waste benefits nobody.
Jonathan Angel can be reached at angelnm@earthlink.net.
Resources
You can read the Congressional hearings that took place during 2000, with testimony from most of the players mentioned in this article. For the February House hearings, go to this page, then scroll down to section 106-125, “Kyoto and the Internet: The Energy Implications of the Digital Economy.” For the September Senate hearings, go here and search for “Solutions to Climate Change.”
The Energy Information Administration's Commercial Buildings Energy Consumption Survey (CBECS) home page offers detailed reports plus a reprint of the questionnaire used to gather information. At the time of writing, data from 1999 was not yet available, but it should be listed by the time you read this.
For Lawrence Berkeley Labs (LBL) data and a variety of useful links to other sources, see this link.
You can read the original Forbes article, “Dig More Coal, the PCs Are Coming”. Also look at the Mills/Huber Digital Power Report Web site.
The exchange of e-mails between Amory Lovins and Mark Mills may be read on the Rocky Mountain Institute (RMI) Web site here. Other interesting sources, which are RMI spinoffs, are the Natural Capitalism Web site and eSource.
Joe Romm's Center for Energy and Climate Solutions may be reached at www.cool-companies.org.
Cisco Systems maintains a Web page that cites the ways it believes the Internet is benefiting the environment at this link.
The California Public Utilities Commission (CPUC) report to Governor Gray Davis is located here.
For a regularly updated survey of the most frequently requested Web sites, hosts, and number of servers on the Internet, see www.netcraft.com/survey/.
An interesting Web site with information about processors and their thermal characteristics is Tom's Hardware Guide.
Northwestern University's PACT project home page.
Finally, Einhorn Yaffee Prescott is a firm that specializes in designing data centers and has published a number of relevant papers.
Cupertino Electric specializes in building data hotels.
See also the New York Times article “E-Business Alters ABC's of Real Estate”. |