To: Broken_Clock who wrote (1551586) | 8/12/2025 9:04:31 AM
From: sylvester80 | 2 Recommendations | of 1574490
 
BOOM: Data centers are DESTROYING the US economy, and we’re not even using them
The average server utilization rate hovers between 12% and 18% of capacity, while an estimated 10 million servers sit completely idle.
sltrib.com

(Getty Images) An estimated 10 million servers sit completely idle, representing $30 billion in wasted capital.

By Baris Saydag | Fortune | Aug. 11, 2025, 11:48 a.m.


As tech giants announce hundreds of billions in new data center investments, we’re witnessing a fundamental misunderstanding of our compute shortage problem. The industry’s current approach, throwing money at massive infrastructure projects, resembles adding two more lanes to a congested highway. It might offer temporary relief, but it doesn’t solve the underlying problem.

The numbers are staggering. Data center capital expenditures surged 53% year-over-year to $134 billion in the first quarter of 2025 alone. Meta is reportedly exploring a $200 billion investment in data centers, while Microsoft has committed $80 billion for 2025. OpenAI, SoftBank, and Oracle have announced the $500 billion Stargate initiative. McKinsey projects that data centers will require $6.7 trillion worldwide by 2030. And the list goes on.

Yet here’s the uncomfortable truth: most of these resources will remain dramatically underutilized. The average server utilization rate hovers between 12% and 18% of capacity, while an estimated 10 million servers sit completely idle, representing $30 billion in wasted capital. Even active servers rarely exceed 50% utilization, meaning the majority of our existing compute infrastructure is burning energy while doing nothing productive.
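A quick back-of-envelope check makes these figures concrete. The sketch below is a hypothetical calculation in Python; the roughly $3,000 average capital cost per server is an assumption (the article gives only the $30 billion total), chosen because it is what makes 10 million idle servers and $30 billion consistent.

```python
# Back-of-envelope check on the article's idle-capacity figures.
# ASSUMPTION (not from the article): ~$3,000 average capital cost per server,
# the figure that makes 10M idle servers equal ~$30B of stranded capital.

idle_servers = 10_000_000
cost_per_server_usd = 3_000  # assumed average, for illustration only

wasted_capital = idle_servers * cost_per_server_usd
print(f"Stranded capital: ${wasted_capital / 1e9:.0f}B")  # -> $30B

# Even "active" servers at 12-18% utilization leave most capacity unused.
for utilization in (0.12, 0.18, 0.50):
    print(f"At {utilization:.0%} utilization, {1 - utilization:.0%} of capacity sits idle")
```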

The highway analogy holds true

When faced with traffic congestion, the instinctive response is to add more lanes. But transportation researchers have documented what’s known as “induced demand”: additional capacity temporarily reduces congestion until it attracts more drivers, ultimately returning traffic to previous levels. The same phenomenon applies to data centers.

Building new data centers is the easy solution, but it’s neither sustainable nor efficient. As I’ve witnessed firsthand in developing compute orchestration platforms, the real problem isn’t capacity. It’s allocation and optimization. There’s already an abundant supply sitting idle across thousands of data centers worldwide. The challenge lies in efficiently connecting this scattered, underutilized capacity with demand.

The environmental reckoning

Data center energy consumption is projected to triple by 2030, reaching 2,967 TWh annually. Goldman Sachs estimates that data center power demand will grow 160% by 2030. While tech giants are purchasing entire nuclear power plants to fuel their data centers, cities across the country are hitting hard limits on energy capacity for new facilities.
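To put “triple by 2030” in perspective, the short calculation below infers the implied annual growth rate. The 2025 baseline (one third of the projected 2030 figure, roughly 989 TWh) is an assumption derived from the article’s numbers, not a figure the article states.

```python
# Implied growth rate if data center energy use triples between 2025 and 2030.
# ASSUMPTION: the 2025 baseline is inferred as (projected 2030 figure / 3);
# the article does not quote a baseline directly.

projected_2030_twh = 2_967
baseline_2025_twh = projected_2030_twh / 3  # ~989 TWh, assumed
years = 5

cagr = (projected_2030_twh / baseline_2025_twh) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")  # roughly 24.6% per year
```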


This energy crunch highlights the significant strains on our infrastructure and is a subtle admission that we’ve constructed a fundamentally unsustainable system. The fact that companies are now buying their own power plants rather than relying on existing grids reveals how our exponential appetite for computation has outpaced our ability to power it responsibly.

The distributed alternative

The solution isn’t more centralized infrastructure. It’s smarter orchestration of existing resources. Modern software can aggregate idle compute from data centers, enterprise servers, and even consumer devices into unified, on-demand compute pools. This distributed approach offers several advantages (a minimal scheduling sketch follows the list):

Immediate availability: Instead of waiting years for new data center construction, distributed networks can utilize existing idle capacity instantly.

Cost efficiency: Leveraging underutilized resources costs significantly less than building new infrastructure.

Environmental sustainability: Maximizing existing hardware utilization reduces the need for new manufacturing and energy consumption.

Resilience: Distributed systems are inherently more fault-tolerant than centralized mega-facilities.
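To make “smarter orchestration” concrete, here is a minimal, hypothetical Python sketch of the core idea: pool spare capacity from several hosts and place each job on the host whose free capacity fits it most tightly, driving utilization up. All host names and numbers are invented; a production orchestrator would add authentication, health checks, data locality, and preemption.

```python
# Minimal sketch of distributed compute orchestration (Python 3.10+):
# pool idle capacity from heterogeneous hosts, place jobs best-fit.
# All hosts, jobs, and sizes below are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    total_cores: int
    used_cores: int = 0

    @property
    def free_cores(self) -> int:
        return self.total_cores - self.used_cores

@dataclass
class Pool:
    hosts: list[Host] = field(default_factory=list)

    def schedule(self, job: str, cores: int) -> Host | None:
        # Best-fit placement: the tightest-fitting host wins, which packs
        # work onto fewer machines instead of spreading it thinly.
        candidates = [h for h in self.hosts if h.free_cores >= cores]
        if not candidates:
            return None
        best = min(candidates, key=lambda h: h.free_cores - cores)
        best.used_cores += cores
        print(f"{job}: {cores} cores -> {best.name}")
        return best

pool = Pool([
    Host("enterprise-rack-1", total_cores=64, used_cores=10),  # mostly idle
    Host("colo-dc-2", total_cores=128, used_cores=100),        # mostly busy
    Host("edge-node-3", total_cores=16),                       # fully idle
])

for job, cores in [("batch-etl", 32), ("inference", 24), ("ci-build", 8)]:
    pool.schedule(job, cores)

for h in pool.hosts:
    print(f"{h.name}: {h.used_cores}/{h.total_cores} cores used")
```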

The technical reality

The technology to orchestrate distributed compute already exists. Some network models already demonstrate how software can abstract away the complexity of managing resources across multiple providers and locations. Docker containers and modern orchestration tools make workload portability seamless. The missing piece is the industry’s willingness to embrace a fundamentally different approach.
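As a concrete illustration of that portability, the hypothetical snippet below uses the Docker SDK for Python (pip install docker) to run the same container image on two hosts in a pool. The remote address is a placeholder, and a real deployment would secure daemon connections with TLS rather than a plain TCP socket.

```python
# Sketch: the same container image runs unchanged on any Docker host,
# which is what makes scattered idle capacity schedulable at all.
# The remote address is a PLACEHOLDER; real daemons should require TLS.

import docker

# One client per host in the pool.
hosts = {
    "local": docker.from_env(),
    "remote-idle-server": docker.DockerClient(base_url="tcp://10.0.0.42:2375"),
}

for name, client in hosts.items():
    # Identical workload definition regardless of where it lands.
    output = client.containers.run(
        "python:3.12-slim",
        command=["python", "-c", "print('hello from the pool')"],
        remove=True,  # delete the container after it exits
    )
    print(name, output.decode().strip())
```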

Companies need to recognize that most servers sit idle 70% to 85% of the time. It’s not a hardware problem requiring more infrastructure, nor a capacity issue. It’s an orchestration and allocation problem requiring smarter software.

Instead of building our way out with increasingly expensive and environmentally destructive mega-projects, we need to embrace distributed orchestration that maximizes existing resources.

This requires a fundamental shift in thinking. Rather than viewing compute as something that must be owned and housed in massive facilities, we need to treat it like a utility available on demand from the most efficient sources, regardless of location or ownership.

So, before asking ourselves whether we can afford to build nearly $7 trillion worth of new data centers by 2030, we should ask whether we can pursue a smarter, more sustainable approach to compute infrastructure. The technology exists today to orchestrate distributed compute at scale. What we need now is the vision to implement it.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

This story was originally featured on Fortune.com