Technology Stocks : ASML Holding NV

From: BeenRetired, 9/13/2024 8:44:37 AM
Google big into rolling own vast assortment of chips.

Google data centers still rely on general-purpose central processing units, or CPUs, and Nvidia's graphics processing units, or GPUs. Google's TPUs are a different type of chip called an application-specific integrated circuit, or ASIC, which is custom-built for a specific purpose. The TPU is focused on AI. Google makes another ASIC focused on video called a Video Coding Unit.

Google also makes custom chips for its devices, similar to Apple's custom silicon strategy. The Tensor G4 powers Google's new AI-enabled Pixel 9, and its new A1 chip powers Pixel Buds Pro 2.

The TPU, however, is what set Google apart. It was the first of its kind when it launched in 2015. Google TPUs still dominate among custom cloud AI accelerators, with 58% of the market share, according to The Futurum Group.

Google coined the name from the mathematical term "tensor," referring to the large-scale matrix multiplications that advanced AI applications perform rapidly.
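To make the terminology concrete: a "tensor" here is just an n-dimensional array, and the workhorse operation a TPU accelerates is the matrix multiply inside every neural-network layer. A minimal pure-Python sketch (real workloads run hardware-optimized kernels over far larger matrices):

```python
def matmul(a, b):
    """Multiply matrix a (m x k) by matrix b (k x n), the core op TPUs accelerate."""
    rows, inner, cols = len(a), len(b), len(b[0])
    assert all(len(row) == inner for row in a), "inner dimensions must match"
    return [[sum(a[i][p] * b[p][j] for p in range(inner))
             for j in range(cols)]
            for i in range(rows)]

# One dense layer: outputs = inputs @ weights
inputs = [[1.0, 2.0]]            # batch of one sample, two features
weights = [[0.5, -1.0, 0.25],    # 2 x 3 weight matrix
           [1.5,  0.0, 0.75]]
print(matmul(inputs, weights))   # [[3.5, -1.0, 1.75]]
```

An accelerator's advantage comes from doing millions of these multiply-accumulate steps in parallel in fixed-function hardware rather than one at a time.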

With the second TPU release in 2018, Google expanded the focus from inference to training and made them available for its cloud customers to run workloads, alongside market-leading chips such as Nvidia's GPUs.
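The inference-versus-training distinction mentioned above can be sketched in a few lines: inference only runs a trained model forward, while training also computes gradients and updates the weights, which is far more compute-intensive. A toy single-weight example (illustrative only, not Google's implementation):

```python
def predict(w, x):
    """Inference: forward pass only."""
    return w * x

def train_step(w, x, y, lr=0.1):
    """Training: forward pass, plus gradient of squared error, plus update."""
    error = predict(w, x) - y   # loss L = (w*x - y)^2
    grad = 2 * error * x        # dL/dw
    return w - lr * grad        # gradient-descent step

w = 0.0
for _ in range(50):             # fit y = 2x from the example (x=3, y=6)
    w = train_step(w, 3.0, 6.0)
print(round(w, 3))              # converges toward 2.0
```

The first-generation TPU handled only the cheap forward pass; supporting training meant the chip also had to sustain the gradient and update arithmetic at data-center scale.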

"If you're using GPUs, they're more programmable, they're more flexible. But they've been in tight supply," said Stacy Rasgon, senior analyst covering semiconductors at Bernstein Research.

The AI boom has sent Nvidia's stock through the roof, catapulting the chipmaker to a $3 trillion market cap in June, surpassing Alphabet and jockeying with Apple and Microsoft for position as the world's most valuable public company.

"Being candid, these specialty AI accelerators aren't nearly as flexible or as powerful as Nvidia's platform, and that is what the market is also waiting to see: Can anyone play in that space?" said Daniel Newman, CEO of The Futurum Group.

Now that we know Apple's using Google's TPUs to train its AI models, the real test will come as those full AI features roll out on iPhones and Macs next year.

Broadcom and TSMC
It's no small feat to develop alternatives to Nvidia's AI engines. Google's sixth-generation TPU, called Trillium, is set to come out later in 2024.

Google showed CNBC the sixth version of its TPU, Trillium, in Mountain View, California, on July 23, 2024.

"It's expensive. You need a lot of scale," Rasgon said. "And so it's not something that everybody can do. But these hyperscalers, they've got the scale and the money and the resources to go down that path."

The process is so complex and costly that even the hyperscalers can't do it alone. Since the first TPU, Google's partnered with Broadcom, a chip developer that also helps Meta design its AI chips. Broadcom says it's spent more than $3 billion to make these partnerships happen.

"AI chips — they're very complex. There's lots of things on there. So Google brings the compute," Rasgon said. "Broadcom does all the peripheral stuff. They do the I/O and the SerDes, all of the different pieces that go around that compute. They also do the packaging."

Then the final design is sent off for manufacturing at a fabrication plant, or fab — primarily those owned by the world's largest chipmaker, Taiwan Semiconductor Manufacturing Company, which makes 92% of the world's most advanced semiconductors.

When asked if Google has any safeguards in place should the worst happen in the geopolitical sphere between China and Taiwan, Vahdat said, "It's certainly something that we prepare for and we think about as well, but we're hopeful that actually it's not something that we're going to have to trigger."

Protecting against those risks is the primary reason the White House is handing out $52 billion in CHIPS Act funding to companies building fabs in the U.S. — with the biggest portions going to Intel, TSMC, and Samsung to date.

Processors and power
Risks aside, Google just made another big chip move, announcing its first general-purpose CPU, Axion, will be available by the end of the year.

"Now we're able to bring in that last piece of the puzzle, the CPU," Vahdat said. "And so a lot of our internal services, whether it's BigQuery, whether it's Spanner, YouTube advertising and more are running on Axion."

Google is late to the CPU game. Amazon launched its Graviton processor in 2018. Alibaba launched its server chip in 2021. Microsoft announced its CPU in November.

When asked why Google didn't make a CPU sooner, Vahdat said, "Our focus has been on where we can deliver the most value for our customers, and there it has been starting with the TPU, our video coding units, our networking. We really thought that the time was now."

All these processors from non-chipmakers, including Google's, are made possible by Arm chip architecture — a more customizable, power-efficient alternative that's gaining traction over the traditional x86 model from Intel and AMD. Power efficiency is crucial because, by 2027, AI servers are projected to use as much power annually as a country like Argentina. Google's latest environmental report showed emissions rose nearly 50% from 2019 to 2023, partly due to data center growth to power AI.

"Without having the efficiency of these chips, the numbers could have wound up in a very different place," Vahdat said. "We remain committed to actually driving these numbers in terms of carbon emissions from our infrastructure, 24/7, driving it toward zero."

It takes a massive amount of water to cool the servers that train and run AI. That's why Google's third-generation TPU started using direct-to-chip cooling, which uses far less water. That's also how Nvidia's cooling its latest Blackwell GPUs.

Despite challenges, from geopolitics to power and water, Google is committed to its generative AI tools and making its own chips.

"I've never seen anything like this and no sign of it slowing down quite yet," Vahdat said. "And hardware is going to play a really important part there."

How Google makes custom chips used to train Apple AI models and its own chatbot, Gemini (msn.com)

PS
Color me very excited about 2H24 on.

ASML