From: Julius Wong, 9/21/2023 8:34:55 AM
Broadcom slips as Google said to discuss cutting firm as supplier to reduce costs

Sep. 21, 2023 7:18 AM ET
Alphabet Inc. (GOOG), AVGO, MRVL, GOOGL, NVDA
By: Chris Ciaccia, SA News Editor


Broadcom (NASDAQ: AVGO) shares tumbled more than 7% in pre-market trading on Thursday after it was reported that Google (NASDAQ: GOOG) (NASDAQ: GOOGL) has looked into the possibility of cutting the San Jose, California-based semiconductor company as a supplier in an effort to save on computing costs related to artificial intelligence.

Mountain View, California-based Google (GOOG) (GOOGL) could make the move to drop Broadcom (AVGO) as soon as 2027, The Information reported, citing a person with direct knowledge of the effort.

If Google (GOOG) (GOOGL) chooses to drop Broadcom (AVGO), it may rely exclusively on chips it designs in house, known as tensor processing units.

The news outlet added that Google (GOOG) (GOOGL) could also replace Broadcom (AVGO) with a custom chip from Marvell Technology (NASDAQ: MRVL) that connects servers to Ethernet switches inside data centers.

Marvell (MRVL) shares rose 4.8% in pre-market trading on Wednesday.

In April, Google (GOOG) (GOOGL) touted the capabilities of its own TPUs. The Sundar Pichai-led company said the fourth-generation TPUs that train its artificial intelligence models were faster and more power-efficient than Nvidia's (NVDA) A100 chips.

The custom chips are between 1.2 and 1.7 times faster than Nvidia's (NVDA) A100 chips and up to 1.9 times more power-efficient, Google (GOOG) (GOOGL) added.

Mountain View, California-based Google (GOOG) (GOOGL) did not compare the fourth-generation TPUs to Nvidia's (NVDA) H100 chips because those came to market after Google's chips and use newer technology.

Last month, Google (GOOG) (GOOGL) showed off its new fifth-generation tensor processing unit chip, known as TPU v5e.

Google said the TPU v5e is built to train large language models, or LLMs, while also improving cost efficiency, delivering up to two times higher training performance per dollar and up to two-and-a-half times higher inference performance per dollar for LLMs and generative AI models compared with the previous generation.