To: Joe NYC who wrote (60069) 11/1/2024 12:16:20 PM
From: Doug M.

<<While it is possible that the actual product - Falcon Shores GPU - will be a competent product, gaining ground even with a good product will be challenging.>>

Intel doesn't intend to compete against NVIDIA, Amazon, Google, etc. in training - but AMD does. Intel wants to be a foundry for them later this decade. And as Craig Barrett (a former Intel CEO - I posted his article the other day) said recently, if Intel provides leading process technology, the business will come. And that's exactly what Intel intends to do with High NA EUV at 14A.

Here are Intel's GPU plans:

<<The Next Phase of Transformation

Intel CEO Pat Gelsinger [last month said] the company is refocusing its product strategy around a "strong x86 franchise as we drive our AI strategy while streamlining our product portfolio in service to Intel customers and partners."

Gelsinger has also communicated that Intel will play in the AI inferencing market with Gaudi and other AI chips. He acknowledged Intel was behind Nvidia, AMD, and Google's GPU in the AI training market.

"As I view it... in the four-horse race on this side of the page, Nvidia, (AWS's) Trainium and Inferentia, Google Cloud's TPU, and AMD, and Intel's number four... that's hard," Gelsinger said.>>

Stayin' Alive: Intel's Falcon Shores GPU Will Survive Restructuring

And this should give pause to anyone who thinks that AMD will grow training GPUs in perpetuity - I saw one of those posts earlier today. Yes, this is long-term, but so are Intel's foundry plans:

<<Thirdly, comments from Broadcom CEO Hock Tan on Thursday after earnings point to a major threat to Nvidia demand from those same megacap tech companies:

"There's one market for enterprises of the world, and none of these enterprises are capable nor have the financial resources or interest to create the silicon, the custom silicon, nor the large language models and the software, maybe, to be able to run those AI workloads on custom silicon. It's too much and there's no return for them to do it because it's just too expensive to do it.

But there are those few cloud guys, hyperscalers, with the scale of the platform and the financial wherewithal to make it totally rational, economically rational, to create their own custom accelerators, because right now - I'm not trying to overemphasize it - it's all about compute engines. It's all about, especially, training those large language models and enabling it on your platform. It's all about constraint, to a large part, about GPUs. Seriously, it came to a point where GPUs are more important than engineers for these hyperscalers, in terms of how they think. Those GPUs are much more - or XPUs are much more important.

And if that's the case, what better thing to do than bring in control - control your own destiny - by creating your own custom silicon accelerators. And that's what I'm seeing all of them do. They're just doing it at different rates and starting at different times. But they all have started.">>

Did a flawed Goldman Sachs report roil the market on Friday? | Forexlive