Hock E. Tan: AI ASIC bigger than I thought 6 months ago.
Edward F. Snyder -- Analyst
Thank you very much. Hock, that was a perfect segue into my question. You've said on past calls that you thought AI compute would move away from ASICs and go to the merchant market. But it looks like the trend is heading the other way.
Are you still of the opinion that that's going to be the long-term trend? And secondly, as you just pointed out, power is becoming the defining factor for deployment with all the big guys at this point. Given that the performance per watt of ASICs is superior to GPUs, why shouldn't we see more of these guys moving to custom ASICs? I know it takes a long time and a lot of funding, etc. But especially as the enterprise starts getting more involved with this, there are going to be some applications that are fairly standard across enterprises, so couldn't we even see some of the bigger players, like AWS, move to custom silicon for a specific workload? So, basically, the overall trend in ASICs in AI.
Thanks.
Hock E. Tan -- President, Chief Executive Officer, and Director
OK. Ed, did I hear you right to say at the beginning that you meant that there's a trend toward ASIC or XPU from general-purpose GPU, right?
Edward F. Snyder -- Analyst
Yep.
Hock E. Tan -- President, Chief Executive Officer, and Director
You're right, and you're correct in pointing out that, hey, I used to think that general-purpose merchant silicon would win at the end of the day. Well, based on the history of semiconductors so far, general-purpose merchant silicon does tend to win. But like you, I flipped in my view. And I did that, by the way, last quarter, maybe even six months ago.
But nonetheless, catching up is good. And I actually think so because I do think there are two markets here in AI accelerators. There's one market for the enterprises of the world, and these enterprises have neither the capability nor the financial resources or interest to create the custom silicon, nor the large language models and the software that goes with them, to be able to run those AI workloads on custom silicon. It's too much, and there's no return for them to do it because it's just too expensive.
But there are those few cloud guys, hyperscalers, with the scale of platform and the financial wherewithal to make it totally rational, economically rational, to create their own custom accelerators. Because right now, and I'm not trying to overemphasize it, it's all about compute engines. It's all about training those large language models and enabling them on your platform. And it's, to a large part, all about the constraint on GPUs. Seriously, it has come to a point where GPUs are more important than engineers to these hyperscalers, in terms of how they think.
Those GPUs -- or XPUs -- are that much more important. And if that's the case, what better thing to do than bring your destiny under your own control by creating your own custom silicon accelerators? And that's what I'm seeing all of them do. They're just doing it at different rates and starting at different times. But they all have started.
And obviously, it takes time to get there. But for a lot of them, there's a lot of learning in the process, versus the biggest of them, who has been doing it longer, seven years. Others are trying to catch up, and it takes time. I'm not saying it will take them seven years.
I think it will be accelerated. But it will still take some time, step by step, to get there. But those few hyperscalers, platform guys, will create their own if they haven't already done it, and start to train their large language models on them. And yes, you're right, they will all go in that direction, totally into ASICs or, as we call them, XPUs, custom silicon.
Meanwhile, there's still a market in enterprise for merchant silicon.