Technology Stocks : The end of Moore's law - Poet Technologies


To: toccodolce who wrote (1522) | 11/12/2025 12:22:06 PM
From: toccodolce | 1 Recommendation

Recommended By
longz

Latest from Agoracom: poetmmc

Re: Luxshare 800G silicon photonics (SiPh) modules in mass production
In response to "Re: Luxshare 800G silicon photonics (SiPh) modules in mass production" by ohiotom123

I believe this is VERY good news for POET investors.

Consider this:

- It was 7 months ago at OFC that Luxshare was demonstrating 800G with POET's Optical Interposer (OI).

- At OFC, a Luxshare representative involved with the POET relationship told me that Luxshare "already has a large hyperscaler customer for this 800G product." I don't think this would have been stated unless it was a qualified opportunity with an existing customer who had already been introduced to the 800G solution. It takes time for any end user to validate the modules within its own infrastructure, and at least 7 months have now passed. That is a reasonable timeframe for the customer (hyperscaler) to commit to an order, which would lead to "mass production."

- Luxshare is a supplier of optical modules to META.

- If (when) it is announced that Luxshare/POET has received an order from META for modules, the market will push our share price up. Like, WAY up.



To: toccodolce who wrote (1522) | 11/12/2025 12:27:16 PM
From: toccodolce
 
Latest from Agoracom: fairchijisback

The AI bottleneck is power: a rapidly emerging driver for POET
Posted On: Nov 12, 2025 07:43AM

Here is the latest critical insight that appears to be most relevant to your question (from a POET friend).

AI market investors appear to have woken up to the fact that newly purchased GPUs will sit idle and depreciate almost completely within 18 months unless their owners have the power to run the new data centers that house them.

AI is now doubling its utility and lowering its cost in just a few months. At the same time, data centers won't be able to access new sources of power for years, especially when competing for grid allocations that are needed for residential use. There is not enough power to support the construction of AI centers, as the US power infrastructure is already failing and will take years to repair.

China has all the power it needs to continue building out AI, but the US does not.

This AI thread and the whole YouTube conversation with Satya Nadella, CEO of MSFT, are worth your time.

https://x.com/aakashg0/status/1985176339712970900

Satya just told you the entire AI trade thesis is wrong and nobody is repricing anything.

Microsoft has racks of H100s collecting dust because they literally cannot plug them in. Not "won't," cannot. The power infrastructure does not exist. Which means every analyst model that's been pricing these companies on chip purchases and GPU count is fundamentally broken. You're valuing the wrong constraint. The bottleneck already moved and the market is still trading like it's 2023.

This rewrites the entire capex equation. When $MSFT buys $50B of Nvidia GPUs, the Street celebrates it as "AI investment" and bids up both stocks. But if half those chips sit unpowered for 18 months, the ROI timeline collapses. Every quarter a GPU sits in a dark rack is a quarter it's not generating revenue while simultaneously depreciating in performance relative to whatever Nvidia ships next. You're paying data center construction costs and chip depreciation with zero offset.
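The arithmetic behind that ROI collapse can be sketched in a few lines. The dollar figures and the straight-line depreciation assumption below are purely illustrative, not actual Microsoft or Nvidia numbers: the point is only that every idle month removes revenue while the depreciation clock keeps running.

```python
# Hypothetical sketch of the idle-GPU economics described above.
# Assumptions (all mine, for illustration): straight-line depreciation
# to zero over an 18-month competitive lifetime, and revenue that only
# accrues once the rack is actually powered.

def idle_gpu_economics(capex, lifetime_months=18,
                       idle_months=0, monthly_revenue=0.0):
    """Return total revenue earned over the asset's competitive lifetime."""
    # Months the GPUs actually generate revenue before they are outdated.
    productive_months = max(lifetime_months - idle_months, 0)
    return productive_months * monthly_revenue

# Illustrative case: a fleet earning $4B/month of inference revenue once live.
rev_immediate = idle_gpu_economics(50e9, idle_months=0, monthly_revenue=4e9)
rev_delayed = idle_gpu_economics(50e9, idle_months=9, monthly_revenue=4e9)
print(rev_immediate, rev_delayed)  # 9 dark months halve lifetime revenue
```

Under these toy numbers, nine unpowered months cut lifetime revenue from $72B to $36B while the capex and depreciation are unchanged, which is the "zero offset" the post describes.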

The players who actually win this are whoever locked in power purchase agreements 3-4 years ago when nobody was thinking about hundreds of megawatts for inference clusters. The hyperscalers who moved early on utility partnerships or built their own generation capacity have structural leverage that cannot be replicated on any reasonable timeframe. You can order 100,000 GPUs and get delivery in 6 months. You cannot order 500 megawatts and get it online in 6 months. That takes years of permitting, construction, grid connection, and regulatory approval.

Satya's point about not wanting to overbuy one GPU generation is the second critical insight everyone is missing. Nvidia's release cycle compressed from 2+ years to basically annual. Which means a GPU purchased today has maybe 12-18 months of performance leadership before it's outdated.

If you can't deploy it immediately, you're buying an asset that's already depreciating against future products before it earns anything. The gap between purchase and deployment is now expensive in a way it wasn't when release cycles were longer.

The refresh cycle compression also means whoever can deploy fastest captures disproportionate value. If you can energize new capacity in 6 months vs 24 months, you get 18 extra months of premium inference pricing before competitors catch up. Speed to deployment is now a direct multiplier on chip purchase ROI, which means the vertically integrated players with their own power and real estate can move faster than anyone relying on third party data centers or utility hookups.
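The deployment-speed multiplier works the same way. A minimal sketch, with hypothetical deployment times and a made-up asset lifetime, of how the head start translates into months of premium pricing:

```python
# Illustrative sketch of the speed-to-deployment advantage described
# above. All figures are hypothetical assumptions, not sourced data.

def premium_months(my_deploy_months, rival_deploy_months,
                   lifetime_months=36):
    """Months of premium pricing captured before rivals come online,
    capped by the asset's remaining useful lifetime after deployment."""
    head_start = max(rival_deploy_months - my_deploy_months, 0)
    return min(head_start, lifetime_months - my_deploy_months)

# Energize in 6 months while competitors need 24:
print(premium_months(6, 24))  # 18 months of premium inference pricing
```

With a 6-month deployment against a 24-month one, the faster player captures the 18 premium months the post cites; if both deploy at the same pace, the premium window is zero.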

What makes this really interesting is it changes the competitive moat structure completely. The old moat was model quality and algorithm improvements. The new moat is physical infrastructure and energy access. You can train a better model in 6 months. You cannot build a powered data center in 6 months. This is the kind of constraint that persists for years and creates durable separation between winners and losers.