OpenAI deployment of MI450 '26. 3nm/432GB HBM4.
The agreement covers the deployment of hundreds of thousands of AMD's AI chips, or graphics processing units (GPUs), equivalent to six gigawatts of computing capacity, over several years beginning in the second half of 2026.
AMD said OpenAI would build a one-gigawatt facility based on its forthcoming MI450 series of chips beginning next year, and that AMD would begin to recognize revenue at that point.
As part of the arrangement, AMD issued a warrant that gives OpenAI the ability to buy up to 160 million AMD shares for 1 cent each over the course of the chip deal. The warrant vests in tranches based on milestones that the two companies have agreed on.
The first tranche will vest after the initial shipment of MI450 chips, set for the second half of 2026. The remaining milestones include specific AMD stock price targets, escalating to $600 a share before the final installment of stock unlocks.
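For a sense of scale, here is a minimal back-of-envelope sketch in Python. The AMD share count used below is an assumption (roughly 1.6 billion shares outstanding, not a figure from the article); it is only there to show how 160 million warrant shares relates to the roughly 10% stake in the headline.

```python
# Back-of-envelope on the warrant terms described above.
# ASSUMPTION: AMD has roughly 1.6 billion shares outstanding; that figure is not
# from the article and is only used to relate 160M warrant shares to a ~10% stake.

warrant_shares = 160_000_000        # shares OpenAI may buy under the warrant
exercise_price = 0.01               # dollars per share
assumed_shares_outstanding = 1_600_000_000
final_milestone_price = 600.0       # final vesting milestone, dollars per share

stake_of_current_shares = warrant_shares / assumed_shares_outstanding * 100
total_exercise_cost = warrant_shares * exercise_price
gross_value_at_final_target = warrant_shares * (final_milestone_price - exercise_price)

print(f"Stake vs. current shares: ~{stake_of_current_shares:.0f}%")           # ~10%
print(f"Cost to exercise in full: ${total_exercise_cost:,.0f}")               # $1,600,000
print(f"Gross value at the $600 target: ~${gross_value_at_final_target/1e9:.0f}B")  # ~$96B
```

Measured against the share count after the warrant is exercised, the stake works out closer to 9%; either way, the headline's roughly 10% figure follows directly from the 160 million share count.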
In September, Nvidia announced an investment of up to $100 billion in OpenAI that included a plan to supply at least 10 gigawatts' worth of Nvidia systems. The plan includes OpenAI deploying a gigawatt of Nvidia's next-generation Vera Rubin chips in late 2026.
AMD signs AI chip-supply deal with OpenAI, gives it option to take a 10% stake
Copilot: AMD’s MI450 series is expected to be built on TSMC’s 3nm-class node.
Here’s the breakdown:
- Process Node: The MI450X accelerators are reportedly manufactured on TSMC's 3nm-class node, likely N3P or N3X depending on binning and power targets. This aligns with AMD's push for higher density and efficiency in rack-scale AI deployments.
- Packaging: AMD is using TSMC's CoWoS-L (Chip-on-Wafer-on-Substrate with local silicon interconnect bridges) for advanced packaging, enabling high-bandwidth memory integration and dense interconnects.
- Memory: MI450X features HBM4, with up to 432 GB per GPU and ~19.6 TB/s of bandwidth, optimized for massive model training.
- Compute Density: Each MI450X GPU targets ~40–50 PFLOPS of FP4, with rack-scale configurations (IF128) aggregating 2.3 PB/s of inter-GPU bandwidth; a rough sketch of what these memory and rack-level figures imply follows this list.
- Launch Window: Deployment begins in H2 2026, with OpenAI's 1 GW facility as the flagship rollout.
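Taking the quoted figures at face value, here is a rough Python sketch of what they imply. The per-GPU FP4 midpoint and the IF128 GPU count come from the bullets above; everything derived is back-of-envelope arithmetic, not an AMD specification.

```python
# Back-of-envelope on the MI450X figures quoted above (reported, not confirmed specs).

hbm_capacity_gb = 432        # GB of HBM4 per GPU (quoted above)
hbm_bandwidth_tbs = 19.6     # TB/s per GPU (quoted above)
fp4_pflops = 45.0            # midpoint of the ~40-50 PFLOPS FP4 range above
rack_gpus = 128              # IF128 rack-scale configuration
rack_fabric_pbs = 2.3        # PB/s aggregate inter-GPU bandwidth (quoted above)

# Time to stream the full HBM contents once, a common roofline-style sanity check.
full_memory_sweep_ms = hbm_capacity_gb / (hbm_bandwidth_tbs * 1000) * 1000

# Arithmetic intensity needed to stay compute-bound at FP4 (FLOPs per byte moved).
flops_per_byte = (fp4_pflops * 1e15) / (hbm_bandwidth_tbs * 1e12)

# Rack-level aggregates for an IF128 configuration.
rack_fp4_eflops = fp4_pflops * rack_gpus / 1000
per_gpu_fabric_tbs = rack_fabric_pbs * 1000 / rack_gpus

print(f"Full HBM sweep: ~{full_memory_sweep_ms:.0f} ms")                    # ~22 ms
print(f"FP4 compute/bandwidth ratio: ~{flops_per_byte:.0f} FLOPs/byte")     # ~2,300
print(f"IF128 rack FP4 throughput: ~{rack_fp4_eflops:.1f} EFLOPS")          # ~5.8 EFLOPS
print(f"Implied per-GPU fabric bandwidth: ~{per_gpu_fabric_tbs:.0f} TB/s")  # ~18 TB/s
```

The ~22 ms full-memory sweep and ~2,300 FLOPs/byte ratio are just sanity checks on the quoted numbers, not performance claims.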
So while MI450 is cutting-edge, it's not on 2nm. That node is likely reserved for AMD's post-MI450 roadmap or future EPYC Venice refreshes. If you're mapping competitive positioning vs. NVIDIA's Rubin (also 3nm), or tracking TSMC's 2nm ramp for 2026–2027, I can help chart that next. Want to compare MI450 vs Rubin VR200 node and bandwidth strategy?