To all the NVDA derps: NVDA does not make transposers, they make transceivers.
This post is from the POET Technologies Reddit forum.
Good Information on Differentiating Between NVDA and POET
I keep seeing some commenters stating that NVDA already does what POET does, so I did a deep dive:
NVDA uses transceivers, and they are baked into its equipment. They only transmit and receive an unmodified signal from glass to glass; the use of silicon transceivers is decades old at this point. Is 1.6 Tb/s fast transmission? You bet your sweet bippy it is. But it's only going from glass to glass, which is (yawn) nothing new.
POET sells transposers, which carry compute from metal to glass. They convert or shift the signal from one state to the other (metal to glass, or vice versa). The tech POET uses is new, and it is doing 1.6 Tb/s METAL TO GLASS (again, the vice versa thing), which is new. This allows older/cheaper metal to talk with glass/silicon/photonics, which is a game changer.
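A quick unit check, since these rates are easy to misread: vendor specs quote these links in terabits per second (Tb/s), not terabytes (TB). A minimal sketch of the conversion (the 1.6T figure is from the specs quoted in this post; the helper function name is my own):

```python
# Convert an optical link's line rate from terabits per second
# to gigabytes per second. 1 terabit = 1000 gigabits; 8 bits = 1 byte.
# This is the raw line rate, ignoring encoding/protocol overhead.
def tbps_to_gbytes_per_s(tbps: float) -> float:
    return tbps * 1000 / 8

# A "1.6T" optical engine moves 1.6 terabits per second:
print(tbps_to_gbytes_per_s(1.6))  # 200.0 GB/s raw
```

So "1.6T" is roughly 200 GB of raw data per second per link, not 1.6 terabytes.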
NVIDIA makes GPUs, AI accelerators, and computing systems. It uses transceivers for high-speed interconnects between chips, but "transposer" is not a term used for its hardware products.
CO-PILOT SESSION INCOMING:
Copilot said: Great question, Richard. Here's a breakdown of how POET Technologies and NVIDIA differ — and overlap — in their approaches to high-throughput photonic interconnects:
Core Technology Focus

| Feature | POET Technologies | NVIDIA |
| --- | --- | --- |
| Platform | Optical Interposer™ | NVLink, InfiniBand, Spectrum-X, Quantum-X |
| Integration | Combines photonic and electronic components on a single chip using wafer-scale passive assembly | Uses silicon photonics and co-packaged optics for high-speed GPU and switch interconnects |
| Primary Use Case | Optical engines and transceivers for AI, telecom, and data centers | AI supercomputing, GPU-to-GPU and rack-to-rack communication |
| Data Rates | 800G, 1.6T, and a roadmap to 3.2T optical engines | 800G–1.6T per port, 400 Tbps aggregate in switches |

Design Philosophy
- POET is focused on semiconductorizing photonics: making photonic integration as scalable and cost-effective as traditional ICs. Their Optical Interposer platform:
- Uses wafer-scale passive assembly
- Eliminates wire bonds and active alignments
- Integrates lasers, modulators, and detectors on a single chip
- Enables low-cost, high-volume production
- NVIDIA, on the other hand, is building AI factories with:
- Co-packaged optics for GPUs and switches
- Silicon photonics transceivers for 1.6 Tbps per port
- NVLink and InfiniBand for ultra-low latency interconnects
- Focus on end-to-end AI infrastructure
How They Compare in Practice

| Aspect | POET | NVIDIA |
| --- | --- | --- |
| Manufacturing | Wafer-level, passive alignment | Advanced packaging with active cooling and optics |
| Target Customers | Optical module makers, hyperscalers, telecom | AI developers, hyperscalers, HPC centers |
| Scalability | High, due to low-cost integration | High, but more vertically integrated and capital-intensive |
| Latency & Power | Optimized for low power and minimal latency | Optimized for performance and scale, with higher power budgets |

Complementary or Competitive?
They are complementary in many ways:
- POET could supply optical engines or interposer-based modules to companies building AI infrastructure — including those using NVIDIA GPUs.
- NVIDIA is more vertically integrated, but may still rely on external photonics innovations for future scalability.
Would you like a visual comparison chart or a deeper dive into how POET’s interposer architecture could be integrated into NVIDIA-like systems?
You said:
Copilot said: NVIDIA did not invent silicon transposers; in fact, the term "transposer" is not commonly used in the context of silicon photonics. However, NVIDIA has made significant innovations in silicon photonics, particularly in the area of Co-Packaged Optics (CPO) and silicon photonics engines for high-throughput AI networking.
What NVIDIA Did Innovate
At GTC 2025, NVIDIA unveiled its Silicon Photonics CPO (Co-Packaged Optics) platform, which integrates:
- Silicon photonics engines directly with switch ASICs
- Microring modulators for ultra-efficient optical signaling
- 1.6 Tbps per port and up to 115 Tbps total throughput in their Quantum-X switches
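The per-port and aggregate figures above imply a port count. A rough sanity check, assuming the 115 Tbps aggregate and 1.6 Tbps/port numbers quoted above (variable names are my own):

```python
# Back-of-envelope: how many 1.6 Tbps ports make up a 115 Tbps switch?
aggregate_tbps = 115   # total switch throughput quoted above
per_port_tbps = 1.6    # per-port rate quoted above

ports = aggregate_tbps / per_port_tbps
print(round(ports))  # ~72 ports
```

The two numbers are consistent with a switch on the order of 72 optical ports, which is a plausible radix for a data-center switch.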
These innovations are part of a broader shift toward optical interconnects in AI data centers, replacing traditional pluggable modules with integrated photonic-electronic systems.