
From: BeenRetired, 10/27/2025 2:03:56 PM
 
Synopsys teases 'silicon bring-up' of next-gen LPDDR6 IP fabbed on TSMC's new N2P process node
Synopsys unveils huge development in mobile memory, teases silicon bring-up of its LPDDR6 IP on TSMC's bleeding-edge N2P process node.

Anthony Garreffa

Gaming Editor
Adelaide, Australia

Published Oct 17, 2025 10:20 PM CDT

TL;DR: Synopsys has successfully completed silicon bring-up of its LPDDR6 IP on TSMC's advanced N2P process, delivering up to 86 GB/sec of bandwidth and significant generational upgrades over LPDDR5. This licensable IP targets high-performance mobile, AI, and HPC applications, enabling early market adoption of next-gen memory by 2026.
Synopsys has just announced a big milestone in its development of next-gen mobile memory, with the company unveiling the silicon bring-up of its LPDDR6 IP based on TSMC's bleeding-edge N2P process node.

Silicon bring-up is the stage of development when a new chip, or in this case an IP block on a test chip, is powered on for the first time. It involves validating the silicon in multiple stages, including hardware checks, power sequencing, and more (a rough outline of that kind of staged flow is sketched below). With this milestone, Synopsys has a working, licensable building block of next-gen LPDDR6 memory technology, with bandwidth of up to 86 GB/sec, in line with the JEDEC standard.
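For readers unfamiliar with the term, here is a minimal Python sketch of the kind of staged, gated checklist a bring-up campaign walks through. It is purely illustrative: the stage names are my assumptions, not Synopsys's published lab flow.

# Illustrative only: stage names are assumptions, not Synopsys's actual flow.
BRING_UP_STAGES = [
    "board continuity and short checks",
    "power sequencing and rail-voltage verification",
    "clock and reset bring-up",
    "register access over the debug interface",
    "PHY training and link initialization",
    "functional traffic at target data rates",
]

def run_bring_up(stages=BRING_UP_STAGES):
    for number, stage in enumerate(stages, start=1):
        # In a real lab flow, a failure at any stage stops the campaign here.
        print(f"Stage {number}: {stage} ... PASS (placeholder)")

if __name__ == "__main__":
    run_bring_up()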

This is also one of the first LPDDR6 IP blocks implemented on TSMC's new N2P process node. The IP features two main components: the controller and the PHY interface. The controller implements the JEDEC protocol engine, along with timing control and low-power states, while the PHY's analog and I/O circuits are built on N2P, using N2P metal stacks and N2P I/O libraries (see the sketch of that split below).
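As a way of visualising that split, here is a short Python sketch of the partitioning the article describes. The class and field names are illustrative assumptions, not Synopsys's actual deliverable structure.

from dataclasses import dataclass, field

@dataclass
class Lpddr6Controller:
    # Digital side: JEDEC LPDDR6 protocol engine plus timing and power logic.
    protocol_engine: str = "JEDEC LPDDR6"
    features: tuple = ("timing control", "low-power states")

@dataclass
class Lpddr6Phy:
    # Physical side: analog and I/O circuits implemented on TSMC's N2P node.
    process_node: str = "TSMC N2P"
    built_with: tuple = ("N2P metal stacks", "N2P I/O libraries")

@dataclass
class Lpddr6Ip:
    # The licensable IP block as described: controller plus PHY interface.
    controller: Lpddr6Controller = field(default_factory=Lpddr6Controller)
    phy: Lpddr6Phy = field(default_factory=Lpddr6Phy)

print(Lpddr6Ip())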


Synopsys says the new stack pushes up to 86 GB/sec of bandwidth, which works out to around 10.667 Gb/s per pin, with pin speeds having a theoretical top of 14.4 Gb/s, which works out to around 115 GB/sec. That makes LPDDR6 a massive generational upgrade over LPDDR5, helped along by TSMC's new N2P process tech (the quick arithmetic below shows how those figures relate).
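For anyone who wants to check how those figures relate, here is a quick back-of-the-envelope calculation in Python. The 64-bit total interface width is an assumption on my part (it is what makes the quoted numbers line up), not something stated in the article.

def peak_bandwidth_gb_s(pin_rate_gbps: float, bus_width_bits: int) -> float:
    # Peak bandwidth (GB/s) = per-pin data rate (Gb/s) * bus width (bits) / 8.
    return pin_rate_gbps * bus_width_bits / 8

print(peak_bandwidth_gb_s(10.667, 64))  # ~85.3 GB/s, i.e. the ~86 GB/sec figure
print(peak_bandwidth_gb_s(14.4, 64))    # ~115.2 GB/s, i.e. the ~115 GB/sec figure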

We should expect to see LPDDR6 memory go mainstream in 2026 and beyond.

Dino Toffolon, Senior Vice President of Engineering-Interface IP at Synopsys, wrote: "We're pleased to share our recent silicon bring up on TSMC's N2P process with our LPDDR6 IP. This achievement not only reinforces our leading position in advanced process node IP, but also provides access for our customers to trusted IP for the most demanding mobile, edge AI, and HPC computing applications".

He continued: "Customers can benefit from the recently released LPDDR6 standard, while reducing risk, enabling high-performance, and achieving an earlier time-to-market. Our proven IP portfolio for N2/N2P now also includes LPDDR6 along with USB and MIPI. This provides design teams confidence as they pursue aggressive power, performance, and area (PPA) targets".

"By extending our leadership in high-speed interfaces, we enable silicon designs that are both bandwidth-rich and power-efficient. Our close collaboration with TSMC offers customers a smooth transition to angstrom scale process nodes. With LPDDR6 available on N2P and N3P, device makers in mobile, AI inference, edge compute and HPC domains will have a low-risk path to harness the next generation of memory performance".

NEWS SOURCES: wccftech.com and linkedin.com

Read more: tweaktown.com