Q&A: Rambus chief speaks to future of Direct RDRAM market
By EBN Staff
Electronic Buyers' News (08/20/99, 03:46:26 PM EDT)

The introduction next month of Intel Corp.'s Camino core-logic chipset will signal the maiden flight of Direct Rambus DRAM as a viable, high-volume computer main-memory technology, and critics and supporters alike are lining the runway.
Whether it shakes the heavens with a sonic boom, or comes crashing back to earth, Direct RDRAM will make its impact felt in a market that's awaited the arrival of the memory interface with a mix of hope and dread.
Geoff Tate, president and chief executive of the Mountain View, Calif., company, met with EBN editors in New York this week to discuss Rambus Inc.'s future as the PC industry's force majeure.
Originally positioned as a technological savior for OEMs (and a potential cash cow for suppliers), Rambus memory has been painted with a more modest marketing message in recent months, primarily because of the delay of Intel's Camino chipset. Moreover, Intel's anticipated support of PC133 SDRAM threatens to steal some of the limelight from Rambus during the Intel Developer Forum (IDF) in Palm Springs, Calif., on Aug. 30.
As a public company, Rambus has had a brief but volatile history. From chat rooms to boardrooms, its technology has been praised as a high-bandwidth marvel that will narrow the widening gap between processor and main-memory speeds. But it's also been portrayed as a royalty-bearing interloper that is using Intel's market clout to bend the DRAM industry to its wishes.
Whatever the prevailing view, Rambus, both the company and the technology, has been one of the electronics industry's hottest topics since the day late in 1996 when Intel selected it to succeed SDRAM.

Tate discussed the remaining hurdles Direct RDRAM faces in the market, and the role he expects the interface will play.
Q: What are you going to be presenting at the Intel Developer Forum?
A: We'll be presenting with Intel on our new memory module for the mobile space, the SO-RIMM. It's really up to Intel to talk about things like their chipsets and their product roadmaps, but we're working directly with the DRAM companies on things like yield. That's separate from IDF, although it impacts Intel as well as other customers.
Right now, the issue of frequency yield has been an area we put a lot of effort into, and in the last six months we've made a lot of progress. Certainly, there is a lot of variation vendor to vendor; some are a lot further ahead than others. But overall, on an industry-weighted basis, we estimate about 70% of the parts are 700 or 800 MHz, and the reason I clump those together is that's what is most important for the PC. That goes with the higher-speed frontside bus.
Through continuing the yield improvement programs that we've got, and through the work our partners are doing, we expect that [by the] middle or second half of next year we should be able to get to 100% yield at 800 MHz.
Q: Can you say what that 70% represents? How many units, roughly?
A: Not at this time; that would end up disclosing how many Intel chipsets will be shipping, if [you factor in] the ratio of memories to chipsets. We're talking many, many millions of parts. Millions to tens of millions.
Q: You mention DRAM companies; there are more than a dozen licensees right now. How many of those seem to be committed to Rambus?
A: Right now we've got 14 licensed DRAM companies. There are some that are really in wait-and-see mode; most of them are working on engineering, but there is quite a range in between. There are the folks that are already loading and shipping millions of units, and there are the folks that are just putting their first mask sets into fab for prototype, and everything in between.
We've got four companies that are going to be producing the vast majority of parts over the next couple of months. There are another four companies that have parts working in systems, so they're only a few months away from mass production, and then the other six companies are still in the very early prototype stage or still in fab. So that's the rough distribution.
Q: The companies are Samsung, Toshiba, [Hyundai and NEC]?
A: [Right, the] ... two Koreans and two Japanese. Micron had its first samples 45 days ago. So they're moving fast, they're closing the gap, but they're kind of in the second clump at this point. But not far behind. I would expect that by the end of this year or start of next year, we should have at least six and probably more like seven or eight companies in high-volume mass production and shipping.
In terms of commitment, when it comes right down to it [the question] is do they have customers that want to buy parts. Obviously, six months ago there was a lot of uncertainty about that with the delay in the program. At this point, hard orders have been placed, there is backlog, some of our partners have sold out their capacity. Definitely we are at a point today where demand exceeds supply. Which is a good problem.
Q: Without getting into specific units, do you still maintain an estimate as to what percentage of the DRAM market Rambus will account for this year and in 2000?
A: When we make a forward-looking estimate, it's based on our best guess and what we think we can do and what we know, but there's certainly room for error either way. It could be higher or it could be lower. The biggest factor for market penetration in 2000 is going to be price, price relative to synchronous DRAM. We believe that at IDF Intel is going to show some benchmarks that are pretty attractive for the 820 [Camino] chipset using Rambus.
I think there was some concern a few months ago: was there any performance advantage? Intel will address that at IDF. If all you do in life is ASCII email and use inkjet or dot-matrix printers or something, you might not notice it, but if you're doing voice recognition and Photoshop and all the kind of typical high-end, performance-oriented programs, you'll see a big difference. Today, no programs are using the streaming SIMD extensions that are in the Pentium III, and as programs start to come out...late this year or early next year, you should be seeing even bigger differences in performance.
We think the performance advantage is there, and that in the desktop segment of the market where Rambus will be introduced, people do care about performance. Price is also important, and our sort of simplistic view is that Rambus comes in at the performance portion of the desktop market. How quickly we move down in that space will be a function of two things, [and] one is applications driving the need for bandwidth.
It is possible, using streaming SIMD instructions, AGP 4X, and so forth, to generate Gbytes per second of memory bandwidth requirement in 2000. A PC100 system at 500 Mbytes per second would be swamped in terms of its ability to supply that performance. So if killer apps come out in market segments, especially big market segments, that need more performance than SDRAM can provide, that will certainly accelerate the transition to Rambus.
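(Roughly sketched, and assuming peak figures for a 64-bit PC100 bus and a 16-bit Direct Rambus channel at an 800-MHz effective data rate, the arithmetic behind that comparison works out as follows; sustained throughput, such as the 500-Mbyte/s PC100 figure Tate cites, runs lower than peak:

PC100 SDRAM: 8 bytes x 100 MHz = about 800 Mbytes/s peak per channel.
Direct RDRAM: 2 bytes x 800 MHz = about 1.6 Gbytes/s peak per channel.)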
Q: Are you doing anything to excite that market?
A: Like working with software developers? No, we're just 150 people and we have our hands full working with something like 50 different systems companies in all different parts of the world, much less trying to evangelize the software developers.
Intel has, I believe, something like hundreds of people in some evangelism group, and they're off promoting the streaming SIMD instruction set. They're actually investing money in a lot of start-ups and companies to accelerate that trend, so we have a shared interest there, and it's their job to evangelize the software companies.
Q: You mention price. You're close now so you must have a feel for what the price premium will be [relative to SDRAM].
A: We don't sell the parts and we don't buy the parts. We do have a feel, but it's all second hand. Depending on who we're talking to, the data can be sort of skewed one way or another. If you're doing a price premium comparison and you want to make it sound good, you compare it to some high-density, more expensive flavor of SDRAM at a contract price. If you want to make it sound bad, you compare it to the lowest spot price you've ever heard, and you can come up with pretty big differences.
But having said that, it is certainly the case that price premiums initially are going to be quite high. Very significant, well over 50% versus SDRAM, on any comparison. There are two reasons for that. One is demand versus supply. You can go back to synchronous DRAM: back when it came out, there was a short period of time, three to six months, when demand was substantially in excess of supply. It takes a while for the industry to gear up when you're switching something as big as a segment of the PC market over to a new memory type.
The other issue is that SDRAM itself has gotten to be such a low-priced part. The DRAM companies are very reluctant to invest money in a new memory type like Rambus, which they know delivers significant performance advantages, and sell it at a loss. So probably the worst thing for us in the last six months has been how rapidly SDRAM prices went down. I think most of the DRAM guys have been viewing it not so much as a percentage... If they're losing money selling SDRAM, they're not going to set the Rambus price to be some fixed percentage increase.
The reason they feel they need to make money is that you do have to invest to build and sell Rambus DRAMs. With SDRAM, they already built fabs, they already bought the testers, they already have the assembly lines. With Rambus, they can use the same wafer fab, which is the big investment, but they do need to buy new testers and they do need to buy [equipment] for chip-scale assembly. In that case, they can use subcontractors as well, but at a minimum they have to buy testers.
There are kind of two mentalities right now among the DRAM companies. There are some that just go, hey, we don't want to stick our necks out. Maybe Intel will delay, maybe there won't be any performance advantage, maybe this market won't take off. We don't want to invest $10 million and find out that it sits there unused. That's the wait-and-see crowd.
Then there are the other folks that analyze it and are pretty sure it's going to happen, and they see that if they stick their necks out, they have a chance for a reward, that there's going to be a price premium. But beyond that, the people that are going to transition first to Rambus are going to be the most attractive customers, people like the Dells and the Compaqs, the leaders.
Q: Do you foresee these companies who are getting in early really distancing themselves from the second tier, and pretty much absorbing all the early price premium, creating a larger rift between first tier and second tier DRAM companies?
A: What's your definition of first tier and second tier?
Q: Pretty much those supporting Rambus today and those who aren't. With a couple of exceptions.
A: It has something to do with customer base. The second-tier DRAM guys generally aren't the strategic suppliers to the lead PC guys. So it's not so much that the second-tier guys aren't as capable; maybe their customers are more wait-and-see too.
We do see a growing rift between the big DRAM companies and the small ones in a lot of ways; the smaller Japanese companies especially, as we all know, have been very hard-pressed to keep up. The scale of the business is growing, the amount of money you need to compete is growing, and the difference in size between the big and the small seems to be growing.
Back 10 years ago, there were something like 20 DRAM companies, and none of them were very big, with the exception of Samsung, which was kind of the one big guy. Now we've got three giant companies, and even NEC is quite a ways back in size from any of the three large companies. So I think that's going to put a lot of pressure on the lower-tier companies.
To succeed, they're going to have to find some ways to overcome their disadvantage of scale, like NEC and Hitachi are talking about cooperating. I believe that's proven by now. Or they're going to have to have a lower cost basis. Maybe the folks in Taiwan can still compete successfully just because they [might] have lower costs of doing business in Taiwan than in, say, the U.S.
Q: Are you fearful that [Direct RDRAM] will enter a niche market at its introduction, and how do you break out of that niche?
A: The real things that will determine the rate of penetration, in the short term and the long term, will be whether people need the performance and what the relative price difference is. If there were little or no price difference, people would switch to Rambus anyway, just because, hey, why not, maybe I'll need the performance.
In the performance desktop market segment, that is the segment where people are willing to pay for performance and where they do use their systems for things like PhotoShop and so forth. So price is always an issue, but it's certainly less of an issue in the market segment we're starting in. The PC OEMs we're talking with are definitely seeing the price situation right now as something they've been through before with SDRAM, and something that is a transitional thing.
Our target is to get within the range of a 10% price premium by the second half of 2000, a year or so from now. We do that a few ways. We work on costs with our partners, and certainly volume helps cost, and the third thing is just increasing the number of suppliers and having supply exceed demand, like it does for SDRAM, so that price and cost correlate.
Q: [In the past, you've] mentioned that the way to look at Direct RDRAM's price/performance improvement is really system to system, as opposed to chip to chip. How do you carry that market message out? Do you need to involve the PC OEMs and in turn have them identify with that message?
A: Performance benefits? Maybe I'll use the Sony PlayStation as an example just because it's a little simpler.
Sony basically decided they were going to build a game system, they set performance targets, they architected the system, they looked at the choices, they looked at the numbers, and without having to worry about politics and mystery dynamics, they could sort of just figure out well, here's the best technical solution and not have to try to guess about what other companies are going to do.
So the Sony PlayStation II, which they've already disclosed, will use two Rambus DRAMs. They'll be connected in parallel, so there will be 3 Gbytes per second of bandwidth going into the microprocessor. And sure, they'll pay more per chip for the Rambus DRAM, but if they tried to get 3 Gbytes per second out of synchronous DRAM, they'd have to use something like 16 or more [chips], so they would have paid more for all those extra DRAMs, not to mention the PC boards and the extra pins on the controller.
So Rambus sounds expensive from the chip level, but for the Sony PlayStation II, it's dirt cheap compared to synchronous DRAM, and they get more performance.
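(Using the same peak-figure assumptions as above, the chip-count math sketches out like this: two 16-bit Rambus channels give 2 x 1.6 Gbytes/s = 3.2 Gbytes/s. Matching that with PC100 SDRAM at 0.8 Gbytes/s per 64-bit channel takes roughly four channels, a 256-bit-wide array, which with 16-bit-wide SDRAM chips is on the order of 16 devices.)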
In the PC space, things get a lot more clouded. Probably the biggest thing that clouds the issue in the PC space is the issue of today's software versus the software that's coming. In the game space, when Nintendo announced its Nintendo 64, they shipped it with a killer app right off the bat, Super Mario. Now it may not have used all the performance, but it used a heck of a lot more than the previous Nintendo required.
In the PC space, things work exactly the opposite way. You bring out this roaring hardware, and it just runs yesterday's software. So people do benchmarks with yesterday's software and they go, 'well, it doesn't run any faster.' The reason is that that software didn't need more hardware. But what happens is the software guys, year by year, target their software to the platforms that are out there, and they take advantage of all these new features.
If you buy a PC that's good enough this year, you'll find long before three years you'll have thrown it away, because it won't be good enough for what's coming down the road. So in the PC space you've got to buy more than what you need now and have the headroom to run the software that's coming in the next two or three years.
Q: So what I heard you say was performance matters, but people will not pay that much more for it.
A: That's right. Unless there's a killer app. If somebody comes out with a killer app tomorrow that everybody's got to have, then they'll be willing to pay more.
Q: Would you be designing anything special for graphics or any other specialized applications?
A: Well, at least one thing that will likely happen is that the parts that run at 800 MHz in the PC will probably run at 900 MHz or [1 GHz], in the exact same silicon, when they're used chip to chip in graphics applications, if you want to go as fast as possible.
So at some point over the next 12 months, I'm sure some of our partners will start looking at that at the request of some of these companies and just recharacterize the silicon, not redesign it, at a higher frequency.
They'd have to re-screen the core. It's possible the interface could run faster but the core can't keep up... It's not a new part; it's just taking advantage of the fact that a chip-to-chip graphics connection is a simpler electrical situation than a main-memory channel.
Q: On the back end, we've heard some concern that the testing infrastructure is still not in place to people's satisfaction. A, because of the expense, but B, because the likes of HP, Teradyne, and Advantest haven't come out with the ability to do single-pass testing... Can you address that issue?
A: At this point, all of the tester companies (HP, Schlumberger, Advantest, and Teradyne) have announced Rambus systems and have shipped mass-production systems. All four of those companies have testers that are completely capable of single-pass test right now, but nobody in their right mind would use them for single-pass test right now, for economic reasons.
The DRAM industry is very constrained on capital, so you want to make your capital work as hard as possible, and when you think about testing a Rambus DRAM, there are two things you've got to do. You've got to make sure the core is working okay, and you've got to test the high-speed interface. A long time ago, we designed a mode into the Rambus DRAM where you can test the core using low-frequency testers. They have a lot of low-frequency testers sitting around that they're not using, so why not use those on the core, and only use these new, expensive testers for testing the interface.
So it's a two-pass test. Arguably, if you were buying testers from scratch you'd buy the new testers exclusively, because they can test the core even faster, and the cost to test will end up, in the long term, being the same as or lower than SDRAM's.
If you're tight on capital and you have to lay out new money, you'll want to get the most out of it, so this two-pass test is what most of our partners are going to do initially.